* KVM currently invalidates the entirety of the page tables, not just
   those for the memslot being touched, when a memslot is moved or deleted.
   This does not usually have particularly noticeable overhead, but Intel's
   TDX will require the guest to re-accept private pages if they are
   dropped from the secure EPT, which is a non-starter.  Actually,
   the only reason why this is not already being done is a bug which
   was never fully investigated and caused VM instability with assigned
   GeForce GPUs, so allow userspace to opt into the new behavior.
 
 * Advertise AVX10.1 to userspace (effectively prep work for the "real" AVX10
   functionality that is on the horizon).
 
 * Rework common MSR handling code to suppress errors on userspace accesses to
   unsupported-but-advertised MSRs.  This will allow removing (almost?) all of
   KVM's exemptions for userspace access to MSRs that shouldn't exist based on
   the vCPU model (the actual cleanup is non-trivial future work).
 
 * Rework KVM's handling of x2APIC ICR, again, because AMD (x2AVIC) splits the
   64-bit value into the legacy ICR and ICR2 storage, whereas Intel (APICv)
   stores the entire 64-bit value at the ICR offset.
 
 * Fix a bug where KVM would fail to exit to userspace if one was triggered by
   a fastpath exit handler.
 
 * Add fastpath handling of HLT VM-Exit to expedite re-entering the guest when
   there's already a pending wake event at the time of the exit.
 
 * Fix a WARN caused by RSM entering a nested guest from SMM with invalid guest
   state, by forcing the vCPU out of guest mode prior to signalling SHUTDOWN
   (the SHUTDOWN hits the VM altogether, not the nested guest)
 
 * Overhaul the "unprotect and retry" logic to more precisely identify cases
   where retrying is actually helpful, and to harden all retry paths against
   putting the guest into an infinite retry loop.
 
 * Add support for yielding, e.g. to honor NEED_RESCHED, when zapping rmaps in
   the shadow MMU.
 
* Refactor pieces of the shadow MMU related to aging SPTEs in preparation for
   adding multi-generational LRU support in KVM.
 
 * Don't stuff the RSB after VM-Exit when RETPOLINE=y and AutoIBRS is enabled,
   i.e. when the CPU has already flushed the RSB.
 
 * Trace the per-CPU host save area as a VMCB pointer to improve readability
   and clean up the retrieval of the SEV-ES host save area.
 
 * Remove unnecessary accounting of temporary nested VMCB related allocations.
 
 * Set FINAL/PAGE in the page fault error code for EPT violations if and only
   if the GVA is valid.  If the GVA is NOT valid, there is no guest-side page
   table walk and so stuffing paging related metadata is nonsensical.
 
 * Fix a bug where KVM would incorrectly synthesize a nested VM-Exit instead of
   emulating posted interrupt delivery to L2.
 
 * Add a lockdep assertion to detect unsafe accesses of vmcs12 structures.
 
 * Harden eVMCS loading against an impossible NULL pointer deref (really truly
   should be impossible).
 
 * Minor SGX fix and a cleanup.
 
 * Misc cleanups
 
 Generic:
 
 * Register KVM's cpuhp and syscore callbacks when enabling virtualization in
   hardware, as the sole purpose of said callbacks is to disable and re-enable
   virtualization as needed.
 
 * Enable virtualization when KVM is loaded, not right before the first VM
   is created.  Together with the previous change, this greatly simplifies
   the logic of the callbacks, because their very existence implies that
   virtualization is enabled.
 
 * Fix a bug that results in KVM prematurely exiting to userspace for coalesced
   MMIO/PIO in many cases, clean up the related code, and add a testcase.
 
 * Fix a bug in kvm_clear_guest() where it would trigger a buffer overflow _if_
   the gpa+len crosses a page boundary, which thankfully is guaranteed to not
   happen in the current code base.  Add WARNs in more helpers that read/write
   guest memory to detect similar bugs.
 
 Selftests:
 
 * Fix a goof that caused some Hyper-V tests to be skipped when run on bare
   metal, i.e. NOT in a VM.
 
 * Add a regression test for KVM's handling of SHUTDOWN for an SEV-ES guest.
 
 * Explicitly include one-off assets in .gitignore.  Past Sean was completely
   wrong about not being able to detect missing .gitignore entries.
 
 * Verify userspace single-stepping works when KVM happens to handle a VM-Exit
   in its fastpath.
 
 * Misc cleanups
 -----BEGIN PGP SIGNATURE-----
 
 iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmb201AUHHBib256aW5p
 QHJlZGhhdC5jb20ACgkQv/vSX3jHroOM1gf+Ij7dpCh0KwoNYlHfW2aCHAv3PqQd
 cKMDSGxoCernbJEyPO/3qXNUK+p4zKedk3d92snW3mKa+cwxMdfthJ3i9d7uoNiw
 7hAgcfKNHDZGqAQXhx8QcVF3wgp+diXSyirR+h1IKrGtCCmjMdNC8ftSYe6voEkw
 VTVbLL+tER5H0Xo5UKaXbnXKDbQvWLXkdIqM8dtLGFGLQ2PnF/DdMP0p6HYrKf1w
 B7LBu0rvqYDL8/pS82mtR3brHJXxAr9m72fOezRLEUbfUdzkTUi/b1vEe6nDCl0Q
 i/PuFlARDLWuetlR0VVWKNbop/C/l4EmwCcKzFHa+gfNH3L9361Oz+NzBw==
 =Q7kz
 -----END PGP SIGNATURE-----

Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm

Pull x86 kvm updates from Paolo Bonzini:
 "x86:

   - KVM currently invalidates the entirety of the page tables, not just
     those for the memslot being touched, when a memslot is moved or
     deleted.

     This does not usually have particularly noticeable overhead, but
     Intel's TDX will require the guest to re-accept private pages if
     they are dropped from the secure EPT, which is a non-starter.

     Actually, the only reason why this is not already being done is a
     bug which was never fully investigated and caused VM instability
     with assigned GeForce GPUs, so allow userspace to opt into the new
     behavior.

   - Advertise AVX10.1 to userspace (effectively prep work for the
     "real" AVX10 functionality that is on the horizon)

   - Rework common MSR handling code to suppress errors on userspace
     accesses to unsupported-but-advertised MSRs

     This will allow removing (almost?) all of KVM's exemptions for
     userspace access to MSRs that shouldn't exist based on the vCPU
     model (the actual cleanup is non-trivial future work)

   - Rework KVM's handling of x2APIC ICR, again, because AMD (x2AVIC)
     splits the 64-bit value into the legacy ICR and ICR2 storage,
     whereas Intel (APICv) stores the entire 64-bit value at the ICR
     offset

   - Fix a bug where KVM would fail to exit to userspace if one was
     triggered by a fastpath exit handler

   - Add fastpath handling of HLT VM-Exit to expedite re-entering the
     guest when there's already a pending wake event at the time of the
     exit

   - Fix a WARN caused by RSM entering a nested guest from SMM with
     invalid guest state, by forcing the vCPU out of guest mode prior to
     signalling SHUTDOWN (the SHUTDOWN hits the VM altogether, not the
     nested guest)

   - Overhaul the "unprotect and retry" logic to more precisely identify
     cases where retrying is actually helpful, and to harden all retry
     paths against putting the guest into an infinite retry loop

   - Add support for yielding, e.g. to honor NEED_RESCHED, when zapping
     rmaps in the shadow MMU

   - Refactor pieces of the shadow MMU related to aging SPTEs in
     preparation for adding multi-generational LRU support in KVM

   - Don't stuff the RSB after VM-Exit when RETPOLINE=y and AutoIBRS is
     enabled, i.e. when the CPU has already flushed the RSB

   - Trace the per-CPU host save area as a VMCB pointer to improve
     readability and clean up the retrieval of the SEV-ES host save area

   - Remove unnecessary accounting of temporary nested VMCB related
     allocations

   - Set FINAL/PAGE in the page fault error code for EPT violations if
     and only if the GVA is valid. If the GVA is NOT valid, there is no
     guest-side page table walk and so stuffing paging related metadata
     is nonsensical

   - Fix a bug where KVM would incorrectly synthesize a nested VM-Exit
     instead of emulating posted interrupt delivery to L2

   - Add a lockdep assertion to detect unsafe accesses of vmcs12
     structures

   - Harden eVMCS loading against an impossible NULL pointer deref
     (really truly should be impossible)

   - Minor SGX fix and a cleanup

   - Misc cleanups

  Generic:

   - Register KVM's cpuhp and syscore callbacks when enabling
     virtualization in hardware, as the sole purpose of said callbacks
     is to disable and re-enable virtualization as needed

   - Enable virtualization when KVM is loaded, not right before the
     first VM is created

     Together with the previous change, this greatly simplifies the logic
     of the callbacks, because their very existence implies that
     virtualization is enabled

   - Fix a bug that results in KVM prematurely exiting to userspace for
     coalesced MMIO/PIO in many cases, clean up the related code, and
     add a testcase

   - Fix a bug in kvm_clear_guest() where it would trigger a buffer
     overflow _if_ the gpa+len crosses a page boundary, which thankfully
     is guaranteed to not happen in the current code base. Add WARNs in
     more helpers that read/write guest memory to detect similar bugs

  Selftests:

   - Fix a goof that caused some Hyper-V tests to be skipped when run on
     bare metal, i.e. NOT in a VM

   - Add a regression test for KVM's handling of SHUTDOWN for an SEV-ES
     guest

   - Explicitly include one-off assets in .gitignore. Past Sean was
     completely wrong about not being able to detect missing .gitignore
     entries

   - Verify userspace single-stepping works when KVM happens to handle a
     VM-Exit in its fastpath

   - Misc cleanups"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (127 commits)
  Documentation: KVM: fix warning in "make htmldocs"
  s390: Enable KVM_S390_UCONTROL config in debug_defconfig
  selftests: kvm: s390: Add VM run test case
  KVM: SVM: let alternatives handle the cases when RSB filling is required
  KVM: VMX: Set PFERR_GUEST_{FINAL,PAGE}_MASK if and only if the GVA is valid
  KVM: x86/mmu: Use KVM_PAGES_PER_HPAGE() instead of an open coded equivalent
  KVM: x86/mmu: Add KVM_RMAP_MANY to replace open coded '1' and '1ul' literals
  KVM: x86/mmu: Fold mmu_spte_age() into kvm_rmap_age_gfn_range()
  KVM: x86/mmu: Morph kvm_handle_gfn_range() into an aging specific helper
  KVM: x86/mmu: Honor NEED_RESCHED when zapping rmaps and blocking is allowed
  KVM: x86/mmu: Add a helper to walk and zap rmaps for a memslot
  KVM: x86/mmu: Plumb a @can_yield parameter into __walk_slot_rmaps()
  KVM: x86/mmu: Move walk_slot_rmaps() up near for_each_slot_rmap_range()
  KVM: x86/mmu: WARN on MMIO cache hit when emulating write-protected gfn
  KVM: x86/mmu: Detect if unprotect will do anything based on invalid_list
  KVM: x86/mmu: Subsume kvm_mmu_unprotect_page() into the and_retry() version
  KVM: x86: Rename reexecute_instruction()=>kvm_unprotect_and_retry_on_failure()
  KVM: x86: Update retry protection fields when forcing retry on emulation failure
  KVM: x86: Apply retry protection to "unprotect on failure" path
  KVM: x86: Check EMULTYPE_WRITE_PF_TO_SP before unprotecting gfn
  ...
Linus Torvalds 2024-09-28 09:20:14 -07:00
commit 3efc57369a
82 changed files with 2803 additions and 1452 deletions


@ -2677,6 +2677,23 @@
Default is Y (on). Default is Y (on).
kvm.enable_virt_at_load=[KVM,ARM64,LOONGARCH,MIPS,RISCV,X86]
If enabled, KVM will enable virtualization in hardware
when KVM is loaded, and disable virtualization when KVM
is unloaded (if KVM is built as a module).
If disabled, KVM will dynamically enable and disable
virtualization on-demand when creating and destroying
VMs, i.e. on the 0=>1 and 1=>0 transitions of the
number of VMs.
Enabling virtualization at module load avoids potential
latency when creating the first (0=>1) VM, as KVM serializes
virtualization enabling across all online CPUs.  The
"cost" of enabling virtualization when KVM is loaded
is that doing so may interfere with using out-of-tree
hypervisors that want to "own" virtualization hardware.
kvm.enable_vmware_backdoor=[KVM] Support VMware backdoor PV interface. kvm.enable_vmware_backdoor=[KVM] Support VMware backdoor PV interface.
Default is false (don't support). Default is false (don't support).
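
To make the two policies above concrete, here is a minimal, hedged sketch in C. It is not the actual kvm.ko code; enable_virt_at_load mirrors the parameter name, but the sketch_* helpers and the bare counter are illustrative stand-ins for KVM's real enabling logic.

    /* Hedged sketch only; sketch_* names are hypothetical, not KVM symbols. */
    static bool enable_virt_at_load = true;    /* kvm.enable_virt_at_load */
    static int sketch_vm_count;

    static int sketch_hardware_enable_all(void)
    {
            /* Stand-in for serializing virtualization enabling across online CPUs. */
            return 0;
    }

    static int sketch_module_init(void)
    {
            /* Default policy: pay the enabling cost once, at module load. */
            if (enable_virt_at_load)
                    return sketch_hardware_enable_all();
            return 0;
    }

    static int sketch_create_vm(void)
    {
            /* On-demand policy: enable only on the 0=>1 VM transition. */
            if (!enable_virt_at_load && sketch_vm_count++ == 0)
                    return sketch_hardware_enable_all();
            return 0;
    }

The trade-off described above is visible in the sketch: with the default policy, VM creation never pays the serialization cost, at the price of holding virtualization hardware even while no VMs exist.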


@ -4214,7 +4214,9 @@ whether or not KVM_CAP_X86_USER_SPACE_MSR's KVM_MSR_EXIT_REASON_FILTER is
enabled. If KVM_MSR_EXIT_REASON_FILTER is enabled, KVM will exit to userspace enabled. If KVM_MSR_EXIT_REASON_FILTER is enabled, KVM will exit to userspace
on denied accesses, i.e. userspace effectively intercepts the MSR access. If on denied accesses, i.e. userspace effectively intercepts the MSR access. If
KVM_MSR_EXIT_REASON_FILTER is not enabled, KVM will inject a #GP into the guest KVM_MSR_EXIT_REASON_FILTER is not enabled, KVM will inject a #GP into the guest
on denied accesses. on denied accesses. Note, if an MSR access is denied during emulation of MSR
load/stores during VMX transitions, KVM ignores KVM_MSR_EXIT_REASON_FILTER.
See the below warning for full details.
If an MSR access is allowed by userspace, KVM will emulate and/or virtualize If an MSR access is allowed by userspace, KVM will emulate and/or virtualize
the access in accordance with the vCPU model. Note, KVM may still ultimately the access in accordance with the vCPU model. Note, KVM may still ultimately
@ -4229,9 +4231,22 @@ filtering. In that mode, ``KVM_MSR_FILTER_DEFAULT_DENY`` is invalid and causes
an error. an error.
.. warning:: .. warning::
MSR accesses as part of nested VM-Enter/VM-Exit are not filtered. MSR accesses that are side effects of instruction execution (emulated or
This includes both writes to individual VMCS fields and reads/writes native) are not filtered as hardware does not honor MSR bitmaps outside of
through the MSR lists pointed to by the VMCS. RDMSR and WRMSR, and KVM mimics that behavior when emulating instructions
to avoid pointless divergence from hardware. E.g. RDPID reads MSR_TSC_AUX,
SYSENTER reads the SYSENTER MSRs, etc.
MSRs that are loaded/stored via dedicated VMCS fields are not filtered as
part of VM-Enter/VM-Exit emulation.
MSRs that are loaded/stored via VMX's load/store lists _are_ filtered as part
of VM-Enter/VM-Exit emulation. If an MSR access is denied on VM-Enter, KVM
synthesizes a consistency check VM-Exit (EXIT_REASON_MSR_LOAD_FAIL). If an
MSR access is denied on VM-Exit, KVM synthesizes a VM-Abort. In short, KVM
extends Intel's architectural list of MSRs that cannot be loaded/saved via
the VM-Enter/VM-Exit MSR lists. It is the platform owner's responsibility
to communicate any such restrictions to their end users.
x2APIC MSR accesses cannot be filtered (KVM silently ignores filters that x2APIC MSR accesses cannot be filtered (KVM silently ignores filters that
cover any x2APIC MSRs). cover any x2APIC MSRs).
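
For reference, the filtering discussed above is configured from userspace with KVM_X86_SET_MSR_FILTER. The sketch below denies guest writes to a single, made-up MSR index while allowing everything else; whether a denied access then becomes a #GP or a userspace exit depends on KVM_MSR_EXIT_REASON_FILTER as described earlier. This is a hedged example with error handling omitted, not code from the tree.

    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Hedged sketch: deny guest writes to one MSR, allow all other accesses. */
    static int sketch_deny_msr_writes(int vm_fd)
    {
            unsigned char bitmap[1] = { 0 };   /* bit clear => access denied */
            struct kvm_msr_filter filter = {
                    .flags = KVM_MSR_FILTER_DEFAULT_ALLOW,
            };

            filter.ranges[0].flags  = KVM_MSR_FILTER_WRITE;
            filter.ranges[0].base   = 0xdead;  /* hypothetical MSR index */
            filter.ranges[0].nmsrs  = 1;
            filter.ranges[0].bitmap = bitmap;

            return ioctl(vm_fd, KVM_X86_SET_MSR_FILTER, &filter);
    }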
@ -8082,6 +8097,14 @@ KVM_X86_QUIRK_MWAIT_NEVER_UD_FAULTS By default, KVM emulates MONITOR/MWAIT (if
guest CPUID on writes to MISC_ENABLE if guest CPUID on writes to MISC_ENABLE if
KVM_X86_QUIRK_MISC_ENABLE_NO_MWAIT is KVM_X86_QUIRK_MISC_ENABLE_NO_MWAIT is
disabled. disabled.
KVM_X86_QUIRK_SLOT_ZAP_ALL By default, KVM invalidates all SPTEs
via a fast zap when a memslot is deleted,
if the VM type is KVM_X86_DEFAULT_VM.
When this quirk is disabled, or when the
VM type is other than KVM_X86_DEFAULT_VM,
KVM zaps only the leaf SPTEs that are
within the range of the memslot being
deleted.
=================================== ============================================ =================================== ============================================
7.32 KVM_CAP_MAX_VCPU_ID 7.32 KVM_CAP_MAX_VCPU_ID
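
Like the other entries in this table, KVM_X86_QUIRK_SLOT_ZAP_ALL is opt-out: userspace keeps the legacy zap-everything behavior unless it disables the quirk through KVM_CAP_DISABLE_QUIRKS2. A minimal sketch, with error handling omitted:

    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Opt into zapping only the deleted memslot's leaf SPTEs. */
    static int sketch_disable_slot_zap_all(int vm_fd)
    {
            struct kvm_enable_cap cap = {
                    .cap  = KVM_CAP_DISABLE_QUIRKS2,
                    .args = { KVM_X86_QUIRK_SLOT_ZAP_ALL },
            };

            return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
    }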


@ -11,6 +11,8 @@ The acquisition orders for mutexes are as follows:
- cpus_read_lock() is taken outside kvm_lock - cpus_read_lock() is taken outside kvm_lock
- kvm_usage_lock is taken outside cpus_read_lock()
- kvm->lock is taken outside vcpu->mutex - kvm->lock is taken outside vcpu->mutex
- kvm->lock is taken outside kvm->slots_lock and kvm->irq_lock - kvm->lock is taken outside kvm->slots_lock and kvm->irq_lock
@ -24,6 +26,13 @@ The acquisition orders for mutexes are as follows:
are taken on the waiting side when modifying memslots, so MMU notifiers are taken on the waiting side when modifying memslots, so MMU notifiers
must not take either kvm->slots_lock or kvm->slots_arch_lock. must not take either kvm->slots_lock or kvm->slots_arch_lock.
cpus_read_lock() vs kvm_lock:
- Taking cpus_read_lock() outside of kvm_lock is problematic, despite that
being the official ordering, as it is quite easy to unknowingly trigger
cpus_read_lock() while holding kvm_lock. Use caution when walking vm_list,
e.g. avoid complex operations when possible.
For SRCU: For SRCU:
- ``synchronize_srcu(&kvm->srcu)`` is called inside critical sections - ``synchronize_srcu(&kvm->srcu)`` is called inside critical sections
@ -227,10 +236,16 @@ time it will be set using the Dirty tracking mechanism described above.
:Type: mutex :Type: mutex
:Arch: any :Arch: any
:Protects: - vm_list :Protects: - vm_list
- kvm_usage_count
``kvm_usage_lock``
^^^^^^^^^^^^^^^^^^
:Type: mutex
:Arch: any
:Protects: - kvm_usage_count
- hardware virtualization enable/disable - hardware virtualization enable/disable
:Comment: KVM also disables CPU hotplug via cpus_read_lock() during :Comment: Exists to allow taking cpus_read_lock() while kvm_usage_count is
enable/disable. protected, which simplifies the virtualization enabling logic.
``kvm->mn_invalidate_lock`` ``kvm->mn_invalidate_lock``
^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^
@ -290,11 +305,12 @@ time it will be set using the Dirty tracking mechanism described above.
wakeup. wakeup.
``vendor_module_lock`` ``vendor_module_lock``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^
:Type: mutex :Type: mutex
:Arch: x86 :Arch: x86
:Protects: loading a vendor module (kvm_amd or kvm_intel) :Protects: loading a vendor module (kvm_amd or kvm_intel)
:Comment: Exists because using kvm_lock leads to deadlock. cpu_hotplug_lock is :Comment: Exists because using kvm_lock leads to deadlock. kvm_lock is taken
taken outside of kvm_lock, e.g. in KVM's CPU online/offline callbacks, and in notifiers, e.g. __kvmclock_cpufreq_notifier(), that may be invoked while
many operations need to take cpu_hotplug_lock when loading a vendor module, cpu_hotplug_lock is held, e.g. from cpufreq_boost_trigger_state(), and many
e.g. updating static calls. operations need to take cpu_hotplug_lock when loading a vendor module, e.g.
updating static calls.
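
A hedged sketch of the ordering the new kvm_usage_lock enables: the enable path takes the usage-count mutex first and only then cpus_read_lock(), so it never nests CPU hotplug inside kvm_lock. The sketch_* names below are illustrative stand-ins, not the exact kvm_main.c code, and error unwinding of the count is elided.

    #include <linux/cpu.h>
    #include <linux/mutex.h>

    static DEFINE_MUTEX(sketch_usage_lock);   /* stands in for kvm_usage_lock */
    static int sketch_usage_count;            /* stands in for kvm_usage_count */

    static int sketch_enable_on_each_cpu(void)
    {
            return 0;   /* stand-in for enabling virtualization on every online CPU */
    }

    static int sketch_enable_virtualization(void)
    {
            int r = 0;

            mutex_lock(&sketch_usage_lock);
            if (sketch_usage_count++)
                    goto out;   /* already enabled */

            /* kvm_usage_lock is taken outside cpus_read_lock(), per the rules above. */
            cpus_read_lock();
            r = sketch_enable_on_each_cpu();
            cpus_read_unlock();
    out:
            mutex_unlock(&sketch_usage_lock);
            return r;
    }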


@ -2164,7 +2164,7 @@ static void cpu_hyp_uninit(void *discard)
} }
} }
int kvm_arch_hardware_enable(void) int kvm_arch_enable_virtualization_cpu(void)
{ {
/* /*
* Most calls to this function are made with migration * Most calls to this function are made with migration
@ -2184,7 +2184,7 @@ int kvm_arch_hardware_enable(void)
return 0; return 0;
} }
void kvm_arch_hardware_disable(void) void kvm_arch_disable_virtualization_cpu(void)
{ {
kvm_timer_cpu_down(); kvm_timer_cpu_down();
kvm_vgic_cpu_down(); kvm_vgic_cpu_down();
@ -2380,7 +2380,7 @@ static int __init do_pkvm_init(u32 hyp_va_bits)
/* /*
* The stub hypercalls are now disabled, so set our local flag to * The stub hypercalls are now disabled, so set our local flag to
* prevent a later re-init attempt in kvm_arch_hardware_enable(). * prevent a later re-init attempt in kvm_arch_enable_virtualization_cpu().
*/ */
__this_cpu_write(kvm_hyp_initialized, 1); __this_cpu_write(kvm_hyp_initialized, 1);
preempt_enable(); preempt_enable();


@ -261,7 +261,7 @@ long kvm_arch_dev_ioctl(struct file *filp,
return -ENOIOCTLCMD; return -ENOIOCTLCMD;
} }
int kvm_arch_hardware_enable(void) int kvm_arch_enable_virtualization_cpu(void)
{ {
unsigned long env, gcfg = 0; unsigned long env, gcfg = 0;
@ -300,7 +300,7 @@ int kvm_arch_hardware_enable(void)
return 0; return 0;
} }
void kvm_arch_hardware_disable(void) void kvm_arch_disable_virtualization_cpu(void)
{ {
write_csr_gcfg(0); write_csr_gcfg(0);
write_csr_gstat(0); write_csr_gstat(0);


@ -728,8 +728,8 @@ struct kvm_mips_callbacks {
int (*handle_fpe)(struct kvm_vcpu *vcpu); int (*handle_fpe)(struct kvm_vcpu *vcpu);
int (*handle_msa_disabled)(struct kvm_vcpu *vcpu); int (*handle_msa_disabled)(struct kvm_vcpu *vcpu);
int (*handle_guest_exit)(struct kvm_vcpu *vcpu); int (*handle_guest_exit)(struct kvm_vcpu *vcpu);
int (*hardware_enable)(void); int (*enable_virtualization_cpu)(void);
void (*hardware_disable)(void); void (*disable_virtualization_cpu)(void);
int (*check_extension)(struct kvm *kvm, long ext); int (*check_extension)(struct kvm *kvm, long ext);
int (*vcpu_init)(struct kvm_vcpu *vcpu); int (*vcpu_init)(struct kvm_vcpu *vcpu);
void (*vcpu_uninit)(struct kvm_vcpu *vcpu); void (*vcpu_uninit)(struct kvm_vcpu *vcpu);


@ -125,14 +125,14 @@ int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu)
return 1; return 1;
} }
int kvm_arch_hardware_enable(void) int kvm_arch_enable_virtualization_cpu(void)
{ {
return kvm_mips_callbacks->hardware_enable(); return kvm_mips_callbacks->enable_virtualization_cpu();
} }
void kvm_arch_hardware_disable(void) void kvm_arch_disable_virtualization_cpu(void)
{ {
kvm_mips_callbacks->hardware_disable(); kvm_mips_callbacks->disable_virtualization_cpu();
} }
int kvm_arch_init_vm(struct kvm *kvm, unsigned long type) int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)


@ -2869,7 +2869,7 @@ static unsigned int kvm_vz_resize_guest_vtlb(unsigned int size)
return ret + 1; return ret + 1;
} }
static int kvm_vz_hardware_enable(void) static int kvm_vz_enable_virtualization_cpu(void)
{ {
unsigned int mmu_size, guest_mmu_size, ftlb_size; unsigned int mmu_size, guest_mmu_size, ftlb_size;
u64 guest_cvmctl, cvmvmconfig; u64 guest_cvmctl, cvmvmconfig;
@ -2983,7 +2983,7 @@ static int kvm_vz_hardware_enable(void)
return 0; return 0;
} }
static void kvm_vz_hardware_disable(void) static void kvm_vz_disable_virtualization_cpu(void)
{ {
u64 cvmvmconfig; u64 cvmvmconfig;
unsigned int mmu_size; unsigned int mmu_size;
@ -3280,8 +3280,8 @@ static struct kvm_mips_callbacks kvm_vz_callbacks = {
.handle_msa_disabled = kvm_trap_vz_handle_msa_disabled, .handle_msa_disabled = kvm_trap_vz_handle_msa_disabled,
.handle_guest_exit = kvm_trap_vz_handle_guest_exit, .handle_guest_exit = kvm_trap_vz_handle_guest_exit,
.hardware_enable = kvm_vz_hardware_enable, .enable_virtualization_cpu = kvm_vz_enable_virtualization_cpu,
.hardware_disable = kvm_vz_hardware_disable, .disable_virtualization_cpu = kvm_vz_disable_virtualization_cpu,
.check_extension = kvm_vz_check_extension, .check_extension = kvm_vz_check_extension,
.vcpu_init = kvm_vz_vcpu_init, .vcpu_init = kvm_vz_vcpu_init,
.vcpu_uninit = kvm_vz_vcpu_uninit, .vcpu_uninit = kvm_vz_vcpu_uninit,


@ -20,7 +20,7 @@ long kvm_arch_dev_ioctl(struct file *filp,
return -EINVAL; return -EINVAL;
} }
int kvm_arch_hardware_enable(void) int kvm_arch_enable_virtualization_cpu(void)
{ {
csr_write(CSR_HEDELEG, KVM_HEDELEG_DEFAULT); csr_write(CSR_HEDELEG, KVM_HEDELEG_DEFAULT);
csr_write(CSR_HIDELEG, KVM_HIDELEG_DEFAULT); csr_write(CSR_HIDELEG, KVM_HIDELEG_DEFAULT);
@ -35,7 +35,7 @@ int kvm_arch_hardware_enable(void)
return 0; return 0;
} }
void kvm_arch_hardware_disable(void) void kvm_arch_disable_virtualization_cpu(void)
{ {
kvm_riscv_aia_disable(); kvm_riscv_aia_disable();


@ -59,6 +59,7 @@ CONFIG_CMM=m
CONFIG_APPLDATA_BASE=y CONFIG_APPLDATA_BASE=y
CONFIG_S390_HYPFS_FS=y CONFIG_S390_HYPFS_FS=y
CONFIG_KVM=m CONFIG_KVM=m
CONFIG_KVM_S390_UCONTROL=y
CONFIG_S390_UNWIND_SELFTEST=m CONFIG_S390_UNWIND_SELFTEST=m
CONFIG_S390_KPROBES_SANITY_TEST=m CONFIG_S390_KPROBES_SANITY_TEST=m
CONFIG_S390_MODULES_SANITY_TEST=m CONFIG_S390_MODULES_SANITY_TEST=m


@ -348,20 +348,29 @@ static inline int plo_test_bit(unsigned char nr)
return cc == 0; return cc == 0;
} }
static __always_inline void __insn32_query(unsigned int opcode, u8 *query) static __always_inline void __sortl_query(u8 (*query)[32])
{ {
asm volatile( asm volatile(
" lghi 0,0\n" " lghi 0,0\n"
" lgr 1,%[query]\n" " la 1,%[query]\n"
/* Parameter registers are ignored */ /* Parameter registers are ignored */
" .insn rrf,%[opc] << 16,2,4,6,0\n" " .insn rre,0xb9380000,2,4\n"
: [query] "=R" (*query)
: :
: [query] "d" ((unsigned long)query), [opc] "i" (opcode) : "cc", "0", "1");
: "cc", "memory", "0", "1");
} }
#define INSN_SORTL 0xb938 static __always_inline void __dfltcc_query(u8 (*query)[32])
#define INSN_DFLTCC 0xb939 {
asm volatile(
" lghi 0,0\n"
" la 1,%[query]\n"
/* Parameter registers are ignored */
" .insn rrf,0xb9390000,2,4,6,0\n"
: [query] "=R" (*query)
:
: "cc", "0", "1");
}
static void __init kvm_s390_cpu_feat_init(void) static void __init kvm_s390_cpu_feat_init(void)
{ {
@ -415,10 +424,10 @@ static void __init kvm_s390_cpu_feat_init(void)
kvm_s390_available_subfunc.kdsa); kvm_s390_available_subfunc.kdsa);
if (test_facility(150)) /* SORTL */ if (test_facility(150)) /* SORTL */
__insn32_query(INSN_SORTL, kvm_s390_available_subfunc.sortl); __sortl_query(&kvm_s390_available_subfunc.sortl);
if (test_facility(151)) /* DFLTCC */ if (test_facility(151)) /* DFLTCC */
__insn32_query(INSN_DFLTCC, kvm_s390_available_subfunc.dfltcc); __dfltcc_query(&kvm_s390_available_subfunc.dfltcc);
if (MACHINE_HAS_ESOP) if (MACHINE_HAS_ESOP)
allow_cpu_feat(KVM_S390_VM_CPU_FEAT_ESOP); allow_cpu_feat(KVM_S390_VM_CPU_FEAT_ESOP);


@ -179,6 +179,7 @@ static __always_inline bool cpuid_function_is_indexed(u32 function)
case 0x1d: case 0x1d:
case 0x1e: case 0x1e:
case 0x1f: case 0x1f:
case 0x24:
case 0x8000001d: case 0x8000001d:
return true; return true;
} }


@ -14,8 +14,8 @@ BUILD_BUG_ON(1)
* be __static_call_return0. * be __static_call_return0.
*/ */
KVM_X86_OP(check_processor_compatibility) KVM_X86_OP(check_processor_compatibility)
KVM_X86_OP(hardware_enable) KVM_X86_OP(enable_virtualization_cpu)
KVM_X86_OP(hardware_disable) KVM_X86_OP(disable_virtualization_cpu)
KVM_X86_OP(hardware_unsetup) KVM_X86_OP(hardware_unsetup)
KVM_X86_OP(has_emulated_msr) KVM_X86_OP(has_emulated_msr)
KVM_X86_OP(vcpu_after_set_cpuid) KVM_X86_OP(vcpu_after_set_cpuid)
@ -125,7 +125,7 @@ KVM_X86_OP_OPTIONAL(mem_enc_unregister_region)
KVM_X86_OP_OPTIONAL(vm_copy_enc_context_from) KVM_X86_OP_OPTIONAL(vm_copy_enc_context_from)
KVM_X86_OP_OPTIONAL(vm_move_enc_context_from) KVM_X86_OP_OPTIONAL(vm_move_enc_context_from)
KVM_X86_OP_OPTIONAL(guest_memory_reclaimed) KVM_X86_OP_OPTIONAL(guest_memory_reclaimed)
KVM_X86_OP(get_msr_feature) KVM_X86_OP(get_feature_msr)
KVM_X86_OP(check_emulate_instruction) KVM_X86_OP(check_emulate_instruction)
KVM_X86_OP(apic_init_signal_blocked) KVM_X86_OP(apic_init_signal_blocked)
KVM_X86_OP_OPTIONAL(enable_l2_tlb_flush) KVM_X86_OP_OPTIONAL(enable_l2_tlb_flush)


@ -36,6 +36,7 @@
#include <asm/kvm_page_track.h> #include <asm/kvm_page_track.h>
#include <asm/kvm_vcpu_regs.h> #include <asm/kvm_vcpu_regs.h>
#include <asm/hyperv-tlfs.h> #include <asm/hyperv-tlfs.h>
#include <asm/reboot.h>
#define __KVM_HAVE_ARCH_VCPU_DEBUGFS #define __KVM_HAVE_ARCH_VCPU_DEBUGFS
@ -211,6 +212,7 @@ enum exit_fastpath_completion {
EXIT_FASTPATH_NONE, EXIT_FASTPATH_NONE,
EXIT_FASTPATH_REENTER_GUEST, EXIT_FASTPATH_REENTER_GUEST,
EXIT_FASTPATH_EXIT_HANDLED, EXIT_FASTPATH_EXIT_HANDLED,
EXIT_FASTPATH_EXIT_USERSPACE,
}; };
typedef enum exit_fastpath_completion fastpath_t; typedef enum exit_fastpath_completion fastpath_t;
@ -280,10 +282,6 @@ enum x86_intercept_stage;
#define PFERR_PRIVATE_ACCESS BIT_ULL(49) #define PFERR_PRIVATE_ACCESS BIT_ULL(49)
#define PFERR_SYNTHETIC_MASK (PFERR_IMPLICIT_ACCESS | PFERR_PRIVATE_ACCESS) #define PFERR_SYNTHETIC_MASK (PFERR_IMPLICIT_ACCESS | PFERR_PRIVATE_ACCESS)
#define PFERR_NESTED_GUEST_PAGE (PFERR_GUEST_PAGE_MASK | \
PFERR_WRITE_MASK | \
PFERR_PRESENT_MASK)
/* apic attention bits */ /* apic attention bits */
#define KVM_APIC_CHECK_VAPIC 0 #define KVM_APIC_CHECK_VAPIC 0
/* /*
@ -1629,8 +1627,10 @@ struct kvm_x86_ops {
int (*check_processor_compatibility)(void); int (*check_processor_compatibility)(void);
int (*hardware_enable)(void); int (*enable_virtualization_cpu)(void);
void (*hardware_disable)(void); void (*disable_virtualization_cpu)(void);
cpu_emergency_virt_cb *emergency_disable_virtualization_cpu;
void (*hardware_unsetup)(void); void (*hardware_unsetup)(void);
bool (*has_emulated_msr)(struct kvm *kvm, u32 index); bool (*has_emulated_msr)(struct kvm *kvm, u32 index);
void (*vcpu_after_set_cpuid)(struct kvm_vcpu *vcpu); void (*vcpu_after_set_cpuid)(struct kvm_vcpu *vcpu);
@ -1727,6 +1727,8 @@ struct kvm_x86_ops {
void (*enable_nmi_window)(struct kvm_vcpu *vcpu); void (*enable_nmi_window)(struct kvm_vcpu *vcpu);
void (*enable_irq_window)(struct kvm_vcpu *vcpu); void (*enable_irq_window)(struct kvm_vcpu *vcpu);
void (*update_cr8_intercept)(struct kvm_vcpu *vcpu, int tpr, int irr); void (*update_cr8_intercept)(struct kvm_vcpu *vcpu, int tpr, int irr);
const bool x2apic_icr_is_split;
const unsigned long required_apicv_inhibits; const unsigned long required_apicv_inhibits;
bool allow_apicv_in_x2apic_without_x2apic_virtualization; bool allow_apicv_in_x2apic_without_x2apic_virtualization;
void (*refresh_apicv_exec_ctrl)(struct kvm_vcpu *vcpu); void (*refresh_apicv_exec_ctrl)(struct kvm_vcpu *vcpu);
@ -1806,7 +1808,7 @@ struct kvm_x86_ops {
int (*vm_move_enc_context_from)(struct kvm *kvm, unsigned int source_fd); int (*vm_move_enc_context_from)(struct kvm *kvm, unsigned int source_fd);
void (*guest_memory_reclaimed)(struct kvm *kvm); void (*guest_memory_reclaimed)(struct kvm *kvm);
int (*get_msr_feature)(struct kvm_msr_entry *entry); int (*get_feature_msr)(u32 msr, u64 *data);
int (*check_emulate_instruction)(struct kvm_vcpu *vcpu, int emul_type, int (*check_emulate_instruction)(struct kvm_vcpu *vcpu, int emul_type,
void *insn, int insn_len); void *insn, int insn_len);
@ -2060,6 +2062,8 @@ void kvm_prepare_emulation_failure_exit(struct kvm_vcpu *vcpu);
void kvm_enable_efer_bits(u64); void kvm_enable_efer_bits(u64);
bool kvm_valid_efer(struct kvm_vcpu *vcpu, u64 efer); bool kvm_valid_efer(struct kvm_vcpu *vcpu, u64 efer);
int kvm_get_msr_with_filter(struct kvm_vcpu *vcpu, u32 index, u64 *data);
int kvm_set_msr_with_filter(struct kvm_vcpu *vcpu, u32 index, u64 data);
int __kvm_get_msr(struct kvm_vcpu *vcpu, u32 index, u64 *data, bool host_initiated); int __kvm_get_msr(struct kvm_vcpu *vcpu, u32 index, u64 *data, bool host_initiated);
int kvm_get_msr(struct kvm_vcpu *vcpu, u32 index, u64 *data); int kvm_get_msr(struct kvm_vcpu *vcpu, u32 index, u64 *data);
int kvm_set_msr(struct kvm_vcpu *vcpu, u32 index, u64 data); int kvm_set_msr(struct kvm_vcpu *vcpu, u32 index, u64 data);
@ -2136,7 +2140,15 @@ int kvm_get_nr_pending_nmis(struct kvm_vcpu *vcpu);
void kvm_update_dr7(struct kvm_vcpu *vcpu); void kvm_update_dr7(struct kvm_vcpu *vcpu);
int kvm_mmu_unprotect_page(struct kvm *kvm, gfn_t gfn); bool __kvm_mmu_unprotect_gfn_and_retry(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
bool always_retry);
static inline bool kvm_mmu_unprotect_gfn_and_retry(struct kvm_vcpu *vcpu,
gpa_t cr2_or_gpa)
{
return __kvm_mmu_unprotect_gfn_and_retry(vcpu, cr2_or_gpa, false);
}
void kvm_mmu_free_roots(struct kvm *kvm, struct kvm_mmu *mmu, void kvm_mmu_free_roots(struct kvm *kvm, struct kvm_mmu *mmu,
ulong roots_to_free); ulong roots_to_free);
void kvm_mmu_free_guest_mode_roots(struct kvm *kvm, struct kvm_mmu *mmu); void kvm_mmu_free_guest_mode_roots(struct kvm *kvm, struct kvm_mmu *mmu);
@ -2254,6 +2266,7 @@ int kvm_cpu_has_injectable_intr(struct kvm_vcpu *v);
int kvm_cpu_has_interrupt(struct kvm_vcpu *vcpu); int kvm_cpu_has_interrupt(struct kvm_vcpu *vcpu);
int kvm_cpu_has_extint(struct kvm_vcpu *v); int kvm_cpu_has_extint(struct kvm_vcpu *v);
int kvm_arch_interrupt_allowed(struct kvm_vcpu *vcpu); int kvm_arch_interrupt_allowed(struct kvm_vcpu *vcpu);
int kvm_cpu_get_extint(struct kvm_vcpu *v);
int kvm_cpu_get_interrupt(struct kvm_vcpu *v); int kvm_cpu_get_interrupt(struct kvm_vcpu *v);
void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event); void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event);
@ -2345,7 +2358,8 @@ int memslot_rmap_alloc(struct kvm_memory_slot *slot, unsigned long npages);
KVM_X86_QUIRK_OUT_7E_INC_RIP | \ KVM_X86_QUIRK_OUT_7E_INC_RIP | \
KVM_X86_QUIRK_MISC_ENABLE_NO_MWAIT | \ KVM_X86_QUIRK_MISC_ENABLE_NO_MWAIT | \
KVM_X86_QUIRK_FIX_HYPERCALL_INSN | \ KVM_X86_QUIRK_FIX_HYPERCALL_INSN | \
KVM_X86_QUIRK_MWAIT_NEVER_UD_FAULTS) KVM_X86_QUIRK_MWAIT_NEVER_UD_FAULTS | \
KVM_X86_QUIRK_SLOT_ZAP_ALL)
/* /*
* KVM previously used a u32 field in kvm_run to indicate the hypercall was * KVM previously used a u32 field in kvm_run to indicate the hypercall was


@ -36,6 +36,20 @@
#define EFER_FFXSR (1<<_EFER_FFXSR) #define EFER_FFXSR (1<<_EFER_FFXSR)
#define EFER_AUTOIBRS (1<<_EFER_AUTOIBRS) #define EFER_AUTOIBRS (1<<_EFER_AUTOIBRS)
/*
* Architectural memory types that are common to MTRRs, PAT, VMX MSRs, etc.
* Most MSRs support/allow only a subset of memory types, but the values
* themselves are common across all relevant MSRs.
*/
#define X86_MEMTYPE_UC 0ull /* Uncacheable, a.k.a. Strong Uncacheable */
#define X86_MEMTYPE_WC 1ull /* Write Combining */
/* RESERVED 2 */
/* RESERVED 3 */
#define X86_MEMTYPE_WT 4ull /* Write Through */
#define X86_MEMTYPE_WP 5ull /* Write Protected */
#define X86_MEMTYPE_WB 6ull /* Write Back */
#define X86_MEMTYPE_UC_MINUS 7ull /* Weak Uncacheable (PAT only) */
/* FRED MSRs */ /* FRED MSRs */
#define MSR_IA32_FRED_RSP0 0x1cc /* Level 0 stack pointer */ #define MSR_IA32_FRED_RSP0 0x1cc /* Level 0 stack pointer */
#define MSR_IA32_FRED_RSP1 0x1cd /* Level 1 stack pointer */ #define MSR_IA32_FRED_RSP1 0x1cd /* Level 1 stack pointer */
@ -365,6 +379,12 @@
#define MSR_IA32_CR_PAT 0x00000277 #define MSR_IA32_CR_PAT 0x00000277
#define PAT_VALUE(p0, p1, p2, p3, p4, p5, p6, p7) \
((X86_MEMTYPE_ ## p0) | (X86_MEMTYPE_ ## p1 << 8) | \
(X86_MEMTYPE_ ## p2 << 16) | (X86_MEMTYPE_ ## p3 << 24) | \
(X86_MEMTYPE_ ## p4 << 32) | (X86_MEMTYPE_ ## p5 << 40) | \
(X86_MEMTYPE_ ## p6 << 48) | (X86_MEMTYPE_ ## p7 << 56))
#define MSR_IA32_DEBUGCTLMSR 0x000001d9 #define MSR_IA32_DEBUGCTLMSR 0x000001d9
#define MSR_IA32_LASTBRANCHFROMIP 0x000001db #define MSR_IA32_LASTBRANCHFROMIP 0x000001db
#define MSR_IA32_LASTBRANCHTOIP 0x000001dc #define MSR_IA32_LASTBRANCHTOIP 0x000001dc
@ -1159,15 +1179,6 @@
#define MSR_IA32_VMX_VMFUNC 0x00000491 #define MSR_IA32_VMX_VMFUNC 0x00000491
#define MSR_IA32_VMX_PROCBASED_CTLS3 0x00000492 #define MSR_IA32_VMX_PROCBASED_CTLS3 0x00000492
/* VMX_BASIC bits and bitmasks */
#define VMX_BASIC_VMCS_SIZE_SHIFT 32
#define VMX_BASIC_TRUE_CTLS (1ULL << 55)
#define VMX_BASIC_64 0x0001000000000000LLU
#define VMX_BASIC_MEM_TYPE_SHIFT 50
#define VMX_BASIC_MEM_TYPE_MASK 0x003c000000000000LLU
#define VMX_BASIC_MEM_TYPE_WB 6LLU
#define VMX_BASIC_INOUT 0x0040000000000000LLU
/* Resctrl MSRs: */ /* Resctrl MSRs: */
/* - Intel: */ /* - Intel: */
#define MSR_IA32_L3_QOS_CFG 0xc81 #define MSR_IA32_L3_QOS_CFG 0xc81
@ -1185,11 +1196,6 @@
#define MSR_IA32_SMBA_BW_BASE 0xc0000280 #define MSR_IA32_SMBA_BW_BASE 0xc0000280
#define MSR_IA32_EVT_CFG_BASE 0xc0000400 #define MSR_IA32_EVT_CFG_BASE 0xc0000400
/* MSR_IA32_VMX_MISC bits */
#define MSR_IA32_VMX_MISC_INTEL_PT (1ULL << 14)
#define MSR_IA32_VMX_MISC_VMWRITE_SHADOW_RO_FIELDS (1ULL << 29)
#define MSR_IA32_VMX_MISC_PREEMPTION_TIMER_SCALE 0x1F
/* AMD-V MSRs */ /* AMD-V MSRs */
#define MSR_VM_CR 0xc0010114 #define MSR_VM_CR 0xc0010114
#define MSR_VM_IGNNE 0xc0010115 #define MSR_VM_IGNNE 0xc0010115
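
As a worked example of the PAT_VALUE() helper added above: the architectural power-on programming of MSR_IA32_CR_PAT (WB, WT, UC-, UC, repeated for entries 4-7) can be expressed with the X86_MEMTYPE_* names instead of the usual magic constant. The define below is only an illustration of the expansion, not one taken from the tree.

    /*
     * Each PAT entry is 8 bits wide; PAT_VALUE() pastes "X86_MEMTYPE_" onto
     * each argument and shifts it into place.  The architectural default,
     *
     *   PAT0=WB PAT1=WT PAT2=UC- PAT3=UC  (and the same for PAT4-PAT7)
     *
     * expands to the familiar 0x0007040600070406.
     */
    #define SKETCH_PAT_RESET_VALUE \
            PAT_VALUE(WB, WT, UC_MINUS, UC, WB, WT, UC_MINUS, UC)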


@ -25,8 +25,8 @@ void __noreturn machine_real_restart(unsigned int type);
#define MRR_BIOS 0 #define MRR_BIOS 0
#define MRR_APM 1 #define MRR_APM 1
#if IS_ENABLED(CONFIG_KVM_INTEL) || IS_ENABLED(CONFIG_KVM_AMD)
typedef void (cpu_emergency_virt_cb)(void); typedef void (cpu_emergency_virt_cb)(void);
#if IS_ENABLED(CONFIG_KVM_INTEL) || IS_ENABLED(CONFIG_KVM_AMD)
void cpu_emergency_register_virt_callback(cpu_emergency_virt_cb *callback); void cpu_emergency_register_virt_callback(cpu_emergency_virt_cb *callback);
void cpu_emergency_unregister_virt_callback(cpu_emergency_virt_cb *callback); void cpu_emergency_unregister_virt_callback(cpu_emergency_virt_cb *callback);
void cpu_emergency_disable_virtualization(void); void cpu_emergency_disable_virtualization(void);


@ -516,6 +516,20 @@ struct ghcb {
u32 ghcb_usage; u32 ghcb_usage;
} __packed; } __packed;
struct vmcb {
struct vmcb_control_area control;
union {
struct vmcb_save_area save;
/*
* For SEV-ES VMs, the save area in the VMCB is used only to
* save/load host state. Guest state resides in a separate
* page, the aptly named VM Save Area (VMSA), that is encrypted
* with the guest's private key.
*/
struct sev_es_save_area host_sev_es_save;
};
} __packed;
#define EXPECTED_VMCB_SAVE_AREA_SIZE 744 #define EXPECTED_VMCB_SAVE_AREA_SIZE 744
#define EXPECTED_GHCB_SAVE_AREA_SIZE 1032 #define EXPECTED_GHCB_SAVE_AREA_SIZE 1032
@ -532,6 +546,7 @@ static inline void __unused_size_checks(void)
BUILD_BUG_ON(sizeof(struct ghcb_save_area) != EXPECTED_GHCB_SAVE_AREA_SIZE); BUILD_BUG_ON(sizeof(struct ghcb_save_area) != EXPECTED_GHCB_SAVE_AREA_SIZE);
BUILD_BUG_ON(sizeof(struct sev_es_save_area) != EXPECTED_SEV_ES_SAVE_AREA_SIZE); BUILD_BUG_ON(sizeof(struct sev_es_save_area) != EXPECTED_SEV_ES_SAVE_AREA_SIZE);
BUILD_BUG_ON(sizeof(struct vmcb_control_area) != EXPECTED_VMCB_CONTROL_AREA_SIZE); BUILD_BUG_ON(sizeof(struct vmcb_control_area) != EXPECTED_VMCB_CONTROL_AREA_SIZE);
BUILD_BUG_ON(offsetof(struct vmcb, save) != EXPECTED_VMCB_CONTROL_AREA_SIZE);
BUILD_BUG_ON(sizeof(struct ghcb) != EXPECTED_GHCB_SIZE); BUILD_BUG_ON(sizeof(struct ghcb) != EXPECTED_GHCB_SIZE);
/* Check offsets of reserved fields */ /* Check offsets of reserved fields */
@ -568,11 +583,6 @@ static inline void __unused_size_checks(void)
BUILD_BUG_RESERVED_OFFSET(ghcb, 0xff0); BUILD_BUG_RESERVED_OFFSET(ghcb, 0xff0);
} }
struct vmcb {
struct vmcb_control_area control;
struct vmcb_save_area save;
} __packed;
#define SVM_CPUID_FUNC 0x8000000a #define SVM_CPUID_FUNC 0x8000000a
#define SVM_SELECTOR_S_SHIFT 4 #define SVM_SELECTOR_S_SHIFT 4
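
The union added above is what lets the per-CPU host save area be passed around as a plain VMCB pointer while the SEV-ES layout stays reachable by name. A hedged one-liner illustrating the access pattern, using the struct from this hunk; the function itself is illustrative, not a helper from the tree.

    /* Reach the SEV-ES host state without casting raw page addresses. */
    static struct sev_es_save_area *sketch_host_sev_es_save(struct vmcb *host_vmcb)
    {
            return &host_vmcb->host_sev_es_save;
    }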


@ -122,19 +122,17 @@
#define VM_ENTRY_ALWAYSON_WITHOUT_TRUE_MSR 0x000011ff #define VM_ENTRY_ALWAYSON_WITHOUT_TRUE_MSR 0x000011ff
#define VMX_MISC_PREEMPTION_TIMER_RATE_MASK 0x0000001f
#define VMX_MISC_SAVE_EFER_LMA 0x00000020
#define VMX_MISC_ACTIVITY_HLT 0x00000040
#define VMX_MISC_ACTIVITY_WAIT_SIPI 0x00000100
#define VMX_MISC_ZERO_LEN_INS 0x40000000
#define VMX_MISC_MSR_LIST_MULTIPLIER 512
/* VMFUNC functions */ /* VMFUNC functions */
#define VMFUNC_CONTROL_BIT(x) BIT((VMX_FEATURE_##x & 0x1f) - 28) #define VMFUNC_CONTROL_BIT(x) BIT((VMX_FEATURE_##x & 0x1f) - 28)
#define VMX_VMFUNC_EPTP_SWITCHING VMFUNC_CONTROL_BIT(EPTP_SWITCHING) #define VMX_VMFUNC_EPTP_SWITCHING VMFUNC_CONTROL_BIT(EPTP_SWITCHING)
#define VMFUNC_EPTP_ENTRIES 512 #define VMFUNC_EPTP_ENTRIES 512
#define VMX_BASIC_32BIT_PHYS_ADDR_ONLY BIT_ULL(48)
#define VMX_BASIC_DUAL_MONITOR_TREATMENT BIT_ULL(49)
#define VMX_BASIC_INOUT BIT_ULL(54)
#define VMX_BASIC_TRUE_CTLS BIT_ULL(55)
static inline u32 vmx_basic_vmcs_revision_id(u64 vmx_basic) static inline u32 vmx_basic_vmcs_revision_id(u64 vmx_basic)
{ {
return vmx_basic & GENMASK_ULL(30, 0); return vmx_basic & GENMASK_ULL(30, 0);
@ -145,9 +143,30 @@ static inline u32 vmx_basic_vmcs_size(u64 vmx_basic)
return (vmx_basic & GENMASK_ULL(44, 32)) >> 32; return (vmx_basic & GENMASK_ULL(44, 32)) >> 32;
} }
static inline u32 vmx_basic_vmcs_mem_type(u64 vmx_basic)
{
return (vmx_basic & GENMASK_ULL(53, 50)) >> 50;
}
static inline u64 vmx_basic_encode_vmcs_info(u32 revision, u16 size, u8 memtype)
{
return revision | ((u64)size << 32) | ((u64)memtype << 50);
}
#define VMX_MISC_SAVE_EFER_LMA BIT_ULL(5)
#define VMX_MISC_ACTIVITY_HLT BIT_ULL(6)
#define VMX_MISC_ACTIVITY_SHUTDOWN BIT_ULL(7)
#define VMX_MISC_ACTIVITY_WAIT_SIPI BIT_ULL(8)
#define VMX_MISC_INTEL_PT BIT_ULL(14)
#define VMX_MISC_RDMSR_IN_SMM BIT_ULL(15)
#define VMX_MISC_VMXOFF_BLOCK_SMI BIT_ULL(28)
#define VMX_MISC_VMWRITE_SHADOW_RO_FIELDS BIT_ULL(29)
#define VMX_MISC_ZERO_LEN_INS BIT_ULL(30)
#define VMX_MISC_MSR_LIST_MULTIPLIER 512
static inline int vmx_misc_preemption_timer_rate(u64 vmx_misc) static inline int vmx_misc_preemption_timer_rate(u64 vmx_misc)
{ {
return vmx_misc & VMX_MISC_PREEMPTION_TIMER_RATE_MASK; return vmx_misc & GENMASK_ULL(4, 0);
} }
static inline int vmx_misc_cr3_count(u64 vmx_misc) static inline int vmx_misc_cr3_count(u64 vmx_misc)
@ -508,9 +527,10 @@ enum vmcs_field {
#define VMX_EPTP_PWL_4 0x18ull #define VMX_EPTP_PWL_4 0x18ull
#define VMX_EPTP_PWL_5 0x20ull #define VMX_EPTP_PWL_5 0x20ull
#define VMX_EPTP_AD_ENABLE_BIT (1ull << 6) #define VMX_EPTP_AD_ENABLE_BIT (1ull << 6)
/* The EPTP memtype is encoded in bits 2:0, i.e. doesn't need to be shifted. */
#define VMX_EPTP_MT_MASK 0x7ull #define VMX_EPTP_MT_MASK 0x7ull
#define VMX_EPTP_MT_WB 0x6ull #define VMX_EPTP_MT_WB X86_MEMTYPE_WB
#define VMX_EPTP_MT_UC 0x0ull #define VMX_EPTP_MT_UC X86_MEMTYPE_UC
#define VMX_EPT_READABLE_MASK 0x1ull #define VMX_EPT_READABLE_MASK 0x1ull
#define VMX_EPT_WRITABLE_MASK 0x2ull #define VMX_EPT_WRITABLE_MASK 0x2ull
#define VMX_EPT_EXECUTABLE_MASK 0x4ull #define VMX_EPT_EXECUTABLE_MASK 0x4ull
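
A hedged sketch of how the accessors above replace the removed VMX_BASIC_MEM_TYPE_* open coding when sanity-checking a VMX_BASIC value; the function name is illustrative, only vmx_basic_vmcs_mem_type() and X86_MEMTYPE_WB come from the hunks above.

    /* Reject a VMX_BASIC value whose advertised VMCS memory type is not write-back. */
    static bool sketch_vmx_basic_mem_type_ok(u64 vmx_basic)
    {
            return vmx_basic_vmcs_mem_type(vmx_basic) == X86_MEMTYPE_WB;
    }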


@ -439,6 +439,7 @@ struct kvm_sync_regs {
#define KVM_X86_QUIRK_MISC_ENABLE_NO_MWAIT (1 << 4) #define KVM_X86_QUIRK_MISC_ENABLE_NO_MWAIT (1 << 4)
#define KVM_X86_QUIRK_FIX_HYPERCALL_INSN (1 << 5) #define KVM_X86_QUIRK_FIX_HYPERCALL_INSN (1 << 5)
#define KVM_X86_QUIRK_MWAIT_NEVER_UD_FAULTS (1 << 6) #define KVM_X86_QUIRK_MWAIT_NEVER_UD_FAULTS (1 << 6)
#define KVM_X86_QUIRK_SLOT_ZAP_ALL (1 << 7)
#define KVM_STATE_NESTED_FORMAT_VMX 0 #define KVM_STATE_NESTED_FORMAT_VMX 0
#define KVM_STATE_NESTED_FORMAT_SVM 1 #define KVM_STATE_NESTED_FORMAT_SVM 1


@ -55,6 +55,12 @@
#include "mtrr.h" #include "mtrr.h"
static_assert(X86_MEMTYPE_UC == MTRR_TYPE_UNCACHABLE);
static_assert(X86_MEMTYPE_WC == MTRR_TYPE_WRCOMB);
static_assert(X86_MEMTYPE_WT == MTRR_TYPE_WRTHROUGH);
static_assert(X86_MEMTYPE_WP == MTRR_TYPE_WRPROT);
static_assert(X86_MEMTYPE_WB == MTRR_TYPE_WRBACK);
/* arch_phys_wc_add returns an MTRR register index plus this offset. */ /* arch_phys_wc_add returns an MTRR register index plus this offset. */
#define MTRR_TO_PHYS_WC_OFFSET 1000 #define MTRR_TO_PHYS_WC_OFFSET 1000


@ -705,7 +705,7 @@ void kvm_set_cpu_caps(void)
kvm_cpu_cap_init_kvm_defined(CPUID_7_1_EDX, kvm_cpu_cap_init_kvm_defined(CPUID_7_1_EDX,
F(AVX_VNNI_INT8) | F(AVX_NE_CONVERT) | F(PREFETCHITI) | F(AVX_VNNI_INT8) | F(AVX_NE_CONVERT) | F(PREFETCHITI) |
F(AMX_COMPLEX) F(AMX_COMPLEX) | F(AVX10)
); );
kvm_cpu_cap_init_kvm_defined(CPUID_7_2_EDX, kvm_cpu_cap_init_kvm_defined(CPUID_7_2_EDX,
@ -721,6 +721,10 @@ void kvm_set_cpu_caps(void)
SF(SGX1) | SF(SGX2) | SF(SGX_EDECCSSA) SF(SGX1) | SF(SGX2) | SF(SGX_EDECCSSA)
); );
kvm_cpu_cap_init_kvm_defined(CPUID_24_0_EBX,
F(AVX10_128) | F(AVX10_256) | F(AVX10_512)
);
kvm_cpu_cap_mask(CPUID_8000_0001_ECX, kvm_cpu_cap_mask(CPUID_8000_0001_ECX,
F(LAHF_LM) | F(CMP_LEGACY) | 0 /*SVM*/ | 0 /* ExtApicSpace */ | F(LAHF_LM) | F(CMP_LEGACY) | 0 /*SVM*/ | 0 /* ExtApicSpace */ |
F(CR8_LEGACY) | F(ABM) | F(SSE4A) | F(MISALIGNSSE) | F(CR8_LEGACY) | F(ABM) | F(SSE4A) | F(MISALIGNSSE) |
@ -949,7 +953,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
switch (function) { switch (function) {
case 0: case 0:
/* Limited to the highest leaf implemented in KVM. */ /* Limited to the highest leaf implemented in KVM. */
entry->eax = min(entry->eax, 0x1fU); entry->eax = min(entry->eax, 0x24U);
break; break;
case 1: case 1:
cpuid_entry_override(entry, CPUID_1_EDX); cpuid_entry_override(entry, CPUID_1_EDX);
@ -1174,6 +1178,28 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
break; break;
} }
break; break;
case 0x24: {
u8 avx10_version;
if (!kvm_cpu_cap_has(X86_FEATURE_AVX10)) {
entry->eax = entry->ebx = entry->ecx = entry->edx = 0;
break;
}
/*
* The AVX10 version is encoded in EBX[7:0]. Note, the version
* is guaranteed to be >=1 if AVX10 is supported. Note #2, the
* version needs to be captured before overriding EBX features!
*/
avx10_version = min_t(u8, entry->ebx & 0xff, 1);
cpuid_entry_override(entry, CPUID_24_0_EBX);
entry->ebx |= avx10_version;
entry->eax = 0;
entry->ecx = 0;
entry->edx = 0;
break;
}
case KVM_CPUID_SIGNATURE: { case KVM_CPUID_SIGNATURE: {
const u32 *sigptr = (const u32 *)KVM_SIGNATURE; const u32 *sigptr = (const u32 *)KVM_SIGNATURE;
entry->eax = KVM_CPUID_FEATURES; entry->eax = KVM_CPUID_FEATURES;
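
Tying the new 0x24 leaf handling above together: consumers only need EBX[7:0] for the AVX10 version, with the vector-length support bits reported separately via CPUID_24_0_EBX. A trivial, hedged helper for the version field (illustrative name, not code from the tree):

    #include <linux/types.h>

    /* CPUID.0x24.0:EBX[7:0] is the AVX10 version; >= 1 whenever AVX10 is supported. */
    static u8 sketch_avx10_version(u32 leaf_0x24_ebx)
    {
            return leaf_0x24_ebx & 0xff;
    }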


@ -108,7 +108,7 @@ EXPORT_SYMBOL_GPL(kvm_cpu_has_interrupt);
* Read pending interrupt(from non-APIC source) * Read pending interrupt(from non-APIC source)
* vector and intack. * vector and intack.
*/ */
static int kvm_cpu_get_extint(struct kvm_vcpu *v) int kvm_cpu_get_extint(struct kvm_vcpu *v)
{ {
if (!kvm_cpu_has_extint(v)) { if (!kvm_cpu_has_extint(v)) {
WARN_ON(!lapic_in_kernel(v)); WARN_ON(!lapic_in_kernel(v));
@ -131,6 +131,7 @@ static int kvm_cpu_get_extint(struct kvm_vcpu *v)
} else } else
return kvm_pic_read_irq(v->kvm); /* PIC */ return kvm_pic_read_irq(v->kvm); /* PIC */
} }
EXPORT_SYMBOL_GPL(kvm_cpu_get_extint);
/* /*
* Read pending interrupt vector and intack. * Read pending interrupt vector and intack.
@ -141,9 +142,12 @@ int kvm_cpu_get_interrupt(struct kvm_vcpu *v)
if (vector != -1) if (vector != -1)
return vector; /* PIC */ return vector; /* PIC */
return kvm_get_apic_interrupt(v); /* APIC */ vector = kvm_apic_has_interrupt(v); /* APIC */
if (vector != -1)
kvm_apic_ack_interrupt(v, vector);
return vector;
} }
EXPORT_SYMBOL_GPL(kvm_cpu_get_interrupt);
void kvm_inject_pending_timer_irqs(struct kvm_vcpu *vcpu) void kvm_inject_pending_timer_irqs(struct kvm_vcpu *vcpu)
{ {


@ -1944,7 +1944,7 @@ static void start_sw_tscdeadline(struct kvm_lapic *apic)
u64 ns = 0; u64 ns = 0;
ktime_t expire; ktime_t expire;
struct kvm_vcpu *vcpu = apic->vcpu; struct kvm_vcpu *vcpu = apic->vcpu;
unsigned long this_tsc_khz = vcpu->arch.virtual_tsc_khz; u32 this_tsc_khz = vcpu->arch.virtual_tsc_khz;
unsigned long flags; unsigned long flags;
ktime_t now; ktime_t now;
@ -2453,6 +2453,43 @@ void kvm_lapic_set_eoi(struct kvm_vcpu *vcpu)
} }
EXPORT_SYMBOL_GPL(kvm_lapic_set_eoi); EXPORT_SYMBOL_GPL(kvm_lapic_set_eoi);
#define X2APIC_ICR_RESERVED_BITS (GENMASK_ULL(31, 20) | GENMASK_ULL(17, 16) | BIT(13))
int kvm_x2apic_icr_write(struct kvm_lapic *apic, u64 data)
{
if (data & X2APIC_ICR_RESERVED_BITS)
return 1;
/*
* The BUSY bit is reserved on both Intel and AMD in x2APIC mode, but
* only AMD requires it to be zero, Intel essentially just ignores the
* bit. And if IPI virtualization (Intel) or x2AVIC (AMD) is enabled,
* the CPU performs the reserved bits checks, i.e. the underlying CPU
* behavior will "win". Arbitrarily clear the BUSY bit, as there is no
* sane way to provide consistent behavior with respect to hardware.
*/
data &= ~APIC_ICR_BUSY;
kvm_apic_send_ipi(apic, (u32)data, (u32)(data >> 32));
if (kvm_x86_ops.x2apic_icr_is_split) {
kvm_lapic_set_reg(apic, APIC_ICR, data);
kvm_lapic_set_reg(apic, APIC_ICR2, data >> 32);
} else {
kvm_lapic_set_reg64(apic, APIC_ICR, data);
}
trace_kvm_apic_write(APIC_ICR, data);
return 0;
}
static u64 kvm_x2apic_icr_read(struct kvm_lapic *apic)
{
if (kvm_x86_ops.x2apic_icr_is_split)
return (u64)kvm_lapic_get_reg(apic, APIC_ICR) |
(u64)kvm_lapic_get_reg(apic, APIC_ICR2) << 32;
return kvm_lapic_get_reg64(apic, APIC_ICR);
}
/* emulate APIC access in a trap manner */ /* emulate APIC access in a trap manner */
void kvm_apic_write_nodecode(struct kvm_vcpu *vcpu, u32 offset) void kvm_apic_write_nodecode(struct kvm_vcpu *vcpu, u32 offset)
{ {
@ -2470,7 +2507,7 @@ void kvm_apic_write_nodecode(struct kvm_vcpu *vcpu, u32 offset)
* maybe-unecessary write, and both are in the noise anyways. * maybe-unecessary write, and both are in the noise anyways.
*/ */
if (apic_x2apic_mode(apic) && offset == APIC_ICR) if (apic_x2apic_mode(apic) && offset == APIC_ICR)
kvm_x2apic_icr_write(apic, kvm_lapic_get_reg64(apic, APIC_ICR)); WARN_ON_ONCE(kvm_x2apic_icr_write(apic, kvm_x2apic_icr_read(apic)));
else else
kvm_lapic_reg_write(apic, offset, kvm_lapic_get_reg(apic, offset)); kvm_lapic_reg_write(apic, offset, kvm_lapic_get_reg(apic, offset));
} }
@ -2922,14 +2959,13 @@ void kvm_inject_apic_timer_irqs(struct kvm_vcpu *vcpu)
} }
} }
int kvm_get_apic_interrupt(struct kvm_vcpu *vcpu) void kvm_apic_ack_interrupt(struct kvm_vcpu *vcpu, int vector)
{ {
int vector = kvm_apic_has_interrupt(vcpu);
struct kvm_lapic *apic = vcpu->arch.apic; struct kvm_lapic *apic = vcpu->arch.apic;
u32 ppr; u32 ppr;
if (vector == -1) if (WARN_ON_ONCE(vector < 0 || !apic))
return -1; return;
/* /*
* We get here even with APIC virtualization enabled, if doing * We get here even with APIC virtualization enabled, if doing
@ -2957,8 +2993,8 @@ int kvm_get_apic_interrupt(struct kvm_vcpu *vcpu)
__apic_update_ppr(apic, &ppr); __apic_update_ppr(apic, &ppr);
} }
return vector;
} }
EXPORT_SYMBOL_GPL(kvm_apic_ack_interrupt);
static int kvm_apic_state_fixup(struct kvm_vcpu *vcpu, static int kvm_apic_state_fixup(struct kvm_vcpu *vcpu,
struct kvm_lapic_state *s, bool set) struct kvm_lapic_state *s, bool set)
@ -2990,18 +3026,22 @@ static int kvm_apic_state_fixup(struct kvm_vcpu *vcpu,
/* /*
* In x2APIC mode, the LDR is fixed and based on the id. And * In x2APIC mode, the LDR is fixed and based on the id. And
* ICR is internally a single 64-bit register, but needs to be * if the ICR is _not_ split, ICR is internally a single 64-bit
* split to ICR+ICR2 in userspace for backwards compatibility. * register, but needs to be split to ICR+ICR2 in userspace for
* backwards compatibility.
*/ */
if (set) { if (set)
*ldr = kvm_apic_calc_x2apic_ldr(x2apic_id); *ldr = kvm_apic_calc_x2apic_ldr(x2apic_id);
icr = __kvm_lapic_get_reg(s->regs, APIC_ICR) | if (!kvm_x86_ops.x2apic_icr_is_split) {
(u64)__kvm_lapic_get_reg(s->regs, APIC_ICR2) << 32; if (set) {
__kvm_lapic_set_reg64(s->regs, APIC_ICR, icr); icr = __kvm_lapic_get_reg(s->regs, APIC_ICR) |
} else { (u64)__kvm_lapic_get_reg(s->regs, APIC_ICR2) << 32;
icr = __kvm_lapic_get_reg64(s->regs, APIC_ICR); __kvm_lapic_set_reg64(s->regs, APIC_ICR, icr);
__kvm_lapic_set_reg(s->regs, APIC_ICR2, icr >> 32); } else {
icr = __kvm_lapic_get_reg64(s->regs, APIC_ICR);
__kvm_lapic_set_reg(s->regs, APIC_ICR2, icr >> 32);
}
} }
} }
@ -3194,22 +3234,12 @@ int kvm_lapic_set_vapic_addr(struct kvm_vcpu *vcpu, gpa_t vapic_addr)
return 0; return 0;
} }
int kvm_x2apic_icr_write(struct kvm_lapic *apic, u64 data)
{
data &= ~APIC_ICR_BUSY;
kvm_apic_send_ipi(apic, (u32)data, (u32)(data >> 32));
kvm_lapic_set_reg64(apic, APIC_ICR, data);
trace_kvm_apic_write(APIC_ICR, data);
return 0;
}
static int kvm_lapic_msr_read(struct kvm_lapic *apic, u32 reg, u64 *data) static int kvm_lapic_msr_read(struct kvm_lapic *apic, u32 reg, u64 *data)
{ {
u32 low; u32 low;
if (reg == APIC_ICR) { if (reg == APIC_ICR) {
*data = kvm_lapic_get_reg64(apic, APIC_ICR); *data = kvm_x2apic_icr_read(apic);
return 0; return 0;
} }


@ -88,15 +88,14 @@ int kvm_create_lapic(struct kvm_vcpu *vcpu);
void kvm_free_lapic(struct kvm_vcpu *vcpu); void kvm_free_lapic(struct kvm_vcpu *vcpu);
int kvm_apic_has_interrupt(struct kvm_vcpu *vcpu); int kvm_apic_has_interrupt(struct kvm_vcpu *vcpu);
void kvm_apic_ack_interrupt(struct kvm_vcpu *vcpu, int vector);
int kvm_apic_accept_pic_intr(struct kvm_vcpu *vcpu); int kvm_apic_accept_pic_intr(struct kvm_vcpu *vcpu);
int kvm_get_apic_interrupt(struct kvm_vcpu *vcpu);
int kvm_apic_accept_events(struct kvm_vcpu *vcpu); int kvm_apic_accept_events(struct kvm_vcpu *vcpu);
void kvm_lapic_reset(struct kvm_vcpu *vcpu, bool init_event); void kvm_lapic_reset(struct kvm_vcpu *vcpu, bool init_event);
u64 kvm_lapic_get_cr8(struct kvm_vcpu *vcpu); u64 kvm_lapic_get_cr8(struct kvm_vcpu *vcpu);
void kvm_lapic_set_tpr(struct kvm_vcpu *vcpu, unsigned long cr8); void kvm_lapic_set_tpr(struct kvm_vcpu *vcpu, unsigned long cr8);
void kvm_lapic_set_eoi(struct kvm_vcpu *vcpu); void kvm_lapic_set_eoi(struct kvm_vcpu *vcpu);
void kvm_lapic_set_base(struct kvm_vcpu *vcpu, u64 value); void kvm_lapic_set_base(struct kvm_vcpu *vcpu, u64 value);
u64 kvm_lapic_get_base(struct kvm_vcpu *vcpu);
void kvm_recalculate_apic_map(struct kvm *kvm); void kvm_recalculate_apic_map(struct kvm *kvm);
void kvm_apic_set_version(struct kvm_vcpu *vcpu); void kvm_apic_set_version(struct kvm_vcpu *vcpu);
void kvm_apic_after_set_mcg_cap(struct kvm_vcpu *vcpu); void kvm_apic_after_set_mcg_cap(struct kvm_vcpu *vcpu);


@ -223,8 +223,6 @@ static inline u8 permission_fault(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
bool kvm_mmu_may_ignore_guest_pat(void); bool kvm_mmu_may_ignore_guest_pat(void);
int kvm_arch_write_log_dirty(struct kvm_vcpu *vcpu);
int kvm_mmu_post_init_vm(struct kvm *kvm); int kvm_mmu_post_init_vm(struct kvm *kvm);
void kvm_mmu_pre_destroy_vm(struct kvm *kvm); void kvm_mmu_pre_destroy_vm(struct kvm *kvm);


@ -614,32 +614,6 @@ static u64 mmu_spte_get_lockless(u64 *sptep)
return __get_spte_lockless(sptep); return __get_spte_lockless(sptep);
} }
/* Returns the Accessed status of the PTE and resets it at the same time. */
static bool mmu_spte_age(u64 *sptep)
{
u64 spte = mmu_spte_get_lockless(sptep);
if (!is_accessed_spte(spte))
return false;
if (spte_ad_enabled(spte)) {
clear_bit((ffs(shadow_accessed_mask) - 1),
(unsigned long *)sptep);
} else {
/*
* Capture the dirty status of the page, so that it doesn't get
* lost when the SPTE is marked for access tracking.
*/
if (is_writable_pte(spte))
kvm_set_pfn_dirty(spte_to_pfn(spte));
spte = mark_spte_for_access_track(spte);
mmu_spte_update_no_track(sptep, spte);
}
return true;
}
static inline bool is_tdp_mmu_active(struct kvm_vcpu *vcpu) static inline bool is_tdp_mmu_active(struct kvm_vcpu *vcpu)
{ {
return tdp_mmu_enabled && vcpu->arch.mmu->root_role.direct; return tdp_mmu_enabled && vcpu->arch.mmu->root_role.direct;
@ -938,6 +912,7 @@ static struct kvm_memory_slot *gfn_to_memslot_dirty_bitmap(struct kvm_vcpu *vcpu
* in this rmap chain. Otherwise, (rmap_head->val & ~1) points to a struct * in this rmap chain. Otherwise, (rmap_head->val & ~1) points to a struct
* pte_list_desc containing more mappings. * pte_list_desc containing more mappings.
*/ */
#define KVM_RMAP_MANY BIT(0)
/* /*
* Returns the number of pointers in the rmap chain, not counting the new one. * Returns the number of pointers in the rmap chain, not counting the new one.
@ -950,16 +925,16 @@ static int pte_list_add(struct kvm_mmu_memory_cache *cache, u64 *spte,
if (!rmap_head->val) { if (!rmap_head->val) {
rmap_head->val = (unsigned long)spte; rmap_head->val = (unsigned long)spte;
} else if (!(rmap_head->val & 1)) { } else if (!(rmap_head->val & KVM_RMAP_MANY)) {
desc = kvm_mmu_memory_cache_alloc(cache); desc = kvm_mmu_memory_cache_alloc(cache);
desc->sptes[0] = (u64 *)rmap_head->val; desc->sptes[0] = (u64 *)rmap_head->val;
desc->sptes[1] = spte; desc->sptes[1] = spte;
desc->spte_count = 2; desc->spte_count = 2;
desc->tail_count = 0; desc->tail_count = 0;
rmap_head->val = (unsigned long)desc | 1; rmap_head->val = (unsigned long)desc | KVM_RMAP_MANY;
++count; ++count;
} else { } else {
desc = (struct pte_list_desc *)(rmap_head->val & ~1ul); desc = (struct pte_list_desc *)(rmap_head->val & ~KVM_RMAP_MANY);
count = desc->tail_count + desc->spte_count; count = desc->tail_count + desc->spte_count;
/* /*
@ -968,10 +943,10 @@ static int pte_list_add(struct kvm_mmu_memory_cache *cache, u64 *spte,
*/ */
if (desc->spte_count == PTE_LIST_EXT) { if (desc->spte_count == PTE_LIST_EXT) {
desc = kvm_mmu_memory_cache_alloc(cache); desc = kvm_mmu_memory_cache_alloc(cache);
desc->more = (struct pte_list_desc *)(rmap_head->val & ~1ul); desc->more = (struct pte_list_desc *)(rmap_head->val & ~KVM_RMAP_MANY);
desc->spte_count = 0; desc->spte_count = 0;
desc->tail_count = count; desc->tail_count = count;
rmap_head->val = (unsigned long)desc | 1; rmap_head->val = (unsigned long)desc | KVM_RMAP_MANY;
} }
desc->sptes[desc->spte_count++] = spte; desc->sptes[desc->spte_count++] = spte;
} }
@ -982,7 +957,7 @@ static void pte_list_desc_remove_entry(struct kvm *kvm,
struct kvm_rmap_head *rmap_head, struct kvm_rmap_head *rmap_head,
struct pte_list_desc *desc, int i) struct pte_list_desc *desc, int i)
{ {
struct pte_list_desc *head_desc = (struct pte_list_desc *)(rmap_head->val & ~1ul); struct pte_list_desc *head_desc = (struct pte_list_desc *)(rmap_head->val & ~KVM_RMAP_MANY);
int j = head_desc->spte_count - 1; int j = head_desc->spte_count - 1;
/* /*
@ -1011,7 +986,7 @@ static void pte_list_desc_remove_entry(struct kvm *kvm,
if (!head_desc->more) if (!head_desc->more)
rmap_head->val = 0; rmap_head->val = 0;
else else
rmap_head->val = (unsigned long)head_desc->more | 1; rmap_head->val = (unsigned long)head_desc->more | KVM_RMAP_MANY;
mmu_free_pte_list_desc(head_desc); mmu_free_pte_list_desc(head_desc);
} }
@ -1024,13 +999,13 @@ static void pte_list_remove(struct kvm *kvm, u64 *spte,
if (KVM_BUG_ON_DATA_CORRUPTION(!rmap_head->val, kvm)) if (KVM_BUG_ON_DATA_CORRUPTION(!rmap_head->val, kvm))
return; return;
if (!(rmap_head->val & 1)) { if (!(rmap_head->val & KVM_RMAP_MANY)) {
if (KVM_BUG_ON_DATA_CORRUPTION((u64 *)rmap_head->val != spte, kvm)) if (KVM_BUG_ON_DATA_CORRUPTION((u64 *)rmap_head->val != spte, kvm))
return; return;
rmap_head->val = 0; rmap_head->val = 0;
} else { } else {
desc = (struct pte_list_desc *)(rmap_head->val & ~1ul); desc = (struct pte_list_desc *)(rmap_head->val & ~KVM_RMAP_MANY);
while (desc) { while (desc) {
for (i = 0; i < desc->spte_count; ++i) { for (i = 0; i < desc->spte_count; ++i) {
if (desc->sptes[i] == spte) { if (desc->sptes[i] == spte) {
@ -1063,12 +1038,12 @@ static bool kvm_zap_all_rmap_sptes(struct kvm *kvm,
if (!rmap_head->val) if (!rmap_head->val)
return false; return false;
if (!(rmap_head->val & 1)) { if (!(rmap_head->val & KVM_RMAP_MANY)) {
mmu_spte_clear_track_bits(kvm, (u64 *)rmap_head->val); mmu_spte_clear_track_bits(kvm, (u64 *)rmap_head->val);
goto out; goto out;
} }
desc = (struct pte_list_desc *)(rmap_head->val & ~1ul); desc = (struct pte_list_desc *)(rmap_head->val & ~KVM_RMAP_MANY);
for (; desc; desc = next) { for (; desc; desc = next) {
for (i = 0; i < desc->spte_count; i++) for (i = 0; i < desc->spte_count; i++)
@ -1088,10 +1063,10 @@ unsigned int pte_list_count(struct kvm_rmap_head *rmap_head)
if (!rmap_head->val) if (!rmap_head->val)
return 0; return 0;
else if (!(rmap_head->val & 1)) else if (!(rmap_head->val & KVM_RMAP_MANY))
return 1; return 1;
desc = (struct pte_list_desc *)(rmap_head->val & ~1ul); desc = (struct pte_list_desc *)(rmap_head->val & ~KVM_RMAP_MANY);
return desc->tail_count + desc->spte_count; return desc->tail_count + desc->spte_count;
} }
@ -1153,13 +1128,13 @@ static u64 *rmap_get_first(struct kvm_rmap_head *rmap_head,
if (!rmap_head->val) if (!rmap_head->val)
return NULL; return NULL;
if (!(rmap_head->val & 1)) { if (!(rmap_head->val & KVM_RMAP_MANY)) {
iter->desc = NULL; iter->desc = NULL;
sptep = (u64 *)rmap_head->val; sptep = (u64 *)rmap_head->val;
goto out; goto out;
} }
iter->desc = (struct pte_list_desc *)(rmap_head->val & ~1ul); iter->desc = (struct pte_list_desc *)(rmap_head->val & ~KVM_RMAP_MANY);
iter->pos = 0; iter->pos = 0;
sptep = iter->desc->sptes[iter->pos]; sptep = iter->desc->sptes[iter->pos];
out: out:
@ -1307,15 +1282,6 @@ static bool __rmap_clear_dirty(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
return flush; return flush;
} }
/**
* kvm_mmu_write_protect_pt_masked - write protect selected PT level pages
* @kvm: kvm instance
* @slot: slot to protect
* @gfn_offset: start of the BITS_PER_LONG pages we care about
* @mask: indicates which pages we should protect
*
* Used when we do not need to care about huge page mappings.
*/
static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm, static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
struct kvm_memory_slot *slot, struct kvm_memory_slot *slot,
gfn_t gfn_offset, unsigned long mask) gfn_t gfn_offset, unsigned long mask)
@ -1339,16 +1305,6 @@ static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
} }
} }
/**
* kvm_mmu_clear_dirty_pt_masked - clear MMU D-bit for PT level pages, or write
* protect the page if the D-bit isn't supported.
* @kvm: kvm instance
* @slot: slot to clear D-bit
* @gfn_offset: start of the BITS_PER_LONG pages we care about
* @mask: indicates which pages we should clear D-bit
*
* Used for PML to re-log the dirty GPAs after userspace querying dirty_bitmap.
*/
static void kvm_mmu_clear_dirty_pt_masked(struct kvm *kvm, static void kvm_mmu_clear_dirty_pt_masked(struct kvm *kvm,
struct kvm_memory_slot *slot, struct kvm_memory_slot *slot,
gfn_t gfn_offset, unsigned long mask) gfn_t gfn_offset, unsigned long mask)
@ -1372,24 +1328,16 @@ static void kvm_mmu_clear_dirty_pt_masked(struct kvm *kvm,
} }
} }
/**
* kvm_arch_mmu_enable_log_dirty_pt_masked - enable dirty logging for selected
* PT level pages.
*
* It calls kvm_mmu_write_protect_pt_masked to write protect selected pages to
* enable dirty logging for them.
*
* We need to care about huge page mappings: e.g. during dirty logging we may
* have such mappings.
*/
 void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
				struct kvm_memory_slot *slot,
				gfn_t gfn_offset, unsigned long mask)
 {
	/*
-	 * Huge pages are NOT write protected when we start dirty logging in
-	 * initially-all-set mode; must write protect them here so that they
-	 * are split to 4K on the first write.
+	 * If the slot was assumed to be "initially all dirty", write-protect
+	 * huge pages to ensure they are split to 4KiB on the first write (KVM
+	 * dirty logs at 4KiB granularity). If eager page splitting is enabled,
+	 * immediately try to split huge pages, e.g. so that vCPUs don't get
+	 * saddled with the cost of splitting.
	 *
	 * The gfn_offset is guaranteed to be aligned to 64, but the base_gfn
	 * of memslot has no such restriction, so the range can cross two large
@@ -1411,7 +1359,16 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
					       PG_LEVEL_2M);
	}

-	/* Now handle 4K PTEs. */
+	/*
+	 * (Re)Enable dirty logging for all 4KiB SPTEs that map the GFNs in
+	 * mask. If PML is enabled and the GFN doesn't need to be write-
+	 * protected for other reasons, e.g. shadow paging, clear the Dirty bit.
+	 * Otherwise clear the Writable bit.
+	 *
+	 * Note that kvm_mmu_clear_dirty_pt_masked() is called whenever PML is
+	 * enabled but it chooses between clearing the Dirty bit and Writeable
+	 * bit based on the context.
+	 */
	if (kvm_x86_ops.cpu_dirty_log_size)
		kvm_mmu_clear_dirty_pt_masked(kvm, slot, gfn_offset, mask);
	else
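A minimal, self-contained model of the decision made above, assuming only that PML support is indicated by a non-zero cpu_dirty_log_size: with PML the Dirty bit is cleared so the next write is re-logged by hardware, otherwise the Writable bit is cleared so the next write faults. The DEMO_* masks are illustrative, not the real SPTE layout.

#include <stdio.h>

/* Stand-ins for the SPTE bits involved; the real masks live in spte.h. */
#define DEMO_SPTE_WRITABLE (1ULL << 1)
#define DEMO_SPTE_DIRTY    (1ULL << 9)

/* Model of the choice above: PML clears Dirty, shadow/legacy clears Writable. */
static unsigned long long relog_spte(unsigned long long spte, int cpu_dirty_log_size)
{
	if (cpu_dirty_log_size)
		return spte & ~DEMO_SPTE_DIRTY;	/* PML: next write re-logs the GFN */
	return spte & ~DEMO_SPTE_WRITABLE;	/* else: next write faults */
}

int main(void)
{
	unsigned long long spte = DEMO_SPTE_WRITABLE | DEMO_SPTE_DIRTY;

	printf("PML:    %#llx\n", relog_spte(spte, 512));
	printf("no PML: %#llx\n", relog_spte(spte, 0));
	return 0;
}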
@ -1453,18 +1410,12 @@ static bool kvm_vcpu_write_protect_gfn(struct kvm_vcpu *vcpu, u64 gfn)
return kvm_mmu_slot_gfn_write_protect(vcpu->kvm, slot, gfn, PG_LEVEL_4K); return kvm_mmu_slot_gfn_write_protect(vcpu->kvm, slot, gfn, PG_LEVEL_4K);
} }
static bool __kvm_zap_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head, static bool kvm_zap_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
const struct kvm_memory_slot *slot) const struct kvm_memory_slot *slot)
{ {
return kvm_zap_all_rmap_sptes(kvm, rmap_head); return kvm_zap_all_rmap_sptes(kvm, rmap_head);
} }
static bool kvm_zap_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
struct kvm_memory_slot *slot, gfn_t gfn, int level)
{
return __kvm_zap_rmap(kvm, rmap_head, slot);
}
struct slot_rmap_walk_iterator { struct slot_rmap_walk_iterator {
/* input fields. */ /* input fields. */
const struct kvm_memory_slot *slot; const struct kvm_memory_slot *slot;
@ -1513,7 +1464,7 @@ static bool slot_rmap_walk_okay(struct slot_rmap_walk_iterator *iterator)
static void slot_rmap_walk_next(struct slot_rmap_walk_iterator *iterator) static void slot_rmap_walk_next(struct slot_rmap_walk_iterator *iterator)
{ {
while (++iterator->rmap <= iterator->end_rmap) { while (++iterator->rmap <= iterator->end_rmap) {
iterator->gfn += (1UL << KVM_HPAGE_GFN_SHIFT(iterator->level)); iterator->gfn += KVM_PAGES_PER_HPAGE(iterator->level);
if (iterator->rmap->val) if (iterator->rmap->val)
return; return;
@@ -1534,23 +1485,71 @@ static void slot_rmap_walk_next(struct slot_rmap_walk_iterator *iterator)
	     slot_rmap_walk_okay(_iter_);		\
	     slot_rmap_walk_next(_iter_))

-typedef bool (*rmap_handler_t)(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
-			       struct kvm_memory_slot *slot, gfn_t gfn,
-			       int level);
-
-static __always_inline bool kvm_handle_gfn_range(struct kvm *kvm,
-						 struct kvm_gfn_range *range,
-						 rmap_handler_t handler)
-{
-	struct slot_rmap_walk_iterator iterator;
-	bool ret = false;
-
-	for_each_slot_rmap_range(range->slot, PG_LEVEL_4K, KVM_MAX_HUGEPAGE_LEVEL,
-				 range->start, range->end - 1, &iterator)
-		ret |= handler(kvm, iterator.rmap, range->slot, iterator.gfn,
-			       iterator.level);
-
-	return ret;
-}
+/* The return value indicates if tlb flush on all vcpus is needed. */
+typedef bool (*slot_rmaps_handler) (struct kvm *kvm,
+				    struct kvm_rmap_head *rmap_head,
+				    const struct kvm_memory_slot *slot);
+
+static __always_inline bool __walk_slot_rmaps(struct kvm *kvm,
+					      const struct kvm_memory_slot *slot,
+					      slot_rmaps_handler fn,
+					      int start_level, int end_level,
+					      gfn_t start_gfn, gfn_t end_gfn,
+					      bool can_yield, bool flush_on_yield,
+					      bool flush)
+{
+	struct slot_rmap_walk_iterator iterator;
+
+	lockdep_assert_held_write(&kvm->mmu_lock);
+
+	for_each_slot_rmap_range(slot, start_level, end_level, start_gfn,
+				 end_gfn, &iterator) {
+		if (iterator.rmap)
+			flush |= fn(kvm, iterator.rmap, slot);
+
+		if (!can_yield)
+			continue;
+
+		if (need_resched() || rwlock_needbreak(&kvm->mmu_lock)) {
+			if (flush && flush_on_yield) {
+				kvm_flush_remote_tlbs_range(kvm, start_gfn,
+							    iterator.gfn - start_gfn + 1);
+				flush = false;
+			}
+			cond_resched_rwlock_write(&kvm->mmu_lock);
+		}
+	}
+
+	return flush;
+}
static __always_inline bool walk_slot_rmaps(struct kvm *kvm,
const struct kvm_memory_slot *slot,
slot_rmaps_handler fn,
int start_level, int end_level,
bool flush_on_yield)
{
return __walk_slot_rmaps(kvm, slot, fn, start_level, end_level,
slot->base_gfn, slot->base_gfn + slot->npages - 1,
true, flush_on_yield, false);
}
static __always_inline bool walk_slot_rmaps_4k(struct kvm *kvm,
const struct kvm_memory_slot *slot,
slot_rmaps_handler fn,
bool flush_on_yield)
{
return walk_slot_rmaps(kvm, slot, fn, PG_LEVEL_4K, PG_LEVEL_4K, flush_on_yield);
}
static bool __kvm_rmap_zap_gfn_range(struct kvm *kvm,
const struct kvm_memory_slot *slot,
gfn_t start, gfn_t end, bool can_yield,
bool flush)
{
return __walk_slot_rmaps(kvm, slot, kvm_zap_rmap,
PG_LEVEL_4K, KVM_MAX_HUGEPAGE_LEVEL,
start, end - 1, can_yield, true, flush);
} }
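A standalone model of the flush-on-yield bookkeeping in __walk_slot_rmaps() above: any pending flush for the ranges walked so far must be issued before yielding mmu_lock, and whatever is still pending is returned to the caller. The walk itself is faked here; only the accounting pattern is the point, and all names are illustrative.

#include <stdbool.h>
#include <stdio.h>

static bool walk_ranges(int nranges, bool flush_on_yield)
{
	bool flush = false;

	for (int i = 0; i < nranges; i++) {
		flush |= (i % 2 == 0);		/* pretend some ranges were zapped */

		if (i % 3 == 2) {		/* pretend a reschedule is needed */
			if (flush && flush_on_yield) {
				printf("flush ranges [0..%d] before yielding\n", i);
				flush = false;
			}
			/* cond_resched_rwlock_write() would go here */
		}
	}
	return flush;				/* caller flushes whatever is left */
}

int main(void)
{
	printf("residual flush needed: %d\n", walk_ranges(7, true));
	return 0;
}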
bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range) bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
@ -1558,7 +1557,9 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
bool flush = false; bool flush = false;
if (kvm_memslots_have_rmaps(kvm)) if (kvm_memslots_have_rmaps(kvm))
flush = kvm_handle_gfn_range(kvm, range, kvm_zap_rmap); flush = __kvm_rmap_zap_gfn_range(kvm, range->slot,
range->start, range->end,
range->may_block, flush);
if (tdp_mmu_enabled) if (tdp_mmu_enabled)
flush = kvm_tdp_mmu_unmap_gfn_range(kvm, range, flush); flush = kvm_tdp_mmu_unmap_gfn_range(kvm, range, flush);
@ -1570,31 +1571,6 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
return flush; return flush;
} }
static bool kvm_age_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
struct kvm_memory_slot *slot, gfn_t gfn, int level)
{
u64 *sptep;
struct rmap_iterator iter;
int young = 0;
for_each_rmap_spte(rmap_head, &iter, sptep)
young |= mmu_spte_age(sptep);
return young;
}
static bool kvm_test_age_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
struct kvm_memory_slot *slot, gfn_t gfn, int level)
{
u64 *sptep;
struct rmap_iterator iter;
for_each_rmap_spte(rmap_head, &iter, sptep)
if (is_accessed_spte(*sptep))
return true;
return false;
}
#define RMAP_RECYCLE_THRESHOLD 1000 #define RMAP_RECYCLE_THRESHOLD 1000
static void __rmap_add(struct kvm *kvm, static void __rmap_add(struct kvm *kvm,
@ -1629,12 +1605,52 @@ static void rmap_add(struct kvm_vcpu *vcpu, const struct kvm_memory_slot *slot,
__rmap_add(vcpu->kvm, cache, slot, spte, gfn, access); __rmap_add(vcpu->kvm, cache, slot, spte, gfn, access);
} }
static bool kvm_rmap_age_gfn_range(struct kvm *kvm,
struct kvm_gfn_range *range, bool test_only)
{
struct slot_rmap_walk_iterator iterator;
struct rmap_iterator iter;
bool young = false;
u64 *sptep;
for_each_slot_rmap_range(range->slot, PG_LEVEL_4K, KVM_MAX_HUGEPAGE_LEVEL,
range->start, range->end - 1, &iterator) {
for_each_rmap_spte(iterator.rmap, &iter, sptep) {
u64 spte = *sptep;
if (!is_accessed_spte(spte))
continue;
if (test_only)
return true;
if (spte_ad_enabled(spte)) {
clear_bit((ffs(shadow_accessed_mask) - 1),
(unsigned long *)sptep);
} else {
/*
* Capture the dirty status of the page, so that
* it doesn't get lost when the SPTE is marked
* for access tracking.
*/
if (is_writable_pte(spte))
kvm_set_pfn_dirty(spte_to_pfn(spte));
spte = mark_spte_for_access_track(spte);
mmu_spte_update_no_track(sptep, spte);
}
young = true;
}
}
return young;
}
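A standalone model of the two aging paths in kvm_rmap_age_gfn_range() above: SPTEs with A/D bits simply have Accessed cleared, while access-tracked SPTEs record the dirty state first and are then converted. The DEMO_* bits and the simplified "tracking" transform are illustrative, not the real mark_spte_for_access_track() encoding.

#include <stdbool.h>
#include <stdio.h>

#define DEMO_ACCESSED (1ULL << 8)
#define DEMO_WRITABLE (1ULL << 1)

static unsigned long long age_spte(unsigned long long spte, bool ad_enabled,
				   bool *pfn_dirty)
{
	if (ad_enabled)
		return spte & ~DEMO_ACCESSED;

	if (spte & DEMO_WRITABLE)
		*pfn_dirty = true;	/* kvm_set_pfn_dirty() in the real code */
	/* crude stand-in for mark_spte_for_access_track() */
	return spte & ~(DEMO_ACCESSED | DEMO_WRITABLE);
}

int main(void)
{
	bool dirty = false;
	unsigned long long spte = DEMO_ACCESSED | DEMO_WRITABLE;

	printf("A/D:   %#llx\n", age_spte(spte, true, &dirty));
	printf("track: %#llx (dirty=%d)\n", age_spte(spte, false, &dirty), dirty);
	return 0;
}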
bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range) bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
{ {
bool young = false; bool young = false;
if (kvm_memslots_have_rmaps(kvm)) if (kvm_memslots_have_rmaps(kvm))
young = kvm_handle_gfn_range(kvm, range, kvm_age_rmap); young = kvm_rmap_age_gfn_range(kvm, range, false);
if (tdp_mmu_enabled) if (tdp_mmu_enabled)
young |= kvm_tdp_mmu_age_gfn_range(kvm, range); young |= kvm_tdp_mmu_age_gfn_range(kvm, range);
@ -1647,7 +1663,7 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
bool young = false; bool young = false;
if (kvm_memslots_have_rmaps(kvm)) if (kvm_memslots_have_rmaps(kvm))
young = kvm_handle_gfn_range(kvm, range, kvm_test_age_rmap); young = kvm_rmap_age_gfn_range(kvm, range, true);
if (tdp_mmu_enabled) if (tdp_mmu_enabled)
young |= kvm_tdp_mmu_test_age_gfn(kvm, range); young |= kvm_tdp_mmu_test_age_gfn(kvm, range);
@@ -2713,36 +2729,49 @@ void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned long goal_nr_mmu_pages)
	write_unlock(&kvm->mmu_lock);
 }

-int kvm_mmu_unprotect_page(struct kvm *kvm, gfn_t gfn)
+bool __kvm_mmu_unprotect_gfn_and_retry(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
+				       bool always_retry)
 {
-	struct kvm_mmu_page *sp;
+	struct kvm *kvm = vcpu->kvm;
	LIST_HEAD(invalid_list);
-	int r;
+	struct kvm_mmu_page *sp;
+	gpa_t gpa = cr2_or_gpa;
+	bool r = false;
+
+	/*
+	 * Bail early if there aren't any write-protected shadow pages to avoid
+	 * unnecessarily taking mmu_lock lock, e.g. if the gfn is write-tracked
+	 * by a third party. Reading indirect_shadow_pages without holding
+	 * mmu_lock is safe, as this is purely an optimization, i.e. a false
+	 * positive is benign, and a false negative will simply result in KVM
+	 * skipping the unprotect+retry path, which is also an optimization.
+	 */
+	if (!READ_ONCE(kvm->arch.indirect_shadow_pages))
+		goto out;

-	r = 0;
-	write_lock(&kvm->mmu_lock);
-	for_each_gfn_valid_sp_with_gptes(kvm, sp, gfn) {
-		r = 1;
-		kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);
+	if (!vcpu->arch.mmu->root_role.direct) {
+		gpa = kvm_mmu_gva_to_gpa_write(vcpu, cr2_or_gpa, NULL);
+		if (gpa == INVALID_GPA)
+			goto out;
	}
+
+	write_lock(&kvm->mmu_lock);
+	for_each_gfn_valid_sp_with_gptes(kvm, sp, gpa_to_gfn(gpa))
+		kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);
+
+	/*
+	 * Snapshot the result before zapping, as zapping will remove all list
+	 * entries, i.e. checking the list later would yield a false negative.
+	 */
+	r = !list_empty(&invalid_list);
	kvm_mmu_commit_zap_page(kvm, &invalid_list);
	write_unlock(&kvm->mmu_lock);

-	return r;
-}
-
-static int kvm_mmu_unprotect_page_virt(struct kvm_vcpu *vcpu, gva_t gva)
-{
-	gpa_t gpa;
-	int r;
-
-	if (vcpu->arch.mmu->root_role.direct)
-		return 0;
-
-	gpa = kvm_mmu_gva_to_gpa_read(vcpu, gva, NULL);
-
-	r = kvm_mmu_unprotect_page(vcpu->kvm, gpa >> PAGE_SHIFT);
+out:
+	if (r || always_retry) {
+		vcpu->arch.last_retry_eip = kvm_rip_read(vcpu);
+		vcpu->arch.last_retry_addr = cr2_or_gpa;
+	}

	return r;
 }
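A standalone sketch of the retry guard the function above feeds: the RIP and address of the last unprotect+retry are recorded, and a repeat fault on exactly that pair is sent to emulation instead of being retried again (see kvm_mmu_write_protect_fault() further down). The types and helpers here are illustrative, not KVM's.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct retry_state { uint64_t last_eip, last_addr; };

/* True if this fault repeats the exact RIP+address that was already retried. */
static bool should_skip_retry(const struct retry_state *s, uint64_t rip, uint64_t gpa)
{
	return s->last_eip == rip && s->last_addr == gpa;
}

static void record_retry(struct retry_state *s, uint64_t rip, uint64_t gpa)
{
	s->last_eip = rip;
	s->last_addr = gpa;
}

int main(void)
{
	struct retry_state s = { 0, 0 };

	record_retry(&s, 0x401000, 0x12345000);
	printf("same fault again -> emulate: %d\n",
	       should_skip_retry(&s, 0x401000, 0x12345000));
	printf("different RIP    -> retry ok: %d\n",
	       !should_skip_retry(&s, 0x401005, 0x12345000));
	return 0;
}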
@ -2914,10 +2943,8 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
trace_kvm_mmu_set_spte(level, gfn, sptep); trace_kvm_mmu_set_spte(level, gfn, sptep);
} }
-	if (wrprot) {
-		if (write_fault)
-			ret = RET_PF_EMULATE;
-	}
+	if (wrprot && write_fault)
+		ret = RET_PF_WRITE_PROTECTED;
if (flush) if (flush)
kvm_flush_remote_tlbs_gfn(vcpu->kvm, gfn, level); kvm_flush_remote_tlbs_gfn(vcpu->kvm, gfn, level);
@ -4549,7 +4576,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
return RET_PF_RETRY; return RET_PF_RETRY;
if (page_fault_handle_page_track(vcpu, fault)) if (page_fault_handle_page_track(vcpu, fault))
return RET_PF_EMULATE; return RET_PF_WRITE_PROTECTED;
r = fast_page_fault(vcpu, fault); r = fast_page_fault(vcpu, fault);
if (r != RET_PF_INVALID) if (r != RET_PF_INVALID)
@ -4618,8 +4645,6 @@ int kvm_handle_page_fault(struct kvm_vcpu *vcpu, u64 error_code,
if (!flags) { if (!flags) {
trace_kvm_page_fault(vcpu, fault_address, error_code); trace_kvm_page_fault(vcpu, fault_address, error_code);
if (kvm_event_needs_reinjection(vcpu))
kvm_mmu_unprotect_page_virt(vcpu, fault_address);
r = kvm_mmu_page_fault(vcpu, fault_address, error_code, insn, r = kvm_mmu_page_fault(vcpu, fault_address, error_code, insn,
insn_len); insn_len);
} else if (flags & KVM_PV_REASON_PAGE_NOT_PRESENT) { } else if (flags & KVM_PV_REASON_PAGE_NOT_PRESENT) {
@ -4642,7 +4667,7 @@ static int kvm_tdp_mmu_page_fault(struct kvm_vcpu *vcpu,
int r; int r;
if (page_fault_handle_page_track(vcpu, fault)) if (page_fault_handle_page_track(vcpu, fault))
return RET_PF_EMULATE; return RET_PF_WRITE_PROTECTED;
r = fast_page_fault(vcpu, fault); r = fast_page_fault(vcpu, fault);
if (r != RET_PF_INVALID) if (r != RET_PF_INVALID)
@ -4719,6 +4744,7 @@ static int kvm_tdp_map_page(struct kvm_vcpu *vcpu, gpa_t gpa, u64 error_code,
switch (r) { switch (r) {
case RET_PF_FIXED: case RET_PF_FIXED:
case RET_PF_SPURIOUS: case RET_PF_SPURIOUS:
case RET_PF_WRITE_PROTECTED:
return 0; return 0;
case RET_PF_EMULATE: case RET_PF_EMULATE:
@ -5963,6 +5989,106 @@ void kvm_mmu_track_write(struct kvm_vcpu *vcpu, gpa_t gpa, const u8 *new,
write_unlock(&vcpu->kvm->mmu_lock); write_unlock(&vcpu->kvm->mmu_lock);
} }
static bool is_write_to_guest_page_table(u64 error_code)
{
const u64 mask = PFERR_GUEST_PAGE_MASK | PFERR_WRITE_MASK | PFERR_PRESENT_MASK;
return (error_code & mask) == mask;
}
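A small, self-contained illustration of the all-bits-must-be-set check above; the DEMO_PFERR_* values are placeholders, not the real PFERR_* masks from kvm_host.h.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define DEMO_PFERR_PRESENT    (1ULL << 0)
#define DEMO_PFERR_WRITE      (1ULL << 1)
#define DEMO_PFERR_GUEST_PAGE (1ULL << 33)

/* Same shape as is_write_to_guest_page_table(): all three bits must be set. */
static bool demo_is_write_to_guest_page_table(uint64_t error_code)
{
	const uint64_t mask = DEMO_PFERR_GUEST_PAGE | DEMO_PFERR_WRITE | DEMO_PFERR_PRESENT;

	return (error_code & mask) == mask;
}

int main(void)
{
	printf("%d\n", demo_is_write_to_guest_page_table(DEMO_PFERR_GUEST_PAGE |
							 DEMO_PFERR_WRITE |
							 DEMO_PFERR_PRESENT));
	printf("%d\n", demo_is_write_to_guest_page_table(DEMO_PFERR_WRITE |
							 DEMO_PFERR_PRESENT));
	return 0;
}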
static int kvm_mmu_write_protect_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
u64 error_code, int *emulation_type)
{
bool direct = vcpu->arch.mmu->root_role.direct;
/*
* Do not try to unprotect and retry if the vCPU re-faulted on the same
* RIP with the same address that was previously unprotected, as doing
* so will likely put the vCPU into an infinite loop. E.g. if the vCPU uses
* a non-page-table modifying instruction on the PDE that points to the
* instruction, then unprotecting the gfn will unmap the instruction's
* code, i.e. make it impossible for the instruction to ever complete.
*/
if (vcpu->arch.last_retry_eip == kvm_rip_read(vcpu) &&
vcpu->arch.last_retry_addr == cr2_or_gpa)
return RET_PF_EMULATE;
/*
* Reset the unprotect+retry values that guard against infinite loops.
* The values will be refreshed if KVM explicitly unprotects a gfn and
* retries, in all other cases it's safe to retry in the future even if
* the next page fault happens on the same RIP+address.
*/
vcpu->arch.last_retry_eip = 0;
vcpu->arch.last_retry_addr = 0;
/*
* It should be impossible to reach this point with an MMIO cache hit,
* as RET_PF_WRITE_PROTECTED is returned if and only if there's a valid,
* writable memslot, and creating a memslot should invalidate the MMIO
* cache by way of changing the memslot generation. WARN and disallow
* retry if MMIO is detected, as retrying MMIO emulation is pointless
* and could put the vCPU into an infinite loop because the processor
* will keep faulting on the non-existent MMIO address.
*/
if (WARN_ON_ONCE(mmio_info_in_cache(vcpu, cr2_or_gpa, direct)))
return RET_PF_EMULATE;
/*
* Before emulating the instruction, check to see if the access was due
* to a read-only violation while the CPU was walking non-nested NPT
* page tables, i.e. for a direct MMU, for _guest_ page tables in L1.
* If L1 is sharing (a subset of) its page tables with L2, e.g. by
* having nCR3 share lower level page tables with hCR3, then when KVM
* (L0) write-protects the nested NPTs, i.e. npt12 entries, KVM is also
* unknowingly write-protecting L1's guest page tables, which KVM isn't
* shadowing.
*
* Because the CPU (by default) walks NPT page tables using a write
* access (to ensure the CPU can do A/D updates), page walks in L1 can
* trigger write faults for the above case even when L1 isn't modifying
* PTEs. As a result, KVM will unnecessarily emulate (or at least, try
* to emulate) an excessive number of L1 instructions; because L1's MMU
* isn't shadowed by KVM, there is no need to write-protect L1's gPTEs
* and thus no need to emulate in order to guarantee forward progress.
*
* Try to unprotect the gfn, i.e. zap any shadow pages, so that L1 can
* proceed without triggering emulation. If one or more shadow pages
* was zapped, skip emulation and resume L1 to let it natively execute
* the instruction. If no shadow pages were zapped, then the write-
* fault is due to something else entirely, i.e. KVM needs to emulate,
* as resuming the guest will put it into an infinite loop.
*
* Note, this code also applies to Intel CPUs, even though it is *very*
* unlikely that an L1 will share its page tables (IA32/PAE/paging64
* format) with L2's page tables (EPT format).
*
* For indirect MMUs, i.e. if KVM is shadowing the current MMU, try to
* unprotect the gfn and retry if an event is awaiting reinjection. If
* KVM emulates multiple instructions before completing event injection,
* the event could be delayed beyond what is architecturally allowed,
* e.g. KVM could inject an IRQ after the TPR has been raised.
*/
if (((direct && is_write_to_guest_page_table(error_code)) ||
(!direct && kvm_event_needs_reinjection(vcpu))) &&
kvm_mmu_unprotect_gfn_and_retry(vcpu, cr2_or_gpa))
return RET_PF_RETRY;
/*
* The gfn is write-protected, but if KVM detects it's emulating an
* instruction that is unlikely to be used to modify page tables, or if
* emulation fails, KVM can try to unprotect the gfn and let the CPU
* re-execute the instruction that caused the page fault. Do not allow
* retrying an instruction from a nested guest as KVM is only explicitly
* shadowing L1's page tables, i.e. unprotecting something for L1 isn't
* going to magically fix whatever issue caused L2 to fail.
*/
if (!is_guest_mode(vcpu))
*emulation_type |= EMULTYPE_ALLOW_RETRY_PF;
return RET_PF_EMULATE;
}
int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 error_code, int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 error_code,
void *insn, int insn_len) void *insn, int insn_len)
{ {
@ -6008,6 +6134,10 @@ int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 err
if (r < 0) if (r < 0)
return r; return r;
if (r == RET_PF_WRITE_PROTECTED)
r = kvm_mmu_write_protect_fault(vcpu, cr2_or_gpa, error_code,
&emulation_type);
if (r == RET_PF_FIXED) if (r == RET_PF_FIXED)
vcpu->stat.pf_fixed++; vcpu->stat.pf_fixed++;
else if (r == RET_PF_EMULATE) else if (r == RET_PF_EMULATE)
@ -6018,32 +6148,6 @@ int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 err
if (r != RET_PF_EMULATE) if (r != RET_PF_EMULATE)
return 1; return 1;
/*
* Before emulating the instruction, check if the error code
* was due to a RO violation while translating the guest page.
* This can occur when using nested virtualization with nested
* paging in both guests. If true, we simply unprotect the page
* and resume the guest.
*/
if (vcpu->arch.mmu->root_role.direct &&
(error_code & PFERR_NESTED_GUEST_PAGE) == PFERR_NESTED_GUEST_PAGE) {
kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(cr2_or_gpa));
return 1;
}
/*
* vcpu->arch.mmu.page_fault returned RET_PF_EMULATE, but we can still
* optimistically try to just unprotect the page and let the processor
* re-execute the instruction that caused the page fault. Do not allow
* retrying MMIO emulation, as it's not only pointless but could also
* cause us to enter an infinite loop because the processor will keep
* faulting on the non-existent MMIO address. Retrying an instruction
* from a nested guest is also pointless and dangerous as we are only
* explicitly shadowing L1's page tables, i.e. unprotecting something
* for L1 isn't going to magically fix whatever issue cause L2 to fail.
*/
if (!mmio_info_in_cache(vcpu, cr2_or_gpa, direct) && !is_guest_mode(vcpu))
emulation_type |= EMULTYPE_ALLOW_RETRY_PF;
emulate: emulate:
return x86_emulate_instruction(vcpu, cr2_or_gpa, emulation_type, insn, return x86_emulate_instruction(vcpu, cr2_or_gpa, emulation_type, insn,
insn_len); insn_len);
@ -6202,59 +6306,6 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
} }
EXPORT_SYMBOL_GPL(kvm_configure_mmu); EXPORT_SYMBOL_GPL(kvm_configure_mmu);
/* The return value indicates if tlb flush on all vcpus is needed. */
typedef bool (*slot_rmaps_handler) (struct kvm *kvm,
struct kvm_rmap_head *rmap_head,
const struct kvm_memory_slot *slot);
static __always_inline bool __walk_slot_rmaps(struct kvm *kvm,
const struct kvm_memory_slot *slot,
slot_rmaps_handler fn,
int start_level, int end_level,
gfn_t start_gfn, gfn_t end_gfn,
bool flush_on_yield, bool flush)
{
struct slot_rmap_walk_iterator iterator;
lockdep_assert_held_write(&kvm->mmu_lock);
for_each_slot_rmap_range(slot, start_level, end_level, start_gfn,
end_gfn, &iterator) {
if (iterator.rmap)
flush |= fn(kvm, iterator.rmap, slot);
if (need_resched() || rwlock_needbreak(&kvm->mmu_lock)) {
if (flush && flush_on_yield) {
kvm_flush_remote_tlbs_range(kvm, start_gfn,
iterator.gfn - start_gfn + 1);
flush = false;
}
cond_resched_rwlock_write(&kvm->mmu_lock);
}
}
return flush;
}
static __always_inline bool walk_slot_rmaps(struct kvm *kvm,
const struct kvm_memory_slot *slot,
slot_rmaps_handler fn,
int start_level, int end_level,
bool flush_on_yield)
{
return __walk_slot_rmaps(kvm, slot, fn, start_level, end_level,
slot->base_gfn, slot->base_gfn + slot->npages - 1,
flush_on_yield, false);
}
static __always_inline bool walk_slot_rmaps_4k(struct kvm *kvm,
const struct kvm_memory_slot *slot,
slot_rmaps_handler fn,
bool flush_on_yield)
{
return walk_slot_rmaps(kvm, slot, fn, PG_LEVEL_4K, PG_LEVEL_4K, flush_on_yield);
}
static void free_mmu_pages(struct kvm_mmu *mmu) static void free_mmu_pages(struct kvm_mmu *mmu)
{ {
if (!tdp_enabled && mmu->pae_root) if (!tdp_enabled && mmu->pae_root)
@ -6528,9 +6579,8 @@ static bool kvm_rmap_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_e
if (WARN_ON_ONCE(start >= end)) if (WARN_ON_ONCE(start >= end))
continue; continue;
flush = __walk_slot_rmaps(kvm, memslot, __kvm_zap_rmap, flush = __kvm_rmap_zap_gfn_range(kvm, memslot, start,
PG_LEVEL_4K, KVM_MAX_HUGEPAGE_LEVEL, end, true, flush);
start, end - 1, true, flush);
} }
} }
@ -6818,7 +6868,7 @@ static void kvm_shadow_mmu_try_split_huge_pages(struct kvm *kvm,
*/ */
for (level = KVM_MAX_HUGEPAGE_LEVEL; level > target_level; level--) for (level = KVM_MAX_HUGEPAGE_LEVEL; level > target_level; level--)
__walk_slot_rmaps(kvm, slot, shadow_mmu_try_split_huge_pages, __walk_slot_rmaps(kvm, slot, shadow_mmu_try_split_huge_pages,
level, level, start, end - 1, true, false); level, level, start, end - 1, true, true, false);
} }
/* Must be called with the mmu_lock held in write-mode. */ /* Must be called with the mmu_lock held in write-mode. */
@ -6997,10 +7047,42 @@ void kvm_arch_flush_shadow_all(struct kvm *kvm)
kvm_mmu_zap_all(kvm); kvm_mmu_zap_all(kvm);
} }
/*
* Zapping leaf SPTEs with memslot range when a memslot is moved/deleted.
*
* Zapping non-leaf SPTEs, a.k.a. not-last SPTEs, isn't required, worst
* case scenario we'll have unused shadow pages lying around until they
* are recycled due to age or when the VM is destroyed.
*/
static void kvm_mmu_zap_memslot_leafs(struct kvm *kvm, struct kvm_memory_slot *slot)
{
struct kvm_gfn_range range = {
.slot = slot,
.start = slot->base_gfn,
.end = slot->base_gfn + slot->npages,
.may_block = true,
};
write_lock(&kvm->mmu_lock);
if (kvm_unmap_gfn_range(kvm, &range))
kvm_flush_remote_tlbs_memslot(kvm, slot);
write_unlock(&kvm->mmu_lock);
}
static inline bool kvm_memslot_flush_zap_all(struct kvm *kvm)
{
return kvm->arch.vm_type == KVM_X86_DEFAULT_VM &&
kvm_check_has_quirk(kvm, KVM_X86_QUIRK_SLOT_ZAP_ALL);
}
void kvm_arch_flush_shadow_memslot(struct kvm *kvm, void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
struct kvm_memory_slot *slot) struct kvm_memory_slot *slot)
{ {
kvm_mmu_zap_all_fast(kvm); if (kvm_memslot_flush_zap_all(kvm))
kvm_mmu_zap_all_fast(kvm);
else
kvm_mmu_zap_memslot_leafs(kvm, slot);
} }
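A standalone model of the policy selected above: only default VMs that still have KVM_X86_QUIRK_SLOT_ZAP_ALL enabled keep the legacy zap-everything behavior, everything else zaps just the deleted or moved slot's leaf SPTEs. The helper below is illustrative, not kernel code.

#include <stdbool.h>
#include <stdio.h>

static const char *memslot_zap_policy(bool default_vm, bool quirk_enabled)
{
	if (default_vm && quirk_enabled)
		return "zap all roots (kvm_mmu_zap_all_fast)";
	return "zap only the memslot's leaf SPTEs";
}

int main(void)
{
	printf("%s\n", memslot_zap_policy(true, true));
	printf("%s\n", memslot_zap_policy(true, false));	/* quirk disabled by userspace */
	printf("%s\n", memslot_zap_policy(false, true));	/* non-default VM type */
	return 0;
}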
void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen) void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen)


@ -258,6 +258,8 @@ int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault);
* RET_PF_CONTINUE: So far, so good, keep handling the page fault. * RET_PF_CONTINUE: So far, so good, keep handling the page fault.
* RET_PF_RETRY: let CPU fault again on the address. * RET_PF_RETRY: let CPU fault again on the address.
* RET_PF_EMULATE: mmio page fault, emulate the instruction directly. * RET_PF_EMULATE: mmio page fault, emulate the instruction directly.
* RET_PF_WRITE_PROTECTED: the gfn is write-protected, either unprotect the
* gfn and retry, or emulate the instruction directly.
* RET_PF_INVALID: the spte is invalid, let the real page fault path update it. * RET_PF_INVALID: the spte is invalid, let the real page fault path update it.
* RET_PF_FIXED: The faulting entry has been fixed. * RET_PF_FIXED: The faulting entry has been fixed.
* RET_PF_SPURIOUS: The faulting entry was already fixed, e.g. by another vCPU. * RET_PF_SPURIOUS: The faulting entry was already fixed, e.g. by another vCPU.
@ -274,6 +276,7 @@ enum {
RET_PF_CONTINUE = 0, RET_PF_CONTINUE = 0,
RET_PF_RETRY, RET_PF_RETRY,
RET_PF_EMULATE, RET_PF_EMULATE,
RET_PF_WRITE_PROTECTED,
RET_PF_INVALID, RET_PF_INVALID,
RET_PF_FIXED, RET_PF_FIXED,
RET_PF_SPURIOUS, RET_PF_SPURIOUS,
@ -349,8 +352,6 @@ int kvm_mmu_max_mapping_level(struct kvm *kvm,
void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault); void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault);
void disallowed_hugepage_adjust(struct kvm_page_fault *fault, u64 spte, int cur_level); void disallowed_hugepage_adjust(struct kvm_page_fault *fault, u64 spte, int cur_level);
void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp); void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);
void untrack_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp); void untrack_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);


@ -57,6 +57,7 @@
TRACE_DEFINE_ENUM(RET_PF_CONTINUE); TRACE_DEFINE_ENUM(RET_PF_CONTINUE);
TRACE_DEFINE_ENUM(RET_PF_RETRY); TRACE_DEFINE_ENUM(RET_PF_RETRY);
TRACE_DEFINE_ENUM(RET_PF_EMULATE); TRACE_DEFINE_ENUM(RET_PF_EMULATE);
TRACE_DEFINE_ENUM(RET_PF_WRITE_PROTECTED);
TRACE_DEFINE_ENUM(RET_PF_INVALID); TRACE_DEFINE_ENUM(RET_PF_INVALID);
TRACE_DEFINE_ENUM(RET_PF_FIXED); TRACE_DEFINE_ENUM(RET_PF_FIXED);
TRACE_DEFINE_ENUM(RET_PF_SPURIOUS); TRACE_DEFINE_ENUM(RET_PF_SPURIOUS);


@ -646,10 +646,10 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
* really care if it changes underneath us after this point). * really care if it changes underneath us after this point).
*/ */
if (FNAME(gpte_changed)(vcpu, gw, top_level)) if (FNAME(gpte_changed)(vcpu, gw, top_level))
goto out_gpte_changed; return RET_PF_RETRY;
if (WARN_ON_ONCE(!VALID_PAGE(vcpu->arch.mmu->root.hpa))) if (WARN_ON_ONCE(!VALID_PAGE(vcpu->arch.mmu->root.hpa)))
goto out_gpte_changed; return RET_PF_RETRY;
/* /*
* Load a new root and retry the faulting instruction in the extremely * Load a new root and retry the faulting instruction in the extremely
@ -659,7 +659,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
*/ */
if (unlikely(kvm_mmu_is_dummy_root(vcpu->arch.mmu->root.hpa))) { if (unlikely(kvm_mmu_is_dummy_root(vcpu->arch.mmu->root.hpa))) {
kvm_make_request(KVM_REQ_MMU_FREE_OBSOLETE_ROOTS, vcpu); kvm_make_request(KVM_REQ_MMU_FREE_OBSOLETE_ROOTS, vcpu);
goto out_gpte_changed; return RET_PF_RETRY;
} }
for_each_shadow_entry(vcpu, fault->addr, it) { for_each_shadow_entry(vcpu, fault->addr, it) {
@ -674,34 +674,38 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
sp = kvm_mmu_get_child_sp(vcpu, it.sptep, table_gfn, sp = kvm_mmu_get_child_sp(vcpu, it.sptep, table_gfn,
false, access); false, access);
if (sp != ERR_PTR(-EEXIST)) { /*
/* * Synchronize the new page before linking it, as the CPU (KVM)
* We must synchronize the pagetable before linking it * is architecturally disallowed from inserting non-present
* because the guest doesn't need to flush tlb when * entries into the TLB, i.e. the guest isn't required to flush
* the gpte is changed from non-present to present. * the TLB when changing the gPTE from non-present to present.
* Otherwise, the guest may use the wrong mapping. *
* * For PG_LEVEL_4K, kvm_mmu_find_shadow_page() has already
* For PG_LEVEL_4K, kvm_mmu_get_page() has already * synchronized the page via kvm_sync_page().
* synchronized it transiently via kvm_sync_page(). *
* * For higher level pages, which cannot be unsync themselves
* For higher level pagetable, we synchronize it via * but can have unsync children, synchronize via the slower
* the slower mmu_sync_children(). If it needs to * mmu_sync_children(). If KVM needs to drop mmu_lock due to
* break, some progress has been made; return * contention or to reschedule, instruct the caller to retry
* RET_PF_RETRY and retry on the next #PF. * the #PF (mmu_sync_children() ensures forward progress will
* KVM_REQ_MMU_SYNC is not necessary but it * be made).
* expedites the process. */
*/ if (sp != ERR_PTR(-EEXIST) && sp->unsync_children &&
if (sp->unsync_children && mmu_sync_children(vcpu, sp, false))
mmu_sync_children(vcpu, sp, false)) return RET_PF_RETRY;
return RET_PF_RETRY;
}
/* /*
* Verify that the gpte in the page we've just write * Verify that the gpte in the page, which is now either
* protected is still there. * write-protected or unsync, wasn't modified between the fault
* and acquiring mmu_lock. This needs to be done even when
* reusing an existing shadow page to ensure the information
* gathered by the walker matches the information stored in the
* shadow page (which could have been modified by a different
* vCPU even if the page was already linked). Holding mmu_lock
* prevents the shadow page from changing after this point.
*/ */
if (FNAME(gpte_changed)(vcpu, gw, it.level - 1)) if (FNAME(gpte_changed)(vcpu, gw, it.level - 1))
goto out_gpte_changed; return RET_PF_RETRY;
if (sp != ERR_PTR(-EEXIST)) if (sp != ERR_PTR(-EEXIST))
link_shadow_page(vcpu, it.sptep, sp); link_shadow_page(vcpu, it.sptep, sp);
@ -755,9 +759,6 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
FNAME(pte_prefetch)(vcpu, gw, it.sptep); FNAME(pte_prefetch)(vcpu, gw, it.sptep);
return ret; return ret;
out_gpte_changed:
return RET_PF_RETRY;
} }
/* /*
@ -805,7 +806,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
if (page_fault_handle_page_track(vcpu, fault)) { if (page_fault_handle_page_track(vcpu, fault)) {
shadow_page_table_clear_flood(vcpu, fault->addr); shadow_page_table_clear_flood(vcpu, fault->addr);
return RET_PF_EMULATE; return RET_PF_WRITE_PROTECTED;
} }
r = mmu_topup_memory_caches(vcpu, true); r = mmu_topup_memory_caches(vcpu, true);


@ -1046,10 +1046,8 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
* protected, emulation is needed. If the emulation was skipped, * protected, emulation is needed. If the emulation was skipped,
* the vCPU would have the same fault again. * the vCPU would have the same fault again.
*/ */
-	if (wrprot) {
-		if (fault->write)
-			ret = RET_PF_EMULATE;
-	}
+	if (wrprot && fault->write)
+		ret = RET_PF_WRITE_PROTECTED;
/* If a MMIO SPTE is installed, the MMIO will need to be emulated. */ /* If a MMIO SPTE is installed, the MMIO will need to be emulated. */
if (unlikely(is_mmio_spte(vcpu->kvm, new_spte))) { if (unlikely(is_mmio_spte(vcpu->kvm, new_spte))) {


@ -17,6 +17,7 @@ enum kvm_only_cpuid_leafs {
CPUID_8000_0007_EDX, CPUID_8000_0007_EDX,
CPUID_8000_0022_EAX, CPUID_8000_0022_EAX,
CPUID_7_2_EDX, CPUID_7_2_EDX,
CPUID_24_0_EBX,
NR_KVM_CPU_CAPS, NR_KVM_CPU_CAPS,
NKVMCAPINTS = NR_KVM_CPU_CAPS - NCAPINTS, NKVMCAPINTS = NR_KVM_CPU_CAPS - NCAPINTS,
@ -46,6 +47,7 @@ enum kvm_only_cpuid_leafs {
#define X86_FEATURE_AVX_NE_CONVERT KVM_X86_FEATURE(CPUID_7_1_EDX, 5) #define X86_FEATURE_AVX_NE_CONVERT KVM_X86_FEATURE(CPUID_7_1_EDX, 5)
#define X86_FEATURE_AMX_COMPLEX KVM_X86_FEATURE(CPUID_7_1_EDX, 8) #define X86_FEATURE_AMX_COMPLEX KVM_X86_FEATURE(CPUID_7_1_EDX, 8)
#define X86_FEATURE_PREFETCHITI KVM_X86_FEATURE(CPUID_7_1_EDX, 14) #define X86_FEATURE_PREFETCHITI KVM_X86_FEATURE(CPUID_7_1_EDX, 14)
#define X86_FEATURE_AVX10 KVM_X86_FEATURE(CPUID_7_1_EDX, 19)
/* Intel-defined sub-features, CPUID level 0x00000007:2 (EDX) */ /* Intel-defined sub-features, CPUID level 0x00000007:2 (EDX) */
#define X86_FEATURE_INTEL_PSFD KVM_X86_FEATURE(CPUID_7_2_EDX, 0) #define X86_FEATURE_INTEL_PSFD KVM_X86_FEATURE(CPUID_7_2_EDX, 0)
@ -55,6 +57,11 @@ enum kvm_only_cpuid_leafs {
#define KVM_X86_FEATURE_BHI_CTRL KVM_X86_FEATURE(CPUID_7_2_EDX, 4) #define KVM_X86_FEATURE_BHI_CTRL KVM_X86_FEATURE(CPUID_7_2_EDX, 4)
#define X86_FEATURE_MCDT_NO KVM_X86_FEATURE(CPUID_7_2_EDX, 5) #define X86_FEATURE_MCDT_NO KVM_X86_FEATURE(CPUID_7_2_EDX, 5)
/* Intel-defined sub-features, CPUID level 0x00000024:0 (EBX) */
#define X86_FEATURE_AVX10_128 KVM_X86_FEATURE(CPUID_24_0_EBX, 16)
#define X86_FEATURE_AVX10_256 KVM_X86_FEATURE(CPUID_24_0_EBX, 17)
#define X86_FEATURE_AVX10_512 KVM_X86_FEATURE(CPUID_24_0_EBX, 18)
/* CPUID level 0x80000007 (EDX). */ /* CPUID level 0x80000007 (EDX). */
#define KVM_X86_FEATURE_CONSTANT_TSC KVM_X86_FEATURE(CPUID_8000_0007_EDX, 8) #define KVM_X86_FEATURE_CONSTANT_TSC KVM_X86_FEATURE(CPUID_8000_0007_EDX, 8)
@ -90,6 +97,7 @@ static const struct cpuid_reg reverse_cpuid[] = {
[CPUID_8000_0021_EAX] = {0x80000021, 0, CPUID_EAX}, [CPUID_8000_0021_EAX] = {0x80000021, 0, CPUID_EAX},
[CPUID_8000_0022_EAX] = {0x80000022, 0, CPUID_EAX}, [CPUID_8000_0022_EAX] = {0x80000022, 0, CPUID_EAX},
[CPUID_7_2_EDX] = { 7, 2, CPUID_EDX}, [CPUID_7_2_EDX] = { 7, 2, CPUID_EDX},
[CPUID_24_0_EBX] = { 0x24, 0, CPUID_EBX},
}; };
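A tiny standalone model of what the reverse-CPUID entry added above encodes: the synthetic CPUID_24_0_EBX word maps feature bits back to CPUID leaf 0x24, sub-leaf 0, register EBX. The demo types mirror, but are not, KVM's struct cpuid_reg.

#include <stdint.h>
#include <stdio.h>

struct demo_cpuid_reg { uint32_t function, index; int reg; };

enum { DEMO_CPUID_EBX = 1 };

static const struct demo_cpuid_reg demo_cpuid_24_0_ebx = {
	.function = 0x24, .index = 0, .reg = DEMO_CPUID_EBX,
};

int main(void)
{
	printf("AVX10 sub-features live in CPUID.(EAX=%#x,ECX=%u), register %d\n",
	       demo_cpuid_24_0_ebx.function, demo_cpuid_24_0_ebx.index,
	       demo_cpuid_24_0_ebx.reg);
	return 0;
}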
/* /*


@@ -624,17 +624,31 @@ int emulator_leave_smm(struct x86_emulate_ctxt *ctxt)
 #endif

	/*
-	 * Give leave_smm() a chance to make ISA-specific changes to the vCPU
-	 * state (e.g. enter guest mode) before loading state from the SMM
-	 * state-save area.
+	 * FIXME: When resuming L2 (a.k.a. guest mode), the transition to guest
+	 * mode should happen _after_ loading state from SMRAM. However, KVM
+	 * piggybacks the nested VM-Enter flows (which is wrong for many other
+	 * reasons), and so nSVM/nVMX would clobber state that is loaded from
+	 * SMRAM and from the VMCS/VMCB.
	 */
	if (kvm_x86_call(leave_smm)(vcpu, &smram))
		return X86EMUL_UNHANDLEABLE;

 #ifdef CONFIG_X86_64
	if (guest_cpuid_has(vcpu, X86_FEATURE_LM))
-		return rsm_load_state_64(ctxt, &smram.smram64);
+		ret = rsm_load_state_64(ctxt, &smram.smram64);
	else
 #endif
-		return rsm_load_state_32(ctxt, &smram.smram32);
+		ret = rsm_load_state_32(ctxt, &smram.smram32);
+
+	/*
+	 * If RSM fails and triggers shutdown, architecturally the shutdown
+	 * occurs *before* the transition to guest mode. But due to KVM's
+	 * flawed handling of RSM to L2 (see above), the vCPU may already be
+	 * in_guest_mode(). Force the vCPU out of guest mode before delivering
+	 * the shutdown, so that L1 enters shutdown instead of seeing a VM-Exit
+	 * that architecturally shouldn't be possible.
+	 */
+	if (ret != X86EMUL_CONTINUE && is_guest_mode(vcpu))
+		kvm_leave_nested(vcpu);
+
+	return ret;
 }


@ -1693,8 +1693,8 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
return -EINVAL; return -EINVAL;
ret = -ENOMEM; ret = -ENOMEM;
ctl = kzalloc(sizeof(*ctl), GFP_KERNEL_ACCOUNT); ctl = kzalloc(sizeof(*ctl), GFP_KERNEL);
save = kzalloc(sizeof(*save), GFP_KERNEL_ACCOUNT); save = kzalloc(sizeof(*save), GFP_KERNEL);
if (!ctl || !save) if (!ctl || !save)
goto out_free; goto out_free;


@ -573,7 +573,7 @@ static void __svm_write_tsc_multiplier(u64 multiplier)
static __always_inline struct sev_es_save_area *sev_es_host_save_area(struct svm_cpu_data *sd) static __always_inline struct sev_es_save_area *sev_es_host_save_area(struct svm_cpu_data *sd)
{ {
return page_address(sd->save_area) + 0x400; return &sd->save_area->host_sev_es_save;
} }
static inline void kvm_cpu_svm_disable(void) static inline void kvm_cpu_svm_disable(void)
@ -592,14 +592,14 @@ static inline void kvm_cpu_svm_disable(void)
} }
} }
static void svm_emergency_disable(void) static void svm_emergency_disable_virtualization_cpu(void)
{ {
kvm_rebooting = true; kvm_rebooting = true;
kvm_cpu_svm_disable(); kvm_cpu_svm_disable();
} }
static void svm_hardware_disable(void) static void svm_disable_virtualization_cpu(void)
{ {
/* Make sure we clean up behind us */ /* Make sure we clean up behind us */
if (tsc_scaling) if (tsc_scaling)
@ -610,7 +610,7 @@ static void svm_hardware_disable(void)
amd_pmu_disable_virt(); amd_pmu_disable_virt();
} }
static int svm_hardware_enable(void) static int svm_enable_virtualization_cpu(void)
{ {
struct svm_cpu_data *sd; struct svm_cpu_data *sd;
@ -696,7 +696,7 @@ static void svm_cpu_uninit(int cpu)
return; return;
kfree(sd->sev_vmcbs); kfree(sd->sev_vmcbs);
__free_page(sd->save_area); __free_page(__sme_pa_to_page(sd->save_area_pa));
sd->save_area_pa = 0; sd->save_area_pa = 0;
sd->save_area = NULL; sd->save_area = NULL;
} }
@ -704,23 +704,24 @@ static void svm_cpu_uninit(int cpu)
static int svm_cpu_init(int cpu) static int svm_cpu_init(int cpu)
{ {
struct svm_cpu_data *sd = per_cpu_ptr(&svm_data, cpu); struct svm_cpu_data *sd = per_cpu_ptr(&svm_data, cpu);
struct page *save_area_page;
int ret = -ENOMEM; int ret = -ENOMEM;
memset(sd, 0, sizeof(struct svm_cpu_data)); memset(sd, 0, sizeof(struct svm_cpu_data));
sd->save_area = snp_safe_alloc_page_node(cpu_to_node(cpu), GFP_KERNEL); save_area_page = snp_safe_alloc_page_node(cpu_to_node(cpu), GFP_KERNEL);
if (!sd->save_area) if (!save_area_page)
return ret; return ret;
ret = sev_cpu_init(sd); ret = sev_cpu_init(sd);
if (ret) if (ret)
goto free_save_area; goto free_save_area;
sd->save_area_pa = __sme_page_pa(sd->save_area); sd->save_area = page_address(save_area_page);
sd->save_area_pa = __sme_page_pa(save_area_page);
return 0; return 0;
free_save_area: free_save_area:
__free_page(sd->save_area); __free_page(save_area_page);
sd->save_area = NULL;
return ret; return ret;
} }
@ -1124,8 +1125,7 @@ static void svm_hardware_unsetup(void)
for_each_possible_cpu(cpu) for_each_possible_cpu(cpu)
svm_cpu_uninit(cpu); svm_cpu_uninit(cpu);
__free_pages(pfn_to_page(iopm_base >> PAGE_SHIFT), __free_pages(__sme_pa_to_page(iopm_base), get_order(IOPM_SIZE));
get_order(IOPM_SIZE));
iopm_base = 0; iopm_base = 0;
} }
@ -1301,7 +1301,7 @@ static void init_vmcb(struct kvm_vcpu *vcpu)
if (!kvm_hlt_in_guest(vcpu->kvm)) if (!kvm_hlt_in_guest(vcpu->kvm))
svm_set_intercept(svm, INTERCEPT_HLT); svm_set_intercept(svm, INTERCEPT_HLT);
control->iopm_base_pa = __sme_set(iopm_base); control->iopm_base_pa = iopm_base;
control->msrpm_base_pa = __sme_set(__pa(svm->msrpm)); control->msrpm_base_pa = __sme_set(__pa(svm->msrpm));
control->int_ctl = V_INTR_MASKING_MASK; control->int_ctl = V_INTR_MASKING_MASK;
@ -1503,7 +1503,7 @@ static void svm_vcpu_free(struct kvm_vcpu *vcpu)
sev_free_vcpu(vcpu); sev_free_vcpu(vcpu);
__free_page(pfn_to_page(__sme_clr(svm->vmcb01.pa) >> PAGE_SHIFT)); __free_page(__sme_pa_to_page(svm->vmcb01.pa));
__free_pages(virt_to_page(svm->msrpm), get_order(MSRPM_SIZE)); __free_pages(virt_to_page(svm->msrpm), get_order(MSRPM_SIZE));
} }
@ -1533,7 +1533,7 @@ static void svm_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
* TSC_AUX is always virtualized for SEV-ES guests when the feature is * TSC_AUX is always virtualized for SEV-ES guests when the feature is
* available. The user return MSR support is not required in this case * available. The user return MSR support is not required in this case
* because TSC_AUX is restored on #VMEXIT from the host save area * because TSC_AUX is restored on #VMEXIT from the host save area
* (which has been initialized in svm_hardware_enable()). * (which has been initialized in svm_enable_virtualization_cpu()).
*/ */
if (likely(tsc_aux_uret_slot >= 0) && if (likely(tsc_aux_uret_slot >= 0) &&
(!boot_cpu_has(X86_FEATURE_V_TSC_AUX) || !sev_es_guest(vcpu->kvm))) (!boot_cpu_has(X86_FEATURE_V_TSC_AUX) || !sev_es_guest(vcpu->kvm)))
@ -2825,17 +2825,17 @@ static int efer_trap(struct kvm_vcpu *vcpu)
return kvm_complete_insn_gp(vcpu, ret); return kvm_complete_insn_gp(vcpu, ret);
} }
static int svm_get_msr_feature(struct kvm_msr_entry *msr) static int svm_get_feature_msr(u32 msr, u64 *data)
{ {
msr->data = 0; *data = 0;
switch (msr->index) { switch (msr) {
case MSR_AMD64_DE_CFG: case MSR_AMD64_DE_CFG:
if (cpu_feature_enabled(X86_FEATURE_LFENCE_RDTSC)) if (cpu_feature_enabled(X86_FEATURE_LFENCE_RDTSC))
msr->data |= MSR_AMD64_DE_CFG_LFENCE_SERIALIZE; *data |= MSR_AMD64_DE_CFG_LFENCE_SERIALIZE;
break; break;
default: default:
return KVM_MSR_RET_INVALID; return KVM_MSR_RET_UNSUPPORTED;
} }
return 0; return 0;
@ -3144,7 +3144,7 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
* feature is available. The user return MSR support is not * feature is available. The user return MSR support is not
* required in this case because TSC_AUX is restored on #VMEXIT * required in this case because TSC_AUX is restored on #VMEXIT
* from the host save area (which has been initialized in * from the host save area (which has been initialized in
* svm_hardware_enable()). * svm_enable_virtualization_cpu()).
*/ */
if (boot_cpu_has(X86_FEATURE_V_TSC_AUX) && sev_es_guest(vcpu->kvm)) if (boot_cpu_has(X86_FEATURE_V_TSC_AUX) && sev_es_guest(vcpu->kvm))
break; break;
@ -3191,18 +3191,21 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
kvm_pr_unimpl_wrmsr(vcpu, ecx, data); kvm_pr_unimpl_wrmsr(vcpu, ecx, data);
break; break;
case MSR_AMD64_DE_CFG: { case MSR_AMD64_DE_CFG: {
struct kvm_msr_entry msr_entry; u64 supported_de_cfg;
msr_entry.index = msr->index; if (svm_get_feature_msr(ecx, &supported_de_cfg))
if (svm_get_msr_feature(&msr_entry))
return 1; return 1;
/* Check the supported bits */ if (data & ~supported_de_cfg)
if (data & ~msr_entry.data)
return 1; return 1;
/* Don't allow the guest to change a bit, #GP */ /*
if (!msr->host_initiated && (data ^ msr_entry.data)) * Don't let the guest change the host-programmed value. The
* MSR is very model specific, i.e. contains multiple bits that
* are completely unknown to KVM, and the one bit known to KVM
* is simply a reflection of hardware capabilities.
*/
if (!msr->host_initiated && data != svm->msr_decfg)
return 1; return 1;
svm->msr_decfg = data; svm->msr_decfg = data;
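A standalone model of the DE_CFG write policy above, assuming nothing beyond what the diff shows: writes may not set bits outside the supported mask, and non-host-initiated writes may not change the current value at all. All names are illustrative.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define DEMO_DE_CFG_LFENCE_SERIALIZE (1ULL << 1)

static bool de_cfg_write_ok(uint64_t supported, uint64_t current_val,
			    uint64_t data, bool host_initiated)
{
	if (data & ~supported)
		return false;			/* unsupported bits set */
	if (!host_initiated && data != current_val)
		return false;			/* guest may not change the value */
	return true;
}

int main(void)
{
	uint64_t supported = DEMO_DE_CFG_LFENCE_SERIALIZE;

	printf("%d\n", de_cfg_write_ok(supported, supported, supported, false));	/* ok */
	printf("%d\n", de_cfg_write_ok(supported, supported, 0, false));		/* guest flips bit */
	printf("%d\n", de_cfg_write_ok(supported, supported, 0, true));			/* host may */
	return 0;
}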
@@ -4156,12 +4159,21 @@ static int svm_vcpu_pre_run(struct kvm_vcpu *vcpu)

 static fastpath_t svm_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
 {
+	struct vcpu_svm *svm = to_svm(vcpu);
+
	if (is_guest_mode(vcpu))
		return EXIT_FASTPATH_NONE;

-	if (to_svm(vcpu)->vmcb->control.exit_code == SVM_EXIT_MSR &&
-	    to_svm(vcpu)->vmcb->control.exit_info_1)
+	switch (svm->vmcb->control.exit_code) {
+	case SVM_EXIT_MSR:
+		if (!svm->vmcb->control.exit_info_1)
+			break;
		return handle_fastpath_set_msr_irqoff(vcpu);
+	case SVM_EXIT_HLT:
+		return handle_fastpath_hlt(vcpu);
+	default:
+		break;
+	}

	return EXIT_FASTPATH_NONE;
 }
@ -4992,8 +5004,9 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
.check_processor_compatibility = svm_check_processor_compat, .check_processor_compatibility = svm_check_processor_compat,
.hardware_unsetup = svm_hardware_unsetup, .hardware_unsetup = svm_hardware_unsetup,
.hardware_enable = svm_hardware_enable, .enable_virtualization_cpu = svm_enable_virtualization_cpu,
.hardware_disable = svm_hardware_disable, .disable_virtualization_cpu = svm_disable_virtualization_cpu,
.emergency_disable_virtualization_cpu = svm_emergency_disable_virtualization_cpu,
.has_emulated_msr = svm_has_emulated_msr, .has_emulated_msr = svm_has_emulated_msr,
.vcpu_create = svm_vcpu_create, .vcpu_create = svm_vcpu_create,
@ -5011,7 +5024,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
.vcpu_unblocking = avic_vcpu_unblocking, .vcpu_unblocking = avic_vcpu_unblocking,
.update_exception_bitmap = svm_update_exception_bitmap, .update_exception_bitmap = svm_update_exception_bitmap,
.get_msr_feature = svm_get_msr_feature, .get_feature_msr = svm_get_feature_msr,
.get_msr = svm_get_msr, .get_msr = svm_get_msr,
.set_msr = svm_set_msr, .set_msr = svm_set_msr,
.get_segment_base = svm_get_segment_base, .get_segment_base = svm_get_segment_base,
@ -5062,6 +5075,8 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
.enable_nmi_window = svm_enable_nmi_window, .enable_nmi_window = svm_enable_nmi_window,
.enable_irq_window = svm_enable_irq_window, .enable_irq_window = svm_enable_irq_window,
.update_cr8_intercept = svm_update_cr8_intercept, .update_cr8_intercept = svm_update_cr8_intercept,
.x2apic_icr_is_split = true,
.set_virtual_apic_mode = avic_refresh_virtual_apic_mode, .set_virtual_apic_mode = avic_refresh_virtual_apic_mode,
.refresh_apicv_exec_ctrl = avic_refresh_apicv_exec_ctrl, .refresh_apicv_exec_ctrl = avic_refresh_apicv_exec_ctrl,
.apicv_post_state_restore = avic_apicv_post_state_restore, .apicv_post_state_restore = avic_apicv_post_state_restore,
@ -5266,7 +5281,7 @@ static __init int svm_hardware_setup(void)
iopm_va = page_address(iopm_pages); iopm_va = page_address(iopm_pages);
memset(iopm_va, 0xff, PAGE_SIZE * (1 << order)); memset(iopm_va, 0xff, PAGE_SIZE * (1 << order));
iopm_base = page_to_pfn(iopm_pages) << PAGE_SHIFT; iopm_base = __sme_page_pa(iopm_pages);
init_msrpm_offsets(); init_msrpm_offsets();
@ -5425,8 +5440,6 @@ static struct kvm_x86_init_ops svm_init_ops __initdata = {
static void __svm_exit(void) static void __svm_exit(void)
{ {
kvm_x86_vendor_exit(); kvm_x86_vendor_exit();
cpu_emergency_unregister_virt_callback(svm_emergency_disable);
} }
static int __init svm_init(void) static int __init svm_init(void)
@ -5442,8 +5455,6 @@ static int __init svm_init(void)
if (r) if (r)
return r; return r;
cpu_emergency_register_virt_callback(svm_emergency_disable);
/* /*
* Common KVM initialization _must_ come last, after this, /dev/kvm is * Common KVM initialization _must_ come last, after this, /dev/kvm is
* exposed to userspace! * exposed to userspace!


@ -25,7 +25,21 @@
#include "cpuid.h" #include "cpuid.h"
#include "kvm_cache_regs.h" #include "kvm_cache_regs.h"
#define __sme_page_pa(x) __sme_set(page_to_pfn(x) << PAGE_SHIFT) /*
* Helpers to convert to/from physical addresses for pages whose address is
* consumed directly by hardware. Even though it's a physical address, SVM
* often restricts the address to the natural width, hence 'unsigned long'
* instead of 'hpa_t'.
*/
static inline unsigned long __sme_page_pa(struct page *page)
{
return __sme_set(page_to_pfn(page) << PAGE_SHIFT);
}
static inline struct page *__sme_pa_to_page(unsigned long pa)
{
return pfn_to_page(__sme_clr(pa) >> PAGE_SHIFT);
}
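A standalone model of the round trip performed by the helpers above, with a made-up C-bit position: the SME mask is ORed in when producing the hardware-visible physical address and stripped again before converting back to a pfn/page. Values are illustrative only.

#include <stdint.h>
#include <stdio.h>

#define DEMO_SME_MASK   (1ULL << 47)
#define DEMO_PAGE_SHIFT 12

static uint64_t demo_sme_page_pa(uint64_t pfn)
{
	return (pfn << DEMO_PAGE_SHIFT) | DEMO_SME_MASK;	/* __sme_set() analogue */
}

static uint64_t demo_sme_pa_to_pfn(uint64_t pa)
{
	return (pa & ~DEMO_SME_MASK) >> DEMO_PAGE_SHIFT;	/* __sme_clr() analogue */
}

int main(void)
{
	uint64_t pa = demo_sme_page_pa(0x1234);

	printf("pa with C-bit:  %#llx\n", (unsigned long long)pa);
	printf("round-trip pfn: %#llx\n", (unsigned long long)demo_sme_pa_to_pfn(pa));
	return 0;
}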
#define IOPM_SIZE PAGE_SIZE * 3 #define IOPM_SIZE PAGE_SIZE * 3
#define MSRPM_SIZE PAGE_SIZE * 2 #define MSRPM_SIZE PAGE_SIZE * 2
@ -321,7 +335,7 @@ struct svm_cpu_data {
u32 next_asid; u32 next_asid;
u32 min_asid; u32 min_asid;
struct page *save_area; struct vmcb *save_area;
unsigned long save_area_pa; unsigned long save_area_pa;
struct vmcb *current_vmcb; struct vmcb *current_vmcb;


@ -209,10 +209,8 @@ SYM_FUNC_START(__svm_vcpu_run)
7: vmload %_ASM_AX 7: vmload %_ASM_AX
8: 8:
#ifdef CONFIG_MITIGATION_RETPOLINE
/* IMPORTANT: Stuff the RSB immediately after VM-Exit, before RET! */ /* IMPORTANT: Stuff the RSB immediately after VM-Exit, before RET! */
FILL_RETURN_BUFFER %_ASM_AX, RSB_CLEAR_LOOPS, X86_FEATURE_RETPOLINE FILL_RETURN_BUFFER %_ASM_AX, RSB_CLEAR_LOOPS, X86_FEATURE_RSB_VMEXIT
#endif
/* Clobbers RAX, RCX, RDX. */ /* Clobbers RAX, RCX, RDX. */
RESTORE_HOST_SPEC_CTRL RESTORE_HOST_SPEC_CTRL
@ -348,10 +346,8 @@ SYM_FUNC_START(__svm_sev_es_vcpu_run)
2: cli 2: cli
#ifdef CONFIG_MITIGATION_RETPOLINE
/* IMPORTANT: Stuff the RSB immediately after VM-Exit, before RET! */ /* IMPORTANT: Stuff the RSB immediately after VM-Exit, before RET! */
FILL_RETURN_BUFFER %rax, RSB_CLEAR_LOOPS, X86_FEATURE_RETPOLINE FILL_RETURN_BUFFER %rax, RSB_CLEAR_LOOPS, X86_FEATURE_RSB_VMEXIT
#endif
/* Clobbers RAX, RCX, RDX, consumes RDI (@svm) and RSI (@spec_ctrl_intercepted). */ /* Clobbers RAX, RCX, RDX, consumes RDI (@svm) and RSI (@spec_ctrl_intercepted). */
RESTORE_HOST_SPEC_CTRL RESTORE_HOST_SPEC_CTRL


@ -54,9 +54,7 @@ struct nested_vmx_msrs {
}; };
struct vmcs_config { struct vmcs_config {
int size; u64 basic;
u32 basic_cap;
u32 revision_id;
u32 pin_based_exec_ctrl; u32 pin_based_exec_ctrl;
u32 cpu_based_exec_ctrl; u32 cpu_based_exec_ctrl;
u32 cpu_based_2nd_exec_ctrl; u32 cpu_based_2nd_exec_ctrl;
@ -76,7 +74,7 @@ extern struct vmx_capability vmx_capability __ro_after_init;
static inline bool cpu_has_vmx_basic_inout(void) static inline bool cpu_has_vmx_basic_inout(void)
{ {
return (((u64)vmcs_config.basic_cap << 32) & VMX_BASIC_INOUT); return vmcs_config.basic & VMX_BASIC_INOUT;
} }
static inline bool cpu_has_virtual_nmis(void) static inline bool cpu_has_virtual_nmis(void)
@ -225,7 +223,7 @@ static inline bool cpu_has_vmx_vmfunc(void)
static inline bool cpu_has_vmx_shadow_vmcs(void) static inline bool cpu_has_vmx_shadow_vmcs(void)
{ {
/* check if the cpu supports writing r/o exit information fields */ /* check if the cpu supports writing r/o exit information fields */
if (!(vmcs_config.misc & MSR_IA32_VMX_MISC_VMWRITE_SHADOW_RO_FIELDS)) if (!(vmcs_config.misc & VMX_MISC_VMWRITE_SHADOW_RO_FIELDS))
return false; return false;
return vmcs_config.cpu_based_2nd_exec_ctrl & return vmcs_config.cpu_based_2nd_exec_ctrl &
@ -367,7 +365,7 @@ static inline bool cpu_has_vmx_invvpid_global(void)
static inline bool cpu_has_vmx_intel_pt(void) static inline bool cpu_has_vmx_intel_pt(void)
{ {
return (vmcs_config.misc & MSR_IA32_VMX_MISC_INTEL_PT) && return (vmcs_config.misc & VMX_MISC_INTEL_PT) &&
(vmcs_config.cpu_based_2nd_exec_ctrl & SECONDARY_EXEC_PT_USE_GPA) && (vmcs_config.cpu_based_2nd_exec_ctrl & SECONDARY_EXEC_PT_USE_GPA) &&
(vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_IA32_RTIT_CTL); (vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_IA32_RTIT_CTL);
} }


@ -23,8 +23,10 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
.hardware_unsetup = vmx_hardware_unsetup, .hardware_unsetup = vmx_hardware_unsetup,
.hardware_enable = vmx_hardware_enable, .enable_virtualization_cpu = vmx_enable_virtualization_cpu,
.hardware_disable = vmx_hardware_disable, .disable_virtualization_cpu = vmx_disable_virtualization_cpu,
.emergency_disable_virtualization_cpu = vmx_emergency_disable_virtualization_cpu,
.has_emulated_msr = vmx_has_emulated_msr, .has_emulated_msr = vmx_has_emulated_msr,
.vm_size = sizeof(struct kvm_vmx), .vm_size = sizeof(struct kvm_vmx),
@ -41,7 +43,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
.vcpu_put = vmx_vcpu_put, .vcpu_put = vmx_vcpu_put,
.update_exception_bitmap = vmx_update_exception_bitmap, .update_exception_bitmap = vmx_update_exception_bitmap,
.get_msr_feature = vmx_get_msr_feature, .get_feature_msr = vmx_get_feature_msr,
.get_msr = vmx_get_msr, .get_msr = vmx_get_msr,
.set_msr = vmx_set_msr, .set_msr = vmx_set_msr,
.get_segment_base = vmx_get_segment_base, .get_segment_base = vmx_get_segment_base,
@ -89,6 +91,8 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
.enable_nmi_window = vmx_enable_nmi_window, .enable_nmi_window = vmx_enable_nmi_window,
.enable_irq_window = vmx_enable_irq_window, .enable_irq_window = vmx_enable_irq_window,
.update_cr8_intercept = vmx_update_cr8_intercept, .update_cr8_intercept = vmx_update_cr8_intercept,
.x2apic_icr_is_split = false,
.set_virtual_apic_mode = vmx_set_virtual_apic_mode, .set_virtual_apic_mode = vmx_set_virtual_apic_mode,
.set_apic_access_page_addr = vmx_set_apic_access_page_addr, .set_apic_access_page_addr = vmx_set_apic_access_page_addr,
.refresh_apicv_exec_ctrl = vmx_refresh_apicv_exec_ctrl, .refresh_apicv_exec_ctrl = vmx_refresh_apicv_exec_ctrl,


@@ -981,7 +981,7 @@ static u32 nested_vmx_load_msr(struct kvm_vcpu *vcpu, u64 gpa, u32 count)
__func__, i, e.index, e.reserved);
goto fail;
}
-if (kvm_set_msr(vcpu, e.index, e.value)) {
+if (kvm_set_msr_with_filter(vcpu, e.index, e.value)) {
pr_debug_ratelimited(
"%s cannot write MSR (%u, 0x%x, 0x%llx)\n",
__func__, i, e.index, e.value);
@@ -1017,7 +1017,7 @@ static bool nested_vmx_get_vmexit_msr_value(struct kvm_vcpu *vcpu,
}
}
-if (kvm_get_msr(vcpu, msr_index, data)) {
+if (kvm_get_msr_with_filter(vcpu, msr_index, data)) {
pr_debug_ratelimited("%s cannot read MSR (0x%x)\n", __func__,
msr_index);
return false;
@@ -1112,9 +1112,9 @@ static void prepare_vmx_msr_autostore_list(struct kvm_vcpu *vcpu,
/*
* Emulated VMEntry does not fail here. Instead a less
* accurate value will be returned by
-* nested_vmx_get_vmexit_msr_value() using kvm_get_msr()
-* instead of reading the value from the vmcs02 VMExit
-* MSR-store area.
+* nested_vmx_get_vmexit_msr_value() by reading KVM's
+* internal MSR state instead of reading the value from
+* the vmcs02 VMExit MSR-store area.
*/
pr_warn_ratelimited(
"Not enough msr entries in msr_autostore. Can't add msr %x\n",
@@ -1251,21 +1251,32 @@ static bool is_bitwise_subset(u64 superset, u64 subset, u64 mask)
static int vmx_restore_vmx_basic(struct vcpu_vmx *vmx, u64 data)
{
-const u64 feature_and_reserved =
-/* feature (except bit 48; see below) */
-BIT_ULL(49) | BIT_ULL(54) | BIT_ULL(55) |
-/* reserved */
-BIT_ULL(31) | GENMASK_ULL(47, 45) | GENMASK_ULL(63, 56);
+const u64 feature_bits = VMX_BASIC_DUAL_MONITOR_TREATMENT |
+VMX_BASIC_INOUT |
+VMX_BASIC_TRUE_CTLS;
+const u64 reserved_bits = GENMASK_ULL(63, 56) |
+GENMASK_ULL(47, 45) |
+BIT_ULL(31);
u64 vmx_basic = vmcs_config.nested.basic;
-if (!is_bitwise_subset(vmx_basic, data, feature_and_reserved))
+BUILD_BUG_ON(feature_bits & reserved_bits);
+
+/*
+* Except for 32BIT_PHYS_ADDR_ONLY, which is an anti-feature bit (has
+* inverted polarity), the incoming value must not set feature bits or
+* reserved bits that aren't allowed/supported by KVM.  Fields, i.e.
+* multi-bit values, are explicitly checked below.
+*/
+if (!is_bitwise_subset(vmx_basic, data, feature_bits | reserved_bits))
return -EINVAL;
/*
* KVM does not emulate a version of VMX that constrains physical
* addresses of VMX structures (e.g. VMCS) to 32-bits.
*/
-if (data & BIT_ULL(48))
+if (data & VMX_BASIC_32BIT_PHYS_ADDR_ONLY)
return -EINVAL;
if (vmx_basic_vmcs_revision_id(vmx_basic) !=
@@ -1334,16 +1345,29 @@ vmx_restore_control_msr(struct vcpu_vmx *vmx, u32 msr_index, u64 data)
static int vmx_restore_vmx_misc(struct vcpu_vmx *vmx, u64 data)
{
-const u64 feature_and_reserved_bits =
-/* feature */
-BIT_ULL(5) | GENMASK_ULL(8, 6) | BIT_ULL(14) | BIT_ULL(15) |
-BIT_ULL(28) | BIT_ULL(29) | BIT_ULL(30) |
-/* reserved */
-GENMASK_ULL(13, 9) | BIT_ULL(31);
+const u64 feature_bits = VMX_MISC_SAVE_EFER_LMA |
+VMX_MISC_ACTIVITY_HLT |
+VMX_MISC_ACTIVITY_SHUTDOWN |
+VMX_MISC_ACTIVITY_WAIT_SIPI |
+VMX_MISC_INTEL_PT |
+VMX_MISC_RDMSR_IN_SMM |
+VMX_MISC_VMWRITE_SHADOW_RO_FIELDS |
+VMX_MISC_VMXOFF_BLOCK_SMI |
+VMX_MISC_ZERO_LEN_INS;
+const u64 reserved_bits = BIT_ULL(31) | GENMASK_ULL(13, 9);
u64 vmx_misc = vmx_control_msr(vmcs_config.nested.misc_low,
vmcs_config.nested.misc_high);
-if (!is_bitwise_subset(vmx_misc, data, feature_and_reserved_bits))
+BUILD_BUG_ON(feature_bits & reserved_bits);
+
+/*
+* The incoming value must not set feature bits or reserved bits that
+* aren't allowed/supported by KVM.  Fields, i.e. multi-bit values, are
+* explicitly checked below.
+*/
+if (!is_bitwise_subset(vmx_misc, data, feature_bits | reserved_bits))
return -EINVAL;
if ((vmx->nested.msrs.pinbased_ctls_high &
@@ -2317,10 +2341,12 @@ static void prepare_vmcs02_early(struct vcpu_vmx *vmx, struct loaded_vmcs *vmcs0
/* Posted interrupts setting is only taken from vmcs12. */
vmx->nested.pi_pending = false;
-if (nested_cpu_has_posted_intr(vmcs12))
+if (nested_cpu_has_posted_intr(vmcs12)) {
vmx->nested.posted_intr_nv = vmcs12->posted_intr_nv;
-else
+} else {
+vmx->nested.posted_intr_nv = -1;
exec_control &= ~PIN_BASED_POSTED_INTR;
+}
pin_controls_set(vmx, exec_control);
/*
@@ -2470,6 +2496,7 @@ static void prepare_vmcs02_rare(struct vcpu_vmx *vmx, struct vmcs12 *vmcs12)
if (!hv_evmcs || !(hv_evmcs->hv_clean_fields &
HV_VMX_ENLIGHTENED_CLEAN_FIELD_GUEST_GRP2)) {
vmcs_write16(GUEST_ES_SELECTOR, vmcs12->guest_es_selector);
vmcs_write16(GUEST_CS_SELECTOR, vmcs12->guest_cs_selector);
vmcs_write16(GUEST_SS_SELECTOR, vmcs12->guest_ss_selector);
@@ -2507,7 +2534,7 @@ static void prepare_vmcs02_rare(struct vcpu_vmx *vmx, struct vmcs12 *vmcs12)
vmcs_writel(GUEST_GDTR_BASE, vmcs12->guest_gdtr_base);
vmcs_writel(GUEST_IDTR_BASE, vmcs12->guest_idtr_base);
-vmx->segment_cache.bitmask = 0;
+vmx_segment_cache_clear(vmx);
}
if (!hv_evmcs || !(hv_evmcs->hv_clean_fields &
@@ -4284,11 +4311,52 @@ static int vmx_check_nested_events(struct kvm_vcpu *vcpu)
}
if (kvm_cpu_has_interrupt(vcpu) && !vmx_interrupt_blocked(vcpu)) {
+int irq;
+
if (block_nested_events)
return -EBUSY;
if (!nested_exit_on_intr(vcpu))
goto no_vmexit;
-nested_vmx_vmexit(vcpu, EXIT_REASON_EXTERNAL_INTERRUPT, 0, 0);
+
+if (!nested_exit_intr_ack_set(vcpu)) {
+nested_vmx_vmexit(vcpu, EXIT_REASON_EXTERNAL_INTERRUPT, 0, 0);
+return 0;
+}
+
+irq = kvm_cpu_get_extint(vcpu);
+if (irq != -1) {
+nested_vmx_vmexit(vcpu, EXIT_REASON_EXTERNAL_INTERRUPT,
+INTR_INFO_VALID_MASK | INTR_TYPE_EXT_INTR | irq, 0);
+return 0;
+}
+
+irq = kvm_apic_has_interrupt(vcpu);
+if (WARN_ON_ONCE(irq < 0))
+goto no_vmexit;
+
+/*
+* If the IRQ is L2's PI notification vector, process posted
+* interrupts for L2 instead of injecting VM-Exit, as the
+* detection/morphing architecturally occurs when the IRQ is
+* delivered to the CPU.  Note, only interrupts that are routed
+* through the local APIC trigger posted interrupt processing,
+* and enabling posted interrupts requires ACK-on-exit.
+*/
+if (irq == vmx->nested.posted_intr_nv) {
+vmx->nested.pi_pending = true;
+kvm_apic_clear_irr(vcpu, irq);
+goto no_vmexit;
+}
+
+nested_vmx_vmexit(vcpu, EXIT_REASON_EXTERNAL_INTERRUPT,
+INTR_INFO_VALID_MASK | INTR_TYPE_EXT_INTR | irq, 0);
+
+/*
+* ACK the interrupt _after_ emulating VM-Exit, as the IRQ must
+* be marked as in-service in vmcs01.GUEST_INTERRUPT_STATUS.SVI
+* if APICv is active.
+*/
+kvm_apic_ack_interrupt(vcpu, irq);
return 0;
}
@@ -4806,7 +4874,7 @@ static void nested_vmx_restore_host_state(struct kvm_vcpu *vcpu)
goto vmabort;
}
-if (kvm_set_msr(vcpu, h.index, h.value)) {
+if (kvm_set_msr_with_filter(vcpu, h.index, h.value)) {
pr_debug_ratelimited(
"%s WRMSR failed (%u, 0x%x, 0x%llx)\n",
__func__, j, h.index, h.value);
@@ -4969,14 +5037,6 @@ void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_exit_reason,
vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
if (likely(!vmx->fail)) {
-if ((u16)vm_exit_reason == EXIT_REASON_EXTERNAL_INTERRUPT &&
-    nested_exit_intr_ack_set(vcpu)) {
-int irq = kvm_cpu_get_interrupt(vcpu);
-WARN_ON(irq < 0);
-vmcs12->vm_exit_intr_info = irq |
-INTR_INFO_VALID_MASK | INTR_TYPE_EXT_INTR;
-}
-
if (vm_exit_reason != -1)
trace_kvm_nested_vmexit_inject(vmcs12->vm_exit_reason,
vmcs12->exit_qualification,
@@ -7051,7 +7111,7 @@ static void nested_vmx_setup_misc_data(struct vmcs_config *vmcs_conf,
{
msrs->misc_low = (u32)vmcs_conf->misc & VMX_MISC_SAVE_EFER_LMA;
msrs->misc_low |=
-MSR_IA32_VMX_MISC_VMWRITE_SHADOW_RO_FIELDS |
+VMX_MISC_VMWRITE_SHADOW_RO_FIELDS |
VMX_MISC_EMULATED_PREEMPTION_TIMER_RATE |
VMX_MISC_ACTIVITY_HLT |
VMX_MISC_ACTIVITY_WAIT_SIPI;
@@ -7066,12 +7126,10 @@ static void nested_vmx_setup_basic(struct nested_vmx_msrs *msrs)
* guest, and the VMCS structure we give it - not about the
* VMX support of the underlying hardware.
*/
-msrs->basic =
-VMCS12_REVISION |
-VMX_BASIC_TRUE_CTLS |
-((u64)VMCS12_SIZE << VMX_BASIC_VMCS_SIZE_SHIFT) |
-(VMX_BASIC_MEM_TYPE_WB << VMX_BASIC_MEM_TYPE_SHIFT);
+msrs->basic = vmx_basic_encode_vmcs_info(VMCS12_REVISION, VMCS12_SIZE,
+X86_MEMTYPE_WB);
+msrs->basic |= VMX_BASIC_TRUE_CTLS;
if (cpu_has_vmx_basic_inout())
msrs->basic |= VMX_BASIC_INOUT;
}
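Judging by the open-coded version removed in the last hunk, vmx_basic_encode_vmcs_info() presumably packs the same three fields into the VMX_BASIC layout; a minimal sketch under that assumption (illustrative name and body, reusing the shift macros the removed code used):

/* Sketch: pack revision ID, VMCS size and memory type the way the removed code did. */
static inline u64 example_encode_vmcs_basic_info(u32 revision_id, u16 size, u8 memtype)
{
        return revision_id | ((u64)size << VMX_BASIC_VMCS_SIZE_SHIFT) |
               ((u64)memtype << VMX_BASIC_MEM_TYPE_SHIFT);
}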


@@ -39,11 +39,17 @@ bool nested_vmx_check_io_bitmaps(struct kvm_vcpu *vcpu, unsigned int port,
static inline struct vmcs12 *get_vmcs12(struct kvm_vcpu *vcpu)
{
+lockdep_assert_once(lockdep_is_held(&vcpu->mutex) ||
+!refcount_read(&vcpu->kvm->users_count));
+
return to_vmx(vcpu)->nested.cached_vmcs12;
}
static inline struct vmcs12 *get_shadow_vmcs12(struct kvm_vcpu *vcpu)
{
+lockdep_assert_once(lockdep_is_held(&vcpu->mutex) ||
+!refcount_read(&vcpu->kvm->users_count));
+
return to_vmx(vcpu)->nested.cached_shadow_vmcs12;
}
@@ -109,7 +115,7 @@ static inline unsigned nested_cpu_vmx_misc_cr3_count(struct kvm_vcpu *vcpu)
static inline bool nested_cpu_has_vmwrite_any_field(struct kvm_vcpu *vcpu)
{
return to_vmx(vcpu)->nested.msrs.misc_low &
-MSR_IA32_VMX_MISC_VMWRITE_SHADOW_RO_FIELDS;
+VMX_MISC_VMWRITE_SHADOW_RO_FIELDS;
}
static inline bool nested_cpu_has_zero_length_injection(struct kvm_vcpu *vcpu)


@@ -274,7 +274,7 @@ static int handle_encls_ecreate(struct kvm_vcpu *vcpu)
* simultaneously set SGX_ATTR_PROVISIONKEY to bypass the check to
* enforce restriction of access to the PROVISIONKEY.
*/
-contents = (struct sgx_secs *)__get_free_page(GFP_KERNEL_ACCOUNT);
+contents = (struct sgx_secs *)__get_free_page(GFP_KERNEL);
if (!contents)
return -ENOMEM;


@@ -525,10 +525,6 @@ static const struct kvm_vmx_segment_field {
VMX_SEGMENT_FIELD(LDTR),
};
-static inline void vmx_segment_cache_clear(struct vcpu_vmx *vmx)
-{
-vmx->segment_cache.bitmask = 0;
-}
static unsigned long host_idt_base;
@@ -755,7 +751,7 @@ fault:
return -EIO;
}
-static void vmx_emergency_disable(void)
+void vmx_emergency_disable_virtualization_cpu(void)
{
int cpu = raw_smp_processor_id();
struct loaded_vmcs *v;
@@ -1998,15 +1994,15 @@ static inline bool is_vmx_feature_control_msr_valid(struct vcpu_vmx *vmx,
return !(msr->data & ~valid_bits);
}
-int vmx_get_msr_feature(struct kvm_msr_entry *msr)
+int vmx_get_feature_msr(u32 msr, u64 *data)
{
-switch (msr->index) {
+switch (msr) {
case KVM_FIRST_EMULATED_VMX_MSR ... KVM_LAST_EMULATED_VMX_MSR:
if (!nested)
return 1;
-return vmx_get_vmx_msr(&vmcs_config.nested, msr->index, &msr->data);
+return vmx_get_vmx_msr(&vmcs_config.nested, msr, data);
default:
-return KVM_MSR_RET_INVALID;
+return KVM_MSR_RET_UNSUPPORTED;
}
}
@@ -2605,13 +2601,13 @@ static u64 adjust_vmx_controls64(u64 ctl_opt, u32 msr)
static int setup_vmcs_config(struct vmcs_config *vmcs_conf,
struct vmx_capability *vmx_cap)
{
-u32 vmx_msr_low, vmx_msr_high;
u32 _pin_based_exec_control = 0;
u32 _cpu_based_exec_control = 0;
u32 _cpu_based_2nd_exec_control = 0;
u64 _cpu_based_3rd_exec_control = 0;
u32 _vmexit_control = 0;
u32 _vmentry_control = 0;
+u64 basic_msr;
u64 misc_msr;
int i;
@@ -2734,29 +2730,29 @@ static int setup_vmcs_config(struct vmcs_config *vmcs_conf,
_vmexit_control &= ~x_ctrl;
}
-rdmsr(MSR_IA32_VMX_BASIC, vmx_msr_low, vmx_msr_high);
+rdmsrl(MSR_IA32_VMX_BASIC, basic_msr);
/* IA-32 SDM Vol 3B: VMCS size is never greater than 4kB. */
-if ((vmx_msr_high & 0x1fff) > PAGE_SIZE)
+if (vmx_basic_vmcs_size(basic_msr) > PAGE_SIZE)
return -EIO;
#ifdef CONFIG_X86_64
-/* IA-32 SDM Vol 3B: 64-bit CPUs always have VMX_BASIC_MSR[48]==0. */
-if (vmx_msr_high & (1u<<16))
+/*
+* KVM expects to be able to shove all legal physical addresses into
+* VMCS fields for 64-bit kernels, and per the SDM, "This bit is always
+* 0 for processors that support Intel 64 architecture".
+*/
+if (basic_msr & VMX_BASIC_32BIT_PHYS_ADDR_ONLY)
return -EIO;
#endif
/* Require Write-Back (WB) memory type for VMCS accesses. */
-if (((vmx_msr_high >> 18) & 15) != 6)
+if (vmx_basic_vmcs_mem_type(basic_msr) != X86_MEMTYPE_WB)
return -EIO;
rdmsrl(MSR_IA32_VMX_MISC, misc_msr);
-vmcs_conf->size = vmx_msr_high & 0x1fff;
-vmcs_conf->basic_cap = vmx_msr_high & ~0x1fff;
-vmcs_conf->revision_id = vmx_msr_low;
+vmcs_conf->basic = basic_msr;
vmcs_conf->pin_based_exec_ctrl = _pin_based_exec_control;
vmcs_conf->cpu_based_exec_ctrl = _cpu_based_exec_control;
vmcs_conf->cpu_based_2nd_exec_ctrl = _cpu_based_2nd_exec_control;
@@ -2844,7 +2840,7 @@ fault:
return -EFAULT;
}
-int vmx_hardware_enable(void)
+int vmx_enable_virtualization_cpu(void)
{
int cpu = raw_smp_processor_id();
u64 phys_addr = __pa(per_cpu(vmxarea, cpu));
@@ -2881,7 +2877,7 @@ static void vmclear_local_loaded_vmcss(void)
__loaded_vmcs_clear(v);
}
-void vmx_hardware_disable(void)
+void vmx_disable_virtualization_cpu(void)
{
vmclear_local_loaded_vmcss();
@@ -2903,13 +2899,13 @@ struct vmcs *alloc_vmcs_cpu(bool shadow, int cpu, gfp_t flags)
if (!pages)
return NULL;
vmcs = page_address(pages);
-memset(vmcs, 0, vmcs_config.size);
+memset(vmcs, 0, vmx_basic_vmcs_size(vmcs_config.basic));
/* KVM supports Enlightened VMCS v1 only */
if (kvm_is_using_evmcs())
vmcs->hdr.revision_id = KVM_EVMCS_VERSION;
else
-vmcs->hdr.revision_id = vmcs_config.revision_id;
+vmcs->hdr.revision_id = vmx_basic_vmcs_revision_id(vmcs_config.basic);
if (shadow)
vmcs->hdr.shadow_vmcs = 1;
@@ -3002,7 +2998,7 @@ static __init int alloc_kvm_area(void)
* physical CPU.
*/
if (kvm_is_using_evmcs())
-vmcs->hdr.revision_id = vmcs_config.revision_id;
+vmcs->hdr.revision_id = vmx_basic_vmcs_revision_id(vmcs_config.basic);
per_cpu(vmxarea, cpu) = vmcs;
}
@@ -4219,6 +4215,13 @@ static int vmx_deliver_nested_posted_interrupt(struct kvm_vcpu *vcpu,
{
struct vcpu_vmx *vmx = to_vmx(vcpu);
+/*
+* DO NOT query the vCPU's vmcs12, as vmcs12 is dynamically allocated
+* and freed, and must not be accessed outside of vcpu->mutex.  The
+* vCPU's cached PI NV is valid if and only if posted interrupts are
+* enabled in its vmcs12, i.e. checking the vector also checks that
+* L1 has enabled posted interrupts for L2.
+*/
if (is_guest_mode(vcpu) &&
vector == vmx->nested.posted_intr_nv) {
/*
@@ -5804,8 +5807,9 @@ static int handle_ept_violation(struct kvm_vcpu *vcpu)
error_code |= (exit_qualification & EPT_VIOLATION_RWX_MASK)
? PFERR_PRESENT_MASK : 0;
-error_code |= (exit_qualification & EPT_VIOLATION_GVA_TRANSLATED) != 0 ?
-PFERR_GUEST_FINAL_MASK : PFERR_GUEST_PAGE_MASK;
+if (error_code & EPT_VIOLATION_GVA_IS_VALID)
+error_code |= (exit_qualification & EPT_VIOLATION_GVA_TRANSLATED) ?
+PFERR_GUEST_FINAL_MASK : PFERR_GUEST_PAGE_MASK;
/*
* Check that the GPA doesn't exceed physical memory limits, as that is
@@ -7265,6 +7269,8 @@ static fastpath_t vmx_exit_handlers_fastpath(struct kvm_vcpu *vcpu,
return handle_fastpath_set_msr_irqoff(vcpu);
case EXIT_REASON_PREEMPTION_TIMER:
return handle_fastpath_preemption_timer(vcpu, force_immediate_exit);
+case EXIT_REASON_HLT:
+return handle_fastpath_hlt(vcpu);
default:
return EXIT_FASTPATH_NONE;
}
@@ -7965,6 +7971,7 @@ static __init void vmx_set_cpu_caps(void)
kvm_cpu_cap_clear(X86_FEATURE_SGX_LC);
kvm_cpu_cap_clear(X86_FEATURE_SGX1);
kvm_cpu_cap_clear(X86_FEATURE_SGX2);
+kvm_cpu_cap_clear(X86_FEATURE_SGX_EDECCSSA);
}
if (vmx_umip_emulated())
@@ -8515,7 +8522,7 @@ __init int vmx_hardware_setup(void)
u64 use_timer_freq = 5000ULL * 1000 * 1000;
cpu_preemption_timer_multi =
-vmcs_config.misc & VMX_MISC_PREEMPTION_TIMER_RATE_MASK;
+vmx_misc_preemption_timer_rate(vmcs_config.misc);
if (tsc_khz)
use_timer_freq = (u64)tsc_khz * 1000;
@@ -8582,8 +8589,6 @@ static void __vmx_exit(void)
{
allow_smaller_maxphyaddr = false;
-cpu_emergency_unregister_virt_callback(vmx_emergency_disable);
-
vmx_cleanup_l1d_flush();
}
@@ -8630,8 +8635,6 @@ static int __init vmx_init(void)
pi_init_cpu(cpu);
}
-cpu_emergency_register_virt_callback(vmx_emergency_disable);
-
vmx_check_vmcs12_offsets();
/*


@@ -17,10 +17,6 @@
#include "run_flags.h"
#include "../mmu.h"
-#define MSR_TYPE_R 1
-#define MSR_TYPE_W 2
-#define MSR_TYPE_RW 3
#define X2APIC_MSR(r) (APIC_BASE_MSR + ((r) >> 4))
#ifdef CONFIG_X86_64
@@ -756,4 +752,9 @@ static inline bool vmx_can_use_ipiv(struct kvm_vcpu *vcpu)
return lapic_in_kernel(vcpu) && enable_ipiv;
}
+static inline void vmx_segment_cache_clear(struct vcpu_vmx *vmx)
+{
+vmx->segment_cache.bitmask = 0;
+}
+
#endif /* __KVM_X86_VMX_H */


@@ -104,6 +104,14 @@ static inline void evmcs_load(u64 phys_addr)
struct hv_vp_assist_page *vp_ap =
hv_get_vp_assist_page(smp_processor_id());
+/*
+* When enabling eVMCS, KVM verifies that every CPU has a valid hv_vp_assist_page()
+* and aborts enabling the feature otherwise. CPU onlining path is also checked in
+* vmx_hardware_enable().
+*/
+if (KVM_BUG_ON(!vp_ap, kvm_get_running_vcpu()->kvm))
+return;
+
if (current_evmcs->hv_enlightenments_control.nested_flush_hypercall)
vp_ap->nested_control.features.directhypercall = 1;
vp_ap->current_nested_vmcs = phys_addr;


@@ -47,7 +47,7 @@ static __always_inline void vmcs_check16(unsigned long field)
BUILD_BUG_ON_MSG(__builtin_constant_p(field) && ((field) & 0x6001) == 0x2001,
"16-bit accessor invalid for 64-bit high field");
BUILD_BUG_ON_MSG(__builtin_constant_p(field) && ((field) & 0x6000) == 0x4000,
-"16-bit accessor invalid for 32-bit high field");
+"16-bit accessor invalid for 32-bit field");
BUILD_BUG_ON_MSG(__builtin_constant_p(field) && ((field) & 0x6000) == 0x6000,
"16-bit accessor invalid for natural width field");
}


@@ -13,8 +13,9 @@ extern struct kvm_x86_init_ops vt_init_ops __initdata;
void vmx_hardware_unsetup(void);
int vmx_check_processor_compat(void);
-int vmx_hardware_enable(void);
-void vmx_hardware_disable(void);
+int vmx_enable_virtualization_cpu(void);
+void vmx_disable_virtualization_cpu(void);
+void vmx_emergency_disable_virtualization_cpu(void);
int vmx_vm_init(struct kvm *kvm);
void vmx_vm_destroy(struct kvm *kvm);
int vmx_vcpu_precreate(struct kvm *kvm);
@@ -56,7 +57,7 @@ bool vmx_has_emulated_msr(struct kvm *kvm, u32 index);
void vmx_msr_filter_changed(struct kvm_vcpu *vcpu);
void vmx_prepare_switch_to_guest(struct kvm_vcpu *vcpu);
void vmx_update_exception_bitmap(struct kvm_vcpu *vcpu);
-int vmx_get_msr_feature(struct kvm_msr_entry *msr);
+int vmx_get_feature_msr(u32 msr, u64 *data);
int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info);
u64 vmx_get_segment_base(struct kvm_vcpu *vcpu, int seg);
void vmx_get_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg);

[diff of one large file suppressed in the original rendering]


@@ -103,11 +103,18 @@ static inline unsigned int __shrink_ple_window(unsigned int val,
return max(val, min);
}
-#define MSR_IA32_CR_PAT_DEFAULT 0x0007040600070406ULL
+#define MSR_IA32_CR_PAT_DEFAULT \
+PAT_VALUE(WB, WT, UC_MINUS, UC, WB, WT, UC_MINUS, UC)
void kvm_service_local_tlb_flush_requests(struct kvm_vcpu *vcpu);
int kvm_check_nested_events(struct kvm_vcpu *vcpu);
+/* Forcibly leave the nested mode in cases like a vCPU reset */
+static inline void kvm_leave_nested(struct kvm_vcpu *vcpu)
+{
+kvm_x86_ops.nested_ops->leave_nested(vcpu);
+}
+
static inline bool kvm_vcpu_has_run(struct kvm_vcpu *vcpu)
{
return vcpu->arch.last_vmentry_cpu != -1;
@@ -334,6 +341,7 @@ int x86_decode_emulated_instruction(struct kvm_vcpu *vcpu, int emulation_type,
int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
int emulation_type, void *insn, int insn_len);
fastpath_t handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vcpu);
+fastpath_t handle_fastpath_hlt(struct kvm_vcpu *vcpu);
extern struct kvm_caps kvm_caps;
extern struct kvm_host_values kvm_host;
@@ -504,13 +512,26 @@ int kvm_handle_memory_failure(struct kvm_vcpu *vcpu, int r,
int kvm_handle_invpcid(struct kvm_vcpu *vcpu, unsigned long type, gva_t gva);
bool kvm_msr_allowed(struct kvm_vcpu *vcpu, u32 index, u32 type);
+enum kvm_msr_access {
+MSR_TYPE_R = BIT(0),
+MSR_TYPE_W = BIT(1),
+MSR_TYPE_RW = MSR_TYPE_R | MSR_TYPE_W,
+};
+
/*
* Internal error codes that are used to indicate that MSR emulation encountered
-* an error that should result in #GP in the guest, unless userspace
-* handles it.
+* an error that should result in #GP in the guest, unless userspace handles it.
+* Note, '1', '0', and negative numbers are off limits, as they are used by KVM
+* as part of KVM's lightly documented internal KVM_RUN return codes.
+*
+* UNSUPPORTED - The MSR isn't supported, either because it is completely
+*               unknown to KVM, or because the MSR should not exist according
+*               to the vCPU model.
+*
+* FILTERED    - Access to the MSR is denied by a userspace MSR filter.
*/
-#define KVM_MSR_RET_INVALID 2 /* in-kernel MSR emulation #GP condition */
-#define KVM_MSR_RET_FILTERED 3 /* #GP due to userspace MSR filter */
+#define KVM_MSR_RET_UNSUPPORTED 2
+#define KVM_MSR_RET_FILTERED 3
#define __cr4_reserved_bits(__cpu_has, __c) \
({ \
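A hypothetical caller-side sketch of how these codes are meant to be consumed (not KVM's actual plumbing, and the helper name is made up): both codes normally turn into a #GP for the guest unless userspace has asked to handle the access.

/* Hypothetical sketch: mapping the internal MSR codes to guest-visible behavior. */
static int example_finish_msr_access(struct kvm_vcpu *vcpu, int err)
{
        if (err == KVM_MSR_RET_UNSUPPORTED || err == KVM_MSR_RET_FILTERED) {
                kvm_inject_gp(vcpu, 0); /* guest sees #GP(0) unless userspace intervenes */
                return 1;               /* '1' == resume the guest */
        }
        return err;                     /* 0 on success, negative errno on internal failure */
}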


@@ -176,15 +176,6 @@ static inline void set_page_memtype(struct page *pg,
}
#endif
-enum {
-PAT_UC = 0,		/* uncached */
-PAT_WC = 1,		/* Write combining */
-PAT_WT = 4,		/* Write Through */
-PAT_WP = 5,		/* Write Protected */
-PAT_WB = 6,		/* Write Back (default) */
-PAT_UC_MINUS = 7,	/* UC, but can be overridden by MTRR */
-};
-
#define CM(c) (_PAGE_CACHE_MODE_ ## c)
static enum page_cache_mode __init pat_get_cache_mode(unsigned int pat_val,
@@ -194,13 +185,13 @@ static enum page_cache_mode __init pat_get_cache_mode(unsigned int pat_val,
char *cache_mode;
switch (pat_val) {
-case PAT_UC: cache = CM(UC); cache_mode = "UC "; break;
-case PAT_WC: cache = CM(WC); cache_mode = "WC "; break;
-case PAT_WT: cache = CM(WT); cache_mode = "WT "; break;
-case PAT_WP: cache = CM(WP); cache_mode = "WP "; break;
-case PAT_WB: cache = CM(WB); cache_mode = "WB "; break;
-case PAT_UC_MINUS: cache = CM(UC_MINUS); cache_mode = "UC- "; break;
+case X86_MEMTYPE_UC: cache = CM(UC); cache_mode = "UC "; break;
+case X86_MEMTYPE_WC: cache = CM(WC); cache_mode = "WC "; break;
+case X86_MEMTYPE_WT: cache = CM(WT); cache_mode = "WT "; break;
+case X86_MEMTYPE_WP: cache = CM(WP); cache_mode = "WP "; break;
+case X86_MEMTYPE_WB: cache = CM(WB); cache_mode = "WB "; break;
+case X86_MEMTYPE_UC_MINUS: cache = CM(UC_MINUS); cache_mode = "UC- "; break;
default: cache = CM(WB); cache_mode = "WB "; break;
}
memcpy(msg, cache_mode, 4);
@@ -257,12 +248,6 @@ void pat_cpu_init(void)
void __init pat_bp_init(void)
{
struct cpuinfo_x86 *c = &boot_cpu_data;
-#define PAT(p0, p1, p2, p3, p4, p5, p6, p7) \
-(((u64)PAT_ ## p0) | ((u64)PAT_ ## p1 << 8) | \
-((u64)PAT_ ## p2 << 16) | ((u64)PAT_ ## p3 << 24) | \
-((u64)PAT_ ## p4 << 32) | ((u64)PAT_ ## p5 << 40) | \
-((u64)PAT_ ## p6 << 48) | ((u64)PAT_ ## p7 << 56))
-
if (!IS_ENABLED(CONFIG_X86_PAT))
pr_info_once("x86/PAT: PAT support disabled because CONFIG_X86_PAT is disabled in the kernel.\n");
@@ -293,7 +278,7 @@ void __init pat_bp_init(void)
* NOTE: When WC or WP is used, it is redirected to UC- per
* the default setup in __cachemode2pte_tbl[].
*/
-pat_msr_val = PAT(WB, WT, UC_MINUS, UC, WB, WT, UC_MINUS, UC);
+pat_msr_val = PAT_VALUE(WB, WT, UC_MINUS, UC, WB, WT, UC_MINUS, UC);
}
/*
@@ -328,7 +313,7 @@ void __init pat_bp_init(void)
* NOTE: When WT or WP is used, it is redirected to UC- per
* the default setup in __cachemode2pte_tbl[].
*/
-pat_msr_val = PAT(WB, WC, UC_MINUS, UC, WB, WC, UC_MINUS, UC);
+pat_msr_val = PAT_VALUE(WB, WC, UC_MINUS, UC, WB, WC, UC_MINUS, UC);
} else {
/*
* Full PAT support. We put WT in slot 7 to improve
@@ -356,13 +341,12 @@ void __init pat_bp_init(void)
* The reserved slots are unused, but mapped to their
* corresponding types in the presence of PAT errata.
*/
-pat_msr_val = PAT(WB, WC, UC_MINUS, UC, WB, WP, UC_MINUS, WT);
+pat_msr_val = PAT_VALUE(WB, WC, UC_MINUS, UC, WB, WP, UC_MINUS, WT);
}
memory_caching_control |= CACHE_PAT;
init_cache_modes(pat_msr_val);
-#undef PAT
}
static DEFINE_SPINLOCK(memtype_lock); /* protects memtype accesses */
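Judging by the local PAT() macro and PAT_* enum removed here, the shared PAT_VALUE()/X86_MEMTYPE_* replacements presumably encode one memory type per byte of the IA32_PAT MSR; for example, PAT_VALUE(WB, WT, UC_MINUS, UC, WB, WT, UC_MINUS, UC) would yield 0x0007040600070406, matching the literal MSR_IA32_CR_PAT_DEFAULT it replaces in the x86.h hunk above. A sketch of that encoding (illustrative macro name, mirroring the removed PAT() macro):

/* Sketch of the per-byte PAT encoding the removed PAT() macro performed. */
#define EXAMPLE_PAT_VALUE(p0, p1, p2, p3, p4, p5, p6, p7)                  \
        (((u64)X86_MEMTYPE_ ## p0 <<  0) | ((u64)X86_MEMTYPE_ ## p1 <<  8) | \
         ((u64)X86_MEMTYPE_ ## p2 << 16) | ((u64)X86_MEMTYPE_ ## p3 << 24) | \
         ((u64)X86_MEMTYPE_ ## p4 << 32) | ((u64)X86_MEMTYPE_ ## p5 << 40) | \
         ((u64)X86_MEMTYPE_ ## p6 << 48) | ((u64)X86_MEMTYPE_ ## p7 << 56))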


@@ -1529,8 +1529,22 @@ static inline void kvm_create_vcpu_debugfs(struct kvm_vcpu *vcpu) {}
#endif
#ifdef CONFIG_KVM_GENERIC_HARDWARE_ENABLING
-int kvm_arch_hardware_enable(void);
-void kvm_arch_hardware_disable(void);
+/*
+* kvm_arch_{enable,disable}_virtualization() are called on one CPU, under
+* kvm_usage_lock, immediately after/before 0=>1 and 1=>0 transitions of
+* kvm_usage_count, i.e. at the beginning of the generic hardware enabling
+* sequence, and at the end of the generic hardware disabling sequence.
+*/
+void kvm_arch_enable_virtualization(void);
+void kvm_arch_disable_virtualization(void);
+/*
+* kvm_arch_{enable,disable}_virtualization_cpu() are called on "every" CPU to
+* do the actual twiddling of hardware bits.  The hooks are called on all
+* online CPUs when KVM enables/disables virtualization, and on a single CPU
+* when that CPU is onlined/offlined (including for Resume/Suspend).
+*/
+int kvm_arch_enable_virtualization_cpu(void);
+void kvm_arch_disable_virtualization_cpu(void);
#endif
int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu);
bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu);
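For illustration only (not part of this diff), an architecture with no system-wide setup could wire the new hooks up roughly like this, leaving all hardware twiddling to the per-CPU variant:

/* Hypothetical arch implementation: global vs. per-CPU halves of the new hooks. */
void kvm_arch_enable_virtualization(void)
{
        /* one-time, system-wide prep, e.g. registering an emergency-disable callback */
}

int kvm_arch_enable_virtualization_cpu(void)
{
        /* per-CPU hardware enable, e.g. VMXON on Intel or setting EFER.SVME on AMD */
        return 0;       /* a non-zero return fails enabling on this CPU */
}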


@@ -5,3 +5,7 @@
!*.h
!*.S
!*.sh
+!.gitignore
+!config
+!settings
+!Makefile


@@ -130,6 +130,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/max_vcpuid_cap_test
TEST_GEN_PROGS_x86_64 += x86_64/triple_fault_event_test
TEST_GEN_PROGS_x86_64 += x86_64/recalc_apic_map_test
TEST_GEN_PROGS_x86_64 += access_tracking_perf_test
+TEST_GEN_PROGS_x86_64 += coalesced_io_test
TEST_GEN_PROGS_x86_64 += demand_paging_test
TEST_GEN_PROGS_x86_64 += dirty_log_test
TEST_GEN_PROGS_x86_64 += dirty_log_perf_test
@@ -167,6 +168,7 @@ TEST_GEN_PROGS_aarch64 += aarch64/vpmu_counter_access
TEST_GEN_PROGS_aarch64 += aarch64/no-vgic-v3
TEST_GEN_PROGS_aarch64 += access_tracking_perf_test
TEST_GEN_PROGS_aarch64 += arch_timer
+TEST_GEN_PROGS_aarch64 += coalesced_io_test
TEST_GEN_PROGS_aarch64 += demand_paging_test
TEST_GEN_PROGS_aarch64 += dirty_log_test
TEST_GEN_PROGS_aarch64 += dirty_log_perf_test
@@ -188,6 +190,7 @@ TEST_GEN_PROGS_s390x += s390x/tprot
TEST_GEN_PROGS_s390x += s390x/cmma_test
TEST_GEN_PROGS_s390x += s390x/debug_test
TEST_GEN_PROGS_s390x += s390x/shared_zeropage_test
+TEST_GEN_PROGS_s390x += s390x/ucontrol_test
TEST_GEN_PROGS_s390x += demand_paging_test
TEST_GEN_PROGS_s390x += dirty_log_test
TEST_GEN_PROGS_s390x += guest_print_test
@@ -200,6 +203,7 @@ TEST_GEN_PROGS_s390x += kvm_binary_stats_test
TEST_GEN_PROGS_riscv += riscv/sbi_pmu_test
TEST_GEN_PROGS_riscv += riscv/ebreak_test
TEST_GEN_PROGS_riscv += arch_timer
+TEST_GEN_PROGS_riscv += coalesced_io_test
TEST_GEN_PROGS_riscv += demand_paging_test
TEST_GEN_PROGS_riscv += dirty_log_test
TEST_GEN_PROGS_riscv += get-reg-list


@@ -0,0 +1,236 @@
// SPDX-License-Identifier: GPL-2.0
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/sizes.h>
#include <kvm_util.h>
#include <processor.h>
#include "ucall_common.h"
struct kvm_coalesced_io {
struct kvm_coalesced_mmio_ring *ring;
uint32_t ring_size;
uint64_t mmio_gpa;
uint64_t *mmio;
/*
* x86-only, but define pio_port for all architectures to minimize the
* amount of #ifdeffery and complexity, without having to sacrifice
* verbose error messages.
*/
uint8_t pio_port;
};
static struct kvm_coalesced_io kvm_builtin_io_ring;
#ifdef __x86_64__
static const int has_pio = 1;
#else
static const int has_pio = 0;
#endif
static void guest_code(struct kvm_coalesced_io *io)
{
int i, j;
for (;;) {
for (j = 0; j < 1 + has_pio; j++) {
/*
* KVM always leaves one free entry, i.e. exits to
* userspace before the last entry is filled.
*/
for (i = 0; i < io->ring_size - 1; i++) {
#ifdef __x86_64__
if (i & 1)
outl(io->pio_port, io->pio_port + i);
else
#endif
WRITE_ONCE(*io->mmio, io->mmio_gpa + i);
}
#ifdef __x86_64__
if (j & 1)
outl(io->pio_port, io->pio_port + i);
else
#endif
WRITE_ONCE(*io->mmio, io->mmio_gpa + i);
}
GUEST_SYNC(0);
WRITE_ONCE(*io->mmio, io->mmio_gpa + i);
#ifdef __x86_64__
outl(io->pio_port, io->pio_port + i);
#endif
}
}
static void vcpu_run_and_verify_io_exit(struct kvm_vcpu *vcpu,
struct kvm_coalesced_io *io,
uint32_t ring_start,
uint32_t expected_exit)
{
const bool want_pio = expected_exit == KVM_EXIT_IO;
struct kvm_coalesced_mmio_ring *ring = io->ring;
struct kvm_run *run = vcpu->run;
uint32_t pio_value;
WRITE_ONCE(ring->first, ring_start);
WRITE_ONCE(ring->last, ring_start);
vcpu_run(vcpu);
/*
* Annoyingly, reading PIO data is safe only for PIO exits, otherwise
* data_offset is garbage, e.g. an MMIO gpa.
*/
if (run->exit_reason == KVM_EXIT_IO)
pio_value = *(uint32_t *)((void *)run + run->io.data_offset);
else
pio_value = 0;
TEST_ASSERT((!want_pio && (run->exit_reason == KVM_EXIT_MMIO && run->mmio.is_write &&
run->mmio.phys_addr == io->mmio_gpa && run->mmio.len == 8 &&
*(uint64_t *)run->mmio.data == io->mmio_gpa + io->ring_size - 1)) ||
(want_pio && (run->exit_reason == KVM_EXIT_IO && run->io.port == io->pio_port &&
run->io.direction == KVM_EXIT_IO_OUT && run->io.count == 1 &&
pio_value == io->pio_port + io->ring_size - 1)),
"For start = %u, expected exit on %u-byte %s write 0x%llx = %lx, got exit_reason = %u (%s)\n "
"(MMIO addr = 0x%llx, write = %u, len = %u, data = %lx)\n "
"(PIO port = 0x%x, write = %u, len = %u, count = %u, data = %x",
ring_start, want_pio ? 4 : 8, want_pio ? "PIO" : "MMIO",
want_pio ? (unsigned long long)io->pio_port : io->mmio_gpa,
(want_pio ? io->pio_port : io->mmio_gpa) + io->ring_size - 1, run->exit_reason,
run->exit_reason == KVM_EXIT_MMIO ? "MMIO" : run->exit_reason == KVM_EXIT_IO ? "PIO" : "other",
run->mmio.phys_addr, run->mmio.is_write, run->mmio.len, *(uint64_t *)run->mmio.data,
run->io.port, run->io.direction, run->io.size, run->io.count, pio_value);
}
static void vcpu_run_and_verify_coalesced_io(struct kvm_vcpu *vcpu,
struct kvm_coalesced_io *io,
uint32_t ring_start,
uint32_t expected_exit)
{
struct kvm_coalesced_mmio_ring *ring = io->ring;
int i;
vcpu_run_and_verify_io_exit(vcpu, io, ring_start, expected_exit);
TEST_ASSERT((ring->last + 1) % io->ring_size == ring->first,
"Expected ring to be full (minus 1), first = %u, last = %u, max = %u, start = %u",
ring->first, ring->last, io->ring_size, ring_start);
for (i = 0; i < io->ring_size - 1; i++) {
uint32_t idx = (ring->first + i) % io->ring_size;
struct kvm_coalesced_mmio *entry = &ring->coalesced_mmio[idx];
#ifdef __x86_64__
if (i & 1)
TEST_ASSERT(entry->phys_addr == io->pio_port &&
entry->len == 4 && entry->pio &&
*(uint32_t *)entry->data == io->pio_port + i,
"Wanted 4-byte port I/O 0x%x = 0x%x in entry %u, got %u-byte %s 0x%llx = 0x%x",
io->pio_port, io->pio_port + i, i,
entry->len, entry->pio ? "PIO" : "MMIO",
entry->phys_addr, *(uint32_t *)entry->data);
else
#endif
TEST_ASSERT(entry->phys_addr == io->mmio_gpa &&
entry->len == 8 && !entry->pio,
"Wanted 8-byte MMIO to 0x%lx = %lx in entry %u, got %u-byte %s 0x%llx = 0x%lx",
io->mmio_gpa, io->mmio_gpa + i, i,
entry->len, entry->pio ? "PIO" : "MMIO",
entry->phys_addr, *(uint64_t *)entry->data);
}
}
static void test_coalesced_io(struct kvm_vcpu *vcpu,
struct kvm_coalesced_io *io, uint32_t ring_start)
{
struct kvm_coalesced_mmio_ring *ring = io->ring;
kvm_vm_register_coalesced_io(vcpu->vm, io->mmio_gpa, 8, false /* pio */);
#ifdef __x86_64__
kvm_vm_register_coalesced_io(vcpu->vm, io->pio_port, 8, true /* pio */);
#endif
vcpu_run_and_verify_coalesced_io(vcpu, io, ring_start, KVM_EXIT_MMIO);
#ifdef __x86_64__
vcpu_run_and_verify_coalesced_io(vcpu, io, ring_start, KVM_EXIT_IO);
#endif
/*
* Verify ucall, which may use non-coalesced MMIO or PIO, generates an
* immediate exit.
*/
WRITE_ONCE(ring->first, ring_start);
WRITE_ONCE(ring->last, ring_start);
vcpu_run(vcpu);
TEST_ASSERT_EQ(get_ucall(vcpu, NULL), UCALL_SYNC);
TEST_ASSERT_EQ(ring->first, ring_start);
TEST_ASSERT_EQ(ring->last, ring_start);
/* Verify that non-coalesced MMIO/PIO generates an exit to userspace. */
kvm_vm_unregister_coalesced_io(vcpu->vm, io->mmio_gpa, 8, false /* pio */);
vcpu_run_and_verify_io_exit(vcpu, io, ring_start, KVM_EXIT_MMIO);
#ifdef __x86_64__
kvm_vm_unregister_coalesced_io(vcpu->vm, io->pio_port, 8, true /* pio */);
vcpu_run_and_verify_io_exit(vcpu, io, ring_start, KVM_EXIT_IO);
#endif
}
int main(int argc, char *argv[])
{
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
int i;
TEST_REQUIRE(kvm_has_cap(KVM_CAP_COALESCED_MMIO));
#ifdef __x86_64__
TEST_REQUIRE(kvm_has_cap(KVM_CAP_COALESCED_PIO));
#endif
vm = vm_create_with_one_vcpu(&vcpu, guest_code);
kvm_builtin_io_ring = (struct kvm_coalesced_io) {
/*
* The I/O ring is a kernel-allocated page whose address is
* relative to each vCPU's run page, with the page offset
* provided by KVM in the return of KVM_CAP_COALESCED_MMIO.
*/
.ring = (void *)vcpu->run +
(kvm_check_cap(KVM_CAP_COALESCED_MMIO) * getpagesize()),
/*
* The size of the I/O ring is fixed, but KVM defines the size
* based on the kernel's PAGE_SIZE. Thus, userspace must query
* the host's page size at runtime to compute the ring size.
*/
.ring_size = (getpagesize() - sizeof(struct kvm_coalesced_mmio_ring)) /
sizeof(struct kvm_coalesced_mmio),
/*
* Arbitrary address+port (MMIO mustn't overlap memslots), with
* the MMIO GPA identity mapped in the guest.
*/
.mmio_gpa = 4ull * SZ_1G,
.mmio = (uint64_t *)(4ull * SZ_1G),
.pio_port = 0x80,
};
virt_map(vm, (uint64_t)kvm_builtin_io_ring.mmio, kvm_builtin_io_ring.mmio_gpa, 1);
sync_global_to_guest(vm, kvm_builtin_io_ring);
vcpu_args_set(vcpu, 1, &kvm_builtin_io_ring);
for (i = 0; i < kvm_builtin_io_ring.ring_size; i++)
test_coalesced_io(vcpu, &kvm_builtin_io_ring, i);
kvm_vm_free(vm);
return 0;
}
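For a sense of the numbers involved: assuming the usual UAPI layouts (an 8-byte first/last ring header and 24-byte kvm_coalesced_mmio entries) and 4 KiB pages, the ring_size computed in main() works out to (4096 - 8) / 24 = 170 entries, so each inner guest loop performs 169 coalesced writes before the final write forces an exit to userspace. A back-of-envelope check of that arithmetic:

/* Back-of-envelope ring-size check; struct sizes are assumptions, not taken from the test. */
#include <stdio.h>

int main(void)
{
        const unsigned long header = 8;         /* __u32 first + __u32 last */
        const unsigned long entry = 24;         /* kvm_coalesced_mmio: 8 + 4 + 4 + 8 bytes */

        printf("%lu\n", (4096 - header) / entry);       /* prints 170 */
        return 0;
}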


@@ -107,6 +107,21 @@ static void ucall_abort(const char *assert_msg, const char *expected_assert_msg)
expected_assert_msg, &assert_msg[offset]);
}
+/*
+* Open code vcpu_run(), sans the UCALL_ABORT handling, so that intentional
+* guest asserts can be verified instead of being reported as failures.
+*/
+static void do_vcpu_run(struct kvm_vcpu *vcpu)
+{
+int r;
+
+do {
+r = __vcpu_run(vcpu);
+} while (r == -1 && errno == EINTR);
+
+TEST_ASSERT(!r, KVM_IOCTL_ERROR(KVM_RUN, r));
+}
+
static void run_test(struct kvm_vcpu *vcpu, const char *expected_printf,
const char *expected_assert)
{
@@ -114,7 +129,7 @@ static void run_test(struct kvm_vcpu *vcpu, const char *expected_printf,
struct ucall uc;
while (1) {
-vcpu_run(vcpu);
+do_vcpu_run(vcpu);
TEST_ASSERT(run->exit_reason == UCALL_EXIT_REASON,
"Unexpected exit reason: %u (%s),",
@@ -159,7 +174,7 @@ static void test_limits(void)
vm = vm_create_with_one_vcpu(&vcpu, guest_code_limits);
run = vcpu->run;
-vcpu_run(vcpu);
+do_vcpu_run(vcpu);
TEST_ASSERT(run->exit_reason == UCALL_EXIT_REASON,
"Unexpected exit reason: %u (%s),",


@ -428,8 +428,6 @@ const char *vm_guest_mode_string(uint32_t i);
void kvm_vm_free(struct kvm_vm *vmp); void kvm_vm_free(struct kvm_vm *vmp);
void kvm_vm_restart(struct kvm_vm *vmp); void kvm_vm_restart(struct kvm_vm *vmp);
void kvm_vm_release(struct kvm_vm *vmp); void kvm_vm_release(struct kvm_vm *vmp);
int kvm_memcmp_hva_gva(void *hva, struct kvm_vm *vm, const vm_vaddr_t gva,
size_t len);
void kvm_vm_elf_load(struct kvm_vm *vm, const char *filename); void kvm_vm_elf_load(struct kvm_vm *vm, const char *filename);
int kvm_memfd_alloc(size_t size, bool hugepages); int kvm_memfd_alloc(size_t size, bool hugepages);
@ -460,6 +458,32 @@ static inline uint32_t kvm_vm_reset_dirty_ring(struct kvm_vm *vm)
return __vm_ioctl(vm, KVM_RESET_DIRTY_RINGS, NULL); return __vm_ioctl(vm, KVM_RESET_DIRTY_RINGS, NULL);
} }
static inline void kvm_vm_register_coalesced_io(struct kvm_vm *vm,
uint64_t address,
uint64_t size, bool pio)
{
struct kvm_coalesced_mmio_zone zone = {
.addr = address,
.size = size,
.pio = pio,
};
vm_ioctl(vm, KVM_REGISTER_COALESCED_MMIO, &zone);
}
static inline void kvm_vm_unregister_coalesced_io(struct kvm_vm *vm,
uint64_t address,
uint64_t size, bool pio)
{
struct kvm_coalesced_mmio_zone zone = {
.addr = address,
.size = size,
.pio = pio,
};
vm_ioctl(vm, KVM_UNREGISTER_COALESCED_MMIO, &zone);
}
static inline int vm_get_stats_fd(struct kvm_vm *vm) static inline int vm_get_stats_fd(struct kvm_vm *vm)
{ {
int fd = __vm_ioctl(vm, KVM_GET_STATS_FD, NULL); int fd = __vm_ioctl(vm, KVM_GET_STATS_FD, NULL);
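Usage mirrors the coalesced I/O test above; a minimal sketch:

/* Example: coalesce 8-byte MMIO writes to 'gpa' while the vCPU runs, then undo it. */
static void example_run_with_coalescing(struct kvm_vm *vm, struct kvm_vcpu *vcpu, uint64_t gpa)
{
        kvm_vm_register_coalesced_io(vm, gpa, 8, false /* pio */);
        vcpu_run(vcpu);
        kvm_vm_unregister_coalesced_io(vm, gpa, 8, false /* pio */);
}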


@@ -0,0 +1,69 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Definition for kernel virtual machines on s390x
*
* Copyright IBM Corp. 2024
*
* Authors:
* Christoph Schlameuss <schlameuss@linux.ibm.com>
*/
#ifndef SELFTEST_KVM_DEBUG_PRINT_H
#define SELFTEST_KVM_DEBUG_PRINT_H
#include "asm/ptrace.h"
#include "kvm_util.h"
#include "sie.h"
static inline void print_hex_bytes(const char *name, u64 addr, size_t len)
{
u64 pos;
pr_debug("%s (%p)\n", name, (void *)addr);
pr_debug(" 0/0x00---------|");
if (len > 8)
pr_debug(" 8/0x08---------|");
if (len > 16)
pr_debug(" 16/0x10--------|");
if (len > 24)
pr_debug(" 24/0x18--------|");
for (pos = 0; pos < len; pos += 8) {
if ((pos % 32) == 0)
pr_debug("\n %3lu 0x%.3lx ", pos, pos);
pr_debug(" %16lx", *((u64 *)(addr + pos)));
}
pr_debug("\n");
}
static inline void print_hex(const char *name, u64 addr)
{
print_hex_bytes(name, addr, 512);
}
static inline void print_psw(struct kvm_run *run, struct kvm_s390_sie_block *sie_block)
{
pr_debug("flags:0x%x psw:0x%.16llx:0x%.16llx exit:%u %s\n",
run->flags,
run->psw_mask, run->psw_addr,
run->exit_reason, exit_reason_str(run->exit_reason));
pr_debug("sie_block psw:0x%.16llx:0x%.16llx\n",
sie_block->psw_mask, sie_block->psw_addr);
}
static inline void print_run(struct kvm_run *run, struct kvm_s390_sie_block *sie_block)
{
print_hex_bytes("run", (u64)run, 0x150);
print_hex("sie_block", (u64)sie_block);
print_psw(run, sie_block);
}
static inline void print_regs(struct kvm_run *run)
{
struct kvm_sync_regs *sync_regs = &run->s.regs;
print_hex_bytes("GPRS", (u64)sync_regs->gprs, 8 * NUM_GPRS);
print_hex_bytes("ACRS", (u64)sync_regs->acrs, 4 * NUM_ACRS);
print_hex_bytes("CRS", (u64)sync_regs->crs, 8 * NUM_CRS);
}
#endif /* SELFTEST_KVM_DEBUG_PRINT_H */


@@ -21,6 +21,11 @@
#define PAGE_PROTECT 0x200 /* HW read-only bit */
#define PAGE_NOEXEC 0x100 /* HW no-execute bit */
+/* Page size definitions */
+#define PAGE_SHIFT 12
+#define PAGE_SIZE BIT_ULL(PAGE_SHIFT)
+#define PAGE_MASK (~(PAGE_SIZE - 1))
+
/* Is there a portable way to do this? */
static inline void cpu_relax(void)
{


@@ -0,0 +1,240 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Definition for kernel virtual machines on s390.
*
* Adapted copy of struct definition kvm_s390_sie_block from
* arch/s390/include/asm/kvm_host.h for use in userspace selftest programs.
*
* Copyright IBM Corp. 2008, 2024
*
* Authors:
* Christoph Schlameuss <schlameuss@linux.ibm.com>
* Carsten Otte <cotte@de.ibm.com>
*/
#ifndef SELFTEST_KVM_SIE_H
#define SELFTEST_KVM_SIE_H
#include <linux/types.h>
struct kvm_s390_sie_block {
#define CPUSTAT_STOPPED 0x80000000
#define CPUSTAT_WAIT 0x10000000
#define CPUSTAT_ECALL_PEND 0x08000000
#define CPUSTAT_STOP_INT 0x04000000
#define CPUSTAT_IO_INT 0x02000000
#define CPUSTAT_EXT_INT 0x01000000
#define CPUSTAT_RUNNING 0x00800000
#define CPUSTAT_RETAINED 0x00400000
#define CPUSTAT_TIMING_SUB 0x00020000
#define CPUSTAT_SIE_SUB 0x00010000
#define CPUSTAT_RRF 0x00008000
#define CPUSTAT_SLSV 0x00004000
#define CPUSTAT_SLSR 0x00002000
#define CPUSTAT_ZARCH 0x00000800
#define CPUSTAT_MCDS 0x00000100
#define CPUSTAT_KSS 0x00000200
#define CPUSTAT_SM 0x00000080
#define CPUSTAT_IBS 0x00000040
#define CPUSTAT_GED2 0x00000010
#define CPUSTAT_G 0x00000008
#define CPUSTAT_GED 0x00000004
#define CPUSTAT_J 0x00000002
#define CPUSTAT_P 0x00000001
__u32 cpuflags; /* 0x0000 */
__u32: 1; /* 0x0004 */
__u32 prefix : 18;
__u32: 1;
__u32 ibc : 12;
__u8 reserved08[4]; /* 0x0008 */
#define PROG_IN_SIE BIT(0)
__u32 prog0c; /* 0x000c */
union {
__u8 reserved10[16]; /* 0x0010 */
struct {
__u64 pv_handle_cpu;
__u64 pv_handle_config;
};
};
#define PROG_BLOCK_SIE BIT(0)
#define PROG_REQUEST BIT(1)
__u32 prog20; /* 0x0020 */
__u8 reserved24[4]; /* 0x0024 */
__u64 cputm; /* 0x0028 */
__u64 ckc; /* 0x0030 */
__u64 epoch; /* 0x0038 */
__u32 svcc; /* 0x0040 */
#define LCTL_CR0 0x8000
#define LCTL_CR6 0x0200
#define LCTL_CR9 0x0040
#define LCTL_CR10 0x0020
#define LCTL_CR11 0x0010
#define LCTL_CR14 0x0002
__u16 lctl; /* 0x0044 */
__s16 icpua; /* 0x0046 */
#define ICTL_OPEREXC 0x80000000
#define ICTL_PINT 0x20000000
#define ICTL_LPSW 0x00400000
#define ICTL_STCTL 0x00040000
#define ICTL_ISKE 0x00004000
#define ICTL_SSKE 0x00002000
#define ICTL_RRBE 0x00001000
#define ICTL_TPROT 0x00000200
__u32 ictl; /* 0x0048 */
#define ECA_CEI 0x80000000
#define ECA_IB 0x40000000
#define ECA_SIGPI 0x10000000
#define ECA_MVPGI 0x01000000
#define ECA_AIV 0x00200000
#define ECA_VX 0x00020000
#define ECA_PROTEXCI 0x00002000
#define ECA_APIE 0x00000008
#define ECA_SII 0x00000001
__u32 eca; /* 0x004c */
#define ICPT_INST 0x04
#define ICPT_PROGI 0x08
#define ICPT_INSTPROGI 0x0C
#define ICPT_EXTREQ 0x10
#define ICPT_EXTINT 0x14
#define ICPT_IOREQ 0x18
#define ICPT_WAIT 0x1c
#define ICPT_VALIDITY 0x20
#define ICPT_STOP 0x28
#define ICPT_OPEREXC 0x2C
#define ICPT_PARTEXEC 0x38
#define ICPT_IOINST 0x40
#define ICPT_KSS 0x5c
#define ICPT_MCHKREQ 0x60
#define ICPT_INT_ENABLE 0x64
#define ICPT_PV_INSTR 0x68
#define ICPT_PV_NOTIFY 0x6c
#define ICPT_PV_PREF 0x70
__u8 icptcode; /* 0x0050 */
__u8 icptstatus; /* 0x0051 */
__u16 ihcpu; /* 0x0052 */
__u8 reserved54; /* 0x0054 */
#define IICTL_CODE_NONE 0x00
#define IICTL_CODE_MCHK 0x01
#define IICTL_CODE_EXT 0x02
#define IICTL_CODE_IO 0x03
#define IICTL_CODE_RESTART 0x04
#define IICTL_CODE_SPECIFICATION 0x10
#define IICTL_CODE_OPERAND 0x11
__u8 iictl; /* 0x0055 */
__u16 ipa; /* 0x0056 */
__u32 ipb; /* 0x0058 */
__u32 scaoh; /* 0x005c */
#define FPF_BPBC 0x20
__u8 fpf; /* 0x0060 */
#define ECB_GS 0x40
#define ECB_TE 0x10
#define ECB_SPECI 0x08
#define ECB_SRSI 0x04
#define ECB_HOSTPROTINT 0x02
#define ECB_PTF 0x01
__u8 ecb; /* 0x0061 */
#define ECB2_CMMA 0x80
#define ECB2_IEP 0x20
#define ECB2_PFMFI 0x08
#define ECB2_ESCA 0x04
#define ECB2_ZPCI_LSI 0x02
__u8 ecb2; /* 0x0062 */
#define ECB3_AISI 0x20
#define ECB3_AISII 0x10
#define ECB3_DEA 0x08
#define ECB3_AES 0x04
#define ECB3_RI 0x01
__u8 ecb3; /* 0x0063 */
#define ESCA_SCAOL_MASK ~0x3fU
__u32 scaol; /* 0x0064 */
__u8 sdf; /* 0x0068 */
__u8 epdx; /* 0x0069 */
__u8 cpnc; /* 0x006a */
__u8 reserved6b; /* 0x006b */
__u32 todpr; /* 0x006c */
#define GISA_FORMAT1 0x00000001
__u32 gd; /* 0x0070 */
__u8 reserved74[12]; /* 0x0074 */
__u64 mso; /* 0x0080 */
__u64 msl; /* 0x0088 */
__u64 psw_mask; /* 0x0090 */
__u64 psw_addr; /* 0x0098 */
__u64 gg14; /* 0x00a0 */
__u64 gg15; /* 0x00a8 */
__u8 reservedb0[8]; /* 0x00b0 */
#define HPID_KVM 0x4
#define HPID_VSIE 0x5
__u8 hpid; /* 0x00b8 */
__u8 reservedb9[7]; /* 0x00b9 */
union {
struct {
__u32 eiparams; /* 0x00c0 */
__u16 extcpuaddr; /* 0x00c4 */
__u16 eic; /* 0x00c6 */
};
__u64 mcic; /* 0x00c0 */
} __packed;
__u32 reservedc8; /* 0x00c8 */
union {
struct {
__u16 pgmilc; /* 0x00cc */
__u16 iprcc; /* 0x00ce */
};
__u32 edc; /* 0x00cc */
} __packed;
union {
struct {
__u32 dxc; /* 0x00d0 */
__u16 mcn; /* 0x00d4 */
__u8 perc; /* 0x00d6 */
__u8 peratmid; /* 0x00d7 */
};
__u64 faddr; /* 0x00d0 */
} __packed;
__u64 peraddr; /* 0x00d8 */
__u8 eai; /* 0x00e0 */
__u8 peraid; /* 0x00e1 */
__u8 oai; /* 0x00e2 */
__u8 armid; /* 0x00e3 */
__u8 reservede4[4]; /* 0x00e4 */
union {
__u64 tecmc; /* 0x00e8 */
struct {
__u16 subchannel_id; /* 0x00e8 */
__u16 subchannel_nr; /* 0x00ea */
__u32 io_int_parm; /* 0x00ec */
__u32 io_int_word; /* 0x00f0 */
};
} __packed;
__u8 reservedf4[8]; /* 0x00f4 */
#define CRYCB_FORMAT_MASK 0x00000003
#define CRYCB_FORMAT0 0x00000000
#define CRYCB_FORMAT1 0x00000001
#define CRYCB_FORMAT2 0x00000003
__u32 crycbd; /* 0x00fc */
__u64 gcr[16]; /* 0x0100 */
union {
__u64 gbea; /* 0x0180 */
__u64 sidad;
};
__u8 reserved188[8]; /* 0x0188 */
__u64 sdnxo; /* 0x0190 */
__u8 reserved198[8]; /* 0x0198 */
__u32 fac; /* 0x01a0 */
__u8 reserved1a4[20]; /* 0x01a4 */
__u64 cbrlo; /* 0x01b8 */
__u8 reserved1c0[8]; /* 0x01c0 */
#define ECD_HOSTREGMGMT 0x20000000
#define ECD_MEF 0x08000000
#define ECD_ETOKENF 0x02000000
#define ECD_ECC 0x00200000
__u32 ecd; /* 0x01c8 */
__u8 reserved1cc[18]; /* 0x01cc */
__u64 pp; /* 0x01de */
__u8 reserved1e6[2]; /* 0x01e6 */
__u64 itdba; /* 0x01e8 */
__u64 riccbd; /* 0x01f0 */
__u64 gvrd; /* 0x01f8 */
} __packed __aligned(512);
#endif /* SELFTEST_KVM_SIE_H */


@@ -11,6 +11,7 @@
#include <stdint.h>
#include "processor.h"
+#include "ucall_common.h"
#define APIC_DEFAULT_GPA 0xfee00000ULL
@@ -93,9 +94,27 @@ static inline uint64_t x2apic_read_reg(unsigned int reg)
return rdmsr(APIC_BASE_MSR + (reg >> 4));
}
-static inline void x2apic_write_reg(unsigned int reg, uint64_t value)
+static inline uint8_t x2apic_write_reg_safe(unsigned int reg, uint64_t value)
{
-wrmsr(APIC_BASE_MSR + (reg >> 4), value);
+return wrmsr_safe(APIC_BASE_MSR + (reg >> 4), value);
}
+static inline void x2apic_write_reg(unsigned int reg, uint64_t value)
+{
+uint8_t fault = x2apic_write_reg_safe(reg, value);
+
+__GUEST_ASSERT(!fault, "Unexpected fault 0x%x on WRMSR(%x) = %lx\n",
+fault, APIC_BASE_MSR + (reg >> 4), value);
+}
+
+static inline void x2apic_write_reg_fault(unsigned int reg, uint64_t value)
+{
+uint8_t fault = x2apic_write_reg_safe(reg, value);
+
+__GUEST_ASSERT(fault == GP_VECTOR,
+"Wanted #GP on WRMSR(%x) = %lx, got 0x%x\n",
+APIC_BASE_MSR + (reg >> 4), value, fault);
+}
+
#endif /* SELFTEST_KVM_APIC_H */
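A guest-side usage sketch (register choices are purely illustrative and assume the usual apicdef-style constants): the _fault variant is meant for negative tests that expect the WRMSR to be rejected.

/* Illustrative guest code: one write that must succeed, one that must #GP. */
static void example_x2apic_guest_code(void)
{
        x2apic_write_reg(APIC_TASKPRI, 0);      /* TPR is writable, assert no fault */
        x2apic_write_reg_fault(APIC_ID, 0);     /* x2APIC ID is read-only, expect #GP */
}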


@@ -186,6 +186,18 @@
#define HV_X64_ENLIGHTENED_VMCS_RECOMMENDED \
KVM_X86_CPU_FEATURE(HYPERV_CPUID_ENLIGHTMENT_INFO, 0, EAX, 14)
+/* HYPERV_CPUID_NESTED_FEATURES.EAX */
+#define HV_X64_NESTED_DIRECT_FLUSH \
+KVM_X86_CPU_FEATURE(HYPERV_CPUID_NESTED_FEATURES, 0, EAX, 17)
+#define HV_X64_NESTED_GUEST_MAPPING_FLUSH \
+KVM_X86_CPU_FEATURE(HYPERV_CPUID_NESTED_FEATURES, 0, EAX, 18)
+#define HV_X64_NESTED_MSR_BITMAP \
+KVM_X86_CPU_FEATURE(HYPERV_CPUID_NESTED_FEATURES, 0, EAX, 19)
+
+/* HYPERV_CPUID_NESTED_FEATURES.EBX */
+#define HV_X64_NESTED_EVMCS1_PERF_GLOBAL_CTRL \
+KVM_X86_CPU_FEATURE(HYPERV_CPUID_NESTED_FEATURES, 0, EBX, 0)
+
/* HYPERV_CPUID_SYNDBG_PLATFORM_CAPABILITIES.EAX */
#define HV_X64_SYNDBG_CAP_ALLOW_KERNEL_DEBUGGING \
KVM_X86_CPU_FEATURE(HYPERV_CPUID_SYNDBG_PLATFORM_CAPABILITIES, 0, EAX, 1)
@@ -343,4 +355,10 @@ struct hyperv_test_pages *vcpu_alloc_hyperv_test_pages(struct kvm_vm *vm,
/* HV_X64_MSR_TSC_INVARIANT_CONTROL bits */
#define HV_INVARIANT_TSC_EXPOSED BIT_ULL(0)
+const struct kvm_cpuid2 *kvm_get_supported_hv_cpuid(void);
+const struct kvm_cpuid2 *vcpu_get_supported_hv_cpuid(struct kvm_vcpu *vcpu);
+void vcpu_set_hv_cpuid(struct kvm_vcpu *vcpu);
+
+bool kvm_hv_cpu_has(struct kvm_x86_cpu_feature feature);
+
#endif /* !SELFTEST_KVM_HYPERV_H */


@@ -25,6 +25,10 @@ extern bool host_cpu_is_intel;
extern bool host_cpu_is_amd;
extern uint64_t guest_tsc_khz;
+#ifndef MAX_NR_CPUID_ENTRIES
+#define MAX_NR_CPUID_ENTRIES 100
+#endif
+
/* Forced emulation prefix, used to invoke the emulator unconditionally. */
#define KVM_FEP "ud2; .byte 'k', 'v', 'm';"
@@ -908,8 +912,6 @@ static inline void vcpu_xcrs_set(struct kvm_vcpu *vcpu, struct kvm_xcrs *xcrs)
const struct kvm_cpuid_entry2 *get_cpuid_entry(const struct kvm_cpuid2 *cpuid,
uint32_t function, uint32_t index);
const struct kvm_cpuid2 *kvm_get_supported_cpuid(void);
-const struct kvm_cpuid2 *kvm_get_supported_hv_cpuid(void);
-const struct kvm_cpuid2 *vcpu_get_supported_hv_cpuid(struct kvm_vcpu *vcpu);
static inline uint32_t kvm_cpu_fms(void)
{
@@ -1009,7 +1011,6 @@ static inline struct kvm_cpuid2 *allocate_kvm_cpuid2(int nr_entries)
}
void vcpu_init_cpuid(struct kvm_vcpu *vcpu, const struct kvm_cpuid2 *cpuid);
-void vcpu_set_hv_cpuid(struct kvm_vcpu *vcpu);
static inline struct kvm_cpuid_entry2 *__vcpu_get_cpuid_entry(struct kvm_vcpu *vcpu,
uint32_t function,

@@ -712,16 +712,13 @@ void kvm_vm_release(struct kvm_vm *vmp)
 }
 
 static void __vm_mem_region_delete(struct kvm_vm *vm,
-				   struct userspace_mem_region *region,
-				   bool unlink)
+				   struct userspace_mem_region *region)
 {
 	int ret;
 
-	if (unlink) {
-		rb_erase(&region->gpa_node, &vm->regions.gpa_tree);
-		rb_erase(&region->hva_node, &vm->regions.hva_tree);
-		hash_del(&region->slot_node);
-	}
+	rb_erase(&region->gpa_node, &vm->regions.gpa_tree);
+	rb_erase(&region->hva_node, &vm->regions.hva_tree);
+	hash_del(&region->slot_node);
 
 	region->region.memory_size = 0;
 	vm_ioctl(vm, KVM_SET_USER_MEMORY_REGION2, &region->region);
@@ -762,7 +759,7 @@ void kvm_vm_free(struct kvm_vm *vmp)
 
 	/* Free userspace_mem_regions. */
 	hash_for_each_safe(vmp->regions.slot_hash, ctr, node, region, slot_node)
-		__vm_mem_region_delete(vmp, region, false);
+		__vm_mem_region_delete(vmp, region);
 
 	/* Free sparsebit arrays. */
 	sparsebit_free(&vmp->vpages_valid);
@@ -794,76 +791,6 @@ int kvm_memfd_alloc(size_t size, bool hugepages)
 
 	return fd;
 }
/*
* Memory Compare, host virtual to guest virtual
*
* Input Args:
* hva - Starting host virtual address
* vm - Virtual Machine
* gva - Starting guest virtual address
* len - number of bytes to compare
*
* Output Args: None
*
* Input/Output Args: None
*
* Return:
* Returns 0 if the bytes starting at hva for a length of len
* are equal the guest virtual bytes starting at gva. Returns
* a value < 0, if bytes at hva are less than those at gva.
* Otherwise a value > 0 is returned.
*
* Compares the bytes starting at the host virtual address hva, for
* a length of len, to the guest bytes starting at the guest virtual
* address given by gva.
*/
int kvm_memcmp_hva_gva(void *hva, struct kvm_vm *vm, vm_vaddr_t gva, size_t len)
{
size_t amt;
/*
* Compare a batch of bytes until either a match is found
* or all the bytes have been compared.
*/
for (uintptr_t offset = 0; offset < len; offset += amt) {
uintptr_t ptr1 = (uintptr_t)hva + offset;
/*
* Determine host address for guest virtual address
* at offset.
*/
uintptr_t ptr2 = (uintptr_t)addr_gva2hva(vm, gva + offset);
/*
* Determine amount to compare on this pass.
* Don't allow the comparsion to cross a page boundary.
*/
amt = len - offset;
if ((ptr1 >> vm->page_shift) != ((ptr1 + amt) >> vm->page_shift))
amt = vm->page_size - (ptr1 % vm->page_size);
if ((ptr2 >> vm->page_shift) != ((ptr2 + amt) >> vm->page_shift))
amt = vm->page_size - (ptr2 % vm->page_size);
assert((ptr1 >> vm->page_shift) == ((ptr1 + amt - 1) >> vm->page_shift));
assert((ptr2 >> vm->page_shift) == ((ptr2 + amt - 1) >> vm->page_shift));
/*
* Perform the comparison. If there is a difference
* return that result to the caller, otherwise need
* to continue on looking for a mismatch.
*/
int ret = memcmp((void *)ptr1, (void *)ptr2, amt);
if (ret != 0)
return ret;
}
/*
* No mismatch found. Let the caller know the two memory
* areas are equal.
*/
return 0;
}
 static void vm_userspace_mem_region_gpa_insert(struct rb_root *gpa_tree,
 					       struct userspace_mem_region *region)
 {
@@ -1270,7 +1197,7 @@ void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, uint64_t new_gpa)
  */
 void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot)
 {
-	__vm_mem_region_delete(vm, memslot2region(vm, slot), true);
+	__vm_mem_region_delete(vm, memslot2region(vm, slot));
 }
 
 void vm_guest_mem_fallocate(struct kvm_vm *vm, uint64_t base, uint64_t size,


@@ -14,7 +14,7 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm)
 {
 	vm_paddr_t paddr;
 
-	TEST_ASSERT(vm->page_size == 4096, "Unsupported page size: 0x%x",
+	TEST_ASSERT(vm->page_size == PAGE_SIZE, "Unsupported page size: 0x%x",
 		    vm->page_size);
 
 	if (vm->pgd_created)
@@ -79,7 +79,7 @@ void virt_arch_pg_map(struct kvm_vm *vm, uint64_t gva, uint64_t gpa)
 	}
 
 	/* Fill in page table entry */
-	idx = (gva >> 12) & 0x0ffu; /* page index */
+	idx = (gva >> PAGE_SHIFT) & 0x0ffu; /* page index */
 	if (!(entry[idx] & PAGE_INVALID))
 		fprintf(stderr,
 			"WARNING: PTE for gpa=0x%"PRIx64" already set!\n", gpa);
@@ -91,7 +91,7 @@ vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
 	int ri, idx;
 	uint64_t *entry;
 
-	TEST_ASSERT(vm->page_size == 4096, "Unsupported page size: 0x%x",
+	TEST_ASSERT(vm->page_size == PAGE_SIZE, "Unsupported page size: 0x%x",
 		    vm->page_size);
 
 	entry = addr_gpa2hva(vm, vm->pgd);
@@ -103,7 +103,7 @@ vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
 		entry = addr_gpa2hva(vm, entry[idx] & REGION_ENTRY_ORIGIN);
 	}
 
-	idx = (gva >> 12) & 0x0ffu; /* page index */
+	idx = (gva >> PAGE_SHIFT) & 0x0ffu; /* page index */
 	TEST_ASSERT(!(entry[idx] & PAGE_INVALID),
 		    "No page mapping for vm virtual address 0x%lx", gva);
 
@@ -168,7 +168,7 @@ struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id)
 	struct kvm_sregs sregs;
 	struct kvm_vcpu *vcpu;
 
-	TEST_ASSERT(vm->page_size == 4096, "Unsupported page size: 0x%x",
+	TEST_ASSERT(vm->page_size == PAGE_SIZE, "Unsupported page size: 0x%x",
 		    vm->page_size);
 
 	stack_vaddr = __vm_vaddr_alloc(vm, stack_size,


@@ -8,6 +8,73 @@
#include "processor.h"
#include "hyperv.h"
const struct kvm_cpuid2 *kvm_get_supported_hv_cpuid(void)
{
static struct kvm_cpuid2 *cpuid;
int kvm_fd;
if (cpuid)
return cpuid;
cpuid = allocate_kvm_cpuid2(MAX_NR_CPUID_ENTRIES);
kvm_fd = open_kvm_dev_path_or_exit();
kvm_ioctl(kvm_fd, KVM_GET_SUPPORTED_HV_CPUID, cpuid);
close(kvm_fd);
return cpuid;
}
void vcpu_set_hv_cpuid(struct kvm_vcpu *vcpu)
{
static struct kvm_cpuid2 *cpuid_full;
const struct kvm_cpuid2 *cpuid_sys, *cpuid_hv;
int i, nent = 0;
if (!cpuid_full) {
cpuid_sys = kvm_get_supported_cpuid();
cpuid_hv = kvm_get_supported_hv_cpuid();
cpuid_full = allocate_kvm_cpuid2(cpuid_sys->nent + cpuid_hv->nent);
if (!cpuid_full) {
perror("malloc");
abort();
}
/* Need to skip KVM CPUID leaves 0x400000xx */
for (i = 0; i < cpuid_sys->nent; i++) {
if (cpuid_sys->entries[i].function >= 0x40000000 &&
cpuid_sys->entries[i].function < 0x40000100)
continue;
cpuid_full->entries[nent] = cpuid_sys->entries[i];
nent++;
}
memcpy(&cpuid_full->entries[nent], cpuid_hv->entries,
cpuid_hv->nent * sizeof(struct kvm_cpuid_entry2));
cpuid_full->nent = nent + cpuid_hv->nent;
}
vcpu_init_cpuid(vcpu, cpuid_full);
}
const struct kvm_cpuid2 *vcpu_get_supported_hv_cpuid(struct kvm_vcpu *vcpu)
{
struct kvm_cpuid2 *cpuid = allocate_kvm_cpuid2(MAX_NR_CPUID_ENTRIES);
vcpu_ioctl(vcpu, KVM_GET_SUPPORTED_HV_CPUID, cpuid);
return cpuid;
}
bool kvm_hv_cpu_has(struct kvm_x86_cpu_feature feature)
{
if (!kvm_has_cap(KVM_CAP_SYS_HYPERV_CPUID))
return false;
return kvm_cpuid_has(kvm_get_supported_hv_cpuid(), feature);
}
struct hyperv_test_pages *vcpu_alloc_hyperv_test_pages(struct kvm_vm *vm,
						       vm_vaddr_t *p_hv_pages_gva)
{

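With kvm_get_supported_hv_cpuid() now cached in common code and the new kvm_hv_cpu_has() helper, tests can gate directly on Hyper-V CPUID feature bits instead of raw KVM capabilities, as the eVMCS and SVM Hyper-V tests later in this series do. A minimal host-side sketch (the test body is elided and the feature bit is only an example):

#include "kvm_util.h"
#include "hyperv.h"

int main(int argc, char *argv[])
{
	/* Skip unless KVM advertises Hyper-V direct virtual flush support. */
	TEST_REQUIRE(kvm_hv_cpu_has(HV_X64_NESTED_DIRECT_FLUSH));

	/* ... create the VM and run the Hyper-V-specific guest code ... */
	return 0;
}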

@@ -19,8 +19,6 @@
 #define KERNEL_DS 0x10
 #define KERNEL_TSS 0x18
 
-#define MAX_NR_CPUID_ENTRIES 100
-
 vm_vaddr_t exception_handlers;
 bool host_cpu_is_amd;
 bool host_cpu_is_intel;
@@ -566,10 +564,8 @@ void route_exception(struct ex_regs *regs)
 	if (kvm_fixup_exception(regs))
 		return;
 
-	ucall_assert(UCALL_UNHANDLED,
-		     "Unhandled exception in guest", __FILE__, __LINE__,
-		     "Unhandled exception '0x%lx' at guest RIP '0x%lx'",
-		     regs->vector, regs->rip);
+	GUEST_FAIL("Unhandled exception '0x%lx' at guest RIP '0x%lx'",
+		   regs->vector, regs->rip);
 }
 
 static void vm_init_descriptor_tables(struct kvm_vm *vm)
@@ -611,7 +607,7 @@ void assert_on_unhandled_exception(struct kvm_vcpu *vcpu)
 {
 	struct ucall uc;
 
-	if (get_ucall(vcpu, &uc) == UCALL_UNHANDLED)
+	if (get_ucall(vcpu, &uc) == UCALL_ABORT)
 		REPORT_GUEST_ASSERT(uc);
 }
 
@@ -1195,65 +1191,6 @@ void xen_hypercall(uint64_t nr, uint64_t a0, void *a1)
 	GUEST_ASSERT(!__xen_hypercall(nr, a0, a1));
 }
const struct kvm_cpuid2 *kvm_get_supported_hv_cpuid(void)
{
static struct kvm_cpuid2 *cpuid;
int kvm_fd;
if (cpuid)
return cpuid;
cpuid = allocate_kvm_cpuid2(MAX_NR_CPUID_ENTRIES);
kvm_fd = open_kvm_dev_path_or_exit();
kvm_ioctl(kvm_fd, KVM_GET_SUPPORTED_HV_CPUID, cpuid);
close(kvm_fd);
return cpuid;
}
void vcpu_set_hv_cpuid(struct kvm_vcpu *vcpu)
{
static struct kvm_cpuid2 *cpuid_full;
const struct kvm_cpuid2 *cpuid_sys, *cpuid_hv;
int i, nent = 0;
if (!cpuid_full) {
cpuid_sys = kvm_get_supported_cpuid();
cpuid_hv = kvm_get_supported_hv_cpuid();
cpuid_full = allocate_kvm_cpuid2(cpuid_sys->nent + cpuid_hv->nent);
if (!cpuid_full) {
perror("malloc");
abort();
}
/* Need to skip KVM CPUID leaves 0x400000xx */
for (i = 0; i < cpuid_sys->nent; i++) {
if (cpuid_sys->entries[i].function >= 0x40000000 &&
cpuid_sys->entries[i].function < 0x40000100)
continue;
cpuid_full->entries[nent] = cpuid_sys->entries[i];
nent++;
}
memcpy(&cpuid_full->entries[nent], cpuid_hv->entries,
cpuid_hv->nent * sizeof(struct kvm_cpuid_entry2));
cpuid_full->nent = nent + cpuid_hv->nent;
}
vcpu_init_cpuid(vcpu, cpuid_full);
}
const struct kvm_cpuid2 *vcpu_get_supported_hv_cpuid(struct kvm_vcpu *vcpu)
{
struct kvm_cpuid2 *cpuid = allocate_kvm_cpuid2(MAX_NR_CPUID_ENTRIES);
vcpu_ioctl(vcpu, KVM_GET_SUPPORTED_HV_CPUID, cpuid);
return cpuid;
}
unsigned long vm_compute_max_gfn(struct kvm_vm *vm)
{
	const unsigned long num_ht_pages = 12 << (30 - vm->page_shift); /* 12 GiB */


@ -79,6 +79,7 @@ struct test_params {
useconds_t delay; useconds_t delay;
uint64_t nr_iterations; uint64_t nr_iterations;
bool partition_vcpu_memory_access; bool partition_vcpu_memory_access;
bool disable_slot_zap_quirk;
}; };
static void run_test(enum vm_guest_mode mode, void *arg) static void run_test(enum vm_guest_mode mode, void *arg)
@ -89,6 +90,13 @@ static void run_test(enum vm_guest_mode mode, void *arg)
vm = memstress_create_vm(mode, nr_vcpus, guest_percpu_mem_size, 1, vm = memstress_create_vm(mode, nr_vcpus, guest_percpu_mem_size, 1,
VM_MEM_SRC_ANONYMOUS, VM_MEM_SRC_ANONYMOUS,
p->partition_vcpu_memory_access); p->partition_vcpu_memory_access);
#ifdef __x86_64__
if (p->disable_slot_zap_quirk)
vm_enable_cap(vm, KVM_CAP_DISABLE_QUIRKS2, KVM_X86_QUIRK_SLOT_ZAP_ALL);
pr_info("Memslot zap quirk %s\n", p->disable_slot_zap_quirk ?
"disabled" : "enabled");
#endif
pr_info("Finished creating vCPUs\n"); pr_info("Finished creating vCPUs\n");
@ -107,11 +115,12 @@ static void run_test(enum vm_guest_mode mode, void *arg)
static void help(char *name) static void help(char *name)
{ {
puts(""); puts("");
printf("usage: %s [-h] [-m mode] [-d delay_usec]\n" printf("usage: %s [-h] [-m mode] [-d delay_usec] [-q]\n"
" [-b memory] [-v vcpus] [-o] [-i iterations]\n", name); " [-b memory] [-v vcpus] [-o] [-i iterations]\n", name);
guest_modes_help(); guest_modes_help();
printf(" -d: add a delay between each iteration of adding and\n" printf(" -d: add a delay between each iteration of adding and\n"
" deleting a memslot in usec.\n"); " deleting a memslot in usec.\n");
printf(" -q: Disable memslot zap quirk.\n");
printf(" -b: specify the size of the memory region which should be\n" printf(" -b: specify the size of the memory region which should be\n"
" accessed by each vCPU. e.g. 10M or 3G.\n" " accessed by each vCPU. e.g. 10M or 3G.\n"
" Default: 1G\n"); " Default: 1G\n");
@ -137,7 +146,7 @@ int main(int argc, char *argv[])
guest_modes_append_default(); guest_modes_append_default();
while ((opt = getopt(argc, argv, "hm:d:b:v:oi:")) != -1) { while ((opt = getopt(argc, argv, "hm:d:qb:v:oi:")) != -1) {
switch (opt) { switch (opt) {
case 'm': case 'm':
guest_modes_cmdline(optarg); guest_modes_cmdline(optarg);
@ -160,6 +169,12 @@ int main(int argc, char *argv[])
case 'i': case 'i':
p.nr_iterations = atoi_positive("Number of iterations", optarg); p.nr_iterations = atoi_positive("Number of iterations", optarg);
break; break;
case 'q':
p.disable_slot_zap_quirk = true;
TEST_REQUIRE(kvm_check_cap(KVM_CAP_DISABLE_QUIRKS2) &
KVM_X86_QUIRK_SLOT_ZAP_ALL);
break;
case 'h': case 'h':
default: default:
help(argv[0]); help(argv[0]);


@ -113,6 +113,7 @@ static_assert(ATOMIC_BOOL_LOCK_FREE == 2, "atomic bool is not lockless");
static sem_t vcpu_ready; static sem_t vcpu_ready;
static bool map_unmap_verify; static bool map_unmap_verify;
static bool disable_slot_zap_quirk;
static bool verbose; static bool verbose;
#define pr_info_v(...) \ #define pr_info_v(...) \
@ -578,6 +579,9 @@ static bool test_memslot_move_prepare(struct vm_data *data,
uint32_t guest_page_size = data->vm->page_size; uint32_t guest_page_size = data->vm->page_size;
uint64_t movesrcgpa, movetestgpa; uint64_t movesrcgpa, movetestgpa;
if (disable_slot_zap_quirk)
vm_enable_cap(data->vm, KVM_CAP_DISABLE_QUIRKS2, KVM_X86_QUIRK_SLOT_ZAP_ALL);
movesrcgpa = vm_slot2gpa(data, data->nslots - 1); movesrcgpa = vm_slot2gpa(data, data->nslots - 1);
if (isactive) { if (isactive) {
@ -896,6 +900,7 @@ static void help(char *name, struct test_args *targs)
pr_info(" -h: print this help screen.\n"); pr_info(" -h: print this help screen.\n");
pr_info(" -v: enable verbose mode (not for benchmarking).\n"); pr_info(" -v: enable verbose mode (not for benchmarking).\n");
pr_info(" -d: enable extra debug checks.\n"); pr_info(" -d: enable extra debug checks.\n");
pr_info(" -q: Disable memslot zap quirk during memslot move.\n");
pr_info(" -s: specify memslot count cap (-1 means no cap; currently: %i)\n", pr_info(" -s: specify memslot count cap (-1 means no cap; currently: %i)\n",
targs->nslots); targs->nslots);
pr_info(" -f: specify the first test to run (currently: %i; max %zu)\n", pr_info(" -f: specify the first test to run (currently: %i; max %zu)\n",
@ -954,7 +959,7 @@ static bool parse_args(int argc, char *argv[],
uint32_t max_mem_slots; uint32_t max_mem_slots;
int opt; int opt;
while ((opt = getopt(argc, argv, "hvds:f:e:l:r:")) != -1) { while ((opt = getopt(argc, argv, "hvdqs:f:e:l:r:")) != -1) {
switch (opt) { switch (opt) {
case 'h': case 'h':
default: default:
@ -966,6 +971,11 @@ static bool parse_args(int argc, char *argv[],
case 'd': case 'd':
map_unmap_verify = true; map_unmap_verify = true;
break; break;
case 'q':
disable_slot_zap_quirk = true;
TEST_REQUIRE(kvm_check_cap(KVM_CAP_DISABLE_QUIRKS2) &
KVM_X86_QUIRK_SLOT_ZAP_ALL);
break;
case 's': case 's':
targs->nslots = atoi_paranoid(optarg); targs->nslots = atoi_paranoid(optarg);
if (targs->nslots <= 1 && targs->nslots != -1) { if (targs->nslots <= 1 && targs->nslots != -1) {


@ -17,16 +17,17 @@
#include "kvm_util.h" #include "kvm_util.h"
#include "kselftest.h" #include "kselftest.h"
#include "ucall_common.h" #include "ucall_common.h"
#include "processor.h"
#define MAIN_PAGE_COUNT 512 #define MAIN_PAGE_COUNT 512
#define TEST_DATA_PAGE_COUNT 512 #define TEST_DATA_PAGE_COUNT 512
#define TEST_DATA_MEMSLOT 1 #define TEST_DATA_MEMSLOT 1
-#define TEST_DATA_START_GFN 4096
+#define TEST_DATA_START_GFN PAGE_SIZE
#define TEST_DATA_TWO_PAGE_COUNT 256 #define TEST_DATA_TWO_PAGE_COUNT 256
#define TEST_DATA_TWO_MEMSLOT 2 #define TEST_DATA_TWO_MEMSLOT 2
-#define TEST_DATA_TWO_START_GFN 8192
+#define TEST_DATA_TWO_START_GFN (2 * PAGE_SIZE)
static char cmma_value_buf[MAIN_PAGE_COUNT + TEST_DATA_PAGE_COUNT]; static char cmma_value_buf[MAIN_PAGE_COUNT + TEST_DATA_PAGE_COUNT];
@ -66,7 +67,7 @@ static void guest_dirty_test_data(void)
" lghi 5,%[page_count]\n" " lghi 5,%[page_count]\n"
/* r5 += r1 */ /* r5 += r1 */
"2: agfr 5,1\n" "2: agfr 5,1\n"
-	/* r2 = r1 << 12 */
+	/* r2 = r1 << PAGE_SHIFT */
"1: sllg 2,1,12(0)\n" "1: sllg 2,1,12(0)\n"
/* essa(r4, r2, SET_STABLE) */ /* essa(r4, r2, SET_STABLE) */
" .insn rrf,0xb9ab0000,4,2,1,0\n" " .insn rrf,0xb9ab0000,4,2,1,0\n"


@ -0,0 +1,2 @@
CONFIG_KVM=y
CONFIG_KVM_S390_UCONTROL=y


@ -2,12 +2,12 @@
/* Test KVM debugging features. */ /* Test KVM debugging features. */
#include "kvm_util.h" #include "kvm_util.h"
#include "test_util.h" #include "test_util.h"
#include "sie.h"
#include <linux/kvm.h> #include <linux/kvm.h>
#define __LC_SVC_NEW_PSW 0x1c0 #define __LC_SVC_NEW_PSW 0x1c0
#define __LC_PGM_NEW_PSW 0x1d0 #define __LC_PGM_NEW_PSW 0x1d0
#define ICPT_INSTRUCTION 0x04
#define IPA0_DIAG 0x8300 #define IPA0_DIAG 0x8300
#define PGM_SPECIFICATION 0x06 #define PGM_SPECIFICATION 0x06
@ -85,7 +85,7 @@ static void test_step_pgm_diag(void)
vm = test_step_int_1(&vcpu, test_step_pgm_diag_guest_code, vm = test_step_int_1(&vcpu, test_step_pgm_diag_guest_code,
__LC_PGM_NEW_PSW, new_psw); __LC_PGM_NEW_PSW, new_psw);
TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_S390_SIEIC); TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_S390_SIEIC);
-	TEST_ASSERT_EQ(vcpu->run->s390_sieic.icptcode, ICPT_INSTRUCTION);
+	TEST_ASSERT_EQ(vcpu->run->s390_sieic.icptcode, ICPT_INST);
TEST_ASSERT_EQ(vcpu->run->s390_sieic.ipa & 0xff00, IPA0_DIAG); TEST_ASSERT_EQ(vcpu->run->s390_sieic.ipa & 0xff00, IPA0_DIAG);
vcpu_ioctl(vcpu, KVM_S390_IRQ, &irq); vcpu_ioctl(vcpu, KVM_S390_IRQ, &irq);
vcpu_run(vcpu); vcpu_run(vcpu);


@ -16,6 +16,7 @@
#include "kvm_util.h" #include "kvm_util.h"
#include "kselftest.h" #include "kselftest.h"
#include "ucall_common.h" #include "ucall_common.h"
#include "processor.h"
enum mop_target { enum mop_target {
LOGICAL, LOGICAL,
@ -226,9 +227,6 @@ static void memop_ioctl(struct test_info info, struct kvm_s390_mem_op *ksmo,
#define CHECK_N_DO(f, ...) ({ f(__VA_ARGS__, CHECK_ONLY); f(__VA_ARGS__); }) #define CHECK_N_DO(f, ...) ({ f(__VA_ARGS__, CHECK_ONLY); f(__VA_ARGS__); })
#define PAGE_SHIFT 12
#define PAGE_SIZE (1ULL << PAGE_SHIFT)
#define PAGE_MASK (~(PAGE_SIZE - 1))
#define CR0_FETCH_PROTECTION_OVERRIDE (1UL << (63 - 38)) #define CR0_FETCH_PROTECTION_OVERRIDE (1UL << (63 - 38))
#define CR0_STORAGE_PROTECTION_OVERRIDE (1UL << (63 - 39)) #define CR0_STORAGE_PROTECTION_OVERRIDE (1UL << (63 - 39))


@ -9,9 +9,8 @@
#include "kvm_util.h" #include "kvm_util.h"
#include "kselftest.h" #include "kselftest.h"
#include "ucall_common.h" #include "ucall_common.h"
#include "processor.h"
#define PAGE_SHIFT 12
#define PAGE_SIZE (1 << PAGE_SHIFT)
#define CR0_FETCH_PROTECTION_OVERRIDE (1UL << (63 - 38)) #define CR0_FETCH_PROTECTION_OVERRIDE (1UL << (63 - 38))
#define CR0_STORAGE_PROTECTION_OVERRIDE (1UL << (63 - 39)) #define CR0_STORAGE_PROTECTION_OVERRIDE (1UL << (63 - 39))
@ -151,7 +150,7 @@ static enum stage perform_next_stage(int *i, bool mapped_0)
* instead. * instead.
* In order to skip these tests we detect this inside the guest * In order to skip these tests we detect this inside the guest
*/ */
-	skip = tests[*i].addr < (void *)4096 &&
+	skip = tests[*i].addr < (void *)PAGE_SIZE &&
tests[*i].expected != TRANSL_UNAVAIL && tests[*i].expected != TRANSL_UNAVAIL &&
!mapped_0; !mapped_0;
if (!skip) { if (!skip) {


@ -0,0 +1,332 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Test code for the s390x kvm ucontrol interface
*
* Copyright IBM Corp. 2024
*
* Authors:
* Christoph Schlameuss <schlameuss@linux.ibm.com>
*/
#include "debug_print.h"
#include "kselftest_harness.h"
#include "kvm_util.h"
#include "processor.h"
#include "sie.h"
#include <linux/capability.h>
#include <linux/sizes.h>
#define VM_MEM_SIZE (4 * SZ_1M)
/* so directly declare capget to check caps without libcap */
int capget(cap_user_header_t header, cap_user_data_t data);
/**
* In order to create user controlled virtual machines on S390,
* check KVM_CAP_S390_UCONTROL and use the flag KVM_VM_S390_UCONTROL
* as privileged user (SYS_ADMIN).
*/
void require_ucontrol_admin(void)
{
struct __user_cap_data_struct data[_LINUX_CAPABILITY_U32S_3];
struct __user_cap_header_struct hdr = {
.version = _LINUX_CAPABILITY_VERSION_3,
};
int rc;
rc = capget(&hdr, data);
TEST_ASSERT_EQ(0, rc);
TEST_REQUIRE((data->effective & CAP_TO_MASK(CAP_SYS_ADMIN)) > 0);
TEST_REQUIRE(kvm_has_cap(KVM_CAP_S390_UCONTROL));
}
/* Test program setting some registers and looping */
extern char test_gprs_asm[];
asm("test_gprs_asm:\n"
"xgr %r0, %r0\n"
"lgfi %r1,1\n"
"lgfi %r2,2\n"
"lgfi %r3,3\n"
"lgfi %r4,4\n"
"lgfi %r5,5\n"
"lgfi %r6,6\n"
"lgfi %r7,7\n"
"0:\n"
" diag 0,0,0x44\n"
" ahi %r0,1\n"
" j 0b\n"
);
FIXTURE(uc_kvm)
{
struct kvm_s390_sie_block *sie_block;
struct kvm_run *run;
uintptr_t base_gpa;
uintptr_t code_gpa;
uintptr_t base_hva;
uintptr_t code_hva;
int kvm_run_size;
void *vm_mem;
int vcpu_fd;
int kvm_fd;
int vm_fd;
};
/**
* create VM with single vcpu, map kvm_run and SIE control block for easy access
*/
FIXTURE_SETUP(uc_kvm)
{
struct kvm_s390_vm_cpu_processor info;
int rc;
require_ucontrol_admin();
self->kvm_fd = open_kvm_dev_path_or_exit();
self->vm_fd = ioctl(self->kvm_fd, KVM_CREATE_VM, KVM_VM_S390_UCONTROL);
ASSERT_GE(self->vm_fd, 0);
kvm_device_attr_get(self->vm_fd, KVM_S390_VM_CPU_MODEL,
KVM_S390_VM_CPU_PROCESSOR, &info);
TH_LOG("create VM 0x%llx", info.cpuid);
self->vcpu_fd = ioctl(self->vm_fd, KVM_CREATE_VCPU, 0);
ASSERT_GE(self->vcpu_fd, 0);
self->kvm_run_size = ioctl(self->kvm_fd, KVM_GET_VCPU_MMAP_SIZE, NULL);
ASSERT_GE(self->kvm_run_size, sizeof(struct kvm_run))
TH_LOG(KVM_IOCTL_ERROR(KVM_GET_VCPU_MMAP_SIZE, self->kvm_run_size));
self->run = (struct kvm_run *)mmap(NULL, self->kvm_run_size,
PROT_READ | PROT_WRITE, MAP_SHARED, self->vcpu_fd, 0);
ASSERT_NE(self->run, MAP_FAILED);
/**
* For virtual cpus that have been created with S390 user controlled
* virtual machines, the resulting vcpu fd can be memory mapped at page
* offset KVM_S390_SIE_PAGE_OFFSET in order to obtain a memory map of
* the virtual cpu's hardware control block.
*/
self->sie_block = (struct kvm_s390_sie_block *)mmap(NULL, PAGE_SIZE,
PROT_READ | PROT_WRITE, MAP_SHARED,
self->vcpu_fd, KVM_S390_SIE_PAGE_OFFSET << PAGE_SHIFT);
ASSERT_NE(self->sie_block, MAP_FAILED);
TH_LOG("VM created %p %p", self->run, self->sie_block);
self->base_gpa = 0;
self->code_gpa = self->base_gpa + (3 * SZ_1M);
self->vm_mem = aligned_alloc(SZ_1M, VM_MEM_SIZE);
ASSERT_NE(NULL, self->vm_mem) TH_LOG("malloc failed %u", errno);
self->base_hva = (uintptr_t)self->vm_mem;
self->code_hva = self->base_hva - self->base_gpa + self->code_gpa;
struct kvm_s390_ucas_mapping map = {
.user_addr = self->base_hva,
.vcpu_addr = self->base_gpa,
.length = VM_MEM_SIZE,
};
TH_LOG("ucas map %p %p 0x%llx",
(void *)map.user_addr, (void *)map.vcpu_addr, map.length);
rc = ioctl(self->vcpu_fd, KVM_S390_UCAS_MAP, &map);
ASSERT_EQ(0, rc) TH_LOG("ucas map result %d not expected, %s",
rc, strerror(errno));
TH_LOG("page in %p", (void *)self->base_gpa);
rc = ioctl(self->vcpu_fd, KVM_S390_VCPU_FAULT, self->base_gpa);
ASSERT_EQ(0, rc) TH_LOG("vcpu fault (%p) result %d not expected, %s",
(void *)self->base_hva, rc, strerror(errno));
self->sie_block->cpuflags &= ~CPUSTAT_STOPPED;
}
FIXTURE_TEARDOWN(uc_kvm)
{
munmap(self->sie_block, PAGE_SIZE);
munmap(self->run, self->kvm_run_size);
close(self->vcpu_fd);
close(self->vm_fd);
close(self->kvm_fd);
free(self->vm_mem);
}
TEST_F(uc_kvm, uc_sie_assertions)
{
/* assert interception of Code 08 (Program Interruption) is set */
EXPECT_EQ(0, self->sie_block->ecb & ECB_SPECI);
}
TEST_F(uc_kvm, uc_attr_mem_limit)
{
u64 limit;
struct kvm_device_attr attr = {
.group = KVM_S390_VM_MEM_CTRL,
.attr = KVM_S390_VM_MEM_LIMIT_SIZE,
.addr = (unsigned long)&limit,
};
int rc;
rc = ioctl(self->vm_fd, KVM_GET_DEVICE_ATTR, &attr);
EXPECT_EQ(0, rc);
EXPECT_EQ(~0UL, limit);
/* assert set not supported */
rc = ioctl(self->vm_fd, KVM_SET_DEVICE_ATTR, &attr);
EXPECT_EQ(-1, rc);
EXPECT_EQ(EINVAL, errno);
}
TEST_F(uc_kvm, uc_no_dirty_log)
{
struct kvm_dirty_log dlog;
int rc;
rc = ioctl(self->vm_fd, KVM_GET_DIRTY_LOG, &dlog);
EXPECT_EQ(-1, rc);
EXPECT_EQ(EINVAL, errno);
}
/**
* Assert HPAGE CAP cannot be enabled on UCONTROL VM
*/
TEST(uc_cap_hpage)
{
int rc, kvm_fd, vm_fd, vcpu_fd;
struct kvm_enable_cap cap = {
.cap = KVM_CAP_S390_HPAGE_1M,
};
require_ucontrol_admin();
kvm_fd = open_kvm_dev_path_or_exit();
vm_fd = ioctl(kvm_fd, KVM_CREATE_VM, KVM_VM_S390_UCONTROL);
ASSERT_GE(vm_fd, 0);
/* assert hpages are not supported on ucontrol vm */
rc = ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_S390_HPAGE_1M);
EXPECT_EQ(0, rc);
/* Test that KVM_CAP_S390_HPAGE_1M can't be enabled for a ucontrol vm */
rc = ioctl(vm_fd, KVM_ENABLE_CAP, cap);
EXPECT_EQ(-1, rc);
EXPECT_EQ(EINVAL, errno);
/* assert HPAGE CAP is rejected after vCPU creation */
vcpu_fd = ioctl(vm_fd, KVM_CREATE_VCPU, 0);
ASSERT_GE(vcpu_fd, 0);
rc = ioctl(vm_fd, KVM_ENABLE_CAP, cap);
EXPECT_EQ(-1, rc);
EXPECT_EQ(EBUSY, errno);
close(vcpu_fd);
close(vm_fd);
close(kvm_fd);
}
/* verify SIEIC exit
* * fail on codes not expected in the test cases
*/
static bool uc_handle_sieic(FIXTURE_DATA(uc_kvm) * self)
{
struct kvm_s390_sie_block *sie_block = self->sie_block;
struct kvm_run *run = self->run;
/* check SIE interception code */
pr_info("sieic: 0x%.2x 0x%.4x 0x%.4x\n",
run->s390_sieic.icptcode,
run->s390_sieic.ipa,
run->s390_sieic.ipb);
switch (run->s390_sieic.icptcode) {
case ICPT_INST:
/* end execution in caller on intercepted instruction */
pr_info("sie instruction interception\n");
return false;
case ICPT_OPEREXC:
/* operation exception */
TEST_FAIL("sie exception on %.4x%.8x", sie_block->ipa, sie_block->ipb);
default:
TEST_FAIL("UNEXPECTED SIEIC CODE %d", run->s390_sieic.icptcode);
}
return true;
}
/* verify VM state on exit */
static bool uc_handle_exit(FIXTURE_DATA(uc_kvm) * self)
{
struct kvm_run *run = self->run;
switch (run->exit_reason) {
case KVM_EXIT_S390_SIEIC:
return uc_handle_sieic(self);
default:
pr_info("exit_reason %2d not handled\n", run->exit_reason);
}
return true;
}
/* run the VM until interrupted */
static int uc_run_once(FIXTURE_DATA(uc_kvm) * self)
{
int rc;
rc = ioctl(self->vcpu_fd, KVM_RUN, NULL);
print_run(self->run, self->sie_block);
print_regs(self->run);
pr_debug("run %d / %d %s\n", rc, errno, strerror(errno));
return rc;
}
static void uc_assert_diag44(FIXTURE_DATA(uc_kvm) * self)
{
struct kvm_s390_sie_block *sie_block = self->sie_block;
/* assert vm was interrupted by diag 0x0044 */
TEST_ASSERT_EQ(KVM_EXIT_S390_SIEIC, self->run->exit_reason);
TEST_ASSERT_EQ(ICPT_INST, sie_block->icptcode);
TEST_ASSERT_EQ(0x8300, sie_block->ipa);
TEST_ASSERT_EQ(0x440000, sie_block->ipb);
}
TEST_F(uc_kvm, uc_gprs)
{
struct kvm_sync_regs *sync_regs = &self->run->s.regs;
struct kvm_run *run = self->run;
struct kvm_regs regs = {};
/* Set registers to values that are different from the ones that we expect below */
for (int i = 0; i < 8; i++)
sync_regs->gprs[i] = 8;
run->kvm_dirty_regs |= KVM_SYNC_GPRS;
/* copy test_gprs_asm to code_hva / code_gpa */
TH_LOG("copy code %p to vm mapped memory %p / %p",
&test_gprs_asm, (void *)self->code_hva, (void *)self->code_gpa);
memcpy((void *)self->code_hva, &test_gprs_asm, PAGE_SIZE);
/* DAT disabled + 64 bit mode */
run->psw_mask = 0x0000000180000000ULL;
run->psw_addr = self->code_gpa;
/* run and expect interception of diag 44 */
ASSERT_EQ(0, uc_run_once(self));
ASSERT_EQ(false, uc_handle_exit(self));
uc_assert_diag44(self);
/* Retrieve and check guest register values */
ASSERT_EQ(0, ioctl(self->vcpu_fd, KVM_GET_REGS, &regs));
for (int i = 0; i < 8; i++) {
ASSERT_EQ(i, regs.gprs[i]);
ASSERT_EQ(i, sync_regs->gprs[i]);
}
/* run and expect interception of diag 44 again */
ASSERT_EQ(0, uc_run_once(self));
ASSERT_EQ(false, uc_handle_exit(self));
uc_assert_diag44(self);
/* check continued increment of register 0 value */
ASSERT_EQ(0, ioctl(self->vcpu_fd, KVM_GET_REGS, &regs));
ASSERT_EQ(1, regs.gprs[0]);
ASSERT_EQ(1, sync_regs->gprs[0]);
}
TEST_HARNESS_MAIN


@ -175,7 +175,7 @@ static void guest_code_move_memory_region(void)
GUEST_DONE(); GUEST_DONE();
} }
-static void test_move_memory_region(void)
+static void test_move_memory_region(bool disable_slot_zap_quirk)
{ {
pthread_t vcpu_thread; pthread_t vcpu_thread;
struct kvm_vcpu *vcpu; struct kvm_vcpu *vcpu;
@ -184,6 +184,9 @@ static void test_move_memory_region(void)
vm = spawn_vm(&vcpu, &vcpu_thread, guest_code_move_memory_region); vm = spawn_vm(&vcpu, &vcpu_thread, guest_code_move_memory_region);
if (disable_slot_zap_quirk)
vm_enable_cap(vm, KVM_CAP_DISABLE_QUIRKS2, KVM_X86_QUIRK_SLOT_ZAP_ALL);
hva = addr_gpa2hva(vm, MEM_REGION_GPA); hva = addr_gpa2hva(vm, MEM_REGION_GPA);
/* /*
@ -266,7 +269,7 @@ static void guest_code_delete_memory_region(void)
GUEST_ASSERT(0); GUEST_ASSERT(0);
} }
-static void test_delete_memory_region(void)
+static void test_delete_memory_region(bool disable_slot_zap_quirk)
{ {
pthread_t vcpu_thread; pthread_t vcpu_thread;
struct kvm_vcpu *vcpu; struct kvm_vcpu *vcpu;
@ -276,6 +279,9 @@ static void test_delete_memory_region(void)
vm = spawn_vm(&vcpu, &vcpu_thread, guest_code_delete_memory_region); vm = spawn_vm(&vcpu, &vcpu_thread, guest_code_delete_memory_region);
if (disable_slot_zap_quirk)
vm_enable_cap(vm, KVM_CAP_DISABLE_QUIRKS2, KVM_X86_QUIRK_SLOT_ZAP_ALL);
/* Delete the memory region, the guest should not die. */ /* Delete the memory region, the guest should not die. */
vm_mem_region_delete(vm, MEM_REGION_SLOT); vm_mem_region_delete(vm, MEM_REGION_SLOT);
wait_for_vcpu(); wait_for_vcpu();
@ -553,7 +559,10 @@ int main(int argc, char *argv[])
{ {
#ifdef __x86_64__ #ifdef __x86_64__
int i, loops; int i, loops;
int j, disable_slot_zap_quirk = 0;
if (kvm_check_cap(KVM_CAP_DISABLE_QUIRKS2) & KVM_X86_QUIRK_SLOT_ZAP_ALL)
disable_slot_zap_quirk = 1;
/* /*
* FIXME: the zero-memslot test fails on aarch64 and s390x because * FIXME: the zero-memslot test fails on aarch64 and s390x because
* KVM_RUN fails with ENOEXEC or EFAULT. * KVM_RUN fails with ENOEXEC or EFAULT.
@@ -579,13 +588,17 @@
 	else
 		loops = 10;
 
-	pr_info("Testing MOVE of in-use region, %d loops\n", loops);
-	for (i = 0; i < loops; i++)
-		test_move_memory_region();
+	for (j = 0; j <= disable_slot_zap_quirk; j++) {
+		pr_info("Testing MOVE of in-use region, %d loops, slot zap quirk %s\n",
+			loops, j ? "disabled" : "enabled");
+		for (i = 0; i < loops; i++)
+			test_move_memory_region(!!j);
 
-	pr_info("Testing DELETE of in-use region, %d loops\n", loops);
-	for (i = 0; i < loops; i++)
-		test_delete_memory_region();
+		pr_info("Testing DELETE of in-use region, %d loops, slot zap quirk %s\n",
+			loops, j ? "disabled" : "enabled");
+		for (i = 0; i < loops; i++)
+			test_delete_memory_region(!!j);
+	}
 #endif
return 0; return 0;


@@ -47,15 +47,18 @@ static void guest_code(void)
 	/*
 	 * Single step test, covers 2 basic instructions and 2 emulated
 	 *
-	 * Enable interrupts during the single stepping to see that
-	 * pending interrupt we raised is not handled due to KVM_GUESTDBG_BLOCKIRQ
+	 * Enable interrupts during the single stepping to see that pending
+	 * interrupt we raised is not handled due to KVM_GUESTDBG_BLOCKIRQ.
+	 *
+	 * Write MSR_IA32_TSC_DEADLINE to verify that KVM's fastpath handler
+	 * exits to userspace due to single-step being enabled.
 	 */
 	asm volatile("ss_start: "
 		     "sti\n\t"
 		     "xor %%eax,%%eax\n\t"
 		     "cpuid\n\t"
-		     "movl $0x1a0,%%ecx\n\t"
-		     "rdmsr\n\t"
+		     "movl $" __stringify(MSR_IA32_TSC_DEADLINE) ", %%ecx\n\t"
+		     "wrmsr\n\t"
 		     "cli\n\t"
 		     : : : "eax", "ebx", "ecx", "edx");


@ -242,7 +242,7 @@ int main(int argc, char *argv[])
TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_VMX)); TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_VMX));
TEST_REQUIRE(kvm_has_cap(KVM_CAP_NESTED_STATE)); TEST_REQUIRE(kvm_has_cap(KVM_CAP_NESTED_STATE));
TEST_REQUIRE(kvm_has_cap(KVM_CAP_HYPERV_ENLIGHTENED_VMCS)); TEST_REQUIRE(kvm_has_cap(KVM_CAP_HYPERV_ENLIGHTENED_VMCS));
-	TEST_REQUIRE(kvm_has_cap(KVM_CAP_HYPERV_DIRECT_TLBFLUSH));
+	TEST_REQUIRE(kvm_hv_cpu_has(HV_X64_NESTED_DIRECT_FLUSH));
vm = vm_create_with_one_vcpu(&vcpu, guest_code); vm = vm_create_with_one_vcpu(&vcpu, guest_code);


@ -157,7 +157,7 @@ int main(int argc, char *argv[])
int stage; int stage;
TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_SVM)); TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_SVM));
-	TEST_REQUIRE(kvm_has_cap(KVM_CAP_HYPERV_DIRECT_TLBFLUSH));
+	TEST_REQUIRE(kvm_hv_cpu_has(HV_X64_NESTED_DIRECT_FLUSH));
/* Create VM */ /* Create VM */
vm = vm_create_with_one_vcpu(&vcpu, guest_code); vm = vm_create_with_one_vcpu(&vcpu, guest_code);


@ -160,6 +160,36 @@ static void test_sev(void *guest_code, uint64_t policy)
kvm_vm_free(vm); kvm_vm_free(vm);
} }
static void guest_shutdown_code(void)
{
struct desc_ptr idt;
/* Clobber the IDT so that #UD is guaranteed to trigger SHUTDOWN. */
memset(&idt, 0, sizeof(idt));
__asm__ __volatile__("lidt %0" :: "m"(idt));
__asm__ __volatile__("ud2");
}
static void test_sev_es_shutdown(void)
{
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
uint32_t type = KVM_X86_SEV_ES_VM;
vm = vm_sev_create_with_one_vcpu(type, guest_shutdown_code, &vcpu);
vm_sev_launch(vm, SEV_POLICY_ES, NULL);
vcpu_run(vcpu);
TEST_ASSERT(vcpu->run->exit_reason == KVM_EXIT_SHUTDOWN,
"Wanted SHUTDOWN, got %s",
exit_reason_str(vcpu->run->exit_reason));
kvm_vm_free(vm);
}
int main(int argc, char *argv[]) int main(int argc, char *argv[])
{ {
TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_SEV)); TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_SEV));
@ -171,6 +201,8 @@ int main(int argc, char *argv[])
test_sev(guest_sev_es_code, SEV_POLICY_ES | SEV_POLICY_NO_DBG); test_sev(guest_sev_es_code, SEV_POLICY_ES | SEV_POLICY_NO_DBG);
test_sev(guest_sev_es_code, SEV_POLICY_ES); test_sev(guest_sev_es_code, SEV_POLICY_ES);
test_sev_es_shutdown();
if (kvm_has_cap(KVM_CAP_XCRS) && if (kvm_has_cap(KVM_CAP_XCRS) &&
(xgetbv(0) & XFEATURE_MASK_X87_AVX) == XFEATURE_MASK_X87_AVX) { (xgetbv(0) & XFEATURE_MASK_X87_AVX) == XFEATURE_MASK_X87_AVX) {
test_sync_vmsa(0); test_sync_vmsa(0);


@@ -13,6 +13,7 @@
 struct xapic_vcpu {
 	struct kvm_vcpu *vcpu;
 	bool is_x2apic;
+	bool has_xavic_errata;
 };
 
 static void xapic_guest_code(void)
@@ -31,6 +32,10 @@ static void xapic_guest_code(void)
 	}
 }
 
+#define X2APIC_RSVD_BITS_MASK	(GENMASK_ULL(31, 20) |	\
+				 GENMASK_ULL(17, 16) |	\
+				 GENMASK_ULL(13, 13))
+
 static void x2apic_guest_code(void)
 {
 	asm volatile("cli");
@@ -41,7 +46,12 @@ static void x2apic_guest_code(void)
 		uint64_t val = x2apic_read_reg(APIC_IRR) |
 			       x2apic_read_reg(APIC_IRR + 0x10) << 32;
 
-		x2apic_write_reg(APIC_ICR, val);
+		if (val & X2APIC_RSVD_BITS_MASK) {
+			x2apic_write_reg_fault(APIC_ICR, val);
+		} else {
+			x2apic_write_reg(APIC_ICR, val);
+			GUEST_ASSERT_EQ(x2apic_read_reg(APIC_ICR), val);
+		}
 		GUEST_SYNC(val);
 	} while (1);
 }
@@ -71,27 +81,28 @@ static void ____test_icr(struct xapic_vcpu *x, uint64_t val)
 	icr = (u64)(*((u32 *)&xapic.regs[APIC_ICR])) |
 	      (u64)(*((u32 *)&xapic.regs[APIC_ICR2])) << 32;
 	if (!x->is_x2apic) {
-		val &= (-1u | (0xffull << (32 + 24)));
-		TEST_ASSERT_EQ(icr, val & ~APIC_ICR_BUSY);
-	} else {
-		TEST_ASSERT_EQ(icr & ~APIC_ICR_BUSY, val & ~APIC_ICR_BUSY);
+		if (!x->has_xavic_errata)
+			val &= (-1u | (0xffull << (32 + 24)));
+	} else if (val & X2APIC_RSVD_BITS_MASK) {
+		return;
 	}
-}
 
-#define X2APIC_RSVED_BITS_MASK	(GENMASK_ULL(31,20) | \
-				 GENMASK_ULL(17,16) | \
-				 GENMASK_ULL(13,13))
+	if (x->has_xavic_errata)
+		TEST_ASSERT_EQ(icr & ~APIC_ICR_BUSY, val & ~APIC_ICR_BUSY);
+	else
+		TEST_ASSERT_EQ(icr, val & ~APIC_ICR_BUSY);
+}
 
 static void __test_icr(struct xapic_vcpu *x, uint64_t val)
 {
-	if (x->is_x2apic) {
-		/* Hardware writing vICR register requires reserved bits 31:20,
-		 * 17:16 and 13 kept as zero to avoid #GP exception. Data value
-		 * written to vICR should mask out those bits above.
-		 */
-		val &= ~X2APIC_RSVED_BITS_MASK;
-	}
-	____test_icr(x, val | APIC_ICR_BUSY);
+	/*
+	 * The BUSY bit is reserved on both AMD and Intel, but only AMD treats
+	 * it is as _must_ be zero. Intel simply ignores the bit. Don't test
+	 * the BUSY bit for x2APIC, as there is no single correct behavior.
+	 */
+	if (!x->is_x2apic)
+		____test_icr(x, val | APIC_ICR_BUSY);
+
 	____test_icr(x, val & ~(u64)APIC_ICR_BUSY);
 }
 
@@ -231,6 +242,15 @@ int main(int argc, char *argv[])
 	vm = vm_create_with_one_vcpu(&x.vcpu, xapic_guest_code);
 	x.is_x2apic = false;
 
+	/*
+	 * AMD's AVIC implementation is buggy (fails to clear the ICR BUSY bit),
+	 * and also diverges from KVM with respect to ICR2[23:0] (KVM and Intel
+	 * drops writes, AMD does not). Account for the errata when checking
+	 * that KVM reads back what was written.
+	 */
+	x.has_xavic_errata = host_cpu_is_amd &&
+			     get_kvm_amd_param_bool("avic");
+
 	vcpu_clear_cpuid_feature(x.vcpu, X86_FEATURE_X2APIC);
 
 	virt_pg_map(vm, APIC_DEFAULT_GPA, APIC_DEFAULT_GPA);


@ -10,6 +10,7 @@
#include "test_util.h" #include "test_util.h"
#include "kvm_util.h" #include "kvm_util.h"
#include "processor.h" #include "processor.h"
#include "hyperv.h"
#define HCALL_REGION_GPA 0xc0000000ULL #define HCALL_REGION_GPA 0xc0000000ULL
#define HCALL_REGION_SLOT 10 #define HCALL_REGION_SLOT 10


@@ -40,27 +40,6 @@ static int coalesced_mmio_in_range(struct kvm_coalesced_mmio_dev *dev,
	return 1;
}
static int coalesced_mmio_has_room(struct kvm_coalesced_mmio_dev *dev, u32 last)
{
struct kvm_coalesced_mmio_ring *ring;
unsigned avail;
/* Are we able to batch it ? */
/* last is the first free entry
* check if we don't meet the first used entry
* there is always one unused entry in the buffer
*/
ring = dev->kvm->coalesced_mmio_ring;
avail = (ring->first - last - 1) % KVM_COALESCED_MMIO_MAX;
if (avail == 0) {
/* full */
return 0;
}
return 1;
}
static int coalesced_mmio_write(struct kvm_vcpu *vcpu,
				struct kvm_io_device *this, gpa_t addr,
				int len, const void *val)
@@ -74,9 +53,15 @@ static int coalesced_mmio_write(struct kvm_vcpu *vcpu,
 	spin_lock(&dev->kvm->ring_lock);
 
+	/*
+	 * last is the index of the entry to fill. Verify userspace hasn't
+	 * set last to be out of range, and that there is room in the ring.
+	 * Leave one entry free in the ring so that userspace can differentiate
+	 * between an empty ring and a full ring.
+	 */
 	insert = READ_ONCE(ring->last);
-	if (!coalesced_mmio_has_room(dev, insert) ||
-	    insert >= KVM_COALESCED_MMIO_MAX) {
+	if (insert >= KVM_COALESCED_MMIO_MAX ||
+	    (insert + 1) % KVM_COALESCED_MMIO_MAX == READ_ONCE(ring->first)) {
 		spin_unlock(&dev->kvm->ring_lock);
 		return -EOPNOTSUPP;
 	}

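The open-coded check above replaces coalesced_mmio_has_room() and folds the range check and the ring-full check into a single expression. A stand-alone sketch of the same arithmetic, with local names only (RING_MAX stands in for KVM_COALESCED_MMIO_MAX):

#include <stdbool.h>
#include <stdint.h>

#define RING_MAX 128	/* stand-in for KVM_COALESCED_MMIO_MAX */

/*
 * 'insert' is the slot to fill next (ring->last, written by userspace),
 * 'first' is the oldest unconsumed entry (ring->first). One slot is always
 * left unused so that first == insert means "empty" rather than "full".
 */
static bool ring_has_room(uint32_t first, uint32_t insert)
{
	if (insert >= RING_MAX)		/* reject an out-of-range index */
		return false;

	return (insert + 1) % RING_MAX != first;
}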

@@ -136,8 +136,8 @@ static int kvm_no_compat_open(struct inode *inode, struct file *file)
 #define KVM_COMPAT(c) .compat_ioctl = kvm_no_compat_ioctl, \
 			.open = kvm_no_compat_open
 #endif
-static int hardware_enable_all(void);
-static void hardware_disable_all(void);
+static int kvm_enable_virtualization(void);
+static void kvm_disable_virtualization(void);
 
 static void kvm_io_bus_destroy(struct kvm_io_bus *bus);
@ -1220,7 +1220,7 @@ static struct kvm *kvm_create_vm(unsigned long type, const char *fdname)
if (r) if (r)
goto out_err_no_arch_destroy_vm; goto out_err_no_arch_destroy_vm;
r = hardware_enable_all(); r = kvm_enable_virtualization();
if (r) if (r)
goto out_err_no_disable; goto out_err_no_disable;
@ -1263,7 +1263,7 @@ out_no_coalesced_mmio:
mmu_notifier_unregister(&kvm->mmu_notifier, current->mm); mmu_notifier_unregister(&kvm->mmu_notifier, current->mm);
#endif #endif
out_err_no_mmu_notifier: out_err_no_mmu_notifier:
hardware_disable_all(); kvm_disable_virtualization();
out_err_no_disable: out_err_no_disable:
kvm_arch_destroy_vm(kvm); kvm_arch_destroy_vm(kvm);
out_err_no_arch_destroy_vm: out_err_no_arch_destroy_vm:
@ -1360,7 +1360,7 @@ static void kvm_destroy_vm(struct kvm *kvm)
#endif #endif
kvm_arch_free_vm(kvm); kvm_arch_free_vm(kvm);
preempt_notifier_dec(); preempt_notifier_dec();
hardware_disable_all(); kvm_disable_virtualization();
mmdrop(mm); mmdrop(mm);
} }
@ -3270,6 +3270,9 @@ static int __kvm_read_guest_page(struct kvm_memory_slot *slot, gfn_t gfn,
int r; int r;
unsigned long addr; unsigned long addr;
if (WARN_ON_ONCE(offset + len > PAGE_SIZE))
return -EFAULT;
addr = gfn_to_hva_memslot_prot(slot, gfn, NULL); addr = gfn_to_hva_memslot_prot(slot, gfn, NULL);
if (kvm_is_error_hva(addr)) if (kvm_is_error_hva(addr))
return -EFAULT; return -EFAULT;
@ -3343,6 +3346,9 @@ static int __kvm_read_guest_atomic(struct kvm_memory_slot *slot, gfn_t gfn,
int r; int r;
unsigned long addr; unsigned long addr;
if (WARN_ON_ONCE(offset + len > PAGE_SIZE))
return -EFAULT;
addr = gfn_to_hva_memslot_prot(slot, gfn, NULL); addr = gfn_to_hva_memslot_prot(slot, gfn, NULL);
if (kvm_is_error_hva(addr)) if (kvm_is_error_hva(addr))
return -EFAULT; return -EFAULT;
@ -3373,6 +3379,9 @@ static int __kvm_write_guest_page(struct kvm *kvm,
int r; int r;
unsigned long addr; unsigned long addr;
if (WARN_ON_ONCE(offset + len > PAGE_SIZE))
return -EFAULT;
addr = gfn_to_hva_memslot(memslot, gfn); addr = gfn_to_hva_memslot(memslot, gfn);
if (kvm_is_error_hva(addr)) if (kvm_is_error_hva(addr))
return -EFAULT; return -EFAULT;
@@ -3576,7 +3585,7 @@ int kvm_clear_guest(struct kvm *kvm, gpa_t gpa, unsigned long len)
 	int ret;
 
 	while ((seg = next_segment(len, offset)) != 0) {
-		ret = kvm_write_guest_page(kvm, gfn, zero_page, offset, len);
+		ret = kvm_write_guest_page(kvm, gfn, zero_page, offset, seg);
 		if (ret < 0)
 			return ret;
 		offset = 0;
@@ -5566,137 +5575,67 @@ static struct miscdevice kvm_dev = {
 };
 
 #ifdef CONFIG_KVM_GENERIC_HARDWARE_ENABLING
+static bool enable_virt_at_load = true;
+module_param(enable_virt_at_load, bool, 0444);
+
 __visible bool kvm_rebooting;
 EXPORT_SYMBOL_GPL(kvm_rebooting);
 
-static DEFINE_PER_CPU(bool, hardware_enabled);
+static DEFINE_PER_CPU(bool, virtualization_enabled);
+static DEFINE_MUTEX(kvm_usage_lock);
 static int kvm_usage_count;
 
-static int __hardware_enable_nolock(void)
+__weak void kvm_arch_enable_virtualization(void)
+{
+}
+
+__weak void kvm_arch_disable_virtualization(void)
+{
+}
+
+static int kvm_enable_virtualization_cpu(void)
 {
-	if (__this_cpu_read(hardware_enabled))
+	if (__this_cpu_read(virtualization_enabled))
 		return 0;
 
-	if (kvm_arch_hardware_enable()) {
+	if (kvm_arch_enable_virtualization_cpu()) {
 		pr_info("kvm: enabling virtualization on CPU%d failed\n",
 			raw_smp_processor_id());
 		return -EIO;
 	}
 
-	__this_cpu_write(hardware_enabled, true);
+	__this_cpu_write(virtualization_enabled, true);
 	return 0;
 }
 
-static void hardware_enable_nolock(void *failed)
-{
-	if (__hardware_enable_nolock())
-		atomic_inc(failed);
-}
-
 static int kvm_online_cpu(unsigned int cpu)
 {
-	int ret = 0;
-
 	/*
 	 * Abort the CPU online process if hardware virtualization cannot
 	 * be enabled. Otherwise running VMs would encounter unrecoverable
 	 * errors when scheduled to this CPU.
 	 */
-	mutex_lock(&kvm_lock);
-	if (kvm_usage_count)
-		ret = __hardware_enable_nolock();
-	mutex_unlock(&kvm_lock);
-	return ret;
+	return kvm_enable_virtualization_cpu();
 }
 
-static void hardware_disable_nolock(void *junk)
+static void kvm_disable_virtualization_cpu(void *ign)
 {
-	/*
-	 * Note, hardware_disable_all_nolock() tells all online CPUs to disable
-	 * hardware, not just CPUs that successfully enabled hardware!
-	 */
-	if (!__this_cpu_read(hardware_enabled))
+	if (!__this_cpu_read(virtualization_enabled))
 		return;
 
-	kvm_arch_hardware_disable();
+	kvm_arch_disable_virtualization_cpu();
 
-	__this_cpu_write(hardware_enabled, false);
+	__this_cpu_write(virtualization_enabled, false);
 }
 
 static int kvm_offline_cpu(unsigned int cpu)
 {
-	mutex_lock(&kvm_lock);
-	if (kvm_usage_count)
-		hardware_disable_nolock(NULL);
-	mutex_unlock(&kvm_lock);
+	kvm_disable_virtualization_cpu(NULL);
 	return 0;
 }
static void hardware_disable_all_nolock(void)
{
BUG_ON(!kvm_usage_count);
kvm_usage_count--;
if (!kvm_usage_count)
on_each_cpu(hardware_disable_nolock, NULL, 1);
}
static void hardware_disable_all(void)
{
cpus_read_lock();
mutex_lock(&kvm_lock);
hardware_disable_all_nolock();
mutex_unlock(&kvm_lock);
cpus_read_unlock();
}
static int hardware_enable_all(void)
{
atomic_t failed = ATOMIC_INIT(0);
int r;
/*
* Do not enable hardware virtualization if the system is going down.
* If userspace initiated a forced reboot, e.g. reboot -f, then it's
* possible for an in-flight KVM_CREATE_VM to trigger hardware enabling
* after kvm_reboot() is called. Note, this relies on system_state
* being set _before_ kvm_reboot(), which is why KVM uses a syscore ops
* hook instead of registering a dedicated reboot notifier (the latter
* runs before system_state is updated).
*/
if (system_state == SYSTEM_HALT || system_state == SYSTEM_POWER_OFF ||
system_state == SYSTEM_RESTART)
return -EBUSY;
/*
* When onlining a CPU, cpu_online_mask is set before kvm_online_cpu()
* is called, and so on_each_cpu() between them includes the CPU that
* is being onlined. As a result, hardware_enable_nolock() may get
* invoked before kvm_online_cpu(), which also enables hardware if the
* usage count is non-zero. Disable CPU hotplug to avoid attempting to
* enable hardware multiple times.
*/
cpus_read_lock();
mutex_lock(&kvm_lock);
r = 0;
kvm_usage_count++;
if (kvm_usage_count == 1) {
on_each_cpu(hardware_enable_nolock, &failed, 1);
if (atomic_read(&failed)) {
hardware_disable_all_nolock();
r = -EBUSY;
}
}
mutex_unlock(&kvm_lock);
cpus_read_unlock();
return r;
}
 static void kvm_shutdown(void)
 {
 	/*
@@ -5712,34 +5651,32 @@ static void kvm_shutdown(void)
 	 */
 	pr_info("kvm: exiting hardware virtualization\n");
 	kvm_rebooting = true;
-	on_each_cpu(hardware_disable_nolock, NULL, 1);
+	on_each_cpu(kvm_disable_virtualization_cpu, NULL, 1);
 }
 
 static int kvm_suspend(void)
 {
 	/*
 	 * Secondary CPUs and CPU hotplug are disabled across the suspend/resume
-	 * callbacks, i.e. no need to acquire kvm_lock to ensure the usage count
-	 * is stable. Assert that kvm_lock is not held to ensure the system
-	 * isn't suspended while KVM is enabling hardware. Hardware enabling
-	 * can be preempted, but the task cannot be frozen until it has dropped
-	 * all locks (userspace tasks are frozen via a fake signal).
+	 * callbacks, i.e. no need to acquire kvm_usage_lock to ensure the usage
+	 * count is stable. Assert that kvm_usage_lock is not held to ensure
+	 * the system isn't suspended while KVM is enabling hardware. Hardware
+	 * enabling can be preempted, but the task cannot be frozen until it has
	 * dropped all locks (userspace tasks are frozen via a fake signal).
 	 */
-	lockdep_assert_not_held(&kvm_lock);
+	lockdep_assert_not_held(&kvm_usage_lock);
 	lockdep_assert_irqs_disabled();
 
-	if (kvm_usage_count)
-		hardware_disable_nolock(NULL);
+	kvm_disable_virtualization_cpu(NULL);
 	return 0;
 }
 
 static void kvm_resume(void)
 {
-	lockdep_assert_not_held(&kvm_lock);
+	lockdep_assert_not_held(&kvm_usage_lock);
 	lockdep_assert_irqs_disabled();
 
-	if (kvm_usage_count)
-		WARN_ON_ONCE(__hardware_enable_nolock());
+	WARN_ON_ONCE(kvm_enable_virtualization_cpu());
 }
 
 static struct syscore_ops kvm_syscore_ops = {
@@ -5747,13 +5684,95 @@ static struct syscore_ops kvm_syscore_ops = {
 	.resume = kvm_resume,
 	.shutdown = kvm_shutdown,
 };
static int kvm_enable_virtualization(void)
{
int r;
guard(mutex)(&kvm_usage_lock);
if (kvm_usage_count++)
return 0;
kvm_arch_enable_virtualization();
r = cpuhp_setup_state(CPUHP_AP_KVM_ONLINE, "kvm/cpu:online",
kvm_online_cpu, kvm_offline_cpu);
if (r)
goto err_cpuhp;
register_syscore_ops(&kvm_syscore_ops);
/*
* Undo virtualization enabling and bail if the system is going down.
* If userspace initiated a forced reboot, e.g. reboot -f, then it's
* possible for an in-flight operation to enable virtualization after
* syscore_shutdown() is called, i.e. without kvm_shutdown() being
* invoked. Note, this relies on system_state being set _before_
* kvm_shutdown(), e.g. to ensure either kvm_shutdown() is invoked
* or this CPU observes the impending shutdown. Which is why KVM uses
* a syscore ops hook instead of registering a dedicated reboot
* notifier (the latter runs before system_state is updated).
*/
if (system_state == SYSTEM_HALT || system_state == SYSTEM_POWER_OFF ||
system_state == SYSTEM_RESTART) {
r = -EBUSY;
goto err_rebooting;
}
return 0;
err_rebooting:
unregister_syscore_ops(&kvm_syscore_ops);
cpuhp_remove_state(CPUHP_AP_KVM_ONLINE);
err_cpuhp:
kvm_arch_disable_virtualization();
--kvm_usage_count;
return r;
}
static void kvm_disable_virtualization(void)
{
guard(mutex)(&kvm_usage_lock);
if (--kvm_usage_count)
return;
unregister_syscore_ops(&kvm_syscore_ops);
cpuhp_remove_state(CPUHP_AP_KVM_ONLINE);
kvm_arch_disable_virtualization();
}
static int kvm_init_virtualization(void)
{
if (enable_virt_at_load)
return kvm_enable_virtualization();
return 0;
}
static void kvm_uninit_virtualization(void)
{
if (enable_virt_at_load)
kvm_disable_virtualization();
}
 #else /* CONFIG_KVM_GENERIC_HARDWARE_ENABLING */
-static int hardware_enable_all(void)
+static int kvm_enable_virtualization(void)
 {
 	return 0;
 }
 
-static void hardware_disable_all(void)
+static int kvm_init_virtualization(void)
+{
+	return 0;
+}
+
+static void kvm_disable_virtualization(void)
+{
+}
+
+static void kvm_uninit_virtualization(void)
 {
 }
@ -6454,15 +6473,6 @@ int kvm_init(unsigned vcpu_size, unsigned vcpu_align, struct module *module)
int r; int r;
int cpu; int cpu;
#ifdef CONFIG_KVM_GENERIC_HARDWARE_ENABLING
r = cpuhp_setup_state_nocalls(CPUHP_AP_KVM_ONLINE, "kvm/cpu:online",
kvm_online_cpu, kvm_offline_cpu);
if (r)
return r;
register_syscore_ops(&kvm_syscore_ops);
#endif
/* A kmem cache lets us meet the alignment requirements of fx_save. */ /* A kmem cache lets us meet the alignment requirements of fx_save. */
if (!vcpu_align) if (!vcpu_align)
vcpu_align = __alignof__(struct kvm_vcpu); vcpu_align = __alignof__(struct kvm_vcpu);
@ -6473,10 +6483,8 @@ int kvm_init(unsigned vcpu_size, unsigned vcpu_align, struct module *module)
offsetofend(struct kvm_vcpu, stats_id) offsetofend(struct kvm_vcpu, stats_id)
- offsetof(struct kvm_vcpu, arch), - offsetof(struct kvm_vcpu, arch),
NULL); NULL);
if (!kvm_vcpu_cache) { if (!kvm_vcpu_cache)
r = -ENOMEM; return -ENOMEM;
goto err_vcpu_cache;
}
for_each_possible_cpu(cpu) { for_each_possible_cpu(cpu) {
if (!alloc_cpumask_var_node(&per_cpu(cpu_kick_mask, cpu), if (!alloc_cpumask_var_node(&per_cpu(cpu_kick_mask, cpu),
@ -6510,6 +6518,10 @@ int kvm_init(unsigned vcpu_size, unsigned vcpu_align, struct module *module)
kvm_gmem_init(module); kvm_gmem_init(module);
r = kvm_init_virtualization();
if (r)
goto err_virt;
/* /*
* Registration _must_ be the very last thing done, as this exposes * Registration _must_ be the very last thing done, as this exposes
* /dev/kvm to userspace, i.e. all infrastructure must be setup! * /dev/kvm to userspace, i.e. all infrastructure must be setup!
@ -6523,6 +6535,8 @@ int kvm_init(unsigned vcpu_size, unsigned vcpu_align, struct module *module)
return 0; return 0;
err_register: err_register:
kvm_uninit_virtualization();
err_virt:
kvm_vfio_ops_exit(); kvm_vfio_ops_exit();
err_vfio: err_vfio:
kvm_async_pf_deinit(); kvm_async_pf_deinit();
@ -6533,11 +6547,6 @@ err_cpu_kick_mask:
for_each_possible_cpu(cpu) for_each_possible_cpu(cpu)
free_cpumask_var(per_cpu(cpu_kick_mask, cpu)); free_cpumask_var(per_cpu(cpu_kick_mask, cpu));
kmem_cache_destroy(kvm_vcpu_cache); kmem_cache_destroy(kvm_vcpu_cache);
err_vcpu_cache:
#ifdef CONFIG_KVM_GENERIC_HARDWARE_ENABLING
unregister_syscore_ops(&kvm_syscore_ops);
cpuhp_remove_state_nocalls(CPUHP_AP_KVM_ONLINE);
#endif
return r; return r;
} }
EXPORT_SYMBOL_GPL(kvm_init); EXPORT_SYMBOL_GPL(kvm_init);
@ -6553,16 +6562,14 @@ void kvm_exit(void)
*/ */
misc_deregister(&kvm_dev); misc_deregister(&kvm_dev);
kvm_uninit_virtualization();
debugfs_remove_recursive(kvm_debugfs_dir); debugfs_remove_recursive(kvm_debugfs_dir);
for_each_possible_cpu(cpu) for_each_possible_cpu(cpu)
free_cpumask_var(per_cpu(cpu_kick_mask, cpu)); free_cpumask_var(per_cpu(cpu_kick_mask, cpu));
kmem_cache_destroy(kvm_vcpu_cache); kmem_cache_destroy(kvm_vcpu_cache);
kvm_vfio_ops_exit(); kvm_vfio_ops_exit();
kvm_async_pf_deinit(); kvm_async_pf_deinit();
#ifdef CONFIG_KVM_GENERIC_HARDWARE_ENABLING
unregister_syscore_ops(&kvm_syscore_ops);
cpuhp_remove_state_nocalls(CPUHP_AP_KVM_ONLINE);
#endif
kvm_irqfd_exit(); kvm_irqfd_exit();
} }
EXPORT_SYMBOL_GPL(kvm_exit); EXPORT_SYMBOL_GPL(kvm_exit);