KVM: x86: avoid memslot check in NX hugepage recovery if it cannot succeed

Since gfn_to_memslot() is relatively expensive, it helps to
skip it if the memslot cannot possibly have dirty logging
enabled.  In order to do this, add to struct kvm a counter
of the number of memslots with dirty logging enabled.  While
the correct value can only be read with slots_lock taken, the
NX recovery thread is content with using an approximate value.
Therefore, the counter is an atomic_t.
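The scheme the message describes (exact under slots_lock, approximate for lockless readers) can be sketched as a small toy model; the `toy_` names and the spinlock stand-in are illustrative, not KVM code:

```c
#include <stdatomic.h>

/*
 * Toy model of the pattern above (not KVM code): writers update the
 * counter only while serialized on a lock, so its exact value is
 * defined under the lock, while a reader that tolerates a stale
 * value may simply load it without locking.
 */
struct toy_kvm {
	atomic_int slots_lock;			/* stand-in for KVM's slots_lock mutex */
	atomic_int nr_memslots_dirty_logging;
};

static void toy_lock(atomic_int *lock)
{
	while (atomic_exchange(lock, 1))
		;				/* spin: writers are serialized */
}

static void toy_unlock(atomic_int *lock)
{
	atomic_store(lock, 0);
}

/* Writer side: always runs with the lock held. */
static void toy_adjust_dirty_logging(struct toy_kvm *kvm, int delta)
{
	toy_lock(&kvm->slots_lock);
	atomic_store(&kvm->nr_memslots_dirty_logging,
		     atomic_load(&kvm->nr_memslots_dirty_logging) + delta);
	toy_unlock(&kvm->slots_lock);
}

/* Reader side: lockless and possibly stale, which the caller accepts. */
static int toy_dirty_logging_hint(struct toy_kvm *kvm)
{
	return atomic_load(&kvm->nr_memslots_dirty_logging);
}
```

Plain load/store (rather than an atomic add) is enough here because the lock already serializes all writers; the atomic type only makes the unlocked read well-defined.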

Based on https://lore.kernel.org/kvm/20221027200316.2221027-2-dmatlack@google.com/
by David Matlack.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini 2022-11-17 12:25:02 -05:00
parent 771a579c6e
commit 6c7b2202e4
3 changed files with 32 additions and 3 deletions


@@ -6878,16 +6878,32 @@ static void kvm_recover_nx_huge_pages(struct kvm *kvm)
 		WARN_ON_ONCE(!sp->nx_huge_page_disallowed);
 		WARN_ON_ONCE(!sp->role.direct);
 
-		slot = gfn_to_memslot(kvm, sp->gfn);
-		WARN_ON_ONCE(!slot);
-
 		/*
 		 * Unaccount and do not attempt to recover any NX Huge Pages
 		 * that are being dirty tracked, as they would just be faulted
 		 * back in as 4KiB pages. The NX Huge Pages in this slot will be
 		 * recovered, along with all the other huge pages in the slot,
 		 * when dirty logging is disabled.
+		 *
+		 * Since gfn_to_memslot() is relatively expensive, it helps to
+		 * skip it if the test cannot possibly return true.  On the
+		 * other hand, if any memslot has logging enabled, chances are
+		 * good that all of them do, in which case unaccount_nx_huge_page()
+		 * is much cheaper than zapping the page.
+		 *
+		 * If a memslot update is in progress, reading an incorrect value
+		 * of kvm->nr_memslots_dirty_logging is not a problem: if it is
+		 * becoming zero, gfn_to_memslot() will be done unnecessarily; if
+		 * it is becoming nonzero, the page will be zapped unnecessarily.
+		 * Either way, this only affects efficiency in racy situations,
+		 * and not correctness.
 		 */
-		if (kvm_slot_dirty_track_enabled(slot))
+		slot = NULL;
+		if (atomic_read(&kvm->nr_memslots_dirty_logging)) {
+			slot = gfn_to_memslot(kvm, sp->gfn);
+			WARN_ON_ONCE(!slot);
+		}
+
+		if (slot && kvm_slot_dirty_track_enabled(slot))
 			unaccount_nx_huge_page(kvm, sp);
 		else if (is_tdp_mmu_page(sp))
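The fast path in this hunk can be isolated into a self-contained sketch; the `toy_` types and the lookup counter are stand-ins, not the real KVM helpers:

```c
#include <stdatomic.h>
#include <stddef.h>

/* Toy stand-ins for the fast path above; not the real KVM types. */
struct toy_slot { int dirty_track_enabled; };

struct toy_kvm {
	atomic_int nr_memslots_dirty_logging;
	struct toy_slot the_slot;
	int lookups;		/* counts calls to the "expensive" lookup */
};

static struct toy_slot *toy_gfn_to_memslot(struct toy_kvm *kvm)
{
	kvm->lookups++;		/* models the cost of gfn_to_memslot() */
	return &kvm->the_slot;
}

/* Returns 1 when the page would be unaccounted instead of zapped. */
static int toy_should_unaccount(struct toy_kvm *kvm)
{
	struct toy_slot *slot = NULL;

	/*
	 * Skip the expensive lookup entirely when no memslot can have
	 * dirty logging enabled; a racy, stale read here only costs
	 * efficiency, never correctness.
	 */
	if (atomic_load(&kvm->nr_memslots_dirty_logging))
		slot = toy_gfn_to_memslot(kvm);

	return slot && slot->dirty_track_enabled;
}
```

The key property, matching the comment in the diff, is that both possible stale reads degrade gracefully: a stale nonzero value performs one unnecessary lookup, a stale zero value merely zaps a page that would have been faulted back anyway.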


@@ -722,6 +722,11 @@ struct kvm {
 	/* The current active memslot set for each address space */
 	struct kvm_memslots __rcu *memslots[KVM_ADDRESS_SPACE_NUM];
 	struct xarray vcpu_array;
+	/*
+	 * Protected by slots_lock, but can be read outside if an
+	 * incorrect answer is acceptable.
+	 */
+	atomic_t nr_memslots_dirty_logging;
 
 	/* Used to wait for completion of MMU notifiers. */
 	spinlock_t mn_invalidate_lock;


@@ -1641,6 +1641,8 @@ static void kvm_commit_memory_region(struct kvm *kvm,
 				     const struct kvm_memory_slot *new,
 				     enum kvm_mr_change change)
 {
+	int old_flags = old ? old->flags : 0;
+	int new_flags = new ? new->flags : 0;
 	/*
 	 * Update the total number of memslot pages before calling the arch
 	 * hook so that architectures can consume the result directly.
@@ -1650,6 +1652,12 @@ static void kvm_commit_memory_region(struct kvm *kvm,
 	else if (change == KVM_MR_CREATE)
 		kvm->nr_memslot_pages += new->npages;
 
+	if ((old_flags ^ new_flags) & KVM_MEM_LOG_DIRTY_PAGES) {
+		int change = (new_flags & KVM_MEM_LOG_DIRTY_PAGES) ? 1 : -1;
+		atomic_set(&kvm->nr_memslots_dirty_logging,
+			   atomic_read(&kvm->nr_memslots_dirty_logging) + change);
+	}
+
 	kvm_arch_commit_memory_region(kvm, old, new, change);
 
 	switch (change) {
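The counter update in this hunk hinges on XOR picking out only the bits that differ between the old and new flags. A minimal standalone sketch of that toggle detection, using a made-up flag value rather than the real KVM_MEM_LOG_DIRTY_PAGES uapi constant:

```c
/*
 * Sketch of the toggle test used above: (old ^ new) & FLAG is nonzero
 * only when FLAG differs between the two masks, and the new mask then
 * tells us which direction it flipped.  TOY_LOG_DIRTY_PAGES is an
 * illustrative value, not the real KVM_MEM_LOG_DIRTY_PAGES.
 */
#define TOY_LOG_DIRTY_PAGES (1u << 3)

static int toy_dirty_logging_delta(unsigned int old_flags, unsigned int new_flags)
{
	if ((old_flags ^ new_flags) & TOY_LOG_DIRTY_PAGES)
		return (new_flags & TOY_LOG_DIRTY_PAGES) ? 1 : -1;
	return 0;	/* flag unchanged: the counter stays put */
}
```

Because kvm_commit_memory_region() is called for every kind of memslot change, guarding on the XOR keeps the counter untouched on updates that do not flip dirty logging, rather than blindly re-deriving it.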