Mirror of https://github.com/torvalds/linux.git
3c9bd4006b
Enabling dirty logging for the first time could take kvm->mmu_lock for an extended period of time; the main cost is clearing the D-bits of all last-level SPTEs. This situation can benefit from manual dirty log protect as well, which reduces the time mmu_lock is held. The sequence is like this:

1. Initialize all the bits of the dirty bitmap to 1 when enabling dirty log for the first time
2. Only write protect the huge pages
3. KVM_GET_DIRTY_LOG returns the dirty bitmap info
4. KVM_CLEAR_DIRTY_LOG will clear the D-bit for each of the leaf-level SPTEs gradually, in small chunks

Under an Intel(R) Xeon(R) Gold 6152 CPU @ 2.10GHz, I did some tests with a 128G Windows VM and counted the time taken by memory_global_dirty_log_start; here are the numbers:

| VM Size | Before | After optimization |
|---------|--------|--------------------|
| 128G    | 460ms  | 10ms               |

Signed-off-by: Jay Zhou <jianjay.zhou@huawei.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
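Below is a minimal sketch of what the userspace side of this flow can look like, assuming a kernel and `<linux/kvm.h>` that already define KVM_DIRTY_LOG_INITIALLY_SET (this commit or later). The names `vm_fd`, `slot_id`, `slot_pages` and the chunk size are illustrative placeholders, not part of the commit, and error handling is omitted.

```c
/*
 * Illustrative userspace sequence for manual dirty log protect with
 * KVM_DIRTY_LOG_INITIALLY_SET. Assumes an existing VM fd and a single
 * memslot; vm_fd, slot_id, slot_pages and the chunk size are made up
 * for the example, and error checking is omitted for brevity.
 */
#include <linux/kvm.h>
#include <sys/ioctl.h>
#include <stdlib.h>

static int enable_initially_set(int vm_fd)
{
	struct kvm_enable_cap cap = {
		.cap = KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2,
		/* Ask for manual clearing *and* an all-ones initial bitmap. */
		.args[0] = KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE |
			   KVM_DIRTY_LOG_INITIALLY_SET,
	};

	return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
}

static void sync_and_rearm(int vm_fd, __u32 slot_id, __u64 slot_pages)
{
	/* One bit per page; assumes 64-bit unsigned long (x86_64). */
	unsigned long *bitmap = calloc((slot_pages + 63) / 64, sizeof(*bitmap));

	/* Step 3: fetch the dirty bitmap. With INITIALLY_SET, the very
	 * first call reports every page as dirty even though KVM has not
	 * yet touched the leaf SPTEs. */
	struct kvm_dirty_log log = {
		.slot = slot_id,
		.dirty_bitmap = bitmap,
	};
	ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log);

	/* Step 4: re-arm write tracking gradually. Each KVM_CLEAR_DIRTY_LOG
	 * call clears D-bits (or write-protects) only for the window of
	 * pages passed in, so mmu_lock is held for short periods. */
	__u64 chunk = 256 * 1024;	/* pages per clear; multiple of 64 */

	for (__u64 first = 0; first < slot_pages; first += chunk) {
		__u64 n = slot_pages - first < chunk ? slot_pages - first : chunk;
		struct kvm_clear_dirty_log clear = {
			.slot = slot_id,
			.first_page = first,
			.num_pages = n,
			.dirty_bitmap = bitmap + first / 64,
		};

		ioctl(vm_fd, KVM_CLEAR_DIRTY_LOG, &clear);
	}

	free(bitmap);
}
```

The chunked KVM_CLEAR_DIRTY_LOG loop is where the mmu_lock savings come from: instead of clearing every leaf SPTE's D-bit up front when dirty logging is enabled, the work is spread across many small ioctls driven by userspace.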
boot
configs
crypto
entry
events
hyperv
ia32
include
kernel
kvm
lib
math-emu
mm
net
oprofile
pci
platform
power
purgatory
ras
realmode
tools
um
video
xen
.gitignore
Kbuild
Kconfig
Kconfig.cpu
Kconfig.debug
Makefile
Makefile_32.cpu
Makefile.um