3c9bd4006b
It could take kvm->mmu_lock for an extended period of time when enabling dirty log for the first time. The main cost is to clear all the D-bits of last level SPTEs. This situation can benefit from manual dirty log protect as well, which can reduce the mmu_lock time taken. The sequence is like this:

1. Initialize all the bits of the dirty bitmap to 1 when enabling dirty log for the first time
2. Only write protect the huge pages
3. KVM_GET_DIRTY_LOG returns the dirty bitmap info
4. KVM_CLEAR_DIRTY_LOG will clear the D-bit for each of the leaf level SPTEs gradually in small chunks

Under the Intel(R) Xeon(R) Gold 6152 CPU @ 2.10GHz environment, I did some tests with a 128G Windows VM and counted the time taken by memory_global_dirty_log_start; here are the numbers:

    VM Size        Before    After optimization
    128G           460ms     10ms

Signed-off-by: Jay Zhou <jianjay.zhou@huawei.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
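A minimal userspace sketch (not part of the patch) of how a VMM might drive the sequence above through the existing KVM ioctls: KVM_ENABLE_CAP with KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2, KVM_SET_USER_MEMORY_REGION, KVM_GET_DIRTY_LOG, and KVM_CLEAR_DIRTY_LOG. The helper names, slot number, page size and 256M chunk size are illustrative assumptions; vm_fd is assumed to be an open VM file descriptor and bitmap a buffer with one bit per guest page.

    /* Hypothetical VMM-side sketch; error handling trimmed. */
    #include <linux/kvm.h>
    #include <sys/ioctl.h>

    #define GUEST_PAGE_SIZE 4096ULL   /* assumption: 4K guest pages */

    static int enable_gradual_dirty_log(int vm_fd, __u64 guest_phys_addr,
                                        __u64 mem_size, void *host_va)
    {
        /* Opt in to manual protect + "initially set" before dirty
         * logging is enabled on any memslot. */
        struct kvm_enable_cap cap = {
            .cap = KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2,
            .args[0] = KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE |
                       KVM_DIRTY_LOG_INITIALLY_SET,
        };
        if (ioctl(vm_fd, KVM_ENABLE_CAP, &cap) < 0)
            return -1;

        /* With the flags above, turning on dirty logging only write
         * protects huge pages; leaf D-bits are left untouched. */
        struct kvm_userspace_memory_region region = {
            .slot = 0,
            .flags = KVM_MEM_LOG_DIRTY_PAGES,
            .guest_phys_addr = guest_phys_addr,
            .memory_size = mem_size,
            .userspace_addr = (__u64)(unsigned long)host_va,
        };
        return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
    }

    static int sync_and_clear(int vm_fd, void *bitmap, __u64 mem_size)
    {
        /* Step 3: fetch the dirty bitmap (all 1s on the first pass). */
        struct kvm_dirty_log get = { .slot = 0, .dirty_bitmap = bitmap };
        if (ioctl(vm_fd, KVM_GET_DIRTY_LOG, &get) < 0)
            return -1;

        /* Step 4: re-protect gradually; each call clears D-bits only
         * for pages in [first_page, first_page + num_pages). */
        __u64 total = mem_size / GUEST_PAGE_SIZE;
        __u64 chunk = (256ULL << 20) / GUEST_PAGE_SIZE;
        for (__u64 first = 0; first < total; first += chunk) {
            __u64 n = (total - first < chunk) ? total - first : chunk;
            struct kvm_clear_dirty_log clear = {
                .slot = 0,
                .first_page = first,
                .num_pages = n,
                /* bit 0 of this bitmap corresponds to first_page */
                .dirty_bitmap = (__u8 *)bitmap + first / 8,
            };
            if (ioctl(vm_fd, KVM_CLEAR_DIRTY_LOG, &clear) < 0)
                return -1;
        }
        return 0;
    }

The point of the chunked KVM_CLEAR_DIRTY_LOG loop is that each ioctl holds mmu_lock only long enough to re-protect num_pages worth of SPTEs, so the cost of clearing D-bits is spread across many short critical sections instead of being paid in one long hold when dirty logging is first enabled.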