Mirror of https://github.com/torvalds/linux.git (synced 2024-11-23 12:42:02 +00:00)

commit 45190f01dd
zswap will always try to shrink the pool when zswap is full. If there is high pressure on zswap, this results in flipping pages in and out of the zswap pool without any real benefit, and overall system performance drops. The previous discussion on this subject [1] ended up with a suggestion to implement a sort of hysteresis: once the limit has been hit, refuse to take pages into the zswap pool until it has sufficient space again. This is my take on this.

Hysteresis is controlled with a sysfs-configurable parameter (namely, accept_threshold_percent, exposed under /sys/module/zswap/parameters/). It specifies the threshold at which zswap starts accepting pages again after it became full. Setting this parameter to 100 disables the hysteresis and restores the pre-hysteresis zswap behavior.

[1] https://lkml.org/lkml/2019/11/8/949

Link: http://lkml.kernel.org/r/20200108200118.15563-1-vitaly.wool@konsulko.com
Signed-off-by: Vitaly Wool <vitaly.wool@konsulko.com>
Cc: Dan Streetman <ddstreet@ieee.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
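To make the described behavior concrete, below is a minimal, self-contained C sketch of the hysteresis decision. It is an illustration only, not the code from this commit: the names (`max_pool_pages`, `accept_thr_percent`, `pool_reached_full`, `store_page()`, and so on) are invented for the example, and the real implementation lives in mm/zswap.c.

```c
#include <stdbool.h>
#include <stdio.h>

/* Illustrative model of zswap's allocation hysteresis (not the kernel code). */
static unsigned long max_pool_pages = 1000;  /* pool limit, in pages */
static unsigned int accept_thr_percent = 90; /* models accept_threshold_percent */
static unsigned long pool_pages;             /* current pool size */
static bool pool_reached_full;               /* set once the limit is hit */

/* Size the pool must shrink below before stores are accepted again. */
static unsigned long accept_thr_pages(void)
{
	return max_pool_pages * accept_thr_percent / 100;
}

static bool can_accept(void)
{
	return pool_pages < accept_thr_pages();
}

/* Decide whether a new page may enter the pool. */
static bool store_page(void)
{
	if (pool_pages >= max_pool_pages)
		pool_reached_full = true;

	if (pool_reached_full) {
		if (!can_accept())
			return false;      /* refuse until below the threshold */
		pool_reached_full = false; /* hysteresis: threshold crossed */
	}

	pool_pages++;
	return true;
}

int main(void)
{
	unsigned long i;

	/* Fill the pool past its limit; stores start being refused at the limit. */
	for (i = 0; i < 1200; i++)
		store_page();
	printf("after filling: %lu pages, full=%d\n", pool_pages, pool_reached_full);

	pool_pages = 950; /* shrunk, but still above 90% of the limit */
	printf("at 950 pages, store accepted: %d\n", store_page());

	pool_pages = 850; /* below the 90% threshold */
	printf("at 850 pages, store accepted: %d\n", store_page());
	return 0;
}
```

The point of keeping the "reached full" state sticky until the pool drops below the threshold is to avoid the flip-flopping the commit message describes: without it, zswap would accept a new page the moment a single page was evicted and immediately be full again.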
Documentation/vm/:

.gitignore
active_mm.rst
balance.rst
cleancache.rst
frontswap.rst
highmem.rst
hmm.rst
hugetlbfs_reserv.rst
hwpoison.rst
index.rst
ksm.rst
memory-model.rst
mmu_notifier.rst
numa.rst
overcommit-accounting.rst
page_frags.rst
page_migration.rst
page_owner.rst
remap_file_pages.rst
slub.rst
split_page_table_lock.rst
swap_numa.rst
transhuge.rst
unevictable-lru.rst
z3fold.rst
zsmalloc.rst
zswap.rst