mm: speed up cancel_dirty_page() for clean pages
Patch series "Speed up page cache truncation", v1.

When rebasing our enterprise distro to a newer kernel (from 4.4 to 4.12)
we noticed a regression in the bonnie++ benchmark when deleting files.
Eventually we tracked this down to the fact that page cache truncation
had become about 10% slower. There were both gains and losses in that
range of kernels, but we were able to identify that commit 83929372f6
("filemap: prepare find and delete operations for huge pages") caused
about a 10% regression on its own.

After some investigation it did not seem easily possible to fix the
regression while keeping the THP-in-page-cache functionality, so we
decided to optimize the page cache truncation path instead to make up
for the change. This series is the result of that effort.

Patch 1 is an easy speedup of cancel_dirty_page(). Patches 2-6 refactor
the page cache truncation code so that it is easier to batch radix tree
operations. Patch 7 implements batching of deletes from the radix tree,
which more than makes up for the original regression.
This patch (of 7):
cancel_dirty_page() does quite some work even for clean pages (fetching
the mapping, locking the memcg, an atomic bit operation on the page
flags), so it accounts for ~2.5% of the cost of truncating a clean page.
That is not much, but it is still wasted work for something we do not
need at all. Check whether the page is actually dirty and avoid all of
that work when it is not.
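To make the shape of the change easier to see, here is a minimal,
self-contained userspace sketch of the same "test a cheap flag before
paying for the slow path" pattern. struct fake_page, slow_cancel() and
cancel_dirty() are hypothetical stand-ins for struct page,
__cancel_dirty_page() and the new inline wrapper; the actual kernel
change is in the diff below.

#include <stdbool.h>
#include <stdio.h>

struct fake_page {
	bool dirty;
};

/* Stands in for __cancel_dirty_page(): imagine the mapping lookup,
 * memcg locking and atomic flag clearing happening here. */
static void slow_cancel(struct fake_page *page)
{
	page->dirty = false;
}

/* Stands in for the new inline cancel_dirty_page() wrapper. */
static inline void cancel_dirty(struct fake_page *page)
{
	/* Avoid the expensive work entirely when the page is already clean. */
	if (page->dirty)
		slow_cancel(page);
}

int main(void)
{
	struct fake_page clean = { .dirty = false };
	struct fake_page dirty = { .dirty = true };

	cancel_dirty(&clean);	/* clean page: skips the slow path entirely */
	cancel_dirty(&dirty);	/* dirty page: pays the cost only when needed */

	printf("clean=%d dirty=%d\n", clean.dirty, dirty.dirty);
	return 0;
}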
Link: http://lkml.kernel.org/r/20171010151937.26984-2-jack@suse.cz
Signed-off-by: Jan Kara <jack@suse.cz>
Acked-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 384bc41fc0
commit 736304f324
include/linux/mm.h
@@ -1440,7 +1440,13 @@ void account_page_cleaned(struct page *page, struct address_space *mapping,
 			  struct bdi_writeback *wb);
 int set_page_dirty(struct page *page);
 int set_page_dirty_lock(struct page *page);
-void cancel_dirty_page(struct page *page);
+void __cancel_dirty_page(struct page *page);
+static inline void cancel_dirty_page(struct page *page)
+{
+	/* Avoid atomic ops, locking, etc. when not actually needed. */
+	if (PageDirty(page))
+		__cancel_dirty_page(page);
+}
 int clear_page_dirty_for_io(struct page *page);
 
 int get_cmdline(struct task_struct *task, char *buffer, int buflen);

mm/page-writeback.c
@@ -2608,7 +2608,7 @@ EXPORT_SYMBOL(set_page_dirty_lock);
  * page without actually doing it through the VM. Can you say "ext3 is
  * horribly ugly"? Thought you could.
  */
-void cancel_dirty_page(struct page *page)
+void __cancel_dirty_page(struct page *page)
 {
 	struct address_space *mapping = page_mapping(page);
 
@@ -2629,7 +2629,7 @@ void cancel_dirty_page(struct page *page)
 		ClearPageDirty(page);
 	}
 }
-EXPORT_SYMBOL(cancel_dirty_page);
+EXPORT_SYMBOL(__cancel_dirty_page);
 
 /*
  * Clear a page's dirty flag, while caring for dirty memory accounting.