============================
Transparent Hugepage Support
============================

This document describes design principles for Transparent Hugepage (THP)
support and its interaction with other parts of the memory management
system.

Design principles
=================

- "graceful fallback": mm components which don't have transparent hugepage
|
|
|
|
knowledge fall back to breaking huge pmd mapping into table of ptes and,
|
|
|
|
if necessary, split a transparent hugepage. Therefore these components
|
|
|
|
can continue working on the regular pages or regular pte mappings.
|
|
|
|
|
|
|
|
- if a hugepage allocation fails because of memory fragmentation,
  regular pages should be gracefully allocated instead and mixed in
  the same vma without any failure or significant delay, and without
  userland noticing.

- if some task quits and more hugepages become available (either
  immediately in the buddy allocator or through the VM), guest physical
  memory backed by regular pages should be relocated to hugepages
  automatically (with khugepaged).

- it doesn't require memory reservation and in turn it uses hugepages
  whenever possible (the only possible reservation here is kernelcore=,
  to prevent unmovable pages from fragmenting all the memory, but such
  a tweak is not specific to transparent hugepage support and it's a
  generic feature that applies to all dynamic high order allocations
  in the kernel).

get_user_pages and follow_page
==============================

get_user_pages and follow_page, if run on a hugepage, will return the
head or tail pages as usual (exactly as they would do on
hugetlbfs). Most GUP users will only care about the actual physical
address of the page and its temporary pinning, to be released after the
I/O is complete, so they won't ever notice the fact that the page is
huge. But if any driver is going to look at the page structure of a
tail page (like checking page->mapping or other bits that are relevant
for the head page and not the tail page), it should be updated to check
the head page instead. Taking a reference on any head/tail page
prevents the page from being split by anyone.

.. note::
   these aren't new constraints to the GUP API, and they match the
   same constraints that apply to hugetlbfs too, so any driver capable
   of handling GUP on hugetlbfs will also work fine on transparent
   hugepage backed mappings.
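
As an illustration, a GUP user that inspects page metadata would do so
on the head page. The following is only a sketch: the pages array and
nr are assumed to come from a successful get_user_pages() call, and the
driver-specific handling is a placeholder::

	/* Sketch only: fields like ->mapping are meaningful on the head
	 * page, so translate any tail page first via compound_head().
	 */
	static void inspect_gup_pages(struct page **pages, long nr)
	{
		long i;

		for (i = 0; i < nr; i++) {
			struct page *head = compound_head(pages[i]);

			if (head->mapping) {
				/* ... driver-specific handling ... */
			}

			put_page(pages[i]);	/* drop the temporary GUP pin */
		}
	}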

Graceful fallback
=================

Code walking pagetables but unaware of huge pmds can simply call
split_huge_pmd(vma, pmd, addr) where the pmd is the one returned by
pmd_offset. It's trivial to make the code transparent hugepage aware
by just grepping for "pmd_offset" and adding split_huge_pmd where
missing after pmd_offset returns the pmd. Thanks to the graceful
fallback design, with a one liner change, you can avoid writing
hundreds if not thousands of lines of complex code to make your code
hugepage aware.

If you're not walking pagetables but you run into a physical hugepage
that you can't handle natively in your code, you can split it by
calling split_huge_page(page). This is what the Linux VM does before
it tries to swap out the hugepage, for example. split_huge_page() can
fail if the page is pinned, and you must handle this correctly.
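
For example, a minimal sketch of that call pattern; the error handling
policy is hypothetical, while the page-lock requirement and the failure
check reflect the rules above::

	static int make_page_regular(struct page *page)
	{
		if (!PageTransHuge(page))
			return 0;	/* already a regular page */

		lock_page(page);	/* split_huge_page() needs the page lock */
		if (split_huge_page(page)) {
			unlock_page(page);
			return -EBUSY;	/* pinned: caller must fall back or retry */
		}
		unlock_page(page);
		return 0;		/* now a regular (head) page */
	}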

Example to make mremap.c transparent hugepage aware with a one liner
change::

	diff --git a/mm/mremap.c b/mm/mremap.c
	--- a/mm/mremap.c
	+++ b/mm/mremap.c
	@@ -41,6 +41,7 @@ static pmd_t *get_old_pmd(struct mm_stru
			return NULL;

		pmd = pmd_offset(pud, addr);
	+	split_huge_pmd(vma, pmd, addr);
		if (pmd_none_or_clear_bad(pmd))
			return NULL;

Locking in hugepage aware code
==============================

We want as much code as possible to be hugepage aware, as calling
split_huge_page() or split_huge_pmd() has a cost.

To make pagetable walks huge pmd aware, all you need to do is to call
pmd_trans_huge() on the pmd returned by pmd_offset. You must hold the
mmap_lock in read (or write) mode to be sure a huge pmd cannot be
created from under you by khugepaged (khugepaged collapse_huge_page
takes the mmap_lock in write mode in addition to the anon_vma lock). If
pmd_trans_huge returns false, you just fall back to the old code
paths. If instead pmd_trans_huge returns true, you have to take the
page table lock (pmd_lock()) and re-run pmd_trans_huge. Taking the
page table lock will prevent the huge pmd from being converted into a
regular pmd from under you (split_huge_pmd can run in parallel to the
pagetable walk). If the second pmd_trans_huge returns false, you
should just drop the page table lock and fall back to the old code as
before. Otherwise, you can proceed to process the huge pmd and the
hugepage natively. Once finished, you can drop the page table lock.
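
Putting that together, a sketch of the check-lock-recheck pattern
inside a walker function; walk_huge_pmd() and walk_pte_range() are
hypothetical stand-ins for the caller's own handlers, and the
mmap_lock is assumed to be held already::

	pmd_t *pmd = pmd_offset(pud, addr);

	if (pmd_trans_huge(*pmd)) {
		spinlock_t *ptl = pmd_lock(mm, pmd);

		if (pmd_trans_huge(*pmd)) {
			/* still huge: the lock keeps split_huge_pmd() away */
			walk_huge_pmd(mm, pmd, addr);
			spin_unlock(ptl);
			return;
		}
		/* split from under us: drop the lock, take the old path */
		spin_unlock(ptl);
	}
	walk_pte_range(mm, pmd, addr, end);	/* regular pte-based code path */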

Refcounts and transparent huge pages
====================================

Refcounting on THP is mostly consistent with refcounting on other compound
pages:

- get_page()/put_page() and GUP operate on folio->_refcount.

- ->_refcount in tail pages is always zero: get_page_unless_zero() never
  succeeds on tail pages.

- map/unmap of a PMD entry for the whole THP increment/decrement
  folio->_entire_mapcount and also increment/decrement
  folio->_nr_pages_mapped by COMPOUND_MAPPED when _entire_mapcount
  goes from -1 to 0 or 0 to -1.

- map/unmap of individual pages with PTE entry increment/decrement
  page->_mapcount and also increment/decrement folio->_nr_pages_mapped
  when page->_mapcount goes from -1 to 0 or 0 to -1, as this counts
  the number of pages mapped by PTE.

split_huge_page internally has to distribute the refcounts in the head
page to the tail pages before clearing all PG_head/tail bits from the page
structures. It can be done easily for refcounts taken by page table
entries, but we don't have enough information on how to distribute any
additional pins (i.e. from get_user_pages). split_huge_page() fails any
requests to split pinned huge pages: it expects the page count to be equal
to the sum of the mapcounts of all sub-pages plus one (the split_huge_page
caller must have a reference to the head page).
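
Expressed as a check, the expectation is roughly the following; this is
purely illustrative, since the in-kernel test (can_split_folio()) also
accounts for extra pins such as the swap cache::

	/* Sketch only: a non-pinned THP is expected to hold one reference
	 * from the split_huge_page() caller plus one per mapping of each
	 * sub-page.
	 */
	static bool thp_looks_unpinned(struct page *head)
	{
		return page_count(head) == 1 + total_mapcount(head);
	}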

split_huge_page uses migration entries to stabilize page->_refcount and
page->_mapcount of anonymous pages. File pages just get unmapped.

We are safe against physical memory scanners too: the only legitimate way
a scanner can get a reference to a page is get_page_unless_zero().

All tail pages have zero ->_refcount until atomic_add(). This prevents a
scanner from getting a reference to a tail page up to that point. After
the atomic_add() we don't care about the ->_refcount value: we already
know how many references should be uncharged from the head page.

For the head page, get_page_unless_zero() will succeed and we don't mind:
it's clear where such a reference should go after the split, as it will
stay on the head page.

Note that split_huge_pmd() doesn't have any limitations on refcounting:
a pmd can be split at any point and the split never fails.

Partial unmap and deferred_split_folio()
========================================

Unmapping part of a THP (with munmap() or another way) is not going to
free the memory immediately. Instead, we detect that a subpage of the THP
is not in use in page_remove_rmap() and queue the THP for splitting in
case memory pressure comes. Splitting will free up the unused subpages.

Splitting the page right away is not an option due to the locking context
in the place where partial unmap can be detected. It might also be
counterproductive, since in many cases partial unmap happens during
exit(2) if a THP crosses a VMA boundary.

The function deferred_split_folio() is used to queue a folio for
splitting. The splitting itself will happen when we get memory pressure
via the shrinker interface.
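
A sketch of how a partial unmap ends up on that queue, loosely following
the page_remove_rmap() logic described above (the in-kernel conditions
are more involved)::

	/* Sketch only: after removing a pte mapping, an anon THP that is
	 * still partially mapped gets queued; the shrinker splits it
	 * later, under memory pressure, freeing the unused subpages.
	 */
	if (folio_test_large(folio) && folio_test_anon(folio) &&
	    folio_mapped(folio))
		deferred_split_folio(folio);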