mm/thp: fix deferred split unqueue naming and locking

Recent changes are putting more pressure on THP deferred split queues:
under load they reveal long-standing races, causing list_del corruptions,
"Bad page state"s and worse (I keep BUGs in both of those checks, so
usually don't get to see how badly things end up without them).  The
relevant recent changes are 6.8's mTHP, 6.10's mTHP swapout, and 6.12's
mTHP swapin, improved swap allocation, and underused THP splitting.

Before fixing locking: rename misleading folio_undo_large_rmappable(),
which does not undo large_rmappable, to folio_unqueue_deferred_split(),
which is what it does.  But that and its out-of-line __callee are mm
internals of very limited usability: add comment and WARN_ON_ONCEs to
check usage; and return a bool to say if a deferred split was unqueued,
which can then be used in WARN_ON_ONCEs around safety checks (sparing
callers the arcane conditionals in __folio_unqueue_deferred_split()).
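
In outline, the renamed pair then looks like this (condensed from the
mm/internal.h hunk in the diff below):

	bool __folio_unqueue_deferred_split(struct folio *folio);

	static inline bool folio_unqueue_deferred_split(struct folio *folio)
	{
		/* Only large rmappable folios ever have a _deferred_list */
		if (folio_order(folio) <= 1 || !folio_test_large_rmappable(folio))
			return false;
		/* Racy list_empty(): nobody can be adding at this point */
		if (data_race(list_empty(&folio->_deferred_list)))
			return false;
		return __folio_unqueue_deferred_split(folio);
	}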

Just omit the folio_unqueue_deferred_split() from free_unref_folios(), all
of whose callers now call it beforehand (and if any forget then bad_page()
will tell) - except for its caller put_pages_list(), which itself no
longer has any callers (and will be deleted separately).

Swapout: mem_cgroup_swapout() has been resetting folio->memcg_data to 0
without checking and unqueueing a THP folio from the deferred split list;
which is unfortunate, since the split_queue_lock depends on the memcg
(when memcg is enabled); so swapout has been unqueueing such THPs later,
when freeing the folio, using the pgdat's lock instead: potentially
corrupting the memcg's list.  __remove_mapping() has frozen refcount to 0
here, so no problem with calling folio_unqueue_deferred_split() before
resetting memcg_data.
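
In other words, the fix is a one-line ordering change (see the
mem_cgroup_swapout() hunk in mm/memcontrol.c below): unqueue while
folio->memcg_data still identifies the memcg whose split_queue_lock
protects the list:

	folio_unqueue_deferred_split(folio);
	folio->memcg_data = 0;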

That goes back to 5.4 commit 87eaceb3fa ("mm: thp: make deferred split
shrinker memcg aware"): which included a check on swapcache before adding
to deferred queue, but no check on deferred queue before adding THP to
swapcache.  That worked fine with the usual sequence of events in reclaim
(though there were a couple of rare ways in which a THP on deferred queue
could have been swapped out), but 6.12 commit dafff3f4c8 ("mm: split
underused THPs") avoids splitting underused THPs in reclaim, which makes
swapcache THPs on deferred queue commonplace.

Keep the check on swapcache before adding to deferred queue?  Yes: it is
no longer essential, but preserves the existing behaviour, and is likely
to be a worthwhile optimization (vmstat showed much more traffic on the
queue under swapping load if the check was removed); update its comment.
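
The check itself is unchanged in deferred_split_folio() (mm/huge_memory.c
below); only its comment is rewritten:

	if (folio_test_swapcache(folio))
		return;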

Memcg-v1 move (deprecated): mem_cgroup_move_account() has been changing
folio->memcg_data without checking and unqueueing a THP folio from the
deferred list, sometimes corrupting the "from" memcg's list, as in swapout.
Refcount is non-zero here, so folio_unqueue_deferred_split() can only be
used in a WARN_ON_ONCE to validate the fix, which must be done earlier:
mem_cgroup_move_charge_pte_range() first tries to split the THP (splitting
of course unqueues), or skips it if that fails.  Not ideal, but moving
charge has been requested, and khugepaged should repair the THP later:
nobody wants new custom unqueueing code just for this deprecated case.
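
Condensed from the mm/memcontrol-v1.c hunk below, the split-or-skip logic
in mem_cgroup_move_charge_pte_range() becomes:

	if (!list_empty(&folio->_deferred_list)) {
		spin_unlock(ptl);
		if (!tried_split_before)
			split_folio(folio);	/* unqueues on success */
		folio_unlock(folio);
		folio_put(folio);
		if (tried_split_before)
			return 0;	/* still queued after trying: skip */
		tried_split_before = true;
		goto retry_pmd;
	}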

The 87eaceb3fa commit did have the code to move from one deferred list
to another (but was not conscious of its unsafety while refcount is non-0);
but that was removed by 5.6 commit fac0516b55 ("mm: thp: don't need care
deferred split queue in memcg charge move path"), which argued that the
existence of a PMD mapping guarantees that the THP cannot be on a deferred
list.  As above, false in rare cases, and now commonly false.

Backport to 6.11 should be straightforward.  Earlier backports must take
care that other _deferred_list fixes and dependencies are included.  There
is not a strong case for backports, but they can fix cornercases.

Link: https://lkml.kernel.org/r/8dc111ae-f6db-2da7-b25c-7a20b1effe3b@google.com
Fixes: 87eaceb3fa ("mm: thp: make deferred split shrinker memcg aware")
Fixes: dafff3f4c8 ("mm: split underused THPs")
Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Yang Shi <shy828301@gmail.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Barry Song <baohua@kernel.org>
Cc: Chris Li <chrisl@kernel.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Nhat Pham <nphamcs@gmail.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Shakeel Butt <shakeel.butt@linux.dev>
Cc: Usama Arif <usamaarif642@gmail.com>
Cc: Wei Yang <richard.weiyang@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
commit f8f931bba0 (parent e66f3185fa)
Author: Hugh Dickins, 2024-10-27 13:02:13 -07:00; committed by Andrew Morton
8 changed files with 67 additions and 24 deletions

--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3588,10 +3588,27 @@ int split_folio_to_list(struct folio *folio, struct list_head *list)
 	return split_huge_page_to_list_to_order(&folio->page, list, ret);
 }
 
-void __folio_undo_large_rmappable(struct folio *folio)
+/*
+ * __folio_unqueue_deferred_split() is not to be called directly:
+ * the folio_unqueue_deferred_split() inline wrapper in mm/internal.h
+ * limits its calls to those folios which may have a _deferred_list for
+ * queueing THP splits, and that list is (racily observed to be) non-empty.
+ *
+ * It is unsafe to call folio_unqueue_deferred_split() until folio refcount is
+ * zero: because even when split_queue_lock is held, a non-empty _deferred_list
+ * might be in use on deferred_split_scan()'s unlocked on-stack list.
+ *
+ * If memory cgroups are enabled, split_queue_lock is in the mem_cgroup: it is
+ * therefore important to unqueue deferred split before changing folio memcg.
+ */
+bool __folio_unqueue_deferred_split(struct folio *folio)
 {
 	struct deferred_split *ds_queue;
 	unsigned long flags;
+	bool unqueued = false;
+
+	WARN_ON_ONCE(folio_ref_count(folio));
+	WARN_ON_ONCE(!mem_cgroup_disabled() && !folio_memcg(folio));
 
 	ds_queue = get_deferred_split_queue(folio);
 	spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
@@ -3603,8 +3620,11 @@ void __folio_undo_large_rmappable(struct folio *folio)
 				      MTHP_STAT_NR_ANON_PARTIALLY_MAPPED, -1);
 		}
 		list_del_init(&folio->_deferred_list);
+		unqueued = true;
 	}
 	spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
+
+	return unqueued;	/* useful for debug warnings */
 }
 
 /* partially_mapped=false won't clear PG_partially_mapped folio flag */
@@ -3627,14 +3647,11 @@ void deferred_split_folio(struct folio *folio, bool partially_mapped)
 		return;
 
 	/*
-	 * The try_to_unmap() in page reclaim path might reach here too,
-	 * this may cause a race condition to corrupt deferred split queue.
-	 * And, if page reclaim is already handling the same folio, it is
-	 * unnecessary to handle it again in shrinker.
-	 *
-	 * Check the swapcache flag to determine if the folio is being
-	 * handled by page reclaim since THP swap would add the folio into
-	 * swap cache before calling try_to_unmap().
+	 * Exclude swapcache: originally to avoid a corrupt deferred split
+	 * queue. Nowadays that is fully prevented by mem_cgroup_swapout();
+	 * but if page reclaim is already handling the same folio, it is
+	 * unnecessary to handle it again in the shrinker, so excluding
+	 * swapcache here may still be a useful optimization.
 	 */
 	if (folio_test_swapcache(folio))
 		return;

--- a/mm/internal.h
+++ b/mm/internal.h
@@ -639,11 +639,11 @@ static inline void folio_set_order(struct folio *folio, unsigned int order)
 #endif
 }
 
-void __folio_undo_large_rmappable(struct folio *folio);
-static inline void folio_undo_large_rmappable(struct folio *folio)
+bool __folio_unqueue_deferred_split(struct folio *folio);
+static inline bool folio_unqueue_deferred_split(struct folio *folio)
 {
 	if (folio_order(folio) <= 1 || !folio_test_large_rmappable(folio))
-		return;
+		return false;
 
 	/*
 	 * At this point, there is no one trying to add the folio to
@@ -651,9 +651,9 @@ static inline void folio_undo_large_rmappable(struct folio *folio)
 	 * to check without acquiring the split_queue_lock.
 	 */
 	if (data_race(list_empty(&folio->_deferred_list)))
-		return;
+		return false;
 
-	__folio_undo_large_rmappable(folio);
+	return __folio_unqueue_deferred_split(folio);
 }
 
 static inline struct folio *page_rmappable_folio(struct page *page)

--- a/mm/memcontrol-v1.c
+++ b/mm/memcontrol-v1.c
@@ -848,6 +848,8 @@ static int mem_cgroup_move_account(struct folio *folio,
 	css_get(&to->css);
 	css_put(&from->css);
 
+	/* Warning should never happen, so don't worry about refcount non-0 */
+	WARN_ON_ONCE(folio_unqueue_deferred_split(folio));
 	folio->memcg_data = (unsigned long)to;
 
 	__folio_memcg_unlock(from);
@@ -1217,7 +1219,9 @@ static int mem_cgroup_move_charge_pte_range(pmd_t *pmd,
 	enum mc_target_type target_type;
 	union mc_target target;
 	struct folio *folio;
+	bool tried_split_before = false;
 
+retry_pmd:
 	ptl = pmd_trans_huge_lock(pmd, vma);
 	if (ptl) {
 		if (mc.precharge < HPAGE_PMD_NR) {
@@ -1227,6 +1231,27 @@ static int mem_cgroup_move_charge_pte_range(pmd_t *pmd,
 		target_type = get_mctgt_type_thp(vma, addr, *pmd, &target);
 		if (target_type == MC_TARGET_PAGE) {
 			folio = target.folio;
+			/*
+			 * Deferred split queue locking depends on memcg,
+			 * and unqueue is unsafe unless folio refcount is 0:
+			 * split or skip if on the queue? first try to split.
+			 */
+			if (!list_empty(&folio->_deferred_list)) {
+				spin_unlock(ptl);
+				if (!tried_split_before)
+					split_folio(folio);
+				folio_unlock(folio);
+				folio_put(folio);
+				if (tried_split_before)
+					return 0;
+				tried_split_before = true;
+				goto retry_pmd;
+			}
+			/*
+			 * So long as that pmd lock is held, the folio cannot
+			 * be racily added to the _deferred_list, because
+			 * __folio_remove_rmap() will find !partially_mapped.
+			 */
 			if (folio_isolate_lru(folio)) {
 				if (!mem_cgroup_move_account(folio, true,
 							     mc.from, mc.to)) {

--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -4629,9 +4629,6 @@ static void uncharge_folio(struct folio *folio, struct uncharge_gather *ug)
 	struct obj_cgroup *objcg;
 
 	VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);
-	VM_BUG_ON_FOLIO(folio_order(folio) > 1 &&
-			!folio_test_hugetlb(folio) &&
-			!list_empty(&folio->_deferred_list), folio);
 
 	/*
 	 * Nobody should be changing or seriously looking at
@@ -4678,6 +4675,7 @@ static void uncharge_folio(struct folio *folio, struct uncharge_gather *ug)
 		ug->nr_memory += nr_pages;
 	ug->pgpgout++;
 
+	WARN_ON_ONCE(folio_unqueue_deferred_split(folio));
 	folio->memcg_data = 0;
 }
@@ -4789,6 +4787,9 @@ void mem_cgroup_migrate(struct folio *old, struct folio *new)
 
 	/* Transfer the charge and the css ref */
 	commit_charge(new, memcg);
+
+	/* Warning should never happen, so don't worry about refcount non-0 */
+	WARN_ON_ONCE(folio_unqueue_deferred_split(old));
 	old->memcg_data = 0;
 }
@@ -4975,6 +4976,7 @@ void mem_cgroup_swapout(struct folio *folio, swp_entry_t entry)
 	VM_BUG_ON_FOLIO(oldid, folio);
 	mod_memcg_state(swap_memcg, MEMCG_SWAP, nr_entries);
 
+	folio_unqueue_deferred_split(folio);
 	folio->memcg_data = 0;
 
 	if (!mem_cgroup_is_root(memcg))

--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -490,7 +490,7 @@ static int __folio_migrate_mapping(struct address_space *mapping,
 		    folio_test_large_rmappable(folio)) {
 			if (!folio_ref_freeze(folio, expected_count))
 				return -EAGAIN;
-			folio_undo_large_rmappable(folio);
+			folio_unqueue_deferred_split(folio);
 			folio_ref_unfreeze(folio, expected_count);
 		}
 
@@ -515,7 +515,7 @@ static int __folio_migrate_mapping(struct address_space *mapping,
 	}
 
 	/* Take off deferred split queue while frozen and memcg set */
-	folio_undo_large_rmappable(folio);
+	folio_unqueue_deferred_split(folio);
 
 	/*
 	 * Now we know that no one else is looking at the folio:

--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2681,7 +2681,6 @@ void free_unref_folios(struct folio_batch *folios)
 		unsigned long pfn = folio_pfn(folio);
 		unsigned int order = folio_order(folio);
 
-		folio_undo_large_rmappable(folio);
 		if (!free_pages_prepare(&folio->page, order))
 			continue;
 		/*

--- a/mm/swap.c
+++ b/mm/swap.c
@@ -121,7 +121,7 @@ void __folio_put(struct folio *folio)
 	}
 
 	page_cache_release(folio);
-	folio_undo_large_rmappable(folio);
+	folio_unqueue_deferred_split(folio);
 	mem_cgroup_uncharge(folio);
 	free_unref_page(&folio->page, folio_order(folio));
 }
@@ -988,7 +988,7 @@ void folios_put_refs(struct folio_batch *folios, unsigned int *refs)
 			free_huge_folio(folio);
 			continue;
 		}
-		folio_undo_large_rmappable(folio);
+		folio_unqueue_deferred_split(folio);
 		__page_cache_release(folio, &lruvec, &flags);
 
 		if (j != i)

--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1476,7 +1476,7 @@ free_it:
 		 */
 		nr_reclaimed += nr_pages;
 
-		folio_undo_large_rmappable(folio);
+		folio_unqueue_deferred_split(folio);
 		if (folio_batch_add(&free_folios, folio) == 0) {
 			mem_cgroup_uncharge_folios(&free_folios);
 			try_to_unmap_flush();
@@ -1864,7 +1864,7 @@ static unsigned int move_folios_to_lru(struct lruvec *lruvec,
 		if (unlikely(folio_put_testzero(folio))) {
 			__folio_clear_lru_flags(folio);
 
-			folio_undo_large_rmappable(folio);
+			folio_unqueue_deferred_split(folio);
 			if (folio_batch_add(&free_folios, folio) == 0) {
 				spin_unlock_irq(&lruvec->lru_lock);
 				mem_cgroup_uncharge_folios(&free_folios);