linux/mm
Linus Torvalds cc09ee80c3 SLUB: reduce irq disabled scope and make it RT compatible

Merge tag 'mm-slub-5.15-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux

Pull SLUB updates from Vlastimil Babka:
 "SLUB: reduce irq disabled scope and make it RT compatible

   This series was initially inspired by Mel's pcplist local_lock
   rewrite, and also by an interest in better understanding SLUB's
   locking and the new primitives, their RT variants and implications.
   It makes SLUB compatible with PREEMPT_RT and generally more
   preemption-friendly, apparently without significant regressions, as
   the fast paths are not affected.

  The main changes to SLUB by this series:

    - irq disabling is now only done for the minimum amount of time
      needed to protect the strict kmem_cache_cpu fields, and as part of
      spin lock, local lock and bit lock operations to make them
      irq-safe

   - SLUB is fully PREEMPT_RT compatible

  The series should now be sufficiently tested in both RT and !RT
  configs, mainly thanks to Mike.

   The RFC/v1 version also got basic performance screening by Mel that
   didn't show major regressions. Mike's hackbench testing of v2 on !RT
   showed negligible differences [6]:

    virgin(ish) tip
    5.13.0.g60ab3ed-tip
              7,320.67 msec task-clock                #    7.792 CPUs utilized            ( +-  0.31% )
               221,215      context-switches          #    0.030 M/sec                    ( +-  3.97% )
                16,234      cpu-migrations            #    0.002 M/sec                    ( +-  4.07% )
                13,233      page-faults               #    0.002 M/sec                    ( +-  0.91% )
        27,592,205,252      cycles                    #    3.769 GHz                      ( +-  0.32% )
         8,309,495,040      instructions              #    0.30  insn per cycle           ( +-  0.37% )
         1,555,210,607      branches                  #  212.441 M/sec                    ( +-  0.42% )
             5,484,209      branch-misses             #    0.35% of all branches          ( +-  2.13% )

               0.93949 +- 0.00423 seconds time elapsed  ( +-  0.45% )
               0.94608 +- 0.00384 seconds time elapsed  ( +-  0.41% ) (repeat)
               0.94422 +- 0.00410 seconds time elapsed  ( +-  0.43% )

    5.13.0.g60ab3ed-tip +slub-local-lock-v2r3
              7,343.57 msec task-clock                #    7.776 CPUs utilized            ( +-  0.44% )
               223,044      context-switches          #    0.030 M/sec                    ( +-  3.02% )
                16,057      cpu-migrations            #    0.002 M/sec                    ( +-  4.03% )
                13,164      page-faults               #    0.002 M/sec                    ( +-  0.97% )
        27,684,906,017      cycles                    #    3.770 GHz                      ( +-  0.45% )
         8,323,273,871      instructions              #    0.30  insn per cycle           ( +-  0.28% )
         1,556,106,680      branches                  #  211.901 M/sec                    ( +-  0.31% )
             5,463,468      branch-misses             #    0.35% of all branches          ( +-  1.33% )

               0.94440 +- 0.00352 seconds time elapsed  ( +-  0.37% )
               0.94830 +- 0.00228 seconds time elapsed  ( +-  0.24% ) (repeat)
               0.93813 +- 0.00440 seconds time elapsed  ( +-  0.47% ) (repeat)

   RT configs showed some throughput regressions, but that's an expected
   tradeoff for the preemption improvements through the RT mutex. It
   didn't prevent v2 from being incorporated into the 5.13 RT tree [7],
   leading to testing exposure and bugfixes.

   Before the series, SLUB is lockless in both the allocation and free
   fast paths, but elsewhere it disables irqs for considerable periods
   of time - especially in the allocation slowpath and in bulk
   allocation, where IRQs are re-enabled only when a new page from the
   page allocator is needed and the context allows blocking. The irq
   disabled sections can then include deactivate_slab(), which walks a
   full freelist and frees the slab back to the page allocator, or
   unfreeze_partials(), which goes through a list of percpu partial
   slabs. The RT tree currently has some patches mitigating these, but
   we can do much better in mainline too.

   Patches 1-6 are straightforward improvements or cleanups that could
   exist outside of this series too, but are prerequisites for it.

   Patches 7-9 are also preparatory code changes with no functional
   effect, but they are not very useful without the rest of the series.

   Patch 10 simplifies the fast paths on systems with preemption, based
   on the (hopefully correct) observation that the current loops to
   verify tid are unnecessary; the loop in question is sketched below.
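
   For reference, a sketch of the pre-series snapshot loop (simplified
   from slab_alloc_node() in mm/slub.c; assumes the usual kmem_cache
   pointer 's' is in scope):

     struct kmem_cache_cpu *c;
     unsigned long tid;

     /*
      * Snapshot the cpu slab and its tid, and redo the reads if we were
      * preempted in between and the two no longer match. Patch 10
      * argues this loop is unnecessary: a stale c/tid pairing is caught
      * by the tid-checking cmpxchg in the fastpath anyway.
      */
     do {
             tid = this_cpu_read(s->cpu_slab->tid);
             c = raw_cpu_ptr(s->cpu_slab);
     } while (IS_ENABLED(CONFIG_PREEMPTION) &&
              unlikely(tid != READ_ONCE(c->tid)));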

  Patches 11-20 focus on reducing irq disabled scope in the allocation
  slowpath:

    - Patch 11 moves disabling of irqs into ___slab_alloc() from its
      callers, the allocation slowpath and bulk allocation. Instead,
      these callers only disable preemption to stabilize the cpu (see
      the sketch after this list).

    - The following patches then gradually reduce the scope of disabled
      irqs in ___slab_alloc() and the functions called from there. As of
      patch 14, the re-enabling of irqs based on gfp flags before
      calling the page allocator is removed from allocate_slab(). As of
      patch 17, it's possible to reach the page allocator (in case
      existing slabs are depleted) without disabling and re-enabling
      irqs even once on the way.
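
   A rough sketch of the caller-side structure after patch 11
   (simplified and hypothetical in its details; see __slab_alloc() in
   mm/slub.c for the real code):

     static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags,
                               int node, unsigned long addr,
                               struct kmem_cache_cpu *c)
     {
             void *p;

             /* Pin the cpu, but keep irqs enabled for the slowpath. */
             c = get_cpu_ptr(s->cpu_slab);
             p = ___slab_alloc(s, gfpflags, node, addr, c);
             put_cpu_ptr(s->cpu_slab);
             return p;
     }

   ___slab_alloc() itself then does local_irq_save()/local_irq_restore()
   only around the sections that actually touch the kmem_cache_cpu
   fields.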

   Patches 21-26 reduce the scope of disabled irqs in functions related
   to unfreezing percpu partial slabs.

   Patch 27 is preparatory. Patch 28 is adopted from the RT tree and
   converts the flushing of percpu slabs on all cpus from using IPIs to
   a workqueue, so that the processing doesn't happen with irqs disabled
   in the IPI handler. The flushing is not performance critical, so this
   should be acceptable.
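
   A sketch of the resulting flushing scheme (simplified; the real code
   also skips cpus that have nothing to flush and serializes callers
   with a mutex):

     struct slub_flush_work {
             struct work_struct work;
             struct kmem_cache *s;
     };

     static DEFINE_PER_CPU(struct slub_flush_work, slub_flush);

     static void flush_cpu_slab(struct work_struct *w)
     {
             struct slub_flush_work *sfw;

             sfw = container_of(w, struct slub_flush_work, work);
             /* ... deactivate sfw->s's cpu slab and percpu partials ... */
     }

     static void flush_all(struct kmem_cache *s)
     {
             unsigned int cpu;

             cpus_read_lock();
             for_each_online_cpu(cpu) {
                     struct slub_flush_work *sfw = &per_cpu(slub_flush, cpu);

                     INIT_WORK(&sfw->work, flush_cpu_slab);
                     sfw->s = s;
                     schedule_work_on(cpu, &sfw->work);
             }
             for_each_online_cpu(cpu)
                     flush_work(&per_cpu(slub_flush, cpu).work);
             cpus_read_unlock();
     }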

   Patch 29 also comes from the RT tree and makes object_map_lock RT
   compatible by converting it to a raw spinlock.
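
   In essence (a sketch; the spin_lock_irqsave() callers become
   raw_spin_lock_irqsave() accordingly):

     static DEFINE_RAW_SPINLOCK(object_map_lock);  /* was DEFINE_SPINLOCK() */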

   Patch 30 makes slab_lock irq-safe on RT, where we cannot rely on irqs
   being disabled by the list_lock spin lock usage.
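
   A sketch of the idea (simplified from the patch; on !RT the callers
   already run with irqs disabled, so only RT pays the extra cost):

     static void slab_lock(struct page *page, unsigned long *flags)
     {
             if (IS_ENABLED(CONFIG_PREEMPT_RT))
                     local_irq_save(*flags);
             bit_spin_lock(PG_locked, &page->flags);
     }

     static void slab_unlock(struct page *page, unsigned long *flags)
     {
             __bit_spin_unlock(PG_locked, &page->flags);
             if (IS_ENABLED(CONFIG_PREEMPT_RT))
                     local_irq_restore(*flags);
     }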

   Patch 31 changes kmem_cache_cpu->partial handling in put_cpu_partial()
   from a cmpxchg loop to a short irq disabled section, which is what all
   other code modifying the field uses. This addresses a theoretical race
   scenario pointed out by Jann, and makes the critical section safe with
   respect to RT local_lock semantics after the conversion in patch 33.
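
   A sketch of the new put_cpu_partial() structure (simplified; draining
   of an overfull partial list and the page/object counters are omitted):

     static void put_cpu_partial(struct kmem_cache *s, struct page *page,
                                 int drain)
     {
             struct page *oldpage;
             unsigned long flags;

             local_irq_save(flags);
             oldpage = this_cpu_read(s->cpu_slab->partial);
             page->next = oldpage;
             this_cpu_write(s->cpu_slab->partial, page);
             local_irq_restore(flags);
     }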

   Patch 32 changes preempt disable to migrate disable, so that the
   nested list_lock spinlock is safe to take on RT. Because
   migrate_disable() is a function call even on !RT, a small set of
   private wrappers is introduced to keep using the cheaper
   preempt_disable() on !PREEMPT_RT configurations (sketched below). As
   of this patch, SLUB should already be compatible with RT's lock
   semantics.
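
   The wrappers look roughly like this:

     #ifdef CONFIG_PREEMPT_RT
     #define slub_get_cpu_ptr(var)           \
     ({                                      \
             migrate_disable();              \
             this_cpu_ptr(var);              \
     })
     #define slub_put_cpu_ptr(var)           \
     do {                                    \
             (void)(var);                    \
             migrate_enable();               \
     } while (0)
     #else
     #define slub_get_cpu_ptr(var)   get_cpu_ptr(var)
     #define slub_put_cpu_ptr(var)   put_cpu_ptr(var)
     #endif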

   Finally, patch 33 replaces the irq disabled sections that protect
   kmem_cache_cpu fields in the slow paths with a local lock. However,
   on PREEMPT_RT this means the lockless fast paths can now preempt slow
   paths which don't expect that, so the local lock has to be taken also
   in the fast paths and they are no longer lockless. RT folks seem not
   to mind this tradeoff. The patch also updates the locking
   documentation in the file's comment"

Mike Galbraith and Mel Gorman verified that their earlier testing
observations still hold for the final series:

Link: https://lore.kernel.org/lkml/89ba4f783114520c167cc915ba949ad2c04d6790.camel@gmx.de/
Link: https://lore.kernel.org/lkml/20210907082010.GB3959@techsingularity.net/

* tag 'mm-slub-5.15-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux: (33 commits)
  mm, slub: convert kmem_cpu_slab protection to local_lock
  mm, slub: use migrate_disable() on PREEMPT_RT
  mm, slub: protect put_cpu_partial() with disabled irqs instead of cmpxchg
  mm, slub: make slab_lock() disable irqs with PREEMPT_RT
  mm: slub: make object_map_lock a raw_spinlock_t
  mm: slub: move flush_cpu_slab() invocations __free_slab() invocations out of IRQ context
  mm, slab: split out the cpu offline variant of flush_slab()
  mm, slub: don't disable irqs in slub_cpu_dead()
  mm, slub: only disable irq with spin_lock in __unfreeze_partials()
  mm, slub: separate detaching of partial list in unfreeze_partials() from unfreezing
  mm, slub: detach whole partial list at once in unfreeze_partials()
  mm, slub: discard slabs in unfreeze_partials() without irqs disabled
  mm, slub: move irq control into unfreeze_partials()
  mm, slub: call deactivate_slab() without disabling irqs
  mm, slub: make locking in deactivate_slab() irq-safe
  mm, slub: move reset of c->page and freelist out of deactivate_slab()
  mm, slub: stop disabling irqs around get_partial()
  mm, slub: check new pages with restored irqs
  mm, slub: validate slab from partial list or page allocator before making it cpu slab
  mm, slub: restore irqs around calling new_slab()
  ...
2021-09-08 12:36:00 -07:00
kasan Merge branch 'akpm' (patches from Andrew) 2021-09-03 10:08:28 -07:00
kfence Kbuild updates for v5.15 2021-09-03 15:33:47 -07:00
backing-dev.c Merge branch 'akpm' (patches from Andrew) 2021-09-03 10:08:28 -07:00
balloon_compaction.c mm: fix typos in comments 2021-05-07 00:26:35 -07:00
bootmem_info.c mm/bootmem_info.c: mark __init on register_page_bootmem_info_section 2021-09-03 09:58:14 -07:00
cleancache.c
cma_debug.c mm/cma: change cma mutex to irq safe spinlock 2021-05-05 11:27:21 -07:00
cma_sysfs.c mm: cma: support sysfs 2021-05-05 11:27:24 -07:00
cma.c mm: use proper type for cma_[alloc|release] 2021-05-05 11:27:24 -07:00
cma.h mm: cma: support sysfs 2021-05-05 11:27:24 -07:00
compaction.c mm: compaction: support triggering of proactive compaction by user 2021-09-03 09:58:17 -07:00
debug_page_ref.c
debug_vm_pgtable.c mm/debug_vm_pgtable: fix corrupted page flag 2021-09-03 09:58:10 -07:00
debug.c mm/debug: factor PagePoisoned out of __dump_page 2021-06-29 10:53:53 -07:00
dmapool.c mm/dmapool: use DEVICE_ATTR_RO macro 2021-06-29 10:53:52 -07:00
early_ioremap.c mm/early_ioremap.c: use __func__ instead of function name 2021-02-26 09:41:02 -08:00
fadvise.c mm, fadvise: improve the expensive remote LRU cache draining after FADV_DONTNEED 2020-10-13 18:38:29 -07:00
failslab.c
filemap.c Merge branch 'akpm' (patches from Andrew) 2021-09-03 10:08:28 -07:00
frontswap.c mm/mempool: minor coding style tweaks 2021-05-05 11:27:27 -07:00
gup_test.c selftests/vm: gup_test: test faulting in kernel, and verify pinnable pages 2021-05-05 11:27:26 -07:00
gup_test.h selftests/vm: gup_test: fix test flag 2021-05-05 11:27:26 -07:00
gup.c Revert "mm/gup: remove try_get_page(), call try_get_compound_head() directly" 2021-09-07 11:03:45 -07:00
highmem.c mm: fix typos in comments 2021-05-07 00:26:35 -07:00
hmm.c mm: device exclusive memory access 2021-07-01 11:06:03 -07:00
huge_memory.c mm,do_huge_pmd_numa_page: remove unnecessary TLB flushing code 2021-09-03 09:58:13 -07:00
hugetlb_cgroup.c hugetlb: make free_huge_page irq safe 2021-05-05 11:27:22 -07:00
hugetlb_vmemmap.c mm: hugetlb: introduce CONFIG_HUGETLB_PAGE_FREE_VMEMMAP_DEFAULT_ON 2021-06-30 20:47:26 -07:00
hugetlb_vmemmap.h mm: hugetlb: introduce nr_free_vmemmap_pages in the struct hstate 2021-06-30 20:47:25 -07:00
hugetlb.c mm/hugetlb: add support for mempolicy MPOL_PREFERRED_MANY 2021-09-03 09:58:17 -07:00
hwpoison-inject.c mm: hwpoison: don't drop slab caches for offlining non-LRU page 2021-09-03 09:58:15 -07:00
init-mm.c mm: add setup_initial_init_mm() helper 2021-07-08 11:48:21 -07:00
internal.h mm/numa: automatically generate node migration order 2021-09-03 09:58:16 -07:00
interval_tree.c mm/interval_tree: add comments to improve code readability 2021-04-30 11:20:38 -07:00
io-mapping.c mm: add a io_mapping_map_user helper 2021-04-30 11:20:39 -07:00
ioremap.c mm/ioremap: fix iomap_max_page_shift 2021-05-14 19:41:32 -07:00
Kconfig mm: introduce memfd_secret system call to create "secret" memory areas 2021-07-08 11:48:21 -07:00
Kconfig.debug mm, page_poison: remove CONFIG_PAGE_POISONING_ZERO 2020-12-15 12:13:46 -08:00
khugepaged.c huge tmpfs: SGP_NOALLOC to stop collapse_file() on race 2021-09-03 09:58:12 -07:00
kmemleak.c kasan, kmemleak: reset tags when scanning block 2021-08-13 14:09:31 -10:00
ksm.c mm: KSM: fix data type 2021-09-03 09:58:18 -07:00
list_lru.c mm: vmscan: consolidate shrinker_maps handling code 2021-05-05 11:27:23 -07:00
maccess.c uaccess: add force_uaccess_{begin,end} helpers 2020-08-12 10:57:59 -07:00
madvise.c Merge branch 'akpm' (patches from Andrew) 2021-09-03 10:08:28 -07:00
Makefile mm: introduce memfd_secret system call to create "secret" memory areas 2021-07-08 11:48:21 -07:00
mapping_dirty_helpers.c mm/mapping_dirty_helpers: remove double Note in kerneldoc 2021-07-01 11:06:02 -07:00
memblock.c Merge branch 'akpm' (patches from Andrew) 2021-09-03 10:08:28 -07:00
memcontrol.c Merge branch 'akpm' (patches from Andrew) 2021-09-03 10:08:28 -07:00
memfd.c Reimplement RLIMIT_MEMLOCK on top of ucounts 2021-04-30 14:14:02 -05:00
memory_hotplug.c mm/migrate: enable returning precise migrate_pages() success count 2021-09-03 09:58:16 -07:00
memory-failure.c Merge branch 'akpm' (patches from Andrew) 2021-09-03 10:08:28 -07:00
memory.c mm: fix the deadlock in finish_fault() 2021-07-23 17:43:28 -07:00
mempolicy.c mm/mempolicy.c: use in_task() in mempolicy_slab_node() 2021-09-03 09:58:17 -07:00
mempool.c kasan: use separate (un)poison implementation for integrated init 2021-06-04 19:32:21 +01:00
memremap.c mm/memremap.c: fix improper SPDX comment style 2021-04-30 11:20:37 -07:00
memtest.c
migrate.c mm/migrate: correct kernel-doc notation 2021-09-03 09:58:18 -07:00
mincore.c inode: make init and permission helpers idmapped mount aware 2021-01-24 14:27:16 +01:00
mlock.c mm: introduce memfd_secret system call to create "secret" memory areas 2021-07-08 11:48:21 -07:00
mm_init.c include/linux/page-flags-layout.h: cleanups 2021-04-30 11:20:42 -07:00
mmap_lock.c mm: mmap_lock: fix disabling preemption directly 2021-07-23 17:43:28 -07:00
mmap.c Merge tag 'denywrite-for-5.15' of git://github.com/davidhildenbrand/linux 2021-09-04 11:35:47 -07:00
mmu_gather.c mm: eliminate "expecting prototype" kernel-doc warnings 2021-04-16 16:10:36 -07:00
mmu_notifier.c mm/mmu_notifiers: ensure range_end() is paired with range_start() 2021-03-25 09:22:55 -07:00
mmzone.c mm/lru: replace pgdat lru_lock with lruvec lock 2020-12-15 14:48:04 -08:00
mprotect.c mm: device exclusive memory access 2021-07-01 11:06:03 -07:00
mremap.c mm/mremap: fix memory account on do_munmap() failure 2021-09-03 09:58:14 -07:00
msync.c mm/msync: exit early when the flags is an MS_ASYNC and start < vm_start 2021-04-30 11:20:37 -07:00
nommu.c Merge tag 'denywrite-for-5.15' of git://github.com/davidhildenbrand/linux 2021-09-04 11:35:47 -07:00
oom_kill.c mm: introduce process_mrelease system call 2021-09-03 09:58:17 -07:00
page_alloc.c mm/migrate: enable returning precise migrate_pages() success count 2021-09-03 09:58:16 -07:00
page_counter.c mm: page_counter: mitigate consequences of a page_counter underflow 2021-04-30 11:20:38 -07:00
page_ext.c mm: replace CONFIG_FLAT_NODE_MEM_MAP with CONFIG_FLATMEM 2021-06-29 10:53:55 -07:00
page_idle.c mm: page_idle_get_page() does not need lru_lock 2020-12-15 14:48:03 -08:00
page_io.c swap: fix swapfile read/write offset 2021-03-02 17:25:46 -07:00
page_isolation.c mm/page_isolation: tracing: trace all test_pages_isolated failures 2021-09-03 09:58:15 -07:00
page_owner.c mm/page_owner: constify dump_page_owner 2021-06-29 10:53:53 -07:00
page_poison.c mm: page_poison: print page info when corruption is caught 2021-04-30 11:20:36 -07:00
page_reporting.c mm/page_reporting: allow driver to specify reporting order 2021-06-29 10:53:47 -07:00
page_reporting.h mm/page_reporting: export reporting order as module parameter 2021-06-29 10:53:47 -07:00
page_vma_mapped.c mm: device exclusive memory access 2021-07-01 11:06:03 -07:00
page-writeback.c Merge branch 'akpm' (patches from Andrew) 2021-09-03 10:08:28 -07:00
pagewalk.c mm: pagewalk: fix walk for hugepage tables 2021-06-29 10:53:49 -07:00
percpu-internal.h Merge branch 'for-5.14' of git://git.kernel.org/pub/scm/linux/kernel/git/dennis/percpu 2021-07-01 17:17:24 -07:00
percpu-km.c percpu: flush tlb in pcpu_reclaim_populated() 2021-07-04 18:30:17 +00:00
percpu-stats.c percpu: rework memcg accounting 2021-06-05 20:43:15 +00:00
percpu-vm.c percpu: flush tlb in pcpu_reclaim_populated() 2021-07-04 18:30:17 +00:00
percpu.c mm/percpu,c: remove obsolete comments of pcpu_chunk_populated() 2021-09-03 09:58:18 -07:00
pgalloc-track.h mm: fix typos in comments 2021-05-07 00:26:35 -07:00
pgtable-generic.c mm/thp: fix __split_huge_pmd_locked() on shmem migration entry 2021-06-16 09:24:42 -07:00
process_vm_access.c mm/process_vm_access.c: remove duplicate include 2021-05-05 11:27:27 -07:00
ptdump.c mm: ptdump: fix build failure 2021-04-16 16:10:37 -07:00
readahead.c mm: Protect operations adding pages to page cache with invalidate_lock 2021-07-13 13:14:27 +02:00
rmap.c \n 2021-08-30 10:24:50 -07:00
rodata_test.c mm/rodata_test.c: fix missing function declaration 2020-08-21 09:52:53 -07:00
secretmem.c mm/secretmem: wire up ->set_page_dirty 2021-07-23 17:43:28 -07:00
shmem.c Merge branch 'akpm' (patches from Andrew) 2021-09-03 10:08:28 -07:00
shuffle.c mm: eliminate "expecting prototype" kernel-doc warnings 2021-04-16 16:10:36 -07:00
shuffle.h mm/shuffle: fix section mismatch warning 2021-05-22 15:09:07 -10:00
slab_common.c mm: slub: move flush_cpu_slab() invocations __free_slab() invocations out of IRQ context 2021-09-04 01:12:23 +02:00
slab.c mm: fix typos in comments 2021-05-07 00:26:35 -07:00
slab.h mm/memcg: fix NULL pointer dereference in memcg_slab_free_hook() 2021-07-30 10:14:39 -07:00
slob.c mm: Don't build mm_dump_obj() on CONFIG_PRINTK=n kernels 2021-03-08 14:18:46 -08:00
slub.c mm, slub: convert kmem_cpu_slab protection to local_lock 2021-09-04 10:22:01 +02:00
sparse-vmemmap.c mm: sparsemem: split the huge PMD mapping of vmemmap pages 2021-06-30 20:47:26 -07:00
sparse.c mm: introduce memmap_alloc() to unify memory map allocation 2021-09-03 09:58:15 -07:00
swap_cgroup.c mm: memcontrol: make swap tracking an integral part of memory control 2020-06-03 20:09:48 -07:00
swap_slots.c mm: Replace deprecated CPU-hotplug functions. 2021-08-28 01:46:17 +02:00
swap_state.c Revert "mm: swap: check if swap backing device is congested or not" 2021-08-20 11:31:42 -07:00
swap.c mm: delete unused get_kernel_page() 2021-09-03 09:58:11 -07:00
swapfile.c mm, memcg: inline swap-related functions to improve disabled memcg config 2021-09-03 09:58:12 -07:00
truncate.c Merge branch 'akpm' (patches from Andrew) 2021-09-03 10:08:28 -07:00
usercopy.c mm/usercopy.c: delete duplicated word 2020-08-12 10:57:58 -07:00
userfaultfd.c userfaultfd: change mmap_changing to atomic 2021-09-03 09:58:16 -07:00
util.c mm: don't allow oversized kvmalloc() calls 2021-09-02 09:47:01 -07:00
vmacache.c kernel: better document the use_mm/unuse_mm API contract 2020-06-10 19:14:18 -07:00
vmalloc.c mm/vmalloc: fix wrong behavior in vread 2021-09-03 09:58:14 -07:00
vmpressure.c mm/vmpressure: replace vmpressure_to_css() with vmpressure_to_memcg() 2021-09-03 09:58:17 -07:00
vmscan.c mm, vmscan: guarantee drop_slab_node() termination 2021-09-03 09:58:17 -07:00
vmstat.c Merge branch 'akpm' (patches from Andrew) 2021-09-03 10:08:28 -07:00
workingset.c mm: workingset: define macro WORKINGSET_SHIFT 2021-06-30 20:47:28 -07:00
z3fold.c mm/z3fold: add kerneldoc fields for z3fold_pool 2021-07-01 11:06:03 -07:00
zbud.c mm/zbud: add kerneldoc fields for zbud_pool 2021-07-01 11:06:03 -07:00
zpool.c mm: fix typos in comments 2021-05-07 00:26:35 -07:00
zsmalloc.c mm/zsmalloc.c: improve readability for async_free_zspage() 2021-07-01 11:06:02 -07:00
zswap.c mm/zswap.c: fix two bugs in zswap_writeback_entry() 2021-06-30 20:47:31 -07:00