linux/mm
Mel Gorman 28f836b677 mm/page_alloc: split per cpu page lists and zone stats
The PCP (per-cpu page allocator in page_alloc.c) shares locking
requirements with vmstat and the zone lock, which is inconvenient and
causes some issues.  First, the PCP lists and vmstat share the same
per-cpu space, meaning that vmstat updates can dirty cache lines holding
the per-cpu lists across CPUs unless padding is used.  Second, PREEMPT_RT
does not want to disable IRQs for too long in the page allocator.

This series splits the locking requirements and uses locks types more
suitable for PREEMPT_RT, reduces the time when special locking is required
for stats and reduces the time when IRQs need to be disabled on
!PREEMPT_RT kernels.

Why local_lock?  PREEMPT_RT considers the following sequence to be unsafe
as documented in Documentation/locking/locktypes.rst

   local_irq_disable();
   spin_lock(&lock);

The pcp allocator has this sequence for rmqueue_pcplist (local_irq_save)
-> __rmqueue_pcplist -> rmqueue_bulk (spin_lock).  While it's possible to
separate this out, it generally means there are points where we enable
IRQs and immediately disable them again.  To prevent a migration and the
per-cpu pointer going stale, migrate_disable is also needed.  That is
effectively a custom lock that is similar to, but worse than, local_lock.
Furthermore, on PREEMPT_RT, it's undesirable to leave IRQs disabled for
too long.  By converting to local_lock, which disables migration on
PREEMPT_RT, the locking requirements can be separated and the protections
for the PCP, stats and the zone lock can start moving to PREEMPT_RT-safe
equivalents.  As a bonus, local_lock also means that PROVE_LOCKING does
something useful.
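
A minimal sketch of the resulting pattern, assuming illustrative names
(pcp_lists_example, example_rmqueue_pcplist) rather than the series'
actual code, looks like this:

   #include <linux/list.h>
   #include <linux/local_lock.h>
   #include <linux/mm_types.h>
   #include <linux/mmzone.h>
   #include <linux/percpu-defs.h>

   /* Hypothetical structure for illustration only. */
   struct pcp_lists_example {
           local_lock_t lock;
           struct list_head lists[MIGRATE_PCPTYPES];
   };

   /* The list heads still need INIT_LIST_HEAD() at boot (not shown). */
   static DEFINE_PER_CPU(struct pcp_lists_example, pcp_example) = {
           .lock = INIT_LOCAL_LOCK(lock),
   };

   static struct page *example_rmqueue_pcplist(int migratetype)
   {
           struct pcp_lists_example *pcp;
           struct page *page = NULL;
           unsigned long flags;

           /*
            * IRQs off on !PREEMPT_RT; a migrate-disabling per-CPU lock
            * on PREEMPT_RT.  Either way the per-cpu pointer below stays
            * stable and PROVE_LOCKING sees a real lock.
            */
           local_lock_irqsave(&pcp_example.lock, flags);
           pcp = this_cpu_ptr(&pcp_example);
           if (!list_empty(&pcp->lists[migratetype])) {
                   page = list_first_entry(&pcp->lists[migratetype],
                                           struct page, lru);
                   list_del(&page->lru);
           }
           local_unlock_irqrestore(&pcp_example.lock, flags);
           return page;
   }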

After that, it's obvious that zone_statistics incurs too much overhead and
leaves IRQs disabled for longer than necessary on !PREEMPT_RT kernels.
zone_statistics uses perfectly accurate counters that require IRQs to be
disabled for their parallel RMW sequences when inaccurate counters, like
vm_events, would do.  The series converts the NUMA statistics (NUMA_HIT
and friends) into inaccurate counters that then require no special
protection on !PREEMPT_RT.
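
For illustration, a minimal sketch of such event-style counting, with a
hypothetical counter name, is shown below: writers do a lockless per-CPU
increment and readers sum the per-CPU values without synchronisation, so
the result can be marginally stale but the fast path needs no IRQ
disabling.

   #include <linux/cpumask.h>
   #include <linux/percpu-defs.h>

   /* Hypothetical counter for illustration, not the series' code. */
   static DEFINE_PER_CPU(unsigned long, numa_hit_events);

   static inline void count_numa_hit(void)
   {
           /* IRQ- and preempt-safe per-CPU increment; no irqsave needed */
           this_cpu_inc(numa_hit_events);
   }

   static unsigned long read_numa_hits(void)
   {
           unsigned long sum = 0;
           int cpu;

           /* Concurrent increments may be missed briefly; that is accepted */
           for_each_online_cpu(cpu)
                   sum += per_cpu(numa_hit_events, cpu);
           return sum;
   }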

The bulk page allocator can then do its stat updates in bulk with IRQs
enabled, which should improve efficiency.  Technically, this could have
been done without the local_lock and vmstat conversion work; the order
simply reflects the timing of when the different series were implemented.
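
As a sketch of that batching, reusing the hypothetical
example_rmqueue_pcplist helper from the sketch above and the existing
__count_zid_vm_events() vmstat helper, the accounting for the whole batch
is done once after the pages have been pulled from the per-cpu lists,
with IRQs enabled:

   static unsigned int example_alloc_pages_bulk(struct zone *zone,
                                                int migratetype,
                                                unsigned int nr_pages,
                                                struct list_head *page_list)
   {
           unsigned int nr_account = 0;
           struct page *page;

           while (nr_account < nr_pages) {
                   page = example_rmqueue_pcplist(migratetype);
                   if (!page)
                           break;
                   list_add(&page->lru, page_list);
                   nr_account++;
           }

           /* One accounting step for the whole batch, with IRQs enabled */
           __count_zid_vm_events(PGALLOC, zone_idx(zone), nr_account);

           return nr_account;
   }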

Finally, there are places where IRQs being disabled for the PCP is
conflated with the IRQ-safe zone spinlock.  The remainder of the series
reduces the scope of what is protected by disabled IRQs on !PREEMPT_RT
kernels.  By the end of the series, page_alloc.c does not call
local_irq_save, so the locking scope is clearer.  The one exception is
that modifying NR_FREE_PAGES still happens in places where it is known
that IRQs are disabled, as this is harmless for PREEMPT_RT and splitting
the locking there would be expensive.

No performance data is included because, despite the overhead of the
stats, it's within the noise for most workloads on !PREEMPT_RT.  However,
Jesper Dangaard Brouer ran a page allocation microbenchmark on an E5-1650
v4 @ 3.60GHz CPU on the first version of this series.  Focusing on the
array variant of the bulk page allocator reveals the following.

(CPU: Intel(R) Xeon(R) CPU E5-1650 v4 @ 3.60GHz)
ARRAY variant: time_bulk_page_alloc_free_array: step=bulk size

 Bulk    Baseline        Patched
 1       56.383          54.225 (+3.83%)
 2       40.047          35.492 (+11.38%)
 3       37.339          32.643 (+12.58%)
 4       35.578          30.992 (+12.89%)
 8       33.592          29.606 (+11.87%)
 16      32.362          28.532 (+11.85%)
 32      31.476          27.728 (+11.91%)
 64      30.633          27.252 (+11.04%)
 128     30.596          27.090 (+11.46%)

While this is a positive outcome, the series is more likely to be
interesting to the RT people in terms of getting parts of the PREEMPT_RT
tree into mainline.

This patch (of 9):

The per-cpu page allocator lists and the per-cpu vmstat deltas are stored
in the same struct per_cpu_pages even though vmstats have no direct impact
on the per-cpu page lists.  This is inconsistent because the vmstats for a
node are stored in a dedicated structure.  The bigger issue is that the
per_cpu_pages structure is not cache-aligned, so stat updates either
cache-conflict with adjacent per-cpu lists, incurring a runtime cost, or
padding is required, incurring a memory cost.

This patch splits the per-cpu pagelists and the vmstat deltas into
separate structures.  It's mostly a mechanical conversion but some
variable renaming is done to clearly distinguish the per-cpu pages
structure (pcp) from the vmstats (pzstats).

Superficially, this appears to increase the size of the per_cpu_pages
structure but the movement of expire fills a structure hole so there is no
impact overall.
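
Roughly, the resulting split looks like the following sketch (field names
abridged from the real structures and the exact layout may differ):

   /* Per-cpu page lists only; no vmstat deltas share these cache lines. */
   struct per_cpu_pages {
           int count;              /* number of pages in the lists */
           int high;               /* high watermark, emptying needed */
           int batch;              /* chunk size for buddy add/remove */
   #ifdef CONFIG_NUMA
           int expire;             /* moved here; fills a structure hole */
   #endif
           struct list_head lists[MIGRATE_PCPTYPES];
   };

   /* Per-cpu vmstat deltas, now in a dedicated structure. */
   struct per_cpu_zonestat {
   #ifdef CONFIG_SMP
           s8 vm_stat_diff[NR_VM_ZONE_STAT_ITEMS];
           s8 stat_threshold;
   #endif
   #ifdef CONFIG_NUMA
           u16 vm_numa_stat_diff[NR_VM_NUMA_STAT_ITEMS];
   #endif
   };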

[mgorman@techsingularity.net: make it W=1 cleaner]
  Link: https://lkml.kernel.org/r/20210514144622.GA3735@techsingularity.net
[mgorman@techsingularity.net: make it W=1 even cleaner]
  Link: https://lkml.kernel.org/r/20210516140705.GB3735@techsingularity.net
[lkp@intel.com: check struct per_cpu_zonestat has a non-zero size]
[vbabka@suse.cz: Init zone->per_cpu_zonestats properly]

Link: https://lkml.kernel.org/r/20210512095458.30632-1-mgorman@techsingularity.net
Link: https://lkml.kernel.org/r/20210512095458.30632-2-mgorman@techsingularity.net
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Chuck Lever <chuck.lever@oracle.com>
Cc: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Michal Hocko <mhocko@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2021-06-29 10:53:54 -07:00
kasan kasan: add memory corruption identification support for hardware tag-based mode 2021-06-29 10:53:53 -07:00
kfence mm, slub: change run-time assertion in kmalloc_index() to compile-time 2021-06-29 10:53:46 -07:00
backing-dev.c writeback, cgroup: release dying cgwbs by switching attached inodes 2021-06-29 10:53:48 -07:00
balloon_compaction.c mm: fix typos in comments 2021-05-07 00:26:35 -07:00
cleancache.c
cma_debug.c mm/cma: change cma mutex to irq safe spinlock 2021-05-05 11:27:21 -07:00
cma_sysfs.c mm: cma: support sysfs 2021-05-05 11:27:24 -07:00
cma.c mm: use proper type for cma_[alloc|release] 2021-05-05 11:27:24 -07:00
cma.h mm: cma: support sysfs 2021-05-05 11:27:24 -07:00
compaction.c mm: memcontrol: remove the pgdata parameter of mem_cgroup_page_lruvec 2021-06-29 10:53:50 -07:00
debug_page_ref.c
debug_vm_pgtable.c mm/debug_vm_pgtable: ensure THP availability via has_transparent_hugepage() 2021-06-29 10:53:47 -07:00
debug.c mm/debug: factor PagePoisoned out of __dump_page 2021-06-29 10:53:53 -07:00
dmapool.c mm/dmapool: use DEVICE_ATTR_RO macro 2021-06-29 10:53:52 -07:00
early_ioremap.c mm/early_ioremap.c: use __func__ instead of function name 2021-02-26 09:41:02 -08:00
fadvise.c
failslab.c
filemap.c mm: charge active memcg when no mm is set 2021-06-29 10:53:50 -07:00
frontswap.c mm/mempool: minor coding style tweaks 2021-05-05 11:27:27 -07:00
gup_test.c selftests/vm: gup_test: test faulting in kernel, and verify pinnable pages 2021-05-05 11:27:26 -07:00
gup_test.h selftests/vm: gup_test: fix test flag 2021-05-05 11:27:26 -07:00
gup.c mm: gup: pack has_pinned in MMF_HAS_PINNED 2021-06-29 10:53:48 -07:00
highmem.c mm: fix typos in comments 2021-05-07 00:26:35 -07:00
hmm.c
huge_memory.c mm: thp: replace DEBUG_VM BUG with VM_WARN when unmap fails for split 2021-06-16 09:24:42 -07:00
hugetlb_cgroup.c hugetlb: make free_huge_page irq safe 2021-05-05 11:27:22 -07:00
hugetlb.c mm, futex: fix shared futex pgoff on shmem huge page 2021-06-24 19:40:54 -07:00
hwpoison-inject.c
init-mm.c
internal.h mm: memcg/slab: create a new set of kmalloc-cg-<n> caches 2021-06-29 10:53:49 -07:00
interval_tree.c mm/interval_tree: add comments to improve code readability 2021-04-30 11:20:38 -07:00
io-mapping.c mm: add a io_mapping_map_user helper 2021-04-30 11:20:39 -07:00
ioremap.c mm/ioremap: fix iomap_max_page_shift 2021-05-14 19:41:32 -07:00
Kconfig mm,memory_hotplug: allocate memmap from the added memory range 2021-05-05 11:27:26 -07:00
Kconfig.debug
khugepaged.c mm: fix typos in comments 2021-05-07 00:26:35 -07:00
kmemleak.c mm/kmemleak: fix possible wrong memory scanning period 2021-06-29 10:53:47 -07:00
ksm.c mm/ksm: use vma_lookup() in find_mergeable_vma() 2021-06-29 10:53:52 -07:00
list_lru.c mm: vmscan: consolidate shrinker_maps handling code 2021-05-05 11:27:23 -07:00
maccess.c
madvise.c mm: fix typos in comments 2021-05-07 00:26:35 -07:00
Makefile mm,memory_hotplug: add kernel boot option to enable memmap_on_memory 2021-05-05 11:27:27 -07:00
mapping_dirty_helpers.c mm/mapping_dirty_helpers: guard hugepage pud's usage 2021-04-16 16:10:37 -07:00
memblock.c memblock: remove return value of memblock_free_all() 2021-02-22 13:01:23 -08:00
memcontrol.c loop: charge i/o to mem and blk cg 2021-06-29 10:53:50 -07:00
memfd.c
memory_hotplug.c mm/mempool: minor coding style tweaks 2021-05-05 11:27:27 -07:00
memory-failure.c mm/hwpoison: do not lock page again when me_huge_page() successfully recovers 2021-06-24 19:40:54 -07:00
memory.c mm/memory.c: use vma_lookup() in __access_remote_vm() 2021-06-29 10:53:52 -07:00
mempolicy.c mm/mempolicy: use vma_lookup() in __access_remote_vm() 2021-06-29 10:53:52 -07:00
mempool.c mm/mempool: minor coding style tweaks 2021-05-05 11:27:27 -07:00
memremap.c mm/memremap.c: fix improper SPDX comment style 2021-04-30 11:20:37 -07:00
memtest.c
migrate.c mm/migrate: use vma_lookup() in do_pages_stat_array() 2021-06-29 10:53:52 -07:00
mincore.c inode: make init and permission helpers idmapped mount aware 2021-01-24 14:27:16 +01:00
mlock.c mm/mempool: minor coding style tweaks 2021-05-05 11:27:27 -07:00
mm_init.c include/linux/page-flags-layout.h: cleanups 2021-04-30 11:20:42 -07:00
mmap_lock.c mm: mmap_lock: use local locks instead of disabling preemption 2021-06-29 10:53:47 -07:00
mmap.c mm/mmap: use find_vma_intersection() in do_mmap() for overlap 2021-06-29 10:53:51 -07:00
mmu_gather.c mm: eliminate "expecting prototype" kernel-doc warnings 2021-04-16 16:10:36 -07:00
mmu_notifier.c mm/mmu_notifiers: ensure range_end() is paired with range_start() 2021-03-25 09:22:55 -07:00
mmzone.c
mprotect.c mm: fix typos in comments 2021-05-07 00:26:35 -07:00
mremap.c mm/mremap: use vma_lookup() in vma_to_resize() 2021-06-29 10:53:52 -07:00
msync.c mm/msync: exit early when the flags is an MS_ASYNC and start < vm_start 2021-04-30 11:20:37 -07:00
nommu.c mm: ignore MAP_EXECUTABLE in ksys_mmap_pgoff() 2021-06-29 10:53:50 -07:00
oom_kill.c mm: fix typos in comments 2021-05-07 00:26:35 -07:00
page_alloc.c mm/page_alloc: split per cpu page lists and zone stats 2021-06-29 10:53:54 -07:00
page_counter.c mm: page_counter: mitigate consequences of a page_counter underflow 2021-04-30 11:20:38 -07:00
page_ext.c
page_idle.c
page_io.c swap: fix swapfile read/write offset 2021-03-02 17:25:46 -07:00
page_isolation.c
page_owner.c mm/page_owner: constify dump_page_owner 2021-06-29 10:53:53 -07:00
page_poison.c mm: page_poison: print page info when corruption is caught 2021-04-30 11:20:36 -07:00
page_reporting.c mm/page_reporting: allow driver to specify reporting order 2021-06-29 10:53:47 -07:00
page_reporting.h mm/page_reporting: export reporting order as module parameter 2021-06-29 10:53:47 -07:00
page_vma_mapped.c mm/thp: another PVMW_SYNC fix in page_vma_mapped_walk() 2021-06-24 19:40:53 -07:00
page-writeback.c fs: remove noop_set_page_dirty() 2021-06-29 10:53:48 -07:00
pagewalk.c mm: pagewalk: fix walk for hugepage tables 2021-06-29 10:53:49 -07:00
percpu-internal.h mm: fix typos in comments 2021-05-07 00:26:35 -07:00
percpu-km.c
percpu-stats.c percpu: make pcpu_nr_empty_pop_pages per chunk type 2021-04-09 13:58:38 +00:00
percpu-vm.c mm/vmalloc: remove unmap_kernel_range 2021-04-30 11:20:40 -07:00
percpu.c mm: fix typos in comments 2021-05-07 00:26:35 -07:00
pgalloc-track.h mm: fix typos in comments 2021-05-07 00:26:35 -07:00
pgtable-generic.c mm/thp: fix __split_huge_pmd_locked() on shmem migration entry 2021-06-16 09:24:42 -07:00
process_vm_access.c mm/process_vm_access.c: remove duplicate include 2021-05-05 11:27:27 -07:00
ptdump.c mm: ptdump: fix build failure 2021-04-16 16:10:37 -07:00
readahead.c mm: Implement readahead_control pageset expansion 2021-04-23 10:14:29 +01:00
rmap.c mm/thp: fix page_address_in_vma() on file THP tails 2021-06-16 09:24:42 -07:00
rodata_test.c
shmem.c mm: charge active memcg when no mm is set 2021-06-29 10:53:50 -07:00
shuffle.c mm: eliminate "expecting prototype" kernel-doc warnings 2021-04-16 16:10:36 -07:00
shuffle.h mm/shuffle: fix section mismatch warning 2021-05-22 15:09:07 -10:00
slab_common.c mm: memcg/slab: disable cache merging for KMALLOC_NORMAL caches 2021-06-29 10:53:49 -07:00
slab.c mm: fix typos in comments 2021-05-07 00:26:35 -07:00
slab.h mm: memcg/slab: properly set up gfp flags for objcg pointer array 2021-06-29 10:53:49 -07:00
slob.c mm: Don't build mm_dump_obj() on CONFIG_PRINTK=n kernels 2021-03-08 14:18:46 -08:00
slub.c mm/slub: add taint after the errors are printed 2021-06-29 10:53:47 -07:00
sparse-vmemmap.c
sparse.c mm/sparse: fix check_usemap_section_nr warnings 2021-06-16 09:24:43 -07:00
swap_cgroup.c
swap_slots.c mm/swap_slots.c: delete meaningless forward declarations 2021-06-29 10:53:49 -07:00
swap_state.c swap: check mapping_empty() for swap cache before being freed 2021-06-29 10:53:49 -07:00
swap.c mm: memcontrol: remove the pgdata parameter of mem_cgroup_page_lruvec 2021-06-29 10:53:50 -07:00
swapfile.c mm, swap: remove unnecessary smp_rmb() in swap_type_to_swap_info() 2021-06-29 10:53:49 -07:00
truncate.c mm/thp: unmap_mapping_page() to fix THP truncate_cleanup_page() 2021-06-16 09:24:42 -07:00
usercopy.c
userfaultfd.c userfaultfd: hugetlbfs: fix new flag usage in error path 2021-05-22 15:09:07 -10:00
util.c mm/util.c: fix typo 2021-05-05 11:27:25 -07:00
vmacache.c
vmalloc.c mm: vmalloc: add cond_resched() in __vunmap() 2021-06-29 10:53:52 -07:00
vmpressure.c
vmscan.c mm: vmscan: remove noinline_for_stack 2021-06-29 10:53:50 -07:00
vmstat.c mm/page_alloc: split per cpu page lists and zone stats 2021-06-29 10:53:54 -07:00
workingset.c mm: memcontrol: remove the pgdata parameter of mem_cgroup_page_lruvec 2021-06-29 10:53:50 -07:00
z3fold.c mm: fix some typos and code style problems 2021-05-07 00:26:33 -07:00
zbud.c mm: set the sleep_mapped to true for zbud and z3fold 2021-02-26 09:41:01 -08:00
zpool.c mm: fix typos in comments 2021-05-07 00:26:35 -07:00
zsmalloc.c mm: fix typos in comments 2021-05-07 00:26:35 -07:00
zswap.c mm/zswap.c: switch from strlcpy to strscpy 2021-05-05 11:27:27 -07:00