linux/mm
Christoph Lameter 7c83912062 vmstat: Use per cpu atomics to avoid interrupt disable / enable
Currently the operations to increment vm counters must disable interrupts
in order to not mess up their housekeeping of counters.

So use this_cpu_cmpxchg() to avoid the overhead. Since we can no longer
count on preemption being disabled, we still have some minor issues:
the fetching of the counter threshold is racy. A threshold from another
cpu may be applied if we happen to be rescheduled on another cpu.
However, the following vmstat operation will then bring the counter
back under the threshold limit.

The operations for __xxx_zone_state are not changed since the caller
has taken care of the synchronization needs (and therefore the cycle
count is even lower than that of the optimized version for the
irq-disable case provided here).

The this_cpu_cmpxchg() optimization is only used if the architecture
supports efficient this_cpu ops (CONFIG_CMPXCHG_LOCAL must be set).

The use of this_cpu_cmpxchg() reduces the cycle count for the counter
operations by 80% (inc_zone_page_state goes from 170 cycles to 32).

Signed-off-by: Christoph Lameter <cl@linux.com>
2010-12-18 15:54:49 +01:00
backing-dev.c Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs-2.6 2010-10-26 17:58:44 -07:00
bootmem.c x86, memblock: Replace e820_/_early string with memblock_ 2010-08-27 11:13:47 -07:00
bounce.c bounce: call flush_dcache_page() after bounce_copy_vec() 2010-09-09 18:57:25 -07:00
compaction.c mm: compaction: handle active and inactive fairly in too_many_isolated 2010-09-09 18:57:24 -07:00
debug-pagealloc.c
dmapool.c mm: add a might_sleep_if() to dma_pool_alloc() 2010-10-26 16:52:08 -07:00
fadvise.c readahead: introduce FMODE_RANDOM for POSIX_FADV_RANDOM 2010-03-06 11:26:25 -08:00
failslab.c include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h 2010-03-30 22:02:32 +09:00
filemap_xip.c include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h 2010-03-30 22:02:32 +09:00
filemap.c Call the filesystem back whenever a page is removed from the page cache 2010-12-02 09:55:21 -05:00
fremap.c Avoid pgoff overflow in remap_file_pages 2010-09-25 09:34:58 -07:00
highmem.c mm,x86: fix kmap_atomic_push vs ioremap_32.c 2010-10-27 18:03:05 -07:00
hugetlb.c mm/hugetlb.c: avoid double unlock_page() in hugetlb_fault() 2010-12-02 14:51:14 -08:00
hwpoison-inject.c HWPOISON, hugetlb: support hwpoison injection for hugepage 2010-08-11 09:23:11 +02:00
init-mm.c mm: provide init_mm mm_context initializer 2010-08-09 20:44:54 -07:00
internal.h mm: fix is_mem_section_removable() page_order BUG_ON check 2010-10-26 16:52:11 -07:00
Kconfig Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu 2010-10-22 17:31:36 -07:00
Kconfig.debug trivial: improve help text for mm debug config options 2009-09-21 15:14:57 +02:00
kmemcheck.c kmemcheck: Fix build errors due to missing slab.h 2010-03-30 22:02:32 +09:00
kmemleak-test.c percpu: clean up percpu variable definitions 2009-06-24 15:13:48 +09:00
kmemleak.c kmemleak: Fix typo in the comment 2010-08-08 21:57:23 +01:00
ksm.c ksm: annotate ksm_thread_mutex is no deadlock source 2010-12-02 14:51:15 -08:00
maccess.c MN10300: Save frame pointer in thread_info struct rather than global var 2010-10-27 17:29:01 +01:00
madvise.c HWPOISON: Add a madvise() injector for soft page offlining 2009-12-16 12:20:00 +01:00
Makefile percpu: use percpu allocator on UP too 2010-10-02 10:26:05 +03:00
memblock.c memblock: Annotate memblock functions with __init_memblock 2010-10-11 16:00:52 -07:00
memcontrol.c cgroups: make swap accounting default behavior configurable 2010-11-25 06:50:45 +09:00
memory_hotplug.c mem-hotplug: introduce {un}lock_memory_hotplug() 2010-12-02 14:51:15 -08:00
memory-failure.c mem-hotplug: introduce {un}lock_memory_hotplug() 2010-12-02 14:51:15 -08:00
memory.c use clear_page()/copy_page() in favor of memset()/memcpy() on whole pages 2010-10-26 16:52:13 -07:00
mempolicy.c mm/mempolicy.c: add rcu read lock to protect pid structure 2010-12-02 14:51:14 -08:00
mempool.c mm: remove broken 'kzalloc' mempool 2009-09-22 07:17:35 -07:00
migrate.c mm: fix error reporting in move_pages() syscall 2010-10-26 16:52:11 -07:00
mincore.c mincore: do nested page table walks 2010-05-25 08:06:58 -07:00
mlock.c mm: Move vma_stack_continue into mm.h 2010-09-09 09:05:06 -07:00
mm_init.c
mmap.c install_special_mapping skips security_file_mmap check. 2010-12-15 12:30:36 -08:00
mmu_context.c exit: fix oops in sync_mm_rss 2010-03-24 16:31:21 -07:00
mmu_notifier.c include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h 2010-03-30 22:02:32 +09:00
mmzone.c mm: page allocator: calculate a better estimate of NR_FREE_PAGES when memory is low and kswapd is awake 2010-09-09 18:57:25 -07:00
mprotect.c perf_events: Fix perf_counter_mmap() hook in mprotect() 2010-11-09 10:19:38 -08:00
mremap.c mm: remove pte_*map_nested() 2010-10-26 16:52:08 -07:00
msync.c sanitize vfs_fsync calling conventions 2010-05-21 18:31:21 -04:00
nommu.c nommu: yield CPU while disposing VM 2010-11-25 06:50:38 +09:00
oom_kill.c oom: kill all threads sharing oom killed task's mm 2010-10-26 16:52:05 -07:00
page_alloc.c PM / Hibernate: Fix memory corruption related to swap 2010-12-06 23:52:08 +01:00
page_cgroup.c kmemleak: Annotate false positive in init_section_page_cgroup() 2010-07-19 11:54:14 +01:00
page_io.c block: unify flags for struct bio and struct request 2010-08-07 18:20:39 +02:00
page_isolation.c mm: page_isolation: codeclean fix comment and rm unneeded val init 2010-10-26 16:52:11 -07:00
page-writeback.c writeback: remove the internal 5% low bound on dirty_ratio 2010-10-26 16:52:08 -07:00
pagewalk.c mm: remove call to find_vma in pagewalk for non-hugetlbfs 2010-11-25 06:50:46 +09:00
percpu-km.c percpu: clear memory allocated with the km allocator 2010-10-02 10:28:42 +03:00
percpu-vm.c percpu: move vmalloc based chunk management into percpu-vm.c 2010-05-01 08:30:50 +02:00
percpu.c percpu: zero memory more efficiently in mm/percpu.c::pcpu_mem_alloc() 2010-12-07 14:47:23 +01:00
prio_tree.c
quicklist.c include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h 2010-03-30 22:02:32 +09:00
readahead.c readahead.c: fix comment 2010-05-25 08:07:00 -07:00
rmap.c rmap: make anon_vma_chain_free() static 2010-10-26 16:52:10 -07:00
shmem.c convert get_sb_nodev() users 2010-10-29 04:16:31 -04:00
slab.c core: Replace __get_cpu_var with __this_cpu_read if not used for an address. 2010-12-17 15:07:19 +01:00
slob.c slob: fix gfp flags for order-0 page allocations 2010-10-02 10:24:28 +03:00
slub.c slub: Fix a crash during slabinfo -v 2010-12-04 09:53:49 +02:00
sparse-vmemmap.c x86: Use memblock to replace early_res 2010-08-27 11:12:29 -07:00
sparse.c sparsemem: on no vmemmap path put mem_map on node high too 2010-05-25 08:06:56 -07:00
swap_state.c include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h 2010-03-30 22:02:32 +09:00
swap.c fuse: use release_pages() 2010-10-27 18:03:17 -07:00
swapfile.c /proc/swaps: support polling 2010-10-26 16:52:11 -07:00
thrash.c mm: pass mm to grab_swap_token 2009-06-23 12:50:05 -07:00
truncate.c Call the filesystem back whenever a page is removed from the page cache 2010-12-02 09:55:21 -05:00
util.c export __get_user_pages_fast() function 2010-10-24 10:51:24 +02:00
vmalloc.c vmalloc: eagerly clear ptes on vunmap 2010-12-02 14:51:15 -08:00
vmscan.c Call the filesystem back whenever a page is removed from the page cache 2010-12-02 09:55:21 -05:00
vmstat.c vmstat: Use per cpu atomics to avoid interrupt disable / enable 2010-12-18 15:54:49 +01:00