7 Commits
Chen Yucong (2c51856c9b)
mm: trace-vmscan-postprocess.pl: report the number of file/anon pages respectively

Until now, the reporting from trace-vmscan-postprocess.pl has not been very useful because the script cannot be used directly to check the file/anon ratio of scanning. This patch reports, respectively, the number of file/anon pages that were scanned/reclaimed by kswapd or direct reclaim. Sample output is usually something like the following.

Summary
Direct reclaims:                      8823
Direct reclaim pages scanned:         2438797
Direct reclaim file pages scanned:    1315200
Direct reclaim anon pages scanned:    1123597
Direct reclaim pages reclaimed:       446139
Direct reclaim file pages reclaimed:  378668
Direct reclaim anon pages reclaimed:  67471
Direct reclaim write file sync I/O:   0
Direct reclaim write anon sync I/O:   0
Direct reclaim write file async I/O:  0
Direct reclaim write anon async I/O:  4240
Wake kswapd requests:                 122310
Time stalled direct reclaim:          13.78 seconds

Kswapd wakeups:                       25817
Kswapd pages scanned:                 170779115
Kswapd file pages scanned:            162725123
Kswapd anon pages scanned:            8053992
Kswapd pages reclaimed:               129065738
Kswapd file pages reclaimed:          128500930
Kswapd anon pages reclaimed:          564808
Kswapd reclaim write file sync I/O:   0
Kswapd reclaim write anon sync I/O:   0
Kswapd reclaim write file async I/O:  36
Kswapd reclaim write anon async I/O:  730730
Time kswapd awake:                    1015.50 seconds

Signed-off-by: Chen Yucong <slaoub@gmail.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
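The per-type tallies above come from the file= field of the vmscan isolation events. A minimal sketch, in Perl, of how such a split can be accumulated while parsing a trace; the field names are assumptions based on the report above, not a copy of the script's code:

```perl
#!/usr/bin/perl
# Sketch: tally scanned pages into file vs anon buckets using the
# file= field of the vmscan isolation events. Field names are taken
# from the report above and are assumptions, not the script's code.
use strict;
use warnings;

my %scanned;
while (my $line = <STDIN>) {
    next unless $line =~ /mm_vmscan_lru_isolate:.*\bnr_scanned=(\d+).*\bfile=(\d+)/;
    my ($nr_scanned, $file) = ($1, $2);
    $scanned{ $file ? 'file' : 'anon' } += $nr_scanned;
}
printf "file pages scanned: %d\n", $scanned{file} // 0;
printf "anon pages scanned: %d\n", $scanned{anon} // 0;
```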
Chen Yucong (b27ebf7791)
mm: vmscan: update the trace-vmscan-postprocess.pl for event vmscan/mm_vmscan_lru_isolate

When using trace-vmscan-postprocess.pl to check the file/anon ratio of scanning, the check cannot be performed and the following message is reported:

WARNING: Format not as expected for event vmscan/mm_vmscan_lru_isolate
'file' != 'contig_taken' Fewer fields than expected in format at
./trace-vmscan-postprocess.pl line 171, <FORMAT> line 76.

In trace-vmscan-postprocess.pl, contig_taken, contig_dirty, and contig_failed are associated respectively with nr_lumpy_taken, nr_lumpy_dirty, and nr_lumpy_failed for lumpy reclaim. Via commit
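The quoted warning comes from the script's sanity check that compares the field names it expects against the event's format file. A rough sketch of that kind of check; the debugfs path and the expected-field list are assumptions, and the real script's logic differs in detail:

```perl
#!/usr/bin/perl
# Sketch of a format sanity check like the one behind the warning
# above. The expected-field list and the debugfs path are assumptions.
use strict;
use warnings;

my @expected = qw(isolate_mode order nr_requested nr_scanned nr_taken file);
my $format = '/sys/kernel/debug/tracing/events/vmscan/mm_vmscan_lru_isolate/format';

open(my $fmt, '<', $format) or die "cannot open $format: $!";
my @actual;
while (my $line = <$fmt>) {
    # Skip the common_* header fields; keep the event-specific ones.
    next if $line =~ /common_/;
    push @actual, $1 if $line =~ /field:[^;]*?(\w+);/;
}
close($fmt);

for my $i (0 .. $#expected) {
    if (!defined $actual[$i] || $expected[$i] ne $actual[$i]) {
        warn "WARNING: Format not as expected: " .
             "'$expected[$i]' != '" . ($actual[$i] // '<missing>') . "'\n";
    }
}
```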
Vinayak Menon (bd7278166a)
Documentation/trace/postprocess/trace-vmscan-postprocess.pl: fix the traceevent regex

When the irq, preempt and lockdep fields are printed in the trace output (field 3 in the example below), the script fails. An example entry:

kswapd0-610 [000] ...1 158.112152: mm_vmscan_kswapd_wake: nid=0 order=0

Signed-off-by: Vinayak Menon <vinayakm.list@gmail.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
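A regex of roughly the following shape copes with the flags field being optional. This is a sketch written against the example entry above, not the patch's exact pattern:

```perl
#!/usr/bin/perl
# Sketch: parse a trace line whether or not the irq/preempt/lockdep
# flags field ("...1" below) is present. Written against the example
# entry above, not copied from the patch itself.
use strict;
use warnings;

my $traceevent = qr{
    \s*(\S+)-(\d+)                    # 1,2: task name and pid
    \s+\[(\d+)\]                      # 3:   cpu
    (?:\s+[dX.][Nnp.][Hhs.][0-9.])?   # optional irq, preempt, lockdep flags
    \s+([0-9.]+):                     # 4:   timestamp
    \s+(\w+):                         # 5:   tracepoint name
    \s+(.*)                           # 6:   event payload
}x;

my $line = 'kswapd0-610 [000] ...1 158.112152: mm_vmscan_kswapd_wake: nid=0 order=0';
if ($line =~ $traceevent) {
    print "task=$1 pid=$2 cpu=$3 time=$4 event=$5 payload=$6\n";
}
```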
Minchan Kim (4356f21d09)
mm: change isolate mode from #define to bitwise type

Replace the ISOLATE_XXX macros with a bitwise isolate_mode_t type. Normally, macros are not recommended because they are type-unsafe and make debugging harder, since the symbol cannot be passed through to the debugger. Quote from Johannes: "Hmm, it would probably be cleaner to fully convert the isolation mode into independent flags. INACTIVE, ACTIVE, BOTH is currently a tri-state among flags, which is a bit ugly." This patch moves the isolate mode from swap.h to mmzone.h, which is pulled in via memcontrol.h.

Signed-off-by: Minchan Kim <minchan.kim@gmail.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
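On the postprocessing side, this change shows up as isolate_mode becoming a flags field in the mm_vmscan_lru_isolate payload. A sketch of decoding it in Perl; the flag values are assumptions mirroring the INACTIVE/ACTIVE/BOTH tri-state the commit message describes:

```perl
#!/usr/bin/perl
# Sketch: treat the isolate_mode= value from mm_vmscan_lru_isolate as
# bitwise flags. The flag values here are assumptions mirroring the
# INACTIVE/ACTIVE/BOTH tri-state described in the commit message.
use strict;
use warnings;

use constant {
    ISOLATE_INACTIVE => 0x1,
    ISOLATE_ACTIVE   => 0x2,
};

my $payload = 'isolate_mode=3 order=0 nr_requested=32 nr_scanned=32 nr_taken=32 file=1';
if ($payload =~ /isolate_mode=(\d+)/) {
    my $mode = $1;
    print "scans inactive list\n" if $mode & ISOLATE_INACTIVE;
    print "scans active list\n"   if $mode & ISOLATE_ACTIVE;
}
```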
Mel Gorman (7a2d19bced)
mm: vmscan: tracepoint: account for scanned pages similarly for both ftrace and vmstat

When correlating ftrace results with /proc/vmstat, I noticed that the reporting script's value for "pages scanned" differed significantly. Both values were "right" depending on how you look at it. The difference is due to vmstat only counting scanning of the inactive list towards pages scanned. The analysis script for the tracepoint counts the active and inactive lists, yielding a far higher value than vmstat, so the resulting scanning/reclaim ratio looks much worse. The tracepoint is fine, but this patch updates the reporting script so that its values for scanned pages are similar to vmstat's.

Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
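A sketch of the accounting rule the patch describes: only isolations from the inactive list contribute to the scanned total, so the script's number lines up with /proc/vmstat. The isolate_mode test and flag value below are assumptions, not the script's exact logic:

```perl
#!/usr/bin/perl
# Sketch: count "pages scanned" the way /proc/vmstat does, i.e. only
# when the inactive list was scanned. The isolate_mode test and flag
# value are assumptions, not the script's exact logic.
use strict;
use warnings;

use constant ISOLATE_INACTIVE => 0x1;

my $total_scanned = 0;
while (my $line = <STDIN>) {
    next unless $line =~ /mm_vmscan_lru_isolate:.*isolate_mode=(\d+).*nr_scanned=(\d+)/;
    my ($mode, $nr_scanned) = ($1, $2);
    # Active-list isolations are skipped so the total matches vmstat.
    $total_scanned += $nr_scanned if $mode & ISOLATE_INACTIVE;
}
print "pages scanned (vmstat-like): $total_scanned\n";
```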
Mel Gorman (e11da5b4fd)
tracing, vmscan: add trace events for LRU list shrinking

There have been numerous reports of stalls that pointed at the problem being somewhere in the VM. There are multiple roots to the problems, which means dealing with any one root problem in isolation is tricky to justify on its own, and the fixes would still need integration testing. This patch series puts together two different patch sets which in combination should tackle some of the root causes of the latency problems being reported.

Patch 1 adds a tracepoint for shrink_inactive_list. For this series, the most important result is being able to calculate the scanning/reclaim ratio as a measure of the amount of work being done by page reclaim.

Patch 2 accounts for time spent in congestion_wait.

Patches 3-6 were originally developed by Kosaki Motohiro but reworked for this series. It has been noted that lumpy reclaim is far too aggressive and thrashes the system somewhat. As SLUB uses high-order allocations, a large cost incurred by lumpy reclaim will be noticeable. It was also reported during transparent hugepage support testing that lumpy reclaim was thrashing the system, and these patches should mitigate that problem without disabling lumpy reclaim.

Patch 7 adds wait_iff_congested() and replaces some callers of congestion_wait(). wait_iff_congested() only sleeps if there is a BDI that is currently congested.

Patch 8 notes that any BDI being congested is not necessarily a problem because there could be multiple BDIs of varying speeds and numerous zones. It attempts to track when a zone being reclaimed contains many pages backed by a congested BDI and, if so, reclaimers wait on the congestion queue.

I ran a number of tests with monitoring on X86, X86-64 and PPC64. Each machine had 3G of RAM and the CPUs were:

X86: Intel P4 2-core
X86-64: AMD Phenom 4-core
PPC64: PPC970MP

Each used a single disk and the onboard IO controller. Dirty ratio was left at 20. I'm just going to report for X86-64 and PPC64 in a vague attempt to keep this report short. Four kernels were tested, each based on v2.6.36-rc4:

traceonly-v2r2:     Patches 1 and 2 to instrument vmscan reclaims and congestion_wait
lowlumpy-v2r3:      Patches 1-6 to test if lumpy reclaim is better
waitcongest-v2r3:   Patches 1-7 to only wait on congestion
waitwriteback-v2r4: Patches 1-8 to detect when a zone is congested
nocongest-v1r5:     Patches 1-3 for testing wait_iff_congestion
nodirect-v1r5:      Patches 1-10 to disable filesystem writeback for better IO

The tests run were as follows:

kernbench: compile-based benchmark. Smoke test performance.
sysbench: OLTP read-only benchmark. Will be re-run in the future as read-write.
micro-mapped-file-stream: a micro-benchmark from Johannes Weiner that accesses a large sparse file through mmap(). It was configured to run in only single-CPU mode but can be indicative of how well page reclaim identifies suitable pages.
stress-highalloc: tries to allocate huge pages under heavy load.

kernbench, iozone and sysbench did not report any performance regression on any machine. sysbench did pressure the system lightly and there was reclaim activity, but there were no differences of major interest between the kernels.
X86-64 micro-mapped-file-stream

                         traceonly-v2r2        lowlumpy-v2r3     waitcongest-v2r3   waitwriteback-v2r4
pgalloc_dma                1639.00 (  0.00%)     667.00 (-145.73%)    1167.00 ( -40.45%)     578.00 (-183.56%)
pgalloc_dma32           2842410.00 (  0.00%) 2842626.00 (   0.01%) 2843043.00 (   0.02%) 2843014.00 (   0.02%)
pgalloc_normal                0.00 (  0.00%)       0.00 (   0.00%)       0.00 (   0.00%)       0.00 (   0.00%)
pgsteal_dma                 729.00 (  0.00%)      85.00 (-757.65%)     609.00 ( -19.70%)     125.00 (-483.20%)
pgsteal_dma32           2338721.00 (  0.00%) 2447354.00 (   4.44%) 2429536.00 (   3.74%) 2436772.00 (   4.02%)
pgsteal_normal                0.00 (  0.00%)       0.00 (   0.00%)       0.00 (   0.00%)       0.00 (   0.00%)
pgscan_kswapd_dma          1469.00 (  0.00%)     532.00 (-176.13%)    1078.00 ( -36.27%)     220.00 (-567.73%)
pgscan_kswapd_dma32     4597713.00 (  0.00%) 4503597.00 (  -2.09%) 4295673.00 (  -7.03%) 3891686.00 ( -18.14%)
pgscan_kswapd_normal          0.00 (  0.00%)       0.00 (   0.00%)       0.00 (   0.00%)       0.00 (   0.00%)
pgscan_direct_dma            71.00 (  0.00%)     134.00 (  47.01%)     243.00 (  70.78%)     352.00 (  79.83%)
pgscan_direct_dma32      305820.00 (  0.00%)  280204.00 (  -9.14%)  600518.00 (  49.07%)  957485.00 (  68.06%)
pgscan_direct_normal          0.00 (  0.00%)       0.00 (   0.00%)       0.00 (   0.00%)       0.00 (   0.00%)
pageoutrun                16296.00 (  0.00%)   21254.00 (  23.33%)   18447.00 (  11.66%)   20067.00 (  18.79%)
allocstall                  443.00 (  0.00%)     273.00 ( -62.27%)     513.00 (  13.65%)    1568.00 (  71.75%)

These are based on the raw figures taken from /proc/vmstat. It's a rough measure of reclaim activity. Note that allocstall counts are higher because we are entering direct reclaim more often as a result of not sleeping in congestion. In itself, it's not necessarily a bad thing. It's easier to get a view of what happened from the vmscan tracepoint report.

FTrace Reclaim Statistics: vmscan
                                     traceonly-v2r2  lowlumpy-v2r3  waitcongest-v2r3  waitwriteback-v2r4
Direct reclaims                                 443            273               513                1568
Direct reclaim pages scanned                 305968         280402            600825              957933
Direct reclaim pages reclaimed                43503          19005             30327              117191
Direct reclaim write file async I/O               0              0                 0                   0
Direct reclaim write anon async I/O               0              3                 4                  12
Direct reclaim write file sync I/O                0              0                 0                   0
Direct reclaim write anon sync I/O                0              0                 0                   0
Wake kswapd requests                         187649         132338            191695              267701
Kswapd wakeups                                    3              1                 4                   1
Kswapd pages scanned                        4599269        4454162           4296815             3891906
Kswapd pages reclaimed
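The scanning/reclaim ratio the series description highlights falls straight out of these counters. A small sketch using the waitwriteback-v2r4 column above; presenting it as a percentage is a choice made here, not the script's output format:

```perl
#!/usr/bin/perl
# Sketch: the scanning/reclaim ratio described above, computed from the
# waitwriteback-v2r4 column of the ftrace summary. Presenting it as a
# percentage is a choice made here, not the script's output format.
use strict;
use warnings;

my $direct_scanned   = 957933;
my $direct_reclaimed = 117191;

printf "direct reclaim efficiency: %.2f%%\n",
       100 * $direct_reclaimed / $direct_scanned;   # ~12.23%
```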
Mel Gorman (b898cc7001)
vmscan: tracing: add a postprocessing script for reclaim-related ftrace events

Add a simple post-processing script for the reclaim-related trace events. It can be used to give an indication of how much traffic there is on the LRU lists and how severe latencies due to reclaim are. Example output looks like the following:

Reclaim latencies expressed as order-latency_in_ms
uname-3942 9-200.179000000004 9-98.7900000000373 9-99.8330000001006
kswapd0-311 0-662.097999999998 0-2.79700000002049
0-149.100000000035 0-3295.73600000003 0-9806.31799999997 0-35528.833
0-10043.197 0-129740.979 0-3.50500000000466 0-3.54899999999907
0-9297.78999999992 0-3.48499999998603 0-3596.97999999998 0-3.92799999995623
0-3.35000000009313 0-16729.017 0-3.57799999997951 0-47435.0630000001
0-3.7819999998901 0-5864.06999999995 0-18635.334 0-10541.289 9-186011.565
9-3680.86300000001 9-1379.06499999994 9-958571.115 9-66215.474
9-6721.14699999988 9-1962.15299999993 9-1094806.125 9-2267.83199999994
9-47120.9029999999 9-427653.886 9-2.6359999999404 9-632.148999999976
9-476.753000000026 9-495.577000000048 9-8.45900000003166 9-6.6820000000298
9-1.30500000016764 9-251.746000000043 9-383.905000000028 9-80.1419999999925
9-281.160000000149 9-14.8780000000261 9-381.45299999998 9-512.07799999998
9-49.5519999999087 9-167.439000000013 9-183.820999999996 9-239.527999999933
9-19.9479999998584 9-148.747999999905 9-164.583000000101 9-16.9480000000913
9-192.376000000164 9-64.1010000000242 9-1.40800000005402 9-3.60800000000745
9-17.1359999999404 9-4.69500000006519 9-2.06400000001304 9-1582488.554
9-6244.19499999983 9-348153.812 9-2.0999999998603 9-0.987999999895692
0-32218.473 0-1.6140000000596 0-1.28100000019185 0-1.41300000017509
0-1.32299999985844 0-602.584000000032 0-1.34400000004098 0-1.6929999999702
1-22101.8190000001 9-174876.724 9-16.2420000000857 9-175.165999999736
9-15.8589999997057 9-0.604999999981374 9-3061.09000000032 9-479.277000000235
9-1.54499999992549 9-771.985000000335 9-4.88700000010431 9-15.0649999999441
9-0.879999999888241 9-252.01500000013 9-1381.03600000031 9-545.689999999944
9-3438.0129999998 9-3343.70099999988
bench-stresshig-3942 9-7063.33900000004 9-129960.482 9-2062.27500000002
9-3845.59399999992 9-171.82799999998 9-16493.821 9-7615.23900000006
9-10217.848 9-983.138000000035 9-2698.39999999991 9-4016.1540000001
9-5522.37700000009 9-21630.429
9-15061.048 9-10327.953 9-542.69700000016 9-317.652000000002
9-8554.71699999995 9-1786.61599999992 9-1899.31499999994 9-2093.41899999999
9-4992.62400000007 9-942.648999999976 9-1923.98300000001 9-3.7980000001844
9-5.99899999983609 9-0.912000000011176 9-1603.67700000014 9-1.98300000000745
9-3.96500000008382 9-0.902999999932945 9-2802.72199999983 9-1078.24799999991
9-2155.82900000014 9-10.058999999892 9-1984.723 9-1687.97999999998
9-1136.05300000007 9-3183.61699999985 9-458.731000000145 9-6.48600000003353
9-1013.25200000009 9-8415.22799999989 9-10065.584 9-2076.79600000009
9-3792.65699999989 9-71.2010000001173 9-2560.96999999997 9-2260.68400000012
9-2862.65799999982 9-1255.81500000018 9-15.7440000001807 9-4.33499999996275
9-1446.63800000004 9-238.635000000009 9-60.1790000000037 9-4.38800000003539
9-639.567000000039 9-306.698000000091 9-31.4070000001229 9-74.997999999905
9-632.725999999791 9-1625.93200000003 9-931.266000000061 9-98.7749999999069
9-984.606999999844 9-225.638999999966 9-421.316000000108 9-653.744999999879
9-572.804000000004 9-769.158999999985 9-603.918000000063 9-4.28499999991618
9-626.21399999992 9-1721.25 9-0.854999999981374 9-572.39599999995
9-681.881999999983 9-1345.12599999993 9-363.666999999899 9-3823.31099999999
9-2991.28200000012 9-4.27099999994971 9-309.76500000013 9-3068.35700000008
9-788.25 9-3515.73999999999 9-2065.96100000013 9-286.719999999972
9-316.076000000117 9-344.151000000071 9-2.51000000000931 9-306.688000000082
9-1515.00099999993 9-336.528999999864 9-793.491999999853 9-457.348999999929
9-13620.155 9-119.933999999892 9-35.0670000000391 9-918.266999999993
9-828.569000000134 9-4863.81099999999 9-105.222000000067 9-894.23900000006
9-110.964999999851 9-0.662999999942258 9-12753.3150000002 9-12.6129999998957
9-13368.0899999999 9-12.4199999999255 9-1.00300000002608 9-1.41100000008009
9-10300.5290000001 9-16.502000000095 9-30.7949999999255 9-6283.0140000002
9-4320.53799999994 9-6826.27300000004 9-3.07299999985844 9-1497.26799999992
9-13.4040000000969 9-3.12999999988824 9-3.86100000003353 9-11.3539999998175
9-0.10799999977462 9-21.780999999959 9-209.695999999996 9-299.647000000114
9-6.01699999999255 9-20.8349999999627 9-22.5470000000205 9-5470.16800000006
9-7.60499999998137 9-0.821000000229105 9-1.56600000010803 9-14.1669999998994
9-0.209000000031665 9-1.82300000009127 9-1.70000000018626 9-19.9429999999702
9-124.266999999993 9-0.0389999998733401 9-6.71400000015274 9-16.7710000001825
9-31.0409999999683 9-0.516999999992549 9-115.888000000035 9-5.19900000002235
9-222.389999999898 9-11.2739999999758 9-80.9050000000279 9-8.14500000001863
9-4.44599999999627 9-0.218999999808148 9-0.715000000083819 9-0.233000000007451
9-48.2630000000354 9-248.560999999987 9-374.96800000011 9-644.179000000004
9-0.835999999893829 9-79.0060000000522 9-128.447999999858 9-0.692000000039116
9-5.26500000013039 9-128.449000000022 9-2.04799999995157 9-12.0990000001621
9-8.39899999997579 9-10.3860000001732 9-11.9310000000987 9-53.4450000000652
9-0.46999999997206 9-2.96299999998882 9-17.9699999999721 9-0.776000000070781
9-25.2919999998994 9-33.1110000000335 9-0.434000000124797 9-0.641000000061467
9-0.505000000121072 9-1.12800000002608 9-149.222000000067 9-1.17599999997765
9-3247.33100000001 9-10.7439999999478 9-153.523000000045 9-1.38300000014715
9-794.762000000104 9-3.36199999996461 9-128.765999999829 9-181.543999999994
9-78149.8229999999 9-176.496999999974 9-89.9940000001807 9-9.12700000009499
9-250.827000000048 9-0.224999999860302 9-0.388999999966472 9-1.16700000036508
9-32.1740000001155 9-12.6800000001676 9-0.0720000001601875 9-0.274999999906868
9-0.724000000394881 9-266.866000000387 9-45.5709999999963 9-4.54399999976158
9-8.27199999988079 9-4.38099999958649 9-0.512000000104308 9-0.0640000002458692
9-5.20000000018626 9-0.0839999997988343 9-12.816000000108 9-0.503000000026077
9-0.507999999914318 9-6.23999999975786 9-3.35100000025705 9-18.8530000001192
9-25.2220000000671 9-68.2309999996796 9-98.9939999999478 9-0.441000000108033
9-4.24599999981001 9-261.702000000048 9-3.01599999982864 9-0.0749999997206032
9-0.0370000000111759 9-4.375 9-3.21800000034273 9-11.3960000001825
9-0.0540000000037253 9-0.286000000312924 9-0.865999999921769
9-0.294999999925494 9-6.45999999996275 9-4.31099999975413 9-128.248999999836
9-0.282999999821186 9-102.155000000261 9-0.0860000001266599
9-0.0540000000037253 9-0.935000000055879 9-0.0670000002719462
9-5.8640000000596 9-19.9860000000335 9-4.18699999991804 9-0.566000000108033
9-2.55099999997765 9-0.702000000048429 9-131.653999999631 9-0.638999999966472
9-14.3229999998584 9-183.398000000045 9-178.095999999903 9-3.22899999981746
9-7.31399999978021 9-22.2400000002235 9-11.7979999999516 9-108.10599999968
9-99.0159999998286 9-102.640999999829 9-38.414000000339

Process                Direct   Wokeup    Pages    Pages    Pages
details                 Rclms   Kswapd  Scanned  Sync-IO ASync-IO
cc1-30800                   0        1        0        0        0  wakeup-0=1
cc1-24260                   0        1        0        0        0  wakeup-0=1
cc1-24152                   0       12        0        0        0  wakeup-0=12
cc1-8139                    0        1        0        0        0  wakeup-0=1
cc1-4390                    0        1        0        0        0  wakeup-0=1
cc1-4648                    0        7        0        0        0  wakeup-0=7
cc1-4552                    0        3        0        0        0  wakeup-0=3
dd-4550                     0       31        0        0        0  wakeup-0=31
date-4898                   0        1        0        0        0  wakeup-0=1
cc1-6549                    0        7        0        0        0  wakeup-0=7
as-22202                    0       17        0        0        0  wakeup-0=17
cc1-6495                    0        9        0        0        0  wakeup-0=9
cc1-8299                    0        1        0        0        0  wakeup-0=1
cc1-6009                    0        1        0        0        0  wakeup-0=1
cc1-2574                    0        2        0        0        0  wakeup-0=2
cc1-30568                   0        1        0        0        0  wakeup-0=1
cc1-2679                    0        6        0        0        0  wakeup-0=6
sh-13747                    0       12        0        0        0  wakeup-0=12
cc1-22193                   0       18        0        0        0  wakeup-0=18
cc1-30725                   0        2        0        0        0  wakeup-0=2
as-4392                     0        2        0        0        0  wakeup-0=2
cc1-28180                   0       14        0        0        0  wakeup-0=14
cc1-13697                   0        2        0        0        0  wakeup-0=2
cc1-22207                   0        8        0        0        0  wakeup-0=8
cc1-15270                   0      179        0        0        0  wakeup-0=179
cc1-22011                   0       82        0        0        0  wakeup-0=82
cp-14682                    0        1        0        0        0  wakeup-0=1
as-11926                    0        2        0        0        0  wakeup-0=2
cc1-6016                    0        5        0        0        0  wakeup-0=5
make-18554                  0       13        0        0        0  wakeup-0=13
cc1-8292                    0       12        0        0        0  wakeup-0=12
make-24381                  0        1        0        0        0  wakeup-1=1
date-18681                  0       33        0        0        0  wakeup-0=33
cc1-32276                   0        1        0        0        0  wakeup-0=1
timestamp-outpu-2809        0      253        0        0        0  wakeup-0=240 wakeup-1=13
date-18624                  0        7        0        0        0  wakeup-0=7
cc1-30960                   0        9        0        0        0  wakeup-0=9
cc1-4014                    0        1        0        0        0  wakeup-0=1
cc1-30706                   0       22        0        0        0  wakeup-0=22
uname-3942                  4        1      306        0       17  direct-9=4 wakeup-9=1
cc1-28207                   0        1        0        0        0  wakeup-0=1
cc1-30563                   0        9        0        0        0  wakeup-0=9
cc1-22214                   0       10        0        0        0  wakeup-0=10
cc1-28221                   0       11        0        0        0  wakeup-0=11
cc1-28123                   0        6        0        0        0  wakeup-0=6
kswapd0-311                 0        7   357302        0    34233  wakeup-0=7
cc1-5988                    0        7        0        0        0  wakeup-0=7
as-30734                    0      161        0        0        0  wakeup-0=161
cc1-22004                   0       45        0        0        0  wakeup-0=45
date-4590                   0        4        0        0        0  wakeup-0=4
cc1-15279                   0      213        0        0        0  wakeup-0=213
date-30735                  0        1        0        0        0  wakeup-0=1
cc1-30583                   0        4        0        0        0  wakeup-0=4
cc1-32324                   0        2        0        0        0  wakeup-0=2
cc1-23933                   0        3        0        0        0  wakeup-0=3
cc1-22001                   0       36        0        0        0  wakeup-0=36
bench-stresshig-3942      287      287    80186     6295    12196  direct-9=287 wakeup-9=287
cc1-28170                   0        7        0        0        0  wakeup-0=7
date-7932                   0       92        0        0        0  wakeup-0=92
cc1-22222                   0        6        0        0        0  wakeup-0=6
cc1-32334                   0       16        0        0        0  wakeup-0=16
cc1-2690                    0        6        0        0        0  wakeup-0=6
cc1-30733                   0        9        0        0        0  wakeup-0=9
cc1-32298                   0        2        0        0        0  wakeup-0=2
cc1-13743                   0       18        0        0        0  wakeup-0=18
cc1-22186                   0        4        0        0        0  wakeup-0=4
cc1-28214                   0       11        0        0        0  wakeup-0=11
cc1-13735                   0        1        0        0        0  wakeup-0=1
updatedb-8173               0       18        0        0        0  wakeup-0=18
cc1-13750                   0        3        0        0        0  wakeup-0=3
cat-2808                    0        2        0        0        0  wakeup-0=2
cc1-15277                   0      169        0        0        0  wakeup-0=169
date-18317                  0        1        0        0        0  wakeup-0=1
cc1-15274                   0      197        0        0        0  wakeup-0=197
cc1-30732                   0        1        0        0        0  wakeup-0=1

Kswapd                 Kswapd      Order    Pages    Pages    Pages
Instance              Wakeups  Re-wakeup  Scanned  Sync-IO ASync-IO
kswapd0-311                91         24   357302        0    34233  wake-0=31 wake-1=1 wake-9=59 rewake-0=10 rewake-1=1 rewake-9=13

Summary
Direct reclaims:                 291
Direct reclaim pages scanned:    437794
Direct reclaim write sync I/O:   6295
Direct reclaim write async I/O:  46446
Wake kswapd requests:            2152
Time stalled direct reclaim:     519.163009000002 ms
Kswapd wakeups:                  91
Kswapd pages scanned:            357302
Kswapd reclaim write sync I/O:   0
Kswapd reclaim write async I/O:  34233
Time kswapd awake:               5282.749757 ms

Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Acked-by: Rik van Riel <riel@redhat.com>
Acked-by: Larry Woodman <lwoodman@redhat.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Chris Mason <chris.mason@oracle.com>
Cc: Nick Piggin <npiggin@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Michael Rubin <mrubin@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
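For reference, a minimal sketch of how input for the script is typically captured: enable the vmscan tracepoints and stream the trace. The debugfs mount point /sys/kernel/debug is the usual location but is an assumption here; the script itself simply reads the trace on stdin:

```perl
#!/usr/bin/perl
# Sketch: enable the vmscan tracepoints and stream the trace, the way
# input for trace-vmscan-postprocess.pl is typically captured. The
# debugfs mount point is assumed to be /sys/kernel/debug.
use strict;
use warnings;

my $tracing = '/sys/kernel/debug/tracing';

# Enable every vmscan tracepoint.
open(my $enable, '>', "$tracing/events/vmscan/enable")
    or die "cannot enable vmscan events: $!";
print $enable "1\n";
close($enable);

# Stream events; piping this output to the postprocessing script
# (or redirecting trace_pipe into it directly) produces the report.
open(my $pipe, '<', "$tracing/trace_pipe")
    or die "cannot open trace_pipe: $!";
print while <$pipe>;
```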