/*
 * builtin-stat.c
 *
 * Builtin stat command: Give a precise performance counters summary
 * overview about any workload, CPU or specific PID.
 *
 * Sample output:

   $ perf stat ./hackbench 10

  Time: 0.118

  Performance counter stats for './hackbench 10':

       1708.761321 task-clock                #   11.037 CPUs utilized
            41,190 context-switches          #    0.024 M/sec
             6,735 CPU-migrations            #    0.004 M/sec
            17,318 page-faults               #    0.010 M/sec
     5,205,202,243 cycles                    #    3.046 GHz
     3,856,436,920 stalled-cycles-frontend   #   74.09% frontend cycles idle
     1,600,790,871 stalled-cycles-backend    #   30.75% backend cycles idle
     2,603,501,247 instructions              #    0.50  insns per cycle
                                             #    1.48  stalled cycles per insn
       484,357,498 branches                  #  283.455 M/sec
         6,388,934 branch-misses             #    1.32% of all branches

        0.154822978  seconds time elapsed
 *
 * Copyright (C) 2008-2011, Red Hat Inc, Ingo Molnar <mingo@redhat.com>
 *
 * Improvements and fixes by:
 *
 *   Arjan van de Ven <arjan@linux.intel.com>
 *   Yanmin Zhang <yanmin.zhang@intel.com>
 *   Wu Fengguang <fengguang.wu@intel.com>
 *   Mike Galbraith <efault@gmx.de>
 *   Paul Mackerras <paulus@samba.org>
 *   Jaswinder Singh Rajput <jaswinder@kernel.org>
 *
 * Released under the GPL v2. (and only v2, not any later version)
 */
#include "perf.h"
#include "builtin.h"
#include "util/cgroup.h"
#include "util/util.h"
#include <subcmd/parse-options.h>
#include "util/parse-events.h"
#include "util/pmu.h"
#include "util/event.h"
#include "util/evlist.h"
#include "util/evsel.h"
#include "util/debug.h"
#include "util/drv_configs.h"
#include "util/color.h"
#include "util/stat.h"
#include "util/header.h"
#include "util/cpumap.h"
#include "util/thread.h"
#include "util/thread_map.h"
#include "util/counts.h"
#include "util/group.h"
#include "util/session.h"
#include "util/tool.h"
#include "util/string2.h"
#include "util/metricgroup.h"
#include "util/top.h"
#include "asm/bug.h"

#include <linux/time64.h>
#include <api/fs/fs.h>
#include <errno.h>
#include <signal.h>
#include <stdlib.h>
#include <sys/prctl.h>
#include <inttypes.h>
#include <locale.h>
#include <math.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/wait.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/resource.h>

#include "sane_ctype.h"

#define DEFAULT_SEPARATOR	" "
#define CNTR_NOT_SUPPORTED	"<not supported>"
#define CNTR_NOT_COUNTED	"<not counted>"
#define FREEZE_ON_SMI_PATH	"devices/cpu/freeze_on_smi"
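
/*
 * Note: FREEZE_ON_SMI_PATH above is relative to the sysfs mount point
 * (normally /sys/devices/cpu/freeze_on_smi); --smi-cost writes 1 there so
 * that core counters are frozen while SMIs are being serviced.
 */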

static void print_counters(struct timespec *ts, int argc, const char **argv);
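
/*
 * Note on the attribute strings below: they are handed to the normal event
 * parser, so "{...}" denotes an event group, i.e. the enclosed counters are
 * scheduled onto the PMU together and their ratios stay meaningful.
 */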

/* Default events used for perf stat -T */
static const char *transaction_attrs = {
	"task-clock,"
	"{"
	"instructions,"
	"cycles,"
	"cpu/cycles-t/,"
	"cpu/tx-start/,"
	"cpu/el-start/,"
	"cpu/cycles-ct/"
	"}"
};

/* More limited version when the CPU does not have all events. */
static const char *transaction_limited_attrs = {
	"task-clock,"
	"{"
	"instructions,"
	"cycles,"
	"cpu/cycles-t/,"
	"cpu/tx-start/"
	"}"
};
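
/*
 * Level-1 TopDown events, used when --topdown is given:
 *   topdown-total-slots      - available slots in the pipeline
 *   topdown-slots-issued     - slots issued into the pipeline
 *   topdown-slots-retired    - slots successfully retired
 *   topdown-fetch-bubbles    - pipeline gaps in the frontend
 *   topdown-recovery-bubbles - pipeline gaps during recovery from misspeculation
 */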

static const char *topdown_attrs[] = {
	"topdown-total-slots",
	"topdown-slots-retired",
	"topdown-recovery-bubbles",
	"topdown-fetch-bubbles",
	"topdown-slots-issued",
	NULL,
};
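
/* Events used by --smi-cost: aperf and SMI count from the msr PMU, grouped with cycles. */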

static const char *smi_cost_attrs = {
	"{"
	"msr/aperf/,"
	"msr/smi/,"
	"cycles"
	"}"
};

static struct perf_evlist	*evsel_list;

static struct rblist		metric_events;

static struct target target = {
	.uid	= UINT_MAX,
};

typedef int (*aggr_get_id_t)(struct cpu_map *m, int cpu);

#define METRIC_ONLY_LEN 20

static volatile pid_t		child_pid			= -1;
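
/* Incremented once per -d; levels 1-3 select progressively more detailed event sets. */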
static int			detailed_run			=  0;
static bool			transaction_run;
static bool			topdown_run			= false;
static bool			smi_cost			= false;
static bool			smi_reset			= false;
static bool			big_num				=  true;
static int			big_num_opt			=  -1;
static bool			group				= false;
static const char		*pre_cmd			= NULL;
static const char		*post_cmd			= NULL;
static bool			sync_run			= false;
static bool			forever				= false;
static bool			force_metric_only		= false;
static bool			no_merge			= false;
static bool			walltime_run_table		= false;
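
/* Start-of-run reference time; -I interval timestamps are printed as offsets from it. */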
static struct timespec		ref_time;
static struct cpu_map		*aggr_map;
static aggr_get_id_t		aggr_get_id;
static bool			append_file;
static bool			interval_count;
static const char		*output_name;
static int			output_fd;
static u64			*walltime_run;
|
perf stat: add perf stat -B to pretty print large numbers
It is hard to read very large numbers so provide an option to perf stat
to separate thousands using a separator. The patch leverages the locale
support of stdio. You need to set your LC_NUMERIC appropriately, for
instance LC_NUMERIC=en_US.UTF8. You need to pass -B to activate this
feature. This way existing scripts parsing the output do not need to be
changed. Here is an example.
$ perf stat noploop 2
noploop for 2 seconds
Performance counter stats for 'noploop 2':
1998.347031 task-clock-msecs # 0.998 CPUs
61 context-switches # 0.000 M/sec
0 CPU-migrations # 0.000 M/sec
118 page-faults # 0.000 M/sec
4,138,410,900 cycles # 2070.917 M/sec (scaled from 70.01%)
2,062,650,268 instructions # 0.498 IPC (scaled from 70.01%)
2,057,653,466 branches # 1029.678 M/sec (scaled from 70.01%)
40,267 branch-misses # 0.002 % (scaled from 30.04%)
2,055,961,348 cache-references # 1028.831 M/sec (scaled from 30.03%)
53,725 cache-misses # 0.027 M/sec (scaled from 30.02%)
2.001393933 seconds time elapsed
$ perf stat -B noploop 2
noploop for 2 seconds
Performance counter stats for 'noploop 2':
1998.297883 task-clock-msecs # 0.998 CPUs
59 context-switches # 0.000 M/sec
0 CPU-migrations # 0.000 M/sec
119 page-faults # 0.000 M/sec
4,131,380,160 cycles # 2067.450 M/sec (scaled from 70.01%)
2,059,096,507 instructions # 0.498 IPC (scaled from 70.01%)
2,054,681,303 branches # 1028.216 M/sec (scaled from 70.01%)
25,650 branch-misses # 0.001 % (scaled from 30.05%)
2,056,283,014 cache-references # 1029.017 M/sec (scaled from 30.03%)
47,097 cache-misses # 0.024 M/sec (scaled from 30.02%)
2.001391016 seconds time elapsed
Cc: David S. Miller <davem@davemloft.net>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Tom Zanussi <tzanussi@gmail.com>
LKML-Reference: <4bf28fe8.914ed80a.01ca.fffff5f5@mx.google.com>
Signed-off-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2010-05-18 13:00:01 +00:00
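The stdio mechanism behind -B can be shown in isolation. A minimal
standalone sketch (not perf code) using the POSIX ' printf flag, which
groups digits according to LC_NUMERIC:
#include <locale.h>
#include <stdio.h>

int main(void)
{
	/* Pick up LC_NUMERIC from the environment, e.g. en_US.UTF8. */
	setlocale(LC_NUMERIC, "");
	/* The ' flag requests locale-specific digit grouping. */
	printf("%'llu cycles\n", 4138410900ULL);	/* 4,138,410,900 */
	return 0;
}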
|
|
|
|
perf stat record: Add record command
Add 'perf stat record' command support. For now it creates a simple
(header-only) perf.data file.
The record command can be specified anywhere among the stat options. All
stat command options are valid for the stat record command, with the
exception of '-o': when specified for the record command, it denotes the
perf.data file name.
Committer note:
Set sample_type to PERF_SAMPLE_IDENTIFIER, which should be harmless
while avoiding confusing messages from older tools; for instance,
with sample_type = 0, we get:
$ perf stat record usleep 1
Performance counter stats for 'usleep 1':
0.630237 task-clock (msec) # 0.528 CPUs utilized
1 context-switches # 0.002 M/sec
0 cpu-migrations # 0.000 K/sec
52 page-faults # 0.083 M/sec
978,312 cycles # 1.552 GHz
671,931 stalled-cycles-frontend # 68.68% frontend cycles idle
<not supported> stalled-cycles-backend
646,379 instructions # 0.66 insns per cycle
# 1.04 stalled cycles per insn
131,046 branches # 207.931 M/sec
7,073 branch-misses # 5.40% of all branches
0.001193240 seconds time elapsed
$ oldperf evlist
WARNING: The perf.data file's data size field is 0 which is unexpected.
Was the 'perf record' command properly terminated?
non matching sample_type
$
While with sample_type set to PERF_SAMPLE_IDENTIFIER, after we re-run 'perf
stat record usleep' we get:
$ oldperf evlist
WARNING: The perf.data file's data size field is 0 which is unexpected.
Was the 'perf record' command properly terminated?
task-clock
context-switches
cpu-migrations
page-faults
cycles
stalled-cycles-frontend
stalled-cycles-backend
instructions
branches
branch-misses
$
Which at least shows the names of the events in the perf.data file.
Additionally, such files, when passed to 'perf report' will produce:
$ oldperf report --stdio
WARNING: The perf.data file's data size field is 0 which is unexpected.
Was the 'perf record' command properly terminated?
Warning:
Kernel address maps (/proc/{kallsyms,modules}) were restricted.
Check /proc/sys/kernel/kptr_restrict before running 'perf record'.
As no suitable kallsyms nor vmlinux was found, kernel samples
can't be resolved.
Samples in kernel modules can't be resolved as well.
Error:
The perf.data file has no samples!
# To display the perf.data header info, please use --header/--header-only options.
#
$
This is confusing and can be solved by just adding the kernel mmap record,
which also removes the warning about the data size field being equal to
zero. After generating the mmap record:
$ perf stat record usleep 1
Performance counter stats for 'usleep 1':
0.600796 task-clock (msec) # 0.478 CPUs utilized
1 context-switches # 0.002 M/sec
0 cpu-migrations # 0.000 K/sec
54 page-faults # 0.090 M/sec
886,844 cycles # 1.476 GHz
582,169 stalled-cycles-frontend # 65.65% frontend cycles idle
<not supported> stalled-cycles-backend
638,344 instructions # 0.72 insns per cycle
# 0.91 stalled cycles per insn
130,204 branches # 216.719 M/sec
7,500 branch-misses # 5.76% of all branches
0.001255897 seconds time elapsed
$ oldperf evlist
task-clock
context-switches
cpu-migrations
page-faults
cycles
stalled-cycles-frontend
stalled-cycles-backend
instructions
branches
branch-misses
$ oldperf report --stdio
Error:
The perf.data file has no samples!
# To display the perf.data header info, please use --header/--header-only options.
#
[acme@zoo linux]$
No warnings, sensible output about which events are in the perf.data file, and
a "file has no samples" message, which is indeed the case.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Tested-by: Kan Liang <kan.liang@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1446734469-11352-3-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-11-05 14:40:46 +00:00
|
|
|
struct perf_stat {
|
|
|
|
bool record;
|
2017-01-23 21:07:59 +00:00
|
|
|
struct perf_data data;
|
perf stat record: Add record command
2015-11-05 14:40:46 +00:00
|
|
|
struct perf_session *session;
|
|
|
|
u64 bytes_written;
|
2015-11-05 14:40:55 +00:00
|
|
|
struct perf_tool tool;
|
2015-11-05 14:40:56 +00:00
|
|
|
bool maps_allocated;
|
|
|
|
struct cpu_map *cpus;
|
|
|
|
struct thread_map *threads;
|
2015-11-05 14:41:02 +00:00
|
|
|
enum aggr_mode aggr_mode;
|
perf stat record: Add record command
2015-11-05 14:40:46 +00:00
|
|
|
};
|
|
|
|
|
|
|
|
static struct perf_stat perf_stat;
|
|
|
|
#define STAT_RECORD perf_stat.record
|
|
|
|
|
2009-12-31 08:05:50 +00:00
|
|
|
static volatile int done = 0;
|
|
|
|
|
2015-07-21 12:31:22 +00:00
|
|
|
static struct perf_stat_config stat_config = {
|
2018-08-30 06:32:40 +00:00
|
|
|
.aggr_mode = AGGR_GLOBAL,
|
|
|
|
.scale = true,
|
|
|
|
.unit_width = 4, /* strlen("unit") */
|
|
|
|
.run_count = 1,
|
|
|
|
.metric_only_len = METRIC_ONLY_LEN,
|
|
|
|
.walltime_nsecs_stats = &walltime_nsecs_stats,
|
2015-07-21 12:31:22 +00:00
|
|
|
};
|
|
|
|
|
2017-08-31 19:40:35 +00:00
|
|
|
static bool is_duration_time(struct perf_evsel *evsel)
|
|
|
|
{
|
|
|
|
return !strcmp(evsel->name, "duration_time");
|
|
|
|
}
|
|
|
|
|
perf stat: Add interval printing
2013-01-29 11:47:44 +00:00
|
|
|
static inline void diff_timespec(struct timespec *r, struct timespec *a,
|
|
|
|
struct timespec *b)
|
|
|
|
{
|
|
|
|
r->tv_sec = a->tv_sec - b->tv_sec;
|
|
|
|
if (a->tv_nsec < b->tv_nsec) {
|
2016-08-08 17:57:04 +00:00
|
|
|
r->tv_nsec = a->tv_nsec + NSEC_PER_SEC - b->tv_nsec;
|
perf stat: Add interval printing
2013-01-29 11:47:44 +00:00
|
|
|
r->tv_sec--;
|
|
|
|
} else {
|
|
|
|
r->tv_nsec = a->tv_nsec - b->tv_nsec;
|
|
|
|
}
|
|
|
|
}
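Usage sketch (illustration only, not part of the file): subtracting
1.9s from 2.1s takes the borrow path above, since 100000000 < 900000000.
struct timespec a = { .tv_sec = 2, .tv_nsec = 100000000 };
struct timespec b = { .tv_sec = 1, .tv_nsec = 900000000 };
struct timespec r;

diff_timespec(&r, &a, &b);
/* r.tv_nsec = 100000000 + NSEC_PER_SEC - 900000000 = 200000000,
 * and tv_sec drops from 1 to 0, so r = { 0, 200000000 }, i.e. 0.2s. */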
|
|
|
|
|
2015-06-26 09:29:13 +00:00
|
|
|
static void perf_stat__reset_stats(void)
|
|
|
|
{
|
2017-12-05 14:03:07 +00:00
|
|
|
int i;
|
|
|
|
|
2015-06-26 09:29:13 +00:00
|
|
|
perf_evlist__reset_stats(evsel_list);
|
2015-06-03 14:25:59 +00:00
|
|
|
perf_stat__reset_shadow_stats();
|
2017-12-05 14:03:07 +00:00
|
|
|
|
|
|
|
for (i = 0; i < stat_config.stats_num; i++)
|
|
|
|
perf_stat__reset_shadow_per_stat(&stat_config.stats[i]);
|
2015-06-03 14:25:55 +00:00
|
|
|
}
|
|
|
|
|
2015-11-05 14:40:48 +00:00
|
|
|
static int process_synthesized_event(struct perf_tool *tool __maybe_unused,
|
|
|
|
union perf_event *event,
|
|
|
|
struct perf_sample *sample __maybe_unused,
|
|
|
|
struct machine *machine __maybe_unused)
|
perf stat record: Add record command
2015-11-05 14:40:46 +00:00
|
|
|
{
|
2017-01-23 21:07:59 +00:00
|
|
|
if (perf_data__write(&perf_stat.data, event, event->header.size) < 0) {
|
perf stat record: Add record command
2015-11-05 14:40:46 +00:00
|
|
|
pr_err("failed to write perf data, error: %m\n");
|
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
|
2015-11-05 14:40:48 +00:00
|
|
|
perf_stat.bytes_written += event->header.size;
|
perf stat record: Add record command
2015-11-05 14:40:46 +00:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2015-11-05 14:40:56 +00:00
|
|
|
static int write_stat_round_event(u64 tm, u64 type)
|
2015-11-05 14:40:52 +00:00
|
|
|
{
|
2015-11-05 14:40:56 +00:00
|
|
|
return perf_event__synthesize_stat_round(NULL, tm, type,
|
2015-11-05 14:40:52 +00:00
|
|
|
process_synthesized_event,
|
|
|
|
NULL);
|
|
|
|
}
|
|
|
|
|
|
|
|
#define WRITE_STAT_ROUND_EVENT(time, interval) \
|
|
|
|
write_stat_round_event(time, PERF_STAT_ROUND_TYPE__ ## interval)
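The token-pasting '##' means the second argument must name the suffix
of an existing PERF_STAT_ROUND_TYPE__* enumerator; for instance, the
call in process_interval() further down expands as follows:
/* WRITE_STAT_ROUND_EVENT(rs.tv_sec * NSEC_PER_SEC + rs.tv_nsec, INTERVAL)
 * becomes: */
write_stat_round_event(rs.tv_sec * NSEC_PER_SEC + rs.tv_nsec,
		       PERF_STAT_ROUND_TYPE__INTERVAL);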
|
|
|
|
|
2015-11-05 14:40:51 +00:00
|
|
|
#define SID(e, x, y) xyarray__entry(e->sample_id, x, y)
|
|
|
|
|
|
|
|
static int
|
|
|
|
perf_evsel__write_stat_event(struct perf_evsel *counter, u32 cpu, u32 thread,
|
|
|
|
struct perf_counts_values *count)
|
|
|
|
{
|
|
|
|
struct perf_sample_id *sid = SID(counter, cpu, thread);
|
|
|
|
|
|
|
|
return perf_event__synthesize_stat(NULL, cpu, thread, sid->id, count,
|
|
|
|
process_synthesized_event, NULL);
|
|
|
|
}
|
|
|
|
|
2010-11-16 09:05:01 +00:00
|
|
|
/*
|
|
|
|
* Read out the results of a single counter:
|
|
|
|
* do not aggregate counts across CPUs in system-wide mode
|
|
|
|
*/
|
2011-01-03 19:45:52 +00:00
|
|
|
static int read_counter(struct perf_evsel *counter)
|
2010-11-16 09:05:01 +00:00
|
|
|
{
|
2014-11-21 09:31:09 +00:00
|
|
|
int nthreads = thread_map__nr(evsel_list->threads);
|
2016-07-15 10:08:10 +00:00
|
|
|
int ncpus, cpu, thread;
|
|
|
|
|
2017-12-05 14:03:10 +00:00
|
|
|
if (target__has_cpu(&target) && !target__has_per_thread(&target))
|
2016-07-15 10:08:10 +00:00
|
|
|
ncpus = perf_evsel__nr_cpus(counter);
|
|
|
|
else
|
|
|
|
ncpus = 1;
|
2010-11-16 09:05:01 +00:00
|
|
|
|
2015-02-13 18:40:58 +00:00
|
|
|
if (!counter->supported)
|
|
|
|
return -ENOENT;
|
|
|
|
|
2014-11-21 09:31:09 +00:00
|
|
|
if (counter->system_wide)
|
|
|
|
nthreads = 1;
|
|
|
|
|
|
|
|
for (thread = 0; thread < nthreads; thread++) {
|
|
|
|
for (cpu = 0; cpu < ncpus; cpu++) {
|
2015-06-26 09:29:20 +00:00
|
|
|
struct perf_counts_values *count;
|
|
|
|
|
|
|
|
count = perf_counts(counter->counts, cpu, thread);
|
perf stat: Use group read for event groups
Make perf stat use group read if there are groups defined. The group
read will get the values for all members of a group within a single
syscall instead of calling the read syscall for every event.
We can see considerably fewer kernel cycles spent on a single
group read than on reading each event separately, as for the following
perf stat command:
# perf stat -e {cycles,instructions} -I 10 -a sleep 1
Monitored with "perf stat -r 5 -e '{cycles:u,cycles:k}'"
Before:
24,325,676 cycles:u
297,040,775 cycles:k
1.038554134 seconds time elapsed
After:
25,034,418 cycles:u
158,256,395 cycles:k
1.036864497 seconds time elapsed
The perf_evsel__open fallback changes were contributed by Andi Kleen.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20170726120206.9099-4-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-07-26 12:02:06 +00:00
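The single-syscall behaviour relies on the kernel's PERF_FORMAT_GROUP
read format: one read() on the group leader returns every member's
value. A sketch of the buffer layout per the perf_event_open(2) man
page (assuming PERF_FORMAT_ID and the TOTAL_TIME flags are also set):
/* Layout filled by read(leader_fd, buf, size) when the leader is
 * opened with PERF_FORMAT_GROUP | PERF_FORMAT_ID |
 * PERF_FORMAT_TOTAL_TIME_ENABLED | PERF_FORMAT_TOTAL_TIME_RUNNING. */
struct group_read_format {
	u64 nr;			/* number of events in the group */
	u64 time_enabled;
	u64 time_running;
	struct {
		u64 value;	/* the count itself */
		u64 id;		/* matches the member's PERF_FORMAT_ID */
	} values[];		/* one entry per group member */
};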
|
|
|
|
|
|
|
/*
|
|
|
|
* The leader's group read loads data into its group members
|
|
|
|
* (via perf_evsel__read_counter) and sets their count->loaded.
|
|
|
|
*/
|
|
|
|
if (!count->loaded &&
|
|
|
|
perf_evsel__read_counter(counter, cpu, thread)) {
|
2017-04-12 18:23:01 +00:00
|
|
|
counter->counts->scaled = -1;
|
|
|
|
perf_counts(counter->counts, cpu, thread)->ena = 0;
|
|
|
|
perf_counts(counter->counts, cpu, thread)->run = 0;
|
2014-11-21 09:31:09 +00:00
|
|
|
return -1;
|
2017-04-12 18:23:01 +00:00
|
|
|
}
|
2015-11-05 14:40:51 +00:00
|
|
|
|
perf stat: Use group read for event groups
2017-07-26 12:02:06 +00:00
|
|
|
count->loaded = false;
|
|
|
|
|
2015-11-05 14:40:51 +00:00
|
|
|
if (STAT_RECORD) {
|
|
|
|
if (perf_evsel__write_stat_event(counter, cpu, thread, count)) {
|
|
|
|
pr_err("failed to write stat event\n");
|
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
}
|
2016-04-27 20:00:51 +00:00
|
|
|
|
|
|
|
if (verbose > 1) {
|
|
|
|
fprintf(stat_config.output,
|
|
|
|
"%s: %d: %" PRIu64 " %" PRIu64 " %" PRIu64 "\n",
|
|
|
|
perf_evsel__name(counter),
|
|
|
|
cpu,
|
|
|
|
count->val, count->ena, count->run);
|
|
|
|
}
|
2014-11-21 09:31:09 +00:00
|
|
|
}
|
2010-11-16 09:05:01 +00:00
|
|
|
}
|
2011-01-03 19:45:52 +00:00
|
|
|
|
|
|
|
return 0;
|
2009-05-29 07:10:54 +00:00
|
|
|
}
|
|
|
|
|
perf stat: Avoid skew when reading events
When we don't have a tracee (i.e. we're attaching to a task or CPU),
counters can still be running after our workload finishes, and can still
be running as we read their values. As we read events one-by-one, there
can be arbitrary skew between values of events, even within a group.
This means that ratios within an event group are not reliable.
This skew can be seen if measuring a group of identical events, e.g:
# perf stat -a -C0 -e '{cycles,cycles}' sleep 1
To avoid this, we must stop groups from counting before we read the
values of any constituent events. This patch adds and makes use of a new
disable_counters() helper, which disables group leaders (and thus each
group as a whole). This mirrors the use of enable_counters() for
starting event groups in the absence of a tracee.
Closing a group leader splits the group, and without a disabled group
leader the newly split events will begin counting. Thus, to ensure counts
are reliable, we must defer closing group leaders until all counts have
been read. To do so, this patch removes the event closing logic from the
read_counters() helper and explicitly closes the events using
perf_evlist__close(), which also aids legibility.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1470747869-3567-1-git-send-email-mark.rutland@arm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-08-09 13:04:29 +00:00
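In code terms, the change imposes this ordering on the tail of the run
(a conceptual sketch; the surrounding details are elided):
disable_counters();		/* stop group leaders, freezing each group */
read_counters();		/* counts are now consistent within a group */
perf_evlist__close(evsel_list);	/* closing can split groups, so it's last */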
|
|
|
static void read_counters(void)
|
perf stat: Add interval printing
2013-01-29 11:47:44 +00:00
|
|
|
{
|
|
|
|
struct perf_evsel *counter;
|
2017-04-12 18:23:01 +00:00
|
|
|
int ret;
|
perf stat: Add interval printing
2013-01-29 11:47:44 +00:00
|
|
|
|
2016-06-23 14:26:15 +00:00
|
|
|
evlist__for_each_entry(evsel_list, counter) {
|
2017-04-12 18:23:01 +00:00
|
|
|
ret = read_counter(counter);
|
|
|
|
if (ret)
|
2015-09-01 22:52:46 +00:00
|
|
|
pr_debug("failed to read counter %s\n", counter->name);
|
2015-06-26 09:29:20 +00:00
|
|
|
|
2017-04-12 18:23:01 +00:00
|
|
|
if (ret == 0 && perf_stat_process_counter(&stat_config, counter))
|
2015-06-26 09:29:20 +00:00
|
|
|
pr_warning("failed to process counter %s\n", counter->name);
|
perf stat: Add interval printing
2013-01-29 11:47:44 +00:00
|
|
|
}
|
2015-06-26 09:29:19 +00:00
|
|
|
}
|
|
|
|
|
2015-06-26 09:29:24 +00:00
|
|
|
static void process_interval(void)
|
2015-06-26 09:29:19 +00:00
|
|
|
{
|
|
|
|
struct timespec ts, rs;
|
|
|
|
|
perf stat: Avoid skew when reading events
2016-08-09 13:04:29 +00:00
|
|
|
read_counters();
|
2013-02-14 12:57:27 +00:00
|
|
|
|
perf stat: Add interval printing
2013-01-29 11:47:44 +00:00
|
|
|
clock_gettime(CLOCK_MONOTONIC, &ts);
|
|
|
|
diff_timespec(&rs, &ts, &ref_time);
|
|
|
|
|
2015-11-05 14:40:52 +00:00
|
|
|
if (STAT_RECORD) {
|
2016-08-05 18:40:30 +00:00
|
|
|
if (WRITE_STAT_ROUND_EVENT(rs.tv_sec * NSEC_PER_SEC + rs.tv_nsec, INTERVAL))
|
2015-11-05 14:40:52 +00:00
|
|
|
pr_err("failed to write stat round event\n");
|
|
|
|
}
|
|
|
|
|
2017-08-31 19:40:36 +00:00
|
|
|
init_stats(&walltime_nsecs_stats);
|
|
|
|
update_stats(&walltime_nsecs_stats, stat_config.interval * 1000000);
|
2015-06-26 09:29:26 +00:00
|
|
|
print_counters(&rs, 0, NULL);
|
perf stat: Add interval printing
2013-01-29 11:47:44 +00:00
|
|
|
}
|
|
|
|
|
2015-12-03 09:06:44 +00:00
|
|
|
static void enable_counters(void)
|
2013-08-03 00:41:11 +00:00
|
|
|
{
|
2018-08-30 06:32:11 +00:00
|
|
|
if (stat_config.initial_delay)
|
|
|
|
usleep(stat_config.initial_delay * USEC_PER_MSEC);
|
2015-12-03 09:06:44 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* We need to enable counters only if:
|
|
|
|
* - we don't have tracee (attaching to task or cpu)
|
|
|
|
* - we have initial delay configured
|
|
|
|
*/
|
2018-08-30 06:32:11 +00:00
|
|
|
if (!target__none(&target) || stat_config.initial_delay)
|
2015-12-03 09:06:43 +00:00
|
|
|
perf_evlist__enable(evsel_list);
|
2013-08-03 00:41:11 +00:00
|
|
|
}
|
|
|
|
|
perf stat: Avoid skew when reading events
2016-08-09 13:04:29 +00:00
|
|
|
static void disable_counters(void)
|
|
|
|
{
|
|
|
|
/*
|
|
|
|
* If we don't have tracee (attaching to task or cpu), counters may
|
|
|
|
* still be running. To get accurate group ratios, we must stop groups
|
|
|
|
* from counting before reading their constituent counters.
|
|
|
|
*/
|
|
|
|
if (!target__none(&target))
|
|
|
|
perf_evlist__disable(evsel_list);
|
|
|
|
}
|
|
|
|
|
2014-01-02 18:11:25 +00:00
|
|
|
static volatile int workload_exec_errno;
|
2013-12-28 18:45:08 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* perf_evlist__prepare_workload will send a SIGUSR1
|
|
|
|
* if the fork fails, since we asked for it by setting its
|
|
|
|
* want_signal to true.
|
|
|
|
*/
|
2014-01-02 18:11:25 +00:00
|
|
|
static void workload_exec_failed_signal(int signo __maybe_unused, siginfo_t *info,
|
|
|
|
void *ucontext __maybe_unused)
|
2013-12-28 18:45:08 +00:00
|
|
|
{
|
2014-01-02 18:11:25 +00:00
|
|
|
workload_exec_errno = info->si_value.sival_int;
|
2013-12-28 18:45:08 +00:00
|
|
|
}
|
|
|
|
|
perf stat: Use group read for event groups
2017-07-26 12:02:06 +00:00
|
|
|
static bool perf_evsel__should_store_id(struct perf_evsel *counter)
|
|
|
|
{
|
|
|
|
return STAT_RECORD || counter->attr.read_format & PERF_FORMAT_ID;
|
|
|
|
}
|
|
|
|
|
perf tools: Support weak groups in 'perf stat'
Setting up groups can be complicated due to the complicated scheduling
restrictions of different PMUs.
User tools usually don't understand all these restrictions.
Still in many cases it is useful to set up groups and they work most of
the time. However, if the group is set up wrong, some members will not
report any value because they never get scheduled.
Add a concept of a 'weak group': try to set up a group, but if it's not
schedulable, fall back to not using a group. That gives us the best of
both worlds: groups if they work, but still a usable fallback if they
don't.
In theory it would be possible to have more complex fallback strategies
(e.g. try to split the group in half), but the simple fallback of not
using a group seems to work for now.
So far the weak group is only implemented for perf stat, not for record.
Here's an unschedulable group (on IvyBridge with SMT on)
% perf stat -e '{branches,branch-misses,l1d.replacement,l2_lines_in.all,l2_rqsts.all_code_rd}' -a sleep 1
73,806,067 branches
4,848,144 branch-misses # 6.57% of all branches
14,754,458 l1d.replacement
24,905,558 l2_lines_in.all
<not supported> l2_rqsts.all_code_rd <------- will never report anything
With the weak group:
% perf stat -e '{branches,branch-misses,l1d.replacement,l2_lines_in.all,l2_rqsts.all_code_rd}:W' -a sleep 1
125,366,055 branches (80.02%)
9,208,402 branch-misses # 7.35% of all branches (80.01%)
24,560,249 l1d.replacement (80.00%)
43,174,971 l2_lines_in.all (80.05%)
31,891,457 l2_rqsts.all_code_rd (79.92%)
The extra event is now scheduled, at the cost of some extra multiplexing.
v2: Move fallback code to separate function.
Add comment on for_each_group_member
Adjust to new perf_evsel__close interface
v3: Fix debug print out.
Committer testing:
Before:
# perf stat -e '{branches,branch-misses,l1d.replacement,l2_lines_in.all,l2_rqsts.all_code_rd}' -a sleep 1
Performance counter stats for 'system wide':
<not counted> branches
<not counted> branch-misses
<not counted> l1d.replacement
<not counted> l2_lines_in.all
<not supported> l2_rqsts.all_code_rd
1.002147212 seconds time elapsed
# perf stat -e '{branches,l1d.replacement,l2_lines_in.all,l2_rqsts.all_code_rd}' -a sleep 1
Performance counter stats for 'system wide':
83,207,892 branches
11,065,444 l1d.replacement
28,484,024 l2_lines_in.all
12,186,179 l2_rqsts.all_code_rd
1.001739493 seconds time elapsed
After:
# perf stat -e '{branches,branch-misses,l1d.replacement,l2_lines_in.all,l2_rqsts.all_code_rd}':W -a sleep 1
Performance counter stats for 'system wide':
543,323,909 branches (80.01%)
27,100,512 branch-misses # 4.99% of all branches (80.02%)
50,402,905 l1d.replacement (80.03%)
67,385,892 l2_lines_in.all (80.01%)
21,352,885 l2_rqsts.all_code_rd (79.94%)
1.001086658 seconds time elapsed
#
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: http://lkml.kernel.org/r/20170831194036.30146-2-andi@firstfloor.org
[ Add a "'perf stat' only, for now" comment in the man page, suggested by Jiri ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-08-31 19:40:26 +00:00
|
|
|
static struct perf_evsel *perf_evsel__reset_weak_group(struct perf_evsel *evsel)
|
|
|
|
{
|
|
|
|
struct perf_evsel *c2, *leader;
|
|
|
|
bool is_open = true;
|
|
|
|
|
|
|
|
leader = evsel->leader;
|
|
|
|
pr_debug("Weak group for %s/%d failed\n",
|
|
|
|
leader->name, leader->nr_members);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* for_each_group_member doesn't work here because it doesn't
|
|
|
|
* include the first entry.
|
|
|
|
*/
|
|
|
|
evlist__for_each_entry(evsel_list, c2) {
|
|
|
|
if (c2 == evsel)
|
|
|
|
is_open = false;
|
|
|
|
if (c2->leader == leader) {
|
|
|
|
if (is_open)
|
|
|
|
perf_evsel__close(c2);
|
|
|
|
c2->leader = c2;
|
|
|
|
c2->nr_members = 0;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
return leader;
|
|
|
|
}
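A hedged sketch of the caller side (open_counter() and the try_again
label are stand-ins for the real open loop in __run_perf_stat(), whose
errno checks may differ):
/* When a member of a weak group (the :W modifier) fails to schedule,
 * break the group apart and retry the former leader standalone. */
if (open_counter(counter) < 0 && counter->weak_group) {
	counter = perf_evsel__reset_weak_group(counter);
	goto try_again;
}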
|
|
|
|
|
2018-04-23 09:08:21 +00:00
|
|
|
static int __run_perf_stat(int argc, const char **argv, int run_idx)
|
2009-06-13 12:57:28 +00:00
|
|
|
{
|
2015-07-21 12:31:25 +00:00
|
|
|
int interval = stat_config.interval;
|
2018-01-29 09:25:22 +00:00
|
|
|
int times = stat_config.times;
|
2018-01-29 09:25:23 +00:00
|
|
|
int timeout = stat_config.timeout;
|
2017-02-13 19:45:24 +00:00
|
|
|
char msg[BUFSIZ];
|
2009-06-13 12:57:28 +00:00
|
|
|
unsigned long long t0, t1;
|
2012-11-12 17:34:00 +00:00
|
|
|
struct perf_evsel *counter;
|
perf stat: Add interval printing
2013-01-29 11:47:44 +00:00
|
|
|
struct timespec ts;
|
2013-11-12 16:58:49 +00:00
|
|
|
size_t l;
|
2009-06-13 12:57:28 +00:00
|
|
|
int status = 0;
|
2010-03-18 14:36:03 +00:00
|
|
|
const bool forks = (argc > 0);
|
2017-01-23 21:07:59 +00:00
|
|
|
bool is_pipe = STAT_RECORD ? perf_stat.data.is_pipe : false;
|
2016-09-16 15:50:03 +00:00
|
|
|
struct perf_evsel_config_term *err_term;
|
2009-06-13 12:57:28 +00:00
|
|
|
|
2013-01-29 11:47:44 +00:00
|
|
|
if (interval) {
|
2016-08-08 17:57:04 +00:00
|
|
|
ts.tv_sec = interval / USEC_PER_MSEC;
|
|
|
|
ts.tv_nsec = (interval % USEC_PER_MSEC) * NSEC_PER_MSEC;
|
2018-01-29 09:25:23 +00:00
|
|
|
} else if (timeout) {
|
|
|
|
ts.tv_sec = timeout / USEC_PER_MSEC;
|
|
|
|
ts.tv_nsec = (timeout % USEC_PER_MSEC) * NSEC_PER_MSEC;
|
2013-01-29 11:47:44 +00:00
|
|
|
} else {
|
|
|
|
ts.tv_sec = 1;
|
|
|
|
ts.tv_nsec = 0;
|
|
|
|
}
|
|
|
|
|
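The block above converts the millisecond interval (or timeout) into a struct timespec for the nanosleep() calls further down; USEC_PER_MSEC is 1000 and NSEC_PER_MSEC is 1000000. A self-contained sketch of the same arithmetic, with an illustrative function name:

	#include <time.h>

	/* Split a millisecond duration into whole seconds plus the
	 * nanosecond remainder, as done for ts above. Sketch only. */
	static struct timespec msec_to_timespec(unsigned int msec)
	{
		struct timespec ts;

		ts.tv_sec  = msec / 1000;
		ts.tv_nsec = (msec % 1000) * 1000000L;
		return ts;
	}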
2009-12-31 08:05:50 +00:00
|
|
|
if (forks) {
|
2015-11-05 14:40:50 +00:00
|
|
|
if (perf_evlist__prepare_workload(evsel_list, &target, argv, is_pipe,
|
2014-01-03 17:56:49 +00:00
|
|
|
workload_exec_failed_signal) < 0) {
|
2013-03-11 07:43:18 +00:00
|
|
|
perror("failed to prepare workload");
|
|
|
|
return -1;
|
2009-12-31 08:05:50 +00:00
|
|
|
}
|
2013-09-30 09:01:11 +00:00
|
|
|
child_pid = evsel_list->workload.pid;
|
2009-06-29 11:13:21 +00:00
|
|
|
}
|
|
|
|
|
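For context, perf_evlist__prepare_workload() only forks and parks the child; the workload is exec'ed later, once the counters are set up, by perf_evlist__start_workload() (see the enable path below). A condensed sketch of the lifecycle, using the function names from this file:

	perf_evlist__prepare_workload(...);   /* fork; child blocks on a pipe */
	/* ... create, open and configure the counters ... */
	perf_evlist__start_workload(...);     /* unblock the child -> exec    */
	enable_counters();
	wait4(child_pid, &status, 0, ...);    /* reap the measured workload   */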
perf tools: Enable grouping logic for parsed events
This patch adds functionality that allows creating event groups
based on the way they are specified on the command line, extending
the '{}' group syntax introduced in an earlier patch.
The current '--group/-g' option behaviour remains intact. If you
specify it for the record/stat/top commands, all the specified events
become members of a single group with the first event as the group
leader.
With the new '{}' group syntax you can create a group like:
# perf record -e '{cycles,faults}' ls
resulting in a single event group containing the 'cycles' and 'faults'
events, with the cycles event as group leader.
All groups are created with regard to threads and CPUs. Thus
recording an event group within 2 threads on a server with
4 CPUs will create 8 separate groups.
Examples (first event in brackets is group leader):
# 1 group (cpu-clock,task-clock)
perf record --group -e cpu-clock,task-clock ls
perf record -e '{cpu-clock,task-clock}' ls
# 2 groups (cpu-clock,task-clock) (minor-faults,major-faults)
perf record -e '{cpu-clock,task-clock},{minor-faults,major-faults}' ls
# 1 group (cpu-clock,task-clock,minor-faults,major-faults)
perf record --group -e cpu-clock,task-clock -e minor-faults,major-faults ls
perf record -e '{cpu-clock,task-clock,minor-faults,major-faults}' ls
# 2 groups (cpu-clock,task-clock) (minor-faults,major-faults)
perf record -e '{cpu-clock,task-clock}' -e '{minor-faults,major-faults}' \
-e instructions ls
# 1 group
# (cpu-clock,task-clock,minor-faults,major-faults,instructions)
perf record --group -e cpu-clock,task-clock \
-e minor-faults,major-faults -e instructions ls perf record -e
'{cpu-clock,task-clock,minor-faults,major-faults,instructions}' ls
It's possible to use a standard event modifier for a group; it spans
all events in the group and updates each event's modifier settings,
for example:
# perf record -e '{faults:k,cache-references}:p'
resulting in the ':kp' modifier being used for 'faults' and the ':p'
modifier being used for the 'cache-references' event.
Reviewed-by: Namhyung Kim <namhyung@kernel.org>
Signed-off-by: Jiri Olsa <jolsa@redhat.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ulrich Drepper <drepper@gmail.com>
Link: http://lkml.kernel.org/n/tip-ho42u0wcr8mn1otkalqi13qp@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-08-08 10:22:36 +00:00
|
|
|
if (group)
|
2012-08-14 19:35:48 +00:00
|
|
|
perf_evlist__set_leader(evsel_list);
|
2012-08-08 10:22:36 +00:00
|
|
|
|
2016-06-23 14:26:15 +00:00
|
|
|
evlist__for_each_entry(evsel_list, counter) {
|
2016-05-12 19:25:18 +00:00
|
|
|
try_again:
|
2018-08-30 06:32:17 +00:00
|
|
|
if (create_perf_stat_counter(counter, &stat_config, &target) < 0) {
|
2017-08-31 19:40:26 +00:00
|
|
|
|
|
|
|
/* Weak group failed. Reset the group. */
|
perf stat: Fall weak group back even for EBADF
It's not possible to run a package event and a per-CPU event in the same
group; this combination is used by some of the power metrics. They work
correctly when not using a group.
Normally weak groups should handle that, but in this case EBADF is
returned instead of the normal EINVAL.
$ strace -e perf_event_open ./perf stat -v -e '{cstate_pkg/c2-residency/,msr/tsc/}:W' -a sleep 1
Using CPUID GenuineIntel-6-3E
perf_event_open({type=0x17 /* PERF_TYPE_??? */, size=PERF_ATTR_SIZE_VER5, config=0, ...}, -1, 0, -1, PERF_FLAG_FD_CLOEXEC) = -1 EINVAL (Invalid argument)
perf_event_open({type=0x17 /* PERF_TYPE_??? */, size=PERF_ATTR_SIZE_VER5, config=0, ...}, -1, 0, -1, 0) = -1 EINVAL (Invalid argument)
perf_event_open({type=0x17 /* PERF_TYPE_??? */, size=PERF_ATTR_SIZE_VER5, config=0, ...}, -1, 0, -1, 0) = -1 EINVAL (Invalid argument)
perf_event_open({type=0x17 /* PERF_TYPE_??? */, size=PERF_ATTR_SIZE_VER5, config=0, ...}, -1, 0, -1, 0) = -1 EINVAL (Invalid argument)
perf_event_open({type=0x17 /* PERF_TYPE_??? */, size=PERF_ATTR_SIZE_VER5, config=0, ...}, -1, 0, -1, 0) = 3
perf_event_open({type=0x7 /* PERF_TYPE_??? */, size=PERF_ATTR_SIZE_VER5, config=0, ...}, -1, 0, 3, 0) = 4
perf_event_open({type=0x7 /* PERF_TYPE_??? */, size=PERF_ATTR_SIZE_VER5, config=0, ...}, -1, 1, 0, 0) = -1 EBADF (Bad file descriptor)
and perf errors out.
Make weak groups trigger a fallback for EBADF too. Then this case works correctly:
$ perf stat -v -e '{cstate_pkg/c2-residency/,msr/tsc/}:W' -a sleep 1
Using CPUID GenuineIntel-6-3E
Weak group for cstate_pkg/c2-residency//2 failed
cstate_pkg/c2-residency/: 476709882 1000598460 1000598460
msr/tsc/: 39625837911 12007369110 12007369110
Performance counter stats for 'system wide':
476,709,882 cstate_pkg/c2-residency/
39,625,837,911 msr/tsc/
1.000697588 seconds time elapsed
This fixes perf stat -M Power ...
$ perf stat -M Power --metric-only -a sleep 1
Performance counter stats for 'system wide':
Turbo_Utilization C3_Core_Residency C6_Core_Residency C7_Core_Residency C2_Pkg_Residency C3_Pkg_Residency C6_Pkg_Residency C7_Pkg_Residency
1.0 0.7 30.0 0.0 0.9 0.1 0.4 0.0
1.001240740 seconds time elapsed
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: http://lkml.kernel.org/r/20170905211324.32427-1-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-09-05 21:13:24 +00:00
|
|
|
if ((errno == EINVAL || errno == EBADF) &&
|
2017-08-31 19:40:26 +00:00
|
|
|
counter->leader != counter &&
|
|
|
|
counter->weak_group) {
|
|
|
|
counter = perf_evsel__reset_weak_group(counter);
|
|
|
|
goto try_again;
|
|
|
|
}
|
|
|
|
|
2012-05-08 15:29:16 +00:00
|
|
|
/*
|
|
|
|
* PPC returns ENXIO for HW counters until 2.6.37
|
|
|
|
* (behavior changed with commit b0a873e).
|
|
|
|
*/
|
2011-12-01 22:38:33 +00:00
|
|
|
if (errno == EINVAL || errno == ENOSYS ||
|
2012-05-08 15:29:16 +00:00
|
|
|
errno == ENOENT || errno == EOPNOTSUPP ||
|
|
|
|
errno == ENXIO) {
|
2017-02-17 08:17:38 +00:00
|
|
|
if (verbose > 0)
|
2011-04-29 22:04:15 +00:00
|
|
|
ui__warning("%s event is not supported by the kernel.\n",
|
2012-06-12 15:34:58 +00:00
|
|
|
perf_evsel__name(counter));
|
2011-05-30 14:55:59 +00:00
|
|
|
counter->supported = false;
|
2015-06-11 06:32:40 +00:00
|
|
|
|
|
|
|
if ((counter->leader != counter) ||
|
|
|
|
!(counter->leader->nr_members > 1))
|
|
|
|
continue;
|
2016-05-12 19:25:18 +00:00
|
|
|
} else if (perf_evsel__fallback(counter, errno, msg, sizeof(msg))) {
|
2017-02-17 08:17:38 +00:00
|
|
|
if (verbose > 0)
|
2016-05-12 19:25:18 +00:00
|
|
|
ui__warning("%s\n", msg);
|
|
|
|
goto try_again;
|
perf stat: Ignore error thread when enabling system-wide --per-thread
If we execute 'perf stat --per-thread' with a non-root account (even with
kernel.perf_event_paranoid = -1 set), it reports the error:
jinyao@skl:~$ perf stat --per-thread
Error:
You may not have permission to collect system-wide stats.
Consider tweaking /proc/sys/kernel/perf_event_paranoid,
which controls use of the performance events system by
unprivileged users (without CAP_SYS_ADMIN).
The current value is 2:
-1: Allow use of (almost) all events by all users
Ignore mlock limit after perf_event_mlock_kb without CAP_IPC_LOCK
>= 0: Disallow ftrace function tracepoint by users without CAP_SYS_ADMIN
Disallow raw tracepoint access by users without CAP_SYS_ADMIN
>= 1: Disallow CPU event access by users without CAP_SYS_ADMIN
>= 2: Disallow kernel profiling by users without CAP_SYS_ADMIN
To make this setting permanent, edit /etc/sysctl.conf too, e.g.:
kernel.perf_event_paranoid = -1
Perhaps the ptrace rule doesn't allow tracing some processes, but in any
case the global --per-thread mode had better ignore such errors and
continue working on the other threads.
This patch records the index of the error thread in perf_evsel__open()
and removes that thread before retrying.
For example (run with non-root, kernel.perf_event_paranoid isn't set):
jinyao@skl:~$ perf stat --per-thread
^C
Performance counter stats for 'system wide':
vmstat-3458 6.171984 cpu-clock:u (msec) # 0.000 CPUs utilized
perf-3670 0.515599 cpu-clock:u (msec) # 0.000 CPUs utilized
vmstat-3458 1,163,643 cycles:u # 0.189 GHz
perf-3670 40,881 cycles:u # 0.079 GHz
vmstat-3458 1,410,238 instructions:u # 1.21 insn per cycle
perf-3670 3,536 instructions:u # 0.09 insn per cycle
vmstat-3458 288,937 branches:u # 46.814 M/sec
perf-3670 936 branches:u # 1.815 M/sec
vmstat-3458 15,195 branch-misses:u # 5.26% of all branches
perf-3670 76 branch-misses:u # 8.12% of all branches
12.651675247 seconds time elapsed
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Kan Liang <kan.liang@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1516117388-10120-1-git-send-email-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-01-16 15:43:08 +00:00
|
|
|
} else if (target__has_per_thread(&target) &&
|
|
|
|
evsel_list->threads &&
|
|
|
|
evsel_list->threads->err_thread != -1) {
|
|
|
|
/*
|
|
|
|
* For global --per-thread case, skip current
|
|
|
|
* error thread.
|
|
|
|
*/
|
|
|
|
if (!thread_map__remove(evsel_list->threads,
|
|
|
|
evsel_list->threads->err_thread)) {
|
|
|
|
evsel_list->threads->err_thread = -1;
|
|
|
|
goto try_again;
|
|
|
|
}
|
|
|
|
}
|
2011-04-28 06:48:42 +00:00
|
|
|
|
2012-12-13 18:10:58 +00:00
|
|
|
perf_evsel__open_strerror(counter, &target,
|
|
|
|
errno, msg, sizeof(msg));
|
|
|
|
ui__error("%s\n", msg);
|
|
|
|
|
2011-01-03 19:48:12 +00:00
|
|
|
if (child_pid != -1)
|
|
|
|
kill(child_pid, SIGTERM);
|
2012-08-26 18:24:44 +00:00
|
|
|
|
2011-01-03 19:48:12 +00:00
|
|
|
return -1;
|
|
|
|
}
|
2011-05-30 14:55:59 +00:00
|
|
|
counter->supported = true;
|
2013-11-12 16:58:49 +00:00
|
|
|
|
|
|
|
l = strlen(counter->unit);
|
2018-08-30 06:32:32 +00:00
|
|
|
if (l > stat_config.unit_width)
|
|
|
|
stat_config.unit_width = l;
|
2015-11-05 14:40:49 +00:00
|
|
|
|
perf stat: Use group read for event groups
Make perf stat use group read if there are groups defined. The group
read will get the values for all members of a group within a single
syscall instead of calling the read syscall for every event.
We can see considerably fewer kernel cycles spent on a single
group read than on reading each event separately, as for the following
perf stat command:
# perf stat -e {cycles,instructions} -I 10 -a sleep 1
Monitored with "perf stat -r 5 -e '{cycles:u,cycles:k}'"
Before:
24,325,676 cycles:u
297,040,775 cycles:k
1.038554134 seconds time elapsed
After:
25,034,418 cycles:u
158,256,395 cycles:k
1.036864497 seconds time elapsed
The perf_evsel__open fallback changes were contributed by Andi Kleen.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20170726120206.9099-4-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-07-26 12:02:06 +00:00
|
|
|
if (perf_evsel__should_store_id(counter) &&
|
2018-08-30 06:32:16 +00:00
|
|
|
perf_evsel__store_ids(counter, evsel_list))
|
2015-11-05 14:40:49 +00:00
|
|
|
return -1;
|
2010-03-22 16:10:28 +00:00
|
|
|
}
|
2009-06-13 12:57:28 +00:00
|
|
|
|
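The group read mentioned above returns every member's count from a single read(2) on the group leader. For reference, a sketch of the buffer layout the kernel produces when the leader is opened with PERF_FORMAT_GROUP | PERF_FORMAT_ID, following the perf_event_open(2) man page (the struct name here is illustrative):

	#include <linux/types.h>

	/* read() on a group leader opened with PERF_FORMAT_GROUP |
	 * PERF_FORMAT_ID fills one buffer with all member values. */
	struct group_read_format {
		__u64 nr;            /* number of events in the group */
		struct {
			__u64 value; /* counter value                 */
			__u64 id;    /* kernel-assigned event id      */
		} values[];          /* one entry per group member    */
	};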
2015-03-24 22:23:47 +00:00
|
|
|
if (perf_evlist__apply_filters(evsel_list, &counter)) {
|
2017-06-27 14:22:31 +00:00
|
|
|
pr_err("failed to set filter \"%s\" on event %s with %d (%s)\n",
|
2015-03-24 22:23:47 +00:00
|
|
|
counter->filter, perf_evsel__name(counter), errno,
|
tools: Introduce str_error_r()
The tools so far have been using the strerror_r() GNU variant, which
returns a string that may be the passed buffer or something else.
Besides being tricky in cases where we expect the function using
strerror_r() to return the error formatted in a provided buffer (we
have to check whether it returned something else and copy that
instead), this breaks the build on systems not using glibc, like Alpine
Linux, where musl libc is used.
So, introduce yet another wrapper, str_error_r(), that has the GNU
interface but uses the portable XSI variant of strerror_r(), so that
users rest assured that the provided buffer is used and is what is
returned.
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-d4t42fnf48ytlk8rjxs822tf@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-07-06 14:56:20 +00:00
|
|
|
str_error_r(errno, msg, sizeof(msg)));
|
2011-03-14 15:40:30 +00:00
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
|
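A minimal sketch of what such a GNU-style wrapper over the XSI strerror_r() can look like, assuming the XSI variant is in effect (i.e. POSIX feature-test macros, no _GNU_SOURCE); the real helper lives under tools/lib, and the fallback message here is an assumption for illustration:

	#include <stdio.h>
	#include <string.h>

	/* GNU-style interface, XSI semantics: the message always lands in,
	 * and is returned as, the caller's buffer. Illustrative sketch. */
	static char *str_error_r_sketch(int errnum, char *buf, size_t buflen)
	{
		if (strerror_r(errnum, buf, buflen))  /* XSI: nonzero on failure */
			snprintf(buf, buflen, "Unknown error %d", errnum);
		return buf;
	}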
2016-09-16 15:50:03 +00:00
|
|
|
if (perf_evlist__apply_drv_configs(evsel_list, &counter, &err_term)) {
|
2017-06-27 14:22:31 +00:00
|
|
|
pr_err("failed to set config \"%s\" on event %s with %d (%s)\n",
|
2016-09-16 15:50:03 +00:00
|
|
|
err_term->val.drv_cfg, perf_evsel__name(counter), errno,
|
|
|
|
str_error_r(errno, msg, sizeof(msg)));
|
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
|
perf stat record: Add record command
Add 'perf stat record' command support. It creates a simple (header only)
perf.data file ATM.
The record command can be specified anywhere among the stat options. All
stat command options are valid for the stat record command, with the
exception of the '-o' option: if specified for the record command, it
denotes the perf.data file name.
Committer note:
Set sample_type to PERF_SAMPLE_IDENTIFIER, which should be harmless
while avoiding confusing messages from older tools. For instance,
with sample_type = 0, we get:
$ perf stat record usleep 1
Performance counter stats for 'usleep 1':
0.630237 task-clock (msec) # 0.528 CPUs utilized
1 context-switches # 0.002 M/sec
0 cpu-migrations # 0.000 K/sec
52 page-faults # 0.083 M/sec
978,312 cycles # 1.552 GHz
671,931 stalled-cycles-frontend # 68.68% frontend cycles idle
<not supported> stalled-cycles-backend
646,379 instructions # 0.66 insns per cycle
# 1.04 stalled cycles per insn
131,046 branches # 207.931 M/sec
7,073 branch-misses # 5.40% of all branches
0.001193240 seconds time elapsed
$ oldperf evlist
WARNING: The perf.data file's data size field is 0 which is unexpected.
Was the 'perf record' command properly terminated?
non matching sample_type
$
While with sample_type set to PERF_SAMPLE_IDENTIFIER, after we re-run 'perf
stat record usleep' we get:
$ oldperf evlist
WARNING: The perf.data file's data size field is 0 which is unexpected.
Was the 'perf record' command properly terminated?
task-clock
context-switches
cpu-migrations
page-faults
cycles
stalled-cycles-frontend
stalled-cycles-backend
instructions
branches
branch-misses
$
Which at least shows the names of the events in the perf.data file.
Additionally, such files, when passed to 'perf report', will produce:
$ oldperf report --stdio
WARNING: The perf.data file's data size field is 0 which is unexpected.
Was the 'perf record' command properly terminated?
Warning:
Kernel address maps (/proc/{kallsyms,modules}) were restricted.
Check /proc/sys/kernel/kptr_restrict before running 'perf record'.
As no suitable kallsyms nor vmlinux was found, kernel samples
can't be resolved.
Samples in kernel modules can't be resolved as well.
Error:
The perf.data file has no samples!
# To display the perf.data header info, please use --header/--header-only options.
#
$
This is confusing and can be solved by just adding the kernel mmap record,
which also removes the warning about the data size field being equal to
zero:
$ perf stat record usleep 1
Performance counter stats for 'usleep 1':
0.600796 task-clock (msec) # 0.478 CPUs utilized
1 context-switches # 0.002 M/sec
0 cpu-migrations # 0.000 K/sec
54 page-faults # 0.090 M/sec
886,844 cycles # 1.476 GHz
582,169 stalled-cycles-frontend # 65.65% frontend cycles idle
<not supported> stalled-cycles-backend
638,344 instructions # 0.72 insns per cycle
# 0.91 stalled cycles per insn
130,204 branches # 216.719 M/sec
7,500 branch-misses # 5.76% of all branches
0.001255897 seconds time elapsed
$ oldperf evlist
task-clock
context-switches
cpu-migrations
page-faults
cycles
stalled-cycles-frontend
stalled-cycles-backend
instructions
branches
branch-misses
$ oldperf report --stdio
Error:
The perf.data file has no samples!
# To display the perf.data header info, please use --header/--header-only options.
#
[acme@zoo linux]$
No warnings, sensible output about what the events in the perf.data file are,
and a "file has no samples" message, which is indeed the case.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Tested-by: Kan Liang <kan.liang@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1446734469-11352-3-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-11-05 14:40:46 +00:00
|
|
|
if (STAT_RECORD) {
|
2017-01-23 21:07:59 +00:00
|
|
|
int err, fd = perf_data__fd(&perf_stat.data);
|
2015-11-05 14:40:46 +00:00
|
|
|
|
2015-11-05 14:40:50 +00:00
|
|
|
if (is_pipe) {
|
2017-01-23 21:07:59 +00:00
|
|
|
err = perf_header__write_pipe(perf_data__fd(&perf_stat.data));
|
2015-11-05 14:40:50 +00:00
|
|
|
} else {
|
|
|
|
err = perf_session__write_header(perf_stat.session, evsel_list,
|
|
|
|
fd, false);
|
|
|
|
}
|
|
|
|
|
2015-11-05 14:40:46 +00:00
|
|
|
if (err < 0)
|
|
|
|
return err;
|
2015-11-05 14:40:48 +00:00
|
|
|
|
2018-08-30 06:32:21 +00:00
|
|
|
err = perf_stat_synthesize_config(&stat_config, NULL, evsel_list,
|
2018-08-30 06:32:22 +00:00
|
|
|
process_synthesized_event, is_pipe);
|
2015-11-05 14:40:48 +00:00
|
|
|
if (err < 0)
|
|
|
|
return err;
|
2015-11-05 14:40:46 +00:00
|
|
|
}
|
|
|
|
|
2009-06-13 12:57:28 +00:00
|
|
|
/*
|
|
|
|
* Enable counters and exec the command:
|
|
|
|
*/
|
|
|
|
t0 = rdclock();
|
2013-01-29 11:47:44 +00:00
|
|
|
clock_gettime(CLOCK_MONOTONIC, &ref_time);
|
2009-06-13 12:57:28 +00:00
|
|
|
|
2009-12-31 08:05:50 +00:00
|
|
|
if (forks) {
|
2013-03-11 07:43:18 +00:00
|
|
|
perf_evlist__start_workload(evsel_list);
|
2015-12-03 09:06:44 +00:00
|
|
|
enable_counters();
|
2013-03-11 07:43:18 +00:00
|
|
|
|
2018-01-29 09:25:23 +00:00
|
|
|
if (interval || timeout) {
|
2013-01-29 11:47:44 +00:00
|
|
|
while (!waitpid(child_pid, &status, WNOHANG)) {
|
|
|
|
nanosleep(&ts, NULL);
|
2018-01-29 09:25:23 +00:00
|
|
|
if (timeout)
|
|
|
|
break;
|
2015-06-26 09:29:24 +00:00
|
|
|
process_interval();
|
2018-01-29 09:25:22 +00:00
|
|
|
if (interval_count && !(--times))
|
|
|
|
break;
|
2013-01-29 11:47:44 +00:00
|
|
|
}
|
|
|
|
}
|
2018-08-30 06:32:44 +00:00
|
|
|
wait4(child_pid, &status, 0, &stat_config.ru_data);
|
2013-12-28 18:45:08 +00:00
|
|
|
|
2014-01-02 18:11:25 +00:00
|
|
|
if (workload_exec_errno) {
|
2016-07-06 14:56:20 +00:00
|
|
|
const char *emsg = str_error_r(workload_exec_errno, msg, sizeof(msg));
|
2014-01-02 18:11:25 +00:00
|
|
|
pr_err("Workload failed: %s\n", emsg);
|
2013-12-28 18:45:08 +00:00
|
|
|
return -1;
|
2014-01-02 18:11:25 +00:00
|
|
|
}
|
2013-12-28 18:45:08 +00:00
|
|
|
|
2011-09-15 21:31:40 +00:00
|
|
|
if (WIFSIGNALED(status))
|
|
|
|
psignal(WTERMSIG(status), argv[0]);
|
2009-12-31 08:05:50 +00:00
|
|
|
} else {
|
2015-12-03 09:06:44 +00:00
|
|
|
enable_counters();
|
2013-01-29 11:47:44 +00:00
|
|
|
while (!done) {
|
|
|
|
nanosleep(&ts, NULL);
|
2018-01-29 09:25:23 +00:00
|
|
|
if (timeout)
|
|
|
|
break;
|
2018-01-29 09:25:22 +00:00
|
|
|
if (interval) {
|
2015-06-26 09:29:24 +00:00
|
|
|
process_interval();
|
2018-01-29 09:25:22 +00:00
|
|
|
if (interval_count && !(--times))
|
|
|
|
break;
|
|
|
|
}
|
2013-01-29 11:47:44 +00:00
|
|
|
}
|
2009-12-31 08:05:50 +00:00
|
|
|
}
|
2009-06-13 12:57:28 +00:00
|
|
|
|
perf stat: Avoid skew when reading events
When we don't have a tracee (i.e. we're attaching to a task or CPU),
counters can still be running after our workload finishes, and can still
be running as we read their values. As we read events one-by-one, there
can be arbitrary skew between values of events, even within a group.
This means that ratios within an event group are not reliable.
This skew can be seen if measuring a group of identical events, e.g:
# perf stat -a -C0 -e '{cycles,cycles}' sleep 1
To avoid this, we must stop groups from counting before we read the
values of any constituent events. This patch adds and makes use of a new
disable_counters() helper, which disables group leaders (and thus each
group as a whole). This mirrors the use of enable_counters() for
starting event groups in the absence of a tracee.
Closing a group leader splits the group, and without a disabled group
leader the newly split events will begin counting. Thus to ensure counts
are reliable we must defer closing group leaders until all counts have
been read. To do so, this patch removes the event closing logic from the
read_counters() helper and explicitly closes the events using
perf_evlist__close(), which also aids legibility.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1470747869-3567-1-git-send-email-mark.rutland@arm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-08-09 13:04:29 +00:00
|
|
|
disable_counters();
|
|
|
|
|
2009-06-13 12:57:28 +00:00
|
|
|
t1 = rdclock();
|
|
|
|
|
2018-04-23 09:08:21 +00:00
|
|
|
if (walltime_run_table)
|
|
|
|
walltime_run[run_idx] = t1 - t0;
|
|
|
|
|
2009-09-04 13:36:08 +00:00
|
|
|
update_stats(&walltime_nsecs_stats, t1 - t0);
|
2009-06-13 12:57:28 +00:00
|
|
|
|
2016-08-09 13:04:29 +00:00
|
|
|
/*
|
|
|
|
* Closing a group leader splits the group, and as we only disable
|
|
|
|
* group leaders, results in remaining events becoming enabled. To
|
|
|
|
* avoid arbitrary skew, we must read all counters before closing any
|
|
|
|
* group leaders.
|
|
|
|
*/
|
|
|
|
read_counters();
|
|
|
|
perf_evlist__close(evsel_list);
|
2011-01-03 19:45:52 +00:00
|
|
|
|
2009-06-13 12:57:28 +00:00
|
|
|
return WEXITSTATUS(status);
|
|
|
|
}
|
|
|
|
|
2018-04-23 09:08:21 +00:00
|
|
|
static int run_perf_stat(int argc, const char **argv, int run_idx)
|
2012-10-23 11:40:14 +00:00
|
|
|
{
|
|
|
|
int ret;
|
|
|
|
|
|
|
|
if (pre_cmd) {
|
|
|
|
ret = system(pre_cmd);
|
|
|
|
if (ret)
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (sync_run)
|
|
|
|
sync();
|
|
|
|
|
2018-04-23 09:08:21 +00:00
|
|
|
ret = __run_perf_stat(argc, argv, run_idx);
|
2012-10-23 11:40:14 +00:00
|
|
|
if (ret)
|
|
|
|
return ret;
|
|
|
|
|
|
|
|
if (post_cmd) {
|
|
|
|
ret = system(post_cmd);
|
|
|
|
if (ret)
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2018-08-30 06:32:27 +00:00
|
|
|
static void print_running(struct perf_stat_config *config,
|
|
|
|
u64 run, u64 ena)
|
2015-03-11 14:16:27 +00:00
|
|
|
{
|
2018-08-30 06:32:29 +00:00
|
|
|
if (config->csv_output) {
|
2018-08-30 06:32:27 +00:00
|
|
|
fprintf(config->output, "%s%" PRIu64 "%s%.2f",
|
2018-08-30 06:32:29 +00:00
|
|
|
config->csv_sep,
|
2015-03-11 14:16:27 +00:00
|
|
|
run,
|
2018-08-30 06:32:29 +00:00
|
|
|
config->csv_sep,
|
2015-03-11 14:16:27 +00:00
|
|
|
ena ? 100.0 * run / ena : 100.0);
|
|
|
|
} else if (run != ena) {
|
2018-08-30 06:32:27 +00:00
|
|
|
fprintf(config->output, " (%.2f%%)", 100.0 * run / ena);
|
2015-03-11 14:16:27 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2018-08-30 06:32:27 +00:00
|
|
|
static void print_noise_pct(struct perf_stat_config *config,
|
|
|
|
double total, double avg)
|
2011-04-27 03:35:39 +00:00
|
|
|
{
|
2012-09-17 08:31:14 +00:00
|
|
|
double pct = rel_stddev_stats(total, avg);
|
2011-04-27 03:35:39 +00:00
|
|
|
|
2018-08-30 06:32:29 +00:00
|
|
|
if (config->csv_output)
|
|
|
|
fprintf(config->output, "%s%.2f%%", config->csv_sep, pct);
|
2011-09-07 23:14:02 +00:00
|
|
|
else if (pct)
|
2018-08-30 06:32:27 +00:00
|
|
|
fprintf(config->output, " ( +-%6.2f%% )", pct);
|
2011-04-27 03:35:39 +00:00
|
|
|
}
|
|
|
|
|
2018-08-30 06:32:27 +00:00
|
|
|
static void print_noise(struct perf_stat_config *config,
|
|
|
|
struct perf_evsel *evsel, double avg)
|
2009-06-13 12:57:28 +00:00
|
|
|
{
|
2015-10-16 10:41:03 +00:00
|
|
|
struct perf_stat_evsel *ps;
|
2011-01-03 18:39:04 +00:00
|
|
|
|
2018-08-30 06:32:36 +00:00
|
|
|
if (config->run_count == 1)
|
2009-09-04 16:23:38 +00:00
|
|
|
return;
|
|
|
|
|
2017-10-26 17:22:34 +00:00
|
|
|
ps = evsel->stats;
|
2018-08-30 06:32:27 +00:00
|
|
|
print_noise_pct(config, stddev_stats(&ps->res_stats[0]), avg);
|
2009-06-13 12:57:28 +00:00
|
|
|
}
|
|
|
|
|
2018-08-30 06:32:28 +00:00
|
|
|
static void aggr_printout(struct perf_stat_config *config,
|
|
|
|
struct perf_evsel *evsel, int id, int nr)
|
2009-06-13 11:35:00 +00:00
|
|
|
{
|
2018-08-30 06:32:28 +00:00
|
|
|
switch (config->aggr_mode) {
|
2013-02-14 12:57:29 +00:00
|
|
|
case AGGR_CORE:
|
2018-08-30 06:32:28 +00:00
|
|
|
fprintf(config->output, "S%d-C%*d%s%*d%s",
|
2013-02-14 12:57:29 +00:00
|
|
|
cpu_map__id_to_socket(id),
|
2018-08-30 06:32:29 +00:00
|
|
|
config->csv_output ? 0 : -8,
|
2013-02-14 12:57:29 +00:00
|
|
|
cpu_map__id_to_cpu(id),
|
2018-08-30 06:32:29 +00:00
|
|
|
config->csv_sep,
|
|
|
|
config->csv_output ? 0 : 4,
|
2013-02-14 12:57:29 +00:00
|
|
|
nr,
|
2018-08-30 06:32:29 +00:00
|
|
|
config->csv_sep);
|
2013-02-14 12:57:29 +00:00
|
|
|
break;
|
2013-02-14 12:57:27 +00:00
|
|
|
case AGGR_SOCKET:
|
2018-08-30 06:32:28 +00:00
|
|
|
fprintf(config->output, "S%*d%s%*d%s",
|
2018-08-30 06:32:29 +00:00
|
|
|
config->csv_output ? 0 : -5,
|
2013-02-14 12:57:29 +00:00
|
|
|
id,
|
2018-08-30 06:32:29 +00:00
|
|
|
config->csv_sep,
|
|
|
|
config->csv_output ? 0 : 4,
|
2013-02-06 14:46:02 +00:00
|
|
|
nr,
|
2018-08-30 06:32:29 +00:00
|
|
|
config->csv_sep);
|
2013-02-14 12:57:27 +00:00
|
|
|
break;
|
|
|
|
case AGGR_NONE:
|
2018-08-30 06:32:28 +00:00
|
|
|
fprintf(config->output, "CPU%*d%s",
|
2018-08-30 06:32:29 +00:00
|
|
|
config->csv_output ? 0 : -4,
|
|
|
|
perf_evsel__cpus(evsel)->map[id], config->csv_sep);
|
2013-02-14 12:57:27 +00:00
|
|
|
break;
|
perf stat: Introduce --per-thread option
Currently, the values for all tasks given as -p option PID arguments get
aggregated and printed as single values.
Add the --per-thread option to print values per task.
$ perf stat -e cycles,instructions --per-thread -p 30190,30242
^C
Performance counter stats for process id '30190,30242':
cat-30190 0 cycles
yes-30242 3,842,525,421 cycles
cat-30190 0 instructions
yes-30242 10,370,817,010 instructions
1.143155657 seconds time elapsed
Also works under interval mode:
$ perf stat -e cycles,instructions --per-thread -p 30190,30242 -I 1000
# time comm-pid counts unit events
1.000073435 cat-30190 89,058 cycles
1.000073435 yes-30242 3,360,786,902 cycles (100.00%)
1.000073435 cat-30190 14,066 instructions
1.000073435 yes-30242 9,069,937,462 instructions
2.000204830 cat-30190 0 cycles
2.000204830 yes-30242 3,351,667,626 cycles
2.000204830 cat-30190 0 instructions
2.000204830 yes-30242 9,045,796,885 instructions
^C 2.771286639 cat-30190 0 cycles
2.771286639 yes-30242 2,593,884,166 cycles
2.771286639 cat-30190 0 instructions
2.771286639 yes-30242 7,001,171,191 instructions
It works only with the -t and -p options; otherwise the following error
is printed:
$ perf stat -e cycles --per-thread -I 1000 ls
The --per-thread option is only available when monitoring via -p -t options.
-p, --pid <pid> stat events on existing process id
-t, --tid <tid> stat events on existing thread id
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1435310967-14570-23-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-06-26 09:29:27 +00:00
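Under --per-thread the counts are kept per thread-map index instead of being summed, and each printed line is prefixed with the thread's comm and pid. A rough self-contained sketch of that printing step with sample data (the real code uses thread_map__comm()/thread_map__pid() rather than plain arrays):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

static void print_per_thread(const char *const *comms, const int *pids,
			     const uint64_t *counts, int n,
			     const char *event)
{
	for (int i = 0; i < n; i++)
		/* e.g. "             yes-30242      3842525421 cycles" */
		printf("%16s-%-8d %13" PRIu64 " %s\n",
		       comms[i], pids[i], counts[i], event);
}

int main(void)
{
	const char *comms[]   = { "cat", "yes" };
	const int pids[]      = { 30190, 30242 };
	const uint64_t vals[] = { 0, 3842525421ULL };

	print_per_thread(comms, pids, vals, 2, "cycles");
	return 0;
}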
|
|
|
case AGGR_THREAD:
|
2018-08-30 06:32:28 +00:00
|
|
|
fprintf(config->output, "%*s-%*d%s",
|
2018-08-30 06:32:29 +00:00
|
|
|
config->csv_output ? 0 : 16,
|
2015-06-26 09:29:27 +00:00
|
|
|
thread_map__comm(evsel->threads, id),
|
2018-08-30 06:32:29 +00:00
|
|
|
config->csv_output ? 0 : -8,
|
2015-06-26 09:29:27 +00:00
|
|
|
thread_map__pid(evsel->threads, id),
|
2018-08-30 06:32:29 +00:00
|
|
|
config->csv_sep);
|
2015-06-26 09:29:27 +00:00
|
|
|
break;
|
2013-02-14 12:57:27 +00:00
|
|
|
case AGGR_GLOBAL:
|
2015-10-16 10:41:04 +00:00
|
|
|
case AGGR_UNSET:
|
2013-02-14 12:57:27 +00:00
|
|
|
default:
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2016-01-30 17:06:49 +00:00
|
|
|
struct outstate {
|
|
|
|
FILE *fh;
|
|
|
|
bool newline;
|
perf stat: Add support for metrics in interval mode
Now that we can modify the metrics printout functions easily, it's
straightforward to support metric printing for interval mode. All that
is needed is to print the time stamp on every new line. Pass the prefix
into the context and print it out.
v2: Move wrong hunk to here.
Committer note:
Before:
[root@jouet ~]# perf stat -I 1000 -e instructions,cycles sleep 1
# time counts unit events
1.000168216 538,913 instructions
1.000168216 748,765 cycles
1.000660048 153,741 instructions
1.000660048 214,066 cycles
After:
# perf stat -I 1000 -e instructions,cycles sleep 1
# time counts unit events
1.000215928 519,620 instructions # 0.69 insn per cycle
1.000215928 752,003 cycles
1.000946033 148,502 instructions # 0.33 insn per cycle
1.000946033 160,104 cycles
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1454173616-17710-3-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-01-30 17:06:50 +00:00
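The key change is that the new-line callback receives a context carrying the interval timestamp prefix, so every wrapped metric line can re-emit it. A stripped-down sketch of the pattern (mirroring the outstate/do_new_line_std code below, with only the relevant fields kept):

#include <stdio.h>

/* Minimal output context: the file handle plus the interval prefix
 * (e.g. "     1.000215928") to repeat at each line break. */
struct out_ctx {
	FILE *fh;
	const char *prefix;
};

static void new_line_sketch(void *ctxp)
{
	struct out_ctx *ctx = ctxp;

	fputc('\n', ctx->fh);
	/* Re-print the timestamp so continuation lines stay attached
	 * to the interval they belong to. */
	fputs(ctx->prefix, ctx->fh);
}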
|
|
|
const char *prefix;
|
perf stat: Implement CSV metrics output
Now support CSV output for metrics. With the new output callbacks this
is relatively straightforward: we just create new callbacks.
This makes it easy to plot metrics from CSV files.
The new-line callback needs to know the number of fields so it can skip
them correctly.
Example output before:
% perf stat -x, true
0.200687,,task-clock,200687,100.00
0,,context-switches,200687,100.00
0,,cpu-migrations,200687,100.00
40,,page-faults,200687,100.00
730871,,cycles,203601,100.00
551056,,stalled-cycles-frontend,203601,100.00
<not supported>,,stalled-cycles-backend,0,100.00
385523,,instructions,203601,100.00
78028,,branches,203601,100.00
3946,,branch-misses,203601,100.00
After:
% perf stat -x, true
.502457,,task-clock,502457,100.00,0.485,CPUs utilized
0,,context-switches,502457,100.00,0.000,K/sec
0,,cpu-migrations,502457,100.00,0.000,K/sec
45,,page-faults,502457,100.00,0.090,M/sec
644692,,cycles,509102,100.00,1.283,GHz
423470,,stalled-cycles-frontend,509102,100.00,65.69,frontend cycles idle
<not supported>,,stalled-cycles-backend,0,100.00,,,,
492701,,instructions,509102,100.00,0.76,insn per cycle
,,,,,0.86,stalled cycles per insn
97767,,branches,509102,100.00,194.578,M/sec
4788,,branch-misses,509102,100.00,4.90,of all branches
or, more readably:
$ perf stat -x, -o x.csv true
$ column -s, -t x.csv
0.490635 task-clock 490635 100.00 0.489 CPUs utilized
0 context-switches 490635 100.00 0.000 K/sec
0 cpu-migrations 490635 100.00 0.000 K/sec
45 page-faults 490635 100.00 0.092 M/sec
629080 cycles 497698 100.00 1.282 GHz
409498 stalled-cycles-frontend 497698 100.00 65.09 frontend cycles idle
<not supported> stalled-cycles-backend 0 100.00
491424 instructions 497698 100.00 0.78 insn per cycle
0.83 stalled cycles per insn
97278 branches 497698 100.00 198.270 M/sec
4569 branch-misses 497698 100.00 4.70 of all branches
Two new fields are added: metric value and metric name.
v2: Split out function argument changes
v3: Reenable metrics for real.
v4: Fix wrong hunk from refactoring.
v5: Remove extra "noise" printing (Jiri), but add it to the not counted case.
Print empty metrics for not counted.
v6: Avoid outputting metric on empty format.
v7: Print metric at the end
v8: Remove extra run, ena fields
v9: Avoid extra new line for unsupported counters
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: http://lkml.kernel.org/r/1456785386-19481-3-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-02-29 22:36:21 +00:00
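The CSV callback has to split the formatted metric into a numeric value field and a unit/name field, since plotting tools key on column position. A small sketch of that split (the same ltrim-and-scan idea as print_metric_csv() below, hardcoding a comma separator):

#include <ctype.h>
#include <stdio.h>

static void print_metric_csv_sketch(FILE *out, const char *fmt,
				    double val, const char *unit)
{
	char buf[64], *vals, *ends;

	snprintf(buf, sizeof(buf), fmt, val);
	/* Trim leading spaces, then cut the string right after the
	 * number so value and unit land in separate CSV columns. */
	for (vals = buf; isspace((unsigned char)*vals); vals++)
		;
	for (ends = vals; isdigit((unsigned char)*ends) || *ends == '.';
	     ends++)
		;
	*ends = 0;
	fprintf(out, ",%s,%s", vals, unit);
}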
|
|
|
int nfields;
|
perf stat: Support metrics in --per-core/socket mode
Enable metrics printing in --per-core / --per-socket mode. We need to
save the shadow metrics in a unique place, so we always use the first
CPU in the aggregation and then use that same CPU to retrieve the shadow
value later.
Example output:
% perf stat --per-core -a ./BC1s
Performance counter stats for 'system wide':
S0-C0 2 2966.020381 task-clock (msec) # 2.004 CPUs utilized (100.00%)
S0-C0 2 49 context-switches # 0.017 K/sec (100.00%)
S0-C0 2 4 cpu-migrations # 0.001 K/sec (100.00%)
S0-C0 2 467 page-faults # 0.157 K/sec
S0-C0 2 4,599,061,773 cycles # 1.551 GHz (100.00%)
S0-C0 2 9,755,886,883 instructions # 2.12 insn per cycle (100.00%)
S0-C0 2 1,906,272,125 branches # 642.704 M/sec (100.00%)
S0-C0 2 81,180,867 branch-misses # 4.26% of all branches
S0-C1 2 2965.995373 task-clock (msec) # 2.003 CPUs utilized (100.00%)
S0-C1 2 62 context-switches # 0.021 K/sec (100.00%)
S0-C1 2 8 cpu-migrations # 0.003 K/sec (100.00%)
S0-C1 2 281 page-faults # 0.095 K/sec
S0-C1 2 6,347,290 cycles # 0.002 GHz (100.00%)
S0-C1 2 4,654,156 instructions # 0.73 insn per cycle (100.00%)
S0-C1 2 947,121 branches # 0.319 M/sec (100.00%)
S0-C1 2 37,322 branch-misses # 3.94% of all branches
1.480409747 seconds time elapsed
v2: Rebase to older patches
v3: Document shadow cpus. Fix aggr_get_id argument. Fix -A shadows (Jiri)
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: http://lkml.kernel.org/r/1456785386-19481-4-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-02-29 22:36:22 +00:00
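The trick is to give every aggregation a stable home for its shadow metrics: the first CPU whose aggregation id matches. A simplified sketch of that lookup; cpu_to_id() is a hypothetical stand-in for perf's aggr_get_id() used in first_shadow_cpu() below:

/* Hypothetical mapping from a CPU number to its aggregation id
 * (socket or core), standing in for aggr_get_id(). */
extern int cpu_to_id(int cpu);

/* Return the first CPU belonging to aggregation 'id'; its per-CPU
 * slot stores the shadow metrics for the whole socket/core. */
static int first_cpu_of(int id, const int *cpus, int nr_cpus)
{
	for (int i = 0; i < nr_cpus; i++)
		if (cpu_to_id(cpus[i]) == id)
			return cpus[i];
	return 0;
}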
|
|
|
int id, nr;
|
|
|
|
struct perf_evsel *evsel;
|
2016-01-30 17:06:49 +00:00
|
|
|
};
|
|
|
|
|
|
|
|
#define METRIC_LEN 35
|
|
|
|
|
2018-08-30 06:32:28 +00:00
|
|
|
static void new_line_std(struct perf_stat_config *config __maybe_unused,
|
|
|
|
void *ctx)
|
2016-01-30 17:06:49 +00:00
|
|
|
{
|
|
|
|
struct outstate *os = ctx;
|
|
|
|
|
|
|
|
os->newline = true;
|
|
|
|
}
|
|
|
|
|
2018-08-30 06:32:28 +00:00
|
|
|
static void do_new_line_std(struct perf_stat_config *config,
|
|
|
|
struct outstate *os)
|
2016-01-30 17:06:49 +00:00
|
|
|
{
|
|
|
|
fputc('\n', os->fh);
|
2016-01-30 17:06:50 +00:00
|
|
|
fputs(os->prefix, os->fh);
|
2018-08-30 06:32:28 +00:00
|
|
|
aggr_printout(config, os->evsel, os->id, os->nr);
|
|
|
|
if (config->aggr_mode == AGGR_NONE)
|
2016-01-30 17:06:49 +00:00
|
|
|
fprintf(os->fh, " ");
|
|
|
|
fprintf(os->fh, " ");
|
|
|
|
}
|
|
|
|
|
2018-08-30 06:32:28 +00:00
|
|
|
static void print_metric_std(struct perf_stat_config *config,
|
|
|
|
void *ctx, const char *color, const char *fmt,
|
2016-01-30 17:06:49 +00:00
|
|
|
const char *unit, double val)
|
|
|
|
{
|
|
|
|
struct outstate *os = ctx;
|
|
|
|
FILE *out = os->fh;
|
|
|
|
int n;
|
|
|
|
bool newline = os->newline;
|
|
|
|
|
|
|
|
os->newline = false;
|
|
|
|
|
|
|
|
if (unit == NULL || fmt == NULL) {
|
|
|
|
fprintf(out, "%-*s", METRIC_LEN, "");
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (newline)
|
2018-08-30 06:32:28 +00:00
|
|
|
do_new_line_std(config, os);
|
2016-01-30 17:06:49 +00:00
|
|
|
|
|
|
|
n = fprintf(out, " # ");
|
|
|
|
if (color)
|
|
|
|
n += color_fprintf(out, color, fmt, val);
|
|
|
|
else
|
|
|
|
n += fprintf(out, fmt, val);
|
|
|
|
fprintf(out, " %-*s", METRIC_LEN - n - 1, unit);
|
|
|
|
}
|
|
|
|
|
2018-08-30 06:32:28 +00:00
|
|
|
static void new_line_csv(struct perf_stat_config *config, void *ctx)
|
2016-02-29 22:36:21 +00:00
|
|
|
{
|
|
|
|
struct outstate *os = ctx;
|
|
|
|
int i;
|
|
|
|
|
|
|
|
fputc('\n', os->fh);
|
|
|
|
if (os->prefix)
|
2018-08-30 06:32:29 +00:00
|
|
|
fprintf(os->fh, "%s%s", os->prefix, config->csv_sep);
|
2018-08-30 06:32:28 +00:00
|
|
|
aggr_printout(config, os->evsel, os->id, os->nr);
|
2016-02-29 22:36:21 +00:00
|
|
|
for (i = 0; i < os->nfields; i++)
|
2018-08-30 06:32:29 +00:00
|
|
|
fputs(config->csv_sep, os->fh);
|
2016-02-29 22:36:21 +00:00
|
|
|
}
|
|
|
|
|
2018-08-30 06:32:28 +00:00
|
|
|
static void print_metric_csv(struct perf_stat_config *config __maybe_unused,
|
|
|
|
void *ctx,
|
2016-02-29 22:36:21 +00:00
|
|
|
const char *color __maybe_unused,
|
|
|
|
const char *fmt, const char *unit, double val)
|
|
|
|
{
|
|
|
|
struct outstate *os = ctx;
|
|
|
|
FILE *out = os->fh;
|
|
|
|
char buf[64], *vals, *ends;
|
|
|
|
|
|
|
|
if (unit == NULL || fmt == NULL) {
|
2018-08-30 06:32:29 +00:00
|
|
|
fprintf(out, "%s%s", config->csv_sep, config->csv_sep);
|
2016-02-29 22:36:21 +00:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
snprintf(buf, sizeof(buf), fmt, val);
|
2017-04-07 14:24:18 +00:00
|
|
|
ends = vals = ltrim(buf);
|
2016-02-29 22:36:21 +00:00
|
|
|
while (isdigit(*ends) || *ends == '.')
|
|
|
|
ends++;
|
|
|
|
*ends = 0;
|
|
|
|
while (isspace(*unit))
|
|
|
|
unit++;
|
2018-08-30 06:32:29 +00:00
|
|
|
fprintf(out, "%s%s%s%s", config->csv_sep, vals, config->csv_sep, unit);
|
2016-02-29 22:36:21 +00:00
|
|
|
}
|
|
|
|
|
2016-03-03 23:57:36 +00:00
|
|
|
/* Filter out some columns that don't work well in metrics only mode */
|
|
|
|
|
|
|
|
static bool valid_only_metric(const char *unit)
|
|
|
|
{
|
|
|
|
if (!unit)
|
|
|
|
return false;
|
|
|
|
if (strstr(unit, "/sec") ||
|
|
|
|
strstr(unit, "hz") ||
|
|
|
|
strstr(unit, "Hz") ||
|
|
|
|
strstr(unit, "CPUs utilized"))
|
|
|
|
return false;
|
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
|
|
|
static const char *fixunit(char *buf, struct perf_evsel *evsel,
|
|
|
|
const char *unit)
|
|
|
|
{
|
|
|
|
if (!strncmp(unit, "of all", 6)) {
|
|
|
|
snprintf(buf, 1024, "%s %s", perf_evsel__name(evsel),
|
|
|
|
unit);
|
|
|
|
return buf;
|
|
|
|
}
|
|
|
|
return unit;
|
|
|
|
}
|
|
|
|
|
2018-08-30 06:32:37 +00:00
|
|
|
static void print_metric_only(struct perf_stat_config *config,
|
2018-08-30 06:32:28 +00:00
|
|
|
void *ctx, const char *color, const char *fmt,
|
2016-03-03 23:57:36 +00:00
|
|
|
const char *unit, double val)
|
|
|
|
{
|
|
|
|
struct outstate *os = ctx;
|
|
|
|
FILE *out = os->fh;
|
2018-06-06 22:15:08 +00:00
|
|
|
char buf[1024], str[1024];
|
2018-08-30 06:32:37 +00:00
|
|
|
unsigned mlen = config->metric_only_len;
|
2016-03-03 23:57:36 +00:00
|
|
|
|
|
|
|
if (!valid_only_metric(unit))
|
|
|
|
return;
|
|
|
|
unit = fixunit(buf, os->evsel, unit);
|
|
|
|
if (mlen < strlen(unit))
|
|
|
|
mlen = strlen(unit) + 1;
|
2018-06-06 22:15:08 +00:00
|
|
|
|
|
|
|
if (color)
|
|
|
|
mlen += strlen(color) + sizeof(PERF_COLOR_RESET) - 1;
|
|
|
|
|
|
|
|
color_snprintf(str, sizeof(str), color ?: "", fmt, val);
|
|
|
|
fprintf(out, "%*s ", mlen, str);
|
2016-03-03 23:57:36 +00:00
|
|
|
}
|
|
|
|
|
2018-08-30 06:32:28 +00:00
|
|
|
static void print_metric_only_csv(struct perf_stat_config *config __maybe_unused,
|
|
|
|
void *ctx, const char *color __maybe_unused,
|
2016-03-03 23:57:36 +00:00
|
|
|
const char *fmt,
|
|
|
|
const char *unit, double val)
|
|
|
|
{
|
|
|
|
struct outstate *os = ctx;
|
|
|
|
FILE *out = os->fh;
|
|
|
|
char buf[64], *vals, *ends;
|
|
|
|
char tbuf[1024];
|
|
|
|
|
|
|
|
if (!valid_only_metric(unit))
|
|
|
|
return;
|
|
|
|
unit = fixunit(tbuf, os->evsel, unit);
|
|
|
|
snprintf(buf, sizeof buf, fmt, val);
|
2017-04-07 14:24:18 +00:00
|
|
|
ends = vals = ltrim(buf);
|
2016-03-03 23:57:36 +00:00
|
|
|
while (isdigit(*ends) || *ends == '.')
|
|
|
|
ends++;
|
|
|
|
*ends = 0;
|
2018-08-30 06:32:29 +00:00
|
|
|
fprintf(out, "%s%s", vals, config->csv_sep);
|
2016-03-03 23:57:36 +00:00
|
|
|
}
|
|
|
|
|
2018-08-30 06:32:28 +00:00
|
|
|
static void new_line_metric(struct perf_stat_config *config __maybe_unused,
|
|
|
|
void *ctx __maybe_unused)
|
2016-03-03 23:57:36 +00:00
|
|
|
{
|
|
|
|
}
|
|
|
|
|
2018-08-30 06:32:37 +00:00
|
|
|
static void print_metric_header(struct perf_stat_config *config,
|
2018-08-30 06:32:28 +00:00
|
|
|
void *ctx, const char *color __maybe_unused,
|
2016-03-03 23:57:36 +00:00
|
|
|
const char *fmt __maybe_unused,
|
|
|
|
const char *unit, double val __maybe_unused)
|
|
|
|
{
|
|
|
|
struct outstate *os = ctx;
|
|
|
|
char tbuf[1024];
|
|
|
|
|
|
|
|
if (!valid_only_metric(unit))
|
|
|
|
return;
|
|
|
|
unit = fixunit(tbuf, os->evsel, unit);
|
2018-08-30 06:32:29 +00:00
|
|
|
if (config->csv_output)
|
|
|
|
fprintf(os->fh, "%s%s", unit, config->csv_sep);
|
2016-03-03 23:57:36 +00:00
|
|
|
else
|
2018-08-30 06:32:37 +00:00
|
|
|
fprintf(os->fh, "%*s ", config->metric_only_len, unit);
|
2016-03-03 23:57:36 +00:00
|
|
|
}
|
|
|
|
|
2018-08-30 06:32:38 +00:00
|
|
|
static int first_shadow_cpu(struct perf_stat_config *config,
|
|
|
|
struct perf_evsel *evsel, int id)
|
2016-02-29 22:36:22 +00:00
|
|
|
{
|
|
|
|
int i;
|
|
|
|
|
|
|
|
if (!aggr_get_id)
|
|
|
|
return 0;
|
|
|
|
|
2018-08-30 06:32:38 +00:00
|
|
|
if (config->aggr_mode == AGGR_NONE)
|
2016-02-29 22:36:22 +00:00
|
|
|
return id;
|
|
|
|
|
2018-08-30 06:32:38 +00:00
|
|
|
if (config->aggr_mode == AGGR_GLOBAL)
|
2016-02-29 22:36:22 +00:00
|
|
|
return 0;
|
|
|
|
|
|
|
|
for (i = 0; i < perf_evsel__nr_cpus(evsel); i++) {
|
|
|
|
int cpu2 = perf_evsel__cpus(evsel)->map[i];
|
|
|
|
|
|
|
|
if (aggr_get_id(evsel_list->cpus, cpu2) == id)
|
|
|
|
return cpu2;
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2018-08-30 06:32:28 +00:00
|
|
|
static void abs_printout(struct perf_stat_config *config,
|
|
|
|
int id, int nr, struct perf_evsel *evsel, double avg)
|
2015-06-03 14:25:56 +00:00
|
|
|
{
|
2018-08-30 06:32:28 +00:00
|
|
|
FILE *output = config->output;
|
2015-06-03 14:25:56 +00:00
|
|
|
double sc = evsel->scale;
|
|
|
|
const char *fmt;
|
|
|
|
|
2018-08-30 06:32:29 +00:00
|
|
|
if (config->csv_output) {
|
2016-05-05 23:04:03 +00:00
|
|
|
fmt = floor(sc) != sc ? "%.2f%s" : "%.0f%s";
|
2015-06-03 14:25:56 +00:00
|
|
|
} else {
|
|
|
|
if (big_num)
|
2016-05-05 23:04:03 +00:00
|
|
|
fmt = floor(sc) != sc ? "%'18.2f%s" : "%'18.0f%s";
|
2015-06-03 14:25:56 +00:00
|
|
|
else
|
2016-05-05 23:04:03 +00:00
|
|
|
fmt = floor(sc) != sc ? "%18.2f%s" : "%18.0f%s";
|
2015-06-03 14:25:56 +00:00
|
|
|
}
|
|
|
|
|
2018-08-30 06:32:28 +00:00
|
|
|
aggr_printout(config, evsel, id, nr);
|
2015-06-03 14:25:56 +00:00
|
|
|
|
2018-08-30 06:32:29 +00:00
|
|
|
fprintf(output, fmt, avg, config->csv_sep);
|
2015-06-03 14:25:56 +00:00
|
|
|
|
|
|
|
if (evsel->unit)
|
|
|
|
fprintf(output, "%-*s%s",
|
2018-08-30 06:32:32 +00:00
|
|
|
config->csv_output ? 0 : config->unit_width,
|
2018-08-30 06:32:29 +00:00
|
|
|
evsel->unit, config->csv_sep);
|
2015-06-03 14:25:56 +00:00
|
|
|
|
2018-08-30 06:32:29 +00:00
|
|
|
fprintf(output, "%-*s", config->csv_output ? 0 : 25, perf_evsel__name(evsel));
|
2015-06-03 14:25:56 +00:00
|
|
|
|
|
|
|
if (evsel->cgrp)
|
2018-08-30 06:32:29 +00:00
|
|
|
fprintf(output, "%s%s", config->csv_sep, evsel->cgrp->name);
|
2015-11-03 01:50:21 +00:00
|
|
|
}
|
2015-06-03 14:25:56 +00:00
|
|
|
|
2018-04-24 18:20:11 +00:00
|
|
|
static bool is_mixed_hw_group(struct perf_evsel *counter)
|
|
|
|
{
|
|
|
|
struct perf_evlist *evlist = counter->evlist;
|
|
|
|
u32 pmu_type = counter->attr.type;
|
|
|
|
struct perf_evsel *pos;
|
|
|
|
|
|
|
|
if (counter->nr_members < 2)
|
|
|
|
return false;
|
|
|
|
|
|
|
|
evlist__for_each_entry(evlist, pos) {
|
|
|
|
/* software events can be part of any hardware group */
|
|
|
|
if (pos->attr.type == PERF_TYPE_SOFTWARE)
|
|
|
|
continue;
|
|
|
|
if (pmu_type == PERF_TYPE_SOFTWARE) {
|
|
|
|
pmu_type = pos->attr.type;
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
if (pmu_type != pos->attr.type)
|
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
|
2018-08-30 06:32:27 +00:00
|
|
|
static void printout(struct perf_stat_config *config, int id, int nr,
|
|
|
|
struct perf_evsel *counter, double uval,
|
2017-12-05 14:03:05 +00:00
|
|
|
char *prefix, u64 run, u64 ena, double noise,
|
|
|
|
struct runtime_stat *st)
|
2015-11-03 01:50:21 +00:00
|
|
|
{
|
2016-01-30 17:06:49 +00:00
|
|
|
struct perf_stat_output_ctx out;
|
2016-01-30 17:06:50 +00:00
|
|
|
struct outstate os = {
|
2018-08-30 06:32:27 +00:00
|
|
|
.fh = config->output,
|
2016-02-29 22:36:22 +00:00
|
|
|
.prefix = prefix ? prefix : "",
|
|
|
|
.id = id,
|
|
|
|
.nr = nr,
|
|
|
|
.evsel = counter,
|
perf stat: Add support for metrics in interval mode
Now that we can modify the metrics printout functions easily, it's
straight forward to support metric printing for interval mode. All that
is needed is to print the time stamp on every new line. Pass the prefix
into the context and print it out.
v2: Move wrong hunk to here.
Committer note:
Before:
[root@jouet ~]# perf stat -I 1000 -e instructions,cycles sleep 1
# time counts unit events
1.000168216 538,913 instructions
1.000168216 748,765 cycles
1.000660048 153,741 instructions
1.000660048 214,066 cycles
After:
# perf stat -I 1000 -e instructions,cycles sleep 1
# time counts unit events
1.000215928 519,620 instructions # 0.69 insn per cycle
1.000215928 752,003 cycles
1.000946033 148,502 instructions # 0.33 insn per cycle
1.000946033 160,104 cycles
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1454173616-17710-3-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-01-30 17:06:50 +00:00
|
|
|
};
|
2016-01-30 17:06:49 +00:00
|
|
|
print_metric_t pm = print_metric_std;
|
2018-08-30 06:32:28 +00:00
|
|
|
new_line_t nl;
|
2015-11-03 01:50:21 +00:00
|
|
|
|
2018-08-30 06:32:31 +00:00
|
|
|
if (config->metric_only) {
|
2016-03-03 23:57:36 +00:00
|
|
|
nl = new_line_metric;
|
2018-08-30 06:32:29 +00:00
|
|
|
if (config->csv_output)
|
2016-03-03 23:57:36 +00:00
|
|
|
pm = print_metric_only_csv;
|
|
|
|
else
|
|
|
|
pm = print_metric_only;
|
|
|
|
} else
|
|
|
|
nl = new_line_std;
|
2015-11-03 01:50:21 +00:00
|
|
|
|
2018-08-30 06:32:31 +00:00
|
|
|
if (config->csv_output && !config->metric_only) {
|
perf stat: Implement CSV metrics output
Now support CSV output for metrics. With the new output callbacks this
is relatively straight forward by creating new callbacks.
This allows to easily plot metrics from CSV files.
The new line callback needs to know the number of fields to skip them
correctly
Example output before:
% perf stat -x, true
0.200687,,task-clock,200687,100.00
0,,context-switches,200687,100.00
0,,cpu-migrations,200687,100.00
40,,page-faults,200687,100.00
730871,,cycles,203601,100.00
551056,,stalled-cycles-frontend,203601,100.00
<not supported>,,stalled-cycles-backend,0,100.00
385523,,instructions,203601,100.00
78028,,branches,203601,100.00
3946,,branch-misses,203601,100.00
After:
% perf stat -x, true
.502457,,task-clock,502457,100.00,0.485,CPUs utilized
0,,context-switches,502457,100.00,0.000,K/sec
0,,cpu-migrations,502457,100.00,0.000,K/sec
45,,page-faults,502457,100.00,0.090,M/sec
644692,,cycles,509102,100.00,1.283,GHz
423470,,stalled-cycles-frontend,509102,100.00,65.69,frontend cycles idle
<not supported>,,stalled-cycles-backend,0,100.00,,,,
492701,,instructions,509102,100.00,0.76,insn per cycle
,,,,,0.86,stalled cycles per insn
97767,,branches,509102,100.00,194.578,M/sec
4788,,branch-misses,509102,100.00,4.90,of all branches
or easier readable
$ perf stat -x, -o x.csv true
$ column -s, -t x.csv
0.490635 task-clock 490635 100.00 0.489 CPUs utilized
0 context-switches 490635 100.00 0.000 K/sec
0 cpu-migrations 490635 100.00 0.000 K/sec
45 page-faults 490635 100.00 0.092 M/sec
629080 cycles 497698 100.00 1.282 GHz
409498 stalled-cycles-frontend 497698 100.00 65.09 frontend cycles idle
<not supported> stalled-cycles-backend 0 100.00
491424 instructions 497698 100.00 0.78 insn per cycle
0.83 stalled cycles per insn
97278 branches 497698 100.00 198.270 M/sec
4569 branch-misses 497698 100.00 4.70 of all branches
Two new fields are added: metric value and metric name.
v2: Split out function argument changes
v3: Reenable metrics for real.
v4: Fix wrong hunk from refactoring.
v5: Remove extra "noise" printing (Jiri), but add it to the not counted case.
Print empty metrics for not counted.
v6: Avoid outputting metric on empty format.
v7: Print metric at the end
v8: Remove extra run, ena fields
v9: Avoid extra new line for unsupported counters
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: http://lkml.kernel.org/r/1456785386-19481-3-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-02-29 22:36:21 +00:00
|
|
|
static int aggr_fields[] = {
|
|
|
|
[AGGR_GLOBAL] = 0,
|
|
|
|
[AGGR_THREAD] = 1,
|
|
|
|
[AGGR_NONE] = 1,
|
|
|
|
[AGGR_SOCKET] = 2,
|
|
|
|
[AGGR_CORE] = 2,
|
|
|
|
};
|
|
|
|
|
|
|
|
pm = print_metric_csv;
|
|
|
|
nl = new_line_csv;
|
|
|
|
os.nfields = 3;
|
2018-08-30 06:32:27 +00:00
|
|
|
os.nfields += aggr_fields[config->aggr_mode];
|
perf stat: Implement CSV metrics output
Now support CSV output for metrics. With the new output callbacks this
is relatively straight forward by creating new callbacks.
This allows to easily plot metrics from CSV files.
The new line callback needs to know the number of fields to skip them
correctly
Example output before:
% perf stat -x, true
0.200687,,task-clock,200687,100.00
0,,context-switches,200687,100.00
0,,cpu-migrations,200687,100.00
40,,page-faults,200687,100.00
730871,,cycles,203601,100.00
551056,,stalled-cycles-frontend,203601,100.00
<not supported>,,stalled-cycles-backend,0,100.00
385523,,instructions,203601,100.00
78028,,branches,203601,100.00
3946,,branch-misses,203601,100.00
After:
% perf stat -x, true
.502457,,task-clock,502457,100.00,0.485,CPUs utilized
0,,context-switches,502457,100.00,0.000,K/sec
0,,cpu-migrations,502457,100.00,0.000,K/sec
45,,page-faults,502457,100.00,0.090,M/sec
644692,,cycles,509102,100.00,1.283,GHz
423470,,stalled-cycles-frontend,509102,100.00,65.69,frontend cycles idle
<not supported>,,stalled-cycles-backend,0,100.00,,,,
492701,,instructions,509102,100.00,0.76,insn per cycle
,,,,,0.86,stalled cycles per insn
97767,,branches,509102,100.00,194.578,M/sec
4788,,branch-misses,509102,100.00,4.90,of all branches
or easier readable
$ perf stat -x, -o x.csv true
$ column -s, -t x.csv
0.490635 task-clock 490635 100.00 0.489 CPUs utilized
0 context-switches 490635 100.00 0.000 K/sec
0 cpu-migrations 490635 100.00 0.000 K/sec
45 page-faults 490635 100.00 0.092 M/sec
629080 cycles 497698 100.00 1.282 GHz
409498 stalled-cycles-frontend 497698 100.00 65.09 frontend cycles idle
<not supported> stalled-cycles-backend 0 100.00
491424 instructions 497698 100.00 0.78 insn per cycle
0.83 stalled cycles per insn
97278 branches 497698 100.00 198.270 M/sec
4569 branch-misses 497698 100.00 4.70 of all branches
Two new fields are added: metric value and metric name.
v2: Split out function argument changes
v3: Reenable metrics for real.
v4: Fix wrong hunk from refactoring.
v5: Remove extra "noise" printing (Jiri), but add it to the not counted case.
Print empty metrics for not counted.
v6: Avoid outputting metric on empty format.
v7: Print metric at the end
v8: Remove extra run, ena fields
v9: Avoid extra new line for unsupported counters
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: http://lkml.kernel.org/r/1456785386-19481-3-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-02-29 22:36:21 +00:00
|
|
|
if (counter->cgrp)
|
|
|
|
os.nfields++;
|
|
|
|
}
|
2016-02-17 22:44:00 +00:00
|
|
|
if (run == 0 || ena == 0 || counter->counts->scaled == -1) {
|
2018-08-30 06:32:31 +00:00
|
|
|
if (config->metric_only) {
|
2018-08-30 06:32:28 +00:00
|
|
|
pm(config, &os, NULL, "", "", 0);
|
2016-03-03 23:57:36 +00:00
|
|
|
return;
|
|
|
|
}
|
2018-08-30 06:32:28 +00:00
|
|
|
aggr_printout(config, counter, id, nr);
|
2016-01-30 17:06:51 +00:00
|
|
|
|
2018-08-30 06:32:27 +00:00
|
|
|
fprintf(config->output, "%*s%s",
|
2018-08-30 06:32:29 +00:00
|
|
|
config->csv_output ? 0 : 18,
|
2016-01-30 17:06:51 +00:00
|
|
|
counter->supported ? CNTR_NOT_COUNTED : CNTR_NOT_SUPPORTED,
|
2018-08-30 06:32:29 +00:00
|
|
|
config->csv_sep);
|
2016-01-30 17:06:51 +00:00
|
|
|
|
2018-04-24 18:20:11 +00:00
|
|
|
if (counter->supported) {
|
2018-08-30 06:32:42 +00:00
|
|
|
config->print_free_counters_hint = 1;
|
2018-04-24 18:20:11 +00:00
|
|
|
if (is_mixed_hw_group(counter))
|
2018-08-30 06:32:43 +00:00
|
|
|
config->print_mixed_hw_group_error = 1;
|
2018-04-24 18:20:11 +00:00
|
|
|
}
|
perf stat: Issue a HW watchdog disable hint
When using perf stat on an AMD F15h system with the default hw events
attributes, some of the events don't get counted:
Performance counter stats for 'sleep 1':
0.749208 task-clock (msec) # 0.001 CPUs utilized
1 context-switches # 0.001 M/sec
0 cpu-migrations # 0.000 K/sec
54 page-faults # 0.072 M/sec
1,122,815 cycles # 1.499 GHz
286,740 stalled-cycles-frontend # 25.54% frontend cycles idle
<not counted> stalled-cycles-backend (0.00%)
^^^^^^^^^^^^
<not counted> instructions (0.00%)
^^^^^^^^^^^^
<not counted> branches (0.00%)
<not counted> branch-misses (0.00%)
1.001550070 seconds time elapsed
The reason is that we have the HW watchdog consuming one PMU counter and
when perf tries to schedule 6 events on 6 counters and some of those
counters are constrained to only a specific subset of PMCs by the
hardware, the event scheduling fails.
So issue a hint to disable the HW watchdog around a perf stat session.
Committer note:
Testing it...
# perf stat -d usleep 1
Performance counter stats for 'usleep 1':
1.180203 task-clock (msec) # 0.490 CPUs utilized
1 context-switches # 0.847 K/sec
0 cpu-migrations # 0.000 K/sec
54 page-faults # 0.046 M/sec
184,754 cycles # 0.157 GHz
714,553 instructions # 3.87 insn per cycle
154,661 branches # 131.046 M/sec
7,247 branch-misses # 4.69% of all branches
219,984 L1-dcache-loads # 186.395 M/sec
17,600 L1-dcache-load-misses # 8.00% of all L1-dcache hits (90.16%)
<not counted> LLC-loads (0.00%)
<not counted> LLC-load-misses (0.00%)
0.002406823 seconds time elapsed
Some events weren't counted. Try disabling the NMI watchdog:
echo 0 > /proc/sys/kernel/nmi_watchdog
perf stat ...
echo 1 > /proc/sys/kernel/nmi_watchdog
#
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Robert Richter <rric@kernel.org>
Cc: Vince Weaver <vince@deater.net>
Link: http://lkml.kernel.org/r/20170211183218.ijnvb5f7ciyuunx4@pd.tnic
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-02-07 00:40:05 +00:00
|
|
|
|
2018-08-30 06:32:27 +00:00
|
|
|
fprintf(config->output, "%-*s%s",
|
2018-08-30 06:32:32 +00:00
|
|
|
config->csv_output ? 0 : config->unit_width,
|
2018-08-30 06:32:29 +00:00
|
|
|
counter->unit, config->csv_sep);
|
2016-01-30 17:06:51 +00:00
|
|
|
|
2018-08-30 06:32:27 +00:00
|
|
|
fprintf(config->output, "%*s",
|
2018-08-30 06:32:29 +00:00
|
|
|
config->csv_output ? 0 : -25,
|
2016-01-30 17:06:51 +00:00
|
|
|
perf_evsel__name(counter));
|
|
|
|
|
|
|
|
if (counter->cgrp)
|
2018-08-30 06:32:27 +00:00
|
|
|
fprintf(config->output, "%s%s",
|
2018-08-30 06:32:29 +00:00
|
|
|
config->csv_sep, counter->cgrp->name);
|
2016-01-30 17:06:51 +00:00
|
|
|
|
2018-08-30 06:32:29 +00:00
|
|
|
if (!config->csv_output)
|
2018-08-30 06:32:28 +00:00
|
|
|
pm(config, &os, NULL, NULL, "", 0);
|
2018-08-30 06:32:27 +00:00
|
|
|
print_noise(config, counter, noise);
|
|
|
|
print_running(config, run, ena);
|
2018-08-30 06:32:29 +00:00
|
|
|
if (config->csv_output)
|
2018-08-30 06:32:28 +00:00
|
|
|
pm(config, &os, NULL, NULL, "", 0);
|
2016-01-30 17:06:51 +00:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
2018-08-30 06:32:31 +00:00
|
|
|
if (!config->metric_only)
|
2018-08-30 06:32:28 +00:00
|
|
|
abs_printout(config, id, nr, counter, uval);
|
2015-06-03 14:25:56 +00:00
|
|
|
|
2016-01-30 17:06:49 +00:00
|
|
|
out.print_metric = pm;
|
|
|
|
out.new_line = nl;
|
|
|
|
out.ctx = &os;
|
perf stat: Output JSON MetricExpr metric
Add generic infrastructure to perf stat to output ratios for
"MetricExpr" entries in the event lists. Many events are more useful as
ratios than in raw form, typically some count in relation to total
ticks.
Transfer the MetricExpr information from the alias to the evsel.
We mark the events that need to be collected for MetricExpr, and also
link the events using them with a pointer. The code is careful to always
prefer the right event in the same group to minimize multiplexing
errors. At the moment only a single relation is supported.
Then add a rblist to the stat shadow code that remembers stats based on
the cpu and context.
Then finally update and retrieve and print these values similarly to the
existing hardcoded perf metrics. We use the simple expression parser
added earlier to evaluate the expression.
Normally we just output the result without further commentary, but for
--metric-only this would lead to empty columns. So for this case use the
original event as description.
There is no attempt to automatically add the MetricExpr event, if it is
missing, however we suggest it to the user, because the user tool
doesn't have enough information to reliably construct a group that is
guaranteed to schedule. So we leave that to the user.
% perf stat -a -I 1000 -e '{unc_p_clockticks,unc_p_freq_max_os_cycles}'
1.000147889 800,085,181 unc_p_clockticks
1.000147889 93,126,241 unc_p_freq_max_os_cycles # 11.6
2.000448381 800,218,217 unc_p_clockticks
2.000448381 142,516,095 unc_p_freq_max_os_cycles # 17.8
3.000639852 800,243,057 unc_p_clockticks
3.000639852 162,292,689 unc_p_freq_max_os_cycles # 20.3
% perf stat -a -I 1000 -e '{unc_p_clockticks,unc_p_freq_max_os_cycles}' --metric-only
# time freq_max_os_cycles %
1.000127077 0.9
2.000301436 0.7
3.000456379 0.0
v2: Change from DivideBy to MetricExpr
v3: Use expr__ prefix. Support more than one other event.
v4: Update description
v5: Only print warning message once for multiple PMUs.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: http://lkml.kernel.org/r/20170320201711.14142-11-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-03-20 20:17:08 +00:00
|
|
|
out.force_header = false;
|
2016-01-30 17:06:49 +00:00
|
|
|
|
2018-08-30 06:32:31 +00:00
|
|
|
if (config->csv_output && !config->metric_only) {
|
2018-08-30 06:32:27 +00:00
|
|
|
print_noise(config, counter, noise);
|
|
|
|
print_running(config, run, ena);
|
perf stat: Implement CSV metrics output
Now support CSV output for metrics. With the new output callbacks this
is relatively straight forward by creating new callbacks.
This allows to easily plot metrics from CSV files.
The new line callback needs to know the number of fields to skip them
correctly
Example output before:
% perf stat -x, true
0.200687,,task-clock,200687,100.00
0,,context-switches,200687,100.00
0,,cpu-migrations,200687,100.00
40,,page-faults,200687,100.00
730871,,cycles,203601,100.00
551056,,stalled-cycles-frontend,203601,100.00
<not supported>,,stalled-cycles-backend,0,100.00
385523,,instructions,203601,100.00
78028,,branches,203601,100.00
3946,,branch-misses,203601,100.00
After:
% perf stat -x, true
.502457,,task-clock,502457,100.00,0.485,CPUs utilized
0,,context-switches,502457,100.00,0.000,K/sec
0,,cpu-migrations,502457,100.00,0.000,K/sec
45,,page-faults,502457,100.00,0.090,M/sec
644692,,cycles,509102,100.00,1.283,GHz
423470,,stalled-cycles-frontend,509102,100.00,65.69,frontend cycles idle
<not supported>,,stalled-cycles-backend,0,100.00,,,,
492701,,instructions,509102,100.00,0.76,insn per cycle
,,,,,0.86,stalled cycles per insn
97767,,branches,509102,100.00,194.578,M/sec
4788,,branch-misses,509102,100.00,4.90,of all branches
or easier readable
$ perf stat -x, -o x.csv true
$ column -s, -t x.csv
0.490635 task-clock 490635 100.00 0.489 CPUs utilized
0 context-switches 490635 100.00 0.000 K/sec
0 cpu-migrations 490635 100.00 0.000 K/sec
45 page-faults 490635 100.00 0.092 M/sec
629080 cycles 497698 100.00 1.282 GHz
409498 stalled-cycles-frontend 497698 100.00 65.09 frontend cycles idle
<not supported> stalled-cycles-backend 0 100.00
491424 instructions 497698 100.00 0.78 insn per cycle
0.83 stalled cycles per insn
97278 branches 497698 100.00 198.270 M/sec
4569 branch-misses 497698 100.00 4.70 of all branches
Two new fields are added: metric value and metric name.
v2: Split out function argument changes
v3: Reenable metrics for real.
v4: Fix wrong hunk from refactoring.
v5: Remove extra "noise" printing (Jiri), but add it to the not counted case.
Print empty metrics for not counted.
v6: Avoid outputting metric on empty format.
v7: Print metric at the end
v8: Remove extra run, ena fields
v9: Avoid extra new line for unsupported counters
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: http://lkml.kernel.org/r/1456785386-19481-3-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-02-29 22:36:21 +00:00
|
|
|
}
|
|
|
|
|
2018-08-30 06:32:28 +00:00
|
|
|
perf_stat__print_shadow_stats(config, counter, uval,
|
2018-08-30 06:32:38 +00:00
|
|
|
first_shadow_cpu(config, counter, id),
|
2017-12-05 14:03:05 +00:00
|
|
|
&out, &metric_events, st);
|
2018-08-30 06:32:31 +00:00
|
|
|
if (!config->csv_output && !config->metric_only) {
|
2018-08-30 06:32:27 +00:00
|
|
|
print_noise(config, counter, noise);
|
|
|
|
print_running(config, run, ena);
|
perf stat: Implement CSV metrics output
Now support CSV output for metrics. With the new output callbacks this
is relatively straight forward by creating new callbacks.
This allows to easily plot metrics from CSV files.
The new line callback needs to know the number of fields to skip them
correctly
Example output before:
% perf stat -x, true
0.200687,,task-clock,200687,100.00
0,,context-switches,200687,100.00
0,,cpu-migrations,200687,100.00
40,,page-faults,200687,100.00
730871,,cycles,203601,100.00
551056,,stalled-cycles-frontend,203601,100.00
<not supported>,,stalled-cycles-backend,0,100.00
385523,,instructions,203601,100.00
78028,,branches,203601,100.00
3946,,branch-misses,203601,100.00
After:
% perf stat -x, true
.502457,,task-clock,502457,100.00,0.485,CPUs utilized
0,,context-switches,502457,100.00,0.000,K/sec
0,,cpu-migrations,502457,100.00,0.000,K/sec
45,,page-faults,502457,100.00,0.090,M/sec
644692,,cycles,509102,100.00,1.283,GHz
423470,,stalled-cycles-frontend,509102,100.00,65.69,frontend cycles idle
<not supported>,,stalled-cycles-backend,0,100.00,,,,
492701,,instructions,509102,100.00,0.76,insn per cycle
,,,,,0.86,stalled cycles per insn
97767,,branches,509102,100.00,194.578,M/sec
4788,,branch-misses,509102,100.00,4.90,of all branches
or easier readable
$ perf stat -x, -o x.csv true
$ column -s, -t x.csv
0.490635 task-clock 490635 100.00 0.489 CPUs utilized
0 context-switches 490635 100.00 0.000 K/sec
0 cpu-migrations 490635 100.00 0.000 K/sec
45 page-faults 490635 100.00 0.092 M/sec
629080 cycles 497698 100.00 1.282 GHz
409498 stalled-cycles-frontend 497698 100.00 65.09 frontend cycles idle
<not supported> stalled-cycles-backend 0 100.00
491424 instructions 497698 100.00 0.78 insn per cycle
0.83 stalled cycles per insn
97278 branches 497698 100.00 198.270 M/sec
4569 branch-misses 497698 100.00 4.70 of all branches
Two new fields are added: metric value and metric name.
v2: Split out function argument changes
v3: Reenable metrics for real.
v4: Fix wrong hunk from refactoring.
v5: Remove extra "noise" printing (Jiri), but add it to the not counted case.
Print empty metrics for not counted.
v6: Avoid outputting metric on empty format.
v7: Print metric at the end
v8: Remove extra run, ena fields
v9: Avoid extra new line for unsupported counters
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: http://lkml.kernel.org/r/1456785386-19481-3-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-02-29 22:36:21 +00:00
|
|
|
}
|
2015-06-03 14:25:56 +00:00
|
|
|
}
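printout() picks its output callbacks once up front (print_metric_std, print_metric_only, print_metric_csv, plus the matching new-line callback) and hands them to the shadow-stat code through struct perf_stat_output_ctx, so the metric computation never needs to know which format is active. A minimal sketch of that dispatch idea, using simplified hypothetical callback signatures rather than the real print_metric_t:

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-in for print_metric_t; the real callbacks also carry
 * a config pointer and an output context. */
typedef void (*print_metric_fn)(FILE *out, const char *fmt,
				const char *unit, double val);

static void metric_std(FILE *out, const char *fmt, const char *unit, double val)
{
	fprintf(out, " # ");
	fprintf(out, fmt, val);
	fprintf(out, " %s", unit);
}

static void metric_csv(FILE *out, const char *fmt, const char *unit, double val)
{
	fputc(',', out);
	fprintf(out, fmt, val);
	fprintf(out, ",%s", unit);
}

static void emit_ipc(FILE *out, bool csv, double ipc)
{
	print_metric_fn pm = csv ? metric_csv : metric_std;	/* choose once */

	pm(out, "%.2f", "insn per cycle", ipc);
	fputc('\n', out);
}
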
perf stat: Support metrics in --per-core/socket mode
Enable metrics printing in --per-core / --per-socket mode. We need to
save the shadow metrics in a unique place. Always use the first CPU in
the aggregation. Then use the same CPU to retrieve the shadow value
later.
Example output:
% perf stat --per-core -a ./BC1s
Performance counter stats for 'system wide':
S0-C0 2 2966.020381 task-clock (msec) # 2.004 CPUs utilized (100.00%)
S0-C0 2 49 context-switches # 0.017 K/sec (100.00%)
S0-C0 2 4 cpu-migrations # 0.001 K/sec (100.00%)
S0-C0 2 467 page-faults # 0.157 K/sec
S0-C0 2 4,599,061,773 cycles # 1.551 GHz (100.00%)
S0-C0 2 9,755,886,883 instructions # 2.12 insn per cycle (100.00%)
S0-C0 2 1,906,272,125 branches # 642.704 M/sec (100.00%)
S0-C0 2 81,180,867 branch-misses # 4.26% of all branches
S0-C1 2 2965.995373 task-clock (msec) # 2.003 CPUs utilized (100.00%)
S0-C1 2 62 context-switches # 0.021 K/sec (100.00%)
S0-C1 2 8 cpu-migrations # 0.003 K/sec (100.00%)
S0-C1 2 281 page-faults # 0.095 K/sec
S0-C1 2 6,347,290 cycles # 0.002 GHz (100.00%)
S0-C1 2 4,654,156 instructions # 0.73 insn per cycle (100.00%)
S0-C1 2 947,121 branches # 0.319 M/sec (100.00%)
S0-C1 2 37,322 branch-misses # 3.94% of all branches
1.480409747 seconds time elapsed
v2: Rebase to older patches
v3: Document shadow cpus. Fix aggr_get_id argument. Fix -A shadows (Jiri)
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: http://lkml.kernel.org/r/1456785386-19481-4-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-02-29 22:36:22 +00:00

static void aggr_update_shadow(struct perf_stat_config *config,
			       struct perf_evlist *evlist)
{
	int cpu, s2, id, s;
	u64 val;
	struct perf_evsel *counter;

	for (s = 0; s < aggr_map->nr; s++) {
		id = aggr_map->map[s];
		evlist__for_each_entry(evlist, counter) {
			val = 0;
			for (cpu = 0; cpu < perf_evsel__nr_cpus(counter); cpu++) {
				s2 = aggr_get_id(evsel_list->cpus, cpu);
				if (s2 != id)
					continue;
				val += perf_counts(counter->counts, cpu, 0)->val;
			}
			perf_stat__update_shadow_stats(counter, val,
					first_shadow_cpu(config, counter, id),
					&rt_stat);
		}
	}
}
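The per-core/socket change keys shadow metrics by the first CPU of each aggregate, both when storing (aggr_update_shadow above) and when reading back (first_shadow_cpu in printout). A hedged sketch of just that selection rule, with aggr_id_of() as an assumed stand-in for aggr_get_id():

/* Return the lowest CPU whose aggregation id matches 'id', or -1 if no
 * CPU belongs to this aggregate. */
static int first_cpu_of(int (*aggr_id_of)(int cpu), int ncpus, int id)
{
	int cpu;

	for (cpu = 0; cpu < ncpus; cpu++)
		if (aggr_id_of(cpu) == id)
			return cpu;
	return -1;
}
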
perf stat: Fix duplicate PMU name for interval print
The PMU name is printed repeatedly for interval print, for example:
perf stat --no-merge -e 'unc_m_clockticks' -a -I 1000
# time counts unit events
1.001053069 243,702,144 unc_m_clockticks [uncore_imc_4]
1.001053069 244,268,304 unc_m_clockticks [uncore_imc_2]
1.001053069 244,427,386 unc_m_clockticks [uncore_imc_0]
1.001053069 244,583,760 unc_m_clockticks [uncore_imc_5]
1.001053069 244,738,971 unc_m_clockticks [uncore_imc_3]
1.001053069 244,880,309 unc_m_clockticks [uncore_imc_1]
2.002024821 240,818,200 unc_m_clockticks [uncore_imc_4] [uncore_imc_4]
2.002024821 240,767,812 unc_m_clockticks [uncore_imc_2] [uncore_imc_2]
2.002024821 240,764,215 unc_m_clockticks [uncore_imc_0] [uncore_imc_0]
2.002024821 240,759,504 unc_m_clockticks [uncore_imc_5] [uncore_imc_5]
2.002024821 240,755,992 unc_m_clockticks [uncore_imc_3] [uncore_imc_3]
2.002024821 240,750,403 unc_m_clockticks [uncore_imc_1] [uncore_imc_1]
For each print, the PMU name is unconditionally appended to
counter->name. We need to check counter->name first: if the PMU name is
already appended, do nothing.
Committer notes:
Add and use perf_evsel->uniquified_name bool instead of doing the more
expensive strstr(event->name, pmu->name).
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Agustin Vega-Frias <agustinv@codeaurora.org>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Ganapatrao Kulkarni <ganapatrao.kulkarni@cavium.com>
Cc: Jin Yao <yao.jin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Shaokun Zhang <zhangshaokun@hisilicon.com>
Cc: Will Deacon <will.deacon@arm.com>
Fixes: 8c5421c016a4 ("perf pmu: Display pmu name when printing unmerged events in stat")
Link: http://lkml.kernel.org/r/1524594014-79243-5-git-send-email-kan.liang@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-04-24 18:20:14 +00:00

static void uniquify_event_name(struct perf_evsel *counter)
{
	char *new_name;
	char *config;

	if (counter->uniquified_name ||
	    !counter->pmu_name || !strncmp(counter->name, counter->pmu_name,
					   strlen(counter->pmu_name)))
		return;

	config = strchr(counter->name, '/');
	if (config) {
		if (asprintf(&new_name,
			     "%s%s", counter->pmu_name, config) > 0) {
			free(counter->name);
			counter->name = new_name;
		}
	} else {
		if (asprintf(&new_name,
			     "%s [%s]", counter->name, counter->pmu_name) > 0) {
			free(counter->name);
			counter->name = new_name;
		}
	}

	counter->uniquified_name = true;
}
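For illustration, the two asprintf() branches above produce the following names, assuming a counter whose pmu_name is "uncore_imc_0" (the inputs are hypothetical):

/*
 *   "uncore/event=0x1/"  ->  "uncore_imc_0/event=0x1/"          (name has '/')
 *   "unc_m_clockticks"   ->  "unc_m_clockticks [uncore_imc_0]"  (no '/')
 *
 * The uniquified_name flag then keeps later interval prints from
 * appending the "[pmu]" suffix a second time.
 */
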
perf stat: Collapse identically named events
The uncore PMU has a lot of duplicated PMUs for different subsystems.
When expanding an uncore alias we usually end up with a large
number of identically named aliases, which makes perf stat
output difficult to read.
Automatically sum them up in perf stat, unless --no-merge is specified.
This can be the default because only the uncores generally have
duplicated aliases; other PMUs have unique names.
Before:
% perf stat --no-merge -a -e unc_c_llc_lookup.any sleep 1
Performance counter stats for 'system wide':
694,976 Bytes unc_c_llc_lookup.any
706,304 Bytes unc_c_llc_lookup.any
956,608 Bytes unc_c_llc_lookup.any
782,720 Bytes unc_c_llc_lookup.any
605,696 Bytes unc_c_llc_lookup.any
442,816 Bytes unc_c_llc_lookup.any
659,328 Bytes unc_c_llc_lookup.any
509,312 Bytes unc_c_llc_lookup.any
263,936 Bytes unc_c_llc_lookup.any
592,448 Bytes unc_c_llc_lookup.any
672,448 Bytes unc_c_llc_lookup.any
608,640 Bytes unc_c_llc_lookup.any
641,024 Bytes unc_c_llc_lookup.any
856,896 Bytes unc_c_llc_lookup.any
808,832 Bytes unc_c_llc_lookup.any
684,864 Bytes unc_c_llc_lookup.any
710,464 Bytes unc_c_llc_lookup.any
538,304 Bytes unc_c_llc_lookup.any
1.002577660 seconds time elapsed
After:
% perf stat -a -e unc_c_llc_lookup.any sleep 1
Performance counter stats for 'system wide':
2,685,120 Bytes unc_c_llc_lookup.any
1.002648032 seconds time elapsed
v2: Split collect_aliases. Rename alias flag.
v3: Make sure unsupported/not counted is always printed.
v4: Factor out callback change into separate patch.
v5: Move check for bad results here
Move merged check into collect_data
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: http://lkml.kernel.org/r/20170320201711.14142-3-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-03-20 20:17:00 +00:00

perf stat: Get rid of extra clock display function
There's no reason to have a separate function to display clock events.
Its only purpose was to convert the nanosecond value into microseconds.
We do that now in generic code, if the unit and scale values are
properly set, which this patch does for clock events.
The output differs in the unit field being displayed in its own column
rather than having it added as a suffix of the event name. Plus the
value is rounded to 2 decimal places, as for any other event.
Before:
# perf stat -e cpu-clock,task-clock -C 0 sleep 3
Performance counter stats for 'CPU(s) 0':
3001.123137 cpu-clock (msec) # 1.000 CPUs utilized
3001.133250 task-clock (msec) # 1.000 CPUs utilized
3.001159813 seconds time elapsed
Now:
# perf stat -e cpu-clock,task-clock -C 0 sleep 3
Performance counter stats for 'CPU(s) 0':
3,001.05 msec cpu-clock # 1.000 CPUs utilized
3,001.05 msec task-clock # 1.000 CPUs utilized
3.001077794 seconds time elapsed
There's a small difference in CSV output, as we now output the unit
field, which was empty before. It's in the proper spot, so there's no
compatibility issue.
Before:
# perf stat -e cpu-clock,task-clock -C 0 -x, sleep 3
3001.065177,,cpu-clock,3001064187,100.00,1.000,CPUs utilized
3001.077085,,task-clock,3001077085,100.00,1.000,CPUs utilized
# perf stat -e cpu-clock,task-clock -C 0 -x, sleep 3
3000.80,msec,cpu-clock,3000799026,100.00,1.000,CPUs utilized
3000.80,msec,task-clock,3000799550,100.00,1.000,CPUs utilized
Add perf_evsel__is_clock to replace nsec_counter.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20180720110036.32251-2-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-07-20 11:00:34 +00:00

static void collect_all_aliases(struct perf_evsel *counter,
			    void (*cb)(struct perf_evsel *counter, void *data,
				       bool first),
			    void *data)
{
	struct perf_evlist *evlist = counter->evlist;
	struct perf_evsel *alias;

	alias = list_prepare_entry(counter, &(evlist->entries), node);
	list_for_each_entry_continue (alias, &evlist->entries, node) {
		if (strcmp(perf_evsel__name(alias), perf_evsel__name(counter)) ||
		    alias->scale != counter->scale ||
		    alias->cgrp != counter->cgrp ||
		    strcmp(alias->unit, counter->unit) ||
		    perf_evsel__is_clock(alias) != perf_evsel__is_clock(counter))
			break;
		alias->merged_stat = true;
		cb(alias, data, false);
	}
}
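collect_all_aliases() relies on the evlist keeping expanded uncore aliases adjacent: list_prepare_entry() positions the cursor on 'counter' and list_for_each_entry_continue() walks the entries after it until one stops matching. The same shape on a plain singly linked list, as a hedged standalone sketch (struct ev is an invented stand-in for the evsel):

#include <stdbool.h>
#include <string.h>

/* Minimal stand-in for the evlist: a singly linked list of events. */
struct ev {
	const char *name;
	bool merged;
	struct ev *next;
};

/* Visit every later entry matching 'counter' by name, marking it merged
 * so the caller skips it -- the same walk collect_all_aliases() does,
 * continuing from 'counter' onward. */
static void collect_aliases(struct ev *counter,
			    void (*cb)(struct ev *ev, void *data, bool first),
			    void *data)
{
	for (struct ev *alias = counter->next; alias; alias = alias->next) {
		if (strcmp(alias->name, counter->name))
			break;	/* aliases are adjacent; stop at first mismatch */
		alias->merged = true;
		cb(alias, data, false);
	}
}
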
static bool collect_data(struct perf_evsel *counter,
			    void (*cb)(struct perf_evsel *counter, void *data,
				       bool first),
			    void *data)
{
	if (counter->merged_stat)
		return false;
	cb(counter, data, true);
	if (no_merge)
		uniquify_event_name(counter);
	else if (counter->auto_merge_stats)
		collect_all_aliases(counter, cb, data);
	return true;
}

struct aggr_data {
	u64 ena, run, val;
	int id;
	int nr;
	int cpu;
};

static void aggr_cb(struct perf_evsel *counter, void *data, bool first)
{
	struct aggr_data *ad = data;
	int cpu, s2;

	for (cpu = 0; cpu < perf_evsel__nr_cpus(counter); cpu++) {
		struct perf_counts_values *counts;

		s2 = aggr_get_id(perf_evsel__cpus(counter), cpu);
		if (s2 != ad->id)
			continue;
		if (first)
			ad->nr++;
		counts = perf_counts(counter->counts, cpu, 0);
		/*
		 * When any result is bad, zero them all to give
		 * consistent output in interval mode.
		 */
		if (counts->ena == 0 || counts->run == 0 ||
		    counter->counts->scaled == -1) {
			ad->ena = 0;
			ad->run = 0;
			break;
		}
		ad->val += counts->val;
		ad->ena += counts->ena;
		ad->run += counts->run;
	}
}
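aggr_cb() folds per-CPU counts into one aggregate and deliberately poisons the whole aggregate (ena = run = 0) as soon as one member is bad, so interval output stays consistent. A standalone sketch of that accumulation rule, with cpu_id[] as an assumed cpu-to-aggregate map:

#include <stddef.h>
#include <stdint.h>

struct counts { uint64_t val, ena, run; };

/* Sum val/ena/run over the CPUs belonging to aggregate 'id'; on the
 * first bad member, zero ena/run for the whole aggregate and stop, so
 * the caller prints it as not counted. */
static void aggregate(const struct counts *c, const int *cpu_id,
		      size_t ncpus, int id, struct counts *out)
{
	for (size_t cpu = 0; cpu < ncpus; cpu++) {
		if (cpu_id[cpu] != id)
			continue;
		if (c[cpu].ena == 0 || c[cpu].run == 0) {
			out->ena = out->run = 0;	/* poison the aggregate */
			break;
		}
		out->val += c[cpu].val;
		out->ena += c[cpu].ena;
		out->run += c[cpu].run;
	}
}
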
static void print_aggr(struct perf_stat_config *config,
		       struct perf_evlist *evlist,
		       char *prefix)
{
	bool metric_only = config->metric_only;
	FILE *output = config->output;
	struct perf_evsel *counter;
	int s, id, nr;
	double uval;
	u64 ena, run, val;
	bool first;

	if (!(aggr_map || aggr_get_id))
		return;

	aggr_update_shadow(config, evlist);

	/*
	 * With metric_only everything is on a single line.
	 * Without it, each counter has its own line.
	 */
	for (s = 0; s < aggr_map->nr; s++) {
		struct aggr_data ad;
		if (prefix && metric_only)
			fprintf(output, "%s", prefix);

		ad.id = id = aggr_map->map[s];
		first = true;
		evlist__for_each_entry(evlist, counter) {
			if (is_duration_time(counter))
				continue;

			ad.val = ad.ena = ad.run = 0;
			ad.nr = 0;
			if (!collect_data(counter, aggr_cb, &ad))
				continue;
			nr = ad.nr;
			ena = ad.ena;
			run = ad.run;
			val = ad.val;
			if (first && metric_only) {
				first = false;
				aggr_printout(config, counter, id, nr);
			}
			if (prefix && !metric_only)
				fprintf(output, "%s", prefix);

			uval = val * counter->scale;
			printout(config, id, nr, counter, uval, prefix,
				 run, ena, 1.0, &rt_stat);
			if (!metric_only)
				fputc('\n', output);
		}
		if (metric_only)
			fputc('\n', output);
	}
}
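print_aggr() applies a simple line discipline: with metric_only, everything for an aggregate goes on a single line (the aggregate header printed once, before the first counter); otherwise every counter gets its own line. A rough, hypothetical sketch of just that layout logic, with placeholder values standing in for real counts:

#include <stdbool.h>
#include <stdio.h>

/* Illustrative only: the header/newline placement rule, not the real
 * printout path. */
static void print_lines(FILE *out, int naggr, int ncounters, bool metric_only)
{
	for (int s = 0; s < naggr; s++) {
		for (int c = 0; c < ncounters; c++) {
			if (metric_only && c == 0)
				fprintf(out, "S0-C%d", s);	/* header once */
			if (metric_only)
				fprintf(out, "  %6.2f", 0.0);	/* metric cell */
			else
				fprintf(out, "S0-C%d counter%d %6.2f\n",
					s, c, 0.0);		/* own line */
		}
		if (metric_only)
			fputc('\n', out);	/* close the aggregate line */
	}
}
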
perf stat: Resort '--per-thread' result
There are many threads reported if we enable '--per-thread'
globally.
1. Most of the threads are not counted or have a count of 0.
This patch removes these threads.
2. We also re-sort the threads in the display according to the
counting value, which makes it easy for users to spot the hottest
threads.
For example, the new results would be:
root@skl:/tmp# perf stat --per-thread
^C
Performance counter stats for 'system wide':
perf-24165 4.302433 cpu-clock (msec) # 0.001 CPUs utilized
vmstat-23127 1.562215 cpu-clock (msec) # 0.000 CPUs utilized
irqbalance-2780 0.827851 cpu-clock (msec) # 0.000 CPUs utilized
sshd-23111 0.278308 cpu-clock (msec) # 0.000 CPUs utilized
thermald-2841 0.230880 cpu-clock (msec) # 0.000 CPUs utilized
sshd-23058 0.207306 cpu-clock (msec) # 0.000 CPUs utilized
kworker/0:2-19991 0.133983 cpu-clock (msec) # 0.000 CPUs utilized
kworker/u16:1-18249 0.125636 cpu-clock (msec) # 0.000 CPUs utilized
rcu_sched-8 0.085533 cpu-clock (msec) # 0.000 CPUs utilized
kworker/u16:2-23146 0.077139 cpu-clock (msec) # 0.000 CPUs utilized
gmain-2700 0.041789 cpu-clock (msec) # 0.000 CPUs utilized
kworker/4:1-15354 0.028370 cpu-clock (msec) # 0.000 CPUs utilized
kworker/6:0-17528 0.023895 cpu-clock (msec) # 0.000 CPUs utilized
kworker/4:1H-1887 0.013209 cpu-clock (msec) # 0.000 CPUs utilized
kworker/5:2-31362 0.011627 cpu-clock (msec) # 0.000 CPUs utilized
watchdog/0-11 0.010892 cpu-clock (msec) # 0.000 CPUs utilized
kworker/3:2-12870 0.010220 cpu-clock (msec) # 0.000 CPUs utilized
ksoftirqd/0-7 0.008869 cpu-clock (msec) # 0.000 CPUs utilized
watchdog/1-14 0.008476 cpu-clock (msec) # 0.000 CPUs utilized
watchdog/7-50 0.002944 cpu-clock (msec) # 0.000 CPUs utilized
watchdog/3-26 0.002893 cpu-clock (msec) # 0.000 CPUs utilized
watchdog/4-32 0.002759 cpu-clock (msec) # 0.000 CPUs utilized
watchdog/2-20 0.002429 cpu-clock (msec) # 0.000 CPUs utilized
watchdog/6-44 0.001491 cpu-clock (msec) # 0.000 CPUs utilized
watchdog/5-38 0.001477 cpu-clock (msec) # 0.000 CPUs utilized
rcu_sched-8 10 context-switches # 0.117 M/sec
kworker/u16:1-18249 7 context-switches # 0.056 M/sec
sshd-23111 4 context-switches # 0.014 M/sec
vmstat-23127 4 context-switches # 0.003 M/sec
perf-24165 4 context-switches # 0.930 K/sec
kworker/0:2-19991 3 context-switches # 0.022 M/sec
kworker/u16:2-23146 3 context-switches # 0.039 M/sec
kworker/4:1-15354 2 context-switches # 0.070 M/sec
kworker/6:0-17528 2 context-switches # 0.084 M/sec
sshd-23058 2 context-switches # 0.010 M/sec
ksoftirqd/0-7 1 context-switches # 0.113 M/sec
watchdog/0-11 1 context-switches # 0.092 M/sec
watchdog/1-14 1 context-switches # 0.118 M/sec
watchdog/2-20 1 context-switches # 0.412 M/sec
watchdog/3-26 1 context-switches # 0.346 M/sec
watchdog/4-32 1 context-switches # 0.362 M/sec
watchdog/5-38 1 context-switches # 0.677 M/sec
watchdog/6-44 1 context-switches # 0.671 M/sec
watchdog/7-50 1 context-switches # 0.340 M/sec
kworker/4:1H-1887 1 context-switches # 0.076 M/sec
thermald-2841 1 context-switches # 0.004 M/sec
gmain-2700 1 context-switches # 0.024 M/sec
irqbalance-2780 1 context-switches # 0.001 M/sec
kworker/3:2-12870 1 context-switches # 0.098 M/sec
kworker/5:2-31362 1 context-switches # 0.086 M/sec
kworker/u16:1-18249 2 cpu-migrations # 0.016 M/sec
kworker/u16:2-23146 2 cpu-migrations # 0.026 M/sec
rcu_sched-8 1 cpu-migrations # 0.012 M/sec
sshd-23058 1 cpu-migrations # 0.005 M/sec
perf-24165 8,833,385 cycles # 2.053 GHz
vmstat-23127 1,702,699 cycles # 1.090 GHz
irqbalance-2780 739,847 cycles # 0.894 GHz
sshd-23111 269,506 cycles # 0.968 GHz
thermald-2841 204,556 cycles # 0.886 GHz
sshd-23058 158,780 cycles # 0.766 GHz
kworker/0:2-19991 112,981 cycles # 0.843 GHz
kworker/u16:1-18249 100,926 cycles # 0.803 GHz
rcu_sched-8 74,024 cycles # 0.865 GHz
kworker/u16:2-23146 55,984 cycles # 0.726 GHz
gmain-2700 34,278 cycles # 0.820 GHz
kworker/4:1-15354 20,665 cycles # 0.728 GHz
kworker/6:0-17528 16,445 cycles # 0.688 GHz
kworker/5:2-31362 9,492 cycles # 0.816 GHz
watchdog/3-26 8,695 cycles # 3.006 GHz
kworker/4:1H-1887 8,238 cycles # 0.624 GHz
watchdog/4-32 7,580 cycles # 2.747 GHz
kworker/3:2-12870 7,306 cycles # 0.715 GHz
watchdog/2-20 7,274 cycles # 2.995 GHz
watchdog/0-11 6,988 cycles # 0.642 GHz
ksoftirqd/0-7 6,376 cycles # 0.719 GHz
watchdog/1-14 5,340 cycles # 0.630 GHz
watchdog/5-38 4,061 cycles # 2.749 GHz
watchdog/6-44 3,976 cycles # 2.667 GHz
watchdog/7-50 3,418 cycles # 1.161 GHz
vmstat-23127 2,511,699 instructions # 1.48 insn per cycle
perf-24165 1,829,908 instructions # 0.21 insn per cycle
irqbalance-2780 1,190,204 instructions # 1.61 insn per cycle
thermald-2841 143,544 instructions # 0.70 insn per cycle
sshd-23111 128,138 instructions # 0.48 insn per cycle
sshd-23058 57,654 instructions # 0.36 insn per cycle
rcu_sched-8 44,063 instructions # 0.60 insn per cycle
kworker/u16:1-18249 42,551 instructions # 0.42 insn per cycle
kworker/0:2-19991 25,873 instructions # 0.23 insn per cycle
kworker/u16:2-23146 21,407 instructions # 0.38 insn per cycle
gmain-2700 13,691 instructions # 0.40 insn per cycle
kworker/4:1-15354 12,964 instructions # 0.63 insn per cycle
kworker/6:0-17528 10,034 instructions # 0.61 insn per cycle
kworker/5:2-31362 5,203 instructions # 0.55 insn per cycle
kworker/3:2-12870 4,866 instructions # 0.67 insn per cycle
kworker/4:1H-1887 3,586 instructions # 0.44 insn per cycle
ksoftirqd/0-7 3,463 instructions # 0.54 insn per cycle
watchdog/0-11 3,135 instructions # 0.45 insn per cycle
watchdog/1-14 3,135 instructions # 0.59 insn per cycle
watchdog/2-20 3,135 instructions # 0.43 insn per cycle
watchdog/3-26 3,135 instructions # 0.36 insn per cycle
watchdog/4-32 3,135 instructions # 0.41 insn per cycle
watchdog/5-38 3,135 instructions # 0.77 insn per cycle
watchdog/6-44 3,135 instructions # 0.79 insn per cycle
watchdog/7-50 3,135 instructions # 0.92 insn per cycle
vmstat-23127 539,181 branches # 345.139 M/sec
perf-24165 375,364 branches # 87.245 M/sec
irqbalance-2780 262,092 branches # 316.593 M/sec
thermald-2841 31,611 branches # 136.915 M/sec
sshd-23111 21,874 branches # 78.596 M/sec
sshd-23058 10,682 branches # 51.528 M/sec
rcu_sched-8 8,693 branches # 101.633 M/sec
kworker/u16:1-18249 7,891 branches # 62.808 M/sec
kworker/0:2-19991 5,761 branches # 42.998 M/sec
kworker/u16:2-23146 4,099 branches # 53.138 M/sec
kworker/4:1-15354 2,755 branches # 97.110 M/sec
gmain-2700 2,638 branches # 63.127 M/sec
kworker/6:0-17528 2,216 branches # 92.739 M/sec
kworker/5:2-31362 1,132 branches # 97.360 M/sec
kworker/3:2-12870 1,081 branches # 105.773 M/sec
kworker/4:1H-1887 725 branches # 54.887 M/sec
ksoftirqd/0-7 707 branches # 79.716 M/sec
watchdog/0-11 652 branches # 59.860 M/sec
watchdog/1-14 652 branches # 76.923 M/sec
watchdog/2-20 652 branches # 268.423 M/sec
watchdog/3-26 652 branches # 225.372 M/sec
watchdog/4-32 652 branches # 236.318 M/sec
watchdog/5-38 652 branches # 441.435 M/sec
watchdog/6-44 652 branches # 437.290 M/sec
watchdog/7-50 652 branches # 221.467 M/sec
vmstat-23127 8,960 branch-misses # 1.66% of all branches
irqbalance-2780 3,047 branch-misses # 1.16% of all branches
perf-24165 2,876 branch-misses # 0.77% of all branches
sshd-23111 1,843 branch-misses # 8.43% of all branches
thermald-2841 1,444 branch-misses # 4.57% of all branches
sshd-23058 1,379 branch-misses # 12.91% of all branches
kworker/u16:1-18249 982 branch-misses # 12.44% of all branches
rcu_sched-8 893 branch-misses # 10.27% of all branches
kworker/u16:2-23146 578 branch-misses # 14.10% of all branches
kworker/0:2-19991 376 branch-misses # 6.53% of all branches
gmain-2700 280 branch-misses # 10.61% of all branches
kworker/6:0-17528 196 branch-misses # 8.84% of all branches
kworker/4:1-15354 187 branch-misses # 6.79% of all branches
kworker/5:2-31362 123 branch-misses # 10.87% of all branches
watchdog/0-11 95 branch-misses # 14.57% of all branches
watchdog/4-32 89 branch-misses # 13.65% of all branches
kworker/3:2-12870 80 branch-misses # 7.40% of all branches
watchdog/3-26 61 branch-misses # 9.36% of all branches
kworker/4:1H-1887 60 branch-misses # 8.28% of all branches
watchdog/2-20 52 branch-misses # 7.98% of all branches
ksoftirqd/0-7 47 branch-misses # 6.65% of all branches
watchdog/1-14 46 branch-misses # 7.06% of all branches
watchdog/7-50 13 branch-misses # 1.99% of all branches
watchdog/5-38 8 branch-misses # 1.23% of all branches
watchdog/6-44 7 branch-misses # 1.07% of all branches
3.695150786 seconds time elapsed
root@skl:/tmp# perf stat --per-thread -M IPC,CPI
^C
Performance counter stats for 'system wide':
vmstat-23127 2,000,783 inst_retired.any # 1.5 IPC
thermald-2841 1,472,670 inst_retired.any # 1.3 IPC
sshd-23111 977,374 inst_retired.any # 1.2 IPC
perf-24163 483,779 inst_retired.any # 0.2 IPC
gmain-2700 341,213 inst_retired.any # 0.9 IPC
sshd-23058 148,891 inst_retired.any # 0.8 IPC
rtkit-daemon-3288 71,210 inst_retired.any # 0.7 IPC
kworker/u16:1-18249 39,562 inst_retired.any # 0.3 IPC
rcu_sched-8 14,474 inst_retired.any # 0.8 IPC
kworker/0:2-19991 7,659 inst_retired.any # 0.2 IPC
kworker/4:1-15354 6,714 inst_retired.any # 0.8 IPC
rtkit-daemon-3289 4,839 inst_retired.any # 0.3 IPC
kworker/6:0-17528 3,321 inst_retired.any # 0.6 IPC
kworker/5:2-31362 3,215 inst_retired.any # 0.5 IPC
kworker/7:2-23145 3,173 inst_retired.any # 0.7 IPC
kworker/4:1H-1887 1,719 inst_retired.any # 0.3 IPC
watchdog/0-11 1,479 inst_retired.any # 0.3 IPC
watchdog/1-14 1,479 inst_retired.any # 0.3 IPC
watchdog/2-20 1,479 inst_retired.any # 0.4 IPC
watchdog/3-26 1,479 inst_retired.any # 0.4 IPC
watchdog/4-32 1,479 inst_retired.any # 0.3 IPC
watchdog/5-38 1,479 inst_retired.any # 0.3 IPC
watchdog/6-44 1,479 inst_retired.any # 0.7 IPC
watchdog/7-50 1,479 inst_retired.any # 0.7 IPC
kworker/u16:2-23146 1,408 inst_retired.any # 0.5 IPC
perf-24163 2,249,872 cpu_clk_unhalted.thread
vmstat-23127 1,352,455 cpu_clk_unhalted.thread
thermald-2841 1,161,140 cpu_clk_unhalted.thread
sshd-23111 807,827 cpu_clk_unhalted.thread
gmain-2700 375,535 cpu_clk_unhalted.thread
sshd-23058 194,071 cpu_clk_unhalted.thread
kworker/u16:1-18249 114,306 cpu_clk_unhalted.thread
rtkit-daemon-3288 103,547 cpu_clk_unhalted.thread
kworker/0:2-19991 46,550 cpu_clk_unhalted.thread
rcu_sched-8 18,855 cpu_clk_unhalted.thread
rtkit-daemon-3289 17,549 cpu_clk_unhalted.thread
kworker/4:1-15354 8,812 cpu_clk_unhalted.thread
kworker/5:2-31362 6,812 cpu_clk_unhalted.thread
kworker/4:1H-1887 5,270 cpu_clk_unhalted.thread
kworker/6:0-17528 5,111 cpu_clk_unhalted.thread
kworker/7:2-23145 4,667 cpu_clk_unhalted.thread
watchdog/0-11 4,663 cpu_clk_unhalted.thread
watchdog/1-14 4,663 cpu_clk_unhalted.thread
watchdog/4-32 4,626 cpu_clk_unhalted.thread
watchdog/5-38 4,403 cpu_clk_unhalted.thread
watchdog/3-26 3,936 cpu_clk_unhalted.thread
watchdog/2-20 3,850 cpu_clk_unhalted.thread
kworker/u16:2-23146 2,654 cpu_clk_unhalted.thread
watchdog/6-44 2,017 cpu_clk_unhalted.thread
watchdog/7-50 2,017 cpu_clk_unhalted.thread
vmstat-23127 2,000,783 inst_retired.any # 0.7 CPI
thermald-2841 1,472,670 inst_retired.any # 0.8 CPI
sshd-23111 977,374 inst_retired.any # 0.8 CPI
perf-24163 495,037 inst_retired.any # 4.7 CPI
gmain-2700 341,213 inst_retired.any # 1.1 CPI
sshd-23058 148,891 inst_retired.any # 1.3 CPI
rtkit-daemon-3288 71,210 inst_retired.any # 1.5 CPI
kworker/u16:1-18249 39,562 inst_retired.any # 2.9 CPI
rcu_sched-8 14,474 inst_retired.any # 1.3 CPI
kworker/0:2-19991 7,659 inst_retired.any # 6.1 CPI
kworker/4:1-15354 6,714 inst_retired.any # 1.3 CPI
rtkit-daemon-3289 4,839 inst_retired.any # 3.6 CPI
kworker/6:0-17528 3,321 inst_retired.any # 1.5 CPI
kworker/5:2-31362 3,215 inst_retired.any # 2.1 CPI
kworker/7:2-23145 3,173 inst_retired.any # 1.5 CPI
kworker/4:1H-1887 1,719 inst_retired.any # 3.1 CPI
watchdog/0-11 1,479 inst_retired.any # 3.2 CPI
watchdog/1-14 1,479 inst_retired.any # 3.2 CPI
watchdog/2-20 1,479 inst_retired.any # 2.6 CPI
watchdog/3-26 1,479 inst_retired.any # 2.7 CPI
watchdog/4-32 1,479 inst_retired.any # 3.1 CPI
watchdog/5-38 1,479 inst_retired.any # 3.0 CPI
watchdog/6-44 1,479 inst_retired.any # 1.4 CPI
watchdog/7-50 1,479 inst_retired.any # 1.4 CPI
kworker/u16:2-23146 1,408 inst_retired.any # 1.9 CPI
perf-24163 2,302,323 cycles
vmstat-23127 1,352,455 cycles
thermald-2841 1,161,140 cycles
sshd-23111 807,827 cycles
gmain-2700 375,535 cycles
sshd-23058 194,071 cycles
kworker/u16:1-18249 114,306 cycles
rtkit-daemon-3288 103,547 cycles
kworker/0:2-19991 46,550 cycles
rcu_sched-8 18,855 cycles
rtkit-daemon-3289 17,549 cycles
kworker/4:1-15354 8,812 cycles
kworker/5:2-31362 6,812 cycles
kworker/4:1H-1887 5,270 cycles
kworker/6:0-17528 5,111 cycles
kworker/7:2-23145 4,667 cycles
watchdog/0-11 4,663 cycles
watchdog/1-14 4,663 cycles
watchdog/4-32 4,626 cycles
watchdog/5-38 4,403 cycles
watchdog/3-26 3,936 cycles
watchdog/2-20 3,850 cycles
kworker/u16:2-23146 2,654 cycles
watchdog/6-44 2,017 cycles
watchdog/7-50 2,017 cycles
2.175726600 seconds time elapsed
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Kan Liang <kan.liang@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1512482591-4646-12-git-send-email-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-12-05 14:03:11 +00:00
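As a cross-check on the metric arithmetic: the IPC and CPI figures in the -M
output above are just ratios of the two raw counters printed beneath them.
For vmstat-23127, inst_retired.any / cpu_clk_unhalted.thread =
2,000,783 / 1,352,455 ~= 1.48, which perf rounds to the "1.5 IPC" shown, and
the reciprocal 1,352,455 / 2,000,783 ~= 0.68 matches the "0.7 CPI" line.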
perf stat: Introduce --per-thread option
Currently, the task values for all of the -p option's PID arguments get
aggregated and printed as single values.
Add a --per-thread option to print values per task.
$ perf stat -e cycles,instructions --per-thread -p 30190,30242
^C
Performance counter stats for process id '30190,30242':
cat-30190 0 cycles
yes-30242 3,842,525,421 cycles
cat-30190 0 instructions
yes-30242 10,370,817,010 instructions
1.143155657 seconds time elapsed
Also works under interval mode:
$ perf stat -e cycles,instructions --per-thread -p 30190,30242 -I 1000
# time comm-pid counts unit events
1.000073435 cat-30190 89,058 cycles
1.000073435 yes-30242 3,360,786,902 cycles (100.00%)
1.000073435 cat-30190 14,066 instructions
1.000073435 yes-30242 9,069,937,462 instructions
2.000204830 cat-30190 0 cycles
2.000204830 yes-30242 3,351,667,626 cycles
2.000204830 cat-30190 0 instructions
2.000204830 yes-30242 9,045,796,885 instructions
^C 2.771286639 cat-30190 0 cycles
2.771286639 yes-30242 2,593,884,166 cycles
2.771286639 cat-30190 0 instructions
2.771286639 yes-30242 7,001,171,191 instructions
It works only with the -t and -p options; otherwise, the following error is
printed:
$ perf stat -e cycles --per-thread -I 1000 ls
The --per-thread option is only available when monitoring via -p -t options.
-p, --pid <pid> stat events on existing process id
-t, --tid <tid> stat events on existing thread id
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1435310967-14570-23-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-06-26 09:29:27 +00:00
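The enforcement described above amounts to a simple guard before the counters
are set up. A minimal sketch of that kind of check follows; the variable and
helper names used here (aggr_mode, target__has_task, the options array) are
assumptions based on context, not necessarily the exact code this patch adds:

	/* assumed sketch: --per-thread needs an existing task via -p or -t */
	if (aggr_mode == AGGR_THREAD && !target__has_task(&target)) {
		fprintf(stderr, "The --per-thread option is only "
			"available when monitoring via -p -t options.\n");
		parse_options_usage(NULL, options, "p", 1);
		parse_options_usage(NULL, options, "t", 1);
		goto out;
	}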
static int cmp_val(const void *a, const void *b)
{
	return ((struct perf_aggr_thread_value *)b)->val -
	       ((struct perf_aggr_thread_value *)a)->val;
}
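One subtlety worth noting: the val fields are u64, so the subtraction above is
computed in 64 bits and then truncated to int, which can misorder entries
whose counts differ by more than INT_MAX. A hedged, hypothetical alternative
(cmp_val_3way is not a name in this file) is the conventional three-way
comparison, which sorts descending without any truncation:

	static int cmp_val_3way(const void *a, const void *b)
	{
		u64 va = ((const struct perf_aggr_thread_value *)a)->val;
		u64 vb = ((const struct perf_aggr_thread_value *)b)->val;

		/* positive if b should sort first: descending order */
		return (vb > va) - (vb < va);
	}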
static struct perf_aggr_thread_value *sort_aggr_thread(
					struct perf_evsel *counter,
					int nthreads, int ncpus,
					int *ret)
{
	int cpu, thread, i = 0;
	double uval;
	struct perf_aggr_thread_value *buf;

	buf = calloc(nthreads, sizeof(struct perf_aggr_thread_value));
	if (!buf)
		return NULL;
	for (thread = 0; thread < nthreads; thread++) {
		u64 ena = 0, run = 0, val = 0;

		/* aggregate this thread's counts across all CPUs */
		for (cpu = 0; cpu < ncpus; cpu++) {
			val += perf_counts(counter->counts, cpu, thread)->val;
			ena += perf_counts(counter->counts, cpu, thread)->ena;
			run += perf_counts(counter->counts, cpu, thread)->run;
		}
		uval = val * counter->scale;

		/*
		 * Skip zero values when --per-thread is enabled globally,
		 * otherwise the output is flooded with zeroes.
		 */
		if (uval == 0.0 && target__has_per_thread(&target))
			continue;

		buf[i].counter = counter;
		buf[i].id = thread;
		buf[i].uval = uval;
		buf[i].val = val;
		buf[i].run = run;
		buf[i].ena = ena;
		i++;
	}

	/* sort the surviving entries by raw count, highest first */
	qsort(buf, i, sizeof(struct perf_aggr_thread_value), cmp_val);

	if (ret)
		*ret = i;

	return buf;
}
2018-08-30 06:32:27 +00:00
static void print_aggr_thread(struct perf_stat_config *config,
			      struct perf_evsel *counter, char *prefix)
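Given sort_aggr_thread() above, the printer's job reduces to walking the
already-sorted, already-filtered buffer and releasing it. A rough sketch of
that consumption pattern follows; the printout() argument list is an
approximation that varies across perf versions, and nthreads/ncpus are
assumed to come from the counter's thread and CPU maps:

	int thread, sorted_threads;
	struct perf_aggr_thread_value *buf;

	buf = sort_aggr_thread(counter, nthreads, ncpus, &sorted_threads);
	if (!buf) {
		perror("cannot sort aggr thread");
		return;
	}

	for (thread = 0; thread < sorted_threads; thread++) {
		if (prefix)
			fprintf(config->output, "%s", prefix);

		/* one line per thread: comm-pid, count, optional metric */
		printout(config, buf[thread].id, 0, buf[thread].counter,
			 buf[thread].uval, prefix, buf[thread].run,
			 buf[thread].ena, 1.0, &rt_stat);
	}

	free(buf);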
thermald-2841 31,611 branches # 136.915 M/sec
sshd-23111 21,874 branches # 78.596 M/sec
sshd-23058 10,682 branches # 51.528 M/sec
rcu_sched-8 8,693 branches # 101.633 M/sec
kworker/u16:1-18249 7,891 branches # 62.808 M/sec
kworker/0:2-19991 5,761 branches # 42.998 M/sec
kworker/u16:2-23146 4,099 branches # 53.138 M/sec
kworker/4:1-15354 2,755 branches # 97.110 M/sec
gmain-2700 2,638 branches # 63.127 M/sec
kworker/6:0-17528 2,216 branches # 92.739 M/sec
kworker/5:2-31362 1,132 branches # 97.360 M/sec
kworker/3:2-12870 1,081 branches # 105.773 M/sec
kworker/4:1H-1887 725 branches # 54.887 M/sec
ksoftirqd/0-7 707 branches # 79.716 M/sec
watchdog/0-11 652 branches # 59.860 M/sec
watchdog/1-14 652 branches # 76.923 M/sec
watchdog/2-20 652 branches # 268.423 M/sec
watchdog/3-26 652 branches # 225.372 M/sec
watchdog/4-32 652 branches # 236.318 M/sec
watchdog/5-38 652 branches # 441.435 M/sec
watchdog/6-44 652 branches # 437.290 M/sec
watchdog/7-50 652 branches # 221.467 M/sec
vmstat-23127 8,960 branch-misses # 1.66% of all branches
irqbalance-2780 3,047 branch-misses # 1.16% of all branches
perf-24165 2,876 branch-misses # 0.77% of all branches
sshd-23111 1,843 branch-misses # 8.43% of all branches
thermald-2841 1,444 branch-misses # 4.57% of all branches
sshd-23058 1,379 branch-misses # 12.91% of all branches
kworker/u16:1-18249 982 branch-misses # 12.44% of all branches
rcu_sched-8 893 branch-misses # 10.27% of all branches
kworker/u16:2-23146 578 branch-misses # 14.10% of all branches
kworker/0:2-19991 376 branch-misses # 6.53% of all branches
gmain-2700 280 branch-misses # 10.61% of all branches
kworker/6:0-17528 196 branch-misses # 8.84% of all branches
kworker/4:1-15354 187 branch-misses # 6.79% of all branches
kworker/5:2-31362 123 branch-misses # 10.87% of all branches
watchdog/0-11 95 branch-misses # 14.57% of all branches
watchdog/4-32 89 branch-misses # 13.65% of all branches
kworker/3:2-12870 80 branch-misses # 7.40% of all branches
watchdog/3-26 61 branch-misses # 9.36% of all branches
kworker/4:1H-1887 60 branch-misses # 8.28% of all branches
watchdog/2-20 52 branch-misses # 7.98% of all branches
ksoftirqd/0-7 47 branch-misses # 6.65% of all branches
watchdog/1-14 46 branch-misses # 7.06% of all branches
watchdog/7-50 13 branch-misses # 1.99% of all branches
watchdog/5-38 8 branch-misses # 1.23% of all branches
watchdog/6-44 7 branch-misses # 1.07% of all branches
3.695150786 seconds time elapsed
root@skl:/tmp# perf stat --per-thread -M IPC,CPI
^C
Performance counter stats for 'system wide':
vmstat-23127 2,000,783 inst_retired.any # 1.5 IPC
thermald-2841 1,472,670 inst_retired.any # 1.3 IPC
sshd-23111 977,374 inst_retired.any # 1.2 IPC
perf-24163 483,779 inst_retired.any # 0.2 IPC
gmain-2700 341,213 inst_retired.any # 0.9 IPC
sshd-23058 148,891 inst_retired.any # 0.8 IPC
rtkit-daemon-3288 71,210 inst_retired.any # 0.7 IPC
kworker/u16:1-18249 39,562 inst_retired.any # 0.3 IPC
rcu_sched-8 14,474 inst_retired.any # 0.8 IPC
kworker/0:2-19991 7,659 inst_retired.any # 0.2 IPC
kworker/4:1-15354 6,714 inst_retired.any # 0.8 IPC
rtkit-daemon-3289 4,839 inst_retired.any # 0.3 IPC
kworker/6:0-17528 3,321 inst_retired.any # 0.6 IPC
kworker/5:2-31362 3,215 inst_retired.any # 0.5 IPC
kworker/7:2-23145 3,173 inst_retired.any # 0.7 IPC
kworker/4:1H-1887 1,719 inst_retired.any # 0.3 IPC
watchdog/0-11 1,479 inst_retired.any # 0.3 IPC
watchdog/1-14 1,479 inst_retired.any # 0.3 IPC
watchdog/2-20 1,479 inst_retired.any # 0.4 IPC
watchdog/3-26 1,479 inst_retired.any # 0.4 IPC
watchdog/4-32 1,479 inst_retired.any # 0.3 IPC
watchdog/5-38 1,479 inst_retired.any # 0.3 IPC
watchdog/6-44 1,479 inst_retired.any # 0.7 IPC
watchdog/7-50 1,479 inst_retired.any # 0.7 IPC
kworker/u16:2-23146 1,408 inst_retired.any # 0.5 IPC
perf-24163 2,249,872 cpu_clk_unhalted.thread
vmstat-23127 1,352,455 cpu_clk_unhalted.thread
thermald-2841 1,161,140 cpu_clk_unhalted.thread
sshd-23111 807,827 cpu_clk_unhalted.thread
gmain-2700 375,535 cpu_clk_unhalted.thread
sshd-23058 194,071 cpu_clk_unhalted.thread
kworker/u16:1-18249 114,306 cpu_clk_unhalted.thread
rtkit-daemon-3288 103,547 cpu_clk_unhalted.thread
kworker/0:2-19991 46,550 cpu_clk_unhalted.thread
rcu_sched-8 18,855 cpu_clk_unhalted.thread
rtkit-daemon-3289 17,549 cpu_clk_unhalted.thread
kworker/4:1-15354 8,812 cpu_clk_unhalted.thread
kworker/5:2-31362 6,812 cpu_clk_unhalted.thread
kworker/4:1H-1887 5,270 cpu_clk_unhalted.thread
kworker/6:0-17528 5,111 cpu_clk_unhalted.thread
kworker/7:2-23145 4,667 cpu_clk_unhalted.thread
watchdog/0-11 4,663 cpu_clk_unhalted.thread
watchdog/1-14 4,663 cpu_clk_unhalted.thread
watchdog/4-32 4,626 cpu_clk_unhalted.thread
watchdog/5-38 4,403 cpu_clk_unhalted.thread
watchdog/3-26 3,936 cpu_clk_unhalted.thread
watchdog/2-20 3,850 cpu_clk_unhalted.thread
kworker/u16:2-23146 2,654 cpu_clk_unhalted.thread
watchdog/6-44 2,017 cpu_clk_unhalted.thread
watchdog/7-50 2,017 cpu_clk_unhalted.thread
vmstat-23127 2,000,783 inst_retired.any # 0.7 CPI
thermald-2841 1,472,670 inst_retired.any # 0.8 CPI
sshd-23111 977,374 inst_retired.any # 0.8 CPI
perf-24163 495,037 inst_retired.any # 4.7 CPI
gmain-2700 341,213 inst_retired.any # 1.1 CPI
sshd-23058 148,891 inst_retired.any # 1.3 CPI
rtkit-daemon-3288 71,210 inst_retired.any # 1.5 CPI
kworker/u16:1-18249 39,562 inst_retired.any # 2.9 CPI
rcu_sched-8 14,474 inst_retired.any # 1.3 CPI
kworker/0:2-19991 7,659 inst_retired.any # 6.1 CPI
kworker/4:1-15354 6,714 inst_retired.any # 1.3 CPI
rtkit-daemon-3289 4,839 inst_retired.any # 3.6 CPI
kworker/6:0-17528 3,321 inst_retired.any # 1.5 CPI
kworker/5:2-31362 3,215 inst_retired.any # 2.1 CPI
kworker/7:2-23145 3,173 inst_retired.any # 1.5 CPI
kworker/4:1H-1887 1,719 inst_retired.any # 3.1 CPI
watchdog/0-11 1,479 inst_retired.any # 3.2 CPI
watchdog/1-14 1,479 inst_retired.any # 3.2 CPI
watchdog/2-20 1,479 inst_retired.any # 2.6 CPI
watchdog/3-26 1,479 inst_retired.any # 2.7 CPI
watchdog/4-32 1,479 inst_retired.any # 3.1 CPI
watchdog/5-38 1,479 inst_retired.any # 3.0 CPI
watchdog/6-44 1,479 inst_retired.any # 1.4 CPI
watchdog/7-50 1,479 inst_retired.any # 1.4 CPI
kworker/u16:2-23146 1,408 inst_retired.any # 1.9 CPI
perf-24163 2,302,323 cycles
vmstat-23127 1,352,455 cycles
thermald-2841 1,161,140 cycles
sshd-23111 807,827 cycles
gmain-2700 375,535 cycles
sshd-23058 194,071 cycles
kworker/u16:1-18249 114,306 cycles
rtkit-daemon-3288 103,547 cycles
kworker/0:2-19991 46,550 cycles
rcu_sched-8 18,855 cycles
rtkit-daemon-3289 17,549 cycles
kworker/4:1-15354 8,812 cycles
kworker/5:2-31362 6,812 cycles
kworker/4:1H-1887 5,270 cycles
kworker/6:0-17528 5,111 cycles
kworker/7:2-23145 4,667 cycles
watchdog/0-11 4,663 cycles
watchdog/1-14 4,663 cycles
watchdog/4-32 4,626 cycles
watchdog/5-38 4,403 cycles
watchdog/3-26 3,936 cycles
watchdog/2-20 3,850 cycles
kworker/u16:2-23146 2,654 cycles
watchdog/6-44 2,017 cycles
watchdog/7-50 2,017 cycles
2.175726600 seconds time elapsed
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Kan Liang <kan.liang@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1512482591-4646-12-git-send-email-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-12-05 14:03:11 +00:00
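As a sanity check on the -M output above, the derived metrics follow directly from the raw counts: for vmstat-23127, IPC = inst_retired.any / cpu_clk_unhalted.thread = 2,000,783 / 1,352,455 ≈ 1.48 (printed as 1.5 IPC), and CPI is the reciprocal, 1,352,455 / 2,000,783 ≈ 0.68 (printed as 0.7 CPI).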
|
|
|
{
|
2018-08-30 06:32:27 +00:00
|
|
|
FILE *output = config->output;
|
|
|
|
int nthreads = thread_map__nr(counter->threads);
|
|
|
|
int ncpus = cpu_map__nr(counter->cpus);
|
|
|
|
int thread, sorted_threads, id;
|
|
|
|
struct perf_aggr_thread_value *buf;
|
|
|
|
|
|
|
|
buf = sort_aggr_thread(counter, nthreads, ncpus, &sorted_threads);
|
|
|
|
if (!buf) {
|
|
|
|
perror("cannot sort aggr thread");
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
for (thread = 0; thread < sorted_threads; thread++) {
|
perf stat: Introduce --per-thread option
Currently, the values of all tasks matching the -p option's PID
arguments get aggregated and printed as single totals.
Add a --per-thread option to print values per task.
$ perf stat -e cycles,instructions --per-thread -p 30190,30242
^C
Performance counter stats for process id '30190,30242':
cat-30190 0 cycles
yes-30242 3,842,525,421 cycles
cat-30190 0 instructions
yes-30242 10,370,817,010 instructions
1.143155657 seconds time elapsed
Also works under interval mode:
$ perf stat -e cycles,instructions --per-thread -p 30190,30242 -I 1000
# time comm-pid counts unit events
1.000073435 cat-30190 89,058 cycles
1.000073435 yes-30242 3,360,786,902 cycles (100.00%)
1.000073435 cat-30190 14,066 instructions
1.000073435 yes-30242 9,069,937,462 instructions
2.000204830 cat-30190 0 cycles
2.000204830 yes-30242 3,351,667,626 cycles
2.000204830 cat-30190 0 instructions
2.000204830 yes-30242 9,045,796,885 instructions
^C 2.771286639 cat-30190 0 cycles
2.771286639 yes-30242 2,593,884,166 cycles
2.771286639 cat-30190 0 instructions
2.771286639 yes-30242 7,001,171,191 instructions
It works only with the -t and -p options; otherwise the following
error is printed:
$ perf stat -e cycles --per-thread -I 1000 ls
The --per-thread option is only available when monitoring via -p -t options.
-p, --pid <pid> stat events on existing process id
-t, --tid <tid> stat events on existing thread id
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1435310967-14570-23-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-06-26 09:29:27 +00:00
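The restriction described above implies a guard in the option-validation path of cmd_stat(); the following is only a sketch of such a check, assuming the target__has_task() helper and the AGGR_THREAD aggregation mode used elsewhere in perf (the exact names and placement are assumptions, not the verbatim patch).

/*
 * Sketch: refuse --per-thread aggregation unless a task target was
 * given via -p/-t, printing the error message quoted above.
 */
if (aggr_mode == AGGR_THREAD && !target__has_task(&target)) {
	fprintf(stderr, "The --per-thread option is only available "
		"when monitoring via -p -t options.\n");
	parse_options_usage(NULL, options, "p", 1);
	parse_options_usage(NULL, options, "t", 1);
	return -1;
}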
|
|
|
if (prefix)
|
|
|
|
fprintf(output, "%s", prefix);
|
|
|
|
|
|
|
|
id = buf[thread].id;
|
2018-08-30 06:32:27 +00:00
|
|
|
if (config->stats)
|
|
|
|
printout(config, id, 0, buf[thread].counter, buf[thread].uval,
|
|
|
|
prefix, buf[thread].run, buf[thread].ena, 1.0,
|
2018-08-30 06:32:27 +00:00
|
|
|
&config->stats[id]);
|
2017-12-05 14:03:08 +00:00
|
|
|
else
|
2018-08-30 06:32:27 +00:00
|
|
|
printout(config, id, 0, buf[thread].counter, buf[thread].uval,
|
watchdog/0-11 652 branches # 59.860 M/sec
watchdog/1-14 652 branches # 76.923 M/sec
watchdog/2-20 652 branches # 268.423 M/sec
watchdog/3-26 652 branches # 225.372 M/sec
watchdog/4-32 652 branches # 236.318 M/sec
watchdog/5-38 652 branches # 441.435 M/sec
watchdog/6-44 652 branches # 437.290 M/sec
watchdog/7-50 652 branches # 221.467 M/sec
vmstat-23127 8,960 branch-misses # 1.66% of all branches
irqbalance-2780 3,047 branch-misses # 1.16% of all branches
perf-24165 2,876 branch-misses # 0.77% of all branches
sshd-23111 1,843 branch-misses # 8.43% of all branches
thermald-2841 1,444 branch-misses # 4.57% of all branches
sshd-23058 1,379 branch-misses # 12.91% of all branches
kworker/u16:1-18249 982 branch-misses # 12.44% of all branches
rcu_sched-8 893 branch-misses # 10.27% of all branches
kworker/u16:2-23146 578 branch-misses # 14.10% of all branches
kworker/0:2-19991 376 branch-misses # 6.53% of all branches
gmain-2700 280 branch-misses # 10.61% of all branches
kworker/6:0-17528 196 branch-misses # 8.84% of all branches
kworker/4:1-15354 187 branch-misses # 6.79% of all branches
kworker/5:2-31362 123 branch-misses # 10.87% of all branches
watchdog/0-11 95 branch-misses # 14.57% of all branches
watchdog/4-32 89 branch-misses # 13.65% of all branches
kworker/3:2-12870 80 branch-misses # 7.40% of all branches
watchdog/3-26 61 branch-misses # 9.36% of all branches
kworker/4:1H-1887 60 branch-misses # 8.28% of all branches
watchdog/2-20 52 branch-misses # 7.98% of all branches
ksoftirqd/0-7 47 branch-misses # 6.65% of all branches
watchdog/1-14 46 branch-misses # 7.06% of all branches
watchdog/7-50 13 branch-misses # 1.99% of all branches
watchdog/5-38 8 branch-misses # 1.23% of all branches
watchdog/6-44 7 branch-misses # 1.07% of all branches
3.695150786 seconds time elapsed
root@skl:/tmp# perf stat --per-thread -M IPC,CPI
^C
Performance counter stats for 'system wide':
vmstat-23127 2,000,783 inst_retired.any # 1.5 IPC
thermald-2841 1,472,670 inst_retired.any # 1.3 IPC
sshd-23111 977,374 inst_retired.any # 1.2 IPC
perf-24163 483,779 inst_retired.any # 0.2 IPC
gmain-2700 341,213 inst_retired.any # 0.9 IPC
sshd-23058 148,891 inst_retired.any # 0.8 IPC
rtkit-daemon-3288 71,210 inst_retired.any # 0.7 IPC
kworker/u16:1-18249 39,562 inst_retired.any # 0.3 IPC
rcu_sched-8 14,474 inst_retired.any # 0.8 IPC
kworker/0:2-19991 7,659 inst_retired.any # 0.2 IPC
kworker/4:1-15354 6,714 inst_retired.any # 0.8 IPC
rtkit-daemon-3289 4,839 inst_retired.any # 0.3 IPC
kworker/6:0-17528 3,321 inst_retired.any # 0.6 IPC
kworker/5:2-31362 3,215 inst_retired.any # 0.5 IPC
kworker/7:2-23145 3,173 inst_retired.any # 0.7 IPC
kworker/4:1H-1887 1,719 inst_retired.any # 0.3 IPC
watchdog/0-11 1,479 inst_retired.any # 0.3 IPC
watchdog/1-14 1,479 inst_retired.any # 0.3 IPC
watchdog/2-20 1,479 inst_retired.any # 0.4 IPC
watchdog/3-26 1,479 inst_retired.any # 0.4 IPC
watchdog/4-32 1,479 inst_retired.any # 0.3 IPC
watchdog/5-38 1,479 inst_retired.any # 0.3 IPC
watchdog/6-44 1,479 inst_retired.any # 0.7 IPC
watchdog/7-50 1,479 inst_retired.any # 0.7 IPC
kworker/u16:2-23146 1,408 inst_retired.any # 0.5 IPC
perf-24163 2,249,872 cpu_clk_unhalted.thread
vmstat-23127 1,352,455 cpu_clk_unhalted.thread
thermald-2841 1,161,140 cpu_clk_unhalted.thread
sshd-23111 807,827 cpu_clk_unhalted.thread
gmain-2700 375,535 cpu_clk_unhalted.thread
sshd-23058 194,071 cpu_clk_unhalted.thread
kworker/u16:1-18249 114,306 cpu_clk_unhalted.thread
rtkit-daemon-3288 103,547 cpu_clk_unhalted.thread
kworker/0:2-19991 46,550 cpu_clk_unhalted.thread
rcu_sched-8 18,855 cpu_clk_unhalted.thread
rtkit-daemon-3289 17,549 cpu_clk_unhalted.thread
kworker/4:1-15354 8,812 cpu_clk_unhalted.thread
kworker/5:2-31362 6,812 cpu_clk_unhalted.thread
kworker/4:1H-1887 5,270 cpu_clk_unhalted.thread
kworker/6:0-17528 5,111 cpu_clk_unhalted.thread
kworker/7:2-23145 4,667 cpu_clk_unhalted.thread
watchdog/0-11 4,663 cpu_clk_unhalted.thread
watchdog/1-14 4,663 cpu_clk_unhalted.thread
watchdog/4-32 4,626 cpu_clk_unhalted.thread
watchdog/5-38 4,403 cpu_clk_unhalted.thread
watchdog/3-26 3,936 cpu_clk_unhalted.thread
watchdog/2-20 3,850 cpu_clk_unhalted.thread
kworker/u16:2-23146 2,654 cpu_clk_unhalted.thread
watchdog/6-44 2,017 cpu_clk_unhalted.thread
watchdog/7-50 2,017 cpu_clk_unhalted.thread
vmstat-23127 2,000,783 inst_retired.any # 0.7 CPI
thermald-2841 1,472,670 inst_retired.any # 0.8 CPI
sshd-23111 977,374 inst_retired.any # 0.8 CPI
perf-24163 495,037 inst_retired.any # 4.7 CPI
gmain-2700 341,213 inst_retired.any # 1.1 CPI
sshd-23058 148,891 inst_retired.any # 1.3 CPI
rtkit-daemon-3288 71,210 inst_retired.any # 1.5 CPI
kworker/u16:1-18249 39,562 inst_retired.any # 2.9 CPI
rcu_sched-8 14,474 inst_retired.any # 1.3 CPI
kworker/0:2-19991 7,659 inst_retired.any # 6.1 CPI
kworker/4:1-15354 6,714 inst_retired.any # 1.3 CPI
rtkit-daemon-3289 4,839 inst_retired.any # 3.6 CPI
kworker/6:0-17528 3,321 inst_retired.any # 1.5 CPI
kworker/5:2-31362 3,215 inst_retired.any # 2.1 CPI
kworker/7:2-23145 3,173 inst_retired.any # 1.5 CPI
kworker/4:1H-1887 1,719 inst_retired.any # 3.1 CPI
watchdog/0-11 1,479 inst_retired.any # 3.2 CPI
watchdog/1-14 1,479 inst_retired.any # 3.2 CPI
watchdog/2-20 1,479 inst_retired.any # 2.6 CPI
watchdog/3-26 1,479 inst_retired.any # 2.7 CPI
watchdog/4-32 1,479 inst_retired.any # 3.1 CPI
watchdog/5-38 1,479 inst_retired.any # 3.0 CPI
watchdog/6-44 1,479 inst_retired.any # 1.4 CPI
watchdog/7-50 1,479 inst_retired.any # 1.4 CPI
kworker/u16:2-23146 1,408 inst_retired.any # 1.9 CPI
perf-24163 2,302,323 cycles
vmstat-23127 1,352,455 cycles
thermald-2841 1,161,140 cycles
sshd-23111 807,827 cycles
gmain-2700 375,535 cycles
sshd-23058 194,071 cycles
kworker/u16:1-18249 114,306 cycles
rtkit-daemon-3288 103,547 cycles
kworker/0:2-19991 46,550 cycles
rcu_sched-8 18,855 cycles
rtkit-daemon-3289 17,549 cycles
kworker/4:1-15354 8,812 cycles
kworker/5:2-31362 6,812 cycles
kworker/4:1H-1887 5,270 cycles
kworker/6:0-17528 5,111 cycles
kworker/7:2-23145 4,667 cycles
watchdog/0-11 4,663 cycles
watchdog/1-14 4,663 cycles
watchdog/4-32 4,626 cycles
watchdog/5-38 4,403 cycles
watchdog/3-26 3,936 cycles
watchdog/2-20 3,850 cycles
kworker/u16:2-23146 2,654 cycles
watchdog/6-44 2,017 cycles
watchdog/7-50 2,017 cycles
2.175726600 seconds time elapsed
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Kan Liang <kan.liang@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1512482591-4646-12-git-send-email-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
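To make the resort step concrete, here is a minimal standalone sketch, not
the kernel's actual code: the thread_buf struct, its field names, and the
helper name are assumptions. The idea is to buffer the per-thread values,
qsort() them descending, and skip zero counts while printing.
#include <stdlib.h>	/* qsort */
/* Hypothetical buffer entry; field names are assumptions for this sketch. */
struct thread_buf {
	int tid;	/* thread id */
	double uval;	/* scaled counter value for this thread */
};
/* Sort descending by value, so the hottest threads print first. */
static int cmp_uval_desc(const void *a, const void *b)
{
	const struct thread_buf *ba = a, *bb = b;
	if (bb->uval > ba->uval)
		return 1;
	if (bb->uval < ba->uval)
		return -1;
	return 0;
}
/* Usage sketch: qsort(buf, nthreads, sizeof(*buf), cmp_uval_desc);
 * then skip entries with buf[i].uval == 0.0 while printing. */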
2017-12-05 14:03:11 +00:00
|
|
|
prefix, buf[thread].run, buf[thread].ena, 1.0,
|
|
|
|
&rt_stat);
|
perf stat: Introduce --per-thread option
Currently the values of all tasks given via the -p option's PID
arguments get aggregated and printed as single values.
Add a --per-thread option to print values per task.
$ perf stat -e cycles,instructions --per-thread -p 30190,30242
^C
Performance counter stats for process id '30190,30242':
cat-30190 0 cycles
yes-30242 3,842,525,421 cycles
cat-30190 0 instructions
yes-30242 10,370,817,010 instructions
1.143155657 seconds time elapsed
Also works under interval mode:
$ perf stat -e cycles,instructions --per-thread -p 30190,30242 -I 1000
# time comm-pid counts unit events
1.000073435 cat-30190 89,058 cycles
1.000073435 yes-30242 3,360,786,902 cycles (100.00%)
1.000073435 cat-30190 14,066 instructions
1.000073435 yes-30242 9,069,937,462 instructions
2.000204830 cat-30190 0 cycles
2.000204830 yes-30242 3,351,667,626 cycles
2.000204830 cat-30190 0 instructions
2.000204830 yes-30242 9,045,796,885 instructions
^C 2.771286639 cat-30190 0 cycles
2.771286639 yes-30242 2,593,884,166 cycles
2.771286639 cat-30190 0 instructions
2.771286639 yes-30242 7,001,171,191 instructions
It works only with the -t and -p options; otherwise the following
error is printed:
$ perf stat -e cycles --per-thread -I 1000 ls
The --per-thread option is only available when monitoring via -p -t options.
-p, --pid <pid> stat events on existing process id
-t, --tid <tid> stat events on existing thread id
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1435310967-14570-23-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
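Conceptually, per-thread mode reads each (cpu, thread) slot and sums a
thread's counts across CPUs before printing. A minimal sketch using the
perf_counts() accessor this file already uses elsewhere; the helper name
and signature are assumptions, not the kernel's actual helper.
/* Sketch: total one event's counts for one thread across all CPUs. */
static u64 thread_total(struct perf_evsel *counter, int thread, int nr_cpus)
{
	u64 val = 0;
	int cpu;
	for (cpu = 0; cpu < nr_cpus; cpu++)
		val += perf_counts(counter->counts, cpu, thread)->val;
	return val;
}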
2015-06-26 09:29:27 +00:00
|
|
|
fputc('\n', output);
|
|
|
|
}
|
perf stat: Resort '--per-thread' result (full commit message above)
2017-12-05 14:03:11 +00:00
|
|
|
|
|
|
|
free(buf);
|
perf stat: Introduce --per-thread option (full commit message above)
2015-06-26 09:29:27 +00:00
|
|
|
}
|
|
|
|
|
2017-03-20 20:16:59 +00:00
|
|
|
struct caggr_data {
|
|
|
|
double avg, avg_enabled, avg_running;
|
|
|
|
};
|
|
|
|
|
|
|
|
static void counter_aggr_cb(struct perf_evsel *counter, void *data,
|
|
|
|
bool first __maybe_unused)
|
|
|
|
{
|
|
|
|
struct caggr_data *cd = data;
|
2017-10-26 17:22:34 +00:00
|
|
|
struct perf_stat_evsel *ps = counter->stats;
|
2017-03-20 20:16:59 +00:00
|
|
|
|
|
|
|
cd->avg += avg_stats(&ps->res_stats[0]);
|
|
|
|
cd->avg_enabled += avg_stats(&ps->res_stats[1]);
|
|
|
|
cd->avg_running += avg_stats(&ps->res_stats[2]);
|
|
|
|
}
|
|
|
|
|
2009-05-29 07:10:54 +00:00
|
|
|
/*
|
|
|
|
* Print out the results of a single counter:
|
2010-11-16 09:05:01 +00:00
|
|
|
* aggregated counts in system-wide mode
|
2009-05-29 07:10:54 +00:00
|
|
|
*/
|
2018-08-30 06:32:27 +00:00
|
|
|
static void print_counter_aggr(struct perf_stat_config *config,
|
|
|
|
struct perf_evsel *counter, char *prefix)
|
2009-05-29 07:10:54 +00:00
|
|
|
{
|
2018-08-30 06:32:31 +00:00
|
|
|
bool metric_only = config->metric_only;
|
2018-08-30 06:32:27 +00:00
|
|
|
FILE *output = config->output;
|
2013-11-12 16:58:49 +00:00
|
|
|
double uval;
|
2017-03-20 20:16:59 +00:00
|
|
|
struct caggr_data cd = { .avg = 0.0 };
|
2015-03-11 14:16:27 +00:00
|
|
|
|
perf stat: Collapse identically named events
The uncore PMU has a lot of duplicated PMUs for different subsystems.
When expanding an uncore alias we usually end up with a large
number of identically named aliases, which makes perf stat
output difficult to read.
Automatically sum them up in perf stat, unless --no-merge is specified.
This can be the default because generally only the uncores have
duplicated aliases; other PMUs have unique names.
Before:
% perf stat --no-merge -a -e unc_c_llc_lookup.any sleep 1
Performance counter stats for 'system wide':
694,976 Bytes unc_c_llc_lookup.any
706,304 Bytes unc_c_llc_lookup.any
956,608 Bytes unc_c_llc_lookup.any
782,720 Bytes unc_c_llc_lookup.any
605,696 Bytes unc_c_llc_lookup.any
442,816 Bytes unc_c_llc_lookup.any
659,328 Bytes unc_c_llc_lookup.any
509,312 Bytes unc_c_llc_lookup.any
263,936 Bytes unc_c_llc_lookup.any
592,448 Bytes unc_c_llc_lookup.any
672,448 Bytes unc_c_llc_lookup.any
608,640 Bytes unc_c_llc_lookup.any
641,024 Bytes unc_c_llc_lookup.any
856,896 Bytes unc_c_llc_lookup.any
808,832 Bytes unc_c_llc_lookup.any
684,864 Bytes unc_c_llc_lookup.any
710,464 Bytes unc_c_llc_lookup.any
538,304 Bytes unc_c_llc_lookup.any
1.002577660 seconds time elapsed
After:
% perf stat -a -e unc_c_llc_lookup.any sleep 1
Performance counter stats for 'system wide':
2,685,120 Bytes unc_c_llc_lookup.any
1.002648032 seconds time elapsed
v2: Split collect_aliases. Rename alias flag.
v3: Make sure unsupported/not counted is always printed.
v4: Factor out callback change into separate patch.
v5: Move check for bad results here
Move merged check into collect_data
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: http://lkml.kernel.org/r/20170320201711.14142-3-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
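A hedged sketch of the merge criterion. The real collect_aliases() also
honors --no-merge and an internal alias flag set when the uncore alias was
expanded; reducing the check to a name comparison is an assumption made
here for brevity.
#include <stdbool.h>
#include <string.h>
/* Sketch: two evsels are merge candidates when their names match. */
static bool same_alias(struct perf_evsel *a, struct perf_evsel *b)
{
	return !strcmp(perf_evsel__name(a), perf_evsel__name(b));
}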
2017-03-20 20:17:00 +00:00
|
|
|
if (!collect_data(counter, counter_aggr_cb, &cd))
|
|
|
|
return;
|
2009-05-29 07:10:54 +00:00
|
|
|
|
2016-03-03 23:57:36 +00:00
|
|
|
if (prefix && !metric_only)
|
perf stat: Add interval printing
This patch adds a new printing mode for perf stat. It allows interval
printing, meaning perf stat can now print event deltas at regular
time intervals. This is useful for detecting phases in programs.
The -I option enables interval printing. It expects an interval duration
in milliseconds; the minimum is 100ms. Once activated, perf stat prints
event deltas since the last printout. All modes are supported.
$ perf stat -I 1000 -e cycles noploop 10
noploop for 10 seconds
# time counts events
1.000109853 2,388,560,546 cycles
2.000262846 2,393,332,358 cycles
3.000354131 2,393,176,537 cycles
4.000439503 2,393,203,790 cycles
5.000527075 2,393,167,675 cycles
6.000609052 2,393,203,670 cycles
7.000691082 2,393,175,678 cycles
The output format makes it easy to feed into a plotting program such as
gnuplot when the -I option is used in combination with the -x option:
$ perf stat -x, -I 1000 -e cycles noploop 10
noploop for 10 seconds
1.000084113,2378775498,cycles
2.000245798,2391056897,cycles
3.000354445,2392089414,cycles
4.000459115,2390936603,cycles
5.000565341,2392108173,cycles
Signed-off-by: Stephane Eranian <eranian@google.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung.kim@lge.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1359460064-3060-3-git-send-email-eranian@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
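The time column that -I mode prepends is just seconds, nanoseconds, and the
field separator, mirroring the sprintf() in print_interval() further down
in this file; a minimal standalone sketch (the helper name is an
assumption):
#include <stdio.h>
#include <time.h>
/* Sketch: build the "   1.000109853" style time prefix used in -I mode. */
static void interval_prefix(char *buf, size_t len,
			    const struct timespec *ts, const char *sep)
{
	snprintf(buf, len, "%6lu.%09lu%s",
		 (unsigned long) ts->tv_sec, (unsigned long) ts->tv_nsec, sep);
}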
2013-01-29 11:47:44 +00:00
|
|
|
fprintf(output, "%s", prefix);
|
|
|
|
|
2017-03-20 20:16:59 +00:00
|
|
|
uval = cd.avg * counter->scale;
|
2018-08-30 06:32:27 +00:00
|
|
|
printout(config, -1, 0, counter, uval, prefix, cd.avg_running, cd.avg_enabled,
|
2017-12-05 14:03:05 +00:00
|
|
|
cd.avg, &rt_stat);
|
2016-03-03 23:57:36 +00:00
|
|
|
if (!metric_only)
|
|
|
|
fprintf(output, "\n");
|
2009-05-29 07:10:54 +00:00
|
|
|
}
|
|
|
|
|
2017-03-20 20:16:59 +00:00
|
|
|
static void counter_cb(struct perf_evsel *counter, void *data,
|
|
|
|
bool first __maybe_unused)
|
|
|
|
{
|
|
|
|
struct aggr_data *ad = data;
|
|
|
|
|
|
|
|
ad->val += perf_counts(counter->counts, ad->cpu, 0)->val;
|
|
|
|
ad->ena += perf_counts(counter->counts, ad->cpu, 0)->ena;
|
|
|
|
ad->run += perf_counts(counter->counts, ad->cpu, 0)->run;
|
|
|
|
}
|
|
|
|
|
2010-11-16 09:05:01 +00:00
|
|
|
/*
|
|
|
|
* Print out the results of a single counter:
|
|
|
|
* does not use aggregated count in system-wide
|
|
|
|
*/
|
2018-08-30 06:32:27 +00:00
|
|
|
static void print_counter(struct perf_stat_config *config,
|
|
|
|
struct perf_evsel *counter, char *prefix)
|
2010-11-16 09:05:01 +00:00
|
|
|
{
|
2018-08-30 06:32:27 +00:00
|
|
|
FILE *output = config->output;
|
2010-11-16 09:05:01 +00:00
|
|
|
u64 ena, run, val;
|
2013-11-12 16:58:49 +00:00
|
|
|
double uval;
|
2010-11-16 09:05:01 +00:00
|
|
|
int cpu;
|
|
|
|
|
2012-09-10 07:53:50 +00:00
|
|
|
for (cpu = 0; cpu < perf_evsel__nr_cpus(counter); cpu++) {
|
2017-03-20 20:16:59 +00:00
|
|
|
struct aggr_data ad = { .cpu = cpu };
|
|
|
|
|
perf stat: Collapse identically named events (full commit message above)
2017-03-20 20:17:00 +00:00
|
|
|
if (!collect_data(counter, counter_cb, &ad))
|
|
|
|
return;
|
2017-03-20 20:16:59 +00:00
|
|
|
val = ad.val;
|
|
|
|
ena = ad.ena;
|
|
|
|
run = ad.run;
|
perf stat: Add interval printing (full commit message above)
2013-01-29 11:47:44 +00:00
|
|
|
|
|
|
|
if (prefix)
|
|
|
|
fprintf(output, "%s", prefix);
|
|
|
|
|
2013-11-12 16:58:49 +00:00
|
|
|
uval = val * counter->scale;
|
2018-08-30 06:32:27 +00:00
|
|
|
printout(config, cpu, 0, counter, uval, prefix, run, ena, 1.0,
|
2017-12-05 14:03:05 +00:00
|
|
|
&rt_stat);
|
2010-11-16 09:05:01 +00:00
|
|
|
|
2011-08-15 20:22:33 +00:00
|
|
|
fputc('\n', output);
|
2010-11-16 09:05:01 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2018-08-30 06:32:27 +00:00
|
|
|
static void print_no_aggr_metric(struct perf_stat_config *config,
|
2018-08-30 06:32:34 +00:00
|
|
|
struct perf_evlist *evlist,
|
2018-08-30 06:32:27 +00:00
|
|
|
char *prefix)
|
2016-03-03 23:57:37 +00:00
|
|
|
{
|
|
|
|
int cpu;
|
|
|
|
int nrcpus = 0;
|
|
|
|
struct perf_evsel *counter;
|
|
|
|
u64 ena, run, val;
|
|
|
|
double uval;
|
|
|
|
|
2018-08-30 06:32:34 +00:00
|
|
|
nrcpus = evlist->cpus->nr;
|
2016-03-03 23:57:37 +00:00
|
|
|
for (cpu = 0; cpu < nrcpus; cpu++) {
|
|
|
|
bool first = true;
|
|
|
|
|
|
|
|
if (prefix)
|
2018-08-30 06:32:27 +00:00
|
|
|
fputs(prefix, config->output);
|
2018-08-30 06:32:34 +00:00
|
|
|
evlist__for_each_entry(evlist, counter) {
|
2017-08-31 19:40:35 +00:00
|
|
|
if (is_duration_time(counter))
|
|
|
|
continue;
|
2016-03-03 23:57:37 +00:00
|
|
|
if (first) {
|
2018-08-30 06:32:28 +00:00
|
|
|
aggr_printout(config, counter, cpu, 0);
|
2016-03-03 23:57:37 +00:00
|
|
|
first = false;
|
|
|
|
}
|
|
|
|
val = perf_counts(counter->counts, cpu, 0)->val;
|
|
|
|
ena = perf_counts(counter->counts, cpu, 0)->ena;
|
|
|
|
run = perf_counts(counter->counts, cpu, 0)->run;
|
|
|
|
|
|
|
|
uval = val * counter->scale;
|
2018-08-30 06:32:27 +00:00
|
|
|
printout(config, cpu, 0, counter, uval, prefix, run, ena, 1.0,
|
2017-12-05 14:03:05 +00:00
|
|
|
&rt_stat);
|
2016-03-03 23:57:37 +00:00
|
|
|
}
|
2018-08-30 06:32:27 +00:00
|
|
|
fputc('\n', config->output);
|
2016-03-03 23:57:37 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2016-03-03 23:57:36 +00:00
|
|
|
static int aggr_header_lens[] = {
|
|
|
|
[AGGR_CORE] = 18,
|
|
|
|
[AGGR_SOCKET] = 12,
|
2016-03-03 23:57:37 +00:00
|
|
|
[AGGR_NONE] = 6,
|
2016-03-03 23:57:36 +00:00
|
|
|
[AGGR_THREAD] = 24,
|
|
|
|
[AGGR_GLOBAL] = 0,
|
|
|
|
};
|
|
|
|
|
2016-05-24 19:52:39 +00:00
|
|
|
static const char *aggr_header_csv[] = {
|
|
|
|
[AGGR_CORE] = "core,cpus,",
|
|
|
|
[AGGR_SOCKET] = "socket,cpus,",
|
|
|
|
[AGGR_NONE] = "cpu,",
|
|
|
|
[AGGR_THREAD] = "comm-pid,",
|
|
|
|
[AGGR_GLOBAL] = ""
|
|
|
|
};
|
|
|
|
|
2018-08-30 06:32:27 +00:00
|
|
|
static void print_metric_headers(struct perf_stat_config *config,
|
2018-08-30 06:32:34 +00:00
|
|
|
struct perf_evlist *evlist,
|
2018-08-30 06:32:27 +00:00
|
|
|
const char *prefix, bool no_indent)
|
2016-03-03 23:57:36 +00:00
|
|
|
{
|
|
|
|
struct perf_stat_output_ctx out;
|
|
|
|
struct perf_evsel *counter;
|
|
|
|
struct outstate os = {
|
2018-08-30 06:32:27 +00:00
|
|
|
.fh = config->output
|
2016-03-03 23:57:36 +00:00
|
|
|
};
|
|
|
|
|
|
|
|
if (prefix)
|
2018-08-30 06:32:27 +00:00
|
|
|
fprintf(config->output, "%s", prefix);
|
2016-03-03 23:57:36 +00:00
|
|
|
|
2018-08-30 06:32:29 +00:00
|
|
|
if (!config->csv_output && !no_indent)
|
2018-08-30 06:32:27 +00:00
|
|
|
fprintf(config->output, "%*s",
|
|
|
|
aggr_header_lens[config->aggr_mode], "");
|
2018-08-30 06:32:29 +00:00
|
|
|
if (config->csv_output) {
|
2018-08-30 06:32:27 +00:00
|
|
|
if (config->interval)
|
|
|
|
fputs("time,", config->output);
|
|
|
|
fputs(aggr_header_csv[config->aggr_mode], config->output);
|
2016-05-24 19:52:39 +00:00
|
|
|
}
|
2016-03-03 23:57:36 +00:00
|
|
|
|
|
|
|
/* Print metrics headers only */
|
2018-08-30 06:32:34 +00:00
|
|
|
evlist__for_each_entry(evlist, counter) {
|
2017-08-31 19:40:35 +00:00
|
|
|
if (is_duration_time(counter))
|
|
|
|
continue;
|
2016-03-03 23:57:36 +00:00
|
|
|
os.evsel = counter;
|
|
|
|
out.ctx = &os;
|
|
|
|
out.print_metric = print_metric_header;
|
|
|
|
out.new_line = new_line_metric;
|
perf stat: Output JSON MetricExpr metric
Add generic infrastructure to perf stat to output ratios for
"MetricExpr" entries in the event lists. Many events are more useful as
ratios than in raw form, typically some count in relation to total
ticks.
Transfer the MetricExpr information from the alias to the evsel.
We mark the events that need to be collected for MetricExpr, and also
link the events using them with a pointer. The code is careful to always
prefer the right event in the same group to minimize multiplexing
errors. At the moment only a single relation is supported.
Then add an rblist to the stat shadow code that remembers stats based on
the CPU and context.
Finally, update, retrieve, and print these values similarly to the
existing hardcoded perf metrics, using the simple expression parser
added earlier to evaluate the expression.
Normally we just output the result without further commentary, but for
--metric-only this would lead to empty columns, so in this case we use
the original event as the description.
There is no attempt to automatically add the MetricExpr event if it is
missing; instead we suggest it to the user, because the tool doesn't
have enough information to reliably construct a group that is
guaranteed to schedule. So we leave that to the user.
% perf stat -a -I 1000 -e '{unc_p_clockticks,unc_p_freq_max_os_cycles}'
1.000147889 800,085,181 unc_p_clockticks
1.000147889 93,126,241 unc_p_freq_max_os_cycles # 11.6
2.000448381 800,218,217 unc_p_clockticks
2.000448381 142,516,095 unc_p_freq_max_os_cycles # 17.8
3.000639852 800,243,057 unc_p_clockticks
3.000639852 162,292,689 unc_p_freq_max_os_cycles # 20.3
% perf stat -a -I 1000 -e '{unc_p_clockticks,unc_p_freq_max_os_cycles}' --metric-only
# time freq_max_os_cycles %
1.000127077 0.9
2.000301436 0.7
3.000456379 0.0
v2: Change from DivideBy to MetricExpr
v3: Use expr__ prefix. Support more than one other event.
v4: Update description
v5: Only print warning message once for multiple PMUs.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: http://lkml.kernel.org/r/20170320201711.14142-11-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
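For the first example above, the "# 11.6" style annotation is the evaluated
metric: 100 * unc_p_freq_max_os_cycles / unc_p_clockticks per interval
(93,126,241 / 800,085,181 * 100 = 11.6). A minimal sketch of that
evaluation; the exact MetricExpr string and the helper name are
assumptions.
/* Sketch: percent of clockticks spent at max OS frequency. */
static double metric_percent(double freq_max_os_cycles, double clockticks)
{
	return clockticks ? 100.0 * freq_max_os_cycles / clockticks : 0.0;
}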
2017-03-20 20:17:08 +00:00
|
|
|
out.force_header = true;
|
2016-03-03 23:57:36 +00:00
|
|
|
os.evsel = counter;
|
2018-08-30 06:32:28 +00:00
|
|
|
perf_stat__print_shadow_stats(config, counter, 0,
|
2016-03-03 23:57:36 +00:00
|
|
|
0,
|
2017-08-31 19:40:31 +00:00
|
|
|
&out,
|
2017-12-05 14:03:05 +00:00
|
|
|
&metric_events,
|
|
|
|
&rt_stat);
|
2016-03-03 23:57:36 +00:00
|
|
|
}
|
2018-08-30 06:32:27 +00:00
|
|
|
fputc('\n', config->output);
|
2016-03-03 23:57:36 +00:00
|
|
|
}
|
|
|
|
|
2018-08-30 06:32:27 +00:00
|
|
|
static void print_interval(struct perf_stat_config *config,
|
2018-08-30 06:32:34 +00:00
|
|
|
struct perf_evlist *evlist,
|
2018-08-30 06:32:27 +00:00
|
|
|
char *prefix, struct timespec *ts)
|
2015-06-26 09:29:26 +00:00
|
|
|
{
|
2018-08-30 06:32:31 +00:00
|
|
|
bool metric_only = config->metric_only;
|
2018-08-30 06:32:32 +00:00
|
|
|
unsigned int unit_width = config->unit_width;
|
2018-08-30 06:32:27 +00:00
|
|
|
FILE *output = config->output;
|
2015-06-26 09:29:26 +00:00
|
|
|
static int num_print_interval;
|
|
|
|
|
2018-08-30 06:32:30 +00:00
|
|
|
if (config->interval_clear)
|
2018-06-06 22:15:06 +00:00
|
|
|
puts(CONSOLE_CLEAR);
|
|
|
|
|
2018-08-30 06:32:29 +00:00
|
|
|
sprintf(prefix, "%6lu.%09lu%s", ts->tv_sec, ts->tv_nsec, config->csv_sep);
|
2015-06-26 09:29:26 +00:00
|
|
|
|
2018-08-30 06:32:30 +00:00
|
|
|
if ((num_print_interval == 0 && !config->csv_output) || config->interval_clear) {
|
2018-08-30 06:32:27 +00:00
|
|
|
switch (config->aggr_mode) {
|
2015-06-26 09:29:26 +00:00
|
|
|
case AGGR_SOCKET:
|
2016-05-24 19:52:38 +00:00
|
|
|
fprintf(output, "# time socket cpus");
|
|
|
|
if (!metric_only)
|
|
|
|
fprintf(output, " counts %*s events\n", unit_width, "unit");
|
2015-06-26 09:29:26 +00:00
|
|
|
break;
|
|
|
|
case AGGR_CORE:
|
2016-05-24 19:52:38 +00:00
|
|
|
fprintf(output, "# time core cpus");
|
|
|
|
if (!metric_only)
|
|
|
|
fprintf(output, " counts %*s events\n", unit_width, "unit");
|
2015-06-26 09:29:26 +00:00
|
|
|
break;
|
|
|
|
case AGGR_NONE:
|
2018-06-06 22:15:08 +00:00
|
|
|
fprintf(output, "# time CPU ");
|
2016-05-24 19:52:38 +00:00
|
|
|
if (!metric_only)
|
|
|
|
fprintf(output, " counts %*s events\n", unit_width, "unit");
|
2015-06-26 09:29:26 +00:00
|
|
|
break;
|
perf stat: Introduce --per-thread option (full commit message above)
2015-06-26 09:29:27 +00:00
|
|
|
case AGGR_THREAD:
|
2016-05-24 19:52:38 +00:00
|
|
|
fprintf(output, "# time comm-pid");
|
|
|
|
if (!metric_only)
|
|
|
|
fprintf(output, " counts %*s events\n", unit_width, "unit");
|
perf stat: Introduce --per-thread option (full commit message above)
2015-06-26 09:29:27 +00:00
|
|
|
break;
|
2015-06-26 09:29:26 +00:00
|
|
|
case AGGR_GLOBAL:
|
|
|
|
default:
|
2016-05-24 19:52:38 +00:00
|
|
|
fprintf(output, "# time");
|
|
|
|
if (!metric_only)
|
|
|
|
fprintf(output, " counts %*s events\n", unit_width, "unit");
|
2015-10-16 10:41:04 +00:00
|
|
|
case AGGR_UNSET:
|
|
|
|
break;
|
2015-06-26 09:29:26 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2018-08-30 06:32:30 +00:00
|
|
|
if ((num_print_interval == 0 || config->interval_clear) && metric_only)
|
2018-08-30 06:32:34 +00:00
|
|
|
print_metric_headers(config, evlist, " ", true);
|
2015-06-26 09:29:26 +00:00
|
|
|
if (++num_print_interval == 25)
|
|
|
|
num_print_interval = 0;
|
|
|
|
}
|
|
|
|
|
2018-08-30 06:32:27 +00:00
|
|
|
static void print_header(struct perf_stat_config *config,
|
2018-08-30 06:32:33 +00:00
|
|
|
struct target *_target,
|
2018-08-30 06:32:27 +00:00
|
|
|
int argc, const char **argv)
|
2009-06-13 12:57:28 +00:00
|
|
|
{
|
2018-08-30 06:32:27 +00:00
|
|
|
FILE *output = config->output;
|
2011-01-03 18:39:04 +00:00
|
|
|
int i;
|
2009-06-13 12:57:28 +00:00
|
|
|
|
2009-04-20 13:37:32 +00:00
|
|
|
fflush(stdout);
|
|
|
|
|
2018-08-30 06:32:29 +00:00
|
|
|
if (!config->csv_output) {
|
2011-08-15 20:22:33 +00:00
|
|
|
fprintf(output, "\n");
|
|
|
|
fprintf(output, " Performance counter stats for ");
|
2018-08-30 06:32:33 +00:00
|
|
|
if (_target->system_wide)
|
2013-09-28 20:27:58 +00:00
|
|
|
fprintf(output, "\'system wide");
|
2018-08-30 06:32:33 +00:00
|
|
|
else if (_target->cpu_list)
|
|
|
|
fprintf(output, "\'CPU(s) %s", _target->cpu_list);
|
|
|
|
else if (!target__has_task(_target)) {
|
2015-11-05 14:40:55 +00:00
|
|
|
fprintf(output, "\'%s", argv ? argv[0] : "pipe");
|
|
|
|
for (i = 1; argv && (i < argc); i++)
|
2011-08-15 20:22:33 +00:00
|
|
|
fprintf(output, " %s", argv[i]);
|
2018-08-30 06:32:33 +00:00
|
|
|
} else if (_target->pid)
|
|
|
|
fprintf(output, "process id \'%s", _target->pid);
|
perf stat: Add csv-style output
This patch adds an option (-x/--field-separator) to print counts using a
CSV-style output. The user can pass a custom separator. This makes it very easy
to import counts directly into your favorite spreadsheet without having to
write scripts.
Example:
$ perf stat --field-separator=, -a -- sleep 1
4009.961740,task-clock-msecs
13,context-switches
2,CPU-migrations
189,page-faults
9596385684,cycles
3493659441,instructions
872897069,branches
41562,branch-misses
22424,cache-references
1289,cache-misses
Works also in non-aggregated mode:
$ perf stat -x , -a -A -- sleep 1
CPU0,1002.526168,task-clock-msecs
CPU1,1002.528365,task-clock-msecs
CPU2,1002.523360,task-clock-msecs
CPU3,1002.519878,task-clock-msecs
CPU0,1,context-switches
CPU1,5,context-switches
CPU2,5,context-switches
CPU3,6,context-switches
CPU0,0,CPU-migrations
CPU1,1,CPU-migrations
CPU2,0,CPU-migrations
CPU3,1,CPU-migrations
CPU0,2,page-faults
CPU1,6,page-faults
CPU2,9,page-faults
CPU3,174,page-faults
CPU0,2399439771,cycles
CPU1,2380369063,cycles
CPU2,2399142710,cycles
CPU3,2373161192,cycles
CPU0,872900618,instructions
CPU1,873030960,instructions
CPU2,872714525,instructions
CPU3,874460580,instructions
CPU0,221556839,branches
CPU1,218134342,branches
CPU2,218161730,branches
CPU3,218284093,branches
CPU0,18556,branch-misses
CPU1,1449,branch-misses
CPU2,3447,branch-misses
CPU3,12714,branch-misses
CPU0,8330,cache-references
CPU1,313844,cache-references
CPU2,47993728,cache-references
CPU3,826481,cache-references
CPU0,272,cache-misses
CPU1,5360,cache-misses
CPU2,1342193,cache-misses
CPU3,13992,cache-misses
This second version adds the ability to name a separator and uses
field-separator as the long option to be consistent with perf report.
Committer note: Since we enabled --big-num by default in 201e0b0 and -x can't
be used with it, we need to notice whether the user explicitly enabled or
disabled -B, and add code to disable big_num if the user didn't explicitly
set --big_num when -x is used.
Cc: David S. Miller <davem@davemloft.net>
Cc: Frederik Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: paulus@samba.org
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Robert Richter <robert.richter@amd.com>
LKML-Reference: <4cf68aa7.0fedd80a.5294.1203@mx.google.com>
Signed-off-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
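A minimal sketch of the -x line shape; the helper name is an assumption,
and the field layout mirrors the samples above: an optional CPU column
with -A, then the value, then the event name.
#include <stdio.h>
/* Sketch: one CSV-style output line with a user-chosen separator. */
static void print_csv_line(FILE *out, const char *sep,
			   const char *cpu, double val, const char *name)
{
	if (cpu)
		fprintf(out, "%s%s", cpu, sep);
	fprintf(out, "%f%s%s\n", val, sep, name);
}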
2010-12-01 16:49:05 +00:00
|
|
|
else
|
2018-08-30 06:32:33 +00:00
|
|
|
fprintf(output, "thread id \'%s", _target->tid);
|
2009-06-03 17:36:07 +00:00
|
|
|
|
2011-08-15 20:22:33 +00:00
|
|
|
fprintf(output, "\'");
|
2018-08-30 06:32:36 +00:00
|
|
|
if (config->run_count > 1)
|
|
|
|
fprintf(output, " (%d runs)", config->run_count);
|
2011-08-15 20:22:33 +00:00
|
|
|
fprintf(output, ":\n\n");
|
perf stat: Add csv-style output (full commit message above)
2010-12-01 16:49:05 +00:00
|
|
|
}
|
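A fixed separator also makes quick ad-hoc post-processing easy on the command
line. An illustrative one-liner (not from the original commit; note that perf
stat writes its counts to stderr) summing the per-CPU cache-misses from the
non-aggregated output above:
$ perf stat -x , -a -A -- sleep 1 2>&1 | \
        awk -F, '$3 == "cache-misses" { sum += $2 } END { print sum }'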
2015-06-26 09:29:26 +00:00
|
|
|
}
|
|
|
|
|
2018-04-23 09:08:20 +00:00
|
|
|
static int get_precision(double num)
|
|
|
|
{
|
|
|
|
if (num > 1)
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
return lround(ceil(-log10(num)));
|
|
|
|
}
|
|
|
|
|
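The precision rule above is easiest to see with numbers. A minimal standalone
sketch (a hypothetical test program, not part of builtin-stat.c) of the same
computation:

#include <math.h>
#include <stdio.h>

/* Mirrors get_precision(): 0 digits for sd > 1, otherwise just enough
 * digits to resolve the first significant digit of sd. */
static int precision_for(double sd)
{
	return sd > 1 ? 0 : (int) lround(ceil(-log10(sd)));
}

int main(void)
{
	/* sd = 0.003: -log10(0.003) ~= 2.52, ceil rounds up to 3 */
	printf("%d\n", precision_for(0.003));		/* 3 */
	/* print_footer() below adds 2, so avg/sd print with 5 decimals */
	printf("%d\n", precision_for(0.003) + 2);	/* 5 */
	return 0;
}

(Build with -lm.)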
2018-08-30 06:32:36 +00:00
|
|
|
static void print_table(struct perf_stat_config *config,
|
|
|
|
FILE *output, int precision, double avg)
|
2018-04-23 09:08:21 +00:00
|
|
|
{
|
|
|
|
char tmp[64];
|
|
|
|
int idx, indent = 0;
|
|
|
|
|
|
|
|
scnprintf(tmp, 64, " %17.*f", precision, avg);
|
|
|
|
while (tmp[indent] == ' ')
|
|
|
|
indent++;
|
|
|
|
|
|
|
|
fprintf(output, "%*s# Table of individual measurements:\n", indent, "");
|
|
|
|
|
2018-08-30 06:32:36 +00:00
|
|
|
for (idx = 0; idx < config->run_count; idx++) {
|
2018-04-23 09:08:21 +00:00
|
|
|
double run = (double) walltime_run[idx] / NSEC_PER_SEC;
|
2018-04-23 09:08:22 +00:00
|
|
|
int h, n = 1 + abs((int) (100.0 * (run - avg)/run) / 5);
|
2018-04-23 09:08:21 +00:00
|
|
|
|
2018-04-23 09:08:22 +00:00
|
|
|
fprintf(output, " %17.*f (%+.*f) ",
|
2018-04-23 09:08:21 +00:00
|
|
|
precision, run, precision, run - avg);
|
2018-04-23 09:08:22 +00:00
|
|
|
|
|
|
|
for (h = 0; h < n; h++)
|
|
|
|
fprintf(output, "#");
|
|
|
|
|
|
|
|
fprintf(output, "\n");
|
2018-04-23 09:08:21 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
fprintf(output, "\n%*s# Final result:\n", indent, "");
|
|
|
|
}
|
|
|
|
|
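For each run, print_table() emits the wall time, its signed deviation from the
mean, and a bar of '#' marks: one baseline mark plus one per full 5% of
relative deviation. For a hypothetical five-run session averaging 3.0 s (all
values illustrative), perf stat -r 5 --table would produce output shaped like:

             # Table of individual measurements:
             2.9500 (-0.0500) #
             3.0100 (+0.0100) #
             2.9000 (-0.1000) #
             3.2000 (+0.2000) ##
             2.9400 (-0.0600) #

             # Final result:
             3.0000 +- 0.0530 seconds time elapsed ( +- 1.77% )

Here the stddev of the mean is 0.053 s, so get_precision() yields 2 and the
values print with 2 + 2 = 4 decimals; only the 3.2 s outlier deviates by more
than 5% and earns a second '#'.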
2018-06-05 12:13:13 +00:00
|
|
|
static double timeval2double(struct timeval *t)
|
|
|
|
{
|
|
|
|
return t->tv_sec + (double) t->tv_usec/USEC_PER_SEC;
|
|
|
|
}
|
|
|
|
|
2018-08-30 06:32:27 +00:00
|
|
|
static void print_footer(struct perf_stat_config *config)
|
2015-06-26 09:29:26 +00:00
|
|
|
{
|
2018-08-30 06:32:40 +00:00
|
|
|
double avg = avg_stats(config->walltime_nsecs_stats) / NSEC_PER_SEC;
|
2018-08-30 06:32:27 +00:00
|
|
|
FILE *output = config->output;
|
2017-05-23 01:00:16 +00:00
|
|
|
int n;
|
2015-07-21 12:31:24 +00:00
|
|
|
|
2018-08-30 06:32:41 +00:00
|
|
|
if (!config->null_run)
|
2015-06-26 09:29:26 +00:00
|
|
|
fprintf(output, "\n");
|
2018-04-23 09:08:20 +00:00
|
|
|
|
2018-08-30 06:32:36 +00:00
|
|
|
if (config->run_count == 1) {
|
2018-04-23 09:08:20 +00:00
|
|
|
fprintf(output, " %17.9f seconds time elapsed", avg);
|
2018-06-05 12:13:13 +00:00
|
|
|
|
2018-08-30 06:32:44 +00:00
|
|
|
if (config->ru_display) {
|
|
|
|
double ru_utime = timeval2double(&config->ru_data.ru_utime);
|
|
|
|
double ru_stime = timeval2double(&config->ru_data.ru_stime);
|
2018-06-05 12:13:13 +00:00
|
|
|
|
|
|
|
fprintf(output, "\n\n");
|
|
|
|
fprintf(output, " %17.9f seconds user\n", ru_utime);
|
|
|
|
fprintf(output, " %17.9f seconds sys\n", ru_stime);
|
|
|
|
}
|
2018-04-23 09:08:20 +00:00
|
|
|
} else {
|
2018-08-30 06:32:40 +00:00
|
|
|
double sd = stddev_stats(config->walltime_nsecs_stats) / NSEC_PER_SEC;
|
2018-04-23 09:08:20 +00:00
|
|
|
/*
|
|
|
|
* Display at most 2 more significant
|
|
|
|
* digits than the stddev inaccuracy.
|
|
|
|
*/
|
|
|
|
int precision = get_precision(sd) + 2;
|
|
|
|
|
2018-04-23 09:08:21 +00:00
|
|
|
if (walltime_run_table)
|
2018-08-30 06:32:36 +00:00
|
|
|
print_table(config, output, precision, avg);
|
2018-04-23 09:08:21 +00:00
|
|
|
|
2018-04-23 09:08:20 +00:00
|
|
|
fprintf(output, " %17.*f +- %.*f seconds time elapsed",
|
|
|
|
precision, avg, precision, sd);
|
|
|
|
|
2018-08-30 06:32:27 +00:00
|
|
|
print_noise_pct(config, sd, avg);
|
2015-06-26 09:29:26 +00:00
|
|
|
}
|
|
|
|
fprintf(output, "\n\n");
|
perf stat: Issue a HW watchdog disable hint
When using perf stat on an AMD F15h system with the default hw events
attributes, some of the events don't get counted:
Performance counter stats for 'sleep 1':
0.749208 task-clock (msec) # 0.001 CPUs utilized
1 context-switches # 0.001 M/sec
0 cpu-migrations # 0.000 K/sec
54 page-faults # 0.072 M/sec
1,122,815 cycles # 1.499 GHz
286,740 stalled-cycles-frontend # 25.54% frontend cycles idle
<not counted> stalled-cycles-backend (0.00%)
^^^^^^^^^^^^
<not counted> instructions (0.00%)
^^^^^^^^^^^^
<not counted> branches (0.00%)
<not counted> branch-misses (0.00%)
1.001550070 seconds time elapsed
The reason is that the HW watchdog consumes one PMU counter, so when perf
tries to schedule 6 events on 6 counters and some of those counters are
constrained by the hardware to only a specific subset of PMCs, the event
scheduling fails.
So issue a hint to disable the HW watchdog around a perf stat session.
Committer note:
Testing it...
# perf stat -d usleep 1
Performance counter stats for 'usleep 1':
1.180203 task-clock (msec) # 0.490 CPUs utilized
1 context-switches # 0.847 K/sec
0 cpu-migrations # 0.000 K/sec
54 page-faults # 0.046 M/sec
184,754 cycles # 0.157 GHz
714,553 instructions # 3.87 insn per cycle
154,661 branches # 131.046 M/sec
7,247 branch-misses # 4.69% of all branches
219,984 L1-dcache-loads # 186.395 M/sec
17,600 L1-dcache-load-misses # 8.00% of all L1-dcache hits (90.16%)
<not counted> LLC-loads (0.00%)
<not counted> LLC-load-misses (0.00%)
0.002406823 seconds time elapsed
Some events weren't counted. Try disabling the NMI watchdog:
echo 0 > /proc/sys/kernel/nmi_watchdog
perf stat ...
echo 1 > /proc/sys/kernel/nmi_watchdog
#
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Robert Richter <rric@kernel.org>
Cc: Vince Weaver <vince@deater.net>
Link: http://lkml.kernel.org/r/20170211183218.ijnvb5f7ciyuunx4@pd.tnic
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-02-07 00:40:05 +00:00
|
|
|
|
2018-08-30 06:32:42 +00:00
|
|
|
if (config->print_free_counters_hint &&
|
2017-05-23 01:00:16 +00:00
|
|
|
sysctl__read_int("kernel/nmi_watchdog", &n) >= 0 &&
|
|
|
|
n > 0)
|
perf stat: Issue a HW watchdog disable hint
When using perf stat on an AMD F15h system with the default hw events
attributes, some of the events don't get counted:
Performance counter stats for 'sleep 1':
0.749208 task-clock (msec) # 0.001 CPUs utilized
1 context-switches # 0.001 M/sec
0 cpu-migrations # 0.000 K/sec
54 page-faults # 0.072 M/sec
1,122,815 cycles # 1.499 GHz
286,740 stalled-cycles-frontend # 25.54% frontend cycles idle
<not counted> stalled-cycles-backend (0.00%)
^^^^^^^^^^^^
<not counted> instructions (0.00%)
^^^^^^^^^^^^
<not counted> branches (0.00%)
<not counted> branch-misses (0.00%)
1.001550070 seconds time elapsed
The reason is that the HW watchdog consumes one PMU counter, so when perf
tries to schedule 6 events on 6 counters and some of those counters are
constrained by the hardware to only a specific subset of PMCs, the event
scheduling fails.
So issue a hint to disable the HW watchdog around a perf stat session.
Committer note:
Testing it...
# perf stat -d usleep 1
Performance counter stats for 'usleep 1':
1.180203 task-clock (msec) # 0.490 CPUs utilized
1 context-switches # 0.847 K/sec
0 cpu-migrations # 0.000 K/sec
54 page-faults # 0.046 M/sec
184,754 cycles # 0.157 GHz
714,553 instructions # 3.87 insn per cycle
154,661 branches # 131.046 M/sec
7,247 branch-misses # 4.69% of all branches
219,984 L1-dcache-loads # 186.395 M/sec
17,600 L1-dcache-load-misses # 8.00% of all L1-dcache hits (90.16%)
<not counted> LLC-loads (0.00%)
<not counted> LLC-load-misses (0.00%)
0.002406823 seconds time elapsed
Some events weren't counted. Try disabling the NMI watchdog:
echo 0 > /proc/sys/kernel/nmi_watchdog
perf stat ...
echo 1 > /proc/sys/kernel/nmi_watchdog
#
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Robert Richter <rric@kernel.org>
Cc: Vince Weaver <vince@deater.net>
Link: http://lkml.kernel.org/r/20170211183218.ijnvb5f7ciyuunx4@pd.tnic
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-02-07 00:40:05 +00:00
|
|
|
fprintf(output,
|
|
|
|
"Some events weren't counted. Try disabling the NMI watchdog:\n"
|
|
|
|
" echo 0 > /proc/sys/kernel/nmi_watchdog\n"
|
|
|
|
" perf stat ...\n"
|
|
|
|
" echo 1 > /proc/sys/kernel/nmi_watchdog\n");
|
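In plain terms, the watchdog hint above is printed only when all three
conditions hold: some counter could not be scheduled
(config->print_free_counters_hint is set), the kernel/nmi_watchdog sysctl is
readable, and the watchdog is currently enabled (n > 0).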
2018-04-24 18:20:11 +00:00
|
|
|
|
2018-08-30 06:32:43 +00:00
|
|
|
if (config->print_mixed_hw_group_error)
|
2018-04-24 18:20:11 +00:00
|
|
|
fprintf(output,
|
|
|
|
"The events in group usually have to be from "
|
|
|
|
"the same PMU. Try reorganizing the group.\n");
|
2015-06-26 09:29:26 +00:00
|
|
|
}
|
|
|
|
|
2018-08-30 06:32:24 +00:00
|
|
|
static void
|
|
|
|
perf_evlist__print_counters(struct perf_evlist *evlist,
|
2018-08-30 06:32:26 +00:00
|
|
|
struct perf_stat_config *config,
|
2018-08-30 06:32:33 +00:00
|
|
|
struct target *_target,
|
2018-08-30 06:32:24 +00:00
|
|
|
struct timespec *ts,
|
|
|
|
int argc, const char **argv)
|
2015-06-26 09:29:26 +00:00
|
|
|
{
|
2018-08-30 06:32:31 +00:00
|
|
|
bool metric_only = config->metric_only;
|
2018-08-30 06:32:26 +00:00
|
|
|
int interval = config->interval;
|
2015-06-26 09:29:26 +00:00
|
|
|
struct perf_evsel *counter;
|
|
|
|
char buf[64], *prefix = NULL;
|
|
|
|
|
|
|
|
if (interval)
|
2018-08-30 06:32:34 +00:00
|
|
|
print_interval(config, evlist, prefix = buf, ts);
|
2015-06-26 09:29:26 +00:00
|
|
|
else
|
2018-08-30 06:32:33 +00:00
|
|
|
print_header(config, _target, argc, argv);
|
2009-05-29 07:10:54 +00:00
|
|
|
|
2016-03-03 23:57:36 +00:00
|
|
|
if (metric_only) {
|
|
|
|
static int num_print_iv;
|
|
|
|
|
2016-05-24 19:52:38 +00:00
|
|
|
if (num_print_iv == 0 && !interval)
|
2018-08-30 06:32:34 +00:00
|
|
|
print_metric_headers(config, evlist, prefix, false);
|
2016-03-03 23:57:36 +00:00
|
|
|
if (num_print_iv++ == 25)
|
|
|
|
num_print_iv = 0;
|
2018-08-30 06:32:26 +00:00
|
|
|
if (config->aggr_mode == AGGR_GLOBAL && prefix)
|
|
|
|
fprintf(config->output, "%s", prefix);
|
2016-03-03 23:57:36 +00:00
|
|
|
}
|
|
|
|
|
2018-08-30 06:32:26 +00:00
|
|
|
switch (config->aggr_mode) {
|
2013-02-14 12:57:29 +00:00
|
|
|
case AGGR_CORE:
|
2013-02-14 12:57:27 +00:00
|
|
|
case AGGR_SOCKET:
|
2018-08-30 06:32:34 +00:00
|
|
|
print_aggr(config, evlist, prefix);
|
2013-02-14 12:57:27 +00:00
|
|
|
break;
|
perf stat: Introduce --per-thread option
Currently the values for all tasks given via the -p option get aggregated
and printed as single values.
Add a --per-thread option to print values per thread.
$ perf stat -e cycles,instructions --per-thread -p 30190,30242
^C
Performance counter stats for process id '30190,30242':
cat-30190 0 cycles
yes-30242 3,842,525,421 cycles
cat-30190 0 instructions
yes-30242 10,370,817,010 instructions
1.143155657 seconds time elapsed
Also works under interval mode:
$ perf stat -e cycles,instructions --per-thread -p 30190,30242 -I 1000
# time comm-pid counts unit events
1.000073435 cat-30190 89,058 cycles
1.000073435 yes-30242 3,360,786,902 cycles (100.00%)
1.000073435 cat-30190 14,066 instructions
1.000073435 yes-30242 9,069,937,462 instructions
2.000204830 cat-30190 0 cycles
2.000204830 yes-30242 3,351,667,626 cycles
2.000204830 cat-30190 0 instructions
2.000204830 yes-30242 9,045,796,885 instructions
^C 2.771286639 cat-30190 0 cycles
2.771286639 yes-30242 2,593,884,166 cycles
2.771286639 cat-30190 0 instructions
2.771286639 yes-30242 7,001,171,191 instructions
It works only with the -t and -p options; otherwise the following error is
printed:
$ perf stat -e cycles --per-thread -I 1000 ls
The --per-thread option is only available when monitoring via -p -t options.
-p, --pid <pid> stat events on existing process id
-t, --tid <tid> stat events on existing thread id
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1435310967-14570-23-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-06-26 09:29:27 +00:00
|
|
|
case AGGR_THREAD:
|
2018-08-30 06:32:24 +00:00
|
|
|
evlist__for_each_entry(evlist, counter) {
|
2017-08-31 19:40:35 +00:00
|
|
|
if (is_duration_time(counter))
|
|
|
|
continue;
|
2018-08-30 06:32:27 +00:00
|
|
|
print_aggr_thread(config, counter, prefix);
|
2017-08-31 19:40:35 +00:00
|
|
|
}
|
perf stat: Introduce --per-thread option
Currently the values for all tasks given via the -p option get aggregated
and printed as single values.
Add a --per-thread option to print values per thread.
$ perf stat -e cycles,instructions --per-thread -p 30190,30242
^C
Performance counter stats for process id '30190,30242':
cat-30190 0 cycles
yes-30242 3,842,525,421 cycles
cat-30190 0 instructions
yes-30242 10,370,817,010 instructions
1.143155657 seconds time elapsed
Also works under interval mode:
$ perf stat -e cycles,instructions --per-thread -p 30190,30242 -I 1000
# time comm-pid counts unit events
1.000073435 cat-30190 89,058 cycles
1.000073435 yes-30242 3,360,786,902 cycles (100.00%)
1.000073435 cat-30190 14,066 instructions
1.000073435 yes-30242 9,069,937,462 instructions
2.000204830 cat-30190 0 cycles
2.000204830 yes-30242 3,351,667,626 cycles
2.000204830 cat-30190 0 instructions
2.000204830 yes-30242 9,045,796,885 instructions
^C 2.771286639 cat-30190 0 cycles
2.771286639 yes-30242 2,593,884,166 cycles
2.771286639 cat-30190 0 instructions
2.771286639 yes-30242 7,001,171,191 instructions
It works only with the -t and -p options; otherwise the following error is
printed:
$ perf stat -e cycles --per-thread -I 1000 ls
The --per-thread option is only available when monitoring via -p -t options.
-p, --pid <pid> stat events on existing process id
-t, --tid <tid> stat events on existing thread id
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1435310967-14570-23-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-06-26 09:29:27 +00:00
|
|
|
break;
|
2013-02-14 12:57:27 +00:00
|
|
|
case AGGR_GLOBAL:
|
2018-08-30 06:32:24 +00:00
|
|
|
evlist__for_each_entry(evlist, counter) {
|
2017-08-31 19:40:35 +00:00
|
|
|
if (is_duration_time(counter))
|
|
|
|
continue;
|
2018-08-30 06:32:27 +00:00
|
|
|
print_counter_aggr(config, counter, prefix);
|
2017-08-31 19:40:35 +00:00
|
|
|
}
|
2016-03-03 23:57:36 +00:00
|
|
|
if (metric_only)
|
2018-08-30 06:32:26 +00:00
|
|
|
fputc('\n', config->output);
|
2013-02-14 12:57:27 +00:00
|
|
|
break;
|
|
|
|
case AGGR_NONE:
|
2016-03-03 23:57:37 +00:00
|
|
|
if (metric_only)
|
2018-08-30 06:32:34 +00:00
|
|
|
print_no_aggr_metric(config, evlist, prefix);
|
2016-03-03 23:57:37 +00:00
|
|
|
else {
|
2018-08-30 06:32:24 +00:00
|
|
|
evlist__for_each_entry(evlist, counter) {
|
2017-08-31 19:40:35 +00:00
|
|
|
if (is_duration_time(counter))
|
|
|
|
continue;
|
2018-08-30 06:32:27 +00:00
|
|
|
print_counter(config, counter, prefix);
|
2017-08-31 19:40:35 +00:00
|
|
|
}
|
2016-03-03 23:57:37 +00:00
|
|
|
}
|
2013-02-14 12:57:27 +00:00
|
|
|
break;
|
2015-10-16 10:41:04 +00:00
|
|
|
case AGGR_UNSET:
|
2013-02-14 12:57:27 +00:00
|
|
|
default:
|
|
|
|
break;
|
2010-11-16 09:05:01 +00:00
|
|
|
}
|
2009-04-20 13:37:32 +00:00
|
|
|
|
2018-08-30 06:32:29 +00:00
|
|
|
if (!interval && !config->csv_output)
|
2018-08-30 06:32:27 +00:00
|
|
|
print_footer(config);
|
2015-06-26 09:29:26 +00:00
|
|
|
|
2018-08-30 06:32:26 +00:00
|
|
|
fflush(config->output);
|
2009-04-20 13:37:32 +00:00
|
|
|
}
|
|
|
|
|
2018-08-30 06:32:24 +00:00
|
|
|
static void print_counters(struct timespec *ts, int argc, const char **argv)
|
|
|
|
{
|
2018-08-30 06:32:25 +00:00
|
|
|
/* Do not print anything if we record to the pipe. */
|
|
|
|
if (STAT_RECORD && perf_stat.data.is_pipe)
|
|
|
|
return;
|
|
|
|
|
2018-08-30 06:32:33 +00:00
|
|
|
perf_evlist__print_counters(evsel_list, &stat_config, &target,
|
2018-08-30 06:32:26 +00:00
|
|
|
ts, argc, argv);
|
2018-08-30 06:32:24 +00:00
|
|
|
}
|
|
|
|
|
2009-06-10 13:55:59 +00:00
|
|
|
static volatile int signr = -1;
|
|
|
|
|
2009-05-26 07:17:18 +00:00
|
|
|
static void skip_signal(int signo)
|
2009-04-20 13:37:32 +00:00
|
|
|
{
|
2015-07-21 12:31:25 +00:00
|
|
|
if ((child_pid == -1) || stat_config.interval)
|
2009-12-31 08:05:50 +00:00
|
|
|
done = 1;
|
|
|
|
|
2009-06-10 13:55:59 +00:00
|
|
|
signr = signo;
|
2013-06-04 15:44:26 +00:00
|
|
|
/*
|
|
|
|
render child_pid harmless so that we
|
|
|
|
* won't send SIGTERM to a random
|
|
|
|
process in case of a race condition
|
|
|
|
* and fast PID recycling
|
|
|
|
*/
|
|
|
|
child_pid = -1;
|
2009-06-10 13:55:59 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
static void sig_atexit(void)
|
|
|
|
{
|
2013-06-04 15:44:26 +00:00
|
|
|
sigset_t set, oset;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* avoid a race condition with the SIGCHLD handler
|
|
|
|
* in skip_signal() which is modifying child_pid
|
|
|
|
* the goal is to avoid sending SIGTERM to a random
|
|
|
|
* process
|
|
|
|
*/
|
|
|
|
sigemptyset(&set);
|
|
|
|
sigaddset(&set, SIGCHLD);
|
|
|
|
sigprocmask(SIG_BLOCK, &set, &oset);
|
|
|
|
|
2009-10-04 00:35:01 +00:00
|
|
|
if (child_pid != -1)
|
|
|
|
kill(child_pid, SIGTERM);
|
|
|
|
|
2013-06-04 15:44:26 +00:00
|
|
|
sigprocmask(SIG_SETMASK, &oset, NULL);
|
|
|
|
|
2009-06-10 13:55:59 +00:00
|
|
|
if (signr == -1)
|
|
|
|
return;
|
|
|
|
|
|
|
|
signal(signr, SIG_DFL);
|
|
|
|
kill(getpid(), signr);
|
2009-05-26 07:17:18 +00:00
|
|
|
}
|
|
|
|
|
2012-09-10 22:15:03 +00:00
|
|
|
static int stat__set_big_num(const struct option *opt __maybe_unused,
|
|
|
|
const char *s __maybe_unused, int unset)
|
perf stat: Add csv-style output
This patch adds an option (-x/--field-separator) to print counts using a
CSV-style output. The user can pass a custom separator. This makes it very easy
to import counts directly into your favorite spreadsheet without having to
write scripts.
Example:
$ perf stat --field-separator=, -a -- sleep 1
4009.961740,task-clock-msecs
13,context-switches
2,CPU-migrations
189,page-faults
9596385684,cycles
3493659441,instructions
872897069,branches
41562,branch-misses
22424,cache-references
1289,cache-misses
It also works in non-aggregated mode:
$ perf stat -x , -a -A -- sleep 1
CPU0,1002.526168,task-clock-msecs
CPU1,1002.528365,task-clock-msecs
CPU2,1002.523360,task-clock-msecs
CPU3,1002.519878,task-clock-msecs
CPU0,1,context-switches
CPU1,5,context-switches
CPU2,5,context-switches
CPU3,6,context-switches
CPU0,0,CPU-migrations
CPU1,1,CPU-migrations
CPU2,0,CPU-migrations
CPU3,1,CPU-migrations
CPU0,2,page-faults
CPU1,6,page-faults
CPU2,9,page-faults
CPU3,174,page-faults
CPU0,2399439771,cycles
CPU1,2380369063,cycles
CPU2,2399142710,cycles
CPU3,2373161192,cycles
CPU0,872900618,instructions
CPU1,873030960,instructions
CPU2,872714525,instructions
CPU3,874460580,instructions
CPU0,221556839,branches
CPU1,218134342,branches
CPU2,218161730,branches
CPU3,218284093,branches
CPU0,18556,branch-misses
CPU1,1449,branch-misses
CPU2,3447,branch-misses
CPU3,12714,branch-misses
CPU0,8330,cache-references
CPU1,313844,cache-references
CPU2,47993728,cache-references
CPU3,826481,cache-references
CPU0,272,cache-misses
CPU1,5360,cache-misses
CPU2,1342193,cache-misses
CPU3,13992,cache-misses
This second version adds the ability to name a separator and uses
field-separator as the long option to be consistent with perf report.
Committer note: Since we enabled --big-num by default in 201e0b0 and -x can't be
used with it, we need to notice if the user explicitly enabled or disabled -B, and
add code to disable big_num if the user didn't explicitly set --big-num when
-x is used.
Cc: David S. Miller <davem@davemloft.net>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: paulus@samba.org
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Robert Richter <robert.richter@amd.com>
LKML-Reference: <4cf68aa7.0fedd80a.5294.1203@mx.google.com>
Signed-off-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2010-12-01 16:49:05 +00:00
|
|
|
{
|
|
|
|
big_num_opt = unset ? 0 : 1;
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
perf stat: Basic support for TopDown in perf stat
Add basic plumbing for TopDown in perf stat
TopDown is intended to replace the frontend cycles idle/ backend cycles
idle metrics in standard perf stat output. These metrics are not
reliable in many workloads, due to out of order effects.
This implements a new --topdown mode in perf stat (similar to
--transaction) that measures the pipeline bottlenecks using
standardized formulas. The measurement can be all done with 5 counters
(one fixed counter)
The result are four metrics:
FrontendBound, BackendBound, BadSpeculation, Retiring
that describe the CPU pipeline behavior on a high level.
The full top-down methodology has many hierarchical metrics. This
implementation only supports level 1 which can be collected without
multiplexing. A full implementation of top down on top of perf is
available in pmu-tools toplev. (http://github.com/andikleen/pmu-tools)
The current version works on Intel Core CPUs starting with Sandy Bridge,
and Atom CPUs starting with Silvermont. In principle the generic
metrics should also be implementable on other out-of-order CPUs.
TopDown level 1 uses a set of abstracted metrics which are generic to
out of order CPU cores (although some CPUs may not implement all of
them):
topdown-total-slots Available slots in the pipeline
topdown-slots-issued Slots issued into the pipeline
topdown-slots-retired Slots successfully retired
topdown-fetch-bubbles Pipeline gaps in the frontend
topdown-recovery-bubbles Pipeline gaps during recovery
from misspeculation
These metrics then allow computing four useful metrics:
FrontendBound, BackendBound, Retiring, BadSpeculation.
Add a new --topdown options to enable events. When --topdown is
specified set up events for all topdown events supported by the kernel.
Add topdown-* as a special case to the event parser, as is needed for
all events containing -.
The actual code to compute the metrics is in follow-on patches.
v2: Use standard sysctl read function.
v3: Move x86 specific code to arch/
v4: Enable --metric-only implicitly for topdown.
v5: Add --single-thread option to not force per core mode
v6: Fix output order of topdown metrics
v7: Allow combining with -d
v8: Remove --single-thread again
v9: Rename functions, adding arch_ and topdown_.
v10: Expand man page and describe TopDown better
Paste intro into commit description.
Print error when malloc fails.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: http://lkml.kernel.org/r/1464119559-17203-1-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-30 15:49:42 +00:00
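For reference, the level-1 relations between the five raw slot counts and the
four derived metrics follow the published top-down methodology. A background
sketch in C (not this patch's code; the in-tree computation lands in the
follow-on patches mentioned above):

/* Level-1 top-down fractions, derived from the five raw slot counts. */
static double td_retiring(double retired, double total)
{
	return retired / total;
}

static double td_bad_spec(double issued, double retired,
			  double recovery_bubbles, double total)
{
	/* slots issued but never retired, plus recovery gaps */
	return (issued - retired + recovery_bubbles) / total;
}

static double td_fe_bound(double fetch_bubbles, double total)
{
	return fetch_bubbles / total;
}

static double td_be_bound(double fe, double bad_spec, double retiring)
{
	/* whatever slot fraction remains stalls in the backend */
	return 1.0 - (fe + bad_spec + retiring);
}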
|
|
|
static int enable_metric_only(const struct option *opt __maybe_unused,
|
|
|
|
const char *s __maybe_unused, int unset)
|
|
|
|
{
|
|
|
|
force_metric_only = true;
|
2018-08-30 06:32:31 +00:00
|
|
|
stat_config.metric_only = !unset;
|
perf stat: Basic support for TopDown in perf stat
Add basic plumbing for TopDown in perf stat
TopDown is intended to replace the frontend cycles idle/ backend cycles
idle metrics in standard perf stat output. These metrics are not
reliable in many workloads, due to out of order effects.
This implements a new --topdown mode in perf stat (similar to
--transaction) that measures the pipeline bottlenecks using
standardized formulas. The measurement can be all done with 5 counters
(one fixed counter)
The result are four metrics:
FrontendBound, BackendBound, BadSpeculation, Retiring
that describe the CPU pipeline behavior on a high level.
The full top-down methodology has many hierarchical metrics. This
implementation only supports level 1 which can be collected without
multiplexing. A full implementation of top down on top of perf is
available in pmu-tools toplev. (http://github.com/andikleen/pmu-tools)
The current version works on Intel Core CPUs starting with Sandy Bridge,
and Atom CPUs starting with Silvermont. In principle the generic
metrics should also be implementable on other out-of-order CPUs.
TopDown level 1 uses a set of abstracted metrics which are generic to
out of order CPU cores (although some CPUs may not implement all of
them):
topdown-total-slots Available slots in the pipeline
topdown-slots-issued Slots issued into the pipeline
topdown-slots-retired Slots successfully retired
topdown-fetch-bubbles Pipeline gaps in the frontend
topdown-recovery-bubbles Pipeline gaps during recovery
from misspeculation
These metrics then allow computing four useful metrics:
FrontendBound, BackendBound, Retiring, BadSpeculation.
Add a new --topdown options to enable events. When --topdown is
specified set up events for all topdown events supported by the kernel.
Add topdown-* as a special case to the event parser, as is needed for
all events containing -.
The actual code to compute the metrics is in follow-on patches.
v2: Use standard sysctl read function.
v3: Move x86 specific code to arch/
v4: Enable --metric-only implicitly for topdown.
v5: Add --single-thread option to not force per core mode
v6: Fix output order of topdown metrics
v7: Allow combining with -d
v8: Remove --single-thread again
v9: Rename functions, adding arch_ and topdown_.
v10: Expand man page and describe TopDown better
Paste intro into commit description.
Print error when malloc fails.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: http://lkml.kernel.org/r/1464119559-17203-1-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-30 15:49:42 +00:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2017-08-31 19:40:31 +00:00
|
|
|
static int parse_metric_groups(const struct option *opt,
|
|
|
|
const char *str,
|
|
|
|
int unset __maybe_unused)
|
|
|
|
{
|
|
|
|
return metricgroup__parse_groups(opt, str, &metric_events);
|
|
|
|
}
|
|
|
|
|
2015-11-05 14:40:45 +00:00
|
|
|
static const struct option stat_options[] = {
|
|
|
|
OPT_BOOLEAN('T', "transaction", &transaction_run,
|
|
|
|
"hardware transaction statistics"),
|
|
|
|
OPT_CALLBACK('e', "event", &evsel_list, "event",
|
|
|
|
"event selector. use 'perf list' to list available events",
|
|
|
|
parse_events_option),
|
|
|
|
OPT_CALLBACK(0, "filter", &evsel_list, "filter",
|
|
|
|
"event filter", parse_filter),
|
2018-08-30 06:32:12 +00:00
|
|
|
OPT_BOOLEAN('i', "no-inherit", &stat_config.no_inherit,
|
2015-11-05 14:40:45 +00:00
|
|
|
"child tasks do not inherit counters"),
|
|
|
|
OPT_STRING('p', "pid", &target.pid, "pid",
|
|
|
|
"stat events on existing process id"),
|
|
|
|
OPT_STRING('t', "tid", &target.tid, "tid",
|
|
|
|
"stat events on existing thread id"),
|
|
|
|
OPT_BOOLEAN('a', "all-cpus", &target.system_wide,
|
|
|
|
"system-wide collection from all CPUs"),
|
|
|
|
OPT_BOOLEAN('g', "group", &group,
|
|
|
|
"put the counters into a counter group"),
|
|
|
|
OPT_BOOLEAN('c', "scale", &stat_config.scale, "scale/normalize counters"),
|
|
|
|
OPT_INCR('v', "verbose", &verbose,
|
|
|
|
"be more verbose (show counter open errors, etc)"),
|
2018-08-30 06:32:36 +00:00
|
|
|
OPT_INTEGER('r', "repeat", &stat_config.run_count,
|
2015-11-05 14:40:45 +00:00
|
|
|
"repeat command and print average + stddev (max: 100, forever: 0)"),
|
2018-04-23 09:08:21 +00:00
|
|
|
OPT_BOOLEAN(0, "table", &walltime_run_table,
|
|
|
|
"display details about each run (only with -r option)"),
|
2018-08-30 06:32:41 +00:00
|
|
|
OPT_BOOLEAN('n', "null", &stat_config.null_run,
|
2015-11-05 14:40:45 +00:00
|
|
|
"null run - dont start any counters"),
|
|
|
|
OPT_INCR('d', "detailed", &detailed_run,
|
|
|
|
"detailed run - start a lot of events"),
|
|
|
|
OPT_BOOLEAN('S', "sync", &sync_run,
|
|
|
|
"call sync() before starting a run"),
|
|
|
|
OPT_CALLBACK_NOOPT('B', "big-num", NULL, NULL,
|
|
|
|
"print large numbers with thousands\' separators",
|
|
|
|
stat__set_big_num),
|
|
|
|
OPT_STRING('C', "cpu", &target.cpu_list, "cpu",
|
|
|
|
"list of cpus to monitor in system-wide"),
|
|
|
|
OPT_SET_UINT('A', "no-aggr", &stat_config.aggr_mode,
|
|
|
|
"disable CPU count aggregation", AGGR_NONE),
|
perf stat: Collapse identically named events
The uncore PMU has a lot of duplicated PMUs for different subsystems.
When expanding an uncore alias we usually end up with a large
number of identically named aliases, which makes perf stat
output difficult to read.
Automatically sum them up in perf stat, unless --no-merge is specified.
This can be the default because only the uncores generally have duplicated
aliases. Other PMUs have unique names.
Before:
% perf stat --no-merge -a -e unc_c_llc_lookup.any sleep 1
Performance counter stats for 'system wide':
694,976 Bytes unc_c_llc_lookup.any
706,304 Bytes unc_c_llc_lookup.any
956,608 Bytes unc_c_llc_lookup.any
782,720 Bytes unc_c_llc_lookup.any
605,696 Bytes unc_c_llc_lookup.any
442,816 Bytes unc_c_llc_lookup.any
659,328 Bytes unc_c_llc_lookup.any
509,312 Bytes unc_c_llc_lookup.any
263,936 Bytes unc_c_llc_lookup.any
592,448 Bytes unc_c_llc_lookup.any
672,448 Bytes unc_c_llc_lookup.any
608,640 Bytes unc_c_llc_lookup.any
641,024 Bytes unc_c_llc_lookup.any
856,896 Bytes unc_c_llc_lookup.any
808,832 Bytes unc_c_llc_lookup.any
684,864 Bytes unc_c_llc_lookup.any
710,464 Bytes unc_c_llc_lookup.any
538,304 Bytes unc_c_llc_lookup.any
1.002577660 seconds time elapsed
After:
% perf stat -a -e unc_c_llc_lookup.any sleep 1
Performance counter stats for 'system wide':
2,685,120 Bytes unc_c_llc_lookup.any
1.002648032 seconds time elapsed
v2: Split collect_aliases. Rename alias flag.
v3: Make sure unsupported/not counted is always printed.
v4: Factor out callback change into separate patch.
v5: Move check for bad results here
Move merged check into collect_data
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: http://lkml.kernel.org/r/20170320201711.14142-3-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-03-20 20:17:00 +00:00
|
|
|
OPT_BOOLEAN(0, "no-merge", &no_merge, "Do not merge identical named events"),
|
2018-08-30 06:32:29 +00:00
|
|
|
OPT_STRING('x', "field-separator", &stat_config.csv_sep, "separator",
|
2015-11-05 14:40:45 +00:00
|
|
|
"print counts with custom separator"),
|
|
|
|
OPT_CALLBACK('G', "cgroup", &evsel_list, "name",
|
|
|
|
"monitor event in cgroup name only", parse_cgroups),
|
|
|
|
OPT_STRING('o', "output", &output_name, "file", "output file name"),
|
|
|
|
OPT_BOOLEAN(0, "append", &append_file, "append to the output file"),
|
|
|
|
OPT_INTEGER(0, "log-fd", &output_fd,
|
|
|
|
"log output to fd, instead of stderr"),
|
|
|
|
OPT_STRING(0, "pre", &pre_cmd, "command",
|
|
|
|
"command to run prior to the measured command"),
|
|
|
|
OPT_STRING(0, "post", &post_cmd, "command",
|
|
|
|
"command to run after to the measured command"),
|
|
|
|
OPT_UINTEGER('I', "interval-print", &stat_config.interval,
|
2018-04-03 18:18:33 +00:00
|
|
|
"print counts at regular interval in ms "
|
|
|
|
"(overhead is possible for values <= 100ms)"),
|
2018-01-29 09:25:22 +00:00
|
|
|
OPT_INTEGER(0, "interval-count", &stat_config.times,
|
|
|
|
"print counts for fixed number of times"),
|
2018-08-30 06:32:30 +00:00
|
|
|
OPT_BOOLEAN(0, "interval-clear", &stat_config.interval_clear,
|
2018-06-06 22:15:06 +00:00
|
|
|
"clear screen in between new interval"),
|
2018-01-29 09:25:23 +00:00
|
|
|
OPT_UINTEGER(0, "timeout", &stat_config.timeout,
|
|
|
|
"stop workload and print counts after a timeout period in ms (>= 10ms)"),
|
2015-11-05 14:40:45 +00:00
|
|
|
OPT_SET_UINT(0, "per-socket", &stat_config.aggr_mode,
|
|
|
|
"aggregate counts per processor socket", AGGR_SOCKET),
|
|
|
|
OPT_SET_UINT(0, "per-core", &stat_config.aggr_mode,
|
|
|
|
"aggregate counts per physical processor core", AGGR_CORE),
|
|
|
|
OPT_SET_UINT(0, "per-thread", &stat_config.aggr_mode,
|
|
|
|
"aggregate counts per thread", AGGR_THREAD),
|
2018-08-30 06:32:11 +00:00
|
|
|
OPT_UINTEGER('D', "delay", &stat_config.initial_delay,
|
2015-11-05 14:40:45 +00:00
|
|
|
"ms to wait before starting measurement after program start"),
|
2018-08-30 06:32:31 +00:00
|
|
|
OPT_CALLBACK_NOOPT(0, "metric-only", &stat_config.metric_only, NULL,
|
perf stat: Basic support for TopDown in perf stat
Add basic plumbing for TopDown in perf stat
TopDown is intended to replace the frontend cycles idle/ backend cycles
idle metrics in standard perf stat output. These metrics are not
reliable in many workloads, due to out of order effects.
This implements a new --topdown mode in perf stat (similar to
--transaction) that measures the pipeline bottlenecks using
standardized formulas. The measurement can be all done with 5 counters
(one fixed counter)
The result are four metrics:
FrontendBound, BackendBound, BadSpeculation, Retiring
that describe the CPU pipeline behavior on a high level.
The full top-down methodology has many hierarchical metrics. This
implementation only supports level 1 which can be collected without
multiplexing. A full implementation of top down on top of perf is
available in pmu-tools toplev. (http://github.com/andikleen/pmu-tools)
The current version works on Intel Core CPUs starting with Sandy Bridge,
and Atom CPUs starting with Silvermont. In principle the generic
metrics should also be implementable on other out-of-order CPUs.
TopDown level 1 uses a set of abstracted metrics which are generic to
out of order CPU cores (although some CPUs may not implement all of
them):
topdown-total-slots Available slots in the pipeline
topdown-slots-issued Slots issued into the pipeline
topdown-slots-retired Slots successfully retired
topdown-fetch-bubbles Pipeline gaps in the frontend
topdown-recovery-bubbles Pipeline gaps during recovery
from misspeculation
These metrics then allow computing four useful metrics:
FrontendBound, BackendBound, Retiring, BadSpeculation.
Add a new --topdown options to enable events. When --topdown is
specified set up events for all topdown events supported by the kernel.
Add topdown-* as a special case to the event parser, as is needed for
all events containing -.
The actual code to compute the metrics is in follow-on patches.
v2: Use standard sysctl read function.
v3: Move x86 specific code to arch/
v4: Enable --metric-only implicitly for topdown.
v5: Add --single-thread option to not force per core mode
v6: Fix output order of topdown metrics
v7: Allow combining with -d
v8: Remove --single-thread again
v9: Rename functions, adding arch_ and topdown_.
v10: Expand man page and describe TopDown better
Paste intro into commit description.
Print error when malloc fails.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: http://lkml.kernel.org/r/1464119559-17203-1-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-30 15:49:42 +00:00
|
|
|
"Only print computed metrics. No raw values", enable_metric_only),
|
|
|
|
OPT_BOOLEAN(0, "topdown", &topdown_run,
|
|
|
|
"measure topdown level 1 statistics"),
|
2017-05-26 19:05:38 +00:00
|
|
|
OPT_BOOLEAN(0, "smi-cost", &smi_cost,
|
|
|
|
"measure SMI cost"),
|
2017-08-31 19:40:31 +00:00
|
|
|
OPT_CALLBACK('M', "metrics", &evsel_list, "metric/metric group list",
|
|
|
|
"monitor specified metrics or metric groups (separated by ,)",
|
|
|
|
parse_metric_groups),
|
2015-11-05 14:40:45 +00:00
|
|
|
OPT_END()
|
|
|
|
};
|
|
|
|
|
2015-10-16 10:41:15 +00:00
|
|
|
static int perf_stat__get_socket(struct cpu_map *map, int cpu)
|
|
|
|
{
|
|
|
|
return cpu_map__get_socket(map, cpu, NULL);
|
|
|
|
}
|
|
|
|
|
|
|
|
static int perf_stat__get_core(struct cpu_map *map, int cpu)
|
|
|
|
{
|
|
|
|
return cpu_map__get_core(map, cpu, NULL);
|
|
|
|
}
|
|
|
|
|
2015-10-25 14:51:18 +00:00
|
|
|
static int cpu_map__get_max(struct cpu_map *map)
|
|
|
|
{
|
|
|
|
int i, max = -1;
|
|
|
|
|
|
|
|
for (i = 0; i < map->nr; i++) {
|
|
|
|
if (map->map[i] > max)
|
|
|
|
max = map->map[i];
|
|
|
|
}
|
|
|
|
|
|
|
|
return max;
|
|
|
|
}
|
|
|
|
|
|
|
|
static struct cpu_map *cpus_aggr_map;
|
|
|
|
|
|
|
|
static int perf_stat__get_aggr(aggr_get_id_t get_id, struct cpu_map *map, int idx)
|
|
|
|
{
|
|
|
|
int cpu;
|
|
|
|
|
|
|
|
if (idx >= map->nr)
|
|
|
|
return -1;
|
|
|
|
|
|
|
|
cpu = map->map[idx];
|
|
|
|
|
|
|
|
if (cpus_aggr_map->map[cpu] == -1)
|
|
|
|
cpus_aggr_map->map[cpu] = get_id(map, idx);
|
|
|
|
|
|
|
|
return cpus_aggr_map->map[cpu];
|
|
|
|
}
|
|
|
|
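The cpus_aggr_map used above is a per-CPU memoization table, allocated later
in perf_stat_init_aggr_mode(): the first lookup for a given CPU calls the
topology getter, and subsequent interval passes reuse the cached socket or
core id. The two _cached wrappers below only choose which getter feeds the
cache.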
|
|
|
|
static int perf_stat__get_socket_cached(struct cpu_map *map, int idx)
|
|
|
|
{
|
|
|
|
return perf_stat__get_aggr(perf_stat__get_socket, map, idx);
|
|
|
|
}
|
|
|
|
|
|
|
|
static int perf_stat__get_core_cached(struct cpu_map *map, int idx)
|
|
|
|
{
|
|
|
|
return perf_stat__get_aggr(perf_stat__get_core, map, idx);
|
|
|
|
}
|
|
|
|
|
2013-02-14 12:57:27 +00:00
|
|
|
static int perf_stat_init_aggr_mode(void)
|
|
|
|
{
|
2015-10-25 14:51:18 +00:00
|
|
|
int nr;
|
|
|
|
|
2015-07-21 12:31:22 +00:00
|
|
|
switch (stat_config.aggr_mode) {
|
2013-02-14 12:57:27 +00:00
|
|
|
case AGGR_SOCKET:
|
|
|
|
if (cpu_map__build_socket_map(evsel_list->cpus, &aggr_map)) {
|
|
|
|
perror("cannot build socket map");
|
|
|
|
return -1;
|
|
|
|
}
|
2015-10-25 14:51:18 +00:00
|
|
|
aggr_get_id = perf_stat__get_socket_cached;
|
2013-02-14 12:57:27 +00:00
|
|
|
break;
|
2013-02-14 12:57:29 +00:00
|
|
|
case AGGR_CORE:
|
|
|
|
if (cpu_map__build_core_map(evsel_list->cpus, &aggr_map)) {
|
|
|
|
perror("cannot build core map");
|
|
|
|
return -1;
|
|
|
|
}
|
2015-10-25 14:51:18 +00:00
|
|
|
aggr_get_id = perf_stat__get_core_cached;
|
2013-02-14 12:57:29 +00:00
|
|
|
break;
|
2013-02-14 12:57:27 +00:00
|
|
|
case AGGR_NONE:
|
|
|
|
case AGGR_GLOBAL:
|
perf stat: Introduce --per-thread option
Currently the values for all tasks given via the -p option get aggregated
and printed as single values.
Add a --per-thread option to print values per thread.
$ perf stat -e cycles,instructions --per-thread -p 30190,30242
^C
Performance counter stats for process id '30190,30242':
cat-30190 0 cycles
yes-30242 3,842,525,421 cycles
cat-30190 0 instructions
yes-30242 10,370,817,010 instructions
1.143155657 seconds time elapsed
Also works under interval mode:
$ perf stat -e cycles,instructions --per-thread -p 30190,30242 -I 1000
# time comm-pid counts unit events
1.000073435 cat-30190 89,058 cycles
1.000073435 yes-30242 3,360,786,902 cycles (100.00%)
1.000073435 cat-30190 14,066 instructions
1.000073435 yes-30242 9,069,937,462 instructions
2.000204830 cat-30190 0 cycles
2.000204830 yes-30242 3,351,667,626 cycles
2.000204830 cat-30190 0 instructions
2.000204830 yes-30242 9,045,796,885 instructions
^C 2.771286639 cat-30190 0 cycles
2.771286639 yes-30242 2,593,884,166 cycles
2.771286639 cat-30190 0 instructions
2.771286639 yes-30242 7,001,171,191 instructions
It works only with the -t and -p options; otherwise the following error is
printed:
$ perf stat -e cycles --per-thread -I 1000 ls
The --per-thread option is only available when monitoring via -p -t options.
-p, --pid <pid> stat events on existing process id
-t, --tid <tid> stat events on existing thread id
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1435310967-14570-23-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-06-26 09:29:27 +00:00
|
|
|
case AGGR_THREAD:
|
2015-10-16 10:41:04 +00:00
|
|
|
case AGGR_UNSET:
|
2013-02-14 12:57:27 +00:00
|
|
|
default:
|
|
|
|
break;
|
|
|
|
}
|
2015-10-25 14:51:18 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* The evsel_list->cpus is the base we operate on,
|
|
|
|
* taking the highest cpu number to be the size of
|
|
|
|
* the aggregation translation cpumap.
|
|
|
|
*/
|
|
|
|
nr = cpu_map__get_max(evsel_list->cpus);
|
|
|
|
cpus_aggr_map = cpu_map__empty_new(nr + 1);
|
|
|
|
return cpus_aggr_map ? 0 : -ENOMEM;
|
2013-02-14 12:57:27 +00:00
|
|
|
}
|
|
|
|
|
2015-12-09 02:11:27 +00:00
|
|
|
static void perf_stat__exit_aggr_mode(void)
|
|
|
|
{
|
|
|
|
cpu_map__put(aggr_map);
|
|
|
|
cpu_map__put(cpus_aggr_map);
|
|
|
|
aggr_map = NULL;
|
|
|
|
cpus_aggr_map = NULL;
|
|
|
|
}
|
|
|
|
|
2015-11-05 14:40:58 +00:00
|
|
|
static inline int perf_env__get_cpu(struct perf_env *env, struct cpu_map *map, int idx)
|
|
|
|
{
|
|
|
|
int cpu;
|
|
|
|
|
|
|
|
if (idx > map->nr)
|
|
|
|
return -1;
|
|
|
|
|
|
|
|
cpu = map->map[idx];
|
|
|
|
|
perf tools: Replace _SC_NPROCESSORS_CONF with max_present_cpu in cpu_topology_map
There are 2 problems wrt. cpu_topology_map on systems with sparse CPUs:
1. offline/absent CPUs will have their socket_id and core_id set to -1
which triggers:
"socket_id number is too big.You may need to upgrade the perf tool."
2. size of cpu_topology_map (perf_env.cpu[]) is allocated based on
_SC_NPROCESSORS_CONF, but can be indexed with CPU ids going above that.
Users of perf_env.cpu[] are using CPU id as index. This can lead
to reads beyond what was allocated:
==19991== Invalid read of size 4
==19991== at 0x490CEB: check_cpu_topology (topology.c:69)
==19991== by 0x490CEB: test_session_topology (topology.c:106)
...
For example:
_SC_NPROCESSORS_CONF == 16
available: 2 nodes (0-1)
node 0 cpus: 0 6 8 10 16 22 24 26
node 0 size: 12004 MB
node 0 free: 9470 MB
node 1 cpus: 1 7 9 11 23 25 27
node 1 size: 12093 MB
node 1 free: 9406 MB
node distances:
node 0 1
0: 10 20
1: 20 10
This patch changes HEADER_NRCPUS.nr_cpus_available from _SC_NPROCESSORS_CONF
to max_present_cpu and updates any user of cpu_topology_map to iterate
with nr_cpus_avail.
As a consequence HEADER_CPU_TOPOLOGY core_id and socket_id lists get longer,
but maintain compatibility with pre-patch state - index to cpu_topology_map is
CPU id.
perf test 36 -v
36: Session topology :
--- start ---
test child forked, pid 22211
templ file: /tmp/perf-test-gmdX5i
CPU 0, core 0, socket 0
CPU 1, core 0, socket 1
CPU 6, core 10, socket 0
CPU 7, core 10, socket 1
CPU 8, core 1, socket 0
CPU 9, core 1, socket 1
CPU 10, core 9, socket 0
CPU 11, core 9, socket 1
CPU 16, core 0, socket 0
CPU 22, core 10, socket 0
CPU 23, core 10, socket 1
CPU 24, core 1, socket 0
CPU 25, core 1, socket 1
CPU 26, core 9, socket 0
CPU 27, core 9, socket 1
test child finished with 0
---- end ----
Session topology: Ok
Signed-off-by: Jan Stancek <jstancek@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/d7c05c6445fca74a8442c2c73cfffd349c52c44f.1487146877.git.jstancek@redhat.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-02-17 11:10:26 +00:00
|
|
|
if (cpu >= env->nr_cpus_avail)
|
2015-11-05 14:40:58 +00:00
|
|
|
return -1;
|
|
|
|
|
|
|
|
return cpu;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int perf_env__get_socket(struct cpu_map *map, int idx, void *data)
|
|
|
|
{
|
|
|
|
struct perf_env *env = data;
|
|
|
|
int cpu = perf_env__get_cpu(env, map, idx);
|
|
|
|
|
|
|
|
return cpu == -1 ? -1 : env->cpu[cpu].socket_id;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int perf_env__get_core(struct cpu_map *map, int idx, void *data)
|
|
|
|
{
|
|
|
|
struct perf_env *env = data;
|
|
|
|
int core = -1, cpu = perf_env__get_cpu(env, map, idx);
|
|
|
|
|
|
|
|
if (cpu != -1) {
|
|
|
|
int socket_id = env->cpu[cpu].socket_id;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Encode socket in upper 16 bits
|
|
|
|
* core_id is relative to socket, and
|
|
|
|
* we need a global id. So we combine
|
|
|
|
* socket + core id.
|
|
|
|
*/
|
|
|
|
core = (socket_id << 16) | (env->cpu[cpu].core_id & 0xffff);
|
|
|
|
}
|
|
|
|
|
|
|
|
return core;
|
|
|
|
}
|
|
|
|
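A quick worked example of the encoding above: CPU 27 in the topology dump from
the earlier commit message sits on socket 1, core 9, so it encodes as
(1 << 16) | 9 = 0x10009, while socket 0's core 9 encodes as 0x00009;
same-numbered cores on different sockets therefore land in separate
aggregation buckets.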
|
|
|
|
static int perf_env__build_socket_map(struct perf_env *env, struct cpu_map *cpus,
|
|
|
|
struct cpu_map **sockp)
|
|
|
|
{
|
|
|
|
return cpu_map__build_map(cpus, sockp, perf_env__get_socket, env);
|
|
|
|
}
|
|
|
|
|
|
|
|
static int perf_env__build_core_map(struct perf_env *env, struct cpu_map *cpus,
|
|
|
|
struct cpu_map **corep)
|
|
|
|
{
|
|
|
|
return cpu_map__build_map(cpus, corep, perf_env__get_core, env);
|
|
|
|
}
|
|
|
|
|
|
|
|
static int perf_stat__get_socket_file(struct cpu_map *map, int idx)
|
|
|
|
{
|
|
|
|
return perf_env__get_socket(map, idx, &perf_stat.session->header.env);
|
|
|
|
}
|
|
|
|
|
|
|
|
static int perf_stat__get_core_file(struct cpu_map *map, int idx)
|
|
|
|
{
|
|
|
|
return perf_env__get_core(map, idx, &perf_stat.session->header.env);
|
|
|
|
}
|
|
|
|
|
|
|
|
static int perf_stat_init_aggr_mode_file(struct perf_stat *st)
|
|
|
|
{
|
|
|
|
struct perf_env *env = &st->session->header.env;
|
|
|
|
|
|
|
|
switch (stat_config.aggr_mode) {
|
|
|
|
case AGGR_SOCKET:
|
|
|
|
if (perf_env__build_socket_map(env, evsel_list->cpus, &aggr_map)) {
|
|
|
|
perror("cannot build socket map");
|
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
aggr_get_id = perf_stat__get_socket_file;
|
|
|
|
break;
|
|
|
|
case AGGR_CORE:
|
|
|
|
if (perf_env__build_core_map(env, evsel_list->cpus, &aggr_map)) {
|
|
|
|
perror("cannot build core map");
|
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
aggr_get_id = perf_stat__get_core_file;
|
|
|
|
break;
|
|
|
|
case AGGR_NONE:
|
|
|
|
case AGGR_GLOBAL:
|
|
|
|
case AGGR_THREAD:
|
|
|
|
case AGGR_UNSET:
|
|
|
|
default:
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
perf stat: Basic support for TopDown in perf stat
Add basic plumbing for TopDown in perf stat
TopDown is intended to replace the frontend cycles idle/ backend cycles
idle metrics in standard perf stat output. These metrics are not
reliable in many workloads, due to out of order effects.
This implements a new --topdown mode in perf stat (similar to
--transaction) that measures the pipeline bottlenecks using
standardized formulas. The measurement can be all done with 5 counters
(one fixed counter)
The result are four metrics:
FrontendBound, BackendBound, BadSpeculation, Retiring
that describe the CPU pipeline behavior on a high level.
The full top-down methodology has many hierarchical metrics. This
implementation only supports level 1 which can be collected without
multiplexing. A full implementation of top down on top of perf is
available in pmu-tools toplev. (http://github.com/andikleen/pmu-tools)
The current version works on Intel Core CPUs starting with Sandy Bridge,
and Atom CPUs starting with Silvermont. In principle the generic
metrics should also be implementable on other out-of-order CPUs.
TopDown level 1 uses a set of abstracted metrics which are generic to
out of order CPU cores (although some CPUs may not implement all of
them):
topdown-total-slots Available slots in the pipeline
topdown-slots-issued Slots issued into the pipeline
topdown-slots-retired Slots successfully retired
topdown-fetch-bubbles Pipeline gaps in the frontend
topdown-recovery-bubbles Pipeline gaps during recovery
from misspeculation
These metrics then allow computing four useful metrics:
FrontendBound, BackendBound, Retiring, BadSpeculation.
Add a new --topdown options to enable events. When --topdown is
specified set up events for all topdown events supported by the kernel.
Add topdown-* as a special case to the event parser, as is needed for
all events containing -.
The actual code to compute the metrics is in follow-on patches.
v2: Use standard sysctl read function.
v3: Move x86 specific code to arch/
v4: Enable --metric-only implicitly for topdown.
v5: Add --single-thread option to not force per core mode
v6: Fix output order of topdown metrics
v7: Allow combining with -d
v8: Remove --single-thread again
v9: Rename functions, adding arch_ and topdown_.
v10: Expand man page and describe TopDown better
Paste intro into commit description.
Print error when malloc fails.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: http://lkml.kernel.org/r/1464119559-17203-1-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-30 15:49:42 +00:00
|
|
|
static int topdown_filter_events(const char **attr, char **str, bool use_group)
|
|
|
|
{
|
|
|
|
int off = 0;
|
|
|
|
int i;
|
|
|
|
int len = 0;
|
|
|
|
char *s;
|
|
|
|
|
|
|
|
for (i = 0; attr[i]; i++) {
|
|
|
|
if (pmu_have_event("cpu", attr[i])) {
|
|
|
|
len += strlen(attr[i]) + 1;
|
|
|
|
attr[i - off] = attr[i];
|
|
|
|
} else
|
|
|
|
off++;
|
|
|
|
}
|
|
|
|
attr[i - off] = NULL;
|
|
|
|
|
|
|
|
*str = malloc(len + 1 + 2);
|
|
|
|
if (!*str)
|
|
|
|
return -1;
|
|
|
|
s = *str;
|
|
|
|
if (i - off == 0) {
|
|
|
|
*s = 0;
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
if (use_group)
|
|
|
|
*s++ = '{';
|
|
|
|
for (i = 0; attr[i]; i++) {
|
|
|
|
strcpy(s, attr[i]);
|
|
|
|
s += strlen(s);
|
|
|
|
*s++ = ',';
|
|
|
|
}
|
|
|
|
if (use_group) {
|
|
|
|
s[-1] = '}';
|
|
|
|
*s = 0;
|
|
|
|
} else
|
|
|
|
s[-1] = 0;
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
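Concretely, topdown_filter_events() drops every event the "cpu" PMU does not
advertise and joins the survivors into an event-parser string. If all five
events from the commit message are available and use_group is set, *str ends
up as (illustrative):

{topdown-total-slots,topdown-slots-issued,topdown-slots-retired,topdown-fetch-bubbles,topdown-recovery-bubbles}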
|
|
|
|
__weak bool arch_topdown_check_group(bool *warn)
|
|
|
|
{
|
|
|
|
*warn = false;
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
|
|
|
|
__weak void arch_topdown_group_warn(void)
|
|
|
|
{
|
|
|
|
}
|
|
|
|
|
perf stat: Add -d -d and -d -d -d options to show more CPU events
Print even more detailed statistics if requested via perf stat -d:
-d: detailed events, L1 and LLC data cache
-d -d: more detailed events, dTLB and iTLB events
-d -d -d: very detailed events, adding prefetch events
Full output looks like this now:
Performance counter stats for '/home/mingo/hackbench 10' (5 runs):
1703.674707 task-clock # 8.709 CPUs utilized ( +- 4.19% )
49,068 context-switches # 0.029 M/sec ( +- 16.66% )
8,303 CPU-migrations # 0.005 M/sec ( +- 24.90% )
17,397 page-faults # 0.010 M/sec ( +- 0.46% )
2,345,389,239 cycles # 1.377 GHz ( +- 4.61% ) [55.90%]
1,884,503,527 stalled-cycles-frontend # 80.35% frontend cycles idle ( +- 5.67% ) [50.39%]
743,919,737 stalled-cycles-backend # 31.72% backend cycles idle ( +- 8.75% ) [49.91%]
1,314,416,379 instructions # 0.56 insns per cycle
# 1.43 stalled cycles per insn ( +- 2.53% ) [60.87%]
272,592,567 branches # 160.003 M/sec ( +- 1.74% ) [56.56%]
3,794,846 branch-misses # 1.39% of all branches ( +- 6.59% ) [58.50%]
449,982,778 L1-dcache-loads # 264.125 M/sec ( +- 2.47% ) [49.88%]
22,404,961 L1-dcache-load-misses # 4.98% of all L1-dcache hits ( +- 6.08% ) [55.05%]
6,204,750 LLC-loads # 3.642 M/sec ( +- 8.91% ) [43.75%]
1,837,411 LLC-load-misses # 1.078 M/sec ( +- 7.27% ) [12.07%]
411,440,421 L1-icache-loads # 241.502 M/sec ( +- 5.60% ) [36.52%]
27,556,832 L1-icache-load-misses # 16.175 M/sec ( +- 7.46% ) [46.72%]
464,067,627 dTLB-loads # 272.392 M/sec ( +- 4.46% ) [54.17%]
10,765,648 dTLB-load-misses # 6.319 M/sec ( +- 3.18% ) [48.68%]
1,273,080,386 iTLB-loads # 747.256 M/sec ( +- 3.38% ) [47.53%]
117,481 iTLB-load-misses # 0.069 M/sec ( +- 14.99% ) [47.01%]
4,590,653 L1-dcache-prefetches # 2.695 M/sec ( +- 4.49% ) [46.19%]
1,712,660 L1-dcache-prefetch-misses # 1.005 M/sec ( +- 3.75% ) [44.82%]
0.195622057 seconds time elapsed ( +- 6.84% )
Also clean up the attribute construction code to be appending, and factor
it out into add_default_attributes().
Tweak the coverage percentage printout a bit, so that it's easier to view it
alongside the +- stddev column.
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
Link: http://lkml.kernel.org/n/tip-to3kgu04449s64062val8b62@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2011-05-19 11:30:56 +00:00
|
|
|
/*
|
|
|
|
* Add default attributes, if there were no attributes specified or
|
|
|
|
* if -d/--detailed, -d -d or -d -d -d is used:
|
|
|
|
*/
|
|
|
|
static int add_default_attributes(void)
|
|
|
|
{
|
perf stat: Basic support for TopDown in perf stat
Add basic plumbing for TopDown in perf stat
TopDown is intended to replace the frontend cycles idle/ backend cycles
idle metrics in standard perf stat output. These metrics are not
reliable in many workloads, due to out of order effects.
This implements a new --topdown mode in perf stat (similar to
--transaction) that measures the pipeline bottlenecks using
standardized formulas. The measurement can be all done with 5 counters
(one fixed counter)
The result are four metrics:
FrontendBound, BackendBound, BadSpeculation, Retiring
that describe the CPU pipeline behavior on a high level.
The full top-down methodology has many hierarchical metrics. This
implementation only supports level 1 which can be collected without
multiplexing. A full implementation of top down on top of perf is
available in pmu-tools toplev. (http://github.com/andikleen/pmu-tools)
The current version works on Intel Core CPUs starting with Sandy Bridge,
and Atom CPUs starting with Silvermont. In principle the generic
metrics should also be implementable on other out-of-order CPUs.
TopDown level 1 uses a set of abstracted metrics which are generic to
out of order CPU cores (although some CPUs may not implement all of
them):
topdown-total-slots Available slots in the pipeline
topdown-slots-issued Slots issued into the pipeline
topdown-slots-retired Slots successfully retired
topdown-fetch-bubbles Pipeline gaps in the frontend
topdown-recovery-bubbles Pipeline gaps during recovery
from misspeculation
These metrics then allow computing four useful metrics:
FrontendBound, BackendBound, Retiring, BadSpeculation.
Add a new --topdown options to enable events. When --topdown is
specified set up events for all topdown events supported by the kernel.
Add topdown-* as a special case to the event parser, as is needed for
all events containing -.
The actual code to compute the metrics is in follow-on patches.
v2: Use standard sysctl read function.
v3: Move x86 specific code to arch/
v4: Enable --metric-only implicitly for topdown.
v5: Add --single-thread option to not force per core mode
v6: Fix output order of topdown metrics
v7: Allow combining with -d
v8: Remove --single-thread again
v9: Rename functions, adding arch_ and topdown_.
v10: Expand man page and describe TopDown better
Paste intro into commit description.
Print error when malloc fails.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: http://lkml.kernel.org/r/1464119559-17203-1-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-30 15:49:42 +00:00
|
|
|
int err;
|
perf stat: Check existence of frontend/backend stalled cycles
Only put the frontend/backend stalled cycles into the default perf stat
events when the CPU actually supports them.
This avoids empty columns with --metric-only on newer Intel CPUs.
Committer note:
Before:
$ perf stat ls
Performance counter stats for 'ls':
1.080893 task-clock (msec) # 0.619 CPUs utilized
0 context-switches # 0.000 K/sec
0 cpu-migrations # 0.000 K/sec
97 page-faults # 0.090 M/sec
3,327,741 cycles # 3.079 GHz
<not supported> stalled-cycles-frontend
<not supported> stalled-cycles-backend
1,609,544 instructions # 0.48 insn per cycle
319,117 branches # 295.235 M/sec
12,246 branch-misses # 3.84% of all branches
0.001746508 seconds time elapsed
$
After:
$ perf stat ls
Performance counter stats for 'ls':
0.693948 task-clock (msec) # 0.662 CPUs utilized
0 context-switches # 0.000 K/sec
0 cpu-migrations # 0.000 K/sec
95 page-faults # 0.137 M/sec
1,792,509 cycles # 2.583 GHz
1,599,047 instructions # 0.89 insn per cycle
316,328 branches # 455.838 M/sec
12,453 branch-misses # 3.94% of all branches
0.001048987 seconds time elapsed
$
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1456532881-26621-2-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-02-27 00:27:56 +00:00
|
|
|
struct perf_event_attr default_attrs0[] = {
|
2012-10-01 18:20:58 +00:00
|
|
|
|
|
|
|
{ .type = PERF_TYPE_SOFTWARE, .config = PERF_COUNT_SW_TASK_CLOCK },
|
|
|
|
{ .type = PERF_TYPE_SOFTWARE, .config = PERF_COUNT_SW_CONTEXT_SWITCHES },
|
|
|
|
{ .type = PERF_TYPE_SOFTWARE, .config = PERF_COUNT_SW_CPU_MIGRATIONS },
|
|
|
|
{ .type = PERF_TYPE_SOFTWARE, .config = PERF_COUNT_SW_PAGE_FAULTS },
|
|
|
|
|
|
|
|
{ .type = PERF_TYPE_HARDWARE, .config = PERF_COUNT_HW_CPU_CYCLES },
|
perf stat: Check existence of frontend/backend stalled cycles
Only put the frontend/backend stalled cycles into the default perf stat
events when the CPU actually supports them.
This avoids empty columns with --metric-only on newer Intel CPUs.
Committer note:
Before:
$ perf stat ls
Performance counter stats for 'ls':
1.080893 task-clock (msec) # 0.619 CPUs utilized
0 context-switches # 0.000 K/sec
0 cpu-migrations # 0.000 K/sec
97 page-faults # 0.090 M/sec
3,327,741 cycles # 3.079 GHz
<not supported> stalled-cycles-frontend
<not supported> stalled-cycles-backend
1,609,544 instructions # 0.48 insn per cycle
319,117 branches # 295.235 M/sec
12,246 branch-misses # 3.84% of all branches
0.001746508 seconds time elapsed
$
After:
$ perf stat ls
Performance counter stats for 'ls':
0.693948 task-clock (msec) # 0.662 CPUs utilized
0 context-switches # 0.000 K/sec
0 cpu-migrations # 0.000 K/sec
95 page-faults # 0.137 M/sec
1,792,509 cycles # 2.583 GHz
1,599,047 instructions # 0.89 insn per cycle
316,328 branches # 455.838 M/sec
12,453 branch-misses # 3.94% of all branches
0.001048987 seconds time elapsed
$
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1456532881-26621-2-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-02-27 00:27:56 +00:00
};
struct perf_event_attr frontend_attrs[] = {
  { .type = PERF_TYPE_HARDWARE, .config = PERF_COUNT_HW_STALLED_CYCLES_FRONTEND },
};
struct perf_event_attr backend_attrs[] = {
  { .type = PERF_TYPE_HARDWARE, .config = PERF_COUNT_HW_STALLED_CYCLES_BACKEND },
};
struct perf_event_attr default_attrs1[] = {

  { .type = PERF_TYPE_HARDWARE, .config = PERF_COUNT_HW_INSTRUCTIONS },
  { .type = PERF_TYPE_HARDWARE, .config = PERF_COUNT_HW_BRANCH_INSTRUCTIONS },
  { .type = PERF_TYPE_HARDWARE, .config = PERF_COUNT_HW_BRANCH_MISSES },

};
/*
 * Detailed stats (-d), covering the L1 and last level data caches:
 */
struct perf_event_attr detailed_attrs[] = {

  { .type = PERF_TYPE_HW_CACHE,
    .config =
	 PERF_COUNT_HW_CACHE_L1D		<<  0  |
	(PERF_COUNT_HW_CACHE_OP_READ		<<  8) |
	(PERF_COUNT_HW_CACHE_RESULT_ACCESS	<< 16) },

  { .type = PERF_TYPE_HW_CACHE,
    .config =
	 PERF_COUNT_HW_CACHE_L1D		<<  0  |
	(PERF_COUNT_HW_CACHE_OP_READ		<<  8) |
	(PERF_COUNT_HW_CACHE_RESULT_MISS	<< 16) },

  { .type = PERF_TYPE_HW_CACHE,
    .config =
	 PERF_COUNT_HW_CACHE_LL			<<  0  |
	(PERF_COUNT_HW_CACHE_OP_READ		<<  8) |
	(PERF_COUNT_HW_CACHE_RESULT_ACCESS	<< 16) },

  { .type = PERF_TYPE_HW_CACHE,
    .config =
	 PERF_COUNT_HW_CACHE_LL			<<  0  |
	(PERF_COUNT_HW_CACHE_OP_READ		<<  8) |
	(PERF_COUNT_HW_CACHE_RESULT_MISS	<< 16) },
};
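The .config values above follow the hardware cache event encoding from
the perf_event ABI: bits 0-7 select the cache, bits 8-15 the operation
and bits 16-23 the result. A hypothetical helper macro (not something
builtin-stat.c defines) makes the pattern explicit:
#define C_HW_CACHE(cache, op, result)				\
	((PERF_COUNT_HW_CACHE_##cache)		<<  0 |		\
	 (PERF_COUNT_HW_CACHE_OP_##op)		<<  8 |		\
	 (PERF_COUNT_HW_CACHE_RESULT_##result)	<< 16)

/* e.g. L1 data cache read misses, as in detailed_attrs above: */
struct perf_event_attr l1d_read_miss = {
	.type	= PERF_TYPE_HW_CACHE,
	.config	= C_HW_CACHE(L1D, READ, MISS),
};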
/*
 * Very detailed stats (-d -d), covering the instruction cache and the TLB caches:
 */
struct perf_event_attr very_detailed_attrs[] = {

  { .type = PERF_TYPE_HW_CACHE,
    .config =
	 PERF_COUNT_HW_CACHE_L1I		<<  0  |
	(PERF_COUNT_HW_CACHE_OP_READ		<<  8) |
	(PERF_COUNT_HW_CACHE_RESULT_ACCESS	<< 16) },

  { .type = PERF_TYPE_HW_CACHE,
    .config =
	 PERF_COUNT_HW_CACHE_L1I		<<  0  |
	(PERF_COUNT_HW_CACHE_OP_READ		<<  8) |
	(PERF_COUNT_HW_CACHE_RESULT_MISS	<< 16) },

  { .type = PERF_TYPE_HW_CACHE,
    .config =
	 PERF_COUNT_HW_CACHE_DTLB		<<  0  |
	(PERF_COUNT_HW_CACHE_OP_READ		<<  8) |
	(PERF_COUNT_HW_CACHE_RESULT_ACCESS	<< 16) },

  { .type = PERF_TYPE_HW_CACHE,
    .config =
	 PERF_COUNT_HW_CACHE_DTLB		<<  0  |
	(PERF_COUNT_HW_CACHE_OP_READ		<<  8) |
	(PERF_COUNT_HW_CACHE_RESULT_MISS	<< 16) },

  { .type = PERF_TYPE_HW_CACHE,
    .config =
	 PERF_COUNT_HW_CACHE_ITLB		<<  0  |
	(PERF_COUNT_HW_CACHE_OP_READ		<<  8) |
	(PERF_COUNT_HW_CACHE_RESULT_ACCESS	<< 16) },

  { .type = PERF_TYPE_HW_CACHE,
    .config =
	 PERF_COUNT_HW_CACHE_ITLB		<<  0  |
	(PERF_COUNT_HW_CACHE_OP_READ		<<  8) |
	(PERF_COUNT_HW_CACHE_RESULT_MISS	<< 16) },

};
/*
 * Very, very detailed stats (-d -d -d), adding prefetch events:
 */
struct perf_event_attr very_very_detailed_attrs[] = {

  { .type = PERF_TYPE_HW_CACHE,
    .config =
	 PERF_COUNT_HW_CACHE_L1D		<<  0  |
	(PERF_COUNT_HW_CACHE_OP_PREFETCH	<<  8) |
	(PERF_COUNT_HW_CACHE_RESULT_ACCESS	<< 16) },

  { .type = PERF_TYPE_HW_CACHE,
    .config =
	 PERF_COUNT_HW_CACHE_L1D		<<  0  |
	(PERF_COUNT_HW_CACHE_OP_PREFETCH	<<  8) |
	(PERF_COUNT_HW_CACHE_RESULT_MISS	<< 16) },

};
struct parse_events_error errinfo;
/* Set attrs if no event is selected and !null_run: */
if (stat_config.null_run)
	return 0;
if (transaction_run) {
	/* Handle -T as -M transaction. Once platform specific metrics
	 * support has been added to the json files, all architectures
	 * will use this approach. To determine transaction support
	 * on an architecture, test for such a metric name.
	 */
	if (metricgroup__has_metric("transaction")) {
		struct option opt = { .value = &evsel_list };

		return metricgroup__parse_groups(&opt, "transaction",
						 &metric_events);
	}
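On a CPU whose JSON metric files define a 'transaction' metric group,
-T therefore becomes shorthand for selecting that group directly, e.g.
(illustrative workload name):
$ perf stat -M transaction -- ./workload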
	if (pmu_have_event("cpu", "cycles-ct") &&
	    pmu_have_event("cpu", "el-start"))
perf stat: Fix core dump when flag T is used
Executing command 'perf stat -T -- ls' dumps core on x86 and s390.
Here is the callback chain (done on x86):
# gdb ./perf
....
(gdb) r stat -T -- ls
...
Program received signal SIGSEGV, Segmentation fault.
0x00007ffff56d1963 in vasprintf () from /lib64/libc.so.6
(gdb) where
#0 0x00007ffff56d1963 in vasprintf () from /lib64/libc.so.6
#1 0x00007ffff56ae484 in asprintf () from /lib64/libc.so.6
#2 0x00000000004f1982 in __parse_events_add_pmu (parse_state=0x7fffffffd580,
list=0xbfb970, name=0xbf3ef0 "cpu",
head_config=0xbfb930, auto_merge_stats=false) at util/parse-events.c:1233
#3 0x00000000004f1c8e in parse_events_add_pmu (parse_state=0x7fffffffd580,
list=0xbfb970, name=0xbf3ef0 "cpu",
head_config=0xbfb930) at util/parse-events.c:1288
#4 0x0000000000537ce3 in parse_events_parse (_parse_state=0x7fffffffd580,
scanner=0xbf4210) at util/parse-events.y:234
#5 0x00000000004f2c7a in parse_events__scanner (str=0x6b66c0
"task-clock,{instructions,cycles,cpu/cycles-t/,cpu/tx-start/}",
parse_state=0x7fffffffd580, start_token=258) at util/parse-events.c:1673
#6 0x00000000004f2e23 in parse_events (evlist=0xbe9990, str=0x6b66c0
"task-clock,{instructions,cycles,cpu/cycles-t/,cpu/tx-start/}", err=0x0)
at util/parse-events.c:1713
#7 0x000000000044e137 in add_default_attributes () at builtin-stat.c:2281
#8 0x000000000044f7b5 in cmd_stat (argc=1, argv=0x7fffffffe3b0) at
builtin-stat.c:2828
#9 0x00000000004c8b0f in run_builtin (p=0xab01a0 <commands+288>, argc=4,
argv=0x7fffffffe3b0) at perf.c:297
#10 0x00000000004c8d7c in handle_internal_command (argc=4,
argv=0x7fffffffe3b0) at perf.c:349
#11 0x00000000004c8ece in run_argv (argcp=0x7fffffffe20c,
argv=0x7fffffffe200) at perf.c:393
#12 0x00000000004c929c in main (argc=4, argv=0x7fffffffe3b0) at perf.c:537
(gdb)
It turns out that a NULL pointer is referenced. Here are the
function calls:
...
cmd_stat()
+---> add_default_attributes()
+---> parse_events(evsel_list, transaction_attrs, NULL);
3rd parameter set to NULL
Function parse_events(xx, xx, struct parse_events_error *err) dives
into a bison generated scanner and creates
parser state information for it first:
struct parse_events_state parse_state = {
.list = LIST_HEAD_INIT(parse_state.list),
.idx = evlist->nr_entries,
.error = err, <--- NULL POINTER !!!
.evlist = evlist,
};
Various functions inside the bison scanner are then called, ending up in
__parse_events_add_pmu(struct parse_events_state *parse_state, ..) with the
first parameter pointing to the above structure.
When the PMU event name is not found (because the command runs in a VM),
this function tries to create an error message with
asprintf(&parse_state->error.str, ....)
which dereferences the NULL pointer and dumps core.
Fix this by providing a pointer to the necessary error information
instead of NULL. Technically only the else branch is needed to avoid the
core dump, but let's be safe...
Signed-off-by: Thomas Richter <tmricht@linux.vnet.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Link: http://lkml.kernel.org/r/20180308145735.64717-1-tmricht@linux.vnet.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-03-08 14:57:35 +00:00
		err = parse_events(evsel_list, transaction_attrs,
				   &errinfo);
	else
		err = parse_events(evsel_list,
				   transaction_limited_attrs,
				   &errinfo);
	if (err) {
		fprintf(stderr, "Cannot set up transaction events\n");
		parse_events_print_error(&errinfo, transaction_attrs);
		return -1;
	}
	return 0;
}
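With &errinfo wired through, a machine without the transactional memory
events now gets a diagnostic plus the parser's error output instead of a
SIGSEGV, along the lines of:
$ perf stat -T -- ls
Cannot set up transaction events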
if (smi_cost) {
	int smi;

	if (sysfs__read_int(FREEZE_ON_SMI_PATH, &smi) < 0) {
		fprintf(stderr, "freeze_on_smi is not supported.\n");
		return -1;
	}

	if (!smi) {
		if (sysfs__write_int(FREEZE_ON_SMI_PATH, 1) < 0) {
			fprintf(stderr, "Failed to set freeze_on_smi.\n");
			return -1;
		}
		smi_reset = true;
	}
	if (pmu_have_event("msr", "aperf") &&
	    pmu_have_event("msr", "smi")) {
		if (!force_metric_only)
			stat_config.metric_only = true;
		err = parse_events(evsel_list, smi_cost_attrs, &errinfo);
	} else {
		fprintf(stderr, "To measure SMI cost, it needs "
			"msr/aperf/, msr/smi/ and cpu/cycles/ support\n");
		parse_events_print_error(&errinfo, smi_cost_attrs);
		return -1;
	}
	if (err) {
		fprintf(stderr, "Cannot set up SMI cost events\n");
		return -1;
	}
	return 0;
}
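For context: this path is reached via the --smi-cost option, and
freeze_on_smi is a sysfs attribute of the core PMU (typically
/sys/devices/cpu/freeze_on_smi), so writing it usually needs root,
along the lines of:
# perf stat -a --smi-cost -- sleep 1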
if (topdown_run) {
	char *str = NULL;
	bool warn = false;

	if (stat_config.aggr_mode != AGGR_GLOBAL &&
	    stat_config.aggr_mode != AGGR_CORE) {
		pr_err("top down event configuration requires --per-core mode\n");
		return -1;
	}
	stat_config.aggr_mode = AGGR_CORE;
	if (nr_cgroups || !target__has_cpu(&target)) {
		pr_err("top down event configuration requires system-wide mode (-a)\n");
		return -1;
	}

	if (!force_metric_only)
		stat_config.metric_only = true;
	if (topdown_filter_events(topdown_attrs, &str,
				  arch_topdown_check_group(&warn)) < 0) {
		pr_err("Out of memory\n");
		return -1;
	}
	if (topdown_attrs[0] && str) {
		if (warn)
			arch_topdown_group_warn();
		err = parse_events(evsel_list, str, &errinfo);
		if (err) {
			fprintf(stderr,
				"Cannot set up top down events %s: %d\n",
				str, err);
			free(str);
			parse_events_print_error(&errinfo, str);
			return -1;
		}
	} else {
		fprintf(stderr, "System does not support topdown\n");
		return -1;
	}
	free(str);
}
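Given the constraints enforced above (system-wide mode, per-core
aggregation), a typical level 1 invocation on supported hardware would
look like:
# perf stat --topdown -a -- sleep 1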
if (!evsel_list->nr_entries) {
perf stat: Use cpu-clock event for cpu targets
Currently 'perf stat' always counts the task-clock event by default. But
that is somewhat confusing for system-wide targets (especially with 'sleep
N', as the 'sleep' task just sleeps and doesn't use cputime). Switching
to the cpu-clock event for that case makes more sense IMHO.
Before:
# perf stat -a sleep 0.1
Performance counter stats for 'system wide':
403.038603 task-clock (msec) # 4.001 CPUs utilized
150 context-switches # 0.372 K/sec
7 cpu-migrations # 0.017 K/sec
71 page-faults # 0.176 K/sec
23,705,169 cycles # 0.059 GHz
15,888,166 instructions # 0.67 insn per cycle
3,326,078 branches # 8.253 M/sec
87,643 branch-misses # 2.64% of all branches
0.100737009 seconds time elapsed
#
After:
# perf stat -a sleep 0.1
Performance counter stats for 'system wide':
404.271182 cpu-clock (msec) # 4.000 CPUs utilized
143 context-switches # 0.354 K/sec
13 cpu-migrations # 0.032 K/sec
73 page-faults # 0.181 K/sec
22,119,220 cycles # 0.055 GHz
13,622,065 instructions # 0.62 insn per cycle
2,918,769 branches # 7.220 M/sec
85,033 branch-misses # 2.91% of all branches
0.101073089 seconds time elapsed
#
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1463119263-5569-3-git-send-email-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-13 06:01:03 +00:00
	if (target__has_cpu(&target))
		default_attrs0[0].config = PERF_COUNT_SW_CPU_CLOCK;
	if (perf_evlist__add_default_attrs(evsel_list, default_attrs0) < 0)
		return -1;
	if (pmu_have_event("cpu", "stalled-cycles-frontend")) {
		if (perf_evlist__add_default_attrs(evsel_list,
						   frontend_attrs) < 0)
			return -1;
	}
	if (pmu_have_event("cpu", "stalled-cycles-backend")) {
		if (perf_evlist__add_default_attrs(evsel_list,
						   backend_attrs) < 0)
			return -1;
	}
	if (perf_evlist__add_default_attrs(evsel_list, default_attrs1) < 0)
		return -1;
}

/* Detailed events get appended to the event list: */

if (detailed_run < 1)
	return 0;

/* Append detailed run extra attributes: */
perf stat: Initialize default events wrt exclude_{guest,host}
When no event is specified the tools use perf_evlist__add_default(), which
calls event_attr_init to initialize the KVM exclusion bits.
When the change was made to the tools so that guest samples would be excluded
by default, the changes were made just to the parsing routines and to
perf_evlist__add_default(), not to perf_evlist__add_attrs, which is so far
used just by perf stat to add multiple events, according to the level of
detail specified.
Recently the tools were changed to reconstruct the event name from all the
details in perf_event_attr, not just from .type and .config, but taking into
account all the feature bits (.exclude_{guest,host,user,kernel,etc},
.precise_ip, etc).
That is when we noticed that the default for perf stat wasn't the one for the
rest of the tools, i.e. the .exclude_guest bit wasn't being set.
I.e. the default path, which doesn't call event_attr_init, was showing the
:HG modifier:
$ perf stat usleep 1
Performance counter stats for 'usleep 1':
0.942119 task-clock # 0.454 CPUs utilized
1 context-switches # 0.001 M/sec
0 CPU-migrations # 0.000 K/sec
126 page-faults # 0.134 M/sec
693,193 cycles:HG # 0.736 GHz [40.11%]
407,461 stalled-cycles-frontend:HG # 58.78% frontend cycles idle [72.29%]
365,403 stalled-cycles-backend:HG # 52.71% backend cycles idle
465,982 instructions:HG # 0.67 insns per cycle
# 0.87 stalled cycles per insn
89,760 branches:HG # 95.275 M/sec
6,178 branch-misses:HG # 6.88% of all branches
0.002077228 seconds time elapsed
Whereas if one explicitly specifies the same events, the parsing code is
invoked and thus event_attr_init is called:
$ perf stat -e task-clock,context-switches,migrations,page-faults,cycles,stalled-cycles-frontend,stalled-cycles-backend,instructions,branches,branch-misses usleep 1
Performance counter stats for 'usleep 1':
1.040349 task-clock # 0.500 CPUs utilized
2 context-switches # 0.002 M/sec
0 CPU-migrations # 0.000 K/sec
127 page-faults # 0.122 M/sec
587,966 cycles # 0.565 GHz [13.18%]
459,167 stalled-cycles-frontend # 78.09% frontend cycles idle
390,249 stalled-cycles-backend # 66.37% backend cycles idle
504,006 instructions # 0.86 insns per cycle
# 0.91 stalled cycles per insn
96,455 branches # 92.714 M/sec
6,522 branch-misses # 6.76% of all branches [96.12%]
0.002078681 seconds time elapsed
Fix it by introducing a perf_evlist__add_default_attrs method that calls
event_attr_init on all the perf_event_attr entries before adding the events.
Reported-by: Ingo Molnar <mingo@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-4eysr236r0pgiyum9epwxw7s@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-05-30 16:53:54 +00:00
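A minimal sketch of the introduced helper (the in-tree version is wrapped
in a macro that supplies the array size; names and signature here follow
the commit description, slightly simplified):
static int perf_evlist__add_default_attrs(struct perf_evlist *evlist,
					  struct perf_event_attr *attrs,
					  size_t nr_attrs)
{
	size_t i;

	/* Set the KVM exclusion bits etc. before the events are added: */
	for (i = 0; i < nr_attrs; i++)
		event_attr_init(attrs + i);

	return perf_evlist__add_attrs(evlist, attrs, nr_attrs);
}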
if (perf_evlist__add_default_attrs(evsel_list, detailed_attrs) < 0)
	return -1;
if (detailed_run < 2)
	return 0;

/* Append very detailed run extra attributes: */
if (perf_evlist__add_default_attrs(evsel_list, very_detailed_attrs) < 0)
	return -1;
if (detailed_run < 3)
	return 0;

/* Append very, very detailed run extra attributes: */
|
perf stat: Initialize default events wrt exclude_{guest,host}
When no event is specified the tools use perf_evlist__add_default(), that will
call event_attr_init to initialize the KVM exclusion bits.
When the change was made to the tools so that by default guest samples would be
excluded, the changes were made just to the parsing routines and to
perf_evlist__add_default(), not to perf_evlist__add_attrs, that is used so far
just by perf stat to add multiple events, according to the level of detail
specified.
Recently the tools were changed to reconstruct the event name from all the
details in perf_event_attr, not just from .type and .config, but taking into
account all the feature bits (.exclude_{guest,host,user,kernel,etc},
.precise_ip, etc).
That is when we noticed that the default for perf stat wasn't the one for the
rest of the tools, i.e. the .exclude_guest bit wasn't being set.
I.e. the default, that doesn't call event_attr_init was showing the :HG
modifier:
$ perf stat usleep 1
Performance counter stats for 'usleep 1':
0.942119 task-clock # 0.454 CPUs utilized
1 context-switches # 0.001 M/sec
0 CPU-migrations # 0.000 K/sec
126 page-faults # 0.134 M/sec
693,193 cycles:HG # 0.736 GHz [40.11%]
407,461 stalled-cycles-frontend:HG # 58.78% frontend cycles idle [72.29%]
365,403 stalled-cycles-backend:HG # 52.71% backend cycles idle
465,982 instructions:HG # 0.67 insns per cycle
# 0.87 stalled cycles per insn
89,760 branches:HG # 95.275 M/sec
6,178 branch-misses:HG # 6.88% of all branches
0.002077228 seconds time elapsed
While if one explicitely specifies the same events, which will make the parsing code
to be called and thus event_attr_init is called:
$ perf stat -e task-clock,context-switches,migrations,page-faults,cycles,stalled-cycles-frontend,stalled-cycles-backend,instructions,branches,branch-misses usleep 1
Performance counter stats for 'usleep 1':
1.040349 task-clock # 0.500 CPUs utilized
2 context-switches # 0.002 M/sec
0 CPU-migrations # 0.000 K/sec
127 page-faults # 0.122 M/sec
587,966 cycles # 0.565 GHz [13.18%]
459,167 stalled-cycles-frontend # 78.09% frontend cycles idle
390,249 stalled-cycles-backend # 66.37% backend cycles idle
504,006 instructions # 0.86 insns per cycle
# 0.91 stalled cycles per insn
96,455 branches # 92.714 M/sec
6,522 branch-misses # 6.76% of all branches [96.12%]
0.002078681 seconds time elapsed
Fix it by introducing a perf_evlist__add_default_attrs method that will call
event_attr_init on all the perf_event_attr entries before adding the events.
Reported-by: Ingo Molnar <mingo@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-4eysr236r0pgiyum9epwxw7s@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-05-30 16:53:54 +00:00
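A minimal sketch of the shape of that fix, assuming an attrs array and an
explicit count are passed in (illustrative only; the real helper at the call
site below is invoked with just the array, presumably via an ARRAY_SIZE-style
wrapper):

/*
 * Sketch: run event_attr_init() on every template attribute before the
 * events are added, so defaults carry the same .exclude_guest bit as
 * explicitly parsed events. The helper name and the exact signature of
 * perf_evlist__add_attrs() here are assumptions.
 */
static int add_default_attrs_sketch(struct perf_evlist *evlist,
				    struct perf_event_attr *attrs,
				    size_t nr_attrs)
{
	size_t i;

	for (i = 0; i < nr_attrs; i++)
		event_attr_init(&attrs[i]);	/* sets the KVM exclusion bits */

	return perf_evlist__add_attrs(evlist, attrs, nr_attrs); /* assumed signature */
}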
|
|
|
return perf_evlist__add_default_attrs(evsel_list, very_very_detailed_attrs);
|
2011-05-19 11:30:56 +00:00
|
|
|
}
|
|
|
|
|
2016-01-12 09:35:29 +00:00
|
|
|
static const char * const stat_record_usage[] = {
|
perf stat record: Add record command
Add 'perf stat record' command support. It creates a simple (header only)
perf.data file ATM.
The record command can be specified anywhere among the stat options. All
stat command options are valid for the stat record command, with the '-o'
option being the exception: if specified for the record command, it denotes
the perf data file name.
Committer note:
Set sample_type to PERF_SAMPLE_IDENTIFIER, which should be harmless
while avoiding older tools showing confusing messages; for instance,
with sample_type = 0, we get:
$ perf stat record usleep 1
Performance counter stats for 'usleep 1':
0.630237 task-clock (msec) # 0.528 CPUs utilized
1 context-switches # 0.002 M/sec
0 cpu-migrations # 0.000 K/sec
52 page-faults # 0.083 M/sec
978,312 cycles # 1.552 GHz
671,931 stalled-cycles-frontend # 68.68% frontend cycles idle
<not supported> stalled-cycles-backend
646,379 instructions # 0.66 insns per cycle
# 1.04 stalled cycles per insn
131,046 branches # 207.931 M/sec
7,073 branch-misses # 5.40% of all branches
0.001193240 seconds time elapsed
$ oldperf evlist
WARNING: The perf.data file's data size field is 0 which is unexpected.
Was the 'perf record' command properly terminated?
non matching sample_type
$
While with sample_type set to PERF_SAMPLE_IDENTIFIER, after we re-run 'perf
stat record usleep' we get:
$ oldperf evlist
WARNING: The perf.data file's data size field is 0 which is unexpected.
Was the 'perf record' command properly terminated?
task-clock
context-switches
cpu-migrations
page-faults
cycles
stalled-cycles-frontend
stalled-cycles-backend
instructions
branches
branch-misses
$
Which at least shows the names of the events in the perf.data file.
Additionally, such files, when passed to 'perf report' will produce:
$ oldperf report --stdio
WARNING: The perf.data file's data size field is 0 which is unexpected.
Was the 'perf record' command properly terminated?
Warning:
Kernel address maps (/proc/{kallsyms,modules}) were restricted.
Check /proc/sys/kernel/kptr_restrict before running 'perf record'.
As no suitable kallsyms nor vmlinux was found, kernel samples
can't be resolved.
Samples in kernel modules can't be resolved as well.
Error:
The perf.data file has no samples!
# To display the perf.data header info, please use --header/--header-only options.
#
$
Which is confusing and can be solved by just adding the kernel mmap record;
generating the mmap record also removes that warning about the data size
field being equal to zero:
$ perf stat record usleep 1
Performance counter stats for 'usleep 1':
0.600796 task-clock (msec) # 0.478 CPUs utilized
1 context-switches # 0.002 M/sec
0 cpu-migrations # 0.000 K/sec
54 page-faults # 0.090 M/sec
886,844 cycles # 1.476 GHz
582,169 stalled-cycles-frontend # 65.65% frontend cycles idle
<not supported> stalled-cycles-backend
638,344 instructions # 0.72 insns per cycle
# 0.91 stalled cycles per insn
130,204 branches # 216.719 M/sec
7,500 branch-misses # 5.76% of all branches
0.001255897 seconds time elapsed
$ oldperf evlist
task-clock
context-switches
cpu-migrations
page-faults
cycles
stalled-cycles-frontend
stalled-cycles-backend
instructions
branches
branch-misses
$ oldperf report --stdio
Error:
The perf.data file has no samples!
# To display the perf.data header info, please use --header/--header-only options.
#
[acme@zoo linux]$
No warnings, sensible output about what the events in the perf.data file are,
and also a "file has no samples" message, which indeed it doesn't have.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Tested-by: Kan Liang <kan.liang@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1446734469-11352-3-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-11-05 14:40:46 +00:00
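The committer note above boils down to a single attribute tweak; a hedged
sketch of the idea, reusing the evlist iteration seen elsewhere in this file
(where exactly the real patch applies it is an assumption):

/*
 * Sketch: give every counter a non-zero sample_type so that older
 * tools reading the resulting perf.data header do not print the
 * "non matching sample_type" message. PERF_SAMPLE_IDENTIFIER is
 * harmless here since stat data carries no samples.
 */
struct perf_evsel *counter;

evlist__for_each_entry(evsel_list, counter)
	counter->attr.sample_type = PERF_SAMPLE_IDENTIFIER;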
|
|
|
"perf stat record [<options>]",
|
|
|
|
NULL,
|
|
|
|
};
|
|
|
|
|
2015-11-05 14:40:47 +00:00
|
|
|
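/*
 * Enable every header feature by default, then clear the ones a pure
 * counting session never produces: build ids, tracing data, branch
 * stacks and auxtrace.
 */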
static void init_features(struct perf_session *session)
|
|
|
|
{
|
|
|
|
int feat;
|
|
|
|
|
|
|
|
for (feat = HEADER_FIRST_FEATURE; feat < HEADER_LAST_FEATURE; feat++)
|
|
|
|
perf_header__set_feat(&session->header, feat);
|
|
|
|
|
|
|
|
perf_header__clear_feat(&session->header, HEADER_BUILD_ID);
|
|
|
|
perf_header__clear_feat(&session->header, HEADER_TRACING_DATA);
|
|
|
|
perf_header__clear_feat(&session->header, HEADER_BRANCH_STACK);
|
|
|
|
perf_header__clear_feat(&session->header, HEADER_AUXTRACE);
|
|
|
|
}
|
|
|
|
|
2015-11-05 14:40:46 +00:00
|
|
|
static int __cmd_record(int argc, const char **argv)
|
|
|
|
{
|
|
|
|
struct perf_session *session;
|
2017-01-23 21:07:59 +00:00
|
|
|
struct perf_data *data = &perf_stat.data;
|
2015-11-05 14:40:46 +00:00
|
|
|
|
2016-01-12 09:35:29 +00:00
|
|
|
argc = parse_options(argc, argv, stat_options, stat_record_usage,
|
2015-11-05 14:40:46 +00:00
|
|
|
PARSE_OPT_STOP_AT_NON_OPTION);
|
|
|
|
|
|
|
|
if (output_name)
|
2017-01-23 21:25:41 +00:00
|
|
|
data->file.path = output_name;
|
2015-11-05 14:40:46 +00:00
|
|
|
|
2018-08-30 06:32:36 +00:00
|
|
|
if (stat_config.run_count != 1 || forever) {
|
2015-11-05 14:40:53 +00:00
|
|
|
pr_err("Cannot use -r option with perf stat record.\n");
|
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
|
2017-01-23 21:07:59 +00:00
|
|
|
session = perf_session__new(data, false, NULL);
|
2015-11-05 14:40:46 +00:00
|
|
|
if (session == NULL) {
|
|
|
|
pr_err("Perf session creation failed.\n");
|
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
|
2015-11-05 14:40:47 +00:00
|
|
|
init_features(session);
|
|
|
|
|
2015-11-05 14:40:46 +00:00
|
|
|
session->evlist = evsel_list;
|
|
|
|
perf_stat.session = session;
|
|
|
|
perf_stat.record = true;
|
|
|
|
return argc;
|
|
|
|
}
|
|
|
|
|
perf stat report: Process stat and stat round events
Adding processing of stat and stat round events.
The stat data comes in stat events, using the generic function
perf_event__process_stat_event to store data under the perf_evsel object.
The stat round events come each interval, or as the last event in
non-interval mode. The function process_stat_round_event processes the
stored data for each perf_evsel object and prints it out.
Committer note:
After this patch:
$ perf stat record usleep 1
Performance counter stats for 'usleep 1':
0.498381 task-clock (msec) # 0.571 CPUs utilized
2 context-switches # 0.004 M/sec
0 cpu-migrations # 0.000 K/sec
149 page-faults # 0.299 M/sec
1,271,635 cycles # 2.552 GHz
928,712 stalled-cycles-frontend # 73.03% frontend cycles idle
663,286 stalled-cycles-backend # 52.16% backend cycles idle
792,614 instructions # 0.62 insns per cycle
# 1.17 stalled cycles per insn
136,850 branches # 274.589 M/sec
<not counted> branch-misses (0.00%)
0.000873419 seconds time elapsed
$
$ perf stat report
Performance counter stats for '/home/acme/bin/perf stat record usleep 1':
0.498381 task-clock (msec) # 0.571 CPUs utilized
2 context-switches # 0.004 M/sec
0 cpu-migrations # 0.000 K/sec
149 page-faults # 0.299 M/sec
1,271,635 cycles # 2.552 GHz
928,712 stalled-cycles-frontend # 73.03% frontend cycles idle
663,286 stalled-cycles-backend # 52.16% backend cycles idle
792,614 instructions # 0.62 insns per cycle
# 1.17 stalled cycles per insn
136,850 branches # 274.589 M/sec
<not counted> branch-misses (0.00%)
0.000873419 seconds time elapsed
$
Reported-by: Kan Liang <kan.liang@intel.com>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1446734469-11352-16-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-11-05 14:40:59 +00:00
|
|
|
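/*
 * One stat round equals one printed summary: fold the buffered stat
 * data into each evsel, track wall time on the final round, and reuse
 * the round's timestamp as the interval timestamp in interval mode.
 */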
static int process_stat_round_event(struct perf_tool *tool __maybe_unused,
|
|
|
|
union perf_event *event,
|
|
|
|
struct perf_session *session)
|
|
|
|
{
|
2016-05-05 23:04:03 +00:00
|
|
|
struct stat_round_event *stat_round = &event->stat_round;
|
2015-11-05 14:40:59 +00:00
|
|
|
struct perf_evsel *counter;
|
|
|
|
struct timespec tsh, *ts = NULL;
|
|
|
|
const char **argv = session->header.env.cmdline_argv;
|
|
|
|
int argc = session->header.env.nr_cmdline;
|
|
|
|
|
2016-06-23 14:26:15 +00:00
|
|
|
evlist__for_each_entry(evsel_list, counter)
|
2015-11-05 14:40:59 +00:00
|
|
|
perf_stat_process_counter(&stat_config, counter);
|
|
|
|
|
2016-05-05 23:04:03 +00:00
|
|
|
if (stat_round->type == PERF_STAT_ROUND_TYPE__FINAL)
|
|
|
|
update_stats(&walltime_nsecs_stats, stat_round->time);
|
2015-11-05 14:40:59 +00:00
|
|
|
|
2016-05-05 23:04:03 +00:00
|
|
|
if (stat_config.interval && stat_round->time) {
|
2016-08-05 18:40:30 +00:00
|
|
|
tsh.tv_sec = stat_round->time / NSEC_PER_SEC;
|
|
|
|
tsh.tv_nsec = stat_round->time % NSEC_PER_SEC;
|
2015-11-05 14:40:59 +00:00
|
|
|
ts = &tsh;
|
|
|
|
}
|
|
|
|
|
|
|
|
print_counters(ts, argc, argv);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2015-11-05 14:40:57 +00:00
|
|
|
static
|
2016-12-12 13:52:10 +00:00
|
|
|
int process_stat_config_event(struct perf_tool *tool,
|
2015-11-05 14:40:57 +00:00
|
|
|
union perf_event *event,
|
|
|
|
struct perf_session *session __maybe_unused)
|
|
|
|
{
|
2015-11-05 14:40:58 +00:00
|
|
|
struct perf_stat *st = container_of(tool, struct perf_stat, tool);
|
|
|
|
|
2015-11-05 14:40:57 +00:00
|
|
|
perf_event__read_stat_config(&stat_config, &event->stat_config);
|
2015-11-05 14:40:58 +00:00
|
|
|
|
2015-11-05 14:41:02 +00:00
|
|
|
if (cpu_map__empty(st->cpus)) {
|
|
|
|
if (st->aggr_mode != AGGR_UNSET)
|
|
|
|
pr_warning("warning: processing task data, aggregation mode not set\n");
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (st->aggr_mode != AGGR_UNSET)
|
|
|
|
stat_config.aggr_mode = st->aggr_mode;
|
|
|
|
|
2017-01-23 21:07:59 +00:00
|
|
|
if (perf_stat.data.is_pipe)
|
2015-11-05 14:40:58 +00:00
|
|
|
perf_stat_init_aggr_mode();
|
|
|
|
else
|
|
|
|
perf_stat_init_aggr_mode_file(st);
|
|
|
|
|
2015-11-05 14:40:57 +00:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2015-11-05 14:40:56 +00:00
|
|
|
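/*
 * Called once both the thread map and the cpu map events have arrived:
 * attach the maps to the evlist and allocate the per-counter stats.
 */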
static int set_maps(struct perf_stat *st)
|
|
|
|
{
|
|
|
|
if (!st->cpus || !st->threads)
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
if (WARN_ONCE(st->maps_allocated, "stats double allocation\n"))
|
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
perf_evlist__set_maps(evsel_list, st->cpus, st->threads);
|
|
|
|
|
|
|
|
if (perf_evlist__alloc_stats(evsel_list, true))
|
|
|
|
return -ENOMEM;
|
|
|
|
|
|
|
|
st->maps_allocated = true;
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static
|
2016-12-12 13:52:10 +00:00
|
|
|
int process_thread_map_event(struct perf_tool *tool,
|
2015-11-05 14:40:56 +00:00
|
|
|
union perf_event *event,
|
|
|
|
struct perf_session *session __maybe_unused)
|
|
|
|
{
|
|
|
|
struct perf_stat *st = container_of(tool, struct perf_stat, tool);
|
|
|
|
|
|
|
|
if (st->threads) {
|
|
|
|
pr_warning("Extra thread map event, ignoring.\n");
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
st->threads = thread_map__new_event(&event->thread_map);
|
|
|
|
if (!st->threads)
|
|
|
|
return -ENOMEM;
|
|
|
|
|
|
|
|
return set_maps(st);
|
|
|
|
}
|
|
|
|
|
|
|
|
static
|
2016-12-12 13:52:10 +00:00
|
|
|
int process_cpu_map_event(struct perf_tool *tool,
|
2015-11-05 14:40:56 +00:00
|
|
|
union perf_event *event,
|
|
|
|
struct perf_session *session __maybe_unused)
|
|
|
|
{
|
|
|
|
struct perf_stat *st = container_of(tool, struct perf_stat, tool);
|
|
|
|
struct cpu_map *cpus;
|
|
|
|
|
|
|
|
if (st->cpus) {
|
|
|
|
pr_warning("Extra cpu map event, ignoring.\n");
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
cpus = cpu_map__new_data(&event->cpu_map.data);
|
|
|
|
if (!cpus)
|
|
|
|
return -ENOMEM;
|
|
|
|
|
|
|
|
st->cpus = cpus;
|
|
|
|
return set_maps(st);
|
|
|
|
}
|
|
|
|
|
2017-12-05 14:03:07 +00:00
|
|
|
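/*
 * Allocate and initialize one runtime_stat per thread, so shadow
 * metrics can be accumulated separately for each monitored thread.
 */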
static int runtime_stat_new(struct perf_stat_config *config, int nthreads)
|
|
|
|
{
|
|
|
|
int i;
|
|
|
|
|
|
|
|
config->stats = calloc(nthreads, sizeof(struct runtime_stat));
|
|
|
|
if (!config->stats)
|
|
|
|
return -1;
|
|
|
|
|
|
|
|
config->stats_num = nthreads;
|
|
|
|
|
|
|
|
for (i = 0; i < nthreads; i++)
|
|
|
|
runtime_stat__init(&config->stats[i]);
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static void runtime_stat_delete(struct perf_stat_config *config)
|
|
|
|
{
|
|
|
|
int i;
|
|
|
|
|
|
|
|
if (!config->stats)
|
|
|
|
return;
|
|
|
|
|
|
|
|
for (i = 0; i < config->stats_num; i++)
|
|
|
|
runtime_stat__exit(&config->stats[i]);
|
|
|
|
|
|
|
|
free(config->stats);
|
|
|
|
}
|
|
|
|
|
2016-01-12 09:35:29 +00:00
|
|
|
static const char * const stat_report_usage[] = {
|
2015-11-05 14:40:55 +00:00
|
|
|
"perf stat report [<options>]",
|
|
|
|
NULL,
|
|
|
|
};
|
|
|
|
|
|
|
|
static struct perf_stat perf_stat = {
|
|
|
|
.tool = {
|
|
|
|
.attr = perf_event__process_attr,
|
2015-11-05 14:41:00 +00:00
|
|
|
.event_update = perf_event__process_event_update,
|
2015-11-05 14:40:56 +00:00
|
|
|
.thread_map = process_thread_map_event,
|
|
|
|
.cpu_map = process_cpu_map_event,
|
2015-11-05 14:40:57 +00:00
|
|
|
.stat_config = process_stat_config_event,
|
2015-11-05 14:40:59 +00:00
|
|
|
.stat = perf_event__process_stat_event,
|
|
|
|
.stat_round = process_stat_round_event,
|
2015-11-05 14:40:55 +00:00
|
|
|
},
|
2015-11-05 14:41:02 +00:00
|
|
|
.aggr_mode = AGGR_UNSET,
|
2015-11-05 14:40:55 +00:00
|
|
|
};
|
|
|
|
|
|
|
|
static int __cmd_report(int argc, const char **argv)
|
|
|
|
{
|
|
|
|
struct perf_session *session;
|
|
|
|
const struct option options[] = {
|
|
|
|
OPT_STRING('i', "input", &input_name, "file", "input file name"),
|
2015-11-05 14:41:02 +00:00
|
|
|
OPT_SET_UINT(0, "per-socket", &perf_stat.aggr_mode,
|
|
|
|
"aggregate counts per processor socket", AGGR_SOCKET),
|
|
|
|
OPT_SET_UINT(0, "per-core", &perf_stat.aggr_mode,
|
|
|
|
"aggregate counts per physical processor core", AGGR_CORE),
|
|
|
|
OPT_SET_UINT('A', "no-aggr", &perf_stat.aggr_mode,
|
|
|
|
"disable CPU count aggregation", AGGR_NONE),
|
2015-11-05 14:40:55 +00:00
|
|
|
OPT_END()
|
|
|
|
};
|
|
|
|
struct stat st;
|
|
|
|
int ret;
|
|
|
|
|
2016-01-12 09:35:29 +00:00
|
|
|
argc = parse_options(argc, argv, options, stat_report_usage, 0);
|
2015-11-05 14:40:55 +00:00
|
|
|
|
|
|
|
if (!input_name || !strlen(input_name)) {
|
|
|
|
if (!fstat(STDIN_FILENO, &st) && S_ISFIFO(st.st_mode))
|
|
|
|
input_name = "-";
|
|
|
|
else
|
|
|
|
input_name = "perf.data";
|
|
|
|
}
|
|
|
|
|
2017-01-23 21:25:41 +00:00
|
|
|
perf_stat.data.file.path = input_name;
|
|
|
|
perf_stat.data.mode = PERF_DATA_MODE_READ;
|
2015-11-05 14:40:55 +00:00
|
|
|
|
2017-01-23 21:07:59 +00:00
|
|
|
session = perf_session__new(&perf_stat.data, false, &perf_stat.tool);
|
2015-11-05 14:40:55 +00:00
|
|
|
if (session == NULL)
|
|
|
|
return -1;
|
|
|
|
|
|
|
|
perf_stat.session = session;
|
|
|
|
stat_config.output = stderr;
|
|
|
|
evsel_list = session->evlist;
|
|
|
|
|
|
|
|
ret = perf_session__process_events(session);
|
|
|
|
if (ret)
|
|
|
|
return ret;
|
|
|
|
|
|
|
|
perf_session__delete(session);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
perf tools: Force uncore events to system wide monitoring
Make system wide (-a) the default option if no target was specified and
one of the following conditions is met:
- there's no workload specified (current behaviour)
- there is workload specified but all requested
events are system wide ones
Mixed events core/uncore with workload:
$ perf stat -e 'uncore_cbox_0/clockticks/,cycles' sleep 1
Performance counter stats for 'sleep 1':
<not supported> uncore_cbox_0/clockticks/
980,489 cycles
1.000897406 seconds time elapsed
Uncore event with workload:
$ perf stat -e 'uncore_cbox_0/clockticks/' sleep 1
Performance counter stats for 'system wide':
281,473,897,192,670 uncore_cbox_0/clockticks/
1.000833784 seconds time elapsed
Committer note:
When testing I realized the default case for !root, i.e. no events
passed via -e, was broken by v2 of this patch; I reported it and, after a
patch provided by Jiri, it is back working:
[acme@jouet linux]$ perf stat usleep 1
Performance counter stats for 'usleep 1':
0.401335 task-clock:u (msec) # 0.297 CPUs utilized
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
48 page-faults:u # 0.120 M/sec
458,146 cycles:u # 1.142 GHz
245,113 instructions:u # 0.54 insn per cycle
47,991 branches:u # 119.578 M/sec
4,022 branch-misses:u # 8.38% of all branches
0.001350029 seconds time elapsed
[acme@jouet linux]$
Suggested-and-Tested-by: Borislav Petkov <bp@alien8.de>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20170227094818.GA12764@krava
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-02-27 09:48:18 +00:00
|
|
|
static void setup_system_wide(int forks)
|
|
|
|
{
|
|
|
|
/*
|
|
|
|
* Make system wide (-a) the default target if
|
|
|
|
* no target was specified and one of following
|
|
|
|
* conditions is met:
|
|
|
|
*
|
|
|
|
* - there's no workload specified
|
|
|
|
* - there is workload specified but all requested
|
|
|
|
* events are system wide events
|
|
|
|
*/
|
|
|
|
if (!target__none(&target))
|
|
|
|
return;
|
|
|
|
|
|
|
|
if (!forks)
|
|
|
|
target.system_wide = true;
|
|
|
|
else {
|
|
|
|
struct perf_evsel *counter;
|
|
|
|
|
|
|
|
evlist__for_each_entry(evsel_list, counter) {
|
|
|
|
if (!counter->system_wide)
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (evsel_list->nr_entries)
|
|
|
|
target.system_wide = true;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2017-03-27 14:47:20 +00:00
|
|
|
int cmd_stat(int argc, const char **argv)
|
2009-05-26 07:17:18 +00:00
|
|
|
{
|
2012-10-01 18:20:58 +00:00
|
|
|
const char * const stat_usage[] = {
|
|
|
|
"perf stat [<options>] [<command>]",
|
|
|
|
NULL
|
|
|
|
};
|
2013-11-01 07:33:15 +00:00
|
|
|
int status = -EINVAL, run_idx;
|
2011-08-15 20:22:33 +00:00
|
|
|
const char *mode;
|
2015-07-21 12:31:24 +00:00
|
|
|
FILE *output = stderr;
|
2018-01-29 09:25:23 +00:00
|
|
|
unsigned int interval, timeout;
|
2015-11-05 14:40:55 +00:00
|
|
|
const char * const stat_subcommands[] = { "record", "report" };
|
2009-06-13 12:57:28 +00:00
|
|
|
|
perf stat: add perf stat -B to pretty print large numbers
It is hard to read very large numbers so provide an option to perf stat
to separate thousands using a separator. The patch leverages the locale
support of stdio. You need to set your LC_NUMERIC appropriately, for
instance LC_NUMERIC=en_US.UTF8. You need to pass -B to activate this
feature. This way existing scripts parsing the output do not need to be
changed. Here is an example.
$ perf stat noploop 2
noploop for 2 seconds
Performance counter stats for 'noploop 2':
1998.347031 task-clock-msecs # 0.998 CPUs
61 context-switches # 0.000 M/sec
0 CPU-migrations # 0.000 M/sec
118 page-faults # 0.000 M/sec
4,138,410,900 cycles # 2070.917 M/sec (scaled from 70.01%)
2,062,650,268 instructions # 0.498 IPC (scaled from 70.01%)
2,057,653,466 branches # 1029.678 M/sec (scaled from 70.01%)
40,267 branch-misses # 0.002 % (scaled from 30.04%)
2,055,961,348 cache-references # 1028.831 M/sec (scaled from 30.03%)
53,725 cache-misses # 0.027 M/sec (scaled from 30.02%)
2.001393933 seconds time elapsed
$ perf stat -B noploop 2
noploop for 2 seconds
Performance counter stats for 'noploop 2':
1998.297883 task-clock-msecs # 0.998 CPUs
59 context-switches # 0.000 M/sec
0 CPU-migrations # 0.000 M/sec
119 page-faults # 0.000 M/sec
4,131,380,160 cycles # 2067.450 M/sec (scaled from 70.01%)
2,059,096,507 instructions # 0.498 IPC (scaled from 70.01%)
2,054,681,303 branches # 1028.216 M/sec (scaled from 70.01%)
25,650 branch-misses # 0.001 % (scaled from 30.05%)
2,056,283,014 cache-references # 1029.017 M/sec (scaled from 30.03%)
47,097 cache-misses # 0.024 M/sec (scaled from 30.02%)
2.001391016 seconds time elapsed
Cc: David S. Miller <davem@davemloft.net>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Tom Zanussi <tzanussi@gmail.com>
LKML-Reference: <4bf28fe8.914ed80a.01ca.fffff5f5@mx.google.com>
Signed-off-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2010-05-18 13:00:01 +00:00
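The stdio mechanism the message relies on is the POSIX ' printf flag, which
only groups digits once a locale with a grouping rule is active; a minimal
standalone illustration (not perf code):

#include <locale.h>
#include <stdio.h>

int main(void)
{
	/* Pick up LC_NUMERIC from the environment, e.g. en_US.UTF8. */
	setlocale(LC_ALL, "");

	/*
	 * The ' flag asks stdio to insert the locale's thousands
	 * separator; under the plain "C" locale it prints bare digits.
	 */
	printf("%'llu cycles\n", 4138410900ULL);
	return 0;
}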
|
|
|
setlocale(LC_ALL, "");
|
|
|
|
|
2013-03-11 07:43:12 +00:00
|
|
|
evsel_list = perf_evlist__new();
|
2011-01-11 22:56:53 +00:00
|
|
|
if (evsel_list == NULL)
|
|
|
|
return -ENOMEM;
|
|
|
|
|
perf stat: Bail out on unsupported event config modifiers
'perf stat' accepts some config terms but doesn't apply them. For
example:
# perf stat -e 'instructions/no-inherit/' -e 'instructions/inherit/' bash
# ls
# exit
Performance counter stats for 'bash':
266258061 instructions/no-inherit/
266258061 instructions/inherit/
1.402183915 seconds time elapsed
The result is confusing, because the user may expect the first
'instructions' event to exclude the 'ls' command.
This patch forbids most of these config terms for 'perf stat'.
Result:
# ./perf stat -e 'instructions/no-inherit/' -e 'instructions/inherit/' bash
event syntax error: 'instructions/no-inherit/'
\___ 'no-inherit' is not usable in 'perf stat'
...
We can add blocked config terms back when 'perf stat' really supports them.
This patch also removes unavailable config terms from the error message:
# ./perf stat -e 'instructions/badterm/' ls
event syntax error: 'instructions/badterm/'
\___ unknown term
valid terms: config,config1,config2,name
# ./perf stat -e 'cpu/badterm/' ls
event syntax error: 'cpu/badterm/'
\___ unknown term
valid terms: pc,any,inv,edge,cmask,event,in_tx,ldlat,umask,in_tx_cp,offcore_rsp,config,config1,config2,name
Signed-off-by: Wang Nan <wangnan0@huawei.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Cody P Schafer <dev@codyps.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jeremie Galarneau <jeremie.galarneau@efficios.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kirill Smelkov <kirr@nexedi.com>
Cc: Li Zefan <lizefan@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1455882283-79592-11-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-02-19 11:43:58 +00:00
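A hypothetical sketch of the shape of such a gate (the list and helper below are illustrative, not perf's actual parser code): walk each config term and reject the ones 'perf stat' would otherwise silently ignore.

#include <stdbool.h>
#include <string.h>

/* Illustrative list only; the real set lives in perf's event parser. */
static const char * const stat_blocked_terms[] = {
	"inherit", "no-inherit", "freq", "period", "time",
};

static bool term_usable_in_stat(const char *term)
{
	size_t i;

	for (i = 0; i < sizeof(stat_blocked_terms) / sizeof(stat_blocked_terms[0]); i++) {
		if (!strcmp(term, stat_blocked_terms[i]))
			return false;	/* 'perf stat' would silently ignore it */
	}
	return true;
}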
|
|
|
parse_events__shrink_config_terms();
|
perf stat record: Add record command
Add 'perf stat record' command support. It creates a simple (header-only)
perf.data file for now.
The record subcommand can be specified anywhere among the stat options.
All stat command options are valid for stat record, with one exception:
'-o'. If specified for the record command, it denotes the perf data file
name.
Committer note:
Set sample_type to PERF_SAMPLE_IDENTIFIER, which should be harmless
while avoiding older tools showing confusing messages. For instance,
with sample_type = 0, we get:
$ perf stat record usleep 1
Performance counter stats for 'usleep 1':
0.630237 task-clock (msec) # 0.528 CPUs utilized
1 context-switches # 0.002 M/sec
0 cpu-migrations # 0.000 K/sec
52 page-faults # 0.083 M/sec
978,312 cycles # 1.552 GHz
671,931 stalled-cycles-frontend # 68.68% frontend cycles idle
<not supported> stalled-cycles-backend
646,379 instructions # 0.66 insns per cycle
# 1.04 stalled cycles per insn
131,046 branches # 207.931 M/sec
7,073 branch-misses # 5.40% of all branches
0.001193240 seconds time elapsed
$ oldperf evlist
WARNING: The perf.data file's data size field is 0 which is unexpected.
Was the 'perf record' command properly terminated?
non matching sample_type
$
While with sample_type set to PERF_SAMPLE_IDENTIFIER, after we re-run 'perf
stat record usleep' we get:
$ oldperf evlist
WARNING: The perf.data file's data size field is 0 which is unexpected.
Was the 'perf record' command properly terminated?
task-clock
context-switches
cpu-migrations
page-faults
cycles
stalled-cycles-frontend
stalled-cycles-backend
instructions
branches
branch-misses
$
Which at least shows the names of the events in the perf.data file.
Additionally, such files, when passed to 'perf report', will produce:
$ oldperf report --stdio
WARNING: The perf.data file's data size field is 0 which is unexpected.
Was the 'perf record' command properly terminated?
Warning:
Kernel address maps (/proc/{kallsyms,modules}) were restricted.
Check /proc/sys/kernel/kptr_restrict before running 'perf record'.
As no suitable kallsyms nor vmlinux was found, kernel samples
can't be resolved.
Samples in kernel modules can't be resolved as well.
Error:
The perf.data file has no samples!
# To display the perf.data header info, please use --header/--header-only options.
#
$
This is confusing and can be solved by just adding the kernel mmap record,
which also removes the warning about the data size field being equal to
zero. After generating the mmap record:
$ perf stat record usleep 1
Performance counter stats for 'usleep 1':
0.600796 task-clock (msec) # 0.478 CPUs utilized
1 context-switches # 0.002 M/sec
0 cpu-migrations # 0.000 K/sec
54 page-faults # 0.090 M/sec
886,844 cycles # 1.476 GHz
582,169 stalled-cycles-frontend # 65.65% frontend cycles idle
<not supported> stalled-cycles-backend
638,344 instructions # 0.72 insns per cycle
# 0.91 stalled cycles per insn
130,204 branches # 216.719 M/sec
7,500 branch-misses # 5.76% of all branches
0.001255897 seconds time elapsed
$ oldperf evlist
task-clock
context-switches
cpu-migrations
page-faults
cycles
stalled-cycles-frontend
stalled-cycles-backend
instructions
branches
branch-misses
$ oldperf report --stdio
Error:
The perf.data file has no samples!
# To display the perf.data header info, please use --header/--header-only options.
#
[acme@zoo linux]$
No warnings, sensible output about what the events in the perf.data file
are, and also a "file has no samples" message, which indeed it doesn't have.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Tested-by: Kan Liang <kan.liang@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1446734469-11352-3-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-11-05 14:40:46 +00:00
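A minimal sketch of the committer's tweak, assuming only the standard perf_event_attr layout from linux/perf_event.h (the helper name is made up): a header-only stat file still carries attrs, so stamping PERF_SAMPLE_IDENTIFIER on them keeps older tools from choking on sample_type == 0.

#include <linux/perf_event.h>

/* Illustrative helper, not the actual perf function. */
static void stat_mark_attr(struct perf_event_attr *attr)
{
	/*
	 * Harmless for stat itself (no samples are recorded), but gives
	 * older 'perf evlist'/'perf report' a sample_type they accept.
	 */
	attr->sample_type = PERF_SAMPLE_IDENTIFIER;
}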
|
|
|
argc = parse_options_subcommand(argc, argv, stat_options, stat_subcommands,
|
|
|
|
(const char **) stat_usage,
|
|
|
|
PARSE_OPT_STOP_AT_NON_OPTION);
|
perf stat: Output JSON MetricExpr metric
Add generic infrastructure to perf stat to output ratios for
"MetricExpr" entries in the event lists. Many events are more useful as
ratios than in raw form, typically some count in relation to total
ticks.
Transfer the MetricExpr information from the alias to the evsel.
We mark the events that need to be collected for MetricExpr, and also
link the events using them with a pointer. The code is careful to always
prefer the right event in the same group to minimize multiplexing
errors. At the moment only a single relation is supported.
Then add an rblist to the stat shadow code that remembers stats based on
the cpu and context.
Then, finally, update, retrieve, and print these values similarly to the
existing hardcoded perf metrics, using the simple expression parser
added earlier to evaluate the expression.
Normally we just output the result without further commentary, but for
--metric-only this would lead to empty columns. So for this case we use
the original event as the description.
There is no attempt to automatically add the MetricExpr event if it is
missing; however, we suggest it to the user, because the tool doesn't
have enough information to reliably construct a group that is
guaranteed to schedule. So we leave that to the user.
% perf stat -a -I 1000 -e '{unc_p_clockticks,unc_p_freq_max_os_cycles}'
1.000147889 800,085,181 unc_p_clockticks
1.000147889 93,126,241 unc_p_freq_max_os_cycles # 11.6
2.000448381 800,218,217 unc_p_clockticks
2.000448381 142,516,095 unc_p_freq_max_os_cycles # 17.8
3.000639852 800,243,057 unc_p_clockticks
3.000639852 162,292,689 unc_p_freq_max_os_cycles # 20.3
% perf stat -a -I 1000 -e '{unc_p_clockticks,unc_p_freq_max_os_cycles}' --metric-only
# time freq_max_os_cycles %
1.000127077 0.9
2.000301436 0.7
3.000456379 0.0
v2: Change from DivideBy to MetricExpr
v3: Use expr__ prefix. Support more than one other event.
v4: Update description
v5: Only print warning message once for multiple PMUs.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: http://lkml.kernel.org/r/20170320201711.14142-11-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-03-20 20:17:08 +00:00
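Numerically, a MetricExpr ratio like the one above boils down to something this small (illustrative only; perf's real path goes through its expression parser and the shadow-stat rblist):

static double metric_ratio_pct(double numerator, double denominator)
{
	if (denominator == 0.0)
		return 0.0;	/* nothing to relate the count to */
	return 100.0 * numerator / denominator;	/* 93126241 / 800085181 -> 11.6 */
}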
|
|
|
perf_stat__collect_metric_expr(evsel_list);
|
2016-03-01 18:57:52 +00:00
|
|
|
perf_stat__init_shadow_stats();
|
perf stat record: Add record command
2015-11-05 14:40:46 +00:00
|
|
|
|
2018-08-30 06:32:29 +00:00
|
|
|
if (stat_config.csv_sep) {
|
|
|
|
stat_config.csv_output = true;
|
|
|
|
if (!strcmp(stat_config.csv_sep, "\\t"))
|
|
|
|
stat_config.csv_sep = "\t";
|
2015-11-05 14:41:01 +00:00
|
|
|
} else
|
2018-08-30 06:32:29 +00:00
|
|
|
stat_config.csv_sep = DEFAULT_SEPARATOR;
|
2015-11-05 14:41:01 +00:00
|
|
|
|
perf stat record: Add record command
2015-11-05 14:40:46 +00:00
|
|
|
if (argc && !strncmp(argv[0], "rec", 3)) {
|
|
|
|
argc = __cmd_record(argc, argv);
|
|
|
|
if (argc < 0)
|
|
|
|
return -1;
|
2015-11-05 14:40:55 +00:00
|
|
|
} else if (argc && !strncmp(argv[0], "rep", 3))
|
|
|
|
return __cmd_report(argc, argv);
|
perf stat: Add csv-style output
This patch adds an option (-x/--field-separator) to print counts using a
CSV-style output. The user can pass a custom separator. This makes it very easy
to import counts directly into your favorite spreadsheet without having to
write scripts.
Example:
$ perf stat --field-separator=, -a -- sleep 1
4009.961740,task-clock-msecs
13,context-switches
2,CPU-migrations
189,page-faults
9596385684,cycles
3493659441,instructions
872897069,branches
41562,branch-misses
22424,cache-references
1289,cache-misses
Works also in non-aggregated mode:
$ perf stat -x , -a -A -- sleep 1
CPU0,1002.526168,task-clock-msecs
CPU1,1002.528365,task-clock-msecs
CPU2,1002.523360,task-clock-msecs
CPU3,1002.519878,task-clock-msecs
CPU0,1,context-switches
CPU1,5,context-switches
CPU2,5,context-switches
CPU3,6,context-switches
CPU0,0,CPU-migrations
CPU1,1,CPU-migrations
CPU2,0,CPU-migrations
CPU3,1,CPU-migrations
CPU0,2,page-faults
CPU1,6,page-faults
CPU2,9,page-faults
CPU3,174,page-faults
CPU0,2399439771,cycles
CPU1,2380369063,cycles
CPU2,2399142710,cycles
CPU3,2373161192,cycles
CPU0,872900618,instructions
CPU1,873030960,instructions
CPU2,872714525,instructions
CPU3,874460580,instructions
CPU0,221556839,branches
CPU1,218134342,branches
CPU2,218161730,branches
CPU3,218284093,branches
CPU0,18556,branch-misses
CPU1,1449,branch-misses
CPU2,3447,branch-misses
CPU3,12714,branch-misses
CPU0,8330,cache-references
CPU1,313844,cache-references
CPU2,47993728,cache-references
CPU3,826481,cache-references
CPU0,272,cache-misses
CPU1,5360,cache-misses
CPU2,1342193,cache-misses
CPU3,13992,cache-misses
This second version adds the ability to name a separator and uses
field-separator as the long option to be consistent with perf report.
Committer note: Since we enabled --big-num by default in 201e0b0 and -x can't be
used with it, we need to notice whether the user explicitly enabled or disabled -B,
and add code to disable big_num if the user didn't explicitly set --big-num when
-x is used.
Cc: David S. Miller <davem@davemloft.net>
Cc: Frederik Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: paulus@samba.org
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Robert Richter <robert.richter@amd.com>
LKML-Reference: <4cf68aa7.0fedd80a.5294.1203@mx.google.com>
Signed-off-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2010-12-01 16:49:05 +00:00
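A toy restatement of the separator-driven format (illustrative helper, not the tool's printer): one count per line, fields joined by whatever -x was given, so a spreadsheet can split on it.

#include <stdio.h>

/* Illustrative only: emit one counter in -x style, e.g. "13,context-switches". */
static void print_count_csv(FILE *out, const char *sep,
			    unsigned long long count, const char *event)
{
	fprintf(out, "%llu%s%s\n", count, sep, event);
}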
|
|
|
|
2015-07-21 12:31:25 +00:00
|
|
|
interval = stat_config.interval;
|
2018-01-29 09:25:23 +00:00
|
|
|
timeout = stat_config.timeout;
|
2015-07-21 12:31:25 +00:00
|
|
|
|
perf stat record: Add record command
2015-11-05 14:40:46 +00:00
|
|
|
/*
|
|
|
|
* For record command the -o is already taken care of.
|
|
|
|
*/
|
|
|
|
if (!STAT_RECORD && output_name && strcmp(output_name, "-"))
|
2011-08-15 20:22:33 +00:00
|
|
|
output = NULL;
|
|
|
|
|
2011-09-07 23:14:00 +00:00
|
|
|
if (output_name && output_fd) {
|
|
|
|
fprintf(stderr, "cannot use both --output and --log-fd\n");
|
2015-11-05 14:40:45 +00:00
|
|
|
parse_options_usage(stat_usage, stat_options, "o", 1);
|
|
|
|
parse_options_usage(NULL, stat_options, "log-fd", 0);
|
2013-11-01 07:33:15 +00:00
|
|
|
goto out;
|
2011-09-07 23:14:00 +00:00
|
|
|
}
|
2012-05-15 11:11:11 +00:00
|
|
|
|
2018-08-30 06:32:31 +00:00
|
|
|
if (stat_config.metric_only && stat_config.aggr_mode == AGGR_THREAD) {
|
2016-03-03 23:57:36 +00:00
|
|
|
fprintf(stderr, "--metric-only is not supported with --per-thread\n");
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
2018-08-30 06:32:36 +00:00
|
|
|
if (stat_config.metric_only && stat_config.run_count > 1) {
|
2016-03-03 23:57:36 +00:00
|
|
|
fprintf(stderr, "--metric-only is not supported with -r\n");
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
2018-08-30 06:32:36 +00:00
|
|
|
if (walltime_run_table && stat_config.run_count <= 1) {
|
2018-04-23 09:08:21 +00:00
|
|
|
fprintf(stderr, "--table is only supported with -r\n");
|
|
|
|
parse_options_usage(stat_usage, stat_options, "r", 1);
|
|
|
|
parse_options_usage(NULL, stat_options, "table", 0);
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
2012-05-15 11:11:11 +00:00
|
|
|
if (output_fd < 0) {
|
|
|
|
fprintf(stderr, "argument to --log-fd must be a > 0\n");
|
2015-11-05 14:40:45 +00:00
|
|
|
parse_options_usage(stat_usage, stat_options, "log-fd", 0);
|
2013-11-01 07:33:15 +00:00
|
|
|
goto out;
|
2012-05-15 11:11:11 +00:00
|
|
|
}
|
|
|
|
|
2011-08-15 20:22:33 +00:00
|
|
|
if (!output) {
|
|
|
|
struct timespec tm;
|
|
|
|
mode = append_file ? "a" : "w";
|
|
|
|
|
|
|
|
output = fopen(output_name, mode);
|
|
|
|
if (!output) {
|
|
|
|
perror("failed to create output file");
|
2012-08-26 18:24:44 +00:00
|
|
|
return -1;
|
2011-08-15 20:22:33 +00:00
|
|
|
}
|
|
|
|
clock_gettime(CLOCK_REALTIME, &tm);
|
|
|
|
fprintf(output, "# started on %s\n", ctime(&tm.tv_sec));
|
2012-05-15 11:11:11 +00:00
|
|
|
} else if (output_fd > 0) {
|
2011-09-07 23:14:00 +00:00
|
|
|
mode = append_file ? "a" : "w";
|
|
|
|
output = fdopen(output_fd, mode);
|
|
|
|
if (!output) {
|
|
|
|
perror("Failed opening logfd");
|
|
|
|
return -errno;
|
|
|
|
}
|
2011-08-15 20:22:33 +00:00
|
|
|
}
|
|
|
|
|
2015-07-21 12:31:24 +00:00
|
|
|
stat_config.output = output;
|
|
|
|
|
perf stat: Add csv-style output
2010-12-01 16:49:05 +00:00
|
|
|
/*
|
|
|
|
* let the spreadsheet do the pretty-printing
|
|
|
|
*/
|
2018-08-30 06:32:29 +00:00
|
|
|
if (stat_config.csv_output) {
|
2011-09-07 23:14:04 +00:00
|
|
|
/* User explicitly passed -B? */
|
perf stat: Add csv-style output
2010-12-01 16:49:05 +00:00
|
|
|
if (big_num_opt == 1) {
|
|
|
|
fprintf(stderr, "-B option not supported with -x\n");
|
2015-11-05 14:40:45 +00:00
|
|
|
parse_options_usage(stat_usage, stat_options, "B", 1);
|
|
|
|
parse_options_usage(NULL, stat_options, "x", 1);
|
2013-11-01 07:33:15 +00:00
|
|
|
goto out;
|
perf stat: Add csv-style output
2010-12-01 16:49:05 +00:00
|
|
|
} else /* Nope, so disable big number formatting */
|
|
|
|
big_num = false;
|
|
|
|
} else if (big_num_opt == 0) /* User passed --no-big-num */
|
|
|
|
big_num = false;
|
|
|
|
|
perf tools: Force uncore events to system wide monitoring
Make system-wide (-a) the default option if no target was specified and
one of the following conditions is met:
- there's no workload specified (current behaviour)
- there is a workload specified, but all requested
events are system-wide ones
Mixed events core/uncore with workload:
$ perf stat -e 'uncore_cbox_0/clockticks/,cycles' sleep 1
Performance counter stats for 'sleep 1':
<not supported> uncore_cbox_0/clockticks/
980,489 cycles
1.000897406 seconds time elapsed
Uncore event with workload:
$ perf stat -e 'uncore_cbox_0/clockticks/' sleep 1
Performance counter stats for 'system wide':
281,473,897,192,670 uncore_cbox_0/clockticks/
1.000833784 seconds time elapsed
Committer note:
When testing, I realized the default case for !root, i.e. no events
passed via -e, was broken by v2 of this patch; I reported it, and after a
patch provided by Jiri it is working again:
[acme@jouet linux]$ perf stat usleep 1
Performance counter stats for 'usleep 1':
0.401335 task-clock:u (msec) # 0.297 CPUs utilized
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
48 page-faults:u # 0.120 M/sec
458,146 cycles:u # 1.142 GHz
245,113 instructions:u # 0.54 insn per cycle
47,991 branches:u # 119.578 M/sec
4,022 branch-misses:u # 8.38% of all branches
0.001350029 seconds time elapsed
[acme@jouet linux]$
Suggested-and-Tested-by: Borislav Petkov <bp@alien8.de>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20170227094818.GA12764@krava
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-02-27 09:48:18 +00:00
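Restated as a standalone predicate (simplified; the real check is the setup_system_wide() logic shown earlier), the rule is:

#include <stdbool.h>

/* Simplified restatement only; not the actual perf helper. */
static bool default_to_system_wide(bool target_given, bool have_workload,
				   bool all_events_system_wide)
{
	if (target_given)
		return false;		/* the user chose a target explicitly */
	return !have_workload || all_events_system_wide;
}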
|
|
|
setup_system_wide(argc);
|
2013-09-30 13:37:37 +00:00
|
|
|
|
2018-06-05 12:13:13 +00:00
|
|
|
/*
|
|
|
|
* Display user/system times only for single
|
|
|
|
* run and when a tracee is specified.
|
|
|
|
*/
|
2018-08-30 06:32:36 +00:00
|
|
|
if ((stat_config.run_count == 1) && target__none(&target))
|
2018-08-30 06:32:44 +00:00
|
|
|
stat_config.ru_display = true;
|
2018-06-05 12:13:13 +00:00
|
|
|
|
2018-08-30 06:32:36 +00:00
|
|
|
if (stat_config.run_count < 0) {
|
2013-11-01 07:33:15 +00:00
|
|
|
pr_err("Run count must be a positive number\n");
|
2015-11-05 14:40:45 +00:00
|
|
|
parse_options_usage(stat_usage, stat_options, "r", 1);
|
2013-11-01 07:33:15 +00:00
|
|
|
goto out;
|
2018-08-30 06:32:36 +00:00
|
|
|
} else if (stat_config.run_count == 0) {
|
2013-03-01 18:02:27 +00:00
|
|
|
forever = true;
|
2018-08-30 06:32:36 +00:00
|
|
|
stat_config.run_count = 1;
|
2013-03-01 18:02:27 +00:00
|
|
|
}
|
2009-04-20 13:37:32 +00:00
|
|
|
|
2018-04-23 09:08:21 +00:00
|
|
|
if (walltime_run_table) {
|
2018-08-30 06:32:36 +00:00
|
|
|
walltime_run = zalloc(stat_config.run_count * sizeof(walltime_run[0]));
|
2018-04-23 09:08:21 +00:00
|
|
|
if (!walltime_run) {
|
|
|
|
pr_err("failed to setup -r option");
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2017-12-05 14:03:10 +00:00
|
|
|
if ((stat_config.aggr_mode == AGGR_THREAD) &&
|
|
|
|
!target__has_task(&target)) {
|
|
|
|
if (!target.system_wide || target.cpu_list) {
|
|
|
|
fprintf(stderr, "The --per-thread option is only "
|
|
|
|
"available when monitoring via -p -t -a "
|
|
|
|
"options or only --per-thread.\n");
|
|
|
|
parse_options_usage(NULL, stat_options, "p", 1);
|
|
|
|
parse_options_usage(NULL, stat_options, "t", 1);
|
|
|
|
goto out;
|
|
|
|
}
|
perf stat: Introduce --per-thread option
Currently, the values of all the tasks given via the -p option's PID
arguments get aggregated and printed as single values.
Add a --per-thread option to print values per task.
$ perf stat -e cycles,instructions --per-thread -p 30190,30242
^C
Performance counter stats for process id '30190,30242':
cat-30190 0 cycles
yes-30242 3,842,525,421 cycles
cat-30190 0 instructions
yes-30242 10,370,817,010 instructions
1.143155657 seconds time elapsed
Also works under interval mode:
$ perf stat -e cycles,instructions --per-thread -p 30190,30242 -I 1000
# time comm-pid counts unit events
1.000073435 cat-30190 89,058 cycles
1.000073435 yes-30242 3,360,786,902 cycles (100.00%)
1.000073435 cat-30190 14,066 instructions
1.000073435 yes-30242 9,069,937,462 instructions
2.000204830 cat-30190 0 cycles
2.000204830 yes-30242 3,351,667,626 cycles
2.000204830 cat-30190 0 instructions
2.000204830 yes-30242 9,045,796,885 instructions
^C 2.771286639 cat-30190 0 cycles
2.771286639 yes-30242 2,593,884,166 cycles
2.771286639 cat-30190 0 instructions
2.771286639 yes-30242 7,001,171,191 instructions
It works only with the -t and -p options; otherwise the following error is
printed:
$ perf stat -e cycles --per-thread -I 1000 ls
The --per-thread option is only available when monitoring via -p -t options.
-p, --pid <pid> stat events on existing process id
-t, --tid <tid> stat events on existing thread id
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1435310967-14570-23-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-06-26 09:29:27 +00:00
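Schematically (made-up types, not perf's thread_map), per-thread mode keys the totals by thread instead of folding them together, as in the "comm-pid counts" rows above:

#include <stdio.h>

/* Toy model of one per-thread row ("yes-30242  3,842,525,421 cycles" above). */
struct thread_count {
	const char *comm;
	int pid;
	unsigned long long count;
};

static void print_per_thread(const struct thread_count *tc, size_t n,
			     const char *event)
{
	size_t i;

	/* One row per thread instead of a single aggregated value. */
	for (i = 0; i < n; i++)
		printf("%16s-%-6d %18llu %s\n",
		       tc[i].comm, tc[i].pid, tc[i].count, event);
}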
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* no_aggr, cgroup are for system-wide only
|
|
|
|
* --per-thread is aggregated per thread, we don't mix it with CPU mode
|
|
|
|
*/
|
2015-07-21 12:31:22 +00:00
|
|
|
if (((stat_config.aggr_mode != AGGR_GLOBAL &&
|
|
|
|
stat_config.aggr_mode != AGGR_THREAD) || nr_cgroups) &&
|
2013-11-12 19:46:16 +00:00
|
|
|
!target__has_cpu(&target)) {
|
perf tool: Add cgroup support
This patch adds the ability to filter monitoring based on container groups
(cgroups) for both perf stat and perf record. It is possible to monitor
multiple cgroups in parallel. There is one cgroup per event. The cgroups to
monitor are passed via a new -G option followed by a comma-separated list of
cgroup names.
The cgroup filesystem has to be mounted. Given a cgroup name, the perf tool
finds the corresponding directory in the cgroup filesystem and opens it. It
then passes that file descriptor to the kernel.
Example:
$ perf stat -B -a -e cycles:u,cycles:u,cycles:u -G test1,,test2 -- sleep 1
Performance counter stats for 'sleep 1':
2,368,667,414 cycles test1
2,369,661,459 cycles
<not counted> cycles test2
1.001856890 seconds time elapsed
Signed-off-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <4d590290.825bdf0a.7d0a.4890@mx.google.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2011-02-14 09:20:01 +00:00
|
|
|
fprintf(stderr, "both cgroup and no-aggregation "
|
|
|
|
"modes only available in system-wide mode\n");
|
|
|
|
|
2015-11-05 14:40:45 +00:00
|
|
|
parse_options_usage(stat_usage, stat_options, "G", 1);
|
|
|
|
parse_options_usage(NULL, stat_options, "A", 1);
|
|
|
|
parse_options_usage(NULL, stat_options, "a", 1);
|
2013-11-01 07:33:15 +00:00
|
|
|
goto out;
|
2013-02-06 14:46:02 +00:00
|
|
|
}
|
|
|
|
|
perf stat: Add -d -d and -d -d -d options to show more CPU events
2011-05-19 11:30:56 +00:00
|
|
|
if (add_default_attributes())
|
|
|
|
goto out;
|
2009-04-20 13:37:32 +00:00
|
|
|
|
2013-11-12 19:46:16 +00:00
|
|
|
target__validate(&target);
|
2011-01-03 19:53:33 +00:00
|
|
|
|
2017-12-05 14:03:10 +00:00
|
|
|
if ((stat_config.aggr_mode == AGGR_THREAD) && (target.system_wide))
|
|
|
|
target.per_thread = true;
|
|
|
|
|
2012-05-07 05:09:04 +00:00
|
|
|
if (perf_evlist__create_maps(evsel_list, &target) < 0) {
|
2013-11-12 19:46:16 +00:00
|
|
|
if (target__has_task(&target)) {
|
2012-05-07 05:09:04 +00:00
|
|
|
pr_err("Problems finding threads of monitor\n");
|
2015-11-05 14:40:45 +00:00
|
|
|
parse_options_usage(stat_usage, stat_options, "p", 1);
|
|
|
|
parse_options_usage(NULL, stat_options, "t", 1);
|
2013-11-12 19:46:16 +00:00
|
|
|
} else if (target__has_cpu(&target)) {
|
2012-05-07 05:09:04 +00:00
|
|
|
perror("failed to parse CPUs map");
|
2015-11-05 14:40:45 +00:00
|
|
|
parse_options_usage(stat_usage, stat_options, "C", 1);
|
|
|
|
parse_options_usage(NULL, stat_options, "a", 1);
|
2013-11-01 07:33:15 +00:00
|
|
|
}
|
|
|
|
goto out;
|
2011-01-03 19:49:48 +00:00
|
|
|
}
|
perf stat: Introduce --per-thread option
2015-06-26 09:29:27 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Initialize thread_map with comm names,
|
|
|
|
* so we can print them in the output.
|
|
|
|
*/
|
2017-12-05 14:03:07 +00:00
|
|
|
if (stat_config.aggr_mode == AGGR_THREAD) {
|
perf stat: Introduce --per-thread option
2015-06-26 09:29:27 +00:00
|
|
|
thread_map__read_comms(evsel_list->threads);
|
2017-12-05 14:03:07 +00:00
|
|
|
if (target.system_wide) {
|
|
|
|
if (runtime_stat_new(&stat_config,
|
|
|
|
thread_map__nr(evsel_list->threads))) {
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
perf stat: Introduce --per-thread option
2015-06-26 09:29:27 +00:00
|
|
|
|
2018-01-29 09:25:22 +00:00
|
|
|
if (stat_config.times && interval)
|
|
|
|
interval_count = true;
|
|
|
|
else if (stat_config.times && !interval) {
|
|
|
|
pr_err("interval-count option should be used together with "
|
|
|
|
"interval-print.\n");
|
|
|
|
parse_options_usage(stat_usage, stat_options, "interval-count", 0);
|
|
|
|
parse_options_usage(stat_usage, stat_options, "I", 1);
|
|
|
|
goto out;
|
|
|
|
}
|
2010-05-28 10:00:01 +00:00
|
|
|
|
2018-01-29 09:25:23 +00:00
|
|
|
if (timeout && timeout < 100) {
|
|
|
|
if (timeout < 10) {
|
|
|
|
pr_err("timeout must be >= 10ms.\n");
|
|
|
|
parse_options_usage(stat_usage, stat_options, "timeout", 0);
|
|
|
|
goto out;
|
|
|
|
} else
|
|
|
|
pr_warning("timeout < 100ms. "
|
|
|
|
"The overhead percentage could be high in some cases. "
|
|
|
|
"Please proceed with caution.\n");
|
|
|
|
}
|
|
|
|
if (timeout && interval) {
|
|
|
|
pr_err("timeout option is not supported with interval-print.\n");
|
|
|
|
parse_options_usage(stat_usage, stat_options, "timeout", 0);
|
|
|
|
parse_options_usage(stat_usage, stat_options, "I", 1);
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
2013-03-18 14:24:21 +00:00
|
|
|
if (perf_evlist__alloc_stats(evsel_list, interval))
|
2014-01-03 18:56:06 +00:00
|
|
|
goto out;
|
2010-03-18 14:36:05 +00:00
|
|
|
|
2013-02-14 12:57:27 +00:00
|
|
|
if (perf_stat_init_aggr_mode())
|
2014-01-03 18:56:06 +00:00
|
|
|
goto out;
|
2013-02-14 12:57:27 +00:00
|
|
|
|
2018-08-30 06:32:14 +00:00
|
|
|
/*
|
|
|
|
* Set sample_type to PERF_SAMPLE_IDENTIFIER, which should be harmless
|
|
|
|
* while avoiding older tools showing confusing messages.
|
|
|
|
*
|
|
|
|
* However, for pipe sessions we need to keep it zero,
|
|
|
|
* because script's perf_evsel__check_attr is triggered
|
|
|
|
* by attr->sample_type != 0, and we can't run it on
|
|
|
|
* stat sessions.
|
|
|
|
*/
|
|
|
|
stat_config.identifier = !(STAT_RECORD && perf_stat.data.is_pipe);
|
|
|
|
|
2009-05-15 09:03:23 +00:00
|
|
|
/*
|
|
|
|
* We don't want to block the signals - that would cause
|
|
|
|
* child tasks to inherit that and Ctrl-C would not work.
|
|
|
|
* What we want is for Ctrl-C to work in the exec()-ed
|
|
|
|
* task, but being ignored by perf stat itself:
|
|
|
|
*/
|
2009-06-10 13:55:59 +00:00
|
|
|
atexit(sig_atexit);
|
2013-03-01 18:02:27 +00:00
|
|
|
if (!forever)
|
|
|
|
signal(SIGINT, skip_signal);
|
perf stat: Add interval printing
This patch adds a new printing mode for perf stat. It allows interval
printing. That means perf stat can now print event deltas at a regular
time interval. This is useful for detecting phases in programs.
The -I option enables interval printing. It expects an interval duration
in milliseconds; the minimum is 100ms. Once activated, perf stat prints
event deltas since the last printout. All modes are supported.
$ perf stat -I 1000 -e cycles noploop 10
noploop for 10 seconds
# time counts events
1.000109853 2,388,560,546 cycles
2.000262846 2,393,332,358 cycles
3.000354131 2,393,176,537 cycles
4.000439503 2,393,203,790 cycles
5.000527075 2,393,167,675 cycles
6.000609052 2,393,203,670 cycles
7.000691082 2,393,175,678 cycles
The output format makes it easy to feed into a plotting program such as
gnuplot when the -I option is used in combination with the -x option:
$ perf stat -x, -I 1000 -e cycles noploop 10
noploop for 10 seconds
1.000084113,2378775498,cycles
2.000245798,2391056897,cycles
3.000354445,2392089414,cycles
4.000459115,2390936603,cycles
5.000565341,2392108173,cycles
Signed-off-by: Stephane Eranian <eranian@google.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung.kim@lge.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1359460064-3060-3-git-send-email-eranian@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2013-01-29 11:47:44 +00:00
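A standalone sketch of the delta discipline behind -I (plain nanosleep and a caller-supplied reader here; perf reads real counter fds):

#include <stdio.h>
#include <time.h>

/* Illustrative only: print per-interval deltas of a monotonically growing count. */
static void interval_loop(unsigned long long (*read_count)(void),
			  unsigned int interval_ms, int iterations)
{
	unsigned long long prev = read_count(), now;
	struct timespec ts = {
		.tv_sec = interval_ms / 1000,
		.tv_nsec = (interval_ms % 1000) * 1000000L,
	};

	while (iterations-- > 0) {
		nanosleep(&ts, NULL);
		now = read_count();
		printf("%18llu cycles\n", now - prev);	/* delta since last print */
		prev = now;
	}
}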
|
|
|
signal(SIGCHLD, skip_signal);
|
2009-05-15 09:03:23 +00:00
|
|
|
signal(SIGALRM, skip_signal);
|
|
|
|
signal(SIGABRT, skip_signal);
|
|
|
|
|
2009-06-13 12:57:28 +00:00
|
|
|
status = 0;
|
2018-08-30 06:32:36 +00:00
|
|
|
for (run_idx = 0; forever || run_idx < stat_config.run_count; run_idx++) {
|
|
|
|
if (stat_config.run_count != 1 && verbose > 0)
|
2011-08-15 20:22:33 +00:00
|
|
|
fprintf(output, "[ perf stat: executing run #%d ... ]\n",
|
|
|
|
run_idx + 1);
|
2011-04-28 16:17:11 +00:00
|
|
|
|
2018-04-23 09:08:21 +00:00
|
|
|
status = run_perf_stat(argc, argv, run_idx);
|
2013-03-01 18:02:27 +00:00
|
|
|
if (forever && status != -1) {
|
2015-06-26 09:29:26 +00:00
|
|
|
print_counters(NULL, argc, argv);
|
2015-06-26 09:29:13 +00:00
|
|
|
perf_stat__reset_stats();
|
2013-03-01 18:02:27 +00:00
|
|
|
}
|
2009-06-13 12:57:28 +00:00
|
|
|
}
|
|
|
|
|
2013-03-01 18:02:27 +00:00
|
|
|
if (!forever && status != -1 && !interval)
|
2015-06-26 09:29:26 +00:00
|
|
|
print_counters(NULL, argc, argv);
|
2013-03-18 14:24:21 +00:00
|
|
|
|
perf stat record: Add record command
Add 'perf stat record' command support. It creates simple (header only)
perf.data file ATM.
The record command could be specified anywhere among stat options. All
stat command options are valid for stat record command with '-o' option
exception. If specified for record command it denotes the perf data file
name.
Committer note:
Set sample_type to PERF_SAMPLE_IDENTIFIER, which should be harmless
while avoiding that older tools show confusing messages, for instance,
with sample_type = 0, we get:
$ perf stat record usleep 1
Performance counter stats for 'usleep 1':
0.630237 task-clock (msec) # 0.528 CPUs utilized
1 context-switches # 0.002 M/sec
0 cpu-migrations # 0.000 K/sec
52 page-faults # 0.083 M/sec
978,312 cycles # 1.552 GHz
671,931 stalled-cycles-frontend # 68.68% frontend cycles idle
<not supported> stalled-cycles-backend
646,379 instructions # 0.66 insns per cycle
# 1.04 stalled cycles per insn
131,046 branches # 207.931 M/sec
7,073 branch-misses # 5.40% of all branches
0.001193240 seconds time elapsed
$ oldperf evlist
WARNING: The perf.data file's data size field is 0 which is unexpected.
Was the 'perf record' command properly terminated?
non matching sample_type
$
While with sample_type set to PERF_SAMPLE_IDENTIFIER, after we re-run 'perf
stat record usleep' we get:
$ oldperf evlist
WARNING: The perf.data file's data size field is 0 which is unexpected.
Was the 'perf record' command properly terminated?
task-clock
context-switches
cpu-migrations
page-faults
cycles
stalled-cycles-frontend
stalled-cycles-backend
instructions
branches
branch-misses
$
Which at least shows the names of the events in the perf.data file.
Additionally, such files, when passed to 'perf report', will produce:
$ oldperf report --stdio
WARNING: The perf.data file's data size field is 0 which is unexpected.
Was the 'perf record' command properly terminated?
Warning:
Kernel address maps (/proc/{kallsyms,modules}) were restricted.
Check /proc/sys/kernel/kptr_restrict before running 'perf record'.
As no suitable kallsyms nor vmlinux was found, kernel samples
can't be resolved.
Samples in kernel modules can't be resolved as well.
Error:
The perf.data file has no samples!
# To display the perf.data header info, please use --header/--header-only options.
#
$
Which is confusing and can be solved by just adding the kernel mmap
record, which also removes the warning about the data size field being
zero. After generating the mmap record:
$ perf stat record usleep 1
Performance counter stats for 'usleep 1':
0.600796 task-clock (msec) # 0.478 CPUs utilized
1 context-switches # 0.002 M/sec
0 cpu-migrations # 0.000 K/sec
54 page-faults # 0.090 M/sec
886,844 cycles # 1.476 GHz
582,169 stalled-cycles-frontend # 65.65% frontend cycles idle
<not supported> stalled-cycles-backend
638,344 instructions # 0.72 insns per cycle
# 0.91 stalled cycles per insn
130,204 branches # 216.719 M/sec
7,500 branch-misses # 5.76% of all branches
0.001255897 seconds time elapsed
$ oldperf evlist
task-clock
context-switches
cpu-migrations
page-faults
cycles
stalled-cycles-frontend
stalled-cycles-backend
instructions
branches
branch-misses
$ oldperf report --stdio
Error:
The perf.data file has no samples!
# To display the perf.data header info, please use --header/--header-only options.
#
[acme@zoo linux]$
No warnings, sensible output about which events are in the perf.data
file, and also a "file has no samples" message, which indeed it doesn't
have.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Tested-by: Kan Liang <kan.liang@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1446734469-11352-3-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
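As a hedged illustration of the committer note above, the fix boils down
to putting PERF_SAMPLE_IDENTIFIER into each counter's perf_event_attr. A
minimal sketch follows; the event type/config and the helper name are
illustrative assumptions, not the tool's exact setup:

  #include <string.h>
  #include <linux/perf_event.h>

  /* Sketch: set up a counting event destined for a header-only
   * perf.data file.  With sample_type == 0, older tools complain
   * "non matching sample_type"; PERF_SAMPLE_IDENTIFIER is harmless
   * for a counting event and keeps them quiet. */
  static void stat_record_init_attr(struct perf_event_attr *attr)
  {
          memset(attr, 0, sizeof(*attr));
          attr->size = sizeof(*attr);
          attr->type = PERF_TYPE_HARDWARE;          /* assumption */
          attr->config = PERF_COUNT_HW_CPU_CYCLES;  /* assumption */
          attr->sample_type = PERF_SAMPLE_IDENTIFIER;
  }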
2015-11-05 14:40:46 +00:00
|
|
|
if (STAT_RECORD) {
|
|
|
|
/*
|
|
|
|
* We synthesize the kernel mmap record just so that older tools
|
|
|
|
* don't emit warnings about not being able to resolve symbols
|
|
|
|
* due to /proc/sys/kernel/kptr_restrict settings and instead provide
|
|
|
|
* a saner message about no samples being in the perf.data file.
|
|
|
|
*
|
|
|
|
* This also serves to suppress a warning about f_header.data.size == 0
|
2015-11-05 14:40:48 +00:00
|
|
|
* in header.c at the moment 'perf stat record' gets introduced, which
|
|
|
|
* is not really needed once we start adding the stat specific PERF_RECORD_
|
|
|
|
* records, but the need to suppress the kptr_restrict messages in older
|
|
|
|
* tools remains -acme
|
2015-11-05 14:40:46 +00:00
|
|
|
*/
|
2017-01-23 21:07:59 +00:00
|
|
|
int fd = perf_data__fd(&perf_stat.data);
|
2015-11-05 14:40:46 +00:00
|
|
|
int err = perf_event__synthesize_kernel_mmap((void *)&perf_stat,
|
|
|
|
process_synthesized_event,
|
|
|
|
&perf_stat.session->machines.host);
|
|
|
|
if (err) {
|
|
|
|
pr_warning("Couldn't synthesize the kernel mmap record, harmless, "
|
|
|
|
"older tools may produce warnings about this file\n.");
|
|
|
|
}
|
|
|
|
|
2015-11-05 14:40:52 +00:00
|
|
|
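/* no interval printouts happened, so emit a single FINAL stat round covering the whole run */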
if (!interval) {
|
|
|
|
if (WRITE_STAT_ROUND_EVENT(walltime_nsecs_stats.max, FINAL))
|
|
|
|
pr_err("failed to write stat round event\n");
|
|
|
|
}
|
|
|
|
|
2017-01-23 21:07:59 +00:00
|
|
|
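/* writing to a file (not a pipe): account the records written and rewrite the header so its data size field is no longer zero */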
if (!perf_stat.data.is_pipe) {
|
2015-11-05 14:40:50 +00:00
|
|
|
perf_stat.session->header.data_size += perf_stat.bytes_written;
|
|
|
|
perf_session__write_header(perf_stat.session, evsel_list, fd, true);
|
|
|
|
}
|
2015-11-05 14:40:46 +00:00
|
|
|
|
|
|
|
perf_session__delete(perf_stat.session);
|
|
|
|
}
|
|
|
|
|
2015-12-09 02:11:27 +00:00
|
|
|
perf_stat__exit_aggr_mode();
|
2013-03-18 14:24:21 +00:00
|
|
|
perf_evlist__free_stats(evsel_list);
|
2011-02-01 18:18:10 +00:00
|
|
|
out:
|
2018-04-23 09:08:21 +00:00
|
|
|
free(walltime_run);
|
|
|
|
|
2017-05-26 19:05:38 +00:00
|
|
|
if (smi_cost && smi_reset)
|
|
|
|
sysfs__write_int(FREEZE_ON_SMI_PATH, 0);
|
|
|
|
|
2011-02-01 18:18:10 +00:00
|
|
|
perf_evlist__delete(evsel_list);
|
2017-12-05 14:03:07 +00:00
|
|
|
|
|
|
|
runtime_stat_delete(&stat_config);
|
|
|
|
|
2009-06-13 12:57:28 +00:00
|
|
|
return status;
|
2009-04-20 13:37:32 +00:00
|
|
|
}
|