// SPDX-License-Identifier: GPL-2.0-only
/*
 * builtin-stat.c
 *
 * Builtin stat command: Give a precise performance counters summary
 * overview about any workload, CPU or specific PID.
 *
 * Sample output:

   $ perf stat ./hackbench 10

  Time: 0.118

  Performance counter stats for './hackbench 10':

       1708.761321 task-clock                #   11.037 CPUs utilized
            41,190 context-switches          #    0.024 M/sec
             6,735 CPU-migrations            #    0.004 M/sec
            17,318 page-faults               #    0.010 M/sec
     5,205,202,243 cycles                    #    3.046 GHz
     3,856,436,920 stalled-cycles-frontend   #   74.09% frontend cycles idle
     1,600,790,871 stalled-cycles-backend    #   30.75% backend cycles idle
     2,603,501,247 instructions              #    0.50  insns per cycle
                                             #    1.48  stalled cycles per insn
       484,357,498 branches                  #  283.455 M/sec
         6,388,934 branch-misses             #    1.32% of all branches

        0.154822978  seconds time elapsed

 *
 * Copyright (C) 2008-2011, Red Hat Inc, Ingo Molnar <mingo@redhat.com>
 *
 * Improvements and fixes by:
 *
 *   Arjan van de Ven <arjan@linux.intel.com>
 *   Yanmin Zhang <yanmin.zhang@intel.com>
 *   Wu Fengguang <fengguang.wu@intel.com>
 *   Mike Galbraith <efault@gmx.de>
 *   Paul Mackerras <paulus@samba.org>
 *   Jaswinder Singh Rajput <jaswinder@kernel.org>
 */

#include "builtin.h"
#include "perf.h"
#include "util/cgroup.h"
#include <subcmd/parse-options.h>
#include "util/parse-events.h"
#include "util/pmu.h"
#include "util/event.h"
#include "util/evlist.h"
#include "util/evlist-hybrid.h"
#include "util/evsel.h"
#include "util/debug.h"
#include "util/color.h"
#include "util/stat.h"
#include "util/header.h"
#include "util/cpumap.h"
#include "util/thread_map.h"
#include "util/counts.h"
#include "util/topdown.h"
#include "util/session.h"
#include "util/tool.h"
#include "util/string2.h"
#include "util/metricgroup.h"
#include "util/synthetic-events.h"
#include "util/target.h"
#include "util/time-utils.h"
#include "util/top.h"
#include "util/affinity.h"
#include "util/pfm.h"
#include "util/bpf_counter.h"
#include "util/iostat.h"
#include "util/pmu-hybrid.h"
#include "asm/bug.h"

#include <linux/time64.h>
#include <linux/zalloc.h>
#include <api/fs/fs.h>
#include <errno.h>
#include <signal.h>
#include <stdlib.h>
#include <sys/prctl.h>
#include <inttypes.h>
#include <locale.h>
#include <math.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/wait.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <linux/err.h>

#include <linux/ctype.h>
#include <perf/evlist.h>

#define DEFAULT_SEPARATOR	" "
#define FREEZE_ON_SMI_PATH	"devices/cpu/freeze_on_smi"
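/*
 * FREEZE_ON_SMI_PATH is resolved against the sysfs mount point, i.e.
 * normally /sys/devices/cpu/freeze_on_smi; writing 1 there freezes the
 * fixed counters while firmware runs in SMM, which is what lets
 * --smi-cost attribute cycles to SMIs. A sketch of the intended use,
 * with sysfs__write_int() from <api/fs/fs.h> (included above):
 *
 *	if (sysfs__write_int(FREEZE_ON_SMI_PATH, 1) < 0)
 *		fprintf(stderr, "Failed to set freeze_on_smi.\n");
 */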
static void print_counters(struct timespec *ts, int argc, const char **argv);
/* Default events used for perf stat -T */
static const char *transaction_attrs = {
	"task-clock,"
	"{"
	"instructions,"
	"cycles,"
	"cpu/cycles-t/,"
	"cpu/tx-start/,"
	"cpu/el-start/,"
	"cpu/cycles-ct/"
	"}"
};

/* More limited version when the CPU does not have all events. */
static const char *transaction_limited_attrs = {
	"task-clock,"
	"{"
	"instructions,"
	"cycles,"
	"cpu/cycles-t/,"
	"cpu/tx-start/"
	"}"
};
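/*
 * Both strings above use the ordinary perf event-list syntax: a comma
 * separated list in which "{...}" forms a group that is scheduled onto
 * the PMU atomically and "cpu/.../" names PMU-specific events. They go
 * through the regular event parser; roughly what add_default_attributes()
 * does for -T (a sketch, error handling elided):
 *
 *	struct parse_events_error errinfo;
 *	int err;
 *
 *	if (pmu_have_event("cpu", "cycles-ct"))
 *		err = parse_events(evsel_list, transaction_attrs, &errinfo);
 *	else
 *		err = parse_events(evsel_list, transaction_limited_attrs,
 *				   &errinfo);
 */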

static const char *topdown_attrs[] = {
	"topdown-total-slots",
	"topdown-slots-retired",
	"topdown-recovery-bubbles",
	"topdown-fetch-bubbles",
	"topdown-slots-issued",
	NULL,
};
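/*
 * These five events are enough to compute the four level-1 TopDown
 * metrics (the arithmetic itself lives in util/stat-shadow.c). The
 * standard formulas are, roughly:
 *
 *	fe_bound = fetch_bubbles / total_slots;
 *	bad_spec = (slots_issued - slots_retired + recovery_bubbles) /
 *		   total_slots;
 *	retiring = slots_retired / total_slots;
 *	be_bound = 1.0 - fe_bound - bad_spec - retiring;
 */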

static const char *topdown_metric_attrs[] = {
	"slots",
	"topdown-retiring",
	"topdown-bad-spec",
	"topdown-fe-bound",
	"topdown-be-bound",
	NULL,
};
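/*
 * On CPUs with a dedicated PERF_METRICS MSR (Intel Ice Lake and later)
 * the kernel exposes the level-1 metrics directly as the events above,
 * reported against the fixed-counter "slots" event, so all four can be
 * collected without multiplexing; when available they are preferred
 * over the older topdown_attrs[] set.
 */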

static const char *topdown_metric_L2_attrs[] = {
	"slots",
	"topdown-retiring",
	"topdown-bad-spec",
	"topdown-fe-bound",
	"topdown-be-bound",
	"topdown-heavy-ops",
	"topdown-br-mispredict",
	"topdown-fetch-lat",
	"topdown-mem-bound",
	NULL,
};
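/*
 * Only four of the eight level-2 metrics are exposed by the hardware;
 * each missing sibling is derived from its level-1 parent, matching the
 * pairs printed in --td-level=2 output:
 *
 *	light_ops      = retiring - heavy_ops;
 *	machine_clears = bad_spec - br_mispredict;
 *	fetch_bw       = fe_bound - fetch_lat;
 *	core_bound     = be_bound - mem_bound;
 */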

#define TOPDOWN_MAX_LEVEL			2
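/*
 * --td-level values above TOPDOWN_MAX_LEVEL are rejected when the
 * options are validated.
 */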

static const char *smi_cost_attrs = {
	"{"
	"msr/aperf/,"
	"msr/smi/,"
	"cycles"
	"}"
};
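/*
 * With freeze_on_smi set (see FREEZE_ON_SMI_PATH above), msr/aperf/
 * keeps counting through SMM while "cycles" is frozen there, so the
 * cycles lost to SMIs can be estimated, roughly, as:
 *
 *	smi_cost_pct = (aperf - unhalted_cycles) / aperf * 100.0;
 */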

static struct evlist	*evsel_list;
static bool all_counters_use_bpf = true;

static struct target target = {
	.uid	= UINT_MAX,
};
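/* UINT_MAX doubles as the "no --uid given" sentinel for target.uid. */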

#define METRIC_ONLY_LEN 20

static volatile pid_t		child_pid			= -1;
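/*
 * child_pid above is the forked workload's pid (-1 while no child is
 * running); it is volatile because signal/atexit handlers read it to
 * forward termination to the child.
 */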
|
perf stat: Add -d -d and -d -d -d options to show more CPU events
Print even more detailed statistics if requested via perf stat -d:
-d: detailed events, L1 and LLC data cache
-d -d: more detailed events, dTLB and iTLB events
-d -d -d: very detailed events, adding prefetch events
Full output looks like this now:
Performance counter stats for '/home/mingo/hackbench 10' (5 runs):
1703.674707 task-clock # 8.709 CPUs utilized ( +- 4.19% )
49,068 context-switches # 0.029 M/sec ( +- 16.66% )
8,303 CPU-migrations # 0.005 M/sec ( +- 24.90% )
17,397 page-faults # 0.010 M/sec ( +- 0.46% )
2,345,389,239 cycles # 1.377 GHz ( +- 4.61% ) [55.90%]
1,884,503,527 stalled-cycles-frontend # 80.35% frontend cycles idle ( +- 5.67% ) [50.39%]
743,919,737 stalled-cycles-backend # 31.72% backend cycles idle ( +- 8.75% ) [49.91%]
1,314,416,379 instructions # 0.56 insns per cycle
# 1.43 stalled cycles per insn ( +- 2.53% ) [60.87%]
272,592,567 branches # 160.003 M/sec ( +- 1.74% ) [56.56%]
3,794,846 branch-misses # 1.39% of all branches ( +- 6.59% ) [58.50%]
449,982,778 L1-dcache-loads # 264.125 M/sec ( +- 2.47% ) [49.88%]
22,404,961 L1-dcache-load-misses # 4.98% of all L1-dcache hits ( +- 6.08% ) [55.05%]
6,204,750 LLC-loads # 3.642 M/sec ( +- 8.91% ) [43.75%]
1,837,411 LLC-load-misses # 1.078 M/sec ( +- 7.27% ) [12.07%]
411,440,421 L1-icache-loads # 241.502 M/sec ( +- 5.60% ) [36.52%]
27,556,832 L1-icache-load-misses # 16.175 M/sec ( +- 7.46% ) [46.72%]
464,067,627 dTLB-loads # 272.392 M/sec ( +- 4.46% ) [54.17%]
10,765,648 dTLB-load-misses # 6.319 M/sec ( +- 3.18% ) [48.68%]
1,273,080,386 iTLB-loads # 747.256 M/sec ( +- 3.38% ) [47.53%]
117,481 iTLB-load-misses # 0.069 M/sec ( +- 14.99% ) [47.01%]
4,590,653 L1-dcache-prefetches # 2.695 M/sec ( +- 4.49% ) [46.19%]
1,712,660 L1-dcache-prefetch-misses # 1.005 M/sec ( +- 3.75% ) [44.82%]
0.195622057 seconds time elapsed ( +- 6.84% )
Also clean up the attribute construction code to be appending, and factor
it out into add_default_attributes().
Tweak the coverage percentage printout a bit, so that it's easier to view it
alongside the +- sttddev colum.
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
Link: http://lkml.kernel.org/n/tip-to3kgu04449s64062val8b62@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2011-05-19 11:30:56 +00:00
|
|
|
static int detailed_run = 0;
|
2013-08-21 23:47:26 +00:00
|
|
|
static bool transaction_run;
|
perf stat: Basic support for TopDown in perf stat
Add basic plumbing for TopDown in perf stat
TopDown is intended to replace the frontend cycles idle/ backend cycles
idle metrics in standard perf stat output. These metrics are not
reliable in many workloads, due to out of order effects.
This implements a new --topdown mode in perf stat (similar to
--transaction) that measures the pipe line bottlenecks using
standardized formulas. The measurement can be all done with 5 counters
(one fixed counter)
The result are four metrics:
FrontendBound, BackendBound, BadSpeculation, Retiring
that describe the CPU pipeline behavior on a high level.
The full top down methology has many hierarchical metrics. This
implementation only supports level 1 which can be collected without
multiplexing. A full implementation of top down on top of perf is
available in pmu-tools toplev. (http://github.com/andikleen/pmu-tools)
The current version works on Intel Core CPUs starting with Sandy Bridge,
and Atom CPUs starting with Silvermont. In principle the generic
metrics should be also implementable on other out of order CPUs.
TopDown level 1 uses a set of abstracted metrics which are generic to
out of order CPU cores (although some CPUs may not implement all of
them):
topdown-total-slots Available slots in the pipeline
topdown-slots-issued Slots issued into the pipeline
topdown-slots-retired Slots successfully retired
topdown-fetch-bubbles Pipeline gaps in the frontend
topdown-recovery-bubbles Pipeline gaps during recovery
from misspeculation
These metrics then allow to compute four useful metrics:
FrontendBound, BackendBound, Retiring, BadSpeculation.
Add a new --topdown options to enable events. When --topdown is
specified set up events for all topdown events supported by the kernel.
Add topdown-* as a special case to the event parser, as is needed for
all events containing -.
The actual code to compute the metrics is in follow-on patches.
v2: Use standard sysctl read function.
v3: Move x86 specific code to arch/
v4: Enable --metric-only implicitly for topdown.
v5: Add --single-thread option to not force per core mode
v6: Fix output order of topdown metrics
v7: Allow combining with -d
v8: Remove --single-thread again
v9: Rename functions, adding arch_ and topdown_.
v10: Expand man page and describe TopDown better
Paste intro into commit description.
Print error when malloc fails.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: http://lkml.kernel.org/r/1464119559-17203-1-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-30 15:49:42 +00:00
|
|
|
static bool topdown_run = false;
|
2017-05-26 19:05:38 +00:00
|
|
|
static bool smi_cost = false;
|
|
|
|
static bool smi_reset = false;
|
perf stat: Add csv-style output
This patch adds an option (-x/--field-separator) to print counts using a
CSV-style output. The user can pass a custom separator. This makes it very easy
to import counts directly into your favorite spreadsheet without having to
write scripts.
Example:
$ perf stat --field-separator=, -a -- sleep 1
4009.961740,task-clock-msecs
13,context-switches
2,CPU-migrations
189,page-faults
9596385684,cycles
3493659441,instructions
872897069,branches
41562,branch-misses
22424,cache-references
1289,cache-misses
Works also in non-aggregated mode:
$ perf stat -x , -a -A -- sleep 1
CPU0,1002.526168,task-clock-msecs
CPU1,1002.528365,task-clock-msecs
CPU2,1002.523360,task-clock-msecs
CPU3,1002.519878,task-clock-msecs
CPU0,1,context-switches
CPU1,5,context-switches
CPU2,5,context-switches
CPU3,6,context-switches
CPU0,0,CPU-migrations
CPU1,1,CPU-migrations
CPU2,0,CPU-migrations
CPU3,1,CPU-migrations
CPU0,2,page-faults
CPU1,6,page-faults
CPU2,9,page-faults
CPU3,174,page-faults
CPU0,2399439771,cycles
CPU1,2380369063,cycles
CPU2,2399142710,cycles
CPU3,2373161192,cycles
CPU0,872900618,instructions
CPU1,873030960,instructions
CPU2,872714525,instructions
CPU3,874460580,instructions
CPU0,221556839,branches
CPU1,218134342,branches
CPU2,218161730,branches
CPU3,218284093,branches
CPU0,18556,branch-misses
CPU1,1449,branch-misses
CPU2,3447,branch-misses
CPU3,12714,branch-misses
CPU0,8330,cache-references
CPU1,313844,cache-references
CPU2,47993728,cache-references
CPU3,826481,cache-references
CPU0,272,cache-misses
CPU1,5360,cache-misses
CPU2,1342193,cache-misses
CPU3,13992,cache-misses
This second version adds the ability to name a separator and uses
field-separator as the long option to be consistent with perf report.
Commiter note: Since we enabled --big-num by default in 201e0b0 and -x can't be
used with it, we need to notice if the user explicitely enabled or disabled -B,
add code to disable big_num if the user didn't explicitely set --big_num when
-x is used.
Cc: David S. Miller <davem@davemloft.net>
Cc: Frederik Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: paulus@samba.org
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Robert Richter <robert.richter@amd.com>
LKML-Reference: <4cf68aa7.0fedd80a.5294.1203@mx.google.com>
Signed-off-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2010-12-01 16:49:05 +00:00
|
|
|
static int big_num_opt = -1;
|
2011-08-17 10:42:07 +00:00
|
|
|
static bool group = false;
|
2012-10-23 11:40:14 +00:00
|
|
|
static const char *pre_cmd = NULL;
|
|
|
|
static const char *post_cmd = NULL;
|
|
|
|
static bool sync_run = false;
|
2013-03-01 18:02:27 +00:00
|
|
|
static bool forever = false;
|
perf stat: Basic support for TopDown in perf stat
Add basic plumbing for TopDown in perf stat
TopDown is intended to replace the frontend cycles idle/ backend cycles
idle metrics in standard perf stat output. These metrics are not
reliable in many workloads, due to out of order effects.
This implements a new --topdown mode in perf stat (similar to
--transaction) that measures the pipe line bottlenecks using
standardized formulas. The measurement can be all done with 5 counters
(one fixed counter)
The result are four metrics:
FrontendBound, BackendBound, BadSpeculation, Retiring
that describe the CPU pipeline behavior on a high level.
The full top down methology has many hierarchical metrics. This
implementation only supports level 1 which can be collected without
multiplexing. A full implementation of top down on top of perf is
available in pmu-tools toplev. (http://github.com/andikleen/pmu-tools)
The current version works on Intel Core CPUs starting with Sandy Bridge,
and Atom CPUs starting with Silvermont. In principle the generic
metrics should be also implementable on other out of order CPUs.
TopDown level 1 uses a set of abstracted metrics which are generic to
out of order CPU cores (although some CPUs may not implement all of
them):
topdown-total-slots Available slots in the pipeline
topdown-slots-issued Slots issued into the pipeline
topdown-slots-retired Slots successfully retired
topdown-fetch-bubbles Pipeline gaps in the frontend
topdown-recovery-bubbles Pipeline gaps during recovery
from misspeculation
These metrics then allow to compute four useful metrics:
FrontendBound, BackendBound, Retiring, BadSpeculation.
Add a new --topdown options to enable events. When --topdown is
specified set up events for all topdown events supported by the kernel.
Add topdown-* as a special case to the event parser, as is needed for
all events containing -.
The actual code to compute the metrics is in follow-on patches.
v2: Use standard sysctl read function.
v3: Move x86 specific code to arch/
v4: Enable --metric-only implicitly for topdown.
v5: Add --single-thread option to not force per core mode
v6: Fix output order of topdown metrics
v7: Allow combining with -d
v8: Remove --single-thread again
v9: Rename functions, adding arch_ and topdown_.
v10: Expand man page and describe TopDown better
Paste intro into commit description.
Print error when malloc fails.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: http://lkml.kernel.org/r/1464119559-17203-1-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-30 15:49:42 +00:00
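For orientation, the standard TopDown level-1 arithmetic over the five
events listed above can be sketched in plain C; the struct and helper
below are illustrative only (the real computation lands in the follow-on
patches mentioned in the message, not in this one):

/* Hypothetical sketch: canonical TopDown level-1 formulas computed
 * from the five abstracted slot/bubble events named above. */
struct topdown_l1_counts {
	double total_slots;      /* topdown-total-slots */
	double slots_issued;     /* topdown-slots-issued */
	double slots_retired;    /* topdown-slots-retired */
	double fetch_bubbles;    /* topdown-fetch-bubbles */
	double recovery_bubbles; /* topdown-recovery-bubbles */
};

static void topdown_l1_metrics(const struct topdown_l1_counts *c,
			       double *frontend, double *bad_spec,
			       double *retiring, double *backend)
{
	*frontend = c->fetch_bubbles / c->total_slots;
	*bad_spec = (c->slots_issued - c->slots_retired +
		     c->recovery_bubbles) / c->total_slots;
	*retiring = c->slots_retired / c->total_slots;
	/* BackendBound is the fraction of slots not accounted above. */
	*backend  = 1.0 - *frontend - *bad_spec - *retiring;
}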
|
|
|
static bool force_metric_only = false;
|
perf stat: Add interval printing
This patch adds a new printing mode for perf stat. It allows interval
printing: perf stat can now print event deltas at regular
time intervals. This is useful to detect phases in programs.
The -I option enables interval printing. It expects an interval duration
in milliseconds; the minimum is 100ms. Once activated, perf stat prints
event deltas since the last printout. All modes are supported.
$ perf stat -I 1000 -e cycles noploop 10
noploop for 10 seconds
# time counts events
1.000109853 2,388,560,546 cycles
2.000262846 2,393,332,358 cycles
3.000354131 2,393,176,537 cycles
4.000439503 2,393,203,790 cycles
5.000527075 2,393,167,675 cycles
6.000609052 2,393,203,670 cycles
7.000691082 2,393,175,678 cycles
The output format makes it easy to feed into a plotting program such as
gnuplot when the -I option is used in combination with the -x option:
$ perf stat -x, -I 1000 -e cycles noploop 10
noploop for 10 seconds
1.000084113,2378775498,cycles
2.000245798,2391056897,cycles
3.000354445,2392089414,cycles
4.000459115,2390936603,cycles
5.000565341,2392108173,cycles
Signed-off-by: Stephane Eranian <eranian@google.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung.kim@lge.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1359460064-3060-3-git-send-email-eranian@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2013-01-29 11:47:44 +00:00
|
|
|
static struct timespec ref_time;
|
2015-11-05 14:40:45 +00:00
|
|
|
static bool append_file;
|
2018-01-29 09:25:22 +00:00
|
|
|
static bool interval_count;
|
2015-11-05 14:40:45 +00:00
|
|
|
static const char *output_name;
|
|
|
|
static int output_fd;
|
perf stat: add perf stat -B to pretty print large numbers
It is hard to read very large numbers, so provide an option to perf stat
to separate thousands using a separator. The patch leverages the locale
support of stdio: you need to set LC_NUMERIC appropriately (for
instance LC_NUMERIC=en_US.UTF8) and pass -B to activate this
feature. This way existing scripts parsing the output do not need to be
changed. Here is an example.
$ perf stat noploop 2
noploop for 2 seconds
Performance counter stats for 'noploop 2':
1998.347031 task-clock-msecs # 0.998 CPUs
61 context-switches # 0.000 M/sec
0 CPU-migrations # 0.000 M/sec
118 page-faults # 0.000 M/sec
4,138,410,900 cycles # 2070.917 M/sec (scaled from 70.01%)
2,062,650,268 instructions # 0.498 IPC (scaled from 70.01%)
2,057,653,466 branches # 1029.678 M/sec (scaled from 70.01%)
40,267 branch-misses # 0.002 % (scaled from 30.04%)
2,055,961,348 cache-references # 1028.831 M/sec (scaled from 30.03%)
53,725 cache-misses # 0.027 M/sec (scaled from 30.02%)
2.001393933 seconds time elapsed
$ perf stat -B noploop 2
noploop for 2 seconds
Performance counter stats for 'noploop 2':
1998.297883 task-clock-msecs # 0.998 CPUs
59 context-switches # 0.000 M/sec
0 CPU-migrations # 0.000 M/sec
119 page-faults # 0.000 M/sec
4,131,380,160 cycles # 2067.450 M/sec (scaled from 70.01%)
2,059,096,507 instructions # 0.498 IPC (scaled from 70.01%)
2,054,681,303 branches # 1028.216 M/sec (scaled from 70.01%)
25,650 branch-misses # 0.001 % (scaled from 30.05%)
2,056,283,014 cache-references # 1029.017 M/sec (scaled from 30.03%)
47,097 cache-misses # 0.024 M/sec (scaled from 30.02%)
2.001391016 seconds time elapsed
Cc: David S. Miller <davem@davemloft.net>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Tom Zanussi <tzanussi@gmail.com>
LKML-Reference: <4bf28fe8.914ed80a.01ca.fffff5f5@mx.google.com>
Signed-off-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2010-05-18 13:00:01 +00:00
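The feature leans entirely on stdio's locale support. A minimal
standalone sketch of the same mechanism (not perf's actual printing
code; the %' grouping flag is a POSIX extension honored by glibc):

#include <locale.h>
#include <stdio.h>

int main(void)
{
	/* Pick up LC_NUMERIC from the environment, e.g. en_US.UTF8. */
	setlocale(LC_NUMERIC, "");
	/* The apostrophe flag requests locale-specific thousands
	 * grouping, turning 4138410900 into 4,138,410,900. */
	printf("%'llu cycles\n", 4138410900ULL);
	return 0;
}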
|
|
|
|
perf stat record: Add record command
Add 'perf stat record' command support. It creates a simple (header-only)
perf.data file for now.
The record command can be specified anywhere among the stat options. All
stat command options are valid for the stat record command, with the '-o'
option as the exception: if specified for the record command, it denotes
the perf.data file name.
Committer note:
Set sample_type to PERF_SAMPLE_IDENTIFIER, which should be harmless
while avoiding that older tools show confusing messages, for instance,
with sample_type = 0, we get:
$ perf stat record usleep 1
Performance counter stats for 'usleep 1':
0.630237 task-clock (msec) # 0.528 CPUs utilized
1 context-switches # 0.002 M/sec
0 cpu-migrations # 0.000 K/sec
52 page-faults # 0.083 M/sec
978,312 cycles # 1.552 GHz
671,931 stalled-cycles-frontend # 68.68% frontend cycles idle
<not supported> stalled-cycles-backend
646,379 instructions # 0.66 insns per cycle
# 1.04 stalled cycles per insn
131,046 branches # 207.931 M/sec
7,073 branch-misses # 5.40% of all branches
0.001193240 seconds time elapsed
$ oldperf evlist
WARNING: The perf.data file's data size field is 0 which is unexpected.
Was the 'perf record' command properly terminated?
non matching sample_type
$
While with sample_type set to PERF_SAMPLE_IDENTIFIER, after we re-run 'perf
stat record usleep' we get:
$ oldperf evlist
WARNING: The perf.data file's data size field is 0 which is unexpected.
Was the 'perf record' command properly terminated?
task-clock
context-switches
cpu-migrations
page-faults
cycles
stalled-cycles-frontend
stalled-cycles-backend
instructions
branches
branch-misses
$
This at least shows the names of the events in the perf.data file.
Additionally, such files, when passed to 'perf report' will produce:
$ oldperf report --stdio
WARNING: The perf.data file's data size field is 0 which is unexpected.
Was the 'perf record' command properly terminated?
Warning:
Kernel address maps (/proc/{kallsyms,modules}) were restricted.
Check /proc/sys/kernel/kptr_restrict before running 'perf record'.
As no suitable kallsyms nor vmlinux was found, kernel samples
can't be resolved.
Samples in kernel modules can't be resolved as well.
Error:
The perf.data file has no samples!
# To display the perf.data header info, please use --header/--header-only options.
#
$
This is confusing and can be solved by just adding the kernel mmap record,
which will also remove that warning about the data size field being equal to
zero. After generating the mmap record:
$ perf stat record usleep 1
Performance counter stats for 'usleep 1':
0.600796 task-clock (msec) # 0.478 CPUs utilized
1 context-switches # 0.002 M/sec
0 cpu-migrations # 0.000 K/sec
54 page-faults # 0.090 M/sec
886,844 cycles # 1.476 GHz
582,169 stalled-cycles-frontend # 65.65% frontend cycles idle
<not supported> stalled-cycles-backend
638,344 instructions # 0.72 insns per cycle
# 0.91 stalled cycles per insn
130,204 branches # 216.719 M/sec
7,500 branch-misses # 5.76% of all branches
0.001255897 seconds time elapsed
$ oldperf evlist
task-clock
context-switches
cpu-migrations
page-faults
cycles
stalled-cycles-frontend
stalled-cycles-backend
instructions
branches
branch-misses
$ oldperf report --stdio
Error:
The perf.data file has no samples!
# To display the perf.data header info, please use --header/--header-only options.
#
[acme@zoo linux]$
No warnings, sensible output about which events are in the perf.data file, and
a "file has no samples" message, which is accurate.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Tested-by: Kan Liang <kan.liang@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1446734469-11352-3-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-11-05 14:40:46 +00:00
|
|
|
struct perf_stat {
|
|
|
|
bool record;
|
2017-01-23 21:07:59 +00:00
|
|
|
struct perf_data data;
|
perf stat record: Add record command
2015-11-05 14:40:46 +00:00
|
|
|
struct perf_session *session;
|
|
|
|
u64 bytes_written;
|
2015-11-05 14:40:55 +00:00
|
|
|
struct perf_tool tool;
|
2015-11-05 14:40:56 +00:00
|
|
|
bool maps_allocated;
|
2019-07-21 11:23:49 +00:00
|
|
|
struct perf_cpu_map *cpus;
|
2019-07-21 11:23:50 +00:00
|
|
|
struct perf_thread_map *threads;
|
2015-11-05 14:41:02 +00:00
|
|
|
enum aggr_mode aggr_mode;
|
perf stat record: Add record command
2015-11-05 14:40:46 +00:00
|
|
|
};
|
|
|
|
|
|
|
|
static struct perf_stat perf_stat;
|
|
|
|
#define STAT_RECORD perf_stat.record
|
|
|
|
|
2009-12-31 08:05:50 +00:00
|
|
|
static volatile int done = 0;
|
|
|
|
|
2015-07-21 12:31:22 +00:00
|
|
|
static struct perf_stat_config stat_config = {
|
2018-08-30 06:32:40 +00:00
|
|
|
.aggr_mode = AGGR_GLOBAL,
|
|
|
|
.scale = true,
|
|
|
|
.unit_width = 4, /* strlen("unit") */
|
|
|
|
.run_count = 1,
|
|
|
|
.metric_only_len = METRIC_ONLY_LEN,
|
|
|
|
.walltime_nsecs_stats = &walltime_nsecs_stats,
|
2018-08-30 06:32:47 +00:00
|
|
|
.big_num = true,
|
2020-07-17 07:05:41 +00:00
|
|
|
.ctl_fd = -1,
|
2021-04-19 09:41:44 +00:00
|
|
|
.ctl_fd_ack = -1,
|
|
|
|
.iostat_run = false,
|
2015-07-21 12:31:22 +00:00
|
|
|
};
|
|
|
|
|
perf stat: Ensure group is defined on top of the same cpu mask
Jin Yao reported the issue (and posted the first versions of this change)
with groups being defined over events with different cpu masks.
This causes assert aborts in get_group_fd, like:
# perf stat -M "C2_Pkg_Residency" -a -- sleep 1
perf: util/evsel.c:1464: get_group_fd: Assertion `!(fd == -1)' failed.
Aborted
All the events in the group have to be defined over the same cpus so the
group_fd can be found for every leader/member pair.
Add a check to ensure this condition is met and remove the group
(with a warning) if we detect mixed cpus, like:
$ sudo perf stat -e '{power/energy-cores/,cycles},{instructions,power/energy-cores/}'
WARNING: event cpu maps do not match, disabling group:
anon group { power/energy-cores/, cycles }
anon group { instructions, power/energy-cores/ }
Ian also asked for cpu map details; they are displayed in verbose mode:
$ sudo perf stat -e '{cycles,power/energy-cores/}' -v
WARNING: group events cpu maps do not match, disabling group:
anon group { power/energy-cores/, cycles }
power/energy-cores/: 0
cycles: 0-7
anon group { instructions, power/energy-cores/ }
instructions: 0-7
power/energy-cores/: 0
Committer testing:
[root@seventh ~]# perf stat -e '{power/energy-cores/,cycles},{instructions,power/energy-cores/}'
WARNING: grouped events cpus do not match, disabling group:
anon group { power/energy-cores/, cycles }
anon group { instructions, power/energy-cores/ }
^C
Performance counter stats for 'system wide':
12.62 Joules power/energy-cores/
106,920,637 cycles
80,228,899 instructions # 0.75 insn per cycle
12.62 Joules power/energy-cores/
14.514476987 seconds time elapsed
[root@seventh ~]#
But if we put compatible events in each group it works:
[root@seventh ~]# perf stat -e '{power/energy-cores/,power/energy-ram/},{instructions,cycles}' -a sleep 2
Performance counter stats for 'system wide':
1.95 Joules power/energy-cores/
0.92 Joules power/energy-ram/
29,305,715 instructions # 1.03 insn per cycle
28,423,338 cycles
2.001438142 seconds time elapsed
[root@seventh ~]#
This needs improvement, though:
[root@seventh ~]# perf stat -e '{power/energy-cores/,power/energy-ram/},{instructions,cycles}' sleep 2
Error:
The sys_perf_event_open() syscall returned with 22 (Invalid argument) for event (power/energy-cores/).
/bin/dmesg | grep -i perf may provide additional information.
[root@seventh ~]#
We need to emit a better message, one stating that the power/ events
can't be used for a specific workload; they are per-cpu or system-wide
only.
Fixes: 6a4bb04caacc8 ("perf tools: Enable grouping logic for parsed events")
Co-developed-by: Jin Yao <yao.jin@linux.intel.com>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Ian Rogers <irogers@google.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Michael Petlan <mpetlan@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lore.kernel.org/lkml/20200602101736.GE1112120@krava
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-06-02 10:17:36 +00:00
|
|
|
static bool cpus_map_matched(struct evsel *a, struct evsel *b)
|
|
|
|
{
|
|
|
|
if (!a->core.cpus && !b->core.cpus)
|
|
|
|
return true;
|
|
|
|
|
|
|
|
if (!a->core.cpus || !b->core.cpus)
|
|
|
|
return false;
|
|
|
|
|
|
|
|
if (a->core.cpus->nr != b->core.cpus->nr)
|
|
|
|
return false;
|
|
|
|
|
|
|
|
for (int i = 0; i < a->core.cpus->nr; i++) {
|
|
|
|
if (a->core.cpus->map[i] != b->core.cpus->map[i])
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
|
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
|
|
|
static void evlist__check_cpu_maps(struct evlist *evlist)
|
|
|
|
{
|
|
|
|
struct evsel *evsel, *pos, *leader;
|
|
|
|
char buf[1024];
|
|
|
|
|
2021-04-27 07:01:29 +00:00
|
|
|
if (evlist__has_hybrid(evlist))
|
|
|
|
evlist__warn_hybrid_group(evlist);
|
|
|
|
|
perf stat: Ensure group is defined on top of the same cpu mask
2020-06-02 10:17:36 +00:00
|
|
|
evlist__for_each_entry(evlist, evsel) {
|
2021-07-06 15:17:00 +00:00
|
|
|
leader = evsel__leader(evsel);
|
perf stat: Ensure group is defined on top of the same cpu mask
2020-06-02 10:17:36 +00:00
|
|
|
|
|
|
|
/* Check that leader matches cpus with each member. */
|
|
|
|
if (leader == evsel)
|
|
|
|
continue;
|
|
|
|
if (cpus_map_matched(leader, evsel))
|
|
|
|
continue;
|
|
|
|
|
|
|
|
/* If there's mismatch disable the group and warn user. */
|
|
|
|
WARN_ONCE(1, "WARNING: grouped events cpus do not match, disabling group:\n");
|
|
|
|
evsel__group_desc(leader, buf, sizeof(buf));
|
|
|
|
pr_warning(" %s\n", buf);
|
|
|
|
|
|
|
|
if (verbose) {
|
|
|
|
cpu_map__snprint(leader->core.cpus, buf, sizeof(buf));
|
|
|
|
pr_warning(" %s: %s\n", leader->name, buf);
|
|
|
|
cpu_map__snprint(evsel->core.cpus, buf, sizeof(buf));
|
|
|
|
pr_warning(" %s: %s\n", evsel->name, buf);
|
|
|
|
}
|
|
|
|
|
|
|
|
for_each_group_evsel(pos, leader) {
|
2021-07-06 15:17:00 +00:00
|
|
|
evsel__set_leader(pos, pos);
|
perf stat: Ensure group is defined on top of the same cpu mask
2020-06-02 10:17:36 +00:00
|
|
|
pos->core.nr_members = 0;
|
|
|
|
}
|
2021-07-06 15:17:00 +00:00
|
|
|
evsel->core.leader->nr_members = 0;
|
perf stat: Ensure group is defined on top of the same cpu mask
2020-06-02 10:17:36 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
perf stat: Add interval printing
2013-01-29 11:47:44 +00:00
|
|
|
static inline void diff_timespec(struct timespec *r, struct timespec *a,
|
|
|
|
struct timespec *b)
|
|
|
|
{
|
|
|
|
r->tv_sec = a->tv_sec - b->tv_sec;
|
|
|
|
if (a->tv_nsec < b->tv_nsec) {
|
2016-08-08 17:57:04 +00:00
|
|
|
r->tv_nsec = a->tv_nsec + NSEC_PER_SEC - b->tv_nsec;
|
perf stat: Add interval printing
2013-01-29 11:47:44 +00:00
|
|
|
r->tv_sec--;
|
|
|
|
} else {
|
|
|
|
r->tv_nsec = a->tv_nsec - b->tv_nsec;
|
|
|
|
}
|
|
|
|
}
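A standalone sketch of how such a helper pairs with clock_gettime() to
measure an elapsed interval; this is an illustration, not perf's code
(the delta logic is inlined so the example compiles on its own):

#include <stdio.h>
#include <time.h>

int main(void)
{
	struct timespec start, end, delta;

	clock_gettime(CLOCK_MONOTONIC, &start);
	/* ... workload under measurement ... */
	clock_gettime(CLOCK_MONOTONIC, &end);

	/* r = a - b, borrowing a second when the nanosecond field
	 * underflows, exactly as diff_timespec() above does. */
	delta.tv_sec = end.tv_sec - start.tv_sec;
	if (end.tv_nsec < start.tv_nsec) {
		delta.tv_nsec = end.tv_nsec + 1000000000L - start.tv_nsec;
		delta.tv_sec--;
	} else {
		delta.tv_nsec = end.tv_nsec - start.tv_nsec;
	}

	printf("%ld.%09ld seconds elapsed\n",
	       (long)delta.tv_sec, delta.tv_nsec);
	return 0;
}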
|
|
|
|
|
2015-06-26 09:29:13 +00:00
|
|
|
static void perf_stat__reset_stats(void)
|
|
|
|
{
|
2017-12-05 14:03:07 +00:00
|
|
|
int i;
|
|
|
|
|
2020-11-30 12:31:04 +00:00
|
|
|
evlist__reset_stats(evsel_list);
|
2015-06-03 14:25:59 +00:00
|
|
|
perf_stat__reset_shadow_stats();
|
2017-12-05 14:03:07 +00:00
|
|
|
|
|
|
|
for (i = 0; i < stat_config.stats_num; i++)
|
|
|
|
perf_stat__reset_shadow_per_stat(&stat_config.stats[i]);
|
2015-06-03 14:25:55 +00:00
|
|
|
}
|
|
|
|
|
2015-11-05 14:40:48 +00:00
|
|
|
static int process_synthesized_event(struct perf_tool *tool __maybe_unused,
|
|
|
|
union perf_event *event,
|
|
|
|
struct perf_sample *sample __maybe_unused,
|
|
|
|
struct machine *machine __maybe_unused)
|
perf stat record: Add record command
2015-11-05 14:40:46 +00:00
|
|
|
{
|
2017-01-23 21:07:59 +00:00
|
|
|
if (perf_data__write(&perf_stat.data, event, event->header.size) < 0) {
|
perf stat record: Add record command
2015-11-05 14:40:46 +00:00
|
|
|
pr_err("failed to write perf data, error: %m\n");
|
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
|
2015-11-05 14:40:48 +00:00
|
|
|
perf_stat.bytes_written += event->header.size;
|
perf stat record: Add record command
2015-11-05 14:40:46 +00:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2015-11-05 14:40:56 +00:00
|
|
|
static int write_stat_round_event(u64 tm, u64 type)
|
2015-11-05 14:40:52 +00:00
|
|
|
{
|
2015-11-05 14:40:56 +00:00
|
|
|
return perf_event__synthesize_stat_round(NULL, tm, type,
|
2015-11-05 14:40:52 +00:00
|
|
|
process_synthesized_event,
|
|
|
|
NULL);
|
|
|
|
}
|
|
|
|
|
|
|
|
#define WRITE_STAT_ROUND_EVENT(time, interval) \
|
|
|
|
write_stat_round_event(time, PERF_STAT_ROUND_TYPE__ ## interval)
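The token-pasting macro keeps call sites terse. Assuming the
PERF_STAT_ROUND_TYPE__INTERVAL and PERF_STAT_ROUND_TYPE__FINAL
enumerators of the stat-round event, a call such as (elapsed_ns being an
illustrative argument name)

	WRITE_STAT_ROUND_EVENT(elapsed_ns, FINAL)

expands to

	write_stat_round_event(elapsed_ns, PERF_STAT_ROUND_TYPE__FINAL)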
|
|
|
|
|
2019-09-02 20:04:12 +00:00
|
|
|
#define SID(e, x, y) xyarray__entry(e->core.sample_id, x, y)
|
2015-11-05 14:40:51 +00:00
|
|
|
|
2020-05-04 16:46:34 +00:00
|
|
|
static int evsel__write_stat_event(struct evsel *counter, u32 cpu, u32 thread,
|
|
|
|
struct perf_counts_values *count)
|
2015-11-05 14:40:51 +00:00
|
|
|
{
|
|
|
|
struct perf_sample_id *sid = SID(counter, cpu, thread);
|
|
|
|
|
|
|
|
return perf_event__synthesize_stat(NULL, cpu, thread, sid->id, count,
|
|
|
|
process_synthesized_event, NULL);
|
|
|
|
}
|
|
|
|
|
2019-07-21 11:23:51 +00:00
|
|
|
static int read_single_counter(struct evsel *counter, int cpu,
|
2019-03-26 22:18:21 +00:00
|
|
|
int thread, struct timespec *rs)
|
|
|
|
{
|
|
|
|
if (counter->tool_event == PERF_TOOL_DURATION_TIME) {
|
|
|
|
u64 val = rs->tv_nsec + rs->tv_sec*1000000000ULL;
|
|
|
|
struct perf_counts_values *count =
|
|
|
|
perf_counts(counter->counts, cpu, thread);
|
|
|
|
count->ena = count->run = val;
|
|
|
|
count->val = val;
|
|
|
|
return 0;
|
|
|
|
}
|
2020-04-30 14:00:53 +00:00
|
|
|
return evsel__read_counter(counter, cpu, thread);
|
2019-03-26 22:18:21 +00:00
|
|
|
}
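For example, with rs->tv_sec == 2 and rs->tv_nsec == 500000000 the
synthesized duration_time value is 2 * 1000000000 + 500000000 =
2500000000 ns; ena, run and val are all set to that value, so the
counter reads back as fully enabled and running.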
|
|
|
|
|
2010-11-16 09:05:01 +00:00
|
|
|
/*
|
|
|
|
* Read out the results of a single counter:
|
|
|
|
* do not aggregate counts across CPUs in system-wide mode
|
|
|
|
*/
|
2019-11-21 00:15:20 +00:00
|
|
|
static int read_counter_cpu(struct evsel *counter, struct timespec *rs, int cpu)
|
2010-11-16 09:05:01 +00:00
|
|
|
{
|
2019-08-22 11:11:41 +00:00
|
|
|
int nthreads = perf_thread_map__nr(evsel_list->core.threads);
|
2019-11-21 00:15:20 +00:00
|
|
|
int thread;
|
2010-11-16 09:05:01 +00:00
|
|
|
|
2015-02-13 18:40:58 +00:00
|
|
|
if (!counter->supported)
|
|
|
|
return -ENOENT;
|
|
|
|
|
2019-08-06 09:35:19 +00:00
|
|
|
if (counter->core.system_wide)
|
2014-11-21 09:31:09 +00:00
|
|
|
nthreads = 1;
|
|
|
|
|
|
|
|
for (thread = 0; thread < nthreads; thread++) {
|
2019-11-21 00:15:20 +00:00
|
|
|
struct perf_counts_values *count;
|
|
|
|
|
|
|
|
count = perf_counts(counter->counts, cpu, thread);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* The leader's group read loads data into its group members
|
2020-04-30 14:00:53 +00:00
|
|
|
* (via evsel__read_counter()) and sets their count->loaded.
|
2019-11-21 00:15:20 +00:00
|
|
|
*/
|
|
|
|
if (!perf_counts__is_loaded(counter->counts, cpu, thread) &&
|
|
|
|
read_single_counter(counter, cpu, thread, rs)) {
|
|
|
|
counter->counts->scaled = -1;
|
|
|
|
perf_counts(counter->counts, cpu, thread)->ena = 0;
|
|
|
|
perf_counts(counter->counts, cpu, thread)->run = 0;
|
|
|
|
return -1;
|
|
|
|
}
|
2015-11-05 14:40:51 +00:00
|
|
|
|
2019-11-21 00:15:20 +00:00
|
|
|
perf_counts__set_loaded(counter->counts, cpu, thread, false);
|
perf stat: Use group read for event groups
Make perf stat use group read if there are groups defined. The group
read will get the values for all members of a group within a single
syscall instead of calling the read syscall for every event.
We can see considerably fewer kernel cycles spent on a single
group read than on reading each event separately, as for the following
perf stat command:
# perf stat -e {cycles,instructions} -I 10 -a sleep 1
Monitored with "perf stat -r 5 -e '{cycles:u,cycles:k}'"
Before:
24,325,676 cycles:u
297,040,775 cycles:k
1.038554134 seconds time elapsed
After:
25,034,418 cycles:u
158,256,395 cycles:k
1.036864497 seconds time elapsed
The perf_evsel__open fallback changes were contributed by Andi Kleen.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20170726120206.9099-4-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-07-26 12:02:06 +00:00
|
|
|
|
2019-11-21 00:15:20 +00:00
|
|
|
if (STAT_RECORD) {
|
2020-05-04 16:46:34 +00:00
|
|
|
if (evsel__write_stat_event(counter, cpu, thread, count)) {
|
2019-11-21 00:15:20 +00:00
|
|
|
pr_err("failed to write stat event\n");
|
|
|
|
return -1;
|
2015-11-05 14:40:51 +00:00
|
|
|
}
|
2019-11-21 00:15:20 +00:00
|
|
|
}
|
2016-04-27 20:00:51 +00:00
|
|
|
|
2019-11-21 00:15:20 +00:00
|
|
|
if (verbose > 1) {
|
|
|
|
fprintf(stat_config.output,
|
|
|
|
"%s: %d: %" PRIu64 " %" PRIu64 " %" PRIu64 "\n",
|
2020-04-29 19:07:09 +00:00
|
|
|
evsel__name(counter),
|
2019-11-21 00:15:20 +00:00
|
|
|
cpu,
|
|
|
|
count->val, count->ena, count->run);
|
2014-11-21 09:31:09 +00:00
|
|
|
}
|
2010-11-16 09:05:01 +00:00
|
|
|
}
|
2011-01-03 19:45:52 +00:00
|
|
|
|
|
|
|
return 0;
|
2009-05-29 07:10:54 +00:00
|
|
|
}
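
/*
 * Context (from this file's history): when events are grouped, the
 * leader's read fetches every member's value in a single read syscall,
 * which is considerably cheaper than one syscall per event; measured with
 * "perf stat -r 5 -e '{cycles:u,cycles:k}'", kernel cycles dropped from
 * roughly 297M to 158M once group reads were used. The loaded flag
 * checked above is how non-leader members avoid a redundant second read.
 *
 * A minimal sketch (not part of perf; the helper name is hypothetical)
 * of how a caller could drive read_counter_cpu() across every CPU index
 * an evsel is mapped to. read_affinity_counters() below does this for
 * the whole evlist while also pinning itself to each CPU first.
 */
#if 0
static int read_counter_all_cpus(struct evsel *counter, struct timespec *rs)
{
	int idx;

	for (idx = 0; idx < perf_cpu_map__nr(counter->core.cpus); idx++) {
		int err = read_counter_cpu(counter, rs, idx);

		if (err)
			return err;
	}
	return 0;
}
#endif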

static int read_affinity_counters(struct timespec *rs)
{
	struct evsel *counter;
	struct affinity affinity;
	int i, ncpus, cpu;

	if (all_counters_use_bpf)
		return 0;

	if (affinity__setup(&affinity) < 0)
		return -1;

	ncpus = perf_cpu_map__nr(evsel_list->core.all_cpus);
	if (!target__has_cpu(&target) || target__has_per_thread(&target))
		ncpus = 1;
	evlist__for_each_cpu(evsel_list, i, cpu) {
		if (i >= ncpus)
			break;
		affinity__set(&affinity, cpu);

		evlist__for_each_entry(evsel_list, counter) {
			if (evsel__cpu_iter_skip(counter, cpu))
				continue;
			if (evsel__is_bpf(counter))
				continue;
			if (!counter->err) {
				counter->err = read_counter_cpu(counter, rs,
								counter->cpu_iter - 1);
			}
		}
	}
	affinity__cleanup(&affinity);
	return 0;
}
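
/*
 * The affinity idiom above, shown in isolation. A sketch, not part of
 * perf, under the assumption that the helpers behave as used in
 * read_affinity_counters(): affinity__set() migrates the reading thread
 * next to the target CPU so the subsequent syscalls avoid cross-CPU
 * traffic.
 */
#if 0
static void visit_each_cpu(struct perf_cpu_map *cpus)
{
	struct affinity affinity;
	int idx;

	if (affinity__setup(&affinity) < 0)
		return;
	for (idx = 0; idx < perf_cpu_map__nr(cpus); idx++) {
		affinity__set(&affinity, perf_cpu_map__cpu(cpus, idx));
		/* work issued here runs on (or near) the chosen CPU */
	}
	affinity__cleanup(&affinity);
}
#endif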

static int read_bpf_map_counters(void)
{
	struct evsel *counter;
	int err;

	evlist__for_each_entry(evsel_list, counter) {
		if (!evsel__is_bpf(counter))
			continue;

		err = bpf_counter__read(counter);
		if (err)
			return err;
	}
	return 0;
}
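
/*
 * Usage context, from the change that introduced the BPF counters: the
 * 'perf stat -b <prog-id>' mode counts events for one BPF program via
 * fentry/fexit monitor programs that accumulate deltas in a BPF map,
 * e.g.:
 *
 *   # perf stat -e ref-cycles,cycles -b 254 -I 1000
 *
 * read_bpf_map_counters() is the userspace map-read half of that path.
 */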

static void read_counters(struct timespec *rs)
{
	struct evsel *counter;

	if (!stat_config.stop_read_counter) {
		if (read_bpf_map_counters() ||
		    read_affinity_counters(rs))
			return;
	}

	evlist__for_each_entry(evsel_list, counter) {
		if (counter->err)
			pr_debug("failed to read counter %s\n", counter->name);
		if (counter->err == 0 && perf_stat_process_counter(&stat_config, counter))
			pr_warning("failed to process counter %s\n", counter->name);
		counter->err = 0;
	}
}

static int runtime_stat_new(struct perf_stat_config *config, int nthreads)
{
	int i;

	config->stats = calloc(nthreads, sizeof(struct runtime_stat));
	if (!config->stats)
		return -1;

	config->stats_num = nthreads;

	for (i = 0; i < nthreads; i++)
		runtime_stat__init(&config->stats[i]);

	return 0;
}

static void runtime_stat_delete(struct perf_stat_config *config)
{
	int i;

	if (!config->stats)
		return;

	for (i = 0; i < config->stats_num; i++)
		runtime_stat__exit(&config->stats[i]);

	zfree(&config->stats);
}

static void runtime_stat_reset(struct perf_stat_config *config)
{
	int i;

	if (!config->stats)
		return;

	for (i = 0; i < config->stats_num; i++)
		perf_stat__reset_shadow_per_stat(&config->stats[i]);
}
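
/*
 * Why the reset matters: stat_config.stats[] carries per-thread runtime
 * state, and in interval mode (-I) it must be cleared at every interval
 * or the printed metrics blend intervals. The motivating report showed
 * an interval with 1,484,582 instructions over 1,749,936 cycles printed
 * as 0.28 insn per cycle instead of the correct 0.85 before this reset
 * existed.
 */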

static void process_interval(void)
{
	struct timespec ts, rs;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	diff_timespec(&rs, &ts, &ref_time);

	perf_stat__reset_shadow_per_stat(&rt_stat);
	runtime_stat_reset(&stat_config);
	read_counters(&rs);

	if (STAT_RECORD) {
		if (WRITE_STAT_ROUND_EVENT(rs.tv_sec * NSEC_PER_SEC + rs.tv_nsec, INTERVAL))
			pr_err("failed to write stat round event\n");
	}

	init_stats(&walltime_nsecs_stats);
	update_stats(&walltime_nsecs_stats, stat_config.interval * 1000000ULL);
	print_counters(&rs, 0, NULL);
}
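
/*
 * A sketch (an assumption, not the verbatim perf helper) of the
 * rs = ts - ref_time computation process_interval() relies on: subtract
 * the seconds, then normalize a negative nanosecond difference by
 * borrowing one second.
 */
#if 0
static void diff_timespec_sketch(struct timespec *r, struct timespec *a,
				 struct timespec *b)
{
	r->tv_sec = a->tv_sec - b->tv_sec;
	if (a->tv_nsec < b->tv_nsec) {
		r->tv_nsec = a->tv_nsec + NSEC_PER_SEC - b->tv_nsec;
		r->tv_sec--;
	} else {
		r->tv_nsec = a->tv_nsec - b->tv_nsec;
	}
}
#endif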

static bool handle_interval(unsigned int interval, int *times)
{
	if (interval) {
		process_interval();
		if (interval_count && !(--(*times)))
			return true;
	}
	return false;
}

static int enable_counters(void)
{
	struct evsel *evsel;
	int err;

	evlist__for_each_entry(evsel_list, evsel) {
		if (!evsel__is_bpf(evsel))
			continue;

		err = bpf_counter__enable(evsel);
		if (err)
			return err;
	}

	if (stat_config.initial_delay < 0) {
		pr_info(EVLIST_DISABLED_MSG);
		return 0;
	}

	if (stat_config.initial_delay > 0) {
		pr_info(EVLIST_DISABLED_MSG);
		usleep(stat_config.initial_delay * USEC_PER_MSEC);
	}

	/*
	 * We need to enable counters only if:
	 * - we don't have a tracee (we are attaching to a task or cpu), or
	 * - we have an initial delay configured.
	 */
	if (!target__none(&target) || stat_config.initial_delay) {
		if (!all_counters_use_bpf)
			evlist__enable(evsel_list);
		if (stat_config.initial_delay > 0)
			pr_info(EVLIST_ENABLED_MSG);
	}
	return 0;
}
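
/*
 * Illustrative command lines for the three initial_delay branches above
 * (hedged; see perf-stat(1) for the authoritative option spelling):
 *
 *   perf stat -- workload                  enable immediately (delay == 0)
 *   perf stat -D 100 -- workload           enable ~100ms after launch
 *   perf stat -D -1 --control fd:ctl,ack   stay disabled until an
 *                                          'enable' command arrives on
 *                                          the control fd
 */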

static void disable_counters(void)
{
	struct evsel *counter;

	/*
	 * If we don't have a tracee (i.e. we are attaching to a task or cpu),
	 * counters may still be running. To get accurate group ratios, we must
	 * stop groups from counting before reading their constituent counters.
	 */
	if (!target__none(&target)) {
		evlist__for_each_entry(evsel_list, counter)
			bpf_counter__disable(counter);
		if (!all_counters_use_bpf)
			evlist__disable(evsel_list);
	}
}
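
/*
 * The skew this guards against is easy to reproduce with a group of
 * identical events, whose members should read the same value:
 *
 *   # perf stat -a -C0 -e '{cycles,cycles}' sleep 1
 *
 * Without disabling the group before the reads, each member keeps
 * counting while its siblings are read one by one, so in-group ratios
 * drift.
 */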

static volatile int workload_exec_errno;

/*
 * evlist__prepare_workload() will send a SIGUSR1 if the fork fails,
 * since we asked for it by setting its want_signal flag to true.
 */
static void workload_exec_failed_signal(int signo __maybe_unused, siginfo_t *info,
					void *ucontext __maybe_unused)
{
	workload_exec_errno = info->si_value.sival_int;
}
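
/*
 * A sketch of how such a handler gets installed (assumed setup, shown for
 * context rather than copied from the perf sources): SA_SIGINFO is what
 * makes info->si_value carry the forwarded errno.
 */
#if 0
static void install_workload_exec_handler(void)
{
	struct sigaction act = {
		.sa_flags     = SA_SIGINFO,
		.sa_sigaction = workload_exec_failed_signal,
	};

	sigaction(SIGUSR1, &act, NULL);
}
#endif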
|
|
|
|
|
2020-05-04 16:46:34 +00:00
|
|
|
static bool evsel__should_store_id(struct evsel *counter)
|
perf stat: Use group read for event groups
Make perf stat use group read if there are groups defined. The group
read will get the values for all members of a group within a single
syscall instead of calling the read syscall for every event.
We can see a considerably smaller amount of kernel cycles spent on a
single group read than on reading each event separately, as for the
following perf stat command:
# perf stat -e {cycles,instructions} -I 10 -a sleep 1
Monitored with "perf stat -r 5 -e '{cycles:u,cycles:k}'"
Before:
24,325,676 cycles:u
297,040,775 cycles:k
1.038554134 seconds time elapsed
After:
25,034,418 cycles:u
158,256,395 cycles:k
1.036864497 seconds time elapsed
The perf_evsel__open fallback changes were contributed by Andi Kleen.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20170726120206.9099-4-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-07-26 12:02:06 +00:00
|
|
|
{
|
2019-07-21 11:24:29 +00:00
|
|
|
return STAT_RECORD || counter->core.attr.read_format & PERF_FORMAT_ID;
|
2017-07-26 12:02:06 +00:00
|
|
|
}
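For reference, here is a minimal sketch of what a single read() returns for a group leader opened with read_format = PERF_FORMAT_GROUP | PERF_FORMAT_ID, following the layout documented in perf_event_open(2); MAX_MEMBERS and read_group are illustrative names, not perf code:

#include <stdint.h>
#include <unistd.h>

#define MAX_MEMBERS 16	/* arbitrary bound for this illustration */

struct group_read {
	uint64_t nr;			/* number of events in the group */
	struct {
		uint64_t value;		/* counter value */
		uint64_t id;		/* PERF_FORMAT_ID: maps the value back to an event */
	} values[MAX_MEMBERS];
};

static ssize_t read_group(int group_leader_fd, struct group_read *gr)
{
	/* one syscall fetches the counts of every group member */
	return read(group_leader_fd, gr, sizeof(*gr));
}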
|
|
|
|
|
2018-10-22 09:30:15 +00:00
|
|
|
static bool is_target_alive(struct target *_target,
|
2019-07-21 11:23:50 +00:00
|
|
|
struct perf_thread_map *threads)
|
2018-10-22 09:30:15 +00:00
|
|
|
{
|
|
|
|
struct stat st;
|
|
|
|
int i;
|
|
|
|
|
|
|
|
if (!target__has_task(_target))
|
|
|
|
return true;
|
|
|
|
|
|
|
|
for (i = 0; i < threads->nr; i++) {
|
|
|
|
char path[PATH_MAX];
|
|
|
|
|
|
|
|
scnprintf(path, PATH_MAX, "%s/%d", procfs__mountpoint(),
|
|
|
|
threads->map[i].pid);
|
|
|
|
|
|
|
|
if (!stat(path, &st))
|
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
|
2020-07-17 07:05:06 +00:00
|
|
|
static void process_evlist(struct evlist *evlist, unsigned int interval)
|
|
|
|
{
|
|
|
|
enum evlist_ctl_cmd cmd = EVLIST_CTL_CMD_UNSUPPORTED;
|
|
|
|
|
|
|
|
if (evlist__ctlfd_process(evlist, &cmd) > 0) {
|
|
|
|
switch (cmd) {
|
|
|
|
case EVLIST_CTL_CMD_ENABLE:
|
|
|
|
if (interval)
|
|
|
|
process_interval();
|
|
|
|
break;
|
|
|
|
case EVLIST_CTL_CMD_DISABLE:
|
|
|
|
if (interval)
|
|
|
|
process_interval();
|
|
|
|
break;
|
2020-09-01 09:37:57 +00:00
|
|
|
case EVLIST_CTL_CMD_SNAPSHOT:
|
2020-07-17 07:05:06 +00:00
|
|
|
case EVLIST_CTL_CMD_ACK:
|
|
|
|
case EVLIST_CTL_CMD_UNSUPPORTED:
|
perf tools: Add 'evlist' control command
Add a new 'evlist' control command to display all the evlist events.
When it is received, perf will scan and print the current evlist to the
perf record terminal.
The interface string for the control file is:
evlist [-v|-g|-F]
The syntax follows perf evlist command:
-F Show just the sample frequency used for each event.
-v Show all fields.
-g Show event group information.
Example session:
terminal 1:
# mkfifo control ack
# perf record --control=fifo:control,ack -e '{cycles,instructions}'
terminal 2:
# echo evlist > control
terminal 1:
cycles
instructions
dummy:HG
terminal 2:
# echo 'evlist -v' > control
terminal 1:
cycles: size: 120, { sample_period, sample_freq }: 4000, sample_type: \
IP|TID|TIME|ID|CPU|PERIOD, read_format: ID, disabled: 1, inherit: 1, freq: 1, \
sample_id_all: 1, exclude_guest: 1
instructions: size: 120, config: 0x1, { sample_period, sample_freq }: 4000, \
sample_type: IP|TID|TIME|ID|CPU|PERIOD, read_format: ID, inherit: 1, freq: 1, \
sample_id_all: 1, exclude_guest: 1
dummy:HG: type: 1, size: 120, config: 0x9, { sample_period, sample_freq }: 4000, \
sample_type: IP|TID|TIME|ID|CPU|PERIOD, read_format: ID, inherit: 1, mmap: 1, \
comm: 1, freq: 1, task: 1, sample_id_all: 1, mmap2: 1, comm_exec: 1, ksymbol: 1, \
bpf_event: 1
terminal 2:
# echo 'evlist -g' > control
terminal 1:
{cycles,instructions}
dummy:HG
terminal 2:
# echo 'evlist -F' > control
terminal 1:
cycles: sample_freq=4000
instructions: sample_freq=4000
dummy:HG: sample_freq=4000
This new evlist command is handy to get real event names when
wildcards are used.
Adding the evsel_fprintf.c object to the python/perf.so build, because
it's now an evlist.c dependency.
Adding the PYTHON_PERF define for python/perf.so compilation, so we
can use it to compile in only evsel__fprintf from the evsel_fprintf.c
object.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Alexei Budankov <abudankov@huawei.com>
Cc: Ian Rogers <irogers@google.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Michael Petlan <mpetlan@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lore.kernel.org/lkml/20201226232038.390883-3-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-12-26 23:20:36 +00:00
|
|
|
case EVLIST_CTL_CMD_EVLIST:
|
2020-12-26 23:20:37 +00:00
|
|
|
case EVLIST_CTL_CMD_STOP:
|
2020-12-26 23:20:38 +00:00
|
|
|
case EVLIST_CTL_CMD_PING:
|
2020-07-17 07:05:06 +00:00
|
|
|
default:
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
static void compute_tts(struct timespec *time_start, struct timespec *time_stop,
|
|
|
|
int *time_to_sleep)
|
|
|
|
{
|
|
|
|
int tts = *time_to_sleep;
|
|
|
|
struct timespec time_diff;
|
|
|
|
|
|
|
|
diff_timespec(&time_diff, time_stop, time_start);
|
|
|
|
|
|
|
|
tts -= time_diff.tv_sec * MSEC_PER_SEC +
|
|
|
|
time_diff.tv_nsec / NSEC_PER_MSEC;
|
|
|
|
|
|
|
|
if (tts < 0)
|
|
|
|
tts = 0;
|
|
|
|
|
|
|
|
*time_to_sleep = tts;
|
|
|
|
}
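A hypothetical worked example of the helper above: with a 1000ms interval, if dispatching control-fd events took 180ms, the next poll timeout shrinks to 820ms so interval boundaries stay aligned:

struct timespec start = { .tv_sec = 10, .tv_nsec = 0 };
struct timespec stop  = { .tv_sec = 10, .tv_nsec = 180 * 1000000L };
int tts = 1000;

compute_tts(&start, &stop, &tts);	/* 1000 - 180 => tts is now 820 */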
|
|
|
|
|
|
|
|
static int dispatch_events(bool forks, int timeout, int interval, int *times)
|
2020-07-17 07:04:02 +00:00
|
|
|
{
|
|
|
|
int child_exited = 0, status = 0;
|
2020-07-17 07:05:06 +00:00
|
|
|
int time_to_sleep, sleep_time;
|
|
|
|
struct timespec time_start, time_stop;
|
|
|
|
|
|
|
|
if (interval)
|
|
|
|
sleep_time = interval;
|
|
|
|
else if (timeout)
|
|
|
|
sleep_time = timeout;
|
|
|
|
else
|
|
|
|
sleep_time = 1000;
|
|
|
|
|
|
|
|
time_to_sleep = sleep_time;
|
2020-07-17 07:04:02 +00:00
|
|
|
|
|
|
|
while (!done) {
|
|
|
|
if (forks)
|
|
|
|
child_exited = waitpid(child_pid, &status, WNOHANG);
|
|
|
|
else
|
|
|
|
child_exited = !is_target_alive(&target, evsel_list->core.threads) ? 1 : 0;
|
|
|
|
|
|
|
|
if (child_exited)
|
|
|
|
break;
|
|
|
|
|
2020-07-17 07:05:06 +00:00
|
|
|
clock_gettime(CLOCK_MONOTONIC, &time_start);
|
|
|
|
if (!(evlist__poll(evsel_list, time_to_sleep) > 0)) { /* poll timeout or EINTR */
|
|
|
|
if (timeout || handle_interval(interval, times))
|
|
|
|
break;
|
|
|
|
time_to_sleep = sleep_time;
|
|
|
|
} else { /* fd revent */
|
|
|
|
process_evlist(evsel_list, interval);
|
|
|
|
clock_gettime(CLOCK_MONOTONIC, &time_stop);
|
|
|
|
compute_tts(&time_start, &time_stop, &time_to_sleep);
|
|
|
|
}
|
2020-07-17 07:04:02 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
return status;
|
|
|
|
}
|
|
|
|
|
2019-11-21 00:15:18 +00:00
|
|
|
enum counter_recovery {
|
|
|
|
COUNTER_SKIP,
|
|
|
|
COUNTER_RETRY,
|
|
|
|
COUNTER_FATAL,
|
|
|
|
};
|
|
|
|
|
|
|
|
static enum counter_recovery stat_handle_error(struct evsel *counter)
|
|
|
|
{
|
|
|
|
char msg[BUFSIZ];
|
|
|
|
/*
|
|
|
|
* PPC returns ENXIO for HW counters until 2.6.37
|
|
|
|
* (behavior changed with commit b0a873e).
|
|
|
|
*/
|
|
|
|
if (errno == EINVAL || errno == ENOSYS ||
|
|
|
|
errno == ENOENT || errno == EOPNOTSUPP ||
|
|
|
|
errno == ENXIO) {
|
|
|
|
if (verbose > 0)
|
|
|
|
ui__warning("%s event is not supported by the kernel.\n",
|
2020-04-29 19:07:09 +00:00
|
|
|
evsel__name(counter));
|
2019-11-21 00:15:18 +00:00
|
|
|
counter->supported = false;
|
2019-11-21 00:15:19 +00:00
|
|
|
/*
|
|
|
|
* errored is a sticky flag that means one of the counter's
|
|
|
|
* cpu event had a problem and needs to be reexamined.
|
|
|
|
*/
|
|
|
|
counter->errored = true;
|
2019-11-21 00:15:18 +00:00
|
|
|
|
2021-07-06 15:17:00 +00:00
|
|
|
if ((evsel__leader(counter) != counter) ||
|
|
|
|
!(counter->core.leader->nr_members > 1))
|
2019-11-21 00:15:18 +00:00
|
|
|
return COUNTER_SKIP;
|
2020-04-30 14:46:15 +00:00
|
|
|
} else if (evsel__fallback(counter, errno, msg, sizeof(msg))) {
|
2019-11-21 00:15:18 +00:00
|
|
|
if (verbose > 0)
|
|
|
|
ui__warning("%s\n", msg);
|
|
|
|
return COUNTER_RETRY;
|
|
|
|
} else if (target__has_per_thread(&target) &&
|
|
|
|
evsel_list->core.threads &&
|
|
|
|
evsel_list->core.threads->err_thread != -1) {
|
|
|
|
/*
|
|
|
|
* For the global --per-thread case, skip the current
|
|
|
|
* error thread.
|
|
|
|
*/
|
|
|
|
if (!thread_map__remove(evsel_list->core.threads,
|
|
|
|
evsel_list->core.threads->err_thread)) {
|
|
|
|
evsel_list->core.threads->err_thread = -1;
|
|
|
|
return COUNTER_RETRY;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2020-05-04 16:43:03 +00:00
|
|
|
evsel__open_strerror(counter, &target, errno, msg, sizeof(msg));
|
2019-11-21 00:15:18 +00:00
|
|
|
ui__error("%s\n", msg);
|
|
|
|
|
|
|
|
if (child_pid != -1)
|
|
|
|
kill(child_pid, SIGTERM);
|
|
|
|
return COUNTER_FATAL;
|
|
|
|
}
|
|
|
|
|
2018-04-23 09:08:21 +00:00
|
|
|
static int __run_perf_stat(int argc, const char **argv, int run_idx)
|
2009-06-13 12:57:28 +00:00
|
|
|
{
|
2015-07-21 12:31:25 +00:00
|
|
|
int interval = stat_config.interval;
|
2018-01-29 09:25:22 +00:00
|
|
|
int times = stat_config.times;
|
2018-01-29 09:25:23 +00:00
|
|
|
int timeout = stat_config.timeout;
|
2017-02-13 19:45:24 +00:00
|
|
|
char msg[BUFSIZ];
|
2009-06-13 12:57:28 +00:00
|
|
|
unsigned long long t0, t1;
|
2019-07-21 11:23:51 +00:00
|
|
|
struct evsel *counter;
|
2013-11-12 16:58:49 +00:00
|
|
|
size_t l;
|
2009-06-13 12:57:28 +00:00
|
|
|
int status = 0;
|
2010-03-18 14:36:03 +00:00
|
|
|
const bool forks = (argc > 0);
|
2017-01-23 21:07:59 +00:00
|
|
|
bool is_pipe = STAT_RECORD ? perf_stat.data.is_pipe : false;
|
2019-11-21 00:15:19 +00:00
|
|
|
struct affinity affinity;
|
perf stat: Enable counting events for BPF programs
Introduce 'perf stat -b' option, which counts events for BPF programs, like:
[root@localhost ~]# ~/perf stat -e ref-cycles,cycles -b 254 -I 1000
1.487903822 115,200 ref-cycles
1.487903822 86,012 cycles
2.489147029 80,560 ref-cycles
2.489147029 73,784 cycles
3.490341825 60,720 ref-cycles
3.490341825 37,797 cycles
4.491540887 37,120 ref-cycles
4.491540887 31,963 cycles
The example above counts 'cycles' and 'ref-cycles' of the BPF program
with id 254. This is similar to the bpftool-prog-profile command, but
more flexible.
'perf stat -b' creates per-cpu perf_event and loads fentry/fexit BPF
programs (monitor-progs) to the target BPF program (target-prog). The
monitor-progs read perf_event before and after the target-prog, and
aggregate the difference in a BPF map. Then the user space reads data
from these maps.
A new 'struct bpf_counter' is introduced to provide a common interface
that uses BPF programs/maps to count perf events.
Committer notes:
Removed all but bpf_counter.h includes from evsel.h, not needed at all.
Also, BPF map lookups for PERCPU_ARRAYs need to have, as the value
receive buffer passed to the kernel, libbpf_num_possible_cpus() entries,
not evsel__nr_cpus(evsel): the former uses
/sys/devices/system/cpu/possible while the latter uses
/sys/devices/system/cpu/online, which may be less than the 'possible'
number, making the bpf map lookup overwrite memory and cause
hard-to-debug memory corruption.
We need to continue using evsel__nr_cpus(evsel) when accessing the
perf_counts array though, so as not to overwrite another area of memory :-)
Signed-off-by: Song Liu <songliubraving@fb.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: https://lore.kernel.org/lkml/20210120163031.GU12699@kernel.org/
Acked-by: Namhyung Kim <namhyung@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: kernel-team@fb.com
Link: http://lore.kernel.org/lkml/20201229214214.3413833-4-songliubraving@fb.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-12-29 21:42:14 +00:00
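The sizing rule from the committer notes can be sketched like so (map_fd, key and the u64 value type are assumptions for illustration, not the actual bpf_counter code): a lookup in a BPF_MAP_TYPE_PERCPU_ARRAY must hand the kernel room for every possible CPU, not just the online ones:

#include <stdint.h>
#include <stdlib.h>
#include <bpf/bpf.h>
#include <bpf/libbpf.h>

static int read_percpu_counts(int map_fd, uint32_t key)
{
	int ncpus = libbpf_num_possible_cpus();
	uint64_t *vals;

	if (ncpus < 0)
		return ncpus;
	vals = calloc(ncpus, sizeof(*vals));	/* one slot per *possible* CPU */
	if (!vals)
		return -1;
	if (bpf_map_lookup_elem(map_fd, &key, vals)) {
		free(vals);
		return -1;
	}
	/* ... aggregate vals[0..ncpus-1] here ... */
	free(vals);
	return 0;
}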
|
|
|
int i, cpu, err;
|
2019-11-21 00:15:19 +00:00
|
|
|
bool second_pass = false;
|
2009-06-13 12:57:28 +00:00
|
|
|
|
2009-12-31 08:05:50 +00:00
|
|
|
if (forks) {
|
2020-11-30 12:26:54 +00:00
|
|
|
if (evlist__prepare_workload(evsel_list, &target, argv, is_pipe, workload_exec_failed_signal) < 0) {
|
2013-03-11 07:43:18 +00:00
|
|
|
perror("failed to prepare workload");
|
|
|
|
return -1;
|
2009-12-31 08:05:50 +00:00
|
|
|
}
|
2013-09-30 09:01:11 +00:00
|
|
|
child_pid = evsel_list->workload.pid;
|
2009-06-29 11:13:21 +00:00
|
|
|
}
|
|
|
|
|
perf tools: Enable grouping logic for parsed events
This patch adds functionality that allows creating event groups
based on the way they are specified on the command line, adding to
the '{}' group syntax introduced in an earlier patch.
The current '--group/-g' option behaviour remains intact. If you
specify it for the record/stat/top command, all the specified events
become members of a single group with the first event as the group
leader.
With the new '{}' group syntax you can create a group like:
# perf record -e '{cycles,faults}' ls
resulting in a single event group containing the 'cycles' and 'faults'
events, with the cycles event as group leader.
All groups are created with regard to threads and cpus. Thus
recording an event group within 2 threads on a server with
4 CPUs will create 8 separate groups.
Examples (first event in brackets is group leader):
# 1 group (cpu-clock,task-clock)
perf record --group -e cpu-clock,task-clock ls
perf record -e '{cpu-clock,task-clock}' ls
# 2 groups (cpu-clock,task-clock) (minor-faults,major-faults)
perf record -e '{cpu-clock,task-clock},{minor-faults,major-faults}' ls
# 1 group (cpu-clock,task-clock,minor-faults,major-faults)
perf record --group -e cpu-clock,task-clock -e minor-faults,major-faults ls
perf record -e '{cpu-clock,task-clock,minor-faults,major-faults}' ls
# 2 groups (cpu-clock,task-clock) (minor-faults,major-faults)
perf record -e '{cpu-clock,task-clock} -e '{minor-faults,major-faults}' \
-e instructions ls
# 1 group
# (cpu-clock,task-clock,minor-faults,major-faults,instructions)
perf record --group -e cpu-clock,task-clock \
-e minor-faults,major-faults -e instructions ls perf record -e
'{cpu-clock,task-clock,minor-faults,major-faults,instructions}' ls
It's possible to use a standard event modifier for a group; it spans
all events in the group and updates each event's modifier settings,
for example:
# perf record -r '{faults:k,cache-references}:p'
resulting in the ':kp' modifier being used for the 'faults' event and
the ':p' modifier for the 'cache-references' event.
Reviewed-by: Namhyung Kim <namhyung@kernel.org>
Signed-off-by: Jiri Olsa <jolsa@redhat.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ulrich Drepper <drepper@gmail.com>
Link: http://lkml.kernel.org/n/tip-ho42u0wcr8mn1otkalqi13qp@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-08-08 10:22:36 +00:00
|
|
|
if (group)
|
2020-11-30 12:22:07 +00:00
|
|
|
evlist__set_leader(evsel_list);
|
2012-08-08 10:22:36 +00:00
|
|
|
|
2019-11-21 00:15:19 +00:00
|
|
|
if (affinity__setup(&affinity) < 0)
|
|
|
|
return -1;
|
|
|
|
|
2021-04-25 21:43:31 +00:00
|
|
|
evlist__for_each_entry(evsel_list, counter) {
|
|
|
|
if (bpf_counter__load(counter, &target))
|
|
|
|
return -1;
|
|
|
|
if (!evsel__is_bpf(counter))
|
|
|
|
all_counters_use_bpf = false;
|
2020-12-29 21:42:14 +00:00
|
|
|
}
|
|
|
|
|
2019-11-21 00:15:19 +00:00
|
|
|
evlist__for_each_cpu (evsel_list, i, cpu) {
|
perf stat: Introduce 'bperf' to share hardware PMCs with BPF
The perf tool uses performance monitoring counters (PMCs) to monitor
system performance. The PMCs are limited hardware resources. For
example, Intel CPUs have 3x fixed PMCs and 4x programmable PMCs per cpu.
Modern data center systems use these PMCs in many different ways: system
level monitoring, (maybe nested) container level monitoring, per process
monitoring, profiling (in sample mode), etc. In some cases, there are
more active perf_events than available hardware PMCs. To allow all
perf_events to have a chance to run, it is necessary to do expensive
time multiplexing of events.
On the other hand, many monitoring tools count the common metrics
(cycles, instructions). It is a waste to have multiple tools create
multiple perf_events of "cycles" and occupy multiple PMCs.
bperf tries to reduce such waste by allowing multiple perf_events of
"cycles" or "instructions" (at different scopes) to share PMUs. Instead
of having each perf-stat session read its own perf_events, bperf uses
BPF programs to read the perf_events and aggregate readings into BPF maps.
Then, the perf-stat session(s) read the values from these BPF maps.
Please refer to the comment before the definition of bperf_ops for a
description of the bperf architecture.
bperf is off by default. To enable it, pass the --bpf-counters option to
perf-stat. bperf uses a BPF hashmap to share information about the BPF
programs and maps used by bperf. This map is pinned to bpffs. The
default path is /sys/fs/bpf/perf_attr_map. The user can change the
path with the --bpf-attr-map option.
Committer testing:
# dmesg|grep "Performance Events" -A5
[ 0.225277] Performance Events: Fam17h+ core perfctr, AMD PMU driver.
[ 0.225280] ... version: 0
[ 0.225280] ... bit width: 48
[ 0.225281] ... generic registers: 6
[ 0.225281] ... value mask: 0000ffffffffffff
[ 0.225281] ... max period: 00007fffffffffff
#
# for a in $(seq 6) ; do perf stat -a -e cycles,instructions sleep 100000 & done
[1] 2436231
[2] 2436232
[3] 2436233
[4] 2436234
[5] 2436235
[6] 2436236
# perf stat -a -e cycles,instructions sleep 0.1
Performance counter stats for 'system wide':
310,326,987 cycles (41.87%)
236,143,290 instructions # 0.76 insn per cycle (41.87%)
0.100800885 seconds time elapsed
#
We can see that the counters were enabled for this workload 41.87% of
the time.
Now with --bpf-counters:
# for a in $(seq 32) ; do perf stat --bpf-counters -a -e cycles,instructions sleep 100000 & done
[1] 2436514
[2] 2436515
[3] 2436516
[4] 2436517
[5] 2436518
[6] 2436519
[7] 2436520
[8] 2436521
[9] 2436522
[10] 2436523
[11] 2436524
[12] 2436525
[13] 2436526
[14] 2436527
[15] 2436528
[16] 2436529
[17] 2436530
[18] 2436531
[19] 2436532
[20] 2436533
[21] 2436534
[22] 2436535
[23] 2436536
[24] 2436537
[25] 2436538
[26] 2436539
[27] 2436540
[28] 2436541
[29] 2436542
[30] 2436543
[31] 2436544
[32] 2436545
#
# ls -la /sys/fs/bpf/perf_attr_map
-rw-------. 1 root root 0 Mar 23 14:53 /sys/fs/bpf/perf_attr_map
# bpftool map | grep bperf | wc -l
64
#
# bpftool map | tail
1265: percpu_array name accum_readings flags 0x0
key 4B value 24B max_entries 1 memlock 4096B
1266: hash name filter flags 0x0
key 4B value 4B max_entries 1 memlock 4096B
1267: array name bperf_fo.bss flags 0x400
key 4B value 8B max_entries 1 memlock 4096B
btf_id 996
pids perf(2436545)
1268: percpu_array name accum_readings flags 0x0
key 4B value 24B max_entries 1 memlock 4096B
1269: hash name filter flags 0x0
key 4B value 4B max_entries 1 memlock 4096B
1270: array name bperf_fo.bss flags 0x400
key 4B value 8B max_entries 1 memlock 4096B
btf_id 997
pids perf(2436541)
1285: array name pid_iter.rodata flags 0x480
key 4B value 4B max_entries 1 memlock 4096B
btf_id 1017 frozen
pids bpftool(2437504)
1286: array flags 0x0
key 4B value 32B max_entries 1 memlock 4096B
#
# bpftool map dump id 1268 | tail
value (CPU 21):
8f f3 bc ca 00 00 00 00 80 fd 2a d1 4d 00 00 00
80 fd 2a d1 4d 00 00 00
value (CPU 22):
7e d5 64 4d 00 00 00 00 a4 8a 2e ee 4d 00 00 00
a4 8a 2e ee 4d 00 00 00
value (CPU 23):
a7 78 3e 06 01 00 00 00 b2 34 94 f6 4d 00 00 00
b2 34 94 f6 4d 00 00 00
Found 1 element
# bpftool map dump id 1268 | tail
value (CPU 21):
c6 8b d9 ca 00 00 00 00 20 c6 fc 83 4e 00 00 00
20 c6 fc 83 4e 00 00 00
value (CPU 22):
9c b4 d2 4d 00 00 00 00 3e 0c df 89 4e 00 00 00
3e 0c df 89 4e 00 00 00
value (CPU 23):
18 43 66 06 01 00 00 00 5b 69 ed 83 4e 00 00 00
5b 69 ed 83 4e 00 00 00
Found 1 element
# bpftool map dump id 1268 | tail
value (CPU 21):
f2 6e db ca 00 00 00 00 92 67 4c ba 4e 00 00 00
92 67 4c ba 4e 00 00 00
value (CPU 22):
dc 8e e1 4d 00 00 00 00 d9 32 7a c5 4e 00 00 00
d9 32 7a c5 4e 00 00 00
value (CPU 23):
bd 2b 73 06 01 00 00 00 7c 73 87 bf 4e 00 00 00
7c 73 87 bf 4e 00 00 00
Found 1 element
#
# perf stat --bpf-counters -a -e cycles,instructions sleep 0.1
Performance counter stats for 'system wide':
119,410,122 cycles
152,105,479 instructions # 1.27 insn per cycle
0.101395093 seconds time elapsed
#
See? We had the counters enabled all the time.
Signed-off-by: Song Liu <songliubraving@fb.com>
Reviewed-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: kernel-team@fb.com
Link: http://lore.kernel.org/lkml/20210316211837.910506-2-songliubraving@fb.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2021-03-16 21:18:35 +00:00
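As a rough illustration of the rendezvous (error handling and the attr map's key/value layout are elided; open_attr_map is a hypothetical helper, not the bperf code), a second perf-stat session would locate the pinned map with libbpf like this:

#include <bpf/bpf.h>

static int open_attr_map(const char *path)
{
	if (!path)
		path = "/sys/fs/bpf/perf_attr_map";	/* the default pin point */
	return bpf_obj_get(path);	/* fd of the pinned BPF hashmap, or -1 */
}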
|
|
|
/*
|
|
|
|
* bperf calls evsel__open_per_cpu() in bperf__load(), so
|
|
|
|
* no need to call it again here.
|
|
|
|
*/
|
|
|
|
if (target.use_bpf)
|
|
|
|
break;
|
2019-11-21 00:15:19 +00:00
|
|
|
affinity__set(&affinity, cpu);
|
|
|
|
|
|
|
|
evlist__for_each_entry(evsel_list, counter) {
|
|
|
|
if (evsel__cpu_iter_skip(counter, cpu))
|
|
|
|
continue;
|
|
|
|
if (counter->reset_group || counter->errored)
|
|
|
|
continue;
|
2021-04-25 21:43:31 +00:00
|
|
|
if (evsel__is_bpf(counter))
|
|
|
|
continue;
|
2016-05-12 19:25:18 +00:00
|
|
|
try_again:
|
2019-11-21 00:15:19 +00:00
|
|
|
if (create_perf_stat_counter(counter, &stat_config, &target,
|
|
|
|
counter->cpu_iter - 1) < 0) {
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Weak group failed. We cannot just undo this here
|
|
|
|
* because earlier CPUs might be in group mode, and the kernel
|
|
|
|
* doesn't support mixing group and non-group reads. Defer
|
|
|
|
* it to later.
|
|
|
|
* Don't close here because we're in the wrong affinity.
|
|
|
|
*/
|
|
|
|
if ((errno == EINVAL || errno == EBADF) &&
|
2021-07-06 15:17:00 +00:00
|
|
|
evsel__leader(counter) != counter &&
|
2019-11-21 00:15:19 +00:00
|
|
|
counter->weak_group) {
|
2020-11-30 17:58:32 +00:00
|
|
|
evlist__reset_weak_group(evsel_list, counter, false);
|
2019-11-21 00:15:19 +00:00
|
|
|
assert(counter->reset_group);
|
|
|
|
second_pass = true;
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
|
|
|
switch (stat_handle_error(counter)) {
|
|
|
|
case COUNTER_FATAL:
|
|
|
|
return -1;
|
|
|
|
case COUNTER_RETRY:
|
|
|
|
goto try_again;
|
|
|
|
case COUNTER_SKIP:
|
|
|
|
continue;
|
|
|
|
default:
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
perf tools: Support weak groups in 'perf stat'
Setting up groups can be complicated due to the complicated scheduling
restrictions of different PMUs.
User tools usually don't understand all these restrictions.
Still in many cases it is useful to set up groups and they work most of
the time. However if the group is set up wrong some members will not
report any value because they never get scheduled.
Add a concept of a 'weak group': try to set up a group, but if it's not
schedulable fall back to not using a group. That gives us the best of
both worlds: groups if they work, but still a usable fallback if they
don't.
In theory it would be possible to have more complex fallback strategies
(e.g. try to split the group in half), but the simple fallback of not
using a group seems to work for now.
So far the weak group is only implemented for perf stat, not for record.
Here's an unschedulable group (on IvyBridge with SMT on)
% perf stat -e '{branches,branch-misses,l1d.replacement,l2_lines_in.all,l2_rqsts.all_code_rd}' -a sleep 1
73,806,067 branches
4,848,144 branch-misses # 6.57% of all branches
14,754,458 l1d.replacement
24,905,558 l2_lines_in.all
<not supported> l2_rqsts.all_code_rd <------- will never report anything
With the weak group:
% perf stat -e '{branches,branch-misses,l1d.replacement,l2_lines_in.all,l2_rqsts.all_code_rd}:W' -a sleep 1
125,366,055 branches (80.02%)
9,208,402 branch-misses # 7.35% of all branches (80.01%)
24,560,249 l1d.replacement (80.00%)
43,174,971 l2_lines_in.all (80.05%)
31,891,457 l2_rqsts.all_code_rd (79.92%)
The extra event is scheduled, with some extra multiplexing
v2: Move fallback code to separate function.
Add comment on for_each_group_member
Adjust to new perf_evsel__close interface
v3: Fix debug print out.
Committer testing:
Before:
# perf stat -e '{branches,branch-misses,l1d.replacement,l2_lines_in.all,l2_rqsts.all_code_rd}' -a sleep 1
Performance counter stats for 'system wide':
<not counted> branches
<not counted> branch-misses
<not counted> l1d.replacement
<not counted> l2_lines_in.all
<not supported> l2_rqsts.all_code_rd
1.002147212 seconds time elapsed
# perf stat -e '{branches,l1d.replacement,l2_lines_in.all,l2_rqsts.all_code_rd}' -a sleep 1
Performance counter stats for 'system wide':
83,207,892 branches
11,065,444 l1d.replacement
28,484,024 l2_lines_in.all
12,186,179 l2_rqsts.all_code_rd
1.001739493 seconds time elapsed
After:
# perf stat -e '{branches,branch-misses,l1d.replacement,l2_lines_in.all,l2_rqsts.all_code_rd}':W -a sleep 1
Performance counter stats for 'system wide':
543,323,909 branches (80.01%)
27,100,512 branch-misses # 4.99% of all branches (80.02%)
50,402,905 l1d.replacement (80.03%)
67,385,892 l2_lines_in.all (80.01%)
21,352,885 l2_rqsts.all_code_rd (79.94%)
1.001086658 seconds time elapsed
#
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: http://lkml.kernel.org/r/20170831194036.30146-2-andi@firstfloor.org
[ Add a "'perf stat' only, for now" comment in the man page, suggested by Jiri ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-08-31 19:40:26 +00:00
|
|
|
}
|
2019-11-21 00:15:19 +00:00
|
|
|
counter->supported = true;
|
|
|
|
}
|
|
|
|
}
|
2017-08-31 19:40:26 +00:00
|
|
|
|
2019-11-21 00:15:19 +00:00
|
|
|
if (second_pass) {
|
|
|
|
/*
|
|
|
|
* Now redo all the weak groups after closing them,
|
|
|
|
* and also close errored counters.
|
|
|
|
*/
|
|
|
|
|
|
|
|
evlist__for_each_cpu(evsel_list, i, cpu) {
|
|
|
|
affinity__set(&affinity, cpu);
|
|
|
|
/* First close errored or weak retry */
|
|
|
|
evlist__for_each_entry(evsel_list, counter) {
|
|
|
|
if (!counter->reset_group && !counter->errored)
|
|
|
|
continue;
|
|
|
|
if (evsel__cpu_iter_skip_no_inc(counter, cpu))
|
|
|
|
continue;
|
|
|
|
perf_evsel__close_cpu(&counter->core, counter->cpu_iter);
|
perf stat: Ignore error thread when enabling system-wide --per-thread
If we execute 'perf stat --per-thread' with a non-root account (even
with kernel.perf_event_paranoid = -1 set), it reports the error:
jinyao@skl:~$ perf stat --per-thread
Error:
You may not have permission to collect system-wide stats.
Consider tweaking /proc/sys/kernel/perf_event_paranoid,
which controls use of the performance events system by
unprivileged users (without CAP_SYS_ADMIN).
The current value is 2:
-1: Allow use of (almost) all events by all users
Ignore mlock limit after perf_event_mlock_kb without CAP_IPC_LOCK
>= 0: Disallow ftrace function tracepoint by users without CAP_SYS_ADMIN
Disallow raw tracepoint access by users without CAP_SYS_ADMIN
>= 1: Disallow CPU event access by users without CAP_SYS_ADMIN
>= 2: Disallow kernel profiling by users without CAP_SYS_ADMIN
To make this setting permanent, edit /etc/sysctl.conf too, e.g.:
kernel.perf_event_paranoid = -1
Perhaps the ptrace rule doesn't allow tracing some processes. But in any
case the global --per-thread mode had better ignore such errors and
continue working on the other threads.
This patch records the index of the error thread in perf_evsel__open()
and removes this thread before retrying.
For example (run with non-root, kernel.perf_event_paranoid isn't set):
jinyao@skl:~$ perf stat --per-thread
^C
Performance counter stats for 'system wide':
vmstat-3458 6.171984 cpu-clock:u (msec) # 0.000 CPUs utilized
perf-3670 0.515599 cpu-clock:u (msec) # 0.000 CPUs utilized
vmstat-3458 1,163,643 cycles:u # 0.189 GHz
perf-3670 40,881 cycles:u # 0.079 GHz
vmstat-3458 1,410,238 instructions:u # 1.21 insn per cycle
perf-3670 3,536 instructions:u # 0.09 insn per cycle
vmstat-3458 288,937 branches:u # 46.814 M/sec
perf-3670 936 branches:u # 1.815 M/sec
vmstat-3458 15,195 branch-misses:u # 5.26% of all branches
perf-3670 76 branch-misses:u # 8.12% of all branches
12.651675247 seconds time elapsed
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Kan Liang <kan.liang@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1516117388-10120-1-git-send-email-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-01-16 15:43:08 +00:00
|
|
|
}
|
2019-11-21 00:15:19 +00:00
|
|
|
/* Now reopen weak */
|
|
|
|
evlist__for_each_entry(evsel_list, counter) {
|
|
|
|
if (!counter->reset_group && !counter->errored)
|
|
|
|
continue;
|
|
|
|
if (evsel__cpu_iter_skip(counter, cpu))
|
|
|
|
continue;
|
|
|
|
if (!counter->reset_group)
|
|
|
|
continue;
|
|
|
|
try_again_reset:
|
2020-04-29 19:07:09 +00:00
|
|
|
pr_debug2("reopening weak %s\n", evsel__name(counter));
|
2019-11-21 00:15:19 +00:00
|
|
|
if (create_perf_stat_counter(counter, &stat_config, &target,
|
|
|
|
counter->cpu_iter - 1) < 0) {
|
|
|
|
|
|
|
|
switch (stat_handle_error(counter)) {
|
|
|
|
case COUNTER_FATAL:
|
|
|
|
return -1;
|
|
|
|
case COUNTER_RETRY:
|
|
|
|
goto try_again_reset;
|
|
|
|
case COUNTER_SKIP:
|
|
|
|
continue;
|
|
|
|
default:
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
counter->supported = true;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
affinity__cleanup(&affinity);
|
|
|
|
|
|
|
|
evlist__for_each_entry(evsel_list, counter) {
|
|
|
|
if (!counter->supported) {
|
|
|
|
perf_evsel__free_fd(&counter->core);
|
|
|
|
continue;
|
2011-01-03 19:48:12 +00:00
|
|
|
}
|
2013-11-12 16:58:49 +00:00
|
|
|
|
|
|
|
l = strlen(counter->unit);
|
2018-08-30 06:32:32 +00:00
|
|
|
if (l > stat_config.unit_width)
|
|
|
|
stat_config.unit_width = l;
|
2015-11-05 14:40:49 +00:00
|
|
|
|
2020-05-04 16:46:34 +00:00
|
|
|
if (evsel__should_store_id(counter) &&
|
2020-05-04 16:45:19 +00:00
|
|
|
evsel__store_ids(counter, evsel_list))
|
2015-11-05 14:40:49 +00:00
|
|
|
return -1;
|
2010-03-22 16:10:28 +00:00
|
|
|
}
|
2009-06-13 12:57:28 +00:00
|
|
|
|
2020-11-30 12:38:02 +00:00
|
|
|
if (evlist__apply_filters(evsel_list, &counter)) {
|
2017-06-27 14:22:31 +00:00
|
|
|
pr_err("failed to set filter \"%s\" on event %s with %d (%s)\n",
|
2020-04-29 19:07:09 +00:00
|
|
|
counter->filter, evsel__name(counter), errno,
|
tools: Introduce str_error_r()
The tools so far have been using the strerror_r() GNU variant, that
returns a string, be it the buffer passed or something else.
But that, besides being tricky in cases where we expect that the
function using strerror_r() returns the error formatted in a provided
buffer (we have to check if it returned something else and copy that
instead), breaks the build on systems not using glibc, like Alpine
Linux, where musl libc is used.
So, introduce yet another wrapper, str_error_r(), that has the GNU
interface, but uses the portable XSI variant of strerror_r(), so that
users rest assured that the provided buffer is used and it is what is
returned.
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-d4t42fnf48ytlk8rjxs822tf@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-07-06 14:56:20 +00:00
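The idea can be sketched as follows (close to, but not necessarily verbatim, the tools/lib implementation; it assumes the int-returning XSI prototype of strerror_r() is in effect):

#include <stdio.h>
#include <string.h>

char *str_error_r(int errnum, char *buf, size_t buflen)
{
	int err = strerror_r(errnum, buf, buflen);	/* XSI: always fills buf */

	if (err)
		snprintf(buf, buflen, "INTERNAL ERROR: strerror_r(%d)=%d", errnum, err);
	return buf;	/* GNU-style convenience: return the caller's buffer */
}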
|
|
|
str_error_r(errno, msg, sizeof(msg)));
|
2011-03-14 15:40:30 +00:00
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
|
perf stat record: Add record command
Add 'perf stat record' command support. It creates a simple (header
only) perf.data file ATM.
The record command can be specified anywhere among the stat options. All
stat command options are valid for the stat record command, with the
'-o' option being the exception: if specified for the record command it
denotes the perf data file name.
Committer note:
Set sample_type to PERF_SAMPLE_IDENTIFIER, which should be harmless
while avoiding that older tools show confusing messages, for instance,
with sample_type = 0, we get:
$ perf stat record usleep 1
Performance counter stats for 'usleep 1':
0.630237 task-clock (msec) # 0.528 CPUs utilized
1 context-switches # 0.002 M/sec
0 cpu-migrations # 0.000 K/sec
52 page-faults # 0.083 M/sec
978,312 cycles # 1.552 GHz
671,931 stalled-cycles-frontend # 68.68% frontend cycles idle
<not supported> stalled-cycles-backend
646,379 instructions # 0.66 insns per cycle
# 1.04 stalled cycles per insn
131,046 branches # 207.931 M/sec
7,073 branch-misses # 5.40% of all branches
0.001193240 seconds time elapsed
$ oldperf evlist
WARNING: The perf.data file's data size field is 0 which is unexpected.
Was the 'perf record' command properly terminated?
non matching sample_type
$
While with sample_type set to PERF_SAMPLE_IDENTIFIER, after we re-run 'perf
stat record usleep' we get:
$ oldperf evlist
WARNING: The perf.data file's data size field is 0 which is unexpected.
Was the 'perf record' command properly terminated?
task-clock
context-switches
cpu-migrations
page-faults
cycles
stalled-cycles-frontend
stalled-cycles-backend
instructions
branches
branch-misses
$
Which at least shows the names of the events in the perf.data file.
Additionally, such files, when passed to 'perf report' will produce:
$ oldperf report --stdio
WARNING: The perf.data file's data size field is 0 which is unexpected.
Was the 'perf record' command properly terminated?
Warning:
Kernel address maps (/proc/{kallsyms,modules}) were restricted.
Check /proc/sys/kernel/kptr_restrict before running 'perf record'.
As no suitable kallsyms nor vmlinux was found, kernel samples
can't be resolved.
Samples in kernel modules can't be resolved as well.
Error:
The perf.data file has no samples!
# To display the perf.data header info, please use --header/--header-only options.
#
$
Which is confusing and can be solved by just adding the kernel mmap record,
which will also remove that warning about the data size field being equal to
zero, after generating the mmap record:
$ perf stat record usleep 1
Performance counter stats for 'usleep 1':
0.600796 task-clock (msec) # 0.478 CPUs utilized
1 context-switches # 0.002 M/sec
0 cpu-migrations # 0.000 K/sec
54 page-faults # 0.090 M/sec
886,844 cycles # 1.476 GHz
582,169 stalled-cycles-frontend # 65.65% frontend cycles idle
<not supported> stalled-cycles-backend
638,344 instructions # 0.72 insns per cycle
# 0.91 stalled cycles per insn
130,204 branches # 216.719 M/sec
7,500 branch-misses # 5.76% of all branches
0.001255897 seconds time elapsed
$ oldperf evlist
task-clock
context-switches
cpu-migrations
page-faults
cycles
stalled-cycles-frontend
stalled-cycles-backend
instructions
branches
branch-misses
$ oldperf report --stdio
Error:
The perf.data file has no samples!
# To display the perf.data header info, please use --header/--header-only options.
#
[acme@zoo linux]$
No warnings, sensible output about which events are in the perf.data
file, and also a "file has no samples" message, which indeed it doesn't
have.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Tested-by: Kan Liang <kan.liang@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1446734469-11352-3-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-11-05 14:40:46 +00:00
|
|
|
if (STAT_RECORD) {
|
2020-12-29 21:42:14 +00:00
|
|
|
int fd = perf_data__fd(&perf_stat.data);
|
2015-11-05 14:40:46 +00:00
|
|
|
|
2015-11-05 14:40:50 +00:00
|
|
|
if (is_pipe) {
|
2017-01-23 21:07:59 +00:00
|
|
|
err = perf_header__write_pipe(perf_data__fd(&perf_stat.data));
|
2015-11-05 14:40:50 +00:00
|
|
|
} else {
|
|
|
|
err = perf_session__write_header(perf_stat.session, evsel_list,
|
|
|
|
fd, false);
|
|
|
|
}
|
|
|
|
|
2015-11-05 14:40:46 +00:00
|
|
|
if (err < 0)
|
|
|
|
return err;
|
2015-11-05 14:40:48 +00:00
|
|
|
|
2019-09-10 16:17:33 +00:00
|
|
|
err = perf_event__synthesize_stat_events(&stat_config, NULL, evsel_list,
|
|
|
|
process_synthesized_event, is_pipe);
|
2015-11-05 14:40:48 +00:00
|
|
|
if (err < 0)
|
|
|
|
return err;
|
perf stat record: Add record command
Add 'perf stat record' command support. It creates simple (header only)
perf.data file ATM.
The record command could be specified anywhere among stat options. All
stat command options are valid for stat record command with '-o' option
exception. If specified for record command it denotes the perf data file
name.
Committer note:
Set sample_type to PERF_SAMPLE_IDENTIFIER, which should be harmless
while avoiding that older tools show confusing messages, for instance,
with sample_type = 0, we get:
$ perf stat record usleep 1
Performance counter stats for 'usleep 1':
0.630237 task-clock (msec) # 0.528 CPUs utilized
1 context-switches # 0.002 M/sec
0 cpu-migrations # 0.000 K/sec
52 page-faults # 0.083 M/sec
978,312 cycles # 1.552 GHz
671,931 stalled-cycles-frontend # 68.68% frontend cycles idle
<not supported> stalled-cycles-backend
646,379 instructions # 0.66 insns per cycle
# 1.04 stalled cycles per insn
131,046 branches # 207.931 M/sec
7,073 branch-misses # 5.40% of all branches
0.001193240 seconds time elapsed
$ oldperf evlist
WARNING: The perf.data file's data size field is 0 which is unexpected.
Was the 'perf record' command properly terminated?
non matching sample_type
$
While with sample_type set to PERF_SAMPLE_IDENTIFIER, after we re-run 'perf
stat record usleep' we get:
$ oldperf evlist
WARNING: The perf.data file's data size field is 0 which is unexpected.
Was the 'perf record' command properly terminated?
task-clock
context-switches
cpu-migrations
page-faults
cycles
stalled-cycles-frontend
stalled-cycles-backend
instructions
branches
branch-misses
$
Which at least shows the names of the events in the perf.data file.
Additionally, such files, when passed to 'perf report' will produce:
$ oldperf report --stdio
WARNING: The perf.data file's data size field is 0 which is unexpected.
Was the 'perf record' command properly terminated?
Warning:
Kernel address maps (/proc/{kallsyms,modules}) were restricted.
Check /proc/sys/kernel/kptr_restrict before running 'perf record'.
As no suitable kallsyms nor vmlinux was found, kernel samples
can't be resolved.
Samples in kernel modules can't be resolved as well.
Error:
The perf.data file has no samples!
# To display the perf.data header info, please use --header/--header-only options.
#
$
This is confusing and can be solved by just adding the kernel mmap record,
which will also remove the warning about the data size field being equal to
zero. After generating the mmap record:
$ perf stat record usleep 1
Performance counter stats for 'usleep 1':
0.600796 task-clock (msec) # 0.478 CPUs utilized
1 context-switches # 0.002 M/sec
0 cpu-migrations # 0.000 K/sec
54 page-faults # 0.090 M/sec
886,844 cycles # 1.476 GHz
582,169 stalled-cycles-frontend # 65.65% frontend cycles idle
<not supported> stalled-cycles-backend
638,344 instructions # 0.72 insns per cycle
# 0.91 stalled cycles per insn
130,204 branches # 216.719 M/sec
7,500 branch-misses # 5.76% of all branches
0.001255897 seconds time elapsed
$ oldperf evlist
task-clock
context-switches
cpu-migrations
page-faults
cycles
stalled-cycles-frontend
stalled-cycles-backend
instructions
branches
branch-misses
$ oldperf report --stdio
Error:
The perf.data file has no samples!
# To display the perf.data header info, please use --header/--header-only options.
#
[acme@zoo linux]$
No warnings, sensible output about what events are in the perf.data file, and
a "file has no samples" message, which indeed it doesn't have.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Tested-by: Kan Liang <kan.liang@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1446734469-11352-3-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
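The committer note above boils down to a single attribute assignment; a
minimal sketch (only the sample_type value comes from the note, the function
name is hypothetical):
    #include <linux/perf_event.h>

    /* PERF_SAMPLE_IDENTIFIER puts the event ID at a fixed position in
     * every sample, so older readers that don't know the full attr
     * layout can still match records to events instead of bailing out
     * with "non matching sample_type". */
    static void stat_record_init_attr(struct perf_event_attr *attr)
    {
            attr->sample_type = PERF_SAMPLE_IDENTIFIER;
    }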
2015-11-05 14:40:46 +00:00
|
|
|
}
|
|
|
|
|
2009-06-13 12:57:28 +00:00
|
|
|
/*
|
|
|
|
* Enable counters and exec the command:
|
|
|
|
*/
|
2009-12-31 08:05:50 +00:00
|
|
|
if (forks) {
|
2020-11-30 12:26:54 +00:00
|
|
|
evlist__start_workload(evsel_list);
|
perf stat: Enable counting events for BPF programs
Introduce 'perf stat -b' option, which counts events for BPF programs, like:
[root@localhost ~]# ~/perf stat -e ref-cycles,cycles -b 254 -I 1000
1.487903822 115,200 ref-cycles
1.487903822 86,012 cycles
2.489147029 80,560 ref-cycles
2.489147029 73,784 cycles
3.490341825 60,720 ref-cycles
3.490341825 37,797 cycles
4.491540887 37,120 ref-cycles
4.491540887 31,963 cycles
The example above counts 'cycles' and 'ref-cycles' of the BPF program with
id 254. This is similar to the bpftool-prog-profile command, but more
flexible.
'perf stat -b' creates per-cpu perf_event and loads fentry/fexit BPF
programs (monitor-progs) to the target BPF program (target-prog). The
monitor-progs read perf_event before and after the target-prog, and
aggregate the difference in a BPF map. Then the user space reads data
from these maps.
A new 'struct bpf_counter' is introduced to provide a common interface
that uses BPF programs/maps to count perf events.
Committer notes:
Removed all but bpf_counter.h includes from evsel.h, not needed at all.
Also, BPF map lookups for PERCPU_ARRAYs need the value receive buffer
passed to the kernel to have libbpf_num_possible_cpus() entries, not
evsel__nr_cpus(evsel), as the former uses
/sys/devices/system/cpu/possible while the latter uses
/sys/devices/system/cpu/online, which may be less than the 'possible'
number, making the bpf map lookup overwrite memory and cause hard to
debug memory corruption.
We need to continue using evsel__nr_cpus(evsel) when accessing the
perf_counts array though, so as not to overwrite another area of memory :-)
Signed-off-by: Song Liu <songliubraving@fb.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: https://lore.kernel.org/lkml/20210120163031.GU12699@kernel.org/
Acked-by: Namhyung Kim <namhyung@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: kernel-team@fb.com
Link: http://lore.kernel.org/lkml/20201229214214.3413833-4-songliubraving@fb.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
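To make the monitor-prog scheme concrete, here is a plain-C analogue; every
name below is a hypothetical stand-in (read_pmc() for
bpf_perf_event_read_value(), the arrays for the BPF maps), while the real
code is BPF attached via fentry/fexit:
    #include <stdint.h>

    #define MAX_CPUS 4096

    extern uint64_t read_pmc(int cpu);  /* stand-in for bpf_perf_event_read_value() */
    static uint64_t start[MAX_CPUS];    /* stand-in for the entry-snapshot map */
    static uint64_t accum[MAX_CPUS];    /* stand-in for the map user space reads */

    /* ~ fentry monitor-prog: snapshot the counter before the target runs */
    void target_prog_enter(int cpu)
    {
            start[cpu] = read_pmc(cpu);
    }

    /* ~ fexit monitor-prog: accumulate the delta once it returns */
    void target_prog_exit(int cpu)
    {
            accum[cpu] += read_pmc(cpu) - start[cpu];
    }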
2020-12-29 21:42:14 +00:00
|
|
|
err = enable_counters();
|
|
|
|
if (err)
|
|
|
|
return -1;
|
2013-03-11 07:43:18 +00:00
|
|
|
|
2021-03-16 21:18:36 +00:00
|
|
|
t0 = rdclock();
|
|
|
|
clock_gettime(CLOCK_MONOTONIC, &ref_time);
|
|
|
|
|
2020-07-17 07:05:41 +00:00
|
|
|
if (interval || timeout || evlist__ctlfd_initialized(evsel_list))
|
2020-07-17 07:05:06 +00:00
|
|
|
status = dispatch_events(forks, timeout, interval, ×);
|
2020-04-15 15:31:26 +00:00
|
|
|
if (child_pid != -1) {
|
|
|
|
if (timeout)
|
|
|
|
kill(child_pid, SIGTERM);
|
perf stat: Fix endless wait for child process
We hit a 'perf stat' issue when using the following script:
#!/bin/bash
sleep 1000 &
exec perf stat -a -e cycles -I1000 -- sleep 5
Since "perf stat" is launched by exec, the "sleep 1000" would be the
child process of "perf stat". The wait4() call will not return because
it's waiting for the child process "sleep 1000" to end. So 'perf stat'
doesn't return even after 5s passes.
This patch lets 'perf stat' return when the specified child process ends
(in this case, the specified child process is "sleep 5").
Committer testing:
# cat test.sh
#!/bin/bash
sleep 10 &
exec perf stat -a -e cycles -I1000 -- sleep 5
#
Before:
# time ./test.sh
# time counts unit events
1.001113090 108,453,351 cycles
2.002062196 142,075,435 cycles
3.002896194 164,801,068 cycles
4.003731666 107,062,140 cycles
5.002068867 112,241,832 cycles
real 0m10.066s
user 0m0.016s
sys 0m0.101s
#
After:
# time ./test.sh
# time counts unit events
1.001016096 91,412,027 cycles
2.002014963 124,063,708 cycles
3.002883964 125,993,929 cycles
4.003706470 120,465,734 cycles
5.002006778 163,560,355 cycles
real 0m5.123s
user 0m0.014s
sys 0m0.105s
#
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Reviewed-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1546501245-4512-1-git-send-email-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
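The essence of the fix, as a hedged sketch (fork_workload() is a hypothetical
helper standing in for the tool's workload setup): wait on the pid 'perf stat'
forked itself, not on whatever child it happened to inherit across exec.
    #include <sys/types.h>
    #include <sys/wait.h>

    extern pid_t fork_workload(void);   /* hypothetical: forks 'sleep 5' */

    static int run(void)
    {
            int status;
            pid_t pid = fork_workload();

            /* Waiting on the specific pid returns as soon as the measured
             * workload exits; waiting for *any* child would also block on
             * the 'sleep 1000' inherited via exec. */
            waitpid(pid, &status, 0);
            return WEXITSTATUS(status);
    }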
2019-01-03 07:40:45 +00:00
|
|
|
wait4(child_pid, &status, 0, &stat_config.ru_data);
|
2020-04-15 15:31:26 +00:00
|
|
|
}
|
2013-12-28 18:45:08 +00:00
|
|
|
|
2014-01-02 18:11:25 +00:00
|
|
|
if (workload_exec_errno) {
|
tools: Introduce str_error_r()
The tools so far have been using the strerror_r() GNU variant, which
returns a string, be it the buffer passed or something else.
But that, besides being tricky in cases where we expect that the
function using strerror_r() returns the error formatted in a provided
buffer (we have to check if it returned something else and copy that
instead), breaks the build on systems not using glibc, like Alpine
Linux, where musl libc is used.
So, introduce yet another wrapper, str_error_r(), that has the GNU
interface, but uses the portable XSI variant of strerror_r(), so that
users can rest assured that the provided buffer is used and is what is
returned.
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-d4t42fnf48ytlk8rjxs822tf@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
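A sketch of such a wrapper, assuming the XSI strerror_r() prototype is in
effect (i.e. the translation unit is not built with the GNU variant selected):
    #include <stdio.h>
    #include <string.h>

    /* GNU-style interface (always returns a string) on top of the
     * portable XSI strerror_r(): the returned pointer is always 'buf'. */
    static char *str_error_r(int errnum, char *buf, size_t buflen)
    {
            int err = strerror_r(errnum, buf, buflen);  /* XSI: returns int */

            if (err)
                    snprintf(buf, buflen,
                             "INTERNAL ERROR: strerror_r(%d) => %d", errnum, err);
            return buf;
    }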
2016-07-06 14:56:20 +00:00
|
|
|
const char *emsg = str_error_r(workload_exec_errno, msg, sizeof(msg));
|
2014-01-02 18:11:25 +00:00
|
|
|
pr_err("Workload failed: %s\n", emsg);
|
2013-12-28 18:45:08 +00:00
|
|
|
return -1;
|
2014-01-02 18:11:25 +00:00
|
|
|
}
|
2013-12-28 18:45:08 +00:00
|
|
|
|
2011-09-15 21:31:40 +00:00
|
|
|
if (WIFSIGNALED(status))
|
|
|
|
psignal(WTERMSIG(status), argv[0]);
|
2009-12-31 08:05:50 +00:00
|
|
|
} else {
|
perf stat: Enable counting events for BPF programs
Introduce 'perf stat -b' option, which counts events for BPF programs, like:
[root@localhost ~]# ~/perf stat -e ref-cycles,cycles -b 254 -I 1000
1.487903822 115,200 ref-cycles
1.487903822 86,012 cycles
2.489147029 80,560 ref-cycles
2.489147029 73,784 cycles
3.490341825 60,720 ref-cycles
3.490341825 37,797 cycles
4.491540887 37,120 ref-cycles
4.491540887 31,963 cycles
The example above counts 'cycles' and 'ref-cycles' of the BPF program with
id 254. This is similar to the bpftool-prog-profile command, but more
flexible.
'perf stat -b' creates per-cpu perf_event and loads fentry/fexit BPF
programs (monitor-progs) to the target BPF program (target-prog). The
monitor-progs read perf_event before and after the target-prog, and
aggregate the difference in a BPF map. Then the user space reads data
from these maps.
A new 'struct bpf_counter' is introduced to provide a common interface
that uses BPF programs/maps to count perf events.
Committer notes:
Removed all but bpf_counter.h includes from evsel.h, not needed at all.
Also, BPF map lookups for PERCPU_ARRAYs need the value receive buffer
passed to the kernel to have libbpf_num_possible_cpus() entries, not
evsel__nr_cpus(evsel), as the former uses
/sys/devices/system/cpu/possible while the latter uses
/sys/devices/system/cpu/online, which may be less than the 'possible'
number, making the bpf map lookup overwrite memory and cause hard to
debug memory corruption.
We need to continue using evsel__nr_cpus(evsel) when accessing the
perf_counts array though, so as not to overwrite another area of memory :-)
Signed-off-by: Song Liu <songliubraving@fb.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: https://lore.kernel.org/lkml/20210120163031.GU12699@kernel.org/
Acked-by: Namhyung Kim <namhyung@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: kernel-team@fb.com
Link: http://lore.kernel.org/lkml/20201229214214.3413833-4-songliubraving@fb.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-12-29 21:42:14 +00:00
|
|
|
err = enable_counters();
|
|
|
|
if (err)
|
|
|
|
return -1;
|
2021-03-16 21:18:36 +00:00
|
|
|
|
|
|
|
t0 = rdclock();
|
|
|
|
clock_gettime(CLOCK_MONOTONIC, &ref_time);
|
|
|
|
|
2020-07-17 07:05:06 +00:00
|
|
|
status = dispatch_events(forks, timeout, interval, ×);
|
2009-12-31 08:05:50 +00:00
|
|
|
}
|
2009-06-13 12:57:28 +00:00
|
|
|
|
perf stat: Avoid skew when reading events
When we don't have a tracee (i.e. we're attaching to a task or CPU),
counters can still be running after our workload finishes, and can still
be running as we read their values. As we read events one-by-one, there
can be arbitrary skew between values of events, even within a group.
This means that ratios within an event group are not reliable.
This skew can be seen if measuring a group of identical events, e.g:
# perf stat -a -C0 -e '{cycles,cycles}' sleep 1
To avoid this, we must stop groups from counting before we read the
values of any constituent events. This patch adds and makes use of a new
disable_counters() helper, which disables group leaders (and thus each
group as a whole). This mirrors the use of enable_counters() for
starting event groups in the absence of a tracee.
Closing a group leader splits the group, and without a disabled group
leader the newly split events will begin counting. Thus to ensure counts
are reliable we must defer closing group leaders until all counts have
been read. To do so this patch removes the event closing logic from the
read_counters() helper and explicitly closes the events using
perf_evlist__close(), which also aids legibility.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1470747869-3567-1-git-send-email-mark.rutland@arm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
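At the raw perf_event_open() level, the idea reduces to a short sketch (the
fds are assumed to come from a prior event-group setup, each counter read
individually with read_format 0):
    #include <stdint.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/perf_event.h>

    /* Disabling only the group leader stops the whole group from
     * counting, so the values read afterwards are mutually consistent.
     * None of the fds may be closed before all reads are done, since
     * closing the leader splits the group. */
    static void read_group_stopped(int leader_fd, const int *member_fds,
                                   int n, uint64_t *vals)
    {
            ioctl(leader_fd, PERF_EVENT_IOC_DISABLE, 0);
            for (int i = 0; i < n; i++)
                    read(member_fds[i], &vals[i], sizeof(vals[i]));
    }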
2016-08-09 13:04:29 +00:00
|
|
|
disable_counters();
|
|
|
|
|
2009-06-13 12:57:28 +00:00
|
|
|
t1 = rdclock();
|
|
|
|
|
2018-08-30 06:32:50 +00:00
|
|
|
if (stat_config.walltime_run_table)
|
|
|
|
stat_config.walltime_run[run_idx] = t1 - t0;
|
2018-04-23 09:08:21 +00:00
|
|
|
|
perf stat: Turn off summary for interval mode by default
There's a risk that outputting interval mode summaries by default breaks
CSV consumers. It already broke pmu-tools/toplev.
So now we turn off the summary by default, but add a new option
'--summary' to enable it. This is active even when not using
CSV mode.
Before:
root@kbl-ppc:~# perf stat -I1000 --interval-count 2
# time counts unit events
1.000265904 8,005.73 msec cpu-clock # 8.006 CPUs utilized
1.000265904 601 context-switches # 0.075 K/sec
1.000265904 10 cpu-migrations # 0.001 K/sec
1.000265904 0 page-faults # 0.000 K/sec
1.000265904 66,746,521 cycles # 0.008 GHz
1.000265904 71,874,398 instructions # 1.08 insn per cycle
1.000265904 13,356,781 branches # 1.668 M/sec
1.000265904 298,756 branch-misses # 2.24% of all branches
2.001857667 8,012.52 msec cpu-clock # 8.013 CPUs utilized
2.001857667 164 context-switches # 0.020 K/sec
2.001857667 10 cpu-migrations # 0.001 K/sec
2.001857667 2 page-faults # 0.000 K/sec
2.001857667 5,822,188 cycles # 0.001 GHz
2.001857667 2,186,170 instructions # 0.38 insn per cycle
2.001857667 442,378 branches # 0.055 M/sec
2.001857667 44,750 branch-misses # 10.12% of all branches
Performance counter stats for 'system wide':
16,018.25 msec cpu-clock # 7.993 CPUs utilized
765 context-switches # 0.048 K/sec
20 cpu-migrations # 0.001 K/sec
2 page-faults # 0.000 K/sec
72,568,709 cycles # 0.005 GHz
74,060,568 instructions # 1.02 insn per cycle
13,799,159 branches # 0.861 M/sec
343,506 branch-misses # 2.49% of all branches
2.004118489 seconds time elapsed
After:
root@kbl-ppc:~# perf stat -I1000 --interval-count 2
# time counts unit events
1.001336393 8,013.28 msec cpu-clock # 8.013 CPUs utilized
1.001336393 82 context-switches # 0.010 K/sec
1.001336393 8 cpu-migrations # 0.001 K/sec
1.001336393 0 page-faults # 0.000 K/sec
1.001336393 4,199,121 cycles # 0.001 GHz
1.001336393 1,373,991 instructions # 0.33 insn per cycle
1.001336393 270,681 branches # 0.034 M/sec
1.001336393 31,659 branch-misses # 11.70% of all branches
2.003905006 8,020.52 msec cpu-clock # 8.021 CPUs utilized
2.003905006 184 context-switches # 0.023 K/sec
2.003905006 8 cpu-migrations # 0.001 K/sec
2.003905006 2 page-faults # 0.000 K/sec
2.003905006 5,446,190 cycles # 0.001 GHz
2.003905006 2,312,547 instructions # 0.42 insn per cycle
2.003905006 451,691 branches # 0.056 M/sec
2.003905006 37,925 branch-misses # 8.40% of all branches
root@kbl-ppc:~# perf stat -I1000 --interval-count 2 --summary
# time counts unit events
1.001313128 8,013.20 msec cpu-clock # 8.013 CPUs utilized
1.001313128 83 context-switches # 0.010 K/sec
1.001313128 8 cpu-migrations # 0.001 K/sec
1.001313128 0 page-faults # 0.000 K/sec
1.001313128 4,470,950 cycles # 0.001 GHz
1.001313128 1,440,045 instructions # 0.32 insn per cycle
1.001313128 283,222 branches # 0.035 M/sec
1.001313128 33,576 branch-misses # 11.86% of all branches
2.003857385 8,020.34 msec cpu-clock # 8.020 CPUs utilized
2.003857385 154 context-switches # 0.019 K/sec
2.003857385 8 cpu-migrations # 0.001 K/sec
2.003857385 2 page-faults # 0.000 K/sec
2.003857385 4,515,676 cycles # 0.001 GHz
2.003857385 2,180,449 instructions # 0.48 insn per cycle
2.003857385 435,254 branches # 0.054 M/sec
2.003857385 31,179 branch-misses # 7.16% of all branches
Performance counter stats for 'system wide':
16,033.53 msec cpu-clock # 7.992 CPUs utilized
237 context-switches # 0.015 K/sec
16 cpu-migrations # 0.001 K/sec
2 page-faults # 0.000 K/sec
8,986,626 cycles # 0.001 GHz
3,620,494 instructions # 0.40 insn per cycle
718,476 branches # 0.045 M/sec
64,755 branch-misses # 9.01% of all branches
2.006124542 seconds time elapsed
Fixes: c7e5b328a8d4 ("perf stat: Report summary for interval mode")
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/20200903010113.32232-1-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-09-03 01:01:13 +00:00
|
|
|
if (interval && stat_config.summary) {
|
perf stat: Report summary for interval mode
Currently 'perf stat' supports printing counts at a regular interval (-I),
but it's not very easy for the user to get the overall statistics.
The patch uses 'evsel->prev_raw_counts' to get counts for the summary. Copy
the counts to 'evsel->counts' after printing the interval results.
Next, we just follow the non-interval processing.
Let's see some examples,
root@kbl-ppc:~# perf stat -e cycles -I1000 --interval-count 2
# time counts unit events
1.000412064 2,281,114 cycles
2.001383658 2,547,880 cycles
Performance counter stats for 'system wide':
4,828,994 cycles
2.002860349 seconds time elapsed
root@kbl-ppc:~# perf stat -e cycles,instructions -I1000 --interval-count 2
# time counts unit events
1.000389902 1,536,093 cycles
1.000389902 420,226 instructions # 0.27 insn per cycle
2.001433453 2,213,952 cycles
2.001433453 735,465 instructions # 0.33 insn per cycle
Performance counter stats for 'system wide':
3,750,045 cycles
1,155,691 instructions # 0.31 insn per cycle
2.003023361 seconds time elapsed
root@kbl-ppc:~# perf stat -M CPI,IPC -I1000 --interval-count 2
# time counts unit events
1.000435121 905,303 inst_retired.any # 2.9 CPI
1.000435121 2,663,333 cycles
1.000435121 914,702 inst_retired.any # 0.3 IPC
1.000435121 2,676,559 cpu_clk_unhalted.thread
2.001615941 1,951,092 inst_retired.any # 1.8 CPI
2.001615941 3,551,357 cycles
2.001615941 1,950,837 inst_retired.any # 0.5 IPC
2.001615941 3,551,044 cpu_clk_unhalted.thread
Performance counter stats for 'system wide':
2,856,395 inst_retired.any # 2.2 CPI
6,214,690 cycles
2,865,539 inst_retired.any # 0.5 IPC
6,227,603 cpu_clk_unhalted.thread
2.003403078 seconds time elapsed
Committer testing:
Before:
# perf stat -e cycles -I1000 --interval-count 2
# time counts unit events
1.000618627 26,877,408 cycles
2.001417968 233,672,829 cycles
#
After:
# perf stat -e cycles -I1000 --interval-count 2
# time counts unit events
1.001531815 5,341,388,792 cycles
2.002936530 100,073,912 cycles
Performance counter stats for 'system wide':
5,441,462,704 cycles
2.004893794 seconds time elapsed
#
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Reviewed-by: Jiri Olsa <jolsa@redhat.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/20200520042737.24160-6-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
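Since the blame view below interleaves it heavily, here is the summary path
gathered into one readable block (no new logic, just the annotated lines that
follow, consolidated):
    if (interval && stat_config.summary) {
            stat_config.interval = 0;              /* leave interval mode */
            stat_config.stop_read_counter = true;  /* no further reads    */
            init_stats(&walltime_nsecs_stats);     /* wall time covers the whole run */
            update_stats(&walltime_nsecs_stats, t1 - t0);

            if (stat_config.aggr_mode == AGGR_GLOBAL)
                    evlist__save_aggr_prev_raw_counts(evsel_list);

            evlist__copy_prev_raw_counts(evsel_list);  /* totals -> evsel->counts */
            evlist__reset_prev_raw_counts(evsel_list);
            runtime_stat_reset(&stat_config);
            perf_stat__reset_shadow_per_stat(&rt_stat);
    } else {
            update_stats(&walltime_nsecs_stats, t1 - t0);
    }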
2020-05-20 04:27:37 +00:00
|
|
|
stat_config.interval = 0;
|
perf stat: Turn off summary for interval mode by default
There's a risk that outputting interval mode summaries by default breaks
CSV consumers. It already broke pmu-tools/toplev.
So now we turn off the summary by default, but add a new option
'--summary' to enable it. This is active even when not using
CSV mode.
Before:
root@kbl-ppc:~# perf stat -I1000 --interval-count 2
# time counts unit events
1.000265904 8,005.73 msec cpu-clock # 8.006 CPUs utilized
1.000265904 601 context-switches # 0.075 K/sec
1.000265904 10 cpu-migrations # 0.001 K/sec
1.000265904 0 page-faults # 0.000 K/sec
1.000265904 66,746,521 cycles # 0.008 GHz
1.000265904 71,874,398 instructions # 1.08 insn per cycle
1.000265904 13,356,781 branches # 1.668 M/sec
1.000265904 298,756 branch-misses # 2.24% of all branches
2.001857667 8,012.52 msec cpu-clock # 8.013 CPUs utilized
2.001857667 164 context-switches # 0.020 K/sec
2.001857667 10 cpu-migrations # 0.001 K/sec
2.001857667 2 page-faults # 0.000 K/sec
2.001857667 5,822,188 cycles # 0.001 GHz
2.001857667 2,186,170 instructions # 0.38 insn per cycle
2.001857667 442,378 branches # 0.055 M/sec
2.001857667 44,750 branch-misses # 10.12% of all branches
Performance counter stats for 'system wide':
16,018.25 msec cpu-clock # 7.993 CPUs utilized
765 context-switches # 0.048 K/sec
20 cpu-migrations # 0.001 K/sec
2 page-faults # 0.000 K/sec
72,568,709 cycles # 0.005 GHz
74,060,568 instructions # 1.02 insn per cycle
13,799,159 branches # 0.861 M/sec
343,506 branch-misses # 2.49% of all branches
2.004118489 seconds time elapsed
After:
root@kbl-ppc:~# perf stat -I1000 --interval-count 2
# time counts unit events
1.001336393 8,013.28 msec cpu-clock # 8.013 CPUs utilized
1.001336393 82 context-switches # 0.010 K/sec
1.001336393 8 cpu-migrations # 0.001 K/sec
1.001336393 0 page-faults # 0.000 K/sec
1.001336393 4,199,121 cycles # 0.001 GHz
1.001336393 1,373,991 instructions # 0.33 insn per cycle
1.001336393 270,681 branches # 0.034 M/sec
1.001336393 31,659 branch-misses # 11.70% of all branches
2.003905006 8,020.52 msec cpu-clock # 8.021 CPUs utilized
2.003905006 184 context-switches # 0.023 K/sec
2.003905006 8 cpu-migrations # 0.001 K/sec
2.003905006 2 page-faults # 0.000 K/sec
2.003905006 5,446,190 cycles # 0.001 GHz
2.003905006 2,312,547 instructions # 0.42 insn per cycle
2.003905006 451,691 branches # 0.056 M/sec
2.003905006 37,925 branch-misses # 8.40% of all branches
root@kbl-ppc:~# perf stat -I1000 --interval-count 2 --summary
# time counts unit events
1.001313128 8,013.20 msec cpu-clock # 8.013 CPUs utilized
1.001313128 83 context-switches # 0.010 K/sec
1.001313128 8 cpu-migrations # 0.001 K/sec
1.001313128 0 page-faults # 0.000 K/sec
1.001313128 4,470,950 cycles # 0.001 GHz
1.001313128 1,440,045 instructions # 0.32 insn per cycle
1.001313128 283,222 branches # 0.035 M/sec
1.001313128 33,576 branch-misses # 11.86% of all branches
2.003857385 8,020.34 msec cpu-clock # 8.020 CPUs utilized
2.003857385 154 context-switches # 0.019 K/sec
2.003857385 8 cpu-migrations # 0.001 K/sec
2.003857385 2 page-faults # 0.000 K/sec
2.003857385 4,515,676 cycles # 0.001 GHz
2.003857385 2,180,449 instructions # 0.48 insn per cycle
2.003857385 435,254 branches # 0.054 M/sec
2.003857385 31,179 branch-misses # 7.16% of all branches
Performance counter stats for 'system wide':
16,033.53 msec cpu-clock # 7.992 CPUs utilized
237 context-switches # 0.015 K/sec
16 cpu-migrations # 0.001 K/sec
2 page-faults # 0.000 K/sec
8,986,626 cycles # 0.001 GHz
3,620,494 instructions # 0.40 insn per cycle
718,476 branches # 0.045 M/sec
64,755 branch-misses # 9.01% of all branches
2.006124542 seconds time elapsed
Fixes: c7e5b328a8d4 ("perf stat: Report summary for interval mode")
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/20200903010113.32232-1-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-09-03 01:01:13 +00:00
|
|
|
stat_config.stop_read_counter = true;
|
perf stat: Report summary for interval mode
Currently 'perf stat' supports printing counts at a regular interval (-I),
but it's not very easy for the user to get the overall statistics.
The patch uses 'evsel->prev_raw_counts' to get counts for the summary. Copy
the counts to 'evsel->counts' after printing the interval results.
Next, we just follow the non-interval processing.
Let's see some examples,
root@kbl-ppc:~# perf stat -e cycles -I1000 --interval-count 2
# time counts unit events
1.000412064 2,281,114 cycles
2.001383658 2,547,880 cycles
Performance counter stats for 'system wide':
4,828,994 cycles
2.002860349 seconds time elapsed
root@kbl-ppc:~# perf stat -e cycles,instructions -I1000 --interval-count 2
# time counts unit events
1.000389902 1,536,093 cycles
1.000389902 420,226 instructions # 0.27 insn per cycle
2.001433453 2,213,952 cycles
2.001433453 735,465 instructions # 0.33 insn per cycle
Performance counter stats for 'system wide':
3,750,045 cycles
1,155,691 instructions # 0.31 insn per cycle
2.003023361 seconds time elapsed
root@kbl-ppc:~# perf stat -M CPI,IPC -I1000 --interval-count 2
# time counts unit events
1.000435121 905,303 inst_retired.any # 2.9 CPI
1.000435121 2,663,333 cycles
1.000435121 914,702 inst_retired.any # 0.3 IPC
1.000435121 2,676,559 cpu_clk_unhalted.thread
2.001615941 1,951,092 inst_retired.any # 1.8 CPI
2.001615941 3,551,357 cycles
2.001615941 1,950,837 inst_retired.any # 0.5 IPC
2.001615941 3,551,044 cpu_clk_unhalted.thread
Performance counter stats for 'system wide':
2,856,395 inst_retired.any # 2.2 CPI
6,214,690 cycles
2,865,539 inst_retired.any # 0.5 IPC
6,227,603 cpu_clk_unhalted.thread
2.003403078 seconds time elapsed
Committer testing:
Before:
# perf stat -e cycles -I1000 --interval-count 2
# time counts unit events
1.000618627 26,877,408 cycles
2.001417968 233,672,829 cycles
#
After:
# perf stat -e cycles -I1000 --interval-count 2
# time counts unit events
1.001531815 5,341,388,792 cycles
2.002936530 100,073,912 cycles
Performance counter stats for 'system wide':
5,441,462,704 cycles
2.004893794 seconds time elapsed
#
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Reviewed-by: Jiri Olsa <jolsa@redhat.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/20200520042737.24160-6-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-05-20 04:27:37 +00:00
|
|
|
init_stats(&walltime_nsecs_stats);
|
|
|
|
update_stats(&walltime_nsecs_stats, t1 - t0);
|
|
|
|
|
|
|
|
if (stat_config.aggr_mode == AGGR_GLOBAL)
|
2020-11-30 12:31:04 +00:00
|
|
|
evlist__save_aggr_prev_raw_counts(evsel_list);
|
perf stat: Report summary for interval mode
Currently 'perf stat' supports printing counts at a regular interval (-I),
but it's not very easy for the user to get the overall statistics.
The patch uses 'evsel->prev_raw_counts' to get counts for the summary. Copy
the counts to 'evsel->counts' after printing the interval results.
Next, we just follow the non-interval processing.
Let's see some examples,
root@kbl-ppc:~# perf stat -e cycles -I1000 --interval-count 2
# time counts unit events
1.000412064 2,281,114 cycles
2.001383658 2,547,880 cycles
Performance counter stats for 'system wide':
4,828,994 cycles
2.002860349 seconds time elapsed
root@kbl-ppc:~# perf stat -e cycles,instructions -I1000 --interval-count 2
# time counts unit events
1.000389902 1,536,093 cycles
1.000389902 420,226 instructions # 0.27 insn per cycle
2.001433453 2,213,952 cycles
2.001433453 735,465 instructions # 0.33 insn per cycle
Performance counter stats for 'system wide':
3,750,045 cycles
1,155,691 instructions # 0.31 insn per cycle
2.003023361 seconds time elapsed
root@kbl-ppc:~# perf stat -M CPI,IPC -I1000 --interval-count 2
# time counts unit events
1.000435121 905,303 inst_retired.any # 2.9 CPI
1.000435121 2,663,333 cycles
1.000435121 914,702 inst_retired.any # 0.3 IPC
1.000435121 2,676,559 cpu_clk_unhalted.thread
2.001615941 1,951,092 inst_retired.any # 1.8 CPI
2.001615941 3,551,357 cycles
2.001615941 1,950,837 inst_retired.any # 0.5 IPC
2.001615941 3,551,044 cpu_clk_unhalted.thread
Performance counter stats for 'system wide':
2,856,395 inst_retired.any # 2.2 CPI
6,214,690 cycles
2,865,539 inst_retired.any # 0.5 IPC
6,227,603 cpu_clk_unhalted.thread
2.003403078 seconds time elapsed
Committer testing:
Before:
# perf stat -e cycles -I1000 --interval-count 2
# time counts unit events
1.000618627 26,877,408 cycles
2.001417968 233,672,829 cycles
#
After:
# perf stat -e cycles -I1000 --interval-count 2
# time counts unit events
1.001531815 5,341,388,792 cycles
2.002936530 100,073,912 cycles
Performance counter stats for 'system wide':
5,441,462,704 cycles
2.004893794 seconds time elapsed
#
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Reviewed-by: Jiri Olsa <jolsa@redhat.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/20200520042737.24160-6-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-05-20 04:27:37 +00:00
|
|
|
|
2020-11-30 12:31:04 +00:00
|
|
|
evlist__copy_prev_raw_counts(evsel_list);
|
|
|
|
evlist__reset_prev_raw_counts(evsel_list);
|
perf stat: Report summary for interval mode
Currently 'perf stat' supports printing counts at a regular interval (-I),
but it's not very easy for the user to get the overall statistics.
The patch uses 'evsel->prev_raw_counts' to get counts for the summary. Copy
the counts to 'evsel->counts' after printing the interval results.
Next, we just follow the non-interval processing.
Let's see some examples,
root@kbl-ppc:~# perf stat -e cycles -I1000 --interval-count 2
# time counts unit events
1.000412064 2,281,114 cycles
2.001383658 2,547,880 cycles
Performance counter stats for 'system wide':
4,828,994 cycles
2.002860349 seconds time elapsed
root@kbl-ppc:~# perf stat -e cycles,instructions -I1000 --interval-count 2
# time counts unit events
1.000389902 1,536,093 cycles
1.000389902 420,226 instructions # 0.27 insn per cycle
2.001433453 2,213,952 cycles
2.001433453 735,465 instructions # 0.33 insn per cycle
Performance counter stats for 'system wide':
3,750,045 cycles
1,155,691 instructions # 0.31 insn per cycle
2.003023361 seconds time elapsed
root@kbl-ppc:~# perf stat -M CPI,IPC -I1000 --interval-count 2
# time counts unit events
1.000435121 905,303 inst_retired.any # 2.9 CPI
1.000435121 2,663,333 cycles
1.000435121 914,702 inst_retired.any # 0.3 IPC
1.000435121 2,676,559 cpu_clk_unhalted.thread
2.001615941 1,951,092 inst_retired.any # 1.8 CPI
2.001615941 3,551,357 cycles
2.001615941 1,950,837 inst_retired.any # 0.5 IPC
2.001615941 3,551,044 cpu_clk_unhalted.thread
Performance counter stats for 'system wide':
2,856,395 inst_retired.any # 2.2 CPI
6,214,690 cycles
2,865,539 inst_retired.any # 0.5 IPC
6,227,603 cpu_clk_unhalted.thread
2.003403078 seconds time elapsed
Committer testing:
Before:
# perf stat -e cycles -I1000 --interval-count 2
# time counts unit events
1.000618627 26,877,408 cycles
2.001417968 233,672,829 cycles
#
After:
# perf stat -e cycles -I1000 --interval-count 2
# time counts unit events
1.001531815 5,341,388,792 cycles
2.002936530 100,073,912 cycles
Performance counter stats for 'system wide':
5,441,462,704 cycles
2.004893794 seconds time elapsed
#
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Reviewed-by: Jiri Olsa <jolsa@redhat.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/20200520042737.24160-6-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-05-20 04:27:37 +00:00
|
|
|
runtime_stat_reset(&stat_config);
|
|
|
|
perf_stat__reset_shadow_per_stat(&rt_stat);
|
|
|
|
} else
|
|
|
|
update_stats(&walltime_nsecs_stats, t1 - t0);
|
2009-06-13 12:57:28 +00:00
|
|
|
|
perf stat: Avoid skew when reading events
When we don't have a tracee (i.e. we're attaching to a task or CPU),
counters can still be running after our workload finishes, and can still
be running as we read their values. As we read events one-by-one, there
can be arbitrary skew between values of events, even within a group.
This means that ratios within an event group are not reliable.
This skew can be seen if measuring a group of identical events, e.g:
# perf stat -a -C0 -e '{cycles,cycles}' sleep 1
To avoid this, we must stop groups from counting before we read the
values of any constituent events. This patch adds and makes use of a new
disable_counters() helper, which disables group leaders (and thus each
group as a whole). This mirrors the use of enable_counters() for
starting event groups in the absence of a tracee.
Closing a group leader splits the group, and without a disabled group
leader the newly split events will begin counting. Thus to ensure counts
are reliable we must defer closing group leaders until all counts have
been read. To do so this patch removes the event closing logic from the
read_counters() helper and explicitly closes the events using
perf_evlist__close(), which also aids legibility.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1470747869-3567-1-git-send-email-mark.rutland@arm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-08-09 13:04:29 +00:00
|
|
|
/*
|
|
|
|
* Closing a group leader splits the group, and as we only disable
|
|
|
|
* group leaders, this results in remaining events becoming enabled. To
|
|
|
|
* avoid arbitrary skew, we must read all counters before closing any
|
|
|
|
* group leaders.
|
|
|
|
*/
|
2019-03-26 22:18:21 +00:00
|
|
|
read_counters(&(struct timespec) { .tv_nsec = t1-t0 });
|
2019-07-15 14:21:21 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* We need to keep evsel_list alive, because it's processed
|
|
|
|
* later; the evsel_list will be closed after that.
|
|
|
|
*/
|
|
|
|
if (!STAT_RECORD)
|
2019-07-21 11:24:07 +00:00
|
|
|
evlist__close(evsel_list);
|
2011-01-03 19:45:52 +00:00
|
|
|
|
2009-06-13 12:57:28 +00:00
|
|
|
return WEXITSTATUS(status);
|
|
|
|
}
|
|
|
|
|
2018-04-23 09:08:21 +00:00
|
|
|
static int run_perf_stat(int argc, const char **argv, int run_idx)
|
2012-10-23 11:40:14 +00:00
|
|
|
{
|
|
|
|
int ret;
|
|
|
|
|
|
|
|
if (pre_cmd) {
|
|
|
|
ret = system(pre_cmd);
|
|
|
|
if (ret)
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (sync_run)
|
|
|
|
sync();
|
|
|
|
|
2018-04-23 09:08:21 +00:00
|
|
|
ret = __run_perf_stat(argc, argv, run_idx);
|
2012-10-23 11:40:14 +00:00
|
|
|
if (ret)
|
|
|
|
return ret;
|
|
|
|
|
|
|
|
if (post_cmd) {
|
|
|
|
ret = system(post_cmd);
|
|
|
|
if (ret)
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2018-08-30 06:32:24 +00:00
|
|
|
static void print_counters(struct timespec *ts, int argc, const char **argv)
|
|
|
|
{
|
2018-08-30 06:32:25 +00:00
|
|
|
/* Do not print anything if we record to the pipe. */
|
|
|
|
if (STAT_RECORD && perf_stat.data.is_pipe)
|
|
|
|
return;
|
2020-10-27 00:27:36 +00:00
|
|
|
if (stat_config.quiet)
|
|
|
|
return;
|
2018-08-30 06:32:25 +00:00
|
|
|
|
2020-11-30 17:55:12 +00:00
|
|
|
evlist__print_counters(evsel_list, &stat_config, &target, ts, argc, argv);
|
2018-08-30 06:32:24 +00:00
|
|
|
}
|
|
|
|
|
2009-06-10 13:55:59 +00:00
|
|
|
static volatile int signr = -1;
|
|
|
|
|
2009-05-26 07:17:18 +00:00
|
|
|
static void skip_signal(int signo)
|
2009-04-20 13:37:32 +00:00
|
|
|
{
|
2015-07-21 12:31:25 +00:00
|
|
|
if ((child_pid == -1) || stat_config.interval)
|
2009-12-31 08:05:50 +00:00
|
|
|
done = 1;
|
|
|
|
|
2009-06-10 13:55:59 +00:00
|
|
|
signr = signo;
|
2013-06-04 15:44:26 +00:00
|
|
|
/*
|
|
|
|
* render child_pid harmless so we
|
|
|
|
* won't send SIGTERM to a random
|
|
|
|
* process in case of a race condition
|
|
|
|
* and fast PID recycling
|
|
|
|
*/
|
|
|
|
child_pid = -1;
|
2009-06-10 13:55:59 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
static void sig_atexit(void)
|
|
|
|
{
|
2013-06-04 15:44:26 +00:00
|
|
|
sigset_t set, oset;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* avoid a race condition with the SIGCHLD handler
|
|
|
|
* in skip_signal() which is modifying child_pid
|
|
|
|
* goal is to avoid sending SIGTERM to a random
|
|
|
|
* process
|
|
|
|
*/
|
|
|
|
sigemptyset(&set);
|
|
|
|
sigaddset(&set, SIGCHLD);
|
|
|
|
sigprocmask(SIG_BLOCK, &set, &oset);
|
|
|
|
|
2009-10-04 00:35:01 +00:00
|
|
|
if (child_pid != -1)
|
|
|
|
kill(child_pid, SIGTERM);
|
|
|
|
|
2013-06-04 15:44:26 +00:00
|
|
|
sigprocmask(SIG_SETMASK, &oset, NULL);
|
|
|
|
|
2009-06-10 13:55:59 +00:00
|
|
|
if (signr == -1)
|
|
|
|
return;
|
|
|
|
|
|
|
|
signal(signr, SIG_DFL);
|
|
|
|
kill(getpid(), signr);
|
2009-05-26 07:17:18 +00:00
|
|
|
}
|
|
|
|
|
2020-05-20 16:23:35 +00:00
|
|
|
void perf_stat__set_big_num(int set)
|
|
|
|
{
|
|
|
|
stat_config.big_num = (set != 0);
|
|
|
|
}
|
|
|
|
|
perf stat: Align CSV output for summary mode
The 'perf stat' subcommand supports the request for a summary of the
interval counter readings. But the summary lines break the CSV output
so it's hard for scripts to parse the result.
Before:
# perf stat -x, -I1000 --interval-count 1 --summary
1.001323097,8013.48,msec,cpu-clock,8013483384,100.00,8.013,CPUs utilized
1.001323097,270,,context-switches,8013513297,100.00,0.034,K/sec
1.001323097,13,,cpu-migrations,8013530032,100.00,0.002,K/sec
1.001323097,184,,page-faults,8013546992,100.00,0.023,K/sec
1.001323097,20574191,,cycles,8013551506,100.00,0.003,GHz
1.001323097,10562267,,instructions,8013564958,100.00,0.51,insn per cycle
1.001323097,2019244,,branches,8013575673,100.00,0.252,M/sec
1.001323097,106152,,branch-misses,8013585776,100.00,5.26,of all branches
8013.48,msec,cpu-clock,8013483384,100.00,7.984,CPUs utilized
270,,context-switches,8013513297,100.00,0.034,K/sec
13,,cpu-migrations,8013530032,100.00,0.002,K/sec
184,,page-faults,8013546992,100.00,0.023,K/sec
20574191,,cycles,8013551506,100.00,0.003,GHz
10562267,,instructions,8013564958,100.00,0.51,insn per cycle
2019244,,branches,8013575673,100.00,0.252,M/sec
106152,,branch-misses,8013585776,100.00,5.26,of all branches
The summary line loses the timestamp column, which breaks the CSV
output.
We add a column at the original 'timestamp' position and it just says
'summary' for the summary line.
After:
# perf stat -x, -I1000 --interval-count 1 --summary
1.001196053,8012.72,msec,cpu-clock,8012722903,100.00,8.013,CPUs utilized
1.001196053,218,,context-switches,8012753271,100.00,0.027,K/sec
1.001196053,9,,cpu-migrations,8012769767,100.00,0.001,K/sec
1.001196053,0,,page-faults,8012786257,100.00,0.000,K/sec
1.001196053,15004518,,cycles,8012790637,100.00,0.002,GHz
1.001196053,7954691,,instructions,8012804027,100.00,0.53,insn per cycle
1.001196053,1590259,,branches,8012814766,100.00,0.198,M/sec
1.001196053,82601,,branch-misses,8012824365,100.00,5.19,of all branches
summary,8012.72,msec,cpu-clock,8012722903,100.00,7.986,CPUs utilized
summary,218,,context-switches,8012753271,100.00,0.027,K/sec
summary,9,,cpu-migrations,8012769767,100.00,0.001,K/sec
summary,0,,page-faults,8012786257,100.00,0.000,K/sec
summary,15004518,,cycles,8012790637,100.00,0.002,GHz
summary,7954691,,instructions,8012804027,100.00,0.53,insn per cycle
summary,1590259,,branches,8012814766,100.00,0.198,M/sec
summary,82601,,branch-misses,8012824365,100.00,5.19,of all branches
Now it's easy for script to analyse the summary lines.
Of course, we also take care not to break possible existing scripts, which
can continue to use the broken CSV format by using a new '--no-csv-summary'
option.
# perf stat -x, -I1000 --interval-count 1 --summary --no-csv-summary
1.001213261,8012.67,msec,cpu-clock,8012672327,100.00,8.013,CPUs utilized
1.001213261,197,,context-switches,8012703742,100.00,24.586,/sec
1.001213261,9,,cpu-migrations,8012720902,100.00,1.123,/sec
1.001213261,644,,page-faults,8012738266,100.00,80.373,/sec
1.001213261,18350698,,cycles,8012744109,100.00,0.002,GHz
1.001213261,12745021,,instructions,8012759001,100.00,0.69,insn per cycle
1.001213261,2458033,,branches,8012770864,100.00,306.768,K/sec
1.001213261,102107,,branch-misses,8012781751,100.00,4.15,of all branches
8012.67,msec,cpu-clock,8012672327,100.00,7.985,CPUs utilized
197,,context-switches,8012703742,100.00,24.586,/sec
9,,cpu-migrations,8012720902,100.00,1.123,/sec
644,,page-faults,8012738266,100.00,80.373,/sec
18350698,,cycles,8012744109,100.00,0.002,GHz
12745021,,instructions,8012759001,100.00,0.69,insn per cycle
2458033,,branches,8012770864,100.00,306.768,K/sec
102107,,branch-misses,8012781751,100.00,4.15,of all branches
This option can be enabled in perf config by setting the variable
'stat.no-csv-summary'.
# perf config stat.no-csv-summary=true
# perf config -l
stat.no-csv-summary=true
# perf stat -x, -I1000 --interval-count 1 --summary
1.001330198,8013.28,msec,cpu-clock,8013279201,100.00,8.013,CPUs utilized
1.001330198,205,,context-switches,8013308394,100.00,25.583,/sec
1.001330198,10,,cpu-migrations,8013324681,100.00,1.248,/sec
1.001330198,0,,page-faults,8013340926,100.00,0.000,/sec
1.001330198,8027742,,cycles,8013344503,100.00,0.001,GHz
1.001330198,2871717,,instructions,8013356501,100.00,0.36,insn per cycle
1.001330198,553564,,branches,8013366204,100.00,69.081,K/sec
1.001330198,54021,,branch-misses,8013375952,100.00,9.76,of all branches
8013.28,msec,cpu-clock,8013279201,100.00,7.985,CPUs utilized
205,,context-switches,8013308394,100.00,25.583,/sec
10,,cpu-migrations,8013324681,100.00,1.248,/sec
0,,page-faults,8013340926,100.00,0.000,/sec
8027742,,cycles,8013344503,100.00,0.001,GHz
2871717,,instructions,8013356501,100.00,0.36,insn per cycle
553564,,branches,8013366204,100.00,69.081,K/sec
54021,,branch-misses,8013375952,100.00,9.76,of all branches
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Acked-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jin Yao <yao.jin@intel.com>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/20210319070156.20394-1-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
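A hedged sketch of the output-side idea (function and variable names here are
illustrative, not the tool's exact ones): emit 'summary' where the timestamp
column would go, so every line keeps the same column count.
    #include <stdio.h>
    #include <time.h>

    static void print_csv_prefix(FILE *out, const char *sep,
                                 const struct timespec *ts, int is_summary)
    {
            if (is_summary)
                    fprintf(out, "summary%s", sep);   /* tag replaces the timestamp */
            else
                    fprintf(out, "%lu.%09lu%s", (unsigned long)ts->tv_sec,
                            (unsigned long)ts->tv_nsec, sep);
    }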
2021-03-19 07:01:55 +00:00
|
|
|
void perf_stat__set_no_csv_summary(int set)
|
|
|
|
{
|
|
|
|
stat_config.no_csv_summary = (set != 0);
|
|
|
|
}
|
|
|
|
|
2012-09-10 22:15:03 +00:00
|
|
|
static int stat__set_big_num(const struct option *opt __maybe_unused,
|
|
|
|
const char *s __maybe_unused, int unset)
|
perf stat: Add csv-style output
This patch adds an option (-x/--field-separator) to print counts using a
CSV-style output. The user can pass a custom separator. This makes it very easy
to import counts directly into your favorite spreadsheet without having to
write scripts.
Example:
$ perf stat --field-separator=, -a -- sleep 1
4009.961740,task-clock-msecs
13,context-switches
2,CPU-migrations
189,page-faults
9596385684,cycles
3493659441,instructions
872897069,branches
41562,branch-misses
22424,cache-references
1289,cache-misses
Works also in non-aggregated mode:
$ perf stat -x , -a -A -- sleep 1
CPU0,1002.526168,task-clock-msecs
CPU1,1002.528365,task-clock-msecs
CPU2,1002.523360,task-clock-msecs
CPU3,1002.519878,task-clock-msecs
CPU0,1,context-switches
CPU1,5,context-switches
CPU2,5,context-switches
CPU3,6,context-switches
CPU0,0,CPU-migrations
CPU1,1,CPU-migrations
CPU2,0,CPU-migrations
CPU3,1,CPU-migrations
CPU0,2,page-faults
CPU1,6,page-faults
CPU2,9,page-faults
CPU3,174,page-faults
CPU0,2399439771,cycles
CPU1,2380369063,cycles
CPU2,2399142710,cycles
CPU3,2373161192,cycles
CPU0,872900618,instructions
CPU1,873030960,instructions
CPU2,872714525,instructions
CPU3,874460580,instructions
CPU0,221556839,branches
CPU1,218134342,branches
CPU2,218161730,branches
CPU3,218284093,branches
CPU0,18556,branch-misses
CPU1,1449,branch-misses
CPU2,3447,branch-misses
CPU3,12714,branch-misses
CPU0,8330,cache-references
CPU1,313844,cache-references
CPU2,47993728,cache-references
CPU3,826481,cache-references
CPU0,272,cache-misses
CPU1,5360,cache-misses
CPU2,1342193,cache-misses
CPU3,13992,cache-misses
This second version adds the ability to name a separator and uses
field-separator as the long option to be consistent with perf report.
Committer note: Since we enabled --big-num by default in 201e0b0 and -x can't be
used with it, we need to notice if the user explicitly enabled or disabled -B,
and add code to disable big_num if the user didn't explicitly set --big_num when
-x is used.
Cc: David S. Miller <davem@davemloft.net>
Cc: Frederik Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: paulus@samba.org
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Robert Richter <robert.richter@amd.com>
LKML-Reference: <4cf68aa7.0fedd80a.5294.1203@mx.google.com>
Signed-off-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2010-12-01 16:49:05 +00:00
|
|
|
{
|
|
|
|
big_num_opt = unset ? 0 : 1;
|
2020-05-20 16:23:35 +00:00
|
|
|
perf_stat__set_big_num(!unset);
|
perf stat: Add csv-style output
This patch adds an option (-x/--field-separator) to print counts using a
CSV-style output. The user can pass a custom separator. This makes it very easy
to import counts directly into your favorite spreadsheet without having to
write scripts.
Example:
$ perf stat --field-separator=, -a -- sleep 1
4009.961740,task-clock-msecs
13,context-switches
2,CPU-migrations
189,page-faults
9596385684,cycles
3493659441,instructions
872897069,branches
41562,branch-misses
22424,cache-references
1289,cache-misses
Works also in non-aggregated mode:
$ perf stat -x , -a -A -- sleep 1
CPU0,1002.526168,task-clock-msecs
CPU1,1002.528365,task-clock-msecs
CPU2,1002.523360,task-clock-msecs
CPU3,1002.519878,task-clock-msecs
CPU0,1,context-switches
CPU1,5,context-switches
CPU2,5,context-switches
CPU3,6,context-switches
CPU0,0,CPU-migrations
CPU1,1,CPU-migrations
CPU2,0,CPU-migrations
CPU3,1,CPU-migrations
CPU0,2,page-faults
CPU1,6,page-faults
CPU2,9,page-faults
CPU3,174,page-faults
CPU0,2399439771,cycles
CPU1,2380369063,cycles
CPU2,2399142710,cycles
CPU3,2373161192,cycles
CPU0,872900618,instructions
CPU1,873030960,instructions
CPU2,872714525,instructions
CPU3,874460580,instructions
CPU0,221556839,branches
CPU1,218134342,branches
CPU2,218161730,branches
CPU3,218284093,branches
CPU0,18556,branch-misses
CPU1,1449,branch-misses
CPU2,3447,branch-misses
CPU3,12714,branch-misses
CPU0,8330,cache-references
CPU1,313844,cache-references
CPU2,47993728,cache-references
CPU3,826481,cache-references
CPU0,272,cache-misses
CPU1,5360,cache-misses
CPU2,1342193,cache-misses
CPU3,13992,cache-misses
This second version adds the ability to name a separator and uses
field-separator as the long option to be consistent with perf report.
Committer note: Since we enabled --big-num by default in 201e0b0 and -x can't be
used with it, we need to notice if the user explicitly enabled or disabled -B,
and add code to disable big_num if the user didn't explicitly set --big_num when
-x is used.
Cc: David S. Miller <davem@davemloft.net>
Cc: Frederik Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: paulus@samba.org
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Robert Richter <robert.richter@amd.com>
LKML-Reference: <4cf68aa7.0fedd80a.5294.1203@mx.google.com>
Signed-off-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2010-12-01 16:49:05 +00:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
perf stat: Basic support for TopDown in perf stat
Add basic plumbing for TopDown in perf stat
TopDown is intended to replace the frontend cycles idle/ backend cycles
idle metrics in standard perf stat output. These metrics are not
reliable in many workloads, due to out of order effects.
This implements a new --topdown mode in perf stat (similar to
--transaction) that measures the pipeline bottlenecks using
standardized formulas. The measurement can all be done with 5 counters
(one of them a fixed counter).
The result is four metrics:
FrontendBound, BackendBound, BadSpeculation, Retiring
that describe the CPU pipeline behavior on a high level.
The full top down methodology has many hierarchical metrics. This
implementation only supports level 1 which can be collected without
multiplexing. A full implementation of top down on top of perf is
available in pmu-tools toplev. (http://github.com/andikleen/pmu-tools)
The current version works on Intel Core CPUs starting with Sandy Bridge,
and Atom CPUs starting with Silvermont. In principle the generic
metrics should also be implementable on other out of order CPUs.
TopDown level 1 uses a set of abstracted metrics which are generic to
out of order CPU cores (although some CPUs may not implement all of
them):
topdown-total-slots Available slots in the pipeline
topdown-slots-issued Slots issued into the pipeline
topdown-slots-retired Slots successfully retired
topdown-fetch-bubbles Pipeline gaps in the frontend
topdown-recovery-bubbles Pipeline gaps during recovery
from misspeculation
These metrics then allow computing four useful metrics:
FrontendBound, BackendBound, Retiring, BadSpeculation.
Add a new --topdown option to enable events. When --topdown is
specified set up events for all topdown events supported by the kernel.
Add topdown-* as a special case to the event parser, as is needed for
all events containing -.
The actual code to compute the metrics is in follow-on patches.
v2: Use standard sysctl read function.
v3: Move x86 specific code to arch/
v4: Enable --metric-only implicitly for topdown.
v5: Add --single-thread option to not force per core mode
v6: Fix output order of topdown metrics
v7: Allow combining with -d
v8: Remove --single-thread again
v9: Rename functions, adding arch_ and topdown_.
v10: Expand man page and describe TopDown better
Paste intro into commit description.
Print error when malloc fails.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: http://lkml.kernel.org/r/1464119559-17203-1-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
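The standard level-1 formulas these five counts feed (a sketch of the usual
definitions; the actual computation arrives in the follow-on patches):
    /* Each result is a fraction of total pipeline slots; the four
     * fractions sum to 1 by construction. */
    static void topdown_level1(double total_slots, double slots_issued,
                               double slots_retired, double fetch_bubbles,
                               double recovery_bubbles, double *fe,
                               double *bad_spec, double *retiring, double *be)
    {
            *fe       = fetch_bubbles / total_slots;
            *bad_spec = (slots_issued - slots_retired +
                         recovery_bubbles) / total_slots;
            *retiring = slots_retired / total_slots;
            *be       = 1.0 - *fe - *bad_spec - *retiring;
    }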
2016-05-30 15:49:42 +00:00
|
|
|
static int enable_metric_only(const struct option *opt __maybe_unused,
|
|
|
|
const char *s __maybe_unused, int unset)
|
|
|
|
{
|
|
|
|
force_metric_only = true;
|
2018-08-30 06:32:31 +00:00
|
|
|
stat_config.metric_only = !unset;
|
perf stat: Basic support for TopDown in perf stat
Add basic plumbing for TopDown in perf stat
TopDown is intended to replace the frontend cycles idle/ backend cycles
idle metrics in standard perf stat output. These metrics are not
reliable in many workloads, due to out of order effects.
This implements a new --topdown mode in perf stat (similar to
--transaction) that measures the pipeline bottlenecks using
standardized formulas. The measurement can all be done with 5 counters
(one of them a fixed counter).
The result is four metrics:
FrontendBound, BackendBound, BadSpeculation, Retiring
that describe the CPU pipeline behavior on a high level.
The full top down methodology has many hierarchical metrics. This
implementation only supports level 1 which can be collected without
multiplexing. A full implementation of top down on top of perf is
available in pmu-tools toplev. (http://github.com/andikleen/pmu-tools)
The current version works on Intel Core CPUs starting with Sandy Bridge,
and Atom CPUs starting with Silvermont. In principle the generic
metrics should also be implementable on other out of order CPUs.
TopDown level 1 uses a set of abstracted metrics which are generic to
out of order CPU cores (although some CPUs may not implement all of
them):
topdown-total-slots Available slots in the pipeline
topdown-slots-issued Slots issued into the pipeline
topdown-slots-retired Slots successfully retired
topdown-fetch-bubbles Pipeline gaps in the frontend
topdown-recovery-bubbles Pipeline gaps during recovery
from misspeculation
These metrics then allow computing four useful metrics:
FrontendBound, BackendBound, Retiring, BadSpeculation.
Add a new --topdown option to enable events. When --topdown is
specified set up events for all topdown events supported by the kernel.
Add topdown-* as a special case to the event parser, as is needed for
all events containing -.
The actual code to compute the metrics is in follow-on patches.
v2: Use standard sysctl read function.
v3: Move x86 specific code to arch/
v4: Enable --metric-only implicitly for topdown.
v5: Add --single-thread option to not force per core mode
v6: Fix output order of topdown metrics
v7: Allow combining with -d
v8: Remove --single-thread again
v9: Rename functions, adding arch_ and topdown_.
v10: Expand man page and describe TopDown better
Paste intro into commit description.
Print error when malloc fails.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: http://lkml.kernel.org/r/1464119559-17203-1-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-30 15:49:42 +00:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2017-08-31 19:40:31 +00:00
|
|
|
static int parse_metric_groups(const struct option *opt,
|
|
|
|
const char *str,
|
|
|
|
int unset __maybe_unused)
|
|
|
|
{
|
2020-05-20 18:20:10 +00:00
|
|
|
return metricgroup__parse_groups(opt, str,
|
|
|
|
stat_config.metric_no_group,
|
|
|
|
stat_config.metric_no_merge,
|
|
|
|
&stat_config.metric_events);
|
2017-08-31 19:40:31 +00:00
|
|
|
}
|
|
|
|
|
2020-07-17 07:05:41 +00:00
|
|
|
static int parse_control_option(const struct option *opt,
|
|
|
|
const char *str,
|
|
|
|
int unset __maybe_unused)
|
|
|
|
{
|
2020-09-01 09:37:53 +00:00
|
|
|
struct perf_stat_config *config = opt->value;
|
2020-07-17 07:05:41 +00:00
|
|
|
|
2020-09-02 10:57:07 +00:00
|
|
|
return evlist__parse_control(str, &config->ctl_fd, &config->ctl_fd_ack, &config->ctl_fd_close);
|
|
|
|
}
|
|
|
|
|
perf stat: Add --for-each-cgroup option
The --for-each-cgroup option is syntactic sugar for monitoring a large number
of cgroups easily. The current command line requires listing all the events
and cgroups even if users want to monitor the same events for each cgroup.
This patch addresses that usage by copying the given events for each cgroup
on the user's behalf.
For instance, if they want to monitor 6 events for 200 cgroups each, they
would have to write 1200 event names (with -e) AND 1200 cgroup names (with -G)
on the command line. But with this change, they can just specify 6
events and 200 cgroups with a new option.
A simpler example below measures 3 events for 2 cgroups ('A'
and 'B'). The result is that a total of 6 events are counted, like below.
$ perf stat -a -e cpu-clock,cycles,instructions --for-each-cgroup A,B sleep 1
Performance counter stats for 'system wide':
988.18 msec cpu-clock A # 0.987 CPUs utilized
3,153,761,702 cycles A # 3.200 GHz (100.00%)
8,067,769,847 instructions A # 2.57 insn per cycle (100.00%)
982.71 msec cpu-clock B # 0.982 CPUs utilized
3,136,093,298 cycles B # 3.182 GHz (99.99%)
8,109,619,327 instructions B # 2.58 insn per cycle (99.99%)
1.001228054 seconds time elapsed
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Ian Rogers <irogers@google.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lore.kernel.org/lkml/20200924124455.336326-3-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
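Conceptually the option expands to the cross product of events and cgroups; a
hedged sketch with hypothetical helper names (the real expansion is done on
the evlist, not shown here):
    struct evsel;                                 /* opaque for this sketch */
    extern struct evsel *evsel__clone(struct evsel *orig);             /* hypothetical */
    extern void evsel__set_cgroup(struct evsel *ev, const char *cgrp); /* hypothetical */

    /* "-e A,B --for-each-cgroup X,Y" behaves like "-e A,B,A,B -G X,X,Y,Y":
     * every event is duplicated once per cgroup. */
    static int expand_for_each_cgroup(struct evsel **evsels, int nr_evsels,
                                      const char **cgroups, int nr_cgroups,
                                      struct evsel **out)
    {
            int n = 0;

            for (int c = 0; c < nr_cgroups; c++) {
                    for (int e = 0; e < nr_evsels; e++) {
                            struct evsel *clone = evsel__clone(evsels[e]);

                            if (!clone)
                                    return -1;
                            evsel__set_cgroup(clone, cgroups[c]);
                            out[n++] = clone;
                    }
            }
            return n;
    }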
2020-09-24 12:44:52 +00:00
|
|
|
static int parse_stat_cgroups(const struct option *opt,
|
|
|
|
const char *str, int unset)
|
|
|
|
{
|
|
|
|
if (stat_config.cgroup_list) {
|
|
|
|
pr_err("--cgroup and --for-each-cgroup cannot be used together\n");
|
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
|
|
|
|
return parse_cgroups(opt, str, unset);
|
|
|
|
}
|
|
|
|
|
2018-12-10 16:00:04 +00:00
|
|
|
static struct option stat_options[] = {
|
2015-11-05 14:40:45 +00:00
|
|
|
OPT_BOOLEAN('T', "transaction", &transaction_run,
|
|
|
|
"hardware transaction statistics"),
|
|
|
|
OPT_CALLBACK('e', "event", &evsel_list, "event",
|
|
|
|
"event selector. use 'perf list' to list available events",
|
|
|
|
parse_events_option),
|
|
|
|
OPT_CALLBACK(0, "filter", &evsel_list, "filter",
|
|
|
|
"event filter", parse_filter),
|
2018-08-30 06:32:12 +00:00
|
|
|
OPT_BOOLEAN('i', "no-inherit", &stat_config.no_inherit,
|
2015-11-05 14:40:45 +00:00
|
|
|
"child tasks do not inherit counters"),
|
|
|
|
OPT_STRING('p', "pid", &target.pid, "pid",
|
|
|
|
"stat events on existing process id"),
|
|
|
|
OPT_STRING('t', "tid", &target.tid, "tid",
|
|
|
|
"stat events on existing thread id"),
|
perf stat: Enable counting events for BPF programs
Introduce 'perf stat -b' option, which counts events for BPF programs, like:
[root@localhost ~]# ~/perf stat -e ref-cycles,cycles -b 254 -I 1000
1.487903822 115,200 ref-cycles
1.487903822 86,012 cycles
2.489147029 80,560 ref-cycles
2.489147029 73,784 cycles
3.490341825 60,720 ref-cycles
3.490341825 37,797 cycles
4.491540887 37,120 ref-cycles
4.491540887 31,963 cycles
The example above counts 'cycles' and 'ref-cycles' of the BPF program
with id 254. This is similar to the bpftool-prog-profile command, but
more flexible.
'perf stat -b' creates per-cpu perf_events and attaches fentry/fexit BPF
programs (monitor-progs) to the target BPF program (target-prog). The
monitor-progs read the perf_event before and after the target-prog runs,
and aggregate the difference in a BPF map. User space then reads the
data from these maps.
A new 'struct bpf_counter' is introduced to provide a common interface
that uses BPF programs/maps to count perf events.
Committer notes:
Removed all but bpf_counter.h includes from evsel.h, not needed at all.
Also, for PERCPU_ARRAY maps, the value receive buffer passed to the
kernel on lookup needs libbpf_num_possible_cpus() entries, not
evsel__nr_cpus(evsel), as the former uses
/sys/devices/system/cpu/possible while the latter uses
/sys/devices/system/cpu/online, which may be less than the 'possible'
number, making the bpf map lookup overwrite memory and cause hard to
debug memory corruption.
We need to continue using evsel__nr_cpus(evsel) when accessing the
perf_counts array though, so as not to overwrite another area of memory :-)
Signed-off-by: Song Liu <songliubraving@fb.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: https://lore.kernel.org/lkml/20210120163031.GU12699@kernel.org/
Acked-by: Namhyung Kim <namhyung@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: kernel-team@fb.com
Link: http://lore.kernel.org/lkml/20201229214214.3413833-4-songliubraving@fb.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-12-29 21:42:14 +00:00
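The delta accounting performed by the monitor-progs can be pictured with
a plain C model. This is illustrative only: the real code runs as BPF
fentry/fexit programs and accumulates into per-cpu BPF maps.

struct bpf_prog_stats {
	unsigned long long enter;	/* counter snapshot at fentry */
	unsigned long long accum;	/* total attributed to target-prog */
};

static void on_target_prog_enter(struct bpf_prog_stats *s,
				 unsigned long long (*read_counter)(void))
{
	s->enter = read_counter();
}

static void on_target_prog_exit(struct bpf_prog_stats *s,
				unsigned long long (*read_counter)(void))
{
	/* Only the counts accrued inside target-prog are accumulated. */
	s->accum += read_counter() - s->enter;
}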
|
|
|
#ifdef HAVE_BPF_SKEL
|
|
|
|
OPT_STRING('b', "bpf-prog", &target.bpf_str, "bpf-prog-id",
|
|
|
|
"stat events on existing bpf program id"),
|
perf stat: Introduce 'bperf' to share hardware PMCs with BPF
The perf tool uses performance monitoring counters (PMCs) to monitor
system performance. The PMCs are limited hardware resources. For
example, Intel CPUs have 3x fixed PMCs and 4x programmable PMCs per cpu.
Modern data center systems use these PMCs in many different ways: system
level monitoring, (maybe nested) container level monitoring, per process
monitoring, profiling (in sample mode), etc. In some cases, there are
more active perf_events than available hardware PMCs. To allow all
perf_events to have a chance to run, it is necessary to do expensive
time multiplexing of events.
On the other hand, many monitoring tools count the common metrics
(cycles, instructions). It is a waste to have multiple tools create
multiple perf_events of "cycles" and occupy multiple PMCs.
bperf tries to reduce such waste by allowing multiple perf_events of
"cycles" or "instructions" (at different scopes) to share PMUs. Instead
of having each perf-stat session read its own perf_events, bperf uses
BPF programs to read the perf_events and aggregate the readings into BPF
maps. The perf-stat session(s) then read the values from these BPF maps.
Please refer to the comment before the definition of bperf_ops for the
description of bperf architecture.
bperf is off by default. To enable it, pass the --bpf-counters option
to perf-stat. bperf uses a BPF hashmap to share information about the
BPF programs and maps used by bperf. This map is pinned to bpffs; the
default path is /sys/fs/bpf/perf_attr_map. The user can change the
path with the --bpf-attr-map option.
Committer testing:
# dmesg|grep "Performance Events" -A5
[ 0.225277] Performance Events: Fam17h+ core perfctr, AMD PMU driver.
[ 0.225280] ... version: 0
[ 0.225280] ... bit width: 48
[ 0.225281] ... generic registers: 6
[ 0.225281] ... value mask: 0000ffffffffffff
[ 0.225281] ... max period: 00007fffffffffff
#
# for a in $(seq 6) ; do perf stat -a -e cycles,instructions sleep 100000 & done
[1] 2436231
[2] 2436232
[3] 2436233
[4] 2436234
[5] 2436235
[6] 2436236
# perf stat -a -e cycles,instructions sleep 0.1
Performance counter stats for 'system wide':
310,326,987 cycles (41.87%)
236,143,290 instructions # 0.76 insn per cycle (41.87%)
0.100800885 seconds time elapsed
#
We can see that the counters were enabled for this workload 41.87% of
the time.
Now with --bpf-counters:
# for a in $(seq 32) ; do perf stat --bpf-counters -a -e cycles,instructions sleep 100000 & done
[1] 2436514
[2] 2436515
[3] 2436516
[4] 2436517
[5] 2436518
[6] 2436519
[7] 2436520
[8] 2436521
[9] 2436522
[10] 2436523
[11] 2436524
[12] 2436525
[13] 2436526
[14] 2436527
[15] 2436528
[16] 2436529
[17] 2436530
[18] 2436531
[19] 2436532
[20] 2436533
[21] 2436534
[22] 2436535
[23] 2436536
[24] 2436537
[25] 2436538
[26] 2436539
[27] 2436540
[28] 2436541
[29] 2436542
[30] 2436543
[31] 2436544
[32] 2436545
#
# ls -la /sys/fs/bpf/perf_attr_map
-rw-------. 1 root root 0 Mar 23 14:53 /sys/fs/bpf/perf_attr_map
# bpftool map | grep bperf | wc -l
64
#
# bpftool map | tail
1265: percpu_array name accum_readings flags 0x0
key 4B value 24B max_entries 1 memlock 4096B
1266: hash name filter flags 0x0
key 4B value 4B max_entries 1 memlock 4096B
1267: array name bperf_fo.bss flags 0x400
key 4B value 8B max_entries 1 memlock 4096B
btf_id 996
pids perf(2436545)
1268: percpu_array name accum_readings flags 0x0
key 4B value 24B max_entries 1 memlock 4096B
1269: hash name filter flags 0x0
key 4B value 4B max_entries 1 memlock 4096B
1270: array name bperf_fo.bss flags 0x400
key 4B value 8B max_entries 1 memlock 4096B
btf_id 997
pids perf(2436541)
1285: array name pid_iter.rodata flags 0x480
key 4B value 4B max_entries 1 memlock 4096B
btf_id 1017 frozen
pids bpftool(2437504)
1286: array flags 0x0
key 4B value 32B max_entries 1 memlock 4096B
#
# bpftool map dump id 1268 | tail
value (CPU 21):
8f f3 bc ca 00 00 00 00 80 fd 2a d1 4d 00 00 00
80 fd 2a d1 4d 00 00 00
value (CPU 22):
7e d5 64 4d 00 00 00 00 a4 8a 2e ee 4d 00 00 00
a4 8a 2e ee 4d 00 00 00
value (CPU 23):
a7 78 3e 06 01 00 00 00 b2 34 94 f6 4d 00 00 00
b2 34 94 f6 4d 00 00 00
Found 1 element
# bpftool map dump id 1268 | tail
value (CPU 21):
c6 8b d9 ca 00 00 00 00 20 c6 fc 83 4e 00 00 00
20 c6 fc 83 4e 00 00 00
value (CPU 22):
9c b4 d2 4d 00 00 00 00 3e 0c df 89 4e 00 00 00
3e 0c df 89 4e 00 00 00
value (CPU 23):
18 43 66 06 01 00 00 00 5b 69 ed 83 4e 00 00 00
5b 69 ed 83 4e 00 00 00
Found 1 element
# bpftool map dump id 1268 | tail
value (CPU 21):
f2 6e db ca 00 00 00 00 92 67 4c ba 4e 00 00 00
92 67 4c ba 4e 00 00 00
value (CPU 22):
dc 8e e1 4d 00 00 00 00 d9 32 7a c5 4e 00 00 00
d9 32 7a c5 4e 00 00 00
value (CPU 23):
bd 2b 73 06 01 00 00 00 7c 73 87 bf 4e 00 00 00
7c 73 87 bf 4e 00 00 00
Found 1 element
#
# perf stat --bpf-counters -a -e cycles,instructions sleep 0.1
Performance counter stats for 'system wide':
119,410,122 cycles
152,105,479 instructions # 1.27 insn per cycle
0.101395093 seconds time elapsed
#
See? We had the counters enabled all the time.
Signed-off-by: Song Liu <songliubraving@fb.com>
Reviewed-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: kernel-team@fb.com
Link: http://lore.kernel.org/lkml/20210316211837.910506-2-songliubraving@fb.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2021-03-16 21:18:35 +00:00
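As a quick sanity check, the pinned attr map described above can be
probed from another process with libbpf. A sketch: the map's key/value
layout is internal to perf, so this only verifies that a bperf session
has pinned it.

#include <bpf/bpf.h>
#include <stdio.h>

int main(void)
{
	/* bperf pins its sharing map here by default (see above). */
	int fd = bpf_obj_get("/sys/fs/bpf/perf_attr_map");

	if (fd < 0) {
		perror("bpf_obj_get");
		return 1;
	}
	printf("bperf attr map is pinned, fd=%d\n", fd);
	return 0;
}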
|
|
|
OPT_BOOLEAN(0, "bpf-counters", &target.use_bpf,
|
|
|
|
"use bpf program to count events"),
|
|
|
|
OPT_STRING(0, "bpf-attr-map", &target.attr_map, "attr-map-path",
|
|
|
|
"path to perf_event_attr map"),
|
perf stat: Enable counting events for BPF programs
Introduce 'perf stat -b' option, which counts events for BPF programs, like:
[root@localhost ~]# ~/perf stat -e ref-cycles,cycles -b 254 -I 1000
1.487903822 115,200 ref-cycles
1.487903822 86,012 cycles
2.489147029 80,560 ref-cycles
2.489147029 73,784 cycles
3.490341825 60,720 ref-cycles
3.490341825 37,797 cycles
4.491540887 37,120 ref-cycles
4.491540887 31,963 cycles
The example above counts 'cycles' and 'ref-cycles' of the BPF program
with id 254. This is similar to the bpftool-prog-profile command, but
more flexible.
'perf stat -b' creates per-cpu perf_events and attaches fentry/fexit BPF
programs (monitor-progs) to the target BPF program (target-prog). The
monitor-progs read the perf_event before and after the target-prog runs,
and aggregate the difference in a BPF map. User space then reads the
data from these maps.
A new 'struct bpf_counter' is introduced to provide a common interface
that uses BPF programs/maps to count perf events.
Committer notes:
Removed all but bpf_counter.h includes from evsel.h, not needed at all.
Also, for PERCPU_ARRAY maps, the value receive buffer passed to the
kernel on lookup needs libbpf_num_possible_cpus() entries, not
evsel__nr_cpus(evsel), as the former uses
/sys/devices/system/cpu/possible while the latter uses
/sys/devices/system/cpu/online, which may be less than the 'possible'
number, making the bpf map lookup overwrite memory and cause hard to
debug memory corruption.
We need to continue using evsel__nr_cpus(evsel) when accessing the
perf_counts array though, so as not to overwrite another area of memory :-)
Signed-off-by: Song Liu <songliubraving@fb.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: https://lore.kernel.org/lkml/20210120163031.GU12699@kernel.org/
Acked-by: Namhyung Kim <namhyung@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: kernel-team@fb.com
Link: http://lore.kernel.org/lkml/20201229214214.3413833-4-songliubraving@fb.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-12-29 21:42:14 +00:00
|
|
|
#endif
|
2015-11-05 14:40:45 +00:00
|
|
|
OPT_BOOLEAN('a', "all-cpus", &target.system_wide,
|
|
|
|
"system-wide collection from all CPUs"),
|
|
|
|
OPT_BOOLEAN('g', "group", &group,
|
|
|
|
"put the counters into a counter group"),
|
2019-03-14 22:50:01 +00:00
|
|
|
OPT_BOOLEAN(0, "scale", &stat_config.scale,
|
|
|
|
"Use --no-scale to disable counter scaling for multiplexing"),
|
2015-11-05 14:40:45 +00:00
|
|
|
OPT_INCR('v', "verbose", &verbose,
|
|
|
|
"be more verbose (show counter open errors, etc)"),
|
2018-08-30 06:32:36 +00:00
|
|
|
OPT_INTEGER('r', "repeat", &stat_config.run_count,
|
2015-11-05 14:40:45 +00:00
|
|
|
"repeat command and print average + stddev (max: 100, forever: 0)"),
|
2018-08-30 06:32:50 +00:00
|
|
|
OPT_BOOLEAN(0, "table", &stat_config.walltime_run_table,
|
2018-04-23 09:08:21 +00:00
|
|
|
"display details about each run (only with -r option)"),
|
2018-08-30 06:32:41 +00:00
|
|
|
OPT_BOOLEAN('n', "null", &stat_config.null_run,
|
2015-11-05 14:40:45 +00:00
|
|
|
"null run - dont start any counters"),
|
|
|
|
OPT_INCR('d', "detailed", &detailed_run,
|
|
|
|
"detailed run - start a lot of events"),
|
|
|
|
OPT_BOOLEAN('S', "sync", &sync_run,
|
|
|
|
"call sync() before starting a run"),
|
|
|
|
OPT_CALLBACK_NOOPT('B', "big-num", NULL, NULL,
|
|
|
|
"print large numbers with thousands\' separators",
|
|
|
|
stat__set_big_num),
|
|
|
|
OPT_STRING('C', "cpu", &target.cpu_list, "cpu",
|
|
|
|
"list of cpus to monitor in system-wide"),
|
|
|
|
OPT_SET_UINT('A', "no-aggr", &stat_config.aggr_mode,
|
|
|
|
"disable CPU count aggregation", AGGR_NONE),
|
2018-08-30 06:32:48 +00:00
|
|
|
OPT_BOOLEAN(0, "no-merge", &stat_config.no_merge, "Do not merge identical named events"),
|
2018-08-30 06:32:29 +00:00
|
|
|
OPT_STRING('x', "field-separator", &stat_config.csv_sep, "separator",
|
2015-11-05 14:40:45 +00:00
|
|
|
"print counts with custom separator"),
|
|
|
|
OPT_CALLBACK('G', "cgroup", &evsel_list, "name",
|
perf stat: Add --for-each-cgroup option
The --for-each-cgroup option is syntactic sugar for monitoring a large
number of cgroups easily. The current command line requires listing all
the events and cgroups even when users want to monitor the same events
for each cgroup. This patch addresses that usage by copying the given
events for each cgroup on the user's behalf.
For instance, to monitor 6 events for each of 200 cgroups, users would
have to write 1200 event names (with -e) AND 1200 cgroup names (with -G)
on the command line. With this change, they can just specify the 6
events and 200 cgroups with the new option.
A simpler example below measures 3 events for 2 cgroups ('A' and 'B'),
so a total of 6 events are counted, as shown below.
$ perf stat -a -e cpu-clock,cycles,instructions --for-each-cgroup A,B sleep 1
Performance counter stats for 'system wide':
988.18 msec cpu-clock A # 0.987 CPUs utilized
3,153,761,702 cycles A # 3.200 GHz (100.00%)
8,067,769,847 instructions A # 2.57 insn per cycle (100.00%)
982.71 msec cpu-clock B # 0.982 CPUs utilized
3,136,093,298 cycles B # 3.182 GHz (99.99%)
8,109,619,327 instructions B # 2.58 insn per cycle (99.99%)
1.001228054 seconds time elapsed
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Ian Rogers <irogers@google.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lore.kernel.org/lkml/20200924124455.336326-3-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-09-24 12:44:52 +00:00
|
|
|
"monitor event in cgroup name only", parse_stat_cgroups),
|
|
|
|
OPT_STRING(0, "for-each-cgroup", &stat_config.cgroup_list, "name",
|
|
|
|
"expand events for each cgroup"),
|
2015-11-05 14:40:45 +00:00
|
|
|
OPT_STRING('o', "output", &output_name, "file", "output file name"),
|
|
|
|
OPT_BOOLEAN(0, "append", &append_file, "append to the output file"),
|
|
|
|
OPT_INTEGER(0, "log-fd", &output_fd,
|
|
|
|
"log output to fd, instead of stderr"),
|
|
|
|
OPT_STRING(0, "pre", &pre_cmd, "command",
|
|
|
|
"command to run prior to the measured command"),
|
|
|
|
OPT_STRING(0, "post", &post_cmd, "command",
|
|
|
|
"command to run after to the measured command"),
|
|
|
|
OPT_UINTEGER('I', "interval-print", &stat_config.interval,
|
2018-04-03 18:18:33 +00:00
|
|
|
"print counts at regular interval in ms "
|
|
|
|
"(overhead is possible for values <= 100ms)"),
|
2018-01-29 09:25:22 +00:00
|
|
|
OPT_INTEGER(0, "interval-count", &stat_config.times,
|
|
|
|
"print counts for fixed number of times"),
|
2018-08-30 06:32:30 +00:00
|
|
|
OPT_BOOLEAN(0, "interval-clear", &stat_config.interval_clear,
|
2018-06-06 22:15:06 +00:00
|
|
|
"clear screen in between new interval"),
|
2018-01-29 09:25:23 +00:00
|
|
|
OPT_UINTEGER(0, "timeout", &stat_config.timeout,
|
|
|
|
"stop workload and print counts after a timeout period in ms (>= 10ms)"),
|
2015-11-05 14:40:45 +00:00
|
|
|
OPT_SET_UINT(0, "per-socket", &stat_config.aggr_mode,
|
|
|
|
"aggregate counts per processor socket", AGGR_SOCKET),
|
2019-06-04 22:50:42 +00:00
|
|
|
OPT_SET_UINT(0, "per-die", &stat_config.aggr_mode,
|
|
|
|
"aggregate counts per processor die", AGGR_DIE),
|
2015-11-05 14:40:45 +00:00
|
|
|
OPT_SET_UINT(0, "per-core", &stat_config.aggr_mode,
|
|
|
|
"aggregate counts per physical processor core", AGGR_CORE),
|
|
|
|
OPT_SET_UINT(0, "per-thread", &stat_config.aggr_mode,
|
|
|
|
"aggregate counts per thread", AGGR_THREAD),
|
perf stat: Add --per-node aggregation support
Adding a new --per-node option to aggregate counts per NUMA
node for system-wide mode measurements.
You can specify --per-node in live mode:
# perf stat -a -I 1000 -e cycles --per-node
# time node cpus counts unit events
1.000542550 N0 20 6,202,097 cycles
1.000542550 N1 20 639,559 cycles
2.002040063 N0 20 7,412,495 cycles
2.002040063 N1 20 2,185,577 cycles
3.003451699 N0 20 6,508,917 cycles
3.003451699 N1 20 765,607 cycles
...
Or in the record/report stat session:
# perf stat record -a -I 1000 -e cycles
# time counts unit events
1.000536937 10,008,468 cycles
2.002090152 9,578,539 cycles
3.003625233 7,647,869 cycles
4.005135036 7,032,086 cycles
^C 4.340902364 3,923,893 cycles
# perf stat report --per-node
# time node cpus counts unit events
1.000536937 N0 20 9,355,086 cycles
1.000536937 N1 20 653,382 cycles
2.002090152 N0 20 7,712,838 cycles
2.002090152 N1 20 1,865,701 cycles
3.003625233 N0 20 6,604,441 cycles
3.003625233 N1 20 1,043,428 cycles
4.005135036 N0 20 6,350,522 cycles
4.005135036 N1 20 681,564 cycles
4.340902364 N0 20 3,403,188 cycles
4.340902364 N1 20 520,705 cycles
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Alexey Budankov <alexey.budankov@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Joe Mario <jmario@redhat.com>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Michael Petlan <mpetlan@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20190904073415.723-4-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-08-28 08:17:43 +00:00
|
|
|
OPT_SET_UINT(0, "per-node", &stat_config.aggr_mode,
|
|
|
|
"aggregate counts per numa node", AGGR_NODE),
|
2020-07-17 07:04:33 +00:00
|
|
|
OPT_INTEGER('D', "delay", &stat_config.initial_delay,
|
|
|
|
"ms to wait before starting measurement after program start (-1: start with events disabled)"),
|
2018-08-30 06:32:31 +00:00
|
|
|
OPT_CALLBACK_NOOPT(0, "metric-only", &stat_config.metric_only, NULL,
|
perf stat: Basic support for TopDown in perf stat
Add basic plumbing for TopDown in perf stat
TopDown is intended to replace the frontend cycles idle/ backend cycles
idle metrics in standard perf stat output. These metrics are not
reliable in many workloads, due to out of order effects.
This implements a new --topdown mode in perf stat (similar to
--transaction) that measures the pipeline bottlenecks using
standardized formulas. The measurement can all be done with 5 counters
(one of them a fixed counter).
The result is four metrics:
FrontendBound, BackendBound, BadSpeculation, Retiring
that describe the CPU pipeline behavior on a high level.
The full top down methodology has many hierarchical metrics. This
implementation only supports level 1 which can be collected without
multiplexing. A full implementation of top down on top of perf is
available in pmu-tools toplev. (http://github.com/andikleen/pmu-tools)
The current version works on Intel Core CPUs starting with Sandy Bridge,
and Atom CPUs starting with Silvermont. In principle the generic
metrics should also be implementable on other out of order CPUs.
TopDown level 1 uses a set of abstracted metrics which are generic to
out of order CPU cores (although some CPUs may not implement all of
them):
topdown-total-slots Available slots in the pipeline
topdown-slots-issued Slots issued into the pipeline
topdown-slots-retired Slots successfully retired
topdown-fetch-bubbles Pipeline gaps in the frontend
topdown-recovery-bubbles Pipeline gaps during recovery
from misspeculation
These events then allow computing four useful metrics:
FrontendBound, BackendBound, Retiring, BadSpeculation.
Add a new --topdown option to enable the events. When --topdown is
specified, set up events for all topdown events supported by the kernel.
Add topdown-* as a special case to the event parser, as is needed for
all events containing '-'.
The actual code to compute the metrics is in follow-on patches.
v2: Use standard sysctl read function.
v3: Move x86 specific code to arch/
v4: Enable --metric-only implicitly for topdown.
v5: Add --single-thread option to not force per core mode
v6: Fix output order of topdown metrics
v7: Allow combining with -d
v8: Remove --single-thread again
v9: Rename functions, adding arch_ and topdown_.
v10: Expand man page and describe TopDown better
Paste intro into commit description.
Print error when malloc fails.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: http://lkml.kernel.org/r/1464119559-17203-1-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-30 15:49:42 +00:00
|
|
|
"Only print computed metrics. No raw values", enable_metric_only),
|
2020-05-20 18:20:10 +00:00
|
|
|
OPT_BOOLEAN(0, "metric-no-group", &stat_config.metric_no_group,
|
|
|
|
"don't group metric events, impacts multiplexing"),
|
|
|
|
OPT_BOOLEAN(0, "metric-no-merge", &stat_config.metric_no_merge,
|
|
|
|
"don't try to share events between metrics in a group"),
|
perf stat: Basic support for TopDown in perf stat
Add basic plumbing for TopDown in perf stat
TopDown is intended to replace the frontend cycles idle/ backend cycles
idle metrics in standard perf stat output. These metrics are not
reliable in many workloads, due to out of order effects.
This implements a new --topdown mode in perf stat (similar to
--transaction) that measures the pipeline bottlenecks using
standardized formulas. The measurement can all be done with 5 counters
(one of them a fixed counter).
The result is four metrics:
FrontendBound, BackendBound, BadSpeculation, Retiring
that describe the CPU pipeline behavior on a high level.
The full top down methodology has many hierarchical metrics. This
implementation only supports level 1 which can be collected without
multiplexing. A full implementation of top down on top of perf is
available in pmu-tools toplev. (http://github.com/andikleen/pmu-tools)
The current version works on Intel Core CPUs starting with Sandy Bridge,
and Atom CPUs starting with Silvermont. In principle the generic
metrics should also be implementable on other out of order CPUs.
TopDown level 1 uses a set of abstracted metrics which are generic to
out of order CPU cores (although some CPUs may not implement all of
them):
topdown-total-slots Available slots in the pipeline
topdown-slots-issued Slots issued into the pipeline
topdown-slots-retired Slots successfully retired
topdown-fetch-bubbles Pipeline gaps in the frontend
topdown-recovery-bubbles Pipeline gaps during recovery
from misspeculation
These events then allow computing four useful metrics:
FrontendBound, BackendBound, Retiring, BadSpeculation.
Add a new --topdown option to enable the events. When --topdown is
specified, set up events for all topdown events supported by the kernel.
Add topdown-* as a special case to the event parser, as is needed for
all events containing '-'.
The actual code to compute the metrics is in follow-on patches.
v2: Use standard sysctl read function.
v3: Move x86 specific code to arch/
v4: Enable --metric-only implicitly for topdown.
v5: Add --single-thread option to not force per core mode
v6: Fix output order of topdown metrics
v7: Allow combining with -d
v8: Remove --single-thread again
v9: Rename functions, adding arch_ and topdown_.
v10: Expand man page and describe TopDown better
Paste intro into commit description.
Print error when malloc fails.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: http://lkml.kernel.org/r/1464119559-17203-1-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-30 15:49:42 +00:00
|
|
|
OPT_BOOLEAN(0, "topdown", &topdown_run,
|
perf stat: Support L2 Topdown events
The TMA method level 2 metrics are supported starting with the Intel
Sapphire Rapids server, which exposes four L2 Topdown metric events to
user space. There are eight L2 events in total; the other four L2
Topdown metric events are calculated from the corresponding L1 and the
exposed L2 events.
Now --topdown prints the complete top-down metrics supported by the
CPU. For the Intel Sapphire Rapids server, there are 4 L1 events and
8 L2 events displayed in one line.
Add a new option, --td-level, to display the top-down statistics at or
below the given level.
An L2 event is marked only when both its L1 parent event and the event
itself cross the threshold.
Here is an example:
$ perf stat --topdown --td-level=2 --no-metric-only sleep 1
Topdown accuracy may decrease when measuring long periods.
Please print the result regularly, e.g. -I1000
Performance counter stats for 'sleep 1':
16,734,390 slots
2,100,001 topdown-retiring # 12.6% retiring
2,034,376 topdown-bad-spec # 12.3% bad speculation
4,003,128 topdown-fe-bound # 24.1% frontend bound
328,125 topdown-heavy-ops # 2.0% heavy operations # 10.6% light operations
1,968,751 topdown-br-mispredict # 11.9% branch mispredict # 0.4% machine clears
2,953,127 topdown-fetch-lat # 17.8% fetch latency # 6.3% fetch bandwidth
5,906,255 topdown-mem-bound # 35.6% memory bound # 15.4% core bound
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jin Yao <yao.jin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lore.kernel.org/lkml/1612296553-21962-9-git-send-email-kan.liang@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2021-02-02 20:09:12 +00:00
|
|
|
"measure top-down statistics"),
|
|
|
|
OPT_UINTEGER(0, "td-level", &stat_config.topdown_level,
|
|
|
|
"Set the metrics level for the top-down statistics (0: max level)"),
|
2017-05-26 19:05:38 +00:00
|
|
|
OPT_BOOLEAN(0, "smi-cost", &smi_cost,
|
|
|
|
"measure SMI cost"),
|
2017-08-31 19:40:31 +00:00
|
|
|
OPT_CALLBACK('M', "metrics", &evsel_list, "metric/metric group list",
|
|
|
|
"monitor specified metrics or metric groups (separated by ,)",
|
|
|
|
parse_metric_groups),
|
2019-10-11 05:05:45 +00:00
|
|
|
OPT_BOOLEAN_FLAG(0, "all-kernel", &stat_config.all_kernel,
|
|
|
|
"Configure all used events to run in kernel space.",
|
|
|
|
PARSE_OPT_EXCLUSIVE),
|
|
|
|
OPT_BOOLEAN_FLAG(0, "all-user", &stat_config.all_user,
|
|
|
|
"Configure all used events to run in user space.",
|
|
|
|
PARSE_OPT_EXCLUSIVE),
|
perf stat: Show percore counts in per CPU output
We have supported the event modifier "percore", which sums up the event
counts for all hardware threads in a core and shows the counts per core.
For example,
# perf stat -e cpu/event=cpu-cycles,percore/ -a -A -- sleep 1
Performance counter stats for 'system wide':
S0-D0-C0 395,072 cpu/event=cpu-cycles,percore/
S0-D0-C1 851,248 cpu/event=cpu-cycles,percore/
S0-D0-C2 954,226 cpu/event=cpu-cycles,percore/
S0-D0-C3 1,233,659 cpu/event=cpu-cycles,percore/
This patch provides a new option "--percore-show-thread". It is used
together with the event modifier "percore" to sum up the event counts
for all hardware threads in a core but show the counts per hardware
thread.
This is essentially a replacement for the any bit (which is gone in
Icelake). Per core counts are useful for some formulas, e.g. CoreIPC.
The original percore version was inconvenient to post process. This
variant matches the output of the any bit.
With this patch, for example,
# perf stat -e cpu/event=cpu-cycles,percore/ -a -A --percore-show-thread -- sleep 1
Performance counter stats for 'system wide':
CPU0 2,453,061 cpu/event=cpu-cycles,percore/
CPU1 1,823,921 cpu/event=cpu-cycles,percore/
CPU2 1,383,166 cpu/event=cpu-cycles,percore/
CPU3 1,102,652 cpu/event=cpu-cycles,percore/
CPU4 2,453,061 cpu/event=cpu-cycles,percore/
CPU5 1,823,921 cpu/event=cpu-cycles,percore/
CPU6 1,383,166 cpu/event=cpu-cycles,percore/
CPU7 1,102,652 cpu/event=cpu-cycles,percore/
We can see counts are duplicated in CPU pairs (CPU0/CPU4, CPU1/CPU5,
CPU2/CPU6, CPU3/CPU7).
The interval mode also works. For example,
# perf stat -e cpu/event=cpu-cycles,percore/ -a -A --percore-show-thread -I 1000
# time CPU counts unit events
1.000425421 CPU0 925,032 cpu/event=cpu-cycles,percore/
1.000425421 CPU1 430,202 cpu/event=cpu-cycles,percore/
1.000425421 CPU2 436,843 cpu/event=cpu-cycles,percore/
1.000425421 CPU3 1,192,504 cpu/event=cpu-cycles,percore/
1.000425421 CPU4 925,032 cpu/event=cpu-cycles,percore/
1.000425421 CPU5 430,202 cpu/event=cpu-cycles,percore/
1.000425421 CPU6 436,843 cpu/event=cpu-cycles,percore/
1.000425421 CPU7 1,192,504 cpu/event=cpu-cycles,percore/
If we offline CPU5, the result is:
# perf stat -e cpu/event=cpu-cycles,percore/ -a -A --percore-show-thread -- sleep 1
Performance counter stats for 'system wide':
CPU0 2,752,148 cpu/event=cpu-cycles,percore/
CPU1 1,009,312 cpu/event=cpu-cycles,percore/
CPU2 2,784,072 cpu/event=cpu-cycles,percore/
CPU3 2,427,922 cpu/event=cpu-cycles,percore/
CPU4 2,752,148 cpu/event=cpu-cycles,percore/
CPU6 2,784,072 cpu/event=cpu-cycles,percore/
CPU7 2,427,922 cpu/event=cpu-cycles,percore/
1.001416041 seconds time elapsed
v4:
---
Ravi Bangoria reported an issue in v3: once we offline a CPU,
the output is not correct. The fix is to use the cpu idx in
print_percore_thread rather than the cpu value.
v3:
---
1. Fix the interval mode output error
2. Use cpu value (not cpu index) in config->aggr_get_id().
3. Refine the code according to Jiri's comments.
v2:
---
Add the explanation in change log. This is essentially a replacement
for the any bit. No code change.
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Tested-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/20200214080452.26402-1-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-02-14 08:04:52 +00:00
|
|
|
OPT_BOOLEAN(0, "percore-show-thread", &stat_config.percore_show_thread,
|
|
|
|
"Use with 'percore' event qualifier to show the event "
|
|
|
|
"counts of one hardware thread by sum up total hardware "
|
|
|
|
"threads of same physical core"),
|
perf stat: Turn off summary for interval mode by default
There's a risk that outputting interval mode summaries by default breaks
CSV consumers. It already broke pmu-tools/toplev.
So now we turn off the summary by default and create a new option,
'--summary', to enable it. The option works even when not using CSV
mode.
Before:
root@kbl-ppc:~# perf stat -I1000 --interval-count 2
# time counts unit events
1.000265904 8,005.73 msec cpu-clock # 8.006 CPUs utilized
1.000265904 601 context-switches # 0.075 K/sec
1.000265904 10 cpu-migrations # 0.001 K/sec
1.000265904 0 page-faults # 0.000 K/sec
1.000265904 66,746,521 cycles # 0.008 GHz
1.000265904 71,874,398 instructions # 1.08 insn per cycle
1.000265904 13,356,781 branches # 1.668 M/sec
1.000265904 298,756 branch-misses # 2.24% of all branches
2.001857667 8,012.52 msec cpu-clock # 8.013 CPUs utilized
2.001857667 164 context-switches # 0.020 K/sec
2.001857667 10 cpu-migrations # 0.001 K/sec
2.001857667 2 page-faults # 0.000 K/sec
2.001857667 5,822,188 cycles # 0.001 GHz
2.001857667 2,186,170 instructions # 0.38 insn per cycle
2.001857667 442,378 branches # 0.055 M/sec
2.001857667 44,750 branch-misses # 10.12% of all branches
Performance counter stats for 'system wide':
16,018.25 msec cpu-clock # 7.993 CPUs utilized
765 context-switches # 0.048 K/sec
20 cpu-migrations # 0.001 K/sec
2 page-faults # 0.000 K/sec
72,568,709 cycles # 0.005 GHz
74,060,568 instructions # 1.02 insn per cycle
13,799,159 branches # 0.861 M/sec
343,506 branch-misses # 2.49% of all branches
2.004118489 seconds time elapsed
After:
root@kbl-ppc:~# perf stat -I1000 --interval-count 2
# time counts unit events
1.001336393 8,013.28 msec cpu-clock # 8.013 CPUs utilized
1.001336393 82 context-switches # 0.010 K/sec
1.001336393 8 cpu-migrations # 0.001 K/sec
1.001336393 0 page-faults # 0.000 K/sec
1.001336393 4,199,121 cycles # 0.001 GHz
1.001336393 1,373,991 instructions # 0.33 insn per cycle
1.001336393 270,681 branches # 0.034 M/sec
1.001336393 31,659 branch-misses # 11.70% of all branches
2.003905006 8,020.52 msec cpu-clock # 8.021 CPUs utilized
2.003905006 184 context-switches # 0.023 K/sec
2.003905006 8 cpu-migrations # 0.001 K/sec
2.003905006 2 page-faults # 0.000 K/sec
2.003905006 5,446,190 cycles # 0.001 GHz
2.003905006 2,312,547 instructions # 0.42 insn per cycle
2.003905006 451,691 branches # 0.056 M/sec
2.003905006 37,925 branch-misses # 8.40% of all branches
root@kbl-ppc:~# perf stat -I1000 --interval-count 2 --summary
# time counts unit events
1.001313128 8,013.20 msec cpu-clock # 8.013 CPUs utilized
1.001313128 83 context-switches # 0.010 K/sec
1.001313128 8 cpu-migrations # 0.001 K/sec
1.001313128 0 page-faults # 0.000 K/sec
1.001313128 4,470,950 cycles # 0.001 GHz
1.001313128 1,440,045 instructions # 0.32 insn per cycle
1.001313128 283,222 branches # 0.035 M/sec
1.001313128 33,576 branch-misses # 11.86% of all branches
2.003857385 8,020.34 msec cpu-clock # 8.020 CPUs utilized
2.003857385 154 context-switches # 0.019 K/sec
2.003857385 8 cpu-migrations # 0.001 K/sec
2.003857385 2 page-faults # 0.000 K/sec
2.003857385 4,515,676 cycles # 0.001 GHz
2.003857385 2,180,449 instructions # 0.48 insn per cycle
2.003857385 435,254 branches # 0.054 M/sec
2.003857385 31,179 branch-misses # 7.16% of all branches
Performance counter stats for 'system wide':
16,033.53 msec cpu-clock # 7.992 CPUs utilized
237 context-switches # 0.015 K/sec
16 cpu-migrations # 0.001 K/sec
2 page-faults # 0.000 K/sec
8,986,626 cycles # 0.001 GHz
3,620,494 instructions # 0.40 insn per cycle
718,476 branches # 0.045 M/sec
64,755 branch-misses # 9.01% of all branches
2.006124542 seconds time elapsed
Fixes: c7e5b328a8d4 ("perf stat: Report summary for interval mode")
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/20200903010113.32232-1-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-09-03 01:01:13 +00:00
|
|
|
OPT_BOOLEAN(0, "summary", &stat_config.summary,
|
|
|
|
"print summary for interval mode"),
|
perf stat: Align CSV output for summary mode
The 'perf stat' subcommand supports requesting a summary of the
interval counter readings, but the summary lines break the CSV output,
making it hard for scripts to parse the result.
Before:
# perf stat -x, -I1000 --interval-count 1 --summary
1.001323097,8013.48,msec,cpu-clock,8013483384,100.00,8.013,CPUs utilized
1.001323097,270,,context-switches,8013513297,100.00,0.034,K/sec
1.001323097,13,,cpu-migrations,8013530032,100.00,0.002,K/sec
1.001323097,184,,page-faults,8013546992,100.00,0.023,K/sec
1.001323097,20574191,,cycles,8013551506,100.00,0.003,GHz
1.001323097,10562267,,instructions,8013564958,100.00,0.51,insn per cycle
1.001323097,2019244,,branches,8013575673,100.00,0.252,M/sec
1.001323097,106152,,branch-misses,8013585776,100.00,5.26,of all branches
8013.48,msec,cpu-clock,8013483384,100.00,7.984,CPUs utilized
270,,context-switches,8013513297,100.00,0.034,K/sec
13,,cpu-migrations,8013530032,100.00,0.002,K/sec
184,,page-faults,8013546992,100.00,0.023,K/sec
20574191,,cycles,8013551506,100.00,0.003,GHz
10562267,,instructions,8013564958,100.00,0.51,insn per cycle
2019244,,branches,8013575673,100.00,0.252,M/sec
106152,,branch-misses,8013585776,100.00,5.26,of all branches
The summary lines lack the timestamp column, which breaks the CSV
output.
We add a column at the original 'timestamp' position which simply says
'summary' for the summary lines.
After:
# perf stat -x, -I1000 --interval-count 1 --summary
1.001196053,8012.72,msec,cpu-clock,8012722903,100.00,8.013,CPUs utilized
1.001196053,218,,context-switches,8012753271,100.00,0.027,K/sec
1.001196053,9,,cpu-migrations,8012769767,100.00,0.001,K/sec
1.001196053,0,,page-faults,8012786257,100.00,0.000,K/sec
1.001196053,15004518,,cycles,8012790637,100.00,0.002,GHz
1.001196053,7954691,,instructions,8012804027,100.00,0.53,insn per cycle
1.001196053,1590259,,branches,8012814766,100.00,0.198,M/sec
1.001196053,82601,,branch-misses,8012824365,100.00,5.19,of all branches
summary,8012.72,msec,cpu-clock,8012722903,100.00,7.986,CPUs utilized
summary,218,,context-switches,8012753271,100.00,0.027,K/sec
summary,9,,cpu-migrations,8012769767,100.00,0.001,K/sec
summary,0,,page-faults,8012786257,100.00,0.000,K/sec
summary,15004518,,cycles,8012790637,100.00,0.002,GHz
summary,7954691,,instructions,8012804027,100.00,0.53,insn per cycle
summary,1590259,,branches,8012814766,100.00,0.198,M/sec
summary,82601,,branch-misses,8012824365,100.00,5.19,of all branches
Now it's easy for scripts to analyse the summary lines.
Of course, we also take care not to break possible existing scripts,
which can continue to use the broken CSV format by passing the new
'--no-csv-summary' option.
# perf stat -x, -I1000 --interval-count 1 --summary --no-csv-summary
1.001213261,8012.67,msec,cpu-clock,8012672327,100.00,8.013,CPUs utilized
1.001213261,197,,context-switches,8012703742,100.00,24.586,/sec
1.001213261,9,,cpu-migrations,8012720902,100.00,1.123,/sec
1.001213261,644,,page-faults,8012738266,100.00,80.373,/sec
1.001213261,18350698,,cycles,8012744109,100.00,0.002,GHz
1.001213261,12745021,,instructions,8012759001,100.00,0.69,insn per cycle
1.001213261,2458033,,branches,8012770864,100.00,306.768,K/sec
1.001213261,102107,,branch-misses,8012781751,100.00,4.15,of all branches
8012.67,msec,cpu-clock,8012672327,100.00,7.985,CPUs utilized
197,,context-switches,8012703742,100.00,24.586,/sec
9,,cpu-migrations,8012720902,100.00,1.123,/sec
644,,page-faults,8012738266,100.00,80.373,/sec
18350698,,cycles,8012744109,100.00,0.002,GHz
12745021,,instructions,8012759001,100.00,0.69,insn per cycle
2458033,,branches,8012770864,100.00,306.768,K/sec
102107,,branch-misses,8012781751,100.00,4.15,of all branches
This option can be enabled in perf config by setting the variable
'stat.no-csv-summary'.
# perf config stat.no-csv-summary=true
# perf config -l
stat.no-csv-summary=true
# perf stat -x, -I1000 --interval-count 1 --summary
1.001330198,8013.28,msec,cpu-clock,8013279201,100.00,8.013,CPUs utilized
1.001330198,205,,context-switches,8013308394,100.00,25.583,/sec
1.001330198,10,,cpu-migrations,8013324681,100.00,1.248,/sec
1.001330198,0,,page-faults,8013340926,100.00,0.000,/sec
1.001330198,8027742,,cycles,8013344503,100.00,0.001,GHz
1.001330198,2871717,,instructions,8013356501,100.00,0.36,insn per cycle
1.001330198,553564,,branches,8013366204,100.00,69.081,K/sec
1.001330198,54021,,branch-misses,8013375952,100.00,9.76,of all branches
8013.28,msec,cpu-clock,8013279201,100.00,7.985,CPUs utilized
205,,context-switches,8013308394,100.00,25.583,/sec
10,,cpu-migrations,8013324681,100.00,1.248,/sec
0,,page-faults,8013340926,100.00,0.000,/sec
8027742,,cycles,8013344503,100.00,0.001,GHz
2871717,,instructions,8013356501,100.00,0.36,insn per cycle
553564,,branches,8013366204,100.00,69.081,K/sec
54021,,branch-misses,8013375952,100.00,9.76,of all branches
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Acked-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jin Yao <yao.jin@intel.com>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/20210319070156.20394-1-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2021-03-19 07:01:55 +00:00
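With the 'summary' prefix in place, a CSV consumer can tell the two
kinds of rows apart by the first field alone. A minimal sketch that
reads the -x, output from stdin:

#include <stdio.h>
#include <string.h>

int main(void)
{
	char line[512];

	while (fgets(line, sizeof(line), stdin)) {
		/* Summary rows carry "summary" where the timestamp was. */
		if (!strncmp(line, "summary,", 8))
			printf("summary row: %s", line);
		else
			printf("interval row: %s", line);
	}
	return 0;
}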
|
|
|
OPT_BOOLEAN(0, "no-csv-summary", &stat_config.no_csv_summary,
|
|
|
|
"don't print 'summary' for CSV summary output"),
|
2020-10-27 00:27:36 +00:00
|
|
|
OPT_BOOLEAN(0, "quiet", &stat_config.quiet,
|
|
|
|
"don't print output (useful with record)"),
|
2020-05-05 18:29:43 +00:00
|
|
|
#ifdef HAVE_LIBPFM
|
|
|
|
OPT_CALLBACK(0, "pfm-events", &evsel_list, "event",
|
|
|
|
"libpfm4 event selector. use 'perf list' to list available events",
|
|
|
|
parse_libpfm_events_option),
|
|
|
|
#endif
|
2020-09-02 10:57:07 +00:00
|
|
|
OPT_CALLBACK(0, "control", &stat_config, "fd:ctl-fd[,ack-fd] or fifo:ctl-fifo[,ack-fifo]",
|
2020-07-17 07:05:41 +00:00
|
|
|
"Listen on ctl-fd descriptor for command to control measurement ('enable': enable events, 'disable': disable events).\n"
|
2020-09-02 10:57:07 +00:00
|
|
|
"\t\t\t Optionally send control command completion ('ack\\n') to ack-fd descriptor.\n"
|
|
|
|
"\t\t\t Alternatively, ctl-fifo / ack-fifo will be opened and used as ctl-fd / ack-fd.",
|
2020-07-17 07:05:41 +00:00
|
|
|
parse_control_option),
|
2021-04-19 09:41:44 +00:00
|
|
|
OPT_CALLBACK_OPTARG(0, "iostat", &evsel_list, &stat_config, "default",
|
|
|
|
"measure I/O performance metrics provided by arch/platform",
|
|
|
|
iostat_parse),
|
2015-11-05 14:40:45 +00:00
|
|
|
OPT_END()
|
|
|
|
};
|
|
|
|
|
2020-11-26 14:13:20 +00:00
|
|
|
static struct aggr_cpu_id perf_stat__get_socket(struct perf_stat_config *config __maybe_unused,
|
2019-07-21 11:23:49 +00:00
|
|
|
struct perf_cpu_map *map, int cpu)
|
2015-10-16 10:41:15 +00:00
|
|
|
{
|
|
|
|
return cpu_map__get_socket(map, cpu, NULL);
|
|
|
|
}
|
|
|
|
|
2020-11-26 14:13:20 +00:00
|
|
|
static struct aggr_cpu_id perf_stat__get_die(struct perf_stat_config *config __maybe_unused,
|
2019-07-21 11:23:49 +00:00
|
|
|
struct perf_cpu_map *map, int cpu)
|
2019-06-04 22:50:42 +00:00
|
|
|
{
|
|
|
|
return cpu_map__get_die(map, cpu, NULL);
|
|
|
|
}
|
|
|
|
|
2020-11-26 14:13:20 +00:00
|
|
|
static struct aggr_cpu_id perf_stat__get_core(struct perf_stat_config *config __maybe_unused,
|
2019-07-21 11:23:49 +00:00
|
|
|
struct perf_cpu_map *map, int cpu)
|
2015-10-16 10:41:15 +00:00
|
|
|
{
|
|
|
|
return cpu_map__get_core(map, cpu, NULL);
|
|
|
|
}
|
|
|
|
|
2020-11-26 14:13:20 +00:00
|
|
|
static struct aggr_cpu_id perf_stat__get_node(struct perf_stat_config *config __maybe_unused,
|
perf stat: Add --per-node aggregation support
Adding a new --per-node option to aggregate counts per NUMA
node for system-wide mode measurements.
You can specify --per-node in live mode:
# perf stat -a -I 1000 -e cycles --per-node
# time node cpus counts unit events
1.000542550 N0 20 6,202,097 cycles
1.000542550 N1 20 639,559 cycles
2.002040063 N0 20 7,412,495 cycles
2.002040063 N1 20 2,185,577 cycles
3.003451699 N0 20 6,508,917 cycles
3.003451699 N1 20 765,607 cycles
...
Or in the record/report stat session:
# perf stat record -a -I 1000 -e cycles
# time counts unit events
1.000536937 10,008,468 cycles
2.002090152 9,578,539 cycles
3.003625233 7,647,869 cycles
4.005135036 7,032,086 cycles
^C 4.340902364 3,923,893 cycles
# perf stat report --per-node
# time node cpus counts unit events
1.000536937 N0 20 9,355,086 cycles
1.000536937 N1 20 653,382 cycles
2.002090152 N0 20 7,712,838 cycles
2.002090152 N1 20 1,865,701 cycles
3.003625233 N0 20 6,604,441 cycles
3.003625233 N1 20 1,043,428 cycles
4.005135036 N0 20 6,350,522 cycles
4.005135036 N1 20 681,564 cycles
4.340902364 N0 20 3,403,188 cycles
4.340902364 N1 20 520,705 cycles
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Alexey Budankov <alexey.budankov@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Joe Mario <jmario@redhat.com>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Michael Petlan <mpetlan@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20190904073415.723-4-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-08-28 08:17:43 +00:00
|
|
|
struct perf_cpu_map *map, int cpu)
|
|
|
|
{
|
|
|
|
return cpu_map__get_node(map, cpu, NULL);
|
|
|
|
}
|
|
|
|
|
2020-11-26 14:13:20 +00:00
|
|
|
static struct aggr_cpu_id perf_stat__get_aggr(struct perf_stat_config *config,
|
2019-07-21 11:23:49 +00:00
|
|
|
aggr_get_id_t get_id, struct perf_cpu_map *map, int idx)
|
2015-10-25 14:51:18 +00:00
|
|
|
{
|
|
|
|
int cpu;
|
2020-11-26 14:13:20 +00:00
|
|
|
struct aggr_cpu_id id = cpu_map__empty_aggr_cpu_id();
|
2015-10-25 14:51:18 +00:00
|
|
|
|
|
|
|
if (idx >= map->nr)
|
2020-11-26 14:13:20 +00:00
|
|
|
return id;
|
2015-10-25 14:51:18 +00:00
|
|
|
|
|
|
|
cpu = map->map[idx];
|
|
|
|
|
2020-11-26 14:13:23 +00:00
|
|
|
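	/* Compute the aggregate id for this cpu once and memoize it. */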
if (cpu_map__aggr_cpu_id_is_empty(config->cpus_aggr_map->map[cpu]))
|
|
|
|
config->cpus_aggr_map->map[cpu] = get_id(config, map, idx);
|
2015-10-25 14:51:18 +00:00
|
|
|
|
2020-11-26 14:13:23 +00:00
|
|
|
id = config->cpus_aggr_map->map[cpu];
|
2020-11-26 14:13:20 +00:00
|
|
|
return id;
|
2015-10-25 14:51:18 +00:00
|
|
|
}
|
|
|
|
|
2020-11-26 14:13:20 +00:00
|
|
|
static struct aggr_cpu_id perf_stat__get_socket_cached(struct perf_stat_config *config,
|
2019-07-21 11:23:49 +00:00
|
|
|
struct perf_cpu_map *map, int idx)
|
2015-10-25 14:51:18 +00:00
|
|
|
{
|
2018-08-30 06:32:45 +00:00
|
|
|
return perf_stat__get_aggr(config, perf_stat__get_socket, map, idx);
|
2015-10-25 14:51:18 +00:00
|
|
|
}
|
|
|
|
|
2020-11-26 14:13:20 +00:00
|
|
|
static struct aggr_cpu_id perf_stat__get_die_cached(struct perf_stat_config *config,
|
2019-07-21 11:23:49 +00:00
|
|
|
struct perf_cpu_map *map, int idx)
|
2019-06-04 22:50:42 +00:00
|
|
|
{
|
|
|
|
return perf_stat__get_aggr(config, perf_stat__get_die, map, idx);
|
|
|
|
}
|
|
|
|
|
2020-11-26 14:13:20 +00:00
|
|
|
static struct aggr_cpu_id perf_stat__get_core_cached(struct perf_stat_config *config,
|
2019-07-21 11:23:49 +00:00
|
|
|
struct perf_cpu_map *map, int idx)
|
2015-10-25 14:51:18 +00:00
|
|
|
{
|
2018-08-30 06:32:45 +00:00
|
|
|
return perf_stat__get_aggr(config, perf_stat__get_core, map, idx);
|
2015-10-25 14:51:18 +00:00
|
|
|
}
|
|
|
|
|
2020-11-26 14:13:20 +00:00
|
|
|
static struct aggr_cpu_id perf_stat__get_node_cached(struct perf_stat_config *config,
|
perf stat: Add --per-node aggregation support
Adding a new --per-node option to aggregate counts per NUMA
node for system-wide mode measurements.
You can specify --per-node in live mode:
# perf stat -a -I 1000 -e cycles --per-node
# time node cpus counts unit events
1.000542550 N0 20 6,202,097 cycles
1.000542550 N1 20 639,559 cycles
2.002040063 N0 20 7,412,495 cycles
2.002040063 N1 20 2,185,577 cycles
3.003451699 N0 20 6,508,917 cycles
3.003451699 N1 20 765,607 cycles
...
Or in the record/report stat session:
# perf stat record -a -I 1000 -e cycles
# time counts unit events
1.000536937 10,008,468 cycles
2.002090152 9,578,539 cycles
3.003625233 7,647,869 cycles
4.005135036 7,032,086 cycles
^C 4.340902364 3,923,893 cycles
# perf stat report --per-node
# time node cpus counts unit events
1.000536937 N0 20 9,355,086 cycles
1.000536937 N1 20 653,382 cycles
2.002090152 N0 20 7,712,838 cycles
2.002090152 N1 20 1,865,701 cycles
3.003625233 N0 20 6,604,441 cycles
3.003625233 N1 20 1,043,428 cycles
4.005135036 N0 20 6,350,522 cycles
4.005135036 N1 20 681,564 cycles
4.340902364 N0 20 3,403,188 cycles
4.340902364 N1 20 520,705 cycles
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Alexey Budankov <alexey.budankov@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Joe Mario <jmario@redhat.com>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Michael Petlan <mpetlan@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20190904073415.723-4-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-08-28 08:17:43 +00:00
|
|
|
struct perf_cpu_map *map, int idx)
|
|
|
|
{
|
|
|
|
return perf_stat__get_aggr(config, perf_stat__get_node, map, idx);
|
|
|
|
}
|
|
|
|
|
perf stat: Support 'percore' event qualifier
With this patch, we can use the 'percore' event qualifier in perf-stat.
root@skl:/tmp# perf stat -e cpu/event=0,umask=0x3,percore=1/,cpu/event=0,umask=0x3/ -a -A -I1000
1.000773050 S0-C0 98,352,832 cpu/event=0,umask=0x3,percore=1/ (50.01%)
1.000773050 S0-C1 103,763,057 cpu/event=0,umask=0x3,percore=1/ (50.02%)
1.000773050 S0-C2 196,776,995 cpu/event=0,umask=0x3,percore=1/ (50.02%)
1.000773050 S0-C3 176,493,779 cpu/event=0,umask=0x3,percore=1/ (50.02%)
1.000773050 CPU0 47,699,641 cpu/event=0,umask=0x3/ (50.02%)
1.000773050 CPU1 49,052,451 cpu/event=0,umask=0x3/ (49.98%)
1.000773050 CPU2 102,771,422 cpu/event=0,umask=0x3/ (49.98%)
1.000773050 CPU3 100,784,662 cpu/event=0,umask=0x3/ (49.98%)
1.000773050 CPU4 43,171,342 cpu/event=0,umask=0x3/ (49.98%)
1.000773050 CPU5 54,152,158 cpu/event=0,umask=0x3/ (49.98%)
1.000773050 CPU6 93,618,410 cpu/event=0,umask=0x3/ (49.98%)
1.000773050 CPU7 74,477,589 cpu/event=0,umask=0x3/ (49.99%)
In this example, we count the event 'ref-cycles' per-core and per-CPU in
one perf stat command-line. From the output, we can see:
S0-C0 = CPU0 + CPU4
S0-C1 = CPU1 + CPU5
S0-C2 = CPU2 + CPU6
S0-C3 = CPU3 + CPU7
So the result is as expected (tiny differences are ignored).
Note that the 'percore' event qualifier needs to be used together with
the '-A' option.
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Tested-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jin Yao <yao.jin@intel.com>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1555077590-27664-4-git-send-email-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-04-12 13:59:49 +00:00
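The sibling summation shown above (S0-C0 = CPU0 + CPU4 and so on) boils
down to the sketch below, where the hypothetical cpu_to_core() callback
stands in for the real topology lookup:

#include <stdint.h>

static uint64_t sum_core_counts(const uint64_t *cpu_counts, int nr_cpus,
				int core, int (*cpu_to_core)(int))
{
	uint64_t sum = 0;
	int cpu;

	/* Add up every hardware thread that belongs to 'core'. */
	for (cpu = 0; cpu < nr_cpus; cpu++)
		if (cpu_to_core(cpu) == core)
			sum += cpu_counts[cpu];
	return sum;
}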
|
|
|
static bool term_percore_set(void)
|
|
|
|
{
|
2019-07-21 11:23:51 +00:00
|
|
|
struct evsel *counter;
|
perf stat: Support 'percore' event qualifier
With this patch, we can use the 'percore' event qualifier in perf-stat.
root@skl:/tmp# perf stat -e cpu/event=0,umask=0x3,percore=1/,cpu/event=0,umask=0x3/ -a -A -I1000
1.000773050 S0-C0 98,352,832 cpu/event=0,umask=0x3,percore=1/ (50.01%)
1.000773050 S0-C1 103,763,057 cpu/event=0,umask=0x3,percore=1/ (50.02%)
1.000773050 S0-C2 196,776,995 cpu/event=0,umask=0x3,percore=1/ (50.02%)
1.000773050 S0-C3 176,493,779 cpu/event=0,umask=0x3,percore=1/ (50.02%)
1.000773050 CPU0 47,699,641 cpu/event=0,umask=0x3/ (50.02%)
1.000773050 CPU1 49,052,451 cpu/event=0,umask=0x3/ (49.98%)
1.000773050 CPU2 102,771,422 cpu/event=0,umask=0x3/ (49.98%)
1.000773050 CPU3 100,784,662 cpu/event=0,umask=0x3/ (49.98%)
1.000773050 CPU4 43,171,342 cpu/event=0,umask=0x3/ (49.98%)
1.000773050 CPU5 54,152,158 cpu/event=0,umask=0x3/ (49.98%)
1.000773050 CPU6 93,618,410 cpu/event=0,umask=0x3/ (49.98%)
1.000773050 CPU7 74,477,589 cpu/event=0,umask=0x3/ (49.99%)
In this example, we count the event 'ref-cycles' per-core and per-CPU in
one perf stat command-line. From the output, we can see:
S0-C0 = CPU0 + CPU4
S0-C1 = CPU1 + CPU5
S0-C2 = CPU2 + CPU6
S0-C3 = CPU3 + CPU7
So the result is as expected (tiny differences are ignored).
Note that the 'percore' event qualifier needs to be used together with
the '-A' option.
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Tested-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jin Yao <yao.jin@intel.com>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1555077590-27664-4-git-send-email-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-04-12 13:59:49 +00:00
|
|
|
|
|
|
|
evlist__for_each_entry(evsel_list, counter) {
|
|
|
|
if (counter->percore)
|
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
|
2013-02-14 12:57:27 +00:00
|
|
|
static int perf_stat_init_aggr_mode(void)
|
|
|
|
{
|
2015-10-25 14:51:18 +00:00
|
|
|
int nr;
|
|
|
|
|
2015-07-21 12:31:22 +00:00
|
|
|
switch (stat_config.aggr_mode) {
|
2013-02-14 12:57:27 +00:00
|
|
|
case AGGR_SOCKET:
|
2019-07-21 11:24:41 +00:00
|
|
|
if (cpu_map__build_socket_map(evsel_list->core.cpus, &stat_config.aggr_map)) {
|
2013-02-14 12:57:27 +00:00
|
|
|
perror("cannot build socket map");
|
|
|
|
return -1;
|
|
|
|
}
|
2018-08-30 06:32:45 +00:00
|
|
|
stat_config.aggr_get_id = perf_stat__get_socket_cached;
|
2013-02-14 12:57:27 +00:00
|
|
|
break;
|
2019-06-04 22:50:42 +00:00
|
|
|
case AGGR_DIE:
|
2019-07-21 11:24:41 +00:00
|
|
|
if (cpu_map__build_die_map(evsel_list->core.cpus, &stat_config.aggr_map)) {
|
2019-06-04 22:50:42 +00:00
|
|
|
perror("cannot build die map");
|
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
stat_config.aggr_get_id = perf_stat__get_die_cached;
|
|
|
|
break;
|
2013-02-14 12:57:29 +00:00
|
|
|
case AGGR_CORE:
|
2019-07-21 11:24:41 +00:00
|
|
|
if (cpu_map__build_core_map(evsel_list->core.cpus, &stat_config.aggr_map)) {
|
2013-02-14 12:57:29 +00:00
|
|
|
perror("cannot build core map");
|
|
|
|
return -1;
|
|
|
|
}
|
2018-08-30 06:32:45 +00:00
|
|
|
stat_config.aggr_get_id = perf_stat__get_core_cached;
|
2013-02-14 12:57:29 +00:00
|
|
|
break;
|
perf stat: Add --per-node aggregation support
Adding a new --per-node option to aggregate counts per NUMA
node for system-wide mode measurements.
You can specify --per-node in live mode:
# perf stat -a -I 1000 -e cycles --per-node
# time node cpus counts unit events
1.000542550 N0 20 6,202,097 cycles
1.000542550 N1 20 639,559 cycles
2.002040063 N0 20 7,412,495 cycles
2.002040063 N1 20 2,185,577 cycles
3.003451699 N0 20 6,508,917 cycles
3.003451699 N1 20 765,607 cycles
...
Or in the record/report stat session:
# perf stat record -a -I 1000 -e cycles
# time counts unit events
1.000536937 10,008,468 cycles
2.002090152 9,578,539 cycles
3.003625233 7,647,869 cycles
4.005135036 7,032,086 cycles
^C 4.340902364 3,923,893 cycles
# perf stat report --per-node
# time node cpus counts unit events
1.000536937 N0 20 9,355,086 cycles
1.000536937 N1 20 653,382 cycles
2.002090152 N0 20 7,712,838 cycles
2.002090152 N1 20 1,865,701 cycles
3.003625233 N0 20 6,604,441 cycles
3.003625233 N1 20 1,043,428 cycles
4.005135036 N0 20 6,350,522 cycles
4.005135036 N1 20 681,564 cycles
4.340902364 N0 20 3,403,188 cycles
4.340902364 N1 20 520,705 cycles
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Alexey Budankov <alexey.budankov@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Joe Mario <jmario@redhat.com>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Michael Petlan <mpetlan@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20190904073415.723-4-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-08-28 08:17:43 +00:00
|
|
|
case AGGR_NODE:
|
|
|
|
if (cpu_map__build_node_map(evsel_list->core.cpus, &stat_config.aggr_map)) {
|
|
|
|
perror("cannot build core map");
|
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
stat_config.aggr_get_id = perf_stat__get_node_cached;
|
|
|
|
break;
|
2013-02-14 12:57:27 +00:00
|
|
|
case AGGR_NONE:
|
perf stat: Support 'percore' event qualifier
With this patch, we can use the 'percore' event qualifier in perf-stat.
root@skl:/tmp# perf stat -e cpu/event=0,umask=0x3,percore=1/,cpu/event=0,umask=0x3/ -a -A -I1000
1.000773050 S0-C0 98,352,832 cpu/event=0,umask=0x3,percore=1/ (50.01%)
1.000773050 S0-C1 103,763,057 cpu/event=0,umask=0x3,percore=1/ (50.02%)
1.000773050 S0-C2 196,776,995 cpu/event=0,umask=0x3,percore=1/ (50.02%)
1.000773050 S0-C3 176,493,779 cpu/event=0,umask=0x3,percore=1/ (50.02%)
1.000773050 CPU0 47,699,641 cpu/event=0,umask=0x3/ (50.02%)
1.000773050 CPU1 49,052,451 cpu/event=0,umask=0x3/ (49.98%)
1.000773050 CPU2 102,771,422 cpu/event=0,umask=0x3/ (49.98%)
1.000773050 CPU3 100,784,662 cpu/event=0,umask=0x3/ (49.98%)
1.000773050 CPU4 43,171,342 cpu/event=0,umask=0x3/ (49.98%)
1.000773050 CPU5 54,152,158 cpu/event=0,umask=0x3/ (49.98%)
1.000773050 CPU6 93,618,410 cpu/event=0,umask=0x3/ (49.98%)
1.000773050 CPU7 74,477,589 cpu/event=0,umask=0x3/ (49.99%)
In this example, we count the event 'ref-cycles' per-core and per-CPU in
one perf stat command-line. From the output, we can see:
S0-C0 = CPU0 + CPU4
S0-C1 = CPU1 + CPU5
S0-C2 = CPU2 + CPU6
S0-C3 = CPU3 + CPU7
So the result is expected (tiny difference is ignored).
Note that the 'percore' event qualifier needs to be used together with the '-A' option.
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Tested-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jin Yao <yao.jin@intel.com>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1555077590-27664-4-git-send-email-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
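The term_percore_set() check used just below scans the event list for this qualifier; a sketch of its assumed shape (the helper is defined earlier in this file, so this is a restatement for context, not new code):

    /* Assumed shape of term_percore_set(): true if any event set percore=1. */
    static bool term_percore_set(void)
    {
        struct evsel *counter;

        evlist__for_each_entry(evsel_list, counter) {
            if (counter->percore)
                return true;
        }

        return false;
    }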
        if (term_percore_set()) {
            if (cpu_map__build_core_map(evsel_list->core.cpus,
                                        &stat_config.aggr_map)) {
                perror("cannot build core map");
                return -1;
            }
            stat_config.aggr_get_id = perf_stat__get_core_cached;
        }
        break;
    case AGGR_GLOBAL:
perf stat: Introduce --per-thread option
Currently the values for all tasks given via the -p option get
aggregated and printed as single values.
Add a --per-thread option to print values per task.
$ perf stat -e cycles,instructions --per-thread -p 30190,30242
^C
Performance counter stats for process id '30190,30242':
cat-30190 0 cycles
yes-30242 3,842,525,421 cycles
cat-30190 0 instructions
yes-30242 10,370,817,010 instructions
1.143155657 seconds time elapsed
Also works under interval mode:
$ perf stat -e cycles,instructions --per-thread -p 30190,30242 -I 1000
# time comm-pid counts unit events
1.000073435 cat-30190 89,058 cycles
1.000073435 yes-30242 3,360,786,902 cycles (100.00%)
1.000073435 cat-30190 14,066 instructions
1.000073435 yes-30242 9,069,937,462 instructions
2.000204830 cat-30190 0 cycles
2.000204830 yes-30242 3,351,667,626 cycles
2.000204830 cat-30190 0 instructions
2.000204830 yes-30242 9,045,796,885 instructions
^C 2.771286639 cat-30190 0 cycles
2.771286639 yes-30242 2,593,884,166 cycles
2.771286639 cat-30190 0 instructions
2.771286639 yes-30242 7,001,171,191 instructions
It works only with the -t and -p options; otherwise the following error is
printed:
$ perf stat -e cycles --per-thread -I 1000 ls
The --per-thread option is only available when monitoring via -p -t options.
-p, --pid <pid> stat events on existing process id
-t, --tid <tid> stat events on existing thread id
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1435310967-14570-23-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    case AGGR_THREAD:
    case AGGR_UNSET:
    default:
        break;
    }

    /*
     * The evsel_list->cpus is the base we operate on,
     * taking the highest cpu number to be the size of
     * the aggregation translate cpumap.
     */
    nr = perf_cpu_map__max(evsel_list->core.cpus);
    stat_config.cpus_aggr_map = cpu_aggr_map__empty_new(nr + 1);
    return stat_config.cpus_aggr_map ? 0 : -ENOMEM;
}

static void cpu_aggr_map__delete(struct cpu_aggr_map *map)
{
    if (map) {
        WARN_ONCE(refcount_read(&map->refcnt) != 0,
                  "cpu_aggr_map refcnt unbalanced\n");
        free(map);
    }
}

static void cpu_aggr_map__put(struct cpu_aggr_map *map)
{
    if (map && refcount_dec_and_test(&map->refcnt))
        cpu_aggr_map__delete(map);
}
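The delete/put pair above follows the usual refcount pattern; the matching get side is not part of this hunk, but assuming the same refcount_t field it would look roughly like this (hypothetical sketch):

    /* Hypothetical get side of the refcount pattern, assuming map->refcnt. */
    static struct cpu_aggr_map *cpu_aggr_map__get(struct cpu_aggr_map *map)
    {
        if (map)
            refcount_inc(&map->refcnt);
        return map;
    }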

static void perf_stat__exit_aggr_mode(void)
{
    cpu_aggr_map__put(stat_config.aggr_map);
    cpu_aggr_map__put(stat_config.cpus_aggr_map);
    stat_config.aggr_map = NULL;
    stat_config.cpus_aggr_map = NULL;
}

static inline int perf_env__get_cpu(struct perf_env *env, struct perf_cpu_map *map, int idx)
{
    int cpu;

    if (idx > map->nr)
        return -1;

    cpu = map->map[idx];

perf tools: Replace _SC_NPROCESSORS_CONF with max_present_cpu in cpu_topology_map
There are 2 problems wrt. cpu_topology_map on systems with sparse CPUs:
1. offline/absent CPUs will have their socket_id and core_id set to -1
which triggers:
"socket_id number is too big.You may need to upgrade the perf tool."
2. size of cpu_topology_map (perf_env.cpu[]) is allocated based on
_SC_NPROCESSORS_CONF, but can be indexed with CPU ids going above.
Users of perf_env.cpu[] use the CPU id as the index. This can lead
to reads beyond what was allocated:
==19991== Invalid read of size 4
==19991== at 0x490CEB: check_cpu_topology (topology.c:69)
==19991== by 0x490CEB: test_session_topology (topology.c:106)
...
For example:
_SC_NPROCESSORS_CONF == 16
available: 2 nodes (0-1)
node 0 cpus: 0 6 8 10 16 22 24 26
node 0 size: 12004 MB
node 0 free: 9470 MB
node 1 cpus: 1 7 9 11 23 25 27
node 1 size: 12093 MB
node 1 free: 9406 MB
node distances:
node 0 1
0: 10 20
1: 20 10
This patch changes HEADER_NRCPUS.nr_cpus_available from _SC_NPROCESSORS_CONF
to max_present_cpu and updates any user of cpu_topology_map to iterate
with nr_cpus_avail.
As a consequence the HEADER_CPU_TOPOLOGY core_id and socket_id lists get longer,
but maintain compatibility with the pre-patch state - the index into
cpu_topology_map is the CPU id.
perf test 36 -v
36: Session topology :
--- start ---
test child forked, pid 22211
templ file: /tmp/perf-test-gmdX5i
CPU 0, core 0, socket 0
CPU 1, core 0, socket 1
CPU 6, core 10, socket 0
CPU 7, core 10, socket 1
CPU 8, core 1, socket 0
CPU 9, core 1, socket 1
CPU 10, core 9, socket 0
CPU 11, core 9, socket 1
CPU 16, core 0, socket 0
CPU 22, core 10, socket 0
CPU 23, core 10, socket 1
CPU 24, core 1, socket 0
CPU 25, core 1, socket 1
CPU 26, core 9, socket 0
CPU 27, core 9, socket 1
test child finished with 0
---- end ----
Session topology: Ok
Signed-off-by: Jan Stancek <jstancek@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/d7c05c6445fca74a8442c2c73cfffd349c52c44f.1487146877.git.jstancek@redhat.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    if (cpu >= env->nr_cpus_avail)
        return -1;

    return cpu;
}
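/*
 * Note on the two checks above: the first bounds idx against the cpu
 * map itself, while the second guards env->cpu[] - a perf.data header
 * may describe fewer CPUs than the map references (see the
 * sparse-topology commit message above).
 */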

static struct aggr_cpu_id perf_env__get_socket(struct perf_cpu_map *map, int idx, void *data)
{
    struct perf_env *env = data;
    int cpu = perf_env__get_cpu(env, map, idx);
    struct aggr_cpu_id id = cpu_map__empty_aggr_cpu_id();

    if (cpu != -1)
        id.socket = env->cpu[cpu].socket_id;

    return id;
}

static struct aggr_cpu_id perf_env__get_die(struct perf_cpu_map *map, int idx, void *data)
{
    struct perf_env *env = data;
    struct aggr_cpu_id id = cpu_map__empty_aggr_cpu_id();
    int cpu = perf_env__get_cpu(env, map, idx);

    if (cpu != -1) {
        /*
         * die_id is relative to socket, so start
         * with the socket ID and then add die to
         * make a unique ID.
         */
        id.socket = env->cpu[cpu].socket_id;
        id.die = env->cpu[cpu].die_id;
    }

    return id;
}

static struct aggr_cpu_id perf_env__get_core(struct perf_cpu_map *map, int idx, void *data)
{
    struct perf_env *env = data;
    struct aggr_cpu_id id = cpu_map__empty_aggr_cpu_id();
    int cpu = perf_env__get_cpu(env, map, idx);

    if (cpu != -1) {
        /*
         * core_id is relative to socket and die,
         * we need a global id. So we set
         * socket, die id and core id.
         */
        id.socket = env->cpu[cpu].socket_id;
        id.die = env->cpu[cpu].die_id;
        id.core = env->cpu[cpu].core_id;
    }

    return id;
}
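Setting all three fields is what makes core-level ids comparable across the whole system; a minimal sketch of such a comparison, assuming the plain integer fields used above (the real comparator lives in the cpumap code, not here):

    /* Sketch: two core-level ids match only if the whole triple matches. */
    static bool aggr_cpu_id__core_equal(const struct aggr_cpu_id *a,
                                        const struct aggr_cpu_id *b)
    {
        return a->socket == b->socket &&
               a->die == b->die &&
               a->core == b->core;
    }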

static struct aggr_cpu_id perf_env__get_node(struct perf_cpu_map *map, int idx, void *data)
{
    int cpu = perf_env__get_cpu(data, map, idx);
    struct aggr_cpu_id id = cpu_map__empty_aggr_cpu_id();

    id.node = perf_env__numa_node(data, cpu);
    return id;
}
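/*
 * Unlike the socket/die/core getters above, this one passes cpu to
 * perf_env__numa_node() without a cpu != -1 guard; the lookup there is
 * presumably expected to tolerate an invalid cpu.
 */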

static int perf_env__build_socket_map(struct perf_env *env, struct perf_cpu_map *cpus,
                                      struct cpu_aggr_map **sockp)
{
    return cpu_map__build_map(cpus, sockp, perf_env__get_socket, env);
}

static int perf_env__build_die_map(struct perf_env *env, struct perf_cpu_map *cpus,
                                   struct cpu_aggr_map **diep)
{
    return cpu_map__build_map(cpus, diep, perf_env__get_die, env);
}

static int perf_env__build_core_map(struct perf_env *env, struct perf_cpu_map *cpus,
                                    struct cpu_aggr_map **corep)
{
    return cpu_map__build_map(cpus, corep, perf_env__get_core, env);
}

static int perf_env__build_node_map(struct perf_env *env, struct perf_cpu_map *cpus,
                                    struct cpu_aggr_map **nodep)
{
    return cpu_map__build_map(cpus, nodep, perf_env__get_node, env);
}
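All four wrappers funnel into cpu_map__build_map() with a different id getter and the perf_env as payload; a hypothetical caller, following the error convention used elsewhere in this file:

    /* Hypothetical caller of one of the wrappers above. */
    static int example_build_node_map(struct perf_env *env, struct perf_cpu_map *cpus)
    {
        struct cpu_aggr_map *map;

        if (perf_env__build_node_map(env, cpus, &map)) {
            perror("cannot build node map");
            return -1;
        }
        /* ... use map, then drop the reference with cpu_aggr_map__put(map) ... */
        return 0;
    }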

static struct aggr_cpu_id perf_stat__get_socket_file(struct perf_stat_config *config __maybe_unused,
                                                     struct perf_cpu_map *map, int idx)
{
    return perf_env__get_socket(map, idx, &perf_stat.session->header.env);
}

static struct aggr_cpu_id perf_stat__get_die_file(struct perf_stat_config *config __maybe_unused,
                                                  struct perf_cpu_map *map, int idx)
{
    return perf_env__get_die(map, idx, &perf_stat.session->header.env);
}

static struct aggr_cpu_id perf_stat__get_core_file(struct perf_stat_config *config __maybe_unused,
                                                   struct perf_cpu_map *map, int idx)
{
    return perf_env__get_core(map, idx, &perf_stat.session->header.env);
}

static struct aggr_cpu_id perf_stat__get_node_file(struct perf_stat_config *config __maybe_unused,
                                                   struct perf_cpu_map *map, int idx)
{
    return perf_env__get_node(map, idx, &perf_stat.session->header.env);
}

static int perf_stat_init_aggr_mode_file(struct perf_stat *st)
{
    struct perf_env *env = &st->session->header.env;

    switch (stat_config.aggr_mode) {
    case AGGR_SOCKET:
        if (perf_env__build_socket_map(env, evsel_list->core.cpus, &stat_config.aggr_map)) {
            perror("cannot build socket map");
            return -1;
        }
        stat_config.aggr_get_id = perf_stat__get_socket_file;
        break;
    case AGGR_DIE:
        if (perf_env__build_die_map(env, evsel_list->core.cpus, &stat_config.aggr_map)) {
            perror("cannot build die map");
            return -1;
        }
        stat_config.aggr_get_id = perf_stat__get_die_file;
        break;
    case AGGR_CORE:
        if (perf_env__build_core_map(env, evsel_list->core.cpus, &stat_config.aggr_map)) {
            perror("cannot build core map");
            return -1;
        }
        stat_config.aggr_get_id = perf_stat__get_core_file;
        break;
    case AGGR_NODE:
        if (perf_env__build_node_map(env, evsel_list->core.cpus, &stat_config.aggr_map)) {
            perror("cannot build node map");
            return -1;
        }
        stat_config.aggr_get_id = perf_stat__get_node_file;
        break;
    case AGGR_NONE:
    case AGGR_GLOBAL:
    case AGGR_THREAD:
    case AGGR_UNSET:
    default:
        break;
    }

    return 0;
}
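The switch above mirrors the live-mode init function almost case for case; the only difference is that the *_file getters resolve topology from the recorded session header instead of from the running system. A hypothetical dispatch makes the split explicit (the real callers invoke each init path directly):

    /* Hypothetical dispatch between the live and perf.data-based paths. */
    static int stat_init_aggr(struct perf_stat *st, bool from_file)
    {
        return from_file ? perf_stat_init_aggr_mode_file(st)
                         : perf_stat_init_aggr_mode();
    }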

/*
 * Add default attributes, if there were no attributes specified or
 * if -d/--detailed, -d -d or -d -d -d is used:
 */
static int add_default_attributes(void)
{
perf stat: Basic support for TopDown in perf stat
Add basic plumbing for TopDown in perf stat
TopDown is intended to replace the frontend cycles idle/ backend cycles
idle metrics in standard perf stat output. These metrics are not
reliable in many workloads, due to out of order effects.
This implements a new --topdown mode in perf stat (similar to
--transaction) that measures the pipeline bottlenecks using
standardized formulas. The measurement can be all done with 5 counters
(one fixed counter).
The result is four metrics:
FrontendBound, BackendBound, BadSpeculation, Retiring
that describe the CPU pipeline behavior on a high level.
The full top down methodology has many hierarchical metrics. This
implementation only supports level 1 which can be collected without
multiplexing. A full implementation of top down on top of perf is
available in pmu-tools toplev. (http://github.com/andikleen/pmu-tools)
The current version works on Intel Core CPUs starting with Sandy Bridge,
and Atom CPUs starting with Silvermont. In principle the generic
metrics should be also implementable on other out of order CPUs.
TopDown level 1 uses a set of abstracted metrics which are generic to
out of order CPU cores (although some CPUs may not implement all of
them):
topdown-total-slots Available slots in the pipeline
topdown-slots-issued Slots issued into the pipeline
topdown-slots-retired Slots successfully retired
topdown-fetch-bubbles Pipeline gaps in the frontend
topdown-recovery-bubbles Pipeline gaps during recovery
from misspeculation
These counts then allow computing four useful metrics:
FrontendBound, BackendBound, Retiring, BadSpeculation.
Add a new --topdown options to enable events. When --topdown is
specified set up events for all topdown events supported by the kernel.
Add topdown-* as a special case to the event parser, as is needed for
all events containing -.
The actual code to compute the metrics is in follow-on patches.
v2: Use standard sysctl read function.
v3: Move x86 specific code to arch/
v4: Enable --metric-only implicitly for topdown.
v5: Add --single-thread option to not force per core mode
v6: Fix output order of topdown metrics
v7: Allow combining with -d
v8: Remove --single-thread again
v9: Rename functions, adding arch_ and topdown_.
v10: Expand man page and describe TopDown better
Paste intro into commit description.
Print error when malloc fails.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: http://lkml.kernel.org/r/1464119559-17203-1-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
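For reference, the level-1 metrics that the follow-on patches compute derive from those five counters via the standard TopDown formulas; a sketch (not this patch's code):

    /* Standard TopDown level-1 formulas (sketch, not this patch's code). */
    static double td_retiring(double retired_slots, double total_slots)
    {
        return retired_slots / total_slots;
    }

    static double td_bad_spec(double issued_slots, double retired_slots,
                              double recovery_bubbles, double total_slots)
    {
        return (issued_slots - retired_slots + recovery_bubbles) / total_slots;
    }

    static double td_fe_bound(double fetch_bubbles, double total_slots)
    {
        return fetch_bubbles / total_slots;
    }

    static double td_be_bound(double fe_bound, double bad_spec, double retiring)
    {
        return 1.0 - (fe_bound + bad_spec + retiring);
    }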
    int err;

perf stat: Check existence of frontend/backend stalled cycles
Only put the frontend/backend stalled cycles into the default perf stat
events when the CPU actually supports them.
This avoids empty columns with --metric-only on newer Intel CPUs.
Committer note:
Before:
$ perf stat ls
Performance counter stats for 'ls':
1.080893 task-clock (msec) # 0.619 CPUs utilized
0 context-switches # 0.000 K/sec
0 cpu-migrations # 0.000 K/sec
97 page-faults # 0.090 M/sec
3,327,741 cycles # 3.079 GHz
<not supported> stalled-cycles-frontend
<not supported> stalled-cycles-backend
1,609,544 instructions # 0.48 insn per cycle
319,117 branches # 295.235 M/sec
12,246 branch-misses # 3.84% of all branches
0.001746508 seconds time elapsed
$
After:
$ perf stat ls
Performance counter stats for 'ls':
0.693948 task-clock (msec) # 0.662 CPUs utilized
0 context-switches # 0.000 K/sec
0 cpu-migrations # 0.000 K/sec
95 page-faults # 0.137 M/sec
1,792,509 cycles # 2.583 GHz
1,599,047 instructions # 0.89 insn per cycle
316,328 branches # 455.838 M/sec
12,453 branch-misses # 3.94% of all branches
0.001048987 seconds time elapsed
$
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1456532881-26621-2-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
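The split into default_attrs0/frontend_attrs/backend_attrs/default_attrs1 below exists so the stalled-cycles events can be appended only when the PMU advertises them; a sketch of the assumed conditional append (helper names follow the surrounding code base, so treat them as assumptions):

    /* Sketch: append frontend_attrs only if the PMU has the event. */
    if (pmu_have_event("cpu", "stalled-cycles-frontend")) {
        if (evlist__add_default_attrs(evsel_list, frontend_attrs) < 0)
            return -1;
    }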
    struct perf_event_attr default_attrs0[] = {

        { .type = PERF_TYPE_SOFTWARE, .config = PERF_COUNT_SW_TASK_CLOCK },
        { .type = PERF_TYPE_SOFTWARE, .config = PERF_COUNT_SW_CONTEXT_SWITCHES },
        { .type = PERF_TYPE_SOFTWARE, .config = PERF_COUNT_SW_CPU_MIGRATIONS },
        { .type = PERF_TYPE_SOFTWARE, .config = PERF_COUNT_SW_PAGE_FAULTS },

        { .type = PERF_TYPE_HARDWARE, .config = PERF_COUNT_HW_CPU_CYCLES },
    };
    struct perf_event_attr frontend_attrs[] = {
        { .type = PERF_TYPE_HARDWARE, .config = PERF_COUNT_HW_STALLED_CYCLES_FRONTEND },
    };
    struct perf_event_attr backend_attrs[] = {
        { .type = PERF_TYPE_HARDWARE, .config = PERF_COUNT_HW_STALLED_CYCLES_BACKEND },
    };
    struct perf_event_attr default_attrs1[] = {
        { .type = PERF_TYPE_HARDWARE, .config = PERF_COUNT_HW_INSTRUCTIONS },
        { .type = PERF_TYPE_HARDWARE, .config = PERF_COUNT_HW_BRANCH_INSTRUCTIONS },
        { .type = PERF_TYPE_HARDWARE, .config = PERF_COUNT_HW_BRANCH_MISSES },
perf stat: Add default hybrid events
Previously if '-e' is not specified in perf stat, some software events
and hardware events are added to evlist by default.
Before:
# perf stat -a -- sleep 1
Performance counter stats for 'system wide':
24,044.40 msec cpu-clock # 23.946 CPUs utilized
99 context-switches # 4.117 /sec
24 cpu-migrations # 0.998 /sec
3 page-faults # 0.125 /sec
7,000,244 cycles # 0.000 GHz
2,955,024 instructions # 0.42 insn per cycle
608,941 branches # 25.326 K/sec
31,991 branch-misses # 5.25% of all branches
1.004106859 seconds time elapsed
Among the events, cycles, instructions, branches and branch-misses
are hardware events.
On a hybrid platform, two hardware events are created for one
hardware event.
cpu_core/cycles/,
cpu_atom/cycles/,
cpu_core/instructions/,
cpu_atom/instructions/,
cpu_core/branches/,
cpu_atom/branches/,
cpu_core/branch-misses/,
cpu_atom/branch-misses/
These events would be added to evlist on hybrid platform.
Since parse_events() now supports creating two hardware events
for one event on a hybrid platform, we just use parse_events(evlist,
"cycles,instructions,branches,branch-misses") to create the default
events and add them to evlist.
After:
# perf stat -a -- sleep 1
Performance counter stats for 'system wide':
24,043.99 msec cpu-clock # 23.991 CPUs utilized
139 context-switches # 5.781 /sec
25 cpu-migrations # 1.040 /sec
6 page-faults # 0.250 /sec
10,381,751 cpu_core/cycles/ # 431.782 K/sec
1,264,216 cpu_atom/cycles/ # 52.579 K/sec
3,406,958 cpu_core/instructions/ # 141.697 K/sec
414,588 cpu_atom/instructions/ # 17.243 K/sec
705,149 cpu_core/branches/ # 29.327 K/sec
82,358 cpu_atom/branches/ # 3.425 K/sec
40,821 cpu_core/branch-misses/ # 1.698 K/sec
9,086 cpu_atom/branch-misses/ # 377.891 /sec
1.002228863 seconds time elapsed
We can see two events are created for one hardware event.
One TODO: the shadow stats look a bit different, now it's just
'M/sec'.
The perf_stat__update_shadow_stats and perf_stat__print_shadow_stats
need to be improved in future if we want to get the original shadow
stats.
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Reviewed-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Kan Liang <kan.liang@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20210427070139.25256-15-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    };
    struct perf_event_attr default_sw_attrs[] = {
        { .type = PERF_TYPE_SOFTWARE, .config = PERF_COUNT_SW_TASK_CLOCK },
        { .type = PERF_TYPE_SOFTWARE, .config = PERF_COUNT_SW_CONTEXT_SWITCHES },
        { .type = PERF_TYPE_SOFTWARE, .config = PERF_COUNT_SW_CPU_MIGRATIONS },
        { .type = PERF_TYPE_SOFTWARE, .config = PERF_COUNT_SW_PAGE_FAULTS },
    };

    /*
     * Detailed stats (-d), covering the L1 and last level data caches:
     */
    struct perf_event_attr detailed_attrs[] = {

        { .type = PERF_TYPE_HW_CACHE,
          .config =
                 PERF_COUNT_HW_CACHE_L1D << 0 |
                (PERF_COUNT_HW_CACHE_OP_READ << 8) |
                (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16) },

        { .type = PERF_TYPE_HW_CACHE,
          .config =
                 PERF_COUNT_HW_CACHE_L1D << 0 |
                (PERF_COUNT_HW_CACHE_OP_READ << 8) |
                (PERF_COUNT_HW_CACHE_RESULT_MISS << 16) },

        { .type = PERF_TYPE_HW_CACHE,
          .config =
                 PERF_COUNT_HW_CACHE_LL << 0 |
                (PERF_COUNT_HW_CACHE_OP_READ << 8) |
                (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16) },

        { .type = PERF_TYPE_HW_CACHE,
          .config =
                 PERF_COUNT_HW_CACHE_LL << 0 |
                (PERF_COUNT_HW_CACHE_OP_READ << 8) |
                (PERF_COUNT_HW_CACHE_RESULT_MISS << 16) },
    };

    /*
     * Very detailed stats (-d -d), covering the instruction cache and the TLB caches:
     */
    struct perf_event_attr very_detailed_attrs[] = {

        { .type = PERF_TYPE_HW_CACHE,
          .config =
                 PERF_COUNT_HW_CACHE_L1I << 0 |
                (PERF_COUNT_HW_CACHE_OP_READ << 8) |
                (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16) },

        { .type = PERF_TYPE_HW_CACHE,
          .config =
                 PERF_COUNT_HW_CACHE_L1I << 0 |
                (PERF_COUNT_HW_CACHE_OP_READ << 8) |
                (PERF_COUNT_HW_CACHE_RESULT_MISS << 16) },

        { .type = PERF_TYPE_HW_CACHE,
          .config =
                 PERF_COUNT_HW_CACHE_DTLB << 0 |
                (PERF_COUNT_HW_CACHE_OP_READ << 8) |
                (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16) },

        { .type = PERF_TYPE_HW_CACHE,
          .config =
                 PERF_COUNT_HW_CACHE_DTLB << 0 |
                (PERF_COUNT_HW_CACHE_OP_READ << 8) |
                (PERF_COUNT_HW_CACHE_RESULT_MISS << 16) },

        { .type = PERF_TYPE_HW_CACHE,
          .config =
                 PERF_COUNT_HW_CACHE_ITLB << 0 |
                (PERF_COUNT_HW_CACHE_OP_READ << 8) |
                (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16) },

        { .type = PERF_TYPE_HW_CACHE,
          .config =
                 PERF_COUNT_HW_CACHE_ITLB << 0 |
                (PERF_COUNT_HW_CACHE_OP_READ << 8) |
                (PERF_COUNT_HW_CACHE_RESULT_MISS << 16) },
    };

    /*
     * Very, very detailed stats (-d -d -d), adding prefetch events:
     */
    struct perf_event_attr very_very_detailed_attrs[] = {

        { .type = PERF_TYPE_HW_CACHE,
          .config =
                 PERF_COUNT_HW_CACHE_L1D << 0 |
                (PERF_COUNT_HW_CACHE_OP_PREFETCH << 8) |
                (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16) },

        { .type = PERF_TYPE_HW_CACHE,
          .config =
                 PERF_COUNT_HW_CACHE_L1D << 0 |
                (PERF_COUNT_HW_CACHE_OP_PREFETCH << 8) |
                (PERF_COUNT_HW_CACHE_RESULT_MISS << 16) },
    };

    struct parse_events_error errinfo;

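Later in this function (outside this hunk), the three detailed arrays map onto the -d count roughly as follows; a sketch with the append helper name assumed from the surrounding code base:

    /* Sketch: how the -d levels consume the arrays above. */
    if (detailed_run < 1)
        return 0;
    if (evlist__add_default_attrs(evsel_list, detailed_attrs) < 0)
        return -1;
    if (detailed_run < 2)
        return 0;
    if (evlist__add_default_attrs(evsel_list, very_detailed_attrs) < 0)
        return -1;
    if (detailed_run < 3)
        return 0;
    return evlist__add_default_attrs(evsel_list, very_very_detailed_attrs);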
    /* Set attrs if no event is selected and !null_run: */
    if (stat_config.null_run)
        return 0;

perf parse: Report initial event parsing error
Record the first event parsing error and report it. Implementing feedback
from Jiri Olsa:
https://lkml.org/lkml/2019/10/28/680
An example error is:
$ tools/perf/perf stat -e c/c/
WARNING: multiple event parsing errors
event syntax error: 'c/c/'
\___ unknown term
valid terms: event,filter_rem,filter_opc0,edge,filter_isoc,filter_tid,filter_loc,filter_nc,inv,umask,filter_opc1,tid_en,thresh,filter_all_op,filter_not_nm,filter_state,filter_nm,config,config1,config2,name,period,percore
Initial error:
event syntax error: 'c/c/'
\___ Cannot find PMU `c'. Missing kernel support?
Run 'perf list' for a list of valid events
Usage: perf stat [<options>] [<command>]
-e, --event <event> event selector. use 'perf list' to list available events
Signed-off-by: Ian Rogers <irogers@google.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Allison Randal <allison@lohutok.net>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Anju T Sudhakar <anju@linux.vnet.ibm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Jin Yao <yao.jin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Thomas Richter <tmricht@linux.ibm.com>
Link: http://lore.kernel.org/lkml/20191116074652.9960-1-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    bzero(&errinfo, sizeof(errinfo));
    if (transaction_run) {
        /* Handle -T as -M transaction. Once platform specific metrics
         * support has been added to the json files, all architectures
         * will use this approach. To determine transaction support
         * on an architecture, test for such a metric name.
         */
        if (metricgroup__has_metric("transaction")) {
            struct option opt = { .value = &evsel_list };

            return metricgroup__parse_groups(&opt, "transaction",
                                             stat_config.metric_no_group,
                                             stat_config.metric_no_merge,
                                             &stat_config.metric_events);
        }

        if (pmu_have_event("cpu", "cycles-ct") &&
            pmu_have_event("cpu", "el-start"))
perf stat: Fix core dump when flag T is used
Executing command 'perf stat -T -- ls' dumps core on x86 and s390.
Here is the callback chain (captured on x86):
# gdb ./perf
....
(gdb) r stat -T -- ls
...
Program received signal SIGSEGV, Segmentation fault.
0x00007ffff56d1963 in vasprintf () from /lib64/libc.so.6
(gdb) where
#0 0x00007ffff56d1963 in vasprintf () from /lib64/libc.so.6
#1 0x00007ffff56ae484 in asprintf () from /lib64/libc.so.6
#2 0x00000000004f1982 in __parse_events_add_pmu (parse_state=0x7fffffffd580,
list=0xbfb970, name=0xbf3ef0 "cpu",
head_config=0xbfb930, auto_merge_stats=false) at util/parse-events.c:1233
#3 0x00000000004f1c8e in parse_events_add_pmu (parse_state=0x7fffffffd580,
list=0xbfb970, name=0xbf3ef0 "cpu",
head_config=0xbfb930) at util/parse-events.c:1288
#4 0x0000000000537ce3 in parse_events_parse (_parse_state=0x7fffffffd580,
scanner=0xbf4210) at util/parse-events.y:234
#5 0x00000000004f2c7a in parse_events__scanner (str=0x6b66c0
"task-clock,{instructions,cycles,cpu/cycles-t/,cpu/tx-start/}",
parse_state=0x7fffffffd580, start_token=258) at util/parse-events.c:1673
#6 0x00000000004f2e23 in parse_events (evlist=0xbe9990, str=0x6b66c0
"task-clock,{instructions,cycles,cpu/cycles-t/,cpu/tx-start/}", err=0x0)
at util/parse-events.c:1713
#7 0x000000000044e137 in add_default_attributes () at builtin-stat.c:2281
#8 0x000000000044f7b5 in cmd_stat (argc=1, argv=0x7fffffffe3b0) at
builtin-stat.c:2828
#9 0x00000000004c8b0f in run_builtin (p=0xab01a0 <commands+288>, argc=4,
argv=0x7fffffffe3b0) at perf.c:297
#10 0x00000000004c8d7c in handle_internal_command (argc=4,
argv=0x7fffffffe3b0) at perf.c:349
#11 0x00000000004c8ece in run_argv (argcp=0x7fffffffe20c,
argv=0x7fffffffe200) at perf.c:393
#12 0x00000000004c929c in main (argc=4, argv=0x7fffffffe3b0) at perf.c:537
(gdb)
It turns out that a NULL pointer is referenced. Here are the
function calls:
...
cmd_stat()
+---> add_default_attributes()
+---> parse_events(evsel_list, transaction_attrs, NULL);
3rd parameter set to NULL
Function parse_events(xx, xx, struct parse_events_error *err) dives
into a bison generated scanner and creates
parser state information for it first:
struct parse_events_state parse_state = {
.list = LIST_HEAD_INIT(parse_state.list),
.idx = evlist->nr_entries,
.error = err, <--- NULL POINTER !!!
.evlist = evlist,
};
Various functions inside the bison scanner are then called, ending up in
__parse_events_add_pmu(struct parse_events_state *parse_state, ..) with the
first parameter being a pointer to the above structure definition.
The PMU event name is not found (because the command is executed in a VM), so
this function tries to create an error message with
asprintf(&parse_state->error.str, ....)
which dereferences a NULL pointer and dumps core.
Fix this by providing a pointer to the necessary error information
instead of NULL. Technically only the else part is needed to avoid the
core dump, but let's be safe...
Signed-off-by: Thomas Richter <tmricht@linux.vnet.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Link: http://lkml.kernel.org/r/20180308145735.64717-1-tmricht@linux.vnet.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-03-08 14:57:35 +00:00
|
|
|
err = parse_events(evsel_list, transaction_attrs,
|
|
|
|
&errinfo);
|
2013-08-21 23:47:26 +00:00
|
|
|
else
|
2018-03-08 14:57:35 +00:00
|
|
|
err = parse_events(evsel_list,
|
|
|
|
transaction_limited_attrs,
|
|
|
|
&errinfo);
|
2015-06-03 14:25:53 +00:00
|
|
|
if (err) {
|
2013-08-21 23:47:26 +00:00
|
|
|
fprintf(stderr, "Cannot set up transaction events\n");
|
2021-11-07 09:00:00 +00:00
|
|
|
parse_events_error__print(&errinfo, transaction_attrs);
|
2013-08-21 23:47:26 +00:00
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
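A minimal sketch of the error-reporting pattern the fix above establishes, assuming only the helpers already used in this file (the wrapper name demo_parse and the event string are illustrative, not from builtin-stat.c): zero the parse_events_error descriptor before handing it to parse_events(), and print it on failure instead of passing NULL.

static int demo_parse(struct evlist *evlist)
{
	struct parse_events_error errinfo;
	const char *attrs = "cycles,instructions";	/* placeholder string */
	int err;

	bzero(&errinfo, sizeof(errinfo));	/* no stale error fields */
	err = parse_events(evlist, attrs, &errinfo);
	if (err) {
		/* safe: errinfo is a valid pointer, unlike the old NULL */
		parse_events_error__print(&errinfo, attrs);
		return -1;
	}
	return 0;
}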
2017-05-26 19:05:38 +00:00
|
|
|
if (smi_cost) {
|
|
|
|
int smi;
|
|
|
|
|
|
|
|
if (sysfs__read_int(FREEZE_ON_SMI_PATH, &smi) < 0) {
|
|
|
|
fprintf(stderr, "freeze_on_smi is not supported.\n");
|
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (!smi) {
|
|
|
|
if (sysfs__write_int(FREEZE_ON_SMI_PATH, 1) < 0) {
|
|
|
|
fprintf(stderr, "Failed to set freeze_on_smi.\n");
|
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
smi_reset = true;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (pmu_have_event("msr", "aperf") &&
|
|
|
|
pmu_have_event("msr", "smi")) {
|
|
|
|
if (!force_metric_only)
|
2018-08-30 06:32:31 +00:00
|
|
|
stat_config.metric_only = true;
|
2018-06-06 22:15:10 +00:00
|
|
|
err = parse_events(evsel_list, smi_cost_attrs, &errinfo);
|
2017-05-26 19:05:38 +00:00
|
|
|
} else {
|
|
|
|
fprintf(stderr, "To measure SMI cost, it needs "
|
|
|
|
"msr/aperf/, msr/smi/ and cpu/cycles/ support\n");
|
2021-11-07 09:00:00 +00:00
|
|
|
parse_events_error__print(&errinfo, smi_cost_attrs);
|
2017-05-26 19:05:38 +00:00
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
if (err) {
|
2021-11-07 09:00:00 +00:00
|
|
|
parse_events_error__print(&errinfo, smi_cost_attrs);
|
2017-05-26 19:05:38 +00:00
|
|
|
fprintf(stderr, "Cannot set up SMI cost events\n");
|
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
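Condensed, the freeze_on_smi handling above follows a probe-enable-remember pattern. A sketch assuming only sysfs__read_int()/sysfs__write_int() and FREEZE_ON_SMI_PATH as used in this file; enable_freeze_on_smi and smi_reset_demo are illustrative names:

static bool smi_reset_demo;	/* stand-in for the file-scope smi_reset */

static int enable_freeze_on_smi(void)
{
	int smi;

	/* probe: a missing sysfs attribute means no support */
	if (sysfs__read_int(FREEZE_ON_SMI_PATH, &smi) < 0)
		return -1;

	if (!smi) {
		if (sysfs__write_int(FREEZE_ON_SMI_PATH, 1) < 0)
			return -1;
		smi_reset_demo = true;	/* remember to write 0 back on exit */
	}
	return 0;
}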
perf stat: Basic support for TopDown in perf stat
Add basic plumbing for TopDown in perf stat
TopDown is intended to replace the frontend cycles idle / backend cycles
idle metrics in standard perf stat output. These metrics are not
reliable in many workloads, due to out-of-order effects.
This implements a new --topdown mode in perf stat (similar to
--transaction) that measures the pipeline bottlenecks using
standardized formulas. The measurement can all be done with 5 counters
(one of them a fixed counter).
The result is four metrics:
FrontendBound, BackendBound, BadSpeculation, Retiring
that describe the CPU pipeline behavior on a high level.
The full top-down methodology has many hierarchical metrics. This
implementation only supports level 1, which can be collected without
multiplexing. A full implementation of top down on top of perf is
available in pmu-tools toplev. (http://github.com/andikleen/pmu-tools)
The current version works on Intel Core CPUs starting with Sandy Bridge,
and Atom CPUs starting with Silvermont. In principle the generic
metrics should also be implementable on other out-of-order CPUs.
TopDown level 1 uses a set of abstracted metrics which are generic to
out-of-order CPU cores (although some CPUs may not implement all of
them):
topdown-total-slots Available slots in the pipeline
topdown-slots-issued Slots issued into the pipeline
topdown-slots-retired Slots successfully retired
topdown-fetch-bubbles Pipeline gaps in the frontend
topdown-recovery-bubbles Pipeline gaps during recovery
from misspeculation
These metrics then allow computing four useful metrics:
FrontendBound, BackendBound, Retiring, BadSpeculation.
Add a new --topdown option to enable the events. When --topdown is
specified, set up events for all topdown events supported by the kernel.
Add topdown-* as a special case to the event parser, as is needed for
all events containing '-'.
The actual code to compute the metrics is in follow-on patches.
v2: Use standard sysctl read function.
v3: Move x86 specific code to arch/
v4: Enable --metric-only implicitly for topdown.
v5: Add --single-thread option to not force per core mode
v6: Fix output order of topdown metrics
v7: Allow combining with -d
v8: Remove --single-thread again
v9: Rename functions, adding arch_ and topdown_.
v10: Expand man page and describe TopDown better
Paste intro into commit description.
Print error when malloc fails.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: http://lkml.kernel.org/r/1464119559-17203-1-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-30 15:49:42 +00:00
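As a worked example of what these events buy us, a sketch of the standard TMA level-1 formulas (this is the textbook formulation; the in-tree computation lands in the follow-on patches, so the function below is illustrative):

/*
 * retiring        = slots_retired / total_slots
 * bad_speculation = (slots_issued - slots_retired + recovery_bubbles)
 *                   / total_slots
 * frontend_bound  = fetch_bubbles / total_slots
 * backend_bound   = 1 - retiring - bad_speculation - frontend_bound
 */
static double td_backend_bound(double total_slots, double slots_issued,
			       double slots_retired, double fetch_bubbles,
			       double recovery_bubbles)
{
	double retiring = slots_retired / total_slots;
	double bad_spec = (slots_issued - slots_retired + recovery_bubbles) /
			  total_slots;
	double fe_bound = fetch_bubbles / total_slots;

	return 1.0 - retiring - bad_spec - fe_bound;
}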
|
|
|
if (topdown_run) {
|
perf stat: Support L2 Topdown events
The TMA method's level 2 metrics are supported from the Intel Sapphire
Rapids server onward, which exposes four L2 Topdown metrics events to user
space. There are eight L2 events in total. The other four L2 Topdown
metrics events are calculated from the corresponding L1 and the exposed
L2 events.
Now --topdown prints the complete top-down metrics supported
by the CPU. For the Intel Sapphire Rapids server, there are 4 L1 events
and 8 L2 events displayed in one line.
Add a new option, --td-level, to display the top-down statistics that
are equal to or lower than the input level.
An L2 event is marked only when both its L1 parent event and the event
itself cross the threshold.
Here is an example:
$ perf stat --topdown --td-level=2 --no-metric-only sleep 1
Topdown accuracy may decrease when measuring long periods.
Please print the result regularly, e.g. -I1000
Performance counter stats for 'sleep 1':
16,734,390 slots
2,100,001 topdown-retiring # 12.6% retiring
2,034,376 topdown-bad-spec # 12.3% bad speculation
4,003,128 topdown-fe-bound # 24.1% frontend bound
328,125 topdown-heavy-ops # 2.0% heavy operations # 10.6% light operations
1,968,751 topdown-br-mispredict # 11.9% branch mispredict # 0.4% machine clears
2,953,127 topdown-fetch-lat # 17.8% fetch latency # 6.3% fetch bandwidth
5,906,255 topdown-mem-bound # 35.6% memory bound # 15.4% core bound
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jin Yao <yao.jin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lore.kernel.org/lkml/1612296553-21962-9-git-send-email-kan.liang@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2021-02-02 20:09:12 +00:00
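The derivation of the four L2 metrics that are not exposed directly follows from the description above: each one is its L1 parent minus the exposed sibling. A sketch with illustrative names (not the in-tree identifiers); the sample output checks out, e.g. 12.6% retiring - 2.0% heavy operations = 10.6% light operations:

/*
 * light_operations = retiring        - heavy_operations
 * machine_clears   = bad_speculation - branch_mispredict
 * fetch_bandwidth  = frontend_bound  - fetch_latency
 * core_bound       = backend_bound   - memory_bound
 */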
|
|
|
const char **metric_attrs = topdown_metric_attrs;
|
|
|
|
unsigned int max_level = 1;
|
2016-05-30 15:49:42 +00:00
|
|
|
char *str = NULL;
|
|
|
|
bool warn = false;
|
|
|
|
|
2020-09-11 14:48:07 +00:00
|
|
|
if (!force_metric_only)
|
|
|
|
stat_config.metric_only = true;
|
|
|
|
|
2021-02-02 20:09:12 +00:00
|
|
|
if (pmu_have_event("cpu", topdown_metric_L2_attrs[5])) {
|
|
|
|
metric_attrs = topdown_metric_L2_attrs;
|
|
|
|
max_level = 2;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (stat_config.topdown_level > max_level) {
|
|
|
|
pr_err("Invalid top-down metrics level. The max level is %u.\n", max_level);
|
|
|
|
return -1;
|
|
|
|
} else if (!stat_config.topdown_level)
|
|
|
|
stat_config.topdown_level = max_level;
|
|
|
|
|
|
|
|
if (topdown_filter_events(metric_attrs, &str, 1) < 0) {
|
2020-09-11 14:48:07 +00:00
|
|
|
pr_err("Out of memory\n");
|
|
|
|
return -1;
|
|
|
|
}
|
2021-02-02 20:09:12 +00:00
|
|
|
if (metric_attrs[0] && str) {
|
2020-09-11 14:48:07 +00:00
|
|
|
if (!stat_config.interval && !stat_config.metric_only) {
|
|
|
|
fprintf(stat_config.output,
|
|
|
|
"Topdown accuracy may decrease when measuring long periods.\n"
|
|
|
|
"Please print the result regularly, e.g. -I1000\n");
|
|
|
|
}
|
|
|
|
goto setup_metrics;
|
|
|
|
}
|
|
|
|
|
|
|
|
zfree(&str);
|
|
|
|
|
2016-05-30 15:49:42 +00:00
|
|
|
if (stat_config.aggr_mode != AGGR_GLOBAL &&
|
|
|
|
stat_config.aggr_mode != AGGR_CORE) {
|
|
|
|
pr_err("top down event configuration requires --per-core mode\n");
|
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
stat_config.aggr_mode = AGGR_CORE;
|
|
|
|
if (nr_cgroups || !target__has_cpu(&target)) {
|
|
|
|
pr_err("top down event configuration requires system-wide mode (-a)\n");
|
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (topdown_filter_events(topdown_attrs, &str,
|
|
|
|
arch_topdown_check_group(&warn)) < 0) {
|
|
|
|
pr_err("Out of memory\n");
|
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
if (topdown_attrs[0] && str) {
|
|
|
|
if (warn)
|
|
|
|
arch_topdown_group_warn();
|
2020-09-11 14:48:07 +00:00
|
|
|
setup_metrics:
|
2018-06-06 22:15:10 +00:00
|
|
|
err = parse_events(evsel_list, str, &errinfo);
|
2016-05-30 15:49:42 +00:00
|
|
|
if (err) {
|
|
|
|
fprintf(stderr,
|
|
|
|
"Cannot set up top down events %s: %d\n",
|
|
|
|
str, err);
|
2021-11-07 09:00:00 +00:00
|
|
|
parse_events_error__print(&errinfo, str);
|
2019-07-02 10:34:11 +00:00
|
|
|
free(str);
|
2016-05-30 15:49:42 +00:00
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
} else {
|
|
|
|
fprintf(stderr, "System does not support topdown\n");
|
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
free(str);
|
|
|
|
}
|
|
|
|
|
2019-07-21 11:24:28 +00:00
|
|
|
if (!evsel_list->core.nr_entries) {
|
perf stat: Add default hybrid events
Previously if '-e' is not specified in perf stat, some software events
and hardware events are added to evlist by default.
Before:
# perf stat -a -- sleep 1
Performance counter stats for 'system wide':
24,044.40 msec cpu-clock # 23.946 CPUs utilized
99 context-switches # 4.117 /sec
24 cpu-migrations # 0.998 /sec
3 page-faults # 0.125 /sec
7,000,244 cycles # 0.000 GHz
2,955,024 instructions # 0.42 insn per cycle
608,941 branches # 25.326 K/sec
31,991 branch-misses # 5.25% of all branches
1.004106859 seconds time elapsed
Among the events, cycles, instructions, branches and branch-misses
are hardware events.
On a hybrid platform, two hardware events are created for one
hardware event.
cpu_core/cycles/,
cpu_atom/cycles/,
cpu_core/instructions/,
cpu_atom/instructions/,
cpu_core/branches/,
cpu_atom/branches/,
cpu_core/branch-misses/,
cpu_atom/branch-misses/
These events would be added to the evlist on a hybrid platform.
Since parse_events() now supports creating two hardware events
for one event on a hybrid platform, we just use parse_events(evlist,
"cycles,instructions,branches,branch-misses") to create the default
events and add them to the evlist.
After:
# perf stat -a -- sleep 1
Performance counter stats for 'system wide':
24,043.99 msec cpu-clock # 23.991 CPUs utilized
139 context-switches # 5.781 /sec
25 cpu-migrations # 1.040 /sec
6 page-faults # 0.250 /sec
10,381,751 cpu_core/cycles/ # 431.782 K/sec
1,264,216 cpu_atom/cycles/ # 52.579 K/sec
3,406,958 cpu_core/instructions/ # 141.697 K/sec
414,588 cpu_atom/instructions/ # 17.243 K/sec
705,149 cpu_core/branches/ # 29.327 K/sec
82,358 cpu_atom/branches/ # 3.425 K/sec
40,821 cpu_core/branch-misses/ # 1.698 K/sec
9,086 cpu_atom/branch-misses/ # 377.891 /sec
1.002228863 seconds time elapsed
We can see two events are created for one hardware event.
One TODO is that the shadow stats look a bit different; for now it's just
'M/sec'.
perf_stat__update_shadow_stats() and perf_stat__print_shadow_stats()
need to be improved in the future if we want to get the original shadow
stats.
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Reviewed-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Kan Liang <kan.liang@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20210427070139.25256-15-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2021-04-27 07:01:27 +00:00
|
|
|
if (perf_pmu__has_hybrid()) {
|
|
|
|
const char *hybrid_str = "cycles,instructions,branches,branch-misses";
|
|
|
|
|
|
|
|
if (target__has_cpu(&target))
|
|
|
|
default_sw_attrs[0].config = PERF_COUNT_SW_CPU_CLOCK;
|
|
|
|
|
|
|
|
if (evlist__add_default_attrs(evsel_list,
|
|
|
|
default_sw_attrs) < 0) {
|
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
|
|
|
|
err = parse_events(evsel_list, hybrid_str, &errinfo);
|
|
|
|
if (err) {
|
|
|
|
fprintf(stderr,
|
|
|
|
"Cannot set up hybrid events %s: %d\n",
|
|
|
|
hybrid_str, err);
|
2021-11-07 09:00:00 +00:00
|
|
|
parse_events_error__print(&errinfo, hybrid_str);
|
2021-04-27 07:01:27 +00:00
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
return err;
|
|
|
|
}
|
|
|
|
|
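Stripped of the blame interleaving, the hybrid default setup above boils down to two steps; a sketch (the wrapper add_hybrid_defaults is illustrative, while default_sw_attrs and the calls are the ones visible above):

static int add_hybrid_defaults(struct evlist *evlist,
			       struct parse_events_error *errinfo)
{
	/* software events: plain attrs, no per-PMU fan-out needed */
	if (evlist__add_default_attrs(evlist, default_sw_attrs) < 0)
		return -1;

	/* hardware events: parse_events() expands each name into a
	 * cpu_core/.../ and a cpu_atom/.../ event on hybrid parts */
	return parse_events(evlist,
			    "cycles,instructions,branches,branch-misses",
			    errinfo);
}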
perf stat: Use cpu-clock event for cpu targets
Currently 'perf stat' always counts the task-clock event by default. But
it's somewhat confusing for system-wide targets (especially with 'sleep
N', as the 'sleep' task just sleeps and doesn't use cputime). Changing
to the cpu-clock event instead for that case makes more sense IMHO.
Before:
# perf stat -a sleep 0.1
Performance counter stats for 'system wide':
403.038603 task-clock (msec) # 4.001 CPUs utilized
150 context-switches # 0.372 K/sec
7 cpu-migrations # 0.017 K/sec
71 page-faults # 0.176 K/sec
23,705,169 cycles # 0.059 GHz
15,888,166 instructions # 0.67 insn per cycle
3,326,078 branches # 8.253 M/sec
87,643 branch-misses # 2.64% of all branches
0.100737009 seconds time elapsed
#
After:
# perf stat -a sleep 0.1
Performance counter stats for 'system wide':
404.271182 cpu-clock (msec) # 4.000 CPUs utilized
143 context-switches # 0.354 K/sec
13 cpu-migrations # 0.032 K/sec
73 page-faults # 0.181 K/sec
22,119,220 cycles # 0.055 GHz
13,622,065 instructions # 0.62 insn per cycle
2,918,769 branches # 7.220 M/sec
85,033 branch-misses # 2.91% of all branches
0.101073089 seconds time elapsed
#
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1463119263-5569-3-git-send-email-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-13 06:01:03 +00:00
|
|
|
if (target__has_cpu(&target))
|
|
|
|
default_attrs0[0].config = PERF_COUNT_SW_CPU_CLOCK;
|
|
|
|
|
2020-06-17 12:16:20 +00:00
|
|
|
if (evlist__add_default_attrs(evsel_list, default_attrs0) < 0)
|
perf stat: Check existence of frontend/backend stalled cycles
Only put the frontend/backend stalled cycles into the default perf stat
events when the CPU actually supports them.
This avoids empty columns with --metric-only on newer Intel CPUs.
Committer note:
Before:
$ perf stat ls
Performance counter stats for 'ls':
1.080893 task-clock (msec) # 0.619 CPUs utilized
0 context-switches # 0.000 K/sec
0 cpu-migrations # 0.000 K/sec
97 page-faults # 0.090 M/sec
3,327,741 cycles # 3.079 GHz
<not supported> stalled-cycles-frontend
<not supported> stalled-cycles-backend
1,609,544 instructions # 0.48 insn per cycle
319,117 branches # 295.235 M/sec
12,246 branch-misses # 3.84% of all branches
0.001746508 seconds time elapsed
$
After:
$ perf stat ls
Performance counter stats for 'ls':
0.693948 task-clock (msec) # 0.662 CPUs utilized
0 context-switches # 0.000 K/sec
0 cpu-migrations # 0.000 K/sec
95 page-faults # 0.137 M/sec
1,792,509 cycles # 2.583 GHz
1,599,047 instructions # 0.89 insn per cycle
316,328 branches # 455.838 M/sec
12,453 branch-misses # 3.94% of all branches
0.001048987 seconds time elapsed
$
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1456532881-26621-2-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-02-27 00:27:56 +00:00
|
|
|
return -1;
|
|
|
|
if (pmu_have_event("cpu", "stalled-cycles-frontend")) {
|
2020-06-17 12:16:20 +00:00
|
|
|
if (evlist__add_default_attrs(evsel_list, frontend_attrs) < 0)
|
2016-02-27 00:27:56 +00:00
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
if (pmu_have_event("cpu", "stalled-cycles-backend")) {
|
2020-06-17 12:16:20 +00:00
|
|
|
if (evlist__add_default_attrs(evsel_list, backend_attrs) < 0)
|
2016-02-27 00:27:56 +00:00
|
|
|
return -1;
|
|
|
|
}
|
2020-06-17 12:16:20 +00:00
|
|
|
if (evlist__add_default_attrs(evsel_list, default_attrs1) < 0)
|
2011-11-04 11:10:59 +00:00
|
|
|
return -1;
|
perf stat: Add Topdown metrics events as default events
The Topdown Microarchitecture Analysis (TMA) Method is a structured
analysis methodology to identify critical performance bottlenecks in
out-of-order processors. From the Ice Lake and later platforms, the
Topdown information can be retrieved from the dedicated "metrics"
register, which isn't impacted by other events. Also, the Topdown
metrics support both per thread/process and per core measuring. Adding
Topdown metrics events as default events can enrich the default
measuring information, and would not cost any extra multiplexing.
Introduce arch_evlist__add_default_attrs() to allow architecture-specific
default events. Add the Topdown metrics events in the X86-specific
arch_evlist__add_default_attrs(). Other architectures can add
their own default events later separately.
With the patch:
$ perf stat sleep 1
Performance counter stats for 'sleep 1':
0.82 msec task-clock:u # 0.001 CPUs utilized
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
61 page-faults:u # 0.074 M/sec
319,941 cycles:u # 0.388 GHz
242,802 instructions:u # 0.76 insn per cycle
54,380 branches:u # 66.028 M/sec
4,043 branch-misses:u # 7.43% of all branches
1,585,555 slots:u # 1925.189 M/sec
238,941 topdown-retiring:u # 15.0% retiring
410,378 topdown-bad-spec:u # 25.8% bad speculation
634,222 topdown-fe-bound:u # 39.9% frontend bound
304,675 topdown-be-bound:u # 19.2% backend bound
1.001791625 seconds time elapsed
0.000000000 seconds user
0.001572000 seconds sys
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jin Yao <yao.jin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lore.kernel.org/lkml/20210121133752.118327-1-kan.liang@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2021-01-21 13:37:52 +00:00
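A sketch of the arch hook mechanism this commit introduces, assuming the usual weak-symbol pattern (the x86 override that appends the Topdown attrs lives under tools/perf/arch/ and is not shown here):

/* generic fallback: no architecture-specific default events */
int __weak arch_evlist__add_default_attrs(struct evlist *evlist __maybe_unused)
{
	return 0;
}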
|
|
|
|
perf stat: Add Topdown metrics L2 events as default events
The Topdown Microarchitecture Analysis (TMA) Method is a structured
analysis methodology to identify critical performance bottlenecks in
out-of-order processors.
The Topdown metrics L1 event was added as default in 42641d6f4d15e6db
("perf stat: Add Topdown metrics events as default events")
From the Sapphire Rapids server and later platforms, the same dedicated
"metrics" register is extended to support both L1 and L2 events.
Add both L1 and L2 Topdown metrics events as default to enrich the
default measuring information if the new measurement register is
available.
On legacy systems there is no change, avoiding extra multiplexing.
The topdown_level indicates the max metrics level for the top-down
statistics. Set it to 2 to display all L1 and L2 Topdown metrics events.
With the patch:
$ perf stat sleep 1
Performance counter stats for 'sleep 1':
0.59 msec task-clock # 0.001 CPUs utilized
1 context-switches # 1.687 K/sec
0 cpu-migrations # 0.000 /sec
76 page-faults # 128.198 K/sec
1,405,318 cycles # 2.371 GHz
1,471,136 instructions # 1.05 insn per cycle
310,132 branches # 523.136 M/sec
10,435 branch-misses # 3.36% of all branches
8,431,908 slots # 14.223 G/sec
1,554,116 topdown-retiring # 18.4% retiring
1,289,585 topdown-bad-spec # 15.2% bad speculation
2,810,636 topdown-fe-bound # 33.2% frontend bound
2,810,636 topdown-be-bound # 33.2% backend bound
231,464 topdown-heavy-ops # 2.7% heavy operations # 15.6% light operations
1,223,453 topdown-br-mispredict # 14.5% branch mispredict # 0.8% machine clears
1,884,779 topdown-fetch-lat # 22.3% fetch latency # 10.9% fetch bandwidth
1,454,917 topdown-mem-bound # 17.2% memory bound # 16.0% Core bound
1.001179699 seconds time elapsed
0.000000000 seconds user
0.001238000 seconds sys
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Cc: Jin Yao <yao.jin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lore.kernel.org/lkml/1625760169-18396-1-git-send-email-kan.liang@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2021-07-08 16:02:49 +00:00
|
|
|
stat_config.topdown_level = TOPDOWN_MAX_LEVEL;
|
2021-01-21 13:37:52 +00:00
|
|
|
if (arch_evlist__add_default_attrs(evsel_list) < 0)
|
|
|
|
return -1;
|
perf stat: Add -d -d and -d -d -d options to show more CPU events
Print even more detailed statistics if requested via perf stat -d:
-d: detailed events, L1 and LLC data cache
-d -d: more detailed events, dTLB and iTLB events
-d -d -d: very detailed events, adding prefetch events
Full output looks like this now:
Performance counter stats for '/home/mingo/hackbench 10' (5 runs):
1703.674707 task-clock # 8.709 CPUs utilized ( +- 4.19% )
49,068 context-switches # 0.029 M/sec ( +- 16.66% )
8,303 CPU-migrations # 0.005 M/sec ( +- 24.90% )
17,397 page-faults # 0.010 M/sec ( +- 0.46% )
2,345,389,239 cycles # 1.377 GHz ( +- 4.61% ) [55.90%]
1,884,503,527 stalled-cycles-frontend # 80.35% frontend cycles idle ( +- 5.67% ) [50.39%]
743,919,737 stalled-cycles-backend # 31.72% backend cycles idle ( +- 8.75% ) [49.91%]
1,314,416,379 instructions # 0.56 insns per cycle
# 1.43 stalled cycles per insn ( +- 2.53% ) [60.87%]
272,592,567 branches # 160.003 M/sec ( +- 1.74% ) [56.56%]
3,794,846 branch-misses # 1.39% of all branches ( +- 6.59% ) [58.50%]
449,982,778 L1-dcache-loads # 264.125 M/sec ( +- 2.47% ) [49.88%]
22,404,961 L1-dcache-load-misses # 4.98% of all L1-dcache hits ( +- 6.08% ) [55.05%]
6,204,750 LLC-loads # 3.642 M/sec ( +- 8.91% ) [43.75%]
1,837,411 LLC-load-misses # 1.078 M/sec ( +- 7.27% ) [12.07%]
411,440,421 L1-icache-loads # 241.502 M/sec ( +- 5.60% ) [36.52%]
27,556,832 L1-icache-load-misses # 16.175 M/sec ( +- 7.46% ) [46.72%]
464,067,627 dTLB-loads # 272.392 M/sec ( +- 4.46% ) [54.17%]
10,765,648 dTLB-load-misses # 6.319 M/sec ( +- 3.18% ) [48.68%]
1,273,080,386 iTLB-loads # 747.256 M/sec ( +- 3.38% ) [47.53%]
117,481 iTLB-load-misses # 0.069 M/sec ( +- 14.99% ) [47.01%]
4,590,653 L1-dcache-prefetches # 2.695 M/sec ( +- 4.49% ) [46.19%]
1,712,660 L1-dcache-prefetch-misses # 1.005 M/sec ( +- 3.75% ) [44.82%]
0.195622057 seconds time elapsed ( +- 6.84% )
Also clean up the attribute construction code to be appending, and factor
it out into add_default_attributes().
Tweak the coverage percentage printout a bit, so that it's easier to view it
alongside the +- stddev column.
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
Link: http://lkml.kernel.org/n/tip-to3kgu04449s64062val8b62@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2011-05-19 11:30:56 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/* Detailed events get appended to the event list: */
|
|
|
|
|
|
|
|
if (detailed_run < 1)
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
/* Append detailed run extra attributes: */
|
2020-06-17 12:16:20 +00:00
|
|
|
if (evlist__add_default_attrs(evsel_list, detailed_attrs) < 0)
|
2011-11-04 11:10:59 +00:00
|
|
|
return -1;
|
2011-05-19 11:30:56 +00:00
|
|
|
|
|
|
|
if (detailed_run < 2)
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
/* Append very detailed run extra attributes: */
|
2020-06-17 12:16:20 +00:00
|
|
|
if (evlist__add_default_attrs(evsel_list, very_detailed_attrs) < 0)
|
2011-11-04 11:10:59 +00:00
|
|
|
return -1;
|
2011-05-19 11:30:56 +00:00
|
|
|
|
|
|
|
if (detailed_run < 3)
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
/* Append very, very detailed run extra attributes: */
|
2020-06-17 12:16:20 +00:00
|
|
|
return evlist__add_default_attrs(evsel_list, very_very_detailed_attrs);
|
2011-05-19 11:30:56 +00:00
|
|
|
}
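A minimal sketch of the appending construction described in the commit message above (each -d on the command line bumps detailed_run, and each level appends one more attribute array before returning); the detailed_attrs and very_detailed_attrs names are assumptions by analogy with the very_very_detailed_attrs array actually used above, not the exact upstream code:

	/* Hypothetical sketch of the per-level appending: */
	if (detailed_run < 1)
		return 0;
	if (evlist__add_default_attrs(evsel_list, detailed_attrs))
		return -1;
	if (detailed_run < 2)
		return 0;
	if (evlist__add_default_attrs(evsel_list, very_detailed_attrs))
		return -1;
	if (detailed_run < 3)
		return 0;
	/* Append very, very detailed run extra attributes: */
	return evlist__add_default_attrs(evsel_list, very_very_detailed_attrs);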
|
|
|
|
|
2016-01-12 09:35:29 +00:00
|
|
|
static const char * const stat_record_usage[] = {
|
perf stat record: Add record command
Add 'perf stat record' command support. It creates a simple (header only)
perf.data file ATM.
The record command can be specified anywhere among the stat options. All
stat command options are valid for the stat record command, with the
exception of the '-o' option: if specified for the record command, it
denotes the perf data file name.
Committer note:
Set sample_type to PERF_SAMPLE_IDENTIFIER, which should be harmless
while avoiding that older tools show confusing messages, for instance,
with sample_type = 0, we get:
$ perf stat record usleep 1
Performance counter stats for 'usleep 1':
0.630237 task-clock (msec) # 0.528 CPUs utilized
1 context-switches # 0.002 M/sec
0 cpu-migrations # 0.000 K/sec
52 page-faults # 0.083 M/sec
978,312 cycles # 1.552 GHz
671,931 stalled-cycles-frontend # 68.68% frontend cycles idle
<not supported> stalled-cycles-backend
646,379 instructions # 0.66 insns per cycle
# 1.04 stalled cycles per insn
131,046 branches # 207.931 M/sec
7,073 branch-misses # 5.40% of all branches
0.001193240 seconds time elapsed
$ oldperf evlist
WARNING: The perf.data file's data size field is 0 which is unexpected.
Was the 'perf record' command properly terminated?
non matching sample_type
$
While with sample_type set to PERF_SAMPLE_IDENTIFIER, after we re-run 'perf
stat record usleep' we get:
$ oldperf evlist
WARNING: The perf.data file's data size field is 0 which is unexpected.
Was the 'perf record' command properly terminated?
task-clock
context-switches
cpu-migrations
page-faults
cycles
stalled-cycles-frontend
stalled-cycles-backend
instructions
branches
branch-misses
$
Which at least shows the names of the events in the perf.data file.
Additionally, such files, when passed to 'perf report' will produce:
$ oldperf report --stdio
WARNING: The perf.data file's data size field is 0 which is unexpected.
Was the 'perf record' command properly terminated?
Warning:
Kernel address maps (/proc/{kallsyms,modules}) were restricted.
Check /proc/sys/kernel/kptr_restrict before running 'perf record'.
As no suitable kallsyms nor vmlinux was found, kernel samples
can't be resolved.
Samples in kernel modules can't be resolved as well.
Error:
The perf.data file has no samples!
# To display the perf.data header info, please use --header/--header-only options.
#
$
Which is confusing and can be solved by just adding the kernel mmap record,
which will also remove that warning about the data size field being equal to
zero, after generating the mmap record:
$ perf stat record usleep 1
Performance counter stats for 'usleep 1':
0.600796 task-clock (msec) # 0.478 CPUs utilized
1 context-switches # 0.002 M/sec
0 cpu-migrations # 0.000 K/sec
54 page-faults # 0.090 M/sec
886,844 cycles # 1.476 GHz
582,169 stalled-cycles-frontend # 65.65% frontend cycles idle
<not supported> stalled-cycles-backend
638,344 instructions # 0.72 insns per cycle
# 0.91 stalled cycles per insn
130,204 branches # 216.719 M/sec
7,500 branch-misses # 5.76% of all branches
0.001255897 seconds time elapsed
$ oldperf evlist
task-clock
context-switches
cpu-migrations
page-faults
cycles
stalled-cycles-frontend
stalled-cycles-backend
instructions
branches
branch-misses
$ oldperf report --stdio
Error:
The perf.data file has no samples!
# To display the perf.data header info, please use --header/--header-only options.
#
[acme@zoo linux]$
No warnings, sensible output about which events are in the perf.data file, and
also a "file has no samples" message, which is indeed the case.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Tested-by: Kan Liang <kan.liang@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1446734469-11352-3-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-11-05 14:40:46 +00:00
|
|
|
"perf stat record [<options>]",
|
|
|
|
NULL,
|
|
|
|
};
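Given the usage string above and the '-o' semantics described in the commit message, a typical invocation would be (hypothetical command line, consistent with the examples in the commit message):

	$ perf stat record -o stat.data usleep 1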
|
|
|
|
|
2015-11-05 14:40:47 +00:00
|
|
|
static void init_features(struct perf_session *session)
|
|
|
|
{
|
|
|
|
int feat;
|
|
|
|
|
|
|
|
for (feat = HEADER_FIRST_FEATURE; feat < HEADER_LAST_FEATURE; feat++)
|
|
|
|
perf_header__set_feat(&session->header, feat);
|
|
|
|
|
2019-04-09 10:01:56 +00:00
|
|
|
perf_header__clear_feat(&session->header, HEADER_DIR_FORMAT);
|
2015-11-05 14:40:47 +00:00
|
|
|
perf_header__clear_feat(&session->header, HEADER_BUILD_ID);
|
|
|
|
perf_header__clear_feat(&session->header, HEADER_TRACING_DATA);
|
|
|
|
perf_header__clear_feat(&session->header, HEADER_BRANCH_STACK);
|
|
|
|
perf_header__clear_feat(&session->header, HEADER_AUXTRACE);
|
|
|
|
}
|
|
|
|
|
2015-11-05 14:40:46 +00:00
|
|
|
static int __cmd_record(int argc, const char **argv)
|
|
|
|
{
|
|
|
|
struct perf_session *session;
|
2017-01-23 21:07:59 +00:00
|
|
|
struct perf_data *data = &perf_stat.data;
|
2015-11-05 14:40:46 +00:00
|
|
|
|
2016-01-12 09:35:29 +00:00
|
|
|
argc = parse_options(argc, argv, stat_options, stat_record_usage,
|
2015-11-05 14:40:46 +00:00
|
|
|
PARSE_OPT_STOP_AT_NON_OPTION);
|
|
|
|
|
|
|
|
if (output_name)
|
2019-02-21 09:41:30 +00:00
|
|
|
data->path = output_name;
|
2015-11-05 14:40:46 +00:00
|
|
|
|
2018-08-30 06:32:36 +00:00
|
|
|
if (stat_config.run_count != 1 || forever) {
|
2015-11-05 14:40:53 +00:00
|
|
|
pr_err("Cannot use -r option with perf stat record.\n");
|
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
|
2021-07-19 22:31:49 +00:00
|
|
|
session = perf_session__new(data, NULL);
|
2019-08-22 07:20:49 +00:00
|
|
|
if (IS_ERR(session)) {
|
|
|
|
pr_err("Perf session creation failed\n");
|
|
|
|
return PTR_ERR(session);
|
2015-11-05 14:40:46 +00:00
|
|
|
}
|
|
|
|
|
2015-11-05 14:40:47 +00:00
|
|
|
init_features(session);
|
|
|
|
|
2015-11-05 14:40:46 +00:00
|
|
|
session->evlist = evsel_list;
|
|
|
|
perf_stat.session = session;
|
|
|
|
perf_stat.record = true;
|
|
|
|
return argc;
|
|
|
|
}
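Note the PARSE_OPT_STOP_AT_NON_OPTION flag passed to parse_options() above: option parsing stops at the first non-option token, so the record-specific options are consumed here while everything from the first non-option onward (the workload command) stays in argv, and the function returns the remaining argc so that cmd_stat() can go on to parse it and run the workload.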
|
|
|
|
|
2018-09-13 12:54:03 +00:00
|
|
|
static int process_stat_round_event(struct perf_session *session,
|
|
|
|
union perf_event *event)
|
perf stat report: Process stat and stat round events
Adding processing of stat and stat round events.
The stat data comes in stat events and is stored under the perf_evsel
object. The stat round events come each interval, or as the last event in
non-interval mode. The function process_stat_round_event processes the
stored data for each perf_evsel object and prints it out.
Committer note:
After this patch:
$ perf stat record usleep 1
Performance counter stats for 'usleep 1':
0.498381 task-clock (msec) # 0.571 CPUs utilized
2 context-switches # 0.004 M/sec
0 cpu-migrations # 0.000 K/sec
149 page-faults # 0.299 M/sec
1,271,635 cycles # 2.552 GHz
928,712 stalled-cycles-frontend # 73.03% frontend cycles idle
663,286 stalled-cycles-backend # 52.16% backend cycles idle
792,614 instructions # 0.62 insns per cycle
# 1.17 stalled cycles per insn
136,850 branches # 274.589 M/sec
<not counted> branch-misses (0.00%)
0.000873419 seconds time elapsed
$
$ perf stat report
Performance counter stats for '/home/acme/bin/perf stat record usleep 1':
0.498381 task-clock (msec) # 0.571 CPUs utilized
2 context-switches # 0.004 M/sec
0 cpu-migrations # 0.000 K/sec
149 page-faults # 0.299 M/sec
1,271,635 cycles # 2.552 GHz
928,712 stalled-cycles-frontend # 73.03% frontend cycles idle
663,286 stalled-cycles-backend # 52.16% backend cycles idle
792,614 instructions # 0.62 insns per cycle
# 1.17 stalled cycles per insn
136,850 branches # 274.589 M/sec
<not counted> branch-misses (0.00%)
0.000873419 seconds time elapsed
$
Reported-by: Kan Liang <kan.liang@intel.com>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1446734469-11352-16-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-11-05 14:40:59 +00:00
|
|
|
{
|
2019-08-28 13:57:16 +00:00
|
|
|
struct perf_record_stat_round *stat_round = &event->stat_round;
|
2019-07-21 11:23:51 +00:00
|
|
|
struct evsel *counter;
|
2015-11-05 14:40:59 +00:00
|
|
|
struct timespec tsh, *ts = NULL;
|
|
|
|
const char **argv = session->header.env.cmdline_argv;
|
|
|
|
int argc = session->header.env.nr_cmdline;
|
|
|
|
|
2016-06-23 14:26:15 +00:00
|
|
|
evlist__for_each_entry(evsel_list, counter)
|
2015-11-05 14:40:59 +00:00
|
|
|
perf_stat_process_counter(&stat_config, counter);
|
|
|
|
|
2016-05-05 23:04:03 +00:00
|
|
|
if (stat_round->type == PERF_STAT_ROUND_TYPE__FINAL)
|
|
|
|
update_stats(&walltime_nsecs_stats, stat_round->time);
|
2015-11-05 14:40:59 +00:00
|
|
|
|
2016-05-05 23:04:03 +00:00
|
|
|
if (stat_config.interval && stat_round->time) {
|
2016-08-05 18:40:30 +00:00
|
|
|
tsh.tv_sec = stat_round->time / NSEC_PER_SEC;
|
|
|
|
tsh.tv_nsec = stat_round->time % NSEC_PER_SEC;
|
2015-11-05 14:40:59 +00:00
|
|
|
ts = &tsh;
|
|
|
|
}
|
|
|
|
|
|
|
|
print_counters(ts, argc, argv);
|
|
|
|
return 0;
|
|
|
|
}
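As a worked example of the interval timestamp split above: a stat round time of 2002090152 ns (the 2.002090152 s interval shown in the --per-node commit message further down) splits into tsh.tv_sec == 2 and tsh.tv_nsec == 2090152, and that timespec is what print_counters() receives as the interval timestamp.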
|
|
|
|
|
2015-11-05 14:40:57 +00:00
|
|
|
static
|
2018-09-13 12:54:03 +00:00
|
|
|
int process_stat_config_event(struct perf_session *session,
|
|
|
|
union perf_event *event)
|
2015-11-05 14:40:57 +00:00
|
|
|
{
|
2018-09-13 12:54:03 +00:00
|
|
|
struct perf_tool *tool = session->tool;
|
2015-11-05 14:40:58 +00:00
|
|
|
struct perf_stat *st = container_of(tool, struct perf_stat, tool);
|
|
|
|
|
2015-11-05 14:40:57 +00:00
|
|
|
perf_event__read_stat_config(&stat_config, &event->stat_config);
|
2015-11-05 14:40:58 +00:00
|
|
|
|
2019-08-22 11:11:39 +00:00
|
|
|
if (perf_cpu_map__empty(st->cpus)) {
|
2015-11-05 14:41:02 +00:00
|
|
|
if (st->aggr_mode != AGGR_UNSET)
|
|
|
|
pr_warning("warning: processing task data, aggregation mode not set\n");
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (st->aggr_mode != AGGR_UNSET)
|
|
|
|
stat_config.aggr_mode = st->aggr_mode;
|
|
|
|
|
2017-01-23 21:07:59 +00:00
|
|
|
if (perf_stat.data.is_pipe)
|
2015-11-05 14:40:58 +00:00
|
|
|
perf_stat_init_aggr_mode();
|
|
|
|
else
|
|
|
|
perf_stat_init_aggr_mode_file(st);
|
|
|
|
|
2015-11-05 14:40:57 +00:00
|
|
|
return 0;
|
|
|
|
}
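The report-side handlers here recover the enclosing perf_stat instance from the tool pointer with container_of(), relying on struct perf_stat embedding its struct perf_tool as a member. A minimal self-contained sketch of that pattern, using hypothetical stand-alone types rather than the perf ones:

	#include <stddef.h>

	struct tool { int dummy; };

	struct stat_ctx {
		struct tool tool;	/* embedded member, as in struct perf_stat */
		int aggr_mode;
	};

	#define container_of(ptr, type, member) \
		((type *)((char *)(ptr) - offsetof(type, member)))

	static int handler(struct tool *tool)
	{
		/* Recover the enclosing object from the embedded member: */
		struct stat_ctx *st = container_of(tool, struct stat_ctx, tool);
		return st->aggr_mode;
	}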
|
|
|
|
|
2015-11-05 14:40:56 +00:00
|
|
|
static int set_maps(struct perf_stat *st)
|
|
|
|
{
|
|
|
|
if (!st->cpus || !st->threads)
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
if (WARN_ONCE(st->maps_allocated, "stats double allocation\n"))
|
|
|
|
return -EINVAL;
|
|
|
|
|
2019-07-21 11:24:43 +00:00
|
|
|
perf_evlist__set_maps(&evsel_list->core, st->cpus, st->threads);
|
2015-11-05 14:40:56 +00:00
|
|
|
|
2020-11-30 12:31:04 +00:00
|
|
|
if (evlist__alloc_stats(evsel_list, true))
|
2015-11-05 14:40:56 +00:00
|
|
|
return -ENOMEM;
|
|
|
|
|
|
|
|
st->maps_allocated = true;
|
|
|
|
return 0;
|
|
|
|
}
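Note the handshake this function implements: it is called from both the thread map and CPU map event handlers below, returns early until both st->threads and st->cpus have arrived, and only then wires the maps into the evlist and allocates the counter stats, with st->maps_allocated guarding against doing that twice.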
|
|
|
|
|
|
|
|
static
|
2018-09-13 12:54:03 +00:00
|
|
|
int process_thread_map_event(struct perf_session *session,
|
|
|
|
union perf_event *event)
|
2015-11-05 14:40:56 +00:00
|
|
|
{
|
2018-09-13 12:54:03 +00:00
|
|
|
struct perf_tool *tool = session->tool;
|
2015-11-05 14:40:56 +00:00
|
|
|
struct perf_stat *st = container_of(tool, struct perf_stat, tool);
|
|
|
|
|
|
|
|
if (st->threads) {
|
|
|
|
pr_warning("Extra thread map event, ignoring.\n");
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
st->threads = thread_map__new_event(&event->thread_map);
|
|
|
|
if (!st->threads)
|
|
|
|
return -ENOMEM;
|
|
|
|
|
|
|
|
return set_maps(st);
|
|
|
|
}
|
|
|
|
|
|
|
|
static
|
2018-09-13 12:54:03 +00:00
|
|
|
int process_cpu_map_event(struct perf_session *session,
|
|
|
|
union perf_event *event)
|
2015-11-05 14:40:56 +00:00
|
|
|
{
|
2018-09-13 12:54:03 +00:00
|
|
|
struct perf_tool *tool = session->tool;
|
2015-11-05 14:40:56 +00:00
|
|
|
struct perf_stat *st = container_of(tool, struct perf_stat, tool);
|
2019-07-21 11:23:49 +00:00
|
|
|
struct perf_cpu_map *cpus;
|
2015-11-05 14:40:56 +00:00
|
|
|
|
|
|
|
if (st->cpus) {
|
|
|
|
pr_warning("Extra cpu map event, ignoring.\n");
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
cpus = cpu_map__new_data(&event->cpu_map.data);
|
|
|
|
if (!cpus)
|
|
|
|
return -ENOMEM;
|
|
|
|
|
|
|
|
st->cpus = cpus;
|
|
|
|
return set_maps(st);
|
|
|
|
}
|
|
|
|
|
2016-01-12 09:35:29 +00:00
|
|
|
static const char * const stat_report_usage[] = {
|
2015-11-05 14:40:55 +00:00
|
|
|
"perf stat report [<options>]",
|
|
|
|
NULL,
|
|
|
|
};
|
|
|
|
|
|
|
|
static struct perf_stat perf_stat = {
|
|
|
|
.tool = {
|
|
|
|
.attr = perf_event__process_attr,
|
2015-11-05 14:41:00 +00:00
|
|
|
.event_update = perf_event__process_event_update,
|
2015-11-05 14:40:56 +00:00
|
|
|
.thread_map = process_thread_map_event,
|
|
|
|
.cpu_map = process_cpu_map_event,
|
2015-11-05 14:40:57 +00:00
|
|
|
.stat_config = process_stat_config_event,
|
2015-11-05 14:40:59 +00:00
|
|
|
.stat = perf_event__process_stat_event,
|
|
|
|
.stat_round = process_stat_round_event,
|
2015-11-05 14:40:55 +00:00
|
|
|
},
|
2015-11-05 14:41:02 +00:00
|
|
|
.aggr_mode = AGGR_UNSET,
|
2015-11-05 14:40:55 +00:00
|
|
|
};
|
|
|
|
|
|
|
|
static int __cmd_report(int argc, const char **argv)
|
|
|
|
{
|
|
|
|
struct perf_session *session;
|
|
|
|
const struct option options[] = {
|
|
|
|
OPT_STRING('i', "input", &input_name, "file", "input file name"),
|
2015-11-05 14:41:02 +00:00
|
|
|
OPT_SET_UINT(0, "per-socket", &perf_stat.aggr_mode,
|
|
|
|
"aggregate counts per processor socket", AGGR_SOCKET),
|
2019-06-04 22:50:42 +00:00
|
|
|
OPT_SET_UINT(0, "per-die", &perf_stat.aggr_mode,
|
|
|
|
"aggregate counts per processor die", AGGR_DIE),
|
2015-11-05 14:41:02 +00:00
|
|
|
OPT_SET_UINT(0, "per-core", &perf_stat.aggr_mode,
|
|
|
|
"aggregate counts per physical processor core", AGGR_CORE),
|
perf stat: Add --per-node aggregation support
Adding a new --per-node option to aggregate counts per NUMA
node for system-wide mode measurements.
You can specify --per-node in live mode:
# perf stat -a -I 1000 -e cycles --per-node
# time node cpus counts unit events
1.000542550 N0 20 6,202,097 cycles
1.000542550 N1 20 639,559 cycles
2.002040063 N0 20 7,412,495 cycles
2.002040063 N1 20 2,185,577 cycles
3.003451699 N0 20 6,508,917 cycles
3.003451699 N1 20 765,607 cycles
...
Or in the record/report stat session:
# perf stat record -a -I 1000 -e cycles
# time counts unit events
1.000536937 10,008,468 cycles
2.002090152 9,578,539 cycles
3.003625233 7,647,869 cycles
4.005135036 7,032,086 cycles
^C 4.340902364 3,923,893 cycles
# perf stat report --per-node
# time node cpus counts unit events
1.000536937 N0 20 9,355,086 cycles
1.000536937 N1 20 653,382 cycles
2.002090152 N0 20 7,712,838 cycles
2.002090152 N1 20 1,865,701 cycles
3.003625233 N0 20 6,604,441 cycles
3.003625233 N1 20 1,043,428 cycles
4.005135036 N0 20 6,350,522 cycles
4.005135036 N1 20 681,564 cycles
4.340902364 N0 20 3,403,188 cycles
4.340902364 N1 20 520,705 cycles
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Alexey Budankov <alexey.budankov@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Joe Mario <jmario@redhat.com>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Michael Petlan <mpetlan@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20190904073415.723-4-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-08-28 08:17:43 +00:00
|
|
|
OPT_SET_UINT(0, "per-node", &perf_stat.aggr_mode,
|
|
|
|
"aggregate counts per numa node", AGGR_NODE),
|
2015-11-05 14:41:02 +00:00
|
|
|
OPT_SET_UINT('A', "no-aggr", &perf_stat.aggr_mode,
|
|
|
|
"disable CPU count aggregation", AGGR_NONE),
|
2015-11-05 14:40:55 +00:00
|
|
|
OPT_END()
|
|
|
|
};
|
|
|
|
struct stat st;
|
|
|
|
int ret;
|
|
|
|
|
2016-01-12 09:35:29 +00:00
|
|
|
argc = parse_options(argc, argv, options, stat_report_usage, 0);
|
2015-11-05 14:40:55 +00:00
|
|
|
|
|
|
|
if (!input_name || !strlen(input_name)) {
|
|
|
|
if (!fstat(STDIN_FILENO, &st) && S_ISFIFO(st.st_mode))
|
|
|
|
input_name = "-";
|
|
|
|
else
|
|
|
|
input_name = "perf.data";
|
|
|
|
}
|
|
|
|
|
2019-02-21 09:41:30 +00:00
|
|
|
perf_stat.data.path = input_name;
|
|
|
|
perf_stat.data.mode = PERF_DATA_MODE_READ;
|
2015-11-05 14:40:55 +00:00
|
|
|
|
2021-07-19 22:31:49 +00:00
|
|
|
session = perf_session__new(&perf_stat.data, &perf_stat.tool);
|
2019-08-22 07:20:49 +00:00
|
|
|
if (IS_ERR(session))
|
|
|
|
return PTR_ERR(session);
|
2015-11-05 14:40:55 +00:00
|
|
|
|
|
|
|
perf_stat.session = session;
|
|
|
|
stat_config.output = stderr;
|
|
|
|
evsel_list = session->evlist;
|
|
|
|
|
|
|
|
ret = perf_session__process_events(session);
|
|
|
|
if (ret)
|
|
|
|
return ret;
|
|
|
|
|
|
|
|
perf_session__delete(session);
|
|
|
|
return 0;
|
|
|
|
}
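Since process_stat_config_event() above overrides the recorded aggregation mode whenever one of the --per-* options set perf_stat.aggr_mode, the same recorded file can be re-aggregated differently on every report run; the --per-node commit message below shows exactly that, re-reporting an interval recording with 'perf stat report --per-node'.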
|
|
|
|
|
perf tools: Force uncore events to system wide monitoring
Make system wide (-a) the default option if no target was specified and
one of the following conditions is met:
- there's no workload specified (current behaviour)
- there is a workload specified but all requested
events are system wide ones
Mixed events core/uncore with workload:
$ perf stat -e 'uncore_cbox_0/clockticks/,cycles' sleep 1
Performance counter stats for 'sleep 1':
<not supported> uncore_cbox_0/clockticks/
980,489 cycles
1.000897406 seconds time elapsed
Uncore event with workload:
$ perf stat -e 'uncore_cbox_0/clockticks/' sleep 1
Performance counter stats for 'system wide':
281,473,897,192,670 uncore_cbox_0/clockticks/
1.000833784 seconds time elapsed
Committer note:
When testing I realized the default case for !root, i.e. no events
passed via -e, was broken by v2 of this patch; I reported it and, after a
patch provided by Jiri, it is back working:
[acme@jouet linux]$ perf stat usleep 1
Performance counter stats for 'usleep 1':
0.401335 task-clock:u (msec) # 0.297 CPUs utilized
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
48 page-faults:u # 0.120 M/sec
458,146 cycles:u # 1.142 GHz
245,113 instructions:u # 0.54 insn per cycle
47,991 branches:u # 119.578 M/sec
4,022 branch-misses:u # 8.38% of all branches
0.001350029 seconds time elapsed
[acme@jouet linux]$
Suggested-and-Tested-by: Borislav Petkov <bp@alien8.de>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20170227094818.GA12764@krava
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-02-27 09:48:18 +00:00
|
|
|
static void setup_system_wide(int forks)
|
|
|
|
{
|
|
|
|
/*
|
|
|
|
* Make system wide (-a) the default target if
|
|
|
|
* no target was specified and one of the following
|
|
|
|
* conditions is met:
|
|
|
|
*
|
|
|
|
* - there's no workload specified
|
|
|
|
* - there is a workload specified but all requested
|
|
|
|
* events are system wide events
|
|
|
|
*/
|
|
|
|
if (!target__none(&target))
|
|
|
|
return;
|
|
|
|
|
|
|
|
if (!forks)
|
|
|
|
target.system_wide = true;
|
|
|
|
else {
|
2019-07-21 11:23:51 +00:00
|
|
|
struct evsel *counter;
|
2017-02-27 09:48:18 +00:00
|
|
|
|
|
|
|
evlist__for_each_entry(evsel_list, counter) {
|
2020-09-22 01:50:04 +00:00
|
|
|
if (!counter->core.system_wide &&
|
|
|
|
strcmp(counter->name, "duration_time")) {
|
2017-02-27 09:48:18 +00:00
|
|
|
return;
|
2020-09-22 01:50:04 +00:00
|
|
|
}
|
2017-02-27 09:48:18 +00:00
|
|
|
}
|
|
|
|
|
2019-07-21 11:24:28 +00:00
|
|
|
if (evsel_list->core.nr_entries)
|
2017-02-27 09:48:18 +00:00
|
|
|
target.system_wide = true;
|
|
|
|
}
|
|
|
|
}
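The control flow above is easier to see as a predicate. A minimal sketch of the decision setup_system_wide() makes, using hypothetical simplified types and names (fake_evsel, should_default_to_system_wide) rather than the real tree code:

#include <stdbool.h>
#include <string.h>

struct fake_evsel {
	const char *name;
	bool system_wide;	/* e.g. an uncore event */
};

/* true when every requested event is system-wide; the tool-internal
 * "duration_time" event must not veto the promotion */
bool all_events_system_wide(const struct fake_evsel *ev, int n)
{
	for (int i = 0; i < n; i++) {
		if (!ev[i].system_wide && strcmp(ev[i].name, "duration_time"))
			return false;
	}
	return true;
}

/* default to -a when no target was given and either there is no
 * workload, or the workload comes with only system-wide events */
bool should_default_to_system_wide(bool target_given, bool has_workload,
				   const struct fake_evsel *ev, int n)
{
	if (target_given)
		return false;
	if (!has_workload)
		return true;
	return n > 0 && all_events_system_wide(ev, n);
}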
|
|
|
|
|
2017-03-27 14:47:20 +00:00
|
|
|
int cmd_stat(int argc, const char **argv)
|
2009-05-26 07:17:18 +00:00
|
|
|
{
|
2012-10-01 18:20:58 +00:00
|
|
|
const char * const stat_usage[] = {
|
|
|
|
"perf stat [<options>] [<command>]",
|
|
|
|
NULL
|
|
|
|
};
|
perf stat: Enable counting events for BPF programs
Introduce 'perf stat -b' option, which counts events for BPF programs, like:
[root@localhost ~]# ~/perf stat -e ref-cycles,cycles -b 254 -I 1000
1.487903822 115,200 ref-cycles
1.487903822 86,012 cycles
2.489147029 80,560 ref-cycles
2.489147029 73,784 cycles
3.490341825 60,720 ref-cycles
3.490341825 37,797 cycles
4.491540887 37,120 ref-cycles
4.491540887 31,963 cycles
The example above counts 'cycles' and 'ref-cycles' of the BPF program
with id 254. This is similar to the bpftool-prog-profile command, but
more flexible.
'perf stat -b' creates per-cpu perf_event and loads fentry/fexit BPF
programs (monitor-progs) to the target BPF program (target-prog). The
monitor-progs read perf_event before and after the target-prog, and
aggregate the difference in a BPF map. Then the user space reads data
from these maps.
A new 'struct bpf_counter' is introduced to provide a common interface
that uses BPF programs/maps to count perf events.
Committer notes:
Removed all but bpf_counter.h includes from evsel.h, not needed at all.
Also, BPF map lookups for PERCPU_ARRAYs need the value receive buffer
passed to the kernel to have libbpf_num_possible_cpus() entries, not
evsel__nr_cpus(evsel), as the former uses
/sys/devices/system/cpu/possible while the latter uses
/sys/devices/system/cpu/online, which may be less than the 'possible'
number, making the BPF map lookup overwrite memory and cause
hard-to-debug memory corruption.
We need to continue using evsel__nr_cpus(evsel) when accessing the
perf_counts array though, so as not to overwrite another area of memory :-)
Signed-off-by: Song Liu <songliubraving@fb.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: https://lore.kernel.org/lkml/20210120163031.GU12699@kernel.org/
Acked-by: Namhyung Kim <namhyung@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: kernel-team@fb.com
Link: http://lore.kernel.org/lkml/20201229214214.3413833-4-songliubraving@fb.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-12-29 21:42:14 +00:00
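The committer note above is worth a concrete illustration. A hedged userspace sketch of reading one logical counter out of a BPF_MAP_TYPE_PERCPU_ARRAY (assumes libbpf; read_percpu_counter is a hypothetical helper, not tree code):

#include <errno.h>
#include <stdlib.h>
#include <bpf/bpf.h>
#include <bpf/libbpf.h>

/* the kernel writes one value per *possible* CPU, so the receive
 * buffer must be sized with libbpf_num_possible_cpus(), never with
 * the online-CPU count */
int read_percpu_counter(int map_fd, __u32 key, __u64 *sum)
{
	int ncpus = libbpf_num_possible_cpus();
	__u64 *vals;
	int err, i;

	if (ncpus < 0)
		return ncpus;
	vals = calloc(ncpus, sizeof(*vals));
	if (!vals)
		return -ENOMEM;
	err = bpf_map_lookup_elem(map_fd, &key, vals);
	if (!err) {
		*sum = 0;
		for (i = 0; i < ncpus; i++)
			*sum += vals[i];
	}
	free(vals);
	return err;
}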
|
|
|
int status = -EINVAL, run_idx, err;
|
2011-08-15 20:22:33 +00:00
|
|
|
const char *mode;
|
2015-07-21 12:31:24 +00:00
|
|
|
FILE *output = stderr;
|
2018-01-29 09:25:23 +00:00
|
|
|
unsigned int interval, timeout;
|
2015-11-05 14:40:55 +00:00
|
|
|
const char * const stat_subcommands[] = { "record", "report" };
|
2020-12-29 21:42:14 +00:00
|
|
|
char errbuf[BUFSIZ];
|
2009-06-13 12:57:28 +00:00
|
|
|
|
perf stat: add perf stat -B to pretty print large numbers
It is hard to read very large numbers, so provide an option for perf stat
to separate thousands using a separator. The patch leverages the locale
support of stdio. You need to set your LC_NUMERIC appropriately, for
instance LC_NUMERIC=en_US.UTF8, and pass -B to activate this feature.
This way, existing scripts parsing the output do not need to be changed.
Here is an example.
$ perf stat noploop 2
noploop for 2 seconds
Performance counter stats for 'noploop 2':
1998.347031 task-clock-msecs # 0.998 CPUs
61 context-switches # 0.000 M/sec
0 CPU-migrations # 0.000 M/sec
118 page-faults # 0.000 M/sec
4,138,410,900 cycles # 2070.917 M/sec (scaled from 70.01%)
2,062,650,268 instructions # 0.498 IPC (scaled from 70.01%)
2,057,653,466 branches # 1029.678 M/sec (scaled from 70.01%)
40,267 branch-misses # 0.002 % (scaled from 30.04%)
2,055,961,348 cache-references # 1028.831 M/sec (scaled from 30.03%)
53,725 cache-misses # 0.027 M/sec (scaled from 30.02%)
2.001393933 seconds time elapsed
$ perf stat -B noploop 2
noploop for 2 seconds
Performance counter stats for 'noploop 2':
1998.297883 task-clock-msecs # 0.998 CPUs
59 context-switches # 0.000 M/sec
0 CPU-migrations # 0.000 M/sec
119 page-faults # 0.000 M/sec
4,131,380,160 cycles # 2067.450 M/sec (scaled from 70.01%)
2,059,096,507 instructions # 0.498 IPC (scaled from 70.01%)
2,054,681,303 branches # 1028.216 M/sec (scaled from 70.01%)
25,650 branch-misses # 0.001 % (scaled from 30.05%)
2,056,283,014 cache-references # 1029.017 M/sec (scaled from 30.03%)
47,097 cache-misses # 0.024 M/sec (scaled from 30.02%)
2.001391016 seconds time elapsed
Cc: David S. Miller <davem@davemloft.net>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Tom Zanussi <tzanussi@gmail.com>
LKML-Reference: <4bf28fe8.914ed80a.01ca.fffff5f5@mx.google.com>
Signed-off-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2010-05-18 13:00:01 +00:00
|
|
|
setlocale(LC_ALL, "");
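The setlocale() call above is the hook that makes -B work; a standalone sketch of the mechanism (the %' printf flag is a POSIX extension implemented by glibc, so this assumes a POSIX stdio):

#include <locale.h>
#include <stdio.h>

int main(void)
{
	/* pick up the user's LC_NUMERIC, e.g. en_US.UTF8 */
	setlocale(LC_ALL, "");

	/* the apostrophe flag inserts the locale's thousands
	 * separator, printing 4,138,410,900 under en_US */
	printf("%'15.0f cycles\n", 4138410900.0);
	return 0;
}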
|
|
|
|
|
2019-07-21 11:23:55 +00:00
|
|
|
evsel_list = evlist__new();
|
2011-01-11 22:56:53 +00:00
|
|
|
if (evsel_list == NULL)
|
|
|
|
return -ENOMEM;
|
|
|
|
|
perf stat: Bail out on unsupported event config modifiers
'perf stat' accepts some config terms but doesn't apply them. For
example:
# perf stat -e 'instructions/no-inherit/' -e 'instructions/inherit/' bash
# ls
# exit
Performance counter stats for 'bash':
266258061 instructions/no-inherit/
266258061 instructions/inherit/
1.402183915 seconds time elapsed
The result is confusing, because the user may expect the first
'instructions' event to exclude the 'ls' command.
This patch forbids most of these config terms for 'perf stat'.
Result:
# ./perf stat -e 'instructions/no-inherit/' -e 'instructions/inherit/' bash
event syntax error: 'instructions/no-inherit/'
\___ 'no-inherit' is not usable in 'perf stat'
...
We can add blocked config terms back when 'perf stat' really supports them.
This patch also removes unavailable config terms from the error message:
# ./perf stat -e 'instructions/badterm/' ls
event syntax error: 'instructions/badterm/'
\___ unknown term
valid terms: config,config1,config2,name
# ./perf stat -e 'cpu/badterm/' ls
event syntax error: 'cpu/badterm/'
\___ unknown term
valid terms: pc,any,inv,edge,cmask,event,in_tx,ldlat,umask,in_tx_cp,offcore_rsp,config,config1,config2,name
Signed-off-by: Wang Nan <wangnan0@huawei.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Cody P Schafer <dev@codyps.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jeremie Galarneau <jeremie.galarneau@efficios.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kirill Smelkov <kirr@nexedi.com>
Cc: Li Zefan <lizefan@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1455882283-79592-11-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-02-19 11:43:58 +00:00
|
|
|
parse_events__shrink_config_terms();
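The rejection described in the 'Bail out' message boils down to a term blocklist consulted at parse time. A toy sketch with an illustrative-only list (the real set and check live in the event parser, not in this file):

#include <errno.h>
#include <stdio.h>
#include <string.h>

static const char * const blocked_in_stat[] = {
	"inherit", "no-inherit",	/* illustrative subset */
};

int check_term_for_stat(const char *term)
{
	size_t i;

	for (i = 0; i < sizeof(blocked_in_stat) / sizeof(blocked_in_stat[0]); i++) {
		if (!strcmp(term, blocked_in_stat[i])) {
			fprintf(stderr,
				"event syntax error: '%s' is not usable in 'perf stat'\n",
				term);
			return -EINVAL;
		}
	}
	return 0;
}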
|
2018-12-10 16:00:04 +00:00
|
|
|
|
|
|
|
/* String-parsing callback-based options would segfault when negated */
|
|
|
|
set_option_flag(stat_options, 'e', "event", PARSE_OPT_NONEG);
|
|
|
|
set_option_flag(stat_options, 'M', "metrics", PARSE_OPT_NONEG);
|
|
|
|
set_option_flag(stat_options, 'G', "cgroup", PARSE_OPT_NONEG);
|
|
|
|
|
perf stat record: Add record command
Add 'perf stat record' command support. It creates a simple (header-only)
perf.data file ATM.
The record command can be specified anywhere among the stat options. All
stat command options are valid for the stat record command, with the
exception of the '-o' option: if specified for the record command, it
denotes the perf data file name.
Committer note:
Set sample_type to PERF_SAMPLE_IDENTIFIER, which should be harmless
while avoiding that older tools show confusing messages, for instance,
with sample_type = 0, we get:
$ perf stat record usleep 1
Performance counter stats for 'usleep 1':
0.630237 task-clock (msec) # 0.528 CPUs utilized
1 context-switches # 0.002 M/sec
0 cpu-migrations # 0.000 K/sec
52 page-faults # 0.083 M/sec
978,312 cycles # 1.552 GHz
671,931 stalled-cycles-frontend # 68.68% frontend cycles idle
<not supported> stalled-cycles-backend
646,379 instructions # 0.66 insns per cycle
# 1.04 stalled cycles per insn
131,046 branches # 207.931 M/sec
7,073 branch-misses # 5.40% of all branches
0.001193240 seconds time elapsed
$ oldperf evlist
WARNING: The perf.data file's data size field is 0 which is unexpected.
Was the 'perf record' command properly terminated?
non matching sample_type
$
While with sample_type set to PERF_SAMPLE_IDENTIFIER, after we re-run 'perf
stat record usleep' we get:
$ oldperf evlist
WARNING: The perf.data file's data size field is 0 which is unexpected.
Was the 'perf record' command properly terminated?
task-clock
context-switches
cpu-migrations
page-faults
cycles
stalled-cycles-frontend
stalled-cycles-backend
instructions
branches
branch-misses
$
Which at least shows the names of the events in the perf.data file.
Additionally, such files, when passed to 'perf report' will produce:
$ oldperf report --stdio
WARNING: The perf.data file's data size field is 0 which is unexpected.
Was the 'perf record' command properly terminated?
Warning:
Kernel address maps (/proc/{kallsyms,modules}) were restricted.
Check /proc/sys/kernel/kptr_restrict before running 'perf record'.
As no suitable kallsyms nor vmlinux was found, kernel samples
can't be resolved.
Samples in kernel modules can't be resolved as well.
Error:
The perf.data file has no samples!
# To display the perf.data header info, please use --header/--header-only options.
#
$
Which is confusing and can be solved by just adding the kernel mmap record,
which will also remove that warning about the data size field being equal to
zero, after generating the mmap record:
$ perf stat record usleep 1
Performance counter stats for 'usleep 1':
0.600796 task-clock (msec) # 0.478 CPUs utilized
1 context-switches # 0.002 M/sec
0 cpu-migrations # 0.000 K/sec
54 page-faults # 0.090 M/sec
886,844 cycles # 1.476 GHz
582,169 stalled-cycles-frontend # 65.65% frontend cycles idle
<not supported> stalled-cycles-backend
638,344 instructions # 0.72 insns per cycle
# 0.91 stalled cycles per insn
130,204 branches # 216.719 M/sec
7,500 branch-misses # 5.76% of all branches
0.001255897 seconds time elapsed
$ oldperf evlist
task-clock
context-switches
cpu-migrations
page-faults
cycles
stalled-cycles-frontend
stalled-cycles-backend
instructions
branches
branch-misses
$ oldperf report --stdio
Error:
The perf.data file has no samples!
# To display the perf.data header info, please use --header/--header-only options.
#
[acme@zoo linux]$
No warnings, sensible output about what the events in the perf.data file are, and
also a "file has no samples" message, which indeed it doesn't have.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Tested-by: Kan Liang <kan.liang@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1446734469-11352-3-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-11-05 14:40:46 +00:00
|
|
|
argc = parse_options_subcommand(argc, argv, stat_options, stat_subcommands,
|
|
|
|
(const char **) stat_usage,
|
|
|
|
PARSE_OPT_STOP_AT_NON_OPTION);
|
perf stat: Output JSON MetricExpr metric
Add generic infrastructure to perf stat to output ratios for
"MetricExpr" entries in the event lists. Many events are more useful as
ratios than in raw form, typically some count in relation to total
ticks.
Transfer the MetricExpr information from the alias to the evsel.
We mark the events that need to be collected for MetricExpr, and also
link the events using them with a pointer. The code is careful to always
prefer the right event in the same group to minimize multiplexing
errors. At the moment only a single relation is supported.
Then add an rblist to the stat shadow code that remembers stats based on
the cpu and context.
Then, finally, update, retrieve and print these values similarly to the
existing hardcoded perf metrics. We use the simple expression parser
added earlier to evaluate the expression.
Normally we just output the result without further commentary, but for
--metric-only this would lead to empty columns. So for this case use the
original event as description.
There is no attempt to automatically add the MetricExpr event if it is
missing; however, we suggest it to the user, because the tool
doesn't have enough information to reliably construct a group that is
guaranteed to schedule. So we leave that to the user.
% perf stat -a -I 1000 -e '{unc_p_clockticks,unc_p_freq_max_os_cycles}'
1.000147889 800,085,181 unc_p_clockticks
1.000147889 93,126,241 unc_p_freq_max_os_cycles # 11.6
2.000448381 800,218,217 unc_p_clockticks
2.000448381 142,516,095 unc_p_freq_max_os_cycles # 17.8
3.000639852 800,243,057 unc_p_clockticks
3.000639852 162,292,689 unc_p_freq_max_os_cycles # 20.3
% perf stat -a -I 1000 -e '{unc_p_clockticks,unc_p_freq_max_os_cycles}' --metric-only
# time freq_max_os_cycles %
1.000127077 0.9
2.000301436 0.7
3.000456379 0.0
v2: Change from DivideBy to MetricExpr
v3: Use expr__ prefix. Support more than one other event.
v4: Update description
v5: Only print warning message once for multiple PMUs.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: http://lkml.kernel.org/r/20170320201711.14142-11-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-03-20 20:17:08 +00:00
|
|
|
perf_stat__collect_metric_expr(evsel_list);
|
2016-03-01 18:57:52 +00:00
|
|
|
perf_stat__init_shadow_stats();
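As a worked example of what a MetricExpr reduces to: the metric in the message above is just a ratio over two collected counts, evaluated after reading. A hedged sketch with the expression hardcoded (the real code parses it from the JSON alias):

#include <stdio.h>

/* numerator as a percentage of denominator, guarding division by
 * zero the way a shadow-stat printout must */
double metric_ratio(double num, double den)
{
	return den ? 100.0 * num / den : 0.0;
}

int main(void)
{
	/* counts from the first interval in the commit message */
	printf("# %.1f\n", metric_ratio(93126241.0, 800085181.0)); /* ~11.6 */
	return 0;
}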
|
2015-11-05 14:40:46 +00:00
|
|
|
|
2018-08-30 06:32:29 +00:00
|
|
|
if (stat_config.csv_sep) {
|
|
|
|
stat_config.csv_output = true;
|
|
|
|
if (!strcmp(stat_config.csv_sep, "\\t"))
|
|
|
|
stat_config.csv_sep = "\t";
|
2015-11-05 14:41:01 +00:00
|
|
|
} else
|
2018-08-30 06:32:29 +00:00
|
|
|
stat_config.csv_sep = DEFAULT_SEPARATOR;
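The separator setup just above restated as a pure function (resolve_csv_sep is a hypothetical name; DEFAULT_SEPARATOR mirrors the macro used here, assumed to be a single space):

#include <stdbool.h>
#include <string.h>

#define DEFAULT_SEPARATOR " "

/* "-x '\t'" arrives as the two characters '\' 't' and must become a
 * real tab; any other argument is used verbatim; no -x, no CSV */
const char *resolve_csv_sep(const char *arg, bool *csv_output)
{
	*csv_output = (arg != NULL);
	if (!arg)
		return DEFAULT_SEPARATOR;
	return strcmp(arg, "\\t") ? arg : "\t";
}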
|
2015-11-05 14:41:01 +00:00
|
|
|
|
2015-11-05 14:40:46 +00:00
|
|
|
if (argc && !strncmp(argv[0], "rec", 3)) {
|
|
|
|
argc = __cmd_record(argc, argv);
|
|
|
|
if (argc < 0)
|
|
|
|
return -1;
|
2015-11-05 14:40:55 +00:00
|
|
|
} else if (argc && !strncmp(argv[0], "rep", 3))
|
|
|
|
return __cmd_report(argc, argv);
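The two strncmp() calls above implement a small prefix dispatch: 'record' and 'report' may be abbreviated down to three characters. The same idea in isolation (classify_stat_subcommand and the enum are hypothetical names, chosen to avoid clashing with the STAT_RECORD macro used further down):

#include <string.h>

enum stat_subcmd { SUBCMD_NONE, SUBCMD_RECORD, SUBCMD_REPORT };

enum stat_subcmd classify_stat_subcommand(int argc, const char **argv)
{
	if (argc && !strncmp(argv[0], "rec", 3))
		return SUBCMD_RECORD;	/* "rec", "record", ... */
	if (argc && !strncmp(argv[0], "rep", 3))
		return SUBCMD_REPORT;
	return SUBCMD_NONE;		/* plain 'perf stat <cmd>' */
}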
|
perf stat: Add csv-style output
This patch adds an option (-x/--field-separator) to print counts using a
CSV-style output. The user can pass a custom separator. This makes it very easy
to import counts directly into your favorite spreadsheet without having to
write scripts.
Example:
$ perf stat --field-separator=, -a -- sleep 1
4009.961740,task-clock-msecs
13,context-switches
2,CPU-migrations
189,page-faults
9596385684,cycles
3493659441,instructions
872897069,branches
41562,branch-misses
22424,cache-references
1289,cache-misses
Works also in non-aggregated mode:
$ perf stat -x , -a -A -- sleep 1
CPU0,1002.526168,task-clock-msecs
CPU1,1002.528365,task-clock-msecs
CPU2,1002.523360,task-clock-msecs
CPU3,1002.519878,task-clock-msecs
CPU0,1,context-switches
CPU1,5,context-switches
CPU2,5,context-switches
CPU3,6,context-switches
CPU0,0,CPU-migrations
CPU1,1,CPU-migrations
CPU2,0,CPU-migrations
CPU3,1,CPU-migrations
CPU0,2,page-faults
CPU1,6,page-faults
CPU2,9,page-faults
CPU3,174,page-faults
CPU0,2399439771,cycles
CPU1,2380369063,cycles
CPU2,2399142710,cycles
CPU3,2373161192,cycles
CPU0,872900618,instructions
CPU1,873030960,instructions
CPU2,872714525,instructions
CPU3,874460580,instructions
CPU0,221556839,branches
CPU1,218134342,branches
CPU2,218161730,branches
CPU3,218284093,branches
CPU0,18556,branch-misses
CPU1,1449,branch-misses
CPU2,3447,branch-misses
CPU3,12714,branch-misses
CPU0,8330,cache-references
CPU1,313844,cache-references
CPU2,47993728,cache-references
CPU3,826481,cache-references
CPU0,272,cache-misses
CPU1,5360,cache-misses
CPU2,1342193,cache-misses
CPU3,13992,cache-misses
This second version adds the ability to name a separator and uses
field-separator as the long option to be consistent with perf report.
Committer note: Since we enabled --big-num by default in 201e0b0 and -x can't be
used with it, we need to notice whether the user explicitly enabled or disabled -B,
and add code to disable big_num if the user didn't explicitly set --big_num when
-x is used.
-x is used.
Cc: David S. Miller <davem@davemloft.net>
Cc: Frederik Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: paulus@samba.org
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Robert Richter <robert.richter@amd.com>
LKML-Reference: <4cf68aa7.0fedd80a.5294.1203@mx.google.com>
Signed-off-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2010-12-01 16:49:05 +00:00
|
|
|
|
2015-07-21 12:31:25 +00:00
|
|
|
interval = stat_config.interval;
|
2018-01-29 09:25:23 +00:00
|
|
|
timeout = stat_config.timeout;
|
2015-07-21 12:31:25 +00:00
|
|
|
|
2015-11-05 14:40:46 +00:00
|
|
|
/*
|
|
|
|
* For the record command the -o option is already taken care of.
|
|
|
|
*/
|
|
|
|
if (!STAT_RECORD && output_name && strcmp(output_name, "-"))
|
2011-08-15 20:22:33 +00:00
|
|
|
output = NULL;
|
|
|
|
|
2011-09-07 23:14:00 +00:00
|
|
|
if (output_name && output_fd) {
|
|
|
|
fprintf(stderr, "cannot use both --output and --log-fd\n");
|
2015-11-05 14:40:45 +00:00
|
|
|
parse_options_usage(stat_usage, stat_options, "o", 1);
|
|
|
|
parse_options_usage(NULL, stat_options, "log-fd", 0);
|
2013-11-01 07:33:15 +00:00
|
|
|
goto out;
|
2011-09-07 23:14:00 +00:00
|
|
|
}
|
2012-05-15 11:11:11 +00:00
|
|
|
|
2018-08-30 06:32:31 +00:00
|
|
|
if (stat_config.metric_only && stat_config.aggr_mode == AGGR_THREAD) {
|
2016-03-03 23:57:36 +00:00
|
|
|
fprintf(stderr, "--metric-only is not supported with --per-thread\n");
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
2018-08-30 06:32:36 +00:00
|
|
|
if (stat_config.metric_only && stat_config.run_count > 1) {
|
2016-03-03 23:57:36 +00:00
|
|
|
fprintf(stderr, "--metric-only is not supported with -r\n");
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
2018-08-30 06:32:50 +00:00
|
|
|
if (stat_config.walltime_run_table && stat_config.run_count <= 1) {
|
2018-04-23 09:08:21 +00:00
|
|
|
fprintf(stderr, "--table is only supported with -r\n");
|
|
|
|
parse_options_usage(stat_usage, stat_options, "r", 1);
|
|
|
|
parse_options_usage(NULL, stat_options, "table", 0);
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
2012-05-15 11:11:11 +00:00
|
|
|
if (output_fd < 0) {
|
|
|
|
fprintf(stderr, "argument to --log-fd must be a > 0\n");
|
2015-11-05 14:40:45 +00:00
|
|
|
parse_options_usage(stat_usage, stat_options, "log-fd", 0);
|
2013-11-01 07:33:15 +00:00
|
|
|
goto out;
|
2012-05-15 11:11:11 +00:00
|
|
|
}
|
|
|
|
|
2020-10-27 00:27:36 +00:00
|
|
|
if (!output && !stat_config.quiet) {
|
2011-08-15 20:22:33 +00:00
|
|
|
struct timespec tm;
|
|
|
|
mode = append_file ? "a" : "w";
|
|
|
|
|
|
|
|
output = fopen(output_name, mode);
|
|
|
|
if (!output) {
|
|
|
|
perror("failed to create output file");
|
2012-08-26 18:24:44 +00:00
|
|
|
return -1;
|
2011-08-15 20:22:33 +00:00
|
|
|
}
|
|
|
|
clock_gettime(CLOCK_REALTIME, &tm);
|
|
|
|
fprintf(output, "# started on %s\n", ctime(&tm.tv_sec));
|
2012-05-15 11:11:11 +00:00
|
|
|
} else if (output_fd > 0) {
|
2011-09-07 23:14:00 +00:00
|
|
|
mode = append_file ? "a" : "w";
|
|
|
|
output = fdopen(output_fd, mode);
|
|
|
|
if (!output) {
|
|
|
|
perror("Failed opening logfd");
|
|
|
|
return -errno;
|
|
|
|
}
|
2011-08-15 20:22:33 +00:00
|
|
|
}
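Taken together, the branches above pick the stats destination: -o gives a path for fopen(), --log-fd adopts an already-open descriptor via fdopen(), and both honor --append. A condensed, hedged sketch (open_stat_log is a hypothetical helper; error reporting trimmed):

#include <stdio.h>
#include <time.h>

FILE *open_stat_log(const char *path, int fd, int append)
{
	const char *mode = append ? "a" : "w";

	if (path) {
		FILE *f = fopen(path, mode);

		if (f) {
			time_t t = time(NULL);
			/* ctime() appends the '\n' itself */
			fprintf(f, "# started on %s", ctime(&t));
		}
		return f;
	}
	return fd > 0 ? fdopen(fd, mode) : stderr;
}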
|
|
|
|
|
2015-07-21 12:31:24 +00:00
|
|
|
stat_config.output = output;
|
|
|
|
|
2010-12-01 16:49:05 +00:00
|
|
|
/*
|
|
|
|
* let the spreadsheet do the pretty-printing
|
|
|
|
*/
|
2018-08-30 06:32:29 +00:00
|
|
|
if (stat_config.csv_output) {
|
2011-09-07 23:14:04 +00:00
|
|
|
/* User explicitly passed -B? */
|
2010-12-01 16:49:05 +00:00
|
|
|
if (big_num_opt == 1) {
|
|
|
|
fprintf(stderr, "-B option not supported with -x\n");
|
2015-11-05 14:40:45 +00:00
|
|
|
parse_options_usage(stat_usage, stat_options, "B", 1);
|
|
|
|
parse_options_usage(NULL, stat_options, "x", 1);
|
2013-11-01 07:33:15 +00:00
|
|
|
goto out;
|
2010-12-01 16:49:05 +00:00
|
|
|
} else /* Nope, so disable big number formatting */
|
2018-08-30 06:32:47 +00:00
|
|
|
stat_config.big_num = false;
|
2010-12-01 16:49:05 +00:00
|
|
|
} else if (big_num_opt == 0) /* User passed --no-big-num */
|
2018-08-30 06:32:47 +00:00
|
|
|
stat_config.big_num = false;
|
2010-12-01 16:49:05 +00:00
|
|
|
|
2020-12-29 21:42:14 +00:00
|
|
|
err = target__validate(&target);
|
|
|
|
if (err) {
|
|
|
|
target__strerror(&target, err, errbuf, BUFSIZ);
|
|
|
|
pr_warning("%s\n", errbuf);
|
|
|
|
}
|
|
|
|
|
2017-02-27 09:48:18 +00:00
|
|
|
setup_system_wide(argc);
|
2013-09-30 13:37:37 +00:00
|
|
|
|
2018-06-05 12:13:13 +00:00
|
|
|
/*
|
|
|
|
* Display user/system times only for a single
|
|
|
|
* run and when there is a specified tracee.
|
|
|
|
*/
|
2018-08-30 06:32:36 +00:00
|
|
|
if ((stat_config.run_count == 1) && target__none(&target))
|
2018-08-30 06:32:44 +00:00
|
|
|
stat_config.ru_display = true;
|
2018-06-05 12:13:13 +00:00
|
|
|
|
2018-08-30 06:32:36 +00:00
|
|
|
if (stat_config.run_count < 0) {
|
2013-11-01 07:33:15 +00:00
|
|
|
pr_err("Run count must be a positive number\n");
|
2015-11-05 14:40:45 +00:00
|
|
|
parse_options_usage(stat_usage, stat_options, "r", 1);
|
2013-11-01 07:33:15 +00:00
|
|
|
goto out;
|
2018-08-30 06:32:36 +00:00
|
|
|
} else if (stat_config.run_count == 0) {
|
2013-03-01 18:02:27 +00:00
|
|
|
forever = true;
|
2018-08-30 06:32:36 +00:00
|
|
|
stat_config.run_count = 1;
|
2013-03-01 18:02:27 +00:00
|
|
|
}
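The -r handling above, read as a tiny normalization step (normalize_run_count is a hypothetical name): negative counts are rejected, zero means "repeat until interrupted", and forever mode still runs each pass with a count of one so it prints like a single run.

#include <errno.h>
#include <stdbool.h>

int normalize_run_count(int run_count, bool *forever)
{
	if (run_count < 0)
		return -EINVAL;
	*forever = (run_count == 0);
	return *forever ? 1 : run_count;
}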
|
2009-04-20 13:37:32 +00:00
|
|
|
|
2018-08-30 06:32:50 +00:00
|
|
|
if (stat_config.walltime_run_table) {
|
|
|
|
stat_config.walltime_run = zalloc(stat_config.run_count * sizeof(stat_config.walltime_run[0]));
|
|
|
|
if (!stat_config.walltime_run) {
|
2018-04-23 09:08:21 +00:00
|
|
|
pr_err("failed to setup -r option");
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2017-12-05 14:03:10 +00:00
|
|
|
if ((stat_config.aggr_mode == AGGR_THREAD) &&
|
|
|
|
!target__has_task(&target)) {
|
|
|
|
if (!target.system_wide || target.cpu_list) {
|
|
|
|
fprintf(stderr, "The --per-thread option is only "
|
|
|
|
"available when monitoring via -p -t -a "
|
|
|
|
"options or only --per-thread.\n");
|
|
|
|
parse_options_usage(NULL, stat_options, "p", 1);
|
|
|
|
parse_options_usage(NULL, stat_options, "t", 1);
|
|
|
|
goto out;
|
|
|
|
}
|
perf stat: Introduce --per-thread option
Currently, the values for all the tasks passed as -p option PID arguments
get aggregated and printed as single values.
Add a --per-thread option to print values per task.
$ perf stat -e cycles,instructions --per-thread -p 30190,30242
^C
Performance counter stats for process id '30190,30242':
cat-30190 0 cycles
yes-30242 3,842,525,421 cycles
cat-30190 0 instructions
yes-30242 10,370,817,010 instructions
1.143155657 seconds time elapsed
Also works under interval mode:
$ perf stat -e cycles,instructions --per-thread -p 30190,30242 -I 1000
# time comm-pid counts unit events
1.000073435 cat-30190 89,058 cycles
1.000073435 yes-30242 3,360,786,902 cycles (100.00%)
1.000073435 cat-30190 14,066 instructions
1.000073435 yes-30242 9,069,937,462 instructions
2.000204830 cat-30190 0 cycles
2.000204830 yes-30242 3,351,667,626 cycles
2.000204830 cat-30190 0 instructions
2.000204830 yes-30242 9,045,796,885 instructions
^C 2.771286639 cat-30190 0 cycles
2.771286639 yes-30242 2,593,884,166 cycles
2.771286639 cat-30190 0 instructions
2.771286639 yes-30242 7,001,171,191 instructions
It works only with the -t and -p options; otherwise the following error
is printed:
$ perf stat -e cycles --per-thread -I 1000 ls
The --per-thread option is only available when monitoring via -p -t options.
-p, --pid <pid> stat events on existing process id
-t, --tid <tid> stat events on existing thread id
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1435310967-14570-23-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-06-26 09:29:27 +00:00
|
|
|
}
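The target__*() predicates consulted in this check are small helpers; a simplified sketch (the real ones in tools/perf/util/target.h consider a few more fields):

#include <stdbool.h>

struct target_sketch {
	const char *pid, *tid, *cpu_list;
	bool system_wide;
};

/* Simplified: a task target was given via -p/-t ... */
static inline bool target_has_task_sketch(const struct target_sketch *t)
{
	return t->tid || t->pid;
}

/* ... and a CPU target via -a or -C. */
static inline bool target_has_cpu_sketch(const struct target_sketch *t)
{
	return t->system_wide || t->cpu_list;
}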
|
|
|
|
|
|
|
|
/*
|
|
|
|
* no_aggr, cgroup are for system-wide only
|
|
|
|
* --per-thread is aggregated per thread, we don't mix it with cpu mode
|
|
|
|
*/
|
2015-07-21 12:31:22 +00:00
|
|
|
if (((stat_config.aggr_mode != AGGR_GLOBAL &&
|
2021-08-30 17:02:00 +00:00
|
|
|
stat_config.aggr_mode != AGGR_THREAD) ||
|
|
|
|
(nr_cgroups || stat_config.cgroup_list)) &&
|
2013-11-12 19:46:16 +00:00
|
|
|
!target__has_cpu(&target)) {
|
perf tool: Add cgroup support
This patch adds the ability to filter monitoring based on container groups
(cgroups) for both perf stat and perf record. It is possible to monitor
multiple cgroups in parallel. There is one cgroup per event. The cgroups to
monitor are passed via a new -G option followed by a comma-separated list of
cgroup names.
The cgroup filesystem has to be mounted. Given a cgroup name, the perf tool
finds the corresponding directory in the cgroup filesystem and opens it. It
then passes that file descriptor to the kernel.
Example:
$ perf stat -B -a -e cycles:u,cycles:u,cycles:u -G test1,,test2 -- sleep 1
Performance counter stats for 'sleep 1':
2,368,667,414 cycles test1
2,369,661,459 cycles
<not counted> cycles test2
1.001856890 seconds time elapsed
Signed-off-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <4d590290.825bdf0a.7d0a.4890@mx.google.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2011-02-14 09:20:01 +00:00
|
|
|
fprintf(stderr, "both cgroup and no-aggregation "
|
|
|
|
"modes only available in system-wide mode\n");
|
|
|
|
|
2015-11-05 14:40:45 +00:00
|
|
|
parse_options_usage(stat_usage, stat_options, "G", 1);
|
|
|
|
parse_options_usage(NULL, stat_options, "A", 1);
|
|
|
|
parse_options_usage(NULL, stat_options, "a", 1);
|
2021-08-30 17:02:00 +00:00
|
|
|
parse_options_usage(NULL, stat_options, "for-each-cgroup", 0);
|
2013-11-01 07:33:15 +00:00
|
|
|
goto out;
|
2013-02-06 14:46:02 +00:00
|
|
|
}
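The compound condition above is dense; restated as a standalone predicate (the enum subset is illustrative, the real aggr_mode has more values):

#include <stdbool.h>

enum aggr_mode_sketch { AGGR_GLOBAL, AGGR_THREAD, AGGR_OTHER };

/* Per-CPU style aggregation and cgroup filtering both require a CPU or
 * system-wide target; only AGGR_GLOBAL and AGGR_THREAD are exempt. */
static bool invalid_without_cpu_target(enum aggr_mode_sketch mode,
				       int nr_cgroups, bool has_cgroup_list,
				       bool has_cpu_target)
{
	bool per_cpu_aggr = mode != AGGR_GLOBAL && mode != AGGR_THREAD;
	bool uses_cgroups = nr_cgroups || has_cgroup_list;

	return (per_cpu_aggr || uses_cgroups) && !has_cpu_target;
}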
|
|
|
|
|
2021-04-19 09:41:44 +00:00
|
|
|
if (stat_config.iostat_run) {
|
|
|
|
status = iostat_prepare(evsel_list, &stat_config);
|
|
|
|
if (status)
|
|
|
|
goto out;
|
|
|
|
if (iostat_mode == IOSTAT_LIST) {
|
|
|
|
iostat_list(evsel_list, &stat_config);
|
|
|
|
goto out;
|
|
|
|
} else if (verbose)
|
|
|
|
iostat_list(evsel_list, &stat_config);
|
2021-09-27 08:11:14 +00:00
|
|
|
if (iostat_mode == IOSTAT_RUN && !target__has_cpu(&target))
|
|
|
|
target.system_wide = true;
|
2021-04-19 09:41:44 +00:00
|
|
|
}
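One detail worth calling out from the block above: I/O PMU counting is not tied to the measured process, so a run without an explicit CPU list is promoted to system-wide. A hedged sketch of just that adjustment (the enum is assumed to mirror iostat_mode):

#include <stdbool.h>

enum iostat_mode_sketch { IOSTAT_NONE, IOSTAT_LIST, IOSTAT_RUN };

/* Sketch: list mode prints the ports and exits earlier in the block;
 * run mode without a -C list falls back to measuring all CPUs. */
static void iostat_fixup_target(enum iostat_mode_sketch mode,
				bool has_cpu_target, bool *system_wide)
{
	if (mode == IOSTAT_RUN && !has_cpu_target)
		*system_wide = true;
}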
|
|
|
|
|
perf stat: Add -d -d and -d -d -d options to show more CPU events
2011-05-19 11:30:56 +00:00
|
|
|
if (add_default_attributes())
|
|
|
|
goto out;
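add_default_attributes() follows the appending scheme described in the -d/-d -d/-d -d -d commit above: nothing is added when the user supplied events, and each extra -d appends one more detail set. A hypothetical sketch of the level handling (the helper and the array lengths are assumptions; the array names mirror builtin-stat.c):

#include <stddef.h>

struct perf_event_attr;	/* opaque in this sketch */

/* Assumed helper/arrays for illustration only. */
extern int append_attrs_sketch(const struct perf_event_attr *attrs, size_t n);
extern const struct perf_event_attr detailed_attrs[];		/* L1 + LLC   */
extern const struct perf_event_attr very_detailed_attrs[];	/* + TLBs     */
extern const struct perf_event_attr very_very_detailed_attrs[];	/* + prefetch */

static int add_detail_levels_sketch(int detailed_run,
				    size_t n1, size_t n2, size_t n3)
{
	if (detailed_run >= 1 && append_attrs_sketch(detailed_attrs, n1))
		return -1;
	if (detailed_run >= 2 && append_attrs_sketch(very_detailed_attrs, n2))
		return -1;
	if (detailed_run >= 3 && append_attrs_sketch(very_very_detailed_attrs, n3))
		return -1;
	return 0;
}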
|
2009-04-20 13:37:32 +00:00
|
|
|
|
perf stat: Add --for-each-cgroup option
The --for-each-cgroup option is syntactic sugar to monitor a large number
of cgroups easily. The current command line requires listing all the events
and cgroups even if users want to monitor the same events for each cgroup.
This patch addresses that usage by copying the given events for each cgroup
on the user's behalf.
For instance, to monitor 6 events for 200 cgroups each, users would have to
write 1200 event names (with -e) AND 1200 cgroup names (with -G) on the
command line. But with this change, they can just specify the 6 events and
200 cgroups with the new option.
A simpler example below measures 3 events for 2 cgroups ('A' and 'B'),
so a total of 6 events are counted, as shown below.
$ perf stat -a -e cpu-clock,cycles,instructions --for-each-cgroup A,B sleep 1
Performance counter stats for 'system wide':
988.18 msec cpu-clock A # 0.987 CPUs utilized
3,153,761,702 cycles A # 3.200 GHz (100.00%)
8,067,769,847 instructions A # 2.57 insn per cycle (100.00%)
982.71 msec cpu-clock B # 0.982 CPUs utilized
3,136,093,298 cycles B # 3.182 GHz (99.99%)
8,109,619,327 instructions B # 2.58 insn per cycle (99.99%)
1.001228054 seconds time elapsed
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Ian Rogers <irogers@google.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lore.kernel.org/lkml/20200924124455.336326-3-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-09-24 12:44:52 +00:00
|
|
|
if (stat_config.cgroup_list) {
|
|
|
|
if (nr_cgroups > 0) {
|
|
|
|
pr_err("--cgroup and --for-each-cgroup cannot be used together\n");
|
|
|
|
parse_options_usage(stat_usage, stat_options, "G", 1);
|
|
|
|
parse_options_usage(NULL, stat_options, "for-each-cgroup", 0);
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
2020-09-24 12:44:53 +00:00
|
|
|
if (evlist__expand_cgroup(evsel_list, stat_config.cgroup_list,
|
perf stat: Support regex pattern in --for-each-cgroup
To make the command line even more compact with cgroups, support regex
pattern matching in cgroup names.
$ perf stat -a -e cpu-clock,cycles --for-each-cgroup ^foo sleep 1
3,000.73 msec cpu-clock foo # 2.998 CPUs utilized
12,530,992,699 cycles foo # 7.517 GHz (100.00%)
1,000.61 msec cpu-clock foo/bar # 1.000 CPUs utilized
4,178,529,579 cycles foo/bar # 2.506 GHz (100.00%)
1,000.03 msec cpu-clock foo/baz # 0.999 CPUs utilized
4,176,104,315 cycles foo/baz # 2.505 GHz (100.00%)
1.000892614 seconds time elapsed
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Ian Rogers <irogers@google.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lore.kernel.org/lkml/20201027072855.655449-2-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-10-27 07:28:55 +00:00
|
|
|
&stat_config.metric_events, true) < 0) {
|
|
|
|
parse_options_usage(stat_usage, stat_options,
|
|
|
|
"for-each-cgroup", 0);
|
perf stat: Add --for-each-cgroup option
2020-09-24 12:44:52 +00:00
|
|
|
goto out;
|
perf stat: Support regex pattern in --for-each-cgroup
2020-10-27 07:28:55 +00:00
|
|
|
}
|
perf stat: Add --for-each-cgroup option
2020-09-24 12:44:52 +00:00
|
|
|
}
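Conceptually, evlist__expand_cgroup() turns N events and M cgroups into N*M counters by cloning the event list once per cgroup and tagging each clone; in the commit's example, 6 events and 200 cgroups become 1200 counters. A hypothetical sketch (all three helpers are invented for illustration):

struct evlist;

extern struct evlist *evlist_clone_sketch(struct evlist *template);
extern int evlist_tag_cgroup_sketch(struct evlist *el, const char *cgrp);
extern void evlist_append_sketch(struct evlist *dst, struct evlist *src);

/* Clone from the untagged template each time so earlier clones are not
 * re-cloned into later cgroups. */
static int expand_cgroups_sketch(struct evlist *result,
				 struct evlist *template,
				 const char * const *names, int nr)
{
	int i;

	for (i = 0; i < nr; i++) {
		struct evlist *copy = evlist_clone_sketch(template);

		if (!copy || evlist_tag_cgroup_sketch(copy, names[i]))
			return -1;
		evlist_append_sketch(result, copy);
	}
	return 0;
}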
|
|
|
|
|
2017-12-05 14:03:10 +00:00
|
|
|
if ((stat_config.aggr_mode == AGGR_THREAD) && (target.system_wide))
|
|
|
|
target.per_thread = true;
|
|
|
|
|
perf tools: Enable on a list of CPUs for hybrid
The 'perf record' and 'perf stat' commands have supported the option
'-C/--cpus' to count or collect only on the list of CPUs provided. This
option needs to be supported for hybrid as well.
For hybrid support, it needs to check that the CPUs in the list are available
on the hybrid PMU. One example for Alder Lake: cpu0-7 are 'cpu_core', cpu8-11
are 'cpu_atom'.
Before:
# perf stat -e cpu_core/cycles/ -C11 -- sleep 1
Performance counter stats for 'CPU(s) 11':
<not supported> cpu_core/cycles/
1.006179431 seconds time elapsed
The 'perf stat' command silently returned "<not supported>" without any
helpful information. It should error out, pointing out that cpu11
is not 'cpu_core'.
After:
# perf stat -e cpu_core/cycles/ -C11 -- sleep 1
WARNING: 11 isn't a 'cpu_core', please use a CPU list in the 'cpu_core' range (0-7)
failed to use cpu list 11
We also need to support the events without pmu prefix specified.
# perf stat -e cycles -C11 -- sleep 1
WARNING: 11 isn't a 'cpu_core', please use a CPU list in the 'cpu_core' range (0-7)
Performance counter stats for 'CPU(s) 11':
1,067,373 cpu_atom/cycles/
1.005544738 seconds time elapsed
The perf tool creates two cycles events automatically, cpu_core/cycles/ and
cpu_atom/cycles/. It checks that cpu11 is not 'cpu_core', then shows a warning
for cpu_core/cycles/ and only counts cpu_atom/cycles/.
If some of the CPUs are 'cpu_core' and some are 'cpu_atom', for example:
# perf stat -e cycles -C0,11 -- sleep 1
WARNING: use 0 in 'cpu_core' for 'cycles', skip other cpus in list.
WARNING: use 11 in 'cpu_atom' for 'cycles', skip other cpus in list.
Performance counter stats for 'CPU(s) 0,11':
1,914,704 cpu_core/cycles/
2,036,983 cpu_atom/cycles/
1.005815641 seconds time elapsed
It now automatically selects cpu0 for cpu_core/cycles/, selects cpu11 for
cpu_atom/cycles/, and output with some warnings.
Some more complex examples,
# perf stat -e cycles,instructions -C0,11 -- sleep 1
WARNING: use 0 in 'cpu_core' for 'cycles', skip other cpus in list.
WARNING: use 11 in 'cpu_atom' for 'cycles', skip other cpus in list.
WARNING: use 0 in 'cpu_core' for 'instructions', skip other cpus in list.
WARNING: use 11 in 'cpu_atom' for 'instructions', skip other cpus in list.
Performance counter stats for 'CPU(s) 0,11':
2,780,387 cpu_core/cycles/
1,583,432 cpu_atom/cycles/
3,957,277 cpu_core/instructions/
1,167,089 cpu_atom/instructions/
1.006005124 seconds time elapsed
# perf stat -e cycles,cpu_atom/instructions/ -C0,11 -- sleep 1
WARNING: use 0 in 'cpu_core' for 'cycles', skip other cpus in list.
WARNING: use 11 in 'cpu_atom' for 'cycles', skip other cpus in list.
WARNING: use 11 in 'cpu_atom' for 'cpu_atom/instructions/', skip other cpus in list.
Performance counter stats for 'CPU(s) 0,11':
3,290,301 cpu_core/cycles/
1,953,073 cpu_atom/cycles/
1,407,869 cpu_atom/instructions/
1.006260912 seconds time elapsed
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jin Yao <yao.jin@intel.com>
Cc: Kan Liang <kan.liang@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20210723063433.7318-4-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2021-07-23 06:34:33 +00:00
|
|
|
if (evlist__fix_hybrid_cpus(evsel_list, target.cpu_list)) {
|
|
|
|
pr_err("failed to use cpu list %s\n", target.cpu_list);
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
|
|
|
target.hybrid = perf_pmu__has_hybrid();
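perf_pmu__has_hybrid() keys off the PMUs the kernel exposes in sysfs; a simplified sketch of that detection (reduced here to a path-existence check, which is an assumption about the mechanism, not the real implementation):

#include <stdbool.h>
#include <unistd.h>

/* On current hybrid x86 parts the core PMUs appear as "cpu_core" and
 * "cpu_atom" under /sys/bus/event_source/devices/. */
static bool has_hybrid_pmu_sketch(void)
{
	return access("/sys/bus/event_source/devices/cpu_core", F_OK) == 0 &&
	       access("/sys/bus/event_source/devices/cpu_atom", F_OK) == 0;
}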
|
2020-11-30 17:56:52 +00:00
|
|
|
if (evlist__create_maps(evsel_list, &target) < 0) {
|
2013-11-12 19:46:16 +00:00
|
|
|
if (target__has_task(&target)) {
|
2012-05-07 05:09:04 +00:00
|
|
|
pr_err("Problems finding threads of monitor\n");
|
2015-11-05 14:40:45 +00:00
|
|
|
parse_options_usage(stat_usage, stat_options, "p", 1);
|
|
|
|
parse_options_usage(NULL, stat_options, "t", 1);
|
2013-11-12 19:46:16 +00:00
|
|
|
} else if (target__has_cpu(&target)) {
|
2012-05-07 05:09:04 +00:00
|
|
|
perror("failed to parse CPUs map");
|
2015-11-05 14:40:45 +00:00
|
|
|
parse_options_usage(stat_usage, stat_options, "C", 1);
|
|
|
|
parse_options_usage(NULL, stat_options, "a", 1);
|
2013-11-01 07:33:15 +00:00
|
|
|
}
|
|
|
|
goto out;
|
2011-01-03 19:49:48 +00:00
|
|
|
}
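Note the usage-printing convention in the error path above, repeated throughout cmd_stat: the first parse_options_usage() call passes stat_usage so the usage header is printed once, and follow-up calls pass NULL to append only the additional option lines:

/*
 * Pattern (sketch):
 *   parse_options_usage(stat_usage, stat_options, "p", 1);  header + -p
 *   parse_options_usage(NULL, stat_options, "t", 1);        append -t only
 */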
|
perf stat: Introduce --per-thread option
2015-06-26 09:29:27 +00:00
|
|
|
|
perf stat: Ensure group is defined on top of the same cpu mask
Jin Yao reported the issue (and posted first versions of this change)
with groups being defined over events with different cpu masks.
This causes assert aborts in get_group_fd, like:
# perf stat -M "C2_Pkg_Residency" -a -- sleep 1
perf: util/evsel.c:1464: get_group_fd: Assertion `!(fd == -1)' failed.
Aborted
All the events in the group have to be defined over the same cpus so the
group_fd can be found for every leader/member pair.
Adding check to ensure this condition is met and removing the group
(with warning) if we detect mixed cpus, like:
$ sudo perf stat -e '{power/energy-cores/,cycles},{instructions,power/energy-cores/}'
WARNING: event cpu maps do not match, disabling group:
anon group { power/energy-cores/, cycles }
anon group { instructions, power/energy-cores/ }
Ian also asked for cpu map details; they are displayed in verbose mode:
$ sudo perf stat -e '{cycles,power/energy-cores/}' -v
WARNING: group events cpu maps do not match, disabling group:
anon group { power/energy-cores/, cycles }
power/energy-cores/: 0
cycles: 0-7
anon group { instructions, power/energy-cores/ }
instructions: 0-7
power/energy-cores/: 0
Committer testing:
[root@seventh ~]# perf stat -e '{power/energy-cores/,cycles},{instructions,power/energy-cores/}'
WARNING: grouped events cpus do not match, disabling group:
anon group { power/energy-cores/, cycles }
anon group { instructions, power/energy-cores/ }
^C
Performance counter stats for 'system wide':
12.62 Joules power/energy-cores/
106,920,637 cycles
80,228,899 instructions # 0.75 insn per cycle
12.62 Joules power/energy-cores/
14.514476987 seconds time elapsed
[root@seventh ~]#
But if we put compatible events in each group it works:
[root@seventh ~]# perf stat -e '{power/energy-cores/,power/energy-ram/},{instructions,cycles}' -a sleep 2
Performance counter stats for 'system wide':
1.95 Joules power/energy-cores/
0.92 Joules power/energy-ram/
29,305,715 instructions # 1.03 insn per cycle
28,423,338 cycles
2.001438142 seconds time elapsed
[root@seventh ~]#
This needs improvement, though:
[root@seventh ~]# perf stat -e '{power/energy-cores/,power/energy-ram/},{instructions,cycles}' sleep 2
Error:
The sys_perf_event_open() syscall returned with 22 (Invalid argument) for event (power/energy-cores/).
/bin/dmesg | grep -i perf may provide additional information.
[root@seventh ~]#
We need to emit a better message, one stating that the power/ events
can't be used for a specific workload; they are per-cpu or system-wide
only.
Fixes: 6a4bb04caacc8 ("perf tools: Enable grouping logic for parsed events")
Co-developed-by: Jin Yao <yao.jin@linux.intel.com>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Ian Rogers <irogers@google.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Michael Petlan <mpetlan@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lore.kernel.org/lkml/20200602101736.GE1112120@krava
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-06-02 10:17:36 +00:00
|
|
|
evlist__check_cpu_maps(evsel_list);
|
|
|
|
|
perf stat: Introduce --per-thread option
2015-06-26 09:29:27 +00:00
|
|
|
/*
|
|
|
|
* Initialize thread_map with comm names,
|
|
|
|
* so we can print them in the output.
|
|
|
|
*/
|
2017-12-05 14:03:07 +00:00
|
|
|
if (stat_config.aggr_mode == AGGR_THREAD) {
|
2019-07-21 11:24:42 +00:00
|
|
|
thread_map__read_comms(evsel_list->core.threads);
|
2017-12-05 14:03:07 +00:00
|
|
|
if (target.system_wide) {
|
|
|
|
if (runtime_stat_new(&stat_config,
|
2019-08-22 11:11:41 +00:00
|
|
|
perf_thread_map__nr(evsel_list->core.threads))) {
|
2017-12-05 14:03:07 +00:00
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
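thread_map__read_comms() fills in the short command names so --per-thread rows can be labeled "comm-pid"; a minimal standalone sketch of reading one comm from procfs:

#include <stdio.h>
#include <string.h>
#include <sys/types.h>

/* Read the thread's command name, e.g. "cat" for the cat-30190 label. */
static int read_comm_sketch(pid_t tid, char *buf, size_t size)
{
	char path[64];
	FILE *f;

	snprintf(path, sizeof(path), "/proc/%d/comm", (int)tid);
	f = fopen(path, "r");
	if (!f)
		return -1;
	if (!fgets(buf, size, f)) {
		fclose(f);
		return -1;
	}
	fclose(f);
	buf[strcspn(buf, "\n")] = '\0';	/* strip trailing newline */
	return 0;
}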
|
perf stat: Introduce --per-thread option
2015-06-26 09:29:27 +00:00
|
|
|
|
perf stat: Add --per-node aggregation support
Add a new --per-node option to aggregate counts per NUMA node for
system-wide mode measurements.
You can specify --per-node in live mode:
# perf stat -a -I 1000 -e cycles --per-node
# time node cpus counts unit events
1.000542550 N0 20 6,202,097 cycles
1.000542550 N1 20 639,559 cycles
2.002040063 N0 20 7,412,495 cycles
2.002040063 N1 20 2,185,577 cycles
3.003451699 N0 20 6,508,917 cycles
3.003451699 N1 20 765,607 cycles
...
Or in the record/report stat session:
# perf stat record -a -I 1000 -e cycles
# time counts unit events
1.000536937 10,008,468 cycles
2.002090152 9,578,539 cycles
3.003625233 7,647,869 cycles
4.005135036 7,032,086 cycles
^C 4.340902364 3,923,893 cycles
# perf stat report --per-node
# time node cpus counts unit events
1.000536937 N0 20 9,355,086 cycles
1.000536937 N1 20 653,382 cycles
2.002090152 N0 20 7,712,838 cycles
2.002090152 N1 20 1,865,701 cycles
3.003625233 N0 20 6,604,441 cycles
3.003625233 N1 20 1,043,428 cycles
4.005135036 N0 20 6,350,522 cycles
4.005135036 N1 20 681,564 cycles
4.340902364 N0 20 3,403,188 cycles
4.340902364 N1 20 520,705 cycles
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Alexey Budankov <alexey.budankov@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Joe Mario <jmario@redhat.com>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Michael Petlan <mpetlan@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20190904073415.723-4-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-08-28 08:17:43 +00:00
|
|
|
if (stat_config.aggr_mode == AGGR_NODE)
|
|
|
|
cpu__setup_cpunode_map();
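cpu__setup_cpunode_map() builds the CPU-to-NUMA-node table behind --per-node; a sketch of the mapping it derives from sysfs (probing per-node CPU entries, one plausible way to recover the topology):

#include <stdio.h>
#include <sys/stat.h>

/* sysfs lists each node's CPUs as /sys/devices/system/node/node<N>/cpu<C>,
 * so probing those paths yields the node owning a given CPU. */
static int cpu_to_node_sketch(int cpu, int max_node)
{
	char path[128];
	struct stat st;
	int node;

	for (node = 0; node <= max_node; node++) {
		snprintf(path, sizeof(path),
			 "/sys/devices/system/node/node%d/cpu%d", node, cpu);
		if (stat(path, &st) == 0)
			return node;
	}
	return -1;	/* unknown: no NUMA info for this CPU */
}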
|
|
|
|
|
2018-01-29 09:25:22 +00:00
|
|
|
if (stat_config.times && interval)
|
|
|
|
interval_count = true;
|
|
|
|
else if (stat_config.times && !interval) {
|
|
|
|
pr_err("interval-count option should be used together with "
|
|
|
|
"interval-print.\n");
|
|
|
|
parse_options_usage(stat_usage, stat_options, "interval-count", 0);
|
|
|
|
parse_options_usage(stat_usage, stat_options, "I", 1);
|
|
|
|
goto out;
|
|
|
|
}
|
2010-05-28 10:00:01 +00:00
|
|
|
|
2018-01-29 09:25:23 +00:00
|
|
|
if (timeout && timeout < 100) {
|
|
|
|
if (timeout < 10) {
|
|
|
|
pr_err("timeout must be >= 10ms.\n");
|
|
|
|
parse_options_usage(stat_usage, stat_options, "timeout", 0);
|
|
|
|
goto out;
|
|
|
|
} else
|
|
|
|
pr_warning("timeout < 100ms. "
|
|
|
|
"The overhead percentage could be high in some cases. "
|
|
|
|
"Please proceed with caution.\n");
|
|
|
|
}
|
|
|
|
if (timeout && interval) {
|
|
|
|
pr_err("timeout option is not supported with interval-print.\n");
|
|
|
|
parse_options_usage(stat_usage, stat_options, "timeout", 0);
|
|
|
|
parse_options_usage(stat_usage, stat_options, "I", 1);
|
|
|
|
goto out;
|
|
|
|
}
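Taken together, the -I/--interval-count/--timeout checks above enforce a small constraint set; restated as a standalone validator sketch:

#include <stdio.h>

/* Rules enforced above: --interval-count requires -I; --timeout excludes
 * -I; --timeout must be >= 10ms and merely warns below 100ms. */
static int check_timing_opts_sketch(unsigned int interval_ms,
				    unsigned int timeout_ms,
				    unsigned int times)
{
	if (times && !interval_ms)
		return -1;		/* interval-count needs interval-print */
	if (timeout_ms && interval_ms)
		return -1;		/* mutually exclusive */
	if (timeout_ms && timeout_ms < 10)
		return -1;		/* too short to be meaningful */
	if (timeout_ms && timeout_ms < 100)
		fprintf(stderr,
			"timeout < 100ms: overhead percentage could be high\n");
	return 0;
}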
|
|
|
|
|
2020-11-30 12:31:04 +00:00
|
|
|
if (evlist__alloc_stats(evsel_list, interval))
|
2014-01-03 18:56:06 +00:00
|
|
|
goto out;
|
2010-03-18 14:36:05 +00:00
|
|
|
|
2013-02-14 12:57:27 +00:00
|
|
|
if (perf_stat_init_aggr_mode())
|
2014-01-03 18:56:06 +00:00
|
|
|
goto out;
|
2013-02-14 12:57:27 +00:00
|
|
|
|
2018-08-30 06:32:14 +00:00
|
|
|
/*
|
|
|
|
* Set sample_type to PERF_SAMPLE_IDENTIFIER, which should be harmless
|
|
|
|
* while avoiding older tools showing confusing messages.
|
|
|
|
*
|
|
|
|
* However for pipe sessions we need to keep it zero,
|
|
|
|
* because script's perf_evsel__check_attr is triggered
|
|
|
|
* by attr->sample_type != 0, and we can't run it on
|
|
|
|
* stat sessions.
|
|
|
|
*/
|
|
|
|
stat_config.identifier = !(STAT_RECORD && perf_stat.data.is_pipe);
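The identifier flag computed above eventually lands in each event's attr; a sketch of the effect using the uapi constant (the helper is illustrative):

#include <linux/perf_event.h>
#include <stdbool.h>

/* File sessions stamp a harmless non-zero sample_type so older tools
 * don't print confusing messages; pipe sessions keep it zero so perf
 * script's attr check is not triggered. */
static void apply_identifier_sketch(struct perf_event_attr *attr,
				    bool identifier)
{
	attr->sample_type = identifier ? PERF_SAMPLE_IDENTIFIER : 0;
}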
|
|
|
|
|
2009-05-15 09:03:23 +00:00
|
|
|
/*
|
|
|
|
* We don't want to block the signals - that would cause
|
|
|
|
* child tasks to inherit that and Ctrl-C would not work.
|
|
|
|
* What we want is for Ctrl-C to work in the exec()-ed
|
|
|
|
* task, but being ignored by perf stat itself:
|
|
|
|
*/
|
2009-06-10 13:55:59 +00:00
|
|
|
atexit(sig_atexit);
|
2013-03-01 18:02:27 +00:00
|
|
|
if (!forever)
|
|
|
|
signal(SIGINT, skip_signal);
|
perf stat: Add interval printing
This patch adds a new printing mode for perf stat. It allows interval
printing. That means perf stat can now print event deltas at a regular
time interval. This is useful for detecting phases in programs.
The -I option enables interval printing. It expects an interval duration
in milliseconds; the minimum is 100ms. Once activated, perf stat prints
event deltas since the last printout. All modes are supported.
$ perf stat -I 1000 -e cycles noploop 10
noploop for 10 seconds
# time counts events
1.000109853 2,388,560,546 cycles
2.000262846 2,393,332,358 cycles
3.000354131 2,393,176,537 cycles
4.000439503 2,393,203,790 cycles
5.000527075 2,393,167,675 cycles
6.000609052 2,393,203,670 cycles
7.000691082 2,393,175,678 cycles
The output format makes it easy to feed into a plotting program such as
gnuplot when the -I option is used in combination with the -x option:
$ perf stat -x, -I 1000 -e cycles noploop 10
noploop for 10 seconds
1.000084113,2378775498,cycles
2.000245798,2391056897,cycles
3.000354445,2392089414,cycles
4.000459115,2390936603,cycles
5.000565341,2392108173,cycles
Signed-off-by: Stephane Eranian <eranian@google.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung.kim@lge.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1359460064-3060-3-git-send-email-eranian@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2013-01-29 11:47:44 +00:00
|
|
|
signal(SIGCHLD, skip_signal);
|
2009-05-15 09:03:23 +00:00
|
|
|
signal(SIGALRM, skip_signal);
|
|
|
|
signal(SIGABRT, skip_signal);
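The scheme above installs handlers rather than blocking the signals, so the exec()-ed workload starts with default dispositions while perf stat itself only notes that a signal arrived; a minimal sketch of that handler style:

#include <signal.h>

static volatile sig_atomic_t signalled;

/* Record the signal and return; the main loop checks the flag. Handler
 * dispositions are reset across exec(), so Ctrl-C still reaches the
 * workload. */
static void skip_signal_sketch(int signo)
{
	signalled = signo;
}

static void install_handlers_sketch(void)
{
	signal(SIGINT, skip_signal_sketch);
	signal(SIGCHLD, skip_signal_sketch);
	signal(SIGALRM, skip_signal_sketch);
	signal(SIGABRT, skip_signal_sketch);
}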
|
|
|
|
|
2020-07-17 07:05:41 +00:00
|
|
|
if (evlist__initialize_ctlfd(evsel_list, stat_config.ctl_fd, stat_config.ctl_fd_ack))
|
|
|
|
goto out;
|
|
|
|
|
2009-06-13 12:57:28 +00:00
|
|
|
status = 0;
|
2018-08-30 06:32:36 +00:00
|
|
|
for (run_idx = 0; forever || run_idx < stat_config.run_count; run_idx++) {
|
|
|
|
if (stat_config.run_count != 1 && verbose > 0)
|
2011-08-15 20:22:33 +00:00
|
|
|
fprintf(output, "[ perf stat: executing run #%d ... ]\n",
|
|
|
|
run_idx + 1);
|
2011-04-28 16:17:11 +00:00
|
|
|
|
perf stat: Reset previous counts on repeat with interval
When using 'perf stat' with the repeat and interval options, it shows wrong
values for events.
The wrong values will be shown for the first interval on the second and
subsequent repetitions.
Without the fix:
# perf stat -r 3 -I 2000 -e faults -e sched:sched_switch -a sleep 5
2.000282489 53 faults
2.000282489 513 sched:sched_switch
4.005478208 3,721 faults
4.005478208 2,666 sched:sched_switch
5.025470933 395 faults
5.025470933 1,307 sched:sched_switch
2.009602825 1,84,46,74,40,73,70,95,47,520 faults <------
2.009602825 1,84,46,74,40,73,70,95,49,568 sched:sched_switch <------
4.019612206 4,730 faults
4.019612206 2,746 sched:sched_switch
5.039615484 3,953 faults
5.039615484 1,496 sched:sched_switch
2.000274620 1,84,46,74,40,73,70,95,47,520 faults <------
2.000274620 1,84,46,74,40,73,70,95,47,520 sched:sched_switch <------
4.000480342 4,282 faults
4.000480342 2,303 sched:sched_switch
5.000916811 1,322 faults
5.000916811 1,064 sched:sched_switch
#
prev_raw_counts is allocated when using intervals. This is used when
calculating the difference in the counts of events when using interval.
The current counts are stored in prev_raw_counts to calculate the
differences in the next iteration.
On the first interval of the second and subsequent repetitions,
prev_raw_counts would be the values stored in the last interval of the
previous repetitions, while the current counts will only be for the
first interval of the current repetition.
Hence there is a possibility of events showing up as huge numbers.
Fix this by resetting prev_raw_counts whenever perf stat repeats the
command.
With the fix:
# perf stat -r 3 -I 2000 -e faults -e sched:sched_switch -a sleep 5
2.019349347 2,597 faults
2.019349347 2,753 sched:sched_switch
4.019577372 3,098 faults
4.019577372 2,532 sched:sched_switch
5.019415481 1,879 faults
5.019415481 1,356 sched:sched_switch
2.000178813 8,468 faults
2.000178813 2,254 sched:sched_switch
4.000404621 7,440 faults
4.000404621 1,266 sched:sched_switch
5.040196079 2,458 faults
5.040196079 556 sched:sched_switch
2.000191939 6,870 faults
2.000191939 1,170 sched:sched_switch
4.000414103 541 faults
4.000414103 902 sched:sched_switch
5.000809863 450 faults
5.000809863 364 sched:sched_switch
#
Committer notes:
This was broken since the cset introducing the --interval feature, i.e.
--repeat + --interval wasn't tested at that point, add the Fixes tag so
that automatic scripts can pick this up.
Fixes: 13370a9b5bb8 ("perf stat: Add interval printing")
Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Tested-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Cc: Stephane Eranian <eranian@google.com>
Cc: stable@vger.kernel.org # v3.9+
Link: http://lore.kernel.org/lkml/20190904094738.9558-2-srikar@linux.vnet.ibm.com
[ Fixed up conflicts with libperf, i.e. some perf_{evsel,evlist} lost the 'perf' prefix ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-09-04 09:47:37 +00:00
|
|
|
if (run_idx != 0)
|
2020-11-30 12:31:04 +00:00
|
|
|
evlist__reset_prev_raw_counts(evsel_list);
|
perf stat: Reset previous counts on repeat with interval
2019-09-04 09:47:37 +00:00
|
|
|
|
2018-04-23 09:08:21 +00:00
|
|
|
status = run_perf_stat(argc, argv, run_idx);
|
perf stat: Fix a segmentation fault when using repeat forever
Observe a segmentation fault when 'perf stat' is asked to repeat forever
with the interval option.
Without fix:
# perf stat -r 0 -I 5000 -e cycles -a sleep 10
# time counts unit events
5.000211692 3,13,89,82,34,157 cycles
10.000380119 1,53,98,52,22,294 cycles
10.040467280 17,16,79,265 cycles
Segmentation fault
This problem was only observed with the forever option, aka -r 0; it
works fine with limited repeats. Calling print_counter with ts set to
NULL is not correct when interval is set. Hence avoid
print_counter(NULL,..) if interval is set.
With fix:
# perf stat -r 0 -I 5000 -e cycles -a sleep 10
# time counts unit events
5.019866622 3,15,14,43,08,697 cycles
10.039865756 3,15,16,31,95,261 cycles
10.059950628 1,26,05,47,158 cycles
5.009902655 3,14,52,62,33,932 cycles
10.019880228 3,14,52,22,89,154 cycles
10.030543876 66,90,18,333 cycles
5.009848281 3,14,51,98,25,437 cycles
10.029854402 3,15,14,93,04,918 cycles
5.009834177 3,14,51,95,92,316 cycles
Committer notes:
Did the 'git bisect' to find the cset introducing the problem to add the
Fixes tag below, and at that time the problem reproduced as:
(gdb) run stat -r0 -I500 sleep 1
<SNIP>
Program received signal SIGSEGV, Segmentation fault.
print_interval (prefix=prefix@entry=0x7fffffffc8d0 "", ts=ts@entry=0x0) at builtin-stat.c:866
866 sprintf(prefix, "%6lu.%09lu%s", ts->tv_sec, ts->tv_nsec, csv_sep);
(gdb) bt
#0 print_interval (prefix=prefix@entry=0x7fffffffc8d0 "", ts=ts@entry=0x0) at builtin-stat.c:866
#1 0x000000000041860a in print_counters (ts=ts@entry=0x0, argc=argc@entry=2, argv=argv@entry=0x7fffffffd640) at builtin-stat.c:938
#2 0x0000000000419a7f in cmd_stat (argc=2, argv=0x7fffffffd640, prefix=<optimized out>) at builtin-stat.c:1411
#3 0x000000000045c65a in run_builtin (p=p@entry=0x6291b8 <commands+216>, argc=argc@entry=5, argv=argv@entry=0x7fffffffd640) at perf.c:370
#4 0x000000000045c893 in handle_internal_command (argc=5, argv=0x7fffffffd640) at perf.c:429
#5 0x000000000045c8f1 in run_argv (argcp=argcp@entry=0x7fffffffd4ac, argv=argv@entry=0x7fffffffd4a0) at perf.c:473
#6 0x000000000045cac9 in main (argc=<optimized out>, argv=<optimized out>) at perf.c:588
(gdb)
Mostly the same as just before this patch:
Program received signal SIGSEGV, Segmentation fault.
0x00000000005874a7 in print_interval (config=0xa1f2a0 <stat_config>, evlist=0xbc9b90, prefix=0x7fffffffd1c0 "`", ts=0x0) at util/stat-display.c:964
964 sprintf(prefix, "%6lu.%09lu%s", ts->tv_sec, ts->tv_nsec, config->csv_sep);
(gdb) bt
#0 0x00000000005874a7 in print_interval (config=0xa1f2a0 <stat_config>, evlist=0xbc9b90, prefix=0x7fffffffd1c0 "`", ts=0x0) at util/stat-display.c:964
#1 0x0000000000588047 in perf_evlist__print_counters (evlist=0xbc9b90, config=0xa1f2a0 <stat_config>, _target=0xa1f0c0 <target>, ts=0x0, argc=2, argv=0x7fffffffd670)
at util/stat-display.c:1172
#2 0x000000000045390f in print_counters (ts=0x0, argc=2, argv=0x7fffffffd670) at builtin-stat.c:656
#3 0x0000000000456bb5 in cmd_stat (argc=2, argv=0x7fffffffd670) at builtin-stat.c:1960
#4 0x00000000004dd2e0 in run_builtin (p=0xa30e00 <commands+288>, argc=5, argv=0x7fffffffd670) at perf.c:310
#5 0x00000000004dd54d in handle_internal_command (argc=5, argv=0x7fffffffd670) at perf.c:362
#6 0x00000000004dd694 in run_argv (argcp=0x7fffffffd4cc, argv=0x7fffffffd4c0) at perf.c:406
#7 0x00000000004dda11 in main (argc=5, argv=0x7fffffffd670) at perf.c:531
(gdb)
Fixes: d4f63a4741a8 ("perf stat: Introduce print_counters function")
Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Tested-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Cc: stable@vger.kernel.org # v4.2+
Link: http://lore.kernel.org/lkml/20190904094738.9558-3-srikar@linux.vnet.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-09-04 09:47:38 +00:00
|
|
|
if (forever && status != -1 && !interval) {
|
2015-06-26 09:29:26 +00:00
|
|
|
print_counters(NULL, argc, argv);
|
2015-06-26 09:29:13 +00:00
|
|
|
perf_stat__reset_stats();
|
2013-03-01 18:02:27 +00:00
|
|
|
}
|
2009-06-13 12:57:28 +00:00
|
|
|
}
|
|
|
|
|
perf stat: Report summary for interval mode
Currently 'perf stat' supports printing counts at a regular interval (-I),
but it's not very easy for the user to get the overall statistics.
The patch uses 'evsel->prev_raw_counts' to get counts for summary. Copy
the counts to 'evsel->counts' after printing the interval results.
Next, we just follow the non-interval processing.
Let's see some examples,
root@kbl-ppc:~# perf stat -e cycles -I1000 --interval-count 2
# time counts unit events
1.000412064 2,281,114 cycles
2.001383658 2,547,880 cycles
Performance counter stats for 'system wide':
4,828,994 cycles
2.002860349 seconds time elapsed
root@kbl-ppc:~# perf stat -e cycles,instructions -I1000 --interval-count 2
# time counts unit events
1.000389902 1,536,093 cycles
1.000389902 420,226 instructions # 0.27 insn per cycle
2.001433453 2,213,952 cycles
2.001433453 735,465 instructions # 0.33 insn per cycle
Performance counter stats for 'system wide':
3,750,045 cycles
1,155,691 instructions # 0.31 insn per cycle
2.003023361 seconds time elapsed
root@kbl-ppc:~# perf stat -M CPI,IPC -I1000 --interval-count 2
# time counts unit events
1.000435121 905,303 inst_retired.any # 2.9 CPI
1.000435121 2,663,333 cycles
1.000435121 914,702 inst_retired.any # 0.3 IPC
1.000435121 2,676,559 cpu_clk_unhalted.thread
2.001615941 1,951,092 inst_retired.any # 1.8 CPI
2.001615941 3,551,357 cycles
2.001615941 1,950,837 inst_retired.any # 0.5 IPC
2.001615941 3,551,044 cpu_clk_unhalted.thread
Performance counter stats for 'system wide':
2,856,395 inst_retired.any # 2.2 CPI
6,214,690 cycles
2,865,539 inst_retired.any # 0.5 IPC
6,227,603 cpu_clk_unhalted.thread
2.003403078 seconds time elapsed
Committer testing:
Before:
# perf stat -e cycles -I1000 --interval-count 2
# time counts unit events
1.000618627 26,877,408 cycles
2.001417968 233,672,829 cycles
#
After:
# perf stat -e cycles -I1000 --interval-count 2
# time counts unit events
1.001531815 5,341,388,792 cycles
2.002936530 100,073,912 cycles
Performance counter stats for 'system wide':
5,441,462,704 cycles
2.004893794 seconds time elapsed
#
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Reviewed-by: Jiri Olsa <jolsa@redhat.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/20200520042737.24160-6-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-05-20 04:27:37 +00:00
|
|
|
if (!forever && status != -1 && (!interval || stat_config.summary))
|
2015-06-26 09:29:26 +00:00
|
|
|
print_counters(NULL, argc, argv);
|
2013-03-18 14:24:21 +00:00
|
|
|
|
2020-07-17 07:05:41 +00:00
|
|
|
evlist__finalize_ctlfd(evsel_list);
|
|
|
|
|
perf stat record: Add record command
Add 'perf stat record' command support. It creates a simple (header-only)
perf.data file at the moment.
The record command can be specified anywhere among the stat options. All
stat command options are valid for the stat record command, with the '-o'
option being the exception: if specified for the record command, it denotes
the perf.data file name.
Committer note:
Set sample_type to PERF_SAMPLE_IDENTIFIER, which should be harmless
while avoiding that older tools show confusing messages, for instance,
with sample_type = 0, we get:
$ perf stat record usleep 1
Performance counter stats for 'usleep 1':
0.630237 task-clock (msec) # 0.528 CPUs utilized
1 context-switches # 0.002 M/sec
0 cpu-migrations # 0.000 K/sec
52 page-faults # 0.083 M/sec
978,312 cycles # 1.552 GHz
671,931 stalled-cycles-frontend # 68.68% frontend cycles idle
<not supported> stalled-cycles-backend
646,379 instructions # 0.66 insns per cycle
# 1.04 stalled cycles per insn
131,046 branches # 207.931 M/sec
7,073 branch-misses # 5.40% of all branches
0.001193240 seconds time elapsed
$ oldperf evlist
WARNING: The perf.data file's data size field is 0 which is unexpected.
Was the 'perf record' command properly terminated?
non matching sample_type
$
While with sample_type set to PERF_SAMPLE_IDENTIFIER, after we re-run 'perf
stat record usleep' we get:
$ oldperf evlist
WARNING: The perf.data file's data size field is 0 which is unexpected.
Was the 'perf record' command properly terminated?
task-clock
context-switches
cpu-migrations
page-faults
cycles
stalled-cycles-frontend
stalled-cycles-backend
instructions
branches
branch-misses
$
Which at least shows the names of the events in the perf.data file.
Additionally, such files, when passed to 'perf report' will produce:
$ oldperf report --stdio
WARNING: The perf.data file's data size field is 0 which is unexpected.
Was the 'perf record' command properly terminated?
Warning:
Kernel address maps (/proc/{kallsyms,modules}) were restricted.
Check /proc/sys/kernel/kptr_restrict before running 'perf record'.
As no suitable kallsyms nor vmlinux was found, kernel samples
can't be resolved.
Samples in kernel modules can't be resolved as well.
Error:
The perf.data file has no samples!
# To display the perf.data header info, please use --header/--header-only options.
#
$
Which is confusing and can be solved by just adding the kernel mmap record,
which will also remove that warning about the data size field being equal to
zero, after generating the mmap record:
$ perf stat record usleep 1
Performance counter stats for 'usleep 1':
0.600796 task-clock (msec) # 0.478 CPUs utilized
1 context-switches # 0.002 M/sec
0 cpu-migrations # 0.000 K/sec
54 page-faults # 0.090 M/sec
886,844 cycles # 1.476 GHz
582,169 stalled-cycles-frontend # 65.65% frontend cycles idle
<not supported> stalled-cycles-backend
638,344 instructions # 0.72 insns per cycle
# 0.91 stalled cycles per insn
130,204 branches # 216.719 M/sec
7,500 branch-misses # 5.76% of all branches
0.001255897 seconds time elapsed
$ oldperf evlist
task-clock
context-switches
cpu-migrations
page-faults
cycles
stalled-cycles-frontend
stalled-cycles-backend
instructions
branches
branch-misses
$ oldperf report --stdio
Error:
The perf.data file has no samples!
# To display the perf.data header info, please use --header/--header-only options.
#
[acme@zoo linux]$
No warnings, sensible output about which events are in the perf.data file,
and also a "file has no samples" message, which is indeed the case.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Tested-by: Kan Liang <kan.liang@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1446734469-11352-3-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-11-05 14:40:46 +00:00
|
|
|
if (STAT_RECORD) {
|
|
|
|
/*
|
|
|
|
* We synthesize the kernel mmap record just so that older tools
|
|
|
|
* don't emit warnings about not being able to resolve symbols
|
2021-03-23 16:09:15 +00:00
|
|
|
* due to /proc/sys/kernel/kptr_restrict settings and instead provide
|
perf stat record: Add record command
2015-11-05 14:40:46 +00:00
|
|
|
* a saner message about no samples being in the perf.data file.
|
|
|
|
*
|
|
|
|
* This also serves to suppress a warning about f_header.data.size == 0
|
2015-11-05 14:40:48 +00:00
|
|
|
* in header.c at the moment 'perf stat record' gets introduced, which
|
|
|
|
* is not really needed once we start adding the stat specific PERF_RECORD_
|
|
|
|
* records, but the need to suppress the kptr_restrict messages in older
|
|
|
|
* tools remains. -acme
|
perf stat record: Add record command
2015-11-05 14:40:46 +00:00
|
|
|
*/
|
2017-01-23 21:07:59 +00:00
|
|
|
int fd = perf_data__fd(&perf_stat.data);
|
perf stat: Enable counting events for BPF programs
Introduce 'perf stat -b' option, which counts events for BPF programs, like:
[root@localhost ~]# ~/perf stat -e ref-cycles,cycles -b 254 -I 1000
1.487903822 115,200 ref-cycles
1.487903822 86,012 cycles
2.489147029 80,560 ref-cycles
2.489147029 73,784 cycles
3.490341825 60,720 ref-cycles
3.490341825 37,797 cycles
4.491540887 37,120 ref-cycles
4.491540887 31,963 cycles
The example above counts 'cycles' and 'ref-cycles' of the BPF program with
id 254. This is similar to the bpftool-prog-profile command, but more
flexible.
'perf stat -b' creates per-cpu perf_events and attaches fentry/fexit BPF
programs (monitor-progs) to the target BPF program (target-prog). The
monitor-progs read the perf_event before and after the target-prog runs,
and aggregate the difference in a BPF map. User space then reads the data
from these maps.
A new 'struct bpf_counter' is introduced to provide a common interface
that uses BPF programs/maps to count perf events.
Committer notes:
Removed all but bpf_counter.h includes from evsel.h, not needed at all.
Also, BPF map lookups for PERCPU_ARRAYs need the value receive buffer
passed to the kernel to have libbpf_num_possible_cpus() entries, not
evsel__nr_cpus(evsel), as the former uses
/sys/devices/system/cpu/possible while the latter uses
/sys/devices/system/cpu/online, which may be less than the 'possible'
number, making the bpf map lookup overwrite memory and cause
hard-to-debug memory corruption.
We need to continue using evsel__nr_cpus(evsel) when accessing the
perf_counts array, though, so as not to overwrite another area of memory :-)
Signed-off-by: Song Liu <songliubraving@fb.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: https://lore.kernel.org/lkml/20210120163031.GU12699@kernel.org/
Acked-by: Namhyung Kim <namhyung@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: kernel-team@fb.com
Link: http://lore.kernel.org/lkml/20201229214214.3413833-4-songliubraving@fb.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-12-29 21:42:14 +00:00
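The PERCPU_ARRAY sizing pitfall in the committer notes is worth spelling
out. A minimal sketch, assuming map_fd refers to a
BPF_MAP_TYPE_PERCPU_ARRAY map with __u64 values (the helper is
hypothetical, not the actual bpf_counter.c code;
libbpf_num_possible_cpus() and bpf_map_lookup_elem() are the real libbpf
wrappers):

#include <errno.h>
#include <stdlib.h>
#include <bpf/bpf.h>
#include <bpf/libbpf.h>
#include <linux/types.h>

/* Hypothetical helper: sum one PERCPU_ARRAY slot across CPUs. The
 * receive buffer must have one entry per *possible* CPU; sizing it by
 * the number of *online* CPUs lets the kernel write past its end. */
static int percpu_slot_sum(int map_fd, __u32 key, __u64 *sum)
{
	int cpu, num_cpus = libbpf_num_possible_cpus();
	__u64 *values;

	if (num_cpus < 0)
		return num_cpus;
	values = calloc(num_cpus, sizeof(*values));
	if (!values)
		return -ENOMEM;
	if (bpf_map_lookup_elem(map_fd, &key, values)) {
		free(values);
		return -errno;
	}
	*sum = 0;
	for (cpu = 0; cpu < num_cpus; cpu++)
		*sum += values[cpu];
	free(values);
	return 0;
}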
|
|
|
|
|
|
|
err = perf_event__synthesize_kernel_mmap((void *)&perf_stat,
|
|
|
|
process_synthesized_event,
|
|
|
|
&perf_stat.session->machines.host);
|
2015-11-05 14:40:46 +00:00
|
|
|
if (err) {
|
|
|
|
pr_warning("Couldn't synthesize the kernel mmap record, harmless, "
|
|
|
|
"older tools may produce warnings about this file\n.");
|
|
|
|
}
|
|
|
|
|
2015-11-05 14:40:52 +00:00
|
|
|
if (!interval) {
|
|
|
|
if (WRITE_STAT_ROUND_EVENT(walltime_nsecs_stats.max, FINAL))
|
|
|
|
pr_err("failed to write stat round event\n");
|
|
|
|
}
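WRITE_STAT_ROUND_EVENT above is a thin convenience macro. A sketch of its
assumed shape (token-pasting the round type, so FINAL expands to
PERF_STAT_ROUND_TYPE__FINAL; treat this as an assumption, not necessarily
the exact upstream definition):

/* Assumed shape: paste the round type and forward walltime plus type to
 * the helper that synthesizes the PERF_RECORD_STAT_ROUND event. */
#define WRITE_STAT_ROUND_EVENT(time, interval) \
	write_stat_round_event(time, PERF_STAT_ROUND_TYPE__ ## interval)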
|
|
|
|
|
2017-01-23 21:07:59 +00:00
|
|
|
if (!perf_stat.data.is_pipe) {
|
2015-11-05 14:40:50 +00:00
|
|
|
perf_stat.session->header.data_size += perf_stat.bytes_written;
|
|
|
|
perf_session__write_header(perf_stat.session, evsel_list, fd, true);
|
|
|
|
}
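The data_size fixup above only works if every record written to the file
was also accounted in perf_stat.bytes_written. A hypothetical sketch of
that bookkeeping on the write path (assumed helper, not the actual
process_synthesized_event implementation):

#include <unistd.h>

/* Hypothetical sketch: count every byte that lands in the data section
 * so header.data_size can be corrected at exit, which is what stops
 * older tools from warning about a zero data size field. */
static int stat_write_record(struct perf_stat *st, void *buf, size_t size)
{
	ssize_t ret = write(perf_data__fd(&st->data), buf, size);

	if (ret != (ssize_t)size)
		return -1;
	st->bytes_written += ret;
	return 0;
}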
|
2015-11-05 14:40:46 +00:00
|
|
|
|
2019-07-21 11:24:07 +00:00
|
|
|
evlist__close(evsel_list);
|
2015-11-05 14:40:46 +00:00
|
|
|
perf_session__delete(perf_stat.session);
|
|
|
|
}
|
|
|
|
|
2015-12-09 02:11:27 +00:00
|
|
|
perf_stat__exit_aggr_mode();
|
2020-11-30 12:31:04 +00:00
|
|
|
evlist__free_stats(evsel_list);
|
2011-02-01 18:18:10 +00:00
|
|
|
out:
|
2021-04-19 09:41:44 +00:00
|
|
|
if (stat_config.iostat_run)
|
|
|
|
iostat_release(evsel_list);
|
|
|
|
|
2019-07-04 15:06:20 +00:00
|
|
|
zfree(&stat_config.walltime_run);
|
2018-04-23 09:08:21 +00:00
|
|
|
|
2017-05-26 19:05:38 +00:00
|
|
|
if (smi_cost && smi_reset)
|
|
|
|
sysfs__write_int(FREEZE_ON_SMI_PATH, 0);
|
|
|
|
|
2019-07-21 11:23:56 +00:00
|
|
|
evlist__delete(evsel_list);
|
2017-12-05 14:03:07 +00:00
|
|
|
|
2020-06-02 21:47:38 +00:00
|
|
|
metricgroup__rblist_exit(&stat_config.metric_events);
|
2017-12-05 14:03:07 +00:00
|
|
|
runtime_stat_delete(&stat_config);
|
2020-09-03 12:29:37 +00:00
|
|
|
evlist__close_control(stat_config.ctl_fd, stat_config.ctl_fd_ack, &stat_config.ctl_fd_close);
|
2017-12-05 14:03:07 +00:00
|
|
|
|
2009-06-13 12:57:28 +00:00
|
|
|
return status;
|
2009-04-20 13:37:32 +00:00
|
|
|
}
|