Add support for SQLite 3 to the call-graph-from-sql.py script. The SQL
statements work as is, so just detect the database type by checking if the
SQLite 3 file exists.
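A minimal sketch of that kind of detection (the helper name is hypothetical;
the script itself has the real code):

  import os

  def is_sqlite3(dbname):
      # Per the description above: a PostgreSQL database is referred to by
      # name only, so if a file with that name exists on disk, treat it as
      # an SQLite 3 database file.  A stricter check could also read the
      # 16-byte "SQLite format 3\0" file header.
      return os.path.isfile(dbname)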
Committer notes:
Tested by collecting the PT data on a RHEL 7.4 system, generating the SQLite3
database there and then moving it to a Fedora 26 system, where the
call-graph-from-sql.py script was run using python-pyside version
1.2.2-7fc26 to view the call graphs with Qt4.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: http://lkml.kernel.org/r/1501749090-20357-6-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Add support for exporting to SQLite 3 the same data as the PostgreSQL
export.
Committer note:
Tested on RHEL 7.4 using the 1.2.2-4el python-pyside packages from EPEL.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: http://lkml.kernel.org/r/1501749090-20357-4-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
The export does not work if only branches are exported because of a
missing column in the samples table. Fix by adding the missing
call_path_id.
Fixes: 3521f3bc9d ("perf script: Update export-to-postgresql to support callchain export")
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Link: http://lkml.kernel.org/r/1501749090-20357-2-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Add script intel-pt-events.py that provides an example of how to unpack the
raw data for power events and PTWRITE.
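A minimal sketch of the unpacking step (the 8-byte payload layout assumed
here is for illustration only; the script documents the real record formats):

  import struct

  def print_ptwrite_payload(raw_buf):
      # Assume, purely for illustration, that the first 8 bytes of the raw
      # buffer hold a little-endian u64 PTWRITE payload.
      payload, = struct.unpack_from("<Q", raw_buf, 0)
      print("ptwrite payload: 0x%x" % payload)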
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Link: http://lkml.kernel.org/r/1495786658-18063-35-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Merge tag 'perf-core-for-mingo-20160803' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/urgent
Pull perf/core improvements and fixes from Arnaldo Carvalho de Melo:
New features:
- Add --sample-cpu to 'perf record', to explicitly ask for sampling
the CPU (Jiri Olsa)
Fixes:
- Fix processing of multi-byte chunks in objdump output, fixing
disassemble processing for annotation on at least ARM64 (Jan Stancek)
- Use SyS_epoll_wait in a BPF 'perf test' entry instead of sys_epoll_wait,
which is not present in the DWARF info in vmlinux files (Arnaldo Carvalho de Melo)
- Add -Wno-shadow when processing files using perl headers, fixing
the build on Fedora Rawhide and Arch Linux (Namhyung Kim)
Infrastructure changes:
- Annotate prep work to better catch and report errors related to
using objdump to disassemble DSOs (Arnaldo Carvalho de Melo)
- Add 'alloc', 'scnprintf' and 'and' methods for bitmap processing (Jiri Olsa)
- Add nested output resorting callback in hists processing (Jiri Olsa)
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
On my Arch Linux machine, perf failed to build as below:
CC scripts/perl/Perf-Trace-Util/Context.o
In file included from /usr/lib/perl5/core/perl/CORE/perl.h:3905:0,
from Context.xs:23:
/usr/lib/perl5/core/perl/CORE/inline.h: In function :
/usr/lib/perl5/core/perl/CORE/cop.h:612:13: warning: declaration of 'av'
shadows a previous local [-Werror=shadow]
AV *av = GvAV(PL_defgv);
^
/usr/lib/perl5/core/perl/CORE/inline.h:526:5: note: in expansion of
macro 'CX_POP_SAVEARRAY'
CX_POP_SAVEARRAY(cx);
^~~~~~~~~~~~~~~~
In file included from /usr/lib/perl5/core/perl/CORE/perl.h:5853:0,
from Context.xs:23:
/usr/lib/perl5/core/perl/CORE/inline.h:518:9: note:
shadowed declaration is here
AV *av;
^~
The fix is to add '-Wno-shadow', since the error message identifies that
warning as the cause of the failure. Because the warning comes from the
perl (not perf) code base, which we have no control over, just suppress
it when compiling the perl scripting code.
Committer note:
This also fixes the build on Fedora Rawhide.
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20160802024317.31725-1-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Pull networking updates from David Miller:
1) Unified UDP encapsulation offload methods for drivers, from
Alexander Duyck.
2) Make DSA binding more sane, from Andrew Lunn.
3) Support QCA9888 chips in ath10k, from Anilkumar Kolli.
4) Several workqueue usage cleanups, from Bhaktipriya Shridhar.
5) Add XDP (eXpress Data Path), essentially running BPF programs on RX
packets as soon as the device sees them, with the option to mirror
the packet on TX via the same interface. From Brenden Blanco and
others.
6) Allow qdisc/class stats dumps to run lockless, from Eric Dumazet.
7) Add VLAN support to b53 and bcm_sf2, from Florian Fainelli.
8) Simplify netlink conntrack entry layout, from Florian Westphal.
9) Add ipv4 forwarding support to mlxsw spectrum driver, from Ido
Schimmel, Yotam Gigi, and Jiri Pirko.
10) Add SKB array infrastructure and convert tun and macvtap over to it.
From Michael S Tsirkin and Jason Wang.
11) Support qdisc packet injection in pktgen, from John Fastabend.
12) Add neighbour monitoring framework to TIPC, from Jon Paul Maloy.
13) Add NV congestion control support to TCP, from Lawrence Brakmo.
14) Add GSO support to SCTP, from Marcelo Ricardo Leitner.
15) Allow GRO and RPS to function on macsec devices, from Paolo Abeni.
16) Support MPLS over IPV4, from Simon Horman.
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next: (1622 commits)
xgene: Fix build warning with ACPI disabled.
be2net: perform temperature query in adapter regardless of its interface state
l2tp: Correctly return -EBADF from pppol2tp_getname.
net/mlx5_core/health: Remove deprecated create_singlethread_workqueue
net: ipmr/ip6mr: update lastuse on entry change
macsec: ensure rx_sa is set when validation is disabled
tipc: dump monitor attributes
tipc: add a function to get the bearer name
tipc: get monitor threshold for the cluster
tipc: make cluster size threshold for monitoring configurable
tipc: introduce constants for tipc address validation
net: neigh: disallow transition to NUD_STALE if lladdr is unchanged in neigh_update()
MAINTAINERS: xgene: Add driver and documentation path
Documentation: dtb: xgene: Add MDIO node
dtb: xgene: Add MDIO node
drivers: net: xgene: ethtool: Use phy_ethtool_gset and sset
drivers: net: xgene: Use exported functions
drivers: net: xgene: Enable MDIO driver
drivers: net: xgene: Add backward compatibility
drivers: net: phy: xgene: Add MDIO driver
...
An important piece of information for the napi_poll tracepoint is the
work done (packets processed) by the napi_poll() call. Add both the
work done and the budget, as they are related.
Handle the trace_napi_poll() parameter change in dropwatch/drop_monitor
and in the python perf script netdev-times.py in a backward-compatible
way, as python fortunately supports optional parameters; see the sketch
below.
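A minimal sketch of that backward-compatible handling (handler name and
fields are illustrative, not the exact perf-generated signature):

  def trace_napi_poll(napi, dev_name, work=None, budget=None):
      # Older kernels do not pass 'work' and 'budget'; Python default
      # arguments let one handler cope with both tracepoint formats.
      if work is None or budget is None:
          return "%s: napi poll" % dev_name
      return "%s: processed %d of budget %d" % (dev_name, work, budget)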
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
It is ignored and this is actually a python script, not a perl one.
Reported-by: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Milian Wolff <milian.wolff@kdab.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Link: http://lkml.kernel.org/n/tip-0w4bpbqd79v3sl34jvpr11v0@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Add stackcollapse.py script as an example of parsing call chains, and
also of using optparse to access command line options.
The flame graph tools include a set of scripts that parse output from
various tools (including "perf script"), remove the offsets in the
function and collapse each stack to a single line. The website also
says "perf report could have a report style [...] that output folded
stacks directly, obviating the need for stackcollapse-perf.pl", so here
it is.
This script is a Python rewrite of stackcollapse-perf.pl, using the perf
scripting interface to access the perf data directly from Python.
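A minimal sketch of the folding step, assuming callchains arrive as the list
of dicts (with an optional 'sym' entry) that the perf scripting interface
passes to sample handlers:

  from collections import defaultdict

  # folded stack string -> number of samples
  lines = defaultdict(int)

  def fold(comm, callchain):
      # Render the chain as "comm;outermost;...;leaf", dropping offsets so
      # identical paths collapse onto a single line.
      names = [(frame.get("sym") or {}).get("name", "[unknown]")
               for frame in reversed(callchain)]
      lines[";".join([comm] + names)] += 1

  def trace_end():
      for stack, count in sorted(lines.items()):
          print("%s %d" % (stack, count))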
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Brendan Gregg <bgregg@netflix.com>
Link: http://lkml.kernel.org/r/1460467573-22989-1-git-send-email-pbonzini@redhat.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Update the export-to-postgresql.py to support the newly introduced
callchain export.
Callchains are added to the existing call_paths table and can now
be associated with samples when the "callchains" command line option
is used with the script.
Ex.:
$ perf script -s export-to-postgresql.py example_db all callchains
Includes the following changes to enable callchain export via the python export
APIs:
- Add the "callchains" commandline option, which is used to enable
callchain export by setting the perf_db_export_callchains global
- Add perf_db_export_callchains checks for call_path table creation
and population.
- Add call_path_id to samples_table to conform with the new API
Example usage and output using a small test app:
test_app.c:
volatile int x = 0;

void inc_x_loop()
{
        int i;

        for (i = 0; i < 100000000; i++)
                x++;
}

void a()
{
        inc_x_loop();
}

void b()
{
        inc_x_loop();
}

int main()
{
        a();
        b();
        return 0;
}
Example usage:
$ gcc -g -O0 test_app.c
$ perf record --call-graph=dwarf ./a.out
[ perf record: Woken up 77 times to write data ]
[ perf record: Captured and wrote 19.373 MB perf.data (2404 samples) ]
$ perf script -s scripts/python/export-to-postgresql.py
example_db all callchains
$ psql example_db
example_db=#
SELECT
(SELECT name FROM symbols WHERE id = cps.symbol_id) as symbol,
(SELECT name FROM symbols WHERE id =
(SELECT symbol_id from call_paths where id = cps.parent_id))
as parent_symbol,
sum(period) as event_count
FROM samples join call_paths as cps on call_path_id = cps.id
GROUP BY cps.id,evsel_id
ORDER BY event_count DESC
LIMIT 5;
symbol | parent_symbol | event_count
------------------+--------------------------+-------------
inc_x_loop | a | 734250982
inc_x_loop | b | 731028057
unknown | unknown | 1335858
task_tick_fair | scheduler_tick | 1238842
update_wall_time | tick_do_update_jiffies64 | 650373
(5 rows)
The above data shows the total "self time" in cycles for each call path that
was sampled. It is intended to demonstrate how the export accounts separately
for the two ways to reach the "inc_x_loop" function (via "a" and via "b").
Recursive common table expressions could also be used to get the cumulative
time spent in a function, but that is beyond the scope of this basic example.
Signed-off-by: Chris Phlipot <cphlipot0@gmail.com>
Acked-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1461831551-12213-7-git-send-email-cphlipot0@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
The current instructions for setting up an Ubuntu system for using the
export-to-postgresql.py script are incorrect.
The instructions in the script have been updated to work on newer
versions of Ubuntu.
- Add missing dependencies to the apt-get command:
python-pyside.qtsql, libqt4-sql-psql
- Add the '-s' option to the createuser command to force the user to be a
superuser, since the command doesn't prompt as indicated in the
current instructions.
Tested on: Ubuntu 14.04, Ubuntu 16.04 (beta)
Signed-off-by: Chris Phlipot <cphlipot0@gmail.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1461056164-14914-3-git-send-email-cphlipot0@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
To print syscall names, the audit-libs-python package is required. If it
is not installed, perf prints this error string:
# perf script syscall-counts
Install the audit-libs-python package to get syscall names.
But the package name is different in Ubuntu, so mention that in the error
message, similar to an error message in util/trace-event-scripting.c:
# perf script syscall-counts
Install the audit-libs-python package to get syscall names.
For example:
# apt-get install python-audit (Ubuntu)
# yum install audit-libs-python (Fedora)
etc.
Signed-off-by: Taeung Song <treeze.taeung@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/r/1455018790-13425-1-git-send-email-treeze.taeung@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Adding stat-cpi.py as an example of how to do stat scripting.
It computes the CPI metric from the cycles and instructions events.
CPI is a basic performance metric showing the Cycles Per Instruction
ratio, which helps to identify cycles-hungry code.
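A minimal sketch of the computation itself (the real script feeds it from
the perf stat scripting callbacks):

  def print_cpi(time, cpu, thread, cycles, instructions):
      # CPI = cycles / instructions, printed in the same format as the
      # example output below.
      cpi = float(cycles) / instructions
      print("%f: cpu %d, thread %d -> cpi %f (%d/%d)"
            % (time, cpu, thread, cpi, cycles, instructions))

  print_cpi(0.001783, -1, -1, 2904431, 3346878)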
The following stat record/report/script combinations can be used:
- get CPI for given workload
$ perf stat -e cycles,instructions record ls
SNIP
Performance counter stats for 'ls':
2,904,431 cycles
3,346,878 instructions # 1.15 insns per cycle
0.001782686 seconds time elapsed
$ perf script -s ./scripts/python/stat-cpi.py
0.001783: cpu -1, thread -1 -> cpi 0.867803 (2904431/3346878)
$ perf stat -e cycles,instructions record ls | perf script -s ./scripts/python/stat-cpi.py
SNIP
0.001730: cpu -1, thread -1 -> cpi 0.869026 (2928292/3369627)
- get CPI systemwide:
$ perf stat -e cycles,instructions -a -I 1000 record sleep 3
# time counts unit events
1.000158618 594,274,711 cycles (100.00%)
1.000158618 441,898,250 instructions
2.000350973 567,649,705 cycles (100.00%)
2.000350973 432,669,206 instructions
3.000559210 561,940,430 cycles (100.00%)
3.000559210 420,403,465 instructions
3.000670798 780,105 cycles (100.00%)
3.000670798 326,516 instructions
$ perf script -s ./scripts/python/stat-cpi.py
1.000159: cpu -1, thread -1 -> cpi 1.344823 (594274711/441898250)
2.000351: cpu -1, thread -1 -> cpi 1.311972 (567649705/432669206)
3.000559: cpu -1, thread -1 -> cpi 1.336669 (561940430/420403465)
3.000671: cpu -1, thread -1 -> cpi 2.389178 (780105/326516)
$ perf stat -e cycles,instructions -a -I 1000 record sleep 3 | perf script -s ./scripts/python/stat-cpi.py
1.000202: cpu -1, thread -1 -> cpi 1.035091 (940778881/908885530)
2.000392: cpu -1, thread -1 -> cpi 1.442600 (627493992/434974455)
3.000545: cpu -1, thread -1 -> cpi 1.353612 (741463930/547766890)
3.000622: cpu -1, thread -1 -> cpi 2.642110 (784083/296764)
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Tested-by: Kan Liang <kan.liang@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1452077397-31958-4-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Add some comments to the script and some 'views' to the created database
that better illustrate the database structure and how it can be used.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/1443186956-18718-8-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
This patch creates a new script (compaction-times) to report time
spent in mm compaction. It is possible to report times in nanoseconds
(default) or microseconds (-u).
The option -p will break down results by process id, -pv will further
decompose by each compaction entry/exit.
For each compaction entry/exit what is reported is controlled by the
options:
-t report only timing
-m report migration stats
-ms report migration scanner stats
-fs report free scanner stats
The default is to report all.
Entries may be further filtered by pid, pid-range or comm (regex).
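A minimal sketch of how such a filter could be parsed (hypothetical helper,
not the script's actual code):

  import re

  def make_filter(arg):
      # "1234" -> exact pid, "1234-1240" -> pid range, anything else is
      # treated as a regular expression matched against the comm.
      if re.match(r"^\d+$", arg):
          pid = int(arg)
          return lambda p, comm: p == pid
      m = re.match(r"^(\d+)-(\d+)$", arg)
      if m:
          lo, hi = int(m.group(1)), int(m.group(2))
          return lambda p, comm: lo <= p <= hi
      regex = re.compile(arg)
      return lambda p, comm: regex.search(comm) is not None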
The script is useful when analysing workloads that compact memory. The
most common example is THP allocations on systems with a lot of
uptime that have fragmented memory.
This is an example of using the script to analyse the thpscale workload
from mmtests, which deliberately fragments memory and allocates THP in 4
separate threads.
# Recording step, one of the following:
$ perf record -e 'compaction:mm_compaction_*' ./workload
# or:
$ perf script record compaction-times
# Reporting: basic
total: 2444505743ns migration: moved=357738 failed=39275
free_scanner: scanned=2705578 isolated=387875
migration_scanner: scanned=414426 isolated=397013
# Reporting: Per task stall times
$ perf script report compaction-times -- -t -p
total: 2444505743ns
6384[thpscale]: 740800017ns
6385[thpscale]: 274119512ns
6386[thpscale]: 832961337ns
6383[thpscale]: 596624877ns
# Reporting: Per-compaction attempts for task 6385
$ perf script report compaction-times -- -m -pv 6385
total: 274119512ns migration: moved=14893 failed=24285
6385[thpscale]: 274119512ns migration: moved=14893 failed=24285
6385[thpscale].1: 3033277ns migration: moved=511 failed=1
6385[thpscale].2: 9592094ns migration: moved=1524 failed=12
6385[thpscale].3: 2495587ns migration: moved=512 failed=0
6385[thpscale].4: 2561766ns migration: moved=512 failed=0
6385[thpscale].5: 2523521ns migration: moved=512 failed=0
..... output continues ...
Changes since v1:
- report stats for isolate_migratepages and isolate_freepages
(Vlastimil Babka)
- refactor code to achieve above
- add help text
- output to stdout/stderr explicitly
Signed-off-by: Tony Jones <tonyj@suse.com>
Cc: Mel Gorman <mgorman@suse.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Link: http://lkml.kernel.org/r/1439840932-8933-1-git-send-email-tonyj@suse.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Add a script to produce a call-graph from data exported to a postgresql
database and derived from a processor trace event like intel_pt or intel_bts.
Refer to comments in the scripts call-graph-from-postgresql.py and
export-to-postgresql.py for more details on how to set up the environment,
install the required packages, etc.
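A minimal sketch of reaching the exported data through PySide's Qt SQL
bindings (database name from the example below; the 'calls' table assumes
the export was run with the 'calls' option):

  from PySide.QtSql import QSqlDatabase, QSqlQuery

  db = QSqlDatabase.addDatabase("QPSQL")
  db.setDatabaseName("pt_example")
  if not db.open():
      raise Exception("Failed to open database")

  # Count the exported calls as a quick sanity check of the connection.
  query = QSqlQuery(db)
  query.exec_("SELECT COUNT(*) FROM calls")
  if query.next():
      print("calls rows: %s" % query.value(0))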
Committer note:
From the scripts, for convenience while reading 'git log':
An example of using this script with Intel PT:
$ perf record -e intel_pt//u ls
$ perf script -s ~/libexec/perf-core/scripts/python/export-to-postgresql.py pt_example branches calls
2015-05-29 12:49:23.464364 Creating database...
2015-05-29 12:49:26.281717 Writing to intermediate files...
2015-05-29 12:49:27.190383 Copying to database...
2015-05-29 12:49:28.140451 Removing intermediate files...
2015-05-29 12:49:28.147451 Adding primary keys
2015-05-29 12:49:28.655683 Adding foreign keys
2015-05-29 12:49:29.365350 Done
$ python tools/perf/scripts/python/call-graph-from-postgresql.py pt_example
# The result is a GUI window with a tree representing a context-sensitive
# call-graph. Expanding a couple of levels of the tree and adjusting column
# widths to suit will display something like:
Call Graph: pt_example
Call Path |Object |Count|Time(ns)|Time(%)|Branch Count|Branch Count(%)
v- ls
v- 2638:2638
v- _start ld-2.19.so 1 10074071 100.0 211135 100.0
|- unknown unknown 1 13198 0.1 1 0.0
>- _dl_start ld-2.19.so 1 1400980 13.9 19637 9.3
>- _dl_init_internal ld-2.19.so 1 448152 4.4 11094 5.3
v-__libc_start_main@plt ls 1 8211741 81.5 180397 85.4
>- _dl_fixup ld-2.19.so 1 7607 0.1 108 0.1
>- __cxa_atexit libc-2.19.so 1 11737 0.1 10 0.0
>- __libc_csu_init ls 1 10354 0.1 10 0.0
|- _setjmp libc-2.19.so 1 0 0.0 4 0.0
v- main ls 1 8182043 99.6 180254 99.9
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/1437150840-31811-11-git-send-email-adrian.hunter@intel.com
[ Added 'python-pyside qt-postgresql' to the yum cmdline installing required packages ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Move the scripts objects building under build framework to be included
in the libperf build object.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
Tested-by: Will Deacon <will.deacon@arm.com>
Cc: Alexis Berlemont <alexis.berlemont@gmail.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-ry8pd41ahwpq9h46i8te33c7@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
When building perf for arm64 I hit a warning (treated as an error)
like the one below:
aarch64-oe-linux-gcc -o .../scripts/perl/Perf-Trace-Util/Context.o -c -Wbad-function-cast \
... scripts/perl/Perf-Trace-Util/Context.c
In file included from .../usr/lib64/perl/5.14.3/CORE/perl.h:2464:0,
from Context.xs:23:
/.../usr/lib64/perl/5.14.3/CORE/handy.h:108:0: error: "bool" redefined [-Werror]
# define bool char
^
In file included from /.../usr/src/kernel/tools/include/linux/types.h:4:0,
from /.../usr/src/kernel/arch/arm64/include/uapi/asm/sigcontext.h:19,
from /.../usr/include/bits/sigcontext.h:27,
from /.../usr/include/signal.h:340,
from /.../usr/include/sys/param.h:28,
from /.../usr/lib64/perl/5.14.3/CORE/perl.h:678,
from Context.xs:23:
/.../usr/lib/aarch64-oe-linux/gcc/aarch64-oe-linux/4.9.2/include/stdbool.h:33:0: note: this is the location of the previous definition
#define bool _Bool
Looks like the failure is caused by arm64 uapi/asm/sigcontext.h, which
includes linux/types.h while other archs do not.
Current perl considers this problem:
http://perl5.git.perl.org/perl.git/commit/bd31be4baa3ee68abdb92c0db3200efe0fad903b
However, there are users who use older versions of perl.
This patch includes stdbool.h before Context.xs and defines HAS_BOOL to
prevent perl's headers from defining their own 'bool'. The approach is
borrowed from perl's git tree.
Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Cc: Li Zefan <lizefan@huawei.com>
Link: http://lkml.kernel.org/r/1421671397-4659-1-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Add the ability to export detailed information about paired calls and
returns to Python db export and the export-to-postgresql.py script.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1414678188-14946-7-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Add branch_type and in_tx to Python db export and the
export-to-postgresql.py script.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1414678188-14946-4-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Add a Python script to export to a postgresql database.
The script requires the Python pyside module and the Qt PostgreSQL
driver. The packages needed are probably named "python-pyside" and
"libqt4-sql-psql"
The caller of the script must be able to create postgresql databases.
The script takes the database name as a parameter. The database and
database tables are created. Data is written to flat files which are
then imported using SQL COPY FROM.
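A minimal sketch of that pattern (the real script writes its own file
format; CSV and the helper name are used here only to illustrate COPY FROM):

  from PySide.QtSql import QSqlQuery

  def bulk_load(db, table, path):
      # Import a flat file written while the samples were being processed;
      # a single COPY is much faster than row-by-row INSERTs.
      query = QSqlQuery(db)
      if not query.exec_("COPY %s FROM '%s' (FORMAT csv)" % (table, path)):
          raise Exception("COPY failed: " + query.lastError().text())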
Example:
$ perf record ls
...
$ perf script report export-to-postgresql lsdb
2014-02-14 10:55:38.631431 Creating database...
2014-02-14 10:55:39.291958 Writing to intermediate files...
2014-02-14 10:55:39.350280 Copying to database...
2014-02-14 10:55:39.358536 Removing intermediate files...
2014-02-14 10:55:39.358665 Adding primary keys
2014-02-14 10:55:39.658697 Adding foreign keys
2014-02-14 10:55:39.667412 Done
$ psql lsdb
lsdb-# \d
List of relations
Schema | Name | Type | Owner
--------+-----------------+-------+-------
public | comm_threads | table | acme
public | comms | table | acme
public | dsos | table | acme
public | machines | table | acme
public | samples | table | acme
public | samples_view | view | acme
public | selected_events | table | acme
public | symbols | table | acme
public | threads | table | acme
(9 rows)
lsdb-# \d samples
Table "public.samples"
Column | Type | Modifiers
---------------+---------+-----------
id | bigint | not null
evsel_id | bigint |
machine_id | bigint |
thread_id | bigint |
comm_id | bigint |
dso_id | bigint |
symbol_id | bigint |
sym_offset | bigint |
ip | bigint |
time | bigint |
cpu | integer |
to_dso_id | bigint |
to_symbol_id | bigint |
to_sym_offset | bigint |
to_ip | bigint |
period | bigint |
weight | bigint |
transaction | bigint |
data_src | bigint |
Indexes:
"samples_pkey" PRIMARY KEY, btree (id)
Foreign-key constraints:
"commfk" FOREIGN KEY (comm_id) REFERENCES comms(id)
"dsofk" FOREIGN KEY (dso_id) REFERENCES dsos(id)
"evselfk" FOREIGN KEY (evsel_id) REFERENCES selected_events(id)
"machinefk" FOREIGN KEY (machine_id) REFERENCES machines(id)
"symbolfk" FOREIGN KEY (symbol_id) REFERENCES symbols(id)
"threadfk" FOREIGN KEY (thread_id) REFERENCES threads(id)
"todsofk" FOREIGN KEY (to_dso_id) REFERENCES dsos(id)
"tosymbolfk" FOREIGN KEY (to_symbol_id) REFERENCES symbols(id)
lsdb-# \d samples_view
View "public.samples_view"
Column | Type | Modifiers
-------------------+-------------------------+-----------
id | bigint |
time | bigint |
cpu | integer |
pid | integer |
tid | integer |
command | character varying(16) |
event | character varying(80) |
ip_hex | text |
symbol | character varying(2048) |
sym_offset | bigint |
dso_short_name | character varying(256) |
to_ip_hex | text |
to_symbol | character varying(2048) |
to_sym_offset | bigint |
to_dso_short_name | character varying(256) |
lsdb=# select * from samples_view;
id| time |cpu | pid | tid |command| event | ip_hex | symbol |sym_off| dso_name|to_ip_hex|to_symbol|to_sym_off|to_dso_name
--+------------+----+------+------+-------+--------+---------------+---------------------+-------+---------+---------+---------+----------+----------
1 |12202825015 | -1 | 7339 | 7339 |:17339 | cycles | fffff8104d24a |native_write_msr_safe| 10 | [kernel]| 0 | unknown | 0| unknown
2 |12203258804 | -1 | 7339 | 7339 |:17339 | cycles | fffff8104d24a |native_write_msr_safe| 10 | [kernel]| 0 | unknown | 0| unknown
3 |12203988119 | -1 | 7339 | 7339 |:17339 | cycles | fffff8104d24a |native_write_msr_safe| 10 | [kernel]| 0 | unknown | 0| unknown
My notes (which may be out-of-date) on setting up postgresql so you can
create databases:
fedora:
$ sudo yum install postgresql postgresql-server python-pyside qt-postgresql
$ sudo su - postgres -c initdb
$ sudo service postgresql start
$ sudo su - postgres
$ createuser -s <your username>
I used the unix user name in createuser.
If it fails, try createuser without -s and answer the following question
to allow your user to create tables:
Shall the new role be a superuser? (y/n) y
ubuntu:
$ sudo apt-get install postgresql
$ sudo su - postgres
$ createuser <your username>
Shall the new role be a superuser? (y/n) y
You may want to disable automatic startup. One way is to edit
/etc/postgresql/9.3/main/start.conf. Another is to disable the init
script, e.g. 'sudo update-rc.d postgresql disable'.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1414061124-26830-8-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
This provides valuable information for tracing performance problems.
Since this change alters the interface for the python scripts, also
adjust the script generation and the provided scripts.
Signed-off-by: Joseph Schuchart <joseph.schuchart@tu-dresden.de>
Acked-by: Thomas Ilsche <thomas.ilsche@tu-dresden.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Thomas Ilsche <thomas.ilsche@tu-dresden.de>
Link: http://lkml.kernel.org/r/53BE7E1B.10503@tu-dresden.de
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Older kernels (e.g., RHEL6) do system call tracing via the
syscalls:sys_{enter,exit} tracepoints rather than using raw_syscalls:*.
Update perf python and perl scripts to fall back to syscalls:* when
raw_syscalls:* isn't available.
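A minimal sketch of the python side of the fallback (the argument list is
roughly what perf generates for these tracepoints and may differ in detail):

  syscall_counts = {}

  def raw_syscalls__sys_enter(event_name, context, common_cpu, common_secs,
                              common_nsecs, common_pid, common_comm, id, args):
      syscall_counts[id] = syscall_counts.get(id, 0) + 1

  def syscalls__sys_enter(event_name, context, common_cpu, common_secs,
                          common_nsecs, common_pid, common_comm, id, args):
      # Older kernels only provide syscalls:sys_enter; forward to the
      # raw_syscalls handler so both kernels share one code path.
      raw_syscalls__sys_enter(event_name, context, common_cpu, common_secs,
                              common_nsecs, common_pid, common_comm, id, args)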
Signed-off-by: Daniel Bristot de Oliveira <bristot@redhat.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Luis Claudio R. Goncalves <lgoncalv@redhat.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/5a6c64081a3375bc3bc66351b14559678ef4d71e.1402507908.git.bristot@redhat.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
They convey no information, perhaps I was bitten by some snake at some
point, complete the detox by naming the last of those arguments more
sensibly.
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-u1r0dnjoro08dgztiy2g3t2q@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
765532c8 (perf script: Finish the rename from trace to script,
2010-12-23) made a mistake during find-and-replace replacing
"../../../util/trace-event.h" with "../../../util/script-event.h", a
non-existent file. Fix this include.
Signed-off-by: Ramkumar Ramachandra <artagnon@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/r/1373364033-7918-3-git-send-email-artagnon@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
We can read /proc/kallsyms in a fraction of a second, so why waste
a further fraction of a second showing progress?
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
The sort order of dictionaries in Python is undocumented. Use
tuples instead, which are documented to be lexically ordered.
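A minimal sketch of the difference (addresses are made up for illustration):

  # Tuples compare element by element, so sorting a list of
  # (address, symbol) tuples yields a well-defined, address-first order;
  # iterating a dict gives no such guarantee.
  kallsyms = [(0xc0002000, "foo"), (0xc0000000, "_stext"), (0xc0001000, "bar")]
  kallsyms.sort()
  # now ordered by address: _stext, bar, foo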
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
The comparison between traced and symbol addresses is backwards: if
the traced address doesn't exactly match a symbol (which we don't
expect it to), we'll show the next symbol and the offset to it,
whereas we should show the previous symbol and the offset from it.
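A minimal sketch of the corrected lookup (illustrative only):

  import bisect

  def lookup(sym_addrs, sym_names, addr):
      # sym_addrs is sorted.  Take the last symbol whose start address is
      # <= addr and report the positive offset from it, rather than the
      # next symbol and the offset to it.
      i = bisect.bisect_right(sym_addrs, addr) - 1
      if i < 0:
          return None, 0
      return sym_names[i], addr - sym_addrs[i]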
Cc: stable@vger.kernel.org
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
This works much better if we don't treat protocol numbers as addresses.
Cc: stable@vger.kernel.org
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fix a race in the rwtop script. The issue is caused by the rwtop script
triggering SIGALRM and the underlying pipe reading layer reporting an
error when interrupted.
Fix this by setting SA_RESTART for the rwtop SIGALRM handler, which
avoids interrupting the pipe reading layer.
The discussion for this issue & fix is here:
https://lkml.org/lkml/2012/9/18/123
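rwtop itself is a perl script, but the same idea expressed in Python looks
roughly like this (the interval and report function are placeholders):

  import signal

  interval = 3  # seconds between reports (placeholder value)

  def print_totals():
      pass  # placeholder for the periodic read/write report

  def on_alarm(signum, frame):
      print_totals()
      signal.alarm(interval)

  signal.signal(signal.SIGALRM, on_alarm)
  # The SA_RESTART equivalent: interrupted reads of the perf data pipe are
  # restarted by the kernel instead of failing with EINTR.
  signal.siginterrupt(signal.SIGALRM, False)
  signal.alarm(interval)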
Signed-off-by: Jiri Olsa <jolsa@redhat.com>
Original-patch-by: Andrew Jones <drjones@redhat.com>
Cc: Andrew Jones <drjones@redhat.com>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1360080351-3246-2-git-send-email-jolsa@redhat.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
The tracepoints used by the workqueue-stats script no longer exist so
trying to run the script results in:
# perf script record workqueue-stats
invalid or unsupported event: 'workqueue:workqueue_creation'
Run 'perf list' for a list of valid events
So remove the script until it can be reworked using the new workqueue
tracepoints.
Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Link: http://lkml.kernel.org/r/e7a7637d5df9df86887c3bff7683574665ec5360.1358527965.git.tom.zanussi@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
So that event_analyzing_sample.py can be shown by "perf script -l"
Signed-off-by: Feng Tang <feng.tang@intel.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1347007349-3102-4-git-send-email-feng.tang@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Correct the check of the handler returned by PyDict_GetItemString(),
also fix some spelling errors and remove some dead code in
event_analyzing_sample.py, as suggested by Namhyung Kim.
v2: restore back the wrongly removed trace_unhandled() func
Signed-off-by: Feng Tang <feng.tang@intel.com>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Robert Richter <robert.richter@amd.com>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/20120809134613.067104c4@feng-i7
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Currently only tracepoint events are supported in perf/python scripting;
the first 3 patches of this series add support for all types of
events. This script is just a simple sample to show how to gather the
basic information about the events and analyze them.
The script creates one object for each event sample and inserts them
into a table in a database, then leverages simple SQL commands to
sort/group them. Users can modify it or write brand new functions
according to their specific requirements.
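A minimal sketch of that approach using an in-memory SQLite database (table
and column names follow the example output below):

  import sqlite3

  con = sqlite3.connect(":memory:")
  con.execute("CREATE TABLE gen_events (comm TEXT, symbol TEXT, dso TEXT)")

  def insert_sample(comm, symbol, dso):
      # One row per event sample; sorting/grouping is left to plain SQL.
      con.execute("INSERT INTO gen_events VALUES (?, ?, ?)",
                  (comm, symbol, dso))

  def report_by_comm():
      for comm, count in con.execute(
              "SELECT comm, COUNT(*) FROM gen_events "
              "GROUP BY comm ORDER BY COUNT(*) DESC"):
          print("%16s %6d" % (comm, count))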
Here is the sample of how to use the script:
$ perf record -a tree
$ perf script -s process_event.py
There is 100 records in gen_events table
Statistics about the general events grouped by thread/symbol/dso:
comm number histgram
==========================================
swapper 56 ######
tree 20 #####
perf 10 ####
sshd 8 ####
kworker/7:2 4 ###
ksoftirqd/7 1 #
plugin-containe 1 #
symbol number histgram
==========================================================
native_write_msr_safe 40 ######
__lock_acquire 8 ####
ftrace_graph_caller 4 ###
prepare_ftrace_return 4 ###
intel_idle 3 ##
native_sched_clock 3 ##
Unknown_symbol 2 ##
do_softirq 2 ##
lock_release 2 ##
lock_release_holdtime 2 ##
trace_graph_entry 2 ##
_IO_putc 1 #
__d_lookup_rcu 1 #
__do_fault 1 #
__schedule 1 #
_raw_spin_lock 1 #
delay_tsc 1 #
generic_exec_single 1 #
generic_fillattr 1 #
dso number histgram
==================================================================
[kernel.kallsyms] 95 #######
/lib/libc-2.12.1.so 5 ###
Signed-off-by: Feng Tang <feng.tang@intel.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Robert Richter <robert.richter@amd.com>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1344419875-21665-6-git-send-email-feng.tang@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
This library defines several class types for perf events which could
help to better analyze the event samples. Currently there are just a few
classes: PerfEvent is the base class for all perf events, PebsEvent is
a HW-based Intel x86 PEBS event, and users can add more SW/HW event
classes based on their requirements.
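A minimal sketch of such a hierarchy (the field names are illustrative, not
the library's exact attributes):

  class PerfEvent(object):
      # Base class: fields common to every sample.
      def __init__(self, name, comm, dso, symbol, raw_buf):
          self.name = name
          self.comm = comm
          self.dso = dso
          self.symbol = symbol
          self.raw_buf = raw_buf

  class PebsEvent(PerfEvent):
      # HW-based Intel x86 PEBS event; subclasses could decode raw_buf
      # further (e.g. latency or data-source fields).
      pass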
Signed-off-by: Feng Tang <feng.tang@intel.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Robert Richter <robert.richter@amd.com>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1344419875-21665-5-git-send-email-feng.tang@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
A while back I created the dropmonitor protocol, which allowed users to get
reports of dropped frames communicated to them via a netlink socket.
While useful, several people have now asked that I integrate the ability
to do drop monitoring with perf, so they don't have to run additional
tools.
This patch adds a drop monitor script to the perf suite, and provides
the same output that the netlink socket does.
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1309801217-22450-1-git-send-email-nhorman@tuxdriver.com
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
The scripts have calls to 'perf trace' that need to be converted to 'perf script'; do it.
This problem was introduced in 133dc4c.
Reported-by: Torok Edwin <edwintorok@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Tom Zanussi <tzanussi@gmail.com>
Cc: Torok Edwin <edwintorok@gmail.com>
LKML-Reference: <new-submission>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Free the perf trace namespace and rename 'trace' to 'script', which is a
better match for the scripting engine.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Including -a unconditionally when recording doesn't allow for the
option of running scripts without it. Future patches will add it back
if needed at run-time.
Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Acked-by: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Where we don't have the audit.MACH_ARMEB constant.
Cc: David S. Miller <davem@davemloft.net>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Tom Zanussi <tzanussi@gmail.com>
LKML-Reference: <new-submission>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>