Commit Graph

589957 Commits

Masami Hiramatsu
642aadaa32 perf header: Make topology checkers to check return value of strbuf
Make the topology checkers check the return value of strbuf APIs so that
they can detect errors in strbuf.

Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20160510054735.6158.98650.stgit@devbox
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-10 11:57:22 -03:00
Masami Hiramatsu
70a6898fdc perf tools: Make alias handler to check return value of strbuf
Make the alias handler and sq_quote_argv() check the return value of the
strbuf APIs.

sq_quote_argv() calls die(), but this fix handles strbuf failure as a
special case and returns to the caller, since the caller - handle_alias() -
also has to check the return value of other strbuf APIs, and those checks
can be merged into one if() statement.
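
For illustration, a minimal sketch of the resulting pattern (using the strbuf
API from tools/perf/util/strbuf.h; the helper name and input are hypothetical):

  static int build_alias_cmd(struct strbuf *buf, const char *alias_cmd)
  {
          int ret = strbuf_addstr(buf, alias_cmd); /* alias_cmd: hypothetical input */

          if (!ret)
                  ret = strbuf_addch(buf, ' ');
          return ret;     /* the caller checks once for the whole group */
  }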

Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20160510054725.6158.84597.stgit@devbox
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-10 11:56:52 -03:00
Masami Hiramatsu
b72ca40390 perf help: Make check_emacsclient_version to check strbuf APIs
Make check_emacsclient_version() check the return value of strbuf APIs
so that it can handle errors in strbuf.

Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20160510054716.6158.11755.stgit@devbox
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-10 11:56:14 -03:00
Masami Hiramatsu
bf4d5f25c9 perf probe: Check the return value of strbuf APIs
Check the return value of strbuf APIs in perf-probe
related code, so that it can handle errors in strbuf.

Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20160510054707.6158.69861.stgit@devbox
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-10 11:53:34 -03:00
Masami Hiramatsu
5cea57f30a perf tools: Rewrite strbuf not to die()
Rewrite the strbuf implementation not to use die() or xrealloc().  Instead
of calling die(), most of the API now returns an error code, or 0 on
success.
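
For reference, a simplified sketch of the new convention (not the actual
implementation; assumes <stdlib.h> and <errno.h>):

  static int strbuf_grow_sketch(struct strbuf *sb, size_t extra)
  {
          size_t nr = sb->len + extra + 1;
          char *buf;

          if (nr <= sb->alloc)
                  return 0;

          buf = realloc(sb->buf, nr);
          if (!buf)
                  return -ENOMEM; /* previously: xrealloc() would die() here */
          sb->buf = buf;
          sb->alloc = nr;
          return 0;
  }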

Suggested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20160510054658.6158.24080.stgit@devbox
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-10 11:27:58 -03:00
Chris Phlipot
9c7b37cd63 perf symbols: Fix handling of zero-length symbols.
This change introduces a fix to symbols__find, so that it is able to
find symbols of length zero (where start == end).

The current code has the following problem:

- The current implementation of symbols__find is unable to find any symbols
  of length zero.

- The db-export framework explicitly creates zero length symbols at
  locations where no symbol currently exists.

The combination of the two above behaviors results in behavior similar
to the example below.

1. addr_location is created for a sample, but symbol is unable to be
   resolved.

2. db export creates an "unknown" symbol of length zero at that address
   and inserts it into the dso.

3. A new sample comes in at the same address, but symbols__find is unable
   to find the zero-length symbol, so it is still unresolved.

4. db export sees the symbol is unresolved, and allocates a duplicate
   symbol, even though it already did this in step 2.

This behavior continues every time an address without symbol information
is seen, which causes a very large number of these symbols to be
allocated.
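
The lookup rule after the fix amounts to something like this sketch
(simplified; the real code walks the symbols rb-tree):

  struct sym { unsigned long start, end; };

  static int sym_contains(const struct sym *s, unsigned long ip)
  {
          if (s->start == s->end)   /* zero-length symbol, e.g. db-export "unknown" */
                  return ip == s->start;
          return ip >= s->start && ip < s->end;
  }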

The effect of this fix can be observed by looking at the contents of an
exported database before/after the fix (generated with
scripts/python/export-to-postgresql.py)

Ex.
BEFORE THE CHANGE:

  example_db=# select count(*) from symbols;
   count
  --------
   900213
  (1 row)

  example_db=# select count(*) from symbols where symbols.name='unknown';
   count
  --------
   897355
  (1 row)

  example_db=# select count(*) from symbols where symbols.name!='unknown';
   count
  -------
    2858
  (1 row)

AFTER THE CHANGE:

  example_db=# select count(*) from symbols;
   count
  -------
   25217
  (1 row)

  example_db=# select count(*) from symbols where name='unknown';
   count
  -------
   22359
  (1 row)

  example_db=# select count(*) from symbols where name!='unknown';
   count
  -------
    2858
  (1 row)

Signed-off-by: Chris Phlipot <cphlipot0@gmail.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1462612620-25008-1-git-send-email-cphlipot0@gmail.com
[ Moved the test to later in the rb_tree tests, as this not the likely case ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-09 18:40:03 -03:00
Arnaldo Carvalho de Melo
0a241ef4a2 perf evsel: Print state of perf_event_attr.write_backward
Now we can see if it is set when using verbose mode in various tools,
such as 'perf test':

  # perf test -vv back
  45: Test backward reading from ring buffer                   :
  --- start ---
  <SNIP>
  ------------------------------------------------------------
  perf_event_attr:
    type                             2
    size                             112
    config                           0x98
    { sample_period, sample_freq }   1
    sample_type                      IP|TID|TIME|CPU|PERIOD|RAW
    disabled                         1
    mmap                             1
    comm                             1
    task                             1
    sample_id_all                    1
    exclude_guest                    1
    mmap2                            1
    comm_exec                        1
    write_backward                   1
  ------------------------------------------------------------
  sys_perf_event_open: pid 20911  cpu -1  group_fd -1  flags 0x8
  <SNIP>
  ---- end ----
  Test backward reading from ring buffer: Ok
  #

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Milian Wolff <milian.wolff@kdab.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-kxv05kv9qwl5of7rzfeiiwbv@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-09 18:11:27 -03:00
Wang Nan
ee74701ed8 perf tests: Add test to check backward ring buffer
This test checks reading from a backward ring buffer.

Test result:

  # ~/perf test 'ring buffer'
  45: Test backward reading from ring buffer                   : Ok

The test case is a while loop which calls prctl(PR_SET_NAME) multiple
times.  Each prctl should issue 2 events: one PERF_RECORD_SAMPLE, one
PERF_RECORD_COMM.

The first round creates a relatively large ring buffer (256 pages), big
enough to hold all the events. Read from it and check the count of each
type of event.

The second round creates a small ring buffer (1 page) and makes it
overwritable. Check the correctness of the buffer.
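
A sketch of the event-generating loop described above (NR_ITERS is an
assumed constant, not necessarily what the test uses):

  #include <stdio.h>
  #include <sys/prctl.h>

  #define NR_ITERS 97   /* assumed */

  static void generate_events(void)
  {
          char name[16];
          int i;

          for (i = 0; i < NR_ITERS; i++) {
                  snprintf(name, sizeof(name), "iter-%d", i);
                  prctl(PR_SET_NAME, name); /* one PERF_RECORD_SAMPLE + one PERF_RECORD_COMM */
          }
  }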

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1462758471-89706-3-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-09 18:11:22 -03:00
Wang Nan
e24c7520ea perf tools: Support reading from backward ring buffer
perf_evlist__mmap_read_backward() is introduced for reading a backward
ring buffer. Since the direction for reading such a ring buffer is the
opposite of the direction the kernel writes to it, and since the user
needs to fetch the most recent record from it, perf_evlist__mmap_read_catchup()
is introduced to move the read pointer to the end of the buffer.
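
Usage sketch (evlist/idx come from the perf tool's mmap setup, and
process_event() is a hypothetical consumer):

  union perf_event *event;

  perf_evlist__mmap_read_catchup(evlist, idx);  /* jump to the most recent record */

  while ((event = perf_evlist__mmap_read_backward(evlist, idx)) != NULL)
          process_event(event);                 /* records come newest first */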

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1462758471-89706-2-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-09 17:20:53 -03:00
Chris Phlipot
aff633406c perf script: Fix incorrect python db-export error message
The error message printed when attempting and failing to create the call
path root incorrectly references the call return processor.

This change fixes the message to properly reference the failure to create
the call path root.

Signed-off-by: Chris Phlipot <cphlipot0@gmail.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1462612620-25008-2-git-send-email-cphlipot0@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-09 14:08:39 -03:00
Andi Kleen
f340c5fc93 perf stat: Scale values by unit before metrics
Scale values by unit before passing them to the metrics printing
functions.  This is needed for TopDown, because it needs to scale the
slots correctly by pipeline width / SMTness.

For existing metrics it shouldn't make any difference, as those
generally use events that don't have any units.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1462489447-31832-8-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-09 13:42:09 -03:00
He Kuang
841e3558b2 perf callchain: Recording 'dwarf' callchains do not need DWARF unwinding support
There is no need to check for DWARF unwinding support when using the
'dwarf' callchain record method, as this will only ask the kernel to
collect stack dumps for later DWARF CFI processing, which can be done on
another machine, where support for DWARF unwinding does need to be
present.

Signed-off-by: He Kuang <hekuang@huawei.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Ekaterina Tumanova <tumanova@linux.vnet.ibm.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Kan Liang <kan.liang@intel.com>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/r/1462525154-125656-2-git-send-email-hekuang@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-09 13:29:36 -03:00
Ingo Molnar
ea7c285189 perf/core improvements and fixes:

Merge tag 'perf-core-for-mingo-20160506' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/core

Pull perf/core improvements and fixes from Arnaldo Carvalho de Melo:

User visible changes:

- Fix ordering of kernel/user entries in 'caller' mode, where the kernel and
  user parts were being correctly inverted but kept in place wrt each other,
  i.e. 'callee' (k1, k2, u3, u4) became 'caller' (k2, k1, u4, u3) when it
  should be 'caller' (u4, u3, k2, k1) (Chris Phlipot)

- In 'perf trace' don't print the raw syscall arg values for a syscall that
  has no arguments, like gettid(). This was happening because just checking if
  the syscall args list is NULL may mean that there are no args (e.g.: gettid)
  or that there is no tracepoint info (e.g.: clone) (Arnaldo Carvalho de Melo)

- Add extra output of counter values with 'perf stat -vv' (Andi Kleen)

Infrastructure changes:

- Expose callchain db export via the python API (Chris Phlipot)

Code reorganization:

- Move some more syscall arg beautifiers from the 'perf trace' main file to
  separate files in tools/perf/trace/beauty/, to reduce the main file line
  count (Arnaldo Carvalho de Melo)

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-05-07 06:49:28 +02:00
Arnaldo Carvalho de Melo
d5d71e86d2 perf trace: Move futex_op beautifier to tools/perf/trace/beauty/
To reduce the size of builtin-trace.c.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-vb8dpy7bptkf219q5c25ulfp@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-06 13:00:59 -03:00
Arnaldo Carvalho de Melo
8f48df69b4 perf trace: Move open_flags beautifier to tools/perf/trace/beauty/
To reduce the size of builtin-trace.c.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-jt293541hv9od7gqw6lilioh@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-06 13:00:58 -03:00
Arnaldo Carvalho de Melo
12199d8e20 perf trace: Move signum beautifier to tools/perf/trace/beauty/
To reduce the size of builtin-trace.c.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-qecqxwwtreio6eaatfv58yq5@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-06 13:00:57 -03:00
Andi Kleen
0b1abbf4a7 perf stat: Add extra output of counter values with -vv
Add debug output of raw counter values per CPU when perf stat -v is
specified, together with their CPU numbers.  This is very useful to
debug problems with per-core counters, where we can normally only see
aggregated values.

v2: Make it depend on -vv, not -v

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1461787251-6702-12-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-06 13:00:56 -03:00
Chris Phlipot
3521f3bc9d perf script: Update export-to-postgresql to support callchain export
Update the export-to-postgresql.py to support the newly introduced
callchain export.

Callchains are added into the existing call_paths table and can now
be associated with samples when the "callchains" command line option
is used with the script.

Ex.:

  $ perf script -s export-to-postgresql.py example_db all callchains

Includes the following changes to enable callchain export via the python export
APIs:

- Add the "callchains" commandline option, which is used to enable
  callchain export by setting the perf_db_export_callchains global
- Add perf_db_export_callchains checks for call_path table creation
  and population.
- Add call_path_id to samples_table to conform with the new API

example usage and output using a small test app:

  test_app.c:

	volatile int x = 0;
	void inc_x_loop()
	{
		int i;
		for(i=0; i<100000000; i++)
			x++;
	}

	void a()
	{
		inc_x_loop();
	}

	void b()
	{
		inc_x_loop();
	}

	int main()
	{
		a();
		b();
		return 0;
	}

example usage:

  $ gcc -g -O0 test_app.c
  $ perf record --call-graph=dwarf ./a.out
  [ perf record: Woken up 77 times to write data ]
  [ perf record: Captured and wrote 19.373 MB perf.data (2404 samples) ]

  $ perf script -s scripts/python/export-to-postgresql.py
	example_db all callchains

  $ psql example_db

  example_db=#
  SELECT
  (SELECT name FROM symbols WHERE id = cps.symbol_id) as symbol,
  (SELECT name FROM symbols WHERE id =
	(SELECT symbol_id from call_paths where id = cps.parent_id))
	as parent_symbol,
  sum(period) as event_count
  FROM samples join call_paths as cps on call_path_id = cps.id
  GROUP BY cps.id,evsel_id
  ORDER BY event_count DESC
  LIMIT 5;

        symbol      |      parent_symbol       | event_count
  ------------------+--------------------------+-------------
   inc_x_loop       | a                        |   734250982
   inc_x_loop       | b                        |   731028057
   unknown          | unknown                  |     1335858
   task_tick_fair   | scheduler_tick           |     1238842
   update_wall_time | tick_do_update_jiffies64 |      650373
  (5 rows)

The above data shows total "self time" in cycles for each call path that was
sampled. It is intended to demonstrate how it accounts separately for the two
ways to reach the "inc_x_loop" function (via "a" and "b").  Recursive common
table expressions can also be used to get the cumulative time spent in a
function, but that is beyond the scope of this basic example.

Signed-off-by: Chris Phlipot <cphlipot0@gmail.com>
Acked-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1461831551-12213-7-git-send-email-cphlipot0@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-06 13:00:55 -03:00
Chris Phlipot
2c15f5eb04 perf script: Expose usage of the callchain db export via the python api
This change allows python scripts to utilize the recent changes to the
db export API that allow the export of call_paths derived from sampled
callchains. These call paths are also now associated with the samples
from which they were derived.

- This feature is enabled by setting "perf_db_export_callchains" to true

- When enabled, samples that have callchain information will have the
  callchains exported via call_path_table

- The call_path_id field is added to sample_table to enable association of
  samples with the corresponding callchain stored in the call paths
  table. A call_path_id of 0 will be exported if there is no
  corresponding callchain.

- When "perf_db_export_callchains" and "perf_db_export_calls" are both
  set to True, the call path root data structure will be shared. This
  prevents duplicating of data and call path ids that would result from
  building two separate call path trees in memory.

- The call_return_processor structure definition was relocated to the header
  file to make its contents visible to db-export.c. This enables the
  sharing of call path trees between the two features, as mentioned
  above.

This change is visible to python scripts using the python db export api.

The change is backwards compatible with scripts written against the
previous API, assuming that the scripts model the sample_table function
after the one in the export-to-postgresql.py script by allowing for
additional arguments to be added in the future, i.e. by using *x as the
final argument of the sample_table function.

Signed-off-by: Chris Phlipot <cphlipot0@gmail.com>
Acked-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1461831551-12213-6-git-send-email-cphlipot0@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-06 13:00:54 -03:00
Chris Phlipot
568850eaad perf script: Add call path id to exported sample in db export
The exported sample now contains a reference to the call_path_id that
represents its callchain.

While callchains themselves are nice to have, being able to associate
them with samples makes them much more useful, and can allow for such
things as determining how much cumulative time is spent in a particular
function. This information is normally possible to get from the call
return processor. However, when doing normal sampling, call/return
information is not available, thus necessitating the need for
associating samples directly with call paths.

This commit includes changes to the db-export layer to make this
information available for subsequent patches in this change set but, by
itself, it does not make any changes visible to the user.

Signed-off-by: Chris Phlipot <cphlipot0@gmail.com>
Acked-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1461831551-12213-5-git-send-email-cphlipot0@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-06 13:00:53 -03:00
Chris Phlipot
0a3eba3ad6 perf script: Enable db export to output sampled callchains
This change enables the db export api to export callchains. This is
accomplished by adding callchains obtained from samples to the
call_path_root structure and exporting them via the current call path
export API.

While the current API does support exporting call paths, this is not
supported when sampling. This commit addresses that missing feature by
allowing the export of call paths when callchains are present in
samples.

Summary:

- This feature is activated by initializing the call_path_root member
  inside the db_export structure to a non-null value.

- Callchains are resolved with thread__resolve_callchain() and then stored
  and exported by adding a call path under call path root.
- Symbol and DSO for each callchain node are exported via db_ids_from_al()

This commit puts in place infrastructure to be used by subsequent commits,
and by itself, does not introduce any user-visible changes.

Signed-off-by: Chris Phlipot <cphlipot0@gmail.com>
Acked-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1461831551-12213-4-git-send-email-cphlipot0@gmail.com
[ Made adjustments suggested by Adrian Hunter, see thread via this cset's Link: tag ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-06 13:00:52 -03:00
Chris Phlipot
451db12617 perf tools: Refactor code to move call path handling out of thread-stack
Move the call path handling code out of thread-stack.c and
thread-stack.h to allow other components that are not part of
thread-stack to create call paths.

Summary:

- Create call-path.c and call-path.h and add them to the build.

- Move all call path related code out of thread-stack.c and thread-stack.h
  and into call-path.c and call-path.h.

- A small subset of structures and functions are now visible through
  call-path.h, which is required for thread-stack.c to continue to
  compile.

This change is a prerequisite for subsequent patches in this change set
and by itself contains no user-visible changes.

Signed-off-by: Chris Phlipot <cphlipot0@gmail.com>
Acked-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1461831551-12213-3-git-send-email-cphlipot0@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-06 13:00:43 -03:00
Chris Phlipot
9919a65ec5 perf callchain: Fix incorrect ordering of entries
The existing implementation of thread__resolve_callchain, under certain
circumstances, can assemble callchain entries in the incorrect order.

The callchain entries are resolved incorrectly for a sample when all of
the following conditions are met:

1. callchain_param.order is set to ORDER_CALLER

2. thread__resolve_callchain_sample is able to resolve callchain entries
   for the sample.

3. unwind__get_entries is also able to resolve callchain entries for the
   sample.

The fix is accomplished by reversing the order in which
thread__resolve_callchain_sample and unwind__get_entries are called when
callchain_param.order is set to ORDER_CALLER.

Unwind specific code from thread__resolve_callchain is also moved into a
new static function to improve readability of the fix.
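
In code form, the reordering amounts to something like the sketch below,
with kernel_entries()/user_entries() standing in for
thread__resolve_callchain_sample()/unwind__get_entries():

  typedef int (*resolver_fn)(void *cursor);

  static int resolve_callchain_sketch(void *cursor, int order_caller,
                                      resolver_fn kernel_entries,
                                      resolver_fn user_entries)
  {
          int err;

          if (order_caller) {             /* ORDER_CALLER: user frames first */
                  err = user_entries(cursor);
                  if (!err)
                          err = kernel_entries(cursor);
          } else {                        /* ORDER_CALLEE: kernel frames first */
                  err = kernel_entries(cursor);
                  if (!err)
                          err = user_entries(cursor);
          }
          return err;
  }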

How to Reproduce the Existing Bug:

Modifying perf script to print call trees in the opposite order or
applying the remaining patches from this series and comparing the
results output from export-to-postgtresql.py are the easiest ways to see
the bug, however it can still be seen in current builds using perf
report.

Here is how I can reproduce the bug using perf report:

  # perf record --call-graph=dwarf stress -c 1 -t 5

When I run this command:

  # perf report --call-graph=flat,0,0,callee

This callchain, containing kernel (handle_irq_event, etc) and userspace
samples (__libc_start_main, etc) is contained in the output, which looks
correct (callee order):

                gen8_irq_handler
                handle_irq_event_percpu
                handle_irq_event
                handle_edge_irq
                handle_irq
                do_IRQ
                ret_from_intr
                __random
                rand
                0x558f2a04dded
                0x558f2a04c774
                __libc_start_main
                0x558f2a04dcd9

Now run this command using caller order:

  # perf report --call-graph=flat,0,0,caller

It is expected to see the exact reverse of the above when using caller
order (with "0x558f2a04dcd9" at the top and "gen8_irq_handler" at the
bottom) in the output, but it is nowhere to be found.

Instead you see this:

                ret_from_intr
                do_IRQ
                handle_irq
                handle_edge_irq
                handle_irq_event
                handle_irq_event_percpu
                gen8_irq_handler
                0x558f2a04dcd9
                __libc_start_main
                0x558f2a04c774
                0x558f2a04dded
                rand
                __random

Notice how internally the kernel symbols are reversed and the user space
symbols are reversed, but the kernel symbols still appear above the user
space symbols.

If this patch is applied and perf report is re-run, you will see the
expected output (with "0x558f2a04dcd9" at the top and "gen8_irq_handler"
at the bottom):

                0x558f2a04dcd9
                __libc_start_main
                0x558f2a04c774
                0x558f2a04dded
                rand
                __random
                ret_from_intr
                do_IRQ
                handle_irq
                handle_edge_irq
                handle_irq_event
                handle_irq_event_percpu
                gen8_irq_handler

Signed-off-by: Chris Phlipot <cphlipot0@gmail.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1461831551-12213-2-git-send-email-cphlipot0@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-06 08:59:47 -03:00
Arnaldo Carvalho de Melo
4c4d6e5190 perf trace: Do not print raw args list for syscalls with no args
The test to check if the arg format had been read from the
syscall:sys_enter_name/format file was looking at the list of non-common
fields, and if that was empty, it would think it had failed to read it,
because that file doesn't exist, for instance, for the clone() syscall.

So instead, before dumping the raw syscall args list, check
IS_ERR(sc->tp_format): if that is true, then an attempt was made to read
the format file and it failed, in which case dump the raw arg list values.
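
A sketch of the check, with a stripped-down syscall descriptor (IS_ERR()
is the usual err.h helper):

  #include <linux/err.h>

  struct syscall_desc {
          void *args;       /* parsed non-common fields, NULL when absent */
          void *tp_format;  /* tracepoint format, or ERR_PTR() if reading it failed */
  };

  static int should_dump_raw_args(const struct syscall_desc *sc)
  {
          /* no parsed args *and* no format info: dump the raw arg values */
          return sc->args == NULL && IS_ERR(sc->tp_format);
  }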

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Milian Wolff <milian.wolff@kdab.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-ls7pmdqb2xy9339vdburwvnk@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-06 08:44:30 -03:00
Ingo Molnar
c0edb7467c perf/core improvements and fixes:

Merge tag 'perf-core-for-mingo-20160505' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/core

Pull perf/core improvements and fixes from Arnaldo Carvalho de Melo:

User visible changes:

- Order output of 'perf trace --summary' better: now the threads will
  appear in ascending order of number of events, and then, for each, in
  descending order of syscalls by the time spent in the syscalls, so
  that the last page produced can be the one about the most interesting
  thread straced, suggested by Milian Wolff (Arnaldo Carvalho de Melo)

- Do not show the runtime_ms for a thread when not collecting it, that
  is done so far only with 'perf trace --sched' (Arnaldo Carvalho de Melo)

- Fix kallsyms perf test on ppc64le (Naveen N. Rao)

Infrastructure changes:

- Move global variables related to presence of some keys in the sort order to a
  per hist struct, to allow code like the hists browser to work with multiple
  hists with different lists of columns (Jiri Olsa)

- Add support for generating bpf prologue in powerpc (Naveen N. Rao)

- Fix kprobe and kretprobe handling with kallsyms on ppc64le (Naveen N. Rao)

- evlist mmap changes, prep work for supporting reading backwards (Wang Nan)

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-05-06 08:35:14 +02:00
Wang Nan
b6b85dad30 perf evlist: Rename variable in perf_mmap__read()
In perf_mmap__read(), give better names to the pointers. The original names
'old' and 'head' related directly to pointers in the ring buffer control page.
For a backward ring buffer, the meaning of the 'head' pointer is not 'the
first byte of free space' but 'the first byte of the last record'. To reduce
confusion, rename 'old' to 'start' and 'head' to 'end'.  'start' -> 'end'
is the direction the records should be read in.

Change the parameter order.

Change 'overwrite' to 'check_messup'. When reading from 'head', there is no
need to check for messup for a backward ring buffer.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1461723563-67451-3-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-05 21:04:04 -03:00
Wang Nan
0f4ccd1181 perf evlist: Extract perf_mmap__read()
Extract the event reader from perf_evlist__mmap_read() into perf_mmap__read().
A future commit will feed it with manually computed 'head' and 'old'
pointers.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1461723563-67451-2-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-05 21:04:03 -03:00
Naveen N. Rao
0b3c2264ae perf symbols: Fix kallsyms perf test on ppc64le
ppc64le functions have a Global Entry Point (GEP) and a Local Entry
Point (LEP). While placing a probe, we always prefer the LEP since it
catches function calls through both the GEP and the LEP. In order to do
this, we fix up the function entry points during ELF symbol table lookup
to point to the LEPs. This works, but breaks 'perf test kallsyms' since
the symbols loaded from the symbol table (pointing to the LEP) do not
match the symbols in kallsyms.

To fix this, we no longer adjust all the symbols during symbol table load.
Instead, we note down st_other in a newly introduced arch-specific
member of the perf symbol structure, and later use this to adjust the
probe trace point.

Reported-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Acked-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Acked-by: Balbir Singh <bsingharora@gmail.com>
Cc: Mark Wielaard <mjw@redhat.com>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Thiago Jung Bauermann <bauerman@linux.vnet.ibm.com>
Cc: linuxppc-dev@lists.ozlabs.org
Link: http://lkml.kernel.org/r/6be7c2b17e370100c2f79dd444509df7929bdd3e.1460451721.git.naveen.n.rao@linux.vnet.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-05 21:04:03 -03:00
Naveen N. Rao
239aeba764 perf powerpc: Fix kprobe and kretprobe handling with kallsyms on ppc64le
So far, we used to treat probe point offsets as being offset from the
LEP. However, userspace applications (objdump/readelf) always show
disassembly and offsets from the function GEP. This is confusing to the
user as we will end up probing at an address different from what the
user expects when looking at the function disassembly with
readelf/objdump. Fix this by changing how we modify probe address with
perf.

If only the function name is provided, we assume the user needs the LEP.
Otherwise, if an offset is specified, we assume that the user knows the
exact address to probe based on the function disassembly, and so we just
place the probe at that offset from the GEP.

Finally, kretprobe was also broken with kallsyms as we were trying to
specify an offset. This patch also fixes that issue.

Reported-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Acked-by: Balbir Singh <bsingharora@gmail.com>
Cc: Mark Wielaard <mjw@redhat.com>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Thiago Jung Bauermann <bauerman@linux.vnet.ibm.com>
Cc: linuxppc-dev@lists.ozlabs.org
Link: http://lkml.kernel.org/r/75df860aad8216bf4b9bcd10c6351ecc0e3dee54.1460451721.git.naveen.n.rao@linux.vnet.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-05 21:04:02 -03:00
Jiri Olsa
7cecb7fe83 perf hists: Move sort__has_comm into struct perf_hpp_list
Now that we have sort dimensions private to struct hists, we need to
make the dimension booleans hists-specific as well.

Moving sort__has_comm into struct perf_hpp_list.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1462276488-26683-8-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-05 21:04:02 -03:00
Jiri Olsa
fa82911a1b perf hists: Move sort__has_thread into struct perf_hpp_list
Now that we have sort dimensions private to struct hists, we need to
make the dimension booleans hists-specific as well.

Moving sort__has_thread into struct perf_hpp_list.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1462276488-26683-7-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-05 21:04:01 -03:00
Jiri Olsa
35a634f76c perf hists: Move sort__has_socket into struct perf_hpp_list
Now that we have sort dimensions private to struct hists, we need to
make the dimension booleans hists-specific as well.

Moving sort__has_socket into struct perf_hpp_list.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1462276488-26683-6-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-05 21:04:01 -03:00
Jiri Olsa
69849fc5d2 perf hists: Move sort__has_dso into struct perf_hpp_list
Now that we have sort dimensions private to struct hists, we need to
make the dimension booleans hists-specific as well.

Moving sort__has_dso into struct perf_hpp_list.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1462276488-26683-5-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-05 21:04:00 -03:00
Jiri Olsa
2e0453af4e perf hists: Move sort__has_sym into struct perf_hpp_list
Now that we have sort dimensions private to struct hists, we need to
make the dimension booleans hists-specific as well.

Moving sort__has_sym into struct perf_hpp_list.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1462276488-26683-4-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-05 21:03:59 -03:00
Jiri Olsa
de7e6a7c8b perf hists: Move sort__has_parent into struct perf_hpp_list
Now that we have sort dimensions private to struct hists, we need to
make the dimension booleans hists-specific as well.

Moving sort__has_parent into struct perf_hpp_list.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1462276488-26683-3-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-05 21:03:59 -03:00
Jiri Olsa
52225036fa perf hists: Move sort__need_collapse into struct perf_hpp_list
Now that we have sort dimensions private to struct hists, we need to
make the dimension booleans hists-specific as well.

Moving sort__need_collapse into struct perf_hpp_list.

Adding a hists__has() macro to easily access this info per struct hists
object.
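
A sketch of what such an accessor could look like (field names assumed):

  #define hists__has(__h, __f)  ((__h)->hpp_list->__f)

  if (hists__has(hists, need_collapse))
          collapse_entries(hists);      /* hypothetical caller-side work */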

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1462276488-26683-2-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-05 21:03:58 -03:00
Naveen N. Rao
4679bccaa3 perf tools powerpc: Add support for generating bpf prologue
Generalize existing macros to serve the purpose.

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Cc: Ian Munsie <imunsie@au1.ibm.com>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Wang Nan <wangnan0@huawei.com>
Cc: linuxppc-dev@lists.ozlabs.org
Link: http://lkml.kernel.org/r/1462461799-17518-1-git-send-email-naveen.n.rao@linux.vnet.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-05 21:03:58 -03:00
Arnaldo Carvalho de Melo
03548ebf6d perf trace: Do not show the runtime_ms for a thread when not collecting it
That field is only updated when we use the "sched:sched_stat_runtime"
tracepoint, and that is only done so far when we use the '--stat' command line
option; without it we get just zeros, confusing the users:

Without this patch:

  # trace -a -s sleep 1
  <SNIP>
   qemu-system-x86 (9931), 468 events, 9.6%, 0.000 msec

     syscall     calls    total       min       avg       max      stddev
                          (msec)    (msec)    (msec)    (msec)        (%)
     ---------- ------ --------- --------- --------- ---------     ------
     ppoll          98   982.374     0.000    10.024    29.983     12.65%
     write          34     0.401     0.005     0.012     0.027      5.49%
     ioctl         102     0.347     0.002     0.003     0.007      3.08%

   firefox (10871), 1856 events, 38.2%, 0.000 msec

     syscall     calls    total       min       avg       max      stddev
                          (msec)    (msec)    (msec)    (msec)        (%)
     ---------- ------ --------- --------- --------- ---------     ------
     poll          395   934.873     0.000     2.367    17.120     11.51%
     recvmsg       395     0.988     0.001     0.003     0.021      4.20%
     read          106     0.460     0.002     0.004     0.007      3.17%
     futex          24     0.108     0.001     0.004     0.010     10.05%
     mmap            2     0.041     0.016     0.021     0.026     23.92%
     write           6     0.027     0.004     0.004     0.005      2.52%

After this patch, that ', 0.000 msec' part gets suppressed when --stat is
not in use.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Milian Wolff <milian.wolff@kdab.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-p7emqrsw7900tdkg43v9l1e1@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-05 21:03:57 -03:00
Arnaldo Carvalho de Melo
b535d523dc perf trace: Sort syscalls stats by msecs in --summary
# trace -a -s sleep 1
  <SNIP>
   Xorg (1965), 788 events, 19.0%, 0.000 msec

     syscall            calls    total       min       avg       max      stddev
                                 (msec)    (msec)    (msec)    (msec)        (%)
     --------------- -------- --------- --------- --------- ---------     ------
     select                89   731.038     0.000     8.214   175.218     36.71%
     ioctl                 22     0.661     0.010     0.030     0.072     10.43%
     writev                42     0.253     0.002     0.006     0.011      5.94%
     recvmsg               60     0.185     0.001     0.003     0.009      5.90%
     setitimer             60     0.127     0.001     0.002     0.006      6.14%
     read                  52     0.102     0.001     0.002     0.005      8.55%
     rt_sigprocmask        45     0.092     0.001     0.002     0.023     23.65%
     poll                  12     0.021     0.001     0.002     0.003      7.21%
     epoll_wait            12     0.019     0.001     0.002     0.002      2.71%

   firefox (10871), 1080 events, 26.1%, 0.000 msec

     syscall            calls    total       min       avg       max      stddev
                                 (msec)    (msec)    (msec)    (msec)        (%)
     --------------- -------- --------- --------- --------- ---------     ------
     poll                 240   979.562     0.000     4.082    17.132     11.33%
     recvmsg              240     0.532     0.001     0.002     0.007      3.69%
     read                  60     0.303     0.003     0.005     0.029      8.50%

Suggested-by: Milian Wolff <milian.wolff@kdab.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-52kdkuyxihq0kvc0n2aalhay@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-05 21:03:56 -03:00
Arnaldo Carvalho de Melo
96c1445122 perf trace: Sort summary output by number of events
# trace -a -s sleep 1 |& grep events | tail
   gmain (1733), 34 events, 1.0%, 0.000 msec
   hexchat (9765), 46 events, 1.4%, 0.000 msec
   ssh (11109), 80 events, 2.4%, 0.000 msec
   sleep (32631), 81 events, 2.4%, 0.000 msec
   qemu-system-x86 (10021), 272 events, 8.2%, 0.000 msec
   Xorg (1965), 322 events, 9.7%, 0.000 msec
   SoftwareVsyncTh (10922), 366 events, 11.1%, 0.000 msec
   gnome-shell (2231), 446 events, 13.5%, 0.000 msec
   qemu-system-x86 (9931), 468 events, 14.1%, 0.000 msec
   firefox (10871), 1098 events, 33.2%, 0.000 msec
  [root@jouet ~]#

Suggested-by: Milian Wolff <milian.wolff@kdab.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-ye4cnprhfeiq32ar4lt60dqs@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-05 21:03:56 -03:00
Arnaldo Carvalho de Melo
f58c253564 perf tools: Add template for generating rbtree resort class
Sometimes we want to sort an existing rbtree by a different key;
introduce a template for that, which needs only to be provided the
rbtree root and the number of entries in it.

To do that a new rbtree will be created with extra space for each entry,
where possibly pre-calculated keys will be stored to be used in the
resort process and also later, when using the newly sorted rbtree.
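
A sketch of the idea with hypothetical names (not the template's actual
interface), using the tools rbtree API:

  #include <linux/rbtree.h>

  struct resort_entry {
          struct rb_node  rb_node;
          struct rb_node  *orig;  /* the entry in the source rbtree */
          unsigned long   key;    /* pre-computed sort key, reused after the resort */
  };

  static void resort_insert(struct rb_root *sorted, struct resort_entry *entry)
  {
          struct rb_node **p = &sorted->rb_node, *parent = NULL;

          while (*p) {
                  struct resort_entry *pos = rb_entry(*p, struct resort_entry, rb_node);

                  parent = *p;
                  p = entry->key < pos->key ? &(*p)->rb_left : &(*p)->rb_right;
          }
          rb_link_node(&entry->rb_node, parent, p);
          rb_insert_color(&entry->rb_node, sorted);
  }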

Please check the following two changesets to see it in use for resorting
stats for threads and its syscalls in 'perf trace --summary'.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Milian Wolff <milian.wolff@kdab.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-9l6e1q34lmf3wwdeewstyakg@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-05 21:03:55 -03:00
Arnaldo Carvalho de Melo
d2c1103440 perf machine: Introduce number of threads member
To be used, for instance, for pre-allocating an rb_tree array for
sorting by other keys besides the current pid one.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Milian Wolff <milian.wolff@kdab.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-ja0ifkwue7ttjhbwijn6g6eu@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-05-05 21:03:55 -03:00
Alexander Shishkin
1b6de59171 perf/x86/intel/pt: Convert ACCESS_ONCE()s
This patch converts remaining ACCESS_ONCE() instances into READ_ONCE()
and WRITE_ONCE() as appropriate.
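
Illustrative before/after of such a conversion (generic example, not the
actual patch hunks):

  /* before */
  val = ACCESS_ONCE(shared->counter);
  ACCESS_ONCE(shared->counter) = val + 1;

  /* after */
  val = READ_ONCE(shared->counter);
  WRITE_ONCE(shared->counter, val + 1);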

Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mathieu Poirier <mathieu.poirier@linaro.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Cc: vince@deater.net
Link: http://lkml.kernel.org/r/1461857746-31346-2-git-send-email-alexander.shishkin@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-05-05 10:16:29 +02:00
Alexander Shishkin
65c7e6f1c4 perf/x86/intel/pt: Export CPU frequency ratios needed by PT decoders
Intel PT decoders need access to various bits of timing related
information to be able to correctly decode timing packets from a PT
stream (MTC and CBR packets). This patch exports all the necessary
bits as sysfs attributes for the sake of consistency:

  * max_nonturbo_ratio: ratio between the invariant TSC and base clock;

  * tsc_art_ratio: TSC to core crystal clock ratio (also available as CPUID.15H).

Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mathieu Poirier <mathieu.poirier@linaro.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Cc: vince@deater.net
Link: http://lkml.kernel.org/r/87zisdvibe.fsf@ashishki-desk.ger.corp.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-05-05 10:16:28 +02:00
Alexander Shishkin
ccbebba4c6 perf/x86/intel/pt: Bypass PT vs. LBR exclusivity if the core supports it
Not all cores prevent using Intel PT and LBRs simultaneously, although
most of them still do as of today. This patch adds an opt-in flag for
such cores to disable mutual exclusivity between PT and LBR; also flip
it on for Goldmont.

Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mathieu Poirier <mathieu.poirier@linaro.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Cc: vince@deater.net
Link: http://lkml.kernel.org/r/1461857746-31346-4-git-send-email-alexander.shishkin@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-05-05 10:16:28 +02:00
Mark Rutland
5101ef20f0 perf/arm: Special-case hetereogeneous CPUs
Commit:

  2665784850 ("perf/core: Verify we have a single perf_hw_context PMU")

forcefully prevents multiple PMUs from sharing perf_hw_context, as this
generally doesn't make sense. It is a common bug for uncore PMUs to
use perf_hw_context rather than perf_invalid_context, which this detects.

However, systems exist with heterogeneous CPUs (and hence heterogeneous
HW PMUs), for which sharing perf_hw_context is necessary, and possible
in some limited cases.

To make this work we have to perform some gymnastics, as we did in these
commits:

  66eb579e66 ("perf: allow for PMU-specific event filtering")
  c904e32a69 ("arm: perf: filter unschedulable events")

To allow those systems to work, we must allow PMUs for heterogeneous
CPUs to share perf_hw_context, though we must still disallow sharing
otherwise to detect the common misuse of perf_hw_context.

This patch adds a new PERF_PMU_CAP_HETEROGENEOUS_CPUS for this, updates
the core logic to account for this, and makes use of it in the arm_pmu
code that is used for systems with heterogeneous CPUs. Comments are
added to make the rationale clear and hopefully avoid accidental abuse.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Link: http://lkml.kernel.org/r/20160426103346.GA20836@leverpostej
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-05-05 10:13:59 +02:00
Alexander Shishkin
6e855cd4f4 perf/core: Let userspace know if the PMU supports address filters
Export an additional common attribute for PMUs that support address range
filtering to let the perf userspace identify such PMUs in a uniform way.

Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mathieu Poirier <mathieu.poirier@linaro.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Cc: vince@deater.net
Link: http://lkml.kernel.org/r/1461771888-10409-8-git-send-email-alexander.shishkin@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-05-05 10:13:58 +02:00
Alexander Shishkin
eadf48cab4 perf/x86/intel/pt: Add support for address range filtering in PT
Newer versions of Intel PT support address ranges, which can be used to
define IP address range-based filters or TraceSTOP regions. The number
of ranges is enumerated via CPUID.

This patch implements PMU callbacks and related low-level code to allow
filter validation, configuration and programming into the hardware.

Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mathieu Poirier <mathieu.poirier@linaro.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Cc: vince@deater.net
Link: http://lkml.kernel.org/r/1461771888-10409-7-git-send-email-alexander.shishkin@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-05-05 10:13:58 +02:00
Alexander Shishkin
375637bc52 perf/core: Introduce address range filtering
Many instruction tracing PMUs out there support address range-based
filtering, which would, for example, generate trace data only for a
given range of instruction addresses, which is useful for tracing
individual functions, modules or libraries. Other PMUs may also
utilize this functionality to allow filtering to or filtering out
code at certain address ranges.

This patch introduces the interface for userspace to specify these
filters and for the PMU drivers to apply these filters to hardware
configuration.

The user interface is an ASCII string that is passed via an ioctl() and
specifies address ranges within certain object files or within the
kernel. There is no special treatment for kernel modules yet, but it
might be a worthy pursuit.
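
For example, a filter could be applied to an opened event roughly like this
(hedged sketch; the exact filter grammar is defined by this series, and the
addresses/path below are made up):

  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <linux/perf_event.h>

  static int set_addr_filter(int perf_fd)
  {
          /* assumed form: "filter <start>/<size>@<object file>" */
          const char *filter = "filter 0x400000/0x1000@/usr/lib64/libfoo.so";

          if (ioctl(perf_fd, PERF_EVENT_IOC_SET_FILTER, filter) < 0) {
                  perror("PERF_EVENT_IOC_SET_FILTER");
                  return -1;
          }
          return 0;
  }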

The PMU driver interface basically adds two extra callbacks to the
PMU driver structure, one of which validates the filter configuration
proposed by the user against what the hardware is actually capable of
doing and the other one translates hardware-independent filter
configuration into something that can be programmed into the
hardware.

Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Mathieu Poirier <mathieu.poirier@linaro.org>
Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Cc: vince@deater.net
Link: http://lkml.kernel.org/r/1461771888-10409-6-git-send-email-alexander.shishkin@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-05-05 10:13:57 +02:00
Alexander Shishkin
b73e4fefc1 perf/core: Extend perf_event_aux_ctx() to optionally iterate through more events
Trace filtering code needs an iterator that can go through all events in
a context, including inactive and filtered, to be able to update their
filters' address ranges based on mmap or exec events.

This patch changes perf_event_aux_ctx() to optionally do this.

Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mathieu Poirier <mathieu.poirier@linaro.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Cc: vince@deater.net
Link: http://lkml.kernel.org/r/1461771888-10409-5-git-send-email-alexander.shishkin@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-05-05 10:13:57 +02:00