Commit Graph

203 Commits

Namhyung Kim
941bdea79e perf sched timehist: Account thread wait time separately
Separate thread wait time into three parts - sleep, iowait and preempt -
based on the prev_state of the last event.
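
A minimal sketch of the split this describes, with illustrative state
bits (the actual patch uses the kernel's TASK_* encoding carried in the
tracepoint):

  #include <stdint.h>

  /* illustrative prev_state bits, mirroring the TASK_* state flags */
  #define STATE_INTERRUPTIBLE   0x01   /* sleeping, e.g. in a syscall */
  #define STATE_UNINTERRUPTIBLE 0x02   /* typically waiting for I/O */

  struct wait_times {
          uint64_t sleep_time;    /* voluntarily slept */
          uint64_t iowait_time;   /* blocked, usually on I/O */
          uint64_t preempt_time;  /* still runnable, was preempted */
  };

  /* charge 'delta' ns of wait based on the prev_state of the last switch */
  static void account_wait(struct wait_times *wt, unsigned int prev_state,
                           uint64_t delta)
  {
          if (prev_state == 0)                         /* TASK_RUNNING */
                  wt->preempt_time += delta;
          else if (prev_state & STATE_UNINTERRUPTIBLE)
                  wt->iowait_time += delta;
          else
                  wt->sleep_time += delta;
  }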

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20170113104523.31212-1-namhyung@kernel.org
[ Fix the build on centos:5 where 'wait' shadows a global declaration ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-01-17 11:35:44 -03:00
Namhyung Kim
9396c9cb0d perf sched timehist: Show total scheduling time
Show the length of the analyzed sample time and the percentage of time
the idle task was running. This also honours the time range given by
the --time option.

  $ perf sched timehist -sI | tail
  Samples do not have callchains.
  Idle stats:
      CPU  0 idle for    930.316  msec  ( 92.93%)
      CPU  1 idle for    963.614  msec  ( 96.25%)
      CPU  2 idle for    885.482  msec  ( 88.45%)
      CPU  3 idle for    938.635  msec  ( 93.76%)

      Total number of unique tasks: 118
  Total number of context switches: 2337
             Total run time (msec): 3718.048
      Total scheduling time (msec): 1001.131  (x 4)
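
The '(x 4)' suggests how the figure is computed: the wall-clock span of
the analyzed samples, multiplied across the 4 cpus. A minimal sketch
under that reading (hypothetical helper, not the actual perf code):

  #include <stdint.h>

  /* wall-clock span of the analyzed samples, summed over all cpus */
  static uint64_t total_sched_time(uint64_t start_ns, uint64_t end_ns,
                                   unsigned int nr_cpus)
  {
          return (end_ns - start_ns) * nr_cpus;
  }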

Suggested-by: David Ahern <dsahern@gmail.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20161222060350.17655-3-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-12-27 21:47:57 -03:00
Namhyung Kim
bdd75729e5 perf sched timehist: Fix invalid period calculation
When the --time option is given a value outside the recorded time
range, the last sample time (tprev) was set to that value and the run
time calculation could be incorrect.  This affects the first sample on
each cpu: the runtime update is normally skipped when tprev is 0, but
with the --time option tprev held a non-zero (and invalid) value, so
the calculation went wrong.

For example, let's see the following:

  $ perf sched timehist
             time    cpu  task name                       wait time  sch delay   run time
                          [tid/pid]                          (msec)     (msec)     (msec)
  --------------- ------  ------------------------------  ---------  ---------  ---------
      3195.968367 [0003]  <idle>                              0.000      0.000      0.000
      3195.968386 [0002]  Timer[4306/4277]                    0.000      0.000      0.018
      3195.968397 [0002]  Web Content[4277]                   0.000      0.000      0.000
      3195.968595 [0001]  JS Helper[4302/4277]                0.000      0.000      0.000
      3195.969217 [0000]  <idle>                              0.000      0.000      0.621
      3195.969251 [0001]  kworker/1:1H[291]                   0.000      0.000      0.033

The samples start at 3195.968367, but when given a time interval from
3194 to 3196 (in seconds) it calculated the whole 2 seconds as run
time.  Below, 2 cpus accounted it as run time while the other 2 cpus
accounted it as idle time.

Before:

  $ perf sched timehist --time 3194,3196 -s | tail
  Idle stats:
      CPU  0 idle for   1995.991  msec
      CPU  1 idle for     20.793  msec
      CPU  2 idle for     30.191  msec
      CPU  3 idle for   1999.852  msec

      Total number of unique tasks: 23
  Total number of context switches: 128
             Total run time (msec): 3724.940

After:

  $ perf sched timehist --time 3194,3196 -s | tail
  Idle stats:
      CPU  0 idle for     10.811  msec
      CPU  1 idle for     20.793  msec
      CPU  2 idle for     30.191  msec
      CPU  3 idle for     18.337  msec

      Total number of unique tasks: 23
  Total number of context switches: 128
             Total run time (msec): 18.139
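
The fix implied here is to avoid treating a --time boundary outside the
recorded range as a valid previous timestamp. A minimal sketch of such
a guard (hypothetical helper, not the actual patch):

  #include <stdint.h>

  /*
   * Pick the timestamp to measure run time from: never earlier than
   * the first sample actually seen, and 0 (skip the update) when there
   * is no previous sample at all.
   */
  static uint64_t clamp_tprev(uint64_t tprev, uint64_t first_sample_time)
  {
          if (tprev == 0)
                  return 0;                 /* first event: no run time */
          if (tprev < first_sample_time)
                  return first_sample_time; /* --time start predates data */
          return tprev;
  }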

Committer notes:

Further testing:

Before:

  Idle stats:
      CPU  0 idle for    229.785  msec
      CPU  1 idle for    937.944  msec
      CPU  2 idle for    188.931  msec
      CPU  3 idle for    986.185  msec

  After:

  # perf sched timehist --time 40602,40603 -s | tail

  Idle stats:
      CPU  0 idle for    229.785  msec
      CPU  1 idle for    175.407  msec
      CPU  2 idle for    188.931  msec
      CPU  3 idle for    223.657  msec

      Total number of unique tasks: 68
  Total number of context switches: 814
             Total run time (msec): 97.688

  # for cpu in `seq 0 3` ; do echo -n "CPU $cpu idle for " ; perf sched timehist --time 40602,40603 | grep "\[000${cpu}\].*\<idle\>" | tr -s ' ' | cut -d' ' -f7 | awk '{entries++ ; s+=$1} END {print s " msec (entries: " entries ")"}' ; done
  CPU 0 idle for 229.721 msec (entries: 123)
  CPU 1 idle for 175.381 msec (entries: 65)
  CPU 2 idle for 188.903 msec (entries: 56)
  CPU 3 idle for 223.61 msec (entries: 102)

The difference is due to the idle stats being accounted at nanosecond
precision while the <idle> entries in 'perf sched timehist' are truncated
at msec.usec.

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Fixes: 853b740711 ("perf sched timehist: Add option to specify time window of interest")
Link: http://lkml.kernel.org/r/20161222060350.17655-2-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-12-22 16:35:46 -03:00
Namhyung Kim
4fa0d1aa27 perf sched timehist: Remove hardcoded 'comm_width' check at print_summary
Now that the default 'comm_width' value is 30, there is no need to check
it at print_summary.

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20161222060350.17655-1-namhyung@kernel.org
[ Split from a larger patch ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-12-22 16:35:46 -03:00
Namhyung Kim
9b8087d720 perf sched timehist: Enlarge default 'comm_width'
The current default value is 20, but it is easily exceeded when a task
has a long name plus distinct tid and pid, which leaves the output
misaligned.  So change it to the larger value the summary already uses.

Committer notes:

Before:

  # perf sched record
  ^C
  # perf sched timehist
  <SNIP>
    40602.770537 [0001]  rcuos/2[29]               7.970      0.002      0.020
    40602.771512 [0003]  <idle>                    0.003      0.000      0.986
    40602.771586 [0001]  <idle>                    0.020      0.000      1.049
    40602.771606 [0001]  qemu-system-x86[3593/3510]      0.000      0.002      0.020
    40602.771629 [0003]  qemu-system-x86[3510]           0.000      0.003      0.116
    40602.771776 [0000]  <idle>                          0.001      0.000      1.892
  <SNIP>

After:

  # perf sched timehist
  <SNIP>
   40602.770537 [0001]  rcuos/2[29]                         7.970      0.002      0.020
   40602.771512 [0003]  <idle>                              0.003      0.000      0.986
   40602.771586 [0001]  <idle>                              0.020      0.000      1.049
   40602.771606 [0001]  qemu-system-x86[3593/3510]          0.000      0.002      0.020
   40602.771629 [0003]  qemu-system-x86[3510]               0.000      0.003      0.116
  <SNIP>

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20161222060350.17655-1-namhyung@kernel.org
[ Split from a larger patch ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-12-22 16:35:45 -03:00
Namhyung Kim
0e6758e823 perf sched timehist: Honour 'comm_width' when aligning the headers
The current default value is 20, but that may change in the future, so
make the places where 20 is hardcoded use 'comm_width' instead.
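
For illustration, the standard C way to honour a runtime width in such
headers is the '%*s' conversion (a sketch, not the actual patch):

  #include <stdio.h>

  static int comm_width = 30;

  static void print_header(void)
  {
          /* '%-*s' takes the field width as an argument, not a literal */
          printf("%15s %6s  %-*s  %9s  %9s  %9s\n",
                 "time", "cpu", comm_width, "task name [tid/pid]",
                 "wait time", "sch delay", "run time");
  }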

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20161222060350.17655-1-namhyung@kernel.org
[ Split from a larger patch ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-12-22 16:35:45 -03:00
Namhyung Kim
ba957ebb54 perf sched timehist: Show callchains for idle stat
When the --idle-hist option is used with --summary, it now shows idle
stats with callchains like below:

  Idle stats by callchain:
  CPU  0:   902.195 msec
  Idle time (msec)    Count Callchains
  ----------------  ------- --------------------------------------------------
           370.589       69 futex_wait_queue_me <- futex_wait <- do_futex <- sys_futex <- entry_SYSCALL_64_fastpath
           178.799       17 worker_thread <- kthread <- ret_from_fork
           128.352       17 schedule_timeout <- rcu_gp_kthread <- kthread <- ret_from_fork
           125.111       19 schedule_hrtimeout_range_clock <- schedule_hrtimeout_range <- poll_schedule_timeout <- do_select <- core_sys_select
            71.599       50 schedule_hrtimeout_range_clock <- schedule_hrtimeout_range <- poll_schedule_timeout <- do_sys_poll <- sys_poll
            23.146        1 rcu_gp_kthread <- kthread <- ret_from_fork
             4.510        1 schedule_hrtimeout_range_clock <- schedule_hrtimeout_range <- ep_poll <- sys_epoll_wait <- do_syscall_64
             0.085        1 schedule_hrtimeout_range_clock <- schedule_hrtimeout_range <- poll_schedule_timeout <- do_sys_poll <- do_restart_poll
  ...

Committer notes:

Extra testing:

  # uname -a
  Linux jouet 4.8.8-300.fc25.x86_64 #1 SMP Tue Nov 15 18:10:06 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

1) Run 'perf sched record -g'

2) Run 'perf sched timehist --idle --summary'

<SNIP>
  Idle stats by callchain:
  CPU  0: 13456.840 msec
  Idle time (msec) Count Callchains
  ---------------- ----- --------------------------------------------------
          5386.637  3283 schedule_hrtimeout_range_clock <- schedule_hrtimeout_range <- poll_schedule_timeout <- do_sys_poll <- sys_poll
          2750.238  2299 futex_wait_queue_me <- futex_wait <- do_futex <- sys_futex <- do_syscall_64
          1275.672  1287 schedule_hrtimeout_range_clock <- schedule_hrtimeout_range <- ep_poll <- sys_epoll_wait <- entry_SYSCALL_64_fastpath
           936.322   452 worker_thread <- kthread <- ret_from_fork
           741.311   385 rcu_nocb_kthread <- kthread <- ret_from_fork
           729.385   248 schedule_hrtimeout_range_clock <- schedule_hrtimeout_range <- poll_schedule_timeout <- do_sys_poll <- sys_ppoll
           365.386   229 irq_thread <- kthread <- ret_from_fork
           338.934   265 futex_wait_queue_me <- futex_wait <- do_futex <- sys_futex <- entry_SYSCALL_64_fastpath
           219.488   201 schedule_timeout <- rcu_gp_kthread <- kthread <- ret_from_fork
           186.839   410 schedule_hrtimeout_range_clock <- schedule_hrtimeout_range <- ep_poll <- sys_epoll_wait <- do_syscall_64
           142.541    59 kvm_vcpu_block <- kvm_arch_vcpu_ioctl_run <- kvm_vcpu_ioctl <- do_vfs_ioctl <- sys_ioctl
            83.887    92 smpboot_thread_fn <- kthread <- ret_from_fork
            62.722    96 do_exit <- do_group_exit <- 0x2a5594 <- entry_SYSCALL_64_fastpath
            47.894    83 pipe_wait <- pipe_read <- __vfs_read <- vfs_read <- sys_read
            46.554    61 rcu_gp_kthread <- kthread <- ret_from_fork
            34.337    21 schedule_timeout <- intel_fbc_work_fn <- process_one_work <- worker_thread <- kthread
            29.521    14 schedule_hrtimeout_range_clock <- schedule_hrtimeout_range <- poll_schedule_timeout <- do_select <- core_sys_select
            20.274    10 schedule_timeout <- io_schedule_timeout <- bit_wait_io <- __wait_on_bit <- out_of_line_wait_on_bit
            15.085    55 schedule_timeout <- unix_stream_read_generic <- unix_stream_recvmsg <- sock_recvmsg <- SYSC_recvfrom
<SNIP>

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Acked-by: David Ahern <dsahern@gmail.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20161208144755.16673-7-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-12-15 16:25:45 -03:00
Namhyung Kim
07235f84ec perf sched timehist: Add -I/--idle-hist option
The --idle-hist option analyzes the system idle state, i.e. which
process makes a cpu go idle.  If this option is specified, non-idle
events are skipped and only processes switching to/from idle are shown.

This option is mostly useful together with the --summary(-only) option.
In the idle-time summary view, idle time is accounted to the thread
that ran just before the idle task (see the sketch after the example
below).

The example output looks like the following:

  Idle-time summary
                  comm parent sched-out idle-time min-idle avg-idle max-idle stddev migrations
                                (count)    (msec)   (msec)   (msec)   (msec)      %
  --------------------------------------------------------------------------------------------
        rcu_preempt[7]      2        95   550.872    0.011    5.798   23.146   7.63      0
       migration/1[16]      2         1    15.558   15.558   15.558   15.558   0.00      0
        khugepaged[39]      2         1     3.062    3.062    3.062    3.062   0.00      0
     kworker/0:1H[124]      2         2     4.728    0.611    2.364    4.116  74.12      0
  systemd-journal[167]      1         1     4.510    4.510    4.510    4.510   0.00      0
    kworker/u16:3[558]      2        13    74.737    0.080    5.749   12.960  21.96      0
   irq/34-iwlwifi[628]      2        21   118.403    0.032    5.638   23.990  24.00      0
    kworker/u17:0[673]      2         1     3.523    3.523    3.523    3.523   0.00      0
      dbus-daemon[722]      1         1     6.743    6.743    6.743    6.743   0.00      0
          ifplugd[741]      1         1    58.826   58.826   58.826   58.826   0.00      0
  wpa_supplicant[1490]      1         1    13.302   13.302   13.302   13.302   0.00      0
     wpa_actiond[1492]      1         2     4.064    0.168    2.032    3.896  91.72      0
         dockerd[1500]      1         1     0.055    0.055    0.055    0.055   0.00      0
  ...
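
A sketch of that accounting rule, with illustrative structures (not the
actual perf code): idle time is charged back to the thread that handed
the cpu over to idle.

  #include <stdint.h>
  #include <stddef.h>

  #define MAX_CPUS 64

  struct idle_account {
          uint64_t idle_time;   /* total ns charged to this thread */
          uint64_t sched_out;   /* times it handed the cpu over to idle */
  };

  /* which thread ran on each cpu just before the idle task took over */
  static struct idle_account *pre_idle[MAX_CPUS];

  static void on_switch(int cpu, int next_pid, struct idle_account *prev,
                        uint64_t idle_delta)
  {
          if (next_pid == 0) {            /* entering idle: remember prev */
                  pre_idle[cpu] = prev;
                  if (prev)
                          prev->sched_out++;
          } else if (pre_idle[cpu]) {     /* leaving idle: charge the time */
                  pre_idle[cpu]->idle_time += idle_delta;
                  pre_idle[cpu] = NULL;
          }
  }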

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Acked-by: David Ahern <dsahern@gmail.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20161208144755.16673-6-namhyung@kernel.org
Link: http://lkml.kernel.org/r/20161213080632.19099-2-namhyung@kernel.org
[ Merged fix sent by Namhyung, as posted in the second Link: tag ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-12-15 16:25:45 -03:00
Namhyung Kim
a4b2b6f56e perf sched timehist: Skip non-idle events when necessary
Sometimes the focus is only on idle-related events, as with the upcoming
idle-hist feature.  In that case we don't want to see other events, to
reduce noise.
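
A minimal sketch of such a filter, assuming idle is identified by pid 0
on either side of the switch (illustrative helper):

  #include <stdbool.h>

  /* keep only switches into or out of the idle task (pid 0) */
  static bool is_idle_related(int prev_pid, int next_pid)
  {
          return prev_pid == 0 || next_pid == 0;
  }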

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Acked-by: David Ahern <dsahern@gmail.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20161208144755.16673-5-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-12-15 16:25:44 -03:00
Namhyung Kim
699b5b920d perf sched timehist: Save callchain when entering idle
In order to investigate the reason for idleness, it is necessary to keep
the callchains when entering idle.  Entering idle can be identified by a
sched:sched_switch event whose next_pid field is 0.
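
A minimal sketch of the detection and save, with illustrative types
(the actual patch works on perf's sample structures):

  #include <stdint.h>
  #include <stddef.h>

  #define MAX_CPUS 64

  struct callchain { uint64_t ips[128]; size_t nr; };

  /* last callchain seen per cpu, saved only when switching into idle */
  static struct callchain idle_chain[MAX_CPUS];

  static void maybe_save_idle_callchain(int cpu, uint32_t next_pid,
                                        const struct callchain *cc)
  {
          if (next_pid == 0)              /* pid 0 == the per-cpu idle task */
                  idle_chain[cpu] = *cc;  /* keep for later idle analysis */
  }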

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Acked-by: David Ahern <dsahern@gmail.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20161208144755.16673-4-namhyung@kernel.org
Link: http://lkml.kernel.org/r/20161213080632.19099-1-namhyung@kernel.org
[ Merged fix from Namhyung, see second Link: tag ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-12-15 16:25:44 -03:00
Namhyung Kim
3bc2fa9cb8 perf sched timehist: Introduce struct idle_time_data
The struct idle_time_data keeps idle stats together with the callchains
entering the idle task.  The normal thread_runtime calculation is done
transparently since it extends struct thread_runtime.
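
A sketch of the layout this implies, with illustrative fields: embedding
struct thread_runtime first lets the generic accounting code operate on
an idle_time_data transparently.

  #include <stdint.h>

  struct thread_runtime {
          uint64_t total_run_time;
          uint64_t total_sleep_time;
          /* ... */
  };

  struct idle_time_data {
          struct thread_runtime rt; /* must be first: generic code can
                                     * treat an idle_time_data pointer
                                     * as a thread_runtime, so the
                                     * normal accounting just works */
          void *callchain_root;     /* callchains entering idle
                                     * (illustrative placeholder) */
          uint64_t last_time;
  };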

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Acked-by: David Ahern <dsahern@gmail.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20161208144755.16673-3-namhyung@kernel.org
[ Align struct field names ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-12-15 16:25:44 -03:00
Namhyung Kim
96039c7c52 perf sched timehist: Split is_idle_sample()
The is_idle_sample() function actually does more than determining
whether a sample comes from the idle task.  Split the callchain part
out into save_task_callchain() to make it clearer.

Also, checking prev_pid from the trace data is preferable to just
checking sample->pid, since it's possible, although rare, to have an
invalid 0 pid/tid when scheduling an exiting task.

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Acked-by: David Ahern <dsahern@gmail.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20161208144755.16673-2-namhyung@kernel.org
[ Remove some needless () in some return statements ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-12-15 16:25:44 -03:00
Namhyung Kim
b336352b41 perf sched timehist: Cleanup idle_max_cpu handling
The idle_max_cpu handling is a little bit confusing IMHO.  Let's make
it more straightforward.

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Acked-by: David Ahern <dsahern@gmail.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20161206034010.6499-6-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-12-07 12:00:34 -03:00
Namhyung Kim
5d92d96a94 perf sched timehist: Handle zero sample->tid properly
Sometimes samples have a tid of 0 but a non-0 pid.  This ends up
creating a new thread with 0 tid/pid (instead of referring to the idle
task), since the tid is used to search for the matching task.  But it's
wrong to use 0 as the tid when the pid is set.  This patch uses the tid
only if it has a non-zero value or if it equals the pid (i.e. both are
0).
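
A minimal sketch of that rule (hypothetical helper):

  #include <stdint.h>

  /* use the tid only if it is non-zero, or if the pid is also 0 (idle) */
  static int32_t effective_tid(int32_t pid, int32_t tid)
  {
          if (tid != 0 || pid == 0)
                  return tid;
          return pid;  /* tid of 0 with a real pid: fall back to the pid */
  }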

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20161206034010.6499-4-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-12-07 12:00:34 -03:00
Namhyung Kim
6fa94258ce perf sched: Cleanup option processing
The -D/--dump-raw-trace option is in the parent options, so there is no
need to repeat it.  Also move the -f/--force option to the parent, as
handling the data file is common to all subcommands.

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Acked-by: David Ahern <dsahern@gmail.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20161206034010.6499-2-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-12-07 12:00:33 -03:00
David Ahern
f45bf8d393 perf sched timehist: Improve error message when analyzing wrong file
Arnaldo reported an unhelpful error message when running perf sched
timehist on a file that did not contain sched tracepoints:

    [root@jouet ~]# perf sched timehist
    No trace sample to read. Did you call 'perf record -R'?

    [root@jouet ~]# perf evlist -v
    cycles:ppp: size: 112, { sample_period, sample_freq }: 4000, sample_type: IP|TID|TIME|CALLCHAIN|CPU|PERIOD, disabled: 1, inherit: 1, mmap: 1, comm: 1, freq: 1, task: 1, precise_ip: 3, sample_id_all: 1, exclude_guest: 1, mmap2: 1, comm_exec: 1

Change the has_traces check to look for the sched_switch event. Analysis
for perf sched timehist requires at least this event.

Now when analyzing a file without sched tracepoints you get:

    root@f21-vbox:/tmp$ perf sched timehist
    No sched_switch events found. Have you run 'perf sched record'?
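
A minimal sketch of the kind of presence check implied, scanning the
recorded event names (the real code uses perf's evlist API, which is
not quoted here):

  #include <stdbool.h>
  #include <string.h>

  /* sketch: scan the recorded event names for the one timehist needs */
  static bool has_sched_switch(const char *const *names, int nr)
  {
          for (int i = 0; i < nr; i++)
                  if (strcmp(names[i], "sched:sched_switch") == 0)
                          return true;
          return false;
  }

On failure the command can then print the actionable message shown
above instead of the generic one.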

Signed-off-by: David Ahern <dsahern@gmail.com>
Reported-and-Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1480451988-43673-1-git-send-email-dsahern@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-12-07 12:00:32 -03:00
David Ahern
853b740711 perf sched timehist: Add option to specify time window of interest
Add an option to allow the user to control the analysis window, e.g.
collect data for a time window and then analyze a segment of interest
within it.

Committer notes:

Testing it:

  # perf sched record -a usleep 1
  [ perf record: Woken up 1 times to write data ]
  [ perf record: Captured and wrote 1.593 MB perf.data (25 samples) ]
  #
  # perf sched timehist | head -18
  Samples do not have callchains.
          time    cpu   task name       wait time  sch delay  run time
                        [tid/pid]          (msec)     (msec)    (msec)
  ------------- ------  --------------- ---------  ---------  --------
   19818.635579 [0002]  <idle>              0.000      0.000     0.000
   19818.635613 [0000]  perf[9116]          0.000      0.000     0.000
   19818.635676 [0000]  <idle>              0.000      0.000     0.063
   19818.635678 [0000]  rcuos/2[29]         0.000      0.002     0.001
   19818.635696 [0002]  perf[9117]          0.000      0.004     0.116
   19818.635702 [0000]  <idle>              0.001      0.000     0.024
   19818.635709 [0002]  migration/2[25]     0.000      0.003     0.012
   19818.636263 [0000]  usleep[9117]        0.005      0.000     0.560
   19818.636316 [0000]  <idle>              0.560      0.000     0.053
   19818.636358 [0002]  <idle>              0.129      0.000     0.649
   19818.636358 [0000]  usleep[9117]        0.053      0.002     0.042
  #

  # perf sched timehist --time 19818.635696,
  Samples do not have callchains.
           time    cpu  task name       wait time  sch delay  run time
                        [tid/pid]          (msec)     (msec)    (msec)
  ------------- ------  ---------------  --------  --------- ---------
   19818.635696 [0002]  perf[9117]          0.000      0.120     0.000
   19818.635702 [0000]  <idle>              0.019      0.000     0.006
   19818.635709 [0002]  migration/2[25]     0.000      0.003     0.012
   19818.636263 [0000]  usleep[9117]        0.005      0.000     0.560
   19818.636316 [0000]  <idle>              0.560      0.000     0.053
   19818.636358 [0002]  <idle>              0.129      0.000     0.649
   19818.636358 [0000]  usleep[9117]        0.053      0.002     0.042
  #
  # perf sched timehist --time 19818.635696,19818.635709
  Samples do not have callchains.
           time    cpu  task name       wait time  sch delay  run time
                        [tid/pid]          (msec)     (msec)    (msec)
  ------------- ------  --------------- ---------  --------- ---------
   19818.635696 [0002]  perf[9117]          0.000      0.120     0.000
   19818.635702 [0000]  <idle>              0.019      0.000     0.006
   19818.635709 [0002]  migration/2[25]     0.000      0.003     0.012
   19818.635709 [0000]  usleep[9117]        0.005      0.000     0.006
  #

Signed-off-by: David Ahern <dsahern@gmail.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1480439746-42695-5-git-send-email-dsahern@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-12-01 13:02:52 -03:00
David Ahern
350f54fab2 perf sched timehist: Handle cpu migration events
Add a handler for the sched:sched_migrate_task event.  The total number
of migrations is added to the summary display, and -M/--migrations can
be used to show the individual migration events.

Signed-off-by: David Ahern <dsahern@gmail.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/r/1480091321-35591-1-git-send-email-dsa@cumulusnetworks.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-11-25 16:00:22 -03:00
Namhyung Kim
8388deb3ba perf sched timehist: Enlarge max stack depth by 2
When callchains are recorded, they always contain 2 scheduler functions
(__schedule + schedule, or __schedule + preempt_schedule), which get
ignored.  So collect 2 more entries in order to show the user the
expected number of callchain frames.

Committer Notes:

Example of final result, using the same perf.data file as in the
previous cset comment, but this time redirecting the output of 'perf
sched timehist' to a file instead of copy'n'pasting from xterm:

  [root@jouet experimental]# perf sched timehist > /tmp/bla
  [root@jouet experimental]# cat /tmp/bla
      time  cpu task name        wait time sch delay run time
                 [tid/pid]            (msec) (msec) (msec)
  -------- ----  -------------------- ------ ------ -----
  6.494998 [01] <idle>                0.000  0.000  0.000
  6.495027 [02] perf[519]             0.000  0.000  0.000 schedule_hrtimeout_range_clock <- schedule_hrtimeout_range <- poll_schedule_timeout <- do_sys_poll <- sys_poll
  6.495096 [03] <idle>                0.000  0.000  0.000
  6.495100 [03] rcuos/0[9]            0.000  0.005  0.003 rcu_nocb_kthread <- kthread <- ret_from_fork
  6.495113 [01] perf[520]             0.000  0.008  0.114 preempt_schedule_common <- _cond_resched <- wait_for_completion <- stop_one_cpu <- sched_exec <- do_execveat_common.isra.35
  6.495121 [00] <idle>                0.000  0.000  0.000
  6.495129 [01] migration/1[17]       0.000  0.003  0.016 smpboot_thread_fn <- kthread <- ret_from_fork
  6.496085 [02] <idle>                0.000  0.000  1.057
  6.496096 [02] kworker/u16:1[31169]  0.000  0.004  0.011 worker_thread <- kthread <- ret_from_fork
  6.496096 [03] <idle>                0.003  0.000  0.996
  6.496169 [02] <idle>                0.011  0.000  0.072
  6.496171 [00] ls[520]               0.008  0.000  1.049 do_exit <- do_group_exit <- [unknown] <- entry_SYSCALL_64_fastpath
  6.496172 [03] gnome-terminal-[4391] 0.000  0.003  0.076 schedule_hrtimeout_range_clock <- schedule_hrtimeout_range <- poll_schedule_timeout <- do_sys_poll <- sys_poll

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/20161124011114.7102-3-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-11-25 10:50:57 -03:00
Namhyung Kim
cdeb01bf78 perf sched timehist: Mark schedule function in callchains
The sched_switch event is always captured from within the scheduler
functions, so it'd be better to omit them from the callchain.  This
patch marks the functions to be omitted by a later patch.

Committer notes:

Testing it:

Before:

  [root@jouet experimental]# perf sched record -g ls
  Dockerfile  perf.data  x-mips64
  [ perf record: Woken up 1 times to write data ]
  [ perf record: Captured and wrote 1.355 MB perf.data (29 samples) ]
  [root@jouet experimental]# perf sched timehist
      time  cpu  task name         wait time sch delay run time
                 [tid/pid]             (msec) (msec) (msec)
  ----------- -----  ----------------- ------ ------ ------
  6.494998 [001] <idle>                0.000  0.000  0.000
  6.495027 [002] perf[519]             0.000  0.000  0.000 __schedule <- schedule <- schedule_hrtimeout_range_clock <- schedule_hrtimeou
  6.495096 [003] <idle>                0.000  0.000  0.000
  6.495100 [003] rcuos/0[9]            0.000  0.005  0.003 __schedule <- schedule <- rcu_nocb_kthread <- kthread <- ret_from_fork
  6.495113 [001] perf[520]             0.000  0.008  0.114 __schedule <- preempt_schedule_common <- _cond_resched <- wait_for_completion
  6.495121 [000] <idle>                0.000  0.000  0.000
  6.495129 [001] migration/1[17]       0.000  0.003  0.016 __schedule <- schedule <- smpboot_thread_fn <- kthread <- ret_from_fork
  6.496085 [002] <idle>                0.000  0.000  1.057
  6.496096 [002] kworker/u16:1[31169]  0.000  0.004  0.011 __schedule <- schedule <- worker_thread <- kthread <- ret_from_fork
  6.496096 [003] <idle>                0.003  0.000  0.996
  6.496169 [002] <idle>                0.011  0.000  0.072
  6.496171 [000] ls[520]               0.008  0.000  1.049 __schedule <- schedule <- do_exit <- do_group_exit <- [unknown]
  6.496172 [003] gnome-terminal-[4391] 0.000  0.003  0.076 __schedule <- schedule <- schedule_hrtimeout_range_clock <- schedule_hrtimeo

After:

  [root@jouet experimental]# perf sched timehist
      time  cpu  task name         wait time sch delay run time
                 [tid/pid]            (msec)  (msec)  (msec)
  ----------- -----  ----------------- -----  -----  ------
  6.494998 [001] <idle>                0.000  0.000  0.000
  6.495027 [002] perf[519]             0.000  0.000  0.000 schedule_hrtimeout_range_clock <- schedule_hrtimeout_range <- poll_schedule_t
  6.495096 [003] <idle>                0.000  0.000  0.000
  6.495100 [003] rcuos/0[9]            0.000  0.005  0.003 rcu_nocb_kthread <- kthread <- ret_from_fork
  6.495113 [001] perf[520]             0.000  0.008  0.114 preempt_schedule_common <- _cond_resched <- wait_for_completion <- stop_one_c
  6.495121 [000] <idle>                0.000  0.000  0.000
  6.495129 [001] migration/1[17]       0.000  0.003  0.016 smpboot_thread_fn <- kthread <- ret_from_fork
  6.496085 [002] <idle>                0.000  0.000  1.057
  6.496096 [002] kworker/u16:1[31169]  0.000  0.004  0.011 worker_thread <- kthread <- ret_from_fork
  6.496096 [003] <idle>                0.003  0.000  0.996
  6.496169 [002] <idle>                0.011  0.000  0.072
  6.496171 [000] ls[520]               0.008  0.000  1.049 do_exit <- do_group_exit <- [unknown]
  6.496172 [003] gnome-terminal-[4391] 0.000  0.003  0.076 schedule_hrtimeout_range_clock <- schedule_hrtimeout_range <- poll_schedule_
  [root@jouet experimental]#

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/20161124011114.7102-1-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-11-25 10:49:43 -03:00
Namhyung Kim
2d9bbf6eb3 perf callchain: Add option to skip ignore symbol when printing callchains
For tracepoint events, callchains always contain certain functions.
Sometimes it'd be better to skip those functions as they add no value.

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/20161124011114.7102-2-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-11-25 10:49:38 -03:00
David Ahern
a407b0678b perf sched timehist: Add -V/--cpu-visual option
The -V option provides a visual aid for sched switches by cpu:

  $ perf sched timehist -V
             time    cpu  0123456789abc  task name              b/n time  sch delay   run time
                                         [tid/pid]                (msec)     (msec)     (msec)
  --------------- ------  -------------  --------------------  ---------  ---------  ---------
  ...
   2412598.429696 [0009]           i     <idle>                    0.000      0.000      0.000
   2412598.429767 [0002]    s            perf[7219]                0.000      0.000      0.000
   2412598.429783 [0009]           s     perf[7220]                0.000      0.006      0.087
   2412598.429794 [0010]            i    <idle>                    0.000      0.000      0.000
   2412598.429795 [0009]           s     migration/9[53]           0.000      0.003      0.011
   2412598.430370 [0010]            s    sleep[7220]               0.011      0.000      0.576
   2412598.432584 [0003]     i           <idle>                    0.000      0.000      0.000
  ...

Committer notes:

'i' marks idle time, 's' are scheduler events.

Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Acked-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/20161116060634.28477-8-namhyung@kernel.org
[ Add documentation based on above commit message ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-11-23 10:44:09 -03:00
David Ahern
6c973c9085 perf sched timehist: Add call graph options
If callchains were recorded they are appended to the line with a default stack depth of 5:

  1.874569 [0011] gcc[31949]       0.014 0.000 1.148 wait_for_completion_killable <- do_fork <- sys_vfork <- stub_vfork <- __vfork
  1.874591 [0010] gcc[31951]       0.000 0.000 0.024 __cond_resched <- _cond_resched <- wait_for_completion <- stop_one_cpu <- sched_exec
  1.874603 [0010] migration/10[59] 3.350 0.004 0.011 smpboot_thread_fn <- kthread <- ret_from_fork
  1.874604 [0011] <idle>           1.148 0.000 0.035 cpu_startup_entry <- start_secondary
  1.874723 [0005] <idle>           0.016 0.000 1.383 cpu_startup_entry <- start_secondary
  1.874746 [0005] gcc[31949]       0.153 0.078 0.022 do_wait sys_wait4 <- system_call_fastpath <- __GI___waitpid

 --no-call-graph can be used to suppress the callchains. --max-stack
controls the number of frames shown (default 5). The -x/--excl options
can be used to collapse redundant callchains to get more relevant data
on screen.

Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Acked-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/20161116060634.28477-7-namhyung@kernel.org
[ Add documentation based on above commit message ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-11-23 10:44:09 -03:00
David Ahern
fc1469f1b2 perf sched timehist: Add -w/--wakeups option
The -w option shows wakeup events in the timehist output.

  $ perf sched timehist -w
             time    cpu  task name              b/n time  sch delay   run time
                          [tid/pid]                (msec)     (msec)     (msec)
  --------------- ------  --------------------  ---------  ---------  ---------
   2412598.429689 [0002]  perf[7219]                                             awakened: perf[7220]
   2412598.429696 [0009]  <idle>                    0.000      0.000      0.000
   2412598.429767 [0002]  perf[7219]                0.000      0.000      0.000
   2412598.429780 [0009]  perf[7220]                                             awakened: migration/9[53]
  ...

Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Acked-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/20161116060634.28477-6-namhyung@kernel.org
[ Add documentation based on above commit message ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-11-23 10:44:08 -03:00
David Ahern
52df138caa perf sched timehist: Add summary options
The -s/--summary option shows process runtime statistics, and the
 -S/--with-summary option shows those stats along with the normal output.

  $ perf sched timehist -s

  Runtime summary
                            comm  parent   sched-in     run-time    min-run     avg-run     max-run  stddev
                                            (count)       (msec)     (msec)      (msec)      (msec)       %
  ---------------------------------------------------------------------------------------------------------
                  ksoftirqd/0[3]       2          2        0.011      0.004       0.005       0.006   14.87
                  rcu_preempt[7]       2         11        0.071      0.002       0.006       0.017   20.23
                  watchdog/0[11]       2          1        0.002      0.002       0.002       0.002    0.00
                  watchdog/1[12]       2          1        0.004      0.004       0.004       0.004    0.00
  ...

  Terminated tasks:
                     sleep[7220]    7219          3        0.770      0.087       0.256       0.576   62.28

  Idle stats:
      CPU  0 idle for   2352.006  msec
      CPU  1 idle for   2764.497  msec
      CPU  2 idle for   2998.229  msec
      CPU  3 idle for   2967.800  msec

      Total number of unique tasks: 52
  Total number of context switches: 2532
             Total run time (msec): 218.036

Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Acked-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/20161116060634.28477-5-namhyung@kernel.org
[ Add documentation from last commit, so that docs comes with the cset that introduces the feature ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-11-23 10:44:08 -03:00
David Ahern
49394a2a24 perf sched timehist: Introduce timehist command
'perf sched timehist' provides an analysis of scheduling events.

Example usage:
    perf sched record -- sleep 1
    perf sched timehist

By default it shows the individual schedule events, including the wait
time (time between sched-out and next sched-in events for the task), the
task scheduling delay (time between wakeup and actually running) and run
time for the task:

            time    cpu  task name             wait time  sch delay   run time
                         [tid/pid]                (msec)     (msec)     (msec)
  -------------- ------  --------------------  ---------  ---------  ---------
    79371.874569 [0011]  gcc[31949]                0.014      0.000      1.148
    79371.874591 [0010]  gcc[31951]                0.000      0.000      0.024
    79371.874603 [0010]  migration/10[59]          3.350      0.004      0.011
    79371.874604 [0011]  <idle>                    1.148      0.000      0.035
    79371.874723 [0005]  <idle>                    0.016      0.000      1.383
    79371.874746 [0005]  gcc[31949]                0.153      0.078      0.022
...

Times are in msec.usec.

Committer note:

Add above explanation as the 'perf sched timehist' entry for 'man
perf-sched'.

Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Acked-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/20161116060634.28477-4-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-11-23 10:44:07 -03:00
Namhyung Kim
99620a5d0c perf tools: Introduce timestamp__scnprintf_usec()
Joonwoo reported that there's a mismatch between the timestamps in the
script and sched commands.  This was because of a difference in how the
timestamp is printed.  Factor out the code and share it so that they
stay in sync.  Also, sched map had a similar problem; fix it too.

Committer notes:

Fixed the max_lat_at bug introduced by Namhyung's original patch, as
pointed out by Joonwoo, and made it a function following the scnprintf()
model, i.e. returning the number of bytes formatted, and receiving as
the first parameter the object from where the data to the formatting is
obtained, renaming it from:

   char *timestamp_in_usec(char *bf, size_t size, u64 timestamp)

to

   int timestamp__scnprintf_usec(u64 timestamp, char *bf, size_t size)
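
Given that signature, the body plausibly splits the nanosecond
timestamp into seconds and microseconds; a sketch using plain
snprintf() (perf itself uses its scnprintf() wrapper):

  #include <inttypes.h>
  #include <stdint.h>
  #include <stdio.h>

  #define NSEC_PER_SEC  1000000000ULL
  #define NSEC_PER_USEC 1000ULL

  /* format a nanosecond timestamp as "seconds.microseconds" */
  int timestamp__scnprintf_usec(uint64_t timestamp, char *bf, size_t size)
  {
          uint64_t sec  = timestamp / NSEC_PER_SEC;
          uint64_t usec = (timestamp % NSEC_PER_SEC) / NSEC_PER_USEC;

          return snprintf(bf, size, "%" PRIu64 ".%06" PRIu64, sec, usec);
  }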

Reported-by: Joonwoo Park <joonwoop@codeaurora.org>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20161024020246.14928-3-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-10-28 11:29:40 -02:00
Namhyung Kim
e107f129e2 perf sched map: Always show task comm with -v
I'd like to see the names of tasks in 'perf sched map', but it only
shows the name of new tasks and then uses short names from then on.
This is not good for long-running tasks since it's hard for users to
track the short names.  This patch makes it show the names (except for
the idle task) when the -v option is used.  We may eventually make this
the default behavior.

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20161024020246.14928-2-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-10-25 15:05:09 -03:00
Namhyung Kim
1208bb274b perf sched map: Apply cpu color when there's an activity
Always applying the cpu color doesn't help readability IMHO.  Instead
it might be better to apply the color only when there's activity on
those CPUs.

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20161024020246.14928-1-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-10-25 15:04:48 -03:00
Namhyung Kim
77f02f4446 perf sched: Make common options cascading
The -i and -v options can be used in subcommands, so enable cascading
of the sched_options.  This fixes the following inconvenience in 'perf
sched':

  $ perf sched -i perf.data.sched  map
  ... (it works well) ...

  $ perf sched map  -i perf.data.sched
    Error: unknown switch `i'

   Usage: perf sched map [<options>]

          --color-cpus <cpus>
                            highlight given CPUs in map
          --color-pids <pids>
                            highlight given pids in map
          --compact         map output in compact mode
          --cpus <cpus>     display given CPUs in map

With this patch, the second command line works with the perf.data.sched
data file.
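
perf's option parser supports this with a parent table: the
subcommand's option array ends with an entry pointing at the common
options. A sketch in the parse-options style (the exact tables in the
patch may differ):

  static const struct option sched_options[] = {
          OPT_STRING('i', "input", &input_name, "file", "input file name"),
          OPT_INCR('v', "verbose", &verbose, "be more verbose"),
          OPT_END()
  };

  static const struct option map_options[] = {
          OPT_BOOLEAN(0, "compact", &sched.map.comp,
                      "map output in compact mode"),
          /* make -i/-v usable after the subcommand name, too */
          OPT_PARENT(sched_options)
  };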

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/r/20161024030003.28534-2-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-10-25 10:24:48 -03:00
Arnaldo Carvalho de Melo
4fc76e495b perf sched: Use linux/time64.h
Probably the next step is to introduce linux/time.h and use
timespec_to_ns(), etc.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-4nqhskn27fn93cz3ukbc8drf@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-08-23 15:37:33 -03:00
Arnaldo Carvalho de Melo
c8b5f2c96d tools: Introduce str_error_r()
The tools so far have been using the strerror_r() GNU variant, that
returns a string, be it the buffer passed or something else.

But that, besides being tricky in cases where we expect that the
function using strerror_r() returns the error formatted in a provided
buffer (we have to check if it returned something else and copy that
instead), breaks the build on systems not using glibc, like Alpine
Linux, where musl libc is used.

So, introduce yet another wrapper, str_error_r(), that has the GNU
interface but uses the portable XSI variant of strerror_r(), so that
users can rest assured that the provided buffer is used and is what is
returned.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-d4t42fnf48ytlk8rjxs822tf@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-07-12 15:19:47 -03:00
Jiri Olsa
73643bb6a2 perf sched map: Display only given cpus
Introduce the --cpus option to display only the given cpus.  It can be
used together with the --color-cpus option.

  $ perf sched map  --cpus 0,1
        *A0   309999.786924 secs A0 => rcu_sched:7
        *.    309999.786930 secs
    *B0  .    309999.786931 secs B0 => rcuos/2:25
     B0 *A0   309999.786947 secs

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1460467771-26532-9-git-send-email-jolsa@kernel.org
[ Added entry to man page ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-04-13 10:11:52 -03:00
Jiri Olsa
cf294f24f8 perf sched map: Color given cpus
Add the --color-cpus option to display selected cpus with a background
color (red by default).  It helps when navigating through the 'perf
sched map' output.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1460467771-26532-8-git-send-email-jolsa@kernel.org
[ Added entry to man page ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-04-13 10:11:51 -03:00
Jiri Olsa
a151a37a76 perf sched map: Color given pids
Add the --color-pids option to display selected pids in color (blue by
default).  It helps when navigating through the 'perf sched map' output.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1460467771-26532-7-git-send-email-jolsa@kernel.org
[ Added entry to man page ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-04-13 10:11:51 -03:00
Jiri Olsa
8cd91195e5 perf sched: Use color_fprintf for output
As preparation for the next patch.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1460467771-26532-5-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-04-13 10:11:51 -03:00
Jiri Olsa
99623c628f perf sched: Add compact display option
Add a compact map display that does not output the whole cpu matrix,
only the cpus that got events.

  $ perf sched map --compact
    *A0   1082427.094098 secs A0 => perf:19404 (CPU 2)
     A0 *.    1082427.094127 secs .  => swapper:0 (CPU 1)
     A0  .  *B0   1082427.094174 secs B0 => rcuos/2:25 (CPU 3)
     A0  .  *.    1082427.094177 secs
    *C0  .   .    1082427.094187 secs C0 => migration/2:21
     C0 *A0  .    1082427.094193 secs
    *.   A0  .    1082427.094195 secs
    *D0  A0  .    1082427.094402 secs D0 => rngd:968
    *.   A0  .    1082427.094406 secs
     .  *E0  .    1082427.095221 secs E0 => kworker/1:1:5333
     .   E0 *F0   1082427.095227 secs F0 => xterm:3342

It helps to display sane output for small thread loads on big cpu
servers.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1460467771-26532-4-git-send-email-jolsa@kernel.org
[ Add entry in 'perf sched' man page ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-04-13 10:11:51 -03:00
Josh Poimboeuf
4b6ab94eab perf subcmd: Create subcmd library
Move the subcommand-related files from perf to a new library named
libsubcmd.a.

Since we're moving files anyway, go ahead and rename 'exec_cmd.*' to
'exec-cmd.*' to be consistent with the naming of all the other files.

Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/c0a838d4c878ab17fee50998811612b2281355c1.1450193761.git.jpoimboe@redhat.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-12-17 14:27:14 -03:00
Jiri Olsa
0014de172d perf sched latency: Fix thread pid reuse issue
The latency subcommand holds a tree of working atoms sorted by the
thread's pid/tid.  If a new thread appears with the same pid and tid,
the old working atom is found and an assert condition is hit in the
search function:

  thread_atoms_search: Assertion `!(thread != atoms->thread)' failed

Change the sort function to use the thread object pointer in addition
to the pid and tid checks.  This way a new thread will never find an
old one with the same pid/tid.
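
A sketch of the strengthened comparison, with illustrative types: fall
back to the object pointer when pid and tid tie, so a recycled pid/tid
never matches a stale atom.

  #include <stdint.h>

  struct thread { int32_t pid, tid; /* ... */ };

  static int thread_cmp(const struct thread *a, const struct thread *b)
  {
          if (a->pid != b->pid)
                  return a->pid < b->pid ? -1 : 1;
          if (a->tid != b->tid)
                  return a->tid < b->tid ? -1 : 1;
          /* same pid/tid but a different thread object: distinct key */
          if (a != b)
                  return a < b ? -1 : 1;
          return 0;
  }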

Link: http://lkml.kernel.org/n/tip-o4doazhhv0zax5zshkg8hnys@git.kernel.org
Reported-by: Mohit Agrawal <moagrawa@redhat.com>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1446462625-15807-1-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-11-05 12:51:00 -03:00
Namhyung Kim
c711836972 perf tools: Introduce usage_with_options_msg()
usage_with_options() now sets up a pager before printing its message,
so normal printf() or pr_err() output will not be shown.  The new
usage_with_options_msg() can be used to print a help message before the
usage strings.

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1445701767-12731-4-git-send-email-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-10-27 09:28:44 -03:00
Josef Bacik
2f80dd4488 perf sched: Add option to merge like comms to lat output
Sometimes when debugging large multi-threaded applications it is helpful
to collate all of the latency numbers into one bulk record to get an
idea of what is going on.

This patch does this by merging any entries that belong to the same comm
into one entry and then spitting out those totals.

I've also slightly changed the output so you can see how many threads
were merged in the processing.  Here is the new default output format:

 -----------------------------------------------------------------------------------------------------------
  Task                 | Runtime ms  | Switches | Average delay ms | Maximum delay ms | Maximum delay at    |
 -----------------------------------------------------------------------------------------------------------
  chrome:(23)          |  740.878 ms |     2612 | avg:    0.022 ms | max:    0.845 ms | max at: 7935.254223 s
  pulseaudio:1523      |   94.440 ms |      597 | avg:    0.027 ms | max:    0.110 ms | max at: 7934.668372 s
  threaded-ml:6042     |   72.554 ms |      386 | avg:    0.035 ms | max:    1.186 ms | max at: 7935.330911 s
  Chrome_IOThread:3832 |   52.388 ms |      456 | avg:    0.021 ms | max:    1.365 ms | max at: 7935.330602 s
  Chrome_ChildIOT:(7)  |   50.694 ms |      743 | avg:    0.021 ms | max:    1.448 ms | max at: 7935.256659 s
  Compositor:5510      |   30.012 ms |      192 | avg:    0.019 ms | max:    0.131 ms | max at: 7936.636815 s
  plugin_audio_th:6043 |   24.828 ms |      314 | avg:    0.018 ms | max:    0.143 ms | max at: 7936.205994 s
  CompositorTileW:(2)  |   14.099 ms |       45 | avg:    0.022 ms | max:    0.153 ms | max at: 7937.521800 s

The (#) after the task is the number of threads merged; if no threads
were merged it just shows the pid.  Here is the same trace file with
the -p option to print the per-pid latency numbers:

 -----------------------------------------------------------------------------------------------------------
  Task                 | Runtime ms  | Switches | Average delay ms | Maximum delay ms | Maximum delay at    |
 -----------------------------------------------------------------------------------------------------------
  chrome:5500          |  386.872 ms |      387 | avg:    0.023 ms | max:    0.241 ms | max at: 7936.001694 s
  pulseaudio:1523      |   94.440 ms |      597 | avg:    0.027 ms | max:    0.110 ms | max at: 7934.668372 s
  threaded-ml:6042     |   72.554 ms |      386 | avg:    0.035 ms | max:    1.186 ms | max at: 7935.330911 s
  chrome:10226         |   69.710 ms |      251 | avg:    0.023 ms | max:    0.764 ms | max at: 7935.992305 s
  chrome:4267          |   64.551 ms |      418 | avg:    0.021 ms | max:    0.294 ms | max at: 7937.862427 s
  chrome:4827          |   62.268 ms |       54 | avg:    0.029 ms | max:    0.666 ms | max at: 7935.992813 s
  Chrome_IOThread:3832 |   52.388 ms |      456 | avg:    0.021 ms | max:    1.365 ms | max at: 7935.330602 s
  chrome:3776          |   46.150 ms |      349 | avg:    0.023 ms | max:    0.845 ms | max at: 7935.254223 s

Signed-off-by: Josef Bacik <jbacik@fb.com>
Acked-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: kernel-team@fb.com
Link: http://lkml.kernel.org/r/1432300720-30478-1-git-send-email-jbacik@fb.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-05-27 12:21:45 -03:00
Arnaldo Carvalho de Melo
b91fc39f4a perf machine: Protect the machine->threads with a rwlock
In addition to using refcounts for the struct thread lifetime
management, we need to protect machine->threads from concurrent access.

That happens in 'perf top', where a thread processes events, inserting
and deleting entries from that rb_tree while another thread decays
hist_entries, that end up dropping references and ultimately deleting
threads from the rb_tree and releasing its resources when no further
hist_entry (or other data structures, like in 'perf sched') references
it.

So the rule is the same for refcounts + protected trees in the kernel,
get the tree lock, find object, bump the refcount, drop the tree lock,
return, use object, drop the refcount if no more use of it is needed,
keep it if storing it in some other data structure, drop when releasing
that data structure.

I.e. pair "t = machine__find(new)_thread()" with a "thread__put(t)", and
"perf_event__preprocess_sample(&al)" with "addr_location__put(&al)".

The addr_location__put() one is because as we return references to
several data structures, we may end up adding more reference counting
for the other data structures and then we'll drop it at
addr_location__put() time.
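
A sketch of that rule with POSIX primitives (perf has its own wrappers;
this only illustrates the lock/find/get then unlock/use/put ordering):

  #include <pthread.h>
  #include <stdatomic.h>

  struct thread {
          atomic_int refcnt;
          /* ... */
  };

  /* hypothetical tree operation, stands in for the rb_tree code */
  struct thread *lookup_or_insert(int pid);

  static pthread_rwlock_t threads_lock = PTHREAD_RWLOCK_INITIALIZER;

  struct thread *machine_findnew_thread(int pid)
  {
          struct thread *t;

          pthread_rwlock_wrlock(&threads_lock);   /* may insert: write lock */
          t = lookup_or_insert(pid);
          if (t)
                  atomic_fetch_add(&t->refcnt, 1); /* get before unlocking */
          pthread_rwlock_unlock(&threads_lock);
          return t;       /* caller pairs this with a thread__put() later */
  }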

Acked-by: David Ahern <dsahern@gmail.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: Don Zickus <dzickus@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-bs9rt4n0jw3hi9f3zxyy3xln@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-05-08 16:19:27 -03:00
Yunlong Song
ff5f3bbd40 perf sched replay: Use replay_repeat to calculate the runavg of cpu usage instead of the default value 10
Since sched->replay_repeat defaults to 10, sched->run_avg,
sched->runavg_cpu_usage, and sched->runavg_parent_cpu_usage all use the
constant 10 to calculate their values.

However, replay_repeat can be changed to another value with the -r
option, so these calculations should use replay_repeat instead of the
hard-coded default of 10 to get accurate results.
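
A sketch of the change in C, assuming the running average was
previously computed with hard-coded weights of 9 and 10 (an
illustrative helper, not the exact code; assumes replay_repeat >= 1):

  #include <stdint.h>

  /* before: run_avg = (run_avg * 9 + delta) / 10; */
  static uint64_t update_run_avg(uint64_t run_avg, uint64_t delta,
                                 unsigned long replay_repeat)
  {
          return (run_avg * (replay_repeat - 1) + delta) / replay_repeat;
  }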

Signed-off-by: Yunlong Song <yunlong.song@huawei.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/r/1427809596-29559-10-git-send-email-yunlong.song@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-04-08 09:07:27 -03:00
Yunlong Song
f0dd330fdf perf sched replay: Support using -f to override perf.data file ownership
Enable using perf.data even when it is not owned by the current user
or root.

Example:

 $ ls -al perf.data
 -rw------- 1 Yunlong.Song Yunlong.Song 5321918 Mar 25 15:14 perf.data
 $ sudo id
 uid=0(root) gid=0(root) groups=0(root),64(pkcs11)

Before this patch:

 $ sudo perf sched replay -f
 run measurement overhead: 98 nsecs
 sleep measurement overhead: 52909 nsecs
 the run test took 1000015 nsecs
 the sleep test took 1054253 nsecs
 File perf.data not owned by current user or root (use -f to override)

As shown above, the -f option does not work at all.

After this patch:

 $ sudo perf sched replay -f
 run measurement overhead: 221 nsecs
 sleep measurement overhead: 40514 nsecs
 the run test took 1000003 nsecs
 the sleep test took 1056098 nsecs
 nr_run_events:        10
 nr_sleep_events:      1562
 nr_wakeup_events:     5
 task      0 (                  :1:         1), nr_events: 1
 task      1 (                  :2:         2), nr_events: 1
 task      2 (                  :3:         3), nr_events: 1
 ...
 ...
 task   1549 (             :163132:    163132), nr_events: 1
 task   1550 (             :163540:    163540), nr_events: 1
 task   1551 (           <unknown>:         0), nr_events: 10
 ------------------------------------------------------------
 #1  : 50.198, ravg: 50.20, cpu: 2335.18 / 2335.18
 #2  : 219.099, ravg: 67.09, cpu: 2835.11 / 2385.17
 #3  : 238.626, ravg: 84.24, cpu: 3278.26 / 2474.48
 #4  : 200.364, ravg: 95.85, cpu: 2977.41 / 2524.77
 #5  : 176.882, ravg: 103.96, cpu: 2801.35 / 2552.43
 #6  : 191.093, ravg: 112.67, cpu: 2813.70 / 2578.56
 #7  : 189.448, ravg: 120.35, cpu: 2809.21 / 2601.62
 #8  : 200.637, ravg: 128.38, cpu: 2849.91 / 2626.45
 #9  : 248.338, ravg: 140.37, cpu: 4380.61 / 2801.87
 #10 : 511.139, ravg: 177.45, cpu: 3077.73 / 2829.45

As shown above, the -f option really works now.

Besides replay, the -f option also works for latency and map.
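
A minimal sketch of the kind of ownership check that -f overrides; this
mirrors the message perf prints, not the exact implementation, and the
function name is hypothetical:

  #include <stdio.h>
  #include <sys/stat.h>
  #include <unistd.h>

  static int check_data_ownership(const char *path, int force)
  {
          struct stat st;

          if (stat(path, &st) < 0)
                  return -1;
          /* refuse foreign files unless -f (force) was given */
          if (st.st_uid != geteuid() && geteuid() != 0 && !force) {
                  fprintf(stderr, "File %s not owned by current user or root "
                          "(use -f to override)\n", path);
                  return -1;
          }
          return 0;
  }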

Signed-off-by: Yunlong Song <yunlong.song@huawei.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/r/1427809596-29559-9-git-send-email-yunlong.song@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-04-08 09:07:26 -03:00
Yunlong Song
939cda521a perf sched replay: Fix the EMFILE error caused by the limitation of the maximum open files
The soft maximum number of open files for a calling process is 1024,
which is defined as INR_OPEN_CUR in include/uapi/linux/fs.h, and the
hard maximum number of open files for a calling process is 4096, which
is defined as INR_OPEN_MAX in include/uapi/linux/fs.h.

Both INR_OPEN_CUR and INR_OPEN_MAX are used to limit the value of
RLIMIT_NOFILE in include/asm-generic/resource.h.

And it is the soft maximum number that finally limits how many files a
process is allowed to open.

That is to say, a process can use at most 1024 file descriptors for its
opened files, or an EMFILE error will happen.

This error can be fixed by increasing the soft maximum number, under
the constraint that it cannot exceed the hard maximum number;
alternatively, both the soft and hard maximum numbers can be increased
simultaneously, given enough privilege.

For perf sched replay, sys_perf_event_open is used to create a file
descriptor for each of the tasks in order to handle their perf event
information.

That is to say, each task needs a unique file descriptor. On x86_64,
there may be more than 1024 or even 4096 tasks corresponding to the
records in perf.data, in which case not enough file descriptors are
available.

As a result, an EMFILE error happens and stops the replay process. To
solve this problem, we adaptively increase the soft and hard maximum
numbers of open files, guarded by a '-f' option.
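
A sketch of that adaptive bump; the growth policy below is an
assumption (the real code reacts to the EMFILE returned by
sys_perf_event_open):

  #include <sys/resource.h>

  /* called when an open fails with EMFILE; 0 means the limit was raised */
  static int bump_nofile_limit(void)
  {
          struct rlimit limit;

          if (getrlimit(RLIMIT_NOFILE, &limit) < 0)
                  return -1;

          if (limit.rlim_cur < limit.rlim_max) {
                  /* unprivileged: raise the soft limit up to the hard limit */
                  limit.rlim_cur = limit.rlim_max;
          } else {
                  /* needs privilege: raise soft and hard limits together */
                  limit.rlim_cur *= 4;
                  limit.rlim_max *= 4;
          }
          return setrlimit(RLIMIT_NOFILE, &limit);
  }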

Example:

Test environment: x86_64 with 160 cores

 $ cat /proc/sys/kernel/pid_max
 163840
 $ cat /proc/sys/fs/file-max
 6815744
 $ ulimit -Sn
 1024
 $ ulimit -Hn
 4096

Before this patch:

 $ perf sched replay
 ...
 task   1549 (             :163132:    163132), nr_events: 1
 task   1550 (             :163540:    163540), nr_events: 1
 task   1551 (           <unknown>:         0), nr_events: 10
 Error: sys_perf_event_open() syscall returned with -1 (Too many open
 files)

After this patch:

 $ perf sched replay
 ...
 task   1549 (             :163132:    163132), nr_events: 1
 task   1550 (             :163540:    163540), nr_events: 1
 task   1551 (           <unknown>:         0), nr_events: 10
 Error: sys_perf_event_open() syscall returned with -1 (Too many open
 files)
 Have a try with -f option

 $ perf sched replay -f
 ...
 task   1549 (             :163132:    163132), nr_events: 1
 task   1550 (             :163540:    163540), nr_events: 1
 task   1551 (           <unknown>:         0), nr_events: 10
 ------------------------------------------------------------
 #1  : 54.401, ravg: 54.40, cpu: 3285.21 / 3285.21
 #2  : 199.548, ravg: 68.92, cpu: 4999.65 / 3456.66
 #3  : 170.483, ravg: 79.07, cpu: 1349.94 / 3245.99
 #4  : 192.034, ravg: 90.37, cpu: 1322.88 / 3053.67
 #5  : 182.929, ravg: 99.62, cpu: 1406.51 / 2888.96
 #6  : 152.974, ravg: 104.96, cpu: 1167.54 / 2716.82
 #7  : 155.579, ravg: 110.02, cpu: 2992.53 / 2744.39
 #8  : 130.557, ravg: 112.08, cpu: 1126.43 / 2582.59
 #9  : 138.520, ravg: 114.72, cpu: 1253.22 / 2449.65
 #10 : 134.328, ravg: 116.68, cpu: 1587.95 / 2363.48

Signed-off-by: Yunlong Song <yunlong.song@huawei.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/r/1427809596-29559-8-git-send-email-yunlong.song@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-04-08 09:07:26 -03:00
Yunlong Song
1aff59be53 perf sched replay: Handle the dead halt of sem_wait when create_tasks() fails for any task
In wait_for_tasks() there is a sem_wait for each task, e.g.
sem_wait(&task->work_done_sem).

The sem_wait can continue only when work_done_sem is greater than 0;
otherwise it blocks.

For perf sched replay, one task may sem_post the work_done_sem of
another task, which causes the work_done_sem of that task to be
processed in a reasonable sequence, e.g. sem_post, sem_wait, sem_wait,
sem_post...

This sequence simulates the sched process of the running tasks at the
time when perf sched record runs.

As a result, all the tasks are required and their threads must be
successfully created.

If any one of the tasks (task A) fails to create its thread, then
another task (task B), whose work_done_sem needs a sem_post from the
failed task A, will likely block itself in sem_wait.

This is a dead halt, since task B's thread_func cannot continue at all.

To solve this problem, perf sched replay should exit once any task fails
to create its thread.
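
A sketch of bailing out on the first failed thread, so no waiter is
left blocked on a work_done_sem that will never be posted (simplified;
names mirror the description above):

  #include <pthread.h>
  #include <stdio.h>
  #include <stdlib.h>

  static void create_tasks(pthread_t *threads, void *(*fn)(void *),
                           void **args, unsigned long nr_tasks)
  {
          unsigned long i;

          for (i = 0; i < nr_tasks; i++) {
                  int err = pthread_create(&threads[i], NULL, fn, args[i]);

                  if (err) {
                          /* no thread means no sem_post: exit, don't halt */
                          fprintf(stderr, "pthread_create failed: %d\n", err);
                          exit(EXIT_FAILURE);
                  }
          }
  }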

Example:

Test environment: x86_64 with 160 cores

Before this patch:

 $ perf sched replay
 ...
 Error: sys_perf_event_open() syscall returned with -1 (Too many open
 files)
 ------------------------------------------------------------    <- dead halt

After this patch:

 $ perf sched replay
 ...
 task   1551 (           <unknown>:         0), nr_events: 10
 Error: sys_perf_event_open() syscall returned with -1 (Too many open
 files)
 $

As shown above, perf sched replay finishes the process after printing an
error message and does not block itself.

Signed-off-by: Yunlong Song <yunlong.song@huawei.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/r/1427809596-29559-7-git-send-email-yunlong.song@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-04-08 09:07:25 -03:00
Yunlong Song
08097abc11 perf sched replay: Fix the segmentation fault problem caused by pr_err in threads
The pr_err in self_open_counters() prints its error message to stderr.
Unlike stdout, stderr uses a memory buffer on the stack of each calling
process.

This pr_err runs inside thread_func, the start routine of the threads
created in create_tasks(), which concurrently creates sched->nr_tasks
such threads.

If the error happens and pr_err prints the error message in each of
these threads, the stack space of the perf process (8192 kbytes by
default) quickly runs out and a segmentation fault follows.

To solve this problem, the pr_err for self_open_counters() should be
moved from the newly created threads to the main thread of the perf
process. Then pr_err runs in a stable situation, without the strange
segmentation fault.
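
A sketch of that split, assuming each worker merely records its failure
and the single main thread reports it after the workers finish (the
field and function names here are hypothetical):

  #include <pthread.h>
  #include <stdio.h>

  struct task_desc {
          int open_err;   /* set by the worker, read by the main thread */
  };

  static void *thread_func(void *arg)
  {
          struct task_desc *task = arg;
          int fd = -1;    /* stands in for a failed sys_perf_event_open() */

          if (fd < 0)
                  task->open_err = fd;    /* record only; no pr_err here */
          return NULL;
  }

  /* runs in the single main thread, after joining the workers */
  static void report_open_errors(struct task_desc *tasks, unsigned long nr)
  {
          unsigned long i;

          for (i = 0; i < nr; i++)
                  if (tasks[i].open_err)
                          fprintf(stderr,
                                  "sys_perf_event_open() failed for task %lu\n", i);
  }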

Example:

Test environment: x86_64 with 160 cores

Before this patch:

 $ perf sched replay
 ...
 task   1549 (             :163132:    163132), nr_events: 1
 task   1550 (             :163540:    163540), nr_events: 1
 task   1551 (           <unknown>:         0), nr_events: 10
 Segmentation fault

After this patch:

 $ perf sched replay
 ...
 task   1549 (             :163132:    163132), nr_events: 1
 task   1550 (             :163540:    163540), nr_events: 1
 task   1551 (           <unknown>:         0), nr_events: 10
 ...

As shown above, the result continues without any segmentation fault.

Signed-off-by: Yunlong Song <yunlong.song@huawei.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/r/1427809596-29559-6-git-send-email-yunlong.song@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-04-08 09:07:24 -03:00
Yunlong Song
3a423a5c36 perf sched replay: Realloc the memory of pid_to_task stepwise to adapt to the different pid_max configurations
Although the memory of pid_to_task can be allocated via calloc
according to the value of /proc/sys/kernel/pid_max, that cannot handle
the case where pid_max is changed after 'perf sched record' has created
its perf.data.

If the pid_max in effect when 'perf sched replay' runs is smaller than
the pid_max that was in effect when 'perf sched record' ran, an
assertion failure results.

To solve this problem, we realloc the memory of pid_to_task stepwise
whenever the pid passed to register_pid() is larger than the current
pid_max.
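
A sketch of the stepwise growth; the doubling step is an assumption,
the point being that register_pid() grows the table instead of
asserting:

  #include <assert.h>
  #include <stdlib.h>
  #include <string.h>

  struct task_desc;

  static struct task_desc **pid_to_task;
  static unsigned long pid_max;

  static void grow_pid_to_task(unsigned long pid)
  {
          unsigned long new_max = pid_max ? pid_max : 1;

          if (pid < pid_max)
                  return;

          while (new_max <= pid)
                  new_max *= 2;           /* step until the new pid fits */

          pid_to_task = realloc(pid_to_task, new_max * sizeof(*pid_to_task));
          assert(pid_to_task);
          /* zero only the newly added slots */
          memset(pid_to_task + pid_max, 0,
                 (new_max - pid_max) * sizeof(*pid_to_task));
          pid_max = new_max;
  }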

Example:

Test environment: x86_64 with 160 cores

 $ cat /proc/sys/kernel/pid_max
 163840
 $ perf sched record ls
 $ echo 5000 > /proc/sys/kernel/pid_max
 $ cat /proc/sys/kernel/pid_max
 5000

Before this patch:

 $ perf sched replay
 run measurement overhead: 221 nsecs
 sleep measurement overhead: 55356 nsecs
 the run test took 1000011 nsecs
 the sleep test took 1060940 nsecs
 perf: builtin-sched.c:337: register_pid: Assertion `!(pid >= (unsigned
 long)pid_max)' failed.
 Aborted

After this patch:

 $ perf sched replay
 run measurement overhead: 221 nsecs
 sleep measurement overhead: 55611 nsecs
 the run test took 1000026 nsecs
 the sleep test took 1060486 nsecs
 nr_run_events:        10
 nr_sleep_events:      1562
 nr_wakeup_events:     5
 task      0 (                  :1:         1), nr_events: 1
 task      1 (                  :2:         2), nr_events: 1
 task      2 (                  :3:         3), nr_events: 1
 task      3 (                  :5:         5), nr_events: 1
 ...

Signed-off-by: Yunlong Song <yunlong.song@huawei.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/r/1427809596-29559-5-git-send-email-yunlong.song@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-04-08 09:07:23 -03:00
Yunlong Song
cb06ac256a perf sched replay: Alloc the memory of pid_to_task dynamically to adapt to the unexpected change of pid_max
The current memory allocation of struct task_desc *pid_to_task[MAX_PID]
is fixed at compile time, which has two problems:

Problem 1: If pid_max, the maximum number of pids in the system, is
much smaller than MAX_PID (1024*1000), stack memory is wasted. This may
happen when the number of cpu cores is much smaller than 1000.

Problem 2: If pid_max is changed from the default value to a value
larger than MAX_PID, an assertion failure results. The maximum value of
pid_max is pid_max_max (see pidmap_init in kernel/pid.c), which equals
PID_MAX_LIMIT. On x86_64, PID_MAX_LIMIT is 4*1024*1024 (defined in
include/linux/threads.h). This value is much larger than MAX_PID and
would require 32768 Kbytes (4*1024*1024 pointers * 8 bytes / 1024) for
pid_to_task, far more than the default 8192 Kbytes of stack available
to the calling process.

Due to these two problems, we use calloc to allocate the memory of
pid_to_task dynamically.
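
A sketch of sizing the table from the running system; the fallback
value is an assumption:

  #include <stdio.h>
  #include <stdlib.h>

  struct task_desc;

  static struct task_desc **alloc_pid_to_task(unsigned long *pid_max)
  {
          unsigned long max = 0;
          FILE *f = fopen("/proc/sys/kernel/pid_max", "r");

          if (f) {
                  if (fscanf(f, "%lu", &max) != 1)
                          max = 0;
                  fclose(f);
          }
          if (!max)
                  max = 32768;    /* assumed fallback: kernel default pid_max */

          *pid_max = max;
          return calloc(max, sizeof(struct task_desc *));
  }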

Example:

Test environment: x86_64 with 160 cores

 $ cat /proc/sys/kernel/pid_max
 163840
 $ echo 1025000 > /proc/sys/kernel/pid_max
 $ cat /proc/sys/kernel/pid_max
 1025000

Run some applications until the pid of some process is greater than
the value of MAX_PID (1024*1000).

Before this patch:

 $ perf sched replay
 run measurement overhead: 221 nsecs
 sleep measurement overhead: 55480 nsecs
 the run test took 1000008 nsecs
 the sleep test took 1063151 nsecs
 perf: builtin-sched.c:330: register_pid: Assertion `!(pid >= 1024000)'
 failed.
 Aborted

After this patch:

 $ perf sched replay
 run measurement overhead: 221 nsecs
 sleep measurement overhead: 55435 nsecs
 the run test took 1000004 nsecs
 the sleep test took 1059312 nsecs
 nr_run_events:        10
 nr_sleep_events:      1562
 nr_wakeup_events:     5
 task      0 (                  :1:         1), nr_events: 1
 task      1 (                  :2:         2), nr_events: 1
 task      2 (                  :3:         3), nr_events: 1
 task      3 (                  :5:         5), nr_events: 1
 ...

Signed-off-by: Yunlong Song <yunlong.song@huawei.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/r/1427809596-29559-4-git-send-email-yunlong.song@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-04-08 09:07:22 -03:00
Yunlong Song
a35e27d0e5 perf sched replay: Increase the MAX_PID value to fix assertion failure problem
The current MAX_PID is only 65536, which causes an assertion failure
when there are more than 64 CPU cores on x86_64.

This is because the pid_max value in x86_64 is at least
PIDS_PER_CPU_DEFAULT * num_possible_cpus() (see function pidmap_init
defined in kernel/pid.c), where PIDS_PER_CPU_DEFAULT is 1024 (defined in
include/linux/threads.h).

Thus MAX_PID = 65536 only covers 65536/1024 = 64 CPU cores. This is
obviously not enough for x86_64 and triggers the BUG_ON(pid >= MAX_PID)
assertion in the code.

We increase the MAX_PID value from 65536 to 1024*1000, which covers
x86_64 machines with up to 1000 cores.

This number is ultimately decided by the stack size limit of the
calling process.

Running 'ulimit -a' shows that the stack size of any process is 8192
Kbytes, as defined in include/uapi/linux/resource.h (#define _STK_LIM
(8*1024*1024)).

Thus we choose a value for MAX_PID that is large enough yet still
satisfies the stack size limit, i.e., one that keeps the perf process
just under 8192 Kbytes.

We have calculated and tested that 1024*1000 is OK for MAX_PID.
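
The sizing arithmetic behind that choice, spelled out as a comment on
an illustrative declaration (x86_64, 8-byte pointers):

  /*
   * 1024 * 1000 slots * 8 bytes/pointer = 8,192,000 bytes = 8000 Kbytes,
   * just under the 8192 Kbytes stack limit, while the old 65536 slots
   * covered only 65536 / 1024 = 64 cores.
   */
  struct task_desc;
  static struct task_desc *pid_to_task[1024 * 1000];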

This means perf sched replay can now be used with up to 1000 cores on
x86_64 without any assertion failure.

Example:

Test environment: x86_64 with 160 cores

 $ cat /proc/sys/kernel/pid_max
 163840

Before this patch:

 $ perf sched replay
 run measurement overhead: 240 nsecs
 sleep measurement overhead: 55379 nsecs
 the run test took 1000004 nsecs
 the sleep test took 1059424 nsecs
 perf: builtin-sched.c:330: register_pid: Assertion `!(pid >= 65536)'
 failed.
 Aborted

After this patch:

 $ perf sched replay
 run measurement overhead: 221 nsecs
 sleep measurement overhead: 55397 nsecs
 the run test took 999920 nsecs
 the sleep test took 1053313 nsecs
 nr_run_events:        10
 nr_sleep_events:      1562
 nr_wakeup_events:     5
 task      0 (                  :1:         1), nr_events: 1
 task      1 (                  :2:         2), nr_events: 1
 task      2 (                  :3:         3), nr_events: 1
 task      3 (                  :5:         5), nr_events: 1
 ...

Signed-off-by: Yunlong Song <yunlong.song@huawei.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/r/1427809596-29559-3-git-send-email-yunlong.song@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-04-08 09:07:21 -03:00