#!/bin/bash
License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default, all files without license information fall under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner, Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to licenses
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier to apply to
a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files created by Philippe Ombredanne. Philippe prepared the
base worksheet and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
Criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source.
- The file already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, the file was
considered to have no license information in it, and the top-level
COPYING file license was applied.
For non */uapi/* files that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was "GPL-2.0". Results of that were:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL-family license was found in the file, or if it had no licensing
in it (per the prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review was done on the
spreadsheet by Kate, Philippe and Thomas to determine the SPDX license
identifiers to apply to the source files, with confirmation in some
cases by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based in part on an older version of FOSSology, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and have been fixed to reflect the
correct identifier.
Additionally, Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 patched files from the initial patch
version early this week, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the .csv files and add the proper SPDX tag to each file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally Greg ran the script using the .csv files to
generate the patches.
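Purely as an illustration, and not the actual script Thomas wrote and Greg refined, such a tagger could be sketched roughly as below; the .csv layout, the file name and the extension-to-comment-style mapping are assumptions:
#!/bin/bash
# Hypothetical sketch: read "path,license" pairs exported from the worksheet
# and prepend an SPDX tag in the comment style the file type expects
# ('//' for .c sources, '/* */' for headers and assembly, '#' for Makefiles).
while IFS=, read -r path spdx; do
	case "$path" in
	*.c) tag="// SPDX-License-Identifier: $spdx" ;;
	*.h|*.S) tag="/* SPDX-License-Identifier: $spdx */" ;;
	*Makefile*|*Kconfig*) tag="# SPDX-License-Identifier: $spdx" ;;
	*) echo "skipping $path" >&2; continue ;;
	esac
	sed -i "1i $tag" "$path"  # GNU sed: insert the tag before the first line
done < spdx-files.csv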
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-01 14:07:57 +00:00
# SPDX-License-Identifier: GPL-2.0
# perf archive
# Arnaldo Carvalho de Melo <acme@redhat.com>
PERF_DATA=perf.data
perf archive: Add new option '--all' to pack perf.data with DSOs
'perf archive' has limited functionality, and people from Red Hat Global
Support Services requested a new feature that would pack the
perf.data file together with the archive of debug symbols created by
the 'perf archive' command, as customers were often confused and
would forget to send the perf.data file along with the debug symbols.
With this patch 'perf archive' now accepts an option '--all' that
generates the archive 'perf.all-hostname-date-time.tar.bz2', which holds the
file 'perf.data' and a sub-tar 'perf.symbols.tar.bz2' with the debug symbols.
The existing functionality of the 'perf archive' command is unchanged.
Committer testing:
Run 'perf record' on an Intel 14900K machine, hybrid:
root@number:~# perf record -a sleep 5s
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 4.006 MB perf.data (15427 samples) ]
root@number:~# perf archive --all
Now please run:
$ tar xvf perf.all-number-20231219-104854.tar.bz2 && tar xvf perf.symbols.tar.bz2 -C ~/.debug
wherever you need to run 'perf report' on.
root@number:~#
root@number:~# perf report --header-only
# ========
# captured on : Tue Dec 19 10:48:48 2023
# header version : 1
# data offset : 1008
# data size : 4199936
# feat offset : 4200944
# hostname : number
# os release : 6.6.4-200.fc39.x86_64
# perf version : 6.7.rc6.gca90f8e17b84
# arch : x86_64
# nrcpus online : 28
# nrcpus avail : 28
# cpudesc : Intel(R) Core(TM) i7-14700K
# cpuid : GenuineIntel,6,183,1
# total memory : 32610508 kB
# cmdline : /home/acme/bin/perf (deleted) record -a sleep 5s
# event : name = cpu_atom/cycles/P, , id = { 5088024, 5088025, 5088026, 5088027, 5088028, 5088029, 5088030, 5088031, 5088032, 5088033, 5088034, 5088035 }, type = 0 (PERF_TYPE_HARDWARE), size>
# event : name = cpu_core/cycles/P, , id = { 5088036, 5088037, 5088038, 5088039, 5088040, 5088041, 5088042, 5088043, 5088044, 5088045, 5088046, 5088047, 5088048, 5088049, 5088050, 5088051 },>
# event : name = dummy:u, , id = { 5088052, 5088053, 5088054, 5088055, 5088056, 5088057, 5088058, 5088059, 5088060, 5088061, 5088062, 5088063, 5088064, 5088065, 5088066, 5088067, 5088068, 50>
# CPU_TOPOLOGY info available, use -I to display
# NUMA_TOPOLOGY info available, use -I to display
# pmu mappings: cpu_atom = 10, cpu_core = 4, breakpoint = 5, cstate_core = 34, cstate_pkg = 35, i915 = 14, intel_bts = 11, intel_pt = 12, kprobe = 8, msr = 13, power = 36, software = 1, trac>
# CACHE info available, use -I to display
# time of first sample : 124739.850375
# time of last sample : 124744.855181
# sample duration : 5004.806 ms
# sample duration : 5004.806 ms
# MEM_TOPOLOGY info available, use -I to display
# bpf_prog_info 2: bpf_prog_7cc47bbf07148bfe_hid_tail_call addr 0xffffffffc0000978 size 113
# bpf_prog_info 47: bpf_prog_713a545fe0530ce7_restrict_filesystems addr 0xffffffffc0000748 size 305
# bpf_prog_info 163: bpf_prog_bd834b0730296056 addr 0xffffffffc000df14 size 331
# bpf_prog_info 258: bpf_prog_ee0e253c78993a24_sd_devices addr 0xffffffffc001fc08 size 264
# bpf_prog_info 259: bpf_prog_40ddf486530245f5_sd_devices addr 0xffffffffc00204bc size 318
# bpf_prog_info 260: bpf_prog_6deef7357e7b4530_sd_fw_egress addr 0xffffffffc0020630 size 63
# bpf_prog_info 261: bpf_prog_6deef7357e7b4530_sd_fw_ingress addr 0xffffffffc0020688 size 63
# bpf_prog_info 262: bpf_prog_b37200ab714f0e17_sd_devices addr 0xffffffffc002072c size 110
# bpf_prog_info 263: bpf_prog_b90a282ee45cfed9_sd_devices addr 0xffffffffc00207d8 size 393
# bpf_prog_info 264: bpf_prog_ee0e253c78993a24_sd_devices addr 0xffffffffc002099c size 264
# bpf_prog_info 265: bpf_prog_6deef7357e7b4530_sd_fw_egress addr 0xffffffffc0020ad4 size 63
# bpf_prog_info 266: bpf_prog_6deef7357e7b4530_sd_fw_ingress addr 0xffffffffc0020b50 size 63
# bpf_prog_info 267: bpf_prog_ee0e253c78993a24_sd_devices addr 0xffffffffc002d98c size 264
# bpf_prog_info 268: bpf_prog_be31ae23198a0378_sd_devices addr 0xffffffffc002dac8 size 297
# bpf_prog_info 269: bpf_prog_ccbbf91f3c6979c7_sd_devices addr 0xffffffffc002dc54 size 360
# bpf_prog_info 270: bpf_prog_3a0ef5414c2f6fca_sd_devices addr 0xffffffffc002dde8 size 456
# bpf_prog_info 271: bpf_prog_6deef7357e7b4530_sd_fw_egress addr 0xffffffffc0020bd4 size 63
# bpf_prog_info 272: bpf_prog_6deef7357e7b4530_sd_fw_ingress addr 0xffffffffc00299b4 size 63
# bpf_prog_info 273: bpf_prog_ee0e253c78993a24_sd_devices addr 0xffffffffc002dfd0 size 264
# bpf_prog_info 274: bpf_prog_6deef7357e7b4530_sd_fw_egress addr 0xffffffffc0029a3c size 63
# bpf_prog_info 275: bpf_prog_6deef7357e7b4530_sd_fw_ingress addr 0xffffffffc002d71c size 63
# bpf_prog_info 276: bpf_prog_6deef7357e7b4530_sd_fw_egress addr 0xffffffffc002d7a8 size 63
# bpf_prog_info 277: bpf_prog_6deef7357e7b4530_sd_fw_ingress addr 0xffffffffc002e13c size 63
# bpf_prog_info 278: bpf_prog_6deef7357e7b4530_sd_fw_egress addr 0xffffffffc002e1a8 size 63
# bpf_prog_info 279: bpf_prog_6deef7357e7b4530_sd_fw_ingress addr 0xffffffffc002e234 size 63
# bpf_prog_info 280: bpf_prog_be31ae23198a0378_sd_devices addr 0xffffffffc002e2ac size 297
# bpf_prog_info 281: bpf_prog_6deef7357e7b4530_sd_fw_egress addr 0xffffffffc002e42c size 63
# bpf_prog_info 282: bpf_prog_6deef7357e7b4530_sd_fw_ingress addr 0xffffffffc002e49c size 63
# bpf_prog_info 290: bpf_prog_ee0e253c78993a24_sd_devices addr 0xffffffffc0004b18 size 264
# bpf_prog_info 294: bpf_prog_0b1566e4b83190c5_sd_devices addr 0xffffffffc0004c50 size 360
# bpf_prog_info 295: bpf_prog_ee0e253c78993a24_sd_devices addr 0xffffffffc001cfc8 size 264
# bpf_prog_info 296: bpf_prog_6deef7357e7b4530_sd_fw_egress addr 0xffffffffc0013abc size 63
# bpf_prog_info 297: bpf_prog_6deef7357e7b4530_sd_fw_ingress addr 0xffffffffc0013b24 size 63
# btf info of id 2
# btf info of id 52
# HYBRID_TOPOLOGY info available, use -I to display
# cpu_atom pmu capabilities: branches=32, max_precise=3, pmu_name=alderlake_hybrid
# cpu_core pmu capabilities: branches=32, max_precise=3, pmu_name=alderlake_hybrid
# intel_pt pmu capabilities: topa_multiple_entries=1, psb_cyc=1, single_range_output=1, mtc_periods=249, ip_filtering=1, output_subsys=0, cr3_filtering=1, psb_periods=3f, event_trace=0, cycl>
# missing features: TRACING_DATA BRANCH_STACK GROUP_DESC AUXTRACE STAT CLOCKID DIR_FORMAT COMPRESSED CPU_PMU_CAPS CLOCK_DATA
# ========
#
root@number:~#
And then transferring it to an ARM64 machine, a Libre Computer RK3399-PC:
root@number:~# scp perf.all-number-20231219-104854.tar.bz2 acme@192.168.86.114:.
acme@192.168.86.114's password:
perf.all-number-20231219-104854.tar.bz2 100% 145MB 85.4MB/s 00:01
root@number:~#
root@number:~# ssh acme@192.168.86.114
acme@192.168.86.114's password:
Welcome to Ubuntu 23.04 (GNU/Linux 6.1.68-12200-g1c40dda3081e aarch64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
Last login: Tue Dec 19 14:53:18 2023 from 192.168.86.42
acme@roc-rk3399-pc:~$ tar xvf perf.all-number-20231219-104854.tar.bz2 && tar xvf perf.symbols.tar.bz2 -C ~/.debug
perf.data
perf.symbols.tar.bz2
.build-id/ad/acc227f470409213308050b71f664322e2956c
[kernel.kallsyms]/adacc227f470409213308050b71f664322e2956c/
[kernel.kallsyms]/adacc227f470409213308050b71f664322e2956c/kallsyms
[kernel.kallsyms]/adacc227f470409213308050b71f664322e2956c/probes
.build-id/76/c91f4d62baa06bb52e07e20aba36d21a8f9797
usr/lib64/libz.so.1.2.13/76c91f4d62baa06bb52e07e20aba36d21a8f9797/
<SNIP>
.build-id/09/d7e96bc1e3f599d15ca28b36959124b2d74410
usr/lib64/librpm_sequoia.so.1/09d7e96bc1e3f599d15ca28b36959124b2d74410/
usr/lib64/librpm_sequoia.so.1/09d7e96bc1e3f599d15ca28b36959124b2d74410/elf
usr/lib64/librpm_sequoia.so.1/09d7e96bc1e3f599d15ca28b36959124b2d74410/probes
acme@roc-rk3399-pc:~$
acme@roc-rk3399-pc:~$ perf report --stdio | head -40
# To display the perf.data header info, please use --header/--header-only options.
#
# Total Lost Samples: 0
#
# Samples: 6K of event 'cpu_atom/cycles/P'
# Event count (approx.): 4519946621
#
# Overhead Command Shared Object Symbol
# ........ ............... .............................................. .........................................................................................................................................................
#
1.73% swapper [kernel.kallsyms] [k] intel_idle
1.43% sh [kernel.kallsyms] [k] next_uptodate_folio
0.94% make ld-linux-x86-64.so.2 [.] do_lookup_x
0.90% sh ld-linux-x86-64.so.2 [.] do_lookup_x
0.82% sh [kernel.kallsyms] [k] perf_event_mmap_output
0.74% sh [kernel.kallsyms] [k] filemap_map_pages
0.72% sh ld-linux-x86-64.so.2 [.] _dl_relocate_object
0.69% cc1 [kernel.kallsyms] [k] clear_page_erms
0.61% sh [kernel.kallsyms] [k] unmap_page_range
0.56% swapper [kernel.kallsyms] [k] poll_idle
0.52% cc1 ld-linux-x86-64.so.2 [.] do_lookup_x
0.47% make ld-linux-x86-64.so.2 [.] _dl_relocate_object
0.44% cc1 cc1 [.] make_node(tree_code)
0.43% sh [kernel.kallsyms] [k] native_irq_return_iret
0.38% sh libc.so.6 [.] _int_malloc
0.38% cc1 cc1 [.] decl_attributes(tree_node**, tree_node*, int, tree_node*)
0.38% sh [kernel.kallsyms] [k] clear_page_erms
0.37% cc1 cc1 [.] ht_lookup_with_hash(ht*, unsigned char const*, unsigned long, unsigned int, ht_lookup_option)
0.37% make [kernel.kallsyms] [k] perf_event_mmap_output
0.37% make ld-linux-x86-64.so.2 [.] _dl_lookup_symbol_x
0.35% sh [kernel.kallsyms] [k] _compound_head
0.35% make make [.] hash_find_slot
0.33% sh libc.so.6 [.] __strlen_avx2
0.33% cc1 cc1 [.] ggc_internal_alloc(unsigned long, void (*)(void*), unsigned long, unsigned long)
0.33% sh [kernel.kallsyms] [k] perf_iterate_ctx
0.31% make make [.] jhash_string
0.31% sh [kernel.kallsyms] [k] page_remove_rmap
0.30% cc1 libc.so.6 [.] _int_malloc
0.30% make libc.so.6 [.] _int_malloc
acme@roc-rk3399-pc:~$
Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Michael Petlan <mpetlan@redhat.com>
Link: https://lore.kernel.org/r/20231212165909.14459-1-vmolnaro@redhat.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2023-12-12 16:59:08 +00:00
PERF_SYMBOLS=perf.symbols
PERF_ALL=perf.all
ALL=0
UNPACK=0
while [ $# -gt 0 ] ; do
if [ $1 == "--all" ]; then
ALL=1
shift
elif [ $1 == "--unpack" ]; then
UNPACK=1
shift
else
PERF_DATA=$1
UNPACK_TAR=$1
shift
fi
done
if [ $UNPACK -eq 1 ]; then
if [ ! -z "$UNPACK_TAR" ]; then # tar given as an argument
if [ ! -e "$UNPACK_TAR" ]; then
echo "Provided file $UNPACK_TAR does not exist"
exit 1
fi
TARGET="$UNPACK_TAR"
else # search for perf tar in the current directory
TARGET=`find . -regex "\./perf.*\.tar\.bz2"`
TARGET_NUM=`echo -n "$TARGET" | grep -c '^'`

if [ -z "$TARGET" -o $TARGET_NUM -gt 1 ]; then
echo -e "Error: $TARGET_NUM files found for unpacking:\n$TARGET"
echo "Provide the requested file as an argument"
exit 1
else
echo "Found target file for unpacking: $TARGET"
fi
fi

if [[ "$TARGET" =~ (\./)?$PERF_ALL.*.tar.bz2 ]]; then # perf tar generated by --all option
TAR_CONTENTS=`tar tvf "$TARGET" | tr -s " " | cut -d " " -f 6`
VALID_TAR=`echo "$TAR_CONTENTS" | grep "$PERF_SYMBOLS.tar.bz2" | wc -l` # check if it contains a sub-tar perf.symbols
if [ $VALID_TAR -ne 1 ]; then
echo "Error: $TARGET file is not valid (contains zero or multiple sub-tar files with debug symbols)"
exit 1
fi

INTERSECT=`comm -12 <(ls) <(echo "$TAR_CONTENTS") | tr "\n" " "` # check for overwriting
if [ ! -z "$INTERSECT" ]; then # prompt if file(s) already exist in the current directory
echo "File(s) ${INTERSECT::-1} already exist in the current directory."
while true; do
read -p 'Do you wish to overwrite them? ' yn
case $yn in
[Yy]* ) break;;
[Nn]* ) exit 1;;
* ) echo "Please answer yes or no.";;
esac
done
fi

# unzip the perf.data file in the current working directory and debug symbols in ~/.debug directory
tar xvf $TARGET && tar xvf $PERF_SYMBOLS.tar.bz2 -C ~/.debug

else # perf tar generated by perf archive (contains only debug symbols)
tar xvf $TARGET -C ~/.debug
fi
exit 0
fi
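Based on the option handling above, the unpack path can be exercised on the analysis machine as in this sketch; the archive name is simply the example one from the committer testing and is only illustrative:
# Extracts perf.data into the current directory and the debug symbols into
# ~/.debug, after the validity and overwrite checks above.
perf archive --unpack perf.all-number-20231219-104854.tar.bz2
# With no file argument, --unpack looks for a single perf*.tar.bz2 in the
# current directory and errors out if it finds none or more than one.
perf archive --unpack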
perf buildid: add perfconfig option to specify buildid cache dir
This patch adds the ability to specify an alternate directory to store the
buildid cache (buildids, copy of binaries). By default, it is hardcoded to
$HOME/.debug. This directory contains immutable data. The layout of the
directory is such that no conflicts in filenames are possible. A modification
in a file yields a different buildid and thus a different location in the
subdir hierarchy.
You may want to put the buildid cache elsewhere because of disk space
limitations or simply to share the cache between users. It is also useful for
remote collection vs. local analysis of profiles.
This patch adds a new config option to the perfconfig file. Under the tag
'buildid', there is a dir option. For instance, if you have:
$ cat /etc/perfconfig
[buildid]
dir = /var/cache/perf-buildid
All buildids and binaries are saved in the directory specified. The perf
record, buildid-list, buildid-cache, report, annotate, and archive commands
will use it to pull information out.
The option can be set in the system-wide perfconfig file or in the
$HOME/.perfconfig file.
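For instance, the per-user equivalent of the example above would be (illustrative sketch, reusing the same cache path):
$ cat ~/.perfconfig
[buildid]
dir = /var/cache/perf-buildid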
Cc: David S. Miller <davem@davemloft.net>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Tom Zanussi <tzanussi@gmail.com>
LKML-Reference: <4c055fb7.df0ce30a.5f0d.ffffae52@mx.google.com>
Signed-off-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2010-06-01 19:25:01 +00:00
#
# PERF_BUILDID_DIR environment variable set by perf
# path to buildid directory, default to $HOME/.debug
#
if [ -z $PERF_BUILDID_DIR ]; then
PERF_BUILDID_DIR=~/.debug/
else
# append / to make substitutions work
PERF_BUILDID_DIR=$PERF_BUILDID_DIR/
fi
BUILDIDS=$(mktemp /tmp/perf-archive-buildids.XXXXXX)

perf buildid-list -i $PERF_DATA --with-hits | grep -v "^ " > $BUILDIDS
if [ ! -s $BUILDIDS ] ; then
echo "perf archive: no build-ids found"
rm $BUILDIDS || true
exit 1
fi

MANIFEST=$(mktemp /tmp/perf-archive-manifest.XXXXXX)
PERF_BUILDID_LINKDIR=$(readlink -f $PERF_BUILDID_DIR)/

cut -d ' ' -f 1 $BUILDIDS | \
while read build_id ; do
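# For each build-id, write two entries to $MANIFEST: the .build-id/xx/yyy...
# symlink path relative to $PERF_BUILDID_DIR, and the cached file it resolves
# to, relative to $PERF_BUILDID_LINKDIR.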
linkname=$PERF_BUILDID_DIR.build-id/${build_id:0:2}/${build_id:2}
filename=$(readlink -f $linkname)
echo ${linkname#$PERF_BUILDID_DIR} >> $MANIFEST
echo ${filename#$PERF_BUILDID_LINKDIR} >> $MANIFEST
done
|
|
|
|
|
perf archive: Add new option '--all' to pack perf.data with DSOs
'perf archive' has limited functionality and people from Red Hat Global
Support Services sent a request for a new feature that would pack
perf.data file together with an archive with debug symbols created by
the command 'perf archive' as customers were being confused and often
would forget to send perf.data file with the debug symbols.
With this patch 'perf archive' now accepts an option '--all' that
generates archive 'perf.all-hostname-date-time.tar.bz2' that holds file
'perf.data' and a sub-tar 'perf.symbols.tar.bz2' with debug symbols. The
functionality of the command 'perf archive' was not changed.
Committer testing:
Run 'perf record' on a Intel 14900K machine, hybrid:
root@number:~# perf record -a sleep 5s
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 4.006 MB perf.data (15427 samples) ]
root@number:~# perf archive --all
Now please run:
$ tar xvf perf.all-number-20231219-104854.tar.bz2 && tar xvf perf.symbols.tar.bz2 -C ~/.debug
wherever you need to run 'perf report' on.
root@number:~#
root@number:~# perf report --header-only
# ========
# captured on : Tue Dec 19 10:48:48 2023
# header version : 1
# data offset : 1008
# data size : 4199936
# feat offset : 4200944
# hostname : number
# os release : 6.6.4-200.fc39.x86_64
# perf version : 6.7.rc6.gca90f8e17b84
# arch : x86_64
# nrcpus online : 28
# nrcpus avail : 28
# cpudesc : Intel(R) Core(TM) i7-14700K
# cpuid : GenuineIntel,6,183,1
# total memory : 32610508 kB
# cmdline : /home/acme/bin/perf (deleted) record -a sleep 5s
# event : name = cpu_atom/cycles/P, , id = { 5088024, 5088025, 5088026, 5088027, 5088028, 5088029, 5088030, 5088031, 5088032, 5088033, 5088034, 5088035 }, type = 0 (PERF_TYPE_HARDWARE), size>
# event : name = cpu_core/cycles/P, , id = { 5088036, 5088037, 5088038, 5088039, 5088040, 5088041, 5088042, 5088043, 5088044, 5088045, 5088046, 5088047, 5088048, 5088049, 5088050, 5088051 },>
# event : name = dummy:u, , id = { 5088052, 5088053, 5088054, 5088055, 5088056, 5088057, 5088058, 5088059, 5088060, 5088061, 5088062, 5088063, 5088064, 5088065, 5088066, 5088067, 5088068, 50>
# CPU_TOPOLOGY info available, use -I to display
# NUMA_TOPOLOGY info available, use -I to display
# pmu mappings: cpu_atom = 10, cpu_core = 4, breakpoint = 5, cstate_core = 34, cstate_pkg = 35, i915 = 14, intel_bts = 11, intel_pt = 12, kprobe = 8, msr = 13, power = 36, software = 1, trac>
# CACHE info available, use -I to display
# time of first sample : 124739.850375
# time of last sample : 124744.855181
# sample duration : 5004.806 ms
# MEM_TOPOLOGY info available, use -I to display
# bpf_prog_info 2: bpf_prog_7cc47bbf07148bfe_hid_tail_call addr 0xffffffffc0000978 size 113
# bpf_prog_info 47: bpf_prog_713a545fe0530ce7_restrict_filesystems addr 0xffffffffc0000748 size 305
# bpf_prog_info 163: bpf_prog_bd834b0730296056 addr 0xffffffffc000df14 size 331
# bpf_prog_info 258: bpf_prog_ee0e253c78993a24_sd_devices addr 0xffffffffc001fc08 size 264
# bpf_prog_info 259: bpf_prog_40ddf486530245f5_sd_devices addr 0xffffffffc00204bc size 318
# bpf_prog_info 260: bpf_prog_6deef7357e7b4530_sd_fw_egress addr 0xffffffffc0020630 size 63
# bpf_prog_info 261: bpf_prog_6deef7357e7b4530_sd_fw_ingress addr 0xffffffffc0020688 size 63
# bpf_prog_info 262: bpf_prog_b37200ab714f0e17_sd_devices addr 0xffffffffc002072c size 110
# bpf_prog_info 263: bpf_prog_b90a282ee45cfed9_sd_devices addr 0xffffffffc00207d8 size 393
# bpf_prog_info 264: bpf_prog_ee0e253c78993a24_sd_devices addr 0xffffffffc002099c size 264
# bpf_prog_info 265: bpf_prog_6deef7357e7b4530_sd_fw_egress addr 0xffffffffc0020ad4 size 63
# bpf_prog_info 266: bpf_prog_6deef7357e7b4530_sd_fw_ingress addr 0xffffffffc0020b50 size 63
# bpf_prog_info 267: bpf_prog_ee0e253c78993a24_sd_devices addr 0xffffffffc002d98c size 264
# bpf_prog_info 268: bpf_prog_be31ae23198a0378_sd_devices addr 0xffffffffc002dac8 size 297
# bpf_prog_info 269: bpf_prog_ccbbf91f3c6979c7_sd_devices addr 0xffffffffc002dc54 size 360
# bpf_prog_info 270: bpf_prog_3a0ef5414c2f6fca_sd_devices addr 0xffffffffc002dde8 size 456
# bpf_prog_info 271: bpf_prog_6deef7357e7b4530_sd_fw_egress addr 0xffffffffc0020bd4 size 63
# bpf_prog_info 272: bpf_prog_6deef7357e7b4530_sd_fw_ingress addr 0xffffffffc00299b4 size 63
# bpf_prog_info 273: bpf_prog_ee0e253c78993a24_sd_devices addr 0xffffffffc002dfd0 size 264
# bpf_prog_info 274: bpf_prog_6deef7357e7b4530_sd_fw_egress addr 0xffffffffc0029a3c size 63
# bpf_prog_info 275: bpf_prog_6deef7357e7b4530_sd_fw_ingress addr 0xffffffffc002d71c size 63
# bpf_prog_info 276: bpf_prog_6deef7357e7b4530_sd_fw_egress addr 0xffffffffc002d7a8 size 63
# bpf_prog_info 277: bpf_prog_6deef7357e7b4530_sd_fw_ingress addr 0xffffffffc002e13c size 63
# bpf_prog_info 278: bpf_prog_6deef7357e7b4530_sd_fw_egress addr 0xffffffffc002e1a8 size 63
# bpf_prog_info 279: bpf_prog_6deef7357e7b4530_sd_fw_ingress addr 0xffffffffc002e234 size 63
# bpf_prog_info 280: bpf_prog_be31ae23198a0378_sd_devices addr 0xffffffffc002e2ac size 297
# bpf_prog_info 281: bpf_prog_6deef7357e7b4530_sd_fw_egress addr 0xffffffffc002e42c size 63
# bpf_prog_info 282: bpf_prog_6deef7357e7b4530_sd_fw_ingress addr 0xffffffffc002e49c size 63
# bpf_prog_info 290: bpf_prog_ee0e253c78993a24_sd_devices addr 0xffffffffc0004b18 size 264
# bpf_prog_info 294: bpf_prog_0b1566e4b83190c5_sd_devices addr 0xffffffffc0004c50 size 360
# bpf_prog_info 295: bpf_prog_ee0e253c78993a24_sd_devices addr 0xffffffffc001cfc8 size 264
# bpf_prog_info 296: bpf_prog_6deef7357e7b4530_sd_fw_egress addr 0xffffffffc0013abc size 63
# bpf_prog_info 297: bpf_prog_6deef7357e7b4530_sd_fw_ingress addr 0xffffffffc0013b24 size 63
# btf info of id 2
# btf info of id 52
# HYBRID_TOPOLOGY info available, use -I to display
# cpu_atom pmu capabilities: branches=32, max_precise=3, pmu_name=alderlake_hybrid
# cpu_core pmu capabilities: branches=32, max_precise=3, pmu_name=alderlake_hybrid
# intel_pt pmu capabilities: topa_multiple_entries=1, psb_cyc=1, single_range_output=1, mtc_periods=249, ip_filtering=1, output_subsys=0, cr3_filtering=1, psb_periods=3f, event_trace=0, cycl>
# missing features: TRACING_DATA BRANCH_STACK GROUP_DESC AUXTRACE STAT CLOCKID DIR_FORMAT COMPRESSED CPU_PMU_CAPS CLOCK_DATA
# ========
#
root@number:~#
And then transferring it to an ARM64 machine, a Libre Computer RK3399-PC:
root@number:~# scp perf.all-number-20231219-104854.tar.bz2 acme@192.168.86.114:.
acme@192.168.86.114's password:
perf.all-number-20231219-104854.tar.bz2 100% 145MB 85.4MB/s 00:01
root@number:~#
root@number:~# ssh acme@192.168.86.114
acme@192.168.86.114's password:
Welcome to Ubuntu 23.04 (GNU/Linux 6.1.68-12200-g1c40dda3081e aarch64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
Last login: Tue Dec 19 14:53:18 2023 from 192.168.86.42
acme@roc-rk3399-pc:~$ tar xvf perf.all-number-20231219-104854.tar.bz2 && tar xvf perf.symbols.tar.bz2 -C ~/.debug
perf.data
perf.symbols.tar.bz2
.build-id/ad/acc227f470409213308050b71f664322e2956c
[kernel.kallsyms]/adacc227f470409213308050b71f664322e2956c/
[kernel.kallsyms]/adacc227f470409213308050b71f664322e2956c/kallsyms
[kernel.kallsyms]/adacc227f470409213308050b71f664322e2956c/probes
.build-id/76/c91f4d62baa06bb52e07e20aba36d21a8f9797
usr/lib64/libz.so.1.2.13/76c91f4d62baa06bb52e07e20aba36d21a8f9797/
<SNIP>
.build-id/09/d7e96bc1e3f599d15ca28b36959124b2d74410
usr/lib64/librpm_sequoia.so.1/09d7e96bc1e3f599d15ca28b36959124b2d74410/
usr/lib64/librpm_sequoia.so.1/09d7e96bc1e3f599d15ca28b36959124b2d74410/elf
usr/lib64/librpm_sequoia.so.1/09d7e96bc1e3f599d15ca28b36959124b2d74410/probes
acme@roc-rk3399-pc:~$
acme@roc-rk3399-pc:~$ perf report --stdio | head -40
# To display the perf.data header info, please use --header/--header-only options.
#
# Total Lost Samples: 0
#
# Samples: 6K of event 'cpu_atom/cycles/P'
# Event count (approx.): 4519946621
#
# Overhead Command Shared Object Symbol
# ........ ............... .............................................. .........................................................................................................................................................
#
1.73% swapper [kernel.kallsyms] [k] intel_idle
1.43% sh [kernel.kallsyms] [k] next_uptodate_folio
0.94% make ld-linux-x86-64.so.2 [.] do_lookup_x
0.90% sh ld-linux-x86-64.so.2 [.] do_lookup_x
0.82% sh [kernel.kallsyms] [k] perf_event_mmap_output
0.74% sh [kernel.kallsyms] [k] filemap_map_pages
0.72% sh ld-linux-x86-64.so.2 [.] _dl_relocate_object
0.69% cc1 [kernel.kallsyms] [k] clear_page_erms
0.61% sh [kernel.kallsyms] [k] unmap_page_range
0.56% swapper [kernel.kallsyms] [k] poll_idle
0.52% cc1 ld-linux-x86-64.so.2 [.] do_lookup_x
0.47% make ld-linux-x86-64.so.2 [.] _dl_relocate_object
0.44% cc1 cc1 [.] make_node(tree_code)
0.43% sh [kernel.kallsyms] [k] native_irq_return_iret
0.38% sh libc.so.6 [.] _int_malloc
0.38% cc1 cc1 [.] decl_attributes(tree_node**, tree_node*, int, tree_node*)
0.38% sh [kernel.kallsyms] [k] clear_page_erms
0.37% cc1 cc1 [.] ht_lookup_with_hash(ht*, unsigned char const*, unsigned long, unsigned int, ht_lookup_option)
0.37% make [kernel.kallsyms] [k] perf_event_mmap_output
0.37% make ld-linux-x86-64.so.2 [.] _dl_lookup_symbol_x
0.35% sh [kernel.kallsyms] [k] _compound_head
0.35% make make [.] hash_find_slot
0.33% sh libc.so.6 [.] __strlen_avx2
0.33% cc1 cc1 [.] ggc_internal_alloc(unsigned long, void (*)(void*), unsigned long, unsigned long)
0.33% sh [kernel.kallsyms] [k] perf_iterate_ctx
0.31% make make [.] jhash_string
0.31% sh [kernel.kallsyms] [k] page_remove_rmap
0.30% cc1 libc.so.6 [.] _int_malloc
0.30% make libc.so.6 [.] _int_malloc
acme@roc-rk3399-pc:~$
Signed-off-by: Veronika Molnarova <vmolnaro@redhat.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Michael Petlan <mpetlan@redhat.com>
Link: https://lore.kernel.org/r/20231212165909.14459-1-vmolnaro@redhat.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2023-12-12 16:59:08 +00:00
|
|
|
if [ $ALL -eq 1 ]; then # pack perf.data file together with tar containing debug symbols
|
|
|
|
HOSTNAME=$(hostname)
|
|
|
|
DATE=$(date '+%Y%m%d-%H%M%S')
|
|
|
|
tar cjf $PERF_SYMBOLS.tar.bz2 -C $PERF_BUILDID_DIR -T $MANIFEST
|
|
|
|
tar cjf $PERF_ALL-$HOSTNAME-$DATE.tar.bz2 $PERF_DATA $PERF_SYMBOLS.tar.bz2
|
|
|
|
rm $PERF_SYMBOLS.tar.bz2 $MANIFEST $BUILDIDS || true
|
|
|
|
else # pack only the debug symbols
|
|
|
|
tar cjf $PERF_DATA.tar.bz2 -C $PERF_BUILDID_DIR -T $MANIFEST
|
|
|
|
rm $MANIFEST $BUILDIDS || true
|
|
|
|
fi
|
|
|
|
|
2023-12-12 16:59:09 +00:00
|
|
|
echo -e "Now please run:\n"
|
|
|
|
echo -e "$ perf archive --unpack\n"
|
|
|
|
echo "or unpack the tar manually wherever you need to run 'perf report' on."
|
2010-01-15 15:17:52 +00:00
|
|
|
exit 0
|