linux/tools/perf/util/symbol-elf.c

// SPDX-License-Identifier: GPL-2.0
#include <fcntl.h>
#include <stdio.h>
#include <errno.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <inttypes.h>
#include "dso.h"
#include "map.h"
#include "maps.h"
#include "symbol.h"
#include "symsrc.h"
#include "demangle-cxx.h"
#include "demangle-ocaml.h"
#include "demangle-java.h"
#include "demangle-rust.h"
#include "machine.h"
#include "vdso.h"
#include "debug.h"
#include "util/copyfile.h"
#include <linux/ctype.h>
#include <linux/kernel.h>
#include <linux/zalloc.h>
#include <linux/string.h>
#include <symbol/kallsyms.h>
#include <internal/lib.h>
#ifdef HAVE_LIBBFD_SUPPORT
#define PACKAGE 'perf'
#include <bfd.h>
#endif
#if defined(HAVE_LIBBFD_SUPPORT) || defined(HAVE_CPLUS_DEMANGLE_SUPPORT)
#ifndef DMGL_PARAMS
#define DMGL_PARAMS (1 << 0) /* Include function args */
#define DMGL_ANSI (1 << 1) /* Include const, volatile, etc */
#endif
#endif
#ifndef EM_AARCH64
#define EM_AARCH64 183 /* ARM 64 bit */
#endif
#ifndef EM_LOONGARCH
#define EM_LOONGARCH 258
#endif
#ifndef ELF32_ST_VISIBILITY
#define ELF32_ST_VISIBILITY(o) ((o) & 0x03)
#endif
/* For ELF64 the definitions are the same. */
#ifndef ELF64_ST_VISIBILITY
#define ELF64_ST_VISIBILITY(o) ELF32_ST_VISIBILITY (o)
#endif
/* How to extract information held in the st_other field. */
#ifndef GELF_ST_VISIBILITY
#define GELF_ST_VISIBILITY(val) ELF64_ST_VISIBILITY (val)
#endif
typedef Elf64_Nhdr GElf_Nhdr;
#ifndef HAVE_ELF_GETPHDRNUM_SUPPORT
static int elf_getphdrnum(Elf *elf, size_t *dst)
{
GElf_Ehdr gehdr;
GElf_Ehdr *ehdr;
ehdr = gelf_getehdr(elf, &gehdr);
if (!ehdr)
return -1;
*dst = ehdr->e_phnum;
return 0;
}
#endif
#ifndef HAVE_ELF_GETSHDRSTRNDX_SUPPORT
static int elf_getshdrstrndx(Elf *elf __maybe_unused, size_t *dst __maybe_unused)
{
pr_err("%s: update your libelf to > 0.140, this one lacks elf_getshdrstrndx().\n", __func__);
return -1;
}
#endif
#ifndef NT_GNU_BUILD_ID
#define NT_GNU_BUILD_ID 3
#endif
/**
 * elf_symtab__for_each_symbol - iterate through all the symbols
 *
 * @syms: Elf_Data of the symbol table to iterate
 * @nr_syms: number of symbols in @syms
 * @idx: uint32_t iteration index
 * @sym: GElf_Sym iterator
 */
#define elf_symtab__for_each_symbol(syms, nr_syms, idx, sym) \
for (idx = 0, gelf_getsym(syms, idx, &sym);\
idx < nr_syms; \
idx++, gelf_getsym(syms, idx, &sym))
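/*
 * Illustrative use only (not taken from this file): count the defined
 * function symbols in a symbol table.  "syms_data" and "nr_syms" are
 * assumed to come from elf_getdata() and shdr.sh_size / shdr.sh_entsize
 * elsewhere.
 *
 *	uint32_t idx;
 *	GElf_Sym sym;
 *	size_t nr_funcs = 0;
 *
 *	elf_symtab__for_each_symbol(syms_data, nr_syms, idx, sym) {
 *		if (elf_sym__is_function(&sym))
 *			nr_funcs++;
 *	}
 */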
static inline uint8_t elf_sym__type(const GElf_Sym *sym)
{
return GELF_ST_TYPE(sym->st_info);
}
static inline uint8_t elf_sym__visibility(const GElf_Sym *sym)
{
return GELF_ST_VISIBILITY(sym->st_other);
}
#ifndef STT_GNU_IFUNC
#define STT_GNU_IFUNC 10
#endif
static inline int elf_sym__is_function(const GElf_Sym *sym)
{
return (elf_sym__type(sym) == STT_FUNC ||
elf_sym__type(sym) == STT_GNU_IFUNC) &&
sym->st_name != 0 &&
sym->st_shndx != SHN_UNDEF;
}
static inline bool elf_sym__is_object(const GElf_Sym *sym)
{
return elf_sym__type(sym) == STT_OBJECT &&
sym->st_name != 0 &&
sym->st_shndx != SHN_UNDEF;
}
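/*
 * Labels are named STT_NOTYPE symbols defined in a real section.
 * Hidden/internal visibility is excluded to filter out symbols such as
 * the .annobin_* markers added by the annobin GCC plugin.
 */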
static inline int elf_sym__is_label(const GElf_Sym *sym)
{
return elf_sym__type(sym) == STT_NOTYPE &&
sym->st_name != 0 &&
sym->st_shndx != SHN_UNDEF &&
sym->st_shndx != SHN_ABS &&
elf_sym__visibility(sym) != STV_HIDDEN &&
elf_sym__visibility(sym) != STV_INTERNAL;
}
static bool elf_sym__filter(GElf_Sym *sym)
{
return elf_sym__is_function(sym) || elf_sym__is_object(sym);
}
static inline const char *elf_sym__name(const GElf_Sym *sym,
const Elf_Data *symstrs)
{
return symstrs->d_buf + sym->st_name;
}
static inline const char *elf_sec__name(const GElf_Shdr *shdr,
const Elf_Data *secstrs)
{
return secstrs->d_buf + shdr->sh_name;
}
static inline int elf_sec__is_text(const GElf_Shdr *shdr,
const Elf_Data *secstrs)
{
return strstr(elf_sec__name(shdr, secstrs), "text") != NULL;
}
static inline bool elf_sec__is_data(const GElf_Shdr *shdr,
const Elf_Data *secstrs)
{
return strstr(elf_sec__name(shdr, secstrs), "data") != NULL;
}
static bool elf_sec__filter(GElf_Shdr *shdr, Elf_Data *secstrs)
{
return elf_sec__is_text(shdr, secstrs) ||
elf_sec__is_data(shdr, secstrs);
}
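/*
 * Return the 1-based index of the section containing @addr, or (size_t)-1
 * if no section contains it.
 */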
static size_t elf_addr_to_index(Elf *elf, GElf_Addr addr)
{
Elf_Scn *sec = NULL;
GElf_Shdr shdr;
size_t cnt = 1;
while ((sec = elf_nextscn(elf, sec)) != NULL) {
gelf_getshdr(sec, &shdr);
if ((addr >= shdr.sh_addr) &&
(addr < (shdr.sh_addr + shdr.sh_size)))
return cnt;
++cnt;
}
return -1;
}
Elf_Scn *elf_section_by_name(Elf *elf, GElf_Ehdr *ep,
GElf_Shdr *shp, const char *name, size_t *idx)
{
Elf_Scn *sec = NULL;
size_t cnt = 1;
/* ELF is corrupted/truncated, avoid calling elf_strptr. */
if (!elf_rawdata(elf_getscn(elf, ep->e_shstrndx), NULL))
return NULL;
while ((sec = elf_nextscn(elf, sec)) != NULL) {
char *str;
gelf_getshdr(sec, shp);
str = elf_strptr(elf, ep->e_shstrndx, shp->sh_name);
if (str && !strcmp(name, str)) {
if (idx)
*idx = cnt;
return sec;
}
++cnt;
}
return NULL;
}
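/* Return true if @filename is an ELF file containing a section named @sec. */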
bool filename__has_section(const char *filename, const char *sec)
{
int fd;
Elf *elf;
GElf_Ehdr ehdr;
GElf_Shdr shdr;
bool found = false;
fd = open(filename, O_RDONLY);
if (fd < 0)
return false;
elf = elf_begin(fd, PERF_ELF_C_READ_MMAP, NULL);
if (elf == NULL)
goto out;
if (gelf_getehdr(elf, &ehdr) == NULL)
goto elf_out;
found = !!elf_section_by_name(elf, &ehdr, &shdr, sec, NULL);
elf_out:
elf_end(elf);
out:
close(fd);
return found;
}
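/*
 * Find the PT_LOAD program header that covers @vaddr.  Used to convert a
 * symbol's memory address into a file offset via
 * file_offset = addr - p_vaddr + p_offset, which is more reliable than
 * relying on section sh_addr/sh_offset (e.g. for .bss symbols).
 */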
static int elf_read_program_header(Elf *elf, u64 vaddr, GElf_Phdr *phdr)
{
size_t i, phdrnum;
u64 sz;
if (elf_getphdrnum(elf, &phdrnum))
return -1;
for (i = 0; i < phdrnum; i++) {
if (gelf_getphdr(elf, i, phdr) == NULL)
return -1;
if (phdr->p_type != PT_LOAD)
continue;
sz = max(phdr->p_memsz, phdr->p_filesz);
if (!sz)
continue;
if (vaddr >= phdr->p_vaddr && (vaddr < phdr->p_vaddr + sz))
return 0;
}
/* No valid program header found */
return -1;
}
static bool want_demangle(bool is_kernel_sym)
{
return is_kernel_sym ? symbol_conf.demangle_kernel : symbol_conf.demangle;
}
/*
 * Demangle a C++ function signature.  This weak default is typically
 * replaced by the demangle-cxx.cpp version when C++ demangling support
 * is built in.
 */
__weak char *cxx_demangle_sym(const char *str __maybe_unused, bool params __maybe_unused,
bool modifiers __maybe_unused)
{
#ifdef HAVE_LIBBFD_SUPPORT
int flags = (params ? DMGL_PARAMS : 0) | (modifiers ? DMGL_ANSI : 0);
return bfd_demangle(NULL, str, flags);
#elif defined(HAVE_CPLUS_DEMANGLE_SUPPORT)
int flags = (params ? DMGL_PARAMS : 0) | (modifiers ? DMGL_ANSI : 0);
return cplus_demangle(str, flags);
#else
return NULL;
#endif
}
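/*
 * Try demanglers in order: C++ (which may also yield a Rust-mangled name
 * that is then demangled in place), then OCaml, then Java.  Returns a
 * newly allocated demangled name, or NULL if demangling is disabled or
 * fails.
 */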
static char *demangle_sym(struct dso *dso, int kmodule, const char *elf_name)
{
char *demangled = NULL;
/*
 * We need to figure out if the object was created from C++ sources.
 * DWARF DW_compile_unit has this, but we don't always have access
 * to it...
 */
if (!want_demangle(dso->kernel || kmodule))
return demangled;
demangled = cxx_demangle_sym(elf_name, verbose > 0, verbose > 0);
if (demangled == NULL) {
demangled = ocaml_demangle_sym(elf_name);
if (demangled == NULL) {
demangled = java_demangle_sym(elf_name, JAVA_DEMANGLE_NORET);
}
}
else if (rust_is_mangled(demangled))
/*
 * The input to Rust demangling is the BFD-demangled name,
 * which is then Rust-demangled in place.
 */
rust_demangle_sym(demangled);
return demangled;
}
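/* State for iterating the relocations that correspond to .plt entries. */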
struct rel_info {
u32 nr_entries;
u32 *sorted;
bool is_rela;
Elf_Data *reldata;
GElf_Rela rela;
GElf_Rel rel;
};
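/*
 * Read relocation @idx (via the sorted order, if any) into ri->rel/rela
 * and return its ELF symbol table index.
 */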
static u32 get_rel_symidx(struct rel_info *ri, u32 idx)
{
idx = ri->sorted ? ri->sorted[idx] : idx;
if (ri->is_rela) {
gelf_getrela(ri->reldata, idx, &ri->rela);
return GELF_R_SYM(ri->rela.r_info);
}
gelf_getrel(ri->reldata, idx, &ri->rel);
return GELF_R_SYM(ri->rel.r_info);
}
static u64 get_rel_offset(struct rel_info *ri, u32 x)
{
if (ri->is_rela) {
GElf_Rela rela;
gelf_getrela(ri->reldata, x, &rela);
return rela.r_offset;
} else {
GElf_Rel rel;
gelf_getrel(ri->reldata, x, &rel);
return rel.r_offset;
}
}
static int rel_cmp(const void *a, const void *b, void *r)
{
struct rel_info *ri = r;
u64 a_offset = get_rel_offset(ri, *(const u32 *)a);
u64 b_offset = get_rel_offset(ri, *(const u32 *)b);
return a_offset < b_offset ? -1 : (a_offset > b_offset ? 1 : 0);
}
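/*
 * Build an index array ordered by relocation r_offset.  On x86, IFUNC
 * (IRELATIVE) relocations can make .rela.plt order differ from .plt entry
 * order, so sort to match entries by address.
 */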
static int sort_rel(struct rel_info *ri)
{
size_t sz = sizeof(ri->sorted[0]);
u32 i;
ri->sorted = calloc(ri->nr_entries, sz);
if (!ri->sorted)
return -1;
for (i = 0; i < ri->nr_entries; i++)
ri->sorted[i] = i;
qsort_r(ri->sorted, ri->nr_entries, sz, rel_cmp, ri);
return 0;
}
/*
* For x86_64, the GNU linker is putting IFUNC information in the relocation
* addend.
*/
static bool addend_may_be_ifunc(GElf_Ehdr *ehdr, struct rel_info *ri)
{
return ehdr->e_machine == EM_X86_64 && ri->is_rela &&
GELF_R_TYPE(ri->rela.r_info) == R_X86_64_IRELATIVE;
}
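/*
 * For an IRELATIVE relocation, the addend is the address of the IFUNC
 * resolver.  Map it back to a file offset, look up the symbol and format
 * it as "name@plt".
 */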
static bool get_ifunc_name(Elf *elf, struct dso *dso, GElf_Ehdr *ehdr,
struct rel_info *ri, char *buf, size_t buf_sz)
{
u64 addr = ri->rela.r_addend;
struct symbol *sym;
GElf_Phdr phdr;
if (!addend_may_be_ifunc(ehdr, ri))
return false;
if (elf_read_program_header(elf, addr, &phdr))
return false;
addr -= phdr.p_vaddr - phdr.p_offset;
sym = dso__find_symbol_nocache(dso, addr);
/* Expecting the address to be an IFUNC or IFUNC alias */
if (!sym || sym->start != addr || (sym->type != STT_GNU_IFUNC && !sym->ifunc_alias))
return false;
snprintf(buf, buf_sz, "%s@plt", sym->name);
return true;
}
static void exit_rel(struct rel_info *ri)
{
zfree(&ri->sorted);
}
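/*
 * Per-machine .plt header and entry sizes.  For machines not listed,
 * fall back to the section's sh_entsize.
 */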
static bool get_plt_sizes(struct dso *dso, GElf_Ehdr *ehdr, GElf_Shdr *shdr_plt,
u64 *plt_header_size, u64 *plt_entry_size)
{
switch (ehdr->e_machine) {
case EM_ARM:
*plt_header_size = 20;
*plt_entry_size = 12;
return true;
case EM_AARCH64:
*plt_header_size = 32;
*plt_entry_size = 16;
return true;
case EM_LOONGARCH:
*plt_header_size = 32;
*plt_entry_size = 16;
return true;
case EM_SPARC:
*plt_header_size = 48;
*plt_entry_size = 12;
return true;
case EM_SPARCV9:
*plt_header_size = 128;
*plt_entry_size = 32;
return true;
case EM_386:
case EM_X86_64:
*plt_entry_size = shdr_plt->sh_entsize;
/* Size is 8 or 16, if not, assume alignment indicates size */
if (*plt_entry_size != 8 && *plt_entry_size != 16)
*plt_entry_size = shdr_plt->sh_addralign == 8 ? 8 : 16;
*plt_header_size = *plt_entry_size;
break;
default: /* FIXME: s390/alpha/mips/parisc/powerpc/sh/xtensa need to be checked */
*plt_header_size = shdr_plt->sh_entsize;
*plt_entry_size = shdr_plt->sh_entsize;
break;
}
if (*plt_entry_size)
return true;
pr_debug("Missing PLT entry size for %s\n", dso->long_name);
return false;
}
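/* True for both 32-bit and 64-bit x86 ELF machine types */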
static bool machine_is_x86(GElf_Half e_machine)
{
return e_machine == EM_386 || e_machine == EM_X86_64;
}
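/*
 * Data needed to map a .plt.got entry back to a dynamic symbol name:
 * .rela.dyn entries (offset and symbol index) sorted by offset, plus the
 * .plt.got, .dynsym and .dynstr section data they refer to.
 */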
struct rela_dyn {
GElf_Addr offset;
u32 sym_idx;
};
struct rela_dyn_info {
struct dso *dso;
Elf_Data *plt_got_data;
u32 nr_entries;
struct rela_dyn *sorted;
Elf_Data *dynsym_data;
Elf_Data *dynstr_data;
Elf_Data *rela_dyn_data;
};
static void exit_rela_dyn(struct rela_dyn_info *di)
{
zfree(&di->sorted);
}
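/* Comparator for qsort()/bsearch() of struct rela_dyn by offset */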
static int cmp_offset(const void *a, const void *b)
{
const struct rela_dyn *va = a;
const struct rela_dyn *vb = b;
return va->offset < vb->offset ? -1 : (va->offset > vb->offset ? 1 : 0);
}
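/*
 * Extract (offset, symbol index) pairs from .rela.dyn, keeping only entries
 * that have a symbol index, and sort them by offset for later bsearch().
 */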
static int sort_rela_dyn(struct rela_dyn_info *di)
{
u32 i, n;
di->sorted = calloc(di->nr_entries, sizeof(di->sorted[0]));
if (!di->sorted)
return -1;
/* Get data for sorting: the offset and symbol index */
for (i = 0, n = 0; i < di->nr_entries; i++) {
GElf_Rela rela;
u32 sym_idx;
gelf_getrela(di->rela_dyn_data, i, &rela);
sym_idx = GELF_R_SYM(rela.r_info);
if (sym_idx) {
di->sorted[n].sym_idx = sym_idx;
di->sorted[n].offset = rela.r_offset;
n += 1;
}
}
/* Sort by offset */
di->nr_entries = n;
qsort(di->sorted, n, sizeof(di->sorted[0]), cmp_offset);
return 0;
}
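/*
 * Collect .plt.got data and .rela.dyn entries together with the symbol and
 * string tables that .rela.dyn links to (via sh_link), then sort the
 * relocations by offset. Called only for EM_X86_64, see
 * dso__synthesize_plt_got_symbols().
 */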
static void get_rela_dyn_info(Elf *elf, GElf_Ehdr *ehdr, struct rela_dyn_info *di, Elf_Scn *scn)
{
GElf_Shdr rela_dyn_shdr;
GElf_Shdr shdr;
di->plt_got_data = elf_getdata(scn, NULL);
scn = elf_section_by_name(elf, ehdr, &rela_dyn_shdr, ".rela.dyn", NULL);
if (!scn || !rela_dyn_shdr.sh_link || !rela_dyn_shdr.sh_entsize)
return;
di->nr_entries = rela_dyn_shdr.sh_size / rela_dyn_shdr.sh_entsize;
di->rela_dyn_data = elf_getdata(scn, NULL);
scn = elf_getscn(elf, rela_dyn_shdr.sh_link);
if (!scn || !gelf_getshdr(scn, &shdr) || !shdr.sh_link)
return;
di->dynsym_data = elf_getdata(scn, NULL);
di->dynstr_data = elf_getdata(elf_getscn(elf, shdr.sh_link), NULL);
if (!di->plt_got_data || !di->dynstr_data || !di->dynsym_data || !di->rela_dyn_data)
return;
/* Sort into offset order */
sort_rela_dyn(di);
}
/* Get instruction displacement from a plt entry for x86_64 */
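/*
 * An entry starts with an optional endbr64 (f3 0f 1e fa) and an optional
 * bnd prefix (f2), followed by an indirect jmp with a 4-byte RIP-relative
 * displacement (ff 25 xx xx xx xx). The return value is the offset from the
 * start of the entry to the location the jmp refers to (the displacement is
 * relative to the end of the instruction), or 0 if the entry is not
 * recognized. e.g. for "f3 0f 1e fa f2 ff 25 <d32>" the result is 11 + d32.
 */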
static u32 get_x86_64_plt_disp(const u8 *p)
{
u8 endbr64[] = {0xf3, 0x0f, 0x1e, 0xfa};
int n = 0;
/* Skip endbr64 */
if (!memcmp(p, endbr64, sizeof(endbr64)))
n += sizeof(endbr64);
/* Skip bnd prefix */
if (p[n] == 0xf2)
n += 1;
/* jmp with 4-byte displacement */
if (p[n] == 0xff && p[n + 1] == 0x25) {
u32 disp;
n += 2;
/* Also add offset from start of entry to end of instruction */
memcpy(&disp, p + n, sizeof(disp));
return n + 4 + le32toh(disp);
}
return 0;
}
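/*
 * Name a .plt.got entry after the dynamic symbol it resolves to: compute the
 * file offset of the GOT slot the entry jumps through, bsearch() for that
 * offset among the sorted .rela.dyn entries, and format the (demangled)
 * symbol name as "<name>@plt".
 */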
static bool get_plt_got_name(GElf_Shdr *shdr, size_t i,
struct rela_dyn_info *di,
char *buf, size_t buf_sz)
{
struct rela_dyn vi, *vr;
const char *sym_name;
char *demangled;
GElf_Sym sym;
bool result;
u32 disp;
if (!di->sorted)
return false;
disp = get_x86_64_plt_disp(di->plt_got_data->d_buf + i);
if (!disp)
return false;
/* Compute target offset of the .plt.got entry */
vi.offset = shdr->sh_offset + di->plt_got_data->d_off + i + disp;
/* Find that offset in .rela.dyn (sorted by offset) */
vr = bsearch(&vi, di->sorted, di->nr_entries, sizeof(di->sorted[0]), cmp_offset);
if (!vr)
return false;
/* Get the associated symbol */
gelf_getsym(di->dynsym_data, vr->sym_idx, &sym);
sym_name = elf_sym__name(&sym, di->dynstr_data);
demangled = demangle_sym(di->dso, 0, sym_name);
if (demangled != NULL)
sym_name = demangled;
snprintf(buf, buf_sz, "%s@plt", sym_name);
result = *sym_name;
free(demangled);
return result;
}
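/*
 * Synthesize a symbol for each .plt.got entry. On x86-64, .rela.dyn
 * information is gathered first so that entries can be named after the
 * dynamic symbols they target (see get_plt_got_name()).
 */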
static int dso__synthesize_plt_got_symbols(struct dso *dso, Elf *elf,
GElf_Ehdr *ehdr,
char *buf, size_t buf_sz)
{
struct rela_dyn_info di = { .dso = dso };
struct symbol *sym;
GElf_Shdr shdr;
Elf_Scn *scn;
int err = -1;
size_t i;
scn = elf_section_by_name(elf, ehdr, &shdr, ".plt.got", NULL);
if (!scn || !shdr.sh_entsize)
return 0;
if (ehdr->e_machine == EM_X86_64)
get_rela_dyn_info(elf, ehdr, &di, scn);
	for (i = 0; i < shdr.sh_size; i += shdr.sh_entsize) {
		if (!get_plt_got_name(&shdr, i, &di, buf, buf_sz))
			snprintf(buf, buf_sz, "offset_%#" PRIx64 "@plt", (u64)shdr.sh_offset + i);
		sym = symbol__new(shdr.sh_offset + i, shdr.sh_entsize, STB_GLOBAL, STT_FUNC, buf);
		if (!sym)
			goto out;
		symbols__insert(&dso->symbols, sym);
	}
	err = 0;
out:
	exit_rela_dyn(&di);
	return err;
}

/*
 * We need to check if we have a .dynsym, so that we can handle the
 * .plt, synthesizing its symbols, which aren't in the symtabs (be it
 * .dynsym or .symtab).
 * And always look at the original dso, not at debuginfo packages, which
 * have the PLT data stripped out (shdr_rel_plt.sh_type == SHT_NOBITS).
 */
int dso__synthesize_plt_symbols(struct dso *dso, struct symsrc *ss)
{
	uint32_t idx;
	GElf_Sym sym;
	u64 plt_offset, plt_header_size, plt_entry_size;
	GElf_Shdr shdr_plt, plt_sec_shdr;
	struct symbol *f, *plt_sym;
	GElf_Shdr shdr_rel_plt, shdr_dynsym;
	Elf_Data *syms, *symstrs;
	Elf_Scn *scn_plt_rel, *scn_symstrs, *scn_dynsym;
	GElf_Ehdr ehdr;
	char sympltname[1024];
	Elf *elf;
	int nr = 0, err = -1;
	struct rel_info ri = { .is_rela = false };
	bool lazy_plt;

	elf = ss->elf;
	ehdr = ss->ehdr;
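
	/* If there is no .plt section, there are no PLT entries to synthesize */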
	if (!elf_section_by_name(elf, &ehdr, &shdr_plt, ".plt", NULL))
		return 0;

	/*
	 * A symbol from a previous section (e.g. .init) can have been expanded
	 * by symbols__fixup_end() to overlap .plt. Truncate it before adding
	 * a symbol for .plt header.
	 */
	f = dso__find_symbol_nocache(dso, shdr_plt.sh_offset);
	if (f && f->start < shdr_plt.sh_offset && f->end > shdr_plt.sh_offset)
		f->end = shdr_plt.sh_offset;
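
	/* Bail out if the .plt header and entry sizes cannot be determined */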
	if (!get_plt_sizes(dso, &ehdr, &shdr_plt, &plt_header_size, &plt_entry_size))
		return 0;

	/* Add a symbol for .plt header */
	plt_sym = symbol__new(shdr_plt.sh_offset, plt_header_size, STB_GLOBAL, STT_FUNC, ".plt");
	if (!plt_sym)
		goto out_elf_end;
	symbols__insert(&dso->symbols, plt_sym);

	/* Only x86 has .plt.got */
	if (machine_is_x86(ehdr.e_machine) &&
	    dso__synthesize_plt_got_symbols(dso, elf, &ehdr, sympltname, sizeof(sympltname)))
		goto out_elf_end;
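
	/*
	 * When a .plt.sec section is present (e.g. with IBT), calls go via
	 * .plt.sec rather than .plt, and .plt.sec has no header entry, so
	 * synthesize symbols using the .plt.sec offset and entry size.
	 */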
	/* Only x86 has .plt.sec */
	if (machine_is_x86(ehdr.e_machine) &&
	    elf_section_by_name(elf, &ehdr, &plt_sec_shdr, ".plt.sec", NULL)) {
		if (!get_plt_sizes(dso, &ehdr, &plt_sec_shdr, &plt_header_size, &plt_entry_size))
			return 0;
		/* Extend .plt symbol to entire .plt */
		plt_sym->end = plt_sym->start + shdr_plt.sh_size;
		/* Use .plt.sec offset */
		plt_offset = plt_sec_shdr.sh_offset;
		lazy_plt = false;
} else {
plt_offset = shdr_plt.sh_offset;
lazy_plt = true;
}
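/*
 * PLT entries are named after the relocations that fill their GOT slots,
 * so locate the .rela.plt (or .rel.plt) section first.
 */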
scn_plt_rel = elf_section_by_name(elf, &ehdr, &shdr_rel_plt,
".rela.plt", NULL);
if (scn_plt_rel == NULL) {
scn_plt_rel = elf_section_by_name(elf, &ehdr, &shdr_rel_plt,
".rel.plt", NULL);
if (scn_plt_rel == NULL)
return 0;
}
if (shdr_rel_plt.sh_type != SHT_RELA &&
shdr_rel_plt.sh_type != SHT_REL)
return 0;
if (!shdr_rel_plt.sh_link)
return 0;
if (shdr_rel_plt.sh_link == ss->dynsym_idx) {
scn_dynsym = ss->dynsym;
shdr_dynsym = ss->dynshdr;
} else if (shdr_rel_plt.sh_link == ss->symtab_idx) {
/*
* A static executable can have a .plt due to IFUNCs, in which
* case .symtab is used, not .dynsym.
*/
scn_dynsym = ss->symtab;
shdr_dynsym = ss->symshdr;
} else {
goto out_elf_end;
}
if (!scn_dynsym)
return 0;
/*
* Fetch the relocation section to find the indexes into the GOT
* and the symbols in the symbol table that they refer to.
*/
ri.reldata = elf_getdata(scn_plt_rel, NULL);
if (!ri.reldata)
goto out_elf_end;
syms = elf_getdata(scn_dynsym, NULL);
if (syms == NULL)
goto out_elf_end;
scn_symstrs = elf_getscn(elf, shdr_dynsym.sh_link);
if (scn_symstrs == NULL)
goto out_elf_end;
symstrs = elf_getdata(scn_symstrs, NULL);
if (symstrs == NULL)
goto out_elf_end;
if (symstrs->d_size == 0)
goto out_elf_end;
ri.nr_entries = shdr_rel_plt.sh_size / shdr_rel_plt.sh_entsize;
ri.is_rela = shdr_rel_plt.sh_type == SHT_RELA;
if (lazy_plt) {
/*
* Assume a .plt with the same number of entries as the number
* of relocation entries is not lazy and does not have a header.
*/
if (ri.nr_entries * plt_entry_size == shdr_plt.sh_size)
dso__delete_symbol(dso, plt_sym);
else
plt_offset += plt_header_size;
}
/*
* x86 doesn't insert IFUNC relocations in .plt order, so sort to get
* back in order.
*/
if (machine_is_x86(ehdr.e_machine) && sort_rel(&ri))
goto out_elf_end;
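/*
 * Walk the relocation entries, synthesizing one <symbol>@plt symbol per
 * PLT slot and advancing plt_offset by plt_entry_size each time.
 */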
for (idx = 0; idx < ri.nr_entries; idx++) {
const char *elf_name = NULL;
char *demangled = NULL;
gelf_getsym(syms, get_rel_symidx(&ri, idx), &sym);
elf_name = elf_sym__name(&sym, symstrs);
demangled = demangle_sym(dso, 0, elf_name);
if (demangled)
elf_name = demangled;
if (*elf_name)
snprintf(sympltname, sizeof(sympltname), "%s@plt", elf_name);
else if (!get_ifunc_name(elf, dso, &ehdr, &ri, sympltname, sizeof(sympltname)))
snprintf(sympltname, sizeof(sympltname),
"offset_%#" PRIx64 "@plt", plt_offset);
free(demangled);
f = symbol__new(plt_offset, plt_entry_size, STB_GLOBAL, STT_FUNC, sympltname);
if (!f)
goto out_elf_end;
plt_offset += plt_entry_size;
symbols__insert(&dso->symbols, f);
++nr;
}
err = 0;
out_elf_end:
exit_rel(&ri);
if (err == 0)
return nr;
pr_debug("%s: problems reading %s PLT info.\n",
__func__, dso->long_name);
return 0;
}
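/* Exported wrapper around the internal demangling helper. */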
char *dso__demangle_sym(struct dso *dso, int kmodule, const char *elf_name)
{
return demangle_sym(dso, kmodule, elf_name);
}
/*
* Align offset to 4 bytes as needed for note name and descriptor data.
*/
#define NOTE_ALIGN(n) (((n) + 3) & -4U)
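/*
 * Look for an NT_GNU_BUILD_ID note in the ELF image and copy its
 * descriptor (the build ID) into bf, zero-padding up to size.
 * Returns the number of bytes copied, or -1 if none was found.
 */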
static int elf_read_build_id(Elf *elf, void *bf, size_t size)
{
int err = -1;
GElf_Ehdr ehdr;
GElf_Shdr shdr;
Elf_Data *data;
Elf_Scn *sec;
Elf_Kind ek;
void *ptr;
if (size < BUILD_ID_SIZE)
goto out;
ek = elf_kind(elf);
if (ek != ELF_K_ELF)
goto out;
if (gelf_getehdr(elf, &ehdr) == NULL) {
pr_err("%s: cannot get elf header.\n", __func__);
goto out;
}
/*
* Check following sections for notes:
* '.note.gnu.build-id'
* '.notes'
* '.note' (VDSO specific)
*/
do {
sec = elf_section_by_name(elf, &ehdr, &shdr,
".note.gnu.build-id", NULL);
if (sec)
break;
sec = elf_section_by_name(elf, &ehdr, &shdr,
".notes", NULL);
if (sec)
break;
sec = elf_section_by_name(elf, &ehdr, &shdr,
".note", NULL);
if (sec)
break;
return err;
} while (0);
data = elf_getdata(sec, NULL);
if (data == NULL)
goto out;
ptr = data->d_buf;
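/* Walk the raw note records: header, then 4-byte-aligned name and descriptor. */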
while (ptr < (data->d_buf + data->d_size)) {
GElf_Nhdr *nhdr = ptr;
size_t namesz = NOTE_ALIGN(nhdr->n_namesz),
descsz = NOTE_ALIGN(nhdr->n_descsz);
const char *name;
ptr += sizeof(*nhdr);
name = ptr;
ptr += namesz;
if (nhdr->n_type == NT_GNU_BUILD_ID &&
nhdr->n_namesz == sizeof("GNU")) {
if (memcmp(name, "GNU", sizeof("GNU")) == 0) {
size_t sz = min(size, descsz);
memcpy(bf, ptr, sz);
memset(bf + sz, 0, size - sz);
err = sz;
break;
}
}
ptr += descsz;
}
out:
return err;
}
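/*
 * Two read_build_id() implementations: libbfd can hand back the build ID
 * directly; the fallback opens the file with libelf and reuses
 * elf_read_build_id() above.
 */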
#ifdef HAVE_LIBBFD_BUILDID_SUPPORT
static int read_build_id(const char *filename, struct build_id *bid)
{
size_t size = sizeof(bid->data);
int err = -1;
bfd *abfd;
abfd = bfd_openr(filename, NULL);
if (!abfd)
return -1;
if (!bfd_check_format(abfd, bfd_object)) {
pr_debug2("%s: cannot read %s bfd file.\n", __func__, filename);
goto out_close;
}
if (!abfd->build_id || abfd->build_id->size > size)
goto out_close;
memcpy(bid->data, abfd->build_id->data, abfd->build_id->size);
memset(bid->data + abfd->build_id->size, 0, size - abfd->build_id->size);
err = bid->size = abfd->build_id->size;
out_close:
bfd_close(abfd);
return err;
}
#else // HAVE_LIBBFD_BUILDID_SUPPORT
static int read_build_id(const char *filename, struct build_id *bid)
{
size_t size = sizeof(bid->data);
int fd, err = -1;
Elf *elf;
if (size < BUILD_ID_SIZE)
goto out;
fd = open(filename, O_RDONLY);
if (fd < 0)
goto out;
elf = elf_begin(fd, PERF_ELF_C_READ_MMAP, NULL);
if (elf == NULL) {
pr_debug2("%s: cannot read %s ELF file.\n", __func__, filename);
goto out_close;
}
err = elf_read_build_id(elf, bid->data, size);
if (err > 0)
bid->size = err;
elf_end(elf);
out_close:
close(fd);
out:
return err;
}
#endif // HAVE_LIBBFD_BUILDID_SUPPORT
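/*
 * Read the build ID from a file on disk, decompressing compressed kernel
 * modules to a temporary file first when necessary.
 */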
int filename__read_build_id(const char *filename, struct build_id *bid)
{
struct kmod_path m = { .name = NULL, };
char path[PATH_MAX];
int err;
if (!filename)
return -EFAULT;
err = kmod_path__parse(&m, filename);
if (err)
return -1;
if (m.comp) {
int error = 0, fd;
fd = filename__decompress(filename, path, sizeof(path), m.comp, &error);
if (fd < 0) {
pr_debug("Failed to decompress (error %d) %s\n",
error, filename);
return -1;
}
close(fd);
filename = path;
}
err = read_build_id(filename, bid);
if (m.comp)
unlink(filename);
return err;
}
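/*
 * Read a build ID from a sysfs notes file, which holds raw note records
 * rather than a full ELF image, so parse the note headers by hand.
 */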
int sysfs__read_build_id(const char *filename, struct build_id *bid)
{
size_t size = sizeof(bid->data);
int fd, err = -1;
fd = open(filename, O_RDONLY);
if (fd < 0)
goto out;
while (1) {
char bf[BUFSIZ];
GElf_Nhdr nhdr;
size_t namesz, descsz;
if (read(fd, &nhdr, sizeof(nhdr)) != sizeof(nhdr))
break;
namesz = NOTE_ALIGN(nhdr.n_namesz);
descsz = NOTE_ALIGN(nhdr.n_descsz);
if (nhdr.n_type == NT_GNU_BUILD_ID &&
nhdr.n_namesz == sizeof("GNU")) {
if (read(fd, bf, namesz) != (ssize_t)namesz)
break;
if (memcmp(bf, "GNU", sizeof("GNU")) == 0) {
size_t sz = min(descsz, size);
if (read(fd, bid->data, sz) == (ssize_t)sz) {
memset(bid->data + sz, 0, size - sz);
bid->size = sz;
err = 0;
break;
}
} else if (read(fd, bf, descsz) != (ssize_t)descsz)
break;
} else {
int n = namesz + descsz;
if (n > (int)sizeof(bf)) {
n = sizeof(bf);
pr_debug("%s: truncating reading of build id in sysfs file %s: n_namesz=%u, n_descsz=%u.\n",
__func__, filename, nhdr.n_namesz, nhdr.n_descsz);
}
if (read(fd, bf, n) != n)
break;
}
}
close(fd);
out:
return err;
}
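/*
 * Read the name of the separate debug file from the .gnu_debuglink
 * section, via libbfd when available, otherwise via libelf.
 */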
#ifdef HAVE_LIBBFD_SUPPORT
int filename__read_debuglink(const char *filename, char *debuglink,
size_t size)
{
int err = -1;
asection *section;
bfd *abfd;
abfd = bfd_openr(filename, NULL);
if (!abfd)
return -1;
if (!bfd_check_format(abfd, bfd_object)) {
pr_debug2("%s: cannot read %s bfd file.\n", __func__, filename);
goto out_close;
}
section = bfd_get_section_by_name(abfd, ".gnu_debuglink");
if (!section)
goto out_close;
if (section->size > size)
goto out_close;
if (!bfd_get_section_contents(abfd, section, debuglink, 0,
section->size))
goto out_close;
err = 0;
out_close:
bfd_close(abfd);
return err;
}
#else
int filename__read_debuglink(const char *filename, char *debuglink,
size_t size)
{
int fd, err = -1;
Elf *elf;
GElf_Ehdr ehdr;
GElf_Shdr shdr;
Elf_Data *data;
Elf_Scn *sec;
Elf_Kind ek;
fd = open(filename, O_RDONLY);
if (fd < 0)
goto out;
elf = elf_begin(fd, PERF_ELF_C_READ_MMAP, NULL);
if (elf == NULL) {
pr_debug2("%s: cannot read %s ELF file.\n", __func__, filename);
goto out_close;
}
ek = elf_kind(elf);
if (ek != ELF_K_ELF)
goto out_elf_end;
if (gelf_getehdr(elf, &ehdr) == NULL) {
pr_err("%s: cannot get elf header.\n", __func__);
goto out_elf_end;
}
sec = elf_section_by_name(elf, &ehdr, &shdr,
".gnu_debuglink", NULL);
if (sec == NULL)
goto out_elf_end;
data = elf_getdata(sec, NULL);
if (data == NULL)
goto out_elf_end;
/* the start of this section is a zero-terminated string */
strncpy(debuglink, data->d_buf, size);
err = 0;
out_elf_end:
elf_end(elf);
out_close:
close(fd);
out:
return err;
}
#endif
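/*
 * Record whether the DSO's byte order differs from the host's so that
 * later reads know whether to byte-swap.
 */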
static int dso__swap_init(struct dso *dso, unsigned char eidata)
{
static unsigned int const endian = 1;
dso->needs_swap = DSO_SWAP__NO;
switch (eidata) {
case ELFDATA2LSB:
/* We are big endian, DSO is little endian. */
if (*(unsigned char const *)&endian != 1)
dso->needs_swap = DSO_SWAP__YES;
break;
case ELFDATA2MSB:
/* We are little endian, DSO is big endian. */
if (*(unsigned char const *)&endian != 0)
dso->needs_swap = DSO_SWAP__YES;
break;
default:
pr_err("unrecognized DSO data encoding %d\n", eidata);
return -EINVAL;
}
return 0;
}
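/*
 * A symsrc can serve as the runtime image if it has dynamic symbols or an
 * .opd (function descriptor) section.
 */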
bool symsrc__possibly_runtime(struct symsrc *ss)
{
return ss->dynsym || ss->opdsec;
}
bool symsrc__has_symtab(struct symsrc *ss)
{
return ss->symtab != NULL;
}
void symsrc__destroy(struct symsrc *ss)
{
zfree(&ss->name);
elf_end(ss->elf);
close(ss->fd);
}
bool elf__needs_adjust_symbols(GElf_Ehdr ehdr)
{
/*
* Usually vmlinux is an ELF file of type ET_EXEC on most architectures,
* but the arm64 kernel is linked as a shared object, so type ET_DYN
* must be accepted as well.
*/
return ehdr.e_type == ET_EXEC || ehdr.e_type == ET_REL ||
ehdr.e_type == ET_DYN;
}
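/*
 * Open and validate the ELF image 'name', decompressing kernel modules
 * when needed, and initialize 'ss' for symbol loading; failures are
 * recorded in dso->load_errno.
 */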
int symsrc__init(struct symsrc *ss, struct dso *dso, const char *name,
enum dso_binary_type type)
{
GElf_Ehdr ehdr;
Elf *elf;
int fd;
if (dso__needs_decompress(dso)) {
fd = dso__decompress_kmodule_fd(dso, name);
if (fd < 0)
return -1;
type = dso->symtab_type;
} else {
fd = open(name, O_RDONLY);
if (fd < 0) {
dso->load_errno = errno;
return -1;
}
}
elf = elf_begin(fd, PERF_ELF_C_READ_MMAP, NULL);
if (elf == NULL) {
pr_debug("%s: cannot read %s ELF file.\n", __func__, name);
dso->load_errno = DSO_LOAD_ERRNO__INVALID_ELF;
goto out_close;
}
if (gelf_getehdr(elf, &ehdr) == NULL) {
dso->load_errno = DSO_LOAD_ERRNO__INVALID_ELF;
pr_debug("%s: cannot get elf header.\n", __func__);
goto out_elf_end;
}
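	/*
	 * Record whether the file's byte order differs from the host, so that
	 * symbol data can be byte-swapped when it is read.
	 */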
if (dso__swap_init(dso, ehdr.e_ident[EI_DATA])) {
dso->load_errno = DSO_LOAD_ERRNO__INTERNAL_ERROR;
goto out_elf_end;
}
/* Always reject images with a mismatched build-id: */
if (dso->has_build_id && !symbol_conf.ignore_vmlinux_buildid) {
u8 build_id[BUILD_ID_SIZE];
struct build_id bid;
int size;
size = elf_read_build_id(elf, build_id, BUILD_ID_SIZE);
if (size <= 0) {
dso->load_errno = DSO_LOAD_ERRNO__CANNOT_READ_BUILDID;
goto out_elf_end;
}
build_id__init(&bid, build_id, size);
if (!dso__build_id_equal(dso, &bid)) {
pr_debug("%s: build id mismatch for %s.\n", __func__, name);
dso->load_errno = DSO_LOAD_ERRNO__MISMATCHING_BUILDID;
goto out_elf_end;
}
}
ss->is_64_bit = (gelf_getclass(elf) == ELFCLASS64);
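	/*
	 * Locate the static (.symtab) and dynamic (.dynsym) symbol tables and,
	 * on ABIs that use it, the .opd function descriptor section.  Each
	 * pointer is cleared again if the section found is not of the
	 * expected type.
	 */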
ss->symtab_idx = 0;
ss->symtab = elf_section_by_name(elf, &ehdr, &ss->symshdr, ".symtab",
&ss->symtab_idx);
if (ss->symshdr.sh_type != SHT_SYMTAB)
ss->symtab = NULL;
ss->dynsym_idx = 0;
ss->dynsym = elf_section_by_name(elf, &ehdr, &ss->dynshdr, ".dynsym",
&ss->dynsym_idx);
if (ss->dynshdr.sh_type != SHT_DYNSYM)
ss->dynsym = NULL;
ss->opdidx = 0;
ss->opdsec = elf_section_by_name(elf, &ehdr, &ss->opdshdr, ".opd",
&ss->opdidx);
if (ss->opdshdr.sh_type != SHT_PROGBITS)
ss->opdsec = NULL;
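	/*
	 * User space DSOs always have their symbols adjusted to file offsets;
	 * otherwise it depends on the ELF header, see
	 * elf__needs_adjust_symbols().
	 */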
if (dso->kernel == DSO_SPACE__USER)
ss->adjust_symbols = true;
else
ss->adjust_symbols = elf__needs_adjust_symbols(ehdr);
ss->name = strdup(name);
if (!ss->name) {
dso->load_errno = errno;
goto out_elf_end;
}
ss->elf = elf;
ss->fd = fd;
ss->ehdr = ehdr;
ss->type = type;
return 0;
out_elf_end:
elf_end(elf);
out_close:
close(fd);
return -1;
}
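/* Executable text sections are both allocated (SHF_ALLOC) and executable (SHF_EXECINSTR). */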
static bool is_exe_text(int flags)
{
return (flags & (SHF_ALLOC | SHF_EXECINSTR)) == (SHF_ALLOC | SHF_EXECINSTR);
}
/*
* Some executable module sections like .noinstr.text might be laid out with
* .text so they can use the same mapping (memory address to file offset).
* Check if that is the case. Refer to kernel layout_sections(). Return the
* maximum offset.
*/
static u64 max_text_section(Elf *elf, GElf_Ehdr *ehdr)
{
Elf_Scn *sec = NULL;
GElf_Shdr shdr;
u64 offs = 0;
/* The layout heuristic below does not hold on some architectures */
if (ehdr->e_machine == EM_PARISC ||
ehdr->e_machine == EM_ALPHA)
return 0;
/* ELF is corrupted/truncated, avoid calling elf_strptr. */
if (!elf_rawdata(elf_getscn(elf, ehdr->e_shstrndx), NULL))
return 0;
while ((sec = elf_nextscn(elf, sec)) != NULL) {
char *sec_name;
if (!gelf_getshdr(sec, &shdr))
break;
if (!is_exe_text(shdr.sh_flags))
continue;
/* .init and .exit sections are not placed with .text */
sec_name = elf_strptr(elf, ehdr->e_shstrndx, shdr.sh_name);
if (!sec_name ||
strstarts(sec_name, ".init") ||
strstarts(sec_name, ".exit"))
break;
/* Must be next to previous, assumes .text is first */
if (offs && PERF_ALIGN(offs, shdr.sh_addralign ?: 1) != shdr.sh_offset)
break;
offs = shdr.sh_offset + shdr.sh_size;
}
return offs;
}
/**
* ref_reloc_sym_not_found - has kernel relocation symbol been found.
* @kmap: kernel maps and relocation reference symbol
*
* This function returns %true if we are dealing with the kernel maps and the
* relocation reference symbol has not yet been found. Otherwise %false is
* returned.
*/
static bool ref_reloc_sym_not_found(struct kmap *kmap)
{
return kmap && kmap->ref_reloc_sym && kmap->ref_reloc_sym->name &&
!kmap->ref_reloc_sym->unrelocated_addr;
}
/**
* ref_reloc - kernel relocation offset.
* @kmap: kernel maps and relocation reference symbol
*
* This function returns the offset of kernel addresses as determined by using
* the relocation reference symbol i.e. if the kernel has not been relocated
* then the return value is zero.
*/
static u64 ref_reloc(struct kmap *kmap)
{
if (kmap && kmap->ref_reloc_sym &&
kmap->ref_reloc_sym->unrelocated_addr)
return kmap->ref_reloc_sym->addr -
kmap->ref_reloc_sym->unrelocated_addr;
return 0;
}
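/*
 * Weak hook that lets an architecture record extra per-symbol information
 * from the ELF symbol; the default implementation does nothing.
 */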
void __weak arch__sym_update(struct symbol *s __maybe_unused,
GElf_Sym *sym __maybe_unused) { }
static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
GElf_Sym *sym, GElf_Shdr *shdr,
struct maps *kmaps, struct kmap *kmap,
struct dso **curr_dsop, struct map **curr_mapp,
const char *section_name,
bool adjust_kernel_syms, bool kmodule, bool *remap_kernel,
u64 max_text_sh_offset)
{
struct dso *curr_dso = *curr_dsop;
struct map *curr_map;
char dso_name[PATH_MAX];
/* Adjust symbol to map to file offset */
if (adjust_kernel_syms)
sym->st_value -= shdr->sh_addr - shdr->sh_offset;
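	/*
	 * The symbol already belongs to the currently selected per-section
	 * dso (whose short name is the parent dso name plus the section
	 * name), so there is nothing to remap.
	 */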
if (strcmp(section_name, (curr_dso->short_name + dso->short_name_len)) == 0)
return 0;
if (strcmp(section_name, ".text") == 0) {
/*
* The initial kernel mapping is based on
* kallsyms and identity maps. Overwrite it to
* map to the kernel dso.
*/
if (*remap_kernel && dso->kernel && !kmodule) {
*remap_kernel = false;
map__set_start(map, shdr->sh_addr + ref_reloc(kmap));
map__set_end(map, map__start(map) + shdr->sh_size);
map__set_pgoff(map, shdr->sh_offset);
map__set_mapping_type(map, MAPPING_TYPE__DSO);
/* Ensure maps are correctly ordered */
if (kmaps) {
int err;
struct map *tmp = map__get(map);
maps__remove(kmaps, map);
err = maps__insert(kmaps, map);
map__put(tmp);
if (err)
return err;
}
}
/*
* The initial module mapping is based on
* /proc/modules mapped to offset zero.
* Overwrite it to map to the module dso.
*/
if (*remap_kernel && kmodule) {
*remap_kernel = false;
map__set_pgoff(map, shdr->sh_offset);
}
*curr_mapp = map;
*curr_dsop = dso;
return 0;
}
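	/* Per-section maps are only maintained for kernel and module maps. */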
if (!kmap)
return 0;
/*
* perf does not record module section addresses except for .text, but
* some sections can use the same mapping as .text.
*/
if (kmodule && adjust_kernel_syms && is_exe_text(shdr->sh_flags) &&
shdr->sh_offset <= max_text_sh_offset) {
*curr_mapp = map;
*curr_dsop = dso;
return 0;
}
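	/*
	 * Other sections get their own map and dso, named after the parent
	 * dso plus the section name, created here on first use.
	 */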
snprintf(dso_name, sizeof(dso_name), "%s%s", dso->short_name, section_name);
curr_map = maps__find_by_name(kmaps, dso_name);
if (curr_map == NULL) {
u64 start = sym->st_value;
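		/*
		 * For a module, make the start absolute by biasing the symbol
		 * value with the module's mapped address and the section's
		 * file offset.
		 */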
if (kmodule)
start += map__start(map) + shdr->sh_offset;
curr_dso = dso__new(dso_name);
if (curr_dso == NULL)
return -1;
curr_dso->kernel = dso->kernel;
curr_dso->long_name = dso->long_name;
curr_dso->long_name_len = dso->long_name_len;
curr_dso->binary_type = dso->binary_type;
curr_dso->adjust_symbols = dso->adjust_symbols;
curr_map = map__new2(start, curr_dso);
dso__put(curr_dso);
if (curr_map == NULL)
return -1;
if (curr_dso->kernel)
map__kmap(curr_map)->kmaps = kmaps;
if (adjust_kernel_syms) {
map__set_start(curr_map, shdr->sh_addr + ref_reloc(kmap));
map__set_end(curr_map, map__start(curr_map) + shdr->sh_size);
map__set_pgoff(curr_map, shdr->sh_offset);
} else {
map__set_mapping_type(curr_map, MAPPING_TYPE__IDENTITY);
}
curr_dso->symtab_type = dso->symtab_type;
if (maps__insert(kmaps, curr_map))
return -1;
/*
* Add it before we drop the reference to curr_map, i.e. while
* we still are sure to have a reference to this DSO via
* *curr_map->dso.
*/
dsos__add(&maps__machine(kmaps)->dsos, curr_dso);
/* kmaps already got it */
map__put(curr_map);
dso__set_loaded(curr_dso);
*curr_mapp = curr_map;
*curr_dsop = curr_dso;
} else {
*curr_dsop = map__dso(curr_map);
map__put(curr_map);
}
return 0;
}
static int
dso__load_sym_internal(struct dso *dso, struct map *map, struct symsrc *syms_ss,
struct symsrc *runtime_ss, int kmodule, int dynsym)
{
struct kmap *kmap = dso->kernel ? map__kmap(map) : NULL;
struct maps *kmaps = kmap ? map__kmaps(map) : NULL;
struct map *curr_map = map;
struct dso *curr_dso = dso;
Elf_Data *symstrs, *secstrs, *secstrs_run, *secstrs_sym;
uint32_t nr_syms;
int err = -1;
uint32_t idx;
GElf_Ehdr ehdr;
GElf_Shdr shdr;
GElf_Shdr tshdr;
Elf_Data *syms, *opddata = NULL;
GElf_Sym sym;
Elf_Scn *sec, *sec_strndx;
Elf *elf;
int nr = 0;
bool remap_kernel = false, adjust_kernel_syms = false;
u64 max_text_sh_offset = 0;
if (kmap && !kmaps)
return -1;
elf = syms_ss->elf;
ehdr = syms_ss->ehdr;
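	/*
	 * Pick either the dynamic symbol table or the full symbol table from
	 * the symbol source, as requested by the caller.
	 */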
if (dynsym) {
sec = syms_ss->dynsym;
shdr = syms_ss->dynshdr;
} else {
sec = syms_ss->symtab;
shdr = syms_ss->symshdr;
}
if (elf_section_by_name(runtime_ss->elf, &runtime_ss->ehdr, &tshdr,
".text", NULL)) {
dso->text_offset = tshdr.sh_addr - tshdr.sh_offset;
dso->text_end = tshdr.sh_offset + tshdr.sh_size;
}
if (runtime_ss->opdsec)
opddata = elf_rawdata(runtime_ss->opdsec, NULL);
syms = elf_getdata(sec, NULL);
if (syms == NULL)
goto out_elf_end;
sec = elf_getscn(elf, shdr.sh_link);
if (sec == NULL)
goto out_elf_end;
symstrs = elf_getdata(sec, NULL);
if (symstrs == NULL)
goto out_elf_end;
sec_strndx = elf_getscn(runtime_ss->elf, runtime_ss->ehdr.e_shstrndx);
if (sec_strndx == NULL)
goto out_elf_end;
secstrs_run = elf_getdata(sec_strndx, NULL);
if (secstrs_run == NULL)
goto out_elf_end;
sec_strndx = elf_getscn(elf, ehdr.e_shstrndx);
if (sec_strndx == NULL)
goto out_elf_end;
secstrs_sym = elf_getdata(sec_strndx, NULL);
if (secstrs_sym == NULL)
goto out_elf_end;
nr_syms = shdr.sh_size / shdr.sh_entsize;
memset(&sym, 0, sizeof(sym));
/*
* The kernel relocation symbol is needed in advance in order to adjust
* kernel maps correctly.
*/
if (ref_reloc_sym_not_found(kmap)) {
elf_symtab__for_each_symbol(syms, nr_syms, idx, sym) {
const char *elf_name = elf_sym__name(&sym, symstrs);
if (strcmp(elf_name, kmap->ref_reloc_sym->name))
continue;
kmap->ref_reloc_sym->unrelocated_addr = sym.st_value;
map__set_reloc(map, kmap->ref_reloc_sym->addr - kmap->ref_reloc_sym->unrelocated_addr);
break;
}
}
/*
* Handle any relocation of vdso necessary because older kernels
* attempted to prelink vdso to its virtual address.
*/
if (dso__is_vdso(dso))
map__set_reloc(map, map__start(map) - dso->text_offset);
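	/*
	 * Symbol values are adjusted below when either the runtime file
	 * requires it or a kernel relocation symbol is in use.
	 */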
dso->adjust_symbols = runtime_ss->adjust_symbols || ref_reloc(kmap);
/*
* Initial kernel and module mappings do not map to the dso.
* Flag the fixups.
*/
if (dso->kernel) {
remap_kernel = true;
adjust_kernel_syms = dso->adjust_symbols;
}
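	/*
	 * For kernel module files, work out how far the executable sections
	 * expected to share the .text mapping extend in the file; this is
	 * passed on to dso__process_kernel_symbol() below.
	 */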
if (kmodule && adjust_kernel_syms)
max_text_sh_offset = max_text_section(runtime_ss->elf, &runtime_ss->ehdr);
elf_symtab__for_each_symbol(syms, nr_syms, idx, sym) {
struct symbol *f;
const char *elf_name = elf_sym__name(&sym, symstrs);
char *demangled = NULL;
int is_label = elf_sym__is_label(&sym);
const char *section_name;
bool used_opd = false;
if (!is_label && !elf_sym__filter(&sym))
continue;
		/*
		 * Reject ARM/AArch64 ELF "mapping symbols" ($a, $d, $t, $x,
		 * possibly with a '.' suffix): these aren't unique and don't
		 * identify functions, so they would only confuse the profile
		 * output.
		 */
if (ehdr.e_machine == EM_ARM || ehdr.e_machine == EM_AARCH64) {
if (elf_name[0] == '$' && strchr("adtx", elf_name[1])
&& (elf_name[2] == '\0' || elf_name[2] == '.'))
continue;
}
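		/*
		 * On 64-bit PowerPC (ELFv1), function symbols point into the
		 * .opd function descriptor section; the first word of a
		 * descriptor holds the real entry address, so dereference it
		 * and re-resolve the symbol's section index.
		 */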
if (runtime_ss->opdsec && sym.st_shndx == runtime_ss->opdidx) {
u32 offset = sym.st_value - syms_ss->opdshdr.sh_addr;
u64 *opd = opddata->d_buf + offset;
sym.st_value = DSO__SWAP(dso, u64, *opd);
sym.st_shndx = elf_addr_to_index(runtime_ss->elf,
sym.st_value);
used_opd = true;
}
		/*
		 * When loading symbols in a data mapping, ABS symbols (which
		 * have a value of SHN_ABS in their st_shndx) fail at
		 * elf_getscn(), and that marks the loading as a failure so
		 * already loaded symbols cannot be fixed up.
		 *
		 * I'm not sure what should be done. Just ignore them for now.
		 * - Namhyung Kim
		 */
if (sym.st_shndx == SHN_ABS)
continue;
sec = elf_getscn(syms_ss->elf, sym.st_shndx);
if (!sec)
goto out_elf_end;
gelf_getshdr(sec, &shdr);
		/*
		 * If the attribute bit SHF_ALLOC is not set, the section
		 * doesn't occupy memory during process execution.
		 * E.g. the ".gnu.warning.*" sections are used by the linker to
		 * generate warnings when deprecated functions are called; the
		 * symbols in those sections aren't loaded into memory during
		 * process execution, so skip them.
		 */
if (!(shdr.sh_flags & SHF_ALLOC))
continue;
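		/*
		 * By default look up section names in the symbol (debug)
		 * file's section string table.
		 */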
secstrs = secstrs_sym;
/*
* We have to fallback to runtime when syms' section header has
* NOBITS set. NOBITS results in file offset (sh_offset) not
* being incremented. So sh_offset used below has different
* values for syms (invalid) and runtime (valid).
*/
if (shdr.sh_type == SHT_NOBITS) {
sec = elf_getscn(runtime_ss->elf, sym.st_shndx);
if (!sec)
goto out_elf_end;
gelf_getshdr(sec, &shdr);
secstrs = secstrs_run;
}
if (is_label && !elf_sec__filter(&shdr, secstrs))
continue;
section_name = elf_sec__name(&shdr, secstrs);
/* On ARM, symbols for thumb functions have 1 added to
* the symbol address as a flag - remove it */
if ((ehdr.e_machine == EM_ARM) &&
(GELF_ST_TYPE(sym.st_info) == STT_FUNC) &&
(sym.st_value & 1))
--sym.st_value;
if (dso->kernel) {
if (dso__process_kernel_symbol(dso, map, &sym, &shdr, kmaps, kmap, &curr_dso, &curr_map,
section_name, adjust_kernel_syms, kmodule,
&remap_kernel, max_text_sh_offset))
goto out_elf_end;
} else if ((used_opd && runtime_ss->adjust_symbols) ||
(!used_opd && syms_ss->adjust_symbols)) {
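			/*
			 * Convert st_value from a virtual address to a file
			 * offset via the containing PT_LOAD segment:
			 * file offset = st_value - p_vaddr + p_offset.
			 * Program headers are read from the runtime file, as
			 * debug-only files may carry altered offsets.
			 */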
GElf_Phdr phdr;
if (elf_read_program_header(runtime_ss->elf,
(u64)sym.st_value, &phdr)) {
pr_debug4("%s: failed to find program header for "
"symbol: %s st_value: %#" PRIx64 "\n",
__func__, elf_name, (u64)sym.st_value);
pr_debug4("%s: adjusting symbol: st_value: %#" PRIx64 " "
"sh_addr: %#" PRIx64 " sh_offset: %#" PRIx64 "\n",
__func__, (u64)sym.st_value, (u64)shdr.sh_addr,
(u64)shdr.sh_offset);
/*
 * Failed to find a program header, so fall back
 * to using shdr.sh_addr and shdr.sh_offset to
 * calibrate the symbol's file address. This is
 * not necessary for a normal C ELF file, but we
 * still need to handle Java JIT symbols in this
 * case.
 */
sym.st_value -= shdr.sh_addr - shdr.sh_offset;
} else {
pr_debug4("%s: adjusting symbol: st_value: %#" PRIx64 " "
"p_vaddr: %#" PRIx64 " p_offset: %#" PRIx64 "\n",
__func__, (u64)sym.st_value, (u64)phdr.p_vaddr,
(u64)phdr.p_offset);
sym.st_value -= phdr.p_vaddr - phdr.p_offset;
}
}
demangled = demangle_sym(dso, kmodule, elf_name);
if (demangled != NULL)
elf_name = demangled;
f = symbol__new(sym.st_value, sym.st_size,
GELF_ST_BIND(sym.st_info),
GELF_ST_TYPE(sym.st_info), elf_name);
free(demangled);
if (!f)
goto out_elf_end;
arch__sym_update(f, &sym);
__symbols__insert(&curr_dso->symbols, f, dso->kernel);
nr++;
}
/*
* For misannotated, zeroed, ASM function sizes.
*/
if (nr > 0) {
symbols__fixup_end(&dso->symbols, false);
symbols__fixup_duplicate(&dso->symbols);
if (kmap) {
/*
* We need to fixup this here too because we create new
* maps here, for things like vsyscall sections.
*/
maps__fixup_end(kmaps);
}
}
err = nr;
out_elf_end:
return err;
}
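/*
 * Load symbols from the .symtab section of syms_ss and, if present, its
 * .dynsym section, using runtime_ss to adjust them to file addresses.
 * Returns the number of symbols added, or a negative value on error.
 * A kernel DSO with a stripped symbol table returns an error so the
 * caller can fall back to kallsyms.
 */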
int dso__load_sym(struct dso *dso, struct map *map, struct symsrc *syms_ss,
struct symsrc *runtime_ss, int kmodule)
{
int nr = 0;
int err = -1;
dso->symtab_type = syms_ss->type;
dso->is_64_bit = syms_ss->is_64_bit;
dso->rel = syms_ss->ehdr.e_type == ET_REL;
/*
* Modules may already have symbols from kallsyms, but those symbols
* have the wrong values for the dso maps, so remove them.
*/
if (kmodule && syms_ss->symtab)
symbols__delete(&dso->symbols);
if (!syms_ss->symtab) {
/*
* If the vmlinux is stripped, fail so we will fall back
* to using kallsyms. The vmlinux runtime symbols aren't
* of much use.
*/
if (dso->kernel)
return err;
} else {
err = dso__load_sym_internal(dso, map, syms_ss, runtime_ss,
kmodule, 0);
if (err < 0)
return err;
nr = err;
}
if (syms_ss->dynsym) {
err = dso__load_sym_internal(dso, map, syms_ss, runtime_ss,
kmodule, 1);
if (err < 0)
return err;
err += nr;
}
return err;
}
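/*
 * Walk the PT_LOAD program headers of 'elf' and call 'mapfn' for each
 * loadable segment: executable (PF_X) segments when 'exe' is true,
 * readable (PF_R) segments otherwise. The mapped length is the smaller
 * of p_memsz and p_filesz.
 */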
static int elf_read_maps(Elf *elf, bool exe, mapfn_t mapfn, void *data)
{
GElf_Phdr phdr;
size_t i, phdrnum;
int err;
u64 sz;
if (elf_getphdrnum(elf, &phdrnum))
return -1;
for (i = 0; i < phdrnum; i++) {
if (gelf_getphdr(elf, i, &phdr) == NULL)
return -1;
if (phdr.p_type != PT_LOAD)
continue;
if (exe) {
if (!(phdr.p_flags & PF_X))
continue;
} else {
if (!(phdr.p_flags & PF_R))
continue;
}
sz = min(phdr.p_memsz, phdr.p_filesz);
if (!sz)
continue;
err = mapfn(phdr.p_vaddr, sz, phdr.p_offset, data);
if (err)
return err;
}
return 0;
}
int file__read_maps(int fd, bool exe, mapfn_t mapfn, void *data,
bool *is_64_bit)
{
int err;
Elf *elf;
elf = elf_begin(fd, PERF_ELF_C_READ_MMAP, NULL);
if (elf == NULL)
return -1;
if (is_64_bit)
*is_64_bit = (gelf_getclass(elf) == ELFCLASS64);
err = elf_read_maps(elf, exe, mapfn, data);
elf_end(elf);
return err;
}
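/*
 * Classify the ELF object referred to by 'fd': 64-bit, x32 (32-bit class
 * on EM_X86_64) or 32-bit, returning DSO__TYPE_UNKNOWN if it is not ELF.
 */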
enum dso_type dso__type_fd(int fd)
{
enum dso_type dso_type = DSO__TYPE_UNKNOWN;
GElf_Ehdr ehdr;
Elf_Kind ek;
Elf *elf;
elf = elf_begin(fd, PERF_ELF_C_READ_MMAP, NULL);
if (elf == NULL)
goto out;
ek = elf_kind(elf);
if (ek != ELF_K_ELF)
goto out_end;
if (gelf_getclass(elf) == ELFCLASS64) {
dso_type = DSO__TYPE_64BIT;
goto out_end;
}
if (gelf_getehdr(elf, &ehdr) == NULL)
goto out_end;
if (ehdr.e_machine == EM_X86_64)
dso_type = DSO__TYPE_X32BIT;
else
dso_type = DSO__TYPE_32BIT;
out_end:
elf_end(elf);
out:
return dso_type;
}
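/*
 * Copy 'len' bytes from 'from' at 'from_offs' to 'to' at 'to_offs' in
 * page-sized chunks, returning 0 on success or -1 on error.
 */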
static int copy_bytes(int from, off_t from_offs, int to, off_t to_offs, u64 len)
{
ssize_t r;
size_t n;
int err = -1;
char *buf = malloc(page_size);
if (buf == NULL)
return -1;
if (lseek(to, to_offs, SEEK_SET) != to_offs)
goto out;
if (lseek(from, from_offs, SEEK_SET) != from_offs)
goto out;
while (len) {
n = page_size;
if (len < n)
n = len;
/* Use read because mmap won't work on proc files */
r = read(from, buf, n);
if (r < 0)
goto out;
if (!r)
break;
n = r;
r = write(to, buf, n);
if (r < 0)
goto out;
if ((size_t)r != n)
goto out;
len -= n;
}
err = 0;
out:
free(buf);
return err;
}
struct kcore {
int fd;
int elfclass;
Elf *elf;
GElf_Ehdr ehdr;
};
static int kcore__open(struct kcore *kcore, const char *filename)
{
GElf_Ehdr *ehdr;
kcore->fd = open(filename, O_RDONLY);
if (kcore->fd == -1)
return -1;
kcore->elf = elf_begin(kcore->fd, ELF_C_READ, NULL);
if (!kcore->elf)
goto out_close;
kcore->elfclass = gelf_getclass(kcore->elf);
if (kcore->elfclass == ELFCLASSNONE)
goto out_end;
ehdr = gelf_getehdr(kcore->elf, &kcore->ehdr);
if (!ehdr)
goto out_end;
return 0;
out_end:
elf_end(kcore->elf);
out_close:
close(kcore->fd);
return -1;
}
static int kcore__init(struct kcore *kcore, char *filename, int elfclass,
bool temp)
{
kcore->elfclass = elfclass;
if (temp)
kcore->fd = mkstemp(filename);
else
kcore->fd = open(filename, O_WRONLY | O_CREAT | O_EXCL, 0400);
if (kcore->fd == -1)
return -1;
kcore->elf = elf_begin(kcore->fd, ELF_C_WRITE, NULL);
if (!kcore->elf)
goto out_close;
if (!gelf_newehdr(kcore->elf, elfclass))
goto out_end;
memset(&kcore->ehdr, 0, sizeof(GElf_Ehdr));
return 0;
out_end:
elf_end(kcore->elf);
out_close:
close(kcore->fd);
unlink(filename);
return -1;
}
static void kcore__close(struct kcore *kcore)
{
elf_end(kcore->elf);
close(kcore->fd);
}
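/*
 * Copy the ELF header from 'from' to 'to' for a kcore extract: keep the
 * identification, type, machine and flags, drop all section header
 * information, and size the program header table for 'count' entries
 * according to the ELF class.
 */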
static int kcore__copy_hdr(struct kcore *from, struct kcore *to, size_t count)
{
GElf_Ehdr *ehdr = &to->ehdr;
GElf_Ehdr *kehdr = &from->ehdr;
memcpy(ehdr->e_ident, kehdr->e_ident, EI_NIDENT);
ehdr->e_type = kehdr->e_type;
ehdr->e_machine = kehdr->e_machine;
ehdr->e_version = kehdr->e_version;
ehdr->e_entry = 0;
ehdr->e_shoff = 0;
ehdr->e_flags = kehdr->e_flags;
ehdr->e_phnum = count;
ehdr->e_shentsize = 0;
ehdr->e_shnum = 0;
ehdr->e_shstrndx = 0;
if (from->elfclass == ELFCLASS32) {
ehdr->e_phoff = sizeof(Elf32_Ehdr);
ehdr->e_ehsize = sizeof(Elf32_Ehdr);
ehdr->e_phentsize = sizeof(Elf32_Phdr);
} else {
ehdr->e_phoff = sizeof(Elf64_Ehdr);
ehdr->e_ehsize = sizeof(Elf64_Ehdr);
ehdr->e_phentsize = sizeof(Elf64_Phdr);
}
if (!gelf_update_ehdr(to->elf, ehdr))
return -1;
if (!gelf_newphdr(to->elf, count))
return -1;
return 0;
}
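/*
 * Write a PT_LOAD program header at index 'idx' describing 'len' bytes of
 * data at file offset 'offset', loaded at virtual address 'addr' with
 * read/write/execute permissions and page alignment.
 */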
static int kcore__add_phdr(struct kcore *kcore, int idx, off_t offset,
u64 addr, u64 len)
{
GElf_Phdr phdr = {
.p_type = PT_LOAD,
.p_flags = PF_R | PF_W | PF_X,
.p_offset = offset,
.p_vaddr = addr,
.p_paddr = 0,
.p_filesz = len,
.p_memsz = len,
.p_align = page_size,
};
if (!gelf_update_phdr(kcore->elf, idx, &phdr))
return -1;
return 0;
}
static off_t kcore__write(struct kcore *kcore)
{
return elf_update(kcore->elf, ELF_C_WRITE);
}
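/*
 * struct phdr_data - program header data for the kcore copy.
 * @offset: file offset of the data in the source kcore
 * @rel:    offset of the data relative to the start of the copied data
 * @addr:   virtual address of the data
 * @len:    length of the data
 * @node:   list node for kcore_copy_info->phdrs
 * @remaps: phdr whose file range contains this phdr's data
 *          (see kcore_copy__find_remaps())
 */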
struct phdr_data {
off_t offset;
off_t rel;
u64 addr;
u64 len;
struct list_head node;
struct phdr_data *remaps;
};
struct sym_data {
u64 addr;
struct list_head node;
};
struct kcore_copy_info {
u64 stext;
u64 etext;
u64 first_symbol;
u64 last_symbol;
u64 first_module;
u64 first_module_symbol;
u64 last_module_symbol;
size_t phnum;
struct list_head phdrs;
struct list_head syms;
};
#define kcore_copy__for_each_phdr(k, p) \
list_for_each_entry((p), &(k)->phdrs, node)
static struct phdr_data *phdr_data__new(u64 addr, u64 len, off_t offset)
{
struct phdr_data *p = zalloc(sizeof(*p));
if (p) {
p->addr = addr;
p->len = len;
p->offset = offset;
}
return p;
}
static struct phdr_data *kcore_copy_info__addnew(struct kcore_copy_info *kci,
u64 addr, u64 len,
off_t offset)
{
struct phdr_data *p = phdr_data__new(addr, len, offset);
if (p)
list_add_tail(&p->node, &kci->phdrs);
return p;
}
static void kcore_copy__free_phdrs(struct kcore_copy_info *kci)
{
struct phdr_data *p, *tmp;
list_for_each_entry_safe(p, tmp, &kci->phdrs, node) {
list_del_init(&p->node);
free(p);
}
}
static struct sym_data *kcore_copy__new_sym(struct kcore_copy_info *kci,
u64 addr)
{
struct sym_data *s = zalloc(sizeof(*s));
if (s) {
s->addr = addr;
list_add_tail(&s->node, &kci->syms);
}
return s;
}
static void kcore_copy__free_syms(struct kcore_copy_info *kci)
{
struct sym_data *s, *tmp;
list_for_each_entry_safe(s, tmp, &kci->syms, node) {
list_del_init(&s->node);
free(s);
}
}
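/*
 * kallsyms__parse() callback: record the lowest and highest function
 * addresses for the kernel and for modules (module symbols carry a
 * "[module]" suffix), note _stext/_etext, and remember entry trampoline
 * symbols so their pages can be mapped as well.
 */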
static int kcore_copy__process_kallsyms(void *arg, const char *name, char type,
u64 start)
{
struct kcore_copy_info *kci = arg;
if (!kallsyms__is_function(type))
return 0;
if (strchr(name, '[')) {
if (!kci->first_module_symbol || start < kci->first_module_symbol)
kci->first_module_symbol = start;
if (start > kci->last_module_symbol)
kci->last_module_symbol = start;
return 0;
}
if (!kci->first_symbol || start < kci->first_symbol)
kci->first_symbol = start;
if (!kci->last_symbol || start > kci->last_symbol)
kci->last_symbol = start;
if (!strcmp(name, "_stext")) {
kci->stext = start;
return 0;
}
if (!strcmp(name, "_etext")) {
kci->etext = start;
return 0;
}
if (is_entry_trampoline(name) && !kcore_copy__new_sym(kci, start))
return -1;
return 0;
}
static int kcore_copy__parse_kallsyms(struct kcore_copy_info *kci,
const char *dir)
{
char kallsyms_filename[PATH_MAX];
scnprintf(kallsyms_filename, PATH_MAX, "%s/kallsyms", dir);
if (symbol__restricted_filename(kallsyms_filename, "/proc/kallsyms"))
return -1;
if (kallsyms__parse(kallsyms_filename, kci,
kcore_copy__process_kallsyms) < 0)
return -1;
return 0;
}
static int kcore_copy__process_modules(void *arg,
const char *name __maybe_unused,
u64 start, u64 size __maybe_unused)
{
struct kcore_copy_info *kci = arg;
if (!kci->first_module || start < kci->first_module)
kci->first_module = start;
return 0;
}
static int kcore_copy__parse_modules(struct kcore_copy_info *kci,
const char *dir)
{
char modules_filename[PATH_MAX];
scnprintf(modules_filename, PATH_MAX, "%s/modules", dir);
if (symbol__restricted_filename(modules_filename, "/proc/modules"))
return -1;
if (modules__parse(modules_filename, kci,
kcore_copy__process_modules) < 0)
return -1;
return 0;
}
static int kcore_copy__map(struct kcore_copy_info *kci, u64 start, u64 end,
u64 pgoff, u64 s, u64 e)
{
u64 len, offset;
if (s < start || s >= end)
return 0;
offset = (s - start) + pgoff;
len = e < end ? e - s : end - s;
return kcore_copy_info__addnew(kci, s, len, offset) ? 0 : -1;
}
static int kcore_copy__read_map(u64 start, u64 len, u64 pgoff, void *data)
{
struct kcore_copy_info *kci = data;
u64 end = start + len;
struct sym_data *sdat;
if (kcore_copy__map(kci, start, end, pgoff, kci->stext, kci->etext))
return -1;
if (kcore_copy__map(kci, start, end, pgoff, kci->first_module,
kci->last_module_symbol))
return -1;
list_for_each_entry(sdat, &kci->syms, node) {
u64 s = round_down(sdat->addr, page_size);
if (kcore_copy__map(kci, start, end, pgoff, s, s + len))
return -1;
}
return 0;
}
static int kcore_copy__read_maps(struct kcore_copy_info *kci, Elf *elf)
{
if (elf_read_maps(elf, true, kcore_copy__read_map, kci) < 0)
return -1;
return 0;
}
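/*
 * Find the phdr that contains the kernel text (_stext) and mark every
 * other phdr whose file range lies within it as a remap of it; remapped
 * phdrs reuse the kernel phdr's data (see kcore_copy__layout()).
 */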
static void kcore_copy__find_remaps(struct kcore_copy_info *kci)
{
struct phdr_data *p, *k = NULL;
u64 kend;
if (!kci->stext)
return;
/* Find phdr that corresponds to the kernel map (contains stext) */
kcore_copy__for_each_phdr(kci, p) {
u64 pend = p->addr + p->len - 1;
if (p->addr <= kci->stext && pend >= kci->stext) {
k = p;
break;
}
}
if (!k)
return;
kend = k->offset + k->len;
/* Find phdrs that remap the kernel */
kcore_copy__for_each_phdr(kci, p) {
u64 pend = p->offset + p->len;
if (p == k)
continue;
if (p->offset >= k->offset && pend <= kend)
p->remaps = k;
}
}
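/*
 * Lay out the copied phdrs: phdrs that are not remaps are placed one after
 * another (p->rel is their relative position), remaps reuse the position of
 * the phdr they alias at the corresponding offset, and kci->phnum is set to
 * the total number of phdrs.
 */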
static void kcore_copy__layout(struct kcore_copy_info *kci)
{
struct phdr_data *p;
off_t rel = 0;
kcore_copy__find_remaps(kci);
kcore_copy__for_each_phdr(kci, p) {
if (!p->remaps) {
p->rel = rel;
rel += p->len;
}
kci->phnum += 1;
}
kcore_copy__for_each_phdr(kci, p) {
struct phdr_data *k = p->remaps;
if (k)
p->rel = p->offset - k->offset + k->rel;
}
}
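/*
 * Determine what to copy: parse kallsyms and modules to find the kernel
 * text (_stext/_etext, falling back to the first/last function symbol) and
 * the module area, round the boundaries to page size, then read the kcore
 * maps and compute the layout of the copy.
 */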
static int kcore_copy__calc_maps(struct kcore_copy_info *kci, const char *dir,
Elf *elf)
{
if (kcore_copy__parse_kallsyms(kci, dir))
return -1;
if (kcore_copy__parse_modules(kci, dir))
return -1;
if (kci->stext)
kci->stext = round_down(kci->stext, page_size);
else
kci->stext = round_down(kci->first_symbol, page_size);
if (kci->etext) {
kci->etext = round_up(kci->etext, page_size);
} else if (kci->last_symbol) {
kci->etext = round_up(kci->last_symbol, page_size);
kci->etext += page_size;
}
if (kci->first_module_symbol &&
(!kci->first_module || kci->first_module_symbol < kci->first_module))
kci->first_module = kci->first_module_symbol;
kci->first_module = round_down(kci->first_module, page_size);
if (kci->last_module_symbol) {
kci->last_module_symbol = round_up(kci->last_module_symbol,
page_size);
kci->last_module_symbol += page_size;
}
if (!kci->stext || !kci->etext)
return -1;
if (kci->first_module && !kci->last_module_symbol)
return -1;
if (kcore_copy__read_maps(kci, elf))
return -1;
kcore_copy__layout(kci);
return 0;
}
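/* Copy @name from @from_dir to @to_dir, making the copy read-only (0400) */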
static int kcore_copy__copy_file(const char *from_dir, const char *to_dir,
const char *name)
{
char from_filename[PATH_MAX];
char to_filename[PATH_MAX];
scnprintf(from_filename, PATH_MAX, "%s/%s", from_dir, name);
scnprintf(to_filename, PATH_MAX, "%s/%s", to_dir, name);
return copyfile_mode(from_filename, to_filename, 0400);
}
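/* Remove the file @name from directory @dir */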
static int kcore_copy__unlink(const char *dir, const char *name)
{
char filename[PATH_MAX];
scnprintf(filename, PATH_MAX, "%s/%s", dir, name);
return unlink(filename);
}
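/*
 * Compare two open file descriptors byte by byte, reading in page-sized
 * chunks.  Returns 0 if every byte read from @from matches @to, -1 on
 * mismatch or error.
 */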
static int kcore_copy__compare_fds(int from, int to)
{
char *buf_from;
char *buf_to;
ssize_t ret;
size_t len;
int err = -1;
buf_from = malloc(page_size);
buf_to = malloc(page_size);
if (!buf_from || !buf_to)
goto out;
while (1) {
/* Use read because mmap won't work on proc files */
ret = read(from, buf_from, page_size);
if (ret < 0)
goto out;
if (!ret)
break;
len = ret;
if (readn(to, buf_to, len) != (int)len)
goto out;
if (memcmp(buf_from, buf_to, len))
goto out;
}
err = 0;
out:
free(buf_to);
free(buf_from);
return err;
}
static int kcore_copy__compare_files(const char *from_filename,
const char *to_filename)
{
int from, to, err = -1;
from = open(from_filename, O_RDONLY);
if (from < 0)
return -1;
to = open(to_filename, O_RDONLY);
if (to < 0)
goto out_close_from;
err = kcore_copy__compare_fds(from, to);
close(to);
out_close_from:
close(from);
return err;
}
static int kcore_copy__compare_file(const char *from_dir, const char *to_dir,
const char *name)
{
char from_filename[PATH_MAX];
char to_filename[PATH_MAX];
scnprintf(from_filename, PATH_MAX, "%s/%s", from_dir, name);
scnprintf(to_filename, PATH_MAX, "%s/%s", to_dir, name);
return kcore_copy__compare_files(from_filename, to_filename);
}
/**
* kcore_copy - copy kallsyms, modules and kcore from one directory to another.
* @from_dir: from directory
* @to_dir: to directory
*
* This function copies kallsyms, modules and kcore files from one directory to
* another. kallsyms and modules are copied entirely. Only code segments are
* copied from kcore. It is assumed that two segments suffice: one for the
* kernel proper and one for all the modules. The code segments are determined
* from kallsyms and modules files. The kernel map starts at _stext or the
* lowest function symbol, and ends at _etext or the highest function symbol.
* The module map starts at the lowest module address and ends at the highest
* module symbol. Start addresses are rounded down to the nearest page. End
* addresses are rounded up to the nearest page. An extra page is added to the
* highest kernel symbol and highest module symbol to, hopefully, encompass that
* symbol too. Because it contains only code sections, the resulting kcore is
* unusual. One significant peculiarity is that the mapping (start -> pgoff)
* is not the same for the kernel map and the modules map. That happens because
* the data is copied adjacently whereas the original kcore has gaps. Finally,
 * the kallsyms file is compared with its copy to check that modules have
 * not been loaded or unloaded while the copies were taking place.
*
* Return: %0 on success, %-1 on failure.
*/
int kcore_copy(const char *from_dir, const char *to_dir)
{
struct kcore kcore;
struct kcore extract;
int idx = 0, err = -1;
off_t offset, sz;
struct kcore_copy_info kci = { .stext = 0, };
char kcore_filename[PATH_MAX];
char extract_filename[PATH_MAX];
struct phdr_data *p;
INIT_LIST_HEAD(&kci.phdrs);
INIT_LIST_HEAD(&kci.syms);
if (kcore_copy__copy_file(from_dir, to_dir, "kallsyms"))
return -1;
if (kcore_copy__copy_file(from_dir, to_dir, "modules"))
goto out_unlink_kallsyms;
scnprintf(kcore_filename, PATH_MAX, "%s/kcore", from_dir);
scnprintf(extract_filename, PATH_MAX, "%s/kcore", to_dir);
if (kcore__open(&kcore, kcore_filename))
goto out_unlink_modules;
if (kcore_copy__calc_maps(&kci, from_dir, kcore.elf))
goto out_kcore_close;
if (kcore__init(&extract, extract_filename, kcore.elfclass, false))
goto out_kcore_close;
if (kcore__copy_hdr(&kcore, &extract, kci.phnum))
goto out_extract_close;
offset = gelf_fsize(extract.elf, ELF_T_EHDR, 1, EV_CURRENT) +
gelf_fsize(extract.elf, ELF_T_PHDR, kci.phnum, EV_CURRENT);
offset = round_up(offset, page_size);
kcore_copy__for_each_phdr(&kci, p) {
off_t offs = p->rel + offset;
if (kcore__add_phdr(&extract, idx++, offs, p->addr, p->len))
goto out_extract_close;
}
sz = kcore__write(&extract);
if (sz < 0 || sz > offset)
goto out_extract_close;
kcore_copy__for_each_phdr(&kci, p) {
off_t offs = p->rel + offset;
if (p->remaps)
continue;
if (copy_bytes(kcore.fd, p->offset, extract.fd, offs, p->len))
goto out_extract_close;
}
if (kcore_copy__compare_file(from_dir, to_dir, "kallsyms"))
goto out_extract_close;
err = 0;
out_extract_close:
kcore__close(&extract);
if (err)
unlink(extract_filename);
out_kcore_close:
kcore__close(&kcore);
out_unlink_modules:
if (err)
kcore_copy__unlink(to_dir, "modules");
out_unlink_kallsyms:
if (err)
kcore_copy__unlink(to_dir, "kallsyms");
kcore_copy__free_phdrs(&kci);
kcore_copy__free_syms(&kci);
return err;
}
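/*
 * Extract a single chunk of kcore (kce->len bytes at file offset kce->offs,
 * mapped at address kce->addr) into a new ELF file with a single program
 * header.  The output file name is initialized from PERF_KCORE_EXTRACT and
 * kept in kce->extract_filename.
 */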
int kcore_extract__create(struct kcore_extract *kce)
{
struct kcore kcore;
struct kcore extract;
size_t count = 1;
int idx = 0, err = -1;
off_t offset = page_size, sz;
if (kcore__open(&kcore, kce->kcore_filename))
return -1;
strcpy(kce->extract_filename, PERF_KCORE_EXTRACT);
if (kcore__init(&extract, kce->extract_filename, kcore.elfclass, true))
goto out_kcore_close;
if (kcore__copy_hdr(&kcore, &extract, count))
goto out_extract_close;
if (kcore__add_phdr(&extract, idx, offset, kce->addr, kce->len))
goto out_extract_close;
sz = kcore__write(&extract);
if (sz < 0 || sz > offset)
goto out_extract_close;
if (copy_bytes(kcore.fd, kce->offs, extract.fd, offset, kce->len))
goto out_extract_close;
err = 0;
out_extract_close:
kcore__close(&extract);
if (err)
unlink(kce->extract_filename);
out_kcore_close:
kcore__close(&kcore);
return err;
}
void kcore_extract__delete(struct kcore_extract *kce)
{
unlink(kce->extract_filename);
}
#ifdef HAVE_GELF_GETNOTE_SUPPORT
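/*
 * Adjust the SDT probe location for prelinking: shift the recorded location
 * by the difference between the file offset of the .stapsdt.base section
 * (@base_off) and the base address stored in the note.
 */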
static void sdt_adjust_loc(struct sdt_note *tmp, GElf_Addr base_off)
{
if (!base_off)
return;
if (tmp->bit32)
tmp->addr.a32[SDT_NOTE_IDX_LOC] =
tmp->addr.a32[SDT_NOTE_IDX_LOC] + base_off -
tmp->addr.a32[SDT_NOTE_IDX_BASE];
else
tmp->addr.a64[SDT_NOTE_IDX_LOC] =
tmp->addr.a64[SDT_NOTE_IDX_LOC] + base_off -
tmp->addr.a64[SDT_NOTE_IDX_BASE];
}
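/*
 * Adjust the SDT semaphore (reference counter) address: subtract the
 * difference between the probes section's address (@base_addr) and its file
 * offset (@base_off), converting the counter's virtual address into a file
 * offset.
 */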
static void sdt_adjust_refctr(struct sdt_note *tmp, GElf_Addr base_addr,
GElf_Addr base_off)
{
if (!base_off)
return;
if (tmp->bit32 && tmp->addr.a32[SDT_NOTE_IDX_REFCTR])
tmp->addr.a32[SDT_NOTE_IDX_REFCTR] -= (base_addr - base_off);
else if (tmp->addr.a64[SDT_NOTE_IDX_REFCTR])
tmp->addr.a64[SDT_NOTE_IDX_REFCTR] -= (base_addr - base_off);
}
/**
* populate_sdt_note : Parse raw data and identify SDT note
* @elf: elf of the opened file
* @data: raw data of a section with description offset applied
* @len: note description size
* @sdt_notes: List to add the SDT note
*
 * Responsible for parsing the @data in the .note.stapsdt section of @elf
 * and, if it is an SDT note, appending it to the @sdt_notes list.
*/
static int populate_sdt_note(Elf **elf, const char *data, size_t len,
struct list_head *sdt_notes)
{
const char *provider, *name, *args;
struct sdt_note *tmp = NULL;
GElf_Ehdr ehdr;
GElf_Shdr shdr;
int ret = -EINVAL;
union {
Elf64_Addr a64[NR_ADDR];
Elf32_Addr a32[NR_ADDR];
} buf;
Elf_Data dst = {
.d_buf = &buf, .d_type = ELF_T_ADDR, .d_version = EV_CURRENT,
.d_size = gelf_fsize((*elf), ELF_T_ADDR, NR_ADDR, EV_CURRENT),
.d_off = 0, .d_align = 0
};
Elf_Data src = {
.d_buf = (void *) data, .d_type = ELF_T_ADDR,
.d_version = EV_CURRENT, .d_size = dst.d_size, .d_off = 0,
.d_align = 0
};
tmp = (struct sdt_note *)calloc(1, sizeof(struct sdt_note));
if (!tmp) {
ret = -ENOMEM;
goto out_err;
}
INIT_LIST_HEAD(&tmp->note_list);
if (len < dst.d_size + 3)
goto out_free_note;
/* Translation from file representation to memory representation */
if (gelf_xlatetom(*elf, &dst, &src,
elf_getident(*elf, NULL)[EI_DATA]) == NULL) {
pr_err("gelf_xlatetom : %s\n", elf_errmsg(-1));
goto out_free_note;
}
/* Populate the fields of sdt_note */
provider = data + dst.d_size;
name = (const char *)memchr(provider, '\0', data + len - provider);
if (name++ == NULL)
goto out_free_note;
tmp->provider = strdup(provider);
if (!tmp->provider) {
ret = -ENOMEM;
goto out_free_note;
}
tmp->name = strdup(name);
if (!tmp->name) {
ret = -ENOMEM;
goto out_free_prov;
}
args = memchr(name, '\0', data + len - name);
/*
* There is no argument if:
* - We reached the end of the note;
* - There is not enough room to hold a potential string;
* - The argument string is empty or just contains ':'.
*/
if (args == NULL || data + len - args < 2 ||
args[1] == ':' || args[1] == '\0')
tmp->args = NULL;
else {
tmp->args = strdup(++args);
if (!tmp->args) {
ret = -ENOMEM;
goto out_free_name;
}
}
if (gelf_getclass(*elf) == ELFCLASS32) {
memcpy(&tmp->addr, &buf, 3 * sizeof(Elf32_Addr));
tmp->bit32 = true;
} else {
memcpy(&tmp->addr, &buf, 3 * sizeof(Elf64_Addr));
tmp->bit32 = false;
}
if (!gelf_getehdr(*elf, &ehdr)) {
pr_debug("%s : cannot get elf header.\n", __func__);
ret = -EBADF;
goto out_free_args;
}
	/*
	 * Adjust for the prelink effect:
	 * find the .stapsdt.base section, which helps handle prelinking
	 * (if present).  Compare the file offset of the base section with
	 * the base address in the description of the SDT note.  If they
	 * differ, adjust the note location accordingly.
	 */
if (elf_section_by_name(*elf, &ehdr, &shdr, SDT_BASE_SCN, NULL))
sdt_adjust_loc(tmp, shdr.sh_offset);
/* Adjust reference counter offset */
if (elf_section_by_name(*elf, &ehdr, &shdr, SDT_PROBES_SCN, NULL))
sdt_adjust_refctr(tmp, shdr.sh_addr, shdr.sh_offset);
list_add_tail(&tmp->note_list, sdt_notes);
return 0;
out_free_args:
zfree(&tmp->args);
out_free_name:
zfree(&tmp->name);
out_free_prov:
zfree(&tmp->provider);
out_free_note:
free(tmp);
out_err:
return ret;
}
/**
* construct_sdt_notes_list : constructs a list of SDT notes
* @elf : elf to look into
* @sdt_notes : empty list_head
*
 * Scans the sections in 'elf' for the .note.stapsdt section, then calls
 * populate_sdt_note() to parse the SDT events found and populate the
 * 'sdt_notes' list.
*/
static int construct_sdt_notes_list(Elf *elf, struct list_head *sdt_notes)
{
GElf_Ehdr ehdr;
Elf_Scn *scn = NULL;
Elf_Data *data;
GElf_Shdr shdr;
size_t shstrndx, next;
GElf_Nhdr nhdr;
size_t name_off, desc_off, offset;
int ret = 0;
if (gelf_getehdr(elf, &ehdr) == NULL) {
ret = -EBADF;
goto out_ret;
}
if (elf_getshdrstrndx(elf, &shstrndx) != 0) {
ret = -EBADF;
goto out_ret;
}
/* Look for the required section */
scn = elf_section_by_name(elf, &ehdr, &shdr, SDT_NOTE_SCN, NULL);
if (!scn) {
ret = -ENOENT;
goto out_ret;
}
if ((shdr.sh_type != SHT_NOTE) || (shdr.sh_flags & SHF_ALLOC)) {
ret = -ENOENT;
goto out_ret;
}
data = elf_getdata(scn, NULL);
/* Get the SDT notes */
for (offset = 0; (next = gelf_getnote(data, offset, &nhdr, &name_off,
&desc_off)) > 0; offset = next) {
if (nhdr.n_namesz == sizeof(SDT_NOTE_NAME) &&
!memcmp(data->d_buf + name_off, SDT_NOTE_NAME,
sizeof(SDT_NOTE_NAME))) {
/* Check the type of the note */
if (nhdr.n_type != SDT_NOTE_TYPE)
goto out_ret;
ret = populate_sdt_note(&elf, ((data->d_buf) + desc_off),
nhdr.n_descsz, sdt_notes);
if (ret < 0)
goto out_ret;
}
}
if (list_empty(sdt_notes))
ret = -ENOENT;
out_ret:
return ret;
}
/**
* get_sdt_note_list : Wrapper to construct a list of sdt notes
* @head : empty list_head
* @target : file to find SDT notes from
*
* This opens the file, initializes
* the ELF and then calls construct_sdt_notes_list.
*/
int get_sdt_note_list(struct list_head *head, const char *target)
{
Elf *elf;
int fd, ret;
fd = open(target, O_RDONLY);
if (fd < 0)
return -EBADF;
elf = elf_begin(fd, PERF_ELF_C_READ_MMAP, NULL);
if (!elf) {
ret = -EBADF;
goto out_close;
}
ret = construct_sdt_notes_list(elf, head);
elf_end(elf);
out_close:
close(fd);
return ret;
}
/**
* cleanup_sdt_note_list : free the sdt notes' list
* @sdt_notes: sdt notes' list
*
* Free up the SDT notes in @sdt_notes.
 * Returns the number of SDT notes freed.
*/
int cleanup_sdt_note_list(struct list_head *sdt_notes)
{
struct sdt_note *tmp, *pos;
int nr_free = 0;
list_for_each_entry_safe(pos, tmp, sdt_notes, note_list) {
list_del_init(&pos->note_list);
zfree(&pos->args);
zfree(&pos->name);
zfree(&pos->provider);
free(pos);
nr_free++;
}
return nr_free;
}
/**
* sdt_notes__get_count: Counts the number of sdt events
* @start: list_head to sdt_notes list
*
* Returns the number of SDT notes in a list
*/
int sdt_notes__get_count(struct list_head *start)
{
struct sdt_note *sdt_ptr;
int count = 0;
list_for_each_entry(sdt_ptr, start, note_list)
count++;
return count;
}
#endif
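/* libelf requires elf_version() to be called before any other libelf API */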
void symbol__elf_init(void)
{
elf_version(EV_CURRENT);
}