Merge branch 'bpf-support-resilient-split-btf'

Alan Maguire says:

====================
bpf: support resilient split BTF

Split BPF Type Format (BTF) provides huge advantages in that kernel
modules only have to provide type information for types that they do not
share with the core kernel; for core kernel types, split BTF refers to
core kernel BTF type ids.  So for a STRUCT sk_buff, a module that
uses that structure (or a pointer to it) simply needs to refer to the
core kernel type id, saving the need to define the structure and its many
dependents.  This cuts down on duplication and makes BTF as compact
as possible.

However, there is a downside.  This scheme requires the references from
split BTF to base BTF to be valid not just at encoding time, but at use
time (when the module is loaded).  Even a small change in kernel types
can perturb the type ids in core kernel BTF, and - if the new reproducible
BTF option is not used - pahole's parallel processing of compilation units
can lead to different type ids for the same kernel if the BTF is
regenerated.

So we have a robustness problem for split BTF for cases where a module is
not always compiled at the same time as the kernel.  This problem is
particularly acute for distros which generally want module builders to be
able to compile a module for the lifetime of a Linux stable-based release,
and have it continue to be valid over the lifetime of that release, even
as changes in data structures (and hence BTF types) accrue.  Today it is
not possible to generate BTF for modules that remains valid beyond the
initial kernel it is compiled against - kernel bugfixes etc. invalidate
the split BTF references to vmlinux BTF, and BTF is no longer usable for
the module.

The goal of this series is to provide additional context for cases like
this.  That context comes in the form of
distilled base BTF; it stands in for the base BTF, and contains
information about the types referenced from split BTF, but not their
full descriptions.  The modified split BTF will refer to type ids in
this .BTF.base section, and when the kernel loads such modules it
will use that .BTF.base to map references from split BTF to the
equivalent current vmlinux base BTF types.  Once this relocation
process has succeeded, the module BTF available in /sys/kernel/btf
will look exactly as if it was built with the current vmlinux;
references to base types will be fixed up etc.

A module builder - using this series along with the pahole changes -
can then build a module with distilled base BTF via an out-of-tree
module build, i.e.

make -C . M=path/2/module

The module will have a .BTF section (the split BTF) and a
.BTF.base section.  The latter is small in size - distilled base
BTF does not need full struct/union/enum information for named
types, for example.  For 2667 modules built with distilled base BTF,
the average size observed was 1556 bytes (stddev 1563).  The overall
size added to these 2667 modules was 5.3MB.

Note that this approach is not needed for in-tree modules, since their
split BTF is always built and rebuilt together with the base BTF.

The series first focuses on generating split BTF with distilled base
BTF; then relocation support is added to allow split BTF with
an associated distilled base to be relocated with a new base BTF.

Next Eduard's patch allows BTF ELF parsing to work with both
.BTF and .BTF.base sections; this ensures that bpftool will be
able to dump BTF for a module with a .BTF.base section for example,
or indeed dump relocated BTF where both a module and a base BTF (via
"-B vmlinux") are supplied.

Then we add support to resolve_btfids to ignore base BTF - i.e.
to avoid relocation - if a .BTF.base section is found.  This ensures
the .BTF.ids section is populated with ids relative to the distilled
base (these will be relocated as part of module load).

Finally the series supports storage of .BTF.base data/size in modules
and shares the relocation code with the kernel to allow relocation of
module BTF.  For the kernel, this relocation process happens at module
load time, when we relocate split BTF references to point at types in
the current vmlinux BTF.  As part of this, .BTF.ids references also need
to be mapped.

So concretely, what happens is:

- we generate split BTF in the .BTF section of a module that refers to
  types in the .BTF.base section as base types; the latter are not full
  type descriptions but provide information about the base type.  So
  a STRUCT sk_buff would be represented as a FWD struct sk_buff in
  distilled base BTF for example.
- when the module is loaded, the split BTF is relocated with vmlinux
  BTF; in the case of the FWD struct sk_buff, we find the STRUCT sk_buff
  in vmlinux BTF and map all split BTF references to the distilled base
  FWD sk_buff, replacing them with references to the vmlinux BTF
  STRUCT sk_buff.

A previous approach to this problem [1] utilized standalone BTF for such
cases - where the BTF is not defined relative to base BTF so there is no
relocation required.  The problem with that approach is that from
the verifier perspective, some types are special, and having a custom
representation of a core kernel type that did not necessarily match the
current representation is not tenable.  So the approach taken here is to
preserve the split BTF model while minimizing the base type context
needed to relocate split BTF against the current vmlinux BTF.

To generate distilled .BTF.base sections, the associated dwarves patch
(to be applied on the "next" branch there) is needed [3].  Without it,
things will still work, but modules will not be built with a .BTF.base
section.

Changes since v5[4]:

- Update search of distilled types to return the first occurrence
  of a string (or a string+size pair); this allows us to iterate
  over all matches in distilled base BTF (Andrii, patch 3)
- Update to use BTF field iterators (Andrii, patches 1, 3 and 8)
- Update tests to cover multiple match and associated error cases
  (Eduard, patch 4)
- Rename elf_sections_info to btf_elf_secs, remove use of
  libbpf_get_error(), reset btf->owns_base when relocation
  succeeds (Andrii, patch 5)

Changes since v4[5]:

- Moved embeddedness, duplicate name checks to relocation time
  and record struct/union size for all distilled struct/unions
  instead of using forwards.  This allows us to carry out
  type compatibility checks based on the base BTF we want to
  relocate with (Eduard, patches 1, 3)
- Moved to using qsort() instead of qsort_r() as support for
  qsort_r() appears to be missing in Android libc (Andrii, patch 3)
- Sorting/searching now incorporates size matching depending
  on BTF kind and embeddedness of struct/union (Eduard, Andrii,
  patch 3)
- Improved naming of various types during relocation to avoid
  confusion (Andrii, patch 3)
- Incorporated Eduard's patch (patch 5) which handles .BTF.base
  sections internally in btf_parse_elf().  This makes ELF parsing
  work with split BTF, split BTF with a distilled base, split
  BTF with a distilled base _and_ base BTF (by relocating) etc.
  Having this avoids the need for bpftool changes; it will work
  as-is with .BTF.base sections (Eduard, patch 4)
- Updated resolve_btfids to _not_ relocate BTF for modules
  where a .BTF.base section is present; in that one case we
  do not want to relocate BTF as the .BTF.ids section should
  reflect ids in .BTF.base which will later be relocated on
  module load (Eduard, Andrii, patch 5)

Changes since v3[6]:

- distill now checks for duplicate-named struct/unions and records
  them as a sized struct/union to help identify which of the
  multiple base BTF structs/unions it refers to (Eduard, patch 1)
- added test support for multiple name handling (Eduard, patch 2)
- simplified the string mapping when updating split BTF to use
  base BTF instead of distilled base.  Since the only string
  references split BTF can make to base BTF are the names of
  the base types, create a string map from distilled string
  offset -> base BTF string offset and update string offsets
  by visiting all strings in split BTF; this saves having to
  do costly searches of base BTF (Eduard, patch 7,10)
- fixed bpftool manpage and indentation issues (Quentin, patch 11)

Also explored Eduard's suggestion of doing an implicit fallback
to checking for .BTF.base section in btf__parse() when it is
called to get base BTF.  However while it is doable, it turned
out to be difficult operationally.  Since fallback is implicit
we do not know the source of the BTF - was it from .BTF or
.BTF.base? In bpftool, we want to try first standalone BTF,
then split, then split with distilled base.  Having a way
to explicitly request .BTF.base via btf__parse_opts() fits
that model better.

Changes since v2[7]:

- submitted patch to use --btf_features in Makefile.btf for pahole
  v1.26 and later separately (Andrii).  That has landed in bpf-next
  now.
- distilled base now encodes ENUM64 as fwd ENUM (size 8), eliminating
  the need for support for ENUM64 in btf__add_fwd (patch 1, Andrii)
- moved to distilling only named types, augmenting split BTF with
  associated reference types; this simplifies greatly the distilled
  base BTF and the mapping operation between distilled and base
  BTF when relocating (most of the series changes, Andrii)
- relocation now iterates over base BTF, looking for matches based
  on name in distilled BTF.  Distilled BTF is pre-sorted by name
  (Andrii, patch 8)
- removed most redundant compatibility checks aside from struct
  size for base types/embedded structs and kind compatibility
  (since we only match on name) (Andrii, patch 8)
- btf__parse_opts() now replaces btf_parse() internally in libbpf
  (Eduard, patch 3)

Changes since RFC [8]:

- updated terminology; we replace clunky "base reference" BTF with
  distilling base BTF into a .BTF.base section.  Similarly BTF
  reconciliation becomes BTF relocation (Andrii, most patches)
- add distilled base BTF by default for out-of-tree modules
  (Alexei, patch 8)
- distill algorithm updated to record size of embedded struct/union
  by recording it as a 0-vlen STRUCT/UNION with size preserved
  (Andrii, patch 2)
- verify size match on relocation for such STRUCT/UNIONs (Andrii,
  patch 9)
- with embedded STRUCT/UNION recording size, we can have bpftool
  dump a header representation using .BTF.base + .BTF sections
  rather than special-casing and refusing to use "format c" for
  that case (patch 5)
- match enum with enum64 and vice versa (Andrii, patch 9)
- ensure that resolve_btfids works with BTF without .BTF.base
  section (patch 7)
- update tests to cover embedded types, arrays and function
  prototypes (patches 3, 12)

[1] https://lore.kernel.org/bpf/20231112124834.388735-14-alan.maguire@oracle.com/
[2] https://lore.kernel.org/bpf/20240501175035.2476830-1-alan.maguire@oracle.com/
[3] https://lore.kernel.org/bpf/20240517102714.4072080-1-alan.maguire@oracle.com/
[4] https://lore.kernel.org/bpf/20240528122408.3154936-1-alan.maguire@oracle.com/
[5] https://lore.kernel.org/bpf/20240517102246.4070184-1-alan.maguire@oracle.com/
[6] https://lore.kernel.org/bpf/20240510103052.850012-1-alan.maguire@oracle.com/
[7] https://lore.kernel.org/bpf/20240424154806.3417662-1-alan.maguire@oracle.com/
[8] https://lore.kernel.org/bpf/20240322102455.98558-1-alan.maguire@oracle.com/
====================

Link: https://lore.kernel.org/r/20240613095014.357981-1-alan.maguire@oracle.com
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
This commit is contained in:
Andrii Nakryiko 2024-06-17 14:38:32 -07:00
commit f6afdaf72a
8 changed files with 1558 additions and 71 deletions

--- a/tools/bpf/resolve_btfids/main.c
+++ b/tools/bpf/resolve_btfids/main.c

@@ -409,6 +409,14 @@ static int elf_collect(struct object *obj)
 			obj->efile.idlist = data;
 			obj->efile.idlist_shndx = idx;
 			obj->efile.idlist_addr = sh.sh_addr;
+		} else if (!strcmp(name, BTF_BASE_ELF_SEC)) {
+			/* If a .BTF.base section is found, do not resolve
+			 * BTF ids relative to vmlinux; resolve relative
+			 * to the .BTF.base section instead. btf__parse_split()
+			 * will take care of this once the base BTF it is
+			 * passed is NULL.
+			 */
+			obj->base_btf_path = NULL;
 		}

 		if (compressed_section_fix(elf, scn, &sh))

--- a/tools/lib/bpf/Build
+++ b/tools/lib/bpf/Build

@@ -1,4 +1,4 @@
 libbpf-y := libbpf.o bpf.o nlattr.o btf.o libbpf_errno.o str_error.o \
 	    netlink.o bpf_prog_linfo.o libbpf_probes.o hashmap.o \
 	    btf_dump.o ringbuf.o strset.o linker.o gen_loader.o relo_core.o \
-	    usdt.o zip.o elf.o features.o
+	    usdt.o zip.o elf.o features.o btf_relocate.o

--- a/tools/lib/bpf/btf.c
+++ b/tools/lib/bpf/btf.c

@@ -116,6 +116,9 @@ struct btf {
 	/* whether strings are already deduplicated */
 	bool strs_deduped;

+	/* whether base_btf should be freed in btf_free for this instance */
+	bool owns_base;
+
 	/* BTF object FD, if loaded into kernel */
 	int fd;

@@ -969,6 +972,8 @@ void btf__free(struct btf *btf)
 	free(btf->raw_data);
 	free(btf->raw_data_swapped);
 	free(btf->type_offs);
+	if (btf->owns_base)
+		btf__free(btf->base_btf);
 	free(btf);
 }
@@ -1084,16 +1089,86 @@ struct btf *btf__new_split(const void *data, __u32 size, struct btf *base_btf)
 	return libbpf_ptr(btf_new(data, size, base_btf));
 }

+struct btf_elf_secs {
+	Elf_Data *btf_data;
+	Elf_Data *btf_ext_data;
+	Elf_Data *btf_base_data;
+};
+
+static int btf_find_elf_sections(Elf *elf, const char *path, struct btf_elf_secs *secs)
+{
+	Elf_Scn *scn = NULL;
+	Elf_Data *data;
+	GElf_Ehdr ehdr;
+	size_t shstrndx;
+	int idx = 0;
+
+	if (!gelf_getehdr(elf, &ehdr)) {
+		pr_warn("failed to get EHDR from %s\n", path);
+		goto err;
+	}
+
+	if (elf_getshdrstrndx(elf, &shstrndx)) {
+		pr_warn("failed to get section names section index for %s\n",
+			path);
+		goto err;
+	}
+
+	if (!elf_rawdata(elf_getscn(elf, shstrndx), NULL)) {
+		pr_warn("failed to get e_shstrndx from %s\n", path);
+		goto err;
+	}
+
+	while ((scn = elf_nextscn(elf, scn)) != NULL) {
+		Elf_Data **field;
+		GElf_Shdr sh;
+		char *name;
+
+		idx++;
+		if (gelf_getshdr(scn, &sh) != &sh) {
+			pr_warn("failed to get section(%d) header from %s\n",
+				idx, path);
+			goto err;
+		}
+		name = elf_strptr(elf, shstrndx, sh.sh_name);
+		if (!name) {
+			pr_warn("failed to get section(%d) name from %s\n",
+				idx, path);
+			goto err;
+		}
+
+		if (strcmp(name, BTF_ELF_SEC) == 0)
+			field = &secs->btf_data;
+		else if (strcmp(name, BTF_EXT_ELF_SEC) == 0)
+			field = &secs->btf_ext_data;
+		else if (strcmp(name, BTF_BASE_ELF_SEC) == 0)
+			field = &secs->btf_base_data;
+		else
+			continue;
+
+		data = elf_getdata(scn, 0);
+		if (!data) {
+			pr_warn("failed to get section(%d, %s) data from %s\n",
+				idx, name, path);
+			goto err;
+		}
+		*field = data;
+	}
+
+	return 0;
+
+err:
+	return -LIBBPF_ERRNO__FORMAT;
+}
+
 static struct btf *btf_parse_elf(const char *path, struct btf *base_btf,
 				 struct btf_ext **btf_ext)
 {
-	Elf_Data *btf_data = NULL, *btf_ext_data = NULL;
-	int err = 0, fd = -1, idx = 0;
+	struct btf_elf_secs secs = {};
+	struct btf *dist_base_btf = NULL;
 	struct btf *btf = NULL;
-	Elf_Scn *scn = NULL;
+	int err = 0, fd = -1;
 	Elf *elf = NULL;
-	GElf_Ehdr ehdr;
-	size_t shstrndx;

 	if (elf_version(EV_CURRENT) == EV_NONE) {
 		pr_warn("failed to init libelf for %s\n", path);
@@ -1107,73 +1182,48 @@ static struct btf *btf_parse_elf(const char *path, struct btf *base_btf,
 		return ERR_PTR(err);
 	}

-	err = -LIBBPF_ERRNO__FORMAT;
 	elf = elf_begin(fd, ELF_C_READ, NULL);
 	if (!elf) {
 		pr_warn("failed to open %s as ELF file\n", path);
 		goto done;
 	}
-	if (!gelf_getehdr(elf, &ehdr)) {
-		pr_warn("failed to get EHDR from %s\n", path);
-		goto done;
-	}
-	if (elf_getshdrstrndx(elf, &shstrndx)) {
-		pr_warn("failed to get section names section index for %s\n",
-			path);
-		goto done;
-	}
-	if (!elf_rawdata(elf_getscn(elf, shstrndx), NULL)) {
-		pr_warn("failed to get e_shstrndx from %s\n", path);
-		goto done;
-	}
-	while ((scn = elf_nextscn(elf, scn)) != NULL) {
-		GElf_Shdr sh;
-		char *name;
-
-		idx++;
-		if (gelf_getshdr(scn, &sh) != &sh) {
-			pr_warn("failed to get section(%d) header from %s\n",
-				idx, path);
-			goto done;
-		}
-		name = elf_strptr(elf, shstrndx, sh.sh_name);
-		if (!name) {
-			pr_warn("failed to get section(%d) name from %s\n",
-				idx, path);
-			goto done;
-		}
-		if (strcmp(name, BTF_ELF_SEC) == 0) {
-			btf_data = elf_getdata(scn, 0);
-			if (!btf_data) {
-				pr_warn("failed to get section(%d, %s) data from %s\n",
-					idx, name, path);
-				goto done;
-			}
-			continue;
-		} else if (btf_ext && strcmp(name, BTF_EXT_ELF_SEC) == 0) {
-			btf_ext_data = elf_getdata(scn, 0);
-			if (!btf_ext_data) {
-				pr_warn("failed to get section(%d, %s) data from %s\n",
-					idx, name, path);
-				goto done;
-			}
-			continue;
-		}
-	}
-	if (!btf_data) {
+
+	err = btf_find_elf_sections(elf, path, &secs);
+	if (err)
+		goto done;
+
+	if (!secs.btf_data) {
 		pr_warn("failed to find '%s' ELF section in %s\n", BTF_ELF_SEC, path);
 		err = -ENODATA;
 		goto done;
 	}
-	btf = btf_new(btf_data->d_buf, btf_data->d_size, base_btf);
-	err = libbpf_get_error(btf);
-	if (err)
+
+	if (secs.btf_base_data) {
+		dist_base_btf = btf_new(secs.btf_base_data->d_buf, secs.btf_base_data->d_size,
+					NULL);
+		if (IS_ERR(dist_base_btf)) {
+			err = PTR_ERR(dist_base_btf);
+			dist_base_btf = NULL;
+			goto done;
+		}
+	}
+
+	btf = btf_new(secs.btf_data->d_buf, secs.btf_data->d_size,
+		      dist_base_btf ?: base_btf);
+	if (IS_ERR(btf)) {
+		err = PTR_ERR(btf);
 		goto done;
+	}
+
+	if (dist_base_btf && base_btf) {
+		err = btf__relocate(btf, base_btf);
+		if (err)
+			goto done;
+		btf__free(dist_base_btf);
+		dist_base_btf = NULL;
+	}
+
+	if (dist_base_btf)
+		btf->owns_base = true;

 	switch (gelf_getclass(elf)) {
 	case ELFCLASS32:
@@ -1187,11 +1237,12 @@ static struct btf *btf_parse_elf(const char *path, struct btf *base_btf,
 		break;
 	}

-	if (btf_ext && btf_ext_data) {
-		*btf_ext = btf_ext__new(btf_ext_data->d_buf, btf_ext_data->d_size);
-		err = libbpf_get_error(*btf_ext);
-		if (err)
+	if (btf_ext && secs.btf_ext_data) {
+		*btf_ext = btf_ext__new(secs.btf_ext_data->d_buf, secs.btf_ext_data->d_size);
+		if (IS_ERR(*btf_ext)) {
+			err = PTR_ERR(*btf_ext);
 			goto done;
+		}
 	} else if (btf_ext) {
 		*btf_ext = NULL;
 	}
@@ -1205,6 +1256,7 @@ done:
 	if (btf_ext)
 		btf_ext__free(*btf_ext);
+	btf__free(dist_base_btf);
 	btf__free(btf);

 	return ERR_PTR(err);
@@ -1770,9 +1822,8 @@ static int btf_rewrite_str(struct btf_pipe *p, __u32 *str_off)
 	return 0;
 }

-int btf__add_type(struct btf *btf, const struct btf *src_btf, const struct btf_type *src_type)
+static int btf_add_type(struct btf_pipe *p, const struct btf_type *src_type)
 {
-	struct btf_pipe p = { .src = src_btf, .dst = btf };
 	struct btf_field_iter it;
 	struct btf_type *t;
 	__u32 *str_off;
@@ -1783,10 +1834,10 @@ int btf__add_type(struct btf *btf, const struct btf *src_btf, const struct btf_t
 		return libbpf_err(sz);

 	/* deconstruct BTF, if necessary, and invalidate raw_data */
-	if (btf_ensure_modifiable(btf))
+	if (btf_ensure_modifiable(p->dst))
 		return libbpf_err(-ENOMEM);

-	t = btf_add_type_mem(btf, sz);
+	t = btf_add_type_mem(p->dst, sz);
 	if (!t)
 		return libbpf_err(-ENOMEM);
@@ -1797,12 +1848,19 @@ int btf__add_type(struct btf *btf, const struct btf *src_btf, const struct btf_t
 		return libbpf_err(err);

 	while ((str_off = btf_field_iter_next(&it))) {
-		err = btf_rewrite_str(&p, str_off);
+		err = btf_rewrite_str(p, str_off);
 		if (err)
 			return libbpf_err(err);
 	}

-	return btf_commit_type(btf, sz);
+	return btf_commit_type(p->dst, sz);
+}
+
+int btf__add_type(struct btf *btf, const struct btf *src_btf, const struct btf_type *src_type)
+{
+	struct btf_pipe p = { .src = src_btf, .dst = btf };
+
+	return btf_add_type(&p, src_type);
 }

 static size_t btf_dedup_identity_hash_fn(long key, void *ctx);
@@ -5276,3 +5334,325 @@ int btf_ext_visit_str_offs(struct btf_ext *btf_ext, str_off_visit_fn visit, void

 	return 0;
 }
+
+struct btf_distill {
+	struct btf_pipe pipe;
+	int *id_map;
+	unsigned int split_start_id;
+	unsigned int split_start_str;
+	int diff_id;
+};
+
+static int btf_add_distilled_type_ids(struct btf_distill *dist, __u32 i)
+{
+	struct btf_type *split_t = btf_type_by_id(dist->pipe.src, i);
+	struct btf_field_iter it;
+	__u32 *id;
+	int err;
+
+	err = btf_field_iter_init(&it, split_t, BTF_FIELD_ITER_IDS);
+	if (err)
+		return err;
+	while ((id = btf_field_iter_next(&it))) {
+		struct btf_type *base_t;
+
+		if (!*id)
+			continue;
+		/* split BTF id, not needed */
+		if (*id >= dist->split_start_id)
+			continue;
+		/* already added ? */
+		if (dist->id_map[*id] > 0)
+			continue;
+
+		/* only a subset of base BTF types should be referenced from
+		 * split BTF; ensure nothing unexpected is referenced.
+		 */
+		base_t = btf_type_by_id(dist->pipe.src, *id);
+		switch (btf_kind(base_t)) {
+		case BTF_KIND_INT:
+		case BTF_KIND_FLOAT:
+		case BTF_KIND_FWD:
+		case BTF_KIND_ARRAY:
+		case BTF_KIND_STRUCT:
+		case BTF_KIND_UNION:
+		case BTF_KIND_TYPEDEF:
+		case BTF_KIND_ENUM:
+		case BTF_KIND_ENUM64:
+		case BTF_KIND_PTR:
+		case BTF_KIND_CONST:
+		case BTF_KIND_RESTRICT:
+		case BTF_KIND_VOLATILE:
+		case BTF_KIND_FUNC_PROTO:
+		case BTF_KIND_TYPE_TAG:
+			dist->id_map[*id] = *id;
+			break;
+		default:
+			pr_warn("unexpected reference to base type[%u] of kind [%u] when creating distilled base BTF.\n",
+				*id, btf_kind(base_t));
+			return -EINVAL;
+		}
+		/* If a base type is used, ensure types it refers to are
+		 * marked as used also; so for example if we find a PTR to INT
+		 * we need both the PTR and INT.
+		 *
+		 * The only exception is named struct/unions, since distilled
+		 * base BTF composite types have no members.
+		 */
+		if (btf_is_composite(base_t) && base_t->name_off)
+			continue;
+		err = btf_add_distilled_type_ids(dist, *id);
+		if (err)
+			return err;
+	}
+	return 0;
+}
+
+static int btf_add_distilled_types(struct btf_distill *dist)
+{
+	bool adding_to_base = dist->pipe.dst->start_id == 1;
+	int id = btf__type_cnt(dist->pipe.dst);
+	struct btf_type *t;
+	int i, err = 0;
+
+	/* Add types for each of the required references to either distilled
+	 * base or split BTF, depending on type characteristics.
+	 */
+	for (i = 1; i < dist->split_start_id; i++) {
+		const char *name;
+		int kind;
+
+		if (!dist->id_map[i])
+			continue;
+		t = btf_type_by_id(dist->pipe.src, i);
+		kind = btf_kind(t);
+		name = btf__name_by_offset(dist->pipe.src, t->name_off);
+
+		switch (kind) {
+		case BTF_KIND_INT:
+		case BTF_KIND_FLOAT:
+		case BTF_KIND_FWD:
+			/* Named int, float, fwd are added to base. */
+			if (!adding_to_base)
+				continue;
+			err = btf_add_type(&dist->pipe, t);
+			break;
+		case BTF_KIND_STRUCT:
+		case BTF_KIND_UNION:
+			/* Named struct/union are added to base as 0-vlen
+			 * struct/union of same size. Anonymous struct/unions
+			 * are added to split BTF as-is.
+			 */
+			if (adding_to_base) {
+				if (!t->name_off)
+					continue;
+				err = btf_add_composite(dist->pipe.dst, kind, name, t->size);
+			} else {
+				if (t->name_off)
+					continue;
+				err = btf_add_type(&dist->pipe, t);
+			}
+			break;
+		case BTF_KIND_ENUM:
+		case BTF_KIND_ENUM64:
+			/* Named enum[64]s are added to base as a sized
+			 * enum; relocation will match with appropriately-named
+			 * and sized enum or enum64.
+			 *
+			 * Anonymous enums are added to split BTF as-is.
+			 */
+			if (adding_to_base) {
+				if (!t->name_off)
+					continue;
+				err = btf__add_enum(dist->pipe.dst, name, t->size);
+			} else {
+				if (t->name_off)
+					continue;
+				err = btf_add_type(&dist->pipe, t);
+			}
+			break;
+		case BTF_KIND_ARRAY:
+		case BTF_KIND_TYPEDEF:
+		case BTF_KIND_PTR:
+		case BTF_KIND_CONST:
+		case BTF_KIND_RESTRICT:
+		case BTF_KIND_VOLATILE:
+		case BTF_KIND_FUNC_PROTO:
+		case BTF_KIND_TYPE_TAG:
+			/* All other types are added to split BTF. */
+			if (adding_to_base)
+				continue;
+			err = btf_add_type(&dist->pipe, t);
+			break;
+		default:
+			pr_warn("unexpected kind when adding base type '%s'[%u] of kind [%u] to distilled base BTF.\n",
+				name, i, kind);
+			return -EINVAL;
+		}
+		if (err < 0)
+			break;
+		dist->id_map[i] = id++;
+	}
+	return err;
+}
+
+/* Split BTF ids without a mapping will be shifted downwards since distilled
+ * base BTF is smaller than the original base BTF. For those that have a
+ * mapping (either to base or updated split BTF), update the id based on
+ * that mapping.
+ */
+static int btf_update_distilled_type_ids(struct btf_distill *dist, __u32 i)
+{
+	struct btf_type *t = btf_type_by_id(dist->pipe.dst, i);
+	struct btf_field_iter it;
+	__u32 *id;
+	int err;
+
+	err = btf_field_iter_init(&it, t, BTF_FIELD_ITER_IDS);
+	if (err)
+		return err;
+	while ((id = btf_field_iter_next(&it))) {
+		if (dist->id_map[*id])
+			*id = dist->id_map[*id];
+		else if (*id >= dist->split_start_id)
+			*id -= dist->diff_id;
+	}
+	return 0;
+}
+
+/* Create updated split BTF with distilled base BTF; distilled base BTF
+ * consists of BTF information required to clarify the types that split
+ * BTF refers to, omitting unneeded details. Specifically it will contain
+ * base types and memberless definitions of named structs, unions and enumerated
+ * types. Associated reference types like pointers, arrays and anonymous
+ * structs, unions and enumerated types will be added to split BTF.
+ * Size is recorded for named struct/unions to help guide matching to the
+ * target base BTF during later relocation.
+ *
+ * The only case where structs, unions or enumerated types are fully represented
+ * is when they are anonymous; in such cases, the anonymous type is added to
+ * split BTF in full.
+ *
+ * We return newly-created split BTF where the split BTF refers to a newly-created
+ * distilled base BTF. Both must be freed separately by the caller.
+ */
+int btf__distill_base(const struct btf *src_btf, struct btf **new_base_btf,
+		      struct btf **new_split_btf)
+{
+	struct btf *new_base = NULL, *new_split = NULL;
+	const struct btf *old_base;
+	unsigned int n = btf__type_cnt(src_btf);
+	struct btf_distill dist = {};
+	struct btf_type *t;
+	int i, err = 0;
+
+	/* src BTF must be split BTF. */
+	old_base = btf__base_btf(src_btf);
+	if (!new_base_btf || !new_split_btf || !old_base)
+		return libbpf_err(-EINVAL);
+
+	new_base = btf__new_empty();
+	if (!new_base)
+		return libbpf_err(-ENOMEM);
+	dist.id_map = calloc(n, sizeof(*dist.id_map));
+	if (!dist.id_map) {
+		err = -ENOMEM;
+		goto done;
+	}
+	dist.pipe.src = src_btf;
+	dist.pipe.dst = new_base;
+	dist.pipe.str_off_map = hashmap__new(btf_dedup_identity_hash_fn, btf_dedup_equal_fn, NULL);
+	if (IS_ERR(dist.pipe.str_off_map)) {
+		err = -ENOMEM;
+		goto done;
+	}
+	dist.split_start_id = btf__type_cnt(old_base);
+	dist.split_start_str = old_base->hdr->str_len;
+
+	/* Pass over src split BTF; generate the list of base BTF type ids it
+	 * references; these will constitute our distilled BTF set to be
+	 * distributed over base and split BTF as appropriate.
+	 */
+	for (i = src_btf->start_id; i < n; i++) {
+		err = btf_add_distilled_type_ids(&dist, i);
+		if (err < 0)
+			goto done;
+	}
+	/* Next add types for each of the required references to base BTF and split BTF
+	 * in turn.
+	 */
+	err = btf_add_distilled_types(&dist);
+	if (err < 0)
+		goto done;
+
+	/* Create new split BTF with distilled base BTF as its base; the final
+	 * state is split BTF with distilled base BTF that represents enough
+	 * about its base references to allow it to be relocated with the base
+	 * BTF available.
+	 */
+	new_split = btf__new_empty_split(new_base);
+	if (!new_split) {
+		err = -errno;
+		goto done;
+	}
+	dist.pipe.dst = new_split;
+
+	/* First add all split types */
+	for (i = src_btf->start_id; i < n; i++) {
+		t = btf_type_by_id(src_btf, i);
+		err = btf_add_type(&dist.pipe, t);
+		if (err < 0)
+			goto done;
+	}
+	/* Now add distilled types to split BTF that are not added to base. */
+	err = btf_add_distilled_types(&dist);
+	if (err < 0)
+		goto done;
+
+	/* All split BTF ids will be shifted downwards since there are fewer base
+	 * BTF ids in distilled base BTF.
+	 */
+	dist.diff_id = dist.split_start_id - btf__type_cnt(new_base);
+
+	n = btf__type_cnt(new_split);
+	/* Now update base/split BTF ids. */
+	for (i = 1; i < n; i++) {
+		err = btf_update_distilled_type_ids(&dist, i);
+		if (err < 0)
+			break;
+	}
+done:
+	free(dist.id_map);
+	hashmap__free(dist.pipe.str_off_map);
+	if (err) {
+		btf__free(new_split);
+		btf__free(new_base);
+		return libbpf_err(err);
+	}
+	*new_base_btf = new_base;
+	*new_split_btf = new_split;
+
+	return 0;
+}
+
+const struct btf_header *btf_header(const struct btf *btf)
+{
+	return btf->hdr;
+}
+
+void btf_set_base_btf(struct btf *btf, const struct btf *base_btf)
+{
+	btf->base_btf = (struct btf *)base_btf;
+	btf->start_id = btf__type_cnt(base_btf);
+	btf->start_str_off = base_btf->hdr->str_len;
+}
+
+int btf__relocate(struct btf *btf, const struct btf *base_btf)
+{
+	int err = btf_relocate(btf, base_btf, NULL);
+
+	if (!err)
+		btf->owns_base = false;
+	return libbpf_err(err);
+}

--- a/tools/lib/bpf/btf.h
+++ b/tools/lib/bpf/btf.h

@@ -18,6 +18,7 @@ extern "C" {

 #define BTF_ELF_SEC ".BTF"
 #define BTF_EXT_ELF_SEC ".BTF.ext"
+#define BTF_BASE_ELF_SEC ".BTF.base"
 #define MAPS_ELF_SEC ".maps"

 struct btf;
@@ -107,6 +108,27 @@ LIBBPF_API struct btf *btf__new_empty(void);
  */
 LIBBPF_API struct btf *btf__new_empty_split(struct btf *base_btf);

+/**
+ * @brief **btf__distill_base()** creates new versions of the split BTF
+ * *src_btf* and its base BTF. The new base BTF will only contain the types
+ * needed to improve robustness of the split BTF to small changes in base BTF.
+ * When that split BTF is loaded against a (possibly changed) base, this
+ * distilled base BTF will help update references to that (possibly changed)
+ * base BTF.
+ *
+ * If successful, 0 is returned and **new_base_btf** and **new_split_btf**
+ * will point at new base/split BTF. Both the new split and its associated
+ * new base BTF must be freed by the caller.
+ *
+ * A negative value is returned on error and the thread-local `errno` variable
+ * is set to the error code as well.
+ */
+LIBBPF_API int btf__distill_base(const struct btf *src_btf, struct btf **new_base_btf,
+				 struct btf **new_split_btf);
+
 LIBBPF_API struct btf *btf__parse(const char *path, struct btf_ext **btf_ext);
 LIBBPF_API struct btf *btf__parse_split(const char *path, struct btf *base_btf);
 LIBBPF_API struct btf *btf__parse_elf(const char *path, struct btf_ext **btf_ext);
@@ -231,6 +253,20 @@ struct btf_dedup_opts {

 LIBBPF_API int btf__dedup(struct btf *btf, const struct btf_dedup_opts *opts);

+/**
+ * @brief **btf__relocate()** will check the split BTF *btf* for references
+ * to base BTF kinds, and verify those references are compatible with
+ * *base_btf*; if they are, *btf* is adjusted such that it is re-parented to
+ * *base_btf* and type ids and strings are adjusted to accommodate this.
+ *
+ * If successful, 0 is returned and **btf** now has **base_btf** as its
+ * base.
+ *
+ * A negative value is returned on error and the thread-local `errno` variable
+ * is set to the error code as well.
+ */
+LIBBPF_API int btf__relocate(struct btf *btf, const struct btf *base_btf);
+
 struct btf_dump;

 struct btf_dump_opts {
--- /dev/null
+++ b/tools/lib/bpf/btf_relocate.c

@ -0,0 +1,506 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2024, Oracle and/or its affiliates. */
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#include "btf.h"
#include "bpf.h"
#include "libbpf.h"
#include "libbpf_internal.h"
struct btf;
struct btf_relocate {
struct btf *btf;
const struct btf *base_btf;
const struct btf *dist_base_btf;
unsigned int nr_base_types;
unsigned int nr_split_types;
unsigned int nr_dist_base_types;
int dist_str_len;
int base_str_len;
__u32 *id_map;
__u32 *str_map;
};
/* Set temporarily in relocation id_map if distilled base struct/union is
* embedded in a split BTF struct/union; in such a case, size information must
* match between distilled base BTF and base BTF representation of type.
*/
#define BTF_IS_EMBEDDED ((__u32)-1)
/* <name, size, id> triple used in sorting/searching distilled base BTF. */
struct btf_name_info {
const char *name;
/* set when search requires a size match */
int needs_size:1,
size:31;
__u32 id;
};
static int btf_relocate_rewrite_type_id(struct btf_relocate *r, __u32 i)
{
struct btf_type *t = btf_type_by_id(r->btf, i);
struct btf_field_iter it;
__u32 *id;
int err;
err = btf_field_iter_init(&it, t, BTF_FIELD_ITER_IDS);
if (err)
return err;
while ((id = btf_field_iter_next(&it)))
*id = r->id_map[*id];
return 0;
}
/* Simple string comparison used for sorting within BTF, since all distilled
 * types are named. If names match and both elements require a size match,
 * fall back to using size for ordering.
 */
static int cmp_btf_name_size(const void *n1, const void *n2)
{
const struct btf_name_info *ni1 = n1;
const struct btf_name_info *ni2 = n2;
int name_diff = strcmp(ni1->name, ni2->name);
if (!name_diff && ni1->needs_size && ni2->needs_size)
return ni2->size - ni1->size;
return name_diff;
}
/* Binary search with a small twist; find leftmost element that matches
* so that we can then iterate through all exact matches. So for example
* searching { "a", "bb", "bb", "c" } we would always match on the
* leftmost "bb".
*/
static struct btf_name_info *search_btf_name_size(struct btf_name_info *key,
struct btf_name_info *vals,
int nelems)
{
struct btf_name_info *ret = NULL;
int high = nelems - 1;
int low = 0;
while (low <= high) {
int mid = (low + high)/2;
struct btf_name_info *val = &vals[mid];
int diff = cmp_btf_name_size(key, val);
if (diff == 0)
ret = val;
/* even if found, keep searching for leftmost match */
if (diff <= 0)
high = mid - 1;
else
low = mid + 1;
}
return ret;
}
/* If a member of a split BTF struct/union refers to a base BTF
* struct/union, mark that struct/union id temporarily in the id_map
* with BTF_IS_EMBEDDED. Members can be const/restrict/volatile/typedef
* reference types, but if a pointer is encountered, the type is no longer
* considered embedded.
*/
static int btf_mark_embedded_composite_type_ids(struct btf_relocate *r, __u32 i)
{
struct btf_type *t = btf_type_by_id(r->btf, i);
struct btf_field_iter it;
__u32 *id;
int err;
if (!btf_is_composite(t))
return 0;
err = btf_field_iter_init(&it, t, BTF_FIELD_ITER_IDS);
if (err)
return err;
while ((id = btf_field_iter_next(&it))) {
__u32 next_id = *id;
while (next_id) {
t = btf_type_by_id(r->btf, next_id);
switch (btf_kind(t)) {
case BTF_KIND_CONST:
case BTF_KIND_RESTRICT:
case BTF_KIND_VOLATILE:
case BTF_KIND_TYPEDEF:
case BTF_KIND_TYPE_TAG:
next_id = t->type;
break;
case BTF_KIND_ARRAY: {
struct btf_array *a = btf_array(t);
next_id = a->type;
break;
}
case BTF_KIND_STRUCT:
case BTF_KIND_UNION:
if (next_id < r->nr_dist_base_types)
r->id_map[next_id] = BTF_IS_EMBEDDED;
next_id = 0;
break;
default:
next_id = 0;
break;
}
}
}
return 0;
}
/* Build a map from distilled base BTF ids to base BTF ids. To do so, iterate
 * through base BTF, looking up distilled type equivalents via binary search.
 */
static int btf_relocate_map_distilled_base(struct btf_relocate *r)
{
struct btf_name_info *dist_base_info_sorted, *dist_base_info_sorted_end;
struct btf_type *base_t, *dist_t;
__u8 *base_name_cnt = NULL;
int err = 0;
__u32 id;
/* generate a sort index array of name/type ids sorted by name for
* distilled base BTF to speed name-based lookups.
*/
dist_base_info_sorted = calloc(r->nr_dist_base_types, sizeof(*dist_base_info_sorted));
if (!dist_base_info_sorted) {
err = -ENOMEM;
goto done;
}
dist_base_info_sorted_end = dist_base_info_sorted + r->nr_dist_base_types;
for (id = 0; id < r->nr_dist_base_types; id++) {
dist_t = btf_type_by_id(r->dist_base_btf, id);
dist_base_info_sorted[id].name = btf__name_by_offset(r->dist_base_btf,
dist_t->name_off);
dist_base_info_sorted[id].id = id;
dist_base_info_sorted[id].size = dist_t->size;
dist_base_info_sorted[id].needs_size = true;
}
qsort(dist_base_info_sorted, r->nr_dist_base_types, sizeof(*dist_base_info_sorted),
cmp_btf_name_size);
/* Mark distilled base struct/union members of split BTF structs/unions
* in id_map with BTF_IS_EMBEDDED; this signals that these types
* need to match both name and size, otherwise embedding the base
* struct/union in the split type is invalid.
*/
for (id = r->nr_dist_base_types; id < r->nr_dist_base_types + r->nr_split_types; id++) {
err = btf_mark_embedded_composite_type_ids(r, id);
if (err)
goto done;
}
/* Collect name counts for composite types in base BTF. If multiple
* instances of a struct/union of the same name exist, we need to use
* size to determine which to map to since name alone is ambiguous.
*/
base_name_cnt = calloc(r->base_str_len, sizeof(*base_name_cnt));
if (!base_name_cnt) {
err = -ENOMEM;
goto done;
}
for (id = 1; id < r->nr_base_types; id++) {
base_t = btf_type_by_id(r->base_btf, id);
if (!btf_is_composite(base_t) || !base_t->name_off)
continue;
if (base_name_cnt[base_t->name_off] < 255)
base_name_cnt[base_t->name_off]++;
}
/* Now search base BTF for matching distilled base BTF types. */
for (id = 1; id < r->nr_base_types; id++) {
struct btf_name_info *dist_name_info, *dist_name_info_next = NULL;
struct btf_name_info base_name_info = {};
int dist_kind, base_kind;
base_t = btf_type_by_id(r->base_btf, id);
/* distilled base consists of named types only. */
if (!base_t->name_off)
continue;
base_kind = btf_kind(base_t);
base_name_info.id = id;
base_name_info.name = btf__name_by_offset(r->base_btf, base_t->name_off);
switch (base_kind) {
case BTF_KIND_INT:
case BTF_KIND_FLOAT:
case BTF_KIND_ENUM:
case BTF_KIND_ENUM64:
/* These types should match both name and size */
base_name_info.needs_size = true;
base_name_info.size = base_t->size;
break;
case BTF_KIND_FWD:
/* No size considerations for fwds. */
break;
case BTF_KIND_STRUCT:
case BTF_KIND_UNION:
/* Size only needs to be used for struct/union if there
* are multiple types in base BTF with the same name.
* If there are multiple _distilled_ types with the same
* name (a very unlikely scenario), that doesn't matter
* unless corresponding _base_ types to match them are
* missing.
*/
base_name_info.needs_size = base_name_cnt[base_t->name_off] > 1;
base_name_info.size = base_t->size;
break;
default:
continue;
}
/* iterate over all matching distilled base types */
for (dist_name_info = search_btf_name_size(&base_name_info, dist_base_info_sorted,
r->nr_dist_base_types);
dist_name_info != NULL; dist_name_info = dist_name_info_next) {
/* Are there more distilled matches to process after
* this one?
*/
dist_name_info_next = dist_name_info + 1;
if (dist_name_info_next >= dist_base_info_sorted_end ||
cmp_btf_name_size(&base_name_info, dist_name_info_next))
dist_name_info_next = NULL;
if (!dist_name_info->id || dist_name_info->id >= r->nr_dist_base_types) {
pr_warn("base BTF id [%d] maps to invalid distilled base BTF id [%d]\n",
id, dist_name_info->id);
err = -EINVAL;
goto done;
}
dist_t = btf_type_by_id(r->dist_base_btf, dist_name_info->id);
dist_kind = btf_kind(dist_t);
/* Validate that the found distilled type is compatible.
* Do not error out on mismatch as another match may
* occur for an identically-named type.
*/
switch (dist_kind) {
case BTF_KIND_FWD:
switch (base_kind) {
case BTF_KIND_FWD:
if (btf_kflag(dist_t) != btf_kflag(base_t))
continue;
break;
case BTF_KIND_STRUCT:
if (btf_kflag(base_t))
continue;
break;
case BTF_KIND_UNION:
if (!btf_kflag(base_t))
continue;
break;
default:
continue;
}
break;
case BTF_KIND_INT:
if (dist_kind != base_kind ||
btf_int_encoding(base_t) != btf_int_encoding(dist_t))
continue;
break;
case BTF_KIND_FLOAT:
if (dist_kind != base_kind)
continue;
break;
case BTF_KIND_ENUM:
/* ENUM and ENUM64 are encoded as sized ENUM in
* distilled base BTF.
*/
if (base_kind != dist_kind && base_kind != BTF_KIND_ENUM64)
continue;
break;
case BTF_KIND_STRUCT:
case BTF_KIND_UNION:
/* size verification is required for embedded
* struct/unions.
*/
if (r->id_map[dist_name_info->id] == BTF_IS_EMBEDDED &&
base_t->size != dist_t->size)
continue;
break;
default:
continue;
}
if (r->id_map[dist_name_info->id] &&
r->id_map[dist_name_info->id] != BTF_IS_EMBEDDED) {
/* we already have a match; this tells us that
* multiple base types of the same name
* have the same size, since for cases where
* multiple types have the same name we match
* on name and size. In this case, we have
* no way of determining which to relocate
* to in base BTF, so error out.
*/
pr_warn("distilled base BTF type '%s' [%u], size %u has multiple candidates of the same size (ids [%u, %u]) in base BTF\n",
base_name_info.name, dist_name_info->id,
base_t->size, id, r->id_map[dist_name_info->id]);
err = -EINVAL;
goto done;
}
/* map id and name */
r->id_map[dist_name_info->id] = id;
r->str_map[dist_t->name_off] = base_t->name_off;
}
}
/* ensure all distilled BTF ids now have a mapping... */
for (id = 1; id < r->nr_dist_base_types; id++) {
const char *name;
if (r->id_map[id] && r->id_map[id] != BTF_IS_EMBEDDED)
continue;
dist_t = btf_type_by_id(r->dist_base_btf, id);
name = btf__name_by_offset(r->dist_base_btf, dist_t->name_off);
pr_warn("distilled base BTF type '%s' [%d] is not mapped to base BTF id\n",
name, id);
err = -EINVAL;
break;
}
done:
free(base_name_cnt);
free(dist_base_info_sorted);
return err;
}
/* distilled base should only have named int/float/enum/fwd/struct/union types. */
static int btf_relocate_validate_distilled_base(struct btf_relocate *r)
{
unsigned int i;
for (i = 1; i < r->nr_dist_base_types; i++) {
struct btf_type *t = btf_type_by_id(r->dist_base_btf, i);
int kind = btf_kind(t);
switch (kind) {
case BTF_KIND_INT:
case BTF_KIND_FLOAT:
case BTF_KIND_ENUM:
case BTF_KIND_STRUCT:
case BTF_KIND_UNION:
case BTF_KIND_FWD:
if (t->name_off)
break;
pr_warn("type [%d], kind [%d] is invalid for distilled base BTF; it is anonymous\n",
i, kind);
return -EINVAL;
default:
pr_warn("type [%d] in distilled base BTF has unexpected kind [%d]\n",
i, kind);
return -EINVAL;
}
}
return 0;
}
static int btf_relocate_rewrite_strs(struct btf_relocate *r, __u32 i)
{
struct btf_type *t = btf_type_by_id(r->btf, i);
struct btf_field_iter it;
__u32 *str_off;
int off, err;
err = btf_field_iter_init(&it, t, BTF_FIELD_ITER_STRS);
if (err)
return err;
while ((str_off = btf_field_iter_next(&it))) {
if (!*str_off)
continue;
if (*str_off >= r->dist_str_len) {
*str_off += r->base_str_len - r->dist_str_len;
} else {
off = r->str_map[*str_off];
if (!off) {
pr_warn("string '%s' [offset %u] is not mapped to base BTF\n",
btf__str_by_offset(r->btf, *str_off), *str_off);
return -ENOENT;
}
*str_off = off;
}
}
return 0;
}
/* If successful, output of relocation is updated BTF with base BTF pointing
* at base_btf, and type ids, strings adjusted accordingly.
*/
int btf_relocate(struct btf *btf, const struct btf *base_btf, __u32 **id_map)
{
unsigned int nr_types = btf__type_cnt(btf);
const struct btf_header *dist_base_hdr;
const struct btf_header *base_hdr;
struct btf_relocate r = {};
int err = 0;
__u32 id, i;
r.dist_base_btf = btf__base_btf(btf);
if (!base_btf || !r.dist_base_btf || r.dist_base_btf == base_btf)
return -EINVAL;
r.nr_dist_base_types = btf__type_cnt(r.dist_base_btf);
r.nr_base_types = btf__type_cnt(base_btf);
r.nr_split_types = nr_types - r.nr_dist_base_types;
r.btf = btf;
r.base_btf = base_btf;
r.id_map = calloc(nr_types, sizeof(*r.id_map));
r.str_map = calloc(btf_header(r.dist_base_btf)->str_len, sizeof(*r.str_map));
dist_base_hdr = btf_header(r.dist_base_btf);
base_hdr = btf_header(r.base_btf);
r.dist_str_len = dist_base_hdr->str_len;
r.base_str_len = base_hdr->str_len;
if (!r.id_map || !r.str_map) {
err = -ENOMEM;
goto err_out;
}
err = btf_relocate_validate_distilled_base(&r);
if (err)
goto err_out;
/* Split BTF ids need to be adjusted as base and distilled base
* have different numbers of types, changing the start id of split
* BTF.
*/
for (id = r.nr_dist_base_types; id < nr_types; id++)
r.id_map[id] = id + r.nr_base_types - r.nr_dist_base_types;
/* Build a map from distilled base ids to actual base BTF ids; it is used
* to update split BTF id references. Also build a str_map mapping from
* distilled base BTF names to base BTF names.
*/
err = btf_relocate_map_distilled_base(&r);
if (err)
goto err_out;
/* Next, rewrite type ids in split BTF, replacing split ids with updated
* ids based on number of types in base BTF, and base ids with
* relocated ids from base_btf.
*/
for (i = 0, id = r.nr_dist_base_types; i < r.nr_split_types; i++, id++) {
err = btf_relocate_rewrite_type_id(&r, id);
if (err)
goto err_out;
}
/* String offsets now need to be updated using the str_map. */
for (i = 0; i < r.nr_split_types; i++) {
err = btf_relocate_rewrite_strs(&r, i + r.nr_dist_base_types);
if (err)
goto err_out;
}
/* Finally reset base BTF to be base_btf */
btf_set_base_btf(btf, base_btf);
if (id_map) {
*id_map = r.id_map;
r.id_map = NULL;
}
err_out:
free(r.id_map);
free(r.str_map);
return err;
}


@@ -419,6 +419,8 @@ LIBBPF_1.4.0 {
LIBBPF_1.5.0 {
global:
btf__distill_base;
btf__relocate;
bpf_map__autoattach;
bpf_map__set_autoattach;
bpf_program__attach_sockmap;


@@ -234,6 +234,9 @@ struct btf_type;
struct btf_type *btf_type_by_id(const struct btf *btf, __u32 type_id);
const char *btf_kind_str(const struct btf_type *t);
const struct btf_type *skip_mods_and_typedefs(const struct btf *btf, __u32 id, __u32 *res_id);
const struct btf_header *btf_header(const struct btf *btf);
void btf_set_base_btf(struct btf *btf, const struct btf *base_btf);
int btf_relocate(struct btf *btf, const struct btf *base_btf, __u32 **id_map);
static inline enum btf_func_linkage btf_func_linkage(const struct btf_type *t)
{


@@ -0,0 +1,552 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2024, Oracle and/or its affiliates. */
#include <test_progs.h>
#include <bpf/btf.h>
#include "btf_helpers.h"
/* Fabricate base, split BTF with references to base types needed; then create
* split BTF with distilled base BTF and ensure expectations are met:
* - only referenced base types from split BTF are present
* - struct/union/enum are represented as empty unless anonymous, in which
* case they are represented in full in split BTF
*/
static void test_distilled_base(void)
{
struct btf *btf1 = NULL, *btf2 = NULL, *btf3 = NULL, *btf4 = NULL;
btf1 = btf__new_empty();
if (!ASSERT_OK_PTR(btf1, "empty_main_btf"))
return;
btf__add_int(btf1, "int", 4, BTF_INT_SIGNED); /* [1] int */
btf__add_ptr(btf1, 1); /* [2] ptr to int */
btf__add_struct(btf1, "s1", 8); /* [3] struct s1 { */
btf__add_field(btf1, "f1", 2, 0, 0); /* int *f1; */
/* } */
btf__add_struct(btf1, "", 12); /* [4] struct { */
btf__add_field(btf1, "f1", 1, 0, 0); /* int f1; */
btf__add_field(btf1, "f2", 3, 32, 0); /* struct s1 f2; */
/* } */
btf__add_int(btf1, "unsigned int", 4, 0); /* [5] unsigned int */
btf__add_union(btf1, "u1", 12); /* [6] union u1 { */
btf__add_field(btf1, "f1", 1, 0, 0); /* int f1; */
btf__add_field(btf1, "f2", 2, 0, 0); /* int *f2; */
/* } */
btf__add_union(btf1, "", 4); /* [7] union { */
btf__add_field(btf1, "f1", 1, 0, 0); /* int f1; */
/* } */
btf__add_enum(btf1, "e1", 4); /* [8] enum e1 { */
btf__add_enum_value(btf1, "v1", 1); /* v1 = 1; */
/* } */
btf__add_enum(btf1, "", 4); /* [9] enum { */
btf__add_enum_value(btf1, "av1", 2); /* av1 = 2; */
/* } */
btf__add_enum64(btf1, "e641", 8, true); /* [10] enum64 e641 { */
btf__add_enum64_value(btf1, "v1", 1024); /* v1 = 1024; */
/* } */
btf__add_enum64(btf1, "", 8, true); /* [11] enum64 { */
btf__add_enum64_value(btf1, "v1", 1025); /* v1 = 1025; */
/* } */
btf__add_struct(btf1, "unneeded", 4); /* [12] struct unneeded { */
btf__add_field(btf1, "f1", 1, 0, 0); /* int f1; */
/* } */
btf__add_struct(btf1, "embedded", 4); /* [13] struct embedded { */
btf__add_field(btf1, "f1", 1, 0, 0); /* int f1; */
/* } */
btf__add_func_proto(btf1, 1); /* [14] int (*)(int *p1); */
btf__add_func_param(btf1, "p1", 1);
btf__add_array(btf1, 1, 1, 3); /* [15] int [3]; */
btf__add_struct(btf1, "from_proto", 4); /* [16] struct from_proto { */
btf__add_field(btf1, "f1", 1, 0, 0); /* int f1; */
/* } */
btf__add_union(btf1, "u1", 4); /* [17] union u1 { */
btf__add_field(btf1, "f1", 1, 0, 0); /* int f1; */
/* } */
VALIDATE_RAW_BTF(
btf1,
"[1] INT 'int' size=4 bits_offset=0 nr_bits=32 encoding=SIGNED",
"[2] PTR '(anon)' type_id=1",
"[3] STRUCT 's1' size=8 vlen=1\n"
"\t'f1' type_id=2 bits_offset=0",
"[4] STRUCT '(anon)' size=12 vlen=2\n"
"\t'f1' type_id=1 bits_offset=0\n"
"\t'f2' type_id=3 bits_offset=32",
"[5] INT 'unsigned int' size=4 bits_offset=0 nr_bits=32 encoding=(none)",
"[6] UNION 'u1' size=12 vlen=2\n"
"\t'f1' type_id=1 bits_offset=0\n"
"\t'f2' type_id=2 bits_offset=0",
"[7] UNION '(anon)' size=4 vlen=1\n"
"\t'f1' type_id=1 bits_offset=0",
"[8] ENUM 'e1' encoding=UNSIGNED size=4 vlen=1\n"
"\t'v1' val=1",
"[9] ENUM '(anon)' encoding=UNSIGNED size=4 vlen=1\n"
"\t'av1' val=2",
"[10] ENUM64 'e641' encoding=SIGNED size=8 vlen=1\n"
"\t'v1' val=1024",
"[11] ENUM64 '(anon)' encoding=SIGNED size=8 vlen=1\n"
"\t'v1' val=1025",
"[12] STRUCT 'unneeded' size=4 vlen=1\n"
"\t'f1' type_id=1 bits_offset=0",
"[13] STRUCT 'embedded' size=4 vlen=1\n"
"\t'f1' type_id=1 bits_offset=0",
"[14] FUNC_PROTO '(anon)' ret_type_id=1 vlen=1\n"
"\t'p1' type_id=1",
"[15] ARRAY '(anon)' type_id=1 index_type_id=1 nr_elems=3",
"[16] STRUCT 'from_proto' size=4 vlen=1\n"
"\t'f1' type_id=1 bits_offset=0",
"[17] UNION 'u1' size=4 vlen=1\n"
"\t'f1' type_id=1 bits_offset=0");
btf2 = btf__new_empty_split(btf1);
if (!ASSERT_OK_PTR(btf2, "empty_split_btf"))
goto cleanup;
btf__add_ptr(btf2, 3); /* [18] ptr to struct s1 */
/* add ptr to struct anon */
btf__add_ptr(btf2, 4); /* [19] ptr to struct (anon) */
btf__add_const(btf2, 6); /* [20] const union u1 */
btf__add_restrict(btf2, 7); /* [21] restrict union (anon) */
btf__add_volatile(btf2, 8); /* [22] volatile enum e1 */
btf__add_typedef(btf2, "et", 9); /* [23] typedef enum (anon) */
btf__add_const(btf2, 10); /* [24] const enum64 e641 */
btf__add_ptr(btf2, 11); /* [25] ptr to enum64 (anon) */
btf__add_struct(btf2, "with_embedded", 4); /* [26] struct with_embedded { */
btf__add_field(btf2, "f1", 13, 0, 0); /* struct embedded f1; */
/* } */
btf__add_func(btf2, "fn", BTF_FUNC_STATIC, 14); /* [27] int fn(int p1); */
btf__add_typedef(btf2, "arraytype", 15); /* [28] typedef int arraytype[3]; */
btf__add_func_proto(btf2, 1); /* [29] int (*)(struct from_proto p1); */
btf__add_func_param(btf2, "p1", 16);
VALIDATE_RAW_BTF(
btf2,
"[1] INT 'int' size=4 bits_offset=0 nr_bits=32 encoding=SIGNED",
"[2] PTR '(anon)' type_id=1",
"[3] STRUCT 's1' size=8 vlen=1\n"
"\t'f1' type_id=2 bits_offset=0",
"[4] STRUCT '(anon)' size=12 vlen=2\n"
"\t'f1' type_id=1 bits_offset=0\n"
"\t'f2' type_id=3 bits_offset=32",
"[5] INT 'unsigned int' size=4 bits_offset=0 nr_bits=32 encoding=(none)",
"[6] UNION 'u1' size=12 vlen=2\n"
"\t'f1' type_id=1 bits_offset=0\n"
"\t'f2' type_id=2 bits_offset=0",
"[7] UNION '(anon)' size=4 vlen=1\n"
"\t'f1' type_id=1 bits_offset=0",
"[8] ENUM 'e1' encoding=UNSIGNED size=4 vlen=1\n"
"\t'v1' val=1",
"[9] ENUM '(anon)' encoding=UNSIGNED size=4 vlen=1\n"
"\t'av1' val=2",
"[10] ENUM64 'e641' encoding=SIGNED size=8 vlen=1\n"
"\t'v1' val=1024",
"[11] ENUM64 '(anon)' encoding=SIGNED size=8 vlen=1\n"
"\t'v1' val=1025",
"[12] STRUCT 'unneeded' size=4 vlen=1\n"
"\t'f1' type_id=1 bits_offset=0",
"[13] STRUCT 'embedded' size=4 vlen=1\n"
"\t'f1' type_id=1 bits_offset=0",
"[14] FUNC_PROTO '(anon)' ret_type_id=1 vlen=1\n"
"\t'p1' type_id=1",
"[15] ARRAY '(anon)' type_id=1 index_type_id=1 nr_elems=3",
"[16] STRUCT 'from_proto' size=4 vlen=1\n"
"\t'f1' type_id=1 bits_offset=0",
"[17] UNION 'u1' size=4 vlen=1\n"
"\t'f1' type_id=1 bits_offset=0",
"[18] PTR '(anon)' type_id=3",
"[19] PTR '(anon)' type_id=4",
"[20] CONST '(anon)' type_id=6",
"[21] RESTRICT '(anon)' type_id=7",
"[22] VOLATILE '(anon)' type_id=8",
"[23] TYPEDEF 'et' type_id=9",
"[24] CONST '(anon)' type_id=10",
"[25] PTR '(anon)' type_id=11",
"[26] STRUCT 'with_embedded' size=4 vlen=1\n"
"\t'f1' type_id=13 bits_offset=0",
"[27] FUNC 'fn' type_id=14 linkage=static",
"[28] TYPEDEF 'arraytype' type_id=15",
"[29] FUNC_PROTO '(anon)' ret_type_id=1 vlen=1\n"
"\t'p1' type_id=16");
if (!ASSERT_EQ(0, btf__distill_base(btf2, &btf3, &btf4),
"distilled_base") ||
!ASSERT_OK_PTR(btf3, "distilled_base") ||
!ASSERT_OK_PTR(btf4, "distilled_split") ||
!ASSERT_EQ(8, btf__type_cnt(btf3), "distilled_base_type_cnt"))
goto cleanup;
VALIDATE_RAW_BTF(
btf4,
"[1] INT 'int' size=4 bits_offset=0 nr_bits=32 encoding=SIGNED",
"[2] STRUCT 's1' size=8 vlen=0",
"[3] UNION 'u1' size=12 vlen=0",
"[4] ENUM 'e1' encoding=UNSIGNED size=4 vlen=0",
"[5] ENUM 'e641' encoding=UNSIGNED size=8 vlen=0",
"[6] STRUCT 'embedded' size=4 vlen=0",
"[7] STRUCT 'from_proto' size=4 vlen=0",
/* split BTF; these types should match split BTF above from 18-29, with
* updated type id references
*/
"[8] PTR '(anon)' type_id=2",
"[9] PTR '(anon)' type_id=20",
"[10] CONST '(anon)' type_id=3",
"[11] RESTRICT '(anon)' type_id=21",
"[12] VOLATILE '(anon)' type_id=4",
"[13] TYPEDEF 'et' type_id=22",
"[14] CONST '(anon)' type_id=5",
"[15] PTR '(anon)' type_id=23",
"[16] STRUCT 'with_embedded' size=4 vlen=1\n"
"\t'f1' type_id=6 bits_offset=0",
"[17] FUNC 'fn' type_id=24 linkage=static",
"[18] TYPEDEF 'arraytype' type_id=25",
"[19] FUNC_PROTO '(anon)' ret_type_id=1 vlen=1\n"
"\t'p1' type_id=7",
/* split BTF types added from original base BTF below */
"[20] STRUCT '(anon)' size=12 vlen=2\n"
"\t'f1' type_id=1 bits_offset=0\n"
"\t'f2' type_id=2 bits_offset=32",
"[21] UNION '(anon)' size=4 vlen=1\n"
"\t'f1' type_id=1 bits_offset=0",
"[22] ENUM '(anon)' encoding=UNSIGNED size=4 vlen=1\n"
"\t'av1' val=2",
"[23] ENUM64 '(anon)' encoding=SIGNED size=8 vlen=1\n"
"\t'v1' val=1025",
"[24] FUNC_PROTO '(anon)' ret_type_id=1 vlen=1\n"
"\t'p1' type_id=1",
"[25] ARRAY '(anon)' type_id=1 index_type_id=1 nr_elems=3");
if (!ASSERT_EQ(btf__relocate(btf4, btf1), 0, "relocate_split"))
goto cleanup;
VALIDATE_RAW_BTF(
btf4,
"[1] INT 'int' size=4 bits_offset=0 nr_bits=32 encoding=SIGNED",
"[2] PTR '(anon)' type_id=1",
"[3] STRUCT 's1' size=8 vlen=1\n"
"\t'f1' type_id=2 bits_offset=0",
"[4] STRUCT '(anon)' size=12 vlen=2\n"
"\t'f1' type_id=1 bits_offset=0\n"
"\t'f2' type_id=3 bits_offset=32",
"[5] INT 'unsigned int' size=4 bits_offset=0 nr_bits=32 encoding=(none)",
"[6] UNION 'u1' size=12 vlen=2\n"
"\t'f1' type_id=1 bits_offset=0\n"
"\t'f2' type_id=2 bits_offset=0",
"[7] UNION '(anon)' size=4 vlen=1\n"
"\t'f1' type_id=1 bits_offset=0",
"[8] ENUM 'e1' encoding=UNSIGNED size=4 vlen=1\n"
"\t'v1' val=1",
"[9] ENUM '(anon)' encoding=UNSIGNED size=4 vlen=1\n"
"\t'av1' val=2",
"[10] ENUM64 'e641' encoding=SIGNED size=8 vlen=1\n"
"\t'v1' val=1024",
"[11] ENUM64 '(anon)' encoding=SIGNED size=8 vlen=1\n"
"\t'v1' val=1025",
"[12] STRUCT 'unneeded' size=4 vlen=1\n"
"\t'f1' type_id=1 bits_offset=0",
"[13] STRUCT 'embedded' size=4 vlen=1\n"
"\t'f1' type_id=1 bits_offset=0",
"[14] FUNC_PROTO '(anon)' ret_type_id=1 vlen=1\n"
"\t'p1' type_id=1",
"[15] ARRAY '(anon)' type_id=1 index_type_id=1 nr_elems=3",
"[16] STRUCT 'from_proto' size=4 vlen=1\n"
"\t'f1' type_id=1 bits_offset=0",
"[17] UNION 'u1' size=4 vlen=1\n"
"\t'f1' type_id=1 bits_offset=0",
"[18] PTR '(anon)' type_id=3",
"[19] PTR '(anon)' type_id=30",
"[20] CONST '(anon)' type_id=6",
"[21] RESTRICT '(anon)' type_id=31",
"[22] VOLATILE '(anon)' type_id=8",
"[23] TYPEDEF 'et' type_id=32",
"[24] CONST '(anon)' type_id=10",
"[25] PTR '(anon)' type_id=33",
"[26] STRUCT 'with_embedded' size=4 vlen=1\n"
"\t'f1' type_id=13 bits_offset=0",
"[27] FUNC 'fn' type_id=34 linkage=static",
"[28] TYPEDEF 'arraytype' type_id=35",
"[29] FUNC_PROTO '(anon)' ret_type_id=1 vlen=1\n"
"\t'p1' type_id=16",
/* below here are (duplicate) anon base types added by distill
* process to split BTF.
*/
"[30] STRUCT '(anon)' size=12 vlen=2\n"
"\t'f1' type_id=1 bits_offset=0\n"
"\t'f2' type_id=3 bits_offset=32",
"[31] UNION '(anon)' size=4 vlen=1\n"
"\t'f1' type_id=1 bits_offset=0",
"[32] ENUM '(anon)' encoding=UNSIGNED size=4 vlen=1\n"
"\t'av1' val=2",
"[33] ENUM64 '(anon)' encoding=SIGNED size=8 vlen=1\n"
"\t'v1' val=1025",
"[34] FUNC_PROTO '(anon)' ret_type_id=1 vlen=1\n"
"\t'p1' type_id=1",
"[35] ARRAY '(anon)' type_id=1 index_type_id=1 nr_elems=3");
cleanup:
btf__free(btf4);
btf__free(btf3);
btf__free(btf2);
btf__free(btf1);
}
/* ensure we can cope with multiple types with the same name in
* distilled base BTF. In this case because sizes are different,
* we can still disambiguate them.
*/
static void test_distilled_base_multi(void)
{
struct btf *btf1 = NULL, *btf2 = NULL, *btf3 = NULL, *btf4 = NULL;
btf1 = btf__new_empty();
if (!ASSERT_OK_PTR(btf1, "empty_main_btf"))
return;
btf__add_int(btf1, "int", 4, BTF_INT_SIGNED); /* [1] int */
btf__add_int(btf1, "int", 8, BTF_INT_SIGNED); /* [2] int */
VALIDATE_RAW_BTF(
btf1,
"[1] INT 'int' size=4 bits_offset=0 nr_bits=32 encoding=SIGNED",
"[2] INT 'int' size=8 bits_offset=0 nr_bits=64 encoding=SIGNED");
btf2 = btf__new_empty_split(btf1);
if (!ASSERT_OK_PTR(btf2, "empty_split_btf"))
goto cleanup;
btf__add_ptr(btf2, 1);
btf__add_const(btf2, 2);
VALIDATE_RAW_BTF(
btf2,
"[1] INT 'int' size=4 bits_offset=0 nr_bits=32 encoding=SIGNED",
"[2] INT 'int' size=8 bits_offset=0 nr_bits=64 encoding=SIGNED",
"[3] PTR '(anon)' type_id=1",
"[4] CONST '(anon)' type_id=2");
if (!ASSERT_EQ(0, btf__distill_base(btf2, &btf3, &btf4),
"distilled_base") ||
!ASSERT_OK_PTR(btf3, "distilled_base") ||
!ASSERT_OK_PTR(btf4, "distilled_split") ||
!ASSERT_EQ(3, btf__type_cnt(btf3), "distilled_base_type_cnt"))
goto cleanup;
VALIDATE_RAW_BTF(
btf3,
"[1] INT 'int' size=4 bits_offset=0 nr_bits=32 encoding=SIGNED",
"[2] INT 'int' size=8 bits_offset=0 nr_bits=64 encoding=SIGNED");
if (!ASSERT_EQ(btf__relocate(btf4, btf1), 0, "relocate_split"))
goto cleanup;
VALIDATE_RAW_BTF(
btf4,
"[1] INT 'int' size=4 bits_offset=0 nr_bits=32 encoding=SIGNED",
"[2] INT 'int' size=8 bits_offset=0 nr_bits=64 encoding=SIGNED",
"[3] PTR '(anon)' type_id=1",
"[4] CONST '(anon)' type_id=2");
cleanup:
btf__free(btf4);
btf__free(btf3);
btf__free(btf2);
btf__free(btf1);
}
/* If a needed type is not present in the base BTF we wish to relocate
* with, btf__relocate() should error out.
*/
static void test_distilled_base_missing_err(void)
{
struct btf *btf1 = NULL, *btf2 = NULL, *btf3 = NULL, *btf4 = NULL, *btf5 = NULL;
btf1 = btf__new_empty();
if (!ASSERT_OK_PTR(btf1, "empty_main_btf"))
return;
btf__add_int(btf1, "int", 4, BTF_INT_SIGNED); /* [1] int */
btf__add_int(btf1, "int", 8, BTF_INT_SIGNED); /* [2] int */
VALIDATE_RAW_BTF(
btf1,
"[1] INT 'int' size=4 bits_offset=0 nr_bits=32 encoding=SIGNED",
"[2] INT 'int' size=8 bits_offset=0 nr_bits=64 encoding=SIGNED");
btf2 = btf__new_empty_split(btf1);
if (!ASSERT_OK_PTR(btf2, "empty_split_btf"))
goto cleanup;
btf__add_ptr(btf2, 1);
btf__add_const(btf2, 2);
VALIDATE_RAW_BTF(
btf2,
"[1] INT 'int' size=4 bits_offset=0 nr_bits=32 encoding=SIGNED",
"[2] INT 'int' size=8 bits_offset=0 nr_bits=64 encoding=SIGNED",
"[3] PTR '(anon)' type_id=1",
"[4] CONST '(anon)' type_id=2");
if (!ASSERT_EQ(0, btf__distill_base(btf2, &btf3, &btf4),
"distilled_base") ||
!ASSERT_OK_PTR(btf3, "distilled_base") ||
!ASSERT_OK_PTR(btf4, "distilled_split") ||
!ASSERT_EQ(3, btf__type_cnt(btf3), "distilled_base_type_cnt"))
goto cleanup;
VALIDATE_RAW_BTF(
btf3,
"[1] INT 'int' size=4 bits_offset=0 nr_bits=32 encoding=SIGNED",
"[2] INT 'int' size=8 bits_offset=0 nr_bits=64 encoding=SIGNED");
btf5 = btf__new_empty();
if (!ASSERT_OK_PTR(btf5, "empty_reloc_btf"))
goto cleanup;
btf__add_int(btf5, "int", 4, BTF_INT_SIGNED); /* [1] int */
VALIDATE_RAW_BTF(
btf5,
"[1] INT 'int' size=4 bits_offset=0 nr_bits=32 encoding=SIGNED");
ASSERT_EQ(btf__relocate(btf4, btf5), -EINVAL, "relocate_split");
cleanup:
btf__free(btf5);
btf__free(btf4);
btf__free(btf3);
btf__free(btf2);
btf__free(btf1);
}
/* With 2 types of same size in distilled base BTF, relocation should
* fail as we have no means to choose between them.
*/
static void test_distilled_base_multi_err(void)
{
struct btf *btf1 = NULL, *btf2 = NULL, *btf3 = NULL, *btf4 = NULL;
btf1 = btf__new_empty();
if (!ASSERT_OK_PTR(btf1, "empty_main_btf"))
return;
btf__add_int(btf1, "int", 4, BTF_INT_SIGNED); /* [1] int */
btf__add_int(btf1, "int", 4, BTF_INT_SIGNED); /* [2] int */
VALIDATE_RAW_BTF(
btf1,
"[1] INT 'int' size=4 bits_offset=0 nr_bits=32 encoding=SIGNED",
"[2] INT 'int' size=4 bits_offset=0 nr_bits=32 encoding=SIGNED");
btf2 = btf__new_empty_split(btf1);
if (!ASSERT_OK_PTR(btf2, "empty_split_btf"))
goto cleanup;
btf__add_ptr(btf2, 1);
btf__add_const(btf2, 2);
VALIDATE_RAW_BTF(
btf2,
"[1] INT 'int' size=4 bits_offset=0 nr_bits=32 encoding=SIGNED",
"[2] INT 'int' size=4 bits_offset=0 nr_bits=32 encoding=SIGNED",
"[3] PTR '(anon)' type_id=1",
"[4] CONST '(anon)' type_id=2");
if (!ASSERT_EQ(0, btf__distill_base(btf2, &btf3, &btf4),
"distilled_base") ||
!ASSERT_OK_PTR(btf3, "distilled_base") ||
!ASSERT_OK_PTR(btf4, "distilled_split") ||
!ASSERT_EQ(3, btf__type_cnt(btf3), "distilled_base_type_cnt"))
goto cleanup;
VALIDATE_RAW_BTF(
btf3,
"[1] INT 'int' size=4 bits_offset=0 nr_bits=32 encoding=SIGNED",
"[2] INT 'int' size=4 bits_offset=0 nr_bits=32 encoding=SIGNED");
ASSERT_EQ(btf__relocate(btf4, btf1), -EINVAL, "relocate_split");
cleanup:
btf__free(btf4);
btf__free(btf3);
btf__free(btf2);
btf__free(btf1);
}
/* With 2 types of same size in base BTF, relocation should
* fail as we have no means to choose between them.
*/
static void test_distilled_base_multi_err2(void)
{
struct btf *btf1 = NULL, *btf2 = NULL, *btf3 = NULL, *btf4 = NULL, *btf5 = NULL;
btf1 = btf__new_empty();
if (!ASSERT_OK_PTR(btf1, "empty_main_btf"))
return;
btf__add_int(btf1, "int", 4, BTF_INT_SIGNED); /* [1] int */
VALIDATE_RAW_BTF(
btf1,
"[1] INT 'int' size=4 bits_offset=0 nr_bits=32 encoding=SIGNED");
btf2 = btf__new_empty_split(btf1);
if (!ASSERT_OK_PTR(btf2, "empty_split_btf"))
goto cleanup;
btf__add_ptr(btf2, 1);
VALIDATE_RAW_BTF(
btf2,
"[1] INT 'int' size=4 bits_offset=0 nr_bits=32 encoding=SIGNED",
"[2] PTR '(anon)' type_id=1");
if (!ASSERT_EQ(0, btf__distill_base(btf2, &btf3, &btf4),
"distilled_base") ||
!ASSERT_OK_PTR(btf3, "distilled_base") ||
!ASSERT_OK_PTR(btf4, "distilled_split") ||
!ASSERT_EQ(2, btf__type_cnt(btf3), "distilled_base_type_cnt"))
goto cleanup;
VALIDATE_RAW_BTF(
btf3,
"[1] INT 'int' size=4 bits_offset=0 nr_bits=32 encoding=SIGNED");
btf5 = btf__new_empty();
if (!ASSERT_OK_PTR(btf5, "empty_reloc_btf"))
goto cleanup;
btf__add_int(btf5, "int", 4, BTF_INT_SIGNED); /* [1] int */
btf__add_int(btf5, "int", 4, BTF_INT_SIGNED); /* [2] int */
VALIDATE_RAW_BTF(
btf5,
"[1] INT 'int' size=4 bits_offset=0 nr_bits=32 encoding=SIGNED",
"[2] INT 'int' size=4 bits_offset=0 nr_bits=32 encoding=SIGNED");
ASSERT_EQ(btf__relocate(btf4, btf5), -EINVAL, "relocate_split");
cleanup:
btf__free(btf5);
btf__free(btf4);
btf__free(btf3);
btf__free(btf2);
btf__free(btf1);
}
/* create split reference BTF from vmlinux + split BTF with a few type references;
* ensure the resultant split reference BTF is as expected, containing only types
* needed to disambiguate references from split BTF.
*/
static void test_distilled_base_vmlinux(void)
{
struct btf *split_btf = NULL, *vmlinux_btf = btf__load_vmlinux_btf();
struct btf *split_dist = NULL, *base_dist = NULL;
__s32 int_id, myint_id;
if (!ASSERT_OK_PTR(vmlinux_btf, "load_vmlinux"))
return;
int_id = btf__find_by_name_kind(vmlinux_btf, "int", BTF_KIND_INT);
if (!ASSERT_GT(int_id, 0, "find_int"))
goto cleanup;
split_btf = btf__new_empty_split(vmlinux_btf);
if (!ASSERT_OK_PTR(split_btf, "new_split"))
goto cleanup;
myint_id = btf__add_typedef(split_btf, "myint", int_id);
btf__add_ptr(split_btf, myint_id);
if (!ASSERT_EQ(btf__distill_base(split_btf, &base_dist, &split_dist), 0,
"distill_vmlinux_base"))
goto cleanup;
if (!ASSERT_OK_PTR(split_dist, "split_distilled") ||
!ASSERT_OK_PTR(base_dist, "base_dist"))
goto cleanup;
VALIDATE_RAW_BTF(
split_dist,
"[1] INT 'int' size=4 bits_offset=0 nr_bits=32 encoding=SIGNED",
"[2] TYPEDEF 'myint' type_id=1",
"[3] PTR '(anon)' type_id=2");
cleanup:
btf__free(split_dist);
btf__free(base_dist);
btf__free(split_btf);
btf__free(vmlinux_btf);
}
void test_btf_distill(void)
{
if (test__start_subtest("distilled_base"))
test_distilled_base();
if (test__start_subtest("distilled_base_multi"))
test_distilled_base_multi();
if (test__start_subtest("distilled_base_missing_err"))
test_distilled_base_missing_err();
if (test__start_subtest("distilled_base_multi_err"))
test_distilled_base_multi_err();
if (test__start_subtest("distilled_base_multi_err2"))
test_distilled_base_multi_err2();
if (test__start_subtest("distilled_base_vmlinux"))
test_distilled_base_vmlinux();
}