2018-10-05 23:40:00 +00:00
/* SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) */
2018-04-18 22:56:05 +00:00
/* Copyright (c) 2018 Facebook */
2021-09-15 02:19:52 +00:00
/*! \file */
2018-04-18 22:56:05 +00:00
2018-10-03 22:26:42 +00:00
#ifndef __LIBBPF_BTF_H
#define __LIBBPF_BTF_H
2018-04-18 22:56:05 +00:00
2019-05-24 18:59:03 +00:00
#include <stdarg.h>
libbpf: Allow modification of BTF and add btf__add_str API
Allow internal BTF representation to switch from default read-only mode, in
which raw BTF data is a single non-modifiable block of memory with BTF header,
types, and strings laid out sequentially and contiguously in memory, into
a writable representation with types and strings data split out into separate
memory regions, that can be dynamically expanded.
Such writable internal representation is transparent to users of libbpf APIs,
but allows appending new types and strings at the end of BTF, which is
a typical use case when generating BTF programmatically. All the basic
guarantees of BTF types and strings layout are preserved, i.e., the user can get
`struct btf_type *` pointer and read it directly. Such btf_type pointers might
be invalidated if BTF is modified, so some care is required in such mixed
read/write scenarios.
Switch from read-only to writable configuration happens automatically the
first time the user attempts to modify BTF by either adding a new type or a new
string. It is still possible to get raw BTF data, which is a single piece of
memory that can be persisted in ELF section or into a file as raw BTF. Such
raw data memory is also still owned by BTF and will be freed either when BTF
object is freed or if another modification to BTF happens, as any modification
invalidates BTF raw representation.
This patch adds the first two BTF manipulation APIs: btf__add_str(), which
allows adding arbitrary strings to the BTF string section, and btf__find_str(),
which allows finding an existing string's offset but does not add it if it's missing.
All the added strings are automatically deduplicated. This is achieved by
maintaining an additional string lookup index for all unique strings. Such
index is built when BTF is switched to modifiable mode. If at that time BTF
strings section contained duplicate strings, they are not de-duplicated. This
is done specifically to not modify the existing content of BTF (types, their
string offsets, etc), which can cause confusion and is especially important
property if there is struct btf_ext associated with struct btf. By following
this "imperfect deduplication" process, btf_ext is kept consitent and correct.
If deduplication of strings is necessary, it can be forced by doing BTF
deduplication, at which point all the strings will be eagerly deduplicated and
all string offsets both in struct btf and struct btf_ext will be updated.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20200926011357.2366158-6-andriin@fb.com
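For illustration, a minimal usage sketch of the two string APIs described above (not part of the original patch; it assumes the installed <bpf/btf.h> header and an already-created `btf` object, and the string "example" is made up):

#include <bpf/btf.h>

static int string_apis_example(struct btf *btf)
{
	/* adds "example" to the string section (or reuses an identical
	 * existing string) and returns its offset */
	int off = btf__add_str(btf, "example");

	if (off < 0)
		return off;

	/* only looks the string up; never adds it, so here it returns
	 * the same offset as above */
	return btf__find_str(btf, "example") == off ? 0 : -1;
}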
2020-09-26 01:13:53 +00:00
#include <stdbool.h>
2019-08-07 21:39:48 +00:00
#include <linux/btf.h>
2018-07-24 15:40:21 +00:00
#include <linux/types.h>
2018-04-18 22:56:05 +00:00
2019-12-14 01:43:29 +00:00
#include "libbpf_common.h"
2018-11-21 17:29:44 +00:00
#ifdef __cplusplus
extern "C" {
#endif
2018-04-18 22:56:05 +00:00
#define BTF_ELF_SEC ".BTF"
2018-11-19 23:29:16 +00:00
#define BTF_EXT_ELF_SEC ".BTF.ext"
libbpf: allow specifying map definitions using BTF
This patch adds support for a new way to define BPF maps. It relies on
BTF to describe mandatory and optional attributes of a map, as well as
captures type information of key and value naturally. This eliminates
the need for BPF_ANNOTATE_KV_PAIR hack and ensures key/value sizes are
always in sync with the key/value type.
Relying on BTF, this approach allows for both forward and backward
compatibility w.r.t. extending supported map definition features. By
default, any unrecognized attributes are treated as an error, but it's
possible to relax this using the MAPS_RELAX_COMPAT flag. New attributes added
in the future will need to be optional.
The outline of the new map definition (short, BTF-defined maps) is as follows:
1. All the maps should be defined in .maps ELF section. It's possible to
have both "legacy" map definitions in `maps` sections and BTF-defined
maps in .maps sections. Everything will still work transparently.
2. The map declaration and initialization is done through
a global/static variable of a struct type with a few mandatory and
extra optional fields:
- the type field is mandatory and specifies the type of BPF map;
- key/value fields are mandatory and capture key/value type/size information;
- the max_entries attribute is optional; if max_entries is not specified or
initialized, it has to be provided at runtime through the libbpf API
before loading the bpf_object (see the sketch below);
- map_flags is optional and, if not defined, is assumed to be 0.
3. Key/value fields should be **a pointer** to a type describing
key/value. The pointee type is assumed (and will be recorded as such
and used for size determination) to be a type describing key/value of
the map. This is done to avoid allocating excessive amounts of space in the
corresponding ELF sections for keys/values of big size.
4. As some maps disallow having BTF type ID associated with key/value,
it's possible to specify key/value size explicitly without
associating BTF type ID with it. Use key_size and value_size fields
to do that (see example below).
Here's an example of a simple ARRAY map definition:
struct my_value { int x, y, z; };
struct {
	int type;
	int max_entries;
	int *key;
	struct my_value *value;
} btf_map SEC(".maps") = {
	.type = BPF_MAP_TYPE_ARRAY,
	.max_entries = 16,
};
This will define BPF ARRAY map 'btf_map' with 16 elements. The key will
be of type int and thus key size will be 4 bytes. The value is struct
my_value of size 12 bytes. This map can be used from C code exactly the
same as with existing maps defined through struct bpf_map_def.
Here's an example of STACKMAP definition (which currently disallows BTF type
IDs for key/value):
struct {
	__u32 type;
	__u32 max_entries;
	__u32 map_flags;
	__u32 key_size;
	__u32 value_size;
} stackmap SEC(".maps") = {
	.type = BPF_MAP_TYPE_STACK_TRACE,
	.max_entries = 128,
	.map_flags = BPF_F_STACK_BUILD_ID,
	.key_size = sizeof(__u32),
	.value_size = PERF_MAX_STACK_DEPTH * sizeof(struct bpf_stack_build_id),
};
This approach is naturally extended to support map-in-map, by making the value
field another struct that describes the inner map. This feature is not
implemented yet. It's also possible to incrementally add features like pinning
with full backwards and forward compatibility. Support for static
initialization of BPF_MAP_TYPE_PROG_ARRAY using pointers to BPF programs
is also on the roadmap.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-06-17 19:26:56 +00:00
#define MAPS_ELF_SEC ".maps"
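A hedged sketch of the runtime max_entries flow mentioned in point 2 of the commit message above; the object file name "prog.bpf.o" and map name "btf_map" are hypothetical, and the <bpf/libbpf.h> object/map APIs are assumed:

#include <bpf/libbpf.h>

static int load_with_runtime_max_entries(void)
{
	struct bpf_object *obj;
	struct bpf_map *map;
	int err;

	obj = bpf_object__open_file("prog.bpf.o", NULL);
	if (libbpf_get_error(obj))
		return -1;

	/* the BTF-defined map left max_entries out, so set it before load */
	map = bpf_object__find_map_by_name(obj, "btf_map");
	if (!map)
		return -1;

	err = bpf_map__set_max_entries(map, 1024);
	if (err)
		return err;

	return bpf_object__load(obj);
}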
2018-04-18 22:56:05 +00:00
struct btf;
2018-11-19 23:29:16 +00:00
struct btf_ext;
2018-07-24 15:40:22 +00:00
struct btf_type;
2018-04-18 22:56:05 +00:00
2019-04-09 21:20:14 +00:00
struct bpf_object;
libbpf: Support BTF loading and raw data output in both endianness
Teach BTF to recognize wrong endianness and transparently convert it
internally to host endianness. Original endianness of BTF will be preserved
and used during btf__get_raw_data() to convert resulting raw data to the same
endianness as the source raw_data. This means that a little-endian host can parse
big-endian BTF with no issues; all the type data will be presented to the
client application in native endianness, but when it's time for emitting BTF
to persist it in a file (e.g., after BTF deduplication), original non-native
endianness will be preserved and stored.
It's possible to query original endianness of BTF data with new
btf__endianness() API. It's also possible to override desired output
endianness with btf__set_endianness(), so that if application needs to load,
say, big-endian BTF and store it as little-endian BTF, it's possible to
manually override this. If btf__set_endianness() was used to change
endianness, btf__endianness() will reflect overridden endianness.
Given there are no known use cases for supporting cross-endianness for
.BTF.ext, loading .BTF.ext in non-native endianness is not supported.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200929043046.1324350-3-andriin@fb.com
2020-09-29 04:30:45 +00:00
enum btf_endianness {
	BTF_LITTLE_ENDIAN = 0,
	BTF_BIG_ENDIAN = 1,
};
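As a sketch (not from the original patch), converting loaded BTF to little-endian output before emitting its raw bytes might look like this, assuming the installed <bpf/btf.h> header:

#include <bpf/btf.h>

static const void *emit_little_endian_btf(struct btf *btf, __u32 *size)
{
	/* override the preserved source endianness, if needed */
	if (btf__endianness(btf) != BTF_LITTLE_ENDIAN &&
	    btf__set_endianness(btf, BTF_LITTLE_ENDIAN))
		return NULL;

	/* raw bytes are produced in the requested (little-endian) layout */
	return btf__raw_data(btf, size);
}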
2021-09-15 02:19:52 +00:00
/**
 * @brief **btf__free()** frees all data of a BTF object
 * @param btf BTF object to free
 */
2018-10-16 05:50:34 +00:00
LIBBPF_API void btf__free(struct btf *btf);
libbpf: Implement basic split BTF support
Support split BTF operation, in which one BTF (base BTF) provides basic set of
types and strings, while another one (split BTF) builds on top of base's types
and strings and adds its own new types and strings. From API standpoint, the
fact that the split BTF is built on top of the base BTF is transparent.
Type numbering is transparent. If the base BTF had last type ID #N, then all
types in the split BTF start at type ID N+1. Any type in split BTF can
reference base BTF types, but not vice versa. Programmatic construction of
a split BTF on top of a base BTF is supported: one can create an empty split
BTF with btf__new_empty_split() and pass base BTF as an input, or pass raw
binary data to btf__new_split(), or use btf__parse_xxx_split() variants to get
initial set of split types/strings from the ELF file with .BTF section.
String offsets are similarly transparent and are a logical continuation of
base BTF's strings. When building BTF programmatically and adding a new string
(explicitly with btf__add_str() or implicitly through appending new
types/members), the string-to-be-added would first be looked up from the base
BTF's string section and re-used if it's there. If not, it will be looked up
and/or added to the split BTF string section. Similarly to type IDs, types in
split BTF can refer to strings from base BTF absolutely transparently (but not
vice versa, of course, because base BTF doesn't "know" about existence of
split BTF).
Internal type index is slightly adjusted to be zero-indexed, ignoring a fake
[0] VOID type. This allows handling split/base BTF type lookups transparently
by using btf->start_id type ID offset, which is always 1 for base/non-split
BTF and equals btf__get_nr_types(base_btf) + 1 for the split BTF.
BTF deduplication is not yet supported for split BTF and support for it will
be added in separate patch.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20201105043402.2530976-5-andrii@kernel.org
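A minimal sketch of the split-BTF flow described above, assuming <bpf/btf.h> and <bpf/libbpf.h> (for libbpf_get_error()); the type name "my_int" is made up:

#include <bpf/btf.h>
#include <bpf/libbpf.h>

static struct btf *build_split_btf(void)
{
	struct btf *base, *split;
	int id;

	base = btf__load_vmlinux_btf();
	if (libbpf_get_error(base))
		return NULL;

	split = btf__new_empty_split(base);
	if (libbpf_get_error(split)) {
		btf__free(base);
		return NULL;
	}

	/* the first added type continues the base BTF's ID numbering,
	 * i.e., it gets ID btf__type_cnt(base) */
	id = btf__add_int(split, "my_int", 4, BTF_INT_SIGNED);
	return id > 0 ? split : NULL;
}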
2020-11-05 04:33:54 +00:00
2021-09-15 02:19:52 +00:00
/**
 * @brief **btf__new()** creates a new instance of a BTF object from the raw
 * bytes of an ELF's BTF section
 * @param data raw bytes
 * @param size number of bytes passed in `data`
 * @return new BTF object instance which has to be eventually freed with
 * **btf__free()**
 *
 * On error, error-code-encoded-as-pointer is returned, not a NULL. To extract
 * error code from such a pointer `libbpf_get_error()` should be used. If
 * `libbpf_set_strict_mode(LIBBPF_STRICT_CLEAN_PTRS)` is enabled, NULL is
 * returned on error instead. In both cases thread-local `errno` variable is
 * always set to error code as well.
 */
2020-07-10 01:10:23 +00:00
LIBBPF_API struct btf *btf__new(const void *data, __u32 size);
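A short sketch of the error-handling convention spelled out in the comment above (assuming <bpf/libbpf.h> for libbpf_get_error()):

#include <stdio.h>
#include <bpf/btf.h>
#include <bpf/libbpf.h>

static struct btf *parse_raw_btf(const void *data, __u32 size)
{
	struct btf *btf = btf__new(data, size);
	long err = libbpf_get_error(btf);

	if (err) {
		/* with LIBBPF_STRICT_CLEAN_PTRS, btf would be NULL and
		 * errno would hold the error code instead */
		fprintf(stderr, "failed to parse BTF: %ld\n", err);
		return NULL;
	}
	return btf;
}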
2021-09-15 02:19:52 +00:00
/**
 * @brief **btf__new_split()** create a new instance of a BTF object from the
 * provided raw data bytes. It takes another BTF instance, **base_btf**, which
 * serves as a base BTF, which is extended by types in a newly created BTF
 * instance
 * @param data raw bytes
 * @param size length of raw bytes
 * @param base_btf the base BTF object
 * @return new BTF object instance which has to be eventually freed with
 * **btf__free()**
 *
 * If *base_btf* is NULL, `btf__new_split()` is equivalent to `btf__new()` and
 * creates non-split BTF.
 *
 * On error, error-code-encoded-as-pointer is returned, not a NULL. To extract
 * error code from such a pointer `libbpf_get_error()` should be used. If
 * `libbpf_set_strict_mode(LIBBPF_STRICT_CLEAN_PTRS)` is enabled, NULL is
 * returned on error instead. In both cases thread-local `errno` variable is
 * always set to error code as well.
 */
2020-11-05 04:33:54 +00:00
LIBBPF_API struct btf *btf__new_split(const void *data, __u32 size, struct btf *base_btf);
2021-09-15 02:19:52 +00:00
/**
 * @brief **btf__new_empty()** creates an empty BTF object. Use
 * `btf__add_*()` to populate such BTF object.
 * @return new BTF object instance which has to be eventually freed with
 * **btf__free()**
 *
 * On error, error-code-encoded-as-pointer is returned, not a NULL. To extract
 * error code from such a pointer `libbpf_get_error()` should be used. If
 * `libbpf_set_strict_mode(LIBBPF_STRICT_CLEAN_PTRS)` is enabled, NULL is
 * returned on error instead. In both cases thread-local `errno` variable is
 * always set to error code as well.
 */
2020-09-26 01:13:54 +00:00
LIBBPF_API struct btf *btf__new_empty(void);
2021-09-15 02:19:52 +00:00
/**
 * @brief **btf__new_empty_split()** creates an unpopulated BTF object from an
 * ELF BTF section except with a base BTF on top of which split BTF should be
 * based
 * @return new BTF object instance which has to be eventually freed with
 * **btf__free()**
 *
 * If *base_btf* is NULL, `btf__new_empty_split()` is equivalent to
 * `btf__new_empty()` and creates non-split BTF.
 *
 * On error, error-code-encoded-as-pointer is returned, not a NULL. To extract
 * error code from such a pointer `libbpf_get_error()` should be used. If
 * `libbpf_set_strict_mode(LIBBPF_STRICT_CLEAN_PTRS)` is enabled, NULL is
 * returned on error instead. In both cases thread-local `errno` variable is
 * always set to error code as well.
 */
2020-11-05 04:33:54 +00:00
LIBBPF_API struct btf *btf__new_empty_split(struct btf *base_btf);
2020-08-02 01:32:17 +00:00
LIBBPF_API struct btf *btf__parse(const char *path, struct btf_ext **btf_ext);
2020-11-05 04:33:54 +00:00
LIBBPF_API struct btf *btf__parse_split(const char *path, struct btf *base_btf);
2020-08-02 01:32:17 +00:00
LIBBPF_API struct btf *btf__parse_elf(const char *path, struct btf_ext **btf_ext);
2020-11-05 04:33:54 +00:00
LIBBPF_API struct btf *btf__parse_elf_split(const char *path, struct btf *base_btf);
2020-08-02 01:32:17 +00:00
LIBBPF_API struct btf *btf__parse_raw(const char *path);
2020-11-05 04:33:54 +00:00
LIBBPF_API struct btf *btf__parse_raw_split(const char *path, struct btf *base_btf);
2021-07-30 11:40:12 +00:00
LIBBPF_API struct btf *btf__load_vmlinux_btf(void);
LIBBPF_API struct btf *btf__load_module_btf(const char *module_name, struct btf *vmlinux_btf);
LIBBPF_API struct btf *libbpf_find_kernel_btf(void);
2021-07-29 16:20:23 +00:00
LIBBPF_API struct btf *btf__load_from_kernel_by_id(__u32 id);
2021-07-29 16:20:27 +00:00
LIBBPF_API struct btf *btf__load_from_kernel_by_id_split(__u32 id, struct btf *base_btf);
libbpf: Add LIBBPF_DEPRECATED_SINCE macro for scheduling API deprecations
Introduce a macro LIBBPF_DEPRECATED_SINCE(major, minor, message) to prepare
the deprecation of two API functions. This macro marks functions as deprecated
when libbpf's version reaches the values passed as an argument.
As part of this change libbpf_version.h header is added with recorded major
(LIBBPF_MAJOR_VERSION) and minor (LIBBPF_MINOR_VERSION) libbpf version macros.
They are now part of libbpf public API and can be relied upon by user code.
libbpf_version.h is installed system-wide alongside other libbpf public headers.
Due to this new build-time auto-generated header, in-kernel applications
relying on libbpf (resolve_btfids, bpftool, bpf_preload) are updated to
include libbpf's output directory as part of a list of include search paths.
A better fix would be to use libbpf's make_install target to install public API
headers, but that cleanup is left as a future improvement. The build
changes were tested by building kernel (with KBUILD_OUTPUT and O= specified
explicitly), bpftool, libbpf, selftests/bpf, and resolve_btfids builds. No
problems were detected.
Note that because of the constraints of the C preprocessor we have to write
a few lines of macro magic for each version used to prepare deprecation (0.6
for now).
Also, use LIBBPF_DEPRECATED_SINCE() to schedule deprecation of
btf__get_from_id() and btf__load(), which are replaced by
btf__load_from_kernel_by_id() and btf__load_into_kernel(), respectively,
starting from future libbpf v0.6. This is part of libbpf 1.0 effort ([0]).
[0] Closes: https://github.com/libbpf/libbpf/issues/278
Co-developed-by: Quentin Monnet <quentin@isovalent.com>
Co-developed-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210908213226.1871016-1-andrii@kernel.org
2021-09-08 21:32:26 +00:00
LIBBPF_DEPRECATED_SINCE(0, 6, "use btf__load_from_kernel_by_id instead")
2021-07-29 16:20:23 +00:00
LIBBPF_API int btf__get_from_id(__u32 id, struct btf **btf);
2021-10-21 01:43:55 +00:00
LIBBPF_DEPRECATED_SINCE(0, 6, "intended for internal libbpf use only")
2019-04-09 21:20:14 +00:00
LIBBPF_API int btf__finalize_data(struct bpf_object *obj, struct btf *btf);
2021-09-08 21:32:26 +00:00
LIBBPF_DEPRECATED_SINCE(0, 6, "use btf__load_into_kernel instead")
2019-02-08 19:19:36 +00:00
LIBBPF_API int btf__load(struct btf *btf);
2021-07-29 16:20:22 +00:00
LIBBPF_API int btf__load_into_kernel(struct btf *btf);
2018-10-16 05:50:34 +00:00
LIBBPF_API __s32 btf__find_by_name(const struct btf *btf,
				   const char *type_name);
2019-11-14 18:57:05 +00:00
LIBBPF_API __s32 btf__find_by_name_kind(const struct btf *btf,
					const char *type_name, __u32 kind);
2021-10-22 13:06:19 +00:00
LIBBPF_DEPRECATED_SINCE(0, 7, "use btf__type_cnt() instead; note that btf__get_nr_types() == btf__type_cnt() - 1")
2019-02-05 01:29:46 +00:00
LIBBPF_API __u32 btf__get_nr_types(const struct btf *btf);
2021-10-22 13:06:19 +00:00
LIBBPF_API __u32 btf__type_cnt(const struct btf *btf);
2020-12-02 06:52:42 +00:00
LIBBPF_API const struct btf *btf__base_btf(const struct btf *btf);
2018-10-16 05:50:34 +00:00
LIBBPF_API const struct btf_type *btf__type_by_id(const struct btf *btf,
						  __u32 id);
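For example, iterating over all types with btf__type_cnt() and btf__type_by_id() (a sketch; valid IDs start at 1, since ID 0 is the implicit VOID type):

#include <stdio.h>
#include <bpf/btf.h>

static void print_type_names(const struct btf *btf)
{
	__u32 id, n = btf__type_cnt(btf);

	for (id = 1; id < n; id++) {
		const struct btf_type *t = btf__type_by_id(btf, id);

		/* anonymous types print with an empty name */
		printf("[%u] %s\n", id, btf__name_by_offset(btf, t->name_off));
	}
}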
libbpf: Handle BTF pointer sizes more carefully
With libbpf and BTF it is pretty common to have libbpf built for one
architecture, while BTF information was generated for a different architecture
(typically, but not always, BPF). In such a case, the size of a pointer might
differ between architectures. libbpf previously always made the
assumption that the pointer size for BTF is the same as the native architecture
pointer size, but that breaks for cases where libbpf is built as a 32-bit
library, while BTF is for a 64-bit architecture.
To solve this, add a heuristic to determine pointer size by searching for a `long`
or `unsigned long` integer type and using its size as the pointer size. Also,
allow overriding the pointer size with a new API, btf__set_pointer_size(), for
cases where application knows which pointer size should be used. User
application can check what libbpf "guessed" by looking at the result of
btf__pointer_size(). If it's not 0, then libbpf successfully determined a
pointer size, otherwise native arch pointer size will be used.
For cases where BTF is parsed from ELF file, use ELF's class (32-bit or
64-bit) to determine pointer size.
Fixes: 8a138aed4a80 ("bpf: btf: Add BTF support to libbpf")
Fixes: 351131b51c7a ("libbpf: add btf_dump API for BTF-to-C conversion")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200813204945.1020225-5-andriin@fb.com
2020-08-13 20:49:40 +00:00
LIBBPF_API size_t btf__pointer_size(const struct btf *btf);
LIBBPF_API int btf__set_pointer_size(struct btf *btf, size_t ptr_sz);
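A small sketch of the guess/override behavior described in the commit message above (8 bytes is assumed here as the pointer size of a 64-bit target):

#include <bpf/btf.h>

static int ensure_ptr_size(struct btf *btf)
{
	/* 0 means libbpf could not guess the pointer size from BTF */
	if (btf__pointer_size(btf) == 0)
		return btf__set_pointer_size(btf, 8);
	return 0;
}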
2020-09-29 04:30:45 +00:00
LIBBPF_API enum btf_endianness btf__endianness(const struct btf *btf);
LIBBPF_API int btf__set_endianness(struct btf *btf, enum btf_endianness endian);
2018-10-16 05:50:34 +00:00
LIBBPF_API __s64 btf__resolve_size(const struct btf *btf, __u32 type_id);
LIBBPF_API int btf__resolve_type(const struct btf *btf, __u32 type_id);
2019-12-14 01:43:30 +00:00
LIBBPF_API int btf__align_of(const struct btf *btf, __u32 id);
2018-10-16 05:50:34 +00:00
LIBBPF_API int btf__fd(const struct btf *btf);
2020-07-08 01:53:14 +00:00
LIBBPF_API void btf__set_fd(struct btf *btf, int fd);
2021-10-22 13:06:19 +00:00
LIBBPF_API const void *btf__raw_data(const struct btf *btf, __u32 *size);
2018-10-16 05:50:34 +00:00
LIBBPF_API const char *btf__name_by_offset(const struct btf *btf, __u32 offset);
2020-09-29 02:05:31 +00:00
LIBBPF_API const char *btf__str_by_offset(const struct btf *btf, __u32 offset);
2022-02-03 22:50:17 +00:00
LIBBPF_DEPRECATED_SINCE(0, 7, "this API is not necessary when BTF-defined maps are used")
2019-02-05 19:48:22 +00:00
LIBBPF_API int btf__get_map_kv_tids(const struct btf *btf, const char *map_name,
2019-02-04 19:00:58 +00:00
				    __u32 expected_key_size,
				    __u32 expected_value_size,
				    __u32 *key_type_id, __u32 *value_type_id);
2018-04-18 22:56:05 +00:00
2021-11-24 00:23:14 +00:00
LIBBPF_API struct btf_ext *btf_ext__new(const __u8 *data, __u32 size);
2019-02-04 19:00:57 +00:00
LIBBPF_API void btf_ext__free(struct btf_ext *btf_ext);
2022-01-24 19:42:48 +00:00
LIBBPF_API const void *btf_ext__raw_data(const struct btf_ext *btf_ext, __u32 *size);
2020-09-03 20:35:33 +00:00
LIBBPF_API LIBBPF_DEPRECATED("btf_ext__reloc_func_info was never meant as a public API and has wrong assumptions embedded in it; it will be removed in the future libbpf versions")
int btf_ext__reloc_func_info(const struct btf *btf,
			     const struct btf_ext *btf_ext,
			     const char *sec_name, __u32 insns_cnt,
			     void **func_info, __u32 *cnt);
LIBBPF_API LIBBPF_DEPRECATED("btf_ext__reloc_line_info was never meant as a public API and has wrong assumptions embedded in it; it will be removed in the future libbpf versions")
int btf_ext__reloc_line_info(const struct btf *btf,
			     const struct btf_ext *btf_ext,
			     const char *sec_name, __u32 insns_cnt,
			     void **line_info, __u32 *cnt);
2022-02-01 01:46:10 +00:00
LIBBPF_API LIBBPF_DEPRECATED("btf_ext__reloc_func_info is deprecated; write custom func_info parsing to fetch rec_size")
__u32 btf_ext__func_info_rec_size(const struct btf_ext *btf_ext);
LIBBPF_API LIBBPF_DEPRECATED("btf_ext__reloc_line_info is deprecated; write custom line_info parsing to fetch rec_size")
__u32 btf_ext__line_info_rec_size(const struct btf_ext *btf_ext);
2018-11-19 23:29:16 +00:00
2020-09-26 01:13:53 +00:00
LIBBPF_API int btf__find_str(struct btf *btf, const char *s);
LIBBPF_API int btf__add_str(struct btf *btf, const char *s);
2021-03-18 19:40:29 +00:00
LIBBPF_API int btf__add_type(struct btf *btf, const struct btf *src_btf,
			     const struct btf_type *src_type);
2021-10-06 05:11:05 +00:00
/**
 * @brief **btf__add_btf()** appends all the BTF types from *src_btf* into *btf*
 * @param btf BTF object which all the BTF types and strings are added to
 * @param src_btf BTF object which all BTF types and referenced strings are copied from
 * @return BTF type ID of the first appended BTF type, or negative error code
 *
 * **btf__add_btf()** can be used to simply and efficiently append the entire
 * contents of one BTF object to another one. All the BTF type data is copied
 * over, all referenced type IDs are adjusted by adding a necessary ID offset.
 * Only strings referenced from BTF types are copied over and deduplicated, so
 * if there were some unused strings in *src_btf*, those won't be copied over,
 * which is consistent with the general string deduplication semantics of BTF
 * writing APIs.
 *
 * If any error is encountered during this process, the contents of *btf* is
 * left intact, which means that **btf__add_btf()** follows the transactional
 * semantics and the operation as a whole is all-or-nothing.
 *
 * *src_btf* has to be non-split BTF, as of now copying types from split BTF
 * is not supported and will result in -ENOTSUP error code returned.
 */
LIBBPF_API int btf__add_btf(struct btf *btf, const struct btf *src_btf);
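A hedged sketch of appending one BTF object to another, per the comment above (assuming <bpf/btf.h>):

#include <bpf/btf.h>

static int append_all_types(struct btf *dst, const struct btf *src)
{
	int first_id = btf__add_btf(dst, src);

	if (first_id < 0)
		return first_id; /* dst is left intact on error */

	/* a type that had ID N in src now has ID first_id + N - 1 in dst */
	return first_id;
}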
2020-09-26 01:13:53 +00:00
libbpf: Add BTF writing APIs
Add APIs for appending new BTF types at the end of BTF object.
Each BTF kind has one API of the form btf__add_<kind>(). For types
that have a variable number of additional items (struct/union, enum, func_proto,
datasec), additional API is provided to emit each such item. E.g., for
emitting a struct, one would use the following sequence of API calls:
btf__add_struct(...);
btf__add_field(...);
...
btf__add_field(...);
Each btf__add_field() will ensure that the last BTF type is of STRUCT or
UNION kind and will automatically increment that type's vlen field.
All the strings are provided as C strings (const char *), not a string offset.
This significantly improves usability of BTF writer APIs. All such strings
will be automatically appended to the string section, or an existing string will be
reused if such a string was already added previously.
Each API attempts to do all the reasonable validations, like enforcing
non-empty names for entities with required names, proper value bounds, various
bit offset restrictions, etc.
Type ID validation is minimal because it's possible to emit a type that refers
to a type that will be emitted later, so libbpf has no way to enforce such
cases. User must be careful to properly emit all the necessary types and
specify type IDs that will be valid in the finally generated BTF.
Each of the btf__add_<kind>() APIs returns the new type ID on success or a negative
value on error. APIs like btf__add_field() that emit additional items
return zero on success and a negative value on error.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20200929020533.711288-2-andriin@fb.com
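A concrete (made-up) sketch of the struct-emitting sequence mentioned above, producing BTF for `struct pair { int x; int y; };`, assuming <bpf/btf.h>:

#include <bpf/btf.h>

static int emit_pair_struct(struct btf *btf)
{
	int int_id, struct_id;

	int_id = btf__add_int(btf, "int", 4, BTF_INT_SIGNED);
	if (int_id < 0)
		return int_id;

	struct_id = btf__add_struct(btf, "pair", 8); /* 8-byte struct */
	if (struct_id < 0)
		return struct_id;

	/* btf__add_field() returns 0 on success, unlike btf__add_<kind>() */
	if (btf__add_field(btf, "x", int_id, 0, 0) ||
	    btf__add_field(btf, "y", int_id, 32, 0))
		return -1;

	return struct_id;
}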
2020-09-29 02:05:30 +00:00
LIBBPF_API int btf__add_int(struct btf *btf, const char *name, size_t byte_sz, int encoding);
2021-02-26 20:22:49 +00:00
LIBBPF_API int btf__add_float(struct btf *btf, const char *name, size_t byte_sz);
2020-09-29 02:05:30 +00:00
LIBBPF_API int btf__add_ptr(struct btf *btf, int ref_type_id);
LIBBPF_API int btf__add_array(struct btf *btf,
			      int index_type_id, int elem_type_id, __u32 nr_elems);
/* struct/union construction APIs */
LIBBPF_API int btf__add_struct(struct btf *btf, const char *name, __u32 sz);
LIBBPF_API int btf__add_union(struct btf *btf, const char *name, __u32 sz);
LIBBPF_API int btf__add_field(struct btf *btf, const char *name, int field_type_id,
			      __u32 bit_offset, __u32 bit_size);
/* enum construction APIs */
LIBBPF_API int btf__add_enum(struct btf *btf, const char *name, __u32 bytes_sz);
LIBBPF_API int btf__add_enum_value(struct btf *btf, const char *name, __s64 value);
enum btf_fwd_kind {
	BTF_FWD_STRUCT = 0,
	BTF_FWD_UNION = 1,
	BTF_FWD_ENUM = 2,
};
LIBBPF_API int btf__add_fwd(struct btf *btf, const char *name, enum btf_fwd_kind fwd_kind);
LIBBPF_API int btf__add_typedef(struct btf *btf, const char *name, int ref_type_id);
LIBBPF_API int btf__add_volatile(struct btf *btf, int ref_type_id);
LIBBPF_API int btf__add_const(struct btf *btf, int ref_type_id);
LIBBPF_API int btf__add_restrict(struct btf *btf, int ref_type_id);
2021-11-12 01:26:14 +00:00
LIBBPF_API int btf__add_type_tag(struct btf *btf, const char *value, int ref_type_id);
2020-09-29 02:05:30 +00:00
/* func and func_proto construction APIs */
LIBBPF_API int btf__add_func(struct btf *btf, const char *name,
			     enum btf_func_linkage linkage, int proto_type_id);
LIBBPF_API int btf__add_func_proto(struct btf *btf, int ret_type_id);
LIBBPF_API int btf__add_func_param(struct btf *btf, const char *name, int type_id);
/* var & datasec construction APIs */
LIBBPF_API int btf__add_var(struct btf *btf, const char *name, int linkage, int type_id);
LIBBPF_API int btf__add_datasec(struct btf *btf, const char *name, __u32 byte_sz);
LIBBPF_API int btf__add_datasec_var_info(struct btf *btf, int var_type_id,
					 __u32 offset, __u32 byte_sz);
2021-09-14 22:30:25 +00:00
/* tag construction API */
2021-10-12 16:48:38 +00:00
LIBBPF_API int btf__add_decl_tag(struct btf *btf, const char *value, int ref_type_id,
2021-09-14 22:30:25 +00:00
			 int component_idx);
2019-02-05 01:29:45 +00:00
struct btf_dedup_opts {
libbpf: Turn btf_dedup_opts into OPTS-based struct
btf__dedup() and struct btf_dedup_opts were added before we figured out
OPTS mechanism. As such, btf_dedup_opts is non-extensible without
breaking an ABI and potentially crashing user application.
Unfortunately, btf__dedup() and btf_dedup_opts are short and succinct
names that would be great to preserve and use going forward. So we use
___libbpf_override() macro approach, used previously for bpf_prog_load()
API, to define a new btf__dedup() variant that accepts only struct btf *
and struct btf_dedup_opts * arguments, and rename the old btf__dedup()
implementation into btf__dedup_deprecated(). This keeps both source and
binary compatibility with old and new applications.
The biggest problem was struct btf_dedup_opts, which wasn't OPTS-based,
and as such doesn't have `size_t sz;` as a first field. But btf__dedup()
is a pretty rarely used API and I believe that the only currently known
users (besides selftests) are libbpf's own bpf_linker and pahole.
Neither use case actually uses options and just passes NULL. So instead
of doing extra hacks, just rewrite struct btf_dedup_opts into OPTS-based
one, move btf_ext argument into those opts (only bpf_linker needs to
dedup btf_ext, so it's not a typical thing to specify), and drop never
used `dont_resolve_fwds` option (it was never used anywhere, AFAIK, it
makes BTF dedup much less useful and efficient).
Just in case, for the old implementation, btf__dedup_deprecated(), detect
non-NULL options and error out with a helpful message, to help users
migrate, if there are any users playing with btf__dedup().
The last remaining piece is dedup_table_size, which is another
anachronism from very early days of BTF dedup. Since then it has been
reduced to the only valid value, 1, to request forced hash collisions.
This is only used during testing. So instead introduce a bool flag to
force collisions explicitly.
This patch also adapts selftests to new btf__dedup() and btf_dedup_opts
use to avoid selftests breakage.
[0] Closes: https://github.com/libbpf/libbpf/issues/281
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211111053624.190580-4-andrii@kernel.org
2021-11-11 05:36:18 +00:00
	size_t sz;
	/* optional .BTF.ext info to dedup along the main BTF info */
	struct btf_ext *btf_ext;
	/* force hash collisions (used for testing) */
	bool force_collisions;
	size_t :0;
2019-02-05 01:29:45 +00:00
};
2021-11-11 05:36:18 +00:00
#define btf_dedup_opts__last_field force_collisions
2019-02-05 01:29:45 +00:00
2021-11-11 05:36:18 +00:00
LIBBPF_API int btf__dedup(struct btf *btf, const struct btf_dedup_opts *opts);
LIBBPF_API int btf__dedup_v0_6_0(struct btf *btf, const struct btf_dedup_opts *opts);
LIBBPF_DEPRECATED_SINCE(0, 7, "use btf__dedup() instead")
LIBBPF_API int btf__dedup_deprecated(struct btf *btf, struct btf_ext *btf_ext, const void *opts);
#define btf__dedup(...) ___libbpf_overload(___btf_dedup, __VA_ARGS__)
#define ___btf_dedup3(btf, btf_ext, opts) btf__dedup_deprecated(btf, btf_ext, opts)
#define ___btf_dedup2(btf, opts) btf__dedup(btf, opts)
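For illustration only (not part of the header), a minimal sketch of the
new-style call, assuming the LIBBPF_OPTS() helper from libbpf_common.h and
a btf/btf_ext pair loaded elsewhere (e.g. via btf__parse_elf()):

static int dedup_btf_example(struct btf *btf, struct btf_ext *btf_ext)
{
	/* zero-initializes opts and sets .sz automatically */
	LIBBPF_OPTS(btf_dedup_opts, opts, .btf_ext = btf_ext);

	/* returns 0 on success, a negative error code otherwise */
	return btf__dedup(btf, &opts);
}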
2019-02-05 01:29:45 +00:00
2019-05-24 18:59:03 +00:00
struct btf_dump;
struct btf_dump_opts {
libbpf: Ensure btf_dump__new() and btf_dump_opts are future-proof
Change btf_dump__new() and the corresponding struct btf_dump_opts structure
to be extensible by using the OPTS "framework" ([0]). Given we don't change
the names, we use a similar approach as with bpf_prog_load(), but this
time we ended up with two APIs with the same name and same number of
arguments, so overloading based on the number of arguments with
___libbpf_override() doesn't work.
Instead, use "overloading" based on types. In this particular case, the
print callback has to be specified, so we detect which argument is
a callback. If it's the 4th (last) argument, the old implementation of the
API is used by user code. If not, it must be the 2nd, and thus the new
implementation is selected. The rest is handled by the same symbol
versioning approach.
btf_ext argument is dropped as it was never used and isn't necessary
either. If in the future we'll need btf_ext, that will be added into
OPTS-based struct btf_dump_opts.
struct btf_dump_opts is reused for both old API and new APIs. ctx field
is marked deprecated in v0.7+ and it's put at the same memory location
as OPTS's sz field. Any user of the new-style btf_dump__new() will have to
set the sz field and shouldn't use ctx, as ctx is now passed to the
callback as a mandatory input argument, following the other APIs in
libbpf that accept callbacks consistently.
Again, this is quite ugly in implementation, but is done in the name of
backwards compatibility and uniform and extensible future APIs (at the
same time, sigh). And it will be gone in libbpf 1.0.
[0] Closes: https://github.com/libbpf/libbpf/issues/283
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211111053624.190580-5-andrii@kernel.org
2021-11-11 05:36:19 +00:00
	union {
		size_t sz;
		void *ctx; /* DEPRECATED: will be gone in v1.0 */
	};
2019-05-24 18:59:03 +00:00
};
typedef void (*btf_dump_printf_fn_t)(void *ctx, const char *fmt, va_list args);
LIBBPF_API struct btf_dump *btf_dump__new(const struct btf *btf,
2021-11-11 05:36:19 +00:00
					  btf_dump_printf_fn_t printf_fn,
					  void *ctx,
					  const struct btf_dump_opts *opts);
LIBBPF_API struct btf_dump *btf_dump__new_v0_6_0(const struct btf *btf,
						  btf_dump_printf_fn_t printf_fn,
						  void *ctx,
						  const struct btf_dump_opts *opts);
LIBBPF_API struct btf_dump *btf_dump__new_deprecated(const struct btf *btf,
						      const struct btf_ext *btf_ext,
						      const struct btf_dump_opts *opts,
						      btf_dump_printf_fn_t printf_fn);
/* Choose either btf_dump__new() or btf_dump__new_deprecated() based on the
 * type of the 4th argument. If it's btf_dump's print callback, use the
 * deprecated API; otherwise, choose the new btf_dump__new(). ___libbpf_override()
 * doesn't work here because both variants have 4 input arguments.
 *
 * (void *) casts are necessary to avoid compilation warnings about type
 * mismatches, because even though __builtin_choose_expr() only ever evaluates
 * one side, the other side still has to satisfy type constraints (this is
 * a compiler implementation limitation which might be lifted eventually,
 * according to the documentation). So passing struct btf_ext in place of
 * btf_dump_printf_fn_t would generate a compilation warning. Casting to
 * void * avoids this issue.
 *
 * Also, two type compatibility checks for a function and a function pointer
 * are required because passing a function reference into btf_dump__new() as
 * btf_dump__new(..., my_callback, ...) and as btf_dump__new(...,
 * &my_callback, ...) (note the explicit ampersand in the latter case) actually
 * differs as far as __builtin_types_compatible_p() is concerned. Thus two
 * checks are combined to detect the callback argument.
 *
 * The rest works just like in the case of ___libbpf_override() usage with
 * symbol versioning.
2021-12-23 13:17:35 +00:00
 *
 * C++ compilers don't support __builtin_types_compatible_p(), so at least
 * don't screw up compilation for them and let C++ users pick btf_dump__new
 * vs btf_dump__new_deprecated explicitly.
2021-11-11 05:36:19 +00:00
*/
2021-12-23 13:17:35 +00:00
#ifndef __cplusplus
2021-11-11 05:36:19 +00:00
#define btf_dump__new(a1, a2, a3, a4) __builtin_choose_expr(				\
	__builtin_types_compatible_p(typeof(a4), btf_dump_printf_fn_t) ||		\
	__builtin_types_compatible_p(typeof(a4), void (void *, const char *, va_list)),\
	btf_dump__new_deprecated((void *)a1, (void *)a2, (void *)a3, (void *)a4),	\
	btf_dump__new((void *)a1, (void *)a2, (void *)a3, (void *)a4))
2021-12-23 13:17:35 +00:00
#endif
2021-11-11 05:36:19 +00:00
2019-05-24 18:59:03 +00:00
LIBBPF_API void btf_dump__free(struct btf_dump *d);
LIBBPF_API int btf_dump__dump_type(struct btf_dump *d, __u32 id);
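For reference, a minimal sketch of the new-style usage (assumes <stdio.h>;
error handling is abbreviated and depends on libbpf's error reporting mode):

static void print_to_stdout(void *ctx, const char *fmt, va_list args)
{
	vprintf(fmt, args);
}

static int dump_one_type_example(const struct btf *btf, __u32 type_id)
{
	struct btf_dump *d;
	int err;

	/* new-style variant: (btf, printf_fn, ctx, opts); opts may be NULL */
	d = btf_dump__new(btf, print_to_stdout, NULL, NULL);
	if (!d)
		return -1;

	err = btf_dump__dump_type(d, type_id);
	btf_dump__free(d);
	return err;
}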
2019-12-14 01:43:31 +00:00
struct btf_dump_emit_type_decl_opts {
	/* size of this struct, for forward/backward compatibility */
	size_t sz;
	/* optional field name for type declaration, e.g.:
	 * - struct my_struct <FNAME>
	 * - void (*<FNAME>)(int)
	 * - char (*<FNAME>)[123]
	 */
	const char *field_name;
	/* extra indentation level (in number of tabs) to emit for multi-line
	 * type declarations (e.g., anonymous struct); applies for lines
	 * starting from the second one (first line is assumed to have
	 * necessary indentation already)
	 */
	int indent_level;
2020-07-13 23:24:08 +00:00
	/* strip all the const/volatile/restrict mods */
	bool strip_mods;
2021-03-19 19:21:17 +00:00
	size_t :0;
2019-12-14 01:43:31 +00:00
};
2020-07-13 23:24:09 +00:00
#define btf_dump_emit_type_decl_opts__last_field strip_mods
2019-12-14 01:43:31 +00:00
LIBBPF_API int
btf_dump__emit_type_decl(struct btf_dump *d, __u32 id,
			 const struct btf_dump_emit_type_decl_opts *opts);
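As an illustration (assuming a btf_dump created as above and the
LIBBPF_OPTS() helper), emitting a variable declaration for a given type ID
could look like this; the "my_var" name is a hypothetical placeholder:

static int emit_decl_example(struct btf_dump *d, __u32 type_id)
{
	LIBBPF_OPTS(btf_dump_emit_type_decl_opts, opts,
		.field_name = "my_var",	/* hypothetical variable name */
	);

	/* emits, e.g., "struct my_struct my_var" through the dump's printf_fn */
	return btf_dump__emit_type_decl(d, type_id, &opts);
}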
libbpf: BTF dumper support for typed data
Add a BTF dumper for typed data, so that the user can dump a typed
version of the data provided.
The API is
int btf_dump__dump_type_data(struct btf_dump *d, __u32 id,
void *data, size_t data_sz,
const struct btf_dump_type_data_opts *opts);
...where the id is the BTF id of the data pointed to by the "void *"
argument; for example, the BTF id of "struct sk_buff" for a
"struct sk_buff *" data pointer. Options supported are
- a starting indent level (indent_lvl)
- a user-specified indent string which will be printed once per
indent level; if NULL, tab is chosen but any string <= 32 chars
can be provided.
- a set of boolean options to control dump display, similar to those
used for BPF helper bpf_snprintf_btf(). Options are
- compact : omit newlines and other indentation
- skip_names: omit member names
- emit_zeroes: show zero-value members
Default output format is identical to that dumped by bpf_snprintf_btf(),
for example a "struct sk_buff" representation would look like this:
(struct sk_buff){
	(union){
		(struct){
			.next = (struct sk_buff *)0xffffffffffffffff,
			.prev = (struct sk_buff *)0xffffffffffffffff,
			(union){
				.dev = (struct net_device *)0xffffffffffffffff,
				.dev_scratch = (long unsigned int)18446744073709551615,
			},
		},
...
If the data structure is larger than the *data_sz*
number of bytes that are available in *data*, as much
of the data as possible will be dumped and -E2BIG will
be returned. This is useful as tracers will sometimes
not be able to capture all of the data associated with
a type; for example a "struct task_struct" is ~16k.
Being able to specify that only a subset is available is
important for such cases. On success, the amount of data
dumped is returned.
Signed-off-by: Alan Maguire <alan.maguire@oracle.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/1626362126-27775-2-git-send-email-alan.maguire@oracle.com
2021-07-15 15:15:24 +00:00
struct btf_dump_type_data_opts {
	/* size of this struct, for forward/backward compatibility */
	size_t sz;
	const char *indent_str;
	int indent_level;
	/* below match "show" flags for bpf_show_snprintf() */
	bool compact;		/* no newlines/indentation */
	bool skip_names;	/* skip member/type names */
	bool emit_zeroes;	/* show 0-valued fields */
	size_t :0;
};
#define btf_dump_type_data_opts__last_field emit_zeroes
LIBBPF_API int
btf_dump__dump_type_data(struct btf_dump *d, __u32 id,
			 const void *data, size_t data_sz,
			 const struct btf_dump_type_data_opts *opts);
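A small usage sketch (for illustration only; assumes the data buffer was
captured elsewhere, e.g. from a tracing ring buffer, and the LIBBPF_OPTS()
helper):

static int dump_data_example(struct btf_dump *d, __u32 type_id,
			     const void *data, size_t data_sz)
{
	LIBBPF_OPTS(btf_dump_type_data_opts, opts,
		.compact = true,	/* single-line output */
		.emit_zeroes = false,	/* omit zero-valued fields */
	);

	/* returns the number of bytes dumped on success, -E2BIG if data_sz
	 * covers only part of the type, or another negative error code
	 */
	return btf_dump__dump_type_data(d, type_id, data, data_sz, &opts);
}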
2019-08-07 21:39:48 +00:00
/*
2022-01-18 14:13:27 +00:00
 * A set of helpers for easier BTF types handling.
 *
 * The inline functions below rely on constants from the kernel headers which
 * may not be available for applications including this header file. To avoid
 * compilation errors, we define all the constants here that were added after
 * the initial introduction of the BTF_KIND* constants.
2019-08-07 21:39:48 +00:00
*/
2022-01-18 14:13:27 +00:00
#ifndef BTF_KIND_FUNC
#define BTF_KIND_FUNC		12	/* Function */
#define BTF_KIND_FUNC_PROTO	13	/* Function Proto */
#endif
#ifndef BTF_KIND_VAR
#define BTF_KIND_VAR		14	/* Variable */
#define BTF_KIND_DATASEC	15	/* Section */
#endif
#ifndef BTF_KIND_FLOAT
#define BTF_KIND_FLOAT		16	/* Floating point */
#endif
/* The kernel header switched to enums, so these two were never #defined */
#define BTF_KIND_DECL_TAG	17	/* Decl Tag */
#define BTF_KIND_TYPE_TAG	18	/* Type Tag */
2019-08-07 21:39:48 +00:00
static inline __u16 btf_kind(const struct btf_type *t)
{
	return BTF_INFO_KIND(t->info);
}
static inline __u16 btf_vlen(const struct btf_type *t)
{
	return BTF_INFO_VLEN(t->info);
}
static inline bool btf_kflag(const struct btf_type *t)
{
	return BTF_INFO_KFLAG(t->info);
}
libbpf: Add support for extracting kernel symbol addresses
Add support for another (in addition to existing Kconfig) special kind of
externs in BPF code, kernel symbol externs. Such externs allow BPF code to
"know" kernel symbol address and either use it for comparisons with kernel
data structures (e.g., struct file's f_op pointer, to distinguish different
kinds of file), or, with the help of bpf_probe_read_kernel(), to follow
pointers and read data from global variables. Kernel symbol addresses are
found through /proc/kallsyms, which should be present in the system.
Currently, such kernel symbol variables are typeless: they have to be defined
as `extern const void <symbol>` and the only operation you can do (in C code)
with them is to take their address. Such externs should reside in a special
section, '.ksyms'. The bpf_helpers.h header provides the __ksym macro for this.
Strong vs. weak semantics stay the same as with Kconfig externs. If a symbol
is not found in /proc/kallsyms, this is a failure for a strong (non-weak)
extern, but a weak extern is defaulted to 0.
If the same symbol is defined multiple times in /proc/kallsyms, it is an
error if any of the associated addresses differ. In that case, the address is
ambiguous, so libbpf errs on the side of caution rather than confusing the
user with a randomly chosen address.
In the future, once kernel is extended with variables BTF information, such
ksym externs will be supported in a typed version, which will allow BPF
program to read variable's contents directly, similarly to how it's done for
fentry/fexit input arguments.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Hao Luo <haoluo@google.com>
Link: https://lore.kernel.org/bpf/20200619231703.738941-3-andriin@fb.com
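For context, a BPF-side sketch of such a typeless ksym extern (the symbol
name and program section are illustrative; assumes bpf_helpers.h for __ksym
and SEC()):

#include <bpf/bpf_helpers.h>

extern const void bpf_prog_fops __ksym;	/* example kernel symbol */

SEC("kprobe/do_sys_open")
int ksym_addr_example(void *ctx)
{
	/* the only valid operation on a typeless ksym is taking its address */
	unsigned long addr = (unsigned long)&bpf_prog_fops;

	return addr != 0;
}

char LICENSE[] SEC("license") = "GPL";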
2020-06-19 23:16:56 +00:00
static inline bool btf_is_void(const struct btf_type *t)
{
	return btf_kind(t) == BTF_KIND_UNKN;
}
2019-08-07 21:39:48 +00:00
static inline bool btf_is_int(const struct btf_type *t)
{
	return btf_kind(t) == BTF_KIND_INT;
}
static inline bool btf_is_ptr(const struct btf_type *t)
{
	return btf_kind(t) == BTF_KIND_PTR;
}
static inline bool btf_is_array(const struct btf_type *t)
{
	return btf_kind(t) == BTF_KIND_ARRAY;
}
static inline bool btf_is_struct(const struct btf_type *t)
{
	return btf_kind(t) == BTF_KIND_STRUCT;
}
static inline bool btf_is_union(const struct btf_type *t)
{
	return btf_kind(t) == BTF_KIND_UNION;
}
static inline bool btf_is_composite(const struct btf_type *t)
{
	__u16 kind = btf_kind(t);
	return kind == BTF_KIND_STRUCT || kind == BTF_KIND_UNION;
}
static inline bool btf_is_enum(const struct btf_type *t)
{
	return btf_kind(t) == BTF_KIND_ENUM;
}
static inline bool btf_is_fwd(const struct btf_type *t)
{
	return btf_kind(t) == BTF_KIND_FWD;
}
static inline bool btf_is_typedef(const struct btf_type *t)
{
	return btf_kind(t) == BTF_KIND_TYPEDEF;
}
static inline bool btf_is_volatile(const struct btf_type *t)
{
	return btf_kind(t) == BTF_KIND_VOLATILE;
}
static inline bool btf_is_const(const struct btf_type *t)
{
	return btf_kind(t) == BTF_KIND_CONST;
}
static inline bool btf_is_restrict(const struct btf_type *t)
{
	return btf_kind(t) == BTF_KIND_RESTRICT;
}
static inline bool btf_is_mod(const struct btf_type *t)
{
	__u16 kind = btf_kind(t);
	return kind == BTF_KIND_VOLATILE ||
	       kind == BTF_KIND_CONST ||
2021-11-12 01:26:14 +00:00
	       kind == BTF_KIND_RESTRICT ||
	       kind == BTF_KIND_TYPE_TAG;
2019-08-07 21:39:48 +00:00
}
static inline bool btf_is_func(const struct btf_type *t)
{
	return btf_kind(t) == BTF_KIND_FUNC;
}
static inline bool btf_is_func_proto(const struct btf_type *t)
{
	return btf_kind(t) == BTF_KIND_FUNC_PROTO;
}
static inline bool btf_is_var(const struct btf_type *t)
{
	return btf_kind(t) == BTF_KIND_VAR;
}
static inline bool btf_is_datasec(const struct btf_type *t)
{
	return btf_kind(t) == BTF_KIND_DATASEC;
}
2021-02-26 20:22:49 +00:00
static inline bool btf_is_float(const struct btf_type *t)
{
	return btf_kind(t) == BTF_KIND_FLOAT;
}
2021-10-12 16:48:38 +00:00
static inline bool btf_is_decl_tag(const struct btf_type *t)
2021-09-14 22:30:25 +00:00
{
2021-10-12 16:48:38 +00:00
	return btf_kind(t) == BTF_KIND_DECL_TAG;
2021-09-14 22:30:25 +00:00
}
2021-11-12 01:26:14 +00:00
static inline bool btf_is_type_tag(const struct btf_type *t)
{
	return btf_kind(t) == BTF_KIND_TYPE_TAG;
}
2019-08-07 21:39:48 +00:00
static inline __u8 btf_int_encoding(const struct btf_type *t)
{
	return BTF_INT_ENCODING(*(__u32 *)(t + 1));
}
static inline __u8 btf_int_offset(const struct btf_type *t)
{
	return BTF_INT_OFFSET(*(__u32 *)(t + 1));
}
static inline __u8 btf_int_bits(const struct btf_type *t)
{
	return BTF_INT_BITS(*(__u32 *)(t + 1));
}
static inline struct btf_array *btf_array(const struct btf_type *t)
{
	return (struct btf_array *)(t + 1);
}
static inline struct btf_enum *btf_enum(const struct btf_type *t)
{
	return (struct btf_enum *)(t + 1);
}
static inline struct btf_member *btf_members(const struct btf_type *t)
{
	return (struct btf_member *)(t + 1);
}
/* Get bit offset of a member with specified index. */
static inline __u32 btf_member_bit_offset(const struct btf_type *t,
					  __u32 member_idx)
{
	const struct btf_member *m = btf_members(t) + member_idx;
	bool kflag = btf_kflag(t);
	return kflag ? BTF_MEMBER_BIT_OFFSET(m->offset) : m->offset;
}
/*
 * Get bitfield size of a member, assuming t is BTF_KIND_STRUCT or
 * BTF_KIND_UNION. If member is not a bitfield, zero is returned.
 */
static inline __u32 btf_member_bitfield_size(const struct btf_type *t,
					     __u32 member_idx)
{
	const struct btf_member *m = btf_members(t) + member_idx;
	bool kflag = btf_kflag(t);
	return kflag ? BTF_MEMBER_BITFIELD_SIZE(m->offset) : 0;
}
static inline struct btf_param *btf_params(const struct btf_type *t)
{
	return (struct btf_param *)(t + 1);
}
static inline struct btf_var *btf_var(const struct btf_type *t)
{
	return (struct btf_var *)(t + 1);
}
static inline struct btf_var_secinfo *
btf_var_secinfos(const struct btf_type *t)
{
	return (struct btf_var_secinfo *)(t + 1);
}
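Tying a few of these accessors together, a hypothetical walk over the members
of a struct/union type could look like this (btf__name_by_offset() is the
regular libbpf API for resolving name offsets; <stdio.h> assumed):

static void print_members_example(const struct btf *btf, const struct btf_type *t)
{
	const struct btf_member *m;
	__u16 i, n;

	if (!btf_is_composite(t))
		return;

	m = btf_members(t);
	n = btf_vlen(t);
	for (i = 0; i < n; i++, m++) {
		printf("%s: bit offset %u, bitfield size %u\n",
		       btf__name_by_offset(btf, m->name_off),
		       btf_member_bit_offset(t, i),
		       btf_member_bitfield_size(t, i));
	}
}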
2021-10-12 16:48:38 +00:00
struct btf_decl_tag;
static inline struct btf_decl_tag *btf_decl_tag(const struct btf_type *t)
2021-09-14 22:30:25 +00:00
{
2021-10-12 16:48:38 +00:00
	return (struct btf_decl_tag *)(t + 1);
2021-09-14 22:30:25 +00:00
}
2018-11-21 17:29:44 +00:00
#ifdef __cplusplus
} /* extern "C" */
#endif
2018-10-03 22:26:42 +00:00
#endif /* __LIBBPF_BTF_H */