These descriptions are present in the man-pages project from the
original submissions around 2015-2016. Import them so that they can be
kept up to date as developers extend the bpf syscall commands.
These descriptions follow the pattern used by scripts/bpf_helpers_doc.py,
so that we can take advantage of the parser to generate up-to-date
man pages from these headers.
Some minor wording adjustments were made to make the descriptions
more consistent with the Description / Return format.
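For reference, the comment pattern that scripts/bpf_helpers_doc.py parses
looks roughly like the following (content abbreviated; the authoritative
text lives in include/uapi/linux/bpf.h):

/*
 * BPF_MAP_CREATE
 *      Description
 *              Create a map and return a file descriptor that refers to
 *              the map.
 *      Return
 *              A new file descriptor (a nonnegative integer), or -1 if an
 *              error occurred.
 */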
Signed-off-by: Joe Stringer <joe@cilium.io>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20210302171947.2268128-2-joe@cilium.io
Co-authored-by: Alexei Starovoitov <ast@kernel.org>
Co-authored-by: Michael Kerrisk <mtk.manpages@gmail.com>
Ilya Leoshkevich says:
====================
Some BPF programs compiled on s390 fail to load, because s390
arch-specific linux headers contain float and double types.
Introduce support for such types by representing them using the new
BTF_KIND_FLOAT. This series covers libbpf, bpftool, the in-kernel BTF
parser, as well as selftests and documentation.
There are also pahole and LLVM parts:
* https://github.com/iii-i/dwarves/commit/btf-kind-float-v2
* https://reviews.llvm.org/D83289
but they should go in after the libbpf part is integrated.
---
v0: https://lore.kernel.org/bpf/20210210030317.78820-1-iii@linux.ibm.com/
v0 -> v1: Per Andrii's suggestion, remove the unnecessary trailing u32.
v1: https://lore.kernel.org/bpf/20210216011216.3168-1-iii@linux.ibm.com/
v1 -> v2: John noticed that sanitization corrupts BTF, because new and
old sizes don't match. Per Yonghong's suggestion, use a
modifier type (which has the same size as the float type) as
a replacement.
Per Yonghong's suggestion, add size and alignment checks to
the kernel BTF parser.
v2: https://lore.kernel.org/bpf/20210219022543.20893-1-iii@linux.ibm.com/
v2 -> v3: Based on Yonghong's suggestions: Use BTF_KIND_CONST instead of
BTF_KIND_TYPEDEF and make sure that the C code generated from
the sanitized BTF is well-formed; fix size calculation in
tests and use NAME_TBD everywhere; limit allowed sizes to 2,
4, 8, 12 and 16 (this should also fix m68k and nds32le
builds).
v3: https://lore.kernel.org/bpf/20210220034959.27006-1-iii@linux.ibm.com/
v3 -> v4: More fixes for Yonghong's findings: fix the outdated
comment in bpf_object__sanitize_btf() and add the error
handling there (I've decided to check uint_id and uchar_id
too in order to simplify debugging); add bpftool output
example; use div64_u64_rem() instead of % in order to fix the
linker error.
Also fix the "invalid BTF_INFO" test (new commit, #4).
v4: https://lore.kernel.org/bpf/20210222214917.83629-1-iii@linux.ibm.com/
v4 -> v5: Fixes for Andrii's findings: Use BTF_KIND_STRUCT instead
of BTF_KIND_TYPEDEF for sanitization; check byte_sz in
libbpf; move btf__add_float; remove relo support; add a dedup
test (new commit, #7).
v5: https://lore.kernel.org/bpf/20210223231459.99664-1-iii@linux.ibm.com/
v5 -> v6: Fixes for further findings by Andrii: split the whitespace
          fix into a separate patch; add a 12-byte float to "float test
          #1, well-formed".
v6: https://lore.kernel.org/bpf/20210224234535.106970-1-iii@linux.ibm.com/
v6 -> v7: John suggested adding a comment explaining why sanitization
          does not preserve the type name, as well as what effect it has
          on running the code on older kernels.
          Yonghong asked for a comment explaining why the kernel does
          not check the alignment very precisely.
          John suggested adding a bpf_core_field_size test (commit #9).
Based on Alexei's feedback [1] I'm proceeding with the BTF_KIND_FLOAT
approach.
[1] https://lore.kernel.org/bpf/CAADnVQKWPODWZ2RSJ5FJhfYpxkuV0cvSAL1O+FSr9oP1ercoBg@mail.gmail.com/
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
On the kernel side, introduce a new set of btf_kind_operations. They
are similar to those of BTF_KIND_INT, except that they do not need to
handle encodings and bit offsets. Do not implement printing, since
the kernel does not know how to format floating-point values.
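For illustration, a minimal sketch of the resulting uapi encoding (field
values illustrative; note that nothing follows the common struct btf_type,
since the trailing u32 was dropped during review):

#include <linux/btf.h>

/* sketch: raw encoding of a 4-byte BTF_KIND_FLOAT entry */
struct btf_type float_type = {
        .name_off = 1,                  /* offset of "float" in the string section */
        .info = BTF_KIND_FLOAT << 24,   /* kind in bits 24-28, vlen = 0, kflag = 0 */
        .size = 4,                      /* byte size; 2, 4, 8, 12 and 16 are allowed */
};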
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20210226202256.116518-7-iii@linux.ibm.com
The logic follows that of BTF_KIND_INT most of the time. Sanitization
replaces BTF_KIND_FLOATs with equally-sized empty BTF_KIND_STRUCTs on
older kernels, for example, the following:
[4] FLOAT 'float' size=4
becomes the following:
[4] STRUCT '(anon)' size=4 vlen=0
With the dwarves patch [1] and this patch, older kernels that were
failing with floating-point-related errors will now work correctly.
[1] https://github.com/iii-i/dwarves/commit/btf-kind-float-v2
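A minimal userspace sketch of adding a float type with the new API
(assumes a libbpf that includes this series):

#include <stdio.h>
#include <bpf/btf.h>
#include <bpf/libbpf.h>

int main(void)
{
        struct btf *btf = btf__new_empty();
        int id;

        if (libbpf_get_error(btf))
                return 1;
        /* returns the new type ID on success, a negative error otherwise */
        id = btf__add_float(btf, "float", 4);
        printf("float type id: %d\n", id);
        btf__free(btf);
        return id < 0;
}

On a kernel without BTF_KIND_FLOAT support, loading an object containing
such a type goes through the sanitization described above.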
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210226202256.116518-4-iii@linux.ibm.com
The original bcc pull request https://github.com/iovisor/bcc/pull/3270 exposed
a verifier failure with Clang 12/13, while Clang 4 works fine.
Further investigation exposed two issues:
Issue 1: LLVM may generate code which uses a less refined value. The issue is
         fixed in LLVM patch: https://reviews.llvm.org/D97479
Issue 2: Spills with initial value 0 are marked as precise, which makes later
         state pruning less effective. This is my rough initial analysis;
         further investigation is needed to find out how to improve verifier
         pruning in such cases.
With the above LLVM patch, for the new loop6.c test, which has a smaller loop
bound compared to the original test, I got:
$ test_progs -s -n 10/16
...
stack depth 64
processed 390735 insns (limit 1000000) max_states_per_insn 87
total_states 8658 peak_states 964 mark_read 6
#10/16 loop6.o:OK
Using the original loop bound, i.e., commenting out "#define WORKAROUND", I got:
$ test_progs -s -n 10/16
...
BPF program is too large. Processed 1000001 insn
stack depth 64
processed 1000001 insns (limit 1000000) max_states_per_insn 91
total_states 23176 peak_states 5069 mark_read 6
...
#10/16 loop6.o:FAIL
The purpose of this patch is to provide a regression test for the above LLVM
fix and a test case for further analysis of the verifier pruning issue.
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: Zhenwei Pi <pizhenwei@bytedance.com>
Link: https://lore.kernel.org/bpf/20210226223810.236472-1-yhs@fb.com
Yonghong Song says:
====================
This patch set introduces the bpf_for_each_map_elem() helper.
The helper permits a bpf program to iterate through all elements
of a particular map.
The work was originally inspired by an internal discussion where
firewall rules are kept in a map and a bpf prog wants to
check a packet's 5-tuple against all rules in the map.
A bounded loop can be used, but it has a few drawbacks.
As the loop iteration count goes up, verification time goes up too.
For really large maps, verification may fail.
A helper which abstracts out the loop itself does not have
this verification time issue.
A recent discussion in [1] involves iterating over all hash map
elements in a bpf program. Currently, iterating over all hashmap
elements in a bpf program is not easy if the key space is really big.
Having a helper to abstract out the loop itself is even more
meaningful there.
The proposed helper signature looks like:
long bpf_for_each_map_elem(map, callback_fn, callback_ctx, flags)
where callback_fn is a static function and callback_ctx is
a piece of data allocated on the caller stack which can be
accessed by the callback_fn. The callback_fn signature might be
different for different maps. For example, for hash/array maps,
the signature is
long callback_fn(map, key, val, callback_ctx)
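To make the interface concrete, here is a minimal sketch of a program
using the helper, in the style of the selftests later in the series
(map name, section name and element types are illustrative):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
        __uint(type, BPF_MAP_TYPE_ARRAY);
        __uint(max_entries, 32);
        __type(key, __u32);
        __type(value, __u64);
} arraymap SEC(".maps");

struct callback_ctx {
        __u64 output;
};

/* called once per element; return 0 to continue, 1 to stop early */
static __u64 sum_elem(struct bpf_map *map, __u32 *key, __u64 *val,
                      struct callback_ctx *data)
{
        data->output += *val;
        return 0;
}

SEC("classifier")
int sum_all(struct __sk_buff *skb)
{
        struct callback_ctx data = {};

        bpf_for_each_map_elem(&arraymap, sum_elem, &data, 0);
        return data.output > 0;
}

char _license[] SEC("license") = "GPL";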
In the rest of the series, Patches 1/2/3/4 did some refactoring. Patch 5
implemented core kernel support for the helper. Patches 6 and 7
added hashmap and arraymap support. Patches 8/9 added libbpf
support. Patch 10 added bpftool support. Patches 11 and 12 added
selftests for hashmap and arraymap.
[1]: https://lore.kernel.org/bpf/20210122205415.113822-1-xiyou.wangcong@gmail.com/
Changelogs:
v4 -> v5:
- rebase on top of bpf-next.
v3 -> v4:
- better refactoring of check_func_call(), calculate subprogno outside
of __check_func_call() helper. (Andrii)
- better documentation (like the list of supported maps and their
callback signatures) in uapi header. (Andrii)
- implement and use ASSERT_LT in selftests. (Andrii)
- a few other minor changes.
v2 -> v3:
- add comments in retrieve_ptr_limit(), which is in sanitize_ptr_alu(),
to clarify the code is not executed for PTR_TO_MAP_KEY handling,
but code is manually tested. (Alexei)
- require BTF for callback function. (Alexei)
- simplify hashmap/arraymap callback return handling as return value
[0, 1] has been enforced by the verifier. (Alexei)
- also mark global subprog (if used in ld_imm64) as RELO_SUBPROG_ADDR. (Andrii)
- handle the condition to mark RELO_SUBPROG_ADDR properly. (Andrii)
- make bpftool subprog insn offset dumping consistent with pcrel calls. (Andrii)
v1 -> v2:
- setup callee frame in check_helper_call() and then proceed to verify
helper return value as normal (Alexei)
- use meta data to keep track of map/func pointer to avoid hard coding
the register number (Alexei)
- verify callback_fn return value range [0, 1]. (Alexei)
- add migrate_{disable, enable} to ensure the percpu value is the one
  the bpf program expects to see. (Alexei)
- change bpf_for_each_map_elem() return value to the number of iterated
elements. (Andrii)
- Change libbpf pseudo_func relo name to RELO_SUBPROG_ADDR and use
more rigid checking for the relocation. (Andrii)
- Better format to print out subprog address with bpftool. (Andrii)
- Use bpf_prog_test_run to trigger bpf run, instead of bpf_iter. (Andrii)
- Other misc changes.
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
A test is added for arraymap and percpu arraymap. The test also
exercises the helper's early-return path, in which it does not
traverse all elements.
$ ./test_progs -n 45
#45/1 hash_map:OK
#45/2 array_map:OK
#45 for_each:OK
Summary: 1/2 PASSED, 0 SKIPPED, 0 FAILED
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210226204934.3885756-1-yhs@fb.com
A test case is added for hashmap and percpu hashmap. The test
also exercises nested bpf_for_each_map_elem() calls, like:
  bpf_prog:
      bpf_for_each_map_elem(func1)
  func1:
      bpf_for_each_map_elem(func2)
  func2:
$ ./test_progs -n 45
#45/1 hash_map:OK
#45 for_each:OK
Summary: 1/1 PASSED, 0 SKIPPED, 0 FAILED
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210226204933.3885657-1-yhs@fb.com
A new relocation RELO_SUBPROG_ADDR is added to capture
subprog addresses loaded with ld_imm64 insns. Such ld_imm64
insns are marked with BPF_PSEUDO_FUNC and will be passed to the
kernel. For the bpf_for_each_map_elem() case, the kernel will
check that the to-be-used subprog address refers to a static
function and replace it with the actual jited func address.
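A hedged sketch of the instruction shape (field values illustrative; the
exact imm convention is defined by libbpf and the verifier):

#include <linux/bpf.h>

/* the two 8-byte slots of a ld_imm64 carrying a subprog address;
 * src_reg = BPF_PSEUDO_FUNC marks it so the kernel verifies the target
 * and substitutes the jited address at fixup time */
struct bpf_insn ld_subprog[2] = {
        { .code    = BPF_LD | BPF_DW | BPF_IMM,
          .dst_reg = BPF_REG_2,
          .src_reg = BPF_PSEUDO_FUNC,
          .imm     = 0 },       /* subprog reference, set by libbpf */
        { .imm = 0 },           /* upper 32 bits of the 64-bit immediate */
};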
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210226204930.3885367-1-yhs@fb.com
The bpf_for_each_map_elem() helper is introduced, which
iterates over all map elements with a callback function. The
helper signature looks like
long bpf_for_each_map_elem(map, callback_fn, callback_ctx, flags)
and for each map element, the callback_fn will be called. For example,
for a hashmap, the callback signature may look like
long callback_fn(map, key, val, callback_ctx)
There are two known use cases for this. One is from upstream ([1]), where
a for_each_map_elem helper may help implement a timeout mechanism
in a more generic way. Another is from our internal discussion
of a firewall use case, where a map contains all the rules. The packet
data can be compared against all these rules to decide whether to allow
or deny the packet.
For array maps, users can already use a bounded loop to traverse
elements. Using this helper avoids the bounded loop (a sketch of that
pattern follows below). For other types of maps (e.g., hash maps) where
a bounded loop is hard or impossible to use, this helper provides a
convenient way to operate on all elements.
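For contrast, a sketch of the bounded-loop pattern the helper replaces
for array maps (names and bound illustrative); each iteration is verified
separately, so verification cost grows with the bound:

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
        __uint(type, BPF_MAP_TYPE_ARRAY);
        __uint(max_entries, 32);
        __type(key, __u32);
        __type(value, __u64);
} arraymap SEC(".maps");

SEC("classifier")
int sum_bounded(struct __sk_buff *skb)
{
        __u64 sum = 0;
        __u32 i;

        for (i = 0; i < 32; i++) {
                /* every lookup in the bounded loop is verified on its own */
                __u64 *val = bpf_map_lookup_elem(&arraymap, &i);

                if (val)
                        sum += *val;
        }
        return sum > 0;
}

char _license[] SEC("license") = "GPL";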
For callback_fn, besides the map and the map element, a callback_ctx,
allocated on the caller's stack, is also passed to the callback
function. This callback_ctx argument can provide additional
input and allows writing to the caller's stack for output.
If the callback_fn returns 0, the helper will proceed to the next
element if one is available. If the callback_fn returns 1, the helper
will stop iterating and return to the bpf program. Other return
values are not used for now.
Currently, this helper is only available with the jit. It is possible
to make it work with the interpreter with some effort, but I leave
that as future work.
[1]: https://lore.kernel.org/bpf/20210122205415.113822-1-xiyou.wangcong@gmail.com/
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210226204925.3884923-1-yhs@fb.com
Currently, the verifier function add_subprog() returns 0 for success
and a negative value for failure. Change the return value
to be the subprog number for success. This functionality will be
used in the next patch to save a call to find_subprog().
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210226204924.3884848-1-yhs@fb.com
The later proposed bpf_for_each_map_elem() helper has a callback
function as one of its arguments. This patch refactors
check_func_call() to permit a callback function which sets up the
callee state. Different callback functions may have
different callee states.
There is no functionality change in this patch.
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210226204923.3884627-1-yhs@fb.com
Factor out the function verbose_invalid_scalar() to verbosely
print when a scalar is not in a tnum range. There is no
functionality change; the function will be used by a
later patch which introduces bpf_for_each_map_elem().
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210226204922.3884375-1-yhs@fb.com
During verifier check_cfg(), all instructions are
visited to ensure the verifier can handle the program's control flow.
This patch factors out the function visit_func_call_insn()
so it can be reused in a later patch to visit callback function
calls. There is no functionality change in this patch.
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210226204920.3884136-1-yhs@fb.com
Building selftests in a separate directory like this:
make O="$BUILD" -C tools/testing/selftests/bpf
and then running:
cd "$BUILD" && ./test_progs -t btf
causes all the non-flavored btf_dump_test_case_*.c tests to fail,
because these files are not copied to where test_progs expects to find
them.
Fix this by not skipping EXT-COPY when the original $(OUTPUT) is not
empty (lib.mk sets it to $(shell pwd) in that case), and by using rsync
instead of cp: cp fails because e.g. urandom_read is being copied into
itself, and rsync simply skips such cases. rsync is already used by
kselftests and therefore is not a new dependency.
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210224111445.102342-1-iii@linux.ibm.com
When vmtest.sh ran a command in a VM, it did not record or propagate the
error code of the command. This made the script less "script-able". The
script now saves the error code of that command in a file in the VM,
copies the file back to the host, and (when available) uses this error
code instead of its own.
Signed-off-by: KP Singh <kpsingh@google.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210225161947.1778590-1-kpsingh@kernel.org
Cong Wang says:
====================
From: Cong Wang <cong.wang@bytedance.com>
This patchset is the first series of patches separated out from
the original large patchset, to make reviews easier. It
does not add any new features or change any functionality; it merely
cleans up the existing sockmap and skmsg code and refactors it, to
prepare for the follow-up patches. It passes all BPF selftests.
To see the big picture, the original whole patchset is available
on github: https://github.com/congwang/linux/tree/sockmap
and this patchset is also available on github:
https://github.com/congwang/linux/tree/sockmap1
---
v7: add 1 trivial cleanup patch
define a mask for sk_redir
fix CONFIG_BPF_SYSCALL in include/net/udp.h
make sk_psock_done_strp() static
move skb_bpf_redirect_clear() to sk_psock_backlog()
v6: fix !CONFIG_INET case
v5: improve CONFIG_BPF_SYSCALL dependency
add 3 trivial cleanup patches
v4: reuse skb dst instead of skb ext
fix another Kconfig error
v3: fix a few Kconfig compile errors
remove an unused variable
add a comment for bpf_convert_data_end_access()
v2: split the original patchset
compute data_end with bpf_convert_data_end_access()
get rid of psock->bpf_running
reduce the scope of CONFIG_BPF_STREAM_PARSER
do not add CONFIG_BPF_SOCK_MAP
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Currently TCP_SKB_CB() is hard-coded in the skmsg code, which certainly
does not work for non-TCP protocols. We could move the fields to the
skb ext, but that introduces a memory allocation on the fast path.
Fortunately, we only need a word-size slot to store all the information,
because the flags actually contain only 1 bit, which can simply be packed
into the lowest bit of the "pointer", stored as an unsigned long.
Inside struct sk_buff, '_skb_refdst' can be reused because the skb dst is
no longer needed after ->sk_data_ready(), so we can just drop it.
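The packing scheme as a standalone sketch (names illustrative, not the
actual kernel helpers): any suitably aligned pointer has a zero low bit,
which is free to carry the 1-bit ingress flag:

#include <stdbool.h>

#define REDIR_F_INGRESS 1UL

/* pack the redirect socket pointer and the ingress flag into one word */
static unsigned long redir_pack(void *sk, bool ingress)
{
        return (unsigned long)sk | (ingress ? REDIR_F_INGRESS : 0);
}

static void *redir_sk(unsigned long word)
{
        return (void *)(word & ~REDIR_F_INGRESS);
}

static bool redir_ingress(unsigned long word)
{
        return word & REDIR_F_INGRESS;
}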
Signed-off-by: Cong Wang <cong.wang@bytedance.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Jakub Sitnicki <jakub@cloudflare.com>
Link: https://lore.kernel.org/bpf/20210223184934.6054-5-xiyou.wangcong@gmail.com
As suggested by John, clean up the sockmap-related Kconfigs:
Reduce the scope of CONFIG_BPF_STREAM_PARSER down to the TCP stream
parser, to reflect its name.
Make the rest of the sockmap code simply depend on CONFIG_BPF_SYSCALL
and CONFIG_INET; the latter is still needed at this point because
of the TCP/UDP proto updates. And leave CONFIG_NET_SOCK_MSG untouched,
as it is used by non-sockmap cases.
Signed-off-by: Cong Wang <cong.wang@bytedance.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Lorenz Bauer <lmb@cloudflare.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Jakub Sitnicki <jakub@cloudflare.com>
Link: https://lore.kernel.org/bpf/20210223184934.6054-2-xiyou.wangcong@gmail.com
Ciara Loftus says:
====================
This series attempts to improve the xsk selftest framework by:
1. making the default output less verbose
2. adding an optional verbose flag to both the test_xsk.sh script and the xdpxceiver app.
3. renaming the debug option in the app to 'dump-pkts' and adding a flag to the test_xsk.sh
script which enables the flag in the app.
4. changing how tests are launched - now they are launched from the xdpxceiver app
instead of the script.
Once the improvements are made, a new set of tests is added which tests the xsk
statistics.
The output of the test script now looks like:
./test_xsk.sh
PREREQUISITES: [ PASS ]
1..10
ok 1 PASS: SKB NOPOLL
ok 2 PASS: SKB POLL
ok 3 PASS: SKB NOPOLL Socket Teardown
ok 4 PASS: SKB NOPOLL Bi-directional Sockets
ok 5 PASS: SKB NOPOLL Stats
ok 6 PASS: DRV NOPOLL
ok 7 PASS: DRV POLL
ok 8 PASS: DRV NOPOLL Socket Teardown
ok 9 PASS: DRV NOPOLL Bi-directional Sockets
ok 10 PASS: DRV NOPOLL Stats
# Totals: pass:10 fail:0 xfail:0 xpass:0 skip:0 error:0
XSK KSELFTESTS: [ PASS ]
v2->v3:
* Rename dump-pkts to dump_pkts in test_xsk.sh
* Add examples of flag usage to test_xsk.sh
v1->v2:
* Changed '-d' flag in the shell script to '-D' to be consistent with the xdpxceiver app.
* Renamed debug mode to 'dump-pkts' which better reflects the behaviour.
* Use libbpf APIs instead of calls to ss for configuring xdp on the links
* Remove mutex init & destroy for each stats test
* Added a description for each of the new statistics tests
* Distinguish between exiting due to initialisation failure vs test failure
This series applies on top of commit d310ec03a3
Ciara Loftus (3):
selftests/bpf: expose and rename debug argument
selftests/bpf: restructure xsk selftests
selftests/bpf: introduce xsk statistics tests
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
This commit introduces a range of tests to the xsk testsuite
for validating xsk statistics.
A new test type called 'stats' is added. Within it there are
four sub-tests. Each test configures a scenario which should
trigger the given error statistic. The test passes if the statistic
is successfully incremented.
The four statistics for which tests have been created are:
1. rx dropped
Increase the UMEM frame headroom to a value which results in
insufficient space in the rx buffer for both the packet and the headroom.
2. tx invalid
Set the 'len' field of tx descriptors to an invalid value (umem frame
size + 1).
3. rx ring full
Reduce the size of the RX ring to a fraction of the fill ring size.
4. fill queue empty
Do not populate the fill queue and then try to receive pkts.
Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Link: https://lore.kernel.org/bpf/20210223162304.7450-5-ciara.loftus@intel.com
Prior to this commit, individual xsk tests were launched from the
shell script 'test_xsk.sh'. When adding a new test type, two new test
configurations had to be added to this file - one for each of the
supported XDP 'modes' (skb or drv). Should zero copy support be added to
the xsk selftest framework in the future, three new test configurations
would need to be added for each new test type. Each new test type also
typically requires new CLI arguments for the xdpxceiver program.
This commit aims to reduce the overhead of adding new tests, by launching
the test configurations from within the xdpxceiver program itself, using
simple loops. Every test is run every time the C program is executed. Many
of the CLI arguments can be removed as a result.
Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Link: https://lore.kernel.org/bpf/20210223162304.7450-4-ciara.loftus@intel.com
Launching xdpxceiver with -D enables what was formerly known as 'debug'
mode. Rename this mode to 'dump-pkts', as it better describes the
behavior enabled by the option. New usage:
./xdpxceiver .. -D
or
./xdpxceiver .. --dump-pkts
Also make it possible to pass this flag to the app via the test_xsk.sh
shell script like so:
./test_xsk.sh -D
Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210223162304.7450-3-ciara.loftus@intel.com
Song Liu says:
====================
This set enables task local storage for non-BPF_LSM programs.
It is common for tracing BPF programs to access per-task data. Currently,
these data are stored in hash tables with the pid as the key. In
bcc/libbpftools [1], 9 out of 23 tools use such hash tables. However,
a hash table is not ideal for many use cases. Task local storage provides
better usability and performance for BPF programs. Please refer to 6/6 for
a performance comparison of task local storage vs. hash tables.
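As a flavor of the intended usage, a minimal sketch in the style of the
updated runqslower (map and program names illustrative):

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

/* one __u64 timestamp per task, instead of a pid-keyed hash table */
struct {
        __uint(type, BPF_MAP_TYPE_TASK_STORAGE);
        __uint(map_flags, BPF_F_NO_PREALLOC);
        __type(key, int);
        __type(value, __u64);
} start SEC(".maps");

SEC("tp_btf/sched_wakeup")
int BPF_PROG(on_wakeup, struct task_struct *task)
{
        __u64 *ts;

        /* create the per-task slot on first use */
        ts = bpf_task_storage_get(&start, task, 0,
                                  BPF_LOCAL_STORAGE_GET_F_CREATE);
        if (ts)
                *ts = bpf_ktime_get_ns();
        return 0;
}

char _license[] SEC("license") = "GPL";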
Changes v5 => v6:
1. Add inc/dec bpf_task_storage_busy in bpf_local_storage_map_free().
Changes v4 => v5:
1. Fix build w/o CONFIG_NET. (kernel test robot)
2. Remove unnecessary check for !task_storage_ptr(). (Martin)
3. Small changes in commit logs.
Changes v3 => v4:
1. Prevent deadlock from recursive calls of bpf_task_storage_[get|delete].
(2/6 checks potential deadlock and fails over, 4/6 adds a selftest).
Changes v2 => v3:
1. Make the selftest more robust. (Andrii)
2. Small changes with runqslower. (Andrii)
3. Shorten the CC list to make it easy for vger.
Changes v1 => v2:
1. Do not allocate task local storage when the task is being freed.
2. Revise the selftest and add a new test for a task being freed.
3. Minor changes in runqslower.
====================
Acked-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>