Commit Graph

399 Commits

Author SHA1 Message Date
Song Liu
8d830f549d bpftool: Add _bpftool and profiler.skel.h to .gitignore
These files are generated, so ignore them.

Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Link: https://lore.kernel.org/bpf/20200312182332.3953408-4-songliubraving@fb.com
2020-03-13 00:08:33 +01:00
Song Liu
39be909c38 bpftool: Skeleton should depend on libbpf
Add the dependency on libbpf to fix build errors like:

  In file included from skeleton/profiler.bpf.c:5:
  .../bpf_helpers.h:5:10: fatal error: 'bpf_helper_defs.h' file not found
  #include "bpf_helper_defs.h"
           ^~~~~~~~~~~~~~~~~~~
  1 error generated.
  make: *** [skeleton/profiler.bpf.o] Error 1
  make: *** Waiting for unfinished jobs....

Fixes: 47c09d6a9f ("bpftool: Introduce "prog profile" command")
Suggested-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20200312182332.3953408-3-songliubraving@fb.com
2020-03-13 00:08:33 +01:00
Song Liu
14e5728ff8 bpftool: Only build bpftool-prog-profile if supported by clang
bpftool-prog-profile requires clang to generate BTF for global variables.
When compiled with an older clang that cannot do this, skip the command. This
is achieved by adding a new feature, clang-bpf-global-var, to
tools/build/feature.

Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Link: https://lore.kernel.org/bpf/20200312182332.3953408-2-songliubraving@fb.com
2020-03-13 00:08:33 +01:00
Tobias Klauser
fe4eb069ed bpftool: Use linux/types.h from source tree for profiler build
When compiling bpftool on a system where the /usr/include/asm symlink
doesn't exist (e.g. on an Ubuntu system without gcc-multilib installed),
the build fails with:

    CLANG    skeleton/profiler.bpf.o
  In file included from skeleton/profiler.bpf.c:4:
  In file included from /usr/include/linux/bpf.h:11:
  /usr/include/linux/types.h:5:10: fatal error: 'asm/types.h' file not found
  #include <asm/types.h>
           ^~~~~~~~~~~~~
  1 error generated.
  make: *** [Makefile:123: skeleton/profiler.bpf.o] Error 1

This indicates that the build is using linux/types.h from system headers
instead of source tree headers.

To fix this, adjust the clang search path to include the necessary
headers from tools/testing/selftests/bpf/include/uapi and
tools/include/uapi. Also use __bitwise__ instead of __bitwise in
skeleton/profiler.h to avoid clashing with the definition in
tools/testing/selftests/bpf/include/uapi/linux/types.h.

Signed-off-by: Tobias Klauser <tklauser@distanz.ch>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Link: https://lore.kernel.org/bpf/20200312130330.32239-1-tklauser@distanz.ch
2020-03-12 16:22:41 +01:00
Andrii Nakryiko
37ccc12bbc tools/runqslower: Add BPF_F_CURRENT_CPU for running selftest on older kernels
Libbpf compiles and runs a subset of selftests on each PR in its GitHub mirror
repository. To still allow building up-to-date selftests against outdated
kernel images, add the BPF_F_CURRENT_CPU definition back.

N.B. BCC's runqslower version ([0]) doesn't need BPF_F_CURRENT_CPU due to its
use of a locally checked-in vmlinux.h, generated against a kernel with
1aae4bdd78 ("bpf: Switch BPF UAPI #define constants used from BPF program side
to enums") applied.

  [0] https://github.com/iovisor/bcc/pull/2809

Fixes: 367d82f17e ("tools/runqslower: Drop copy/pasted BPF_F_CURRENT_CPU definiton")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20200311043010.530620-1-andriin@fb.com
2020-03-11 15:17:39 +01:00
Song Liu
aad32f4c76 bpftool: Fix typo in bash-completion
_bpftool_get_map_names => _bpftool_get_prog_names for prog-attach|detach.

Fixes: 99f9863a0c ("bpftool: Match maps by name")
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Link: https://lore.kernel.org/bpf/20200309173218.2739965-5-songliubraving@fb.com
2020-03-10 00:04:21 +01:00
Song Liu
397692eab3 bpftool: Bash completion for "bpftool prog profile"
Add bash completion for "bpftool prog profile" command.

Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Link: https://lore.kernel.org/bpf/20200309173218.2739965-4-songliubraving@fb.com
2020-03-10 00:04:19 +01:00
Song Liu
319c7c1f6b bpftool: Documentation for bpftool prog profile
Add documentation for the new bpftool prog profile command.

Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Link: https://lore.kernel.org/bpf/20200309173218.2739965-3-songliubraving@fb.com
2020-03-10 00:04:07 +01:00
Song Liu
47c09d6a9f bpftool: Introduce "prog profile" command
With fentry/fexit programs, it is possible to profile a BPF program with
hardware counters. Introduce bpftool "prog profile", which measures key
metrics of a BPF program.

The bpftool prog profile command creates per-CPU perf events. Then it attaches
fentry/fexit programs to the target BPF program. The fentry program saves the
perf event value to a map. The fexit program reads the perf event again and
calculates the difference, which is the instructions/cycles used by the target
program.
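
As a rough illustration of this mechanism (a sketch only; map names, section
names and details are illustrative, not the actual skeleton/profiler.bpf.c):

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  /* per-CPU perf events, created and installed by bpftool from user space */
  struct {
          __uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
          __uint(key_size, sizeof(__u32));
          __uint(value_size, sizeof(int));
  } events SEC(".maps");

  /* counter reading taken at entry of the target program, one slot per CPU */
  struct {
          __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
          __uint(max_entries, 1);
          __type(key, __u32);
          __type(value, struct bpf_perf_event_value);
  } fentry_readings SEC(".maps");

  SEC("fentry/XXX")       /* re-targeted to the profiled program at attach time */
  int fentry_reading(void *ctx)
  {
          struct bpf_perf_event_value *before;
          __u32 key = 0;

          before = bpf_map_lookup_elem(&fentry_readings, &key);
          if (before)
                  bpf_perf_event_read_value(&events, BPF_F_CURRENT_CPU,
                                            before, sizeof(*before));
          return 0;
  }

  SEC("fexit/XXX")
  int fexit_reading(void *ctx)
  {
          struct bpf_perf_event_value after, *before;
          __u32 key = 0;

          bpf_perf_event_read_value(&events, BPF_F_CURRENT_CPU,
                                    &after, sizeof(after));
          before = bpf_map_lookup_elem(&fentry_readings, &key);
          if (before) {
                  /* after.counter - before->counter is the cost of this single
                   * run; accumulate it into an output map read by bpftool */
          }
          return 0;
  }

  char LICENSE[] SEC("license") = "GPL";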

Example input and output:

  ./bpftool prog profile id 337 duration 3 cycles instructions llc_misses

        4228 run_cnt
     3403698 cycles                                              (84.08%)
     3525294 instructions   #  1.04 insn per cycle               (84.05%)
          13 llc_misses     #  3.69 LLC misses per million isns  (83.50%)

This command measures cycles and instructions for the BPF program with id 337
for 3 seconds. The program was triggered 4228 times. The rest of the output is
similar to perf-stat. In this example, the counters were only counting ~84% of
the time because of time multiplexing of perf counters.

Note that this approach measures cycles and instructions in very small
increments, so the fentry/fexit programs themselves introduce noticeable
errors into the measurement results.

The fentry/fexit programs are generated with BPF skeletons. Therefore, we
build bpftool twice. The first time _bpftool is built without skeletons.
Then, _bpftool is used to generate the skeletons. The second time, bpftool
is built with skeletons.

Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20200309173218.2739965-2-songliubraving@fb.com
2020-03-10 00:04:07 +01:00
Andrii Nakryiko
367d82f17e tools/runqslower: Drop copy/pasted BPF_F_CURRENT_CPU definiton
With BPF_F_CURRENT_CPU being an enum, it is now captured in vmlinux.h and is
readily usable by runqslower. So drop local copy/pasted definition in favor of
the one coming from vmlinux.h.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20200303003233.3496043-4-andriin@fb.com
2020-03-04 17:00:06 +01:00
Andrii Nakryiko
ca7dc2791b bpftool: Add header guards to generated vmlinux.h
Add a canonical #ifndef/#define/#endif guard to the generated vmlinux.h header
using the __VMLINUX_H__ symbol. __VMLINUX_H__ is also going to play the double
role of identifying whether vmlinux.h is being used, versus, say, BCC or
non-CO-RE libbpf modes with a dependency on kernel headers. This will make it
possible to write helper macros/functions that are agnostic to the exact BPF
program setup.
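
For illustration, the generated header is wrapped roughly like this, and other
headers can key off the guard symbol (sketch, not the exact generated output):

  #ifndef __VMLINUX_H__
  #define __VMLINUX_H__
  /* ... generated kernel type definitions ... */
  #endif /* __VMLINUX_H__ */

  /* e.g. in a shared helper header, pick definitions based on the mode: */
  #ifdef __VMLINUX_H__
  /* vmlinux.h / CO-RE build: kernel types are already available */
  #else
  #include <linux/types.h>   /* kernel-header-based (e.g. BCC-style) build */
  #endif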

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200229231112.1240137-2-andriin@fb.com
2020-03-02 16:25:14 -08:00
Michal Rostecki
ad92b12a6e bpftool: Update bash completion for "bpftool feature" command
Update bash completion for "bpftool feature" command with the new
argument: "full".

Signed-off-by: Michal Rostecki <mrostecki@opensuse.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Link: https://lore.kernel.org/bpf/20200226165941.6379-5-mrostecki@opensuse.org
2020-02-26 18:34:34 +01:00
Michal Rostecki
bcdacab6e7 bpftool: Update documentation of "bpftool feature" command
Update the documentation of the "bpftool feature" command with information
about the new argument: "full".

Signed-off-by: Michal Rostecki <mrostecki@opensuse.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Link: https://lore.kernel.org/bpf/20200226165941.6379-4-mrostecki@opensuse.org
2020-02-26 18:34:34 +01:00
Michal Rostecki
368cb0e7cd bpftool: Make probes which emit dmesg warnings optional
Probes related to the bpf_probe_write_user and bpf_trace_printk helpers emit
dmesg warnings, which might be confusing for people running bpftool in
production environments. This change filters them out by default and
introduces the new positional argument "full", which enables all available
probes.

Signed-off-by: Michal Rostecki <mrostecki@opensuse.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Link: https://lore.kernel.org/bpf/20200226165941.6379-3-mrostecki@opensuse.org
2020-02-26 18:34:34 +01:00
Michal Rostecki
6b52ca44e8 bpftool: Move out sections to separate functions
Move all calls of the print_end_then_start_section function, along with the
associated for loops, out of the do_probe function. Instead, provide separate
functions for each section (e.g. section_helpers), which are called from
do_probe. This change is motivated by better readability.

Signed-off-by: Michal Rostecki <mrostecki@opensuse.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Link: https://lore.kernel.org/bpf/20200226165941.6379-2-mrostecki@opensuse.org
2020-02-26 18:34:34 +01:00
Andrey Ignatov
3494bec0f6 bpftool: Support struct_ops, tracing, ext prog types
Add support for prog types that were added to the kernel but are not yet
present in bpftool: the struct_ops, tracing and ext prog types, and the
corresponding section names.

Before:
  # bpftool p l
  ...
  184: type 26  name test_subprog3  tag dda135a7dc0daf54  gpl
          loaded_at 2020-02-25T13:28:33-0800  uid 0
          xlated 112B  jited 103B  memlock 4096B  map_ids 136
          btf_id 85
  185: type 28  name new_get_skb_len  tag d2de5b87d8e5dc49  gpl
          loaded_at 2020-02-25T13:28:33-0800  uid 0
          xlated 72B  jited 69B  memlock 4096B  map_ids 136
          btf_id 85

After:
  # bpftool p l
  ...
  184: tracing  name test_subprog3  tag dda135a7dc0daf54  gpl
          loaded_at 2020-02-25T13:28:33-0800  uid 0
          xlated 112B  jited 103B  memlock 4096B  map_ids 136
          btf_id 85
  185: ext  name new_get_skb_len  tag d2de5b87d8e5dc49  gpl
          loaded_at 2020-02-25T13:28:33-0800  uid 0
          xlated 72B  jited 69B  memlock 4096B  map_ids 136
          btf_id 85

Signed-off-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20200225223441.689109-1-rdna@fb.com
2020-02-26 16:40:53 +01:00
Toke Høiland-Jørgensen
d95f1e8b46 bpftool: Don't crash on missing xlated program instructions
It turns out the xlated program instructions can also be missing if the
kptr_restrict sysctl is set. This means that the previous fix, which checked
the jited_prog_insns pointer, was insufficient; add another check of the
xlated_prog_insns pointer as well.

Fixes: 5b79bcdf03 ("bpftool: Don't crash on missing jited insns or ksyms")
Fixes: cae73f2339 ("bpftool: use bpf_program__get_prog_info_linear() in prog.c:do_dump()")
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Link: https://lore.kernel.org/bpf/20200206102906.112551-1-toke@redhat.com
2020-02-07 22:29:45 +01:00
Song Liu
fc9e34f8de tools/bpf/runqslower: Rebuild libbpf.a on libbpf source change
Add the missing dependency of $(BPFOBJ) on $(LIBBPF_SRC), so that running make
in runqslower/ rebuilds libbpf.a when there is a change in libbpf/.

Fixes: 9c01546d26 ("tools/bpf: Add runqslower tool to tools/bpf")
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20200204215037.2258698-1-songliubraving@fb.com
2020-02-05 22:05:28 +01:00
Michal Rostecki
a525b0881d bpftool: Remove redundant "HAVE" prefix from the large INSN limit check
"HAVE" prefix is already applied by default to feature macros and before
this change, the large INSN limit macro had the incorrect name with
double "HAVE".

Fixes: 2faef64aa6 ("bpftool: Add misc section and probe for large INSN limit")
Signed-off-by: Michal Rostecki <mrostecki@opensuse.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20200202110200.31024-1-mrostecki@opensuse.org
2020-02-04 00:04:06 +01:00
Yulia Kartseva
2577e373bb runqslower: Fix Makefile
Fix undefined reference linker errors when building runqslower with
gcc 7.4.0 on Ubuntu 18.04.
The issue is with misplaced -lelf, -lz options in Makefile:
$(Q)$(CC) $(CFLAGS) -lelf -lz $^ -o $@

-lelf, -lz options should follow the list of target dependencies:
$(Q)$(CC) $(CFLAGS) $^ -lelf -lz -o $@
or after substitution
cc -g -Wall runqslower.o libbpf.a -lelf -lz -o runqslower

The current order of gcc parameters causes libelf symbol resolution to fail,
e.g.: undefined reference to `elf_memory'.

Fixes: 9c01546d26 ("tools/bpf: Add runqslower tool to tools/bpf")
Signed-off-by: Julia Kartseva <hex@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/908498f794661c44dca54da9e09dc0c382df6fcb.1580425879.git.hex@fb.com
2020-01-30 16:19:47 -08:00
Andrey Ignatov
07fdbee134 tools/bpf: Allow overriding llvm tools for runqslower
tools/testing/selftests/bpf/Makefile supports overriding clang, llc and other
tools so that custom ones can be used instead of those from PATH. This is
convenient and heavily used by some users.

Apply the same rules to runqslower/Makefile.

Signed-off-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20200124224142.1833678-1-rdna@fb.com
2020-01-27 10:52:57 +01:00
Andrii Nakryiko
41258289a8 bpftool: Print function linkage in BTF dump
Add printing out BTF_KIND_FUNC's linkage.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Link: https://lore.kernel.org/bpf/20200124054317.2459436-1-andriin@fb.com
2020-01-24 11:09:56 +01:00
David S. Miller
954b3c4397 Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next
Alexei Starovoitov says:

====================
pull-request: bpf-next 2020-01-22

The following pull-request contains BPF updates for your *net-next* tree.

We've added 92 non-merge commits during the last 16 day(s) which contain
a total of 320 files changed, 7532 insertions(+), 1448 deletions(-).

The main changes are:

1) function by function verification and program extensions from Alexei.

2) massive cleanup of selftests/bpf from Toke and Andrii.

3) batched bpf map operations from Brian and Yonghong.

4) tcp congestion control in bpf from Martin.

5) bulking for non-map xdp_redirect from Toke.

6) bpf_send_signal_thread helper from Yonghong.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2020-01-23 08:10:16 +01:00
Toke Høiland-Jørgensen
b6580cd899 runsqslower: Support user-specified libbpf include and object paths
This adds support for specifying the libbpf include and object paths as
arguments to the runqslower Makefile, to support reusing the libbpf version
built as part of the selftests.

Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/157952561135.1683545.5660339645093141381.stgit@toke.dk
2020-01-20 16:37:45 -08:00
Toke Høiland-Jørgensen
a9ed34c0b7 tools/runqslower: Remove tools/lib/bpf from include path
Since we are now consistently using the bpf/ prefix on #include directives,
we don't need to include tools/lib/bpf in the include path. Remove it to
make sure we don't inadvertently introduce new includes without the prefix.

Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/157952561027.1683545.1976265477926794138.stgit@toke.dk
2020-01-20 16:37:45 -08:00
Toke Høiland-Jørgensen
229c3b47b7 bpftool: Use consistent include paths for libbpf
Fix bpftool to include libbpf header files with the bpf/ prefix, to be
consistent with external users of the library. Also ensure that all
includes of exported libbpf header files (those that are exported on 'make
install' of the library) use bracketed includes instead of quoted.

To make sure no new files are introduced that don't include the bpf/ prefix in
their includes, remove tools/lib/bpf from the include path entirely, and use
tools/lib instead.
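
In other words, the include pattern changes roughly as follows (illustrative
example, not an exhaustive list of affected files):

  /* before: relied on tools/lib/bpf being on the include path */
  #include "libbpf.h"
  #include "bpf.h"

  /* after: same form as external users of the installed library */
  #include <bpf/libbpf.h>
  #include <bpf/bpf.h>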

Fixes: 6910d7d386 ("selftests/bpf: Ensure bpf_helper_defs.h are taken from selftests dir")
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/157952560684.1683545.4765181397974997027.stgit@toke.dk
2020-01-20 16:37:45 -08:00
Toke Høiland-Jørgensen
5b554ce518 tools/runqslower: Use consistent include paths for libbpf
Fix the runqslower tool to include libbpf header files with the bpf/
prefix, to be consistent with external users of the library. Also ensure
that all includes of exported libbpf header files (those that are exported
on 'make install' of the library) use bracketed includes instead of quoted.

To not break the build, keep the old include path until everything has been
changed to the new one; a subsequent patch will remove that.

Fixes: 6910d7d386 ("selftests/bpf: Ensure bpf_helper_defs.h are taken from selftests dir")
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/157952560457.1683545.9913736511685743625.stgit@toke.dk
2020-01-20 16:37:45 -08:00
Toke Høiland-Jørgensen
a835d38d41 tools/bpf/runqslower: Fix override option for VMLINUX_BTF
The runqslower tool refuses to build without a file to read vmlinux BTF from.
The build fails with an error message telling the user to override the
location by setting the VMLINUX_BTF variable if autodetection fails. However,
the Makefile doesn't actually work with that override - the error message is
still emitted.

Fix this by including the value of VMLINUX_BTF in the expansion, and only
emitting the error message if the *result* is empty. Also permit running
'make clean' even though no VMLINUX_BTF is set.

Fixes: 9c01546d26 ("tools/bpf: Add runqslower tool to tools/bpf")
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/157952560237.1683545.17771785178857224877.stgit@toke.dk
2020-01-20 16:37:45 -08:00
David S. Miller
b3f7e3f23a Merge ra.kernel.org:/pub/scm/linux/kernel/git/netdev/net 2020-01-19 22:10:04 +01:00
Martin KaFai Lau
4e1ea33292 bpftool: Support dumping a map with btf_vmlinux_value_type_id
This patch makes bpftool support dumping a map's value properly when the
map's value type is a type from the running kernel's BTF (i.e.
map_info.btf_vmlinux_value_type_id is set instead of
map_info.btf_value_type_id). The first use case is the
BPF_MAP_TYPE_STRUCT_OPS.

Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20200115230044.1103008-1-kafai@fb.com
2020-01-15 15:23:27 -08:00
Martin KaFai Lau
84c72ceee9 bpftool: Add struct_ops map name
This patch adds BPF_MAP_TYPE_STRUCT_OPS to "struct_ops" name mapping
so that "bpftool map show" can print the "struct_ops" map type
properly.

[root@arch-fb-vm1 bpf]# ~/devshare/fb-kernel/linux/tools/bpf/bpftool/bpftool map show id 8
8: struct_ops  name dctcp  flags 0x0
	key 4B  value 256B  max_entries 1  memlock 4096B
	btf_id 7

Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20200115230037.1102674-1-kafai@fb.com
2020-01-15 15:23:27 -08:00
Martin KaFai Lau
188a486619 bpftool: Fix missing BTF output for json during map dump
The BTF availability check is only done for plain text output. This causes the
whole BTF output to go missing when json_output is used.

This patch simplifies the logic a little by avoiding passing "int btf" to
map_dump().

For plain text output, the btf_wtr is only created when the map has
BTF (i.e. info->btf_id != 0).  The nullness of "json_writer_t *wtr"
in map_dump() alone can decide if dumping BTF output is needed.
As long as wtr is not NULL, map_dump() will print out the BTF-described
data whenever a map has BTF available (i.e. info->btf_id != 0)
regardless of json or plain-text output.

In do_dump(), the "int btf" is also renamed to "int do_plain_btf".
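
A condensed sketch of the resulting rule (hypothetical signature and helper
names, for illustration only):

  /* wtr != NULL is the only thing deciding whether BTF-described output
   * is produced, for JSON and plain text alike */
  static int map_dump(int fd, struct bpf_map_info *info, json_writer_t *wtr,
                      bool show_header)
  {
          if (wtr && info->btf_id)
                  return dump_map_elem_btf(fd, info, wtr);  /* hypothetical helper */
          return dump_map_elem_plain(fd, info);             /* hypothetical helper */
  }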

Fixes: 99f9863a0c ("bpftool: Match maps by name")
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Cc: Paul Chaignon <paul.chaignon@orange.com>
Link: https://lore.kernel.org/bpf/20200115230025.1101828-1-kafai@fb.com
2020-01-15 15:23:27 -08:00
Martin KaFai Lau
d7de72674a bpftool: Fix a leak of btf object
When testing whether a map has BTF, maps_have_btf() does so by actually
getting a btf_fd from sys_bpf(BPF_BTF_GET_FD_BY_ID). However, it forgot to
btf__free() it.

At the maps_have_btf() stage, there is no need to test this by actually
calling sys_bpf(BPF_BTF_GET_FD_BY_ID); testing for a non-zero info.btf_id is
good enough.

Also, the err_close case is unnecessary and causes a double close(), because
the calling function do_dump() will close() all fds again.
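
A minimal sketch of the simplified check (illustrative, not the exact
maps_have_btf() code):

  /* a non-zero btf_id already tells us the map has BTF; no BTF fd needs to
   * be fetched (and potentially leaked) just to answer this question */
  static bool map_has_btf(const struct bpf_map_info *info)
  {
          return info->btf_id != 0;
  }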

Fixes: 99f9863a0c ("bpftool: Match maps by name")
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Cc: Paul Chaignon <paul.chaignon@orange.com>
Link: https://lore.kernel.org/bpf/20200115230019.1101352-1-kafai@fb.com
2020-01-15 15:23:27 -08:00
Andrii Nakryiko
3a0d3092a4 selftests/bpf: Build runqslower from selftests
Ensure the runqslower tool is built as part of selftests to prevent it from
bit-rotting.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200113073143.1779940-7-andriin@fb.com
2020-01-14 09:23:12 -08:00
Andrii Nakryiko
9c01546d26 tools/bpf: Add runqslower tool to tools/bpf
Convert one of the BCC tools (runqslower [0]) to BPF CO-RE + libbpf. It
matches its BCC-based counterpart 1-to-1, supporting all the same parameters
and functionality.

The runqslower tool utilizes a BPF skeleton auto-generated from the BPF object
file, as well as a memory-mapped interface to global (read-only, in this case)
data. Its Makefile also ensures auto-generation of a "relocatable" vmlinux.h,
which is necessary for BTF-typed raw tracepoints with direct memory access.

  [0] 11bf5d02c8/tools/runqslower.py

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200113073143.1779940-6-andriin@fb.com
2020-01-13 17:48:13 -08:00
Andrii Nakryiko
1cf5b23988 bpftool: Apply preserve_access_index attribute to all types in BTF dump
This patch makes structs and unions emitted through BTF dump automatically
CO-RE-relocatable (unless disabled with `#define BPF_NO_PRESERVE_ACCESS_INDEX`,
specified before including the generated header file).

This effectively turns a usual bpf_probe_read() call into the equivalent of
bpf_core_read(), by automatically applying __builtin_preserve_access_index to
any field accesses of types in the generated C types header.

This is especially useful for tp_btf/fentry/fexit BPF program types. They
allow direct memory access, so BPF C code just uses a straightforward a->b->c
access pattern to read data from the kernel. But without kernel structs marked
as CO-RE-relocatable through the preserve_access_index attribute, one has to
enclose all data reads in a special __builtin_preserve_access_index code
block, like so:

__builtin_preserve_access_index(({
    x = p->pid; /* where p is struct task_struct *, for example */
}));

This is very inconvenient and obscures the logic quite a bit. By marking all
auto-generated types with the preserve_access_index attribute, the above code
is reduced to just a clean and natural `x = p->pid;`.
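
Concretely, the generated C header wraps all emitted types roughly like this
(sketch of the pattern, not the exact generated output):

  #ifndef BPF_NO_PRESERVE_ACCESS_INDEX
  #pragma clang attribute push (__attribute__((preserve_access_index)), apply_to = record)
  #endif

  struct task_struct {
          int pid;
          /* ... */
  };

  /* ... all other dumped structs and unions ... */

  #ifndef BPF_NO_PRESERVE_ACCESS_INDEX
  #pragma clang attribute pop
  #endif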

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200113073143.1779940-5-andriin@fb.com
2020-01-13 17:48:13 -08:00
Martin KaFai Lau
555089fdfc bpftool: Fix printing incorrect pointer in btf_dump_ptr
For plain text output, it incorrectly prints the pointer value of "void
*data". "void *data" actually points to memory that contains a bpf map's
value. The intention is to print the content of the bpf map's value instead of
the pointer pointing to it.

In this case, a member of the bpf map's value is a pointer type. Thus, it
should print "*(void **)data".
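
A generic illustration of the difference (not the actual bpftool code):

  #include <stdio.h>

  /* 'data' points into a buffer holding the map value, and the member being
   * dumped is itself a pointer */
  static void dump_ptr_member(const void *data)
  {
          printf("%p\n", *(void * const *)data);  /* correct: the stored pointer */
          /* printf("%p\n", data); */             /* wrong: address of the buffer */
  }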

Fixes: 22c349e8db ("tools: bpftool: fix format strings and arguments for jsonw_printf()")
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Link: https://lore.kernel.org/bpf/20200110231644.3484151-1-kafai@fb.com
2020-01-11 19:08:21 -08:00
Michal Rostecki
2faef64aa6 bpftool: Add misc section and probe for large INSN limit
Introduce a new probe section (misc) for probes not related to concrete
map types, program types, functions or kernel configuration. Introduce a
probe for large INSN limit as the first one in that section.

Example outputs:

  # bpftool feature probe
  [...]
  Scanning miscellaneous eBPF features...
  Large program size limit is available

  # bpftool feature probe macros
  [...]
  /*** eBPF misc features ***/
  #define HAVE_HAVE_LARGE_INSN_LIMIT

  # bpftool feature probe -j | jq '.["misc"]'
  {
    "have_large_insn_limit": true
  }

Signed-off-by: Michal Rostecki <mrostecki@opensuse.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Link: https://lore.kernel.org/bpf/20200108162428.25014-3-mrostecki@opensuse.org
2020-01-08 19:34:45 +01:00
David S. Miller
2bbc078f81 Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next
Daniel Borkmann says:

====================
pull-request: bpf-next 2019-12-27

The following pull-request contains BPF updates for your *net-next* tree.

We've added 127 non-merge commits during the last 17 day(s) which contain
a total of 110 files changed, 6901 insertions(+), 2721 deletions(-).

There are three merge conflicts. Conflicts and resolution looks as follows:

1) Merge conflict in net/bpf/test_run.c:

There was a tree-wide cleanup c593642c8b ("treewide: Use sizeof_field() macro")
which gets in the way of b590cb5f80 ("bpf: Switch to offsetofend in
BPF_PROG_TEST_RUN"):

  <<<<<<< HEAD
          if (!range_is_zero(__skb, offsetof(struct __sk_buff, priority) +
                             sizeof_field(struct __sk_buff, priority),
  =======
          if (!range_is_zero(__skb, offsetofend(struct __sk_buff, priority),
  >>>>>>> 7c8dce4b16

There are a few occasions that look similar to this. Always take the chunk with
offsetofend(). Note that there is one where the fields differ in here:

  <<<<<<< HEAD
          if (!range_is_zero(__skb, offsetof(struct __sk_buff, tstamp) +
                             sizeof_field(struct __sk_buff, tstamp),
  =======
          if (!range_is_zero(__skb, offsetofend(struct __sk_buff, gso_segs),
  >>>>>>> 7c8dce4b16

Just take the one with offsetofend() /and/ gso_segs. The latter is correct due
to 850a88cc40 ("bpf: Expose __sk_buff wire_len/gso_segs to BPF_PROG_TEST_RUN").

2) Merge conflict in arch/riscv/net/bpf_jit_comp.c:

(I'm keeping Bjorn in Cc here for a double-check in case I got it wrong.)

  <<<<<<< HEAD
          if (is_13b_check(off, insn))
                  return -1;
          emit(rv_blt(tcc, RV_REG_ZERO, off >> 1), ctx);
  =======
          emit_branch(BPF_JSLT, RV_REG_T1, RV_REG_ZERO, off, ctx);
  >>>>>>> 7c8dce4b16

Result should look like:

          emit_branch(BPF_JSLT, tcc, RV_REG_ZERO, off, ctx);

3) Merge conflict in arch/riscv/include/asm/pgtable.h:

  <<<<<<< HEAD
  =======
  #define VMALLOC_SIZE     (KERN_VIRT_SIZE >> 1)
  #define VMALLOC_END      (PAGE_OFFSET - 1)
  #define VMALLOC_START    (PAGE_OFFSET - VMALLOC_SIZE)

  #define BPF_JIT_REGION_SIZE     (SZ_128M)
  #define BPF_JIT_REGION_START    (PAGE_OFFSET - BPF_JIT_REGION_SIZE)
  #define BPF_JIT_REGION_END      (VMALLOC_END)

  /*
   * Roughly size the vmemmap space to be large enough to fit enough
   * struct pages to map half the virtual address space. Then
   * position vmemmap directly below the VMALLOC region.
   */
  #define VMEMMAP_SHIFT \
          (CONFIG_VA_BITS - PAGE_SHIFT - 1 + STRUCT_PAGE_MAX_SHIFT)
  #define VMEMMAP_SIZE    BIT(VMEMMAP_SHIFT)
  #define VMEMMAP_END     (VMALLOC_START - 1)
  #define VMEMMAP_START   (VMALLOC_START - VMEMMAP_SIZE)

  #define vmemmap         ((struct page *)VMEMMAP_START)

  >>>>>>> 7c8dce4b16

Only take the BPF_* defines from there and move them higher up in the
same file. Remove the rest from the chunk. The VMALLOC_* etc defines
got moved via 01f52e16b8 ("riscv: define vmemmap before pfn_to_page
calls"). Result:

  [...]
  #define __S101  PAGE_READ_EXEC
  #define __S110  PAGE_SHARED_EXEC
  #define __S111  PAGE_SHARED_EXEC

  #define VMALLOC_SIZE     (KERN_VIRT_SIZE >> 1)
  #define VMALLOC_END      (PAGE_OFFSET - 1)
  #define VMALLOC_START    (PAGE_OFFSET - VMALLOC_SIZE)

  #define BPF_JIT_REGION_SIZE     (SZ_128M)
  #define BPF_JIT_REGION_START    (PAGE_OFFSET - BPF_JIT_REGION_SIZE)
  #define BPF_JIT_REGION_END      (VMALLOC_END)

  /*
   * Roughly size the vmemmap space to be large enough to fit enough
   * struct pages to map half the virtual address space. Then
   * position vmemmap directly below the VMALLOC region.
   */
  #define VMEMMAP_SHIFT \
          (CONFIG_VA_BITS - PAGE_SHIFT - 1 + STRUCT_PAGE_MAX_SHIFT)
  #define VMEMMAP_SIZE    BIT(VMEMMAP_SHIFT)
  #define VMEMMAP_END     (VMALLOC_START - 1)
  #define VMEMMAP_START   (VMALLOC_START - VMEMMAP_SIZE)

  [...]

Let me know if there are any other issues.

Anyway, the main changes are:

1) Extend bpftool to produce a struct (aka "skeleton") tailored and specific
   to a provided BPF object file. This provides an alternative, simplified API
   compared to standard libbpf interaction. Also, add libbpf extern variable
   resolution for .kconfig section to import Kconfig data, from Andrii Nakryiko.

2) Add BPF dispatcher for XDP which is a mechanism to avoid indirect calls by
   generating a branch funnel as discussed back in bpfconf'19 at LSF/MM. Also,
   add various BPF riscv JIT improvements, from Björn Töpel.

3) Extend bpftool to allow matching BPF programs and maps by name,
   from Paul Chaignon.

4) Support for replacing cgroup BPF programs attached with BPF_F_ALLOW_MULTI
   flag for allowing updates without service interruption, from Andrey Ignatov.

5) Cleanup and simplification of ring access functions for AF_XDP with a
   bonus of 0-5% performance improvement, from Magnus Karlsson.

6) Enable BPF JITs for x86-64 and arm64 by default. Also, final version of
   audit support for BPF, from Daniel Borkmann and latter with Jiri Olsa.

7) Move and extend test_select_reuseport into BPF program tests under
   BPF selftests, from Jakub Sitnicki.

8) Various BPF sample improvements for xdpsock for customizing parameters
   to set up and benchmark AF_XDP, from Jay Jayatheerthan.

9) Improve libbpf to provide a ulimit hint on permission denied errors.
   Also change XDP sample programs to attach in driver mode by default,
   from Toke Høiland-Jørgensen.

10) Extend BPF test infrastructure to allow changing skb mark from tc BPF
    programs, from Nikita V. Shirokov.

11) Optimize prologue code sequence in BPF arm32 JIT, from Russell King.

12) Fix xdp_redirect_cpu BPF sample to manually attach to tracepoints after
    libbpf conversion, from Jesper Dangaard Brouer.

13) Minor misc improvements from various others.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2019-12-27 14:20:10 -08:00
Andrii Nakryiko
7c8dce4b16 bpftool: Make skeleton C code compilable with C++ compiler
When auto-generated BPF skeleton C code is included from a C++ application, it
triggers a compilation error due to void * being implicitly cast to whatever
the target pointer type is. This is supported by C, but not by C++. To solve
this problem, add explicit casts where necessary.
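
A generic illustration of the problem and the fix (hypothetical skeleton name):

  #include <stdlib.h>

  struct my_skel { int x; };

  static struct my_skel *my_skel__open(void)
  {
          /* struct my_skel *obj = calloc(1, sizeof(*obj)); */
          /* the line above is fine in C but a compile error in C++, which does
           * not implicitly convert from void *; an explicit cast works in both: */
          struct my_skel *obj = (struct my_skel *)calloc(1, sizeof(*obj));

          return obj;
  }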

To ensure issues like this are captured going forward, add skeleton usage in
test_cpp test.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191226210253.3132060-1-andriin@fb.com
2019-12-27 10:11:05 +01:00
Hechao Li
1162f84403 bpf: Print error message for bpftool cgroup show
Currently, when bpftool cgroup show <path> has an error, no error
message is printed. This is confusing because the user may think the
result is empty.

Before the change:

$ bpftool cgroup show /sys/fs/cgroup
ID       AttachType      AttachFlags     Name
$ echo $?
255

After the change:
$ ./bpftool cgroup show /sys/fs/cgroup
Error: can't query bpf programs attached to /sys/fs/cgroup: Operation
not permitted

v2: Rename check_query_cgroup_progs to cgroup_has_attached_progs

Signed-off-by: Hechao Li <hechaol@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191224011742.3714301-1-hechaol@fb.com
2019-12-26 10:35:54 +01:00
Andrii Nakryiko
81bfdd087b libbpf: Put Kconfig externs into .kconfig section
Move Kconfig-provided externs into a custom .kconfig section. Add __kconfig to
bpf_helpers.h for user convenience. Update selftests accordingly.
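
A BPF-side usage sketch (hypothetical program; the extern names shown are
examples of values libbpf can resolve):

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>

  /* resolved by libbpf at load time and placed, read-only, into .kconfig */
  extern int LINUX_KERNEL_VERSION __kconfig;
  extern int CONFIG_HZ __kconfig;

  SEC("raw_tp/sched_switch")
  int probe(void *ctx)
  {
          /* adapt program behavior to the running kernel, e.g. skip logic
           * that only makes sense on v5.5+ kernels with CONFIG_HZ >= 1000 */
          if (LINUX_KERNEL_VERSION < 0x050500 || CONFIG_HZ < 1000)
                  return 0;

          /* ... main logic ... */
          return 0;
  }

  char LICENSE[] SEC("license") = "GPL";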

Suggested-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191219002837.3074619-2-andriin@fb.com
2019-12-18 17:33:36 -08:00
Andrii Nakryiko
dacce6412e bpftool: Work-around rst2man conversion bug
Work around what appears to be a bug in the rst2man conversion tool, used to
create man pages out of reStructuredText-formatted documents. If a text line
starts with a dot, rst2man will put it into the resulting man file verbatim.
This seems to cause the man tool to interpret it as a directive/command (e.g.,
`.bs`) and subsequently not render the entire line, because it is
unrecognized.

Enclose '.xxx' words in extra formatting to work around this.

Fixes: cb21ac5885 ("bpftool: Add gen subcommand manpage")
Reported-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20191218221707.2552199-1-andriin@fb.com
2019-12-18 17:03:52 -08:00
Andrii Nakryiko
7c43e0d6a5 bpftool: Simplify format string to not use positional args
Change a format string that refers to just a single argument out of the two
available. Some versions of libc can reject such a format string.

Reported-by: Nikita Shirokov <tehnerd@tehnerd.com>
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20191218214314.2403729-1-andriin@fb.com
2019-12-18 17:02:16 -08:00
Andrii Nakryiko
cb21ac5885 bpftool: Add gen subcommand manpage
Add bpftool-gen.rst describing the skeleton at a high level. Also include
a small but complete example BPF app (BPF side, userspace side, generated
skeleton) in the example section to demonstrate the skeleton API and its
usage.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20191218052552.2915188-4-andriin@fb.com
2019-12-17 22:16:36 -08:00
Andrii Nakryiko
5dc7a8b211 bpftool, selftests/bpf: Embed object file inside skeleton
Embed the contents of the BPF object file used for BPF skeleton generation
inside the skeleton itself. This allows keeping the BPF object file and its
skeleton in sync at all times, and simplifies skeleton instantiation.

Also switch existing selftests to no longer require BPF_EMBED_OBJ.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20191218052552.2915188-2-andriin@fb.com
2019-12-17 22:16:35 -08:00
Paul Chaignon
159ecc002b bpftool: Fix compilation warning on shadowed variable
The ident variable has already been declared at the top of the function
and doesn't need to be re-declared.
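
A generic illustration of the warning class being silenced (not the actual
bpftool codegen function):

  #include <stdio.h>

  static void codegen_maps(const char *obj_name)
  {
          char ident[64];         /* already declared at the top of the function */

          snprintf(ident, sizeof(ident), "%s_map", obj_name);
          {
                  char ident[64]; /* -Wshadow: re-declaration shadows the outer one */

                  snprintf(ident, sizeof(ident), "%s_prog", obj_name);
                  (void)ident;
          }
          (void)ident;
  }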

Fixes: 985ead416d ("bpftool: Add skeleton codegen command")
Signed-off-by: Paul Chaignon <paul.chaignon@orange.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191216112733.GA28366@Omicron
2019-12-16 14:18:48 +01:00
Andrii Nakryiko
2ad97d473d bpftool: Generate externs datasec in BPF skeleton
Add support for generating an mmap()-ed read-only view of libbpf-provided
extern variables. As externs are not supposed to be provided by user code
(that's what .data, .bss, and .rodata are for), don't mmap() it initially.
Only after the skeleton load is performed, map the .extern contents as
read-only memory.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191214014710.3449601-4-andriin@fb.com
2019-12-15 16:41:12 -08:00
Andrii Nakryiko
d9c00c3b16 bpftool: Add gen skeleton BASH completions
Add BASH completions for gen sub-command.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Cc: Quentin Monnet <quentin.monnet@netronome.com>
Link: https://lore.kernel.org/bpf/20191214014341.3442258-18-andriin@fb.com
2019-12-15 15:58:06 -08:00
Andrii Nakryiko
985ead416d bpftool: Add skeleton codegen command
Add a `bpftool gen skeleton` command, which takes in a compiled BPF .o object
file and dumps a BPF skeleton struct and related code to work with that
skeleton. The skeleton itself is tailored to the specific structure of the
provided BPF object file, containing accessors (just plain struct fields) for
every map and program, as well as dedicated space for bpf_links. If the BPF
program is using global variables, corresponding structure definitions with a
compatible memory layout are emitted as well, making it possible to initialize
and subsequently read/update global variable values using simple and clear C
syntax for accessing fields. This skeleton greatly improves the usability of
opening/loading/attaching a BPF object, as well as interacting with it
throughout the lifetime of the loaded BPF object.

Generated skeleton struct has the following structure:

struct <object-name> {
	/* used by libbpf's skeleton API */
	struct bpf_object_skeleton *skeleton;
	/* bpf_object for libbpf APIs */
	struct bpf_object *obj;
	struct {
		/* for every defined map in BPF object: */
		struct bpf_map *<map-name>;
	} maps;
	struct {
		/* for every program in BPF object: */
		struct bpf_program *<program-name>;
	} progs;
	struct {
		/* for every program in BPF object: */
		struct bpf_link *<program-name>;
	} links;
	/* for every present global data section: */
	struct <object-name>__<one of bss, data, or rodata> {
		/* memory layout of corresponding data section,
		 * with every defined variable represented as a struct field
		 * with exactly the same type, but without const/volatile
		 * modifiers, e.g.:
		 */
		 int *my_var_1;
		 ...
	} *<one of bss, data, or rodata>;
};

This provides great usability improvements:
- no need to look up maps and programs by name, instead just
  my_obj->maps.my_map or my_obj->progs.my_prog would give necessary
  bpf_map/bpf_program pointers, which user can pass to existing libbpf APIs;
- pre-defined places for bpf_links, which will be automatically populated for
  program types that libbpf knows how to attach automatically (currently
  tracepoints, kprobe/kretprobe, raw tracepoint and tracing programs). On
  tearing down skeleton, all active bpf_links will be destroyed (meaning BPF
  programs will be detached, if they are attached). For cases in which libbpf
  doesn't know how to auto-attach BPF program, user can manually create link
  after loading skeleton and they will be auto-detached on skeleton
  destruction:

	my_obj->links.my_fancy_prog = bpf_program__attach_cgroup_whatever(
		my_obj->progs.my_fancy_prog, <whatever extra param);

- it's extremely easy and convenient to work with global data from userspace
  now. Both for read-only and read/write variables, it's possible to
  pre-initialize them before skeleton is loaded:

	skel = my_obj__open(raw_embed_data);
	my_obj->rodata->my_var = 123;
	my_obj__load(skel); /* 123 will be initialization value for my_var */

  After load, if the kernel supports mmap() for BPF arrays, the user can still
  read (and write, for .bss and .data) variable values, but at that point they
  will be directly mmap()-ed to the BPF array backing the global variables.
  This allows seamlessly exchanging data with the BPF side. From the userspace
  program's POV, all the pointers and memory contents stay the same, but the
  mapped kernel memory changes to point to the created map.
  If kernel doesn't yet support mmap() for BPF arrays, it's still possible to
  use those data section structs to pre-initialize .bss, .data, and .rodata,
  but after load their pointers will be reset to NULL, allowing user code to
  gracefully handle this condition, if necessary.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20191214014341.3442258-14-andriin@fb.com
2019-12-15 15:58:05 -08:00