// SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause)

/*
 * common eBPF ELF operations.
 *
 * Copyright (C) 2013-2015 Alexei Starovoitov <ast@kernel.org>
 * Copyright (C) 2015 Wang Nan <wangnan0@huawei.com>
 * Copyright (C) 2015 Huawei Inc.
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
 * License as published by the Free Software Foundation;
 * version 2.1 of the License (not later!)
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public
 * License along with this program; if not, see <http://www.gnu.org/licenses>
 */
#include <stdlib.h>
#include <string.h>
#include <memory.h>
#include <unistd.h>
#include <asm/unistd.h>
#include <errno.h>
#include <linux/bpf.h>
#include <linux/filter.h>
#include <linux/kernel.h>
#include <limits.h>
#include <sys/resource.h>
#include "bpf.h"
#include "libbpf.h"
#include "libbpf_internal.h"

/*
 * When building perf, unistd.h is overridden. __NR_bpf is
 * required to be defined explicitly.
 */
#ifndef __NR_bpf
# if defined(__i386__)
#  define __NR_bpf 357
# elif defined(__x86_64__)
#  define __NR_bpf 321
# elif defined(__aarch64__)
#  define __NR_bpf 280
# elif defined(__sparc__)
#  define __NR_bpf 349
# elif defined(__s390__)
#  define __NR_bpf 351
# elif defined(__arc__)
#  define __NR_bpf 280
# elif defined(__mips__) && defined(_ABIO32)
#  define __NR_bpf 4355
# elif defined(__mips__) && defined(_ABIN32)
#  define __NR_bpf 6319
# elif defined(__mips__) && defined(_ABI64)
#  define __NR_bpf 5315
# else
#  error __NR_bpf not defined. libbpf does not support your arch.
# endif
#endif

static inline __u64 ptr_to_u64(const void *ptr)
{
	return (__u64) (unsigned long) ptr;
}

static inline int sys_bpf(enum bpf_cmd cmd, union bpf_attr *attr,
			  unsigned int size)
{
	return syscall(__NR_bpf, cmd, attr, size);
}

static inline int sys_bpf_fd(enum bpf_cmd cmd, union bpf_attr *attr,
			     unsigned int size)
{
	int fd;

	fd = sys_bpf(cmd, attr, size);
	return ensure_good_fd(fd);
}

#define PROG_LOAD_ATTEMPTS 5

static inline int sys_bpf_prog_load(union bpf_attr *attr, unsigned int size, int attempts)
{
	int fd;

	do {
		fd = sys_bpf_fd(BPF_PROG_LOAD, attr, size);
	} while (fd < 0 && errno == EAGAIN && --attempts > 0);

	return fd;
}

/* Probe whether kernel switched from memlock-based (RLIMIT_MEMLOCK) to
 * memcg-based memory accounting for BPF maps and progs. This was done in [0].
 * We use the support for bpf_ktime_get_coarse_ns() helper, which was added in
 * the same 5.11 Linux release ([1]), to detect memcg-based accounting for BPF.
 *
 * [0] https://lore.kernel.org/bpf/20201201215900.3569844-1-guro@fb.com/
 * [1] d05512618056 ("bpf: Add bpf_ktime_get_coarse_ns helper")
 */
int probe_memcg_account(void)
{
	const size_t prog_load_attr_sz = offsetofend(union bpf_attr, attach_btf_obj_fd);
	struct bpf_insn insns[] = {
		BPF_EMIT_CALL(BPF_FUNC_ktime_get_coarse_ns),
		BPF_EXIT_INSN(),
	};
	size_t insn_cnt = ARRAY_SIZE(insns);
	union bpf_attr attr;
	int prog_fd;

	/* attempt loading a program that calls bpf_ktime_get_coarse_ns() */
	memset(&attr, 0, prog_load_attr_sz);
	attr.prog_type = BPF_PROG_TYPE_SOCKET_FILTER;
	attr.insns = ptr_to_u64(insns);
	attr.insn_cnt = insn_cnt;
	attr.license = ptr_to_u64("GPL");

	prog_fd = sys_bpf_fd(BPF_PROG_LOAD, &attr, prog_load_attr_sz);
	if (prog_fd >= 0) {
		close(prog_fd);
		return 1;
	}
	return 0;
}

static bool memlock_bumped;
static rlim_t memlock_rlim = RLIM_INFINITY;

int libbpf_set_memlock_rlim(size_t memlock_bytes)
{
	if (memlock_bumped)
		return libbpf_err(-EBUSY);

	memlock_rlim = memlock_bytes;
	return 0;
}

int bump_rlimit_memlock(void)
{
	struct rlimit rlim;

	/* if kernel supports memcg-based accounting, skip bumping RLIMIT_MEMLOCK */
	if (memlock_bumped || kernel_supports(NULL, FEAT_MEMCG_ACCOUNT))
		return 0;

	memlock_bumped = true;

	/* zero memlock_rlim disables auto-bumping RLIMIT_MEMLOCK */
	if (memlock_rlim == 0)
		return 0;

	rlim.rlim_cur = rlim.rlim_max = memlock_rlim;
	if (setrlimit(RLIMIT_MEMLOCK, &rlim))
		return -errno;

	return 0;
}

int bpf_map_create(enum bpf_map_type map_type,
		   const char *map_name,
		   __u32 key_size,
		   __u32 value_size,
		   __u32 max_entries,
		   const struct bpf_map_create_opts *opts)
{
	const size_t attr_sz = offsetofend(union bpf_attr, map_extra);
	union bpf_attr attr;
	int fd;

	bump_rlimit_memlock();

	memset(&attr, 0, attr_sz);

	if (!OPTS_VALID(opts, bpf_map_create_opts))
		return libbpf_err(-EINVAL);

	attr.map_type = map_type;
	if (map_name)
		libbpf_strlcpy(attr.map_name, map_name, sizeof(attr.map_name));
	attr.key_size = key_size;
	attr.value_size = value_size;
	attr.max_entries = max_entries;

	attr.btf_fd = OPTS_GET(opts, btf_fd, 0);
	attr.btf_key_type_id = OPTS_GET(opts, btf_key_type_id, 0);
	attr.btf_value_type_id = OPTS_GET(opts, btf_value_type_id, 0);
	attr.btf_vmlinux_value_type_id = OPTS_GET(opts, btf_vmlinux_value_type_id, 0);

	attr.inner_map_fd = OPTS_GET(opts, inner_map_fd, 0);
	attr.map_flags = OPTS_GET(opts, map_flags, 0);
	attr.map_extra = OPTS_GET(opts, map_extra, 0);
	attr.numa_node = OPTS_GET(opts, numa_node, 0);
	attr.map_ifindex = OPTS_GET(opts, map_ifindex, 0);

	fd = sys_bpf_fd(BPF_MAP_CREATE, &attr, attr_sz);
	return libbpf_err_errno(fd);
}

static void *
alloc_zero_tailing_info(const void *orecord, __u32 cnt,
			__u32 actual_rec_size, __u32 expected_rec_size)
{
	__u64 info_len = (__u64)actual_rec_size * cnt;
	void *info, *nrecord;
	int i;

	info = malloc(info_len);
	if (!info)
		return NULL;

	/* zero out bytes kernel does not understand */
	nrecord = info;
	for (i = 0; i < cnt; i++) {
		memcpy(nrecord, orecord, expected_rec_size);
		memset(nrecord + expected_rec_size, 0,
		       actual_rec_size - expected_rec_size);
		orecord += actual_rec_size;
		nrecord += actual_rec_size;
	}

	return info;
}
|
|
|
|
|
2022-06-27 21:15:14 +00:00
|
|
|
int bpf_prog_load(enum bpf_prog_type prog_type,
|
|
|
|
const char *prog_name, const char *license,
|
|
|
|
const struct bpf_insn *insns, size_t insn_cnt,
|
|
|
|
const struct bpf_prog_load_opts *opts)
|
2015-07-01 02:14:06 +00:00
|
|
|
{
|
2018-12-08 00:42:31 +00:00
|
|
|
void *finfo = NULL, *linfo = NULL;
|
libbpf: Unify low-level BPF_PROG_LOAD APIs into bpf_prog_load()
Add a new unified OPTS-based low-level API for program loading,
bpf_prog_load() ([0]). bpf_prog_load() accepts few "mandatory"
parameters as input arguments (program type, name, license,
instructions) and all the other optional (as in not required to specify
for all types of BPF programs) fields into struct bpf_prog_load_opts.
This makes all the other non-extensible APIs variant for BPF_PROG_LOAD
obsolete and they are slated for deprecation in libbpf v0.7:
- bpf_load_program();
- bpf_load_program_xattr();
- bpf_verify_program().
Implementation-wise, internal helper libbpf__bpf_prog_load is refactored
to become a public bpf_prog_load() API. struct bpf_prog_load_params used
internally is replaced by public struct bpf_prog_load_opts.
Unfortunately, while conceptually all this is pretty straightforward,
the biggest complication comes from the already existing bpf_prog_load()
*high-level* API, which has nothing to do with BPF_PROG_LOAD command.
We try really hard to have a new API named bpf_prog_load(), though,
because it maps naturally to BPF_PROG_LOAD command.
For that, we rename old bpf_prog_load() into bpf_prog_load_deprecated()
and mark it as COMPAT_VERSION() for shared library users compiled
against old version of libbpf. Statically linked users and shared lib
users compiled against new version of libbpf headers will get "rerouted"
to bpf_prog_deprecated() through a macro helper that decides whether to
use new or old bpf_prog_load() based on number of input arguments (see
___libbpf_overload in libbpf_common.h).
To test that existing
bpf_prog_load()-using code compiles and works as expected, I've compiled
and ran selftests as is. I had to remove (locally) selftest/bpf/Makefile
-Dbpf_prog_load=bpf_prog_test_load hack because it was conflicting with
the macro-based overload approach. I don't expect anyone else to do
something like this in practice, though. This is testing-specific way to
replace bpf_prog_load() calls with special testing variant of it, which
adds extra prog_flags value. After testing I kept this selftests hack,
but ensured that we use a new bpf_prog_load_deprecated name for this.
This patch also marks bpf_prog_load() and bpf_prog_load_xattr() as deprecated.
bpf_object interface has to be used for working with struct bpf_program.
Libbpf doesn't support loading just a bpf_program.
The silver lining is that when we get to libbpf 1.0 all these
complication will be gone and we'll have one clean bpf_prog_load()
low-level API with no backwards compatibility hackery surrounding it.
[0] Closes: https://github.com/libbpf/libbpf/issues/284
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211103220845.2676888-4-andrii@kernel.org
2021-11-03 22:08:36 +00:00
|
|
|
const char *func_info, *line_info;
|
|
|
|
__u32 log_size, log_level, attach_prog_fd, attach_btf_obj_fd;
|
|
|
|
__u32 func_info_rec_size, line_info_rec_size;
|
|
|
|
int fd, attempts;
|
2015-07-01 02:14:06 +00:00
|
|
|
union bpf_attr attr;
|
libbpf: Unify low-level BPF_PROG_LOAD APIs into bpf_prog_load()
Add a new unified OPTS-based low-level API for program loading,
bpf_prog_load() ([0]). bpf_prog_load() accepts few "mandatory"
parameters as input arguments (program type, name, license,
instructions) and all the other optional (as in not required to specify
for all types of BPF programs) fields into struct bpf_prog_load_opts.
This makes all the other non-extensible APIs variant for BPF_PROG_LOAD
obsolete and they are slated for deprecation in libbpf v0.7:
- bpf_load_program();
- bpf_load_program_xattr();
- bpf_verify_program().
Implementation-wise, internal helper libbpf__bpf_prog_load is refactored
to become a public bpf_prog_load() API. struct bpf_prog_load_params used
internally is replaced by public struct bpf_prog_load_opts.
char *log_buf;
libbpf: Auto-bump RLIMIT_MEMLOCK if kernel needs it for BPF
The need to increase RLIMIT_MEMLOCK to do anything useful with BPF is
one of the first extremely frustrating gotchas that all new BPF users go
through, and some have to learn it the hard way.
Luckily, starting with upstream Linux kernel version 5.11, the BPF
subsystem dropped the dependency on memlock and uses memcg-based memory
accounting instead. Unfortunately, detecting memcg-based BPF memory
accounting is far from trivial (as can be evidenced by this patch), so
in practice most BPF applications still do an unconditional
RLIMIT_MEMLOCK increase.
As we move towards libbpf 1.0, it would be good to allow users to forget
about RLIMIT_MEMLOCK vs memcg and let libbpf do the sensible adjustment
automatically. This patch paves the way forward in this matter. Libbpf
will do feature detection of memcg-based accounting and, if it is
detected, will do nothing. But if the kernel is too old, then, just like
BCC, libbpf will automatically increase RLIMIT_MEMLOCK on behalf of the
user application ([0]).
As this is technically a breaking change, during the transition period
applications have to opt into libbpf 1.0 mode by setting the
LIBBPF_STRICT_AUTO_RLIMIT_MEMLOCK bit when calling
libbpf_set_strict_mode().
Libbpf allows controlling the exact RLIMIT_MEMLOCK limit that gets set
via the libbpf_set_memlock_rlim_max() API. Passing 0 makes libbpf do
nothing with RLIMIT_MEMLOCK. libbpf_set_memlock_rlim_max() has to be
called before the first bpf_prog_load(), bpf_btf_load(), or
bpf_object__load() call; otherwise it has no effect and will return
-EBUSY.
[0] Closes: https://github.com/libbpf/libbpf/issues/369
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20211214195904.1785155-2-andrii@kernel.org
2021-12-14 19:59:03 +00:00
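On kernels older than 5.11 the fallback described above amounts to raising the process's RLIMIT_MEMLOCK before issuing BPF syscalls. A minimal standalone sketch of that bump (not libbpf's actual implementation, which also probes for memcg-based accounting and honors libbpf_set_memlock_rlim_max()):

```c
#include <sys/resource.h>

/* Hypothetical sketch of the pre-5.11 fallback: raise the RLIMIT_MEMLOCK
 * soft limit up to the hard cap, which any process is allowed to do.
 * Real libbpf defaults to RLIM_INFINITY (which may need CAP_SYS_RESOURCE)
 * and skips the bump entirely when memcg accounting is detected. */
static int bump_memlock_rlimit(void)
{
	struct rlimit rlim;

	if (getrlimit(RLIMIT_MEMLOCK, &rlim))
		return -1;
	rlim.rlim_cur = rlim.rlim_max;	/* soft limit up to the hard cap */
	return setrlimit(RLIMIT_MEMLOCK, &rlim);
}
```

Raising the soft limit to the hard limit never requires privileges, which is why the sketch uses the hard cap rather than RLIM_INFINITY.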
bump_rlimit_memlock();
libbpf: Unify low-level BPF_PROG_LOAD APIs into bpf_prog_load()
Add a new unified OPTS-based low-level API for program loading,
bpf_prog_load() ([0]). bpf_prog_load() accepts a few "mandatory"
parameters as input arguments (program type, name, license,
instructions), while all the other optional fields (those not required
for every type of BPF program) go into struct bpf_prog_load_opts.
This makes all the other non-extensible API variants for BPF_PROG_LOAD
obsolete, and they are slated for deprecation in libbpf v0.7:
- bpf_load_program();
- bpf_load_program_xattr();
- bpf_verify_program().
Implementation-wise, the internal helper libbpf__bpf_prog_load() is
refactored to become the public bpf_prog_load() API, and the internal
struct bpf_prog_load_params is replaced by the public struct
bpf_prog_load_opts.
Unfortunately, while conceptually all of this is pretty straightforward,
the biggest complication comes from the already existing bpf_prog_load()
*high-level* API, which has nothing to do with the BPF_PROG_LOAD
command. We try really hard to have the new API named bpf_prog_load(),
though, because it maps naturally to the BPF_PROG_LOAD command.
For that, we rename the old bpf_prog_load() into
bpf_prog_load_deprecated() and mark it as COMPAT_VERSION() for shared
library users compiled against an old version of libbpf. Statically
linked users and shared lib users compiled against new libbpf headers
get "rerouted" to bpf_prog_load_deprecated() through a macro helper that
decides whether to use the new or old bpf_prog_load() based on the
number of input arguments (see ___libbpf_overload in libbpf_common.h).
To test that existing bpf_prog_load()-using code compiles and works as
expected, I've compiled and run selftests as is. I had to remove
(locally) the selftests/bpf/Makefile -Dbpf_prog_load=bpf_prog_test_load
hack because it was conflicting with the macro-based overload approach.
I don't expect anyone else to do something like this in practice,
though; it is a testing-specific way to replace bpf_prog_load() calls
with a special testing variant that adds an extra prog_flags value.
After testing I kept this selftests hack, but ensured that it uses the
new bpf_prog_load_deprecated name.
This patch also marks bpf_prog_load() and bpf_prog_load_xattr() as
deprecated. The bpf_object interface has to be used for working with
struct bpf_program; libbpf doesn't support loading just a bpf_program.
The silver lining is that when we get to libbpf 1.0, all these
complications will be gone and we'll have one clean bpf_prog_load()
low-level API with no backwards compatibility hackery surrounding it.
[0] Closes: https://github.com/libbpf/libbpf/issues/284
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211103220845.2676888-4-andrii@kernel.org
2021-11-03 22:08:36 +00:00
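The argument-count dispatch mentioned above can be illustrated with a self-contained sketch. The helper names below (___cat, ___sel, ___nth, ___cnt, load1, load2) are simplified stand-ins for the real ___libbpf_overload machinery in libbpf_common.h, not libbpf's actual macros:

```c
#include <assert.h>

/* Two "versions" of an API standing in for the old 1-arg and new 2-arg
 * bpf_prog_load(); the values they return exist only so the dispatch
 * can be observed. */
static int load1(int fd)            { return fd; }
static int load2(int fd, int flags) { return fd + flags; }

/* Count the arguments, then paste the count onto the function name.
 * ___sel is the extra indirection needed so ___cnt(...) is expanded
 * before token pasting. */
#define ___cat(A, B) A ## B
#define ___sel(NAME, NUM) ___cat(NAME, NUM)
#define ___nth(_1, _2, _3, N, ...) N
#define ___cnt(...) ___nth(__VA_ARGS__, 3, 2, 1)
#define ___overload(NAME, ...) ___sel(NAME, ___cnt(__VA_ARGS__))(__VA_ARGS__)

/* load(x) dispatches to load1(), load(x, y) to load2() */
#define load(...) ___overload(load, __VA_ARGS__)
```

This is how a single call-site name can route existing one-signature callers to the deprecated implementation while new callers get the new one, purely at preprocessing time.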
if (!OPTS_VALID(opts, bpf_prog_load_opts))
	return libbpf_err(-EINVAL);
tools/bpf: add log_level to bpf_load_program_attr
The kernel verifier has three levels of logs:
0: no logs
1: logs mostly useful
> 1: verbose
The current libbpf API functions bpf_load_program_xattr() and
bpf_load_program() cannot specify log_level. BCC, however, provides an
interface for users to specify log_level 2 for verbose output.
This patch adds log_level to struct bpf_load_program_attr, so users,
including BCC, can use bpf_load_program_xattr() to change the log_level.
The supported log_level values are 0, 1, and 2.
The bpf selftest test_sock.c is modified to enable log_level = 2. If
"verbose" in test_sock.c is changed to true, the test will output logs
like below:
$ ./test_sock
func#0 @0
0: R1=ctx(id=0,off=0,imm=0) R10=fp0,call_-1
0: (bf) r6 = r1
1: R1=ctx(id=0,off=0,imm=0) R6_w=ctx(id=0,off=0,imm=0) R10=fp0,call_-1
1: (61) r7 = *(u32 *)(r6 +28)
invalid bpf_context access off=28 size=4
Test case: bind4 load with invalid access: src_ip6 .. [PASS]
...
Test case: bind6 allow all .. [PASS]
Summary: 16 PASSED, 0 FAILED
Some test_sock tests are negative tests, and a verbose verifier log will
be printed out as shown above.
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-02-07 17:34:51 +00:00
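In the later OPTS-based bpf_prog_load(), log_level (together with log_buf and log_size) travels in struct bpf_prog_load_opts and is read with OPTS_GET(), defaulting to 0. The size-prefixed opts pattern behind OPTS_GET() can be sketched in a self-contained way; the struct and macro names below are simplified stand-ins, not libbpf's real definitions (those live in libbpf_common.h):

```c
#include <stddef.h>
#include <string.h>

/* Sketch of the opts pattern: the first field records the struct size
 * as the *caller* compiled it, so the library can tell which trailing
 * fields the caller's headers even knew about. */
struct prog_load_opts {
	size_t sz;		/* set by the caller to sizeof(its struct) */
	int log_level;
	char *log_buf;
	size_t log_size;
};

/* A field is usable only if it lies entirely within the caller's size. */
#define OPTS_HAS(opts, field) \
	((opts) && (opts)->sz >= offsetof(struct prog_load_opts, field) + \
				 sizeof((opts)->field))
#define OPTS_GET(opts, field, fallback) \
	(OPTS_HAS(opts, field) ? (opts)->field : (fallback))

static int effective_log_level(const struct prog_load_opts *opts)
{
	return OPTS_GET(opts, log_level, 0);	/* default: no verifier log */
}
```

This is why the opts struct can grow new fields without breaking old binaries: a caller built against older headers reports a smaller sz, and reads of newer fields fall back to their defaults.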
attempts = OPTS_GET(opts, attempts, 0);
if (attempts < 0)
	return libbpf_err(-EINVAL);
if (attempts == 0)
	attempts = PROG_LOAD_ATTEMPTS;
memset(&attr, 0, sizeof(attr));

attr.prog_type = prog_type;
attr.expected_attach_type = OPTS_GET(opts, expected_attach_type, 0);
attr.prog_btf_fd = OPTS_GET(opts, prog_btf_fd, 0);
attr.prog_flags = OPTS_GET(opts, prog_flags, 0);
attr.prog_ifindex = OPTS_GET(opts, prog_ifindex, 0);
attr.kern_version = OPTS_GET(opts, kern_version, 0);
if (prog_name)
	libbpf_strlcpy(attr.prog_name, prog_name, sizeof(attr.prog_name));
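libbpf_strlcpy() is used here because attr.prog_name is a fixed-size kernel-facing buffer: unlike strncpy(), an strlcpy-style copy always NUL-terminates. A self-contained sketch of that behavior (my_strlcpy is a hypothetical stand-in, not libbpf's exact implementation):

```c
#include <string.h>

/* strlcpy-style copy: write at most sz-1 bytes of src into dst and
 * always NUL-terminate (when sz > 0). Unlike strncpy(), the result is
 * guaranteed to be a valid C string and the tail is not zero-padded. */
static void my_strlcpy(char *dst, const char *src, size_t sz)
{
	size_t i;

	if (sz == 0)
		return;
	for (i = 0; i < sz - 1 && src[i]; i++)
		dst[i] = src[i];
	dst[i] = '\0';
}
```

With strncpy() a source longer than the buffer would leave dst unterminated; the kernel tolerates that for prog_name, but a terminated copy is the safer idiom.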
attr.license = ptr_to_u64(license);
if (insn_cnt > UINT_MAX)
	return libbpf_err(-E2BIG);
attr.insns = ptr_to_u64(insns);
attr.insn_cnt = (__u32)insn_cnt;
attach_prog_fd = OPTS_GET(opts, attach_prog_fd, 0);
attach_btf_obj_fd = OPTS_GET(opts, attach_btf_obj_fd, 0);
/* the BTF attach target is either a BPF program (attach_prog_fd) or a
 * kernel/module BTF object (attach_btf_obj_fd), never both
 */
if (attach_prog_fd && attach_btf_obj_fd)
	return libbpf_err(-EINVAL);
attr.attach_btf_id = OPTS_GET(opts, attach_btf_id, 0);
if (attach_prog_fd)
	attr.attach_prog_fd = attach_prog_fd;
else
	attr.attach_btf_obj_fd = attach_btf_obj_fd;
|
|
|
|
libbpf: Unify low-level BPF_PROG_LOAD APIs into bpf_prog_load()
Add a new unified OPTS-based low-level API for program loading,
bpf_prog_load() ([0]). bpf_prog_load() accepts few "mandatory"
parameters as input arguments (program type, name, license,
instructions) and all the other optional (as in not required to specify
for all types of BPF programs) fields into struct bpf_prog_load_opts.
This makes all the other non-extensible APIs variant for BPF_PROG_LOAD
obsolete and they are slated for deprecation in libbpf v0.7:
- bpf_load_program();
- bpf_load_program_xattr();
- bpf_verify_program().
Implementation-wise, internal helper libbpf__bpf_prog_load is refactored
to become a public bpf_prog_load() API. struct bpf_prog_load_params used
internally is replaced by public struct bpf_prog_load_opts.
Unfortunately, while conceptually all this is pretty straightforward,
the biggest complication comes from the already existing bpf_prog_load()
*high-level* API, which has nothing to do with BPF_PROG_LOAD command.
We try really hard to have a new API named bpf_prog_load(), though,
because it maps naturally to BPF_PROG_LOAD command.
For that, we rename old bpf_prog_load() into bpf_prog_load_deprecated()
and mark it as COMPAT_VERSION() for shared library users compiled
against old version of libbpf. Statically linked users and shared lib
users compiled against new version of libbpf headers will get "rerouted"
to bpf_prog_deprecated() through a macro helper that decides whether to
use new or old bpf_prog_load() based on number of input arguments (see
___libbpf_overload in libbpf_common.h).
To test that existing bpf_prog_load()-using code compiles and works
as expected, I've compiled and run the selftests as-is. I had to remove
(locally) the selftest/bpf/Makefile -Dbpf_prog_load=bpf_prog_test_load
hack because it was conflicting with the macro-based overload approach.
I don't expect anyone else to do something like this in practice,
though. This is a testing-specific way to replace bpf_prog_load() calls
with a special testing variant of it, which adds an extra prog_flags
value. After testing I kept this selftests hack, but ensured that it
uses the new bpf_prog_load_deprecated name.
This patch also marks bpf_prog_load() and bpf_prog_load_xattr() as deprecated.
bpf_object interface has to be used for working with struct bpf_program.
Libbpf doesn't support loading just a bpf_program.
The silver lining is that when we get to libbpf 1.0 all these
complications will be gone and we'll have one clean bpf_prog_load()
low-level API with no backwards-compatibility hackery surrounding it.
[0] Closes: https://github.com/libbpf/libbpf/issues/284
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211103220845.2676888-4-andrii@kernel.org
	log_buf = OPTS_GET(opts, log_buf, NULL);
	log_size = OPTS_GET(opts, log_size, 0);
	log_level = OPTS_GET(opts, log_level, 0);
	if (!!log_buf != !!log_size)
		return libbpf_err(-EINVAL);
	if (log_level > (4 | 2 | 1))
		return libbpf_err(-EINVAL);
	if (log_level && !log_buf)
		return libbpf_err(-EINVAL);
	func_info_rec_size = OPTS_GET(opts, func_info_rec_size, 0);
	func_info = OPTS_GET(opts, func_info, NULL);
	attr.func_info_rec_size = func_info_rec_size;
	attr.func_info = ptr_to_u64(func_info);
	attr.func_info_cnt = OPTS_GET(opts, func_info_cnt, 0);
	line_info_rec_size = OPTS_GET(opts, line_info_rec_size, 0);
	line_info = OPTS_GET(opts, line_info, NULL);
	attr.line_info_rec_size = line_info_rec_size;
	attr.line_info = ptr_to_u64(line_info);
	attr.line_info_cnt = OPTS_GET(opts, line_info_cnt, 0);
	attr.fd_array = ptr_to_u64(OPTS_GET(opts, fd_array, NULL));

	if (log_level) {
		attr.log_buf = ptr_to_u64(log_buf);
		attr.log_size = log_size;
		attr.log_level = log_level;
	}
	fd = sys_bpf_prog_load(&attr, sizeof(attr), attempts);
	if (fd >= 0)
		return fd;

	/* After bpf_prog_load, the kernel may modify certain attributes
	 * to give user space a hint how to deal with loading failure.
	 * Check to see whether we can make some changes and load again.
	 */
	while (errno == E2BIG && (!finfo || !linfo)) {
		if (!finfo && attr.func_info_cnt &&
		    attr.func_info_rec_size < func_info_rec_size) {
			/* try with corrected func info records */
			finfo = alloc_zero_tailing_info(func_info,
							attr.func_info_cnt,
							func_info_rec_size,
							attr.func_info_rec_size);
			if (!finfo) {
				errno = E2BIG;
				goto done;
			}

			attr.func_info = ptr_to_u64(finfo);
			attr.func_info_rec_size = func_info_rec_size;
		} else if (!linfo && attr.line_info_cnt &&
			   attr.line_info_rec_size < line_info_rec_size) {
			linfo = alloc_zero_tailing_info(line_info,
							attr.line_info_cnt,
							line_info_rec_size,
							attr.line_info_rec_size);
			if (!linfo) {
				errno = E2BIG;
				goto done;
			}

			attr.line_info = ptr_to_u64(linfo);
libbpf: Unify low-level BPF_PROG_LOAD APIs into bpf_prog_load()
Add a new unified OPTS-based low-level API for program loading,
bpf_prog_load() ([0]). bpf_prog_load() accepts few "mandatory"
parameters as input arguments (program type, name, license,
instructions) and all the other optional (as in not required to specify
for all types of BPF programs) fields into struct bpf_prog_load_opts.
This makes all the other non-extensible APIs variant for BPF_PROG_LOAD
obsolete and they are slated for deprecation in libbpf v0.7:
- bpf_load_program();
- bpf_load_program_xattr();
- bpf_verify_program().
Implementation-wise, internal helper libbpf__bpf_prog_load is refactored
to become a public bpf_prog_load() API. struct bpf_prog_load_params used
internally is replaced by public struct bpf_prog_load_opts.
Unfortunately, while conceptually all this is pretty straightforward,
the biggest complication comes from the already existing bpf_prog_load()
*high-level* API, which has nothing to do with BPF_PROG_LOAD command.
We try really hard to have a new API named bpf_prog_load(), though,
because it maps naturally to BPF_PROG_LOAD command.
For that, we rename old bpf_prog_load() into bpf_prog_load_deprecated()
and mark it as COMPAT_VERSION() for shared library users compiled
against old version of libbpf. Statically linked users and shared lib
users compiled against new version of libbpf headers will get "rerouted"
to bpf_prog_deprecated() through a macro helper that decides whether to
use new or old bpf_prog_load() based on number of input arguments (see
___libbpf_overload in libbpf_common.h).
To test that existing bpf_prog_load()-using code compiles and works as
expected, I've compiled and run the selftests as is. I had to remove
(locally) the selftest/bpf/Makefile -Dbpf_prog_load=bpf_prog_test_load
hack because it was conflicting with the macro-based overload approach.
I don't expect anyone else to do something like this in practice,
though. This is a testing-specific way to replace bpf_prog_load() calls
with a special testing variant of it, which adds an extra prog_flags
value. After testing I kept this selftests hack, but ensured that we
use the new bpf_prog_load_deprecated name for it.
This patch also marks bpf_prog_load() and bpf_prog_load_xattr() as deprecated.
The bpf_object interface has to be used for working with struct bpf_program.
Libbpf doesn't support loading just a bpf_program.
The silver lining is that when we get to libbpf 1.0, all these
complications will be gone and we'll have one clean bpf_prog_load()
low-level API with no backwards compatibility hackery surrounding it.
[0] Closes: https://github.com/libbpf/libbpf/issues/284
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211103220845.2676888-4-andrii@kernel.org
2021-11-03 22:08:36 +00:00
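The argument-count dispatch the message refers to (___libbpf_overload in libbpf_common.h) can be sketched in a self-contained way; the names ___pick, api, api2, and api3 below are illustrative, not libbpf's:

```c
#include <assert.h>

/* Hypothetical sketch of argument-count-based macro overloading, the
 * trick ___libbpf_overload relies on: the trailing implementation
 * names shift depending on how many arguments the caller passed, so
 * the 4th macro argument always names the matching implementation. */
#define ___pick(_1, _2, _3, NAME, ...) NAME
#define api(...) ___pick(__VA_ARGS__, api3, api2, ___unused)(__VA_ARGS__)

static int api2(int a, int b)        { return a + b; }
static int api3(int a, int b, int c) { return a + b + c; }
```

A call site `api(x, y)` expands to `api2(x, y)`, while `api(x, y, z)` expands to `api3(x, y, z)`, which is how both the old and new bpf_prog_load() signatures can coexist behind one name.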
|
|
|
attr.line_info_rec_size = line_info_rec_size;
|
2018-12-08 00:42:31 +00:00
|
|
|
} else {
|
|
|
|
break;
|
2018-11-19 23:29:16 +00:00
|
|
|
}
|
|
|
|
|
libbpf: Unify low-level BPF_PROG_LOAD APIs into bpf_prog_load()
2021-11-03 22:08:36 +00:00
|
|
|
fd = sys_bpf_prog_load(&attr, sizeof(attr), attempts);
|
2018-12-08 00:42:29 +00:00
|
|
|
if (fd >= 0)
|
2018-11-19 23:29:16 +00:00
|
|
|
goto done;
|
|
|
|
}
|
|
|
|
|
2021-12-09 19:38:29 +00:00
|
|
|
if (log_level == 0 && log_buf) {
|
|
|
|
/* log_level == 0 with non-NULL log_buf requires retrying on error
|
|
|
|
* with log_level == 1 and log_buf/log_buf_size set, to get details of
|
|
|
|
* failure
|
|
|
|
*/
|
|
|
|
attr.log_buf = ptr_to_u64(log_buf);
|
|
|
|
attr.log_size = log_size;
|
|
|
|
attr.log_level = 1;
|
2020-12-03 20:46:31 +00:00
|
|
|
|
2021-12-09 19:38:29 +00:00
|
|
|
fd = sys_bpf_prog_load(&attr, sizeof(attr), attempts);
|
|
|
|
}
|
2018-11-19 23:29:16 +00:00
|
|
|
done:
|
2021-05-25 03:59:33 +00:00
|
|
|
/* free() doesn't affect errno, so we don't need to restore it */
|
2018-11-19 23:29:16 +00:00
|
|
|
free(finfo);
|
2018-12-08 00:42:31 +00:00
|
|
|
free(linfo);
|
2021-05-25 03:59:33 +00:00
|
|
|
return libbpf_err_errno(fd);
|
2015-07-01 02:14:06 +00:00
|
|
|
}
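The two-pass logic above (a quiet first attempt, then a retry with log_level == 1 and the caller's log buffer on failure) can be sketched generically; try_load and load_with_retry are illustrative names, not libbpf's:

```c
/* Hypothetical sketch of the retry pattern used above: attempt the
 * operation silently first and, only if it fails, repeat it with
 * logging enabled so the caller's buffer receives failure details. */
static int try_load(int log_level, char *log_buf, unsigned int log_size)
{
	if (log_level == 1 && log_buf && log_size > 0)
		log_buf[0] = 'E';	/* simulate verifier log output */
	return -1;			/* simulate a load that always fails */
}

static int load_with_retry(char *log_buf, unsigned int log_size)
{
	int fd = try_load(0, 0, 0);	/* first try: no logging */

	if (fd < 0 && log_buf)		/* on failure, retry with logging */
		fd = try_load(1, log_buf, log_size);
	return fd;
}
```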
|
2015-11-24 13:36:08 +00:00
|
|
|
|
2017-02-09 23:21:39 +00:00
|
|
|
int bpf_map_update_elem(int fd, const void *key, const void *value,
|
2016-12-09 02:46:15 +00:00
|
|
|
__u64 flags)
|
2015-11-24 13:36:08 +00:00
|
|
|
{
|
|
|
|
union bpf_attr attr;
|
2021-05-25 03:59:33 +00:00
|
|
|
int ret;
|
2015-11-24 13:36:08 +00:00
|
|
|
|
2019-02-13 18:25:53 +00:00
|
|
|
memset(&attr, 0, sizeof(attr));
|
2015-11-24 13:36:08 +00:00
|
|
|
attr.map_fd = fd;
|
|
|
|
attr.key = ptr_to_u64(key);
|
|
|
|
attr.value = ptr_to_u64(value);
|
|
|
|
attr.flags = flags;
|
|
|
|
|
2021-05-25 03:59:33 +00:00
|
|
|
ret = sys_bpf(BPF_MAP_UPDATE_ELEM, &attr, sizeof(attr));
|
|
|
|
return libbpf_err_errno(ret);
|
2015-11-24 13:36:08 +00:00
|
|
|
}
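Each of these wrappers funnels user pointers into union bpf_attr through ptr_to_u64(), which is defined earlier in bpf.c; a minimal standalone version of that cast looks like this:

```c
#include <stdint.h>

typedef uint64_t __u64;

/* Standalone version of the ptr_to_u64() helper defined earlier in
 * bpf.c: union bpf_attr carries user pointers as __u64 so the attr
 * layout is identical for 32-bit and 64-bit userspace. Casting through
 * unsigned long first avoids a width-mismatch warning when converting
 * a 32-bit pointer to a 64-bit integer. */
static inline __u64 ptr_to_u64(const void *ptr)
{
	return (__u64)(unsigned long)ptr;
}
```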
|
2016-11-26 07:03:25 +00:00
|
|
|
|
2017-02-09 23:21:40 +00:00
|
|
|
int bpf_map_lookup_elem(int fd, const void *key, void *value)
|
2016-11-26 07:03:25 +00:00
|
|
|
{
|
|
|
|
union bpf_attr attr;
|
2021-05-25 03:59:33 +00:00
|
|
|
int ret;
|
2016-11-26 07:03:25 +00:00
|
|
|
|
2019-02-13 18:25:53 +00:00
|
|
|
memset(&attr, 0, sizeof(attr));
|
2016-11-26 07:03:25 +00:00
|
|
|
attr.map_fd = fd;
|
|
|
|
attr.key = ptr_to_u64(key);
|
|
|
|
attr.value = ptr_to_u64(value);
|
|
|
|
|
2021-05-25 03:59:33 +00:00
|
|
|
ret = sys_bpf(BPF_MAP_LOOKUP_ELEM, &attr, sizeof(attr));
|
|
|
|
return libbpf_err_errno(ret);
|
2016-11-26 07:03:25 +00:00
|
|
|
}
|
|
|
|
|
2019-01-31 23:40:11 +00:00
|
|
|
int bpf_map_lookup_elem_flags(int fd, const void *key, void *value, __u64 flags)
|
|
|
|
{
|
|
|
|
union bpf_attr attr;
|
2021-05-25 03:59:33 +00:00
|
|
|
int ret;
|
2019-01-31 23:40:11 +00:00
|
|
|
|
2019-02-13 18:25:53 +00:00
|
|
|
memset(&attr, 0, sizeof(attr));
|
2019-01-31 23:40:11 +00:00
|
|
|
attr.map_fd = fd;
|
|
|
|
attr.key = ptr_to_u64(key);
|
|
|
|
attr.value = ptr_to_u64(value);
|
|
|
|
attr.flags = flags;
|
|
|
|
|
2021-05-25 03:59:33 +00:00
|
|
|
ret = sys_bpf(BPF_MAP_LOOKUP_ELEM, &attr, sizeof(attr));
|
|
|
|
return libbpf_err_errno(ret);
|
2019-01-31 23:40:11 +00:00
|
|
|
}
|
|
|
|
|
2018-10-18 13:16:41 +00:00
|
|
|
int bpf_map_lookup_and_delete_elem(int fd, const void *key, void *value)
|
|
|
|
{
|
|
|
|
union bpf_attr attr;
|
2021-05-25 03:59:33 +00:00
|
|
|
int ret;
|
2018-10-18 13:16:41 +00:00
|
|
|
|
2019-02-13 18:25:53 +00:00
|
|
|
memset(&attr, 0, sizeof(attr));
|
2018-10-18 13:16:41 +00:00
|
|
|
attr.map_fd = fd;
|
|
|
|
attr.key = ptr_to_u64(key);
|
|
|
|
attr.value = ptr_to_u64(value);
|
|
|
|
|
2021-05-25 03:59:33 +00:00
|
|
|
ret = sys_bpf(BPF_MAP_LOOKUP_AND_DELETE_ELEM, &attr, sizeof(attr));
|
|
|
|
return libbpf_err_errno(ret);
|
2018-10-18 13:16:41 +00:00
|
|
|
}
|
|
|
|
|
2021-05-11 21:00:05 +00:00
|
|
|
int bpf_map_lookup_and_delete_elem_flags(int fd, const void *key, void *value, __u64 flags)
|
|
|
|
{
|
|
|
|
union bpf_attr attr;
|
2021-11-04 17:13:54 +00:00
|
|
|
int ret;
|
2021-05-11 21:00:05 +00:00
|
|
|
|
|
|
|
memset(&attr, 0, sizeof(attr));
|
|
|
|
attr.map_fd = fd;
|
|
|
|
attr.key = ptr_to_u64(key);
|
|
|
|
attr.value = ptr_to_u64(value);
|
|
|
|
attr.flags = flags;
|
|
|
|
|
2021-11-04 17:13:54 +00:00
|
|
|
ret = sys_bpf(BPF_MAP_LOOKUP_AND_DELETE_ELEM, &attr, sizeof(attr));
|
|
|
|
return libbpf_err_errno(ret);
|
2021-05-11 21:00:05 +00:00
|
|
|
}
|
|
|
|
|
2017-02-09 23:21:41 +00:00
|
|
|
int bpf_map_delete_elem(int fd, const void *key)
|
2016-11-26 07:03:25 +00:00
|
|
|
{
|
|
|
|
union bpf_attr attr;
|
2021-05-25 03:59:33 +00:00
|
|
|
int ret;
|
2016-11-26 07:03:25 +00:00
|
|
|
|
2019-02-13 18:25:53 +00:00
|
|
|
memset(&attr, 0, sizeof(attr));
|
2016-11-26 07:03:25 +00:00
|
|
|
attr.map_fd = fd;
|
|
|
|
attr.key = ptr_to_u64(key);
|
|
|
|
|
2021-05-25 03:59:33 +00:00
|
|
|
ret = sys_bpf(BPF_MAP_DELETE_ELEM, &attr, sizeof(attr));
|
|
|
|
return libbpf_err_errno(ret);
|
2016-11-26 07:03:25 +00:00
|
|
|
}
|
|
|
|
|
2022-05-12 22:07:12 +00:00
|
|
|
int bpf_map_delete_elem_flags(int fd, const void *key, __u64 flags)
|
|
|
|
{
|
|
|
|
union bpf_attr attr;
|
|
|
|
int ret;
|
|
|
|
|
|
|
|
memset(&attr, 0, sizeof(attr));
|
|
|
|
attr.map_fd = fd;
|
|
|
|
attr.key = ptr_to_u64(key);
|
|
|
|
attr.flags = flags;
|
|
|
|
|
|
|
|
ret = sys_bpf(BPF_MAP_DELETE_ELEM, &attr, sizeof(attr));
|
|
|
|
return libbpf_err_errno(ret);
|
|
|
|
}
|
|
|
|
|
2017-02-09 23:21:42 +00:00
|
|
|
int bpf_map_get_next_key(int fd, const void *key, void *next_key)
|
2016-11-26 07:03:25 +00:00
|
|
|
{
|
|
|
|
union bpf_attr attr;
|
2021-05-25 03:59:33 +00:00
|
|
|
int ret;
|
2016-11-26 07:03:25 +00:00
|
|
|
|
2019-02-13 18:25:53 +00:00
|
|
|
memset(&attr, 0, sizeof(attr));
|
2016-11-26 07:03:25 +00:00
|
|
|
attr.map_fd = fd;
|
|
|
|
attr.key = ptr_to_u64(key);
|
|
|
|
attr.next_key = ptr_to_u64(next_key);
|
|
|
|
|
2021-05-25 03:59:33 +00:00
|
|
|
ret = sys_bpf(BPF_MAP_GET_NEXT_KEY, &attr, sizeof(attr));
|
|
|
|
return libbpf_err_errno(ret);
|
2016-11-26 07:03:25 +00:00
|
|
|
}
|
|
|
|
|
bpf, libbpf: support global data/bss/rodata sections
This work adds BPF loader support for global data sections
to libbpf. This allows writing BPF programs in a more natural,
C-like way by making it possible to define global variables and
const data.
Back at LPC 2018 [0] we presented a first prototype which
implemented support for global data sections by extending BPF
syscall where union bpf_attr would get additional memory/size
pair for each section passed during prog load in order to later
add this base address into the ldimm64 instruction along with
the user provided offset when accessing a variable. Consensus
from LPC was that for proper upstream support, it would be
more desirable to use maps instead of bpf_attr extension as
this would allow for introspection of these sections as well
as potential live updates of their content. This work follows
this path by taking the following steps from loader side:
1) In bpf_object__elf_collect() step we pick up ".data",
".rodata", and ".bss" section information.
2) If present, in bpf_object__init_internal_map() we add
maps to the obj's map array corresponding to each of the
present sections. Given that section size and access
properties can differ, a single-entry array map is
created with a value size corresponding to the
ELF section size of .data, .bss or .rodata. These
internal maps are integrated into the normal map
handling of libbpf such that when user traverses all
obj maps, they can be differentiated from user-created
ones via bpf_map__is_internal(). In later steps when
we actually create these maps in the kernel via
bpf_object__create_maps(), then for .data and .rodata
sections their content is copied into the map through
bpf_map_update_elem(). For .bss this is not necessary
since the array map is already zero-initialized by default.
Additionally, for .rodata the map is frozen as read-only
after setup, such that neither from program nor syscall
side writes would be possible.
3) In bpf_program__collect_reloc() step, we record the
corresponding map, insn index, and relocation type for
the global data.
4) And last but not least in the actual relocation step in
bpf_program__relocate(), we mark the ldimm64 instruction
with src_reg = BPF_PSEUDO_MAP_VALUE where in the first
imm field the map's file descriptor is stored as similarly
done as in BPF_PSEUDO_MAP_FD, and in the second imm field
(as ldimm64 is 2-insn wide) we store the access offset
into the section. Given these maps have only single element
ldimm64's off remains zero in both parts.
5) On kernel side, this special marked BPF_PSEUDO_MAP_VALUE
load will then store the actual target address in order
to have a 'map-lookup'-free access. That is, the actual
map value base address + offset. The destination register
in the verifier will then be marked as PTR_TO_MAP_VALUE,
containing the fixed offset as reg->off and backing BPF
map as reg->map_ptr. Meaning, it's treated as any other
normal map value from verification side, only with
efficient, direct value access instead of actual call to
map lookup helper as in the typical case.
Currently, only support for static global variables has been
added, and libbpf rejects non-static global variables from
loading. This restriction can be lifted once we have proper semantics
for how BPF will treat multi-object BPF loads. From BTF side,
libbpf will set the value type id of the types corresponding
to the ".bss", ".data" and ".rodata" names which LLVM will
emit without the object name prefix. The key type will be
left as zero, thus making use of the key-less BTF option in
array maps.
A simple example dump of a program using global vars in each
section:
# bpftool prog
[...]
6784: sched_cls name load_static_dat tag a7e1291567277844 gpl
loaded_at 2019-03-11T15:39:34+0000 uid 0
xlated 1776B jited 993B memlock 4096B map_ids 2238,2237,2235,2236,2239,2240
# bpftool map show id 2237
2237: array name test_glo.bss flags 0x0
key 4B value 64B max_entries 1 memlock 4096B
# bpftool map show id 2235
2235: array name test_glo.data flags 0x0
key 4B value 64B max_entries 1 memlock 4096B
# bpftool map show id 2236
2236: array name test_glo.rodata flags 0x80
key 4B value 96B max_entries 1 memlock 4096B
# bpftool prog dump xlated id 6784
int load_static_data(struct __sk_buff * skb):
; int load_static_data(struct __sk_buff *skb)
0: (b7) r6 = 0
; test_reloc(number, 0, &num0);
1: (63) *(u32 *)(r10 -4) = r6
2: (bf) r2 = r10
; int load_static_data(struct __sk_buff *skb)
3: (07) r2 += -4
; test_reloc(number, 0, &num0);
4: (18) r1 = map[id:2238]
6: (18) r3 = map[id:2237][0]+0 <-- direct addr in .bss area
8: (b7) r4 = 0
9: (85) call array_map_update_elem#100464
10: (b7) r1 = 1
; test_reloc(number, 1, &num1);
[...]
; test_reloc(string, 2, str2);
120: (18) r8 = map[id:2237][0]+16 <-- same here at offset +16
122: (18) r1 = map[id:2239]
124: (18) r3 = map[id:2237][0]+16
126: (b7) r4 = 0
127: (85) call array_map_update_elem#100464
128: (b7) r1 = 120
; str1[5] = 'x';
129: (73) *(u8 *)(r9 +5) = r1
; test_reloc(string, 3, str1);
130: (b7) r1 = 3
131: (63) *(u32 *)(r10 -4) = r1
132: (b7) r9 = 3
133: (bf) r2 = r10
; int load_static_data(struct __sk_buff *skb)
134: (07) r2 += -4
; test_reloc(string, 3, str1);
135: (18) r1 = map[id:2239]
137: (18) r3 = map[id:2235][0]+16 <-- direct addr in .data area
139: (b7) r4 = 0
140: (85) call array_map_update_elem#100464
141: (b7) r1 = 111
; __builtin_memcpy(&str2[2], "hello", sizeof("hello"));
142: (73) *(u8 *)(r8 +6) = r1 <-- further access based on .bss data
143: (b7) r1 = 108
144: (73) *(u8 *)(r8 +5) = r1
[...]
For Cilium use-case in particular, this enables migrating configuration
constants from Cilium daemon's generated header defines into global
data sections such that expensive runtime recompilations with LLVM can
be avoided altogether. Instead, the ELF file becomes effectively a
"template", meaning, it is compiled only once (!) and the Cilium daemon
will then rewrite relevant configuration data from the ELF's .data or
.rodata sections directly instead of recompiling the program. The
updated ELF is then loaded into the kernel and atomically replaces
the existing program in the networking datapath. More info in [0].
Based upon recent fix in LLVM, commit c0db6b6bd444 ("[BPF] Don't fail
for static variables").
[0] LPC 2018, BPF track, "ELF relocation for static data in BPF",
http://vger.kernel.org/lpc-bpf2018.html#session-3
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-04-09 21:20:13 +00:00
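As a plain-C illustration (compiled and run as ordinary userspace C, not as a BPF program) of which ELF section each kind of global lands in, and hence which internal map libbpf would create for it:

```c
/* Plain C illustration of the section placement the commit message
 * describes: initialized globals go to .data (writable map),
 * zero-initialized ones to .bss (zero-filled map), and const ones to
 * .rodata (map frozen read-only after setup). */
static int cfg_scale = 2;	/* .data   */
static int counter;		/* .bss    */
static const int base = 100;	/* .rodata */

static int tick(void)
{
	counter += cfg_scale;	/* writes land in the .data/.bss-backed state */
	return base + counter;	/* reads from the read-only .rodata constant */
}
```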
|
|
|
int bpf_map_freeze(int fd)
|
|
|
|
{
|
|
|
|
union bpf_attr attr;
|
2021-05-25 03:59:33 +00:00
|
|
|
int ret;
|
bpf, libbpf: support global data/bss/rodata sections
2019-04-09 21:20:13 +00:00
|
|
|
|
|
|
|
memset(&attr, 0, sizeof(attr));
|
|
|
|
attr.map_fd = fd;
|
|
|
|
|
2021-05-25 03:59:33 +00:00
|
|
|
ret = sys_bpf(BPF_MAP_FREEZE, &attr, sizeof(attr));
|
|
|
|
return libbpf_err_errno(ret);
|
bpf, libbpf: support global data/bss/rodata sections
2019-04-09 21:20:13 +00:00
|
|
|
}
|
|
|
|
|
2020-01-15 18:43:06 +00:00
|
|
|
static int bpf_map_batch_common(int cmd, int fd, void *in_batch,
|
|
|
|
void *out_batch, void *keys, void *values,
|
|
|
|
__u32 *count,
|
|
|
|
const struct bpf_map_batch_opts *opts)
|
|
|
|
{
|
2020-01-16 04:59:18 +00:00
|
|
|
union bpf_attr attr;
|
2020-01-15 18:43:06 +00:00
|
|
|
int ret;
|
|
|
|
|
|
|
|
if (!OPTS_VALID(opts, bpf_map_batch_opts))
|
2021-05-25 03:59:33 +00:00
|
|
|
return libbpf_err(-EINVAL);
|
2020-01-15 18:43:06 +00:00
|
|
|
|
|
|
|
memset(&attr, 0, sizeof(attr));
|
|
|
|
attr.batch.map_fd = fd;
|
|
|
|
attr.batch.in_batch = ptr_to_u64(in_batch);
|
|
|
|
attr.batch.out_batch = ptr_to_u64(out_batch);
|
|
|
|
attr.batch.keys = ptr_to_u64(keys);
|
|
|
|
attr.batch.values = ptr_to_u64(values);
|
|
|
|
attr.batch.count = *count;
|
|
|
|
attr.batch.elem_flags = OPTS_GET(opts, elem_flags, 0);
|
|
|
|
attr.batch.flags = OPTS_GET(opts, flags, 0);
|
|
|
|
|
|
|
|
ret = sys_bpf(cmd, &attr, sizeof(attr));
|
|
|
|
*count = attr.batch.count;
|
|
|
|
|
2021-05-25 03:59:33 +00:00
|
|
|
return libbpf_err_errno(ret);
|
2020-01-15 18:43:06 +00:00
|
|
|
}
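The OPTS_VALID/OPTS_GET calls above come from libbpf's size-versioned options convention; a hypothetical, simplified reimplementation of the idea (demo_opts, demo_opts_has, and demo_opts_get are illustrative names, not libbpf's) looks like this:

```c
#include <stddef.h>

/* Hypothetical sketch of the OPTS convention used by
 * bpf_map_batch_common(): the caller records sizeof(its opts struct)
 * in .sz, so a getter can fall back to a default when the caller was
 * built against an older, shorter version of the struct that does not
 * yet contain the requested field. */
struct demo_opts {
	size_t sz;			/* set to sizeof(struct demo_opts) */
	unsigned long long elem_flags;
	unsigned long long flags;
};

/* Field is present only if the caller's struct was large enough to hold it. */
#define demo_opts_has(o, field) \
	((o) && (o)->sz >= offsetof(struct demo_opts, field) + sizeof((o)->field))
#define demo_opts_get(o, field, def) \
	(demo_opts_has(o, field) ? (o)->field : (def))
```

This is why new optional fields can be appended to an opts struct without breaking binaries compiled against the older layout.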

int bpf_map_delete_batch(int fd, const void *keys, __u32 *count,
			 const struct bpf_map_batch_opts *opts)
{
	return bpf_map_batch_common(BPF_MAP_DELETE_BATCH, fd, NULL,
				    NULL, (void *)keys, NULL, count, opts);
}

int bpf_map_lookup_batch(int fd, void *in_batch, void *out_batch, void *keys,
			 void *values, __u32 *count,
			 const struct bpf_map_batch_opts *opts)
{
	return bpf_map_batch_common(BPF_MAP_LOOKUP_BATCH, fd, in_batch,
				    out_batch, keys, values, count, opts);
}

int bpf_map_lookup_and_delete_batch(int fd, void *in_batch, void *out_batch,
				    void *keys, void *values, __u32 *count,
				    const struct bpf_map_batch_opts *opts)
{
	return bpf_map_batch_common(BPF_MAP_LOOKUP_AND_DELETE_BATCH,
				    fd, in_batch, out_batch, keys, values,
				    count, opts);
}

int bpf_map_update_batch(int fd, const void *keys, const void *values, __u32 *count,
			 const struct bpf_map_batch_opts *opts)
{
	return bpf_map_batch_common(BPF_MAP_UPDATE_BATCH, fd, NULL, NULL,
				    (void *)keys, (void *)values, count, opts);
}

int bpf_obj_pin(int fd, const char *pathname)
{
	union bpf_attr attr;
	int ret;

	memset(&attr, 0, sizeof(attr));
	attr.pathname = ptr_to_u64((void *)pathname);
	attr.bpf_fd = fd;

	ret = sys_bpf(BPF_OBJ_PIN, &attr, sizeof(attr));
	return libbpf_err_errno(ret);
}

int bpf_obj_get(const char *pathname)
{
	return bpf_obj_get_opts(pathname, NULL);
}

int bpf_obj_get_opts(const char *pathname, const struct bpf_obj_get_opts *opts)
{
	union bpf_attr attr;
	int fd;

	if (!OPTS_VALID(opts, bpf_obj_get_opts))
		return libbpf_err(-EINVAL);

	memset(&attr, 0, sizeof(attr));
	attr.pathname = ptr_to_u64((void *)pathname);
	attr.file_flags = OPTS_GET(opts, file_flags, 0);

	/* sys_bpf_fd() guarantees the returned fd is never 0, 1, or 2:
	 * libbpf relies on BPF fds being non-zero, and some environments
	 * reset stdin/stdout/stderr when they notice an invalid fd at those
	 * numbers. See
	 * https://lore.kernel.org/bpf/20211028063501.2239335-5-memxor@gmail.com
	 */
	fd = sys_bpf_fd(BPF_OBJ_GET, &attr, sizeof(attr));
	return libbpf_err_errno(fd);
}

int bpf_prog_attach(int prog_fd, int target_fd, enum bpf_attach_type type,
		    unsigned int flags)
{
	DECLARE_LIBBPF_OPTS(bpf_prog_attach_opts, opts,
		.flags = flags,
	);

	return bpf_prog_attach_opts(prog_fd, target_fd, type, &opts);
}

int bpf_prog_attach_opts(int prog_fd, int target_fd,
			 enum bpf_attach_type type,
			 const struct bpf_prog_attach_opts *opts)
{
	union bpf_attr attr;
	int ret;

	if (!OPTS_VALID(opts, bpf_prog_attach_opts))
		return libbpf_err(-EINVAL);

	memset(&attr, 0, sizeof(attr));
	attr.target_fd = target_fd;
	attr.attach_bpf_fd = prog_fd;
	attr.attach_type = type;
	attr.attach_flags = OPTS_GET(opts, flags, 0);
	attr.replace_bpf_fd = OPTS_GET(opts, replace_prog_fd, 0);

	ret = sys_bpf(BPF_PROG_ATTACH, &attr, sizeof(attr));
	return libbpf_err_errno(ret);
}

__attribute__((alias("bpf_prog_attach_opts")))
int bpf_prog_attach_xattr(int prog_fd, int target_fd,
			  enum bpf_attach_type type,
			  const struct bpf_prog_attach_opts *opts);

int bpf_prog_detach(int target_fd, enum bpf_attach_type type)
{
	union bpf_attr attr;
	int ret;

	memset(&attr, 0, sizeof(attr));
	attr.target_fd = target_fd;
	attr.attach_type = type;

	ret = sys_bpf(BPF_PROG_DETACH, &attr, sizeof(attr));
	return libbpf_err_errno(ret);
}

int bpf_prog_detach2(int prog_fd, int target_fd, enum bpf_attach_type type)
{
	union bpf_attr attr;
	int ret;

	memset(&attr, 0, sizeof(attr));
	attr.target_fd = target_fd;
	attr.attach_bpf_fd = prog_fd;
	attr.attach_type = type;

	ret = sys_bpf(BPF_PROG_DETACH, &attr, sizeof(attr));
	return libbpf_err_errno(ret);
}

int bpf_link_create(int prog_fd, int target_fd,
		    enum bpf_attach_type attach_type,
		    const struct bpf_link_create_opts *opts)
{
	__u32 target_btf_id, iter_info_len;
	union bpf_attr attr;
	int fd, err;

	if (!OPTS_VALID(opts, bpf_link_create_opts))
		return libbpf_err(-EINVAL);

	iter_info_len = OPTS_GET(opts, iter_info_len, 0);
	target_btf_id = OPTS_GET(opts, target_btf_id, 0);

	/* validate we don't have unexpected combinations of non-zero fields */
	if (iter_info_len || target_btf_id) {
		if (iter_info_len && target_btf_id)
			return libbpf_err(-EINVAL);
		if (!OPTS_ZEROED(opts, target_btf_id))
			return libbpf_err(-EINVAL);
	}

	memset(&attr, 0, sizeof(attr));
	attr.link_create.prog_fd = prog_fd;
	attr.link_create.target_fd = target_fd;
	attr.link_create.attach_type = attach_type;
	attr.link_create.flags = OPTS_GET(opts, flags, 0);

	if (target_btf_id) {
		attr.link_create.target_btf_id = target_btf_id;
		goto proceed;
	}

	switch (attach_type) {
	case BPF_TRACE_ITER:
		attr.link_create.iter_info = ptr_to_u64(OPTS_GET(opts, iter_info, (void *)0));
		attr.link_create.iter_info_len = iter_info_len;
		break;
	case BPF_PERF_EVENT:
		attr.link_create.perf_event.bpf_cookie = OPTS_GET(opts, perf_event.bpf_cookie, 0);
		if (!OPTS_ZEROED(opts, perf_event))
			return libbpf_err(-EINVAL);
		break;
	case BPF_TRACE_KPROBE_MULTI:
		attr.link_create.kprobe_multi.flags = OPTS_GET(opts, kprobe_multi.flags, 0);
		attr.link_create.kprobe_multi.cnt = OPTS_GET(opts, kprobe_multi.cnt, 0);
		attr.link_create.kprobe_multi.syms = ptr_to_u64(OPTS_GET(opts, kprobe_multi.syms, 0));
		attr.link_create.kprobe_multi.addrs = ptr_to_u64(OPTS_GET(opts, kprobe_multi.addrs, 0));
		attr.link_create.kprobe_multi.cookies = ptr_to_u64(OPTS_GET(opts, kprobe_multi.cookies, 0));
		if (!OPTS_ZEROED(opts, kprobe_multi))
			return libbpf_err(-EINVAL);
		break;
	case BPF_TRACE_FENTRY:
	case BPF_TRACE_FEXIT:
	case BPF_MODIFY_RETURN:
	case BPF_LSM_MAC:
		attr.link_create.tracing.cookie = OPTS_GET(opts, tracing.cookie, 0);
		if (!OPTS_ZEROED(opts, tracing))
			return libbpf_err(-EINVAL);
		break;
	default:
		if (!OPTS_ZEROED(opts, flags))
			return libbpf_err(-EINVAL);
		break;
	}
proceed:
	fd = sys_bpf_fd(BPF_LINK_CREATE, &attr, sizeof(attr));
	if (fd >= 0)
		return fd;
	/* we'll get EINVAL if LINK_CREATE doesn't support attaching fentry
	 * and other similar programs
	 */
	err = -errno;
	if (err != -EINVAL)
		return libbpf_err(err);

	/* if user used features not supported by
	 * BPF_RAW_TRACEPOINT_OPEN command, then just give up immediately
	 */
	if (attr.link_create.target_fd || attr.link_create.target_btf_id)
		return libbpf_err(err);
	if (!OPTS_ZEROED(opts, sz))
		return libbpf_err(err);

	/* otherwise, for few select kinds of programs that can be
	 * attached using BPF_RAW_TRACEPOINT_OPEN command, try that as
	 * a fallback for older kernels
	 */
	switch (attach_type) {
	case BPF_TRACE_RAW_TP:
	case BPF_LSM_MAC:
	case BPF_TRACE_FENTRY:
	case BPF_TRACE_FEXIT:
	case BPF_MODIFY_RETURN:
		return bpf_raw_tracepoint_open(NULL, prog_fd);
	default:
		return libbpf_err(err);
	}
}

int bpf_link_detach(int link_fd)
{
	union bpf_attr attr;
	int ret;

	memset(&attr, 0, sizeof(attr));
	attr.link_detach.link_fd = link_fd;

	ret = sys_bpf(BPF_LINK_DETACH, &attr, sizeof(attr));
	return libbpf_err_errno(ret);
}

int bpf_link_update(int link_fd, int new_prog_fd,
		    const struct bpf_link_update_opts *opts)
{
	union bpf_attr attr;
	int ret;

	if (!OPTS_VALID(opts, bpf_link_update_opts))
		return libbpf_err(-EINVAL);

	memset(&attr, 0, sizeof(attr));
	attr.link_update.link_fd = link_fd;
	attr.link_update.new_prog_fd = new_prog_fd;
	attr.link_update.flags = OPTS_GET(opts, flags, 0);
	attr.link_update.old_prog_fd = OPTS_GET(opts, old_prog_fd, 0);

	ret = sys_bpf(BPF_LINK_UPDATE, &attr, sizeof(attr));
	return libbpf_err_errno(ret);
}

int bpf_iter_create(int link_fd)
{
	union bpf_attr attr;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.iter_create.link_fd = link_fd;

	fd = sys_bpf_fd(BPF_ITER_CREATE, &attr, sizeof(attr));
	return libbpf_err_errno(fd);
}

int bpf_prog_query_opts(int target_fd,
			enum bpf_attach_type type,
			struct bpf_prog_query_opts *opts)
{
	union bpf_attr attr;
	int ret;

	if (!OPTS_VALID(opts, bpf_prog_query_opts))
		return libbpf_err(-EINVAL);

	memset(&attr, 0, sizeof(attr));

	attr.query.target_fd = target_fd;
	attr.query.attach_type = type;
	attr.query.query_flags = OPTS_GET(opts, query_flags, 0);
	attr.query.prog_cnt = OPTS_GET(opts, prog_cnt, 0);
	attr.query.prog_ids = ptr_to_u64(OPTS_GET(opts, prog_ids, NULL));
	attr.query.prog_attach_flags = ptr_to_u64(OPTS_GET(opts, prog_attach_flags, NULL));

	ret = sys_bpf(BPF_PROG_QUERY, &attr, sizeof(attr));

	OPTS_SET(opts, attach_flags, attr.query.attach_flags);
	OPTS_SET(opts, prog_cnt, attr.query.prog_cnt);

	return libbpf_err_errno(ret);
}

int bpf_prog_query(int target_fd, enum bpf_attach_type type, __u32 query_flags,
		   __u32 *attach_flags, __u32 *prog_ids, __u32 *prog_cnt)
{
	LIBBPF_OPTS(bpf_prog_query_opts, opts);
	int ret;

	opts.query_flags = query_flags;
	opts.prog_ids = prog_ids;
	opts.prog_cnt = *prog_cnt;

	ret = bpf_prog_query_opts(target_fd, type, &opts);

	if (attach_flags)
		*attach_flags = opts.attach_flags;
	*prog_cnt = opts.prog_cnt;

	return libbpf_err_errno(ret);
}

int bpf_prog_test_run_opts(int prog_fd, struct bpf_test_run_opts *opts)
{
	union bpf_attr attr;
	int ret;

	if (!OPTS_VALID(opts, bpf_test_run_opts))
		return libbpf_err(-EINVAL);

	memset(&attr, 0, sizeof(attr));
	attr.test.prog_fd = prog_fd;
	attr.test.batch_size = OPTS_GET(opts, batch_size, 0);
	attr.test.cpu = OPTS_GET(opts, cpu, 0);
	attr.test.flags = OPTS_GET(opts, flags, 0);
	attr.test.repeat = OPTS_GET(opts, repeat, 0);
	attr.test.duration = OPTS_GET(opts, duration, 0);
	attr.test.ctx_size_in = OPTS_GET(opts, ctx_size_in, 0);
	attr.test.ctx_size_out = OPTS_GET(opts, ctx_size_out, 0);
	attr.test.data_size_in = OPTS_GET(opts, data_size_in, 0);
	attr.test.data_size_out = OPTS_GET(opts, data_size_out, 0);
	attr.test.ctx_in = ptr_to_u64(OPTS_GET(opts, ctx_in, NULL));
	attr.test.ctx_out = ptr_to_u64(OPTS_GET(opts, ctx_out, NULL));
	attr.test.data_in = ptr_to_u64(OPTS_GET(opts, data_in, NULL));
	attr.test.data_out = ptr_to_u64(OPTS_GET(opts, data_out, NULL));

	ret = sys_bpf(BPF_PROG_TEST_RUN, &attr, sizeof(attr));

	OPTS_SET(opts, data_size_out, attr.test.data_size_out);
	OPTS_SET(opts, ctx_size_out, attr.test.ctx_size_out);
	OPTS_SET(opts, duration, attr.test.duration);
	OPTS_SET(opts, retval, attr.test.retval);

	return libbpf_err_errno(ret);
}

static int bpf_obj_get_next_id(__u32 start_id, __u32 *next_id, int cmd)
{
	union bpf_attr attr;
	int err;

	memset(&attr, 0, sizeof(attr));
	attr.start_id = start_id;

	err = sys_bpf(cmd, &attr, sizeof(attr));
	if (!err)
		*next_id = attr.next_id;

	return libbpf_err_errno(err);
}

int bpf_prog_get_next_id(__u32 start_id, __u32 *next_id)
{
	return bpf_obj_get_next_id(start_id, next_id, BPF_PROG_GET_NEXT_ID);
}

int bpf_map_get_next_id(__u32 start_id, __u32 *next_id)
{
	return bpf_obj_get_next_id(start_id, next_id, BPF_MAP_GET_NEXT_ID);
}

int bpf_btf_get_next_id(__u32 start_id, __u32 *next_id)
{
	return bpf_obj_get_next_id(start_id, next_id, BPF_BTF_GET_NEXT_ID);
}

int bpf_link_get_next_id(__u32 start_id, __u32 *next_id)
{
	return bpf_obj_get_next_id(start_id, next_id, BPF_LINK_GET_NEXT_ID);
}

int bpf_prog_get_fd_by_id(__u32 id)
{
	union bpf_attr attr;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.prog_id = id;

	fd = sys_bpf_fd(BPF_PROG_GET_FD_BY_ID, &attr, sizeof(attr));
	return libbpf_err_errno(fd);
}

int bpf_map_get_fd_by_id(__u32 id)
{
	union bpf_attr attr;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.map_id = id;

	fd = sys_bpf_fd(BPF_MAP_GET_FD_BY_ID, &attr, sizeof(attr));
	return libbpf_err_errno(fd);
}

int bpf_btf_get_fd_by_id(__u32 id)
{
	union bpf_attr attr;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.btf_id = id;

	fd = sys_bpf_fd(BPF_BTF_GET_FD_BY_ID, &attr, sizeof(attr));
	return libbpf_err_errno(fd);
}

int bpf_link_get_fd_by_id(__u32 id)
{
	union bpf_attr attr;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.link_id = id;

	fd = sys_bpf_fd(BPF_LINK_GET_FD_BY_ID, &attr, sizeof(attr));
	return libbpf_err_errno(fd);
}

int bpf_obj_get_info_by_fd(int bpf_fd, void *info, __u32 *info_len)
{
	union bpf_attr attr;
	int err;

	memset(&attr, 0, sizeof(attr));
	attr.info.bpf_fd = bpf_fd;
	attr.info.info_len = *info_len;
	attr.info.info = ptr_to_u64(info);

	err = sys_bpf(BPF_OBJ_GET_INFO_BY_FD, &attr, sizeof(attr));

	if (!err)
		*info_len = attr.info.info_len;

	return libbpf_err_errno(err);
}

int bpf_raw_tracepoint_open(const char *name, int prog_fd)
{
	union bpf_attr attr;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.raw_tracepoint.name = ptr_to_u64(name);
	attr.raw_tracepoint.prog_fd = prog_fd;

	fd = sys_bpf_fd(BPF_RAW_TRACEPOINT_OPEN, &attr, sizeof(attr));
	return libbpf_err_errno(fd);
}

int bpf_btf_load(const void *btf_data, size_t btf_size, const struct bpf_btf_load_opts *opts)
{
	const size_t attr_sz = offsetofend(union bpf_attr, btf_log_level);
	union bpf_attr attr;
	char *log_buf;
	size_t log_size;
	__u32 log_level;
	int fd;

	/* auto-bump RLIMIT_MEMLOCK on older kernels that still account BPF
	 * memory against it; a no-op on kernels >= 5.11, which use
	 * memcg-based accounting instead */
	bump_rlimit_memlock();

	memset(&attr, 0, attr_sz);

	if (!OPTS_VALID(opts, bpf_btf_load_opts))
		return libbpf_err(-EINVAL);

	log_buf = OPTS_GET(opts, log_buf, NULL);
	log_size = OPTS_GET(opts, log_size, 0);
	log_level = OPTS_GET(opts, log_level, 0);

	if (log_size > UINT_MAX)
		return libbpf_err(-EINVAL);
	if (log_size && !log_buf)
		return libbpf_err(-EINVAL);

	attr.btf = ptr_to_u64(btf_data);
	attr.btf_size = btf_size;
	/* log_level == 0 and log_buf != NULL means "try loading without
	 * log_buf, but retry with log_buf and log_level=1 on error", which is
	 * consistent across low-level and high-level BTF and program loading
	 * APIs within libbpf and provides a sensible behavior in practice
	 */
	if (log_level) {
		attr.btf_log_buf = ptr_to_u64(log_buf);
		attr.btf_log_size = (__u32)log_size;
		attr.btf_log_level = log_level;
	}

	fd = sys_bpf_fd(BPF_BTF_LOAD, &attr, attr_sz);
	if (fd < 0 && log_buf && log_level == 0) {
		attr.btf_log_buf = ptr_to_u64(log_buf);
		attr.btf_log_size = (__u32)log_size;
		attr.btf_log_level = 1;
		fd = sys_bpf_fd(BPF_BTF_LOAD, &attr, attr_sz);
	}
	return libbpf_err_errno(fd);
}

int bpf_task_fd_query(int pid, int fd, __u32 flags, char *buf, __u32 *buf_len,
		      __u32 *prog_id, __u32 *fd_type, __u64 *probe_offset,
		      __u64 *probe_addr)
{
	union bpf_attr attr = {};
	int err;

	attr.task_fd_query.pid = pid;
	attr.task_fd_query.fd = fd;
	attr.task_fd_query.flags = flags;
	attr.task_fd_query.buf = ptr_to_u64(buf);
	attr.task_fd_query.buf_len = *buf_len;

	err = sys_bpf(BPF_TASK_FD_QUERY, &attr, sizeof(attr));

	*buf_len = attr.task_fd_query.buf_len;
	*prog_id = attr.task_fd_query.prog_id;
	*fd_type = attr.task_fd_query.fd_type;
	*probe_offset = attr.task_fd_query.probe_offset;
	*probe_addr = attr.task_fd_query.probe_addr;

	return libbpf_err_errno(err);
}

libbpf: Ensure that BPF syscall fds are never 0, 1, or 2
Add a simple wrapper for passing an fd and getting a new one >= 3 if it
is one of 0, 1, or 2. There are two primary reasons to make this change:
First, libbpf relies on the assumption that a certain BPF fd is never 0
(e.g. most recently noticed in [0]). Second, Alexei pointed out in [1]
that some environments reset stdin, stdout, and stderr if they notice
an invalid fd at these numbers. To protect against both these cases,
switch all internal BPF syscall wrappers in libbpf to always return an
fd >= 3. Only the syscall wrappers need modifying, not other code that
checks fd validity with fd >= 0: this avoids pointless churn, and that
check remains a valid assumption. The cost paid is two additional
syscalls if the fd is in the range [0, 2].
[0]: e31eec77e4ab ("bpf: selftests: Fix fd cleanup in get_branch_snapshot")
[1]: https://lore.kernel.org/bpf/CAADnVQKVKY8o_3aU8Gzke443+uHa-eGoM0h7W4srChMXU1S4Bg@mail.gmail.com
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211028063501.2239335-5-memxor@gmail.com

int bpf_enable_stats(enum bpf_stats_type type)
{
	union bpf_attr attr;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.enable_stats.type = type;

	fd = sys_bpf_fd(BPF_ENABLE_STATS, &attr, sizeof(attr));
	return libbpf_err_errno(fd);
}

int bpf_prog_bind_map(int prog_fd, int map_fd,
		      const struct bpf_prog_bind_opts *opts)
{
	union bpf_attr attr;
	int ret;

	if (!OPTS_VALID(opts, bpf_prog_bind_opts))
		return libbpf_err(-EINVAL);

	memset(&attr, 0, sizeof(attr));
	attr.prog_bind_map.prog_fd = prog_fd;
	attr.prog_bind_map.map_fd = map_fd;
	attr.prog_bind_map.flags = OPTS_GET(opts, flags, 0);

	ret = sys_bpf(BPF_PROG_BIND_MAP, &attr, sizeof(attr));
	return libbpf_err_errno(ret);
}