linux/kernel/bpf
Alexei Starovoitov b121d1e74d bpf: prevent kprobe+bpf deadlocks
If a kprobe is placed within the update or delete hash map helpers
while they hold a bucket spin lock, and the triggered bpf program
tries to grab the spin lock for the same bucket on the same cpu,
it will deadlock.
Fix it by extending the existing recursion prevention mechanism.
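As a sketch of the approach (abbreviated from the actual change to
kernel/bpf/syscall.c), the update and delete syscall paths now bump
the per-cpu bpf_prog_active counter around the map operation, so a
kprobe-attached program firing inside them sees the counter elevated
and backs off:

	/* must increment bpf_prog_active to prevent a kprobe-attached
	 * bpf program from running inside map update/delete, otherwise
	 * it could deadlock on the bucket spin lock
	 */
	preempt_disable();
	__this_cpu_inc(bpf_prog_active);
	rcu_read_lock();
	err = map->ops->map_update_elem(map, key, value, attr->flags);
	rcu_read_unlock();
	__this_cpu_dec(bpf_prog_active);
	preempt_enable();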

Note that map_lookup and the other tracing helpers don't have this
problem, since they don't hold any locks and don't modify global
data. bpf_trace_printk has its own recursion check and is fine as
well.
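For reference, the recursion guard being extended is the one already
used by trace_call_bpf() in kernel/trace/bpf_trace.c (the per-cpu
bpf_prog_active definition moves into kernel/bpf/syscall.c so both
sites can share it); roughly:

	if (in_nmi()) /* not supported yet */
		return 1;

	preempt_disable();
	if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1)) {
		/* a bpf program is already running on this cpu:
		 * don't call into another one and don't send the
		 * kprobe event into the ring-buffer
		 */
		ret = 0;
		goto out;
	}
	ret = BPF_PROG_RUN(prog, ctx);
 out:
	__this_cpu_dec(bpf_prog_active);
	preempt_enable();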

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-03-08 15:28:30 -05:00
arraymap.c bpf: add lookup/update support for per-cpu hash and array maps 2016-02-06 03:34:36 -05:00
core.c bpf: move clearing of A/X into classic to eBPF migration prologue 2015-12-18 16:04:51 -05:00
hashtab.c bpf: grab rcu read lock for bpf_percpu_hash_update 2016-02-19 14:37:43 -05:00
helpers.c bpf: split state from prandom_u32() and consolidate {c, e}BPF prngs 2015-10-08 05:26:39 -07:00
inode.c bpf, inode: allow for rename and link ops 2015-12-12 18:44:23 -05:00
Makefile bpf: introduce BPF_MAP_TYPE_STACK_TRACE 2016-02-20 00:21:44 -05:00
stackmap.c bpf: introduce BPF_MAP_TYPE_STACK_TRACE 2016-02-20 00:21:44 -05:00
syscall.c bpf: prevent kprobe+bpf deadlocks 2016-03-08 15:28:30 -05:00
verifier.c Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net 2016-02-23 00:09:14 -05:00