linux/kernel/bpf
Hou Tao 997849c4b9 bpf: Zeroing allocated object from slab in bpf memory allocator
Currently a freed element in the bpf memory allocator may be reused
immediately. For htab maps, reuse reinitializes the special fields in
the map value (e.g., bpf_spin_lock), but a concurrent lookup may still
be accessing those fields, which can lead to a hard lockup as shown
below:

 NMI backtrace for cpu 16
 CPU: 16 PID: 2574 Comm: htab.bin Tainted: G             L     6.1.0+ #1
 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996),
 RIP: 0010:queued_spin_lock_slowpath+0x283/0x2c0
 ......
 Call Trace:
  <TASK>
  copy_map_value_locked+0xb7/0x170
  bpf_map_copy_value+0x113/0x3c0
  __sys_bpf+0x1c67/0x2780
  __x64_sys_bpf+0x1c/0x20
  do_syscall_64+0x30/0x60
  entry_SYSCALL_64_after_hwframe+0x46/0xb0
 ......
  </TASK>

For htab maps, just as in the preallocated case, there is no need to
reinitialize these special fields in the map value once they have been
initialized. For preallocated htab maps, these fields are zeroed
through __GFP_ZERO in bpf_map_area_alloc(), so do the same for
non-preallocated htab maps in the bpf memory allocator. There is no
need to use __GFP_ZERO for the per-cpu bpf memory allocator, because
__alloc_percpu_gfp() zeroes the memory implicitly.

Fixes: 0fd7c5d433 ("bpf: Optimize call_rcu in non-preallocated hash map.")
Signed-off-by: Hou Tao <houtao1@huawei.com>
Link: https://lore.kernel.org/r/20230215082132.3856544-2-houtao@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2023-02-15 15:40:06 -08:00
preload bpf: iterators: Split iterators.lskel.h into little- and big- endian versions 2023-01-28 12:45:15 -08:00
arraymap.c bpf: Do btf_record_free outside map_free callback 2022-11-17 19:11:31 -08:00
bloom_filter.c treewide: use get_random_u32() when possible 2022-10-11 17:42:58 -06:00
bpf_cgrp_storage.c bpf: Fix a compilation failure with clang lto build 2022-11-30 17:13:25 -08:00
bpf_inode_storage.c bpf: Fix a compilation failure with clang lto build 2022-11-30 17:13:25 -08:00
bpf_iter.c bpf: Initialize the bpf_run_ctx in bpf_iter_run_prog() 2022-08-18 17:06:13 -07:00
bpf_local_storage.c bpf: use bpf_map_kvcalloc in bpf_local_storage 2023-02-10 18:59:56 -08:00
bpf_lru_list.c bpf_lru_list: Read double-checked variable once without lock 2021-02-10 15:54:26 -08:00
bpf_lru_list.h printk: stop including cache.h from printk.h 2022-05-13 07:20:07 -07:00
bpf_lsm.c bpf: Fix the kernel crash caused by bpf_setsockopt(). 2023-01-26 23:26:40 -08:00
bpf_struct_ops_types.h bpf: Add dummy BPF STRUCT_OPS for test purpose 2021-11-01 14:10:00 -07:00
bpf_struct_ops.c mm: Introduce set_memory_rox() 2022-12-15 10:37:26 -08:00
bpf_task_storage.c bpf: Fix a compilation failure with clang lto build 2022-11-30 17:13:25 -08:00
btf.c bpf: Special verifier handling for bpf_rbtree_{remove, first} 2023-02-13 19:40:53 -08:00
cgroup_iter.c bpf: Pin the start cgroup in cgroup_iter_seq_init() 2022-11-21 17:40:42 +01:00
cgroup.c Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net 2022-10-03 17:44:18 -07:00
core.c bpf: allow to disable bpf prog memory accounting 2023-02-10 18:59:57 -08:00
cpumap.c net: skbuff: drop the word head from skb cache 2023-02-10 09:10:28 +00:00
cpumask.c bpf: Add __bpf_kfunc tag to all kfuncs 2023-02-02 00:25:14 +01:00
devmap.c bpf: devmap: check XDP features in __xdp_enqueue routine 2023-02-02 20:48:24 -08:00
disasm.c bpf: Relicense disassembler as GPL-2.0-only OR BSD-2-Clause 2021-09-02 14:49:23 +02:00
disasm.h bpf: Relicense disassembler as GPL-2.0-only OR BSD-2-Clause 2021-09-02 14:49:23 +02:00
dispatcher.c bpf: Synchronize dispatcher update with bpf_dispatcher_xdp_func 2022-12-14 12:02:14 -08:00
hashtab.c bpf: Zeroing allocated object from slab in bpf memory allocator 2023-02-15 15:40:06 -08:00
helpers.c bpf: Add bpf_rbtree_{add,remove,first} kfuncs 2023-02-13 19:40:48 -08:00
inode.c bpf: Convert bpf_preload.ko to use light skeleton. 2022-02-10 23:31:51 +01:00
Kconfig rcu: Make the TASKS_RCU Kconfig option be selected 2022-04-20 16:52:58 -07:00
link_iter.c bpf: Add bpf_link iterator 2022-05-10 11:20:45 -07:00
local_storage.c bpf: Consolidate spin_lock, timer management into btf_record 2022-11-03 22:19:40 -07:00
lpm_trie.c bpf: Use bpf_map_area_alloc consistently on bpf map creation 2022-08-10 11:50:43 -07:00
Makefile bpf: Enable cpumasks to be queried and used as kptrs 2023-01-25 07:57:49 -08:00
map_in_map.c bpf: Add comments for map BTF matching requirement for bpf_list_head 2022-11-17 19:22:14 -08:00
map_in_map.h
map_iter.c bpf: Introduce MEM_RDONLY flag 2021-12-18 13:27:41 -08:00
memalloc.c bpf: Zeroing allocated object from slab in bpf memory allocator 2023-02-15 15:40:06 -08:00
mmap_unlock_work.h bpf: Introduce helper bpf_find_vma 2021-11-07 11:54:51 -08:00
net_namespace.c net: Add includes masked by netdevice.h including uapi/bpf.h 2021-12-29 20:03:05 -08:00
offload.c bpf: Drop always true do_idr_lock parameter to bpf_map_free_id 2023-02-02 20:26:12 -08:00
percpu_freelist.c bpf: Initialize same number of free nodes for each pcpu_freelist 2022-11-11 12:05:14 -08:00
percpu_freelist.h
prog_iter.c
queue_stack_maps.c bpf: Remove unneeded memset in queue_stack_map creation 2022-08-10 11:48:22 -07:00
reuseport_array.c net: Fix suspicious RCU usage in bpf_sk_reuseport_detach() 2022-08-17 16:42:59 -07:00
ringbuf.c bpf: Rename MEM_ALLOC to MEM_RINGBUF 2022-11-14 21:52:45 -08:00
stackmap.c perf/bpf: Always use perf callchains if exist 2022-09-13 15:03:22 +02:00
syscall.c bpf: Add basic bpf_rb_{root,node} support 2023-02-13 19:31:13 -08:00
sysfs_btf.c bpf: Load and verify kernel module BTFs 2020-11-10 15:25:53 -08:00
task_iter.c bpf: keep a reference to the mm, in case the task is dead. 2022-12-28 14:11:48 -08:00
tnum.c bpf, tnums: Provably sound, faster, and more precise algorithm for tnum_mul 2021-06-01 13:34:15 +02:00
trampoline.c bpf: Fix panic due to wrong pageattr of im->image 2022-12-28 13:46:28 -08:00
verifier.c bpf: BPF_ST with variable offset should preserve STACK_ZERO marks 2023-02-15 11:48:47 -08:00