commit 4fe8435909
When all map elements are pre-allocated, one cpu can delete and reuse an htab_elem
while another cpu is still walking the hlist. In such a case the lookup may
miss the element. Convert hlist to hlist_nulls to avoid such a scenario.
When the bucket lock is taken there is no need for such precautions,
so only convert map_lookup and map_get_next to nulls.
The race window is extremely small and only reproducible with an explicit
udelay() inside lookup_nulls_elem_raw().
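
A minimal sketch of the nulls-based lookup pattern described above, using the
rculist_nulls API. The helper name lookup_nulls_elem_raw() comes from the commit
message; the htab_elem layout and field names here are illustrative assumptions,
not necessarily the exact kernel/bpf/hashtab.c code.

#include <linux/types.h>
#include <linux/string.h>
#include <linux/rculist_nulls.h>

/* Illustrative element layout; the real struct htab_elem differs in detail. */
struct htab_elem {
	struct hlist_nulls_node hash_node;
	u32 hash;
	char key[];
};

/*
 * Walk one bucket under RCU. Each bucket's nulls value is assumed to be its
 * bucket index. If the traversal ends on a nulls marker that does not match
 * this bucket, the element we were standing on was freed and reinserted into
 * another bucket, so restart the walk instead of wrongly reporting a miss.
 */
static struct htab_elem *lookup_nulls_elem_raw(struct hlist_nulls_head *head,
					       u32 hash, void *key,
					       u32 key_size, u32 n_buckets)
{
	struct hlist_nulls_node *n;
	struct htab_elem *l;

again:
	hlist_nulls_for_each_entry_rcu(l, n, head, hash_node)
		if (l->hash == hash && !memcmp(&l->key, key, key_size))
			return l;

	if (get_nulls_value(n) != (hash & (n_buckets - 1)))
		goto again;

	return NULL;
}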
Similar to hlist, add hlist_nulls_for_each_entry_safe() and
hlist_nulls_entry_safe() helpers.
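
A sketch of what such helpers look like, by analogy with the existing
hlist_entry_safe() and hlist_for_each_entry_safe() macros; the exact upstream
definitions in <linux/list_nulls.h> and <linux/rculist_nulls.h> may differ in
detail (e.g. compiler barriers).

/*
 * Like hlist_entry_safe(), but a nulls marker (rather than NULL) terminates
 * the list, so check is_a_nulls() before converting the node pointer into
 * the containing object.
 */
#define hlist_nulls_entry_safe(ptr, type, member)			\
	({ typeof(ptr) ____ptr = (ptr);					\
	   !is_a_nulls(____ptr) ? hlist_nulls_entry(____ptr, type, member) : NULL; \
	})

/*
 * "Safe" iteration: the next node is fetched before the loop body runs, so
 * the current entry may be unlinked (e.g. with hlist_nulls_del_rcu()) and
 * reused inside the body without breaking the walk.
 */
#define hlist_nulls_for_each_entry_safe(tpos, pos, head, member)	\
	for (pos = rcu_dereference_raw(hlist_nulls_first_rcu(head));	\
	     !is_a_nulls(pos) &&					\
		({ tpos = hlist_nulls_entry(pos, typeof(*tpos), member); \
		   pos = rcu_dereference_raw(hlist_nulls_next_rcu(pos)); 1; });)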
Fixes:
Directory listing (kernel/bpf/) at this commit:
arraymap.c
bpf_lru_list.c
bpf_lru_list.h
cgroup.c
core.c
hashtab.c
helpers.c
inode.c
lpm_trie.c
Makefile
percpu_freelist.c
percpu_freelist.h
stackmap.c
syscall.c
verifier.c