Mirror of https://github.com/torvalds/linux.git
Commit 1b8f85defb
uprobe_cpu_buffer and the corresponding logic to store uprobe args into it are used for uprobes/uretprobes that are created through tracefs or perf events. BPF is yet another user of the uprobe/uretprobe infrastructure, but it doesn't need uprobe_cpu_buffer and the associated data. For BPF-only use cases this buffer handling and preparation is pure overhead. At the same time, BPF-only uprobe/uretprobe usage is very common in practice, and in a lot of cases applications are very sensitive to performance overheads, as they might be tracing very high-frequency functions like malloc()/free(), so every bit of performance improvement matters.

All that is to say that this uprobe_cpu_buffer preparation is an unnecessary overhead that each BPF user of uprobes/uretprobes has to pay. This patch changes that by making uprobe_cpu_buffer preparation optional. It will happen only if either a tracefs-based or a perf event-based uprobe/uretprobe consumer is registered for a given uprobe/uretprobe. For BPF-only use cases this step is skipped.

We used the uprobe/uretprobe benchmark that is part of the BPF selftests (see [0]) to estimate the improvements. There are 3 uprobe and 3 uretprobe scenarios, which vary the instruction that is replaced by the uprobe: nop (fastest uprobe case), `push rbp` (typical case), and a non-simulated `ret` instruction (slowest case). The benchmark thread constantly calls a user space function in a tight loop. That function has a BPF uprobe or uretprobe program attached which does nothing but atomically increment a counter to count the number of triggering calls. The benchmark reports throughput in millions of executions per second.

BEFORE these changes
====================
uprobe-nop     :    2.657 ± 0.024M/s
uprobe-push    :    2.499 ± 0.018M/s
uprobe-ret     :    1.100 ± 0.006M/s
uretprobe-nop  :    1.356 ± 0.004M/s
uretprobe-push :    1.317 ± 0.019M/s
uretprobe-ret  :    0.785 ± 0.007M/s

AFTER these changes
===================
uprobe-nop     :    2.732 ± 0.022M/s (+2.8%)
uprobe-push    :    2.621 ± 0.016M/s (+4.9%)
uprobe-ret     :    1.105 ± 0.007M/s (+0.5%)
uretprobe-nop  :    1.396 ± 0.007M/s (+2.9%)
uretprobe-push :    1.347 ± 0.008M/s (+2.3%)
uretprobe-ret  :    0.800 ± 0.006M/s (+1.9%)

So the improvements on this particular machine seem to be between 2% and 5%.

[0] https://github.com/torvalds/linux/blob/master/tools/testing/selftests/bpf/benchs/bench_trigger.c

Reviewed-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/all/20240318181728.2795838-3-andrii@kernel.org/
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
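
For illustration, a BPF uprobe program of the kind described above (attached to a user space function, doing nothing but an atomic counter increment) can be as small as the sketch below. This is not the actual selftests code from [0]; the file, section, and symbol names are made up for the example.

  /* Minimal sketch of a counting uprobe handler, not the bench_trigger code.
   * Assumed build command: clang -O2 -g -target bpf -c bench_uprobe.bpf.c */
  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  long hits = 0;                  /* global counter, read back by user space */

  SEC("uprobe")
  int count_calls(void *ctx)
  {
          /* the only work the handler does: one atomic increment */
          __sync_fetch_and_add(&hits, 1);
          return 0;
  }

  char LICENSE[] SEC("license") = "GPL";

From user space, such a program would typically be attached with libbpf, e.g. via bpf_program__attach_uprobe() pointed at the benchmark binary and the target function's offset; the uretprobe variant is the same call with the retprobe flag set.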