Andrii Nakryiko wrote:
> Add fmod_ret BPF program to existing test_overhead selftest. Also re-implement
> the user-space benchmarking part into the benchmark runner to compare results.
> Results with ./bench are consistently somewhat lower than test_overhead's, but
> the relative performance of various types of BPF programs stays consistent
> (e.g., kretprobe is noticeably slower).
>
> To test with ./bench, the following command was used:
>
>   for i in base kprobe kretprobe rawtp fentry fexit fmodret; \
>   do \
>       summary=$(sudo ./bench -w2 -d5 -a rename-$i | \
>                 tail -n1 | cut -d'(' -f1 | cut -d' ' -f3-) && \
>       printf "%-10s: %s\n" $i "$summary"; \
>   done
might be nice to have a script ./bench_tracing_overhead.sh once it's in its
own directory ./bench. Otherwise I'll have to look this up every single
time, I'm sure.
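
Something along these lines maybe (just a rough sketch wrapping the loop from
the commit message above; the script name is the one I suggested, and the
-w2/-d5 values are simply the ones used there, so adjust as needed):

  #!/bin/bash
  # bench_tracing_overhead.sh -- hypothetical wrapper around ./bench.
  # Runs the rename benchmark for each tracing program type and prints
  # a one-line throughput summary per type.
  set -eu
  for i in base kprobe kretprobe rawtp fentry fexit fmodret; do
          summary=$(sudo ./bench -w2 -d5 -a rename-$i | \
                    tail -n1 | cut -d'(' -f1 | cut -d' ' -f3-)
          printf "%-10s: %s\n" "$i" "$summary"
  done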
>
> This gives the following numbers:
>
> base : 3.975 ± 0.065M/s
> kprobe : 3.268 ± 0.095M/s
> kretprobe : 2.496 ± 0.040M/s
> rawtp : 3.899 ± 0.078M/s
> fentry : 3.836 ± 0.049M/s
> fexit : 3.660 ± 0.082M/s
> fmodret : 3.776 ± 0.033M/s
>
> While running test_overhead gives:
>
> task_rename base 4457K events per sec
> task_rename kprobe 3849K events per sec
> task_rename kretprobe 2729K events per sec
> task_rename raw_tp 4506K events per sec
> task_rename fentry 4381K events per sec
> task_rename fexit 4349K events per sec
> task_rename fmod_ret 4130K events per sec
>
> Signed-off-by: Andrii Nakryiko <[email protected]>
> ---
LGTM
Acked-by: John Fastabend <[email protected]>