path: root/samples/bpf/test_overhead_kprobe_kern.c
Age         Commit message  (Author)  Files changed, lines +/-
2023-01-15  samples/bpf: change _kern suffix to .bpf with BPF test programs  (Daniel T. Lee)  1 file changed, +0/-47
This commit changes the _kern suffix to .bpf for the BPF test programs. With this change, the test programs inherit the benefit of the new CLANG-BPF compile target.

Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
Link: https://lore.kernel.org/r/20230115071613.125791-11-danieltimlee@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2023-01-15  samples/bpf: use vmlinux.h instead of implicit headers in BPF test program  (Daniel T. Lee)  1 file changed, +1/-3
This commit applies vmlinux.h to the BPF functionality test program. Macros that remain undefined after the migration to vmlinux.h are defined separately in the individual files.

Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
Link: https://lore.kernel.org/r/20230115071613.125791-10-danieltimlee@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
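A minimal sketch of what such a conversion looks like: all kernel types come from vmlinux.h, and a macro that BTF cannot carry (TASK_COMM_LEN is used here purely as an illustration) is defined locally. This is a hypothetical program, not the contents of the actual sample.

    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>

    #ifndef TASK_COMM_LEN
    #define TASK_COMM_LEN 16    /* macros are not part of BTF, so vmlinux.h cannot provide this */
    #endif

    SEC("kprobe/__set_task_comm")
    int prog(struct pt_regs *ctx)
    {
        char comm[TASK_COMM_LEN] = {};

        /* read the current task's comm just to exercise the helper */
        bpf_get_current_comm(comm, sizeof(comm));
        return 0;
    }

    char _license[] SEC("license") = "GPL";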
2023-01-15  samples/bpf: replace broken overhead microbenchmark with fib_table_lookup  (Daniel T. Lee)  1 file changed, +1/-1
The test_overhead bpf program is designed to compare performance between tracepoints and kprobes. Initially it used the task_rename and urandom_read tracepoints. However, commit 14c174633f34 ("random: remove unused tracepoints") removed the urandom_read tracepoint, which broke test_overhead.

This commit introduces a new microbenchmark using fib_table_lookup. The microbenchmark sends UDP packets to localhost in order to invoke fib_table_lookup. In a nutshell:

    fd = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    addr.sin_addr.s_addr = inet_addr(DUMMY_IP);
    addr.sin_port = htons(DUMMY_PORT);
    for() {
        sendto(fd, buf, strlen(buf), 0, (struct sockaddr *)&addr, sizeof(addr));
    }

On 4 cpus in parallel, lookups per sec:

    base (no tracepoints, no kprobes)            381k
    with kprobe at fib_table_lookup()            325k
    with tracepoint at fib:fib_table_lookup      330k
    with raw_tracepoint at fib:fib_table_lookup  365k

Fixes: 14c174633f34 ("random: remove unused tracepoints")
Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
Link: https://lore.kernel.org/r/20230115071613.125791-6-danieltimlee@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
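For reference, a self-contained sketch of the load generator outlined above; DUMMY_IP, DUMMY_PORT, and the iteration count are illustrative placeholders rather than values taken from the actual sample.

    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    #define DUMMY_IP   "127.0.0.1"   /* placeholder destination */
    #define DUMMY_PORT 39111         /* placeholder port */

    int main(void)
    {
        struct sockaddr_in addr = {};
        const char *buf = "test";
        int i, fd;

        fd = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
        if (fd < 0)
            return 1;

        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = inet_addr(DUMMY_IP);
        addr.sin_port = htons(DUMMY_PORT);

        /* every sendto() walks the IPv4 output path and ends up in
         * fib_table_lookup(), which is what the attached programs probe */
        for (i = 0; i < 1000000; i++)
            sendto(fd, buf, strlen(buf), 0,
                   (struct sockaddr *)&addr, sizeof(addr));

        close(fd);
        return 0;
    }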
2022-01-20  samples/bpf/test_overhead_kprobe_kern: replace bpf_probe_read_kernel with bpf_probe_read_kernel_str to get task comm  (Yafang Shao)  1 file changed, +6/-5
bpf_probe_read_kernel_str() adds a NUL terminator to the destination, so we no longer have to worry about whether the destination buffer is big enough. This patch also replaces the hard-coded 16 with TASK_COMM_LEN to make it grepable.

Link: https://lkml.kernel.org/r/20211120112738.45980-6-laoar.shao@gmail.com
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Arnaldo Carvalho de Melo <arnaldo.melo@gmail.com>
Cc: Alexei Starovoitov <alexei.starovoitov@gmail.com>
Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
Cc: Michal Miroslaw <mirq-linux@rere.qmqm.pl>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: David Hildenbrand <david@redhat.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Kees Cook <keescook@chromium.org>
Cc: Petr Mladek <pmladek@suse.com>
Cc: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
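The pattern the commit describes looks roughly like the following kprobe program. This is a hedged sketch loosely modeled on test_overhead_kprobe_kern.c, assuming the usual samples/bpf build flags (including the __TARGET_ARCH define needed by the PT_REGS macros); it is not a verbatim copy of the file.

    #include <linux/sched.h>        /* TASK_COMM_LEN */
    #include <linux/ptrace.h>
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_tracing.h>

    SEC("kprobe/__set_task_comm")
    int prog(struct pt_regs *ctx)
    {
        struct task_struct *tsk = (void *)PT_REGS_PARM1(ctx);
        char oldcomm[TASK_COMM_LEN] = {};
        char newcomm[TASK_COMM_LEN] = {};

        /* the _str variant NUL-terminates the destination, so a short
         * source string can never leave the buffer unterminated */
        bpf_probe_read_kernel_str(oldcomm, sizeof(oldcomm), &tsk->comm);
        bpf_probe_read_kernel_str(newcomm, sizeof(newcomm),
                                  (void *)PT_REGS_PARM2(ctx));
        return 0;
    }

    char _license[] SEC("license") = "GPL";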
2020-07-21  samples/bpf, selftests/bpf: Use bpf_probe_read_kernel  (Ilya Leoshkevich)  1 file changed, +9/-3
A handful of samples and selftests fail to build on s390, because after commit 0ebeea8ca8a4 ("bpf: Restrict bpf_probe_read{, str}() only to archs where they work") bpf_probe_read is no longer available. Fix by using bpf_probe_read_kernel.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200720114806.88823-1-iii@linux.ibm.com
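As a short illustration of the replacement (a hypothetical kprobe program, not one of the hunks from this commit): a read of kernel memory names the address space explicitly, which works on all architectures, including those such as s390 where the generic helper is gated off.

    #include <linux/sched.h>
    #include <linux/ptrace.h>
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_tracing.h>

    SEC("kprobe/__set_task_comm")
    int prog(struct pt_regs *ctx)
    {
        struct task_struct *tsk = (void *)PT_REGS_PARM1(ctx);
        int pid = 0;

        /* explicit kernel-space read instead of the generic bpf_probe_read() */
        bpf_probe_read_kernel(&pid, sizeof(pid), &tsk->pid);
        return 0;
    }

    char _license[] SEC("license") = "GPL";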
2020-01-20  samples/bpf: Use consistent include paths for libbpf  (Toke Høiland-Jørgensen)  1 file changed, +2/-2
Fix all files in samples/bpf to include libbpf header files with the bpf/ prefix, to be consistent with external users of the library. Also ensure that all includes of exported libbpf header files (those that are exported on 'make install' of the library) use bracketed includes instead of quoted ones.

To make sure no new files are introduced that do not include the bpf/ prefix in their includes, remove tools/lib/bpf from the include path entirely, and use tools/lib instead.

Fixes: 6910d7d3867a ("selftests/bpf: Ensure bpf_helper_defs.h are taken from selftests dir")
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/157952560911.1683545.8795966751309534150.stgit@toke.dk
2019-10-08  selftests/bpf: Split off tracing-only helpers into bpf_tracing.h  (Andrii Nakryiko)  1 file changed, +1/-0
Split off the PT_REGS-related helpers into the bpf_tracing.h header. Adjust selftests and samples to include it where necessary.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20191008175942.1769476-5-andriin@fb.com
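In practice this means a kprobe program now pulls in two headers: bpf_helpers.h for SEC() and the helper declarations, and bpf_tracing.h for the PT_REGS_PARM*() register-access macros. A hedged sketch follows (hypothetical program, written with the later bpf/ include prefix and assuming the build flags that select the target architecture):

    #include <linux/ptrace.h>
    #include <bpf/bpf_helpers.h>    /* SEC(), bpf helper declarations */
    #include <bpf/bpf_tracing.h>    /* PT_REGS_PARM1() and friends */

    SEC("kprobe/__set_task_comm")
    int prog(struct pt_regs *ctx)
    {
        /* fetch the probed function's first argument portably across archs */
        long arg1 = PT_REGS_PARM1(ctx);

        bpf_printk("first arg: %lx\n", arg1);
        return 0;
    }

    char _license[] SEC("license") = "GPL";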
2016-04-07  samples/bpf: add tracepoint vs kprobe performance tests  (Alexei Starovoitov)  1 file changed, +41/-0
The first microbenchmark does:

    fd = open("/proc/self/comm");
    for() {
        write(fd, "test");
    }

and on 4 cpus in parallel, writes per sec:

    base (no tracepoints, no kprobes)    930k
    with kprobe at __set_task_comm()     420k
    with tracepoint at task:task_rename  730k

For the kprobe, the full bpf program manually fetches oldcomm and newcomm via bpf_probe_read. For the tracepoint, the bpf program does nothing, since the arguments are copied by the tracepoint.

The 2nd microbenchmark does:

    fd = open("/dev/urandom");
    for() {
        read(fd, buf);
    }

and on 4 cpus in parallel, reads per sec:

    base (no tracepoints, no kprobes)       300k
    with kprobe at urandom_read()           279k
    with tracepoint at random:urandom_read  290k

The bpf progs attached to the kprobe and the tracepoint are noops.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
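For context, a self-contained sketch of the two user-space loops described above; this is an illustration with placeholder iteration counts, not the actual test_overhead user-space code.

    #include <fcntl.h>
    #include <unistd.h>

    static void rename_loop(int iterations)
    {
        /* each write to /proc/self/comm goes through __set_task_comm()
         * and fires the task:task_rename tracepoint */
        int fd = open("/proc/self/comm", O_WRONLY);
        int i;

        if (fd < 0)
            return;
        for (i = 0; i < iterations; i++)
            write(fd, "test", 4);
        close(fd);
    }

    static void urandom_loop(int iterations)
    {
        /* reading /dev/urandom used to hit urandom_read() and the
         * (since removed) random:urandom_read tracepoint */
        char buf[64];
        int fd = open("/dev/urandom", O_RDONLY);
        int i;

        if (fd < 0)
            return;
        for (i = 0; i < iterations; i++)
            read(fd, buf, sizeof(buf));
        close(fd);
    }

    int main(void)
    {
        rename_loop(1000000);
        urandom_loop(1000000);
        return 0;
    }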