author     Martin KaFai Lau <martin.lau@kernel.org>  2024-01-23 16:39:54 -0800
committer  Martin KaFai Lau <martin.lau@kernel.org>  2024-01-23 17:12:52 -0800
commit     8b593021319d4893a8fbeb7bd1f668657e68403c
tree       85add1361db079527e9798ff1f78d180e9f08180 /tools/testing
parent     Merge branch 'bpf-add-cookies-retrieval-for-perf-kprobe-multi-links'
parent     selftests/bpf: test case for register_bpf_struct_ops().
Merge branch 'Registrating struct_ops types from modules'
Kui-Feng Lee says:

====================

Given the constraints of the current implementation, struct_ops cannot be
registered dynamically. This presents a significant limitation for modules
like the upcoming fuse-bpf, which seeks to implement a new struct_ops type.
To address this issue, a new API is introduced that allows the registration
of new struct_ops types from modules.

Previously, struct_ops types were defined in bpf_struct_ops_types.h and
collected as a static array. The new API lets callers add new struct_ops
types dynamically. The static array has been removed and replaced by the
per-btf struct_ops_tab.

The struct_ops subsystem relies on BTF to determine the layout of values in
a struct_ops map and to identify the subsystem that the struct_ops map
registers to. However, the kernel BTF does not include the type information
of struct_ops types defined by a module. The struct_ops subsystem therefore
needs to know the corresponding module for a given struct_ops map and to use
the BTF information from that module. We empower libbpf to determine the
correct module for accessing the BTF information and to pass an identity
(FD) of the module btf to the kernel. The kernel looks up type information
and registered struct_ops types directly from the given btf.

If a module exits while one or more struct_ops maps still refer to a
struct_ops type defined by the module, it can lead to unforeseen
complications. Therefore, it is crucial to ensure that a module remains
intact as long as any struct_ops map is still linked to a struct_ops type
defined by the module. To achieve this, every struct_ops map holds a
reference to the module while being registered.

Changes from v16:
 - Fix unnecessary bpf_struct_ops_link_create() removing/adding.
 - Rename REGISTER_BPF_STRUCT_OPS() to register_bpf_struct_ops().
 - Implement bpf_map_struct_ops_info_fill() for !CONFIG_BPF_JIT.

Changes from v15:
 - Fix the misleading commit message of part 4.
 - Introduce the BPF_F_VTYPE_BTF_OBJ_FD flag in struct bpf_attr to tell
   whether value_type_btf_obj_fd is set or not.
 - Introduce links_cnt in struct bpf_struct_ops_map to avoid accessing
   struct bpf_struct_ops_desc in bpf_struct_ops_map_put_progs() after
   calling module_put() against the owner module of the struct_ops type.
   (Part 9)

Changes from v14:
 - Rebase. Add the cfi_stubs required by commit 2cd3e3772e413
   ("x86/cfi,bpf: Fix bpf_struct_ops CFI").
 - Remove creating a struct_ops map without bpf_testmod.ko from the test.
 - Check the name of the btf returned by bpf_map_info by getting the name
   with bpf_btf_get_info_by_fd().
 - Change value_type_btf_obj_fd to a signed type to allow the 0 fd.

Changes from v13:
 - Change the test case to use bpf_map_create() to create a struct_ops map
   while testmod.ko is unloaded.
 - Move bpf_struct_ops_find*() to btf.c.
 - Use btf_is_module() to replace btf != btf_vmlinux.

Changes from v12:
 - Rebase to for-next to fix conflicts.

Changes from v11:
 - bpf_struct_ops maps hold only the refcnt to the module, but not the
   btf. (patch 1)
 - Fix warning messages. (patches 1, 9 and 10)
 - Remove unnecessary conditional compilation on CONFIG_BPF_JIT. (patches
   4, 9 and 10)
 - Fix the commit log of patch 7 to explain how a btf is passed from user
   space and how the kernel handles it.
 - bpf_struct_ops maps hold the module defining their type, but not the
   btf. A map holds the module through its whole life-span, from
   allocation to being freed. (patch 8)
 - Change the selftests and trace __bpf_struct_ops_map_free() to wait for
   the release of the bpf_testmod module.
 - Include btf_obj_id in bpf_map_info. (patch 14)

Changes from v10:
 - Guard btf.c against CONFIG_BPF_JIT=n. This patchset introduces symbols
   from bpf_struct_ops.c, which is only built when CONFIG_BPF_JIT=y.
 - Fix the warning about the unused errout_free label by moving code that
   had leaked into patch 8 back to patch 7.

Changes from v9:
 - Remove the call_rcu_tasks_trace() changes from kern_sync_rcu().
 - Trace btf_put() in the test case to ensure the release of the kmod's
   btf; otherwise, subsequent tests may fail because they use the kmod's
   unloaded old btf instead of the new one created after loading it again.
   The kmod's btf may live for a while after unloading the kmod, since a
   map being freed asynchronously may still be holding the btf.
 - Split "add struct_ops_tab to btf" into two patches by adding "make
   struct_ops_map support btfs other than btf_vmlinux".
 - Flip the order of "pass attached BTF to the bpf_struct_ops subsystem"
   and "hold module for bpf_struct_ops_map" to make it more reasonable.
 - Fix the compile errors caused by a missing header file.

Changes from v8:
 - Rename bpf_struct_ops_init_one() to bpf_struct_ops_desc_init().
 - Move code that uses BTF_ID_LIST to the newly added patch 2.
 - Move code that looks up struct_ops types from a given module to the
   newly added patch 5.
 - Store the btf pointers in st_maps.
 - Add test cases for modules being unloaded.
 - Call bpf_struct_ops_init() in btf_add_struct_ops() to fix an
   inconsistency issue.

Changes from v7:
 - Fix check_struct_ops_btf_id() to use the attach btf, if there is one,
   instead of btf_vmlinux.

Changes from v6:
 - Change the returned error code to -EINVAL for the case of
   bpf_try_get_module().
 - Return an error code from bpf_struct_ops_init().
 - Fix the dependency issue of testing_helpers.c and
   rcu_tasks_trace_gp.skel.h.

Changes from v5:
 - As the 2nd patch, introduce "bpf_struct_ops_desc". This change involves
   moving certain members of "bpf_struct_ops" to "bpf_struct_ops_desc",
   which becomes a part of "btf_struct_ops_tab". This ensures that these
   members remain accessible even when the owner module of a
   "bpf_struct_ops" is unloaded.
 - Correct the order of arguments when calling in the 3rd patch.
 - Remove the owner argument from bpf_struct_ops_init_one(). Instead,
   callers should fill in st_ops->owner.
 - Make sure to hold the owner module when calling bpf_struct_ops_find()
   and bpf_struct_ops_find_value() in the 6th patch.
 - Merge the functions register_bpf_struct_ops_btf() and
   register_bpf_struct_ops() into a single function and relocate it to
   btf.c for better organization and clarity.
 - Undo the name modifications made to find_kernel_btf_id() and
   find_ksym_btf_id() in the 8th patch.

Changes from v4:
 - Fix the dependency between testing_helpers.o and
   rcu_tasks_trace_gp.skel.h.

Changes from v3:
 - Fix according to the feedback for v3.
 - Change the order of arguments to make btf the first argument.
 - Use btf_try_get_module() instead of try_get_module(), since the module
   pointed to by st_ops->owner can be gone while someone is still holding
   its btf.
 - Move variables defined by BPF_STRUCT_OPS_COMMON_VALUE to struct
   bpf_struct_ops_common_value to make validation easier.
 - Register the struct_ops type defined by bpf_testmod in its init
   function.
 - Rename the field to 'value_type_btf_obj_fd' to make it explicit.
 - Fix leaking of btf objects on error.
 - st_maps hold their modules to keep the modules alive and prevent them
   from being unloaded.
 - libbpf's bpf_map keeps mod_btf_fd instead of a pointer to module_btf.
 - Do call_rcu_tasks_trace() in kern_sync_rcu() to ensure that bpf_testmod
   is unloaded properly. It uses rcu_tasks_trace_gp to trigger
   call_rcu_tasks_trace() in the kernel.
 - Merge and reorder patches in a reasonable order.

Changes from v2:
 - Remove the struct_ops array, and add a per-btf (module) struct_ops_tab
   to collect registered struct_ops types.
 - Validate value_type by checking member names and types.

---
v16: https://lore.kernel.org/all/20240118014930.1992551-1-thinker.li@gmail.com/
v15: https://lore.kernel.org/all/20231220222654.1435895-1-thinker.li@gmail.com/
v14: https://lore.kernel.org/all/20231217081132.1025020-1-thinker.li@gmail.com/
v13: https://lore.kernel.org/all/20231209002709.535966-1-thinker.li@gmail.com/
v12: https://lore.kernel.org/all/20231207013950.1689269-1-thinker.li@gmail.com/
v11: https://lore.kernel.org/all/20231106201252.1568931-1-thinker.li@gmail.com/
v10: https://lore.kernel.org/all/20231103232202.3664407-1-thinker.li@gmail.com/
v9: https://lore.kernel.org/all/20231101204519.677870-1-thinker.li@gmail.com/
v8: https://lore.kernel.org/all/20231030192810.382942-1-thinker.li@gmail.com/
v7: https://lore.kernel.org/all/20231027211702.1374597-1-thinker.li@gmail.com/
v6: https://lore.kernel.org/all/20231022050335.2579051-11-thinker.li@gmail.com/
v5: https://lore.kernel.org/all/20231017162306.176586-1-thinker.li@gmail.com/
v4: https://lore.kernel.org/all/20231013224304.187218-1-thinker.li@gmail.com/
v3: https://lore.kernel.org/all/20230920155923.151136-1-thinker.li@gmail.com/
v2: https://lore.kernel.org/all/20230913061449.1918219-1-thinker.li@gmail.com/
====================

Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
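
[Editor's note] The sketch below condenses the module-side pattern that the
bpf_testmod changes in the diff follow: define the ops struct, provide
native functions with matching signatures as CFI stubs, fill in a struct
bpf_struct_ops descriptor, and call the new register_bpf_struct_ops() from
module init. The names (my_ops, my_mod_init, etc.) are hypothetical
placeholders, the callback signatures are taken from the diff, and this is
only a sketch against the API introduced by this series, not code from the
series itself.

    #include <linux/bpf.h>
    #include <linux/btf.h>
    #include <linux/module.h>

    struct my_ops {                         /* ops struct exposed to BPF */
            int (*test_1)(void);
            int (*test_2)(int a, int b);
    };

    static int my_ops_init(struct btf *btf)
    {
            return 0;                       /* per-btf init hook */
    }

    static int my_ops_init_member(const struct btf_type *t,
                                  const struct btf_member *member,
                                  void *kdata, const void *udata)
    {
            return 0;                       /* validate/copy each member */
    }

    static bool my_ops_is_valid_access(int off, int size,
                                       enum bpf_access_type type,
                                       const struct bpf_prog *prog,
                                       struct bpf_insn_access_aux *info)
    {
            return bpf_tracing_btf_ctx_access(off, size, type, prog, info);
    }

    static const struct bpf_verifier_ops my_ops_verifier_ops = {
            .is_valid_access = my_ops_is_valid_access,
    };

    static int my_ops_reg(void *kdata)      /* a struct_ops map is attached */
    {
            return 0;
    }

    static void my_ops_unreg(void *kdata)   /* the map is detached */
    {
    }

    /* Native stubs with the same signatures as the ops members; used as
     * CFI stubs, as required since commit 2cd3e3772e413 ("x86/cfi,bpf:
     * Fix bpf_struct_ops CFI").
     */
    static int my_test_1(void) { return 0; }
    static int my_test_2(int a, int b) { return 0; }

    static struct my_ops __my_ops_cfi_stubs = {
            .test_1 = my_test_1,
            .test_2 = my_test_2,
    };

    static struct bpf_struct_ops bpf_my_ops = {
            .verifier_ops = &my_ops_verifier_ops,
            .init         = my_ops_init,
            .init_member  = my_ops_init_member,
            .reg          = my_ops_reg,
            .unreg        = my_ops_unreg,
            .cfi_stubs    = &__my_ops_cfi_stubs,
            .name         = "my_ops",
            .owner        = THIS_MODULE,
    };

    static int __init my_mod_init(void)
    {
            /* The second argument is the C type name of the ops struct;
             * the macro resolves it in the module's BTF.
             */
            return register_bpf_struct_ops(&bpf_my_ops, my_ops);
    }
    module_init(my_mod_init);
    MODULE_LICENSE("GPL");

On the BPF side, programs implement the callbacks under SEC("struct_ops/..."),
and libbpf resolves the type against the module's BTF, passing the module btf
FD (value_type_btf_obj_fd) to the kernel at map creation time, as exercised by
the selftests in this diff.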
Diffstat (limited to 'tools/testing')
-rw-r--r--  tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c             66
-rw-r--r--  tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.h              5
-rw-r--r--  tools/testing/selftests/bpf/prog_tests/test_struct_ops_module.c   75
-rw-r--r--  tools/testing/selftests/bpf/progs/struct_ops_module.c             30
4 files changed, 176 insertions, 0 deletions
diff --git a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
index e7c9e1c7fde0..8befaf17d454 100644
--- a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
+++ b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
@@ -1,5 +1,6 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2020 Facebook */
+#include <linux/bpf.h>
#include <linux/btf.h>
#include <linux/btf_ids.h>
#include <linux/delay.h>
@@ -521,11 +522,75 @@ BTF_ID_FLAGS(func, bpf_kfunc_call_test_static_unused_arg)
BTF_ID_FLAGS(func, bpf_kfunc_call_test_offset)
BTF_SET8_END(bpf_testmod_check_kfunc_ids)
+static int bpf_testmod_ops_init(struct btf *btf)
+{
+ return 0;
+}
+
+static bool bpf_testmod_ops_is_valid_access(int off, int size,
+ enum bpf_access_type type,
+ const struct bpf_prog *prog,
+ struct bpf_insn_access_aux *info)
+{
+ return bpf_tracing_btf_ctx_access(off, size, type, prog, info);
+}
+
+static int bpf_testmod_ops_init_member(const struct btf_type *t,
+ const struct btf_member *member,
+ void *kdata, const void *udata)
+{
+ return 0;
+}
+
static const struct btf_kfunc_id_set bpf_testmod_kfunc_set = {
.owner = THIS_MODULE,
.set = &bpf_testmod_check_kfunc_ids,
};
+static const struct bpf_verifier_ops bpf_testmod_verifier_ops = {
+ .is_valid_access = bpf_testmod_ops_is_valid_access,
+};
+
+static int bpf_dummy_reg(void *kdata)
+{
+ struct bpf_testmod_ops *ops = kdata;
+ int r;
+
+ r = ops->test_2(4, 3);
+
+ return 0;
+}
+
+static void bpf_dummy_unreg(void *kdata)
+{
+}
+
+static int bpf_testmod_test_1(void)
+{
+ return 0;
+}
+
+static int bpf_testmod_test_2(int a, int b)
+{
+ return 0;
+}
+
+static struct bpf_testmod_ops __bpf_testmod_ops = {
+ .test_1 = bpf_testmod_test_1,
+ .test_2 = bpf_testmod_test_2,
+};
+
+struct bpf_struct_ops bpf_bpf_testmod_ops = {
+ .verifier_ops = &bpf_testmod_verifier_ops,
+ .init = bpf_testmod_ops_init,
+ .init_member = bpf_testmod_ops_init_member,
+ .reg = bpf_dummy_reg,
+ .unreg = bpf_dummy_unreg,
+ .cfi_stubs = &__bpf_testmod_ops,
+ .name = "bpf_testmod_ops",
+ .owner = THIS_MODULE,
+};
+
extern int bpf_fentry_test1(int a);
static int bpf_testmod_init(void)
@@ -536,6 +601,7 @@ static int bpf_testmod_init(void)
ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_SCHED_CLS, &bpf_testmod_kfunc_set);
ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING, &bpf_testmod_kfunc_set);
ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_SYSCALL, &bpf_testmod_kfunc_set);
+ ret = ret ?: register_bpf_struct_ops(&bpf_bpf_testmod_ops, bpf_testmod_ops);
if (ret < 0)
return ret;
if (bpf_fentry_test1(0) < 0)
diff --git a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.h b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.h
index f32793efe095..ca5435751c79 100644
--- a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.h
+++ b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.h
@@ -28,4 +28,9 @@ struct bpf_iter_testmod_seq {
int cnt;
};
+struct bpf_testmod_ops {
+ int (*test_1)(void);
+ int (*test_2)(int a, int b);
+};
+
#endif /* _BPF_TESTMOD_H */
diff --git a/tools/testing/selftests/bpf/prog_tests/test_struct_ops_module.c b/tools/testing/selftests/bpf/prog_tests/test_struct_ops_module.c
new file mode 100644
index 000000000000..8d833f0c7580
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/test_struct_ops_module.c
@@ -0,0 +1,75 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2024 Meta Platforms, Inc. and affiliates. */
+#include <test_progs.h>
+#include <time.h>
+
+#include "struct_ops_module.skel.h"
+
+static void check_map_info(struct bpf_map_info *info)
+{
+ struct bpf_btf_info btf_info;
+ char btf_name[256];
+ u32 btf_info_len = sizeof(btf_info);
+ int err, fd;
+
+ fd = bpf_btf_get_fd_by_id(info->btf_vmlinux_id);
+ if (!ASSERT_GE(fd, 0, "get_value_type_btf_obj_fd"))
+ return;
+
+ memset(&btf_info, 0, sizeof(btf_info));
+ btf_info.name = ptr_to_u64(btf_name);
+ btf_info.name_len = sizeof(btf_name);
+ err = bpf_btf_get_info_by_fd(fd, &btf_info, &btf_info_len);
+ if (!ASSERT_OK(err, "get_value_type_btf_obj_info"))
+ goto cleanup;
+
+ if (!ASSERT_EQ(strcmp(btf_name, "bpf_testmod"), 0, "get_value_type_btf_obj_name"))
+ goto cleanup;
+
+cleanup:
+ close(fd);
+}
+
+static void test_struct_ops_load(void)
+{
+ DECLARE_LIBBPF_OPTS(bpf_object_open_opts, opts);
+ struct struct_ops_module *skel;
+ struct bpf_map_info info = {};
+ struct bpf_link *link;
+ int err;
+ u32 len;
+
+ skel = struct_ops_module__open_opts(&opts);
+ if (!ASSERT_OK_PTR(skel, "struct_ops_module_open"))
+ return;
+
+ err = struct_ops_module__load(skel);
+ if (!ASSERT_OK(err, "struct_ops_module_load"))
+ goto cleanup;
+
+ len = sizeof(info);
+ err = bpf_map_get_info_by_fd(bpf_map__fd(skel->maps.testmod_1), &info,
+ &len);
+ if (!ASSERT_OK(err, "bpf_map_get_info_by_fd"))
+ goto cleanup;
+
+ link = bpf_map__attach_struct_ops(skel->maps.testmod_1);
+ ASSERT_OK_PTR(link, "attach_test_mod_1");
+
+ /* test_2() will be called from bpf_dummy_reg() in bpf_testmod.c */
+ ASSERT_EQ(skel->bss->test_2_result, 7, "test_2_result");
+
+ bpf_link__destroy(link);
+
+ check_map_info(&info);
+
+cleanup:
+ struct_ops_module__destroy(skel);
+}
+
+void serial_test_struct_ops_module(void)
+{
+ if (test__start_subtest("test_struct_ops_load"))
+ test_struct_ops_load();
+}
+
diff --git a/tools/testing/selftests/bpf/progs/struct_ops_module.c b/tools/testing/selftests/bpf/progs/struct_ops_module.c
new file mode 100644
index 000000000000..e44ac55195ca
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/struct_ops_module.c
@@ -0,0 +1,30 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2024 Meta Platforms, Inc. and affiliates. */
+#include <vmlinux.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+#include "../bpf_testmod/bpf_testmod.h"
+
+char _license[] SEC("license") = "GPL";
+
+int test_2_result = 0;
+
+SEC("struct_ops/test_1")
+int BPF_PROG(test_1)
+{
+ return 0xdeadbeef;
+}
+
+SEC("struct_ops/test_2")
+int BPF_PROG(test_2, int a, int b)
+{
+ test_2_result = a + b;
+ return a + b;
+}
+
+SEC(".struct_ops.link")
+struct bpf_testmod_ops testmod_1 = {
+ .test_1 = (void *)test_1,
+ .test_2 = (void *)test_2,
+};
+