Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

selftests/bpf: Make sure stashed kptr in local kptr is freed recursively

When dropping a local kptr, any kptr stashed into it is supposed to be
freed through bpf_obj_free_fields->__bpf_obj_drop_impl recursively. Add a
test to make sure it happens.

The test first stashes a referenced kptr to a struct task_struct into a local
kptr and gets the reference count of the task. Then, it drops the local
kptr and reads the reference count of the task again. Since
bpf_obj_free_fields and __bpf_obj_drop_impl will go through the local kptr
recursively during bpf_obj_drop, the dtor of the stashed task kptr should
eventually be called. The second reference count should be one less than
the first one.

Signed-off-by: Amery Hung <amery.hung@bytedance.com>
Acked-by: Martin KaFai Lau <martin.lau@kernel.org>
Link: https://lore.kernel.org/r/20240827011301.608620-1-amery.hung@bytedance.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>

Authored by Amery Hung, committed by Alexei Starovoitov (bd0b4836 c634d6f4)

+29 -1
tools/testing/selftests/bpf/progs/task_kfunc_success.c
···
 SEC("tp_btf/task_newtask")
 int BPF_PROG(test_task_xchg_release, struct task_struct *task, u64 clone_flags)
 {
-	struct task_struct *kptr;
+	struct task_struct *kptr, *acquired;
 	struct __tasks_kfunc_map_value *v, *local;
+	int refcnt, refcnt_after_drop;
 	long status;

 	if (!is_test_kfunc_task())
···
 		return 0;
 	}

+	/* Stash a copy into local kptr and check if it is released recursively */
+	acquired = bpf_task_acquire(kptr);
+	if (!acquired) {
+		err = 7;
+		bpf_obj_drop(local);
+		bpf_task_release(kptr);
+		return 0;
+	}
+	bpf_probe_read_kernel(&refcnt, sizeof(refcnt), &acquired->rcu_users);
+
+	acquired = bpf_kptr_xchg(&local->task, acquired);
+	if (acquired) {
+		err = 8;
+		bpf_obj_drop(local);
+		bpf_task_release(kptr);
+		bpf_task_release(acquired);
+		return 0;
+	}
+
 	bpf_obj_drop(local);
+
+	bpf_probe_read_kernel(&refcnt_after_drop, sizeof(refcnt_after_drop), &kptr->rcu_users);
+	if (refcnt != refcnt_after_drop + 1) {
+		err = 9;
+		bpf_task_release(kptr);
+		return 0;
+	}
+
 	bpf_task_release(kptr);

 	return 0;