Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

bpf: Allow reuse from waiting_for_gp_ttrace list.

alloc_bulk() can reuse elements from free_by_rcu_ttrace.
Let it reuse from waiting_for_gp_ttrace as well to avoid unnecessary kmalloc().
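The refill order the patch establishes can be sketched in userspace C. This is an illustration, not kernel code: `pop()`/`push()` stand in for `llist_del_first()`/`llist_add()`, `malloc()` stands in for `kmalloc()`, and the `cache` struct only mirrors the relevant `struct bpf_mem_cache` fields.

```c
#include <stdlib.h>

struct node { struct node *next; };

struct cache {
	struct node *free_llist;            /* per-cpu free list */
	struct node *free_by_rcu_ttrace;    /* freed, GP not yet started */
	struct node *waiting_for_gp_ttrace; /* freed, GP in flight */
};

static struct node *pop(struct node **head)
{
	struct node *n = *head;
	if (n)
		*head = n->next;
	return n;
}

static void push(struct node **head, struct node *n)
{
	n->next = *head;
	*head = n;
}

/* Refill up to cnt elements: drain free_by_rcu_ttrace first,
 * then waiting_for_gp_ttrace, and only then allocate fresh
 * memory.  Returns how many allocations could not be avoided.
 */
static int alloc_bulk(struct cache *c, int cnt)
{
	int i = 0, fresh = 0;
	struct node *obj;

	for (; i < cnt; i++) {
		obj = pop(&c->free_by_rcu_ttrace);
		if (!obj)
			break;
		push(&c->free_llist, obj);
	}
	for (; i < cnt; i++) {
		obj = pop(&c->waiting_for_gp_ttrace);
		if (!obj)
			break;
		push(&c->free_llist, obj);
	}
	for (; i < cnt; i++) {
		obj = malloc(sizeof(*obj));
		if (obj) {
			push(&c->free_llist, obj);
			fresh++;
		}
	}
	return fresh;
}
```

With two elements sitting on each reuse list, a refill of five only has to allocate one fresh object; before this patch the two elements on `waiting_for_gp_ttrace` would have been ignored and three allocations made instead.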

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20230706033447.54696-10-alexei.starovoitov@gmail.com

Authored by Alexei Starovoitov, committed by Daniel Borkmann · 04fabf00 822fb26b

+10 -6
kernel/bpf/memalloc.c
@@ -212,6 +212,15 @@
 	if (i >= cnt)
 		return;
 
+	for (; i < cnt; i++) {
+		obj = llist_del_first(&c->waiting_for_gp_ttrace);
+		if (!obj)
+			break;
+		add_obj_to_free_list(c, obj);
+	}
+	if (i >= cnt)
+		return;
+
 	memcg = get_memcg(c);
 	old_memcg = set_active_memcg(memcg);
 	for (; i < cnt; i++) {
@@ -304,12 +295,7 @@
 
 	WARN_ON_ONCE(!llist_empty(&c->waiting_for_gp_ttrace));
 	llist_for_each_safe(llnode, t, llist_del_all(&c->free_by_rcu_ttrace))
-		/* There is no concurrent __llist_add(waiting_for_gp_ttrace) access.
-		 * It doesn't race with llist_del_all either.
-		 * But there could be two concurrent llist_del_all(waiting_for_gp_ttrace):
-		 * from __free_rcu() and from drain_mem_cache().
-		 */
-		__llist_add(llnode, &c->waiting_for_gp_ttrace);
+		llist_add(llnode, &c->waiting_for_gp_ttrace);
 
 	if (unlikely(READ_ONCE(c->draining))) {
 		__free_rcu(&c->rcu_ttrace);
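The second hunk switches `__llist_add()` to `llist_add()`: once alloc_bulk() may concurrently `llist_del_first()` from waiting_for_gp_ttrace, the non-atomic add (and the comment justifying it) no longer holds. A userspace C11 sketch of the distinction, with hypothetical names `list_add_atomic`/`list_del_first` standing in for the kernel's llist primitives:

```c
#include <stdatomic.h>
#include <stddef.h>

struct lnode { struct lnode *next; };
struct lhead { _Atomic(struct lnode *) first; };

/* llist_add-style push: a cmpxchg loop, safe against a concurrent
 * consumer popping from the same list.
 */
static void list_add_atomic(struct lhead *h, struct lnode *n)
{
	struct lnode *first = atomic_load(&h->first);

	do {
		n->next = first;
	} while (!atomic_compare_exchange_weak(&h->first, &first, n));
}

/* llist_del_first-style pop.  Like the kernel primitive, this is
 * only safe with a single concurrent consumer (ABA hazard otherwise).
 */
static struct lnode *list_del_first(struct lhead *h)
{
	struct lnode *first = atomic_load(&h->first);

	do {
		if (!first)
			return NULL;
	} while (!atomic_compare_exchange_weak(&h->first, &first, first->next));
	return first;
}
```

A `__llist_add`-style variant would be a plain `n->next = h->first; h->first = n;` with no cmpxchg, which is cheaper but valid only while no other CPU can touch the list; that precondition is exactly what this patch removes.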