Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

net/dst: use a smaller percpu_counter batch for dst entries accounting

percpu_counter_add() uses a default batch size which is quite big
on platforms with 256 cpus: 2 * nr_cpus, i.e. 2 * 256 = 512.

Since each cpu can hold up to one batch of uncommitted updates,
dst_entries_get_fast() can be off by +/- nr_cpus * batch = 2*(nr_cpus^2)
(131072 on servers with 256 cpus)

Reduce the batch size to something more reasonable, and
add logic to ip6_dst_gc() to call dst_entries_get_slow()
before calling the _very_ expensive fib6_run_gc() function.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

authored by Eric Dumazet, committed by Jakub Kicinski
cf86a086 790709f2

+10 -5
+3 -1
include/net/dst_ops.h
@@ -53,9 +53,11 @@
 	return percpu_counter_sum_positive(&dst->pcpuc_entries);
 }
 
+#define DST_PERCPU_COUNTER_BATCH 32
 static inline void dst_entries_add(struct dst_ops *dst, int val)
 {
-	percpu_counter_add(&dst->pcpuc_entries, val);
+	percpu_counter_add_batch(&dst->pcpuc_entries, val,
+				 DST_PERCPU_COUNTER_BATCH);
 }
 
 static inline int dst_entries_init(struct dst_ops *dst)
+4 -4
net/core/dst.c
@@ -81,11 +81,11 @@
 {
 	struct dst_entry *dst;
 
-	if (ops->gc && dst_entries_get_fast(ops) > ops->gc_thresh) {
+	if (ops->gc &&
+	    !(flags & DST_NOCOUNT) &&
+	    dst_entries_get_fast(ops) > ops->gc_thresh) {
 		if (ops->gc(ops)) {
-			printk_ratelimited(KERN_NOTICE "Route cache is full: "
-					   "consider increasing sysctl "
-					   "net.ipv[4|6].route.max_size.\n");
+			pr_notice_ratelimited("Route cache is full: consider increasing sysctl net.ipv6.route.max_size.\n");
 			return NULL;
 		}
 	}
+3
net/ipv6/route.c
@@ -3195,6 +3195,9 @@
 	int entries;
 
 	entries = dst_entries_get_fast(ops);
+	if (entries > rt_max_size)
+		entries = dst_entries_get_slow(ops);
+
 	if (time_after(rt_last_gc + rt_min_interval, jiffies) &&
 	    entries <= rt_max_size)
 		goto out;