Merge branch 'net_sched-fix-races-with-RCU-callbacks'

Cong Wang says:

====================
net_sched: fix races with RCU callbacks

Recently, the RCU callbacks used in TC filters and TC actions have kept
drawing my attention; they have introduced at least 4 race condition bugs:

1. A simple one fixed by Daniel:

commit c78e1746d3ad7d548bdf3fe491898cc453911a49
Author: Daniel Borkmann <daniel@iogearbox.net>
Date: Wed May 20 17:13:33 2015 +0200

net: sched: fix call_rcu() race on classifier module unloads

2. A very nasty one fixed by me:

commit 1697c4bb5245649a23f06a144cc38c06715e1b65
Author: Cong Wang <xiyou.wangcong@gmail.com>
Date: Mon Sep 11 16:33:32 2017 -0700

net_sched: carefully handle tcf_block_put()

3. Two more bugs found by Chris:
https://patchwork.ozlabs.org/patch/826696/
https://patchwork.ozlabs.org/patch/826695/

Usually RCU callbacks are simple, but for TC filters and TC actions they
are complex, because at least TC actions could be destroyed together with
the TC filter in one callback. RCU callbacks are also invoked in BH
context and, without locking, run in parallel with each other. All of
this contributes to these nasty bugs.

Alternatively, we could also:

a) Introduce a spinlock to serialize these RCU callbacks. But as I
noted in commit 1697c4bb5245 ("net_sched: carefully handle
tcf_block_put()"), this is very hard to do because of tcf_chain_dump();
making it work would take a lot of effort, if it is possible at all.

b) Just get rid of these RCU callbacks, because they are not strictly
necessary: the callers of these call_rcu() invocations are all on slow
paths and hold the RTNL lock, so blocking is allowed in their contexts.
However, David and Eric dislike adding synchronize_rcu() here.

As suggested by Paul, we can defer the work to a workqueue and regain
the permission to hold the RTNL lock without any performance impact.
However, in tcf_block_put() we could deadlock by flushing the workqueue
while holding the RTNL lock. The trick here is to defer the
tcf_block_put() work itself to the workqueue too, queued after all other
works, so that the same ordering is kept and any use-after-free is
avoided. Please see the first patch for details.
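
In outline, each conversion patch applies the pattern sketched below.
This is a minimal sketch with illustrative names, not the actual kernel
symbols; the real series routes everything through a dedicated
tc_filter_wq created with alloc_ordered_workqueue(), so queued works
also execute strictly in queueing order:

#include <linux/rtnetlink.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

static struct workqueue_struct *wq;	/* from alloc_ordered_workqueue() */

struct obj {
	union {
		/* The rcu_head and the work_struct are never live at
		 * the same time, so they can share storage.
		 */
		struct rcu_head rcu;
		struct work_struct work;
	};
};

/* Stage 2: runs in process context from the workqueue, so it may
 * sleep and take the RTNL lock before destroying filter state.
 */
static void obj_free_work(struct work_struct *work)
{
	struct obj *o = container_of(work, struct obj, work);

	rtnl_lock();
	/* ... destroy the filter and its actions here ... */
	rtnl_unlock();
	kfree(o);
}

/* Stage 1: the RCU callback runs in BH context where sleeping is
 * forbidden, so it only bounces the object to the workqueue.
 */
static void obj_free_rcu(struct rcu_head *rcu)
{
	struct obj *o = container_of(rcu, struct obj, rcu);

	INIT_WORK(&o->work, obj_free_work);
	queue_work(wq, &o->work);
}

Since rcu_barrier() guarantees that all previously issued call_rcu()
callbacks have finished, and hence have queued their works, queueing one
more work on the same ordered workqueue after the barrier makes it run
after all of them; this is how tcf_block_put() avoids both the deadlock
and the use-after-free.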

Patch 1 introduces the infrastructure, patches 2-12 move each
tc filter to the new tc filter workqueue, patch 13 adds
an assertion to catch potential bugs like these, patch 14
closes another RCU callback race, and patches 15 and 16 add
new test cases.
====================

Reported-by: Chris Mi <chrism@mellanox.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Jiri Pirko <jiri@resnulli.us>
Cc: John Fastabend <john.fastabend@gmail.com>
Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

+3
include/net/pkt_cls.h
···
 #define __NET_PKT_CLS_H

 #include <linux/pkt_cls.h>
+#include <linux/workqueue.h>
 #include <net/sch_generic.h>
 #include <net/act_api.h>

···
 int register_tcf_proto_ops(struct tcf_proto_ops *ops);
 int unregister_tcf_proto_ops(struct tcf_proto_ops *ops);
+
+bool tcf_queue_work(struct work_struct *work);

 #ifdef CONFIG_NET_CLS
 struct tcf_chain *tcf_chain_get(struct tcf_block *block, u32 chain_index,
+2
include/net/sch_generic.h
···
 #include <linux/dynamic_queue_limits.h>
 #include <linux/list.h>
 #include <linux/refcount.h>
+#include <linux/workqueue.h>
 #include <net/gen_stats.h>
 #include <net/rtnetlink.h>

···
 struct tcf_block {
 	struct list_head chain_list;
+	struct work_struct work;
 };

 static inline void qdisc_cb_private_validate(const struct sk_buff *skb, int sz)
+1
net/sched/act_sample.c
···
 static void __exit sample_cleanup_module(void)
 {
+	rcu_barrier();
 	tcf_unregister_action(&act_sample_ops, &sample_net_ops);
 }
+52 -17
net/sched/cls_api.c
···
 }
 EXPORT_SYMBOL(register_tcf_proto_ops);

+static struct workqueue_struct *tc_filter_wq;
+
 int unregister_tcf_proto_ops(struct tcf_proto_ops *ops)
 {
 	struct tcf_proto_ops *t;
···
 	 * tcf_proto_ops's destroy() handler.
 	 */
 	rcu_barrier();
+	flush_workqueue(tc_filter_wq);

 	write_lock(&cls_mod_lock);
 	list_for_each_entry(t, &tcf_proto_base, head) {
···
 	return rc;
 }
 EXPORT_SYMBOL(unregister_tcf_proto_ops);
+
+bool tcf_queue_work(struct work_struct *work)
+{
+	return queue_work(tc_filter_wq, work);
+}
+EXPORT_SYMBOL(tcf_queue_work);

 /* Select new prio value from the range, managed by kernel. */
···
 }
 EXPORT_SYMBOL(tcf_block_get);

-void tcf_block_put(struct tcf_block *block)
+static void tcf_block_put_final(struct work_struct *work)
 {
+	struct tcf_block *block = container_of(work, struct tcf_block, work);
 	struct tcf_chain *chain, *tmp;

-	if (!block)
-		return;
+	/* At this point, all the chains should have refcnt == 1. */
+	rtnl_lock();
+	list_for_each_entry_safe(chain, tmp, &block->chain_list, list)
+		tcf_chain_put(chain);
+	rtnl_unlock();
+	kfree(block);
+}

-	/* XXX: Standalone actions are not allowed to jump to any chain, and
-	 * bound actions should be all removed after flushing. However,
-	 * filters are destroyed in RCU callbacks, we have to hold the chains
-	 * first, otherwise we would always race with RCU callbacks on this list
-	 * without proper locking.
-	 */
+/* XXX: Standalone actions are not allowed to jump to any chain, and bound
+ * actions should be all removed after flushing. However, filters are destroyed
+ * in RCU callbacks, we have to hold the chains first, otherwise we would
+ * always race with RCU callbacks on this list without proper locking.
+ */
+static void tcf_block_put_deferred(struct work_struct *work)
+{
+	struct tcf_block *block = container_of(work, struct tcf_block, work);
+	struct tcf_chain *chain;

-	/* Wait for existing RCU callbacks to cool down. */
-	rcu_barrier();
-
+	rtnl_lock();
 	/* Hold a refcnt for all chains, except 0, in case they are gone. */
 	list_for_each_entry(chain, &block->chain_list, list)
 		if (chain->index)
···
 	list_for_each_entry(chain, &block->chain_list, list)
 		tcf_chain_flush(chain);

-	/* Wait for RCU callbacks to release the reference count. */
+	INIT_WORK(&block->work, tcf_block_put_final);
+	/* Wait for RCU callbacks to release the reference count and make
+	 * sure their works have been queued before this.
+	 */
 	rcu_barrier();
+	tcf_queue_work(&block->work);
+	rtnl_unlock();
+}

-	/* At this point, all the chains should have refcnt == 1. */
-	list_for_each_entry_safe(chain, tmp, &block->chain_list, list)
-		tcf_chain_put(chain);
-	kfree(block);
+void tcf_block_put(struct tcf_block *block)
+{
+	if (!block)
+		return;
+
+	INIT_WORK(&block->work, tcf_block_put_deferred);
+	/* Wait for existing RCU callbacks to cool down, make sure their works
+	 * have been queued before this. We can not flush pending works here
+	 * because we are holding the RTNL lock.
+	 */
+	rcu_barrier();
+	tcf_queue_work(&block->work);
 }
 EXPORT_SYMBOL(tcf_block_put);

···
 #ifdef CONFIG_NET_CLS_ACT
 	LIST_HEAD(actions);

+	ASSERT_RTNL();
 	tcf_exts_to_list(exts, &actions);
 	tcf_action_destroy(&actions, TCA_ACT_UNBIND);
 	kfree(exts->actions);
···
 static int __init tc_filter_init(void)
 {
+	tc_filter_wq = alloc_ordered_workqueue("tc_filter_workqueue", 0);
+	if (!tc_filter_wq)
+		return -ENOMEM;
+
 	rtnl_register(PF_UNSPEC, RTM_NEWTFILTER, tc_ctl_tfilter, NULL, 0);
 	rtnl_register(PF_UNSPEC, RTM_DELTFILTER, tc_ctl_tfilter, NULL, 0);
 	rtnl_register(PF_UNSPEC, RTM_GETTFILTER, tc_ctl_tfilter,
+18 -4
net/sched/cls_basic.c
···
 	struct tcf_result res;
 	struct tcf_proto *tp;
 	struct list_head link;
-	struct rcu_head rcu;
+	union {
+		struct work_struct work;
+		struct rcu_head rcu;
+	};
 };

 static int basic_classify(struct sk_buff *skb, const struct tcf_proto *tp,
···
 	return 0;
 }

+static void basic_delete_filter_work(struct work_struct *work)
+{
+	struct basic_filter *f = container_of(work, struct basic_filter, work);
+
+	rtnl_lock();
+	tcf_exts_destroy(&f->exts);
+	tcf_em_tree_destroy(&f->ematches);
+	rtnl_unlock();
+
+	kfree(f);
+}
+
 static void basic_delete_filter(struct rcu_head *head)
 {
 	struct basic_filter *f = container_of(head, struct basic_filter, rcu);

-	tcf_exts_destroy(&f->exts);
-	tcf_em_tree_destroy(&f->ematches);
-	kfree(f);
+	INIT_WORK(&f->work, basic_delete_filter_work);
+	tcf_queue_work(&f->work);
 }

 static void basic_destroy(struct tcf_proto *tp)
+17 -2
net/sched/cls_bpf.c
···
 	struct sock_filter *bpf_ops;
 	const char *bpf_name;
 	struct tcf_proto *tp;
-	struct rcu_head rcu;
+	union {
+		struct work_struct work;
+		struct rcu_head rcu;
+	};
 };

 static const struct nla_policy bpf_policy[TCA_BPF_MAX + 1] = {
···
 	kfree(prog);
 }

+static void cls_bpf_delete_prog_work(struct work_struct *work)
+{
+	struct cls_bpf_prog *prog = container_of(work, struct cls_bpf_prog, work);
+
+	rtnl_lock();
+	__cls_bpf_delete_prog(prog);
+	rtnl_unlock();
+}
+
 static void cls_bpf_delete_prog_rcu(struct rcu_head *rcu)
 {
-	__cls_bpf_delete_prog(container_of(rcu, struct cls_bpf_prog, rcu));
+	struct cls_bpf_prog *prog = container_of(rcu, struct cls_bpf_prog, rcu);
+
+	INIT_WORK(&prog->work, cls_bpf_delete_prog_work);
+	tcf_queue_work(&prog->work);
 }

 static void __cls_bpf_delete(struct tcf_proto *tp, struct cls_bpf_prog *prog)
+18 -4
net/sched/cls_cgroup.c
···
 	struct tcf_exts exts;
 	struct tcf_ematch_tree ematches;
 	struct tcf_proto *tp;
-	struct rcu_head rcu;
+	union {
+		struct work_struct work;
+		struct rcu_head rcu;
+	};
 };

 static int cls_cgroup_classify(struct sk_buff *skb, const struct tcf_proto *tp,
···
 	[TCA_CGROUP_EMATCHES]	= { .type = NLA_NESTED },
 };

+static void cls_cgroup_destroy_work(struct work_struct *work)
+{
+	struct cls_cgroup_head *head = container_of(work,
+						    struct cls_cgroup_head,
+						    work);
+	rtnl_lock();
+	tcf_exts_destroy(&head->exts);
+	tcf_em_tree_destroy(&head->ematches);
+	kfree(head);
+	rtnl_unlock();
+}
+
 static void cls_cgroup_destroy_rcu(struct rcu_head *root)
 {
 	struct cls_cgroup_head *head = container_of(root,
 						    struct cls_cgroup_head,
 						    rcu);

-	tcf_exts_destroy(&head->exts);
-	tcf_em_tree_destroy(&head->ematches);
-	kfree(head);
+	INIT_WORK(&head->work, cls_cgroup_destroy_work);
+	tcf_queue_work(&head->work);
 }

 static int cls_cgroup_change(struct net *net, struct sk_buff *in_skb,
+16 -3
net/sched/cls_flow.c
···
 	u32 divisor;
 	u32 baseclass;
 	u32 hashrnd;
-	struct rcu_head rcu;
+	union {
+		struct work_struct work;
+		struct rcu_head rcu;
+	};
 };

 static inline u32 addr_fold(void *addr)
···
 	[TCA_FLOW_PERTURB]	= { .type = NLA_U32 },
 };

-static void flow_destroy_filter(struct rcu_head *head)
+static void flow_destroy_filter_work(struct work_struct *work)
 {
-	struct flow_filter *f = container_of(head, struct flow_filter, rcu);
+	struct flow_filter *f = container_of(work, struct flow_filter, work);

+	rtnl_lock();
 	del_timer_sync(&f->perturb_timer);
 	tcf_exts_destroy(&f->exts);
 	tcf_em_tree_destroy(&f->ematches);
 	kfree(f);
+	rtnl_unlock();
+}
+
+static void flow_destroy_filter(struct rcu_head *head)
+{
+	struct flow_filter *f = container_of(head, struct flow_filter, rcu);
+
+	INIT_WORK(&f->work, flow_destroy_filter_work);
+	tcf_queue_work(&f->work);
 }

 static int flow_change(struct net *net, struct sk_buff *in_skb,
+16 -3
net/sched/cls_flower.c
···
 	struct list_head list;
 	u32 handle;
 	u32 flags;
-	struct rcu_head rcu;
+	union {
+		struct work_struct work;
+		struct rcu_head rcu;
+	};
 	struct net_device *hw_dev;
 };

···
 	return 0;
 }

+static void fl_destroy_filter_work(struct work_struct *work)
+{
+	struct cls_fl_filter *f = container_of(work, struct cls_fl_filter, work);
+
+	rtnl_lock();
+	tcf_exts_destroy(&f->exts);
+	kfree(f);
+	rtnl_unlock();
+}
+
 static void fl_destroy_filter(struct rcu_head *head)
 {
 	struct cls_fl_filter *f = container_of(head, struct cls_fl_filter, rcu);

-	tcf_exts_destroy(&f->exts);
-	kfree(f);
+	INIT_WORK(&f->work, fl_destroy_filter_work);
+	tcf_queue_work(&f->work);
 }

 static void fl_hw_destroy_filter(struct tcf_proto *tp, struct cls_fl_filter *f)
+16 -3
net/sched/cls_fw.c
···
 #endif /* CONFIG_NET_CLS_IND */
 	struct tcf_exts exts;
 	struct tcf_proto *tp;
-	struct rcu_head rcu;
+	union {
+		struct work_struct work;
+		struct rcu_head rcu;
+	};
 };

 static u32 fw_hash(u32 handle)
···
 	return 0;
 }

+static void fw_delete_filter_work(struct work_struct *work)
+{
+	struct fw_filter *f = container_of(work, struct fw_filter, work);
+
+	rtnl_lock();
+	tcf_exts_destroy(&f->exts);
+	kfree(f);
+	rtnl_unlock();
+}
+
 static void fw_delete_filter(struct rcu_head *head)
 {
 	struct fw_filter *f = container_of(head, struct fw_filter, rcu);

-	tcf_exts_destroy(&f->exts);
-	kfree(f);
+	INIT_WORK(&f->work, fw_delete_filter_work);
+	tcf_queue_work(&f->work);
 }

 static void fw_destroy(struct tcf_proto *tp)
+16 -3
net/sched/cls_matchall.c
···
 	struct tcf_result res;
 	u32 handle;
 	u32 flags;
-	struct rcu_head rcu;
+	union {
+		struct work_struct work;
+		struct rcu_head rcu;
+	};
 };

 static int mall_classify(struct sk_buff *skb, const struct tcf_proto *tp,
···
 	return 0;
 }

+static void mall_destroy_work(struct work_struct *work)
+{
+	struct cls_mall_head *head = container_of(work, struct cls_mall_head,
+						  work);
+	rtnl_lock();
+	tcf_exts_destroy(&head->exts);
+	kfree(head);
+	rtnl_unlock();
+}
+
 static void mall_destroy_rcu(struct rcu_head *rcu)
 {
 	struct cls_mall_head *head = container_of(rcu, struct cls_mall_head,
 						  rcu);

-	tcf_exts_destroy(&head->exts);
-	kfree(head);
+	INIT_WORK(&head->work, mall_destroy_work);
+	tcf_queue_work(&head->work);
 }

 static int mall_replace_hw_filter(struct tcf_proto *tp,
+16 -3
net/sched/cls_route.c
···
 	u32 handle;
 	struct route4_bucket *bkt;
 	struct tcf_proto *tp;
-	struct rcu_head rcu;
+	union {
+		struct work_struct work;
+		struct rcu_head rcu;
+	};
 };

 #define ROUTE4_FAILURE ((struct route4_filter *)(-1L))
···
 	return 0;
 }

+static void route4_delete_filter_work(struct work_struct *work)
+{
+	struct route4_filter *f = container_of(work, struct route4_filter, work);
+
+	rtnl_lock();
+	tcf_exts_destroy(&f->exts);
+	kfree(f);
+	rtnl_unlock();
+}
+
 static void route4_delete_filter(struct rcu_head *head)
 {
 	struct route4_filter *f = container_of(head, struct route4_filter, rcu);

-	tcf_exts_destroy(&f->exts);
-	kfree(f);
+	INIT_WORK(&f->work, route4_delete_filter_work);
+	tcf_queue_work(&f->work);
 }

 static void route4_destroy(struct tcf_proto *tp)
+16 -3
net/sched/cls_rsvp.h
···
 	u32 handle;
 	struct rsvp_session *sess;
-	struct rcu_head rcu;
+	union {
+		struct work_struct work;
+		struct rcu_head rcu;
+	};
 };

 static inline unsigned int hash_dst(__be32 *dst, u8 protocol, u8 tunnelid)
···
 	return -ENOBUFS;
 }

+static void rsvp_delete_filter_work(struct work_struct *work)
+{
+	struct rsvp_filter *f = container_of(work, struct rsvp_filter, work);
+
+	rtnl_lock();
+	tcf_exts_destroy(&f->exts);
+	kfree(f);
+	rtnl_unlock();
+}
+
 static void rsvp_delete_filter_rcu(struct rcu_head *head)
 {
 	struct rsvp_filter *f = container_of(head, struct rsvp_filter, rcu);

-	tcf_exts_destroy(&f->exts);
-	kfree(f);
+	INIT_WORK(&f->work, rsvp_delete_filter_work);
+	tcf_queue_work(&f->work);
 }

 static void rsvp_delete_filter(struct tcf_proto *tp, struct rsvp_filter *f)
+33 -5
net/sched/cls_tcindex.c
···
 struct tcindex_filter_result {
 	struct tcf_exts exts;
 	struct tcf_result res;
-	struct rcu_head rcu;
+	union {
+		struct work_struct work;
+		struct rcu_head rcu;
+	};
 };

 struct tcindex_filter {
 	u16 key;
 	struct tcindex_filter_result result;
 	struct tcindex_filter __rcu *next;
-	struct rcu_head rcu;
+	union {
+		struct work_struct work;
+		struct rcu_head rcu;
+	};
 };

···
 	return 0;
 }

+static void tcindex_destroy_rexts_work(struct work_struct *work)
+{
+	struct tcindex_filter_result *r;
+
+	r = container_of(work, struct tcindex_filter_result, work);
+	rtnl_lock();
+	tcf_exts_destroy(&r->exts);
+	rtnl_unlock();
+}
+
 static void tcindex_destroy_rexts(struct rcu_head *head)
 {
 	struct tcindex_filter_result *r;

 	r = container_of(head, struct tcindex_filter_result, rcu);
-	tcf_exts_destroy(&r->exts);
+	INIT_WORK(&r->work, tcindex_destroy_rexts_work);
+	tcf_queue_work(&r->work);
+}
+
+static void tcindex_destroy_fexts_work(struct work_struct *work)
+{
+	struct tcindex_filter *f = container_of(work, struct tcindex_filter,
+						work);
+
+	rtnl_lock();
+	tcf_exts_destroy(&f->result.exts);
+	kfree(f);
+	rtnl_unlock();
 }

 static void tcindex_destroy_fexts(struct rcu_head *head)
 {
 	struct tcindex_filter *f = container_of(head, struct tcindex_filter,
 						rcu);

-	tcf_exts_destroy(&f->result.exts);
-	kfree(f);
+	INIT_WORK(&f->work, tcindex_destroy_fexts_work);
+	tcf_queue_work(&f->work);
 }

 static int tcindex_delete(struct tcf_proto *tp, void *arg, bool *last)
+26 -3
net/sched/cls_u32.c
···
 	u32 __percpu *pcpu_success;
 #endif
 	struct tcf_proto *tp;
-	struct rcu_head rcu;
+	union {
+		struct work_struct work;
+		struct rcu_head rcu;
+	};
 	/* The 'sel' field MUST be the last field in structure to allow for
 	 * tc_u32_keys allocated at end of structure.
 	 */
···
  * this the u32_delete_key_rcu variant does not free the percpu
  * statistics.
  */
+static void u32_delete_key_work(struct work_struct *work)
+{
+	struct tc_u_knode *key = container_of(work, struct tc_u_knode, work);
+
+	rtnl_lock();
+	u32_destroy_key(key->tp, key, false);
+	rtnl_unlock();
+}
+
 static void u32_delete_key_rcu(struct rcu_head *rcu)
 {
 	struct tc_u_knode *key = container_of(rcu, struct tc_u_knode, rcu);

-	u32_destroy_key(key->tp, key, false);
+	INIT_WORK(&key->work, u32_delete_key_work);
+	tcf_queue_work(&key->work);
 }

 /* u32_delete_key_freepf_rcu is the rcu callback variant
···
  * for the variant that should be used with keys return from
  * u32_init_knode()
  */
+static void u32_delete_key_freepf_work(struct work_struct *work)
+{
+	struct tc_u_knode *key = container_of(work, struct tc_u_knode, work);
+
+	rtnl_lock();
+	u32_destroy_key(key->tp, key, true);
+	rtnl_unlock();
+}
+
 static void u32_delete_key_freepf_rcu(struct rcu_head *rcu)
 {
 	struct tc_u_knode *key = container_of(rcu, struct tc_u_knode, rcu);

-	u32_destroy_key(key->tp, key, true);
+	INIT_WORK(&key->work, u32_delete_key_freepf_work);
+	tcf_queue_work(&key->work);
 }

 static int u32_delete_key(struct tcf_proto *tp, struct tc_u_knode *key)
+21
tools/testing/selftests/tc-testing/tc-tests/filters/tests.json
··· 17 17 "teardown": [ 18 18 "$TC qdisc del dev $DEV1 ingress" 19 19 ] 20 + }, 21 + { 22 + "id": "d052", 23 + "name": "Add 1M filters with the same action", 24 + "category": [ 25 + "filter", 26 + "flower" 27 + ], 28 + "setup": [ 29 + "$TC qdisc add dev $DEV2 ingress", 30 + "./tdc_batch.py $DEV2 $BATCH_FILE --share_action -n 1000000" 31 + ], 32 + "cmdUnderTest": "$TC -b $BATCH_FILE", 33 + "expExitCode": "0", 34 + "verifyCmd": "$TC actions list action gact", 35 + "matchPattern": "action order 0: gact action drop.*index 1 ref 1000000 bind 1000000", 36 + "matchCount": "1", 37 + "teardown": [ 38 + "$TC qdisc del dev $DEV2 ingress", 39 + "/bin/rm $BATCH_FILE" 40 + ] 20 41 } 21 42 ]
+16 -4
tools/testing/selftests/tc-testing/tdc.py
···
         exit(1)


-def test_runner(filtered_tests):
+def test_runner(filtered_tests, args):
     """
     Driver function for the unit tests.

···
     for tidx in testlist:
         result = True
         tresult = ""
+        if "flower" in tidx["category"] and args.device == None:
+            continue
         print("Test " + tidx["id"] + ": " + tidx["name"])
         prepare_env(tidx["setup"])
         (p, procout) = exec_cmd(tidx["cmdUnderTest"])
···
     cmd = 'ip link set $DEV0 up'
     exec_cmd(cmd, False)
     cmd = 'ip -s $NS link set $DEV1 up'
+    exec_cmd(cmd, False)
+    cmd = 'ip link set $DEV2 netns $NS'
+    exec_cmd(cmd, False)
+    cmd = 'ip -s $NS link set $DEV2 up'
     exec_cmd(cmd, False)

···
                         help='Execute the single test case with specified ID')
     parser.add_argument('-i', '--id', action='store_true', dest='gen_id',
                         help='Generate ID numbers for new test cases')
-    return parser
+    parser.add_argument('-d', '--device',
+                        help='Execute the test case in flower category')
     return parser

···
     if args.path != None:
         NAMES['TC'] = args.path
+    if args.device != None:
+        NAMES['DEV2'] = args.device
     if not os.path.isfile(NAMES['TC']):
         print("The specified tc path " + NAMES['TC'] + " does not exist.")
         exit(1)
···
     if (len(alltests) == 0):
         print("Cannot find a test case with ID matching " + target_id)
         exit(1)
-    catresults = test_runner(alltests)
+    catresults = test_runner(alltests, args)
     print("All test results: " + "\n\n" + catresults)
 elif (len(target_category) > 0):
+    if (target_category == "flower") and args.device == None:
+        print("Please specify a NIC device (-d) to run category flower")
+        exit(1)
     if (target_category not in ucat):
         print("Specified category is not present in this file.")
         exit(1)
     else:
-        catresults = test_runner(testcases[target_category])
+        catresults = test_runner(testcases[target_category], args)
     print("Category " + target_category + "\n\n" + catresults)

 ns_destroy()
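
With this change, flower tests are skipped unless a device is supplied
via the new -d option. Assuming the existing -c category selector, a run
would look like the following (the NIC name is only a placeholder):

    ./tdc.py -c flower -d eth1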
+62
tools/testing/selftests/tc-testing/tdc_batch.py
···
+#!/usr/bin/python3
+
+"""
+tdc_batch.py - a script to generate TC batch file
+
+Copyright (C) 2017 Chris Mi <chrism@mellanox.com>
+"""
+
+import argparse
+
+parser = argparse.ArgumentParser(description='TC batch file generator')
+parser.add_argument("device", help="device name")
+parser.add_argument("file", help="batch file name")
+parser.add_argument("-n", "--number", type=int,
+                    help="how many lines in batch file")
+parser.add_argument("-o", "--skip_sw",
+                    help="skip_sw (offload), by default skip_hw",
+                    action="store_true")
+parser.add_argument("-s", "--share_action",
+                    help="all filters share the same action",
+                    action="store_true")
+parser.add_argument("-p", "--prio",
+                    help="all filters have different prio",
+                    action="store_true")
+args = parser.parse_args()
+
+device = args.device
+file = open(args.file, 'w')
+
+number = 1
+if args.number:
+    number = args.number
+
+skip = "skip_hw"
+if args.skip_sw:
+    skip = "skip_sw"
+
+share_action = ""
+if args.share_action:
+    share_action = "index 1"
+
+prio = "prio 1"
+if args.prio:
+    prio = ""
+    if number > 0x4000:
+        number = 0x4000
+
+index = 0
+for i in range(0x100):
+    for j in range(0x100):
+        for k in range(0x100):
+            mac = ("%02x:%02x:%02x" % (i, j, k))
+            src_mac = "e4:11:00:" + mac
+            dst_mac = "e4:12:00:" + mac
+            cmd = ("filter add dev %s %s protocol ip parent ffff: flower %s "
+                   "src_mac %s dst_mac %s action drop %s" %
+                   (device, prio, skip, src_mac, dst_mac, share_action))
+            file.write("%s\n" % cmd)
+            index += 1
+            if index >= number:
+                file.close()
+                exit(0)
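
For reference, with --share_action every generated filter binds the same
gact action via "index 1"; the first line of the batch file looks like
this (with the given device name substituted for $DEV2):

    filter add dev $DEV2 prio 1 protocol ip parent ffff: flower skip_hw src_mac e4:11:00:00:00:00 dst_mac e4:12:00:00:00:00 action drop index 1

Sharing one action index is what produces the "ref 1000000 bind 1000000"
refcount pattern that test d052's matchPattern checks for.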
+2
tools/testing/selftests/tc-testing/tdc_config.py
···
     # Name of veth devices to be created for the namespace
     'DEV0': 'v0p0',
     'DEV1': 'v0p1',
+    'DEV2': '',
+    'BATCH_FILE': './batch.txt',
     # Name of the namespace to use
     'NS': 'tcut'
 }