Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'l2tp-misc-improvements'

James Chapman says:

====================
l2tp: misc improvements

This series makes several improvements to l2tp:

* update documentation to be consistent with recent l2tp changes.
* move l2tp_ip socket tables to per-net data.
* fix handling of hash key collisions in l2tp_v3_session_get
* implement and use get-next APIs for management and procfs/debugfs.
* improve l2tp refcount helpers.
* use per-cpu dev->tstats in l2tpeth devices.
* fix a lockdep splat.
* fix a race between l2tp_pre_exit_net and pppol2tp_release.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>

+437 -251
+20 -34
Documentation/networking/l2tp.rst
··· 638 638 L2TPv2 and 32-bit for L2TPv3. Internally, the id is stored as a 32-bit 639 639 value. 640 640 641 - Tunnels are kept in a per-net list, indexed by tunnel id. The tunnel 642 - id namespace is shared by L2TPv2 and L2TPv3. The tunnel context can be 643 - derived from the socket's sk_user_data. 641 + Tunnels are kept in a per-net list, indexed by tunnel id. The 642 + tunnel id namespace is shared by L2TPv2 and L2TPv3. 644 643 645 644 Handling tunnel socket close is perhaps the most tricky part of the 646 645 L2TP implementation. If userspace closes a tunnel socket, the L2TP ··· 651 652 its tunnel close actions. For L2TPIP sockets, the socket's close 652 653 handler initiates the same tunnel close actions. All sessions are 653 654 first closed. Each session drops its tunnel ref. When the tunnel ref 654 - reaches zero, the tunnel puts its socket ref. When the socket is 655 - eventually destroyed, its sk_destruct finally frees the L2TP tunnel 656 - context. 655 + reaches zero, the tunnel drops its socket ref. 657 656 658 657 Sessions 659 658 -------- ··· 664 667 Relay. Linux currently implements only Ethernet and PPP session types. 665 668 666 669 Some L2TP session types also have a socket (PPP pseudowires) while 667 - others do not (Ethernet pseudowires). We can't therefore use the 668 - socket reference count as the reference count for session 669 - contexts. The L2TP implementation therefore has its own internal 670 - reference counts on the session contexts. 670 + others do not (Ethernet pseudowires). 671 671 672 672 Like tunnels, L2TP sessions are identified by a unique 673 673 session id. Just as with tunnel ids, the session id is 16-bit for ··· 674 680 Sessions hold a ref on their parent tunnel to ensure that the tunnel 675 681 stays extant while one or more sessions references it. 676 682 677 - Sessions are kept in a per-tunnel list, indexed by session id. 
L2TPv3 678 - sessions are also kept in a per-net list indexed by session id, 679 - because L2TPv3 session ids are unique across all tunnels and L2TPv3 680 - data packets do not contain a tunnel id in the header. This list is 681 - therefore needed to find the session context associated with a 682 - received data packet when the tunnel context cannot be derived from 683 - the tunnel socket. 683 + Sessions are kept in a per-net list. L2TPv2 sessions and L2TPv3 684 + sessions are stored in separate lists. L2TPv2 sessions are keyed 685 + by a 32-bit key made up of the 16-bit tunnel ID and 16-bit 686 + session ID. L2TPv3 sessions are keyed by the 32-bit session ID, since 687 + L2TPv3 session ids are unique across all tunnels. 684 688 685 689 Although the L2TPv3 RFC specifies that L2TPv3 session ids are not 686 - scoped by the tunnel, the kernel does not police this for L2TPv3 UDP 687 - tunnels and does not add sessions of L2TPv3 UDP tunnels into the 688 - per-net session list. In the UDP receive code, we must trust that the 689 - tunnel can be identified using the tunnel socket's sk_user_data and 690 - lookup the session in the tunnel's session list instead of the per-net 691 - session list. 690 + scoped by the tunnel, the Linux implementation has historically 691 + allowed this. Such session id collisions are supported using a per-net 692 + hash table keyed by sk and session ID. When looking up L2TPv3 693 + sessions, the list entry may link to multiple sessions with that 694 + session ID, in which case the session matching the given sk (tunnel) 695 + is used. 692 696 693 697 PPP 694 698 --- ··· 706 714 by closing its corresponding L2TP session. This is complicated because 707 715 it must consider racing with netlink session create/destroy requests 708 716 and pppol2tp_connect trying to reconnect with a session that is in the 709 - process of being closed. 
Unlike tunnels, PPP sessions do not hold a 710 - ref on their associated socket, so code must be careful to sock_hold 711 - the socket where necessary. For all the details, see commit 712 - 3d609342cc04129ff7568e19316ce3d7451a27e8. 717 + process of being closed. PPP sessions hold a ref on their associated 718 + socket in order that the socket remains extant while the session 719 + references it. 713 720 714 721 Ethernet 715 722 -------- ··· 752 761 753 762 The current implementation has a number of limitations: 754 763 755 - 1) Multiple UDP sockets with the same 5-tuple address cannot be 756 - used. The kernel's tunnel context is identified using private 757 - data associated with the socket so it is important that each 758 - socket is uniquely identified by its address. 759 - 760 - 2) Interfacing with openvswitch is not yet implemented. It may be 764 + 1) Interfacing with openvswitch is not yet implemented. It may be 761 765 useful to map OVS Ethernet and VLAN ports into L2TPv3 tunnels. 762 766 763 - 3) VLAN pseudowires are implemented using an ``l2tpethN`` interface 767 + 2) VLAN pseudowires are implemented using an ``l2tpethN`` interface 764 768 configured with a VLAN sub-interface. Since L2TPv3 VLAN 765 769 pseudowires carry one and only one VLAN, it may be better to use 766 770 a single netdevice rather than an ``l2tpethN`` and ``l2tpethN``:M
+135 -52
net/l2tp/l2tp_core.c
··· 117 117 struct hlist_head l2tp_v3_session_htable[16]; 118 118 }; 119 119 120 - static inline u32 l2tp_v2_session_key(u16 tunnel_id, u16 session_id) 120 + static u32 l2tp_v2_session_key(u16 tunnel_id, u16 session_id) 121 121 { 122 122 return ((u32)tunnel_id) << 16 | session_id; 123 123 } 124 124 125 - static inline unsigned long l2tp_v3_session_hashkey(struct sock *sk, u32 session_id) 125 + static unsigned long l2tp_v3_session_hashkey(struct sock *sk, u32 session_id) 126 126 { 127 127 return ((unsigned long)sk) + session_id; 128 128 } ··· 135 135 } 136 136 #endif 137 137 138 - static inline struct l2tp_net *l2tp_pernet(const struct net *net) 138 + static struct l2tp_net *l2tp_pernet(const struct net *net) 139 139 { 140 140 return net_generic(net, l2tp_net_id); 141 141 } ··· 170 170 { 171 171 trace_free_session(session); 172 172 if (session->tunnel) 173 - l2tp_tunnel_dec_refcount(session->tunnel); 173 + l2tp_tunnel_put(session->tunnel); 174 174 kfree_rcu(session, rcu); 175 175 } 176 176 ··· 197 197 } 198 198 EXPORT_SYMBOL_GPL(l2tp_sk_to_tunnel); 199 199 200 - void l2tp_tunnel_inc_refcount(struct l2tp_tunnel *tunnel) 201 - { 202 - refcount_inc(&tunnel->ref_count); 203 - } 204 - EXPORT_SYMBOL_GPL(l2tp_tunnel_inc_refcount); 205 - 206 - void l2tp_tunnel_dec_refcount(struct l2tp_tunnel *tunnel) 200 + void l2tp_tunnel_put(struct l2tp_tunnel *tunnel) 207 201 { 208 202 if (refcount_dec_and_test(&tunnel->ref_count)) 209 203 l2tp_tunnel_free(tunnel); 210 204 } 211 - EXPORT_SYMBOL_GPL(l2tp_tunnel_dec_refcount); 205 + EXPORT_SYMBOL_GPL(l2tp_tunnel_put); 212 206 213 - void l2tp_session_inc_refcount(struct l2tp_session *session) 214 - { 215 - refcount_inc(&session->ref_count); 216 - } 217 - EXPORT_SYMBOL_GPL(l2tp_session_inc_refcount); 218 - 219 - void l2tp_session_dec_refcount(struct l2tp_session *session) 207 + void l2tp_session_put(struct l2tp_session *session) 220 208 { 221 209 if (refcount_dec_and_test(&session->ref_count)) 222 210 l2tp_session_free(session); 223 211 } 
224 - EXPORT_SYMBOL_GPL(l2tp_session_dec_refcount); 212 + EXPORT_SYMBOL_GPL(l2tp_session_put); 225 213 226 214 /* Lookup a tunnel. A new reference is held on the returned tunnel. */ 227 215 struct l2tp_tunnel *l2tp_tunnel_get(const struct net *net, u32 tunnel_id) ··· 229 241 } 230 242 EXPORT_SYMBOL_GPL(l2tp_tunnel_get); 231 243 232 - struct l2tp_tunnel *l2tp_tunnel_get_nth(const struct net *net, int nth) 244 + struct l2tp_tunnel *l2tp_tunnel_get_next(const struct net *net, unsigned long *key) 233 245 { 234 246 struct l2tp_net *pn = l2tp_pernet(net); 235 - unsigned long tunnel_id, tmp; 236 - struct l2tp_tunnel *tunnel; 237 - int count = 0; 247 + struct l2tp_tunnel *tunnel = NULL; 238 248 239 249 rcu_read_lock_bh(); 240 - idr_for_each_entry_ul(&pn->l2tp_tunnel_idr, tunnel, tmp, tunnel_id) { 241 - if (tunnel && ++count > nth && 242 - refcount_inc_not_zero(&tunnel->ref_count)) { 250 + again: 251 + tunnel = idr_get_next_ul(&pn->l2tp_tunnel_idr, key); 252 + if (tunnel) { 253 + if (refcount_inc_not_zero(&tunnel->ref_count)) { 243 254 rcu_read_unlock_bh(); 244 255 return tunnel; 245 256 } 257 + (*key)++; 258 + goto again; 246 259 } 247 260 rcu_read_unlock_bh(); 248 261 249 262 return NULL; 250 263 } 251 - EXPORT_SYMBOL_GPL(l2tp_tunnel_get_nth); 264 + EXPORT_SYMBOL_GPL(l2tp_tunnel_get_next); 252 265 253 266 struct l2tp_session *l2tp_v3_session_get(const struct net *net, struct sock *sk, u32 session_id) 254 267 { ··· 280 291 */ 281 292 struct l2tp_tunnel *tunnel = READ_ONCE(session->tunnel); 282 293 283 - if (tunnel && tunnel->sock == sk && 294 + if (session->session_id == session_id && 295 + tunnel && tunnel->sock == sk && 284 296 refcount_inc_not_zero(&session->ref_count)) { 285 297 rcu_read_unlock_bh(); 286 298 return session; ··· 322 332 } 323 333 EXPORT_SYMBOL_GPL(l2tp_session_get); 324 334 325 - struct l2tp_session *l2tp_session_get_nth(struct l2tp_tunnel *tunnel, int nth) 335 + static struct l2tp_session *l2tp_v2_session_get_next(const struct net *net, 336 + u16 tid, 
337 + unsigned long *key) 326 338 { 327 - struct l2tp_session *session; 328 - int count = 0; 339 + struct l2tp_net *pn = l2tp_pernet(net); 340 + struct l2tp_session *session = NULL; 341 + 342 + /* Start searching within the range of the tid */ 343 + if (*key == 0) 344 + *key = l2tp_v2_session_key(tid, 0); 329 345 330 346 rcu_read_lock_bh(); 331 - list_for_each_entry_rcu(session, &tunnel->session_list, list) { 332 - if (++count > nth) { 333 - l2tp_session_inc_refcount(session); 347 + again: 348 + session = idr_get_next_ul(&pn->l2tp_v2_session_idr, key); 349 + if (session) { 350 + struct l2tp_tunnel *tunnel = READ_ONCE(session->tunnel); 351 + 352 + /* ignore sessions with id 0 as they are internal for pppol2tp */ 353 + if (session->session_id == 0) { 354 + (*key)++; 355 + goto again; 356 + } 357 + 358 + if (tunnel && tunnel->tunnel_id == tid && 359 + refcount_inc_not_zero(&session->ref_count)) { 334 360 rcu_read_unlock_bh(); 335 361 return session; 336 362 } 363 + 364 + (*key)++; 365 + if (tunnel->tunnel_id == tid) 366 + goto again; 337 367 } 338 368 rcu_read_unlock_bh(); 339 369 340 370 return NULL; 341 371 } 342 - EXPORT_SYMBOL_GPL(l2tp_session_get_nth); 372 + 373 + static struct l2tp_session *l2tp_v3_session_get_next(const struct net *net, 374 + u32 tid, struct sock *sk, 375 + unsigned long *key) 376 + { 377 + struct l2tp_net *pn = l2tp_pernet(net); 378 + struct l2tp_session *session = NULL; 379 + 380 + rcu_read_lock_bh(); 381 + again: 382 + session = idr_get_next_ul(&pn->l2tp_v3_session_idr, key); 383 + if (session && !hash_hashed(&session->hlist)) { 384 + struct l2tp_tunnel *tunnel = READ_ONCE(session->tunnel); 385 + 386 + if (tunnel && tunnel->tunnel_id == tid && 387 + refcount_inc_not_zero(&session->ref_count)) { 388 + rcu_read_unlock_bh(); 389 + return session; 390 + } 391 + 392 + (*key)++; 393 + goto again; 394 + } 395 + 396 + /* If we get here and session is non-NULL, the IDR entry may be one 397 + * where the session_id collides with one in another tunnel. 
Check 398 + * session_htable for a match. There can only be one session of a given 399 + * ID per tunnel so we can return as soon as a match is found. 400 + */ 401 + if (session && hash_hashed(&session->hlist)) { 402 + unsigned long hkey = l2tp_v3_session_hashkey(sk, session->session_id); 403 + u32 sid = session->session_id; 404 + 405 + hash_for_each_possible_rcu(pn->l2tp_v3_session_htable, session, 406 + hlist, hkey) { 407 + struct l2tp_tunnel *tunnel = READ_ONCE(session->tunnel); 408 + 409 + if (session->session_id == sid && 410 + tunnel && tunnel->tunnel_id == tid && 411 + refcount_inc_not_zero(&session->ref_count)) { 412 + rcu_read_unlock_bh(); 413 + return session; 414 + } 415 + } 416 + 417 + /* If no match found, the colliding session ID isn't in our 418 + * tunnel so try the next session ID. 419 + */ 420 + (*key)++; 421 + goto again; 422 + } 423 + 424 + rcu_read_unlock_bh(); 425 + 426 + return NULL; 427 + } 428 + 429 + struct l2tp_session *l2tp_session_get_next(const struct net *net, struct sock *sk, int pver, 430 + u32 tunnel_id, unsigned long *key) 431 + { 432 + if (pver == L2TP_HDR_VER_2) 433 + return l2tp_v2_session_get_next(net, tunnel_id, key); 434 + else 435 + return l2tp_v3_session_get_next(net, tunnel_id, sk, key); 436 + } 437 + EXPORT_SYMBOL_GPL(l2tp_session_get_next); 343 438 344 439 /* Lookup a session by interface name. 345 440 * This is very inefficient but is only used by management interfaces. 
··· 442 367 if (tunnel) { 443 368 list_for_each_entry_rcu(session, &tunnel->session_list, list) { 444 369 if (!strcmp(session->ifname, ifname)) { 445 - l2tp_session_inc_refcount(session); 370 + refcount_inc(&session->ref_count); 446 371 rcu_read_unlock_bh(); 447 372 448 373 return session; ··· 459 384 static void l2tp_session_coll_list_add(struct l2tp_session_coll_list *clist, 460 385 struct l2tp_session *session) 461 386 { 462 - l2tp_session_inc_refcount(session); 387 + refcount_inc(&session->ref_count); 463 388 WARN_ON_ONCE(session->coll_list); 464 389 session->coll_list = clist; 465 390 spin_lock(&clist->lock); ··· 545 470 spin_unlock(&clist->lock); 546 471 if (refcount_dec_and_test(&clist->ref_count)) 547 472 kfree(clist); 548 - l2tp_session_dec_refcount(session); 473 + l2tp_session_put(session); 549 474 } 550 475 } 551 476 ··· 594 519 goto out; 595 520 } 596 521 597 - l2tp_tunnel_inc_refcount(tunnel); 522 + refcount_inc(&tunnel->ref_count); 598 523 WRITE_ONCE(session->tunnel, tunnel); 599 524 list_add_rcu(&session->list, &tunnel->session_list); 600 525 ··· 1077 1002 1078 1003 if (!session || !session->recv_skb) { 1079 1004 if (session) 1080 - l2tp_session_dec_refcount(session); 1005 + l2tp_session_put(session); 1081 1006 1082 1007 /* Not found? 
Pass to userspace to deal with */ 1083 1008 goto pass; ··· 1091 1016 1092 1017 if (version == L2TP_HDR_VER_3 && 1093 1018 l2tp_v3_ensure_opt_in_linear(session, skb, &ptr, &optr)) { 1094 - l2tp_session_dec_refcount(session); 1019 + l2tp_session_put(session); 1095 1020 goto invalid; 1096 1021 } 1097 1022 1098 1023 l2tp_recv_common(session, skb, ptr, optr, hdrflags, length); 1099 - l2tp_session_dec_refcount(session); 1024 + l2tp_session_put(session); 1100 1025 1101 1026 return 0; 1102 1027 ··· 1396 1321 tunnel = l2tp_sk_to_tunnel(sk); 1397 1322 if (tunnel) { 1398 1323 l2tp_tunnel_delete(tunnel); 1399 - l2tp_tunnel_dec_refcount(tunnel); 1324 + l2tp_tunnel_put(tunnel); 1400 1325 } 1401 1326 } 1402 1327 ··· 1431 1356 1432 1357 l2tp_tunnel_remove(tunnel->l2tp_net, tunnel); 1433 1358 /* drop initial ref */ 1434 - l2tp_tunnel_dec_refcount(tunnel); 1359 + l2tp_tunnel_put(tunnel); 1435 1360 1436 1361 /* drop workqueue ref */ 1437 - l2tp_tunnel_dec_refcount(tunnel); 1362 + l2tp_tunnel_put(tunnel); 1438 1363 } 1439 1364 1440 1365 /* Create a socket for the tunnel, if one isn't set up by ··· 1622 1547 1623 1548 tunnel = l2tp_sk_to_tunnel(sk); 1624 1549 if (tunnel) { 1625 - l2tp_tunnel_dec_refcount(tunnel); 1550 + l2tp_tunnel_put(tunnel); 1626 1551 return -EBUSY; 1627 1552 } 1628 1553 ··· 1714 1639 { 1715 1640 if (!test_and_set_bit(0, &tunnel->dead)) { 1716 1641 trace_delete_tunnel(tunnel); 1717 - l2tp_tunnel_inc_refcount(tunnel); 1642 + refcount_inc(&tunnel->ref_count); 1718 1643 queue_work(l2tp_wq, &tunnel->del_work); 1719 1644 } 1720 1645 } ··· 1724 1649 { 1725 1650 if (!test_and_set_bit(0, &session->dead)) { 1726 1651 trace_delete_session(session); 1727 - l2tp_session_inc_refcount(session); 1652 + refcount_inc(&session->ref_count); 1728 1653 queue_work(l2tp_wq, &session->del_work); 1729 1654 } 1730 1655 } ··· 1742 1667 (*session->session_close)(session); 1743 1668 1744 1669 /* drop initial ref */ 1745 - l2tp_session_dec_refcount(session); 1670 + l2tp_session_put(session); 
1746 1671 1747 1672 /* drop workqueue ref */ 1748 - l2tp_session_dec_refcount(session); 1673 + l2tp_session_put(session); 1749 1674 } 1750 1675 1751 1676 /* We come here whenever a session's send_seq, cookie_len or ··· 1855 1780 } 1856 1781 rcu_read_unlock_bh(); 1857 1782 1858 - if (l2tp_wq) 1783 + if (l2tp_wq) { 1784 + /* ensure that all TUNNEL_DELETE work items are run before 1785 + * draining the work queue since TUNNEL_DELETE requests may 1786 + * queue SESSION_DELETE work items for each session in the 1787 + * tunnel. drain_workqueue may otherwise warn if SESSION_DELETE 1788 + * requests are queued while the work queue is being drained. 1789 + */ 1790 + __flush_workqueue(l2tp_wq); 1859 1791 drain_workqueue(l2tp_wq); 1792 + } 1860 1793 } 1861 1794 1862 1795 static __net_exit void l2tp_exit_net(struct net *net)
+5 -6
net/l2tp/l2tp_core.h
··· 209 209 } 210 210 211 211 /* Tunnel and session refcounts */ 212 - void l2tp_tunnel_inc_refcount(struct l2tp_tunnel *tunnel); 213 - void l2tp_tunnel_dec_refcount(struct l2tp_tunnel *tunnel); 214 - void l2tp_session_inc_refcount(struct l2tp_session *session); 215 - void l2tp_session_dec_refcount(struct l2tp_session *session); 212 + void l2tp_tunnel_put(struct l2tp_tunnel *tunnel); 213 + void l2tp_session_put(struct l2tp_session *session); 216 214 217 215 /* Tunnel and session lookup. 218 216 * These functions take a reference on the instances they return, so 219 217 * the caller must ensure that the reference is dropped appropriately. 220 218 */ 221 219 struct l2tp_tunnel *l2tp_tunnel_get(const struct net *net, u32 tunnel_id); 222 - struct l2tp_tunnel *l2tp_tunnel_get_nth(const struct net *net, int nth); 220 + struct l2tp_tunnel *l2tp_tunnel_get_next(const struct net *net, unsigned long *key); 223 221 224 222 struct l2tp_session *l2tp_v3_session_get(const struct net *net, struct sock *sk, u32 session_id); 225 223 struct l2tp_session *l2tp_v2_session_get(const struct net *net, u16 tunnel_id, u16 session_id); 226 224 struct l2tp_session *l2tp_session_get(const struct net *net, struct sock *sk, int pver, 227 225 u32 tunnel_id, u32 session_id); 228 - struct l2tp_session *l2tp_session_get_nth(struct l2tp_tunnel *tunnel, int nth); 226 + struct l2tp_session *l2tp_session_get_next(const struct net *net, struct sock *sk, int pver, 227 + u32 tunnel_id, unsigned long *key); 229 228 struct l2tp_session *l2tp_session_get_by_ifname(const struct net *net, 230 229 const char *ifname); 231 230
+13 -11
net/l2tp/l2tp_debugfs.c
··· 34 34 struct l2tp_dfs_seq_data { 35 35 struct net *net; 36 36 netns_tracker ns_tracker; 37 - int tunnel_idx; /* current tunnel */ 38 - int session_idx; /* index of session within current tunnel */ 37 + unsigned long tkey; /* lookup key of current tunnel */ 38 + unsigned long skey; /* lookup key of current session */ 39 39 struct l2tp_tunnel *tunnel; 40 40 struct l2tp_session *session; /* NULL means get next tunnel */ 41 41 }; ··· 44 44 { 45 45 /* Drop reference taken during previous invocation */ 46 46 if (pd->tunnel) 47 - l2tp_tunnel_dec_refcount(pd->tunnel); 47 + l2tp_tunnel_put(pd->tunnel); 48 48 49 - pd->tunnel = l2tp_tunnel_get_nth(pd->net, pd->tunnel_idx); 50 - pd->tunnel_idx++; 49 + pd->tunnel = l2tp_tunnel_get_next(pd->net, &pd->tkey); 50 + pd->tkey++; 51 51 } 52 52 53 53 static void l2tp_dfs_next_session(struct l2tp_dfs_seq_data *pd) 54 54 { 55 55 /* Drop reference taken during previous invocation */ 56 56 if (pd->session) 57 - l2tp_session_dec_refcount(pd->session); 57 + l2tp_session_put(pd->session); 58 58 59 - pd->session = l2tp_session_get_nth(pd->tunnel, pd->session_idx); 60 - pd->session_idx++; 59 + pd->session = l2tp_session_get_next(pd->net, pd->tunnel->sock, 60 + pd->tunnel->version, 61 + pd->tunnel->tunnel_id, &pd->skey); 62 + pd->skey++; 61 63 62 64 if (!pd->session) { 63 - pd->session_idx = 0; 65 + pd->skey = 0; 64 66 l2tp_dfs_next_tunnel(pd); 65 67 } 66 68 } ··· 111 109 * or l2tp_dfs_next_tunnel(). 112 110 */ 113 111 if (pd->session) { 114 - l2tp_session_dec_refcount(pd->session); 112 + l2tp_session_put(pd->session); 115 113 pd->session = NULL; 116 114 } 117 115 if (pd->tunnel) { 118 - l2tp_tunnel_dec_refcount(pd->tunnel); 116 + l2tp_tunnel_put(pd->tunnel); 119 117 pd->tunnel = NULL; 120 118 } 121 119 }
+15 -27
net/l2tp/l2tp_eth.c
··· 72 72 unsigned int len = skb->len; 73 73 int ret = l2tp_xmit_skb(session, skb); 74 74 75 - if (likely(ret == NET_XMIT_SUCCESS)) { 76 - DEV_STATS_ADD(dev, tx_bytes, len); 77 - DEV_STATS_INC(dev, tx_packets); 78 - } else { 75 + if (likely(ret == NET_XMIT_SUCCESS)) 76 + dev_sw_netstats_tx_add(dev, 1, len); 77 + else 79 78 DEV_STATS_INC(dev, tx_dropped); 80 - } 81 - return NETDEV_TX_OK; 82 - } 83 79 84 - static void l2tp_eth_get_stats64(struct net_device *dev, 85 - struct rtnl_link_stats64 *stats) 86 - { 87 - stats->tx_bytes = DEV_STATS_READ(dev, tx_bytes); 88 - stats->tx_packets = DEV_STATS_READ(dev, tx_packets); 89 - stats->tx_dropped = DEV_STATS_READ(dev, tx_dropped); 90 - stats->rx_bytes = DEV_STATS_READ(dev, rx_bytes); 91 - stats->rx_packets = DEV_STATS_READ(dev, rx_packets); 92 - stats->rx_errors = DEV_STATS_READ(dev, rx_errors); 80 + return NETDEV_TX_OK; 93 81 } 94 82 95 83 static const struct net_device_ops l2tp_eth_netdev_ops = { 96 84 .ndo_init = l2tp_eth_dev_init, 97 85 .ndo_uninit = l2tp_eth_dev_uninit, 98 86 .ndo_start_xmit = l2tp_eth_dev_xmit, 99 - .ndo_get_stats64 = l2tp_eth_get_stats64, 87 + .ndo_get_stats64 = dev_get_tstats64, 100 88 .ndo_set_mac_address = eth_mac_addr, 101 89 }; 102 90 ··· 100 112 dev->features |= NETIF_F_LLTX; 101 113 dev->netdev_ops = &l2tp_eth_netdev_ops; 102 114 dev->needs_free_netdev = true; 115 + dev->pcpu_stat_type = NETDEV_PCPU_STAT_TSTATS; 103 116 } 104 117 105 118 static void l2tp_eth_dev_recv(struct l2tp_session *session, struct sk_buff *skb, int data_len) ··· 127 138 if (!dev) 128 139 goto error_rcu; 129 140 130 - if (dev_forward_skb(dev, skb) == NET_RX_SUCCESS) { 131 - DEV_STATS_INC(dev, rx_packets); 132 - DEV_STATS_ADD(dev, rx_bytes, data_len); 133 - } else { 141 + if (dev_forward_skb(dev, skb) == NET_RX_SUCCESS) 142 + dev_sw_netstats_rx_add(dev, data_len); 143 + else 134 144 DEV_STATS_INC(dev, rx_errors); 135 - } 145 + 136 146 rcu_read_unlock(); 137 147 138 148 return; ··· 271 283 272 284 spriv = 
l2tp_session_priv(session); 273 285 274 - l2tp_session_inc_refcount(session); 286 + refcount_inc(&session->ref_count); 275 287 276 288 rtnl_lock(); 277 289 ··· 289 301 if (rc < 0) { 290 302 rtnl_unlock(); 291 303 l2tp_session_delete(session); 292 - l2tp_session_dec_refcount(session); 304 + l2tp_session_put(session); 293 305 free_netdev(dev); 294 306 295 307 return rc; ··· 300 312 301 313 rtnl_unlock(); 302 314 303 - l2tp_session_dec_refcount(session); 315 + l2tp_session_put(session); 304 316 305 317 __module_get(THIS_MODULE); 306 318 307 319 return 0; 308 320 309 321 err_sess_dev: 310 - l2tp_session_dec_refcount(session); 322 + l2tp_session_put(session); 311 323 free_netdev(dev); 312 324 err_sess: 313 - l2tp_session_dec_refcount(session); 325 + l2tp_session_put(session); 314 326 err: 315 327 return rc; 316 328 }
+87 -29
net/l2tp/l2tp_ip.c
··· 22 22 #include <net/tcp_states.h> 23 23 #include <net/protocol.h> 24 24 #include <net/xfrm.h> 25 + #include <net/net_namespace.h> 26 + #include <net/netns/generic.h> 25 27 26 28 #include "l2tp_core.h" 29 + 30 + /* per-net private data for this module */ 31 + static unsigned int l2tp_ip_net_id; 32 + struct l2tp_ip_net { 33 + rwlock_t l2tp_ip_lock; 34 + struct hlist_head l2tp_ip_table; 35 + struct hlist_head l2tp_ip_bind_table; 36 + }; 27 37 28 38 struct l2tp_ip_sock { 29 39 /* inet_sock has to be the first member of l2tp_ip_sock */ ··· 43 33 u32 peer_conn_id; 44 34 }; 45 35 46 - static DEFINE_RWLOCK(l2tp_ip_lock); 47 - static struct hlist_head l2tp_ip_table; 48 - static struct hlist_head l2tp_ip_bind_table; 49 - 50 - static inline struct l2tp_ip_sock *l2tp_ip_sk(const struct sock *sk) 36 + static struct l2tp_ip_sock *l2tp_ip_sk(const struct sock *sk) 51 37 { 52 38 return (struct l2tp_ip_sock *)sk; 39 + } 40 + 41 + static struct l2tp_ip_net *l2tp_ip_pernet(const struct net *net) 42 + { 43 + return net_generic(net, l2tp_ip_net_id); 53 44 } 54 45 55 46 static struct sock *__l2tp_ip_bind_lookup(const struct net *net, __be32 laddr, 56 47 __be32 raddr, int dif, u32 tunnel_id) 57 48 { 49 + struct l2tp_ip_net *pn = l2tp_ip_pernet(net); 58 50 struct sock *sk; 59 51 60 - sk_for_each_bound(sk, &l2tp_ip_bind_table) { 52 + sk_for_each_bound(sk, &pn->l2tp_ip_bind_table) { 61 53 const struct l2tp_ip_sock *l2tp = l2tp_ip_sk(sk); 62 54 const struct inet_sock *inet = inet_sk(sk); 63 55 int bound_dev_if; ··· 125 113 static int l2tp_ip_recv(struct sk_buff *skb) 126 114 { 127 115 struct net *net = dev_net(skb->dev); 116 + struct l2tp_ip_net *pn; 128 117 struct sock *sk; 129 118 u32 session_id; 130 119 u32 tunnel_id; ··· 133 120 struct l2tp_session *session; 134 121 struct l2tp_tunnel *tunnel = NULL; 135 122 struct iphdr *iph; 123 + 124 + pn = l2tp_ip_pernet(net); 136 125 137 126 if (!pskb_may_pull(skb, 4)) 138 127 goto discard; ··· 167 152 goto discard_sess; 168 153 169 154 
l2tp_recv_common(session, skb, ptr, optr, 0, skb->len); 170 - l2tp_session_dec_refcount(session); 155 + l2tp_session_put(session); 171 156 172 157 return 0; 173 158 ··· 182 167 tunnel_id = ntohl(*(__be32 *)&skb->data[4]); 183 168 iph = (struct iphdr *)skb_network_header(skb); 184 169 185 - read_lock_bh(&l2tp_ip_lock); 170 + read_lock_bh(&pn->l2tp_ip_lock); 186 171 sk = __l2tp_ip_bind_lookup(net, iph->daddr, iph->saddr, inet_iif(skb), 187 172 tunnel_id); 188 173 if (!sk) { 189 - read_unlock_bh(&l2tp_ip_lock); 174 + read_unlock_bh(&pn->l2tp_ip_lock); 190 175 goto discard; 191 176 } 192 177 sock_hold(sk); 193 - read_unlock_bh(&l2tp_ip_lock); 178 + read_unlock_bh(&pn->l2tp_ip_lock); 194 179 195 180 if (!xfrm4_policy_check(sk, XFRM_POLICY_IN, skb)) 196 181 goto discard_put; ··· 200 185 return sk_receive_skb(sk, skb, 1); 201 186 202 187 discard_sess: 203 - l2tp_session_dec_refcount(session); 188 + l2tp_session_put(session); 204 189 goto discard; 205 190 206 191 discard_put: ··· 213 198 214 199 static int l2tp_ip_hash(struct sock *sk) 215 200 { 201 + struct l2tp_ip_net *pn = l2tp_ip_pernet(sock_net(sk)); 202 + 216 203 if (sk_unhashed(sk)) { 217 - write_lock_bh(&l2tp_ip_lock); 218 - sk_add_node(sk, &l2tp_ip_table); 219 - write_unlock_bh(&l2tp_ip_lock); 204 + write_lock_bh(&pn->l2tp_ip_lock); 205 + sk_add_node(sk, &pn->l2tp_ip_table); 206 + write_unlock_bh(&pn->l2tp_ip_lock); 220 207 } 221 208 return 0; 222 209 } 223 210 224 211 static void l2tp_ip_unhash(struct sock *sk) 225 212 { 213 + struct l2tp_ip_net *pn = l2tp_ip_pernet(sock_net(sk)); 214 + 226 215 if (sk_unhashed(sk)) 227 216 return; 228 - write_lock_bh(&l2tp_ip_lock); 217 + write_lock_bh(&pn->l2tp_ip_lock); 229 218 sk_del_node_init(sk); 230 - write_unlock_bh(&l2tp_ip_lock); 219 + write_unlock_bh(&pn->l2tp_ip_lock); 231 220 } 232 221 233 222 static int l2tp_ip_open(struct sock *sk) ··· 245 226 246 227 static void l2tp_ip_close(struct sock *sk, long timeout) 247 228 { 248 - write_lock_bh(&l2tp_ip_lock); 229 + struct 
l2tp_ip_net *pn = l2tp_ip_pernet(sock_net(sk)); 230 + 231 + write_lock_bh(&pn->l2tp_ip_lock); 249 232 hlist_del_init(&sk->sk_bind_node); 250 233 sk_del_node_init(sk); 251 - write_unlock_bh(&l2tp_ip_lock); 234 + write_unlock_bh(&pn->l2tp_ip_lock); 252 235 sk_common_release(sk); 253 236 } 254 237 ··· 265 244 tunnel = l2tp_sk_to_tunnel(sk); 266 245 if (tunnel) { 267 246 l2tp_tunnel_delete(tunnel); 268 - l2tp_tunnel_dec_refcount(tunnel); 247 + l2tp_tunnel_put(tunnel); 269 248 } 270 249 } 271 250 ··· 274 253 struct inet_sock *inet = inet_sk(sk); 275 254 struct sockaddr_l2tpip *addr = (struct sockaddr_l2tpip *)uaddr; 276 255 struct net *net = sock_net(sk); 256 + struct l2tp_ip_net *pn; 277 257 int ret; 278 258 int chk_addr_ret; 279 259 ··· 305 283 if (chk_addr_ret == RTN_MULTICAST || chk_addr_ret == RTN_BROADCAST) 306 284 inet->inet_saddr = 0; /* Use device */ 307 285 308 - write_lock_bh(&l2tp_ip_lock); 286 + pn = l2tp_ip_pernet(net); 287 + write_lock_bh(&pn->l2tp_ip_lock); 309 288 if (__l2tp_ip_bind_lookup(net, addr->l2tp_addr.s_addr, 0, 310 289 sk->sk_bound_dev_if, addr->l2tp_conn_id)) { 311 - write_unlock_bh(&l2tp_ip_lock); 290 + write_unlock_bh(&pn->l2tp_ip_lock); 312 291 ret = -EADDRINUSE; 313 292 goto out; 314 293 } ··· 317 294 sk_dst_reset(sk); 318 295 l2tp_ip_sk(sk)->conn_id = addr->l2tp_conn_id; 319 296 320 - sk_add_bind_node(sk, &l2tp_ip_bind_table); 297 + sk_add_bind_node(sk, &pn->l2tp_ip_bind_table); 321 298 sk_del_node_init(sk); 322 - write_unlock_bh(&l2tp_ip_lock); 299 + write_unlock_bh(&pn->l2tp_ip_lock); 323 300 324 301 ret = 0; 325 302 sock_reset_flag(sk, SOCK_ZAPPED); ··· 333 310 static int l2tp_ip_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len) 334 311 { 335 312 struct sockaddr_l2tpip *lsa = (struct sockaddr_l2tpip *)uaddr; 313 + struct l2tp_ip_net *pn = l2tp_ip_pernet(sock_net(sk)); 336 314 int rc; 337 315 338 316 if (addr_len < sizeof(*lsa)) ··· 356 332 357 333 l2tp_ip_sk(sk)->peer_conn_id = lsa->l2tp_conn_id; 358 334 359 - 
write_lock_bh(&l2tp_ip_lock); 335 + write_lock_bh(&pn->l2tp_ip_lock); 360 336 hlist_del_init(&sk->sk_bind_node); 361 - sk_add_bind_node(sk, &l2tp_ip_bind_table); 362 - write_unlock_bh(&l2tp_ip_lock); 337 + sk_add_bind_node(sk, &pn->l2tp_ip_bind_table); 338 + write_unlock_bh(&pn->l2tp_ip_lock); 363 339 364 340 out_sk: 365 341 release_sock(sk); ··· 664 640 .handler = l2tp_ip_recv, 665 641 }; 666 642 643 + static __net_init int l2tp_ip_init_net(struct net *net) 644 + { 645 + struct l2tp_ip_net *pn = net_generic(net, l2tp_ip_net_id); 646 + 647 + rwlock_init(&pn->l2tp_ip_lock); 648 + INIT_HLIST_HEAD(&pn->l2tp_ip_table); 649 + INIT_HLIST_HEAD(&pn->l2tp_ip_bind_table); 650 + return 0; 651 + } 652 + 653 + static __net_exit void l2tp_ip_exit_net(struct net *net) 654 + { 655 + struct l2tp_ip_net *pn = l2tp_ip_pernet(net); 656 + 657 + write_lock_bh(&pn->l2tp_ip_lock); 658 + WARN_ON_ONCE(hlist_count_nodes(&pn->l2tp_ip_table) != 0); 659 + WARN_ON_ONCE(hlist_count_nodes(&pn->l2tp_ip_bind_table) != 0); 660 + write_unlock_bh(&pn->l2tp_ip_lock); 661 + } 662 + 663 + static struct pernet_operations l2tp_ip_net_ops = { 664 + .init = l2tp_ip_init_net, 665 + .exit = l2tp_ip_exit_net, 666 + .id = &l2tp_ip_net_id, 667 + .size = sizeof(struct l2tp_ip_net), 668 + }; 669 + 667 670 static int __init l2tp_ip_init(void) 668 671 { 669 672 int err; 670 673 671 674 pr_info("L2TP IP encapsulation support (L2TPv3)\n"); 672 675 676 + err = register_pernet_device(&l2tp_ip_net_ops); 677 + if (err) 678 + goto out; 679 + 673 680 err = proto_register(&l2tp_ip_prot, 1); 674 681 if (err != 0) 675 - goto out; 682 + goto out1; 676 683 677 684 err = inet_add_protocol(&l2tp_ip_protocol, IPPROTO_L2TP); 678 685 if (err) 679 - goto out1; 686 + goto out2; 680 687 681 688 inet_register_protosw(&l2tp_ip_protosw); 682 689 return 0; 683 690 684 - out1: 691 + out2: 685 692 proto_unregister(&l2tp_ip_prot); 693 + out1: 694 + unregister_pernet_device(&l2tp_ip_net_ops); 686 695 out: 687 696 return err; 688 697 } ··· 725 668 
inet_unregister_protosw(&l2tp_ip_protosw); 726 669 inet_del_protocol(&l2tp_ip_protocol, IPPROTO_L2TP); 727 670 proto_unregister(&l2tp_ip_prot); 671 + unregister_pernet_device(&l2tp_ip_net_ops); 728 672 } 729 673 730 674 module_init(l2tp_ip_init);
+89 -29
net/l2tp/l2tp_ip6.c
···
 #include <net/tcp_states.h>
 #include <net/protocol.h>
 #include <net/xfrm.h>
+#include <net/net_namespace.h>
+#include <net/netns/generic.h>

 #include <net/transp_v6.h>
 #include <net/addrconf.h>
 #include <net/ip6_route.h>

 #include "l2tp_core.h"
+
+/* per-net private data for this module */
+static unsigned int l2tp_ip6_net_id;
+struct l2tp_ip6_net {
+	rwlock_t l2tp_ip6_lock;
+	struct hlist_head l2tp_ip6_table;
+	struct hlist_head l2tp_ip6_bind_table;
+};

 struct l2tp_ip6_sock {
 	/* inet_sock has to be the first member of l2tp_ip6_sock */
···
 	struct ipv6_pinfo	inet6;
 };

-static DEFINE_RWLOCK(l2tp_ip6_lock);
-static struct hlist_head l2tp_ip6_table;
-static struct hlist_head l2tp_ip6_bind_table;
-
-static inline struct l2tp_ip6_sock *l2tp_ip6_sk(const struct sock *sk)
+static struct l2tp_ip6_sock *l2tp_ip6_sk(const struct sock *sk)
 {
 	return (struct l2tp_ip6_sock *)sk;
+}
+
+static struct l2tp_ip6_net *l2tp_ip6_pernet(const struct net *net)
+{
+	return net_generic(net, l2tp_ip6_net_id);
 }

 static struct sock *__l2tp_ip6_bind_lookup(const struct net *net,
···
 					   const struct in6_addr *raddr,
 					   int dif, u32 tunnel_id)
 {
+	struct l2tp_ip6_net *pn = l2tp_ip6_pernet(net);
 	struct sock *sk;

-	sk_for_each_bound(sk, &l2tp_ip6_bind_table) {
+	sk_for_each_bound(sk, &pn->l2tp_ip6_bind_table) {
 		const struct in6_addr *sk_laddr = inet6_rcv_saddr(sk);
 		const struct in6_addr *sk_raddr = &sk->sk_v6_daddr;
 		const struct l2tp_ip6_sock *l2tp = l2tp_ip6_sk(sk);
···
 static int l2tp_ip6_recv(struct sk_buff *skb)
 {
 	struct net *net = dev_net(skb->dev);
+	struct l2tp_ip6_net *pn;
 	struct sock *sk;
 	u32 session_id;
 	u32 tunnel_id;
···
 	struct l2tp_session *session;
 	struct l2tp_tunnel *tunnel = NULL;
 	struct ipv6hdr *iph;
+
+	pn = l2tp_ip6_pernet(net);

 	if (!pskb_may_pull(skb, 4))
 		goto discard;
···
 		goto discard_sess;

 	l2tp_recv_common(session, skb, ptr, optr, 0, skb->len);
-	l2tp_session_dec_refcount(session);
+	l2tp_session_put(session);

 	return 0;

···
 	tunnel_id = ntohl(*(__be32 *)&skb->data[4]);
 	iph = ipv6_hdr(skb);

-	read_lock_bh(&l2tp_ip6_lock);
+	read_lock_bh(&pn->l2tp_ip6_lock);
 	sk = __l2tp_ip6_bind_lookup(net, &iph->daddr, &iph->saddr,
 				    inet6_iif(skb), tunnel_id);
 	if (!sk) {
-		read_unlock_bh(&l2tp_ip6_lock);
+		read_unlock_bh(&pn->l2tp_ip6_lock);
 		goto discard;
 	}
 	sock_hold(sk);
-	read_unlock_bh(&l2tp_ip6_lock);
+	read_unlock_bh(&pn->l2tp_ip6_lock);

 	if (!xfrm6_policy_check(sk, XFRM_POLICY_IN, skb))
 		goto discard_put;
···
 	return sk_receive_skb(sk, skb, 1);

 discard_sess:
-	l2tp_session_dec_refcount(session);
+	l2tp_session_put(session);
 	goto discard;

 discard_put:
···

 static int l2tp_ip6_hash(struct sock *sk)
 {
+	struct l2tp_ip6_net *pn = l2tp_ip6_pernet(sock_net(sk));
+
 	if (sk_unhashed(sk)) {
-		write_lock_bh(&l2tp_ip6_lock);
-		sk_add_node(sk, &l2tp_ip6_table);
-		write_unlock_bh(&l2tp_ip6_lock);
+		write_lock_bh(&pn->l2tp_ip6_lock);
+		sk_add_node(sk, &pn->l2tp_ip6_table);
+		write_unlock_bh(&pn->l2tp_ip6_lock);
 	}
 	return 0;
 }

 static void l2tp_ip6_unhash(struct sock *sk)
 {
+	struct l2tp_ip6_net *pn = l2tp_ip6_pernet(sock_net(sk));
+
 	if (sk_unhashed(sk))
 		return;
-	write_lock_bh(&l2tp_ip6_lock);
+	write_lock_bh(&pn->l2tp_ip6_lock);
 	sk_del_node_init(sk);
-	write_unlock_bh(&l2tp_ip6_lock);
+	write_unlock_bh(&pn->l2tp_ip6_lock);
 }

 static int l2tp_ip6_open(struct sock *sk)
···

 static void l2tp_ip6_close(struct sock *sk, long timeout)
 {
-	write_lock_bh(&l2tp_ip6_lock);
+	struct l2tp_ip6_net *pn = l2tp_ip6_pernet(sock_net(sk));
+
+	write_lock_bh(&pn->l2tp_ip6_lock);
 	hlist_del_init(&sk->sk_bind_node);
 	sk_del_node_init(sk);
-	write_unlock_bh(&l2tp_ip6_lock);
+	write_unlock_bh(&pn->l2tp_ip6_lock);

 	sk_common_release(sk);
 }
···
 	tunnel = l2tp_sk_to_tunnel(sk);
 	if (tunnel) {
 		l2tp_tunnel_delete(tunnel);
-		l2tp_tunnel_dec_refcount(tunnel);
+		l2tp_tunnel_put(tunnel);
 	}
 }
···
 	struct ipv6_pinfo *np = inet6_sk(sk);
 	struct sockaddr_l2tpip6 *addr = (struct sockaddr_l2tpip6 *)uaddr;
 	struct net *net = sock_net(sk);
+	struct l2tp_ip6_net *pn;
 	__be32 v4addr = 0;
 	int bound_dev_if;
 	int addr_type;
 	int err;
+
+	pn = l2tp_ip6_pernet(net);

 	if (addr->l2tp_family != AF_INET6)
 		return -EINVAL;
···
 	}
 	rcu_read_unlock();

-	write_lock_bh(&l2tp_ip6_lock);
+	write_lock_bh(&pn->l2tp_ip6_lock);
 	if (__l2tp_ip6_bind_lookup(net, &addr->l2tp_addr, NULL, bound_dev_if,
 				   addr->l2tp_conn_id)) {
-		write_unlock_bh(&l2tp_ip6_lock);
+		write_unlock_bh(&pn->l2tp_ip6_lock);
 		err = -EADDRINUSE;
 		goto out_unlock;
 	}
···

 	l2tp_ip6_sk(sk)->conn_id = addr->l2tp_conn_id;

-	sk_add_bind_node(sk, &l2tp_ip6_bind_table);
+	sk_add_bind_node(sk, &pn->l2tp_ip6_bind_table);
 	sk_del_node_init(sk);
-	write_unlock_bh(&l2tp_ip6_lock);
+	write_unlock_bh(&pn->l2tp_ip6_lock);

 	sock_reset_flag(sk, SOCK_ZAPPED);
 	release_sock(sk);
···
 	struct in6_addr *daddr;
 	int addr_type;
 	int rc;
+	struct l2tp_ip6_net *pn;

 	if (addr_len < sizeof(*lsa))
 		return -EINVAL;
···

 	l2tp_ip6_sk(sk)->peer_conn_id = lsa->l2tp_conn_id;

-	write_lock_bh(&l2tp_ip6_lock);
+	pn = l2tp_ip6_pernet(sock_net(sk));
+	write_lock_bh(&pn->l2tp_ip6_lock);
 	hlist_del_init(&sk->sk_bind_node);
-	sk_add_bind_node(sk, &l2tp_ip6_bind_table);
-	write_unlock_bh(&l2tp_ip6_lock);
+	sk_add_bind_node(sk, &pn->l2tp_ip6_bind_table);
+	write_unlock_bh(&pn->l2tp_ip6_lock);

 out_sk:
 	release_sock(sk);
···
 	.handler = l2tp_ip6_recv,
 };

+static __net_init int l2tp_ip6_init_net(struct net *net)
+{
+	struct l2tp_ip6_net *pn = net_generic(net, l2tp_ip6_net_id);
+
+	rwlock_init(&pn->l2tp_ip6_lock);
+	INIT_HLIST_HEAD(&pn->l2tp_ip6_table);
+	INIT_HLIST_HEAD(&pn->l2tp_ip6_bind_table);
+	return 0;
+}
+
+static __net_exit void l2tp_ip6_exit_net(struct net *net)
+{
+	struct l2tp_ip6_net *pn = l2tp_ip6_pernet(net);
+
+	write_lock_bh(&pn->l2tp_ip6_lock);
+	WARN_ON_ONCE(hlist_count_nodes(&pn->l2tp_ip6_table) != 0);
+	WARN_ON_ONCE(hlist_count_nodes(&pn->l2tp_ip6_bind_table) != 0);
+	write_unlock_bh(&pn->l2tp_ip6_lock);
+}
+
+static struct pernet_operations l2tp_ip6_net_ops = {
+	.init = l2tp_ip6_init_net,
+	.exit = l2tp_ip6_exit_net,
+	.id   = &l2tp_ip6_net_id,
+	.size = sizeof(struct l2tp_ip6_net),
+};
+
 static int __init l2tp_ip6_init(void)
 {
 	int err;

 	pr_info("L2TP IP encapsulation support for IPv6 (L2TPv3)\n");

+	err = register_pernet_device(&l2tp_ip6_net_ops);
+	if (err)
+		goto out;
+
 	err = proto_register(&l2tp_ip6_prot, 1);
 	if (err != 0)
-		goto out;
+		goto out1;

 	err = inet6_add_protocol(&l2tp_ip6_protocol, IPPROTO_L2TP);
 	if (err)
-		goto out1;
+		goto out2;

 	inet6_register_protosw(&l2tp_ip6_protosw);
 	return 0;

-out1:
+out2:
 	proto_unregister(&l2tp_ip6_prot);
+out1:
+	unregister_pernet_device(&l2tp_ip6_net_ops);
 out:
 	return err;
 }
···
 	inet6_unregister_protosw(&l2tp_ip6_protosw);
 	inet6_del_protocol(&l2tp_ip6_protocol, IPPROTO_L2TP);
 	proto_unregister(&l2tp_ip6_prot);
+	unregister_pernet_device(&l2tp_ip6_net_ops);
 }

 module_init(l2tp_ip6_init);
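Both l2tp_ip and l2tp_ip6 move their socket tables from file-scope globals into per-net private data looked up via `net_generic()`, so each network namespace gets independent tables. A small userspace sketch of that scheme follows; `struct net`, `net_generic`, and the l2tp names here mimic the kernel's but are local stand-ins, not the real API:

```c
#include <assert.h>
#include <stddef.h>

#define MAX_SLOTS 8

/* Userspace analogue of a network namespace: an array of per-module
 * private slots, indexed by an id handed out at registration time. */
struct net {
	void *gen[MAX_SLOTS];	/* per-module private data slots */
};

/* The module's per-net state: a counter standing in for the socket
 * tables and their lock. */
struct l2tp_ip_net {
	int nr_bound;
};

static unsigned int l2tp_ip_net_id = 3;	/* slot "assigned" to this module */

/* Look up this module's private data in a given namespace,
 * like net_generic(net, l2tp_ip_net_id). */
static struct l2tp_ip_net *l2tp_ip_pernet(struct net *net)
{
	return net->gen[l2tp_ip_net_id];
}

/* Per-net init, analogous to l2tp_ip_init_net(): set up this
 * namespace's private state and install it in the slot. */
static void l2tp_ip_init_net(struct net *net, struct l2tp_ip_net *pn)
{
	pn->nr_bound = 0;
	net->gen[l2tp_ip_net_id] = pn;
}
```

The point of the pattern is that a bind in one namespace mutates only that namespace's table, while the code path stays identical for every namespace.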
+40 -32
net/l2tp/l2tp_netlink.c
···
 	if (tunnel) {
 		session = l2tp_session_get(net, tunnel->sock, tunnel->version,
 					   tunnel_id, session_id);
-		l2tp_tunnel_dec_refcount(tunnel);
+		l2tp_tunnel_put(tunnel);
 	}
 }
···
 	if (ret < 0)
 		goto out;

-	l2tp_tunnel_inc_refcount(tunnel);
+	refcount_inc(&tunnel->ref_count);
 	ret = l2tp_tunnel_register(tunnel, net, &cfg);
 	if (ret < 0) {
 		kfree(tunnel);
···
 	}
 	ret = l2tp_tunnel_notify(&l2tp_nl_family, info, tunnel,
 				 L2TP_CMD_TUNNEL_CREATE);
-	l2tp_tunnel_dec_refcount(tunnel);
+	l2tp_tunnel_put(tunnel);

 out:
 	return ret;
···

 	l2tp_tunnel_delete(tunnel);

-	l2tp_tunnel_dec_refcount(tunnel);
+	l2tp_tunnel_put(tunnel);

 out:
 	return ret;
···
 	ret = l2tp_tunnel_notify(&l2tp_nl_family, info,
 				 tunnel, L2TP_CMD_TUNNEL_MODIFY);

-	l2tp_tunnel_dec_refcount(tunnel);
+	l2tp_tunnel_put(tunnel);

 out:
 	return ret;
···
 	if (ret < 0)
 		goto err_nlmsg_tunnel;

-	l2tp_tunnel_dec_refcount(tunnel);
+	l2tp_tunnel_put(tunnel);

 	return genlmsg_unicast(net, msg, info->snd_portid);

 err_nlmsg_tunnel:
-	l2tp_tunnel_dec_refcount(tunnel);
+	l2tp_tunnel_put(tunnel);
 err_nlmsg:
 	nlmsg_free(msg);
 err:
 	return ret;
 }

+struct l2tp_nl_cb_data {
+	unsigned long tkey;
+	unsigned long skey;
+};
+
 static int l2tp_nl_cmd_tunnel_dump(struct sk_buff *skb, struct netlink_callback *cb)
 {
-	int ti = cb->args[0];
+	struct l2tp_nl_cb_data *cbd = (void *)&cb->ctx[0];
+	unsigned long key = cbd->tkey;
 	struct l2tp_tunnel *tunnel;
 	struct net *net = sock_net(skb->sk);

 	for (;;) {
-		tunnel = l2tp_tunnel_get_nth(net, ti);
+		tunnel = l2tp_tunnel_get_next(net, &key);
 		if (!tunnel)
 			goto out;

 		if (l2tp_nl_tunnel_send(skb, NETLINK_CB(cb->skb).portid,
 					cb->nlh->nlmsg_seq, NLM_F_MULTI,
 					tunnel, L2TP_CMD_TUNNEL_GET) < 0) {
-			l2tp_tunnel_dec_refcount(tunnel);
+			l2tp_tunnel_put(tunnel);
 			goto out;
 		}
-		l2tp_tunnel_dec_refcount(tunnel);
+		l2tp_tunnel_put(tunnel);

-		ti++;
+		key++;
 	}

 out:
-	cb->args[0] = ti;
+	cbd->tkey = key;

 	return skb->len;
 }
···
 	if (session) {
 		ret = l2tp_session_notify(&l2tp_nl_family, info, session,
 					  L2TP_CMD_SESSION_CREATE);
-		l2tp_session_dec_refcount(session);
+		l2tp_session_put(session);
 	}
 }

 out_tunnel:
-	l2tp_tunnel_dec_refcount(tunnel);
+	l2tp_tunnel_put(tunnel);
 out:
 	return ret;
 }
···
 	if (l2tp_nl_cmd_ops[pw_type] && l2tp_nl_cmd_ops[pw_type]->session_delete)
 		l2tp_nl_cmd_ops[pw_type]->session_delete(session);

-	l2tp_session_dec_refcount(session);
+	l2tp_session_put(session);

 out:
 	return ret;
···
 	ret = l2tp_session_notify(&l2tp_nl_family, info,
 				  session, L2TP_CMD_SESSION_MODIFY);

-	l2tp_session_dec_refcount(session);
+	l2tp_session_put(session);

 out:
 	return ret;
···

 	ret = genlmsg_unicast(genl_info_net(info), msg, info->snd_portid);

-	l2tp_session_dec_refcount(session);
+	l2tp_session_put(session);

 	return ret;

 err_ref_msg:
 	nlmsg_free(msg);
 err_ref:
-	l2tp_session_dec_refcount(session);
+	l2tp_session_put(session);
 err:
 	return ret;
 }

 static int l2tp_nl_cmd_session_dump(struct sk_buff *skb, struct netlink_callback *cb)
 {
+	struct l2tp_nl_cb_data *cbd = (void *)&cb->ctx[0];
 	struct net *net = sock_net(skb->sk);
 	struct l2tp_session *session;
 	struct l2tp_tunnel *tunnel = NULL;
-	int ti = cb->args[0];
-	int si = cb->args[1];
+	unsigned long tkey = cbd->tkey;
+	unsigned long skey = cbd->skey;

 	for (;;) {
 		if (!tunnel) {
-			tunnel = l2tp_tunnel_get_nth(net, ti);
+			tunnel = l2tp_tunnel_get_next(net, &tkey);
 			if (!tunnel)
 				goto out;
 		}

-		session = l2tp_session_get_nth(tunnel, si);
+		session = l2tp_session_get_next(net, tunnel->sock, tunnel->version,
+					       tunnel->tunnel_id, &skey);
 		if (!session) {
-			ti++;
-			l2tp_tunnel_dec_refcount(tunnel);
+			tkey++;
+			l2tp_tunnel_put(tunnel);
 			tunnel = NULL;
-			si = 0;
+			skey = 0;
 			continue;
 		}

 		if (l2tp_nl_session_send(skb, NETLINK_CB(cb->skb).portid,
 					 cb->nlh->nlmsg_seq, NLM_F_MULTI,
 					 session, L2TP_CMD_SESSION_GET) < 0) {
-			l2tp_session_dec_refcount(session);
-			l2tp_tunnel_dec_refcount(tunnel);
+			l2tp_session_put(session);
+			l2tp_tunnel_put(tunnel);
 			break;
 		}
-		l2tp_session_dec_refcount(session);
+		l2tp_session_put(session);

-		si++;
+		skey++;
 	}

 out:
-	cbd->tkey = tkey;
-	cb->args[1] = si;
+	cbd->tkey = tkey;
+	cbd->skey = skey;

 	return skb->len;
 }
+33 -31
net/l2tp/l2tp_ppp.c
···

 /* Helpers to obtain tunnel/session contexts from sockets.
  */
-static inline struct l2tp_session *pppol2tp_sock_to_session(struct sock *sk)
+static struct l2tp_session *pppol2tp_sock_to_session(struct sock *sk)
 {
 	struct l2tp_session *session;

···
 	l2tp_xmit_skb(session, skb);
 	local_bh_enable();

-	l2tp_session_dec_refcount(session);
+	l2tp_session_put(session);

 	return total_len;

 error_put_sess:
-	l2tp_session_dec_refcount(session);
+	l2tp_session_put(session);
 error:
 	return error;
 }
···
 	l2tp_xmit_skb(session, skb);
 	local_bh_enable();

-	l2tp_session_dec_refcount(session);
+	l2tp_session_put(session);

 	return 1;

 abort_put_sess:
-	l2tp_session_dec_refcount(session);
+	l2tp_session_put(session);
 abort:
 	/* Free the original skb */
 	kfree_skb(skb);
···
 		sock_put(ps->__sk);

 		/* drop ref taken when we referenced socket via sk_user_data */
-		l2tp_session_dec_refcount(session);
+		l2tp_session_put(session);
 	}
 }
···
 	if (session) {
 		l2tp_session_delete(session);
 		/* drop ref taken by pppol2tp_sock_to_session */
-		l2tp_session_dec_refcount(session);
+		l2tp_session_put(session);
 	}

 	release_sock(sk);
···
 		if (error < 0)
 			return ERR_PTR(error);

-		l2tp_tunnel_inc_refcount(tunnel);
+		refcount_inc(&tunnel->ref_count);
 		error = l2tp_tunnel_register(tunnel, net, &tcfg);
 		if (error < 0) {
 			kfree(tunnel);
···

 		/* Error if socket is not prepped */
 		if (!tunnel->sock) {
-			l2tp_tunnel_dec_refcount(tunnel);
+			l2tp_tunnel_put(tunnel);
 			return ERR_PTR(-ENOENT);
 		}
 	}
···

 	pppol2tp_session_init(session);
 	ps = l2tp_session_priv(session);
-	l2tp_session_inc_refcount(session);
+	refcount_inc(&session->ref_count);

 	mutex_lock(&ps->sk_lock);
 	error = l2tp_session_register(session, tunnel);
 	if (error < 0) {
 		mutex_unlock(&ps->sk_lock);
-		l2tp_session_dec_refcount(session);
+		l2tp_session_put(session);
 		goto end;
 	}
···
 		l2tp_tunnel_delete(tunnel);
 	}
 	if (drop_refcnt)
-		l2tp_session_dec_refcount(session);
-	l2tp_tunnel_dec_refcount(tunnel);
+		l2tp_session_put(session);
+	l2tp_tunnel_put(tunnel);
 	release_sock(sk);

 	return error;
···
 	return 0;

 err_sess:
-	l2tp_session_dec_refcount(session);
+	l2tp_session_put(session);
 err:
 	return error;
 }
···

 	error = len;

-	l2tp_session_dec_refcount(session);
+	l2tp_session_put(session);
 end:
 	return error;
 }
···
 		return -EBADR;

 	if (session->pwtype != L2TP_PWTYPE_PPP) {
-		l2tp_session_dec_refcount(session);
+		l2tp_session_put(session);
 		return -EBADR;
 	}

 	pppol2tp_copy_stats(stats, &session->stats);
-	l2tp_session_dec_refcount(session);
+	l2tp_session_put(session);

 	return 0;
 }
···
 		err = pppol2tp_session_setsockopt(sk, session, optname, val);
 	}

-	l2tp_session_dec_refcount(session);
+	l2tp_session_put(session);
 end:
 	return err;
 }
···
 	err = 0;

 end_put_sess:
-	l2tp_session_dec_refcount(session);
+	l2tp_session_put(session);
 end:
 	return err;
 }
···

 struct pppol2tp_seq_data {
 	struct seq_net_private p;
-	int tunnel_idx;			/* current tunnel */
-	int session_idx;		/* index of session within current tunnel */
+	unsigned long tkey;		/* lookup key of current tunnel */
+	unsigned long skey;		/* lookup key of current session */
 	struct l2tp_tunnel *tunnel;
 	struct l2tp_session *session;	/* NULL means get next tunnel */
 };
···
 {
 	/* Drop reference taken during previous invocation */
 	if (pd->tunnel)
-		l2tp_tunnel_dec_refcount(pd->tunnel);
+		l2tp_tunnel_put(pd->tunnel);

 	for (;;) {
-		pd->tunnel = l2tp_tunnel_get_nth(net, pd->tunnel_idx);
-		pd->tunnel_idx++;
+		pd->tunnel = l2tp_tunnel_get_next(net, &pd->tkey);
+		pd->tkey++;

 		/* Only accept L2TPv2 tunnels */
 		if (!pd->tunnel || pd->tunnel->version == 2)
 			return;

-		l2tp_tunnel_dec_refcount(pd->tunnel);
+		l2tp_tunnel_put(pd->tunnel);
 	}
 }
···
 {
 	/* Drop reference taken during previous invocation */
 	if (pd->session)
-		l2tp_session_dec_refcount(pd->session);
+		l2tp_session_put(pd->session);

-	pd->session = l2tp_session_get_nth(pd->tunnel, pd->session_idx);
-	pd->session_idx++;
+	pd->session = l2tp_session_get_next(net, pd->tunnel->sock,
+					    pd->tunnel->version,
+					    pd->tunnel->tunnel_id, &pd->skey);
+	pd->skey++;

 	if (!pd->session) {
-		pd->session_idx = 0;
+		pd->skey = 0;
 		pppol2tp_next_tunnel(net, pd);
 	}
 }
···
 	 * or pppol2tp_next_tunnel().
 	 */
 	if (pd->session) {
-		l2tp_session_dec_refcount(pd->session);
+		l2tp_session_put(pd->session);
 		pd->session = NULL;
 	}
 	if (pd->tunnel) {
-		l2tp_tunnel_dec_refcount(pd->tunnel);
+		l2tp_tunnel_put(pd->tunnel);
 		pd->tunnel = NULL;
 	}
 }
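Throughout the series, `l2tp_tunnel_dec_refcount()`/`l2tp_session_dec_refcount()` become `_put()` helpers, matching the usual kernel convention that a lookup returning a referenced object is paired with a put that destroys it on the last reference. A userspace sketch of that get/put shape follows; `struct obj`, `obj_get`, `obj_put` and the `freed` test hook are all invented names (the kernel uses `refcount_t` and RCU-deferred freeing, not a bare counter):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* Minimal refcounted object: the creator holds the initial reference,
 * lookups take extra references, and the last put frees the object. */
struct obj {
	int ref_count;	/* the kernel would use refcount_t here */
	bool *freed;	/* test hook: set when the object is destroyed */
};

static struct obj *obj_alloc(bool *freed)
{
	struct obj *o = malloc(sizeof(*o));

	o->ref_count = 1;	/* caller holds the initial reference */
	o->freed = freed;
	return o;
}

/* Take a reference, as a successful lookup helper would. */
static void obj_get(struct obj *o)
{
	o->ref_count++;
}

/* Release a reference; destroy the object when none remain. */
static void obj_put(struct obj *o)
{
	if (--o->ref_count == 0) {
		*o->freed = true;
		free(o);
	}
}
```

The renaming matters for readability in code like the dump loops above: every `l2tp_tunnel_get_next()` or `l2tp_session_get()` success is visibly balanced by a `_put()` on every exit path.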