Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Pull networking fixes from David Miller:

1) Always increment IPV4 ID field in encapsulated GSO packets, even
when DF is set. Regression fix from Pravin B Shelar.

2) Fix per-net subsystem initialization in netfilter conntrack,
otherwise we may access dynamically allocated memory before it is
actually allocated. From Gao Feng.

3) Fix DMA buffer lengths in iwl3945 driver, from Stanislaw Gruszka.

4) Fix race between submission of sync vs async commands in mwifiex
driver, from Amitkumar Karwar.

5) Add missing cancel of command timer in mwifiex driver, from Bing
Zhao.

6) Missing SKB free in rtlwifi USB driver, from Jussi Kivilinna.

7) Thermal layer tries to use a genetlink multicast string that is
longer than the 16 character limit. Fix it and add a BUG check to
prevent this kind of thing from happening in the future.

From Masatake YAMATO.

8) Fix many bugs in the handling of the teardown of L2TP connections,
UDP encapsulation instances, and sockets. From Tom Parkin.

9) Missing socket release in IRDA, from Kees Cook.

10) Fix fec driver modular build, from Fabio Estevam.

11) Erroneous use of kfree() instead of free_netdev() in lantiq_etop,
from Wei Yongjun.

12) Fix bugs in handling of queue numbers and steering rules in mlx4
driver, from Moshe Lazer, Hadar Hen Zion, and Or Gerlitz.

13) Some FOO_DIAG_MAX constants were defined off by one, fix from Andrey
Vagin.

14) TCP segmentation deferral is unintentionally done too strongly,
breaking ACK clocking. Fix from Eric Dumazet.

15) net_enable_timestamp() can legitimately be invoked from software
interrupts, and in a way that is safe, so remove the WARN_ON().
Also from Eric Dumazet.

16) Fix use after free in VLANs, from Cong Wang.

17) Fix TCP slow start retransmit storms after SACK reneging, from
Yuchung Cheng.

18) Unix socket release should mark a socket dead before NULL'ing out
sock->sk, otherwise we can race. Fix from Paul Moore.

19) IPV6 addrconf code can try to free static memory, from Hong Zhiguo.

20) Fix register mis-programming, NULL pointer derefs, and wrong PHC
clock frequency in IGB driver. From Lior Levy, Alex Williamson,
Jiri Benc, and Jeff Kirsher.

21) skb->ip_summed logic in pch_gbe driver is reversed, breaking packet
forwarding. Fix from Veaceslav Falico.

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (65 commits)
ipv4: Fix ip-header identification for gso packets.
bonding: remove already created master sysfs link on failure
af_unix: dont send SCM_CREDENTIAL when dest socket is NULL
pch_gbe: fix ip_summed checksum reporting on rx
igb: fix PHC stopping on max freq
igb: make sensor info static
igb: SR-IOV init reordering
igb: Fix null pointer dereference
igb: fix i350 anti spoofing config
ixgbevf: don't release the soft entries
ipv6: fix bad free of addrconf_init_net
unix: fix a race condition in unix_release()
tcp: undo spurious timeout after SACK reneging
bnx2x: fix assignment of signed expression to unsigned variable
bridge: fix crash when set mac address of br interface
8021q: fix a potential use-after-free
net: remove a WARN_ON() in net_enable_timestamp()
tcp: preserve ACK clocking in TSO
net: fix *_DIAG_MAX constants
net/mlx4_core: Disallow releasing VF QPs which have steering rules
...

+542 -461
+7
Documentation/networking/ipvs-sysctl.txt
··· 15 15 enabled and the variable is automatically set to 2, otherwise 16 16 the strategy is disabled and the variable is set to 1. 17 17 18 + backup_only - BOOLEAN 19 + 0 - disabled (default) 20 + not 0 - enabled 21 + 22 + If set, disable the director function while the server is 23 + in backup mode to avoid packet loops for DR/TUN methods. 24 + 18 25 conntrack - BOOLEAN 19 26 0 - disabled (default) 20 27 not 0 - enabled
+4
drivers/bluetooth/ath3k.c
··· 73 73 { USB_DEVICE(0x03F0, 0x311D) }, 74 74 75 75 /* Atheros AR3012 with sflash firmware*/ 76 + { USB_DEVICE(0x0CF3, 0x0036) }, 76 77 { USB_DEVICE(0x0CF3, 0x3004) }, 77 78 { USB_DEVICE(0x0CF3, 0x3008) }, 78 79 { USB_DEVICE(0x0CF3, 0x311D) }, 80 + { USB_DEVICE(0x0CF3, 0x817a) }, 79 81 { USB_DEVICE(0x13d3, 0x3375) }, 80 82 { USB_DEVICE(0x04CA, 0x3004) }, 81 83 { USB_DEVICE(0x04CA, 0x3005) }, ··· 109 107 static struct usb_device_id ath3k_blist_tbl[] = { 110 108 111 109 /* Atheros AR3012 with sflash firmware*/ 110 + { USB_DEVICE(0x0CF3, 0x0036), .driver_info = BTUSB_ATH3012 }, 112 111 { USB_DEVICE(0x0cf3, 0x3004), .driver_info = BTUSB_ATH3012 }, 113 112 { USB_DEVICE(0x0cf3, 0x3008), .driver_info = BTUSB_ATH3012 }, 114 113 { USB_DEVICE(0x0cf3, 0x311D), .driver_info = BTUSB_ATH3012 }, 114 + { USB_DEVICE(0x0CF3, 0x817a), .driver_info = BTUSB_ATH3012 }, 115 115 { USB_DEVICE(0x13d3, 0x3375), .driver_info = BTUSB_ATH3012 }, 116 116 { USB_DEVICE(0x04ca, 0x3004), .driver_info = BTUSB_ATH3012 }, 117 117 { USB_DEVICE(0x04ca, 0x3005), .driver_info = BTUSB_ATH3012 },
+2
drivers/bluetooth/btusb.c
··· 131 131 { USB_DEVICE(0x03f0, 0x311d), .driver_info = BTUSB_IGNORE }, 132 132 133 133 /* Atheros 3012 with sflash firmware */ 134 + { USB_DEVICE(0x0cf3, 0x0036), .driver_info = BTUSB_ATH3012 }, 134 135 { USB_DEVICE(0x0cf3, 0x3004), .driver_info = BTUSB_ATH3012 }, 135 136 { USB_DEVICE(0x0cf3, 0x3008), .driver_info = BTUSB_ATH3012 }, 136 137 { USB_DEVICE(0x0cf3, 0x311d), .driver_info = BTUSB_ATH3012 }, 138 + { USB_DEVICE(0x0cf3, 0x817a), .driver_info = BTUSB_ATH3012 }, 137 139 { USB_DEVICE(0x13d3, 0x3375), .driver_info = BTUSB_ATH3012 }, 138 140 { USB_DEVICE(0x04ca, 0x3004), .driver_info = BTUSB_ATH3012 }, 139 141 { USB_DEVICE(0x04ca, 0x3005), .driver_info = BTUSB_ATH3012 },
+5
drivers/net/bonding/bond_sysfs.c
··· 183 183 sprintf(linkname, "slave_%s", slave->name); 184 184 ret = sysfs_create_link(&(master->dev.kobj), &(slave->dev.kobj), 185 185 linkname); 186 + 187 + /* free the master link created earlier in case of error */ 188 + if (ret) 189 + sysfs_remove_link(&(slave->dev.kobj), "master"); 190 + 186 191 return ret; 187 192 188 193 }
+9 -9
drivers/net/ethernet/broadcom/bnx2x/bnx2x_dcb.c
··· 2139 2139 break; 2140 2140 default: 2141 2141 BNX2X_ERR("Non valid capability ID\n"); 2142 - rval = -EINVAL; 2142 + rval = 1; 2143 2143 break; 2144 2144 } 2145 2145 } else { 2146 2146 DP(BNX2X_MSG_DCB, "DCB disabled\n"); 2147 - rval = -EINVAL; 2147 + rval = 1; 2148 2148 } 2149 2149 2150 2150 DP(BNX2X_MSG_DCB, "capid %d:%x\n", capid, *cap); ··· 2170 2170 break; 2171 2171 default: 2172 2172 BNX2X_ERR("Non valid TC-ID\n"); 2173 - rval = -EINVAL; 2173 + rval = 1; 2174 2174 break; 2175 2175 } 2176 2176 } else { 2177 2177 DP(BNX2X_MSG_DCB, "DCB disabled\n"); 2178 - rval = -EINVAL; 2178 + rval = 1; 2179 2179 } 2180 2180 2181 2181 return rval; ··· 2188 2188 return -EINVAL; 2189 2189 } 2190 2190 2191 - static u8 bnx2x_dcbnl_get_pfc_state(struct net_device *netdev) 2191 + static u8 bnx2x_dcbnl_get_pfc_state(struct net_device *netdev) 2192 2192 { 2193 2193 struct bnx2x *bp = netdev_priv(netdev); 2194 2194 DP(BNX2X_MSG_DCB, "state = %d\n", bp->dcbx_local_feat.pfc.enabled); ··· 2390 2390 break; 2391 2391 default: 2392 2392 BNX2X_ERR("Non valid featrue-ID\n"); 2393 - rval = -EINVAL; 2393 + rval = 1; 2394 2394 break; 2395 2395 } 2396 2396 } else { 2397 2397 DP(BNX2X_MSG_DCB, "DCB disabled\n"); 2398 - rval = -EINVAL; 2398 + rval = 1; 2399 2399 } 2400 2400 2401 2401 return rval; ··· 2431 2431 break; 2432 2432 default: 2433 2433 BNX2X_ERR("Non valid featrue-ID\n"); 2434 - rval = -EINVAL; 2434 + rval = 1; 2435 2435 break; 2436 2436 } 2437 2437 } else { 2438 2438 DP(BNX2X_MSG_DCB, "dcbnl call not valid\n"); 2439 - rval = -EINVAL; 2439 + rval = 1; 2440 2440 } 2441 2441 2442 2442 return rval;
+3 -3
drivers/net/ethernet/freescale/fec.c
··· 1332 1332 static void fec_enet_free_buffers(struct net_device *ndev) 1333 1333 { 1334 1334 struct fec_enet_private *fep = netdev_priv(ndev); 1335 - int i; 1335 + unsigned int i; 1336 1336 struct sk_buff *skb; 1337 1337 struct bufdesc *bdp; 1338 1338 ··· 1356 1356 static int fec_enet_alloc_buffers(struct net_device *ndev) 1357 1357 { 1358 1358 struct fec_enet_private *fep = netdev_priv(ndev); 1359 - int i; 1359 + unsigned int i; 1360 1360 struct sk_buff *skb; 1361 1361 struct bufdesc *bdp; 1362 1362 ··· 1598 1598 struct fec_enet_private *fep = netdev_priv(ndev); 1599 1599 struct bufdesc *cbd_base; 1600 1600 struct bufdesc *bdp; 1601 - int i; 1601 + unsigned int i; 1602 1602 1603 1603 /* Allocate memory for buffer descriptors. */ 1604 1604 cbd_base = dma_alloc_coherent(NULL, PAGE_SIZE, &fep->bd_dma,
+3
drivers/net/ethernet/freescale/fec_ptp.c
··· 128 128 129 129 spin_unlock_irqrestore(&fep->tmreg_lock, flags); 130 130 } 131 + EXPORT_SYMBOL(fec_ptp_start_cyclecounter); 131 132 132 133 /** 133 134 * fec_ptp_adjfreq - adjust ptp cycle frequency ··· 319 318 return copy_to_user(ifr->ifr_data, &config, sizeof(config)) ? 320 319 -EFAULT : 0; 321 320 } 321 + EXPORT_SYMBOL(fec_ptp_ioctl); 322 322 323 323 /** 324 324 * fec_time_keep - call timecounter_read every second to avoid timer overrun ··· 385 383 pr_info("registered PHC device on %s\n", ndev->name); 386 384 } 387 385 } 386 + EXPORT_SYMBOL(fec_ptp_init);
+19 -14
drivers/net/ethernet/intel/igb/e1000_82575.c
··· 1818 1818 **/ 1819 1819 void igb_vmdq_set_anti_spoofing_pf(struct e1000_hw *hw, bool enable, int pf) 1820 1820 { 1821 - u32 dtxswc; 1821 + u32 reg_val, reg_offset; 1822 1822 1823 1823 switch (hw->mac.type) { 1824 1824 case e1000_82576: 1825 + reg_offset = E1000_DTXSWC; 1826 + break; 1825 1827 case e1000_i350: 1826 - dtxswc = rd32(E1000_DTXSWC); 1827 - if (enable) { 1828 - dtxswc |= (E1000_DTXSWC_MAC_SPOOF_MASK | 1829 - E1000_DTXSWC_VLAN_SPOOF_MASK); 1830 - /* The PF can spoof - it has to in order to 1831 - * support emulation mode NICs */ 1832 - dtxswc ^= (1 << pf | 1 << (pf + MAX_NUM_VFS)); 1833 - } else { 1834 - dtxswc &= ~(E1000_DTXSWC_MAC_SPOOF_MASK | 1835 - E1000_DTXSWC_VLAN_SPOOF_MASK); 1836 - } 1837 - wr32(E1000_DTXSWC, dtxswc); 1828 + reg_offset = E1000_TXSWC; 1838 1829 break; 1839 1830 default: 1840 - break; 1831 + return; 1841 1832 } 1833 + 1834 + reg_val = rd32(reg_offset); 1835 + if (enable) { 1836 + reg_val |= (E1000_DTXSWC_MAC_SPOOF_MASK | 1837 + E1000_DTXSWC_VLAN_SPOOF_MASK); 1838 + /* The PF can spoof - it has to in order to 1839 + * support emulation mode NICs 1840 + */ 1841 + reg_val ^= (1 << pf | 1 << (pf + MAX_NUM_VFS)); 1842 + } else { 1843 + reg_val &= ~(E1000_DTXSWC_MAC_SPOOF_MASK | 1844 + E1000_DTXSWC_VLAN_SPOOF_MASK); 1845 + } 1846 + wr32(reg_offset, reg_val); 1842 1847 } 1843 1848 1844 1849 /**
+1 -1
drivers/net/ethernet/intel/igb/igb_hwmon.c
··· 39 39 #include <linux/pci.h> 40 40 41 41 #ifdef CONFIG_IGB_HWMON 42 - struct i2c_board_info i350_sensor_info = { 42 + static struct i2c_board_info i350_sensor_info = { 43 43 I2C_BOARD_INFO("i350bb", (0Xf8 >> 1)), 44 44 }; 45 45
+2 -2
drivers/net/ethernet/intel/igb/igb_main.c
··· 2542 2542 if ((hw->mac.type == e1000_i210) || (hw->mac.type == e1000_i211)) 2543 2543 return; 2544 2544 2545 - igb_enable_sriov(pdev, max_vfs); 2546 2545 pci_sriov_set_totalvfs(pdev, 7); 2546 + igb_enable_sriov(pdev, max_vfs); 2547 2547 2548 2548 #endif /* CONFIG_PCI_IOV */ 2549 2549 } ··· 2652 2652 if (max_vfs > 7) { 2653 2653 dev_warn(&pdev->dev, 2654 2654 "Maximum of 7 VFs per PF, using max\n"); 2655 - adapter->vfs_allocated_count = 7; 2655 + max_vfs = adapter->vfs_allocated_count = 7; 2656 2656 } else 2657 2657 adapter->vfs_allocated_count = max_vfs; 2658 2658 if (adapter->vfs_allocated_count)
+1 -1
drivers/net/ethernet/intel/igb/igb_ptp.c
··· 740 740 case e1000_82576: 741 741 snprintf(adapter->ptp_caps.name, 16, "%pm", netdev->dev_addr); 742 742 adapter->ptp_caps.owner = THIS_MODULE; 743 - adapter->ptp_caps.max_adj = 1000000000; 743 + adapter->ptp_caps.max_adj = 999999881; 744 744 adapter->ptp_caps.n_ext_ts = 0; 745 745 adapter->ptp_caps.pps = 0; 746 746 adapter->ptp_caps.adjfreq = igb_ptp_adjfreq_82576;
+20 -4
drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
··· 944 944 free_irq(adapter->msix_entries[vector].vector, 945 945 adapter->q_vector[vector]); 946 946 } 947 - pci_disable_msix(adapter->pdev); 948 - kfree(adapter->msix_entries); 949 - adapter->msix_entries = NULL; 947 + /* This failure is non-recoverable - it indicates the system is 948 + * out of MSIX vector resources and the VF driver cannot run 949 + * without them. Set the number of msix vectors to zero 950 + * indicating that not enough can be allocated. The error 951 + * will be returned to the user indicating device open failed. 952 + * Any further attempts to force the driver to open will also 953 + * fail. The only way to recover is to unload the driver and 954 + * reload it again. If the system has recovered some MSIX 955 + * vectors then it may succeed. 956 + */ 957 + adapter->num_msix_vectors = 0; 950 958 return err; 951 959 } 952 960 ··· 2580 2572 struct ixgbe_hw *hw = &adapter->hw; 2581 2573 int err; 2582 2574 2575 + /* A previous failure to open the device because of a lack of 2576 + * available MSIX vector resources may have reset the number 2577 + * of msix vectors variable to zero. The only way to recover 2578 + * is to unload/reload the driver and hope that the system has 2579 + * been able to recover some MSIX vector resources. 2580 + */ 2581 + if (!adapter->num_msix_vectors) 2582 + return -ENOMEM; 2583 + 2583 2584 /* disallow open during test */ 2584 2585 if (test_bit(__IXGBEVF_TESTING, &adapter->state)) 2585 2586 return -EBUSY; ··· 2645 2628 2646 2629 err_req_irq: 2647 2630 ixgbevf_down(adapter); 2648 - ixgbevf_free_irq(adapter); 2649 2631 err_setup_rx: 2650 2632 ixgbevf_free_all_rx_resources(adapter); 2651 2633 err_setup_tx:
+1 -1
drivers/net/ethernet/lantiq_etop.c
··· 769 769 return 0; 770 770 771 771 err_free: 772 - kfree(dev); 772 + free_netdev(dev); 773 773 err_out: 774 774 return err; 775 775 }
+11 -11
drivers/net/ethernet/mellanox/mlx4/en_netdev.c
··· 1637 1637 /* Flush multicast filter */ 1638 1638 mlx4_SET_MCAST_FLTR(mdev->dev, priv->port, 0, 1, MLX4_MCAST_CONFIG); 1639 1639 1640 + /* Remove flow steering rules for the port*/ 1641 + if (mdev->dev->caps.steering_mode == 1642 + MLX4_STEERING_MODE_DEVICE_MANAGED) { 1643 + ASSERT_RTNL(); 1644 + list_for_each_entry_safe(flow, tmp_flow, 1645 + &priv->ethtool_list, list) { 1646 + mlx4_flow_detach(mdev->dev, flow->id); 1647 + list_del(&flow->list); 1648 + } 1649 + } 1650 + 1640 1651 mlx4_en_destroy_drop_qp(priv); 1641 1652 1642 1653 /* Free TX Rings */ ··· 1667 1656 mlx4_en_put_qp(priv); 1668 1657 if (!(mdev->dev->caps.flags2 & MLX4_DEV_CAP_FLAGS2_REASSIGN_MAC_EN)) 1669 1658 mdev->mac_removed[priv->port] = 1; 1670 - 1671 - /* Remove flow steering rules for the port*/ 1672 - if (mdev->dev->caps.steering_mode == 1673 - MLX4_STEERING_MODE_DEVICE_MANAGED) { 1674 - ASSERT_RTNL(); 1675 - list_for_each_entry_safe(flow, tmp_flow, 1676 - &priv->ethtool_list, list) { 1677 - mlx4_flow_detach(mdev->dev, flow->id); 1678 - list_del(&flow->list); 1679 - } 1680 - } 1681 1659 1682 1660 /* Free RX Rings */ 1683 1661 for (i = 0; i < priv->rx_ring_num; i++) {
+1 -1
drivers/net/ethernet/mellanox/mlx4/eq.c
··· 771 771 struct mlx4_slave_event_eq_info *event_eq = 772 772 priv->mfunc.master.slave_state[slave].event_eq; 773 773 u32 in_modifier = vhcr->in_modifier; 774 - u32 eqn = in_modifier & 0x1FF; 774 + u32 eqn = in_modifier & 0x3FF; 775 775 u64 in_param = vhcr->in_param; 776 776 int err = 0; 777 777 int i;
+35 -10
drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
··· 99 99 struct list_head mcg_list; 100 100 spinlock_t mcg_spl; 101 101 int local_qpn; 102 + atomic_t ref_count; 102 103 }; 103 104 104 105 enum res_mtt_states { ··· 198 197 199 198 struct res_fs_rule { 200 199 struct res_common com; 200 + int qpn; 201 201 }; 202 202 203 203 static void *res_tracker_lookup(struct rb_root *root, u64 res_id) ··· 357 355 return dev->caps.num_mpts - 1; 358 356 } 359 357 360 - static void *find_res(struct mlx4_dev *dev, int res_id, 358 + static void *find_res(struct mlx4_dev *dev, u64 res_id, 361 359 enum mlx4_resource type) 362 360 { 363 361 struct mlx4_priv *priv = mlx4_priv(dev); ··· 449 447 ret->local_qpn = id; 450 448 INIT_LIST_HEAD(&ret->mcg_list); 451 449 spin_lock_init(&ret->mcg_spl); 450 + atomic_set(&ret->ref_count, 0); 452 451 453 452 return &ret->com; 454 453 } ··· 557 554 return &ret->com; 558 555 } 559 556 560 - static struct res_common *alloc_fs_rule_tr(u64 id) 557 + static struct res_common *alloc_fs_rule_tr(u64 id, int qpn) 561 558 { 562 559 struct res_fs_rule *ret; 563 560 ··· 567 564 568 565 ret->com.res_id = id; 569 566 ret->com.state = RES_FS_RULE_ALLOCATED; 570 - 567 + ret->qpn = qpn; 571 568 return &ret->com; 572 569 } ··· 605 602 ret = alloc_xrcdn_tr(id); 606 603 break; 607 604 case RES_FS_RULE: 608 - ret = alloc_fs_rule_tr(id); 605 + ret = alloc_fs_rule_tr(id, extra); 609 606 break; 610 607 default: 611 608 return NULL; ··· 674 671 675 672 static int remove_qp_ok(struct res_qp *res) 676 673 { 677 - if (res->com.state == RES_QP_BUSY) 674 + if (res->com.state == RES_QP_BUSY || atomic_read(&res->ref_count) || 675 + !list_empty(&res->mcg_list)) { 676 + pr_err("resource tracker: fail to remove qp, state %d, ref_count %d\n", 677 + res->com.state, atomic_read(&res->ref_count)); 678 678 return -EBUSY; 679 - else if (res->com.state != RES_QP_RESERVED) 679 + } else if (res->com.state != RES_QP_RESERVED) { 680 680 return -EPERM; 681 + } 681 682 682 683 return 0; 683 684 } ··· 3131 3124 struct list_head *rlist = &tracker->slave_list[slave].res_list[RES_MAC]; 3132 3125 int err; 3133 3126 int qpn; 3127 + struct res_qp *rqp; 3134 3128 struct mlx4_net_trans_rule_hw_ctrl *ctrl; 3135 3129 struct _rule_hw *rule_header; 3136 3130 int header_id; ··· 3142 3134 3143 3135 ctrl = (struct mlx4_net_trans_rule_hw_ctrl *)inbox->buf; 3144 3136 qpn = be32_to_cpu(ctrl->qpn) & 0xffffff; 3145 - err = get_res(dev, slave, qpn, RES_QP, NULL); 3137 + err = get_res(dev, slave, qpn, RES_QP, &rqp); 3146 3138 if (err) { 3147 3139 pr_err("Steering rule with qpn 0x%x rejected.\n", qpn); 3148 3140 return err; ··· 3183 3175 if (err) 3184 3176 goto err_put; 3185 3177 3186 - err = add_res_range(dev, slave, vhcr->out_param, 1, RES_FS_RULE, 0); 3178 + err = add_res_range(dev, slave, vhcr->out_param, 1, RES_FS_RULE, qpn); 3187 3179 if (err) { 3188 3180 mlx4_err(dev, "Fail to add flow steering resources.\n "); 3189 3181 /* detach rule*/ 3190 3182 mlx4_cmd(dev, vhcr->out_param, 0, 0, 3191 3183 MLX4_QP_FLOW_STEERING_DETACH, MLX4_CMD_TIME_CLASS_A, 3192 3184 MLX4_CMD_NATIVE); 3185 + goto err_put; 3193 3186 } 3187 + atomic_inc(&rqp->ref_count); 3194 3188 err_put: 3195 3189 put_res(dev, slave, qpn, RES_QP); 3196 3190 return err; ··· 3205 3195 struct mlx4_cmd_info *cmd) 3206 3196 { 3207 3197 int err; 3198 + struct res_qp *rqp; 3199 + struct res_fs_rule *rrule; 3208 3200 3209 3201 if (dev->caps.steering_mode != 3210 3202 MLX4_STEERING_MODE_DEVICE_MANAGED) 3211 3203 return -EOPNOTSUPP; 3212 3204 3205 + err = get_res(dev, slave, vhcr->in_param, RES_FS_RULE, &rrule); 3206 + if (err) 3207 + return err; 3208 + /* Release the rule form busy state before removal */ 3209 + put_res(dev, slave, vhcr->in_param, RES_FS_RULE); 3210 + err = get_res(dev, slave, rrule->qpn, RES_QP, &rqp); 3211 + if (err) 3212 + return err; 3213 + 3213 3214 err = rem_res_range(dev, slave, vhcr->in_param, 1, RES_FS_RULE, 0); 3214 3215 if (err) { 3215 3216 mlx4_err(dev, "Fail to remove flow steering resources.\n "); 3216 - return err; 3217 + goto out; 3217 3218 } 3218 3219 3219 3220 err = mlx4_cmd(dev, vhcr->in_param, 0, 0, 3220 3221 MLX4_QP_FLOW_STEERING_DETACH, MLX4_CMD_TIME_CLASS_A, 3221 3222 MLX4_CMD_NATIVE); 3223 + if (!err) 3224 + atomic_dec(&rqp->ref_count); 3225 + out: 3226 + put_res(dev, slave, rrule->qpn, RES_QP); 3222 3227 return err; 3223 3228 } ··· 3831 3806 mutex_lock(&priv->mfunc.master.res_tracker.slave_list[slave].mutex); 3832 3807 /*VLAN*/ 3833 3808 rem_slave_macs(dev, slave); 3809 + rem_slave_fs_rule(dev, slave); 3834 3810 rem_slave_qps(dev, slave); 3835 3811 rem_slave_srqs(dev, slave); 3836 3812 rem_slave_cqs(dev, slave); ··· 3840 3814 rem_slave_mtts(dev, slave); 3841 3815 rem_slave_counters(dev, slave); 3842 3816 rem_slave_xrcdns(dev, slave); 3843 - rem_slave_fs_rule(dev, slave); 3844 3817 mutex_unlock(&priv->mfunc.master.res_tracker.slave_list[slave].mutex); 3845 3818 }
+2 -1
drivers/net/ethernet/nxp/lpc_eth.c
··· 1472 1472 } 1473 1473 platform_set_drvdata(pdev, ndev); 1474 1474 1475 - if (lpc_mii_init(pldat) != 0) 1475 + ret = lpc_mii_init(pldat); 1476 + if (ret) 1476 1477 goto err_out_unregister_netdev; 1477 1478 1478 1479 netdev_info(ndev, "LPC mac at 0x%08x irq %d\n",
+2 -2
drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c
··· 1726 1726 1727 1727 skb->protocol = eth_type_trans(skb, netdev); 1728 1728 if (tcp_ip_status & PCH_GBE_RXD_ACC_STAT_TCPIPOK) 1729 - skb->ip_summed = CHECKSUM_NONE; 1730 - else 1731 1729 skb->ip_summed = CHECKSUM_UNNECESSARY; 1730 + else 1731 + skb->ip_summed = CHECKSUM_NONE; 1732 1732 1733 1733 napi_gro_receive(&adapter->napi, skb); 1734 1734 (*work_done)++;
+10
drivers/net/ethernet/renesas/sh_eth.c
··· 2220 2220 /* MDIO bus release function */ 2221 2221 static int sh_mdio_release(struct net_device *ndev) 2222 2222 { 2223 + struct sh_eth_private *mdp = netdev_priv(ndev); 2223 2224 struct mii_bus *bus = dev_get_drvdata(&ndev->dev); 2224 2225 2225 2226 /* unregister mdio bus */ ··· 2234 2233 2235 2234 /* free bitbang info */ 2236 2235 free_mdio_bitbang(bus); 2236 + 2237 + /* free bitbang memory */ 2238 + kfree(mdp->bitbang); 2237 2239 2238 2240 return 0; 2239 2241 } ··· 2266 2262 bitbang->ctrl.ops = &bb_ops; 2267 2263 2268 2264 /* MII controller setting */ 2265 + mdp->bitbang = bitbang; 2269 2266 mdp->mii_bus = alloc_mdio_bitbang(&bitbang->ctrl); 2270 2267 if (!mdp->mii_bus) { 2271 2268 ret = -ENOMEM; ··· 2446 2441 } 2447 2442 mdp->tsu_addr = ioremap(rtsu->start, 2448 2443 resource_size(rtsu)); 2444 + if (mdp->tsu_addr == NULL) { 2445 + ret = -ENOMEM; 2446 + dev_err(&pdev->dev, "TSU ioremap failed.\n"); 2447 + goto out_release; 2448 + } 2449 2449 mdp->port = devno % 2; 2450 2450 ndev->features = NETIF_F_HW_VLAN_FILTER; 2451 2451 }
+1
drivers/net/ethernet/renesas/sh_eth.h
··· 705 705 const u16 *reg_offset; 706 706 void __iomem *addr; 707 707 void __iomem *tsu_addr; 708 + struct bb_info *bitbang; 708 709 u32 num_rx_ring; 709 710 u32 num_tx_ring; 710 711 dma_addr_t rx_desc_dma;
+1 -1
drivers/net/ethernet/ti/cpsw.c
··· 1364 1364 struct platform_device *mdio; 1365 1365 1366 1366 parp = of_get_property(slave_node, "phy_id", &lenp); 1367 - if ((parp == NULL) && (lenp != (sizeof(void *) * 2))) { 1367 + if ((parp == NULL) || (lenp != (sizeof(void *) * 2))) { 1368 1368 pr_err("Missing slave[%d] phy_id property\n", i); 1369 1369 ret = -EINVAL; 1370 1370 goto error_ret;
+4
drivers/net/wireless/ath/ath9k/ar9003_calib.c
··· 1023 1023 AR_PHY_AGC_CONTROL_FLTR_CAL | 1024 1024 AR_PHY_AGC_CONTROL_PKDET_CAL; 1025 1025 1026 + /* Use chip chainmask only for calibration */ 1026 1027 ar9003_hw_set_chain_masks(ah, ah->caps.rx_chainmask, ah->caps.tx_chainmask); 1027 1028 1028 1029 if (rtt) { ··· 1150 1149 1151 1150 ar9003_hw_rtt_disable(ah); 1152 1151 } 1152 + 1153 + /* Revert chainmask to runtime parameters */ 1154 + ar9003_hw_set_chain_masks(ah, ah->rxchainmask, ah->txchainmask); 1153 1155 1154 1156 /* Initialize list pointers */ 1155 1157 ah->cal_list = ah->cal_list_last = ah->cal_list_curr = NULL;
+13 -13
drivers/net/wireless/ath/ath9k/link.c
··· 28 28 int i; 29 29 bool needreset = false; 30 30 31 - for (i = 0; i < ATH9K_NUM_TX_QUEUES; i++) 32 - if (ATH_TXQ_SETUP(sc, i)) { 33 - txq = &sc->tx.txq[i]; 34 - ath_txq_lock(sc, txq); 35 - if (txq->axq_depth) { 36 - if (txq->axq_tx_inprogress) { 37 - needreset = true; 38 - ath_txq_unlock(sc, txq); 39 - break; 40 - } else { 41 - txq->axq_tx_inprogress = true; 42 - } 31 + for (i = 0; i < IEEE80211_NUM_ACS; i++) { 32 + txq = sc->tx.txq_map[i]; 33 + 34 + ath_txq_lock(sc, txq); 35 + if (txq->axq_depth) { 36 + if (txq->axq_tx_inprogress) { 37 + needreset = true; 38 + ath_txq_unlock(sc, txq); 39 + break; 40 + } else { 41 + txq->axq_tx_inprogress = true; 43 42 } 44 - ath_txq_unlock_complete(sc, txq); 45 43 } 44 + ath_txq_unlock_complete(sc, txq); 45 + } 46 46 47 47 if (needreset) { 48 48 ath_dbg(ath9k_hw_common(sc->sc_ah), RESET,
+12 -10
drivers/net/wireless/iwlegacy/3945-mac.c
··· 475 475 dma_addr_t txcmd_phys; 476 476 int txq_id = skb_get_queue_mapping(skb); 477 477 u16 len, idx, hdr_len; 478 + u16 firstlen, secondlen; 478 479 u8 id; 479 480 u8 unicast; 480 481 u8 sta_id; ··· 590 589 len = 591 590 sizeof(struct il3945_tx_cmd) + sizeof(struct il_cmd_header) + 592 591 hdr_len; 593 - len = (len + 3) & ~3; 592 + firstlen = (len + 3) & ~3; 594 593 595 594 /* Physical address of this Tx command's header (not MAC header!), 596 595 * within command buffer array. */ 597 596 txcmd_phys = 598 - pci_map_single(il->pci_dev, &out_cmd->hdr, len, PCI_DMA_TODEVICE); 597 + pci_map_single(il->pci_dev, &out_cmd->hdr, firstlen, 598 + PCI_DMA_TODEVICE); 599 599 if (unlikely(pci_dma_mapping_error(il->pci_dev, txcmd_phys))) 600 600 goto drop_unlock; 601 601 602 602 /* Set up TFD's 2nd entry to point directly to remainder of skb, 603 603 * if any (802.11 null frames have no payload). */ 604 - len = skb->len - hdr_len; 605 - if (len) { 604 + secondlen = skb->len - hdr_len; 605 + if (secondlen > 0) { 606 606 phys_addr = 607 - pci_map_single(il->pci_dev, skb->data + hdr_len, len, 607 + pci_map_single(il->pci_dev, skb->data + hdr_len, secondlen, 608 608 PCI_DMA_TODEVICE); 609 609 if (unlikely(pci_dma_mapping_error(il->pci_dev, phys_addr))) 610 610 goto drop_unlock; ··· 613 611 614 612 /* Add buffer containing Tx command and MAC(!) header to TFD's 615 613 * first entry */ 616 614 il->ops->txq_attach_buf_to_tfd(il, txq, txcmd_phys, firstlen, 1, 0); 617 615 dma_unmap_addr_set(out_meta, mapping, txcmd_phys); 618 - dma_unmap_len_set(out_meta, len, len); 619 - if (len) 620 - il->ops->txq_attach_buf_to_tfd(il, txq, phys_addr, len, 0, 621 - U32_PAD(len)); 616 + dma_unmap_len_set(out_meta, len, firstlen); 617 + if (secondlen > 0) 618 + il->ops->txq_attach_buf_to_tfd(il, txq, phys_addr, secondlen, 0, 619 + U32_PAD(secondlen)); 622 620 623 621 if (!ieee80211_has_morefrags(hdr->frame_control)) { 624 622 txq->need_update = 1;
+16 -6
drivers/net/wireless/mwifiex/cmdevt.c
··· 157 157 return -1; 158 158 } 159 159 160 + cmd_code = le16_to_cpu(host_cmd->command); 161 + cmd_size = le16_to_cpu(host_cmd->size); 162 + 163 + if (adapter->hw_status == MWIFIEX_HW_STATUS_RESET && 164 + cmd_code != HostCmd_CMD_FUNC_SHUTDOWN && 165 + cmd_code != HostCmd_CMD_FUNC_INIT) { 166 + dev_err(adapter->dev, 167 + "DNLD_CMD: FW in reset state, ignore cmd %#x\n", 168 + cmd_code); 169 + mwifiex_complete_cmd(adapter, cmd_node); 170 + mwifiex_insert_cmd_to_free_q(adapter, cmd_node); 171 + return -1; 172 + } 173 + 160 174 /* Set command sequence number */ 161 175 adapter->seq_num++; 162 176 host_cmd->seq_num = cpu_to_le16(HostCmd_SET_SEQ_NO_BSS_INFO ··· 181 167 spin_lock_irqsave(&adapter->mwifiex_cmd_lock, flags); 182 168 adapter->curr_cmd = cmd_node; 183 169 spin_unlock_irqrestore(&adapter->mwifiex_cmd_lock, flags); 184 - 185 - cmd_code = le16_to_cpu(host_cmd->command); 186 - cmd_size = le16_to_cpu(host_cmd->size); 187 170 188 171 /* Adjust skb length */ 189 172 if (cmd_node->cmd_skb->len > cmd_size) ··· 495 484 496 485 ret = mwifiex_send_cmd_async(priv, cmd_no, cmd_action, cmd_oid, 497 486 data_buf); 498 - if (!ret) 499 - ret = mwifiex_wait_queue_complete(adapter); 500 487 501 488 return ret; 502 489 } ··· 597 588 if (cmd_no == HostCmd_CMD_802_11_SCAN) { 598 589 mwifiex_queue_scan_cmd(priv, cmd_node); 599 590 } else { 600 - adapter->cmd_queued = cmd_node; 601 591 mwifiex_insert_cmd_to_pending_q(adapter, cmd_node, true); 602 592 queue_work(adapter->workqueue, &adapter->main_work); 593 + if (cmd_node->wait_q_enabled) 594 + ret = mwifiex_wait_queue_complete(adapter, cmd_node); 603 595 } 604 596 605 597 return ret;
+8
drivers/net/wireless/mwifiex/init.c
··· 709 709 return ret; 710 710 } 711 711 712 + /* cancel current command */ 713 + if (adapter->curr_cmd) { 714 + dev_warn(adapter->dev, "curr_cmd is still in processing\n"); 715 + del_timer(&adapter->cmd_timer); 716 + mwifiex_insert_cmd_to_free_q(adapter, adapter->curr_cmd); 717 + adapter->curr_cmd = NULL; 718 + } 719 + 712 720 /* shut down mwifiex */ 713 721 dev_dbg(adapter->dev, "info: shutdown mwifiex...\n"); 714 722
+2 -2
drivers/net/wireless/mwifiex/main.h
··· 723 723 u16 cmd_wait_q_required; 724 724 struct mwifiex_wait_queue cmd_wait_q; 725 725 u8 scan_wait_q_woken; 726 - struct cmd_ctrl_node *cmd_queued; 727 726 spinlock_t queue_lock; /* lock for tx queues */ 728 727 struct completion fw_load; 729 728 u8 country_code[IEEE80211_COUNTRY_STRING_LEN]; ··· 1017 1018 struct mwifiex_multicast_list *mcast_list); 1018 1019 int mwifiex_copy_mcast_addr(struct mwifiex_multicast_list *mlist, 1019 1020 struct net_device *dev); 1020 - int mwifiex_wait_queue_complete(struct mwifiex_adapter *adapter); 1021 + int mwifiex_wait_queue_complete(struct mwifiex_adapter *adapter, 1022 + struct cmd_ctrl_node *cmd_queued); 1021 1023 int mwifiex_bss_start(struct mwifiex_private *priv, struct cfg80211_bss *bss, 1022 1024 struct cfg80211_ssid *req_ssid); 1023 1025 int mwifiex_cancel_hs(struct mwifiex_private *priv, int cmd_type);
+4 -4
drivers/net/wireless/mwifiex/scan.c
··· 1388 1388 list_del(&cmd_node->list); 1389 1389 spin_unlock_irqrestore(&adapter->scan_pending_q_lock, 1390 1390 flags); 1391 - adapter->cmd_queued = cmd_node; 1392 1391 mwifiex_insert_cmd_to_pending_q(adapter, cmd_node, 1393 1392 true); 1394 1393 queue_work(adapter->workqueue, &adapter->main_work); 1394 + 1395 + /* Perform internal scan synchronously */ 1396 + if (!priv->scan_request) 1397 + mwifiex_wait_queue_complete(adapter, cmd_node); 1395 1398 } else { 1396 1399 spin_unlock_irqrestore(&adapter->scan_pending_q_lock, 1397 1400 flags); ··· 1948 1945 else 1949 1946 /* Normal scan */ 1950 1947 ret = mwifiex_scan_networks(priv, NULL); 1951 - 1952 - if (!ret) 1953 - ret = mwifiex_wait_queue_complete(priv->adapter); 1954 1948 1955 1949 up(&priv->async_sem); 1956 1950
+2 -8
drivers/net/wireless/mwifiex/sta_ioctl.c
··· 54 54 * This function waits on a cmd wait queue. It also cancels the pending 55 55 * request after waking up, in case of errors. 56 56 */ 57 - int mwifiex_wait_queue_complete(struct mwifiex_adapter *adapter) 57 + int mwifiex_wait_queue_complete(struct mwifiex_adapter *adapter, 58 + struct cmd_ctrl_node *cmd_queued) 58 59 { 59 60 int status; 60 - struct cmd_ctrl_node *cmd_queued; 61 - 62 - if (!adapter->cmd_queued) 63 - return 0; 64 - 65 - cmd_queued = adapter->cmd_queued; 66 - adapter->cmd_queued = NULL; 67 61 68 62 dev_dbg(adapter->dev, "cmd pending\n"); 69 63 atomic_inc(&adapter->cmd_pending);
+1
drivers/net/wireless/rtlwifi/usb.c
··· 851 851 if (unlikely(!_urb)) { 852 852 RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG, 853 853 "Can't allocate urb. Drop skb!\n"); 854 + kfree_skb(skb); 854 855 return; 855 856 } 856 857 _rtl_submit_tx_urb(hw, _urb);
+1 -1
include/linux/thermal.h
··· 44 44 /* Adding event notification support elements */ 45 45 #define THERMAL_GENL_FAMILY_NAME "thermal_event" 46 46 #define THERMAL_GENL_VERSION 0x01 47 - #define THERMAL_GENL_MCAST_GROUP_NAME "thermal_mc_group" 47 + #define THERMAL_GENL_MCAST_GROUP_NAME "thermal_mc_grp" 48 48 49 49 /* Default Thermal Governor */ 50 50 #if defined(CONFIG_THERMAL_DEFAULT_GOV_STEP_WISE)
+1
include/linux/udp.h
··· 68 68 * For encapsulation sockets. 69 69 */ 70 70 int (*encap_rcv)(struct sock *sk, struct sk_buff *skb); 71 + void (*encap_destroy)(struct sock *sk); 71 72 }; 72 73 73 74 static inline struct udp_sock *udp_sk(const struct sock *sk)
+1
include/net/flow_keys.h
··· 9 9 __be32 ports; 10 10 __be16 port16[2]; 11 11 }; 12 + u16 thoff; 12 13 u8 ip_proto; 13 14 }; 14 15
+12
include/net/ip_vs.h
··· 976 976 int sysctl_sync_retries; 977 977 int sysctl_nat_icmp_send; 978 978 int sysctl_pmtu_disc; 979 + int sysctl_backup_only; 979 980 980 981 /* ip_vs_lblc */ 981 982 int sysctl_lblc_expiration; ··· 1068 1067 return ipvs->sysctl_pmtu_disc; 1069 1068 } 1070 1069 1070 + static inline int sysctl_backup_only(struct netns_ipvs *ipvs) 1071 + { 1072 + return ipvs->sync_state & IP_VS_STATE_BACKUP && 1073 + ipvs->sysctl_backup_only; 1074 + } 1075 + 1071 1076 #else 1072 1077 1073 1078 static inline int sysctl_sync_threshold(struct netns_ipvs *ipvs) ··· 1119 1112 static inline int sysctl_pmtu_disc(struct netns_ipvs *ipvs) 1120 1113 { 1121 1114 return 1; 1115 + } 1116 + 1117 + static inline int sysctl_backup_only(struct netns_ipvs *ipvs) 1118 + { 1119 + return 0; 1122 1120 } 1123 1121 1124 1122 #endif
+6 -10
include/net/ipip.h
··· 77 77 { 78 78 struct iphdr *iph = ip_hdr(skb); 79 79 80 - if (iph->frag_off & htons(IP_DF)) 81 - iph->id = 0; 82 - else { 83 - /* Use inner packet iph-id if possible. */ 84 - if (skb->protocol == htons(ETH_P_IP) && old_iph->id) 85 - iph->id = old_iph->id; 86 - else 87 - __ip_select_ident(iph, dst, 88 - (skb_shinfo(skb)->gso_segs ?: 1) - 1); 89 - } 80 + /* Use inner packet iph-id if possible. */ 81 + if (skb->protocol == htons(ETH_P_IP) && old_iph->id) 82 + iph->id = old_iph->id; 83 + else 84 + __ip_select_ident(iph, dst, 85 + (skb_shinfo(skb)->gso_segs ?: 1) - 1); 90 86 } 91 87 #endif
+3 -1
include/uapi/linux/packet_diag.h
··· 33 33 PACKET_DIAG_TX_RING, 34 34 PACKET_DIAG_FANOUT, 35 35 36 - PACKET_DIAG_MAX, 36 + __PACKET_DIAG_MAX, 37 37 }; 38 + 39 + #define PACKET_DIAG_MAX (__PACKET_DIAG_MAX - 1) 38 40 39 41 struct packet_diag_info { 40 42 __u32 pdi_index;
+3 -1
include/uapi/linux/unix_diag.h
··· 39 39 UNIX_DIAG_MEMINFO, 40 40 UNIX_DIAG_SHUTDOWN, 41 41 42 - UNIX_DIAG_MAX, 42 + __UNIX_DIAG_MAX, 43 43 }; 44 + 45 + #define UNIX_DIAG_MAX (__UNIX_DIAG_MAX - 1) 44 46 45 47 struct unix_diag_vfs { 46 48 __u32 udiag_vfs_ino;
+7 -7
net/8021q/vlan.c
··· 86 86 87 87 grp = &vlan_info->grp; 88 88 89 - /* Take it out of our own structures, but be sure to interlock with 90 - * HW accelerating devices or SW vlan input packet processing if 91 - * VLAN is not 0 (leave it there for 802.1p). 92 - */ 93 - if (vlan_id) 94 - vlan_vid_del(real_dev, vlan_id); 95 - 96 89 grp->nr_vlan_devs--; 97 90 98 91 if (vlan->flags & VLAN_FLAG_MVRP) ··· 106 113 vlan_mvrp_uninit_applicant(real_dev); 107 114 vlan_gvrp_uninit_applicant(real_dev); 108 115 } 116 + 117 + /* Take it out of our own structures, but be sure to interlock with 118 + * HW accelerating devices or SW vlan input packet processing if 119 + * VLAN is not 0 (leave it there for 802.1p). 120 + */ 121 + if (vlan_id) 122 + vlan_vid_del(real_dev, vlan_id); 109 123 110 124 /* Get rid of the vlan's reference to real_dev */ 111 125 dev_put(real_dev);
+1
net/bluetooth/sco.c
··· 359 359 sco_chan_del(sk, ECONNRESET); 360 360 break; 361 361 362 + case BT_CONNECT2: 362 363 case BT_CONNECT: 363 364 case BT_DISCONN: 364 365 sco_chan_del(sk, ECONNRESET);
+1 -1
net/bridge/br_fdb.c
··· 423 423 return 0; 424 424 br_warn(br, "adding interface %s with same address " 425 425 "as a received packet\n", 426 - source->dev->name); 426 + source ? source->dev->name : br->dev->name); 427 427 fdb_delete(br, fdb); 428 428 } 429 429
-1
net/core/dev.c
··· 1545 1545 return; 1546 1546 } 1547 1547 #endif 1548 - WARN_ON(in_interrupt()); 1549 1548 static_key_slow_inc(&netstamp_needed); 1550 1549 } 1551 1550 EXPORT_SYMBOL(net_enable_timestamp);
+2
net/core/flow_dissector.c
··· 140 140 flow->ports = *ports; 141 141 } 142 142 143 + flow->thoff = (u16) nhoff; 144 + 143 145 return true; 144 146 } 145 147 EXPORT_SYMBOL(skb_flow_dissect);
+1 -2
net/ipv4/af_inet.c
··· 1333 1333 iph->frag_off |= htons(IP_MF); 1334 1334 offset += (skb->len - skb->mac_len - iph->ihl * 4); 1335 1335 } else { 1336 - if (!(iph->frag_off & htons(IP_DF))) 1337 - iph->id = htons(id++); 1336 + iph->id = htons(id++); 1338 1337 } 1339 1338 iph->tot_len = htons(skb->len - skb->mac_len); 1340 1339 iph->check = 0;
+2 -1
net/ipv4/ipconfig.c
··· 1522 1522 } 1523 1523 for (i++; i < CONF_NAMESERVERS_MAX; i++) 1524 1524 if (ic_nameservers[i] != NONE) 1525 - pr_cont(", nameserver%u=%pI4\n", i, &ic_nameservers[i]); 1525 + pr_cont(", nameserver%u=%pI4", i, &ic_nameservers[i]); 1526 + pr_cont("\n"); 1526 1527 #endif /* !SILENT */ 1527 1528 1528 1529 return 0;
-13
net/ipv4/netfilter/Kconfig
··· 36 36 37 37 If unsure, say Y. 38 38 39 - config IP_NF_QUEUE 40 - tristate "IP Userspace queueing via NETLINK (OBSOLETE)" 41 - depends on NETFILTER_ADVANCED 42 - help 43 - Netfilter has the ability to queue packets to user space: the 44 - netlink device can be used to access them using this driver. 45 - 46 - This option enables the old IPv4-only "ip_queue" implementation 47 - which has been obsoleted by the new "nfnetlink_queue" code (see 48 - CONFIG_NETFILTER_NETLINK_QUEUE). 49 - 50 - To compile it as a module, choose M here. If unsure, say N. 51 - 52 39 config IP_NF_IPTABLES 53 40 tristate "IP tables support (required for filtering/masq/NAT)" 54 41 default m if NETFILTER_ADVANCED=n
+2 -5
net/ipv4/tcp_input.c
··· 2059 2059 if (tcp_is_reno(tp)) 2060 2060 tcp_reset_reno_sack(tp); 2061 2061 2062 - if (!how) { 2063 - /* Push undo marker, if it was plain RTO and nothing 2064 - * was retransmitted. */ 2065 - tp->undo_marker = tp->snd_una; 2066 - } else { 2062 + tp->undo_marker = tp->snd_una; 2063 + if (how) { 2067 2064 tp->sacked_out = 0; 2068 2065 tp->fackets_out = 0; 2069 2066 }
+5 -2
net/ipv4/tcp_output.c
··· 1809 1809 goto send_now; 1810 1810 } 1811 1811 1812 - /* Ok, it looks like it is advisable to defer. */ 1813 - tp->tso_deferred = 1 | (jiffies << 1); 1812 + /* Ok, it looks like it is advisable to defer. 1813 + * Do not rearm the timer if already set to not break TCP ACK clocking. 1814 + */ 1815 + if (!tp->tso_deferred) 1816 + tp->tso_deferred = 1 | (jiffies << 1); 1814 1817 1815 1818 return true; 1816 1819
+7
net/ipv4/udp.c
··· 1762 1762 1763 1763 void udp_destroy_sock(struct sock *sk) 1764 1764 { 1765 + struct udp_sock *up = udp_sk(sk); 1765 1766 bool slow = lock_sock_fast(sk); 1766 1767 udp_flush_pending_frames(sk); 1767 1768 unlock_sock_fast(sk, slow); 1769 + if (static_key_false(&udp_encap_needed) && up->encap_type) { 1770 + void (*encap_destroy)(struct sock *sk); 1771 + encap_destroy = ACCESS_ONCE(up->encap_destroy); 1772 + if (encap_destroy) 1773 + encap_destroy(sk); 1774 + } 1768 1775 } 1769 1776 1770 1777 /*
+10 -16
net/ipv6/addrconf.c
··· 4784 4784 4785 4785 static int __net_init addrconf_init_net(struct net *net) 4786 4786 { 4787 - int err; 4787 + int err = -ENOMEM; 4788 4788 struct ipv6_devconf *all, *dflt; 4789 4789 4790 - err = -ENOMEM; 4791 - all = &ipv6_devconf; 4792 - dflt = &ipv6_devconf_dflt; 4790 + all = kmemdup(&ipv6_devconf, sizeof(ipv6_devconf), GFP_KERNEL); 4791 + if (all == NULL) 4792 + goto err_alloc_all; 4793 4793 4794 - if (!net_eq(net, &init_net)) { 4795 - all = kmemdup(all, sizeof(ipv6_devconf), GFP_KERNEL); 4796 - if (all == NULL) 4797 - goto err_alloc_all; 4794 + dflt = kmemdup(&ipv6_devconf_dflt, sizeof(ipv6_devconf_dflt), GFP_KERNEL); 4795 + if (dflt == NULL) 4796 + goto err_alloc_dflt; 4798 4797 4799 - dflt = kmemdup(dflt, sizeof(ipv6_devconf_dflt), GFP_KERNEL); 4800 - if (dflt == NULL) 4801 - goto err_alloc_dflt; 4802 - } else { 4803 - /* these will be inherited by all namespaces */ 4804 - dflt->autoconf = ipv6_defaults.autoconf; 4805 - dflt->disable_ipv6 = ipv6_defaults.disable_ipv6; 4806 - } 4798 + /* these will be inherited by all namespaces */ 4799 + dflt->autoconf = ipv6_defaults.autoconf; 4800 + dflt->disable_ipv6 = ipv6_defaults.disable_ipv6; 4807 4801 4808 4802 net->ipv6.devconf_all = all; 4809 4803 net->ipv6.devconf_dflt = dflt;
+2
net/ipv6/netfilter/ip6t_NPT.c
··· 114 114 static struct xt_target ip6t_npt_target_reg[] __read_mostly = { 115 115 { 116 116 .name = "SNPT", 117 + .table = "mangle", 117 118 .target = ip6t_snpt_tg, 118 119 .targetsize = sizeof(struct ip6t_npt_tginfo), 119 120 .checkentry = ip6t_npt_checkentry, ··· 125 124 }, 126 125 { 127 126 .name = "DNPT", 127 + .table = "mangle", 128 128 .target = ip6t_dnpt_tg, 129 129 .targetsize = sizeof(struct ip6t_npt_tginfo), 130 130 .checkentry = ip6t_npt_checkentry,
+8
net/ipv6/udp.c
··· 1285 1285 1286 1286 void udpv6_destroy_sock(struct sock *sk) 1287 1287 { 1288 + struct udp_sock *up = udp_sk(sk); 1288 1289 lock_sock(sk); 1289 1290 udp_v6_flush_pending_frames(sk); 1290 1291 release_sock(sk); 1292 + 1293 + if (static_key_false(&udpv6_encap_needed) && up->encap_type) { 1294 + void (*encap_destroy)(struct sock *sk); 1295 + encap_destroy = ACCESS_ONCE(up->encap_destroy); 1296 + if (encap_destroy) 1297 + encap_destroy(sk); 1298 + } 1291 1299 1292 1300 inet6_destroy_sock(sk); 1293 1301 }
+4 -2
net/irda/af_irda.c
··· 2583 2583 NULL, NULL, NULL); 2584 2584 2585 2585 /* Check if the we got some results */ 2586 - if (!self->cachedaddr) 2587 - return -EAGAIN; /* Didn't find any devices */ 2586 + if (!self->cachedaddr) { 2587 + err = -EAGAIN; /* Didn't find any devices */ 2588 + goto out; 2589 + } 2588 2590 daddr = self->cachedaddr; 2589 2591 /* Cleanup */ 2590 2592 self->cachedaddr = 0;
+103 -107
net/l2tp/l2tp_core.c
··· 114 114 115 115 static void l2tp_session_set_header_len(struct l2tp_session *session, int version); 116 116 static void l2tp_tunnel_free(struct l2tp_tunnel *tunnel); 117 - static void l2tp_tunnel_closeall(struct l2tp_tunnel *tunnel); 118 117 119 118 static inline struct l2tp_net *l2tp_pernet(struct net *net) 120 119 { ··· 191 192 } else { 192 193 /* Socket is owned by kernelspace */ 193 194 sk = tunnel->sock; 195 + sock_hold(sk); 194 196 } 195 197 196 198 out: ··· 210 210 } 211 211 sock_put(sk); 212 212 } 213 + sock_put(sk); 213 214 } 214 215 EXPORT_SYMBOL_GPL(l2tp_tunnel_sock_put); 215 216 ··· 374 373 struct sk_buff *skbp; 375 374 struct sk_buff *tmp; 376 375 u32 ns = L2TP_SKB_CB(skb)->ns; 377 - struct l2tp_stats *sstats; 378 376 379 377 spin_lock_bh(&session->reorder_q.lock); 380 - sstats = &session->stats; 381 378 skb_queue_walk_safe(&session->reorder_q, skbp, tmp) { 382 379 if (L2TP_SKB_CB(skbp)->ns > ns) { 383 380 __skb_queue_before(&session->reorder_q, skbp, skb); ··· 383 384 "%s: pkt %hu, inserted before %hu, reorder_q len=%d\n", 384 385 session->name, ns, L2TP_SKB_CB(skbp)->ns, 385 386 skb_queue_len(&session->reorder_q)); 386 - u64_stats_update_begin(&sstats->syncp); 387 - sstats->rx_oos_packets++; 388 - u64_stats_update_end(&sstats->syncp); 387 + atomic_long_inc(&session->stats.rx_oos_packets); 389 388 goto out; 390 389 } 391 390 } ··· 400 403 { 401 404 struct l2tp_tunnel *tunnel = session->tunnel; 402 405 int length = L2TP_SKB_CB(skb)->length; 403 - struct l2tp_stats *tstats, *sstats; 404 406 405 407 /* We're about to requeue the skb, so return resources 406 408 * to its current owner (a socket receive buffer). 
407 409 */ 408 410 skb_orphan(skb); 409 411 410 - tstats = &tunnel->stats; 411 - u64_stats_update_begin(&tstats->syncp); 412 - sstats = &session->stats; 413 - u64_stats_update_begin(&sstats->syncp); 414 - tstats->rx_packets++; 415 - tstats->rx_bytes += length; 416 - sstats->rx_packets++; 417 - sstats->rx_bytes += length; 418 - u64_stats_update_end(&tstats->syncp); 419 - u64_stats_update_end(&sstats->syncp); 412 + atomic_long_inc(&tunnel->stats.rx_packets); 413 + atomic_long_add(length, &tunnel->stats.rx_bytes); 414 + atomic_long_inc(&session->stats.rx_packets); 415 + atomic_long_add(length, &session->stats.rx_bytes); 420 416 421 417 if (L2TP_SKB_CB(skb)->has_seq) { 422 418 /* Bump our Nr */ ··· 440 450 { 441 451 struct sk_buff *skb; 442 452 struct sk_buff *tmp; 443 - struct l2tp_stats *sstats; 444 453 445 454 /* If the pkt at the head of the queue has the nr that we 446 455 * expect to send up next, dequeue it and any other ··· 447 458 */ 448 459 start: 449 460 spin_lock_bh(&session->reorder_q.lock); 450 - sstats = &session->stats; 451 461 skb_queue_walk_safe(&session->reorder_q, skb, tmp) { 452 462 if (time_after(jiffies, L2TP_SKB_CB(skb)->expires)) { 453 - u64_stats_update_begin(&sstats->syncp); 454 - sstats->rx_seq_discards++; 455 - sstats->rx_errors++; 456 - u64_stats_update_end(&sstats->syncp); 463 + atomic_long_inc(&session->stats.rx_seq_discards); 464 + atomic_long_inc(&session->stats.rx_errors); 457 465 l2tp_dbg(session, L2TP_MSG_SEQ, 458 466 "%s: oos pkt %u len %d discarded (too old), waiting for %u, reorder_q_len=%d\n", 459 467 session->name, L2TP_SKB_CB(skb)->ns, ··· 609 623 struct l2tp_tunnel *tunnel = session->tunnel; 610 624 int offset; 611 625 u32 ns, nr; 612 - struct l2tp_stats *sstats = &session->stats; 613 626 614 627 /* The ref count is increased since we now hold a pointer to 615 628 * the session. Take care to decrement the refcnt when exiting ··· 625 640 "%s: cookie mismatch (%u/%u). 
Discarding.\n", 626 641 tunnel->name, tunnel->tunnel_id, 627 642 session->session_id); 628 - u64_stats_update_begin(&sstats->syncp); 629 - sstats->rx_cookie_discards++; 630 - u64_stats_update_end(&sstats->syncp); 643 + atomic_long_inc(&session->stats.rx_cookie_discards); 631 644 goto discard; 632 645 } 633 646 ptr += session->peer_cookie_len; ··· 694 711 l2tp_warn(session, L2TP_MSG_SEQ, 695 712 "%s: recv data has no seq numbers when required. Discarding.\n", 696 713 session->name); 697 - u64_stats_update_begin(&sstats->syncp); 698 - sstats->rx_seq_discards++; 699 - u64_stats_update_end(&sstats->syncp); 714 + atomic_long_inc(&session->stats.rx_seq_discards); 700 715 goto discard; 701 716 } 702 717 ··· 713 732 l2tp_warn(session, L2TP_MSG_SEQ, 714 733 "%s: recv data has no seq numbers when required. Discarding.\n", 715 734 session->name); 716 - u64_stats_update_begin(&sstats->syncp); 717 - sstats->rx_seq_discards++; 718 - u64_stats_update_end(&sstats->syncp); 735 + atomic_long_inc(&session->stats.rx_seq_discards); 719 736 goto discard; 720 737 } 721 738 } ··· 767 788 * packets 768 789 */ 769 790 if (L2TP_SKB_CB(skb)->ns != session->nr) { 770 - u64_stats_update_begin(&sstats->syncp); 771 - sstats->rx_seq_discards++; 772 - u64_stats_update_end(&sstats->syncp); 791 + atomic_long_inc(&session->stats.rx_seq_discards); 773 792 l2tp_dbg(session, L2TP_MSG_SEQ, 774 793 "%s: oos pkt %u len %d discarded, waiting for %u, reorder_q_len=%d\n", 775 794 session->name, L2TP_SKB_CB(skb)->ns, ··· 793 816 return; 794 817 795 818 discard: 796 - u64_stats_update_begin(&sstats->syncp); 797 - sstats->rx_errors++; 798 - u64_stats_update_end(&sstats->syncp); 819 + atomic_long_inc(&session->stats.rx_errors); 799 820 kfree_skb(skb); 800 821 801 822 if (session->deref) ··· 802 827 l2tp_session_dec_refcount(session); 803 828 } 804 829 EXPORT_SYMBOL(l2tp_recv_common); 830 + 831 + /* Drop skbs from the session's reorder_q 832 + */ 833 + int l2tp_session_queue_purge(struct l2tp_session *session) 834 
+ { 835 + struct sk_buff *skb = NULL; 836 + BUG_ON(!session); 837 + BUG_ON(session->magic != L2TP_SESSION_MAGIC); 838 + while ((skb = skb_dequeue(&session->reorder_q))) { 839 + atomic_long_inc(&session->stats.rx_errors); 840 + kfree_skb(skb); 841 + if (session->deref) 842 + (*session->deref)(session); 843 + } 844 + return 0; 845 + } 846 + EXPORT_SYMBOL_GPL(l2tp_session_queue_purge); 805 847 806 848 /* Internal UDP receive frame. Do the real work of receiving an L2TP data frame 807 849 * here. The skb is not on a list when we get here. ··· 835 843 u32 tunnel_id, session_id; 836 844 u16 version; 837 845 int length; 838 - struct l2tp_stats *tstats; 839 846 840 847 if (tunnel->sock && l2tp_verify_udp_checksum(tunnel->sock, skb)) 841 848 goto discard_bad_csum; ··· 923 932 discard_bad_csum: 924 933 LIMIT_NETDEBUG("%s: UDP: bad checksum\n", tunnel->name); 925 934 UDP_INC_STATS_USER(tunnel->l2tp_net, UDP_MIB_INERRORS, 0); 926 - tstats = &tunnel->stats; 927 - u64_stats_update_begin(&tstats->syncp); 928 - tstats->rx_errors++; 929 - u64_stats_update_end(&tstats->syncp); 935 + atomic_long_inc(&tunnel->stats.rx_errors); 930 936 kfree_skb(skb); 931 937 932 938 return 0; ··· 1050 1062 struct l2tp_tunnel *tunnel = session->tunnel; 1051 1063 unsigned int len = skb->len; 1052 1064 int error; 1053 - struct l2tp_stats *tstats, *sstats; 1054 1065 1055 1066 /* Debug */ 1056 1067 if (session->send_seq) ··· 1078 1091 error = ip_queue_xmit(skb, fl); 1079 1092 1080 1093 /* Update stats */ 1081 - tstats = &tunnel->stats; 1082 - u64_stats_update_begin(&tstats->syncp); 1083 - sstats = &session->stats; 1084 - u64_stats_update_begin(&sstats->syncp); 1085 1094 if (error >= 0) { 1086 - tstats->tx_packets++; 1087 - tstats->tx_bytes += len; 1088 - sstats->tx_packets++; 1089 - sstats->tx_bytes += len; 1095 + atomic_long_inc(&tunnel->stats.tx_packets); 1096 + atomic_long_add(len, &tunnel->stats.tx_bytes); 1097 + atomic_long_inc(&session->stats.tx_packets); 1098 + atomic_long_add(len, 
&session->stats.tx_bytes); 1090 1099 } else { 1091 - tstats->tx_errors++; 1092 - sstats->tx_errors++; 1100 + atomic_long_inc(&tunnel->stats.tx_errors); 1101 + atomic_long_inc(&session->stats.tx_errors); 1093 1102 } 1094 - u64_stats_update_end(&tstats->syncp); 1095 - u64_stats_update_end(&sstats->syncp); 1096 1103 1097 1104 return 0; 1098 1105 } ··· 1263 1282 /* No longer an encapsulation socket. See net/ipv4/udp.c */ 1264 1283 (udp_sk(sk))->encap_type = 0; 1265 1284 (udp_sk(sk))->encap_rcv = NULL; 1285 + (udp_sk(sk))->encap_destroy = NULL; 1266 1286 break; 1267 1287 case L2TP_ENCAPTYPE_IP: 1268 1288 break; ··· 1293 1311 1294 1312 /* When the tunnel is closed, all the attached sessions need to go too. 1295 1313 */ 1296 - static void l2tp_tunnel_closeall(struct l2tp_tunnel *tunnel) 1314 + void l2tp_tunnel_closeall(struct l2tp_tunnel *tunnel) 1297 1315 { 1298 1316 int hash; 1299 1317 struct hlist_node *walk; ··· 1316 1334 1317 1335 hlist_del_init(&session->hlist); 1318 1336 1319 - /* Since we should hold the sock lock while 1320 - * doing any unbinding, we need to release the 1321 - * lock we're holding before taking that lock. 1322 - * Hold a reference to the sock so it doesn't 1323 - * disappear as we're jumping between locks. 
1324 - */ 1325 1337 if (session->ref != NULL) 1326 1338 (*session->ref)(session); 1327 1339 1328 1340 write_unlock_bh(&tunnel->hlist_lock); 1329 1341 1330 - if (tunnel->version != L2TP_HDR_VER_2) { 1331 - struct l2tp_net *pn = l2tp_pernet(tunnel->l2tp_net); 1332 - 1333 - spin_lock_bh(&pn->l2tp_session_hlist_lock); 1334 - hlist_del_init_rcu(&session->global_hlist); 1335 - spin_unlock_bh(&pn->l2tp_session_hlist_lock); 1336 - synchronize_rcu(); 1337 - } 1342 + __l2tp_session_unhash(session); 1343 + l2tp_session_queue_purge(session); 1338 1344 1339 1345 if (session->session_close != NULL) 1340 1346 (*session->session_close)(session); 1341 1347 1342 1348 if (session->deref != NULL) 1343 1349 (*session->deref)(session); 1350 + 1351 + l2tp_session_dec_refcount(session); 1344 1352 1345 1353 write_lock_bh(&tunnel->hlist_lock); 1346 1354 ··· 1343 1371 } 1344 1372 } 1345 1373 write_unlock_bh(&tunnel->hlist_lock); 1374 + } 1375 + EXPORT_SYMBOL_GPL(l2tp_tunnel_closeall); 1376 + 1377 + /* Tunnel socket destroy hook for UDP encapsulation */ 1378 + static void l2tp_udp_encap_destroy(struct sock *sk) 1379 + { 1380 + struct l2tp_tunnel *tunnel = l2tp_sock_to_tunnel(sk); 1381 + if (tunnel) { 1382 + l2tp_tunnel_closeall(tunnel); 1383 + sock_put(sk); 1384 + } 1346 1385 } 1347 1386 1348 1387 /* Really kill the tunnel. ··· 1380 1397 return; 1381 1398 1382 1399 sock = sk->sk_socket; 1383 - BUG_ON(!sock); 1384 1400 1385 - /* If the tunnel socket was created directly by the kernel, use the 1386 - * sk_* API to release the socket now. Otherwise go through the 1387 - * inet_* layer to shut the socket down, and let userspace close it. 1401 + /* If the tunnel socket was created by userspace, then go through the 1402 + * inet layer to shut the socket down, and let userspace close it. 1403 + * Otherwise, if we created the socket directly within the kernel, use 1404 + * the sk API to release it here. 
1388 1405 * In either case the tunnel resources are freed in the socket 1389 1406 * destructor when the tunnel socket goes away. 1390 1407 */ 1391 - if (sock->file == NULL) { 1392 - kernel_sock_shutdown(sock, SHUT_RDWR); 1393 - sk_release_kernel(sk); 1408 + if (tunnel->fd >= 0) { 1409 + if (sock) 1410 + inet_shutdown(sock, 2); 1394 1411 } else { 1395 - inet_shutdown(sock, 2); 1412 + if (sock) 1413 + kernel_sock_shutdown(sock, SHUT_RDWR); 1414 + sk_release_kernel(sk); 1396 1415 } 1397 1416 1398 1417 l2tp_tunnel_sock_put(sk); ··· 1653 1668 /* Mark socket as an encapsulation socket. See net/ipv4/udp.c */ 1654 1669 udp_sk(sk)->encap_type = UDP_ENCAP_L2TPINUDP; 1655 1670 udp_sk(sk)->encap_rcv = l2tp_udp_encap_recv; 1671 + udp_sk(sk)->encap_destroy = l2tp_udp_encap_destroy; 1656 1672 #if IS_ENABLED(CONFIG_IPV6) 1657 1673 if (sk->sk_family == PF_INET6) 1658 1674 udpv6_encap_enable(); ··· 1709 1723 */ 1710 1724 int l2tp_tunnel_delete(struct l2tp_tunnel *tunnel) 1711 1725 { 1726 + l2tp_tunnel_closeall(tunnel); 1712 1727 return (false == queue_work(l2tp_wq, &tunnel->del_work)); 1713 1728 } 1714 1729 EXPORT_SYMBOL_GPL(l2tp_tunnel_delete); ··· 1718 1731 */ 1719 1732 void l2tp_session_free(struct l2tp_session *session) 1720 1733 { 1721 - struct l2tp_tunnel *tunnel; 1734 + struct l2tp_tunnel *tunnel = session->tunnel; 1722 1735 1723 1736 BUG_ON(atomic_read(&session->ref_count) != 0); 1724 1737 1725 - tunnel = session->tunnel; 1726 - if (tunnel != NULL) { 1738 + if (tunnel) { 1727 1739 BUG_ON(tunnel->magic != L2TP_TUNNEL_MAGIC); 1728 - 1729 - /* Delete the session from the hash */ 1730 - write_lock_bh(&tunnel->hlist_lock); 1731 - hlist_del_init(&session->hlist); 1732 - write_unlock_bh(&tunnel->hlist_lock); 1733 - 1734 - /* Unlink from the global hash if not L2TPv2 */ 1735 - if (tunnel->version != L2TP_HDR_VER_2) { 1736 - struct l2tp_net *pn = l2tp_pernet(tunnel->l2tp_net); 1737 - 1738 - spin_lock_bh(&pn->l2tp_session_hlist_lock); 1739 - hlist_del_init_rcu(&session->global_hlist); 
1740 - spin_unlock_bh(&pn->l2tp_session_hlist_lock); 1741 - synchronize_rcu(); 1742 - } 1743 - 1744 1740 if (session->session_id != 0) 1745 1741 atomic_dec(&l2tp_session_count); 1746 - 1747 1742 sock_put(tunnel->sock); 1748 - 1749 - /* This will delete the tunnel context if this 1750 - * is the last session on the tunnel. 1751 - */ 1752 1743 session->tunnel = NULL; 1753 1744 l2tp_tunnel_dec_refcount(tunnel); 1754 1745 } ··· 1737 1772 } 1738 1773 EXPORT_SYMBOL_GPL(l2tp_session_free); 1739 1774 1775 + /* Remove an l2tp session from l2tp_core's hash lists. 1776 + * Provides a tidyup interface for pseudowire code which can't just route all 1777 + * shutdown via. l2tp_session_delete and a pseudowire-specific session_close 1778 + * callback. 1779 + */ 1780 + void __l2tp_session_unhash(struct l2tp_session *session) 1781 + { 1782 + struct l2tp_tunnel *tunnel = session->tunnel; 1783 + 1784 + /* Remove the session from core hashes */ 1785 + if (tunnel) { 1786 + /* Remove from the per-tunnel hash */ 1787 + write_lock_bh(&tunnel->hlist_lock); 1788 + hlist_del_init(&session->hlist); 1789 + write_unlock_bh(&tunnel->hlist_lock); 1790 + 1791 + /* For L2TPv3 we have a per-net hash: remove from there, too */ 1792 + if (tunnel->version != L2TP_HDR_VER_2) { 1793 + struct l2tp_net *pn = l2tp_pernet(tunnel->l2tp_net); 1794 + spin_lock_bh(&pn->l2tp_session_hlist_lock); 1795 + hlist_del_init_rcu(&session->global_hlist); 1796 + spin_unlock_bh(&pn->l2tp_session_hlist_lock); 1797 + synchronize_rcu(); 1798 + } 1799 + } 1800 + } 1801 + EXPORT_SYMBOL_GPL(__l2tp_session_unhash); 1802 + 1740 1803 /* This function is used by the netlink SESSION_DELETE command and by 1741 1804 pseudowire modules. 
1742 1805 */ 1743 1806 int l2tp_session_delete(struct l2tp_session *session) 1744 1807 { 1808 + if (session->ref) 1809 + (*session->ref)(session); 1810 + __l2tp_session_unhash(session); 1811 + l2tp_session_queue_purge(session); 1745 1812 if (session->session_close != NULL) 1746 1813 (*session->session_close)(session); 1747 - 1814 + if (session->deref) 1815 + (*session->deref)(session); 1748 1816 l2tp_session_dec_refcount(session); 1749 - 1750 1817 return 0; 1751 1818 } 1752 1819 EXPORT_SYMBOL_GPL(l2tp_session_delete); 1753 - 1754 1820 1755 1821 /* We come here whenever a session's send_seq, cookie_len or 1756 1822 * l2specific_len parameters are set.
+12 -10
net/l2tp/l2tp_core.h
··· 36 36 struct sk_buff; 37 37 38 38 struct l2tp_stats { 39 - u64 tx_packets; 40 - u64 tx_bytes; 41 - u64 tx_errors; 42 - u64 rx_packets; 43 - u64 rx_bytes; 44 - u64 rx_seq_discards; 45 - u64 rx_oos_packets; 46 - u64 rx_errors; 47 - u64 rx_cookie_discards; 48 - struct u64_stats_sync syncp; 39 + atomic_long_t tx_packets; 40 + atomic_long_t tx_bytes; 41 + atomic_long_t tx_errors; 42 + atomic_long_t rx_packets; 43 + atomic_long_t rx_bytes; 44 + atomic_long_t rx_seq_discards; 45 + atomic_long_t rx_oos_packets; 46 + atomic_long_t rx_errors; 47 + atomic_long_t rx_cookie_discards; 49 48 }; 50 49 51 50 struct l2tp_tunnel; ··· 239 240 extern struct l2tp_tunnel *l2tp_tunnel_find_nth(struct net *net, int nth); 240 241 241 242 extern int l2tp_tunnel_create(struct net *net, int fd, int version, u32 tunnel_id, u32 peer_tunnel_id, struct l2tp_tunnel_cfg *cfg, struct l2tp_tunnel **tunnelp); 243 + extern void l2tp_tunnel_closeall(struct l2tp_tunnel *tunnel); 242 244 extern int l2tp_tunnel_delete(struct l2tp_tunnel *tunnel); 243 245 extern struct l2tp_session *l2tp_session_create(int priv_size, struct l2tp_tunnel *tunnel, u32 session_id, u32 peer_session_id, struct l2tp_session_cfg *cfg); 246 + extern void __l2tp_session_unhash(struct l2tp_session *session); 244 247 extern int l2tp_session_delete(struct l2tp_session *session); 245 248 extern void l2tp_session_free(struct l2tp_session *session); 246 249 extern void l2tp_recv_common(struct l2tp_session *session, struct sk_buff *skb, unsigned char *ptr, unsigned char *optr, u16 hdrflags, int length, int (*payload_hook)(struct sk_buff *skb)); 250 + extern int l2tp_session_queue_purge(struct l2tp_session *session); 247 251 extern int l2tp_udp_encap_recv(struct sock *sk, struct sk_buff *skb); 248 252 249 253 extern int l2tp_xmit_skb(struct l2tp_session *session, struct sk_buff *skb, int hdr_len);
+14 -14
net/l2tp/l2tp_debugfs.c
··· 146 146 tunnel->sock ? atomic_read(&tunnel->sock->sk_refcnt) : 0, 147 147 atomic_read(&tunnel->ref_count)); 148 148 149 - seq_printf(m, " %08x rx %llu/%llu/%llu rx %llu/%llu/%llu\n", 149 + seq_printf(m, " %08x rx %ld/%ld/%ld rx %ld/%ld/%ld\n", 150 150 tunnel->debug, 151 - (unsigned long long)tunnel->stats.tx_packets, 152 - (unsigned long long)tunnel->stats.tx_bytes, 153 - (unsigned long long)tunnel->stats.tx_errors, 154 - (unsigned long long)tunnel->stats.rx_packets, 155 - (unsigned long long)tunnel->stats.rx_bytes, 156 - (unsigned long long)tunnel->stats.rx_errors); 151 + atomic_long_read(&tunnel->stats.tx_packets), 152 + atomic_long_read(&tunnel->stats.tx_bytes), 153 + atomic_long_read(&tunnel->stats.tx_errors), 154 + atomic_long_read(&tunnel->stats.rx_packets), 155 + atomic_long_read(&tunnel->stats.rx_bytes), 156 + atomic_long_read(&tunnel->stats.rx_errors)); 157 157 158 158 if (tunnel->show != NULL) 159 159 tunnel->show(m, tunnel); ··· 203 203 seq_printf(m, "\n"); 204 204 } 205 205 206 - seq_printf(m, " %hu/%hu tx %llu/%llu/%llu rx %llu/%llu/%llu\n", 206 + seq_printf(m, " %hu/%hu tx %ld/%ld/%ld rx %ld/%ld/%ld\n", 207 207 session->nr, session->ns, 208 - (unsigned long long)session->stats.tx_packets, 209 - (unsigned long long)session->stats.tx_bytes, 210 - (unsigned long long)session->stats.tx_errors, 211 - (unsigned long long)session->stats.rx_packets, 212 - (unsigned long long)session->stats.rx_bytes, 213 - (unsigned long long)session->stats.rx_errors); 208 + atomic_long_read(&session->stats.tx_packets), 209 + atomic_long_read(&session->stats.tx_bytes), 210 + atomic_long_read(&session->stats.tx_errors), 211 + atomic_long_read(&session->stats.rx_packets), 212 + atomic_long_read(&session->stats.rx_bytes), 213 + atomic_long_read(&session->stats.rx_errors)); 214 214 215 215 if (session->show != NULL) 216 216 session->show(m, session);
+6
net/l2tp/l2tp_ip.c
··· 228 228 static void l2tp_ip_destroy_sock(struct sock *sk) 229 229 { 230 230 struct sk_buff *skb; 231 + struct l2tp_tunnel *tunnel = l2tp_sock_to_tunnel(sk); 231 232 232 233 while ((skb = __skb_dequeue_tail(&sk->sk_write_queue)) != NULL) 233 234 kfree_skb(skb); 235 + 236 + if (tunnel) { 237 + l2tp_tunnel_closeall(tunnel); 238 + sock_put(sk); 239 + } 234 240 235 241 sk_refcnt_debug_dec(sk); 236 242 }
+7
net/l2tp/l2tp_ip6.c
··· 241 241 242 242 static void l2tp_ip6_destroy_sock(struct sock *sk) 243 243 { 244 + struct l2tp_tunnel *tunnel = l2tp_sock_to_tunnel(sk); 245 + 244 246 lock_sock(sk); 245 247 ip6_flush_pending_frames(sk); 246 248 release_sock(sk); 249 + 250 + if (tunnel) { 251 + l2tp_tunnel_closeall(tunnel); 252 + sock_put(sk); 253 + } 247 254 248 255 inet6_destroy_sock(sk); 249 256 }
+28 -44
net/l2tp/l2tp_netlink.c
···
 #if IS_ENABLED(CONFIG_IPV6)
 	struct ipv6_pinfo *np = NULL;
 #endif
-	struct l2tp_stats stats;
-	unsigned int start;

 	hdr = genlmsg_put(skb, portid, seq, &l2tp_nl_family, flags,
 			  L2TP_CMD_TUNNEL_GET);
···
 	if (nest == NULL)
 		goto nla_put_failure;

-	do {
-		start = u64_stats_fetch_begin(&tunnel->stats.syncp);
-		stats.tx_packets = tunnel->stats.tx_packets;
-		stats.tx_bytes = tunnel->stats.tx_bytes;
-		stats.tx_errors = tunnel->stats.tx_errors;
-		stats.rx_packets = tunnel->stats.rx_packets;
-		stats.rx_bytes = tunnel->stats.rx_bytes;
-		stats.rx_errors = tunnel->stats.rx_errors;
-		stats.rx_seq_discards = tunnel->stats.rx_seq_discards;
-		stats.rx_oos_packets = tunnel->stats.rx_oos_packets;
-	} while (u64_stats_fetch_retry(&tunnel->stats.syncp, start));
-
-	if (nla_put_u64(skb, L2TP_ATTR_TX_PACKETS, stats.tx_packets) ||
-	    nla_put_u64(skb, L2TP_ATTR_TX_BYTES, stats.tx_bytes) ||
-	    nla_put_u64(skb, L2TP_ATTR_TX_ERRORS, stats.tx_errors) ||
-	    nla_put_u64(skb, L2TP_ATTR_RX_PACKETS, stats.rx_packets) ||
-	    nla_put_u64(skb, L2TP_ATTR_RX_BYTES, stats.rx_bytes) ||
+	if (nla_put_u64(skb, L2TP_ATTR_TX_PACKETS,
+			atomic_long_read(&tunnel->stats.tx_packets)) ||
+	    nla_put_u64(skb, L2TP_ATTR_TX_BYTES,
+			atomic_long_read(&tunnel->stats.tx_bytes)) ||
+	    nla_put_u64(skb, L2TP_ATTR_TX_ERRORS,
+			atomic_long_read(&tunnel->stats.tx_errors)) ||
+	    nla_put_u64(skb, L2TP_ATTR_RX_PACKETS,
+			atomic_long_read(&tunnel->stats.rx_packets)) ||
+	    nla_put_u64(skb, L2TP_ATTR_RX_BYTES,
+			atomic_long_read(&tunnel->stats.rx_bytes)) ||
 	    nla_put_u64(skb, L2TP_ATTR_RX_SEQ_DISCARDS,
-			stats.rx_seq_discards) ||
+			atomic_long_read(&tunnel->stats.rx_seq_discards)) ||
 	    nla_put_u64(skb, L2TP_ATTR_RX_OOS_PACKETS,
-			stats.rx_oos_packets) ||
-	    nla_put_u64(skb, L2TP_ATTR_RX_ERRORS, stats.rx_errors))
+			atomic_long_read(&tunnel->stats.rx_oos_packets)) ||
+	    nla_put_u64(skb, L2TP_ATTR_RX_ERRORS,
+			atomic_long_read(&tunnel->stats.rx_errors)))
 		goto nla_put_failure;
 	nla_nest_end(skb, nest);
···
 	struct nlattr *nest;
 	struct l2tp_tunnel *tunnel = session->tunnel;
 	struct sock *sk = NULL;
-	struct l2tp_stats stats;
-	unsigned int start;

 	sk = tunnel->sock;
···
 	if (nest == NULL)
 		goto nla_put_failure;

-	do {
-		start = u64_stats_fetch_begin(&session->stats.syncp);
-		stats.tx_packets = session->stats.tx_packets;
-		stats.tx_bytes = session->stats.tx_bytes;
-		stats.tx_errors = session->stats.tx_errors;
-		stats.rx_packets = session->stats.rx_packets;
-		stats.rx_bytes = session->stats.rx_bytes;
-		stats.rx_errors = session->stats.rx_errors;
-		stats.rx_seq_discards = session->stats.rx_seq_discards;
-		stats.rx_oos_packets = session->stats.rx_oos_packets;
-	} while (u64_stats_fetch_retry(&session->stats.syncp, start));
-
-	if (nla_put_u64(skb, L2TP_ATTR_TX_PACKETS, stats.tx_packets) ||
-	    nla_put_u64(skb, L2TP_ATTR_TX_BYTES, stats.tx_bytes) ||
-	    nla_put_u64(skb, L2TP_ATTR_TX_ERRORS, stats.tx_errors) ||
-	    nla_put_u64(skb, L2TP_ATTR_RX_PACKETS, stats.rx_packets) ||
-	    nla_put_u64(skb, L2TP_ATTR_RX_BYTES, stats.rx_bytes) ||
+	if (nla_put_u64(skb, L2TP_ATTR_TX_PACKETS,
+			atomic_long_read(&session->stats.tx_packets)) ||
+	    nla_put_u64(skb, L2TP_ATTR_TX_BYTES,
+			atomic_long_read(&session->stats.tx_bytes)) ||
+	    nla_put_u64(skb, L2TP_ATTR_TX_ERRORS,
+			atomic_long_read(&session->stats.tx_errors)) ||
+	    nla_put_u64(skb, L2TP_ATTR_RX_PACKETS,
+			atomic_long_read(&session->stats.rx_packets)) ||
+	    nla_put_u64(skb, L2TP_ATTR_RX_BYTES,
+			atomic_long_read(&session->stats.rx_bytes)) ||
 	    nla_put_u64(skb, L2TP_ATTR_RX_SEQ_DISCARDS,
-			stats.rx_seq_discards) ||
+			atomic_long_read(&session->stats.rx_seq_discards)) ||
 	    nla_put_u64(skb, L2TP_ATTR_RX_OOS_PACKETS,
-			stats.rx_oos_packets) ||
-	    nla_put_u64(skb, L2TP_ATTR_RX_ERRORS, stats.rx_errors))
+			atomic_long_read(&session->stats.rx_oos_packets)) ||
+	    nla_put_u64(skb, L2TP_ATTR_RX_ERRORS,
+			atomic_long_read(&session->stats.rx_errors)))
 		goto nla_put_failure;
 	nla_nest_end(skb, nest);
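The diff above replaces a `u64_stats` seqcount snapshot loop with plain per-counter atomic reads. A minimal userspace sketch of that pattern, using C11 `<stdatomic.h>` as a stand-in for the kernel's `atomic_long_t` (names here are illustrative, not kernel API):

```c
#include <stdatomic.h>

/* Hypothetical model of the l2tp stats change: each counter is an
 * atomic long, so writers do one lock-free add per counter and
 * readers do one plain load, with no seqcount retry loop. */
struct tunnel_stats {
	atomic_long tx_packets;
	atomic_long tx_bytes;
};

static void stats_tx(struct tunnel_stats *s, long bytes)
{
	/* writer side: independent atomic increments, no lock held */
	atomic_fetch_add_explicit(&s->tx_packets, 1, memory_order_relaxed);
	atomic_fetch_add_explicit(&s->tx_bytes, bytes, memory_order_relaxed);
}

static long stats_read(atomic_long *ctr)
{
	/* reader side: each counter is individually consistent; the
	 * trade-off is that the set of counters is no longer a single
	 * consistent snapshot, which is acceptable for statistics */
	return atomic_load_explicit(ctr, memory_order_relaxed);
}
```

The design trade-off is fewer reader/writer constraints at the cost of snapshot consistency across counters, which rarely matters for monitoring data.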
net/l2tp/l2tp_ppp.c  +36 -75
···
 #include <net/ip.h>
 #include <net/udp.h>
 #include <net/xfrm.h>
+#include <net/inet_common.h>

 #include <asm/byteorder.h>
 #include <linux/atomic.h>
···
 		 session->name);

 	/* Not bound. Nothing we can do, so discard. */
-	session->stats.rx_errors++;
+	atomic_long_inc(&session->stats.rx_errors);
 	kfree_skb(skb);
 }
···
 {
 	struct pppol2tp_session *ps = l2tp_session_priv(session);
 	struct sock *sk = ps->sock;
-	struct sk_buff *skb;
+	struct socket *sock = sk->sk_socket;

 	BUG_ON(session->magic != L2TP_SESSION_MAGIC);

-	if (session->session_id == 0)
-		goto out;

-	if (sk != NULL) {
-		lock_sock(sk);
-
-		if (sk->sk_state & (PPPOX_CONNECTED | PPPOX_BOUND)) {
-			pppox_unbind_sock(sk);
-			sk->sk_state = PPPOX_DEAD;
-			sk->sk_state_change(sk);
-		}
-
-		/* Purge any queued data */
-		skb_queue_purge(&sk->sk_receive_queue);
-		skb_queue_purge(&sk->sk_write_queue);
-		while ((skb = skb_dequeue(&session->reorder_q))) {
-			kfree_skb(skb);
-			sock_put(sk);
-		}
-
-		release_sock(sk);
+	if (sock) {
+		inet_shutdown(sock, 2);
+		/* Don't let the session go away before our socket does */
+		l2tp_session_inc_refcount(session);
 	}
-
-out:
 	return;
 }
···
  */
 static void pppol2tp_session_destruct(struct sock *sk)
 {
-	struct l2tp_session *session;
-
-	if (sk->sk_user_data != NULL) {
-		session = sk->sk_user_data;
-		if (session == NULL)
-			goto out;
-
+	struct l2tp_session *session = sk->sk_user_data;
+	if (session) {
 		sk->sk_user_data = NULL;
 		BUG_ON(session->magic != L2TP_SESSION_MAGIC);
 		l2tp_session_dec_refcount(session);
 	}
-
-out:
 	return;
 }
···
 	session = pppol2tp_sock_to_session(sk);

 	/* Purge any queued data */
-	skb_queue_purge(&sk->sk_receive_queue);
-	skb_queue_purge(&sk->sk_write_queue);
 	if (session != NULL) {
-		struct sk_buff *skb;
-		while ((skb = skb_dequeue(&session->reorder_q))) {
-			kfree_skb(skb);
-			sock_put(sk);
-		}
+		__l2tp_session_unhash(session);
+		l2tp_session_queue_purge(session);
 		sock_put(sk);
 	}
+	skb_queue_purge(&sk->sk_receive_queue);
+	skb_queue_purge(&sk->sk_write_queue);

 	release_sock(sk);
···
 	return error;
 }

-/* Called when deleting sessions via the netlink interface.
- */
-static int pppol2tp_session_delete(struct l2tp_session *session)
-{
-	struct pppol2tp_session *ps = l2tp_session_priv(session);
-
-	if (ps->sock == NULL)
-		l2tp_session_dec_refcount(session);
-
-	return 0;
-}
-
 #endif /* CONFIG_L2TP_V3 */

 /* getname() support.
···
 static void pppol2tp_copy_stats(struct pppol2tp_ioc_stats *dest,
 				struct l2tp_stats *stats)
 {
-	dest->tx_packets = stats->tx_packets;
-	dest->tx_bytes = stats->tx_bytes;
-	dest->tx_errors = stats->tx_errors;
-	dest->rx_packets = stats->rx_packets;
-	dest->rx_bytes = stats->rx_bytes;
-	dest->rx_seq_discards = stats->rx_seq_discards;
-	dest->rx_oos_packets = stats->rx_oos_packets;
-	dest->rx_errors = stats->rx_errors;
+	dest->tx_packets = atomic_long_read(&stats->tx_packets);
+	dest->tx_bytes = atomic_long_read(&stats->tx_bytes);
+	dest->tx_errors = atomic_long_read(&stats->tx_errors);
+	dest->rx_packets = atomic_long_read(&stats->rx_packets);
+	dest->rx_bytes = atomic_long_read(&stats->rx_bytes);
+	dest->rx_seq_discards = atomic_long_read(&stats->rx_seq_discards);
+	dest->rx_oos_packets = atomic_long_read(&stats->rx_oos_packets);
+	dest->rx_errors = atomic_long_read(&stats->rx_errors);
 }

 /* Session ioctl helper.
···
 		   tunnel->name,
 		   (tunnel == tunnel->sock->sk_user_data) ? 'Y' : 'N',
 		   atomic_read(&tunnel->ref_count) - 1);
-	seq_printf(m, " %08x %llu/%llu/%llu %llu/%llu/%llu\n",
+	seq_printf(m, " %08x %ld/%ld/%ld %ld/%ld/%ld\n",
 		   tunnel->debug,
-		   (unsigned long long)tunnel->stats.tx_packets,
-		   (unsigned long long)tunnel->stats.tx_bytes,
-		   (unsigned long long)tunnel->stats.tx_errors,
-		   (unsigned long long)tunnel->stats.rx_packets,
-		   (unsigned long long)tunnel->stats.rx_bytes,
-		   (unsigned long long)tunnel->stats.rx_errors);
+		   atomic_long_read(&tunnel->stats.tx_packets),
+		   atomic_long_read(&tunnel->stats.tx_bytes),
+		   atomic_long_read(&tunnel->stats.tx_errors),
+		   atomic_long_read(&tunnel->stats.rx_packets),
+		   atomic_long_read(&tunnel->stats.rx_bytes),
+		   atomic_long_read(&tunnel->stats.rx_errors));
 }

 static void pppol2tp_seq_session_show(struct seq_file *m, void *v)
···
 		   session->lns_mode ? "LNS" : "LAC",
 		   session->debug,
 		   jiffies_to_msecs(session->reorder_timeout));
-	seq_printf(m, " %hu/%hu %llu/%llu/%llu %llu/%llu/%llu\n",
+	seq_printf(m, " %hu/%hu %ld/%ld/%ld %ld/%ld/%ld\n",
 		   session->nr, session->ns,
-		   (unsigned long long)session->stats.tx_packets,
-		   (unsigned long long)session->stats.tx_bytes,
-		   (unsigned long long)session->stats.tx_errors,
-		   (unsigned long long)session->stats.rx_packets,
-		   (unsigned long long)session->stats.rx_bytes,
-		   (unsigned long long)session->stats.rx_errors);
+		   atomic_long_read(&session->stats.tx_packets),
+		   atomic_long_read(&session->stats.tx_bytes),
+		   atomic_long_read(&session->stats.tx_errors),
+		   atomic_long_read(&session->stats.rx_packets),
+		   atomic_long_read(&session->stats.rx_bytes),
+		   atomic_long_read(&session->stats.rx_errors));

 	if (po)
 		seq_printf(m, " interface %s\n", ppp_dev_name(&po->chan));
···
 static const struct l2tp_nl_cmd_ops pppol2tp_nl_cmd_ops = {
 	.session_create	= pppol2tp_session_create,
-	.session_delete	= pppol2tp_session_delete,
+	.session_delete	= l2tp_session_delete,
 };

 #endif /* CONFIG_L2TP_V3 */
net/netfilter/ipvs/ip_vs_core.c  +8 -6
···
 	skb_reset_network_header(skb);
 	IP_VS_DBG(12, "ICMP for IPIP %pI4->%pI4: mtu=%u\n",
 		  &ip_hdr(skb)->saddr, &ip_hdr(skb)->daddr, mtu);
-	rcu_read_lock();
 	ipv4_update_pmtu(skb, dev_net(skb->dev),
 			 mtu, 0, 0, 0, 0);
-	rcu_read_unlock();
 	/* Client uses PMTUD? */
 	if (!(cih->frag_off & htons(IP_DF)))
 		goto ignore_ipip;
···
 	}
 	/* ipvs enabled in this netns ? */
 	net = skb_net(skb);
-	if (!net_ipvs(net)->enable)
+	ipvs = net_ipvs(net);
+	if (unlikely(sysctl_backup_only(ipvs) || !ipvs->enable))
 		return NF_ACCEPT;

 	ip_vs_fill_iph_skb(af, skb, &iph);
···
 	}

 	IP_VS_DBG_PKT(11, af, pp, skb, 0, "Incoming packet");
-	ipvs = net_ipvs(net);
 	/* Check the server status */
 	if (cp->dest && !(cp->dest->flags & IP_VS_DEST_F_AVAILABLE)) {
 		/* the destination server is not available */
···
 {
 	int r;
 	struct net *net;
+	struct netns_ipvs *ipvs;

 	if (ip_hdr(skb)->protocol != IPPROTO_ICMP)
 		return NF_ACCEPT;

 	/* ipvs enabled in this netns ? */
 	net = skb_net(skb);
-	if (!net_ipvs(net)->enable)
+	ipvs = net_ipvs(net);
+	if (unlikely(sysctl_backup_only(ipvs) || !ipvs->enable))
 		return NF_ACCEPT;

 	return ip_vs_in_icmp(skb, &r, hooknum);
···
 {
 	int r;
 	struct net *net;
+	struct netns_ipvs *ipvs;
 	struct ip_vs_iphdr iphdr;

 	ip_vs_fill_iph_skb(AF_INET6, skb, &iphdr);
···
 	/* ipvs enabled in this netns ? */
 	net = skb_net(skb);
-	if (!net_ipvs(net)->enable)
+	ipvs = net_ipvs(net);
+	if (unlikely(sysctl_backup_only(ipvs) || !ipvs->enable))
 		return NF_ACCEPT;

 	return ip_vs_in_icmp_v6(skb, &r, hooknum, &iphdr);
net/netfilter/ipvs/ip_vs_ctl.c  +7
···
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec,
 	},
+	{
+		.procname	= "backup_only",
+		.maxlen		= sizeof(int),
+		.mode		= 0644,
+		.proc_handler	= proc_dointvec,
+	},
 #ifdef CONFIG_IP_VS_DEBUG
 	{
 		.procname	= "debug_level",
···
 	tbl[idx++].data = &ipvs->sysctl_nat_icmp_send;
 	ipvs->sysctl_pmtu_disc = 1;
 	tbl[idx++].data = &ipvs->sysctl_pmtu_disc;
+	tbl[idx++].data = &ipvs->sysctl_backup_only;


 	ipvs->sysctl_hdr = register_net_sysctl(net, "net/ipv4/vs", tbl);
net/netfilter/ipvs/ip_vs_proto_sctp.c  +9 -7
···
 	sctp_chunkhdr_t _sctpch, *sch;
 	unsigned char chunk_type;
 	int event, next_state;
-	int ihl;
+	int ihl, cofs;

 #ifdef CONFIG_IP_VS_IPV6
 	ihl = cp->af == AF_INET ? ip_hdrlen(skb) : sizeof(struct ipv6hdr);
···
 	ihl = ip_hdrlen(skb);
 #endif

-	sch = skb_header_pointer(skb, ihl + sizeof(sctp_sctphdr_t),
-				 sizeof(_sctpch), &_sctpch);
+	cofs = ihl + sizeof(sctp_sctphdr_t);
+	sch = skb_header_pointer(skb, cofs, sizeof(_sctpch), &_sctpch);
 	if (sch == NULL)
 		return;
···
 	 */
 	if ((sch->type == SCTP_CID_COOKIE_ECHO) ||
 	    (sch->type == SCTP_CID_COOKIE_ACK)) {
-		sch = skb_header_pointer(skb, (ihl + sizeof(sctp_sctphdr_t) +
-				sch->length), sizeof(_sctpch), &_sctpch);
-		if (sch) {
-			if (sch->type == SCTP_CID_ABORT)
+		int clen = ntohs(sch->length);
+
+		if (clen >= sizeof(sctp_chunkhdr_t)) {
+			sch = skb_header_pointer(skb, cofs + ALIGN(clen, 4),
+						 sizeof(_sctpch), &_sctpch);
+			if (sch && sch->type == SCTP_CID_ABORT)
 				chunk_type = sch->type;
 		}
 	}
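The sctp fix above validates a chunk's reported length before stepping to the next chunk and advances by the 4-byte-aligned size. A hedged userspace sketch of the same bounds-checked walk (struct layout, names, and host byte order here are illustrative, not the kernel's):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Toy SCTP-style chunk header: 1-byte type, 1-byte flags,
 * 2-byte length covering the header itself. */
struct chunkhdr {
	uint8_t  type;
	uint8_t  flags;
	uint16_t length;	/* host order in this sketch */
};

#define CHUNK_ALIGN(x) (((x) + 3u) & ~3u)	/* chunks are 4-byte padded */

/* Return the type of the second chunk in buf, or -1 if the first
 * chunk's length is malformed or the second header is out of bounds. */
static int next_chunk_type(const uint8_t *buf, size_t len)
{
	struct chunkhdr ch;
	size_t clen, next;

	if (len < sizeof(ch))
		return -1;
	memcpy(&ch, buf, sizeof(ch));
	clen = ch.length;
	if (clen < sizeof(ch))
		return -1;	/* would loop forever or walk backwards */
	next = CHUNK_ALIGN(clen);
	if (next + sizeof(ch) > len)
		return -1;	/* second header not fully inside buffer */
	memcpy(&ch, buf + next, sizeof(ch));
	return ch.type;
}
```

The two checks correspond to the patch: reject lengths smaller than a header, and only dereference the next chunk if it lies fully within the packet.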
net/netfilter/nf_conntrack_proto_dccp.c  +6 -6
···
 {
 	int ret;

+	ret = register_pernet_subsys(&dccp_net_ops);
+	if (ret < 0)
+		goto out_pernet;
+
 	ret = nf_ct_l4proto_register(&dccp_proto4);
 	if (ret < 0)
 		goto out_dccp4;
···
 	if (ret < 0)
 		goto out_dccp6;

-	ret = register_pernet_subsys(&dccp_net_ops);
-	if (ret < 0)
-		goto out_pernet;
-
 	return 0;
-out_pernet:
-	nf_ct_l4proto_unregister(&dccp_proto6);
 out_dccp6:
 	nf_ct_l4proto_unregister(&dccp_proto4);
 out_dccp4:
+	unregister_pernet_subsys(&dccp_net_ops);
+out_pernet:
 	return ret;
 }
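This dccp hunk (and the identical gre, sctp, and udplite hunks below) reorders init so the per-net state exists before the protocol handlers that use it are registered, with the error path unwinding in exact reverse order. A userspace sketch of that goto-unwind pattern, with stand-in functions that log their call order (none of these are kernel APIs):

```c
#include <string.h>

static char call_log[64];

/* Stand-ins for the registration steps; proto6 simulates a failure
 * so the unwind path is exercised. */
static int register_pernet(void)    { strcat(call_log, "P"); return 0; }
static void unregister_pernet(void) { strcat(call_log, "p"); }
static int register_proto4(void)    { strcat(call_log, "4"); return 0; }
static void unregister_proto4(void) { strcat(call_log, "-"); }
static int register_proto6(void)    { strcat(call_log, "6"); return -1; }

static int proto_init(void)
{
	int ret;

	ret = register_pernet();	/* shared state first... */
	if (ret < 0)
		goto out_pernet;
	ret = register_proto4();	/* ...then the code that uses it */
	if (ret < 0)
		goto out_proto4;
	ret = register_proto6();
	if (ret < 0)
		goto out_proto6;
	return 0;
out_proto6:
	unregister_proto4();		/* unwind in reverse order */
out_proto4:
	unregister_pernet();
out_pernet:
	return ret;
}
```

The invariant is that at every `goto` target, exactly the steps that succeeded so far get undone, newest first, so a handler is never live while its per-net state is absent.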
net/netfilter/nf_conntrack_proto_gre.c  +6 -6
···
 {
 	int ret;

-	ret = nf_ct_l4proto_register(&nf_conntrack_l4proto_gre4);
-	if (ret < 0)
-		goto out_gre4;
-
 	ret = register_pernet_subsys(&proto_gre_net_ops);
 	if (ret < 0)
 		goto out_pernet;

+	ret = nf_ct_l4proto_register(&nf_conntrack_l4proto_gre4);
+	if (ret < 0)
+		goto out_gre4;
+
 	return 0;
-out_pernet:
-	nf_ct_l4proto_unregister(&nf_conntrack_l4proto_gre4);
 out_gre4:
+	unregister_pernet_subsys(&proto_gre_net_ops);
+out_pernet:
 	return ret;
 }
net/netfilter/nf_conntrack_proto_sctp.c  +6 -6
···
 {
 	int ret;

+	ret = register_pernet_subsys(&sctp_net_ops);
+	if (ret < 0)
+		goto out_pernet;
+
 	ret = nf_ct_l4proto_register(&nf_conntrack_l4proto_sctp4);
 	if (ret < 0)
 		goto out_sctp4;
···
 	if (ret < 0)
 		goto out_sctp6;

-	ret = register_pernet_subsys(&sctp_net_ops);
-	if (ret < 0)
-		goto out_pernet;
-
 	return 0;
-out_pernet:
-	nf_ct_l4proto_unregister(&nf_conntrack_l4proto_sctp6);
 out_sctp6:
 	nf_ct_l4proto_unregister(&nf_conntrack_l4proto_sctp4);
 out_sctp4:
+	unregister_pernet_subsys(&sctp_net_ops);
+out_pernet:
 	return ret;
 }
net/netfilter/nf_conntrack_proto_udplite.c  +6 -6
···
 {
 	int ret;

+	ret = register_pernet_subsys(&udplite_net_ops);
+	if (ret < 0)
+		goto out_pernet;
+
 	ret = nf_ct_l4proto_register(&nf_conntrack_l4proto_udplite4);
 	if (ret < 0)
 		goto out_udplite4;
···
 	if (ret < 0)
 		goto out_udplite6;

-	ret = register_pernet_subsys(&udplite_net_ops);
-	if (ret < 0)
-		goto out_pernet;
-
 	return 0;
-out_pernet:
-	nf_ct_l4proto_unregister(&nf_conntrack_l4proto_udplite6);
 out_udplite6:
 	nf_ct_l4proto_unregister(&nf_conntrack_l4proto_udplite4);
 out_udplite4:
+	unregister_pernet_subsys(&udplite_net_ops);
+out_pernet:
 	return ret;
 }
net/netlink/genetlink.c  +1
···
 	int err = 0;

 	BUG_ON(grp->name[0] == '\0');
+	BUG_ON(memchr(grp->name, '\0', GENL_NAMSIZ) == NULL);

 	genl_lock();
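The genetlink change above guards against multicast group names that fill the whole fixed-size buffer without a NUL terminator (the thermal-layer bug in item 7 of the summary). A small userspace sketch of the same `memchr` check, with a stand-in validator function (only `GENL_NAMSIZ = 16` is taken from the kernel):

```c
#include <string.h>

#define GENL_NAMSIZ 16	/* length of family/group names, incl. NUL */

/* Accept a name only if it is non-empty and NUL-terminated within
 * the buffer; an unterminated name would overrun when used as a
 * C string later. */
static int name_ok(const char name[GENL_NAMSIZ])
{
	if (name[0] == '\0')
		return 0;	/* empty name rejected */
	if (memchr(name, '\0', GENL_NAMSIZ) == NULL)
		return 0;	/* no NUL in the first GENL_NAMSIZ bytes */
	return 1;
}
```

`memchr` is the right tool here because, unlike `strlen`, it never reads past the fixed-size buffer even when the terminator is missing.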
net/unix/af_unix.c  +5 -6
···
 #endif
 }

-static int unix_release_sock(struct sock *sk, int embrion)
+static void unix_release_sock(struct sock *sk, int embrion)
 {
 	struct unix_sock *u = unix_sk(sk);
 	struct path path;
···
 	if (unix_tot_inflight)
 		unix_gc();		/* Garbage collect fds */
-
-	return 0;
 }

 static void init_peercred(struct sock *sk)
···
 	if (!sk)
 		return 0;

+	unix_release_sock(sk, 0);
 	sock->sk = NULL;

-	return unix_release_sock(sk, 0);
+	return 0;
 }

 static int unix_autobind(struct socket *sock)
···
 	if (UNIXCB(skb).cred)
 		return;
 	if (test_bit(SOCK_PASSCRED, &sock->flags) ||
-	    !other->sk_socket ||
-	    test_bit(SOCK_PASSCRED, &other->sk_socket->flags)) {
+	    (other->sk_socket &&
+	     test_bit(SOCK_PASSCRED, &other->sk_socket->flags))) {
 		UNIXCB(skb).pid  = get_pid(task_tgid(current));
 		UNIXCB(skb).cred = get_current_cred();
 	}
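The af_unix hunk reorders release so the socket is torn down (and marked dead, per item 18 of the summary) before `sock->sk` is cleared, instead of after. A toy userspace model of that ordering, with purely illustrative types standing in for `struct sock`/`struct socket`:

```c
#include <stddef.h>

/* Illustrative stand-ins: the owner holds a pointer to the inner
 * object, and release must finish teardown before dropping it. */
struct toy_sock   { int dead; };
struct toy_socket { struct toy_sock *sk; };

static void toy_release_sock(struct toy_sock *sk)
{
	sk->dead = 1;	/* mark dead while the owner still sees it */
}

static int toy_release(struct toy_socket *sock)
{
	struct toy_sock *sk = sock->sk;

	if (!sk)
		return 0;
	toy_release_sock(sk);	/* teardown first... */
	sock->sk = NULL;	/* ...then sever the back-pointer */
	return 0;
}
```

The point of the ordering is that concurrent users who still reach the inner object observe it as dead, rather than finding a live-looking object whose owner pointer has already been NULLed.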