Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Pull networking updates from David Miller:

1) tx_filtered/ps_tx_buf queues need to be accessed with the SKB queue
lock, from Arik Nemtsov.

2) Don't call 802.11 driver's filter configure method until it's
actually open, from Felix Fietkau.

3) Use ieee80211_free_txskb(), otherwise we leak control information.
From Johannes Berg.

4) Fix memory leak in bluetooth UUID removal, from Johan Hedberg.

5) The shift mask trick doesn't work properly when 'optname' is out of
range in do_ip_setsockopt(). Use a straightforward switch statement
instead; the compiler emits essentially the same code, and out-of-range
values are no longer mishandled. From Xi Wang. (A short sketch of the
problem follows this list.)

6) Fix when we call tcp_replace_ts_recent(), otherwise we can
erroneously accept a too-high tsval. From Eric Dumazet.

7) VXLAN bug fixes, mostly to do with VLAN header length handling, from
Alexander Duyck.

8) Missing return value initialization for IPV6_MINHOPCOUNT socket
option handling. From Hannes Frederic.

9) Fix regression in tasklet handling in jme/ksz884x/xilinx drivers,
from Xiaotian Feng.

10) At smsc911x driver init time, we don't know whether the chip is in
word swap mode or not. However, we do need to wait for the control
register's ready bit to be set before we program any other part of
the chip. Adjust the wait loop to account for this. From Kamlakant
Patel. (See the halfword-swap note after this list.)

11) Revert erroneous MDIO bus unregister change to mdio-bitbang.c

12) Fix memory leak in /proc/net/sctp/, from Tommi Rantala.

13) tilegx driver registers IRQ with NULL name, oops, from Simon Marchi.

14) The TCP metrics hash table's kzalloc() based allocation can fail;
fall back to vmalloc() if it does. From Eric Dumazet. (The fallback
pattern is sketched after this list.)

15) Fix packet steering out-of-order delivery regression, from Tom
Herbert.
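
A minimal illustration of #5, simplified to two options and not the
actual do_ip_setsockopt() code (on x86 the out-of-range shift typically
wraps, strictly it is undefined behaviour):

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <netinet/in.h>		/* IP_PKTINFO, IP_TOS */

/* Buggy variant: for optname >= 32 the shift is undefined, so e.g.
 * optname == 33 can behave like optname == 1 and falsely match. */
static bool takes_int_old(int optname)
{
	return (1 << optname) & ((1 << IP_PKTINFO) | (1 << IP_TOS));
}

/* Fixed variant: the switch only matches the listed values; the
 * compiler still emits essentially the same bitmask test underneath. */
static bool takes_int_new(int optname)
{
	switch (optname) {
	case IP_PKTINFO:
	case IP_TOS:
		return true;
	default:
		return false;
	}
}

int main(int argc, char **argv)
{
	int opt = argc > 1 ? atoi(argv[1]) : 33;

	printf("old=%d new=%d\n", takes_int_old(opt), takes_int_new(opt));
	return 0;
}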
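
On #10, the trick is that swahw32() swaps the two 16-bit halves of a
32-bit word, so ORing the normal and swapped READY masks matches the
bit wherever the not-yet-word-swapped chip puts it. A self-contained
illustration (swahw32() is reimplemented here for userspace; the real
one lives in the kernel's swab headers, and the READY value below is
just illustrative):

#include <stdint.h>
#include <stdio.h>

/* Userspace stand-in for the kernel's swahw32(): swap the two 16-bit
 * halfwords of a 32-bit value. */
static uint32_t swahw32(uint32_t x)
{
	return (x << 16) | (x >> 16);
}

int main(void)
{
	uint32_t ready = 0x00000001;		  /* READY bit, normal layout */
	uint32_t mask  = ready | swahw32(ready); /* 0x00010001: either order */

	printf("wait mask = 0x%08x\n", mask);
	return 0;
}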
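
And for #14, the fallback is the familiar "kzalloc with __GFP_NOWARN,
then vzalloc" pattern for possibly large tables; a kernel-style sketch
(the helper names are made up for illustration, the calls themselves
are the stock kernel APIs):

#include <linux/mm.h>		/* is_vmalloc_addr() */
#include <linux/slab.h>		/* kzalloc(), kfree() */
#include <linux/vmalloc.h>	/* vzalloc(), vfree() */

/* Try a physically contiguous allocation first, suppressing the usual
 * failure warning, then fall back to vmalloc space; free with whichever
 * routine matches how the memory was obtained. */
static void *big_table_alloc(size_t size)
{
	void *p = kzalloc(size, GFP_KERNEL | __GFP_NOWARN);

	if (!p)
		p = vzalloc(size);
	return p;
}

static void big_table_free(void *p)
{
	if (is_vmalloc_addr(p))
		vfree(p);
	else
		kfree(p);
}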

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (40 commits)
net-rps: Fix brokeness causing OOO packets
tcp: handle tcp_net_metrics_init() order-5 memory allocation failures
batman-adv: process broadcast packets in BLA earlier
batman-adv: don't add TEMP clients belonging to other backbone nodes
batman-adv: correctly pass the client flag on tt_response
batman-adv: fix tt_global_entries flags update
tilegx: request_irq with a non-null device name
net: correct check in dev_addr_del()
tcp: fix retransmission in repair mode
sctp: fix /proc/net/sctp/ memory leak
Revert "drivers/net/phy/mdio-bitbang.c: Call mdiobus_unregister before mdiobus_free"
net/smsc911x: Fix ready check in cases where WORD_SWAP is needed
drivers/net: fix tasklet misuse issue
ipv4/ip_vti.c: VTI fix post-decryption forwarding
brcmfmac: fix typo in CONFIG_BRCMISCAN
vxlan: Update hard_header_len based on lowerdev when instantiating VXLAN
vxlan: fix a typo.
ipv6: setsockopt(IPIPPROTO_IPV6, IPV6_MINHOPCOUNT) forgot to set return value
doc/net: Fix typo in netdev-features.txt
vxlan: Fix error that was resulting in VXLAN MTU size being 10 bytes too large
...

+1 -1
Documentation/networking/netdev-features.txt
··· 164 164 This requests that the NIC receive all possible frames, including errored 165 165 frames (such as bad FCS, etc). This can be helpful when sniffing a link with 166 166 bad packets on it. Some NICs may receive more packets if also put into normal 167 - PROMISC mdoe. 167 + PROMISC mode.
+1
drivers/bluetooth/ath3k.c
··· 67 67 { USB_DEVICE(0x13d3, 0x3304) }, 68 68 { USB_DEVICE(0x0930, 0x0215) }, 69 69 { USB_DEVICE(0x0489, 0xE03D) }, 70 + { USB_DEVICE(0x0489, 0xE027) }, 70 71 71 72 /* Atheros AR9285 Malbec with sflash firmware */ 72 73 { USB_DEVICE(0x03F0, 0x311D) },
+1
drivers/bluetooth/btusb.c
··· 124 124 { USB_DEVICE(0x13d3, 0x3304), .driver_info = BTUSB_IGNORE }, 125 125 { USB_DEVICE(0x0930, 0x0215), .driver_info = BTUSB_IGNORE }, 126 126 { USB_DEVICE(0x0489, 0xe03d), .driver_info = BTUSB_IGNORE }, 127 + { USB_DEVICE(0x0489, 0xe027), .driver_info = BTUSB_IGNORE }, 127 128 128 129 /* Atheros AR9285 Malbec with sflash firmware */ 129 130 { USB_DEVICE(0x03f0, 0x311d), .driver_info = BTUSB_IGNORE },
+8 -20
drivers/net/ethernet/jme.c
··· 1860 1860 jme_clear_pm(jme); 1861 1861 JME_NAPI_ENABLE(jme); 1862 1862 1863 - tasklet_enable(&jme->linkch_task); 1864 - tasklet_enable(&jme->txclean_task); 1865 - tasklet_hi_enable(&jme->rxclean_task); 1866 - tasklet_hi_enable(&jme->rxempty_task); 1863 + tasklet_init(&jme->linkch_task, jme_link_change_tasklet, 1864 + (unsigned long) jme); 1865 + tasklet_init(&jme->txclean_task, jme_tx_clean_tasklet, 1866 + (unsigned long) jme); 1867 + tasklet_init(&jme->rxclean_task, jme_rx_clean_tasklet, 1868 + (unsigned long) jme); 1869 + tasklet_init(&jme->rxempty_task, jme_rx_empty_tasklet, 1870 + (unsigned long) jme); 1867 1871 1868 1872 rc = jme_request_irq(jme); 1869 1873 if (rc) ··· 3083 3079 tasklet_init(&jme->pcc_task, 3084 3080 jme_pcc_tasklet, 3085 3081 (unsigned long) jme); 3086 - tasklet_init(&jme->linkch_task, 3087 - jme_link_change_tasklet, 3088 - (unsigned long) jme); 3089 - tasklet_init(&jme->txclean_task, 3090 - jme_tx_clean_tasklet, 3091 - (unsigned long) jme); 3092 - tasklet_init(&jme->rxclean_task, 3093 - jme_rx_clean_tasklet, 3094 - (unsigned long) jme); 3095 - tasklet_init(&jme->rxempty_task, 3096 - jme_rx_empty_tasklet, 3097 - (unsigned long) jme); 3098 - tasklet_disable_nosync(&jme->linkch_task); 3099 - tasklet_disable_nosync(&jme->txclean_task); 3100 - tasklet_disable_nosync(&jme->rxclean_task); 3101 - tasklet_disable_nosync(&jme->rxempty_task); 3102 3082 jme->dpi.cur = PCC_P1; 3103 3083 3104 3084 jme->reg_ghc = 0;
+4 -12
drivers/net/ethernet/micrel/ksz884x.c
··· 5459 5459 rc = request_irq(dev->irq, netdev_intr, IRQF_SHARED, dev->name, dev); 5460 5460 if (rc) 5461 5461 return rc; 5462 - tasklet_enable(&hw_priv->rx_tasklet); 5463 - tasklet_enable(&hw_priv->tx_tasklet); 5462 + tasklet_init(&hw_priv->rx_tasklet, rx_proc_task, 5463 + (unsigned long) hw_priv); 5464 + tasklet_init(&hw_priv->tx_tasklet, tx_proc_task, 5465 + (unsigned long) hw_priv); 5464 5466 5465 5467 hw->promiscuous = 0; 5466 5468 hw->all_multi = 0; ··· 7034 7032 7035 7033 spin_lock_init(&hw_priv->hwlock); 7036 7034 mutex_init(&hw_priv->lock); 7037 - 7038 - /* tasklet is enabled. */ 7039 - tasklet_init(&hw_priv->rx_tasklet, rx_proc_task, 7040 - (unsigned long) hw_priv); 7041 - tasklet_init(&hw_priv->tx_tasklet, tx_proc_task, 7042 - (unsigned long) hw_priv); 7043 - 7044 - /* tasklet_enable will decrement the atomic counter. */ 7045 - tasklet_disable(&hw_priv->rx_tasklet); 7046 - tasklet_disable(&hw_priv->tx_tasklet); 7047 7035 7048 7036 for (i = 0; i < TOTAL_PORT_NUM; i++) 7049 7037 init_waitqueue_head(&hw_priv->counter[i].counter);
+15 -2
drivers/net/ethernet/smsc/smsc911x.c
··· 2110 2110 static int __devinit smsc911x_init(struct net_device *dev) 2111 2111 { 2112 2112 struct smsc911x_data *pdata = netdev_priv(dev); 2113 - unsigned int byte_test; 2113 + unsigned int byte_test, mask; 2114 2114 unsigned int to = 100; 2115 2115 2116 2116 SMSC_TRACE(pdata, probe, "Driver Parameters:"); ··· 2130 2130 /* 2131 2131 * poll the READY bit in PMT_CTRL. Any other access to the device is 2132 2132 * forbidden while this bit isn't set. Try for 100ms 2133 + * 2134 + * Note that this test is done before the WORD_SWAP register is 2135 + * programmed. So in some configurations the READY bit is at 16 before 2136 + * WORD_SWAP is written to. This issue is worked around by waiting 2137 + * until either bit 0 or bit 16 gets set in PMT_CTRL. 2138 + * 2139 + * SMSC has confirmed that checking bit 16 (marked as reserved in 2140 + * the datasheet) is fine since these bits "will either never be set 2141 + * or can only go high after READY does (so also indicate the device 2142 + * is ready)". 2133 2143 */ 2134 - while (!(smsc911x_reg_read(pdata, PMT_CTRL) & PMT_CTRL_READY_) && --to) 2144 + 2145 + mask = PMT_CTRL_READY_ | swahw32(PMT_CTRL_READY_); 2146 + while (!(smsc911x_reg_read(pdata, PMT_CTRL) & mask) && --to) 2135 2147 udelay(1000); 2148 + 2136 2149 if (to == 0) { 2137 2150 pr_err("Device not READY in 100ms aborting\n"); 2138 2151 return -ENODEV;
+1 -1
drivers/net/ethernet/tile/tilegx.c
··· 917 917 ingress_irq = rc; 918 918 tile_irq_activate(ingress_irq, TILE_IRQ_PERCPU); 919 919 rc = request_irq(ingress_irq, tile_net_handle_ingress_irq, 920 - 0, NULL, NULL); 920 + 0, "tile_net", NULL); 921 921 if (rc != 0) { 922 922 netdev_err(dev, "request_irq failed: %d\n", rc); 923 923 destroy_irq(ingress_irq);
+6 -6
drivers/net/ethernet/xilinx/xilinx_axienet_main.c
··· 942 942 phy_start(lp->phy_dev); 943 943 } 944 944 945 + /* Enable tasklets for Axi DMA error handling */ 946 + tasklet_init(&lp->dma_err_tasklet, axienet_dma_err_handler, 947 + (unsigned long) lp); 948 + 945 949 /* Enable interrupts for Axi DMA Tx */ 946 950 ret = request_irq(lp->tx_irq, axienet_tx_irq, 0, ndev->name, ndev); 947 951 if (ret) ··· 954 950 ret = request_irq(lp->rx_irq, axienet_rx_irq, 0, ndev->name, ndev); 955 951 if (ret) 956 952 goto err_rx_irq; 957 - /* Enable tasklets for Axi DMA error handling */ 958 - tasklet_enable(&lp->dma_err_tasklet); 953 + 959 954 return 0; 960 955 961 956 err_rx_irq: ··· 963 960 if (lp->phy_dev) 964 961 phy_disconnect(lp->phy_dev); 965 962 lp->phy_dev = NULL; 963 + tasklet_kill(&lp->dma_err_tasklet); 966 964 dev_err(lp->dev, "request_irq() failed\n"); 967 965 return ret; 968 966 } ··· 1616 1612 dev_err(lp->dev, "register_netdev() error (%i)\n", ret); 1617 1613 goto err_iounmap_2; 1618 1614 } 1619 - 1620 - tasklet_init(&lp->dma_err_tasklet, axienet_dma_err_handler, 1621 - (unsigned long) lp); 1622 - tasklet_disable(&lp->dma_err_tasklet); 1623 1615 1624 1616 return 0; 1625 1617
-1
drivers/net/phy/mdio-bitbang.c
··· 234 234 struct mdiobb_ctrl *ctrl = bus->priv; 235 235 236 236 module_put(ctrl->ops->owner); 237 - mdiobus_unregister(bus); 238 237 mdiobus_free(bus); 239 238 } 240 239 EXPORT_SYMBOL(free_mdio_bitbang);
+18 -4
drivers/net/usb/cdc_ncm.c
··· 540 540 (ctx->ether_desc == NULL) || (ctx->control != intf)) 541 541 goto error; 542 542 543 - /* claim interfaces, if any */ 544 - temp = usb_driver_claim_interface(driver, ctx->data, dev); 545 - if (temp) 546 - goto error; 543 + /* claim data interface, if different from control */ 544 + if (ctx->data != ctx->control) { 545 + temp = usb_driver_claim_interface(driver, ctx->data, dev); 546 + if (temp) 547 + goto error; 548 + } 547 549 548 550 iface_no = ctx->data->cur_altsetting->desc.bInterfaceNumber; 549 551 ··· 624 622 hrtimer_cancel(&ctx->tx_timer); 625 623 626 624 tasklet_kill(&ctx->bh); 625 + 626 + /* handle devices with combined control and data interface */ 627 + if (ctx->control == ctx->data) 628 + ctx->data = NULL; 627 629 628 630 /* disconnect master --> disconnect slave */ 629 631 if (intf == ctx->control && ctx->data) { ··· 1249 1243 .bInterfaceSubClass = USB_CDC_SUBCLASS_NCM, 1250 1244 .bInterfaceProtocol = USB_CDC_PROTO_NONE, 1251 1245 .driver_info = (unsigned long) &wwan_info, 1246 + }, 1247 + 1248 + /* Huawei NCM devices disguised as vendor specific */ 1249 + { USB_VENDOR_AND_INTERFACE_INFO(0x12d1, 0xff, 0x02, 0x16), 1250 + .driver_info = (unsigned long)&wwan_info, 1251 + }, 1252 + { USB_VENDOR_AND_INTERFACE_INFO(0x12d1, 0xff, 0x02, 0x46), 1253 + .driver_info = (unsigned long)&wwan_info, 1252 1254 }, 1253 1255 1254 1256 /* Generic CDC-NCM devices */
+2 -2
drivers/net/usb/smsc95xx.c
··· 184 184 /* set the address, index & direction (read from PHY) */ 185 185 phy_id &= dev->mii.phy_id_mask; 186 186 idx &= dev->mii.reg_num_mask; 187 - addr = (phy_id << 11) | (idx << 6) | MII_READ_; 187 + addr = (phy_id << 11) | (idx << 6) | MII_READ_ | MII_BUSY_; 188 188 ret = smsc95xx_write_reg(dev, MII_ADDR, addr); 189 189 check_warn_goto_done(ret, "Error writing MII_ADDR"); 190 190 ··· 221 221 /* set the address, index & direction (write to PHY) */ 222 222 phy_id &= dev->mii.phy_id_mask; 223 223 idx &= dev->mii.reg_num_mask; 224 - addr = (phy_id << 11) | (idx << 6) | MII_WRITE_; 224 + addr = (phy_id << 11) | (idx << 6) | MII_WRITE_ | MII_BUSY_; 225 225 ret = smsc95xx_write_reg(dev, MII_ADDR, addr); 226 226 check_warn_goto_done(ret, "Error writing MII_ADDR"); 227 227
+7 -3
drivers/net/vxlan.c
··· 1 1 /* 2 - * VXLAN: Virtual eXtensiable Local Area Network 2 + * VXLAN: Virtual eXtensible Local Area Network 3 3 * 4 4 * Copyright (c) 2012 Vyatta Inc. 5 5 * ··· 50 50 51 51 #define VXLAN_N_VID (1u << 24) 52 52 #define VXLAN_VID_MASK (VXLAN_N_VID - 1) 53 - /* VLAN + IP header + UDP + VXLAN */ 54 - #define VXLAN_HEADROOM (4 + 20 + 8 + 8) 53 + /* IP header + UDP + VXLAN + Ethernet header */ 54 + #define VXLAN_HEADROOM (20 + 8 + 8 + 14) 55 55 56 56 #define VXLAN_FLAGS 0x08000000 /* struct vxlanhdr.vx_flags required value. */ 57 57 ··· 1102 1102 1103 1103 if (!tb[IFLA_MTU]) 1104 1104 dev->mtu = lowerdev->mtu - VXLAN_HEADROOM; 1105 + 1106 + /* update header length based on lower device */ 1107 + dev->hard_header_len = lowerdev->hard_header_len + 1108 + VXLAN_HEADROOM; 1105 1109 } 1106 1110 1107 1111 if (data[IFLA_VXLAN_TOS])
+1 -1
drivers/net/wireless/brcm80211/brcmfmac/wl_cfg80211.c
··· 4401 4401 4402 4402 static void brcmf_wiphy_pno_params(struct wiphy *wiphy) 4403 4403 { 4404 - #ifndef CONFIG_BRCMFISCAN 4404 + #ifndef CONFIG_BRCMISCAN 4405 4405 /* scheduled scan settings */ 4406 4406 wiphy->max_sched_scan_ssids = BRCMF_PNO_MAX_PFN_COUNT; 4407 4407 wiphy->max_match_sets = BRCMF_PNO_MAX_PFN_COUNT;
+1 -1
drivers/net/wireless/iwlwifi/dvm/mac80211.c
··· 521 521 ieee80211_get_tx_rate(hw, IEEE80211_SKB_CB(skb))->bitrate); 522 522 523 523 if (iwlagn_tx_skb(priv, control->sta, skb)) 524 - dev_kfree_skb_any(skb); 524 + ieee80211_free_txskb(hw, skb); 525 525 } 526 526 527 527 static void iwlagn_mac_update_tkip_key(struct ieee80211_hw *hw,
+1 -1
drivers/net/wireless/iwlwifi/dvm/main.c
··· 2114 2114 2115 2115 info = IEEE80211_SKB_CB(skb); 2116 2116 iwl_trans_free_tx_cmd(priv->trans, info->driver_data[1]); 2117 - dev_kfree_skb_any(skb); 2117 + ieee80211_free_txskb(priv->hw, skb); 2118 2118 } 2119 2119 2120 2120 static void iwl_set_hw_rfkill_state(struct iwl_op_mode *op_mode, bool state)
+21 -2
drivers/net/wireless/iwlwifi/pcie/rx.c
··· 321 321 dma_map_page(trans->dev, page, 0, 322 322 PAGE_SIZE << trans_pcie->rx_page_order, 323 323 DMA_FROM_DEVICE); 324 + if (dma_mapping_error(trans->dev, rxb->page_dma)) { 325 + rxb->page = NULL; 326 + spin_lock_irqsave(&rxq->lock, flags); 327 + list_add(&rxb->list, &rxq->rx_used); 328 + spin_unlock_irqrestore(&rxq->lock, flags); 329 + __free_pages(page, trans_pcie->rx_page_order); 330 + return; 331 + } 324 332 /* dma address must be no more than 36 bits */ 325 333 BUG_ON(rxb->page_dma & ~DMA_BIT_MASK(36)); 326 334 /* and also 256 byte aligned! */ ··· 496 488 dma_map_page(trans->dev, rxb->page, 0, 497 489 PAGE_SIZE << trans_pcie->rx_page_order, 498 490 DMA_FROM_DEVICE); 499 - list_add_tail(&rxb->list, &rxq->rx_free); 500 - rxq->free_count++; 491 + if (dma_mapping_error(trans->dev, rxb->page_dma)) { 492 + /* 493 + * free the page(s) as well to not break 494 + * the invariant that the items on the used 495 + * list have no page(s) 496 + */ 497 + __free_pages(rxb->page, trans_pcie->rx_page_order); 498 + rxb->page = NULL; 499 + list_add_tail(&rxb->list, &rxq->rx_used); 500 + } else { 501 + list_add_tail(&rxb->list, &rxq->rx_free); 502 + rxq->free_count++; 503 + } 501 504 } else 502 505 list_add_tail(&rxb->list, &rxq->rx_used); 503 506 spin_unlock_irqrestore(&rxq->lock, flags);
+22 -2
drivers/s390/net/qeth_core_main.c
··· 2942 2942 QETH_DBF_TEXT(SETUP, 2, "qipasscb"); 2943 2943 2944 2944 cmd = (struct qeth_ipa_cmd *) data; 2945 + 2946 + switch (cmd->hdr.return_code) { 2947 + case IPA_RC_NOTSUPP: 2948 + case IPA_RC_L2_UNSUPPORTED_CMD: 2949 + QETH_DBF_TEXT(SETUP, 2, "ipaunsup"); 2950 + card->options.ipa4.supported_funcs |= IPA_SETADAPTERPARMS; 2951 + card->options.ipa6.supported_funcs |= IPA_SETADAPTERPARMS; 2952 + return -0; 2953 + default: 2954 + if (cmd->hdr.return_code) { 2955 + QETH_DBF_MESSAGE(1, "%s IPA_CMD_QIPASSIST: Unhandled " 2956 + "rc=%d\n", 2957 + dev_name(&card->gdev->dev), 2958 + cmd->hdr.return_code); 2959 + return 0; 2960 + } 2961 + } 2962 + 2945 2963 if (cmd->hdr.prot_version == QETH_PROT_IPV4) { 2946 2964 card->options.ipa4.supported_funcs = cmd->hdr.ipa_supported; 2947 2965 card->options.ipa4.enabled_funcs = cmd->hdr.ipa_enabled; 2948 - } else { 2966 + } else if (cmd->hdr.prot_version == QETH_PROT_IPV6) { 2949 2967 card->options.ipa6.supported_funcs = cmd->hdr.ipa_supported; 2950 2968 card->options.ipa6.enabled_funcs = cmd->hdr.ipa_enabled; 2951 - } 2969 + } else 2970 + QETH_DBF_MESSAGE(1, "%s IPA_CMD_QIPASSIST: Flawed LIC detected" 2971 + "\n", dev_name(&card->gdev->dev)); 2952 2972 QETH_DBF_TEXT(SETUP, 2, "suppenbl"); 2953 2973 QETH_DBF_TEXT_(SETUP, 2, "%08x", (__u32)cmd->hdr.ipa_supported); 2954 2974 QETH_DBF_TEXT_(SETUP, 2, "%08x", (__u32)cmd->hdr.ipa_enabled);
+8 -5
drivers/s390/net/qeth_l2_main.c
··· 626 626 QETH_DBF_TEXT(SETUP, 2, "doL2init"); 627 627 QETH_DBF_TEXT_(SETUP, 2, "doL2%s", CARD_BUS_ID(card)); 628 628 629 - rc = qeth_query_setadapterparms(card); 630 - if (rc) { 631 - QETH_DBF_MESSAGE(2, "could not query adapter parameters on " 632 - "device %s: x%x\n", CARD_BUS_ID(card), rc); 629 + if (qeth_is_supported(card, IPA_SETADAPTERPARMS)) { 630 + rc = qeth_query_setadapterparms(card); 631 + if (rc) { 632 + QETH_DBF_MESSAGE(2, "could not query adapter " 633 + "parameters on device %s: x%x\n", 634 + CARD_BUS_ID(card), rc); 635 + } 633 636 } 634 637 635 638 if (card->info.type == QETH_CARD_TYPE_IQD || ··· 679 676 return -ERESTARTSYS; 680 677 } 681 678 rc = qeth_l2_send_delmac(card, &card->dev->dev_addr[0]); 682 - if (!rc) 679 + if (!rc || (rc == IPA_RC_L2_MAC_NOT_FOUND)) 683 680 rc = qeth_l2_send_setmac(card, addr->sa_data); 684 681 return rc ? -EINVAL : 0; 685 682 }
+6 -6
net/batman-adv/soft-interface.c
··· 325 325 326 326 soft_iface->last_rx = jiffies; 327 327 328 + /* Let the bridge loop avoidance check the packet. If will 329 + * not handle it, we can safely push it up. 330 + */ 331 + if (batadv_bla_rx(bat_priv, skb, vid, is_bcast)) 332 + goto out; 333 + 328 334 if (orig_node) 329 335 batadv_tt_add_temporary_global_entry(bat_priv, orig_node, 330 336 ethhdr->h_source); 331 337 332 338 if (batadv_is_ap_isolated(bat_priv, ethhdr->h_source, ethhdr->h_dest)) 333 339 goto dropped; 334 - 335 - /* Let the bridge loop avoidance check the packet. If will 336 - * not handle it, we can safely push it up. 337 - */ 338 - if (batadv_bla_rx(bat_priv, skb, vid, is_bcast)) 339 - goto out; 340 340 341 341 netif_rx(skb); 342 342 goto out;
+14 -1
net/batman-adv/translation-table.c
··· 769 769 */ 770 770 tt_global_entry->common.flags &= ~BATADV_TT_CLIENT_TEMP; 771 771 772 + /* the change can carry possible "attribute" flags like the 773 + * TT_CLIENT_WIFI, therefore they have to be copied in the 774 + * client entry 775 + */ 776 + tt_global_entry->common.flags |= flags; 777 + 772 778 /* If there is the BATADV_TT_CLIENT_ROAM flag set, there is only 773 779 * one originator left in the list and we previously received a 774 780 * delete + roaming change for this originator. ··· 1502 1496 1503 1497 memcpy(tt_change->addr, tt_common_entry->addr, 1504 1498 ETH_ALEN); 1505 - tt_change->flags = BATADV_NO_FLAGS; 1499 + tt_change->flags = tt_common_entry->flags; 1506 1500 1507 1501 tt_count++; 1508 1502 tt_change++; ··· 2455 2449 const unsigned char *addr) 2456 2450 { 2457 2451 bool ret = false; 2452 + 2453 + /* if the originator is a backbone node (meaning it belongs to the same 2454 + * LAN of this node) the temporary client must not be added because to 2455 + * reach such destination the node must use the LAN instead of the mesh 2456 + */ 2457 + if (batadv_bla_is_backbone_gw_orig(bat_priv, orig_node->orig)) 2458 + goto out; 2458 2459 2459 2460 if (!batadv_tt_global_add(bat_priv, orig_node, addr, 2460 2461 BATADV_TT_CLIENT_TEMP,
+2 -2
net/bluetooth/hci_core.c
··· 1754 1754 if (hdev->dev_type != HCI_AMP) 1755 1755 set_bit(HCI_AUTO_OFF, &hdev->dev_flags); 1756 1756 1757 - schedule_work(&hdev->power_on); 1758 - 1759 1757 hci_notify(hdev, HCI_DEV_REG); 1760 1758 hci_dev_hold(hdev); 1759 + 1760 + schedule_work(&hdev->power_on); 1761 1761 1762 1762 return id; 1763 1763
+7 -5
net/bluetooth/mgmt.c
··· 326 326 struct hci_dev *d; 327 327 size_t rp_len; 328 328 u16 count; 329 - int i, err; 329 + int err; 330 330 331 331 BT_DBG("sock %p", sk); 332 332 ··· 347 347 return -ENOMEM; 348 348 } 349 349 350 - rp->num_controllers = cpu_to_le16(count); 351 - 352 - i = 0; 350 + count = 0; 353 351 list_for_each_entry(d, &hci_dev_list, list) { 354 352 if (test_bit(HCI_SETUP, &d->dev_flags)) 355 353 continue; ··· 355 357 if (!mgmt_valid_hdev(d)) 356 358 continue; 357 359 358 - rp->index[i++] = cpu_to_le16(d->id); 360 + rp->index[count++] = cpu_to_le16(d->id); 359 361 BT_DBG("Added hci%u", d->id); 360 362 } 363 + 364 + rp->num_controllers = cpu_to_le16(count); 365 + rp_len = sizeof(*rp) + (2 * count); 361 366 362 367 read_unlock(&hci_dev_list_lock); 363 368 ··· 1367 1366 continue; 1368 1367 1369 1368 list_del(&match->list); 1369 + kfree(match); 1370 1370 found++; 1371 1371 } 1372 1372
+1 -1
net/bluetooth/smp.c
··· 267 267 268 268 clear_bit(HCI_CONN_ENCRYPT_PEND, &conn->hcon->flags); 269 269 mgmt_auth_failed(conn->hcon->hdev, conn->dst, hcon->type, 270 - hcon->dst_type, reason); 270 + hcon->dst_type, HCI_ERROR_AUTH_FAILURE); 271 271 272 272 cancel_delayed_work_sync(&conn->security_timer); 273 273
+3 -1
net/core/dev.c
··· 2818 2818 if (unlikely(tcpu != next_cpu) && 2819 2819 (tcpu == RPS_NO_CPU || !cpu_online(tcpu) || 2820 2820 ((int)(per_cpu(softnet_data, tcpu).input_queue_head - 2821 - rflow->last_qtail)) >= 0)) 2821 + rflow->last_qtail)) >= 0)) { 2822 + tcpu = next_cpu; 2822 2823 rflow = set_rps_cpu(dev, skb, rflow, next_cpu); 2824 + } 2823 2825 2824 2826 if (tcpu != RPS_NO_CPU && cpu_online(tcpu)) { 2825 2827 *rflowp = rflow;
+2 -1
net/core/dev_addr_lists.c
··· 319 319 */ 320 320 ha = list_first_entry(&dev->dev_addrs.list, 321 321 struct netdev_hw_addr, list); 322 - if (ha->addr == dev->dev_addr && ha->refcount == 1) 322 + if (!memcmp(ha->addr, addr, dev->addr_len) && 323 + ha->type == addr_type && ha->refcount == 1) 323 324 return -ENOENT; 324 325 325 326 err = __hw_addr_del(&dev->dev_addrs, addr, dev->addr_len,
+22 -13
net/ipv4/ip_sockglue.c
··· 457 457 struct inet_sock *inet = inet_sk(sk); 458 458 int val = 0, err; 459 459 460 - if (((1<<optname) & ((1<<IP_PKTINFO) | (1<<IP_RECVTTL) | 461 - (1<<IP_RECVOPTS) | (1<<IP_RECVTOS) | 462 - (1<<IP_RETOPTS) | (1<<IP_TOS) | 463 - (1<<IP_TTL) | (1<<IP_HDRINCL) | 464 - (1<<IP_MTU_DISCOVER) | (1<<IP_RECVERR) | 465 - (1<<IP_ROUTER_ALERT) | (1<<IP_FREEBIND) | 466 - (1<<IP_PASSSEC) | (1<<IP_TRANSPARENT) | 467 - (1<<IP_MINTTL) | (1<<IP_NODEFRAG))) || 468 - optname == IP_UNICAST_IF || 469 - optname == IP_MULTICAST_TTL || 470 - optname == IP_MULTICAST_ALL || 471 - optname == IP_MULTICAST_LOOP || 472 - optname == IP_RECVORIGDSTADDR) { 460 + switch (optname) { 461 + case IP_PKTINFO: 462 + case IP_RECVTTL: 463 + case IP_RECVOPTS: 464 + case IP_RECVTOS: 465 + case IP_RETOPTS: 466 + case IP_TOS: 467 + case IP_TTL: 468 + case IP_HDRINCL: 469 + case IP_MTU_DISCOVER: 470 + case IP_RECVERR: 471 + case IP_ROUTER_ALERT: 472 + case IP_FREEBIND: 473 + case IP_PASSSEC: 474 + case IP_TRANSPARENT: 475 + case IP_MINTTL: 476 + case IP_NODEFRAG: 477 + case IP_UNICAST_IF: 478 + case IP_MULTICAST_TTL: 479 + case IP_MULTICAST_ALL: 480 + case IP_MULTICAST_LOOP: 481 + case IP_RECVORIGDSTADDR: 473 482 if (optlen >= sizeof(int)) { 474 483 if (get_user(val, (int __user *) optval)) 475 484 return -EFAULT;
+5
net/ipv4/ip_vti.c
··· 338 338 if (tunnel != NULL) { 339 339 struct pcpu_tstats *tstats; 340 340 341 + if (!xfrm4_policy_check(NULL, XFRM_POLICY_IN, skb)) 342 + return -1; 343 + 341 344 tstats = this_cpu_ptr(tunnel->dev->tstats); 342 345 u64_stats_update_begin(&tstats->syncp); 343 346 tstats->rx_packets++; 344 347 tstats->rx_bytes += skb->len; 345 348 u64_stats_update_end(&tstats->syncp); 346 349 350 + skb->mark = 0; 351 + secpath_reset(skb); 347 352 skb->dev = tunnel->dev; 348 353 return 1; 349 354 }
+2 -2
net/ipv4/tcp.c
··· 1212 1212 wait_for_sndbuf: 1213 1213 set_bit(SOCK_NOSPACE, &sk->sk_socket->flags); 1214 1214 wait_for_memory: 1215 - if (copied && likely(!tp->repair)) 1215 + if (copied) 1216 1216 tcp_push(sk, flags & ~MSG_MORE, mss_now, TCP_NAGLE_PUSH); 1217 1217 1218 1218 if ((err = sk_stream_wait_memory(sk, &timeo)) != 0) ··· 1223 1223 } 1224 1224 1225 1225 out: 1226 - if (copied && likely(!tp->repair)) 1226 + if (copied) 1227 1227 tcp_push(sk, flags, mss_now, tp->nonagle); 1228 1228 release_sock(sk); 1229 1229 return copied + copied_syn;
+10 -5
net/ipv4/tcp_input.c
··· 5313 5313 goto discard; 5314 5314 } 5315 5315 5316 - /* ts_recent update must be made after we are sure that the packet 5317 - * is in window. 5318 - */ 5319 - tcp_replace_ts_recent(tp, TCP_SKB_CB(skb)->seq); 5320 - 5321 5316 /* step 3: check security and precedence [ignored] */ 5322 5317 5323 5318 /* step 4: Check for a SYN ··· 5546 5551 step5: 5547 5552 if (th->ack && tcp_ack(sk, skb, FLAG_SLOWPATH) < 0) 5548 5553 goto discard; 5554 + 5555 + /* ts_recent update must be made after we are sure that the packet 5556 + * is in window. 5557 + */ 5558 + tcp_replace_ts_recent(tp, TCP_SKB_CB(skb)->seq); 5549 5559 5550 5560 tcp_rcv_rtt_measure_ts(sk, skb); 5551 5561 ··· 6129 6129 } 6130 6130 } else 6131 6131 goto discard; 6132 + 6133 + /* ts_recent update must be made after we are sure that the packet 6134 + * is in window. 6135 + */ 6136 + tcp_replace_ts_recent(tp, TCP_SKB_CB(skb)->seq); 6132 6137 6133 6138 /* step 6: check the URG bit */ 6134 6139 tcp_urg(sk, skb, th);
+9 -3
net/ipv4/tcp_metrics.c
··· 1 1 #include <linux/rcupdate.h> 2 2 #include <linux/spinlock.h> 3 3 #include <linux/jiffies.h> 4 - #include <linux/bootmem.h> 5 4 #include <linux/module.h> 6 5 #include <linux/cache.h> 7 6 #include <linux/slab.h> ··· 8 9 #include <linux/tcp.h> 9 10 #include <linux/hash.h> 10 11 #include <linux/tcp_metrics.h> 12 + #include <linux/vmalloc.h> 11 13 12 14 #include <net/inet_connection_sock.h> 13 15 #include <net/net_namespace.h> ··· 1034 1034 net->ipv4.tcp_metrics_hash_log = order_base_2(slots); 1035 1035 size = sizeof(struct tcpm_hash_bucket) << net->ipv4.tcp_metrics_hash_log; 1036 1036 1037 - net->ipv4.tcp_metrics_hash = kzalloc(size, GFP_KERNEL); 1037 + net->ipv4.tcp_metrics_hash = kzalloc(size, GFP_KERNEL | __GFP_NOWARN); 1038 + if (!net->ipv4.tcp_metrics_hash) 1039 + net->ipv4.tcp_metrics_hash = vzalloc(size); 1040 + 1038 1041 if (!net->ipv4.tcp_metrics_hash) 1039 1042 return -ENOMEM; 1040 1043 ··· 1058 1055 tm = next; 1059 1056 } 1060 1057 } 1061 - kfree(net->ipv4.tcp_metrics_hash); 1058 + if (is_vmalloc_addr(net->ipv4.tcp_metrics_hash)) 1059 + vfree(net->ipv4.tcp_metrics_hash); 1060 + else 1061 + kfree(net->ipv4.tcp_metrics_hash); 1062 1062 } 1063 1063 1064 1064 static __net_initdata struct pernet_operations tcp_net_metrics_ops = {
+4
net/ipv4/tcp_output.c
··· 1986 1986 tso_segs = tcp_init_tso_segs(sk, skb, mss_now); 1987 1987 BUG_ON(!tso_segs); 1988 1988 1989 + if (unlikely(tp->repair) && tp->repair_queue == TCP_SEND_QUEUE) 1990 + goto repair; /* Skip network transmission */ 1991 + 1989 1992 cwnd_quota = tcp_cwnd_test(tp, skb); 1990 1993 if (!cwnd_quota) 1991 1994 break; ··· 2029 2026 if (unlikely(tcp_transmit_skb(sk, skb, 1, gfp))) 2030 2027 break; 2031 2028 2029 + repair: 2032 2030 /* Advance the send_head. This one is sent out. 2033 2031 * This call will increment packets_out. 2034 2032 */
+1
net/ipv6/ipv6_sockglue.c
··· 827 827 if (val < 0 || val > 255) 828 828 goto e_inval; 829 829 np->min_hopcount = val; 830 + retv = 0; 830 831 break; 831 832 case IPV6_DONTFRAG: 832 833 np->dontfrag = valbool;
+3
net/mac80211/cfg.c
··· 2594 2594 else 2595 2595 local->probe_req_reg--; 2596 2596 2597 + if (!local->open_count) 2598 + break; 2599 + 2597 2600 ieee80211_queue_work(&local->hw, &local->reconfig_filter); 2598 2601 break; 2599 2602 default:
+2
net/mac80211/ieee80211_i.h
··· 1314 1314 struct net_device *dev); 1315 1315 netdev_tx_t ieee80211_subif_start_xmit(struct sk_buff *skb, 1316 1316 struct net_device *dev); 1317 + void ieee80211_purge_tx_queue(struct ieee80211_hw *hw, 1318 + struct sk_buff_head *skbs); 1317 1319 1318 1320 /* HT */ 1319 1321 void ieee80211_apply_htcap_overrides(struct ieee80211_sub_if_data *sdata,
+4 -2
net/mac80211/main.c
··· 871 871 local->hw.wiphy->cipher_suites, 872 872 sizeof(u32) * local->hw.wiphy->n_cipher_suites, 873 873 GFP_KERNEL); 874 - if (!suites) 875 - return -ENOMEM; 874 + if (!suites) { 875 + result = -ENOMEM; 876 + goto fail_wiphy_register; 877 + } 876 878 for (r = 0; r < local->hw.wiphy->n_cipher_suites; r++) { 877 879 u32 suite = local->hw.wiphy->cipher_suites[r]; 878 880 if (suite == WLAN_CIPHER_SUITE_WEP40 ||
+1 -1
net/mac80211/scan.c
··· 917 917 struct cfg80211_sched_scan_request *req) 918 918 { 919 919 struct ieee80211_local *local = sdata->local; 920 - struct ieee80211_sched_scan_ies sched_scan_ies; 920 + struct ieee80211_sched_scan_ies sched_scan_ies = {}; 921 921 int ret, i; 922 922 923 923 mutex_lock(&local->mtx);
+8 -3
net/mac80211/sta_info.c
··· 117 117 118 118 for (ac = 0; ac < IEEE80211_NUM_ACS; ac++) { 119 119 local->total_ps_buffered -= skb_queue_len(&sta->ps_tx_buf[ac]); 120 - __skb_queue_purge(&sta->ps_tx_buf[ac]); 121 - __skb_queue_purge(&sta->tx_filtered[ac]); 120 + ieee80211_purge_tx_queue(&local->hw, &sta->ps_tx_buf[ac]); 121 + ieee80211_purge_tx_queue(&local->hw, &sta->tx_filtered[ac]); 122 122 } 123 123 124 124 #ifdef CONFIG_MAC80211_MESH ··· 141 141 tid_tx = rcu_dereference_raw(sta->ampdu_mlme.tid_tx[i]); 142 142 if (!tid_tx) 143 143 continue; 144 - __skb_queue_purge(&tid_tx->pending); 144 + ieee80211_purge_tx_queue(&local->hw, &tid_tx->pending); 145 145 kfree(tid_tx); 146 146 } 147 147 ··· 961 961 struct ieee80211_local *local = sdata->local; 962 962 struct sk_buff_head pending; 963 963 int filtered = 0, buffered = 0, ac; 964 + unsigned long flags; 964 965 965 966 clear_sta_flag(sta, WLAN_STA_SP); 966 967 ··· 977 976 for (ac = 0; ac < IEEE80211_NUM_ACS; ac++) { 978 977 int count = skb_queue_len(&pending), tmp; 979 978 979 + spin_lock_irqsave(&sta->tx_filtered[ac].lock, flags); 980 980 skb_queue_splice_tail_init(&sta->tx_filtered[ac], &pending); 981 + spin_unlock_irqrestore(&sta->tx_filtered[ac].lock, flags); 981 982 tmp = skb_queue_len(&pending); 982 983 filtered += tmp - count; 983 984 count = tmp; 984 985 986 + spin_lock_irqsave(&sta->ps_tx_buf[ac].lock, flags); 985 987 skb_queue_splice_tail_init(&sta->ps_tx_buf[ac], &pending); 988 + spin_unlock_irqrestore(&sta->ps_tx_buf[ac].lock, flags); 986 989 tmp = skb_queue_len(&pending); 987 990 buffered += tmp - count; 988 991 }
+9
net/mac80211/status.c
··· 668 668 dev_kfree_skb_any(skb); 669 669 } 670 670 EXPORT_SYMBOL(ieee80211_free_txskb); 671 + 672 + void ieee80211_purge_tx_queue(struct ieee80211_hw *hw, 673 + struct sk_buff_head *skbs) 674 + { 675 + struct sk_buff *skb; 676 + 677 + while ((skb = __skb_dequeue(skbs))) 678 + ieee80211_free_txskb(hw, skb); 679 + }
+6 -3
net/mac80211/tx.c
··· 1358 1358 if (tx->skb) 1359 1359 ieee80211_free_txskb(&tx->local->hw, tx->skb); 1360 1360 else 1361 - __skb_queue_purge(&tx->skbs); 1361 + ieee80211_purge_tx_queue(&tx->local->hw, &tx->skbs); 1362 1362 return -1; 1363 1363 } else if (unlikely(res == TX_QUEUED)) { 1364 1364 I802_DEBUG_INC(tx->local->tx_handlers_queued); ··· 2120 2120 */ 2121 2121 void ieee80211_clear_tx_pending(struct ieee80211_local *local) 2122 2122 { 2123 + struct sk_buff *skb; 2123 2124 int i; 2124 2125 2125 - for (i = 0; i < local->hw.queues; i++) 2126 - skb_queue_purge(&local->pending[i]); 2126 + for (i = 0; i < local->hw.queues; i++) { 2127 + while ((skb = skb_dequeue(&local->pending[i])) != NULL) 2128 + ieee80211_free_txskb(&local->hw, skb); 2129 + } 2127 2130 } 2128 2131 2129 2132 /*
+2
net/mac80211/util.c
··· 1491 1491 list_for_each_entry(sdata, &local->interfaces, list) { 1492 1492 if (sdata->vif.type != NL80211_IFTYPE_STATION) 1493 1493 continue; 1494 + if (!sdata->u.mgd.associated) 1495 + continue; 1494 1496 1495 1497 ieee80211_send_nullfunc(local, sdata, 0); 1496 1498 }
+4 -4
net/sctp/proc.c
··· 102 102 .open = sctp_snmp_seq_open, 103 103 .read = seq_read, 104 104 .llseek = seq_lseek, 105 - .release = single_release, 105 + .release = single_release_net, 106 106 }; 107 107 108 108 /* Set up the proc fs entry for 'snmp' object. */ ··· 251 251 .open = sctp_eps_seq_open, 252 252 .read = seq_read, 253 253 .llseek = seq_lseek, 254 - .release = seq_release, 254 + .release = seq_release_net, 255 255 }; 256 256 257 257 /* Set up the proc fs entry for 'eps' object. */ ··· 372 372 .open = sctp_assocs_seq_open, 373 373 .read = seq_read, 374 374 .llseek = seq_lseek, 375 - .release = seq_release, 375 + .release = seq_release_net, 376 376 }; 377 377 378 378 /* Set up the proc fs entry for 'assocs' object. */ ··· 517 517 .open = sctp_remaddr_seq_open, 518 518 .read = seq_read, 519 519 .llseek = seq_lseek, 520 - .release = seq_release, 520 + .release = seq_release_net, 521 521 }; 522 522 523 523 int __net_init sctp_remaddr_proc_init(struct net *net)
+2 -3
net/wireless/reg.c
··· 141 141 .reg_rules = { 142 142 /* IEEE 802.11b/g, channels 1..11 */ 143 143 REG_RULE(2412-10, 2462+10, 40, 6, 20, 0), 144 - /* IEEE 802.11b/g, channels 12..13. No HT40 145 - * channel fits here. */ 146 - REG_RULE(2467-10, 2472+10, 20, 6, 20, 144 + /* IEEE 802.11b/g, channels 12..13. */ 145 + REG_RULE(2467-10, 2472+10, 40, 6, 20, 147 146 NL80211_RRF_PASSIVE_SCAN | 148 147 NL80211_RRF_NO_IBSS), 149 148 /* IEEE 802.11 channel 14 - Only JP enables