Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Pull networking fixes from David Miller:

1) When we run a tap on netlink sockets, we have to copy mmap'd SKBs
instead of cloning them. From Daniel Borkmann.

2) When converting classical BPF into eBPF, fix the setting of the
source reg to BPF_REG_X. From Tycho Andersen.

3) Fix igmpv3/mldv2 report parsing in the bridge multicast code, from
Linus Lüssing.

4) Fix dst refcounting for ipv6 tunnels, from Martin KaFai Lau.

5) Set NLM_F_REPLACE flag properly when replacing ipv6 routes, from
Roopa Prabhu.

6) Add some new cxgb4 PCI device IDs, from Hariprasad Shenai.

7) Fix headroom tests and SKB leaks in ipv6 fragmentation code, from
Florian Westphal.

8) Check DMA mapping errors in bna driver, from Ivan Vecera.

9) Several 8139cp bug fixes (dev_kfree_skb_any in interrupt context,
misclearing of interrupt status in TX timeout handler, etc.) from
David Woodhouse.

10) In tipc, reset SKB header pointer after skb_linearize(), from Erik
Hugne.

11) Fix autobind races et al. in netlink code, from Herbert Xu with
help from Tejun Heo and others.

12) Missing SET_NETDEV_DEV in sunvnet driver, from Sowmini Varadhan.

13) Fix various races in timewait timer and reqsk_queue_hash_req, from
Eric Dumazet.

14) Fix array overruns in mac80211, from Johannes Berg and Dan
Carpenter.

15) Fix data race in rhashtable_rehash_one(), from Dmitry Vyukov.

16) Fix race between poll_one_napi and napi_disable, from Neil Horman.

17) Fix byte order in geneve tunnel port config, from John W Linville.

18) Fix handling of ARP replies over lightweight tunnels, from Jiri
Benc.

19) We can loop when fib rule dumps cross multiple SKBs, fix from Wilson
Kok and Roopa Prabhu.

20) Several reference count handling bug fixes in the PHY/MDIO layer
from Russell King.

21) Fix lockdep splat in ppp_dev_uninit(), from Guillaume Nault.

22) Fix crash in icmp_route_lookup(), from David Ahern.

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (116 commits)
net: Fix panic in icmp_route_lookup
net: update docbook comment for __mdiobus_register()
ppp: fix lockdep splat in ppp_dev_uninit()
net: via/Kconfig: GENERIC_PCI_IOMAP required if PCI not selected
phy: marvell: add link partner advertised modes
net: fix net_device refcounting
phy: add phy_device_remove()
phy: fixed-phy: properly validate phy in fixed_phy_update_state()
net: fix phy refcounting in a bunch of drivers
of_mdio: fix MDIO phy device refcounting
phy: add proper phy struct device refcounting
phy: fix mdiobus module safety
net: dsa: fix of_mdio_find_bus() device refcount leak
phy: fix of_mdio_find_bus() device refcount leak
ip6_tunnel: Reduce log level in ip6_tnl_err() to debug
ip6_gre: Reduce log level in ip6gre_err() to debug
fib_rules: fix fib rule dumps across multiple skbs
bnx2x: byte swap rss_key to comply to Toeplitz specs
net: revert "net_sched: move tp->root allocation into fw_init()"
lwtunnel: remove source and destination UDP port config option
...

+1700 -642
+96
Documentation/networking/vrf.txt
Virtual Routing and Forwarding (VRF)
====================================
The VRF device combined with ip rules provides the ability to create virtual
routing and forwarding domains (aka VRFs, VRF-lite to be specific) in the
Linux network stack. One use case is the multi-tenancy problem where each
tenant has their own unique routing tables and at the very least needs
different default gateways.

Processes can be "VRF aware" by binding a socket to the VRF device. Packets
through the socket then use the routing table associated with the VRF
device. An important feature of the VRF device implementation is that it
impacts only Layer 3 and above, so L2 tools (e.g., LLDP) are not affected
(i.e., they do not need to be run in each VRF). The design also allows
the use of higher priority ip rules (Policy Based Routing, PBR) to take
precedence over the VRF device rules directing specific traffic as desired.

In addition, VRF devices allow VRFs to be nested within namespaces. For
example, network namespaces provide separation of network interfaces at L1
(Layer 1 separation), VLANs on the interfaces within a namespace provide
L2 separation, and then VRF devices provide L3 separation.

Design
------
A VRF device is created with an associated route table. Network interfaces
are then enslaved to a VRF device:

         +-----------------------------+
         |           vrf-blue          |  ===> route table 10
         +-----------------------------+
            |        |            |
         +------+ +------+   +-------------+
         | eth1 | | eth2 | ...|    bond1   |
         +------+ +------+   +-------------+
                                |        |
                            +------+ +------+
                            | eth8 | | eth9 |
                            +------+ +------+

Packets received on an enslaved device are switched to the VRF device
using an rx_handler, which gives the impression that packets flow through
the VRF device. Similarly, on egress, routing rules are used to send packets
to the VRF device driver before getting sent out the actual interface. This
allows tcpdump on a VRF device to capture all packets into and out of the
VRF as a whole.[1] Similarly, netfilter [2] and tc rules can be applied
using the VRF device to specify rules that apply to the VRF domain as a whole.

[1] Packets in the forwarded state do not flow through the device, so those
    packets are not seen by tcpdump. Will revisit this limitation in a
    future release.

[2] Iptables on ingress is limited to NF_INET_PRE_ROUTING only with skb->dev
    set to the real ingress device, and egress is limited to
    NF_INET_POST_ROUTING. Will revisit this limitation in a future release.


Setup
-----
1. VRF device is created with an association to a FIB table.
   e.g., ip link add vrf-blue type vrf table 10
         ip link set dev vrf-blue up

2. Rules are added that send lookups to the associated FIB table when the
   iif or oif is the VRF device. e.g.,
       ip ru add oif vrf-blue table 10
       ip ru add iif vrf-blue table 10

   Set the default route for the table (and hence default route for the VRF).
   e.g., ip route add table 10 prohibit default

3. Enslave L3 interfaces to a VRF device.
   e.g., ip link set dev eth1 master vrf-blue

   Local and connected routes for enslaved devices are automatically moved to
   the table associated with the VRF device. Any additional routes depending
   on the enslaved device will need to be reinserted following the enslavement.

4. Additional VRF routes are added to the associated table.
   e.g., ip route add table 10 ...


Applications
------------
Applications that are to work within a VRF need to bind their socket to the
VRF device:

    setsockopt(sd, SOL_SOCKET, SO_BINDTODEVICE, dev, strlen(dev)+1);

or specify the output device using cmsg and IP_PKTINFO.


Limitations
-----------
The VRF device currently only works for IPv4. Support for IPv6 is under
development.

The index of the original ingress interface is not available via cmsg. Will
address soon.
+9 -7
Documentation/sysctl/net.txt
 --------------

 The default queuing discipline to use for network devices. This allows
-overriding the default queue discipline of pfifo_fast with an
-alternative. Since the default queuing discipline is created with the
-no additional parameters so is best suited to queuing disciplines that
-work well without configuration like stochastic fair queue (sfq),
-CoDel (codel) or fair queue CoDel (fq_codel). Don't use queuing disciplines
-like Hierarchical Token Bucket or Deficit Round Robin which require setting
-up classes and bandwidths.
+overriding the default of pfifo_fast with an alternative. Since the default
+queuing discipline is created without additional parameters so is best suited
+to queuing disciplines that work well without configuration like stochastic
+fair queue (sfq), CoDel (codel) or fair queue CoDel (fq_codel). Don't use
+queuing disciplines like Hierarchical Token Bucket or Deficit Round Robin
+which require setting up classes and bandwidths. Note that physical multiqueue
+interfaces still use mq as root qdisc, which in turn uses this default for its
+leaves. Virtual devices (like e.g. lo or veth) ignore this setting and instead
+default to noqueue.
 Default: pfifo_fast

 busy_read
+8 -1
MAINTAINERS
··· 808 808 F: drivers/video/fbdev/arcfb.c 809 809 F: drivers/video/fbdev/core/fb_defio.c 810 810 811 + ARCNET NETWORK LAYER 812 + M: Michael Grzeschik <m.grzeschik@pengutronix.de> 813 + L: netdev@vger.kernel.org 814 + S: Maintained 815 + F: drivers/net/arcnet/ 816 + F: include/uapi/linux/if_arcnet.h 817 + 811 818 ARM MFM AND FLOPPY DRIVERS 812 819 M: Ian Molton <spyro@f2s.com> 813 820 S: Maintained ··· 8507 8500 F: drivers/net/ethernet/qlogic/qla3xxx.* 8508 8501 8509 8502 QLOGIC QLCNIC (1/10)Gb ETHERNET DRIVER 8510 - M: Shahed Shaikh <shahed.shaikh@qlogic.com> 8511 8503 M: Dept-GELinuxNICDev@qlogic.com 8512 8504 L: netdev@vger.kernel.org 8513 8505 S: Supported ··· 11268 11262 S: Maintained 11269 11263 F: drivers/net/vrf.c 11270 11264 F: include/net/vrf.h 11265 + F: Documentation/networking/vrf.txt 11271 11266 11272 11267 VT1211 HARDWARE MONITOR DRIVER 11273 11268 M: Juerg Haefliger <juergh@gmail.com>
+2 -5
drivers/atm/he.c
··· 1578 1578 1579 1579 kfree(he_dev->rbpl_virt); 1580 1580 kfree(he_dev->rbpl_table); 1581 - 1582 - if (he_dev->rbpl_pool) 1583 - dma_pool_destroy(he_dev->rbpl_pool); 1581 + dma_pool_destroy(he_dev->rbpl_pool); 1584 1582 1585 1583 if (he_dev->rbrq_base) 1586 1584 dma_free_coherent(&he_dev->pci_dev->dev, CONFIG_RBRQ_SIZE * sizeof(struct he_rbrq), ··· 1592 1594 dma_free_coherent(&he_dev->pci_dev->dev, CONFIG_TBRQ_SIZE * sizeof(struct he_tbrq), 1593 1595 he_dev->tpdrq_base, he_dev->tpdrq_phys); 1594 1596 1595 - if (he_dev->tpd_pool) 1596 - dma_pool_destroy(he_dev->tpd_pool); 1597 + dma_pool_destroy(he_dev->tpd_pool); 1597 1598 1598 1599 if (he_dev->pci_dev) { 1599 1600 pci_read_config_word(he_dev->pci_dev, PCI_COMMAND, &command);
+10 -2
drivers/atm/solos-pci.c
··· 805 805 continue; 806 806 } 807 807 808 - skb = alloc_skb(size + 1, GFP_ATOMIC); 808 + /* Use netdev_alloc_skb() because it adds NET_SKB_PAD of 809 + * headroom, and ensures we can route packets back out an 810 + * Ethernet interface (for example) without having to 811 + * reallocate. Adding NET_IP_ALIGN also ensures that both 812 + * PPPoATM and PPPoEoBR2684 packets end up aligned. */ 813 + skb = netdev_alloc_skb_ip_align(NULL, size + 1); 809 814 if (!skb) { 810 815 if (net_ratelimit()) 811 816 dev_warn(&card->dev->dev, "Failed to allocate sk_buff for RX\n"); ··· 874 869 /* Allocate RX skbs for any ports which need them */ 875 870 if (card->using_dma && card->atmdev[port] && 876 871 !card->rx_skb[port]) { 877 - struct sk_buff *skb = alloc_skb(RX_DMA_SIZE, GFP_ATOMIC); 872 + /* Unlike the MMIO case (qv) we can't add NET_IP_ALIGN 873 + * here; the FPGA can only DMA to addresses which are 874 + * aligned to 4 bytes. */ 875 + struct sk_buff *skb = dev_alloc_skb(RX_DMA_SIZE); 878 876 if (skb) { 879 877 SKB_CB(skb)->dma_addr = 880 878 dma_map_single(&card->dev->dev, skb->data,
+1 -1
drivers/net/arcnet/arcnet.c
··· 326 326 dev->type = ARPHRD_ARCNET; 327 327 dev->netdev_ops = &arcnet_netdev_ops; 328 328 dev->header_ops = &arcnet_header_ops; 329 - dev->hard_header_len = sizeof(struct archdr); 329 + dev->hard_header_len = sizeof(struct arc_hardware); 330 330 dev->mtu = choose_mtu(); 331 331 332 332 dev->addr_len = ARCNET_ALEN;
+1
drivers/net/dsa/mv88e6xxx.c
··· 2000 2000 */ 2001 2001 reg = _mv88e6xxx_reg_read(ds, REG_PORT(port), PORT_PCS_CTRL); 2002 2002 if (dsa_is_cpu_port(ds, port) || dsa_is_dsa_port(ds, port)) { 2003 + reg &= ~PORT_PCS_CTRL_UNFORCED; 2003 2004 reg |= PORT_PCS_CTRL_FORCE_LINK | 2004 2005 PORT_PCS_CTRL_LINK_UP | 2005 2006 PORT_PCS_CTRL_DUPLEX_FULL |
+16 -8
drivers/net/ethernet/apm/xgene/xgene_enet_hw.c
··· 689 689 netdev_dbg(ndev, "No phy-handle found in DT\n"); 690 690 return -ENODEV; 691 691 } 692 - pdata->phy_dev = of_phy_find_device(phy_np); 693 - } 694 692 695 - phy_dev = pdata->phy_dev; 693 + phy_dev = of_phy_connect(ndev, phy_np, &xgene_enet_adjust_link, 694 + 0, pdata->phy_mode); 695 + if (!phy_dev) { 696 + netdev_err(ndev, "Could not connect to PHY\n"); 697 + return -ENODEV; 698 + } 696 699 697 - if (!phy_dev || 698 - phy_connect_direct(ndev, phy_dev, &xgene_enet_adjust_link, 699 - pdata->phy_mode)) { 700 - netdev_err(ndev, "Could not connect to PHY\n"); 701 - return -ENODEV; 700 + pdata->phy_dev = phy_dev; 701 + } else { 702 + phy_dev = pdata->phy_dev; 703 + 704 + if (!phy_dev || 705 + phy_connect_direct(ndev, phy_dev, &xgene_enet_adjust_link, 706 + pdata->phy_mode)) { 707 + netdev_err(ndev, "Could not connect to PHY\n"); 708 + return -ENODEV; 709 + } 702 710 } 703 711 704 712 pdata->phy_speed = SPEED_UNKNOWN;
+1
drivers/net/ethernet/arc/emac_arc.c
··· 78 78 { .compatible = "snps,arc-emac" }, 79 79 { /* Sentinel */ } 80 80 }; 81 + MODULE_DEVICE_TABLE(of, emac_arc_dt_ids); 81 82 82 83 static struct platform_driver emac_arc_driver = { 83 84 .probe = emac_arc_probe,
+1
drivers/net/ethernet/broadcom/bcmsysport.c
··· 2079 2079 { .compatible = "brcm,systemport" }, 2080 2080 { /* sentinel */ } 2081 2081 }; 2082 + MODULE_DEVICE_TABLE(of, bcm_sysport_of_match); 2082 2083 2083 2084 static struct platform_driver bcm_sysport_driver = { 2084 2085 .probe = bcm_sysport_probe,
+1
drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
··· 1946 1946 u16 vlan_cnt; 1947 1947 u16 vlan_credit; 1948 1948 u16 vxlan_dst_port; 1949 + u8 vxlan_dst_port_count; 1949 1950 bool accept_any_vlan; 1950 1951 }; 1951 1952
+14 -6
drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
··· 3705 3705 3706 3706 void bnx2x_update_mfw_dump(struct bnx2x *bp) 3707 3707 { 3708 - struct timeval epoc; 3709 3708 u32 drv_ver; 3710 3709 u32 valid_dump; 3711 3710 3712 3711 if (!SHMEM2_HAS(bp, drv_info)) 3713 3712 return; 3714 3713 3715 - /* Update Driver load time */ 3716 - do_gettimeofday(&epoc); 3717 - SHMEM2_WR(bp, drv_info.epoc, epoc.tv_sec); 3714 + /* Update Driver load time, possibly broken in y2038 */ 3715 + SHMEM2_WR(bp, drv_info.epoc, (u32)ktime_get_real_seconds()); 3718 3716 3719 3717 drv_ver = bnx2x_update_mng_version_utility(DRV_MODULE_VERSION, true); 3720 3718 SHMEM2_WR(bp, drv_info.drv_ver, drv_ver); ··· 10108 10110 if (!netif_running(bp->dev)) 10109 10111 return; 10110 10112 10111 - if (bp->vxlan_dst_port || !IS_PF(bp)) { 10113 + if (bp->vxlan_dst_port_count && bp->vxlan_dst_port == port) { 10114 + bp->vxlan_dst_port_count++; 10115 + return; 10116 + } 10117 + 10118 + if (bp->vxlan_dst_port_count || !IS_PF(bp)) { 10112 10119 DP(BNX2X_MSG_SP, "Vxlan destination port limit reached\n"); 10113 10120 return; 10114 10121 } 10115 10122 10116 10123 bp->vxlan_dst_port = port; 10124 + bp->vxlan_dst_port_count = 1; 10117 10125 bnx2x_schedule_sp_rtnl(bp, BNX2X_SP_RTNL_ADD_VXLAN_PORT, 0); 10118 10126 } 10119 10127 ··· 10134 10130 10135 10131 static void __bnx2x_del_vxlan_port(struct bnx2x *bp, u16 port) 10136 10132 { 10137 - if (!bp->vxlan_dst_port || bp->vxlan_dst_port != port || !IS_PF(bp)) { 10133 + if (!bp->vxlan_dst_port_count || bp->vxlan_dst_port != port || 10134 + !IS_PF(bp)) { 10138 10135 DP(BNX2X_MSG_SP, "Invalid vxlan port\n"); 10139 10136 return; 10140 10137 } 10138 + bp->vxlan_dst_port--; 10139 + if (bp->vxlan_dst_port) 10140 + return; 10141 10141 10142 10142 if (netif_running(bp->dev)) { 10143 10143 bnx2x_schedule_sp_rtnl(bp, BNX2X_SP_RTNL_DEL_VXLAN_PORT, 0);
+10 -2
drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.c
··· 4319 4319 4320 4320 /* RSS keys */ 4321 4321 if (test_bit(BNX2X_RSS_SET_SRCH, &p->rss_flags)) { 4322 - memcpy(&data->rss_key[0], &p->rss_key[0], 4323 - sizeof(data->rss_key)); 4322 + u8 *dst = (u8 *)(data->rss_key) + sizeof(data->rss_key); 4323 + const u8 *src = (const u8 *)p->rss_key; 4324 + int i; 4325 + 4326 + /* Apparently, bnx2x reads this array in reverse order 4327 + * We need to byte swap rss_key to comply with Toeplitz specs. 4328 + */ 4329 + for (i = 0; i < sizeof(data->rss_key); i++) 4330 + *--dst = *src++; 4331 + 4324 4332 caps |= ETH_RSS_UPDATE_RAMROD_DATA_UPDATE_RSS_KEY; 4325 4333 } 4326 4334
+1
drivers/net/ethernet/broadcom/genet/bcmgenet.c
··· 3155 3155 { .compatible = "brcm,genet-v4", .data = (void *)GENET_V4 }, 3156 3156 { }, 3157 3157 }; 3158 + MODULE_DEVICE_TABLE(of, bcmgenet_match); 3158 3159 3159 3160 static int bcmgenet_probe(struct platform_device *pdev) 3160 3161 {
+2
drivers/net/ethernet/brocade/bna/bna_tx_rx.c
··· 2400 2400 q0->rcb->id = 0; 2401 2401 q0->rx_packets = q0->rx_bytes = 0; 2402 2402 q0->rx_packets_with_error = q0->rxbuf_alloc_failed = 0; 2403 + q0->rxbuf_map_failed = 0; 2403 2404 2404 2405 bna_rxq_qpt_setup(q0, rxp, dpage_count, PAGE_SIZE, 2405 2406 &dqpt_mem[i], &dsqpt_mem[i], &dpage_mem[i]); ··· 2429 2428 : rx_cfg->q1_buf_size; 2430 2429 q1->rx_packets = q1->rx_bytes = 0; 2431 2430 q1->rx_packets_with_error = q1->rxbuf_alloc_failed = 0; 2431 + q1->rxbuf_map_failed = 0; 2432 2432 2433 2433 bna_rxq_qpt_setup(q1, rxp, hpage_count, PAGE_SIZE, 2434 2434 &hqpt_mem[i], &hsqpt_mem[i],
+1
drivers/net/ethernet/brocade/bna/bna_types.h
··· 587 587 u64 rx_bytes; 588 588 u64 rx_packets_with_error; 589 589 u64 rxbuf_alloc_failed; 590 + u64 rxbuf_map_failed; 590 591 }; 591 592 592 593 /* RxQ pair */
+28 -1
drivers/net/ethernet/brocade/bna/bnad.c
··· 399 399 } 400 400 401 401 dma_addr = dma_map_page(&bnad->pcidev->dev, page, page_offset, 402 - unmap_q->map_size, DMA_FROM_DEVICE); 402 + unmap_q->map_size, DMA_FROM_DEVICE); 403 + if (dma_mapping_error(&bnad->pcidev->dev, dma_addr)) { 404 + put_page(page); 405 + BNAD_UPDATE_CTR(bnad, rxbuf_map_failed); 406 + rcb->rxq->rxbuf_map_failed++; 407 + goto finishing; 408 + } 403 409 404 410 unmap->page = page; 405 411 unmap->page_offset = page_offset; ··· 460 454 rcb->rxq->rxbuf_alloc_failed++; 461 455 goto finishing; 462 456 } 457 + 463 458 dma_addr = dma_map_single(&bnad->pcidev->dev, skb->data, 464 459 buff_sz, DMA_FROM_DEVICE); 460 + if (dma_mapping_error(&bnad->pcidev->dev, dma_addr)) { 461 + dev_kfree_skb_any(skb); 462 + BNAD_UPDATE_CTR(bnad, rxbuf_map_failed); 463 + rcb->rxq->rxbuf_map_failed++; 464 + goto finishing; 465 + } 465 466 466 467 unmap->skb = skb; 467 468 dma_unmap_addr_set(&unmap->vector, dma_addr, dma_addr); ··· 3038 3025 unmap = head_unmap; 3039 3026 dma_addr = dma_map_single(&bnad->pcidev->dev, skb->data, 3040 3027 len, DMA_TO_DEVICE); 3028 + if (dma_mapping_error(&bnad->pcidev->dev, dma_addr)) { 3029 + dev_kfree_skb_any(skb); 3030 + BNAD_UPDATE_CTR(bnad, tx_skb_map_failed); 3031 + return NETDEV_TX_OK; 3032 + } 3041 3033 BNA_SET_DMA_ADDR(dma_addr, &txqent->vector[0].host_addr); 3042 3034 txqent->vector[0].length = htons(len); 3043 3035 dma_unmap_addr_set(&unmap->vectors[0], dma_addr, dma_addr); ··· 3074 3056 3075 3057 dma_addr = skb_frag_dma_map(&bnad->pcidev->dev, frag, 3076 3058 0, size, DMA_TO_DEVICE); 3059 + if (dma_mapping_error(&bnad->pcidev->dev, dma_addr)) { 3060 + /* Undo the changes starting at tcb->producer_index */ 3061 + bnad_tx_buff_unmap(bnad, unmap_q, q_depth, 3062 + tcb->producer_index); 3063 + dev_kfree_skb_any(skb); 3064 + BNAD_UPDATE_CTR(bnad, tx_skb_map_failed); 3065 + return NETDEV_TX_OK; 3066 + } 3067 + 3077 3068 dma_unmap_len_set(&unmap->vectors[vect_id], dma_len, size); 3078 3069 BNA_SET_DMA_ADDR(dma_addr, 
&txqent->vector[vect_id].host_addr); 3079 3070 txqent->vector[vect_id].length = htons(size);
+2
drivers/net/ethernet/brocade/bna/bnad.h
··· 175 175 u64 tx_skb_headlen_zero; 176 176 u64 tx_skb_frag_zero; 177 177 u64 tx_skb_len_mismatch; 178 + u64 tx_skb_map_failed; 178 179 179 180 u64 hw_stats_updates; 180 181 u64 netif_rx_dropped; ··· 190 189 u64 rx_unmap_q_alloc_failed; 191 190 192 191 u64 rxbuf_alloc_failed; 192 + u64 rxbuf_map_failed; 193 193 }; 194 194 195 195 /* Complete driver stats */
+4
drivers/net/ethernet/brocade/bna/bnad_ethtool.c
··· 90 90 "tx_skb_headlen_zero", 91 91 "tx_skb_frag_zero", 92 92 "tx_skb_len_mismatch", 93 + "tx_skb_map_failed", 93 94 "hw_stats_updates", 94 95 "netif_rx_dropped", 95 96 ··· 103 102 "tx_unmap_q_alloc_failed", 104 103 "rx_unmap_q_alloc_failed", 105 104 "rxbuf_alloc_failed", 105 + "rxbuf_map_failed", 106 106 107 107 "mac_stats_clr_cnt", 108 108 "mac_frame_64", ··· 809 807 rx_packets_with_error; 810 808 buf[bi++] = rcb->rxq-> 811 809 rxbuf_alloc_failed; 810 + buf[bi++] = rcb->rxq->rxbuf_map_failed; 812 811 buf[bi++] = rcb->producer_index; 813 812 buf[bi++] = rcb->consumer_index; 814 813 } ··· 824 821 rx_packets_with_error; 825 822 buf[bi++] = rcb->rxq-> 826 823 rxbuf_alloc_failed; 824 + buf[bi++] = rcb->rxq->rxbuf_map_failed; 827 825 buf[bi++] = rcb->producer_index; 828 826 buf[bi++] = rcb->consumer_index; 829 827 }
+5
drivers/net/ethernet/chelsio/cxgb4/t4_pci_id_tbl.h
··· 157 157 CH_PCI_ID_TABLE_FENTRY(0x5090), /* Custom T540-CR */ 158 158 CH_PCI_ID_TABLE_FENTRY(0x5091), /* Custom T522-CR */ 159 159 CH_PCI_ID_TABLE_FENTRY(0x5092), /* Custom T520-CR */ 160 + CH_PCI_ID_TABLE_FENTRY(0x5093), /* Custom T580-LP-CR */ 161 + CH_PCI_ID_TABLE_FENTRY(0x5094), /* Custom T540-CR */ 162 + CH_PCI_ID_TABLE_FENTRY(0x5095), /* Custom T540-CR-SO */ 163 + CH_PCI_ID_TABLE_FENTRY(0x5096), /* Custom T580-CR */ 164 + CH_PCI_ID_TABLE_FENTRY(0x5097), /* Custom T520-KR */ 160 165 161 166 /* T6 adapters: 162 167 */
+1
drivers/net/ethernet/emulex/benet/be.h
··· 582 582 u16 pvid; 583 583 __be16 vxlan_port; 584 584 int vxlan_port_count; 585 + int vxlan_port_aliases; 585 586 struct phy_info phy; 586 587 u8 wol_cap; 587 588 bool wol_en;
+10
drivers/net/ethernet/emulex/benet/be_main.c
··· 5176 5176 if (lancer_chip(adapter) || BEx_chip(adapter) || be_is_mc(adapter)) 5177 5177 return; 5178 5178 5179 + if (adapter->vxlan_port == port && adapter->vxlan_port_count) { 5180 + adapter->vxlan_port_aliases++; 5181 + return; 5182 + } 5183 + 5179 5184 if (adapter->flags & BE_FLAGS_VXLAN_OFFLOADS) { 5180 5185 dev_info(dev, 5181 5186 "Only one UDP port supported for VxLAN offloads\n"); ··· 5230 5225 5231 5226 if (adapter->vxlan_port != port) 5232 5227 goto done; 5228 + 5229 + if (adapter->vxlan_port_aliases) { 5230 + adapter->vxlan_port_aliases--; 5231 + return; 5232 + } 5233 5233 5234 5234 be_disable_vxlan_offloads(adapter); 5235 5235
+10 -5
drivers/net/ethernet/freescale/gianfar.c
··· 1710 1710 * everything for us? Resetting it takes the link down and requires 1711 1711 * several seconds for it to come back. 1712 1712 */ 1713 - if (phy_read(tbiphy, MII_BMSR) & BMSR_LSTATUS) 1713 + if (phy_read(tbiphy, MII_BMSR) & BMSR_LSTATUS) { 1714 + put_device(&tbiphy->dev); 1714 1715 return; 1716 + } 1715 1717 1716 1718 /* Single clk mode, mii mode off(for serdes communication) */ 1717 1719 phy_write(tbiphy, MII_TBICON, TBICON_CLK_SELECT); ··· 1725 1723 phy_write(tbiphy, MII_BMCR, 1726 1724 BMCR_ANENABLE | BMCR_ANRESTART | BMCR_FULLDPLX | 1727 1725 BMCR_SPEED1000); 1726 + 1727 + put_device(&tbiphy->dev); 1728 1728 } 1729 1729 1730 1730 static int __gfar_is_rx_idle(struct gfar_private *priv) ··· 1974 1970 /* Install our interrupt handlers for Error, 1975 1971 * Transmit, and Receive 1976 1972 */ 1977 - err = request_irq(gfar_irq(grp, ER)->irq, gfar_error, 1978 - IRQF_NO_SUSPEND, 1973 + err = request_irq(gfar_irq(grp, ER)->irq, gfar_error, 0, 1979 1974 gfar_irq(grp, ER)->name, grp); 1980 1975 if (err < 0) { 1981 1976 netif_err(priv, intr, dev, "Can't get IRQ %d\n", ··· 1982 1979 1983 1980 goto err_irq_fail; 1984 1981 } 1982 + enable_irq_wake(gfar_irq(grp, ER)->irq); 1983 + 1985 1984 err = request_irq(gfar_irq(grp, TX)->irq, gfar_transmit, 0, 1986 1985 gfar_irq(grp, TX)->name, grp); 1987 1986 if (err < 0) { ··· 1999 1994 goto rx_irq_fail; 2000 1995 } 2001 1996 } else { 2002 - err = request_irq(gfar_irq(grp, TX)->irq, gfar_interrupt, 2003 - IRQF_NO_SUSPEND, 1997 + err = request_irq(gfar_irq(grp, TX)->irq, gfar_interrupt, 0, 2004 1998 gfar_irq(grp, TX)->name, grp); 2005 1999 if (err < 0) { 2006 2000 netif_err(priv, intr, dev, "Can't get IRQ %d\n", 2007 2001 gfar_irq(grp, TX)->irq); 2008 2002 goto err_irq_fail; 2009 2003 } 2004 + enable_irq_wake(gfar_irq(grp, TX)->irq); 2010 2005 } 2011 2006 2012 2007 return 0;
+1
drivers/net/ethernet/freescale/gianfar_ptp.c
··· 557 557 { .compatible = "fsl,etsec-ptp" }, 558 558 {}, 559 559 }; 560 + MODULE_DEVICE_TABLE(of, match_table); 560 561 561 562 static struct platform_driver gianfar_ptp_driver = { 562 563 .driver = {
+7 -1
drivers/net/ethernet/freescale/ucc_geth.c
··· 1384 1384 value = phy_read(tbiphy, ENET_TBI_MII_CR); 1385 1385 value &= ~0x1000; /* Turn off autonegotiation */ 1386 1386 phy_write(tbiphy, ENET_TBI_MII_CR, value); 1387 + 1388 + put_device(&tbiphy->dev); 1387 1389 } 1388 1390 1389 1391 init_check_frame_length_mode(ug_info->lengthCheckRx, &ug_regs->maccfg2); ··· 1704 1702 * everything for us? Resetting it takes the link down and requires 1705 1703 * several seconds for it to come back. 1706 1704 */ 1707 - if (phy_read(tbiphy, ENET_TBI_MII_SR) & TBISR_LSTATUS) 1705 + if (phy_read(tbiphy, ENET_TBI_MII_SR) & TBISR_LSTATUS) { 1706 + put_device(&tbiphy->dev); 1708 1707 return; 1708 + } 1709 1709 1710 1710 /* Single clk mode, mii mode off(for serdes communication) */ 1711 1711 phy_write(tbiphy, ENET_TBI_MII_ANA, TBIANA_SETTINGS); ··· 1715 1711 phy_write(tbiphy, ENET_TBI_MII_TBICON, TBICON_CLK_SELECT); 1716 1712 1717 1713 phy_write(tbiphy, ENET_TBI_MII_CR, TBICR_SETTINGS); 1714 + 1715 + put_device(&tbiphy->dev); 1718 1716 } 1719 1717 1720 1718 /* Configure the PHY for dev.
+5 -1
drivers/net/ethernet/marvell/mvneta.c
··· 1479 1479 struct mvneta_rx_desc *rx_desc = mvneta_rxq_next_desc_get(rxq); 1480 1480 struct sk_buff *skb; 1481 1481 unsigned char *data; 1482 + dma_addr_t phys_addr; 1482 1483 u32 rx_status; 1483 1484 int rx_bytes, err; 1484 1485 ··· 1487 1486 rx_status = rx_desc->status; 1488 1487 rx_bytes = rx_desc->data_size - (ETH_FCS_LEN + MVNETA_MH_SIZE); 1489 1488 data = (unsigned char *)rx_desc->buf_cookie; 1489 + phys_addr = rx_desc->buf_phys_addr; 1490 1490 1491 1491 if (!mvneta_rxq_desc_is_first_last(rx_status) || 1492 1492 (rx_status & MVNETA_RXD_ERR_SUMMARY)) { ··· 1536 1534 if (!skb) 1537 1535 goto err_drop_frame; 1538 1536 1539 - dma_unmap_single(dev->dev.parent, rx_desc->buf_phys_addr, 1537 + dma_unmap_single(dev->dev.parent, phys_addr, 1540 1538 MVNETA_RX_BUF_SIZE(pp->pkt_size), DMA_FROM_DEVICE); 1541 1539 1542 1540 rcvd_pkts++; ··· 3175 3173 struct phy_device *phy = of_phy_find_device(dn); 3176 3174 3177 3175 mvneta_fixed_link_update(pp, phy); 3176 + 3177 + put_device(&phy->dev); 3178 3178 } 3179 3179 3180 3180 return 0;
-2
drivers/net/ethernet/mellanox/mlx4/en_rx.c
··· 1270 1270 rss_context->hash_fn = MLX4_RSS_HASH_TOP; 1271 1271 memcpy(rss_context->rss_key, priv->rss_key, 1272 1272 MLX4_EN_RSS_KEY_SIZE); 1273 - netdev_rss_key_fill(rss_context->rss_key, 1274 - MLX4_EN_RSS_KEY_SIZE); 1275 1273 } else { 1276 1274 en_err(priv, "Unknown RSS hash function requested\n"); 1277 1275 err = -EINVAL;
+1
drivers/net/ethernet/micrel/ks8851.c
··· 1601 1601 { .compatible = "micrel,ks8851" }, 1602 1602 { } 1603 1603 }; 1604 + MODULE_DEVICE_TABLE(of, ks8851_match_table); 1604 1605 1605 1606 static struct spi_driver ks8851_driver = { 1606 1607 .driver = {
+1
drivers/net/ethernet/moxa/moxart_ether.c
··· 552 552 { .compatible = "moxa,moxart-mac" }, 553 553 { } 554 554 }; 555 + MODULE_DEVICE_TABLE(of, moxart_mac_match); 555 556 556 557 static struct platform_driver moxart_mac_driver = { 557 558 .probe = moxart_mac_probe,
+1
drivers/net/ethernet/qlogic/qlcnic/qlcnic.h
··· 536 536 u8 extend_lb_time; 537 537 u8 phys_port_id[ETH_ALEN]; 538 538 u8 lb_mode; 539 + u8 vxlan_port_count; 539 540 u16 vxlan_port; 540 541 struct device *hwmon_dev; 541 542 u32 post_mode;
+13 -5
drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c
··· 483 483 /* Adapter supports only one VXLAN port. Use very first port 484 484 * for enabling offload 485 485 */ 486 - if (!qlcnic_encap_rx_offload(adapter) || ahw->vxlan_port) 486 + if (!qlcnic_encap_rx_offload(adapter)) 487 487 return; 488 + if (!ahw->vxlan_port_count) { 489 + ahw->vxlan_port_count = 1; 490 + ahw->vxlan_port = ntohs(port); 491 + adapter->flags |= QLCNIC_ADD_VXLAN_PORT; 492 + return; 493 + } 494 + if (ahw->vxlan_port == ntohs(port)) 495 + ahw->vxlan_port_count++; 488 496 489 - ahw->vxlan_port = ntohs(port); 490 - adapter->flags |= QLCNIC_ADD_VXLAN_PORT; 491 497 } 492 498 493 499 static void qlcnic_del_vxlan_port(struct net_device *netdev, ··· 502 496 struct qlcnic_adapter *adapter = netdev_priv(netdev); 503 497 struct qlcnic_hardware_context *ahw = adapter->ahw; 504 498 505 - if (!qlcnic_encap_rx_offload(adapter) || !ahw->vxlan_port || 499 + if (!qlcnic_encap_rx_offload(adapter) || !ahw->vxlan_port_count || 506 500 (ahw->vxlan_port != ntohs(port))) 507 501 return; 508 502 509 - adapter->flags |= QLCNIC_DEL_VXLAN_PORT; 503 + ahw->vxlan_port_count--; 504 + if (!ahw->vxlan_port_count) 505 + adapter->flags |= QLCNIC_DEL_VXLAN_PORT; 510 506 } 511 507 512 508 static netdev_features_t qlcnic_features_check(struct sk_buff *skb,
+55 -56
drivers/net/ethernet/realtek/8139cp.c
···
 NWayAdvert = 0x66, /* MII ADVERTISE */
 NWayLPAR = 0x68, /* MII LPA */
 NWayExpansion = 0x6A, /* MII Expansion */
+ TxDmaOkLowDesc = 0x82, /* Low 16 bit address of a Tx descriptor. */
 Config5 = 0xD8, /* Config5 */
 TxPoll = 0xD9, /* Tell chip to check Tx descriptors for work */
 RxMaxSize = 0xDA, /* Max size of an Rx packet (8169 only) */
···
 unsigned tx_tail;
 struct cp_desc *tx_ring;
 struct sk_buff *tx_skb[CP_TX_RING_SIZE];
+ u32 tx_opts[CP_TX_RING_SIZE];

 unsigned rx_buf_sz;
 unsigned wol_enabled : 1; /* Is Wake-on-LAN enabled? */
···
 BUG_ON(!skb);

 dma_unmap_single(&cp->pdev->dev, le64_to_cpu(txd->addr),
- le32_to_cpu(txd->opts1) & 0xffff,
+ cp->tx_opts[tx_tail] & 0xffff,
 PCI_DMA_TODEVICE);

 if (status & LastFrag) {
···
 {
 struct cp_private *cp = netdev_priv(dev);
 unsigned entry;
- u32 eor, flags;
+ u32 eor, opts1;
 unsigned long intr_flags;
 __le32 opts2;
 int mss = 0;
···
 mss = skb_shinfo(skb)->gso_size;

 opts2 = cpu_to_le32(cp_tx_vlan_tag(skb));
+ opts1 = DescOwn;
+ if (mss)
+ opts1 |= LargeSend | ((mss & MSSMask) << MSSShift);
+ else if (skb->ip_summed == CHECKSUM_PARTIAL) {
+ const struct iphdr *ip = ip_hdr(skb);
+ if (ip->protocol == IPPROTO_TCP)
+ opts1 |= IPCS | TCPCS;
+ else if (ip->protocol == IPPROTO_UDP)
+ opts1 |= IPCS | UDPCS;
+ else {
+ WARN_ONCE(1,
+ "Net bug: asked to checksum invalid Legacy IP packet\n");
+ goto out_dma_error;
+ }
+ }

 if (skb_shinfo(skb)->nr_frags == 0) {
 struct cp_desc *txd = &cp->tx_ring[entry];
···
 txd->addr = cpu_to_le64(mapping);
 wmb();

- flags = eor | len | DescOwn | FirstFrag | LastFrag;
+ opts1 |= eor | len | FirstFrag | LastFrag;

- if (mss)
- flags |= LargeSend | ((mss & MSSMask) << MSSShift);
- else if (skb->ip_summed == CHECKSUM_PARTIAL) {
- const struct iphdr *ip = ip_hdr(skb);
- if (ip->protocol == IPPROTO_TCP)
- flags |= IPCS | TCPCS;
- else if (ip->protocol == IPPROTO_UDP)
- flags |= IPCS | UDPCS;
- else
- WARN_ON(1); /* we need a WARN() */
- }
-
- txd->opts1 = cpu_to_le32(flags);
+ txd->opts1 = cpu_to_le32(opts1);
 wmb();

 cp->tx_skb[entry] = skb;
- entry = NEXT_TX(entry);
+ cp->tx_opts[entry] = opts1;
+ netif_dbg(cp, tx_queued, cp->dev, "tx queued, slot %d, skblen %d\n",
+ entry, skb->len);
 } else {
 struct cp_desc *txd;
- u32 first_len, first_eor;
+ u32 first_len, first_eor, ctrl;
 dma_addr_t first_mapping;
 int frag, first_entry = entry;
- const struct iphdr *ip = ip_hdr(skb);

 /* We must give this initial chunk to the device last.
 * Otherwise we could race with the device.
···
 goto out_dma_error;

 cp->tx_skb[entry] = skb;
- entry = NEXT_TX(entry);

 for (frag = 0; frag < skb_shinfo(skb)->nr_frags; frag++) {
 const skb_frag_t *this_frag = &skb_shinfo(skb)->frags[frag];
 u32 len;
- u32 ctrl;
 dma_addr_t mapping;
+
+ entry = NEXT_TX(entry);

 len = skb_frag_size(this_frag);
 mapping = dma_map_single(&cp->pdev->dev,
···
 eor = (entry == (CP_TX_RING_SIZE - 1)) ? RingEnd : 0;

- ctrl = eor | len | DescOwn;
-
- if (mss)
- ctrl |= LargeSend |
- ((mss & MSSMask) << MSSShift);
- else if (skb->ip_summed == CHECKSUM_PARTIAL) {
- if (ip->protocol == IPPROTO_TCP)
- ctrl |= IPCS | TCPCS;
- else if (ip->protocol == IPPROTO_UDP)
- ctrl |= IPCS | UDPCS;
- else
- BUG();
- }
+ ctrl = opts1 | eor | len;

 if (frag == skb_shinfo(skb)->nr_frags - 1)
 ctrl |= LastFrag;
···
 txd->opts1 = cpu_to_le32(ctrl);
 wmb();

+ cp->tx_opts[entry] = ctrl;
 cp->tx_skb[entry] = skb;
- entry = NEXT_TX(entry);
 }

 txd = &cp->tx_ring[first_entry];
···
 txd->addr = cpu_to_le64(first_mapping);
 wmb();

- if (skb->ip_summed == CHECKSUM_PARTIAL) {
- if (ip->protocol == IPPROTO_TCP)
- txd->opts1 = cpu_to_le32(first_eor | first_len |
- FirstFrag | DescOwn |
- IPCS | TCPCS);
- else if (ip->protocol == IPPROTO_UDP)
- txd->opts1 = cpu_to_le32(first_eor | first_len |
- FirstFrag | DescOwn |
- IPCS | UDPCS);
- else
- BUG();
- } else
- txd->opts1 = cpu_to_le32(first_eor | first_len |
- FirstFrag | DescOwn);
+ ctrl = opts1 | first_eor | first_len | FirstFrag;
+ txd->opts1 = cpu_to_le32(ctrl);
 wmb();
+
+ cp->tx_opts[first_entry] = ctrl;
+ netif_dbg(cp, tx_queued, cp->dev, "tx queued, slots %d-%d, skblen %d\n",
+ first_entry, entry, skb->len);
 }
- cp->tx_head = entry;
+ cp->tx_head = NEXT_TX(entry);

 netdev_sent_queue(dev, skb->len);
- netif_dbg(cp, tx_queued, cp->dev, "tx queued, slot %d, skblen %d\n",
- entry, skb->len);
 if (TX_BUFFS_AVAIL(cp) <= (MAX_SKB_FRAGS + 1))
 netif_stop_queue(dev);
···
 {
 memset(cp->tx_ring, 0, sizeof(struct cp_desc) * CP_TX_RING_SIZE);
 cp->tx_ring[CP_TX_RING_SIZE - 1].opts1 = cpu_to_le32(RingEnd);
+ memset(cp->tx_opts, 0, sizeof(cp->tx_opts));

 cp_init_rings_index(cp);
···
 desc = cp->rx_ring + i;
 dma_unmap_single(&cp->pdev->dev, le64_to_cpu(desc->addr),
 cp->rx_buf_sz, PCI_DMA_FROMDEVICE);
- dev_kfree_skb(cp->rx_skb[i]);
+ dev_kfree_skb_any(cp->rx_skb[i]);
 }
 }
···
 le32_to_cpu(desc->opts1) & 0xffff,
 PCI_DMA_TODEVICE);
 if (le32_to_cpu(desc->opts1) & LastFrag)
- dev_kfree_skb(skb);
+ dev_kfree_skb_any(skb);
 cp->dev->stats.tx_dropped++;
 }
 }
···
 memset(cp->rx_ring, 0, sizeof(struct cp_desc) * CP_RX_RING_SIZE);
 memset(cp->tx_ring, 0, sizeof(struct cp_desc) * CP_TX_RING_SIZE);
+ memset(cp->tx_opts, 0, sizeof(cp->tx_opts));

 memset(cp->rx_skb, 0, sizeof(struct sk_buff *) * CP_RX_RING_SIZE);
 memset(cp->tx_skb, 0, sizeof(struct sk_buff *) * CP_TX_RING_SIZE);
···
 {
 struct cp_private *cp = netdev_priv(dev);
 unsigned long flags;
- int rc;
+ int rc, i;

 netdev_warn(dev, "Transmit timeout, status %2x %4x %4x %4x\n",
 cpr8(Cmd), cpr16(CpCmd),
···
 spin_lock_irqsave(&cp->lock, flags);

+ netif_dbg(cp, tx_err, cp->dev, "TX ring head %d tail %d desc %x\n",
+ cp->tx_head, cp->tx_tail, cpr16(TxDmaOkLowDesc));
+ for (i = 0; i < CP_TX_RING_SIZE; i++) {
+ netif_dbg(cp, tx_err, cp->dev,
+ "TX slot %d @%p: %08x (%08x) %08x %llx %p\n",
+ i, &cp->tx_ring[i], le32_to_cpu(cp->tx_ring[i].opts1),
+ cp->tx_opts[i], le32_to_cpu(cp->tx_ring[i].opts2),
+ le64_to_cpu(cp->tx_ring[i].addr),
+ cp->tx_skb[i]);
+ }
+
 cp_stop_hw(cp);
 cp_clean_rings(cp);
 rc = cp_init_rings(cp);
 cp_start_hw(cp);
- cp_enable_irq(cp);
+ __cp_set_rx_mode(dev);
+ cpw16_f(IntrMask, cp_norx_intr_mask);

 netif_wake_queue(dev);
+ napi_schedule_irqoff(&cp->napi);

 spin_unlock_irqrestore(&cp->lock, flags);
 }
drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c | +8 -3

···
 if (!gpio_request(reset_gpio, "mdio-reset")) {
 gpio_direction_output(reset_gpio, active_low ? 1 : 0);
- udelay(data->delays[0]);
+ if (data->delays[0])
+ msleep(DIV_ROUND_UP(data->delays[0], 1000));
+
 gpio_set_value(reset_gpio, active_low ? 0 : 1);
- udelay(data->delays[1]);
+ if (data->delays[1])
+ msleep(DIV_ROUND_UP(data->delays[1], 1000));
+
 gpio_set_value(reset_gpio, active_low ? 1 : 0);
- udelay(data->delays[2]);
+ if (data->delays[2])
+ msleep(DIV_ROUND_UP(data->delays[2], 1000));
 }
 }
 #endif
drivers/net/ethernet/sun/sunvnet.c | +11 -6

···
 #endif
 };

- static struct vnet *vnet_new(const u64 *local_mac)
+ static struct vnet *vnet_new(const u64 *local_mac,
+ struct vio_dev *vdev)
 {
 struct net_device *dev;
 struct vnet *vp;
···
 NETIF_F_HW_CSUM | NETIF_F_SG;
 dev->features = dev->hw_features;

+ SET_NETDEV_DEV(dev, &vdev->dev);
+
 err = register_netdev(dev);
 if (err) {
 pr_err("Cannot register net device, aborting\n");
···
 return ERR_PTR(err);
 }

- static struct vnet *vnet_find_or_create(const u64 *local_mac)
+ static struct vnet *vnet_find_or_create(const u64 *local_mac,
+ struct vio_dev *vdev)
 {
 struct vnet *iter, *vp;
···
 }
 }
 if (!vp)
- vp = vnet_new(local_mac);
+ vp = vnet_new(local_mac, vdev);
 mutex_unlock(&vnet_list_mutex);

 return vp;
···
 static const char *local_mac_prop = "local-mac-address";

 static struct vnet *vnet_find_parent(struct mdesc_handle *hp,
- u64 port_node)
+ u64 port_node,
+ struct vio_dev *vdev)
 {
 const u64 *local_mac = NULL;
 u64 a;
···
 if (!local_mac)
 return ERR_PTR(-ENODEV);

- return vnet_find_or_create(local_mac);
+ return vnet_find_or_create(local_mac, vdev);
 }

 static struct ldc_channel_config vnet_ldc_cfg = {
···
 hp = mdesc_grab();

- vp = vnet_find_parent(hp, vdev->mp);
+ vp = vnet_find_parent(hp, vdev->mp, vdev);
 if (IS_ERR(vp)) {
 pr_err("Cannot find port parent vnet\n");
 err = PTR_ERR(vp);
drivers/net/ethernet/ti/netcp_core.c | +35 -39

···
 interface_list) {
 struct netcp_intf_modpriv *intf_modpriv;

- /* If interface not registered then register now */
- if (!netcp_intf->netdev_registered)
- ret = netcp_register_interface(netcp_intf);
-
- if (ret)
- return -ENODEV;
-
 intf_modpriv = devm_kzalloc(dev, sizeof(*intf_modpriv),
 GFP_KERNEL);
 if (!intf_modpriv)
···
 interface = of_parse_phandle(netcp_intf->node_interface,
 module->name, 0);
+
+ if (!interface) {
+ devm_kfree(dev, intf_modpriv);
+ continue;
+ }

 intf_modpriv->netcp_priv = netcp_intf;
 intf_modpriv->netcp_module = module;
···
 list_del(&intf_modpriv->intf_list);
 devm_kfree(dev, intf_modpriv);
 continue;
+ }
+ }
+
+ /* Now register the interface with netdev */
+ list_for_each_entry(netcp_intf,
+ &netcp_device->interface_head,
+ interface_list) {
+ /* If interface not registered then register now */
+ if (!netcp_intf->netdev_registered) {
+ ret = netcp_register_interface(netcp_intf);
+ if (ret)
+ return -ENODEV;
 }
 }
 return 0;
···
 if (ret < 0)
 goto fail;
 }
-
 mutex_unlock(&netcp_modules_lock);
 return 0;
···
 netcp->rx_pool = NULL;
 }

- static void netcp_allocate_rx_buf(struct netcp_intf *netcp, int fdq)
+ static int netcp_allocate_rx_buf(struct netcp_intf *netcp, int fdq)
 {
 struct knav_dma_desc *hwdesc;
 unsigned int buf_len, dma_sz;
···
 hwdesc = knav_pool_desc_get(netcp->rx_pool);
 if (IS_ERR_OR_NULL(hwdesc)) {
 dev_dbg(netcp->ndev_dev, "out of rx pool desc\n");
- return;
+ return -ENOMEM;
 }

 if (likely(fdq == 0)) {
···
 knav_pool_desc_map(netcp->rx_pool, hwdesc, sizeof(*hwdesc), &dma,
 &dma_sz);
 knav_queue_push(netcp->rx_fdq[fdq], dma, sizeof(*hwdesc), 0);
- return;
+ return 0;

 fail:
 knav_pool_desc_put(netcp->rx_pool, hwdesc);
+ return -ENOMEM;
 }

 /* Refill Rx FDQ with descriptors & attached buffers */
 static void netcp_rxpool_refill(struct netcp_intf *netcp)
 {
 u32 fdq_deficit[KNAV_DMA_FDQ_PER_CHAN] = {0};
- int i;
+ int i, ret = 0;

 /* Calculate the FDQ deficit and refill */
 for (i = 0; i < KNAV_DMA_FDQ_PER_CHAN && netcp->rx_fdq[i]; i++) {
 fdq_deficit[i] = netcp->rx_queue_depths[i] -
 knav_queue_get_count(netcp->rx_fdq[i]);

- while (fdq_deficit[i]--)
- netcp_allocate_rx_buf(netcp, i);
+ while (fdq_deficit[i]-- && !ret)
+ ret = netcp_allocate_rx_buf(netcp, i);
 } /* end for fdqs */
 }
···
 packets = netcp_process_rx_packets(netcp, budget);

+ netcp_rxpool_refill(netcp);
 if (packets < budget) {
 napi_complete(&netcp->rx_napi);
 knav_queue_enable_notify(netcp->rx_queue);
 }

- netcp_rxpool_refill(netcp);
 return packets;
 }
···
 continue;
 dev_dbg(netcp->ndev_dev, "deleting address %pM, type %x\n",
 naddr->addr, naddr->type);
- mutex_lock(&netcp_modules_lock);
 for_each_module(netcp, priv) {
 module = priv->netcp_module;
 if (!module->del_addr)
···
 naddr);
 WARN_ON(error);
 }
- mutex_unlock(&netcp_modules_lock);
 netcp_addr_del(netcp, naddr);
 }
 }
···
 continue;
 dev_dbg(netcp->ndev_dev, "adding address %pM, type %x\n",
 naddr->addr, naddr->type);
- mutex_lock(&netcp_modules_lock);
+
 for_each_module(netcp, priv) {
 module = priv->netcp_module;
 if (!module->add_addr)
···
 error = module->add_addr(priv->module_priv, naddr);
 WARN_ON(error);
 }
- mutex_unlock(&netcp_modules_lock);
 }
 }
···
 ndev->flags & IFF_ALLMULTI ||
 netdev_mc_count(ndev) > NETCP_MAX_MCAST_ADDR);

+ spin_lock(&netcp->lock);
 /* first clear all marks */
 netcp_addr_clear_mark(netcp);
···
 /* finally sweep and callout into modules */
 netcp_addr_sweep_del(netcp);
 netcp_addr_sweep_add(netcp);
+ spin_unlock(&netcp->lock);
 }

 static void netcp_free_navigator_resources(struct netcp_intf *netcp)
···
 goto fail;
 }

- mutex_lock(&netcp_modules_lock);
 for_each_module(netcp, intf_modpriv) {
 module = intf_modpriv->netcp_module;
 if (module->open) {
···
 }
 }
 }
- mutex_unlock(&netcp_modules_lock);

 napi_enable(&netcp->rx_napi);
 napi_enable(&netcp->tx_napi);
···
 if (module->close)
 module->close(intf_modpriv->module_priv, ndev);
 }
- mutex_unlock(&netcp_modules_lock);

 fail:
 netcp_free_navigator_resources(netcp);
···
 napi_disable(&netcp->rx_napi);
 napi_disable(&netcp->tx_napi);

- mutex_lock(&netcp_modules_lock);
 for_each_module(netcp, intf_modpriv) {
 module = intf_modpriv->netcp_module;
 if (module->close) {
···
 dev_err(netcp->ndev_dev, "Close failed\n");
 }
 }
- mutex_unlock(&netcp_modules_lock);

 /* Recycle Rx descriptors from completion queue */
 netcp_empty_rx_queue(netcp);
···
 if (!netif_running(ndev))
 return -EINVAL;

- mutex_lock(&netcp_modules_lock);
 for_each_module(netcp, intf_modpriv) {
 module = intf_modpriv->netcp_module;
 if (!module->ioctl)
···
 }

 out:
- mutex_unlock(&netcp_modules_lock);
 return (ret == 0) ? 0 : err;
 }
···
 struct netcp_intf *netcp = netdev_priv(ndev);
 struct netcp_intf_modpriv *intf_modpriv;
 struct netcp_module *module;
+ unsigned long flags;
 int err = 0;

 dev_dbg(netcp->ndev_dev, "adding rx vlan id: %d\n", vid);

- mutex_lock(&netcp_modules_lock);
+ spin_lock_irqsave(&netcp->lock, flags);
 for_each_module(netcp, intf_modpriv) {
 module = intf_modpriv->netcp_module;
 if ((module->add_vid) && (vid != 0)) {
···
 }
 }
 }
- mutex_unlock(&netcp_modules_lock);
+ spin_unlock_irqrestore(&netcp->lock, flags);
+
 return err;
 }
···
 struct netcp_intf *netcp = netdev_priv(ndev);
 struct netcp_intf_modpriv *intf_modpriv;
 struct netcp_module *module;
+ unsigned long flags;
 int err = 0;

 dev_dbg(netcp->ndev_dev, "removing rx vlan id: %d\n", vid);

- mutex_lock(&netcp_modules_lock);
+ spin_lock_irqsave(&netcp->lock, flags);
 for_each_module(netcp, intf_modpriv) {
 module = intf_modpriv->netcp_module;
 if (module->del_vid) {
···
 }
 }
 }
- mutex_unlock(&netcp_modules_lock);
+ spin_unlock_irqrestore(&netcp->lock, flags);
 return err;
 }
···
 struct device_node *child, *interfaces;
 struct netcp_device *netcp_device;
 struct device *dev = &pdev->dev;
- struct netcp_module *module;
 int ret;

 if (!node) {
···
 /* Add the device instance to the list */
 list_add_tail(&netcp_device->device_list, &netcp_devices);

- /* Probe & attach any modules already registered */
- mutex_lock(&netcp_modules_lock);
- for_each_netcp_module(module) {
- ret = netcp_module_probe(netcp_device, module);
- if (ret < 0)
- dev_err(dev, "module(%s) probe failed\n", module->name);
- }
- mutex_unlock(&netcp_modules_lock);
 return 0;

 probe_quit_interface:
drivers/net/ethernet/ti/netcp_ethss.c | +20 -27

···
 #define GBENU_ALE_OFFSET 0x1e000
 #define GBENU_HOST_PORT_NUM 0
 #define GBENU_NUM_ALE_ENTRIES 1024
+ #define GBENU_SGMII_MODULE_SIZE 0x100

 /* 10G Ethernet SS defines */
 #define XGBE_MODULE_NAME "netcp-xgbe"
···
 #define XGBE_STATS2_MODULE 2

 /* s: 0-based slave_port */
- #define SGMII_BASE(s) \
- (((s) < 2) ? gbe_dev->sgmii_port_regs : gbe_dev->sgmii_port34_regs)
+ #define SGMII_BASE(d, s) \
+ (((s) < 2) ? (d)->sgmii_port_regs : (d)->sgmii_port34_regs)

 #define GBE_TX_QUEUE 648
 #define GBE_TXHOOK_ORDER 0
···
 return;

 if (!SLAVE_LINK_IS_XGMII(slave)) {
- if (gbe_dev->ss_version == GBE_SS_VERSION_14)
- sgmii_link_state =
- netcp_sgmii_get_port_link(SGMII_BASE(sp), sp);
- else
- sgmii_link_state =
- netcp_sgmii_get_port_link(
- gbe_dev->sgmii_port_regs, sp);
+ sgmii_link_state =
+ netcp_sgmii_get_port_link(SGMII_BASE(gbe_dev, sp), sp);
 }

 phy_link_state = gbe_phy_link_status(slave);
···
 static void gbe_sgmii_rtreset(struct gbe_priv *priv,
 struct gbe_slave *slave, bool set)
 {
- void __iomem *sgmii_port_regs;
-
 if (SLAVE_LINK_IS_XGMII(slave))
 return;

- if ((priv->ss_version == GBE_SS_VERSION_14) && (slave->slave_num >= 2))
- sgmii_port_regs = priv->sgmii_port34_regs;
- else
- sgmii_port_regs = priv->sgmii_port_regs;
-
- netcp_sgmii_rtreset(sgmii_port_regs, slave->slave_num, set);
+ netcp_sgmii_rtreset(SGMII_BASE(priv, slave->slave_num),
+ slave->slave_num, set);
 }

 static void gbe_slave_stop(struct gbe_intf *intf)
···
 static void gbe_sgmii_config(struct gbe_priv *priv, struct gbe_slave *slave)
 {
- void __iomem *sgmii_port_regs;
+ if (SLAVE_LINK_IS_XGMII(slave))
+ return;

- sgmii_port_regs = priv->sgmii_port_regs;
- if ((priv->ss_version == GBE_SS_VERSION_14) && (slave->slave_num >= 2))
- sgmii_port_regs = priv->sgmii_port34_regs;
-
- if (!SLAVE_LINK_IS_XGMII(slave)) {
- netcp_sgmii_reset(sgmii_port_regs, slave->slave_num);
- netcp_sgmii_config(sgmii_port_regs, slave->slave_num,
- slave->link_interface);
- }
+ netcp_sgmii_reset(SGMII_BASE(priv, slave->slave_num), slave->slave_num);
+ netcp_sgmii_config(SGMII_BASE(priv, slave->slave_num), slave->slave_num,
+ slave->link_interface);
 }

 static int gbe_slave_open(struct gbe_intf *gbe_intf)
···
 gbe_dev->switch_regs = regs;

 gbe_dev->sgmii_port_regs = gbe_dev->ss_regs + GBENU_SGMII_MODULE_OFFSET;
+
+ /* Although sgmii modules are mem mapped to one contiguous
+ * region on GBENU devices, setting sgmii_port34_regs allows
+ * consistent code when accessing sgmii api
+ */
+ gbe_dev->sgmii_port34_regs = gbe_dev->sgmii_port_regs +
+ (2 * GBENU_SGMII_MODULE_SIZE);
+
 gbe_dev->host_port_regs = gbe_dev->switch_regs + GBENU_HOST_PORT_OFFSET;

 for (i = 0; i < (gbe_dev->max_num_ports); i++)
drivers/net/ethernet/via/Kconfig | +1 -1

···
 config VIA_RHINE
 tristate "VIA Rhine support"
- depends on (PCI || OF_IRQ)
+ depends on PCI || (OF_IRQ && GENERIC_PCI_IOMAP)
 depends on HAS_DMA
 select CRC32
 select MII
drivers/net/ethernet/xilinx/xilinx_emaclite.c | +2

···
 if (!phydev)
 dev_info(dev,
 "MDIO of the phy is not registered yet\n");
+ else
+ put_device(&phydev->dev);
 return 0;
 }
drivers/net/fjes/fjes_hw.c | +4 -4

···
 set_bit(epidx, &irq_bit);
 break;
 }
+
+ hw->ep_shm_info[epidx].es_status =
+ info[epidx].es_status;
+ hw->ep_shm_info[epidx].zone = info[epidx].zone;
 }
-
- hw->ep_shm_info[epidx].es_status = info[epidx].es_status;
- hw->ep_shm_info[epidx].zone = info[epidx].zone;
-
 break;
 }
drivers/net/geneve.c | +14 -18

···
 __be32 addr;
 int err;

+ iph = ip_hdr(skb); /* outer IP header... */
+
 if (gs->collect_md) {
 static u8 zero_vni[3];
···
 addr = 0;
 } else {
 vni = gnvh->vni;
- iph = ip_hdr(skb); /* Still outer IP header... */
 addr = iph->saddr;
 }
···
 skb_reset_network_header(skb);

- iph = ip_hdr(skb); /* Now inner IP header... */
 err = IP_ECN_decapsulate(iph, skb);

 if (unlikely(err)) {
···
 struct geneve_sock *gs = geneve->sock;
 struct ip_tunnel_info *info = NULL;
 struct rtable *rt = NULL;
+ const struct iphdr *iip; /* interior IP header */
 struct flowi4 fl4;
 __u8 tos, ttl;
 __be16 sport;
···
 sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
 skb_reset_mac_header(skb);

+ iip = ip_hdr(skb);
+
 if (info) {
 const struct ip_tunnel_key *key = &info->key;
 u8 *opts = NULL;
···
 if (unlikely(err))
 goto err;

- tos = key->tos;
+ tos = ip_tunnel_ecn_encap(key->tos, iip, skb);
 ttl = key->ttl;
 df = key->tun_flags & TUNNEL_DONT_FRAGMENT ? htons(IP_DF) : 0;
 } else {
- const struct iphdr *iip; /* interior IP header */
-
 udp_csum = false;
 err = geneve_build_skb(rt, skb, 0, geneve->vni,
 0, NULL, udp_csum);
 if (unlikely(err))
 goto err;

- iip = ip_hdr(skb);
 tos = ip_tunnel_ecn_encap(fl4.flowi4_tos, iip, skb);
 ttl = geneve->ttl;
 if (!ttl && IN_MULTICAST(ntohl(fl4.daddr)))
···
 dev->features |= NETIF_F_RXCSUM;
 dev->features |= NETIF_F_GSO_SOFTWARE;

- dev->vlan_features = dev->features;
- dev->features |= NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_STAG_TX;
-
 dev->hw_features |= NETIF_F_SG | NETIF_F_HW_CSUM | NETIF_F_RXCSUM;
 dev->hw_features |= NETIF_F_GSO_SOFTWARE;
- dev->hw_features |= NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_STAG_TX;

 netif_keep_dst(dev);
 dev->priv_flags |= IFF_LIVE_ADDR_CHANGE | IFF_NO_QUEUE;
···
 static int geneve_configure(struct net *net, struct net_device *dev,
 __be32 rem_addr, __u32 vni, __u8 ttl, __u8 tos,
- __u16 dst_port, bool metadata)
+ __be16 dst_port, bool metadata)
 {
 struct geneve_net *gn = net_generic(net, geneve_net_id);
 struct geneve_dev *t, *geneve = netdev_priv(dev);
···
 geneve->ttl = ttl;
 geneve->tos = tos;
- geneve->dst_port = htons(dst_port);
+ geneve->dst_port = dst_port;
 geneve->collect_md = metadata;

- t = geneve_find_dev(gn, htons(dst_port), rem_addr, geneve->vni,
+ t = geneve_find_dev(gn, dst_port, rem_addr, geneve->vni,
 &tun_on_same_port, &tun_collect_md);
 if (t)
 return -EBUSY;
···
 static int geneve_newlink(struct net *net, struct net_device *dev,
 struct nlattr *tb[], struct nlattr *data[])
 {
- __u16 dst_port = GENEVE_UDP_PORT;
+ __be16 dst_port = htons(GENEVE_UDP_PORT);
 __u8 ttl = 0, tos = 0;
 bool metadata = false;
 __be32 rem_addr;
···
 tos = nla_get_u8(data[IFLA_GENEVE_TOS]);

 if (data[IFLA_GENEVE_PORT])
- dst_port = nla_get_u16(data[IFLA_GENEVE_PORT]);
+ dst_port = nla_get_be16(data[IFLA_GENEVE_PORT]);

 if (data[IFLA_GENEVE_COLLECT_METADATA])
 metadata = true;
···
 nla_total_size(sizeof(struct in_addr)) + /* IFLA_GENEVE_REMOTE */
 nla_total_size(sizeof(__u8)) + /* IFLA_GENEVE_TTL */
 nla_total_size(sizeof(__u8)) + /* IFLA_GENEVE_TOS */
- nla_total_size(sizeof(__u16)) + /* IFLA_GENEVE_PORT */
+ nla_total_size(sizeof(__be16)) + /* IFLA_GENEVE_PORT */
 nla_total_size(0) + /* IFLA_GENEVE_COLLECT_METADATA */
 0;
 }
···
 nla_put_u8(skb, IFLA_GENEVE_TOS, geneve->tos))
 goto nla_put_failure;

- if (nla_put_u16(skb, IFLA_GENEVE_PORT, ntohs(geneve->dst_port)))
+ if (nla_put_be16(skb, IFLA_GENEVE_PORT, geneve->dst_port))
 goto nla_put_failure;

 if (geneve->collect_md) {
···
 if (IS_ERR(dev))
 return dev;

- err = geneve_configure(net, dev, 0, 0, 0, 0, dst_port, true);
+ err = geneve_configure(net, dev, 0, 0, 0, 0, htons(dst_port), true);
 if (err) {
 free_netdev(dev);
 return ERR_PTR(err);
drivers/net/irda/ali-ircc.c | -6

···
 static void ali_ircc_sir_change_speed(struct ali_ircc_cb *priv, __u32 speed)
 {
 struct ali_ircc_cb *self = priv;
- unsigned long flags;
 int iobase;
 int fcr; /* FIFO control reg */
 int lcr; /* Line control reg */
···
 /* Update accounting for new speed */
 self->io.speed = speed;

- spin_lock_irqsave(&self->lock, flags);
-
 divisor = 115200/speed;

 fcr = UART_FCR_ENABLE_FIFO;
···
 /* without this, the connection will be broken after come back from FIR speed,
 but with this, the SIR connection is harder to established */
 outb((UART_MCR_DTR | UART_MCR_RTS | UART_MCR_OUT2), iobase+UART_MCR);
-
- spin_unlock_irqrestore(&self->lock, flags);
-
 }

 static void ali_ircc_change_dongle_speed(struct ali_ircc_cb *priv, int speed)
drivers/net/macvtap.c | +2 -2

···
 return 0;

 case TUNSETSNDBUF:
- if (get_user(u, up))
+ if (get_user(s, sp))
 return -EFAULT;

- q->sk.sk_sndbuf = u;
+ q->sk.sk_sndbuf = s;
 return 0;

 case TUNGETVNETHDRSZ:
drivers/net/phy/fixed_phy.c | +1 -1

···
 struct fixed_mdio_bus *fmb = &platform_fmb;
 struct fixed_phy *fp;

- if (!phydev || !phydev->bus)
+ if (!phydev || phydev->bus != fmb->mii_bus)
 return -EINVAL;

 list_for_each_entry(fp, &fmb->phys, node) {
drivers/net/phy/marvell.c | +9

···
 int adv;
 int err;
 int lpa;
+ int lpagb;
 int status = 0;

 /* Update the link, but return if there
···
 if (lpa < 0)
 return lpa;

+ lpagb = phy_read(phydev, MII_STAT1000);
+ if (lpagb < 0)
+ return lpagb;
+
 adv = phy_read(phydev, MII_ADVERTISE);
 if (adv < 0)
 return adv;
+
+ phydev->lp_advertising = mii_stat1000_to_ethtool_lpa_t(lpagb) |
+ mii_lpa_to_ethtool_lpa_t(lpa);

 lpa &= adv;
···
 phydev->speed = SPEED_10;

 phydev->pause = phydev->asym_pause = 0;
+ phydev->lp_advertising = 0;
 }

 return 0;
drivers/net/phy/mdio-bcm-unimac.c | +1

···
 { .compatible = "brcm,unimac-mdio", },
 { /* sentinel */ },
 };
+ MODULE_DEVICE_TABLE(of, unimac_mdio_ids);

 static struct platform_driver unimac_mdio_driver = {
 .driver = {
drivers/net/phy/mdio-gpio.c | +1

···
 { .compatible = "virtual,mdio-gpio", },
 { /* sentinel */ }
 };
+ MODULE_DEVICE_TABLE(of, mdio_gpio_of_match);

 static struct platform_driver mdio_gpio_driver = {
 .probe = mdio_gpio_probe,
drivers/net/phy/mdio-mux.c | +13 -6

···
 if (!parent_bus_node)
 return -ENODEV;

- parent_bus = of_mdio_find_bus(parent_bus_node);
- if (parent_bus == NULL) {
- ret_val = -EPROBE_DEFER;
- goto err_parent_bus;
- }
-
 pb = devm_kzalloc(dev, sizeof(*pb), GFP_KERNEL);
 if (pb == NULL) {
 ret_val = -ENOMEM;
+ goto err_parent_bus;
+ }
+
+ parent_bus = of_mdio_find_bus(parent_bus_node);
+ if (parent_bus == NULL) {
+ ret_val = -EPROBE_DEFER;
 goto err_parent_bus;
 }
···
 dev_info(dev, "Version " DRV_VERSION "\n");
 return 0;
 }
+
+ /* balance the reference of_mdio_find_bus() took */
+ put_device(&pb->mii_bus->dev);
+
 err_parent_bus:
 of_node_put(parent_bus_node);
 return ret_val;
···
 mdiobus_free(cb->mii_bus);
 cb = cb->next;
 }
+
+ /* balance the reference of_mdio_find_bus() in mdio_mux_init() took */
+ put_device(&pb->mii_bus->dev);
 }
 EXPORT_SYMBOL_GPL(mdio_mux_uninit);
drivers/net/phy/mdio_bus.c | +21 -10

···
 * of_mdio_find_bus - Given an mii_bus node, find the mii_bus.
 * @mdio_bus_np: Pointer to the mii_bus.
 *
- * Returns a pointer to the mii_bus, or NULL if none found.
+ * Returns a reference to the mii_bus, or NULL if none found. The
+ * embedded struct device will have its reference count incremented,
+ * and this must be put once the bus is finished with.
 *
 * Because the association of a device_node and mii_bus is made via
 * of_mdiobus_register(), the mii_bus cannot be found before it is
···
 #endif

 /**
- * mdiobus_register - bring up all the PHYs on a given bus and attach them to bus
+ * __mdiobus_register - bring up all the PHYs on a given bus and attach them to bus
 * @bus: target mii_bus
+ * @owner: module containing bus accessor functions
 *
 * Description: Called by a bus driver to bring up all the PHYs
- * on a given bus, and attach them to the bus.
+ * on a given bus, and attach them to the bus. Drivers should use
+ * mdiobus_register() rather than __mdiobus_register() unless they
+ * need to pass a specific owner module.
 *
 * Returns 0 on success or < 0 on error.
 */
- int mdiobus_register(struct mii_bus *bus)
+ int __mdiobus_register(struct mii_bus *bus, struct module *owner)
 {
 int i, err;
···
 BUG_ON(bus->state != MDIOBUS_ALLOCATED &&
 bus->state != MDIOBUS_UNREGISTERED);

+ bus->owner = owner;
 bus->dev.parent = bus->parent;
 bus->dev.class = &mdio_bus_class;
 bus->dev.groups = NULL;
···
 error:
 while (--i >= 0) {
- if (bus->phy_map[i])
- device_unregister(&bus->phy_map[i]->dev);
+ struct phy_device *phydev = bus->phy_map[i];
+ if (phydev) {
+ phy_device_remove(phydev);
+ phy_device_free(phydev);
+ }
 }
 device_del(&bus->dev);
 return err;
 }
- EXPORT_SYMBOL(mdiobus_register);
+ EXPORT_SYMBOL(__mdiobus_register);

 void mdiobus_unregister(struct mii_bus *bus)
 {
···
 bus->state = MDIOBUS_UNREGISTERED;

 for (i = 0; i < PHY_MAX_ADDR; i++) {
- if (bus->phy_map[i])
- device_unregister(&bus->phy_map[i]->dev);
- bus->phy_map[i] = NULL;
+ struct phy_device *phydev = bus->phy_map[i];
+ if (phydev) {
+ phy_device_remove(phydev);
+ phy_device_free(phydev);
+ }
 }
 device_del(&bus->dev);
 }
+48 -14
drivers/net/phy/phy_device.c
··· 384 384 EXPORT_SYMBOL(phy_device_register); 385 385 386 386 /** 387 + * phy_device_remove - Remove a previously registered phy device from the MDIO bus 388 + * @phydev: phy_device structure to remove 389 + * 390 + * This doesn't free the phy_device itself, it merely reverses the effects 391 + * of phy_device_register(). Use phy_device_free() to free the device 392 + * after calling this function. 393 + */ 394 + void phy_device_remove(struct phy_device *phydev) 395 + { 396 + struct mii_bus *bus = phydev->bus; 397 + int addr = phydev->addr; 398 + 399 + device_del(&phydev->dev); 400 + bus->phy_map[addr] = NULL; 401 + } 402 + EXPORT_SYMBOL(phy_device_remove); 403 + 404 + /** 387 405 * phy_find_first - finds the first PHY device on the bus 388 406 * @bus: the target MII bus 389 407 */ ··· 596 578 * generic driver is used. The phy_device is given a ptr to 597 579 * the attaching device, and given a callback for link status 598 580 * change. The phy_device is returned to the attaching driver. 581 + * This function takes a reference on the phy device. 599 582 */ 600 583 int phy_attach_direct(struct net_device *dev, struct phy_device *phydev, 601 584 u32 flags, phy_interface_t interface) 602 585 { 586 + struct mii_bus *bus = phydev->bus; 603 587 struct device *d = &phydev->dev; 604 - struct module *bus_module; 605 588 int err; 589 + 590 + if (!try_module_get(bus->owner)) { 591 + dev_err(&dev->dev, "failed to get the bus module\n"); 592 + return -EIO; 593 + } 594 + 595 + get_device(d); 606 596 607 597 /* Assume that if there is no driver, that it doesn't 608 598 * exist, and we should use the genphy driver. ··· 626 600 err = device_bind_driver(d); 627 601 628 602 if (err) 629 - return err; 603 + goto error; 630 604 } 631 605 632 606 if (phydev->attached_dev) { 633 607 dev_err(&dev->dev, "PHY already attached\n"); 634 - return -EBUSY; 635 - } 636 - 637 - /* Increment the bus module reference count */ 638 - bus_module = phydev->bus->dev.driver ? 
639 - phydev->bus->dev.driver->owner : NULL; 640 - if (!try_module_get(bus_module)) { 641 - dev_err(&dev->dev, "failed to get the bus module\n"); 642 - return -EIO; 608 + err = -EBUSY; 609 + goto error; 643 610 } 644 611 645 612 phydev->attached_dev = dev; ··· 654 635 else 655 636 phy_resume(phydev); 656 637 638 + return err; 639 + 640 + error: 641 + put_device(d); 642 + module_put(bus->owner); 657 643 return err; 658 644 } 659 645 EXPORT_SYMBOL(phy_attach_direct); ··· 701 677 /** 702 678 * phy_detach - detach a PHY device from its network device 703 679 * @phydev: target phy_device struct 680 + * 681 + * This detaches the phy device from its network device and the phy 682 + * driver, and drops the reference count taken in phy_attach_direct(). 704 683 */ 705 684 void phy_detach(struct phy_device *phydev) 706 685 { 686 + struct mii_bus *bus; 707 687 int i; 708 - 709 - if (phydev->bus->dev.driver) 710 - module_put(phydev->bus->dev.driver->owner); 711 688 712 689 phydev->attached_dev->phydev = NULL; 713 690 phydev->attached_dev = NULL; ··· 725 700 break; 726 701 } 727 702 } 703 + 704 + /* 705 + * The phydev might go away on the put_device() below, so avoid 706 + * a use-after-free bug by reading the underlying bus first. 707 + */ 708 + bus = phydev->bus; 709 + 710 + put_device(&phydev->dev); 711 + module_put(bus->owner); 728 712 } 729 713 EXPORT_SYMBOL(phy_detach); 730 714
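The phy_device.c rework above moves the reference acquisition (`try_module_get()` on the bus owner plus `get_device()`) to the very top of `phy_attach_direct()`, and releases both in reverse order on every failure path. Here is a minimal userspace sketch of that acquire-first, roll-back-on-error pattern — the names and error values are illustrative stand-ins, not kernel APIs:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins for try_module_get()/get_device() style
 * refcounting, to illustrate the ordering used by the reworked
 * phy_attach_direct(). */
static int module_refs, device_refs;

static bool try_module_get_sim(void) { module_refs++; return true; }
static void module_put_sim(void)     { module_refs--; }
static void get_device_sim(void)     { device_refs++; }
static void put_device_sim(void)     { device_refs--; }

/* attach: take both references before any failure point, and release
 * them in reverse order on every error path, so the caller never has
 * to guess which references are held. */
static int attach_sim(bool already_attached)
{
	int err = 0;

	if (!try_module_get_sim())
		return -1;		/* -EIO in the real code */
	get_device_sim();

	if (already_attached) {
		err = -2;		/* -EBUSY in the real code */
		goto error;
	}
	return 0;

error:
	put_device_sim();
	module_put_sim();
	return err;
}
```

The matching `put_device()`/`module_put()` in `phy_detach()` then drops exactly one pair of references, which is why the detach path reads `phydev->bus` into a local before the final `put_device()`.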
-14
drivers/net/phy/vitesse.c
··· 66 66 #define PHY_ID_VSC8244 0x000fc6c0 67 67 #define PHY_ID_VSC8514 0x00070670 68 68 #define PHY_ID_VSC8574 0x000704a0 69 - #define PHY_ID_VSC8641 0x00070431 70 69 #define PHY_ID_VSC8662 0x00070660 71 70 #define PHY_ID_VSC8221 0x000fc550 72 71 #define PHY_ID_VSC8211 0x000fc4b0 ··· 272 273 .config_intr = &vsc82xx_config_intr, 273 274 .driver = { .owner = THIS_MODULE,}, 274 275 }, { 275 - .phy_id = PHY_ID_VSC8641, 276 - .name = "Vitesse VSC8641", 277 - .phy_id_mask = 0x000ffff0, 278 - .features = PHY_GBIT_FEATURES, 279 - .flags = PHY_HAS_INTERRUPT, 280 - .config_init = &vsc824x_config_init, 281 - .config_aneg = &vsc82x4_config_aneg, 282 - .read_status = &genphy_read_status, 283 - .ack_interrupt = &vsc824x_ack_interrupt, 284 - .config_intr = &vsc82xx_config_intr, 285 - .driver = { .owner = THIS_MODULE,}, 286 - }, { 287 276 .phy_id = PHY_ID_VSC8662, 288 277 .name = "Vitesse VSC8662", 289 278 .phy_id_mask = 0x000ffff0, ··· 318 331 { PHY_ID_VSC8244, 0x000fffc0 }, 319 332 { PHY_ID_VSC8514, 0x000ffff0 }, 320 333 { PHY_ID_VSC8574, 0x000ffff0 }, 321 - { PHY_ID_VSC8641, 0x000ffff0 }, 322 334 { PHY_ID_VSC8662, 0x000ffff0 }, 323 335 { PHY_ID_VSC8221, 0x000ffff0 }, 324 336 { PHY_ID_VSC8211, 0x000ffff0 },
+3 -1
drivers/net/ppp/ppp_generic.c
··· 2755 2755 */ 2756 2756 dev_net_set(dev, net); 2757 2757 2758 + rtnl_lock(); 2758 2759 mutex_lock(&pn->all_ppp_mutex); 2759 2760 2760 2761 if (unit < 0) { ··· 2786 2785 ppp->file.index = unit; 2787 2786 sprintf(dev->name, "ppp%d", unit); 2788 2787 2789 - ret = register_netdev(dev); 2788 + ret = register_netdevice(dev); 2790 2789 if (ret != 0) { 2791 2790 unit_put(&pn->units_idr, unit); 2792 2791 netdev_err(ppp->dev, "PPP: couldn't register device %s (%d)\n", ··· 2798 2797 2799 2798 atomic_inc(&ppp_unit_count); 2800 2799 mutex_unlock(&pn->all_ppp_mutex); 2800 + rtnl_unlock(); 2801 2801 2802 2802 *retp = 0; 2803 2803 return ppp;
+11
drivers/net/usb/Kconfig
··· 583 583 584 584 http://ubuntuforums.org/showpost.php?p=10589647&postcount=17 585 585 586 + config USB_NET_CH9200 587 + tristate "QinHeng CH9200 USB ethernet support" 588 + depends on USB_USBNET 589 + select MII 590 + help 591 + Choose this option if you have a USB ethernet adapter with a QinHeng 592 + CH9200 chipset. 593 + 594 + To compile this driver as a module, choose M here: the 595 + module will be called ch9200. 596 + 586 597 endif # USB_NET_DRIVERS
+1 -1
drivers/net/usb/Makefile
··· 38 38 obj-$(CONFIG_USB_VL600) += lg-vl600.o 39 39 obj-$(CONFIG_USB_NET_QMI_WWAN) += qmi_wwan.o 40 40 obj-$(CONFIG_USB_NET_CDC_MBIM) += cdc_mbim.o 41 - 41 + obj-$(CONFIG_USB_NET_CH9200) += ch9200.o
+432
drivers/net/usb/ch9200.c
··· 1 + /* 2 + * USB 10M/100M ethernet adapter 3 + * 4 + * This file is licensed under the terms of the GNU General Public License 5 + * version 2. This program is licensed "as is" without any warranty of any 6 + * kind, whether express or implied 7 + * 8 + */ 9 + 10 + #include <linux/kernel.h> 11 + #include <linux/module.h> 12 + #include <linux/sched.h> 13 + #include <linux/stddef.h> 14 + #include <linux/init.h> 15 + #include <linux/netdevice.h> 16 + #include <linux/etherdevice.h> 17 + #include <linux/ethtool.h> 18 + #include <linux/mii.h> 19 + #include <linux/usb.h> 20 + #include <linux/crc32.h> 21 + #include <linux/usb/usbnet.h> 22 + #include <linux/slab.h> 23 + 24 + #define CH9200_VID 0x1A86 25 + #define CH9200_PID_E092 0xE092 26 + 27 + #define CTRL_TIMEOUT_MS 1000 28 + 29 + #define CONTROL_TIMEOUT_MS 1000 30 + 31 + #define REQUEST_READ 0x0E 32 + #define REQUEST_WRITE 0x0F 33 + 34 + /* Address space: 35 + * 00-63 : MII 36 + * 64-128: MAC 37 + * 38 + * Note: all accesses must be 16-bit 39 + */ 40 + 41 + #define MAC_REG_CTRL 64 42 + #define MAC_REG_STATUS 66 43 + #define MAC_REG_INTERRUPT_MASK 68 44 + #define MAC_REG_PHY_COMMAND 70 45 + #define MAC_REG_PHY_DATA 72 46 + #define MAC_REG_STATION_L 74 47 + #define MAC_REG_STATION_M 76 48 + #define MAC_REG_STATION_H 78 49 + #define MAC_REG_HASH_L 80 50 + #define MAC_REG_HASH_M1 82 51 + #define MAC_REG_HASH_M2 84 52 + #define MAC_REG_HASH_H 86 53 + #define MAC_REG_THRESHOLD 88 54 + #define MAC_REG_FIFO_DEPTH 90 55 + #define MAC_REG_PAUSE 92 56 + #define MAC_REG_FLOW_CONTROL 94 57 + 58 + /* Control register bits 59 + * 60 + * Note: bits 13 and 15 are reserved 61 + */ 62 + #define LOOPBACK (0x01 << 14) 63 + #define BASE100X (0x01 << 12) 64 + #define MBPS_10 (0x01 << 11) 65 + #define DUPLEX_MODE (0x01 << 10) 66 + #define PAUSE_FRAME (0x01 << 9) 67 + #define PROMISCUOUS (0x01 << 8) 68 + #define MULTICAST (0x01 << 7) 69 + #define BROADCAST (0x01 << 6) 70 + #define HASH (0x01 << 5) 71 + #define APPEND_PAD (0x01 << 4) 72 + 
#define APPEND_CRC (0x01 << 3) 73 + #define TRANSMITTER_ACTION (0x01 << 2) 74 + #define RECEIVER_ACTION (0x01 << 1) 75 + #define DMA_ACTION (0x01 << 0) 76 + 77 + /* Status register bits 78 + * 79 + * Note: bits 7-15 are reserved 80 + */ 81 + #define ALIGNMENT (0x01 << 6) 82 + #define FIFO_OVER_RUN (0x01 << 5) 83 + #define FIFO_UNDER_RUN (0x01 << 4) 84 + #define RX_ERROR (0x01 << 3) 85 + #define RX_COMPLETE (0x01 << 2) 86 + #define TX_ERROR (0x01 << 1) 87 + #define TX_COMPLETE (0x01 << 0) 88 + 89 + /* FIFO depth register bits 90 + * 91 + * Note: bits 6 and 14 are reserved 92 + */ 93 + 94 + #define ETH_TXBD (0x01 << 15) 95 + #define ETN_TX_FIFO_DEPTH (0x01 << 8) 96 + #define ETH_RXBD (0x01 << 7) 97 + #define ETH_RX_FIFO_DEPTH (0x01 << 0) 98 + 99 + static int control_read(struct usbnet *dev, 100 + unsigned char request, unsigned short value, 101 + unsigned short index, void *data, unsigned short size, 102 + int timeout) 103 + { 104 + unsigned char *buf = NULL; 105 + unsigned char request_type; 106 + int err = 0; 107 + 108 + if (request == REQUEST_READ) 109 + request_type = (USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_OTHER); 110 + else 111 + request_type = (USB_DIR_IN | USB_TYPE_VENDOR | 112 + USB_RECIP_DEVICE); 113 + 114 + netdev_dbg(dev->net, "Control_read() index=0x%02x size=%d\n", 115 + index, size); 116 + 117 + buf = kmalloc(size, GFP_KERNEL); 118 + if (!buf) { 119 + err = -ENOMEM; 120 + goto err_out; 121 + } 122 + 123 + err = usb_control_msg(dev->udev, 124 + usb_rcvctrlpipe(dev->udev, 0), 125 + request, request_type, value, index, buf, size, 126 + timeout); 127 + if (err == size) 128 + memcpy(data, buf, size); 129 + else if (err >= 0) 130 + err = -EINVAL; 131 + kfree(buf); 132 + 133 + return err; 134 + 135 + err_out: 136 + return err; 137 + } 138 + 139 + static int control_write(struct usbnet *dev, unsigned char request, 140 + unsigned short value, unsigned short index, 141 + void *data, unsigned short size, int timeout) 142 + { 143 + unsigned char *buf = NULL; 144 
+ unsigned char request_type; 145 + int err = 0; 146 + 147 + if (request == REQUEST_WRITE) 148 + request_type = (USB_DIR_OUT | USB_TYPE_VENDOR | 149 + USB_RECIP_OTHER); 150 + else 151 + request_type = (USB_DIR_OUT | USB_TYPE_VENDOR | 152 + USB_RECIP_DEVICE); 153 + 154 + netdev_dbg(dev->net, "Control_write() index=0x%02x size=%d\n", 155 + index, size); 156 + 157 + if (data) { 158 + buf = kmalloc(size, GFP_KERNEL); 159 + if (!buf) { 160 + err = -ENOMEM; 161 + goto err_out; 162 + } 163 + memcpy(buf, data, size); 164 + } 165 + 166 + err = usb_control_msg(dev->udev, 167 + usb_sndctrlpipe(dev->udev, 0), 168 + request, request_type, value, index, buf, size, 169 + timeout); 170 + if (err >= 0 && err < size) 171 + err = -EINVAL; 172 + kfree(buf); 173 + 174 + return 0; 175 + 176 + err_out: 177 + return err; 178 + } 179 + 180 + static int ch9200_mdio_read(struct net_device *netdev, int phy_id, int loc) 181 + { 182 + struct usbnet *dev = netdev_priv(netdev); 183 + unsigned char buff[2]; 184 + 185 + netdev_dbg(netdev, "ch9200_mdio_read phy_id:%02x loc:%02x\n", 186 + phy_id, loc); 187 + 188 + if (phy_id != 0) 189 + return -ENODEV; 190 + 191 + control_read(dev, REQUEST_READ, 0, loc * 2, buff, 0x02, 192 + CONTROL_TIMEOUT_MS); 193 + 194 + return (buff[0] | buff[1] << 8); 195 + } 196 + 197 + static void ch9200_mdio_write(struct net_device *netdev, 198 + int phy_id, int loc, int val) 199 + { 200 + struct usbnet *dev = netdev_priv(netdev); 201 + unsigned char buff[2]; 202 + 203 + netdev_dbg(netdev, "ch9200_mdio_write() phy_id=%02x loc:%02x\n", 204 + phy_id, loc); 205 + 206 + if (phy_id != 0) 207 + return; 208 + 209 + buff[0] = (unsigned char)val; 210 + buff[1] = (unsigned char)(val >> 8); 211 + 212 + control_write(dev, REQUEST_WRITE, 0, loc * 2, buff, 0x02, 213 + CONTROL_TIMEOUT_MS); 214 + } 215 + 216 + static int ch9200_link_reset(struct usbnet *dev) 217 + { 218 + struct ethtool_cmd ecmd; 219 + 220 + mii_check_media(&dev->mii, 1, 1); 221 + mii_ethtool_gset(&dev->mii, &ecmd); 222 + 
223 + netdev_dbg(dev->net, "link_reset() speed:%d duplex:%d\n", 224 + ecmd.speed, ecmd.duplex); 225 + 226 + return 0; 227 + } 228 + 229 + static void ch9200_status(struct usbnet *dev, struct urb *urb) 230 + { 231 + int link; 232 + unsigned char *buf; 233 + 234 + if (urb->actual_length < 16) 235 + return; 236 + 237 + buf = urb->transfer_buffer; 238 + link = !!(buf[0] & 0x01); 239 + 240 + if (link) { 241 + netif_carrier_on(dev->net); 242 + usbnet_defer_kevent(dev, EVENT_LINK_RESET); 243 + } else { 244 + netif_carrier_off(dev->net); 245 + } 246 + } 247 + 248 + static struct sk_buff *ch9200_tx_fixup(struct usbnet *dev, struct sk_buff *skb, 249 + gfp_t flags) 250 + { 251 + int i = 0; 252 + int len = 0; 253 + int tx_overhead = 0; 254 + 255 + tx_overhead = 0x40; 256 + 257 + len = skb->len; 258 + if (skb_headroom(skb) < tx_overhead) { 259 + struct sk_buff *skb2; 260 + 261 + skb2 = skb_copy_expand(skb, tx_overhead, 0, flags); 262 + dev_kfree_skb_any(skb); 263 + skb = skb2; 264 + if (!skb) 265 + return NULL; 266 + } 267 + 268 + __skb_push(skb, tx_overhead); 269 + /* usbnet adds padding if length is a multiple of packet size 270 + * if so, adjust length value in header 271 + */ 272 + if ((skb->len % dev->maxpacket) == 0) 273 + len++; 274 + 275 + skb->data[0] = len; 276 + skb->data[1] = len >> 8; 277 + skb->data[2] = 0x00; 278 + skb->data[3] = 0x80; 279 + 280 + for (i = 4; i < 48; i++) 281 + skb->data[i] = 0x00; 282 + 283 + skb->data[48] = len; 284 + skb->data[49] = len >> 8; 285 + skb->data[50] = 0x00; 286 + skb->data[51] = 0x80; 287 + 288 + for (i = 52; i < 64; i++) 289 + skb->data[i] = 0x00; 290 + 291 + return skb; 292 + } 293 + 294 + static int ch9200_rx_fixup(struct usbnet *dev, struct sk_buff *skb) 295 + { 296 + int len = 0; 297 + int rx_overhead = 0; 298 + 299 + rx_overhead = 64; 300 + 301 + if (unlikely(skb->len < rx_overhead)) { 302 + dev_err(&dev->udev->dev, "unexpected tiny rx frame\n"); 303 + return 0; 304 + } 305 + 306 + len = (skb->data[skb->len - 16] | 
skb->data[skb->len - 15] << 8); 307 + skb_trim(skb, len); 308 + 309 + return 1; 310 + } 311 + 312 + static int get_mac_address(struct usbnet *dev, unsigned char *data) 313 + { 314 + int err = 0; 315 + unsigned char mac_addr[0x06]; 316 + int rd_mac_len = 0; 317 + 318 + netdev_dbg(dev->net, "get_mac_address:\n\tusbnet VID:%0x PID:%0x\n", 319 + dev->udev->descriptor.idVendor, 320 + dev->udev->descriptor.idProduct); 321 + 322 + memset(mac_addr, 0, sizeof(mac_addr)); 323 + rd_mac_len = control_read(dev, REQUEST_READ, 0, 324 + MAC_REG_STATION_L, mac_addr, 0x02, 325 + CONTROL_TIMEOUT_MS); 326 + rd_mac_len += control_read(dev, REQUEST_READ, 0, MAC_REG_STATION_M, 327 + mac_addr + 2, 0x02, CONTROL_TIMEOUT_MS); 328 + rd_mac_len += control_read(dev, REQUEST_READ, 0, MAC_REG_STATION_H, 329 + mac_addr + 4, 0x02, CONTROL_TIMEOUT_MS); 330 + if (rd_mac_len != ETH_ALEN) 331 + err = -EINVAL; 332 + 333 + data[0] = mac_addr[5]; 334 + data[1] = mac_addr[4]; 335 + data[2] = mac_addr[3]; 336 + data[3] = mac_addr[2]; 337 + data[4] = mac_addr[1]; 338 + data[5] = mac_addr[0]; 339 + 340 + return err; 341 + } 342 + 343 + static int ch9200_bind(struct usbnet *dev, struct usb_interface *intf) 344 + { 345 + int retval = 0; 346 + unsigned char data[2]; 347 + 348 + retval = usbnet_get_endpoints(dev, intf); 349 + if (retval) 350 + return retval; 351 + 352 + dev->mii.dev = dev->net; 353 + dev->mii.mdio_read = ch9200_mdio_read; 354 + dev->mii.mdio_write = ch9200_mdio_write; 355 + dev->mii.reg_num_mask = 0x1f; 356 + 357 + dev->mii.phy_id_mask = 0x1f; 358 + 359 + dev->hard_mtu = dev->net->mtu + dev->net->hard_header_len; 360 + dev->rx_urb_size = 24 * 64 + 16; 361 + mii_nway_restart(&dev->mii); 362 + 363 + data[0] = 0x01; 364 + data[1] = 0x0F; 365 + retval = control_write(dev, REQUEST_WRITE, 0, MAC_REG_THRESHOLD, data, 366 + 0x02, CONTROL_TIMEOUT_MS); 367 + 368 + data[0] = 0xA0; 369 + data[1] = 0x90; 370 + retval = control_write(dev, REQUEST_WRITE, 0, MAC_REG_FIFO_DEPTH, data, 371 + 0x02, 
CONTROL_TIMEOUT_MS); 372 + 373 + data[0] = 0x30; 374 + data[1] = 0x00; 375 + retval = control_write(dev, REQUEST_WRITE, 0, MAC_REG_PAUSE, data, 376 + 0x02, CONTROL_TIMEOUT_MS); 377 + 378 + data[0] = 0x17; 379 + data[1] = 0xD8; 380 + retval = control_write(dev, REQUEST_WRITE, 0, MAC_REG_FLOW_CONTROL, 381 + data, 0x02, CONTROL_TIMEOUT_MS); 382 + 383 + /* Undocumented register */ 384 + data[0] = 0x01; 385 + data[1] = 0x00; 386 + retval = control_write(dev, REQUEST_WRITE, 0, 254, data, 0x02, 387 + CONTROL_TIMEOUT_MS); 388 + 389 + data[0] = 0x5F; 390 + data[1] = 0x0D; 391 + retval = control_write(dev, REQUEST_WRITE, 0, MAC_REG_CTRL, data, 0x02, 392 + CONTROL_TIMEOUT_MS); 393 + 394 + retval = get_mac_address(dev, dev->net->dev_addr); 395 + 396 + return retval; 397 + } 398 + 399 + static const struct driver_info ch9200_info = { 400 + .description = "CH9200 USB to Network Adaptor", 401 + .flags = FLAG_ETHER, 402 + .bind = ch9200_bind, 403 + .rx_fixup = ch9200_rx_fixup, 404 + .tx_fixup = ch9200_tx_fixup, 405 + .status = ch9200_status, 406 + .link_reset = ch9200_link_reset, 407 + .reset = ch9200_link_reset, 408 + }; 409 + 410 + static const struct usb_device_id ch9200_products[] = { 411 + { 412 + USB_DEVICE(0x1A86, 0xE092), 413 + .driver_info = (unsigned long)&ch9200_info, 414 + }, 415 + {}, 416 + }; 417 + 418 + MODULE_DEVICE_TABLE(usb, ch9200_products); 419 + 420 + static struct usb_driver ch9200_driver = { 421 + .name = "ch9200", 422 + .id_table = ch9200_products, 423 + .probe = usbnet_probe, 424 + .disconnect = usbnet_disconnect, 425 + .suspend = usbnet_suspend, 426 + .resume = usbnet_resume, 427 + }; 428 + 429 + module_usb_driver(ch9200_driver); 430 + 431 + MODULE_DESCRIPTION("QinHeng CH9200 USB Network device"); 432 + MODULE_LICENSE("GPL");
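The ch9200 `tx_fixup` above prepends a 64-byte header carrying the frame length at offsets 0/1 and again at 48/49, and bumps the advertised length by one when the full USB transfer would be an exact multiple of the endpoint's max packet size (because usbnet then appends a pad byte). A small userspace sketch of just that header construction, under the stated assumptions (the helper name and parameters are illustrative):

```c
#include <assert.h>
#include <string.h>

#define TX_OVERHEAD 64	/* header bytes the device expects in front of the frame */

/* Build a CH9200-style 64-byte TX header for a payload of payload_len
 * bytes, where maxpacket is the USB endpoint's max packet size.
 * Mirrors the length adjustment in ch9200_tx_fixup(): if the total
 * transfer (payload + header) is an exact multiple of maxpacket,
 * usbnet appends a pad byte, so the advertised length is bumped by one. */
static void build_tx_header(unsigned char hdr[TX_OVERHEAD],
			    int payload_len, int maxpacket)
{
	int len = payload_len;

	if ((payload_len + TX_OVERHEAD) % maxpacket == 0)
		len++;

	memset(hdr, 0, TX_OVERHEAD);
	hdr[0]  = len & 0xff;
	hdr[1]  = (len >> 8) & 0xff;
	hdr[3]  = 0x80;
	hdr[48] = len & 0xff;
	hdr[49] = (len >> 8) & 0xff;
	hdr[51] = 0x80;
}
```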
+2 -1
drivers/net/vrf.c
··· 193 193 .flowi4_oif = vrf_dev->ifindex, 194 194 .flowi4_iif = LOOPBACK_IFINDEX, 195 195 .flowi4_tos = RT_TOS(ip4h->tos), 196 - .flowi4_flags = FLOWI_FLAG_ANYSRC | FLOWI_FLAG_VRFSRC, 196 + .flowi4_flags = FLOWI_FLAG_ANYSRC | FLOWI_FLAG_VRFSRC | 197 + FLOWI_FLAG_SKIP_NH_OIF, 197 198 .daddr = ip4h->daddr, 198 199 }; 199 200
+9 -6
drivers/net/vxlan.c
··· 2392 2392 2393 2393 eth_hw_addr_random(dev); 2394 2394 ether_setup(dev); 2395 - if (vxlan->default_dst.remote_ip.sa.sa_family == AF_INET6) 2396 - dev->needed_headroom = ETH_HLEN + VXLAN6_HEADROOM; 2397 - else 2398 - dev->needed_headroom = ETH_HLEN + VXLAN_HEADROOM; 2399 2395 2400 2396 dev->netdev_ops = &vxlan_netdev_ops; 2401 2397 dev->destructor = free_netdev; ··· 2636 2640 dst->remote_ip.sa.sa_family = AF_INET; 2637 2641 2638 2642 if (dst->remote_ip.sa.sa_family == AF_INET6 || 2639 - vxlan->cfg.saddr.sa.sa_family == AF_INET6) 2643 + vxlan->cfg.saddr.sa.sa_family == AF_INET6) { 2644 + if (!IS_ENABLED(CONFIG_IPV6)) 2645 + return -EPFNOSUPPORT; 2640 2646 use_ipv6 = true; 2647 + } 2641 2648 2642 2649 if (conf->remote_ifindex) { 2643 2650 struct net_device *lowerdev ··· 2669 2670 2670 2671 dev->needed_headroom = lowerdev->hard_header_len + 2671 2672 (use_ipv6 ? VXLAN6_HEADROOM : VXLAN_HEADROOM); 2672 - } else if (use_ipv6) 2673 + } else if (use_ipv6) { 2673 2674 vxlan->flags |= VXLAN_F_IPV6; 2675 + dev->needed_headroom = ETH_HLEN + VXLAN6_HEADROOM; 2676 + } else { 2677 + dev->needed_headroom = ETH_HLEN + VXLAN_HEADROOM; 2678 + } 2674 2679 2675 2680 memcpy(&vxlan->cfg, conf, sizeof(*conf)); 2676 2681 if (!vxlan->cfg.dst_port)
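The vxlan change above defers the `needed_headroom` decision from setup time (when the remote address family may not have been parsed yet) to `vxlan_dev_configure()`, where both the address family and any lower device are known. A sketch of the resulting selection logic, using the kernel's actual headroom arithmetic but an illustrative helper name:

```c
#include <assert.h>

#define ETH_HLEN 14
#define VXLAN_HEADROOM  (20 + 8 + 8 + ETH_HLEN)	/* IPv4 + UDP + VXLAN + inner Ethernet */
#define VXLAN6_HEADROOM (40 + 8 + 8 + ETH_HLEN)	/* IPv6 + UDP + VXLAN + inner Ethernet */

/* Mirrors the reworked configure path: pick the tunnel headroom for the
 * address family actually in use, and base it on the lower device's
 * hard_header_len when one exists.  lower_hlen < 0 means there is no
 * lower device (the remote_ifindex-less case). */
static int vxlan_headroom(int use_ipv6, int lower_hlen)
{
	int tun = use_ipv6 ? VXLAN6_HEADROOM : VXLAN_HEADROOM;

	if (lower_hlen >= 0)
		return lower_hlen + tun;
	return ETH_HLEN + tun;
}
```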
+23 -4
drivers/of/of_mdio.c
··· 197 197 * of_phy_find_device - Give a PHY node, find the phy_device 198 198 * @phy_np: Pointer to the phy's device tree node 199 199 * 200 - * Returns a pointer to the phy_device. 200 + * If successful, returns a pointer to the phy_device with the embedded 201 + * struct device refcount incremented by one, or NULL on failure. 201 202 */ 202 203 struct phy_device *of_phy_find_device(struct device_node *phy_np) 203 204 { ··· 218 217 * @hndlr: Link state callback for the network device 219 218 * @iface: PHY data interface type 220 219 * 221 - * Returns a pointer to the phy_device if successful. NULL otherwise 220 + * If successful, returns a pointer to the phy_device with the embedded 221 + * struct device refcount incremented by one, or NULL on failure. The 222 + * refcount must be dropped by calling phy_disconnect() or phy_detach(). 222 223 */ 223 224 struct phy_device *of_phy_connect(struct net_device *dev, 224 225 struct device_node *phy_np, ··· 228 225 phy_interface_t iface) 229 226 { 230 227 struct phy_device *phy = of_phy_find_device(phy_np); 228 + int ret; 231 229 232 230 if (!phy) 233 231 return NULL; 234 232 235 233 phy->dev_flags = flags; 236 234 237 - return phy_connect_direct(dev, phy, hndlr, iface) ? NULL : phy; 235 + ret = phy_connect_direct(dev, phy, hndlr, iface); 236 + 237 + /* refcount is held by phy_connect_direct() on success */ 238 + put_device(&phy->dev); 239 + 240 + return ret ? NULL : phy; 238 241 } 239 242 EXPORT_SYMBOL(of_phy_connect); 240 243 ··· 250 241 * @phy_np: Node pointer for the PHY 251 242 * @flags: flags to pass to the PHY 252 243 * @iface: PHY data interface type 244 + * 245 + * If successful, returns a pointer to the phy_device with the embedded 246 + * struct device refcount incremented by one, or NULL on failure. The 247 + * refcount must be dropped by calling phy_disconnect() or phy_detach(). 
253 248 */ 254 249 struct phy_device *of_phy_attach(struct net_device *dev, 255 250 struct device_node *phy_np, u32 flags, 256 251 phy_interface_t iface) 257 252 { 258 253 struct phy_device *phy = of_phy_find_device(phy_np); 254 + int ret; 259 255 260 256 if (!phy) 261 257 return NULL; 262 258 263 - return phy_attach_direct(dev, phy, flags, iface) ? NULL : phy; 259 + ret = phy_attach_direct(dev, phy, flags, iface); 260 + 261 + /* refcount is held by phy_attach_direct() on success */ 262 + put_device(&phy->dev); 263 + 264 + return ret ? NULL : phy; 264 265 } 265 266 EXPORT_SYMBOL(of_phy_attach); 266 267
+1
include/linux/netdevice.h
··· 507 507 BUG_ON(!test_bit(NAPI_STATE_SCHED, &n->state)); 508 508 smp_mb__before_atomic(); 509 509 clear_bit(NAPI_STATE_SCHED, &n->state); 510 + clear_bit(NAPI_STATE_NPSVC, &n->state); 510 511 } 511 512 512 513 #ifdef CONFIG_SMP
+5 -1
include/linux/phy.h
··· 19 19 #include <linux/spinlock.h> 20 20 #include <linux/ethtool.h> 21 21 #include <linux/mii.h> 22 + #include <linux/module.h> 22 23 #include <linux/timer.h> 23 24 #include <linux/workqueue.h> 24 25 #include <linux/mod_devicetable.h> ··· 154 153 * PHYs should register using this structure 155 154 */ 156 155 struct mii_bus { 156 + struct module *owner; 157 157 const char *name; 158 158 char id[MII_BUS_ID_SIZE]; 159 159 void *priv; ··· 200 198 return mdiobus_alloc_size(0); 201 199 } 202 200 203 - int mdiobus_register(struct mii_bus *bus); 201 + int __mdiobus_register(struct mii_bus *bus, struct module *owner); 202 + #define mdiobus_register(bus) __mdiobus_register(bus, THIS_MODULE) 204 203 void mdiobus_unregister(struct mii_bus *bus); 205 204 void mdiobus_free(struct mii_bus *bus); 206 205 struct mii_bus *devm_mdiobus_alloc_size(struct device *dev, int sizeof_priv); ··· 745 742 struct phy_c45_device_ids *c45_ids); 746 743 struct phy_device *get_phy_device(struct mii_bus *bus, int addr, bool is_c45); 747 744 int phy_device_register(struct phy_device *phy); 745 + void phy_device_remove(struct phy_device *phydev); 748 746 int phy_init_hw(struct phy_device *phydev); 749 747 int phy_suspend(struct phy_device *phydev); 750 748 int phy_resume(struct phy_device *phydev);
+6 -3
include/linux/skbuff.h
··· 179 179 u8 bridged_dnat:1; 180 180 __u16 frag_max_size; 181 181 struct net_device *physindev; 182 + 183 + /* always valid & non-NULL from FORWARD on, for physdev match */ 184 + struct net_device *physoutdev; 182 185 union { 183 186 /* prerouting: detect dnat in orig/reply direction */ 184 187 __be32 ipv4_daddr; ··· 192 189 * skb is out in neigh layer. 193 190 */ 194 191 char neigh_header[8]; 195 - 196 - /* always valid & non-NULL from FORWARD on, for physdev match */ 197 - struct net_device *physoutdev; 198 192 }; 199 193 }; 200 194 #endif ··· 2707 2707 { 2708 2708 if (skb->ip_summed == CHECKSUM_COMPLETE) 2709 2709 skb->csum = csum_sub(skb->csum, csum_partial(start, len, 0)); 2710 + else if (skb->ip_summed == CHECKSUM_PARTIAL && 2711 + skb_checksum_start_offset(skb) <= len) 2712 + skb->ip_summed = CHECKSUM_NONE; 2710 2713 } 2711 2714 2712 2715 unsigned char *skb_pull_rcsum(struct sk_buff *skb, unsigned int len);
+1
include/net/flow.h
··· 35 35 #define FLOWI_FLAG_ANYSRC 0x01 36 36 #define FLOWI_FLAG_KNOWN_NH 0x02 37 37 #define FLOWI_FLAG_VRFSRC 0x04 38 + #define FLOWI_FLAG_SKIP_NH_OIF 0x08 38 39 __u32 flowic_secid; 39 40 struct flowi_tunnel flowic_tun_key; 40 41 };
+13 -1
include/net/inet_timewait_sock.h
··· 110 110 void __inet_twsk_hashdance(struct inet_timewait_sock *tw, struct sock *sk, 111 111 struct inet_hashinfo *hashinfo); 112 112 113 - void inet_twsk_schedule(struct inet_timewait_sock *tw, const int timeo); 113 + void __inet_twsk_schedule(struct inet_timewait_sock *tw, int timeo, 114 + bool rearm); 115 + 116 + static inline void inet_twsk_schedule(struct inet_timewait_sock *tw, int timeo) 117 + { 118 + __inet_twsk_schedule(tw, timeo, false); 119 + } 120 + 121 + static inline void inet_twsk_reschedule(struct inet_timewait_sock *tw, int timeo) 122 + { 123 + __inet_twsk_schedule(tw, timeo, true); 124 + } 125 + 126 + 114 126 void inet_twsk_deschedule_put(struct inet_timewait_sock *tw); 115 128 void inet_twsk_purge(struct inet_hashinfo *hashinfo,
+2 -1
include/net/ip6_fib.h
··· 275 275 struct nl_info *info, struct mx6_config *mxc); 276 276 int fib6_del(struct rt6_info *rt, struct nl_info *info); 277 277 278 - void inet6_rt_notify(int event, struct rt6_info *rt, struct nl_info *info); 278 + void inet6_rt_notify(int event, struct rt6_info *rt, struct nl_info *info, 279 + unsigned int flags); 279 280 280 281 void fib6_run_gc(unsigned long expires, struct net *net, bool force); 281 282
+12 -5
include/net/ip6_tunnel.h
··· 32 32 __be32 o_key; 33 33 }; 34 34 35 + struct ip6_tnl_dst { 36 + seqlock_t lock; 37 + struct dst_entry __rcu *dst; 38 + u32 cookie; 39 + }; 40 + 35 41 /* IPv6 tunnel */ 36 42 struct ip6_tnl { 37 43 struct ip6_tnl __rcu *next; /* next tunnel in list */ ··· 45 39 struct net *net; /* netns for packet i/o */ 46 40 struct __ip6_tnl_parm parms; /* tunnel configuration parameters */ 47 41 struct flowi fl; /* flowi template for xmit */ 48 - struct dst_entry *dst_cache; /* cached dst */ 49 - u32 dst_cookie; 42 + struct ip6_tnl_dst __percpu *dst_cache; /* cached dst */ 50 43 51 44 int err_count; 52 45 unsigned long err_time; ··· 65 60 __u8 encap_limit; /* tunnel encapsulation limit */ 66 61 } __packed; 67 62 68 - struct dst_entry *ip6_tnl_dst_check(struct ip6_tnl *t); 63 + struct dst_entry *ip6_tnl_dst_get(struct ip6_tnl *t); 64 + int ip6_tnl_dst_init(struct ip6_tnl *t); 65 + void ip6_tnl_dst_destroy(struct ip6_tnl *t); 69 66 void ip6_tnl_dst_reset(struct ip6_tnl *t); 70 - void ip6_tnl_dst_store(struct ip6_tnl *t, struct dst_entry *dst); 67 + void ip6_tnl_dst_set(struct ip6_tnl *t, struct dst_entry *dst); 71 68 int ip6_tnl_rcv_ctl(struct ip6_tnl *t, const struct in6_addr *laddr, 72 69 const struct in6_addr *raddr); 73 70 int ip6_tnl_xmit_ctl(struct ip6_tnl *t, const struct in6_addr *laddr, ··· 86 79 struct net_device_stats *stats = &dev->stats; 87 80 int pkt_len, err; 88 81 89 - pkt_len = skb->len; 82 + pkt_len = skb->len - skb_inner_network_offset(skb); 90 83 err = ip6_local_out_sk(sk, skb); 91 84 92 85 if (net_xmit_eval(err) == 0) {
+19 -11
include/net/ip_fib.h
··· 236 236 rcu_read_lock(); 237 237 238 238 tb = fib_get_table(net, RT_TABLE_MAIN); 239 - if (tb && !fib_table_lookup(tb, flp, res, flags | FIB_LOOKUP_NOREF)) 240 - err = 0; 239 + if (tb) 240 + err = fib_table_lookup(tb, flp, res, flags | FIB_LOOKUP_NOREF); 241 + 242 + if (err == -EAGAIN) 243 + err = -ENETUNREACH; 241 244 242 245 rcu_read_unlock(); 243 246 ··· 261 258 struct fib_result *res, unsigned int flags) 262 259 { 263 260 struct fib_table *tb; 264 - int err; 261 + int err = -ENETUNREACH; 265 262 266 263 flags |= FIB_LOOKUP_NOREF; 267 264 if (net->ipv4.fib_has_custom_rules) ··· 271 268 272 269 res->tclassid = 0; 273 270 274 - for (err = 0; !err; err = -ENETUNREACH) { 275 - tb = rcu_dereference_rtnl(net->ipv4.fib_main); 276 - if (tb && !fib_table_lookup(tb, flp, res, flags)) 277 - break; 271 + tb = rcu_dereference_rtnl(net->ipv4.fib_main); 272 + if (tb) 273 + err = fib_table_lookup(tb, flp, res, flags); 278 274 279 - tb = rcu_dereference_rtnl(net->ipv4.fib_default); 280 - if (tb && !fib_table_lookup(tb, flp, res, flags)) 281 - break; 282 - } 275 + if (!err) 276 + goto out; 277 + 278 + tb = rcu_dereference_rtnl(net->ipv4.fib_default); 279 + if (tb) 280 + err = fib_table_lookup(tb, flp, res, flags); 281 + 282 + out: 283 + if (err == -EAGAIN) 284 + err = -ENETUNREACH; 283 285 284 286 rcu_read_unlock(); 285 287
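The ip_fib.h rework above replaces the `for (err = 0; !err; err = -ENETUNREACH)` loop with straight-line lookups of the main and default tables, and — the actual fix — maps the trie's internal -EAGAIN ("try the next table") result to -ENETUNREACH before it can escape to callers. A userspace sketch of that control flow, with a hypothetical `table_lookup()` standing in for `fib_table_lookup()`:

```c
#include <assert.h>
#include <errno.h>

/* Hypothetical table lookup: 0 on a hit, -EAGAIN when the trie wants
 * the caller to try the next table, mimicking fib_table_lookup(). */
static int table_lookup(int table_has_route)
{
	return table_has_route ? 0 : -EAGAIN;
}

/* Mirrors the reworked fib_lookup(): try the main table, fall back to
 * the default table, and translate the internal -EAGAIN result to
 * -ENETUNREACH before returning to the caller. */
static int fib_lookup_sim(int main_has_route, int default_has_route)
{
	int err;

	err = table_lookup(main_has_route);
	if (!err)
		goto out;

	err = table_lookup(default_has_route);
out:
	if (err == -EAGAIN)
		err = -ENETUNREACH;
	return err;
}
```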
+2
include/net/ip_tunnels.h
··· 276 276 int iptunnel_xmit(struct sock *sk, struct rtable *rt, struct sk_buff *skb, 277 277 __be32 src, __be32 dst, u8 proto, 278 278 u8 tos, u8 ttl, __be16 df, bool xnet); 279 + struct metadata_dst *iptunnel_metadata_reply(struct metadata_dst *md, 280 + gfp_t flags); 279 281 280 282 struct sk_buff *iptunnel_handle_offloads(struct sk_buff *skb, bool gre_csum, 281 283 int gso_type_mask);
+1 -1
include/net/route.h
··· 255 255 flow_flags |= FLOWI_FLAG_ANYSRC; 256 256 257 257 if (netif_index_is_vrf(sock_net(sk), oif)) 258 - flow_flags |= FLOWI_FLAG_VRFSRC; 258 + flow_flags |= FLOWI_FLAG_VRFSRC | FLOWI_FLAG_SKIP_NH_OIF; 259 259 260 260 flowi4_init_output(fl4, oif, sk->sk_mark, tos, RT_SCOPE_UNIVERSE, 261 261 protocol, flow_flags, dst, src, dport, sport);
-4
include/uapi/linux/lwtunnel.h
··· 21 21 LWTUNNEL_IP_SRC, 22 22 LWTUNNEL_IP_TTL, 23 23 LWTUNNEL_IP_TOS, 24 - LWTUNNEL_IP_SPORT, 25 - LWTUNNEL_IP_DPORT, 26 24 LWTUNNEL_IP_FLAGS, 27 25 __LWTUNNEL_IP_MAX, 28 26 }; ··· 34 36 LWTUNNEL_IP6_SRC, 35 37 LWTUNNEL_IP6_HOPLIMIT, 36 38 LWTUNNEL_IP6_TC, 37 - LWTUNNEL_IP6_SPORT, 38 - LWTUNNEL_IP6_DPORT, 39 39 LWTUNNEL_IP6_FLAGS, 40 40 __LWTUNNEL_IP6_MAX, 41 41 };
+1 -4
lib/rhashtable.c
··· 187 187 head = rht_dereference_bucket(new_tbl->buckets[new_hash], 188 188 new_tbl, new_hash); 189 189 190 - if (rht_is_a_nulls(head)) 191 - INIT_RHT_NULLS_HEAD(entry->next, ht, new_hash); 192 - else 193 - RCU_INIT_POINTER(entry->next, head); 190 + RCU_INIT_POINTER(entry->next, head); 194 191 195 192 rcu_assign_pointer(new_tbl->buckets[new_hash], entry); 196 193 spin_unlock(new_bucket_lock);
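The rhashtable fix above drops the "is the bucket empty?" branch: an empty bucket already contains the nulls sentinel, so the just-read head can be linked unconditionally with a single write to `entry->next` (the removed branch could write it twice and race with concurrent readers). A simplified userspace sketch of that unconditional-link insertion — the sentinel value and names are illustrative, not the kernel's nulls encoding:

```c
#include <assert.h>
#include <stddef.h>

#define NULLS_MARKER ((struct node *)0x1)	/* sentinel stored in an empty bucket */

struct node {
	struct node *next;
	int key;
};

/* Mirrors the fixed rhashtable_rehash_one(): read the bucket head once
 * and link to it unconditionally.  Because an empty bucket holds the
 * sentinel rather than NULL, no separate "empty" branch is needed, and
 * entry->next is written exactly once. */
static void bucket_insert(struct node **bucket, struct node *entry)
{
	struct node *head = *bucket;

	entry->next = head;	/* works for sentinel and real nodes alike */
	*bucket = entry;
}
```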
+3
net/atm/clip.c
··· 317 317 318 318 static int clip_encap(struct atm_vcc *vcc, int mode) 319 319 { 320 + if (!CLIP_VCC(vcc)) 321 + return -EBADFD; 322 + 320 323 CLIP_VCC(vcc)->encap = mode; 321 324 return 0; 322 325 }
+6 -6
net/bluetooth/smp.c
··· 2311 2311 if (!conn) 2312 2312 return 1; 2313 2313 2314 - chan = conn->smp; 2315 - if (!chan) { 2316 - BT_ERR("SMP security requested but not available"); 2317 - return 1; 2318 - } 2319 - 2320 2314 if (!hci_dev_test_flag(hcon->hdev, HCI_LE_ENABLED)) 2321 2315 return 1; 2322 2316 ··· 2323 2329 if (hcon->role == HCI_ROLE_MASTER) 2324 2330 if (smp_ltk_encrypt(conn, hcon->pending_sec_level)) 2325 2331 return 0; 2332 + 2333 + chan = conn->smp; 2334 + if (!chan) { 2335 + BT_ERR("SMP security requested but not available"); 2336 + return 1; 2337 + } 2326 2338 2327 2339 l2cap_chan_lock(chan); 2328 2340
+2 -2
net/bridge/br_multicast.c
··· 1006 1006 1007 1007 ih = igmpv3_report_hdr(skb); 1008 1008 num = ntohs(ih->ngrec); 1009 - len = sizeof(*ih); 1009 + len = skb_transport_offset(skb) + sizeof(*ih); 1010 1010 1011 1011 for (i = 0; i < num; i++) { 1012 1012 len += sizeof(*grec); ··· 1067 1067 1068 1068 icmp6h = icmp6_hdr(skb); 1069 1069 num = ntohs(icmp6h->icmp6_dataun.un_data16[1]); 1070 - len = sizeof(*icmp6h); 1070 + len = skb_transport_offset(skb) + sizeof(*icmp6h); 1071 1071 1072 1072 for (i = 0; i < num; i++) { 1073 1073 __be16 *nsrcs, _nsrcs;
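The bridge multicast fix above changes the running `len` to start at `skb_transport_offset(skb) + sizeof(*ih)` rather than just the header size: the subsequent `pskb_may_pull()`-style bounds checks compare `len` against the full skb length, so both must be measured from the same origin or truncated reports slip through. A tiny sketch of the corrected bounds arithmetic, with illustrative sizes rather than real packet layouts:

```c
#include <assert.h>

/* Mirrors the igmpv3/mldv2 report parsing fix: record offsets must
 * include the transport offset, because the bounds check compares
 * against the total buffer length measured from the start of the
 * buffer.  All lengths in bytes; values are illustrative. */
static int records_fit(int transport_offset, int hdr_len,
		       int rec_len, int num_rec, int total_len)
{
	int len = transport_offset + hdr_len;	/* the fix: include the offset */
	int i;

	for (i = 0; i < num_rec; i++) {
		len += rec_len;
		if (len > total_len)
			return 0;	/* truncated packet */
	}
	return 1;
}
```

Before the fix, `len` started at `hdr_len` alone, so a packet could pass the check while its records extended past the end of the buffer.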
+2
net/core/dev.c
··· 4713 4713 4714 4714 while (test_and_set_bit(NAPI_STATE_SCHED, &n->state)) 4715 4715 msleep(1); 4716 + while (test_and_set_bit(NAPI_STATE_NPSVC, &n->state)) 4717 + msleep(1); 4716 4718 4717 4719 hrtimer_cancel(&n->timer); 4718 4720
+9 -5
net/core/fib_rules.c
··· 631 631 { 632 632 int idx = 0; 633 633 struct fib_rule *rule; 634 + int err = 0; 634 635 635 636 rcu_read_lock(); 636 637 list_for_each_entry_rcu(rule, &ops->rules_list, list) { 637 638 if (idx < cb->args[1]) 638 639 goto skip; 639 640 640 - if (fib_nl_fill_rule(skb, rule, NETLINK_CB(cb->skb).portid, 641 - cb->nlh->nlmsg_seq, RTM_NEWRULE, 642 - NLM_F_MULTI, ops) < 0) 641 + err = fib_nl_fill_rule(skb, rule, NETLINK_CB(cb->skb).portid, 642 + cb->nlh->nlmsg_seq, RTM_NEWRULE, 643 + NLM_F_MULTI, ops); 644 + if (err) 643 645 break; 644 646 skip: 645 647 idx++; ··· 650 648 cb->args[1] = idx; 651 649 rules_ops_put(ops); 652 650 653 - return skb->len; 651 + return err; 654 652 } 655 653 656 654 static int fib_nl_dumprule(struct sk_buff *skb, struct netlink_callback *cb) ··· 666 664 if (ops == NULL) 667 665 return -EAFNOSUPPORT; 668 666 669 - return dump_rules(skb, cb, ops); 667 + dump_rules(skb, cb, ops); 668 + 669 + return skb->len; 670 670 } 671 671 672 672 rcu_read_lock();
+1 -1
net/core/filter.c
··· 478 478 bpf_src = BPF_X; 479 479 } else { 480 480 insn->dst_reg = BPF_REG_A; 481 - insn->src_reg = BPF_REG_X; 482 481 insn->imm = fp->k; 483 482 bpf_src = BPF_SRC(fp->code); 483 + insn->src_reg = bpf_src == BPF_X ? BPF_REG_X : 0; 484 484 } 485 485 486 486 /* Common case where 'jump_false' is next insn. */
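The filter.c fix above stops unconditionally writing `BPF_REG_X` into `src_reg` when converting classic BPF: only instructions whose source operand actually is X may name that register; K-sourced instructions must leave `src_reg` as 0. A minimal sketch of the corrected selection (macro values match the BPF encoding; `BPF_REG_X` is shown with an illustrative register number):

```c
#include <assert.h>

#define BPF_K		0x00
#define BPF_X		0x08
#define BPF_SRC(code)	((code) & 0x08)
#define BPF_REG_X	7	/* illustrative: the register X maps to in eBPF */

/* Mirrors the fixed classic-to-eBPF conversion: derive src_reg from the
 * instruction's real source class instead of hardcoding BPF_REG_X,
 * which handed the verifier a bogus register operand for K-sourced
 * instructions. */
static int convert_src_reg(unsigned char code)
{
	unsigned char bpf_src = BPF_SRC(code);

	return bpf_src == BPF_X ? BPF_REG_X : 0;
}
```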
+9
net/core/net-sysfs.c
··· 1481 1481 return ret == 0 ? dev->of_node == data : ret; 1482 1482 } 1483 1483 1484 + /* 1485 + * of_find_net_device_by_node - lookup the net device for the device node 1486 + * @np: OF device node 1487 + * 1488 + * Looks up the net_device structure corresponding with the device node. 1489 + * If successful, returns a pointer to the net_device with the embedded 1490 + * struct device refcount incremented by one, or NULL on failure. The 1491 + * refcount must be dropped when done with the net_device. 1492 + */ 1484 1493 struct net_device *of_find_net_device_by_node(struct device_node *np) 1485 1494 { 1486 1495 struct device *dev;
net/core/netpoll.c | +8 -2
···
  */
 static int poll_one_napi(struct napi_struct *napi, int budget)
 {
-	int work;
+	int work = 0;
 
 	/* net_rx_action's ->poll() invocations and our's are
 	 * synchronized by this test which is only made while
···
 	if (!test_bit(NAPI_STATE_SCHED, &napi->state))
 		return budget;
 
-	set_bit(NAPI_STATE_NPSVC, &napi->state);
+	/* If we set this bit but see that it has already been set,
+	 * that indicates that napi has been disabled and we need
+	 * to abort this operation
+	 */
+	if (test_and_set_bit(NAPI_STATE_NPSVC, &napi->state))
+		goto out;
 
 	work = napi->poll(napi, budget);
 	WARN_ONCE(work > budget, "%pF exceeded budget in poll\n", napi->poll);
···
 
 	clear_bit(NAPI_STATE_NPSVC, &napi->state);
 
+out:
 	return budget - work;
 }
net/core/rtnetlink.c | +16 -10
···
 	u32 portid = NETLINK_CB(cb->skb).portid;
 	u32 seq = cb->nlh->nlmsg_seq;
 	u32 filter_mask = 0;
+	int err;
 
 	if (nlmsg_len(cb->nlh) > sizeof(struct ifinfomsg)) {
 		struct nlattr *extfilt;
···
 		struct net_device *br_dev = netdev_master_upper_dev_get(dev);
 
 		if (br_dev && br_dev->netdev_ops->ndo_bridge_getlink) {
-			if (idx >= cb->args[0] &&
-			    br_dev->netdev_ops->ndo_bridge_getlink(
-				    skb, portid, seq, dev, filter_mask,
-				    NLM_F_MULTI) < 0)
-				break;
+			if (idx >= cb->args[0]) {
+				err = br_dev->netdev_ops->ndo_bridge_getlink(
+						skb, portid, seq, dev,
+						filter_mask, NLM_F_MULTI);
+				if (err < 0 && err != -EOPNOTSUPP)
+					break;
+			}
 			idx++;
 		}
 
 		if (ops->ndo_bridge_getlink) {
-			if (idx >= cb->args[0] &&
-			    ops->ndo_bridge_getlink(skb, portid, seq, dev,
-						    filter_mask,
-						    NLM_F_MULTI) < 0)
-				break;
+			if (idx >= cb->args[0]) {
+				err = ops->ndo_bridge_getlink(skb, portid,
+							      seq, dev,
+							      filter_mask,
+							      NLM_F_MULTI);
+				if (err < 0 && err != -EOPNOTSUPP)
+					break;
+			}
 			idx++;
 		}
 	}
net/core/sock.c | +4 -8
···
 		return;
 	kfree(rsk_prot->slab_name);
 	rsk_prot->slab_name = NULL;
-	if (rsk_prot->slab) {
-		kmem_cache_destroy(rsk_prot->slab);
-		rsk_prot->slab = NULL;
-	}
+	kmem_cache_destroy(rsk_prot->slab);
+	rsk_prot->slab = NULL;
 }
 
 static int req_prot_init(const struct proto *prot)
···
 	list_del(&prot->node);
 	mutex_unlock(&proto_list_mutex);
 
-	if (prot->slab != NULL) {
-		kmem_cache_destroy(prot->slab);
-		prot->slab = NULL;
-	}
+	kmem_cache_destroy(prot->slab);
+	prot->slab = NULL;
 
 	req_prot_cleanup(prot->rsk_prot);
net/dccp/ackvec.c | +4 -8
···
 
 void dccp_ackvec_exit(void)
 {
-	if (dccp_ackvec_slab != NULL) {
-		kmem_cache_destroy(dccp_ackvec_slab);
-		dccp_ackvec_slab = NULL;
-	}
-	if (dccp_ackvec_record_slab != NULL) {
-		kmem_cache_destroy(dccp_ackvec_record_slab);
-		dccp_ackvec_record_slab = NULL;
-	}
+	kmem_cache_destroy(dccp_ackvec_slab);
+	dccp_ackvec_slab = NULL;
+	kmem_cache_destroy(dccp_ackvec_record_slab);
+	dccp_ackvec_record_slab = NULL;
 }
net/dccp/ccid.c | +1 -2
···
 
 static void ccid_kmem_cache_destroy(struct kmem_cache *slab)
 {
-	if (slab != NULL)
-		kmem_cache_destroy(slab);
+	kmem_cache_destroy(slab);
 }
 
 static int __init ccid_activate(struct ccid_operations *ccid_ops)
net/dccp/minisocks.c | +2 -2
···
 			tw->tw_ipv6only = sk->sk_ipv6only;
 		}
 #endif
-		/* Linkage updates. */
-		__inet_twsk_hashdance(tw, sk, &dccp_hashinfo);
 
 		/* Get the TIME_WAIT timeout firing. */
 		if (timeo < rto)
···
 			timeo = DCCP_TIMEWAIT_LEN;
 
 		inet_twsk_schedule(tw, timeo);
+		/* Linkage updates. */
+		__inet_twsk_hashdance(tw, sk, &dccp_hashinfo);
 		inet_twsk_put(tw);
 	} else {
 		/* Sorry, if we're out of memory, just CLOSE this
net/dsa/dsa.c | +34 -7
···
 			port_index++;
 		}
 		kfree(pd->chip[i].rtable);
+
+		/* Drop our reference to the MDIO bus device */
+		if (pd->chip[i].host_dev)
+			put_device(pd->chip[i].host_dev);
 	}
 	kfree(pd->chip);
 }
···
 		return -EPROBE_DEFER;
 
 	ethernet = of_parse_phandle(np, "dsa,ethernet", 0);
-	if (!ethernet)
-		return -EINVAL;
+	if (!ethernet) {
+		ret = -EINVAL;
+		goto out_put_mdio;
+	}
 
 	ethernet_dev = of_find_net_device_by_node(ethernet);
-	if (!ethernet_dev)
-		return -EPROBE_DEFER;
+	if (!ethernet_dev) {
+		ret = -EPROBE_DEFER;
+		goto out_put_mdio;
+	}
 
 	pd = kzalloc(sizeof(*pd), GFP_KERNEL);
-	if (!pd)
-		return -ENOMEM;
+	if (!pd) {
+		ret = -ENOMEM;
+		goto out_put_ethernet;
+	}
 
 	dev->platform_data = pd;
 	pd->of_netdev = ethernet_dev;
···
 		cd = &pd->chip[chip_index];
 
 		cd->of_node = child;
-		cd->host_dev = &mdio_bus->dev;
+
+		/* When assigning the host device, increment its refcount */
+		cd->host_dev = get_device(&mdio_bus->dev);
 
 		sw_addr = of_get_property(child, "reg", NULL);
 		if (!sw_addr)
···
 				ret = -EPROBE_DEFER;
 				goto out_free_chip;
 			}
+
+			/* Drop the mdio_bus device ref, replacing the host
+			 * device with the mdio_bus_switch device, keeping
+			 * the refcount from of_mdio_find_bus() above.
+			 */
+			put_device(cd->host_dev);
 			cd->host_dev = &mdio_bus_switch->dev;
 		}
···
 		}
 	}
 
+	/* The individual chips hold their own refcount on the mdio bus,
+	 * so drop ours */
+	put_device(&mdio_bus->dev);
+
 	return 0;
 
 out_free_chip:
···
 out_free:
 	kfree(pd);
 	dev->platform_data = NULL;
+out_put_ethernet:
+	put_device(&ethernet_dev->dev);
+out_put_mdio:
+	put_device(&mdio_bus->dev);
 	return ret;
 }
···
 		return;
 
 	dsa_of_free_platform_data(pd);
+	put_device(&pd->of_netdev->dev);
 	kfree(pd);
 }
 #else
net/dsa/tag_trailer.c | +1 -1
···
 
 	trailer = skb_tail_pointer(skb) - 4;
 	if (trailer[0] != 0x80 || (trailer[1] & 0xf8) != 0x00 ||
-	    (trailer[3] & 0xef) != 0x00 || trailer[3] != 0x00)
+	    (trailer[2] & 0xef) != 0x00 || trailer[3] != 0x00)
 		goto out_drop;
 
 	source_port = trailer[1] & 7;
net/ipv4/arp.c | +25 -14
···
 #include <net/arp.h>
 #include <net/ax25.h>
 #include <net/netrom.h>
+#include <net/dst_metadata.h>
+#include <net/ip_tunnels.h>
 
 #include <linux/uaccess.h>
 
···
 			  struct net_device *dev, __be32 src_ip,
 			  const unsigned char *dest_hw,
 			  const unsigned char *src_hw,
-			  const unsigned char *target_hw, struct sk_buff *oskb)
+			  const unsigned char *target_hw,
+			  struct dst_entry *dst)
 {
 	struct sk_buff *skb;
 
···
 	if (!skb)
 		return;
 
-	if (oskb)
-		skb_dst_copy(skb, oskb);
-
+	skb_dst_set(skb, dst);
 	arp_xmit(skb);
 }
 
···
 	__be32 target = *(__be32 *)neigh->primary_key;
 	int probes = atomic_read(&neigh->probes);
 	struct in_device *in_dev;
+	struct dst_entry *dst = NULL;
 
 	rcu_read_lock();
 	in_dev = __in_dev_get_rcu(dev);
···
 		}
 	}
 
+	if (skb && !(dev->priv_flags & IFF_XMIT_DST_RELEASE))
+		dst = dst_clone(skb_dst(skb));
 	arp_send_dst(ARPOP_REQUEST, ETH_P_ARP, target, dev, saddr,
-		     dst_hw, dev->dev_addr, NULL,
-		     dev->priv_flags & IFF_XMIT_DST_RELEASE ? NULL : skb);
+		     dst_hw, dev->dev_addr, NULL, dst);
 }
 
 static int arp_ignore(struct in_device *in_dev, __be32 sip, __be32 tip)
···
 	int addr_type;
 	struct neighbour *n;
 	struct net *net = dev_net(dev);
+	struct dst_entry *reply_dst = NULL;
 	bool is_garp = false;
 
 	/* arp_rcv below verifies the ARP header and verifies the device
···
 	 * cache.
 	 */
 
+	if (arp->ar_op == htons(ARPOP_REQUEST) && skb_metadata_dst(skb))
+		reply_dst = (struct dst_entry *)
+			    iptunnel_metadata_reply(skb_metadata_dst(skb),
+						    GFP_ATOMIC);
+
 	/* Special case: IPv4 duplicate address detection packet (RFC2131) */
 	if (sip == 0) {
 		if (arp->ar_op == htons(ARPOP_REQUEST) &&
 		    inet_addr_type_dev_table(net, dev, tip) == RTN_LOCAL &&
 		    !arp_ignore(in_dev, sip, tip))
-			arp_send(ARPOP_REPLY, ETH_P_ARP, sip, dev, tip, sha,
-				 dev->dev_addr, sha);
+			arp_send_dst(ARPOP_REPLY, ETH_P_ARP, sip, dev, tip,
+				     sha, dev->dev_addr, sha, reply_dst);
 		goto out;
 	}
 
···
 			if (!dont_send) {
 				n = neigh_event_ns(&arp_tbl, sha, &sip, dev);
 				if (n) {
-					arp_send(ARPOP_REPLY, ETH_P_ARP, sip,
-						 dev, tip, sha, dev->dev_addr,
-						 sha);
+					arp_send_dst(ARPOP_REPLY, ETH_P_ARP,
+						     sip, dev, tip, sha,
+						     dev->dev_addr, sha,
+						     reply_dst);
 					neigh_release(n);
 				}
 			}
···
 			if (NEIGH_CB(skb)->flags & LOCALLY_ENQUEUED ||
 			    skb->pkt_type == PACKET_HOST ||
 			    NEIGH_VAR(in_dev->arp_parms, PROXY_DELAY) == 0) {
-				arp_send(ARPOP_REPLY, ETH_P_ARP, sip,
-					 dev, tip, sha, dev->dev_addr,
-					 sha);
+				arp_send_dst(ARPOP_REPLY, ETH_P_ARP,
+					     sip, dev, tip, sha,
+					     dev->dev_addr, sha,
+					     reply_dst);
 			} else {
 				pneigh_enqueue(&arp_tbl,
 					       in_dev->arp_parms, skb);
net/ipv4/fib_trie.c | +1 -1
···
 		    nh->nh_flags & RTNH_F_LINKDOWN &&
 		    !(fib_flags & FIB_LOOKUP_IGNORE_LINKSTATE))
 			continue;
-		if (!(flp->flowi4_flags & FLOWI_FLAG_VRFSRC)) {
+		if (!(flp->flowi4_flags & FLOWI_FLAG_SKIP_NH_OIF)) {
 			if (flp->flowi4_oif &&
 			    flp->flowi4_oif != nh->nh_oif)
 				continue;
net/ipv4/icmp.c | +2 -2
···
 	fl4.flowi4_mark = mark;
 	fl4.flowi4_tos = RT_TOS(ip_hdr(skb)->tos);
 	fl4.flowi4_proto = IPPROTO_ICMP;
-	fl4.flowi4_oif = vrf_master_ifindex(skb->dev) ? : skb->dev->ifindex;
+	fl4.flowi4_oif = vrf_master_ifindex(skb->dev);
 	security_skb_classify_flow(skb, flowi4_to_flowi(&fl4));
 	rt = ip_route_output_key(net, &fl4);
 	if (IS_ERR(rt))
···
 	fl4->flowi4_proto = IPPROTO_ICMP;
 	fl4->fl4_icmp_type = type;
 	fl4->fl4_icmp_code = code;
-	fl4->flowi4_oif = vrf_master_ifindex(skb_in->dev) ? : skb_in->dev->ifindex;
+	fl4->flowi4_oif = vrf_master_ifindex(skb_in->dev);
 
 	security_skb_classify_flow(skb_in, flowi4_to_flowi(fl4));
 	rt = __ip_route_output_key(net, fl4);
net/ipv4/inet_connection_sock.c | +4 -4
···
 	req->num_timeout = 0;
 	req->sk = NULL;
 
+	setup_timer(&req->rsk_timer, reqsk_timer_handler, (unsigned long)req);
+	mod_timer_pinned(&req->rsk_timer, jiffies + timeout);
+	req->rsk_hash = hash;
+
 	/* before letting lookups find us, make sure all req fields
 	 * are committed to memory and refcnt initialized.
 	 */
 	smp_wmb();
 	atomic_set(&req->rsk_refcnt, 2);
-	setup_timer(&req->rsk_timer, reqsk_timer_handler, (unsigned long)req);
-	req->rsk_hash = hash;
 
 	spin_lock(&queue->syn_wait_lock);
 	req->dl_next = lopt->syn_table[hash];
 	lopt->syn_table[hash] = req;
 	spin_unlock(&queue->syn_wait_lock);
-
-	mod_timer_pinned(&req->rsk_timer, jiffies + timeout);
 }
 EXPORT_SYMBOL(reqsk_queue_hash_req);
net/ipv4/inet_timewait_sock.c | +10 -6
···
 	/*
 	 * Step 2: Hash TW into tcp ehash chain.
 	 * Notes :
-	 * - tw_refcnt is set to 3 because :
+	 * - tw_refcnt is set to 4 because :
 	 * - We have one reference from bhash chain.
 	 * - We have one reference from ehash chain.
+	 * - We have one reference from timer.
+	 * - One reference for ourself (our caller will release it).
 	 * We can use atomic_set() because prior spin_lock()/spin_unlock()
 	 * committed into memory all tw fields.
 	 */
-	atomic_set(&tw->tw_refcnt, 1 + 1 + 1);
+	atomic_set(&tw->tw_refcnt, 4);
 	inet_twsk_add_node_rcu(tw, &ehead->chain);
 
 	/* Step 3: Remove SK from hash chain */
···
 }
 EXPORT_SYMBOL(inet_twsk_deschedule_put);
 
-void inet_twsk_schedule(struct inet_timewait_sock *tw, const int timeo)
+void __inet_twsk_schedule(struct inet_timewait_sock *tw, int timeo, bool rearm)
 {
 	/* timeout := RTO * 3.5
 	 *
···
 	 */
 
 	tw->tw_kill = timeo <= 4*HZ;
-	if (!mod_timer_pinned(&tw->tw_timer, jiffies + timeo)) {
-		atomic_inc(&tw->tw_refcnt);
+	if (!rearm) {
+		BUG_ON(mod_timer_pinned(&tw->tw_timer, jiffies + timeo));
 		atomic_inc(&tw->tw_dr->tw_count);
+	} else {
+		mod_timer_pending(&tw->tw_timer, jiffies + timeo);
 	}
 }
-EXPORT_SYMBOL_GPL(inet_twsk_schedule);
+EXPORT_SYMBOL_GPL(__inet_twsk_schedule);
 
 void inet_twsk_purge(struct inet_hashinfo *hashinfo,
 		     struct inet_timewait_death_row *twdr, int family)
net/ipv4/ip_tunnel_core.c | +29 -25
···
 #include <net/net_namespace.h>
 #include <net/netns/generic.h>
 #include <net/rtnetlink.h>
+#include <net/dst_metadata.h>
 
 int iptunnel_xmit(struct sock *sk, struct rtable *rt, struct sk_buff *skb,
 		  __be32 src, __be32 dst, __u8 proto,
 		  __u8 tos, __u8 ttl, __be16 df, bool xnet)
 {
-	int pkt_len = skb->len;
+	int pkt_len = skb->len - skb_inner_network_offset(skb);
 	struct iphdr *iph;
 	int err;
 
···
 	return 0;
 }
 EXPORT_SYMBOL_GPL(iptunnel_pull_header);
+
+struct metadata_dst *iptunnel_metadata_reply(struct metadata_dst *md,
+					     gfp_t flags)
+{
+	struct metadata_dst *res;
+	struct ip_tunnel_info *dst, *src;
+
+	if (!md || md->u.tun_info.mode & IP_TUNNEL_INFO_TX)
+		return NULL;
+
+	res = metadata_dst_alloc(0, flags);
+	if (!res)
+		return NULL;
+
+	dst = &res->u.tun_info;
+	src = &md->u.tun_info;
+	dst->key.tun_id = src->key.tun_id;
+	if (src->mode & IP_TUNNEL_INFO_IPV6)
+		memcpy(&dst->key.u.ipv6.dst, &src->key.u.ipv6.src,
+		       sizeof(struct in6_addr));
+	else
+		dst->key.u.ipv4.dst = src->key.u.ipv4.src;
+	dst->mode = src->mode | IP_TUNNEL_INFO_TX;
+
+	return res;
+}
+EXPORT_SYMBOL_GPL(iptunnel_metadata_reply);
 
 struct sk_buff *iptunnel_handle_offloads(struct sk_buff *skb,
 					 bool csum_help,
···
 	[LWTUNNEL_IP_SRC]	= { .type = NLA_U32 },
 	[LWTUNNEL_IP_TTL]	= { .type = NLA_U8 },
 	[LWTUNNEL_IP_TOS]	= { .type = NLA_U8 },
-	[LWTUNNEL_IP_SPORT]	= { .type = NLA_U16 },
-	[LWTUNNEL_IP_DPORT]	= { .type = NLA_U16 },
 	[LWTUNNEL_IP_FLAGS]	= { .type = NLA_U16 },
 };
 
···
 	if (tb[LWTUNNEL_IP_TOS])
 		tun_info->key.tos = nla_get_u8(tb[LWTUNNEL_IP_TOS]);
 
-	if (tb[LWTUNNEL_IP_SPORT])
-		tun_info->key.tp_src = nla_get_be16(tb[LWTUNNEL_IP_SPORT]);
-
-	if (tb[LWTUNNEL_IP_DPORT])
-		tun_info->key.tp_dst = nla_get_be16(tb[LWTUNNEL_IP_DPORT]);
-
 	if (tb[LWTUNNEL_IP_FLAGS])
 		tun_info->key.tun_flags = nla_get_u16(tb[LWTUNNEL_IP_FLAGS]);
 
···
 	    nla_put_be32(skb, LWTUNNEL_IP_SRC, tun_info->key.u.ipv4.src) ||
 	    nla_put_u8(skb, LWTUNNEL_IP_TOS, tun_info->key.tos) ||
 	    nla_put_u8(skb, LWTUNNEL_IP_TTL, tun_info->key.ttl) ||
-	    nla_put_u16(skb, LWTUNNEL_IP_SPORT, tun_info->key.tp_src) ||
-	    nla_put_u16(skb, LWTUNNEL_IP_DPORT, tun_info->key.tp_dst) ||
 	    nla_put_u16(skb, LWTUNNEL_IP_FLAGS, tun_info->key.tun_flags))
 		return -ENOMEM;
 
···
 		+ nla_total_size(4)	/* LWTUNNEL_IP_SRC */
 		+ nla_total_size(1)	/* LWTUNNEL_IP_TOS */
 		+ nla_total_size(1)	/* LWTUNNEL_IP_TTL */
-		+ nla_total_size(2)	/* LWTUNNEL_IP_SPORT */
-		+ nla_total_size(2)	/* LWTUNNEL_IP_DPORT */
 		+ nla_total_size(2);	/* LWTUNNEL_IP_FLAGS */
 }
 
···
 	[LWTUNNEL_IP6_SRC]	= { .len = sizeof(struct in6_addr) },
 	[LWTUNNEL_IP6_HOPLIMIT]	= { .type = NLA_U8 },
 	[LWTUNNEL_IP6_TC]	= { .type = NLA_U8 },
-	[LWTUNNEL_IP6_SPORT]	= { .type = NLA_U16 },
-	[LWTUNNEL_IP6_DPORT]	= { .type = NLA_U16 },
 	[LWTUNNEL_IP6_FLAGS]	= { .type = NLA_U16 },
 };
 
···
 	if (tb[LWTUNNEL_IP6_TC])
 		tun_info->key.tos = nla_get_u8(tb[LWTUNNEL_IP6_TC]);
 
-	if (tb[LWTUNNEL_IP6_SPORT])
-		tun_info->key.tp_src = nla_get_be16(tb[LWTUNNEL_IP6_SPORT]);
-
-	if (tb[LWTUNNEL_IP6_DPORT])
-		tun_info->key.tp_dst = nla_get_be16(tb[LWTUNNEL_IP6_DPORT]);
-
 	if (tb[LWTUNNEL_IP6_FLAGS])
 		tun_info->key.tun_flags = nla_get_u16(tb[LWTUNNEL_IP6_FLAGS]);
 
···
 	    nla_put_in6_addr(skb, LWTUNNEL_IP6_SRC, &tun_info->key.u.ipv6.src) ||
 	    nla_put_u8(skb, LWTUNNEL_IP6_HOPLIMIT, tun_info->key.tos) ||
 	    nla_put_u8(skb, LWTUNNEL_IP6_TC, tun_info->key.ttl) ||
-	    nla_put_u16(skb, LWTUNNEL_IP6_SPORT, tun_info->key.tp_src) ||
-	    nla_put_u16(skb, LWTUNNEL_IP6_DPORT, tun_info->key.tp_dst) ||
 	    nla_put_u16(skb, LWTUNNEL_IP6_FLAGS, tun_info->key.tun_flags))
 		return -ENOMEM;
 
···
 		+ nla_total_size(16)	/* LWTUNNEL_IP6_SRC */
 		+ nla_total_size(1)	/* LWTUNNEL_IP6_HOPLIMIT */
 		+ nla_total_size(1)	/* LWTUNNEL_IP6_TC */
-		+ nla_total_size(2)	/* LWTUNNEL_IP6_SPORT */
-		+ nla_total_size(2)	/* LWTUNNEL_IP6_DPORT */
 		+ nla_total_size(2);	/* LWTUNNEL_IP6_FLAGS */
 }
net/ipv4/route.c | +4 -2
···
 	struct fib_result res;
 	struct rtable *rth;
 	int orig_oif;
+	int err = -ENETUNREACH;
 
 	res.tclassid = 0;
 	res.fi = NULL;
···
 		goto make_route;
 	}
 
-	if (fib_lookup(net, fl4, &res, 0)) {
+	err = fib_lookup(net, fl4, &res, 0);
+	if (err) {
 		res.fi = NULL;
 		res.table = NULL;
 		if (fl4->flowi4_oif) {
···
 			res.type = RTN_UNICAST;
 			goto make_route;
 		}
-		rth = ERR_PTR(-ENETUNREACH);
+		rth = ERR_PTR(err);
 		goto out;
 	}
net/ipv4/tcp_cubic.c | +8 -2
···
 static void bictcp_cwnd_event(struct sock *sk, enum tcp_ca_event event)
 {
 	if (event == CA_EVENT_TX_START) {
-		s32 delta = tcp_time_stamp - tcp_sk(sk)->lsndtime;
 		struct bictcp *ca = inet_csk_ca(sk);
+		u32 now = tcp_time_stamp;
+		s32 delta;
+
+		delta = now - tcp_sk(sk)->lsndtime;
 
 		/* We were application limited (idle) for a while.
 		 * Shift epoch_start to keep cwnd growth to cubic curve.
 		 */
-		if (ca->epoch_start && delta > 0)
+		if (ca->epoch_start && delta > 0) {
 			ca->epoch_start += delta;
+			if (after(ca->epoch_start, now))
+				ca->epoch_start = now;
+		}
 		return;
 	}
 }
net/ipv4/tcp_minisocks.c | +6 -7
···
 		if (tcp_death_row.sysctl_tw_recycle &&
 		    tcptw->tw_ts_recent_stamp &&
 		    tcp_tw_remember_stamp(tw))
-			inet_twsk_schedule(tw, tw->tw_timeout);
+			inet_twsk_reschedule(tw, tw->tw_timeout);
 		else
-			inet_twsk_schedule(tw, TCP_TIMEWAIT_LEN);
+			inet_twsk_reschedule(tw, TCP_TIMEWAIT_LEN);
 		return TCP_TW_ACK;
 	}
 
···
 			return TCP_TW_SUCCESS;
 		}
 	}
-	inet_twsk_schedule(tw, TCP_TIMEWAIT_LEN);
+	inet_twsk_reschedule(tw, TCP_TIMEWAIT_LEN);
 
 	if (tmp_opt.saw_tstamp) {
 		tcptw->tw_ts_recent = tmp_opt.rcv_tsval;
···
 	 * Do not reschedule in the last case.
 	 */
 	if (paws_reject || th->ack)
-		inet_twsk_schedule(tw, TCP_TIMEWAIT_LEN);
+		inet_twsk_reschedule(tw, TCP_TIMEWAIT_LEN);
 
 	return tcp_timewait_check_oow_rate_limit(
 		tw, skb, LINUX_MIB_TCPACKSKIPPEDTIMEWAIT);
···
 		} while (0);
 #endif
 
-		/* Linkage updates. */
-		__inet_twsk_hashdance(tw, sk, &tcp_hashinfo);
-
 		/* Get the TIME_WAIT timeout firing. */
 		if (timeo < rto)
 			timeo = rto;
···
 		}
 
 		inet_twsk_schedule(tw, timeo);
+		/* Linkage updates. */
+		__inet_twsk_hashdance(tw, sk, &tcp_hashinfo);
 		inet_twsk_put(tw);
 	} else {
 		/* Sorry, if we're out of memory, just CLOSE this
net/ipv4/tcp_output.c | +1
···
 	skb_reserve(skb, MAX_TCP_HEADER);
 	tcp_init_nondata_skb(skb, tcp_acceptable_seq(sk),
 			     TCPHDR_ACK | TCPHDR_RST);
+	skb_mstamp_get(&skb->skb_mstamp);
 	/* Send it off. */
 	if (tcp_transmit_skb(sk, skb, 0, priority))
 		NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPABORTFAILED);
net/ipv4/udp.c | +2 -1
···
 	if (netif_index_is_vrf(net, ipc.oif)) {
 		flowi4_init_output(fl4, ipc.oif, sk->sk_mark, tos,
 				   RT_SCOPE_UNIVERSE, sk->sk_protocol,
-				   (flow_flags | FLOWI_FLAG_VRFSRC),
+				   (flow_flags | FLOWI_FLAG_VRFSRC |
+				    FLOWI_FLAG_SKIP_NH_OIF),
 				   faddr, saddr, dport,
 				   inet->inet_sport);
net/ipv4/xfrm4_policy.c | +2
···
 	if (saddr)
 		fl4->saddr = saddr->a4;
 
+	fl4->flowi4_flags = FLOWI_FLAG_SKIP_NH_OIF;
+
 	rt = __ip_route_output_key(net, fl4);
 	if (!IS_ERR(rt))
 		return &rt->dst;
net/ipv6/addrconf.c | +3 -4
···
 
 		rt = addrconf_get_prefix_route(&ifp->peer_addr, 128,
 					       ifp->idev->dev, 0, 0);
-		if (rt && ip6_del_rt(rt))
-			dst_free(&rt->dst);
+		if (rt)
+			ip6_del_rt(rt);
 	}
 	dst_hold(&ifp->rt->dst);
 
-	if (ip6_del_rt(ifp->rt))
-		dst_free(&ifp->rt->dst);
+	ip6_del_rt(ifp->rt);
 
 	rt_genid_bump_ipv6(net);
 	break;
net/ipv6/ip6_fib.c | +19 -7
···
 	kmem_cache_free(fib6_node_kmem, fn);
 }
 
+static void rt6_rcu_free(struct rt6_info *rt)
+{
+	call_rcu(&rt->dst.rcu_head, dst_rcu_free);
+}
+
 static void rt6_free_pcpu(struct rt6_info *non_pcpu_rt)
 {
 	int cpu;
···
 		ppcpu_rt = per_cpu_ptr(non_pcpu_rt->rt6i_pcpu, cpu);
 		pcpu_rt = *ppcpu_rt;
 		if (pcpu_rt) {
-			dst_free(&pcpu_rt->dst);
+			rt6_rcu_free(pcpu_rt);
 			*ppcpu_rt = NULL;
 		}
 	}
···
 {
 	if (atomic_dec_and_test(&rt->rt6i_ref)) {
 		rt6_free_pcpu(rt);
-		dst_free(&rt->dst);
+		rt6_rcu_free(rt);
 	}
 }
 
···
 		*ins = rt;
 		rt->rt6i_node = fn;
 		atomic_inc(&rt->rt6i_ref);
-		inet6_rt_notify(RTM_NEWROUTE, rt, info);
+		inet6_rt_notify(RTM_NEWROUTE, rt, info, 0);
 		info->nl_net->ipv6.rt6_stats->fib_rt_entries++;
 
 		if (!(fn->fn_flags & RTN_RTINFO)) {
···
 		rt->rt6i_node = fn;
 		rt->dst.rt6_next = iter->dst.rt6_next;
 		atomic_inc(&rt->rt6i_ref);
-		inet6_rt_notify(RTM_NEWROUTE, rt, info);
+		inet6_rt_notify(RTM_NEWROUTE, rt, info, NLM_F_REPLACE);
 		if (!(fn->fn_flags & RTN_RTINFO)) {
 			info->nl_net->ipv6.rt6_stats->fib_route_nodes++;
 			fn->fn_flags |= RTN_RTINFO;
···
 	int allow_create = 1;
 	int replace_required = 0;
 	int sernum = fib6_new_sernum(info->nl_net);
+
+	if (WARN_ON_ONCE((rt->dst.flags & DST_NOCACHE) &&
+			 !atomic_read(&rt->dst.__refcnt)))
+		return -EINVAL;
 
 	if (info->nlh) {
 		if (!(info->nlh->nlmsg_flags & NLM_F_CREATE))
···
 		fib6_start_gc(info->nl_net, rt);
 		if (!(rt->rt6i_flags & RTF_CACHE))
 			fib6_prune_clones(info->nl_net, pn);
+		rt->dst.flags &= ~DST_NOCACHE;
 	}
 
 out:
···
 			atomic_inc(&pn->leaf->rt6i_ref);
 		}
 #endif
-		dst_free(&rt->dst);
+		if (!(rt->dst.flags & DST_NOCACHE))
+			dst_free(&rt->dst);
 	}
 	return err;
 
···
 st_failure:
 	if (fn && !(fn->fn_flags & (RTN_RTINFO|RTN_ROOT)))
 		fib6_repair_tree(info->nl_net, fn);
-	dst_free(&rt->dst);
+	if (!(rt->dst.flags & DST_NOCACHE))
+		dst_free(&rt->dst);
 	return err;
 #endif
 }
···
 
 	fib6_purge_rt(rt, fn, net);
 
-	inet6_rt_notify(RTM_DELROUTE, rt, info);
+	inet6_rt_notify(RTM_DELROUTE, rt, info, 0);
 	rt6_release(rt);
 }
net/ipv6/ip6_gre.c | +55 -38
···
 		struct ipv6_tlv_tnl_enc_lim *tel;
 		__u32 mtu;
 	case ICMPV6_DEST_UNREACH:
-		net_warn_ratelimited("%s: Path to destination invalid or inactive!\n",
-				     t->parms.name);
+		net_dbg_ratelimited("%s: Path to destination invalid or inactive!\n",
+				    t->parms.name);
 		break;
 	case ICMPV6_TIME_EXCEED:
 		if (code == ICMPV6_EXC_HOPLIMIT) {
-			net_warn_ratelimited("%s: Too small hop limit or routing loop in tunnel!\n",
-					     t->parms.name);
+			net_dbg_ratelimited("%s: Too small hop limit or routing loop in tunnel!\n",
+					    t->parms.name);
 		}
 		break;
 	case ICMPV6_PARAMPROB:
···
 		if (teli && teli == be32_to_cpu(info) - 2) {
 			tel = (struct ipv6_tlv_tnl_enc_lim *) &skb->data[teli];
 			if (tel->encap_limit == 0) {
-				net_warn_ratelimited("%s: Too small encapsulation limit or routing loop in tunnel!\n",
-						     t->parms.name);
+				net_dbg_ratelimited("%s: Too small encapsulation limit or routing loop in tunnel!\n",
+						    t->parms.name);
 			}
 		} else {
-			net_warn_ratelimited("%s: Recipient unable to parse tunneled packet!\n",
-					     t->parms.name);
+			net_dbg_ratelimited("%s: Recipient unable to parse tunneled packet!\n",
+					    t->parms.name);
 		}
 		break;
 	case ICMPV6_PKT_TOOBIG:
···
 	}
 
 	if (!fl6->flowi6_mark)
-		dst = ip6_tnl_dst_check(tunnel);
+		dst = ip6_tnl_dst_get(tunnel);
 
 	if (!dst) {
-		ndst = ip6_route_output(net, NULL, fl6);
+		dst = ip6_route_output(net, NULL, fl6);
 
-		if (ndst->error)
+		if (dst->error)
 			goto tx_err_link_failure;
-		ndst = xfrm_lookup(net, ndst, flowi6_to_flowi(fl6), NULL, 0);
-		if (IS_ERR(ndst)) {
-			err = PTR_ERR(ndst);
-			ndst = NULL;
+		dst = xfrm_lookup(net, dst, flowi6_to_flowi(fl6), NULL, 0);
+		if (IS_ERR(dst)) {
+			err = PTR_ERR(dst);
+			dst = NULL;
 			goto tx_err_link_failure;
 		}
-		dst = ndst;
+		ndst = dst;
 	}
 
 	tdev = dst->dev;
···
 		skb = new_skb;
 	}
 
-	if (fl6->flowi6_mark) {
-		skb_dst_set(skb, dst);
-		ndst = NULL;
-	} else {
-		skb_dst_set_noref(skb, dst);
-	}
+	if (!fl6->flowi6_mark && ndst)
+		ip6_tnl_dst_set(tunnel, ndst);
+	skb_dst_set(skb, dst);
 
 	proto = NEXTHDR_GRE;
 	if (encap_limit >= 0) {
···
 	skb_set_inner_protocol(skb, protocol);
 
 	ip6tunnel_xmit(NULL, skb, dev);
-	if (ndst)
-		ip6_tnl_dst_store(tunnel, ndst);
 	return 0;
 tx_err_link_failure:
 	stats->tx_carrier_errors++;
 	dst_link_failure(skb);
 tx_err_dst_release:
-	dst_release(ndst);
+	dst_release(dst);
 	return err;
 }
···
 
 static void ip6gre_dev_free(struct net_device *dev)
 {
+	struct ip6_tnl *t = netdev_priv(dev);
+
+	ip6_tnl_dst_destroy(t);
 	free_percpu(dev->tstats);
 	free_netdev(dev);
 }
···
 	netif_keep_dst(dev);
 }
 
-static int ip6gre_tunnel_init(struct net_device *dev)
+static int ip6gre_tunnel_init_common(struct net_device *dev)
 {
 	struct ip6_tnl *tunnel;
+	int ret;
 
 	tunnel = netdev_priv(dev);
 
···
 	tunnel->net = dev_net(dev);
 	strcpy(tunnel->parms.name, dev->name);
 
+	dev->tstats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats);
+	if (!dev->tstats)
+		return -ENOMEM;
+
+	ret = ip6_tnl_dst_init(tunnel);
+	if (ret) {
+		free_percpu(dev->tstats);
+		dev->tstats = NULL;
+		return ret;
+	}
+
+	return 0;
+}
+
+static int ip6gre_tunnel_init(struct net_device *dev)
+{
+	struct ip6_tnl *tunnel;
+	int ret;
+
+	ret = ip6gre_tunnel_init_common(dev);
+	if (ret)
+		return ret;
+
+	tunnel = netdev_priv(dev);
+
 	memcpy(dev->dev_addr, &tunnel->parms.laddr, sizeof(struct in6_addr));
 	memcpy(dev->broadcast, &tunnel->parms.raddr, sizeof(struct in6_addr));
 
 	if (ipv6_addr_any(&tunnel->parms.raddr))
 		dev->header_ops = &ip6gre_header_ops;
-
-	dev->tstats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats);
-	if (!dev->tstats)
-		return -ENOMEM;
 
 	return 0;
 }
···
 static int ip6gre_tap_init(struct net_device *dev)
 {
 	struct ip6_tnl *tunnel;
+	int ret;
+
+	ret = ip6gre_tunnel_init_common(dev);
+	if (ret)
+		return ret;
 
 	tunnel = netdev_priv(dev);
 
-	tunnel->dev = dev;
-	tunnel->net = dev_net(dev);
-	strcpy(tunnel->parms.name, dev->name);
-
 	ip6gre_tnl_link_config(tunnel, 1);
-
-	dev->tstats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats);
-	if (!dev->tstats)
-		return -ENOMEM;
 
 	return 0;
 }
net/ipv6/ip6_output.c | +8 -6
···
 	frag_id = ipv6_select_ident(net, &ipv6_hdr(skb)->daddr,
 				    &ipv6_hdr(skb)->saddr);
 
+	hroom = LL_RESERVED_SPACE(rt->dst.dev);
 	if (skb_has_frag_list(skb)) {
 		int first_len = skb_pagelen(skb);
 		struct sk_buff *frag2;
 
 		if (first_len - hlen > mtu ||
 		    ((first_len - hlen) & 7) ||
-		    skb_cloned(skb))
+		    skb_cloned(skb) ||
+		    skb_headroom(skb) < (hroom + sizeof(struct frag_hdr)))
 			goto slow_path;
 
 		skb_walk_frags(skb, frag) {
 			/* Correct geometry. */
 			if (frag->len > mtu ||
 			    ((frag->len & 7) && frag->next) ||
-			    skb_headroom(frag) < hlen)
+			    skb_headroom(frag) < (hlen + hroom + sizeof(struct frag_hdr)))
 				goto slow_path_clean;
 
 			/* Partially cloned skb? */
···
 
 		err = 0;
 		offset = 0;
-		frag = skb_shinfo(skb)->frag_list;
-		skb_frag_list_init(skb);
 		/* BUILD HEADER */
 
 		*prevhdr = NEXTHDR_FRAGMENT;
···
 		if (!tmp_hdr) {
 			IP6_INC_STATS(net, ip6_dst_idev(skb_dst(skb)),
 				      IPSTATS_MIB_FRAGFAILS);
-			return -ENOMEM;
+			err = -ENOMEM;
+			goto fail;
 		}
+		frag = skb_shinfo(skb)->frag_list;
+		skb_frag_list_init(skb);
 
 		__skb_pull(skb, hlen);
 		fh = (struct frag_hdr *)__skb_push(skb, sizeof(struct frag_hdr));
···
 	 */
 
 	*prevhdr = NEXTHDR_FRAGMENT;
-	hroom = LL_RESERVED_SPACE(rt->dst.dev);
 	troom = rt->dst.dev->needed_tailroom;
 
 	/*
net/ipv6/ip6_tunnel.c | +107 -42
···
  * Locking : hash tables are protected by RCU and RTNL
  */

-struct dst_entry *ip6_tnl_dst_check(struct ip6_tnl *t)
+static void ip6_tnl_per_cpu_dst_set(struct ip6_tnl_dst *idst,
+				    struct dst_entry *dst)
 {
-	struct dst_entry *dst = t->dst_cache;
-
-	if (dst && dst->obsolete &&
-	    !dst->ops->check(dst, t->dst_cookie)) {
-		t->dst_cache = NULL;
-		dst_release(dst);
-		return NULL;
+	write_seqlock_bh(&idst->lock);
+	dst_release(rcu_dereference_protected(
+			    idst->dst,
+			    lockdep_is_held(&idst->lock.lock)));
+	if (dst) {
+		dst_hold(dst);
+		idst->cookie = rt6_get_cookie((struct rt6_info *)dst);
+	} else {
+		idst->cookie = 0;
 	}
+	rcu_assign_pointer(idst->dst, dst);
+	write_sequnlock_bh(&idst->lock);
+}

+struct dst_entry *ip6_tnl_dst_get(struct ip6_tnl *t)
+{
+	struct ip6_tnl_dst *idst;
+	struct dst_entry *dst;
+	unsigned int seq;
+	u32 cookie;
+
+	idst = raw_cpu_ptr(t->dst_cache);
+
+	rcu_read_lock();
+	do {
+		seq = read_seqbegin(&idst->lock);
+		dst = rcu_dereference(idst->dst);
+		cookie = idst->cookie;
+	} while (read_seqretry(&idst->lock, seq));
+
+	if (dst && !atomic_inc_not_zero(&dst->__refcnt))
+		dst = NULL;
+	rcu_read_unlock();
+
+	if (dst && dst->obsolete && !dst->ops->check(dst, cookie)) {
+		ip6_tnl_per_cpu_dst_set(idst, NULL);
+		dst_release(dst);
+		dst = NULL;
+	}
 	return dst;
 }
-EXPORT_SYMBOL_GPL(ip6_tnl_dst_check);
+EXPORT_SYMBOL_GPL(ip6_tnl_dst_get);

 void ip6_tnl_dst_reset(struct ip6_tnl *t)
 {
-	dst_release(t->dst_cache);
-	t->dst_cache = NULL;
+	int i;
+
+	for_each_possible_cpu(i)
+		ip6_tnl_per_cpu_dst_set(raw_cpu_ptr(t->dst_cache), NULL);
 }
 EXPORT_SYMBOL_GPL(ip6_tnl_dst_reset);

-void ip6_tnl_dst_store(struct ip6_tnl *t, struct dst_entry *dst)
+void ip6_tnl_dst_set(struct ip6_tnl *t, struct dst_entry *dst)
 {
-	struct rt6_info *rt = (struct rt6_info *) dst;
-	t->dst_cookie = rt6_get_cookie(rt);
-	dst_release(t->dst_cache);
-	t->dst_cache = dst;
+	ip6_tnl_per_cpu_dst_set(raw_cpu_ptr(t->dst_cache), dst);
+
 }
-EXPORT_SYMBOL_GPL(ip6_tnl_dst_store);
+EXPORT_SYMBOL_GPL(ip6_tnl_dst_set);
+
+void ip6_tnl_dst_destroy(struct ip6_tnl *t)
+{
+	if (!t->dst_cache)
+		return;
+
+	ip6_tnl_dst_reset(t);
+	free_percpu(t->dst_cache);
+}
+EXPORT_SYMBOL_GPL(ip6_tnl_dst_destroy);
+
+int ip6_tnl_dst_init(struct ip6_tnl *t)
+{
+	int i;
+
+	t->dst_cache = alloc_percpu(struct ip6_tnl_dst);
+	if (!t->dst_cache)
+		return -ENOMEM;
+
+	for_each_possible_cpu(i)
+		seqlock_init(&per_cpu_ptr(t->dst_cache, i)->lock);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(ip6_tnl_dst_init);

 /**
  * ip6_tnl_lookup - fetch tunnel matching the end-point addresses
···
 static void ip6_dev_free(struct net_device *dev)
 {
+	struct ip6_tnl *t = netdev_priv(dev);
+
+	ip6_tnl_dst_destroy(t);
 	free_percpu(dev->tstats);
 	free_netdev(dev);
 }
···
 	struct ipv6_tlv_tnl_enc_lim *tel;
 	__u32 mtu;
 	case ICMPV6_DEST_UNREACH:
-		net_warn_ratelimited("%s: Path to destination invalid or inactive!\n",
-				     t->parms.name);
+		net_dbg_ratelimited("%s: Path to destination invalid or inactive!\n",
+				    t->parms.name);
 		rel_msg = 1;
 		break;
 	case ICMPV6_TIME_EXCEED:
 		if ((*code) == ICMPV6_EXC_HOPLIMIT) {
-			net_warn_ratelimited("%s: Too small hop limit or routing loop in tunnel!\n",
-					     t->parms.name);
+			net_dbg_ratelimited("%s: Too small hop limit or routing loop in tunnel!\n",
+					    t->parms.name);
 			rel_msg = 1;
 		}
 		break;
···
 		if (teli && teli == *info - 2) {
 			tel = (struct ipv6_tlv_tnl_enc_lim *) &skb->data[teli];
 			if (tel->encap_limit == 0) {
-				net_warn_ratelimited("%s: Too small encapsulation limit or routing loop in tunnel!\n",
-						     t->parms.name);
+				net_dbg_ratelimited("%s: Too small encapsulation limit or routing loop in tunnel!\n",
+						    t->parms.name);
 				rel_msg = 1;
 			}
 		} else {
-			net_warn_ratelimited("%s: Recipient unable to parse tunneled packet!\n",
-					     t->parms.name);
+			net_dbg_ratelimited("%s: Recipient unable to parse tunneled packet!\n",
+					    t->parms.name);
 		}
 		break;
 	case ICMPV6_PKT_TOOBIG:
···
 		memcpy(&fl6->daddr, addr6, sizeof(fl6->daddr));
 		neigh_release(neigh);
 	} else if (!fl6->flowi6_mark)
-		dst = ip6_tnl_dst_check(t);
+		dst = ip6_tnl_dst_get(t);

 	if (!ip6_tnl_xmit_ctl(t, &fl6->saddr, &fl6->daddr))
 		goto tx_err_link_failure;

 	if (!dst) {
-		ndst = ip6_route_output(net, NULL, fl6);
+		dst = ip6_route_output(net, NULL, fl6);

-		if (ndst->error)
+		if (dst->error)
 			goto tx_err_link_failure;
-		ndst = xfrm_lookup(net, ndst, flowi6_to_flowi(fl6), NULL, 0);
-		if (IS_ERR(ndst)) {
-			err = PTR_ERR(ndst);
-			ndst = NULL;
+		dst = xfrm_lookup(net, dst, flowi6_to_flowi(fl6), NULL, 0);
+		if (IS_ERR(dst)) {
+			err = PTR_ERR(dst);
+			dst = NULL;
 			goto tx_err_link_failure;
 		}
-		dst = ndst;
+		ndst = dst;
 	}

 	tdev = dst->dev;
···
 		consume_skb(skb);
 		skb = new_skb;
 	}
-	if (fl6->flowi6_mark) {
-		skb_dst_set(skb, dst);
-		ndst = NULL;
-	} else {
-		skb_dst_set_noref(skb, dst);
-	}
+
+	if (!fl6->flowi6_mark && ndst)
+		ip6_tnl_dst_set(t, ndst);
+	skb_dst_set(skb, dst);
+
 	skb->transport_header = skb->network_header;

 	proto = fl6->flowi6_proto;
···
 	ipv6h->saddr = fl6->saddr;
 	ipv6h->daddr = fl6->daddr;
 	ip6tunnel_xmit(NULL, skb, dev);
-	if (ndst)
-		ip6_tnl_dst_store(t, ndst);
 	return 0;
 tx_err_link_failure:
 	stats->tx_carrier_errors++;
 	dst_link_failure(skb);
 tx_err_dst_release:
-	dst_release(ndst);
+	dst_release(dst);
 	return err;
 }
···
 ip6_tnl_dev_init_gen(struct net_device *dev)
 {
 	struct ip6_tnl *t = netdev_priv(dev);
+	int ret;

 	t->dev = dev;
 	t->net = dev_net(dev);
 	dev->tstats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats);
 	if (!dev->tstats)
 		return -ENOMEM;
+
+	ret = ip6_tnl_dst_init(t);
+	if (ret) {
+		free_percpu(dev->tstats);
+		dev->tstats = NULL;
+		return ret;
+	}
+
 	return 0;
 }
+10 -6
net/ipv6/route.c
···
 	if (rt) {
 		if (rt->rt6i_flags & RTF_CACHE) {
 			dst_hold(&rt->dst);
-			if (ip6_del_rt(rt))
-				dst_free(&rt->dst);
+			ip6_del_rt(rt);
 		} else if (rt->rt6i_node && (rt->rt6i_flags & RTF_DEFAULT)) {
 			rt->rt6i_node->fn_sernum = -1;
 		}
···
 		rt->dst.input = ip6_pkt_prohibit;
 		break;
 	case RTN_THROW:
+	case RTN_UNREACHABLE:
 	default:
 		rt->dst.error = (cfg->fc_type == RTN_THROW) ? -EAGAIN
-				: -ENETUNREACH;
+				: (cfg->fc_type == RTN_UNREACHABLE)
+				? -EHOSTUNREACH : -ENETUNREACH;
 		rt->dst.output = ip6_pkt_discard_out;
 		rt->dst.input = ip6_pkt_discard;
 		break;
···
 	struct fib6_table *table;
 	struct net *net = dev_net(rt->dst.dev);

-	if (rt == net->ipv6.ip6_null_entry) {
+	if (rt == net->ipv6.ip6_null_entry ||
+	    rt->dst.flags & DST_NOCACHE) {
 		err = -ENOENT;
 		goto out;
 	}
···
 	rt->rt6i_dst.addr = *addr;
 	rt->rt6i_dst.plen = 128;
 	rt->rt6i_table = fib6_get_table(net, RT6_TABLE_LOCAL);
+	rt->dst.flags |= DST_NOCACHE;

 	atomic_set(&rt->dst.__refcnt, 1);
···
 	return err;
 }

-void inet6_rt_notify(int event, struct rt6_info *rt, struct nl_info *info)
+void inet6_rt_notify(int event, struct rt6_info *rt, struct nl_info *info,
+		     unsigned int nlm_flags)
 {
 	struct sk_buff *skb;
 	struct net *net = info->nl_net;
···
 		goto errout;

 	err = rt6_fill_node(net, skb, rt, NULL, NULL, 0,
-			    event, info->portid, seq, 0, 0, 0);
+			    event, info->portid, seq, 0, 0, nlm_flags);
 	if (err < 0) {
 		/* -EMSGSIZE implies BUG in rt6_nlmsg_size() */
 		WARN_ON(err == -EMSGSIZE);
+10 -7
net/mac80211/cfg.c
···

 	bss_conf->cqm_rssi_thold = rssi_thold;
 	bss_conf->cqm_rssi_hyst = rssi_hyst;
+	sdata->u.mgd.last_cqm_event_signal = 0;

 	/* tell the driver upon association, unless already associated */
 	if (sdata->u.mgd.associated &&
···
 			continue;

 		for (j = 0; j < IEEE80211_HT_MCS_MASK_LEN; j++) {
-			if (~sdata->rc_rateidx_mcs_mask[i][j])
+			if (~sdata->rc_rateidx_mcs_mask[i][j]) {
 				sdata->rc_has_mcs_mask[i] = true;
-
-			if (~sdata->rc_rateidx_vht_mcs_mask[i][j])
-				sdata->rc_has_vht_mcs_mask[i] = true;
-
-			if (sdata->rc_has_mcs_mask[i] &&
-			    sdata->rc_has_vht_mcs_mask[i])
 				break;
+			}
+		}
+
+		for (j = 0; j < NL80211_VHT_NSS_MAX; j++) {
+			if (~sdata->rc_rateidx_vht_mcs_mask[i][j]) {
+				sdata->rc_has_vht_mcs_mask[i] = true;
+				break;
+			}
 		}
 	}
+7 -2
net/netfilter/nf_log.c
···

 void nf_log_unregister(struct nf_logger *logger)
 {
+	const struct nf_logger *log;
 	int i;

 	mutex_lock(&nf_log_mutex);
-	for (i = 0; i < NFPROTO_NUMPROTO; i++)
-		RCU_INIT_POINTER(loggers[i][logger->type], NULL);
+	for (i = 0; i < NFPROTO_NUMPROTO; i++) {
+		log = nft_log_dereference(loggers[i][logger->type]);
+		if (log == logger)
+			RCU_INIT_POINTER(loggers[i][logger->type], NULL);
+	}
 	mutex_unlock(&nf_log_mutex);
+	synchronize_rcu();
 }
 EXPORT_SYMBOL(nf_log_unregister);
+18 -6
net/netfilter/nft_compat.c
···

 static struct nft_expr_type nft_match_type;

+static bool nft_match_cmp(const struct xt_match *match,
+			  const char *name, u32 rev, u32 family)
+{
+	return strcmp(match->name, name) == 0 && match->revision == rev &&
+	       (match->family == NFPROTO_UNSPEC || match->family == family);
+}
+
 static const struct nft_expr_ops *
 nft_match_select_ops(const struct nft_ctx *ctx,
 		     const struct nlattr * const tb[])
···
 	struct nft_xt *nft_match;
 	struct xt_match *match;
 	char *mt_name;
-	__u32 rev, family;
+	u32 rev, family;

 	if (tb[NFTA_MATCH_NAME] == NULL ||
 	    tb[NFTA_MATCH_REV] == NULL ||
···
 	list_for_each_entry(nft_match, &nft_match_list, head) {
 		struct xt_match *match = nft_match->ops.data;

-		if (strcmp(match->name, mt_name) == 0 &&
-		    match->revision == rev && match->family == family) {
+		if (nft_match_cmp(match, mt_name, rev, family)) {
 			if (!try_module_get(match->me))
 				return ERR_PTR(-ENOENT);
···

 static struct nft_expr_type nft_target_type;

+static bool nft_target_cmp(const struct xt_target *tg,
+			   const char *name, u32 rev, u32 family)
+{
+	return strcmp(tg->name, name) == 0 && tg->revision == rev &&
+	       (tg->family == NFPROTO_UNSPEC || tg->family == family);
+}
+
 static const struct nft_expr_ops *
 nft_target_select_ops(const struct nft_ctx *ctx,
 		      const struct nlattr * const tb[])
···
 	struct nft_xt *nft_target;
 	struct xt_target *target;
 	char *tg_name;
-	__u32 rev, family;
+	u32 rev, family;

 	if (tb[NFTA_TARGET_NAME] == NULL ||
 	    tb[NFTA_TARGET_REV] == NULL ||
···
 	list_for_each_entry(nft_target, &nft_target_list, head) {
 		struct xt_target *target = nft_target->ops.data;

-		if (strcmp(target->name, tg_name) == 0 &&
-		    target->revision == rev && target->family == family) {
+		if (nft_target_cmp(target, tg_name, rev, family)) {
 			if (!try_module_get(target->me))
 				return ERR_PTR(-ENOENT);
+49 -14
net/netlink/af_netlink.c
···
 	return group ? 1 << (group - 1) : 0;
 }

+static struct sk_buff *netlink_to_full_skb(const struct sk_buff *skb,
+					   gfp_t gfp_mask)
+{
+	unsigned int len = skb_end_offset(skb);
+	struct sk_buff *new;
+
+	new = alloc_skb(len, gfp_mask);
+	if (new == NULL)
+		return NULL;
+
+	NETLINK_CB(new).portid = NETLINK_CB(skb).portid;
+	NETLINK_CB(new).dst_group = NETLINK_CB(skb).dst_group;
+	NETLINK_CB(new).creds = NETLINK_CB(skb).creds;
+
+	memcpy(skb_put(new, len), skb->data, len);
+	return new;
+}
+
 int netlink_add_tap(struct netlink_tap *nt)
 {
 	if (unlikely(nt->dev->type != ARPHRD_NETLINK))
···
 	int ret = -ENOMEM;

 	dev_hold(dev);
-	nskb = skb_clone(skb, GFP_ATOMIC);
+
+	if (netlink_skb_is_mmaped(skb) || is_vmalloc_addr(skb->head))
+		nskb = netlink_to_full_skb(skb, GFP_ATOMIC);
+	else
+		nskb = skb_clone(skb, GFP_ATOMIC);
 	if (nskb) {
 		nskb->dev = dev;
 		nskb->protocol = htons((u16) sk->sk_protocol);
···
 }

 #ifdef CONFIG_NETLINK_MMAP
-static bool netlink_skb_is_mmaped(const struct sk_buff *skb)
-{
-	return NETLINK_CB(skb).flags & NETLINK_SKB_MMAPED;
-}
-
 static bool netlink_rx_is_mmaped(struct sock *sk)
 {
 	return nlk_sk(sk)->rx_ring.pg_vec != NULL;
···
 }

 #else /* CONFIG_NETLINK_MMAP */
-#define netlink_skb_is_mmaped(skb)	false
 #define netlink_rx_is_mmaped(sk)	false
 #define netlink_tx_is_mmaped(sk)	false
 #define netlink_mmap			sock_no_mmap
···

 	lock_sock(sk);

-	err = -EBUSY;
-	if (nlk_sk(sk)->portid)
+	err = nlk_sk(sk)->portid == portid ? 0 : -EBUSY;
+	if (nlk_sk(sk)->bound)
 		goto err;

 	err = -ENOMEM;
···
 		err = -EOVERFLOW;
 		if (err == -EEXIST)
 			err = -EADDRINUSE;
-		nlk_sk(sk)->portid = 0;
 		sock_put(sk);
+		goto err;
 	}
+
+	/* We need to ensure that the socket is hashed and visible. */
+	smp_wmb();
+	nlk_sk(sk)->bound = portid;

 err:
 	release_sock(sk);
···
 	struct sockaddr_nl *nladdr = (struct sockaddr_nl *)addr;
 	int err;
 	long unsigned int groups = nladdr->nl_groups;
+	bool bound;

 	if (addr_len < sizeof(struct sockaddr_nl))
 		return -EINVAL;
···
 		return err;
 	}

-	if (nlk->portid)
+	bound = nlk->bound;
+	if (bound) {
+		/* Ensure nlk->portid is up-to-date. */
+		smp_rmb();
+
 		if (nladdr->nl_pid != nlk->portid)
 			return -EINVAL;
+	}

 	if (nlk->netlink_bind && groups) {
 		int group;
···
 		}
 	}

-	if (!nlk->portid) {
+	/* No need for barriers here as we return to user-space without
+	 * using any of the bound attributes.
+	 */
+	if (!bound) {
 		err = nladdr->nl_pid ?
 			netlink_insert(sk, nladdr->nl_pid) :
 			netlink_autobind(sock);
···
 	    !netlink_allowed(sock, NL_CFG_F_NONROOT_SEND))
 		return -EPERM;

-	if (!nlk->portid)
+	/* No need for barriers here as we return to user-space without
+	 * using any of the bound attributes.
+	 */
+	if (!nlk->bound)
 		err = netlink_autobind(sock);

 	if (err == 0) {
···
 		dst_group = nlk->dst_group;
 	}

-	if (!nlk->portid) {
+	if (!nlk->bound) {
 		err = netlink_autobind(sock);
 		if (err)
 			goto out;
+	} else {
+		/* Ensure nlk is hashed and visible. */
+		smp_rmb();
 	}

 	/* It's a really convoluted way for userland to ask for mmaped
+10
net/netlink/af_netlink.h
···
 	unsigned long		state;
 	size_t			max_recvmsg_len;
 	wait_queue_head_t	wait;
+	bool			bound;
 	bool			cb_running;
 	struct netlink_callback	cb;
 	struct mutex		*cb_mutex;
···
 static inline struct netlink_sock *nlk_sk(struct sock *sk)
 {
 	return container_of(sk, struct netlink_sock, sk);
+}
+
+static inline bool netlink_skb_is_mmaped(const struct sk_buff *skb)
+{
+#ifdef CONFIG_NETLINK_MMAP
+	return NETLINK_CB(skb).flags & NETLINK_SKB_MMAPED;
+#else
+	return false;
+#endif /* CONFIG_NETLINK_MMAP */
 }

 struct netlink_table {
+2 -1
net/openvswitch/Kconfig
···
 config OPENVSWITCH
 	tristate "Open vSwitch"
 	depends on INET
-	depends on (!NF_CONNTRACK || NF_CONNTRACK)
+	depends on !NF_CONNTRACK || \
+		   (NF_CONNTRACK && (!NF_DEFRAG_IPV6 || NF_DEFRAG_IPV6))
 	select LIBCRC32C
 	select MPLS
 	select NET_MPLS_GSO
+5 -3
net/openvswitch/conntrack.c
···
 	case NFPROTO_IPV6: {
 		u8 nexthdr = ipv6_hdr(skb)->nexthdr;
 		__be16 frag_off;
+		int ofs;

-		protoff = ipv6_skip_exthdr(skb, sizeof(struct ipv6hdr),
-					   &nexthdr, &frag_off);
-		if (protoff < 0 || (frag_off & htons(~0x7)) != 0) {
+		ofs = ipv6_skip_exthdr(skb, sizeof(struct ipv6hdr), &nexthdr,
+				       &frag_off);
+		if (ofs < 0 || (frag_off & htons(~0x7)) != 0) {
 			pr_debug("proto header not found\n");
 			return NF_ACCEPT;
 		}
+		protoff = ofs;
 		break;
 	}
 	default:
+2 -2
net/openvswitch/datapath.c
···
 	if (error)
 		goto err_kfree_flow;

-	ovs_flow_mask_key(&new_flow->key, &key, &mask);
+	ovs_flow_mask_key(&new_flow->key, &key, true, &mask);

 	/* Extract flow identifier. */
 	error = ovs_nla_get_identifier(&new_flow->id, a[OVS_FLOW_ATTR_UFID],
···
 	struct sw_flow_key masked_key;
 	int error;

-	ovs_flow_mask_key(&masked_key, key, mask);
+	ovs_flow_mask_key(&masked_key, key, true, mask);
 	error = ovs_nla_copy_actions(net, a, &masked_key, &acts, log);
 	if (error) {
 		OVS_NLERR(log,
+59 -23
net/openvswitch/flow_netlink.c
···
 };

 #define OVS_ATTR_NESTED -1
+#define OVS_ATTR_VARIABLE -2

 static void update_range(struct sw_flow_match *match,
 			 size_t offset, size_t size, bool is_mask)
···
 		+ nla_total_size(28); /* OVS_KEY_ATTR_ND */
 }

+static const struct ovs_len_tbl ovs_vxlan_ext_key_lens[OVS_VXLAN_EXT_MAX + 1] = {
+	[OVS_VXLAN_EXT_GBP]	= { .len = sizeof(u32) },
+};
+
 static const struct ovs_len_tbl ovs_tunnel_key_lens[OVS_TUNNEL_KEY_ATTR_MAX + 1] = {
 	[OVS_TUNNEL_KEY_ATTR_ID]	    = { .len = sizeof(u64) },
 	[OVS_TUNNEL_KEY_ATTR_IPV4_SRC]	    = { .len = sizeof(u32) },
···
 	[OVS_TUNNEL_KEY_ATTR_TP_SRC]	    = { .len = sizeof(u16) },
 	[OVS_TUNNEL_KEY_ATTR_TP_DST]	    = { .len = sizeof(u16) },
 	[OVS_TUNNEL_KEY_ATTR_OAM]	    = { .len = 0 },
-	[OVS_TUNNEL_KEY_ATTR_GENEVE_OPTS]   = { .len = OVS_ATTR_NESTED },
-	[OVS_TUNNEL_KEY_ATTR_VXLAN_OPTS]    = { .len = OVS_ATTR_NESTED },
+	[OVS_TUNNEL_KEY_ATTR_GENEVE_OPTS]   = { .len = OVS_ATTR_VARIABLE },
+	[OVS_TUNNEL_KEY_ATTR_VXLAN_OPTS]    = { .len = OVS_ATTR_NESTED,
+						.next = ovs_vxlan_ext_key_lens },
 };

 /* The size of the argument for each %OVS_KEY_ATTR_* Netlink attribute.  */
···
 	[OVS_KEY_ATTR_CT_MARK]	 = { .len = sizeof(u32) },
 	[OVS_KEY_ATTR_CT_LABEL]	 = { .len = sizeof(struct ovs_key_ct_label) },
 };
+
+static bool check_attr_len(unsigned int attr_len, unsigned int expected_len)
+{
+	return expected_len == attr_len ||
+	       expected_len == OVS_ATTR_NESTED ||
+	       expected_len == OVS_ATTR_VARIABLE;
+}

 static bool is_all_zero(const u8 *fp, size_t size)
 {
···
 	}

 	expected_len = ovs_key_lens[type].len;
-	if (nla_len(nla) != expected_len && expected_len != OVS_ATTR_NESTED) {
+	if (!check_attr_len(nla_len(nla), expected_len)) {
 		OVS_NLERR(log, "Key %d has unexpected len %d expected %d",
 			  type, nla_len(nla), expected_len);
 		return -EINVAL;
···
 	return 0;
 }

-static const struct nla_policy vxlan_opt_policy[OVS_VXLAN_EXT_MAX + 1] = {
-	[OVS_VXLAN_EXT_GBP]	= { .type = NLA_U32 },
-};
-
-static int vxlan_tun_opt_from_nlattr(const struct nlattr *a,
+static int vxlan_tun_opt_from_nlattr(const struct nlattr *attr,
 				     struct sw_flow_match *match, bool is_mask,
 				     bool log)
 {
-	struct nlattr *tb[OVS_VXLAN_EXT_MAX+1];
+	struct nlattr *a;
+	int rem;
 	unsigned long opt_key_offset;
 	struct vxlan_metadata opts;
-	int err;

 	BUILD_BUG_ON(sizeof(opts) > sizeof(match->key->tun_opts));

-	err = nla_parse_nested(tb, OVS_VXLAN_EXT_MAX, a, vxlan_opt_policy);
-	if (err < 0)
-		return err;
-
 	memset(&opts, 0, sizeof(opts));
+	nla_for_each_nested(a, attr, rem) {
+		int type = nla_type(a);

-	if (tb[OVS_VXLAN_EXT_GBP])
-		opts.gbp = nla_get_u32(tb[OVS_VXLAN_EXT_GBP]);
+		if (type > OVS_VXLAN_EXT_MAX) {
+			OVS_NLERR(log, "VXLAN extension %d out of range max %d",
+				  type, OVS_VXLAN_EXT_MAX);
+			return -EINVAL;
+		}
+
+		if (!check_attr_len(nla_len(a),
+				    ovs_vxlan_ext_key_lens[type].len)) {
+			OVS_NLERR(log, "VXLAN extension %d has unexpected len %d expected %d",
+				  type, nla_len(a),
+				  ovs_vxlan_ext_key_lens[type].len);
+			return -EINVAL;
+		}
+
+		switch (type) {
+		case OVS_VXLAN_EXT_GBP:
+			opts.gbp = nla_get_u32(a);
+			break;
+		default:
+			OVS_NLERR(log, "Unknown VXLAN extension attribute %d",
+				  type);
+			return -EINVAL;
+		}
+	}
+	if (rem) {
+		OVS_NLERR(log, "VXLAN extension message has %d unknown bytes.",
+			  rem);
+		return -EINVAL;
+	}

 	if (!is_mask)
 		SW_FLOW_KEY_PUT(match, tun_opts_len, sizeof(opts), false);
···
 		return -EINVAL;
 	}

-	if (ovs_tunnel_key_lens[type].len != nla_len(a) &&
-	    ovs_tunnel_key_lens[type].len != OVS_ATTR_NESTED) {
+	if (!check_attr_len(nla_len(a),
+			    ovs_tunnel_key_lens[type].len)) {
 		OVS_NLERR(log, "Tunnel attr %d has unexpected len %d expected %d",
 			  type, nla_len(a), ovs_tunnel_key_lens[type].len);
 		return -EINVAL;
···

 	/* The nlattr stream should already have been validated */
 	nla_for_each_nested(nla, attr, rem) {
-		if (tbl && tbl[nla_type(nla)].len == OVS_ATTR_NESTED)
-			nlattr_set(nla, val, tbl[nla_type(nla)].next);
-		else
+		if (tbl[nla_type(nla)].len == OVS_ATTR_NESTED) {
+			if (tbl[nla_type(nla)].next)
+				tbl = tbl[nla_type(nla)].next;
+			nlattr_set(nla, val, tbl);
+		} else {
 			memset(nla_data(nla), val, nla_len(nla));
+		}
 	}
 }
···
 	key_len /= 2;

 	if (key_type > OVS_KEY_ATTR_MAX ||
-	    (ovs_key_lens[key_type].len != key_len &&
-	     ovs_key_lens[key_type].len != OVS_ATTR_NESTED))
+	    !check_attr_len(key_len, ovs_key_lens[key_type].len))
 		return -EINVAL;

 	if (masked && !validate_masked(nla_data(ovs_key), key_len))
+12 -11
net/openvswitch/flow_table.c
···
 }

 void ovs_flow_mask_key(struct sw_flow_key *dst, const struct sw_flow_key *src,
-		       const struct sw_flow_mask *mask)
+		       bool full, const struct sw_flow_mask *mask)
 {
-	const long *m = (const long *)((const u8 *)&mask->key +
-				       mask->range.start);
-	const long *s = (const long *)((const u8 *)src +
-				       mask->range.start);
-	long *d = (long *)((u8 *)dst + mask->range.start);
+	int start = full ? 0 : mask->range.start;
+	int len = full ? sizeof *dst : range_n_bytes(&mask->range);
+	const long *m = (const long *)((const u8 *)&mask->key + start);
+	const long *s = (const long *)((const u8 *)src + start);
+	long *d = (long *)((u8 *)dst + start);
 	int i;

-	/* The memory outside of the 'mask->range' are not set since
-	 * further operations on 'dst' only uses contents within
-	 * 'mask->range'.
+	/* If 'full' is true then all of 'dst' is fully initialized. Otherwise,
+	 * if 'full' is false the memory outside of the 'mask->range' is left
+	 * uninitialized. This can be used as an optimization when further
+	 * operations on 'dst' only use contents within 'mask->range'.
 	 */
-	for (i = 0; i < range_n_bytes(&mask->range); i += sizeof(long))
+	for (i = 0; i < len; i += sizeof(long))
 		*d++ = *s++ & *m++;
 }
···
 	u32 hash;
 	struct sw_flow_key masked_key;

-	ovs_flow_mask_key(&masked_key, unmasked, mask);
+	ovs_flow_mask_key(&masked_key, unmasked, false, mask);
 	hash = flow_hash(&masked_key, &mask->range);
 	head = find_bucket(ti, hash);
 	hlist_for_each_entry_rcu(flow, head, flow_table.node[ti->node_ver]) {
+1 -1
net/openvswitch/flow_table.h
···
 bool ovs_flow_cmp(const struct sw_flow *, const struct sw_flow_match *);

 void ovs_flow_mask_key(struct sw_flow_key *dst, const struct sw_flow_key *src,
-		       const struct sw_flow_mask *mask);
+		       bool full, const struct sw_flow_mask *mask);
 #endif /* flow_table.h */
+17 -15
net/packet/af_packet.c
···
 	} sa;
 };

+#define vio_le() virtio_legacy_is_little_endian()
+
 #define PACKET_SKB_CB(__skb)	((struct packet_skb_cb *)((__skb)->cb))

 #define GET_PBDQC_FROM_RB(x)	((struct tpacket_kbdq_core *)(&(x)->prb_bdqc))
···
 		goto out_unlock;

 	if ((vnet_hdr.flags & VIRTIO_NET_HDR_F_NEEDS_CSUM) &&
-	    (__virtio16_to_cpu(false, vnet_hdr.csum_start) +
-	     __virtio16_to_cpu(false, vnet_hdr.csum_offset) + 2 >
-	      __virtio16_to_cpu(false, vnet_hdr.hdr_len)))
-		vnet_hdr.hdr_len = __cpu_to_virtio16(false,
-			 __virtio16_to_cpu(false, vnet_hdr.csum_start) +
-			__virtio16_to_cpu(false, vnet_hdr.csum_offset) + 2);
+	    (__virtio16_to_cpu(vio_le(), vnet_hdr.csum_start) +
+	     __virtio16_to_cpu(vio_le(), vnet_hdr.csum_offset) + 2 >
+	      __virtio16_to_cpu(vio_le(), vnet_hdr.hdr_len)))
+		vnet_hdr.hdr_len = __cpu_to_virtio16(vio_le(),
+			 __virtio16_to_cpu(vio_le(), vnet_hdr.csum_start) +
+			__virtio16_to_cpu(vio_le(), vnet_hdr.csum_offset) + 2);

 	err = -EINVAL;
-	if (__virtio16_to_cpu(false, vnet_hdr.hdr_len) > len)
+	if (__virtio16_to_cpu(vio_le(), vnet_hdr.hdr_len) > len)
 		goto out_unlock;

 	if (vnet_hdr.gso_type != VIRTIO_NET_HDR_GSO_NONE) {
···
 	hlen = LL_RESERVED_SPACE(dev);
 	tlen = dev->needed_tailroom;
 	skb = packet_alloc_skb(sk, hlen + tlen, hlen, len,
-			       __virtio16_to_cpu(false, vnet_hdr.hdr_len),
+			       __virtio16_to_cpu(vio_le(), vnet_hdr.hdr_len),
 			       msg->msg_flags & MSG_DONTWAIT, &err);
 	if (skb == NULL)
 		goto out_unlock;
···

 	if (po->has_vnet_hdr) {
 		if (vnet_hdr.flags & VIRTIO_NET_HDR_F_NEEDS_CSUM) {
-			u16 s = __virtio16_to_cpu(false, vnet_hdr.csum_start);
-			u16 o = __virtio16_to_cpu(false, vnet_hdr.csum_offset);
+			u16 s = __virtio16_to_cpu(vio_le(), vnet_hdr.csum_start);
+			u16 o = __virtio16_to_cpu(vio_le(), vnet_hdr.csum_offset);
 			if (!skb_partial_csum_set(skb, s, o)) {
 				err = -EINVAL;
 				goto out_free;
···
 		}

 		skb_shinfo(skb)->gso_size =
-			__virtio16_to_cpu(false, vnet_hdr.gso_size);
+			__virtio16_to_cpu(vio_le(), vnet_hdr.gso_size);
 		skb_shinfo(skb)->gso_type = gso_type;

 		/* Header must be checked, and gso_segs computed. */
···

 		/* This is a hint as to how much should be linear. */
 		vnet_hdr.hdr_len =
-			__cpu_to_virtio16(false, skb_headlen(skb));
+			__cpu_to_virtio16(vio_le(), skb_headlen(skb));
 		vnet_hdr.gso_size =
-			__cpu_to_virtio16(false, sinfo->gso_size);
+			__cpu_to_virtio16(vio_le(), sinfo->gso_size);
 		if (sinfo->gso_type & SKB_GSO_TCPV4)
 			vnet_hdr.gso_type = VIRTIO_NET_HDR_GSO_TCPV4;
 		else if (sinfo->gso_type & SKB_GSO_TCPV6)
···

 		if (skb->ip_summed == CHECKSUM_PARTIAL) {
 			vnet_hdr.flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
-			vnet_hdr.csum_start = __cpu_to_virtio16(false,
-					skb_checksum_start_offset(skb));
-			vnet_hdr.csum_offset = __cpu_to_virtio16(false,
-					skb->csum_offset);
+			vnet_hdr.csum_start = __cpu_to_virtio16(vio_le(),
+					skb_checksum_start_offset(skb));
+			vnet_hdr.csum_offset = __cpu_to_virtio16(vio_le(),
+					skb->csum_offset);
 		} else if (skb->ip_summed == CHECKSUM_UNNECESSARY) {
 			vnet_hdr.flags = VIRTIO_NET_HDR_F_DATA_VALID;
+15 -15
net/sched/cls_fw.c
···

 struct fw_head {
 	u32			mask;
-	bool			mask_set;
 	struct fw_filter __rcu	*ht[HTSIZE];
 	struct rcu_head		rcu;
 };
···
 			}
 		}
 	} else {
-		/* old method */
+		/* Old method: classify the packet using its skb mark. */
 		if (id && (TC_H_MAJ(id) == 0 ||
 			   !(TC_H_MAJ(id ^ tp->q->handle)))) {
 			res->classid = id;
···

 static int fw_init(struct tcf_proto *tp)
 {
-	struct fw_head *head;
-
-	head = kzalloc(sizeof(struct fw_head), GFP_KERNEL);
-	if (head == NULL)
-		return -ENOBUFS;
-
-	head->mask_set = false;
-	rcu_assign_pointer(tp->root, head);
+	/* We don't allocate fw_head here, because in the old method
+	 * we don't need it at all.
+	 */
 	return 0;
 }
···
 	int err;

 	if (!opt)
-		return handle ? -EINVAL : 0;
+		return handle ? -EINVAL : 0; /* Succeed if it is old method. */

 	err = nla_parse_nested(tb, TCA_FW_MAX, opt, fw_policy);
 	if (err < 0)
···
 	if (!handle)
 		return -EINVAL;

-	if (!head->mask_set) {
-		head->mask = 0xFFFFFFFF;
+	if (!head) {
+		u32 mask = 0xFFFFFFFF;
 		if (tb[TCA_FW_MASK])
-			head->mask = nla_get_u32(tb[TCA_FW_MASK]);
-		head->mask_set = true;
+			mask = nla_get_u32(tb[TCA_FW_MASK]);
+
+		head = kzalloc(sizeof(*head), GFP_KERNEL);
+		if (!head)
+			return -ENOBUFS;
+		head->mask = mask;
+
+		rcu_assign_pointer(tp->root, head);
 	}

 	f = kzalloc(sizeof(struct fw_filter), GFP_KERNEL);
+41 -23
net/sctp/protocol.c
···
 	unregister_inetaddr_notifier(&sctp_inetaddr_notifier);
 }

-static int __net_init sctp_net_init(struct net *net)
+static int __net_init sctp_defaults_init(struct net *net)
 {
 	int status;

···

 	sctp_dbg_objcnt_init(net);

-	/* Initialize the control inode/socket for handling OOTB packets.  */
-	if ((status = sctp_ctl_sock_init(net))) {
-		pr_err("Failed to initialize the SCTP control sock\n");
-		goto err_ctl_sock_init;
-	}
-
 	/* Initialize the local address list. */
 	INIT_LIST_HEAD(&net->sctp.local_addr_list);
 	spin_lock_init(&net->sctp.local_addr_lock);
···

 	return 0;

-err_ctl_sock_init:
-	sctp_dbg_objcnt_exit(net);
-	sctp_proc_exit(net);
 err_init_proc:
 	cleanup_sctp_mibs(net);
 err_init_mibs:
···
 	return status;
 }

-static void __net_exit sctp_net_exit(struct net *net)
+static void __net_exit sctp_defaults_exit(struct net *net)
 {
 	/* Free the local address list */
 	sctp_free_addr_wq(net);
 	sctp_free_local_addr_list(net);

-	/* Free the control endpoint. */
-	inet_ctl_sock_destroy(net->sctp.ctl_sock);
-
 	sctp_dbg_objcnt_exit(net);
···
 	sctp_sysctl_net_unregister(net);
 }

-static struct pernet_operations sctp_net_ops = {
-	.init = sctp_net_init,
-	.exit = sctp_net_exit,
+static struct pernet_operations sctp_defaults_ops = {
+	.init = sctp_defaults_init,
+	.exit = sctp_defaults_exit,
+};
+
+static int __net_init sctp_ctrlsock_init(struct net *net)
+{
+	int status;
+
+	/* Initialize the control inode/socket for handling OOTB packets.  */
+	status = sctp_ctl_sock_init(net);
+	if (status)
+		pr_err("Failed to initialize the SCTP control sock\n");
+
+	return status;
+}
+
+static void __net_init sctp_ctrlsock_exit(struct net *net)
+{
+	/* Free the control endpoint. */
+	inet_ctl_sock_destroy(net->sctp.ctl_sock);
+}
+
+static struct pernet_operations sctp_ctrlsock_ops = {
+	.init = sctp_ctrlsock_init,
+	.exit = sctp_ctrlsock_exit,
 };

 /* Initialize the universe into something sensible. */
···
 	sctp_v4_pf_init();
 	sctp_v6_pf_init();

-	status = sctp_v4_protosw_init();
+	status = register_pernet_subsys(&sctp_defaults_ops);
+	if (status)
+		goto err_register_defaults;

+	status = sctp_v4_protosw_init();
 	if (status)
 		goto err_protosw_init;
···
 	if (status)
 		goto err_v6_protosw_init;

-	status = register_pernet_subsys(&sctp_net_ops);
+	status = register_pernet_subsys(&sctp_ctrlsock_ops);
 	if (status)
-		goto err_register_pernet_subsys;
+		goto err_register_ctrlsock;

 	status = sctp_v4_add_protocol();
 	if (status)
···
 err_v6_add_protocol:
 	sctp_v4_del_protocol();
 err_add_protocol:
-	unregister_pernet_subsys(&sctp_net_ops);
-err_register_pernet_subsys:
+	unregister_pernet_subsys(&sctp_ctrlsock_ops);
+err_register_ctrlsock:
 	sctp_v6_protosw_exit();
 err_v6_protosw_init:
 	sctp_v4_protosw_exit();
 err_protosw_init:
+	unregister_pernet_subsys(&sctp_defaults_ops);
+err_register_defaults:
 	sctp_v4_pf_exit();
 	sctp_v6_pf_exit();
 	sctp_sysctl_unregister();
···
 	sctp_v6_del_protocol();
 	sctp_v4_del_protocol();

-	unregister_pernet_subsys(&sctp_net_ops);
+	unregister_pernet_subsys(&sctp_ctrlsock_ops);

 	/* Free protosw registrations */
 	sctp_v6_protosw_exit();
 	sctp_v4_protosw_exit();
+
+	unregister_pernet_subsys(&sctp_defaults_ops);

 	/* Unregister with socket layer. */
 	sctp_v6_pf_exit();
+1
net/tipc/msg.c
···
 	*err = -TIPC_ERR_NO_NAME;
 	if (skb_linearize(skb))
 		return false;
+	msg = buf_msg(skb);
 	if (msg_reroute_cnt(msg))
 		return false;
 	dnode = addr_domain(net, msg_lookup_scope(msg));