
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Pull networking fixes from David Miller:

1) MODULE_FIRMWARE firmware string not correct for iwlwifi 8000 chips,
from Sara Sharon.

2) Fix SKB size checks in batman-adv stack on receive, from Sven
Eckelmann.

3) Leak fix on mac80211 interface add error paths, from Johannes Berg.

4) Cannot invoke napi_disable() with BH disabled in myri10ge driver,
fix from Stanislaw Gruszka.

5) Fix sign extension problem when computing feature masks in
net_gso_ok(), from Marcelo Ricardo Leitner.

6) lan78xx driver doesn't count packets and packet lengths in its
statistics properly, fix from Woojung Huh.

7) Fix the buffer allocation sizes in pegasus USB driver, from Petko
Manolov.

8) Fix refcount overflows in bpf, from Alexei Starovoitov.

9) Unified dst cache handling introduced a preempt warning in
ip_tunnel, fix by resetting rather than setting the cached route.
From Paolo Abeni.

10) Listener hash collision test fix in soreuseport, from Craig Gallek.

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (47 commits)
gre: do not pull header in ICMP error processing
net: Implement net_dbg_ratelimited() for CONFIG_DYNAMIC_DEBUG case
tipc: only process unicast on intended node
cxgb3: fix out of bounds read
net/smscx5xx: use the device tree for mac address
soreuseport: Fix TCP listener hash collision
net: l2tp: fix reversed udp6 checksum flags
ip_tunnel: fix preempt warning in ip tunnel creation/updating
samples/bpf: fix trace_output example
bpf: fix check_map_func_compatibility logic
bpf: fix refcnt overflow
drivers: net: cpsw: use of_phy_connect() in fixed-link case
dt: cpsw: phy-handle, phy_id, and fixed-link are mutually exclusive
drivers: net: cpsw: don't ignore phy-mode if phy-handle is used
drivers: net: cpsw: fix segfault in case of bad phy-handle
drivers: net: cpsw: fix parsing of phy-handle DT property in dual_emac config
MAINTAINERS: net: Change maintainer for GRETH 10/100/1G Ethernet MAC device driver
gre: reject GUE and FOU in collect metadata mode
pegasus: fixes reported packet length
pegasus: fixes URB buffer allocation size;
...

+381 -191
+3 -3
Documentation/devicetree/bindings/net/cpsw.txt
··· 45 45 Optional properties: 46 46 - dual_emac_res_vlan : Specifies VID to be used to segregate the ports 47 47 - mac-address : See ethernet.txt file in the same directory 48 - - phy_id : Specifies slave phy id 48 + - phy_id : Specifies slave phy id (deprecated, use phy-handle) 49 49 - phy-handle : See ethernet.txt file in the same directory 50 50 51 51 Slave sub-nodes: 52 52 - fixed-link : See fixed-link.txt file in the same directory 53 - Either the property phy_id, or the sub-node 54 - fixed-link can be specified 53 + 54 + Note: Exactly one of phy_id, phy-handle, or fixed-link must be specified. 55 55 56 56 Note: "ti,hwmods" field is used to fetch the base address and irq 57 57 resources from TI, omap hwmod data base during device registration.
+3 -3
Documentation/networking/altera_tse.txt
··· 6 6 using the SGDMA and MSGDMA soft DMA IP components. The driver uses the 7 7 platform bus to obtain component resources. The designs used to test this 8 8 driver were built for a Cyclone(R) V SOC FPGA board, a Cyclone(R) V FPGA board, 9 - and tested with ARM and NIOS processor hosts seperately. The anticipated use 9 + and tested with ARM and NIOS processor hosts separately. The anticipated use 10 10 cases are simple communications between an embedded system and an external peer 11 11 for status and simple configuration of the embedded system. 12 12 ··· 65 65 4.1) Transmit process 66 66 When the driver's transmit routine is called by the kernel, it sets up a 67 67 transmit descriptor by calling the underlying DMA transmit routine (SGDMA or 68 - MSGDMA), and initites a transmit operation. Once the transmit is complete, an 68 + MSGDMA), and initiates a transmit operation. Once the transmit is complete, an 69 69 interrupt is driven by the transmit DMA logic. The driver handles the transmit 70 70 completion in the context of the interrupt handling chain by recycling 71 71 resource required to send and track the requested transmit operation. 72 72 73 73 4.2) Receive process 74 74 The driver will post receive buffers to the receive DMA logic during driver 75 - intialization. Receive buffers may or may not be queued depending upon the 75 + initialization. Receive buffers may or may not be queued depending upon the 76 76 underlying DMA logic (MSGDMA is able queue receive buffers, SGDMA is not able 77 77 to queue receive buffers to the SGDMA receive logic). When a packet is 78 78 received, the DMA logic generates an interrupt. The driver handles a receive
+3 -3
Documentation/networking/ipvlan.txt
··· 8 8 This is conceptually very similar to the macvlan driver with one major 9 9 exception of using L3 for mux-ing /demux-ing among slaves. This property makes 10 10 the master device share the L2 with it's slave devices. I have developed this 11 - driver in conjuntion with network namespaces and not sure if there is use case 11 + driver in conjunction with network namespaces and not sure if there is use case 12 12 outside of it. 13 13 14 14 ··· 42 42 as well. 43 43 44 44 4.2 L3 mode: 45 - In this mode TX processing upto L3 happens on the stack instance attached 45 + In this mode TX processing up to L3 happens on the stack instance attached 46 46 to the slave device and packets are switched to the stack instance of the 47 47 master device for the L2 processing and routing from that instance will be 48 48 used before packets are queued on the outbound device. In this mode the slaves ··· 56 56 (a) The Linux host that is connected to the external switch / router has 57 57 policy configured that allows only one mac per port. 58 58 (b) No of virtual devices created on a master exceed the mac capacity and 59 - puts the NIC in promiscous mode and degraded performance is a concern. 59 + puts the NIC in promiscuous mode and degraded performance is a concern. 60 60 (c) If the slave device is to be put into the hostile / untrusted network 61 61 namespace where L2 on the slave could be changed / misused. 62 62
+3 -3
Documentation/networking/pktgen.txt
··· 67 67 * add_device DEVICE@NAME -- adds a single device 68 68 * rem_device_all -- remove all associated devices 69 69 70 - When adding a device to a thread, a corrosponding procfile is created 70 + When adding a device to a thread, a corresponding procfile is created 71 71 which is used for configuring this device. Thus, device names need to 72 72 be unique. 73 73 74 74 To support adding the same device to multiple threads, which is useful 75 - with multi queue NICs, a the device naming scheme is extended with "@": 75 + with multi queue NICs, the device naming scheme is extended with "@": 76 76 device@something 77 77 78 78 The part after "@" can be anything, but it is custom to use the thread ··· 221 221 222 222 A collection of tutorial scripts and helpers for pktgen is in the 223 223 samples/pktgen directory. The helper parameters.sh file support easy 224 - and consistant parameter parsing across the sample scripts. 224 + and consistent parameter parsing across the sample scripts. 225 225 226 226 Usage example and help: 227 227 ./pktgen_sample01_simple.sh -i eth4 -m 00:1B:21:3C:9D:F8 -d 192.168.8.2
+1 -1
Documentation/networking/vrf.txt
··· 41 41 the VRF device. Similarly on egress routing rules are used to send packets 42 42 to the VRF device driver before getting sent out the actual interface. This 43 43 allows tcpdump on a VRF device to capture all packets into and out of the 44 - VRF as a whole.[1] Similiarly, netfilter [2] and tc rules can be applied 44 + VRF as a whole.[1] Similarly, netfilter [2] and tc rules can be applied 45 45 using the VRF device to specify rules that apply to the VRF domain as a whole. 46 46 47 47 [1] Packets in the forwarded state do not flow through the device, so those
+3 -3
Documentation/networking/xfrm_sync.txt
··· 4 4 from Jamal <hadi@cyberus.ca>. 5 5 6 6 The end goal for syncing is to be able to insert attributes + generate 7 - events so that the an SA can be safely moved from one machine to another 7 + events so that the SA can be safely moved from one machine to another 8 8 for HA purposes. 9 9 The idea is to synchronize the SA so that the takeover machine can do 10 10 the processing of the SA as accurate as possible if it has access to it. ··· 13 13 These patches add ability to sync and have accurate lifetime byte (to 14 14 ensure proper decay of SAs) and replay counters to avoid replay attacks 15 15 with as minimal loss at failover time. 16 - This way a backup stays as closely uptodate as an active member. 16 + This way a backup stays as closely up-to-date as an active member. 17 17 18 18 Because the above items change for every packet the SA receives, 19 19 it is possible for a lot of the events to be generated. ··· 163 163 there is a period where the timer threshold expires with no packets 164 164 seen, then an odd behavior is seen as follows: 165 165 The first packet arrival after a timer expiry will trigger a timeout 166 - aevent; i.e we dont wait for a timeout period or a packet threshold 166 + event; i.e we don't wait for a timeout period or a packet threshold 167 167 to be reached. This is done for simplicity and efficiency reasons. 168 168 169 169 -JHS
+3 -2
MAINTAINERS
··· 4903 4903 F: include/net/gre.h 4904 4904 4905 4905 GRETH 10/100/1G Ethernet MAC device driver 4906 - M: Kristoffer Glembo <kristoffer@gaisler.com> 4906 + M: Andreas Larsson <andreas@gaisler.com> 4907 4907 L: netdev@vger.kernel.org 4908 4908 S: Maintained 4909 4909 F: drivers/net/ethernet/aeroflex/ ··· 10014 10014 10015 10015 SFC NETWORK DRIVER 10016 10016 M: Solarflare linux maintainers <linux-net-drivers@solarflare.com> 10017 - M: Shradha Shah <sshah@solarflare.com> 10017 + M: Edward Cree <ecree@solarflare.com> 10018 + M: Bert Kenward <bkenward@solarflare.com> 10018 10019 L: netdev@vger.kernel.org 10019 10020 S: Supported 10020 10021 F: drivers/net/ethernet/sfc/
+1 -1
drivers/net/dsa/mv88e6xxx.c
··· 2181 2181 struct net_device *bridge) 2182 2182 { 2183 2183 struct mv88e6xxx_priv_state *ps = ds_to_priv(ds); 2184 - int i, err; 2184 + int i, err = 0; 2185 2185 2186 2186 mutex_lock(&ps->smi_mutex); 2187 2187
+39 -14
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 581 581 struct page *page; 582 582 dma_addr_t mapping; 583 583 u16 sw_prod = rxr->rx_sw_agg_prod; 584 + unsigned int offset = 0; 584 585 585 - page = alloc_page(gfp); 586 - if (!page) 587 - return -ENOMEM; 586 + if (PAGE_SIZE > BNXT_RX_PAGE_SIZE) { 587 + page = rxr->rx_page; 588 + if (!page) { 589 + page = alloc_page(gfp); 590 + if (!page) 591 + return -ENOMEM; 592 + rxr->rx_page = page; 593 + rxr->rx_page_offset = 0; 594 + } 595 + offset = rxr->rx_page_offset; 596 + rxr->rx_page_offset += BNXT_RX_PAGE_SIZE; 597 + if (rxr->rx_page_offset == PAGE_SIZE) 598 + rxr->rx_page = NULL; 599 + else 600 + get_page(page); 601 + } else { 602 + page = alloc_page(gfp); 603 + if (!page) 604 + return -ENOMEM; 605 + } 588 606 589 - mapping = dma_map_page(&pdev->dev, page, 0, PAGE_SIZE, 607 + mapping = dma_map_page(&pdev->dev, page, offset, BNXT_RX_PAGE_SIZE, 590 608 PCI_DMA_FROMDEVICE); 591 609 if (dma_mapping_error(&pdev->dev, mapping)) { 592 610 __free_page(page); ··· 619 601 rxr->rx_sw_agg_prod = NEXT_RX_AGG(sw_prod); 620 602 621 603 rx_agg_buf->page = page; 604 + rx_agg_buf->offset = offset; 622 605 rx_agg_buf->mapping = mapping; 623 606 rxbd->rx_bd_haddr = cpu_to_le64(mapping); 624 607 rxbd->rx_bd_opaque = sw_prod; ··· 661 642 page = cons_rx_buf->page; 662 643 cons_rx_buf->page = NULL; 663 644 prod_rx_buf->page = page; 645 + prod_rx_buf->offset = cons_rx_buf->offset; 664 646 665 647 prod_rx_buf->mapping = cons_rx_buf->mapping; 666 648 ··· 729 709 RX_AGG_CMP_LEN) >> RX_AGG_CMP_LEN_SHIFT; 730 710 731 711 cons_rx_buf = &rxr->rx_agg_ring[cons]; 732 - skb_fill_page_desc(skb, i, cons_rx_buf->page, 0, frag_len); 712 + skb_fill_page_desc(skb, i, cons_rx_buf->page, 713 + cons_rx_buf->offset, frag_len); 733 714 __clear_bit(cons, rxr->rx_agg_bmap); 734 715 735 716 /* It is possible for bnxt_alloc_rx_page() to allocate ··· 761 740 return NULL; 762 741 } 763 742 764 - dma_unmap_page(&pdev->dev, mapping, PAGE_SIZE, 743 + dma_unmap_page(&pdev->dev, mapping, BNXT_RX_PAGE_SIZE, 765 744 
PCI_DMA_FROMDEVICE); 766 745 767 746 skb->data_len += frag_len; ··· 1605 1584 1606 1585 dma_unmap_page(&pdev->dev, 1607 1586 dma_unmap_addr(rx_agg_buf, mapping), 1608 - PAGE_SIZE, PCI_DMA_FROMDEVICE); 1587 + BNXT_RX_PAGE_SIZE, PCI_DMA_FROMDEVICE); 1609 1588 1610 1589 rx_agg_buf->page = NULL; 1611 1590 __clear_bit(j, rxr->rx_agg_bmap); 1612 1591 1613 1592 __free_page(page); 1593 + } 1594 + if (rxr->rx_page) { 1595 + __free_page(rxr->rx_page); 1596 + rxr->rx_page = NULL; 1614 1597 } 1615 1598 } 1616 1599 } ··· 1998 1973 if (!(bp->flags & BNXT_FLAG_AGG_RINGS)) 1999 1974 return 0; 2000 1975 2001 - type = ((u32)PAGE_SIZE << RX_BD_LEN_SHIFT) | 1976 + type = ((u32)BNXT_RX_PAGE_SIZE << RX_BD_LEN_SHIFT) | 2002 1977 RX_BD_TYPE_RX_AGG_BD | RX_BD_FLAGS_SOP; 2003 1978 2004 1979 bnxt_init_rxbd_pages(ring, type); ··· 2189 2164 bp->rx_agg_nr_pages = 0; 2190 2165 2191 2166 if (bp->flags & BNXT_FLAG_TPA) 2192 - agg_factor = 4; 2167 + agg_factor = min_t(u32, 4, 65536 / BNXT_RX_PAGE_SIZE); 2193 2168 2194 2169 bp->flags &= ~BNXT_FLAG_JUMBO; 2195 2170 if (rx_space > PAGE_SIZE) { ··· 3045 3020 /* Number of segs are log2 units, and first packet is not 3046 3021 * included as part of this units. 3047 3022 */ 3048 - if (mss <= PAGE_SIZE) { 3049 - n = PAGE_SIZE / mss; 3023 + if (mss <= BNXT_RX_PAGE_SIZE) { 3024 + n = BNXT_RX_PAGE_SIZE / mss; 3050 3025 nsegs = (MAX_SKB_FRAGS - 1) * n; 3051 3026 } else { 3052 - n = mss / PAGE_SIZE; 3053 - if (mss & (PAGE_SIZE - 1)) 3027 + n = mss / BNXT_RX_PAGE_SIZE; 3028 + if (mss & (BNXT_RX_PAGE_SIZE - 1)) 3054 3029 n++; 3055 3030 nsegs = (MAX_SKB_FRAGS - n) / n; 3056 3031 } ··· 4334 4309 if (bp->flags & BNXT_FLAG_MSIX_CAP) 4335 4310 rc = bnxt_setup_msix(bp); 4336 4311 4337 - if (!(bp->flags & BNXT_FLAG_USING_MSIX)) { 4312 + if (!(bp->flags & BNXT_FLAG_USING_MSIX) && BNXT_PF(bp)) { 4338 4313 /* fallback to INTA */ 4339 4314 rc = bnxt_setup_inta(bp); 4340 4315 }
+13
drivers/net/ethernet/broadcom/bnxt/bnxt.h
··· 407 407 408 408 #define BNXT_PAGE_SIZE (1 << BNXT_PAGE_SHIFT) 409 409 410 + /* The RXBD length is 16-bit so we can only support page sizes < 64K */ 411 + #if (PAGE_SHIFT > 15) 412 + #define BNXT_RX_PAGE_SHIFT 15 413 + #else 414 + #define BNXT_RX_PAGE_SHIFT PAGE_SHIFT 415 + #endif 416 + 417 + #define BNXT_RX_PAGE_SIZE (1 << BNXT_RX_PAGE_SHIFT) 418 + 410 419 #define BNXT_MIN_PKT_SIZE 45 411 420 412 421 #define BNXT_NUM_TESTS(bp) 0 ··· 515 506 516 507 struct bnxt_sw_rx_agg_bd { 517 508 struct page *page; 509 + unsigned int offset; 518 510 dma_addr_t mapping; 519 511 }; 520 512 ··· 595 585 596 586 unsigned long *rx_agg_bmap; 597 587 u16 rx_agg_bmap_size; 588 + 589 + struct page *rx_page; 590 + unsigned int rx_page_offset; 598 591 599 592 dma_addr_t rx_desc_mapping[MAX_RX_PAGES]; 600 593 dma_addr_t rx_agg_desc_mapping[MAX_RX_AGG_PAGES];
+2 -1
drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c
··· 576 576 unsigned int nq0 = adap2pinfo(adap, 0)->nqsets; 577 577 unsigned int nq1 = adap->port[1] ? adap2pinfo(adap, 1)->nqsets : 1; 578 578 u8 cpus[SGE_QSETS + 1]; 579 - u16 rspq_map[RSS_TABLE_SIZE]; 579 + u16 rspq_map[RSS_TABLE_SIZE + 1]; 580 580 581 581 for (i = 0; i < SGE_QSETS; ++i) 582 582 cpus[i] = i; ··· 586 586 rspq_map[i] = i % nq0; 587 587 rspq_map[i + RSS_TABLE_SIZE / 2] = (i % nq1) + nq0; 588 588 } 589 + rspq_map[RSS_TABLE_SIZE] = 0xffff; /* terminator */ 589 590 590 591 t3_config_rss(adap, F_RQFEEDBACKENABLE | F_TNLLKPEN | F_TNLMAPEN | 591 592 F_TNLPRTEN | F_TNL2TUPEN | F_TNL4TUPEN |
+2 -2
drivers/net/ethernet/myricom/myri10ge/myri10ge.c
··· 2668 2668 2669 2669 del_timer_sync(&mgp->watchdog_timer); 2670 2670 mgp->running = MYRI10GE_ETH_STOPPING; 2671 - local_bh_disable(); /* myri10ge_ss_lock_napi needs bh disabled */ 2672 2671 for (i = 0; i < mgp->num_slices; i++) { 2673 2672 napi_disable(&mgp->ss[i].napi); 2673 + local_bh_disable(); /* myri10ge_ss_lock_napi needs this */ 2674 2674 /* Lock the slice to prevent the busy_poll handler from 2675 2675 * accessing it. Later when we bring the NIC up, myri10ge_open 2676 2676 * resets the slice including this lock. ··· 2679 2679 pr_info("Slice %d locked\n", i); 2680 2680 mdelay(1); 2681 2681 } 2682 + local_bh_enable(); 2682 2683 } 2683 - local_bh_enable(); 2684 2684 netif_carrier_off(dev); 2685 2685 2686 2686 netif_tx_stop_all_queues(dev);
+13 -2
drivers/net/ethernet/sfc/ef10.c
··· 1920 1920 return 0; 1921 1921 } 1922 1922 1923 + if (nic_data->datapath_caps & 1924 + 1 << MC_CMD_GET_CAPABILITIES_OUT_RX_RSS_LIMITED_LBN) 1925 + return -EOPNOTSUPP; 1926 + 1923 1927 MCDI_SET_DWORD(inbuf, RSS_CONTEXT_ALLOC_IN_UPSTREAM_PORT_ID, 1924 1928 nic_data->vport_id); 1925 1929 MCDI_SET_DWORD(inbuf, RSS_CONTEXT_ALLOC_IN_TYPE, alloc_type); ··· 2927 2923 bool replacing) 2928 2924 { 2929 2925 struct efx_ef10_nic_data *nic_data = efx->nic_data; 2926 + u32 flags = spec->flags; 2930 2927 2931 2928 memset(inbuf, 0, MC_CMD_FILTER_OP_IN_LEN); 2929 + 2930 + /* Remove RSS flag if we don't have an RSS context. */ 2931 + if (flags & EFX_FILTER_FLAG_RX_RSS && 2932 + spec->rss_context == EFX_FILTER_RSS_CONTEXT_DEFAULT && 2933 + nic_data->rx_rss_context == EFX_EF10_RSS_CONTEXT_INVALID) 2934 + flags &= ~EFX_FILTER_FLAG_RX_RSS; 2932 2935 2933 2936 if (replacing) { 2934 2937 MCDI_SET_DWORD(inbuf, FILTER_OP_IN_OP, ··· 2996 2985 spec->dmaq_id == EFX_FILTER_RX_DMAQ_ID_DROP ? 2997 2986 0 : spec->dmaq_id); 2998 2987 MCDI_SET_DWORD(inbuf, FILTER_OP_IN_RX_MODE, 2999 - (spec->flags & EFX_FILTER_FLAG_RX_RSS) ? 2988 + (flags & EFX_FILTER_FLAG_RX_RSS) ? 3000 2989 MC_CMD_FILTER_OP_IN_RX_MODE_RSS : 3001 2990 MC_CMD_FILTER_OP_IN_RX_MODE_SIMPLE); 3002 - if (spec->flags & EFX_FILTER_FLAG_RX_RSS) 2991 + if (flags & EFX_FILTER_FLAG_RX_RSS) 3003 2992 MCDI_SET_DWORD(inbuf, FILTER_OP_IN_RX_CONTEXT, 3004 2993 spec->rss_context != 3005 2994 EFX_FILTER_RSS_CONTEXT_DEFAULT ?
+37 -32
drivers/net/ethernet/ti/cpsw.c
··· 367 367 spinlock_t lock; 368 368 struct platform_device *pdev; 369 369 struct net_device *ndev; 370 - struct device_node *phy_node; 371 370 struct napi_struct napi_rx; 372 371 struct napi_struct napi_tx; 373 372 struct device *dev; ··· 1147 1148 cpsw_ale_add_mcast(priv->ale, priv->ndev->broadcast, 1148 1149 1 << slave_port, 0, 0, ALE_MCAST_FWD_2); 1149 1150 1150 - if (priv->phy_node) 1151 - slave->phy = of_phy_connect(priv->ndev, priv->phy_node, 1151 + if (slave->data->phy_node) { 1152 + slave->phy = of_phy_connect(priv->ndev, slave->data->phy_node, 1152 1153 &cpsw_adjust_link, 0, slave->data->phy_if); 1153 - else 1154 + if (!slave->phy) { 1155 + dev_err(priv->dev, "phy \"%s\" not found on slave %d\n", 1156 + slave->data->phy_node->full_name, 1157 + slave->slave_num); 1158 + return; 1159 + } 1160 + } else { 1154 1161 slave->phy = phy_connect(priv->ndev, slave->data->phy_id, 1155 1162 &cpsw_adjust_link, slave->data->phy_if); 1156 - if (IS_ERR(slave->phy)) { 1157 - dev_err(priv->dev, "phy %s not found on slave %d\n", 1158 - slave->data->phy_id, slave->slave_num); 1159 - slave->phy = NULL; 1160 - } else { 1161 - phy_attached_info(slave->phy); 1162 - 1163 - phy_start(slave->phy); 1164 - 1165 - /* Configure GMII_SEL register */ 1166 - cpsw_phy_sel(&priv->pdev->dev, slave->phy->interface, 1167 - slave->slave_num); 1163 + if (IS_ERR(slave->phy)) { 1164 + dev_err(priv->dev, 1165 + "phy \"%s\" not found on slave %d, err %ld\n", 1166 + slave->data->phy_id, slave->slave_num, 1167 + PTR_ERR(slave->phy)); 1168 + slave->phy = NULL; 1169 + return; 1170 + } 1168 1171 } 1172 + 1173 + phy_attached_info(slave->phy); 1174 + 1175 + phy_start(slave->phy); 1176 + 1177 + /* Configure GMII_SEL register */ 1178 + cpsw_phy_sel(&priv->pdev->dev, slave->phy->interface, slave->slave_num); 1169 1179 } 1170 1180 1171 1181 static inline void cpsw_add_default_vlan(struct cpsw_priv *priv) ··· 1948 1940 slave->port_vlan = data->dual_emac_res_vlan; 1949 1941 } 1950 1942 1951 - static int 
cpsw_probe_dt(struct cpsw_priv *priv, 1943 + static int cpsw_probe_dt(struct cpsw_platform_data *data, 1952 1944 struct platform_device *pdev) 1953 1945 { 1954 1946 struct device_node *node = pdev->dev.of_node; 1955 1947 struct device_node *slave_node; 1956 - struct cpsw_platform_data *data = &priv->data; 1957 1948 int i = 0, ret; 1958 1949 u32 prop; 1959 1950 ··· 2040 2033 if (strcmp(slave_node->name, "slave")) 2041 2034 continue; 2042 2035 2043 - priv->phy_node = of_parse_phandle(slave_node, "phy-handle", 0); 2036 + slave_data->phy_node = of_parse_phandle(slave_node, 2037 + "phy-handle", 0); 2044 2038 parp = of_get_property(slave_node, "phy_id", &lenp); 2045 - if (of_phy_is_fixed_link(slave_node)) { 2046 - struct device_node *phy_node; 2047 - struct phy_device *phy_dev; 2048 - 2039 + if (slave_data->phy_node) { 2040 + dev_dbg(&pdev->dev, 2041 + "slave[%d] using phy-handle=\"%s\"\n", 2042 + i, slave_data->phy_node->full_name); 2043 + } else if (of_phy_is_fixed_link(slave_node)) { 2049 2044 /* In the case of a fixed PHY, the DT node associated 2050 2045 * to the PHY is the Ethernet MAC DT node. 
2051 2046 */ 2052 2047 ret = of_phy_register_fixed_link(slave_node); 2053 2048 if (ret) 2054 2049 return ret; 2055 - phy_node = of_node_get(slave_node); 2056 - phy_dev = of_phy_find_device(phy_node); 2057 - if (!phy_dev) 2058 - return -ENODEV; 2059 - snprintf(slave_data->phy_id, sizeof(slave_data->phy_id), 2060 - PHY_ID_FMT, phy_dev->mdio.bus->id, 2061 - phy_dev->mdio.addr); 2050 + slave_data->phy_node = of_node_get(slave_node); 2062 2051 } else if (parp) { 2063 2052 u32 phyid; 2064 2053 struct device_node *mdio_node; ··· 2075 2072 snprintf(slave_data->phy_id, sizeof(slave_data->phy_id), 2076 2073 PHY_ID_FMT, mdio->name, phyid); 2077 2074 } else { 2078 - dev_err(&pdev->dev, "No slave[%d] phy_id or fixed-link property\n", i); 2075 + dev_err(&pdev->dev, 2076 + "No slave[%d] phy_id, phy-handle, or fixed-link property\n", 2077 + i); 2079 2078 goto no_phy_slave; 2080 2079 } 2081 2080 slave_data->phy_if = of_get_phy_mode(slave_node); ··· 2280 2275 /* Select default pin state */ 2281 2276 pinctrl_pm_select_default_state(&pdev->dev); 2282 2277 2283 - if (cpsw_probe_dt(priv, pdev)) { 2278 + if (cpsw_probe_dt(&priv->data, pdev)) { 2284 2279 dev_err(&pdev->dev, "cpsw: platform data missing\n"); 2285 2280 ret = -ENODEV; 2286 2281 goto clean_runtime_disable_ret;
+1
drivers/net/ethernet/ti/cpsw.h
··· 18 18 #include <linux/phy.h> 19 19 20 20 struct cpsw_slave_data { 21 + struct device_node *phy_node; 21 22 char phy_id[MII_BUS_ID_SIZE]; 22 23 int phy_if; 23 24 u8 mac_addr[ETH_ALEN];
+4 -1
drivers/net/ethernet/ti/davinci_emac.c
··· 1512 1512 1513 1513 /* TODO: Add phy read and write and private statistics get feature */ 1514 1514 1515 - return phy_mii_ioctl(priv->phydev, ifrq, cmd); 1515 + if (priv->phydev) 1516 + return phy_mii_ioctl(priv->phydev, ifrq, cmd); 1517 + else 1518 + return -EOPNOTSUPP; 1516 1519 } 1517 1520 1518 1521 static int match_first_device(struct device *dev, void *data)
+1 -1
drivers/net/ethernet/toshiba/ps3_gelic_wireless.c
··· 1622 1622 continue; 1623 1623 1624 1624 /* copy hw scan info */ 1625 - memcpy(target->hwinfo, scan_info, scan_info->size); 1625 + memcpy(target->hwinfo, scan_info, be16_to_cpu(scan_info->size)); 1626 1626 target->essid_len = strnlen(scan_info->essid, 1627 1627 sizeof(scan_info->essid)); 1628 1628 target->rate_len = 0;
+14 -18
drivers/net/phy/at803x.c
··· 359 359 * in the FIFO. In such cases, the FIFO enters an error mode it 360 360 * cannot recover from by software. 361 361 */ 362 - if (phydev->drv->phy_id == ATH8030_PHY_ID) { 363 - if (phydev->state == PHY_NOLINK) { 364 - if (priv->gpiod_reset && !priv->phy_reset) { 365 - struct at803x_context context; 362 + if (phydev->state == PHY_NOLINK) { 363 + if (priv->gpiod_reset && !priv->phy_reset) { 364 + struct at803x_context context; 366 365 367 - at803x_context_save(phydev, &context); 366 + at803x_context_save(phydev, &context); 368 367 369 - gpiod_set_value(priv->gpiod_reset, 1); 370 - msleep(1); 371 - gpiod_set_value(priv->gpiod_reset, 0); 372 - msleep(1); 368 + gpiod_set_value(priv->gpiod_reset, 1); 369 + msleep(1); 370 + gpiod_set_value(priv->gpiod_reset, 0); 371 + msleep(1); 373 372 374 - at803x_context_restore(phydev, &context); 373 + at803x_context_restore(phydev, &context); 375 374 376 - phydev_dbg(phydev, "%s(): phy was reset\n", 377 - __func__); 378 - priv->phy_reset = true; 379 - } 380 - } else { 381 - priv->phy_reset = false; 375 + phydev_dbg(phydev, "%s(): phy was reset\n", 376 + __func__); 377 + priv->phy_reset = true; 382 378 } 379 + } else { 380 + priv->phy_reset = false; 383 381 } 384 382 } 385 383 ··· 389 391 .phy_id_mask = 0xffffffef, 390 392 .probe = at803x_probe, 391 393 .config_init = at803x_config_init, 392 - .link_change_notify = at803x_link_change_notify, 393 394 .set_wol = at803x_set_wol, 394 395 .get_wol = at803x_get_wol, 395 396 .suspend = at803x_suspend, ··· 424 427 .phy_id_mask = 0xffffffef, 425 428 .probe = at803x_probe, 426 429 .config_init = at803x_config_init, 427 - .link_change_notify = at803x_link_change_notify, 428 430 .set_wol = at803x_set_wol, 429 431 .get_wol = at803x_get_wol, 430 432 .suspend = at803x_suspend,
+38 -6
drivers/net/usb/lan78xx.c
··· 269 269 struct lan78xx_net *dev; 270 270 enum skb_state state; 271 271 size_t length; 272 + int num_of_packet; 272 273 }; 273 274 274 275 struct usb_context { ··· 1804 1803 1805 1804 static void lan78xx_link_status_change(struct net_device *net) 1806 1805 { 1807 - /* nothing to do */ 1806 + struct phy_device *phydev = net->phydev; 1807 + int ret, temp; 1808 + 1809 + /* At forced 100 F/H mode, chip may fail to set mode correctly 1810 + * when cable is switched between long(~50+m) and short one. 1811 + * As workaround, set to 10 before setting to 100 1812 + * at forced 100 F/H mode. 1813 + */ 1814 + if (!phydev->autoneg && (phydev->speed == 100)) { 1815 + /* disable phy interrupt */ 1816 + temp = phy_read(phydev, LAN88XX_INT_MASK); 1817 + temp &= ~LAN88XX_INT_MASK_MDINTPIN_EN_; 1818 + ret = phy_write(phydev, LAN88XX_INT_MASK, temp); 1819 + 1820 + temp = phy_read(phydev, MII_BMCR); 1821 + temp &= ~(BMCR_SPEED100 | BMCR_SPEED1000); 1822 + phy_write(phydev, MII_BMCR, temp); /* set to 10 first */ 1823 + temp |= BMCR_SPEED100; 1824 + phy_write(phydev, MII_BMCR, temp); /* set to 100 later */ 1825 + 1826 + /* clear pending interrupt generated while workaround */ 1827 + temp = phy_read(phydev, LAN88XX_INT_STS); 1828 + 1829 + /* enable phy interrupt back */ 1830 + temp = phy_read(phydev, LAN88XX_INT_MASK); 1831 + temp |= LAN88XX_INT_MASK_MDINTPIN_EN_; 1832 + ret = phy_write(phydev, LAN88XX_INT_MASK, temp); 1833 + } 1808 1834 } 1809 1835 1810 1836 static int lan78xx_phy_init(struct lan78xx_net *dev) ··· 2492 2464 struct lan78xx_net *dev = entry->dev; 2493 2465 2494 2466 if (urb->status == 0) { 2495 - dev->net->stats.tx_packets++; 2467 + dev->net->stats.tx_packets += entry->num_of_packet; 2496 2468 dev->net->stats.tx_bytes += entry->length; 2497 2469 } else { 2498 2470 dev->net->stats.tx_errors++; ··· 2709 2681 return; 2710 2682 } 2711 2683 2712 - skb->protocol = eth_type_trans(skb, dev->net); 2713 2684 dev->net->stats.rx_packets++; 2714 2685 dev->net->stats.rx_bytes += 
skb->len; 2686 + 2687 + skb->protocol = eth_type_trans(skb, dev->net); 2715 2688 2716 2689 netif_dbg(dev, rx_status, dev->net, "< rx, len %zu, type 0x%x\n", 2717 2690 skb->len + sizeof(struct ethhdr), skb->protocol); ··· 2963 2934 2964 2935 skb_totallen = 0; 2965 2936 pkt_cnt = 0; 2937 + count = 0; 2938 + length = 0; 2966 2939 for (skb = tqp->next; pkt_cnt < tqp->qlen; skb = skb->next) { 2967 2940 if (skb_is_gso(skb)) { 2968 2941 if (pkt_cnt) { 2969 2942 /* handle previous packets first */ 2970 2943 break; 2971 2944 } 2972 - length = skb->len; 2945 + count = 1; 2946 + length = skb->len - TX_OVERHEAD; 2973 2947 skb2 = skb_dequeue(tqp); 2974 2948 goto gso_skb; 2975 2949 } ··· 2993 2961 for (count = pos = 0; count < pkt_cnt; count++) { 2994 2962 skb2 = skb_dequeue(tqp); 2995 2963 if (skb2) { 2964 + length += (skb2->len - TX_OVERHEAD); 2996 2965 memcpy(skb->data + pos, skb2->data, skb2->len); 2997 2966 pos += roundup(skb2->len, sizeof(u32)); 2998 2967 dev_kfree_skb(skb2); 2999 2968 } 3000 2969 } 3001 - 3002 - length = skb_totallen; 3003 2970 3004 2971 gso_skb: 3005 2972 urb = usb_alloc_urb(0, GFP_ATOMIC); ··· 3011 2980 entry->urb = urb; 3012 2981 entry->dev = dev; 3013 2982 entry->length = length; 2983 + entry->num_of_packet = count; 3014 2984 3015 2985 spin_lock_irqsave(&dev->txq.lock, flags); 3016 2986 ret = usb_autopm_get_interface_async(dev->intf);
+5 -5
drivers/net/usb/pegasus.c
··· 411 411 int ret; 412 412 413 413 read_mii_word(pegasus, pegasus->phy, MII_LPA, &linkpart); 414 - data[0] = 0xc9; 414 + data[0] = 0xc8; /* TX & RX enable, append status, no CRC */ 415 415 data[1] = 0; 416 416 if (linkpart & (ADVERTISE_100FULL | ADVERTISE_10FULL)) 417 417 data[1] |= 0x20; /* set full duplex */ ··· 497 497 pkt_len = buf[count - 3] << 8; 498 498 pkt_len += buf[count - 4]; 499 499 pkt_len &= 0xfff; 500 - pkt_len -= 8; 500 + pkt_len -= 4; 501 501 } 502 502 503 503 /* ··· 528 528 goon: 529 529 usb_fill_bulk_urb(pegasus->rx_urb, pegasus->usb, 530 530 usb_rcvbulkpipe(pegasus->usb, 1), 531 - pegasus->rx_skb->data, PEGASUS_MTU + 8, 531 + pegasus->rx_skb->data, PEGASUS_MTU, 532 532 read_bulk_callback, pegasus); 533 533 rx_status = usb_submit_urb(pegasus->rx_urb, GFP_ATOMIC); 534 534 if (rx_status == -ENODEV) ··· 569 569 } 570 570 usb_fill_bulk_urb(pegasus->rx_urb, pegasus->usb, 571 571 usb_rcvbulkpipe(pegasus->usb, 1), 572 - pegasus->rx_skb->data, PEGASUS_MTU + 8, 572 + pegasus->rx_skb->data, PEGASUS_MTU, 573 573 read_bulk_callback, pegasus); 574 574 try_again: 575 575 status = usb_submit_urb(pegasus->rx_urb, GFP_ATOMIC); ··· 823 823 824 824 usb_fill_bulk_urb(pegasus->rx_urb, pegasus->usb, 825 825 usb_rcvbulkpipe(pegasus->usb, 1), 826 - pegasus->rx_skb->data, PEGASUS_MTU + 8, 826 + pegasus->rx_skb->data, PEGASUS_MTU, 827 827 read_bulk_callback, pegasus); 828 828 if ((res = usb_submit_urb(pegasus->rx_urb, GFP_KERNEL))) { 829 829 if (res == -ENODEV)
+11 -1
drivers/net/usb/smsc75xx.c
··· 29 29 #include <linux/crc32.h> 30 30 #include <linux/usb/usbnet.h> 31 31 #include <linux/slab.h> 32 + #include <linux/of_net.h> 32 33 #include "smsc75xx.h" 33 34 34 35 #define SMSC_CHIPNAME "smsc75xx" ··· 762 761 763 762 static void smsc75xx_init_mac_address(struct usbnet *dev) 764 763 { 764 + const u8 *mac_addr; 765 + 766 + /* maybe the boot loader passed the MAC address in devicetree */ 767 + mac_addr = of_get_mac_address(dev->udev->dev.of_node); 768 + if (mac_addr) { 769 + memcpy(dev->net->dev_addr, mac_addr, ETH_ALEN); 770 + return; 771 + } 772 + 765 773 /* try reading mac address from EEPROM */ 766 774 if (smsc75xx_read_eeprom(dev, EEPROM_MAC_OFFSET, ETH_ALEN, 767 775 dev->net->dev_addr) == 0) { ··· 782 772 } 783 773 } 784 774 785 - /* no eeprom, or eeprom values are invalid. generate random MAC */ 775 + /* no useful static MAC address found. generate a random one */ 786 776 eth_hw_addr_random(dev->net); 787 777 netif_dbg(dev, ifup, dev->net, "MAC address set to eth_random_addr\n"); 788 778 }
+11 -1
drivers/net/usb/smsc95xx.c
··· 29 29 #include <linux/crc32.h> 30 30 #include <linux/usb/usbnet.h> 31 31 #include <linux/slab.h> 32 + #include <linux/of_net.h> 32 33 #include "smsc95xx.h" 33 34 34 35 #define SMSC_CHIPNAME "smsc95xx" ··· 766 765 767 766 static void smsc95xx_init_mac_address(struct usbnet *dev) 768 767 { 768 + const u8 *mac_addr; 769 + 770 + /* maybe the boot loader passed the MAC address in devicetree */ 771 + mac_addr = of_get_mac_address(dev->udev->dev.of_node); 772 + if (mac_addr) { 773 + memcpy(dev->net->dev_addr, mac_addr, ETH_ALEN); 774 + return; 775 + } 776 + 769 777 /* try reading mac address from EEPROM */ 770 778 if (smsc95xx_read_eeprom(dev, EEPROM_MAC_OFFSET, ETH_ALEN, 771 779 dev->net->dev_addr) == 0) { ··· 785 775 } 786 776 } 787 777 788 - /* no eeprom, or eeprom values are invalid. generate random MAC */ 778 + /* no useful static MAC address found. generate a random one */ 789 779 eth_hw_addr_random(dev->net); 790 780 netif_dbg(dev, ifup, dev->net, "MAC address set to eth_random_addr\n"); 791 781 }
+3 -5
drivers/net/wireless/ath/ath9k/ar5008_phy.c
···
274 274 };
275 275 static const int inc[4] = { 0, 100, 0, 0 };
276 276 
277 + memset(&mask_m, 0, sizeof(int8_t) * 123);
278 + memset(&mask_p, 0, sizeof(int8_t) * 123);
279 + 
277 280 cur_bin = -6000;
278 281 upper = bin + 100;
279 282 lower = bin - 100;
···
427 424 int tmp, new;
428 425 int i;
429 426 
430 - int8_t mask_m[123];
431 - int8_t mask_p[123];
432 427 int cur_bb_spur;
433 428 bool is2GHz = IS_CHAN_2GHZ(chan);
434 - 
435 - memset(&mask_m, 0, sizeof(int8_t) * 123);
436 - memset(&mask_p, 0, sizeof(int8_t) * 123);
437 429 
438 430 for (i = 0; i < AR_EEPROM_MODAL_SPURS; i++) {
439 431 cur_bb_spur = ah->eep_ops->get_spur_channel(ah, i, is2GHz);
-5
drivers/net/wireless/ath/ath9k/ar9002_phy.c
···
178 178 int i;
179 179 struct chan_centers centers;
180 180 
181 - int8_t mask_m[123];
182 - int8_t mask_p[123];
183 181 int cur_bb_spur;
184 182 bool is2GHz = IS_CHAN_2GHZ(chan);
185 - 
186 - memset(&mask_m, 0, sizeof(int8_t) * 123);
187 - memset(&mask_p, 0, sizeof(int8_t) * 123);
188 183 
189 184 ath9k_hw_get_channel_centers(ah, chan, &centers);
190 185 freq = centers.synth_center;
+1 -1
drivers/net/wireless/intel/iwlwifi/iwl-8000.c
···
93 93 #define IWL8260_SMEM_OFFSET 0x400000
94 94 #define IWL8260_SMEM_LEN 0x68000
95 95 
96 - #define IWL8000_FW_PRE "iwlwifi-8000"
96 + #define IWL8000_FW_PRE "iwlwifi-8000C-"
97 97 #define IWL8000_MODULE_FIRMWARE(api) \
98 98 IWL8000_FW_PRE "-" __stringify(api) ".ucode"
99 99 
+10 -16
drivers/net/wireless/intel/iwlwifi/iwl-drv.c
···
238 238 snprintf(drv->firmware_name, sizeof(drv->firmware_name), "%s%s.ucode",
239 239 name_pre, tag);
240 240 
241 - /*
242 - * Starting 8000B - FW name format has changed. This overwrites the
243 - * previous name and uses the new format.
244 - */
245 - if (drv->trans->cfg->device_family == IWL_DEVICE_FAMILY_8000) {
246 - char rev_step = 'A' + CSR_HW_REV_STEP(drv->trans->hw_rev);
247 - 
248 - if (rev_step != 'A')
249 - snprintf(drv->firmware_name,
250 - sizeof(drv->firmware_name), "%s%c-%s.ucode",
251 - name_pre, rev_step, tag);
252 - }
253 - 
254 241 IWL_DEBUG_INFO(drv, "attempting to load firmware %s'%s'\n",
255 242 (drv->fw_index == UCODE_EXPERIMENTAL_INDEX)
256 243 ? "EXPERIMENTAL " : "",
···
1047 1060 return -EINVAL;
1048 1061 }
1049 1062 
1050 - if (WARN(fw_has_capa(capa, IWL_UCODE_TLV_CAPA_GSCAN_SUPPORT) &&
1051 - !gscan_capa,
1052 - "GSCAN is supported but capabilities TLV is unavailable\n"))
1063 + /*
1064 + * If ucode advertises that it supports GSCAN but GSCAN
1065 + * capabilities TLV is not present, or if it has an old format,
1066 + * warn and continue without GSCAN.
1067 + */
1068 + if (fw_has_capa(capa, IWL_UCODE_TLV_CAPA_GSCAN_SUPPORT) &&
1069 + !gscan_capa) {
1070 + IWL_DEBUG_INFO(drv,
1071 + "GSCAN is supported but capabilities TLV is unavailable\n");
1053 1072 __clear_bit((__force long)IWL_UCODE_TLV_CAPA_GSCAN_SUPPORT,
1054 1073 capa->_capa);
1074 + }
1055 1075 
1056 1076 return 0;
1057 1077 
+4 -2
drivers/net/wireless/intel/iwlwifi/mvm/fw-dbg.c
···
526 526 file_len += sizeof(*dump_data) + sizeof(*dump_mem) + sram2_len;
527 527 
528 528 /* Make room for fw's virtual image pages, if it exists */
529 - if (mvm->fw->img[mvm->cur_ucode].paging_mem_size)
529 + if (mvm->fw->img[mvm->cur_ucode].paging_mem_size &&
530 + mvm->fw_paging_db[0].fw_paging_block)
530 531 file_len += mvm->num_of_paging_blk *
531 532 (sizeof(*dump_data) +
532 533 sizeof(struct iwl_fw_error_dump_paging) +
···
644 643 }
645 644 
646 645 /* Dump fw's virtual image */
647 - if (mvm->fw->img[mvm->cur_ucode].paging_mem_size) {
646 + if (mvm->fw->img[mvm->cur_ucode].paging_mem_size &&
647 + mvm->fw_paging_db[0].fw_paging_block) {
648 648 for (i = 1; i < mvm->num_of_paging_blk + 1; i++) {
649 649 struct iwl_fw_error_dump_paging *paging;
650 650 struct page *pages =
+2
drivers/net/wireless/intel/iwlwifi/mvm/fw.c
···
144 144 
145 145 __free_pages(mvm->fw_paging_db[i].fw_paging_block,
146 146 get_order(mvm->fw_paging_db[i].fw_paging_size));
147 + mvm->fw_paging_db[i].fw_paging_block = NULL;
147 148 }
148 149 kfree(mvm->trans->paging_download_buf);
149 150 mvm->trans->paging_download_buf = NULL;
151 + mvm->trans->paging_db = NULL;
150 152 
151 153 memset(mvm->fw_paging_db, 0, sizeof(mvm->fw_paging_db));
152 154 }
+10
drivers/net/wireless/intel/iwlwifi/pcie/drv.c
···
479 479 {IWL_PCI_DEVICE(0x24F3, 0x0930, iwl8260_2ac_cfg)},
480 480 {IWL_PCI_DEVICE(0x24F3, 0x0000, iwl8265_2ac_cfg)},
481 481 {IWL_PCI_DEVICE(0x24FD, 0x0010, iwl8265_2ac_cfg)},
482 + {IWL_PCI_DEVICE(0x24FD, 0x0110, iwl8265_2ac_cfg)},
483 + {IWL_PCI_DEVICE(0x24FD, 0x1110, iwl8265_2ac_cfg)},
484 + {IWL_PCI_DEVICE(0x24FD, 0x1010, iwl8265_2ac_cfg)},
485 + {IWL_PCI_DEVICE(0x24FD, 0x0050, iwl8265_2ac_cfg)},
486 + {IWL_PCI_DEVICE(0x24FD, 0x0150, iwl8265_2ac_cfg)},
487 + {IWL_PCI_DEVICE(0x24FD, 0x9010, iwl8265_2ac_cfg)},
488 + {IWL_PCI_DEVICE(0x24FD, 0x8110, iwl8265_2ac_cfg)},
489 + {IWL_PCI_DEVICE(0x24FD, 0x8050, iwl8265_2ac_cfg)},
482 490 {IWL_PCI_DEVICE(0x24FD, 0x8010, iwl8265_2ac_cfg)},
483 491 {IWL_PCI_DEVICE(0x24FD, 0x0810, iwl8265_2ac_cfg)},
492 + {IWL_PCI_DEVICE(0x24FD, 0x9110, iwl8265_2ac_cfg)},
493 + {IWL_PCI_DEVICE(0x24FD, 0x8130, iwl8265_2ac_cfg)},
484 494 
485 495 /* 9000 Series */
486 496 {IWL_PCI_DEVICE(0x9DF0, 0x2A10, iwl5165_2ac_cfg)},
+2 -1
include/linux/bpf.h
···
171 171 void bpf_register_map_type(struct bpf_map_type_list *tl);
172 172 
173 173 struct bpf_prog *bpf_prog_get(u32 ufd);
174 + struct bpf_prog *bpf_prog_inc(struct bpf_prog *prog);
174 175 void bpf_prog_put(struct bpf_prog *prog);
175 176 void bpf_prog_put_rcu(struct bpf_prog *prog);
176 177 
177 178 struct bpf_map *bpf_map_get_with_uref(u32 ufd);
178 179 struct bpf_map *__bpf_map_get(struct fd f);
179 - void bpf_map_inc(struct bpf_map *map, bool uref);
180 + struct bpf_map *bpf_map_inc(struct bpf_map *map, bool uref);
180 181 void bpf_map_put_with_uref(struct bpf_map *map);
181 182 void bpf_map_put(struct bpf_map *map);
182 183 int bpf_map_precharge_memlock(u32 pages);
+9 -1
include/linux/net.h
···
246 246 net_ratelimited_function(pr_warn, fmt, ##__VA_ARGS__)
247 247 #define net_info_ratelimited(fmt, ...) \
248 248 net_ratelimited_function(pr_info, fmt, ##__VA_ARGS__)
249 - #if defined(DEBUG)
249 + #if defined(CONFIG_DYNAMIC_DEBUG)
250 + #define net_dbg_ratelimited(fmt, ...) \
251 + do { \
252 + DEFINE_DYNAMIC_DEBUG_METADATA(descriptor, fmt); \
253 + if (unlikely(descriptor.flags & _DPRINTK_FLAGS_PRINT) && \
254 + net_ratelimit()) \
255 + __dynamic_pr_debug(&descriptor, fmt, ##__VA_ARGS__); \
256 + } while (0)
257 + #elif defined(DEBUG)
250 258 #define net_dbg_ratelimited(fmt, ...) \
251 259 net_ratelimited_function(pr_debug, fmt, ##__VA_ARGS__)
252 260 #else
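Note the ordering in the new CONFIG_DYNAMIC_DEBUG branch: the dynamic-debug "is this callsite enabled?" flag is tested before net_ratelimit(), so callsites that are switched off do not consume ratelimit budget. As a rough illustration of the budget half of that pair, here is a toy window-based rate limiter; the struct and field names are invented for this sketch, not the kernel's:

```c
#include <assert.h>

/* Toy stand-in for net_ratelimit(): allow at most `burst` messages per
 * `interval` ticks, suppress the rest until a new window starts. */
struct ratelimit {
	int interval; /* window length in ticks */
	int burst;    /* messages allowed per window */
	int begin;    /* start tick of the current window */
	int printed;  /* messages emitted in the current window */
};

static int ratelimit_ok(struct ratelimit *rs, int now)
{
	if (now - rs->begin >= rs->interval) {
		rs->begin = now;   /* open a new window */
		rs->printed = 0;
	}
	if (rs->printed >= rs->burst)
		return 0;          /* suppressed */
	rs->printed++;
	return 1;
}
```

Because the macro short-circuits on the dynamic-debug flag, `printed` above would only ever advance for callsites an admin has actually enabled.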
+1 -1
include/linux/netdevice.h
···
4004 4004 
4005 4005 static inline bool net_gso_ok(netdev_features_t features, int gso_type)
4006 4006 {
4007 - netdev_features_t feature = gso_type << NETIF_F_GSO_SHIFT;
4007 + netdev_features_t feature = (netdev_features_t)gso_type << NETIF_F_GSO_SHIFT;
4008 4008 
4009 4009 /* check flags correspondence */
4010 4010 BUILD_BUG_ON(SKB_GSO_TCPV4 != (NETIF_F_TSO >> NETIF_F_GSO_SHIFT));
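The one-line netdevice.h fix casts `gso_type` to the 64-bit `netdev_features_t` before shifting. Without the cast the shift is performed in 32-bit `int` arithmetic, and if the result lands in bit 31 it is sign-extended on widening, silently setting every upper feature bit. A standalone sketch of the effect (`GSO_SHIFT` and the helper names are stand-ins, not the kernel constants):

```c
#include <assert.h>
#include <stdint.h>

#define GSO_SHIFT 16 /* illustrative shift into the feature word */

/* Buggy variant: models the old code, where the shift result is still
 * a 32-bit signed value before being widened to the 64-bit mask. */
static uint64_t feature_mask_buggy(int gso_type)
{
	int32_t narrow = (int32_t)((uint32_t)gso_type << GSO_SHIFT);
	return (uint64_t)narrow; /* sign-extends if bit 31 is set */
}

/* Fixed variant, mirroring the patch: widen first, then shift. */
static uint64_t feature_mask_fixed(int gso_type)
{
	return (uint64_t)gso_type << GSO_SHIFT;
}
```

With `gso_type = 0x8000` the buggy mask comes back as `0xFFFFFFFF80000000` instead of `0x80000000`, which is exactly the corruption `net_gso_ok()` was seeing.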
+4 -3
kernel/bpf/inode.c
···
31 31 {
32 32 switch (type) {
33 33 case BPF_TYPE_PROG:
34 - atomic_inc(&((struct bpf_prog *)raw)->aux->refcnt);
34 + raw = bpf_prog_inc(raw);
35 35 break;
36 36 case BPF_TYPE_MAP:
37 - bpf_map_inc(raw, true);
37 + raw = bpf_map_inc(raw, true);
38 38 break;
39 39 default:
40 40 WARN_ON_ONCE(1);
···
297 297 goto out;
298 298 
299 299 raw = bpf_any_get(inode->i_private, *type);
300 - touch_atime(&path);
300 + if (!IS_ERR(raw))
301 + touch_atime(&path);
301 302 
302 303 path_put(&path);
303 304 return raw;
+20 -4
kernel/bpf/syscall.c
···
218 218 return f.file->private_data;
219 219 }
220 220 
221 - void bpf_map_inc(struct bpf_map *map, bool uref)
221 + /* prog's and map's refcnt limit */
222 + #define BPF_MAX_REFCNT 32768
223 + 
224 + struct bpf_map *bpf_map_inc(struct bpf_map *map, bool uref)
222 225 {
223 - atomic_inc(&map->refcnt);
226 + if (atomic_inc_return(&map->refcnt) > BPF_MAX_REFCNT) {
227 + atomic_dec(&map->refcnt);
228 + return ERR_PTR(-EBUSY);
229 + }
224 230 if (uref)
225 231 atomic_inc(&map->usercnt);
232 + return map;
226 233 }
227 234 
228 235 struct bpf_map *bpf_map_get_with_uref(u32 ufd)
···
241 234 if (IS_ERR(map))
242 235 return map;
243 236 
244 - bpf_map_inc(map, true);
237 + map = bpf_map_inc(map, true);
245 238 fdput(f);
246 239 
247 240 return map;
···
665 658 return f.file->private_data;
666 659 }
667 660 
661 + struct bpf_prog *bpf_prog_inc(struct bpf_prog *prog)
662 + {
663 + if (atomic_inc_return(&prog->aux->refcnt) > BPF_MAX_REFCNT) {
664 + atomic_dec(&prog->aux->refcnt);
665 + return ERR_PTR(-EBUSY);
666 + }
667 + return prog;
668 + }
669 + 
668 670 /* called by sockets/tracing/seccomp before attaching program to an event
669 671 * pairs with bpf_prog_put()
670 672 */
···
686 670 if (IS_ERR(prog))
687 671 return prog;
688 672 
689 - atomic_inc(&prog->aux->refcnt);
673 + prog = bpf_prog_inc(prog);
690 674 fdput(f);
691 675 
692 676 return prog;
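The refcnt-overflow fix above replaces unconditional `atomic_inc()` with a capped increment: once the count reaches BPF_MAX_REFCNT the new reference is refused with -EBUSY, so a user repeatedly pinning the same map or program can no longer wrap a 32-bit refcount around to zero and trigger a use-after-free. A minimal sketch of the guard, with a plain `int` standing in for the kernel's `atomic_t`:

```c
#include <assert.h>

#define MAX_REFCNT 32768 /* same cap as the patch's BPF_MAX_REFCNT */

/* Take a reference, but refuse once the cap is reached. Returns 0 on
 * success, -1 where the kernel code would return ERR_PTR(-EBUSY). */
static int refcnt_inc(int *refcnt)
{
	if (++*refcnt > MAX_REFCNT) {
		--*refcnt; /* undo the increment, count stays at the cap */
		return -1;
	}
	return 0;
}
```

Note that every caller now has to check the result, which is why the patch also threads error handling through inode.c and verifier.c above.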
+47 -29
kernel/bpf/verifier.c
···
239 239 [CONST_IMM] = "imm",
240 240 };
241 241 
242 - static const struct {
243 - int map_type;
244 - int func_id;
245 - } func_limit[] = {
246 - {BPF_MAP_TYPE_PROG_ARRAY, BPF_FUNC_tail_call},
247 - {BPF_MAP_TYPE_PERF_EVENT_ARRAY, BPF_FUNC_perf_event_read},
248 - {BPF_MAP_TYPE_PERF_EVENT_ARRAY, BPF_FUNC_perf_event_output},
249 - {BPF_MAP_TYPE_STACK_TRACE, BPF_FUNC_get_stackid},
250 - };
251 - 
252 242 static void print_verifier_state(struct verifier_env *env)
253 243 {
254 244 enum bpf_reg_type t;
···
911 921 
912 922 static int check_map_func_compatibility(struct bpf_map *map, int func_id)
913 923 {
914 - bool bool_map, bool_func;
915 - int i;
916 - 
917 924 if (!map)
918 925 return 0;
919 926 
920 - for (i = 0; i < ARRAY_SIZE(func_limit); i++) {
921 - bool_map = (map->map_type == func_limit[i].map_type);
922 - bool_func = (func_id == func_limit[i].func_id);
923 - /* only when map & func pair match it can continue.
924 - * don't allow any other map type to be passed into
925 - * the special func;
926 - */
927 - if (bool_func && bool_map != bool_func) {
928 - verbose("cannot pass map_type %d into func %d\n",
929 - map->map_type, func_id);
930 - return -EINVAL;
931 - }
927 + /* We need a two way check, first is from map perspective ... */
928 + switch (map->map_type) {
929 + case BPF_MAP_TYPE_PROG_ARRAY:
930 + if (func_id != BPF_FUNC_tail_call)
931 + goto error;
932 + break;
933 + case BPF_MAP_TYPE_PERF_EVENT_ARRAY:
934 + if (func_id != BPF_FUNC_perf_event_read &&
935 + func_id != BPF_FUNC_perf_event_output)
936 + goto error;
937 + break;
938 + case BPF_MAP_TYPE_STACK_TRACE:
939 + if (func_id != BPF_FUNC_get_stackid)
940 + goto error;
941 + break;
942 + default:
943 + break;
944 + }
945 + 
946 + /* ... and second from the function itself. */
947 + switch (func_id) {
948 + case BPF_FUNC_tail_call:
949 + if (map->map_type != BPF_MAP_TYPE_PROG_ARRAY)
950 + goto error;
951 + break;
952 + case BPF_FUNC_perf_event_read:
953 + case BPF_FUNC_perf_event_output:
954 + if (map->map_type != BPF_MAP_TYPE_PERF_EVENT_ARRAY)
955 + goto error;
956 + break;
957 + case BPF_FUNC_get_stackid:
958 + if (map->map_type != BPF_MAP_TYPE_STACK_TRACE)
959 + goto error;
960 + break;
961 + default:
962 + break;
932 963 }
933 964 
934 965 return 0;
966 + error:
967 + verbose("cannot pass map_type %d into func %d\n",
968 + map->map_type, func_id);
969 + return -EINVAL;
935 970 }
936 971 
937 972 static int check_call(struct verifier_env *env, int func_id)
···
2064 2049 return -E2BIG;
2065 2050 }
2066 2051 
2067 - /* remember this map */
2068 - env->used_maps[env->used_map_cnt++] = map;
2069 - 
2070 2052 /* hold the map. If the program is rejected by verifier,
2071 2053 * the map will be released by release_maps() or it
2072 2054 * will be used by the valid program until it's unloaded
2073 2055 * and all maps are released in free_bpf_prog_info()
2074 2056 */
2075 - bpf_map_inc(map, false);
2057 + map = bpf_map_inc(map, false);
2058 + if (IS_ERR(map)) {
2059 + fdput(f);
2060 + return PTR_ERR(map);
2061 + }
2062 + env->used_maps[env->used_map_cnt++] = map;
2063 + 
2076 2064 fdput(f);
2077 2065 next_insn:
2078 2066 insn++;
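The verifier change replaces the old one-direction `func_limit[]` loop with a two-way check: a special-purpose map may only be used by its dedicated helper, and a special-purpose helper may only see its dedicated map type. The old loop covered the second direction but not the first. A compilable sketch of the same structure, with invented enum names standing in for the kernel's constants:

```c
enum map_type { MAP_PROG_ARRAY, MAP_PERF_EVENT_ARRAY,
		MAP_STACK_TRACE, MAP_HASH };
enum func_id  { FN_TAIL_CALL, FN_PERF_EVENT_READ, FN_PERF_EVENT_OUTPUT,
		FN_GET_STACKID, FN_MAP_LOOKUP };

/* 0 if the map/helper pairing is allowed, -1 otherwise */
static int check_compat(enum map_type map, enum func_id fn)
{
	/* direction 1: a special map only works with its helper */
	switch (map) {
	case MAP_PROG_ARRAY:
		if (fn != FN_TAIL_CALL)
			return -1;
		break;
	case MAP_PERF_EVENT_ARRAY:
		if (fn != FN_PERF_EVENT_READ && fn != FN_PERF_EVENT_OUTPUT)
			return -1;
		break;
	case MAP_STACK_TRACE:
		if (fn != FN_GET_STACKID)
			return -1;
		break;
	default:
		break;
	}
	/* direction 2: a special helper only accepts its map type */
	switch (fn) {
	case FN_TAIL_CALL:
		if (map != MAP_PROG_ARRAY)
			return -1;
		break;
	case FN_PERF_EVENT_READ:
	case FN_PERF_EVENT_OUTPUT:
		if (map != MAP_PERF_EVENT_ARRAY)
			return -1;
		break;
	case FN_GET_STACKID:
		if (map != MAP_STACK_TRACE)
			return -1;
		break;
	default:
		break;
	}
	return 0;
}
```

Direction 1 is the case the old code missed: passing a prog-array map to a generic helper like map_lookup used to slip through.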
+1 -2
net/batman-adv/hard-interface.c
···
572 572 struct batadv_priv *bat_priv = netdev_priv(hard_iface->soft_iface);
573 573 struct batadv_hard_iface *primary_if = NULL;
574 574 
575 - if (hard_iface->if_status == BATADV_IF_ACTIVE)
576 - batadv_hardif_deactivate_interface(hard_iface);
575 + batadv_hardif_deactivate_interface(hard_iface);
577 576 
578 577 if (hard_iface->if_status != BATADV_IF_INACTIVE)
579 578 goto out;
+1
net/batman-adv/originator.c
···
663 663 ether_addr_copy(neigh_node->addr, neigh_addr);
664 664 neigh_node->if_incoming = hard_iface;
665 665 neigh_node->orig_node = orig_node;
666 + neigh_node->last_seen = jiffies;
666 667 
667 668 /* extra reference for return */
668 669 kref_init(&neigh_node->refcount);
+9
net/batman-adv/routing.c
···
105 105 neigh_node = NULL;
106 106 
107 107 spin_lock_bh(&orig_node->neigh_list_lock);
108 + /* curr_router used earlier may not be the current orig_ifinfo->router
109 + * anymore because it was dereferenced outside of the neigh_list_lock
110 + * protected region. After the new best neighbor has replace the current
111 + * best neighbor the reference counter needs to decrease. Consequently,
112 + * the code needs to ensure the curr_router variable contains a pointer
113 + * to the replaced best neighbor.
114 + */
115 + curr_router = rcu_dereference_protected(orig_ifinfo->router, true);
116 + 
108 117 rcu_assign_pointer(orig_ifinfo->router, neigh_node);
109 118 spin_unlock_bh(&orig_node->neigh_list_lock);
110 119 batadv_orig_ifinfo_put(orig_ifinfo);
+6
net/batman-adv/send.c
···
675 675 
676 676 if (pending) {
677 677 hlist_del(&forw_packet->list);
678 + if (!forw_packet->own)
679 + atomic_inc(&bat_priv->bcast_queue_left);
680 + 
678 681 batadv_forw_packet_free(forw_packet);
679 682 }
680 683 }
···
705 702 
706 703 if (pending) {
707 704 hlist_del(&forw_packet->list);
705 + if (!forw_packet->own)
706 + atomic_inc(&bat_priv->batman_queue_left);
707 + 
708 708 batadv_forw_packet_free(forw_packet);
709 709 }
710 710 }
+6 -2
net/batman-adv/soft-interface.c
···
408 408 */
409 409 nf_reset(skb);
410 410 
411 + if (unlikely(!pskb_may_pull(skb, ETH_HLEN)))
412 + goto dropped;
413 + 
411 414 vid = batadv_get_vid(skb, 0);
412 415 ethhdr = eth_hdr(skb);
413 416 
414 417 switch (ntohs(ethhdr->h_proto)) {
415 418 case ETH_P_8021Q:
419 + if (!pskb_may_pull(skb, VLAN_ETH_HLEN))
420 + goto dropped;
421 + 
416 422 vhdr = (struct vlan_ethhdr *)skb->data;
417 423 
418 424 if (vhdr->h_vlan_encapsulated_proto != ethertype)
···
430 424 }
431 425 
432 426 /* skb->dev & skb->pkt_type are set here */
433 - if (unlikely(!pskb_may_pull(skb, ETH_HLEN)))
434 - goto dropped;
435 427 skb->protocol = eth_type_trans(skb, soft_iface);
436 428 
437 429 /* should not be necessary anymore as we use skb_pull_rcsum()
+2
net/ipv4/inet_hashtables.c
···
470 470 const struct sock *sk2,
471 471 bool match_wildcard))
472 472 {
473 + struct inet_bind_bucket *tb = inet_csk(sk)->icsk_bind_hash;
473 474 struct sock *sk2;
474 475 struct hlist_nulls_node *node;
475 476 kuid_t uid = sock_i_uid(sk);
···
480 479 sk2->sk_family == sk->sk_family &&
481 480 ipv6_only_sock(sk2) == ipv6_only_sock(sk) &&
482 481 sk2->sk_bound_dev_if == sk->sk_bound_dev_if &&
482 + inet_csk(sk2)->icsk_bind_hash == tb &&
483 483 sk2->sk_reuseport && uid_eq(uid, sock_i_uid(sk2)) &&
484 484 saddr_same(sk, sk2, false))
485 485 return reuseport_add_sock(sk, sk2);
+21 -9
net/ipv4/ip_gre.c
···
179 179 return flags;
180 180 }
181 181 
182 + /* Fills in tpi and returns header length to be pulled. */
182 183 static int parse_gre_header(struct sk_buff *skb, struct tnl_ptk_info *tpi,
183 184 bool *csum_err)
184 185 {
···
239 238 return -EINVAL;
240 239 }
241 240 }
242 - return iptunnel_pull_header(skb, hdr_len, tpi->proto, false);
241 + return hdr_len;
243 242 }
244 243 
245 244 static void ipgre_err(struct sk_buff *skb, u32 info,
···
342 341 struct tnl_ptk_info tpi;
343 342 bool csum_err = false;
344 343 
345 - if (parse_gre_header(skb, &tpi, &csum_err)) {
344 + if (parse_gre_header(skb, &tpi, &csum_err) < 0) {
346 345 if (!csum_err) /* ignore csum errors. */
347 346 return;
348 347 }
···
420 419 {
421 420 struct tnl_ptk_info tpi;
422 421 bool csum_err = false;
422 + int hdr_len;
423 423 
424 424 #ifdef CONFIG_NET_IPGRE_BROADCAST
425 425 if (ipv4_is_multicast(ip_hdr(skb)->daddr)) {
···
430 428 }
431 429 #endif
432 430 
433 - if (parse_gre_header(skb, &tpi, &csum_err) < 0)
431 + hdr_len = parse_gre_header(skb, &tpi, &csum_err);
432 + if (hdr_len < 0)
433 + goto drop;
434 + if (iptunnel_pull_header(skb, hdr_len, tpi.proto, false) < 0)
434 435 goto drop;
435 436 
436 437 if (ipgre_rcv(skb, &tpi) == PACKET_RCVD)
···
528 523 return ip_route_output_key(net, fl);
529 524 }
530 525 
531 - static void gre_fb_xmit(struct sk_buff *skb, struct net_device *dev)
526 + static void gre_fb_xmit(struct sk_buff *skb, struct net_device *dev,
527 + __be16 proto)
532 528 {
533 529 struct ip_tunnel_info *tun_info;
534 530 const struct ip_tunnel_key *key;
···
581 575 }
582 576 
583 577 flags = tun_info->key.tun_flags & (TUNNEL_CSUM | TUNNEL_KEY);
584 - build_header(skb, tunnel_hlen, flags, htons(ETH_P_TEB),
578 + build_header(skb, tunnel_hlen, flags, proto,
585 579 tunnel_id_to_key(tun_info->key.tun_id), 0);
586 580 
587 581 df = key->tun_flags & TUNNEL_DONT_FRAGMENT ? htons(IP_DF) : 0;
···
622 616 const struct iphdr *tnl_params;
623 617 
624 618 if (tunnel->collect_md) {
625 - gre_fb_xmit(skb, dev);
619 + gre_fb_xmit(skb, dev, skb->protocol);
626 620 return NETDEV_TX_OK;
627 621 }
···
666 660 struct ip_tunnel *tunnel = netdev_priv(dev);
667 661 
668 662 if (tunnel->collect_md) {
669 - gre_fb_xmit(skb, dev);
663 + gre_fb_xmit(skb, dev, htons(ETH_P_TEB));
670 664 return NETDEV_TX_OK;
671 665 }
···
899 893 netif_keep_dst(dev);
900 894 dev->addr_len = 4;
901 895 
902 - if (iph->daddr) {
896 + if (iph->daddr && !tunnel->collect_md) {
903 897 #ifdef CONFIG_NET_IPGRE_BROADCAST
904 898 if (ipv4_is_multicast(iph->daddr)) {
905 899 if (!iph->saddr)
···
908 902 dev->header_ops = &ipgre_header_ops;
909 903 }
910 904 #endif
911 - } else
905 + } else if (!tunnel->collect_md) {
912 906 dev->header_ops = &ipgre_header_ops;
907 + }
913 908 
914 909 return ip_tunnel_init(dev);
915 910 }
···
951 944 if (data[IFLA_GRE_OFLAGS])
952 945 flags |= nla_get_be16(data[IFLA_GRE_OFLAGS]);
953 946 if (flags & (GRE_VERSION|GRE_ROUTING))
947 + return -EINVAL;
948 + 
949 + if (data[IFLA_GRE_COLLECT_METADATA] &&
950 + data[IFLA_GRE_ENCAP_TYPE] &&
951 + nla_get_u16(data[IFLA_GRE_ENCAP_TYPE]) != TUNNEL_ENCAP_NONE)
954 952 return -EINVAL;
955 953 
956 954 return 0;
+2 -2
net/ipv4/ip_tunnel.c
···
326 326 
327 327 if (!IS_ERR(rt)) {
328 328 tdev = rt->dst.dev;
329 - dst_cache_set_ip4(&tunnel->dst_cache, &rt->dst,
330 - fl4.saddr);
331 329 ip_rt_put(rt);
332 330 }
333 331 if (dev->type != ARPHRD_ETHER)
334 332 dev->flags |= IFF_POINTOPOINT;
333 + 
334 + dst_cache_reset(&tunnel->dst_cache);
335 335 }
336 336 
337 337 if (!tdev && tunnel->parms.link)
+2 -2
net/l2tp/l2tp_core.c
···
1376 1376 memcpy(&udp_conf.peer_ip6, cfg->peer_ip6,
1377 1377 sizeof(udp_conf.peer_ip6));
1378 1378 udp_conf.use_udp6_tx_checksums =
1379 - cfg->udp6_zero_tx_checksums;
1379 + !cfg->udp6_zero_tx_checksums;
1380 1380 udp_conf.use_udp6_rx_checksums =
1381 - cfg->udp6_zero_rx_checksums;
1381 + !cfg->udp6_zero_rx_checksums;
1382 1382 } else
1383 1383 #endif
1384 1384 {
+2 -2
net/mac80211/iface.c
···
1761 1761 
1762 1762 ret = dev_alloc_name(ndev, ndev->name);
1763 1763 if (ret < 0) {
1764 - free_netdev(ndev);
1764 + ieee80211_if_free(ndev);
1765 1765 return ret;
1766 1766 }
1767 1767 
···
1847 1847 
1848 1848 ret = register_netdevice(ndev);
1849 1849 if (ret) {
1850 - free_netdev(ndev);
1850 + ieee80211_if_free(ndev);
1851 1851 return ret;
1852 1852 }
1853 1853 }
+5
net/tipc/node.c
···
1444 1444 int bearer_id = b->identity;
1445 1445 struct tipc_link_entry *le;
1446 1446 u16 bc_ack = msg_bcast_ack(hdr);
1447 + u32 self = tipc_own_addr(net);
1447 1448 int rc = 0;
1448 1449 
1449 1450 __skb_queue_head_init(&xmitq);
···
1460 1459 else
1461 1460 return tipc_node_bc_rcv(net, skb, bearer_id);
1462 1461 }
1462 + 
1463 + /* Discard unicast link messages destined for another node */
1464 + if (unlikely(!msg_short(hdr) && (msg_destnode(hdr) != self)))
1465 + goto discard;
1463 1466 
1464 1467 /* Locate neighboring node that sent packet */
1465 1468 n = tipc_node_find(net, msg_prevnode(hdr));
-1
samples/bpf/trace_output_kern.c
···
18 18 u64 cookie;
19 19 } data;
20 20 
21 - memset(&data, 0, sizeof(data));
22 21 data.pid = bpf_get_current_pid_tgid();
23 22 data.cookie = 0x12345678;
24 23 