
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Pull networking fixes from David Miller:

1) If an IPVS tunnel is created with a mixed-family destination
address, it cannot be removed. Fix from Alexey Andriyanov.

2) Fix module refcount underflow in netfilter's nft_compat, from Pablo
Neira Ayuso.

3) Generic statistics infrastructure can reference variables sitting on
a released function stack, so always use dynamic allocation.
Fix from Ignacy Gawędzki.
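
The bug class behind item 3, sketched in stand-alone C (hypothetical names, not the kernel code): data filled in on a function's stack must not be handed to a consumer that outlives the function, so the snapshot is heap-allocated instead.

```c
#include <assert.h>
#include <stdlib.h>

/* Illustration of the bug class only. If a stats structure lives on
 * the function stack but is referenced after the function returns,
 * the consumer sees garbage. Allocating it dynamically keeps the data
 * valid for the consumer's lifetime. */
struct stats {
	long packets;
	long bytes;
};

/* Hypothetical helper: heap-allocate the snapshot so it outlives the
 * function that filled it in. The caller owns and frees it. */
static struct stats *stats_snapshot(long packets, long bytes)
{
	struct stats *s = malloc(sizeof(*s));

	if (!s)
		return NULL;
	s->packets = packets;
	s->bytes = bytes;
	return s;	/* safe: heap storage, not stack storage */
}
```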

4) skb_copy_bits() return value test is inverted in ip_check_defrag().
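
The inverted-test pattern from item 4, mocked up with a stand-in for skb_copy_bits() (which returns 0 on success and a negative errno on failure); the names here are illustrative, not the kernel's:

```c
#include <assert.h>
#include <string.h>

/* Stand-in for skb_copy_bits(): 0 on success, negative on error. */
static int copy_bits(char *dst, const char *src, size_t len, size_t avail)
{
	if (len > avail)
		return -1;	/* not enough data */
	memcpy(dst, src, len);
	return 0;		/* success */
}

static int check_frag(char *dst, const char *src, size_t len, size_t avail)
{
	/* The buggy form tested `if (!copy_bits(...))` and so treated
	 * success as failure. The correct form bails out only when the
	 * call actually reports an error. */
	if (copy_bits(dst, src, len, avail) < 0)
		return 0;	/* give up */
	return 1;		/* copied, keep processing */
}
```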

5) Fix network namespace exit in openvswitch, we have to release all of
the per-net vports. From Pravin B Shelar.

6) Fix signedness bug in CAIF's cfpkt_iterate(), from Dan Carpenter.

7) Fix rhashtable grow/shrink behavior, only expand during inserts and
shrink during deletes. From Daniel Borkmann.
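
The policy from item 7 can be sketched as two separate decision points, so grow and shrink can never fight each other; the thresholds below are illustrative assumptions, not rhashtable's actual watermarks:

```c
#include <assert.h>

/* Hedged sketch of the grow/shrink split described above: growth is
 * only considered on insert, shrinking only on delete. */
enum resize { RESIZE_NONE, RESIZE_GROW, RESIZE_SHRINK };

static enum resize on_insert(unsigned int nelems, unsigned int buckets)
{
	/* expand above 75% load (assumed threshold) */
	return (nelems * 4 > buckets * 3) ? RESIZE_GROW : RESIZE_NONE;
}

static enum resize on_delete(unsigned int nelems, unsigned int buckets)
{
	/* shrink below 30% load (assumed threshold) */
	return (nelems * 10 < buckets * 3) ? RESIZE_SHRINK : RESIZE_NONE;
}
```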

8) Netdevice names with semicolons should never be allowed, because
they serve as a separator. From Matthew Thode.

9) Use {,__}set_current_state() where appropriate, from Fabian
Frederick.

10) Revert byte queue limits support in r8169 driver, it's causing
regressions we can't figure out.

11) tcp_should_expand_sndbuf() erroneously uses tp->packets_out to
measure packets in flight, properly use tcp_packets_in_flight()
instead. From Neal Cardwell.
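
The distinction in item 11, sketched: packets_out counts every unacked segment sent, while packets actually in flight must discount SACKed and lost segments and add back retransmissions. Field names mirror the kernel's tcp_sock, but this is a stand-alone illustration, not the kernel implementation:

```c
#include <assert.h>

struct tcp_counters {
	unsigned int packets_out;	/* segments sent and unacked */
	unsigned int sacked_out;	/* SACKed segments */
	unsigned int lost_out;		/* segments considered lost */
	unsigned int retrans_out;	/* retransmitted segments */
};

/* in_flight = packets_out - (sacked + lost) + retransmitted */
static unsigned int packets_in_flight(const struct tcp_counters *tp)
{
	return tp->packets_out - (tp->sacked_out + tp->lost_out) +
	       tp->retrans_out;
}
```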

12) Fix accidental removal of support for bluetooth in CSR based Intel
wireless cards. From Marcel Holtmann.

13) We accidentally added a behavioral change between native and compat
tasks, wrt testing the MSG_CMSG_COMPAT bit. Just ignore it if the
user happened to set it in a native binary as that was always the
behavior we had. From Catalin Marinas.
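
The approach in item 13, sketched with the flag values from the uapi headers (MSG_CMSG_COMPAT is the kernel-internal marker 0x80000000): instead of rejecting a native task that passed the bit, simply mask it off, preserving the historical behavior. The helper name is hypothetical:

```c
#include <assert.h>

#define MSG_DONTWAIT	0x40
#define MSG_CMSG_COMPAT	0x80000000u	/* kernel-internal marker */

static unsigned int sanitize_msg_flags(unsigned int flags, int is_compat_task)
{
	/* Native tasks: silently ignore the compat bit, don't error out. */
	if (!is_compat_task)
		flags &= ~MSG_CMSG_COMPAT;
	return flags;
}
```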

14) Check genlmsg_unicast() return value in hwsim netlink tx frame
handling, from Bob Copeland.

15) Fix stale ->radar_required setting in mac80211 that can prevent
starting new scans, from Eliad Peller.

16) Fix memory leak in nl80211 monitor, from Johannes Berg.

17) Fix race in TX index handling in xen-netback, from David Vrabel.

18) Don't enable interrupts in amd-xgbe driver until all software et al.
state is ready for the interrupt handler to run. From Thomas
Lendacky.

19) Add missing netlink_ns_capable() checks to rtnl_newlink(), from Eric
W Biederman.

20) The amount of header space needed in macvtap was not calculated
properly, fix it otherwise we splat past the beginning of the
packet. From Eric Dumazet.

21) Fix bcmgenet TCP TX perf regression, from Jaedon Shin.

22) Don't raw initialize or mod timers, use setup_timer() and
mod_timer() instead. From Vaishali Thakkar.
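
The consolidation in item 22, mocked up in userspace: setup_timer() collapses the open-coded field assignments, and mod_timer() replaces setting .expires by hand before add_timer(). The struct and helpers below are simplified stand-ins for the kernel API, not the real implementation:

```c
#include <assert.h>

struct timer_list {
	void (*function)(unsigned long);
	unsigned long data;
	unsigned long expires;
};

/* Replaces three raw field assignments on the timer. */
static void setup_timer(struct timer_list *t,
			void (*fn)(unsigned long), unsigned long data)
{
	t->function = fn;
	t->data = data;
}

/* Replaces setting .expires then calling add_timer(); a real timer
 * would also (re)arm itself here. */
static void mod_timer(struct timer_list *t, unsigned long expires)
{
	t->expires = expires;
}

/* Dummy callback for illustration. */
static void watchdog(unsigned long data)
{
	(void)data;
}
```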

23) Fix software maintained statistics in bcmgenet and systemport
drivers, from Florian Fainelli.

24) DMA descriptor updates in sh_eth need proper memory barriers, from
Ben Hutchings.

25) Don't do UDP Fragmentation Offload on RAW sockets, from Michal
Kubecek.

26) Openvswitch's non-masked set actions aren't constructed properly
into netlink messages, fix from Joe Stringer.

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (116 commits)
openvswitch: Fix serialization of non-masked set actions.
gianfar: Reduce logging noise seen due to phy polling if link is down
ibmveth: Add function to enable live MAC address changes
net: bridge: add compile-time assert for cb struct size
udp: only allow UFO for packets from SOCK_DGRAM sockets
sh_eth: Really fix padding of short frames on TX
Revert "sh_eth: Enable Rx descriptor word 0 shift for r8a7790"
sh_eth: Fix RX recovery on R-Car in case of RX ring underrun
sh_eth: Ensure proper ordering of descriptor active bit write/read
net/mlx4_en: Disbale GRO for incoming loopback/selftest packets
net/mlx4_core: Fix wrong mask and error flow for the update-qp command
net: systemport: fix software maintained statistics
net: bcmgenet: fix software maintained statistics
rxrpc: don't multiply with HZ twice
rxrpc: terminate retrans loop when sending of skb fails
net/hsr: Fix NULL pointer dereference and refcnt bugs when deleting a HSR interface.
net: pasemi: Use setup_timer and mod_timer
net: stmmac: Use setup_timer and mod_timer
net: 8390: axnet_cs: Use setup_timer and mod_timer
net: 8390: pcnet_cs: Use setup_timer and mod_timer
...

+1276 -740
+4
Documentation/devicetree/bindings/net/amd-xgbe-phy.txt
···
 - amd,serdes-cdr-rate: CDR rate speed selection
 - amd,serdes-pq-skew: PQ (data sampling) skew
 - amd,serdes-tx-amp: TX amplitude boost
+- amd,serdes-dfe-tap-config: DFE taps available to run
+- amd,serdes-dfe-tap-enable: DFE taps to enable

 Example:
	xgbe_phy@e1240800 {
···
		amd,serdes-cdr-rate = <2>, <2>, <7>;
		amd,serdes-pq-skew = <10>, <10>, <30>;
		amd,serdes-tx-amp = <15>, <15>, <10>;
+		amd,serdes-dfe-tap-config = <3>, <3>, <1>;
+		amd,serdes-dfe-tap-enable = <0>, <0>, <127>;
	};
+1 -1
MAINTAINERS
···
 BONDING DRIVER
 M:	Jay Vosburgh <j.vosburgh@gmail.com>
 M:	Veaceslav Falico <vfalico@gmail.com>
-M:	Andy Gospodarek <andy@greyhouse.net>
+M:	Andy Gospodarek <gospo@cumulusnetworks.com>
 L:	netdev@vger.kernel.org
 W:	http://sourceforge.net/projects/bonding/
 S:	Supported
+7 -1
arch/arm/mach-msm/board-halibut.c
···
 #include <linux/input.h>
 #include <linux/io.h>
 #include <linux/delay.h>
+#include <linux/smc91x.h>

 #include <mach/hardware.h>
 #include <asm/mach-types.h>
···
 	[1] = {
 		.start = MSM_GPIO_TO_INT(49),
 		.end = MSM_GPIO_TO_INT(49),
-		.flags = IORESOURCE_IRQ,
+		.flags = IORESOURCE_IRQ | IORESOURCE_IRQ_HIGHLEVEL,
 	},
+};
+
+static struct smc91x_platdata smc91x_platdata = {
+	.flags = SMC91X_USE_16BIT | SMC91X_NOWAIT,
 };

 static struct platform_device smc91x_device = {
···
 	.id = 0,
 	.num_resources = ARRAY_SIZE(smc91x_resources),
 	.resource = smc91x_resources,
+	.dev.platform_data = &smc91x_platdata,
 };

 static struct platform_device *devices[] __initdata = {
+7 -1
arch/arm/mach-msm/board-qsd8x50.c
···
 #include <linux/usb/msm_hsusb.h>
 #include <linux/err.h>
 #include <linux/clkdev.h>
+#include <linux/smc91x.h>

 #include <asm/mach-types.h>
 #include <asm/mach/arch.h>
···
 		.flags = IORESOURCE_MEM,
 	},
 	[1] = {
-		.flags = IORESOURCE_IRQ,
+		.flags = IORESOURCE_IRQ | IORESOURCE_IRQ_HIGHLEVEL,
 	},
+};
+
+static struct smc91x_platdata smc91x_platdata = {
+	.flags = SMC91X_USE_16BIT | SMC91X_NOWAIT,
 };

 static struct platform_device smc91x_device = {
···
 	.id = 0,
 	.num_resources = ARRAY_SIZE(smc91x_resources),
 	.resource = smc91x_resources,
+	.dev.platform_data = &smc91x_platdata,
 };

 static int __init msm_init_smc91x(void)
+5
arch/arm/mach-pxa/idp.c
···
 	}
 };

+static struct smc91x_platdata smc91x_platdata = {
+	.flags = SMC91X_USE_32BIT | SMC91X_USE_DMA | SMC91X_NOWAIT,
+};
+
 static struct platform_device smc91x_device = {
 	.name = "smc91x",
 	.id = 0,
 	.num_resources = ARRAY_SIZE(smc91x_resources),
 	.resource = smc91x_resources,
+	.dev.platform_data = &smc91x_platdata,
 };

 static void idp_backlight_power(int on)
+7 -1
arch/arm/mach-pxa/lpd270.c
···
 #include <linux/mtd/mtd.h>
 #include <linux/mtd/partitions.h>
 #include <linux/pwm_backlight.h>
+#include <linux/smc91x.h>

 #include <asm/types.h>
 #include <asm/setup.h>
···
 	[1] = {
 		.start = LPD270_ETHERNET_IRQ,
 		.end = LPD270_ETHERNET_IRQ,
-		.flags = IORESOURCE_IRQ,
+		.flags = IORESOURCE_IRQ | IORESOURCE_IRQ_HIGHEDGE,
 	},
+};
+
+struct smc91x_platdata smc91x_platdata = {
+	.flags = SMC91X_USE_16BIT | SMC91X_NOWAIT,
 };

 static struct platform_device smc91x_device = {
···
 	.id = 0,
 	.num_resources = ARRAY_SIZE(smc91x_resources),
 	.resource = smc91x_resources,
+	.dev.platform_data = &smc91x_platdata,
 };

 static struct resource lpd270_flash_resources[] = {
+7
arch/arm/mach-realview/core.c
···
 #include <linux/platform_data/video-clcd-versatile.h>
 #include <linux/io.h>
 #include <linux/smsc911x.h>
+#include <linux/smc91x.h>
 #include <linux/ata_platform.h>
 #include <linux/amba/mmci.h>
 #include <linux/gfp.h>
···
 	.phy_interface = PHY_INTERFACE_MODE_MII,
 };

+static struct smc91x_platdata smc91x_platdata = {
+	.flags = SMC91X_USE_32BIT | SMC91X_NOWAIT,
+};
+
 static struct platform_device realview_eth_device = {
 	.name = "smsc911x",
 	.id = 0,
···
 	realview_eth_device.resource = res;
 	if (strcmp(realview_eth_device.name, "smsc911x") == 0)
 		realview_eth_device.dev.platform_data = &smsc911x_config;
+	else
+		realview_eth_device.dev.platform_data = &smc91x_platdata;

 	return platform_device_register(&realview_eth_device);
 }
+1 -1
arch/arm/mach-realview/realview_eb.c
···
 	[1] = {
 		.start = IRQ_EB_ETH,
 		.end = IRQ_EB_ETH,
-		.flags = IORESOURCE_IRQ,
+		.flags = IORESOURCE_IRQ | IORESOURCE_IRQ_HIGHEDGE,
 	},
 };
+6
arch/arm/mach-sa1100/neponset.c
···
 #include <linux/pm.h>
 #include <linux/serial_core.h>
 #include <linux/slab.h>
+#include <linux/smc91x.h>

 #include <asm/mach-types.h>
 #include <asm/mach/map.h>
···
 			0x02000000, "smc91x-attrib"),
 		{ .flags = IORESOURCE_IRQ },
 	};
+	struct smc91x_platdata smc91x_platdata = {
+		.flags = SMC91X_USE_8BIT | SMC91X_IO_SHIFT_2 | SMC91X_NOWAIT,
+	};
 	struct platform_device_info smc91x_devinfo = {
 		.parent = &dev->dev,
 		.name = "smc91x",
 		.id = 0,
 		.res = smc91x_resources,
 		.num_res = ARRAY_SIZE(smc91x_resources),
+		.data = &smc91x_platdata,
+		.size_data = sizeof(smc91x_platdata),
 	};
 	int ret, irq;
+7
arch/arm/mach-sa1100/pleb.c
···
 #include <linux/irq.h>
 #include <linux/io.h>
 #include <linux/mtd/partitions.h>
+#include <linux/smc91x.h>

 #include <mach/hardware.h>
 #include <asm/setup.h>
···
 #endif
 };

+static struct smc91x_platdata smc91x_platdata = {
+	.flags = SMC91X_USE_16BIT | SMC91X_NOWAIT,
+};

 static struct platform_device smc91x_device = {
 	.name = "smc91x",
 	.id = 0,
 	.num_resources = ARRAY_SIZE(smc91x_resources),
 	.resource = smc91x_resources,
+	.dev = {
+		.platform_data = &smc91x_platdata,
+	},
 };

 static struct platform_device *devices[] __initdata = {
+1
drivers/bluetooth/btusb.c
···
 	{ USB_DEVICE(0x1286, 0x2046), .driver_info = BTUSB_MARVELL },

 	/* Intel Bluetooth devices */
+	{ USB_DEVICE(0x8087, 0x07da), .driver_info = BTUSB_CSR },
 	{ USB_DEVICE(0x8087, 0x07dc), .driver_info = BTUSB_INTEL },
 	{ USB_DEVICE(0x8087, 0x0a2a), .driver_info = BTUSB_INTEL },
 	{ USB_DEVICE(0x8087, 0x0a2b), .driver_info = BTUSB_INTEL_NEW },
+1 -1
drivers/isdn/hardware/mISDN/hfcpci.c
···
 	enable_hwirq(hc);
 	spin_unlock_irqrestore(&hc->lock, flags);
 	/* Timeout 80ms */
-	current->state = TASK_UNINTERRUPTIBLE;
+	set_current_state(TASK_UNINTERRUPTIBLE);
 	schedule_timeout((80 * HZ) / 1000);
 	printk(KERN_INFO "HFC PCI: IRQ %d count %d\n",
 	       hc->irq, hc->irqcnt);
+1 -1
drivers/net/Kconfig
···
 	  making it transparent to the connected L2 switch.

 	  Ipvlan devices can be added using the "ip" command from the
-	  iproute2 package starting with the iproute2-X.Y.ZZ release:
+	  iproute2 package starting with the iproute2-3.19 release:

 	  "ip link add link <main-dev> [ NAME ] type ipvlan"
+1 -1
drivers/net/appletalk/Kconfig
···
 config LTPC
 	tristate "Apple/Farallon LocalTalk PC support"
-	depends on DEV_APPLETALK && (ISA || EISA) && ISA_DMA_API
+	depends on DEV_APPLETALK && (ISA || EISA) && ISA_DMA_API && VIRT_TO_BUS
 	help
 	  This allows you to use the AppleTalk PC card to connect to LocalTalk
 	  networks. The card is also known as the Farallon PhoneNet PC card.
+1 -1
drivers/net/dsa/bcm_sf2.h
···
 {									\
 	u32 indir, dir;							\
 	spin_lock(&priv->indir_lock);					\
-	indir = reg_readl(priv, REG_DIR_DATA_READ);			\
 	dir = __raw_readl(priv->name + off);				\
+	indir = reg_readl(priv, REG_DIR_DATA_READ);			\
 	spin_unlock(&priv->indir_lock);					\
 	return (u64)indir << 32 | dir;					\
 }									\
+2 -5
drivers/net/ethernet/8390/axnet_cs.c
···
 	link->open++;

 	info->link_status = 0x00;
-	init_timer(&info->watchdog);
-	info->watchdog.function = ei_watchdog;
-	info->watchdog.data = (u_long)dev;
-	info->watchdog.expires = jiffies + HZ;
-	add_timer(&info->watchdog);
+	setup_timer(&info->watchdog, ei_watchdog, (u_long)dev);
+	mod_timer(&info->watchdog, jiffies + HZ);

 	return ax_open(dev);
 } /* axnet_open */
+2 -5
drivers/net/ethernet/8390/pcnet_cs.c
···
 	info->phy_id = info->eth_phy;
 	info->link_status = 0x00;
-	init_timer(&info->watchdog);
-	info->watchdog.function = ei_watchdog;
-	info->watchdog.data = (u_long)dev;
-	info->watchdog.expires = jiffies + HZ;
-	add_timer(&info->watchdog);
+	setup_timer(&info->watchdog, ei_watchdog, (u_long)dev);
+	mod_timer(&info->watchdog, jiffies + HZ);

 	return ei_open(dev);
 } /* pcnet_open */
+26 -27
drivers/net/ethernet/altera/altera_tse_main.c
···
 	u16 pktlength;
 	u16 pktstatus;

-	while ((rxstatus = priv->dmaops->get_rx_status(priv)) != 0) {
+	while (((rxstatus = priv->dmaops->get_rx_status(priv)) != 0) &&
+	       (count < limit)) {
 		pktstatus = rxstatus >> 16;
 		pktlength = rxstatus & 0xffff;
···
 	struct altera_tse_private *priv =
			container_of(napi, struct altera_tse_private, napi);
 	int rxcomplete = 0;
-	int txcomplete = 0;
 	unsigned long int flags;

-	txcomplete = tse_tx_complete(priv);
+	tse_tx_complete(priv);

 	rxcomplete = tse_rx(priv, budget);

-	if (rxcomplete >= budget || txcomplete > 0)
-		return rxcomplete;
+	if (rxcomplete < budget) {

-	napi_gro_flush(napi, false);
-	__napi_complete(napi);
+		napi_gro_flush(napi, false);
+		__napi_complete(napi);

-	netdev_dbg(priv->dev,
-		   "NAPI Complete, did %d packets with budget %d\n",
-		   txcomplete+rxcomplete, budget);
+		netdev_dbg(priv->dev,
+			   "NAPI Complete, did %d packets with budget %d\n",
+			   rxcomplete, budget);

-	spin_lock_irqsave(&priv->rxdma_irq_lock, flags);
-	priv->dmaops->enable_rxirq(priv);
-	priv->dmaops->enable_txirq(priv);
-	spin_unlock_irqrestore(&priv->rxdma_irq_lock, flags);
-	return rxcomplete + txcomplete;
+		spin_lock_irqsave(&priv->rxdma_irq_lock, flags);
+		priv->dmaops->enable_rxirq(priv);
+		priv->dmaops->enable_txirq(priv);
+		spin_unlock_irqrestore(&priv->rxdma_irq_lock, flags);
+	}
+	return rxcomplete;
 }

 /* DMA TX & RX FIFO interrupt routing
···
 {
 	struct net_device *dev = dev_id;
 	struct altera_tse_private *priv;
-	unsigned long int flags;

 	if (unlikely(!dev)) {
 		pr_err("%s: invalid dev pointer\n", __func__);
···
 	}
 	priv = netdev_priv(dev);

-	/* turn off desc irqs and enable napi rx */
-	spin_lock_irqsave(&priv->rxdma_irq_lock, flags);
-
-	if (likely(napi_schedule_prep(&priv->napi))) {
-		priv->dmaops->disable_rxirq(priv);
-		priv->dmaops->disable_txirq(priv);
-		__napi_schedule(&priv->napi);
-	}
-
+	spin_lock(&priv->rxdma_irq_lock);
 	/* reset IRQs */
 	priv->dmaops->clear_rxirq(priv);
 	priv->dmaops->clear_txirq(priv);
+	spin_unlock(&priv->rxdma_irq_lock);

-	spin_unlock_irqrestore(&priv->rxdma_irq_lock, flags);
+	if (likely(napi_schedule_prep(&priv->napi))) {
+		spin_lock(&priv->rxdma_irq_lock);
+		priv->dmaops->disable_rxirq(priv);
+		priv->dmaops->disable_txirq(priv);
+		spin_unlock(&priv->rxdma_irq_lock);
+		__napi_schedule(&priv->napi);
+	}

 	return IRQ_HANDLED;
 }
···
 	}

 	if (of_property_read_u32(pdev->dev.of_node, "tx-fifo-depth",
-				 &priv->rx_fifo_depth)) {
+				 &priv->tx_fifo_depth)) {
 		dev_err(&pdev->dev, "cannot obtain tx-fifo-depth\n");
 		ret = -ENXIO;
 		goto err_free_netdev;
+93 -82
drivers/net/ethernet/amd/xgbe/xgbe-drv.c
···
 	}
 }

+static int xgbe_request_irqs(struct xgbe_prv_data *pdata)
+{
+	struct xgbe_channel *channel;
+	struct net_device *netdev = pdata->netdev;
+	unsigned int i;
+	int ret;
+
+	ret = devm_request_irq(pdata->dev, pdata->dev_irq, xgbe_isr, 0,
+			       netdev->name, pdata);
+	if (ret) {
+		netdev_alert(netdev, "error requesting irq %d\n",
+			     pdata->dev_irq);
+		return ret;
+	}
+
+	if (!pdata->per_channel_irq)
+		return 0;
+
+	channel = pdata->channel;
+	for (i = 0; i < pdata->channel_count; i++, channel++) {
+		snprintf(channel->dma_irq_name,
+			 sizeof(channel->dma_irq_name) - 1,
+			 "%s-TxRx-%u", netdev_name(netdev),
+			 channel->queue_index);
+
+		ret = devm_request_irq(pdata->dev, channel->dma_irq,
+				       xgbe_dma_isr, 0,
+				       channel->dma_irq_name, channel);
+		if (ret) {
+			netdev_alert(netdev, "error requesting irq %d\n",
+				     channel->dma_irq);
+			goto err_irq;
+		}
+	}
+
+	return 0;
+
+err_irq:
+	/* Using an unsigned int, 'i' will go to UINT_MAX and exit */
+	for (i--, channel--; i < pdata->channel_count; i--, channel--)
+		devm_free_irq(pdata->dev, channel->dma_irq, channel);
+
+	devm_free_irq(pdata->dev, pdata->dev_irq, pdata);
+
+	return ret;
+}
+
+static void xgbe_free_irqs(struct xgbe_prv_data *pdata)
+{
+	struct xgbe_channel *channel;
+	unsigned int i;
+
+	devm_free_irq(pdata->dev, pdata->dev_irq, pdata);
+
+	if (!pdata->per_channel_irq)
+		return;
+
+	channel = pdata->channel;
+	for (i = 0; i < pdata->channel_count; i++, channel++)
+		devm_free_irq(pdata->dev, channel->dma_irq, channel);
+}
+
 void xgbe_init_tx_coalesce(struct xgbe_prv_data *pdata)
 {
 	struct xgbe_hw_if *hw_if = &pdata->hw_if;
···
 		return -EINVAL;
 	}

-	phy_stop(pdata->phydev);
-
 	spin_lock_irqsave(&pdata->lock, flags);

 	if (caller == XGMAC_DRIVER_CONTEXT)
 		netif_device_detach(netdev);

 	netif_tx_stop_all_queues(netdev);
-	xgbe_napi_disable(pdata, 0);

-	/* Powerdown Tx/Rx */
 	hw_if->powerdown_tx(pdata);
 	hw_if->powerdown_rx(pdata);
+
+	xgbe_napi_disable(pdata, 0);
+
+	phy_stop(pdata->phydev);

 	pdata->power_down = 1;
···
 	phy_start(pdata->phydev);

-	/* Enable Tx/Rx */
+	xgbe_napi_enable(pdata, 0);
+
 	hw_if->powerup_tx(pdata);
 	hw_if->powerup_rx(pdata);

 	if (caller == XGMAC_DRIVER_CONTEXT)
 		netif_device_attach(netdev);

-	xgbe_napi_enable(pdata, 0);
 	netif_tx_start_all_queues(netdev);

 	spin_unlock_irqrestore(&pdata->lock, flags);
···
 {
 	struct xgbe_hw_if *hw_if = &pdata->hw_if;
 	struct net_device *netdev = pdata->netdev;
+	int ret;

 	DBGPR("-->xgbe_start\n");
···
 	phy_start(pdata->phydev);

+	xgbe_napi_enable(pdata, 1);
+
+	ret = xgbe_request_irqs(pdata);
+	if (ret)
+		goto err_napi;
+
 	hw_if->enable_tx(pdata);
 	hw_if->enable_rx(pdata);

 	xgbe_init_tx_timers(pdata);

-	xgbe_napi_enable(pdata, 1);
 	netif_tx_start_all_queues(netdev);

 	DBGPR("<--xgbe_start\n");

 	return 0;
+
+err_napi:
+	xgbe_napi_disable(pdata, 1);
+
+	phy_stop(pdata->phydev);
+
+	hw_if->exit(pdata);
+
+	return ret;
 }

 static void xgbe_stop(struct xgbe_prv_data *pdata)
···
 	DBGPR("-->xgbe_stop\n");

-	phy_stop(pdata->phydev);
-
 	netif_tx_stop_all_queues(netdev);
-	xgbe_napi_disable(pdata, 1);

 	xgbe_stop_tx_timers(pdata);

 	hw_if->disable_tx(pdata);
 	hw_if->disable_rx(pdata);
+
+	xgbe_free_irqs(pdata);
+
+	xgbe_napi_disable(pdata, 1);
+
+	phy_stop(pdata->phydev);
+
+	hw_if->exit(pdata);

 	channel = pdata->channel;
 	for (i = 0; i < pdata->channel_count; i++, channel++) {
···
 static void xgbe_restart_dev(struct xgbe_prv_data *pdata)
 {
-	struct xgbe_channel *channel;
-	struct xgbe_hw_if *hw_if = &pdata->hw_if;
-	unsigned int i;
-
 	DBGPR("-->xgbe_restart_dev\n");

 	/* If not running, "restart" will happen on open */
···
 		return;

 	xgbe_stop(pdata);
-	synchronize_irq(pdata->dev_irq);
-	if (pdata->per_channel_irq) {
-		channel = pdata->channel;
-		for (i = 0; i < pdata->channel_count; i++, channel++)
-			synchronize_irq(channel->dma_irq);
-	}

 	xgbe_free_tx_data(pdata);
 	xgbe_free_rx_data(pdata);
-
-	/* Issue software reset to device */
-	hw_if->exit(pdata);

 	xgbe_start(pdata);
···
 static int xgbe_open(struct net_device *netdev)
 {
 	struct xgbe_prv_data *pdata = netdev_priv(netdev);
-	struct xgbe_hw_if *hw_if = &pdata->hw_if;
 	struct xgbe_desc_if *desc_if = &pdata->desc_if;
-	struct xgbe_channel *channel = NULL;
-	unsigned int i = 0;
 	int ret;

 	DBGPR("-->xgbe_open\n");
···
 	INIT_WORK(&pdata->restart_work, xgbe_restart);
 	INIT_WORK(&pdata->tx_tstamp_work, xgbe_tx_tstamp);

-	/* Request interrupts */
-	ret = devm_request_irq(pdata->dev, pdata->dev_irq, xgbe_isr, 0,
-			       netdev->name, pdata);
-	if (ret) {
-		netdev_alert(netdev, "error requesting irq %d\n",
-			     pdata->dev_irq);
-		goto err_rings;
-	}
-
-	if (pdata->per_channel_irq) {
-		channel = pdata->channel;
-		for (i = 0; i < pdata->channel_count; i++, channel++) {
-			snprintf(channel->dma_irq_name,
-				 sizeof(channel->dma_irq_name) - 1,
-				 "%s-TxRx-%u", netdev_name(netdev),
-				 channel->queue_index);
-
-			ret = devm_request_irq(pdata->dev, channel->dma_irq,
-					       xgbe_dma_isr, 0,
-					       channel->dma_irq_name, channel);
-			if (ret) {
-				netdev_alert(netdev,
-					     "error requesting irq %d\n",
-					     channel->dma_irq);
-				goto err_irq;
-			}
-		}
-	}
-
 	ret = xgbe_start(pdata);
 	if (ret)
-		goto err_start;
+		goto err_rings;

 	DBGPR("<--xgbe_open\n");

 	return 0;
-
-err_start:
-	hw_if->exit(pdata);
-
-err_irq:
-	if (pdata->per_channel_irq) {
-		/* Using an unsigned int, 'i' will go to UINT_MAX and exit */
-		for (i--, channel--; i < pdata->channel_count; i--, channel--)
-			devm_free_irq(pdata->dev, channel->dma_irq, channel);
-	}
-
-	devm_free_irq(pdata->dev, pdata->dev_irq, pdata);

 err_rings:
 	desc_if->free_ring_resources(pdata);
···
 static int xgbe_close(struct net_device *netdev)
 {
 	struct xgbe_prv_data *pdata = netdev_priv(netdev);
-	struct xgbe_hw_if *hw_if = &pdata->hw_if;
 	struct xgbe_desc_if *desc_if = &pdata->desc_if;
-	struct xgbe_channel *channel;
-	unsigned int i;

 	DBGPR("-->xgbe_close\n");

 	/* Stop the device */
 	xgbe_stop(pdata);

-	/* Issue software reset to device */
-	hw_if->exit(pdata);
-
 	/* Free the ring descriptors and buffers */
 	desc_if->free_ring_resources(pdata);
-
-	/* Release the interrupts */
-	devm_free_irq(pdata->dev, pdata->dev_irq, pdata);
-	if (pdata->per_channel_irq) {
-		channel = pdata->channel;
-		for (i = 0; i < pdata->channel_count; i++, channel++)
-			devm_free_irq(pdata->dev, channel->dma_irq, channel);
-	}

 	/* Free the channel and ring structures */
 	xgbe_free_channels(pdata);
+4 -3
drivers/net/ethernet/broadcom/bcmsysport.c
···
 	/* RBUF misc statistics */
 	STAT_RBUF("rbuf_ovflow_cnt", mib.rbuf_ovflow_cnt, RBUF_OVFL_DISC_CNTR),
 	STAT_RBUF("rbuf_err_cnt", mib.rbuf_err_cnt, RBUF_ERR_PKT_CNTR),
-	STAT_MIB_RX("alloc_rx_buff_failed", mib.alloc_rx_buff_failed),
-	STAT_MIB_RX("rx_dma_failed", mib.rx_dma_failed),
-	STAT_MIB_TX("tx_dma_failed", mib.tx_dma_failed),
+	STAT_MIB_SOFT("alloc_rx_buff_failed", mib.alloc_rx_buff_failed),
+	STAT_MIB_SOFT("rx_dma_failed", mib.rx_dma_failed),
+	STAT_MIB_SOFT("tx_dma_failed", mib.tx_dma_failed),
 };

 #define BCM_SYSPORT_STATS_LEN	ARRAY_SIZE(bcm_sysport_gstrings_stats)
···
 		s = &bcm_sysport_gstrings_stats[i];
 		switch (s->type) {
 		case BCM_SYSPORT_STAT_NETDEV:
+		case BCM_SYSPORT_STAT_SOFT:
 			continue;
 		case BCM_SYSPORT_STAT_MIB_RX:
 		case BCM_SYSPORT_STAT_MIB_TX:
+2
drivers/net/ethernet/broadcom/bcmsysport.h
···
 	BCM_SYSPORT_STAT_RUNT,
 	BCM_SYSPORT_STAT_RXCHK,
 	BCM_SYSPORT_STAT_RBUF,
+	BCM_SYSPORT_STAT_SOFT,
 };

 /* Macros to help define ethtool statistics */
···
 #define STAT_MIB_RX(str, m) STAT_MIB(str, m, BCM_SYSPORT_STAT_MIB_RX)
 #define STAT_MIB_TX(str, m) STAT_MIB(str, m, BCM_SYSPORT_STAT_MIB_TX)
 #define STAT_RUNT(str, m) STAT_MIB(str, m, BCM_SYSPORT_STAT_RUNT)
+#define STAT_MIB_SOFT(str, m) STAT_MIB(str, m, BCM_SYSPORT_STAT_SOFT)

 #define STAT_RXCHK(str, m, ofs) { \
 	.stat_string = str, \
+92 -30
drivers/net/ethernet/broadcom/genet/bcmgenet.c
···
 	BCMGENET_STAT_MIB_TX,
 	BCMGENET_STAT_RUNT,
 	BCMGENET_STAT_MISC,
+	BCMGENET_STAT_SOFT,
 };

 struct bcmgenet_stats {
···
 #define STAT_GENET_MIB_RX(str, m) STAT_GENET_MIB(str, m, BCMGENET_STAT_MIB_RX)
 #define STAT_GENET_MIB_TX(str, m) STAT_GENET_MIB(str, m, BCMGENET_STAT_MIB_TX)
 #define STAT_GENET_RUNT(str, m) STAT_GENET_MIB(str, m, BCMGENET_STAT_RUNT)
+#define STAT_GENET_SOFT_MIB(str, m) STAT_GENET_MIB(str, m, BCMGENET_STAT_SOFT)

 #define STAT_GENET_MISC(str, m, offset) { \
 	.stat_string = str, \
···
 			UMAC_RBUF_OVFL_CNT),
 	STAT_GENET_MISC("rbuf_err_cnt", mib.rbuf_err_cnt, UMAC_RBUF_ERR_CNT),
 	STAT_GENET_MISC("mdf_err_cnt", mib.mdf_err_cnt, UMAC_MDF_ERR_CNT),
-	STAT_GENET_MIB_RX("alloc_rx_buff_failed", mib.alloc_rx_buff_failed),
-	STAT_GENET_MIB_RX("rx_dma_failed", mib.rx_dma_failed),
-	STAT_GENET_MIB_TX("tx_dma_failed", mib.tx_dma_failed),
+	STAT_GENET_SOFT_MIB("alloc_rx_buff_failed", mib.alloc_rx_buff_failed),
+	STAT_GENET_SOFT_MIB("rx_dma_failed", mib.rx_dma_failed),
+	STAT_GENET_SOFT_MIB("tx_dma_failed", mib.tx_dma_failed),
 };

 #define BCMGENET_STATS_LEN	ARRAY_SIZE(bcmgenet_gstrings_stats)
···
 		s = &bcmgenet_gstrings_stats[i];
 		switch (s->type) {
 		case BCMGENET_STAT_NETDEV:
+		case BCMGENET_STAT_SOFT:
 			continue;
 		case BCMGENET_STAT_MIB_RX:
 		case BCMGENET_STAT_MIB_TX:
···
 }

 /* Unlocked version of the reclaim routine */
-static void __bcmgenet_tx_reclaim(struct net_device *dev,
-				  struct bcmgenet_tx_ring *ring)
+static unsigned int __bcmgenet_tx_reclaim(struct net_device *dev,
+					  struct bcmgenet_tx_ring *ring)
 {
 	struct bcmgenet_priv *priv = netdev_priv(dev);
 	int last_tx_cn, last_c_index, num_tx_bds;
 	struct enet_cb *tx_cb_ptr;
 	struct netdev_queue *txq;
+	unsigned int pkts_compl = 0;
 	unsigned int bds_compl;
 	unsigned int c_index;
···
 		tx_cb_ptr = ring->cbs + last_c_index;
 		bds_compl = 0;
 		if (tx_cb_ptr->skb) {
+			pkts_compl++;
 			bds_compl = skb_shinfo(tx_cb_ptr->skb)->nr_frags + 1;
 			dev->stats.tx_bytes += tx_cb_ptr->skb->len;
 			dma_unmap_single(&dev->dev,
···
 		last_c_index &= (num_tx_bds - 1);
 	}

-	if (ring->free_bds > (MAX_SKB_FRAGS + 1))
-		ring->int_disable(priv, ring);
-
-	if (netif_tx_queue_stopped(txq))
-		netif_tx_wake_queue(txq);
+	if (ring->free_bds > (MAX_SKB_FRAGS + 1)) {
+		if (netif_tx_queue_stopped(txq))
+			netif_tx_wake_queue(txq);
+	}

 	ring->c_index = c_index;
+
+	return pkts_compl;
 }

-static void bcmgenet_tx_reclaim(struct net_device *dev,
-				struct bcmgenet_tx_ring *ring)
+static unsigned int bcmgenet_tx_reclaim(struct net_device *dev,
+					struct bcmgenet_tx_ring *ring)
 {
+	unsigned int released;
 	unsigned long flags;

 	spin_lock_irqsave(&ring->lock, flags);
-	__bcmgenet_tx_reclaim(dev, ring);
+	released = __bcmgenet_tx_reclaim(dev, ring);
 	spin_unlock_irqrestore(&ring->lock, flags);
+
+	return released;
+}
+
+static int bcmgenet_tx_poll(struct napi_struct *napi, int budget)
+{
+	struct bcmgenet_tx_ring *ring =
+		container_of(napi, struct bcmgenet_tx_ring, napi);
+	unsigned int work_done = 0;
+
+	work_done = bcmgenet_tx_reclaim(ring->priv->dev, ring);
+
+	if (work_done == 0) {
+		napi_complete(napi);
+		ring->int_enable(ring->priv, ring);
+
+		return 0;
+	}
+
+	return budget;
 }

 static void bcmgenet_tx_reclaim_all(struct net_device *dev)
···
 	bcmgenet_tdma_ring_writel(priv, ring->index,
 				  ring->prod_index, TDMA_PROD_INDEX);

-	if (ring->free_bds <= (MAX_SKB_FRAGS + 1)) {
+	if (ring->free_bds <= (MAX_SKB_FRAGS + 1))
 		netif_tx_stop_queue(txq);
-		ring->int_enable(priv, ring);
-	}

 out:
 	spin_unlock_irqrestore(&ring->lock, flags);
···
 	struct device *kdev = &priv->pdev->dev;
 	int ret;
 	u32 reg, cpu_mask_clear;
+	int index;

 	dev_dbg(&priv->pdev->dev, "bcmgenet: init_umac\n");
···
 	bcmgenet_intr_disable(priv);

-	cpu_mask_clear = UMAC_IRQ_RXDMA_BDONE;
+	cpu_mask_clear = UMAC_IRQ_RXDMA_BDONE | UMAC_IRQ_TXDMA_BDONE;

 	dev_dbg(kdev, "%s:Enabling RXDMA_BDONE interrupt\n", __func__);
···
 	bcmgenet_intrl2_0_writel(priv, cpu_mask_clear, INTRL2_CPU_MASK_CLEAR);

+	for (index = 0; index < priv->hw_params->tx_queues; index++)
+		bcmgenet_intrl2_1_writel(priv, (1 << index),
+					 INTRL2_CPU_MASK_CLEAR);
+
 	/* Enable rx/tx engine.*/
 	dev_dbg(kdev, "done init umac\n");
···
 	unsigned int first_bd;

 	spin_lock_init(&ring->lock);
+	ring->priv = priv;
+	netif_napi_add(priv->dev, &ring->napi, bcmgenet_tx_poll, 64);
 	ring->index = index;
 	if (index == DESC_INDEX) {
 		ring->queue = 0;
···
 				  TDMA_WRITE_PTR);
 	bcmgenet_tdma_ring_writel(priv, index, end_ptr * words_per_bd - 1,
 				  DMA_END_ADDR);
+
+	napi_enable(&ring->napi);
+}
+
+static void bcmgenet_fini_tx_ring(struct bcmgenet_priv *priv,
+				  unsigned int index)
+{
+	struct bcmgenet_tx_ring *ring = &priv->tx_rings[index];
+
+	napi_disable(&ring->napi);
+	netif_napi_del(&ring->napi);
 }

 /* Initialize a RDMA ring */
···
 	return ret;
 }

-static void bcmgenet_fini_dma(struct bcmgenet_priv *priv)
+static void __bcmgenet_fini_dma(struct bcmgenet_priv *priv)
 {
 	int i;
···
 	bcmgenet_free_rx_buffers(priv);
 	kfree(priv->rx_cbs);
 	kfree(priv->tx_cbs);
+}
+
+static void bcmgenet_fini_dma(struct bcmgenet_priv *priv)
+{
+	int i;
+
+	bcmgenet_fini_tx_ring(priv, DESC_INDEX);
+
+	for (i = 0; i < priv->hw_params->tx_queues; i++)
+		bcmgenet_fini_tx_ring(priv, i);
+
+	__bcmgenet_fini_dma(priv);
 }

 /* init_edma: Initialize DMA control register */
···
 	priv->tx_cbs = kcalloc(priv->num_tx_bds, sizeof(struct enet_cb),
 			       GFP_KERNEL);
 	if (!priv->tx_cbs) {
-		bcmgenet_fini_dma(priv);
+		__bcmgenet_fini_dma(priv);
 		return -ENOMEM;
 	}
···
 	struct bcmgenet_priv *priv = container_of(napi,
 			struct bcmgenet_priv, napi);
 	unsigned int work_done;
-
-	/* tx reclaim */
-	bcmgenet_tx_reclaim(priv->dev, &priv->tx_rings[DESC_INDEX]);

 	work_done = bcmgenet_desc_rx(priv, budget);
···
 static irqreturn_t bcmgenet_isr1(int irq, void *dev_id)
 {
 	struct bcmgenet_priv *priv = dev_id;
+	struct bcmgenet_tx_ring *ring;
 	unsigned int index;

 	/* Save irq status for bottom-half processing. */
 	priv->irq1_stat =
 		bcmgenet_intrl2_1_readl(priv, INTRL2_CPU_STAT) &
-		~priv->int1_mask;
+		~bcmgenet_intrl2_1_readl(priv, INTRL2_CPU_MASK_STATUS);
 	/* clear interrupts */
 	bcmgenet_intrl2_1_writel(priv, priv->irq1_stat, INTRL2_CPU_CLEAR);

 	netif_dbg(priv, intr, priv->dev,
 		  "%s: IRQ=0x%x\n", __func__, priv->irq1_stat);
+
 	/* Check the MBDONE interrupts.
 	 * packet is done, reclaim descriptors
 	 */
-	if (priv->irq1_stat & 0x0000ffff) {
-		index = 0;
-		for (index = 0; index < 16; index++) {
-			if (priv->irq1_stat & (1 << index))
-				bcmgenet_tx_reclaim(priv->dev,
-						    &priv->tx_rings[index]);
+	for (index = 0; index < priv->hw_params->tx_queues; index++) {
+		if (!(priv->irq1_stat & BIT(index)))
+			continue;
+
+		ring = &priv->tx_rings[index];
+
+		if (likely(napi_schedule_prep(&ring->napi))) {
+			ring->int_disable(priv, ring);
+			__napi_schedule(&ring->napi);
 		}
 	}
+
 	return IRQ_HANDLED;
 }
···
 	}
 	if (priv->irq0_stat &
 	    (UMAC_IRQ_TXDMA_BDONE | UMAC_IRQ_TXDMA_PDONE)) {
-		/* Tx reclaim */
-		bcmgenet_tx_reclaim(priv->dev, &priv->tx_rings[DESC_INDEX]);
+		struct bcmgenet_tx_ring *ring = &priv->tx_rings[DESC_INDEX];
+
+		if (likely(napi_schedule_prep(&ring->napi))) {
+			ring->int_disable(priv, ring);
+			__napi_schedule(&ring->napi);
+		}
 	}
 	if (priv->irq0_stat & (UMAC_IRQ_PHY_DET_R |
 			       UMAC_IRQ_PHY_DET_F |
+2
drivers/net/ethernet/broadcom/genet/bcmgenet.h
···
 struct bcmgenet_tx_ring {
 	spinlock_t	lock;		/* ring lock */
+	struct napi_struct napi;	/* NAPI per tx queue */
 	unsigned int	index;		/* ring index */
 	unsigned int	queue;		/* queue index */
 	struct enet_cb	*cbs;		/* tx ring buffer control block*/
···
 			   struct bcmgenet_tx_ring *);
 	void (*int_disable)(struct bcmgenet_priv *priv,
 			    struct bcmgenet_tx_ring *);
+	struct bcmgenet_priv *priv;
 };
 
 /* device context */
+29 -28
drivers/net/ethernet/chelsio/cxgb4/clip_tbl.c
···
 }
 
 static unsigned int clip_addr_hash(struct clip_tbl *ctbl, const u32 *addr,
-				   int addr_len)
+				   u8 v6)
 {
-	return addr_len == 4 ? ipv4_clip_hash(ctbl, addr) :
-			ipv6_clip_hash(ctbl, addr);
+	return v6 ? ipv6_clip_hash(ctbl, addr) :
+			ipv4_clip_hash(ctbl, addr);
 }
 
 static int clip6_get_mbox(const struct net_device *dev,
···
 	struct clip_entry *ce, *cte;
 	u32 *addr = (u32 *)lip;
 	int hash;
-	int addr_len;
-	int ret = 0;
+	int ret = -1;
 
 	if (!ctbl)
 		return 0;
 
-	if (v6)
-		addr_len = 16;
-	else
-		addr_len = 4;
-
-	hash = clip_addr_hash(ctbl, addr, addr_len);
+	hash = clip_addr_hash(ctbl, addr, v6);
 
 	read_lock_bh(&ctbl->lock);
 	list_for_each_entry(cte, &ctbl->hash_list[hash], list) {
-		if (addr_len == cte->addr_len &&
-		    memcmp(lip, cte->addr, cte->addr_len) == 0) {
+		if (cte->addr6.sin6_family == AF_INET6 && v6)
+			ret = memcmp(lip, cte->addr6.sin6_addr.s6_addr,
+				     sizeof(struct in6_addr));
+		else if (cte->addr.sin_family == AF_INET && !v6)
+			ret = memcmp(lip, (char *)(&cte->addr.sin_addr),
+				     sizeof(struct in_addr));
+		if (!ret) {
 			ce = cte;
 			read_unlock_bh(&ctbl->lock);
 			goto found;
···
 		spin_lock_init(&ce->lock);
 		atomic_set(&ce->refcnt, 0);
 		atomic_dec(&ctbl->nfree);
-		ce->addr_len = addr_len;
-		memcpy(ce->addr, lip, addr_len);
 		list_add_tail(&ce->list, &ctbl->hash_list[hash]);
 		if (v6) {
+			ce->addr6.sin6_family = AF_INET6;
+			memcpy(ce->addr6.sin6_addr.s6_addr,
+			       lip, sizeof(struct in6_addr));
 			ret = clip6_get_mbox(dev, (const struct in6_addr *)lip);
 			if (ret) {
 				write_unlock_bh(&ctbl->lock);
 				return ret;
 			}
+		} else {
+			ce->addr.sin_family = AF_INET;
+			memcpy((char *)(&ce->addr.sin_addr), lip,
+			       sizeof(struct in_addr));
 		}
 	} else {
 		write_unlock_bh(&ctbl->lock);
···
 	struct clip_entry *ce, *cte;
 	u32 *addr = (u32 *)lip;
 	int hash;
-	int addr_len;
+	int ret = -1;
 
-	if (v6)
-		addr_len = 16;
-	else
-		addr_len = 4;
-
-	hash = clip_addr_hash(ctbl, addr, addr_len);
+	hash = clip_addr_hash(ctbl, addr, v6);
 
 	read_lock_bh(&ctbl->lock);
 	list_for_each_entry(cte, &ctbl->hash_list[hash], list) {
-		if (addr_len == cte->addr_len &&
-		    memcmp(lip, cte->addr, cte->addr_len) == 0) {
+		if (cte->addr6.sin6_family == AF_INET6 && v6)
+			ret = memcmp(lip, cte->addr6.sin6_addr.s6_addr,
+				     sizeof(struct in6_addr));
+		else if (cte->addr.sin_family == AF_INET && !v6)
+			ret = memcmp(lip, (char *)(&cte->addr.sin_addr),
+				     sizeof(struct in_addr));
+		if (!ret) {
 			ce = cte;
 			read_unlock_bh(&ctbl->lock);
 			goto found;
···
 	for (i = 0 ; i < ctbl->clipt_size; ++i) {
 		list_for_each_entry(ce, &ctbl->hash_list[i], list) {
 			ip[0] = '\0';
-			if (ce->addr_len == 16)
-				sprintf(ip, "%pI6c", ce->addr);
-			else
-				sprintf(ip, "%pI4c", ce->addr);
+			sprintf(ip, "%pISc", &ce->addr);
 			seq_printf(seq, "%-25s   %u\n", ip,
 				   atomic_read(&ce->refcnt));
 		}
+4 -2
drivers/net/ethernet/chelsio/cxgb4/clip_tbl.h
···
 	spinlock_t lock;	/* Hold while modifying clip reference */
 	atomic_t refcnt;
 	struct list_head list;
-	u32 addr[4];
-	int addr_len;
+	union {
+		struct sockaddr_in addr;
+		struct sockaddr_in6 addr6;
+	};
 };
 
 struct clip_tbl {
+1 -1
drivers/net/ethernet/chelsio/cxgb4/cxgb4.h
···
 #define T4_MEMORY_WRITE	0
 #define T4_MEMORY_READ	1
 int t4_memory_rw(struct adapter *adap, int win, int mtype, u32 addr, u32 len,
-		 __be32 *buf, int dir);
+		 void *buf, int dir);
 static inline int t4_memory_write(struct adapter *adap, int mtype, u32 addr,
 				  u32 len, __be32 *buf)
 {
+44 -10
drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
···
  *	@mtype: memory type: MEM_EDC0, MEM_EDC1 or MEM_MC
  *	@addr: address within indicated memory type
  *	@len: amount of memory to transfer
- *	@buf: host memory buffer
+ *	@hbuf: host memory buffer
  *	@dir: direction of transfer T4_MEMORY_READ (1) or T4_MEMORY_WRITE (0)
  *
  *	Reads/writes an [almost] arbitrary memory region in the firmware: the
···
  *	caller's responsibility to perform appropriate byte order conversions.
  */
 int t4_memory_rw(struct adapter *adap, int win, int mtype, u32 addr,
-		 u32 len, __be32 *buf, int dir)
+		 u32 len, void *hbuf, int dir)
 {
 	u32 pos, offset, resid, memoffset;
 	u32 edc_size, mc_size, win_pf, mem_reg, mem_aperture, mem_base;
+	u32 *buf;
 
 	/* Argument sanity checks ...
 	 */
-	if (addr & 0x3)
+	if (addr & 0x3 || (uintptr_t)hbuf & 0x3)
 		return -EINVAL;
+	buf = (u32 *)hbuf;
 
 	/* It's convenient to be able to handle lengths which aren't a
 	 * multiple of 32-bits because we often end up transferring files to
···
 	/* Transfer data to/from the adapter as long as there's an integral
 	 * number of 32-bit transfers to complete.
+	 *
+	 * A note on Endianness issues:
+	 *
+	 * The "register" reads and writes below from/to the PCI-E Memory
+	 * Window invoke the standard adapter Big-Endian to PCI-E Link
+	 * Little-Endian "swizzel."  As a result, if we have the following
+	 * data in adapter memory:
+	 *
+	 *     Memory:  ... | b0 | b1 | b2 | b3 | ...
+	 *     Address:      i+0  i+1  i+2  i+3
+	 *
+	 * Then a read of the adapter memory via the PCI-E Memory Window
+	 * will yield:
+	 *
+	 *     x = readl(i)
+	 *     31                  0
+	 *     [ b3 | b2 | b1 | b0 ]
+	 *
+	 * If this value is stored into local memory on a Little-Endian system
+	 * it will show up correctly in local memory as:
+	 *
+	 *     ( ..., b0, b1, b2, b3, ... )
+	 *
+	 * But on a Big-Endian system, the store will show up in memory
+	 * incorrectly swizzled as:
+	 *
+	 *     ( ..., b3, b2, b1, b0, ... )
+	 *
+	 * So we need to account for this in the reads and writes to the
+	 * PCI-E Memory Window below by undoing the register read/write
+	 * swizzels.
 	 */
 	while (len > 0) {
 		if (dir == T4_MEMORY_READ)
-			*buf++ = (__force __be32) t4_read_reg(adap,
-							      mem_base + offset);
+			*buf++ = le32_to_cpu((__force __le32)t4_read_reg(adap,
+						mem_base + offset));
 		else
 			t4_write_reg(adap, mem_base + offset,
-				     (__force u32) *buf++);
+				     (__force u32)cpu_to_le32(*buf++));
 		offset += sizeof(__be32);
 		len -= sizeof(__be32);
···
 	 */
 	if (resid) {
 		union {
-			__be32 word;
+			u32 word;
 			char byte[4];
 		} last;
 		unsigned char *bp;
 		int i;
 
 		if (dir == T4_MEMORY_READ) {
-			last.word = (__force __be32) t4_read_reg(adap,
-								 mem_base + offset);
+			last.word = le32_to_cpu(
+					(__force __le32)t4_read_reg(adap,
+						mem_base + offset));
 			for (bp = (unsigned char *)buf, i = resid; i < 4; i++)
 				bp[i] = last.byte[i];
 		} else {
···
 			for (i = resid; i < 4; i++)
 				last.byte[i] = 0;
 			t4_write_reg(adap, mem_base + offset,
-				     (__force u32) last.word);
+				     (__force u32)cpu_to_le32(last.word));
 		}
 	}
+2 -2
drivers/net/ethernet/cisco/enic/enic_main.c
···
 	}
 
 	if (ENIC_TEST_INTR(pba, notify_intr)) {
-		vnic_intr_return_all_credits(&enic->intr[notify_intr]);
 		enic_notify_check(enic);
+		vnic_intr_return_all_credits(&enic->intr[notify_intr]);
 	}
 
 	if (ENIC_TEST_INTR(pba, err_intr)) {
···
 	struct enic *enic = data;
 	unsigned int intr = enic_msix_notify_intr(enic);
 
-	vnic_intr_return_all_credits(&enic->intr[intr]);
 	enic_notify_check(enic);
+	vnic_intr_return_all_credits(&enic->intr[intr]);
 
 	return IRQ_HANDLED;
 }
+2 -2
drivers/net/ethernet/freescale/gianfar.c
···
 	struct phy_device *phydev = priv->phydev;
 
 	if (unlikely(phydev->link != priv->oldlink ||
-		     phydev->duplex != priv->oldduplex ||
-		     phydev->speed != priv->oldspeed))
+		     (phydev->link && (phydev->duplex != priv->oldduplex ||
+				       phydev->speed != priv->oldspeed))))
 		gfar_update_link_state(priv);
 }
+141 -105
drivers/net/ethernet/ibm/ehea/ehea_main.c
···
 	device_remove_file(&dev->dev, &dev_attr_remove_port);
 }
 
+static int ehea_reboot_notifier(struct notifier_block *nb,
+				unsigned long action, void *unused)
+{
+	if (action == SYS_RESTART) {
+		pr_info("Reboot: freeing all eHEA resources\n");
+		ibmebus_unregister_driver(&ehea_driver);
+	}
+	return NOTIFY_DONE;
+}
+
+static struct notifier_block ehea_reboot_nb = {
+	.notifier_call = ehea_reboot_notifier,
+};
+
+static int ehea_mem_notifier(struct notifier_block *nb,
+			     unsigned long action, void *data)
+{
+	int ret = NOTIFY_BAD;
+	struct memory_notify *arg = data;
+
+	mutex_lock(&dlpar_mem_lock);
+
+	switch (action) {
+	case MEM_CANCEL_OFFLINE:
+		pr_info("memory offlining canceled");
+		/* Fall through: re-add canceled memory block */
+
+	case MEM_ONLINE:
+		pr_info("memory is going online");
+		set_bit(__EHEA_STOP_XFER, &ehea_driver_flags);
+		if (ehea_add_sect_bmap(arg->start_pfn, arg->nr_pages))
+			goto out_unlock;
+		ehea_rereg_mrs();
+		break;
+
+	case MEM_GOING_OFFLINE:
+		pr_info("memory is going offline");
+		set_bit(__EHEA_STOP_XFER, &ehea_driver_flags);
+		if (ehea_rem_sect_bmap(arg->start_pfn, arg->nr_pages))
+			goto out_unlock;
+		ehea_rereg_mrs();
+		break;
+
+	default:
+		break;
+	}
+
+	ehea_update_firmware_handles();
+	ret = NOTIFY_OK;
+
+out_unlock:
+	mutex_unlock(&dlpar_mem_lock);
+	return ret;
+}
+
+static struct notifier_block ehea_mem_nb = {
+	.notifier_call = ehea_mem_notifier,
+};
+
+static void ehea_crash_handler(void)
+{
+	int i;
+
+	if (ehea_fw_handles.arr)
+		for (i = 0; i < ehea_fw_handles.num_entries; i++)
+			ehea_h_free_resource(ehea_fw_handles.arr[i].adh,
+					     ehea_fw_handles.arr[i].fwh,
+					     FORCE_FREE);
+
+	if (ehea_bcmc_regs.arr)
+		for (i = 0; i < ehea_bcmc_regs.num_entries; i++)
+			ehea_h_reg_dereg_bcmc(ehea_bcmc_regs.arr[i].adh,
+					      ehea_bcmc_regs.arr[i].port_id,
+					      ehea_bcmc_regs.arr[i].reg_type,
+					      ehea_bcmc_regs.arr[i].macaddr,
+					      0, H_DEREG_BCMC);
+}
+
+static atomic_t ehea_memory_hooks_registered;
+
+/* Register memory hooks on probe of first adapter */
+static int ehea_register_memory_hooks(void)
+{
+	int ret = 0;
+
+	if (atomic_inc_and_test(&ehea_memory_hooks_registered))
+		return 0;
+
+	ret = ehea_create_busmap();
+	if (ret) {
+		pr_info("ehea_create_busmap failed\n");
+		goto out;
+	}
+
+	ret = register_reboot_notifier(&ehea_reboot_nb);
+	if (ret) {
+		pr_info("register_reboot_notifier failed\n");
+		goto out;
+	}
+
+	ret = register_memory_notifier(&ehea_mem_nb);
+	if (ret) {
+		pr_info("register_memory_notifier failed\n");
+		goto out2;
+	}
+
+	ret = crash_shutdown_register(ehea_crash_handler);
+	if (ret) {
+		pr_info("crash_shutdown_register failed\n");
+		goto out3;
+	}
+
+	return 0;
+
+out3:
+	unregister_memory_notifier(&ehea_mem_nb);
+out2:
+	unregister_reboot_notifier(&ehea_reboot_nb);
+out:
+	return ret;
+}
+
+static void ehea_unregister_memory_hooks(void)
+{
+	if (atomic_read(&ehea_memory_hooks_registered))
+		return;
+
+	unregister_reboot_notifier(&ehea_reboot_nb);
+	if (crash_shutdown_unregister(ehea_crash_handler))
+		pr_info("failed unregistering crash handler\n");
+	unregister_memory_notifier(&ehea_mem_nb);
+}
+
 static int ehea_probe_adapter(struct platform_device *dev)
 {
 	struct ehea_adapter *adapter;
 	const u64 *adapter_handle;
 	int ret;
 	int i;
+
+	ret = ehea_register_memory_hooks();
+	if (ret)
+		return ret;
 
 	if (!dev || !dev->dev.of_node) {
 		pr_err("Invalid ibmebus device probed\n");
···
 	return 0;
 }
 
-static void ehea_crash_handler(void)
-{
-	int i;
-
-	if (ehea_fw_handles.arr)
-		for (i = 0; i < ehea_fw_handles.num_entries; i++)
-			ehea_h_free_resource(ehea_fw_handles.arr[i].adh,
-					     ehea_fw_handles.arr[i].fwh,
-					     FORCE_FREE);
-
-	if (ehea_bcmc_regs.arr)
-		for (i = 0; i < ehea_bcmc_regs.num_entries; i++)
-			ehea_h_reg_dereg_bcmc(ehea_bcmc_regs.arr[i].adh,
-					      ehea_bcmc_regs.arr[i].port_id,
-					      ehea_bcmc_regs.arr[i].reg_type,
-					      ehea_bcmc_regs.arr[i].macaddr,
-					      0, H_DEREG_BCMC);
-}
-
-static int ehea_mem_notifier(struct notifier_block *nb,
-			     unsigned long action, void *data)
-{
-	int ret = NOTIFY_BAD;
-	struct memory_notify *arg = data;
-
-	mutex_lock(&dlpar_mem_lock);
-
-	switch (action) {
-	case MEM_CANCEL_OFFLINE:
-		pr_info("memory offlining canceled");
-		/* Readd canceled memory block */
-	case MEM_ONLINE:
-		pr_info("memory is going online");
-		set_bit(__EHEA_STOP_XFER, &ehea_driver_flags);
-		if (ehea_add_sect_bmap(arg->start_pfn, arg->nr_pages))
-			goto out_unlock;
-		ehea_rereg_mrs();
-		break;
-	case MEM_GOING_OFFLINE:
-		pr_info("memory is going offline");
-		set_bit(__EHEA_STOP_XFER, &ehea_driver_flags);
-		if (ehea_rem_sect_bmap(arg->start_pfn, arg->nr_pages))
-			goto out_unlock;
-		ehea_rereg_mrs();
-		break;
-	default:
-		break;
-	}
-
-	ehea_update_firmware_handles();
-	ret = NOTIFY_OK;
-
-out_unlock:
-	mutex_unlock(&dlpar_mem_lock);
-	return ret;
-}
-
-static struct notifier_block ehea_mem_nb = {
-	.notifier_call = ehea_mem_notifier,
-};
-
-static int ehea_reboot_notifier(struct notifier_block *nb,
-				unsigned long action, void *unused)
-{
-	if (action == SYS_RESTART) {
-		pr_info("Reboot: freeing all eHEA resources\n");
-		ibmebus_unregister_driver(&ehea_driver);
-	}
-	return NOTIFY_DONE;
-}
-
-static struct notifier_block ehea_reboot_nb = {
-	.notifier_call = ehea_reboot_notifier,
-};
-
 static int check_module_parm(void)
 {
 	int ret = 0;
···
 	if (ret)
 		goto out;
 
-	ret = ehea_create_busmap();
-	if (ret)
-		goto out;
-
-	ret = register_reboot_notifier(&ehea_reboot_nb);
-	if (ret)
-		pr_info("failed registering reboot notifier\n");
-
-	ret = register_memory_notifier(&ehea_mem_nb);
-	if (ret)
-		pr_info("failed registering memory remove notifier\n");
-
-	ret = crash_shutdown_register(ehea_crash_handler);
-	if (ret)
-		pr_info("failed registering crash handler\n");
-
 	ret = ibmebus_register_driver(&ehea_driver);
 	if (ret) {
 		pr_err("failed registering eHEA device driver on ebus\n");
-		goto out2;
+		goto out;
 	}
 
 	ret = driver_create_file(&ehea_driver.driver,
···
 	if (ret) {
 		pr_err("failed to register capabilities attribute, ret=%d\n",
 		       ret);
-		goto out3;
+		goto out2;
 	}
 
 	return ret;
 
-out3:
-	ibmebus_unregister_driver(&ehea_driver);
 out2:
-	unregister_memory_notifier(&ehea_mem_nb);
-	unregister_reboot_notifier(&ehea_reboot_nb);
-	crash_shutdown_unregister(ehea_crash_handler);
+	ibmebus_unregister_driver(&ehea_driver);
 out:
 	return ret;
 }
 
 static void __exit ehea_module_exit(void)
 {
-	int ret;
-
 	driver_remove_file(&ehea_driver.driver, &driver_attr_capabilities);
 	ibmebus_unregister_driver(&ehea_driver);
-	unregister_reboot_notifier(&ehea_reboot_nb);
-	ret = crash_shutdown_unregister(ehea_crash_handler);
-	if (ret)
-		pr_info("failed unregistering crash handler\n");
-	unregister_memory_notifier(&ehea_mem_nb);
+	ehea_unregister_memory_hooks();
 	kfree(ehea_fw_handles.arr);
 	kfree(ehea_bcmc_regs.arr);
 	ehea_destroy_busmap();
+23 -1
drivers/net/ethernet/ibm/ibmveth.c
···
 	return ret;
 }
 
+static int ibmveth_set_mac_addr(struct net_device *dev, void *p)
+{
+	struct ibmveth_adapter *adapter = netdev_priv(dev);
+	struct sockaddr *addr = p;
+	u64 mac_address;
+	int rc;
+
+	if (!is_valid_ether_addr(addr->sa_data))
+		return -EADDRNOTAVAIL;
+
+	mac_address = ibmveth_encode_mac_addr(addr->sa_data);
+	rc = h_change_logical_lan_mac(adapter->vdev->unit_address, mac_address);
+	if (rc) {
+		netdev_err(adapter->netdev, "h_change_logical_lan_mac failed with rc=%d\n", rc);
+		return rc;
+	}
+
+	ether_addr_copy(dev->dev_addr, addr->sa_data);
+
+	return 0;
+}
+
 static const struct net_device_ops ibmveth_netdev_ops = {
 	.ndo_open		= ibmveth_open,
 	.ndo_stop		= ibmveth_close,
···
 	.ndo_fix_features	= ibmveth_fix_features,
 	.ndo_set_features	= ibmveth_set_features,
 	.ndo_validate_addr	= eth_validate_addr,
-	.ndo_set_mac_address	= eth_mac_addr,
+	.ndo_set_mac_address	= ibmveth_set_mac_addr,
 #ifdef CONFIG_NET_POLL_CONTROLLER
 	.ndo_poll_controller	= ibmveth_poll_controller,
 #endif
+4 -3
drivers/net/ethernet/intel/i40e/i40e_common.c
···
 	 * The grst delay value is in 100ms units, and we'll wait a
 	 * couple counts longer to be sure we don't just miss the end.
 	 */
-	grst_del = rd32(hw, I40E_GLGEN_RSTCTL) & I40E_GLGEN_RSTCTL_GRSTDEL_MASK
-			>> I40E_GLGEN_RSTCTL_GRSTDEL_SHIFT;
+	grst_del = (rd32(hw, I40E_GLGEN_RSTCTL) &
+		    I40E_GLGEN_RSTCTL_GRSTDEL_MASK) >>
+		    I40E_GLGEN_RSTCTL_GRSTDEL_SHIFT;
 	for (cnt = 0; cnt < grst_del + 2; cnt++) {
 		reg = rd32(hw, I40E_GLGEN_RSTAT);
 		if (!(reg & I40E_GLGEN_RSTAT_DEVSTATE_MASK))
···
 	status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
 
-	if (!status)
+	if (!status && filter_index)
 		*filter_index = resp->index;
 
 	return status;
+1 -1
drivers/net/ethernet/intel/i40e/i40e_dcb_nl.c
···
 	u32 val;
 
 	val = rd32(hw, I40E_PRTDCB_GENC);
-	*delay = (u16)(val & I40E_PRTDCB_GENC_PFCLDA_MASK >>
+	*delay = (u16)((val & I40E_PRTDCB_GENC_PFCLDA_MASK) >>
 		       I40E_PRTDCB_GENC_PFCLDA_SHIFT);
 }
+3 -1
drivers/net/ethernet/intel/i40e/i40e_debugfs.c
···
 	if (!cmd_buf)
 		return count;
 	bytes_not_copied = copy_from_user(cmd_buf, buffer, count);
-	if (bytes_not_copied < 0)
+	if (bytes_not_copied < 0) {
+		kfree(cmd_buf);
 		return bytes_not_copied;
+	}
 	if (bytes_not_copied > 0)
 		count -= bytes_not_copied;
 	cmd_buf[count] = '\0';
+33 -11
drivers/net/ethernet/intel/i40e/i40e_main.c
···
 	vsi->tc_config.numtc = numtc;
 	vsi->tc_config.enabled_tc = enabled_tc ? enabled_tc : 1;
 	/* Number of queues per enabled TC */
-	num_tc_qps = vsi->alloc_queue_pairs/numtc;
+	/* In MFP case we can have a much lower count of MSIx
+	 * vectors available and so we need to lower the used
+	 * q count.
+	 */
+	qcount = min_t(int, vsi->alloc_queue_pairs, pf->num_lan_msix);
+	num_tc_qps = qcount / numtc;
 	num_tc_qps = min_t(int, num_tc_qps, I40E_MAX_QUEUES_PER_TC);
 
 	/* Setup queue offset/count for all TCs for given VSI */
···
 	u16 qoffset, qcount;
 	int i, n;
 
-	if (!(vsi->back->flags & I40E_FLAG_DCB_ENABLED))
-		return;
+	if (!(vsi->back->flags & I40E_FLAG_DCB_ENABLED)) {
+		/* Reset the TC information */
+		for (i = 0; i < vsi->num_queue_pairs; i++) {
+			rx_ring = vsi->rx_rings[i];
+			tx_ring = vsi->tx_rings[i];
+			rx_ring->dcb_tc = 0;
+			tx_ring->dcb_tc = 0;
+		}
+	}
 
 	for (n = 0; n < I40E_MAX_TRAFFIC_CLASS; n++) {
 		if (!(vsi->tc_config.enabled_tc & (1 << n)))
···
 static void i40e_clear_interrupt_scheme(struct i40e_pf *pf)
 {
 	int i;
+
+	i40e_stop_misc_vector(pf);
+	if (pf->flags & I40E_FLAG_MSIX_ENABLED) {
+		synchronize_irq(pf->msix_entries[0].vector);
+		free_irq(pf->msix_entries[0].vector, pf);
+	}
 
 	i40e_put_lump(pf->irq_pile, 0, I40E_PILE_VALID_BIT-1);
 	for (i = 0; i < pf->num_alloc_vsi; i++)
···
 	/* Wait for the PF's Tx queues to be disabled */
 	ret = i40e_pf_wait_txq_disabled(pf);
-	if (!ret)
+	if (ret) {
+		/* Schedule PF reset to recover */
+		set_bit(__I40E_PF_RESET_REQUESTED, &pf->state);
+		i40e_service_event_schedule(pf);
+	} else {
 		i40e_pf_unquiesce_all_vsi(pf);
+	}
+
 exit:
 	return ret;
 }
···
 	int i, v;
 
 	/* If we're down or resetting, just bail */
-	if (test_bit(__I40E_CONFIG_BUSY, &pf->state))
+	if (test_bit(__I40E_DOWN, &pf->state) ||
+	    test_bit(__I40E_CONFIG_BUSY, &pf->state))
 		return;
 
 	/* for each VSI/netdev
···
 	set_bit(__I40E_DOWN, &pf->state);
 	del_timer_sync(&pf->service_timer);
 	cancel_work_sync(&pf->service_task);
+	i40e_fdir_teardown(pf);
 
 	if (pf->flags & I40E_FLAG_SRIOV_ENABLED) {
 		i40e_free_vfs(pf);
···
 	 */
 	if (pf->vsi[pf->lan_vsi])
 		i40e_vsi_release(pf->vsi[pf->lan_vsi]);
-
-	i40e_stop_misc_vector(pf);
-	if (pf->flags & I40E_FLAG_MSIX_ENABLED) {
-		synchronize_irq(pf->msix_entries[0].vector);
-		free_irq(pf->msix_entries[0].vector, pf);
-	}
 
 	/* shutdown and destroy the HMC */
 	if (pf->hw.hmc.hmc_obj) {
···
 	wr32(hw, I40E_PFPM_APM, (pf->wol_en ? I40E_PFPM_APM_APME_MASK : 0));
 	wr32(hw, I40E_PFPM_WUFC, (pf->wol_en ? I40E_PFPM_WUFC_MAG_MASK : 0));
+
+	i40e_clear_interrupt_scheme(pf);
 
 	if (system_state == SYSTEM_POWER_OFF) {
 		pci_wake_from_d3(pdev, pf->wol_en);
+35
drivers/net/ethernet/intel/i40e/i40e_nvm.c
···
 {
 	i40e_status status;
 	enum i40e_nvmupd_cmd upd_cmd;
+	bool retry_attempt = false;
 
 	upd_cmd = i40e_nvmupd_validate_command(hw, cmd, errno);
 
+retry:
 	switch (upd_cmd) {
 	case I40E_NVMUPD_WRITE_CON:
 		status = i40e_nvmupd_nvm_write(hw, cmd, bytes, errno);
···
 		*errno = -ESRCH;
 		break;
 	}
+
+	/* In some circumstances, a multi-write transaction takes longer
+	 * than the default 3 minute timeout on the write semaphore.  If
+	 * the write failed with an EBUSY status, this is likely the problem,
+	 * so here we try to reacquire the semaphore then retry the write.
+	 * We only do one retry, then give up.
+	 */
+	if (status && (hw->aq.asq_last_status == I40E_AQ_RC_EBUSY) &&
+	    !retry_attempt) {
+		i40e_status old_status = status;
+		u32 old_asq_status = hw->aq.asq_last_status;
+		u32 gtime;
+
+		gtime = rd32(hw, I40E_GLVFGEN_TIMER);
+		if (gtime >= hw->nvm.hw_semaphore_timeout) {
+			i40e_debug(hw, I40E_DEBUG_ALL,
+				   "NVMUPD: write semaphore expired (%d >= %lld), retrying\n",
+				   gtime, hw->nvm.hw_semaphore_timeout);
+			i40e_release_nvm(hw);
+			status = i40e_acquire_nvm(hw, I40E_RESOURCE_WRITE);
+			if (status) {
+				i40e_debug(hw, I40E_DEBUG_ALL,
+					   "NVMUPD: write semaphore reacquire failed aq_err = %d\n",
+					   hw->aq.asq_last_status);
+				status = old_status;
+				hw->aq.asq_last_status = old_asq_status;
+			} else {
+				retry_attempt = true;
+				goto retry;
+			}
+		}
+	}
+
 	return status;
 }
+95 -24
drivers/net/ethernet/intel/i40e/i40e_txrx.c
···
 }
 
 /**
+ * i40e_get_head - Retrieve head from head writeback
+ * @tx_ring:  tx ring to fetch head of
+ *
+ * Returns value of Tx ring head based on value stored
+ * in head write-back location
+ **/
+static inline u32 i40e_get_head(struct i40e_ring *tx_ring)
+{
+	void *head = (struct i40e_tx_desc *)tx_ring->desc + tx_ring->count;
+
+	return le32_to_cpu(*(volatile __le32 *)head);
+}
+
+/**
  * i40e_get_tx_pending - how many tx descriptors not processed
  * @tx_ring: the ring of descriptors
  *
···
  **/
 static u32 i40e_get_tx_pending(struct i40e_ring *ring)
 {
-	u32 ntu = ((ring->next_to_clean <= ring->next_to_use)
-			? ring->next_to_use
-			: ring->next_to_use + ring->count);
-	return ntu - ring->next_to_clean;
+	u32 head, tail;
+
+	head = i40e_get_head(ring);
+	tail = readl(ring->tail);
+
+	if (head != tail)
+		return (head < tail) ?
+			tail - head : (tail + ring->count - head);
+
+	return 0;
 }
 
 /**
···
  **/
 static bool i40e_check_tx_hang(struct i40e_ring *tx_ring)
 {
+	u32 tx_done = tx_ring->stats.packets;
+	u32 tx_done_old = tx_ring->tx_stats.tx_done_old;
 	u32 tx_pending = i40e_get_tx_pending(tx_ring);
 	struct i40e_pf *pf = tx_ring->vsi->back;
 	bool ret = false;
···
 	 * run the check_tx_hang logic with a transmit completion
 	 * pending but without time to complete it yet.
 	 */
-	if ((tx_ring->tx_stats.tx_done_old == tx_ring->stats.packets) &&
-	    (tx_pending >= I40E_MIN_DESC_PENDING)) {
+	if ((tx_done_old == tx_done) && tx_pending) {
 		/* make sure it is true for two checks in a row */
 		ret = test_and_set_bit(__I40E_HANG_CHECK_ARMED,
 				       &tx_ring->state);
-	} else if ((tx_ring->tx_stats.tx_done_old == tx_ring->stats.packets) &&
-		   (tx_pending < I40E_MIN_DESC_PENDING) &&
-		   (tx_pending > 0)) {
+	} else if (tx_done_old == tx_done &&
+		   (tx_pending < I40E_MIN_DESC_PENDING) && (tx_pending > 0)) {
 		if (I40E_DEBUG_FLOW & pf->hw.debug_mask)
 			dev_info(tx_ring->dev, "HW needs some more descs to do a cacheline flush. tx_pending %d, queue %d",
 				 tx_pending, tx_ring->queue_index);
 		pf->tx_sluggish_count++;
 	} else {
 		/* update completed stats and disarm the hang check */
-		tx_ring->tx_stats.tx_done_old = tx_ring->stats.packets;
+		tx_ring->tx_stats.tx_done_old = tx_done;
 		clear_bit(__I40E_HANG_CHECK_ARMED, &tx_ring->state);
 	}
 
 	return ret;
-}
-
-/**
- * i40e_get_head - Retrieve head from head writeback
- * @tx_ring:  tx ring to fetch head of
- *
- * Returns value of Tx ring head based on value stored
- * in head write-back location
- **/
-static inline u32 i40e_get_head(struct i40e_ring *tx_ring)
-{
-	void *head = (struct i40e_tx_desc *)tx_ring->desc + tx_ring->count;
-
-	return le32_to_cpu(*(volatile __le32 *)head);
 }
 
 #define WB_STRIDE 0x3
···
 }
 
 /**
+ * i40e_chk_linearize - Check if there are more than 8 fragments per packet
+ * @skb:      send buffer
+ * @tx_flags: collected send information
+ * @hdr_len:  size of the packet header
+ *
+ * Note: Our HW can't scatter-gather more than 8 fragments to build
+ * a packet on the wire and so we need to figure out the cases where we
+ * need to linearize the skb.
+ **/
+static bool i40e_chk_linearize(struct sk_buff *skb, u32 tx_flags,
+			       const u8 hdr_len)
+{
+	struct skb_frag_struct *frag;
+	bool linearize = false;
+	unsigned int size = 0;
+	u16 num_frags;
+	u16 gso_segs;
+
+	num_frags = skb_shinfo(skb)->nr_frags;
+	gso_segs = skb_shinfo(skb)->gso_segs;
+
+	if (tx_flags & (I40E_TX_FLAGS_TSO | I40E_TX_FLAGS_FSO)) {
+		u16 j = 1;
+
+		if (num_frags < (I40E_MAX_BUFFER_TXD))
+			goto linearize_chk_done;
+		/* try the simple math, if we have too many frags per segment */
+		if (DIV_ROUND_UP((num_frags + gso_segs), gso_segs) >
+		    I40E_MAX_BUFFER_TXD) {
+			linearize = true;
+			goto linearize_chk_done;
+		}
+		frag = &skb_shinfo(skb)->frags[0];
+		size = hdr_len;
+		/* we might still have more fragments per segment */
+		do {
+			size += skb_frag_size(frag);
+			frag++; j++;
+			if (j == I40E_MAX_BUFFER_TXD) {
+				if (size < skb_shinfo(skb)->gso_size) {
+					linearize = true;
+					break;
+				}
+				j = 1;
+				size -= skb_shinfo(skb)->gso_size;
+				if (size)
+					j++;
+				size += hdr_len;
+			}
+			num_frags--;
+		} while (num_frags);
+	} else {
+		if (num_frags >= I40E_MAX_BUFFER_TXD)
+			linearize = true;
+	}
+
+linearize_chk_done:
+	return linearize;
+}
+
+/**
  * i40e_tx_map - Build the Tx descriptor
  * @tx_ring: ring to send buffer on
  * @skb:     send buffer
···
 	if (tsyn)
 		tx_flags |= I40E_TX_FLAGS_TSYN;
+
+	if (i40e_chk_linearize(skb, tx_flags, hdr_len))
+		if (skb_linearize(skb))
+			goto out_drop;
 
 	skb_tx_timestamp(skb);
+1
drivers/net/ethernet/intel/i40e/i40e_txrx.h
··· 112 112 113 113 #define i40e_rx_desc i40e_32byte_rx_desc 114 114 115 + #define I40E_MAX_BUFFER_TXD 8 115 116 #define I40E_MIN_TX_LEN 17 116 117 #define I40E_MAX_DATA_PER_TXD 8192 117 118
+107 -36
drivers/net/ethernet/intel/i40evf/i40e_txrx.c
··· 126 126 } 127 127 128 128 /** 129 + * i40e_get_head - Retrieve head from head writeback 130 + * @tx_ring: tx ring to fetch head of 131 + * 132 + * Returns value of Tx ring head based on value stored 133 + * in head write-back location 134 + **/ 135 + static inline u32 i40e_get_head(struct i40e_ring *tx_ring) 136 + { 137 + void *head = (struct i40e_tx_desc *)tx_ring->desc + tx_ring->count; 138 + 139 + return le32_to_cpu(*(volatile __le32 *)head); 140 + } 141 + 142 + /** 129 143 * i40e_get_tx_pending - how many tx descriptors not processed 130 144 * @tx_ring: the ring of descriptors 131 145 * ··· 148 134 **/ 149 135 static u32 i40e_get_tx_pending(struct i40e_ring *ring) 150 136 { 151 - u32 ntu = ((ring->next_to_clean <= ring->next_to_use) 152 - ? ring->next_to_use 153 - : ring->next_to_use + ring->count); 154 - return ntu - ring->next_to_clean; 137 + u32 head, tail; 138 + 139 + head = i40e_get_head(ring); 140 + tail = readl(ring->tail); 141 + 142 + if (head != tail) 143 + return (head < tail) ? 144 + tail - head : (tail + ring->count - head); 145 + 146 + return 0; 155 147 } 156 148 157 149 /** ··· 166 146 **/ 167 147 static bool i40e_check_tx_hang(struct i40e_ring *tx_ring) 168 148 { 149 + u32 tx_done = tx_ring->stats.packets; 150 + u32 tx_done_old = tx_ring->tx_stats.tx_done_old; 169 151 u32 tx_pending = i40e_get_tx_pending(tx_ring); 170 152 bool ret = false; 171 153 ··· 184 162 * run the check_tx_hang logic with a transmit completion 185 163 * pending but without time to complete it yet. 
186 164 */ 187 - if ((tx_ring->tx_stats.tx_done_old == tx_ring->stats.packets) && 188 - (tx_pending >= I40E_MIN_DESC_PENDING)) { 165 + if ((tx_done_old == tx_done) && tx_pending) { 189 166 /* make sure it is true for two checks in a row */ 190 167 ret = test_and_set_bit(__I40E_HANG_CHECK_ARMED, 191 168 &tx_ring->state); 192 - } else if (!(tx_ring->tx_stats.tx_done_old == tx_ring->stats.packets) || 193 - !(tx_pending < I40E_MIN_DESC_PENDING) || 194 - !(tx_pending > 0)) { 169 + } else if (tx_done_old == tx_done && 170 + (tx_pending < I40E_MIN_DESC_PENDING) && (tx_pending > 0)) { 195 171 /* update completed stats and disarm the hang check */ 196 - tx_ring->tx_stats.tx_done_old = tx_ring->stats.packets; 172 + tx_ring->tx_stats.tx_done_old = tx_done; 197 173 clear_bit(__I40E_HANG_CHECK_ARMED, &tx_ring->state); 198 174 } 199 175 200 176 return ret; 201 - } 202 - 203 - /** 204 - * i40e_get_head - Retrieve head from head writeback 205 - * @tx_ring: tx ring to fetch head of 206 - * 207 - * Returns value of Tx ring head based on value stored 208 - * in head write-back location 209 - **/ 210 - static inline u32 i40e_get_head(struct i40e_ring *tx_ring) 211 - { 212 - void *head = (struct i40e_tx_desc *)tx_ring->desc + tx_ring->count; 213 - 214 - return le32_to_cpu(*(volatile __le32 *)head); 215 177 } 216 178 217 179 #define WB_STRIDE 0x3 ··· 1212 1206 if (err < 0) 1213 1207 return err; 1214 1208 1215 - if (protocol == htons(ETH_P_IP)) { 1216 - iph = skb->encapsulation ? inner_ip_hdr(skb) : ip_hdr(skb); 1209 + iph = skb->encapsulation ? inner_ip_hdr(skb) : ip_hdr(skb); 1210 + ipv6h = skb->encapsulation ? inner_ipv6_hdr(skb) : ipv6_hdr(skb); 1211 + 1212 + if (iph->version == 4) { 1217 1213 tcph = skb->encapsulation ? 
inner_tcp_hdr(skb) : tcp_hdr(skb); 1218 1214 iph->tot_len = 0; 1219 1215 iph->check = 0; 1220 1216 tcph->check = ~csum_tcpudp_magic(iph->saddr, iph->daddr, 1221 1217 0, IPPROTO_TCP, 0); 1222 - } else if (skb_is_gso_v6(skb)) { 1223 - 1224 - ipv6h = skb->encapsulation ? inner_ipv6_hdr(skb) 1225 - : ipv6_hdr(skb); 1218 + } else if (ipv6h->version == 6) { 1226 1219 tcph = skb->encapsulation ? inner_tcp_hdr(skb) : tcp_hdr(skb); 1227 1220 ipv6h->payload_len = 0; 1228 1221 tcph->check = ~csum_ipv6_magic(&ipv6h->saddr, &ipv6h->daddr, ··· 1279 1274 I40E_TX_CTX_EXT_IP_IPV4_NO_CSUM; 1280 1275 } 1281 1276 } else if (tx_flags & I40E_TX_FLAGS_IPV6) { 1282 - if (tx_flags & I40E_TX_FLAGS_TSO) { 1283 - *cd_tunneling |= I40E_TX_CTX_EXT_IP_IPV6; 1277 + *cd_tunneling |= I40E_TX_CTX_EXT_IP_IPV6; 1278 + if (tx_flags & I40E_TX_FLAGS_TSO) 1284 1279 ip_hdr(skb)->check = 0; 1285 - } else { 1286 - *cd_tunneling |= 1287 - I40E_TX_CTX_EXT_IP_IPV4_NO_CSUM; 1288 - } 1289 1280 } 1290 1281 1291 1282 /* Now set the ctx descriptor fields */ ··· 1291 1290 ((skb_inner_network_offset(skb) - 1292 1291 skb_transport_offset(skb)) >> 1) << 1293 1292 I40E_TXD_CTX_QW0_NATLEN_SHIFT; 1293 + if (this_ip_hdr->version == 6) { 1294 + tx_flags &= ~I40E_TX_FLAGS_IPV4; 1295 + tx_flags |= I40E_TX_FLAGS_IPV6; 1296 + } 1297 + 1294 1298 1295 1299 } else { 1296 1300 network_hdr_len = skb_network_header_len(skb); ··· 1384 1378 context_desc->l2tag2 = cpu_to_le16(cd_l2tag2); 1385 1379 context_desc->rsvd = cpu_to_le16(0); 1386 1380 context_desc->type_cmd_tso_mss = cpu_to_le64(cd_type_cmd_tso_mss); 1381 + } 1382 + 1383 + /** 1384 + * i40e_chk_linearize - Check if there are more than 8 fragments per packet 1385 + * @skb: send buffer 1386 + * @tx_flags: collected send information 1387 + * @hdr_len: size of the packet header 1388 + * 1389 + * Note: Our HW can't scatter-gather more than 8 fragments to build 1390 + * a packet on the wire and so we need to figure out the cases where we 1391 + * need to linearize the skb. 
1392 + **/ 1393 + static bool i40e_chk_linearize(struct sk_buff *skb, u32 tx_flags, 1394 + const u8 hdr_len) 1395 + { 1396 + struct skb_frag_struct *frag; 1397 + bool linearize = false; 1398 + unsigned int size = 0; 1399 + u16 num_frags; 1400 + u16 gso_segs; 1401 + 1402 + num_frags = skb_shinfo(skb)->nr_frags; 1403 + gso_segs = skb_shinfo(skb)->gso_segs; 1404 + 1405 + if (tx_flags & (I40E_TX_FLAGS_TSO | I40E_TX_FLAGS_FSO)) { 1406 + u16 j = 1; 1407 + 1408 + if (num_frags < (I40E_MAX_BUFFER_TXD)) 1409 + goto linearize_chk_done; 1410 + /* try the simple math, if we have too many frags per segment */ 1411 + if (DIV_ROUND_UP((num_frags + gso_segs), gso_segs) > 1412 + I40E_MAX_BUFFER_TXD) { 1413 + linearize = true; 1414 + goto linearize_chk_done; 1415 + } 1416 + frag = &skb_shinfo(skb)->frags[0]; 1417 + size = hdr_len; 1418 + /* we might still have more fragments per segment */ 1419 + do { 1420 + size += skb_frag_size(frag); 1421 + frag++; j++; 1422 + if (j == I40E_MAX_BUFFER_TXD) { 1423 + if (size < skb_shinfo(skb)->gso_size) { 1424 + linearize = true; 1425 + break; 1426 + } 1427 + j = 1; 1428 + size -= skb_shinfo(skb)->gso_size; 1429 + if (size) 1430 + j++; 1431 + size += hdr_len; 1432 + } 1433 + num_frags--; 1434 + } while (num_frags); 1435 + } else { 1436 + if (num_frags >= I40E_MAX_BUFFER_TXD) 1437 + linearize = true; 1438 + } 1439 + 1440 + linearize_chk_done: 1441 + return linearize; 1387 1442 } 1388 1443 1389 1444 /** ··· 1720 1653 goto out_drop; 1721 1654 else if (tso) 1722 1655 tx_flags |= I40E_TX_FLAGS_TSO; 1656 + 1657 + if (i40e_chk_linearize(skb, tx_flags, hdr_len)) 1658 + if (skb_linearize(skb)) 1659 + goto out_drop; 1723 1660 1724 1661 skb_tx_timestamp(skb); 1725 1662
+1
drivers/net/ethernet/intel/i40evf/i40e_txrx.h
··· 112 112 113 113 #define i40e_rx_desc i40e_32byte_rx_desc 114 114 115 + #define I40E_MAX_BUFFER_TXD 8 115 116 #define I40E_MIN_TX_LEN 17 116 117 #define I40E_MAX_DATA_PER_TXD 8192 117 118
+7 -1
drivers/net/ethernet/mellanox/mlx4/en_selftest.c
··· 81 81 { 82 82 u32 loopback_ok = 0; 83 83 int i; 84 - 84 + bool gro_enabled; 85 85 86 86 priv->loopback_ok = 0; 87 87 priv->validate_loopback = 1; 88 + gro_enabled = priv->dev->features & NETIF_F_GRO; 88 89 89 90 mlx4_en_update_loopback_state(priv->dev, priv->dev->features); 91 + priv->dev->features &= ~NETIF_F_GRO; 90 92 91 93 /* xmit */ 92 94 if (mlx4_en_test_loopback_xmit(priv)) { ··· 110 108 mlx4_en_test_loopback_exit: 111 109 112 110 priv->validate_loopback = 0; 111 + 112 + if (gro_enabled) 113 + priv->dev->features |= NETIF_F_GRO; 114 + 113 115 mlx4_en_update_loopback_state(priv->dev, priv->dev->features); 114 116 return !loopback_ok; 115 117 }
-1
drivers/net/ethernet/mellanox/mlx4/qp.c
··· 412 412 413 413 EXPORT_SYMBOL_GPL(mlx4_qp_alloc); 414 414 415 - #define MLX4_UPDATE_QP_SUPPORTED_ATTRS MLX4_UPDATE_QP_SMAC 416 415 int mlx4_update_qp(struct mlx4_dev *dev, u32 qpn, 417 416 enum mlx4_update_qp_attr attr, 418 417 struct mlx4_update_qp_params *params)
+6 -3
drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
··· 713 713 struct mlx4_vport_oper_state *vp_oper; 714 714 struct mlx4_priv *priv; 715 715 u32 qp_type; 716 - int port; 716 + int port, err = 0; 717 717 718 718 port = (qpc->pri_path.sched_queue & 0x40) ? 2 : 1; 719 719 priv = mlx4_priv(dev); ··· 738 738 } else { 739 739 struct mlx4_update_qp_params params = {.flags = 0}; 740 740 741 - mlx4_update_qp(dev, qpn, MLX4_UPDATE_QP_VSD, &params); 741 + err = mlx4_update_qp(dev, qpn, MLX4_UPDATE_QP_VSD, &params); 742 + if (err) 743 + goto out; 742 744 } 743 745 } 744 746 ··· 775 773 qpc->pri_path.feup |= MLX4_FSM_FORCE_ETH_SRC_MAC; 776 774 qpc->pri_path.grh_mylmc = (0x80 & qpc->pri_path.grh_mylmc) + vp_oper->mac_idx; 777 775 } 778 - return 0; 776 + out: 777 + return err; 779 778 } 780 779 781 780 static int mpt_mask(struct mlx4_dev *dev)
+3 -5
drivers/net/ethernet/pasemi/pasemi_mac.c
··· 1239 1239 if (mac->phydev) 1240 1240 phy_start(mac->phydev); 1241 1241 1242 - init_timer(&mac->tx->clean_timer); 1243 - mac->tx->clean_timer.function = pasemi_mac_tx_timer; 1244 - mac->tx->clean_timer.data = (unsigned long)mac->tx; 1245 - mac->tx->clean_timer.expires = jiffies+HZ; 1246 - add_timer(&mac->tx->clean_timer); 1242 + setup_timer(&mac->tx->clean_timer, pasemi_mac_tx_timer, 1243 + (unsigned long)mac->tx); 1244 + mod_timer(&mac->tx->clean_timer, jiffies + HZ); 1247 1245 1248 1246 return 0; 1249 1247
+2 -2
drivers/net/ethernet/qlogic/netxen/netxen_nic.h
··· 354 354 355 355 } __attribute__ ((aligned(64))); 356 356 357 - /* Note: sizeof(rcv_desc) should always be a mutliple of 2 */ 357 + /* Note: sizeof(rcv_desc) should always be a multiple of 2 */ 358 358 struct rcv_desc { 359 359 __le16 reference_handle; 360 360 __le16 reserved; ··· 499 499 #define NETXEN_IMAGE_START 0x43000 /* compressed image */ 500 500 #define NETXEN_SECONDARY_START 0x200000 /* backup images */ 501 501 #define NETXEN_PXE_START 0x3E0000 /* PXE boot rom */ 502 - #define NETXEN_USER_START 0x3E8000 /* Firmare info */ 502 + #define NETXEN_USER_START 0x3E8000 /* Firmware info */ 503 503 #define NETXEN_FIXED_START 0x3F0000 /* backup of crbinit */ 504 504 #define NETXEN_USER_START_OLD NETXEN_PXE_START /* very old flash */ 505 505
+1 -1
drivers/net/ethernet/qlogic/qlcnic/qlcnic.h
··· 314 314 #define QLCNIC_BRDCFG_START 0x4000 /* board config */ 315 315 #define QLCNIC_BOOTLD_START 0x10000 /* bootld */ 316 316 #define QLCNIC_IMAGE_START 0x43000 /* compressed image */ 317 - #define QLCNIC_USER_START 0x3E8000 /* Firmare info */ 317 + #define QLCNIC_USER_START 0x3E8000 /* Firmware info */ 318 318 319 319 #define QLCNIC_FW_VERSION_OFFSET (QLCNIC_USER_START+0x408) 320 320 #define QLCNIC_FW_SIZE_OFFSET (QLCNIC_USER_START+0x40c)
+8 -24
drivers/net/ethernet/realtek/r8169.c
··· 2561 2561 int rc = -EINVAL; 2562 2562 2563 2563 if (!rtl_fw_format_ok(tp, rtl_fw)) { 2564 - netif_err(tp, ifup, dev, "invalid firwmare\n"); 2564 + netif_err(tp, ifup, dev, "invalid firmware\n"); 2565 2565 goto out; 2566 2566 } 2567 2567 ··· 5067 5067 RTL_W8(ChipCmd, CmdReset); 5068 5068 5069 5069 rtl_udelay_loop_wait_low(tp, &rtl_chipcmd_cond, 100, 100); 5070 - 5071 - netdev_reset_queue(tp->dev); 5072 5070 } 5073 5071 5074 5072 static void rtl_request_uncached_firmware(struct rtl8169_private *tp) ··· 7047 7049 u32 status, len; 7048 7050 u32 opts[2]; 7049 7051 int frags; 7050 - bool stop_queue; 7051 7052 7052 7053 if (unlikely(!TX_FRAGS_READY_FOR(tp, skb_shinfo(skb)->nr_frags))) { 7053 7054 netif_err(tp, drv, dev, "BUG! Tx Ring full when queue awake!\n"); ··· 7087 7090 7088 7091 txd->opts2 = cpu_to_le32(opts[1]); 7089 7092 7090 - netdev_sent_queue(dev, skb->len); 7091 - 7092 7093 skb_tx_timestamp(skb); 7093 7094 7094 7095 /* Force memory writes to complete before releasing descriptor */ ··· 7101 7106 7102 7107 tp->cur_tx += frags + 1; 7103 7108 7104 - stop_queue = !TX_FRAGS_READY_FOR(tp, MAX_SKB_FRAGS); 7109 + RTL_W8(TxPoll, NPQ); 7105 7110 7106 - if (!skb->xmit_more || stop_queue || 7107 - netif_xmit_stopped(netdev_get_tx_queue(dev, 0))) { 7108 - RTL_W8(TxPoll, NPQ); 7111 + mmiowb(); 7109 7112 7110 - mmiowb(); 7111 - } 7112 - 7113 - if (stop_queue) { 7113 + if (!TX_FRAGS_READY_FOR(tp, MAX_SKB_FRAGS)) { 7114 7114 /* Avoid wrongly optimistic queue wake-up: rtl_tx thread must 7115 7115 * not miss a ring update when it notices a stopped queue. 
7116 7116 */ ··· 7188 7198 static void rtl_tx(struct net_device *dev, struct rtl8169_private *tp) 7189 7199 { 7190 7200 unsigned int dirty_tx, tx_left; 7191 - unsigned int bytes_compl = 0, pkts_compl = 0; 7192 7201 7193 7202 dirty_tx = tp->dirty_tx; 7194 7203 smp_rmb(); ··· 7211 7222 rtl8169_unmap_tx_skb(&tp->pci_dev->dev, tx_skb, 7212 7223 tp->TxDescArray + entry); 7213 7224 if (status & LastFrag) { 7214 - pkts_compl++; 7215 - bytes_compl += tx_skb->skb->len; 7225 + u64_stats_update_begin(&tp->tx_stats.syncp); 7226 + tp->tx_stats.packets++; 7227 + tp->tx_stats.bytes += tx_skb->skb->len; 7228 + u64_stats_update_end(&tp->tx_stats.syncp); 7216 7229 dev_kfree_skb_any(tx_skb->skb); 7217 7230 tx_skb->skb = NULL; 7218 7231 } ··· 7223 7232 } 7224 7233 7225 7234 if (tp->dirty_tx != dirty_tx) { 7226 - netdev_completed_queue(tp->dev, pkts_compl, bytes_compl); 7227 - 7228 - u64_stats_update_begin(&tp->tx_stats.syncp); 7229 - tp->tx_stats.packets += pkts_compl; 7230 - tp->tx_stats.bytes += bytes_compl; 7231 - u64_stats_update_end(&tp->tx_stats.syncp); 7232 - 7233 7235 tp->dirty_tx = dirty_tx; 7234 7236 /* Sync with rtl8169_start_xmit: 7235 7237 * - publish dirty_tx ring index (write barrier)
+13 -5
drivers/net/ethernet/renesas/sh_eth.c
··· 508 508 .tpauser = 1, 509 509 .hw_swap = 1, 510 510 .rmiimode = 1, 511 - .shift_rd0 = 1, 512 511 }; 513 512 514 513 static void sh_eth_set_rate_sh7724(struct net_device *ndev) ··· 1391 1392 msleep(2); /* max frame time at 10 Mbps < 1250 us */ 1392 1393 sh_eth_get_stats(ndev); 1393 1394 sh_eth_reset(ndev); 1395 + 1396 + /* Set MAC address again */ 1397 + update_mac_address(ndev); 1394 1398 } 1395 1399 1396 1400 /* free Tx skb function */ ··· 1409 1407 txdesc = &mdp->tx_ring[entry]; 1410 1408 if (txdesc->status & cpu_to_edmac(mdp, TD_TACT)) 1411 1409 break; 1410 + /* TACT bit must be checked before all the following reads */ 1411 + rmb(); 1412 1412 /* Free the original skb. */ 1413 1413 if (mdp->tx_skbuff[entry]) { 1414 1414 dma_unmap_single(&ndev->dev, txdesc->addr, ··· 1448 1444 limit = boguscnt; 1449 1445 rxdesc = &mdp->rx_ring[entry]; 1450 1446 while (!(rxdesc->status & cpu_to_edmac(mdp, RD_RACT))) { 1447 + /* RACT bit must be checked before all the following reads */ 1448 + rmb(); 1451 1449 desc_status = edmac_to_cpu(mdp, rxdesc->status); 1452 1450 pkt_len = rxdesc->frame_length; 1453 1451 ··· 1461 1455 1462 1456 /* In case of almost all GETHER/ETHERs, the Receive Frame State 1463 1457 * (RFS) bits in the Receive Descriptor 0 are from bit 9 to 1464 - * bit 0. However, in case of the R8A7740, R8A779x, and 1465 - * R7S72100 the RFS bits are from bit 25 to bit 16. So, the 1458 + * bit 0. However, in case of the R8A7740 and R7S72100 1459 + * the RFS bits are from bit 25 to bit 16. So, the 1466 1460 * driver needs right shifting by 16. 1467 1461 */ 1468 1462 if (mdp->cd->shift_rd0) ··· 1529 1523 skb_checksum_none_assert(skb); 1530 1524 rxdesc->addr = dma_addr; 1531 1525 } 1526 + wmb(); /* RACT bit must be set after all the above writes */ 1532 1527 if (entry >= mdp->num_rx_ring - 1) 1533 1528 rxdesc->status |= 1534 1529 cpu_to_edmac(mdp, RD_RACT | RD_RFP | RD_RDEL); ··· 1542 1535 /* If we don't need to check status, don't. 
-KDU */ 1543 1536 if (!(sh_eth_read(ndev, EDRRR) & EDRRR_R)) { 1544 1537 /* fix the values for the next receiving if RDE is set */ 1545 - if (intr_status & EESR_RDE) { 1538 + if (intr_status & EESR_RDE && mdp->reg_offset[RDFAR] != 0) { 1546 1539 u32 count = (sh_eth_read(ndev, RDFAR) - 1547 1540 sh_eth_read(ndev, RDLAR)) >> 4; 1548 1541 ··· 2181 2174 } 2182 2175 spin_unlock_irqrestore(&mdp->lock, flags); 2183 2176 2184 - if (skb_padto(skb, ETH_ZLEN)) 2177 + if (skb_put_padto(skb, ETH_ZLEN)) 2185 2178 return NETDEV_TX_OK; 2186 2179 2187 2180 entry = mdp->cur_tx % mdp->num_tx_ring; ··· 2199 2192 } 2200 2193 txdesc->buffer_length = skb->len; 2201 2194 2195 + wmb(); /* TACT bit must be set after all the above writes */ 2202 2196 if (entry >= mdp->num_tx_ring - 1) 2203 2197 txdesc->status |= cpu_to_edmac(mdp, TD_TACT | TD_TDLE); 2204 2198 else
+4 -2
drivers/net/ethernet/rocker/rocker.c
··· 1257 1257 u64 val = rocker_read64(rocker_port->rocker, PORT_PHYS_ENABLE); 1258 1258 1259 1259 if (enable) 1260 - val |= 1 << rocker_port->lport; 1260 + val |= 1ULL << rocker_port->lport; 1261 1261 else 1262 - val &= ~(1 << rocker_port->lport); 1262 + val &= ~(1ULL << rocker_port->lport); 1263 1263 rocker_write64(rocker_port->rocker, PORT_PHYS_ENABLE, val); 1264 1264 } 1265 1265 ··· 4201 4201 4202 4202 alloc_size = sizeof(struct rocker_port *) * rocker->port_count; 4203 4203 rocker->ports = kmalloc(alloc_size, GFP_KERNEL); 4204 + if (!rocker->ports) 4205 + return -ENOMEM; 4204 4206 for (i = 0; i < rocker->port_count; i++) { 4205 4207 err = rocker_probe_port(rocker, i); 4206 4208 if (err)
+2 -5
drivers/net/ethernet/smsc/smc91c92_cs.c
··· 1070 1070 smc->packets_waiting = 0; 1071 1071 1072 1072 smc_reset(dev); 1073 - init_timer(&smc->media); 1074 - smc->media.function = media_check; 1075 - smc->media.data = (u_long) dev; 1076 - smc->media.expires = jiffies + HZ; 1077 - add_timer(&smc->media); 1073 + setup_timer(&smc->media, media_check, (u_long)dev); 1074 + mod_timer(&smc->media, jiffies + HZ); 1078 1075 1079 1076 return 0; 1080 1077 } /* smc_open */
+7 -2
drivers/net/ethernet/smsc/smc91x.c
··· 91 91 92 92 #include "smc91x.h" 93 93 94 + #if defined(CONFIG_ASSABET_NEPONSET) 95 + #include <mach/neponset.h> 96 + #endif 97 + 94 98 #ifndef SMC_NOWAIT 95 99 # define SMC_NOWAIT 0 96 100 #endif ··· 2359 2355 ret = smc_request_attrib(pdev, ndev); 2360 2356 if (ret) 2361 2357 goto out_release_io; 2362 - #if defined(CONFIG_SA1100_ASSABET) 2363 - neponset_ncr_set(NCR_ENET_OSC_EN); 2358 + #if defined(CONFIG_ASSABET_NEPONSET) 2359 + if (machine_is_assabet() && machine_has_neponset()) 2360 + neponset_ncr_set(NCR_ENET_OSC_EN); 2364 2361 #endif 2365 2362 platform_set_drvdata(pdev, ndev); 2366 2363 ret = smc_enable_device(pdev);
+3 -111
drivers/net/ethernet/smsc/smc91x.h
··· 39 39 * Define your architecture specific bus configuration parameters here. 40 40 */ 41 41 42 - #if defined(CONFIG_ARCH_LUBBOCK) ||\ 43 - defined(CONFIG_MACH_MAINSTONE) ||\ 44 - defined(CONFIG_MACH_ZYLONITE) ||\ 45 - defined(CONFIG_MACH_LITTLETON) ||\ 46 - defined(CONFIG_MACH_ZYLONITE2) ||\ 47 - defined(CONFIG_ARCH_VIPER) ||\ 48 - defined(CONFIG_MACH_STARGATE2) ||\ 49 - defined(CONFIG_ARCH_VERSATILE) 42 + #if defined(CONFIG_ARM) 50 43 51 44 #include <asm/mach-types.h> 52 45 ··· 67 74 /* We actually can't write halfwords properly if not word aligned */ 68 75 static inline void SMC_outw(u16 val, void __iomem *ioaddr, int reg) 69 76 { 70 - if ((machine_is_mainstone() || machine_is_stargate2()) && reg & 2) { 71 - unsigned int v = val << 16; 72 - v |= readl(ioaddr + (reg & ~2)) & 0xffff; 73 - writel(v, ioaddr + (reg & ~2)); 74 - } else { 75 - writew(val, ioaddr + reg); 76 - } 77 - } 78 - 79 - #elif defined(CONFIG_SA1100_PLEB) 80 - /* We can only do 16-bit reads and writes in the static memory space. */ 81 - #define SMC_CAN_USE_8BIT 1 82 - #define SMC_CAN_USE_16BIT 1 83 - #define SMC_CAN_USE_32BIT 0 84 - #define SMC_IO_SHIFT 0 85 - #define SMC_NOWAIT 1 86 - 87 - #define SMC_inb(a, r) readb((a) + (r)) 88 - #define SMC_insb(a, r, p, l) readsb((a) + (r), p, (l)) 89 - #define SMC_inw(a, r) readw((a) + (r)) 90 - #define SMC_insw(a, r, p, l) readsw((a) + (r), p, l) 91 - #define SMC_outb(v, a, r) writeb(v, (a) + (r)) 92 - #define SMC_outsb(a, r, p, l) writesb((a) + (r), p, (l)) 93 - #define SMC_outw(v, a, r) writew(v, (a) + (r)) 94 - #define SMC_outsw(a, r, p, l) writesw((a) + (r), p, l) 95 - 96 - #define SMC_IRQ_FLAGS (-1) 97 - 98 - #elif defined(CONFIG_SA1100_ASSABET) 99 - 100 - #include <mach/neponset.h> 101 - 102 - /* We can only do 8-bit reads and writes in the static memory space. 
*/ 103 - #define SMC_CAN_USE_8BIT 1 104 - #define SMC_CAN_USE_16BIT 0 105 - #define SMC_CAN_USE_32BIT 0 106 - #define SMC_NOWAIT 1 107 - 108 - /* The first two address lines aren't connected... */ 109 - #define SMC_IO_SHIFT 2 110 - 111 - #define SMC_inb(a, r) readb((a) + (r)) 112 - #define SMC_outb(v, a, r) writeb(v, (a) + (r)) 113 - #define SMC_insb(a, r, p, l) readsb((a) + (r), p, (l)) 114 - #define SMC_outsb(a, r, p, l) writesb((a) + (r), p, (l)) 115 - #define SMC_IRQ_FLAGS (-1) /* from resource */ 116 - 117 - #elif defined(CONFIG_MACH_LOGICPD_PXA270) || \ 118 - defined(CONFIG_MACH_NOMADIK_8815NHK) 119 - 120 - #define SMC_CAN_USE_8BIT 0 121 - #define SMC_CAN_USE_16BIT 1 122 - #define SMC_CAN_USE_32BIT 0 123 - #define SMC_IO_SHIFT 0 124 - #define SMC_NOWAIT 1 125 - 126 - #define SMC_inw(a, r) readw((a) + (r)) 127 - #define SMC_outw(v, a, r) writew(v, (a) + (r)) 128 - #define SMC_insw(a, r, p, l) readsw((a) + (r), p, l) 129 - #define SMC_outsw(a, r, p, l) writesw((a) + (r), p, l) 130 - 131 - #elif defined(CONFIG_ARCH_INNOKOM) || \ 132 - defined(CONFIG_ARCH_PXA_IDP) || \ 133 - defined(CONFIG_ARCH_RAMSES) || \ 134 - defined(CONFIG_ARCH_PCM027) 135 - 136 - #define SMC_CAN_USE_8BIT 1 137 - #define SMC_CAN_USE_16BIT 1 138 - #define SMC_CAN_USE_32BIT 1 139 - #define SMC_IO_SHIFT 0 140 - #define SMC_NOWAIT 1 141 - #define SMC_USE_PXA_DMA 1 142 - 143 - #define SMC_inb(a, r) readb((a) + (r)) 144 - #define SMC_inw(a, r) readw((a) + (r)) 145 - #define SMC_inl(a, r) readl((a) + (r)) 146 - #define SMC_outb(v, a, r) writeb(v, (a) + (r)) 147 - #define SMC_outl(v, a, r) writel(v, (a) + (r)) 148 - #define SMC_insl(a, r, p, l) readsl((a) + (r), p, l) 149 - #define SMC_outsl(a, r, p, l) writesl((a) + (r), p, l) 150 - #define SMC_insw(a, r, p, l) readsw((a) + (r), p, l) 151 - #define SMC_outsw(a, r, p, l) writesw((a) + (r), p, l) 152 - #define SMC_IRQ_FLAGS (-1) /* from resource */ 153 - 154 - /* We actually can't write halfwords properly if not word aligned */ 155 - static inline 
void 156 - SMC_outw(u16 val, void __iomem *ioaddr, int reg) 157 - { 158 - if (reg & 2) { 77 + if ((machine_is_mainstone() || machine_is_stargate2() || 78 + machine_is_pxa_idp()) && reg & 2) { 159 79 unsigned int v = val << 16; 160 80 v |= readl(ioaddr + (reg & ~2)) & 0xffff; 161 81 writel(v, ioaddr + (reg & ~2)); ··· 142 236 143 237 #define RPC_LSA_DEFAULT RPC_LED_100_10 144 238 #define RPC_LSB_DEFAULT RPC_LED_TX_RX 145 - 146 - #elif defined(CONFIG_ARCH_MSM) 147 - 148 - #define SMC_CAN_USE_8BIT 0 149 - #define SMC_CAN_USE_16BIT 1 150 - #define SMC_CAN_USE_32BIT 0 151 - #define SMC_NOWAIT 1 152 - 153 - #define SMC_inw(a, r) readw((a) + (r)) 154 - #define SMC_outw(v, a, r) writew(v, (a) + (r)) 155 - #define SMC_insw(a, r, p, l) readsw((a) + (r), p, l) 156 - #define SMC_outsw(a, r, p, l) writesw((a) + (r), p, l) 157 - 158 - #define SMC_IRQ_FLAGS IRQF_TRIGGER_HIGH 159 239 160 240 #elif defined(CONFIG_COLDFIRE) 161 241
+5 -5
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 310 310 spin_lock_irqsave(&priv->lock, flags); 311 311 if (!priv->eee_active) { 312 312 priv->eee_active = 1; 313 - init_timer(&priv->eee_ctrl_timer); 314 - priv->eee_ctrl_timer.function = stmmac_eee_ctrl_timer; 315 - priv->eee_ctrl_timer.data = (unsigned long)priv; 316 - priv->eee_ctrl_timer.expires = STMMAC_LPI_T(eee_timer); 317 - add_timer(&priv->eee_ctrl_timer); 313 + setup_timer(&priv->eee_ctrl_timer, 314 + stmmac_eee_ctrl_timer, 315 + (unsigned long)priv); 316 + mod_timer(&priv->eee_ctrl_timer, 317 + STMMAC_LPI_T(eee_timer)); 318 318 319 319 priv->hw->mac->set_eee_timer(priv->hw, 320 320 STMMAC_DEFAULT_LIT_LS,
+2 -4
drivers/net/ethernet/sun/niu.c
··· 6989 6989 *flow_type = IP_USER_FLOW; 6990 6990 break; 6991 6991 default: 6992 - return 0; 6992 + return -EINVAL; 6993 6993 } 6994 6994 6995 - return 1; 6995 + return 0; 6996 6996 } 6997 6997 6998 6998 static int niu_ethflow_to_class(int flow_type, u64 *class) ··· 7198 7198 class = (tp->key[0] & TCAM_V4KEY0_CLASS_CODE) >> 7199 7199 TCAM_V4KEY0_CLASS_CODE_SHIFT; 7200 7200 ret = niu_class_to_ethflow(class, &fsp->flow_type); 7201 - 7202 7201 if (ret < 0) { 7203 7202 netdev_info(np->dev, "niu%d: niu_class_to_ethflow failed\n", 7204 7203 parent->index); 7205 - ret = -EINVAL; 7206 7204 goto out; 7207 7205 } 7208 7206
+4 -5
drivers/net/ethernet/ti/cpsw.c
··· 1103 1103 cpsw_ale_add_mcast(priv->ale, priv->ndev->broadcast, 1104 1104 port_mask, ALE_VLAN, slave->port_vlan, 0); 1105 1105 cpsw_ale_add_ucast(priv->ale, priv->mac_addr, 1106 - priv->host_port, ALE_VLAN, slave->port_vlan); 1106 + priv->host_port, ALE_VLAN | ALE_SECURE, slave->port_vlan); 1107 1107 } 1108 1108 1109 1109 static void soft_reset_slave(struct cpsw_slave *slave) ··· 2466 2466 return 0; 2467 2467 } 2468 2468 2469 + #ifdef CONFIG_PM_SLEEP 2469 2470 static int cpsw_suspend(struct device *dev) 2470 2471 { 2471 2472 struct platform_device *pdev = to_platform_device(dev); ··· 2519 2518 } 2520 2519 return 0; 2521 2520 } 2521 + #endif 2522 2522 2523 - static const struct dev_pm_ops cpsw_pm_ops = { 2524 - .suspend = cpsw_suspend, 2525 - .resume = cpsw_resume, 2526 - }; 2523 + static SIMPLE_DEV_PM_OPS(cpsw_pm_ops, cpsw_suspend, cpsw_resume); 2527 2524 2528 2525 static const struct of_device_id cpsw_of_mtable[] = { 2529 2526 { .compatible = "ti,cpsw", },
+3 -2
drivers/net/ethernet/ti/davinci_mdio.c
··· 423 423 return 0; 424 424 } 425 425 426 + #ifdef CONFIG_PM_SLEEP 426 427 static int davinci_mdio_suspend(struct device *dev) 427 428 { 428 429 struct davinci_mdio_data *data = dev_get_drvdata(dev); ··· 465 464 466 465 return 0; 467 466 } 467 + #endif 468 468 469 469 static const struct dev_pm_ops davinci_mdio_pm_ops = { 470 - .suspend_late = davinci_mdio_suspend, 471 - .resume_early = davinci_mdio_resume, 470 + SET_LATE_SYSTEM_SLEEP_PM_OPS(davinci_mdio_suspend, davinci_mdio_resume) 472 471 }; 473 472 474 473 #if IS_ENABLED(CONFIG_OF)
+1 -1
drivers/net/ethernet/xscale/ixp4xx_eth.c
··· 938 938 int i; 939 939 static const u8 allmulti[] = { 0x01, 0x00, 0x00, 0x00, 0x00, 0x00 }; 940 940 941 - if (dev->flags & IFF_ALLMULTI) { 941 + if ((dev->flags & IFF_ALLMULTI) && !(dev->flags & IFF_PROMISC)) { 942 942 for (i = 0; i < ETH_ALEN; i++) { 943 943 __raw_writel(allmulti[i], &port->regs->mcast_addr[i]); 944 944 __raw_writel(allmulti[i], &port->regs->mcast_mask[i]);
+5 -2
drivers/net/macvtap.c
··· 654 654 } /* else everything is zero */ 655 655 } 656 656 657 + /* Neighbour code has some assumptions on HH_DATA_MOD alignment */ 658 + #define MACVTAP_RESERVE HH_DATA_OFF(ETH_HLEN) 659 + 657 660 /* Get packet from user space buffer */ 658 661 static ssize_t macvtap_get_user(struct macvtap_queue *q, struct msghdr *m, 659 662 struct iov_iter *from, int noblock) 660 663 { 661 - int good_linear = SKB_MAX_HEAD(NET_IP_ALIGN); 664 + int good_linear = SKB_MAX_HEAD(MACVTAP_RESERVE); 662 665 struct sk_buff *skb; 663 666 struct macvlan_dev *vlan; 664 667 unsigned long total_len = iov_iter_count(from); ··· 725 722 linear = macvtap16_to_cpu(q, vnet_hdr.hdr_len); 726 723 } 727 724 728 - skb = macvtap_alloc_skb(&q->sk, NET_IP_ALIGN, copylen, 725 + skb = macvtap_alloc_skb(&q->sk, MACVTAP_RESERVE, copylen, 729 726 linear, noblock, &err); 730 727 if (!skb) 731 728 goto err;
+80 -2
drivers/net/phy/amd-xgbe-phy.c
··· 92 92 #define XGBE_PHY_CDR_RATE_PROPERTY "amd,serdes-cdr-rate" 93 93 #define XGBE_PHY_PQ_SKEW_PROPERTY "amd,serdes-pq-skew" 94 94 #define XGBE_PHY_TX_AMP_PROPERTY "amd,serdes-tx-amp" 95 + #define XGBE_PHY_DFE_CFG_PROPERTY "amd,serdes-dfe-tap-config" 96 + #define XGBE_PHY_DFE_ENA_PROPERTY "amd,serdes-dfe-tap-enable" 95 97 96 98 #define XGBE_PHY_SPEEDS 3 97 99 #define XGBE_PHY_SPEED_1000 0 ··· 179 177 #define SPEED_10000_BLWC 0 180 178 #define SPEED_10000_CDR 0x7 181 179 #define SPEED_10000_PLL 0x1 182 - #define SPEED_10000_PQ 0x1e 180 + #define SPEED_10000_PQ 0x12 183 181 #define SPEED_10000_RATE 0x0 184 182 #define SPEED_10000_TXAMP 0xa 185 183 #define SPEED_10000_WORD 0x7 184 + #define SPEED_10000_DFE_TAP_CONFIG 0x1 185 + #define SPEED_10000_DFE_TAP_ENABLE 0x7f 186 186 187 187 #define SPEED_2500_BLWC 1 188 188 #define SPEED_2500_CDR 0x2 ··· 193 189 #define SPEED_2500_RATE 0x1 194 190 #define SPEED_2500_TXAMP 0xf 195 191 #define SPEED_2500_WORD 0x1 192 + #define SPEED_2500_DFE_TAP_CONFIG 0x3 193 + #define SPEED_2500_DFE_TAP_ENABLE 0x0 196 194 197 195 #define SPEED_1000_BLWC 1 198 196 #define SPEED_1000_CDR 0x2 ··· 203 197 #define SPEED_1000_RATE 0x3 204 198 #define SPEED_1000_TXAMP 0xf 205 199 #define SPEED_1000_WORD 0x1 200 + #define SPEED_1000_DFE_TAP_CONFIG 0x3 201 + #define SPEED_1000_DFE_TAP_ENABLE 0x0 206 202 207 203 /* SerDes RxTx register offsets */ 204 + #define RXTX_REG6 0x0018 208 205 #define RXTX_REG20 0x0050 206 + #define RXTX_REG22 0x0058 209 207 #define RXTX_REG114 0x01c8 208 + #define RXTX_REG129 0x0204 210 209 211 210 /* SerDes RxTx register entry bit positions and sizes */ 211 + #define RXTX_REG6_RESETB_RXD_INDEX 8 212 + #define RXTX_REG6_RESETB_RXD_WIDTH 1 212 213 #define RXTX_REG20_BLWC_ENA_INDEX 2 213 214 #define RXTX_REG20_BLWC_ENA_WIDTH 1 214 215 #define RXTX_REG114_PQ_REG_INDEX 9 215 216 #define RXTX_REG114_PQ_REG_WIDTH 7 217 + #define RXTX_REG129_RXDFE_CONFIG_INDEX 14 218 + #define RXTX_REG129_RXDFE_CONFIG_WIDTH 2 216 219 217 220 /* Bit 
setting and getting macros 218 221 * The get macro will extract the current bit field value from within ··· 348 333 SPEED_10000_TXAMP, 349 334 }; 350 335 336 + static const u32 amd_xgbe_phy_serdes_dfe_tap_cfg[] = { 337 + SPEED_1000_DFE_TAP_CONFIG, 338 + SPEED_2500_DFE_TAP_CONFIG, 339 + SPEED_10000_DFE_TAP_CONFIG, 340 + }; 341 + 342 + static const u32 amd_xgbe_phy_serdes_dfe_tap_ena[] = { 343 + SPEED_1000_DFE_TAP_ENABLE, 344 + SPEED_2500_DFE_TAP_ENABLE, 345 + SPEED_10000_DFE_TAP_ENABLE, 346 + }; 347 + 351 348 enum amd_xgbe_phy_an { 352 349 AMD_XGBE_AN_READY = 0, 353 350 AMD_XGBE_AN_PAGE_RECEIVED, ··· 420 393 u32 serdes_cdr_rate[XGBE_PHY_SPEEDS]; 421 394 u32 serdes_pq_skew[XGBE_PHY_SPEEDS]; 422 395 u32 serdes_tx_amp[XGBE_PHY_SPEEDS]; 396 + u32 serdes_dfe_tap_cfg[XGBE_PHY_SPEEDS]; 397 + u32 serdes_dfe_tap_ena[XGBE_PHY_SPEEDS]; 423 398 424 399 /* Auto-negotiation state machine support */ 425 400 struct mutex an_mutex; ··· 510 481 status = XSIR0_IOREAD(priv, SIR0_STATUS); 511 482 if (XSIR_GET_BITS(status, SIR0_STATUS, RX_READY) && 512 483 XSIR_GET_BITS(status, SIR0_STATUS, TX_READY)) 513 - return; 484 + goto rx_reset; 514 485 } 515 486 516 487 netdev_dbg(phydev->attached_dev, "SerDes rx/tx not ready (%#hx)\n", 517 488 status); 489 + 490 + rx_reset: 491 + /* Perform Rx reset for the DFE changes */ 492 + XRXTX_IOWRITE_BITS(priv, RXTX_REG6, RESETB_RXD, 0); 493 + XRXTX_IOWRITE_BITS(priv, RXTX_REG6, RESETB_RXD, 1); 518 494 } 519 495 520 496 static int amd_xgbe_phy_xgmii_mode(struct phy_device *phydev) ··· 568 534 priv->serdes_blwc[XGBE_PHY_SPEED_10000]); 569 535 XRXTX_IOWRITE_BITS(priv, RXTX_REG114, PQ_REG, 570 536 priv->serdes_pq_skew[XGBE_PHY_SPEED_10000]); 537 + XRXTX_IOWRITE_BITS(priv, RXTX_REG129, RXDFE_CONFIG, 538 + priv->serdes_dfe_tap_cfg[XGBE_PHY_SPEED_10000]); 539 + XRXTX_IOWRITE(priv, RXTX_REG22, 540 + priv->serdes_dfe_tap_ena[XGBE_PHY_SPEED_10000]); 571 541 572 542 amd_xgbe_phy_serdes_complete_ratechange(phydev); 573 543 ··· 624 586 
priv->serdes_blwc[XGBE_PHY_SPEED_2500]); 625 587 XRXTX_IOWRITE_BITS(priv, RXTX_REG114, PQ_REG, 626 588 priv->serdes_pq_skew[XGBE_PHY_SPEED_2500]); 589 + XRXTX_IOWRITE_BITS(priv, RXTX_REG129, RXDFE_CONFIG, 590 + priv->serdes_dfe_tap_cfg[XGBE_PHY_SPEED_2500]); 591 + XRXTX_IOWRITE(priv, RXTX_REG22, 592 + priv->serdes_dfe_tap_ena[XGBE_PHY_SPEED_2500]); 627 593 628 594 amd_xgbe_phy_serdes_complete_ratechange(phydev); 629 595 ··· 680 638 priv->serdes_blwc[XGBE_PHY_SPEED_1000]); 681 639 XRXTX_IOWRITE_BITS(priv, RXTX_REG114, PQ_REG, 682 640 priv->serdes_pq_skew[XGBE_PHY_SPEED_1000]); 641 + XRXTX_IOWRITE_BITS(priv, RXTX_REG129, RXDFE_CONFIG, 642 + priv->serdes_dfe_tap_cfg[XGBE_PHY_SPEED_1000]); 643 + XRXTX_IOWRITE(priv, RXTX_REG22, 644 + priv->serdes_dfe_tap_ena[XGBE_PHY_SPEED_1000]); 683 645 684 646 amd_xgbe_phy_serdes_complete_ratechange(phydev); 685 647 ··· 1712 1666 } else { 1713 1667 memcpy(priv->serdes_tx_amp, amd_xgbe_phy_serdes_tx_amp, 1714 1668 sizeof(priv->serdes_tx_amp)); 1669 + } 1670 + 1671 + if (device_property_present(phy_dev, XGBE_PHY_DFE_CFG_PROPERTY)) { 1672 + ret = device_property_read_u32_array(phy_dev, 1673 + XGBE_PHY_DFE_CFG_PROPERTY, 1674 + priv->serdes_dfe_tap_cfg, 1675 + XGBE_PHY_SPEEDS); 1676 + if (ret) { 1677 + dev_err(dev, "invalid %s property\n", 1678 + XGBE_PHY_DFE_CFG_PROPERTY); 1679 + goto err_sir1; 1680 + } 1681 + } else { 1682 + memcpy(priv->serdes_dfe_tap_cfg, 1683 + amd_xgbe_phy_serdes_dfe_tap_cfg, 1684 + sizeof(priv->serdes_dfe_tap_cfg)); 1685 + } 1686 + 1687 + if (device_property_present(phy_dev, XGBE_PHY_DFE_ENA_PROPERTY)) { 1688 + ret = device_property_read_u32_array(phy_dev, 1689 + XGBE_PHY_DFE_ENA_PROPERTY, 1690 + priv->serdes_dfe_tap_ena, 1691 + XGBE_PHY_SPEEDS); 1692 + if (ret) { 1693 + dev_err(dev, "invalid %s property\n", 1694 + XGBE_PHY_DFE_ENA_PROPERTY); 1695 + goto err_sir1; 1696 + } 1697 + } else { 1698 + memcpy(priv->serdes_dfe_tap_ena, 1699 + amd_xgbe_phy_serdes_dfe_tap_ena, 1700 + sizeof(priv->serdes_dfe_tap_ena)); 1715 
1701 } 1716 1702 1717 1703 phydev->priv = priv;
+20 -3
drivers/net/phy/phy.c
··· 236 236 } 237 237 238 238 /** 239 + * phy_check_valid - check if there is a valid PHY setting which matches 240 + * speed, duplex, and feature mask 241 + * @speed: speed to match 242 + * @duplex: duplex to match 243 + * @features: A mask of the valid settings 244 + * 245 + * Description: Returns true if there is a valid setting, false otherwise. 246 + */ 247 + static inline bool phy_check_valid(int speed, int duplex, u32 features) 248 + { 249 + unsigned int idx; 250 + 251 + idx = phy_find_valid(phy_find_setting(speed, duplex), features); 252 + 253 + return settings[idx].speed == speed && settings[idx].duplex == duplex && 254 + (settings[idx].setting & features); 255 + } 256 + 257 + /** 239 258 * phy_sanitize_settings - make sure the PHY is set to supported speed and duplex 240 259 * @phydev: the target phy_device struct 241 260 * ··· 1064 1045 int eee_lp, eee_cap, eee_adv; 1065 1046 u32 lp, cap, adv; 1066 1047 int status; 1067 - unsigned int idx; 1068 1048 1069 1049 /* Read phy status to properly get the right settings */ 1070 1050 status = phy_read_status(phydev); ··· 1095 1077 1096 1078 adv = mmd_eee_adv_to_ethtool_adv_t(eee_adv); 1097 1079 lp = mmd_eee_adv_to_ethtool_adv_t(eee_lp); 1098 - idx = phy_find_setting(phydev->speed, phydev->duplex); 1099 - if (!(lp & adv & settings[idx].setting)) 1080 + if (!phy_check_valid(phydev->speed, phydev->duplex, lp & adv)) 1100 1081 goto eee_exit_err; 1101 1082 1102 1083 if (clk_stop_enable) {
+1 -3
drivers/net/team/team.c
···
 43  43
 44  44      static struct team_port *team_port_get_rcu(const struct net_device *dev)
 45  45      {
 46   -          struct team_port *port = rcu_dereference(dev->rx_handler_data);
 47   -
 48   -          return team_port_exists(dev) ? port : NULL;
 46   +          return rcu_dereference(dev->rx_handler_data);
 49  47      }
 50  48
 51  49      static struct team_port *team_port_get_rtnl(const struct net_device *dev)
+1
drivers/net/usb/Kconfig
···
161 161        * Linksys USB200M
162 162        * Netgear FA120
163 163        * Sitecom LN-029
164   +        * Sitecom LN-028
164 165        * Intellinet USB 2.0 Ethernet
165 166        * ST Lab USB 2.0 Ethernet
166 167        * TrendNet TU2-ET100
+4
drivers/net/usb/asix_devices.c
···
979 979          USB_DEVICE (0x0df6, 0x0056),
980 980          .driver_info = (unsigned long) &ax88178_info,
981 981      }, {
982   +          // Sitecom LN-028 "USB 2.0 10/100/1000 Ethernet adapter"
983   +          USB_DEVICE (0x0df6, 0x061c),
984   +          .driver_info = (unsigned long) &ax88178_info,
985   +      }, {
982 986          // corega FEther USB2-TX
983 987          USB_DEVICE (0x07aa, 0x0017),
984 988          .driver_info = (unsigned long) &ax8817x_info,
+1 -1
drivers/net/usb/hso.c
···
1594 1594              }
1595 1595              cprev = cnow;
1596 1596          }
1597    -          current->state = TASK_RUNNING;
1597    +          __set_current_state(TASK_RUNNING);
1598 1598          remove_wait_queue(&tiocmget->waitq, &wait);
1599 1599
1600 1600          return ret;
+5
drivers/net/usb/plusb.c
···
134 134      }, {
135 135          USB_DEVICE(0x050d, 0x258a),     /* Belkin F5U258/F5U279 (PL-25A1) */
136 136          .driver_info = (unsigned long) &prolific_info,
137   +      }, {
138   +          USB_DEVICE(0x3923, 0x7825),     /* National Instruments USB
139   +                                           * Host-to-Host Cable
140   +                                           */
141   +          .driver_info = (unsigned long) &prolific_info,
137 142      },
138 143
139 144      { },    // END
+6 -6
drivers/net/wan/cosa.c
··· 806 806 spin_lock_irqsave(&cosa->lock, flags); 807 807 add_wait_queue(&chan->rxwaitq, &wait); 808 808 while (!chan->rx_status) { 809 - current->state = TASK_INTERRUPTIBLE; 809 + set_current_state(TASK_INTERRUPTIBLE); 810 810 spin_unlock_irqrestore(&cosa->lock, flags); 811 811 schedule(); 812 812 spin_lock_irqsave(&cosa->lock, flags); 813 813 if (signal_pending(current) && chan->rx_status == 0) { 814 814 chan->rx_status = 1; 815 815 remove_wait_queue(&chan->rxwaitq, &wait); 816 - current->state = TASK_RUNNING; 816 + __set_current_state(TASK_RUNNING); 817 817 spin_unlock_irqrestore(&cosa->lock, flags); 818 818 mutex_unlock(&chan->rlock); 819 819 return -ERESTARTSYS; 820 820 } 821 821 } 822 822 remove_wait_queue(&chan->rxwaitq, &wait); 823 - current->state = TASK_RUNNING; 823 + __set_current_state(TASK_RUNNING); 824 824 kbuf = chan->rxdata; 825 825 count = chan->rxsize; 826 826 spin_unlock_irqrestore(&cosa->lock, flags); ··· 890 890 spin_lock_irqsave(&cosa->lock, flags); 891 891 add_wait_queue(&chan->txwaitq, &wait); 892 892 while (!chan->tx_status) { 893 - current->state = TASK_INTERRUPTIBLE; 893 + set_current_state(TASK_INTERRUPTIBLE); 894 894 spin_unlock_irqrestore(&cosa->lock, flags); 895 895 schedule(); 896 896 spin_lock_irqsave(&cosa->lock, flags); 897 897 if (signal_pending(current) && chan->tx_status == 0) { 898 898 chan->tx_status = 1; 899 899 remove_wait_queue(&chan->txwaitq, &wait); 900 - current->state = TASK_RUNNING; 900 + __set_current_state(TASK_RUNNING); 901 901 chan->tx_status = 1; 902 902 spin_unlock_irqrestore(&cosa->lock, flags); 903 903 up(&chan->wsem); ··· 905 905 } 906 906 } 907 907 remove_wait_queue(&chan->txwaitq, &wait); 908 - current->state = TASK_RUNNING; 908 + __set_current_state(TASK_RUNNING); 909 909 up(&chan->wsem); 910 910 spin_unlock_irqrestore(&cosa->lock, flags); 911 911 kfree(kbuf);
+4 -1
drivers/net/wireless/mac80211_hwsim.c
···
946 946              goto nla_put_failure;
947 947
948 948          genlmsg_end(skb, msg_head);
949   -          genlmsg_unicast(&init_net, skb, dst_portid);
949   +          if (genlmsg_unicast(&init_net, skb, dst_portid))
950   +              goto err_free_txskb;
950 951
951 952          /* Enqueue the packet */
952 953          skb_queue_tail(&data->pending, my_skb);
···
956 955          return;
957 956
958 957      nla_put_failure:
958   +          nlmsg_free(skb);
959   +      err_free_txskb:
959 960          printk(KERN_DEBUG "mac80211_hwsim: error occurred in %s\n", __func__);
960 961          ieee80211_free_txskb(hw, my_skb);
961 962          data->tx_failed++;
+21 -8
drivers/net/xen-netback/netback.c
···
 655  655          unsigned long flags;
 656  656
 657  657          do {
      658  +            int notify;
      659  +
 658  660              spin_lock_irqsave(&queue->response_lock, flags);
 659  661              make_tx_response(queue, txp, XEN_NETIF_RSP_ERROR);
      662  +            RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->tx, notify);
 660  663              spin_unlock_irqrestore(&queue->response_lock, flags);
      664  +            if (notify)
      665  +                notify_remote_via_irq(queue->tx_irq);
      666  +
 661  667              if (cons == end)
 662  668                  break;
 663  669              txp = RING_GET_REQUEST(&queue->tx, cons++);
···
1655 1649      {
1656 1650          struct pending_tx_info *pending_tx_info;
1657 1651          pending_ring_idx_t index;
     1652  +      int notify;
1658 1653          unsigned long flags;
1659 1654
1660 1655          pending_tx_info = &queue->pending_tx_info[pending_idx];
     1656  +
1661 1657          spin_lock_irqsave(&queue->response_lock, flags);
     1658  +
1662 1659          make_tx_response(queue, &pending_tx_info->req, status);
1663    -          index = pending_index(queue->pending_prod);
     1660  +
     1661  +      /* Release the pending index before pushing the Tx response so
     1662  +       * it's available before a new Tx request is pushed by the
     1663  +       * frontend.
     1664  +       */
     1665  +      index = pending_index(queue->pending_prod++);
1664 1666          queue->pending_ring[index] = pending_idx;
1665    -          /* TX shouldn't use the index before we give it back here */
1666    -          mb();
1667    -          queue->pending_prod++;
     1667  +
     1668  +      RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->tx, notify);
     1669  +
1668 1670          spin_unlock_irqrestore(&queue->response_lock, flags);
     1671  +
     1672  +      if (notify)
     1673  +          notify_remote_via_irq(queue->tx_irq);
1669 1674      }
1670 1675
1671 1676
···
1686 1669      {
1687 1670          RING_IDX i = queue->tx.rsp_prod_pvt;
1688 1671          struct xen_netif_tx_response *resp;
1689    -          int notify;
1690 1672
1691 1673          resp = RING_GET_RESPONSE(&queue->tx, i);
1692 1674          resp->id = txp->id;
···
1695 1679              RING_GET_RESPONSE(&queue->tx, ++i)->status = XEN_NETIF_RSP_NULL;
1696 1680
1697 1681          queue->tx.rsp_prod_pvt = ++i;
1698    -          RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->tx, notify);
1699    -          if (notify)
1700    -              notify_remote_via_irq(queue->tx_irq);
1701 1683
1702 1684      static struct xen_netif_rx_response *make_rx_response(struct xenvif_queue *queue,
+14 -11
drivers/vhost/net.c
··· 591 591 * TODO: support TSO. 592 592 */ 593 593 iov_iter_advance(&msg.msg_iter, vhost_hlen); 594 - } else { 595 - /* It'll come from socket; we'll need to patch 596 - * ->num_buffers over if VIRTIO_NET_F_MRG_RXBUF 597 - */ 598 - iov_iter_advance(&fixup, sizeof(hdr)); 599 594 } 600 595 err = sock->ops->recvmsg(NULL, sock, &msg, 601 596 sock_len, MSG_DONTWAIT | MSG_TRUNC); ··· 604 609 continue; 605 610 } 606 611 /* Supply virtio_net_hdr if VHOST_NET_F_VIRTIO_NET_HDR */ 607 - if (unlikely(vhost_hlen) && 608 - copy_to_iter(&hdr, sizeof(hdr), &fixup) != sizeof(hdr)) { 609 - vq_err(vq, "Unable to write vnet_hdr at addr %p\n", 610 - vq->iov->iov_base); 611 - break; 612 + if (unlikely(vhost_hlen)) { 613 + if (copy_to_iter(&hdr, sizeof(hdr), 614 + &fixup) != sizeof(hdr)) { 615 + vq_err(vq, "Unable to write vnet_hdr " 616 + "at addr %p\n", vq->iov->iov_base); 617 + break; 618 + } 619 + } else { 620 + /* Header came from socket; we'll need to patch 621 + * ->num_buffers over if VIRTIO_NET_F_MRG_RXBUF 622 + */ 623 + iov_iter_advance(&fixup, sizeof(hdr)); 612 624 } 613 625 /* TODO: Should check and handle checksum. */ 614 626 615 627 num_buffers = cpu_to_vhost16(vq, headcount); 616 628 if (likely(mergeable) && 617 - copy_to_iter(&num_buffers, 2, &fixup) != 2) { 629 + copy_to_iter(&num_buffers, sizeof num_buffers, 630 + &fixup) != sizeof num_buffers) { 618 631 vq_err(vq, "Failed num_buffers write"); 619 632 vhost_discard_vq_desc(vq, headcount); 620 633 break;
+1 -1
include/linux/mlx4/qp.h
···
427 427
428 428      enum mlx4_update_qp_attr {
429 429          MLX4_UPDATE_QP_SMAC             = 1 << 0,
430   -          MLX4_UPDATE_QP_VSD              = 1 << 2,
430   +          MLX4_UPDATE_QP_VSD              = 1 << 1,
431 431          MLX4_UPDATE_QP_SUPPORTED_ATTRS  = (1 << 2) - 1
432 432      };
433 433
+1
include/linux/netdevice.h
···
2342 2342
2343 2343      static inline void skb_gro_remcsum_init(struct gro_remcsum *grc)
2344 2344      {
2345    +          grc->offset = 0;
2345 2346          grc->delta = 0;
2346 2347      }
2347 2348
+5 -17
include/linux/rhashtable.h
··· 54 54 * @buckets: size * hash buckets 55 55 */ 56 56 struct bucket_table { 57 - size_t size; 58 - unsigned int locks_mask; 59 - spinlock_t *locks; 60 - struct rhash_head __rcu *buckets[]; 57 + size_t size; 58 + unsigned int locks_mask; 59 + spinlock_t *locks; 60 + 61 + struct rhash_head __rcu *buckets[] ____cacheline_aligned_in_smp; 61 62 }; 62 63 63 64 typedef u32 (*rht_hashfn_t)(const void *data, u32 len, u32 seed); ··· 79 78 * @locks_mul: Number of bucket locks to allocate per cpu (default: 128) 80 79 * @hashfn: Function to hash key 81 80 * @obj_hashfn: Function to hash object 82 - * @grow_decision: If defined, may return true if table should expand 83 - * @shrink_decision: If defined, may return true if table should shrink 84 - * 85 - * Note: when implementing the grow and shrink decision function, min/max 86 - * shift must be enforced, otherwise, resizing watermarks they set may be 87 - * useless. 88 81 */ 89 82 struct rhashtable_params { 90 83 size_t nelem_hint; ··· 92 97 size_t locks_mul; 93 98 rht_hashfn_t hashfn; 94 99 rht_obj_hashfn_t obj_hashfn; 95 - bool (*grow_decision)(const struct rhashtable *ht, 96 - size_t new_size); 97 - bool (*shrink_decision)(const struct rhashtable *ht, 98 - size_t new_size); 99 100 }; 100 101 101 102 /** ··· 182 191 183 192 void rhashtable_insert(struct rhashtable *ht, struct rhash_head *node); 184 193 bool rhashtable_remove(struct rhashtable *ht, struct rhash_head *node); 185 - 186 - bool rht_grow_above_75(const struct rhashtable *ht, size_t new_size); 187 - bool rht_shrink_below_30(const struct rhashtable *ht, size_t new_size); 188 194 189 195 int rhashtable_expand(struct rhashtable *ht); 190 196 int rhashtable_shrink(struct rhashtable *ht);
+1 -1
include/net/caif/cfpkt.h
···
171 171       * @return Checksum of buffer.
172 172       */
173 173
174   -      u16 cfpkt_iterate(struct cfpkt *pkt,
174   +      int cfpkt_iterate(struct cfpkt *pkt,
175 175                u16 (*iter_func)(u16 chks, void *buf, u16 len),
176 176                u16 data);
177 177
+1
include/uapi/linux/tc_act/Kbuild
···
 9  9      header-y += tc_skbedit.h
10 10      header-y += tc_vlan.h
11 11      header-y += tc_bpf.h
12  +      header-y += tc_connmark.h
+28 -34
lib/rhashtable.c
··· 17 17 #include <linux/kernel.h> 18 18 #include <linux/init.h> 19 19 #include <linux/log2.h> 20 + #include <linux/sched.h> 20 21 #include <linux/slab.h> 21 22 #include <linux/vmalloc.h> 22 23 #include <linux/mm.h> ··· 218 217 static struct bucket_table *bucket_table_alloc(struct rhashtable *ht, 219 218 size_t nbuckets) 220 219 { 221 - struct bucket_table *tbl; 220 + struct bucket_table *tbl = NULL; 222 221 size_t size; 223 222 int i; 224 223 225 224 size = sizeof(*tbl) + nbuckets * sizeof(tbl->buckets[0]); 226 - tbl = kzalloc(size, GFP_KERNEL | __GFP_NOWARN); 225 + if (size <= (PAGE_SIZE << PAGE_ALLOC_COSTLY_ORDER)) 226 + tbl = kzalloc(size, GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY); 227 227 if (tbl == NULL) 228 228 tbl = vzalloc(size); 229 - 230 229 if (tbl == NULL) 231 230 return NULL; 232 231 ··· 248 247 * @ht: hash table 249 248 * @new_size: new table size 250 249 */ 251 - bool rht_grow_above_75(const struct rhashtable *ht, size_t new_size) 250 + static bool rht_grow_above_75(const struct rhashtable *ht, size_t new_size) 252 251 { 253 252 /* Expand table when exceeding 75% load */ 254 253 return atomic_read(&ht->nelems) > (new_size / 4 * 3) && 255 - (ht->p.max_shift && atomic_read(&ht->shift) < ht->p.max_shift); 254 + (!ht->p.max_shift || atomic_read(&ht->shift) < ht->p.max_shift); 256 255 } 257 - EXPORT_SYMBOL_GPL(rht_grow_above_75); 258 256 259 257 /** 260 258 * rht_shrink_below_30 - returns true if nelems < 0.3 * table-size 261 259 * @ht: hash table 262 260 * @new_size: new table size 263 261 */ 264 - bool rht_shrink_below_30(const struct rhashtable *ht, size_t new_size) 262 + static bool rht_shrink_below_30(const struct rhashtable *ht, size_t new_size) 265 263 { 266 264 /* Shrink table beneath 30% load */ 267 265 return atomic_read(&ht->nelems) < (new_size * 3 / 10) && 268 266 (atomic_read(&ht->shift) > ht->p.min_shift); 269 267 } 270 - EXPORT_SYMBOL_GPL(rht_shrink_below_30); 271 268 272 269 static void lock_buckets(struct bucket_table *new_tbl, 273 270 
struct bucket_table *old_tbl, unsigned int hash) ··· 413 414 } 414 415 } 415 416 unlock_buckets(new_tbl, old_tbl, new_hash); 417 + cond_resched(); 416 418 } 417 419 418 420 /* Unzip interleaved hash chains */ ··· 437 437 complete = false; 438 438 439 439 unlock_buckets(new_tbl, old_tbl, old_hash); 440 + cond_resched(); 440 441 } 441 442 } 442 443 ··· 496 495 tbl->buckets[new_hash + new_tbl->size]); 497 496 498 497 unlock_buckets(new_tbl, tbl, new_hash); 498 + cond_resched(); 499 499 } 500 500 501 501 /* Publish the new, valid hash table */ ··· 530 528 list_for_each_entry(walker, &ht->walkers, list) 531 529 walker->resize = true; 532 530 533 - if (ht->p.grow_decision && ht->p.grow_decision(ht, tbl->size)) 531 + if (rht_grow_above_75(ht, tbl->size)) 534 532 rhashtable_expand(ht); 535 - else if (ht->p.shrink_decision && ht->p.shrink_decision(ht, tbl->size)) 533 + else if (rht_shrink_below_30(ht, tbl->size)) 536 534 rhashtable_shrink(ht); 537 - 538 535 unlock: 539 536 mutex_unlock(&ht->mutex); 540 537 } 541 538 542 - static void rhashtable_wakeup_worker(struct rhashtable *ht) 543 - { 544 - struct bucket_table *tbl = rht_dereference_rcu(ht->tbl, ht); 545 - struct bucket_table *new_tbl = rht_dereference_rcu(ht->future_tbl, ht); 546 - size_t size = tbl->size; 547 - 548 - /* Only adjust the table if no resizing is currently in progress. 
*/ 549 - if (tbl == new_tbl && 550 - ((ht->p.grow_decision && ht->p.grow_decision(ht, size)) || 551 - (ht->p.shrink_decision && ht->p.shrink_decision(ht, size)))) 552 - schedule_work(&ht->run_work); 553 - } 554 - 555 539 static void __rhashtable_insert(struct rhashtable *ht, struct rhash_head *obj, 556 - struct bucket_table *tbl, u32 hash) 540 + struct bucket_table *tbl, 541 + const struct bucket_table *old_tbl, u32 hash) 557 542 { 543 + bool no_resize_running = tbl == old_tbl; 558 544 struct rhash_head *head; 559 545 560 546 hash = rht_bucket_index(tbl, hash); ··· 558 568 rcu_assign_pointer(tbl->buckets[hash], obj); 559 569 560 570 atomic_inc(&ht->nelems); 561 - 562 - rhashtable_wakeup_worker(ht); 571 + if (no_resize_running && rht_grow_above_75(ht, tbl->size)) 572 + schedule_work(&ht->run_work); 563 573 } 564 574 565 575 /** ··· 589 599 hash = obj_raw_hashfn(ht, rht_obj(ht, obj)); 590 600 591 601 lock_buckets(tbl, old_tbl, hash); 592 - __rhashtable_insert(ht, obj, tbl, hash); 602 + __rhashtable_insert(ht, obj, tbl, old_tbl, hash); 593 603 unlock_buckets(tbl, old_tbl, hash); 594 604 595 605 rcu_read_unlock(); ··· 671 681 unlock_buckets(new_tbl, old_tbl, new_hash); 672 682 673 683 if (ret) { 684 + bool no_resize_running = new_tbl == old_tbl; 685 + 674 686 atomic_dec(&ht->nelems); 675 - rhashtable_wakeup_worker(ht); 687 + if (no_resize_running && rht_shrink_below_30(ht, new_tbl->size)) 688 + schedule_work(&ht->run_work); 676 689 } 677 690 678 691 rcu_read_unlock(); ··· 845 852 goto exit; 846 853 } 847 854 848 - __rhashtable_insert(ht, obj, new_tbl, new_hash); 855 + __rhashtable_insert(ht, obj, new_tbl, old_tbl, new_hash); 849 856 850 857 exit: 851 858 unlock_buckets(new_tbl, old_tbl, new_hash); ··· 886 893 iter->walker = kmalloc(sizeof(*iter->walker), GFP_KERNEL); 887 894 if (!iter->walker) 888 895 return -ENOMEM; 896 + 897 + INIT_LIST_HEAD(&iter->walker->list); 898 + iter->walker->resize = false; 889 899 890 900 mutex_lock(&ht->mutex); 891 901 
list_add(&iter->walker->list, &ht->walkers); ··· 1107 1111 if (!ht->p.hash_rnd) 1108 1112 get_random_bytes(&ht->p.hash_rnd, sizeof(ht->p.hash_rnd)); 1109 1113 1110 - if (ht->p.grow_decision || ht->p.shrink_decision) 1111 - INIT_WORK(&ht->run_work, rht_deferred_worker); 1114 + INIT_WORK(&ht->run_work, rht_deferred_worker); 1112 1115 1113 1116 return 0; 1114 1117 } ··· 1125 1130 { 1126 1131 ht->being_destroyed = true; 1127 1132 1128 - if (ht->p.grow_decision || ht->p.shrink_decision) 1129 - cancel_work_sync(&ht->run_work); 1133 + cancel_work_sync(&ht->run_work); 1130 1134 1131 1135 mutex_lock(&ht->mutex); 1132 1136 bucket_table_free(rht_dereference(ht->tbl, ht));
+8 -3
lib/test_rhashtable.c
··· 191 191 return err; 192 192 } 193 193 194 + static struct rhashtable ht; 195 + 194 196 static int __init test_rht_init(void) 195 197 { 196 - struct rhashtable ht; 197 198 struct rhashtable_params params = { 198 199 .nelem_hint = TEST_HT_SIZE, 199 200 .head_offset = offsetof(struct test_obj, node), 200 201 .key_offset = offsetof(struct test_obj, value), 201 202 .key_len = sizeof(int), 202 203 .hashfn = jhash, 204 + .max_shift = 1, /* we expand/shrink manually here */ 203 205 .nulls_base = (3U << RHT_BASE_SHIFT), 204 - .grow_decision = rht_grow_above_75, 205 - .shrink_decision = rht_shrink_below_30, 206 206 }; 207 207 int err; 208 208 ··· 222 222 return err; 223 223 } 224 224 225 + static void __exit test_rht_exit(void) 226 + { 227 + } 228 + 225 229 module_init(test_rht_init); 230 + module_exit(test_rht_exit); 226 231 227 232 MODULE_LICENSE("GPL v2");
+2
net/bridge/br.c
···
190 190      {
191 191          int err;
192 192
193   +          BUILD_BUG_ON(sizeof(struct br_input_skb_cb) > FIELD_SIZEOF(struct sk_buff, cb));
194   +
193 195          err = stp_proto_register(&br_stp_proto);
194 196          if (err < 0) {
195 197              pr_err("bridge: can't register sap for STP\n");
+1 -1
net/caif/cffrml.c
···
84 84          u16 tmp;
85 85          u16 len;
86 86          u16 hdrchks;
87  -          u16 pktchks;
87  +          int pktchks;
88 88          struct cffrml *this;
89 89          this = container_obj(layr);
90 90
+3 -3
net/caif/cfpkt_skbuff.c
···
255 255          return skb->len;
256 256      }
257 257
258   -      inline u16 cfpkt_iterate(struct cfpkt *pkt,
259   -                   u16 (*iter_func)(u16, void *, u16),
260   -                   u16 data)
258   +      int cfpkt_iterate(struct cfpkt *pkt,
259   +                u16 (*iter_func)(u16, void *, u16),
260   +                u16 data)
261 261      {
262 262          /*
263 263           * Don't care about the performance hit of linearizing,
-9
net/compat.c
···
711 711
712 712      COMPAT_SYSCALL_DEFINE3(sendmsg, int, fd, struct compat_msghdr __user *, msg, unsigned int, flags)
713 713      {
714   -          if (flags & MSG_CMSG_COMPAT)
715   -              return -EINVAL;
716 714          return __sys_sendmsg(fd, (struct user_msghdr __user *)msg, flags | MSG_CMSG_COMPAT);
717 715      }
718 716
719 717      COMPAT_SYSCALL_DEFINE4(sendmmsg, int, fd, struct compat_mmsghdr __user *, mmsg,
720 718                     unsigned int, vlen, unsigned int, flags)
721 719      {
722   -          if (flags & MSG_CMSG_COMPAT)
723   -              return -EINVAL;
724 720          return __sys_sendmmsg(fd, (struct mmsghdr __user *)mmsg, vlen,
725 721                        flags | MSG_CMSG_COMPAT);
726 722      }
727 723
728 724      COMPAT_SYSCALL_DEFINE3(recvmsg, int, fd, struct compat_msghdr __user *, msg, unsigned int, flags)
729 725      {
730   -          if (flags & MSG_CMSG_COMPAT)
731   -              return -EINVAL;
732 726          return __sys_recvmsg(fd, (struct user_msghdr __user *)msg, flags | MSG_CMSG_COMPAT);
733 727      }
734 728
···
744 750      {
745 751          int datagrams;
746 752          struct timespec ktspec;
747   -
748   -          if (flags & MSG_CMSG_COMPAT)
749   -              return -EINVAL;
750 753
751 754          if (timeout == NULL)
752 755              return __sys_recvmmsg(fd, (struct mmsghdr __user *)mmsg, vlen,
+1 -1
net/core/dev.c
···
946 946              return false;
947 947
948 948          while (*name) {
949   -              if (*name == '/' || isspace(*name))
949   +              if (*name == '/' || *name == ':' || isspace(*name))
950 950                  return false;
951 951              name++;
952 952          }
+1
net/core/ethtool.c
···
 98  98          [NETIF_F_RXALL_BIT] =            "rx-all",
 99  99          [NETIF_F_HW_L2FW_DOFFLOAD_BIT] = "l2-fwd-offload",
100 100          [NETIF_F_BUSY_POLL_BIT] =        "busy-poll",
101   +          [NETIF_F_HW_SWITCH_OFFLOAD_BIT] = "hw-switch-offload",
101 102      };
102 103
103 104      static const char
+14 -1
net/core/gen_stats.c
··· 32 32 return 0; 33 33 34 34 nla_put_failure: 35 + kfree(d->xstats); 36 + d->xstats = NULL; 37 + d->xstats_len = 0; 35 38 spin_unlock_bh(d->lock); 36 39 return -1; 37 40 } ··· 308 305 gnet_stats_copy_app(struct gnet_dump *d, void *st, int len) 309 306 { 310 307 if (d->compat_xstats) { 311 - d->xstats = st; 308 + d->xstats = kmemdup(st, len, GFP_ATOMIC); 309 + if (!d->xstats) 310 + goto err_out; 312 311 d->xstats_len = len; 313 312 } 314 313 ··· 318 313 return gnet_stats_copy(d, TCA_STATS_APP, st, len); 319 314 320 315 return 0; 316 + 317 + err_out: 318 + d->xstats_len = 0; 319 + spin_unlock_bh(d->lock); 320 + return -1; 321 321 } 322 322 EXPORT_SYMBOL(gnet_stats_copy_app); 323 323 ··· 355 345 return -1; 356 346 } 357 347 348 + kfree(d->xstats); 349 + d->xstats = NULL; 350 + d->xstats_len = 0; 358 351 spin_unlock_bh(d->lock); 359 352 return 0; 360 353 }
+3
net/core/pktgen.c
···
1134 1134              return len;
1135 1135
1136 1136          i += len;
1137    +          if ((value > 1) &&
1138    +              (!(pkt_dev->odev->priv_flags & IFF_TX_SKB_SHARING)))
1139    +              return -ENOTSUPP;
1137 1140          pkt_dev->burst = value < 1 ? 1 : value;
1138 1141          sprintf(pg_result, "OK: burst=%d", pkt_dev->burst);
1139 1142          return count;
+10 -5
net/core/rtnetlink.c
··· 1300 1300 s_h = cb->args[0]; 1301 1301 s_idx = cb->args[1]; 1302 1302 1303 - rcu_read_lock(); 1304 1303 cb->seq = net->dev_base_seq; 1305 1304 1306 1305 /* A hack to preserve kernel<->userspace interface. ··· 1321 1322 for (h = s_h; h < NETDEV_HASHENTRIES; h++, s_idx = 0) { 1322 1323 idx = 0; 1323 1324 head = &net->dev_index_head[h]; 1324 - hlist_for_each_entry_rcu(dev, head, index_hlist) { 1325 + hlist_for_each_entry(dev, head, index_hlist) { 1325 1326 if (idx < s_idx) 1326 1327 goto cont; 1327 1328 err = rtnl_fill_ifinfo(skb, dev, RTM_NEWLINK, ··· 1343 1344 } 1344 1345 } 1345 1346 out: 1346 - rcu_read_unlock(); 1347 1347 cb->args[1] = idx; 1348 1348 cb->args[0] = h; 1349 1349 ··· 2010 2012 } 2011 2013 2012 2014 if (1) { 2013 - struct nlattr *attr[ops ? ops->maxtype + 1 : 0]; 2014 - struct nlattr *slave_attr[m_ops ? m_ops->slave_maxtype + 1 : 0]; 2015 + struct nlattr *attr[ops ? ops->maxtype + 1 : 1]; 2016 + struct nlattr *slave_attr[m_ops ? m_ops->slave_maxtype + 1 : 1]; 2015 2017 struct nlattr **data = NULL; 2016 2018 struct nlattr **slave_data = NULL; 2017 2019 struct net *dest_net, *link_net = NULL; ··· 2120 2122 if (IS_ERR(dest_net)) 2121 2123 return PTR_ERR(dest_net); 2122 2124 2125 + err = -EPERM; 2126 + if (!netlink_ns_capable(skb, dest_net->user_ns, CAP_NET_ADMIN)) 2127 + goto out; 2128 + 2123 2129 if (tb[IFLA_LINK_NETNSID]) { 2124 2130 int id = nla_get_s32(tb[IFLA_LINK_NETNSID]); 2125 2131 ··· 2132 2130 err = -EINVAL; 2133 2131 goto out; 2134 2132 } 2133 + err = -EPERM; 2134 + if (!netlink_ns_capable(skb, link_net->user_ns, CAP_NET_ADMIN)) 2135 + goto out; 2135 2136 } 2136 2137 2137 2138 dev = rtnl_create_link(link_net ? : dest_net, ifname,
+3 -2
net/core/skbuff.c
···
3621 3621      {
3622 3622          struct sk_buff_head *q = &sk->sk_error_queue;
3623 3623          struct sk_buff *skb, *skb_next;
3624    +          unsigned long flags;
3624 3625          int err = 0;
3625 3626
3626    -          spin_lock_bh(&q->lock);
3627    +          spin_lock_irqsave(&q->lock, flags);
3627 3628          skb = __skb_dequeue(q);
3628 3629          if (skb && (skb_next = skb_peek(q)))
3629 3630              err = SKB_EXT_ERR(skb_next)->ee.ee_errno;
3630    -          spin_unlock_bh(&q->lock);
3631    +          spin_unlock_irqrestore(&q->lock, flags);
3631 3632
3632 3633          sk->sk_err = err;
3633 3634          if (err)
+1 -1
net/decnet/dn_route.c
···
1062 1062          if (decnet_debug_level & 16)
1063 1063              printk(KERN_DEBUG
1064 1064                     "dn_route_output_slow: initial checks complete."
1065    -                 " dst=%o4x src=%04x oif=%d try_hard=%d\n",
1065    +                 " dst=%04x src=%04x oif=%d try_hard=%d\n",
1066 1066                     le16_to_cpu(fld.daddr), le16_to_cpu(fld.saddr),
1067 1067                     fld.flowidn_oif, try_hard);
1068 1068
+3
net/hsr/hsr_device.c
···
359 359          struct hsr_port *port;
360 360
361 361          hsr = netdev_priv(hsr_dev);
362   +
363   +          rtnl_lock();
362 364          hsr_for_each_port(hsr, port)
363 365              hsr_del_port(port);
366   +          rtnl_unlock();
364 367
365 368          del_timer_sync(&hsr->prune_timer);
366 369          del_timer_sync(&hsr->announce_timer);
+4
net/hsr/hsr_main.c
···
36 36              return NOTIFY_DONE;     /* Not an HSR device */
37 37          hsr = netdev_priv(dev);
38 38          port = hsr_port_get_hsr(hsr, HSR_PT_MASTER);
39  +          if (port == NULL) {
40  +              /* Resend of notification concerning removed device? */
41  +              return NOTIFY_DONE;
42  +          }
39 43      } else {
40 44          hsr = port->hsr;
41 45      }
+7 -3
net/hsr/hsr_slave.c
···
181 181          list_del_rcu(&port->port_list);
182 182
183 183          if (port != master) {
184   -              netdev_update_features(master->dev);
185   -              dev_set_mtu(master->dev, hsr_get_max_mtu(hsr));
184   +              if (master != NULL) {
185   +                  netdev_update_features(master->dev);
186   +                  dev_set_mtu(master->dev, hsr_get_max_mtu(hsr));
187   +              }
186 188              netdev_rx_handler_unregister(port->dev);
187 189              dev_set_promiscuity(port->dev, -1);
188 190          }
···
194 192           */
195 193
196 194          synchronize_rcu();
197   -          dev_put(port->dev);
195   +
196   +          if (port != master)
197   +              dev_put(port->dev);
198 198      }
+1 -1
net/ipv4/ip_fragment.c
···
664 664          if (skb->protocol != htons(ETH_P_IP))
665 665              return skb;
666 666
667   -          if (!skb_copy_bits(skb, 0, &iph, sizeof(iph)))
667   +          if (skb_copy_bits(skb, 0, &iph, sizeof(iph)) < 0)
668 668              return skb;
669 669
670 670          if (iph.ihl < 5 || iph.version != 4)
+2 -1
net/ipv4/ip_output.c
···
888 888          cork->length += length;
889 889          if (((length > mtu) || (skb && skb_is_gso(skb))) &&
890 890              (sk->sk_protocol == IPPROTO_UDP) &&
891   -              (rt->dst.dev->features & NETIF_F_UFO) && !rt->dst.header_len) {
891   +              (rt->dst.dev->features & NETIF_F_UFO) && !rt->dst.header_len &&
892   +              (sk->sk_type == SOCK_DGRAM)) {
892 893              err = ip_ufo_append_data(sk, queue, getfrag, from, length,
893 894                           hh_len, fragheaderlen, transhdrlen,
894 895                           maxfraglen, flags);
+1 -1
net/ipv4/tcp_input.c
···
4770 4770              return false;
4771 4771
4772 4772          /* If we filled the congestion window, do not expand.  */
4773    -          if (tp->packets_out >= tp->snd_cwnd)
4773    +          if (tcp_packets_in_flight(tp) >= tp->snd_cwnd)
4774 4774              return false;
4775 4775
4776 4776          return true;
+16 -1
net/ipv6/addrconf.c
··· 4903 4903 return ret; 4904 4904 } 4905 4905 4906 + static 4907 + int addrconf_sysctl_mtu(struct ctl_table *ctl, int write, 4908 + void __user *buffer, size_t *lenp, loff_t *ppos) 4909 + { 4910 + struct inet6_dev *idev = ctl->extra1; 4911 + int min_mtu = IPV6_MIN_MTU; 4912 + struct ctl_table lctl; 4913 + 4914 + lctl = *ctl; 4915 + lctl.extra1 = &min_mtu; 4916 + lctl.extra2 = idev ? &idev->dev->mtu : NULL; 4917 + 4918 + return proc_dointvec_minmax(&lctl, write, buffer, lenp, ppos); 4919 + } 4920 + 4906 4921 static void dev_disable_change(struct inet6_dev *idev) 4907 4922 { 4908 4923 struct netdev_notifier_info info; ··· 5069 5054 .data = &ipv6_devconf.mtu6, 5070 5055 .maxlen = sizeof(int), 5071 5056 .mode = 0644, 5072 - .proc_handler = proc_dointvec, 5057 + .proc_handler = addrconf_sysctl_mtu, 5073 5058 }, 5074 5059 { 5075 5060 .procname = "accept_ra",
+2 -1
net/ipv6/ip6_output.c
···
1298 1298          if (((length > mtu) ||
1299 1299               (skb && skb_is_gso(skb))) &&
1300 1300              (sk->sk_protocol == IPPROTO_UDP) &&
1301    -              (rt->dst.dev->features & NETIF_F_UFO)) {
1301    +              (rt->dst.dev->features & NETIF_F_UFO) &&
1302    +              (sk->sk_type == SOCK_DGRAM)) {
1302 1303              err = ip6_ufo_append_data(sk, queue, getfrag, from, length,
1303 1304                            hh_len, fragheaderlen,
1304 1305                            transhdrlen, mtu, flags, rt);
+1 -1
net/irda/ircomm/ircomm_tty.c
···
811 811                  break;
812 812              }
813 813              spin_unlock_irqrestore(&self->spinlock, flags);
814   -              current->state = TASK_RUNNING;
814   +              __set_current_state(TASK_RUNNING);
815 815          }
816 816
817 817      /*
+2 -2
net/irda/irnet/irnet_ppp.c
···
305 305
306 306        /* Put ourselves on the wait queue to be woken up */
307 307        add_wait_queue(&irnet_events.rwait, &wait);
308   -        current->state = TASK_INTERRUPTIBLE;
308   +        set_current_state(TASK_INTERRUPTIBLE);
309 309        for(;;)
310 310          {
311 311            /* If there is unread events */
···
321 321            /* Yield and wait to be woken up */
322 322            schedule();
323 323          }
324   -        current->state = TASK_RUNNING;
324   +        __set_current_state(TASK_RUNNING);
325 325        remove_wait_queue(&irnet_events.rwait, &wait);
326 326
327 327        /* Did we get it? */
+5
net/mac80211/chan.c
···
1508 1508          if (ieee80211_chanctx_refcount(local, ctx) == 0)
1509 1509              ieee80211_free_chanctx(local, ctx);
1510 1510
1511    +          sdata->radar_required = false;
1512    +
1511 1513          /* Unreserving may ready an in-place reservation. */
1512 1514          if (use_reserved_switch)
1513 1515              ieee80211_vif_use_reserved_switch(local);
···
1568 1566          ieee80211_recalc_smps_chanctx(local, ctx);
1569 1567          ieee80211_recalc_radar_chanctx(local, ctx);
1570 1568       out:
1569    +          if (ret)
1570    +              sdata->radar_required = false;
1571    +
1571 1572          mutex_unlock(&local->chanctx_mtx);
1572 1573          return ret;
1573 1574      }
+1 -1
net/mac80211/rc80211_minstrel.c
@@ -373,7 +373,7 @@
 		rate++;
 		mi->sample_deferred++;
 	} else {
-		if (!msr->sample_limit != 0)
+		if (!msr->sample_limit)
 			return;
 
 		mi->sample_packets++;
+1
net/mac80211/tx.c
@@ -566,6 +566,7 @@
 		if (tx->sdata->control_port_no_encrypt)
 			info->flags |= IEEE80211_TX_INTFL_DONT_ENCRYPT;
 		info->control.flags |= IEEE80211_TX_CTRL_PORT_CTRL_PROTO;
+		info->flags |= IEEE80211_TX_CTL_USE_MINRATE;
 	}
 
 	return TX_CONTINUE;
+1 -1
net/netfilter/ipvs/ip_vs_ctl.c
@@ -3402,7 +3402,7 @@
 	if (udest.af == 0)
 		udest.af = svc->af;
 
-	if (udest.af != svc->af) {
+	if (udest.af != svc->af && cmd != IPVS_CMD_DEL_DEST) {
 		/* The synchronization protocol is incompatible
 		 * with mixed family services
 		 */
+10 -2
net/netfilter/nft_compat.c
@@ -625,8 +625,12 @@
 		struct xt_match *match = nft_match->ops.data;
 
 		if (strcmp(match->name, mt_name) == 0 &&
-		    match->revision == rev && match->family == family)
+		    match->revision == rev && match->family == family) {
+			if (!try_module_get(match->me))
+				return ERR_PTR(-ENOENT);
+
 			return &nft_match->ops;
+		}
 	}
 
 	match = xt_request_find_match(family, mt_name, rev);
@@ -699,8 +695,12 @@
 		struct xt_target *target = nft_target->ops.data;
 
 		if (strcmp(target->name, tg_name) == 0 &&
-		    target->revision == rev && target->family == family)
+		    target->revision == rev && target->family == family) {
+			if (!try_module_get(target->me))
+				return ERR_PTR(-ENOENT);
+
 			return &nft_target->ops;
+		}
 	}
 
 	target = xt_request_find_target(family, tg_name, rev);
-2
net/netfilter/nft_hash.c
@@ -192,8 +192,6 @@
 		.key_offset = offsetof(struct nft_hash_elem, key),
 		.key_len = set->klen,
 		.hashfn = jhash,
-		.grow_decision = rht_grow_above_75,
-		.shrink_decision = rht_shrink_below_30,
 	};
 
 	return rhashtable_init(priv, &params);
+5 -6
net/netfilter/xt_recent.c
@@ -378,12 +378,11 @@
 	mutex_lock(&recent_mutex);
 	t = recent_table_lookup(recent_net, info->name);
 	if (t != NULL) {
-		if (info->hit_count > t->nstamps_max_mask) {
-			pr_info("hitcount (%u) is larger than packets to be remembered (%u) for table %s\n",
-				info->hit_count, t->nstamps_max_mask + 1,
-				info->name);
-			ret = -EINVAL;
-			goto out;
+		if (nstamp_mask > t->nstamps_max_mask) {
+			spin_lock_bh(&recent_lock);
+			recent_table_flush(t);
+			t->nstamps_max_mask = nstamp_mask;
+			spin_unlock_bh(&recent_lock);
 		}
 
 		t->refcnt++;
+12 -9
net/netfilter/xt_socket.c
@@ -243,12 +243,13 @@
 extract_icmp6_fields(const struct sk_buff *skb,
 		     unsigned int outside_hdrlen,
 		     int *protocol,
-		     struct in6_addr **raddr,
-		     struct in6_addr **laddr,
+		     const struct in6_addr **raddr,
+		     const struct in6_addr **laddr,
 		     __be16 *rport,
-		     __be16 *lport)
+		     __be16 *lport,
+		     struct ipv6hdr *ipv6_var)
 {
-	struct ipv6hdr *inside_iph, _inside_iph;
+	const struct ipv6hdr *inside_iph;
 	struct icmp6hdr *icmph, _icmph;
 	__be16 *ports, _ports[2];
 	u8 inside_nexthdr;
@@ -264,12 +263,14 @@
 	if (icmph->icmp6_type & ICMPV6_INFOMSG_MASK)
 		return 1;
 
-	inside_iph = skb_header_pointer(skb, outside_hdrlen + sizeof(_icmph), sizeof(_inside_iph), &_inside_iph);
+	inside_iph = skb_header_pointer(skb, outside_hdrlen + sizeof(_icmph),
+					sizeof(*ipv6_var), ipv6_var);
 	if (inside_iph == NULL)
 		return 1;
 	inside_nexthdr = inside_iph->nexthdr;
 
-	inside_hdrlen = ipv6_skip_exthdr(skb, outside_hdrlen + sizeof(_icmph) + sizeof(_inside_iph),
+	inside_hdrlen = ipv6_skip_exthdr(skb, outside_hdrlen + sizeof(_icmph) +
+					      sizeof(*ipv6_var),
 					 &inside_nexthdr, &inside_fragoff);
 	if (inside_hdrlen < 0)
 		return 1; /* hjm: Packet has no/incomplete transport layer headers. */
@@ -318,10 +315,10 @@
 static bool
 socket_mt6_v1_v2(const struct sk_buff *skb, struct xt_action_param *par)
 {
-	struct ipv6hdr *iph = ipv6_hdr(skb);
+	struct ipv6hdr ipv6_var, *iph = ipv6_hdr(skb);
 	struct udphdr _hdr, *hp = NULL;
 	struct sock *sk = skb->sk;
-	struct in6_addr *daddr = NULL, *saddr = NULL;
+	const struct in6_addr *daddr = NULL, *saddr = NULL;
 	__be16 uninitialized_var(dport), uninitialized_var(sport);
 	int thoff = 0, uninitialized_var(tproto);
 	const struct xt_socket_mtinfo1 *info = (struct xt_socket_mtinfo1 *) par->matchinfo;
@@ -345,7 +342,7 @@
 
 	} else if (tproto == IPPROTO_ICMPV6) {
 		if (extract_icmp6_fields(skb, thoff, &tproto, &saddr, &daddr,
					 &sport, &dport))
+					 &sport, &dport, &ipv6_var))
 			return false;
 	} else {
 		return false;
-2
net/netlink/af_netlink.c
@@ -3126,8 +3126,6 @@
 		.key_len = sizeof(u32), /* portid */
 		.hashfn = jhash,
 		.max_shift = 16, /* 64K */
-		.grow_decision = rht_grow_above_75,
-		.shrink_decision = rht_shrink_below_30,
 	};
 
 	if (err != 0)
+43 -2
net/openvswitch/datapath.c
@@ -2194,14 +2194,55 @@
 	return 0;
 }
 
-static void __net_exit ovs_exit_net(struct net *net)
+static void __net_exit list_vports_from_net(struct net *net, struct net *dnet,
+					    struct list_head *head)
+{
+	struct ovs_net *ovs_net = net_generic(net, ovs_net_id);
+	struct datapath *dp;
+
+	list_for_each_entry(dp, &ovs_net->dps, list_node) {
+		int i;
+
+		for (i = 0; i < DP_VPORT_HASH_BUCKETS; i++) {
+			struct vport *vport;
+
+			hlist_for_each_entry(vport, &dp->ports[i], dp_hash_node) {
+				struct netdev_vport *netdev_vport;
+
+				if (vport->ops->type != OVS_VPORT_TYPE_INTERNAL)
+					continue;
+
+				netdev_vport = netdev_vport_priv(vport);
+				if (dev_net(netdev_vport->dev) == dnet)
+					list_add(&vport->detach_list, head);
+			}
+		}
+	}
+}
+
+static void __net_exit ovs_exit_net(struct net *dnet)
 {
 	struct datapath *dp, *dp_next;
-	struct ovs_net *ovs_net = net_generic(net, ovs_net_id);
+	struct ovs_net *ovs_net = net_generic(dnet, ovs_net_id);
+	struct vport *vport, *vport_next;
+	struct net *net;
+	LIST_HEAD(head);
 
 	ovs_lock();
 	list_for_each_entry_safe(dp, dp_next, &ovs_net->dps, list_node)
 		__dp_destroy(dp);
+
+	rtnl_lock();
+	for_each_net(net)
+		list_vports_from_net(net, dnet, &head);
+	rtnl_unlock();
+
+	/* Detach all vports from given namespace. */
+	list_for_each_entry_safe(vport, vport_next, &head, detach_list) {
+		list_del(&vport->detach_list);
+		ovs_dp_detach_port(vport);
+	}
+
 	ovs_unlock();
 
 	cancel_work_sync(&ovs_net->dp_notify_work);
+7 -1
net/openvswitch/flow_netlink.c
@@ -2253,14 +2253,20 @@
 			   struct sk_buff *skb)
 {
 	const struct nlattr *ovs_key = nla_data(a);
+	struct nlattr *nla;
 	size_t key_len = nla_len(ovs_key) / 2;
 
 	/* Revert the conversion we did from a non-masked set action to
 	 * masked set action.
 	 */
-	if (nla_put(skb, OVS_ACTION_ATTR_SET, nla_len(a) - key_len, ovs_key))
+	nla = nla_nest_start(skb, OVS_ACTION_ATTR_SET);
+	if (!nla)
 		return -EMSGSIZE;
 
+	if (nla_put(skb, nla_type(ovs_key), key_len, nla_data(ovs_key)))
+		return -EMSGSIZE;
+
+	nla_nest_end(skb, nla);
 	return 0;
 }
+2
net/openvswitch/vport.h
@@ -103,6 +103,7 @@
  * @ops: Class structure.
  * @percpu_stats: Points to per-CPU statistics used and maintained by vport
  * @err_stats: Points to error statistics used and maintained by vport
+ * @detach_list: list used for detaching vport in net-exit call.
  */
 struct vport {
 	struct rcu_head rcu;
@@ -118,6 +117,7 @@
 	struct pcpu_sw_netstats __percpu *percpu_stats;
 
 	struct vport_err_stats err_stats;
+	struct list_head detach_list;
 };
 
 /**
+14 -6
net/packet/af_packet.c
@@ -698,6 +698,10 @@
 
 	if (pkc->last_kactive_blk_num == pkc->kactive_blk_num) {
 		if (!frozen) {
+			if (!BLOCK_NUM_PKTS(pbd)) {
+				/* An empty block. Just refresh the timer. */
+				goto refresh_timer;
+			}
 			prb_retire_current_block(pkc, po, TP_STATUS_BLK_TMO);
 			if (!prb_dispatch_next_block(pkc, po))
 				goto refresh_timer;
@@ -802,7 +798,11 @@
 		h1->ts_last_pkt.ts_sec = last_pkt->tp_sec;
 		h1->ts_last_pkt.ts_nsec	= last_pkt->tp_nsec;
 	} else {
-		/* Ok, we tmo'd - so get the current time */
+		/* Ok, we tmo'd - so get the current time.
+		 *
+		 * It shouldn't really happen as we don't close empty
+		 * blocks. See prb_retire_rx_blk_timer_expired().
+		 */
 		struct timespec ts;
 		getnstimeofday(&ts);
 		h1->ts_last_pkt.ts_sec = ts.tv_sec;
@@ -1357,14 +1349,14 @@
 		return 0;
 	}
 
+	if (fanout_has_flag(f, PACKET_FANOUT_FLAG_DEFRAG)) {
+		skb = ip_check_defrag(skb, IP_DEFRAG_AF_PACKET);
+		if (!skb)
+			return 0;
+	}
 	switch (f->type) {
 	case PACKET_FANOUT_HASH:
 	default:
-		if (fanout_has_flag(f, PACKET_FANOUT_FLAG_DEFRAG)) {
-			skb = ip_check_defrag(skb, IP_DEFRAG_AF_PACKET);
-			if (!skb)
-				return 0;
-		}
 		idx = fanout_demux_hash(f, skb, num);
 		break;
 	case PACKET_FANOUT_LB:
+5 -4
net/rxrpc/ar-ack.c
@@ -218,7 +218,8 @@
 	struct rxrpc_header *hdr;
 	struct sk_buff *txb;
 	unsigned long *p_txb, resend_at;
-	int loop, stop;
+	bool stop;
+	int loop;
 	u8 resend;
 
 	_enter("{%d,%d,%d,%d},",
@@ -227,7 +226,7 @@
 	       atomic_read(&call->sequence),
 	       CIRC_CNT(call->acks_head, call->acks_tail, call->acks_winsz));
 
-	stop = 0;
+	stop = false;
 	resend = 0;
 	resend_at = 0;
 
@@ -256,11 +255,11 @@
 			_proto("Tx DATA %%%u { #%d }",
 			       ntohl(sp->hdr.serial), ntohl(sp->hdr.seq));
 			if (rxrpc_send_packet(call->conn->trans, txb) < 0) {
-				stop = 0;
+				stop = true;
 				sp->resend_at = jiffies + 3;
 			} else {
 				sp->resend_at =
-					jiffies + rxrpc_resend_timeout * HZ;
+					jiffies + rxrpc_resend_timeout;
 			}
 		}
+1
net/sched/ematch.c
@@ -228,6 +228,7 @@
 			 * to replay the request.
 			 */
 			module_put(em->ops->owner);
+			em->ops = NULL;
 			err = -EAGAIN;
 		}
 #endif
-2
net/tipc/socket.c
@@ -2364,8 +2364,6 @@
 		.hashfn = jhash,
 		.max_shift = 20, /* 1M */
 		.min_shift = 8,  /* 256 */
-		.grow_decision = rht_grow_above_75,
-		.shrink_decision = rht_shrink_below_30,
 	};
 
 	return rhashtable_init(&tn->sk_rht, &rht_params);
+1
net/wireless/core.c
@@ -1199,6 +1199,7 @@
 	regulatory_exit();
 out_fail_reg:
 	debugfs_remove(ieee80211_debugfs_dir);
+	nl80211_exit();
 out_fail_nl80211:
 	unregister_netdevice_notifier(&cfg80211_netdev_notifier);
 out_fail_notifier:
+5 -7
net/wireless/nl80211.c
@@ -2654,10 +2654,6 @@
 		return err;
 	}
 
-	msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);
-	if (!msg)
-		return -ENOMEM;
-
 	err = parse_monitor_flags(type == NL80211_IFTYPE_MONITOR ?
 				  info->attrs[NL80211_ATTR_MNTR_FLAGS] : NULL,
 				  &flags);
@@ -2665,6 +2661,10 @@
 	if (!err && (flags & MONITOR_FLAG_ACTIVE) &&
 	    !(rdev->wiphy.features & NL80211_FEATURE_ACTIVE_MONITOR))
 		return -EOPNOTSUPP;
+
+	msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);
+	if (!msg)
+		return -ENOMEM;
 
 	wdev = rdev_add_virtual_intf(rdev,
 				     nla_data(info->attrs[NL80211_ATTR_IFNAME]),
@@ -12528,9 +12528,7 @@
 	}
 
 	for (j = 0; j < match->n_channels; j++) {
-		if (nla_put_u32(msg,
-				NL80211_ATTR_WIPHY_FREQ,
-				match->channels[j])) {
+		if (nla_put_u32(msg, j, match->channels[j])) {
 			nla_nest_cancel(msg, nl_freqs);
 			nla_nest_cancel(msg, nl_match);
 			goto out;
+1 -1
net/wireless/reg.c
@@ -228,7 +228,7 @@
 
 /* We keep a static world regulatory domain in case of the absence of CRDA */
 static const struct ieee80211_regdomain world_regdom = {
-	.n_reg_rules = 6,
+	.n_reg_rules = 8,
 	.alpha2 =  "00",
 	.reg_rules = {
 		/* IEEE 802.11b/g, channels 1..11 */