
Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-next

Jeff Kirsher says:

====================
Intel Wired LAN Driver Updates 2015-02-05

This series contains updates to fm10k, ixgbe and ixgbevf.

Matthew fixes an issue where fm10k does not properly drop the upper-most
four bits of the VLAN ID due to type promotion; resolve the issue by not
masking off the bits, but by returning an error if the VLAN ID is
out-of-bounds. Then cleans up two cases where variables were being set
but never used, so just remove the unused variables.

Don cleans up sparse errors in the x550 family file for ixgbe. Fixed up
a redundant setting of the default value for set_rxpba, which was
accidentally done twice. Cleaned up the probe routine to remove a
redundant attempt to identify the PHY, which could lead to a panic on
x550. Added support for VXLAN receive checksum offload in x550 hardware
and added the Ethertype anti-spoofing feature for affected devices.

Emil enables ixgbe and ixgbevf to allow multiple queues in SRIOV mode.
Adds RSS support for x550 per VF. Fixed up a couple of issues introduced
in commit 2b509c0cd292 ("ixgbe: cleanup ixgbe_ndo_set_vf_vlan"), fixed
setting of the VLAN inside ixgbe_enable_port_vlan() and disable the
"hide VLAN" bit in PFQDE when port VLAN is disabled. Cleaned up the
setting of vlan_features by enabling all features at once. Fixed the
ordering of the shutdown path so that we attempt to shut down the rings
more gracefully: we shut down the main Rx filter in the Rx case and set
the carrier_off state in the Tx case so that packets stop being
delivered from outside the driver, then shut down interrupts and NAPI,
and finally stop the rings from performing DMA and clean them. Added
code to allow for Tx hang checking to provide more robust debug info in
the event of a transmit unit hang in ixgbevf. Cleaned up ixgbevf logic
dealing with link up/down by breaking down the link detection and up/down
events into separate functions, similar to how these events are handled
in other drivers. Combined the ixgbevf reset and watchdog tasks into a
single task so that we can avoid multiple schedules of the reset task
when a reset is needed due to either the mailbox going down or transmit
packets being present while the link is down.

v2: Fixed up patch #03 of the series to remove the variable type change
based on feedback from David Laight
====================

Signed-off-by: David S. Miller <davem@davemloft.net>

+602 -237
+11
drivers/net/ethernet/intel/Kconfig
··· 192 192 To compile this driver as a module, choose M here. The module 193 193 will be called ixgbe. 194 194 195 + config IXGBE_VXLAN 196 + bool "Virtual eXtensible Local Area Network Support" 197 + default n 198 + depends on IXGBE && VXLAN && !(IXGBE=y && VXLAN=m) 199 + ---help--- 200 + This allows one to create VXLAN virtual interfaces that provide 201 + Layer 2 Networks over Layer 3 Networks. VXLAN is often used 202 + to tunnel virtual network infrastructure in virtualized environments. 203 + Say Y here if you want to use Virtual eXtensible Local Area Network 204 + (VXLAN) in the driver. 205 + 195 206 config IXGBE_HWMON 196 207 bool "Intel(R) 10GbE PCI Express adapters HWMON support" 197 208 default y
+2 -3
drivers/net/ethernet/intel/fm10k/fm10k_mbx.c
··· 1194 1194 { 1195 1195 const enum fm10k_mbx_state state = mbx->state; 1196 1196 const u32 *hdr = &mbx->mbx_hdr; 1197 - u16 head, tail; 1197 + u16 head; 1198 1198 s32 err; 1199 1199 1200 - /* we will need to pull all of the fields for verification */ 1200 + /* we will need to pull the header field for verification */ 1201 1201 head = FM10K_MSG_HDR_FIELD_GET(*hdr, HEAD); 1202 - tail = FM10K_MSG_HDR_FIELD_GET(*hdr, TAIL); 1203 1202 1204 1203 /* We should not be receiving disconnect if Rx is incomplete */ 1205 1204 if (mbx->pushed)
+2 -5
drivers/net/ethernet/intel/fm10k/fm10k_pf.c
··· 330 330 struct fm10k_mac_update mac_update; 331 331 u32 msg[5]; 332 332 333 - /* if glort is not valid return error */ 334 - if (!fm10k_glort_valid_pf(hw, glort)) 333 + /* if glort or vlan are not valid return error */ 334 + if (!fm10k_glort_valid_pf(hw, glort) || vid >= FM10K_VLAN_TABLE_VID_MAX) 335 335 return FM10K_ERR_PARAM; 336 - 337 - /* drop upper 4 bits of VLAN ID */ 338 - vid = (vid << 4) >> 4; 339 336 340 337 /* record fields */ 341 338 mac_update.mac_lower = cpu_to_le32(((u32)mac[2] << 24) |
-3
drivers/net/ethernet/intel/fm10k/fm10k_ptp.c
··· 57 57 struct sk_buff_head *list = &interface->ts_tx_skb_queue; 58 58 struct sk_buff *clone; 59 59 unsigned long flags; 60 - __le16 dglort; 61 60 62 61 /* create clone for us to return on the Tx path */ 63 62 clone = skb_clone_sk(skb); ··· 64 65 return; 65 66 66 67 FM10K_CB(clone)->ts_tx_timeout = jiffies + FM10K_TS_TX_TIMEOUT; 67 - dglort = FM10K_CB(clone)->fi.w.dglort; 68 - 69 68 spin_lock_irqsave(&list->lock, flags); 70 69 71 70 /* attempt to locate any buffers with the same dglort,
+3
drivers/net/ethernet/intel/ixgbe/ixgbe.h
··· 76 76 #define IXGBE_MAX_RXD 4096 77 77 #define IXGBE_MIN_RXD 64 78 78 79 + #define IXGBE_ETH_P_LLDP 0x88CC 80 + 79 81 /* flow control */ 80 82 #define IXGBE_MIN_FCRTL 0x40 81 83 #define IXGBE_MAX_FCRTL 0x7FF80 ··· 755 753 u32 timer_event_accumulator; 756 754 u32 vferr_refcount; 757 755 struct ixgbe_mac_addr *mac_table; 756 + u16 vxlan_port; 758 757 struct kobject *info_kobj; 759 758 #ifdef CONFIG_IXGBE_HWMON 760 759 struct hwmon_buff *ixgbe_hwmon_buff;
+110 -2
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
··· 50 50 #include <linux/if_bridge.h> 51 51 #include <linux/prefetch.h> 52 52 #include <scsi/fc/fc_fcoe.h> 53 + #include <net/vxlan.h> 53 54 54 55 #ifdef CONFIG_OF 55 56 #include <linux/of_net.h> ··· 1397 1396 union ixgbe_adv_rx_desc *rx_desc, 1398 1397 struct sk_buff *skb) 1399 1398 { 1399 + __le16 pkt_info = rx_desc->wb.lower.lo_dword.hs_rss.pkt_info; 1400 + __le16 hdr_info = rx_desc->wb.lower.lo_dword.hs_rss.hdr_info; 1401 + bool encap_pkt = false; 1402 + 1400 1403 skb_checksum_none_assert(skb); 1401 1404 1402 1405 /* Rx csum disabled */ 1403 1406 if (!(ring->netdev->features & NETIF_F_RXCSUM)) 1404 1407 return; 1408 + 1409 + if ((pkt_info & cpu_to_le16(IXGBE_RXDADV_PKTTYPE_VXLAN)) && 1410 + (hdr_info & cpu_to_le16(IXGBE_RXDADV_PKTTYPE_TUNNEL >> 16))) { 1411 + encap_pkt = true; 1412 + skb->encapsulation = 1; 1413 + skb->ip_summed = CHECKSUM_NONE; 1414 + } 1405 1415 1406 1416 /* if IP and error */ 1407 1417 if (ixgbe_test_staterr(rx_desc, IXGBE_RXD_STAT_IPCS) && ··· 1425 1413 return; 1426 1414 1427 1415 if (ixgbe_test_staterr(rx_desc, IXGBE_RXDADV_ERR_TCPE)) { 1428 - __le16 pkt_info = rx_desc->wb.lower.lo_dword.hs_rss.pkt_info; 1429 - 1430 1416 /* 1431 1417 * 82599 errata, UDP frames with a 0 checksum can be marked as 1432 1418 * checksum errors. 
··· 1439 1429 1440 1430 /* It must be a TCP or UDP packet with a valid checksum */ 1441 1431 skb->ip_summed = CHECKSUM_UNNECESSARY; 1432 + if (encap_pkt) { 1433 + if (!ixgbe_test_staterr(rx_desc, IXGBE_RXD_STAT_OUTERIPCS)) 1434 + return; 1435 + 1436 + if (ixgbe_test_staterr(rx_desc, IXGBE_RXDADV_ERR_OUTERIPER)) { 1437 + ring->rx_stats.csum_err++; 1438 + return; 1439 + } 1440 + /* If we checked the outer header let the stack know */ 1441 + skb->csum_level = 1; 1442 + } 1442 1443 } 1443 1444 1444 1445 static bool ixgbe_alloc_mapped_page(struct ixgbe_ring *rx_ring, ··· 3585 3564 /* Enable MAC Anti-Spoofing */ 3586 3565 hw->mac.ops.set_mac_anti_spoofing(hw, (adapter->num_vfs != 0), 3587 3566 adapter->num_vfs); 3567 + 3568 + /* Ensure LLDP is set for Ethertype Antispoofing if we will be 3569 + * calling set_ethertype_anti_spoofing for each VF in loop below 3570 + */ 3571 + if (hw->mac.ops.set_ethertype_anti_spoofing) 3572 + IXGBE_WRITE_REG(hw, IXGBE_ETQF(IXGBE_ETQF_FILTER_LLDP), 3573 + (IXGBE_ETQF_FILTER_EN | /* enable filter */ 3574 + IXGBE_ETQF_TX_ANTISPOOF | /* tx antispoof */ 3575 + IXGBE_ETH_P_LLDP)); /* LLDP eth type */ 3576 + 3588 3577 /* For VFs that have spoof checking turned off */ 3589 3578 for (i = 0; i < adapter->num_vfs; i++) { 3590 3579 if (!adapter->vfinfo[i].spoofchk_enabled) 3591 3580 ixgbe_ndo_set_vf_spoofchk(adapter->netdev, i, false); 3581 + 3582 + /* enable ethertype anti spoofing if hw supports it */ 3583 + if (hw->mac.ops.set_ethertype_anti_spoofing) 3584 + hw->mac.ops.set_ethertype_anti_spoofing(hw, true, i); 3592 3585 } 3593 3586 } 3594 3587 ··· 5662 5627 5663 5628 ixgbe_up_complete(adapter); 5664 5629 5630 + #if IS_ENABLED(CONFIG_IXGBE_VXLAN) 5631 + vxlan_get_rx_port(netdev); 5632 + 5633 + #endif 5665 5634 return 0; 5666 5635 5667 5636 err_set_queues: ··· 7810 7771 return 0; 7811 7772 } 7812 7773 7774 + /** 7775 + * ixgbe_add_vxlan_port - Get notifications about VXLAN ports that come up 7776 + * @dev: The port's netdev 7777 + * @sa_family: 
Socket Family that VXLAN is notifiying us about 7778 + * @port: New UDP port number that VXLAN started listening to 7779 + **/ 7780 + static void ixgbe_add_vxlan_port(struct net_device *dev, sa_family_t sa_family, 7781 + __be16 port) 7782 + { 7783 + struct ixgbe_adapter *adapter = netdev_priv(dev); 7784 + struct ixgbe_hw *hw = &adapter->hw; 7785 + u16 new_port = ntohs(port); 7786 + 7787 + if (sa_family == AF_INET6) 7788 + return; 7789 + 7790 + if (adapter->vxlan_port == new_port) { 7791 + netdev_info(dev, "Port %d already offloaded\n", new_port); 7792 + return; 7793 + } 7794 + 7795 + if (adapter->vxlan_port) { 7796 + netdev_info(dev, 7797 + "Hit Max num of UDP ports, not adding port %d\n", 7798 + new_port); 7799 + return; 7800 + } 7801 + 7802 + adapter->vxlan_port = new_port; 7803 + IXGBE_WRITE_REG(hw, IXGBE_VXLANCTRL, new_port); 7804 + } 7805 + 7806 + /** 7807 + * ixgbe_del_vxlan_port - Get notifications about VXLAN ports that go away 7808 + * @dev: The port's netdev 7809 + * @sa_family: Socket Family that VXLAN is notifying us about 7810 + * @port: UDP port number that VXLAN stopped listening to 7811 + **/ 7812 + static void ixgbe_del_vxlan_port(struct net_device *dev, sa_family_t sa_family, 7813 + __be16 port) 7814 + { 7815 + struct ixgbe_adapter *adapter = netdev_priv(dev); 7816 + struct ixgbe_hw *hw = &adapter->hw; 7817 + u16 new_port = ntohs(port); 7818 + 7819 + if (sa_family == AF_INET6) 7820 + return; 7821 + 7822 + if (adapter->vxlan_port != new_port) { 7823 + netdev_info(dev, "Port %d was not found, not deleting\n", 7824 + new_port); 7825 + return; 7826 + } 7827 + 7828 + adapter->vxlan_port = 0; 7829 + IXGBE_WRITE_REG(hw, IXGBE_VXLANCTRL, 0); 7830 + } 7831 + 7813 7832 static int ixgbe_ndo_fdb_add(struct ndmsg *ndm, struct nlattr *tb[], 7814 7833 struct net_device *dev, 7815 7834 const unsigned char *addr, u16 vid, ··· 8079 7982 .ndo_bridge_getlink = ixgbe_ndo_bridge_getlink, 8080 7983 .ndo_dfwd_add_station = ixgbe_fwd_add, 8081 7984 .ndo_dfwd_del_station = 
ixgbe_fwd_del, 7985 + .ndo_add_vxlan_port = ixgbe_add_vxlan_port, 7986 + .ndo_del_vxlan_port = ixgbe_del_vxlan_port, 8082 7987 }; 8083 7988 8084 7989 /** ··· 8437 8338 8438 8339 netdev->priv_flags |= IFF_UNICAST_FLT; 8439 8340 netdev->priv_flags |= IFF_SUPP_NOFCS; 8341 + 8342 + switch (adapter->hw.mac.type) { 8343 + case ixgbe_mac_X550: 8344 + case ixgbe_mac_X550EM_x: 8345 + netdev->hw_enc_features |= NETIF_F_RXCSUM; 8346 + break; 8347 + default: 8348 + break; 8349 + } 8440 8350 8441 8351 #ifdef CONFIG_IXGBE_DCB 8442 8352 netdev->dcbnl_ops = &dcbnl_ops;
+8 -8
drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
··· 101 101 adapter->dcb_cfg.num_tcs.pfc_tcs = 1; 102 102 } 103 103 104 - /* We do not support RSS w/ SR-IOV */ 105 - adapter->ring_feature[RING_F_RSS].limit = 1; 106 - 107 104 /* Disable RSC when in SR-IOV mode */ 108 105 adapter->flags2 &= ~(IXGBE_FLAG2_RSC_CAPABLE | 109 106 IXGBE_FLAG2_RSC_ENABLED); ··· 1094 1097 u16 vlan, u8 qos) 1095 1098 { 1096 1099 struct ixgbe_hw *hw = &adapter->hw; 1097 - int err = 0; 1100 + int err; 1098 1101 1099 - if (adapter->vfinfo[vf].pf_vlan) 1100 - err = ixgbe_set_vf_vlan(adapter, false, 1101 - adapter->vfinfo[vf].pf_vlan, 1102 - vf); 1102 + err = ixgbe_set_vf_vlan(adapter, true, vlan, vf); 1103 1103 if (err) 1104 1104 goto out; 1105 + 1105 1106 ixgbe_set_vmvir(adapter, vlan, qos, vf); 1106 1107 ixgbe_set_vmolr(hw, vf, false); 1107 1108 if (adapter->vfinfo[vf].spoofchk_enabled) ··· 1138 1143 hw->mac.ops.set_vlan_anti_spoofing(hw, false, vf); 1139 1144 if (adapter->vfinfo[vf].vlan_count) 1140 1145 adapter->vfinfo[vf].vlan_count--; 1146 + 1147 + /* disable hide VLAN on X550 */ 1148 + if (hw->mac.type >= ixgbe_mac_X550) 1149 + ixgbe_write_qde(adapter, vf, IXGBE_QDE_ENABLE); 1150 + 1141 1151 adapter->vfinfo[vf].pf_vlan = 0; 1142 1152 adapter->vfinfo[vf].pf_qos = 0; 1143 1153
+12
drivers/net/ethernet/intel/ixgbe/ixgbe_type.h
··· 378 378 #define IXGBE_SPOOF_MACAS_MASK 0xFF 379 379 #define IXGBE_SPOOF_VLANAS_MASK 0xFF00 380 380 #define IXGBE_SPOOF_VLANAS_SHIFT 8 381 + #define IXGBE_SPOOF_ETHERTYPEAS 0xFF000000 382 + #define IXGBE_SPOOF_ETHERTYPEAS_SHIFT 16 381 383 #define IXGBE_PFVFSPOOF_REG_COUNT 8 382 384 383 385 #define IXGBE_DCA_TXCTRL(_i) (0x07200 + ((_i) * 4)) /* 16 of these (0-15) */ ··· 401 399 402 400 #define IXGBE_WUPL 0x05900 403 401 #define IXGBE_WUPM 0x05A00 /* wake up pkt memory 0x5A00-0x5A7C */ 402 + #define IXGBE_VXLANCTRL 0x0000507C /* Rx filter VXLAN UDPPORT Register */ 404 403 #define IXGBE_FHFT(_n) (0x09000 + ((_n) * 0x100)) /* Flex host filter table */ 405 404 #define IXGBE_FHFT_EXT(_n) (0x09800 + ((_n) * 0x100)) /* Ext Flexible Host 406 405 * Filter Table */ ··· 1543 1540 #define IXGBE_MAX_ETQF_FILTERS 8 1544 1541 #define IXGBE_ETQF_FCOE 0x08000000 /* bit 27 */ 1545 1542 #define IXGBE_ETQF_BCN 0x10000000 /* bit 28 */ 1543 + #define IXGBE_ETQF_TX_ANTISPOOF 0x20000000 /* bit 29 */ 1546 1544 #define IXGBE_ETQF_1588 0x40000000 /* bit 30 */ 1547 1545 #define IXGBE_ETQF_FILTER_EN 0x80000000 /* bit 31 */ 1548 1546 #define IXGBE_ETQF_POOL_ENABLE (1 << 26) /* bit 26 */ ··· 1569 1565 #define IXGBE_ETQF_FILTER_FCOE 2 1570 1566 #define IXGBE_ETQF_FILTER_1588 3 1571 1567 #define IXGBE_ETQF_FILTER_FIP 4 1568 + #define IXGBE_ETQF_FILTER_LLDP 5 1569 + #define IXGBE_ETQF_FILTER_LACP 6 1570 + 1572 1571 /* VLAN Control Bit Masks */ 1573 1572 #define IXGBE_VLNCTRL_VET 0x0000FFFF /* bits 0-15 */ 1574 1573 #define IXGBE_VLNCTRL_CFI 0x10000000 /* bit 28 */ ··· 2129 2122 #define IXGBE_RXD_STAT_IPCS 0x40 /* IP xsum calculated */ 2130 2123 #define IXGBE_RXD_STAT_PIF 0x80 /* passed in-exact filter */ 2131 2124 #define IXGBE_RXD_STAT_CRCV 0x100 /* Speculative CRC Valid */ 2125 + #define IXGBE_RXD_STAT_OUTERIPCS 0x100 /* Cloud IP xsum calculated */ 2132 2126 #define IXGBE_RXD_STAT_VEXT 0x200 /* 1st VLAN found */ 2133 2127 #define IXGBE_RXD_STAT_UDPV 0x400 /* Valid UDP checksum */ 2134 2128 
#define IXGBE_RXD_STAT_DYNINT 0x800 /* Pkt caused INT via DYNINT */ ··· 2147 2139 #define IXGBE_RXD_ERR_IPE 0x80 /* IP Checksum Error */ 2148 2140 #define IXGBE_RXDADV_ERR_MASK 0xfff00000 /* RDESC.ERRORS mask */ 2149 2141 #define IXGBE_RXDADV_ERR_SHIFT 20 /* RDESC.ERRORS shift */ 2142 + #define IXGBE_RXDADV_ERR_OUTERIPER 0x04000000 /* CRC IP Header error */ 2150 2143 #define IXGBE_RXDADV_ERR_FCEOFE 0x80000000 /* FCoEFe/IPE */ 2151 2144 #define IXGBE_RXDADV_ERR_FCERR 0x00700000 /* FCERR/FDIRERR */ 2152 2145 #define IXGBE_RXDADV_ERR_FDIR_LEN 0x00100000 /* FDIR Length error */ ··· 2236 2227 #define IXGBE_RXDADV_PKTTYPE_UDP 0x00000200 /* UDP hdr present */ 2237 2228 #define IXGBE_RXDADV_PKTTYPE_SCTP 0x00000400 /* SCTP hdr present */ 2238 2229 #define IXGBE_RXDADV_PKTTYPE_NFS 0x00000800 /* NFS hdr present */ 2230 + #define IXGBE_RXDADV_PKTTYPE_VXLAN 0x00000800 /* VXLAN hdr present */ 2231 + #define IXGBE_RXDADV_PKTTYPE_TUNNEL 0x00010000 /* Tunnel type */ 2239 2232 #define IXGBE_RXDADV_PKTTYPE_IPSEC_ESP 0x00001000 /* IPSec ESP */ 2240 2233 #define IXGBE_RXDADV_PKTTYPE_IPSEC_AH 0x00002000 /* IPSec AH */ 2241 2234 #define IXGBE_RXDADV_PKTTYPE_LINKSEC 0x00004000 /* LinkSec Encap */ ··· 3067 3056 s32 (*set_fw_drv_ver)(struct ixgbe_hw *, u8, u8, u8, u8); 3068 3057 s32 (*get_thermal_sensor_data)(struct ixgbe_hw *); 3069 3058 s32 (*init_thermal_sensor_thresh)(struct ixgbe_hw *hw); 3059 + void (*set_ethertype_anti_spoofing)(struct ixgbe_hw *, bool, int); 3070 3060 3071 3061 /* DMA Coalescing */ 3072 3062 s32 (*dmac_config)(struct ixgbe_hw *hw);
-3
drivers/net/ethernet/intel/ixgbe/ixgbe_x540.c
··· 55 55 { 56 56 struct ixgbe_mac_info *mac = &hw->mac; 57 57 58 - /* Call PHY identify routine to get the phy type */ 59 - ixgbe_identify_phy_generic(hw); 60 - 61 58 mac->mcft_size = IXGBE_X540_MC_TBL_SIZE; 62 59 mac->vft_size = IXGBE_X540_VFT_TBL_SIZE; 63 60 mac->num_rar_entries = IXGBE_X540_RAR_ENTRIES;
+59 -31
drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c
··· 80 80 * Initializes the EEPROM parameters ixgbe_eeprom_info within the 81 81 * ixgbe_hw struct in order to set up EEPROM access. 82 82 **/ 83 - s32 ixgbe_init_eeprom_params_X550(struct ixgbe_hw *hw) 83 + static s32 ixgbe_init_eeprom_params_X550(struct ixgbe_hw *hw) 84 84 { 85 85 struct ixgbe_eeprom_info *eeprom = &hw->eeprom; 86 86 u32 eec; ··· 110 110 * @device_type: 3 bit device type 111 111 * @phy_data: Pointer to read data from the register 112 112 **/ 113 - s32 ixgbe_read_iosf_sb_reg_x550(struct ixgbe_hw *hw, u32 reg_addr, 114 - u32 device_type, u32 *data) 113 + static s32 ixgbe_read_iosf_sb_reg_x550(struct ixgbe_hw *hw, u32 reg_addr, 114 + u32 device_type, u32 *data) 115 115 { 116 116 u32 i, command, error; 117 117 ··· 158 158 * 159 159 * Reads a 16 bit word from the EEPROM using the hostif. 160 160 **/ 161 - s32 ixgbe_read_ee_hostif_data_X550(struct ixgbe_hw *hw, u16 offset, u16 *data) 161 + static s32 ixgbe_read_ee_hostif_data_X550(struct ixgbe_hw *hw, u16 offset, 162 + u16 *data) 162 163 { 163 164 s32 status; 164 165 struct ixgbe_hic_read_shadow_ram buffer; ··· 194 193 * 195 194 * Reads a 16 bit word(s) from the EEPROM using the hostif. 
196 195 **/ 197 - s32 ixgbe_read_ee_hostif_buffer_X550(struct ixgbe_hw *hw, 198 - u16 offset, u16 words, u16 *data) 196 + static s32 ixgbe_read_ee_hostif_buffer_X550(struct ixgbe_hw *hw, 197 + u16 offset, u16 words, u16 *data) 199 198 { 200 199 struct ixgbe_hic_read_shadow_ram buffer; 201 200 u32 current_word = 0; ··· 332 331 * 333 332 * Returns a negative error code on error, or the 16-bit checksum 334 333 **/ 335 - s32 ixgbe_calc_checksum_X550(struct ixgbe_hw *hw, u16 *buffer, u32 buffer_size) 334 + static s32 ixgbe_calc_checksum_X550(struct ixgbe_hw *hw, u16 *buffer, 335 + u32 buffer_size) 336 336 { 337 337 u16 eeprom_ptrs[IXGBE_EEPROM_LAST_WORD + 1]; 338 338 u16 *local_buffer; ··· 409 407 * 410 408 * Returns a negative error code on error, or the 16-bit checksum 411 409 **/ 412 - s32 ixgbe_calc_eeprom_checksum_X550(struct ixgbe_hw *hw) 410 + static s32 ixgbe_calc_eeprom_checksum_X550(struct ixgbe_hw *hw) 413 411 { 414 412 return ixgbe_calc_checksum_X550(hw, NULL, 0); 415 413 } ··· 421 419 * 422 420 * Reads a 16 bit word from the EEPROM using the hostif. 423 421 **/ 424 - s32 ixgbe_read_ee_hostif_X550(struct ixgbe_hw *hw, u16 offset, u16 *data) 422 + static s32 ixgbe_read_ee_hostif_X550(struct ixgbe_hw *hw, u16 offset, u16 *data) 425 423 { 426 424 s32 status = 0; 427 425 ··· 442 440 * Performs checksum calculation and validates the EEPROM checksum. If the 443 441 * caller does not need checksum_val, the value can be NULL. 444 442 **/ 445 - s32 ixgbe_validate_eeprom_checksum_X550(struct ixgbe_hw *hw, u16 *checksum_val) 443 + static s32 ixgbe_validate_eeprom_checksum_X550(struct ixgbe_hw *hw, 444 + u16 *checksum_val) 446 445 { 447 446 s32 status; 448 447 u16 checksum; ··· 492 489 * 493 490 * Write a 16 bit word to the EEPROM using the hostif. 
494 491 **/ 495 - s32 ixgbe_write_ee_hostif_data_X550(struct ixgbe_hw *hw, u16 offset, u16 data) 492 + static s32 ixgbe_write_ee_hostif_data_X550(struct ixgbe_hw *hw, u16 offset, 493 + u16 data) 496 494 { 497 495 s32 status; 498 496 struct ixgbe_hic_write_shadow_ram buffer; ··· 521 517 * 522 518 * Write a 16 bit word to the EEPROM using the hostif. 523 519 **/ 524 - s32 ixgbe_write_ee_hostif_X550(struct ixgbe_hw *hw, u16 offset, u16 data) 520 + static s32 ixgbe_write_ee_hostif_X550(struct ixgbe_hw *hw, u16 offset, u16 data) 525 521 { 526 522 s32 status = 0; 527 523 ··· 541 537 * 542 538 * Issue a shadow RAM dump to FW to copy EEPROM from shadow RAM to the flash. 543 539 **/ 544 - s32 ixgbe_update_flash_X550(struct ixgbe_hw *hw) 540 + static s32 ixgbe_update_flash_X550(struct ixgbe_hw *hw) 545 541 { 546 542 s32 status = 0; 547 543 union ixgbe_hic_hdr2 buffer; ··· 564 560 * checksum and updates the EEPROM and instructs the hardware to update 565 561 * the flash. 566 562 **/ 567 - s32 ixgbe_update_eeprom_checksum_X550(struct ixgbe_hw *hw) 563 + static s32 ixgbe_update_eeprom_checksum_X550(struct ixgbe_hw *hw) 568 564 { 569 565 s32 status; 570 566 u16 checksum = 0; ··· 604 600 * 605 601 * Write a 16 bit word(s) to the EEPROM using the hostif. 
606 602 **/ 607 - s32 ixgbe_write_ee_hostif_buffer_X550(struct ixgbe_hw *hw, 608 - u16 offset, u16 words, u16 *data) 603 + static s32 ixgbe_write_ee_hostif_buffer_X550(struct ixgbe_hw *hw, 604 + u16 offset, u16 words, 605 + u16 *data) 609 606 { 610 607 s32 status = 0; 611 608 u32 i = 0; ··· 635 630 /** ixgbe_init_mac_link_ops_X550em - init mac link function pointers 636 631 * @hw: pointer to hardware structure 637 632 **/ 638 - void ixgbe_init_mac_link_ops_X550em(struct ixgbe_hw *hw) 633 + static void ixgbe_init_mac_link_ops_X550em(struct ixgbe_hw *hw) 639 634 { 640 635 struct ixgbe_mac_info *mac = &hw->mac; 641 636 ··· 652 647 /** ixgbe_setup_sfp_modules_X550em - Setup SFP module 653 648 * @hw: pointer to hardware structure 654 649 */ 655 - s32 ixgbe_setup_sfp_modules_X550em(struct ixgbe_hw *hw) 650 + static s32 ixgbe_setup_sfp_modules_X550em(struct ixgbe_hw *hw) 656 651 { 657 652 bool setup_linear; 658 653 u16 reg_slice, edc_mode; ··· 708 703 * @speed: pointer to link speed 709 704 * @autoneg: true when autoneg or autotry is enabled 710 705 **/ 711 - s32 ixgbe_get_link_capabilities_X550em(struct ixgbe_hw *hw, 712 - ixgbe_link_speed *speed, 713 - bool *autoneg) 706 + static s32 ixgbe_get_link_capabilities_X550em(struct ixgbe_hw *hw, 707 + ixgbe_link_speed *speed, 708 + bool *autoneg) 714 709 { 715 710 /* SFP */ 716 711 if (hw->phy.media_type == ixgbe_media_type_fiber) { ··· 745 740 * @device_type: 3 bit device type 746 741 * @data: Data to write to the register 747 742 **/ 748 - s32 ixgbe_write_iosf_sb_reg_x550(struct ixgbe_hw *hw, u32 reg_addr, 749 - u32 device_type, u32 data) 743 + static s32 ixgbe_write_iosf_sb_reg_x550(struct ixgbe_hw *hw, u32 reg_addr, 744 + u32 device_type, u32 data) 750 745 { 751 746 u32 i, command, error; 752 747 ··· 909 904 * 910 905 * Configures the integrated KX4 PHY. 
911 906 **/ 912 - s32 ixgbe_setup_kx4_x550em(struct ixgbe_hw *hw) 907 + static s32 ixgbe_setup_kx4_x550em(struct ixgbe_hw *hw) 913 908 { 914 909 s32 status; 915 910 u32 reg_val; ··· 947 942 * 948 943 * Configures the integrated KR PHY. 949 944 **/ 950 - s32 ixgbe_setup_kr_x550em(struct ixgbe_hw *hw) 945 + static s32 ixgbe_setup_kr_x550em(struct ixgbe_hw *hw) 951 946 { 952 947 s32 status; 953 948 u32 reg_val; ··· 992 987 * A return of a non-zero value indicates an error, and the base driver should 993 988 * not report link up. 994 989 **/ 995 - s32 ixgbe_setup_internal_phy_x550em(struct ixgbe_hw *hw) 990 + static s32 ixgbe_setup_internal_phy_x550em(struct ixgbe_hw *hw) 996 991 { 997 992 u32 status; 998 993 u16 lasi, autoneg_status, speed; ··· 1054 1049 * set during init_shared_code because the PHY/SFP type was 1055 1050 * not known. Perform the SFP init if necessary. 1056 1051 **/ 1057 - s32 ixgbe_init_phy_ops_X550em(struct ixgbe_hw *hw) 1052 + static s32 ixgbe_init_phy_ops_X550em(struct ixgbe_hw *hw) 1058 1053 { 1059 1054 struct ixgbe_phy_info *phy = &hw->phy; 1060 1055 s32 ret_val; ··· 1107 1102 * Returns the media type (fiber, copper, backplane) 1108 1103 * 1109 1104 */ 1110 - enum ixgbe_media_type ixgbe_get_media_type_X550em(struct ixgbe_hw *hw) 1105 + static enum ixgbe_media_type ixgbe_get_media_type_X550em(struct ixgbe_hw *hw) 1111 1106 { 1112 1107 enum ixgbe_media_type media_type; 1113 1108 ··· 1134 1129 /** ixgbe_init_ext_t_x550em - Start (unstall) the external Base T PHY. 1135 1130 ** @hw: pointer to hardware structure 1136 1131 **/ 1137 - s32 ixgbe_init_ext_t_x550em(struct ixgbe_hw *hw) 1132 + static s32 ixgbe_init_ext_t_x550em(struct ixgbe_hw *hw) 1138 1133 { 1139 1134 u32 status; 1140 1135 u16 reg; ··· 1207 1202 ** and clears all interrupts, perform a PHY reset, and perform a link (MAC) 1208 1203 ** reset. 
1209 1204 **/ 1210 - s32 ixgbe_reset_hw_X550em(struct ixgbe_hw *hw) 1205 + static s32 ixgbe_reset_hw_X550em(struct ixgbe_hw *hw) 1211 1206 { 1212 1207 ixgbe_link_speed link_speed; 1213 1208 s32 status; ··· 1300 1295 return status; 1301 1296 } 1302 1297 1298 + /** ixgbe_set_ethertype_anti_spoofing_X550 - Enable/Disable Ethertype 1299 + * anti-spoofing 1300 + * @hw: pointer to hardware structure 1301 + * @enable: enable or disable switch for Ethertype anti-spoofing 1302 + * @vf: Virtual Function pool - VF Pool to set for Ethertype anti-spoofing 1303 + **/ 1304 + void ixgbe_set_ethertype_anti_spoofing_X550(struct ixgbe_hw *hw, bool enable, 1305 + int vf) 1306 + { 1307 + int vf_target_reg = vf >> 3; 1308 + int vf_target_shift = vf % 8 + IXGBE_SPOOF_ETHERTYPEAS_SHIFT; 1309 + u32 pfvfspoof; 1310 + 1311 + pfvfspoof = IXGBE_READ_REG(hw, IXGBE_PFVFSPOOF(vf_target_reg)); 1312 + if (enable) 1313 + pfvfspoof |= (1 << vf_target_shift); 1314 + else 1315 + pfvfspoof &= ~(1 << vf_target_shift); 1316 + 1317 + IXGBE_WRITE_REG(hw, IXGBE_PFVFSPOOF(vf_target_reg), pfvfspoof); 1318 + } 1319 + 1303 1320 #define X550_COMMON_MAC \ 1304 1321 .init_hw = &ixgbe_init_hw_generic, \ 1305 1322 .start_hw = &ixgbe_start_hw_X540, \ ··· 1356 1329 .init_uta_tables = &ixgbe_init_uta_tables_generic, \ 1357 1330 .set_mac_anti_spoofing = &ixgbe_set_mac_anti_spoofing, \ 1358 1331 .set_vlan_anti_spoofing = &ixgbe_set_vlan_anti_spoofing, \ 1332 + .set_ethertype_anti_spoofing = \ 1333 + &ixgbe_set_ethertype_anti_spoofing_X550, \ 1359 1334 .acquire_swfw_sync = &ixgbe_acquire_swfw_sync_X540, \ 1360 1335 .release_swfw_sync = &ixgbe_release_swfw_sync_X540, \ 1361 1336 .disable_rx_buff = &ixgbe_disable_rx_buff_generic, \ ··· 1374 1345 .get_san_mac_addr = &ixgbe_get_san_mac_addr_generic, 1375 1346 .get_wwn_prefix = &ixgbe_get_wwn_prefix_generic, 1376 1347 .setup_link = &ixgbe_setup_mac_link_X540, 1377 - .set_rxpba = &ixgbe_set_rxpba_generic, 1378 1348 .get_link_capabilities = 
&ixgbe_get_copper_link_capabilities_generic, 1379 1349 .setup_sfp = NULL, 1380 1350 };
+28 -8
drivers/net/ethernet/intel/ixgbevf/ixgbevf.h
··· 43 43 #define BP_EXTENDED_STATS 44 44 #endif 45 45 46 + #define IXGBE_MAX_TXD_PWR 14 47 + #define IXGBE_MAX_DATA_PER_TXD BIT(IXGBE_MAX_TXD_PWR) 48 + 49 + /* Tx Descriptors needed, worst case */ 50 + #define TXD_USE_COUNT(S) DIV_ROUND_UP((S), IXGBE_MAX_DATA_PER_TXD) 51 + #define DESC_NEEDED (MAX_SKB_FRAGS + 4) 52 + 46 53 /* wrapper around a pointer to a socket buffer, 47 54 * so a DMA handle can be stored along with the buffer */ 48 55 struct ixgbevf_tx_buffer { ··· 92 85 u64 csum_err; 93 86 }; 94 87 88 + enum ixgbevf_ring_state_t { 89 + __IXGBEVF_TX_DETECT_HANG, 90 + __IXGBEVF_HANG_CHECK_ARMED, 91 + }; 92 + 93 + #define check_for_tx_hang(ring) \ 94 + test_bit(__IXGBEVF_TX_DETECT_HANG, &(ring)->state) 95 + #define set_check_for_tx_hang(ring) \ 96 + set_bit(__IXGBEVF_TX_DETECT_HANG, &(ring)->state) 97 + #define clear_check_for_tx_hang(ring) \ 98 + clear_bit(__IXGBEVF_TX_DETECT_HANG, &(ring)->state) 99 + 95 100 struct ixgbevf_ring { 96 101 struct ixgbevf_ring *next; 97 102 struct net_device *netdev; ··· 120 101 struct ixgbevf_tx_buffer *tx_buffer_info; 121 102 struct ixgbevf_rx_buffer *rx_buffer_info; 122 103 }; 123 - 104 + unsigned long state; 124 105 struct ixgbevf_stats stats; 125 106 struct u64_stats_sync syncp; 126 107 union { ··· 143 124 144 125 #define MAX_RX_QUEUES IXGBE_VF_MAX_RX_QUEUES 145 126 #define MAX_TX_QUEUES IXGBE_VF_MAX_TX_QUEUES 127 + #define IXGBEVF_MAX_RSS_QUEUES 2 146 128 147 129 #define IXGBEVF_DEFAULT_TXD 1024 148 130 #define IXGBEVF_DEFAULT_RXD 512 ··· 367 347 /* this field must be first, see ixgbevf_process_skb_fields */ 368 348 unsigned long active_vlans[BITS_TO_LONGS(VLAN_N_VID)]; 369 349 370 - struct timer_list watchdog_timer; 371 - struct work_struct reset_task; 372 350 struct ixgbevf_q_vector *q_vector[MAX_MSIX_Q_VECTORS]; 373 351 374 352 /* Interrupt Throttle Rate */ ··· 396 378 * thus the additional *_CAPABLE flags. 
397 379 */ 398 380 u32 flags; 399 - #define IXGBE_FLAG_IN_WATCHDOG_TASK (u32)(1) 400 - 381 + #define IXGBEVF_FLAG_RESET_REQUESTED (u32)(1) 401 382 #define IXGBEVF_FLAG_QUEUE_RESET_REQUESTED (u32)(1 << 2) 402 383 403 384 struct msix_entry *msix_entries; ··· 432 415 u32 link_speed; 433 416 bool link_up; 434 417 435 - spinlock_t mbx_lock; 418 + struct timer_list service_timer; 419 + struct work_struct service_task; 436 420 437 - struct work_struct watchdog_task; 421 + spinlock_t mbx_lock; 422 + unsigned long last_reset; 438 423 }; 439 424 440 425 enum ixbgevf_state_t { ··· 445 426 __IXGBEVF_DOWN, 446 427 __IXGBEVF_DISABLED, 447 428 __IXGBEVF_REMOVING, 448 - __IXGBEVF_WORK_INIT, 429 + __IXGBEVF_SERVICE_SCHED, 430 + __IXGBEVF_SERVICE_INITED, 449 431 }; 450 432 451 433 enum ixgbevf_boards {
+357 -174
drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
··· 98 98 module_param(debug, int, 0); 99 99 MODULE_PARM_DESC(debug, "Debug level (0=none,...,16=all)"); 100 100 101 + static void ixgbevf_service_event_schedule(struct ixgbevf_adapter *adapter) 102 + { 103 + if (!test_bit(__IXGBEVF_DOWN, &adapter->state) && 104 + !test_bit(__IXGBEVF_REMOVING, &adapter->state) && 105 + !test_and_set_bit(__IXGBEVF_SERVICE_SCHED, &adapter->state)) 106 + schedule_work(&adapter->service_task); 107 + } 108 + 109 + static void ixgbevf_service_event_complete(struct ixgbevf_adapter *adapter) 110 + { 111 + BUG_ON(!test_bit(__IXGBEVF_SERVICE_SCHED, &adapter->state)); 112 + 113 + /* flush memory to make sure state is correct before next watchdog */ 114 + smp_mb__before_atomic(); 115 + clear_bit(__IXGBEVF_SERVICE_SCHED, &adapter->state); 116 + } 117 + 101 118 /* forward decls */ 102 119 static void ixgbevf_queue_reset_subtask(struct ixgbevf_adapter *adapter); 103 120 static void ixgbevf_set_itr(struct ixgbevf_q_vector *q_vector); ··· 128 111 return; 129 112 hw->hw_addr = NULL; 130 113 dev_err(&adapter->pdev->dev, "Adapter removed\n"); 131 - if (test_bit(__IXGBEVF_WORK_INIT, &adapter->state)) 132 - schedule_work(&adapter->watchdog_task); 114 + if (test_bit(__IXGBEVF_SERVICE_INITED, &adapter->state)) 115 + ixgbevf_service_event_schedule(adapter); 133 116 } 134 117 135 118 static void ixgbevf_check_remove(struct ixgbe_hw *hw, u32 reg) ··· 216 199 /* tx_buffer must be completely set up in the transmit path */ 217 200 } 218 201 219 - #define IXGBE_MAX_TXD_PWR 14 220 - #define IXGBE_MAX_DATA_PER_TXD (1 << IXGBE_MAX_TXD_PWR) 202 + static u64 ixgbevf_get_tx_completed(struct ixgbevf_ring *ring) 203 + { 204 + return ring->stats.packets; 205 + } 221 206 222 - /* Tx Descriptors needed, worst case */ 223 - #define TXD_USE_COUNT(S) DIV_ROUND_UP((S), IXGBE_MAX_DATA_PER_TXD) 224 - #define DESC_NEEDED (MAX_SKB_FRAGS + 4) 207 + static u32 ixgbevf_get_tx_pending(struct ixgbevf_ring *ring) 208 + { 209 + struct ixgbevf_adapter *adapter = netdev_priv(ring->netdev); 
210 + struct ixgbe_hw *hw = &adapter->hw; 225 211 226 - static void ixgbevf_tx_timeout(struct net_device *netdev); 212 + u32 head = IXGBE_READ_REG(hw, IXGBE_VFTDH(ring->reg_idx)); 213 + u32 tail = IXGBE_READ_REG(hw, IXGBE_VFTDT(ring->reg_idx)); 214 + 215 + if (head != tail) 216 + return (head < tail) ? 217 + tail - head : (tail + ring->count - head); 218 + 219 + return 0; 220 + } 221 + 222 + static inline bool ixgbevf_check_tx_hang(struct ixgbevf_ring *tx_ring) 223 + { 224 + u32 tx_done = ixgbevf_get_tx_completed(tx_ring); 225 + u32 tx_done_old = tx_ring->tx_stats.tx_done_old; 226 + u32 tx_pending = ixgbevf_get_tx_pending(tx_ring); 227 + 228 + clear_check_for_tx_hang(tx_ring); 229 + 230 + /* Check for a hung queue, but be thorough. This verifies 231 + * that a transmit has been completed since the previous 232 + * check AND there is at least one packet pending. The 233 + * ARMED bit is set to indicate a potential hang. 234 + */ 235 + if ((tx_done_old == tx_done) && tx_pending) { 236 + /* make sure it is true for two checks in a row */ 237 + return test_and_set_bit(__IXGBEVF_HANG_CHECK_ARMED, 238 + &tx_ring->state); 239 + } 240 + /* reset the countdown */ 241 + clear_bit(__IXGBEVF_HANG_CHECK_ARMED, &tx_ring->state); 242 + 243 + /* update completed stats and continue */ 244 + tx_ring->tx_stats.tx_done_old = tx_done; 245 + 246 + return false; 247 + } 248 + 249 + static void ixgbevf_tx_timeout_reset(struct ixgbevf_adapter *adapter) 250 + { 251 + /* Do the reset outside of interrupt context */ 252 + if (!test_bit(__IXGBEVF_DOWN, &adapter->state)) { 253 + adapter->flags |= IXGBEVF_FLAG_RESET_REQUESTED; 254 + ixgbevf_service_event_schedule(adapter); 255 + } 256 + } 257 + 258 + /** 259 + * ixgbevf_tx_timeout - Respond to a Tx Hang 260 + * @netdev: network interface device structure 261 + **/ 262 + static void ixgbevf_tx_timeout(struct net_device *netdev) 263 + { 264 + struct ixgbevf_adapter *adapter = netdev_priv(netdev); 265 + 266 + ixgbevf_tx_timeout_reset(adapter); 267 
+}
 
 /**
  * ixgbevf_clean_tx_irq - Reclaim resources after transmit completes
···
 	u64_stats_update_end(&tx_ring->syncp);
 	q_vector->tx.total_bytes += total_bytes;
 	q_vector->tx.total_packets += total_packets;
+
+	if (check_for_tx_hang(tx_ring) && ixgbevf_check_tx_hang(tx_ring)) {
+		struct ixgbe_hw *hw = &adapter->hw;
+		union ixgbe_adv_tx_desc *eop_desc;
+
+		eop_desc = tx_ring->tx_buffer_info[i].next_to_watch;
+
+		pr_err("Detected Tx Unit Hang\n"
+		       "  Tx Queue             <%d>\n"
+		       "  TDH, TDT             <%x>, <%x>\n"
+		       "  next_to_use          <%x>\n"
+		       "  next_to_clean        <%x>\n"
+		       "tx_buffer_info[next_to_clean]\n"
+		       "  next_to_watch        <%p>\n"
+		       "  eop_desc->wb.status  <%x>\n"
+		       "  time_stamp           <%lx>\n"
+		       "  jiffies              <%lx>\n",
+		       tx_ring->queue_index,
+		       IXGBE_READ_REG(hw, IXGBE_VFTDH(tx_ring->reg_idx)),
+		       IXGBE_READ_REG(hw, IXGBE_VFTDT(tx_ring->reg_idx)),
+		       tx_ring->next_to_use, i,
+		       eop_desc, (eop_desc ? eop_desc->wb.status : 0),
+		       tx_ring->tx_buffer_info[i].time_stamp, jiffies);
+
+		netif_stop_subqueue(tx_ring->netdev, tx_ring->queue_index);
+
+		/* schedule immediate reset if we believe we hung */
+		ixgbevf_tx_timeout_reset(adapter);
+
+		return true;
+	}
 
 #define TX_WAKE_THRESHOLD (DESC_NEEDED * 2)
 	if (unlikely(total_packets && netif_carrier_ok(tx_ring->netdev) &&
···
 
 	hw->mac.get_link_status = 1;
 
-	if (!test_bit(__IXGBEVF_DOWN, &adapter->state) &&
-	    !test_bit(__IXGBEVF_REMOVING, &adapter->state))
-		mod_timer(&adapter->watchdog_timer, jiffies);
+	ixgbevf_service_event_schedule(adapter);
 
 	IXGBE_WRITE_REG(hw, IXGBE_VTEIMS, adapter->eims_other);
···
 	txdctl |= (1 << 8) |	/* HTHRESH = 1 */
 		  32;		/* PTHRESH = 32 */
 
+	clear_bit(__IXGBEVF_HANG_CHECK_ARMED, &ring->state);
+
 	IXGBE_WRITE_REG(hw,
 			IXGBE_VFTXDCTL(reg_idx), txdctl);
 
 	/* poll to verify queue is enabled */
···
 			      reg_idx);
 }
 
+static void ixgbevf_setup_vfmrqc(struct ixgbevf_adapter *adapter)
+{
+	struct ixgbe_hw *hw = &adapter->hw;
+	u32 vfmrqc = 0, vfreta = 0;
+	u32 rss_key[10];
+	u16 rss_i = adapter->num_rx_queues;
+	int i, j;
+
+	/* Fill out hash function seeds */
+	netdev_rss_key_fill(rss_key, sizeof(rss_key));
+	for (i = 0; i < 10; i++)
+		IXGBE_WRITE_REG(hw, IXGBE_VFRSSRK(i), rss_key[i]);
+
+	/* Fill out redirection table */
+	for (i = 0, j = 0; i < 64; i++, j++) {
+		if (j == rss_i)
+			j = 0;
+		vfreta = (vfreta << 8) | (j * 0x1);
+		if ((i & 3) == 3)
+			IXGBE_WRITE_REG(hw, IXGBE_VFRETA(i >> 2), vfreta);
+	}
+
+	/* Perform hash on these packet types */
+	vfmrqc |= IXGBE_VFMRQC_RSS_FIELD_IPV4 |
+		  IXGBE_VFMRQC_RSS_FIELD_IPV4_TCP |
+		  IXGBE_VFMRQC_RSS_FIELD_IPV6 |
+		  IXGBE_VFMRQC_RSS_FIELD_IPV6_TCP;
+
+	vfmrqc |= IXGBE_VFMRQC_RSSEN;
+
+	IXGBE_WRITE_REG(hw, IXGBE_VFMRQC, vfmrqc);
+}
+
 static void ixgbevf_configure_rx_ring(struct ixgbevf_adapter *adapter,
 				      struct ixgbevf_ring *ring)
 {
···
 	struct net_device *netdev = adapter->netdev;
 
 	ixgbevf_setup_psrtype(adapter);
+	if (hw->mac.type >= ixgbe_mac_X550_vf)
+		ixgbevf_setup_vfmrqc(adapter);
 
 	/* notify the PF of our intent to use this size of frame */
 	ixgbevf_rlpml_set_vf(hw, netdev->mtu + ETH_HLEN + ETH_FCS_LEN);
···
 	struct ixgbe_hw *hw = &adapter->hw;
 	unsigned int def_q = 0;
 	unsigned int num_tcs = 0;
-	unsigned int num_rx_queues = 1;
+	unsigned int num_rx_queues = adapter->num_rx_queues;
+	unsigned int num_tx_queues = adapter->num_tx_queues;
 	int err;
 
 	spin_lock_bh(&adapter->mbx_lock);
···
 		return err;
 
 	if (num_tcs > 1) {
+		/* we need only one Tx queue */
+		num_tx_queues = 1;
+
 		/* update default Tx ring register index */
 		adapter->tx_ring[0]->reg_idx = def_q;
 
···
 	}
 
 	/* if we have a bad config abort request queue reset */
-	if (adapter->num_rx_queues != num_rx_queues) {
+	if ((adapter->num_rx_queues != num_rx_queues) ||
+	    (adapter->num_tx_queues != num_tx_queues)) {
 		/* force mailbox timeout to prevent further messages */
 		hw->mbx.timeout = 0;
 
···
 	clear_bit(__IXGBEVF_DOWN, &adapter->state);
 	ixgbevf_napi_enable_all(adapter);
 
+	/* clear any pending interrupts, may auto mask */
+	IXGBE_READ_REG(hw, IXGBE_VTEICR);
+	ixgbevf_irq_enable(adapter);
+
 	/* enable transmits */
 	netif_tx_start_all_queues(netdev);
 
···
 	ixgbevf_init_last_counter_stats(adapter);
 
 	hw->mac.get_link_status = 1;
-	mod_timer(&adapter->watchdog_timer, jiffies);
+	mod_timer(&adapter->service_timer, jiffies);
 }
 
 void ixgbevf_up(struct ixgbevf_adapter *adapter)
 {
-	struct ixgbe_hw *hw = &adapter->hw;
-
 	ixgbevf_configure(adapter);
 
 	ixgbevf_up_complete(adapter);
-
-	/* clear any pending interrupts, may auto mask */
-	IXGBE_READ_REG(hw, IXGBE_VTEICR);
-
-	ixgbevf_irq_enable(adapter);
 }
 
 /**
···
 	for (i = 0; i < adapter->num_rx_queues; i++)
 		ixgbevf_disable_rx_queue(adapter, adapter->rx_ring[i]);
 
-	netif_tx_disable(netdev);
-
-	msleep(10);
+	usleep_range(10000, 20000);
 
 	netif_tx_stop_all_queues(netdev);
+
+	/* call carrier off first to avoid false dev_watchdog timeouts */
+	netif_carrier_off(netdev);
+	netif_tx_disable(netdev);
 
 	ixgbevf_irq_disable(adapter);
 
 	ixgbevf_napi_disable_all(adapter);
 
-	del_timer_sync(&adapter->watchdog_timer);
-	/* can't call flush scheduled work here because it can deadlock
-	 * if linkwatch_event tries to acquire the rtnl_lock which we are
-	 * holding */
-	while (adapter->flags & IXGBE_FLAG_IN_WATCHDOG_TASK)
-		msleep(1);
+	del_timer_sync(&adapter->service_timer);
 
 	/* disable transmits in the hardware now that interrupts are off */
 	for (i = 0; i < adapter->num_tx_queues; i++) {
···
 		IXGBE_WRITE_REG(hw, IXGBE_VFTXDCTL(reg_idx),
 				IXGBE_TXDCTL_SWFLSH);
 	}
-
-	netif_carrier_off(netdev);
 
 	if (!pci_channel_offline(adapter->pdev))
 		ixgbevf_reset(adapter);
···
 		memcpy(netdev->perm_addr, adapter->hw.mac.addr,
 		       netdev->addr_len);
 	}
+
+	adapter->last_reset = jiffies;
 }
 
 static int ixgbevf_acquire_msix_vectors(struct ixgbevf_adapter *adapter,
···
 		return;
 
 	/* we need as many queues as traffic classes */
-	if (num_tcs > 1)
+	if (num_tcs > 1) {
 		adapter->num_rx_queues = num_tcs;
+	} else {
+		u16 rss = min_t(u16, num_online_cpus(), IXGBEVF_MAX_RSS_QUEUES);
+
+		switch (hw->api_version) {
+		case ixgbe_mbox_api_11:
+			adapter->num_rx_queues = rss;
+			adapter->num_tx_queues = rss;
+		default:
+			break;
+		}
+	}
 }
 
 /**
···
 	struct ixgbe_hw *hw = &adapter->hw;
 	int i;
 
-	if (!adapter->link_up)
+	if (test_bit(__IXGBEVF_DOWN, &adapter->state) ||
+	    test_bit(__IXGBEVF_RESETTING, &adapter->state))
 		return;
 
 	UPDATE_VF_COUNTER_32bit(IXGBE_VFGPRC, adapter->stats.last_vfgprc,
···
 }
 
 /**
- * ixgbevf_watchdog - Timer Call-back
+ * ixgbevf_service_timer - Timer Call-back
  * @data: pointer to adapter cast into an unsigned long
  **/
-static void ixgbevf_watchdog(unsigned long data)
+static void ixgbevf_service_timer(unsigned long data)
 {
 	struct ixgbevf_adapter *adapter = (struct ixgbevf_adapter *)data;
-	struct ixgbe_hw *hw = &adapter->hw;
-	u32 eics = 0;
-	int i;
 
-	/*
-	 * Do the watchdog outside of interrupt context due to the lovely
-	 * delays that some of the newer hardware requires
-	 */
+	/* Reset the timer */
+	mod_timer(&adapter->service_timer, (HZ * 2) + jiffies);
 
-	if (test_bit(__IXGBEVF_DOWN, &adapter->state))
-		goto watchdog_short_circuit;
-
-	/* get one bit for every active tx/rx interrupt vector */
-	for (i = 0; i < adapter->num_msix_vectors - NON_Q_VECTORS; i++) {
-		struct ixgbevf_q_vector *qv = adapter->q_vector[i];
-		if (qv->rx.ring || qv->tx.ring)
-			eics |= 1 << i;
-	}
-
-	IXGBE_WRITE_REG(hw, IXGBE_VTEICS, eics);
-
-watchdog_short_circuit:
-	schedule_work(&adapter->watchdog_task);
+	ixgbevf_service_event_schedule(adapter);
 }
 
-/**
- * ixgbevf_tx_timeout - Respond to a Tx Hang
- * @netdev: network interface device structure
- **/
-static void ixgbevf_tx_timeout(struct net_device *netdev)
+static void ixgbevf_reset_subtask(struct ixgbevf_adapter *adapter)
 {
-	struct ixgbevf_adapter *adapter = netdev_priv(netdev);
+	if (!(adapter->flags & IXGBEVF_FLAG_RESET_REQUESTED))
+		return;
 
-	/* Do the reset outside of interrupt context */
-	schedule_work(&adapter->reset_task);
-}
-
-static void ixgbevf_reset_task(struct work_struct *work)
-{
-	struct ixgbevf_adapter *adapter;
-	adapter = container_of(work,
-			       struct ixgbevf_adapter, reset_task);
+	adapter->flags &= ~IXGBEVF_FLAG_RESET_REQUESTED;
 
 	/* If we're already down or resetting, just bail */
 	if (test_bit(__IXGBEVF_DOWN, &adapter->state) ||
-	    test_bit(__IXGBEVF_REMOVING, &adapter->state) ||
 	    test_bit(__IXGBEVF_RESETTING, &adapter->state))
 		return;
 
···
 	ixgbevf_reinit_locked(adapter);
 }
 
-/**
- * ixgbevf_watchdog_task - worker thread to bring link up
- * @work: pointer to work_struct containing our data
- **/
-static void ixgbevf_watchdog_task(struct work_struct *work)
+/* ixgbevf_check_hang_subtask - check for hung queues and dropped interrupts
+ * @adapter - pointer to the device adapter structure
+ *
+ * This function serves two purposes. First it strobes the interrupt lines
+ * in order to make certain interrupts are occurring. Secondly it sets the
+ * bits needed to check for TX hangs. As a result we should immediately
+ * determine if a hang has occurred.
+ */
+static void ixgbevf_check_hang_subtask(struct ixgbevf_adapter *adapter)
 {
-	struct ixgbevf_adapter *adapter = container_of(work,
-						       struct ixgbevf_adapter,
-						       watchdog_task);
-	struct net_device *netdev = adapter->netdev;
+	struct ixgbe_hw *hw = &adapter->hw;
+	u32 eics = 0;
+	int i;
+
+	/* If we're down or resetting, just bail */
+	if (test_bit(__IXGBEVF_DOWN, &adapter->state) ||
+	    test_bit(__IXGBEVF_RESETTING, &adapter->state))
+		return;
+
+	/* Force detection of hung controller */
+	if (netif_carrier_ok(adapter->netdev)) {
+		for (i = 0; i < adapter->num_tx_queues; i++)
+			set_check_for_tx_hang(adapter->tx_ring[i]);
+	}
+
+	/* get one bit for every active tx/rx interrupt vector */
+	for (i = 0; i < adapter->num_msix_vectors - NON_Q_VECTORS; i++) {
+		struct ixgbevf_q_vector *qv = adapter->q_vector[i];
+
+		if (qv->rx.ring || qv->tx.ring)
+			eics |= 1 << i;
+	}
+
+	/* Cause software interrupt to ensure rings are cleaned */
+	IXGBE_WRITE_REG(hw, IXGBE_VTEICS, eics);
+}
+
+/**
+ * ixgbevf_watchdog_update_link - update the link status
+ * @adapter - pointer to the device adapter structure
+ **/
+static void ixgbevf_watchdog_update_link(struct ixgbevf_adapter *adapter)
+{
 	struct ixgbe_hw *hw = &adapter->hw;
 	u32 link_speed = adapter->link_speed;
 	bool link_up = adapter->link_up;
-	s32 need_reset;
+	s32 err;
+
+	spin_lock_bh(&adapter->mbx_lock);
+
+	err = hw->mac.ops.check_link(hw, &link_speed, &link_up, false);
+
+	spin_unlock_bh(&adapter->mbx_lock);
+
+	/* if check for link returns error we will need to reset */
+	if (err && time_after(jiffies, adapter->last_reset + (10 * HZ))) {
+		adapter->flags |= IXGBEVF_FLAG_RESET_REQUESTED;
+		link_up = false;
+	}
+
+	adapter->link_up = link_up;
+	adapter->link_speed = link_speed;
+}
+
+/**
+ * ixgbevf_watchdog_link_is_up - update netif_carrier status and
+ *				 print link up message
+ * @adapter - pointer to the device adapter structure
+ **/
+static void ixgbevf_watchdog_link_is_up(struct ixgbevf_adapter *adapter)
+{
+	struct net_device *netdev = adapter->netdev;
+
+	/* only continue if link was previously down */
+	if (netif_carrier_ok(netdev))
+		return;
+
+	dev_info(&adapter->pdev->dev, "NIC Link is Up %s\n",
+		 (adapter->link_speed == IXGBE_LINK_SPEED_10GB_FULL) ?
+		 "10 Gbps" :
+		 (adapter->link_speed == IXGBE_LINK_SPEED_1GB_FULL) ?
+		 "1 Gbps" :
+		 (adapter->link_speed == IXGBE_LINK_SPEED_100_FULL) ?
+		 "100 Mbps" :
+		 "unknown speed");
+
+	netif_carrier_on(netdev);
+}
+
+/**
+ * ixgbevf_watchdog_link_is_down - update netif_carrier status and
+ *				   print link down message
+ * @adapter - pointer to the adapter structure
+ **/
+static void ixgbevf_watchdog_link_is_down(struct ixgbevf_adapter *adapter)
+{
+	struct net_device *netdev = adapter->netdev;
+
+	adapter->link_speed = 0;
+
+	/* only continue if link was up previously */
+	if (!netif_carrier_ok(netdev))
+		return;
+
+	dev_info(&adapter->pdev->dev, "NIC Link is Down\n");
+
+	netif_carrier_off(netdev);
+}
+
+/**
+ * ixgbevf_watchdog_subtask - worker thread to bring link up
+ * @work: pointer to work_struct containing our data
+ **/
+static void ixgbevf_watchdog_subtask(struct ixgbevf_adapter *adapter)
+{
+	/* if interface is down do nothing */
+	if (test_bit(__IXGBEVF_DOWN, &adapter->state) ||
+	    test_bit(__IXGBEVF_RESETTING, &adapter->state))
+		return;
+
+	ixgbevf_watchdog_update_link(adapter);
+
+	if (adapter->link_up)
+		ixgbevf_watchdog_link_is_up(adapter);
+	else
+		ixgbevf_watchdog_link_is_down(adapter);
+
+	ixgbevf_update_stats(adapter);
+}
+
+/**
+ * ixgbevf_service_task - manages and runs subtasks
+ * @work: pointer to work_struct containing our data
+ **/
+static void ixgbevf_service_task(struct work_struct *work)
+{
+	struct ixgbevf_adapter *adapter = container_of(work,
+						       struct ixgbevf_adapter,
+						       service_task);
+	struct ixgbe_hw *hw = &adapter->hw;
 
 	if (IXGBE_REMOVED(hw->hw_addr)) {
 		if (!test_bit(__IXGBEVF_DOWN, &adapter->state)) {
···
 		}
 		return;
 	}
+
 	ixgbevf_queue_reset_subtask(adapter);
+	ixgbevf_reset_subtask(adapter);
+	ixgbevf_watchdog_subtask(adapter);
+	ixgbevf_check_hang_subtask(adapter);
 
-	adapter->flags |= IXGBE_FLAG_IN_WATCHDOG_TASK;
-
-	/*
-	 * Always check the link on the watchdog because we have
-	 * no LSC interrupt
-	 */
-	spin_lock_bh(&adapter->mbx_lock);
-
-	need_reset = hw->mac.ops.check_link(hw, &link_speed, &link_up, false);
-
-	spin_unlock_bh(&adapter->mbx_lock);
-
-	if (need_reset) {
-		adapter->link_up = link_up;
-		adapter->link_speed = link_speed;
-		netif_carrier_off(netdev);
-		netif_tx_stop_all_queues(netdev);
-		schedule_work(&adapter->reset_task);
-		goto pf_has_reset;
-	}
-	adapter->link_up = link_up;
-	adapter->link_speed = link_speed;
-
-	if (link_up) {
-		if (!netif_carrier_ok(netdev)) {
-			char *link_speed_string;
-			switch (link_speed) {
-			case IXGBE_LINK_SPEED_10GB_FULL:
-				link_speed_string = "10 Gbps";
-				break;
-			case IXGBE_LINK_SPEED_1GB_FULL:
-				link_speed_string = "1 Gbps";
-				break;
-			case IXGBE_LINK_SPEED_100_FULL:
-				link_speed_string = "100 Mbps";
-				break;
-			default:
-				link_speed_string = "unknown speed";
-				break;
-			}
-			dev_info(&adapter->pdev->dev,
-				 "NIC Link is Up, %s\n", link_speed_string);
-			netif_carrier_on(netdev);
-			netif_tx_wake_all_queues(netdev);
-		}
-	} else {
-		adapter->link_up = false;
-		adapter->link_speed = 0;
-		if (netif_carrier_ok(netdev)) {
-			dev_info(&adapter->pdev->dev, "NIC Link is Down\n");
-			netif_carrier_off(netdev);
-			netif_tx_stop_all_queues(netdev);
-		}
-	}
-
-	ixgbevf_update_stats(adapter);
-
-pf_has_reset:
-	/* Reset the timer */
-	if (!test_bit(__IXGBEVF_DOWN, &adapter->state) &&
-	    !test_bit(__IXGBEVF_REMOVING, &adapter->state))
-		mod_timer(&adapter->watchdog_timer,
-			  round_jiffies(jiffies + (2 * HZ)));
-
-	adapter->flags &= ~IXGBE_FLAG_IN_WATCHDOG_TASK;
+	ixgbevf_service_event_complete(adapter);
 }
 
 /**
···
 	if (!adapter->num_msix_vectors)
 		return -ENOMEM;
 
-	/* disallow open during test */
-	if (test_bit(__IXGBEVF_TESTING, &adapter->state))
-		return -EBUSY;
-
 	if (hw->adapter_stopped) {
 		ixgbevf_reset(adapter);
 		/* if adapter is still stopped then PF isn't up and
···
 			goto err_setup_reset;
 		}
 	}
+
+	/* disallow open during test */
+	if (test_bit(__IXGBEVF_TESTING, &adapter->state))
+		return -EBUSY;
+
+	netif_carrier_off(netdev);
 
 	/* allocate transmit descriptors */
 	err = ixgbevf_setup_all_tx_resources(adapter);
···
 	 */
 	ixgbevf_map_rings_to_vectors(adapter);
 
-	ixgbevf_up_complete(adapter);
-
-	/* clear any pending interrupts, may auto mask */
-	IXGBE_READ_REG(hw, IXGBE_VTEICR);
 	err = ixgbevf_request_irq(adapter);
 	if (err)
 		goto err_req_irq;
 
-	ixgbevf_irq_enable(adapter);
+	ixgbevf_up_complete(adapter);
 
 	return 0;
 
···
 			    NETIF_F_HW_VLAN_CTAG_RX |
 			    NETIF_F_HW_VLAN_CTAG_FILTER;
 
-	netdev->vlan_features |= NETIF_F_TSO;
-	netdev->vlan_features |= NETIF_F_TSO6;
-	netdev->vlan_features |= NETIF_F_IP_CSUM;
-	netdev->vlan_features |= NETIF_F_IPV6_CSUM;
-	netdev->vlan_features |= NETIF_F_SG;
+	netdev->vlan_features |= NETIF_F_TSO |
+				 NETIF_F_TSO6 |
+				 NETIF_F_IP_CSUM |
+				 NETIF_F_IPV6_CSUM |
+				 NETIF_F_SG;
 
 	if (pci_using_dac)
 		netdev->features |= NETIF_F_HIGHDMA;
 
 	netdev->priv_flags |= IFF_UNICAST_FLT;
 
-	init_timer(&adapter->watchdog_timer);
-	adapter->watchdog_timer.function = ixgbevf_watchdog;
-	adapter->watchdog_timer.data = (unsigned long)adapter;
-
 	if (IXGBE_REMOVED(hw->hw_addr)) {
 		err = -EIO;
 		goto err_sw_init;
 	}
-	INIT_WORK(&adapter->reset_task, ixgbevf_reset_task);
-	INIT_WORK(&adapter->watchdog_task, ixgbevf_watchdog_task);
-	set_bit(__IXGBEVF_WORK_INIT, &adapter->state);
+
+	setup_timer(&adapter->service_timer, &ixgbevf_service_timer,
+		    (unsigned long)adapter);
+
+	INIT_WORK(&adapter->service_task, ixgbevf_service_task);
+	set_bit(__IXGBEVF_SERVICE_INITED, &adapter->state);
+	clear_bit(__IXGBEVF_SERVICE_SCHED, &adapter->state);
 
 	err = ixgbevf_init_interrupt_scheme(adapter);
 	if (err)
···
 	adapter = netdev_priv(netdev);
 
 	set_bit(__IXGBEVF_REMOVING, &adapter->state);
-
-	del_timer_sync(&adapter->watchdog_timer);
-
-	cancel_work_sync(&adapter->reset_task);
-	cancel_work_sync(&adapter->watchdog_task);
+	cancel_work_sync(&adapter->service_task);
 
 	if (netdev->reg_state == NETREG_REGISTERED)
 		unregister_netdev(netdev);
···
 	struct net_device *netdev = pci_get_drvdata(pdev);
 	struct ixgbevf_adapter *adapter = netdev_priv(netdev);
 
-	if (!test_bit(__IXGBEVF_WORK_INIT, &adapter->state))
+	if (!test_bit(__IXGBEVF_SERVICE_INITED, &adapter->state))
 		return PCI_ERS_RESULT_DISCONNECT;
 
 	rtnl_lock();
drivers/net/ethernet/intel/ixgbevf/regs.h (+10)
···
 #define IXGBE_VFGOTC_LSB	0x02020
 #define IXGBE_VFGOTC_MSB	0x02024
 #define IXGBE_VFMPRC		0x01034
+#define IXGBE_VFMRQC		0x3000
+#define IXGBE_VFRSSRK(x)	(0x3100 + ((x) * 4))
+#define IXGBE_VFRETA(x)		(0x3200 + ((x) * 4))
+
+/* VFMRQC bits */
+#define IXGBE_VFMRQC_RSSEN			0x00000001 /* RSS Enable */
+#define IXGBE_VFMRQC_RSS_FIELD_IPV4_TCP		0x00010000
+#define IXGBE_VFMRQC_RSS_FIELD_IPV4		0x00020000
+#define IXGBE_VFMRQC_RSS_FIELD_IPV6		0x00100000
+#define IXGBE_VFMRQC_RSS_FIELD_IPV6_TCP		0x00200000
 
 #define IXGBE_WRITE_FLUSH(a)	(IXGBE_READ_REG(a, IXGBE_VFSTATUS))