Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch '1GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue

Jeff Kirsher says:

====================
Intel Wired LAN Driver Updates 2019-02-05

This series contains updates to igc, e1000e, ixgbe, fm10k and driver
documentation.

Kai-Heng Feng fixes an e1000e issue where the Wake-On-LAN settings were
being set incorrectly during a system suspend.

Sasha addresses community feedback on the igc driver and provides a
number of code cleanups that remove unreachable or unused code. He also
adds basic ethtool support to the igc driver.

Mike Rapoport fixes the formatting of the driver documentation titles so
that each title is properly formatted and no longer lumped in with the
document's sections in the generated HTML kernel documentation.

Jiri Kosina replaces a hard-coded RAR entries value with the existing
define IXGBE_82599_RAR_ENTRIES.

Jake fixes up whitespace in the fm10k driver.

Konstantin Khlebnikov fixes an issue where, in some cases, the e1000e
driver continually resets during system boot: the watchdog task sees
items in the transmit buffer while the carrier is off (link is still
being established), which triggers a device reset to flush the buffer.
The fix moves this check/flush into the carrier-off branch of the
watchdog task.

Todd bumps the igb driver version to reflect the recent driver changes.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>

+1220 -144
+1
Documentation/networking/device_drivers/intel/e100.rst
···
 .. SPDX-License-Identifier: GPL-2.0+
 
+==============================================================
 Linux* Base Driver for the Intel(R) PRO/100 Family of Adapters
 ==============================================================
 
+1
Documentation/networking/device_drivers/intel/e1000.rst
···
 .. SPDX-License-Identifier: GPL-2.0+
 
+===========================================================
 Linux* Base Driver for Intel(R) Ethernet Network Connection
 ===========================================================
 
+1
Documentation/networking/device_drivers/intel/e1000e.rst
···
 .. SPDX-License-Identifier: GPL-2.0+
 
+======================================================
 Linux* Driver for Intel(R) Ethernet Network Connection
 ======================================================
 
+1
Documentation/networking/device_drivers/intel/fm10k.rst
···
 .. SPDX-License-Identifier: GPL-2.0+
 
+==============================================================
 Linux* Base Driver for Intel(R) Ethernet Multi-host Controller
 ==============================================================
 
+1
Documentation/networking/device_drivers/intel/i40e.rst
···
 .. SPDX-License-Identifier: GPL-2.0+
 
+==================================================================
 Linux* Base Driver for the Intel(R) Ethernet Controller 700 Series
 ==================================================================
 
+1
Documentation/networking/device_drivers/intel/iavf.rst
···
 .. SPDX-License-Identifier: GPL-2.0+
 
+==================================================================
 Linux* Base Driver for Intel(R) Ethernet Adaptive Virtual Function
 ==================================================================
 
+1
Documentation/networking/device_drivers/intel/ice.rst
···
 .. SPDX-License-Identifier: GPL-2.0+
 
+===================================================================
 Linux* Base Driver for the Intel(R) Ethernet Connection E800 Series
 ===================================================================
 
+1
Documentation/networking/device_drivers/intel/igb.rst
···
 .. SPDX-License-Identifier: GPL-2.0+
 
+===========================================================
 Linux* Base Driver for Intel(R) Ethernet Network Connection
 ===========================================================
 
+1
Documentation/networking/device_drivers/intel/igbvf.rst
···
 .. SPDX-License-Identifier: GPL-2.0+
 
+============================================================
 Linux* Base Virtual Function Driver for Intel(R) 1G Ethernet
 ============================================================
 
+1
Documentation/networking/device_drivers/intel/ixgb.rst
···
 .. SPDX-License-Identifier: GPL-2.0+
 
+=====================================================================
 Linux Base Driver for 10 Gigabit Intel(R) Ethernet Network Connection
 =====================================================================
 
+1
Documentation/networking/device_drivers/intel/ixgbe.rst
···
 .. SPDX-License-Identifier: GPL-2.0+
 
+=============================================================================
 Linux* Base Driver for the Intel(R) Ethernet 10 Gigabit PCI Express Adapters
 =============================================================================
 
+1
Documentation/networking/device_drivers/intel/ixgbevf.rst
···
 .. SPDX-License-Identifier: GPL-2.0+
 
+=============================================================
 Linux* Base Virtual Function Driver for Intel(R) 10G Ethernet
 =============================================================
 
+23 -10
drivers/net/ethernet/intel/e1000e/80003es2lan.c
···
 	ret_val =
 	    e1000_read_kmrn_reg_80003es2lan(hw, E1000_KMRNCTRLSTA_INBAND_PARAM,
 					    &kum_reg_data);
-	if (ret_val)
-		return ret_val;
-	kum_reg_data |= E1000_KMRNCTRLSTA_IBIST_DISABLE;
-	e1000_write_kmrn_reg_80003es2lan(hw, E1000_KMRNCTRLSTA_INBAND_PARAM,
-					 kum_reg_data);
+	if (!ret_val) {
+		kum_reg_data |= E1000_KMRNCTRLSTA_IBIST_DISABLE;
+		ret_val = e1000_write_kmrn_reg_80003es2lan(hw,
+					E1000_KMRNCTRLSTA_INBAND_PARAM,
+					kum_reg_data);
+		if (ret_val)
+			e_dbg("Error disabling far-end loopback\n");
+	} else {
+		e_dbg("Error disabling far-end loopback\n");
+	}
 
 	ret_val = e1000e_get_auto_rd_done(hw);
 	if (ret_val)
···
 		return ret_val;
 
 	/* Disable IBIST slave mode (far-end loopback) */
-	e1000_read_kmrn_reg_80003es2lan(hw, E1000_KMRNCTRLSTA_INBAND_PARAM,
-					&kum_reg_data);
-	kum_reg_data |= E1000_KMRNCTRLSTA_IBIST_DISABLE;
-	e1000_write_kmrn_reg_80003es2lan(hw, E1000_KMRNCTRLSTA_INBAND_PARAM,
-					 kum_reg_data);
+	ret_val =
+	    e1000_read_kmrn_reg_80003es2lan(hw, E1000_KMRNCTRLSTA_INBAND_PARAM,
+					    &kum_reg_data);
+	if (!ret_val) {
+		kum_reg_data |= E1000_KMRNCTRLSTA_IBIST_DISABLE;
+		ret_val = e1000_write_kmrn_reg_80003es2lan(hw,
+					E1000_KMRNCTRLSTA_INBAND_PARAM,
+					kum_reg_data);
+		if (ret_val)
+			e_dbg("Error disabling far-end loopback\n");
+	} else {
+		e_dbg("Error disabling far-end loopback\n");
+	}
 
 	/* Set the transmit descriptor write-back policy */
 	reg_data = er32(TXDCTL(0));
+8 -9
drivers/net/ethernet/intel/e1000e/netdev.c
···
 		/* 8000ES2LAN requires a Rx packet buffer work-around
 		 * on link down event; reset the controller to flush
 		 * the Rx packet buffer.
+		 *
+		 * If the link is lost the controller stops DMA, but
+		 * if there is queued Tx work it cannot be done. So
+		 * reset the controller to flush the Tx packet buffers.
 		 */
-		if (adapter->flags & FLAG_RX_NEEDS_RESTART)
+		if ((adapter->flags & FLAG_RX_NEEDS_RESTART) ||
+		    e1000_desc_unused(tx_ring) + 1 < tx_ring->count)
 			adapter->flags |= FLAG_RESTART_NOW;
 		else
 			pm_schedule_suspend(netdev->dev.parent,
···
 	adapter->gotc = adapter->stats.gotc - adapter->gotc_old;
 	adapter->gotc_old = adapter->stats.gotc;
 	spin_unlock(&adapter->stats64_lock);
-
-	/* If the link is lost the controller stops DMA, but
-	 * if there is queued Tx work it cannot be done. So
-	 * reset the controller to flush the Tx packet buffers.
-	 */
-	if (!netif_carrier_ok(netdev) &&
-	    (e1000_desc_unused(tx_ring) + 1 < tx_ring->count))
-		adapter->flags |= FLAG_RESTART_NOW;
 
 	/* If reset is necessary, do it outside of interrupt context. */
 	if (adapter->flags & FLAG_RESTART_NOW) {
···
 	netif_carrier_off(netdev);
 
 	e1000_print_device_info(adapter);
+
+	dev_pm_set_driver_flags(&pdev->dev, DPM_FLAG_NEVER_SKIP);
 
 	if (pci_dev_run_wake(pdev))
 		pm_runtime_put_noidle(&pdev->dev);
+1 -1
drivers/net/ethernet/intel/fm10k/fm10k_pf.c
···
  * @results: Pointer array to message, results[0] is pointer to message
  * @mbx: Pointer to mailbox information structure
  *
- * This function is a default handler for MSI-X requests from the VF.  The
+ * This function is a default handler for MSI-X requests from the VF. The
  * assumption is that in this case it is acceptable to just directly
  * hand off the message from the VF to the underlying shared code.
  **/
+1 -1
drivers/net/ethernet/intel/igb/igb_main.c
···
 #include "igb.h"
 
 #define MAJ 5
-#define MIN 4
+#define MIN 6
 #define BUILD 0
 #define DRV_VERSION __stringify(MAJ) "." __stringify(MIN) "." \
 	__stringify(BUILD) "-k"
+2 -1
drivers/net/ethernet/intel/igc/Makefile
···
 obj-$(CONFIG_IGC) += igc.o
 
-igc-objs := igc_main.o igc_mac.o igc_i225.o igc_base.o igc_nvm.o igc_phy.o
+igc-objs := igc_main.o igc_mac.o igc_i225.o igc_base.o igc_nvm.o igc_phy.o \
+	    igc_ethtool.o
+32 -2
drivers/net/ethernet/intel/igc/igc.h
···
 #include "igc_hw.h"
 
-/* main */
+/* forward declaration */
+void igc_set_ethtool_ops(struct net_device *);
+
+struct igc_adapter;
+struct igc_ring;
+
+void igc_up(struct igc_adapter *adapter);
+void igc_down(struct igc_adapter *adapter);
+int igc_setup_tx_resources(struct igc_ring *ring);
+int igc_setup_rx_resources(struct igc_ring *ring);
+void igc_free_tx_resources(struct igc_ring *ring);
+void igc_free_rx_resources(struct igc_ring *ring);
+unsigned int igc_get_max_rss_queues(struct igc_adapter *adapter);
+void igc_set_flag_queue_pairs(struct igc_adapter *adapter,
+			      const u32 max_rss_queues);
+int igc_reinit_queues(struct igc_adapter *adapter);
+bool igc_has_link(struct igc_adapter *adapter);
+void igc_reset(struct igc_adapter *adapter);
+int igc_set_spd_dplx(struct igc_adapter *adapter, u32 spd, u8 dplx);
+
 extern char igc_driver_name[];
 extern char igc_driver_version[];
+
+#define IGC_REGS_LEN			740
+#define IGC_RETA_SIZE			128
 
 /* Interrupt defines */
 #define IGC_START_ITR			648 /* ~6000 ints/sec */
 #define IGC_FLAG_HAS_MSI		BIT(0)
-#define IGC_FLAG_QUEUE_PAIRS		BIT(4)
+#define IGC_FLAG_QUEUE_PAIRS		BIT(3)
+#define IGC_FLAG_DMAC			BIT(4)
 #define IGC_FLAG_NEED_LINK_UPDATE	BIT(9)
 #define IGC_FLAG_MEDIA_RESET		BIT(10)
 #define IGC_FLAG_MAS_ENABLE		BIT(12)
 #define IGC_FLAG_HAS_MSIX		BIT(13)
 #define IGC_FLAG_VLAN_PROMISC		BIT(15)
+#define IGC_FLAG_RX_LEGACY		BIT(16)
 
 #define IGC_START_ITR			648 /* ~6000 ints/sec */
 #define IGC_4K_ITR			980
···
 #define IGC_RXBUFFER_2048		2048
 #define IGC_RXBUFFER_3072		3072
 
+#define AUTO_ALL_MODES			0
 #define IGC_RX_HDR_LEN			IGC_RXBUFFER_256
 
 /* RX and TX descriptor control thresholds.
···
 	struct igc_mac_addr *mac_table;
 
+	u8 rss_indir_tbl[IGC_RETA_SIZE];
+
 	unsigned long link_check_timeout;
 	struct igc_info ei;
 };
···
 
 	return 0;
 }
+
+/* forward declaration */
+void igc_reinit_locked(struct igc_adapter *);
 
 #define igc_rx_pg_size(_ring)	(PAGE_SIZE << igc_rx_pg_order(_ring))
 
+6 -70
drivers/net/ethernet/intel/igc/igc_base.c
···
 }
 
 /**
- * igc_check_for_link_base - Check for link
- * @hw: pointer to the HW structure
- *
- * If sgmii is enabled, then use the pcs register to determine link, otherwise
- * use the generic interface for determining link.
- */
-static s32 igc_check_for_link_base(struct igc_hw *hw)
-{
-	s32 ret_val = 0;
-
-	ret_val = igc_check_for_copper_link(hw);
-
-	return ret_val;
-}
-
-/**
  * igc_reset_hw_base - Reset hardware
  * @hw: pointer to the HW structure
  *
···
 }
 
 /**
- * igc_get_phy_id_base - Retrieve PHY addr and id
- * @hw: pointer to the HW structure
- *
- * Retrieves the PHY address and ID for both PHY's which do and do not use
- * sgmi interface.
- */
-static s32 igc_get_phy_id_base(struct igc_hw *hw)
-{
-	s32 ret_val = 0;
-
-	ret_val = igc_get_phy_id(hw);
-
-	return ret_val;
-}
-
-/**
  * igc_init_nvm_params_base - Init NVM func ptrs.
  * @hw: pointer to the HW structure
  */
···
 	if (size > 15)
 		size = 15;
 
+	nvm->type = igc_nvm_eeprom_spi;
 	nvm->word_size = BIT(size);
 	nvm->opcode_bits = 8;
 	nvm->delay_usec = 1;
···
 		goto out;
 	}
 
-	ret_val = igc_get_phy_id_base(hw);
+	ret_val = igc_get_phy_id(hw);
 	if (ret_val)
 		return ret_val;
 
-	igc_check_for_link_base(hw);
+	igc_check_for_copper_link(hw);
 
 	/* Verify phy id and set remaining function pointers */
 	switch (phy->id) {
···
 }
 
 /**
- * igc_get_link_up_info_base - Get link speed/duplex info
- * @hw: pointer to the HW structure
- * @speed: stores the current speed
- * @duplex: stores the current duplex
- *
- * This is a wrapper function, if using the serial gigabit media independent
- * interface, use PCS to retrieve the link speed and duplex information.
- * Otherwise, use the generic function to get the link speed and duplex info.
- */
-static s32 igc_get_link_up_info_base(struct igc_hw *hw, u16 *speed,
-				     u16 *duplex)
-{
-	s32 ret_val;
-
-	ret_val = igc_get_speed_and_duplex_copper(hw, speed, duplex);
-
-	return ret_val;
-}
-
-/**
  * igc_init_hw_base - Initialize hardware
  * @hw: pointer to the HW structure
  *
···
 	 * is no link.
 	 */
 	igc_clear_hw_cntrs_base(hw);
-
-	return ret_val;
-}
-
-/**
- * igc_read_mac_addr_base - Read device MAC address
- * @hw: pointer to the HW structure
- */
-static s32 igc_read_mac_addr_base(struct igc_hw *hw)
-{
-	s32 ret_val = 0;
-
-	ret_val = igc_read_mac_addr(hw);
 
 	return ret_val;
 }
···
 
 static struct igc_mac_operations igc_mac_ops_base = {
 	.init_hw		= igc_init_hw_base,
-	.check_for_link		= igc_check_for_link_base,
+	.check_for_link		= igc_check_for_copper_link,
 	.rar_set		= igc_rar_set,
-	.read_mac_addr		= igc_read_mac_addr_base,
-	.get_speed_and_duplex	= igc_get_link_up_info_base,
+	.read_mac_addr		= igc_read_mac_addr,
+	.get_speed_and_duplex	= igc_get_speed_and_duplex_copper,
 };
 
 static const struct igc_phy_operations igc_phy_ops_base = {
-25
drivers/net/ethernet/intel/igc/igc_base.h
···
 
 #define IGC_RAR_ENTRIES		16
 
-struct igc_adv_data_desc {
-	__le64 buffer_addr; /* Address of the descriptor's data buffer */
-	union {
-		u32 data;
-		struct {
-			u32 datalen:16; /* Data buffer length */
-			u32 rsvd:4;
-			u32 dtyp:4;  /* Descriptor type */
-			u32 dcmd:8;  /* Descriptor command */
-		} config;
-	} lower;
-	union {
-		u32 data;
-		struct {
-			u32 status:4;  /* Descriptor status */
-			u32 idx:4;
-			u32 popts:6;   /* Packet Options */
-			u32 paylen:18; /* Payload length */
-		} options;
-	} upper;
-};
-
 /* Receive Descriptor - Advanced */
 union igc_adv_rx_desc {
 	struct {
···
 		} upper;
 	} wb;  /* writeback */
 };
-
-/* Adv Transmit Descriptor Config Masks */
-#define IGC_ADVTXD_PAYLEN_SHIFT	14 /* Adv desc PAYLEN shift */
 
 /* Additional Transmit Descriptor Control definitions */
 #define IGC_TXDCTL_QUEUE_ENABLE	0x02000000 /* Ena specific Tx Queue */
+4
drivers/net/ethernet/intel/igc/igc_defines.h
···
 #ifndef _IGC_DEFINES_H_
 #define _IGC_DEFINES_H_
 
+/* Number of Transmit and Receive Descriptors must be a multiple of 8 */
+#define REQ_TX_DESCRIPTOR_MULTIPLE	8
+#define REQ_RX_DESCRIPTOR_MULTIPLE	8
+
 #define IGC_CTRL_EXT_DRV_LOAD	0x10000000 /* Drv loaded bit for FW */
 
 /* PCI Bus Info */
+1032
drivers/net/ethernet/intel/igc/igc_ethtool.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* Copyright (c) 2018 Intel Corporation */ 3 + 4 + /* ethtool support for igc */ 5 + #include <linux/pm_runtime.h> 6 + 7 + #include "igc.h" 8 + 9 + static const char igc_priv_flags_strings[][ETH_GSTRING_LEN] = { 10 + #define IGC_PRIV_FLAGS_LEGACY_RX BIT(0) 11 + "legacy-rx", 12 + }; 13 + 14 + #define IGC_PRIV_FLAGS_STR_LEN ARRAY_SIZE(igc_priv_flags_strings) 15 + 16 + static void igc_get_drvinfo(struct net_device *netdev, 17 + struct ethtool_drvinfo *drvinfo) 18 + { 19 + struct igc_adapter *adapter = netdev_priv(netdev); 20 + 21 + strlcpy(drvinfo->driver, igc_driver_name, sizeof(drvinfo->driver)); 22 + strlcpy(drvinfo->version, igc_driver_version, sizeof(drvinfo->version)); 23 + 24 + /* add fw_version here */ 25 + strlcpy(drvinfo->bus_info, pci_name(adapter->pdev), 26 + sizeof(drvinfo->bus_info)); 27 + 28 + drvinfo->n_priv_flags = IGC_PRIV_FLAGS_STR_LEN; 29 + } 30 + 31 + static int igc_get_regs_len(struct net_device *netdev) 32 + { 33 + return IGC_REGS_LEN * sizeof(u32); 34 + } 35 + 36 + static void igc_get_regs(struct net_device *netdev, 37 + struct ethtool_regs *regs, void *p) 38 + { 39 + struct igc_adapter *adapter = netdev_priv(netdev); 40 + struct igc_hw *hw = &adapter->hw; 41 + u32 *regs_buff = p; 42 + u8 i; 43 + 44 + memset(p, 0, IGC_REGS_LEN * sizeof(u32)); 45 + 46 + regs->version = (1u << 24) | (hw->revision_id << 16) | hw->device_id; 47 + 48 + /* General Registers */ 49 + regs_buff[0] = rd32(IGC_CTRL); 50 + regs_buff[1] = rd32(IGC_STATUS); 51 + regs_buff[2] = rd32(IGC_CTRL_EXT); 52 + regs_buff[3] = rd32(IGC_MDIC); 53 + regs_buff[4] = rd32(IGC_CONNSW); 54 + 55 + /* NVM Register */ 56 + regs_buff[5] = rd32(IGC_EECD); 57 + 58 + /* Interrupt */ 59 + /* Reading EICS for EICR because they read the 60 + * same but EICS does not clear on read 61 + */ 62 + regs_buff[6] = rd32(IGC_EICS); 63 + regs_buff[7] = rd32(IGC_EICS); 64 + regs_buff[8] = rd32(IGC_EIMS); 65 + regs_buff[9] = rd32(IGC_EIMC); 66 + regs_buff[10] = 
rd32(IGC_EIAC); 67 + regs_buff[11] = rd32(IGC_EIAM); 68 + /* Reading ICS for ICR because they read the 69 + * same but ICS does not clear on read 70 + */ 71 + regs_buff[12] = rd32(IGC_ICS); 72 + regs_buff[13] = rd32(IGC_ICS); 73 + regs_buff[14] = rd32(IGC_IMS); 74 + regs_buff[15] = rd32(IGC_IMC); 75 + regs_buff[16] = rd32(IGC_IAC); 76 + regs_buff[17] = rd32(IGC_IAM); 77 + 78 + /* Flow Control */ 79 + regs_buff[18] = rd32(IGC_FCAL); 80 + regs_buff[19] = rd32(IGC_FCAH); 81 + regs_buff[20] = rd32(IGC_FCTTV); 82 + regs_buff[21] = rd32(IGC_FCRTL); 83 + regs_buff[22] = rd32(IGC_FCRTH); 84 + regs_buff[23] = rd32(IGC_FCRTV); 85 + 86 + /* Receive */ 87 + regs_buff[24] = rd32(IGC_RCTL); 88 + regs_buff[25] = rd32(IGC_RXCSUM); 89 + regs_buff[26] = rd32(IGC_RLPML); 90 + regs_buff[27] = rd32(IGC_RFCTL); 91 + 92 + /* Transmit */ 93 + regs_buff[28] = rd32(IGC_TCTL); 94 + regs_buff[29] = rd32(IGC_TIPG); 95 + 96 + /* Wake Up */ 97 + 98 + /* MAC */ 99 + 100 + /* Statistics */ 101 + regs_buff[30] = adapter->stats.crcerrs; 102 + regs_buff[31] = adapter->stats.algnerrc; 103 + regs_buff[32] = adapter->stats.symerrs; 104 + regs_buff[33] = adapter->stats.rxerrc; 105 + regs_buff[34] = adapter->stats.mpc; 106 + regs_buff[35] = adapter->stats.scc; 107 + regs_buff[36] = adapter->stats.ecol; 108 + regs_buff[37] = adapter->stats.mcc; 109 + regs_buff[38] = adapter->stats.latecol; 110 + regs_buff[39] = adapter->stats.colc; 111 + regs_buff[40] = adapter->stats.dc; 112 + regs_buff[41] = adapter->stats.tncrs; 113 + regs_buff[42] = adapter->stats.sec; 114 + regs_buff[43] = adapter->stats.htdpmc; 115 + regs_buff[44] = adapter->stats.rlec; 116 + regs_buff[45] = adapter->stats.xonrxc; 117 + regs_buff[46] = adapter->stats.xontxc; 118 + regs_buff[47] = adapter->stats.xoffrxc; 119 + regs_buff[48] = adapter->stats.xofftxc; 120 + regs_buff[49] = adapter->stats.fcruc; 121 + regs_buff[50] = adapter->stats.prc64; 122 + regs_buff[51] = adapter->stats.prc127; 123 + regs_buff[52] = adapter->stats.prc255; 124 + 
regs_buff[53] = adapter->stats.prc511; 125 + regs_buff[54] = adapter->stats.prc1023; 126 + regs_buff[55] = adapter->stats.prc1522; 127 + regs_buff[56] = adapter->stats.gprc; 128 + regs_buff[57] = adapter->stats.bprc; 129 + regs_buff[58] = adapter->stats.mprc; 130 + regs_buff[59] = adapter->stats.gptc; 131 + regs_buff[60] = adapter->stats.gorc; 132 + regs_buff[61] = adapter->stats.gotc; 133 + regs_buff[62] = adapter->stats.rnbc; 134 + regs_buff[63] = adapter->stats.ruc; 135 + regs_buff[64] = adapter->stats.rfc; 136 + regs_buff[65] = adapter->stats.roc; 137 + regs_buff[66] = adapter->stats.rjc; 138 + regs_buff[67] = adapter->stats.mgprc; 139 + regs_buff[68] = adapter->stats.mgpdc; 140 + regs_buff[69] = adapter->stats.mgptc; 141 + regs_buff[70] = adapter->stats.tor; 142 + regs_buff[71] = adapter->stats.tot; 143 + regs_buff[72] = adapter->stats.tpr; 144 + regs_buff[73] = adapter->stats.tpt; 145 + regs_buff[74] = adapter->stats.ptc64; 146 + regs_buff[75] = adapter->stats.ptc127; 147 + regs_buff[76] = adapter->stats.ptc255; 148 + regs_buff[77] = adapter->stats.ptc511; 149 + regs_buff[78] = adapter->stats.ptc1023; 150 + regs_buff[79] = adapter->stats.ptc1522; 151 + regs_buff[80] = adapter->stats.mptc; 152 + regs_buff[81] = adapter->stats.bptc; 153 + regs_buff[82] = adapter->stats.tsctc; 154 + regs_buff[83] = adapter->stats.iac; 155 + regs_buff[84] = adapter->stats.rpthc; 156 + regs_buff[85] = adapter->stats.hgptc; 157 + regs_buff[86] = adapter->stats.hgorc; 158 + regs_buff[87] = adapter->stats.hgotc; 159 + regs_buff[88] = adapter->stats.lenerrs; 160 + regs_buff[89] = adapter->stats.scvpc; 161 + regs_buff[90] = adapter->stats.hrmpc; 162 + 163 + for (i = 0; i < 4; i++) 164 + regs_buff[91 + i] = rd32(IGC_SRRCTL(i)); 165 + for (i = 0; i < 4; i++) 166 + regs_buff[95 + i] = rd32(IGC_PSRTYPE(i)); 167 + for (i = 0; i < 4; i++) 168 + regs_buff[99 + i] = rd32(IGC_RDBAL(i)); 169 + for (i = 0; i < 4; i++) 170 + regs_buff[103 + i] = rd32(IGC_RDBAH(i)); 171 + for (i = 0; i < 4; i++) 
172 + regs_buff[107 + i] = rd32(IGC_RDLEN(i)); 173 + for (i = 0; i < 4; i++) 174 + regs_buff[111 + i] = rd32(IGC_RDH(i)); 175 + for (i = 0; i < 4; i++) 176 + regs_buff[115 + i] = rd32(IGC_RDT(i)); 177 + for (i = 0; i < 4; i++) 178 + regs_buff[119 + i] = rd32(IGC_RXDCTL(i)); 179 + 180 + for (i = 0; i < 10; i++) 181 + regs_buff[123 + i] = rd32(IGC_EITR(i)); 182 + for (i = 0; i < 16; i++) 183 + regs_buff[139 + i] = rd32(IGC_RAL(i)); 184 + for (i = 0; i < 16; i++) 185 + regs_buff[145 + i] = rd32(IGC_RAH(i)); 186 + 187 + for (i = 0; i < 4; i++) 188 + regs_buff[149 + i] = rd32(IGC_TDBAL(i)); 189 + for (i = 0; i < 4; i++) 190 + regs_buff[152 + i] = rd32(IGC_TDBAH(i)); 191 + for (i = 0; i < 4; i++) 192 + regs_buff[156 + i] = rd32(IGC_TDLEN(i)); 193 + for (i = 0; i < 4; i++) 194 + regs_buff[160 + i] = rd32(IGC_TDH(i)); 195 + for (i = 0; i < 4; i++) 196 + regs_buff[164 + i] = rd32(IGC_TDT(i)); 197 + for (i = 0; i < 4; i++) 198 + regs_buff[168 + i] = rd32(IGC_TXDCTL(i)); 199 + } 200 + 201 + static u32 igc_get_msglevel(struct net_device *netdev) 202 + { 203 + struct igc_adapter *adapter = netdev_priv(netdev); 204 + 205 + return adapter->msg_enable; 206 + } 207 + 208 + static void igc_set_msglevel(struct net_device *netdev, u32 data) 209 + { 210 + struct igc_adapter *adapter = netdev_priv(netdev); 211 + 212 + adapter->msg_enable = data; 213 + } 214 + 215 + static int igc_nway_reset(struct net_device *netdev) 216 + { 217 + struct igc_adapter *adapter = netdev_priv(netdev); 218 + 219 + if (netif_running(netdev)) 220 + igc_reinit_locked(adapter); 221 + return 0; 222 + } 223 + 224 + static u32 igc_get_link(struct net_device *netdev) 225 + { 226 + struct igc_adapter *adapter = netdev_priv(netdev); 227 + struct igc_mac_info *mac = &adapter->hw.mac; 228 + 229 + /* If the link is not reported up to netdev, interrupts are disabled, 230 + * and so the physical link state may have changed since we last 231 + * looked. 
Set get_link_status to make sure that the true link 232 + * state is interrogated, rather than pulling a cached and possibly 233 + * stale link state from the driver. 234 + */ 235 + if (!netif_carrier_ok(netdev)) 236 + mac->get_link_status = 1; 237 + 238 + return igc_has_link(adapter); 239 + } 240 + 241 + static int igc_get_eeprom_len(struct net_device *netdev) 242 + { 243 + struct igc_adapter *adapter = netdev_priv(netdev); 244 + 245 + return adapter->hw.nvm.word_size * 2; 246 + } 247 + 248 + static int igc_get_eeprom(struct net_device *netdev, 249 + struct ethtool_eeprom *eeprom, u8 *bytes) 250 + { 251 + struct igc_adapter *adapter = netdev_priv(netdev); 252 + struct igc_hw *hw = &adapter->hw; 253 + int first_word, last_word; 254 + u16 *eeprom_buff; 255 + int ret_val = 0; 256 + u16 i; 257 + 258 + if (eeprom->len == 0) 259 + return -EINVAL; 260 + 261 + eeprom->magic = hw->vendor_id | (hw->device_id << 16); 262 + 263 + first_word = eeprom->offset >> 1; 264 + last_word = (eeprom->offset + eeprom->len - 1) >> 1; 265 + 266 + eeprom_buff = kmalloc_array(last_word - first_word + 1, sizeof(u16), 267 + GFP_KERNEL); 268 + if (!eeprom_buff) 269 + return -ENOMEM; 270 + 271 + if (hw->nvm.type == igc_nvm_eeprom_spi) { 272 + ret_val = hw->nvm.ops.read(hw, first_word, 273 + last_word - first_word + 1, 274 + eeprom_buff); 275 + } else { 276 + for (i = 0; i < last_word - first_word + 1; i++) { 277 + ret_val = hw->nvm.ops.read(hw, first_word + i, 1, 278 + &eeprom_buff[i]); 279 + if (ret_val) 280 + break; 281 + } 282 + } 283 + 284 + /* Device's eeprom is always little-endian, word addressable */ 285 + for (i = 0; i < last_word - first_word + 1; i++) 286 + le16_to_cpus(&eeprom_buff[i]); 287 + 288 + memcpy(bytes, (u8 *)eeprom_buff + (eeprom->offset & 1), 289 + eeprom->len); 290 + kfree(eeprom_buff); 291 + 292 + return ret_val; 293 + } 294 + 295 + static int igc_set_eeprom(struct net_device *netdev, 296 + struct ethtool_eeprom *eeprom, u8 *bytes) 297 + { 298 + struct igc_adapter 
*adapter = netdev_priv(netdev); 299 + struct igc_hw *hw = &adapter->hw; 300 + int max_len, first_word, last_word, ret_val = 0; 301 + u16 *eeprom_buff; 302 + void *ptr; 303 + u16 i; 304 + 305 + if (eeprom->len == 0) 306 + return -EOPNOTSUPP; 307 + 308 + if (hw->mac.type >= igc_i225 && 309 + !igc_get_flash_presence_i225(hw)) { 310 + return -EOPNOTSUPP; 311 + } 312 + 313 + if (eeprom->magic != (hw->vendor_id | (hw->device_id << 16))) 314 + return -EFAULT; 315 + 316 + max_len = hw->nvm.word_size * 2; 317 + 318 + first_word = eeprom->offset >> 1; 319 + last_word = (eeprom->offset + eeprom->len - 1) >> 1; 320 + eeprom_buff = kmalloc(max_len, GFP_KERNEL); 321 + if (!eeprom_buff) 322 + return -ENOMEM; 323 + 324 + ptr = (void *)eeprom_buff; 325 + 326 + if (eeprom->offset & 1) { 327 + /* need read/modify/write of first changed EEPROM word 328 + * only the second byte of the word is being modified 329 + */ 330 + ret_val = hw->nvm.ops.read(hw, first_word, 1, 331 + &eeprom_buff[0]); 332 + ptr++; 333 + } 334 + if (((eeprom->offset + eeprom->len) & 1) && ret_val == 0) { 335 + /* need read/modify/write of last changed EEPROM word 336 + * only the first byte of the word is being modified 337 + */ 338 + ret_val = hw->nvm.ops.read(hw, last_word, 1, 339 + &eeprom_buff[last_word - first_word]); 340 + } 341 + 342 + /* Device's eeprom is always little-endian, word addressable */ 343 + for (i = 0; i < last_word - first_word + 1; i++) 344 + le16_to_cpus(&eeprom_buff[i]); 345 + 346 + memcpy(ptr, bytes, eeprom->len); 347 + 348 + for (i = 0; i < last_word - first_word + 1; i++) 349 + eeprom_buff[i] = cpu_to_le16(eeprom_buff[i]); 350 + 351 + ret_val = hw->nvm.ops.write(hw, first_word, 352 + last_word - first_word + 1, eeprom_buff); 353 + 354 + /* Update the checksum if nvm write succeeded */ 355 + if (ret_val == 0) 356 + hw->nvm.ops.update(hw); 357 + 358 + /* check if need: igc_set_fw_version(adapter); */ 359 + kfree(eeprom_buff); 360 + return ret_val; 361 + } 362 + 363 + static void 
igc_get_ringparam(struct net_device *netdev, 364 + struct ethtool_ringparam *ring) 365 + { 366 + struct igc_adapter *adapter = netdev_priv(netdev); 367 + 368 + ring->rx_max_pending = IGC_MAX_RXD; 369 + ring->tx_max_pending = IGC_MAX_TXD; 370 + ring->rx_pending = adapter->rx_ring_count; 371 + ring->tx_pending = adapter->tx_ring_count; 372 + } 373 + 374 + static int igc_set_ringparam(struct net_device *netdev, 375 + struct ethtool_ringparam *ring) 376 + { 377 + struct igc_adapter *adapter = netdev_priv(netdev); 378 + struct igc_ring *temp_ring; 379 + u16 new_rx_count, new_tx_count; 380 + int i, err = 0; 381 + 382 + if (ring->rx_mini_pending || ring->rx_jumbo_pending) 383 + return -EINVAL; 384 + 385 + new_rx_count = min_t(u32, ring->rx_pending, IGC_MAX_RXD); 386 + new_rx_count = max_t(u16, new_rx_count, IGC_MIN_RXD); 387 + new_rx_count = ALIGN(new_rx_count, REQ_RX_DESCRIPTOR_MULTIPLE); 388 + 389 + new_tx_count = min_t(u32, ring->tx_pending, IGC_MAX_TXD); 390 + new_tx_count = max_t(u16, new_tx_count, IGC_MIN_TXD); 391 + new_tx_count = ALIGN(new_tx_count, REQ_TX_DESCRIPTOR_MULTIPLE); 392 + 393 + if (new_tx_count == adapter->tx_ring_count && 394 + new_rx_count == adapter->rx_ring_count) { 395 + /* nothing to do */ 396 + return 0; 397 + } 398 + 399 + while (test_and_set_bit(__IGC_RESETTING, &adapter->state)) 400 + usleep_range(1000, 2000); 401 + 402 + if (!netif_running(adapter->netdev)) { 403 + for (i = 0; i < adapter->num_tx_queues; i++) 404 + adapter->tx_ring[i]->count = new_tx_count; 405 + for (i = 0; i < adapter->num_rx_queues; i++) 406 + adapter->rx_ring[i]->count = new_rx_count; 407 + adapter->tx_ring_count = new_tx_count; 408 + adapter->rx_ring_count = new_rx_count; 409 + goto clear_reset; 410 + } 411 + 412 + if (adapter->num_tx_queues > adapter->num_rx_queues) 413 + temp_ring = vmalloc(array_size(sizeof(struct igc_ring), 414 + adapter->num_tx_queues)); 415 + else 416 + temp_ring = vmalloc(array_size(sizeof(struct igc_ring), 417 + adapter->num_rx_queues)); 418 + 
419 +     if (!temp_ring) {
420 +         err = -ENOMEM;
421 +         goto clear_reset;
422 +     }
423 + 
424 +     igc_down(adapter);
425 + 
426 +     /* We can't just free everything and then setup again,
427 +      * because the ISRs in MSI-X mode get passed pointers
428 +      * to the Tx and Rx ring structs.
429 +      */
430 +     if (new_tx_count != adapter->tx_ring_count) {
431 +         for (i = 0; i < adapter->num_tx_queues; i++) {
432 +             memcpy(&temp_ring[i], adapter->tx_ring[i],
433 +                    sizeof(struct igc_ring));
434 + 
435 +             temp_ring[i].count = new_tx_count;
436 +             err = igc_setup_tx_resources(&temp_ring[i]);
437 +             if (err) {
438 +                 while (i) {
439 +                     i--;
440 +                     igc_free_tx_resources(&temp_ring[i]);
441 +                 }
442 +                 goto err_setup;
443 +             }
444 +         }
445 + 
446 +         for (i = 0; i < adapter->num_tx_queues; i++) {
447 +             igc_free_tx_resources(adapter->tx_ring[i]);
448 + 
449 +             memcpy(adapter->tx_ring[i], &temp_ring[i],
450 +                    sizeof(struct igc_ring));
451 +         }
452 + 
453 +         adapter->tx_ring_count = new_tx_count;
454 +     }
455 + 
456 +     if (new_rx_count != adapter->rx_ring_count) {
457 +         for (i = 0; i < adapter->num_rx_queues; i++) {
458 +             memcpy(&temp_ring[i], adapter->rx_ring[i],
459 +                    sizeof(struct igc_ring));
460 + 
461 +             temp_ring[i].count = new_rx_count;
462 +             err = igc_setup_rx_resources(&temp_ring[i]);
463 +             if (err) {
464 +                 while (i) {
465 +                     i--;
466 +                     igc_free_rx_resources(&temp_ring[i]);
467 +                 }
468 +                 goto err_setup;
469 +             }
470 +         }
471 + 
472 +         for (i = 0; i < adapter->num_rx_queues; i++) {
473 +             igc_free_rx_resources(adapter->rx_ring[i]);
474 + 
475 +             memcpy(adapter->rx_ring[i], &temp_ring[i],
476 +                    sizeof(struct igc_ring));
477 +         }
478 + 
479 +         adapter->rx_ring_count = new_rx_count;
480 +     }
481 + err_setup:
482 +     igc_up(adapter);
483 +     vfree(temp_ring);
484 + clear_reset:
485 +     clear_bit(__IGC_RESETTING, &adapter->state);
486 +     return err;
487 + }
488 + 
489 + static void igc_get_pauseparam(struct net_device *netdev,
490 +                                struct ethtool_pauseparam *pause)
491 + {
492 +     struct igc_adapter *adapter = netdev_priv(netdev);
493 +     struct igc_hw *hw = &adapter->hw;
494 + 
495 +     pause->autoneg =
496 +         (adapter->fc_autoneg ? AUTONEG_ENABLE : AUTONEG_DISABLE);
497 + 
498 +     if (hw->fc.current_mode == igc_fc_rx_pause) {
499 +         pause->rx_pause = 1;
500 +     } else if (hw->fc.current_mode == igc_fc_tx_pause) {
501 +         pause->tx_pause = 1;
502 +     } else if (hw->fc.current_mode == igc_fc_full) {
503 +         pause->rx_pause = 1;
504 +         pause->tx_pause = 1;
505 +     }
506 + }
507 + 
508 + static int igc_set_pauseparam(struct net_device *netdev,
509 +                               struct ethtool_pauseparam *pause)
510 + {
511 +     struct igc_adapter *adapter = netdev_priv(netdev);
512 +     struct igc_hw *hw = &adapter->hw;
513 +     int retval = 0;
514 + 
515 +     adapter->fc_autoneg = pause->autoneg;
516 + 
517 +     while (test_and_set_bit(__IGC_RESETTING, &adapter->state))
518 +         usleep_range(1000, 2000);
519 + 
520 +     if (adapter->fc_autoneg == AUTONEG_ENABLE) {
521 +         hw->fc.requested_mode = igc_fc_default;
522 +         if (netif_running(adapter->netdev)) {
523 +             igc_down(adapter);
524 +             igc_up(adapter);
525 +         } else {
526 +             igc_reset(adapter);
527 +         }
528 +     } else {
529 +         if (pause->rx_pause && pause->tx_pause)
530 +             hw->fc.requested_mode = igc_fc_full;
531 +         else if (pause->rx_pause && !pause->tx_pause)
532 +             hw->fc.requested_mode = igc_fc_rx_pause;
533 +         else if (!pause->rx_pause && pause->tx_pause)
534 +             hw->fc.requested_mode = igc_fc_tx_pause;
535 +         else if (!pause->rx_pause && !pause->tx_pause)
536 +             hw->fc.requested_mode = igc_fc_none;
537 + 
538 +         hw->fc.current_mode = hw->fc.requested_mode;
539 + 
540 +         retval = ((hw->phy.media_type == igc_media_type_copper) ?
541 +                   igc_force_mac_fc(hw) : igc_setup_link(hw));
542 +     }
543 + 
544 +     clear_bit(__IGC_RESETTING, &adapter->state);
545 +     return retval;
546 + }
547 + 
548 + static int igc_get_coalesce(struct net_device *netdev,
549 +                             struct ethtool_coalesce *ec)
550 + {
551 +     struct igc_adapter *adapter = netdev_priv(netdev);
552 + 
553 +     if (adapter->rx_itr_setting <= 3)
554 +         ec->rx_coalesce_usecs = adapter->rx_itr_setting;
555 +     else
556 +         ec->rx_coalesce_usecs = adapter->rx_itr_setting >> 2;
557 + 
558 +     if (!(adapter->flags & IGC_FLAG_QUEUE_PAIRS)) {
559 +         if (adapter->tx_itr_setting <= 3)
560 +             ec->tx_coalesce_usecs = adapter->tx_itr_setting;
561 +         else
562 +             ec->tx_coalesce_usecs = adapter->tx_itr_setting >> 2;
563 +     }
564 + 
565 +     return 0;
566 + }
567 + 
568 + static int igc_set_coalesce(struct net_device *netdev,
569 +                             struct ethtool_coalesce *ec)
570 + {
571 +     struct igc_adapter *adapter = netdev_priv(netdev);
572 +     int i;
573 + 
574 +     if (ec->rx_max_coalesced_frames ||
575 +         ec->rx_coalesce_usecs_irq ||
576 +         ec->rx_max_coalesced_frames_irq ||
577 +         ec->tx_max_coalesced_frames ||
578 +         ec->tx_coalesce_usecs_irq ||
579 +         ec->stats_block_coalesce_usecs ||
580 +         ec->use_adaptive_rx_coalesce ||
581 +         ec->use_adaptive_tx_coalesce ||
582 +         ec->pkt_rate_low ||
583 +         ec->rx_coalesce_usecs_low ||
584 +         ec->rx_max_coalesced_frames_low ||
585 +         ec->tx_coalesce_usecs_low ||
586 +         ec->tx_max_coalesced_frames_low ||
587 +         ec->pkt_rate_high ||
588 +         ec->rx_coalesce_usecs_high ||
589 +         ec->rx_max_coalesced_frames_high ||
590 +         ec->tx_coalesce_usecs_high ||
591 +         ec->tx_max_coalesced_frames_high ||
592 +         ec->rate_sample_interval)
593 +         return -ENOTSUPP;
594 + 
595 +     if (ec->rx_coalesce_usecs > IGC_MAX_ITR_USECS ||
596 +         (ec->rx_coalesce_usecs > 3 &&
597 +          ec->rx_coalesce_usecs < IGC_MIN_ITR_USECS) ||
598 +         ec->rx_coalesce_usecs == 2)
599 +         return -EINVAL;
600 + 
601 +     if (ec->tx_coalesce_usecs > IGC_MAX_ITR_USECS ||
602 +         (ec->tx_coalesce_usecs > 3 &&
603 +          ec->tx_coalesce_usecs < IGC_MIN_ITR_USECS) ||
604 +         ec->tx_coalesce_usecs == 2)
605 +         return -EINVAL;
606 + 
607 +     if ((adapter->flags & IGC_FLAG_QUEUE_PAIRS) && ec->tx_coalesce_usecs)
608 +         return -EINVAL;
609 + 
610 +     /* If ITR is disabled, disable DMAC */
611 +     if (ec->rx_coalesce_usecs == 0) {
612 +         if (adapter->flags & IGC_FLAG_DMAC)
613 +             adapter->flags &= ~IGC_FLAG_DMAC;
614 +     }
615 + 
616 +     /* convert to rate of irq's per second */
617 +     if (ec->rx_coalesce_usecs && ec->rx_coalesce_usecs <= 3)
618 +         adapter->rx_itr_setting = ec->rx_coalesce_usecs;
619 +     else
620 +         adapter->rx_itr_setting = ec->rx_coalesce_usecs << 2;
621 + 
622 +     /* convert to rate of irq's per second */
623 +     if (adapter->flags & IGC_FLAG_QUEUE_PAIRS)
624 +         adapter->tx_itr_setting = adapter->rx_itr_setting;
625 +     else if (ec->tx_coalesce_usecs && ec->tx_coalesce_usecs <= 3)
626 +         adapter->tx_itr_setting = ec->tx_coalesce_usecs;
627 +     else
628 +         adapter->tx_itr_setting = ec->tx_coalesce_usecs << 2;
629 + 
630 +     for (i = 0; i < adapter->num_q_vectors; i++) {
631 +         struct igc_q_vector *q_vector = adapter->q_vector[i];
632 + 
633 +         q_vector->tx.work_limit = adapter->tx_work_limit;
634 +         if (q_vector->rx.ring)
635 +             q_vector->itr_val = adapter->rx_itr_setting;
636 +         else
637 +             q_vector->itr_val = adapter->tx_itr_setting;
638 +         if (q_vector->itr_val && q_vector->itr_val <= 3)
639 +             q_vector->itr_val = IGC_START_ITR;
640 +         q_vector->set_itr = 1;
641 +     }
642 + 
643 +     return 0;
644 + }
645 + 
646 + void igc_write_rss_indir_tbl(struct igc_adapter *adapter)
647 + {
648 +     struct igc_hw *hw = &adapter->hw;
649 +     u32 reg = IGC_RETA(0);
650 +     u32 shift = 0;
651 +     int i = 0;
652 + 
653 +     while (i < IGC_RETA_SIZE) {
654 +         u32 val = 0;
655 +         int j;
656 + 
657 +         for (j = 3; j >= 0; j--) {
658 +             val <<= 8;
659 +             val |= adapter->rss_indir_tbl[i + j];
660 +         }
661 + 
662 +         wr32(reg, val << shift);
663 +         reg += 4;
664 +         i += 4;
665 +     }
666 + }
667 + 
668 + static u32 igc_get_rxfh_indir_size(struct net_device *netdev)
669 + {
670 +     return IGC_RETA_SIZE;
671 + }
672 + 
673 + static int igc_get_rxfh(struct net_device *netdev, u32 *indir, u8 *key,
674 +                         u8 *hfunc)
675 + {
676 +     struct igc_adapter *adapter = netdev_priv(netdev);
677 +     int i;
678 + 
679 +     if (hfunc)
680 +         *hfunc = ETH_RSS_HASH_TOP;
681 +     if (!indir)
682 +         return 0;
683 +     for (i = 0; i < IGC_RETA_SIZE; i++)
684 +         indir[i] = adapter->rss_indir_tbl[i];
685 + 
686 +     return 0;
687 + }
688 + 
689 + static int igc_set_rxfh(struct net_device *netdev, const u32 *indir,
690 +                         const u8 *key, const u8 hfunc)
691 + {
692 +     struct igc_adapter *adapter = netdev_priv(netdev);
693 +     u32 num_queues;
694 +     int i;
695 + 
696 +     /* We do not allow change in unsupported parameters */
697 +     if (key ||
698 +         (hfunc != ETH_RSS_HASH_NO_CHANGE && hfunc != ETH_RSS_HASH_TOP))
699 +         return -EOPNOTSUPP;
700 +     if (!indir)
701 +         return 0;
702 + 
703 +     num_queues = adapter->rss_queues;
704 + 
705 +     /* Verify user input. */
706 +     for (i = 0; i < IGC_RETA_SIZE; i++)
707 +         if (indir[i] >= num_queues)
708 +             return -EINVAL;
709 + 
710 +     for (i = 0; i < IGC_RETA_SIZE; i++)
711 +         adapter->rss_indir_tbl[i] = indir[i];
712 + 
713 +     igc_write_rss_indir_tbl(adapter);
714 + 
715 +     return 0;
716 + }
717 + 
718 + static unsigned int igc_max_channels(struct igc_adapter *adapter)
719 + {
720 +     return igc_get_max_rss_queues(adapter);
721 + }
722 + 
723 + static void igc_get_channels(struct net_device *netdev,
724 +                              struct ethtool_channels *ch)
725 + {
726 +     struct igc_adapter *adapter = netdev_priv(netdev);
727 + 
728 +     /* Report maximum channels */
729 +     ch->max_combined = igc_max_channels(adapter);
730 + 
731 +     /* Report info for other vector */
732 +     if (adapter->flags & IGC_FLAG_HAS_MSIX) {
733 +         ch->max_other = NON_Q_VECTORS;
734 +         ch->other_count = NON_Q_VECTORS;
735 +     }
736 + 
737 +     ch->combined_count = adapter->rss_queues;
738 + }
739 + 
740 + static int igc_set_channels(struct net_device *netdev,
741 +                             struct ethtool_channels *ch)
742 + {
743 +     struct igc_adapter *adapter = netdev_priv(netdev);
744 +     unsigned int count = ch->combined_count;
745 +     unsigned int max_combined = 0;
746 + 
747 +     /* Verify they are not requesting separate vectors */
748 +     if (!count || ch->rx_count || ch->tx_count)
749 +         return -EINVAL;
750 + 
751 +     /* Verify other_count is valid and has not been changed */
752 +     if (ch->other_count != NON_Q_VECTORS)
753 +         return -EINVAL;
754 + 
755 +     /* Verify the number of channels doesn't exceed hw limits */
756 +     max_combined = igc_max_channels(adapter);
757 +     if (count > max_combined)
758 +         return -EINVAL;
759 + 
760 +     if (count != adapter->rss_queues) {
761 +         adapter->rss_queues = count;
762 +         igc_set_flag_queue_pairs(adapter, max_combined);
763 + 
764 +         /* Hardware has to reinitialize queues and interrupts to
765 +          * match the new configuration.
766 +          */
767 +         return igc_reinit_queues(adapter);
768 +     }
769 + 
770 +     return 0;
771 + }
772 + 
773 + static u32 igc_get_priv_flags(struct net_device *netdev)
774 + {
775 +     struct igc_adapter *adapter = netdev_priv(netdev);
776 +     u32 priv_flags = 0;
777 + 
778 +     if (adapter->flags & IGC_FLAG_RX_LEGACY)
779 +         priv_flags |= IGC_PRIV_FLAGS_LEGACY_RX;
780 + 
781 +     return priv_flags;
782 + }
783 + 
784 + static int igc_set_priv_flags(struct net_device *netdev, u32 priv_flags)
785 + {
786 +     struct igc_adapter *adapter = netdev_priv(netdev);
787 +     unsigned int flags = adapter->flags;
788 + 
789 +     flags &= ~IGC_FLAG_RX_LEGACY;
790 +     if (priv_flags & IGC_PRIV_FLAGS_LEGACY_RX)
791 +         flags |= IGC_FLAG_RX_LEGACY;
792 + 
793 +     if (flags != adapter->flags) {
794 +         adapter->flags = flags;
795 + 
796 +         /* reset interface to repopulate queues */
797 +         if (netif_running(netdev))
798 +             igc_reinit_locked(adapter);
799 +     }
800 + 
801 +     return 0;
802 + }
803 + 
804 + static int igc_ethtool_begin(struct net_device *netdev)
805 + {
806 +     struct igc_adapter *adapter = netdev_priv(netdev);
807 + 
808 +     pm_runtime_get_sync(&adapter->pdev->dev);
809 +     return 0;
810 + }
811 + 
812 + static void igc_ethtool_complete(struct net_device *netdev)
813 + {
814 +     struct igc_adapter *adapter = netdev_priv(netdev);
815 + 
816 +     pm_runtime_put(&adapter->pdev->dev);
817 + }
818 + 
819 + static int igc_get_link_ksettings(struct net_device *netdev,
820 +                                   struct ethtool_link_ksettings *cmd)
821 + {
822 +     struct igc_adapter *adapter = netdev_priv(netdev);
823 +     struct igc_hw *hw = &adapter->hw;
824 +     u32 status;
825 +     u32 speed;
826 + 
827 +     ethtool_link_ksettings_zero_link_mode(cmd, supported);
828 +     ethtool_link_ksettings_zero_link_mode(cmd, advertising);
829 + 
830 +     /* supported link modes */
831 +     ethtool_link_ksettings_add_link_mode(cmd, supported, 10baseT_Half);
832 +     ethtool_link_ksettings_add_link_mode(cmd, supported, 10baseT_Full);
833 +     ethtool_link_ksettings_add_link_mode(cmd, supported, 100baseT_Half);
834 +     ethtool_link_ksettings_add_link_mode(cmd, supported, 100baseT_Full);
835 +     ethtool_link_ksettings_add_link_mode(cmd, supported, 1000baseT_Full);
836 +     ethtool_link_ksettings_add_link_mode(cmd, supported, 2500baseT_Full);
837 + 
838 +     /* twisted pair */
839 +     cmd->base.port = PORT_TP;
840 +     cmd->base.phy_address = hw->phy.addr;
841 + 
842 +     /* advertising link modes */
843 +     ethtool_link_ksettings_add_link_mode(cmd, advertising, 10baseT_Half);
844 +     ethtool_link_ksettings_add_link_mode(cmd, advertising, 10baseT_Full);
845 +     ethtool_link_ksettings_add_link_mode(cmd, advertising, 100baseT_Half);
846 +     ethtool_link_ksettings_add_link_mode(cmd, advertising, 100baseT_Full);
847 +     ethtool_link_ksettings_add_link_mode(cmd, advertising, 1000baseT_Full);
848 +     ethtool_link_ksettings_add_link_mode(cmd, advertising, 2500baseT_Full);
849 + 
850 +     /* set autoneg settings */
851 +     if (hw->mac.autoneg == 1) {
852 +         ethtool_link_ksettings_add_link_mode(cmd, supported, Autoneg);
853 +         ethtool_link_ksettings_add_link_mode(cmd, advertising,
854 +                                              Autoneg);
855 +     }
856 + 
857 +     switch (hw->fc.requested_mode) {
858 +     case igc_fc_full:
859 +         ethtool_link_ksettings_add_link_mode(cmd, advertising, Pause);
860 +         break;
861 +     case igc_fc_rx_pause:
862 +         ethtool_link_ksettings_add_link_mode(cmd, advertising, Pause);
863 +         ethtool_link_ksettings_add_link_mode(cmd, advertising,
864 +                                              Asym_Pause);
865 +         break;
866 +     case igc_fc_tx_pause:
867 +         ethtool_link_ksettings_add_link_mode(cmd, advertising,
868 +                                              Asym_Pause);
869 +         break;
870 +     default:
871 +         ethtool_link_ksettings_add_link_mode(cmd, advertising, Pause);
872 +         ethtool_link_ksettings_add_link_mode(cmd, advertising,
873 +                                              Asym_Pause);
874 +     }
875 + 
876 +     status = rd32(IGC_STATUS);
877 + 
878 +     if (status & IGC_STATUS_LU) {
879 +         if (status & IGC_STATUS_SPEED_1000) {
880 +             /* For I225, STATUS will indicate 1G speed in both
881 +              * 1 Gbps and 2.5 Gbps link modes.
882 +              * An additional bit is used
883 +              * to differentiate between 1 Gbps and 2.5 Gbps.
884 +              */
885 +             if (hw->mac.type == igc_i225 &&
886 +                 (status & IGC_STATUS_SPEED_2500)) {
887 +                 speed = SPEED_2500;
888 +                 hw_dbg("2500 Mbs, ");
889 +             } else {
890 +                 speed = SPEED_1000;
891 +                 hw_dbg("1000 Mbs, ");
892 +             }
893 +         } else if (status & IGC_STATUS_SPEED_100) {
894 +             speed = SPEED_100;
895 +             hw_dbg("100 Mbs, ");
896 +         } else {
897 +             speed = SPEED_10;
898 +             hw_dbg("10 Mbs, ");
899 +         }
900 +         if ((status & IGC_STATUS_FD) ||
901 +             hw->phy.media_type != igc_media_type_copper)
902 +             cmd->base.duplex = DUPLEX_FULL;
903 +         else
904 +             cmd->base.duplex = DUPLEX_HALF;
905 +     } else {
906 +         speed = SPEED_UNKNOWN;
907 +         cmd->base.duplex = DUPLEX_UNKNOWN;
908 +     }
909 +     cmd->base.speed = speed;
910 +     if (hw->mac.autoneg)
911 +         cmd->base.autoneg = AUTONEG_ENABLE;
912 +     else
913 +         cmd->base.autoneg = AUTONEG_DISABLE;
914 + 
915 +     /* MDI-X => 2; MDI =>1; Invalid =>0 */
916 +     if (hw->phy.media_type == igc_media_type_copper)
917 +         cmd->base.eth_tp_mdix = hw->phy.is_mdix ? ETH_TP_MDI_X :
918 +                                                   ETH_TP_MDI;
919 +     else
920 +         cmd->base.eth_tp_mdix = ETH_TP_MDI_INVALID;
921 + 
922 +     if (hw->phy.mdix == AUTO_ALL_MODES)
923 +         cmd->base.eth_tp_mdix_ctrl = ETH_TP_MDI_AUTO;
924 +     else
925 +         cmd->base.eth_tp_mdix_ctrl = hw->phy.mdix;
926 + 
927 +     return 0;
928 + }
929 + 
930 + static int igc_set_link_ksettings(struct net_device *netdev,
931 +                                   const struct ethtool_link_ksettings *cmd)
932 + {
933 +     struct igc_adapter *adapter = netdev_priv(netdev);
934 +     struct igc_hw *hw = &adapter->hw;
935 +     u32 advertising;
936 + 
937 +     /* When adapter in resetting mode, autoneg/speed/duplex
938 +      * cannot be changed
939 +      */
940 +     if (igc_check_reset_block(hw)) {
941 +         dev_err(&adapter->pdev->dev,
942 +             "Cannot change link characteristics when reset is active.\n");
943 +         return -EINVAL;
944 +     }
945 + 
946 +     /* MDI setting is only allowed when autoneg enabled because
947 +      * some hardware doesn't allow MDI setting when speed or
948 +      * duplex is forced.
949 +      */
950 +     if (cmd->base.eth_tp_mdix_ctrl) {
951 +         if (cmd->base.eth_tp_mdix_ctrl != ETH_TP_MDI_AUTO &&
952 +             cmd->base.autoneg != AUTONEG_ENABLE) {
953 +             dev_err(&adapter->pdev->dev, "forcing MDI/MDI-X state is not supported when link speed and/or duplex are forced\n");
954 +             return -EINVAL;
955 +         }
956 +     }
957 + 
958 +     while (test_and_set_bit(__IGC_RESETTING, &adapter->state))
959 +         usleep_range(1000, 2000);
960 + 
961 +     ethtool_convert_link_mode_to_legacy_u32(&advertising,
962 +                                             cmd->link_modes.advertising);
963 + 
964 +     if (cmd->base.autoneg == AUTONEG_ENABLE) {
965 +         hw->mac.autoneg = 1;
966 +         hw->phy.autoneg_advertised = advertising;
967 +         if (adapter->fc_autoneg)
968 +             hw->fc.requested_mode = igc_fc_default;
969 +     } else {
970 +         /* calling this overrides forced MDI setting */
971 +         dev_info(&adapter->pdev->dev,
972 +                  "Force mode currently not supported\n");
973 +     }
974 + 
975 +     /* MDI-X => 2; MDI => 1; Auto => 3 */
976 +     if (cmd->base.eth_tp_mdix_ctrl) {
977 +         /* fix up the value for auto (3 => 0) as zero is mapped
978 +          * internally to auto
979 +          */
980 +         if (cmd->base.eth_tp_mdix_ctrl == ETH_TP_MDI_AUTO)
981 +             hw->phy.mdix = AUTO_ALL_MODES;
982 +         else
983 +             hw->phy.mdix = cmd->base.eth_tp_mdix_ctrl;
984 +     }
985 + 
986 +     /* reset the link */
987 +     if (netif_running(adapter->netdev)) {
988 +         igc_down(adapter);
989 +         igc_up(adapter);
990 +     } else {
991 +         igc_reset(adapter);
992 +     }
993 + 
994 +     clear_bit(__IGC_RESETTING, &adapter->state);
995 + 
996 +     return 0;
997 + }
998 + 
999 + static const struct ethtool_ops igc_ethtool_ops = {
1000 +     .get_drvinfo = igc_get_drvinfo,
1001 +     .get_regs_len = igc_get_regs_len,
1002 +     .get_regs = igc_get_regs,
1003 +     .get_msglevel = igc_get_msglevel,
1004 +     .set_msglevel = igc_set_msglevel,
1005 +     .nway_reset = igc_nway_reset,
1006 +     .get_link = igc_get_link,
1007 +     .get_eeprom_len = igc_get_eeprom_len,
1008 +     .get_eeprom = igc_get_eeprom,
1009 +     .set_eeprom = igc_set_eeprom,
1010 +     .get_ringparam = igc_get_ringparam,
1011 +     .set_ringparam = igc_set_ringparam,
1012 +     .get_pauseparam = igc_get_pauseparam,
1013 +     .set_pauseparam = igc_set_pauseparam,
1014 +     .get_coalesce = igc_get_coalesce,
1015 +     .set_coalesce = igc_set_coalesce,
1016 +     .get_rxfh_indir_size = igc_get_rxfh_indir_size,
1017 +     .get_rxfh = igc_get_rxfh,
1018 +     .set_rxfh = igc_set_rxfh,
1019 +     .get_channels = igc_get_channels,
1020 +     .set_channels = igc_set_channels,
1021 +     .get_priv_flags = igc_get_priv_flags,
1022 +     .set_priv_flags = igc_set_priv_flags,
1023 +     .begin = igc_ethtool_begin,
1024 +     .complete = igc_ethtool_complete,
1025 +     .get_link_ksettings = igc_get_link_ksettings,
1026 +     .set_link_ksettings = igc_set_link_ksettings,
1027 + };
1028 + 
1029 + void igc_set_ethtool_ops(struct net_device *netdev)
1030 + {
1031 +     netdev->ethtool_ops = &igc_ethtool_ops;
1032 + }
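The descriptor-count handling at the top of igc_set_ringparam (cap at the maximum, raise to the minimum, then align to the descriptor multiple) can be sketched as a standalone function. The numeric bounds below are illustrative stand-ins, not the driver's actual IGC_MIN_RXD/IGC_MAX_RXD values:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical bounds standing in for IGC_MIN_RXD, IGC_MAX_RXD and
 * REQ_RX_DESCRIPTOR_MULTIPLE; the real values live in igc.h. */
#define SKETCH_MIN_RXD 64
#define SKETCH_MAX_RXD 4096
#define SKETCH_RXD_MULTIPLE 8

/* Round up to the next multiple, like the kernel's ALIGN() macro. */
static uint16_t sketch_align(uint32_t x, uint32_t a)
{
    return (uint16_t)((x + a - 1) / a * a);
}

/* Clamp a user-requested descriptor count the way igc_set_ringparam
 * does: cap at the max, raise to the min, then align. */
static uint16_t sketch_clamp_ring(uint32_t requested)
{
    uint32_t n = requested < SKETCH_MAX_RXD ? requested : SKETCH_MAX_RXD;

    if (n < SKETCH_MIN_RXD)
        n = SKETCH_MIN_RXD;
    return sketch_align(n, SKETCH_RXD_MULTIPLE);
}
```

Doing min before max means an over-large request quietly degrades to the hardware limit instead of failing, which matches how `ethtool -G` behaves with this driver.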
+1
drivers/net/ethernet/intel/igc/igc_hw.h
···
55 55 
56 56 enum igc_nvm_type {
57 57     igc_nvm_unknown = 0,
58 +     igc_nvm_eeprom_spi,
58 59     igc_nvm_flash_hw,
59 60     igc_nvm_invm,
60 61 };
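Because igc_nvm_eeprom_spi is inserted between igc_nvm_unknown and igc_nvm_flash_hw, the implicit values of the later enumerators shift up by one; an illustrative mirror of the resulting values (the authoritative definition is in igc_hw.h):

```c
/* Illustrative copy of enum igc_nvm_type after the change, to show
 * the implicit values; not the driver's actual definition. */
enum sketch_nvm_type {
    sketch_nvm_unknown = 0,
    sketch_nvm_eeprom_spi,  /* now 1 */
    sketch_nvm_flash_hw,    /* shifted from 1 to 2 */
    sketch_nvm_invm,        /* shifted from 2 to 3 */
};
```

This is safe here because the values are only compared symbolically, never persisted or exposed to userspace.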
+94 -15
drivers/net/ethernet/intel/igc/igc_main.c
···
12 12 #define DRV_VERSION "0.0.1-k"
13 13 #define DRV_SUMMARY "Intel(R) 2.5G Ethernet Linux Driver"
14 14 
15 + #define DEFAULT_MSG_ENABLE (NETIF_MSG_DRV | NETIF_MSG_PROBE | NETIF_MSG_LINK)
16 + 
15 17 static int debug = -1;
16 18 
17 19 MODULE_AUTHOR("Intel Corporation, <linux.nics@intel.com>");
···
68 66     latency_invalid = 255
69 67 };
70 68 
71 - static void igc_reset(struct igc_adapter *adapter)
69 + void igc_reset(struct igc_adapter *adapter)
72 70 {
73 71     struct pci_dev *pdev = adapter->pdev;
74 72     struct igc_hw *hw = &adapter->hw;
···
152 150  *
153 151  * Free all transmit software resources
154 152  */
155 - static void igc_free_tx_resources(struct igc_ring *tx_ring)
153 + void igc_free_tx_resources(struct igc_ring *tx_ring)
156 154 {
157 155     igc_clean_tx_ring(tx_ring);
158 156 
···
263 261  *
264 262  * Return 0 on success, negative on failure
265 263  */
266 - static int igc_setup_tx_resources(struct igc_ring *tx_ring)
264 + int igc_setup_tx_resources(struct igc_ring *tx_ring)
267 265 {
268 266     struct device *dev = tx_ring->dev;
269 267     int size = 0;
···
383 381  *
384 382  * Free all receive software resources
385 383  */
386 - static void igc_free_rx_resources(struct igc_ring *rx_ring)
384 + void igc_free_rx_resources(struct igc_ring *rx_ring)
387 385 {
388 386     igc_clean_rx_ring(rx_ring);
389 387 
···
420 418  *
421 419  * Returns 0 on success, negative on failure
422 420  */
423 - static int igc_setup_rx_resources(struct igc_ring *rx_ring)
421 + int igc_setup_rx_resources(struct igc_ring *rx_ring)
424 422 {
425 423     struct device *dev = rx_ring->dev;
426 424     int size, desc_len;
···
1705 1703  * igc_up - Open the interface and prepare it to handle traffic
1706 1704  * @adapter: board private structure
1707 1705  */
1708 - static void igc_up(struct igc_adapter *adapter)
1706 + void igc_up(struct igc_adapter *adapter)
1709 1707 {
1710 1708     struct igc_hw *hw = &adapter->hw;
1711 1709     int i = 0;
···
1750 1748  * igc_down - Close the interface
1751 1749  * @adapter: board private structure
1752 1750  */
1753 - static void igc_down(struct igc_adapter *adapter)
1751 + void igc_down(struct igc_adapter *adapter)
1754 1752 {
1755 1753     struct net_device *netdev = adapter->netdev;
1756 1754     struct igc_hw *hw = &adapter->hw;
···
1812 1810     igc_clean_all_rx_rings(adapter);
1813 1811 }
1814 1812 
1815 - static void igc_reinit_locked(struct igc_adapter *adapter)
1813 + void igc_reinit_locked(struct igc_adapter *adapter)
1816 1814 {
1817 1815     WARN_ON(in_interrupt());
1818 1816     while (test_and_set_bit(__IGC_RESETTING, &adapter->state))
···
1924 1922 
1925 1923 /**
1926 1924  * igc_rar_set_index - Sync RAL[index] and RAH[index] registers with MAC table
1927 -  * @adapter: Pointer to adapter structure
1925 +  * @adapter: address of board private structure
1928 1926  * @index: Index of the RAR entry which need to be synced with MAC table
1929 1927  */
1930 1928 static void igc_rar_set_index(struct igc_adapter *adapter, u32 index)
···
2300 2298  * igc_has_link - check shared code for link and determine up/down
2301 2299  * @adapter: pointer to driver private info
2302 2300  */
2303 - static bool igc_has_link(struct igc_adapter *adapter)
2301 + bool igc_has_link(struct igc_adapter *adapter)
2304 2302 {
2305 2303     struct igc_hw *hw = &adapter->hw;
2306 2304     bool link_active = false;
···
3503 3501     return value;
3504 3502 }
3505 3503 
3504 + int igc_set_spd_dplx(struct igc_adapter *adapter, u32 spd, u8 dplx)
3505 + {
3506 +     struct pci_dev *pdev = adapter->pdev;
3507 +     struct igc_mac_info *mac = &adapter->hw.mac;
3508 + 
3509 +     mac->autoneg = 0;
3510 + 
3511 +     /* Make sure dplx is at most 1 bit and lsb of speed is not set
3512 +      * for the switch() below to work
3513 +      */
3514 +     if ((spd & 1) || (dplx & ~1))
3515 +         goto err_inval;
3516 + 
3517 +     switch (spd + dplx) {
3518 +     case SPEED_10 + DUPLEX_HALF:
3519 +         mac->forced_speed_duplex = ADVERTISE_10_HALF;
3520 +         break;
3521 +     case SPEED_10 + DUPLEX_FULL:
3522 +         mac->forced_speed_duplex = ADVERTISE_10_FULL;
3523 +         break;
3524 +     case SPEED_100 + DUPLEX_HALF:
3525 +         mac->forced_speed_duplex = ADVERTISE_100_HALF;
3526 +         break;
3527 +     case SPEED_100 + DUPLEX_FULL:
3528 +         mac->forced_speed_duplex = ADVERTISE_100_FULL;
3529 +         break;
3530 +     case SPEED_1000 + DUPLEX_FULL:
3531 +         mac->autoneg = 1;
3532 +         adapter->hw.phy.autoneg_advertised = ADVERTISE_1000_FULL;
3533 +         break;
3534 +     case SPEED_1000 + DUPLEX_HALF: /* not supported */
3535 +         goto err_inval;
3536 +     case SPEED_2500 + DUPLEX_FULL:
3537 +         mac->autoneg = 1;
3538 +         adapter->hw.phy.autoneg_advertised = ADVERTISE_2500_FULL;
3539 +         break;
3540 +     case SPEED_2500 + DUPLEX_HALF: /* not supported */
3541 +     default:
3542 +         goto err_inval;
3543 +     }
3544 + 
3545 +     /* clear MDI, MDI(-X) override is only allowed when autoneg enabled */
3546 +     adapter->hw.phy.mdix = AUTO_ALL_MODES;
3547 + 
3548 +     return 0;
3549 + 
3550 + err_inval:
3551 +     dev_err(&pdev->dev, "Unsupported Speed/Duplex configuration\n");
3552 +     return -EINVAL;
3553 + }
3554 + 
3506 3555 /**
3507 3556  * igc_probe - Device Initialization Routine
3508 3557  * @pdev: PCI device information struct
···
3621 3568     hw = &adapter->hw;
3622 3569     hw->back = adapter;
3623 3570     adapter->port_num = hw->bus.func;
3624 -     adapter->msg_enable = GENMASK(debug - 1, 0);
3571 +     adapter->msg_enable = netif_msg_init(debug, DEFAULT_MSG_ENABLE);
3625 3572 
3626 3573     err = pci_save_state(pdev);
3627 3574     if (err)
···
3637 3584     hw->hw_addr = adapter->io_addr;
3638 3585 
3639 3586     netdev->netdev_ops = &igc_netdev_ops;
3640 - 
3587 +     igc_set_ethtool_ops(netdev);
3641 3588     netdev->watchdog_timeo = 5 * HZ;
3642 3589 
3643 3590     netdev->mem_start = pci_resource_start(pdev, 0);
···
3797 3744     .remove = igc_remove,
3798 3745 };
3799 3746 
3800 - static void igc_set_flag_queue_pairs(struct igc_adapter *adapter,
3801 -                                      const u32 max_rss_queues)
3747 + void igc_set_flag_queue_pairs(struct igc_adapter *adapter,
3748 +                               const u32 max_rss_queues)
3802 3749 {
3803 3750     /* Determine if we need to pair queues. */
3804 3751     /* If rss_queues > half of max_rss_queues, pair the queues in
···
3810 3757         adapter->flags &= ~IGC_FLAG_QUEUE_PAIRS;
3811 3758 }
3812 3759 
3813 - static unsigned int igc_get_max_rss_queues(struct igc_adapter *adapter)
3760 + unsigned int igc_get_max_rss_queues(struct igc_adapter *adapter)
3814 3761 {
3815 3762     unsigned int max_rss_queues;
3816 3763 
···
3887 3834     set_bit(__IGC_DOWN, &adapter->state);
3888 3835 
3889 3836     return 0;
3837 + }
3838 + 
3839 + /**
3840 +  * igc_reinit_queues - return error
3841 +  * @adapter: pointer to adapter structure
3842 +  */
3843 + int igc_reinit_queues(struct igc_adapter *adapter)
3844 + {
3845 +     struct net_device *netdev = adapter->netdev;
3846 +     struct pci_dev *pdev = adapter->pdev;
3847 +     int err = 0;
3848 + 
3849 +     if (netif_running(netdev))
3850 +         igc_close(netdev);
3851 + 
3852 +     igc_reset_interrupt_capability(adapter);
3853 + 
3854 +     if (igc_init_interrupt_scheme(adapter, true)) {
3855 +         dev_err(&pdev->dev, "Unable to allocate memory for queues\n");
3856 +         return -ENOMEM;
3857 +     }
3858 + 
3859 +     if (netif_running(netdev))
3860 +         err = igc_open(netdev);
3861 + 
3862 +     return err;
3890 3863 }
3891 3864 
3892 3865 /**
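The new igc_set_spd_dplx relies on a compact trick: the ethtool SPEED_* values (10, 100, 1000, 2500) are all even and DUPLEX_HALF/DUPLEX_FULL are 0/1, so once the inputs are sanity-checked, spd + dplx uniquely identifies each combination. A minimal sketch of just that validation and dispatch (a standalone illustration, not the driver function):

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch of the igc_set_spd_dplx encoding: reject inputs where dplx
 * has more than one bit or spd has its lsb set, then dispatch on the
 * sum, which is unambiguous for even speeds and a 0/1 duplex. */
static bool sketch_spd_dplx_valid(uint32_t spd, uint8_t dplx)
{
    if ((spd & 1) || (dplx & ~1))
        return false;

    switch (spd + dplx) {
    case 10 + 0:    /* SPEED_10 + DUPLEX_HALF */
    case 10 + 1:    /* SPEED_10 + DUPLEX_FULL */
    case 100 + 0:
    case 100 + 1:
    case 1000 + 1:  /* 1000 half-duplex is rejected by the driver */
    case 2500 + 1:  /* 2500 half-duplex is rejected by the driver */
        return true;
    default:
        return false;
    }
}
```

Without the up-front bit checks, e.g. spd = 11 with dplx = 0 would collide with spd = 10, dplx = 1, which is exactly what the driver's comment about "lsb of speed" guards against.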
-8
drivers/net/ethernet/intel/igc/igc_phy.c
···
152 152 s32 igc_check_downshift(struct igc_hw *hw)
153 153 {
154 154     struct igc_phy_info *phy = &hw->phy;
155 -     u16 phy_data, offset, mask;
156 155     s32 ret_val;
157 156 
158 157     switch (phy->type) {
···
160 161     /* speed downshift not supported */
161 162     phy->speed_downgraded = false;
162 163     ret_val = 0;
163 -     goto out;
164 164     }
165 165 
166 -     ret_val = phy->ops.read_reg(hw, offset, &phy_data);
167 - 
168 -     if (!ret_val)
169 -         phy->speed_downgraded = (phy_data & mask) ? true : false;
170 - 
171 - out:
172 166     return ret_val;
173 167 }
174 168 
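The removed tail of igc_check_downshift read a PHY register through offset and mask, but no case in the switch ever assigned those variables, so the read was effectively dead (and would have used uninitialized values had it been reachable). For reference, the boolean-from-mask idiom the dropped code used is the standard one; a standalone sketch:

```c
#include <stdbool.h>
#include <stdint.h>

/* The dropped code computed speed_downgraded as
 * (phy_data & mask) ? true : false; the same test as a helper.
 * Sketch for illustration only, not driver code. */
static bool sketch_bit_set(uint16_t reg, uint16_t mask)
{
    return (reg & mask) != 0;
}
```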
+3 -1
drivers/net/ethernet/intel/igc/igc_regs.h
···
80 80 /* MSI-X Table Register Descriptions */
81 81 #define IGC_PBACL 0x05B68 /* MSIx PBA Clear - R/W 1 to clear */
82 82 
83 + /* Redirection Table - RW Array */
84 + #define IGC_RETA(_i) (0x05C00 + ((_i) * 4))
85 + 
83 86 /* Receive Register Descriptions */
84 87 #define IGC_RCTL 0x00100 /* Rx Control - RW */
85 88 #define IGC_SRRCTL(_n) (0x0C00C + ((_n) * 0x40))
···
191 188 #define IGC_HGOTCL 0x04130 /* Host Good Octets Transmit Count Low */
192 189 #define IGC_HGOTCH 0x04134 /* Host Good Octets Transmit Count High */
193 190 #define IGC_LENERRS 0x04138 /* Length Errors Count */
194 - #define IGC_SCVPC 0x04228 /* SerDes/SGMII Code Violation Pkt Count */
195 191 #define IGC_HRMPC 0x0A018 /* Header Redirection Missed Packet Count */
196 192 
197 193 /* Management registers */
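The IGC_RETA(_i) macro added here is what igc_write_rss_indir_tbl in the ethtool patch walks: each 32-bit RETA register holds four consecutive indirection-table bytes, packed least-significant byte first by the descending j loop. A sketch of the offset math and the packing (the macro is mirrored here for illustration):

```c
#include <stdint.h>

/* Mirror of the IGC_RETA() offset macro, for illustration. */
#define SKETCH_RETA(i) (0x05C00u + ((i) * 4u))

/* Pack four consecutive redirection-table bytes into one register
 * value the way igc_write_rss_indir_tbl does: tbl[0] lands in the
 * least-significant byte, tbl[3] in the most-significant. */
static uint32_t sketch_pack_reta(const uint8_t *tbl)
{
    uint32_t val = 0;
    int j;

    for (j = 3; j >= 0; j--) {
        val <<= 8;
        val |= tbl[j];
    }
    return val;
}
```

With IGC_RETA_SIZE table entries and four entries per register, the loop in the driver advances both the table index and the register offset by 4 each iteration.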
+1 -1
drivers/net/ethernet/intel/ixgbe/ixgbe_82599.c
···
1048 1048      * clear the multicast table. Also reset num_rar_entries to 128,
1049 1049      * since we modify this value when programming the SAN MAC address.
1050 1050      */
1051 -     hw->mac.num_rar_entries = 128;
1051 +     hw->mac.num_rar_entries = IXGBE_82599_RAR_ENTRIES;
1052 1052     hw->mac.ops.init_rx_addrs(hw);
1053 1053 
1054 1054     /* Store the permanent SAN mac address */
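Swapping the bare 128 for the existing define is behavior-preserving only because IXGBE_82599_RAR_ENTRIES is 128; when replacing a magic number like this, a compile-time check makes the equivalence explicit. A sketch, with the define repeated here under the assumption that it mirrors the driver's value:

```c
/* Assumed mirror of the driver's define; the authoritative value is
 * in ixgbe_82599.c's headers. */
#define IXGBE_82599_RAR_ENTRIES 128

/* C11 compile-time check: fails the build if the named constant ever
 * diverges from the value the comment above the assignment documents. */
_Static_assert(IXGBE_82599_RAR_ENTRIES == 128,
               "82599 RAR table size no longer matches the documented 128");
```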