Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
at v2.6.23-rc3 · 5609 lines · 172 kB
/*
 * forcedeth: Ethernet driver for NVIDIA nForce media access controllers.
 *
 * Note: This driver is a cleanroom reimplementation based on reverse
 * engineered documentation written by Carl-Daniel Hailfinger
 * and Andrew de Quincey.
 *
 * NVIDIA, nForce and other NVIDIA marks are trademarks or registered
 * trademarks of NVIDIA Corporation in the United States and other
 * countries.
 *
 * Copyright (C) 2003,4,5 Manfred Spraul
 * Copyright (C) 2004 Andrew de Quincey (wol support)
 * Copyright (C) 2004 Carl-Daniel Hailfinger (invalid MAC handling, insane
 *                    IRQ rate fixes, bigendian fixes, cleanups, verification)
 * Copyright (c) 2004,5,6 NVIDIA Corporation
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 *
 * Changelog:
 * 0.01: 05 Oct 2003: First release that compiles without warnings.
 * 0.02: 05 Oct 2003: Fix bug for nv_drain_tx: do not try to free NULL skbs.
 *                    Check all PCI BARs for the register window.
 *                    udelay added to mii_rw.
 * 0.03: 06 Oct 2003: Initialize dev->irq.
 * 0.04: 07 Oct 2003: Initialize np->lock, reduce handled irqs, add printks.
 * 0.05: 09 Oct 2003: printk removed again, irq status print tx_timeout.
 * 0.06: 10 Oct 2003: MAC Address read updated, pff flag generation updated,
 *                    irq mask updated
 * 0.07: 14 Oct 2003: Further irq mask updates.
 * 0.08: 20 Oct 2003: rx_desc.Length initialization added, nv_alloc_rx refill
 *                    added into irq handler, NULL check for drain_ring.
 * 0.09: 20 Oct 2003: Basic link speed irq implementation. Only handle the
 *                    requested interrupt sources.
 * 0.10: 20 Oct 2003: First cleanup for release.
 * 0.11: 21 Oct 2003: hexdump for tx added, rx buffer sizes increased.
 *                    MAC Address init fix, set_multicast cleanup.
 * 0.12: 23 Oct 2003: Cleanups for release.
 * 0.13: 25 Oct 2003: Limit for concurrent tx packets increased to 10.
 *                    Set link speed correctly. start rx before starting
 *                    tx (nv_start_rx sets the link speed).
 * 0.14: 25 Oct 2003: Nic-dependent irq mask.
 * 0.15: 08 Nov 2003: fix smp deadlock with set_multicast_list during
 *                    open.
 * 0.16: 15 Nov 2003: include file cleanup for ppc64, rx buffer size
 *                    increased to 1628 bytes.
 * 0.17: 16 Nov 2003: undo rx buffer size increase. Subtract 1 from
 *                    the tx length.
 * 0.18: 17 Nov 2003: fix oops due to late initialization of dev_stats
 * 0.19: 29 Nov 2003: Handle RxNoBuf, detect & handle invalid mac
 *                    addresses, really stop rx if already running
 *                    in nv_start_rx, clean up a bit.
 * 0.20: 07 Dec 2003: alloc fixes
 * 0.21: 12 Jan 2004: additional alloc fix, nic polling fix.
 * 0.22: 19 Jan 2004: reprogram timer to a sane rate, avoid lockup
 *                    on close.
 * 0.23: 26 Jan 2004: various small cleanups
 * 0.24: 27 Feb 2004: make driver even less anonymous in backtraces
 * 0.25: 09 Mar 2004: wol support
 * 0.26: 03 Jun 2004: netdriver specific annotation, sparse-related fixes
 * 0.27: 19 Jun 2004: Gigabit support, new descriptor rings,
 *                    added CK804/MCP04 device IDs, code fixes
 *                    for registers, link status and other minor fixes.
 * 0.28: 21 Jun 2004: Big cleanup, making driver mostly endian safe
 * 0.29: 31 Aug 2004: Add backup timer for link change notification.
 * 0.30: 25 Sep 2004: rx checksum support for nf 250 Gb. Add rx reset
 *                    into nv_close, otherwise reenabling for wol can
 *                    cause DMA to kfree'd memory.
 * 0.31: 14 Nov 2004: ethtool support for getting/setting link
 *                    capabilities.
 * 0.32: 16 Apr 2005: RX_ERROR4 handling added.
 * 0.33: 16 May 2005: Support for MCP51 added.
 * 0.34: 18 Jun 2005: Add DEV_NEED_LINKTIMER to all nForce nics.
 * 0.35: 26 Jun 2005: Support for MCP55 added.
 * 0.36: 28 Jun 2005: Add jumbo frame support.
 * 0.37: 10 Jul 2005: Additional ethtool support, cleanup of pci id list
 * 0.38: 16 Jul 2005: tx irq rewrite: Use global flags instead of
 *                    per-packet flags.
 * 0.39: 18 Jul 2005: Add 64bit descriptor support.
 * 0.40: 19 Jul 2005: Add support for mac address change.
 * 0.41: 30 Jul 2005: Write back original MAC in nv_close instead
 *                    of nv_remove
 * 0.42: 06 Aug 2005: Fix lack of link speed initialization
 *                    in the second (and later) nv_open call
 * 0.43: 10 Aug 2005: Add support for tx checksum.
 * 0.44: 20 Aug 2005: Add support for scatter gather and segmentation.
 * 0.45: 18 Sep 2005: Remove nv_stop/start_rx from every link check
 * 0.46: 20 Oct 2005: Add irq optimization modes.
 * 0.47: 26 Oct 2005: Add phyaddr 0 in phy scan.
 * 0.48: 24 Dec 2005: Disable TSO, bugfix for pci_map_single
 * 0.49: 10 Dec 2005: Fix tso for large buffers.
 * 0.50: 20 Jan 2006: Add 8021pq tagging support.
 * 0.51: 20 Jan 2006: Add 64bit consistent memory allocation for rings.
 * 0.52: 20 Jan 2006: Add MSI/MSIX support.
 * 0.53: 19 Mar 2006: Fix init from low power mode and add hw reset.
 * 0.54: 21 Mar 2006: Fix spin locks for multi irqs and cleanup.
 * 0.55: 22 Mar 2006: Add flow control (pause frame).
 * 0.56: 22 Mar 2006: Additional ethtool config and moduleparam support.
 * 0.57: 14 May 2006: Mac address set in probe/remove and order corrections.
 * 0.58: 30 Oct 2006: Added support for sideband management unit.
 * 0.59: 30 Oct 2006: Added support for recoverable error.
 * 0.60: 20 Jan 2007: Code optimizations for rings, rx & tx data paths, and stats.
 *
 * Known bugs:
 * We suspect that on some hardware no TX done interrupts are generated.
 * This means recovery from netif_stop_queue only happens if the hw timer
 * interrupt fires (100 times/second, configurable with NVREG_POLL_DEFAULT)
 * and the timer is active in the IRQMask, or if a rx packet arrives by chance.
 * If your hardware reliably generates tx done interrupts, then you can remove
 * DEV_NEED_TIMERIRQ from the driver_data flags.
 * DEV_NEED_TIMERIRQ will not harm you on sane hardware, only generating a few
 * superfluous timer interrupts from the nic.
 */
#ifdef CONFIG_FORCEDETH_NAPI
#define DRIVERNAPI "-NAPI"
#else
#define DRIVERNAPI
#endif
#define FORCEDETH_VERSION "0.60"
#define DRV_NAME "forcedeth"

#include <linux/module.h>
#include <linux/types.h>
#include <linux/pci.h>
#include <linux/interrupt.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/delay.h>
#include <linux/spinlock.h>
#include <linux/ethtool.h>
#include <linux/timer.h>
#include <linux/skbuff.h>
#include <linux/mii.h>
#include <linux/random.h>
#include <linux/init.h>
#include <linux/if_vlan.h>
#include <linux/dma-mapping.h>

#include <asm/irq.h>
#include <asm/io.h>
#include <asm/uaccess.h>
#include <asm/system.h>

#if 0
#define dprintk printk
#else
#define dprintk(x...) do { } while (0)
#endif


/*
 * Hardware access:
 */

#define DEV_NEED_TIMERIRQ       0x0001 /* set the timer irq flag in the irq mask */
#define DEV_NEED_LINKTIMER      0x0002 /* poll link settings. Relies on the timer irq */
#define DEV_HAS_LARGEDESC       0x0004 /* device supports jumbo frames and needs packet format 2 */
#define DEV_HAS_HIGH_DMA        0x0008 /* device supports 64bit dma */
#define DEV_HAS_CHECKSUM        0x0010 /* device supports tx and rx checksum offloads */
#define DEV_HAS_VLAN            0x0020 /* device supports vlan tagging and stripping */
#define DEV_HAS_MSI             0x0040 /* device supports MSI */
#define DEV_HAS_MSI_X           0x0080 /* device supports MSI-X */
#define DEV_HAS_POWER_CNTRL     0x0100 /* device supports power savings */
#define DEV_HAS_PAUSEFRAME_TX   0x0200 /* device supports tx pause frames */
#define DEV_HAS_STATISTICS_V1   0x0400 /* device supports hw statistics version 1 */
#define DEV_HAS_STATISTICS_V2   0x0800 /* device supports hw statistics version 2 */
#define DEV_HAS_TEST_EXTENDED   0x1000 /* device supports extended diagnostic test */
#define DEV_HAS_MGMT_UNIT       0x2000 /* device supports management unit */
#define DEV_HAS_CORRECT_MACADDR 0x4000 /* device supports correct mac address order */

enum {
    NvRegIrqStatus = 0x000,
#define NVREG_IRQSTAT_MIIEVENT  0x040
#define NVREG_IRQSTAT_MASK      0x81ff
    NvRegIrqMask = 0x004,
#define NVREG_IRQ_RX_ERROR      0x0001
#define NVREG_IRQ_RX            0x0002
#define NVREG_IRQ_RX_NOBUF      0x0004
#define NVREG_IRQ_TX_ERR        0x0008
#define NVREG_IRQ_TX_OK         0x0010
#define NVREG_IRQ_TIMER         0x0020
#define NVREG_IRQ_LINK          0x0040
#define NVREG_IRQ_RX_FORCED     0x0080
#define NVREG_IRQ_TX_FORCED     0x0100
#define NVREG_IRQ_RECOVER_ERROR 0x8000
#define NVREG_IRQMASK_THROUGHPUT 0x00df
#define NVREG_IRQMASK_CPU       0x0060
#define NVREG_IRQ_TX_ALL        (NVREG_IRQ_TX_ERR|NVREG_IRQ_TX_OK|NVREG_IRQ_TX_FORCED)
#define NVREG_IRQ_RX_ALL        (NVREG_IRQ_RX_ERROR|NVREG_IRQ_RX|NVREG_IRQ_RX_NOBUF|NVREG_IRQ_RX_FORCED)
#define NVREG_IRQ_OTHER         (NVREG_IRQ_TIMER|NVREG_IRQ_LINK|NVREG_IRQ_RECOVER_ERROR)

#define NVREG_IRQ_UNKNOWN (~(NVREG_IRQ_RX_ERROR|NVREG_IRQ_RX|NVREG_IRQ_RX_NOBUF|NVREG_IRQ_TX_ERR| \
                             NVREG_IRQ_TX_OK|NVREG_IRQ_TIMER|NVREG_IRQ_LINK|NVREG_IRQ_RX_FORCED| \
                             NVREG_IRQ_TX_FORCED|NVREG_IRQ_RECOVER_ERROR))

    NvRegUnknownSetupReg6 = 0x008,
#define NVREG_UNKSETUP6_VAL 3

/*
 * NVREG_POLL_DEFAULT is the interval length of the timer source on the nic
 * NVREG_POLL_DEFAULT=97 would result in an interval length of 1 ms
 */
    NvRegPollingInterval = 0x00c,
#define NVREG_POLL_DEFAULT_THROUGHPUT 970 /* backup tx cleanup if loop max reached */
#define NVREG_POLL_DEFAULT_CPU 13
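/*
 * Editor's note -- illustrative arithmetic only, derived from the formula
 * documented at the poll_interval module parameter further below
 * (value = (time_in_micro_secs * 100) / 2^10, i.e. one unit ~= 10.24 us):
 *   NVREG_POLL_DEFAULT_THROUGHPUT = 970 -> ~9.9 ms between timer irqs
 *   NVREG_POLL_DEFAULT_CPU        =  13 -> ~133 us between timer irqs
 * consistent with the comment above (a value of 97 gives ~1 ms).
 */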
    NvRegMSIMap0 = 0x020,
    NvRegMSIMap1 = 0x024,
    NvRegMSIIrqMask = 0x030,
#define NVREG_MSI_VECTOR_0_ENABLED 0x01
    NvRegMisc1 = 0x080,
#define NVREG_MISC1_PAUSE_TX 0x01
#define NVREG_MISC1_HD 0x02
#define NVREG_MISC1_FORCE 0x3b0f3c

    NvRegMacReset = 0x3c,
#define NVREG_MAC_RESET_ASSERT 0x0F3
    NvRegTransmitterControl = 0x084,
#define NVREG_XMITCTL_START 0x01
#define NVREG_XMITCTL_MGMT_ST 0x40000000
#define NVREG_XMITCTL_SYNC_MASK 0x000f0000
#define NVREG_XMITCTL_SYNC_NOT_READY 0x0
#define NVREG_XMITCTL_SYNC_PHY_INIT 0x00040000
#define NVREG_XMITCTL_MGMT_SEMA_MASK 0x00000f00
#define NVREG_XMITCTL_MGMT_SEMA_FREE 0x0
#define NVREG_XMITCTL_HOST_SEMA_MASK 0x0000f000
#define NVREG_XMITCTL_HOST_SEMA_ACQ 0x0000f000
#define NVREG_XMITCTL_HOST_LOADED 0x00004000
#define NVREG_XMITCTL_TX_PATH_EN 0x01000000
    NvRegTransmitterStatus = 0x088,
#define NVREG_XMITSTAT_BUSY 0x01

    NvRegPacketFilterFlags = 0x8c,
#define NVREG_PFF_PAUSE_RX 0x08
#define NVREG_PFF_ALWAYS 0x7F0000
#define NVREG_PFF_PROMISC 0x80
#define NVREG_PFF_MYADDR 0x20
#define NVREG_PFF_LOOPBACK 0x10

    NvRegOffloadConfig = 0x90,
#define NVREG_OFFLOAD_HOMEPHY 0x601
#define NVREG_OFFLOAD_NORMAL RX_NIC_BUFSIZE
    NvRegReceiverControl = 0x094,
#define NVREG_RCVCTL_START 0x01
#define NVREG_RCVCTL_RX_PATH_EN 0x01000000
    NvRegReceiverStatus = 0x98,
#define NVREG_RCVSTAT_BUSY 0x01

    NvRegRandomSeed = 0x9c,
#define NVREG_RNDSEED_MASK 0x00ff
#define NVREG_RNDSEED_FORCE 0x7f00
#define NVREG_RNDSEED_FORCE2 0x2d00
#define NVREG_RNDSEED_FORCE3 0x7400

    NvRegTxDeferral = 0xA0,
#define NVREG_TX_DEFERRAL_DEFAULT 0x15050f
#define NVREG_TX_DEFERRAL_RGMII_10_100 0x16070f
#define NVREG_TX_DEFERRAL_RGMII_1000 0x14050f
    NvRegRxDeferral = 0xA4,
#define NVREG_RX_DEFERRAL_DEFAULT 0x16
    NvRegMacAddrA = 0xA8,
    NvRegMacAddrB = 0xAC,
    NvRegMulticastAddrA = 0xB0,
#define NVREG_MCASTADDRA_FORCE 0x01
    NvRegMulticastAddrB = 0xB4,
    NvRegMulticastMaskA = 0xB8,
    NvRegMulticastMaskB = 0xBC,

    NvRegPhyInterface = 0xC0,
#define PHY_RGMII 0x10000000

    NvRegTxRingPhysAddr = 0x100,
    NvRegRxRingPhysAddr = 0x104,
    NvRegRingSizes = 0x108,
#define NVREG_RINGSZ_TXSHIFT 0
#define NVREG_RINGSZ_RXSHIFT 16
    NvRegTransmitPoll = 0x10c,
#define NVREG_TRANSMITPOLL_MAC_ADDR_REV 0x00008000
    NvRegLinkSpeed = 0x110,
#define NVREG_LINKSPEED_FORCE 0x10000
#define NVREG_LINKSPEED_10 1000
#define NVREG_LINKSPEED_100 100
#define NVREG_LINKSPEED_1000 50
#define NVREG_LINKSPEED_MASK (0xFFF)
    NvRegUnknownSetupReg5 = 0x130,
#define NVREG_UNKSETUP5_BIT31 (1<<31)
    NvRegTxWatermark = 0x13c,
#define NVREG_TX_WM_DESC1_DEFAULT 0x0200010
#define NVREG_TX_WM_DESC2_3_DEFAULT 0x1e08000
#define NVREG_TX_WM_DESC2_3_1000 0xfe08000
    NvRegTxRxControl = 0x144,
#define NVREG_TXRXCTL_KICK 0x0001
#define NVREG_TXRXCTL_BIT1 0x0002
#define NVREG_TXRXCTL_BIT2 0x0004
#define NVREG_TXRXCTL_IDLE 0x0008
#define NVREG_TXRXCTL_RESET 0x0010
#define NVREG_TXRXCTL_RXCHECK 0x0400
#define NVREG_TXRXCTL_DESC_1 0
#define NVREG_TXRXCTL_DESC_2 0x002100
#define NVREG_TXRXCTL_DESC_3 0xc02200
#define NVREG_TXRXCTL_VLANSTRIP 0x00040
#define NVREG_TXRXCTL_VLANINS 0x00080
    NvRegTxRingPhysAddrHigh = 0x148,
    NvRegRxRingPhysAddrHigh = 0x14C,
    NvRegTxPauseFrame = 0x170,
#define NVREG_TX_PAUSEFRAME_DISABLE 0x1ff0080
#define NVREG_TX_PAUSEFRAME_ENABLE 0x0c00030
    NvRegMIIStatus = 0x180,
#define NVREG_MIISTAT_ERROR 0x0001
#define NVREG_MIISTAT_LINKCHANGE 0x0008
#define NVREG_MIISTAT_MASK 0x000f
#define NVREG_MIISTAT_MASK2 0x000f
    NvRegMIIMask = 0x184,
#define NVREG_MII_LINKCHANGE 0x0008

    NvRegAdapterControl = 0x188,
#define NVREG_ADAPTCTL_START 0x02
#define NVREG_ADAPTCTL_LINKUP 0x04
#define NVREG_ADAPTCTL_PHYVALID 0x40000
#define NVREG_ADAPTCTL_RUNNING 0x100000
#define NVREG_ADAPTCTL_PHYSHIFT 24
    NvRegMIISpeed = 0x18c,
#define NVREG_MIISPEED_BIT8 (1<<8)
#define NVREG_MIIDELAY 5
    NvRegMIIControl = 0x190,
#define NVREG_MIICTL_INUSE 0x08000
#define NVREG_MIICTL_WRITE 0x00400
#define NVREG_MIICTL_ADDRSHIFT 5
    NvRegMIIData = 0x194,
    NvRegWakeUpFlags = 0x200,
#define NVREG_WAKEUPFLAGS_VAL 0x7770
#define NVREG_WAKEUPFLAGS_BUSYSHIFT 24
#define NVREG_WAKEUPFLAGS_ENABLESHIFT 16
#define NVREG_WAKEUPFLAGS_D3SHIFT 12
#define NVREG_WAKEUPFLAGS_D2SHIFT 8
#define NVREG_WAKEUPFLAGS_D1SHIFT 4
#define NVREG_WAKEUPFLAGS_D0SHIFT 0
#define NVREG_WAKEUPFLAGS_ACCEPT_MAGPAT 0x01
#define NVREG_WAKEUPFLAGS_ACCEPT_WAKEUPPAT 0x02
#define NVREG_WAKEUPFLAGS_ACCEPT_LINKCHANGE 0x04
#define NVREG_WAKEUPFLAGS_ENABLE 0x1111

    NvRegPatternCRC = 0x204,
    NvRegPatternMask = 0x208,
    NvRegPowerCap = 0x268,
#define NVREG_POWERCAP_D3SUPP (1<<30)
#define NVREG_POWERCAP_D2SUPP (1<<26)
#define NVREG_POWERCAP_D1SUPP (1<<25)
    NvRegPowerState = 0x26c,
#define NVREG_POWERSTATE_POWEREDUP 0x8000
#define NVREG_POWERSTATE_VALID 0x0100
#define NVREG_POWERSTATE_MASK 0x0003
#define NVREG_POWERSTATE_D0 0x0000
#define NVREG_POWERSTATE_D1 0x0001
#define NVREG_POWERSTATE_D2 0x0002
#define NVREG_POWERSTATE_D3 0x0003
    NvRegTxCnt = 0x280,
    NvRegTxZeroReXmt = 0x284,
    NvRegTxOneReXmt = 0x288,
    NvRegTxManyReXmt = 0x28c,
    NvRegTxLateCol = 0x290,
    NvRegTxUnderflow = 0x294,
    NvRegTxLossCarrier = 0x298,
    NvRegTxExcessDef = 0x29c,
    NvRegTxRetryErr = 0x2a0,
    NvRegRxFrameErr = 0x2a4,
    NvRegRxExtraByte = 0x2a8,
    NvRegRxLateCol = 0x2ac,
    NvRegRxRunt = 0x2b0,
    NvRegRxFrameTooLong = 0x2b4,
    NvRegRxOverflow = 0x2b8,
    NvRegRxFCSErr = 0x2bc,
    NvRegRxFrameAlignErr = 0x2c0,
    NvRegRxLenErr = 0x2c4,
    NvRegRxUnicast = 0x2c8,
    NvRegRxMulticast = 0x2cc,
    NvRegRxBroadcast = 0x2d0,
    NvRegTxDef = 0x2d4,
    NvRegTxFrame = 0x2d8,
    NvRegRxCnt = 0x2dc,
    NvRegTxPause = 0x2e0,
    NvRegRxPause = 0x2e4,
    NvRegRxDropFrame = 0x2e8,
    NvRegVlanControl = 0x300,
#define NVREG_VLANCONTROL_ENABLE 0x2000
    NvRegMSIXMap0 = 0x3e0,
    NvRegMSIXMap1 = 0x3e4,
    NvRegMSIXIrqStatus = 0x3f0,

    NvRegPowerState2 = 0x600,
#define NVREG_POWERSTATE2_POWERUP_MASK 0x0F11
#define NVREG_POWERSTATE2_POWERUP_REV_A3 0x0001
};

/* Big endian: should work, but is untested */
struct ring_desc {
    __le32 buf;
    __le32 flaglen;
};

struct ring_desc_ex {
    __le32 bufhigh;
    __le32 buflow;
    __le32 txvlan;
    __le32 flaglen;
};

union ring_type {
    struct ring_desc *orig;
    struct ring_desc_ex *ex;
};

#define FLAG_MASK_V1 0xffff0000
#define FLAG_MASK_V2 0xffffc000
#define LEN_MASK_V1 (0xffffffff ^ FLAG_MASK_V1)
#define LEN_MASK_V2 (0xffffffff ^ FLAG_MASK_V2)
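/*
 * Editor's note (derived from the masks above, not from vendor docs):
 * flaglen packs both fields of a descriptor word. With FLAG_MASK_V1 =
 * 0xffff0000, a v1 descriptor keeps the buffer length in the low 16 bits
 * and status flags in the high 16; FLAG_MASK_V2 = 0xffffc000 shrinks the
 * length field to the low 14 bits for v2/v3 descriptors. This is what
 * nv_descr_getlength()/nv_descr_getlength_ex() below rely on.
 */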
#define NV_TX_LASTPACKET (1<<16)
#define NV_TX_RETRYERROR (1<<19)
#define NV_TX_FORCED_INTERRUPT (1<<24)
#define NV_TX_DEFERRED (1<<26)
#define NV_TX_CARRIERLOST (1<<27)
#define NV_TX_LATECOLLISION (1<<28)
#define NV_TX_UNDERFLOW (1<<29)
#define NV_TX_ERROR (1<<30)
#define NV_TX_VALID (1<<31)

#define NV_TX2_LASTPACKET (1<<29)
#define NV_TX2_RETRYERROR (1<<18)
#define NV_TX2_FORCED_INTERRUPT (1<<30)
#define NV_TX2_DEFERRED (1<<25)
#define NV_TX2_CARRIERLOST (1<<26)
#define NV_TX2_LATECOLLISION (1<<27)
#define NV_TX2_UNDERFLOW (1<<28)
/* error and valid are the same for both */
#define NV_TX2_ERROR (1<<30)
#define NV_TX2_VALID (1<<31)
#define NV_TX2_TSO (1<<28)
#define NV_TX2_TSO_SHIFT 14
#define NV_TX2_TSO_MAX_SHIFT 14
#define NV_TX2_TSO_MAX_SIZE (1<<NV_TX2_TSO_MAX_SHIFT)
#define NV_TX2_CHECKSUM_L3 (1<<27)
#define NV_TX2_CHECKSUM_L4 (1<<26)

#define NV_TX3_VLAN_TAG_PRESENT (1<<18)

#define NV_RX_DESCRIPTORVALID (1<<16)
#define NV_RX_MISSEDFRAME (1<<17)
#define NV_RX_SUBSTRACT1 (1<<18)
#define NV_RX_ERROR1 (1<<23)
#define NV_RX_ERROR2 (1<<24)
#define NV_RX_ERROR3 (1<<25)
#define NV_RX_ERROR4 (1<<26)
#define NV_RX_CRCERR (1<<27)
#define NV_RX_OVERFLOW (1<<28)
#define NV_RX_FRAMINGERR (1<<29)
#define NV_RX_ERROR (1<<30)
#define NV_RX_AVAIL (1<<31)

#define NV_RX2_CHECKSUMMASK (0x1C000000)
#define NV_RX2_CHECKSUMOK1 (0x10000000)
#define NV_RX2_CHECKSUMOK2 (0x14000000)
#define NV_RX2_CHECKSUMOK3 (0x18000000)
#define NV_RX2_DESCRIPTORVALID (1<<29)
#define NV_RX2_SUBSTRACT1 (1<<25)
#define NV_RX2_ERROR1 (1<<18)
#define NV_RX2_ERROR2 (1<<19)
#define NV_RX2_ERROR3 (1<<20)
#define NV_RX2_ERROR4 (1<<21)
#define NV_RX2_CRCERR (1<<22)
#define NV_RX2_OVERFLOW (1<<23)
#define NV_RX2_FRAMINGERR (1<<24)
/* error and avail are the same for both */
#define NV_RX2_ERROR (1<<30)
#define NV_RX2_AVAIL (1<<31)

#define NV_RX3_VLAN_TAG_PRESENT (1<<16)
#define NV_RX3_VLAN_TAG_MASK (0x0000FFFF)

/* Miscellaneous hardware related defines: */
#define NV_PCI_REGSZ_VER1 0x270
#define NV_PCI_REGSZ_VER2 0x2d4
#define NV_PCI_REGSZ_VER3 0x604

/* various timeout delays: all in usec */
#define NV_TXRX_RESET_DELAY 4
#define NV_TXSTOP_DELAY1 10
#define NV_TXSTOP_DELAY1MAX 500000
#define NV_TXSTOP_DELAY2 100
#define NV_RXSTOP_DELAY1 10
#define NV_RXSTOP_DELAY1MAX 500000
#define NV_RXSTOP_DELAY2 100
#define NV_SETUP5_DELAY 5
#define NV_SETUP5_DELAYMAX 50000
#define NV_POWERUP_DELAY 5
#define NV_POWERUP_DELAYMAX 5000
#define NV_MIIBUSY_DELAY 50
#define NV_MIIPHY_DELAY 10
#define NV_MIIPHY_DELAYMAX 10000
#define NV_MAC_RESET_DELAY 64

#define NV_WAKEUPPATTERNS 5
#define NV_WAKEUPMASKENTRIES 4

/* General driver defaults */
#define NV_WATCHDOG_TIMEO (5*HZ)

#define RX_RING_DEFAULT 128
#define TX_RING_DEFAULT 256
#define RX_RING_MIN 128
#define TX_RING_MIN 64
#define RING_MAX_DESC_VER_1 1024
#define RING_MAX_DESC_VER_2_3 16384

/* rx/tx mac addr + type + vlan + align + slack*/
#define NV_RX_HEADERS (64)
/* even more slack. */
#define NV_RX_ALLOC_PAD (64)

/* maximum mtu size */
#define NV_PKTLIMIT_1 ETH_DATA_LEN /* hard limit not known */
#define NV_PKTLIMIT_2 9100 /* Actual limit according to NVidia: 9202 */

#define OOM_REFILL (1+HZ/20)
#define POLL_WAIT (1+HZ/100)
#define LINK_TIMEOUT (3*HZ)
#define STATS_INTERVAL (10*HZ)

/*
 * desc_ver values:
 * The nic supports three different descriptor types:
 * - DESC_VER_1: Original
 * - DESC_VER_2: support for jumbo frames.
 * - DESC_VER_3: 64-bit format.
 */
#define DESC_VER_1 1
#define DESC_VER_2 2
#define DESC_VER_3 3
/* PHY defines */
#define PHY_OUI_MARVELL 0x5043
#define PHY_OUI_CICADA 0x03f1
#define PHY_OUI_VITESSE 0x01c1
#define PHY_OUI_REALTEK 0x0732
#define PHYID1_OUI_MASK 0x03ff
#define PHYID1_OUI_SHFT 6
#define PHYID2_OUI_MASK 0xfc00
#define PHYID2_OUI_SHFT 10
#define PHYID2_MODEL_MASK 0x03f0
#define PHY_MODEL_MARVELL_E3016 0x220
#define PHY_MARVELL_E3016_INITMASK 0x0300
#define PHY_CICADA_INIT1 0x0f000
#define PHY_CICADA_INIT2 0x0e00
#define PHY_CICADA_INIT3 0x01000
#define PHY_CICADA_INIT4 0x0200
#define PHY_CICADA_INIT5 0x0004
#define PHY_CICADA_INIT6 0x02000
#define PHY_VITESSE_INIT_REG1 0x1f
#define PHY_VITESSE_INIT_REG2 0x10
#define PHY_VITESSE_INIT_REG3 0x11
#define PHY_VITESSE_INIT_REG4 0x12
#define PHY_VITESSE_INIT_MSK1 0xc
#define PHY_VITESSE_INIT_MSK2 0x0180
#define PHY_VITESSE_INIT1 0x52b5
#define PHY_VITESSE_INIT2 0xaf8a
#define PHY_VITESSE_INIT3 0x8
#define PHY_VITESSE_INIT4 0x8f8a
#define PHY_VITESSE_INIT5 0xaf86
#define PHY_VITESSE_INIT6 0x8f86
#define PHY_VITESSE_INIT7 0xaf82
#define PHY_VITESSE_INIT8 0x0100
#define PHY_VITESSE_INIT9 0x8f82
#define PHY_VITESSE_INIT10 0x0
#define PHY_REALTEK_INIT_REG1 0x1f
#define PHY_REALTEK_INIT_REG2 0x19
#define PHY_REALTEK_INIT_REG3 0x13
#define PHY_REALTEK_INIT1 0x0000
#define PHY_REALTEK_INIT2 0x8e00
#define PHY_REALTEK_INIT3 0x0001
#define PHY_REALTEK_INIT4 0xad17

#define PHY_GIGABIT 0x0100

#define PHY_TIMEOUT 0x1
#define PHY_ERROR 0x2

#define PHY_100 0x1
#define PHY_1000 0x2
#define PHY_HALF 0x100

#define NV_PAUSEFRAME_RX_CAPABLE 0x0001
#define NV_PAUSEFRAME_TX_CAPABLE 0x0002
#define NV_PAUSEFRAME_RX_ENABLE 0x0004
#define NV_PAUSEFRAME_TX_ENABLE 0x0008
#define NV_PAUSEFRAME_RX_REQ 0x0010
#define NV_PAUSEFRAME_TX_REQ 0x0020
#define NV_PAUSEFRAME_AUTONEG 0x0040

/* MSI/MSI-X defines */
#define NV_MSI_X_MAX_VECTORS 8
#define NV_MSI_X_VECTORS_MASK 0x000f
#define NV_MSI_CAPABLE 0x0010
#define NV_MSI_X_CAPABLE 0x0020
#define NV_MSI_ENABLED 0x0040
#define NV_MSI_X_ENABLED 0x0080

#define NV_MSI_X_VECTOR_ALL 0x0
#define NV_MSI_X_VECTOR_RX 0x0
#define NV_MSI_X_VECTOR_TX 0x1
#define NV_MSI_X_VECTOR_OTHER 0x2

/* statistics */
struct nv_ethtool_str {
    char name[ETH_GSTRING_LEN];
};

static const struct nv_ethtool_str nv_estats_str[] = {
    { "tx_bytes" },
    { "tx_zero_rexmt" },
    { "tx_one_rexmt" },
    { "tx_many_rexmt" },
    { "tx_late_collision" },
    { "tx_fifo_errors" },
    { "tx_carrier_errors" },
    { "tx_excess_deferral" },
    { "tx_retry_error" },
    { "rx_frame_error" },
    { "rx_extra_byte" },
    { "rx_late_collision" },
    { "rx_runt" },
    { "rx_frame_too_long" },
    { "rx_over_errors" },
    { "rx_crc_errors" },
    { "rx_frame_align_error" },
    { "rx_length_error" },
    { "rx_unicast" },
    { "rx_multicast" },
    { "rx_broadcast" },
    { "rx_packets" },
    { "rx_errors_total" },
    { "tx_errors_total" },

    /* version 2 stats */
    { "tx_deferral" },
    { "tx_packets" },
    { "rx_bytes" },
    { "tx_pause" },
    { "rx_pause" },
    { "rx_drop_frame" }
};

struct nv_ethtool_stats {
    u64 tx_bytes;
    u64 tx_zero_rexmt;
    u64 tx_one_rexmt;
    u64 tx_many_rexmt;
    u64 tx_late_collision;
    u64 tx_fifo_errors;
    u64 tx_carrier_errors;
    u64 tx_excess_deferral;
    u64 tx_retry_error;
    u64 rx_frame_error;
    u64 rx_extra_byte;
    u64 rx_late_collision;
    u64 rx_runt;
    u64 rx_frame_too_long;
    u64 rx_over_errors;
    u64 rx_crc_errors;
    u64 rx_frame_align_error;
    u64 rx_length_error;
    u64 rx_unicast;
    u64 rx_multicast;
    u64 rx_broadcast;
    u64 rx_packets;
    u64 rx_errors_total;
    u64 tx_errors_total;

    /* version 2 stats */
    u64 tx_deferral;
    u64 tx_packets;
    u64 rx_bytes;
    u64 tx_pause;
    u64 rx_pause;
    u64 rx_drop_frame;
};

#define NV_DEV_STATISTICS_V2_COUNT (sizeof(struct nv_ethtool_stats)/sizeof(u64))
#define NV_DEV_STATISTICS_V1_COUNT (NV_DEV_STATISTICS_V2_COUNT - 6)
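/*
 * Editor's note: the six counters after the "version 2 stats" markers above
 * are exactly what NV_DEV_STATISTICS_V1_COUNT subtracts, so hardware with
 * only DEV_HAS_STATISTICS_V1 exposes everything up to tx_errors_total.
 */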
/* diagnostics */
#define NV_TEST_COUNT_BASE 3
#define NV_TEST_COUNT_EXTENDED 4

static const struct nv_ethtool_str nv_etests_str[] = {
    { "link (online/offline)" },
    { "register (offline) " },
    { "interrupt (offline) " },
    { "loopback (offline) " }
};

struct register_test {
    __le32 reg;
    __le32 mask;
};

static const struct register_test nv_registers_test[] = {
    { NvRegUnknownSetupReg6, 0x01 },
    { NvRegMisc1, 0x03c },
    { NvRegOffloadConfig, 0x03ff },
    { NvRegMulticastAddrA, 0xffffffff },
    { NvRegTxWatermark, 0x0ff },
    { NvRegWakeUpFlags, 0x07777 },
    { 0, 0 }
};

struct nv_skb_map {
    struct sk_buff *skb;
    dma_addr_t dma;
    unsigned int dma_len;
};

/*
 * SMP locking:
 * All hardware access under dev->priv->lock, except the performance
 * critical parts:
 * - rx is (pseudo-) lockless: it relies on the single-threading provided
 *   by the arch code for interrupts.
 * - tx setup is lockless: it relies on netif_tx_lock. Actual submission
 *   needs dev->priv->lock :-(
 * - set_multicast_list: preparation lockless, relies on netif_tx_lock.
 */
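/*
 * Editor's sketch (assumed pattern, mirroring the helpers defined below,
 * not vendor guidance): a slow-path register update under this model
 * looks like
 *
 *     struct fe_priv *np = netdev_priv(dev);
 *
 *     spin_lock_irq(&np->lock);
 *     ...reprogram registers / ring bookkeeping...
 *     spin_unlock_irq(&np->lock);
 *
 * while the hot tx path runs under netif_tx_lock held by the net core and
 * only takes np->lock around the final descriptor hand-off (see
 * nv_start_xmit below).
 */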
"rx_unicast" }, 648 { "rx_multicast" }, 649 { "rx_broadcast" }, 650 { "rx_packets" }, 651 { "rx_errors_total" }, 652 { "tx_errors_total" }, 653 654 /* version 2 stats */ 655 { "tx_deferral" }, 656 { "tx_packets" }, 657 { "rx_bytes" }, 658 { "tx_pause" }, 659 { "rx_pause" }, 660 { "rx_drop_frame" } 661}; 662 663struct nv_ethtool_stats { 664 u64 tx_bytes; 665 u64 tx_zero_rexmt; 666 u64 tx_one_rexmt; 667 u64 tx_many_rexmt; 668 u64 tx_late_collision; 669 u64 tx_fifo_errors; 670 u64 tx_carrier_errors; 671 u64 tx_excess_deferral; 672 u64 tx_retry_error; 673 u64 rx_frame_error; 674 u64 rx_extra_byte; 675 u64 rx_late_collision; 676 u64 rx_runt; 677 u64 rx_frame_too_long; 678 u64 rx_over_errors; 679 u64 rx_crc_errors; 680 u64 rx_frame_align_error; 681 u64 rx_length_error; 682 u64 rx_unicast; 683 u64 rx_multicast; 684 u64 rx_broadcast; 685 u64 rx_packets; 686 u64 rx_errors_total; 687 u64 tx_errors_total; 688 689 /* version 2 stats */ 690 u64 tx_deferral; 691 u64 tx_packets; 692 u64 rx_bytes; 693 u64 tx_pause; 694 u64 rx_pause; 695 u64 rx_drop_frame; 696}; 697 698#define NV_DEV_STATISTICS_V2_COUNT (sizeof(struct nv_ethtool_stats)/sizeof(u64)) 699#define NV_DEV_STATISTICS_V1_COUNT (NV_DEV_STATISTICS_V2_COUNT - 6) 700 701/* diagnostics */ 702#define NV_TEST_COUNT_BASE 3 703#define NV_TEST_COUNT_EXTENDED 4 704 705static const struct nv_ethtool_str nv_etests_str[] = { 706 { "link (online/offline)" }, 707 { "register (offline) " }, 708 { "interrupt (offline) " }, 709 { "loopback (offline) " } 710}; 711 712struct register_test { 713 __le32 reg; 714 __le32 mask; 715}; 716 717static const struct register_test nv_registers_test[] = { 718 { NvRegUnknownSetupReg6, 0x01 }, 719 { NvRegMisc1, 0x03c }, 720 { NvRegOffloadConfig, 0x03ff }, 721 { NvRegMulticastAddrA, 0xffffffff }, 722 { NvRegTxWatermark, 0x0ff }, 723 { NvRegWakeUpFlags, 0x07777 }, 724 { 0,0 } 725}; 726 727struct nv_skb_map { 728 struct sk_buff *skb; 729 dma_addr_t dma; 730 unsigned int dma_len; 731}; 732 733/* 734 * SMP locking: 735 * All hardware access under dev->priv->lock, except the performance 736 * critical parts: 737 * - rx is (pseudo-) lockless: it relies on the single-threading provided 738 * by the arch code for interrupts. 739 * - tx setup is lockless: it relies on netif_tx_lock. Actual submission 740 * needs dev->priv->lock :-( 741 * - set_multicast_list: preparation lockless, relies on netif_tx_lock. 742 */ 743 744/* in dev: base, irq */ 745struct fe_priv { 746 spinlock_t lock; 747 748 /* General data: 749 * Locking: spin_lock(&np->lock); */ 750 struct net_device_stats stats; 751 struct nv_ethtool_stats estats; 752 int in_shutdown; 753 u32 linkspeed; 754 int duplex; 755 int autoneg; 756 int fixed_mode; 757 int phyaddr; 758 int wolenabled; 759 unsigned int phy_oui; 760 unsigned int phy_model; 761 u16 gigabit; 762 int intr_test; 763 int recover_error; 764 765 /* General data: RO fields */ 766 dma_addr_t ring_addr; 767 struct pci_dev *pci_dev; 768 u32 orig_mac[2]; 769 u32 irqmask; 770 u32 desc_ver; 771 u32 txrxctl_bits; 772 u32 vlanctl_bits; 773 u32 driver_data; 774 u32 register_size; 775 int rx_csum; 776 u32 mac_in_use; 777 778 void __iomem *base; 779 780 /* rx specific fields. 
static inline u32 nv_descr_getlength(struct ring_desc *prd, u32 v)
{
    return le32_to_cpu(prd->flaglen)
        & ((v == DESC_VER_1) ? LEN_MASK_V1 : LEN_MASK_V2);
}

static inline u32 nv_descr_getlength_ex(struct ring_desc_ex *prd, u32 v)
{
    return le32_to_cpu(prd->flaglen) & LEN_MASK_V2;
}

static int reg_delay(struct net_device *dev, int offset, u32 mask, u32 target,
                     int delay, int delaymax, const char *msg)
{
    u8 __iomem *base = get_hwbase(dev);

    pci_push(base);
    do {
        udelay(delay);
        delaymax -= delay;
        if (delaymax < 0) {
            if (msg)
                printk(msg);
            return 1;
        }
    } while ((readl(base + offset) & mask) != target);
    return 0;
}

#define NV_SETUP_RX_RING 0x01
#define NV_SETUP_TX_RING 0x02

static void setup_hw_rings(struct net_device *dev, int rxtx_flags)
{
    struct fe_priv *np = get_nvpriv(dev);
    u8 __iomem *base = get_hwbase(dev);

    if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
        if (rxtx_flags & NV_SETUP_RX_RING) {
            writel((u32) cpu_to_le64(np->ring_addr), base + NvRegRxRingPhysAddr);
        }
        if (rxtx_flags & NV_SETUP_TX_RING) {
            writel((u32) cpu_to_le64(np->ring_addr + np->rx_ring_size*sizeof(struct ring_desc)), base + NvRegTxRingPhysAddr);
        }
    } else {
        if (rxtx_flags & NV_SETUP_RX_RING) {
            writel((u32) cpu_to_le64(np->ring_addr), base + NvRegRxRingPhysAddr);
            writel((u32) (cpu_to_le64(np->ring_addr) >> 32), base + NvRegRxRingPhysAddrHigh);
        }
        if (rxtx_flags & NV_SETUP_TX_RING) {
            writel((u32) cpu_to_le64(np->ring_addr + np->rx_ring_size*sizeof(struct ring_desc_ex)), base + NvRegTxRingPhysAddr);
            writel((u32) (cpu_to_le64(np->ring_addr + np->rx_ring_size*sizeof(struct ring_desc_ex)) >> 32), base + NvRegTxRingPhysAddrHigh);
        }
    }
}

static void free_rings(struct net_device *dev)
{
    struct fe_priv *np = get_nvpriv(dev);

    if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
        if (np->rx_ring.orig)
            pci_free_consistent(np->pci_dev, sizeof(struct ring_desc) * (np->rx_ring_size + np->tx_ring_size),
                                np->rx_ring.orig, np->ring_addr);
    } else {
        if (np->rx_ring.ex)
            pci_free_consistent(np->pci_dev, sizeof(struct ring_desc_ex) * (np->rx_ring_size + np->tx_ring_size),
                                np->rx_ring.ex, np->ring_addr);
    }
    if (np->rx_skb)
        kfree(np->rx_skb);
    if (np->tx_skb)
        kfree(np->tx_skb);
}

static int using_multi_irqs(struct net_device *dev)
{
    struct fe_priv *np = get_nvpriv(dev);

    if (!(np->msi_flags & NV_MSI_X_ENABLED) ||
        ((np->msi_flags & NV_MSI_X_ENABLED) &&
         ((np->msi_flags & NV_MSI_X_VECTORS_MASK) == 0x1)))
        return 0;
    else
        return 1;
}
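/*
 * Editor's note: the condition above simplifies to "MSI-X enabled with
 * more than one vector allocated". Legacy/MSI interrupts and the
 * single-vector MSI-X fallback are both treated as one shared irq.
 */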
static void nv_enable_irq(struct net_device *dev)
{
    struct fe_priv *np = get_nvpriv(dev);

    if (!using_multi_irqs(dev)) {
        if (np->msi_flags & NV_MSI_X_ENABLED)
            enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector);
        else
            enable_irq(dev->irq);
    } else {
        enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector);
        enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector);
        enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_OTHER].vector);
    }
}

static void nv_disable_irq(struct net_device *dev)
{
    struct fe_priv *np = get_nvpriv(dev);

    if (!using_multi_irqs(dev)) {
        if (np->msi_flags & NV_MSI_X_ENABLED)
            disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector);
        else
            disable_irq(dev->irq);
    } else {
        disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector);
        disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector);
        disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_OTHER].vector);
    }
}

/* In MSIX mode, a write to irqmask behaves as XOR */
static void nv_enable_hw_interrupts(struct net_device *dev, u32 mask)
{
    u8 __iomem *base = get_hwbase(dev);

    writel(mask, base + NvRegIrqMask);
}

static void nv_disable_hw_interrupts(struct net_device *dev, u32 mask)
{
    struct fe_priv *np = get_nvpriv(dev);
    u8 __iomem *base = get_hwbase(dev);

    if (np->msi_flags & NV_MSI_X_ENABLED) {
        writel(mask, base + NvRegIrqMask);
    } else {
        if (np->msi_flags & NV_MSI_ENABLED)
            writel(0, base + NvRegMSIIrqMask);
        writel(0, base + NvRegIrqMask);
    }
}

#define MII_READ (-1)
/* mii_rw: read/write a register on the PHY.
 *
 * Caller must guarantee serialization
 */
static int mii_rw(struct net_device *dev, int addr, int miireg, int value)
{
    u8 __iomem *base = get_hwbase(dev);
    u32 reg;
    int retval;

    writel(NVREG_MIISTAT_MASK, base + NvRegMIIStatus);

    reg = readl(base + NvRegMIIControl);
    if (reg & NVREG_MIICTL_INUSE) {
        writel(NVREG_MIICTL_INUSE, base + NvRegMIIControl);
        udelay(NV_MIIBUSY_DELAY);
    }

    reg = (addr << NVREG_MIICTL_ADDRSHIFT) | miireg;
    if (value != MII_READ) {
        writel(value, base + NvRegMIIData);
        reg |= NVREG_MIICTL_WRITE;
    }
    writel(reg, base + NvRegMIIControl);

    if (reg_delay(dev, NvRegMIIControl, NVREG_MIICTL_INUSE, 0,
                  NV_MIIPHY_DELAY, NV_MIIPHY_DELAYMAX, NULL)) {
        dprintk(KERN_DEBUG "%s: mii_rw of reg %d at PHY %d timed out.\n",
                dev->name, miireg, addr);
        retval = -1;
    } else if (value != MII_READ) {
        /* it was a write operation - fewer failures are detectable */
        dprintk(KERN_DEBUG "%s: mii_rw wrote 0x%x to reg %d at PHY %d\n",
                dev->name, value, miireg, addr);
        retval = 0;
    } else if (readl(base + NvRegMIIStatus) & NVREG_MIISTAT_ERROR) {
        dprintk(KERN_DEBUG "%s: mii_rw of reg %d at PHY %d failed.\n",
                dev->name, miireg, addr);
        retval = -1;
    } else {
        retval = readl(base + NvRegMIIData);
        dprintk(KERN_DEBUG "%s: mii_rw read from reg %d at PHY %d: 0x%x.\n",
                dev->name, miireg, addr, retval);
    }

    return retval;
}
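/*
 * Editor's example (usage implied by the MII_READ convention above;
 * illustrative, not an excerpt from the driver): passing MII_READ as the
 * value selects a read cycle, anything else writes:
 *
 *     int bmsr = mii_rw(dev, np->phyaddr, MII_BMSR, MII_READ);
 *     if (bmsr != -1 && (bmsr & BMSR_LSTATUS))
 *         ...link is up...
 *
 *     mii_rw(dev, np->phyaddr, MII_BMCR, BMCR_ANENABLE | BMCR_ANRESTART);
 *
 * Callers must serialize, as the comment above mii_rw() requires.
 */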
static int phy_reset(struct net_device *dev, u32 bmcr_setup)
{
    struct fe_priv *np = netdev_priv(dev);
    u32 miicontrol;
    unsigned int tries = 0;

    miicontrol = BMCR_RESET | bmcr_setup;
    if (mii_rw(dev, np->phyaddr, MII_BMCR, miicontrol)) {
        return -1;
    }

    /* wait for 500ms */
    msleep(500);

    /* must wait till reset is deasserted */
    while (miicontrol & BMCR_RESET) {
        msleep(10);
        miicontrol = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ);
        /* FIXME: 100 tries seem excessive */
        if (tries++ > 100)
            return -1;
    }
    return 0;
}

static int phy_init(struct net_device *dev)
{
    struct fe_priv *np = get_nvpriv(dev);
    u8 __iomem *base = get_hwbase(dev);
    u32 phyinterface, phy_reserved, mii_status, mii_control, mii_control_1000, reg;

    /* phy errata for E3016 phy */
    if (np->phy_model == PHY_MODEL_MARVELL_E3016) {
        reg = mii_rw(dev, np->phyaddr, MII_NCONFIG, MII_READ);
        reg &= ~PHY_MARVELL_E3016_INITMASK;
        if (mii_rw(dev, np->phyaddr, MII_NCONFIG, reg)) {
            printk(KERN_INFO "%s: phy write to errata reg failed.\n", pci_name(np->pci_dev));
            return PHY_ERROR;
        }
    }
    if (np->phy_oui == PHY_OUI_REALTEK) {
        if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG1, PHY_REALTEK_INIT1)) {
            printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
            return PHY_ERROR;
        }
        if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG2, PHY_REALTEK_INIT2)) {
            printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
            return PHY_ERROR;
        }
        if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG1, PHY_REALTEK_INIT3)) {
            printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
            return PHY_ERROR;
        }
        if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG3, PHY_REALTEK_INIT4)) {
            printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
            return PHY_ERROR;
        }
        if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG1, PHY_REALTEK_INIT1)) {
            printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
            return PHY_ERROR;
        }
    }

    /* set advertise register */
    reg = mii_rw(dev, np->phyaddr, MII_ADVERTISE, MII_READ);
    reg |= (ADVERTISE_10HALF|ADVERTISE_10FULL|ADVERTISE_100HALF|ADVERTISE_100FULL|ADVERTISE_PAUSE_ASYM|ADVERTISE_PAUSE_CAP);
    if (mii_rw(dev, np->phyaddr, MII_ADVERTISE, reg)) {
        printk(KERN_INFO "%s: phy write to advertise failed.\n", pci_name(np->pci_dev));
        return PHY_ERROR;
    }

    /* get phy interface type */
    phyinterface = readl(base + NvRegPhyInterface);

    /* see if gigabit phy */
    mii_status = mii_rw(dev, np->phyaddr, MII_BMSR, MII_READ);
    if (mii_status & PHY_GIGABIT) {
        np->gigabit = PHY_GIGABIT;
        mii_control_1000 = mii_rw(dev, np->phyaddr, MII_CTRL1000, MII_READ);
        mii_control_1000 &= ~ADVERTISE_1000HALF;
        if (phyinterface & PHY_RGMII)
            mii_control_1000 |= ADVERTISE_1000FULL;
        else
            mii_control_1000 &= ~ADVERTISE_1000FULL;

        if (mii_rw(dev, np->phyaddr, MII_CTRL1000, mii_control_1000)) {
            printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
            return PHY_ERROR;
        }
    }
    else
        np->gigabit = 0;

    mii_control = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ);
    mii_control |= BMCR_ANENABLE;

    /* reset the phy
     * (certain phys need bmcr to be setup with reset)
     */
    if (phy_reset(dev, mii_control)) {
        printk(KERN_INFO "%s: phy reset failed\n", pci_name(np->pci_dev));
        return PHY_ERROR;
    }

    /* phy vendor specific configuration */
    if ((np->phy_oui == PHY_OUI_CICADA) && (phyinterface & PHY_RGMII)) {
        phy_reserved = mii_rw(dev, np->phyaddr, MII_RESV1, MII_READ);
        phy_reserved &= ~(PHY_CICADA_INIT1 | PHY_CICADA_INIT2);
        phy_reserved |= (PHY_CICADA_INIT3 | PHY_CICADA_INIT4);
        if (mii_rw(dev, np->phyaddr, MII_RESV1, phy_reserved)) {
            printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
            return PHY_ERROR;
        }
        phy_reserved = mii_rw(dev, np->phyaddr, MII_NCONFIG, MII_READ);
        phy_reserved |= PHY_CICADA_INIT5;
        if (mii_rw(dev, np->phyaddr, MII_NCONFIG, phy_reserved)) {
            printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
            return PHY_ERROR;
        }
    }
    if (np->phy_oui == PHY_OUI_CICADA) {
        phy_reserved = mii_rw(dev, np->phyaddr, MII_SREVISION, MII_READ);
        phy_reserved |= PHY_CICADA_INIT6;
        if (mii_rw(dev, np->phyaddr, MII_SREVISION, phy_reserved)) {
            printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
            return PHY_ERROR;
        }
    }
    if (np->phy_oui == PHY_OUI_VITESSE) {
        if (mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG1, PHY_VITESSE_INIT1)) {
            printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
            return PHY_ERROR;
        }
        if (mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG2, PHY_VITESSE_INIT2)) {
            printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
            return PHY_ERROR;
        }
        phy_reserved = mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG4, MII_READ);
        if (mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG4, phy_reserved)) {
            printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
            return PHY_ERROR;
        }
        phy_reserved = mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG3, MII_READ);
        phy_reserved &= ~PHY_VITESSE_INIT_MSK1;
        phy_reserved |= PHY_VITESSE_INIT3;
        if (mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG3, phy_reserved)) {
            printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
            return PHY_ERROR;
        }
        if (mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG2, PHY_VITESSE_INIT4)) {
            printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
            return PHY_ERROR;
        }
        if (mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG2, PHY_VITESSE_INIT5)) {
            printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
            return PHY_ERROR;
        }
        phy_reserved = mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG4, MII_READ);
        phy_reserved &= ~PHY_VITESSE_INIT_MSK1;
        phy_reserved |= PHY_VITESSE_INIT3;
        if (mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG4, phy_reserved)) {
            printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
            return PHY_ERROR;
        }
        phy_reserved = mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG3, MII_READ);
        if (mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG3, phy_reserved)) {
            printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
            return PHY_ERROR;
        }
        if (mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG2, PHY_VITESSE_INIT6)) {
            printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
            return PHY_ERROR;
        }
        if (mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG2, PHY_VITESSE_INIT7)) {
            printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
            return PHY_ERROR;
        }
        phy_reserved = mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG4, MII_READ);
        if (mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG4, phy_reserved)) {
            printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
            return PHY_ERROR;
        }
        phy_reserved = mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG3, MII_READ);
        phy_reserved &= ~PHY_VITESSE_INIT_MSK2;
        phy_reserved |= PHY_VITESSE_INIT8;
        if (mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG3, phy_reserved)) {
            printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
            return PHY_ERROR;
        }
        if (mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG2, PHY_VITESSE_INIT9)) {
            printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
            return PHY_ERROR;
        }
        if (mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG1, PHY_VITESSE_INIT10)) {
            printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
            return PHY_ERROR;
        }
    }
    if (np->phy_oui == PHY_OUI_REALTEK) {
        /* reset could have cleared these out, set them back */
        if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG1, PHY_REALTEK_INIT1)) {
            printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
            return PHY_ERROR;
        }
        if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG2, PHY_REALTEK_INIT2)) {
            printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
            return PHY_ERROR;
        }
        if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG1, PHY_REALTEK_INIT3)) {
            printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
            return PHY_ERROR;
        }
        if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG3, PHY_REALTEK_INIT4)) {
            printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
            return PHY_ERROR;
        }
        if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG1, PHY_REALTEK_INIT1)) {
            printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
            return PHY_ERROR;
        }
    }

    /* some phys clear out pause advertisement on reset, set it back */
    mii_rw(dev, np->phyaddr, MII_ADVERTISE, reg);

    /* restart auto negotiation */
    mii_control = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ);
    mii_control |= (BMCR_ANRESTART | BMCR_ANENABLE);
    if (mii_rw(dev, np->phyaddr, MII_BMCR, mii_control)) {
        return PHY_ERROR;
    }

    return 0;
}

static void nv_start_rx(struct net_device *dev)
{
    struct fe_priv *np = netdev_priv(dev);
    u8 __iomem *base = get_hwbase(dev);
    u32 rx_ctrl = readl(base + NvRegReceiverControl);

    dprintk(KERN_DEBUG "%s: nv_start_rx\n", dev->name);
    /* Already running? Stop it. */
    if ((readl(base + NvRegReceiverControl) & NVREG_RCVCTL_START) && !np->mac_in_use) {
        rx_ctrl &= ~NVREG_RCVCTL_START;
        writel(rx_ctrl, base + NvRegReceiverControl);
        pci_push(base);
    }
    writel(np->linkspeed, base + NvRegLinkSpeed);
    pci_push(base);
    rx_ctrl |= NVREG_RCVCTL_START;
    if (np->mac_in_use)
        rx_ctrl &= ~NVREG_RCVCTL_RX_PATH_EN;
    writel(rx_ctrl, base + NvRegReceiverControl);
    dprintk(KERN_DEBUG "%s: nv_start_rx to duplex %d, speed 0x%08x.\n",
            dev->name, np->duplex, np->linkspeed);
    pci_push(base);
}

static void nv_stop_rx(struct net_device *dev)
{
    struct fe_priv *np = netdev_priv(dev);
    u8 __iomem *base = get_hwbase(dev);
    u32 rx_ctrl = readl(base + NvRegReceiverControl);

    dprintk(KERN_DEBUG "%s: nv_stop_rx\n", dev->name);
    if (!np->mac_in_use)
        rx_ctrl &= ~NVREG_RCVCTL_START;
    else
        rx_ctrl |= NVREG_RCVCTL_RX_PATH_EN;
    writel(rx_ctrl, base + NvRegReceiverControl);
    reg_delay(dev, NvRegReceiverStatus, NVREG_RCVSTAT_BUSY, 0,
              NV_RXSTOP_DELAY1, NV_RXSTOP_DELAY1MAX,
              KERN_INFO "nv_stop_rx: ReceiverStatus remained busy");

    udelay(NV_RXSTOP_DELAY2);
    if (!np->mac_in_use)
        writel(0, base + NvRegLinkSpeed);
}

static void nv_start_tx(struct net_device *dev)
{
    struct fe_priv *np = netdev_priv(dev);
    u8 __iomem *base = get_hwbase(dev);
    u32 tx_ctrl = readl(base + NvRegTransmitterControl);

    dprintk(KERN_DEBUG "%s: nv_start_tx\n", dev->name);
    tx_ctrl |= NVREG_XMITCTL_START;
    if (np->mac_in_use)
        tx_ctrl &= ~NVREG_XMITCTL_TX_PATH_EN;
    writel(tx_ctrl, base + NvRegTransmitterControl);
    pci_push(base);
}

static void nv_stop_tx(struct net_device *dev)
{
    struct fe_priv *np = netdev_priv(dev);
    u8 __iomem *base = get_hwbase(dev);
    u32 tx_ctrl = readl(base + NvRegTransmitterControl);

    dprintk(KERN_DEBUG "%s: nv_stop_tx\n", dev->name);
    if (!np->mac_in_use)
        tx_ctrl &= ~NVREG_XMITCTL_START;
    else
        tx_ctrl |= NVREG_XMITCTL_TX_PATH_EN;
    writel(tx_ctrl, base + NvRegTransmitterControl);
    reg_delay(dev, NvRegTransmitterStatus, NVREG_XMITSTAT_BUSY, 0,
              NV_TXSTOP_DELAY1, NV_TXSTOP_DELAY1MAX,
              KERN_INFO "nv_stop_tx: TransmitterStatus remained busy");

    udelay(NV_TXSTOP_DELAY2);
    if (!np->mac_in_use)
        writel(readl(base + NvRegTransmitPoll) & NVREG_TRANSMITPOLL_MAC_ADDR_REV,
               base + NvRegTransmitPoll);
}

static void nv_txrx_reset(struct net_device *dev)
{
    struct fe_priv *np = netdev_priv(dev);
    u8 __iomem *base = get_hwbase(dev);

    dprintk(KERN_DEBUG "%s: nv_txrx_reset\n", dev->name);
    writel(NVREG_TXRXCTL_BIT2 | NVREG_TXRXCTL_RESET | np->txrxctl_bits, base + NvRegTxRxControl);
    pci_push(base);
    udelay(NV_TXRX_RESET_DELAY);
    writel(NVREG_TXRXCTL_BIT2 | np->txrxctl_bits, base + NvRegTxRxControl);
    pci_push(base);
}

static void nv_mac_reset(struct net_device *dev)
{
    struct fe_priv *np = netdev_priv(dev);
    u8 __iomem *base = get_hwbase(dev);

    dprintk(KERN_DEBUG "%s: nv_mac_reset\n", dev->name);
    writel(NVREG_TXRXCTL_BIT2 | NVREG_TXRXCTL_RESET | np->txrxctl_bits, base + NvRegTxRxControl);
    pci_push(base);
    writel(NVREG_MAC_RESET_ASSERT, base + NvRegMacReset);
    pci_push(base);
    udelay(NV_MAC_RESET_DELAY);
    writel(0, base + NvRegMacReset);
    pci_push(base);
    udelay(NV_MAC_RESET_DELAY);
    writel(NVREG_TXRXCTL_BIT2 | np->txrxctl_bits, base + NvRegTxRxControl);
    pci_push(base);
}

static void nv_get_hw_stats(struct net_device *dev)
{
    struct fe_priv *np = netdev_priv(dev);
    u8 __iomem *base = get_hwbase(dev);

    np->estats.tx_bytes += readl(base + NvRegTxCnt);
    np->estats.tx_zero_rexmt += readl(base + NvRegTxZeroReXmt);
    np->estats.tx_one_rexmt += readl(base + NvRegTxOneReXmt);
    np->estats.tx_many_rexmt += readl(base + NvRegTxManyReXmt);
    np->estats.tx_late_collision += readl(base + NvRegTxLateCol);
    np->estats.tx_fifo_errors += readl(base + NvRegTxUnderflow);
    np->estats.tx_carrier_errors += readl(base + NvRegTxLossCarrier);
    np->estats.tx_excess_deferral += readl(base + NvRegTxExcessDef);
    np->estats.tx_retry_error += readl(base + NvRegTxRetryErr);
    np->estats.rx_frame_error += readl(base + NvRegRxFrameErr);
    np->estats.rx_extra_byte += readl(base + NvRegRxExtraByte);
    np->estats.rx_late_collision += readl(base + NvRegRxLateCol);
    np->estats.rx_runt += readl(base + NvRegRxRunt);
    np->estats.rx_frame_too_long += readl(base + NvRegRxFrameTooLong);
    np->estats.rx_over_errors += readl(base + NvRegRxOverflow);
    np->estats.rx_crc_errors += readl(base + NvRegRxFCSErr);
    np->estats.rx_frame_align_error += readl(base + NvRegRxFrameAlignErr);
    np->estats.rx_length_error += readl(base + NvRegRxLenErr);
    np->estats.rx_unicast += readl(base + NvRegRxUnicast);
    np->estats.rx_multicast += readl(base + NvRegRxMulticast);
    np->estats.rx_broadcast += readl(base + NvRegRxBroadcast);
    np->estats.rx_packets =
        np->estats.rx_unicast +
        np->estats.rx_multicast +
        np->estats.rx_broadcast;
    np->estats.rx_errors_total =
        np->estats.rx_crc_errors +
        np->estats.rx_over_errors +
        np->estats.rx_frame_error +
        (np->estats.rx_frame_align_error - np->estats.rx_extra_byte) +
        np->estats.rx_late_collision +
        np->estats.rx_runt +
        np->estats.rx_frame_too_long;
    np->estats.tx_errors_total =
        np->estats.tx_late_collision +
        np->estats.tx_fifo_errors +
        np->estats.tx_carrier_errors +
        np->estats.tx_excess_deferral +
        np->estats.tx_retry_error;

    if (np->driver_data & DEV_HAS_STATISTICS_V2) {
        np->estats.tx_deferral += readl(base + NvRegTxDef);
        np->estats.tx_packets += readl(base + NvRegTxFrame);
        np->estats.rx_bytes += readl(base + NvRegRxCnt);
        np->estats.tx_pause += readl(base + NvRegTxPause);
        np->estats.rx_pause += readl(base + NvRegRxPause);
        np->estats.rx_drop_frame += readl(base + NvRegRxDropFrame);
    }
}

/*
 * nv_get_stats: dev->get_stats function
 * Get latest stats value from the nic.
 * Called with read_lock(&dev_base_lock) held for read -
 * only synchronized against unregister_netdevice.
 */
static struct net_device_stats *nv_get_stats(struct net_device *dev)
{
    struct fe_priv *np = netdev_priv(dev);

    /* If the nic supports hw counters then retrieve latest values */
    if (np->driver_data & (DEV_HAS_STATISTICS_V1|DEV_HAS_STATISTICS_V2)) {
        nv_get_hw_stats(dev);

        /* copy to net_device stats */
        np->stats.tx_bytes = np->estats.tx_bytes;
        np->stats.tx_fifo_errors = np->estats.tx_fifo_errors;
        np->stats.tx_carrier_errors = np->estats.tx_carrier_errors;
        np->stats.rx_crc_errors = np->estats.rx_crc_errors;
        np->stats.rx_over_errors = np->estats.rx_over_errors;
        np->stats.rx_errors = np->estats.rx_errors_total;
        np->stats.tx_errors = np->estats.tx_errors_total;
    }
    return &np->stats;
}

/*
 * nv_alloc_rx: fill rx ring entries.
 * Return 1 if the allocations for the skbs failed and the
 * rx engine is without Available descriptors
 */
static int nv_alloc_rx(struct net_device *dev)
{
    struct fe_priv *np = netdev_priv(dev);
    struct ring_desc *less_rx;

    less_rx = np->get_rx.orig;
    if (less_rx-- == np->first_rx.orig)
        less_rx = np->last_rx.orig;

    while (np->put_rx.orig != less_rx) {
        struct sk_buff *skb = dev_alloc_skb(np->rx_buf_sz + NV_RX_ALLOC_PAD);
        if (skb) {
            np->put_rx_ctx->skb = skb;
            np->put_rx_ctx->dma = pci_map_single(np->pci_dev,
                                                 skb->data,
                                                 skb_tailroom(skb),
                                                 PCI_DMA_FROMDEVICE);
            np->put_rx_ctx->dma_len = skb_tailroom(skb);
            np->put_rx.orig->buf = cpu_to_le32(np->put_rx_ctx->dma);
            wmb();
            np->put_rx.orig->flaglen = cpu_to_le32(np->rx_buf_sz | NV_RX_AVAIL);
            if (unlikely(np->put_rx.orig++ == np->last_rx.orig))
                np->put_rx.orig = np->first_rx.orig;
            if (unlikely(np->put_rx_ctx++ == np->last_rx_ctx))
                np->put_rx_ctx = np->first_rx_ctx;
        } else {
            return 1;
        }
    }
    return 0;
}

static int nv_alloc_rx_optimized(struct net_device *dev)
{
    struct fe_priv *np = netdev_priv(dev);
    struct ring_desc_ex *less_rx;

    less_rx = np->get_rx.ex;
    if (less_rx-- == np->first_rx.ex)
        less_rx = np->last_rx.ex;

    while (np->put_rx.ex != less_rx) {
        struct sk_buff *skb = dev_alloc_skb(np->rx_buf_sz + NV_RX_ALLOC_PAD);
        if (skb) {
            np->put_rx_ctx->skb = skb;
            np->put_rx_ctx->dma = pci_map_single(np->pci_dev,
                                                 skb->data,
                                                 skb_tailroom(skb),
                                                 PCI_DMA_FROMDEVICE);
            np->put_rx_ctx->dma_len = skb_tailroom(skb);
            np->put_rx.ex->bufhigh = cpu_to_le64(np->put_rx_ctx->dma) >> 32;
            np->put_rx.ex->buflow = cpu_to_le64(np->put_rx_ctx->dma) & 0x0FFFFFFFF;
            wmb();
            np->put_rx.ex->flaglen = cpu_to_le32(np->rx_buf_sz | NV_RX2_AVAIL);
            if (unlikely(np->put_rx.ex++ == np->last_rx.ex))
                np->put_rx.ex = np->first_rx.ex;
            if (unlikely(np->put_rx_ctx++ == np->last_rx_ctx))
                np->put_rx_ctx = np->first_rx_ctx;
        } else {
            return 1;
        }
    }
    return 0;
}

/* If rx bufs are exhausted called after 50ms to attempt to refresh */
#ifdef CONFIG_FORCEDETH_NAPI
static void nv_do_rx_refill(unsigned long data)
{
    struct net_device *dev = (struct net_device *) data;

    /* Just reschedule NAPI rx processing */
    netif_rx_schedule(dev);
}
#else
static void nv_do_rx_refill(unsigned long data)
{
    struct net_device *dev = (struct net_device *) data;
    struct fe_priv *np = netdev_priv(dev);
    int retcode;

    if (!using_multi_irqs(dev)) {
        if (np->msi_flags & NV_MSI_X_ENABLED)
            disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector);
        else
            disable_irq(dev->irq);
    } else {
        disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector);
    }
    if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2)
        retcode = nv_alloc_rx(dev);
    else
        retcode = nv_alloc_rx_optimized(dev);
    if (retcode) {
        spin_lock_irq(&np->lock);
        if (!np->in_shutdown)
            mod_timer(&np->oom_kick, jiffies + OOM_REFILL);
        spin_unlock_irq(&np->lock);
    }
    if (!using_multi_irqs(dev)) {
        if (np->msi_flags & NV_MSI_X_ENABLED)
            enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector);
        else
            enable_irq(dev->irq);
    } else {
        enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector);
    }
}
#endif

static void nv_init_rx(struct net_device *dev)
{
    struct fe_priv *np = netdev_priv(dev);
    int i;
    np->get_rx = np->put_rx = np->first_rx = np->rx_ring;
    if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2)
        np->last_rx.orig = &np->rx_ring.orig[np->rx_ring_size-1];
    else
        np->last_rx.ex = &np->rx_ring.ex[np->rx_ring_size-1];
    np->get_rx_ctx = np->put_rx_ctx = np->first_rx_ctx = np->rx_skb;
    np->last_rx_ctx = &np->rx_skb[np->rx_ring_size-1];

    for (i = 0; i < np->rx_ring_size; i++) {
        if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
            np->rx_ring.orig[i].flaglen = 0;
            np->rx_ring.orig[i].buf = 0;
        } else {
            np->rx_ring.ex[i].flaglen = 0;
            np->rx_ring.ex[i].txvlan = 0;
            np->rx_ring.ex[i].bufhigh = 0;
            np->rx_ring.ex[i].buflow = 0;
        }
        np->rx_skb[i].skb = NULL;
        np->rx_skb[i].dma = 0;
    }
}

static void nv_init_tx(struct net_device *dev)
{
    struct fe_priv *np = netdev_priv(dev);
    int i;
    np->get_tx = np->put_tx = np->first_tx = np->tx_ring;
    if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2)
        np->last_tx.orig = &np->tx_ring.orig[np->tx_ring_size-1];
    else
        np->last_tx.ex = &np->tx_ring.ex[np->tx_ring_size-1];
    np->get_tx_ctx = np->put_tx_ctx = np->first_tx_ctx = np->tx_skb;
    np->last_tx_ctx = &np->tx_skb[np->tx_ring_size-1];

    for (i = 0; i < np->tx_ring_size; i++) {
        if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
            np->tx_ring.orig[i].flaglen = 0;
            np->tx_ring.orig[i].buf = 0;
        } else {
            np->tx_ring.ex[i].flaglen = 0;
            np->tx_ring.ex[i].txvlan = 0;
            np->tx_ring.ex[i].bufhigh = 0;
            np->tx_ring.ex[i].buflow = 0;
        }
        np->tx_skb[i].skb = NULL;
        np->tx_skb[i].dma = 0;
    }
}

static int nv_init_ring(struct net_device *dev)
{
    struct fe_priv *np = netdev_priv(dev);

1687 nv_init_tx(dev); 1688 nv_init_rx(dev); 1689 if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) 1690 return nv_alloc_rx(dev); 1691 else 1692 return nv_alloc_rx_optimized(dev); 1693} 1694 1695static int nv_release_txskb(struct net_device *dev, struct nv_skb_map* tx_skb) 1696{ 1697 struct fe_priv *np = netdev_priv(dev); 1698 1699 if (tx_skb->dma) { 1700 pci_unmap_page(np->pci_dev, tx_skb->dma, 1701 tx_skb->dma_len, 1702 PCI_DMA_TODEVICE); 1703 tx_skb->dma = 0; 1704 } 1705 if (tx_skb->skb) { 1706 dev_kfree_skb_any(tx_skb->skb); 1707 tx_skb->skb = NULL; 1708 return 1; 1709 } else { 1710 return 0; 1711 } 1712} 1713 1714static void nv_drain_tx(struct net_device *dev) 1715{ 1716 struct fe_priv *np = netdev_priv(dev); 1717 unsigned int i; 1718 1719 for (i = 0; i < np->tx_ring_size; i++) { 1720 if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) { 1721 np->tx_ring.orig[i].flaglen = 0; 1722 np->tx_ring.orig[i].buf = 0; 1723 } else { 1724 np->tx_ring.ex[i].flaglen = 0; 1725 np->tx_ring.ex[i].txvlan = 0; 1726 np->tx_ring.ex[i].bufhigh = 0; 1727 np->tx_ring.ex[i].buflow = 0; 1728 } 1729 if (nv_release_txskb(dev, &np->tx_skb[i])) 1730 np->stats.tx_dropped++; 1731 } 1732} 1733 1734static void nv_drain_rx(struct net_device *dev) 1735{ 1736 struct fe_priv *np = netdev_priv(dev); 1737 int i; 1738 1739 for (i = 0; i < np->rx_ring_size; i++) { 1740 if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) { 1741 np->rx_ring.orig[i].flaglen = 0; 1742 np->rx_ring.orig[i].buf = 0; 1743 } else { 1744 np->rx_ring.ex[i].flaglen = 0; 1745 np->rx_ring.ex[i].txvlan = 0; 1746 np->rx_ring.ex[i].bufhigh = 0; 1747 np->rx_ring.ex[i].buflow = 0; 1748 } 1749 wmb(); 1750 if (np->rx_skb[i].skb) { 1751 pci_unmap_single(np->pci_dev, np->rx_skb[i].dma, 1752 (skb_end_pointer(np->rx_skb[i].skb) - 1753 np->rx_skb[i].skb->data), 1754 PCI_DMA_FROMDEVICE); 1755 dev_kfree_skb(np->rx_skb[i].skb); 1756 np->rx_skb[i].skb = NULL; 1757 } 1758 } 1759} 1760 1761static void drain_ring(struct net_device *dev) 1762{ 1763 nv_drain_tx(dev); 1764 nv_drain_rx(dev); 1765} 1766 1767static inline u32 nv_get_empty_tx_slots(struct fe_priv *np) 1768{ 1769 return (u32)(np->tx_ring_size - ((np->tx_ring_size + (np->put_tx_ctx - np->get_tx_ctx)) % np->tx_ring_size)); 1770} 1771 1772/* 1773 * nv_start_xmit: dev->hard_start_xmit function 1774 * Called with netif_tx_lock held. 1775 */ 1776static int nv_start_xmit(struct sk_buff *skb, struct net_device *dev) 1777{ 1778 struct fe_priv *np = netdev_priv(dev); 1779 u32 tx_flags = 0; 1780 u32 tx_flags_extra = (np->desc_ver == DESC_VER_1 ? NV_TX_LASTPACKET : NV_TX2_LASTPACKET); 1781 unsigned int fragments = skb_shinfo(skb)->nr_frags; 1782 unsigned int i; 1783 u32 offset = 0; 1784 u32 bcnt; 1785 u32 size = skb->len-skb->data_len; 1786 u32 entries = (size >> NV_TX2_TSO_MAX_SHIFT) + ((size & (NV_TX2_TSO_MAX_SIZE-1)) ? 1 : 0); 1787 u32 empty_slots; 1788 struct ring_desc* put_tx; 1789 struct ring_desc* start_tx; 1790 struct ring_desc* prev_tx; 1791 struct nv_skb_map* prev_tx_ctx; 1792 1793 /* add fragments to entries count */ 1794 for (i = 0; i < fragments; i++) { 1795 entries += (skb_shinfo(skb)->frags[i].size >> NV_TX2_TSO_MAX_SHIFT) + 1796 ((skb_shinfo(skb)->frags[i].size & (NV_TX2_TSO_MAX_SIZE-1)) ? 
1 : 0); 1797 } 1798 1799 empty_slots = nv_get_empty_tx_slots(np); 1800 if (unlikely(empty_slots <= entries)) { 1801 spin_lock_irq(&np->lock); 1802 netif_stop_queue(dev); 1803 np->tx_stop = 1; 1804 spin_unlock_irq(&np->lock); 1805 return NETDEV_TX_BUSY; 1806 } 1807 1808 start_tx = put_tx = np->put_tx.orig; 1809 1810 /* setup the header buffer */ 1811 do { 1812 prev_tx = put_tx; 1813 prev_tx_ctx = np->put_tx_ctx; 1814 bcnt = (size > NV_TX2_TSO_MAX_SIZE) ? NV_TX2_TSO_MAX_SIZE : size; 1815 np->put_tx_ctx->dma = pci_map_single(np->pci_dev, skb->data + offset, bcnt, 1816 PCI_DMA_TODEVICE); 1817 np->put_tx_ctx->dma_len = bcnt; 1818 put_tx->buf = cpu_to_le32(np->put_tx_ctx->dma); 1819 put_tx->flaglen = cpu_to_le32((bcnt-1) | tx_flags); 1820 1821 tx_flags = np->tx_flags; 1822 offset += bcnt; 1823 size -= bcnt; 1824 if (unlikely(put_tx++ == np->last_tx.orig)) 1825 put_tx = np->first_tx.orig; 1826 if (unlikely(np->put_tx_ctx++ == np->last_tx_ctx)) 1827 np->put_tx_ctx = np->first_tx_ctx; 1828 } while (size); 1829 1830 /* setup the fragments */ 1831 for (i = 0; i < fragments; i++) { 1832 skb_frag_t *frag = &skb_shinfo(skb)->frags[i]; 1833 u32 size = frag->size; 1834 offset = 0; 1835 1836 do { 1837 prev_tx = put_tx; 1838 prev_tx_ctx = np->put_tx_ctx; 1839 bcnt = (size > NV_TX2_TSO_MAX_SIZE) ? NV_TX2_TSO_MAX_SIZE : size; 1840 np->put_tx_ctx->dma = pci_map_page(np->pci_dev, frag->page, frag->page_offset+offset, bcnt, 1841 PCI_DMA_TODEVICE); 1842 np->put_tx_ctx->dma_len = bcnt; 1843 put_tx->buf = cpu_to_le32(np->put_tx_ctx->dma); 1844 put_tx->flaglen = cpu_to_le32((bcnt-1) | tx_flags); 1845 1846 offset += bcnt; 1847 size -= bcnt; 1848 if (unlikely(put_tx++ == np->last_tx.orig)) 1849 put_tx = np->first_tx.orig; 1850 if (unlikely(np->put_tx_ctx++ == np->last_tx_ctx)) 1851 np->put_tx_ctx = np->first_tx_ctx; 1852 } while (size); 1853 } 1854 1855 /* set last fragment flag */ 1856 prev_tx->flaglen |= cpu_to_le32(tx_flags_extra); 1857 1858 /* save skb in this slot's context area */ 1859 prev_tx_ctx->skb = skb; 1860 1861 if (skb_is_gso(skb)) 1862 tx_flags_extra = NV_TX2_TSO | (skb_shinfo(skb)->gso_size << NV_TX2_TSO_SHIFT); 1863 else 1864 tx_flags_extra = skb->ip_summed == CHECKSUM_PARTIAL ? 1865 NV_TX2_CHECKSUM_L3 | NV_TX2_CHECKSUM_L4 : 0; 1866 1867 spin_lock_irq(&np->lock); 1868 1869 /* set tx flags */ 1870 start_tx->flaglen |= cpu_to_le32(tx_flags | tx_flags_extra); 1871 np->put_tx.orig = put_tx; 1872 1873 spin_unlock_irq(&np->lock); 1874 1875 dprintk(KERN_DEBUG "%s: nv_start_xmit: entries %d queued for transmission. tx_flags_extra: %x\n", 1876 dev->name, entries, tx_flags_extra); 1877 { 1878 int j; 1879 for (j=0; j<64; j++) { 1880 if ((j%16) == 0) 1881 dprintk("\n%03x:", j); 1882 dprintk(" %02x", ((unsigned char*)skb->data)[j]); 1883 } 1884 dprintk("\n"); 1885 } 1886 1887 dev->trans_start = jiffies; 1888 writel(NVREG_TXRXCTL_KICK|np->txrxctl_bits, get_hwbase(dev) + NvRegTxRxControl); 1889 return NETDEV_TX_OK; 1890} 1891 1892static int nv_start_xmit_optimized(struct sk_buff *skb, struct net_device *dev) 1893{ 1894 struct fe_priv *np = netdev_priv(dev); 1895 u32 tx_flags = 0; 1896 u32 tx_flags_extra; 1897 unsigned int fragments = skb_shinfo(skb)->nr_frags; 1898 unsigned int i; 1899 u32 offset = 0; 1900 u32 bcnt; 1901 u32 size = skb->len-skb->data_len; 1902 u32 entries = (size >> NV_TX2_TSO_MAX_SHIFT) + ((size & (NV_TX2_TSO_MAX_SIZE-1)) ? 
1 : 0); 1903 u32 empty_slots; 1904 struct ring_desc_ex* put_tx; 1905 struct ring_desc_ex* start_tx; 1906 struct ring_desc_ex* prev_tx; 1907 struct nv_skb_map* prev_tx_ctx; 1908 1909 /* add fragments to entries count */ 1910 for (i = 0; i < fragments; i++) { 1911 entries += (skb_shinfo(skb)->frags[i].size >> NV_TX2_TSO_MAX_SHIFT) + 1912 ((skb_shinfo(skb)->frags[i].size & (NV_TX2_TSO_MAX_SIZE-1)) ? 1 : 0); 1913 } 1914 1915 empty_slots = nv_get_empty_tx_slots(np); 1916 if (unlikely(empty_slots <= entries)) { 1917 spin_lock_irq(&np->lock); 1918 netif_stop_queue(dev); 1919 np->tx_stop = 1; 1920 spin_unlock_irq(&np->lock); 1921 return NETDEV_TX_BUSY; 1922 } 1923 1924 start_tx = put_tx = np->put_tx.ex; 1925 1926 /* setup the header buffer */ 1927 do { 1928 prev_tx = put_tx; 1929 prev_tx_ctx = np->put_tx_ctx; 1930 bcnt = (size > NV_TX2_TSO_MAX_SIZE) ? NV_TX2_TSO_MAX_SIZE : size; 1931 np->put_tx_ctx->dma = pci_map_single(np->pci_dev, skb->data + offset, bcnt, 1932 PCI_DMA_TODEVICE); 1933 np->put_tx_ctx->dma_len = bcnt; 1934 put_tx->bufhigh = cpu_to_le64(np->put_tx_ctx->dma) >> 32; 1935 put_tx->buflow = cpu_to_le64(np->put_tx_ctx->dma) & 0x0FFFFFFFF; 1936 put_tx->flaglen = cpu_to_le32((bcnt-1) | tx_flags); 1937 1938 tx_flags = NV_TX2_VALID; 1939 offset += bcnt; 1940 size -= bcnt; 1941 if (unlikely(put_tx++ == np->last_tx.ex)) 1942 put_tx = np->first_tx.ex; 1943 if (unlikely(np->put_tx_ctx++ == np->last_tx_ctx)) 1944 np->put_tx_ctx = np->first_tx_ctx; 1945 } while (size); 1946 1947 /* setup the fragments */ 1948 for (i = 0; i < fragments; i++) { 1949 skb_frag_t *frag = &skb_shinfo(skb)->frags[i]; 1950 u32 size = frag->size; 1951 offset = 0; 1952 1953 do { 1954 prev_tx = put_tx; 1955 prev_tx_ctx = np->put_tx_ctx; 1956 bcnt = (size > NV_TX2_TSO_MAX_SIZE) ? NV_TX2_TSO_MAX_SIZE : size; 1957 np->put_tx_ctx->dma = pci_map_page(np->pci_dev, frag->page, frag->page_offset+offset, bcnt, 1958 PCI_DMA_TODEVICE); 1959 np->put_tx_ctx->dma_len = bcnt; 1960 put_tx->bufhigh = cpu_to_le64(np->put_tx_ctx->dma) >> 32; 1961 put_tx->buflow = cpu_to_le64(np->put_tx_ctx->dma) & 0x0FFFFFFFF; 1962 put_tx->flaglen = cpu_to_le32((bcnt-1) | tx_flags); 1963 1964 offset += bcnt; 1965 size -= bcnt; 1966 if (unlikely(put_tx++ == np->last_tx.ex)) 1967 put_tx = np->first_tx.ex; 1968 if (unlikely(np->put_tx_ctx++ == np->last_tx_ctx)) 1969 np->put_tx_ctx = np->first_tx_ctx; 1970 } while (size); 1971 } 1972 1973 /* set last fragment flag */ 1974 prev_tx->flaglen |= cpu_to_le32(NV_TX2_LASTPACKET); 1975 1976 /* save skb in this slot's context area */ 1977 prev_tx_ctx->skb = skb; 1978 1979 if (skb_is_gso(skb)) 1980 tx_flags_extra = NV_TX2_TSO | (skb_shinfo(skb)->gso_size << NV_TX2_TSO_SHIFT); 1981 else 1982 tx_flags_extra = skb->ip_summed == CHECKSUM_PARTIAL ? 1983 NV_TX2_CHECKSUM_L3 | NV_TX2_CHECKSUM_L4 : 0; 1984 1985 /* vlan tag */ 1986 if (likely(!np->vlangrp)) { 1987 start_tx->txvlan = 0; 1988 } else { 1989 if (vlan_tx_tag_present(skb)) 1990 start_tx->txvlan = cpu_to_le32(NV_TX3_VLAN_TAG_PRESENT | vlan_tx_tag_get(skb)); 1991 else 1992 start_tx->txvlan = 0; 1993 } 1994 1995 spin_lock_irq(&np->lock); 1996 1997 /* set tx flags */ 1998 start_tx->flaglen |= cpu_to_le32(tx_flags | tx_flags_extra); 1999 np->put_tx.ex = put_tx; 2000 2001 spin_unlock_irq(&np->lock); 2002 2003 dprintk(KERN_DEBUG "%s: nv_start_xmit_optimized: entries %d queued for transmission. 
tx_flags_extra: %x\n", 2004 dev->name, entries, tx_flags_extra); 2005 { 2006 int j; 2007 for (j=0; j<64; j++) { 2008 if ((j%16) == 0) 2009 dprintk("\n%03x:", j); 2010 dprintk(" %02x", ((unsigned char*)skb->data)[j]); 2011 } 2012 dprintk("\n"); 2013 } 2014 2015 dev->trans_start = jiffies; 2016 writel(NVREG_TXRXCTL_KICK|np->txrxctl_bits, get_hwbase(dev) + NvRegTxRxControl); 2017 return NETDEV_TX_OK; 2018} 2019 2020/* 2021 * nv_tx_done: check for completed packets, release the skbs. 2022 * 2023 * Caller must own np->lock. 2024 */ 2025static void nv_tx_done(struct net_device *dev) 2026{ 2027 struct fe_priv *np = netdev_priv(dev); 2028 u32 flags; 2029 struct ring_desc* orig_get_tx = np->get_tx.orig; 2030 2031 while ((np->get_tx.orig != np->put_tx.orig) && 2032 !((flags = le32_to_cpu(np->get_tx.orig->flaglen)) & NV_TX_VALID)) { 2033 2034 dprintk(KERN_DEBUG "%s: nv_tx_done: flags 0x%x.\n", 2035 dev->name, flags); 2036 2037 pci_unmap_page(np->pci_dev, np->get_tx_ctx->dma, 2038 np->get_tx_ctx->dma_len, 2039 PCI_DMA_TODEVICE); 2040 np->get_tx_ctx->dma = 0; 2041 2042 if (np->desc_ver == DESC_VER_1) { 2043 if (flags & NV_TX_LASTPACKET) { 2044 if (flags & NV_TX_ERROR) { 2045 if (flags & NV_TX_UNDERFLOW) 2046 np->stats.tx_fifo_errors++; 2047 if (flags & NV_TX_CARRIERLOST) 2048 np->stats.tx_carrier_errors++; 2049 np->stats.tx_errors++; 2050 } else { 2051 np->stats.tx_packets++; 2052 np->stats.tx_bytes += np->get_tx_ctx->skb->len; 2053 } 2054 dev_kfree_skb_any(np->get_tx_ctx->skb); 2055 np->get_tx_ctx->skb = NULL; 2056 } 2057 } else { 2058 if (flags & NV_TX2_LASTPACKET) { 2059 if (flags & NV_TX2_ERROR) { 2060 if (flags & NV_TX2_UNDERFLOW) 2061 np->stats.tx_fifo_errors++; 2062 if (flags & NV_TX2_CARRIERLOST) 2063 np->stats.tx_carrier_errors++; 2064 np->stats.tx_errors++; 2065 } else { 2066 np->stats.tx_packets++; 2067 np->stats.tx_bytes += np->get_tx_ctx->skb->len; 2068 } 2069 dev_kfree_skb_any(np->get_tx_ctx->skb); 2070 np->get_tx_ctx->skb = NULL; 2071 } 2072 } 2073 if (unlikely(np->get_tx.orig++ == np->last_tx.orig)) 2074 np->get_tx.orig = np->first_tx.orig; 2075 if (unlikely(np->get_tx_ctx++ == np->last_tx_ctx)) 2076 np->get_tx_ctx = np->first_tx_ctx; 2077 } 2078 if (unlikely((np->tx_stop == 1) && (np->get_tx.orig != orig_get_tx))) { 2079 np->tx_stop = 0; 2080 netif_wake_queue(dev); 2081 } 2082} 2083 2084static void nv_tx_done_optimized(struct net_device *dev, int limit) 2085{ 2086 struct fe_priv *np = netdev_priv(dev); 2087 u32 flags; 2088 struct ring_desc_ex* orig_get_tx = np->get_tx.ex; 2089 2090 while ((np->get_tx.ex != np->put_tx.ex) && 2091 !((flags = le32_to_cpu(np->get_tx.ex->flaglen)) & NV_TX_VALID) && 2092 (limit-- > 0)) { 2093 2094 dprintk(KERN_DEBUG "%s: nv_tx_done_optimized: flags 0x%x.\n", 2095 dev->name, flags); 2096 2097 pci_unmap_page(np->pci_dev, np->get_tx_ctx->dma, 2098 np->get_tx_ctx->dma_len, 2099 PCI_DMA_TODEVICE); 2100 np->get_tx_ctx->dma = 0; 2101 2102 if (flags & NV_TX2_LASTPACKET) { 2103 if (!(flags & NV_TX2_ERROR)) 2104 np->stats.tx_packets++; 2105 dev_kfree_skb_any(np->get_tx_ctx->skb); 2106 np->get_tx_ctx->skb = NULL; 2107 } 2108 if (unlikely(np->get_tx.ex++ == np->last_tx.ex)) 2109 np->get_tx.ex = np->first_tx.ex; 2110 if (unlikely(np->get_tx_ctx++ == np->last_tx_ctx)) 2111 np->get_tx_ctx = np->first_tx_ctx; 2112 } 2113 if (unlikely((np->tx_stop == 1) && (np->get_tx.ex != orig_get_tx))) { 2114 np->tx_stop = 0; 2115 netif_wake_queue(dev); 2116 } 2117} 2118 2119/* 2120 * nv_tx_timeout: dev->tx_timeout function 2121 * Called with netif_tx_lock held. 
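*
* Recovery outline (mirrors the numbered steps in the body below): dump
* the registers and the tx ring for diagnosis, stop the tx engine, reap
* any descriptors that completed while the watchdog was pending, and
* only if stale entries remain drain and re-init the whole tx ring
* before restarting the engine.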
2122 */ 2123static void nv_tx_timeout(struct net_device *dev) 2124{ 2125 struct fe_priv *np = netdev_priv(dev); 2126 u8 __iomem *base = get_hwbase(dev); 2127 u32 status; 2128 2129 if (np->msi_flags & NV_MSI_X_ENABLED) 2130 status = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQSTAT_MASK; 2131 else 2132 status = readl(base + NvRegIrqStatus) & NVREG_IRQSTAT_MASK; 2133 2134 printk(KERN_INFO "%s: Got tx_timeout. irq: %08x\n", dev->name, status); 2135 2136 { 2137 int i; 2138 2139 printk(KERN_INFO "%s: Ring at %lx\n", 2140 dev->name, (unsigned long)np->ring_addr); 2141 printk(KERN_INFO "%s: Dumping tx registers\n", dev->name); 2142 for (i=0;i<=np->register_size;i+= 32) { 2143 printk(KERN_INFO "%3x: %08x %08x %08x %08x %08x %08x %08x %08x\n", 2144 i, 2145 readl(base + i + 0), readl(base + i + 4), 2146 readl(base + i + 8), readl(base + i + 12), 2147 readl(base + i + 16), readl(base + i + 20), 2148 readl(base + i + 24), readl(base + i + 28)); 2149 } 2150 printk(KERN_INFO "%s: Dumping tx ring\n", dev->name); 2151 for (i=0;i<np->tx_ring_size;i+= 4) { 2152 if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) { 2153 printk(KERN_INFO "%03x: %08x %08x // %08x %08x // %08x %08x // %08x %08x\n", 2154 i, 2155 le32_to_cpu(np->tx_ring.orig[i].buf), 2156 le32_to_cpu(np->tx_ring.orig[i].flaglen), 2157 le32_to_cpu(np->tx_ring.orig[i+1].buf), 2158 le32_to_cpu(np->tx_ring.orig[i+1].flaglen), 2159 le32_to_cpu(np->tx_ring.orig[i+2].buf), 2160 le32_to_cpu(np->tx_ring.orig[i+2].flaglen), 2161 le32_to_cpu(np->tx_ring.orig[i+3].buf), 2162 le32_to_cpu(np->tx_ring.orig[i+3].flaglen)); 2163 } else { 2164 printk(KERN_INFO "%03x: %08x %08x %08x // %08x %08x %08x // %08x %08x %08x // %08x %08x %08x\n", 2165 i, 2166 le32_to_cpu(np->tx_ring.ex[i].bufhigh), 2167 le32_to_cpu(np->tx_ring.ex[i].buflow), 2168 le32_to_cpu(np->tx_ring.ex[i].flaglen), 2169 le32_to_cpu(np->tx_ring.ex[i+1].bufhigh), 2170 le32_to_cpu(np->tx_ring.ex[i+1].buflow), 2171 le32_to_cpu(np->tx_ring.ex[i+1].flaglen), 2172 le32_to_cpu(np->tx_ring.ex[i+2].bufhigh), 2173 le32_to_cpu(np->tx_ring.ex[i+2].buflow), 2174 le32_to_cpu(np->tx_ring.ex[i+2].flaglen), 2175 le32_to_cpu(np->tx_ring.ex[i+3].bufhigh), 2176 le32_to_cpu(np->tx_ring.ex[i+3].buflow), 2177 le32_to_cpu(np->tx_ring.ex[i+3].flaglen)); 2178 } 2179 } 2180 } 2181 2182 spin_lock_irq(&np->lock); 2183 2184 /* 1) stop tx engine */ 2185 nv_stop_tx(dev); 2186 2187 /* 2) check that the packets were not sent already: */ 2188 if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) 2189 nv_tx_done(dev); 2190 else 2191 nv_tx_done_optimized(dev, np->tx_ring_size); 2192 2193 /* 3) if there are dead entries: clear everything */ 2194 if (np->get_tx_ctx != np->put_tx_ctx) { 2195 printk(KERN_DEBUG "%s: tx_timeout: dead entries!\n", dev->name); 2196 nv_drain_tx(dev); 2197 nv_init_tx(dev); 2198 setup_hw_rings(dev, NV_SETUP_TX_RING); 2199 } 2200 2201 netif_wake_queue(dev); 2202 2203 /* 4) restart tx engine */ 2204 nv_start_tx(dev); 2205 spin_unlock_irq(&np->lock); 2206} 2207 2208/* 2209 * Called when the nic notices a mismatch between the actual data len on the 2210 * wire and the len indicated in the 802 header 2211 */ 2212static int nv_getlen(struct net_device *dev, void *packet, int datalen) 2213{ 2214 int hdrlen; /* length of the 802 header */ 2215 int protolen; /* length as stored in the proto field */ 2216 2217 /* 1) calculate len according to header */ 2218 if ( ((struct vlan_ethhdr *)packet)->h_vlan_proto == htons(ETH_P_8021Q)) { 2219 protolen = ntohs( ((struct vlan_ethhdr 
*)packet)->h_vlan_encapsulated_proto ); 2220 hdrlen = VLAN_HLEN; 2221 } else { 2222 protolen = ntohs( ((struct ethhdr *)packet)->h_proto); 2223 hdrlen = ETH_HLEN; 2224 } 2225 dprintk(KERN_DEBUG "%s: nv_getlen: datalen %d, protolen %d, hdrlen %d\n", 2226 dev->name, datalen, protolen, hdrlen); 2227 if (protolen > ETH_DATA_LEN) 2228 return datalen; /* Value in proto field not a len, no checks possible */ 2229 2230 protolen += hdrlen; 2231 /* consistency checks: */ 2232 if (datalen > ETH_ZLEN) { 2233 if (datalen >= protolen) { 2234 /* more data on wire than in 802 header, trim off 2235 * the additional data. 2236 */ 2237 dprintk(KERN_DEBUG "%s: nv_getlen: accepting %d bytes.\n", 2238 dev->name, protolen); 2239 return protolen; 2240 } else { 2241 /* less data on wire than mentioned in header. 2242 * Discard the packet. 2243 */ 2244 dprintk(KERN_DEBUG "%s: nv_getlen: discarding long packet.\n", 2245 dev->name); 2246 return -1; 2247 } 2248 } else { 2249 /* short packet. Accept only if 802 values are also short */ 2250 if (protolen > ETH_ZLEN) { 2251 dprintk(KERN_DEBUG "%s: nv_getlen: discarding short packet.\n", 2252 dev->name); 2253 return -1; 2254 } 2255 dprintk(KERN_DEBUG "%s: nv_getlen: accepting %d bytes.\n", 2256 dev->name, datalen); 2257 return datalen; 2258 } 2259} 2260 2261static int nv_rx_process(struct net_device *dev, int limit) 2262{ 2263 struct fe_priv *np = netdev_priv(dev); 2264 u32 flags; 2265 u32 rx_processed_cnt = 0; 2266 struct sk_buff *skb; 2267 int len; 2268 2269 while((np->get_rx.orig != np->put_rx.orig) && 2270 !((flags = le32_to_cpu(np->get_rx.orig->flaglen)) & NV_RX_AVAIL) && 2271 (rx_processed_cnt++ < limit)) { 2272 2273 dprintk(KERN_DEBUG "%s: nv_rx_process: flags 0x%x.\n", 2274 dev->name, flags); 2275 2276 /* 2277 * the packet is for us - immediately tear down the pci mapping. 2278 * TODO: check if a prefetch of the first cacheline improves 2279 * the performance.
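* (A descriptor is only processed here once the hardware has cleared
* NV_RX_AVAIL in flaglen; a descriptor still marked available belongs
* to the nic and ends the loop above.)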
2280 */ 2281 pci_unmap_single(np->pci_dev, np->get_rx_ctx->dma, 2282 np->get_rx_ctx->dma_len, 2283 PCI_DMA_FROMDEVICE); 2284 skb = np->get_rx_ctx->skb; 2285 np->get_rx_ctx->skb = NULL; 2286 2287 { 2288 int j; 2289 dprintk(KERN_DEBUG "Dumping packet (flags 0x%x).",flags); 2290 for (j=0; j<64; j++) { 2291 if ((j%16) == 0) 2292 dprintk("\n%03x:", j); 2293 dprintk(" %02x", ((unsigned char*)skb->data)[j]); 2294 } 2295 dprintk("\n"); 2296 } 2297 /* look at what we actually got: */ 2298 if (np->desc_ver == DESC_VER_1) { 2299 if (likely(flags & NV_RX_DESCRIPTORVALID)) { 2300 len = flags & LEN_MASK_V1; 2301 if (unlikely(flags & NV_RX_ERROR)) { 2302 if (flags & NV_RX_ERROR4) { 2303 len = nv_getlen(dev, skb->data, len); 2304 if (len < 0) { 2305 np->stats.rx_errors++; 2306 dev_kfree_skb(skb); 2307 goto next_pkt; 2308 } 2309 } 2310 /* framing errors are soft errors */ 2311 else if (flags & NV_RX_FRAMINGERR) { 2312 if (flags & NV_RX_SUBSTRACT1) { 2313 len--; 2314 } 2315 } 2316 /* the rest are hard errors */ 2317 else { 2318 if (flags & NV_RX_MISSEDFRAME) 2319 np->stats.rx_missed_errors++; 2320 if (flags & NV_RX_CRCERR) 2321 np->stats.rx_crc_errors++; 2322 if (flags & NV_RX_OVERFLOW) 2323 np->stats.rx_over_errors++; 2324 np->stats.rx_errors++; 2325 dev_kfree_skb(skb); 2326 goto next_pkt; 2327 } 2328 } 2329 } else { 2330 dev_kfree_skb(skb); 2331 goto next_pkt; 2332 } 2333 } else { 2334 if (likely(flags & NV_RX2_DESCRIPTORVALID)) { 2335 len = flags & LEN_MASK_V2; 2336 if (unlikely(flags & NV_RX2_ERROR)) { 2337 if (flags & NV_RX2_ERROR4) { 2338 len = nv_getlen(dev, skb->data, len); 2339 if (len < 0) { 2340 np->stats.rx_errors++; 2341 dev_kfree_skb(skb); 2342 goto next_pkt; 2343 } 2344 } 2345 /* framing errors are soft errors */ 2346 else if (flags & NV_RX2_FRAMINGERR) { 2347 if (flags & NV_RX2_SUBSTRACT1) { 2348 len--; 2349 } 2350 } 2351 /* the rest are hard errors */ 2352 else { 2353 if (flags & NV_RX2_CRCERR) 2354 np->stats.rx_crc_errors++; 2355 if (flags & NV_RX2_OVERFLOW) 2356 np->stats.rx_over_errors++; 2357 np->stats.rx_errors++; 2358 dev_kfree_skb(skb); 2359 goto next_pkt; 2360 } 2361 } 2362 if ((flags & NV_RX2_CHECKSUMMASK) == NV_RX2_CHECKSUMOK2)/*ip and tcp */ { 2363 skb->ip_summed = CHECKSUM_UNNECESSARY; 2364 } else { 2365 if ((flags & NV_RX2_CHECKSUMMASK) == NV_RX2_CHECKSUMOK1 || 2366 (flags & NV_RX2_CHECKSUMMASK) == NV_RX2_CHECKSUMOK3) { 2367 skb->ip_summed = CHECKSUM_UNNECESSARY; 2368 } 2369 } 2370 } else { 2371 dev_kfree_skb(skb); 2372 goto next_pkt; 2373 } 2374 } 2375 /* got a valid packet - forward it to the network core */ 2376 skb_put(skb, len); 2377 skb->protocol = eth_type_trans(skb, dev); 2378 dprintk(KERN_DEBUG "%s: nv_rx_process: %d bytes, proto %d accepted.\n", 2379 dev->name, len, skb->protocol); 2380#ifdef CONFIG_FORCEDETH_NAPI 2381 netif_receive_skb(skb); 2382#else 2383 netif_rx(skb); 2384#endif 2385 dev->last_rx = jiffies; 2386 np->stats.rx_packets++; 2387 np->stats.rx_bytes += len; 2388next_pkt: 2389 if (unlikely(np->get_rx.orig++ == np->last_rx.orig)) 2390 np->get_rx.orig = np->first_rx.orig; 2391 if (unlikely(np->get_rx_ctx++ == np->last_rx_ctx)) 2392 np->get_rx_ctx = np->first_rx_ctx; 2393 } 2394 2395 return rx_processed_cnt; 2396} 2397 2398static int nv_rx_process_optimized(struct net_device *dev, int limit) 2399{ 2400 struct fe_priv *np = netdev_priv(dev); 2401 u32 flags; 2402 u32 vlanflags = 0; 2403 u32 rx_processed_cnt = 0; 2404 struct sk_buff *skb; 2405 int len; 2406 2407 while((np->get_rx.ex != np->put_rx.ex) && 2408 !((flags = le32_to_cpu(np->get_rx.ex->flaglen)) & 
NV_RX2_AVAIL) && 2409 (rx_processed_cnt++ < limit)) { 2410 2411 dprintk(KERN_DEBUG "%s: nv_rx_process_optimized: flags 0x%x.\n", 2412 dev->name, flags); 2413 2414 /* 2415 * the packet is for us - immediately tear down the pci mapping. 2416 * TODO: check if a prefetch of the first cacheline improves 2417 * the performance. 2418 */ 2419 pci_unmap_single(np->pci_dev, np->get_rx_ctx->dma, 2420 np->get_rx_ctx->dma_len, 2421 PCI_DMA_FROMDEVICE); 2422 skb = np->get_rx_ctx->skb; 2423 np->get_rx_ctx->skb = NULL; 2424 2425 { 2426 int j; 2427 dprintk(KERN_DEBUG "Dumping packet (flags 0x%x).",flags); 2428 for (j=0; j<64; j++) { 2429 if ((j%16) == 0) 2430 dprintk("\n%03x:", j); 2431 dprintk(" %02x", ((unsigned char*)skb->data)[j]); 2432 } 2433 dprintk("\n"); 2434 } 2435 /* look at what we actually got: */ 2436 if (likely(flags & NV_RX2_DESCRIPTORVALID)) { 2437 len = flags & LEN_MASK_V2; 2438 if (unlikely(flags & NV_RX2_ERROR)) { 2439 if (flags & NV_RX2_ERROR4) { 2440 len = nv_getlen(dev, skb->data, len); 2441 if (len < 0) { 2442 dev_kfree_skb(skb); 2443 goto next_pkt; 2444 } 2445 } 2446 /* framing errors are soft errors */ 2447 else if (flags & NV_RX2_FRAMINGERR) { 2448 if (flags & NV_RX2_SUBSTRACT1) { 2449 len--; 2450 } 2451 } 2452 /* the rest are hard errors */ 2453 else { 2454 dev_kfree_skb(skb); 2455 goto next_pkt; 2456 } 2457 } 2458 2459 if ((flags & NV_RX2_CHECKSUMMASK) == NV_RX2_CHECKSUMOK2)/*ip and tcp */ { 2460 skb->ip_summed = CHECKSUM_UNNECESSARY; 2461 } else { 2462 if ((flags & NV_RX2_CHECKSUMMASK) == NV_RX2_CHECKSUMOK1 || 2463 (flags & NV_RX2_CHECKSUMMASK) == NV_RX2_CHECKSUMOK3) { 2464 skb->ip_summed = CHECKSUM_UNNECESSARY; 2465 } 2466 } 2467 2468 /* got a valid packet - forward it to the network core */ 2469 skb_put(skb, len); 2470 skb->protocol = eth_type_trans(skb, dev); 2471 prefetch(skb->data); 2472 2473 dprintk(KERN_DEBUG "%s: nv_rx_process_optimized: %d bytes, proto %d accepted.\n", 2474 dev->name, len, skb->protocol); 2475 2476 if (likely(!np->vlangrp)) { 2477#ifdef CONFIG_FORCEDETH_NAPI 2478 netif_receive_skb(skb); 2479#else 2480 netif_rx(skb); 2481#endif 2482 } else { 2483 vlanflags = le32_to_cpu(np->get_rx.ex->buflow); 2484 if (vlanflags & NV_RX3_VLAN_TAG_PRESENT) { 2485#ifdef CONFIG_FORCEDETH_NAPI 2486 vlan_hwaccel_receive_skb(skb, np->vlangrp, 2487 vlanflags & NV_RX3_VLAN_TAG_MASK); 2488#else 2489 vlan_hwaccel_rx(skb, np->vlangrp, 2490 vlanflags & NV_RX3_VLAN_TAG_MASK); 2491#endif 2492 } else { 2493#ifdef CONFIG_FORCEDETH_NAPI 2494 netif_receive_skb(skb); 2495#else 2496 netif_rx(skb); 2497#endif 2498 } 2499 } 2500 2501 dev->last_rx = jiffies; 2502 np->stats.rx_packets++; 2503 np->stats.rx_bytes += len; 2504 } else { 2505 dev_kfree_skb(skb); 2506 } 2507next_pkt: 2508 if (unlikely(np->get_rx.ex++ == np->last_rx.ex)) 2509 np->get_rx.ex = np->first_rx.ex; 2510 if (unlikely(np->get_rx_ctx++ == np->last_rx_ctx)) 2511 np->get_rx_ctx = np->first_rx_ctx; 2512 } 2513 2514 return rx_processed_cnt; 2515} 2516 2517static void set_bufsize(struct net_device *dev) 2518{ 2519 struct fe_priv *np = netdev_priv(dev); 2520 2521 if (dev->mtu <= ETH_DATA_LEN) 2522 np->rx_buf_sz = ETH_DATA_LEN + NV_RX_HEADERS; 2523 else 2524 np->rx_buf_sz = dev->mtu + NV_RX_HEADERS; 2525} 2526 2527/* 2528 * nv_change_mtu: dev->change_mtu function 2529 * Called with dev_base_lock held for read. 
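*
* Buffer sizing note: set_bufsize() keeps rx_buf_sz at ETH_DATA_LEN +
* NV_RX_HEADERS for any MTU up to the standard 1500, so e.g. a
* 1500 -> 1400 change returns early without touching the rings, while
* 1500 -> 9000 (where pkt_limit permits it) must stop the engines and
* drain and reinit both rings.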
2530 */ 2531static int nv_change_mtu(struct net_device *dev, int new_mtu) 2532{ 2533 struct fe_priv *np = netdev_priv(dev); 2534 int old_mtu; 2535 2536 if (new_mtu < 64 || new_mtu > np->pkt_limit) 2537 return -EINVAL; 2538 2539 old_mtu = dev->mtu; 2540 dev->mtu = new_mtu; 2541 2542 /* return early if the buffer sizes will not change */ 2543 if (old_mtu <= ETH_DATA_LEN && new_mtu <= ETH_DATA_LEN) 2544 return 0; 2545 if (old_mtu == new_mtu) 2546 return 0; 2547 2548 /* synchronized against open : rtnl_lock() held by caller */ 2549 if (netif_running(dev)) { 2550 u8 __iomem *base = get_hwbase(dev); 2551 /* 2552 * It seems that the nic preloads valid ring entries into an 2553 * internal buffer. The procedure for flushing everything is 2554 * guessed, there is probably a simpler approach. 2555 * Changing the MTU is a rare event, it shouldn't matter. 2556 */ 2557 nv_disable_irq(dev); 2558 netif_tx_lock_bh(dev); 2559 spin_lock(&np->lock); 2560 /* stop engines */ 2561 nv_stop_rx(dev); 2562 nv_stop_tx(dev); 2563 nv_txrx_reset(dev); 2564 /* drain rx queue */ 2565 nv_drain_rx(dev); 2566 nv_drain_tx(dev); 2567 /* reinit driver view of the rx queue */ 2568 set_bufsize(dev); 2569 if (nv_init_ring(dev)) { 2570 if (!np->in_shutdown) 2571 mod_timer(&np->oom_kick, jiffies + OOM_REFILL); 2572 } 2573 /* reinit nic view of the rx queue */ 2574 writel(np->rx_buf_sz, base + NvRegOffloadConfig); 2575 setup_hw_rings(dev, NV_SETUP_RX_RING | NV_SETUP_TX_RING); 2576 writel( ((np->rx_ring_size-1) << NVREG_RINGSZ_RXSHIFT) + ((np->tx_ring_size-1) << NVREG_RINGSZ_TXSHIFT), 2577 base + NvRegRingSizes); 2578 pci_push(base); 2579 writel(NVREG_TXRXCTL_KICK|np->txrxctl_bits, get_hwbase(dev) + NvRegTxRxControl); 2580 pci_push(base); 2581 2582 /* restart rx engine */ 2583 nv_start_rx(dev); 2584 nv_start_tx(dev); 2585 spin_unlock(&np->lock); 2586 netif_tx_unlock_bh(dev); 2587 nv_enable_irq(dev); 2588 } 2589 return 0; 2590} 2591 2592static void nv_copy_mac_to_hw(struct net_device *dev) 2593{ 2594 u8 __iomem *base = get_hwbase(dev); 2595 u32 mac[2]; 2596 2597 mac[0] = (dev->dev_addr[0] << 0) + (dev->dev_addr[1] << 8) + 2598 (dev->dev_addr[2] << 16) + (dev->dev_addr[3] << 24); 2599 mac[1] = (dev->dev_addr[4] << 0) + (dev->dev_addr[5] << 8); 2600 2601 writel(mac[0], base + NvRegMacAddrA); 2602 writel(mac[1], base + NvRegMacAddrB); 2603} 2604 2605/* 2606 * nv_set_mac_address: dev->set_mac_address function 2607 * Called with rtnl_lock() held. 2608 */ 2609static int nv_set_mac_address(struct net_device *dev, void *addr) 2610{ 2611 struct fe_priv *np = netdev_priv(dev); 2612 struct sockaddr *macaddr = (struct sockaddr*)addr; 2613 2614 if (!is_valid_ether_addr(macaddr->sa_data)) 2615 return -EADDRNOTAVAIL; 2616 2617 /* synchronized against open : rtnl_lock() held by caller */ 2618 memcpy(dev->dev_addr, macaddr->sa_data, ETH_ALEN); 2619 2620 if (netif_running(dev)) { 2621 netif_tx_lock_bh(dev); 2622 spin_lock_irq(&np->lock); 2623 2624 /* stop rx engine */ 2625 nv_stop_rx(dev); 2626 2627 /* set mac address */ 2628 nv_copy_mac_to_hw(dev); 2629 2630 /* restart rx engine */ 2631 nv_start_rx(dev); 2632 spin_unlock_irq(&np->lock); 2633 netif_tx_unlock_bh(dev); 2634 } else { 2635 nv_copy_mac_to_hw(dev); 2636 } 2637 return 0; 2638} 2639 2640/* 2641 * nv_set_multicast: dev->set_multicast function 2642 * Called with netif_tx_lock held. 
2643 */ 2644static void nv_set_multicast(struct net_device *dev) 2645{ 2646 struct fe_priv *np = netdev_priv(dev); 2647 u8 __iomem *base = get_hwbase(dev); 2648 u32 addr[2]; 2649 u32 mask[2]; 2650 u32 pff = readl(base + NvRegPacketFilterFlags) & NVREG_PFF_PAUSE_RX; 2651 2652 memset(addr, 0, sizeof(addr)); 2653 memset(mask, 0, sizeof(mask)); 2654 2655 if (dev->flags & IFF_PROMISC) { 2656 pff |= NVREG_PFF_PROMISC; 2657 } else { 2658 pff |= NVREG_PFF_MYADDR; 2659 2660 if (dev->flags & IFF_ALLMULTI || dev->mc_list) { 2661 u32 alwaysOff[2]; 2662 u32 alwaysOn[2]; 2663 2664 alwaysOn[0] = alwaysOn[1] = alwaysOff[0] = alwaysOff[1] = 0xffffffff; 2665 if (dev->flags & IFF_ALLMULTI) { 2666 alwaysOn[0] = alwaysOn[1] = alwaysOff[0] = alwaysOff[1] = 0; 2667 } else { 2668 struct dev_mc_list *walk; 2669 2670 walk = dev->mc_list; 2671 while (walk != NULL) { 2672 u32 a, b; 2673 a = le32_to_cpu(*(u32 *) walk->dmi_addr); 2674 b = le16_to_cpu(*(u16 *) (&walk->dmi_addr[4])); 2675 alwaysOn[0] &= a; 2676 alwaysOff[0] &= ~a; 2677 alwaysOn[1] &= b; 2678 alwaysOff[1] &= ~b; 2679 walk = walk->next; 2680 } 2681 } 2682 addr[0] = alwaysOn[0]; 2683 addr[1] = alwaysOn[1]; 2684 mask[0] = alwaysOn[0] | alwaysOff[0]; 2685 mask[1] = alwaysOn[1] | alwaysOff[1]; 2686 } 2687 } 2688 addr[0] |= NVREG_MCASTADDRA_FORCE; 2689 pff |= NVREG_PFF_ALWAYS; 2690 spin_lock_irq(&np->lock); 2691 nv_stop_rx(dev); 2692 writel(addr[0], base + NvRegMulticastAddrA); 2693 writel(addr[1], base + NvRegMulticastAddrB); 2694 writel(mask[0], base + NvRegMulticastMaskA); 2695 writel(mask[1], base + NvRegMulticastMaskB); 2696 writel(pff, base + NvRegPacketFilterFlags); 2697 dprintk(KERN_INFO "%s: reconfiguration for multicast lists.\n", 2698 dev->name); 2699 nv_start_rx(dev); 2700 spin_unlock_irq(&np->lock); 2701} 2702 2703static void nv_update_pause(struct net_device *dev, u32 pause_flags) 2704{ 2705 struct fe_priv *np = netdev_priv(dev); 2706 u8 __iomem *base = get_hwbase(dev); 2707 2708 np->pause_flags &= ~(NV_PAUSEFRAME_TX_ENABLE | NV_PAUSEFRAME_RX_ENABLE); 2709 2710 if (np->pause_flags & NV_PAUSEFRAME_RX_CAPABLE) { 2711 u32 pff = readl(base + NvRegPacketFilterFlags) & ~NVREG_PFF_PAUSE_RX; 2712 if (pause_flags & NV_PAUSEFRAME_RX_ENABLE) { 2713 writel(pff|NVREG_PFF_PAUSE_RX, base + NvRegPacketFilterFlags); 2714 np->pause_flags |= NV_PAUSEFRAME_RX_ENABLE; 2715 } else { 2716 writel(pff, base + NvRegPacketFilterFlags); 2717 } 2718 } 2719 if (np->pause_flags & NV_PAUSEFRAME_TX_CAPABLE) { 2720 u32 regmisc = readl(base + NvRegMisc1) & ~NVREG_MISC1_PAUSE_TX; 2721 if (pause_flags & NV_PAUSEFRAME_TX_ENABLE) { 2722 writel(NVREG_TX_PAUSEFRAME_ENABLE, base + NvRegTxPauseFrame); 2723 writel(regmisc|NVREG_MISC1_PAUSE_TX, base + NvRegMisc1); 2724 np->pause_flags |= NV_PAUSEFRAME_TX_ENABLE; 2725 } else { 2726 writel(NVREG_TX_PAUSEFRAME_DISABLE, base + NvRegTxPauseFrame); 2727 writel(regmisc, base + NvRegMisc1); 2728 } 2729 } 2730} 2731 2732/** 2733 * nv_update_linkspeed: Setup the MAC according to the link partner 2734 * @dev: Network device to be configured 2735 * 2736 * The function queries the PHY and checks if there is a link partner. 2737 * If yes, then it sets up the MAC accordingly. Otherwise, the MAC is 2738 * set to 10 MBit HD. 2739 * 2740 * The function returns 0 if there is no link partner and 1 if there is 2741 * a good link partner. 
2742 */ 2743static int nv_update_linkspeed(struct net_device *dev) 2744{ 2745 struct fe_priv *np = netdev_priv(dev); 2746 u8 __iomem *base = get_hwbase(dev); 2747 int adv = 0; 2748 int lpa = 0; 2749 int adv_lpa, adv_pause, lpa_pause; 2750 int newls = np->linkspeed; 2751 int newdup = np->duplex; 2752 int mii_status; 2753 int retval = 0; 2754 u32 control_1000, status_1000, phyreg, pause_flags, txreg; 2755 2756 /* BMSR_LSTATUS is latched, read it twice: 2757 * we want the current value. 2758 */ 2759 mii_rw(dev, np->phyaddr, MII_BMSR, MII_READ); 2760 mii_status = mii_rw(dev, np->phyaddr, MII_BMSR, MII_READ); 2761 2762 if (!(mii_status & BMSR_LSTATUS)) { 2763 dprintk(KERN_DEBUG "%s: no link detected by phy - falling back to 10HD.\n", 2764 dev->name); 2765 newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_10; 2766 newdup = 0; 2767 retval = 0; 2768 goto set_speed; 2769 } 2770 2771 if (np->autoneg == 0) { 2772 dprintk(KERN_DEBUG "%s: nv_update_linkspeed: autoneg off, PHY set to 0x%04x.\n", 2773 dev->name, np->fixed_mode); 2774 if (np->fixed_mode & LPA_100FULL) { 2775 newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_100; 2776 newdup = 1; 2777 } else if (np->fixed_mode & LPA_100HALF) { 2778 newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_100; 2779 newdup = 0; 2780 } else if (np->fixed_mode & LPA_10FULL) { 2781 newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_10; 2782 newdup = 1; 2783 } else { 2784 newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_10; 2785 newdup = 0; 2786 } 2787 retval = 1; 2788 goto set_speed; 2789 } 2790 /* check auto negotiation is complete */ 2791 if (!(mii_status & BMSR_ANEGCOMPLETE)) { 2792 /* still in autonegotiation - configure nic for 10 MBit HD and wait. */ 2793 newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_10; 2794 newdup = 0; 2795 retval = 0; 2796 dprintk(KERN_DEBUG "%s: autoneg not completed - falling back to 10HD.\n", dev->name); 2797 goto set_speed; 2798 } 2799 2800 adv = mii_rw(dev, np->phyaddr, MII_ADVERTISE, MII_READ); 2801 lpa = mii_rw(dev, np->phyaddr, MII_LPA, MII_READ); 2802 dprintk(KERN_DEBUG "%s: nv_update_linkspeed: PHY advertises 0x%04x, lpa 0x%04x.\n", 2803 dev->name, adv, lpa); 2804 2805 retval = 1; 2806 if (np->gigabit == PHY_GIGABIT) { 2807 control_1000 = mii_rw(dev, np->phyaddr, MII_CTRL1000, MII_READ); 2808 status_1000 = mii_rw(dev, np->phyaddr, MII_STAT1000, MII_READ); 2809 2810 if ((control_1000 & ADVERTISE_1000FULL) && 2811 (status_1000 & LPA_1000FULL)) { 2812 dprintk(KERN_DEBUG "%s: nv_update_linkspeed: GBit ethernet detected.\n", 2813 dev->name); 2814 newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_1000; 2815 newdup = 1; 2816 goto set_speed; 2817 } 2818 } 2819 2820 /* FIXME: handle parallel detection properly */ 2821 adv_lpa = lpa & adv; 2822 if (adv_lpa & LPA_100FULL) { 2823 newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_100; 2824 newdup = 1; 2825 } else if (adv_lpa & LPA_100HALF) { 2826 newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_100; 2827 newdup = 0; 2828 } else if (adv_lpa & LPA_10FULL) { 2829 newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_10; 2830 newdup = 1; 2831 } else if (adv_lpa & LPA_10HALF) { 2832 newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_10; 2833 newdup = 0; 2834 } else { 2835 dprintk(KERN_DEBUG "%s: bad ability %04x - falling back to 10HD.\n", dev->name, adv_lpa); 2836 newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_10; 2837 newdup = 0; 2838 } 2839 2840set_speed: 2841 if (np->duplex == newdup && np->linkspeed == newls) 2842 return retval; 2843 2844 dprintk(KERN_INFO "%s: changing link setting from %d/%d to %d/%d.\n", 2845 dev->name, np->linkspeed, 
np->duplex, newls, newdup); 2846 2847 np->duplex = newdup; 2848 np->linkspeed = newls; 2849 2850 if (np->gigabit == PHY_GIGABIT) { 2851 phyreg = readl(base + NvRegRandomSeed); 2852 phyreg &= ~(0x3FF00); 2853 if ((np->linkspeed & 0xFFF) == NVREG_LINKSPEED_10) 2854 phyreg |= NVREG_RNDSEED_FORCE3; 2855 else if ((np->linkspeed & 0xFFF) == NVREG_LINKSPEED_100) 2856 phyreg |= NVREG_RNDSEED_FORCE2; 2857 else if ((np->linkspeed & 0xFFF) == NVREG_LINKSPEED_1000) 2858 phyreg |= NVREG_RNDSEED_FORCE; 2859 writel(phyreg, base + NvRegRandomSeed); 2860 } 2861 2862 phyreg = readl(base + NvRegPhyInterface); 2863 phyreg &= ~(PHY_HALF|PHY_100|PHY_1000); 2864 if (np->duplex == 0) 2865 phyreg |= PHY_HALF; 2866 if ((np->linkspeed & NVREG_LINKSPEED_MASK) == NVREG_LINKSPEED_100) 2867 phyreg |= PHY_100; 2868 else if ((np->linkspeed & NVREG_LINKSPEED_MASK) == NVREG_LINKSPEED_1000) 2869 phyreg |= PHY_1000; 2870 writel(phyreg, base + NvRegPhyInterface); 2871 2872 if (phyreg & PHY_RGMII) { 2873 if ((np->linkspeed & NVREG_LINKSPEED_MASK) == NVREG_LINKSPEED_1000) 2874 txreg = NVREG_TX_DEFERRAL_RGMII_1000; 2875 else 2876 txreg = NVREG_TX_DEFERRAL_RGMII_10_100; 2877 } else { 2878 txreg = NVREG_TX_DEFERRAL_DEFAULT; 2879 } 2880 writel(txreg, base + NvRegTxDeferral); 2881 2882 if (np->desc_ver == DESC_VER_1) { 2883 txreg = NVREG_TX_WM_DESC1_DEFAULT; 2884 } else { 2885 if ((np->linkspeed & NVREG_LINKSPEED_MASK) == NVREG_LINKSPEED_1000) 2886 txreg = NVREG_TX_WM_DESC2_3_1000; 2887 else 2888 txreg = NVREG_TX_WM_DESC2_3_DEFAULT; 2889 } 2890 writel(txreg, base + NvRegTxWatermark); 2891 2892 writel(NVREG_MISC1_FORCE | ( np->duplex ? 0 : NVREG_MISC1_HD), 2893 base + NvRegMisc1); 2894 pci_push(base); 2895 writel(np->linkspeed, base + NvRegLinkSpeed); 2896 pci_push(base); 2897 2898 pause_flags = 0; 2899 /* setup pause frame */ 2900 if (np->duplex != 0) { 2901 if (np->autoneg && np->pause_flags & NV_PAUSEFRAME_AUTONEG) { 2902 adv_pause = adv & (ADVERTISE_PAUSE_CAP| ADVERTISE_PAUSE_ASYM); 2903 lpa_pause = lpa & (LPA_PAUSE_CAP| LPA_PAUSE_ASYM); 2904 2905 switch (adv_pause) { 2906 case ADVERTISE_PAUSE_CAP: 2907 if (lpa_pause & LPA_PAUSE_CAP) { 2908 pause_flags |= NV_PAUSEFRAME_RX_ENABLE; 2909 if (np->pause_flags & NV_PAUSEFRAME_TX_REQ) 2910 pause_flags |= NV_PAUSEFRAME_TX_ENABLE; 2911 } 2912 break; 2913 case ADVERTISE_PAUSE_ASYM: 2914 if (lpa_pause == (LPA_PAUSE_CAP| LPA_PAUSE_ASYM)) 2915 { 2916 pause_flags |= NV_PAUSEFRAME_TX_ENABLE; 2917 } 2918 break; 2919 case ADVERTISE_PAUSE_CAP| ADVERTISE_PAUSE_ASYM: 2920 if (lpa_pause & LPA_PAUSE_CAP) 2921 { 2922 pause_flags |= NV_PAUSEFRAME_RX_ENABLE; 2923 if (np->pause_flags & NV_PAUSEFRAME_TX_REQ) 2924 pause_flags |= NV_PAUSEFRAME_TX_ENABLE; 2925 } 2926 if (lpa_pause == LPA_PAUSE_ASYM) 2927 { 2928 pause_flags |= NV_PAUSEFRAME_RX_ENABLE; 2929 } 2930 break; 2931 } 2932 } else { 2933 pause_flags = np->pause_flags; 2934 } 2935 } 2936 nv_update_pause(dev, pause_flags); 2937 2938 return retval; 2939} 2940 2941static void nv_linkchange(struct net_device *dev) 2942{ 2943 if (nv_update_linkspeed(dev)) { 2944 if (!netif_carrier_ok(dev)) { 2945 netif_carrier_on(dev); 2946 printk(KERN_INFO "%s: link up.\n", dev->name); 2947 nv_start_rx(dev); 2948 } 2949 } else { 2950 if (netif_carrier_ok(dev)) { 2951 netif_carrier_off(dev); 2952 printk(KERN_INFO "%s: link down.\n", dev->name); 2953 nv_stop_rx(dev); 2954 } 2955 } 2956} 2957 2958static void nv_link_irq(struct net_device *dev) 2959{ 2960 u8 __iomem *base = get_hwbase(dev); 2961 u32 miistat; 2962 2963 miistat = readl(base + NvRegMIIStatus); 2964 
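/* ack (clear) the latched status bits right away; the snapshot in
* miistat is what gets acted upon below */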
writel(NVREG_MIISTAT_MASK, base + NvRegMIIStatus); 2965 dprintk(KERN_INFO "%s: link change irq, status 0x%x.\n", dev->name, miistat); 2966 2967 if (miistat & (NVREG_MIISTAT_LINKCHANGE)) 2968 nv_linkchange(dev); 2969 dprintk(KERN_DEBUG "%s: link change notification done.\n", dev->name); 2970} 2971 2972static irqreturn_t nv_nic_irq(int foo, void *data) 2973{ 2974 struct net_device *dev = (struct net_device *) data; 2975 struct fe_priv *np = netdev_priv(dev); 2976 u8 __iomem *base = get_hwbase(dev); 2977 u32 events; 2978 int i; 2979 2980 dprintk(KERN_DEBUG "%s: nv_nic_irq\n", dev->name); 2981 2982 for (i=0; ; i++) { 2983 if (!(np->msi_flags & NV_MSI_X_ENABLED)) { 2984 events = readl(base + NvRegIrqStatus) & NVREG_IRQSTAT_MASK; 2985 writel(NVREG_IRQSTAT_MASK, base + NvRegIrqStatus); 2986 } else { 2987 events = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQSTAT_MASK; 2988 writel(NVREG_IRQSTAT_MASK, base + NvRegMSIXIrqStatus); 2989 } 2990 dprintk(KERN_DEBUG "%s: irq: %08x\n", dev->name, events); 2991 if (!(events & np->irqmask)) 2992 break; 2993 2994 spin_lock(&np->lock); 2995 nv_tx_done(dev); 2996 spin_unlock(&np->lock); 2997 2998#ifdef CONFIG_FORCEDETH_NAPI 2999 if (events & NVREG_IRQ_RX_ALL) { 3000 netif_rx_schedule(dev); 3001 3002 /* Disable further receive irqs */ 3003 spin_lock(&np->lock); 3004 np->irqmask &= ~NVREG_IRQ_RX_ALL; 3005 3006 if (np->msi_flags & NV_MSI_X_ENABLED) 3007 writel(NVREG_IRQ_RX_ALL, base + NvRegIrqMask); 3008 else 3009 writel(np->irqmask, base + NvRegIrqMask); 3010 spin_unlock(&np->lock); 3011 } 3012#else 3013 if (nv_rx_process(dev, dev->weight)) { 3014 if (unlikely(nv_alloc_rx(dev))) { 3015 spin_lock(&np->lock); 3016 if (!np->in_shutdown) 3017 mod_timer(&np->oom_kick, jiffies + OOM_REFILL); 3018 spin_unlock(&np->lock); 3019 } 3020 } 3021#endif 3022 if (unlikely(events & NVREG_IRQ_LINK)) { 3023 spin_lock(&np->lock); 3024 nv_link_irq(dev); 3025 spin_unlock(&np->lock); 3026 } 3027 if (unlikely(np->need_linktimer && time_after(jiffies, np->link_timeout))) { 3028 spin_lock(&np->lock); 3029 nv_linkchange(dev); 3030 spin_unlock(&np->lock); 3031 np->link_timeout = jiffies + LINK_TIMEOUT; 3032 } 3033 if (unlikely(events & (NVREG_IRQ_TX_ERR))) { 3034 dprintk(KERN_DEBUG "%s: received irq with events 0x%x. Probably TX fail.\n", 3035 dev->name, events); 3036 } 3037 if (unlikely(events & (NVREG_IRQ_UNKNOWN))) { 3038 printk(KERN_DEBUG "%s: received irq with unknown events 0x%x.
Please report\n", 3039 dev->name, events); 3040 } 3041 if (unlikely(events & NVREG_IRQ_RECOVER_ERROR)) { 3042 spin_lock(&np->lock); 3043 /* disable interrupts on the nic */ 3044 if (!(np->msi_flags & NV_MSI_X_ENABLED)) 3045 writel(0, base + NvRegIrqMask); 3046 else 3047 writel(np->irqmask, base + NvRegIrqMask); 3048 pci_push(base); 3049 3050 if (!np->in_shutdown) { 3051 np->nic_poll_irq = np->irqmask; 3052 np->recover_error = 1; 3053 mod_timer(&np->nic_poll, jiffies + POLL_WAIT); 3054 } 3055 spin_unlock(&np->lock); 3056 break; 3057 } 3058 if (unlikely(i > max_interrupt_work)) { 3059 spin_lock(&np->lock); 3060 /* disable interrupts on the nic */ 3061 if (!(np->msi_flags & NV_MSI_X_ENABLED)) 3062 writel(0, base + NvRegIrqMask); 3063 else 3064 writel(np->irqmask, base + NvRegIrqMask); 3065 pci_push(base); 3066 3067 if (!np->in_shutdown) { 3068 np->nic_poll_irq = np->irqmask; 3069 mod_timer(&np->nic_poll, jiffies + POLL_WAIT); 3070 } 3071 printk(KERN_DEBUG "%s: too many iterations (%d) in nv_nic_irq.\n", dev->name, i); 3072 spin_unlock(&np->lock); 3073 break; 3074 } 3075 3076 } 3077 dprintk(KERN_DEBUG "%s: nv_nic_irq completed\n", dev->name); 3078 3079 return IRQ_RETVAL(i); 3080} 3081 3082#define TX_WORK_PER_LOOP 64 3083#define RX_WORK_PER_LOOP 64 3084/** 3085 * All _optimized functions are used to help increase performance 3086 * (reduce CPU usage and increase throughput). They use descriptor version 3, 3087 * compiler directives, and fewer memory accesses. 3088 */ 3089static irqreturn_t nv_nic_irq_optimized(int foo, void *data) 3090{ 3091 struct net_device *dev = (struct net_device *) data; 3092 struct fe_priv *np = netdev_priv(dev); 3093 u8 __iomem *base = get_hwbase(dev); 3094 u32 events; 3095 int i; 3096 3097 dprintk(KERN_DEBUG "%s: nv_nic_irq_optimized\n", dev->name); 3098 3099 for (i=0; ; i++) { 3100 if (!(np->msi_flags & NV_MSI_X_ENABLED)) { 3101 events = readl(base + NvRegIrqStatus) & NVREG_IRQSTAT_MASK; 3102 writel(NVREG_IRQSTAT_MASK, base + NvRegIrqStatus); 3103 } else { 3104 events = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQSTAT_MASK; 3105 writel(NVREG_IRQSTAT_MASK, base + NvRegMSIXIrqStatus); 3106 } 3107 dprintk(KERN_DEBUG "%s: irq: %08x\n", dev->name, events); 3108 if (!(events & np->irqmask)) 3109 break; 3110 3111 spin_lock(&np->lock); 3112 nv_tx_done_optimized(dev, TX_WORK_PER_LOOP); 3113 spin_unlock(&np->lock); 3114 3115#ifdef CONFIG_FORCEDETH_NAPI 3116 if (events & NVREG_IRQ_RX_ALL) { 3117 netif_rx_schedule(dev); 3118 3119 /* Disable further receive irqs */ 3120 spin_lock(&np->lock); 3121 np->irqmask &= ~NVREG_IRQ_RX_ALL; 3122 3123 if (np->msi_flags & NV_MSI_X_ENABLED) 3124 writel(NVREG_IRQ_RX_ALL, base + NvRegIrqMask); 3125 else 3126 writel(np->irqmask, base + NvRegIrqMask); 3127 spin_unlock(&np->lock); 3128 } 3129#else 3130 if (nv_rx_process_optimized(dev, dev->weight)) { 3131 if (unlikely(nv_alloc_rx_optimized(dev))) { 3132 spin_lock(&np->lock); 3133 if (!np->in_shutdown) 3134 mod_timer(&np->oom_kick, jiffies + OOM_REFILL); 3135 spin_unlock(&np->lock); 3136 } 3137 } 3138#endif 3139 if (unlikely(events & NVREG_IRQ_LINK)) { 3140 spin_lock(&np->lock); 3141 nv_link_irq(dev); 3142 spin_unlock(&np->lock); 3143 } 3144 if (unlikely(np->need_linktimer && time_after(jiffies, np->link_timeout))) { 3145 spin_lock(&np->lock); 3146 nv_linkchange(dev); 3147 spin_unlock(&np->lock); 3148 np->link_timeout = jiffies + LINK_TIMEOUT; 3149 } 3150 if (unlikely(events & (NVREG_IRQ_TX_ERR))) { 3151 dprintk(KERN_DEBUG "%s: received irq with events 0x%x.
Probably TX fail.\n", 3152 dev->name, events); 3153 } 3154 if (unlikely(events & (NVREG_IRQ_UNKNOWN))) { 3155 printk(KERN_DEBUG "%s: received irq with unknown events 0x%x. Please report\n", 3156 dev->name, events); 3157 } 3158 if (unlikely(events & NVREG_IRQ_RECOVER_ERROR)) { 3159 spin_lock(&np->lock); 3160 /* disable interrupts on the nic */ 3161 if (!(np->msi_flags & NV_MSI_X_ENABLED)) 3162 writel(0, base + NvRegIrqMask); 3163 else 3164 writel(np->irqmask, base + NvRegIrqMask); 3165 pci_push(base); 3166 3167 if (!np->in_shutdown) { 3168 np->nic_poll_irq = np->irqmask; 3169 np->recover_error = 1; 3170 mod_timer(&np->nic_poll, jiffies + POLL_WAIT); 3171 } 3172 spin_unlock(&np->lock); 3173 break; 3174 } 3175 3176 if (unlikely(i > max_interrupt_work)) { 3177 spin_lock(&np->lock); 3178 /* disable interrupts on the nic */ 3179 if (!(np->msi_flags & NV_MSI_X_ENABLED)) 3180 writel(0, base + NvRegIrqMask); 3181 else 3182 writel(np->irqmask, base + NvRegIrqMask); 3183 pci_push(base); 3184 3185 if (!np->in_shutdown) { 3186 np->nic_poll_irq = np->irqmask; 3187 mod_timer(&np->nic_poll, jiffies + POLL_WAIT); 3188 } 3189 printk(KERN_DEBUG "%s: too many iterations (%d) in nv_nic_irq.\n", dev->name, i); 3190 spin_unlock(&np->lock); 3191 break; 3192 } 3193 3194 } 3195 dprintk(KERN_DEBUG "%s: nv_nic_irq_optimized completed\n", dev->name); 3196 3197 return IRQ_RETVAL(i); 3198} 3199 3200static irqreturn_t nv_nic_irq_tx(int foo, void *data) 3201{ 3202 struct net_device *dev = (struct net_device *) data; 3203 struct fe_priv *np = netdev_priv(dev); 3204 u8 __iomem *base = get_hwbase(dev); 3205 u32 events; 3206 int i; 3207 unsigned long flags; 3208 3209 dprintk(KERN_DEBUG "%s: nv_nic_irq_tx\n", dev->name); 3210 3211 for (i=0; ; i++) { 3212 events = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQ_TX_ALL; 3213 writel(NVREG_IRQ_TX_ALL, base + NvRegMSIXIrqStatus); 3214 dprintk(KERN_DEBUG "%s: tx irq: %08x\n", dev->name, events); 3215 if (!(events & np->irqmask)) 3216 break; 3217 3218 spin_lock_irqsave(&np->lock, flags); 3219 nv_tx_done_optimized(dev, TX_WORK_PER_LOOP); 3220 spin_unlock_irqrestore(&np->lock, flags); 3221 3222 if (unlikely(events & (NVREG_IRQ_TX_ERR))) { 3223 dprintk(KERN_DEBUG "%s: received irq with events 0x%x. 
Probably TX fail.\n", 3224 dev->name, events); 3225 } 3226 if (unlikely(i > max_interrupt_work)) { 3227 spin_lock_irqsave(&np->lock, flags); 3228 /* disable interrupts on the nic */ 3229 writel(NVREG_IRQ_TX_ALL, base + NvRegIrqMask); 3230 pci_push(base); 3231 3232 if (!np->in_shutdown) { 3233 np->nic_poll_irq |= NVREG_IRQ_TX_ALL; 3234 mod_timer(&np->nic_poll, jiffies + POLL_WAIT); 3235 } 3236 printk(KERN_DEBUG "%s: too many iterations (%d) in nv_nic_irq_tx.\n", dev->name, i); 3237 spin_unlock_irqrestore(&np->lock, flags); 3238 break; 3239 } 3240 3241 } 3242 dprintk(KERN_DEBUG "%s: nv_nic_irq_tx completed\n", dev->name); 3243 3244 return IRQ_RETVAL(i); 3245} 3246 3247#ifdef CONFIG_FORCEDETH_NAPI 3248static int nv_napi_poll(struct net_device *dev, int *budget) 3249{ 3250 int pkts, limit = min(*budget, dev->quota); 3251 struct fe_priv *np = netdev_priv(dev); 3252 u8 __iomem *base = get_hwbase(dev); 3253 unsigned long flags; 3254 int retcode; 3255 3256 if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) { 3257 pkts = nv_rx_process(dev, limit); 3258 retcode = nv_alloc_rx(dev); 3259 } else { 3260 pkts = nv_rx_process_optimized(dev, limit); 3261 retcode = nv_alloc_rx_optimized(dev); 3262 } 3263 3264 if (retcode) { 3265 spin_lock_irqsave(&np->lock, flags); 3266 if (!np->in_shutdown) 3267 mod_timer(&np->oom_kick, jiffies + OOM_REFILL); 3268 spin_unlock_irqrestore(&np->lock, flags); 3269 } 3270 3271 if (pkts < limit) { 3272 /* all done, no more packets present */ 3273 netif_rx_complete(dev); 3274 3275 /* re-enable receive interrupts */ 3276 spin_lock_irqsave(&np->lock, flags); 3277 3278 np->irqmask |= NVREG_IRQ_RX_ALL; 3279 if (np->msi_flags & NV_MSI_X_ENABLED) 3280 writel(NVREG_IRQ_RX_ALL, base + NvRegIrqMask); 3281 else 3282 writel(np->irqmask, base + NvRegIrqMask); 3283 3284 spin_unlock_irqrestore(&np->lock, flags); 3285 return 0; 3286 } else { 3287 /* used up our quantum, so reschedule */ 3288 dev->quota -= pkts; 3289 *budget -= pkts; 3290 return 1; 3291 } 3292} 3293#endif 3294 3295#ifdef CONFIG_FORCEDETH_NAPI 3296static irqreturn_t nv_nic_irq_rx(int foo, void *data) 3297{ 3298 struct net_device *dev = (struct net_device *) data; 3299 u8 __iomem *base = get_hwbase(dev); 3300 u32 events; 3301 3302 events = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQ_RX_ALL; 3303 writel(NVREG_IRQ_RX_ALL, base + NvRegMSIXIrqStatus); 3304 3305 if (events) { 3306 netif_rx_schedule(dev); 3307 /* disable receive interrupts on the nic */ 3308 writel(NVREG_IRQ_RX_ALL, base + NvRegIrqMask); 3309 pci_push(base); 3310 } 3311 return IRQ_HANDLED; 3312} 3313#else 3314static irqreturn_t nv_nic_irq_rx(int foo, void *data) 3315{ 3316 struct net_device *dev = (struct net_device *) data; 3317 struct fe_priv *np = netdev_priv(dev); 3318 u8 __iomem *base = get_hwbase(dev); 3319 u32 events; 3320 int i; 3321 unsigned long flags; 3322 3323 dprintk(KERN_DEBUG "%s: nv_nic_irq_rx\n", dev->name); 3324 3325 for (i=0; ; i++) { 3326 events = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQ_RX_ALL; 3327 writel(NVREG_IRQ_RX_ALL, base + NvRegMSIXIrqStatus); 3328 dprintk(KERN_DEBUG "%s: rx irq: %08x\n", dev->name, events); 3329 if (!(events & np->irqmask)) 3330 break; 3331 3332 if (nv_rx_process_optimized(dev, dev->weight)) { 3333 if (unlikely(nv_alloc_rx_optimized(dev))) { 3334 spin_lock_irqsave(&np->lock, flags); 3335 if (!np->in_shutdown) 3336 mod_timer(&np->oom_kick, jiffies + OOM_REFILL); 3337 spin_unlock_irqrestore(&np->lock, flags); 3338 } 3339 } 3340 3341 if (unlikely(i > max_interrupt_work)) { 3342 spin_lock_irqsave(&np->lock, 
flags); 3343 /* disable interrupts on the nic */ 3344 writel(NVREG_IRQ_RX_ALL, base + NvRegIrqMask); 3345 pci_push(base); 3346 3347 if (!np->in_shutdown) { 3348 np->nic_poll_irq |= NVREG_IRQ_RX_ALL; 3349 mod_timer(&np->nic_poll, jiffies + POLL_WAIT); 3350 } 3351 printk(KERN_DEBUG "%s: too many iterations (%d) in nv_nic_irq_rx.\n", dev->name, i); 3352 spin_unlock_irqrestore(&np->lock, flags); 3353 break; 3354 } 3355 } 3356 dprintk(KERN_DEBUG "%s: nv_nic_irq_rx completed\n", dev->name); 3357 3358 return IRQ_RETVAL(i); 3359} 3360#endif 3361 3362static irqreturn_t nv_nic_irq_other(int foo, void *data) 3363{ 3364 struct net_device *dev = (struct net_device *) data; 3365 struct fe_priv *np = netdev_priv(dev); 3366 u8 __iomem *base = get_hwbase(dev); 3367 u32 events; 3368 int i; 3369 unsigned long flags; 3370 3371 dprintk(KERN_DEBUG "%s: nv_nic_irq_other\n", dev->name); 3372 3373 for (i=0; ; i++) { 3374 events = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQ_OTHER; 3375 writel(NVREG_IRQ_OTHER, base + NvRegMSIXIrqStatus); 3376 dprintk(KERN_DEBUG "%s: irq: %08x\n", dev->name, events); 3377 if (!(events & np->irqmask)) 3378 break; 3379 3380 /* check tx in case we reached max loop limit in tx isr */ 3381 spin_lock_irqsave(&np->lock, flags); 3382 nv_tx_done_optimized(dev, TX_WORK_PER_LOOP); 3383 spin_unlock_irqrestore(&np->lock, flags); 3384 3385 if (events & NVREG_IRQ_LINK) { 3386 spin_lock_irqsave(&np->lock, flags); 3387 nv_link_irq(dev); 3388 spin_unlock_irqrestore(&np->lock, flags); 3389 } 3390 if (np->need_linktimer && time_after(jiffies, np->link_timeout)) { 3391 spin_lock_irqsave(&np->lock, flags); 3392 nv_linkchange(dev); 3393 spin_unlock_irqrestore(&np->lock, flags); 3394 np->link_timeout = jiffies + LINK_TIMEOUT; 3395 } 3396 if (events & NVREG_IRQ_RECOVER_ERROR) { 3397 spin_lock_irq(&np->lock); 3398 /* disable interrupts on the nic */ 3399 writel(NVREG_IRQ_OTHER, base + NvRegIrqMask); 3400 pci_push(base); 3401 3402 if (!np->in_shutdown) { 3403 np->nic_poll_irq |= NVREG_IRQ_OTHER; 3404 np->recover_error = 1; 3405 mod_timer(&np->nic_poll, jiffies + POLL_WAIT); 3406 } 3407 spin_unlock_irq(&np->lock); 3408 break; 3409 } 3410 if (events & (NVREG_IRQ_UNKNOWN)) { 3411 printk(KERN_DEBUG "%s: received irq with unknown events 0x%x. 
Please report\n", 3412 dev->name, events); 3413 } 3414 if (unlikely(i > max_interrupt_work)) { 3415 spin_lock_irqsave(&np->lock, flags); 3416 /* disable interrupts on the nic */ 3417 writel(NVREG_IRQ_OTHER, base + NvRegIrqMask); 3418 pci_push(base); 3419 3420 if (!np->in_shutdown) { 3421 np->nic_poll_irq |= NVREG_IRQ_OTHER; 3422 mod_timer(&np->nic_poll, jiffies + POLL_WAIT); 3423 } 3424 printk(KERN_DEBUG "%s: too many iterations (%d) in nv_nic_irq_other.\n", dev->name, i); 3425 spin_unlock_irqrestore(&np->lock, flags); 3426 break; 3427 } 3428 3429 } 3430 dprintk(KERN_DEBUG "%s: nv_nic_irq_other completed\n", dev->name); 3431 3432 return IRQ_RETVAL(i); 3433} 3434 3435static irqreturn_t nv_nic_irq_test(int foo, void *data) 3436{ 3437 struct net_device *dev = (struct net_device *) data; 3438 struct fe_priv *np = netdev_priv(dev); 3439 u8 __iomem *base = get_hwbase(dev); 3440 u32 events; 3441 3442 dprintk(KERN_DEBUG "%s: nv_nic_irq_test\n", dev->name); 3443 3444 if (!(np->msi_flags & NV_MSI_X_ENABLED)) { 3445 events = readl(base + NvRegIrqStatus) & NVREG_IRQSTAT_MASK; 3446 writel(NVREG_IRQ_TIMER, base + NvRegIrqStatus); 3447 } else { 3448 events = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQSTAT_MASK; 3449 writel(NVREG_IRQ_TIMER, base + NvRegMSIXIrqStatus); 3450 } 3451 pci_push(base); 3452 dprintk(KERN_DEBUG "%s: irq: %08x\n", dev->name, events); 3453 if (!(events & NVREG_IRQ_TIMER)) 3454 return IRQ_RETVAL(0); 3455 3456 spin_lock(&np->lock); 3457 np->intr_test = 1; 3458 spin_unlock(&np->lock); 3459 3460 dprintk(KERN_DEBUG "%s: nv_nic_irq_test completed\n", dev->name); 3461 3462 return IRQ_RETVAL(1); 3463} 3464 3465static void set_msix_vector_map(struct net_device *dev, u32 vector, u32 irqmask) 3466{ 3467 u8 __iomem *base = get_hwbase(dev); 3468 int i; 3469 u32 msixmap = 0; 3470 3471 /* Each interrupt bit can be mapped to a MSIX vector (4 bits). 3472 * MSIXMap0 represents the first 8 interrupts and MSIXMap1 represents 3473 * the remaining 8 interrupts. 
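* E.g. set_msix_vector_map(dev, 2, 0x03) ORs 0x22 into MSIXMap0: one
* 4-bit nibble per interrupt bit, with the lowest interrupt in the low
* nibble.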
3474 */ 3475 for (i = 0; i < 8; i++) { 3476 if ((irqmask >> i) & 0x1) { 3477 msixmap |= vector << (i << 2); 3478 } 3479 } 3480 writel(readl(base + NvRegMSIXMap0) | msixmap, base + NvRegMSIXMap0); 3481 3482 msixmap = 0; 3483 for (i = 0; i < 8; i++) { 3484 if ((irqmask >> (i + 8)) & 0x1) { 3485 msixmap |= vector << (i << 2); 3486 } 3487 } 3488 writel(readl(base + NvRegMSIXMap1) | msixmap, base + NvRegMSIXMap1); 3489} 3490 3491static int nv_request_irq(struct net_device *dev, int intr_test) 3492{ 3493 struct fe_priv *np = get_nvpriv(dev); 3494 u8 __iomem *base = get_hwbase(dev); 3495 int ret = 1; 3496 int i; 3497 irqreturn_t (*handler)(int foo, void *data); 3498 3499 if (intr_test) { 3500 handler = nv_nic_irq_test; 3501 } else { 3502 if (np->desc_ver == DESC_VER_3) 3503 handler = nv_nic_irq_optimized; 3504 else 3505 handler = nv_nic_irq; 3506 } 3507 3508 if (np->msi_flags & NV_MSI_X_CAPABLE) { 3509 for (i = 0; i < (np->msi_flags & NV_MSI_X_VECTORS_MASK); i++) { 3510 np->msi_x_entry[i].entry = i; 3511 } 3512 if ((ret = pci_enable_msix(np->pci_dev, np->msi_x_entry, (np->msi_flags & NV_MSI_X_VECTORS_MASK))) == 0) { 3513 np->msi_flags |= NV_MSI_X_ENABLED; 3514 if (optimization_mode == NV_OPTIMIZATION_MODE_THROUGHPUT && !intr_test) { 3515 /* Request irq for rx handling */ 3516 if (request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector, &nv_nic_irq_rx, IRQF_SHARED, dev->name, dev) != 0) { 3517 printk(KERN_INFO "forcedeth: request_irq failed for rx %d\n", ret); 3518 pci_disable_msix(np->pci_dev); 3519 np->msi_flags &= ~NV_MSI_X_ENABLED; 3520 goto out_err; 3521 } 3522 /* Request irq for tx handling */ 3523 if (request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector, &nv_nic_irq_tx, IRQF_SHARED, dev->name, dev) != 0) { 3524 printk(KERN_INFO "forcedeth: request_irq failed for tx %d\n", ret); 3525 pci_disable_msix(np->pci_dev); 3526 np->msi_flags &= ~NV_MSI_X_ENABLED; 3527 goto out_free_rx; 3528 } 3529 /* Request irq for link and timer handling */ 3530 if (request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_OTHER].vector, &nv_nic_irq_other, IRQF_SHARED, dev->name, dev) != 0) { 3531 printk(KERN_INFO "forcedeth: request_irq failed for link %d\n", ret); 3532 pci_disable_msix(np->pci_dev); 3533 np->msi_flags &= ~NV_MSI_X_ENABLED; 3534 goto out_free_tx; 3535 } 3536 /* map interrupts to their respective vector */ 3537 writel(0, base + NvRegMSIXMap0); 3538 writel(0, base + NvRegMSIXMap1); 3539 set_msix_vector_map(dev, NV_MSI_X_VECTOR_RX, NVREG_IRQ_RX_ALL); 3540 set_msix_vector_map(dev, NV_MSI_X_VECTOR_TX, NVREG_IRQ_TX_ALL); 3541 set_msix_vector_map(dev, NV_MSI_X_VECTOR_OTHER, NVREG_IRQ_OTHER); 3542 } else { 3543 /* Request irq for all interrupts */ 3544 if (request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector, handler, IRQF_SHARED, dev->name, dev) != 0) { 3545 printk(KERN_INFO "forcedeth: request_irq failed %d\n", ret); 3546 pci_disable_msix(np->pci_dev); 3547 np->msi_flags &= ~NV_MSI_X_ENABLED; 3548 goto out_err; 3549 } 3550 3551 /* map interrupts to vector 0 */ 3552 writel(0, base + NvRegMSIXMap0); 3553 writel(0, base + NvRegMSIXMap1); 3554 } 3555 } 3556 } 3557 if (ret != 0 && np->msi_flags & NV_MSI_CAPABLE) { 3558 if ((ret = pci_enable_msi(np->pci_dev)) == 0) { 3559 np->msi_flags |= NV_MSI_ENABLED; 3560 if (request_irq(np->pci_dev->irq, handler, IRQF_SHARED, dev->name, dev) != 0) { 3561 printk(KERN_INFO "forcedeth: request_irq failed %d\n", ret); 3562 pci_disable_msi(np->pci_dev); 3563 np->msi_flags &= ~NV_MSI_ENABLED; 3564 goto out_err; 3565 } 3566 3567 /* map interrupts to vector 0 */ 3568 writel(0, base + 
NvRegMSIMap0); 3569 writel(0, base + NvRegMSIMap1); 3570 /* enable msi vector 0 */ 3571 writel(NVREG_MSI_VECTOR_0_ENABLED, base + NvRegMSIIrqMask); 3572 } 3573 } 3574 if (ret != 0) { 3575 if (request_irq(np->pci_dev->irq, handler, IRQF_SHARED, dev->name, dev) != 0) 3576 goto out_err; 3577 3578 } 3579 3580 return 0; 3581out_free_tx: 3582 free_irq(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector, dev); 3583out_free_rx: 3584 free_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector, dev); 3585out_err: 3586 return 1; 3587} 3588 3589static void nv_free_irq(struct net_device *dev) 3590{ 3591 struct fe_priv *np = get_nvpriv(dev); 3592 int i; 3593 3594 if (np->msi_flags & NV_MSI_X_ENABLED) { 3595 for (i = 0; i < (np->msi_flags & NV_MSI_X_VECTORS_MASK); i++) { 3596 free_irq(np->msi_x_entry[i].vector, dev); 3597 } 3598 pci_disable_msix(np->pci_dev); 3599 np->msi_flags &= ~NV_MSI_X_ENABLED; 3600 } else { 3601 free_irq(np->pci_dev->irq, dev); 3602 if (np->msi_flags & NV_MSI_ENABLED) { 3603 pci_disable_msi(np->pci_dev); 3604 np->msi_flags &= ~NV_MSI_ENABLED; 3605 } 3606 } 3607} 3608 3609static void nv_do_nic_poll(unsigned long data) 3610{ 3611 struct net_device *dev = (struct net_device *) data; 3612 struct fe_priv *np = netdev_priv(dev); 3613 u8 __iomem *base = get_hwbase(dev); 3614 u32 mask = 0; 3615 3616 /* 3617 * First disable irq(s) and then 3618 * reenable interrupts on the nic, we have to do this before calling 3619 * nv_nic_irq because that may decide to do otherwise 3620 */ 3621 3622 if (!using_multi_irqs(dev)) { 3623 if (np->msi_flags & NV_MSI_X_ENABLED) 3624 disable_irq_lockdep(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector); 3625 else 3626 disable_irq_lockdep(dev->irq); 3627 mask = np->irqmask; 3628 } else { 3629 if (np->nic_poll_irq & NVREG_IRQ_RX_ALL) { 3630 disable_irq_lockdep(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector); 3631 mask |= NVREG_IRQ_RX_ALL; 3632 } 3633 if (np->nic_poll_irq & NVREG_IRQ_TX_ALL) { 3634 disable_irq_lockdep(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector); 3635 mask |= NVREG_IRQ_TX_ALL; 3636 } 3637 if (np->nic_poll_irq & NVREG_IRQ_OTHER) { 3638 disable_irq_lockdep(np->msi_x_entry[NV_MSI_X_VECTOR_OTHER].vector); 3639 mask |= NVREG_IRQ_OTHER; 3640 } 3641 } 3642 np->nic_poll_irq = 0; 3643 3644 if (np->recover_error) { 3645 np->recover_error = 0; 3646 printk(KERN_INFO "forcedeth: MAC in recoverable error state\n"); 3647 if (netif_running(dev)) { 3648 netif_tx_lock_bh(dev); 3649 spin_lock(&np->lock); 3650 /* stop engines */ 3651 nv_stop_rx(dev); 3652 nv_stop_tx(dev); 3653 nv_txrx_reset(dev); 3654 /* drain rx queue */ 3655 nv_drain_rx(dev); 3656 nv_drain_tx(dev); 3657 /* reinit driver view of the rx queue */ 3658 set_bufsize(dev); 3659 if (nv_init_ring(dev)) { 3660 if (!np->in_shutdown) 3661 mod_timer(&np->oom_kick, jiffies + OOM_REFILL); 3662 } 3663 /* reinit nic view of the rx queue */ 3664 writel(np->rx_buf_sz, base + NvRegOffloadConfig); 3665 setup_hw_rings(dev, NV_SETUP_RX_RING | NV_SETUP_TX_RING); 3666 writel( ((np->rx_ring_size-1) << NVREG_RINGSZ_RXSHIFT) + ((np->tx_ring_size-1) << NVREG_RINGSZ_TXSHIFT), 3667 base + NvRegRingSizes); 3668 pci_push(base); 3669 writel(NVREG_TXRXCTL_KICK|np->txrxctl_bits, get_hwbase(dev) + NvRegTxRxControl); 3670 pci_push(base); 3671 3672 /* restart rx engine */ 3673 nv_start_rx(dev); 3674 nv_start_tx(dev); 3675 spin_unlock(&np->lock); 3676 netif_tx_unlock_bh(dev); 3677 } 3678 } 3679 3680 /* FIXME: Do we need synchronize_irq(dev->irq) here? 
*/ 3681 3682 writel(mask, base + NvRegIrqMask); 3683 pci_push(base); 3684 3685 if (!using_multi_irqs(dev)) { 3686 if (np->desc_ver == DESC_VER_3) 3687 nv_nic_irq_optimized(0, dev); 3688 else 3689 nv_nic_irq(0, dev); 3690 if (np->msi_flags & NV_MSI_X_ENABLED) 3691 enable_irq_lockdep(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector); 3692 else 3693 enable_irq_lockdep(dev->irq); 3694 } else { 3695 if (np->nic_poll_irq & NVREG_IRQ_RX_ALL) { 3696 nv_nic_irq_rx(0, dev); 3697 enable_irq_lockdep(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector); 3698 } 3699 if (np->nic_poll_irq & NVREG_IRQ_TX_ALL) { 3700 nv_nic_irq_tx(0, dev); 3701 enable_irq_lockdep(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector); 3702 } 3703 if (np->nic_poll_irq & NVREG_IRQ_OTHER) { 3704 nv_nic_irq_other(0, dev); 3705 enable_irq_lockdep(np->msi_x_entry[NV_MSI_X_VECTOR_OTHER].vector); 3706 } 3707 } 3708} 3709 3710#ifdef CONFIG_NET_POLL_CONTROLLER 3711static void nv_poll_controller(struct net_device *dev) 3712{ 3713 nv_do_nic_poll((unsigned long) dev); 3714} 3715#endif 3716 3717static void nv_do_stats_poll(unsigned long data) 3718{ 3719 struct net_device *dev = (struct net_device *) data; 3720 struct fe_priv *np = netdev_priv(dev); 3721 3722 nv_get_hw_stats(dev); 3723 3724 if (!np->in_shutdown) 3725 mod_timer(&np->stats_poll, jiffies + STATS_INTERVAL); 3726} 3727 3728static void nv_get_drvinfo(struct net_device *dev, struct ethtool_drvinfo *info) 3729{ 3730 struct fe_priv *np = netdev_priv(dev); 3731 strcpy(info->driver, "forcedeth"); 3732 strcpy(info->version, FORCEDETH_VERSION); 3733 strcpy(info->bus_info, pci_name(np->pci_dev)); 3734} 3735 3736static void nv_get_wol(struct net_device *dev, struct ethtool_wolinfo *wolinfo) 3737{ 3738 struct fe_priv *np = netdev_priv(dev); 3739 wolinfo->supported = WAKE_MAGIC; 3740 3741 spin_lock_irq(&np->lock); 3742 if (np->wolenabled) 3743 wolinfo->wolopts = WAKE_MAGIC; 3744 spin_unlock_irq(&np->lock); 3745} 3746 3747static int nv_set_wol(struct net_device *dev, struct ethtool_wolinfo *wolinfo) 3748{ 3749 struct fe_priv *np = netdev_priv(dev); 3750 u8 __iomem *base = get_hwbase(dev); 3751 u32 flags = 0; 3752 3753 if (wolinfo->wolopts == 0) { 3754 np->wolenabled = 0; 3755 } else if (wolinfo->wolopts & WAKE_MAGIC) { 3756 np->wolenabled = 1; 3757 flags = NVREG_WAKEUPFLAGS_ENABLE; 3758 } 3759 if (netif_running(dev)) { 3760 spin_lock_irq(&np->lock); 3761 writel(flags, base + NvRegWakeUpFlags); 3762 spin_unlock_irq(&np->lock); 3763 } 3764 return 0; 3765} 3766 3767static int nv_get_settings(struct net_device *dev, struct ethtool_cmd *ecmd) 3768{ 3769 struct fe_priv *np = netdev_priv(dev); 3770 int adv; 3771 3772 spin_lock_irq(&np->lock); 3773 ecmd->port = PORT_MII; 3774 if (!netif_running(dev)) { 3775 /* We do not track link speed / duplex setting if the 3776 * interface is disabled. 
Force a link check */ 3777 if (nv_update_linkspeed(dev)) { 3778 if (!netif_carrier_ok(dev)) 3779 netif_carrier_on(dev); 3780 } else { 3781 if (netif_carrier_ok(dev)) 3782 netif_carrier_off(dev); 3783 } 3784 } 3785 3786 if (netif_carrier_ok(dev)) { 3787 switch(np->linkspeed & (NVREG_LINKSPEED_MASK)) { 3788 case NVREG_LINKSPEED_10: 3789 ecmd->speed = SPEED_10; 3790 break; 3791 case NVREG_LINKSPEED_100: 3792 ecmd->speed = SPEED_100; 3793 break; 3794 case NVREG_LINKSPEED_1000: 3795 ecmd->speed = SPEED_1000; 3796 break; 3797 } 3798 ecmd->duplex = DUPLEX_HALF; 3799 if (np->duplex) 3800 ecmd->duplex = DUPLEX_FULL; 3801 } else { 3802 ecmd->speed = -1; 3803 ecmd->duplex = -1; 3804 } 3805 3806 ecmd->autoneg = np->autoneg; 3807 3808 ecmd->advertising = ADVERTISED_MII; 3809 if (np->autoneg) { 3810 ecmd->advertising |= ADVERTISED_Autoneg; 3811 adv = mii_rw(dev, np->phyaddr, MII_ADVERTISE, MII_READ); 3812 if (adv & ADVERTISE_10HALF) 3813 ecmd->advertising |= ADVERTISED_10baseT_Half; 3814 if (adv & ADVERTISE_10FULL) 3815 ecmd->advertising |= ADVERTISED_10baseT_Full; 3816 if (adv & ADVERTISE_100HALF) 3817 ecmd->advertising |= ADVERTISED_100baseT_Half; 3818 if (adv & ADVERTISE_100FULL) 3819 ecmd->advertising |= ADVERTISED_100baseT_Full; 3820 if (np->gigabit == PHY_GIGABIT) { 3821 adv = mii_rw(dev, np->phyaddr, MII_CTRL1000, MII_READ); 3822 if (adv & ADVERTISE_1000FULL) 3823 ecmd->advertising |= ADVERTISED_1000baseT_Full; 3824 } 3825 } 3826 ecmd->supported = (SUPPORTED_Autoneg | 3827 SUPPORTED_10baseT_Half | SUPPORTED_10baseT_Full | 3828 SUPPORTED_100baseT_Half | SUPPORTED_100baseT_Full | 3829 SUPPORTED_MII); 3830 if (np->gigabit == PHY_GIGABIT) 3831 ecmd->supported |= SUPPORTED_1000baseT_Full; 3832 3833 ecmd->phy_address = np->phyaddr; 3834 ecmd->transceiver = XCVR_EXTERNAL; 3835 3836 /* ignore maxtxpkt, maxrxpkt for now */ 3837 spin_unlock_irq(&np->lock); 3838 return 0; 3839} 3840 3841static int nv_set_settings(struct net_device *dev, struct ethtool_cmd *ecmd) 3842{ 3843 struct fe_priv *np = netdev_priv(dev); 3844 3845 if (ecmd->port != PORT_MII) 3846 return -EINVAL; 3847 if (ecmd->transceiver != XCVR_EXTERNAL) 3848 return -EINVAL; 3849 if (ecmd->phy_address != np->phyaddr) { 3850 /* TODO: support switching between multiple phys. Should be 3851 * trivial, but not enabled due to lack of test hardware. */ 3852 return -EINVAL; 3853 } 3854 if (ecmd->autoneg == AUTONEG_ENABLE) { 3855 u32 mask; 3856 3857 mask = ADVERTISED_10baseT_Half | ADVERTISED_10baseT_Full | 3858 ADVERTISED_100baseT_Half | ADVERTISED_100baseT_Full; 3859 if (np->gigabit == PHY_GIGABIT) 3860 mask |= ADVERTISED_1000baseT_Full; 3861 3862 if ((ecmd->advertising & mask) == 0) 3863 return -EINVAL; 3864 3865 } else if (ecmd->autoneg == AUTONEG_DISABLE) { 3866 /* Note: autonegotiation disable, speed 1000 intentionally 3867 * forbidden - no one should need that. 
*/ 3868 3869 if (ecmd->speed != SPEED_10 && ecmd->speed != SPEED_100) 3870 return -EINVAL; 3871 if (ecmd->duplex != DUPLEX_HALF && ecmd->duplex != DUPLEX_FULL) 3872 return -EINVAL; 3873 } else { 3874 return -EINVAL; 3875 } 3876 3877 netif_carrier_off(dev); 3878 if (netif_running(dev)) { 3879 nv_disable_irq(dev); 3880 netif_tx_lock_bh(dev); 3881 spin_lock(&np->lock); 3882 /* stop engines */ 3883 nv_stop_rx(dev); 3884 nv_stop_tx(dev); 3885 spin_unlock(&np->lock); 3886 netif_tx_unlock_bh(dev); 3887 } 3888 3889 if (ecmd->autoneg == AUTONEG_ENABLE) { 3890 int adv, bmcr; 3891 3892 np->autoneg = 1; 3893 3894 /* advertise only what has been requested */ 3895 adv = mii_rw(dev, np->phyaddr, MII_ADVERTISE, MII_READ); 3896 adv &= ~(ADVERTISE_ALL | ADVERTISE_100BASE4 | ADVERTISE_PAUSE_CAP | ADVERTISE_PAUSE_ASYM); 3897 if (ecmd->advertising & ADVERTISED_10baseT_Half) 3898 adv |= ADVERTISE_10HALF; 3899 if (ecmd->advertising & ADVERTISED_10baseT_Full) 3900 adv |= ADVERTISE_10FULL; 3901 if (ecmd->advertising & ADVERTISED_100baseT_Half) 3902 adv |= ADVERTISE_100HALF; 3903 if (ecmd->advertising & ADVERTISED_100baseT_Full) 3904 adv |= ADVERTISE_100FULL; 3905 if (np->pause_flags & NV_PAUSEFRAME_RX_REQ) /* for rx we set both advertisements but disable tx pause */ 3906 adv |= ADVERTISE_PAUSE_CAP | ADVERTISE_PAUSE_ASYM; 3907 if (np->pause_flags & NV_PAUSEFRAME_TX_REQ) 3908 adv |= ADVERTISE_PAUSE_ASYM; 3909 mii_rw(dev, np->phyaddr, MII_ADVERTISE, adv); 3910 3911 if (np->gigabit == PHY_GIGABIT) { 3912 adv = mii_rw(dev, np->phyaddr, MII_CTRL1000, MII_READ); 3913 adv &= ~ADVERTISE_1000FULL; 3914 if (ecmd->advertising & ADVERTISED_1000baseT_Full) 3915 adv |= ADVERTISE_1000FULL; 3916 mii_rw(dev, np->phyaddr, MII_CTRL1000, adv); 3917 } 3918 3919 if (netif_running(dev)) 3920 printk(KERN_INFO "%s: link down.\n", dev->name); 3921 bmcr = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ); 3922 if (np->phy_model == PHY_MODEL_MARVELL_E3016) { 3923 bmcr |= BMCR_ANENABLE; 3924 /* reset the phy in order for settings to stick, 3925 * and cause autoneg to start */ 3926 if (phy_reset(dev, bmcr)) { 3927 printk(KERN_INFO "%s: phy reset failed\n", dev->name); 3928 return -EINVAL; 3929 } 3930 } else { 3931 bmcr |= (BMCR_ANENABLE | BMCR_ANRESTART); 3932 mii_rw(dev, np->phyaddr, MII_BMCR, bmcr); 3933 } 3934 } else { 3935 int adv, bmcr; 3936 3937 np->autoneg = 0; 3938 3939 adv = mii_rw(dev, np->phyaddr, MII_ADVERTISE, MII_READ); 3940 adv &= ~(ADVERTISE_ALL | ADVERTISE_100BASE4 | ADVERTISE_PAUSE_CAP | ADVERTISE_PAUSE_ASYM); 3941 if (ecmd->speed == SPEED_10 && ecmd->duplex == DUPLEX_HALF) 3942 adv |= ADVERTISE_10HALF; 3943 if (ecmd->speed == SPEED_10 && ecmd->duplex == DUPLEX_FULL) 3944 adv |= ADVERTISE_10FULL; 3945 if (ecmd->speed == SPEED_100 && ecmd->duplex == DUPLEX_HALF) 3946 adv |= ADVERTISE_100HALF; 3947 if (ecmd->speed == SPEED_100 && ecmd->duplex == DUPLEX_FULL) 3948 adv |= ADVERTISE_100FULL; 3949 np->pause_flags &= ~(NV_PAUSEFRAME_AUTONEG|NV_PAUSEFRAME_RX_ENABLE|NV_PAUSEFRAME_TX_ENABLE); 3950 if (np->pause_flags & NV_PAUSEFRAME_RX_REQ) { /* for rx we set both advertisements but disable tx pause */ 3951 adv |= ADVERTISE_PAUSE_CAP | ADVERTISE_PAUSE_ASYM; 3952 np->pause_flags |= NV_PAUSEFRAME_RX_ENABLE; 3953 } 3954 if (np->pause_flags & NV_PAUSEFRAME_TX_REQ) { 3955 adv |= ADVERTISE_PAUSE_ASYM; 3956 np->pause_flags |= NV_PAUSEFRAME_TX_ENABLE; 3957 } 3958 mii_rw(dev, np->phyaddr, MII_ADVERTISE, adv); 3959 np->fixed_mode = adv; 3960 3961 if (np->gigabit == PHY_GIGABIT) { 3962 adv = mii_rw(dev, np->phyaddr, MII_CTRL1000, MII_READ); 3963 adv &= 
~ADVERTISE_1000FULL; 3964 mii_rw(dev, np->phyaddr, MII_CTRL1000, adv); 3965 } 3966 3967 bmcr = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ); 3968 bmcr &= ~(BMCR_ANENABLE|BMCR_SPEED100|BMCR_SPEED1000|BMCR_FULLDPLX); 3969 if (np->fixed_mode & (ADVERTISE_10FULL|ADVERTISE_100FULL)) 3970 bmcr |= BMCR_FULLDPLX; 3971 if (np->fixed_mode & (ADVERTISE_100HALF|ADVERTISE_100FULL)) 3972 bmcr |= BMCR_SPEED100; 3973 if (np->phy_oui == PHY_OUI_MARVELL) { 3974 /* reset the phy in order for forced mode settings to stick */ 3975 if (phy_reset(dev, bmcr)) { 3976 printk(KERN_INFO "%s: phy reset failed\n", dev->name); 3977 return -EINVAL; 3978 } 3979 } else { 3980 mii_rw(dev, np->phyaddr, MII_BMCR, bmcr); 3981 if (netif_running(dev)) { 3982 /* Wait a bit and then reconfigure the nic. */ 3983 udelay(10); 3984 nv_linkchange(dev); 3985 } 3986 } 3987 } 3988 3989 if (netif_running(dev)) { 3990 nv_start_rx(dev); 3991 nv_start_tx(dev); 3992 nv_enable_irq(dev); 3993 } 3994 3995 return 0; 3996} 3997 3998#define FORCEDETH_REGS_VER 1 3999 4000static int nv_get_regs_len(struct net_device *dev) 4001{ 4002 struct fe_priv *np = netdev_priv(dev); 4003 return np->register_size; 4004} 4005 4006static void nv_get_regs(struct net_device *dev, struct ethtool_regs *regs, void *buf) 4007{ 4008 struct fe_priv *np = netdev_priv(dev); 4009 u8 __iomem *base = get_hwbase(dev); 4010 u32 *rbuf = buf; 4011 int i; 4012 4013 regs->version = FORCEDETH_REGS_VER; 4014 spin_lock_irq(&np->lock); 4015 for (i = 0; i < np->register_size/sizeof(u32); i++) 4016 rbuf[i] = readl(base + i*sizeof(u32)); 4017 spin_unlock_irq(&np->lock); 4018} 4019 4020static int nv_nway_reset(struct net_device *dev) 4021{ 4022 struct fe_priv *np = netdev_priv(dev); 4023 int ret; 4024 4025 if (np->autoneg) { 4026 int bmcr; 4027 4028 netif_carrier_off(dev); 4029 if (netif_running(dev)) { 4030 nv_disable_irq(dev); 4031 netif_tx_lock_bh(dev); 4032 spin_lock(&np->lock); 4033 /* stop engines */ 4034 nv_stop_rx(dev); 4035 nv_stop_tx(dev); 4036 spin_unlock(&np->lock); 4037 netif_tx_unlock_bh(dev); 4038 printk(KERN_INFO "%s: link down.\n", dev->name); 4039 } 4040 4041 bmcr = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ); 4042 if (np->phy_model == PHY_MODEL_MARVELL_E3016) { 4043 bmcr |= BMCR_ANENABLE; 4044 /* reset the phy in order for settings to stick */ 4045 if (phy_reset(dev, bmcr)) { 4046 printk(KERN_INFO "%s: phy reset failed\n", dev->name); 4047 return -EINVAL; 4048 } 4049 } else { 4050 bmcr |= (BMCR_ANENABLE | BMCR_ANRESTART); 4051 mii_rw(dev, np->phyaddr, MII_BMCR, bmcr); 4052 } 4053 4054 if (netif_running(dev)) { 4055 nv_start_rx(dev); 4056 nv_start_tx(dev); 4057 nv_enable_irq(dev); 4058 } 4059 ret = 0; 4060 } else { 4061 ret = -EINVAL; 4062 } 4063 4064 return ret; 4065} 4066 4067static int nv_set_tso(struct net_device *dev, u32 value) 4068{ 4069 struct fe_priv *np = netdev_priv(dev); 4070 4071 if ((np->driver_data & DEV_HAS_CHECKSUM)) 4072 return ethtool_op_set_tso(dev, value); 4073 else 4074 return -EOPNOTSUPP; 4075} 4076 4077static void nv_get_ringparam(struct net_device *dev, struct ethtool_ringparam* ring) 4078{ 4079 struct fe_priv *np = netdev_priv(dev); 4080 4081 ring->rx_max_pending = (np->desc_ver == DESC_VER_1) ? RING_MAX_DESC_VER_1 : RING_MAX_DESC_VER_2_3; 4082 ring->rx_mini_max_pending = 0; 4083 ring->rx_jumbo_max_pending = 0; 4084 ring->tx_max_pending = (np->desc_ver == DESC_VER_1) ? 
RING_MAX_DESC_VER_1 : RING_MAX_DESC_VER_2_3; 4085 4086 ring->rx_pending = np->rx_ring_size; 4087 ring->rx_mini_pending = 0; 4088 ring->rx_jumbo_pending = 0; 4089 ring->tx_pending = np->tx_ring_size; 4090} 4091 4092static int nv_set_ringparam(struct net_device *dev, struct ethtool_ringparam* ring) 4093{ 4094 struct fe_priv *np = netdev_priv(dev); 4095 u8 __iomem *base = get_hwbase(dev); 4096 u8 *rxtx_ring, *rx_skbuff, *tx_skbuff; 4097 dma_addr_t ring_addr; 4098 4099 if (ring->rx_pending < RX_RING_MIN || 4100 ring->tx_pending < TX_RING_MIN || 4101 ring->rx_mini_pending != 0 || 4102 ring->rx_jumbo_pending != 0 || 4103 (np->desc_ver == DESC_VER_1 && 4104 (ring->rx_pending > RING_MAX_DESC_VER_1 || 4105 ring->tx_pending > RING_MAX_DESC_VER_1)) || 4106 (np->desc_ver != DESC_VER_1 && 4107 (ring->rx_pending > RING_MAX_DESC_VER_2_3 || 4108 ring->tx_pending > RING_MAX_DESC_VER_2_3))) { 4109 return -EINVAL; 4110 } 4111 4112 /* allocate new rings */ 4113 if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) { 4114 rxtx_ring = pci_alloc_consistent(np->pci_dev, 4115 sizeof(struct ring_desc) * (ring->rx_pending + ring->tx_pending), 4116 &ring_addr); 4117 } else { 4118 rxtx_ring = pci_alloc_consistent(np->pci_dev, 4119 sizeof(struct ring_desc_ex) * (ring->rx_pending + ring->tx_pending), 4120 &ring_addr); 4121 } 4122 rx_skbuff = kmalloc(sizeof(struct nv_skb_map) * ring->rx_pending, GFP_KERNEL); 4123 tx_skbuff = kmalloc(sizeof(struct nv_skb_map) * ring->tx_pending, GFP_KERNEL); 4124 if (!rxtx_ring || !rx_skbuff || !tx_skbuff) { 4125 /* fall back to old rings */ 4126 if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) { 4127 if (rxtx_ring) 4128 pci_free_consistent(np->pci_dev, sizeof(struct ring_desc) * (ring->rx_pending + ring->tx_pending), 4129 rxtx_ring, ring_addr); 4130 } else { 4131 if (rxtx_ring) 4132 pci_free_consistent(np->pci_dev, sizeof(struct ring_desc_ex) * (ring->rx_pending + ring->tx_pending), 4133 rxtx_ring, ring_addr); 4134 } 4135 if (rx_skbuff) 4136 kfree(rx_skbuff); 4137 if (tx_skbuff) 4138 kfree(tx_skbuff); 4139 goto exit; 4140 } 4141 4142 if (netif_running(dev)) { 4143 nv_disable_irq(dev); 4144 netif_tx_lock_bh(dev); 4145 spin_lock(&np->lock); 4146 /* stop engines */ 4147 nv_stop_rx(dev); 4148 nv_stop_tx(dev); 4149 nv_txrx_reset(dev); 4150 /* drain queues */ 4151 nv_drain_rx(dev); 4152 nv_drain_tx(dev); 4153 /* delete queues */ 4154 free_rings(dev); 4155 } 4156 4157 /* set new values */ 4158 np->rx_ring_size = ring->rx_pending; 4159 np->tx_ring_size = ring->tx_pending; 4160 if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) { 4161 np->rx_ring.orig = (struct ring_desc*)rxtx_ring; 4162 np->tx_ring.orig = &np->rx_ring.orig[np->rx_ring_size]; 4163 } else { 4164 np->rx_ring.ex = (struct ring_desc_ex*)rxtx_ring; 4165 np->tx_ring.ex = &np->rx_ring.ex[np->rx_ring_size]; 4166 } 4167 np->rx_skb = (struct nv_skb_map*)rx_skbuff; 4168 np->tx_skb = (struct nv_skb_map*)tx_skbuff; 4169 np->ring_addr = ring_addr; 4170 4171 memset(np->rx_skb, 0, sizeof(struct nv_skb_map) * np->rx_ring_size); 4172 memset(np->tx_skb, 0, sizeof(struct nv_skb_map) * np->tx_ring_size); 4173 4174 if (netif_running(dev)) { 4175 /* reinit driver view of the queues */ 4176 set_bufsize(dev); 4177 if (nv_init_ring(dev)) { 4178 if (!np->in_shutdown) 4179 mod_timer(&np->oom_kick, jiffies + OOM_REFILL); 4180 } 4181 4182 /* reinit nic view of the queues */ 4183 writel(np->rx_buf_sz, base + NvRegOffloadConfig); 4184 setup_hw_rings(dev, NV_SETUP_RX_RING | NV_SETUP_TX_RING); 4185 writel( ((np->rx_ring_size-1) << 
NVREG_RINGSZ_RXSHIFT) + ((np->tx_ring_size-1) << NVREG_RINGSZ_TXSHIFT), 4186 base + NvRegRingSizes); 4187 pci_push(base); 4188 writel(NVREG_TXRXCTL_KICK|np->txrxctl_bits, get_hwbase(dev) + NvRegTxRxControl); 4189 pci_push(base); 4190 4191 /* restart engines */ 4192 nv_start_rx(dev); 4193 nv_start_tx(dev); 4194 spin_unlock(&np->lock); 4195 netif_tx_unlock_bh(dev); 4196 nv_enable_irq(dev); 4197 } 4198 return 0; 4199exit: 4200 return -ENOMEM; 4201} 4202 4203static void nv_get_pauseparam(struct net_device *dev, struct ethtool_pauseparam* pause) 4204{ 4205 struct fe_priv *np = netdev_priv(dev); 4206 4207 pause->autoneg = (np->pause_flags & NV_PAUSEFRAME_AUTONEG) != 0; 4208 pause->rx_pause = (np->pause_flags & NV_PAUSEFRAME_RX_ENABLE) != 0; 4209 pause->tx_pause = (np->pause_flags & NV_PAUSEFRAME_TX_ENABLE) != 0; 4210} 4211 4212static int nv_set_pauseparam(struct net_device *dev, struct ethtool_pauseparam* pause) 4213{ 4214 struct fe_priv *np = netdev_priv(dev); 4215 int adv, bmcr; 4216 4217 if ((!np->autoneg && np->duplex == 0) || 4218 (np->autoneg && !pause->autoneg && np->duplex == 0)) { 4219 printk(KERN_INFO "%s: cannot set pause settings when forced link is in half duplex.\n", 4220 dev->name); 4221 return -EINVAL; 4222 } 4223 if (pause->tx_pause && !(np->pause_flags & NV_PAUSEFRAME_TX_CAPABLE)) { 4224 printk(KERN_INFO "%s: hardware does not support tx pause frames.\n", dev->name); 4225 return -EINVAL; 4226 } 4227 4228 netif_carrier_off(dev); 4229 if (netif_running(dev)) { 4230 nv_disable_irq(dev); 4231 netif_tx_lock_bh(dev); 4232 spin_lock(&np->lock); 4233 /* stop engines */ 4234 nv_stop_rx(dev); 4235 nv_stop_tx(dev); 4236 spin_unlock(&np->lock); 4237 netif_tx_unlock_bh(dev); 4238 } 4239 4240 np->pause_flags &= ~(NV_PAUSEFRAME_RX_REQ|NV_PAUSEFRAME_TX_REQ); 4241 if (pause->rx_pause) 4242 np->pause_flags |= NV_PAUSEFRAME_RX_REQ; 4243 if (pause->tx_pause) 4244 np->pause_flags |= NV_PAUSEFRAME_TX_REQ; 4245 4246 if (np->autoneg && pause->autoneg) { 4247 np->pause_flags |= NV_PAUSEFRAME_AUTONEG; 4248 4249 adv = mii_rw(dev, np->phyaddr, MII_ADVERTISE, MII_READ); 4250 adv &= ~(ADVERTISE_PAUSE_CAP | ADVERTISE_PAUSE_ASYM); 4251 if (np->pause_flags & NV_PAUSEFRAME_RX_REQ) /* for rx we set both advertisements but disable tx pause */ 4252 adv |= ADVERTISE_PAUSE_CAP | ADVERTISE_PAUSE_ASYM; 4253 if (np->pause_flags & NV_PAUSEFRAME_TX_REQ) 4254 adv |= ADVERTISE_PAUSE_ASYM; 4255 mii_rw(dev, np->phyaddr, MII_ADVERTISE, adv); 4256 4257 if (netif_running(dev)) 4258 printk(KERN_INFO "%s: link down.\n", dev->name); 4259 bmcr = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ); 4260 bmcr |= (BMCR_ANENABLE | BMCR_ANRESTART); 4261 mii_rw(dev, np->phyaddr, MII_BMCR, bmcr); 4262 } else { 4263 np->pause_flags &= ~(NV_PAUSEFRAME_AUTONEG|NV_PAUSEFRAME_RX_ENABLE|NV_PAUSEFRAME_TX_ENABLE); 4264 if (pause->rx_pause) 4265 np->pause_flags |= NV_PAUSEFRAME_RX_ENABLE; 4266 if (pause->tx_pause) 4267 np->pause_flags |= NV_PAUSEFRAME_TX_ENABLE; 4268 4269 if (!netif_running(dev)) 4270 nv_update_linkspeed(dev); 4271 else 4272 nv_update_pause(dev, np->pause_flags); 4273 } 4274 4275 if (netif_running(dev)) { 4276 nv_start_rx(dev); 4277 nv_start_tx(dev); 4278 nv_enable_irq(dev); 4279 } 4280 return 0; 4281} 4282 4283static u32 nv_get_rx_csum(struct net_device *dev) 4284{ 4285 struct fe_priv *np = netdev_priv(dev); 4286 return (np->rx_csum) != 0; 4287} 4288 4289static int nv_set_rx_csum(struct net_device *dev, u32 data) 4290{ 4291 struct fe_priv *np = netdev_priv(dev); 4292 u8 __iomem *base = get_hwbase(dev); 4293 int retcode = 0; 4294 4295 if 
(np->driver_data & DEV_HAS_CHECKSUM) { 4296 if (data) { 4297 np->rx_csum = 1; 4298 np->txrxctl_bits |= NVREG_TXRXCTL_RXCHECK; 4299 } else { 4300 np->rx_csum = 0; 4301 /* vlan is dependent on rx checksum offload */ 4302 if (!(np->vlanctl_bits & NVREG_VLANCONTROL_ENABLE)) 4303 np->txrxctl_bits &= ~NVREG_TXRXCTL_RXCHECK; 4304 } 4305 if (netif_running(dev)) { 4306 spin_lock_irq(&np->lock); 4307 writel(np->txrxctl_bits, base + NvRegTxRxControl); 4308 spin_unlock_irq(&np->lock); 4309 } 4310 } else { 4311 return -EINVAL; 4312 } 4313 4314 return retcode; 4315} 4316 4317static int nv_set_tx_csum(struct net_device *dev, u32 data) 4318{ 4319 struct fe_priv *np = netdev_priv(dev); 4320 4321 if (np->driver_data & DEV_HAS_CHECKSUM) 4322 return ethtool_op_set_tx_hw_csum(dev, data); 4323 else 4324 return -EOPNOTSUPP; 4325} 4326 4327static int nv_set_sg(struct net_device *dev, u32 data) 4328{ 4329 struct fe_priv *np = netdev_priv(dev); 4330 4331 if (np->driver_data & DEV_HAS_CHECKSUM) 4332 return ethtool_op_set_sg(dev, data); 4333 else 4334 return -EOPNOTSUPP; 4335} 4336 4337static int nv_get_stats_count(struct net_device *dev) 4338{ 4339 struct fe_priv *np = netdev_priv(dev); 4340 4341 if (np->driver_data & DEV_HAS_STATISTICS_V1) 4342 return NV_DEV_STATISTICS_V1_COUNT; 4343 else if (np->driver_data & DEV_HAS_STATISTICS_V2) 4344 return NV_DEV_STATISTICS_V2_COUNT; 4345 else 4346 return 0; 4347} 4348 4349static void nv_get_ethtool_stats(struct net_device *dev, struct ethtool_stats *estats, u64 *buffer) 4350{ 4351 struct fe_priv *np = netdev_priv(dev); 4352 4353 /* update stats */ 4354 nv_do_stats_poll((unsigned long)dev); 4355 4356 memcpy(buffer, &np->estats, nv_get_stats_count(dev)*sizeof(u64)); 4357} 4358 4359static int nv_self_test_count(struct net_device *dev) 4360{ 4361 struct fe_priv *np = netdev_priv(dev); 4362 4363 if (np->driver_data & DEV_HAS_TEST_EXTENDED) 4364 return NV_TEST_COUNT_EXTENDED; 4365 else 4366 return NV_TEST_COUNT_BASE; 4367} 4368 4369static int nv_link_test(struct net_device *dev) 4370{ 4371 struct fe_priv *np = netdev_priv(dev); 4372 int mii_status; 4373 4374 mii_rw(dev, np->phyaddr, MII_BMSR, MII_READ); 4375 mii_status = mii_rw(dev, np->phyaddr, MII_BMSR, MII_READ); 4376 4377 /* check phy link status */ 4378 if (!(mii_status & BMSR_LSTATUS)) 4379 return 0; 4380 else 4381 return 1; 4382} 4383 4384static int nv_register_test(struct net_device *dev) 4385{ 4386 u8 __iomem *base = get_hwbase(dev); 4387 int i = 0; 4388 u32 orig_read, new_read; 4389 4390 do { 4391 orig_read = readl(base + nv_registers_test[i].reg); 4392 4393 /* xor with mask to toggle bits */ 4394 orig_read ^= nv_registers_test[i].mask; 4395 4396 writel(orig_read, base + nv_registers_test[i].reg); 4397 4398 new_read = readl(base + nv_registers_test[i].reg); 4399 4400 if ((new_read & nv_registers_test[i].mask) != (orig_read & nv_registers_test[i].mask)) 4401 return 0; 4402 4403 /* restore original value */ 4404 orig_read ^= nv_registers_test[i].mask; 4405 writel(orig_read, base + nv_registers_test[i].reg); 4406 4407 } while (nv_registers_test[++i].reg != 0); 4408 4409 return 1; 4410} 4411 4412static int nv_interrupt_test(struct net_device *dev) 4413{ 4414 struct fe_priv *np = netdev_priv(dev); 4415 u8 __iomem *base = get_hwbase(dev); 4416 int ret = 1; 4417 int testcnt; 4418 u32 save_msi_flags, save_poll_interval = 0; 4419 4420 if (netif_running(dev)) { 4421 /* free current irq */ 4422 nv_free_irq(dev); 4423 save_poll_interval = readl(base+NvRegPollingInterval); 4424 } 4425 4426 /* flag to test interrupt handler */ 4427 
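/* (nv_nic_irq_test sets np->intr_test to 1 under np->lock when the timer interrupt fires; if this flag is still zero after the msleep below, no interrupt was delivered) */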
np->intr_test = 0; 4428 4429 /* setup test irq */ 4430 save_msi_flags = np->msi_flags; 4431 np->msi_flags &= ~NV_MSI_X_VECTORS_MASK; 4432 np->msi_flags |= 0x001; /* setup 1 vector */ 4433 if (nv_request_irq(dev, 1)) 4434 return 0; 4435 4436 /* setup timer interrupt */ 4437 writel(NVREG_POLL_DEFAULT_CPU, base + NvRegPollingInterval); 4438 writel(NVREG_UNKSETUP6_VAL, base + NvRegUnknownSetupReg6); 4439 4440 nv_enable_hw_interrupts(dev, NVREG_IRQ_TIMER); 4441 4442 /* wait for at least one interrupt */ 4443 msleep(100); 4444 4445 spin_lock_irq(&np->lock); 4446 4447 /* flag should be set within ISR */ 4448 testcnt = np->intr_test; 4449 if (!testcnt) 4450 ret = 2; 4451 4452 nv_disable_hw_interrupts(dev, NVREG_IRQ_TIMER); 4453 if (!(np->msi_flags & NV_MSI_X_ENABLED)) 4454 writel(NVREG_IRQSTAT_MASK, base + NvRegIrqStatus); 4455 else 4456 writel(NVREG_IRQSTAT_MASK, base + NvRegMSIXIrqStatus); 4457 4458 spin_unlock_irq(&np->lock); 4459 4460 nv_free_irq(dev); 4461 4462 np->msi_flags = save_msi_flags; 4463 4464 if (netif_running(dev)) { 4465 writel(save_poll_interval, base + NvRegPollingInterval); 4466 writel(NVREG_UNKSETUP6_VAL, base + NvRegUnknownSetupReg6); 4467 /* restore original irq */ 4468 if (nv_request_irq(dev, 0)) 4469 return 0; 4470 } 4471 4472 return ret; 4473} 4474 4475static int nv_loopback_test(struct net_device *dev) 4476{ 4477 struct fe_priv *np = netdev_priv(dev); 4478 u8 __iomem *base = get_hwbase(dev); 4479 struct sk_buff *tx_skb, *rx_skb; 4480 dma_addr_t test_dma_addr; 4481 u32 tx_flags_extra = (np->desc_ver == DESC_VER_1 ? NV_TX_LASTPACKET : NV_TX2_LASTPACKET); 4482 u32 flags; 4483 int len, i, pkt_len; 4484 u8 *pkt_data; 4485 u32 filter_flags = 0; 4486 u32 misc1_flags = 0; 4487 int ret = 1; 4488 4489 if (netif_running(dev)) { 4490 nv_disable_irq(dev); 4491 filter_flags = readl(base + NvRegPacketFilterFlags); 4492 misc1_flags = readl(base + NvRegMisc1); 4493 } else { 4494 nv_txrx_reset(dev); 4495 } 4496 4497 /* reinit driver view of the rx queue */ 4498 set_bufsize(dev); 4499 nv_init_ring(dev); 4500 4501 /* setup hardware for loopback */ 4502 writel(NVREG_MISC1_FORCE, base + NvRegMisc1); 4503 writel(NVREG_PFF_ALWAYS | NVREG_PFF_LOOPBACK, base + NvRegPacketFilterFlags); 4504 4505 /* reinit nic view of the rx queue */ 4506 writel(np->rx_buf_sz, base + NvRegOffloadConfig); 4507 setup_hw_rings(dev, NV_SETUP_RX_RING | NV_SETUP_TX_RING); 4508 writel( ((np->rx_ring_size-1) << NVREG_RINGSZ_RXSHIFT) + ((np->tx_ring_size-1) << NVREG_RINGSZ_TXSHIFT), 4509 base + NvRegRingSizes); 4510 pci_push(base); 4511 4512 /* restart rx engine */ 4513 nv_start_rx(dev); 4514 nv_start_tx(dev); 4515 4516 /* setup packet for tx */ 4517 pkt_len = ETH_DATA_LEN; 4518 tx_skb = dev_alloc_skb(pkt_len); 4519 if (!tx_skb) { 4520 printk(KERN_ERR "dev_alloc_skb() failed during loopback test" 4521 " of %s\n", dev->name); 4522 ret = 0; 4523 goto out; 4524 } 4525 test_dma_addr = pci_map_single(np->pci_dev, tx_skb->data, 4526 skb_tailroom(tx_skb), 4527 PCI_DMA_TODEVICE); 4528 pkt_data = skb_put(tx_skb, pkt_len); 4529 for (i = 0; i < pkt_len; i++) 4530 pkt_data[i] = (u8)(i & 0xff); 4531 4532 if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) { 4533 np->tx_ring.orig[0].buf = cpu_to_le32(test_dma_addr); 4534 np->tx_ring.orig[0].flaglen = cpu_to_le32((pkt_len-1) | np->tx_flags | tx_flags_extra); 4535 } else { 4536 np->tx_ring.ex[0].bufhigh = cpu_to_le64(test_dma_addr) >> 32; 4537 np->tx_ring.ex[0].buflow = cpu_to_le64(test_dma_addr) & 0x0FFFFFFFF; 4538 np->tx_ring.ex[0].flaglen = cpu_to_le32((pkt_len-1) | 
np->tx_flags | tx_flags_extra); 4539 } 4540 writel(NVREG_TXRXCTL_KICK|np->txrxctl_bits, get_hwbase(dev) + NvRegTxRxControl); 4541 pci_push(get_hwbase(dev)); 4542 4543 msleep(500); 4544 4545 /* check for rx of the packet */ 4546 if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) { 4547 flags = le32_to_cpu(np->rx_ring.orig[0].flaglen); 4548 len = nv_descr_getlength(&np->rx_ring.orig[0], np->desc_ver); 4549 4550 } else { 4551 flags = le32_to_cpu(np->rx_ring.ex[0].flaglen); 4552 len = nv_descr_getlength_ex(&np->rx_ring.ex[0], np->desc_ver); 4553 } 4554 4555 if (flags & NV_RX_AVAIL) { 4556 ret = 0; 4557 } else if (np->desc_ver == DESC_VER_1) { 4558 if (flags & NV_RX_ERROR) 4559 ret = 0; 4560 } else { 4561 if (flags & NV_RX2_ERROR) { 4562 ret = 0; 4563 } 4564 } 4565 4566 if (ret) { 4567 if (len != pkt_len) { 4568 ret = 0; 4569 dprintk(KERN_DEBUG "%s: loopback len mismatch %d vs %d\n", 4570 dev->name, len, pkt_len); 4571 } else { 4572 rx_skb = np->rx_skb[0].skb; 4573 for (i = 0; i < pkt_len; i++) { 4574 if (rx_skb->data[i] != (u8)(i & 0xff)) { 4575 ret = 0; 4576 dprintk(KERN_DEBUG "%s: loopback pattern check failed on byte %d\n", 4577 dev->name, i); 4578 break; 4579 } 4580 } 4581 } 4582 } else { 4583 dprintk(KERN_DEBUG "%s: loopback - did not receive test packet\n", dev->name); 4584 } 4585 4586 pci_unmap_single(np->pci_dev, test_dma_addr, 4587 (skb_end_pointer(tx_skb) - tx_skb->data), 4588 PCI_DMA_TODEVICE); 4589 dev_kfree_skb_any(tx_skb); 4590 out: 4591 /* stop engines */ 4592 nv_stop_rx(dev); 4593 nv_stop_tx(dev); 4594 nv_txrx_reset(dev); 4595 /* drain rx queue */ 4596 nv_drain_rx(dev); 4597 nv_drain_tx(dev); 4598 4599 if (netif_running(dev)) { 4600 writel(misc1_flags, base + NvRegMisc1); 4601 writel(filter_flags, base + NvRegPacketFilterFlags); 4602 nv_enable_irq(dev); 4603 } 4604 4605 return ret; 4606} 4607 4608static void nv_self_test(struct net_device *dev, struct ethtool_test *test, u64 *buffer) 4609{ 4610 struct fe_priv *np = netdev_priv(dev); 4611 u8 __iomem *base = get_hwbase(dev); 4612 int result; 4613 memset(buffer, 0, nv_self_test_count(dev)*sizeof(u64)); 4614 4615 if (!nv_link_test(dev)) { 4616 test->flags |= ETH_TEST_FL_FAILED; 4617 buffer[0] = 1; 4618 } 4619 4620 if (test->flags & ETH_TEST_FL_OFFLINE) { 4621 if (netif_running(dev)) { 4622 netif_stop_queue(dev); 4623 netif_poll_disable(dev); 4624 netif_tx_lock_bh(dev); 4625 spin_lock_irq(&np->lock); 4626 nv_disable_hw_interrupts(dev, np->irqmask); 4627 if (!(np->msi_flags & NV_MSI_X_ENABLED)) { 4628 writel(NVREG_IRQSTAT_MASK, base + NvRegIrqStatus); 4629 } else { 4630 writel(NVREG_IRQSTAT_MASK, base + NvRegMSIXIrqStatus); 4631 } 4632 /* stop engines */ 4633 nv_stop_rx(dev); 4634 nv_stop_tx(dev); 4635 nv_txrx_reset(dev); 4636 /* drain rx queue */ 4637 nv_drain_rx(dev); 4638 nv_drain_tx(dev); 4639 spin_unlock_irq(&np->lock); 4640 netif_tx_unlock_bh(dev); 4641 } 4642 4643 if (!nv_register_test(dev)) { 4644 test->flags |= ETH_TEST_FL_FAILED; 4645 buffer[1] = 1; 4646 } 4647 4648 result = nv_interrupt_test(dev); 4649 if (result != 1) { 4650 test->flags |= ETH_TEST_FL_FAILED; 4651 buffer[2] = 1; 4652 } 4653 if (result == 0) { 4654 /* bail out */ 4655 return; 4656 } 4657 4658 if (!nv_loopback_test(dev)) { 4659 test->flags |= ETH_TEST_FL_FAILED; 4660 buffer[3] = 1; 4661 } 4662 4663 if (netif_running(dev)) { 4664 /* reinit driver view of the rx queue */ 4665 set_bufsize(dev); 4666 if (nv_init_ring(dev)) { 4667 if (!np->in_shutdown) 4668 mod_timer(&np->oom_kick, jiffies + OOM_REFILL); 4669 } 4670 /* reinit nic view of the rx queue */ 
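/* (buffer size, ring base addresses and ring sizes must be re-posted to the hardware after nv_txrx_reset, before the engines are restarted) */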
4671 writel(np->rx_buf_sz, base + NvRegOffloadConfig); 4672 setup_hw_rings(dev, NV_SETUP_RX_RING | NV_SETUP_TX_RING); 4673 writel( ((np->rx_ring_size-1) << NVREG_RINGSZ_RXSHIFT) + ((np->tx_ring_size-1) << NVREG_RINGSZ_TXSHIFT), 4674 base + NvRegRingSizes); 4675 pci_push(base); 4676 writel(NVREG_TXRXCTL_KICK|np->txrxctl_bits, get_hwbase(dev) + NvRegTxRxControl); 4677 pci_push(base); 4678 /* restart rx engine */ 4679 nv_start_rx(dev); 4680 nv_start_tx(dev); 4681 netif_start_queue(dev); 4682 netif_poll_enable(dev); 4683 nv_enable_hw_interrupts(dev, np->irqmask); 4684 } 4685 } 4686} 4687 4688static void nv_get_strings(struct net_device *dev, u32 stringset, u8 *buffer) 4689{ 4690 switch (stringset) { 4691 case ETH_SS_STATS: 4692 memcpy(buffer, &nv_estats_str, nv_get_stats_count(dev)*sizeof(struct nv_ethtool_str)); 4693 break; 4694 case ETH_SS_TEST: 4695 memcpy(buffer, &nv_etests_str, nv_self_test_count(dev)*sizeof(struct nv_ethtool_str)); 4696 break; 4697 } 4698} 4699 4700static const struct ethtool_ops ops = { 4701 .get_drvinfo = nv_get_drvinfo, 4702 .get_link = ethtool_op_get_link, 4703 .get_wol = nv_get_wol, 4704 .set_wol = nv_set_wol, 4705 .get_settings = nv_get_settings, 4706 .set_settings = nv_set_settings, 4707 .get_regs_len = nv_get_regs_len, 4708 .get_regs = nv_get_regs, 4709 .nway_reset = nv_nway_reset, 4710 .get_tso = ethtool_op_get_tso, 4711 .set_tso = nv_set_tso, 4712 .get_ringparam = nv_get_ringparam, 4713 .set_ringparam = nv_set_ringparam, 4714 .get_pauseparam = nv_get_pauseparam, 4715 .set_pauseparam = nv_set_pauseparam, 4716 .get_rx_csum = nv_get_rx_csum, 4717 .set_rx_csum = nv_set_rx_csum, 4718 .get_tx_csum = ethtool_op_get_tx_csum, 4719 .set_tx_csum = nv_set_tx_csum, 4720 .get_sg = ethtool_op_get_sg, 4721 .set_sg = nv_set_sg, 4722 .get_strings = nv_get_strings, 4723 .get_stats_count = nv_get_stats_count, 4724 .get_ethtool_stats = nv_get_ethtool_stats, 4725 .self_test_count = nv_self_test_count, 4726 .self_test = nv_self_test, 4727}; 4728 4729static void nv_vlan_rx_register(struct net_device *dev, struct vlan_group *grp) 4730{ 4731 struct fe_priv *np = get_nvpriv(dev); 4732 4733 spin_lock_irq(&np->lock); 4734 4735 /* save vlan group */ 4736 np->vlangrp = grp; 4737 4738 if (grp) { 4739 /* enable vlan on MAC */ 4740 np->txrxctl_bits |= NVREG_TXRXCTL_VLANSTRIP | NVREG_TXRXCTL_VLANINS; 4741 } else { 4742 /* disable vlan on MAC */ 4743 np->txrxctl_bits &= ~NVREG_TXRXCTL_VLANSTRIP; 4744 np->txrxctl_bits &= ~NVREG_TXRXCTL_VLANINS; 4745 } 4746 4747 writel(np->txrxctl_bits, get_hwbase(dev) + NvRegTxRxControl); 4748 4749 spin_unlock_irq(&np->lock); 4750} 4751 4752/* The mgmt unit and driver use a semaphore to access the phy during init */ 4753static int nv_mgmt_acquire_sema(struct net_device *dev) 4754{ 4755 u8 __iomem *base = get_hwbase(dev); 4756 int i; 4757 u32 tx_ctrl, mgmt_sema; 4758 4759 for (i = 0; i < 10; i++) { 4760 mgmt_sema = readl(base + NvRegTransmitterControl) & NVREG_XMITCTL_MGMT_SEMA_MASK; 4761 if (mgmt_sema == NVREG_XMITCTL_MGMT_SEMA_FREE) 4762 break; 4763 msleep(500); 4764 } 4765 4766 if (mgmt_sema != NVREG_XMITCTL_MGMT_SEMA_FREE) 4767 return 0; 4768 4769 for (i = 0; i < 2; i++) { 4770 tx_ctrl = readl(base + NvRegTransmitterControl); 4771 tx_ctrl |= NVREG_XMITCTL_HOST_SEMA_ACQ; 4772 writel(tx_ctrl, base + NvRegTransmitterControl); 4773 4774 /* verify that semaphore was acquired */ 4775 tx_ctrl = readl(base + NvRegTransmitterControl); 4776 if (((tx_ctrl & NVREG_XMITCTL_HOST_SEMA_MASK) == NVREG_XMITCTL_HOST_SEMA_ACQ) && 4777 ((tx_ctrl & NVREG_XMITCTL_MGMT_SEMA_MASK) == 
NVREG_XMITCTL_MGMT_SEMA_FREE)) 4778 return 1; 4779 else 4780 udelay(50); 4781 } 4782 4783 return 0; 4784} 4785 4786static int nv_open(struct net_device *dev) 4787{ 4788 struct fe_priv *np = netdev_priv(dev); 4789 u8 __iomem *base = get_hwbase(dev); 4790 int ret = 1; 4791 int oom, i; 4792 4793 dprintk(KERN_DEBUG "nv_open: begin\n"); 4794 4795 /* erase previous misconfiguration */ 4796 if (np->driver_data & DEV_HAS_POWER_CNTRL) 4797 nv_mac_reset(dev); 4798 writel(NVREG_MCASTADDRA_FORCE, base + NvRegMulticastAddrA); 4799 writel(0, base + NvRegMulticastAddrB); 4800 writel(0, base + NvRegMulticastMaskA); 4801 writel(0, base + NvRegMulticastMaskB); 4802 writel(0, base + NvRegPacketFilterFlags); 4803 4804 writel(0, base + NvRegTransmitterControl); 4805 writel(0, base + NvRegReceiverControl); 4806 4807 writel(0, base + NvRegAdapterControl); 4808 4809 if (np->pause_flags & NV_PAUSEFRAME_TX_CAPABLE) 4810 writel(NVREG_TX_PAUSEFRAME_DISABLE, base + NvRegTxPauseFrame); 4811 4812 /* initialize descriptor rings */ 4813 set_bufsize(dev); 4814 oom = nv_init_ring(dev); 4815 4816 writel(0, base + NvRegLinkSpeed); 4817 writel(readl(base + NvRegTransmitPoll) & NVREG_TRANSMITPOLL_MAC_ADDR_REV, base + NvRegTransmitPoll); 4818 nv_txrx_reset(dev); 4819 writel(0, base + NvRegUnknownSetupReg6); 4820 4821 np->in_shutdown = 0; 4822 4823 /* give hw rings */ 4824 setup_hw_rings(dev, NV_SETUP_RX_RING | NV_SETUP_TX_RING); 4825 writel( ((np->rx_ring_size-1) << NVREG_RINGSZ_RXSHIFT) + ((np->tx_ring_size-1) << NVREG_RINGSZ_TXSHIFT), 4826 base + NvRegRingSizes); 4827 4828 writel(np->linkspeed, base + NvRegLinkSpeed); 4829 if (np->desc_ver == DESC_VER_1) 4830 writel(NVREG_TX_WM_DESC1_DEFAULT, base + NvRegTxWatermark); 4831 else 4832 writel(NVREG_TX_WM_DESC2_3_DEFAULT, base + NvRegTxWatermark); 4833 writel(np->txrxctl_bits, base + NvRegTxRxControl); 4834 writel(np->vlanctl_bits, base + NvRegVlanControl); 4835 pci_push(base); 4836 writel(NVREG_TXRXCTL_BIT1|np->txrxctl_bits, base + NvRegTxRxControl); 4837 reg_delay(dev, NvRegUnknownSetupReg5, NVREG_UNKSETUP5_BIT31, NVREG_UNKSETUP5_BIT31, 4838 NV_SETUP5_DELAY, NV_SETUP5_DELAYMAX, 4839 KERN_INFO "open: SetupReg5, Bit 31 remained off\n"); 4840 4841 writel(0, base + NvRegMIIMask); 4842 writel(NVREG_IRQSTAT_MASK, base + NvRegIrqStatus); 4843 writel(NVREG_MIISTAT_MASK2, base + NvRegMIIStatus); 4844 4845 writel(NVREG_MISC1_FORCE | NVREG_MISC1_HD, base + NvRegMisc1); 4846 writel(readl(base + NvRegTransmitterStatus), base + NvRegTransmitterStatus); 4847 writel(NVREG_PFF_ALWAYS, base + NvRegPacketFilterFlags); 4848 writel(np->rx_buf_sz, base + NvRegOffloadConfig); 4849 4850 writel(readl(base + NvRegReceiverStatus), base + NvRegReceiverStatus); 4851 get_random_bytes(&i, sizeof(i)); 4852 writel(NVREG_RNDSEED_FORCE | (i&NVREG_RNDSEED_MASK), base + NvRegRandomSeed); 4853 writel(NVREG_TX_DEFERRAL_DEFAULT, base + NvRegTxDeferral); 4854 writel(NVREG_RX_DEFERRAL_DEFAULT, base + NvRegRxDeferral); 4855 if (poll_interval == -1) { 4856 if (optimization_mode == NV_OPTIMIZATION_MODE_THROUGHPUT) 4857 writel(NVREG_POLL_DEFAULT_THROUGHPUT, base + NvRegPollingInterval); 4858 else 4859 writel(NVREG_POLL_DEFAULT_CPU, base + NvRegPollingInterval); 4860 } 4861 else 4862 writel(poll_interval & 0xFFFF, base + NvRegPollingInterval); 4863 writel(NVREG_UNKSETUP6_VAL, base + NvRegUnknownSetupReg6); 4864 writel((np->phyaddr << NVREG_ADAPTCTL_PHYSHIFT)|NVREG_ADAPTCTL_PHYVALID|NVREG_ADAPTCTL_RUNNING, 4865 base + NvRegAdapterControl); 4866 writel(NVREG_MIISPEED_BIT8|NVREG_MIIDELAY, base + NvRegMIISpeed); 4867 
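/* unmask only PHY link-change events; all other MII interrupts stay masked */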
writel(NVREG_MII_LINKCHANGE, base + NvRegMIIMask); 4868 if (np->wolenabled) 4869 writel(NVREG_WAKEUPFLAGS_ENABLE , base + NvRegWakeUpFlags); 4870 4871 i = readl(base + NvRegPowerState); 4872 if ( (i & NVREG_POWERSTATE_POWEREDUP) == 0) 4873 writel(NVREG_POWERSTATE_POWEREDUP|i, base + NvRegPowerState); 4874 4875 pci_push(base); 4876 udelay(10); 4877 writel(readl(base + NvRegPowerState) | NVREG_POWERSTATE_VALID, base + NvRegPowerState); 4878 4879 nv_disable_hw_interrupts(dev, np->irqmask); 4880 pci_push(base); 4881 writel(NVREG_MIISTAT_MASK2, base + NvRegMIIStatus); 4882 writel(NVREG_IRQSTAT_MASK, base + NvRegIrqStatus); 4883 pci_push(base); 4884 4885 if (nv_request_irq(dev, 0)) { 4886 goto out_drain; 4887 } 4888 4889 /* ask for interrupts */ 4890 nv_enable_hw_interrupts(dev, np->irqmask); 4891 4892 spin_lock_irq(&np->lock); 4893 writel(NVREG_MCASTADDRA_FORCE, base + NvRegMulticastAddrA); 4894 writel(0, base + NvRegMulticastAddrB); 4895 writel(0, base + NvRegMulticastMaskA); 4896 writel(0, base + NvRegMulticastMaskB); 4897 writel(NVREG_PFF_ALWAYS|NVREG_PFF_MYADDR, base + NvRegPacketFilterFlags); 4898 /* One manual link speed update: Interrupts are enabled, future link 4899 * speed changes cause interrupts and are handled by nv_link_irq(). 4900 */ 4901 { 4902 u32 miistat; 4903 miistat = readl(base + NvRegMIIStatus); 4904 writel(NVREG_MIISTAT_MASK, base + NvRegMIIStatus); 4905 dprintk(KERN_INFO "startup: got 0x%08x.\n", miistat); 4906 } 4907 /* set linkspeed to invalid value, thus force nv_update_linkspeed 4908 * to init hw */ 4909 np->linkspeed = 0; 4910 ret = nv_update_linkspeed(dev); 4911 nv_start_rx(dev); 4912 nv_start_tx(dev); 4913 netif_start_queue(dev); 4914 netif_poll_enable(dev); 4915 4916 if (ret) { 4917 netif_carrier_on(dev); 4918 } else { 4919 printk("%s: no link during initialization.\n", dev->name); 4920 netif_carrier_off(dev); 4921 } 4922 if (oom) 4923 mod_timer(&np->oom_kick, jiffies + OOM_REFILL); 4924 4925 /* start statistics timer */ 4926 if (np->driver_data & (DEV_HAS_STATISTICS_V1|DEV_HAS_STATISTICS_V2)) 4927 mod_timer(&np->stats_poll, jiffies + STATS_INTERVAL); 4928 4929 spin_unlock_irq(&np->lock); 4930 4931 return 0; 4932out_drain: 4933 drain_ring(dev); 4934 return ret; 4935} 4936 4937static int nv_close(struct net_device *dev) 4938{ 4939 struct fe_priv *np = netdev_priv(dev); 4940 u8 __iomem *base; 4941 4942 spin_lock_irq(&np->lock); 4943 np->in_shutdown = 1; 4944 spin_unlock_irq(&np->lock); 4945 netif_poll_disable(dev); 4946 synchronize_irq(dev->irq); 4947 4948 del_timer_sync(&np->oom_kick); 4949 del_timer_sync(&np->nic_poll); 4950 del_timer_sync(&np->stats_poll); 4951 4952 netif_stop_queue(dev); 4953 spin_lock_irq(&np->lock); 4954 nv_stop_tx(dev); 4955 nv_stop_rx(dev); 4956 nv_txrx_reset(dev); 4957 4958 /* disable interrupts on the nic or we will lock up */ 4959 base = get_hwbase(dev); 4960 nv_disable_hw_interrupts(dev, np->irqmask); 4961 pci_push(base); 4962 dprintk(KERN_INFO "%s: Irqmask is zero again\n", dev->name); 4963 4964 spin_unlock_irq(&np->lock); 4965 4966 nv_free_irq(dev); 4967 4968 drain_ring(dev); 4969 4970 if (np->wolenabled) { 4971 writel(NVREG_PFF_ALWAYS|NVREG_PFF_MYADDR, base + NvRegPacketFilterFlags); 4972 nv_start_rx(dev); 4973 } 4974 4975 /* FIXME: power down nic */ 4976 4977 return 0; 4978} 4979 4980static int __devinit nv_probe(struct pci_dev *pci_dev, const struct pci_device_id *id) 4981{ 4982 struct net_device *dev; 4983 struct fe_priv *np; 4984 unsigned long addr; 4985 u8 __iomem *base; 4986 int err, i; 4987 u32 powerstate, txreg; 4988 u32 
phystate_orig = 0, phystate; 4989 int phyinitialized = 0; 4990 4991 dev = alloc_etherdev(sizeof(struct fe_priv)); 4992 err = -ENOMEM; 4993 if (!dev) 4994 goto out; 4995 4996 np = netdev_priv(dev); 4997 np->pci_dev = pci_dev; 4998 spin_lock_init(&np->lock); 4999 SET_MODULE_OWNER(dev); 5000 SET_NETDEV_DEV(dev, &pci_dev->dev); 5001 5002 init_timer(&np->oom_kick); 5003 np->oom_kick.data = (unsigned long) dev; 5004 np->oom_kick.function = &nv_do_rx_refill; /* timer handler */ 5005 init_timer(&np->nic_poll); 5006 np->nic_poll.data = (unsigned long) dev; 5007 np->nic_poll.function = &nv_do_nic_poll; /* timer handler */ 5008 init_timer(&np->stats_poll); 5009 np->stats_poll.data = (unsigned long) dev; 5010 np->stats_poll.function = &nv_do_stats_poll; /* timer handler */ 5011 5012 err = pci_enable_device(pci_dev); 5013 if (err) { 5014 printk(KERN_INFO "forcedeth: pci_enable_dev failed (%d) for device %s\n", 5015 err, pci_name(pci_dev)); 5016 goto out_free; 5017 } 5018 5019 pci_set_master(pci_dev); 5020 5021 err = pci_request_regions(pci_dev, DRV_NAME); 5022 if (err < 0) 5023 goto out_disable; 5024 5025 if (id->driver_data & (DEV_HAS_VLAN|DEV_HAS_MSI_X|DEV_HAS_POWER_CNTRL|DEV_HAS_STATISTICS_V2)) 5026 np->register_size = NV_PCI_REGSZ_VER3; 5027 else if (id->driver_data & DEV_HAS_STATISTICS_V1) 5028 np->register_size = NV_PCI_REGSZ_VER2; 5029 else 5030 np->register_size = NV_PCI_REGSZ_VER1; 5031 5032 err = -EINVAL; 5033 addr = 0; 5034 for (i = 0; i < DEVICE_COUNT_RESOURCE; i++) { 5035 dprintk(KERN_DEBUG "%s: resource %d start %p len %ld flags 0x%08lx.\n", 5036 pci_name(pci_dev), i, (void*)pci_resource_start(pci_dev, i), 5037 pci_resource_len(pci_dev, i), 5038 pci_resource_flags(pci_dev, i)); 5039 if (pci_resource_flags(pci_dev, i) & IORESOURCE_MEM && 5040 pci_resource_len(pci_dev, i) >= np->register_size) { 5041 addr = pci_resource_start(pci_dev, i); 5042 break; 5043 } 5044 } 5045 if (i == DEVICE_COUNT_RESOURCE) { 5046 printk(KERN_INFO "forcedeth: Couldn't find register window for device %s.\n", 5047 pci_name(pci_dev)); 5048 goto out_relreg; 5049 } 5050 5051 /* copy of driver data */ 5052 np->driver_data = id->driver_data; 5053 5054 /* handle different descriptor versions */ 5055 if (id->driver_data & DEV_HAS_HIGH_DMA) { 5056 /* packet format 3: supports 40-bit addressing */ 5057 np->desc_ver = DESC_VER_3; 5058 np->txrxctl_bits = NVREG_TXRXCTL_DESC_3; 5059 if (dma_64bit) { 5060 if (pci_set_dma_mask(pci_dev, DMA_39BIT_MASK)) { 5061 printk(KERN_INFO "forcedeth: 64-bit DMA failed, using 32-bit addressing for device %s.\n", 5062 pci_name(pci_dev)); 5063 } else { 5064 dev->features |= NETIF_F_HIGHDMA; 5065 printk(KERN_INFO "forcedeth: using HIGHDMA\n"); 5066 } 5067 if (pci_set_consistent_dma_mask(pci_dev, DMA_39BIT_MASK)) { 5068 printk(KERN_INFO "forcedeth: 64-bit DMA (consistent) failed, using 32-bit ring buffers for device %s.\n", 5069 pci_name(pci_dev)); 5070 } 5071 } 5072 } else if (id->driver_data & DEV_HAS_LARGEDESC) { 5073 /* packet format 2: supports jumbo frames */ 5074 np->desc_ver = DESC_VER_2; 5075 np->txrxctl_bits = NVREG_TXRXCTL_DESC_2; 5076 } else { 5077 /* original packet format */ 5078 np->desc_ver = DESC_VER_1; 5079 np->txrxctl_bits = NVREG_TXRXCTL_DESC_1; 5080 } 5081 5082 np->pkt_limit = NV_PKTLIMIT_1; 5083 if (id->driver_data & DEV_HAS_LARGEDESC) 5084 np->pkt_limit = NV_PKTLIMIT_2; 5085 5086 if (id->driver_data & DEV_HAS_CHECKSUM) { 5087 np->rx_csum = 1; 5088 np->txrxctl_bits |= NVREG_TXRXCTL_RXCHECK; 5089 dev->features |= NETIF_F_HW_CSUM | NETIF_F_SG; 5090 dev->features |= NETIF_F_TSO; 
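/* (checksum-capable nics also get scatter-gather and TSO enabled) */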
5091 } 5092 5093 np->vlanctl_bits = 0; 5094 if (id->driver_data & DEV_HAS_VLAN) { 5095 np->vlanctl_bits = NVREG_VLANCONTROL_ENABLE; 5096 dev->features |= NETIF_F_HW_VLAN_RX | NETIF_F_HW_VLAN_TX; 5097 dev->vlan_rx_register = nv_vlan_rx_register; 5098 } 5099 5100 np->msi_flags = 0; 5101 if ((id->driver_data & DEV_HAS_MSI) && msi) { 5102 np->msi_flags |= NV_MSI_CAPABLE; 5103 } 5104 if ((id->driver_data & DEV_HAS_MSI_X) && msix) { 5105 np->msi_flags |= NV_MSI_X_CAPABLE; 5106 } 5107 5108 np->pause_flags = NV_PAUSEFRAME_RX_CAPABLE | NV_PAUSEFRAME_RX_REQ | NV_PAUSEFRAME_AUTONEG; 5109 if (id->driver_data & DEV_HAS_PAUSEFRAME_TX) { 5110 np->pause_flags |= NV_PAUSEFRAME_TX_CAPABLE | NV_PAUSEFRAME_TX_REQ; 5111 } 5112 5113 5114 err = -ENOMEM; 5115 np->base = ioremap(addr, np->register_size); 5116 if (!np->base) 5117 goto out_relreg; 5118 dev->base_addr = (unsigned long)np->base; 5119 5120 dev->irq = pci_dev->irq; 5121 5122 np->rx_ring_size = RX_RING_DEFAULT; 5123 np->tx_ring_size = TX_RING_DEFAULT; 5124 5125 if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) { 5126 np->rx_ring.orig = pci_alloc_consistent(pci_dev, 5127 sizeof(struct ring_desc) * (np->rx_ring_size + np->tx_ring_size), 5128 &np->ring_addr); 5129 if (!np->rx_ring.orig) 5130 goto out_unmap; 5131 np->tx_ring.orig = &np->rx_ring.orig[np->rx_ring_size]; 5132 } else { 5133 np->rx_ring.ex = pci_alloc_consistent(pci_dev, 5134 sizeof(struct ring_desc_ex) * (np->rx_ring_size + np->tx_ring_size), 5135 &np->ring_addr); 5136 if (!np->rx_ring.ex) 5137 goto out_unmap; 5138 np->tx_ring.ex = &np->rx_ring.ex[np->rx_ring_size]; 5139 } 5140 np->rx_skb = kcalloc(np->rx_ring_size, sizeof(struct nv_skb_map), GFP_KERNEL); 5141 np->tx_skb = kcalloc(np->tx_ring_size, sizeof(struct nv_skb_map), GFP_KERNEL); 5142 if (!np->rx_skb || !np->tx_skb) 5143 goto out_freering; 5144 5145 dev->open = nv_open; 5146 dev->stop = nv_close; 5147 if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) 5148 dev->hard_start_xmit = nv_start_xmit; 5149 else 5150 dev->hard_start_xmit = nv_start_xmit_optimized; 5151 dev->get_stats = nv_get_stats; 5152 dev->change_mtu = nv_change_mtu; 5153 dev->set_mac_address = nv_set_mac_address; 5154 dev->set_multicast_list = nv_set_multicast; 5155#ifdef CONFIG_NET_POLL_CONTROLLER 5156 dev->poll_controller = nv_poll_controller; 5157#endif 5158 dev->weight = RX_WORK_PER_LOOP; 5159#ifdef CONFIG_FORCEDETH_NAPI 5160 dev->poll = nv_napi_poll; 5161#endif 5162 SET_ETHTOOL_OPS(dev, &ops); 5163 dev->tx_timeout = nv_tx_timeout; 5164 dev->watchdog_timeo = NV_WATCHDOG_TIMEO; 5165 5166 pci_set_drvdata(pci_dev, dev); 5167 5168 /* read the mac address */ 5169 base = get_hwbase(dev); 5170 np->orig_mac[0] = readl(base + NvRegMacAddrA); 5171 np->orig_mac[1] = readl(base + NvRegMacAddrB); 5172 5173 /* check the workaround bit for correct mac address order */ 5174 txreg = readl(base + NvRegTransmitPoll); 5175 if ((txreg & NVREG_TRANSMITPOLL_MAC_ADDR_REV) || 5176 (id->driver_data & DEV_HAS_CORRECT_MACADDR)) { 5177 /* mac address is already in correct order */ 5178 dev->dev_addr[0] = (np->orig_mac[0] >> 0) & 0xff; 5179 dev->dev_addr[1] = (np->orig_mac[0] >> 8) & 0xff; 5180 dev->dev_addr[2] = (np->orig_mac[0] >> 16) & 0xff; 5181 dev->dev_addr[3] = (np->orig_mac[0] >> 24) & 0xff; 5182 dev->dev_addr[4] = (np->orig_mac[1] >> 0) & 0xff; 5183 dev->dev_addr[5] = (np->orig_mac[1] >> 8) & 0xff; 5184 } else { 5185 /* need to reverse mac address to correct order */ 5186 dev->dev_addr[0] = (np->orig_mac[1] >> 8) & 0xff; 5187 dev->dev_addr[1] = (np->orig_mac[1] >> 0) & 
0xff; 5188 dev->dev_addr[2] = (np->orig_mac[0] >> 24) & 0xff; 5189 dev->dev_addr[3] = (np->orig_mac[0] >> 16) & 0xff; 5190 dev->dev_addr[4] = (np->orig_mac[0] >> 8) & 0xff; 5191 dev->dev_addr[5] = (np->orig_mac[0] >> 0) & 0xff; 5192 /* set permanent address to be correct as well */ 5193 np->orig_mac[0] = (dev->dev_addr[0] << 0) + (dev->dev_addr[1] << 8) + 5194 (dev->dev_addr[2] << 16) + (dev->dev_addr[3] << 24); 5195 np->orig_mac[1] = (dev->dev_addr[4] << 0) + (dev->dev_addr[5] << 8); 5196 writel(txreg|NVREG_TRANSMITPOLL_MAC_ADDR_REV, base + NvRegTransmitPoll); 5197 } 5198 memcpy(dev->perm_addr, dev->dev_addr, dev->addr_len); 5199 5200 if (!is_valid_ether_addr(dev->perm_addr)) { 5201 /* 5202 * Bad mac address. At least one bios sets the mac address 5203 * to 01:23:45:67:89:ab 5204 */ 5205 printk(KERN_ERR "%s: Invalid Mac address detected: %02x:%02x:%02x:%02x:%02x:%02x\n", 5206 pci_name(pci_dev), 5207 dev->dev_addr[0], dev->dev_addr[1], dev->dev_addr[2], 5208 dev->dev_addr[3], dev->dev_addr[4], dev->dev_addr[5]); 5209 printk(KERN_ERR "Please complain to your hardware vendor. Switching to a random MAC.\n"); 5210 dev->dev_addr[0] = 0x00; 5211 dev->dev_addr[1] = 0x00; 5212 dev->dev_addr[2] = 0x6c; 5213 get_random_bytes(&dev->dev_addr[3], 3); 5214 } 5215 5216 dprintk(KERN_DEBUG "%s: MAC Address %02x:%02x:%02x:%02x:%02x:%02x\n", pci_name(pci_dev), 5217 dev->dev_addr[0], dev->dev_addr[1], dev->dev_addr[2], 5218 dev->dev_addr[3], dev->dev_addr[4], dev->dev_addr[5]); 5219 5220 /* set mac address */ 5221 nv_copy_mac_to_hw(dev); 5222 5223 /* disable WOL */ 5224 writel(0, base + NvRegWakeUpFlags); 5225 np->wolenabled = 0; 5226 5227 if (id->driver_data & DEV_HAS_POWER_CNTRL) { 5228 5229 /* take phy and nic out of low power mode */ 5230 powerstate = readl(base + NvRegPowerState2); 5231 powerstate &= ~NVREG_POWERSTATE2_POWERUP_MASK; 5232 if ((id->device == PCI_DEVICE_ID_NVIDIA_NVENET_12 || 5233 id->device == PCI_DEVICE_ID_NVIDIA_NVENET_13) && 5234 pci_dev->revision >= 0xA3) 5235 powerstate |= NVREG_POWERSTATE2_POWERUP_REV_A3; 5236 writel(powerstate, base + NvRegPowerState2); 5237 } 5238 5239 if (np->desc_ver == DESC_VER_1) { 5240 np->tx_flags = NV_TX_VALID; 5241 } else { 5242 np->tx_flags = NV_TX2_VALID; 5243 } 5244 if (optimization_mode == NV_OPTIMIZATION_MODE_THROUGHPUT) { 5245 np->irqmask = NVREG_IRQMASK_THROUGHPUT; 5246 if (np->msi_flags & NV_MSI_X_CAPABLE) /* set number of vectors */ 5247 np->msi_flags |= 0x0003; 5248 } else { 5249 np->irqmask = NVREG_IRQMASK_CPU; 5250 if (np->msi_flags & NV_MSI_X_CAPABLE) /* set number of vectors */ 5251 np->msi_flags |= 0x0001; 5252 } 5253 5254 if (id->driver_data & DEV_NEED_TIMERIRQ) 5255 np->irqmask |= NVREG_IRQ_TIMER; 5256 if (id->driver_data & DEV_NEED_LINKTIMER) { 5257 dprintk(KERN_INFO "%s: link timer on.\n", pci_name(pci_dev)); 5258 np->need_linktimer = 1; 5259 np->link_timeout = jiffies + LINK_TIMEOUT; 5260 } else { 5261 dprintk(KERN_INFO "%s: link timer off.\n", pci_name(pci_dev)); 5262 np->need_linktimer = 0; 5263 } 5264 5265 /* clear phy state and temporarily halt phy interrupts */ 5266 writel(0, base + NvRegMIIMask); 5267 phystate = readl(base + NvRegAdapterControl); 5268 if (phystate & NVREG_ADAPTCTL_RUNNING) { 5269 phystate_orig = 1; 5270 phystate &= ~NVREG_ADAPTCTL_RUNNING; 5271 writel(phystate, base + NvRegAdapterControl); 5272 } 5273 writel(NVREG_MIISTAT_MASK, base + NvRegMIIStatus); 5274 5275 if (id->driver_data & DEV_HAS_MGMT_UNIT) { 5276 /* management unit running on the mac? 
*/ 5277 if (readl(base + NvRegTransmitterControl) & NVREG_XMITCTL_SYNC_PHY_INIT) { 5278 np->mac_in_use = readl(base + NvRegTransmitterControl) & NVREG_XMITCTL_MGMT_ST; 5279 dprintk(KERN_INFO "%s: mgmt unit is running. mac in use %x.\n", pci_name(pci_dev), np->mac_in_use); 5280 for (i = 0; i < 5000; i++) { 5281 msleep(1); 5282 if (nv_mgmt_acquire_sema(dev)) { 5283 /* management unit setup the phy already? */ 5284 if ((readl(base + NvRegTransmitterControl) & NVREG_XMITCTL_SYNC_MASK) == 5285 NVREG_XMITCTL_SYNC_PHY_INIT) { 5286 /* phy is inited by mgmt unit */ 5287 phyinitialized = 1; 5288 dprintk(KERN_INFO "%s: Phy already initialized by mgmt unit.\n", pci_name(pci_dev)); 5289 } else { 5290 /* we need to init the phy */ 5291 } 5292 break; 5293 } 5294 } 5295 } 5296 } 5297 5298 /* find a suitable phy */ 5299 for (i = 1; i <= 32; i++) { 5300 int id1, id2; 5301 int phyaddr = i & 0x1F; 5302 5303 spin_lock_irq(&np->lock); 5304 id1 = mii_rw(dev, phyaddr, MII_PHYSID1, MII_READ); 5305 spin_unlock_irq(&np->lock); 5306 if (id1 < 0 || id1 == 0xffff) 5307 continue; 5308 spin_lock_irq(&np->lock); 5309 id2 = mii_rw(dev, phyaddr, MII_PHYSID2, MII_READ); 5310 spin_unlock_irq(&np->lock); 5311 if (id2 < 0 || id2 == 0xffff) 5312 continue; 5313 5314 np->phy_model = id2 & PHYID2_MODEL_MASK; 5315 id1 = (id1 & PHYID1_OUI_MASK) << PHYID1_OUI_SHFT; 5316 id2 = (id2 & PHYID2_OUI_MASK) >> PHYID2_OUI_SHFT; 5317 dprintk(KERN_DEBUG "%s: open: Found PHY %04x:%04x at address %d.\n", 5318 pci_name(pci_dev), id1, id2, phyaddr); 5319 np->phyaddr = phyaddr; 5320 np->phy_oui = id1 | id2; 5321 break; 5322 } 5323 if (i == 33) { 5324 printk(KERN_INFO "%s: open: Could not find a valid PHY.\n", 5325 pci_name(pci_dev)); 5326 goto out_error; 5327 } 5328 5329 if (!phyinitialized) { 5330 /* reset it */ 5331 phy_init(dev); 5332 } else { 5333 /* see if it is a gigabit phy */ 5334 u32 mii_status = mii_rw(dev, np->phyaddr, MII_BMSR, MII_READ); 5335 if (mii_status & PHY_GIGABIT) { 5336 np->gigabit = PHY_GIGABIT; 5337 } 5338 } 5339 5340 /* set default link speed settings */ 5341 np->linkspeed = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_10; 5342 np->duplex = 0; 5343 np->autoneg = 1; 5344 5345 err = register_netdev(dev); 5346 if (err) { 5347 printk(KERN_INFO "forcedeth: unable to register netdev: %d\n", err); 5348 goto out_error; 5349 } 5350 printk(KERN_INFO "%s: forcedeth.c: subsystem: %05x:%04x bound to %s\n", 5351 dev->name, pci_dev->subsystem_vendor, pci_dev->subsystem_device, 5352 pci_name(pci_dev)); 5353 5354 return 0; 5355 5356out_error: 5357 if (phystate_orig) 5358 writel(phystate|NVREG_ADAPTCTL_RUNNING, base + NvRegAdapterControl); 5359 pci_set_drvdata(pci_dev, NULL); 5360out_freering: 5361 free_rings(dev); 5362out_unmap: 5363 iounmap(get_hwbase(dev)); 5364out_relreg: 5365 pci_release_regions(pci_dev); 5366out_disable: 5367 pci_disable_device(pci_dev); 5368out_free: 5369 free_netdev(dev); 5370out: 5371 return err; 5372} 5373 5374static void __devexit nv_remove(struct pci_dev *pci_dev) 5375{ 5376 struct net_device *dev = pci_get_drvdata(pci_dev); 5377 struct fe_priv *np = netdev_priv(dev); 5378 u8 __iomem *base = get_hwbase(dev); 5379 5380 unregister_netdev(dev); 5381 5382 /* special op: write back the misordered MAC address - otherwise 5383 * the next nv_probe would see a wrong address. 
static void __devexit nv_remove(struct pci_dev *pci_dev)
{
	struct net_device *dev = pci_get_drvdata(pci_dev);
	struct fe_priv *np = netdev_priv(dev);
	u8 __iomem *base = get_hwbase(dev);

	unregister_netdev(dev);

	/* special op: write back the misordered MAC address - otherwise
	 * the next nv_probe would see a wrong address.
	 */
	writel(np->orig_mac[0], base + NvRegMacAddrA);
	writel(np->orig_mac[1], base + NvRegMacAddrB);

	/* free all structures */
	free_rings(dev);
	iounmap(get_hwbase(dev));
	pci_release_regions(pci_dev);
	pci_disable_device(pci_dev);
	free_netdev(dev);
	pci_set_drvdata(pci_dev, NULL);
}

#ifdef CONFIG_PM
static int nv_suspend(struct pci_dev *pdev, pm_message_t state)
{
	struct net_device *dev = pci_get_drvdata(pdev);
	struct fe_priv *np = netdev_priv(dev);

	if (!netif_running(dev))
		goto out;

	netif_device_detach(dev);

	// Gross.
	nv_close(dev);

	pci_save_state(pdev);
	pci_enable_wake(pdev, pci_choose_state(pdev, state), np->wolenabled);
	pci_set_power_state(pdev, pci_choose_state(pdev, state));
out:
	return 0;
}

static int nv_resume(struct pci_dev *pdev)
{
	struct net_device *dev = pci_get_drvdata(pdev);
	int rc = 0;

	if (!netif_running(dev))
		goto out;

	netif_device_attach(dev);

	pci_set_power_state(pdev, PCI_D0);
	pci_restore_state(pdev);
	pci_enable_wake(pdev, PCI_D0, 0);

	rc = nv_open(dev);
out:
	return rc;
}
#else
#define nv_suspend NULL
#define nv_resume NULL
#endif /* CONFIG_PM */
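
/*
 * PCI device table: each entry's driver_data is a bitmask of DEV_*
 * feature flags that nv_probe() checks (e.g. DEV_HAS_POWER_CNTRL and
 * DEV_HAS_MGMT_UNIT above) to enable per-chipset behaviour.
 */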
static struct pci_device_id pci_tbl[] = {
	{ /* nForce Ethernet Controller */
		PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_1),
		.driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER,
	},
	{ /* nForce2 Ethernet Controller */
		PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_2),
		.driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER,
	},
	{ /* nForce3 Ethernet Controller */
		PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_3),
		.driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER,
	},
	{ /* nForce3 Ethernet Controller */
		PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_4),
		.driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM,
	},
	{ /* nForce3 Ethernet Controller */
		PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_5),
		.driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM,
	},
	{ /* nForce3 Ethernet Controller */
		PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_6),
		.driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM,
	},
	{ /* nForce3 Ethernet Controller */
		PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_7),
		.driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM,
	},
	{ /* CK804 Ethernet Controller */
		PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_8),
		.driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_STATISTICS_V1,
	},
	{ /* CK804 Ethernet Controller */
		PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_9),
		.driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_STATISTICS_V1,
	},
	{ /* MCP04 Ethernet Controller */
		PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_10),
		.driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_STATISTICS_V1,
	},
	{ /* MCP04 Ethernet Controller */
		PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_11),
		.driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_STATISTICS_V1,
	},
	{ /* MCP51 Ethernet Controller */
		PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_12),
		.driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_STATISTICS_V1,
	},
	{ /* MCP51 Ethernet Controller */
		PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_13),
		.driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_STATISTICS_V1,
	},
	{ /* MCP55 Ethernet Controller */
		PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_14),
		.driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_VLAN|DEV_HAS_MSI|DEV_HAS_MSI_X|DEV_HAS_POWER_CNTRL|DEV_HAS_PAUSEFRAME_TX|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT,
	},
	{ /* MCP55 Ethernet Controller */
		PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_15),
		.driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_VLAN|DEV_HAS_MSI|DEV_HAS_MSI_X|DEV_HAS_POWER_CNTRL|DEV_HAS_PAUSEFRAME_TX|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT,
	},
	{ /* MCP61 Ethernet Controller */
		PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_16),
		.driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR,
	},
	{ /* MCP61 Ethernet Controller */
		PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_17),
		.driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR,
	},
	{ /* MCP61 Ethernet Controller */
		PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_18),
		.driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR,
	},
	{ /* MCP61 Ethernet Controller */
		PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_19),
		.driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR,
	},
	{ /* MCP65 Ethernet Controller */
		PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_20),
		.driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR,
	},
	{ /* MCP65 Ethernet Controller */
		PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_21),
		.driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR,
	},
	{ /* MCP65 Ethernet Controller */
		PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_22),
		.driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR,
	},
	{ /* MCP65 Ethernet Controller */
		PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_23),
		.driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR,
	},
	{ /* MCP67 Ethernet Controller */
		PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_24),
		.driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR,
	},
	{ /* MCP67 Ethernet Controller */
		PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_25),
		.driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR,
	},
	{ /* MCP67 Ethernet Controller */
		PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_26),
		.driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR,
	},
	{ /* MCP67 Ethernet Controller */
		PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_27),
		.driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR,
	},
	{ /* MCP73 Ethernet Controller */
		PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_28),
		.driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR,
	},
	{ /* MCP73 Ethernet Controller */
		PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_29),
		.driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR,
	},
	{ /* MCP73 Ethernet Controller */
		PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_30),
		.driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR,
	},
	{ /* MCP73 Ethernet Controller */
		PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_31),
		.driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR,
	},
	{0,},
};
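
/*
 * Support for a new chipset is normally added as one more entry before
 * the {0,} terminator. Purely illustrative sketch; the device id macro
 * below is hypothetical and the flags would be chosen to match the
 * chipset's features:
 *
 *	{ // MCPxx Ethernet Controller
 *		PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_32),
 *		.driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA,
 *	},
 */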
static struct pci_driver driver = {
	.name = "forcedeth",
	.id_table = pci_tbl,
	.probe = nv_probe,
	.remove = __devexit_p(nv_remove),
	.suspend = nv_suspend,
	.resume = nv_resume,
};

static int __init init_nic(void)
{
	printk(KERN_INFO "forcedeth.c: Reverse Engineered nForce ethernet driver. Version %s.\n", FORCEDETH_VERSION);
	return pci_register_driver(&driver);
}

static void __exit exit_nic(void)
{
	pci_unregister_driver(&driver);
}

module_param(max_interrupt_work, int, 0);
MODULE_PARM_DESC(max_interrupt_work, "forcedeth maximum events handled per interrupt");
module_param(optimization_mode, int, 0);
MODULE_PARM_DESC(optimization_mode, "In throughput mode (0), every tx & rx packet will generate an interrupt. In CPU mode (1), interrupts are controlled by a timer.");
module_param(poll_interval, int, 0);
MODULE_PARM_DESC(poll_interval, "Interval determines how frequently the timer interrupt is generated: [(time_in_micro_secs * 100) / (2^10)]. Min is 0 and Max is 65535.");
module_param(msi, int, 0);
MODULE_PARM_DESC(msi, "MSI interrupts are enabled by setting to 1 and disabled by setting to 0.");
module_param(msix, int, 0);
MODULE_PARM_DESC(msix, "MSI-X interrupts are enabled by setting to 1 and disabled by setting to 0.");
module_param(dma_64bit, int, 0);
MODULE_PARM_DESC(dma_64bit, "High DMA is enabled by setting to 1 and disabled by setting to 0.");

MODULE_AUTHOR("Manfred Spraul <manfred@colorfullife.com>");
MODULE_DESCRIPTION("Reverse Engineered nForce ethernet driver");
MODULE_LICENSE("GPL");

MODULE_DEVICE_TABLE(pci, pci_tbl);

module_init(init_nic);
module_exit(exit_nic);
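
/*
 * Usage note (illustrative): the parameters above can be given at load
 * time, e.g.
 *
 *	modprobe forcedeth optimization_mode=1 poll_interval=970
 *
 * Taking the poll_interval formula in the description at face value, a
 * value v corresponds to roughly v * 1024 / 100 microseconds, so 970 is
 * about 9933 us (~10 ms) between timer interrupts.
 */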