                Chelsio N210 10Gb Ethernet Network Controller

                        Driver Release Notes for Linux

                                 Version 2.1.1

                                 June 20, 2005

CONTENTS
========
 INTRODUCTION
 FEATURES
 PERFORMANCE
 DRIVER MESSAGES
 KNOWN ISSUES
 SUPPORT


INTRODUCTION
============

 This document describes the Linux driver for the Chelsio 10Gb Ethernet
 Network Controller. This driver supports the Chelsio N210 NIC and is
 backward compatible with the Chelsio N110 model 10Gb NICs.


FEATURES
========

 Adaptive Interrupts (adaptive-rx)
 ---------------------------------

 This feature provides an adaptive algorithm that adjusts the interrupt
 coalescing parameters, allowing the driver to dynamically adapt the latency
 settings to achieve the highest performance during various types of network
 load.

 The interface used to control this feature is ethtool. Please see the
 ethtool manpage for additional usage information.

 By default, adaptive-rx is disabled.
 To enable adaptive-rx:

     ethtool -C <interface> adaptive-rx on

 To disable adaptive-rx, use ethtool:

     ethtool -C <interface> adaptive-rx off

 After disabling adaptive-rx, the timer latency value will be set to 50us.
 You may set the timer latency after disabling adaptive-rx:

     ethtool -C <interface> rx-usecs <microseconds>

 An example to set the timer latency value to 100us on eth0:

     ethtool -C eth0 rx-usecs 100

 You may also provide a timer latency value while disabling adaptive-rx:

     ethtool -C <interface> adaptive-rx off rx-usecs <microseconds>

 If adaptive-rx is disabled and a timer latency value is specified, the timer
 will be set to the specified value until changed by the user or until
 adaptive-rx is enabled.

 To view the status of the adaptive-rx and timer latency values:

     ethtool -c <interface>


 TCP Segmentation Offloading (TSO) Support
 -----------------------------------------

 This feature, also known as "large send", enables a system's protocol stack
 to offload portions of outbound TCP processing to a network interface card,
 thereby reducing system CPU utilization and enhancing performance.

 The interface used to control this feature is ethtool version 1.8 or higher.
 Please see the ethtool manpage for additional usage information.

 By default, TSO is enabled.
 To disable TSO:

     ethtool -K <interface> tso off

 To enable TSO:

     ethtool -K <interface> tso on

 To view the status of TSO:

     ethtool -k <interface>


PERFORMANCE
===========

 The following information is provided as an example of how to change system
 parameters for "performance tuning" and what values to use. You may or may
 not want to change these system parameters, depending on your
 server/workstation application. Doing so is not warranted in any way by
 Chelsio Communications, and is done at "YOUR OWN RISK". Chelsio will not be
 held responsible for loss of data or damage to equipment.

 Your distribution may have a different way of doing things, or you may prefer
 a different method. These commands are shown only to provide an example of
 what to do and are by no means definitive.

 Making any of the following system changes will only last until you reboot
 your system. You may want to write a script that runs at boot-up which
 includes the optimal settings for your system.
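 As a minimal sketch of such a script (the file name is distribution-dependent,
 and which of the settings below you keep is up to you; the values shown are
 simply the examples from this section):

     #!/bin/sh
     # Example boot-time tuning, e.g. appended to /etc/rc.d/rc.local
     setpci -d 1425:* 0x0c.l=0x0000F800
     sysctl -w net.ipv4.tcp_timestamps=0
     sysctl -w net.core.rmem_max=1024000
     sysctl -w net.core.wmem_max=1024000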
 Setting PCI Latency Timer:
     setpci -d 1425:* 0x0c.l=0x0000F800

 Disabling TCP timestamps:
     sysctl -w net.ipv4.tcp_timestamps=0

 Disabling SACK:
     sysctl -w net.ipv4.tcp_sack=0

 Setting a large number of incoming connection requests:
     sysctl -w net.ipv4.tcp_max_syn_backlog=3000

 Setting maximum receive socket buffer size:
     sysctl -w net.core.rmem_max=1024000

 Setting maximum send socket buffer size:
     sysctl -w net.core.wmem_max=1024000

 Setting smp_affinity (on a multiprocessor system) to a single CPU:
     echo 1 > /proc/irq/<interrupt_number>/smp_affinity

 Setting default receive socket buffer size:
     sysctl -w net.core.rmem_default=524287

 Setting default send socket buffer size:
     sysctl -w net.core.wmem_default=524287

 Setting maximum option memory buffers:
     sysctl -w net.core.optmem_max=524287

 Setting maximum backlog (# of unprocessed packets before kernel drops):
     sysctl -w net.core.netdev_max_backlog=300000

 Setting TCP read buffers (min/default/max):
     sysctl -w net.ipv4.tcp_rmem="10000000 10000000 10000000"

 Setting TCP write buffers (min/pressure/max):
     sysctl -w net.ipv4.tcp_wmem="10000000 10000000 10000000"

 Setting TCP buffer space (min/pressure/max):
     sysctl -w net.ipv4.tcp_mem="10000000 10000000 10000000"

 TCP window size for single connections:
   The receive buffer (RX_WINDOW) size must be at least as large as the
   Bandwidth-Delay Product of the communication link between the sender and
   receiver. Due to the variations of RTT, you may want to increase the buffer
   size up to 2 times the Bandwidth-Delay Product. Reference page 289 of
   "TCP/IP Illustrated, Volume 1, The Protocols" by W. Richard Stevens.
   At 10Gb speeds, use the following formula:
       RX_WINDOW >= 1.25MBytes * RTT(in milliseconds)
       Example for RTT with 100us: RX_WINDOW = (1,250,000 * 0.1) = 125,000
   RX_WINDOW sizes of 256KB - 512KB should be sufficient.
   Setting the min, max, and default receive buffer (RX_WINDOW) size:
       sysctl -w net.ipv4.tcp_rmem="<min> <default> <max>"
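   For example, to allow windows in the 256KB - 512KB range suggested above
   (the exact numbers are illustrative, not mandatory; 4096 is just a common
   kernel default for the minimum):
       sysctl -w net.ipv4.tcp_rmem="4096 262144 524288"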
 TCP window size for multiple connections:
   The receive buffer (RX_WINDOW) size may be calculated the same way as for
   single connections, but should be divided by the number of connections.
   The smaller window prevents congestion and facilitates better pacing,
   especially if/when MAC level flow control does not work well or when it is
   not supported on the machine. Experimentation may be necessary to attain
   the correct value. This method is provided as a starting point for the
   correct receive buffer size.
   Setting the min, max, and default receive buffer (RX_WINDOW) size is
   performed in the same manner as for a single connection.


DRIVER MESSAGES
===============

 The following messages are the most common messages logged by syslog. These
 may be found in /var/log/messages.

 Driver up:
     Chelsio Network Driver - version 2.1.1

 NIC detected:
     eth#: Chelsio N210 1x10GBaseX NIC (rev #), PCIX 133MHz/64-bit

 Link up:
     eth#: link is up at 10 Gbps, full duplex

 Link down:
     eth#: link is down


KNOWN ISSUES
============

 These issues have been identified during testing. The following information
 is provided as a workaround to the problem. In some cases, this problem is
 inherent to Linux or to a particular Linux distribution and/or hardware
 platform.

 1. Large number of TCP retransmits on a multiprocessor (SMP) system.

     On a system with multiple CPUs, the interrupt (IRQ) for the network
     controller may be bound to more than one CPU. This will cause TCP
     retransmits if the packet data were to be split across different CPUs
     and re-assembled in a different order than expected.

     To eliminate the TCP retransmits, set smp_affinity on the particular
     interrupt to a single CPU. You can locate the interrupt (IRQ) used on
     the N110/N210 by using ifconfig:

         ifconfig <dev_name> | grep Interrupt

     Set the smp_affinity to a single CPU:

         echo 1 > /proc/irq/<interrupt_number>/smp_affinity

     It is highly suggested that you do not run the irqbalance daemon on your
     system, as this will change any smp_affinity setting you have applied.
     The irqbalance daemon runs on a 10 second interval and binds interrupts
     to the least loaded CPU determined by the daemon. To disable this daemon:

         chkconfig --level 2345 irqbalance off

     By default, some Linux distributions enable the kernel feature,
     irqbalance, which performs the same function as the daemon. To disable
     this feature, add the following line to your bootloader:

         noirqbalance

     Example using the GRUB bootloader:

         title Red Hat Enterprise Linux AS (2.4.21-27.ELsmp)
         root (hd0,0)
         kernel /vmlinuz-2.4.21-27.ELsmp ro root=/dev/hda3 noirqbalance
         initrd /initrd-2.4.21-27.ELsmp.img

 2. After running insmod, the driver is loaded and the incorrect network
    interface is brought up without running ifup.

     When using 2.4.x kernels, including RHEL kernels, the Linux kernel
     invokes a script named "hotplug". This script is primarily used to
     automatically bring up USB devices when they are plugged in; however,
     the script also attempts to automatically bring up a network interface
     after loading the kernel module. The hotplug script does this by scanning
     the ifcfg-eth# config files in /etc/sysconfig/network-scripts, looking
     for HWADDR=<mac_address>.

     If the hotplug script does not find the HWADDR within any of the
     ifcfg-eth# files, it will bring up the device with the next available
     interface name. If this interface is already configured for a different
     network card, your new interface will have an incorrect IP address and
     network settings.

     To solve this issue, you can add the HWADDR=<mac_address> key to the
     interface config file of your network controller.
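     For example, a minimal /etc/sysconfig/network-scripts/ifcfg-eth0 might
     contain (the MAC address below is a placeholder; use the address reported
     by ifconfig for your controller):

         DEVICE=eth0
         HWADDR=00:07:43:00:00:00
         ONBOOT=yes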
     To disable this "hotplug" feature, you may add the driver (module name)
     to the "blacklist" file located in /etc/hotplug. It has been noted that
     this does not work for network devices because the net.agent script
     does not use the blacklist file. Simply remove, or rename, the net.agent
     script located in /etc/hotplug to disable this feature.

 3. Transport Protocol (TP) hangs when running heavy multi-connection traffic
    on an AMD Opteron system with HyperTransport PCI-X Tunnel chipset.

     If your AMD Opteron system uses the AMD-8131 HyperTransport PCI-X Tunnel
     chipset, you may experience the "133-MHz Mode Split Completion Data
     Corruption" bug identified by AMD while using a 133MHz PCI-X card on the
     PCI-X bus.

     AMD states, "Under highly specific conditions, the AMD-8131 PCI-X Tunnel
     can provide stale data via split completion cycles to a PCI-X card that
     is operating at 133 MHz", causing data corruption.

     AMD provides three workarounds for this problem; however, Chelsio
     recommends the first option for best performance with this bug:

         For 133MHz secondary bus operation, limit the transaction length and
         the number of outstanding transactions, via BIOS configuration
         programming of the PCI-X card, to the following:

             Data Length (bytes): 1k
             Total allowed outstanding transactions: 2

     Please refer to AMD 8131-HT/PCI-X Errata 26310 Rev 3.08 August 2004,
     section 56, "133-MHz Mode Split Completion Data Corruption", for more
     details on this bug and the workarounds suggested by AMD.

     It may be possible to work outside AMD's recommended PCI-X settings; try
     increasing the Data Length to 2k bytes for increased performance. If you
     have issues with these settings, please revert to the "safe" settings
     and duplicate the problem before submitting a bug or asking for support.

     NOTE: The default setting on most systems is 8 outstanding transactions
           and 2k bytes data length.

 4. On multiprocessor systems, it has been noted that an application which
    is handling 10Gb networking can switch between CPUs causing degraded
    and/or unstable performance.

     If running on an SMP system and taking performance measurements, it
     is suggested you either run the latest netperf-2.4.0+ or use a binding
     tool such as Tim Hockin's procstate utilities (runon)
     <http://www.hockin.org/~thockin/procstate/>.

     Binding netserver and netperf (or other applications) to particular
     CPUs can make a significant difference in performance measurements.
     You may need to experiment with which CPU to bind the application to in
     order to achieve the best performance for your system.

     If you are developing an application designed for 10Gb networking,
     please keep in mind you may want to look at the kernel functions
     sched_setaffinity & sched_getaffinity to bind your application.

     If you are just running user-space applications such as ftp, telnet,
     etc., you may want to try the runon tool provided by Tim Hockin's
     procstate utility. You could also try binding the interface to a
     particular CPU:

         runon 0 ifup eth0
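     For example, a netperf measurement run might pin the two programs to
     different CPUs (the host address is illustrative):

         runon 0 netserver
         runon 1 netperf -H 192.168.0.2 -t TCP_STREAM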

SUPPORT
=======

 If you have problems with the software or hardware, please contact our
 customer support team via email at support@chelsio.com or check our website
 at http://www.chelsio.com

===============================================================================

     Chelsio Communications
     370 San Aleso Ave.
     Suite 100
     Sunnyvale, CA 94085
     http://www.chelsio.com

This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License, version 2, as
published by the Free Software Foundation.

You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.

THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR IMPLIED
WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.

 Copyright (c) 2003-2005 Chelsio Communications. All rights reserved.

===============================================================================
MAINTAINERS
···
W:	http://www.simtec.co.uk/products/EB2410ITX/
S:	Supported

+SIS 190 ETHERNET DRIVER
+P:	Francois Romieu
+M:	romieu@fr.zoreil.com
+L:	netdev@oss.sgi.com
+S:	Maintained
+
SIS 5513 IDE CONTROLLER DRIVER
P:	Lionel Bouton
M:	Lionel.Bouton@inet6.fr
drivers/net/Kconfig

···
	  If in doubt, say Y.

+config SIS190
+	tristate "SiS190 gigabit ethernet support"
+	depends on PCI
+	select CRC32
+	select MII
+	---help---
+	  Say Y here if you have a SiS 190 PCI Gigabit Ethernet adapter.
+
+	  To compile this driver as a module, choose M here: the module
+	  will be called sis190.  This is recommended.
+
config SKGE
	tristate "New SysKonnect GigaEthernet support (EXPERIMENTAL)"
	depends on PCI && EXPERIMENTAL
···
menu "Ethernet (10000 Mbit)"
	depends on !UML
+
+config CHELSIO_T1
+	tristate "Chelsio 10Gb Ethernet support"
+	depends on PCI
+	help
+	  This driver supports Chelsio N110 and N210 models 10Gb Ethernet
+	  cards. More information about adapter features and performance
+	  tuning is in <file:Documentation/networking/cxgb.txt>.
+
+	  For general information about Chelsio and our products, visit
+	  our website at <http://www.chelsio.com>.
+
+	  For customer support, please visit our customer support page at
+	  <http://www.chelsio.com/support.htm>.
+
+	  Please send feedback to <linux-bugs@chelsio.com>.
+
+	  To compile this driver as a module, choose M here: the module
+	  will be called cxgb.

config IXGB
	tristate "Intel(R) PRO/10GbE support"
/*****************************************************************************
 * File: cphy.h
 * $Revision: 1.7 $
 * $Date: 2005/06/21 18:29:47 $
 * Description:
 *  part of the Chelsio 10Gb Ethernet Driver.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License, version 2, as
 * published by the Free Software Foundation.
 *
 * You should have received a copy of the GNU General Public License along
 * with this program; if not, write to the Free Software Foundation, Inc.,
 * 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
 *
 * THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR IMPLIED
 * WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF
 * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
 *
 * http://www.chelsio.com
 *
 * Copyright (c) 2003 - 2005 Chelsio Communications, Inc.
 * All rights reserved.
 *
 * Maintainers: maintainers@chelsio.com
 *
 * Authors: Dimitrios Michailidis <dm@chelsio.com>
 *          Tina Yang <tainay@chelsio.com>
 *          Felix Marti <felix@chelsio.com>
 *          Scott Bardone <sbardone@chelsio.com>
 *          Kurt Ottaway <kottaway@chelsio.com>
 *          Frank DiMambro <frank@chelsio.com>
 *
 * History:
 ****************************************************************************/

#ifndef _CXGB_CPHY_H_
#define _CXGB_CPHY_H_

#include "common.h"

struct mdio_ops {
	void (*init)(adapter_t *adapter, const struct board_info *bi);
	int (*read)(adapter_t *adapter, int phy_addr, int mmd_addr,
		    int reg_addr, unsigned int *val);
	int (*write)(adapter_t *adapter, int phy_addr, int mmd_addr,
		     int reg_addr, unsigned int val);
};

/* PHY interrupt types */
enum {
	cphy_cause_link_change = 0x1,
	cphy_cause_error = 0x2
};

struct cphy;

/* PHY operations */
struct cphy_ops {
	void (*destroy)(struct cphy *);
	int (*reset)(struct cphy *, int wait);

	int (*interrupt_enable)(struct cphy *);
	int (*interrupt_disable)(struct cphy *);
	int (*interrupt_clear)(struct cphy *);
	int (*interrupt_handler)(struct cphy *);

	int (*autoneg_enable)(struct cphy *);
	int (*autoneg_disable)(struct cphy *);
	int (*autoneg_restart)(struct cphy *);

	int (*advertise)(struct cphy *phy, unsigned int advertise_map);
	int (*set_loopback)(struct cphy *, int on);
	int (*set_speed_duplex)(struct cphy *phy, int speed, int duplex);
	int (*get_link_status)(struct cphy *phy, int *link_ok, int *speed,
			       int *duplex, int *fc);
};

/* A PHY instance */
struct cphy {
	int addr;                  /* PHY address */
	adapter_t *adapter;        /* associated adapter */
	struct cphy_ops *ops;      /* PHY operations */
	int (*mdio_read)(adapter_t *adapter, int phy_addr, int mmd_addr,
			 int reg_addr, unsigned int *val);
	int (*mdio_write)(adapter_t *adapter, int phy_addr, int mmd_addr,
			  int reg_addr, unsigned int val);
	struct cphy_instance *instance;
};

/* Convenience MDIO read/write wrappers */
static inline int mdio_read(struct cphy *cphy, int mmd, int reg,
			    unsigned int *valp)
{
	return cphy->mdio_read(cphy->adapter, cphy->addr, mmd, reg, valp);
}

static inline int mdio_write(struct cphy *cphy, int mmd, int reg,
			     unsigned int val)
{
	return cphy->mdio_write(cphy->adapter, cphy->addr, mmd, reg, val);
}

static inline int simple_mdio_read(struct cphy *cphy, int reg,
				   unsigned int *valp)
{
	return mdio_read(cphy, 0, reg, valp);
}

static inline int simple_mdio_write(struct cphy *cphy, int reg,
				    unsigned int val)
{
	return mdio_write(cphy, 0, reg, val);
}

/* Convenience initializer */
static inline void cphy_init(struct cphy *phy, adapter_t *adapter,
			     int phy_addr, struct cphy_ops *phy_ops,
			     struct mdio_ops *mdio_ops)
{
	phy->adapter = adapter;
	phy->addr = phy_addr;
	phy->ops = phy_ops;
	if (mdio_ops) {
		phy->mdio_read = mdio_ops->read;
		phy->mdio_write = mdio_ops->write;
	}
}

/* Operations of the PHY-instance factory */
struct gphy {
	/* Construct a PHY instance with the given PHY address */
	struct cphy *(*create)(adapter_t *adapter, int phy_addr,
			       struct mdio_ops *mdio_ops);

	/*
	 * Reset the PHY chip.  This resets the whole PHY chip, not individual
	 * ports.
	 */
	int (*reset)(adapter_t *adapter);
};

extern struct gphy t1_mv88x201x_ops;
extern struct gphy t1_dummy_phy_ops;

#endif /* _CXGB_CPHY_H_ */
/*****************************************************************************
 * File: cpl5_cmd.h
 * $Revision: 1.6 $
 * $Date: 2005/06/21 18:29:47 $
 * Description:
 *  part of the Chelsio 10Gb Ethernet Driver.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License, version 2, as
 * published by the Free Software Foundation.
 *
 * You should have received a copy of the GNU General Public License along
 * with this program; if not, write to the Free Software Foundation, Inc.,
 * 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
 *
 * THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR IMPLIED
 * WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF
 * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
 *
 * http://www.chelsio.com
 *
 * Copyright (c) 2003 - 2005 Chelsio Communications, Inc.
 * All rights reserved.
 *
 * Maintainers: maintainers@chelsio.com
 *
 * Authors: Dimitrios Michailidis <dm@chelsio.com>
 *          Tina Yang <tainay@chelsio.com>
 *          Felix Marti <felix@chelsio.com>
 *          Scott Bardone <sbardone@chelsio.com>
 *          Kurt Ottaway <kottaway@chelsio.com>
 *          Frank DiMambro <frank@chelsio.com>
 *
 * History:
 ****************************************************************************/

#ifndef _CXGB_CPL5_CMD_H_
#define _CXGB_CPL5_CMD_H_

#include <asm/byteorder.h>

#if !defined(__LITTLE_ENDIAN_BITFIELD) && !defined(__BIG_ENDIAN_BITFIELD)
#error "Adjust your <asm/byteorder.h> defines"
#endif

enum CPL_opcode {
	CPL_RX_PKT     = 0xAD,
	CPL_TX_PKT     = 0xB2,
	CPL_TX_PKT_LSO = 0xB6,
};

enum { /* TX_PKT_LSO ethernet types */
	CPL_ETH_II,
	CPL_ETH_II_VLAN,
	CPL_ETH_802_3,
	CPL_ETH_802_3_VLAN
};

struct cpl_rx_data {
	u32 rsvd0;
	u32 len;
	u32 seq;
	u16 urg;
	u8 rsvd1;
	u8 status;
};

/*
 * We want this header's alignment to be no more stringent than 2-byte aligned.
 * All fields are u8 or u16 except for the length.  However that field is not
 * used so we break it into 2 16-bit parts to easily meet our alignment needs.
 */
struct cpl_tx_pkt {
	u8 opcode;
#if defined(__LITTLE_ENDIAN_BITFIELD)
	u8 iff:4;
	u8 ip_csum_dis:1;
	u8 l4_csum_dis:1;
	u8 vlan_valid:1;
	u8 rsvd:1;
#else
	u8 rsvd:1;
	u8 vlan_valid:1;
	u8 l4_csum_dis:1;
	u8 ip_csum_dis:1;
	u8 iff:4;
#endif
	u16 vlan;
	u16 len_hi;
	u16 len_lo;
};

struct cpl_tx_pkt_lso {
	u8 opcode;
#if defined(__LITTLE_ENDIAN_BITFIELD)
	u8 iff:4;
	u8 ip_csum_dis:1;
	u8 l4_csum_dis:1;
	u8 vlan_valid:1;
	u8 rsvd:1;
#else
	u8 rsvd:1;
	u8 vlan_valid:1;
	u8 l4_csum_dis:1;
	u8 ip_csum_dis:1;
	u8 iff:4;
#endif
	u16 vlan;
	u32 len;

	u32 rsvd2;
	u8 rsvd3;
#if defined(__LITTLE_ENDIAN_BITFIELD)
	u8 tcp_hdr_words:4;
	u8 ip_hdr_words:4;
#else
	u8 ip_hdr_words:4;
	u8 tcp_hdr_words:4;
#endif
	u16 eth_type_mss;
};

struct cpl_rx_pkt {
	u8 opcode;
#if defined(__LITTLE_ENDIAN_BITFIELD)
	u8 iff:4;
	u8 csum_valid:1;
	u8 bad_pkt:1;
	u8 vlan_valid:1;
	u8 rsvd:1;
#else
	u8 rsvd:1;
	u8 vlan_valid:1;
	u8 bad_pkt:1;
	u8 csum_valid:1;
	u8 iff:4;
#endif
	u16 csum;
	u16 vlan;
	u16 len;
};

#endif /* _CXGB_CPL5_CMD_H_ */
/*****************************************************************************
 * File: espi.h
 * $Revision: 1.7 $
 * $Date: 2005/06/21 18:29:47 $
 * Description:
 *  part of the Chelsio 10Gb Ethernet Driver.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License, version 2, as
 * published by the Free Software Foundation.
 *
 * You should have received a copy of the GNU General Public License along
 * with this program; if not, write to the Free Software Foundation, Inc.,
 * 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
 *
 * THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR IMPLIED
 * WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF
 * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
 *
 * http://www.chelsio.com
 *
 * Copyright (c) 2003 - 2005 Chelsio Communications, Inc.
 * All rights reserved.
 *
 * Maintainers: maintainers@chelsio.com
 *
 * Authors: Dimitrios Michailidis <dm@chelsio.com>
 *          Tina Yang <tainay@chelsio.com>
 *          Felix Marti <felix@chelsio.com>
 *          Scott Bardone <sbardone@chelsio.com>
 *          Kurt Ottaway <kottaway@chelsio.com>
 *          Frank DiMambro <frank@chelsio.com>
 *
 * History:
 ****************************************************************************/

#ifndef _CXGB_ESPI_H_
#define _CXGB_ESPI_H_

#include "common.h"

struct espi_intr_counts {
	unsigned int DIP4_err;
	unsigned int rx_drops;
	unsigned int tx_drops;
	unsigned int rx_ovflw;
	unsigned int parity_err;
	unsigned int DIP2_parity_err;
};

struct peespi;

struct peespi *t1_espi_create(adapter_t *adapter);
void t1_espi_destroy(struct peespi *espi);
int t1_espi_init(struct peespi *espi, int mac_type, int nports);

void t1_espi_intr_enable(struct peespi *);
void t1_espi_intr_clear(struct peespi *);
void t1_espi_intr_disable(struct peespi *);
int t1_espi_intr_handler(struct peespi *);
const struct espi_intr_counts *t1_espi_get_intr_counts(struct peespi *espi);

void t1_espi_set_misc_ctrl(adapter_t *adapter, u32 val);
u32 t1_espi_get_mon(adapter_t *adapter, u32 addr, u8 wait);

#endif /* _CXGB_ESPI_H_ */
/*****************************************************************************
 * File: mv88x201x.c
 * $Revision: 1.12 $
 * $Date: 2005/04/15 19:27:14 $
 * Description:
 *  Marvell PHY (mv88x201x) functionality.
 *  part of the Chelsio 10Gb Ethernet Driver.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License, version 2, as
 * published by the Free Software Foundation.
 *
 * You should have received a copy of the GNU General Public License along
 * with this program; if not, write to the Free Software Foundation, Inc.,
 * 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
 *
 * THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR IMPLIED
 * WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF
 * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
 *
 * http://www.chelsio.com
 *
 * Copyright (c) 2003 - 2005 Chelsio Communications, Inc.
 * All rights reserved.
 *
 * Maintainers: maintainers@chelsio.com
 *
 * Authors: Dimitrios Michailidis <dm@chelsio.com>
 *          Tina Yang <tainay@chelsio.com>
 *          Felix Marti <felix@chelsio.com>
 *          Scott Bardone <sbardone@chelsio.com>
 *          Kurt Ottaway <kottaway@chelsio.com>
 *          Frank DiMambro <frank@chelsio.com>
 *
 * History:
 ****************************************************************************/

#include "cphy.h"
#include "elmer0.h"

/*
 * The 88x2010 Rev C. requires some link status registers to be read
 * twice in order to get the right values. Future revisions will fix
 * this problem and then this macro can disappear.
 */
#define MV88x2010_LINK_STATUS_BUGS    1

static int led_init(struct cphy *cphy)
{
	/* Setup the LED registers so we can turn on/off.
	 * Writing these bits maps control to another
	 * register. mmd(0x1) addr(0x7)
	 */
	mdio_write(cphy, 0x3, 0x8304, 0xdddd);
	return 0;
}

static int led_link(struct cphy *cphy, u32 do_enable)
{
	u32 led = 0;
#define LINK_ENABLE_BIT 0x1

	mdio_read(cphy, 0x1, 0x7, &led);

	if (do_enable & LINK_ENABLE_BIT) {
		led |= LINK_ENABLE_BIT;
		mdio_write(cphy, 0x1, 0x7, led);
	} else {
		led &= ~LINK_ENABLE_BIT;
		mdio_write(cphy, 0x1, 0x7, led);
	}
	return 0;
}

/* Port Reset */
static int mv88x201x_reset(struct cphy *cphy, int wait)
{
	/* This can be done through registers.  It is not required since
	 * a full chip reset is used.
	 */
	return 0;
}

static int mv88x201x_interrupt_enable(struct cphy *cphy)
{
	u32 elmer;

	/* Enable PHY LASI interrupts. */
	mdio_write(cphy, 0x1, 0x9002, 0x1);

	/* Enable Marvell interrupts through Elmer0. */
	t1_tpi_read(cphy->adapter, A_ELMER0_INT_ENABLE, &elmer);
	elmer |= ELMER0_GP_BIT6;
	t1_tpi_write(cphy->adapter, A_ELMER0_INT_ENABLE, elmer);
	return 0;
}

static int mv88x201x_interrupt_disable(struct cphy *cphy)
{
	u32 elmer;

	/* Disable PHY LASI interrupts. */
	mdio_write(cphy, 0x1, 0x9002, 0x0);

	/* Disable Marvell interrupts through Elmer0. */
	t1_tpi_read(cphy->adapter, A_ELMER0_INT_ENABLE, &elmer);
	elmer &= ~ELMER0_GP_BIT6;
	t1_tpi_write(cphy->adapter, A_ELMER0_INT_ENABLE, elmer);
	return 0;
}

static int mv88x201x_interrupt_clear(struct cphy *cphy)
{
	u32 elmer;
	u32 val;

#ifdef MV88x2010_LINK_STATUS_BUGS
	/* Required to read twice before clear takes effect. */
	mdio_read(cphy, 0x1, 0x9003, &val);
	mdio_read(cphy, 0x1, 0x9004, &val);
	mdio_read(cphy, 0x1, 0x9005, &val);

	/* Read this register after the others above it else
	 * the register doesn't clear correctly.
	 */
	mdio_read(cphy, 0x1, 0x1, &val);
#endif

	/* Clear link status. */
	mdio_read(cphy, 0x1, 0x1, &val);
	/* Clear PHY LASI interrupts. */
	mdio_read(cphy, 0x1, 0x9005, &val);

#ifdef MV88x2010_LINK_STATUS_BUGS
	/* Do it again. */
	mdio_read(cphy, 0x1, 0x9003, &val);
	mdio_read(cphy, 0x1, 0x9004, &val);
#endif

	/* Clear Marvell interrupts through Elmer0. */
	t1_tpi_read(cphy->adapter, A_ELMER0_INT_CAUSE, &elmer);
	elmer |= ELMER0_GP_BIT6;
	t1_tpi_write(cphy->adapter, A_ELMER0_INT_CAUSE, elmer);
	return 0;
}

static int mv88x201x_interrupt_handler(struct cphy *cphy)
{
	/* Clear interrupts */
	mv88x201x_interrupt_clear(cphy);

	/* We have only enabled link change interrupts and so
	 * cphy_cause must be a link change interrupt.
	 */
	return cphy_cause_link_change;
}

static int mv88x201x_set_loopback(struct cphy *cphy, int on)
{
	return 0;
}

static int mv88x201x_get_link_status(struct cphy *cphy, int *link_ok,
				     int *speed, int *duplex, int *fc)
{
	u32 val = 0;
#define LINK_STATUS_BIT 0x4

	if (link_ok) {
		/* Read link status. */
		mdio_read(cphy, 0x1, 0x1, &val);
		val &= LINK_STATUS_BIT;
		*link_ok = (val == LINK_STATUS_BIT);
		/* Turn on/off Link LED */
		led_link(cphy, *link_ok);
	}
	if (speed)
		*speed = SPEED_10000;
	if (duplex)
		*duplex = DUPLEX_FULL;
	if (fc)
		*fc = PAUSE_RX | PAUSE_TX;
	return 0;
}

static void mv88x201x_destroy(struct cphy *cphy)
{
	kfree(cphy);
}

static struct cphy_ops mv88x201x_ops = {
	.destroy           = mv88x201x_destroy,
	.reset             = mv88x201x_reset,
	.interrupt_enable  = mv88x201x_interrupt_enable,
	.interrupt_disable = mv88x201x_interrupt_disable,
	.interrupt_clear   = mv88x201x_interrupt_clear,
	.interrupt_handler = mv88x201x_interrupt_handler,
	.get_link_status   = mv88x201x_get_link_status,
	.set_loopback      = mv88x201x_set_loopback,
};

static struct cphy *mv88x201x_phy_create(adapter_t *adapter, int phy_addr,
					 struct mdio_ops *mdio_ops)
{
	u32 val;
	struct cphy *cphy = kmalloc(sizeof(*cphy), GFP_KERNEL);

	if (!cphy)
		return NULL;
	memset(cphy, 0, sizeof(*cphy));
	cphy_init(cphy, adapter, phy_addr, &mv88x201x_ops, mdio_ops);

	/* Commands the PHY to enable XFP's clock. */
	mdio_read(cphy, 0x3, 0x8300, &val);
	mdio_write(cphy, 0x3, 0x8300, val | 1);

	/* Clear link status. Required because of a bug in the PHY. */
	mdio_read(cphy, 0x1, 0x8, &val);
	mdio_read(cphy, 0x3, 0x8, &val);

	/* Allows for Link,Ack LED turn on/off */
	led_init(cphy);
	return cphy;
}

/* Chip Reset */
static int mv88x201x_phy_reset(adapter_t *adapter)
{
	u32 val;

	t1_tpi_read(adapter, A_ELMER0_GPO, &val);
	val &= ~4;
	t1_tpi_write(adapter, A_ELMER0_GPO, val);
	msleep(100);

	t1_tpi_write(adapter, A_ELMER0_GPO, val | 4);
	msleep(1000);

	/* Now lets enable the Laser. Delay 100us */
	t1_tpi_read(adapter, A_ELMER0_GPO, &val);
	val |= 0x8000;
	t1_tpi_write(adapter, A_ELMER0_GPO, val);
	udelay(100);
	return 0;
}

struct gphy t1_mv88x201x_ops = {
	mv88x201x_phy_create,
	mv88x201x_phy_reset
};
···1+/*****************************************************************************2+ * *3+ * File: sge.c *4+ * $Revision: 1.26 $ *5+ * $Date: 2005/06/21 18:29:48 $ *6+ * Description: *7+ * DMA engine. *8+ * part of the Chelsio 10Gb Ethernet Driver. *9+ * *10+ * This program is free software; you can redistribute it and/or modify *11+ * it under the terms of the GNU General Public License, version 2, as *12+ * published by the Free Software Foundation. *13+ * *14+ * You should have received a copy of the GNU General Public License along *15+ * with this program; if not, write to the Free Software Foundation, Inc., *16+ * 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. *17+ * *18+ * THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR IMPLIED *19+ * WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF *20+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. *21+ * *22+ * http://www.chelsio.com *23+ * *24+ * Copyright (c) 2003 - 2005 Chelsio Communications, Inc. *25+ * All rights reserved. *26+ * *27+ * Maintainers: maintainers@chelsio.com *28+ * *29+ * Authors: Dimitrios Michailidis <dm@chelsio.com> *30+ * Tina Yang <tainay@chelsio.com> *31+ * Felix Marti <felix@chelsio.com> *32+ * Scott Bardone <sbardone@chelsio.com> *33+ * Kurt Ottaway <kottaway@chelsio.com> *34+ * Frank DiMambro <frank@chelsio.com> *35+ * *36+ * History: *37+ * *38+ ****************************************************************************/39+40+#include "common.h"41+42+#include <linux/config.h>43+#include <linux/types.h>44+#include <linux/errno.h>45+#include <linux/pci.h>46+#include <linux/netdevice.h>47+#include <linux/etherdevice.h>48+#include <linux/if_vlan.h>49+#include <linux/skbuff.h>50+#include <linux/init.h>51+#include <linux/mm.h>52+#include <linux/ip.h>53+#include <linux/in.h>54+#include <linux/if_arp.h>55+56+#include "cpl5_cmd.h"57+#include "sge.h"58+#include "regs.h"59+#include "espi.h"60+61+62+#ifdef NETIF_F_TSO63+#include <linux/tcp.h>64+#endif65+66+#define SGE_CMDQ_N 267+#define SGE_FREELQ_N 268+#define SGE_CMDQ0_E_N 102469+#define SGE_CMDQ1_E_N 12870+#define SGE_FREEL_SIZE 409671+#define SGE_JUMBO_FREEL_SIZE 51272+#define SGE_FREEL_REFILL_THRESH 1673+#define SGE_RESPQ_E_N 102474+#define SGE_INTRTIMER_NRES 100075+#define SGE_RX_COPY_THRES 25676+#define SGE_RX_SM_BUF_SIZE 153677+78+# define SGE_RX_DROP_THRES 279+80+#define SGE_RESPQ_REPLENISH_THRES (SGE_RESPQ_E_N / 4)81+82+/*83+ * Period of the TX buffer reclaim timer. 
This timer does not need to run84+ * frequently as TX buffers are usually reclaimed by new TX packets.85+ */86+#define TX_RECLAIM_PERIOD (HZ / 4)87+88+#ifndef NET_IP_ALIGN89+# define NET_IP_ALIGN 290+#endif91+92+#define M_CMD_LEN 0x7fffffff93+#define V_CMD_LEN(v) (v)94+#define G_CMD_LEN(v) ((v) & M_CMD_LEN)95+#define V_CMD_GEN1(v) ((v) << 31)96+#define V_CMD_GEN2(v) (v)97+#define F_CMD_DATAVALID (1 << 1)98+#define F_CMD_SOP (1 << 2)99+#define V_CMD_EOP(v) ((v) << 3)100+101+/*102+ * Command queue, receive buffer list, and response queue descriptors.103+ */104+#if defined(__BIG_ENDIAN_BITFIELD)105+struct cmdQ_e {106+ u32 addr_lo;107+ u32 len_gen;108+ u32 flags;109+ u32 addr_hi;110+};111+112+struct freelQ_e {113+ u32 addr_lo;114+ u32 len_gen;115+ u32 gen2;116+ u32 addr_hi;117+};118+119+struct respQ_e {120+ u32 Qsleeping : 4;121+ u32 Cmdq1CreditReturn : 5;122+ u32 Cmdq1DmaComplete : 5;123+ u32 Cmdq0CreditReturn : 5;124+ u32 Cmdq0DmaComplete : 5;125+ u32 FreelistQid : 2;126+ u32 CreditValid : 1;127+ u32 DataValid : 1;128+ u32 Offload : 1;129+ u32 Eop : 1;130+ u32 Sop : 1;131+ u32 GenerationBit : 1;132+ u32 BufferLength;133+};134+#elif defined(__LITTLE_ENDIAN_BITFIELD)135+struct cmdQ_e {136+ u32 len_gen;137+ u32 addr_lo;138+ u32 addr_hi;139+ u32 flags;140+};141+142+struct freelQ_e {143+ u32 len_gen;144+ u32 addr_lo;145+ u32 addr_hi;146+ u32 gen2;147+};148+149+struct respQ_e {150+ u32 BufferLength;151+ u32 GenerationBit : 1;152+ u32 Sop : 1;153+ u32 Eop : 1;154+ u32 Offload : 1;155+ u32 DataValid : 1;156+ u32 CreditValid : 1;157+ u32 FreelistQid : 2;158+ u32 Cmdq0DmaComplete : 5;159+ u32 Cmdq0CreditReturn : 5;160+ u32 Cmdq1DmaComplete : 5;161+ u32 Cmdq1CreditReturn : 5;162+ u32 Qsleeping : 4;163+} ;164+#endif165+166+/*167+ * SW Context Command and Freelist Queue Descriptors168+ */169+struct cmdQ_ce {170+ struct sk_buff *skb;171+ DECLARE_PCI_UNMAP_ADDR(dma_addr);172+ DECLARE_PCI_UNMAP_LEN(dma_len);173+};174+175+struct freelQ_ce {176+ struct sk_buff *skb;177+ DECLARE_PCI_UNMAP_ADDR(dma_addr);178+ DECLARE_PCI_UNMAP_LEN(dma_len);179+};180+181+/*182+ * SW command, freelist and response rings183+ */184+struct cmdQ {185+ unsigned long status; /* HW DMA fetch status */186+ unsigned int in_use; /* # of in-use command descriptors */187+ unsigned int size; /* # of descriptors */188+ unsigned int processed; /* total # of descs HW has processed */189+ unsigned int cleaned; /* total # of descs SW has reclaimed */190+ unsigned int stop_thres; /* SW TX queue suspend threshold */191+ u16 pidx; /* producer index (SW) */192+ u16 cidx; /* consumer index (HW) */193+ u8 genbit; /* current generation (=valid) bit */194+ u8 sop; /* is next entry start of packet? 
*/195+ struct cmdQ_e *entries; /* HW command descriptor Q */196+ struct cmdQ_ce *centries; /* SW command context descriptor Q */197+ spinlock_t lock; /* Lock to protect cmdQ enqueuing */198+ dma_addr_t dma_addr; /* DMA addr HW command descriptor Q */199+};200+201+struct freelQ {202+ unsigned int credits; /* # of available RX buffers */203+ unsigned int size; /* free list capacity */204+ u16 pidx; /* producer index (SW) */205+ u16 cidx; /* consumer index (HW) */206+ u16 rx_buffer_size; /* Buffer size on this free list */207+ u16 dma_offset; /* DMA offset to align IP headers */208+ u16 recycleq_idx; /* skb recycle q to use */209+ u8 genbit; /* current generation (=valid) bit */210+ struct freelQ_e *entries; /* HW freelist descriptor Q */211+ struct freelQ_ce *centries; /* SW freelist context descriptor Q */212+ dma_addr_t dma_addr; /* DMA addr HW freelist descriptor Q */213+};214+215+struct respQ {216+ unsigned int credits; /* credits to be returned to SGE */217+ unsigned int size; /* # of response Q descriptors */218+ u16 cidx; /* consumer index (SW) */219+ u8 genbit; /* current generation(=valid) bit */220+ struct respQ_e *entries; /* HW response descriptor Q */221+ dma_addr_t dma_addr; /* DMA addr HW response descriptor Q */222+};223+224+/* Bit flags for cmdQ.status */225+enum {226+ CMDQ_STAT_RUNNING = 1, /* fetch engine is running */227+ CMDQ_STAT_LAST_PKT_DB = 2 /* last packet rung the doorbell */228+};229+230+/*231+ * Main SGE data structure232+ *233+ * Interrupts are handled by a single CPU and it is likely that on a MP system234+ * the application is migrated to another CPU. In that scenario, we try to235+ * seperate the RX(in irq context) and TX state in order to decrease memory236+ * contention.237+ */238+struct sge {239+ struct adapter *adapter; /* adapter backpointer */240+ struct net_device *netdev; /* netdevice backpointer */241+ struct freelQ freelQ[SGE_FREELQ_N]; /* buffer free lists */242+ struct respQ respQ; /* response Q */243+ unsigned long stopped_tx_queues; /* bitmap of suspended Tx queues */244+ unsigned int rx_pkt_pad; /* RX padding for L2 packets */245+ unsigned int jumbo_fl; /* jumbo freelist Q index */246+ unsigned int intrtimer_nres; /* no-resource interrupt timer */247+ unsigned int fixed_intrtimer;/* non-adaptive interrupt timer */248+ struct timer_list tx_reclaim_timer; /* reclaims TX buffers */249+ struct timer_list espibug_timer;250+ unsigned int espibug_timeout;251+ struct sk_buff *espibug_skb;252+ u32 sge_control; /* shadow value of sge control reg */253+ struct sge_intr_counts stats;254+ struct sge_port_stats port_stats[MAX_NPORTS];255+ struct cmdQ cmdQ[SGE_CMDQ_N] ____cacheline_aligned_in_smp;256+};257+258+/*259+ * PIO to indicate that memory mapped Q contains valid descriptor(s).260+ */261+static inline void doorbell_pio(struct adapter *adapter, u32 val)262+{263+ wmb();264+ writel(val, adapter->regs + A_SG_DOORBELL);265+}266+267+/*268+ * Frees all RX buffers on the freelist Q. 
The caller must make sure that269+ * the SGE is turned off before calling this function.270+ */271+static void free_freelQ_buffers(struct pci_dev *pdev, struct freelQ *q)272+{273+ unsigned int cidx = q->cidx;274+275+ while (q->credits--) {276+ struct freelQ_ce *ce = &q->centries[cidx];277+278+ pci_unmap_single(pdev, pci_unmap_addr(ce, dma_addr),279+ pci_unmap_len(ce, dma_len),280+ PCI_DMA_FROMDEVICE);281+ dev_kfree_skb(ce->skb);282+ ce->skb = NULL;283+ if (++cidx == q->size)284+ cidx = 0;285+ }286+}287+288+/*289+ * Free RX free list and response queue resources.290+ */291+static void free_rx_resources(struct sge *sge)292+{293+ struct pci_dev *pdev = sge->adapter->pdev;294+ unsigned int size, i;295+296+ if (sge->respQ.entries) {297+ size = sizeof(struct respQ_e) * sge->respQ.size;298+ pci_free_consistent(pdev, size, sge->respQ.entries,299+ sge->respQ.dma_addr);300+ }301+302+ for (i = 0; i < SGE_FREELQ_N; i++) {303+ struct freelQ *q = &sge->freelQ[i];304+305+ if (q->centries) {306+ free_freelQ_buffers(pdev, q);307+ kfree(q->centries);308+ }309+ if (q->entries) {310+ size = sizeof(struct freelQ_e) * q->size;311+ pci_free_consistent(pdev, size, q->entries,312+ q->dma_addr);313+ }314+ }315+}316+317+/*318+ * Allocates basic RX resources, consisting of memory mapped freelist Qs and a319+ * response queue.320+ */321+static int alloc_rx_resources(struct sge *sge, struct sge_params *p)322+{323+ struct pci_dev *pdev = sge->adapter->pdev;324+ unsigned int size, i;325+326+ for (i = 0; i < SGE_FREELQ_N; i++) {327+ struct freelQ *q = &sge->freelQ[i];328+329+ q->genbit = 1;330+ q->size = p->freelQ_size[i];331+ q->dma_offset = sge->rx_pkt_pad ? 0 : NET_IP_ALIGN;332+ size = sizeof(struct freelQ_e) * q->size;333+ q->entries = (struct freelQ_e *)334+ pci_alloc_consistent(pdev, size, &q->dma_addr);335+ if (!q->entries)336+ goto err_no_mem;337+ memset(q->entries, 0, size);338+ size = sizeof(struct freelQ_ce) * q->size;339+ q->centries = kmalloc(size, GFP_KERNEL);340+ if (!q->centries)341+ goto err_no_mem;342+ memset(q->centries, 0, size);343+ }344+345+ /*346+ * Calculate the buffer sizes for the two free lists. 
FL0 accommodates347+ * regular sized Ethernet frames, FL1 is sized not to exceed 16K,348+ * including all the sk_buff overhead.349+ *350+ * Note: For T2 FL0 and FL1 are reversed.351+ */352+ sge->freelQ[!sge->jumbo_fl].rx_buffer_size = SGE_RX_SM_BUF_SIZE +353+ sizeof(struct cpl_rx_data) +354+ sge->freelQ[!sge->jumbo_fl].dma_offset;355+ sge->freelQ[sge->jumbo_fl].rx_buffer_size = (16 * 1024) -356+ SKB_DATA_ALIGN(sizeof(struct skb_shared_info));357+358+ /*359+ * Setup which skb recycle Q should be used when recycling buffers from360+ * each free list.361+ */362+ sge->freelQ[!sge->jumbo_fl].recycleq_idx = 0;363+ sge->freelQ[sge->jumbo_fl].recycleq_idx = 1;364+365+ sge->respQ.genbit = 1;366+ sge->respQ.size = SGE_RESPQ_E_N;367+ sge->respQ.credits = 0;368+ size = sizeof(struct respQ_e) * sge->respQ.size;369+ sge->respQ.entries = (struct respQ_e *)370+ pci_alloc_consistent(pdev, size, &sge->respQ.dma_addr);371+ if (!sge->respQ.entries)372+ goto err_no_mem;373+ memset(sge->respQ.entries, 0, size);374+ return 0;375+376+err_no_mem:377+ free_rx_resources(sge);378+ return -ENOMEM;379+}380+381+/*382+ * Reclaims n TX descriptors and frees the buffers associated with them.383+ */384+static void free_cmdQ_buffers(struct sge *sge, struct cmdQ *q, unsigned int n)385+{386+ struct cmdQ_ce *ce;387+ struct pci_dev *pdev = sge->adapter->pdev;388+ unsigned int cidx = q->cidx;389+390+ q->in_use -= n;391+ ce = &q->centries[cidx];392+ while (n--) {393+ if (q->sop)394+ pci_unmap_single(pdev, pci_unmap_addr(ce, dma_addr),395+ pci_unmap_len(ce, dma_len),396+ PCI_DMA_TODEVICE);397+ else398+ pci_unmap_page(pdev, pci_unmap_addr(ce, dma_addr),399+ pci_unmap_len(ce, dma_len),400+ PCI_DMA_TODEVICE);401+ q->sop = 0;402+ if (ce->skb) {403+ dev_kfree_skb(ce->skb);404+ q->sop = 1;405+ }406+ ce++;407+ if (++cidx == q->size) {408+ cidx = 0;409+ ce = q->centries;410+ }411+ }412+ q->cidx = cidx;413+}414+415+/*416+ * Free TX resources.417+ *418+ * Assumes that SGE is stopped and all interrupts are disabled.419+ */420+static void free_tx_resources(struct sge *sge)421+{422+ struct pci_dev *pdev = sge->adapter->pdev;423+ unsigned int size, i;424+425+ for (i = 0; i < SGE_CMDQ_N; i++) {426+ struct cmdQ *q = &sge->cmdQ[i];427+428+ if (q->centries) {429+ if (q->in_use)430+ free_cmdQ_buffers(sge, q, q->in_use);431+ kfree(q->centries);432+ }433+ if (q->entries) {434+ size = sizeof(struct cmdQ_e) * q->size;435+ pci_free_consistent(pdev, size, q->entries,436+ q->dma_addr);437+ }438+ }439+}440+441+/*442+ * Allocates basic TX resources, consisting of memory mapped command Qs.443+ */444+static int alloc_tx_resources(struct sge *sge, struct sge_params *p)445+{446+ struct pci_dev *pdev = sge->adapter->pdev;447+ unsigned int size, i;448+449+ for (i = 0; i < SGE_CMDQ_N; i++) {450+ struct cmdQ *q = &sge->cmdQ[i];451+452+ q->genbit = 1;453+ q->sop = 1;454+ q->size = p->cmdQ_size[i];455+ q->in_use = 0;456+ q->status = 0;457+ q->processed = q->cleaned = 0;458+ q->stop_thres = 0;459+ spin_lock_init(&q->lock);460+ size = sizeof(struct cmdQ_e) * q->size;461+ q->entries = (struct cmdQ_e *)462+ pci_alloc_consistent(pdev, size, &q->dma_addr);463+ if (!q->entries)464+ goto err_no_mem;465+ memset(q->entries, 0, size);466+ size = sizeof(struct cmdQ_ce) * q->size;467+ q->centries = kmalloc(size, GFP_KERNEL);468+ if (!q->centries)469+ goto err_no_mem;470+ memset(q->centries, 0, size);471+ }472+473+ /*474+ * CommandQ 0 handles Ethernet and TOE packets, while queue 1 is TOE475+ * only. 
For queue 0 set the stop threshold so we can handle one more476+ * packet from each port, plus reserve an additional 24 entries for477+ * Ethernet packets only. Queue 1 never suspends nor do we reserve478+ * space for Ethernet packets.479+ */480+ sge->cmdQ[0].stop_thres = sge->adapter->params.nports *481+ (MAX_SKB_FRAGS + 1);482+ return 0;483+484+err_no_mem:485+ free_tx_resources(sge);486+ return -ENOMEM;487+}488+489+static inline void setup_ring_params(struct adapter *adapter, u64 addr,490+ u32 size, int base_reg_lo,491+ int base_reg_hi, int size_reg)492+{493+ writel((u32)addr, adapter->regs + base_reg_lo);494+ writel(addr >> 32, adapter->regs + base_reg_hi);495+ writel(size, adapter->regs + size_reg);496+}497+498+/*499+ * Enable/disable VLAN acceleration.500+ */501+void t1_set_vlan_accel(struct adapter *adapter, int on_off)502+{503+ struct sge *sge = adapter->sge;504+505+ sge->sge_control &= ~F_VLAN_XTRACT;506+ if (on_off)507+ sge->sge_control |= F_VLAN_XTRACT;508+ if (adapter->open_device_map) {509+ writel(sge->sge_control, adapter->regs + A_SG_CONTROL);510+ readl(adapter->regs + A_SG_CONTROL); /* flush */511+ }512+}513+514+/*515+ * Programs the various SGE registers. However, the engine is not yet enabled,516+ * but sge->sge_control is setup and ready to go.517+ */518+static void configure_sge(struct sge *sge, struct sge_params *p)519+{520+ struct adapter *ap = sge->adapter;521+522+ writel(0, ap->regs + A_SG_CONTROL);523+ setup_ring_params(ap, sge->cmdQ[0].dma_addr, sge->cmdQ[0].size,524+ A_SG_CMD0BASELWR, A_SG_CMD0BASEUPR, A_SG_CMD0SIZE);525+ setup_ring_params(ap, sge->cmdQ[1].dma_addr, sge->cmdQ[1].size,526+ A_SG_CMD1BASELWR, A_SG_CMD1BASEUPR, A_SG_CMD1SIZE);527+ setup_ring_params(ap, sge->freelQ[0].dma_addr,528+ sge->freelQ[0].size, A_SG_FL0BASELWR,529+ A_SG_FL0BASEUPR, A_SG_FL0SIZE);530+ setup_ring_params(ap, sge->freelQ[1].dma_addr,531+ sge->freelQ[1].size, A_SG_FL1BASELWR,532+ A_SG_FL1BASEUPR, A_SG_FL1SIZE);533+534+ /* The threshold comparison uses <. 
*/535+ writel(SGE_RX_SM_BUF_SIZE + 1, ap->regs + A_SG_FLTHRESHOLD);536+537+ setup_ring_params(ap, sge->respQ.dma_addr, sge->respQ.size,538+ A_SG_RSPBASELWR, A_SG_RSPBASEUPR, A_SG_RSPSIZE);539+ writel((u32)sge->respQ.size - 1, ap->regs + A_SG_RSPQUEUECREDIT);540+541+ sge->sge_control = F_CMDQ0_ENABLE | F_CMDQ1_ENABLE | F_FL0_ENABLE |542+ F_FL1_ENABLE | F_CPL_ENABLE | F_RESPONSE_QUEUE_ENABLE |543+ V_CMDQ_PRIORITY(2) | F_DISABLE_CMDQ1_GTS | F_ISCSI_COALESCE |544+ F_DISABLE_FL0_GTS | F_DISABLE_FL1_GTS |545+ V_RX_PKT_OFFSET(sge->rx_pkt_pad);546+547+#if defined(__BIG_ENDIAN_BITFIELD)548+ sge->sge_control |= F_ENABLE_BIG_ENDIAN;549+#endif550+551+ /* Initialize no-resource timer */552+ sge->intrtimer_nres = SGE_INTRTIMER_NRES * core_ticks_per_usec(ap);553+554+ t1_sge_set_coalesce_params(sge, p);555+}556+557+/*558+ * Return the payload capacity of the jumbo free-list buffers.559+ */560+static inline unsigned int jumbo_payload_capacity(const struct sge *sge)561+{562+ return sge->freelQ[sge->jumbo_fl].rx_buffer_size -563+ sge->freelQ[sge->jumbo_fl].dma_offset -564+ sizeof(struct cpl_rx_data);565+}566+567+/*568+ * Frees all SGE related resources and the sge structure itself569+ */570+void t1_sge_destroy(struct sge *sge)571+{572+ if (sge->espibug_skb)573+ kfree_skb(sge->espibug_skb);574+575+ free_tx_resources(sge);576+ free_rx_resources(sge);577+ kfree(sge);578+}579+580+/*581+ * Allocates new RX buffers on the freelist Q (and tracks them on the freelist582+ * context Q) until the Q is full or alloc_skb fails.583+ *584+ * It is possible that the generation bits already match, indicating that the585+ * buffer is already valid and nothing needs to be done. This happens when we586+ * copied a received buffer into a new sk_buff during the interrupt processing.587+ *588+ * If the SGE doesn't automatically align packets properly (!sge->rx_pkt_pad),589+ * we specify a RX_OFFSET in order to make sure that the IP header is 4B590+ * aligned.591+ */592+static void refill_free_list(struct sge *sge, struct freelQ *q)593+{594+ struct pci_dev *pdev = sge->adapter->pdev;595+ struct freelQ_ce *ce = &q->centries[q->pidx];596+ struct freelQ_e *e = &q->entries[q->pidx];597+ unsigned int dma_len = q->rx_buffer_size - q->dma_offset;598+599+600+ while (q->credits < q->size) {601+ struct sk_buff *skb;602+ dma_addr_t mapping;603+604+ skb = alloc_skb(q->rx_buffer_size, GFP_ATOMIC);605+ if (!skb)606+ break;607+608+ skb_reserve(skb, q->dma_offset);609+ mapping = pci_map_single(pdev, skb->data, dma_len,610+ PCI_DMA_FROMDEVICE);611+ ce->skb = skb;612+ pci_unmap_addr_set(ce, dma_addr, mapping);613+ pci_unmap_len_set(ce, dma_len, dma_len);614+ e->addr_lo = (u32)mapping;615+ e->addr_hi = (u64)mapping >> 32;616+ e->len_gen = V_CMD_LEN(dma_len) | V_CMD_GEN1(q->genbit);617+ wmb();618+ e->gen2 = V_CMD_GEN2(q->genbit);619+620+ e++;621+ ce++;622+ if (++q->pidx == q->size) {623+ q->pidx = 0;624+ q->genbit ^= 1;625+ ce = q->centries;626+ e = q->entries;627+ }628+ q->credits++;629+ }630+631+}632+633+/*634+ * Calls refill_free_list for both free lists. 
If we cannot fill at least 1/4635+ * of both rings, we go into 'few interrupt mode' in order to give the system636+ * time to free up resources.637+ */638+static void freelQs_empty(struct sge *sge)639+{640+ struct adapter *adapter = sge->adapter;641+ u32 irq_reg = readl(adapter->regs + A_SG_INT_ENABLE);642+ u32 irqholdoff_reg;643+644+ refill_free_list(sge, &sge->freelQ[0]);645+ refill_free_list(sge, &sge->freelQ[1]);646+647+ if (sge->freelQ[0].credits > (sge->freelQ[0].size >> 2) &&648+ sge->freelQ[1].credits > (sge->freelQ[1].size >> 2)) {649+ irq_reg |= F_FL_EXHAUSTED;650+ irqholdoff_reg = sge->fixed_intrtimer;651+ } else {652+ /* Clear the F_FL_EXHAUSTED interrupts for now */653+ irq_reg &= ~F_FL_EXHAUSTED;654+ irqholdoff_reg = sge->intrtimer_nres;655+ }656+ writel(irqholdoff_reg, adapter->regs + A_SG_INTRTIMER);657+ writel(irq_reg, adapter->regs + A_SG_INT_ENABLE);658+659+ /* We reenable the Qs to force a freelist GTS interrupt later */660+ doorbell_pio(adapter, F_FL0_ENABLE | F_FL1_ENABLE);661+}662+663+#define SGE_PL_INTR_MASK (F_PL_INTR_SGE_ERR | F_PL_INTR_SGE_DATA)664+#define SGE_INT_FATAL (F_RESPQ_OVERFLOW | F_PACKET_TOO_BIG | F_PACKET_MISMATCH)665+#define SGE_INT_ENABLE (F_RESPQ_EXHAUSTED | F_RESPQ_OVERFLOW | \666+ F_FL_EXHAUSTED | F_PACKET_TOO_BIG | F_PACKET_MISMATCH)667+668+/*669+ * Disable SGE Interrupts670+ */671+void t1_sge_intr_disable(struct sge *sge)672+{673+ u32 val = readl(sge->adapter->regs + A_PL_ENABLE);674+675+ writel(val & ~SGE_PL_INTR_MASK, sge->adapter->regs + A_PL_ENABLE);676+ writel(0, sge->adapter->regs + A_SG_INT_ENABLE);677+}678+679+/*680+ * Enable SGE interrupts.681+ */682+void t1_sge_intr_enable(struct sge *sge)683+{684+ u32 en = SGE_INT_ENABLE;685+ u32 val = readl(sge->adapter->regs + A_PL_ENABLE);686+687+ if (sge->adapter->flags & TSO_CAPABLE)688+ en &= ~F_PACKET_TOO_BIG;689+ writel(en, sge->adapter->regs + A_SG_INT_ENABLE);690+ writel(val | SGE_PL_INTR_MASK, sge->adapter->regs + A_PL_ENABLE);691+}692+693+/*694+ * Clear SGE interrupts.695+ */696+void t1_sge_intr_clear(struct sge *sge)697+{698+ writel(SGE_PL_INTR_MASK, sge->adapter->regs + A_PL_CAUSE);699+ writel(0xffffffff, sge->adapter->regs + A_SG_INT_CAUSE);700+}701+702+/*703+ * SGE 'Error' interrupt handler704+ */705+int t1_sge_intr_error_handler(struct sge *sge)706+{707+ struct adapter *adapter = sge->adapter;708+ u32 cause = readl(adapter->regs + A_SG_INT_CAUSE);709+710+ if (adapter->flags & TSO_CAPABLE)711+ cause &= ~F_PACKET_TOO_BIG;712+ if (cause & F_RESPQ_EXHAUSTED)713+ sge->stats.respQ_empty++;714+ if (cause & F_RESPQ_OVERFLOW) {715+ sge->stats.respQ_overflow++;716+ CH_ALERT("%s: SGE response queue overflow\n",717+ adapter->name);718+ }719+ if (cause & F_FL_EXHAUSTED) {720+ sge->stats.freelistQ_empty++;721+ freelQs_empty(sge);722+ }723+ if (cause & F_PACKET_TOO_BIG) {724+ sge->stats.pkt_too_big++;725+ CH_ALERT("%s: SGE max packet size exceeded\n",726+ adapter->name);727+ }728+ if (cause & F_PACKET_MISMATCH) {729+ sge->stats.pkt_mismatch++;730+ CH_ALERT("%s: SGE packet mismatch\n", adapter->name);731+ }732+ if (cause & SGE_INT_FATAL)733+ t1_fatal_err(adapter);734+735+ writel(cause, adapter->regs + A_SG_INT_CAUSE);736+ return 0;737+}738+739+const struct sge_intr_counts *t1_sge_get_intr_counts(struct sge *sge)740+{741+ return &sge->stats;742+}743+744+const struct sge_port_stats *t1_sge_get_port_stats(struct sge *sge, int port)745+{746+ return &sge->port_stats[port];747+}748+749+/**750+ * recycle_fl_buf - recycle a free list buffer751+ * @fl: the free list752+ * @idx: index of buffer to recycle753+ 
*754+ * Recycles the specified buffer on the given free list by adding it at755+ * the next available slot on the list.756+ */757+static void recycle_fl_buf(struct freelQ *fl, int idx)758+{759+ struct freelQ_e *from = &fl->entries[idx];760+ struct freelQ_e *to = &fl->entries[fl->pidx];761+762+ fl->centries[fl->pidx] = fl->centries[idx];763+ to->addr_lo = from->addr_lo;764+ to->addr_hi = from->addr_hi;765+ to->len_gen = G_CMD_LEN(from->len_gen) | V_CMD_GEN1(fl->genbit);766+ wmb();767+ to->gen2 = V_CMD_GEN2(fl->genbit);768+ fl->credits++;769+770+ if (++fl->pidx == fl->size) {771+ fl->pidx = 0;772+ fl->genbit ^= 1;773+ }774+}775+776+/**777+ * get_packet - return the next ingress packet buffer778+ * @pdev: the PCI device that received the packet779+ * @fl: the SGE free list holding the packet780+ * @len: the actual packet length, excluding any SGE padding781+ * @dma_pad: padding at beginning of buffer left by SGE DMA782+ * @skb_pad: padding to be used if the packet is copied783+ * @copy_thres: length threshold under which a packet should be copied784+ * @drop_thres: # of remaining buffers before we start dropping packets785+ *786+ * Get the next packet from a free list and complete setup of the787+ * sk_buff. If the packet is small we make a copy and recycle the788+ * original buffer, otherwise we use the original buffer itself. If a789+ * positive drop threshold is supplied packets are dropped and their790+ * buffers recycled if (a) the number of remaining buffers is under the791+ * threshold and the packet is too big to copy, or (b) the packet should792+ * be copied but there is no memory for the copy.793+ */794+static inline struct sk_buff *get_packet(struct pci_dev *pdev,795+ struct freelQ *fl, unsigned int len,796+ int dma_pad, int skb_pad,797+ unsigned int copy_thres,798+ unsigned int drop_thres)799+{800+ struct sk_buff *skb;801+ struct freelQ_ce *ce = &fl->centries[fl->cidx];802+803+ if (len < copy_thres) {804+ skb = alloc_skb(len + skb_pad, GFP_ATOMIC);805+ if (likely(skb != NULL)) {806+ skb_reserve(skb, skb_pad);807+ skb_put(skb, len);808+ pci_dma_sync_single_for_cpu(pdev,809+ pci_unmap_addr(ce, dma_addr),810+ pci_unmap_len(ce, dma_len),811+ PCI_DMA_FROMDEVICE);812+ memcpy(skb->data, ce->skb->data + dma_pad, len);813+ pci_dma_sync_single_for_device(pdev,814+ pci_unmap_addr(ce, dma_addr),815+ pci_unmap_len(ce, dma_len),816+ PCI_DMA_FROMDEVICE);817+ } else if (!drop_thres)818+ goto use_orig_buf;819+820+ recycle_fl_buf(fl, fl->cidx);821+ return skb;822+ }823+824+ if (fl->credits < drop_thres) {825+ recycle_fl_buf(fl, fl->cidx);826+ return NULL;827+ }828+829+use_orig_buf:830+ pci_unmap_single(pdev, pci_unmap_addr(ce, dma_addr),831+ pci_unmap_len(ce, dma_len), PCI_DMA_FROMDEVICE);832+ skb = ce->skb;833+ skb_reserve(skb, dma_pad);834+ skb_put(skb, len);835+ return skb;836+}837+838+/**839+ * unexpected_offload - handle an unexpected offload packet840+ * @adapter: the adapter841+ * @fl: the free list that received the packet842+ *843+ * Called when we receive an unexpected offload packet (e.g., the TOE844+ * function is disabled or the card is a NIC). 
Prints a message and
 * recycles the buffer.
 */
static void unexpected_offload(struct adapter *adapter, struct freelQ *fl)
{
	struct freelQ_ce *ce = &fl->centries[fl->cidx];
	struct sk_buff *skb = ce->skb;

	pci_dma_sync_single_for_cpu(adapter->pdev, pci_unmap_addr(ce, dma_addr),
				    pci_unmap_len(ce, dma_len), PCI_DMA_FROMDEVICE);
	CH_ERR("%s: unexpected offload packet, cmd %u\n",
	       adapter->name, *skb->data);
	recycle_fl_buf(fl, fl->cidx);
}

/*
 * Write the command descriptors to transmit the given skb starting at
 * descriptor pidx with the given generation.
 */
static inline void write_tx_descs(struct adapter *adapter, struct sk_buff *skb,
				  unsigned int pidx, unsigned int gen,
				  struct cmdQ *q)
{
	dma_addr_t mapping;
	struct cmdQ_e *e, *e1;
	struct cmdQ_ce *ce;
	unsigned int i, flags, nfrags = skb_shinfo(skb)->nr_frags;

	mapping = pci_map_single(adapter->pdev, skb->data,
				 skb->len - skb->data_len, PCI_DMA_TODEVICE);
	ce = &q->centries[pidx];
	ce->skb = NULL;
	pci_unmap_addr_set(ce, dma_addr, mapping);
	pci_unmap_len_set(ce, dma_len, skb->len - skb->data_len);

	flags = F_CMD_DATAVALID | F_CMD_SOP | V_CMD_EOP(nfrags == 0) |
		V_CMD_GEN2(gen);
	e = &q->entries[pidx];
	e->addr_lo = (u32)mapping;
	e->addr_hi = (u64)mapping >> 32;
	e->len_gen = V_CMD_LEN(skb->len - skb->data_len) | V_CMD_GEN1(gen);
	for (e1 = e, i = 0; nfrags--; i++) {
		skb_frag_t *frag = &skb_shinfo(skb)->frags[i];

		ce++;
		e1++;
		if (++pidx == q->size) {
			pidx = 0;
			gen ^= 1;
			ce = q->centries;
			e1 = q->entries;
		}

		mapping = pci_map_page(adapter->pdev, frag->page,
				       frag->page_offset, frag->size,
				       PCI_DMA_TODEVICE);
		ce->skb = NULL;
		pci_unmap_addr_set(ce, dma_addr, mapping);
		pci_unmap_len_set(ce, dma_len, frag->size);

		e1->addr_lo = (u32)mapping;
		e1->addr_hi = (u64)mapping >> 32;
		e1->len_gen = V_CMD_LEN(frag->size) | V_CMD_GEN1(gen);
		e1->flags = F_CMD_DATAVALID | V_CMD_EOP(nfrags == 0) |
			    V_CMD_GEN2(gen);
	}

	ce->skb = skb;
	wmb();
	e->flags = flags;
}

/*
 * Clean up completed Tx buffers.
 */
static inline void reclaim_completed_tx(struct sge *sge, struct cmdQ *q)
{
	unsigned int reclaim = q->processed - q->cleaned;

	if (reclaim) {
		free_cmdQ_buffers(sge, q, reclaim);
		q->cleaned += reclaim;
	}
}

#ifndef SET_ETHTOOL_OPS
# define __netif_rx_complete(dev) netif_rx_complete(dev)
#endif

/*
 * We cannot use the standard netif_rx_schedule_prep() because we have multiple
 * ports plus the TOE all multiplexing onto a single response queue, therefore
 * accepting new responses cannot depend on the state of any particular port.
 * So define our own equivalent that omits the netif_running() test.
 */
static inline int napi_schedule_prep(struct net_device *dev)
{
	return !test_and_set_bit(__LINK_STATE_RX_SCHED, &dev->state);
}

/**
 * sge_rx - process an ingress ethernet packet
 * @sge: the sge structure
 * @fl: the free list that contains the packet buffer
 * @len: the packet length
 *
 * Process an ingress Ethernet packet and deliver it to the stack.
 */
static int sge_rx(struct sge *sge, struct freelQ *fl, unsigned int len)
{
	struct sk_buff *skb;
	struct cpl_rx_pkt *p;
	struct adapter *adapter = sge->adapter;

	sge->stats.ethernet_pkts++;
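	/*
	 * Illustrative sketch (compiled out): the copy-vs-recycle policy
	 * that get_packet() applies below, reduced to its essentials.
	 * All names here are schematic, not driver API.
	 */
#if 0
	if (len < copy_thres) {
		copy = alloc_skb(len + pad, GFP_ATOMIC);
		if (copy)
			recycle_fl_buf(fl, fl->cidx);	/* copied: recycle original */
		else if (drop_thres)
			drop();		/* no memory for the copy */
	} else if (fl->credits < drop_thres) {
		drop();			/* too big to copy, free list nearly empty */
	} else {
		use_original_buffer();	/* unmap buffer and hand it to the stack */
	}
#endif
	skb = get_packet(adapter->pdev, fl, len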
- sge->rx_pkt_pad,961+ sge->rx_pkt_pad, 2, SGE_RX_COPY_THRES,962+ SGE_RX_DROP_THRES);963+ if (!skb) {964+ sge->port_stats[0].rx_drops++; /* charge only port 0 for now */965+ return 0;966+ }967+968+ p = (struct cpl_rx_pkt *)skb->data;969+ skb_pull(skb, sizeof(*p));970+ skb->dev = adapter->port[p->iff].dev;971+ skb->dev->last_rx = jiffies;972+ skb->protocol = eth_type_trans(skb, skb->dev);973+ if ((adapter->flags & RX_CSUM_ENABLED) && p->csum == 0xffff &&974+ skb->protocol == htons(ETH_P_IP) &&975+ (skb->data[9] == IPPROTO_TCP || skb->data[9] == IPPROTO_UDP)) {976+ sge->port_stats[p->iff].rx_cso_good++;977+ skb->ip_summed = CHECKSUM_UNNECESSARY;978+ } else979+ skb->ip_summed = CHECKSUM_NONE;980+981+ if (unlikely(adapter->vlan_grp && p->vlan_valid)) {982+ sge->port_stats[p->iff].vlan_xtract++;983+ if (adapter->params.sge.polling)984+ vlan_hwaccel_receive_skb(skb, adapter->vlan_grp,985+ ntohs(p->vlan));986+ else987+ vlan_hwaccel_rx(skb, adapter->vlan_grp,988+ ntohs(p->vlan));989+ } else if (adapter->params.sge.polling)990+ netif_receive_skb(skb);991+ else992+ netif_rx(skb);993+ return 0;994+}995+996+/*997+ * Returns true if a command queue has enough available descriptors that998+ * we can resume Tx operation after temporarily disabling its packet queue.999+ */1000+static inline int enough_free_Tx_descs(const struct cmdQ *q)1001+{1002+ unsigned int r = q->processed - q->cleaned;1003+1004+ return q->in_use - r < (q->size >> 1);1005+}1006+1007+/*1008+ * Called when sufficient space has become available in the SGE command queues1009+ * after the Tx packet schedulers have been suspended to restart the Tx path.1010+ */1011+static void restart_tx_queues(struct sge *sge)1012+{1013+ struct adapter *adap = sge->adapter;1014+1015+ if (enough_free_Tx_descs(&sge->cmdQ[0])) {1016+ int i;1017+1018+ for_each_port(adap, i) {1019+ struct net_device *nd = adap->port[i].dev;1020+1021+ if (test_and_clear_bit(nd->if_port,1022+ &sge->stopped_tx_queues) &&1023+ netif_running(nd)) {1024+ sge->stats.cmdQ_restarted[3]++;1025+ netif_wake_queue(nd);1026+ }1027+ }1028+ }1029+}1030+1031+/*1032+ * update_tx_info is called from the interrupt handler/NAPI to return cmdQ0 1033+ * information.1034+ */1035+static unsigned int update_tx_info(struct adapter *adapter, 1036+ unsigned int flags, 1037+ unsigned int pr0)1038+{1039+ struct sge *sge = adapter->sge;1040+ struct cmdQ *cmdq = &sge->cmdQ[0];1041+1042+ cmdq->processed += pr0;1043+1044+ if (flags & F_CMDQ0_ENABLE) {1045+ clear_bit(CMDQ_STAT_RUNNING, &cmdq->status);1046+1047+ if (cmdq->cleaned + cmdq->in_use != cmdq->processed &&1048+ !test_and_set_bit(CMDQ_STAT_LAST_PKT_DB, &cmdq->status)) {1049+ set_bit(CMDQ_STAT_RUNNING, &cmdq->status);1050+ writel(F_CMDQ0_ENABLE, adapter->regs + A_SG_DOORBELL);1051+ }1052+ flags &= ~F_CMDQ0_ENABLE;1053+ }1054+1055+ if (unlikely(sge->stopped_tx_queues != 0))1056+ restart_tx_queues(sge);1057+1058+ return flags;1059+}1060+1061+/*1062+ * Process SGE responses, up to the supplied budget. Returns the number of1063+ * responses processed. 
A negative budget is effectively unlimited.
 */
static int process_responses(struct adapter *adapter, int budget)
{
	struct sge *sge = adapter->sge;
	struct respQ *q = &sge->respQ;
	struct respQ_e *e = &q->entries[q->cidx];
	int budget_left = budget;
	unsigned int flags = 0;
	unsigned int cmdq_processed[SGE_CMDQ_N] = {0, 0};

	while (likely(budget_left && e->GenerationBit == q->genbit)) {
		flags |= e->Qsleeping;

		cmdq_processed[0] += e->Cmdq0CreditReturn;
		cmdq_processed[1] += e->Cmdq1CreditReturn;

		/* We batch updates to the TX side to avoid cacheline
		 * ping-pong of TX state information on MP where the sender
		 * might run on a different CPU than this function...
		 */
		if (unlikely(flags & F_CMDQ0_ENABLE || cmdq_processed[0] > 64)) {
			flags = update_tx_info(adapter, flags, cmdq_processed[0]);
			cmdq_processed[0] = 0;
		}
		if (unlikely(cmdq_processed[1] > 16)) {
			sge->cmdQ[1].processed += cmdq_processed[1];
			cmdq_processed[1] = 0;
		}
		if (likely(e->DataValid)) {
			struct freelQ *fl = &sge->freelQ[e->FreelistQid];

			if (unlikely(!e->Sop || !e->Eop))
				BUG();
			if (unlikely(e->Offload))
				unexpected_offload(adapter, fl);
			else
				sge_rx(sge, fl, e->BufferLength);

			/*
			 * Note: this depends on each packet consuming a
			 * single free-list buffer; cf. the BUG above.
			 */
			if (++fl->cidx == fl->size)
				fl->cidx = 0;
			if (unlikely(--fl->credits <
				     fl->size - SGE_FREEL_REFILL_THRESH))
				refill_free_list(sge, fl);
		} else
			sge->stats.pure_rsps++;

		e++;
		if (unlikely(++q->cidx == q->size)) {
			q->cidx = 0;
			q->genbit ^= 1;
			e = q->entries;
		}
		prefetch(e);

		if (++q->credits > SGE_RESPQ_REPLENISH_THRES) {
			writel(q->credits, adapter->regs + A_SG_RSPQUEUECREDIT);
			q->credits = 0;
		}
		--budget_left;
	}

	flags = update_tx_info(adapter, flags, cmdq_processed[0]);
	sge->cmdQ[1].processed += cmdq_processed[1];

	budget -= budget_left;
	return budget;
}

/*
 * A simpler version of process_responses() that handles only pure (i.e.,
 * non data-carrying) responses. Such responses are too lightweight to justify
 * calling a softirq when using NAPI, so we handle them specially in hard
 * interrupt context. The function is called with a pointer to a response,
 * which the caller must ensure is a valid pure response.
Returns 1 if it1143+ * encounters a valid data-carrying response, 0 otherwise.1144+ */1145+static int process_pure_responses(struct adapter *adapter, struct respQ_e *e)1146+{1147+ struct sge *sge = adapter->sge;1148+ struct respQ *q = &sge->respQ;1149+ unsigned int flags = 0;1150+ unsigned int cmdq_processed[SGE_CMDQ_N] = {0, 0};1151+1152+ do {1153+ flags |= e->Qsleeping;1154+1155+ cmdq_processed[0] += e->Cmdq0CreditReturn;1156+ cmdq_processed[1] += e->Cmdq1CreditReturn;1157+1158+ e++;1159+ if (unlikely(++q->cidx == q->size)) {1160+ q->cidx = 0;1161+ q->genbit ^= 1;1162+ e = q->entries;1163+ }1164+ prefetch(e);1165+1166+ if (++q->credits > SGE_RESPQ_REPLENISH_THRES) {1167+ writel(q->credits, adapter->regs + A_SG_RSPQUEUECREDIT);1168+ q->credits = 0;1169+ }1170+ sge->stats.pure_rsps++;1171+ } while (e->GenerationBit == q->genbit && !e->DataValid);1172+1173+ flags = update_tx_info(adapter, flags, cmdq_processed[0]); 1174+ sge->cmdQ[1].processed += cmdq_processed[1];1175+1176+ return e->GenerationBit == q->genbit;1177+}1178+1179+/*1180+ * Handler for new data events when using NAPI. This does not need any locking1181+ * or protection from interrupts as data interrupts are off at this point and1182+ * other adapter interrupts do not interfere.1183+ */1184+static int t1_poll(struct net_device *dev, int *budget)1185+{1186+ struct adapter *adapter = dev->priv;1187+ int effective_budget = min(*budget, dev->quota);1188+1189+ int work_done = process_responses(adapter, effective_budget);1190+ *budget -= work_done;1191+ dev->quota -= work_done;1192+1193+ if (work_done >= effective_budget)1194+ return 1;1195+1196+ __netif_rx_complete(dev);1197+1198+ /*1199+ * Because we don't atomically flush the following write it is1200+ * possible that in very rare cases it can reach the device in a way1201+ * that races with a new response being written plus an error interrupt1202+ * causing the NAPI interrupt handler below to return unhandled status1203+ * to the OS. To protect against this would require flushing the write1204+ * and doing both the write and the flush with interrupts off. Way too1205+ * expensive and unjustifiable given the rarity of the race.1206+ */1207+ writel(adapter->sge->respQ.cidx, adapter->regs + A_SG_SLEEPING);1208+ return 0;1209+}1210+1211+/*1212+ * Returns true if the device is already scheduled for polling.1213+ */1214+static inline int napi_is_scheduled(struct net_device *dev)1215+{1216+ return test_bit(__LINK_STATE_RX_SCHED, &dev->state);1217+}1218+1219+/*1220+ * NAPI version of the main interrupt handler.1221+ */1222+static irqreturn_t t1_interrupt_napi(int irq, void *data, struct pt_regs *regs)1223+{1224+ int handled;1225+ struct adapter *adapter = data;1226+ struct sge *sge = adapter->sge;1227+ struct respQ *q = &adapter->sge->respQ;1228+1229+ /*1230+ * Clear the SGE_DATA interrupt first thing. 
Normally the NAPI1231+ * handler has control of the response queue and the interrupt handler1232+ * can look at the queue reliably only once it knows NAPI is off.1233+ * We can't wait that long to clear the SGE_DATA interrupt because we1234+ * could race with t1_poll rearming the SGE interrupt, so we need to1235+ * clear the interrupt speculatively and really early on.1236+ */1237+ writel(F_PL_INTR_SGE_DATA, adapter->regs + A_PL_CAUSE);1238+1239+ spin_lock(&adapter->async_lock);1240+ if (!napi_is_scheduled(sge->netdev)) {1241+ struct respQ_e *e = &q->entries[q->cidx];1242+1243+ if (e->GenerationBit == q->genbit) {1244+ if (e->DataValid ||1245+ process_pure_responses(adapter, e)) {1246+ if (likely(napi_schedule_prep(sge->netdev)))1247+ __netif_rx_schedule(sge->netdev);1248+ else1249+ printk(KERN_CRIT1250+ "NAPI schedule failure!\n");1251+ } else1252+ writel(q->cidx, adapter->regs + A_SG_SLEEPING);1253+ handled = 1;1254+ goto unlock;1255+ } else1256+ writel(q->cidx, adapter->regs + A_SG_SLEEPING);1257+ } else1258+ if (readl(adapter->regs + A_PL_CAUSE) & F_PL_INTR_SGE_DATA)1259+ printk(KERN_ERR "data interrupt while NAPI running\n");1260+1261+ handled = t1_slow_intr_handler(adapter);1262+ if (!handled)1263+ sge->stats.unhandled_irqs++;1264+ unlock:1265+ spin_unlock(&adapter->async_lock);1266+ return IRQ_RETVAL(handled != 0);1267+}1268+1269+/*1270+ * Main interrupt handler, optimized assuming that we took a 'DATA'1271+ * interrupt.1272+ *1273+ * 1. Clear the interrupt1274+ * 2. Loop while we find valid descriptors and process them; accumulate1275+ * information that can be processed after the loop1276+ * 3. Tell the SGE at which index we stopped processing descriptors1277+ * 4. Bookkeeping; free TX buffers, ring doorbell if there are any1278+ * outstanding TX buffers waiting, replenish RX buffers, potentially1279+ * reenable upper layers if they were turned off due to lack of TX1280+ * resources which are available again.1281+ * 5. If we took an interrupt, but no valid respQ descriptors was found we1282+ * let the slow_intr_handler run and do error handling.1283+ */1284+static irqreturn_t t1_interrupt(int irq, void *cookie, struct pt_regs *regs)1285+{1286+ int work_done;1287+ struct respQ_e *e;1288+ struct adapter *adapter = cookie;1289+ struct respQ *Q = &adapter->sge->respQ;1290+1291+ spin_lock(&adapter->async_lock);1292+ e = &Q->entries[Q->cidx];1293+ prefetch(e);1294+1295+ writel(F_PL_INTR_SGE_DATA, adapter->regs + A_PL_CAUSE);1296+1297+ if (likely(e->GenerationBit == Q->genbit))1298+ work_done = process_responses(adapter, -1);1299+ else1300+ work_done = t1_slow_intr_handler(adapter);1301+1302+ /*1303+ * The unconditional clearing of the PL_CAUSE above may have raced1304+ * with DMA completion and the corresponding generation of a response1305+ * to cause us to miss the resulting data interrupt. The next write1306+ * is also unconditional to recover the missed interrupt and render1307+ * this race harmless.1308+ */1309+ writel(Q->cidx, adapter->regs + A_SG_SLEEPING);1310+1311+ if (!work_done)1312+ adapter->sge->stats.unhandled_irqs++;1313+ spin_unlock(&adapter->async_lock);1314+ return IRQ_RETVAL(work_done != 0);1315+}1316+1317+intr_handler_t t1_select_intr_handler(adapter_t *adapter)1318+{1319+ return adapter->params.sge.polling ? 
t1_interrupt_napi : t1_interrupt;
}

/*
 * Enqueues the sk_buff onto the cmdQ[qid] and has hardware fetch it.
 *
 * The code figures out how many entries the sk_buff will require in the
 * cmdQ and updates the cmdQ data structure with the state once the enqueue
 * has completed. Then, it doesn't access the global structure anymore, but
 * uses the corresponding fields on the stack. In conjunction with a spinlock
 * around that code, we can make the function reentrant without holding the
 * lock when we actually enqueue (which might be expensive, especially on
 * architectures with IO MMUs).
 *
 * This runs with softirqs disabled.
 */
unsigned int t1_sge_tx(struct sk_buff *skb, struct adapter *adapter,
		       unsigned int qid, struct net_device *dev)
{
	struct sge *sge = adapter->sge;
	struct cmdQ *q = &sge->cmdQ[qid];
	unsigned int credits, pidx, genbit, count;

	spin_lock(&q->lock);
	reclaim_completed_tx(sge, q);

	pidx = q->pidx;
	credits = q->size - q->in_use;
	count = 1 + skb_shinfo(skb)->nr_frags;

	{	/* Ethernet packet */
		if (unlikely(credits < count)) {
			netif_stop_queue(dev);
			set_bit(dev->if_port, &sge->stopped_tx_queues);
			sge->stats.cmdQ_full[3]++;
			spin_unlock(&q->lock);
			CH_ERR("%s: Tx ring full while queue awake!\n",
			       adapter->name);
			return 1;
		}
		if (unlikely(credits - count < q->stop_thres)) {
			sge->stats.cmdQ_full[3]++;
			netif_stop_queue(dev);
			set_bit(dev->if_port, &sge->stopped_tx_queues);
		}
	}
	q->in_use += count;
	genbit = q->genbit;
	q->pidx += count;
	if (q->pidx >= q->size) {
		q->pidx -= q->size;
		q->genbit ^= 1;
	}
	spin_unlock(&q->lock);

	write_tx_descs(adapter, skb, pidx, genbit, q);

	/*
	 * We always ring the doorbell for cmdQ1. For cmdQ0, we only ring
	 * the doorbell if the Q is asleep. There is a natural race, where
	 * the hardware is going to sleep just after we checked, however,
	 * then the interrupt handler will detect the outstanding TX packet
	 * and ring the doorbell for us.
	 */
	if (qid)
		doorbell_pio(adapter, F_CMDQ1_ENABLE);
	else {
		clear_bit(CMDQ_STAT_LAST_PKT_DB, &q->status);
		if (test_and_set_bit(CMDQ_STAT_RUNNING, &q->status) == 0) {
			set_bit(CMDQ_STAT_LAST_PKT_DB, &q->status);
			writel(F_CMDQ0_ENABLE, adapter->regs + A_SG_DOORBELL);
		}
	}
	return 0;
}

#define MK_ETH_TYPE_MSS(type, mss) (((mss) & 0x3FFF) | ((type) << 14))

/*
 * eth_hdr_len - return the length of an Ethernet header
 * @data: pointer to the start of the Ethernet header
 *
 * Returns the length of an Ethernet header, including optional VLAN tag.
 */
static inline int eth_hdr_len(const void *data)
{
	const struct ethhdr *e = data;

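	/*
	 * Illustrative sketch (compiled out): MK_ETH_TYPE_MSS() above packs
	 * the Ethernet encapsulation type into bits 15:14 and the MSS into
	 * bits 13:0 of the 16-bit eth_type_mss field, for example:
	 */
#if 0
	BUILD_BUG_ON(MK_ETH_TYPE_MSS(1, 1460) != ((1 << 14) | 1460));
	BUILD_BUG_ON(MK_ETH_TYPE_MSS(0x3, 0x3FFF) != 0xFFFF);	/* all bits set */
#endif
	return e->h_proto == htons(ETH_P_8021Q) ?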
VLAN_ETH_HLEN : ETH_HLEN;
}

/*
 * Adds the CPL header to the sk_buff and passes it to t1_sge_tx.
 */
int t1_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct adapter *adapter = dev->priv;
	struct sge_port_stats *st = &adapter->sge->port_stats[dev->if_port];
	struct sge *sge = adapter->sge;
	struct cpl_tx_pkt *cpl;

#ifdef NETIF_F_TSO
	if (skb_shinfo(skb)->tso_size) {
		int eth_type;
		struct cpl_tx_pkt_lso *hdr;

		st->tso++;

		eth_type = skb->nh.raw - skb->data == ETH_HLEN ?
			CPL_ETH_II : CPL_ETH_II_VLAN;

		hdr = (struct cpl_tx_pkt_lso *)skb_push(skb, sizeof(*hdr));
		hdr->opcode = CPL_TX_PKT_LSO;
		hdr->ip_csum_dis = hdr->l4_csum_dis = 0;
		hdr->ip_hdr_words = skb->nh.iph->ihl;
		hdr->tcp_hdr_words = skb->h.th->doff;
		hdr->eth_type_mss = htons(MK_ETH_TYPE_MSS(eth_type,
						skb_shinfo(skb)->tso_size));
		hdr->len = htonl(skb->len - sizeof(*hdr));
		cpl = (struct cpl_tx_pkt *)hdr;
		sge->stats.tx_lso_pkts++;
	} else
#endif
	{
		/*
		 * Packets shorter than ETH_HLEN can break the MAC, drop them
		 * early. Also, we may get oversized packets because some
		 * parts of the kernel don't handle our unusual hard_header_len
		 * right, drop those too.
		 */
		if (unlikely(skb->len < ETH_HLEN ||
			     skb->len > dev->mtu + eth_hdr_len(skb->data))) {
			dev_kfree_skb_any(skb);
			return NET_XMIT_SUCCESS;
		}

		/*
		 * We are using a non-standard hard_header_len and some kernel
		 * components, such as pktgen, do not handle it right.
		 * Complain when this happens but try to fix things up.
		 */
		if (unlikely(skb_headroom(skb) <
			     dev->hard_header_len - ETH_HLEN)) {
			struct sk_buff *orig_skb = skb;

			if (net_ratelimit())
				printk(KERN_ERR "%s: inadequate headroom in "
				       "Tx packet\n", dev->name);
			skb = skb_realloc_headroom(skb, sizeof(*cpl));
			dev_kfree_skb_any(orig_skb);
			if (!skb)
				return -ENOMEM;
		}

		if (!(adapter->flags & UDP_CSUM_CAPABLE) &&
		    skb->ip_summed == CHECKSUM_HW &&
		    skb->nh.iph->protocol == IPPROTO_UDP)
			if (unlikely(skb_checksum_help(skb, 0))) {
				dev_kfree_skb_any(skb);
				return -ENOMEM;
			}

		/*
		 * We assume this catches the gratuitous ARP sent when the
		 * interface comes up; keep the skb so it can later be
		 * resent to flush out stuck ESPI packets.
		 */
		if (unlikely(!adapter->sge->espibug_skb)) {
			if (skb->protocol == htons(ETH_P_ARP) &&
			    skb->nh.arph->ar_op == htons(ARPOP_REQUEST)) {
				adapter->sge->espibug_skb = skb;
				/* We want to reuse this skb later. We
				 * simply bump the reference count and it
				 * will not be freed...
				 */
				skb = skb_get(skb);
			}
		}

		cpl = (struct cpl_tx_pkt *)__skb_push(skb, sizeof(*cpl));
		cpl->opcode = CPL_TX_PKT;
		cpl->ip_csum_dis = 1;	/* SW calculates IP csum */
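		/*
		 * Worked example (documentation only): how the two CPL
		 * checksum knobs are driven by skb->ip_summed here. The IP
		 * checksum is always computed in software (ip_csum_dis = 1);
		 * the L4 checksum is left to hardware only when the stack
		 * asked for offload:
		 *
		 *	skb->ip_summed	ip_csum_dis	l4_csum_dis
		 *	CHECKSUM_HW	     1		     0
		 *	otherwise	     1		     1
		 */
		cpl->l4_csum_dis = skb->ip_summed == CHECKSUM_HW ?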
0 : 1;1500+ /* the length field isn't used so don't bother setting it */1501+1502+ st->tx_cso += (skb->ip_summed == CHECKSUM_HW);1503+ sge->stats.tx_do_cksum += (skb->ip_summed == CHECKSUM_HW);1504+ sge->stats.tx_reg_pkts++;1505+ }1506+ cpl->iff = dev->if_port;1507+1508+#if defined(CONFIG_VLAN_8021Q) || defined(CONFIG_VLAN_8021Q_MODULE)1509+ if (adapter->vlan_grp && vlan_tx_tag_present(skb)) {1510+ cpl->vlan_valid = 1;1511+ cpl->vlan = htons(vlan_tx_tag_get(skb));1512+ st->vlan_insert++;1513+ } else1514+#endif1515+ cpl->vlan_valid = 0;1516+1517+ dev->trans_start = jiffies;1518+ return t1_sge_tx(skb, adapter, 0, dev);1519+}1520+1521+/*1522+ * Callback for the Tx buffer reclaim timer. Runs with softirqs disabled.1523+ */1524+static void sge_tx_reclaim_cb(unsigned long data)1525+{1526+ int i;1527+ struct sge *sge = (struct sge *)data;1528+1529+ for (i = 0; i < SGE_CMDQ_N; ++i) {1530+ struct cmdQ *q = &sge->cmdQ[i];1531+1532+ if (!spin_trylock(&q->lock))1533+ continue;1534+1535+ reclaim_completed_tx(sge, q);1536+ if (i == 0 && q->in_use) /* flush pending credits */1537+ writel(F_CMDQ0_ENABLE,1538+ sge->adapter->regs + A_SG_DOORBELL);1539+1540+ spin_unlock(&q->lock);1541+ }1542+ mod_timer(&sge->tx_reclaim_timer, jiffies + TX_RECLAIM_PERIOD);1543+}1544+1545+/*1546+ * Propagate changes of the SGE coalescing parameters to the HW.1547+ */1548+int t1_sge_set_coalesce_params(struct sge *sge, struct sge_params *p)1549+{1550+ sge->netdev->poll = t1_poll;1551+ sge->fixed_intrtimer = p->rx_coalesce_usecs *1552+ core_ticks_per_usec(sge->adapter);1553+ writel(sge->fixed_intrtimer, sge->adapter->regs + A_SG_INTRTIMER);1554+ return 0;1555+}1556+1557+/*1558+ * Allocates both RX and TX resources and configures the SGE. However,1559+ * the hardware is not enabled yet.1560+ */1561+int t1_sge_configure(struct sge *sge, struct sge_params *p)1562+{1563+ if (alloc_rx_resources(sge, p))1564+ return -ENOMEM;1565+ if (alloc_tx_resources(sge, p)) {1566+ free_rx_resources(sge);1567+ return -ENOMEM;1568+ }1569+ configure_sge(sge, p);1570+1571+ /*1572+ * Now that we have sized the free lists calculate the payload1573+ * capacity of the large buffers. 
Other parts of the driver use
	 * this to set the max offload coalescing size so that RX packets
	 * do not overflow our large buffers.
	 */
	p->large_buf_capacity = jumbo_payload_capacity(sge);
	return 0;
}

/*
 * Disables the DMA engine.
 */
void t1_sge_stop(struct sge *sge)
{
	writel(0, sge->adapter->regs + A_SG_CONTROL);
	(void) readl(sge->adapter->regs + A_SG_CONTROL);	/* flush */
	if (is_T2(sge->adapter))
		del_timer_sync(&sge->espibug_timer);
	del_timer_sync(&sge->tx_reclaim_timer);
}

/*
 * Enables the DMA engine.
 */
void t1_sge_start(struct sge *sge)
{
	refill_free_list(sge, &sge->freelQ[0]);
	refill_free_list(sge, &sge->freelQ[1]);

	writel(sge->sge_control, sge->adapter->regs + A_SG_CONTROL);
	doorbell_pio(sge->adapter, F_FL0_ENABLE | F_FL1_ENABLE);
	(void) readl(sge->adapter->regs + A_SG_CONTROL);	/* flush */

	mod_timer(&sge->tx_reclaim_timer, jiffies + TX_RECLAIM_PERIOD);

	if (is_T2(sge->adapter))
		mod_timer(&sge->espibug_timer, jiffies + sge->espibug_timeout);
}

/*
 * Callback for the T2 ESPI 'stuck packet feature' workaround
 */
static void espibug_workaround(void *data)
{
	struct adapter *adapter = (struct adapter *)data;
	struct sge *sge = adapter->sge;

	if (netif_running(adapter->port[0].dev)) {
		struct sk_buff *skb = sge->espibug_skb;

		u32 seop = t1_espi_get_mon(adapter, 0x930, 0);

		if ((seop & 0xfff0fff) == 0xfff && skb) {
			if (!skb->cb[0]) {
				u8 ch_mac_addr[ETH_ALEN] =
					{0x0, 0x7, 0x43, 0x0, 0x0, 0x0};
				memcpy(skb->data + sizeof(struct cpl_tx_pkt),
				       ch_mac_addr, ETH_ALEN);
				memcpy(skb->data + skb->len - 10, ch_mac_addr,
				       ETH_ALEN);
				skb->cb[0] = 0xff;
			}

			/* bump the reference count to avoid freeing of the
			 * skb once the DMA has completed.
			 */
			skb = skb_get(skb);
			t1_sge_tx(skb, adapter, 0, adapter->port[0].dev);
		}
	}
	mod_timer(&sge->espibug_timer, jiffies + sge->espibug_timeout);
}

/*
 * Creates a t1_sge structure and returns suggested resource parameters.
 */
struct sge * __devinit t1_sge_create(struct adapter *adapter,
				     struct sge_params *p)
{
	struct sge *sge = kmalloc(sizeof(*sge), GFP_KERNEL);

	if (!sge)
		return NULL;
	memset(sge, 0, sizeof(*sge));

	sge->adapter = adapter;
	sge->netdev = adapter->port[0].dev;
	sge->rx_pkt_pad = t1_is_T1B(adapter) ? 0 : 2;
	sge->jumbo_fl = t1_is_T1B(adapter) ? 1 : 0;

	init_timer(&sge->tx_reclaim_timer);
	sge->tx_reclaim_timer.data = (unsigned long)sge;
	sge->tx_reclaim_timer.function = sge_tx_reclaim_cb;

	if (is_T2(sge->adapter)) {
		init_timer(&sge->espibug_timer);
		sge->espibug_timer.function = (void *)&espibug_workaround;
		sge->espibug_timer.data = (unsigned long)sge->adapter;
		sge->espibug_timeout = 1;
	}

	p->cmdQ_size[0] = SGE_CMDQ0_E_N;
	p->cmdQ_size[1] = SGE_CMDQ1_E_N;
	p->freelQ_size[!sge->jumbo_fl] = SGE_FREEL_SIZE;
	p->freelQ_size[sge->jumbo_fl] = SGE_JUMBO_FREEL_SIZE;
	p->rx_coalesce_usecs = 50;
	p->coalesce_enable = 0;
	p->sample_interval_usecs = 0;
	p->polling = 0;

	return sge;
}
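
/*
 * Illustrative sketch (compiled out): the bring-up sequence this file's
 * exported API is designed for. Error unwinding is abbreviated and the
 * function name is hypothetical; the real driver also registers the net
 * devices and an interrupt handler (see t1_select_intr_handler()).
 */
#if 0
static int example_sge_bringup(struct adapter *adapter)
{
	struct sge_params p;
	struct sge *sge = t1_sge_create(adapter, &p);	/* suggests ring sizes */

	if (!sge)
		return -ENOMEM;
	if (t1_sge_configure(sge, &p))			/* allocates RX/TX rings */
		return -ENOMEM;
	t1_sge_intr_clear(sge);
	t1_sge_intr_enable(sge);
	t1_sge_start(sge);				/* fills free lists, enables DMA */
	return 0;
}
#endif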
···1+/*****************************************************************************2+ * *3+ * File: sge.h *4+ * $Revision: 1.11 $ *5+ * $Date: 2005/06/21 22:10:55 $ *6+ * Description: *7+ * part of the Chelsio 10Gb Ethernet Driver. *8+ * *9+ * This program is free software; you can redistribute it and/or modify *10+ * it under the terms of the GNU General Public License, version 2, as *11+ * published by the Free Software Foundation. *12+ * *13+ * You should have received a copy of the GNU General Public License along *14+ * with this program; if not, write to the Free Software Foundation, Inc., *15+ * 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. *16+ * *17+ * THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR IMPLIED *18+ * WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF *19+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. *20+ * *21+ * http://www.chelsio.com *22+ * *23+ * Copyright (c) 2003 - 2005 Chelsio Communications, Inc. *24+ * All rights reserved. *25+ * *26+ * Maintainers: maintainers@chelsio.com *27+ * *28+ * Authors: Dimitrios Michailidis <dm@chelsio.com> *29+ * Tina Yang <tainay@chelsio.com> *30+ * Felix Marti <felix@chelsio.com> *31+ * Scott Bardone <sbardone@chelsio.com> *32+ * Kurt Ottaway <kottaway@chelsio.com> *33+ * Frank DiMambro <frank@chelsio.com> *34+ * *35+ * History: *36+ * *37+ ****************************************************************************/38+39+#ifndef _CXGB_SGE_H_40+#define _CXGB_SGE_H_41+42+#include <linux/types.h>43+#include <linux/interrupt.h>44+#include <asm/byteorder.h>45+46+#ifndef IRQ_RETVAL47+#define IRQ_RETVAL(x)48+typedef void irqreturn_t;49+#endif50+51+typedef irqreturn_t (*intr_handler_t)(int, void *, struct pt_regs *);52+53+struct sge_intr_counts {54+ unsigned int respQ_empty; /* # times respQ empty */55+ unsigned int respQ_overflow; /* # respQ overflow (fatal) */56+ unsigned int freelistQ_empty; /* # times freelist empty */57+ unsigned int pkt_too_big; /* packet too large (fatal) */58+ unsigned int pkt_mismatch;59+ unsigned int cmdQ_full[3]; /* not HW IRQ, host cmdQ[] full */60+ unsigned int cmdQ_restarted[3];/* # of times cmdQ X was restarted */61+ unsigned int ethernet_pkts; /* # of Ethernet packets received */62+ unsigned int offload_pkts; /* # of offload packets received */63+ unsigned int offload_bundles; /* # of offload pkt bundles delivered */64+ unsigned int pure_rsps; /* # of non-payload responses */65+ unsigned int unhandled_irqs; /* # of unhandled interrupts */66+ unsigned int tx_ipfrags;67+ unsigned int tx_reg_pkts;68+ unsigned int tx_lso_pkts;69+ unsigned int tx_do_cksum;70+};71+72+struct sge_port_stats {73+ unsigned long rx_cso_good; /* # of successful RX csum offloads */74+ unsigned long tx_cso; /* # of TX checksum offloads */75+ unsigned long vlan_xtract; /* # of VLAN tag extractions */76+ unsigned long vlan_insert; /* # of VLAN tag extractions */77+ unsigned long tso; /* # of TSO requests */78+ unsigned long rx_drops; /* # of packets dropped due to no mem */79+};80+81+struct sk_buff;82+struct net_device;83+struct adapter;84+struct sge_params;85+struct sge;86+87+struct sge *t1_sge_create(struct adapter *, struct sge_params *);88+int t1_sge_configure(struct sge *, struct sge_params *);89+int t1_sge_set_coalesce_params(struct sge *, struct sge_params *);90+void t1_sge_destroy(struct sge *);91+intr_handler_t t1_select_intr_handler(adapter_t *adapter);92+unsigned int t1_sge_tx(struct sk_buff *skb, struct adapter *adapter,93+ unsigned int qid, struct net_device 
*netdev);94+int t1_start_xmit(struct sk_buff *skb, struct net_device *dev);95+void t1_set_vlan_accel(struct adapter *adapter, int on_off);96+void t1_sge_start(struct sge *);97+void t1_sge_stop(struct sge *);98+int t1_sge_intr_error_handler(struct sge *);99+void t1_sge_intr_enable(struct sge *);100+void t1_sge_intr_disable(struct sge *);101+void t1_sge_intr_clear(struct sge *);102+const struct sge_intr_counts *t1_sge_get_intr_counts(struct sge *sge);103+const struct sge_port_stats *t1_sge_get_port_stats(struct sge *sge, int port);104+105+#endif /* _CXGB_SGE_H_ */
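
/*
 * Illustrative sketch (compiled out): how a caller is expected to consume
 * the read-only statistics accessors declared above. The helper name
 * example_dump_sge_stats is hypothetical.
 */
#if 0
static void example_dump_sge_stats(struct sge *sge)
{
	const struct sge_intr_counts *ic = t1_sge_get_intr_counts(sge);
	const struct sge_port_stats *ps = t1_sge_get_port_stats(sge, 0);

	printk(KERN_INFO "respQ empty %u, unhandled IRQs %u, rx drops %lu\n",
	       ic->respQ_empty, ic->unhandled_irqs, ps->rx_drops);
}
#endif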
···1+/*2+ sis190.c: Silicon Integrated Systems SiS190 ethernet driver3+4+ Copyright (c) 2003 K.M. Liu <kmliu@sis.com>5+ Copyright (c) 2003, 2004 Jeff Garzik <jgarzik@pobox.com>6+ Copyright (c) 2003, 2004, 2005 Francois Romieu <romieu@fr.zoreil.com>7+8+ Based on r8169.c, tg3.c, 8139cp.c, skge.c, epic100.c and SiS 190/1919+ genuine driver.10+11+ This software may be used and distributed according to the terms of12+ the GNU General Public License (GPL), incorporated herein by reference.13+ Drivers based on or derived from this code fall under the GPL and must14+ retain the authorship, copyright and license notice. This file is not15+ a complete program and may only be used when the entire operating16+ system is licensed under the GPL.17+18+ See the file COPYING in this distribution for more information.19+20+ */21+22+#include <linux/module.h>23+#include <linux/moduleparam.h>24+#include <linux/netdevice.h>25+#include <linux/rtnetlink.h>26+#include <linux/etherdevice.h>27+#include <linux/ethtool.h>28+#include <linux/pci.h>29+#include <linux/mii.h>30+#include <linux/delay.h>31+#include <linux/crc32.h>32+#include <linux/dma-mapping.h>33+#include <asm/irq.h>34+35+#define net_drv(p, arg...) if (netif_msg_drv(p)) \36+ printk(arg)37+#define net_probe(p, arg...) if (netif_msg_probe(p)) \38+ printk(arg)39+#define net_link(p, arg...) if (netif_msg_link(p)) \40+ printk(arg)41+#define net_intr(p, arg...) if (netif_msg_intr(p)) \42+ printk(arg)43+#define net_tx_err(p, arg...) if (netif_msg_tx_err(p)) \44+ printk(arg)45+46+#define PHY_MAX_ADDR 3247+#define PHY_ID_ANY 0x1f48+#define MII_REG_ANY 0x1f49+50+#ifdef CONFIG_SIS190_NAPI51+#define NAPI_SUFFIX "-NAPI"52+#else53+#define NAPI_SUFFIX ""54+#endif55+56+#define DRV_VERSION "1.2" NAPI_SUFFIX57+#define DRV_NAME "sis190"58+#define SIS190_DRIVER_NAME DRV_NAME " Gigabit Ethernet driver " DRV_VERSION59+#define PFX DRV_NAME ": "60+61+#ifdef CONFIG_SIS190_NAPI62+#define sis190_rx_skb netif_receive_skb63+#define sis190_rx_quota(count, quota) min(count, quota)64+#else65+#define sis190_rx_skb netif_rx66+#define sis190_rx_quota(count, quota) count67+#endif68+69+#define MAC_ADDR_LEN 670+71+#define NUM_TX_DESC 64 /* [8..1024] */72+#define NUM_RX_DESC 64 /* [8..8192] */73+#define TX_RING_BYTES (NUM_TX_DESC * sizeof(struct TxDesc))74+#define RX_RING_BYTES (NUM_RX_DESC * sizeof(struct RxDesc))75+#define RX_BUF_SIZE 153676+#define RX_BUF_MASK 0xfff877+78+#define SIS190_REGS_SIZE 0x8079+#define SIS190_TX_TIMEOUT (6*HZ)80+#define SIS190_PHY_TIMEOUT (10*HZ)81+#define SIS190_MSG_DEFAULT (NETIF_MSG_DRV | NETIF_MSG_PROBE | \82+ NETIF_MSG_LINK | NETIF_MSG_IFUP | \83+ NETIF_MSG_IFDOWN)84+85+/* Enhanced PHY access register bit definitions */86+#define EhnMIIread 0x000087+#define EhnMIIwrite 0x002088+#define EhnMIIdataShift 1689+#define EhnMIIpmdShift 6 /* 7016 only */90+#define EhnMIIregShift 1191+#define EhnMIIreq 0x001092+#define EhnMIInotDone 0x001093+94+/* Write/read MMIO register */95+#define SIS_W8(reg, val) writeb ((val), ioaddr + (reg))96+#define SIS_W16(reg, val) writew ((val), ioaddr + (reg))97+#define SIS_W32(reg, val) writel ((val), ioaddr + (reg))98+#define SIS_R8(reg) readb (ioaddr + (reg))99+#define SIS_R16(reg) readw (ioaddr + (reg))100+#define SIS_R32(reg) readl (ioaddr + (reg))101+102+#define SIS_PCI_COMMIT() SIS_R32(IntrControl)103+104+enum sis190_registers {105+ TxControl = 0x00,106+ TxDescStartAddr = 0x04,107+ rsv0 = 0x08, // reserved108+ TxSts = 0x0c, // unused (Control/Status)109+ RxControl = 0x10,110+ RxDescStartAddr = 0x14,111+ rsv1 = 0x18, // reserved112+ 
RxSts = 0x1c, // unused113+ IntrStatus = 0x20,114+ IntrMask = 0x24,115+ IntrControl = 0x28,116+ IntrTimer = 0x2c, // unused (Interupt Timer)117+ PMControl = 0x30, // unused (Power Mgmt Control/Status)118+ rsv2 = 0x34, // reserved119+ ROMControl = 0x38,120+ ROMInterface = 0x3c,121+ StationControl = 0x40,122+ GMIIControl = 0x44,123+ GIoCR = 0x48, // unused (GMAC IO Compensation)124+ GIoCtrl = 0x4c, // unused (GMAC IO Control)125+ TxMacControl = 0x50,126+ TxLimit = 0x54, // unused (Tx MAC Timer/TryLimit)127+ RGDelay = 0x58, // unused (RGMII Tx Internal Delay)128+ rsv3 = 0x5c, // reserved129+ RxMacControl = 0x60,130+ RxMacAddr = 0x62,131+ RxHashTable = 0x68,132+ // Undocumented = 0x6c,133+ RxWolCtrl = 0x70,134+ RxWolData = 0x74, // unused (Rx WOL Data Access)135+ RxMPSControl = 0x78, // unused (Rx MPS Control)136+ rsv4 = 0x7c, // reserved137+};138+139+enum sis190_register_content {140+ /* IntrStatus */141+ SoftInt = 0x40000000, // unused142+ Timeup = 0x20000000, // unused143+ PauseFrame = 0x00080000, // unused144+ MagicPacket = 0x00040000, // unused145+ WakeupFrame = 0x00020000, // unused146+ LinkChange = 0x00010000,147+ RxQEmpty = 0x00000080,148+ RxQInt = 0x00000040,149+ TxQ1Empty = 0x00000020, // unused150+ TxQ1Int = 0x00000010,151+ TxQ0Empty = 0x00000008, // unused152+ TxQ0Int = 0x00000004,153+ RxHalt = 0x00000002,154+ TxHalt = 0x00000001,155+156+ /* {Rx/Tx}CmdBits */157+ CmdReset = 0x10,158+ CmdRxEnb = 0x08, // unused159+ CmdTxEnb = 0x01,160+ RxBufEmpty = 0x01, // unused161+162+ /* Cfg9346Bits */163+ Cfg9346_Lock = 0x00, // unused164+ Cfg9346_Unlock = 0xc0, // unused165+166+ /* RxMacControl */167+ AcceptErr = 0x20, // unused168+ AcceptRunt = 0x10, // unused169+ AcceptBroadcast = 0x0800,170+ AcceptMulticast = 0x0400,171+ AcceptMyPhys = 0x0200,172+ AcceptAllPhys = 0x0100,173+174+ /* RxConfigBits */175+ RxCfgFIFOShift = 13,176+ RxCfgDMAShift = 8, // 0x1a in RxControl ?177+178+ /* TxConfigBits */179+ TxInterFrameGapShift = 24,180+ TxDMAShift = 8, /* DMA burst value (0-7) is shift this many bits */181+182+ /* StationControl */183+ _1000bpsF = 0x1c00,184+ _1000bpsH = 0x0c00,185+ _100bpsF = 0x1800,186+ _100bpsH = 0x0800,187+ _10bpsF = 0x1400,188+ _10bpsH = 0x0400,189+190+ LinkStatus = 0x02, // unused191+ FullDup = 0x01, // unused192+193+ /* TBICSRBit */194+ TBILinkOK = 0x02000000, // unused195+};196+197+struct TxDesc {198+ __le32 PSize;199+ __le32 status;200+ __le32 addr;201+ __le32 size;202+};203+204+struct RxDesc {205+ __le32 PSize;206+ __le32 status;207+ __le32 addr;208+ __le32 size;209+};210+211+enum _DescStatusBit {212+ /* _Desc.status */213+ OWNbit = 0x80000000, // RXOWN/TXOWN214+ INTbit = 0x40000000, // RXINT/TXINT215+ CRCbit = 0x00020000, // CRCOFF/CRCEN216+ PADbit = 0x00010000, // PREADD/PADEN217+ /* _Desc.size */218+ RingEnd = 0x80000000,219+ /* TxDesc.status */220+ LSEN = 0x08000000, // TSO ? -- FR221+ IPCS = 0x04000000,222+ TCPCS = 0x02000000,223+ UDPCS = 0x01000000,224+ BSTEN = 0x00800000,225+ EXTEN = 0x00400000,226+ DEFEN = 0x00200000,227+ BKFEN = 0x00100000,228+ CRSEN = 0x00080000,229+ COLEN = 0x00040000,230+ THOL3 = 0x30000000,231+ THOL2 = 0x20000000,232+ THOL1 = 0x10000000,233+ THOL0 = 0x00000000,234+ /* RxDesc.status */235+ IPON = 0x20000000,236+ TCPON = 0x10000000,237+ UDPON = 0x08000000,238+ Wakup = 0x00400000,239+ Magic = 0x00200000,240+ Pause = 0x00100000,241+ DEFbit = 0x00200000,242+ BCAST = 0x000c0000,243+ MCAST = 0x00080000,244+ UCAST = 0x00040000,245+ /* RxDesc.PSize */246+ TAGON = 0x80000000,247+ RxDescCountMask = 0x7f000000, // multi-desc pkt when > 1 ? 
-- FR
	ABORT		= 0x00800000,
	SHORT		= 0x00400000,
	LIMIT		= 0x00200000,
	MIIER		= 0x00100000,
	OVRUN		= 0x00080000,
	NIBON		= 0x00040000,
	COLON		= 0x00020000,
	CRCOK		= 0x00010000,
	RxSizeMask	= 0x0000ffff
	/*
	 * The asic could apparently do vlan, TSO, jumbo (sis191 only) and
	 * provide two (unused with Linux) Tx queues. No publicly
	 * available documentation alas.
	 */
};

enum sis190_eeprom_access_register_bits {
	EECS	= 0x00000001,	// unused
	EECLK	= 0x00000002,	// unused
	EEDO	= 0x00000008,	// unused
	EEDI	= 0x00000004,	// unused
	EEREQ	= 0x00000080,
	EEROP	= 0x00000200,
	EEWOP	= 0x00000100	// unused
};

/* EEPROM Addresses */
enum sis190_eeprom_address {
	EEPROMSignature	= 0x00,
	EEPROMCLK	= 0x01,	// unused
	EEPROMInfo	= 0x02,
	EEPROMMACAddr	= 0x03
};

struct sis190_private {
	void __iomem *mmio_addr;
	struct pci_dev *pci_dev;
	struct net_device_stats stats;
	spinlock_t lock;
	u32 rx_buf_sz;
	u32 cur_rx;
	u32 cur_tx;
	u32 dirty_rx;
	u32 dirty_tx;
	dma_addr_t rx_dma;
	dma_addr_t tx_dma;
	struct RxDesc *RxDescRing;
	struct TxDesc *TxDescRing;
	struct sk_buff *Rx_skbuff[NUM_RX_DESC];
	struct sk_buff *Tx_skbuff[NUM_TX_DESC];
	struct work_struct phy_task;
	struct timer_list timer;
	u32 msg_enable;
	struct mii_if_info mii_if;
	struct list_head first_phy;
};

struct sis190_phy {
	struct list_head list;
	int phy_id;
	u16 id[2];
	u16 status;
	u8  type;
};

enum sis190_phy_type {
	UNKNOWN	= 0x00,
	HOME	= 0x01,
	LAN	= 0x02,
	MIX	= 0x03
};

static struct mii_chip_info {
	const char *name;
	u16 id[2];
	unsigned int type;
} mii_chip_table[] = {
	{ "Broadcom PHY BCM5461", { 0x0020, 0x60c0 }, LAN },
	{ "Agere PHY ET1101B",    { 0x0282, 0xf010 }, LAN },
	{ "Marvell PHY 88E1111",  { 0x0141, 0x0cc0 }, LAN },
	{ "Realtek PHY RTL8201",  { 0x0000, 0x8200 }, LAN },
	{ NULL, }
};

static const struct {
	const char *name;
	u8 version;		/* depend on docs */
	u32 RxConfigMask;	/* clear the bits supported by this chip */
} sis_chip_info[] = {
	{ DRV_NAME, 0x00, 0xff7e1880, },
};

static struct pci_device_id sis190_pci_tbl[] __devinitdata = {
	{ PCI_DEVICE(PCI_VENDOR_ID_SI, 0x0190), 0, 0, 0 },
	{ 0, },
};

MODULE_DEVICE_TABLE(pci, sis190_pci_tbl);

static int rx_copybreak = 200;

static struct {
	u32 msg_enable;
} debug = { -1 };

MODULE_DESCRIPTION("SiS sis190 Gigabit Ethernet driver");
module_param(rx_copybreak, int, 0);
MODULE_PARM_DESC(rx_copybreak, "Copy breakpoint for copy-only-tiny-frames");
module_param_named(debug, debug.msg_enable, int, 0);
MODULE_PARM_DESC(debug, "Debug verbosity level (0=none, ..., 16=all)");
MODULE_AUTHOR("K.M. Liu <kmliu@sis.com>, Ueimor <romieu@fr.zoreil.com>");
MODULE_VERSION(DRV_VERSION);
MODULE_LICENSE("GPL");

static const u32 sis190_intr_mask =
	RxQEmpty | RxQInt | TxQ1Int | TxQ0Int | RxHalt | TxHalt;

/*
 * Maximum number of multicast addresses to filter (vs. Rx-all-multicast).
 * The chips use a 64 element hash table based on the Ethernet CRC.
 */
static int multicast_filter_limit = 32;

static void __mdio_cmd(void __iomem *ioaddr, u32 ctl)
{
	unsigned int i;

	SIS_W32(GMIIControl, ctl);

	msleep(1);

	for (i = 0; i < 100; i++) {
		if (!(SIS_R32(GMIIControl) & EhnMIInotDone))
			break;
		msleep(1);
	}

	if (i > 99)
		printk(KERN_ERR PFX "PHY command failed !\n");
}

static void mdio_write(void __iomem *ioaddr, int phy_id, int reg, int val)
{
	__mdio_cmd(ioaddr, EhnMIIreq | EhnMIIwrite |
		(((u32) reg) << EhnMIIregShift) | (phy_id << EhnMIIpmdShift) |
		(((u32) val) << EhnMIIdataShift));
}

static int mdio_read(void __iomem *ioaddr, int phy_id, int reg)
{
	__mdio_cmd(ioaddr, EhnMIIreq | EhnMIIread |
		(((u32) reg) << EhnMIIregShift) | (phy_id << EhnMIIpmdShift));

	return (u16) (SIS_R32(GMIIControl) >> EhnMIIdataShift);
}

static void __mdio_write(struct net_device *dev, int phy_id, int reg, int val)
{
	struct sis190_private *tp = netdev_priv(dev);

	mdio_write(tp->mmio_addr, phy_id, reg, val);
}

static int __mdio_read(struct net_device *dev, int phy_id, int reg)
{
	struct sis190_private *tp = netdev_priv(dev);

	return mdio_read(tp->mmio_addr, phy_id, reg);
}

static u16 mdio_read_latched(void __iomem *ioaddr, int phy_id, int reg)
{
	mdio_read(ioaddr, phy_id, reg);
	return mdio_read(ioaddr, phy_id, reg);
}

static u16 __devinit sis190_read_eeprom(void __iomem *ioaddr, u32 reg)
{
	u16 data = 0xffff;
	unsigned int i;

	if (!(SIS_R32(ROMControl) & 0x0002))
		return 0;

	SIS_W32(ROMInterface, EEREQ | EEROP | (reg << 10));

	for (i = 0; i < 200; i++) {
		if (!(SIS_R32(ROMInterface) & EEREQ)) {
			data = (SIS_R32(ROMInterface) & 0xffff0000) >> 16;
			break;
		}
		msleep(1);
	}

	return data;
}

static void sis190_irq_mask_and_ack(void __iomem *ioaddr)
{
	SIS_W32(IntrMask, 0x00);
	SIS_W32(IntrStatus, 0xffffffff);
	SIS_PCI_COMMIT();
}
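
/*
 * Illustrative sketch (compiled out): a typical use of the MDIO accessors
 * above - combining the two MII ID registers into one 32-bit PHY
 * identifier. The probe path later in this file reads the same two
 * registers to identify each PHY.
 */
#if 0
	u32 id = (mdio_read(ioaddr, phy_id, MII_PHYSID1) << 16) |
		 mdio_read(ioaddr, phy_id, MII_PHYSID2);
#endif

static void sis190_asic_down(void __iomem *ioaddr)
{
	/* Stop the chip's Tx and Rx DMA processes.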
*/455+456+ SIS_W32(TxControl, 0x1a00);457+ SIS_W32(RxControl, 0x1a00);458+459+ sis190_irq_mask_and_ack(ioaddr);460+}461+462+static void sis190_mark_as_last_descriptor(struct RxDesc *desc)463+{464+ desc->size |= cpu_to_le32(RingEnd);465+}466+467+static inline void sis190_give_to_asic(struct RxDesc *desc, u32 rx_buf_sz)468+{469+ u32 eor = le32_to_cpu(desc->size) & RingEnd;470+471+ desc->PSize = 0x0;472+ desc->size = cpu_to_le32((rx_buf_sz & RX_BUF_MASK) | eor);473+ wmb();474+ desc->status = cpu_to_le32(OWNbit | INTbit);475+}476+477+static inline void sis190_map_to_asic(struct RxDesc *desc, dma_addr_t mapping,478+ u32 rx_buf_sz)479+{480+ desc->addr = cpu_to_le32(mapping);481+ sis190_give_to_asic(desc, rx_buf_sz);482+}483+484+static inline void sis190_make_unusable_by_asic(struct RxDesc *desc)485+{486+ desc->PSize = 0x0;487+ desc->addr = 0xdeadbeef;488+ desc->size &= cpu_to_le32(RingEnd);489+ wmb();490+ desc->status = 0x0;491+}492+493+static int sis190_alloc_rx_skb(struct pci_dev *pdev, struct sk_buff **sk_buff,494+ struct RxDesc *desc, u32 rx_buf_sz)495+{496+ struct sk_buff *skb;497+ dma_addr_t mapping;498+ int ret = 0;499+500+ skb = dev_alloc_skb(rx_buf_sz);501+ if (!skb)502+ goto err_out;503+504+ *sk_buff = skb;505+506+ mapping = pci_map_single(pdev, skb->data, rx_buf_sz,507+ PCI_DMA_FROMDEVICE);508+509+ sis190_map_to_asic(desc, mapping, rx_buf_sz);510+out:511+ return ret;512+513+err_out:514+ ret = -ENOMEM;515+ sis190_make_unusable_by_asic(desc);516+ goto out;517+}518+519+static u32 sis190_rx_fill(struct sis190_private *tp, struct net_device *dev,520+ u32 start, u32 end)521+{522+ u32 cur;523+524+ for (cur = start; cur < end; cur++) {525+ int ret, i = cur % NUM_RX_DESC;526+527+ if (tp->Rx_skbuff[i])528+ continue;529+530+ ret = sis190_alloc_rx_skb(tp->pci_dev, tp->Rx_skbuff + i,531+ tp->RxDescRing + i, tp->rx_buf_sz);532+ if (ret < 0)533+ break;534+ }535+ return cur - start;536+}537+538+static inline int sis190_try_rx_copy(struct sk_buff **sk_buff, int pkt_size,539+ struct RxDesc *desc, int rx_buf_sz)540+{541+ int ret = -1;542+543+ if (pkt_size < rx_copybreak) {544+ struct sk_buff *skb;545+546+ skb = dev_alloc_skb(pkt_size + NET_IP_ALIGN);547+ if (skb) {548+ skb_reserve(skb, NET_IP_ALIGN);549+ eth_copy_and_sum(skb, sk_buff[0]->data, pkt_size, 0);550+ *sk_buff = skb;551+ sis190_give_to_asic(desc, rx_buf_sz);552+ ret = 0;553+ }554+ }555+ return ret;556+}557+558+static inline int sis190_rx_pkt_err(u32 status, struct net_device_stats *stats)559+{560+#define ErrMask (OVRUN | SHORT | LIMIT | MIIER | NIBON | COLON | ABORT)561+562+ if ((status & CRCOK) && !(status & ErrMask))563+ return 0;564+565+ if (!(status & CRCOK))566+ stats->rx_crc_errors++;567+ else if (status & OVRUN)568+ stats->rx_over_errors++;569+ else if (status & (SHORT | LIMIT))570+ stats->rx_length_errors++;571+ else if (status & (MIIER | NIBON | COLON))572+ stats->rx_frame_errors++;573+574+ stats->rx_errors++;575+ return -1;576+}577+578+static int sis190_rx_interrupt(struct net_device *dev,579+ struct sis190_private *tp, void __iomem *ioaddr)580+{581+ struct net_device_stats *stats = &tp->stats;582+ u32 rx_left, cur_rx = tp->cur_rx;583+ u32 delta, count;584+585+ rx_left = NUM_RX_DESC + tp->dirty_rx - cur_rx;586+ rx_left = sis190_rx_quota(rx_left, (u32) dev->quota);587+588+ for (; rx_left > 0; rx_left--, cur_rx++) {589+ unsigned int entry = cur_rx % NUM_RX_DESC;590+ struct RxDesc *desc = tp->RxDescRing + entry;591+ u32 status;592+593+ if (desc->status & OWNbit)594+ break;595+596+ status = le32_to_cpu(desc->PSize);597+598+ // 
net_intr(tp, KERN_INFO "%s: Rx PSize = %08x.\n", dev->name,599+ // status);600+601+ if (sis190_rx_pkt_err(status, stats) < 0)602+ sis190_give_to_asic(desc, tp->rx_buf_sz);603+ else {604+ struct sk_buff *skb = tp->Rx_skbuff[entry];605+ int pkt_size = (status & RxSizeMask) - 4;606+ void (*pci_action)(struct pci_dev *, dma_addr_t,607+ size_t, int) = pci_dma_sync_single_for_device;608+609+ if (unlikely(pkt_size > tp->rx_buf_sz)) {610+ net_intr(tp, KERN_INFO611+ "%s: (frag) status = %08x.\n",612+ dev->name, status);613+ stats->rx_dropped++;614+ stats->rx_length_errors++;615+ sis190_give_to_asic(desc, tp->rx_buf_sz);616+ continue;617+ }618+619+ pci_dma_sync_single_for_cpu(tp->pci_dev,620+ le32_to_cpu(desc->addr), tp->rx_buf_sz,621+ PCI_DMA_FROMDEVICE);622+623+ if (sis190_try_rx_copy(&skb, pkt_size, desc,624+ tp->rx_buf_sz)) {625+ pci_action = pci_unmap_single;626+ tp->Rx_skbuff[entry] = NULL;627+ sis190_make_unusable_by_asic(desc);628+ }629+630+ pci_action(tp->pci_dev, le32_to_cpu(desc->addr),631+ tp->rx_buf_sz, PCI_DMA_FROMDEVICE);632+633+ skb->dev = dev;634+ skb_put(skb, pkt_size);635+ skb->protocol = eth_type_trans(skb, dev);636+637+ sis190_rx_skb(skb);638+639+ dev->last_rx = jiffies;640+ stats->rx_packets++;641+ stats->rx_bytes += pkt_size;642+ if ((status & BCAST) == MCAST)643+ stats->multicast++;644+ }645+ }646+ count = cur_rx - tp->cur_rx;647+ tp->cur_rx = cur_rx;648+649+ delta = sis190_rx_fill(tp, dev, tp->dirty_rx, tp->cur_rx);650+ if (!delta && count && netif_msg_intr(tp))651+ printk(KERN_INFO "%s: no Rx buffer allocated.\n", dev->name);652+ tp->dirty_rx += delta;653+654+ if (((tp->dirty_rx + NUM_RX_DESC) == tp->cur_rx) && netif_msg_intr(tp))655+ printk(KERN_EMERG "%s: Rx buffers exhausted.\n", dev->name);656+657+ return count;658+}659+660+static void sis190_unmap_tx_skb(struct pci_dev *pdev, struct sk_buff *skb,661+ struct TxDesc *desc)662+{663+ unsigned int len;664+665+ len = skb->len < ETH_ZLEN ? 
ETH_ZLEN : skb->len;666+667+ pci_unmap_single(pdev, le32_to_cpu(desc->addr), len, PCI_DMA_TODEVICE);668+669+ memset(desc, 0x00, sizeof(*desc));670+}671+672+static void sis190_tx_interrupt(struct net_device *dev,673+ struct sis190_private *tp, void __iomem *ioaddr)674+{675+ u32 pending, dirty_tx = tp->dirty_tx;676+ /*677+ * It would not be needed if queueing was allowed to be enabled678+ * again too early (hint: think preempt and unclocked smp systems).679+ */680+ unsigned int queue_stopped;681+682+ smp_rmb();683+ pending = tp->cur_tx - dirty_tx;684+ queue_stopped = (pending == NUM_TX_DESC);685+686+ for (; pending; pending--, dirty_tx++) {687+ unsigned int entry = dirty_tx % NUM_TX_DESC;688+ struct TxDesc *txd = tp->TxDescRing + entry;689+ struct sk_buff *skb;690+691+ if (le32_to_cpu(txd->status) & OWNbit)692+ break;693+694+ skb = tp->Tx_skbuff[entry];695+696+ tp->stats.tx_packets++;697+ tp->stats.tx_bytes += skb->len;698+699+ sis190_unmap_tx_skb(tp->pci_dev, skb, txd);700+ tp->Tx_skbuff[entry] = NULL;701+ dev_kfree_skb_irq(skb);702+ }703+704+ if (tp->dirty_tx != dirty_tx) {705+ tp->dirty_tx = dirty_tx;706+ smp_wmb();707+ if (queue_stopped)708+ netif_wake_queue(dev);709+ }710+}711+712+/*713+ * The interrupt handler does all of the Rx thread work and cleans up after714+ * the Tx thread.715+ */716+static irqreturn_t sis190_interrupt(int irq, void *__dev, struct pt_regs *regs)717+{718+ struct net_device *dev = __dev;719+ struct sis190_private *tp = netdev_priv(dev);720+ void __iomem *ioaddr = tp->mmio_addr;721+ unsigned int handled = 0;722+ u32 status;723+724+ status = SIS_R32(IntrStatus);725+726+ if ((status == 0xffffffff) || !status)727+ goto out;728+729+ handled = 1;730+731+ if (unlikely(!netif_running(dev))) {732+ sis190_asic_down(ioaddr);733+ goto out;734+ }735+736+ SIS_W32(IntrStatus, status);737+738+ // net_intr(tp, KERN_INFO "%s: status = %08x.\n", dev->name, status);739+740+ if (status & LinkChange) {741+ net_intr(tp, KERN_INFO "%s: link change.\n", dev->name);742+ schedule_work(&tp->phy_task);743+ }744+745+ if (status & RxQInt)746+ sis190_rx_interrupt(dev, tp, ioaddr);747+748+ if (status & TxQ0Int)749+ sis190_tx_interrupt(dev, tp, ioaddr);750+out:751+ return IRQ_RETVAL(handled);752+}753+754+#ifdef CONFIG_NET_POLL_CONTROLLER755+static void sis190_netpoll(struct net_device *dev)756+{757+ struct sis190_private *tp = netdev_priv(dev);758+ struct pci_dev *pdev = tp->pci_dev;759+760+ disable_irq(pdev->irq);761+ sis190_interrupt(pdev->irq, dev, NULL);762+ enable_irq(pdev->irq);763+}764+#endif765+766+static void sis190_free_rx_skb(struct sis190_private *tp,767+ struct sk_buff **sk_buff, struct RxDesc *desc)768+{769+ struct pci_dev *pdev = tp->pci_dev;770+771+ pci_unmap_single(pdev, le32_to_cpu(desc->addr), tp->rx_buf_sz,772+ PCI_DMA_FROMDEVICE);773+ dev_kfree_skb(*sk_buff);774+ *sk_buff = NULL;775+ sis190_make_unusable_by_asic(desc);776+}777+778+static void sis190_rx_clear(struct sis190_private *tp)779+{780+ unsigned int i;781+782+ for (i = 0; i < NUM_RX_DESC; i++) {783+ if (!tp->Rx_skbuff[i])784+ continue;785+ sis190_free_rx_skb(tp, tp->Rx_skbuff + i, tp->RxDescRing + i);786+ }787+}788+789+static void sis190_init_ring_indexes(struct sis190_private *tp)790+{791+ tp->dirty_tx = tp->dirty_rx = tp->cur_tx = tp->cur_rx = 0;792+}793+794+static int sis190_init_ring(struct net_device *dev)795+{796+ struct sis190_private *tp = netdev_priv(dev);797+798+ sis190_init_ring_indexes(tp);799+800+ memset(tp->Tx_skbuff, 0x0, NUM_TX_DESC * sizeof(struct sk_buff *));801+ memset(tp->Rx_skbuff, 0x0, NUM_RX_DESC 
* sizeof(struct sk_buff *));802+803+ if (sis190_rx_fill(tp, dev, 0, NUM_RX_DESC) != NUM_RX_DESC)804+ goto err_rx_clear;805+806+ sis190_mark_as_last_descriptor(tp->RxDescRing + NUM_RX_DESC - 1);807+808+ return 0;809+810+err_rx_clear:811+ sis190_rx_clear(tp);812+ return -ENOMEM;813+}814+815+static void sis190_set_rx_mode(struct net_device *dev)816+{817+ struct sis190_private *tp = netdev_priv(dev);818+ void __iomem *ioaddr = tp->mmio_addr;819+ unsigned long flags;820+ u32 mc_filter[2]; /* Multicast hash filter */821+ u16 rx_mode;822+823+ if (dev->flags & IFF_PROMISC) {824+ /* Unconditionally log net taps. */825+ net_drv(tp, KERN_NOTICE "%s: Promiscuous mode enabled.\n",826+ dev->name);827+ rx_mode =828+ AcceptBroadcast | AcceptMulticast | AcceptMyPhys |829+ AcceptAllPhys;830+ mc_filter[1] = mc_filter[0] = 0xffffffff;831+ } else if ((dev->mc_count > multicast_filter_limit) ||832+ (dev->flags & IFF_ALLMULTI)) {833+ /* Too many to filter perfectly -- accept all multicasts. */834+ rx_mode = AcceptBroadcast | AcceptMulticast | AcceptMyPhys;835+ mc_filter[1] = mc_filter[0] = 0xffffffff;836+ } else {837+ struct dev_mc_list *mclist;838+ unsigned int i;839+840+ rx_mode = AcceptBroadcast | AcceptMyPhys;841+ mc_filter[1] = mc_filter[0] = 0;842+ for (i = 0, mclist = dev->mc_list; mclist && i < dev->mc_count;843+ i++, mclist = mclist->next) {844+ int bit_nr =845+ ether_crc(ETH_ALEN, mclist->dmi_addr) >> 26;846+ mc_filter[bit_nr >> 5] |= 1 << (bit_nr & 31);847+ rx_mode |= AcceptMulticast;848+ }849+ }850+851+ spin_lock_irqsave(&tp->lock, flags);852+853+ SIS_W16(RxMacControl, rx_mode | 0x2);854+ SIS_W32(RxHashTable, mc_filter[0]);855+ SIS_W32(RxHashTable + 4, mc_filter[1]);856+857+ spin_unlock_irqrestore(&tp->lock, flags);858+}859+860+static void sis190_soft_reset(void __iomem *ioaddr)861+{862+ SIS_W32(IntrControl, 0x8000);863+ SIS_PCI_COMMIT();864+ msleep(1);865+ SIS_W32(IntrControl, 0x0);866+ sis190_asic_down(ioaddr);867+ msleep(1);868+}869+870+static void sis190_hw_start(struct net_device *dev)871+{872+ struct sis190_private *tp = netdev_priv(dev);873+ void __iomem *ioaddr = tp->mmio_addr;874+875+ sis190_soft_reset(ioaddr);876+877+ SIS_W32(TxDescStartAddr, tp->tx_dma);878+ SIS_W32(RxDescStartAddr, tp->rx_dma);879+880+ SIS_W32(IntrStatus, 0xffffffff);881+ SIS_W32(IntrMask, 0x0);882+ /*883+ * Default is 100Mbps.884+ * A bit strange: 100Mbps is 0x1801 elsewhere -- FR 2005/06/09885+ */886+ SIS_W16(StationControl, 0x1901);887+ SIS_W32(GMIIControl, 0x0);888+ SIS_W32(TxMacControl, 0x60);889+ SIS_W16(RxMacControl, 0x02);890+ SIS_W32(RxHashTable, 0x0);891+ SIS_W32(0x6c, 0x0);892+ SIS_W32(RxWolCtrl, 0x0);893+ SIS_W32(RxWolData, 0x0);894+895+ SIS_PCI_COMMIT();896+897+ sis190_set_rx_mode(dev);898+899+ /* Enable all known interrupts by setting the interrupt mask. */900+ SIS_W32(IntrMask, sis190_intr_mask);901+902+ SIS_W32(TxControl, 0x1a00 | CmdTxEnb);903+ SIS_W32(RxControl, 0x1a1d);904+905+ netif_start_queue(dev);906+}907+908+static void sis190_phy_task(void * data)909+{910+ struct net_device *dev = data;911+ struct sis190_private *tp = netdev_priv(dev);912+ void __iomem *ioaddr = tp->mmio_addr;913+ int phy_id = tp->mii_if.phy_id;914+ u16 val;915+916+ rtnl_lock();917+918+ val = mdio_read(ioaddr, phy_id, MII_BMCR);919+ if (val & BMCR_RESET) {920+ // FIXME: needlessly high ? 
-- FR 02/07/2005921+ mod_timer(&tp->timer, jiffies + HZ/10);922+ } else if (!(mdio_read_latched(ioaddr, phy_id, MII_BMSR) &923+ BMSR_ANEGCOMPLETE)) {924+ net_link(tp, KERN_WARNING "%s: PHY reset until link up.\n",925+ dev->name);926+ mdio_write(ioaddr, phy_id, MII_BMCR, val | BMCR_RESET);927+ mod_timer(&tp->timer, jiffies + SIS190_PHY_TIMEOUT);928+ } else {929+ /* Rejoice ! */930+ struct {931+ int val;932+ const char *msg;933+ u16 ctl;934+ } reg31[] = {935+ { LPA_1000XFULL | LPA_SLCT,936+ "1000 Mbps Full Duplex",937+ 0x01 | _1000bpsF },938+ { LPA_1000XHALF | LPA_SLCT,939+ "1000 Mbps Half Duplex",940+ 0x01 | _1000bpsH },941+ { LPA_100FULL,942+ "100 Mbps Full Duplex",943+ 0x01 | _100bpsF },944+ { LPA_100HALF,945+ "100 Mbps Half Duplex",946+ 0x01 | _100bpsH },947+ { LPA_10FULL,948+ "10 Mbps Full Duplex",949+ 0x01 | _10bpsF },950+ { LPA_10HALF,951+ "10 Mbps Half Duplex",952+ 0x01 | _10bpsH },953+ { 0, "unknown", 0x0000 }954+ }, *p;955+ u16 adv;956+957+ val = mdio_read(ioaddr, phy_id, 0x1f);958+ net_link(tp, KERN_INFO "%s: mii ext = %04x.\n", dev->name, val);959+960+ val = mdio_read(ioaddr, phy_id, MII_LPA);961+ adv = mdio_read(ioaddr, phy_id, MII_ADVERTISE);962+ net_link(tp, KERN_INFO "%s: mii lpa = %04x adv = %04x.\n",963+ dev->name, val, adv);964+965+ val &= adv;966+967+ for (p = reg31; p->ctl; p++) {968+ if ((val & p->val) == p->val)969+ break;970+ }971+ if (p->ctl)972+ SIS_W16(StationControl, p->ctl);973+ net_link(tp, KERN_INFO "%s: link on %s mode.\n", dev->name,974+ p->msg);975+ netif_carrier_on(dev);976+ }977+978+ rtnl_unlock();979+}980+981+static void sis190_phy_timer(unsigned long __opaque)982+{983+ struct net_device *dev = (struct net_device *)__opaque;984+ struct sis190_private *tp = netdev_priv(dev);985+986+ if (likely(netif_running(dev)))987+ schedule_work(&tp->phy_task);988+}989+990+static inline void sis190_delete_timer(struct net_device *dev)991+{992+ struct sis190_private *tp = netdev_priv(dev);993+994+ del_timer_sync(&tp->timer);995+}996+997+static inline void sis190_request_timer(struct net_device *dev)998+{999+ struct sis190_private *tp = netdev_priv(dev);1000+ struct timer_list *timer = &tp->timer;1001+1002+ init_timer(timer);1003+ timer->expires = jiffies + SIS190_PHY_TIMEOUT;1004+ timer->data = (unsigned long)dev;1005+ timer->function = sis190_phy_timer;1006+ add_timer(timer);1007+}1008+1009+static void sis190_set_rxbufsize(struct sis190_private *tp,1010+ struct net_device *dev)1011+{1012+ unsigned int mtu = dev->mtu;1013+1014+ tp->rx_buf_sz = (mtu > RX_BUF_SIZE) ? 
static int sis190_open(struct net_device *dev)
{
	struct sis190_private *tp = netdev_priv(dev);
	struct pci_dev *pdev = tp->pci_dev;
	int rc = -ENOMEM;

	sis190_set_rxbufsize(tp, dev);

	/*
	 * Rx and Tx descriptors need 256-byte alignment.
	 * pci_alloc_consistent() guarantees a stronger alignment.
	 */
	tp->TxDescRing = pci_alloc_consistent(pdev, TX_RING_BYTES, &tp->tx_dma);
	if (!tp->TxDescRing)
		goto out;

	tp->RxDescRing = pci_alloc_consistent(pdev, RX_RING_BYTES, &tp->rx_dma);
	if (!tp->RxDescRing)
		goto err_free_tx_0;

	rc = sis190_init_ring(dev);
	if (rc < 0)
		goto err_free_rx_1;

	INIT_WORK(&tp->phy_task, sis190_phy_task, dev);

	sis190_request_timer(dev);

	rc = request_irq(dev->irq, sis190_interrupt, SA_SHIRQ, dev->name, dev);
	if (rc < 0)
		goto err_release_timer_2;

	sis190_hw_start(dev);
out:
	return rc;

err_release_timer_2:
	sis190_delete_timer(dev);
	sis190_rx_clear(tp);
err_free_rx_1:
	pci_free_consistent(tp->pci_dev, RX_RING_BYTES, tp->RxDescRing,
			    tp->rx_dma);
err_free_tx_0:
	pci_free_consistent(tp->pci_dev, TX_RING_BYTES, tp->TxDescRing,
			    tp->tx_dma);
	goto out;
}

static void sis190_tx_clear(struct sis190_private *tp)
{
	unsigned int i;

	for (i = 0; i < NUM_TX_DESC; i++) {
		struct sk_buff *skb = tp->Tx_skbuff[i];

		if (!skb)
			continue;

		sis190_unmap_tx_skb(tp->pci_dev, skb, tp->TxDescRing + i);
		tp->Tx_skbuff[i] = NULL;
		dev_kfree_skb(skb);

		tp->stats.tx_dropped++;
	}
	tp->cur_tx = tp->dirty_tx = 0;
}

static void sis190_down(struct net_device *dev)
{
	struct sis190_private *tp = netdev_priv(dev);
	void __iomem *ioaddr = tp->mmio_addr;
	unsigned int poll_locked = 0;

	sis190_delete_timer(dev);

	netif_stop_queue(dev);

	flush_scheduled_work();

	do {
		spin_lock_irq(&tp->lock);

		sis190_asic_down(ioaddr);

		spin_unlock_irq(&tp->lock);

		synchronize_irq(dev->irq);

		if (!poll_locked) {
			netif_poll_disable(dev);
			poll_locked++;
		}

		synchronize_sched();

	} while (SIS_R32(IntrMask));

	sis190_tx_clear(tp);
	sis190_rx_clear(tp);
}

static int sis190_close(struct net_device *dev)
{
	struct sis190_private *tp = netdev_priv(dev);
	struct pci_dev *pdev = tp->pci_dev;

	sis190_down(dev);

	free_irq(dev->irq, dev);

	netif_poll_enable(dev);

	pci_free_consistent(pdev, TX_RING_BYTES, tp->TxDescRing, tp->tx_dma);
	pci_free_consistent(pdev, RX_RING_BYTES, tp->RxDescRing, tp->rx_dma);

	tp->TxDescRing = NULL;
	tp->RxDescRing = NULL;

	return 0;
}
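/*
 * Transmit path. A descriptor belongs to the hardware while OWNbit is
 * set: the mapping and sizes are filled in first, and the status word
 * carrying OWNbit is stored last, behind a wmb(), so the NIC can never
 * fetch a half-initialized descriptor.
 */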
static int sis190_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct sis190_private *tp = netdev_priv(dev);
	void __iomem *ioaddr = tp->mmio_addr;
	u32 len, entry, dirty_tx;
	struct TxDesc *desc;
	dma_addr_t mapping;

	if (unlikely(skb->len < ETH_ZLEN)) {
		skb = skb_padto(skb, ETH_ZLEN);
		if (!skb) {
			tp->stats.tx_dropped++;
			goto out;
		}
		len = ETH_ZLEN;
	} else {
		len = skb->len;
	}

	entry = tp->cur_tx % NUM_TX_DESC;
	desc = tp->TxDescRing + entry;

	if (unlikely(le32_to_cpu(desc->status) & OWNbit)) {
		netif_stop_queue(dev);
		net_tx_err(tp, KERN_ERR PFX
			   "%s: BUG! Tx Ring full when queue awake!\n",
			   dev->name);
		return NETDEV_TX_BUSY;
	}

	mapping = pci_map_single(tp->pci_dev, skb->data, len, PCI_DMA_TODEVICE);

	tp->Tx_skbuff[entry] = skb;

	desc->PSize = cpu_to_le32(len);
	desc->addr = cpu_to_le32(mapping);

	desc->size = cpu_to_le32(len);
	if (entry == (NUM_TX_DESC - 1))
		desc->size |= cpu_to_le32(RingEnd);

	wmb();

	desc->status = cpu_to_le32(OWNbit | INTbit | DEFbit | CRCbit | PADbit);

	tp->cur_tx++;

	smp_wmb();

	SIS_W32(TxControl, 0x1a00 | CmdReset | CmdTxEnb);

	dev->trans_start = jiffies;

	dirty_tx = tp->dirty_tx;
	if ((tp->cur_tx - NUM_TX_DESC) == dirty_tx) {
		netif_stop_queue(dev);
		smp_rmb();
		if (dirty_tx != tp->dirty_tx)
			netif_wake_queue(dev);
	}
out:
	return NETDEV_TX_OK;
}

static struct net_device_stats *sis190_get_stats(struct net_device *dev)
{
	struct sis190_private *tp = netdev_priv(dev);

	return &tp->stats;
}

static void sis190_free_phy(struct list_head *first_phy)
{
	struct sis190_phy *cur, *next;

	list_for_each_entry_safe(cur, next, first_phy, list) {
		kfree(cur);
	}
}

/**
 * sis190_default_phy - Select default PHY for sis190 mac.
 * @dev: the net device to probe for
 *
 * Select the first detected PHY with a link as the default.
 * If none has a link, select the PHY whose type is HOME as the default.
 * If there is no HOME PHY, select a LAN PHY instead.
 */
static u16 sis190_default_phy(struct net_device *dev)
{
	struct sis190_phy *phy, *phy_home, *phy_default, *phy_lan;
	struct sis190_private *tp = netdev_priv(dev);
	struct mii_if_info *mii_if = &tp->mii_if;
	void __iomem *ioaddr = tp->mmio_addr;
	u16 status;

	phy_home = phy_default = phy_lan = NULL;

	list_for_each_entry(phy, &tp->first_phy, list) {
		status = mdio_read_latched(ioaddr, phy->phy_id, MII_BMSR);

		// Link up, no default PHY selected yet and not a ghost PHY.
		if ((status & BMSR_LSTATUS) &&
		    !phy_default &&
		    (phy->type != UNKNOWN)) {
			phy_default = phy;
		} else {
			status = mdio_read(ioaddr, phy->phy_id, MII_BMCR);
			mdio_write(ioaddr, phy->phy_id, MII_BMCR,
				   status | BMCR_ANENABLE | BMCR_ISOLATE);
			if (phy->type == HOME)
				phy_home = phy;
			else if (phy->type == LAN)
				phy_lan = phy;
		}
	}

	if (!phy_default) {
		if (phy_home)
			phy_default = phy_home;
		else if (phy_lan)
			phy_default = phy_lan;
		else
			phy_default = list_entry(&tp->first_phy,
						 struct sis190_phy, list);
	}

	if (mii_if->phy_id != phy_default->phy_id) {
		mii_if->phy_id = phy_default->phy_id;
		net_probe(tp, KERN_INFO
			  "%s: Using transceiver at address %d as default.\n",
			  pci_name(tp->pci_dev), mii_if->phy_id);
	}

	status = mdio_read(ioaddr, mii_if->phy_id, MII_BMCR);
	status &= (~BMCR_ISOLATE);

	mdio_write(ioaddr, mii_if->phy_id, MII_BMCR, status);
	status = mdio_read_latched(ioaddr, mii_if->phy_id, MII_BMSR);

	return status;
}
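/*
 * Identify a probed transceiver. The PHY identifier registers are
 * matched against mii_chip_table with the low four bits of MII_PHYSID2
 * masked off; those bits hold the chip revision, so any revision of a
 * known part is accepted.
 */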
static void sis190_init_phy(struct net_device *dev, struct sis190_private *tp,
			    struct sis190_phy *phy, unsigned int phy_id,
			    u16 mii_status)
{
	void __iomem *ioaddr = tp->mmio_addr;
	struct mii_chip_info *p;

	INIT_LIST_HEAD(&phy->list);
	phy->status = mii_status;
	phy->phy_id = phy_id;

	phy->id[0] = mdio_read(ioaddr, phy_id, MII_PHYSID1);
	phy->id[1] = mdio_read(ioaddr, phy_id, MII_PHYSID2);

	for (p = mii_chip_table; p->type; p++) {
		if ((p->id[0] == phy->id[0]) &&
		    (p->id[1] == (phy->id[1] & 0xfff0))) {
			break;
		}
	}

	if (p->id[1]) {
		phy->type = (p->type == MIX) ?
			((mii_status & (BMSR_100FULL | BMSR_100HALF)) ?
				LAN : HOME) : p->type;
	} else
		phy->type = UNKNOWN;

	net_probe(tp, KERN_INFO "%s: %s transceiver at address %d.\n",
		  pci_name(tp->pci_dev),
		  (phy->type == UNKNOWN) ? "Unknown PHY" : p->name, phy_id);
}

/**
 * sis190_mii_probe - Probe MII PHY for sis190
 * @dev: the net device to probe for
 *
 * Search all 32 possible MII PHY addresses. Identify and select a
 * current PHY if one is found; return an error if none is found.
 */
static int __devinit sis190_mii_probe(struct net_device *dev)
{
	struct sis190_private *tp = netdev_priv(dev);
	struct mii_if_info *mii_if = &tp->mii_if;
	void __iomem *ioaddr = tp->mmio_addr;
	int phy_id;
	int rc = 0;

	INIT_LIST_HEAD(&tp->first_phy);

	for (phy_id = 0; phy_id < PHY_MAX_ADDR; phy_id++) {
		struct sis190_phy *phy;
		u16 status;

		status = mdio_read_latched(ioaddr, phy_id, MII_BMSR);

		// Try the next mii if the current one is not accessible.
		if (status == 0xffff || status == 0x0000)
			continue;

		phy = kmalloc(sizeof(*phy), GFP_KERNEL);
		if (!phy) {
			sis190_free_phy(&tp->first_phy);
			rc = -ENOMEM;
			goto out;
		}

		sis190_init_phy(dev, tp, phy, phy_id, status);

		list_add(&tp->first_phy, &phy->list);
	}

	if (list_empty(&tp->first_phy)) {
		net_probe(tp, KERN_INFO "%s: No MII transceivers found!\n",
			  pci_name(tp->pci_dev));
		rc = -EIO;
		goto out;
	}

	/* Select default PHY for mac */
	sis190_default_phy(dev);

	mii_if->dev = dev;
	mii_if->mdio_read = __mdio_read;
	mii_if->mdio_write = __mdio_write;
	mii_if->phy_id_mask = PHY_ID_ANY;
	mii_if->reg_num_mask = MII_REG_ANY;
out:
	return rc;
}

static void __devexit sis190_mii_remove(struct net_device *dev)
{
	struct sis190_private *tp = netdev_priv(dev);

	sis190_free_phy(&tp->first_phy);
}

static void sis190_release_board(struct pci_dev *pdev)
{
	struct net_device *dev = pci_get_drvdata(pdev);
	struct sis190_private *tp = netdev_priv(dev);

	iounmap(tp->mmio_addr);
	pci_release_regions(pdev);
	pci_disable_device(pdev);
	free_netdev(dev);
}
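/*
 * Board bring-up: enable the PCI device, sanity check and remap BAR 0,
 * then quiesce the chip. The error paths below unwind in strict
 * reverse order of the numbered labels.
 */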
static struct net_device * __devinit sis190_init_board(struct pci_dev *pdev)
{
	struct sis190_private *tp;
	struct net_device *dev;
	void __iomem *ioaddr;
	int rc;

	dev = alloc_etherdev(sizeof(*tp));
	if (!dev) {
		net_drv(&debug, KERN_ERR PFX "unable to alloc new ethernet\n");
		rc = -ENOMEM;
		goto err_out_0;
	}

	SET_MODULE_OWNER(dev);
	SET_NETDEV_DEV(dev, &pdev->dev);

	tp = netdev_priv(dev);
	tp->msg_enable = netif_msg_init(debug.msg_enable, SIS190_MSG_DEFAULT);

	rc = pci_enable_device(pdev);
	if (rc < 0) {
		net_probe(tp, KERN_ERR "%s: enable failure\n", pci_name(pdev));
		goto err_free_dev_1;
	}

	rc = -ENODEV;

	if (!(pci_resource_flags(pdev, 0) & IORESOURCE_MEM)) {
		net_probe(tp, KERN_ERR "%s: region #0 is not an MMIO resource.\n",
			  pci_name(pdev));
		goto err_pci_disable_2;
	}
	if (pci_resource_len(pdev, 0) < SIS190_REGS_SIZE) {
		net_probe(tp, KERN_ERR "%s: invalid PCI region size(s).\n",
			  pci_name(pdev));
		goto err_pci_disable_2;
	}

	rc = pci_request_regions(pdev, DRV_NAME);
	if (rc < 0) {
		net_probe(tp, KERN_ERR PFX "%s: could not request regions.\n",
			  pci_name(pdev));
		goto err_pci_disable_2;
	}

	rc = pci_set_dma_mask(pdev, DMA_32BIT_MASK);
	if (rc < 0) {
		net_probe(tp, KERN_ERR "%s: DMA configuration failed.\n",
			  pci_name(pdev));
		goto err_free_res_3;
	}

	pci_set_master(pdev);

	ioaddr = ioremap(pci_resource_start(pdev, 0), SIS190_REGS_SIZE);
	if (!ioaddr) {
		net_probe(tp, KERN_ERR "%s: cannot remap MMIO, aborting\n",
			  pci_name(pdev));
		rc = -EIO;
		goto err_free_res_3;
	}

	tp->pci_dev = pdev;
	tp->mmio_addr = ioaddr;

	sis190_irq_mask_and_ack(ioaddr);

	sis190_soft_reset(ioaddr);
out:
	return dev;

err_free_res_3:
	pci_release_regions(pdev);
err_pci_disable_2:
	pci_disable_device(pdev);
err_free_dev_1:
	free_netdev(dev);
err_out_0:
	dev = ERR_PTR(rc);
	goto out;
}

static void sis190_tx_timeout(struct net_device *dev)
{
	struct sis190_private *tp = netdev_priv(dev);
	void __iomem *ioaddr = tp->mmio_addr;
	u8 tmp8;

	/* Disable Tx, if not already */
	tmp8 = SIS_R8(TxControl);
	if (tmp8 & CmdTxEnb)
		SIS_W8(TxControl, tmp8 & ~CmdTxEnb);

	net_tx_err(tp, KERN_INFO "%s: Transmit timeout, status %08x %08x.\n",
		   dev->name, SIS_R32(TxControl), SIS_R32(TxSts));

	/* Disable interrupts by clearing the interrupt mask. */
	SIS_W32(IntrMask, 0x0000);

	/* Stop a shared interrupt from scavenging while we are. */
	spin_lock_irq(&tp->lock);
	sis190_tx_clear(tp);
	spin_unlock_irq(&tp->lock);

	/* ...and finally, reset everything. */
	sis190_hw_start(dev);

	netif_wake_queue(dev);
}
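/*
 * A signature word of all zeros or all ones means no usable EEPROM is
 * present; otherwise the MAC address is read out as three consecutive
 * 16-bit words starting at EEPROMMACAddr.
 */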
static int __devinit sis190_get_mac_addr_from_eeprom(struct pci_dev *pdev,
						     struct net_device *dev)
{
	struct sis190_private *tp = netdev_priv(dev);
	void __iomem *ioaddr = tp->mmio_addr;
	u16 sig;
	int i;

	net_probe(tp, KERN_INFO "%s: Read MAC address from EEPROM\n",
		  pci_name(pdev));

	/* Check to see if there is a sane EEPROM */
	sig = (u16) sis190_read_eeprom(ioaddr, EEPROMSignature);

	if ((sig == 0xffff) || (sig == 0x0000)) {
		net_probe(tp, KERN_INFO "%s: Error EEPROM read %x.\n",
			  pci_name(pdev), sig);
		return -EIO;
	}

	/* Get MAC address from EEPROM, one 16-bit word at a time. */
	for (i = 0; i < MAC_ADDR_LEN / 2; i++) {
		__le16 w = sis190_read_eeprom(ioaddr, EEPROMMACAddr + i);

		((u16 *)dev->dev_addr)[i] = le16_to_cpu(w);
	}

	return 0;
}

/**
 * sis190_get_mac_addr_from_apc - Get MAC address for SiS965 model
 * @pdev: PCI device
 * @dev: network device to get address for
 *
 * The SiS965 stores the MAC address in APC CMOS RAM, which is accessed
 * through the ISA bridge. The MAC address is read into @dev->dev_addr.
 */
static int __devinit sis190_get_mac_addr_from_apc(struct pci_dev *pdev,
						  struct net_device *dev)
{
	struct sis190_private *tp = netdev_priv(dev);
	struct pci_dev *isa_bridge;
	u8 reg, tmp8;
	int i;

	net_probe(tp, KERN_INFO "%s: Read MAC address from APC.\n",
		  pci_name(pdev));

	isa_bridge = pci_get_device(PCI_VENDOR_ID_SI, 0x0965, NULL);
	if (!isa_bridge) {
		net_probe(tp, KERN_INFO "%s: Can not find ISA bridge.\n",
			  pci_name(pdev));
		return -EIO;
	}

	/* Enable ports 78h & 79h to access the APC registers. */
	pci_read_config_byte(isa_bridge, 0x48, &tmp8);
	reg = (tmp8 & ~0x02);
	pci_write_config_byte(isa_bridge, 0x48, reg);
	udelay(50);
	pci_read_config_byte(isa_bridge, 0x48, &reg);

	for (i = 0; i < MAC_ADDR_LEN; i++) {
		outb(0x9 + i, 0x78);
		dev->dev_addr[i] = inb(0x79);
	}

	outb(0x12, 0x78);
	reg = inb(0x79);

	/* Restore the value to the ISA bridge */
	pci_write_config_byte(isa_bridge, 0x48, tmp8);
	pci_dev_put(isa_bridge);

	return 0;
}

/**
 * sis190_init_rxfilter - Initialize the Rx filter
 * @dev: network device to initialize
 *
 * Set the receive filter address to our MAC address
 * and enable packet filtering.
 */
static inline void sis190_init_rxfilter(struct net_device *dev)
{
	struct sis190_private *tp = netdev_priv(dev);
	void __iomem *ioaddr = tp->mmio_addr;
	u16 ctl;
	int i;

	ctl = SIS_R16(RxMacControl);
	/*
	 * Disable packet filtering before setting the filter.
	 * Note: SiS's driver writes 32 bits, but RxMacControl is only
	 * 16 bits and is followed by RxMacAddr (6 bytes). Strange. -- FR
	 */
	SIS_W16(RxMacControl, ctl & ~0x0f00);

	for (i = 0; i < MAC_ADDR_LEN; i++)
		SIS_W8(RxMacAddr + i, dev->dev_addr[i]);

	SIS_W16(RxMacControl, ctl);
	SIS_PCI_COMMIT();
}
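/*
 * Bit 0 of PCI config byte 0x73 selects where the MAC address lives:
 * the southbridge APC CMOS RAM (SiS965 boards) or a dedicated EEPROM.
 */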
static int sis190_get_mac_addr(struct pci_dev *pdev, struct net_device *dev)
{
	u8 from;

	pci_read_config_byte(pdev, 0x73, &from);

	return (from & 0x00000001) ?
		sis190_get_mac_addr_from_apc(pdev, dev) :
		sis190_get_mac_addr_from_eeprom(pdev, dev);
}

static void sis190_set_speed_auto(struct net_device *dev)
{
	struct sis190_private *tp = netdev_priv(dev);
	void __iomem *ioaddr = tp->mmio_addr;
	int phy_id = tp->mii_if.phy_id;
	int val;

	net_link(tp, KERN_INFO "%s: Enabling Auto-negotiation.\n", dev->name);

	val = mdio_read(ioaddr, phy_id, MII_ADVERTISE);

	// Enable 10/100 Full/Half Mode, leave bits 4:0 of MII_ADVERTISE
	// unchanged.
	mdio_write(ioaddr, phy_id, MII_ADVERTISE, (val & ADVERTISE_SLCT) |
		   ADVERTISE_100FULL | ADVERTISE_10FULL |
		   ADVERTISE_100HALF | ADVERTISE_10HALF);

	// Enable 1000 Full Mode.
	mdio_write(ioaddr, phy_id, MII_CTRL1000, ADVERTISE_1000FULL);

	// Enable auto-negotiation and restart auto-negotiation.
	mdio_write(ioaddr, phy_id, MII_BMCR,
		   BMCR_ANENABLE | BMCR_ANRESTART | BMCR_RESET);
}

static int sis190_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
{
	struct sis190_private *tp = netdev_priv(dev);

	return mii_ethtool_gset(&tp->mii_if, cmd);
}

static int sis190_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
{
	struct sis190_private *tp = netdev_priv(dev);

	return mii_ethtool_sset(&tp->mii_if, cmd);
}

static void sis190_get_drvinfo(struct net_device *dev,
			       struct ethtool_drvinfo *info)
{
	struct sis190_private *tp = netdev_priv(dev);

	strcpy(info->driver, DRV_NAME);
	strcpy(info->version, DRV_VERSION);
	strcpy(info->bus_info, pci_name(tp->pci_dev));
}

static int sis190_get_regs_len(struct net_device *dev)
{
	return SIS190_REGS_SIZE;
}

static void sis190_get_regs(struct net_device *dev, struct ethtool_regs *regs,
			    void *p)
{
	struct sis190_private *tp = netdev_priv(dev);
	unsigned long flags;

	if (regs->len > SIS190_REGS_SIZE)
		regs->len = SIS190_REGS_SIZE;

	spin_lock_irqsave(&tp->lock, flags);
	memcpy_fromio(p, tp->mmio_addr, regs->len);
	spin_unlock_irqrestore(&tp->lock, flags);
}

static int sis190_nway_reset(struct net_device *dev)
{
	struct sis190_private *tp = netdev_priv(dev);

	return mii_nway_restart(&tp->mii_if);
}

static u32 sis190_get_msglevel(struct net_device *dev)
{
	struct sis190_private *tp = netdev_priv(dev);

	return tp->msg_enable;
}

static void sis190_set_msglevel(struct net_device *dev, u32 value)
{
	struct sis190_private *tp = netdev_priv(dev);

	tp->msg_enable = value;
}

static struct ethtool_ops sis190_ethtool_ops = {
	.get_settings	= sis190_get_settings,
	.set_settings	= sis190_set_settings,
	.get_drvinfo	= sis190_get_drvinfo,
	.get_regs_len	= sis190_get_regs_len,
	.get_regs	= sis190_get_regs,
	.get_link	= ethtool_op_get_link,
	.get_msglevel	= sis190_get_msglevel,
	.set_msglevel	= sis190_set_msglevel,
	.nway_reset	= sis190_nway_reset,
};
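/*
 * MII ioctls (SIOCGMIIPHY/SIOCGMIIREG/SIOCSMIIREG) are served through
 * the generic mii helpers; plain "ethtool <dev>" requests likewise end
 * up in the sis190_get_settings()/mii_ethtool_gset() pair above.
 */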
static int sis190_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
{
	struct sis190_private *tp = netdev_priv(dev);

	return !netif_running(dev) ? -EINVAL :
		generic_mii_ioctl(&tp->mii_if, if_mii(ifr), cmd, NULL);
}

static int __devinit sis190_init_one(struct pci_dev *pdev,
				     const struct pci_device_id *ent)
{
	static int printed_version = 0;
	struct sis190_private *tp;
	struct net_device *dev;
	void __iomem *ioaddr;
	int rc;

	if (!printed_version) {
		net_drv(&debug, KERN_INFO SIS190_DRIVER_NAME " loaded.\n");
		printed_version = 1;
	}

	dev = sis190_init_board(pdev);
	if (IS_ERR(dev)) {
		rc = PTR_ERR(dev);
		goto out;
	}

	tp = netdev_priv(dev);
	ioaddr = tp->mmio_addr;

	rc = sis190_get_mac_addr(pdev, dev);
	if (rc < 0)
		goto err_release_board;

	sis190_init_rxfilter(dev);

	INIT_WORK(&tp->phy_task, sis190_phy_task, dev);

	dev->open = sis190_open;
	dev->stop = sis190_close;
	dev->do_ioctl = sis190_ioctl;
	dev->get_stats = sis190_get_stats;
	dev->tx_timeout = sis190_tx_timeout;
	dev->watchdog_timeo = SIS190_TX_TIMEOUT;
	dev->hard_start_xmit = sis190_start_xmit;
#ifdef CONFIG_NET_POLL_CONTROLLER
	dev->poll_controller = sis190_netpoll;
#endif
	dev->set_multicast_list = sis190_set_rx_mode;
	SET_ETHTOOL_OPS(dev, &sis190_ethtool_ops);
	dev->irq = pdev->irq;
	dev->base_addr = (unsigned long) 0xdead;

	spin_lock_init(&tp->lock);

	rc = sis190_mii_probe(dev);
	if (rc < 0)
		goto err_release_board;

	rc = register_netdev(dev);
	if (rc < 0)
		goto err_remove_mii;

	pci_set_drvdata(pdev, dev);

	net_probe(tp, KERN_INFO "%s: %s at %p (IRQ: %d), "
		  "%2.2x:%2.2x:%2.2x:%2.2x:%2.2x:%2.2x\n",
		  pci_name(pdev), sis_chip_info[ent->driver_data].name,
		  ioaddr, dev->irq,
		  dev->dev_addr[0], dev->dev_addr[1],
		  dev->dev_addr[2], dev->dev_addr[3],
		  dev->dev_addr[4], dev->dev_addr[5]);

	netif_carrier_off(dev);

	sis190_set_speed_auto(dev);
out:
	return rc;

err_remove_mii:
	sis190_mii_remove(dev);
err_release_board:
	sis190_release_board(pdev);
	goto out;
}

static void __devexit sis190_remove_one(struct pci_dev *pdev)
{
	struct net_device *dev = pci_get_drvdata(pdev);

	sis190_mii_remove(dev);
	unregister_netdev(dev);
	sis190_release_board(pdev);
	pci_set_drvdata(pdev, NULL);
}

static struct pci_driver sis190_pci_driver = {
	.name		= DRV_NAME,
	.id_table	= sis190_pci_tbl,
	.probe		= sis190_init_one,
	.remove		= __devexit_p(sis190_remove_one),
};

static int __init sis190_init_module(void)
{
	return pci_module_init(&sis190_pci_driver);
}

static void __exit sis190_cleanup_module(void)
{
	pci_unregister_driver(&sis190_pci_driver);
}

module_init(sis190_init_module);
module_exit(sis190_cleanup_module);
 drivers/net/tulip/Kconfig | 12 ++++++++++++

--- a/drivers/net/tulip/Kconfig
+++ b/drivers/net/tulip/Kconfig
@@ -135,6 +135,18 @@
 	  <file:Documentation/networking/net-modules.txt>. The module will
 	  be called dmfe.
 
+config ULI526X
+	tristate "ULi M526x controller support"
+	depends on NET_TULIP && PCI
+	select CRC32
+	---help---
+	  This driver is for ULi M5261/M5263 10/100M Ethernet Controller
+	  (<http://www.uli.com.tw/>).
+
+	  To compile this driver as a module, choose M here and read
+	  <file:Documentation/networking/net-modules.txt>. The module will
+	  be called uli526x.
+
 config PCMCIA_XIRCOM
 	tristate "Xircom CardBus support (new driver)"
 	depends on NET_TULIP && CARDBUS