                Chelsio N210 10Gb Ethernet Network Controller

                       Driver Release Notes for Linux

                                Version 2.1.1

                                June 20, 2005

CONTENTS
========
 INTRODUCTION
 FEATURES
 PERFORMANCE
 DRIVER MESSAGES
 KNOWN ISSUES
 SUPPORT


INTRODUCTION
============

 This document describes the Linux driver for the Chelsio 10Gb Ethernet
 Network Controller. This driver supports the Chelsio N210 NIC and is
 backward compatible with the Chelsio N110 model 10Gb NICs.


FEATURES
========

 Adaptive Interrupts (adaptive-rx)
 ---------------------------------

 This feature provides an adaptive algorithm that adjusts the interrupt
 coalescing parameters, allowing the driver to dynamically adapt the latency
 settings to achieve the highest performance during various types of network
 load.

 The interface used to control this feature is ethtool. Please see the
 ethtool manpage for additional usage information.

 By default, adaptive-rx is disabled.
 To enable adaptive-rx:

     ethtool -C <interface> adaptive-rx on

 To disable adaptive-rx, use ethtool:

     ethtool -C <interface> adaptive-rx off

 After disabling adaptive-rx, the timer latency value will be set to 50us.
 You may set the timer latency after disabling adaptive-rx:

     ethtool -C <interface> rx-usecs <microseconds>

 An example to set the timer latency value to 100us on eth0:

     ethtool -C eth0 rx-usecs 100

 You may also provide a timer latency value while disabling adaptive-rx:

     ethtool -C <interface> adaptive-rx off rx-usecs <microseconds>

 If adaptive-rx is disabled and a timer latency value is specified, the timer
 will be set to the specified value until changed by the user or until
 adaptive-rx is enabled.

 To view the status of the adaptive-rx and timer latency values:

     ethtool -c <interface>


 TCP Segmentation Offloading (TSO) Support
 -----------------------------------------

 This feature, also known as "large send", enables a system's protocol stack
 to offload portions of outbound TCP processing to a network interface card,
 thereby reducing system CPU utilization and enhancing performance.

 The interface used to control this feature is ethtool version 1.8 or higher.
 Please see the ethtool manpage for additional usage information.

 By default, TSO is enabled.
 To disable TSO:

     ethtool -K <interface> tso off

 To enable TSO:

     ethtool -K <interface> tso on

 To view the status of TSO:

     ethtool -k <interface>


PERFORMANCE
===========

 The following information is provided as an example of how to change system
 parameters for "performance tuning" and what values to use. You may or may
 not want to change these system parameters, depending on your
 server/workstation application. Doing so is not warranted in any way by
 Chelsio Communications, and is done at "YOUR OWN RISK". Chelsio will not be
 held responsible for loss of data or damage to equipment.

 Your distribution may have a different way of doing things, or you may
 prefer a different method.
 These commands are shown only to provide an example of what to do and are
 by no means definitive.

 Making any of the following system changes will only last until you reboot
 your system. You may want to write a script that runs at boot-up and
 includes the optimal settings for your system; a sample script is shown at
 the end of this section.

 Setting PCI Latency Timer:
     setpci -d 1425:* 0x0c.l=0x0000F800

 Disabling TCP timestamps:
     sysctl -w net.ipv4.tcp_timestamps=0

 Disabling SACK:
     sysctl -w net.ipv4.tcp_sack=0

 Setting a large number of incoming connection requests:
     sysctl -w net.ipv4.tcp_max_syn_backlog=3000

 Setting maximum receive socket buffer size:
     sysctl -w net.core.rmem_max=1024000

 Setting maximum send socket buffer size:
     sysctl -w net.core.wmem_max=1024000

 Setting smp_affinity (on a multiprocessor system) to a single CPU:
     echo 1 > /proc/irq/<interrupt_number>/smp_affinity

 Setting default receive socket buffer size:
     sysctl -w net.core.rmem_default=524287

 Setting default send socket buffer size:
     sysctl -w net.core.wmem_default=524287

 Setting maximum option memory buffers:
     sysctl -w net.core.optmem_max=524287

 Setting maximum backlog (# of unprocessed packets before kernel drops):
     sysctl -w net.core.netdev_max_backlog=300000

 Setting TCP read buffers (min/default/max):
     sysctl -w net.ipv4.tcp_rmem="10000000 10000000 10000000"

 Setting TCP write buffers (min/default/max):
     sysctl -w net.ipv4.tcp_wmem="10000000 10000000 10000000"

 Setting TCP buffer space (min/pressure/max):
     sysctl -w net.ipv4.tcp_mem="10000000 10000000 10000000"

 TCP window size for single connections:
   The receive buffer (RX_WINDOW) size must be at least as large as the
   Bandwidth-Delay Product of the communication link between the sender and
   receiver. Due to variations in RTT, you may want to increase the buffer
   size up to 2 times the Bandwidth-Delay Product. Reference page 289 of
   "TCP/IP Illustrated, Volume 1, The Protocols" by W. Richard Stevens.
   At 10Gb speeds, use the following formula:
       RX_WINDOW >= 1.25MBytes * RTT (in milliseconds)
       Example for an RTT of 100us: RX_WINDOW = (1,250,000 * 0.1) = 125,000
   RX_WINDOW sizes of 256KB - 512KB should be sufficient.
   Setting the min, max, and default receive buffer (RX_WINDOW) size:
       sysctl -w net.ipv4.tcp_rmem="<min> <default> <max>"

 TCP window size for multiple connections:
   The receive buffer (RX_WINDOW) size may be calculated the same way as for
   single connections, but should be divided by the number of connections.
   The smaller window prevents congestion and facilitates better pacing,
   especially if/when MAC level flow control does not work well or is not
   supported on the machine. Experimentation may be necessary to attain the
   correct value. This method is provided as a starting point for the
   correct receive buffer size.
   Setting the min, max, and default receive buffer (RX_WINDOW) size is
   performed in the same manner as for a single connection.
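
 As an illustration of the boot-up script mentioned at the beginning of this
 section, the settings can simply be replayed from a script run at boot
 time. The file used here (/etc/rc.d/rc.local) and the particular values
 chosen are assumptions; use whatever mechanism and values suit your
 distribution and workload:

     #!/bin/sh
     # Example additions to /etc/rc.d/rc.local: re-apply NIC tuning after
     # every reboot. Values are taken from the examples above.
     sysctl -w net.ipv4.tcp_timestamps=0
     sysctl -w net.ipv4.tcp_sack=0
     sysctl -w net.core.rmem_max=1024000
     sysctl -w net.core.wmem_max=1024000
     sysctl -w net.core.netdev_max_backlog=300000
     sysctl -w net.ipv4.tcp_rmem="10000000 10000000 10000000"
     sysctl -w net.ipv4.tcp_wmem="10000000 10000000 10000000"
     # Pin the NIC interrupt to CPU0; find the IRQ number with
     # "ifconfig <dev_name> | grep Interrupt".
     # echo 1 > /proc/irq/<interrupt_number>/smp_affinity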


DRIVER MESSAGES
===============

 The following messages are the most common messages logged by syslog. These
 may be found in /var/log/messages.

 Driver up:
     Chelsio Network Driver - version 2.1.1

 NIC detected:
     eth#: Chelsio N210 1x10GBaseX NIC (rev #), PCIX 133MHz/64-bit

 Link up:
     eth#: link is up at 10 Gbps, full duplex

 Link down:
     eth#: link is down


KNOWN ISSUES
============

 These issues have been identified during testing. The following information
 is provided as workarounds for the problems. In some cases, a problem is
 inherent to Linux or to a particular Linux distribution and/or hardware
 platform.

 1. Large number of TCP retransmits on a multiprocessor (SMP) system.

    On a system with multiple CPUs, the interrupt (IRQ) for the network
    controller may be bound to more than one CPU. This can cause TCP
    retransmits if packet data is split across different CPUs and
    reassembled in a different order than expected.

    To eliminate the TCP retransmits, set smp_affinity on the particular
    interrupt to a single CPU. You can locate the interrupt (IRQ) used by
    the N110/N210 by using ifconfig:

        ifconfig <dev_name> | grep Interrupt

    Set the smp_affinity to a single CPU:

        echo 1 > /proc/irq/<interrupt_number>/smp_affinity

    It is highly suggested that you do not run the irqbalance daemon on your
    system, as this will change any smp_affinity setting you have applied.
    The irqbalance daemon runs on a 10-second interval and binds interrupts
    to the least loaded CPU as determined by the daemon. To disable this
    daemon:

        chkconfig --level 2345 irqbalance off

    By default, some Linux distributions enable the kernel feature,
    irqbalance, which performs the same function as the daemon. To disable
    this feature, add the following line to your bootloader:

        noirqbalance

    Example using the GRUB bootloader:

        title Red Hat Enterprise Linux AS (2.4.21-27.ELsmp)
        root (hd0,0)
        kernel /vmlinuz-2.4.21-27.ELsmp ro root=/dev/hda3 noirqbalance
        initrd /initrd-2.4.21-27.ELsmp.img

 2. After running insmod, the driver is loaded and the incorrect network
    interface is brought up without running ifup.

    When using 2.4.x kernels, including RHEL kernels, the Linux kernel
    invokes a script named "hotplug". This script is primarily used to
    automatically bring up USB devices when they are plugged in; however,
    the script also attempts to automatically bring up a network interface
    after loading the kernel module. The hotplug script does this by
    scanning the ifcfg-eth# config files in /etc/sysconfig/network-scripts,
    looking for HWADDR=<mac_address>.

    If the hotplug script does not find the HWADDR within any of the
    ifcfg-eth# files, it will bring up the device with the next available
    interface name. If this interface is already configured for a different
    network card, your new interface will have an incorrect IP address and
    network settings.

    To solve this issue, you can add the HWADDR=<mac_address> key to the
    interface config file of your network controller.
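
    As an illustration, a hypothetical ifcfg file for a Chelsio interface
    might look like the following (the interface name and all values are
    placeholders; match them to your actual configuration):

        # /etc/sysconfig/network-scripts/ifcfg-eth2  (example only)
        DEVICE=eth2
        HWADDR=<mac_address>
        BOOTPROTO=static
        IPADDR=<ip_address>
        NETMASK=<netmask>
        ONBOOT=yes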

    To disable this "hotplug" feature, you may add the driver (module name)
    to the "blacklist" file located in /etc/hotplug. It has been noted that
    this does not work for network devices because the net.agent script
    does not use the blacklist file. Simply remove, or rename, the net.agent
    script located in /etc/hotplug to disable this feature.

 3. Transport Protocol (TP) hangs when running heavy multi-connection
    traffic on an AMD Opteron system with the HyperTransport PCI-X Tunnel
    chipset.

    If your AMD Opteron system uses the AMD-8131 HyperTransport PCI-X Tunnel
    chipset, you may experience the "133-MHz Mode Split Completion Data
    Corruption" bug identified by AMD while using a 133-MHz PCI-X card on
    the PCI-X bus.

    AMD states, "Under highly specific conditions, the AMD-8131 PCI-X Tunnel
    can provide stale data via split completion cycles to a PCI-X card that
    is operating at 133 MHz", causing data corruption.

    AMD provides three workarounds for this problem; however, Chelsio
    recommends the first option for best performance with this bug:

        For 133-MHz secondary bus operation, limit the transaction length
        and the number of outstanding transactions, via BIOS configuration
        programming of the PCI-X card, to the following:

            Data Length (bytes): 1k
            Total allowed outstanding transactions: 2

    Please refer to AMD 8131-HT/PCI-X Errata 26310, Rev 3.08, August 2004,
    section 56, "133-MHz Mode Split Completion Data Corruption", for more
    details on this bug and the workarounds suggested by AMD.

    It may be possible to work outside AMD's recommended PCI-X settings; try
    increasing the Data Length to 2k bytes for increased performance. If you
    have issues with these settings, please revert to the "safe" settings
    and duplicate the problem before submitting a bug or asking for support.

    NOTE: The default setting on most systems is 8 outstanding transactions
          and 2k bytes data length.

 4. On multiprocessor systems, it has been noted that an application which
    is handling 10Gb networking can switch between CPUs, causing degraded
    and/or unstable performance.

    If running on an SMP system and taking performance measurements, it is
    suggested you either run the latest netperf-2.4.0+ or use a binding
    tool such as Tim Hockin's procstate utilities (runon)
    <http://www.hockin.org/~thockin/procstate/>.

    Binding netserver and netperf (or other applications) to particular
    CPUs can make a significant difference in performance measurements.
    You may need to experiment with which CPU to bind the application to in
    order to achieve the best performance for your system.

    If you are developing an application designed for 10Gb networking,
    please keep in mind that you may want to look at the kernel functions
    sched_setaffinity & sched_getaffinity to bind your application.

    If you are just running user-space applications such as ftp, telnet,
    etc., you may want to try the runon tool provided by Tim Hockin's
    procstate utility. You could also try binding the interface to a
    particular CPU:

        runon 0 ifup eth0
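
    As an illustration, a netperf measurement might pin the server and the
    client to specific CPUs in the same way (the CPU numbers and netperf
    options shown are examples only; experiment to find what works best on
    your system):

        runon 0 netserver
        runon 1 netperf -H <remote_host> -l 60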


SUPPORT
=======

 If you have problems with the software or hardware, please contact our
 customer support team via email at support@chelsio.com or check our website
 at http://www.chelsio.com

===============================================================================

 Chelsio Communications
 370 San Aleso Ave.
 Suite 100
 Sunnyvale, CA 94085
 http://www.chelsio.com

This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License, version 2, as
published by the Free Software Foundation.

You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.

THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR IMPLIED
WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.

 Copyright (c) 2003-2005 Chelsio Communications. All rights reserved.

===============================================================================
···
 	  If in doubt, say Y.
 
+config SIS190
+	tristate "SiS190 gigabit ethernet support"
+	depends on PCI
+	select CRC32
+	select MII
+	---help---
+	  Say Y here if you have a SiS 190 PCI Gigabit Ethernet adapter.
+
+	  To compile this driver as a module, choose M here: the module
+	  will be called sis190.  This is recommended.
+
 config SKGE
 	tristate "New SysKonnect GigaEthernet support (EXPERIMENTAL)"
 	depends on PCI && EXPERIMENTAL
···
 menu "Ethernet (10000 Mbit)"
 	depends on !UML
+
+config CHELSIO_T1
+	tristate "Chelsio 10Gb Ethernet support"
+	depends on PCI
+	help
+	  This driver supports Chelsio N110 and N210 models 10Gb Ethernet
+	  cards. More information about adapter features and performance
+	  tuning is in <file:Documentation/networking/cxgb.txt>.
+
+	  For general information about Chelsio and our products, visit
+	  our website at <http://www.chelsio.com>.
+
+	  For customer support, please visit our customer support page at
+	  <http://www.chelsio.com/support.htm>.
+
+	  Please send feedback to <linux-bugs@chelsio.com>.
+
+	  To compile this driver as a module, choose M here: the module
+	  will be called cxgb.
 
 config IXGB
 	tristate "Intel(R) PRO/10GbE support"
···
+#
+# Chelsio 10Gb NIC driver for Linux.
+#
+
+obj-$(CONFIG_CHELSIO_T1) += cxgb.o
+
+EXTRA_CFLAGS += -I$(TOPDIR)/drivers/net/chelsio $(DEBUG_FLAGS)
+
+
+cxgb-objs := cxgb2.o espi.o pm3393.o sge.o subr.o mv88x201x.o
+
drivers/net/chelsio/common.h (new file, +314 lines)
···11+/*****************************************************************************22+ * *33+ * File: common.h *44+ * $Revision: 1.21 $ *55+ * $Date: 2005/06/22 00:43:25 $ *66+ * Description: *77+ * part of the Chelsio 10Gb Ethernet Driver. *88+ * *99+ * This program is free software; you can redistribute it and/or modify *1010+ * it under the terms of the GNU General Public License, version 2, as *1111+ * published by the Free Software Foundation. *1212+ * *1313+ * You should have received a copy of the GNU General Public License along *1414+ * with this program; if not, write to the Free Software Foundation, Inc., *1515+ * 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. *1616+ * *1717+ * THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR IMPLIED *1818+ * WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF *1919+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. *2020+ * *2121+ * http://www.chelsio.com *2222+ * *2323+ * Copyright (c) 2003 - 2005 Chelsio Communications, Inc. *2424+ * All rights reserved. *2525+ * *2626+ * Maintainers: maintainers@chelsio.com *2727+ * *2828+ * Authors: Dimitrios Michailidis <dm@chelsio.com> *2929+ * Tina Yang <tainay@chelsio.com> *3030+ * Felix Marti <felix@chelsio.com> *3131+ * Scott Bardone <sbardone@chelsio.com> *3232+ * Kurt Ottaway <kottaway@chelsio.com> *3333+ * Frank DiMambro <frank@chelsio.com> *3434+ * *3535+ * History: *3636+ * *3737+ ****************************************************************************/3838+3939+#ifndef _CXGB_COMMON_H_4040+#define _CXGB_COMMON_H_4141+4242+#include <linux/config.h>4343+#include <linux/module.h>4444+#include <linux/netdevice.h>4545+#include <linux/types.h>4646+#include <linux/delay.h>4747+#include <linux/pci.h>4848+#include <linux/ethtool.h>4949+#include <linux/mii.h>5050+#include <linux/crc32.h>5151+#include <linux/init.h>5252+#include <asm/io.h>5353+#include <linux/pci_ids.h>5454+5555+#define DRV_DESCRIPTION "Chelsio 10Gb Ethernet Driver"5656+#define DRV_NAME "cxgb"5757+#define DRV_VERSION "2.1.1"5858+#define PFX DRV_NAME ": "5959+6060+#define CH_ERR(fmt, ...) printk(KERN_ERR PFX fmt, ## __VA_ARGS__)6161+#define CH_WARN(fmt, ...) printk(KERN_WARNING PFX fmt, ## __VA_ARGS__)6262+#define CH_ALERT(fmt, ...) 
printk(KERN_ALERT PFX fmt, ## __VA_ARGS__)6363+6464+#define CH_DEVICE(devid, ssid, idx) \6565+ { PCI_VENDOR_ID_CHELSIO, devid, PCI_ANY_ID, ssid, 0, 0, idx }6666+6767+#define SUPPORTED_PAUSE (1 << 13)6868+#define SUPPORTED_LOOPBACK (1 << 15)6969+7070+#define ADVERTISED_PAUSE (1 << 13)7171+#define ADVERTISED_ASYM_PAUSE (1 << 14)7272+7373+typedef struct adapter adapter_t;7474+7575+void t1_elmer0_ext_intr(adapter_t *adapter);7676+void t1_link_changed(adapter_t *adapter, int port_id, int link_status,7777+ int speed, int duplex, int fc);7878+7979+struct t1_rx_mode {8080+ struct net_device *dev;8181+ u32 idx;8282+ struct dev_mc_list *list;8383+};8484+8585+#define t1_rx_mode_promisc(rm) (rm->dev->flags & IFF_PROMISC)8686+#define t1_rx_mode_allmulti(rm) (rm->dev->flags & IFF_ALLMULTI)8787+#define t1_rx_mode_mc_cnt(rm) (rm->dev->mc_count)8888+8989+static inline u8 *t1_get_next_mcaddr(struct t1_rx_mode *rm)9090+{9191+ u8 *addr = 0;9292+9393+ if (rm->idx++ < rm->dev->mc_count) {9494+ addr = rm->list->dmi_addr;9595+ rm->list = rm->list->next;9696+ }9797+ return addr;9898+}9999+100100+#define MAX_NPORTS 4101101+102102+#define SPEED_INVALID 0xffff103103+#define DUPLEX_INVALID 0xff104104+105105+enum {106106+ CHBT_BOARD_N110,107107+ CHBT_BOARD_N210108108+};109109+110110+enum {111111+ CHBT_TERM_T1,112112+ CHBT_TERM_T2113113+};114114+115115+enum {116116+ CHBT_MAC_PM3393,117117+};118118+119119+enum {120120+ CHBT_PHY_88X2010,121121+};122122+123123+enum {124124+ PAUSE_RX = 1 << 0,125125+ PAUSE_TX = 1 << 1,126126+ PAUSE_AUTONEG = 1 << 2127127+};128128+129129+/* Revisions of T1 chip */130130+enum {131131+ TERM_T1A = 0,132132+ TERM_T1B = 1,133133+ TERM_T2 = 3134134+};135135+136136+struct sge_params {137137+ unsigned int cmdQ_size[2];138138+ unsigned int freelQ_size[2];139139+ unsigned int large_buf_capacity;140140+ unsigned int rx_coalesce_usecs;141141+ unsigned int last_rx_coalesce_raw;142142+ unsigned int default_rx_coalesce_usecs;143143+ unsigned int sample_interval_usecs;144144+ unsigned int coalesce_enable;145145+ unsigned int polling;146146+};147147+148148+struct chelsio_pci_params {149149+ unsigned short speed;150150+ unsigned char width;151151+ unsigned char is_pcix;152152+};153153+154154+struct adapter_params {155155+ struct sge_params sge;156156+ struct chelsio_pci_params pci;157157+158158+ const struct board_info *brd_info;159159+160160+ unsigned int nports; /* # of ethernet ports */161161+ unsigned int stats_update_period;162162+ unsigned short chip_revision;163163+ unsigned char chip_version;164164+};165165+166166+struct link_config {167167+ unsigned int supported; /* link capabilities */168168+ unsigned int advertising; /* advertised capabilities */169169+ unsigned short requested_speed; /* speed user has requested */170170+ unsigned short speed; /* actual link speed */171171+ unsigned char requested_duplex; /* duplex user has requested */172172+ unsigned char duplex; /* actual link duplex */173173+ unsigned char requested_fc; /* flow control user has requested */174174+ unsigned char fc; /* actual link flow control */175175+ unsigned char autoneg; /* autonegotiating? 
*/176176+};177177+178178+struct cmac;179179+struct cphy;180180+181181+struct port_info {182182+ struct net_device *dev;183183+ struct cmac *mac;184184+ struct cphy *phy;185185+ struct link_config link_config;186186+ struct net_device_stats netstats;187187+};188188+189189+struct sge;190190+struct peespi;191191+192192+struct adapter {193193+ u8 *regs;194194+ struct pci_dev *pdev;195195+ unsigned long registered_device_map;196196+ unsigned long open_device_map;197197+ unsigned long flags;198198+199199+ const char *name;200200+ int msg_enable;201201+ u32 mmio_len;202202+203203+ struct work_struct ext_intr_handler_task;204204+ struct adapter_params params;205205+206206+ struct vlan_group *vlan_grp;207207+208208+ /* Terminator modules. */209209+ struct sge *sge;210210+ struct peespi *espi;211211+212212+ struct port_info port[MAX_NPORTS];213213+ struct work_struct stats_update_task;214214+ struct timer_list stats_update_timer;215215+216216+ struct semaphore mib_mutex;217217+ spinlock_t tpi_lock;218218+ spinlock_t work_lock;219219+ /* guards async operations */220220+ spinlock_t async_lock ____cacheline_aligned;221221+ u32 slow_intr_mask;222222+};223223+224224+enum { /* adapter flags */225225+ FULL_INIT_DONE = 1 << 0,226226+ TSO_CAPABLE = 1 << 2,227227+ TCP_CSUM_CAPABLE = 1 << 3,228228+ UDP_CSUM_CAPABLE = 1 << 4,229229+ VLAN_ACCEL_CAPABLE = 1 << 5,230230+ RX_CSUM_ENABLED = 1 << 6,231231+};232232+233233+struct mdio_ops;234234+struct gmac;235235+struct gphy;236236+237237+struct board_info {238238+ unsigned char board;239239+ unsigned char port_number;240240+ unsigned long caps;241241+ unsigned char chip_term;242242+ unsigned char chip_mac;243243+ unsigned char chip_phy;244244+ unsigned int clock_core;245245+ unsigned int clock_mc3;246246+ unsigned int clock_mc4;247247+ unsigned int espi_nports;248248+ unsigned int clock_cspi;249249+ unsigned int clock_elmer0;250250+ unsigned char mdio_mdien;251251+ unsigned char mdio_mdiinv;252252+ unsigned char mdio_mdc;253253+ unsigned char mdio_phybaseaddr;254254+ struct gmac *gmac;255255+ struct gphy *gphy;256256+ struct mdio_ops *mdio_ops;257257+ const char *desc;258258+};259259+260260+extern struct pci_device_id t1_pci_tbl[];261261+262262+static inline int adapter_matches_type(const adapter_t *adapter,263263+ int version, int revision)264264+{265265+ return adapter->params.chip_version == version &&266266+ adapter->params.chip_revision == revision;267267+}268268+269269+#define t1_is_T1B(adap) adapter_matches_type(adap, CHBT_TERM_T1, TERM_T1B)270270+#define is_T2(adap) adapter_matches_type(adap, CHBT_TERM_T2, TERM_T2)271271+272272+/* Returns true if an adapter supports VLAN acceleration and TSO */273273+static inline int vlan_tso_capable(const adapter_t *adapter)274274+{275275+ return !t1_is_T1B(adapter);276276+}277277+278278+#define for_each_port(adapter, iter) \279279+ for (iter = 0; iter < (adapter)->params.nports; ++iter)280280+281281+#define board_info(adapter) ((adapter)->params.brd_info)282282+#define is_10G(adapter) (board_info(adapter)->caps & SUPPORTED_10000baseT_Full)283283+284284+static inline unsigned int core_ticks_per_usec(const adapter_t *adap)285285+{286286+ return board_info(adap)->clock_core / 1000000;287287+}288288+289289+extern int t1_tpi_write(adapter_t *adapter, u32 addr, u32 value);290290+extern int t1_tpi_read(adapter_t *adapter, u32 addr, u32 *value);291291+292292+extern void t1_interrupts_enable(adapter_t *adapter);293293+extern void t1_interrupts_disable(adapter_t *adapter);294294+extern void t1_interrupts_clear(adapter_t 
*adapter);295295+extern int elmer0_ext_intr_handler(adapter_t *adapter);296296+extern int t1_slow_intr_handler(adapter_t *adapter);297297+298298+extern int t1_link_start(struct cphy *phy, struct cmac *mac, struct link_config *lc);299299+extern const struct board_info *t1_get_board_info(unsigned int board_id);300300+extern const struct board_info *t1_get_board_info_from_ids(unsigned int devid,301301+ unsigned short ssid);302302+extern int t1_seeprom_read(adapter_t *adapter, u32 addr, u32 *data);303303+extern int t1_get_board_rev(adapter_t *adapter, const struct board_info *bi,304304+ struct adapter_params *p);305305+extern int t1_init_hw_modules(adapter_t *adapter);306306+extern int t1_init_sw_modules(adapter_t *adapter, const struct board_info *bi);307307+extern void t1_free_sw_modules(adapter_t *adapter);308308+extern void t1_fatal_err(adapter_t *adapter);309309+310310+extern void t1_tp_set_udp_checksum_offload(adapter_t *adapter, int enable);311311+extern void t1_tp_set_tcp_checksum_offload(adapter_t *adapter, int enable);312312+extern void t1_tp_set_ip_checksum_offload(adapter_t *adapter, int enable);313313+314314+#endif /* _CXGB_COMMON_H_ */
drivers/net/chelsio/cphy.h (new file, +148 lines)
···11+/*****************************************************************************22+ * *33+ * File: cphy.h *44+ * $Revision: 1.7 $ *55+ * $Date: 2005/06/21 18:29:47 $ *66+ * Description: *77+ * part of the Chelsio 10Gb Ethernet Driver. *88+ * *99+ * This program is free software; you can redistribute it and/or modify *1010+ * it under the terms of the GNU General Public License, version 2, as *1111+ * published by the Free Software Foundation. *1212+ * *1313+ * You should have received a copy of the GNU General Public License along *1414+ * with this program; if not, write to the Free Software Foundation, Inc., *1515+ * 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. *1616+ * *1717+ * THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR IMPLIED *1818+ * WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF *1919+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. *2020+ * *2121+ * http://www.chelsio.com *2222+ * *2323+ * Copyright (c) 2003 - 2005 Chelsio Communications, Inc. *2424+ * All rights reserved. *2525+ * *2626+ * Maintainers: maintainers@chelsio.com *2727+ * *2828+ * Authors: Dimitrios Michailidis <dm@chelsio.com> *2929+ * Tina Yang <tainay@chelsio.com> *3030+ * Felix Marti <felix@chelsio.com> *3131+ * Scott Bardone <sbardone@chelsio.com> *3232+ * Kurt Ottaway <kottaway@chelsio.com> *3333+ * Frank DiMambro <frank@chelsio.com> *3434+ * *3535+ * History: *3636+ * *3737+ ****************************************************************************/3838+3939+#ifndef _CXGB_CPHY_H_4040+#define _CXGB_CPHY_H_4141+4242+#include "common.h"4343+4444+struct mdio_ops {4545+ void (*init)(adapter_t *adapter, const struct board_info *bi);4646+ int (*read)(adapter_t *adapter, int phy_addr, int mmd_addr,4747+ int reg_addr, unsigned int *val);4848+ int (*write)(adapter_t *adapter, int phy_addr, int mmd_addr,4949+ int reg_addr, unsigned int val);5050+};5151+5252+/* PHY interrupt types */5353+enum {5454+ cphy_cause_link_change = 0x1,5555+ cphy_cause_error = 0x25656+};5757+5858+struct cphy;5959+6060+/* PHY operations */6161+struct cphy_ops {6262+ void (*destroy)(struct cphy *);6363+ int (*reset)(struct cphy *, int wait);6464+6565+ int (*interrupt_enable)(struct cphy *);6666+ int (*interrupt_disable)(struct cphy *);6767+ int (*interrupt_clear)(struct cphy *);6868+ int (*interrupt_handler)(struct cphy *);6969+7070+ int (*autoneg_enable)(struct cphy *);7171+ int (*autoneg_disable)(struct cphy *);7272+ int (*autoneg_restart)(struct cphy *);7373+7474+ int (*advertise)(struct cphy *phy, unsigned int advertise_map);7575+ int (*set_loopback)(struct cphy *, int on);7676+ int (*set_speed_duplex)(struct cphy *phy, int speed, int duplex);7777+ int (*get_link_status)(struct cphy *phy, int *link_ok, int *speed,7878+ int *duplex, int *fc);7979+};8080+8181+/* A PHY instance */8282+struct cphy {8383+ int addr; /* PHY address */8484+ adapter_t *adapter; /* associated adapter */8585+ struct cphy_ops *ops; /* PHY operations */8686+ int (*mdio_read)(adapter_t *adapter, int phy_addr, int mmd_addr,8787+ int reg_addr, unsigned int *val);8888+ int (*mdio_write)(adapter_t *adapter, int phy_addr, int mmd_addr,8989+ int reg_addr, unsigned int val);9090+ struct cphy_instance *instance;9191+};9292+9393+/* Convenience MDIO read/write wrappers */9494+static inline int mdio_read(struct cphy *cphy, int mmd, int reg,9595+ unsigned int *valp)9696+{9797+ return cphy->mdio_read(cphy->adapter, cphy->addr, mmd, reg, valp);9898+}9999+100100+static inline int mdio_write(struct cphy *cphy, int mmd, int 
reg,101101+ unsigned int val)102102+{103103+ return cphy->mdio_write(cphy->adapter, cphy->addr, mmd, reg, val);104104+}105105+106106+static inline int simple_mdio_read(struct cphy *cphy, int reg,107107+ unsigned int *valp)108108+{109109+ return mdio_read(cphy, 0, reg, valp);110110+}111111+112112+static inline int simple_mdio_write(struct cphy *cphy, int reg,113113+ unsigned int val)114114+{115115+ return mdio_write(cphy, 0, reg, val);116116+}117117+118118+/* Convenience initializer */119119+static inline void cphy_init(struct cphy *phy, adapter_t *adapter,120120+ int phy_addr, struct cphy_ops *phy_ops,121121+ struct mdio_ops *mdio_ops)122122+{123123+ phy->adapter = adapter;124124+ phy->addr = phy_addr;125125+ phy->ops = phy_ops;126126+ if (mdio_ops) {127127+ phy->mdio_read = mdio_ops->read;128128+ phy->mdio_write = mdio_ops->write;129129+ }130130+}131131+132132+/* Operations of the PHY-instance factory */133133+struct gphy {134134+ /* Construct a PHY instance with the given PHY address */135135+ struct cphy *(*create)(adapter_t *adapter, int phy_addr,136136+ struct mdio_ops *mdio_ops);137137+138138+ /*139139+ * Reset the PHY chip. This resets the whole PHY chip, not individual140140+ * ports.141141+ */142142+ int (*reset)(adapter_t *adapter);143143+};144144+145145+extern struct gphy t1_mv88x201x_ops;146146+extern struct gphy t1_dummy_phy_ops;147147+148148+#endif /* _CXGB_CPHY_H_ */
drivers/net/chelsio/cpl5_cmd.h (new file, +145 lines)
···11+/*****************************************************************************22+ * *33+ * File: cpl5_cmd.h *44+ * $Revision: 1.6 $ *55+ * $Date: 2005/06/21 18:29:47 $ *66+ * Description: *77+ * part of the Chelsio 10Gb Ethernet Driver. *88+ * *99+ * This program is free software; you can redistribute it and/or modify *1010+ * it under the terms of the GNU General Public License, version 2, as *1111+ * published by the Free Software Foundation. *1212+ * *1313+ * You should have received a copy of the GNU General Public License along *1414+ * with this program; if not, write to the Free Software Foundation, Inc., *1515+ * 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. *1616+ * *1717+ * THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR IMPLIED *1818+ * WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF *1919+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. *2020+ * *2121+ * http://www.chelsio.com *2222+ * *2323+ * Copyright (c) 2003 - 2005 Chelsio Communications, Inc. *2424+ * All rights reserved. *2525+ * *2626+ * Maintainers: maintainers@chelsio.com *2727+ * *2828+ * Authors: Dimitrios Michailidis <dm@chelsio.com> *2929+ * Tina Yang <tainay@chelsio.com> *3030+ * Felix Marti <felix@chelsio.com> *3131+ * Scott Bardone <sbardone@chelsio.com> *3232+ * Kurt Ottaway <kottaway@chelsio.com> *3333+ * Frank DiMambro <frank@chelsio.com> *3434+ * *3535+ * History: *3636+ * *3737+ ****************************************************************************/3838+3939+#ifndef _CXGB_CPL5_CMD_H_4040+#define _CXGB_CPL5_CMD_H_4141+4242+#include <asm/byteorder.h>4343+4444+#if !defined(__LITTLE_ENDIAN_BITFIELD) && !defined(__BIG_ENDIAN_BITFIELD)4545+#error "Adjust your <asm/byteorder.h> defines"4646+#endif4747+4848+enum CPL_opcode {4949+ CPL_RX_PKT = 0xAD,5050+ CPL_TX_PKT = 0xB2,5151+ CPL_TX_PKT_LSO = 0xB6,5252+};5353+5454+enum { /* TX_PKT_LSO ethernet types */5555+ CPL_ETH_II,5656+ CPL_ETH_II_VLAN,5757+ CPL_ETH_802_3,5858+ CPL_ETH_802_3_VLAN5959+};6060+6161+struct cpl_rx_data {6262+ u32 rsvd0;6363+ u32 len;6464+ u32 seq;6565+ u16 urg;6666+ u8 rsvd1;6767+ u8 status;6868+};6969+7070+/*7171+ * We want this header's alignment to be no more stringent than 2-byte aligned.7272+ * All fields are u8 or u16 except for the length. 
However that field is not7373+ * used so we break it into 2 16-bit parts to easily meet our alignment needs.7474+ */7575+struct cpl_tx_pkt {7676+ u8 opcode;7777+#if defined(__LITTLE_ENDIAN_BITFIELD)7878+ u8 iff:4;7979+ u8 ip_csum_dis:1;8080+ u8 l4_csum_dis:1;8181+ u8 vlan_valid:1;8282+ u8 rsvd:1;8383+#else8484+ u8 rsvd:1;8585+ u8 vlan_valid:1;8686+ u8 l4_csum_dis:1;8787+ u8 ip_csum_dis:1;8888+ u8 iff:4;8989+#endif9090+ u16 vlan;9191+ u16 len_hi;9292+ u16 len_lo;9393+};9494+9595+struct cpl_tx_pkt_lso {9696+ u8 opcode;9797+#if defined(__LITTLE_ENDIAN_BITFIELD)9898+ u8 iff:4;9999+ u8 ip_csum_dis:1;100100+ u8 l4_csum_dis:1;101101+ u8 vlan_valid:1;102102+ u8 rsvd:1;103103+#else104104+ u8 rsvd:1;105105+ u8 vlan_valid:1;106106+ u8 l4_csum_dis:1;107107+ u8 ip_csum_dis:1;108108+ u8 iff:4;109109+#endif110110+ u16 vlan;111111+ u32 len;112112+113113+ u32 rsvd2;114114+ u8 rsvd3;115115+#if defined(__LITTLE_ENDIAN_BITFIELD)116116+ u8 tcp_hdr_words:4;117117+ u8 ip_hdr_words:4;118118+#else119119+ u8 ip_hdr_words:4;120120+ u8 tcp_hdr_words:4;121121+#endif122122+ u16 eth_type_mss;123123+};124124+125125+struct cpl_rx_pkt {126126+ u8 opcode;127127+#if defined(__LITTLE_ENDIAN_BITFIELD)128128+ u8 iff:4;129129+ u8 csum_valid:1;130130+ u8 bad_pkt:1;131131+ u8 vlan_valid:1;132132+ u8 rsvd:1;133133+#else134134+ u8 rsvd:1;135135+ u8 vlan_valid:1;136136+ u8 bad_pkt:1;137137+ u8 csum_valid:1;138138+ u8 iff:4;139139+#endif140140+ u16 csum;141141+ u16 vlan;142142+ u16 len;143143+};144144+145145+#endif /* _CXGB_CPL5_CMD_H_ */
drivers/net/chelsio/cxgb2.c (new file, +1256 lines)
···11+/*****************************************************************************22+ * *33+ * File: cxgb2.c *44+ * $Revision: 1.25 $ *55+ * $Date: 2005/06/22 00:43:25 $ *66+ * Description: *77+ * Chelsio 10Gb Ethernet Driver. *88+ * *99+ * This program is free software; you can redistribute it and/or modify *1010+ * it under the terms of the GNU General Public License, version 2, as *1111+ * published by the Free Software Foundation. *1212+ * *1313+ * You should have received a copy of the GNU General Public License along *1414+ * with this program; if not, write to the Free Software Foundation, Inc., *1515+ * 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. *1616+ * *1717+ * THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR IMPLIED *1818+ * WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF *1919+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. *2020+ * *2121+ * http://www.chelsio.com *2222+ * *2323+ * Copyright (c) 2003 - 2005 Chelsio Communications, Inc. *2424+ * All rights reserved. *2525+ * *2626+ * Maintainers: maintainers@chelsio.com *2727+ * *2828+ * Authors: Dimitrios Michailidis <dm@chelsio.com> *2929+ * Tina Yang <tainay@chelsio.com> *3030+ * Felix Marti <felix@chelsio.com> *3131+ * Scott Bardone <sbardone@chelsio.com> *3232+ * Kurt Ottaway <kottaway@chelsio.com> *3333+ * Frank DiMambro <frank@chelsio.com> *3434+ * *3535+ * History: *3636+ * *3737+ ****************************************************************************/3838+3939+#include "common.h"4040+#include <linux/config.h>4141+#include <linux/module.h>4242+#include <linux/init.h>4343+#include <linux/pci.h>4444+#include <linux/netdevice.h>4545+#include <linux/etherdevice.h>4646+#include <linux/if_vlan.h>4747+#include <linux/mii.h>4848+#include <linux/sockios.h>4949+#include <linux/proc_fs.h>5050+#include <linux/dma-mapping.h>5151+#include <asm/uaccess.h>5252+5353+#include "cpl5_cmd.h"5454+#include "regs.h"5555+#include "gmac.h"5656+#include "cphy.h"5757+#include "sge.h"5858+#include "espi.h"5959+6060+#ifdef work_struct6161+#include <linux/tqueue.h>6262+#define INIT_WORK INIT_TQUEUE6363+#define schedule_work schedule_task6464+#define flush_scheduled_work flush_scheduled_tasks6565+6666+static inline void schedule_mac_stats_update(struct adapter *ap, int secs)6767+{6868+ mod_timer(&ap->stats_update_timer, jiffies + secs * HZ);6969+}7070+7171+static inline void cancel_mac_stats_update(struct adapter *ap)7272+{7373+ del_timer_sync(&ap->stats_update_timer);7474+ flush_scheduled_tasks();7575+}7676+7777+/*7878+ * Stats update timer for 2.4. 
It schedules a task to do the actual update as7979+ * we need to access MAC statistics in process context.8080+ */8181+static void mac_stats_timer(unsigned long data)8282+{8383+ struct adapter *ap = (struct adapter *)data;8484+8585+ schedule_task(&ap->stats_update_task);8686+}8787+#else8888+#include <linux/workqueue.h>8989+9090+static inline void schedule_mac_stats_update(struct adapter *ap, int secs)9191+{9292+ schedule_delayed_work(&ap->stats_update_task, secs * HZ);9393+}9494+9595+static inline void cancel_mac_stats_update(struct adapter *ap)9696+{9797+ cancel_delayed_work(&ap->stats_update_task);9898+}9999+#endif100100+101101+#define MAX_CMDQ_ENTRIES 16384102102+#define MAX_CMDQ1_ENTRIES 1024103103+#define MAX_RX_BUFFERS 16384104104+#define MAX_RX_JUMBO_BUFFERS 16384105105+#define MAX_TX_BUFFERS_HIGH 16384U106106+#define MAX_TX_BUFFERS_LOW 1536U107107+#define MIN_FL_ENTRIES 32108108+109109+#define PORT_MASK ((1 << MAX_NPORTS) - 1)110110+111111+#define DFLT_MSG_ENABLE (NETIF_MSG_DRV | NETIF_MSG_PROBE | NETIF_MSG_LINK | \112112+ NETIF_MSG_TIMER | NETIF_MSG_IFDOWN | NETIF_MSG_IFUP |\113113+ NETIF_MSG_RX_ERR | NETIF_MSG_TX_ERR)114114+115115+/*116116+ * The EEPROM is actually bigger but only the first few bytes are used so we117117+ * only report those.118118+ */119119+#define EEPROM_SIZE 32120120+121121+MODULE_DESCRIPTION(DRV_DESCRIPTION);122122+MODULE_AUTHOR("Chelsio Communications");123123+MODULE_LICENSE("GPL");124124+125125+static int dflt_msg_enable = DFLT_MSG_ENABLE;126126+127127+MODULE_PARM(dflt_msg_enable, "i");128128+MODULE_PARM_DESC(dflt_msg_enable, "Chelsio T1 message enable bitmap");129129+130130+131131+static const char pci_speed[][4] = {132132+ "33", "66", "100", "133"133133+};134134+135135+/*136136+ * Setup MAC to receive the types of packets we want.137137+ */138138+static void t1_set_rxmode(struct net_device *dev)139139+{140140+ struct adapter *adapter = dev->priv;141141+ struct cmac *mac = adapter->port[dev->if_port].mac;142142+ struct t1_rx_mode rm;143143+144144+ rm.dev = dev;145145+ rm.idx = 0;146146+ rm.list = dev->mc_list;147147+ mac->ops->set_rx_mode(mac, &rm);148148+}149149+150150+static void link_report(struct port_info *p)151151+{152152+ if (!netif_carrier_ok(p->dev))153153+ printk(KERN_INFO "%s: link down\n", p->dev->name);154154+ else {155155+ const char *s = "10Mbps";156156+157157+ switch (p->link_config.speed) {158158+ case SPEED_10000: s = "10Gbps"; break;159159+ case SPEED_1000: s = "1000Mbps"; break;160160+ case SPEED_100: s = "100Mbps"; break;161161+ }162162+163163+ printk(KERN_INFO "%s: link up, %s, %s-duplex\n",164164+ p->dev->name, s,165165+ p->link_config.duplex == DUPLEX_FULL ? 
"full" : "half");166166+ }167167+}168168+169169+void t1_link_changed(struct adapter *adapter, int port_id, int link_stat,170170+ int speed, int duplex, int pause)171171+{172172+ struct port_info *p = &adapter->port[port_id];173173+174174+ if (link_stat != netif_carrier_ok(p->dev)) {175175+ if (link_stat)176176+ netif_carrier_on(p->dev);177177+ else178178+ netif_carrier_off(p->dev);179179+ link_report(p);180180+181181+ }182182+}183183+184184+static void link_start(struct port_info *p)185185+{186186+ struct cmac *mac = p->mac;187187+188188+ mac->ops->reset(mac);189189+ if (mac->ops->macaddress_set)190190+ mac->ops->macaddress_set(mac, p->dev->dev_addr);191191+ t1_set_rxmode(p->dev);192192+ t1_link_start(p->phy, mac, &p->link_config);193193+ mac->ops->enable(mac, MAC_DIRECTION_RX | MAC_DIRECTION_TX);194194+}195195+196196+static void enable_hw_csum(struct adapter *adapter)197197+{198198+ if (adapter->flags & TSO_CAPABLE)199199+ t1_tp_set_ip_checksum_offload(adapter, 1); /* for TSO only */200200+ t1_tp_set_tcp_checksum_offload(adapter, 1);201201+}202202+203203+/*204204+ * Things to do upon first use of a card.205205+ * This must run with the rtnl lock held.206206+ */207207+static int cxgb_up(struct adapter *adapter)208208+{209209+ int err = 0;210210+211211+ if (!(adapter->flags & FULL_INIT_DONE)) {212212+ err = t1_init_hw_modules(adapter);213213+ if (err)214214+ goto out_err;215215+216216+ enable_hw_csum(adapter);217217+ adapter->flags |= FULL_INIT_DONE;218218+ }219219+220220+ t1_interrupts_clear(adapter);221221+ if ((err = request_irq(adapter->pdev->irq,222222+ t1_select_intr_handler(adapter), SA_SHIRQ,223223+ adapter->name, adapter))) {224224+ goto out_err;225225+ }226226+ t1_sge_start(adapter->sge);227227+ t1_interrupts_enable(adapter);228228+ out_err:229229+ return err;230230+}231231+232232+/*233233+ * Release resources when all the ports have been stopped.234234+ */235235+static void cxgb_down(struct adapter *adapter)236236+{237237+ t1_sge_stop(adapter->sge);238238+ t1_interrupts_disable(adapter);239239+ free_irq(adapter->pdev->irq, adapter);240240+}241241+242242+static int cxgb_open(struct net_device *dev)243243+{244244+ int err;245245+ struct adapter *adapter = dev->priv;246246+ int other_ports = adapter->open_device_map & PORT_MASK;247247+248248+ if (!adapter->open_device_map && (err = cxgb_up(adapter)) < 0)249249+ return err;250250+251251+ __set_bit(dev->if_port, &adapter->open_device_map);252252+ link_start(&adapter->port[dev->if_port]);253253+ netif_start_queue(dev);254254+ if (!other_ports && adapter->params.stats_update_period)255255+ schedule_mac_stats_update(adapter,256256+ adapter->params.stats_update_period);257257+ return 0;258258+}259259+260260+static int cxgb_close(struct net_device *dev)261261+{262262+ struct adapter *adapter = dev->priv;263263+ struct port_info *p = &adapter->port[dev->if_port];264264+ struct cmac *mac = p->mac;265265+266266+ netif_stop_queue(dev);267267+ mac->ops->disable(mac, MAC_DIRECTION_TX | MAC_DIRECTION_RX);268268+ netif_carrier_off(dev);269269+270270+ clear_bit(dev->if_port, &adapter->open_device_map);271271+ if (adapter->params.stats_update_period &&272272+ !(adapter->open_device_map & PORT_MASK)) {273273+ /* Stop statistics accumulation. 
*/274274+ smp_mb__after_clear_bit();275275+ spin_lock(&adapter->work_lock); /* sync with update task */276276+ spin_unlock(&adapter->work_lock);277277+ cancel_mac_stats_update(adapter);278278+ }279279+280280+ if (!adapter->open_device_map)281281+ cxgb_down(adapter);282282+ return 0;283283+}284284+285285+static struct net_device_stats *t1_get_stats(struct net_device *dev)286286+{287287+ struct adapter *adapter = dev->priv;288288+ struct port_info *p = &adapter->port[dev->if_port];289289+ struct net_device_stats *ns = &p->netstats;290290+ const struct cmac_statistics *pstats;291291+292292+ /* Do a full update of the MAC stats */293293+ pstats = p->mac->ops->statistics_update(p->mac,294294+ MAC_STATS_UPDATE_FULL);295295+296296+ ns->tx_packets = pstats->TxUnicastFramesOK +297297+ pstats->TxMulticastFramesOK + pstats->TxBroadcastFramesOK;298298+299299+ ns->rx_packets = pstats->RxUnicastFramesOK +300300+ pstats->RxMulticastFramesOK + pstats->RxBroadcastFramesOK;301301+302302+ ns->tx_bytes = pstats->TxOctetsOK;303303+ ns->rx_bytes = pstats->RxOctetsOK;304304+305305+ ns->tx_errors = pstats->TxLateCollisions + pstats->TxLengthErrors +306306+ pstats->TxUnderrun + pstats->TxFramesAbortedDueToXSCollisions;307307+ ns->rx_errors = pstats->RxDataErrors + pstats->RxJabberErrors +308308+ pstats->RxFCSErrors + pstats->RxAlignErrors +309309+ pstats->RxSequenceErrors + pstats->RxFrameTooLongErrors +310310+ pstats->RxSymbolErrors + pstats->RxRuntErrors;311311+312312+ ns->multicast = pstats->RxMulticastFramesOK;313313+ ns->collisions = pstats->TxTotalCollisions;314314+315315+ /* detailed rx_errors */316316+ ns->rx_length_errors = pstats->RxFrameTooLongErrors +317317+ pstats->RxJabberErrors;318318+ ns->rx_over_errors = 0;319319+ ns->rx_crc_errors = pstats->RxFCSErrors;320320+ ns->rx_frame_errors = pstats->RxAlignErrors;321321+ ns->rx_fifo_errors = 0;322322+ ns->rx_missed_errors = 0;323323+324324+ /* detailed tx_errors */325325+ ns->tx_aborted_errors = pstats->TxFramesAbortedDueToXSCollisions;326326+ ns->tx_carrier_errors = 0;327327+ ns->tx_fifo_errors = pstats->TxUnderrun;328328+ ns->tx_heartbeat_errors = 0;329329+ ns->tx_window_errors = pstats->TxLateCollisions;330330+ return ns;331331+}332332+333333+static u32 get_msglevel(struct net_device *dev)334334+{335335+ struct adapter *adapter = dev->priv;336336+337337+ return adapter->msg_enable;338338+}339339+340340+static void set_msglevel(struct net_device *dev, u32 val)341341+{342342+ struct adapter *adapter = dev->priv;343343+344344+ adapter->msg_enable = val;345345+}346346+347347+static char stats_strings[][ETH_GSTRING_LEN] = {348348+ "TxOctetsOK",349349+ "TxOctetsBad",350350+ "TxUnicastFramesOK",351351+ "TxMulticastFramesOK",352352+ "TxBroadcastFramesOK",353353+ "TxPauseFrames",354354+ "TxFramesWithDeferredXmissions",355355+ "TxLateCollisions",356356+ "TxTotalCollisions",357357+ "TxFramesAbortedDueToXSCollisions",358358+ "TxUnderrun",359359+ "TxLengthErrors",360360+ "TxInternalMACXmitError",361361+ "TxFramesWithExcessiveDeferral",362362+ "TxFCSErrors",363363+364364+ "RxOctetsOK",365365+ "RxOctetsBad",366366+ "RxUnicastFramesOK",367367+ "RxMulticastFramesOK",368368+ "RxBroadcastFramesOK",369369+ "RxPauseFrames",370370+ "RxFCSErrors",371371+ "RxAlignErrors",372372+ "RxSymbolErrors",373373+ "RxDataErrors",374374+ "RxSequenceErrors",375375+ "RxRuntErrors",376376+ "RxJabberErrors",377377+ "RxInternalMACRcvError",378378+ "RxInRangeLengthErrors",379379+ "RxOutOfRangeLengthField",380380+ "RxFrameTooLongErrors",381381+382382+ "TSO",383383+ "VLANextractions",384384+ 
"VLANinsertions",385385+ "RxCsumGood",386386+ "TxCsumOffload",387387+ "RxDrops"388388+389389+ "respQ_empty",390390+ "respQ_overflow",391391+ "freelistQ_empty",392392+ "pkt_too_big",393393+ "pkt_mismatch",394394+ "cmdQ_full0",395395+ "cmdQ_full1",396396+ "tx_ipfrags",397397+ "tx_reg_pkts",398398+ "tx_lso_pkts",399399+ "tx_do_cksum",400400+401401+ "espi_DIP2ParityErr",402402+ "espi_DIP4Err",403403+ "espi_RxDrops",404404+ "espi_TxDrops",405405+ "espi_RxOvfl",406406+ "espi_ParityErr"407407+};408408+409409+#define T2_REGMAP_SIZE (3 * 1024)410410+411411+static int get_regs_len(struct net_device *dev)412412+{413413+ return T2_REGMAP_SIZE;414414+}415415+416416+static void get_drvinfo(struct net_device *dev, struct ethtool_drvinfo *info)417417+{418418+ struct adapter *adapter = dev->priv;419419+420420+ strcpy(info->driver, DRV_NAME);421421+ strcpy(info->version, DRV_VERSION);422422+ strcpy(info->fw_version, "N/A");423423+ strcpy(info->bus_info, pci_name(adapter->pdev));424424+}425425+426426+static int get_stats_count(struct net_device *dev)427427+{428428+ return ARRAY_SIZE(stats_strings);429429+}430430+431431+static void get_strings(struct net_device *dev, u32 stringset, u8 *data)432432+{433433+ if (stringset == ETH_SS_STATS)434434+ memcpy(data, stats_strings, sizeof(stats_strings));435435+}436436+437437+static void get_stats(struct net_device *dev, struct ethtool_stats *stats,438438+ u64 *data)439439+{440440+ struct adapter *adapter = dev->priv;441441+ struct cmac *mac = adapter->port[dev->if_port].mac;442442+ const struct cmac_statistics *s;443443+ const struct sge_port_stats *ss;444444+ const struct sge_intr_counts *t;445445+446446+ s = mac->ops->statistics_update(mac, MAC_STATS_UPDATE_FULL);447447+ ss = t1_sge_get_port_stats(adapter->sge, dev->if_port);448448+ t = t1_sge_get_intr_counts(adapter->sge);449449+450450+ *data++ = s->TxOctetsOK;451451+ *data++ = s->TxOctetsBad;452452+ *data++ = s->TxUnicastFramesOK;453453+ *data++ = s->TxMulticastFramesOK;454454+ *data++ = s->TxBroadcastFramesOK;455455+ *data++ = s->TxPauseFrames;456456+ *data++ = s->TxFramesWithDeferredXmissions;457457+ *data++ = s->TxLateCollisions;458458+ *data++ = s->TxTotalCollisions;459459+ *data++ = s->TxFramesAbortedDueToXSCollisions;460460+ *data++ = s->TxUnderrun;461461+ *data++ = s->TxLengthErrors;462462+ *data++ = s->TxInternalMACXmitError;463463+ *data++ = s->TxFramesWithExcessiveDeferral;464464+ *data++ = s->TxFCSErrors;465465+466466+ *data++ = s->RxOctetsOK;467467+ *data++ = s->RxOctetsBad;468468+ *data++ = s->RxUnicastFramesOK;469469+ *data++ = s->RxMulticastFramesOK;470470+ *data++ = s->RxBroadcastFramesOK;471471+ *data++ = s->RxPauseFrames;472472+ *data++ = s->RxFCSErrors;473473+ *data++ = s->RxAlignErrors;474474+ *data++ = s->RxSymbolErrors;475475+ *data++ = s->RxDataErrors;476476+ *data++ = s->RxSequenceErrors;477477+ *data++ = s->RxRuntErrors;478478+ *data++ = s->RxJabberErrors;479479+ *data++ = s->RxInternalMACRcvError;480480+ *data++ = s->RxInRangeLengthErrors;481481+ *data++ = s->RxOutOfRangeLengthField;482482+ *data++ = s->RxFrameTooLongErrors;483483+484484+ *data++ = ss->tso;485485+ *data++ = ss->vlan_xtract;486486+ *data++ = ss->vlan_insert;487487+ *data++ = ss->rx_cso_good;488488+ *data++ = ss->tx_cso;489489+ *data++ = ss->rx_drops;490490+491491+ *data++ = (u64)t->respQ_empty;492492+ *data++ = (u64)t->respQ_overflow;493493+ *data++ = (u64)t->freelistQ_empty;494494+ *data++ = (u64)t->pkt_too_big;495495+ *data++ = (u64)t->pkt_mismatch;496496+ *data++ = (u64)t->cmdQ_full[0];497497+ *data++ = 
(u64)t->cmdQ_full[1];498498+ *data++ = (u64)t->tx_ipfrags;499499+ *data++ = (u64)t->tx_reg_pkts;500500+ *data++ = (u64)t->tx_lso_pkts;501501+ *data++ = (u64)t->tx_do_cksum;502502+}503503+504504+static inline void reg_block_dump(struct adapter *ap, void *buf,505505+ unsigned int start, unsigned int end)506506+{507507+ u32 *p = buf + start;508508+509509+ for ( ; start <= end; start += sizeof(u32))510510+ *p++ = readl(ap->regs + start);511511+}512512+513513+static void get_regs(struct net_device *dev, struct ethtool_regs *regs,514514+ void *buf)515515+{516516+ struct adapter *ap = dev->priv;517517+518518+ /*519519+ * Version scheme: bits 0..9: chip version, bits 10..15: chip revision520520+ */521521+ regs->version = 2;522522+523523+ memset(buf, 0, T2_REGMAP_SIZE);524524+ reg_block_dump(ap, buf, 0, A_SG_RESPACCUTIMER);525525+}526526+527527+static int get_settings(struct net_device *dev, struct ethtool_cmd *cmd)528528+{529529+ struct adapter *adapter = dev->priv;530530+ struct port_info *p = &adapter->port[dev->if_port];531531+532532+ cmd->supported = p->link_config.supported;533533+ cmd->advertising = p->link_config.advertising;534534+535535+ if (netif_carrier_ok(dev)) {536536+ cmd->speed = p->link_config.speed;537537+ cmd->duplex = p->link_config.duplex;538538+ } else {539539+ cmd->speed = -1;540540+ cmd->duplex = -1;541541+ }542542+543543+ cmd->port = (cmd->supported & SUPPORTED_TP) ? PORT_TP : PORT_FIBRE;544544+ cmd->phy_address = p->phy->addr;545545+ cmd->transceiver = XCVR_EXTERNAL;546546+ cmd->autoneg = p->link_config.autoneg;547547+ cmd->maxtxpkt = 0;548548+ cmd->maxrxpkt = 0;549549+ return 0;550550+}551551+552552+static int speed_duplex_to_caps(int speed, int duplex)553553+{554554+ int cap = 0;555555+556556+ switch (speed) {557557+ case SPEED_10:558558+ if (duplex == DUPLEX_FULL)559559+ cap = SUPPORTED_10baseT_Full;560560+ else561561+ cap = SUPPORTED_10baseT_Half;562562+ break;563563+ case SPEED_100:564564+ if (duplex == DUPLEX_FULL)565565+ cap = SUPPORTED_100baseT_Full;566566+ else567567+ cap = SUPPORTED_100baseT_Half;568568+ break;569569+ case SPEED_1000:570570+ if (duplex == DUPLEX_FULL)571571+ cap = SUPPORTED_1000baseT_Full;572572+ else573573+ cap = SUPPORTED_1000baseT_Half;574574+ break;575575+ case SPEED_10000:576576+ if (duplex == DUPLEX_FULL)577577+ cap = SUPPORTED_10000baseT_Full;578578+ }579579+ return cap;580580+}581581+582582+#define ADVERTISED_MASK (ADVERTISED_10baseT_Half | ADVERTISED_10baseT_Full | \583583+ ADVERTISED_100baseT_Half | ADVERTISED_100baseT_Full | \584584+ ADVERTISED_1000baseT_Half | ADVERTISED_1000baseT_Full | \585585+ ADVERTISED_10000baseT_Full)586586+587587+static int set_settings(struct net_device *dev, struct ethtool_cmd *cmd)588588+{589589+ struct adapter *adapter = dev->priv;590590+ struct port_info *p = &adapter->port[dev->if_port];591591+ struct link_config *lc = &p->link_config;592592+593593+ if (!(lc->supported & SUPPORTED_Autoneg))594594+ return -EOPNOTSUPP; /* can't change speed/duplex */595595+596596+ if (cmd->autoneg == AUTONEG_DISABLE) {597597+ int cap = speed_duplex_to_caps(cmd->speed, cmd->duplex);598598+599599+ if (!(lc->supported & cap) || cmd->speed == SPEED_1000)600600+ return -EINVAL;601601+ lc->requested_speed = cmd->speed;602602+ lc->requested_duplex = cmd->duplex;603603+ lc->advertising = 0;604604+ } else {605605+ cmd->advertising &= ADVERTISED_MASK;606606+ if (cmd->advertising & (cmd->advertising - 1))607607+ cmd->advertising = lc->supported;608608+ cmd->advertising &= lc->supported;609609+ if (!cmd->advertising)610610+ return 
-EINVAL;611611+ lc->requested_speed = SPEED_INVALID;612612+ lc->requested_duplex = DUPLEX_INVALID;613613+ lc->advertising = cmd->advertising | ADVERTISED_Autoneg;614614+ }615615+ lc->autoneg = cmd->autoneg;616616+ if (netif_running(dev))617617+ t1_link_start(p->phy, p->mac, lc);618618+ return 0;619619+}620620+621621+static void get_pauseparam(struct net_device *dev,622622+ struct ethtool_pauseparam *epause)623623+{624624+ struct adapter *adapter = dev->priv;625625+ struct port_info *p = &adapter->port[dev->if_port];626626+627627+ epause->autoneg = (p->link_config.requested_fc & PAUSE_AUTONEG) != 0;628628+ epause->rx_pause = (p->link_config.fc & PAUSE_RX) != 0;629629+ epause->tx_pause = (p->link_config.fc & PAUSE_TX) != 0;630630+}631631+632632+static int set_pauseparam(struct net_device *dev,633633+ struct ethtool_pauseparam *epause)634634+{635635+ struct adapter *adapter = dev->priv;636636+ struct port_info *p = &adapter->port[dev->if_port];637637+ struct link_config *lc = &p->link_config;638638+639639+ if (epause->autoneg == AUTONEG_DISABLE)640640+ lc->requested_fc = 0;641641+ else if (lc->supported & SUPPORTED_Autoneg)642642+ lc->requested_fc = PAUSE_AUTONEG;643643+ else644644+ return -EINVAL;645645+646646+ if (epause->rx_pause)647647+ lc->requested_fc |= PAUSE_RX;648648+ if (epause->tx_pause)649649+ lc->requested_fc |= PAUSE_TX;650650+ if (lc->autoneg == AUTONEG_ENABLE) {651651+ if (netif_running(dev))652652+ t1_link_start(p->phy, p->mac, lc);653653+ } else {654654+ lc->fc = lc->requested_fc & (PAUSE_RX | PAUSE_TX);655655+ if (netif_running(dev))656656+ p->mac->ops->set_speed_duplex_fc(p->mac, -1, -1,657657+ lc->fc);658658+ }659659+ return 0;660660+}661661+662662+static u32 get_rx_csum(struct net_device *dev)663663+{664664+ struct adapter *adapter = dev->priv;665665+666666+ return (adapter->flags & RX_CSUM_ENABLED) != 0;667667+}668668+669669+static int set_rx_csum(struct net_device *dev, u32 data)670670+{671671+ struct adapter *adapter = dev->priv;672672+673673+ if (data)674674+ adapter->flags |= RX_CSUM_ENABLED;675675+ else676676+ adapter->flags &= ~RX_CSUM_ENABLED;677677+ return 0;678678+}679679+680680+static int set_tso(struct net_device *dev, u32 value)681681+{682682+ struct adapter *adapter = dev->priv;683683+684684+ if (!(adapter->flags & TSO_CAPABLE))685685+ return value ? -EOPNOTSUPP : 0;686686+ return ethtool_op_set_tso(dev, value);687687+}688688+689689+static void get_sge_param(struct net_device *dev, struct ethtool_ringparam *e)690690+{691691+ struct adapter *adapter = dev->priv;692692+ int jumbo_fl = t1_is_T1B(adapter) ? 1 : 0;693693+694694+ e->rx_max_pending = MAX_RX_BUFFERS;695695+ e->rx_mini_max_pending = 0;696696+ e->rx_jumbo_max_pending = MAX_RX_JUMBO_BUFFERS;697697+ e->tx_max_pending = MAX_CMDQ_ENTRIES;698698+699699+ e->rx_pending = adapter->params.sge.freelQ_size[!jumbo_fl];700700+ e->rx_mini_pending = 0;701701+ e->rx_jumbo_pending = adapter->params.sge.freelQ_size[jumbo_fl];702702+ e->tx_pending = adapter->params.sge.cmdQ_size[0];703703+}704704+705705+static int set_sge_param(struct net_device *dev, struct ethtool_ringparam *e)706706+{707707+ struct adapter *adapter = dev->priv;708708+ int jumbo_fl = t1_is_T1B(adapter) ? 
1 : 0;709709+710710+ if (e->rx_pending > MAX_RX_BUFFERS || e->rx_mini_pending ||711711+ e->rx_jumbo_pending > MAX_RX_JUMBO_BUFFERS ||712712+ e->tx_pending > MAX_CMDQ_ENTRIES ||713713+ e->rx_pending < MIN_FL_ENTRIES ||714714+ e->rx_jumbo_pending < MIN_FL_ENTRIES ||715715+ e->tx_pending < (adapter->params.nports + 1) * (MAX_SKB_FRAGS + 1))716716+ return -EINVAL;717717+718718+ if (adapter->flags & FULL_INIT_DONE)719719+ return -EBUSY;720720+721721+ adapter->params.sge.freelQ_size[!jumbo_fl] = e->rx_pending;722722+ adapter->params.sge.freelQ_size[jumbo_fl] = e->rx_jumbo_pending;723723+ adapter->params.sge.cmdQ_size[0] = e->tx_pending;724724+ adapter->params.sge.cmdQ_size[1] = e->tx_pending > MAX_CMDQ1_ENTRIES ?725725+ MAX_CMDQ1_ENTRIES : e->tx_pending;726726+ return 0;727727+}728728+729729+static int set_coalesce(struct net_device *dev, struct ethtool_coalesce *c)730730+{731731+ struct adapter *adapter = dev->priv;732732+733733+ /*734734+ * If RX coalescing is requested we use NAPI, otherwise interrupts.735735+ * This choice can be made only when all ports and the TOE are off.736736+ */737737+ if (adapter->open_device_map == 0)738738+ adapter->params.sge.polling = c->use_adaptive_rx_coalesce;739739+740740+ if (adapter->params.sge.polling) {741741+ adapter->params.sge.rx_coalesce_usecs = 0;742742+ } else {743743+ adapter->params.sge.rx_coalesce_usecs = c->rx_coalesce_usecs;744744+ }745745+ adapter->params.sge.coalesce_enable = c->use_adaptive_rx_coalesce;746746+ adapter->params.sge.sample_interval_usecs = c->rate_sample_interval;747747+ t1_sge_set_coalesce_params(adapter->sge, &adapter->params.sge);748748+ return 0;749749+}750750+751751+static int get_coalesce(struct net_device *dev, struct ethtool_coalesce *c)752752+{753753+ struct adapter *adapter = dev->priv;754754+755755+ c->rx_coalesce_usecs = adapter->params.sge.rx_coalesce_usecs;756756+ c->rate_sample_interval = adapter->params.sge.sample_interval_usecs;757757+ c->use_adaptive_rx_coalesce = adapter->params.sge.coalesce_enable;758758+ return 0;759759+}760760+761761+static int get_eeprom_len(struct net_device *dev)762762+{763763+ return EEPROM_SIZE;764764+}765765+766766+#define EEPROM_MAGIC(ap) \767767+ (PCI_VENDOR_ID_CHELSIO | ((ap)->params.chip_version << 16))768768+769769+static int get_eeprom(struct net_device *dev, struct ethtool_eeprom *e,770770+ u8 *data)771771+{772772+ int i;773773+ u8 buf[EEPROM_SIZE] __attribute__((aligned(4)));774774+ struct adapter *adapter = dev->priv;775775+776776+ e->magic = EEPROM_MAGIC(adapter);777777+ for (i = e->offset & ~3; i < e->offset + e->len; i += sizeof(u32))778778+ t1_seeprom_read(adapter, i, (u32 *)&buf[i]);779779+ memcpy(data, buf + e->offset, e->len);780780+ return 0;781781+}782782+783783+static struct ethtool_ops t1_ethtool_ops = {784784+ .get_settings = get_settings,785785+ .set_settings = set_settings,786786+ .get_drvinfo = get_drvinfo,787787+ .get_msglevel = get_msglevel,788788+ .set_msglevel = set_msglevel,789789+ .get_ringparam = get_sge_param,790790+ .set_ringparam = set_sge_param,791791+ .get_coalesce = get_coalesce,792792+ .set_coalesce = set_coalesce,793793+ .get_eeprom_len = get_eeprom_len,794794+ .get_eeprom = get_eeprom,795795+ .get_pauseparam = get_pauseparam,796796+ .set_pauseparam = set_pauseparam,797797+ .get_rx_csum = get_rx_csum,798798+ .set_rx_csum = set_rx_csum,799799+ .get_tx_csum = ethtool_op_get_tx_csum,800800+ .set_tx_csum = ethtool_op_set_tx_csum,801801+ .get_sg = ethtool_op_get_sg,802802+ .set_sg = ethtool_op_set_sg,803803+ .get_link = ethtool_op_get_link,804804+ 
.get_strings = get_strings,805805+ .get_stats_count = get_stats_count,806806+ .get_ethtool_stats = get_stats,807807+ .get_regs_len = get_regs_len,808808+ .get_regs = get_regs,809809+ .get_tso = ethtool_op_get_tso,810810+ .set_tso = set_tso,811811+};812812+813813+static void cxgb_proc_cleanup(struct adapter *adapter,814814+ struct proc_dir_entry *dir)815815+{816816+ const char *name;817817+ name = adapter->name;818818+ remove_proc_entry(name, dir);819819+}820820+//#define chtoe_setup_toedev(adapter) NULL821821+#define update_mtu_tab(adapter)822822+#define write_smt_entry(adapter, idx)823823+824824+static int t1_ioctl(struct net_device *dev, struct ifreq *req, int cmd)825825+{826826+ struct adapter *adapter = dev->priv;827827+ struct mii_ioctl_data *data = (struct mii_ioctl_data *)&req->ifr_data;828828+829829+ switch (cmd) {830830+ case SIOCGMIIPHY:831831+ data->phy_id = adapter->port[dev->if_port].phy->addr;832832+ /* FALLTHRU */833833+ case SIOCGMIIREG: {834834+ struct cphy *phy = adapter->port[dev->if_port].phy;835835+ u32 val;836836+837837+ if (!phy->mdio_read)838838+ return -EOPNOTSUPP;839839+ phy->mdio_read(adapter, data->phy_id, 0, data->reg_num & 0x1f,840840+ &val);841841+ data->val_out = val;842842+ break;843843+ }844844+ case SIOCSMIIREG: {845845+ struct cphy *phy = adapter->port[dev->if_port].phy;846846+847847+ if (!capable(CAP_NET_ADMIN))848848+ return -EPERM;849849+ if (!phy->mdio_write)850850+ return -EOPNOTSUPP;851851+ phy->mdio_write(adapter, data->phy_id, 0, data->reg_num & 0x1f,852852+ data->val_in);853853+ break;854854+ }855855+856856+ default:857857+ return -EOPNOTSUPP;858858+ }859859+ return 0;860860+}861861+862862+static int t1_change_mtu(struct net_device *dev, int new_mtu)863863+{864864+ int ret;865865+ struct adapter *adapter = dev->priv;866866+ struct cmac *mac = adapter->port[dev->if_port].mac;867867+868868+ if (!mac->ops->set_mtu)869869+ return -EOPNOTSUPP;870870+ if (new_mtu < 68)871871+ return -EINVAL;872872+ if ((ret = mac->ops->set_mtu(mac, new_mtu)))873873+ return ret;874874+ dev->mtu = new_mtu;875875+ return 0;876876+}877877+878878+static int t1_set_mac_addr(struct net_device *dev, void *p)879879+{880880+ struct adapter *adapter = dev->priv;881881+ struct cmac *mac = adapter->port[dev->if_port].mac;882882+ struct sockaddr *addr = p;883883+884884+ if (!mac->ops->macaddress_set)885885+ return -EOPNOTSUPP;886886+887887+ memcpy(dev->dev_addr, addr->sa_data, dev->addr_len);888888+ mac->ops->macaddress_set(mac, dev->dev_addr);889889+ return 0;890890+}891891+892892+#if defined(CONFIG_VLAN_8021Q) || defined(CONFIG_VLAN_8021Q_MODULE)893893+static void vlan_rx_register(struct net_device *dev,894894+ struct vlan_group *grp)895895+{896896+ struct adapter *adapter = dev->priv;897897+898898+ spin_lock_irq(&adapter->async_lock);899899+ adapter->vlan_grp = grp;900900+ t1_set_vlan_accel(adapter, grp != NULL);901901+ spin_unlock_irq(&adapter->async_lock);902902+}903903+904904+static void vlan_rx_kill_vid(struct net_device *dev, unsigned short vid)905905+{906906+ struct adapter *adapter = dev->priv;907907+908908+ spin_lock_irq(&adapter->async_lock);909909+ if (adapter->vlan_grp)910910+ adapter->vlan_grp->vlan_devices[vid] = NULL;911911+ spin_unlock_irq(&adapter->async_lock);912912+}913913+#endif914914+915915+#ifdef CONFIG_NET_POLL_CONTROLLER916916+static void t1_netpoll(struct net_device *dev)917917+{918918+ unsigned long flags;919919+ struct adapter *adapter = dev->priv;920920+921921+ local_irq_save(flags);922922+ t1_select_intr_handler(adapter)(adapter->pdev->irq, adapter, 
NULL);923923+ local_irq_restore(flags);924924+}925925+#endif926926+927927+/*928928+ * Periodic accumulation of MAC statistics. This is used only if the MAC929929+ * does not have any other way to prevent stats counter overflow.930930+ */931931+static void mac_stats_task(void *data)932932+{933933+ int i;934934+ struct adapter *adapter = data;935935+936936+ for_each_port(adapter, i) {937937+ struct port_info *p = &adapter->port[i];938938+939939+ if (netif_running(p->dev))940940+ p->mac->ops->statistics_update(p->mac,941941+ MAC_STATS_UPDATE_FAST);942942+ }943943+944944+ /* Schedule the next statistics update if any port is active. */945945+ spin_lock(&adapter->work_lock);946946+ if (adapter->open_device_map & PORT_MASK)947947+ schedule_mac_stats_update(adapter,948948+ adapter->params.stats_update_period);949949+ spin_unlock(&adapter->work_lock);950950+}951951+952952+/*953953+ * Processes elmer0 external interrupts in process context.954954+ */955955+static void ext_intr_task(void *data)956956+{957957+ struct adapter *adapter = data;958958+959959+ elmer0_ext_intr_handler(adapter);960960+961961+ /* Now reenable external interrupts */962962+ spin_lock_irq(&adapter->async_lock);963963+ adapter->slow_intr_mask |= F_PL_INTR_EXT;964964+ writel(F_PL_INTR_EXT, adapter->regs + A_PL_CAUSE);965965+ writel(adapter->slow_intr_mask | F_PL_INTR_SGE_DATA,966966+ adapter->regs + A_PL_ENABLE);967967+ spin_unlock_irq(&adapter->async_lock);968968+}969969+970970+/*971971+ * Interrupt-context handler for elmer0 external interrupts.972972+ */973973+void t1_elmer0_ext_intr(struct adapter *adapter)974974+{975975+ /*976976+ * Schedule a task to handle external interrupts as we require977977+ * a process context. We disable EXT interrupts in the interim978978+ * and let the task reenable them when it's done.979979+ */980980+ adapter->slow_intr_mask &= ~F_PL_INTR_EXT;981981+ writel(adapter->slow_intr_mask | F_PL_INTR_SGE_DATA,982982+ adapter->regs + A_PL_ENABLE);983983+ schedule_work(&adapter->ext_intr_handler_task);984984+}985985+986986+void t1_fatal_err(struct adapter *adapter)987987+{988988+ if (adapter->flags & FULL_INIT_DONE) {989989+ t1_sge_stop(adapter->sge);990990+ t1_interrupts_disable(adapter);991991+ }992992+ CH_ALERT("%s: encountered fatal error, operation suspended\n",993993+ adapter->name);994994+}995995+996996+static int __devinit init_one(struct pci_dev *pdev,997997+ const struct pci_device_id *ent)998998+{999999+ static int version_printed;10001000+10011001+ int i, err, pci_using_dac = 0;10021002+ unsigned long mmio_start, mmio_len;10031003+ const struct board_info *bi;10041004+ struct adapter *adapter = NULL;10051005+ struct port_info *pi;10061006+10071007+ if (!version_printed) {10081008+ printk(KERN_INFO "%s - version %s\n", DRV_DESCRIPTION,10091009+ DRV_VERSION);10101010+ ++version_printed;10111011+ }10121012+10131013+ err = pci_enable_device(pdev);10141014+ if (err)10151015+ return err;10161016+10171017+ if (!(pci_resource_flags(pdev, 0) & IORESOURCE_MEM)) {10181018+ CH_ERR("%s: cannot find PCI device memory base address\n",10191019+ pci_name(pdev));10201020+ err = -ENODEV;10211021+ goto out_disable_pdev;10221022+ }10231023+10241024+ if (!pci_set_dma_mask(pdev, DMA_64BIT_MASK)) {10251025+ pci_using_dac = 1;10261026+10271027+ if (pci_set_consistent_dma_mask(pdev, DMA_64BIT_MASK)) {10281028+ CH_ERR("%s: unable to obtain 64-bit DMA for"10291029+ "consistent allocations\n", pci_name(pdev));10301030+ err = -ENODEV;10311031+ goto out_disable_pdev;10321032+ }10331033+10341034+ } else if ((err = 
pci_set_dma_mask(pdev, DMA_32BIT_MASK)) != 0) {10351035+ CH_ERR("%s: no usable DMA configuration\n", pci_name(pdev));10361036+ goto out_disable_pdev;10371037+ }10381038+10391039+ err = pci_request_regions(pdev, DRV_NAME);10401040+ if (err) {10411041+ CH_ERR("%s: cannot obtain PCI resources\n", pci_name(pdev));10421042+ goto out_disable_pdev;10431043+ }10441044+10451045+ pci_set_master(pdev);10461046+10471047+ mmio_start = pci_resource_start(pdev, 0);10481048+ mmio_len = pci_resource_len(pdev, 0);10491049+ bi = t1_get_board_info(ent->driver_data);10501050+10511051+ for (i = 0; i < bi->port_number; ++i) {10521052+ struct net_device *netdev;10531053+10541054+ netdev = alloc_etherdev(adapter ? 0 : sizeof(*adapter));10551055+ if (!netdev) {10561056+ err = -ENOMEM;10571057+ goto out_free_dev;10581058+ }10591059+10601060+ SET_MODULE_OWNER(netdev);10611061+ SET_NETDEV_DEV(netdev, &pdev->dev);10621062+10631063+ if (!adapter) {10641064+ adapter = netdev->priv;10651065+ adapter->pdev = pdev;10661066+ adapter->port[0].dev = netdev; /* so we don't leak it */10671067+10681068+ adapter->regs = ioremap(mmio_start, mmio_len);10691069+ if (!adapter->regs) {10701070+ CH_ERR("%s: cannot map device registers\n",10711071+ pci_name(pdev));10721072+ err = -ENOMEM;10731073+ goto out_free_dev;10741074+ }10751075+10761076+ if (t1_get_board_rev(adapter, bi, &adapter->params)) {10771077+ err = -ENODEV; /* Can't handle this chip rev */10781078+ goto out_free_dev;10791079+ }10801080+10811081+ adapter->name = pci_name(pdev);10821082+ adapter->msg_enable = dflt_msg_enable;10831083+ adapter->mmio_len = mmio_len;10841084+10851085+ init_MUTEX(&adapter->mib_mutex);10861086+ spin_lock_init(&adapter->tpi_lock);10871087+ spin_lock_init(&adapter->work_lock);10881088+ spin_lock_init(&adapter->async_lock);10891089+10901090+ INIT_WORK(&adapter->ext_intr_handler_task,10911091+ ext_intr_task, adapter);10921092+ INIT_WORK(&adapter->stats_update_task, mac_stats_task,10931093+ adapter);10941094+#ifdef work_struct10951095+ init_timer(&adapter->stats_update_timer);10961096+ adapter->stats_update_timer.function = mac_stats_timer;10971097+ adapter->stats_update_timer.data =10981098+ (unsigned long)adapter;10991099+#endif11001100+11011101+ pci_set_drvdata(pdev, netdev);11021102+ }11031103+11041104+ pi = &adapter->port[i];11051105+ pi->dev = netdev;11061106+ netif_carrier_off(netdev);11071107+ netdev->irq = pdev->irq;11081108+ netdev->if_port = i;11091109+ netdev->mem_start = mmio_start;11101110+ netdev->mem_end = mmio_start + mmio_len - 1;11111111+ netdev->priv = adapter;11121112+ netdev->features |= NETIF_F_SG | NETIF_F_IP_CSUM;11131113+ netdev->features |= NETIF_F_LLTX;11141114+11151115+ adapter->flags |= RX_CSUM_ENABLED | TCP_CSUM_CAPABLE;11161116+ if (pci_using_dac)11171117+ netdev->features |= NETIF_F_HIGHDMA;11181118+ if (vlan_tso_capable(adapter)) {11191119+#if defined(CONFIG_VLAN_8021Q) || defined(CONFIG_VLAN_8021Q_MODULE)11201120+ adapter->flags |= VLAN_ACCEL_CAPABLE;11211121+ netdev->features |=11221122+ NETIF_F_HW_VLAN_TX | NETIF_F_HW_VLAN_RX;11231123+ netdev->vlan_rx_register = vlan_rx_register;11241124+ netdev->vlan_rx_kill_vid = vlan_rx_kill_vid;11251125+#endif11261126+ adapter->flags |= TSO_CAPABLE;11271127+ netdev->features |= NETIF_F_TSO;11281128+ }11291129+11301130+ netdev->open = cxgb_open;11311131+ netdev->stop = cxgb_close;11321132+ netdev->hard_start_xmit = t1_start_xmit;11331133+ netdev->hard_header_len += (adapter->flags & TSO_CAPABLE) ?11341134+ sizeof(struct cpl_tx_pkt_lso) :11351135+ sizeof(struct 
cpl_tx_pkt);11361136+ netdev->get_stats = t1_get_stats;11371137+ netdev->set_multicast_list = t1_set_rxmode;11381138+ netdev->do_ioctl = t1_ioctl;11391139+ netdev->change_mtu = t1_change_mtu;11401140+ netdev->set_mac_address = t1_set_mac_addr;11411141+#ifdef CONFIG_NET_POLL_CONTROLLER11421142+ netdev->poll_controller = t1_netpoll;11431143+#endif11441144+ netdev->weight = 64;11451145+11461146+ SET_ETHTOOL_OPS(netdev, &t1_ethtool_ops);11471147+ }11481148+11491149+ if (t1_init_sw_modules(adapter, bi) < 0) {11501150+ err = -ENODEV;11511151+ goto out_free_dev;11521152+ }11531153+11541154+ /*11551155+ * The card is now ready to go. If any errors occur during device11561156+ * registration we do not fail the whole card but rather proceed only11571157+ * with the ports we manage to register successfully. However we must11581158+ * register at least one net device.11591159+ */11601160+ for (i = 0; i < bi->port_number; ++i) {11611161+ err = register_netdev(adapter->port[i].dev);11621162+ if (err)11631163+ CH_WARN("%s: cannot register net device %s, skipping\n",11641164+ pci_name(pdev), adapter->port[i].dev->name);11651165+ else {11661166+ /*11671167+ * Change the name we use for messages to the name of11681168+ * the first successfully registered interface.11691169+ */11701170+ if (!adapter->registered_device_map)11711171+ adapter->name = adapter->port[i].dev->name;11721172+11731173+ __set_bit(i, &adapter->registered_device_map);11741174+ }11751175+ }11761176+ if (!adapter->registered_device_map) {11771177+ CH_ERR("%s: could not register any net devices\n",11781178+ pci_name(pdev));11791179+ goto out_release_adapter_res;11801180+ }11811181+11821182+ printk(KERN_INFO "%s: %s (rev %d), %s %dMHz/%d-bit\n", adapter->name,11831183+ bi->desc, adapter->params.chip_revision,11841184+ adapter->params.pci.is_pcix ? 
"PCIX" : "PCI",11851185+ adapter->params.pci.speed, adapter->params.pci.width);11861186+ return 0;11871187+11881188+ out_release_adapter_res:11891189+ t1_free_sw_modules(adapter);11901190+ out_free_dev:11911191+ if (adapter) {11921192+ if (adapter->regs) iounmap(adapter->regs);11931193+ for (i = bi->port_number - 1; i >= 0; --i)11941194+ if (adapter->port[i].dev) {11951195+ cxgb_proc_cleanup(adapter, proc_root_driver);11961196+ kfree(adapter->port[i].dev);11971197+ }11981198+ }11991199+ pci_release_regions(pdev);12001200+ out_disable_pdev:12011201+ pci_disable_device(pdev);12021202+ pci_set_drvdata(pdev, NULL);12031203+ return err;12041204+}12051205+12061206+static inline void t1_sw_reset(struct pci_dev *pdev)12071207+{12081208+ pci_write_config_dword(pdev, A_PCICFG_PM_CSR, 3);12091209+ pci_write_config_dword(pdev, A_PCICFG_PM_CSR, 0);12101210+}12111211+12121212+static void __devexit remove_one(struct pci_dev *pdev)12131213+{12141214+ struct net_device *dev = pci_get_drvdata(pdev);12151215+12161216+ if (dev) {12171217+ int i;12181218+ struct adapter *adapter = dev->priv;12191219+12201220+ for_each_port(adapter, i)12211221+ if (test_bit(i, &adapter->registered_device_map))12221222+ unregister_netdev(adapter->port[i].dev);12231223+12241224+ t1_free_sw_modules(adapter);12251225+ iounmap(adapter->regs);12261226+ while (--i >= 0)12271227+ if (adapter->port[i].dev) {12281228+ cxgb_proc_cleanup(adapter, proc_root_driver);12291229+ kfree(adapter->port[i].dev);12301230+ }12311231+ pci_release_regions(pdev);12321232+ pci_disable_device(pdev);12331233+ pci_set_drvdata(pdev, NULL);12341234+ t1_sw_reset(pdev);12351235+ }12361236+}12371237+12381238+static struct pci_driver driver = {12391239+ .name = DRV_NAME,12401240+ .id_table = t1_pci_tbl,12411241+ .probe = init_one,12421242+ .remove = __devexit_p(remove_one),12431243+};12441244+12451245+static int __init t1_init_module(void)12461246+{12471247+ return pci_module_init(&driver);12481248+}12491249+12501250+static void __exit t1_cleanup_module(void)12511251+{12521252+ pci_unregister_driver(&driver);12531253+}12541254+12551255+module_init(t1_init_module);12561256+module_exit(t1_cleanup_module);
+151
drivers/net/chelsio/elmer0.h
···11+/*****************************************************************************22+ * *33+ * File: elmer0.h *44+ * $Revision: 1.6 $ *55+ * $Date: 2005/06/21 22:49:43 $ *66+ * Description: *77+ * part of the Chelsio 10Gb Ethernet Driver. *88+ * *99+ * This program is free software; you can redistribute it and/or modify *1010+ * it under the terms of the GNU General Public License, version 2, as *1111+ * published by the Free Software Foundation. *1212+ * *1313+ * You should have received a copy of the GNU General Public License along *1414+ * with this program; if not, write to the Free Software Foundation, Inc., *1515+ * 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. *1616+ * *1717+ * THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR IMPLIED *1818+ * WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF *1919+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. *2020+ * *2121+ * http://www.chelsio.com *2222+ * *2323+ * Copyright (c) 2003 - 2005 Chelsio Communications, Inc. *2424+ * All rights reserved. *2525+ * *2626+ * Maintainers: maintainers@chelsio.com *2727+ * *2828+ * Authors: Dimitrios Michailidis <dm@chelsio.com> *2929+ * Tina Yang <tainay@chelsio.com> *3030+ * Felix Marti <felix@chelsio.com> *3131+ * Scott Bardone <sbardone@chelsio.com> *3232+ * Kurt Ottaway <kottaway@chelsio.com> *3333+ * Frank DiMambro <frank@chelsio.com> *3434+ * *3535+ * History: *3636+ * *3737+ ****************************************************************************/3838+3939+#ifndef _CXGB_ELMER0_H_4040+#define _CXGB_ELMER0_H_4141+4242+/* ELMER0 registers */4343+#define A_ELMER0_VERSION 0x1000004444+#define A_ELMER0_PHY_CFG 0x1000044545+#define A_ELMER0_INT_ENABLE 0x1000084646+#define A_ELMER0_INT_CAUSE 0x10000c4747+#define A_ELMER0_GPI_CFG 0x1000104848+#define A_ELMER0_GPI_STAT 0x1000144949+#define A_ELMER0_GPO 0x1000185050+#define A_ELMER0_PORT0_MI1_CFG 0x4000005151+5252+#define S_MI1_MDI_ENABLE 05353+#define V_MI1_MDI_ENABLE(x) ((x) << S_MI1_MDI_ENABLE)5454+#define F_MI1_MDI_ENABLE V_MI1_MDI_ENABLE(1U)5555+5656+#define S_MI1_MDI_INVERT 15757+#define V_MI1_MDI_INVERT(x) ((x) << S_MI1_MDI_INVERT)5858+#define F_MI1_MDI_INVERT V_MI1_MDI_INVERT(1U)5959+6060+#define S_MI1_PREAMBLE_ENABLE 26161+#define V_MI1_PREAMBLE_ENABLE(x) ((x) << S_MI1_PREAMBLE_ENABLE)6262+#define F_MI1_PREAMBLE_ENABLE V_MI1_PREAMBLE_ENABLE(1U)6363+6464+#define S_MI1_SOF 36565+#define M_MI1_SOF 0x36666+#define V_MI1_SOF(x) ((x) << S_MI1_SOF)6767+#define G_MI1_SOF(x) (((x) >> S_MI1_SOF) & M_MI1_SOF)6868+6969+#define S_MI1_CLK_DIV 57070+#define M_MI1_CLK_DIV 0xff7171+#define V_MI1_CLK_DIV(x) ((x) << S_MI1_CLK_DIV)7272+#define G_MI1_CLK_DIV(x) (((x) >> S_MI1_CLK_DIV) & M_MI1_CLK_DIV)7373+7474+#define A_ELMER0_PORT0_MI1_ADDR 0x4000047575+7676+#define S_MI1_REG_ADDR 07777+#define M_MI1_REG_ADDR 0x1f7878+#define V_MI1_REG_ADDR(x) ((x) << S_MI1_REG_ADDR)7979+#define G_MI1_REG_ADDR(x) (((x) >> S_MI1_REG_ADDR) & M_MI1_REG_ADDR)8080+8181+#define S_MI1_PHY_ADDR 58282+#define M_MI1_PHY_ADDR 0x1f8383+#define V_MI1_PHY_ADDR(x) ((x) << S_MI1_PHY_ADDR)8484+#define G_MI1_PHY_ADDR(x) (((x) >> S_MI1_PHY_ADDR) & M_MI1_PHY_ADDR)8585+8686+#define A_ELMER0_PORT0_MI1_DATA 0x4000088787+8888+#define S_MI1_DATA 08989+#define M_MI1_DATA 0xffff9090+#define V_MI1_DATA(x) ((x) << S_MI1_DATA)9191+#define G_MI1_DATA(x) (((x) >> S_MI1_DATA) & M_MI1_DATA)9292+9393+#define A_ELMER0_PORT0_MI1_OP 0x40000c9494+9595+#define S_MI1_OP 09696+#define M_MI1_OP 0x39797+#define V_MI1_OP(x) ((x) << S_MI1_OP)9898+#define G_MI1_OP(x) (((x) 
>> S_MI1_OP) & M_MI1_OP)9999+100100+#define S_MI1_ADDR_AUTOINC 2101101+#define V_MI1_ADDR_AUTOINC(x) ((x) << S_MI1_ADDR_AUTOINC)102102+#define F_MI1_ADDR_AUTOINC V_MI1_ADDR_AUTOINC(1U)103103+104104+#define S_MI1_OP_BUSY 31105105+#define V_MI1_OP_BUSY(x) ((x) << S_MI1_OP_BUSY)106106+#define F_MI1_OP_BUSY V_MI1_OP_BUSY(1U)107107+108108+#define A_ELMER0_PORT1_MI1_CFG 0x500000109109+#define A_ELMER0_PORT1_MI1_ADDR 0x500004110110+#define A_ELMER0_PORT1_MI1_DATA 0x500008111111+#define A_ELMER0_PORT1_MI1_OP 0x50000c112112+#define A_ELMER0_PORT2_MI1_CFG 0x600000113113+#define A_ELMER0_PORT2_MI1_ADDR 0x600004114114+#define A_ELMER0_PORT2_MI1_DATA 0x600008115115+#define A_ELMER0_PORT2_MI1_OP 0x60000c116116+#define A_ELMER0_PORT3_MI1_CFG 0x700000117117+#define A_ELMER0_PORT3_MI1_ADDR 0x700004118118+#define A_ELMER0_PORT3_MI1_DATA 0x700008119119+#define A_ELMER0_PORT3_MI1_OP 0x70000c120120+121121+/* Simple bit definition for GPI and GP0 registers. */122122+#define ELMER0_GP_BIT0 0x0001123123+#define ELMER0_GP_BIT1 0x0002124124+#define ELMER0_GP_BIT2 0x0004125125+#define ELMER0_GP_BIT3 0x0008126126+#define ELMER0_GP_BIT4 0x0010127127+#define ELMER0_GP_BIT5 0x0020128128+#define ELMER0_GP_BIT6 0x0040129129+#define ELMER0_GP_BIT7 0x0080130130+#define ELMER0_GP_BIT8 0x0100131131+#define ELMER0_GP_BIT9 0x0200132132+#define ELMER0_GP_BIT10 0x0400133133+#define ELMER0_GP_BIT11 0x0800134134+#define ELMER0_GP_BIT12 0x1000135135+#define ELMER0_GP_BIT13 0x2000136136+#define ELMER0_GP_BIT14 0x4000137137+#define ELMER0_GP_BIT15 0x8000138138+#define ELMER0_GP_BIT16 0x10000139139+#define ELMER0_GP_BIT17 0x20000140140+#define ELMER0_GP_BIT18 0x40000141141+#define ELMER0_GP_BIT19 0x80000142142+143143+#define MI1_OP_DIRECT_WRITE 1144144+#define MI1_OP_DIRECT_READ 2145145+146146+#define MI1_OP_INDIRECT_ADDRESS 0147147+#define MI1_OP_INDIRECT_WRITE 1148148+#define MI1_OP_INDIRECT_READ_INC 2149149+#define MI1_OP_INDIRECT_READ 3150150+151151+#endif /* _CXGB_ELMER0_H_ */
+346
drivers/net/chelsio/espi.c
···11+/*****************************************************************************22+ * *33+ * File: espi.c *44+ * $Revision: 1.14 $ *55+ * $Date: 2005/05/14 00:59:32 $ *66+ * Description: *77+ * Ethernet SPI functionality. *88+ * part of the Chelsio 10Gb Ethernet Driver. *99+ * *1010+ * This program is free software; you can redistribute it and/or modify *1111+ * it under the terms of the GNU General Public License, version 2, as *1212+ * published by the Free Software Foundation. *1313+ * *1414+ * You should have received a copy of the GNU General Public License along *1515+ * with this program; if not, write to the Free Software Foundation, Inc., *1616+ * 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. *1717+ * *1818+ * THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR IMPLIED *1919+ * WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF *2020+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. *2121+ * *2222+ * http://www.chelsio.com *2323+ * *2424+ * Copyright (c) 2003 - 2005 Chelsio Communications, Inc. *2525+ * All rights reserved. *2626+ * *2727+ * Maintainers: maintainers@chelsio.com *2828+ * *2929+ * Authors: Dimitrios Michailidis <dm@chelsio.com> *3030+ * Tina Yang <tainay@chelsio.com> *3131+ * Felix Marti <felix@chelsio.com> *3232+ * Scott Bardone <sbardone@chelsio.com> *3333+ * Kurt Ottaway <kottaway@chelsio.com> *3434+ * Frank DiMambro <frank@chelsio.com> *3535+ * *3636+ * History: *3737+ * *3838+ ****************************************************************************/3939+4040+#include "common.h"4141+#include "regs.h"4242+#include "espi.h"4343+4444+struct peespi {4545+ adapter_t *adapter;4646+ struct espi_intr_counts intr_cnt;4747+ u32 misc_ctrl;4848+ spinlock_t lock;4949+};5050+5151+#define ESPI_INTR_MASK (F_DIP4ERR | F_RXDROP | F_TXDROP | F_RXOVERFLOW | \5252+ F_RAMPARITYERR | F_DIP2PARITYERR)5353+#define MON_MASK (V_MONITORED_PORT_NUM(3) | F_MONITORED_DIRECTION \5454+ | F_MONITORED_INTERFACE)5555+5656+#define TRICN_CNFG 145757+#define TRICN_CMD_READ 0x115858+#define TRICN_CMD_WRITE 0x215959+#define TRICN_CMD_ATTEMPTS 106060+6161+static int tricn_write(adapter_t *adapter, int bundle_addr, int module_addr,6262+ int ch_addr, int reg_offset, u32 wr_data)6363+{6464+ int busy, attempts = TRICN_CMD_ATTEMPTS;6565+6666+ writel(V_WRITE_DATA(wr_data) |6767+ V_REGISTER_OFFSET(reg_offset) |6868+ V_CHANNEL_ADDR(ch_addr) | V_MODULE_ADDR(module_addr) |6969+ V_BUNDLE_ADDR(bundle_addr) |7070+ V_SPI4_COMMAND(TRICN_CMD_WRITE),7171+ adapter->regs + A_ESPI_CMD_ADDR);7272+ writel(0, adapter->regs + A_ESPI_GOSTAT);7373+7474+ do {7575+ busy = readl(adapter->regs + A_ESPI_GOSTAT) & F_ESPI_CMD_BUSY;7676+ } while (busy && --attempts);7777+7878+ if (busy)7979+ CH_ERR("%s: TRICN write timed out\n", adapter->name);8080+8181+ return busy;8282+}8383+8484+/* 1. Deassert rx_reset_core. */8585+/* 2. Program TRICN_CNFG registers. */8686+/* 3. 
Deassert rx_reset_link */8787+static int tricn_init(adapter_t *adapter)8888+{8989+ int i = 0;9090+ int sme = 1;9191+ int stat = 0;9292+ int timeout = 0;9393+ int is_ready = 0;9494+ int dynamic_deskew = 0;9595+9696+ if (dynamic_deskew)9797+ sme = 0;9898+9999+100100+ /* 1 */101101+ timeout=1000;102102+ do {103103+ stat = readl(adapter->regs + A_ESPI_RX_RESET);104104+ is_ready = (stat & 0x4);105105+ timeout--;106106+ udelay(5);107107+ } while (!is_ready || (timeout==0));108108+ writel(0x2, adapter->regs + A_ESPI_RX_RESET);109109+ if (timeout==0)110110+ {111111+ CH_ERR("ESPI : ERROR : Timeout tricn_init() \n");112112+ t1_fatal_err(adapter);113113+ }114114+115115+ /* 2 */116116+ if (sme) {117117+ tricn_write(adapter, 0, 0, 0, TRICN_CNFG, 0x81);118118+ tricn_write(adapter, 0, 1, 0, TRICN_CNFG, 0x81);119119+ tricn_write(adapter, 0, 2, 0, TRICN_CNFG, 0x81);120120+ }121121+ for (i=1; i<= 8; i++) tricn_write(adapter, 0, 0, i, TRICN_CNFG, 0xf1);122122+ for (i=1; i<= 2; i++) tricn_write(adapter, 0, 1, i, TRICN_CNFG, 0xf1);123123+ for (i=1; i<= 3; i++) tricn_write(adapter, 0, 2, i, TRICN_CNFG, 0xe1);124124+ for (i=4; i<= 4; i++) tricn_write(adapter, 0, 2, i, TRICN_CNFG, 0xf1);125125+ for (i=5; i<= 5; i++) tricn_write(adapter, 0, 2, i, TRICN_CNFG, 0xe1);126126+ for (i=6; i<= 6; i++) tricn_write(adapter, 0, 2, i, TRICN_CNFG, 0xf1);127127+ for (i=7; i<= 7; i++) tricn_write(adapter, 0, 2, i, TRICN_CNFG, 0x80);128128+ for (i=8; i<= 8; i++) tricn_write(adapter, 0, 2, i, TRICN_CNFG, 0xf1);129129+130130+ /* 3 */131131+ writel(0x3, adapter->regs + A_ESPI_RX_RESET);132132+133133+ return 0;134134+}135135+136136+void t1_espi_intr_enable(struct peespi *espi)137137+{138138+ u32 enable, pl_intr = readl(espi->adapter->regs + A_PL_ENABLE);139139+140140+ /*141141+ * Cannot enable ESPI interrupts on T1B because HW asserts the142142+ * interrupt incorrectly, namely the driver gets ESPI interrupts143143+ * but no data is actually dropped (can verify this reading the ESPI144144+ * drop registers). Also, once the ESPI interrupt is asserted it145145+ * cannot be cleared (HW bug).146146+ */147147+ enable = t1_is_T1B(espi->adapter) ? 
0 : ESPI_INTR_MASK;148148+ writel(enable, espi->adapter->regs + A_ESPI_INTR_ENABLE);149149+ writel(pl_intr | F_PL_INTR_ESPI, espi->adapter->regs + A_PL_ENABLE);150150+}151151+152152+void t1_espi_intr_clear(struct peespi *espi)153153+{154154+ writel(0xffffffff, espi->adapter->regs + A_ESPI_INTR_STATUS);155155+ writel(F_PL_INTR_ESPI, espi->adapter->regs + A_PL_CAUSE);156156+}157157+158158+void t1_espi_intr_disable(struct peespi *espi)159159+{160160+ u32 pl_intr = readl(espi->adapter->regs + A_PL_ENABLE);161161+162162+ writel(0, espi->adapter->regs + A_ESPI_INTR_ENABLE);163163+ writel(pl_intr & ~F_PL_INTR_ESPI, espi->adapter->regs + A_PL_ENABLE);164164+}165165+166166+int t1_espi_intr_handler(struct peespi *espi)167167+{168168+ u32 cnt;169169+ u32 status = readl(espi->adapter->regs + A_ESPI_INTR_STATUS);170170+171171+ if (status & F_DIP4ERR)172172+ espi->intr_cnt.DIP4_err++;173173+ if (status & F_RXDROP)174174+ espi->intr_cnt.rx_drops++;175175+ if (status & F_TXDROP)176176+ espi->intr_cnt.tx_drops++;177177+ if (status & F_RXOVERFLOW)178178+ espi->intr_cnt.rx_ovflw++;179179+ if (status & F_RAMPARITYERR)180180+ espi->intr_cnt.parity_err++;181181+ if (status & F_DIP2PARITYERR) {182182+ espi->intr_cnt.DIP2_parity_err++;183183+184184+ /*185185+ * Must read the error count to clear the interrupt186186+ * that it causes.187187+ */188188+ cnt = readl(espi->adapter->regs + A_ESPI_DIP2_ERR_COUNT);189189+ }190190+191191+ /*192192+ * For T1B we need to write 1 to clear ESPI interrupts. For T2+ we193193+ * write the status as is.194194+ */195195+ if (status && t1_is_T1B(espi->adapter))196196+ status = 1;197197+ writel(status, espi->adapter->regs + A_ESPI_INTR_STATUS);198198+ return 0;199199+}200200+201201+const struct espi_intr_counts *t1_espi_get_intr_counts(struct peespi *espi)202202+{203203+ return &espi->intr_cnt;204204+}205205+206206+static void espi_setup_for_pm3393(adapter_t *adapter)207207+{208208+ u32 wmark = t1_is_T1B(adapter) ? 0x4000 : 0x3200;209209+210210+ writel(0x1f4, adapter->regs + A_ESPI_SCH_TOKEN0);211211+ writel(0x1f4, adapter->regs + A_ESPI_SCH_TOKEN1);212212+ writel(0x1f4, adapter->regs + A_ESPI_SCH_TOKEN2);213213+ writel(0x1f4, adapter->regs + A_ESPI_SCH_TOKEN3);214214+ writel(0x100, adapter->regs + A_ESPI_RX_FIFO_ALMOST_EMPTY_WATERMARK);215215+ writel(wmark, adapter->regs + A_ESPI_RX_FIFO_ALMOST_FULL_WATERMARK);216216+ writel(3, adapter->regs + A_ESPI_CALENDAR_LENGTH);217217+ writel(0x08000008, adapter->regs + A_ESPI_TRAIN);218218+ writel(V_RX_NPORTS(1) | V_TX_NPORTS(1), adapter->regs + A_PORT_CONFIG);219219+}220220+221221+/* T2 Init part -- */222222+/* 1. Set T_ESPI_MISCCTRL_ADDR */223223+/* 2. Init ESPI registers. */224224+/* 3. Init TriCN Hard Macro */225225+int t1_espi_init(struct peespi *espi, int mac_type, int nports)226226+{227227+ u32 cnt;228228+229229+ u32 status_enable_extra = 0;230230+ adapter_t *adapter = espi->adapter;231231+ u32 status, burstval = 0x800100;232232+233233+ /* Disable ESPI training. MACs that can handle it enable it below. 
*/234234+ writel(0, adapter->regs + A_ESPI_TRAIN);235235+236236+ if (is_T2(adapter)) {237237+ writel(V_OUT_OF_SYNC_COUNT(4) |238238+ V_DIP2_PARITY_ERR_THRES(3) |239239+ V_DIP4_THRES(1), adapter->regs + A_ESPI_MISC_CONTROL);240240+ if (nports == 4) {241241+ /* T204: maxburst1 = 0x40, maxburst2 = 0x20 */242242+ burstval = 0x200040;243243+ }244244+ }245245+ writel(burstval, adapter->regs + A_ESPI_MAXBURST1_MAXBURST2);246246+247247+ switch (mac_type) {248248+ case CHBT_MAC_PM3393:249249+ espi_setup_for_pm3393(adapter);250250+ break;251251+ default:252252+ return -1;253253+ }254254+255255+ /*256256+ * Make sure any pending interrupts from the SPI are257257+ * Cleared before enabling the interrupt.258258+ */259259+ writel(ESPI_INTR_MASK, espi->adapter->regs + A_ESPI_INTR_ENABLE);260260+ status = readl(espi->adapter->regs + A_ESPI_INTR_STATUS);261261+ if (status & F_DIP2PARITYERR) {262262+ cnt = readl(espi->adapter->regs + A_ESPI_DIP2_ERR_COUNT);263263+ }264264+265265+ /*266266+ * For T1B we need to write 1 to clear ESPI interrupts. For T2+ we267267+ * write the status as is.268268+ */269269+ if (status && t1_is_T1B(espi->adapter))270270+ status = 1;271271+ writel(status, espi->adapter->regs + A_ESPI_INTR_STATUS);272272+273273+ writel(status_enable_extra | F_RXSTATUSENABLE,274274+ adapter->regs + A_ESPI_FIFO_STATUS_ENABLE);275275+276276+ if (is_T2(adapter)) {277277+ tricn_init(adapter);278278+ /*279279+ * Always position the control at the 1st port egress IN280280+ * (sop,eop) counter to reduce PIOs for T/N210 workaround.281281+ */282282+ espi->misc_ctrl = (readl(adapter->regs + A_ESPI_MISC_CONTROL)283283+ & ~MON_MASK) | (F_MONITORED_DIRECTION284284+ | F_MONITORED_INTERFACE);285285+ writel(espi->misc_ctrl, adapter->regs + A_ESPI_MISC_CONTROL);286286+ spin_lock_init(&espi->lock);287287+ }288288+289289+ return 0;290290+}291291+292292+void t1_espi_destroy(struct peespi *espi)293293+{294294+ kfree(espi);295295+}296296+297297+struct peespi *t1_espi_create(adapter_t *adapter)298298+{299299+ struct peespi *espi = kmalloc(sizeof(*espi), GFP_KERNEL);300300+301301+ memset(espi, 0, sizeof(*espi));302302+303303+ if (espi)304304+ espi->adapter = adapter;305305+ return espi;306306+}307307+308308+void t1_espi_set_misc_ctrl(adapter_t *adapter, u32 val)309309+{310310+ struct peespi *espi = adapter->espi;311311+312312+ if (!is_T2(adapter))313313+ return;314314+ spin_lock(&espi->lock);315315+ espi->misc_ctrl = (val & ~MON_MASK) |316316+ (espi->misc_ctrl & MON_MASK);317317+ writel(espi->misc_ctrl, adapter->regs + A_ESPI_MISC_CONTROL);318318+ spin_unlock(&espi->lock);319319+}320320+321321+u32 t1_espi_get_mon(adapter_t *adapter, u32 addr, u8 wait)322322+{323323+ u32 sel;324324+325325+ struct peespi *espi = adapter->espi;326326+327327+ if (!is_T2(adapter))328328+ return 0;329329+ sel = V_MONITORED_PORT_NUM((addr & 0x3c) >> 2);330330+ if (!wait) {331331+ if (!spin_trylock(&espi->lock))332332+ return 0;333333+ }334334+ else335335+ spin_lock(&espi->lock);336336+ if ((sel != (espi->misc_ctrl & MON_MASK))) {337337+ writel(((espi->misc_ctrl & ~MON_MASK) | sel),338338+ adapter->regs + A_ESPI_MISC_CONTROL);339339+ sel = readl(adapter->regs + A_ESPI_SCH_TOKEN3);340340+ writel(espi->misc_ctrl, adapter->regs + A_ESPI_MISC_CONTROL);341341+ }342342+ else343343+ sel = readl(adapter->regs + A_ESPI_SCH_TOKEN3);344344+ spin_unlock(&espi->lock);345345+ return sel;346346+}
+68
drivers/net/chelsio/espi.h
···11+/*****************************************************************************22+ * *33+ * File: espi.h *44+ * $Revision: 1.7 $ *55+ * $Date: 2005/06/21 18:29:47 $ *66+ * Description: *77+ * part of the Chelsio 10Gb Ethernet Driver. *88+ * *99+ * This program is free software; you can redistribute it and/or modify *1010+ * it under the terms of the GNU General Public License, version 2, as *1111+ * published by the Free Software Foundation. *1212+ * *1313+ * You should have received a copy of the GNU General Public License along *1414+ * with this program; if not, write to the Free Software Foundation, Inc., *1515+ * 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. *1616+ * *1717+ * THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR IMPLIED *1818+ * WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF *1919+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. *2020+ * *2121+ * http://www.chelsio.com *2222+ * *2323+ * Copyright (c) 2003 - 2005 Chelsio Communications, Inc. *2424+ * All rights reserved. *2525+ * *2626+ * Maintainers: maintainers@chelsio.com *2727+ * *2828+ * Authors: Dimitrios Michailidis <dm@chelsio.com> *2929+ * Tina Yang <tainay@chelsio.com> *3030+ * Felix Marti <felix@chelsio.com> *3131+ * Scott Bardone <sbardone@chelsio.com> *3232+ * Kurt Ottaway <kottaway@chelsio.com> *3333+ * Frank DiMambro <frank@chelsio.com> *3434+ * *3535+ * History: *3636+ * *3737+ ****************************************************************************/3838+3939+#ifndef _CXGB_ESPI_H_4040+#define _CXGB_ESPI_H_4141+4242+#include "common.h"4343+4444+struct espi_intr_counts {4545+ unsigned int DIP4_err;4646+ unsigned int rx_drops;4747+ unsigned int tx_drops;4848+ unsigned int rx_ovflw;4949+ unsigned int parity_err;5050+ unsigned int DIP2_parity_err;5151+};5252+5353+struct peespi;5454+5555+struct peespi *t1_espi_create(adapter_t *adapter);5656+void t1_espi_destroy(struct peespi *espi);5757+int t1_espi_init(struct peespi *espi, int mac_type, int nports);5858+5959+void t1_espi_intr_enable(struct peespi *);6060+void t1_espi_intr_clear(struct peespi *);6161+void t1_espi_intr_disable(struct peespi *);6262+int t1_espi_intr_handler(struct peespi *);6363+const struct espi_intr_counts *t1_espi_get_intr_counts(struct peespi *espi);6464+6565+void t1_espi_set_misc_ctrl(adapter_t *adapter, u32 val);6666+u32 t1_espi_get_mon(adapter_t *adapter, u32 addr, u8 wait);6767+6868+#endif /* _CXGB_ESPI_H_ */
+134
drivers/net/chelsio/gmac.h
···11+/*****************************************************************************22+ * *33+ * File: gmac.h *44+ * $Revision: 1.6 $ *55+ * $Date: 2005/06/21 18:29:47 $ *66+ * Description: *77+ * Generic MAC functionality. *88+ * part of the Chelsio 10Gb Ethernet Driver. *99+ * *1010+ * This program is free software; you can redistribute it and/or modify *1111+ * it under the terms of the GNU General Public License, version 2, as *1212+ * published by the Free Software Foundation. *1313+ * *1414+ * You should have received a copy of the GNU General Public License along *1515+ * with this program; if not, write to the Free Software Foundation, Inc., *1616+ * 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. *1717+ * *1818+ * THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR IMPLIED *1919+ * WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF *2020+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. *2121+ * *2222+ * http://www.chelsio.com *2323+ * *2424+ * Copyright (c) 2003 - 2005 Chelsio Communications, Inc. *2525+ * All rights reserved. *2626+ * *2727+ * Maintainers: maintainers@chelsio.com *2828+ * *2929+ * Authors: Dimitrios Michailidis <dm@chelsio.com> *3030+ * Tina Yang <tainay@chelsio.com> *3131+ * Felix Marti <felix@chelsio.com> *3232+ * Scott Bardone <sbardone@chelsio.com> *3333+ * Kurt Ottaway <kottaway@chelsio.com> *3434+ * Frank DiMambro <frank@chelsio.com> *3535+ * *3636+ * History: *3737+ * *3838+ ****************************************************************************/3939+4040+#ifndef _CXGB_GMAC_H_4141+#define _CXGB_GMAC_H_4242+4343+#include "common.h"4444+4545+enum { MAC_STATS_UPDATE_FAST, MAC_STATS_UPDATE_FULL };4646+enum { MAC_DIRECTION_RX = 1, MAC_DIRECTION_TX = 2 };4747+4848+struct cmac_statistics {4949+ /* Transmit */5050+ u64 TxOctetsOK;5151+ u64 TxOctetsBad;5252+ u64 TxUnicastFramesOK;5353+ u64 TxMulticastFramesOK;5454+ u64 TxBroadcastFramesOK;5555+ u64 TxPauseFrames;5656+ u64 TxFramesWithDeferredXmissions;5757+ u64 TxLateCollisions;5858+ u64 TxTotalCollisions;5959+ u64 TxFramesAbortedDueToXSCollisions;6060+ u64 TxUnderrun;6161+ u64 TxLengthErrors;6262+ u64 TxInternalMACXmitError;6363+ u64 TxFramesWithExcessiveDeferral;6464+ u64 TxFCSErrors;6565+6666+ /* Receive */6767+ u64 RxOctetsOK;6868+ u64 RxOctetsBad;6969+ u64 RxUnicastFramesOK;7070+ u64 RxMulticastFramesOK;7171+ u64 RxBroadcastFramesOK;7272+ u64 RxPauseFrames;7373+ u64 RxFCSErrors;7474+ u64 RxAlignErrors;7575+ u64 RxSymbolErrors;7676+ u64 RxDataErrors;7777+ u64 RxSequenceErrors;7878+ u64 RxRuntErrors;7979+ u64 RxJabberErrors;8080+ u64 RxInternalMACRcvError;8181+ u64 RxInRangeLengthErrors;8282+ u64 RxOutOfRangeLengthField;8383+ u64 RxFrameTooLongErrors;8484+};8585+8686+struct cmac_ops {8787+ void (*destroy)(struct cmac *);8888+ int (*reset)(struct cmac *);8989+ int (*interrupt_enable)(struct cmac *);9090+ int (*interrupt_disable)(struct cmac *);9191+ int (*interrupt_clear)(struct cmac *);9292+ int (*interrupt_handler)(struct cmac *);9393+9494+ int (*enable)(struct cmac *, int);9595+ int (*disable)(struct cmac *, int);9696+9797+ int (*loopback_enable)(struct cmac *);9898+ int (*loopback_disable)(struct cmac *);9999+100100+ int (*set_mtu)(struct cmac *, int mtu);101101+ int (*set_rx_mode)(struct cmac *, struct t1_rx_mode *rm);102102+103103+ int (*set_speed_duplex_fc)(struct cmac *, int speed, int duplex, int fc);104104+ int (*get_speed_duplex_fc)(struct cmac *, int *speed, int *duplex,105105+ int *fc);106106+107107+ const struct cmac_statistics 
*(*statistics_update)(struct cmac *, int);108108+109109+ int (*macaddress_get)(struct cmac *, u8 mac_addr[6]);110110+ int (*macaddress_set)(struct cmac *, u8 mac_addr[6]);111111+};112112+113113+typedef struct _cmac_instance cmac_instance;114114+115115+struct cmac {116116+ struct cmac_statistics stats;117117+ adapter_t *adapter;118118+ struct cmac_ops *ops;119119+ cmac_instance *instance;120120+};121121+122122+struct gmac {123123+ unsigned int stats_update_period;124124+ struct cmac *(*create)(adapter_t *adapter, int index);125125+ int (*reset)(adapter_t *);126126+};127127+128128+extern struct gmac t1_pm3393_ops;129129+extern struct gmac t1_chelsio_mac_ops;130130+extern struct gmac t1_vsc7321_ops;131131+extern struct gmac t1_ixf1010_ops;132132+extern struct gmac t1_dummy_mac_ops;133133+134134+#endif /* _CXGB_GMAC_H_ */
+252
drivers/net/chelsio/mv88x201x.c
···11+/*****************************************************************************22+ * *33+ * File: mv88x201x.c *44+ * $Revision: 1.12 $ *55+ * $Date: 2005/04/15 19:27:14 $ *66+ * Description: *77+ * Marvell PHY (mv88x201x) functionality. *88+ * part of the Chelsio 10Gb Ethernet Driver. *99+ * *1010+ * This program is free software; you can redistribute it and/or modify *1111+ * it under the terms of the GNU General Public License, version 2, as *1212+ * published by the Free Software Foundation. *1313+ * *1414+ * You should have received a copy of the GNU General Public License along *1515+ * with this program; if not, write to the Free Software Foundation, Inc., *1616+ * 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. *1717+ * *1818+ * THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR IMPLIED *1919+ * WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF *2020+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. *2121+ * *2222+ * http://www.chelsio.com *2323+ * *2424+ * Copyright (c) 2003 - 2005 Chelsio Communications, Inc. *2525+ * All rights reserved. *2626+ * *2727+ * Maintainers: maintainers@chelsio.com *2828+ * *2929+ * Authors: Dimitrios Michailidis <dm@chelsio.com> *3030+ * Tina Yang <tainay@chelsio.com> *3131+ * Felix Marti <felix@chelsio.com> *3232+ * Scott Bardone <sbardone@chelsio.com> *3333+ * Kurt Ottaway <kottaway@chelsio.com> *3434+ * Frank DiMambro <frank@chelsio.com> *3535+ * *3636+ * History: *3737+ * *3838+ ****************************************************************************/3939+4040+#include "cphy.h"4141+#include "elmer0.h"4242+4343+/*4444+ * The 88x2010 Rev C. requires some link status registers * to be read4545+ * twice in order to get the right values. Future * revisions will fix4646+ * this problem and then this macro * can disappear.4747+ */4848+#define MV88x2010_LINK_STATUS_BUGS 14949+5050+static int led_init(struct cphy *cphy)5151+{5252+ /* Setup the LED registers so we can turn on/off.5353+ * Writing these bits maps control to another5454+ * register. mmd(0x1) addr(0x7)5555+ */5656+ mdio_write(cphy, 0x3, 0x8304, 0xdddd);5757+ return 0;5858+}5959+6060+static int led_link(struct cphy *cphy, u32 do_enable)6161+{6262+ u32 led = 0;6363+#define LINK_ENABLE_BIT 0x16464+6565+ mdio_read(cphy, 0x1, 0x7, &led);6666+6767+ if (do_enable & LINK_ENABLE_BIT) {6868+ led |= LINK_ENABLE_BIT;6969+ mdio_write(cphy, 0x1, 0x7, led);7070+ } else {7171+ led &= ~LINK_ENABLE_BIT;7272+ mdio_write(cphy, 0x1, 0x7, led);7373+ }7474+ return 0;7575+}7676+7777+/* Port Reset */7878+static int mv88x201x_reset(struct cphy *cphy, int wait)7979+{8080+ /* This can be done through registers. It is not required since8181+ * a full chip reset is used.8282+ */8383+ return 0;8484+}8585+8686+static int mv88x201x_interrupt_enable(struct cphy *cphy)8787+{8888+ u32 elmer;8989+9090+ /* Enable PHY LASI interrupts. */9191+ mdio_write(cphy, 0x1, 0x9002, 0x1);9292+9393+ /* Enable Marvell interrupts through Elmer0. */9494+ t1_tpi_read(cphy->adapter, A_ELMER0_INT_ENABLE, &elmer);9595+ elmer |= ELMER0_GP_BIT6;9696+ t1_tpi_write(cphy->adapter, A_ELMER0_INT_ENABLE, elmer);9797+ return 0;9898+}9999+100100+static int mv88x201x_interrupt_disable(struct cphy *cphy)101101+{102102+ u32 elmer;103103+104104+ /* Disable PHY LASI interrupts. */105105+ mdio_write(cphy, 0x1, 0x9002, 0x0);106106+107107+ /* Disable Marvell interrupts through Elmer0. 
*/108108+ t1_tpi_read(cphy->adapter, A_ELMER0_INT_ENABLE, &elmer);109109+ elmer &= ~ELMER0_GP_BIT6;110110+ t1_tpi_write(cphy->adapter, A_ELMER0_INT_ENABLE, elmer);111111+ return 0;112112+}113113+114114+static int mv88x201x_interrupt_clear(struct cphy *cphy)115115+{116116+ u32 elmer;117117+ u32 val;118118+119119+#ifdef MV88x2010_LINK_STATUS_BUGS120120+ /* Required to read twice before clear takes affect. */121121+ mdio_read(cphy, 0x1, 0x9003, &val);122122+ mdio_read(cphy, 0x1, 0x9004, &val);123123+ mdio_read(cphy, 0x1, 0x9005, &val);124124+125125+ /* Read this register after the others above it else126126+ * the register doesn't clear correctly.127127+ */128128+ mdio_read(cphy, 0x1, 0x1, &val);129129+#endif130130+131131+ /* Clear link status. */132132+ mdio_read(cphy, 0x1, 0x1, &val);133133+ /* Clear PHY LASI interrupts. */134134+ mdio_read(cphy, 0x1, 0x9005, &val);135135+136136+#ifdef MV88x2010_LINK_STATUS_BUGS137137+ /* Do it again. */138138+ mdio_read(cphy, 0x1, 0x9003, &val);139139+ mdio_read(cphy, 0x1, 0x9004, &val);140140+#endif141141+142142+ /* Clear Marvell interrupts through Elmer0. */143143+ t1_tpi_read(cphy->adapter, A_ELMER0_INT_CAUSE, &elmer);144144+ elmer |= ELMER0_GP_BIT6;145145+ t1_tpi_write(cphy->adapter, A_ELMER0_INT_CAUSE, elmer);146146+ return 0;147147+}148148+149149+static int mv88x201x_interrupt_handler(struct cphy *cphy)150150+{151151+ /* Clear interrupts */152152+ mv88x201x_interrupt_clear(cphy);153153+154154+ /* We have only enabled link change interrupts and so155155+ * cphy_cause must be a link change interrupt.156156+ */157157+ return cphy_cause_link_change;158158+}159159+160160+static int mv88x201x_set_loopback(struct cphy *cphy, int on)161161+{162162+ return 0;163163+}164164+165165+static int mv88x201x_get_link_status(struct cphy *cphy, int *link_ok,166166+ int *speed, int *duplex, int *fc)167167+{168168+ u32 val = 0;169169+#define LINK_STATUS_BIT 0x4170170+171171+ if (link_ok) {172172+ /* Read link status. */173173+ mdio_read(cphy, 0x1, 0x1, &val);174174+ val &= LINK_STATUS_BIT;175175+ *link_ok = (val == LINK_STATUS_BIT);176176+ /* Turn on/off Link LED */177177+ led_link(cphy, *link_ok);178178+ }179179+ if (speed)180180+ *speed = SPEED_10000;181181+ if (duplex)182182+ *duplex = DUPLEX_FULL;183183+ if (fc)184184+ *fc = PAUSE_RX | PAUSE_TX;185185+ return 0;186186+}187187+188188+static void mv88x201x_destroy(struct cphy *cphy)189189+{190190+ kfree(cphy);191191+}192192+193193+static struct cphy_ops mv88x201x_ops = {194194+ .destroy = mv88x201x_destroy,195195+ .reset = mv88x201x_reset,196196+ .interrupt_enable = mv88x201x_interrupt_enable,197197+ .interrupt_disable = mv88x201x_interrupt_disable,198198+ .interrupt_clear = mv88x201x_interrupt_clear,199199+ .interrupt_handler = mv88x201x_interrupt_handler,200200+ .get_link_status = mv88x201x_get_link_status,201201+ .set_loopback = mv88x201x_set_loopback,202202+};203203+204204+static struct cphy *mv88x201x_phy_create(adapter_t *adapter, int phy_addr,205205+ struct mdio_ops *mdio_ops)206206+{207207+ u32 val;208208+ struct cphy *cphy = kmalloc(sizeof(*cphy), GFP_KERNEL);209209+210210+ if (!cphy)211211+ return NULL;212212+ memset(cphy, 0, sizeof(*cphy));213213+ cphy_init(cphy, adapter, phy_addr, &mv88x201x_ops, mdio_ops);214214+215215+ /* Commands the PHY to enable XFP's clock. */216216+ mdio_read(cphy, 0x3, 0x8300, &val);217217+ mdio_write(cphy, 0x3, 0x8300, val | 1);218218+219219+ /* Clear link status. Required because of a bug in the PHY. 
*/220220+ mdio_read(cphy, 0x1, 0x8, &val);221221+ mdio_read(cphy, 0x3, 0x8, &val);222222+223223+ /* Allows for Link,Ack LED turn on/off */224224+ led_init(cphy);225225+ return cphy;226226+}227227+228228+/* Chip Reset */229229+static int mv88x201x_phy_reset(adapter_t *adapter)230230+{231231+ u32 val;232232+233233+ t1_tpi_read(adapter, A_ELMER0_GPO, &val);234234+ val &= ~4;235235+ t1_tpi_write(adapter, A_ELMER0_GPO, val);236236+ msleep(100);237237+238238+ t1_tpi_write(adapter, A_ELMER0_GPO, val | 4);239239+ msleep(1000);240240+241241+ /* Now lets enable the Laser. Delay 100us */242242+ t1_tpi_read(adapter, A_ELMER0_GPO, &val);243243+ val |= 0x8000;244244+ t1_tpi_write(adapter, A_ELMER0_GPO, val);245245+ udelay(100);246246+ return 0;247247+}248248+249249+struct gphy t1_mv88x201x_ops = {250250+ mv88x201x_phy_create,251251+ mv88x201x_phy_reset252252+};
+826
drivers/net/chelsio/pm3393.c
···11+/*****************************************************************************22+ * *33+ * File: pm3393.c *44+ * $Revision: 1.16 $ *55+ * $Date: 2005/05/14 00:59:32 $ *66+ * Description: *77+ * PMC/SIERRA (pm3393) MAC-PHY functionality. *88+ * part of the Chelsio 10Gb Ethernet Driver. *99+ * *1010+ * This program is free software; you can redistribute it and/or modify *1111+ * it under the terms of the GNU General Public License, version 2, as *1212+ * published by the Free Software Foundation. *1313+ * *1414+ * You should have received a copy of the GNU General Public License along *1515+ * with this program; if not, write to the Free Software Foundation, Inc., *1616+ * 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. *1717+ * *1818+ * THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR IMPLIED *1919+ * WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF *2020+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. *2121+ * *2222+ * http://www.chelsio.com *2323+ * *2424+ * Copyright (c) 2003 - 2005 Chelsio Communications, Inc. *2525+ * All rights reserved. *2626+ * *2727+ * Maintainers: maintainers@chelsio.com *2828+ * *2929+ * Authors: Dimitrios Michailidis <dm@chelsio.com> *3030+ * Tina Yang <tainay@chelsio.com> *3131+ * Felix Marti <felix@chelsio.com> *3232+ * Scott Bardone <sbardone@chelsio.com> *3333+ * Kurt Ottaway <kottaway@chelsio.com> *3434+ * Frank DiMambro <frank@chelsio.com> *3535+ * *3636+ * History: *3737+ * *3838+ ****************************************************************************/3939+4040+#include "common.h"4141+#include "regs.h"4242+#include "gmac.h"4343+#include "elmer0.h"4444+#include "suni1x10gexp_regs.h"4545+4646+/* 802.3ae 10Gb/s MDIO Manageable Device(MMD)4747+ */4848+enum {4949+ MMD_RESERVED,5050+ MMD_PMAPMD,5151+ MMD_WIS,5252+ MMD_PCS,5353+ MMD_PHY_XGXS, /* XGMII Extender Sublayer */5454+ MMD_DTE_XGXS,5555+};5656+5757+enum {5858+ PHY_XGXS_CTRL_1,5959+ PHY_XGXS_STATUS_16060+};6161+6262+#define OFFSET(REG_ADDR) (REG_ADDR << 2)6363+6464+/* Max frame size PM3393 can handle. Includes Ethernet header and CRC. 
*/6565+#define MAX_FRAME_SIZE 96006666+6767+#define IPG 126868+#define TXXG_CONF1_VAL ((IPG << SUNI1x10GEXP_BITOFF_TXXG_IPGT) | \6969+ SUNI1x10GEXP_BITMSK_TXXG_32BIT_ALIGN | SUNI1x10GEXP_BITMSK_TXXG_CRCEN | \7070+ SUNI1x10GEXP_BITMSK_TXXG_PADEN)7171+#define RXXG_CONF1_VAL (SUNI1x10GEXP_BITMSK_RXXG_PUREP | 0x14 | \7272+ SUNI1x10GEXP_BITMSK_RXXG_FLCHK | SUNI1x10GEXP_BITMSK_RXXG_CRC_STRIP)7373+7474+/* Update statistics every 15 minutes */7575+#define STATS_TICK_SECS (15 * 60)7676+7777+enum { /* RMON registers */7878+ RxOctetsReceivedOK = SUNI1x10GEXP_REG_MSTAT_COUNTER_1_LOW,7979+ RxUnicastFramesReceivedOK = SUNI1x10GEXP_REG_MSTAT_COUNTER_4_LOW,8080+ RxMulticastFramesReceivedOK = SUNI1x10GEXP_REG_MSTAT_COUNTER_5_LOW,8181+ RxBroadcastFramesReceivedOK = SUNI1x10GEXP_REG_MSTAT_COUNTER_6_LOW,8282+ RxPAUSEMACCtrlFramesReceived = SUNI1x10GEXP_REG_MSTAT_COUNTER_8_LOW,8383+ RxFrameCheckSequenceErrors = SUNI1x10GEXP_REG_MSTAT_COUNTER_10_LOW,8484+ RxFramesLostDueToInternalMACErrors = SUNI1x10GEXP_REG_MSTAT_COUNTER_11_LOW,8585+ RxSymbolErrors = SUNI1x10GEXP_REG_MSTAT_COUNTER_12_LOW,8686+ RxInRangeLengthErrors = SUNI1x10GEXP_REG_MSTAT_COUNTER_13_LOW,8787+ RxFramesTooLongErrors = SUNI1x10GEXP_REG_MSTAT_COUNTER_15_LOW,8888+ RxJabbers = SUNI1x10GEXP_REG_MSTAT_COUNTER_16_LOW,8989+ RxFragments = SUNI1x10GEXP_REG_MSTAT_COUNTER_17_LOW,9090+ RxUndersizedFrames = SUNI1x10GEXP_REG_MSTAT_COUNTER_18_LOW,9191+9292+ TxOctetsTransmittedOK = SUNI1x10GEXP_REG_MSTAT_COUNTER_33_LOW,9393+ TxFramesLostDueToInternalMACTransmissionError = SUNI1x10GEXP_REG_MSTAT_COUNTER_35_LOW,9494+ TxTransmitSystemError = SUNI1x10GEXP_REG_MSTAT_COUNTER_36_LOW,9595+ TxUnicastFramesTransmittedOK = SUNI1x10GEXP_REG_MSTAT_COUNTER_38_LOW,9696+ TxMulticastFramesTransmittedOK = SUNI1x10GEXP_REG_MSTAT_COUNTER_40_LOW,9797+ TxBroadcastFramesTransmittedOK = SUNI1x10GEXP_REG_MSTAT_COUNTER_42_LOW,9898+ TxPAUSEMACCtrlFramesTransmitted = SUNI1x10GEXP_REG_MSTAT_COUNTER_43_LOW9999+};100100+101101+struct _cmac_instance {102102+ u8 enabled;103103+ u8 fc;104104+ u8 mac_addr[6];105105+};106106+107107+static int pmread(struct cmac *cmac, u32 reg, u32 * data32)108108+{109109+ t1_tpi_read(cmac->adapter, OFFSET(reg), data32);110110+ return 0;111111+}112112+113113+static int pmwrite(struct cmac *cmac, u32 reg, u32 data32)114114+{115115+ t1_tpi_write(cmac->adapter, OFFSET(reg), data32);116116+ return 0;117117+}118118+119119+/* Port reset. */120120+static int pm3393_reset(struct cmac *cmac)121121+{122122+ return 0;123123+}124124+125125+/*126126+ * Enable interrupts for the PM3393127127+128128+ 1. Enable PM3393 BLOCK interrupts.129129+ 2. Enable PM3393 Master Interrupt bit(INTE)130130+ 3. Enable ELMER's PM3393 bit.131131+ 4. 
 */
static int pm3393_interrupt_enable(struct cmac *cmac)
{
	u32 pl_intr;

	/* PM3393 - Enabling all hardware block interrupts.
	 */
	pmwrite(cmac, SUNI1x10GEXP_REG_SERDES_3125_INTERRUPT_ENABLE, 0xffff);
	pmwrite(cmac, SUNI1x10GEXP_REG_XRF_INTERRUPT_ENABLE, 0xffff);
	pmwrite(cmac, SUNI1x10GEXP_REG_XRF_DIAG_INTERRUPT_ENABLE, 0xffff);
	pmwrite(cmac, SUNI1x10GEXP_REG_RXOAM_INTERRUPT_ENABLE, 0xffff);

	/* Don't interrupt on statistics overflow, we are polling */
	pmwrite(cmac, SUNI1x10GEXP_REG_MSTAT_INTERRUPT_MASK_0, 0);
	pmwrite(cmac, SUNI1x10GEXP_REG_MSTAT_INTERRUPT_MASK_1, 0);
	pmwrite(cmac, SUNI1x10GEXP_REG_MSTAT_INTERRUPT_MASK_2, 0);
	pmwrite(cmac, SUNI1x10GEXP_REG_MSTAT_INTERRUPT_MASK_3, 0);

	pmwrite(cmac, SUNI1x10GEXP_REG_IFLX_FIFO_OVERFLOW_ENABLE, 0xffff);
	pmwrite(cmac, SUNI1x10GEXP_REG_PL4ODP_INTERRUPT_MASK, 0xffff);
	pmwrite(cmac, SUNI1x10GEXP_REG_XTEF_INTERRUPT_ENABLE, 0xffff);
	pmwrite(cmac, SUNI1x10GEXP_REG_TXOAM_INTERRUPT_ENABLE, 0xffff);
	pmwrite(cmac, SUNI1x10GEXP_REG_RXXG_CONFIG_3, 0xffff);
	pmwrite(cmac, SUNI1x10GEXP_REG_PL4IO_LOCK_DETECT_MASK, 0xffff);
	pmwrite(cmac, SUNI1x10GEXP_REG_TXXG_CONFIG_3, 0xffff);
	pmwrite(cmac, SUNI1x10GEXP_REG_PL4IDU_INTERRUPT_MASK, 0xffff);
	pmwrite(cmac, SUNI1x10GEXP_REG_EFLX_FIFO_OVERFLOW_ERROR_ENABLE, 0xffff);

	/* PM3393 - Global interrupt enable
	 */
	/* TBD XXX Disable for now until we figure out why error interrupts keep asserting. */
	pmwrite(cmac, SUNI1x10GEXP_REG_GLOBAL_INTERRUPT_ENABLE,
		0 /*SUNI1x10GEXP_BITMSK_TOP_INTE */ );

	/* TERMINATOR - PL_INTERRUPTS_EXT */
	pl_intr = readl(cmac->adapter->regs + A_PL_ENABLE);
	pl_intr |= F_PL_INTR_EXT;
	writel(pl_intr, cmac->adapter->regs + A_PL_ENABLE);
	return 0;
}

static int pm3393_interrupt_disable(struct cmac *cmac)
{
	u32 elmer;

	/* PM3393 - Disabling all hardware block interrupts. */
	pmwrite(cmac, SUNI1x10GEXP_REG_SERDES_3125_INTERRUPT_ENABLE, 0);
	pmwrite(cmac, SUNI1x10GEXP_REG_XRF_INTERRUPT_ENABLE, 0);
	pmwrite(cmac, SUNI1x10GEXP_REG_XRF_DIAG_INTERRUPT_ENABLE, 0);
	pmwrite(cmac, SUNI1x10GEXP_REG_RXOAM_INTERRUPT_ENABLE, 0);
	pmwrite(cmac, SUNI1x10GEXP_REG_MSTAT_INTERRUPT_MASK_0, 0);
	pmwrite(cmac, SUNI1x10GEXP_REG_MSTAT_INTERRUPT_MASK_1, 0);
	pmwrite(cmac, SUNI1x10GEXP_REG_MSTAT_INTERRUPT_MASK_2, 0);
	pmwrite(cmac, SUNI1x10GEXP_REG_MSTAT_INTERRUPT_MASK_3, 0);
	pmwrite(cmac, SUNI1x10GEXP_REG_IFLX_FIFO_OVERFLOW_ENABLE, 0);
	pmwrite(cmac, SUNI1x10GEXP_REG_PL4ODP_INTERRUPT_MASK, 0);
	pmwrite(cmac, SUNI1x10GEXP_REG_XTEF_INTERRUPT_ENABLE, 0);
	pmwrite(cmac, SUNI1x10GEXP_REG_TXOAM_INTERRUPT_ENABLE, 0);
	pmwrite(cmac, SUNI1x10GEXP_REG_RXXG_CONFIG_3, 0);
	pmwrite(cmac, SUNI1x10GEXP_REG_PL4IO_LOCK_DETECT_MASK, 0);
	pmwrite(cmac, SUNI1x10GEXP_REG_TXXG_CONFIG_3, 0);
	pmwrite(cmac, SUNI1x10GEXP_REG_PL4IDU_INTERRUPT_MASK, 0);
	pmwrite(cmac, SUNI1x10GEXP_REG_EFLX_FIFO_OVERFLOW_ERROR_ENABLE, 0);

	/* PM3393 - Global interrupt disable */
	pmwrite(cmac, SUNI1x10GEXP_REG_GLOBAL_INTERRUPT_ENABLE, 0);

	/* ELMER - External chip interrupts. */
	t1_tpi_read(cmac->adapter, A_ELMER0_INT_ENABLE, &elmer);
	elmer &= ~ELMER0_GP_BIT1;
	t1_tpi_write(cmac->adapter, A_ELMER0_INT_ENABLE, elmer);

	/* TERMINATOR - PL_INTERRUPTS_EXT */
	/* DO NOT DISABLE TERMINATOR's EXTERNAL INTERRUPTS. ANOTHER CHIP
	 * COULD WANT THEM ENABLED. We disable PM3393 at the ELMER level.
	 */

	return 0;
}

static int pm3393_interrupt_clear(struct cmac *cmac)
{
	u32 elmer;
	u32 pl_intr;
	u32 val32;

	/* PM3393 - Clearing HW interrupt blocks. Note, this assumes
	 * bit WCIMODE=0 for a clear-on-read.
	 */
	pmread(cmac, SUNI1x10GEXP_REG_SERDES_3125_INTERRUPT_STATUS, &val32);
	pmread(cmac, SUNI1x10GEXP_REG_XRF_INTERRUPT_STATUS, &val32);
	pmread(cmac, SUNI1x10GEXP_REG_XRF_DIAG_INTERRUPT_STATUS, &val32);
	pmread(cmac, SUNI1x10GEXP_REG_RXOAM_INTERRUPT_STATUS, &val32);
	pmread(cmac, SUNI1x10GEXP_REG_PL4ODP_INTERRUPT, &val32);
	pmread(cmac, SUNI1x10GEXP_REG_XTEF_INTERRUPT_STATUS, &val32);
	pmread(cmac, SUNI1x10GEXP_REG_IFLX_FIFO_OVERFLOW_INTERRUPT, &val32);
	pmread(cmac, SUNI1x10GEXP_REG_TXOAM_INTERRUPT_STATUS, &val32);
	pmread(cmac, SUNI1x10GEXP_REG_RXXG_INTERRUPT, &val32);
	pmread(cmac, SUNI1x10GEXP_REG_TXXG_INTERRUPT, &val32);
	pmread(cmac, SUNI1x10GEXP_REG_PL4IDU_INTERRUPT, &val32);
	pmread(cmac, SUNI1x10GEXP_REG_EFLX_FIFO_OVERFLOW_ERROR_INDICATION,
	       &val32);
	pmread(cmac, SUNI1x10GEXP_REG_PL4IO_LOCK_DETECT_STATUS, &val32);
	pmread(cmac, SUNI1x10GEXP_REG_PL4IO_LOCK_DETECT_CHANGE, &val32);

	/* PM3393 - Global interrupt status
	 */
	pmread(cmac, SUNI1x10GEXP_REG_MASTER_INTERRUPT_STATUS, &val32);

	/* ELMER - External chip interrupts.
	 */
	t1_tpi_read(cmac->adapter, A_ELMER0_INT_CAUSE, &elmer);
	elmer |= ELMER0_GP_BIT1;
	t1_tpi_write(cmac->adapter, A_ELMER0_INT_CAUSE, elmer);

	/* TERMINATOR - PL_INTERRUPTS_EXT
	 */
	pl_intr = readl(cmac->adapter->regs + A_PL_CAUSE);
	pl_intr |= F_PL_INTR_EXT;
	writel(pl_intr, cmac->adapter->regs + A_PL_CAUSE);

	return 0;
}

/* Interrupt handler */
static int pm3393_interrupt_handler(struct cmac *cmac)
{
	u32 master_intr_status;
/*
	1. Read master interrupt register.
	2. Read BLOCK's interrupt status registers.
	3. Handle BLOCK interrupts.
*/
	/* Read the master interrupt status register. */
	pmread(cmac, SUNI1x10GEXP_REG_MASTER_INTERRUPT_STATUS,
	       &master_intr_status);

	/* TBD XXX Let's just clear everything for now */
	pm3393_interrupt_clear(cmac);

	return 0;
}

static int pm3393_enable(struct cmac *cmac, int which)
{
	if (which & MAC_DIRECTION_RX)
		pmwrite(cmac, SUNI1x10GEXP_REG_RXXG_CONFIG_1,
			(RXXG_CONF1_VAL | SUNI1x10GEXP_BITMSK_RXXG_RXEN));

	if (which & MAC_DIRECTION_TX) {
		u32 val = TXXG_CONF1_VAL | SUNI1x10GEXP_BITMSK_TXXG_TXEN0;

		if (cmac->instance->fc & PAUSE_RX)
			val |= SUNI1x10GEXP_BITMSK_TXXG_FCRX;
		if (cmac->instance->fc & PAUSE_TX)
			val |= SUNI1x10GEXP_BITMSK_TXXG_FCTX;
		pmwrite(cmac, SUNI1x10GEXP_REG_TXXG_CONFIG_1, val);
	}

	cmac->instance->enabled |= which;
	return 0;
}

static int pm3393_enable_port(struct cmac *cmac, int which)
{
	/* Clear port statistics */
	pmwrite(cmac, SUNI1x10GEXP_REG_MSTAT_CONTROL,
		SUNI1x10GEXP_BITMSK_MSTAT_CLEAR);
	udelay(2);
	memset(&cmac->stats, 0, sizeof(struct cmac_statistics));

	pm3393_enable(cmac, which);

	/*
	 * XXX This should be done by the PHY and preferably not at all.
	 * The PHY doesn't give us link status indication on its own so have
	 * the link management code query it instead.
	 */
	{
		extern void link_changed(adapter_t *adapter, int port_id);

		link_changed(cmac->adapter, 0);
	}
	return 0;
}

static int pm3393_disable(struct cmac *cmac, int which)
{
	if (which & MAC_DIRECTION_RX)
		pmwrite(cmac, SUNI1x10GEXP_REG_RXXG_CONFIG_1, RXXG_CONF1_VAL);
	if (which & MAC_DIRECTION_TX)
		pmwrite(cmac, SUNI1x10GEXP_REG_TXXG_CONFIG_1, TXXG_CONF1_VAL);

	/*
	 * The disable is graceful. Give the PM3393 time. Can't wait very
	 * long here, we may be holding locks.
	 */
	udelay(20);

	cmac->instance->enabled &= ~which;
	return 0;
}

static int pm3393_loopback_enable(struct cmac *cmac)
{
	return 0;
}

static int pm3393_loopback_disable(struct cmac *cmac)
{
	return 0;
}

static int pm3393_set_mtu(struct cmac *cmac, int mtu)
{
	int enabled = cmac->instance->enabled;

	/* MAX_FRAME_SIZE includes header + FCS, mtu doesn't */
	mtu += 14 + 4;
	if (mtu > MAX_FRAME_SIZE)
		return -EINVAL;

	/* Disable Rx/Tx MAC before configuring it.
*/355355+ if (enabled)356356+ pm3393_disable(cmac, MAC_DIRECTION_RX | MAC_DIRECTION_TX);357357+358358+ pmwrite(cmac, SUNI1x10GEXP_REG_RXXG_MAX_FRAME_LENGTH, mtu);359359+ pmwrite(cmac, SUNI1x10GEXP_REG_TXXG_MAX_FRAME_SIZE, mtu);360360+361361+ if (enabled)362362+ pm3393_enable(cmac, enabled);363363+ return 0;364364+}365365+366366+static u32 calc_crc(u8 *b, int len)367367+{368368+ int i;369369+ u32 crc = (u32)~0;370370+371371+ /* calculate crc one bit at a time */372372+ while (len--) {373373+ crc ^= *b++;374374+ for (i = 0; i < 8; i++) {375375+ if (crc & 0x1)376376+ crc = (crc >> 1) ^ 0xedb88320;377377+ else378378+ crc = (crc >> 1);379379+ }380380+ }381381+382382+ /* reverse bits */383383+ crc = ((crc >> 4) & 0x0f0f0f0f) | ((crc << 4) & 0xf0f0f0f0);384384+ crc = ((crc >> 2) & 0x33333333) | ((crc << 2) & 0xcccccccc);385385+ crc = ((crc >> 1) & 0x55555555) | ((crc << 1) & 0xaaaaaaaa);386386+ /* swap bytes */387387+ crc = (crc >> 16) | (crc << 16);388388+ crc = (crc >> 8 & 0x00ff00ff) | (crc << 8 & 0xff00ff00);389389+390390+ return crc;391391+}392392+393393+static int pm3393_set_rx_mode(struct cmac *cmac, struct t1_rx_mode *rm)394394+{395395+ int enabled = cmac->instance->enabled & MAC_DIRECTION_RX;396396+ u32 rx_mode;397397+398398+ /* Disable MAC RX before reconfiguring it */399399+ if (enabled)400400+ pm3393_disable(cmac, MAC_DIRECTION_RX);401401+402402+ pmread(cmac, SUNI1x10GEXP_REG_RXXG_ADDRESS_FILTER_CONTROL_2, &rx_mode);403403+ rx_mode &= ~(SUNI1x10GEXP_BITMSK_RXXG_PMODE |404404+ SUNI1x10GEXP_BITMSK_RXXG_MHASH_EN);405405+ pmwrite(cmac, SUNI1x10GEXP_REG_RXXG_ADDRESS_FILTER_CONTROL_2,406406+ (u16)rx_mode);407407+408408+ if (t1_rx_mode_promisc(rm)) {409409+ /* Promiscuous mode. */410410+ rx_mode |= SUNI1x10GEXP_BITMSK_RXXG_PMODE;411411+ }412412+ if (t1_rx_mode_allmulti(rm)) {413413+ /* Accept all multicast. */414414+ pmwrite(cmac, SUNI1x10GEXP_REG_RXXG_MULTICAST_HASH_LOW, 0xffff);415415+ pmwrite(cmac, SUNI1x10GEXP_REG_RXXG_MULTICAST_HASH_MIDLOW, 0xffff);416416+ pmwrite(cmac, SUNI1x10GEXP_REG_RXXG_MULTICAST_HASH_MIDHIGH, 0xffff);417417+ pmwrite(cmac, SUNI1x10GEXP_REG_RXXG_MULTICAST_HASH_HIGH, 0xffff);418418+ rx_mode |= SUNI1x10GEXP_BITMSK_RXXG_MHASH_EN;419419+ } else if (t1_rx_mode_mc_cnt(rm)) {420420+ /* Accept one or more multicast(s). 
*/421421+ u8 *addr;422422+ int bit;423423+ u16 mc_filter[4] = { 0, };424424+425425+ while ((addr = t1_get_next_mcaddr(rm))) {426426+ bit = (calc_crc(addr, ETH_ALEN) >> 23) & 0x3f; /* bit[23:28] */427427+ mc_filter[bit >> 4] |= 1 << (bit & 0xf);428428+ }429429+ pmwrite(cmac, SUNI1x10GEXP_REG_RXXG_MULTICAST_HASH_LOW, mc_filter[0]);430430+ pmwrite(cmac, SUNI1x10GEXP_REG_RXXG_MULTICAST_HASH_MIDLOW, mc_filter[1]);431431+ pmwrite(cmac, SUNI1x10GEXP_REG_RXXG_MULTICAST_HASH_MIDHIGH, mc_filter[2]);432432+ pmwrite(cmac, SUNI1x10GEXP_REG_RXXG_MULTICAST_HASH_HIGH, mc_filter[3]);433433+ rx_mode |= SUNI1x10GEXP_BITMSK_RXXG_MHASH_EN;434434+ }435435+436436+ pmwrite(cmac, SUNI1x10GEXP_REG_RXXG_ADDRESS_FILTER_CONTROL_2, (u16)rx_mode);437437+438438+ if (enabled)439439+ pm3393_enable(cmac, MAC_DIRECTION_RX);440440+441441+ return 0;442442+}443443+444444+static int pm3393_get_speed_duplex_fc(struct cmac *cmac, int *speed,445445+ int *duplex, int *fc)446446+{447447+ if (speed)448448+ *speed = SPEED_10000;449449+ if (duplex)450450+ *duplex = DUPLEX_FULL;451451+ if (fc)452452+ *fc = cmac->instance->fc;453453+ return 0;454454+}455455+456456+static int pm3393_set_speed_duplex_fc(struct cmac *cmac, int speed, int duplex,457457+ int fc)458458+{459459+ if (speed >= 0 && speed != SPEED_10000)460460+ return -1;461461+ if (duplex >= 0 && duplex != DUPLEX_FULL)462462+ return -1;463463+ if (fc & ~(PAUSE_TX | PAUSE_RX))464464+ return -1;465465+466466+ if (fc != cmac->instance->fc) {467467+ cmac->instance->fc = (u8) fc;468468+ if (cmac->instance->enabled & MAC_DIRECTION_TX)469469+ pm3393_enable(cmac, MAC_DIRECTION_TX);470470+ }471471+ return 0;472472+}473473+474474+#define RMON_UPDATE(mac, name, stat_name) \475475+ { \476476+ t1_tpi_read((mac)->adapter, OFFSET(name), &val0); \477477+ t1_tpi_read((mac)->adapter, OFFSET(((name)+1)), &val1); \478478+ t1_tpi_read((mac)->adapter, OFFSET(((name)+2)), &val2); \479479+ (mac)->stats.stat_name = ((u64)val0 & 0xffff) | \480480+ (((u64)val1 & 0xffff) << 16) | \481481+ (((u64)val2 & 0xff) << 32) | \482482+ ((mac)->stats.stat_name & \483483+ (~(u64)0 << 40)); \484484+ if (ro & \485485+ ((name - SUNI1x10GEXP_REG_MSTAT_COUNTER_0_LOW) >> 2)) \486486+ (mac)->stats.stat_name += ((u64)1 << 40); \487487+ }488488+489489+static const struct cmac_statistics *pm3393_update_statistics(struct cmac *mac,490490+ int flag)491491+{492492+ u64 ro;493493+ u32 val0, val1, val2, val3;494494+495495+ /* Snap the counters */496496+ pmwrite(mac, SUNI1x10GEXP_REG_MSTAT_CONTROL,497497+ SUNI1x10GEXP_BITMSK_MSTAT_SNAP);498498+499499+ /* Counter rollover, clear on read */500500+ pmread(mac, SUNI1x10GEXP_REG_MSTAT_COUNTER_ROLLOVER_0, &val0);501501+ pmread(mac, SUNI1x10GEXP_REG_MSTAT_COUNTER_ROLLOVER_1, &val1);502502+ pmread(mac, SUNI1x10GEXP_REG_MSTAT_COUNTER_ROLLOVER_2, &val2);503503+ pmread(mac, SUNI1x10GEXP_REG_MSTAT_COUNTER_ROLLOVER_3, &val3);504504+ ro = ((u64)val0 & 0xffff) | (((u64)val1 & 0xffff) << 16) |505505+ (((u64)val2 & 0xffff) << 32) | (((u64)val3 & 0xffff) << 48);506506+507507+ /* Rx stats */508508+ RMON_UPDATE(mac, RxOctetsReceivedOK, RxOctetsOK);509509+ RMON_UPDATE(mac, RxUnicastFramesReceivedOK, RxUnicastFramesOK);510510+ RMON_UPDATE(mac, RxMulticastFramesReceivedOK, RxMulticastFramesOK);511511+ RMON_UPDATE(mac, RxBroadcastFramesReceivedOK, RxBroadcastFramesOK);512512+ RMON_UPDATE(mac, RxPAUSEMACCtrlFramesReceived, RxPauseFrames);513513+ RMON_UPDATE(mac, RxFrameCheckSequenceErrors, RxFCSErrors);514514+ RMON_UPDATE(mac, RxFramesLostDueToInternalMACErrors,515515+ RxInternalMACRcvError);516516+ 
RMON_UPDATE(mac, RxSymbolErrors, RxSymbolErrors);517517+ RMON_UPDATE(mac, RxInRangeLengthErrors, RxInRangeLengthErrors);518518+ RMON_UPDATE(mac, RxFramesTooLongErrors , RxFrameTooLongErrors);519519+ RMON_UPDATE(mac, RxJabbers, RxJabberErrors);520520+ RMON_UPDATE(mac, RxFragments, RxRuntErrors);521521+ RMON_UPDATE(mac, RxUndersizedFrames, RxRuntErrors);522522+523523+ /* Tx stats */524524+ RMON_UPDATE(mac, TxOctetsTransmittedOK, TxOctetsOK);525525+ RMON_UPDATE(mac, TxFramesLostDueToInternalMACTransmissionError,526526+ TxInternalMACXmitError);527527+ RMON_UPDATE(mac, TxTransmitSystemError, TxFCSErrors);528528+ RMON_UPDATE(mac, TxUnicastFramesTransmittedOK, TxUnicastFramesOK);529529+ RMON_UPDATE(mac, TxMulticastFramesTransmittedOK, TxMulticastFramesOK);530530+ RMON_UPDATE(mac, TxBroadcastFramesTransmittedOK, TxBroadcastFramesOK);531531+ RMON_UPDATE(mac, TxPAUSEMACCtrlFramesTransmitted, TxPauseFrames);532532+533533+ return &mac->stats;534534+}535535+536536+static int pm3393_macaddress_get(struct cmac *cmac, u8 mac_addr[6])537537+{538538+ memcpy(mac_addr, cmac->instance->mac_addr, 6);539539+ return 0;540540+}541541+542542+static int pm3393_macaddress_set(struct cmac *cmac, u8 ma[6])543543+{544544+ u32 val, lo, mid, hi, enabled = cmac->instance->enabled;545545+546546+ /*547547+ * MAC addr: 00:07:43:00:13:09548548+ *549549+ * ma[5] = 0x09550550+ * ma[4] = 0x13551551+ * ma[3] = 0x00552552+ * ma[2] = 0x43553553+ * ma[1] = 0x07554554+ * ma[0] = 0x00555555+ *556556+ * The PM3393 requires byte swapping and reverse order entry557557+ * when programming MAC addresses:558558+ *559559+ * low_bits[15:0] = ma[1]:ma[0]560560+ * mid_bits[31:16] = ma[3]:ma[2]561561+ * high_bits[47:32] = ma[5]:ma[4]562562+ */563563+564564+ /* Store local copy */565565+ memcpy(cmac->instance->mac_addr, ma, 6);566566+567567+ lo = ((u32) ma[1] << 8) | (u32) ma[0];568568+ mid = ((u32) ma[3] << 8) | (u32) ma[2];569569+ hi = ((u32) ma[5] << 8) | (u32) ma[4];570570+571571+ /* Disable Rx/Tx MAC before configuring it. 
*/572572+ if (enabled)573573+ pm3393_disable(cmac, MAC_DIRECTION_RX | MAC_DIRECTION_TX);574574+575575+ /* Set RXXG Station Address */576576+ pmwrite(cmac, SUNI1x10GEXP_REG_RXXG_SA_15_0, lo);577577+ pmwrite(cmac, SUNI1x10GEXP_REG_RXXG_SA_31_16, mid);578578+ pmwrite(cmac, SUNI1x10GEXP_REG_RXXG_SA_47_32, hi);579579+580580+ /* Set TXXG Station Address */581581+ pmwrite(cmac, SUNI1x10GEXP_REG_TXXG_SA_15_0, lo);582582+ pmwrite(cmac, SUNI1x10GEXP_REG_TXXG_SA_31_16, mid);583583+ pmwrite(cmac, SUNI1x10GEXP_REG_TXXG_SA_47_32, hi);584584+585585+ /* Setup Exact Match Filter 1 with our MAC address586586+ *587587+ * Must disable exact match filter before configuring it.588588+ */589589+ pmread(cmac, SUNI1x10GEXP_REG_RXXG_ADDRESS_FILTER_CONTROL_0, &val);590590+ val &= 0xff0f;591591+ pmwrite(cmac, SUNI1x10GEXP_REG_RXXG_ADDRESS_FILTER_CONTROL_0, val);592592+593593+ pmwrite(cmac, SUNI1x10GEXP_REG_RXXG_EXACT_MATCH_ADDR_1_LOW, lo);594594+ pmwrite(cmac, SUNI1x10GEXP_REG_RXXG_EXACT_MATCH_ADDR_1_MID, mid);595595+ pmwrite(cmac, SUNI1x10GEXP_REG_RXXG_EXACT_MATCH_ADDR_1_HIGH, hi);596596+597597+ val |= 0x0090;598598+ pmwrite(cmac, SUNI1x10GEXP_REG_RXXG_ADDRESS_FILTER_CONTROL_0, val);599599+600600+ if (enabled)601601+ pm3393_enable(cmac, enabled);602602+ return 0;603603+}604604+605605+static void pm3393_destroy(struct cmac *cmac)606606+{607607+ kfree(cmac);608608+}609609+610610+static struct cmac_ops pm3393_ops = {611611+ .destroy = pm3393_destroy,612612+ .reset = pm3393_reset,613613+ .interrupt_enable = pm3393_interrupt_enable,614614+ .interrupt_disable = pm3393_interrupt_disable,615615+ .interrupt_clear = pm3393_interrupt_clear,616616+ .interrupt_handler = pm3393_interrupt_handler,617617+ .enable = pm3393_enable_port,618618+ .disable = pm3393_disable,619619+ .loopback_enable = pm3393_loopback_enable,620620+ .loopback_disable = pm3393_loopback_disable,621621+ .set_mtu = pm3393_set_mtu,622622+ .set_rx_mode = pm3393_set_rx_mode,623623+ .get_speed_duplex_fc = pm3393_get_speed_duplex_fc,624624+ .set_speed_duplex_fc = pm3393_set_speed_duplex_fc,625625+ .statistics_update = pm3393_update_statistics,626626+ .macaddress_get = pm3393_macaddress_get,627627+ .macaddress_set = pm3393_macaddress_set628628+};629629+630630+static struct cmac *pm3393_mac_create(adapter_t *adapter, int index)631631+{632632+ struct cmac *cmac;633633+634634+ cmac = kmalloc(sizeof(*cmac) + sizeof(cmac_instance), GFP_KERNEL);635635+ if (!cmac)636636+ return NULL;637637+ memset(cmac, 0, sizeof(*cmac));638638+639639+ cmac->ops = &pm3393_ops;640640+ cmac->instance = (cmac_instance *) (cmac + 1);641641+ cmac->adapter = adapter;642642+ cmac->instance->fc = PAUSE_TX | PAUSE_RX;643643+644644+ t1_tpi_write(adapter, OFFSET(0x0001), 0x00008000);645645+ t1_tpi_write(adapter, OFFSET(0x0001), 0x00000000);646646+ t1_tpi_write(adapter, OFFSET(0x2308), 0x00009800);647647+ t1_tpi_write(adapter, OFFSET(0x2305), 0x00001001); /* PL4IO Enable */648648+ t1_tpi_write(adapter, OFFSET(0x2320), 0x00008800);649649+ t1_tpi_write(adapter, OFFSET(0x2321), 0x00008800);650650+ t1_tpi_write(adapter, OFFSET(0x2322), 0x00008800);651651+ t1_tpi_write(adapter, OFFSET(0x2323), 0x00008800);652652+ t1_tpi_write(adapter, OFFSET(0x2324), 0x00008800);653653+ t1_tpi_write(adapter, OFFSET(0x2325), 0x00008800);654654+ t1_tpi_write(adapter, OFFSET(0x2326), 0x00008800);655655+ t1_tpi_write(adapter, OFFSET(0x2327), 0x00008800);656656+ t1_tpi_write(adapter, OFFSET(0x2328), 0x00008800);657657+ t1_tpi_write(adapter, OFFSET(0x2329), 0x00008800);658658+ t1_tpi_write(adapter, OFFSET(0x232a), 
		     0x00008800);
	t1_tpi_write(adapter, OFFSET(0x232b), 0x00008800);
	t1_tpi_write(adapter, OFFSET(0x232c), 0x00008800);
	t1_tpi_write(adapter, OFFSET(0x232d), 0x00008800);
	t1_tpi_write(adapter, OFFSET(0x232e), 0x00008800);
	t1_tpi_write(adapter, OFFSET(0x232f), 0x00008800);
	t1_tpi_write(adapter, OFFSET(0x230d), 0x00009c00);
	t1_tpi_write(adapter, OFFSET(0x2304), 0x00000202);	/* PL4IO Calendar Repetitions */

	t1_tpi_write(adapter, OFFSET(0x3200), 0x00008080);	/* EFLX Enable */
	t1_tpi_write(adapter, OFFSET(0x3210), 0x00000000);	/* EFLX Channel Deprovision */
	t1_tpi_write(adapter, OFFSET(0x3203), 0x00000000);	/* EFLX Low Limit */
	t1_tpi_write(adapter, OFFSET(0x3204), 0x00000040);	/* EFLX High Limit */
	t1_tpi_write(adapter, OFFSET(0x3205), 0x000002cc);	/* EFLX Almost Full */
	t1_tpi_write(adapter, OFFSET(0x3206), 0x00000199);	/* EFLX Almost Empty */
	t1_tpi_write(adapter, OFFSET(0x3207), 0x00000240);	/* EFLX Cut Through Threshold */
	t1_tpi_write(adapter, OFFSET(0x3202), 0x00000000);	/* EFLX Indirect Register Update */
	t1_tpi_write(adapter, OFFSET(0x3210), 0x00000001);	/* EFLX Channel Provision */
	t1_tpi_write(adapter, OFFSET(0x3208), 0x0000ffff);	/* EFLX Undocumented */
	t1_tpi_write(adapter, OFFSET(0x320a), 0x0000ffff);	/* EFLX Undocumented */
	t1_tpi_write(adapter, OFFSET(0x320c), 0x0000ffff);	/* EFLX enable overflow interrupt. The other bits are undocumented */
	t1_tpi_write(adapter, OFFSET(0x320e), 0x0000ffff);	/* EFLX Undocumented */

	t1_tpi_write(adapter, OFFSET(0x2200), 0x0000c000);	/* IFLX Configuration - enable */
	t1_tpi_write(adapter, OFFSET(0x2201), 0x00000000);	/* IFLX Channel Deprovision */
	t1_tpi_write(adapter, OFFSET(0x220e), 0x00000000);	/* IFLX Low Limit */
	t1_tpi_write(adapter, OFFSET(0x220f), 0x00000100);	/* IFLX High Limit */
	t1_tpi_write(adapter, OFFSET(0x2210), 0x00000c00);	/* IFLX Almost Full Limit */
	t1_tpi_write(adapter, OFFSET(0x2211), 0x00000599);	/* IFLX Almost Empty Limit */
	t1_tpi_write(adapter, OFFSET(0x220d), 0x00000000);	/* IFLX Indirect Register Update */
	t1_tpi_write(adapter, OFFSET(0x2201), 0x00000001);	/* IFLX Channel Provision */
	t1_tpi_write(adapter, OFFSET(0x2203), 0x0000ffff);	/* IFLX Undocumented */
	t1_tpi_write(adapter, OFFSET(0x2205), 0x0000ffff);	/* IFLX Undocumented */
	t1_tpi_write(adapter, OFFSET(0x2209), 0x0000ffff);	/* IFLX Enable overflow interrupt. The other bits are undocumented */

	t1_tpi_write(adapter, OFFSET(0x2241), 0xfffffffe);	/* PL4MOS Undocumented */
	t1_tpi_write(adapter, OFFSET(0x2242), 0x0000ffff);	/* PL4MOS Undocumented */
	t1_tpi_write(adapter, OFFSET(0x2243), 0x00000008);	/* PL4MOS Starving Burst Size */
	t1_tpi_write(adapter, OFFSET(0x2244), 0x00000008);	/* PL4MOS Hungry Burst Size */
	t1_tpi_write(adapter, OFFSET(0x2245), 0x00000008);	/* PL4MOS Transfer Size */
	t1_tpi_write(adapter, OFFSET(0x2240), 0x00000005);	/* PL4MOS Disable */

	t1_tpi_write(adapter, OFFSET(0x2280), 0x00002103);	/* PL4ODP Training Repeat and SOP rule */
	t1_tpi_write(adapter, OFFSET(0x2284), 0x00000000);	/* PL4ODP MAX_T setting */

	t1_tpi_write(adapter, OFFSET(0x3280), 0x00000087);	/* PL4IDU Enable data forward, port state machine. Set ALLOW_NON_ZERO_OLB */
	t1_tpi_write(adapter, OFFSET(0x3282), 0x0000001f);	/* PL4IDU Enable Dip4 check error interrupts */

	t1_tpi_write(adapter, OFFSET(0x3040), 0x0c32);	/* # TXXG Config */
	/* For T1 use timer based Mac flow control. */
	t1_tpi_write(adapter, OFFSET(0x304d), 0x8000);
	t1_tpi_write(adapter, OFFSET(0x2040), 0x059c);	/* # RXXG Config */
	t1_tpi_write(adapter, OFFSET(0x2049), 0x0001);	/* # RXXG Cut Through */
	t1_tpi_write(adapter, OFFSET(0x2070), 0x0000);	/* # Disable promiscuous mode */

	/* Setup Exact Match Filter 0 to allow broadcast packets.
	 */
	t1_tpi_write(adapter, OFFSET(0x206e), 0x0000);	/* # Disable Match Enable bit */
	t1_tpi_write(adapter, OFFSET(0x204a), 0xffff);	/* # low addr */
	t1_tpi_write(adapter, OFFSET(0x204b), 0xffff);	/* # mid addr */
	t1_tpi_write(adapter, OFFSET(0x204c), 0xffff);	/* # high addr */
	t1_tpi_write(adapter, OFFSET(0x206e), 0x0009);	/* # Enable Match Enable bit */

	t1_tpi_write(adapter, OFFSET(0x0003), 0x0000);	/* # NO SOP/PAD_EN setup */
	t1_tpi_write(adapter, OFFSET(0x0100), 0x0ff0);	/* # RXEQB disabled */
	t1_tpi_write(adapter, OFFSET(0x0101), 0x0f0f);	/* # No Preemphasis */

	return cmac;
}

static int pm3393_mac_reset(adapter_t *adapter)
{
	u32 val;
	u32 x;
	u32 is_pl4_reset_finished;
	u32 is_pl4_outof_lock;
	u32 is_xaui_mabc_pll_locked;
	u32 successful_reset;
	int i;

	/* The following steps are required to properly reset
	 * the PM3393. This information is provided in the
	 * PM3393 datasheet (Issue 2: November 2002)
	 * section 13.1 -- Device Reset.
	 *
	 * The PM3393 has three types of components that are
	 * individually reset:
	 *
	 * DRESETB      - Digital circuitry
	 * PL4_ARESETB  - PL4 analog circuitry
	 * XAUI_ARESETB - XAUI bus analog circuitry
	 *
	 * Steps to reset PM3393 using RSTB pin:
	 *
	 * 1. Assert RSTB pin low ( write 0 )
	 * 2. Wait at least 1ms to initiate a complete initialization of device.
	 * 3. Wait until all external clocks and REFSEL are stable.
	 * 4. Wait minimum of 1ms. (after external clocks and REFSEL are stable)
	 * 5. De-assert RSTB ( write 1 )
	 * 6. Wait until internal timers expire after ~14ms.
	 *    - Allows analog clock synthesizer (PL4CSU) to stabilize to
	 *      selected reference frequency before allowing the digital
	 *      portion of the device to operate.
	 * 7. Wait at least 200us for XAUI interface to stabilize.
	 * 8. Verify the PM3393 came out of reset successfully.
	 *    Set successful reset flag if everything worked, else try again
	 *    a few more times.
	 */

	successful_reset = 0;
	for (i = 0; i < 3 && !successful_reset; i++) {
		/* 1 */
		t1_tpi_read(adapter, A_ELMER0_GPO, &val);
		val &= ~1;
		t1_tpi_write(adapter, A_ELMER0_GPO, val);

		/* 2 */
		msleep(1);

		/* 3 */
		msleep(1);

		/* 4 */
		msleep(2 /*1 extra ms for safety */ );

		/* 5 */
		val |= 1;
		t1_tpi_write(adapter, A_ELMER0_GPO, val);

		/* 6 */
		msleep(15 /*1 extra ms for safety */ );

		/* 7 */
		msleep(1);

		/* 8 */

		/* Has PL4 analog block come out of reset correctly? */
		t1_tpi_read(adapter, OFFSET(SUNI1x10GEXP_REG_DEVICE_STATUS), &val);
		is_pl4_reset_finished = (val & SUNI1x10GEXP_BITMSK_TOP_EXPIRED);

		/* TBD XXX SUNI1x10GEXP_BITMSK_TOP_PL4_IS_DOOL gets locked later in the init sequence
		 * figure out why? */

		/* Have all PL4 block clocks locked? */
		x = (SUNI1x10GEXP_BITMSK_TOP_PL4_ID_DOOL
		     /*| SUNI1x10GEXP_BITMSK_TOP_PL4_IS_DOOL */ |
		     SUNI1x10GEXP_BITMSK_TOP_PL4_ID_ROOL |
		     SUNI1x10GEXP_BITMSK_TOP_PL4_IS_ROOL |
		     SUNI1x10GEXP_BITMSK_TOP_PL4_OUT_ROOL);
		is_pl4_outof_lock = (val & x);

		/* ??? If this fails, might be able to software reset the XAUI part
		 * and try to recover... thus saving us from doing another HW reset */
		/* Has the XAUI MABC PLL circuitry stabilized? */
		is_xaui_mabc_pll_locked =
		    (val & SUNI1x10GEXP_BITMSK_TOP_SXRA_EXPIRED);

		successful_reset = (is_pl4_reset_finished && !is_pl4_outof_lock
				    && is_xaui_mabc_pll_locked);
	}
	return successful_reset ? 0 : 1;
}

struct gmac t1_pm3393_ops = {
	STATS_TICK_SECS,
	pm3393_mac_create,
	pm3393_mac_reset
};
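For readers following how this MAC support is consumed, the sketch below shows one plausible bring-up sequence using only the helpers and cmac_ops entries defined in this file. It is illustrative only and not part of the driver (hence the #if 0 guard): the helper name, port index, MTU value and error codes are ours, and in the real driver these entry points are presumably reached through the t1_pm3393_ops table rather than called directly.

#if 0	/* Illustrative sketch only -- not compiled into the driver. */
static int example_pm3393_bring_up(adapter_t *adapter, u8 hwaddr[6])
{
	struct cmac *mac;

	if (pm3393_mac_reset(adapter))		/* hard reset the PM3393 first */
		return -EIO;

	mac = pm3393_mac_create(adapter, 0);	/* MAC instance for port 0 */
	if (!mac)
		return -ENOMEM;

	mac->ops->macaddress_set(mac, hwaddr);	/* program station address */
	mac->ops->set_mtu(mac, 1500);		/* standard Ethernet MTU */
	mac->ops->interrupt_enable(mac);
	mac->ops->enable(mac, MAC_DIRECTION_RX | MAC_DIRECTION_TX);
	return 0;
}
#endif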
···11+/*****************************************************************************22+ * *33+ * File: sge.c *44+ * $Revision: 1.26 $ *55+ * $Date: 2005/06/21 18:29:48 $ *66+ * Description: *77+ * DMA engine. *88+ * part of the Chelsio 10Gb Ethernet Driver. *99+ * *1010+ * This program is free software; you can redistribute it and/or modify *1111+ * it under the terms of the GNU General Public License, version 2, as *1212+ * published by the Free Software Foundation. *1313+ * *1414+ * You should have received a copy of the GNU General Public License along *1515+ * with this program; if not, write to the Free Software Foundation, Inc., *1616+ * 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. *1717+ * *1818+ * THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR IMPLIED *1919+ * WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF *2020+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. *2121+ * *2222+ * http://www.chelsio.com *2323+ * *2424+ * Copyright (c) 2003 - 2005 Chelsio Communications, Inc. *2525+ * All rights reserved. *2626+ * *2727+ * Maintainers: maintainers@chelsio.com *2828+ * *2929+ * Authors: Dimitrios Michailidis <dm@chelsio.com> *3030+ * Tina Yang <tainay@chelsio.com> *3131+ * Felix Marti <felix@chelsio.com> *3232+ * Scott Bardone <sbardone@chelsio.com> *3333+ * Kurt Ottaway <kottaway@chelsio.com> *3434+ * Frank DiMambro <frank@chelsio.com> *3535+ * *3636+ * History: *3737+ * *3838+ ****************************************************************************/3939+4040+#include "common.h"4141+4242+#include <linux/config.h>4343+#include <linux/types.h>4444+#include <linux/errno.h>4545+#include <linux/pci.h>4646+#include <linux/netdevice.h>4747+#include <linux/etherdevice.h>4848+#include <linux/if_vlan.h>4949+#include <linux/skbuff.h>5050+#include <linux/init.h>5151+#include <linux/mm.h>5252+#include <linux/ip.h>5353+#include <linux/in.h>5454+#include <linux/if_arp.h>5555+5656+#include "cpl5_cmd.h"5757+#include "sge.h"5858+#include "regs.h"5959+#include "espi.h"6060+6161+6262+#ifdef NETIF_F_TSO6363+#include <linux/tcp.h>6464+#endif6565+6666+#define SGE_CMDQ_N 26767+#define SGE_FREELQ_N 26868+#define SGE_CMDQ0_E_N 10246969+#define SGE_CMDQ1_E_N 1287070+#define SGE_FREEL_SIZE 40967171+#define SGE_JUMBO_FREEL_SIZE 5127272+#define SGE_FREEL_REFILL_THRESH 167373+#define SGE_RESPQ_E_N 10247474+#define SGE_INTRTIMER_NRES 10007575+#define SGE_RX_COPY_THRES 2567676+#define SGE_RX_SM_BUF_SIZE 15367777+7878+# define SGE_RX_DROP_THRES 27979+8080+#define SGE_RESPQ_REPLENISH_THRES (SGE_RESPQ_E_N / 4)8181+8282+/*8383+ * Period of the TX buffer reclaim timer. 
This timer does not need to run8484+ * frequently as TX buffers are usually reclaimed by new TX packets.8585+ */8686+#define TX_RECLAIM_PERIOD (HZ / 4)8787+8888+#ifndef NET_IP_ALIGN8989+# define NET_IP_ALIGN 29090+#endif9191+9292+#define M_CMD_LEN 0x7fffffff9393+#define V_CMD_LEN(v) (v)9494+#define G_CMD_LEN(v) ((v) & M_CMD_LEN)9595+#define V_CMD_GEN1(v) ((v) << 31)9696+#define V_CMD_GEN2(v) (v)9797+#define F_CMD_DATAVALID (1 << 1)9898+#define F_CMD_SOP (1 << 2)9999+#define V_CMD_EOP(v) ((v) << 3)100100+101101+/*102102+ * Command queue, receive buffer list, and response queue descriptors.103103+ */104104+#if defined(__BIG_ENDIAN_BITFIELD)105105+struct cmdQ_e {106106+ u32 addr_lo;107107+ u32 len_gen;108108+ u32 flags;109109+ u32 addr_hi;110110+};111111+112112+struct freelQ_e {113113+ u32 addr_lo;114114+ u32 len_gen;115115+ u32 gen2;116116+ u32 addr_hi;117117+};118118+119119+struct respQ_e {120120+ u32 Qsleeping : 4;121121+ u32 Cmdq1CreditReturn : 5;122122+ u32 Cmdq1DmaComplete : 5;123123+ u32 Cmdq0CreditReturn : 5;124124+ u32 Cmdq0DmaComplete : 5;125125+ u32 FreelistQid : 2;126126+ u32 CreditValid : 1;127127+ u32 DataValid : 1;128128+ u32 Offload : 1;129129+ u32 Eop : 1;130130+ u32 Sop : 1;131131+ u32 GenerationBit : 1;132132+ u32 BufferLength;133133+};134134+#elif defined(__LITTLE_ENDIAN_BITFIELD)135135+struct cmdQ_e {136136+ u32 len_gen;137137+ u32 addr_lo;138138+ u32 addr_hi;139139+ u32 flags;140140+};141141+142142+struct freelQ_e {143143+ u32 len_gen;144144+ u32 addr_lo;145145+ u32 addr_hi;146146+ u32 gen2;147147+};148148+149149+struct respQ_e {150150+ u32 BufferLength;151151+ u32 GenerationBit : 1;152152+ u32 Sop : 1;153153+ u32 Eop : 1;154154+ u32 Offload : 1;155155+ u32 DataValid : 1;156156+ u32 CreditValid : 1;157157+ u32 FreelistQid : 2;158158+ u32 Cmdq0DmaComplete : 5;159159+ u32 Cmdq0CreditReturn : 5;160160+ u32 Cmdq1DmaComplete : 5;161161+ u32 Cmdq1CreditReturn : 5;162162+ u32 Qsleeping : 4;163163+} ;164164+#endif165165+166166+/*167167+ * SW Context Command and Freelist Queue Descriptors168168+ */169169+struct cmdQ_ce {170170+ struct sk_buff *skb;171171+ DECLARE_PCI_UNMAP_ADDR(dma_addr);172172+ DECLARE_PCI_UNMAP_LEN(dma_len);173173+};174174+175175+struct freelQ_ce {176176+ struct sk_buff *skb;177177+ DECLARE_PCI_UNMAP_ADDR(dma_addr);178178+ DECLARE_PCI_UNMAP_LEN(dma_len);179179+};180180+181181+/*182182+ * SW command, freelist and response rings183183+ */184184+struct cmdQ {185185+ unsigned long status; /* HW DMA fetch status */186186+ unsigned int in_use; /* # of in-use command descriptors */187187+ unsigned int size; /* # of descriptors */188188+ unsigned int processed; /* total # of descs HW has processed */189189+ unsigned int cleaned; /* total # of descs SW has reclaimed */190190+ unsigned int stop_thres; /* SW TX queue suspend threshold */191191+ u16 pidx; /* producer index (SW) */192192+ u16 cidx; /* consumer index (HW) */193193+ u8 genbit; /* current generation (=valid) bit */194194+ u8 sop; /* is next entry start of packet? 
 */
	struct cmdQ_e *entries;		/* HW command descriptor Q */
	struct cmdQ_ce *centries;	/* SW command context descriptor Q */
	spinlock_t lock;		/* Lock to protect cmdQ enqueuing */
	dma_addr_t dma_addr;		/* DMA addr HW command descriptor Q */
};

struct freelQ {
	unsigned int credits;		/* # of available RX buffers */
	unsigned int size;		/* free list capacity */
	u16 pidx;			/* producer index (SW) */
	u16 cidx;			/* consumer index (HW) */
	u16 rx_buffer_size;		/* Buffer size on this free list */
	u16 dma_offset;			/* DMA offset to align IP headers */
	u16 recycleq_idx;		/* skb recycle q to use */
	u8 genbit;			/* current generation (=valid) bit */
	struct freelQ_e *entries;	/* HW freelist descriptor Q */
	struct freelQ_ce *centries;	/* SW freelist context descriptor Q */
	dma_addr_t dma_addr;		/* DMA addr HW freelist descriptor Q */
};

struct respQ {
	unsigned int credits;		/* credits to be returned to SGE */
	unsigned int size;		/* # of response Q descriptors */
	u16 cidx;			/* consumer index (SW) */
	u8 genbit;			/* current generation (=valid) bit */
	struct respQ_e *entries;	/* HW response descriptor Q */
	dma_addr_t dma_addr;		/* DMA addr HW response descriptor Q */
};

/* Bit flags for cmdQ.status */
enum {
	CMDQ_STAT_RUNNING = 1,		/* fetch engine is running */
	CMDQ_STAT_LAST_PKT_DB = 2	/* last packet rung the doorbell */
};

/*
 * Main SGE data structure
 *
 * Interrupts are handled by a single CPU and it is likely that on a MP system
 * the application is migrated to another CPU. In that scenario, we try to
 * separate the RX (in irq context) and TX state in order to decrease memory
 * contention.
 */
struct sge {
	struct adapter *adapter;	/* adapter backpointer */
	struct net_device *netdev;	/* netdevice backpointer */
	struct freelQ freelQ[SGE_FREELQ_N]; /* buffer free lists */
	struct respQ respQ;		/* response Q */
	unsigned long stopped_tx_queues; /* bitmap of suspended Tx queues */
	unsigned int rx_pkt_pad;	/* RX padding for L2 packets */
	unsigned int jumbo_fl;		/* jumbo freelist Q index */
	unsigned int intrtimer_nres;	/* no-resource interrupt timer */
	unsigned int fixed_intrtimer;	/* non-adaptive interrupt timer */
	struct timer_list tx_reclaim_timer; /* reclaims TX buffers */
	struct timer_list espibug_timer;
	unsigned int espibug_timeout;
	struct sk_buff *espibug_skb;
	u32 sge_control;		/* shadow value of sge control reg */
	struct sge_intr_counts stats;
	struct sge_port_stats port_stats[MAX_NPORTS];
	struct cmdQ cmdQ[SGE_CMDQ_N] ____cacheline_aligned_in_smp;
};

/*
 * PIO to indicate that memory mapped Q contains valid descriptor(s).
 */
static inline void doorbell_pio(struct adapter *adapter, u32 val)
{
	wmb();
	writel(val, adapter->regs + A_SG_DOORBELL);
}

/*
 * Frees all RX buffers on the freelist Q.
The caller must make sure that269269+ * the SGE is turned off before calling this function.270270+ */271271+static void free_freelQ_buffers(struct pci_dev *pdev, struct freelQ *q)272272+{273273+ unsigned int cidx = q->cidx;274274+275275+ while (q->credits--) {276276+ struct freelQ_ce *ce = &q->centries[cidx];277277+278278+ pci_unmap_single(pdev, pci_unmap_addr(ce, dma_addr),279279+ pci_unmap_len(ce, dma_len),280280+ PCI_DMA_FROMDEVICE);281281+ dev_kfree_skb(ce->skb);282282+ ce->skb = NULL;283283+ if (++cidx == q->size)284284+ cidx = 0;285285+ }286286+}287287+288288+/*289289+ * Free RX free list and response queue resources.290290+ */291291+static void free_rx_resources(struct sge *sge)292292+{293293+ struct pci_dev *pdev = sge->adapter->pdev;294294+ unsigned int size, i;295295+296296+ if (sge->respQ.entries) {297297+ size = sizeof(struct respQ_e) * sge->respQ.size;298298+ pci_free_consistent(pdev, size, sge->respQ.entries,299299+ sge->respQ.dma_addr);300300+ }301301+302302+ for (i = 0; i < SGE_FREELQ_N; i++) {303303+ struct freelQ *q = &sge->freelQ[i];304304+305305+ if (q->centries) {306306+ free_freelQ_buffers(pdev, q);307307+ kfree(q->centries);308308+ }309309+ if (q->entries) {310310+ size = sizeof(struct freelQ_e) * q->size;311311+ pci_free_consistent(pdev, size, q->entries,312312+ q->dma_addr);313313+ }314314+ }315315+}316316+317317+/*318318+ * Allocates basic RX resources, consisting of memory mapped freelist Qs and a319319+ * response queue.320320+ */321321+static int alloc_rx_resources(struct sge *sge, struct sge_params *p)322322+{323323+ struct pci_dev *pdev = sge->adapter->pdev;324324+ unsigned int size, i;325325+326326+ for (i = 0; i < SGE_FREELQ_N; i++) {327327+ struct freelQ *q = &sge->freelQ[i];328328+329329+ q->genbit = 1;330330+ q->size = p->freelQ_size[i];331331+ q->dma_offset = sge->rx_pkt_pad ? 0 : NET_IP_ALIGN;332332+ size = sizeof(struct freelQ_e) * q->size;333333+ q->entries = (struct freelQ_e *)334334+ pci_alloc_consistent(pdev, size, &q->dma_addr);335335+ if (!q->entries)336336+ goto err_no_mem;337337+ memset(q->entries, 0, size);338338+ size = sizeof(struct freelQ_ce) * q->size;339339+ q->centries = kmalloc(size, GFP_KERNEL);340340+ if (!q->centries)341341+ goto err_no_mem;342342+ memset(q->centries, 0, size);343343+ }344344+345345+ /*346346+ * Calculate the buffer sizes for the two free lists. 
FL0 accommodates347347+ * regular sized Ethernet frames, FL1 is sized not to exceed 16K,348348+ * including all the sk_buff overhead.349349+ *350350+ * Note: For T2 FL0 and FL1 are reversed.351351+ */352352+ sge->freelQ[!sge->jumbo_fl].rx_buffer_size = SGE_RX_SM_BUF_SIZE +353353+ sizeof(struct cpl_rx_data) +354354+ sge->freelQ[!sge->jumbo_fl].dma_offset;355355+ sge->freelQ[sge->jumbo_fl].rx_buffer_size = (16 * 1024) -356356+ SKB_DATA_ALIGN(sizeof(struct skb_shared_info));357357+358358+ /*359359+ * Setup which skb recycle Q should be used when recycling buffers from360360+ * each free list.361361+ */362362+ sge->freelQ[!sge->jumbo_fl].recycleq_idx = 0;363363+ sge->freelQ[sge->jumbo_fl].recycleq_idx = 1;364364+365365+ sge->respQ.genbit = 1;366366+ sge->respQ.size = SGE_RESPQ_E_N;367367+ sge->respQ.credits = 0;368368+ size = sizeof(struct respQ_e) * sge->respQ.size;369369+ sge->respQ.entries = (struct respQ_e *)370370+ pci_alloc_consistent(pdev, size, &sge->respQ.dma_addr);371371+ if (!sge->respQ.entries)372372+ goto err_no_mem;373373+ memset(sge->respQ.entries, 0, size);374374+ return 0;375375+376376+err_no_mem:377377+ free_rx_resources(sge);378378+ return -ENOMEM;379379+}380380+381381+/*382382+ * Reclaims n TX descriptors and frees the buffers associated with them.383383+ */384384+static void free_cmdQ_buffers(struct sge *sge, struct cmdQ *q, unsigned int n)385385+{386386+ struct cmdQ_ce *ce;387387+ struct pci_dev *pdev = sge->adapter->pdev;388388+ unsigned int cidx = q->cidx;389389+390390+ q->in_use -= n;391391+ ce = &q->centries[cidx];392392+ while (n--) {393393+ if (q->sop)394394+ pci_unmap_single(pdev, pci_unmap_addr(ce, dma_addr),395395+ pci_unmap_len(ce, dma_len),396396+ PCI_DMA_TODEVICE);397397+ else398398+ pci_unmap_page(pdev, pci_unmap_addr(ce, dma_addr),399399+ pci_unmap_len(ce, dma_len),400400+ PCI_DMA_TODEVICE);401401+ q->sop = 0;402402+ if (ce->skb) {403403+ dev_kfree_skb(ce->skb);404404+ q->sop = 1;405405+ }406406+ ce++;407407+ if (++cidx == q->size) {408408+ cidx = 0;409409+ ce = q->centries;410410+ }411411+ }412412+ q->cidx = cidx;413413+}414414+415415+/*416416+ * Free TX resources.417417+ *418418+ * Assumes that SGE is stopped and all interrupts are disabled.419419+ */420420+static void free_tx_resources(struct sge *sge)421421+{422422+ struct pci_dev *pdev = sge->adapter->pdev;423423+ unsigned int size, i;424424+425425+ for (i = 0; i < SGE_CMDQ_N; i++) {426426+ struct cmdQ *q = &sge->cmdQ[i];427427+428428+ if (q->centries) {429429+ if (q->in_use)430430+ free_cmdQ_buffers(sge, q, q->in_use);431431+ kfree(q->centries);432432+ }433433+ if (q->entries) {434434+ size = sizeof(struct cmdQ_e) * q->size;435435+ pci_free_consistent(pdev, size, q->entries,436436+ q->dma_addr);437437+ }438438+ }439439+}440440+441441+/*442442+ * Allocates basic TX resources, consisting of memory mapped command Qs.443443+ */444444+static int alloc_tx_resources(struct sge *sge, struct sge_params *p)445445+{446446+ struct pci_dev *pdev = sge->adapter->pdev;447447+ unsigned int size, i;448448+449449+ for (i = 0; i < SGE_CMDQ_N; i++) {450450+ struct cmdQ *q = &sge->cmdQ[i];451451+452452+ q->genbit = 1;453453+ q->sop = 1;454454+ q->size = p->cmdQ_size[i];455455+ q->in_use = 0;456456+ q->status = 0;457457+ q->processed = q->cleaned = 0;458458+ q->stop_thres = 0;459459+ spin_lock_init(&q->lock);460460+ size = sizeof(struct cmdQ_e) * q->size;461461+ q->entries = (struct cmdQ_e *)462462+ pci_alloc_consistent(pdev, size, &q->dma_addr);463463+ if (!q->entries)464464+ goto err_no_mem;465465+ memset(q->entries, 0, 
size);466466+ size = sizeof(struct cmdQ_ce) * q->size;467467+ q->centries = kmalloc(size, GFP_KERNEL);468468+ if (!q->centries)469469+ goto err_no_mem;470470+ memset(q->centries, 0, size);471471+ }472472+473473+ /*474474+ * CommandQ 0 handles Ethernet and TOE packets, while queue 1 is TOE475475+ * only. For queue 0 set the stop threshold so we can handle one more476476+ * packet from each port, plus reserve an additional 24 entries for477477+ * Ethernet packets only. Queue 1 never suspends nor do we reserve478478+ * space for Ethernet packets.479479+ */480480+ sge->cmdQ[0].stop_thres = sge->adapter->params.nports *481481+ (MAX_SKB_FRAGS + 1);482482+ return 0;483483+484484+err_no_mem:485485+ free_tx_resources(sge);486486+ return -ENOMEM;487487+}488488+489489+static inline void setup_ring_params(struct adapter *adapter, u64 addr,490490+ u32 size, int base_reg_lo,491491+ int base_reg_hi, int size_reg)492492+{493493+ writel((u32)addr, adapter->regs + base_reg_lo);494494+ writel(addr >> 32, adapter->regs + base_reg_hi);495495+ writel(size, adapter->regs + size_reg);496496+}497497+498498+/*499499+ * Enable/disable VLAN acceleration.500500+ */501501+void t1_set_vlan_accel(struct adapter *adapter, int on_off)502502+{503503+ struct sge *sge = adapter->sge;504504+505505+ sge->sge_control &= ~F_VLAN_XTRACT;506506+ if (on_off)507507+ sge->sge_control |= F_VLAN_XTRACT;508508+ if (adapter->open_device_map) {509509+ writel(sge->sge_control, adapter->regs + A_SG_CONTROL);510510+ readl(adapter->regs + A_SG_CONTROL); /* flush */511511+ }512512+}513513+514514+/*515515+ * Programs the various SGE registers. However, the engine is not yet enabled,516516+ * but sge->sge_control is setup and ready to go.517517+ */518518+static void configure_sge(struct sge *sge, struct sge_params *p)519519+{520520+ struct adapter *ap = sge->adapter;521521+522522+ writel(0, ap->regs + A_SG_CONTROL);523523+ setup_ring_params(ap, sge->cmdQ[0].dma_addr, sge->cmdQ[0].size,524524+ A_SG_CMD0BASELWR, A_SG_CMD0BASEUPR, A_SG_CMD0SIZE);525525+ setup_ring_params(ap, sge->cmdQ[1].dma_addr, sge->cmdQ[1].size,526526+ A_SG_CMD1BASELWR, A_SG_CMD1BASEUPR, A_SG_CMD1SIZE);527527+ setup_ring_params(ap, sge->freelQ[0].dma_addr,528528+ sge->freelQ[0].size, A_SG_FL0BASELWR,529529+ A_SG_FL0BASEUPR, A_SG_FL0SIZE);530530+ setup_ring_params(ap, sge->freelQ[1].dma_addr,531531+ sge->freelQ[1].size, A_SG_FL1BASELWR,532532+ A_SG_FL1BASEUPR, A_SG_FL1SIZE);533533+534534+ /* The threshold comparison uses <. 
*/535535+ writel(SGE_RX_SM_BUF_SIZE + 1, ap->regs + A_SG_FLTHRESHOLD);536536+537537+ setup_ring_params(ap, sge->respQ.dma_addr, sge->respQ.size,538538+ A_SG_RSPBASELWR, A_SG_RSPBASEUPR, A_SG_RSPSIZE);539539+ writel((u32)sge->respQ.size - 1, ap->regs + A_SG_RSPQUEUECREDIT);540540+541541+ sge->sge_control = F_CMDQ0_ENABLE | F_CMDQ1_ENABLE | F_FL0_ENABLE |542542+ F_FL1_ENABLE | F_CPL_ENABLE | F_RESPONSE_QUEUE_ENABLE |543543+ V_CMDQ_PRIORITY(2) | F_DISABLE_CMDQ1_GTS | F_ISCSI_COALESCE |544544+ F_DISABLE_FL0_GTS | F_DISABLE_FL1_GTS |545545+ V_RX_PKT_OFFSET(sge->rx_pkt_pad);546546+547547+#if defined(__BIG_ENDIAN_BITFIELD)548548+ sge->sge_control |= F_ENABLE_BIG_ENDIAN;549549+#endif550550+551551+ /* Initialize no-resource timer */552552+ sge->intrtimer_nres = SGE_INTRTIMER_NRES * core_ticks_per_usec(ap);553553+554554+ t1_sge_set_coalesce_params(sge, p);555555+}556556+557557+/*558558+ * Return the payload capacity of the jumbo free-list buffers.559559+ */560560+static inline unsigned int jumbo_payload_capacity(const struct sge *sge)561561+{562562+ return sge->freelQ[sge->jumbo_fl].rx_buffer_size -563563+ sge->freelQ[sge->jumbo_fl].dma_offset -564564+ sizeof(struct cpl_rx_data);565565+}566566+567567+/*568568+ * Frees all SGE related resources and the sge structure itself569569+ */570570+void t1_sge_destroy(struct sge *sge)571571+{572572+ if (sge->espibug_skb)573573+ kfree_skb(sge->espibug_skb);574574+575575+ free_tx_resources(sge);576576+ free_rx_resources(sge);577577+ kfree(sge);578578+}579579+580580+/*581581+ * Allocates new RX buffers on the freelist Q (and tracks them on the freelist582582+ * context Q) until the Q is full or alloc_skb fails.583583+ *584584+ * It is possible that the generation bits already match, indicating that the585585+ * buffer is already valid and nothing needs to be done. This happens when we586586+ * copied a received buffer into a new sk_buff during the interrupt processing.587587+ *588588+ * If the SGE doesn't automatically align packets properly (!sge->rx_pkt_pad),589589+ * we specify a RX_OFFSET in order to make sure that the IP header is 4B590590+ * aligned.591591+ */592592+static void refill_free_list(struct sge *sge, struct freelQ *q)593593+{594594+ struct pci_dev *pdev = sge->adapter->pdev;595595+ struct freelQ_ce *ce = &q->centries[q->pidx];596596+ struct freelQ_e *e = &q->entries[q->pidx];597597+ unsigned int dma_len = q->rx_buffer_size - q->dma_offset;598598+599599+600600+ while (q->credits < q->size) {601601+ struct sk_buff *skb;602602+ dma_addr_t mapping;603603+604604+ skb = alloc_skb(q->rx_buffer_size, GFP_ATOMIC);605605+ if (!skb)606606+ break;607607+608608+ skb_reserve(skb, q->dma_offset);609609+ mapping = pci_map_single(pdev, skb->data, dma_len,610610+ PCI_DMA_FROMDEVICE);611611+ ce->skb = skb;612612+ pci_unmap_addr_set(ce, dma_addr, mapping);613613+ pci_unmap_len_set(ce, dma_len, dma_len);614614+ e->addr_lo = (u32)mapping;615615+ e->addr_hi = (u64)mapping >> 32;616616+ e->len_gen = V_CMD_LEN(dma_len) | V_CMD_GEN1(q->genbit);617617+ wmb();618618+ e->gen2 = V_CMD_GEN2(q->genbit);619619+620620+ e++;621621+ ce++;622622+ if (++q->pidx == q->size) {623623+ q->pidx = 0;624624+ q->genbit ^= 1;625625+ ce = q->centries;626626+ e = q->entries;627627+ }628628+ q->credits++;629629+ }630630+631631+}632632+633633+/*634634+ * Calls refill_free_list for both free lists. 
If we cannot fill at least 1/4635635+ * of both rings, we go into 'few interrupt mode' in order to give the system636636+ * time to free up resources.637637+ */638638+static void freelQs_empty(struct sge *sge)639639+{640640+ struct adapter *adapter = sge->adapter;641641+ u32 irq_reg = readl(adapter->regs + A_SG_INT_ENABLE);642642+ u32 irqholdoff_reg;643643+644644+ refill_free_list(sge, &sge->freelQ[0]);645645+ refill_free_list(sge, &sge->freelQ[1]);646646+647647+ if (sge->freelQ[0].credits > (sge->freelQ[0].size >> 2) &&648648+ sge->freelQ[1].credits > (sge->freelQ[1].size >> 2)) {649649+ irq_reg |= F_FL_EXHAUSTED;650650+ irqholdoff_reg = sge->fixed_intrtimer;651651+ } else {652652+ /* Clear the F_FL_EXHAUSTED interrupts for now */653653+ irq_reg &= ~F_FL_EXHAUSTED;654654+ irqholdoff_reg = sge->intrtimer_nres;655655+ }656656+ writel(irqholdoff_reg, adapter->regs + A_SG_INTRTIMER);657657+ writel(irq_reg, adapter->regs + A_SG_INT_ENABLE);658658+659659+ /* We reenable the Qs to force a freelist GTS interrupt later */660660+ doorbell_pio(adapter, F_FL0_ENABLE | F_FL1_ENABLE);661661+}662662+663663+#define SGE_PL_INTR_MASK (F_PL_INTR_SGE_ERR | F_PL_INTR_SGE_DATA)664664+#define SGE_INT_FATAL (F_RESPQ_OVERFLOW | F_PACKET_TOO_BIG | F_PACKET_MISMATCH)665665+#define SGE_INT_ENABLE (F_RESPQ_EXHAUSTED | F_RESPQ_OVERFLOW | \666666+ F_FL_EXHAUSTED | F_PACKET_TOO_BIG | F_PACKET_MISMATCH)667667+668668+/*669669+ * Disable SGE Interrupts670670+ */671671+void t1_sge_intr_disable(struct sge *sge)672672+{673673+ u32 val = readl(sge->adapter->regs + A_PL_ENABLE);674674+675675+ writel(val & ~SGE_PL_INTR_MASK, sge->adapter->regs + A_PL_ENABLE);676676+ writel(0, sge->adapter->regs + A_SG_INT_ENABLE);677677+}678678+679679+/*680680+ * Enable SGE interrupts.681681+ */682682+void t1_sge_intr_enable(struct sge *sge)683683+{684684+ u32 en = SGE_INT_ENABLE;685685+ u32 val = readl(sge->adapter->regs + A_PL_ENABLE);686686+687687+ if (sge->adapter->flags & TSO_CAPABLE)688688+ en &= ~F_PACKET_TOO_BIG;689689+ writel(en, sge->adapter->regs + A_SG_INT_ENABLE);690690+ writel(val | SGE_PL_INTR_MASK, sge->adapter->regs + A_PL_ENABLE);691691+}692692+693693+/*694694+ * Clear SGE interrupts.695695+ */696696+void t1_sge_intr_clear(struct sge *sge)697697+{698698+ writel(SGE_PL_INTR_MASK, sge->adapter->regs + A_PL_CAUSE);699699+ writel(0xffffffff, sge->adapter->regs + A_SG_INT_CAUSE);700700+}701701+702702+/*703703+ * SGE 'Error' interrupt handler704704+ */705705+int t1_sge_intr_error_handler(struct sge *sge)706706+{707707+ struct adapter *adapter = sge->adapter;708708+ u32 cause = readl(adapter->regs + A_SG_INT_CAUSE);709709+710710+ if (adapter->flags & TSO_CAPABLE)711711+ cause &= ~F_PACKET_TOO_BIG;712712+ if (cause & F_RESPQ_EXHAUSTED)713713+ sge->stats.respQ_empty++;714714+ if (cause & F_RESPQ_OVERFLOW) {715715+ sge->stats.respQ_overflow++;716716+ CH_ALERT("%s: SGE response queue overflow\n",717717+ adapter->name);718718+ }719719+ if (cause & F_FL_EXHAUSTED) {720720+ sge->stats.freelistQ_empty++;721721+ freelQs_empty(sge);722722+ }723723+ if (cause & F_PACKET_TOO_BIG) {724724+ sge->stats.pkt_too_big++;725725+ CH_ALERT("%s: SGE max packet size exceeded\n",726726+ adapter->name);727727+ }728728+ if (cause & F_PACKET_MISMATCH) {729729+ sge->stats.pkt_mismatch++;730730+ CH_ALERT("%s: SGE packet mismatch\n", adapter->name);731731+ }732732+ if (cause & SGE_INT_FATAL)733733+ t1_fatal_err(adapter);734734+735735+ writel(cause, adapter->regs + A_SG_INT_CAUSE);736736+ return 0;737737+}738738+739739+const struct sge_intr_counts 
*t1_sge_get_intr_counts(struct sge *sge)740740+{741741+ return &sge->stats;742742+}743743+744744+const struct sge_port_stats *t1_sge_get_port_stats(struct sge *sge, int port)745745+{746746+ return &sge->port_stats[port];747747+}748748+749749+/**750750+ * recycle_fl_buf - recycle a free list buffer751751+ * @fl: the free list752752+ * @idx: index of buffer to recycle753753+ *754754+ * Recycles the specified buffer on the given free list by adding it at755755+ * the next available slot on the list.756756+ */757757+static void recycle_fl_buf(struct freelQ *fl, int idx)758758+{759759+ struct freelQ_e *from = &fl->entries[idx];760760+ struct freelQ_e *to = &fl->entries[fl->pidx];761761+762762+ fl->centries[fl->pidx] = fl->centries[idx];763763+ to->addr_lo = from->addr_lo;764764+ to->addr_hi = from->addr_hi;765765+ to->len_gen = G_CMD_LEN(from->len_gen) | V_CMD_GEN1(fl->genbit);766766+ wmb();767767+ to->gen2 = V_CMD_GEN2(fl->genbit);768768+ fl->credits++;769769+770770+ if (++fl->pidx == fl->size) {771771+ fl->pidx = 0;772772+ fl->genbit ^= 1;773773+ }774774+}775775+776776+/**777777+ * get_packet - return the next ingress packet buffer778778+ * @pdev: the PCI device that received the packet779779+ * @fl: the SGE free list holding the packet780780+ * @len: the actual packet length, excluding any SGE padding781781+ * @dma_pad: padding at beginning of buffer left by SGE DMA782782+ * @skb_pad: padding to be used if the packet is copied783783+ * @copy_thres: length threshold under which a packet should be copied784784+ * @drop_thres: # of remaining buffers before we start dropping packets785785+ *786786+ * Get the next packet from a free list and complete setup of the787787+ * sk_buff. If the packet is small we make a copy and recycle the788788+ * original buffer, otherwise we use the original buffer itself. 
If a789789+ * positive drop threshold is supplied packets are dropped and their790790+ * buffers recycled if (a) the number of remaining buffers is under the791791+ * threshold and the packet is too big to copy, or (b) the packet should792792+ * be copied but there is no memory for the copy.793793+ */794794+static inline struct sk_buff *get_packet(struct pci_dev *pdev,795795+ struct freelQ *fl, unsigned int len,796796+ int dma_pad, int skb_pad,797797+ unsigned int copy_thres,798798+ unsigned int drop_thres)799799+{800800+ struct sk_buff *skb;801801+ struct freelQ_ce *ce = &fl->centries[fl->cidx];802802+803803+ if (len < copy_thres) {804804+ skb = alloc_skb(len + skb_pad, GFP_ATOMIC);805805+ if (likely(skb != NULL)) {806806+ skb_reserve(skb, skb_pad);807807+ skb_put(skb, len);808808+ pci_dma_sync_single_for_cpu(pdev,809809+ pci_unmap_addr(ce, dma_addr),810810+ pci_unmap_len(ce, dma_len),811811+ PCI_DMA_FROMDEVICE);812812+ memcpy(skb->data, ce->skb->data + dma_pad, len);813813+ pci_dma_sync_single_for_device(pdev,814814+ pci_unmap_addr(ce, dma_addr),815815+ pci_unmap_len(ce, dma_len),816816+ PCI_DMA_FROMDEVICE);817817+ } else if (!drop_thres)818818+ goto use_orig_buf;819819+820820+ recycle_fl_buf(fl, fl->cidx);821821+ return skb;822822+ }823823+824824+ if (fl->credits < drop_thres) {825825+ recycle_fl_buf(fl, fl->cidx);826826+ return NULL;827827+ }828828+829829+use_orig_buf:830830+ pci_unmap_single(pdev, pci_unmap_addr(ce, dma_addr),831831+ pci_unmap_len(ce, dma_len), PCI_DMA_FROMDEVICE);832832+ skb = ce->skb;833833+ skb_reserve(skb, dma_pad);834834+ skb_put(skb, len);835835+ return skb;836836+}837837+838838+/**839839+ * unexpected_offload - handle an unexpected offload packet840840+ * @adapter: the adapter841841+ * @fl: the free list that received the packet842842+ *843843+ * Called when we receive an unexpected offload packet (e.g., the TOE844844+ * function is disabled or the card is a NIC). 
Prints a message and845845+ * recycles the buffer.846846+ */847847+static void unexpected_offload(struct adapter *adapter, struct freelQ *fl)848848+{849849+ struct freelQ_ce *ce = &fl->centries[fl->cidx];850850+ struct sk_buff *skb = ce->skb;851851+852852+ pci_dma_sync_single_for_cpu(adapter->pdev, pci_unmap_addr(ce, dma_addr),853853+ pci_unmap_len(ce, dma_len), PCI_DMA_FROMDEVICE);854854+ CH_ERR("%s: unexpected offload packet, cmd %u\n",855855+ adapter->name, *skb->data);856856+ recycle_fl_buf(fl, fl->cidx);857857+}858858+859859+/*860860+ * Write the command descriptors to transmit the given skb starting at861861+ * descriptor pidx with the given generation.862862+ */863863+static inline void write_tx_descs(struct adapter *adapter, struct sk_buff *skb,864864+ unsigned int pidx, unsigned int gen,865865+ struct cmdQ *q)866866+{867867+ dma_addr_t mapping;868868+ struct cmdQ_e *e, *e1;869869+ struct cmdQ_ce *ce;870870+ unsigned int i, flags, nfrags = skb_shinfo(skb)->nr_frags;871871+872872+ mapping = pci_map_single(adapter->pdev, skb->data,873873+ skb->len - skb->data_len, PCI_DMA_TODEVICE);874874+ ce = &q->centries[pidx];875875+ ce->skb = NULL;876876+ pci_unmap_addr_set(ce, dma_addr, mapping);877877+ pci_unmap_len_set(ce, dma_len, skb->len - skb->data_len);878878+879879+ flags = F_CMD_DATAVALID | F_CMD_SOP | V_CMD_EOP(nfrags == 0) |880880+ V_CMD_GEN2(gen);881881+ e = &q->entries[pidx];882882+ e->addr_lo = (u32)mapping;883883+ e->addr_hi = (u64)mapping >> 32;884884+ e->len_gen = V_CMD_LEN(skb->len - skb->data_len) | V_CMD_GEN1(gen);885885+ for (e1 = e, i = 0; nfrags--; i++) {886886+ skb_frag_t *frag = &skb_shinfo(skb)->frags[i];887887+888888+ ce++;889889+ e1++;890890+ if (++pidx == q->size) {891891+ pidx = 0;892892+ gen ^= 1;893893+ ce = q->centries;894894+ e1 = q->entries;895895+ }896896+897897+ mapping = pci_map_page(adapter->pdev, frag->page,898898+ frag->page_offset, frag->size,899899+ PCI_DMA_TODEVICE);900900+ ce->skb = NULL;901901+ pci_unmap_addr_set(ce, dma_addr, mapping);902902+ pci_unmap_len_set(ce, dma_len, frag->size);903903+904904+ e1->addr_lo = (u32)mapping;905905+ e1->addr_hi = (u64)mapping >> 32;906906+ e1->len_gen = V_CMD_LEN(frag->size) | V_CMD_GEN1(gen);907907+ e1->flags = F_CMD_DATAVALID | V_CMD_EOP(nfrags == 0) |908908+ V_CMD_GEN2(gen);909909+ }910910+911911+ ce->skb = skb;912912+ wmb();913913+ e->flags = flags;914914+}915915+916916+/*917917+ * Clean up completed Tx buffers.918918+ */919919+static inline void reclaim_completed_tx(struct sge *sge, struct cmdQ *q)920920+{921921+ unsigned int reclaim = q->processed - q->cleaned;922922+923923+ if (reclaim) {924924+ free_cmdQ_buffers(sge, q, reclaim);925925+ q->cleaned += reclaim;926926+ }927927+}928928+929929+#ifndef SET_ETHTOOL_OPS930930+# define __netif_rx_complete(dev) netif_rx_complete(dev)931931+#endif932932+933933+/*934934+ * We cannot use the standard netif_rx_schedule_prep() because we have multiple935935+ * ports plus the TOE all multiplexing onto a single response queue, therefore936936+ * accepting new responses cannot depend on the state of any particular port.937937+ * So define our own equivalent that omits the netif_running() test.938938+ */939939+static inline int napi_schedule_prep(struct net_device *dev)940940+{941941+ return !test_and_set_bit(__LINK_STATE_RX_SCHED, &dev->state);942942+}943943+944944+945945+/**946946+ * sge_rx - process an ingress ethernet packet947947+ * @sge: the sge structure948948+ * @fl: the free list that contains the packet buffer949949+ * @len: the packet length950950+ *951951+ * 
Process an ingress ethernet pakcet and deliver it to the stack.952952+ */953953+static int sge_rx(struct sge *sge, struct freelQ *fl, unsigned int len)954954+{955955+ struct sk_buff *skb;956956+ struct cpl_rx_pkt *p;957957+ struct adapter *adapter = sge->adapter;958958+959959+ sge->stats.ethernet_pkts++;960960+ skb = get_packet(adapter->pdev, fl, len - sge->rx_pkt_pad,961961+ sge->rx_pkt_pad, 2, SGE_RX_COPY_THRES,962962+ SGE_RX_DROP_THRES);963963+ if (!skb) {964964+ sge->port_stats[0].rx_drops++; /* charge only port 0 for now */965965+ return 0;966966+ }967967+968968+ p = (struct cpl_rx_pkt *)skb->data;969969+ skb_pull(skb, sizeof(*p));970970+ skb->dev = adapter->port[p->iff].dev;971971+ skb->dev->last_rx = jiffies;972972+ skb->protocol = eth_type_trans(skb, skb->dev);973973+ if ((adapter->flags & RX_CSUM_ENABLED) && p->csum == 0xffff &&974974+ skb->protocol == htons(ETH_P_IP) &&975975+ (skb->data[9] == IPPROTO_TCP || skb->data[9] == IPPROTO_UDP)) {976976+ sge->port_stats[p->iff].rx_cso_good++;977977+ skb->ip_summed = CHECKSUM_UNNECESSARY;978978+ } else979979+ skb->ip_summed = CHECKSUM_NONE;980980+981981+ if (unlikely(adapter->vlan_grp && p->vlan_valid)) {982982+ sge->port_stats[p->iff].vlan_xtract++;983983+ if (adapter->params.sge.polling)984984+ vlan_hwaccel_receive_skb(skb, adapter->vlan_grp,985985+ ntohs(p->vlan));986986+ else987987+ vlan_hwaccel_rx(skb, adapter->vlan_grp,988988+ ntohs(p->vlan));989989+ } else if (adapter->params.sge.polling)990990+ netif_receive_skb(skb);991991+ else992992+ netif_rx(skb);993993+ return 0;994994+}995995+996996+/*997997+ * Returns true if a command queue has enough available descriptors that998998+ * we can resume Tx operation after temporarily disabling its packet queue.999999+ */10001000+static inline int enough_free_Tx_descs(const struct cmdQ *q)10011001+{10021002+ unsigned int r = q->processed - q->cleaned;10031003+10041004+ return q->in_use - r < (q->size >> 1);10051005+}10061006+10071007+/*10081008+ * Called when sufficient space has become available in the SGE command queues10091009+ * after the Tx packet schedulers have been suspended to restart the Tx path.10101010+ */10111011+static void restart_tx_queues(struct sge *sge)10121012+{10131013+ struct adapter *adap = sge->adapter;10141014+10151015+ if (enough_free_Tx_descs(&sge->cmdQ[0])) {10161016+ int i;10171017+10181018+ for_each_port(adap, i) {10191019+ struct net_device *nd = adap->port[i].dev;10201020+10211021+ if (test_and_clear_bit(nd->if_port,10221022+ &sge->stopped_tx_queues) &&10231023+ netif_running(nd)) {10241024+ sge->stats.cmdQ_restarted[3]++;10251025+ netif_wake_queue(nd);10261026+ }10271027+ }10281028+ }10291029+}10301030+10311031+/*10321032+ * update_tx_info is called from the interrupt handler/NAPI to return cmdQ0 10331033+ * information.10341034+ */10351035+static unsigned int update_tx_info(struct adapter *adapter, 10361036+ unsigned int flags, 10371037+ unsigned int pr0)10381038+{10391039+ struct sge *sge = adapter->sge;10401040+ struct cmdQ *cmdq = &sge->cmdQ[0];10411041+10421042+ cmdq->processed += pr0;10431043+10441044+ if (flags & F_CMDQ0_ENABLE) {10451045+ clear_bit(CMDQ_STAT_RUNNING, &cmdq->status);10461046+10471047+ if (cmdq->cleaned + cmdq->in_use != cmdq->processed &&10481048+ !test_and_set_bit(CMDQ_STAT_LAST_PKT_DB, &cmdq->status)) {10491049+ set_bit(CMDQ_STAT_RUNNING, &cmdq->status);10501050+ writel(F_CMDQ0_ENABLE, adapter->regs + A_SG_DOORBELL);10511051+ }10521052+ flags &= ~F_CMDQ0_ENABLE;10531053+ }10541054+10551055+ if (unlikely(sge->stopped_tx_queues != 
0))10561056+ restart_tx_queues(sge);10571057+10581058+ return flags;10591059+}10601060+10611061+/*10621062+ * Process SGE responses, up to the supplied budget. Returns the number of10631063+ * responses processed. A negative budget is effectively unlimited.10641064+ */10651065+static int process_responses(struct adapter *adapter, int budget)10661066+{10671067+ struct sge *sge = adapter->sge;10681068+ struct respQ *q = &sge->respQ;10691069+ struct respQ_e *e = &q->entries[q->cidx];10701070+ int budget_left = budget;10711071+ unsigned int flags = 0;10721072+ unsigned int cmdq_processed[SGE_CMDQ_N] = {0, 0};10731073+10741074+10751075+ while (likely(budget_left && e->GenerationBit == q->genbit)) {10761076+ flags |= e->Qsleeping;10771077+10781078+ cmdq_processed[0] += e->Cmdq0CreditReturn;10791079+ cmdq_processed[1] += e->Cmdq1CreditReturn;10801080+10811081+ /* We batch updates to the TX side to avoid cacheline10821082+ * ping-pong of TX state information on MP where the sender10831083+ * might run on a different CPU than this function...10841084+ */10851085+ if (unlikely(flags & F_CMDQ0_ENABLE || cmdq_processed[0] > 64)) {10861086+ flags = update_tx_info(adapter, flags, cmdq_processed[0]);10871087+ cmdq_processed[0] = 0;10881088+ }10891089+ if (unlikely(cmdq_processed[1] > 16)) {10901090+ sge->cmdQ[1].processed += cmdq_processed[1];10911091+ cmdq_processed[1] = 0;10921092+ }10931093+ if (likely(e->DataValid)) {10941094+ struct freelQ *fl = &sge->freelQ[e->FreelistQid];10951095+10961096+ if (unlikely(!e->Sop || !e->Eop))10971097+ BUG();10981098+ if (unlikely(e->Offload))10991099+ unexpected_offload(adapter, fl);11001100+ else11011101+ sge_rx(sge, fl, e->BufferLength);11021102+11031103+ /*11041104+ * Note: this depends on each packet consuming a11051105+ * single free-list buffer; cf. the BUG above.11061106+ */11071107+ if (++fl->cidx == fl->size)11081108+ fl->cidx = 0;11091109+ if (unlikely(--fl->credits <11101110+ fl->size - SGE_FREEL_REFILL_THRESH))11111111+ refill_free_list(sge, fl);11121112+ } else11131113+ sge->stats.pure_rsps++;11141114+11151115+ e++;11161116+ if (unlikely(++q->cidx == q->size)) {11171117+ q->cidx = 0;11181118+ q->genbit ^= 1;11191119+ e = q->entries;11201120+ }11211121+ prefetch(e);11221122+11231123+ if (++q->credits > SGE_RESPQ_REPLENISH_THRES) {11241124+ writel(q->credits, adapter->regs + A_SG_RSPQUEUECREDIT);11251125+ q->credits = 0;11261126+ }11271127+ --budget_left;11281128+ }11291129+11301130+ flags = update_tx_info(adapter, flags, cmdq_processed[0]); 11311131+ sge->cmdQ[1].processed += cmdq_processed[1];11321132+11331133+ budget -= budget_left;11341134+ return budget;11351135+}11361136+11371137+/*11381138+ * A simpler version of process_responses() that handles only pure (i.e.,11391139+ * non data-carrying) responses. Such respones are too light-weight to justify11401140+ * calling a softirq when using NAPI, so we handle them specially in hard11411141+ * interrupt context. The function is called with a pointer to a response,11421142+ * which the caller must ensure is a valid pure response. 
Returns 1 if it11431143+ * encounters a valid data-carrying response, 0 otherwise.11441144+ */11451145+static int process_pure_responses(struct adapter *adapter, struct respQ_e *e)11461146+{11471147+ struct sge *sge = adapter->sge;11481148+ struct respQ *q = &sge->respQ;11491149+ unsigned int flags = 0;11501150+ unsigned int cmdq_processed[SGE_CMDQ_N] = {0, 0};11511151+11521152+ do {11531153+ flags |= e->Qsleeping;11541154+11551155+ cmdq_processed[0] += e->Cmdq0CreditReturn;11561156+ cmdq_processed[1] += e->Cmdq1CreditReturn;11571157+11581158+ e++;11591159+ if (unlikely(++q->cidx == q->size)) {11601160+ q->cidx = 0;11611161+ q->genbit ^= 1;11621162+ e = q->entries;11631163+ }11641164+ prefetch(e);11651165+11661166+ if (++q->credits > SGE_RESPQ_REPLENISH_THRES) {11671167+ writel(q->credits, adapter->regs + A_SG_RSPQUEUECREDIT);11681168+ q->credits = 0;11691169+ }11701170+ sge->stats.pure_rsps++;11711171+ } while (e->GenerationBit == q->genbit && !e->DataValid);11721172+11731173+ flags = update_tx_info(adapter, flags, cmdq_processed[0]); 11741174+ sge->cmdQ[1].processed += cmdq_processed[1];11751175+11761176+ return e->GenerationBit == q->genbit;11771177+}11781178+11791179+/*11801180+ * Handler for new data events when using NAPI. This does not need any locking11811181+ * or protection from interrupts as data interrupts are off at this point and11821182+ * other adapter interrupts do not interfere.11831183+ */11841184+static int t1_poll(struct net_device *dev, int *budget)11851185+{11861186+ struct adapter *adapter = dev->priv;11871187+ int effective_budget = min(*budget, dev->quota);11881188+11891189+ int work_done = process_responses(adapter, effective_budget);11901190+ *budget -= work_done;11911191+ dev->quota -= work_done;11921192+11931193+ if (work_done >= effective_budget)11941194+ return 1;11951195+11961196+ __netif_rx_complete(dev);11971197+11981198+ /*11991199+ * Because we don't atomically flush the following write it is12001200+ * possible that in very rare cases it can reach the device in a way12011201+ * that races with a new response being written plus an error interrupt12021202+ * causing the NAPI interrupt handler below to return unhandled status12031203+ * to the OS. To protect against this would require flushing the write12041204+ * and doing both the write and the flush with interrupts off. Way too12051205+ * expensive and unjustifiable given the rarity of the race.12061206+ */12071207+ writel(adapter->sge->respQ.cidx, adapter->regs + A_SG_SLEEPING);12081208+ return 0;12091209+}12101210+12111211+/*12121212+ * Returns true if the device is already scheduled for polling.12131213+ */12141214+static inline int napi_is_scheduled(struct net_device *dev)12151215+{12161216+ return test_bit(__LINK_STATE_RX_SCHED, &dev->state);12171217+}12181218+12191219+/*12201220+ * NAPI version of the main interrupt handler.12211221+ */12221222+static irqreturn_t t1_interrupt_napi(int irq, void *data, struct pt_regs *regs)12231223+{12241224+ int handled;12251225+ struct adapter *adapter = data;12261226+ struct sge *sge = adapter->sge;12271227+ struct respQ *q = &adapter->sge->respQ;12281228+12291229+ /*12301230+ * Clear the SGE_DATA interrupt first thing. 
Normally the NAPI12311231+ * handler has control of the response queue and the interrupt handler12321232+ * can look at the queue reliably only once it knows NAPI is off.12331233+ * We can't wait that long to clear the SGE_DATA interrupt because we12341234+ * could race with t1_poll rearming the SGE interrupt, so we need to12351235+ * clear the interrupt speculatively and really early on.12361236+ */12371237+ writel(F_PL_INTR_SGE_DATA, adapter->regs + A_PL_CAUSE);12381238+12391239+ spin_lock(&adapter->async_lock);12401240+ if (!napi_is_scheduled(sge->netdev)) {12411241+ struct respQ_e *e = &q->entries[q->cidx];12421242+12431243+ if (e->GenerationBit == q->genbit) {12441244+ if (e->DataValid ||12451245+ process_pure_responses(adapter, e)) {12461246+ if (likely(napi_schedule_prep(sge->netdev)))12471247+ __netif_rx_schedule(sge->netdev);12481248+ else12491249+ printk(KERN_CRIT12501250+ "NAPI schedule failure!\n");12511251+ } else12521252+ writel(q->cidx, adapter->regs + A_SG_SLEEPING);12531253+ handled = 1;12541254+ goto unlock;12551255+ } else12561256+ writel(q->cidx, adapter->regs + A_SG_SLEEPING);12571257+ } else12581258+ if (readl(adapter->regs + A_PL_CAUSE) & F_PL_INTR_SGE_DATA)12591259+ printk(KERN_ERR "data interrupt while NAPI running\n");12601260+12611261+ handled = t1_slow_intr_handler(adapter);12621262+ if (!handled)12631263+ sge->stats.unhandled_irqs++;12641264+ unlock:12651265+ spin_unlock(&adapter->async_lock);12661266+ return IRQ_RETVAL(handled != 0);12671267+}12681268+12691269+/*12701270+ * Main interrupt handler, optimized assuming that we took a 'DATA'12711271+ * interrupt.12721272+ *12731273+ * 1. Clear the interrupt12741274+ * 2. Loop while we find valid descriptors and process them; accumulate12751275+ * information that can be processed after the loop12761276+ * 3. Tell the SGE at which index we stopped processing descriptors12771277+ * 4. Bookkeeping; free TX buffers, ring doorbell if there are any12781278+ * outstanding TX buffers waiting, replenish RX buffers, potentially12791279+ * reenable upper layers if they were turned off due to lack of TX12801280+ * resources which are available again.12811281+ * 5. If we took an interrupt, but no valid respQ descriptors was found we12821282+ * let the slow_intr_handler run and do error handling.12831283+ */12841284+static irqreturn_t t1_interrupt(int irq, void *cookie, struct pt_regs *regs)12851285+{12861286+ int work_done;12871287+ struct respQ_e *e;12881288+ struct adapter *adapter = cookie;12891289+ struct respQ *Q = &adapter->sge->respQ;12901290+12911291+ spin_lock(&adapter->async_lock);12921292+ e = &Q->entries[Q->cidx];12931293+ prefetch(e);12941294+12951295+ writel(F_PL_INTR_SGE_DATA, adapter->regs + A_PL_CAUSE);12961296+12971297+ if (likely(e->GenerationBit == Q->genbit))12981298+ work_done = process_responses(adapter, -1);12991299+ else13001300+ work_done = t1_slow_intr_handler(adapter);13011301+13021302+ /*13031303+ * The unconditional clearing of the PL_CAUSE above may have raced13041304+ * with DMA completion and the corresponding generation of a response13051305+ * to cause us to miss the resulting data interrupt. 
The next write13061306+ * is also unconditional to recover the missed interrupt and render13071307+ * this race harmless.13081308+ */13091309+ writel(Q->cidx, adapter->regs + A_SG_SLEEPING);13101310+13111311+ if (!work_done)13121312+ adapter->sge->stats.unhandled_irqs++;13131313+ spin_unlock(&adapter->async_lock);13141314+ return IRQ_RETVAL(work_done != 0);13151315+}13161316+13171317+intr_handler_t t1_select_intr_handler(adapter_t *adapter)13181318+{13191319+ return adapter->params.sge.polling ? t1_interrupt_napi : t1_interrupt;13201320+}13211321+13221322+/*13231323+ * Enqueues the sk_buff onto the cmdQ[qid] and has hardware fetch it.13241324+ *13251325+ * The code figures out how many entries the sk_buff will require in the13261326+ * cmdQ and updates the cmdQ data structure with the state once the enqueue13271327+ * has complete. Then, it doesn't access the global structure anymore, but13281328+ * uses the corresponding fields on the stack. In conjuction with a spinlock13291329+ * around that code, we can make the function reentrant without holding the13301330+ * lock when we actually enqueue (which might be expensive, especially on13311331+ * architectures with IO MMUs).13321332+ *13331333+ * This runs with softirqs disabled.13341334+ */13351335+unsigned int t1_sge_tx(struct sk_buff *skb, struct adapter *adapter,13361336+ unsigned int qid, struct net_device *dev)13371337+{13381338+ struct sge *sge = adapter->sge;13391339+ struct cmdQ *q = &sge->cmdQ[qid];13401340+ unsigned int credits, pidx, genbit, count;13411341+13421342+ spin_lock(&q->lock);13431343+ reclaim_completed_tx(sge, q);13441344+13451345+ pidx = q->pidx;13461346+ credits = q->size - q->in_use;13471347+ count = 1 + skb_shinfo(skb)->nr_frags;13481348+13491349+ { /* Ethernet packet */13501350+ if (unlikely(credits < count)) {13511351+ netif_stop_queue(dev);13521352+ set_bit(dev->if_port, &sge->stopped_tx_queues);13531353+ sge->stats.cmdQ_full[3]++;13541354+ spin_unlock(&q->lock);13551355+ CH_ERR("%s: Tx ring full while queue awake!\n",13561356+ adapter->name);13571357+ return 1;13581358+ }13591359+ if (unlikely(credits - count < q->stop_thres)) {13601360+ sge->stats.cmdQ_full[3]++;13611361+ netif_stop_queue(dev);13621362+ set_bit(dev->if_port, &sge->stopped_tx_queues);13631363+ }13641364+ }13651365+ q->in_use += count;13661366+ genbit = q->genbit;13671367+ q->pidx += count;13681368+ if (q->pidx >= q->size) {13691369+ q->pidx -= q->size;13701370+ q->genbit ^= 1;13711371+ }13721372+ spin_unlock(&q->lock);13731373+13741374+ write_tx_descs(adapter, skb, pidx, genbit, q);13751375+13761376+ /*13771377+ * We always ring the doorbell for cmdQ1. For cmdQ0, we only ring13781378+ * the doorbell if the Q is asleep. 
There is a natural race, where13791379+ * the hardware is going to sleep just after we checked, however,13801380+ * then the interrupt handler will detect the outstanding TX packet13811381+ * and ring the doorbell for us.13821382+ */13831383+ if (qid)13841384+ doorbell_pio(adapter, F_CMDQ1_ENABLE);13851385+ else {13861386+ clear_bit(CMDQ_STAT_LAST_PKT_DB, &q->status);13871387+ if (test_and_set_bit(CMDQ_STAT_RUNNING, &q->status) == 0) {13881388+ set_bit(CMDQ_STAT_LAST_PKT_DB, &q->status);13891389+ writel(F_CMDQ0_ENABLE, adapter->regs + A_SG_DOORBELL);13901390+ }13911391+ }13921392+ return 0;13931393+}13941394+13951395+#define MK_ETH_TYPE_MSS(type, mss) (((mss) & 0x3FFF) | ((type) << 14))13961396+13971397+/*13981398+ * eth_hdr_len - return the length of an Ethernet header13991399+ * @data: pointer to the start of the Ethernet header14001400+ *14011401+ * Returns the length of an Ethernet header, including optional VLAN tag.14021402+ */14031403+static inline int eth_hdr_len(const void *data)14041404+{14051405+ const struct ethhdr *e = data;14061406+14071407+ return e->h_proto == htons(ETH_P_8021Q) ? VLAN_ETH_HLEN : ETH_HLEN;14081408+}14091409+14101410+/*14111411+ * Adds the CPL header to the sk_buff and passes it to t1_sge_tx.14121412+ */14131413+int t1_start_xmit(struct sk_buff *skb, struct net_device *dev)14141414+{14151415+ struct adapter *adapter = dev->priv;14161416+ struct sge_port_stats *st = &adapter->sge->port_stats[dev->if_port];14171417+ struct sge *sge = adapter->sge;14181418+ struct cpl_tx_pkt *cpl;14191419+14201420+#ifdef NETIF_F_TSO14211421+ if (skb_shinfo(skb)->tso_size) {14221422+ int eth_type;14231423+ struct cpl_tx_pkt_lso *hdr;14241424+14251425+ st->tso++;14261426+14271427+ eth_type = skb->nh.raw - skb->data == ETH_HLEN ?14281428+ CPL_ETH_II : CPL_ETH_II_VLAN;14291429+14301430+ hdr = (struct cpl_tx_pkt_lso *)skb_push(skb, sizeof(*hdr));14311431+ hdr->opcode = CPL_TX_PKT_LSO;14321432+ hdr->ip_csum_dis = hdr->l4_csum_dis = 0;14331433+ hdr->ip_hdr_words = skb->nh.iph->ihl;14341434+ hdr->tcp_hdr_words = skb->h.th->doff;14351435+ hdr->eth_type_mss = htons(MK_ETH_TYPE_MSS(eth_type,14361436+ skb_shinfo(skb)->tso_size));14371437+ hdr->len = htonl(skb->len - sizeof(*hdr));14381438+ cpl = (struct cpl_tx_pkt *)hdr;14391439+ sge->stats.tx_lso_pkts++;14401440+ } else14411441+#endif14421442+ {14431443+ /*14441444+ * Packets shorter than ETH_HLEN can break the MAC, drop them14451445+ * early. 
Also, we may get oversized packets because some14461446+ * parts of the kernel don't handle our unusual hard_header_len14471447+ * right, drop those too.14481448+ */14491449+ if (unlikely(skb->len < ETH_HLEN ||14501450+ skb->len > dev->mtu + eth_hdr_len(skb->data))) {14511451+ dev_kfree_skb_any(skb);14521452+ return NET_XMIT_SUCCESS;14531453+ }14541454+14551455+ /*14561456+ * We are using a non-standard hard_header_len and some kernel14571457+ * components, such as pktgen, do not handle it right.14581458+ * Complain when this happens but try to fix things up.14591459+ */14601460+ if (unlikely(skb_headroom(skb) <14611461+ dev->hard_header_len - ETH_HLEN)) {14621462+ struct sk_buff *orig_skb = skb;14631463+14641464+ if (net_ratelimit())14651465+ printk(KERN_ERR "%s: inadequate headroom in "14661466+ "Tx packet\n", dev->name);14671467+ skb = skb_realloc_headroom(skb, sizeof(*cpl));14681468+ dev_kfree_skb_any(orig_skb);14691469+ if (!skb)14701470+ return -ENOMEM;14711471+ }14721472+14731473+ if (!(adapter->flags & UDP_CSUM_CAPABLE) &&14741474+ skb->ip_summed == CHECKSUM_HW &&14751475+ skb->nh.iph->protocol == IPPROTO_UDP)14761476+ if (unlikely(skb_checksum_help(skb, 0))) {14771477+ dev_kfree_skb_any(skb);14781478+ return -ENOMEM;14791479+ }14801480+14811481+ /* Hmmm, assuming to catch the gratious arp... and we'll use14821482+ * it to flush out stuck espi packets...14831483+ */14841484+ if (unlikely(!adapter->sge->espibug_skb)) {14851485+ if (skb->protocol == htons(ETH_P_ARP) &&14861486+ skb->nh.arph->ar_op == htons(ARPOP_REQUEST)) {14871487+ adapter->sge->espibug_skb = skb;14881488+ /* We want to re-use this skb later. We14891489+ * simply bump the reference count and it14901490+ * will not be freed...14911491+ */14921492+ skb = skb_get(skb);14931493+ }14941494+ }14951495+14961496+ cpl = (struct cpl_tx_pkt *)__skb_push(skb, sizeof(*cpl));14971497+ cpl->opcode = CPL_TX_PKT;14981498+ cpl->ip_csum_dis = 1; /* SW calculates IP csum */14991499+ cpl->l4_csum_dis = skb->ip_summed == CHECKSUM_HW ? 0 : 1;15001500+ /* the length field isn't used so don't bother setting it */15011501+15021502+ st->tx_cso += (skb->ip_summed == CHECKSUM_HW);15031503+ sge->stats.tx_do_cksum += (skb->ip_summed == CHECKSUM_HW);15041504+ sge->stats.tx_reg_pkts++;15051505+ }15061506+ cpl->iff = dev->if_port;15071507+15081508+#if defined(CONFIG_VLAN_8021Q) || defined(CONFIG_VLAN_8021Q_MODULE)15091509+ if (adapter->vlan_grp && vlan_tx_tag_present(skb)) {15101510+ cpl->vlan_valid = 1;15111511+ cpl->vlan = htons(vlan_tx_tag_get(skb));15121512+ st->vlan_insert++;15131513+ } else15141514+#endif15151515+ cpl->vlan_valid = 0;15161516+15171517+ dev->trans_start = jiffies;15181518+ return t1_sge_tx(skb, adapter, 0, dev);15191519+}15201520+15211521+/*15221522+ * Callback for the Tx buffer reclaim timer. 
Runs with softirqs disabled.15231523+ */15241524+static void sge_tx_reclaim_cb(unsigned long data)15251525+{15261526+ int i;15271527+ struct sge *sge = (struct sge *)data;15281528+15291529+ for (i = 0; i < SGE_CMDQ_N; ++i) {15301530+ struct cmdQ *q = &sge->cmdQ[i];15311531+15321532+ if (!spin_trylock(&q->lock))15331533+ continue;15341534+15351535+ reclaim_completed_tx(sge, q);15361536+ if (i == 0 && q->in_use) /* flush pending credits */15371537+ writel(F_CMDQ0_ENABLE,15381538+ sge->adapter->regs + A_SG_DOORBELL);15391539+15401540+ spin_unlock(&q->lock);15411541+ }15421542+ mod_timer(&sge->tx_reclaim_timer, jiffies + TX_RECLAIM_PERIOD);15431543+}15441544+15451545+/*15461546+ * Propagate changes of the SGE coalescing parameters to the HW.15471547+ */15481548+int t1_sge_set_coalesce_params(struct sge *sge, struct sge_params *p)15491549+{15501550+ sge->netdev->poll = t1_poll;15511551+ sge->fixed_intrtimer = p->rx_coalesce_usecs *15521552+ core_ticks_per_usec(sge->adapter);15531553+ writel(sge->fixed_intrtimer, sge->adapter->regs + A_SG_INTRTIMER);15541554+ return 0;15551555+}15561556+15571557+/*15581558+ * Allocates both RX and TX resources and configures the SGE. However,15591559+ * the hardware is not enabled yet.15601560+ */15611561+int t1_sge_configure(struct sge *sge, struct sge_params *p)15621562+{15631563+ if (alloc_rx_resources(sge, p))15641564+ return -ENOMEM;15651565+ if (alloc_tx_resources(sge, p)) {15661566+ free_rx_resources(sge);15671567+ return -ENOMEM;15681568+ }15691569+ configure_sge(sge, p);15701570+15711571+ /*15721572+ * Now that we have sized the free lists calculate the payload15731573+ * capacity of the large buffers. Other parts of the driver use15741574+ * this to set the max offload coalescing size so that RX packets15751575+ * do not overflow our large buffers.15761576+ */15771577+ p->large_buf_capacity = jumbo_payload_capacity(sge);15781578+ return 0;15791579+}15801580+15811581+/*15821582+ * Disables the DMA engine.15831583+ */15841584+void t1_sge_stop(struct sge *sge)15851585+{15861586+ writel(0, sge->adapter->regs + A_SG_CONTROL);15871587+ (void) readl(sge->adapter->regs + A_SG_CONTROL); /* flush */15881588+ if (is_T2(sge->adapter))15891589+ del_timer_sync(&sge->espibug_timer);15901590+ del_timer_sync(&sge->tx_reclaim_timer);15911591+}15921592+15931593+/*15941594+ * Enables the DMA engine.15951595+ */15961596+void t1_sge_start(struct sge *sge)15971597+{15981598+ refill_free_list(sge, &sge->freelQ[0]);15991599+ refill_free_list(sge, &sge->freelQ[1]);16001600+16011601+ writel(sge->sge_control, sge->adapter->regs + A_SG_CONTROL);16021602+ doorbell_pio(sge->adapter, F_FL0_ENABLE | F_FL1_ENABLE);16031603+ (void) readl(sge->adapter->regs + A_SG_CONTROL); /* flush */16041604+16051605+ mod_timer(&sge->tx_reclaim_timer, jiffies + TX_RECLAIM_PERIOD);16061606+16071607+ if (is_T2(sge->adapter)) 16081608+ mod_timer(&sge->espibug_timer, jiffies + sge->espibug_timeout);16091609+}16101610+16111611+/*16121612+ * Callback for the T2 ESPI 'stuck packet feature' workaorund16131613+ */16141614+static void espibug_workaround(void *data)16151615+{16161616+ struct adapter *adapter = (struct adapter *)data;16171617+ struct sge *sge = adapter->sge;16181618+16191619+ if (netif_running(adapter->port[0].dev)) {16201620+ struct sk_buff *skb = sge->espibug_skb;16211621+16221622+ u32 seop = t1_espi_get_mon(adapter, 0x930, 0);16231623+16241624+ if ((seop & 0xfff0fff) == 0xfff && skb) {16251625+ if (!skb->cb[0]) {16261626+ u8 ch_mac_addr[ETH_ALEN] =16271627+ {0x0, 0x7, 0x43, 0x0, 0x0, 
0x0};16281628+ memcpy(skb->data + sizeof(struct cpl_tx_pkt),16291629+ ch_mac_addr, ETH_ALEN);16301630+ memcpy(skb->data + skb->len - 10, ch_mac_addr,16311631+ ETH_ALEN);16321632+ skb->cb[0] = 0xff;16331633+ }16341634+16351635+ /* bump the reference count to avoid freeing of the16361636+ * skb once the DMA has completed.16371637+ */16381638+ skb = skb_get(skb);16391639+ t1_sge_tx(skb, adapter, 0, adapter->port[0].dev);16401640+ }16411641+ }16421642+ mod_timer(&sge->espibug_timer, jiffies + sge->espibug_timeout);16431643+}16441644+16451645+/*16461646+ * Creates a t1_sge structure and returns suggested resource parameters.16471647+ */16481648+struct sge * __devinit t1_sge_create(struct adapter *adapter,16491649+ struct sge_params *p)16501650+{16511651+ struct sge *sge = kmalloc(sizeof(*sge), GFP_KERNEL);16521652+16531653+ if (!sge)16541654+ return NULL;16551655+ memset(sge, 0, sizeof(*sge));16561656+16571657+ sge->adapter = adapter;16581658+ sge->netdev = adapter->port[0].dev;16591659+ sge->rx_pkt_pad = t1_is_T1B(adapter) ? 0 : 2;16601660+ sge->jumbo_fl = t1_is_T1B(adapter) ? 1 : 0;16611661+16621662+ init_timer(&sge->tx_reclaim_timer);16631663+ sge->tx_reclaim_timer.data = (unsigned long)sge;16641664+ sge->tx_reclaim_timer.function = sge_tx_reclaim_cb;16651665+16661666+ if (is_T2(sge->adapter)) {16671667+ init_timer(&sge->espibug_timer);16681668+ sge->espibug_timer.function = (void *)&espibug_workaround;16691669+ sge->espibug_timer.data = (unsigned long)sge->adapter;16701670+ sge->espibug_timeout = 1;16711671+ }16721672+16731673+16741674+ p->cmdQ_size[0] = SGE_CMDQ0_E_N;16751675+ p->cmdQ_size[1] = SGE_CMDQ1_E_N;16761676+ p->freelQ_size[!sge->jumbo_fl] = SGE_FREEL_SIZE;16771677+ p->freelQ_size[sge->jumbo_fl] = SGE_JUMBO_FREEL_SIZE;16781678+ p->rx_coalesce_usecs = 50;16791679+ p->coalesce_enable = 0;16801680+ p->sample_interval_usecs = 0;16811681+ p->polling = 0;16821682+16831683+ return sge;16841684+}
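 The command and response rings in sge.c above avoid a shared producer/consumer counter by using a generation bit: write_tx_descs() stamps each descriptor with the queue's current generation, process_responses() keeps consuming while a descriptor's GenerationBit matches the queue's genbit, and wrapping the index flips the expected generation. The following is only an illustrative sketch of that convention; the ring/ring_entry names and the consume() helper are hypothetical and are not part of the driver.

	struct ring_entry { unsigned int data; unsigned int gen; };

	struct ring {
		struct ring_entry *entries;
		unsigned int size, cidx, genbit;  /* consumer index and expected generation */
	};

	/* Consume entries until one carries a stale generation bit. */
	static void consume(struct ring *r)
	{
		struct ring_entry *e = &r->entries[r->cidx];

		while (e->gen == r->genbit) {
			/* ... process e->data ... */
			if (++r->cidx == r->size) {
				r->cidx = 0;
				r->genbit ^= 1;  /* wrapped: expect the flipped generation next pass */
			}
			e = &r->entries[r->cidx];
		}
	}

 Because the producer flips its own generation on wrap, a consumer that catches up simply sees a stale generation bit and stops, with no extra synchronization word shared between the two sides.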
+105
drivers/net/chelsio/sge.h
···11+/*****************************************************************************22+ * *33+ * File: sge.h *44+ * $Revision: 1.11 $ *55+ * $Date: 2005/06/21 22:10:55 $ *66+ * Description: *77+ * part of the Chelsio 10Gb Ethernet Driver. *88+ * *99+ * This program is free software; you can redistribute it and/or modify *1010+ * it under the terms of the GNU General Public License, version 2, as *1111+ * published by the Free Software Foundation. *1212+ * *1313+ * You should have received a copy of the GNU General Public License along *1414+ * with this program; if not, write to the Free Software Foundation, Inc., *1515+ * 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. *1616+ * *1717+ * THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR IMPLIED *1818+ * WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF *1919+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. *2020+ * *2121+ * http://www.chelsio.com *2222+ * *2323+ * Copyright (c) 2003 - 2005 Chelsio Communications, Inc. *2424+ * All rights reserved. *2525+ * *2626+ * Maintainers: maintainers@chelsio.com *2727+ * *2828+ * Authors: Dimitrios Michailidis <dm@chelsio.com> *2929+ * Tina Yang <tainay@chelsio.com> *3030+ * Felix Marti <felix@chelsio.com> *3131+ * Scott Bardone <sbardone@chelsio.com> *3232+ * Kurt Ottaway <kottaway@chelsio.com> *3333+ * Frank DiMambro <frank@chelsio.com> *3434+ * *3535+ * History: *3636+ * *3737+ ****************************************************************************/3838+3939+#ifndef _CXGB_SGE_H_4040+#define _CXGB_SGE_H_4141+4242+#include <linux/types.h>4343+#include <linux/interrupt.h>4444+#include <asm/byteorder.h>4545+4646+#ifndef IRQ_RETVAL4747+#define IRQ_RETVAL(x)4848+typedef void irqreturn_t;4949+#endif5050+5151+typedef irqreturn_t (*intr_handler_t)(int, void *, struct pt_regs *);5252+5353+struct sge_intr_counts {5454+ unsigned int respQ_empty; /* # times respQ empty */5555+ unsigned int respQ_overflow; /* # respQ overflow (fatal) */5656+ unsigned int freelistQ_empty; /* # times freelist empty */5757+ unsigned int pkt_too_big; /* packet too large (fatal) */5858+ unsigned int pkt_mismatch;5959+ unsigned int cmdQ_full[3]; /* not HW IRQ, host cmdQ[] full */6060+ unsigned int cmdQ_restarted[3];/* # of times cmdQ X was restarted */6161+ unsigned int ethernet_pkts; /* # of Ethernet packets received */6262+ unsigned int offload_pkts; /* # of offload packets received */6363+ unsigned int offload_bundles; /* # of offload pkt bundles delivered */6464+ unsigned int pure_rsps; /* # of non-payload responses */6565+ unsigned int unhandled_irqs; /* # of unhandled interrupts */6666+ unsigned int tx_ipfrags;6767+ unsigned int tx_reg_pkts;6868+ unsigned int tx_lso_pkts;6969+ unsigned int tx_do_cksum;7070+};7171+7272+struct sge_port_stats {7373+ unsigned long rx_cso_good; /* # of successful RX csum offloads */7474+ unsigned long tx_cso; /* # of TX checksum offloads */7575+ unsigned long vlan_xtract; /* # of VLAN tag extractions */7676+ unsigned long vlan_insert; /* # of VLAN tag extractions */7777+ unsigned long tso; /* # of TSO requests */7878+ unsigned long rx_drops; /* # of packets dropped due to no mem */7979+};8080+8181+struct sk_buff;8282+struct net_device;8383+struct adapter;8484+struct sge_params;8585+struct sge;8686+8787+struct sge *t1_sge_create(struct adapter *, struct sge_params *);8888+int t1_sge_configure(struct sge *, struct sge_params *);8989+int t1_sge_set_coalesce_params(struct sge *, struct sge_params *);9090+void t1_sge_destroy(struct sge 
*);9191+intr_handler_t t1_select_intr_handler(adapter_t *adapter);9292+unsigned int t1_sge_tx(struct sk_buff *skb, struct adapter *adapter,9393+ unsigned int qid, struct net_device *netdev);9494+int t1_start_xmit(struct sk_buff *skb, struct net_device *dev);9595+void t1_set_vlan_accel(struct adapter *adapter, int on_off);9696+void t1_sge_start(struct sge *);9797+void t1_sge_stop(struct sge *);9898+int t1_sge_intr_error_handler(struct sge *);9999+void t1_sge_intr_enable(struct sge *);100100+void t1_sge_intr_disable(struct sge *);101101+void t1_sge_intr_clear(struct sge *);102102+const struct sge_intr_counts *t1_sge_get_intr_counts(struct sge *sge);103103+const struct sge_port_stats *t1_sge_get_port_stats(struct sge *sge, int port);104104+105105+#endif /* _CXGB_SGE_H_ */
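 Taken together, the declarations in sge.h imply a simple lifecycle for the SGE state: create it, configure it (allocating the RX/TX rings), start it, and stop/destroy it on teardown, with t1_select_intr_handler() choosing the NAPI or non-NAPI interrupt routine. The sketch below only illustrates that ordering; the setup_sge() helper and its error handling are assumptions, not code from the driver.

	static int setup_sge(struct adapter *adapter, struct sge_params *p)
	{
		/* Allocate the SW state and fill in suggested queue sizes. */
		struct sge *sge = t1_sge_create(adapter, p);

		if (!sge)
			return -ENOMEM;

		/* Allocate RX/TX resources and program the SGE (HW still disabled). */
		if (t1_sge_configure(sge, p)) {
			t1_sge_destroy(sge);
			return -ENOMEM;
		}

		/* Enable DMA, refill the free lists, and arm the reclaim timer. */
		t1_sge_start(sge);

		/* request_irq() would be passed t1_select_intr_handler(adapter). */
		return 0;
	}

 On teardown the inverse order applies: t1_sge_stop() quiesces the DMA engine and timers before t1_sge_destroy() releases the rings.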
+812
drivers/net/chelsio/subr.c
···11+/*****************************************************************************22+ * *33+ * File: subr.c *44+ * $Revision: 1.27 $ *55+ * $Date: 2005/06/22 01:08:36 $ *66+ * Description: *77+ * Various subroutines (intr,pio,etc.) used by Chelsio 10G Ethernet driver. *88+ * part of the Chelsio 10Gb Ethernet Driver. *99+ * *1010+ * This program is free software; you can redistribute it and/or modify *1111+ * it under the terms of the GNU General Public License, version 2, as *1212+ * published by the Free Software Foundation. *1313+ * *1414+ * You should have received a copy of the GNU General Public License along *1515+ * with this program; if not, write to the Free Software Foundation, Inc., *1616+ * 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. *1717+ * *1818+ * THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR IMPLIED *1919+ * WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF *2020+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. *2121+ * *2222+ * http://www.chelsio.com *2323+ * *2424+ * Copyright (c) 2003 - 2005 Chelsio Communications, Inc. *2525+ * All rights reserved. *2626+ * *2727+ * Maintainers: maintainers@chelsio.com *2828+ * *2929+ * Authors: Dimitrios Michailidis <dm@chelsio.com> *3030+ * Tina Yang <tainay@chelsio.com> *3131+ * Felix Marti <felix@chelsio.com> *3232+ * Scott Bardone <sbardone@chelsio.com> *3333+ * Kurt Ottaway <kottaway@chelsio.com> *3434+ * Frank DiMambro <frank@chelsio.com> *3535+ * *3636+ * History: *3737+ * *3838+ ****************************************************************************/3939+4040+#include "common.h"4141+#include "elmer0.h"4242+#include "regs.h"4343+#include "gmac.h"4444+#include "cphy.h"4545+#include "sge.h"4646+#include "espi.h"4747+4848+/**4949+ * t1_wait_op_done - wait until an operation is completed5050+ * @adapter: the adapter performing the operation5151+ * @reg: the register to check for completion5252+ * @mask: a single-bit field within @reg that indicates completion5353+ * @polarity: the value of the field when the operation is completed5454+ * @attempts: number of check iterations5555+ * @delay: delay in usecs between iterations5656+ *5757+ * Wait until an operation is completed by checking a bit in a register5858+ * up to @attempts times. 
Returns %0 if the operation completes and %15959+ * otherwise.6060+ */6161+static int t1_wait_op_done(adapter_t *adapter, int reg, u32 mask, int polarity,6262+ int attempts, int delay)6363+{6464+ while (1) {6565+ u32 val = readl(adapter->regs + reg) & mask;6666+6767+ if (!!val == polarity)6868+ return 0;6969+ if (--attempts == 0)7070+ return 1;7171+ if (delay)7272+ udelay(delay);7373+ }7474+}7575+7676+#define TPI_ATTEMPTS 507777+7878+/*7979+ * Write a register over the TPI interface (unlocked and locked versions).8080+ */8181+static int __t1_tpi_write(adapter_t *adapter, u32 addr, u32 value)8282+{8383+ int tpi_busy;8484+8585+ writel(addr, adapter->regs + A_TPI_ADDR);8686+ writel(value, adapter->regs + A_TPI_WR_DATA);8787+ writel(F_TPIWR, adapter->regs + A_TPI_CSR);8888+8989+ tpi_busy = t1_wait_op_done(adapter, A_TPI_CSR, F_TPIRDY, 1,9090+ TPI_ATTEMPTS, 3);9191+ if (tpi_busy)9292+ CH_ALERT("%s: TPI write to 0x%x failed\n",9393+ adapter->name, addr);9494+ return tpi_busy;9595+}9696+9797+int t1_tpi_write(adapter_t *adapter, u32 addr, u32 value)9898+{9999+ int ret;100100+101101+ spin_lock(&(adapter)->tpi_lock);102102+ ret = __t1_tpi_write(adapter, addr, value);103103+ spin_unlock(&(adapter)->tpi_lock);104104+ return ret;105105+}106106+107107+/*108108+ * Read a register over the TPI interface (unlocked and locked versions).109109+ */110110+static int __t1_tpi_read(adapter_t *adapter, u32 addr, u32 *valp)111111+{112112+ int tpi_busy;113113+114114+ writel(addr, adapter->regs + A_TPI_ADDR);115115+ writel(0, adapter->regs + A_TPI_CSR);116116+117117+ tpi_busy = t1_wait_op_done(adapter, A_TPI_CSR, F_TPIRDY, 1,118118+ TPI_ATTEMPTS, 3);119119+ if (tpi_busy)120120+ CH_ALERT("%s: TPI read from 0x%x failed\n",121121+ adapter->name, addr);122122+ else123123+ *valp = readl(adapter->regs + A_TPI_RD_DATA);124124+ return tpi_busy;125125+}126126+127127+int t1_tpi_read(adapter_t *adapter, u32 addr, u32 *valp)128128+{129129+ int ret;130130+131131+ spin_lock(&(adapter)->tpi_lock);132132+ ret = __t1_tpi_read(adapter, addr, valp);133133+ spin_unlock(&(adapter)->tpi_lock);134134+ return ret;135135+}136136+137137+/*138138+ * Called when a port's link settings change to propagate the new values to the139139+ * associated PHY and MAC. After performing the common tasks it invokes an140140+ * OS-specific handler.141141+ */142142+/* static */ void link_changed(adapter_t *adapter, int port_id)143143+{144144+ int link_ok, speed, duplex, fc;145145+ struct cphy *phy = adapter->port[port_id].phy;146146+ struct link_config *lc = &adapter->port[port_id].link_config;147147+148148+ phy->ops->get_link_status(phy, &link_ok, &speed, &duplex, &fc);149149+150150+ lc->speed = speed < 0 ? SPEED_INVALID : speed;151151+ lc->duplex = duplex < 0 ? DUPLEX_INVALID : duplex;152152+ if (!(lc->requested_fc & PAUSE_AUTONEG))153153+ fc = lc->requested_fc & (PAUSE_RX | PAUSE_TX);154154+155155+ if (link_ok && speed >= 0 && lc->autoneg == AUTONEG_ENABLE) {156156+ /* Set MAC speed, duplex, and flow control to match PHY. 
*/157157+ struct cmac *mac = adapter->port[port_id].mac;158158+159159+ mac->ops->set_speed_duplex_fc(mac, speed, duplex, fc);160160+ lc->fc = (unsigned char)fc;161161+ }162162+ t1_link_changed(adapter, port_id, link_ok, speed, duplex, fc);163163+}164164+165165+static int t1_pci_intr_handler(adapter_t *adapter)166166+{167167+ u32 pcix_cause;168168+169169+ pci_read_config_dword(adapter->pdev, A_PCICFG_INTR_CAUSE, &pcix_cause);170170+171171+ if (pcix_cause) {172172+ pci_write_config_dword(adapter->pdev, A_PCICFG_INTR_CAUSE,173173+ pcix_cause);174174+ t1_fatal_err(adapter); /* PCI errors are fatal */175175+ }176176+ return 0;177177+}178178+179179+180180+/*181181+ * Wait until Elmer's MI1 interface is ready for new operations.182182+ */183183+static int mi1_wait_until_ready(adapter_t *adapter, int mi1_reg)184184+{185185+ int attempts = 100, busy;186186+187187+ do {188188+ u32 val;189189+190190+ __t1_tpi_read(adapter, mi1_reg, &val);191191+ busy = val & F_MI1_OP_BUSY;192192+ if (busy)193193+ udelay(10);194194+ } while (busy && --attempts);195195+ if (busy)196196+ CH_ALERT("%s: MDIO operation timed out\n",197197+ adapter->name);198198+ return busy;199199+}200200+201201+/*202202+ * MI1 MDIO initialization.203203+ */204204+static void mi1_mdio_init(adapter_t *adapter, const struct board_info *bi)205205+{206206+ u32 clkdiv = bi->clock_elmer0 / (2 * bi->mdio_mdc) - 1;207207+ u32 val = F_MI1_PREAMBLE_ENABLE | V_MI1_MDI_INVERT(bi->mdio_mdiinv) |208208+ V_MI1_MDI_ENABLE(bi->mdio_mdien) | V_MI1_CLK_DIV(clkdiv);209209+210210+ if (!(bi->caps & SUPPORTED_10000baseT_Full))211211+ val |= V_MI1_SOF(1);212212+ t1_tpi_write(adapter, A_ELMER0_PORT0_MI1_CFG, val);213213+}214214+215215+static int mi1_mdio_ext_read(adapter_t *adapter, int phy_addr, int mmd_addr,216216+ int reg_addr, unsigned int *valp)217217+{218218+ u32 addr = V_MI1_REG_ADDR(mmd_addr) | V_MI1_PHY_ADDR(phy_addr);219219+220220+ spin_lock(&(adapter)->tpi_lock);221221+222222+ /* Write the address we want. */223223+ __t1_tpi_write(adapter, A_ELMER0_PORT0_MI1_ADDR, addr);224224+ __t1_tpi_write(adapter, A_ELMER0_PORT0_MI1_DATA, reg_addr);225225+ __t1_tpi_write(adapter, A_ELMER0_PORT0_MI1_OP,226226+ MI1_OP_INDIRECT_ADDRESS);227227+ mi1_wait_until_ready(adapter, A_ELMER0_PORT0_MI1_OP);228228+229229+ /* Write the operation we want. */230230+ __t1_tpi_write(adapter, A_ELMER0_PORT0_MI1_OP, MI1_OP_INDIRECT_READ);231231+ mi1_wait_until_ready(adapter, A_ELMER0_PORT0_MI1_OP);232232+233233+ /* Read the data. */234234+ __t1_tpi_read(adapter, A_ELMER0_PORT0_MI1_DATA, valp);235235+ spin_unlock(&(adapter)->tpi_lock);236236+ return 0;237237+}238238+239239+static int mi1_mdio_ext_write(adapter_t *adapter, int phy_addr, int mmd_addr,240240+ int reg_addr, unsigned int val)241241+{242242+ u32 addr = V_MI1_REG_ADDR(mmd_addr) | V_MI1_PHY_ADDR(phy_addr);243243+244244+ spin_lock(&(adapter)->tpi_lock);245245+246246+ /* Write the address we want. */247247+ __t1_tpi_write(adapter, A_ELMER0_PORT0_MI1_ADDR, addr);248248+ __t1_tpi_write(adapter, A_ELMER0_PORT0_MI1_DATA, reg_addr);249249+ __t1_tpi_write(adapter, A_ELMER0_PORT0_MI1_OP,250250+ MI1_OP_INDIRECT_ADDRESS);251251+ mi1_wait_until_ready(adapter, A_ELMER0_PORT0_MI1_OP);252252+253253+ /* Write the data. 
*/254254+ __t1_tpi_write(adapter, A_ELMER0_PORT0_MI1_DATA, val);255255+ __t1_tpi_write(adapter, A_ELMER0_PORT0_MI1_OP, MI1_OP_INDIRECT_WRITE);256256+ mi1_wait_until_ready(adapter, A_ELMER0_PORT0_MI1_OP);257257+ spin_unlock(&(adapter)->tpi_lock);258258+ return 0;259259+}260260+261261+static struct mdio_ops mi1_mdio_ext_ops = {262262+ mi1_mdio_init,263263+ mi1_mdio_ext_read,264264+ mi1_mdio_ext_write265265+};266266+267267+enum {268268+ CH_BRD_N110_1F,269269+ CH_BRD_N210_1F,270270+};271271+272272+static struct board_info t1_board[] = {273273+274274+{ CHBT_BOARD_N110, 1/*ports#*/,275275+ SUPPORTED_10000baseT_Full | SUPPORTED_FIBRE /*caps*/, CHBT_TERM_T1,276276+ CHBT_MAC_PM3393, CHBT_PHY_88X2010,277277+ 125000000/*clk-core*/, 0/*clk-mc3*/, 0/*clk-mc4*/,278278+ 1/*espi-ports*/, 0/*clk-cspi*/, 44/*clk-elmer0*/, 0/*mdien*/,279279+ 0/*mdiinv*/, 1/*mdc*/, 0/*phybaseaddr*/, &t1_pm3393_ops,280280+ &t1_mv88x201x_ops, &mi1_mdio_ext_ops,281281+ "Chelsio N110 1x10GBaseX NIC" },282282+283283+{ CHBT_BOARD_N210, 1/*ports#*/,284284+ SUPPORTED_10000baseT_Full | SUPPORTED_FIBRE /*caps*/, CHBT_TERM_T2,285285+ CHBT_MAC_PM3393, CHBT_PHY_88X2010,286286+ 125000000/*clk-core*/, 0/*clk-mc3*/, 0/*clk-mc4*/,287287+ 1/*espi-ports*/, 0/*clk-cspi*/, 44/*clk-elmer0*/, 0/*mdien*/,288288+ 0/*mdiinv*/, 1/*mdc*/, 0/*phybaseaddr*/, &t1_pm3393_ops,289289+ &t1_mv88x201x_ops, &mi1_mdio_ext_ops,290290+ "Chelsio N210 1x10GBaseX NIC" },291291+292292+};293293+294294+struct pci_device_id t1_pci_tbl[] = {295295+ CH_DEVICE(7, 0, CH_BRD_N110_1F),296296+ CH_DEVICE(10, 1, CH_BRD_N210_1F),297297+ { 0, }298298+};299299+300300+MODULE_DEVICE_TABLE(pci, t1_pci_tbl);301301+302302+/*303303+ * Return the board_info structure with a given index. Out-of-range indices304304+ * return NULL.305305+ */306306+const struct board_info *t1_get_board_info(unsigned int board_id)307307+{308308+ return board_id < ARRAY_SIZE(t1_board) ? &t1_board[board_id] : NULL;309309+}310310+311311+struct chelsio_vpd_t {312312+ u32 format_version;313313+ u8 serial_number[16];314314+ u8 mac_base_address[6];315315+ u8 pad[2]; /* make multiple-of-4 size requirement explicit */316316+};317317+318318+#define EEPROMSIZE (8 * 1024)319319+#define EEPROM_MAX_POLL 4320320+321321+/*322322+ * Read SEEPROM. A zero is written to the flag register when the addres is323323+ * written to the Control register. 
The hardware device will set the flag to a324324+ * one when 4B have been transferred to the Data register.325325+ */326326+int t1_seeprom_read(adapter_t *adapter, u32 addr, u32 *data)327327+{328328+ int i = EEPROM_MAX_POLL;329329+ u16 val;330330+331331+ if (addr >= EEPROMSIZE || (addr & 3))332332+ return -EINVAL;333333+334334+ pci_write_config_word(adapter->pdev, A_PCICFG_VPD_ADDR, (u16)addr);335335+ do {336336+ udelay(50);337337+ pci_read_config_word(adapter->pdev, A_PCICFG_VPD_ADDR, &val);338338+ } while (!(val & F_VPD_OP_FLAG) && --i);339339+340340+ if (!(val & F_VPD_OP_FLAG)) {341341+ CH_ERR("%s: reading EEPROM address 0x%x failed\n",342342+ adapter->name, addr);343343+ return -EIO;344344+ }345345+ pci_read_config_dword(adapter->pdev, A_PCICFG_VPD_DATA, data);346346+ *data = le32_to_cpu(*data);347347+ return 0;348348+}349349+350350+static int t1_eeprom_vpd_get(adapter_t *adapter, struct chelsio_vpd_t *vpd)351351+{352352+ int addr, ret = 0;353353+354354+ for (addr = 0; !ret && addr < sizeof(*vpd); addr += sizeof(u32))355355+ ret = t1_seeprom_read(adapter, addr,356356+ (u32 *)((u8 *)vpd + addr));357357+358358+ return ret;359359+}360360+361361+/*362362+ * Read a port's MAC address from the VPD ROM.363363+ */364364+static int vpd_macaddress_get(adapter_t *adapter, int index, u8 mac_addr[])365365+{366366+ struct chelsio_vpd_t vpd;367367+368368+ if (t1_eeprom_vpd_get(adapter, &vpd))369369+ return 1;370370+ memcpy(mac_addr, vpd.mac_base_address, 5);371371+ mac_addr[5] = vpd.mac_base_address[5] + index;372372+ return 0;373373+}374374+375375+/*376376+ * Set up the MAC/PHY according to the requested link settings.377377+ *378378+ * If the PHY can auto-negotiate first decide what to advertise, then379379+ * enable/disable auto-negotiation as desired and reset.380380+ *381381+ * If the PHY does not auto-negotiate we just reset it.382382+ *383383+ * If auto-negotiation is off set the MAC to the proper speed/duplex/FC,384384+ * otherwise do it later based on the outcome of auto-negotiation.385385+ */386386+int t1_link_start(struct cphy *phy, struct cmac *mac, struct link_config *lc)387387+{388388+ unsigned int fc = lc->requested_fc & (PAUSE_RX | PAUSE_TX);389389+390390+ if (lc->supported & SUPPORTED_Autoneg) {391391+ lc->advertising &= ~(ADVERTISED_ASYM_PAUSE | ADVERTISED_PAUSE);392392+ if (fc) {393393+ lc->advertising |= ADVERTISED_ASYM_PAUSE;394394+ if (fc == (PAUSE_RX | PAUSE_TX))395395+ lc->advertising |= ADVERTISED_PAUSE;396396+ }397397+ phy->ops->advertise(phy, lc->advertising);398398+399399+ if (lc->autoneg == AUTONEG_DISABLE) {400400+ lc->speed = lc->requested_speed;401401+ lc->duplex = lc->requested_duplex;402402+ lc->fc = (unsigned char)fc;403403+ mac->ops->set_speed_duplex_fc(mac, lc->speed,404404+ lc->duplex, fc);405405+ /* Also disables autoneg */406406+ phy->ops->set_speed_duplex(phy, lc->speed, lc->duplex);407407+ phy->ops->reset(phy, 0);408408+ } else409409+ phy->ops->autoneg_enable(phy); /* also resets PHY */410410+ } else {411411+ mac->ops->set_speed_duplex_fc(mac, -1, -1, fc);412412+ lc->fc = (unsigned char)fc;413413+ phy->ops->reset(phy, 0);414414+ }415415+ return 0;416416+}417417+418418+/*419419+ * External interrupt handler for boards using elmer0.420420+ */421421+int elmer0_ext_intr_handler(adapter_t *adapter)422422+{423423+ struct cphy *phy;424424+ int phy_cause;425425+ u32 cause;426426+427427+ t1_tpi_read(adapter, A_ELMER0_INT_CAUSE, &cause);428428+429429+ switch (board_info(adapter)->board) {430430+ case CHBT_BOARD_N210:431431+ case CHBT_BOARD_N110:432432+ if (cause & 
ELMER0_GP_BIT6) { /* Marvell 88x2010 interrupt */433433+ phy = adapter->port[0].phy;434434+ phy_cause = phy->ops->interrupt_handler(phy);435435+ if (phy_cause & cphy_cause_link_change)436436+ link_changed(adapter, 0);437437+ }438438+ break;439439+ }440440+ t1_tpi_write(adapter, A_ELMER0_INT_CAUSE, cause);441441+ return 0;442442+}443443+444444+/* Enables all interrupts. */445445+void t1_interrupts_enable(adapter_t *adapter)446446+{447447+ unsigned int i;448448+ u32 pl_intr;449449+450450+ adapter->slow_intr_mask = F_PL_INTR_SGE_ERR;451451+452452+ t1_sge_intr_enable(adapter->sge);453453+ if (adapter->espi) {454454+ adapter->slow_intr_mask |= F_PL_INTR_ESPI;455455+ t1_espi_intr_enable(adapter->espi);456456+ }457457+458458+ /* Enable MAC/PHY interrupts for each port. */459459+ for_each_port(adapter, i) {460460+ adapter->port[i].mac->ops->interrupt_enable(adapter->port[i].mac);461461+ adapter->port[i].phy->ops->interrupt_enable(adapter->port[i].phy);462462+ }463463+464464+ /* Enable PCIX & external chip interrupts on ASIC boards. */465465+ pl_intr = readl(adapter->regs + A_PL_ENABLE);466466+467467+ /* PCI-X interrupts */468468+ pci_write_config_dword(adapter->pdev, A_PCICFG_INTR_ENABLE,469469+ 0xffffffff);470470+471471+ adapter->slow_intr_mask |= F_PL_INTR_EXT | F_PL_INTR_PCIX;472472+ pl_intr |= F_PL_INTR_EXT | F_PL_INTR_PCIX;473473+ writel(pl_intr, adapter->regs + A_PL_ENABLE);474474+}475475+476476+/* Disables all interrupts. */477477+void t1_interrupts_disable(adapter_t* adapter)478478+{479479+ unsigned int i;480480+481481+ t1_sge_intr_disable(adapter->sge);482482+ if (adapter->espi)483483+ t1_espi_intr_disable(adapter->espi);484484+485485+ /* Disable MAC/PHY interrupts for each port. */486486+ for_each_port(adapter, i) {487487+ adapter->port[i].mac->ops->interrupt_disable(adapter->port[i].mac);488488+ adapter->port[i].phy->ops->interrupt_disable(adapter->port[i].phy);489489+ }490490+491491+ /* Disable PCIX & external chip interrupts. */492492+ writel(0, adapter->regs + A_PL_ENABLE);493493+494494+ /* PCI-X interrupts */495495+ pci_write_config_dword(adapter->pdev, A_PCICFG_INTR_ENABLE, 0);496496+497497+ adapter->slow_intr_mask = 0;498498+}499499+500500+/* Clears all interrupts */501501+void t1_interrupts_clear(adapter_t* adapter)502502+{503503+ unsigned int i;504504+ u32 pl_intr;505505+506506+507507+ t1_sge_intr_clear(adapter->sge);508508+ if (adapter->espi)509509+ t1_espi_intr_clear(adapter->espi);510510+511511+ /* Clear MAC/PHY interrupts for each port. */512512+ for_each_port(adapter, i) {513513+ adapter->port[i].mac->ops->interrupt_clear(adapter->port[i].mac);514514+ adapter->port[i].phy->ops->interrupt_clear(adapter->port[i].phy);515515+ }516516+517517+ /* Enable interrupts for external devices. 
*/518518+ pl_intr = readl(adapter->regs + A_PL_CAUSE);519519+520520+ writel(pl_intr | F_PL_INTR_EXT | F_PL_INTR_PCIX,521521+ adapter->regs + A_PL_CAUSE);522522+523523+ /* PCI-X interrupts */524524+ pci_write_config_dword(adapter->pdev, A_PCICFG_INTR_CAUSE, 0xffffffff);525525+}526526+527527+/*528528+ * Slow path interrupt handler for ASICs.529529+ */530530+int t1_slow_intr_handler(adapter_t *adapter)531531+{532532+ u32 cause = readl(adapter->regs + A_PL_CAUSE);533533+534534+ cause &= adapter->slow_intr_mask;535535+ if (!cause)536536+ return 0;537537+ if (cause & F_PL_INTR_SGE_ERR)538538+ t1_sge_intr_error_handler(adapter->sge);539539+ if (cause & F_PL_INTR_ESPI)540540+ t1_espi_intr_handler(adapter->espi);541541+ if (cause & F_PL_INTR_PCIX)542542+ t1_pci_intr_handler(adapter);543543+ if (cause & F_PL_INTR_EXT)544544+ t1_elmer0_ext_intr(adapter);545545+546546+ /* Clear the interrupts just processed. */547547+ writel(cause, adapter->regs + A_PL_CAUSE);548548+ (void)readl(adapter->regs + A_PL_CAUSE); /* flush writes */549549+ return 1;550550+}551551+552552+/* Pause deadlock avoidance parameters */553553+#define DROP_MSEC 16554554+#define DROP_PKTS_CNT 1555555+556556+static void set_csum_offload(adapter_t *adapter, u32 csum_bit, int enable)557557+{558558+ u32 val = readl(adapter->regs + A_TP_GLOBAL_CONFIG);559559+560560+ if (enable)561561+ val |= csum_bit;562562+ else563563+ val &= ~csum_bit;564564+ writel(val, adapter->regs + A_TP_GLOBAL_CONFIG);565565+}566566+567567+void t1_tp_set_ip_checksum_offload(adapter_t *adapter, int enable)568568+{569569+ set_csum_offload(adapter, F_IP_CSUM, enable);570570+}571571+572572+void t1_tp_set_udp_checksum_offload(adapter_t *adapter, int enable)573573+{574574+ set_csum_offload(adapter, F_UDP_CSUM, enable);575575+}576576+577577+void t1_tp_set_tcp_checksum_offload(adapter_t *adapter, int enable)578578+{579579+ set_csum_offload(adapter, F_TCP_CSUM, enable);580580+}581581+582582+static void t1_tp_reset(adapter_t *adapter, unsigned int tp_clk)583583+{584584+ u32 val;585585+586586+ val = F_TP_IN_CSPI_CPL | F_TP_IN_CSPI_CHECK_IP_CSUM |587587+ F_TP_IN_CSPI_CHECK_TCP_CSUM | F_TP_IN_ESPI_ETHERNET;588588+ val |= F_TP_IN_ESPI_CHECK_IP_CSUM |589589+ F_TP_IN_ESPI_CHECK_TCP_CSUM;590590+ writel(val, adapter->regs + A_TP_IN_CONFIG);591591+ writel(F_TP_OUT_CSPI_CPL |592592+ F_TP_OUT_ESPI_ETHERNET |593593+ F_TP_OUT_ESPI_GENERATE_IP_CSUM |594594+ F_TP_OUT_ESPI_GENERATE_TCP_CSUM,595595+ adapter->regs + A_TP_OUT_CONFIG);596596+597597+ val = readl(adapter->regs + A_TP_GLOBAL_CONFIG);598598+ val &= ~(F_IP_CSUM | F_UDP_CSUM | F_TCP_CSUM);599599+ writel(val, adapter->regs + A_TP_GLOBAL_CONFIG);600600+601601+ /*602602+ * Enable pause frame deadlock prevention.603603+ */604604+ if (is_T2(adapter)) {605605+ u32 drop_ticks = DROP_MSEC * (tp_clk / 1000);606606+607607+ writel(F_ENABLE_TX_DROP | F_ENABLE_TX_ERROR |608608+ V_DROP_TICKS_CNT(drop_ticks) |609609+ V_NUM_PKTS_DROPPED(DROP_PKTS_CNT),610610+ adapter->regs + A_TP_TX_DROP_CONFIG);611611+ }612612+613613+ writel(F_TP_RESET, adapter->regs + A_TP_RESET);614614+}615615+616616+int __devinit t1_get_board_rev(adapter_t *adapter, const struct board_info *bi,617617+ struct adapter_params *p)618618+{619619+ p->chip_version = bi->chip_term;620620+ if (p->chip_version == CHBT_TERM_T1 ||621621+ p->chip_version == CHBT_TERM_T2) {622622+ u32 val = readl(adapter->regs + A_TP_PC_CONFIG);623623+624624+ val = G_TP_PC_REV(val);625625+ if (val == 2)626626+ p->chip_revision = TERM_T1B;627627+ else if (val == 3)628628+ p->chip_revision = TERM_T2;629629+ 
else630630+ return -1;631631+ } else632632+ return -1;633633+ return 0;634634+}635635+636636+/*637637+ * Enable board components other than the Chelsio chip, such as external MAC638638+ * and PHY.639639+ */640640+static int board_init(adapter_t *adapter, const struct board_info *bi)641641+{642642+ switch (bi->board) {643643+ case CHBT_BOARD_N110:644644+ case CHBT_BOARD_N210:645645+ writel(V_TPIPAR(0xf), adapter->regs + A_TPI_PAR);646646+ t1_tpi_write(adapter, A_ELMER0_GPO, 0x800);647647+ break;648648+ }649649+ return 0;650650+}651651+652652+/*653653+ * Initialize and configure the Terminator HW modules. Note that external654654+ * MAC and PHYs are initialized separately.655655+ */656656+int t1_init_hw_modules(adapter_t *adapter)657657+{658658+ int err = -EIO;659659+ const struct board_info *bi = board_info(adapter);660660+661661+ if (!bi->clock_mc4) {662662+ u32 val = readl(adapter->regs + A_MC4_CFG);663663+664664+ writel(val | F_READY | F_MC4_SLOW, adapter->regs + A_MC4_CFG);665665+ writel(F_M_BUS_ENABLE | F_TCAM_RESET,666666+ adapter->regs + A_MC5_CONFIG);667667+ }668668+669669+ if (adapter->espi && t1_espi_init(adapter->espi, bi->chip_mac,670670+ bi->espi_nports))671671+ goto out_err;672672+673673+ t1_tp_reset(adapter, bi->clock_core);674674+675675+ err = t1_sge_configure(adapter->sge, &adapter->params.sge);676676+ if (err)677677+ goto out_err;678678+679679+ err = 0;680680+ out_err:681681+ return err;682682+}683683+684684+/*685685+ * Determine a card's PCI mode.686686+ */687687+static void __devinit get_pci_mode(adapter_t *adapter, struct chelsio_pci_params *p)688688+{689689+ static unsigned short speed_map[] = { 33, 66, 100, 133 };690690+ u32 pci_mode;691691+692692+ pci_read_config_dword(adapter->pdev, A_PCICFG_MODE, &pci_mode);693693+ p->speed = speed_map[G_PCI_MODE_CLK(pci_mode)];694694+ p->width = (pci_mode & F_PCI_MODE_64BIT) ? 
64 : 32;695695+ p->is_pcix = (pci_mode & F_PCI_MODE_PCIX) != 0;696696+}697697+698698+/*699699+ * Release the structures holding the SW per-Terminator-HW-module state.700700+ */701701+void t1_free_sw_modules(adapter_t *adapter)702702+{703703+ unsigned int i;704704+705705+ for_each_port(adapter, i) {706706+ struct cmac *mac = adapter->port[i].mac;707707+ struct cphy *phy = adapter->port[i].phy;708708+709709+ if (mac)710710+ mac->ops->destroy(mac);711711+ if (phy)712712+ phy->ops->destroy(phy);713713+ }714714+715715+ if (adapter->sge)716716+ t1_sge_destroy(adapter->sge);717717+ if (adapter->espi)718718+ t1_espi_destroy(adapter->espi);719719+}720720+721721+static void __devinit init_link_config(struct link_config *lc,722722+ const struct board_info *bi)723723+{724724+ lc->supported = bi->caps;725725+ lc->requested_speed = lc->speed = SPEED_INVALID;726726+ lc->requested_duplex = lc->duplex = DUPLEX_INVALID;727727+ lc->requested_fc = lc->fc = PAUSE_RX | PAUSE_TX;728728+ if (lc->supported & SUPPORTED_Autoneg) {729729+ lc->advertising = lc->supported;730730+ lc->autoneg = AUTONEG_ENABLE;731731+ lc->requested_fc |= PAUSE_AUTONEG;732732+ } else {733733+ lc->advertising = 0;734734+ lc->autoneg = AUTONEG_DISABLE;735735+ }736736+}737737+738738+739739+/*740740+ * Allocate and initialize the data structures that hold the SW state of741741+ * the Terminator HW modules.742742+ */743743+int __devinit t1_init_sw_modules(adapter_t *adapter,744744+ const struct board_info *bi)745745+{746746+ unsigned int i;747747+748748+ adapter->params.brd_info = bi;749749+ adapter->params.nports = bi->port_number;750750+ adapter->params.stats_update_period = bi->gmac->stats_update_period;751751+752752+ adapter->sge = t1_sge_create(adapter, &adapter->params.sge);753753+ if (!adapter->sge) {754754+ CH_ERR("%s: SGE initialization failed\n",755755+ adapter->name);756756+ goto error;757757+ }758758+759759+ if (bi->espi_nports && !(adapter->espi = t1_espi_create(adapter))) {760760+ CH_ERR("%s: ESPI initialization failed\n",761761+ adapter->name);762762+ goto error;763763+ }764764+765765+ board_init(adapter, bi);766766+ bi->mdio_ops->init(adapter, bi);767767+ if (bi->gphy->reset)768768+ bi->gphy->reset(adapter);769769+ if (bi->gmac->reset)770770+ bi->gmac->reset(adapter);771771+772772+ for_each_port(adapter, i) {773773+ u8 hw_addr[6];774774+ struct cmac *mac;775775+ int phy_addr = bi->mdio_phybaseaddr + i;776776+777777+ adapter->port[i].phy = bi->gphy->create(adapter, phy_addr,778778+ bi->mdio_ops);779779+ if (!adapter->port[i].phy) {780780+ CH_ERR("%s: PHY %d initialization failed\n",781781+ adapter->name, i);782782+ goto error;783783+ }784784+785785+ adapter->port[i].mac = mac = bi->gmac->create(adapter, i);786786+ if (!mac) {787787+ CH_ERR("%s: MAC %d initialization failed\n",788788+ adapter->name, i);789789+ goto error;790790+ }791791+792792+ /*793793+ * Get the port's MAC addresses either from the EEPROM if one794794+ * exists or the one hardcoded in the MAC.795795+ */796796+ if (vpd_macaddress_get(adapter, i, hw_addr)) {797797+ CH_ERR("%s: could not read MAC address from VPD ROM\n",798798+ adapter->port[i].dev->name);799799+ goto error;800800+ }801801+ memcpy(adapter->port[i].dev->dev_addr, hw_addr, ETH_ALEN);802802+ init_link_config(&adapter->port[i].link_config, bi);803803+ }804804+805805+ get_pci_mode(adapter, &adapter->params.pci);806806+ t1_interrupts_clear(adapter);807807+ return 0;808808+809809+ error:810810+ t1_free_sw_modules(adapter);811811+ return -1;812812+}
+213
drivers/net/chelsio/suni1x10gexp_regs.h
/*****************************************************************************
 *                                                                           *
 * File: suni1x10gexp_regs.h                                                 *
 * $Revision: 1.9 $                                                          *
 * $Date: 2005/06/22 00:17:04 $                                              *
 * Description:                                                              *
 *  PMC/SIERRA (pm3393) MAC-PHY functionality.                               *
 *  part of the Chelsio 10Gb Ethernet Driver.                                *
 *                                                                           *
 * This program is free software; you can redistribute it and/or modify     *
 * it under the terms of the GNU General Public License, version 2, as      *
 * published by the Free Software Foundation.                               *
 *                                                                           *
 * You should have received a copy of the GNU General Public License along  *
 * with this program; if not, write to the Free Software Foundation, Inc.,  *
 * 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.                  *
 *                                                                           *
 * THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR IMPLIED   *
 * WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF     *
 * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.                     *
 *                                                                           *
 * http://www.chelsio.com                                                    *
 *                                                                           *
 * Maintainers: maintainers@chelsio.com                                      *
 *                                                                           *
 * Authors: PMC/SIERRA                                                       *
 *                                                                           *
 * History:                                                                  *
 *                                                                           *
 ****************************************************************************/

#ifndef _CXGB_SUNI1x10GEXP_REGS_H_
#define _CXGB_SUNI1x10GEXP_REGS_H_

/******************************************************************************/
/** S/UNI-1x10GE-XP REGISTER ADDRESS MAP                                     **/
/******************************************************************************/
/* Refer to the Register Bit Masks below for the naming of each register and  */
/* to the S/UNI-1x10GE-XP Data Sheet for the meaning of each bit              */
/******************************************************************************/

#define SUNI1x10GEXP_REG_DEVICE_STATUS                        0x0004
#define SUNI1x10GEXP_REG_MASTER_INTERRUPT_STATUS              0x000D
#define SUNI1x10GEXP_REG_GLOBAL_INTERRUPT_ENABLE              0x000E
#define SUNI1x10GEXP_REG_SERDES_3125_INTERRUPT_ENABLE         0x0102
#define SUNI1x10GEXP_REG_SERDES_3125_INTERRUPT_STATUS         0x0104
#define SUNI1x10GEXP_REG_RXXG_CONFIG_1                        0x2040
#define SUNI1x10GEXP_REG_RXXG_CONFIG_3                        0x2042
#define SUNI1x10GEXP_REG_RXXG_INTERRUPT                       0x2043
#define SUNI1x10GEXP_REG_RXXG_MAX_FRAME_LENGTH                0x2045
#define SUNI1x10GEXP_REG_RXXG_SA_15_0                         0x2046
#define SUNI1x10GEXP_REG_RXXG_SA_31_16                        0x2047
#define SUNI1x10GEXP_REG_RXXG_SA_47_32                        0x2048
#define SUNI1x10GEXP_REG_RXXG_EXACT_MATCH_ADDR_1_LOW          0x204D
#define SUNI1x10GEXP_REG_RXXG_EXACT_MATCH_ADDR_1_MID          0x204E
#define SUNI1x10GEXP_REG_RXXG_EXACT_MATCH_ADDR_1_HIGH         0x204F
#define SUNI1x10GEXP_REG_RXXG_MULTICAST_HASH_LOW              0x206A
#define SUNI1x10GEXP_REG_RXXG_MULTICAST_HASH_MIDLOW           0x206B
#define SUNI1x10GEXP_REG_RXXG_MULTICAST_HASH_MIDHIGH          0x206C
#define SUNI1x10GEXP_REG_RXXG_MULTICAST_HASH_HIGH             0x206D
#define SUNI1x10GEXP_REG_RXXG_ADDRESS_FILTER_CONTROL_0        0x206E
#define SUNI1x10GEXP_REG_RXXG_ADDRESS_FILTER_CONTROL_2        0x2070
#define SUNI1x10GEXP_REG_XRF_INTERRUPT_ENABLE                 0x2088
#define SUNI1x10GEXP_REG_XRF_INTERRUPT_STATUS                 0x2089
#define SUNI1x10GEXP_REG_XRF_DIAG_INTERRUPT_ENABLE            0x208B
#define SUNI1x10GEXP_REG_XRF_DIAG_INTERRUPT_STATUS            0x208C
#define SUNI1x10GEXP_REG_RXOAM_INTERRUPT_ENABLE               0x20C7
#define SUNI1x10GEXP_REG_RXOAM_INTERRUPT_STATUS               0x20C8
#define SUNI1x10GEXP_REG_MSTAT_CONTROL                        0x2100
#define SUNI1x10GEXP_REG_MSTAT_COUNTER_ROLLOVER_0             0x2101
#define SUNI1x10GEXP_REG_MSTAT_COUNTER_ROLLOVER_1             0x2102
#define SUNI1x10GEXP_REG_MSTAT_COUNTER_ROLLOVER_2             0x2103
#define SUNI1x10GEXP_REG_MSTAT_COUNTER_ROLLOVER_3             0x2104
#define SUNI1x10GEXP_REG_MSTAT_INTERRUPT_MASK_0               0x2105
#define SUNI1x10GEXP_REG_MSTAT_INTERRUPT_MASK_1               0x2106
#define SUNI1x10GEXP_REG_MSTAT_INTERRUPT_MASK_2               0x2107
#define SUNI1x10GEXP_REG_MSTAT_INTERRUPT_MASK_3               0x2108
#define SUNI1x10GEXP_REG_MSTAT_COUNTER_0_LOW                  0x2110
#define SUNI1x10GEXP_REG_MSTAT_COUNTER_1_LOW                  0x2114
#define SUNI1x10GEXP_REG_MSTAT_COUNTER_4_LOW                  0x2120
#define SUNI1x10GEXP_REG_MSTAT_COUNTER_5_LOW                  0x2124
#define SUNI1x10GEXP_REG_MSTAT_COUNTER_6_LOW                  0x2128
#define SUNI1x10GEXP_REG_MSTAT_COUNTER_8_LOW                  0x2130
#define SUNI1x10GEXP_REG_MSTAT_COUNTER_10_LOW                 0x2138
#define SUNI1x10GEXP_REG_MSTAT_COUNTER_11_LOW                 0x213C
#define SUNI1x10GEXP_REG_MSTAT_COUNTER_12_LOW                 0x2140
#define SUNI1x10GEXP_REG_MSTAT_COUNTER_13_LOW                 0x2144
#define SUNI1x10GEXP_REG_MSTAT_COUNTER_15_LOW                 0x214C
#define SUNI1x10GEXP_REG_MSTAT_COUNTER_16_LOW                 0x2150
#define SUNI1x10GEXP_REG_MSTAT_COUNTER_17_LOW                 0x2154
#define SUNI1x10GEXP_REG_MSTAT_COUNTER_18_LOW                 0x2158
#define SUNI1x10GEXP_REG_MSTAT_COUNTER_33_LOW                 0x2194
#define SUNI1x10GEXP_REG_MSTAT_COUNTER_35_LOW                 0x219C
#define SUNI1x10GEXP_REG_MSTAT_COUNTER_36_LOW                 0x21A0
#define SUNI1x10GEXP_REG_MSTAT_COUNTER_38_LOW                 0x21A8
#define SUNI1x10GEXP_REG_MSTAT_COUNTER_40_LOW                 0x21B0
#define SUNI1x10GEXP_REG_MSTAT_COUNTER_42_LOW                 0x21B8
#define SUNI1x10GEXP_REG_MSTAT_COUNTER_43_LOW                 0x21BC
#define SUNI1x10GEXP_REG_IFLX_FIFO_OVERFLOW_ENABLE            0x2209
#define SUNI1x10GEXP_REG_IFLX_FIFO_OVERFLOW_INTERRUPT         0x220A
#define SUNI1x10GEXP_REG_PL4ODP_INTERRUPT_MASK                0x2282
#define SUNI1x10GEXP_REG_PL4ODP_INTERRUPT                     0x2283
#define SUNI1x10GEXP_REG_PL4IO_LOCK_DETECT_STATUS             0x2300
#define SUNI1x10GEXP_REG_PL4IO_LOCK_DETECT_CHANGE             0x2301
#define SUNI1x10GEXP_REG_PL4IO_LOCK_DETECT_MASK               0x2302
#define SUNI1x10GEXP_REG_TXXG_CONFIG_1                        0x3040
#define SUNI1x10GEXP_REG_TXXG_CONFIG_3                        0x3042
#define SUNI1x10GEXP_REG_TXXG_INTERRUPT                       0x3043
#define SUNI1x10GEXP_REG_TXXG_MAX_FRAME_SIZE                  0x3045
#define SUNI1x10GEXP_REG_TXXG_SA_15_0                         0x3047
#define SUNI1x10GEXP_REG_TXXG_SA_31_16                        0x3048
#define SUNI1x10GEXP_REG_TXXG_SA_47_32                        0x3049
#define SUNI1x10GEXP_REG_XTEF_INTERRUPT_STATUS                0x3084
#define SUNI1x10GEXP_REG_XTEF_INTERRUPT_ENABLE                0x3085
#define SUNI1x10GEXP_REG_TXOAM_INTERRUPT_ENABLE               0x30C6
#define SUNI1x10GEXP_REG_TXOAM_INTERRUPT_STATUS               0x30C7
#define SUNI1x10GEXP_REG_EFLX_FIFO_OVERFLOW_ERROR_ENABLE      0x320C
#define SUNI1x10GEXP_REG_EFLX_FIFO_OVERFLOW_ERROR_INDICATION  0x320D
#define SUNI1x10GEXP_REG_PL4IDU_INTERRUPT_MASK                0x3282
#define SUNI1x10GEXP_REG_PL4IDU_INTERRUPT                     0x3283

/******************************************************************************/
/* -- End register offset definitions -- */
/******************************************************************************/

/******************************************************************************/
/** SUNI-1x10GE-XP REGISTER BIT MASKS                                        **/
/******************************************************************************/

/*----------------------------------------------------------------------------
 * Register 0x0004: S/UNI-1x10GE-XP Device Status
 *    Bit 9 TOP_SXRA_EXPIRED
 *    Bit 8 TOP_MDIO_BUSY
 *    Bit 7 TOP_DTRB
 *    Bit 6 TOP_EXPIRED
 *    Bit 5 TOP_PAUSED
 *    Bit 4 TOP_PL4_ID_DOOL
 *    Bit 3 TOP_PL4_IS_DOOL
 *    Bit 2 TOP_PL4_ID_ROOL
 *    Bit 1 TOP_PL4_IS_ROOL
 *    Bit 0 TOP_PL4_OUT_ROOL
 *----------------------------------------------------------------------------*/
#define SUNI1x10GEXP_BITMSK_TOP_SXRA_EXPIRED  0x0200
#define SUNI1x10GEXP_BITMSK_TOP_EXPIRED       0x0040
#define SUNI1x10GEXP_BITMSK_TOP_PL4_ID_DOOL   0x0010
#define SUNI1x10GEXP_BITMSK_TOP_PL4_IS_DOOL   0x0008
#define SUNI1x10GEXP_BITMSK_TOP_PL4_ID_ROOL   0x0004
#define SUNI1x10GEXP_BITMSK_TOP_PL4_IS_ROOL   0x0002
#define SUNI1x10GEXP_BITMSK_TOP_PL4_OUT_ROOL  0x0001

/*----------------------------------------------------------------------------
 * Register 0x000E: PM3393 Global interrupt enable
 *    Bit 15 TOP_INTE
 *----------------------------------------------------------------------------*/
#define SUNI1x10GEXP_BITMSK_TOP_INTE  0x8000

/*----------------------------------------------------------------------------
 * Register 0x2040: RXXG Configuration 1
 *    Bit 15  RXXG_RXEN
 *    Bit 14  RXXG_ROCF
 *    Bit 13  RXXG_PAD_STRIP
 *    Bit 10  RXXG_PUREP
 *    Bit 9   RXXG_LONGP
 *    Bit 8   RXXG_PARF
 *    Bit 7   RXXG_FLCHK
 *    Bit 5   RXXG_PASS_CTRL
 *    Bit 3   RXXG_CRC_STRIP
 *    Bit 2-0 RXXG_MIFG
 *----------------------------------------------------------------------------*/
#define SUNI1x10GEXP_BITMSK_RXXG_RXEN       0x8000
#define SUNI1x10GEXP_BITMSK_RXXG_PUREP      0x0400
#define SUNI1x10GEXP_BITMSK_RXXG_FLCHK      0x0080
#define SUNI1x10GEXP_BITMSK_RXXG_CRC_STRIP  0x0008

/*----------------------------------------------------------------------------
 * Register 0x2070: RXXG Address Filter Control 2
 *    Bit 1 RXXG_PMODE
 *    Bit 0 RXXG_MHASH_EN
 *----------------------------------------------------------------------------*/
#define SUNI1x10GEXP_BITMSK_RXXG_PMODE     0x0002
#define SUNI1x10GEXP_BITMSK_RXXG_MHASH_EN  0x0001

/*----------------------------------------------------------------------------
 * Register 0x2100: MSTAT Control
 *    Bit 2 MSTAT_WRITE
 *    Bit 1 MSTAT_CLEAR
 *    Bit 0 MSTAT_SNAP
 *----------------------------------------------------------------------------*/
#define SUNI1x10GEXP_BITMSK_MSTAT_CLEAR  0x0002
#define SUNI1x10GEXP_BITMSK_MSTAT_SNAP   0x0001

/*----------------------------------------------------------------------------
 * Register 0x3040: TXXG Configuration Register 1
 *    Bit 15   TXXG_TXEN0
 *    Bit 13   TXXG_HOSTPAUSE
 *    Bit 12-7 TXXG_IPGT
 *    Bit 5    TXXG_32BIT_ALIGN
 *    Bit 4    TXXG_CRCEN
 *    Bit 3    TXXG_FCTX
 *    Bit 2    TXXG_FCRX
 *    Bit 1    TXXG_PADEN
 *    Bit 0    TXXG_SPRE
 *----------------------------------------------------------------------------*/
#define SUNI1x10GEXP_BITMSK_TXXG_TXEN0        0x8000
#define SUNI1x10GEXP_BITOFF_TXXG_IPGT         7
#define SUNI1x10GEXP_BITMSK_TXXG_32BIT_ALIGN  0x0020
#define SUNI1x10GEXP_BITMSK_TXXG_CRCEN        0x0010
#define SUNI1x10GEXP_BITMSK_TXXG_FCTX         0x0008
#define SUNI1x10GEXP_BITMSK_TXXG_FCRX         0x0004
#define SUNI1x10GEXP_BITMSK_TXXG_PADEN        0x0002

#endif /* _CXGB_SUNI1x10GEXP_REGS_H_ */
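 The header above only supplies register offsets and single-bit masks. For
 reference, the helper below is a minimal, hypothetical sketch (it is not part
 of the Chelsio sources; only the SUNI1x10GEXP_* names come from the header,
 and u16 is the usual <linux/types.h> type) of the common usage pattern: read
 the 16-bit Device Status register through whatever accessor the driver
 provides, then test the bits of interest.

	/* Hypothetical example only: given a value read from
	 * SUNI1x10GEXP_REG_DEVICE_STATUS (0x0004), report whether any of the
	 * PL4 receive "out of lock" conditions is currently asserted. */
	static inline int suni1x10gexp_rx_out_of_lock(u16 device_status)
	{
		return (device_status & (SUNI1x10GEXP_BITMSK_TOP_PL4_ID_ROOL |
					 SUNI1x10GEXP_BITMSK_TOP_PL4_IS_ROOL |
					 SUNI1x10GEXP_BITMSK_TOP_PL4_OUT_ROOL)) != 0;
	}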
/*
   sis190.c: Silicon Integrated Systems SiS190 ethernet driver

   Copyright (c) 2003 K.M. Liu <kmliu@sis.com>
   Copyright (c) 2003, 2004 Jeff Garzik <jgarzik@pobox.com>
   Copyright (c) 2003, 2004, 2005 Francois Romieu <romieu@fr.zoreil.com>

   Based on r8169.c, tg3.c, 8139cp.c, skge.c, epic100.c and SiS 190/191
   genuine driver.

   This software may be used and distributed according to the terms of
   the GNU General Public License (GPL), incorporated herein by reference.
   Drivers based on or derived from this code fall under the GPL and must
   retain the authorship, copyright and license notice.  This file is not
   a complete program and may only be used when the entire operating
   system is licensed under the GPL.

   See the file COPYING in this distribution for more information.

 */

#include <linux/module.h>
#include <linux/moduleparam.h>
#include <linux/netdevice.h>
#include <linux/rtnetlink.h>
#include <linux/etherdevice.h>
#include <linux/ethtool.h>
#include <linux/pci.h>
#include <linux/mii.h>
#include <linux/delay.h>
#include <linux/crc32.h>
#include <linux/dma-mapping.h>
#include <asm/irq.h>

#define net_drv(p, arg...)	if (netif_msg_drv(p)) \
					printk(arg)
#define net_probe(p, arg...)	if (netif_msg_probe(p)) \
					printk(arg)
#define net_link(p, arg...)	if (netif_msg_link(p)) \
					printk(arg)
#define net_intr(p, arg...)	if (netif_msg_intr(p)) \
					printk(arg)
#define net_tx_err(p, arg...)	if (netif_msg_tx_err(p)) \
					printk(arg)

#define PHY_MAX_ADDR		32
#define PHY_ID_ANY		0x1f
#define MII_REG_ANY		0x1f

#ifdef CONFIG_SIS190_NAPI
#define NAPI_SUFFIX	"-NAPI"
#else
#define NAPI_SUFFIX	""
#endif

#define DRV_VERSION		"1.2" NAPI_SUFFIX
#define DRV_NAME		"sis190"
#define SIS190_DRIVER_NAME	DRV_NAME " Gigabit Ethernet driver " DRV_VERSION
#define PFX DRV_NAME ": "

#ifdef CONFIG_SIS190_NAPI
#define sis190_rx_skb			netif_receive_skb
#define sis190_rx_quota(count, quota)	min(count, quota)
#else
#define sis190_rx_skb			netif_rx
#define sis190_rx_quota(count, quota)	count
#endif

#define MAC_ADDR_LEN		6

#define NUM_TX_DESC		64	/* [8..1024] */
#define NUM_RX_DESC		64	/* [8..8192] */
#define TX_RING_BYTES		(NUM_TX_DESC * sizeof(struct TxDesc))
#define RX_RING_BYTES		(NUM_RX_DESC * sizeof(struct RxDesc))
#define RX_BUF_SIZE		1536
#define RX_BUF_MASK		0xfff8

#define SIS190_REGS_SIZE	0x80
#define SIS190_TX_TIMEOUT	(6*HZ)
#define SIS190_PHY_TIMEOUT	(10*HZ)
#define SIS190_MSG_DEFAULT	(NETIF_MSG_DRV | NETIF_MSG_PROBE | \
				 NETIF_MSG_LINK | NETIF_MSG_IFUP | \
				 NETIF_MSG_IFDOWN)

/* Enhanced PHY access register bit definitions */
#define EhnMIIread		0x0000
#define EhnMIIwrite		0x0020
#define EhnMIIdataShift		16
#define EhnMIIpmdShift		6	/* 7016 only */
#define EhnMIIregShift		11
#define EhnMIIreq		0x0010
#define EhnMIInotDone		0x0010

/* Write/read MMIO register */
#define SIS_W8(reg, val)	writeb ((val), ioaddr + (reg))
#define SIS_W16(reg, val)	writew ((val), ioaddr + (reg))
#define SIS_W32(reg, val)	writel ((val), ioaddr + (reg))
#define SIS_R8(reg)		readb (ioaddr + (reg))
#define SIS_R16(reg)		readw (ioaddr + (reg))
#define SIS_R32(reg)		readl (ioaddr + (reg))

#define SIS_PCI_COMMIT()	SIS_R32(IntrControl)

enum sis190_registers
{105105+ TxControl = 0x00,106106+ TxDescStartAddr = 0x04,107107+ rsv0 = 0x08, // reserved108108+ TxSts = 0x0c, // unused (Control/Status)109109+ RxControl = 0x10,110110+ RxDescStartAddr = 0x14,111111+ rsv1 = 0x18, // reserved112112+ RxSts = 0x1c, // unused113113+ IntrStatus = 0x20,114114+ IntrMask = 0x24,115115+ IntrControl = 0x28,116116+ IntrTimer = 0x2c, // unused (Interupt Timer)117117+ PMControl = 0x30, // unused (Power Mgmt Control/Status)118118+ rsv2 = 0x34, // reserved119119+ ROMControl = 0x38,120120+ ROMInterface = 0x3c,121121+ StationControl = 0x40,122122+ GMIIControl = 0x44,123123+ GIoCR = 0x48, // unused (GMAC IO Compensation)124124+ GIoCtrl = 0x4c, // unused (GMAC IO Control)125125+ TxMacControl = 0x50,126126+ TxLimit = 0x54, // unused (Tx MAC Timer/TryLimit)127127+ RGDelay = 0x58, // unused (RGMII Tx Internal Delay)128128+ rsv3 = 0x5c, // reserved129129+ RxMacControl = 0x60,130130+ RxMacAddr = 0x62,131131+ RxHashTable = 0x68,132132+ // Undocumented = 0x6c,133133+ RxWolCtrl = 0x70,134134+ RxWolData = 0x74, // unused (Rx WOL Data Access)135135+ RxMPSControl = 0x78, // unused (Rx MPS Control)136136+ rsv4 = 0x7c, // reserved137137+};138138+139139+enum sis190_register_content {140140+ /* IntrStatus */141141+ SoftInt = 0x40000000, // unused142142+ Timeup = 0x20000000, // unused143143+ PauseFrame = 0x00080000, // unused144144+ MagicPacket = 0x00040000, // unused145145+ WakeupFrame = 0x00020000, // unused146146+ LinkChange = 0x00010000,147147+ RxQEmpty = 0x00000080,148148+ RxQInt = 0x00000040,149149+ TxQ1Empty = 0x00000020, // unused150150+ TxQ1Int = 0x00000010,151151+ TxQ0Empty = 0x00000008, // unused152152+ TxQ0Int = 0x00000004,153153+ RxHalt = 0x00000002,154154+ TxHalt = 0x00000001,155155+156156+ /* {Rx/Tx}CmdBits */157157+ CmdReset = 0x10,158158+ CmdRxEnb = 0x08, // unused159159+ CmdTxEnb = 0x01,160160+ RxBufEmpty = 0x01, // unused161161+162162+ /* Cfg9346Bits */163163+ Cfg9346_Lock = 0x00, // unused164164+ Cfg9346_Unlock = 0xc0, // unused165165+166166+ /* RxMacControl */167167+ AcceptErr = 0x20, // unused168168+ AcceptRunt = 0x10, // unused169169+ AcceptBroadcast = 0x0800,170170+ AcceptMulticast = 0x0400,171171+ AcceptMyPhys = 0x0200,172172+ AcceptAllPhys = 0x0100,173173+174174+ /* RxConfigBits */175175+ RxCfgFIFOShift = 13,176176+ RxCfgDMAShift = 8, // 0x1a in RxControl ?177177+178178+ /* TxConfigBits */179179+ TxInterFrameGapShift = 24,180180+ TxDMAShift = 8, /* DMA burst value (0-7) is shift this many bits */181181+182182+ /* StationControl */183183+ _1000bpsF = 0x1c00,184184+ _1000bpsH = 0x0c00,185185+ _100bpsF = 0x1800,186186+ _100bpsH = 0x0800,187187+ _10bpsF = 0x1400,188188+ _10bpsH = 0x0400,189189+190190+ LinkStatus = 0x02, // unused191191+ FullDup = 0x01, // unused192192+193193+ /* TBICSRBit */194194+ TBILinkOK = 0x02000000, // unused195195+};196196+197197+struct TxDesc {198198+ __le32 PSize;199199+ __le32 status;200200+ __le32 addr;201201+ __le32 size;202202+};203203+204204+struct RxDesc {205205+ __le32 PSize;206206+ __le32 status;207207+ __le32 addr;208208+ __le32 size;209209+};210210+211211+enum _DescStatusBit {212212+ /* _Desc.status */213213+ OWNbit = 0x80000000, // RXOWN/TXOWN214214+ INTbit = 0x40000000, // RXINT/TXINT215215+ CRCbit = 0x00020000, // CRCOFF/CRCEN216216+ PADbit = 0x00010000, // PREADD/PADEN217217+ /* _Desc.size */218218+ RingEnd = 0x80000000,219219+ /* TxDesc.status */220220+ LSEN = 0x08000000, // TSO ? 
-- FR221221+ IPCS = 0x04000000,222222+ TCPCS = 0x02000000,223223+ UDPCS = 0x01000000,224224+ BSTEN = 0x00800000,225225+ EXTEN = 0x00400000,226226+ DEFEN = 0x00200000,227227+ BKFEN = 0x00100000,228228+ CRSEN = 0x00080000,229229+ COLEN = 0x00040000,230230+ THOL3 = 0x30000000,231231+ THOL2 = 0x20000000,232232+ THOL1 = 0x10000000,233233+ THOL0 = 0x00000000,234234+ /* RxDesc.status */235235+ IPON = 0x20000000,236236+ TCPON = 0x10000000,237237+ UDPON = 0x08000000,238238+ Wakup = 0x00400000,239239+ Magic = 0x00200000,240240+ Pause = 0x00100000,241241+ DEFbit = 0x00200000,242242+ BCAST = 0x000c0000,243243+ MCAST = 0x00080000,244244+ UCAST = 0x00040000,245245+ /* RxDesc.PSize */246246+ TAGON = 0x80000000,247247+ RxDescCountMask = 0x7f000000, // multi-desc pkt when > 1 ? -- FR248248+ ABORT = 0x00800000,249249+ SHORT = 0x00400000,250250+ LIMIT = 0x00200000,251251+ MIIER = 0x00100000,252252+ OVRUN = 0x00080000,253253+ NIBON = 0x00040000,254254+ COLON = 0x00020000,255255+ CRCOK = 0x00010000,256256+ RxSizeMask = 0x0000ffff257257+ /*258258+ * The asic could apparently do vlan, TSO, jumbo (sis191 only) and259259+ * provide two (unused with Linux) Tx queues. No publically260260+ * available documentation alas.261261+ */262262+};263263+264264+enum sis190_eeprom_access_register_bits {265265+ EECS = 0x00000001, // unused266266+ EECLK = 0x00000002, // unused267267+ EEDO = 0x00000008, // unused268268+ EEDI = 0x00000004, // unused269269+ EEREQ = 0x00000080,270270+ EEROP = 0x00000200,271271+ EEWOP = 0x00000100 // unused272272+};273273+274274+/* EEPROM Addresses */275275+enum sis190_eeprom_address {276276+ EEPROMSignature = 0x00,277277+ EEPROMCLK = 0x01, // unused278278+ EEPROMInfo = 0x02,279279+ EEPROMMACAddr = 0x03280280+};281281+282282+struct sis190_private {283283+ void __iomem *mmio_addr;284284+ struct pci_dev *pci_dev;285285+ struct net_device_stats stats;286286+ spinlock_t lock;287287+ u32 rx_buf_sz;288288+ u32 cur_rx;289289+ u32 cur_tx;290290+ u32 dirty_rx;291291+ u32 dirty_tx;292292+ dma_addr_t rx_dma;293293+ dma_addr_t tx_dma;294294+ struct RxDesc *RxDescRing;295295+ struct TxDesc *TxDescRing;296296+ struct sk_buff *Rx_skbuff[NUM_RX_DESC];297297+ struct sk_buff *Tx_skbuff[NUM_TX_DESC];298298+ struct work_struct phy_task;299299+ struct timer_list timer;300300+ u32 msg_enable;301301+ struct mii_if_info mii_if;302302+ struct list_head first_phy;303303+};304304+305305+struct sis190_phy {306306+ struct list_head list;307307+ int phy_id;308308+ u16 id[2];309309+ u16 status;310310+ u8 type;311311+};312312+313313+enum sis190_phy_type {314314+ UNKNOWN = 0x00,315315+ HOME = 0x01,316316+ LAN = 0x02,317317+ MIX = 0x03318318+};319319+320320+static struct mii_chip_info {321321+ const char *name;322322+ u16 id[2];323323+ unsigned int type;324324+} mii_chip_table[] = {325325+ { "Broadcom PHY BCM5461", { 0x0020, 0x60c0 }, LAN },326326+ { "Agere PHY ET1101B", { 0x0282, 0xf010 }, LAN },327327+ { "Marvell PHY 88E1111", { 0x0141, 0x0cc0 }, LAN },328328+ { "Realtek PHY RTL8201", { 0x0000, 0x8200 }, LAN },329329+ { NULL, }330330+};331331+332332+const static struct {333333+ const char *name;334334+ u8 version; /* depend on docs */335335+ u32 RxConfigMask; /* clear the bits supported by this chip */336336+} sis_chip_info[] = {337337+ { DRV_NAME, 0x00, 0xff7e1880, },338338+};339339+340340+static struct pci_device_id sis190_pci_tbl[] __devinitdata = {341341+ { PCI_DEVICE(PCI_VENDOR_ID_SI, 0x0190), 0, 0, 0 },342342+ { 0, },343343+};344344+345345+MODULE_DEVICE_TABLE(pci, sis190_pci_tbl);346346+347347+static int rx_copybreak = 
200;348348+349349+static struct {350350+ u32 msg_enable;351351+} debug = { -1 };352352+353353+MODULE_DESCRIPTION("SiS sis190 Gigabit Ethernet driver");354354+module_param(rx_copybreak, int, 0);355355+MODULE_PARM_DESC(rx_copybreak, "Copy breakpoint for copy-only-tiny-frames");356356+module_param_named(debug, debug.msg_enable, int, 0);357357+MODULE_PARM_DESC(debug, "Debug verbosity level (0=none, ..., 16=all)");358358+MODULE_AUTHOR("K.M. Liu <kmliu@sis.com>, Ueimor <romieu@fr.zoreil.com>");359359+MODULE_VERSION(DRV_VERSION);360360+MODULE_LICENSE("GPL");361361+362362+static const u32 sis190_intr_mask =363363+ RxQEmpty | RxQInt | TxQ1Int | TxQ0Int | RxHalt | TxHalt;364364+365365+/*366366+ * Maximum number of multicast addresses to filter (vs. Rx-all-multicast).367367+ * The chips use a 64 element hash table based on the Ethernet CRC.368368+ */369369+static int multicast_filter_limit = 32;370370+371371+static void __mdio_cmd(void __iomem *ioaddr, u32 ctl)372372+{373373+ unsigned int i;374374+375375+ SIS_W32(GMIIControl, ctl);376376+377377+ msleep(1);378378+379379+ for (i = 0; i < 100; i++) {380380+ if (!(SIS_R32(GMIIControl) & EhnMIInotDone))381381+ break;382382+ msleep(1);383383+ }384384+385385+ if (i > 999)386386+ printk(KERN_ERR PFX "PHY command failed !\n");387387+}388388+389389+static void mdio_write(void __iomem *ioaddr, int phy_id, int reg, int val)390390+{391391+ __mdio_cmd(ioaddr, EhnMIIreq | EhnMIIwrite |392392+ (((u32) reg) << EhnMIIregShift) | (phy_id << EhnMIIpmdShift) |393393+ (((u32) val) << EhnMIIdataShift));394394+}395395+396396+static int mdio_read(void __iomem *ioaddr, int phy_id, int reg)397397+{398398+ __mdio_cmd(ioaddr, EhnMIIreq | EhnMIIread |399399+ (((u32) reg) << EhnMIIregShift) | (phy_id << EhnMIIpmdShift));400400+401401+ return (u16) (SIS_R32(GMIIControl) >> EhnMIIdataShift);402402+}403403+404404+static void __mdio_write(struct net_device *dev, int phy_id, int reg, int val)405405+{406406+ struct sis190_private *tp = netdev_priv(dev);407407+408408+ mdio_write(tp->mmio_addr, phy_id, reg, val);409409+}410410+411411+static int __mdio_read(struct net_device *dev, int phy_id, int reg)412412+{413413+ struct sis190_private *tp = netdev_priv(dev);414414+415415+ return mdio_read(tp->mmio_addr, phy_id, reg);416416+}417417+418418+static u16 mdio_read_latched(void __iomem *ioaddr, int phy_id, int reg)419419+{420420+ mdio_read(ioaddr, phy_id, reg);421421+ return mdio_read(ioaddr, phy_id, reg);422422+}423423+424424+static u16 __devinit sis190_read_eeprom(void __iomem *ioaddr, u32 reg)425425+{426426+ u16 data = 0xffff;427427+ unsigned int i;428428+429429+ if (!(SIS_R32(ROMControl) & 0x0002))430430+ return 0;431431+432432+ SIS_W32(ROMInterface, EEREQ | EEROP | (reg << 10));433433+434434+ for (i = 0; i < 200; i++) {435435+ if (!(SIS_R32(ROMInterface) & EEREQ)) {436436+ data = (SIS_R32(ROMInterface) & 0xffff0000) >> 16;437437+ break;438438+ }439439+ msleep(1);440440+ }441441+442442+ return data;443443+}444444+445445+static void sis190_irq_mask_and_ack(void __iomem *ioaddr)446446+{447447+ SIS_W32(IntrMask, 0x00);448448+ SIS_W32(IntrStatus, 0xffffffff);449449+ SIS_PCI_COMMIT();450450+}451451+452452+static void sis190_asic_down(void __iomem *ioaddr)453453+{454454+ /* Stop the chip's Tx and Rx DMA processes. 
*/455455+456456+ SIS_W32(TxControl, 0x1a00);457457+ SIS_W32(RxControl, 0x1a00);458458+459459+ sis190_irq_mask_and_ack(ioaddr);460460+}461461+462462+static void sis190_mark_as_last_descriptor(struct RxDesc *desc)463463+{464464+ desc->size |= cpu_to_le32(RingEnd);465465+}466466+467467+static inline void sis190_give_to_asic(struct RxDesc *desc, u32 rx_buf_sz)468468+{469469+ u32 eor = le32_to_cpu(desc->size) & RingEnd;470470+471471+ desc->PSize = 0x0;472472+ desc->size = cpu_to_le32((rx_buf_sz & RX_BUF_MASK) | eor);473473+ wmb();474474+ desc->status = cpu_to_le32(OWNbit | INTbit);475475+}476476+477477+static inline void sis190_map_to_asic(struct RxDesc *desc, dma_addr_t mapping,478478+ u32 rx_buf_sz)479479+{480480+ desc->addr = cpu_to_le32(mapping);481481+ sis190_give_to_asic(desc, rx_buf_sz);482482+}483483+484484+static inline void sis190_make_unusable_by_asic(struct RxDesc *desc)485485+{486486+ desc->PSize = 0x0;487487+ desc->addr = 0xdeadbeef;488488+ desc->size &= cpu_to_le32(RingEnd);489489+ wmb();490490+ desc->status = 0x0;491491+}492492+493493+static int sis190_alloc_rx_skb(struct pci_dev *pdev, struct sk_buff **sk_buff,494494+ struct RxDesc *desc, u32 rx_buf_sz)495495+{496496+ struct sk_buff *skb;497497+ dma_addr_t mapping;498498+ int ret = 0;499499+500500+ skb = dev_alloc_skb(rx_buf_sz);501501+ if (!skb)502502+ goto err_out;503503+504504+ *sk_buff = skb;505505+506506+ mapping = pci_map_single(pdev, skb->data, rx_buf_sz,507507+ PCI_DMA_FROMDEVICE);508508+509509+ sis190_map_to_asic(desc, mapping, rx_buf_sz);510510+out:511511+ return ret;512512+513513+err_out:514514+ ret = -ENOMEM;515515+ sis190_make_unusable_by_asic(desc);516516+ goto out;517517+}518518+519519+static u32 sis190_rx_fill(struct sis190_private *tp, struct net_device *dev,520520+ u32 start, u32 end)521521+{522522+ u32 cur;523523+524524+ for (cur = start; cur < end; cur++) {525525+ int ret, i = cur % NUM_RX_DESC;526526+527527+ if (tp->Rx_skbuff[i])528528+ continue;529529+530530+ ret = sis190_alloc_rx_skb(tp->pci_dev, tp->Rx_skbuff + i,531531+ tp->RxDescRing + i, tp->rx_buf_sz);532532+ if (ret < 0)533533+ break;534534+ }535535+ return cur - start;536536+}537537+538538+static inline int sis190_try_rx_copy(struct sk_buff **sk_buff, int pkt_size,539539+ struct RxDesc *desc, int rx_buf_sz)540540+{541541+ int ret = -1;542542+543543+ if (pkt_size < rx_copybreak) {544544+ struct sk_buff *skb;545545+546546+ skb = dev_alloc_skb(pkt_size + NET_IP_ALIGN);547547+ if (skb) {548548+ skb_reserve(skb, NET_IP_ALIGN);549549+ eth_copy_and_sum(skb, sk_buff[0]->data, pkt_size, 0);550550+ *sk_buff = skb;551551+ sis190_give_to_asic(desc, rx_buf_sz);552552+ ret = 0;553553+ }554554+ }555555+ return ret;556556+}557557+558558+static inline int sis190_rx_pkt_err(u32 status, struct net_device_stats *stats)559559+{560560+#define ErrMask (OVRUN | SHORT | LIMIT | MIIER | NIBON | COLON | ABORT)561561+562562+ if ((status & CRCOK) && !(status & ErrMask))563563+ return 0;564564+565565+ if (!(status & CRCOK))566566+ stats->rx_crc_errors++;567567+ else if (status & OVRUN)568568+ stats->rx_over_errors++;569569+ else if (status & (SHORT | LIMIT))570570+ stats->rx_length_errors++;571571+ else if (status & (MIIER | NIBON | COLON))572572+ stats->rx_frame_errors++;573573+574574+ stats->rx_errors++;575575+ return -1;576576+}577577+578578+static int sis190_rx_interrupt(struct net_device *dev,579579+ struct sis190_private *tp, void __iomem *ioaddr)580580+{581581+ struct net_device_stats *stats = &tp->stats;582582+ u32 rx_left, cur_rx = tp->cur_rx;583583+ u32 delta, 
count;584584+585585+ rx_left = NUM_RX_DESC + tp->dirty_rx - cur_rx;586586+ rx_left = sis190_rx_quota(rx_left, (u32) dev->quota);587587+588588+ for (; rx_left > 0; rx_left--, cur_rx++) {589589+ unsigned int entry = cur_rx % NUM_RX_DESC;590590+ struct RxDesc *desc = tp->RxDescRing + entry;591591+ u32 status;592592+593593+ if (desc->status & OWNbit)594594+ break;595595+596596+ status = le32_to_cpu(desc->PSize);597597+598598+ // net_intr(tp, KERN_INFO "%s: Rx PSize = %08x.\n", dev->name,599599+ // status);600600+601601+ if (sis190_rx_pkt_err(status, stats) < 0)602602+ sis190_give_to_asic(desc, tp->rx_buf_sz);603603+ else {604604+ struct sk_buff *skb = tp->Rx_skbuff[entry];605605+ int pkt_size = (status & RxSizeMask) - 4;606606+ void (*pci_action)(struct pci_dev *, dma_addr_t,607607+ size_t, int) = pci_dma_sync_single_for_device;608608+609609+ if (unlikely(pkt_size > tp->rx_buf_sz)) {610610+ net_intr(tp, KERN_INFO611611+ "%s: (frag) status = %08x.\n",612612+ dev->name, status);613613+ stats->rx_dropped++;614614+ stats->rx_length_errors++;615615+ sis190_give_to_asic(desc, tp->rx_buf_sz);616616+ continue;617617+ }618618+619619+ pci_dma_sync_single_for_cpu(tp->pci_dev,620620+ le32_to_cpu(desc->addr), tp->rx_buf_sz,621621+ PCI_DMA_FROMDEVICE);622622+623623+ if (sis190_try_rx_copy(&skb, pkt_size, desc,624624+ tp->rx_buf_sz)) {625625+ pci_action = pci_unmap_single;626626+ tp->Rx_skbuff[entry] = NULL;627627+ sis190_make_unusable_by_asic(desc);628628+ }629629+630630+ pci_action(tp->pci_dev, le32_to_cpu(desc->addr),631631+ tp->rx_buf_sz, PCI_DMA_FROMDEVICE);632632+633633+ skb->dev = dev;634634+ skb_put(skb, pkt_size);635635+ skb->protocol = eth_type_trans(skb, dev);636636+637637+ sis190_rx_skb(skb);638638+639639+ dev->last_rx = jiffies;640640+ stats->rx_packets++;641641+ stats->rx_bytes += pkt_size;642642+ if ((status & BCAST) == MCAST)643643+ stats->multicast++;644644+ }645645+ }646646+ count = cur_rx - tp->cur_rx;647647+ tp->cur_rx = cur_rx;648648+649649+ delta = sis190_rx_fill(tp, dev, tp->dirty_rx, tp->cur_rx);650650+ if (!delta && count && netif_msg_intr(tp))651651+ printk(KERN_INFO "%s: no Rx buffer allocated.\n", dev->name);652652+ tp->dirty_rx += delta;653653+654654+ if (((tp->dirty_rx + NUM_RX_DESC) == tp->cur_rx) && netif_msg_intr(tp))655655+ printk(KERN_EMERG "%s: Rx buffers exhausted.\n", dev->name);656656+657657+ return count;658658+}659659+660660+static void sis190_unmap_tx_skb(struct pci_dev *pdev, struct sk_buff *skb,661661+ struct TxDesc *desc)662662+{663663+ unsigned int len;664664+665665+ len = skb->len < ETH_ZLEN ? 
ETH_ZLEN : skb->len;666666+667667+ pci_unmap_single(pdev, le32_to_cpu(desc->addr), len, PCI_DMA_TODEVICE);668668+669669+ memset(desc, 0x00, sizeof(*desc));670670+}671671+672672+static void sis190_tx_interrupt(struct net_device *dev,673673+ struct sis190_private *tp, void __iomem *ioaddr)674674+{675675+ u32 pending, dirty_tx = tp->dirty_tx;676676+ /*677677+ * It would not be needed if queueing was allowed to be enabled678678+ * again too early (hint: think preempt and unclocked smp systems).679679+ */680680+ unsigned int queue_stopped;681681+682682+ smp_rmb();683683+ pending = tp->cur_tx - dirty_tx;684684+ queue_stopped = (pending == NUM_TX_DESC);685685+686686+ for (; pending; pending--, dirty_tx++) {687687+ unsigned int entry = dirty_tx % NUM_TX_DESC;688688+ struct TxDesc *txd = tp->TxDescRing + entry;689689+ struct sk_buff *skb;690690+691691+ if (le32_to_cpu(txd->status) & OWNbit)692692+ break;693693+694694+ skb = tp->Tx_skbuff[entry];695695+696696+ tp->stats.tx_packets++;697697+ tp->stats.tx_bytes += skb->len;698698+699699+ sis190_unmap_tx_skb(tp->pci_dev, skb, txd);700700+ tp->Tx_skbuff[entry] = NULL;701701+ dev_kfree_skb_irq(skb);702702+ }703703+704704+ if (tp->dirty_tx != dirty_tx) {705705+ tp->dirty_tx = dirty_tx;706706+ smp_wmb();707707+ if (queue_stopped)708708+ netif_wake_queue(dev);709709+ }710710+}711711+712712+/*713713+ * The interrupt handler does all of the Rx thread work and cleans up after714714+ * the Tx thread.715715+ */716716+static irqreturn_t sis190_interrupt(int irq, void *__dev, struct pt_regs *regs)717717+{718718+ struct net_device *dev = __dev;719719+ struct sis190_private *tp = netdev_priv(dev);720720+ void __iomem *ioaddr = tp->mmio_addr;721721+ unsigned int handled = 0;722722+ u32 status;723723+724724+ status = SIS_R32(IntrStatus);725725+726726+ if ((status == 0xffffffff) || !status)727727+ goto out;728728+729729+ handled = 1;730730+731731+ if (unlikely(!netif_running(dev))) {732732+ sis190_asic_down(ioaddr);733733+ goto out;734734+ }735735+736736+ SIS_W32(IntrStatus, status);737737+738738+ // net_intr(tp, KERN_INFO "%s: status = %08x.\n", dev->name, status);739739+740740+ if (status & LinkChange) {741741+ net_intr(tp, KERN_INFO "%s: link change.\n", dev->name);742742+ schedule_work(&tp->phy_task);743743+ }744744+745745+ if (status & RxQInt)746746+ sis190_rx_interrupt(dev, tp, ioaddr);747747+748748+ if (status & TxQ0Int)749749+ sis190_tx_interrupt(dev, tp, ioaddr);750750+out:751751+ return IRQ_RETVAL(handled);752752+}753753+754754+#ifdef CONFIG_NET_POLL_CONTROLLER755755+static void sis190_netpoll(struct net_device *dev)756756+{757757+ struct sis190_private *tp = netdev_priv(dev);758758+ struct pci_dev *pdev = tp->pci_dev;759759+760760+ disable_irq(pdev->irq);761761+ sis190_interrupt(pdev->irq, dev, NULL);762762+ enable_irq(pdev->irq);763763+}764764+#endif765765+766766+static void sis190_free_rx_skb(struct sis190_private *tp,767767+ struct sk_buff **sk_buff, struct RxDesc *desc)768768+{769769+ struct pci_dev *pdev = tp->pci_dev;770770+771771+ pci_unmap_single(pdev, le32_to_cpu(desc->addr), tp->rx_buf_sz,772772+ PCI_DMA_FROMDEVICE);773773+ dev_kfree_skb(*sk_buff);774774+ *sk_buff = NULL;775775+ sis190_make_unusable_by_asic(desc);776776+}777777+778778+static void sis190_rx_clear(struct sis190_private *tp)779779+{780780+ unsigned int i;781781+782782+ for (i = 0; i < NUM_RX_DESC; i++) {783783+ if (!tp->Rx_skbuff[i])784784+ continue;785785+ sis190_free_rx_skb(tp, tp->Rx_skbuff + i, tp->RxDescRing + i);786786+ }787787+}788788+789789+static void 
sis190_init_ring_indexes(struct sis190_private *tp)790790+{791791+ tp->dirty_tx = tp->dirty_rx = tp->cur_tx = tp->cur_rx = 0;792792+}793793+794794+static int sis190_init_ring(struct net_device *dev)795795+{796796+ struct sis190_private *tp = netdev_priv(dev);797797+798798+ sis190_init_ring_indexes(tp);799799+800800+ memset(tp->Tx_skbuff, 0x0, NUM_TX_DESC * sizeof(struct sk_buff *));801801+ memset(tp->Rx_skbuff, 0x0, NUM_RX_DESC * sizeof(struct sk_buff *));802802+803803+ if (sis190_rx_fill(tp, dev, 0, NUM_RX_DESC) != NUM_RX_DESC)804804+ goto err_rx_clear;805805+806806+ sis190_mark_as_last_descriptor(tp->RxDescRing + NUM_RX_DESC - 1);807807+808808+ return 0;809809+810810+err_rx_clear:811811+ sis190_rx_clear(tp);812812+ return -ENOMEM;813813+}814814+815815+static void sis190_set_rx_mode(struct net_device *dev)816816+{817817+ struct sis190_private *tp = netdev_priv(dev);818818+ void __iomem *ioaddr = tp->mmio_addr;819819+ unsigned long flags;820820+ u32 mc_filter[2]; /* Multicast hash filter */821821+ u16 rx_mode;822822+823823+ if (dev->flags & IFF_PROMISC) {824824+ /* Unconditionally log net taps. */825825+ net_drv(tp, KERN_NOTICE "%s: Promiscuous mode enabled.\n",826826+ dev->name);827827+ rx_mode =828828+ AcceptBroadcast | AcceptMulticast | AcceptMyPhys |829829+ AcceptAllPhys;830830+ mc_filter[1] = mc_filter[0] = 0xffffffff;831831+ } else if ((dev->mc_count > multicast_filter_limit) ||832832+ (dev->flags & IFF_ALLMULTI)) {833833+ /* Too many to filter perfectly -- accept all multicasts. */834834+ rx_mode = AcceptBroadcast | AcceptMulticast | AcceptMyPhys;835835+ mc_filter[1] = mc_filter[0] = 0xffffffff;836836+ } else {837837+ struct dev_mc_list *mclist;838838+ unsigned int i;839839+840840+ rx_mode = AcceptBroadcast | AcceptMyPhys;841841+ mc_filter[1] = mc_filter[0] = 0;842842+ for (i = 0, mclist = dev->mc_list; mclist && i < dev->mc_count;843843+ i++, mclist = mclist->next) {844844+ int bit_nr =845845+ ether_crc(ETH_ALEN, mclist->dmi_addr) >> 26;846846+ mc_filter[bit_nr >> 5] |= 1 << (bit_nr & 31);847847+ rx_mode |= AcceptMulticast;848848+ }849849+ }850850+851851+ spin_lock_irqsave(&tp->lock, flags);852852+853853+ SIS_W16(RxMacControl, rx_mode | 0x2);854854+ SIS_W32(RxHashTable, mc_filter[0]);855855+ SIS_W32(RxHashTable + 4, mc_filter[1]);856856+857857+ spin_unlock_irqrestore(&tp->lock, flags);858858+}859859+860860+static void sis190_soft_reset(void __iomem *ioaddr)861861+{862862+ SIS_W32(IntrControl, 0x8000);863863+ SIS_PCI_COMMIT();864864+ msleep(1);865865+ SIS_W32(IntrControl, 0x0);866866+ sis190_asic_down(ioaddr);867867+ msleep(1);868868+}869869+870870+static void sis190_hw_start(struct net_device *dev)871871+{872872+ struct sis190_private *tp = netdev_priv(dev);873873+ void __iomem *ioaddr = tp->mmio_addr;874874+875875+ sis190_soft_reset(ioaddr);876876+877877+ SIS_W32(TxDescStartAddr, tp->tx_dma);878878+ SIS_W32(RxDescStartAddr, tp->rx_dma);879879+880880+ SIS_W32(IntrStatus, 0xffffffff);881881+ SIS_W32(IntrMask, 0x0);882882+ /*883883+ * Default is 100Mbps.884884+ * A bit strange: 100Mbps is 0x1801 elsewhere -- FR 2005/06/09885885+ */886886+ SIS_W16(StationControl, 0x1901);887887+ SIS_W32(GMIIControl, 0x0);888888+ SIS_W32(TxMacControl, 0x60);889889+ SIS_W16(RxMacControl, 0x02);890890+ SIS_W32(RxHashTable, 0x0);891891+ SIS_W32(0x6c, 0x0);892892+ SIS_W32(RxWolCtrl, 0x0);893893+ SIS_W32(RxWolData, 0x0);894894+895895+ SIS_PCI_COMMIT();896896+897897+ sis190_set_rx_mode(dev);898898+899899+ /* Enable all known interrupts by setting the interrupt mask. 
*/900900+ SIS_W32(IntrMask, sis190_intr_mask);901901+902902+ SIS_W32(TxControl, 0x1a00 | CmdTxEnb);903903+ SIS_W32(RxControl, 0x1a1d);904904+905905+ netif_start_queue(dev);906906+}907907+908908+static void sis190_phy_task(void * data)909909+{910910+ struct net_device *dev = data;911911+ struct sis190_private *tp = netdev_priv(dev);912912+ void __iomem *ioaddr = tp->mmio_addr;913913+ int phy_id = tp->mii_if.phy_id;914914+ u16 val;915915+916916+ rtnl_lock();917917+918918+ val = mdio_read(ioaddr, phy_id, MII_BMCR);919919+ if (val & BMCR_RESET) {920920+ // FIXME: needlessly high ? -- FR 02/07/2005921921+ mod_timer(&tp->timer, jiffies + HZ/10);922922+ } else if (!(mdio_read_latched(ioaddr, phy_id, MII_BMSR) &923923+ BMSR_ANEGCOMPLETE)) {924924+ net_link(tp, KERN_WARNING "%s: PHY reset until link up.\n",925925+ dev->name);926926+ mdio_write(ioaddr, phy_id, MII_BMCR, val | BMCR_RESET);927927+ mod_timer(&tp->timer, jiffies + SIS190_PHY_TIMEOUT);928928+ } else {929929+ /* Rejoice ! */930930+ struct {931931+ int val;932932+ const char *msg;933933+ u16 ctl;934934+ } reg31[] = {935935+ { LPA_1000XFULL | LPA_SLCT,936936+ "1000 Mbps Full Duplex",937937+ 0x01 | _1000bpsF },938938+ { LPA_1000XHALF | LPA_SLCT,939939+ "1000 Mbps Half Duplex",940940+ 0x01 | _1000bpsH },941941+ { LPA_100FULL,942942+ "100 Mbps Full Duplex",943943+ 0x01 | _100bpsF },944944+ { LPA_100HALF,945945+ "100 Mbps Half Duplex",946946+ 0x01 | _100bpsH },947947+ { LPA_10FULL,948948+ "10 Mbps Full Duplex",949949+ 0x01 | _10bpsF },950950+ { LPA_10HALF,951951+ "10 Mbps Half Duplex",952952+ 0x01 | _10bpsH },953953+ { 0, "unknown", 0x0000 }954954+ }, *p;955955+ u16 adv;956956+957957+ val = mdio_read(ioaddr, phy_id, 0x1f);958958+ net_link(tp, KERN_INFO "%s: mii ext = %04x.\n", dev->name, val);959959+960960+ val = mdio_read(ioaddr, phy_id, MII_LPA);961961+ adv = mdio_read(ioaddr, phy_id, MII_ADVERTISE);962962+ net_link(tp, KERN_INFO "%s: mii lpa = %04x adv = %04x.\n",963963+ dev->name, val, adv);964964+965965+ val &= adv;966966+967967+ for (p = reg31; p->ctl; p++) {968968+ if ((val & p->val) == p->val)969969+ break;970970+ }971971+ if (p->ctl)972972+ SIS_W16(StationControl, p->ctl);973973+ net_link(tp, KERN_INFO "%s: link on %s mode.\n", dev->name,974974+ p->msg);975975+ netif_carrier_on(dev);976976+ }977977+978978+ rtnl_unlock();979979+}980980+981981+static void sis190_phy_timer(unsigned long __opaque)982982+{983983+ struct net_device *dev = (struct net_device *)__opaque;984984+ struct sis190_private *tp = netdev_priv(dev);985985+986986+ if (likely(netif_running(dev)))987987+ schedule_work(&tp->phy_task);988988+}989989+990990+static inline void sis190_delete_timer(struct net_device *dev)991991+{992992+ struct sis190_private *tp = netdev_priv(dev);993993+994994+ del_timer_sync(&tp->timer);995995+}996996+997997+static inline void sis190_request_timer(struct net_device *dev)998998+{999999+ struct sis190_private *tp = netdev_priv(dev);10001000+ struct timer_list *timer = &tp->timer;10011001+10021002+ init_timer(timer);10031003+ timer->expires = jiffies + SIS190_PHY_TIMEOUT;10041004+ timer->data = (unsigned long)dev;10051005+ timer->function = sis190_phy_timer;10061006+ add_timer(timer);10071007+}10081008+10091009+static void sis190_set_rxbufsize(struct sis190_private *tp,10101010+ struct net_device *dev)10111011+{10121012+ unsigned int mtu = dev->mtu;10131013+10141014+ tp->rx_buf_sz = (mtu > RX_BUF_SIZE) ? 
mtu + ETH_HLEN + 8 : RX_BUF_SIZE;10151015+ /* RxDesc->size has a licence to kill the lower bits */10161016+ if (tp->rx_buf_sz & 0x07) {10171017+ tp->rx_buf_sz += 8;10181018+ tp->rx_buf_sz &= RX_BUF_MASK;10191019+ }10201020+}10211021+10221022+static int sis190_open(struct net_device *dev)10231023+{10241024+ struct sis190_private *tp = netdev_priv(dev);10251025+ struct pci_dev *pdev = tp->pci_dev;10261026+ int rc = -ENOMEM;10271027+10281028+ sis190_set_rxbufsize(tp, dev);10291029+10301030+ /*10311031+ * Rx and Tx descriptors need 256 bytes alignment.10321032+ * pci_alloc_consistent() guarantees a stronger alignment.10331033+ */10341034+ tp->TxDescRing = pci_alloc_consistent(pdev, TX_RING_BYTES, &tp->tx_dma);10351035+ if (!tp->TxDescRing)10361036+ goto out;10371037+10381038+ tp->RxDescRing = pci_alloc_consistent(pdev, RX_RING_BYTES, &tp->rx_dma);10391039+ if (!tp->RxDescRing)10401040+ goto err_free_tx_0;10411041+10421042+ rc = sis190_init_ring(dev);10431043+ if (rc < 0)10441044+ goto err_free_rx_1;10451045+10461046+ INIT_WORK(&tp->phy_task, sis190_phy_task, dev);10471047+10481048+ sis190_request_timer(dev);10491049+10501050+ rc = request_irq(dev->irq, sis190_interrupt, SA_SHIRQ, dev->name, dev);10511051+ if (rc < 0)10521052+ goto err_release_timer_2;10531053+10541054+ sis190_hw_start(dev);10551055+out:10561056+ return rc;10571057+10581058+err_release_timer_2:10591059+ sis190_delete_timer(dev);10601060+ sis190_rx_clear(tp);10611061+err_free_rx_1:10621062+ pci_free_consistent(tp->pci_dev, RX_RING_BYTES, tp->RxDescRing,10631063+ tp->rx_dma);10641064+err_free_tx_0:10651065+ pci_free_consistent(tp->pci_dev, TX_RING_BYTES, tp->TxDescRing,10661066+ tp->tx_dma);10671067+ goto out;10681068+}10691069+10701070+static void sis190_tx_clear(struct sis190_private *tp)10711071+{10721072+ unsigned int i;10731073+10741074+ for (i = 0; i < NUM_TX_DESC; i++) {10751075+ struct sk_buff *skb = tp->Tx_skbuff[i];10761076+10771077+ if (!skb)10781078+ continue;10791079+10801080+ sis190_unmap_tx_skb(tp->pci_dev, skb, tp->TxDescRing + i);10811081+ tp->Tx_skbuff[i] = NULL;10821082+ dev_kfree_skb(skb);10831083+10841084+ tp->stats.tx_dropped++;10851085+ }10861086+ tp->cur_tx = tp->dirty_tx = 0;10871087+}10881088+10891089+static void sis190_down(struct net_device *dev)10901090+{10911091+ struct sis190_private *tp = netdev_priv(dev);10921092+ void __iomem *ioaddr = tp->mmio_addr;10931093+ unsigned int poll_locked = 0;10941094+10951095+ sis190_delete_timer(dev);10961096+10971097+ netif_stop_queue(dev);10981098+10991099+ flush_scheduled_work();11001100+11011101+ do {11021102+ spin_lock_irq(&tp->lock);11031103+11041104+ sis190_asic_down(ioaddr);11051105+11061106+ spin_unlock_irq(&tp->lock);11071107+11081108+ synchronize_irq(dev->irq);11091109+11101110+ if (!poll_locked) {11111111+ netif_poll_disable(dev);11121112+ poll_locked++;11131113+ }11141114+11151115+ synchronize_sched();11161116+11171117+ } while (SIS_R32(IntrMask));11181118+11191119+ sis190_tx_clear(tp);11201120+ sis190_rx_clear(tp);11211121+}11221122+11231123+static int sis190_close(struct net_device *dev)11241124+{11251125+ struct sis190_private *tp = netdev_priv(dev);11261126+ struct pci_dev *pdev = tp->pci_dev;11271127+11281128+ sis190_down(dev);11291129+11301130+ free_irq(dev->irq, dev);11311131+11321132+ netif_poll_enable(dev);11331133+11341134+ pci_free_consistent(pdev, TX_RING_BYTES, tp->TxDescRing, tp->tx_dma);11351135+ pci_free_consistent(pdev, RX_RING_BYTES, tp->RxDescRing, tp->rx_dma);11361136+11371137+ tp->TxDescRing = NULL;11381138+ tp->RxDescRing = 
NULL;11391139+11401140+ return 0;11411141+}11421142+11431143+static int sis190_start_xmit(struct sk_buff *skb, struct net_device *dev)11441144+{11451145+ struct sis190_private *tp = netdev_priv(dev);11461146+ void __iomem *ioaddr = tp->mmio_addr;11471147+ u32 len, entry, dirty_tx;11481148+ struct TxDesc *desc;11491149+ dma_addr_t mapping;11501150+11511151+ if (unlikely(skb->len < ETH_ZLEN)) {11521152+ skb = skb_padto(skb, ETH_ZLEN);11531153+ if (!skb) {11541154+ tp->stats.tx_dropped++;11551155+ goto out;11561156+ }11571157+ len = ETH_ZLEN;11581158+ } else {11591159+ len = skb->len;11601160+ }11611161+11621162+ entry = tp->cur_tx % NUM_TX_DESC;11631163+ desc = tp->TxDescRing + entry;11641164+11651165+ if (unlikely(le32_to_cpu(desc->status) & OWNbit)) {11661166+ netif_stop_queue(dev);11671167+ net_tx_err(tp, KERN_ERR PFX11681168+ "%s: BUG! Tx Ring full when queue awake!\n",11691169+ dev->name);11701170+ return NETDEV_TX_BUSY;11711171+ }11721172+11731173+ mapping = pci_map_single(tp->pci_dev, skb->data, len, PCI_DMA_TODEVICE);11741174+11751175+ tp->Tx_skbuff[entry] = skb;11761176+11771177+ desc->PSize = cpu_to_le32(len);11781178+ desc->addr = cpu_to_le32(mapping);11791179+11801180+ desc->size = cpu_to_le32(len);11811181+ if (entry == (NUM_TX_DESC - 1))11821182+ desc->size |= cpu_to_le32(RingEnd);11831183+11841184+ wmb();11851185+11861186+ desc->status = cpu_to_le32(OWNbit | INTbit | DEFbit | CRCbit | PADbit);11871187+11881188+ tp->cur_tx++;11891189+11901190+ smp_wmb();11911191+11921192+ SIS_W32(TxControl, 0x1a00 | CmdReset | CmdTxEnb);11931193+11941194+ dev->trans_start = jiffies;11951195+11961196+ dirty_tx = tp->dirty_tx;11971197+ if ((tp->cur_tx - NUM_TX_DESC) == dirty_tx) {11981198+ netif_stop_queue(dev);11991199+ smp_rmb();12001200+ if (dirty_tx != tp->dirty_tx)12011201+ netif_wake_queue(dev);12021202+ }12031203+out:12041204+ return NETDEV_TX_OK;12051205+}12061206+12071207+static struct net_device_stats *sis190_get_stats(struct net_device *dev)12081208+{12091209+ struct sis190_private *tp = netdev_priv(dev);12101210+12111211+ return &tp->stats;12121212+}12131213+12141214+static void sis190_free_phy(struct list_head *first_phy)12151215+{12161216+ struct sis190_phy *cur, *next;12171217+12181218+ list_for_each_entry_safe(cur, next, first_phy, list) {12191219+ kfree(cur);12201220+ }12211221+}12221222+12231223+/**12241224+ * sis190_default_phy - Select default PHY for sis190 mac.12251225+ * @dev: the net device to probe for12261226+ *12271227+ * Select first detected PHY with link as default.12281228+ * If no one is link on, select PHY whose types is HOME as default.12291229+ * If HOME doesn't exist, select LAN.12301230+ */12311231+static u16 sis190_default_phy(struct net_device *dev)12321232+{12331233+ struct sis190_phy *phy, *phy_home, *phy_default, *phy_lan;12341234+ struct sis190_private *tp = netdev_priv(dev);12351235+ struct mii_if_info *mii_if = &tp->mii_if;12361236+ void __iomem *ioaddr = tp->mmio_addr;12371237+ u16 status;12381238+12391239+ phy_home = phy_default = phy_lan = NULL;12401240+12411241+ list_for_each_entry(phy, &tp->first_phy, list) {12421242+ status = mdio_read_latched(ioaddr, phy->phy_id, MII_BMSR);12431243+12441244+ // Link ON & Not select default PHY & not ghost PHY.12451245+ if ((status & BMSR_LSTATUS) &&12461246+ !phy_default &&12471247+ (phy->type != UNKNOWN)) {12481248+ phy_default = phy;12491249+ } else {12501250+ status = mdio_read(ioaddr, phy->phy_id, MII_BMCR);12511251+ mdio_write(ioaddr, phy->phy_id, MII_BMCR,12521252+ status | BMCR_ANENABLE | 
BMCR_ISOLATE);12531253+ if (phy->type == HOME)12541254+ phy_home = phy;12551255+ else if (phy->type == LAN)12561256+ phy_lan = phy;12571257+ }12581258+ }12591259+12601260+ if (!phy_default) {12611261+ if (phy_home)12621262+ phy_default = phy_home;12631263+ else if (phy_lan)12641264+ phy_default = phy_lan;12651265+ else12661266+ phy_default = list_entry(&tp->first_phy,12671267+ struct sis190_phy, list);12681268+ }12691269+12701270+ if (mii_if->phy_id != phy_default->phy_id) {12711271+ mii_if->phy_id = phy_default->phy_id;12721272+ net_probe(tp, KERN_INFO12731273+ "%s: Using transceiver at address %d as default.\n",12741274+ pci_name(tp->pci_dev), mii_if->phy_id);12751275+ }12761276+12771277+ status = mdio_read(ioaddr, mii_if->phy_id, MII_BMCR);12781278+ status &= (~BMCR_ISOLATE);12791279+12801280+ mdio_write(ioaddr, mii_if->phy_id, MII_BMCR, status);12811281+ status = mdio_read_latched(ioaddr, mii_if->phy_id, MII_BMSR);12821282+12831283+ return status;12841284+}12851285+12861286+static void sis190_init_phy(struct net_device *dev, struct sis190_private *tp,12871287+ struct sis190_phy *phy, unsigned int phy_id,12881288+ u16 mii_status)12891289+{12901290+ void __iomem *ioaddr = tp->mmio_addr;12911291+ struct mii_chip_info *p;12921292+12931293+ INIT_LIST_HEAD(&phy->list);12941294+ phy->status = mii_status;12951295+ phy->phy_id = phy_id;12961296+12971297+ phy->id[0] = mdio_read(ioaddr, phy_id, MII_PHYSID1);12981298+ phy->id[1] = mdio_read(ioaddr, phy_id, MII_PHYSID2);12991299+13001300+ for (p = mii_chip_table; p->type; p++) {13011301+ if ((p->id[0] == phy->id[0]) &&13021302+ (p->id[1] == (phy->id[1] & 0xfff0))) {13031303+ break;13041304+ }13051305+ }13061306+13071307+ if (p->id[1]) {13081308+ phy->type = (p->type == MIX) ?13091309+ ((mii_status & (BMSR_100FULL | BMSR_100HALF)) ?13101310+ LAN : HOME) : p->type;13111311+ } else13121312+ phy->type = UNKNOWN;13131313+13141314+ net_probe(tp, KERN_INFO "%s: %s transceiver at address %d.\n",13151315+ pci_name(tp->pci_dev),13161316+ (phy->type == UNKNOWN) ? 
"Unknown PHY" : p->name, phy_id);13171317+}13181318+13191319+/**13201320+ * sis190_mii_probe - Probe MII PHY for sis19013211321+ * @dev: the net device to probe for13221322+ *13231323+ * Search for total of 32 possible mii phy addresses.13241324+ * Identify and set current phy if found one,13251325+ * return error if it failed to found.13261326+ */13271327+static int __devinit sis190_mii_probe(struct net_device *dev)13281328+{13291329+ struct sis190_private *tp = netdev_priv(dev);13301330+ struct mii_if_info *mii_if = &tp->mii_if;13311331+ void __iomem *ioaddr = tp->mmio_addr;13321332+ int phy_id;13331333+ int rc = 0;13341334+13351335+ INIT_LIST_HEAD(&tp->first_phy);13361336+13371337+ for (phy_id = 0; phy_id < PHY_MAX_ADDR; phy_id++) {13381338+ struct sis190_phy *phy;13391339+ u16 status;13401340+13411341+ status = mdio_read_latched(ioaddr, phy_id, MII_BMSR);13421342+13431343+ // Try next mii if the current one is not accessible.13441344+ if (status == 0xffff || status == 0x0000)13451345+ continue;13461346+13471347+ phy = kmalloc(sizeof(*phy), GFP_KERNEL);13481348+ if (!phy) {13491349+ sis190_free_phy(&tp->first_phy);13501350+ rc = -ENOMEM;13511351+ goto out;13521352+ }13531353+13541354+ sis190_init_phy(dev, tp, phy, phy_id, status);13551355+13561356+ list_add(&tp->first_phy, &phy->list);13571357+ }13581358+13591359+ if (list_empty(&tp->first_phy)) {13601360+ net_probe(tp, KERN_INFO "%s: No MII transceivers found!\n",13611361+ pci_name(tp->pci_dev));13621362+ rc = -EIO;13631363+ goto out;13641364+ }13651365+13661366+ /* Select default PHY for mac */13671367+ sis190_default_phy(dev);13681368+13691369+ mii_if->dev = dev;13701370+ mii_if->mdio_read = __mdio_read;13711371+ mii_if->mdio_write = __mdio_write;13721372+ mii_if->phy_id_mask = PHY_ID_ANY;13731373+ mii_if->reg_num_mask = MII_REG_ANY;13741374+out:13751375+ return rc;13761376+}13771377+13781378+static void __devexit sis190_mii_remove(struct net_device *dev)13791379+{13801380+ struct sis190_private *tp = netdev_priv(dev);13811381+13821382+ sis190_free_phy(&tp->first_phy);13831383+}13841384+13851385+static void sis190_release_board(struct pci_dev *pdev)13861386+{13871387+ struct net_device *dev = pci_get_drvdata(pdev);13881388+ struct sis190_private *tp = netdev_priv(dev);13891389+13901390+ iounmap(tp->mmio_addr);13911391+ pci_release_regions(pdev);13921392+ pci_disable_device(pdev);13931393+ free_netdev(dev);13941394+}13951395+13961396+static struct net_device * __devinit sis190_init_board(struct pci_dev *pdev)13971397+{13981398+ struct sis190_private *tp;13991399+ struct net_device *dev;14001400+ void __iomem *ioaddr;14011401+ int rc;14021402+14031403+ dev = alloc_etherdev(sizeof(*tp));14041404+ if (!dev) {14051405+ net_drv(&debug, KERN_ERR PFX "unable to alloc new ethernet\n");14061406+ rc = -ENOMEM;14071407+ goto err_out_0;14081408+ }14091409+14101410+ SET_MODULE_OWNER(dev);14111411+ SET_NETDEV_DEV(dev, &pdev->dev);14121412+14131413+ tp = netdev_priv(dev);14141414+ tp->msg_enable = netif_msg_init(debug.msg_enable, SIS190_MSG_DEFAULT);14151415+14161416+ rc = pci_enable_device(pdev);14171417+ if (rc < 0) {14181418+ net_probe(tp, KERN_ERR "%s: enable failure\n", pci_name(pdev));14191419+ goto err_free_dev_1;14201420+ }14211421+14221422+ rc = -ENODEV;14231423+14241424+ if (!(pci_resource_flags(pdev, 0) & IORESOURCE_MEM)) {14251425+ net_probe(tp, KERN_ERR "%s: region #0 is no MMIO resource.\n",14261426+ pci_name(pdev));14271427+ goto err_pci_disable_2;14281428+ }14291429+ if (pci_resource_len(pdev, 0) < SIS190_REGS_SIZE) {14301430+ 
net_probe(tp, KERN_ERR "%s: invalid PCI region size(s).\n",14311431+ pci_name(pdev));14321432+ goto err_pci_disable_2;14331433+ }14341434+14351435+ rc = pci_request_regions(pdev, DRV_NAME);14361436+ if (rc < 0) {14371437+ net_probe(tp, KERN_ERR PFX "%s: could not request regions.\n",14381438+ pci_name(pdev));14391439+ goto err_pci_disable_2;14401440+ }14411441+14421442+ rc = pci_set_dma_mask(pdev, DMA_32BIT_MASK);14431443+ if (rc < 0) {14441444+ net_probe(tp, KERN_ERR "%s: DMA configuration failed.\n",14451445+ pci_name(pdev));14461446+ goto err_free_res_3;14471447+ }14481448+14491449+ pci_set_master(pdev);14501450+14511451+ ioaddr = ioremap(pci_resource_start(pdev, 0), SIS190_REGS_SIZE);14521452+ if (!ioaddr) {14531453+ net_probe(tp, KERN_ERR "%s: cannot remap MMIO, aborting\n",14541454+ pci_name(pdev));14551455+ rc = -EIO;14561456+ goto err_free_res_3;14571457+ }14581458+14591459+ tp->pci_dev = pdev;14601460+ tp->mmio_addr = ioaddr;14611461+14621462+ sis190_irq_mask_and_ack(ioaddr);14631463+14641464+ sis190_soft_reset(ioaddr);14651465+out:14661466+ return dev;14671467+14681468+err_free_res_3:14691469+ pci_release_regions(pdev);14701470+err_pci_disable_2:14711471+ pci_disable_device(pdev);14721472+err_free_dev_1:14731473+ free_netdev(dev);14741474+err_out_0:14751475+ dev = ERR_PTR(rc);14761476+ goto out;14771477+}14781478+14791479+static void sis190_tx_timeout(struct net_device *dev)14801480+{14811481+ struct sis190_private *tp = netdev_priv(dev);14821482+ void __iomem *ioaddr = tp->mmio_addr;14831483+ u8 tmp8;14841484+14851485+ /* Disable Tx, if not already */14861486+ tmp8 = SIS_R8(TxControl);14871487+ if (tmp8 & CmdTxEnb)14881488+ SIS_W8(TxControl, tmp8 & ~CmdTxEnb);14891489+14901490+14911491+ net_tx_err(tp, KERN_INFO "%s: Transmit timeout, status %08x %08x.\n",14921492+ dev->name, SIS_R32(TxControl), SIS_R32(TxSts));14931493+14941494+ /* Disable interrupts by clearing the interrupt mask. */14951495+ SIS_W32(IntrMask, 0x0000);14961496+14971497+ /* Stop a shared interrupt from scavenging while we are. */14981498+ spin_lock_irq(&tp->lock);14991499+ sis190_tx_clear(tp);15001500+ spin_unlock_irq(&tp->lock);15011501+15021502+ /* ...and finally, reset everything. 
 */
	sis190_hw_start(dev);

	netif_wake_queue(dev);
}

static int __devinit sis190_get_mac_addr_from_eeprom(struct pci_dev *pdev,
						      struct net_device *dev)
{
	struct sis190_private *tp = netdev_priv(dev);
	void __iomem *ioaddr = tp->mmio_addr;
	u16 sig;
	int i;

	net_probe(tp, KERN_INFO "%s: Read MAC address from EEPROM\n",
		  pci_name(pdev));

	/* Check to see if there is a sane EEPROM */
	sig = (u16) sis190_read_eeprom(ioaddr, EEPROMSignature);

	if ((sig == 0xffff) || (sig == 0x0000)) {
		net_probe(tp, KERN_INFO "%s: Error EEPROM read %x.\n",
			  pci_name(pdev), sig);
		return -EIO;
	}

	/* Get MAC address from EEPROM */
	for (i = 0; i < MAC_ADDR_LEN / 2; i++) {
		__le16 w = sis190_read_eeprom(ioaddr, EEPROMMACAddr + i);

		((u16 *)dev->dev_addr)[i] = le16_to_cpu(w);
	}

	return 0;
}

/**
 *	sis190_get_mac_addr_from_apc - Get MAC address for SiS965 model
 *	@pdev: PCI device
 *	@dev:  network device to get address for
 *
 *	SiS965 model, use APC CMOS RAM to store MAC address.
 *	APC CMOS RAM is accessed through ISA bridge.
 *	MAC address is read into @dev->dev_addr.
 */
static int __devinit sis190_get_mac_addr_from_apc(struct pci_dev *pdev,
						  struct net_device *dev)
{
	struct sis190_private *tp = netdev_priv(dev);
	struct pci_dev *isa_bridge;
	u8 reg, tmp8;
	int i;

	net_probe(tp, KERN_INFO "%s: Read MAC address from APC.\n",
		  pci_name(pdev));

	isa_bridge = pci_get_device(PCI_VENDOR_ID_SI, 0x0965, NULL);
	if (!isa_bridge) {
		net_probe(tp, KERN_INFO "%s: Can not find ISA bridge.\n",
			  pci_name(pdev));
		return -EIO;
	}

	/* Enable port 78h & 79h to access APC Registers. */
	pci_read_config_byte(isa_bridge, 0x48, &tmp8);
	reg = (tmp8 & ~0x02);
	pci_write_config_byte(isa_bridge, 0x48, reg);
	udelay(50);
	pci_read_config_byte(isa_bridge, 0x48, &reg);

	for (i = 0; i < MAC_ADDR_LEN; i++) {
		outb(0x9 + i, 0x78);
		dev->dev_addr[i] = inb(0x79);
	}

	outb(0x12, 0x78);
	reg = inb(0x79);

	/* Restore the value to ISA Bridge */
	pci_write_config_byte(isa_bridge, 0x48, tmp8);
	pci_dev_put(isa_bridge);

	return 0;
}

/**
 *	sis190_init_rxfilter - Initialize the Rx filter
 *	@dev: network device to initialize
 *
 *	Set receive filter address to our MAC address
 *	and enable packet filtering.
 */
static inline void sis190_init_rxfilter(struct net_device *dev)
{
	struct sis190_private *tp = netdev_priv(dev);
	void __iomem *ioaddr = tp->mmio_addr;
	u16 ctl;
	int i;

	ctl = SIS_R16(RxMacControl);
	/*
	 * Disable packet filtering before setting filter.
	 * Note: SiS's driver writes 32 bits but RxMacControl is 16 bits
	 * only and followed by RxMacAddr (6 bytes). Strange. -- FR
	 */
	SIS_W16(RxMacControl, ctl & ~0x0f00);

	for (i = 0; i < MAC_ADDR_LEN; i++)
		SIS_W8(RxMacAddr + i, dev->dev_addr[i]);

	SIS_W16(RxMacControl, ctl);
	SIS_PCI_COMMIT();
}

/* Bit 0 of PCI config register 0x73 selects where the MAC address is
 * stored: the APC CMOS RAM (SiS965 bridge) or the EEPROM. */
static int sis190_get_mac_addr(struct pci_dev *pdev, struct net_device *dev)
{
	u8 from;

	pci_read_config_byte(pdev, 0x73, &from);

	return (from & 0x00000001) ?
		sis190_get_mac_addr_from_apc(pdev, dev) :
		sis190_get_mac_addr_from_eeprom(pdev, dev);
}

static void sis190_set_speed_auto(struct net_device *dev)
{
	struct sis190_private *tp = netdev_priv(dev);
	void __iomem *ioaddr = tp->mmio_addr;
	int phy_id = tp->mii_if.phy_id;
	int val;

	net_link(tp, KERN_INFO "%s: Enabling Auto-negotiation.\n", dev->name);

	val = mdio_read(ioaddr, phy_id, MII_ADVERTISE);

	// Enable 10/100 Full/Half Mode, leave MII_ADVERTISE bit4:0
	// unchanged.
	mdio_write(ioaddr, phy_id, MII_ADVERTISE, (val & ADVERTISE_SLCT) |
		   ADVERTISE_100FULL | ADVERTISE_10FULL |
		   ADVERTISE_100HALF | ADVERTISE_10HALF);

	// Enable 1000 Full Mode.
	mdio_write(ioaddr, phy_id, MII_CTRL1000, ADVERTISE_1000FULL);

	// Enable auto-negotiation and restart auto-negotiation.
	mdio_write(ioaddr, phy_id, MII_BMCR,
		   BMCR_ANENABLE | BMCR_ANRESTART | BMCR_RESET);
}

static int sis190_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
{
	struct sis190_private *tp = netdev_priv(dev);

	return mii_ethtool_gset(&tp->mii_if, cmd);
}

static int sis190_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
{
	struct sis190_private *tp = netdev_priv(dev);

	return mii_ethtool_sset(&tp->mii_if, cmd);
}

static void sis190_get_drvinfo(struct net_device *dev,
			       struct ethtool_drvinfo *info)
{
	struct sis190_private *tp = netdev_priv(dev);

	strcpy(info->driver, DRV_NAME);
	strcpy(info->version, DRV_VERSION);
	strcpy(info->bus_info, pci_name(tp->pci_dev));
}

static int sis190_get_regs_len(struct net_device *dev)
{
	return SIS190_REGS_SIZE;
}

static void sis190_get_regs(struct net_device *dev, struct ethtool_regs *regs,
			    void *p)
{
	struct sis190_private *tp = netdev_priv(dev);
	unsigned long flags;

	if (regs->len > SIS190_REGS_SIZE)
		regs->len = SIS190_REGS_SIZE;

	spin_lock_irqsave(&tp->lock, flags);
	memcpy_fromio(p, tp->mmio_addr, regs->len);
	spin_unlock_irqrestore(&tp->lock, flags);
}

static int sis190_nway_reset(struct net_device *dev)
{
	struct sis190_private *tp = netdev_priv(dev);

	return mii_nway_restart(&tp->mii_if);
}

static u32 sis190_get_msglevel(struct net_device *dev)
{
	struct sis190_private *tp = netdev_priv(dev);

	return tp->msg_enable;
}

static void sis190_set_msglevel(struct net_device *dev, u32 value)
{
	struct sis190_private *tp = netdev_priv(dev);

	tp->msg_enable = value;
}

static struct ethtool_ops sis190_ethtool_ops = {
	.get_settings	= sis190_get_settings,
	.set_settings	= sis190_set_settings,
	.get_drvinfo	= sis190_get_drvinfo,
	.get_regs_len	= sis190_get_regs_len,
	.get_regs	= sis190_get_regs,
	.get_link	= ethtool_op_get_link,
	.get_msglevel	= sis190_get_msglevel,
	.set_msglevel	= sis190_set_msglevel,
	.nway_reset	= sis190_nway_reset,
};

static int sis190_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
{
	struct sis190_private *tp = netdev_priv(dev);

	return !netif_running(dev) ? -EINVAL :
		generic_mii_ioctl(&tp->mii_if, if_mii(ifr), cmd, NULL);
}

static int __devinit sis190_init_one(struct pci_dev *pdev,
				     const struct pci_device_id *ent)
{
	static int printed_version = 0;
	struct sis190_private *tp;
	struct net_device *dev;
	void __iomem *ioaddr;
	int rc;

	if (!printed_version) {
		net_drv(&debug, KERN_INFO SIS190_DRIVER_NAME " loaded.\n");
		printed_version = 1;
	}

	dev = sis190_init_board(pdev);
	if (IS_ERR(dev)) {
		rc = PTR_ERR(dev);
		goto out;
	}

	tp = netdev_priv(dev);
	ioaddr = tp->mmio_addr;

	rc = sis190_get_mac_addr(pdev, dev);
	if (rc < 0)
		goto err_release_board;

	sis190_init_rxfilter(dev);

	INIT_WORK(&tp->phy_task, sis190_phy_task, dev);

	dev->open = sis190_open;
	dev->stop = sis190_close;
	dev->do_ioctl = sis190_ioctl;
	dev->get_stats = sis190_get_stats;
	dev->tx_timeout = sis190_tx_timeout;
	dev->watchdog_timeo = SIS190_TX_TIMEOUT;
	dev->hard_start_xmit = sis190_start_xmit;
#ifdef CONFIG_NET_POLL_CONTROLLER
	dev->poll_controller = sis190_netpoll;
#endif
	dev->set_multicast_list = sis190_set_rx_mode;
	SET_ETHTOOL_OPS(dev, &sis190_ethtool_ops);
	dev->irq = pdev->irq;
	dev->base_addr = (unsigned long) 0xdead;

	spin_lock_init(&tp->lock);

	rc = sis190_mii_probe(dev);
	if (rc < 0)
		goto err_release_board;

	rc = register_netdev(dev);
	if (rc < 0)
		goto err_remove_mii;

	pci_set_drvdata(pdev, dev);

	net_probe(tp, KERN_INFO "%s: %s at %p (IRQ: %d), "
		  "%2.2x:%2.2x:%2.2x:%2.2x:%2.2x:%2.2x\n",
		  pci_name(pdev), sis_chip_info[ent->driver_data].name,
		  ioaddr, dev->irq,
		  dev->dev_addr[0], dev->dev_addr[1],
		  dev->dev_addr[2], dev->dev_addr[3],
		  dev->dev_addr[4], dev->dev_addr[5]);

	netif_carrier_off(dev);

	sis190_set_speed_auto(dev);
out:
	return rc;

err_remove_mii:
	sis190_mii_remove(dev);
err_release_board:
	sis190_release_board(pdev);
	goto out;
}

static void __devexit sis190_remove_one(struct pci_dev *pdev)
{
	struct net_device *dev = pci_get_drvdata(pdev);

	sis190_mii_remove(dev);
	unregister_netdev(dev);
	sis190_release_board(pdev);
	pci_set_drvdata(pdev, NULL);
}

static struct pci_driver sis190_pci_driver = {
	.name		= DRV_NAME,
	.id_table	= sis190_pci_tbl,
	.probe		= sis190_init_one,
	.remove		= __devexit_p(sis190_remove_one),
};

static int __init sis190_init_module(void)
{
	return pci_module_init(&sis190_pci_driver);
}

static void __exit sis190_cleanup_module(void)
{
	pci_unregister_driver(&sis190_pci_driver);
}

module_init(sis190_init_module);
module_exit(sis190_cleanup_module);
 drivers/net/tulip/Kconfig (+12 lines)
@@ -135,6 +135,18 @@
 	  <file:Documentation/networking/net-modules.txt>. The module will
 	  be called dmfe.
 
+config ULI526X
+	tristate "ULi M526x controller support"
+	depends on NET_TULIP && PCI
+	select CRC32
+	---help---
+	  This driver is for ULi M5261/M5263 10/100M Ethernet Controller
+	  (<http://www.uli.com.tw/>).
+
+	  To compile this driver as a module, choose M here and read
+	  <file:Documentation/networking/net-modules.txt>. The module will
+	  be called uli526x.
+
 config PCMCIA_XIRCOM
 	tristate "Xircom CardBus support (new driver)"
 	depends on NET_TULIP && CARDBUS