Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'upstream-linus' of master.kernel.org:/pub/scm/linux/kernel/git/jgarzik/netdev-2.6

+788 -503
+150 -45
Documentation/networking/s2io.txt
··· 1 - S2IO Technologies XFrame 10 Gig adapter. 2 - ------------------------------------------- 1 + Release notes for Neterion's (Formerly S2io) Xframe I/II PCI-X 10GbE driver. 3 2 4 - I. Module loadable parameters. 5 - When loaded as a module, the driver provides a host of Module loadable 6 - parameters, so the device can be tuned as per the users needs. 7 - A list of the Module params is given below. 8 - (i) ring_num: This can be used to program the number of 9 - receive rings used in the driver. 10 - (ii) ring_len: This defines the number of descriptors each ring 11 - can have. There can be a maximum of 8 rings. 12 - (iii) frame_len: This is an array of size 8. Using this we can 13 - set the maximum size of the received frame that can 14 - be steered into the corrsponding receive ring. 15 - (iv) fifo_num: This defines the number of Tx FIFOs thats used in 16 - the driver. 17 - (v) fifo_len: Each element defines the number of 18 - Tx descriptors that can be associated with each 19 - corresponding FIFO. There are a maximum of 8 FIFOs. 20 - (vi) tx_prio: This is a bool, if module is loaded with a non-zero 21 - value for tx_prio multi FIFO scheme is activated. 22 - (vii) rx_prio: This is a bool, if module is loaded with a non-zero 23 - value for tx_prio multi RING scheme is activated. 24 - (viii) latency_timer: The value given against this param will be 25 - loaded into the latency timer register in PCI Config 26 - space, else the register is left with its reset value. 3 + Contents 4 + ======= 5 + - 1. Introduction 6 + - 2. Identifying the adapter/interface 7 + - 3. Features supported 8 + - 4. Command line parameters 9 + - 5. Performance suggestions 10 + - 6. Available Downloads 27 11 28 - II. Performance tuning. 29 - By changing a few sysctl parameters. 
30 - Copy the following lines into a file and run the following command, 31 - "sysctl -p <file_name>" 32 - ### IPV4 specific settings 33 - net.ipv4.tcp_timestamps = 0 # turns TCP timestamp support off, default 1, reduces CPU use 34 - net.ipv4.tcp_sack = 0 # turn SACK support off, default on 35 - # on systems with a VERY fast bus -> memory interface this is the big gainer 36 - net.ipv4.tcp_rmem = 10000000 10000000 10000000 # sets min/default/max TCP read buffer, default 4096 87380 174760 37 - net.ipv4.tcp_wmem = 10000000 10000000 10000000 # sets min/pressure/max TCP write buffer, default 4096 16384 131072 38 - net.ipv4.tcp_mem = 10000000 10000000 10000000 # sets min/pressure/max TCP buffer space, default 31744 32256 32768 39 - 40 - ### CORE settings (mostly for socket and UDP effect) 41 - net.core.rmem_max = 524287 # maximum receive socket buffer size, default 131071 42 - net.core.wmem_max = 524287 # maximum send socket buffer size, default 131071 43 - net.core.rmem_default = 524287 # default receive socket buffer size, default 65535 44 - net.core.wmem_default = 524287 # default send socket buffer size, default 65535 45 - net.core.optmem_max = 524287 # maximum amount of option memory buffers, default 10240 46 - net.core.netdev_max_backlog = 300000 # number of unprocessed input packets before kernel starts dropping them, default 300 47 - ---End of performance tuning file--- 12 + 13 + 1. Introduction: 14 + This Linux driver supports Neterion's Xframe I PCI-X 1.0 and 15 + Xframe II PCI-X 2.0 adapters. It supports several features 16 + such as jumbo frames, MSI/MSI-X, checksum offloads, TSO, UFO and so on. 17 + See below for complete list of features. 18 + All features are supported for both IPv4 and IPv6. 19 + 20 + 2. Identifying the adapter/interface: 21 + a. Insert the adapter(s) in your system. 22 + b. Build and load driver 23 + # insmod s2io.ko 24 + c. 
View log messages 25 + # dmesg | tail -40 26 + You will see messages similar to: 27 + eth3: Neterion Xframe I 10GbE adapter (rev 3), Version 2.0.9.1, Intr type INTA 28 + eth4: Neterion Xframe II 10GbE adapter (rev 2), Version 2.0.9.1, Intr type INTA 29 + eth4: Device is on 64 bit 133MHz PCIX(M1) bus 30 + 31 + The above messages identify the adapter type(Xframe I/II), adapter revision, 32 + driver version, interface name(eth3, eth4), Interrupt type(INTA, MSI, MSI-X). 33 + In case of Xframe II, the PCI/PCI-X bus width and frequency are displayed 34 + as well. 35 + 36 + To associate an interface with a physical adapter use "ethtool -p <ethX>". 37 + The corresponding adapter's LED will blink multiple times. 38 + 39 + 3. Features supported: 40 + a. Jumbo frames. Xframe I/II supports MTU up to 9600 bytes, 41 + modifiable using ifconfig command. 42 + 43 + b. Offloads. Supports checksum offload(TCP/UDP/IP) on transmit 44 + and receive, TSO. 45 + 46 + c. Multi-buffer receive mode. Scattering of packet across multiple 47 + buffers. Currently driver supports 2-buffer mode which yields 48 + significant performance improvement on certain platforms(SGI Altix, 49 + IBM xSeries). 50 + 51 + d. MSI/MSI-X. Can be enabled on platforms which support this feature 52 + (IA64, Xeon) resulting in noticeable performance improvement(up to 7% 53 + on certain platforms). 54 + 55 + e. NAPI. Compile-time option(CONFIG_S2IO_NAPI) for better Rx interrupt 56 + moderation. 57 + 58 + f. Statistics. Comprehensive MAC-level and software statistics displayed 59 + using "ethtool -S" option. 60 + 61 + g. Multi-FIFO/Ring. Supports up to 8 transmit queues and receive rings, 62 + with multiple steering options. 63 + 64 + 4. Command line parameters 65 + a. tx_fifo_num 66 + Number of transmit queues 67 + Valid range: 1-8 68 + Default: 1 69 + 70 + b. rx_ring_num 71 + Number of receive rings 72 + Valid range: 1-8 73 + Default: 1 74 + 75 + c. 
tx_fifo_len 76 + Size of each transmit queue 77 + Valid range: Total length of all queues should not exceed 8192 78 + Default: 4096 79 + 80 + d. rx_ring_sz 81 + Size of each receive ring(in 4K blocks) 82 + Valid range: Limited by memory on system 83 + Default: 30 84 + 85 + e. intr_type 86 + Specifies interrupt type. Possible values 1(INTA), 2(MSI), 3(MSI-X) 87 + Valid range: 1-3 88 + Default: 1 89 + 90 + 5. Performance suggestions 91 + General: 92 + a. Set MTU to maximum(9000 for switch setup, 9600 in back-to-back configuration) 93 + b. Set TCP window size to optimal value. 94 + For instance, for MTU=1500 a value of 210K has been observed to result in 95 + good performance. 96 + # sysctl -w net.ipv4.tcp_rmem="210000 210000 210000" 97 + # sysctl -w net.ipv4.tcp_wmem="210000 210000 210000" 98 + For MTU=9000, TCP window size of 10 MB is recommended. 99 + # sysctl -w net.ipv4.tcp_rmem="10000000 10000000 10000000" 100 + # sysctl -w net.ipv4.tcp_wmem="10000000 10000000 10000000" 101 + 102 + Transmit performance: 103 + a. By default, the driver respects BIOS settings for PCI bus parameters. 104 + However, you may want to experiment with PCI bus parameters 105 + max-split-transactions(MOST) and MMRBC (use setpci command). 106 + A MOST value of 2 has been found optimal for Opterons and 3 for Itanium. 107 + It could be different for your hardware. 108 + Set MMRBC to 4K**. 109 + 110 + For example you can set 111 + For Opteron 112 + #setpci -d 17d5:* 62=1d 113 + For Itanium 114 + #setpci -d 17d5:* 62=3d 115 + 116 + For detailed description of the PCI registers, please see Xframe User Guide. 117 + 118 + b. Ensure Transmit Checksum offload is enabled. Use ethtool to set/verify this 119 + parameter. 120 + c. Turn on TSO(using "ethtool -K") 121 + # ethtool -K <ethX> tso on 122 + 123 + Receive performance: 124 + a. By default, the driver respects BIOS settings for PCI bus parameters. 125 + However, you may want to set PCI latency timer to 248. 
126 + #setpci -d 17d5:* LATENCY_TIMER=f8 127 + For detailed description of the PCI registers, please see Xframe User Guide. 128 + b. Use 2-buffer mode. This results in a large performance boost 129 + on certain platforms(e.g. SGI Altix, IBM xSeries). 130 + c. Ensure Receive Checksum offload is enabled. Use "ethtool -K ethX" command to 131 + set/verify this option. 132 + d. Enable NAPI feature(in kernel configuration Device Drivers ---> Network 133 + device support ---> Ethernet (10000 Mbit) ---> S2IO 10Gbe Xframe NIC) to 134 + bring down CPU utilization. 135 + 136 + ** For AMD Opteron platforms with 8131 chipset, MMRBC=1 and MOST=1 are 137 + recommended as safe parameters. 138 + For more information, please review the AMD8131 errata at 139 + http://www.amd.com/us-en/assets/content_type/white_papers_and_tech_docs/26310.pdf 140 + 141 + 6. Available Downloads 142 + Neterion "s2io" driver in Red Hat and Suse 2.6-based distributions is kept up 143 + to date, also the latest "s2io" code (including support for 2.4 kernels) is 144 + available via "Support" link on the Neterion site: http://www.neterion.com. 145 + 146 + For Xframe User Guide (Programming manual), visit ftp site ns1.s2io.com, 147 + user: linuxdocs password: HALdocs 148 + 149 + 7. Support 150 + For further support please contact either your 10GbE Xframe NIC vendor (IBM, 151 + HP, SGI etc.) or click on the "Support" link on the Neterion site: 152 + http://www.neterion.com. 48 153
+9
MAINTAINERS
··· 910 910 W: http://linux-fbdev.sourceforge.net/ 911 911 S: Maintained 912 912 913 + FREESCALE SOC FS_ENET DRIVER 914 + P: Pantelis Antoniou 915 + M: pantelis.antoniou@gmail.com 916 + P: Vitaly Bordug 917 + M: vbordug@ru.mvista.com 918 + L: linuxppc-embedded@ozlabs.org 919 + L: netdev@vger.kernel.org 920 + S: Maintained 921 + 913 922 FILE LOCKING (flock() and fcntl()/lockf()) 914 923 P: Matthew Wilcox 915 924 M: matthew@wil.cx
+1 -12
drivers/net/Kconfig
··· 1203 1203 1204 1204 config IBM_EMAC_PHY_RX_CLK_FIX 1205 1205 bool "PHY Rx clock workaround" 1206 - depends on IBM_EMAC && (405EP || 440GX || 440EP) 1206 + depends on IBM_EMAC && (405EP || 440GX || 440EP || 440GR) 1207 1207 help 1208 1208 Enable this if EMAC attached to a PHY which doesn't generate 1209 1209 RX clock if there is no link, if this is the case, you will ··· 2257 2257 information. 2258 2258 2259 2259 If in doubt, say N. 2260 - 2261 - config 2BUFF_MODE 2262 - bool "Use 2 Buffer Mode on Rx side." 2263 - depends on S2IO 2264 - ---help--- 2265 - On enabling the 2 buffer mode, the received frame will be 2266 - split into 2 parts before being DMA'ed to the hosts memory. 2267 - The parts are the ethernet header and ethernet payload. 2268 - This is useful on systems where DMA'ing to to unaligned 2269 - physical memory loactions comes with a heavy price. 2270 - If not sure please say N. 2271 2260 2272 2261 endmenu 2273 2262
+7 -1
drivers/net/fec_8xx/Kconfig
··· 1 1 config FEC_8XX 2 2 tristate "Motorola 8xx FEC driver" 3 - depends on NET_ETHERNET && 8xx && (NETTA || NETPHONE) 3 + depends on NET_ETHERNET 4 4 select MII 5 5 6 6 config FEC_8XX_GENERIC_PHY ··· 12 12 bool "Support DM9161 PHY" 13 13 depends on FEC_8XX 14 14 default n 15 + 16 + config FEC_8XX_LXT971_PHY 17 + bool "Support LXT971/LXT972 PHY" 18 + depends on FEC_8XX 19 + default n 20 +
+42
drivers/net/fec_8xx/fec_mii.c
··· 203 203 204 204 #endif 205 205 206 + #ifdef CONFIG_FEC_8XX_LXT971_PHY 207 + 208 + /* Support for LXT971/972 PHY */ 209 + 210 + #define MII_LXT971_PCR 16 /* Port Control Register */ 211 + #define MII_LXT971_SR2 17 /* Status Register 2 */ 212 + #define MII_LXT971_IER 18 /* Interrupt Enable Register */ 213 + #define MII_LXT971_ISR 19 /* Interrupt Status Register */ 214 + #define MII_LXT971_LCR 20 /* LED Control Register */ 215 + #define MII_LXT971_TCR 30 /* Transmit Control Register */ 216 + 217 + static void lxt971_startup(struct net_device *dev) 218 + { 219 + struct fec_enet_private *fep = netdev_priv(dev); 220 + 221 + fec_mii_write(dev, fep->mii_if.phy_id, MII_LXT971_IER, 0x00F2); 222 + } 223 + 224 + static void lxt971_ack_int(struct net_device *dev) 225 + { 226 + struct fec_enet_private *fep = netdev_priv(dev); 227 + 228 + fec_mii_read(dev, fep->mii_if.phy_id, MII_LXT971_ISR); 229 + } 230 + 231 + static void lxt971_shutdown(struct net_device *dev) 232 + { 233 + struct fec_enet_private *fep = netdev_priv(dev); 234 + 235 + fec_mii_write(dev, fep->mii_if.phy_id, MII_LXT971_IER, 0x0000); 236 + } 237 + #endif 238 + 206 239 /**********************************************************************************/ 207 240 208 241 static const struct phy_info phy_info[] = { ··· 247 214 .ack_int = dm9161_ack_int, 248 215 .shutdown = dm9161_shutdown, 249 216 }, 217 + #endif 218 + #ifdef CONFIG_FEC_8XX_LXT971_PHY 219 + { 220 + .id = 0x0001378e, 221 + .name = "LXT971/972", 222 + .startup = lxt971_startup, 223 + .ack_int = lxt971_ack_int, 224 + .shutdown = lxt971_shutdown, 225 + }, 250 226 #endif 251 227 #ifdef CONFIG_FEC_8XX_GENERIC_PHY 252 228 {
+12 -9
drivers/net/fs_enet/fs_enet-main.c
··· 130 130 131 131 skb = fep->rx_skbuff[curidx]; 132 132 133 - dma_unmap_single(fep->dev, skb->data, 133 + dma_unmap_single(fep->dev, CBDR_BUFADDR(bdp), 134 134 L1_CACHE_ALIGN(PKT_MAXBUF_SIZE), 135 135 DMA_FROM_DEVICE); 136 136 ··· 144 144 145 145 skb = fep->rx_skbuff[curidx]; 146 146 147 - dma_unmap_single(fep->dev, skb->data, 147 + dma_unmap_single(fep->dev, CBDR_BUFADDR(bdp), 148 148 L1_CACHE_ALIGN(PKT_MAXBUF_SIZE), 149 149 DMA_FROM_DEVICE); 150 150 ··· 268 268 269 269 skb = fep->rx_skbuff[curidx]; 270 270 271 - dma_unmap_single(fep->dev, skb->data, 271 + dma_unmap_single(fep->dev, CBDR_BUFADDR(bdp), 272 272 L1_CACHE_ALIGN(PKT_MAXBUF_SIZE), 273 273 DMA_FROM_DEVICE); 274 274 ··· 278 278 279 279 skb = fep->rx_skbuff[curidx]; 280 280 281 - dma_unmap_single(fep->dev, skb->data, 281 + dma_unmap_single(fep->dev, CBDR_BUFADDR(bdp), 282 282 L1_CACHE_ALIGN(PKT_MAXBUF_SIZE), 283 283 DMA_FROM_DEVICE); 284 284 ··· 399 399 fep->stats.collisions++; 400 400 401 401 /* unmap */ 402 - dma_unmap_single(fep->dev, skb->data, skb->len, DMA_TO_DEVICE); 402 + dma_unmap_single(fep->dev, CBDR_BUFADDR(bdp), 403 + skb->len, DMA_TO_DEVICE); 403 404 404 405 /* 405 406 * Free the sk buffer associated with this last transmit. ··· 548 547 { 549 548 struct fs_enet_private *fep = netdev_priv(dev); 550 549 struct sk_buff *skb; 550 + cbd_t *bdp; 551 551 int i; 552 552 553 553 /* 554 554 * Reset SKB transmit buffers. 
555 555 */ 556 - for (i = 0; i < fep->tx_ring; i++) { 556 + for (i = 0, bdp = fep->tx_bd_base; i < fep->tx_ring; i++, bdp++) { 557 557 if ((skb = fep->tx_skbuff[i]) == NULL) 558 558 continue; 559 559 560 560 /* unmap */ 561 - dma_unmap_single(fep->dev, skb->data, skb->len, DMA_TO_DEVICE); 561 + dma_unmap_single(fep->dev, CBDR_BUFADDR(bdp), 562 + skb->len, DMA_TO_DEVICE); 562 563 563 564 fep->tx_skbuff[i] = NULL; 564 565 dev_kfree_skb(skb); ··· 569 566 /* 570 567 * Reset SKB receive buffers 571 568 */ 572 - for (i = 0; i < fep->rx_ring; i++) { 569 + for (i = 0, bdp = fep->rx_bd_base; i < fep->rx_ring; i++, bdp++) { 573 570 if ((skb = fep->rx_skbuff[i]) == NULL) 574 571 continue; 575 572 576 573 /* unmap */ 577 - dma_unmap_single(fep->dev, skb->data, 574 + dma_unmap_single(fep->dev, CBDR_BUFADDR(bdp), 578 575 L1_CACHE_ALIGN(PKT_MAXBUF_SIZE), 579 576 DMA_FROM_DEVICE); 580 577
+21 -1
drivers/net/ibm_emac/ibm_emac.h
··· 26 26 /* This is a simple check to prevent use of this driver on non-tested SoCs */ 27 27 #if !defined(CONFIG_405GP) && !defined(CONFIG_405GPR) && !defined(CONFIG_405EP) && \ 28 28 !defined(CONFIG_440GP) && !defined(CONFIG_440GX) && !defined(CONFIG_440SP) && \ 29 - !defined(CONFIG_440EP) && !defined(CONFIG_NP405H) 29 + !defined(CONFIG_440EP) && !defined(CONFIG_NP405H) && !defined(CONFIG_440SPE) && \ 30 + !defined(CONFIG_440GR) 30 31 #error "Unknown SoC. Please, check chip user manual and make sure EMAC defines are OK" 31 32 #endif 32 33 ··· 246 245 #define EMAC_STACR_PCDA_MASK 0x1f 247 246 #define EMAC_STACR_PCDA_SHIFT 5 248 247 #define EMAC_STACR_PRA_MASK 0x1f 248 + 249 + /* 250 + * For the 440SPe, AMCC inexplicably changed the polarity of 251 + * the "operation complete" bit in the MII control register. 252 + */ 253 + #if defined(CONFIG_440SPE) 254 + static inline int emac_phy_done(u32 stacr) 255 + { 256 + return !(stacr & EMAC_STACR_OC); 257 + }; 258 + #define EMAC_STACR_START EMAC_STACR_OC 259 + 260 + #else /* CONFIG_440SPE */ 261 + static inline int emac_phy_done(u32 stacr) 262 + { 263 + return stacr & EMAC_STACR_OC; 264 + }; 265 + #define EMAC_STACR_START 0 266 + #endif /* !CONFIG_440SPE */ 249 267 250 268 /* EMACx_TRTR */ 251 269 #if !defined(CONFIG_IBM_EMAC4)
+11 -9
drivers/net/ibm_emac/ibm_emac_core.c
··· 87 87 */ 88 88 static u32 busy_phy_map; 89 89 90 - #if defined(CONFIG_IBM_EMAC_PHY_RX_CLK_FIX) && (defined(CONFIG_405EP) || defined(CONFIG_440EP)) 90 + #if defined(CONFIG_IBM_EMAC_PHY_RX_CLK_FIX) && \ 91 + (defined(CONFIG_405EP) || defined(CONFIG_440EP) || defined(CONFIG_440GR)) 91 92 /* 405EP has "EMAC to PHY Control Register" (CPC0_EPCTL) which can help us 92 93 * with PHY RX clock problem. 93 - * 440EP has more sane SDR0_MFR register implementation than 440GX, which 94 + * 440EP/440GR has more sane SDR0_MFR register implementation than 440GX, which 94 95 * also allows controlling each EMAC clock 95 96 */ 96 97 static inline void EMAC_RX_CLK_TX(int idx) ··· 101 100 102 101 #if defined(CONFIG_405EP) 103 102 mtdcr(0xf3, mfdcr(0xf3) | (1 << idx)); 104 - #else /* CONFIG_440EP */ 103 + #else /* CONFIG_440EP || CONFIG_440GR */ 105 104 SDR_WRITE(DCRN_SDR_MFR, SDR_READ(DCRN_SDR_MFR) | (0x08000000 >> idx)); 106 105 #endif 107 106 ··· 547 546 548 547 /* Wait for management interface to become idle */ 549 548 n = 10; 550 - while (!(in_be32(&p->stacr) & EMAC_STACR_OC)) { 549 + while (!emac_phy_done(in_be32(&p->stacr))) { 551 550 udelay(1); 552 551 if (!--n) 553 552 goto to; ··· 557 556 out_be32(&p->stacr, 558 557 EMAC_STACR_BASE(emac_opb_mhz()) | EMAC_STACR_STAC_READ | 559 558 (reg & EMAC_STACR_PRA_MASK) 560 - | ((id & EMAC_STACR_PCDA_MASK) << EMAC_STACR_PCDA_SHIFT)); 559 + | ((id & EMAC_STACR_PCDA_MASK) << EMAC_STACR_PCDA_SHIFT) 560 + | EMAC_STACR_START); 561 561 562 562 /* Wait for read to complete */ 563 563 n = 100; 564 - while (!((r = in_be32(&p->stacr)) & EMAC_STACR_OC)) { 564 + while (!emac_phy_done(r = in_be32(&p->stacr))) { 565 565 udelay(1); 566 566 if (!--n) 567 567 goto to; ··· 596 594 597 595 /* Wait for management interface to be idle */ 598 596 n = 10; 599 - while (!(in_be32(&p->stacr) & EMAC_STACR_OC)) { 597 + while (!emac_phy_done(in_be32(&p->stacr))) { 600 598 udelay(1); 601 599 if (!--n) 602 600 goto to; ··· 607 605 EMAC_STACR_BASE(emac_opb_mhz()) | 
EMAC_STACR_STAC_WRITE | 608 606 (reg & EMAC_STACR_PRA_MASK) | 609 607 ((id & EMAC_STACR_PCDA_MASK) << EMAC_STACR_PCDA_SHIFT) | 610 - (val << EMAC_STACR_PHYD_SHIFT)); 608 + (val << EMAC_STACR_PHYD_SHIFT) | EMAC_STACR_START); 611 609 612 610 /* Wait for write to complete */ 613 611 n = 100; 614 - while (!(in_be32(&p->stacr) & EMAC_STACR_OC)) { 612 + while (!emac_phy_done(in_be32(&p->stacr))) { 615 613 udelay(1); 616 614 if (!--n) 617 615 goto to;
+3 -2
drivers/net/ibm_emac/ibm_emac_mal.h
··· 32 32 * reflect the fact that 40x and 44x have slightly different MALs. --ebs 33 33 */ 34 34 #if defined(CONFIG_405GP) || defined(CONFIG_405GPR) || defined(CONFIG_405EP) || \ 35 - defined(CONFIG_440EP) || defined(CONFIG_NP405H) 35 + defined(CONFIG_440EP) || defined(CONFIG_440GR) || defined(CONFIG_NP405H) 36 36 #define MAL_VERSION 1 37 - #elif defined(CONFIG_440GP) || defined(CONFIG_440GX) || defined(CONFIG_440SP) 37 + #elif defined(CONFIG_440GP) || defined(CONFIG_440GX) || defined(CONFIG_440SP) || \ 38 + defined(CONFIG_440SPE) 38 39 #define MAL_VERSION 2 39 40 #else 40 41 #error "Unknown SoC, please check chip manual and choose MAL 'version'"
+12
drivers/net/ibm_emac/ibm_emac_phy.c
··· 236 236 }; 237 237 238 238 /* CIS8201 */ 239 + #define MII_CIS8201_10BTCSR 0x16 240 + #define TENBTCSR_ECHO_DISABLE 0x2000 239 241 #define MII_CIS8201_EPCR 0x17 240 242 #define EPCR_MODE_MASK 0x3000 241 243 #define EPCR_GMII_MODE 0x0000 242 244 #define EPCR_RGMII_MODE 0x1000 243 245 #define EPCR_TBI_MODE 0x2000 244 246 #define EPCR_RTBI_MODE 0x3000 247 + #define MII_CIS8201_ACSR 0x1c 248 + #define ACSR_PIN_PRIO_SELECT 0x0004 245 249 246 250 static int cis8201_init(struct mii_phy *phy) 247 251 { ··· 273 269 } 274 270 275 271 phy_write(phy, MII_CIS8201_EPCR, epcr); 272 + 273 + /* MII regs override strap pins */ 274 + phy_write(phy, MII_CIS8201_ACSR, 275 + phy_read(phy, MII_CIS8201_ACSR) | ACSR_PIN_PRIO_SELECT); 276 + 277 + /* Disable TX_EN -> CRS echo mode, otherwise 10/HDX doesn't work */ 278 + phy_write(phy, MII_CIS8201_10BTCSR, 279 + phy_read(phy, MII_CIS8201_10BTCSR) | TENBTCSR_ECHO_DISABLE); 276 280 277 281 return 0; 278 282 }
+60 -27
drivers/net/pcnet32.c
··· 22 22 *************************************************************************/ 23 23 24 24 #define DRV_NAME "pcnet32" 25 - #define DRV_VERSION "1.31a" 26 - #define DRV_RELDATE "12.Sep.2005" 25 + #define DRV_VERSION "1.31c" 26 + #define DRV_RELDATE "01.Nov.2005" 27 27 #define PFX DRV_NAME ": " 28 28 29 29 static const char *version = ··· 260 260 * v1.31 02 Sep 2005 Hubert WS Lin <wslin@tw.ibm.c0m> added set_ringparam(). 261 261 * v1.31a 12 Sep 2005 Hubert WS Lin <wslin@tw.ibm.c0m> set min ring size to 4 262 262 * to allow loopback test to work unchanged. 263 + * v1.31b 06 Oct 2005 Don Fry changed alloc_ring to show name of device 264 + * if allocation fails 265 + * v1.31c 01 Nov 2005 Don Fry Allied Telesyn 2700/2701 FX are 100Mbit only. 266 + * Force 100Mbit FD if Auto (ASEL) is selected. 267 + * See Bugzilla 2669 and 4551. 263 268 */ 264 269 265 270 ··· 413 408 static void pcnet32_get_regs(struct net_device *dev, struct ethtool_regs *regs, 414 409 void *ptr); 415 410 static void pcnet32_purge_tx_ring(struct net_device *dev); 416 - static int pcnet32_alloc_ring(struct net_device *dev); 411 + static int pcnet32_alloc_ring(struct net_device *dev, char *name); 417 412 static void pcnet32_free_ring(struct net_device *dev); 418 413 419 414 ··· 674 669 lp->rx_mod_mask = lp->rx_ring_size - 1; 675 670 lp->rx_len_bits = (i << 4); 676 671 677 - if (pcnet32_alloc_ring(dev)) { 672 + if (pcnet32_alloc_ring(dev, dev->name)) { 678 673 pcnet32_free_ring(dev); 674 + spin_unlock_irqrestore(&lp->lock, flags); 679 675 return -ENOMEM; 680 676 } 681 677 682 678 spin_unlock_irqrestore(&lp->lock, flags); 683 679 684 680 if (pcnet32_debug & NETIF_MSG_DRV) 685 - printk(KERN_INFO PFX "Ring Param Settings: RX: %d, TX: %d\n", lp->rx_ring_size, lp->tx_ring_size); 681 + printk(KERN_INFO PFX "%s: Ring Param Settings: RX: %d, TX: %d\n", 682 + dev->name, lp->rx_ring_size, lp->tx_ring_size); 686 683 687 684 if (netif_running(dev)) 688 685 pcnet32_open(dev); ··· 988 981 *buff++ = 
a->read_csr(ioaddr, 114); 989 982 990 983 /* read bus configuration registers */ 991 - for (i=0; i<36; i++) { 984 + for (i=0; i<30; i++) { 985 + *buff++ = a->read_bcr(ioaddr, i); 986 + } 987 + *buff++ = 0; /* skip bcr30 so as not to hang 79C976 */ 988 + for (i=31; i<36; i++) { 992 989 *buff++ = a->read_bcr(ioaddr, i); 993 990 } 994 991 ··· 1351 1340 } 1352 1341 lp->a = *a; 1353 1342 1354 - if (pcnet32_alloc_ring(dev)) { 1343 + /* prior to register_netdev, dev->name is not yet correct */ 1344 + if (pcnet32_alloc_ring(dev, pci_name(lp->pci_dev))) { 1355 1345 ret = -ENOMEM; 1356 1346 goto err_free_ring; 1357 1347 } ··· 1460 1448 } 1461 1449 1462 1450 1463 - static int pcnet32_alloc_ring(struct net_device *dev) 1451 + /* if any allocation fails, caller must also call pcnet32_free_ring */ 1452 + static int pcnet32_alloc_ring(struct net_device *dev, char *name) 1464 1453 { 1465 1454 struct pcnet32_private *lp = dev->priv; 1466 1455 1467 - if ((lp->tx_ring = pci_alloc_consistent(lp->pci_dev, sizeof(struct pcnet32_tx_head) * lp->tx_ring_size, 1468 - &lp->tx_ring_dma_addr)) == NULL) { 1456 + lp->tx_ring = pci_alloc_consistent(lp->pci_dev, 1457 + sizeof(struct pcnet32_tx_head) * lp->tx_ring_size, 1458 + &lp->tx_ring_dma_addr); 1459 + if (lp->tx_ring == NULL) { 1469 1460 if (pcnet32_debug & NETIF_MSG_DRV) 1470 - printk(KERN_ERR PFX "Consistent memory allocation failed.\n"); 1461 + printk("\n" KERN_ERR PFX "%s: Consistent memory allocation failed.\n", 1462 + name); 1471 1463 return -ENOMEM; 1472 1464 } 1473 1465 1474 - if ((lp->rx_ring = pci_alloc_consistent(lp->pci_dev, sizeof(struct pcnet32_rx_head) * lp->rx_ring_size, 1475 - &lp->rx_ring_dma_addr)) == NULL) { 1466 + lp->rx_ring = pci_alloc_consistent(lp->pci_dev, 1467 + sizeof(struct pcnet32_rx_head) * lp->rx_ring_size, 1468 + &lp->rx_ring_dma_addr); 1469 + if (lp->rx_ring == NULL) { 1476 1470 if (pcnet32_debug & NETIF_MSG_DRV) 1477 - printk(KERN_ERR PFX "Consistent memory allocation failed.\n"); 1471 + printk("\n" KERN_ERR 
PFX "%s: Consistent memory allocation failed.\n", 1472 + name); 1478 1473 return -ENOMEM; 1479 1474 } 1480 1475 1481 - if (!(lp->tx_dma_addr = kmalloc(sizeof(dma_addr_t) * lp->tx_ring_size, GFP_ATOMIC))) { 1476 + lp->tx_dma_addr = kmalloc(sizeof(dma_addr_t) * lp->tx_ring_size, 1477 + GFP_ATOMIC); 1478 + if (!lp->tx_dma_addr) { 1482 1479 if (pcnet32_debug & NETIF_MSG_DRV) 1483 - printk(KERN_ERR PFX "Memory allocation failed.\n"); 1480 + printk("\n" KERN_ERR PFX "%s: Memory allocation failed.\n", name); 1484 1481 return -ENOMEM; 1485 1482 } 1486 1483 memset(lp->tx_dma_addr, 0, sizeof(dma_addr_t) * lp->tx_ring_size); 1487 1484 1488 - if (!(lp->rx_dma_addr = kmalloc(sizeof(dma_addr_t) * lp->rx_ring_size, GFP_ATOMIC))) { 1485 + lp->rx_dma_addr = kmalloc(sizeof(dma_addr_t) * lp->rx_ring_size, 1486 + GFP_ATOMIC); 1487 + if (!lp->rx_dma_addr) { 1489 1488 if (pcnet32_debug & NETIF_MSG_DRV) 1490 - printk(KERN_ERR PFX "Memory allocation failed.\n"); 1489 + printk("\n" KERN_ERR PFX "%s: Memory allocation failed.\n", name); 1491 1490 return -ENOMEM; 1492 1491 } 1493 1492 memset(lp->rx_dma_addr, 0, sizeof(dma_addr_t) * lp->rx_ring_size); 1494 1493 1495 - if (!(lp->tx_skbuff = kmalloc(sizeof(struct sk_buff *) * lp->tx_ring_size, GFP_ATOMIC))) { 1494 + lp->tx_skbuff = kmalloc(sizeof(struct sk_buff *) * lp->tx_ring_size, 1495 + GFP_ATOMIC); 1496 + if (!lp->tx_skbuff) { 1496 1497 if (pcnet32_debug & NETIF_MSG_DRV) 1497 - printk(KERN_ERR PFX "Memory allocation failed.\n"); 1498 + printk("\n" KERN_ERR PFX "%s: Memory allocation failed.\n", name); 1498 1499 return -ENOMEM; 1499 1500 } 1500 1501 memset(lp->tx_skbuff, 0, sizeof(struct sk_buff *) * lp->tx_ring_size); 1501 1502 1502 - if (!(lp->rx_skbuff = kmalloc(sizeof(struct sk_buff *) * lp->rx_ring_size, GFP_ATOMIC))) { 1503 + lp->rx_skbuff = kmalloc(sizeof(struct sk_buff *) * lp->rx_ring_size, 1504 + GFP_ATOMIC); 1505 + if (!lp->rx_skbuff) { 1503 1506 if (pcnet32_debug & NETIF_MSG_DRV) 1504 - printk(KERN_ERR PFX "Memory allocation 
failed.\n"); 1507 + printk("\n" KERN_ERR PFX "%s: Memory allocation failed.\n", name); 1505 1508 return -ENOMEM; 1506 1509 } 1507 1510 memset(lp->rx_skbuff, 0, sizeof(struct sk_buff *) * lp->rx_ring_size); ··· 1619 1592 val |= 0x10; 1620 1593 lp->a.write_csr (ioaddr, 124, val); 1621 1594 1622 - /* Allied Telesyn AT 2700/2701 FX looses the link, so skip that */ 1595 + /* Allied Telesyn AT 2700/2701 FX are 100Mbit only and do not negotiate */ 1623 1596 if (lp->pci_dev->subsystem_vendor == PCI_VENDOR_ID_AT && 1624 - (lp->pci_dev->subsystem_device == PCI_SUBDEVICE_ID_AT_2700FX || 1625 - lp->pci_dev->subsystem_device == PCI_SUBDEVICE_ID_AT_2701FX)) { 1626 - printk(KERN_DEBUG "%s: Skipping PHY selection.\n", dev->name); 1627 - } else { 1597 + (lp->pci_dev->subsystem_device == PCI_SUBDEVICE_ID_AT_2700FX || 1598 + lp->pci_dev->subsystem_device == PCI_SUBDEVICE_ID_AT_2701FX)) { 1599 + if (lp->options & PCNET32_PORT_ASEL) { 1600 + lp->options = PCNET32_PORT_FD | PCNET32_PORT_100; 1601 + if (netif_msg_link(lp)) 1602 + printk(KERN_DEBUG "%s: Setting 100Mb-Full Duplex.\n", 1603 + dev->name); 1604 + } 1605 + } 1606 + { 1628 1607 /* 1629 1608 * 24 Jun 2004 according AMD, in order to change the PHY, 1630 1609 * DANAS (or DISPM for 79C976) must be set; then select the speed,
+3
drivers/net/phy/mdio_bus.c
··· 61 61 for (i = 0; i < PHY_MAX_ADDR; i++) { 62 62 struct phy_device *phydev; 63 63 64 + if (bus->phy_mask & (1 << i)) 65 + continue; 66 + 64 67 phydev = get_phy_device(bus, i); 65 68 66 69 if (IS_ERR(phydev))
+406 -352
drivers/net/s2io.c
··· 30 30 * in the driver. 31 31 * rx_ring_sz: This defines the number of descriptors each ring can have. This 32 32 * is also an array of size 8. 33 + * rx_ring_mode: This defines the operation mode of all 8 rings. The valid 34 + * values are 1, 2 and 3. 33 35 * tx_fifo_num: This defines the number of Tx FIFOs thats used int the driver. 34 36 * tx_fifo_len: This too is an array of 8. Each element defines the number of 35 37 * Tx descriptors that can be associated with each corresponding FIFO. ··· 67 65 #include "s2io.h" 68 66 #include "s2io-regs.h" 69 67 70 - #define DRV_VERSION "Version 2.0.9.1" 68 + #define DRV_VERSION "Version 2.0.9.3" 71 69 72 70 /* S2io Driver name & version. */ 73 71 static char s2io_driver_name[] = "Neterion"; 74 72 static char s2io_driver_version[] = DRV_VERSION; 73 + 74 + int rxd_size[4] = {32,48,48,64}; 75 + int rxd_count[4] = {127,85,85,63}; 75 76 76 77 static inline int RXD_IS_UP2DT(RxD_t *rxdp) 77 78 { ··· 109 104 mac_control = &sp->mac_control; 110 105 if ((mac_control->rings[ring].pkt_cnt - rxb_size) > 16) { 111 106 level = LOW; 112 - if (rxb_size <= MAX_RXDS_PER_BLOCK) { 107 + if (rxb_size <= rxd_count[sp->rxd_mode]) { 113 108 level = PANIC; 114 109 } 115 110 } ··· 301 296 {[0 ...(MAX_RX_RINGS - 1)] = 0 }; 302 297 static unsigned int rts_frm_len[MAX_RX_RINGS] = 303 298 {[0 ...(MAX_RX_RINGS - 1)] = 0 }; 299 + static unsigned int rx_ring_mode = 1; 304 300 static unsigned int use_continuous_tx_intrs = 1; 305 301 static unsigned int rmac_pause_time = 65535; 306 302 static unsigned int mc_pause_threshold_q0q3 = 187; ··· 310 304 static unsigned int tmac_util_period = 5; 311 305 static unsigned int rmac_util_period = 5; 312 306 static unsigned int bimodal = 0; 307 + static unsigned int l3l4hdr_size = 128; 313 308 #ifndef CONFIG_S2IO_NAPI 314 309 static unsigned int indicate_max_pkts; 315 310 #endif ··· 364 357 int i, j, blk_cnt, rx_sz, tx_sz; 365 358 int lst_size, lst_per_page; 366 359 struct net_device *dev = nic->dev; 367 - #ifdef 
CONFIG_2BUFF_MODE 368 360 unsigned long tmp; 369 361 buffAdd_t *ba; 370 - #endif 371 362 372 363 mac_info_t *mac_control; 373 364 struct config_param *config; ··· 463 458 /* Allocation and initialization of RXDs in Rings */ 464 459 size = 0; 465 460 for (i = 0; i < config->rx_ring_num; i++) { 466 - if (config->rx_cfg[i].num_rxd % (MAX_RXDS_PER_BLOCK + 1)) { 461 + if (config->rx_cfg[i].num_rxd % 462 + (rxd_count[nic->rxd_mode] + 1)) { 467 463 DBG_PRINT(ERR_DBG, "%s: RxD count of ", dev->name); 468 464 DBG_PRINT(ERR_DBG, "Ring%d is not a multiple of ", 469 465 i); ··· 473 467 } 474 468 size += config->rx_cfg[i].num_rxd; 475 469 mac_control->rings[i].block_count = 476 - config->rx_cfg[i].num_rxd / (MAX_RXDS_PER_BLOCK + 1); 477 - mac_control->rings[i].pkt_cnt = 478 - config->rx_cfg[i].num_rxd - mac_control->rings[i].block_count; 470 + config->rx_cfg[i].num_rxd / 471 + (rxd_count[nic->rxd_mode] + 1 ); 472 + mac_control->rings[i].pkt_cnt = config->rx_cfg[i].num_rxd - 473 + mac_control->rings[i].block_count; 479 474 } 480 - size = (size * (sizeof(RxD_t))); 475 + if (nic->rxd_mode == RXD_MODE_1) 476 + size = (size * (sizeof(RxD1_t))); 477 + else 478 + size = (size * (sizeof(RxD3_t))); 481 479 rx_sz = size; 482 480 483 481 for (i = 0; i < config->rx_ring_num; i++) { ··· 496 486 mac_control->rings[i].nic = nic; 497 487 mac_control->rings[i].ring_no = i; 498 488 499 - blk_cnt = 500 - config->rx_cfg[i].num_rxd / (MAX_RXDS_PER_BLOCK + 1); 489 + blk_cnt = config->rx_cfg[i].num_rxd / 490 + (rxd_count[nic->rxd_mode] + 1); 501 491 /* Allocating all the Rx blocks */ 502 492 for (j = 0; j < blk_cnt; j++) { 503 - #ifndef CONFIG_2BUFF_MODE 504 - size = (MAX_RXDS_PER_BLOCK + 1) * (sizeof(RxD_t)); 505 - #else 506 - size = SIZE_OF_BLOCK; 507 - #endif 493 + rx_block_info_t *rx_blocks; 494 + int l; 495 + 496 + rx_blocks = &mac_control->rings[i].rx_blocks[j]; 497 + size = SIZE_OF_BLOCK; //size is always page size 508 498 tmp_v_addr = pci_alloc_consistent(nic->pdev, size, 509 499 
&tmp_p_addr); 510 500 if (tmp_v_addr == NULL) { ··· 514 504 * memory that was alloced till the 515 505 * failure happened. 516 506 */ 517 - mac_control->rings[i].rx_blocks[j].block_virt_addr = 518 - tmp_v_addr; 507 + rx_blocks->block_virt_addr = tmp_v_addr; 519 508 return -ENOMEM; 520 509 } 521 510 memset(tmp_v_addr, 0, size); 511 + rx_blocks->block_virt_addr = tmp_v_addr; 512 + rx_blocks->block_dma_addr = tmp_p_addr; 513 + rx_blocks->rxds = kmalloc(sizeof(rxd_info_t)* 514 + rxd_count[nic->rxd_mode], 515 + GFP_KERNEL); 516 + for (l=0; l<rxd_count[nic->rxd_mode];l++) { 517 + rx_blocks->rxds[l].virt_addr = 518 + rx_blocks->block_virt_addr + 519 + (rxd_size[nic->rxd_mode] * l); 520 + rx_blocks->rxds[l].dma_addr = 521 + rx_blocks->block_dma_addr + 522 + (rxd_size[nic->rxd_mode] * l); 523 + } 524 + 522 525 mac_control->rings[i].rx_blocks[j].block_virt_addr = 523 526 tmp_v_addr; 524 527 mac_control->rings[i].rx_blocks[j].block_dma_addr = ··· 551 528 blk_cnt].block_dma_addr; 552 529 553 530 pre_rxd_blk = (RxD_block_t *) tmp_v_addr; 554 - pre_rxd_blk->reserved_1 = END_OF_BLOCK; /* last RxD 555 - * marker. 556 - */ 557 - #ifndef CONFIG_2BUFF_MODE 558 531 pre_rxd_blk->reserved_2_pNext_RxD_block = 559 532 (unsigned long) tmp_v_addr_next; 560 - #endif 561 533 pre_rxd_blk->pNext_RxD_Blk_physical = 562 534 (u64) tmp_p_addr_next; 563 535 } 564 536 } 565 - 566 - #ifdef CONFIG_2BUFF_MODE 567 - /* 568 - * Allocation of Storages for buffer addresses in 2BUFF mode 569 - * and the buffers as well. 570 - */ 571 - for (i = 0; i < config->rx_ring_num; i++) { 572 - blk_cnt = 573 - config->rx_cfg[i].num_rxd / (MAX_RXDS_PER_BLOCK + 1); 574 - mac_control->rings[i].ba = kmalloc((sizeof(buffAdd_t *) * blk_cnt), 537 + if (nic->rxd_mode >= RXD_MODE_3A) { 538 + /* 539 + * Allocation of Storages for buffer addresses in 2BUFF mode 540 + * and the buffers as well. 
541 + */ 542 + for (i = 0; i < config->rx_ring_num; i++) { 543 + blk_cnt = config->rx_cfg[i].num_rxd / 544 + (rxd_count[nic->rxd_mode]+ 1); 545 + mac_control->rings[i].ba = 546 + kmalloc((sizeof(buffAdd_t *) * blk_cnt), 575 547 GFP_KERNEL); 576 - if (!mac_control->rings[i].ba) 577 - return -ENOMEM; 578 - for (j = 0; j < blk_cnt; j++) { 579 - int k = 0; 580 - mac_control->rings[i].ba[j] = kmalloc((sizeof(buffAdd_t) * 581 - (MAX_RXDS_PER_BLOCK + 1)), 582 - GFP_KERNEL); 583 - if (!mac_control->rings[i].ba[j]) 548 + if (!mac_control->rings[i].ba) 584 549 return -ENOMEM; 585 - while (k != MAX_RXDS_PER_BLOCK) { 586 - ba = &mac_control->rings[i].ba[j][k]; 587 - 588 - ba->ba_0_org = (void *) kmalloc 589 - (BUF0_LEN + ALIGN_SIZE, GFP_KERNEL); 590 - if (!ba->ba_0_org) 550 + for (j = 0; j < blk_cnt; j++) { 551 + int k = 0; 552 + mac_control->rings[i].ba[j] = 553 + kmalloc((sizeof(buffAdd_t) * 554 + (rxd_count[nic->rxd_mode] + 1)), 555 + GFP_KERNEL); 556 + if (!mac_control->rings[i].ba[j]) 591 557 return -ENOMEM; 592 - tmp = (unsigned long) ba->ba_0_org; 593 - tmp += ALIGN_SIZE; 594 - tmp &= ~((unsigned long) ALIGN_SIZE); 595 - ba->ba_0 = (void *) tmp; 558 + while (k != rxd_count[nic->rxd_mode]) { 559 + ba = &mac_control->rings[i].ba[j][k]; 596 560 597 - ba->ba_1_org = (void *) kmalloc 598 - (BUF1_LEN + ALIGN_SIZE, GFP_KERNEL); 599 - if (!ba->ba_1_org) 600 - return -ENOMEM; 601 - tmp = (unsigned long) ba->ba_1_org; 602 - tmp += ALIGN_SIZE; 603 - tmp &= ~((unsigned long) ALIGN_SIZE); 604 - ba->ba_1 = (void *) tmp; 605 - k++; 561 + ba->ba_0_org = (void *) kmalloc 562 + (BUF0_LEN + ALIGN_SIZE, GFP_KERNEL); 563 + if (!ba->ba_0_org) 564 + return -ENOMEM; 565 + tmp = (unsigned long)ba->ba_0_org; 566 + tmp += ALIGN_SIZE; 567 + tmp &= ~((unsigned long) ALIGN_SIZE); 568 + ba->ba_0 = (void *) tmp; 569 + 570 + ba->ba_1_org = (void *) kmalloc 571 + (BUF1_LEN + ALIGN_SIZE, GFP_KERNEL); 572 + if (!ba->ba_1_org) 573 + return -ENOMEM; 574 + tmp = (unsigned long) ba->ba_1_org; 575 + tmp += 
ALIGN_SIZE; 576 + tmp &= ~((unsigned long) ALIGN_SIZE); 577 + ba->ba_1 = (void *) tmp; 578 + k++; 579 + } 606 580 } 607 581 } 608 582 } 609 - #endif 610 583 611 584 /* Allocation and initialization of Statistics block */ 612 585 size = sizeof(StatInfo_t); ··· 688 669 kfree(mac_control->fifos[i].list_info); 689 670 } 690 671 691 - #ifndef CONFIG_2BUFF_MODE 692 - size = (MAX_RXDS_PER_BLOCK + 1) * (sizeof(RxD_t)); 693 - #else 694 672 size = SIZE_OF_BLOCK; 695 - #endif 696 673 for (i = 0; i < config->rx_ring_num; i++) { 697 674 blk_cnt = mac_control->rings[i].block_count; 698 675 for (j = 0; j < blk_cnt; j++) { ··· 700 685 break; 701 686 pci_free_consistent(nic->pdev, size, 702 687 tmp_v_addr, tmp_p_addr); 688 + kfree(mac_control->rings[i].rx_blocks[j].rxds); 703 689 } 704 690 } 705 691 706 - #ifdef CONFIG_2BUFF_MODE 707 - /* Freeing buffer storage addresses in 2BUFF mode. */ 708 - for (i = 0; i < config->rx_ring_num; i++) { 709 - blk_cnt = 710 - config->rx_cfg[i].num_rxd / (MAX_RXDS_PER_BLOCK + 1); 711 - for (j = 0; j < blk_cnt; j++) { 712 - int k = 0; 713 - if (!mac_control->rings[i].ba[j]) 714 - continue; 715 - while (k != MAX_RXDS_PER_BLOCK) { 716 - buffAdd_t *ba = &mac_control->rings[i].ba[j][k]; 717 - kfree(ba->ba_0_org); 718 - kfree(ba->ba_1_org); 719 - k++; 692 + if (nic->rxd_mode >= RXD_MODE_3A) { 693 + /* Freeing buffer storage addresses in 2BUFF mode. 
*/ 694 + for (i = 0; i < config->rx_ring_num; i++) { 695 + blk_cnt = config->rx_cfg[i].num_rxd / 696 + (rxd_count[nic->rxd_mode] + 1); 697 + for (j = 0; j < blk_cnt; j++) { 698 + int k = 0; 699 + if (!mac_control->rings[i].ba[j]) 700 + continue; 701 + while (k != rxd_count[nic->rxd_mode]) { 702 + buffAdd_t *ba = 703 + &mac_control->rings[i].ba[j][k]; 704 + kfree(ba->ba_0_org); 705 + kfree(ba->ba_1_org); 706 + k++; 707 + } 708 + kfree(mac_control->rings[i].ba[j]); 720 709 } 721 - kfree(mac_control->rings[i].ba[j]); 710 + kfree(mac_control->rings[i].ba); 722 711 } 723 - kfree(mac_control->rings[i].ba); 724 712 } 725 - #endif 726 713 727 714 if (mac_control->stats_mem) { 728 715 pci_free_consistent(nic->pdev, ··· 1911 1894 val64 = readq(&bar0->prc_ctrl_n[i]); 1912 1895 if (nic->config.bimodal) 1913 1896 val64 |= PRC_CTRL_BIMODAL_INTERRUPT; 1914 - #ifndef CONFIG_2BUFF_MODE 1915 - val64 |= PRC_CTRL_RC_ENABLED; 1916 - #else 1917 - val64 |= PRC_CTRL_RC_ENABLED | PRC_CTRL_RING_MODE_3; 1918 - #endif 1897 + if (nic->rxd_mode == RXD_MODE_1) 1898 + val64 |= PRC_CTRL_RC_ENABLED; 1899 + else 1900 + val64 |= PRC_CTRL_RC_ENABLED | PRC_CTRL_RING_MODE_3; 1919 1901 writeq(val64, &bar0->prc_ctrl_n[i]); 1920 1902 } 1921 1903 1922 - #ifdef CONFIG_2BUFF_MODE 1923 - /* Enabling 2 buffer mode by writing into Rx_pa_cfg reg. */ 1924 - val64 = readq(&bar0->rx_pa_cfg); 1925 - val64 |= RX_PA_CFG_IGNORE_L2_ERR; 1926 - writeq(val64, &bar0->rx_pa_cfg); 1927 - #endif 1904 + if (nic->rxd_mode == RXD_MODE_3B) { 1905 + /* Enabling 2 buffer mode by writing into Rx_pa_cfg reg. */ 1906 + val64 = readq(&bar0->rx_pa_cfg); 1907 + val64 |= RX_PA_CFG_IGNORE_L2_ERR; 1908 + writeq(val64, &bar0->rx_pa_cfg); 1909 + } 1928 1910 1929 1911 /* 1930 1912 * Enabling MC-RLDRAM. 
After enabling the device, we timeout ··· 2106 2090 } 2107 2091 } 2108 2092 2093 + int fill_rxd_3buf(nic_t *nic, RxD_t *rxdp, struct sk_buff *skb) 2094 + { 2095 + struct net_device *dev = nic->dev; 2096 + struct sk_buff *frag_list; 2097 + u64 tmp; 2098 + 2099 + /* Buffer-1 receives L3/L4 headers */ 2100 + ((RxD3_t*)rxdp)->Buffer1_ptr = pci_map_single 2101 + (nic->pdev, skb->data, l3l4hdr_size + 4, 2102 + PCI_DMA_FROMDEVICE); 2103 + 2104 + /* skb_shinfo(skb)->frag_list will have L4 data payload */ 2105 + skb_shinfo(skb)->frag_list = dev_alloc_skb(dev->mtu + ALIGN_SIZE); 2106 + if (skb_shinfo(skb)->frag_list == NULL) { 2107 + DBG_PRINT(ERR_DBG, "%s: dev_alloc_skb failed\n ", dev->name); 2108 + return -ENOMEM ; 2109 + } 2110 + frag_list = skb_shinfo(skb)->frag_list; 2111 + frag_list->next = NULL; 2112 + tmp = (u64) frag_list->data; 2113 + tmp += ALIGN_SIZE; 2114 + tmp &= ~ALIGN_SIZE; 2115 + frag_list->data = (void *) tmp; 2116 + frag_list->tail = (void *) tmp; 2117 + 2118 + /* Buffer-2 receives L4 data payload */ 2119 + ((RxD3_t*)rxdp)->Buffer2_ptr = pci_map_single(nic->pdev, 2120 + frag_list->data, dev->mtu, 2121 + PCI_DMA_FROMDEVICE); 2122 + rxdp->Control_2 |= SET_BUFFER1_SIZE_3(l3l4hdr_size + 4); 2123 + rxdp->Control_2 |= SET_BUFFER2_SIZE_3(dev->mtu); 2124 + 2125 + return SUCCESS; 2126 + } 2127 + 2109 2128 /** 2110 2129 * fill_rx_buffers - Allocates the Rx side skbs 2111 2130 * @nic: device private variable ··· 2168 2117 struct sk_buff *skb; 2169 2118 RxD_t *rxdp; 2170 2119 int off, off1, size, block_no, block_no1; 2171 - int offset, offset1; 2172 2120 u32 alloc_tab = 0; 2173 2121 u32 alloc_cnt; 2174 2122 mac_info_t *mac_control; 2175 2123 struct config_param *config; 2176 - #ifdef CONFIG_2BUFF_MODE 2177 - RxD_t *rxdpnext; 2178 - int nextblk; 2179 2124 u64 tmp; 2180 2125 buffAdd_t *ba; 2181 - dma_addr_t rxdpphys; 2182 - #endif 2183 2126 #ifndef CONFIG_S2IO_NAPI 2184 2127 unsigned long flags; 2185 2128 #endif ··· 2183 2138 config = &nic->config; 2184 2139 alloc_cnt 
= mac_control->rings[ring_no].pkt_cnt - 2185 2140 atomic_read(&nic->rx_bufs_left[ring_no]); 2186 - size = dev->mtu + HEADER_ETHERNET_II_802_3_SIZE + 2187 - HEADER_802_2_SIZE + HEADER_SNAP_SIZE; 2188 2141 2189 2142 while (alloc_tab < alloc_cnt) { 2190 2143 block_no = mac_control->rings[ring_no].rx_curr_put_info. ··· 2191 2148 block_index; 2192 2149 off = mac_control->rings[ring_no].rx_curr_put_info.offset; 2193 2150 off1 = mac_control->rings[ring_no].rx_curr_get_info.offset; 2194 - #ifndef CONFIG_2BUFF_MODE 2195 - offset = block_no * (MAX_RXDS_PER_BLOCK + 1) + off; 2196 - offset1 = block_no1 * (MAX_RXDS_PER_BLOCK + 1) + off1; 2197 - #else 2198 - offset = block_no * (MAX_RXDS_PER_BLOCK) + off; 2199 - offset1 = block_no1 * (MAX_RXDS_PER_BLOCK) + off1; 2200 - #endif 2201 2151 2202 - rxdp = mac_control->rings[ring_no].rx_blocks[block_no]. 2203 - block_virt_addr + off; 2204 - if ((offset == offset1) && (rxdp->Host_Control)) { 2205 - DBG_PRINT(INTR_DBG, "%s: Get and Put", dev->name); 2152 + rxdp = mac_control->rings[ring_no]. 2153 + rx_blocks[block_no].rxds[off].virt_addr; 2154 + 2155 + if ((block_no == block_no1) && (off == off1) && 2156 + (rxdp->Host_Control)) { 2157 + DBG_PRINT(INTR_DBG, "%s: Get and Put", 2158 + dev->name); 2206 2159 DBG_PRINT(INTR_DBG, " info equated\n"); 2207 2160 goto end; 2208 2161 } 2209 - #ifndef CONFIG_2BUFF_MODE 2210 - if (rxdp->Control_1 == END_OF_BLOCK) { 2162 + if (off && (off == rxd_count[nic->rxd_mode])) { 2211 2163 mac_control->rings[ring_no].rx_curr_put_info. 2212 2164 block_index++; 2165 + if (mac_control->rings[ring_no].rx_curr_put_info. 2166 + block_index == mac_control->rings[ring_no]. 2167 + block_count) 2168 + mac_control->rings[ring_no].rx_curr_put_info. 2169 + block_index = 0; 2170 + block_no = mac_control->rings[ring_no]. 2171 + rx_curr_put_info.block_index; 2172 + if (off == rxd_count[nic->rxd_mode]) 2173 + off = 0; 2213 2174 mac_control->rings[ring_no].rx_curr_put_info. 
2214 - block_index %= mac_control->rings[ring_no].block_count; 2215 - block_no = mac_control->rings[ring_no].rx_curr_put_info. 2216 - block_index; 2217 - off++; 2218 - off %= (MAX_RXDS_PER_BLOCK + 1); 2219 - mac_control->rings[ring_no].rx_curr_put_info.offset = 2220 - off; 2221 - rxdp = (RxD_t *) ((unsigned long) rxdp->Control_2); 2175 + offset = off; 2176 + rxdp = mac_control->rings[ring_no]. 2177 + rx_blocks[block_no].block_virt_addr; 2222 2178 DBG_PRINT(INTR_DBG, "%s: Next block at: %p\n", 2223 2179 dev->name, rxdp); 2224 2180 } 2225 2181 #ifndef CONFIG_S2IO_NAPI 2226 2182 spin_lock_irqsave(&nic->put_lock, flags); 2227 2183 mac_control->rings[ring_no].put_pos = 2228 - (block_no * (MAX_RXDS_PER_BLOCK + 1)) + off; 2184 + (block_no * (rxd_count[nic->rxd_mode] + 1)) + off; 2229 2185 spin_unlock_irqrestore(&nic->put_lock, flags); 2230 2186 #endif 2231 - #else 2232 - if (rxdp->Host_Control == END_OF_BLOCK) { 2187 + if ((rxdp->Control_1 & RXD_OWN_XENA) && 2188 + ((nic->rxd_mode >= RXD_MODE_3A) && 2189 + (rxdp->Control_2 & BIT(0)))) { 2233 2190 mac_control->rings[ring_no].rx_curr_put_info. 2234 - block_index++; 2235 - mac_control->rings[ring_no].rx_curr_put_info.block_index 2236 - %= mac_control->rings[ring_no].block_count; 2237 - block_no = mac_control->rings[ring_no].rx_curr_put_info 2238 - .block_index; 2239 - off = 0; 2240 - DBG_PRINT(INTR_DBG, "%s: block%d at: 0x%llx\n", 2241 - dev->name, block_no, 2242 - (unsigned long long) rxdp->Control_1); 2243 - mac_control->rings[ring_no].rx_curr_put_info.offset = 2244 - off; 2245 - rxdp = mac_control->rings[ring_no].rx_blocks[block_no]. 
2246 - block_virt_addr; 2247 - } 2248 - #ifndef CONFIG_S2IO_NAPI 2249 - spin_lock_irqsave(&nic->put_lock, flags); 2250 - mac_control->rings[ring_no].put_pos = (block_no * 2251 - (MAX_RXDS_PER_BLOCK + 1)) + off; 2252 - spin_unlock_irqrestore(&nic->put_lock, flags); 2253 - #endif 2254 - #endif 2255 - 2256 - #ifndef CONFIG_2BUFF_MODE 2257 - if (rxdp->Control_1 & RXD_OWN_XENA) 2258 - #else 2259 - if (rxdp->Control_2 & BIT(0)) 2260 - #endif 2261 - { 2262 - mac_control->rings[ring_no].rx_curr_put_info. 2263 - offset = off; 2191 + offset = off; 2264 2192 goto end; 2265 2193 } 2266 - #ifdef CONFIG_2BUFF_MODE 2267 - /* 2268 - * RxDs Spanning cache lines will be replenished only 2269 - * if the succeeding RxD is also owned by Host. It 2270 - * will always be the ((8*i)+3) and ((8*i)+6) 2271 - * descriptors for the 48 byte descriptor. The offending 2272 - * decsriptor is of-course the 3rd descriptor. 2273 - */ 2274 - rxdpphys = mac_control->rings[ring_no].rx_blocks[block_no]. 2275 - block_dma_addr + (off * sizeof(RxD_t)); 2276 - if (((u64) (rxdpphys)) % 128 > 80) { 2277 - rxdpnext = mac_control->rings[ring_no].rx_blocks[block_no]. 
2278 - block_virt_addr + (off + 1); 2279 - if (rxdpnext->Host_Control == END_OF_BLOCK) { 2280 - nextblk = (block_no + 1) % 2281 - (mac_control->rings[ring_no].block_count); 2282 - rxdpnext = mac_control->rings[ring_no].rx_blocks 2283 - [nextblk].block_virt_addr; 2284 - } 2285 - if (rxdpnext->Control_2 & BIT(0)) 2286 - goto end; 2287 - } 2288 - #endif 2194 + /* calculate size of skb based on ring mode */ 2195 + size = dev->mtu + HEADER_ETHERNET_II_802_3_SIZE + 2196 + HEADER_802_2_SIZE + HEADER_SNAP_SIZE; 2197 + if (nic->rxd_mode == RXD_MODE_1) 2198 + size += NET_IP_ALIGN; 2199 + else if (nic->rxd_mode == RXD_MODE_3B) 2200 + size = dev->mtu + ALIGN_SIZE + BUF0_LEN + 4; 2201 + else 2202 + size = l3l4hdr_size + ALIGN_SIZE + BUF0_LEN + 4; 2289 2203 2290 - #ifndef CONFIG_2BUFF_MODE 2291 - skb = dev_alloc_skb(size + NET_IP_ALIGN); 2292 - #else 2293 - skb = dev_alloc_skb(dev->mtu + ALIGN_SIZE + BUF0_LEN + 4); 2294 - #endif 2295 - if (!skb) { 2204 + /* allocate skb */ 2205 + skb = dev_alloc_skb(size); 2206 + if(!skb) { 2296 2207 DBG_PRINT(ERR_DBG, "%s: Out of ", dev->name); 2297 2208 DBG_PRINT(ERR_DBG, "memory to allocate SKBs\n"); 2298 2209 if (first_rxdp) { 2299 2210 wmb(); 2300 2211 first_rxdp->Control_1 |= RXD_OWN_XENA; 2301 2212 } 2302 - return -ENOMEM; 2213 + return -ENOMEM ; 2303 2214 } 2304 - #ifndef CONFIG_2BUFF_MODE 2305 - skb_reserve(skb, NET_IP_ALIGN); 2306 - memset(rxdp, 0, sizeof(RxD_t)); 2307 - rxdp->Buffer0_ptr = pci_map_single 2308 - (nic->pdev, skb->data, size, PCI_DMA_FROMDEVICE); 2309 - rxdp->Control_2 &= (~MASK_BUFFER0_SIZE); 2310 - rxdp->Control_2 |= SET_BUFFER0_SIZE(size); 2215 + if (nic->rxd_mode == RXD_MODE_1) { 2216 + /* 1 buffer mode - normal operation mode */ 2217 + memset(rxdp, 0, sizeof(RxD1_t)); 2218 + skb_reserve(skb, NET_IP_ALIGN); 2219 + ((RxD1_t*)rxdp)->Buffer0_ptr = pci_map_single 2220 + (nic->pdev, skb->data, size, PCI_DMA_FROMDEVICE); 2221 + rxdp->Control_2 &= (~MASK_BUFFER0_SIZE_1); 2222 + rxdp->Control_2 |= SET_BUFFER0_SIZE_1(size); 
2223 + 2224 + } else if (nic->rxd_mode >= RXD_MODE_3A) { 2225 + /* 2226 + * 2 or 3 buffer mode - 2227 + * Both 2 buffer mode and 3 buffer mode provides 128 2228 + * byte aligned receive buffers. 2229 + * 2230 + * 3 buffer mode provides header separation where in 2231 + * skb->data will have L3/L4 headers where as 2232 + * skb_shinfo(skb)->frag_list will have the L4 data 2233 + * payload 2234 + */ 2235 + 2236 + memset(rxdp, 0, sizeof(RxD3_t)); 2237 + ba = &mac_control->rings[ring_no].ba[block_no][off]; 2238 + skb_reserve(skb, BUF0_LEN); 2239 + tmp = (u64)(unsigned long) skb->data; 2240 + tmp += ALIGN_SIZE; 2241 + tmp &= ~ALIGN_SIZE; 2242 + skb->data = (void *) (unsigned long)tmp; 2243 + skb->tail = (void *) (unsigned long)tmp; 2244 + 2245 + ((RxD3_t*)rxdp)->Buffer0_ptr = 2246 + pci_map_single(nic->pdev, ba->ba_0, BUF0_LEN, 2247 + PCI_DMA_FROMDEVICE); 2248 + rxdp->Control_2 = SET_BUFFER0_SIZE_3(BUF0_LEN); 2249 + if (nic->rxd_mode == RXD_MODE_3B) { 2250 + /* Two buffer mode */ 2251 + 2252 + /* 2253 + * Buffer2 will have L3/L4 header plus 2254 + * L4 payload 2255 + */ 2256 + ((RxD3_t*)rxdp)->Buffer2_ptr = pci_map_single 2257 + (nic->pdev, skb->data, dev->mtu + 4, 2258 + PCI_DMA_FROMDEVICE); 2259 + 2260 + /* Buffer-1 will be dummy buffer not used */ 2261 + ((RxD3_t*)rxdp)->Buffer1_ptr = 2262 + pci_map_single(nic->pdev, ba->ba_1, BUF1_LEN, 2263 + PCI_DMA_FROMDEVICE); 2264 + rxdp->Control_2 |= SET_BUFFER1_SIZE_3(1); 2265 + rxdp->Control_2 |= SET_BUFFER2_SIZE_3 2266 + (dev->mtu + 4); 2267 + } else { 2268 + /* 3 buffer mode */ 2269 + if (fill_rxd_3buf(nic, rxdp, skb) == -ENOMEM) { 2270 + dev_kfree_skb_irq(skb); 2271 + if (first_rxdp) { 2272 + wmb(); 2273 + first_rxdp->Control_1 |= 2274 + RXD_OWN_XENA; 2275 + } 2276 + return -ENOMEM ; 2277 + } 2278 + } 2279 + rxdp->Control_2 |= BIT(0); 2280 + } 2311 2281 rxdp->Host_Control = (unsigned long) (skb); 2312 2282 if (alloc_tab & ((1 << rxsync_frequency) - 1)) 2313 2283 rxdp->Control_1 |= RXD_OWN_XENA; 2314 2284 off++; 2315 - off 
%= (MAX_RXDS_PER_BLOCK + 1); 2285 + if (off == (rxd_count[nic->rxd_mode] + 1)) 2286 + off = 0; 2316 2287 mac_control->rings[ring_no].rx_curr_put_info.offset = off; 2317 - #else 2318 - ba = &mac_control->rings[ring_no].ba[block_no][off]; 2319 - skb_reserve(skb, BUF0_LEN); 2320 - tmp = ((unsigned long) skb->data & ALIGN_SIZE); 2321 - if (tmp) 2322 - skb_reserve(skb, (ALIGN_SIZE + 1) - tmp); 2323 2288 2324 - memset(rxdp, 0, sizeof(RxD_t)); 2325 - rxdp->Buffer2_ptr = pci_map_single 2326 - (nic->pdev, skb->data, dev->mtu + BUF0_LEN + 4, 2327 - PCI_DMA_FROMDEVICE); 2328 - rxdp->Buffer0_ptr = 2329 - pci_map_single(nic->pdev, ba->ba_0, BUF0_LEN, 2330 - PCI_DMA_FROMDEVICE); 2331 - rxdp->Buffer1_ptr = 2332 - pci_map_single(nic->pdev, ba->ba_1, BUF1_LEN, 2333 - PCI_DMA_FROMDEVICE); 2334 - 2335 - rxdp->Control_2 = SET_BUFFER2_SIZE(dev->mtu + 4); 2336 - rxdp->Control_2 |= SET_BUFFER0_SIZE(BUF0_LEN); 2337 - rxdp->Control_2 |= SET_BUFFER1_SIZE(1); /* dummy. */ 2338 - rxdp->Control_2 |= BIT(0); /* Set Buffer_Empty bit. */ 2339 - rxdp->Host_Control = (u64) ((unsigned long) (skb)); 2340 - if (alloc_tab & ((1 << rxsync_frequency) - 1)) 2341 - rxdp->Control_1 |= RXD_OWN_XENA; 2342 - off++; 2343 - mac_control->rings[ring_no].rx_curr_put_info.offset = off; 2344 - #endif 2345 2289 rxdp->Control_2 |= SET_RXD_MARKER; 2346 - 2347 2290 if (!(alloc_tab & ((1 << rxsync_frequency) - 1))) { 2348 2291 if (first_rxdp) { 2349 2292 wmb(); ··· 2354 2325 return SUCCESS; 2355 2326 } 2356 2327 2328 + static void free_rxd_blk(struct s2io_nic *sp, int ring_no, int blk) 2329 + { 2330 + struct net_device *dev = sp->dev; 2331 + int j; 2332 + struct sk_buff *skb; 2333 + RxD_t *rxdp; 2334 + mac_info_t *mac_control; 2335 + buffAdd_t *ba; 2336 + 2337 + mac_control = &sp->mac_control; 2338 + for (j = 0 ; j < rxd_count[sp->rxd_mode]; j++) { 2339 + rxdp = mac_control->rings[ring_no]. 
2340 + rx_blocks[blk].rxds[j].virt_addr; 2341 + skb = (struct sk_buff *) 2342 + ((unsigned long) rxdp->Host_Control); 2343 + if (!skb) { 2344 + continue; 2345 + } 2346 + if (sp->rxd_mode == RXD_MODE_1) { 2347 + pci_unmap_single(sp->pdev, (dma_addr_t) 2348 + ((RxD1_t*)rxdp)->Buffer0_ptr, 2349 + dev->mtu + 2350 + HEADER_ETHERNET_II_802_3_SIZE 2351 + + HEADER_802_2_SIZE + 2352 + HEADER_SNAP_SIZE, 2353 + PCI_DMA_FROMDEVICE); 2354 + memset(rxdp, 0, sizeof(RxD1_t)); 2355 + } else if(sp->rxd_mode == RXD_MODE_3B) { 2356 + ba = &mac_control->rings[ring_no]. 2357 + ba[blk][j]; 2358 + pci_unmap_single(sp->pdev, (dma_addr_t) 2359 + ((RxD3_t*)rxdp)->Buffer0_ptr, 2360 + BUF0_LEN, 2361 + PCI_DMA_FROMDEVICE); 2362 + pci_unmap_single(sp->pdev, (dma_addr_t) 2363 + ((RxD3_t*)rxdp)->Buffer1_ptr, 2364 + BUF1_LEN, 2365 + PCI_DMA_FROMDEVICE); 2366 + pci_unmap_single(sp->pdev, (dma_addr_t) 2367 + ((RxD3_t*)rxdp)->Buffer2_ptr, 2368 + dev->mtu + 4, 2369 + PCI_DMA_FROMDEVICE); 2370 + memset(rxdp, 0, sizeof(RxD3_t)); 2371 + } else { 2372 + pci_unmap_single(sp->pdev, (dma_addr_t) 2373 + ((RxD3_t*)rxdp)->Buffer0_ptr, BUF0_LEN, 2374 + PCI_DMA_FROMDEVICE); 2375 + pci_unmap_single(sp->pdev, (dma_addr_t) 2376 + ((RxD3_t*)rxdp)->Buffer1_ptr, 2377 + l3l4hdr_size + 4, 2378 + PCI_DMA_FROMDEVICE); 2379 + pci_unmap_single(sp->pdev, (dma_addr_t) 2380 + ((RxD3_t*)rxdp)->Buffer2_ptr, dev->mtu, 2381 + PCI_DMA_FROMDEVICE); 2382 + memset(rxdp, 0, sizeof(RxD3_t)); 2383 + } 2384 + dev_kfree_skb(skb); 2385 + atomic_dec(&sp->rx_bufs_left[ring_no]); 2386 + } 2387 + } 2388 + 2357 2389 /** 2358 2390 * free_rx_buffers - Frees all Rx buffers 2359 2391 * @sp: device private variable. 
··· 2427 2337 static void free_rx_buffers(struct s2io_nic *sp) 2428 2338 { 2429 2339 struct net_device *dev = sp->dev; 2430 - int i, j, blk = 0, off, buf_cnt = 0; 2431 - RxD_t *rxdp; 2432 - struct sk_buff *skb; 2340 + int i, blk = 0, buf_cnt = 0; 2433 2341 mac_info_t *mac_control; 2434 2342 struct config_param *config; 2435 - #ifdef CONFIG_2BUFF_MODE 2436 - buffAdd_t *ba; 2437 - #endif 2438 2343 2439 2344 mac_control = &sp->mac_control; 2440 2345 config = &sp->config; 2441 2346 2442 2347 for (i = 0; i < config->rx_ring_num; i++) { 2443 - for (j = 0, blk = 0; j < config->rx_cfg[i].num_rxd; j++) { 2444 - off = j % (MAX_RXDS_PER_BLOCK + 1); 2445 - rxdp = mac_control->rings[i].rx_blocks[blk]. 2446 - block_virt_addr + off; 2348 + for (blk = 0; blk < rx_ring_sz[i]; blk++) 2349 + free_rxd_blk(sp,i,blk); 2447 2350 2448 - #ifndef CONFIG_2BUFF_MODE 2449 - if (rxdp->Control_1 == END_OF_BLOCK) { 2450 - rxdp = 2451 - (RxD_t *) ((unsigned long) rxdp-> 2452 - Control_2); 2453 - j++; 2454 - blk++; 2455 - } 2456 - #else 2457 - if (rxdp->Host_Control == END_OF_BLOCK) { 2458 - blk++; 2459 - continue; 2460 - } 2461 - #endif 2462 - 2463 - if (!(rxdp->Control_1 & RXD_OWN_XENA)) { 2464 - memset(rxdp, 0, sizeof(RxD_t)); 2465 - continue; 2466 - } 2467 - 2468 - skb = 2469 - (struct sk_buff *) ((unsigned long) rxdp-> 2470 - Host_Control); 2471 - if (skb) { 2472 - #ifndef CONFIG_2BUFF_MODE 2473 - pci_unmap_single(sp->pdev, (dma_addr_t) 2474 - rxdp->Buffer0_ptr, 2475 - dev->mtu + 2476 - HEADER_ETHERNET_II_802_3_SIZE 2477 - + HEADER_802_2_SIZE + 2478 - HEADER_SNAP_SIZE, 2479 - PCI_DMA_FROMDEVICE); 2480 - #else 2481 - ba = &mac_control->rings[i].ba[blk][off]; 2482 - pci_unmap_single(sp->pdev, (dma_addr_t) 2483 - rxdp->Buffer0_ptr, 2484 - BUF0_LEN, 2485 - PCI_DMA_FROMDEVICE); 2486 - pci_unmap_single(sp->pdev, (dma_addr_t) 2487 - rxdp->Buffer1_ptr, 2488 - BUF1_LEN, 2489 - PCI_DMA_FROMDEVICE); 2490 - pci_unmap_single(sp->pdev, (dma_addr_t) 2491 - rxdp->Buffer2_ptr, 2492 - dev->mtu + BUF0_LEN + 4, 
2493 - PCI_DMA_FROMDEVICE); 2494 - #endif 2495 - dev_kfree_skb(skb); 2496 - atomic_dec(&sp->rx_bufs_left[i]); 2497 - buf_cnt++; 2498 - } 2499 - memset(rxdp, 0, sizeof(RxD_t)); 2500 - } 2501 2351 mac_control->rings[i].rx_curr_put_info.block_index = 0; 2502 2352 mac_control->rings[i].rx_curr_get_info.block_index = 0; 2503 2353 mac_control->rings[i].rx_curr_put_info.offset = 0; ··· 2543 2513 { 2544 2514 nic_t *nic = ring_data->nic; 2545 2515 struct net_device *dev = (struct net_device *) nic->dev; 2546 - int get_block, get_offset, put_block, put_offset, ring_bufs; 2516 + int get_block, put_block, put_offset; 2547 2517 rx_curr_get_info_t get_info, put_info; 2548 2518 RxD_t *rxdp; 2549 2519 struct sk_buff *skb; ··· 2562 2532 get_block = get_info.block_index; 2563 2533 put_info = ring_data->rx_curr_put_info; 2564 2534 put_block = put_info.block_index; 2565 - ring_bufs = get_info.ring_len+1; 2566 - rxdp = ring_data->rx_blocks[get_block].block_virt_addr + 2567 - get_info.offset; 2568 - get_offset = (get_block * (MAX_RXDS_PER_BLOCK + 1)) + 2569 - get_info.offset; 2535 + rxdp = ring_data->rx_blocks[get_block].rxds[get_info.offset].virt_addr; 2570 2536 #ifndef CONFIG_S2IO_NAPI 2571 2537 spin_lock(&nic->put_lock); 2572 2538 put_offset = ring_data->put_pos; 2573 2539 spin_unlock(&nic->put_lock); 2574 2540 #else 2575 - put_offset = (put_block * (MAX_RXDS_PER_BLOCK + 1)) + 2541 + put_offset = (put_block * (rxd_count[nic->rxd_mode] + 1)) + 2576 2542 put_info.offset; 2577 2543 #endif 2578 - while (RXD_IS_UP2DT(rxdp) && 2579 - (((get_offset + 1) % ring_bufs) != put_offset)) { 2544 + while (RXD_IS_UP2DT(rxdp)) { 2545 + /* If your are next to put index then it's FIFO full condition */ 2546 + if ((get_block == put_block) && 2547 + (get_info.offset + 1) == put_info.offset) { 2548 + DBG_PRINT(ERR_DBG, "%s: Ring Full\n",dev->name); 2549 + break; 2550 + } 2580 2551 skb = (struct sk_buff *) ((unsigned long)rxdp->Host_Control); 2581 2552 if (skb == NULL) { 2582 2553 DBG_PRINT(ERR_DBG, "%s: 
The skb is ", ··· 2586 2555 spin_unlock(&nic->rx_lock); 2587 2556 return; 2588 2557 } 2589 - #ifndef CONFIG_2BUFF_MODE 2590 - pci_unmap_single(nic->pdev, (dma_addr_t) 2591 - rxdp->Buffer0_ptr, 2558 + if (nic->rxd_mode == RXD_MODE_1) { 2559 + pci_unmap_single(nic->pdev, (dma_addr_t) 2560 + ((RxD1_t*)rxdp)->Buffer0_ptr, 2592 2561 dev->mtu + 2593 2562 HEADER_ETHERNET_II_802_3_SIZE + 2594 2563 HEADER_802_2_SIZE + 2595 2564 HEADER_SNAP_SIZE, 2596 2565 PCI_DMA_FROMDEVICE); 2597 - #else 2598 - pci_unmap_single(nic->pdev, (dma_addr_t) 2599 - rxdp->Buffer0_ptr, 2566 + } else if (nic->rxd_mode == RXD_MODE_3B) { 2567 + pci_unmap_single(nic->pdev, (dma_addr_t) 2568 + ((RxD3_t*)rxdp)->Buffer0_ptr, 2600 2569 BUF0_LEN, PCI_DMA_FROMDEVICE); 2601 - pci_unmap_single(nic->pdev, (dma_addr_t) 2602 - rxdp->Buffer1_ptr, 2570 + pci_unmap_single(nic->pdev, (dma_addr_t) 2571 + ((RxD3_t*)rxdp)->Buffer1_ptr, 2603 2572 BUF1_LEN, PCI_DMA_FROMDEVICE); 2604 - pci_unmap_single(nic->pdev, (dma_addr_t) 2605 - rxdp->Buffer2_ptr, 2606 - dev->mtu + BUF0_LEN + 4, 2573 + pci_unmap_single(nic->pdev, (dma_addr_t) 2574 + ((RxD3_t*)rxdp)->Buffer2_ptr, 2575 + dev->mtu + 4, 2607 2576 PCI_DMA_FROMDEVICE); 2608 - #endif 2577 + } else { 2578 + pci_unmap_single(nic->pdev, (dma_addr_t) 2579 + ((RxD3_t*)rxdp)->Buffer0_ptr, BUF0_LEN, 2580 + PCI_DMA_FROMDEVICE); 2581 + pci_unmap_single(nic->pdev, (dma_addr_t) 2582 + ((RxD3_t*)rxdp)->Buffer1_ptr, 2583 + l3l4hdr_size + 4, 2584 + PCI_DMA_FROMDEVICE); 2585 + pci_unmap_single(nic->pdev, (dma_addr_t) 2586 + ((RxD3_t*)rxdp)->Buffer2_ptr, 2587 + dev->mtu, PCI_DMA_FROMDEVICE); 2588 + } 2609 2589 rx_osm_handler(ring_data, rxdp); 2610 2590 get_info.offset++; 2611 - ring_data->rx_curr_get_info.offset = 2612 - get_info.offset; 2613 - rxdp = ring_data->rx_blocks[get_block].block_virt_addr + 2614 - get_info.offset; 2615 - if (get_info.offset && 2616 - (!(get_info.offset % MAX_RXDS_PER_BLOCK))) { 2591 + ring_data->rx_curr_get_info.offset = get_info.offset; 2592 + rxdp = 
ring_data->rx_blocks[get_block]. 2593 + rxds[get_info.offset].virt_addr; 2594 + if (get_info.offset == rxd_count[nic->rxd_mode]) { 2617 2595 get_info.offset = 0; 2618 - ring_data->rx_curr_get_info.offset 2619 - = get_info.offset; 2596 + ring_data->rx_curr_get_info.offset = get_info.offset; 2620 2597 get_block++; 2621 - get_block %= ring_data->block_count; 2622 - ring_data->rx_curr_get_info.block_index 2623 - = get_block; 2598 + if (get_block == ring_data->block_count) 2599 + get_block = 0; 2600 + ring_data->rx_curr_get_info.block_index = get_block; 2624 2601 rxdp = ring_data->rx_blocks[get_block].block_virt_addr; 2625 2602 } 2626 2603 2627 - get_offset = (get_block * (MAX_RXDS_PER_BLOCK + 1)) + 2628 - get_info.offset; 2629 2604 #ifdef CONFIG_S2IO_NAPI 2630 2605 nic->pkts_to_process -= 1; 2631 2606 if (!nic->pkts_to_process) ··· 3081 3044 3082 3045 int wait_for_msix_trans(nic_t *nic, int i) 3083 3046 { 3084 - XENA_dev_config_t __iomem *bar0 = nic->bar0; 3047 + XENA_dev_config_t *bar0 = (XENA_dev_config_t *) nic->bar0; 3085 3048 u64 val64; 3086 3049 int ret = 0, cnt = 0; 3087 3050 ··· 3102 3065 3103 3066 void restore_xmsi_data(nic_t *nic) 3104 3067 { 3105 - XENA_dev_config_t __iomem *bar0 = nic->bar0; 3068 + XENA_dev_config_t *bar0 = (XENA_dev_config_t *) nic->bar0; 3106 3069 u64 val64; 3107 3070 int i; 3108 3071 ··· 3120 3083 3121 3084 void store_xmsi_data(nic_t *nic) 3122 3085 { 3123 - XENA_dev_config_t __iomem *bar0 = nic->bar0; 3086 + XENA_dev_config_t *bar0 = (XENA_dev_config_t *) nic->bar0; 3124 3087 u64 val64, addr, data; 3125 3088 int i; 3126 3089 ··· 3143 3106 3144 3107 int s2io_enable_msi(nic_t *nic) 3145 3108 { 3146 - XENA_dev_config_t __iomem *bar0 = nic->bar0; 3109 + XENA_dev_config_t *bar0 = (XENA_dev_config_t *) nic->bar0; 3147 3110 u16 msi_ctrl, msg_val; 3148 3111 struct config_param *config = &nic->config; 3149 3112 struct net_device *dev = nic->dev; ··· 3193 3156 3194 3157 int s2io_enable_msi_x(nic_t *nic) 3195 3158 { 3196 - XENA_dev_config_t 
__iomem *bar0 = nic->bar0;
3159 + XENA_dev_config_t *bar0 = (XENA_dev_config_t *) nic->bar0;
3197 3160 u64 tx_mat, rx_mat;
3198 3161 u16 msi_control; /* Temp variable */
3199 3162 int ret, i, j, msix_indx = 1;
··· 5574 5537 ((unsigned long) rxdp->Host_Control);
5575 5538 int ring_no = ring_data->ring_no;
5576 5539 u16 l3_csum, l4_csum;
5577 - #ifdef CONFIG_2BUFF_MODE
5578 - int buf0_len = RXD_GET_BUFFER0_SIZE(rxdp->Control_2);
5579 - int buf2_len = RXD_GET_BUFFER2_SIZE(rxdp->Control_2);
5580 - int get_block = ring_data->rx_curr_get_info.block_index;
5581 - int get_off = ring_data->rx_curr_get_info.offset;
5582 - buffAdd_t *ba = &ring_data->ba[get_block][get_off];
5583 - unsigned char *buff;
5584 - #else
5585 - u16 len = (u16) ((RXD_GET_BUFFER0_SIZE(rxdp->Control_2)) >> 48);;
5586 - #endif
5540 +
5587 5541 skb->dev = dev;
5588 5542 if (rxdp->Control_1 & RXD_T_CODE) {
5589 5543 unsigned long long err = rxdp->Control_1 & RXD_T_CODE;
··· 5591 5563 rxdp->Host_Control = 0;
5592 5564 sp->rx_pkt_count++;
5593 5565 sp->stats.rx_packets++;
5594 - #ifndef CONFIG_2BUFF_MODE
5595 - sp->stats.rx_bytes += len;
5596 - #else
5597 - sp->stats.rx_bytes += buf0_len + buf2_len;
5598 - #endif
5566 + if (sp->rxd_mode == RXD_MODE_1) {
5567 + int len = RXD_GET_BUFFER0_SIZE_1(rxdp->Control_2);
5599 5568
5600 - #ifndef CONFIG_2BUFF_MODE
5601 - skb_put(skb, len);
5602 - #else
5603 - buff = skb_push(skb, buf0_len);
5604 - memcpy(buff, ba->ba_0, buf0_len);
5605 - skb_put(skb, buf2_len);
5606 - #endif
5569 + sp->stats.rx_bytes += len;
5570 + skb_put(skb, len);
5571 +
5572 + } else if (sp->rxd_mode >= RXD_MODE_3A) {
5573 + int get_block = ring_data->rx_curr_get_info.block_index;
5574 + int get_off = ring_data->rx_curr_get_info.offset;
5575 + int buf0_len = RXD_GET_BUFFER0_SIZE_3(rxdp->Control_2);
5576 + int buf2_len = RXD_GET_BUFFER2_SIZE_3(rxdp->Control_2);
5577 + unsigned char *buff = skb_push(skb, buf0_len);
5578 +
5579 + buffAdd_t *ba = &ring_data->ba[get_block][get_off];
5580 + sp->stats.rx_bytes += buf0_len + buf2_len;
5581 + memcpy(buff, ba->ba_0, buf0_len);
5582 +
5583 + if (sp->rxd_mode == RXD_MODE_3A) {
5584 + int buf1_len = RXD_GET_BUFFER1_SIZE_3(rxdp->Control_2);
5585 +
5586 + skb_put(skb, buf1_len);
5587 + skb->len += buf2_len;
5588 + skb->data_len += buf2_len;
5589 + skb->truesize += buf2_len;
5590 + skb_put(skb_shinfo(skb)->frag_list, buf2_len);
5591 + sp->stats.rx_bytes += buf1_len;
5592 +
5593 + } else
5594 + skb_put(skb, buf2_len);
5595 + }
5607 5596
5608 5597 if ((rxdp->Control_1 & TCP_OR_UDP_FRAME) &&
5609 5598 (sp->rx_csum)) {
··· 5756 5711
5757 5712 module_param(tx_fifo_num, int, 0);
5758 5713 module_param(rx_ring_num, int, 0);
5714 + module_param(rx_ring_mode, int, 0);
5759 5715 module_param_array(tx_fifo_len, uint, NULL, 0);
5760 5716 module_param_array(rx_ring_sz, uint, NULL, 0);
5761 5717 module_param_array(rts_frm_len, uint, NULL, 0);
··· 5768 5722 module_param(tmac_util_period, int, 0);
5769 5723 module_param(rmac_util_period, int, 0);
5770 5724 module_param(bimodal, bool, 0);
5725 + module_param(l3l4hdr_size, int , 0);
5771 5726 #ifndef CONFIG_S2IO_NAPI
5772 5727 module_param(indicate_max_pkts, int, 0);
5773 5728 #endif
··· 5890 5843 sp->pdev = pdev;
5891 5844 sp->high_dma_flag = dma_flag;
5892 5845 sp->device_enabled_once = FALSE;
5846 + if (rx_ring_mode == 1)
5847 + sp->rxd_mode = RXD_MODE_1;
5848 + if (rx_ring_mode == 2)
5849 + sp->rxd_mode = RXD_MODE_3B;
5850 + if (rx_ring_mode == 3)
5851 + sp->rxd_mode = RXD_MODE_3A;
5852 +
5893 5853 sp->intr_type = dev_intr_type;
5894 5854
5895 5855 if ((pdev->device == PCI_DEVICE_ID_HERC_WIN) ||
··· 5949 5895 config->rx_ring_num = rx_ring_num;
5950 5896 for (i = 0; i < MAX_RX_RINGS; i++) {
5951 5897 config->rx_cfg[i].num_rxd = rx_ring_sz[i] *
5952 - (MAX_RXDS_PER_BLOCK + 1);
5898 + (rxd_count[sp->rxd_mode] + 1);
5953 5899 config->rx_cfg[i].ring_priority = i;
5954 5900 }
5955 5901
··· 6144 6090 DBG_PRINT(ERR_DBG, "(rev %d), Version %s",
6145 6091 get_xena_rev_id(sp->pdev),
6146 6092 s2io_driver_version);
6147 - #ifdef CONFIG_2BUFF_MODE
6148 - DBG_PRINT(ERR_DBG, ", Buffer mode %d",2);
6149 - #endif
6150 6093 switch(sp->intr_type) {
6151 6094 case INTA:
6152 6095 DBG_PRINT(ERR_DBG, ", Intr type INTA");
··· 6176 6125 DBG_PRINT(ERR_DBG, "(rev %d), Version %s",
6177 6126 get_xena_rev_id(sp->pdev),
6178 6127 s2io_driver_version);
6179 - #ifdef CONFIG_2BUFF_MODE
6180 - DBG_PRINT(ERR_DBG, ", Buffer mode %d",2);
6181 - #endif
6182 6128 switch(sp->intr_type) {
6183 6129 case INTA:
6184 6130 DBG_PRINT(ERR_DBG, ", Intr type INTA");
··· 6196 6148 sp->def_mac_addr[0].mac_addr[4],
6197 6149 sp->def_mac_addr[0].mac_addr[5]);
6198 6150 }
6151 + if (sp->rxd_mode == RXD_MODE_3B)
6152 + DBG_PRINT(ERR_DBG, "%s: 2-Buffer mode support has been "
6153 + "enabled\n",dev->name);
6154 + if (sp->rxd_mode == RXD_MODE_3A)
6155 + DBG_PRINT(ERR_DBG, "%s: 3-Buffer mode support has been "
6156 + "enabled\n",dev->name);
6199 6157
6200 6158 /* Initialize device name */
6201 6159 strcpy(sp->name, dev->name);
+47 -44
drivers/net/s2io.h
··· 418 418 void *list_virt_addr;
419 419 } list_info_hold_t;
420 420
421 - /* Rx descriptor structure */
421 + /* Rx descriptor structure for 1 buffer mode */
422 422 typedef struct _RxD_t {
423 423 u64 Host_Control; /* reserved for host */
424 424 u64 Control_1;
··· 439 439 #define SET_RXD_MARKER vBIT(THE_RXD_MARK, 0, 2)
440 440 #define GET_RXD_MARKER(ctrl) ((ctrl & SET_RXD_MARKER) >> 62)
441 441
442 - #ifndef CONFIG_2BUFF_MODE
443 - #define MASK_BUFFER0_SIZE vBIT(0x3FFF,2,14)
444 - #define SET_BUFFER0_SIZE(val) vBIT(val,2,14)
445 - #else
446 - #define MASK_BUFFER0_SIZE vBIT(0xFF,2,14)
447 - #define MASK_BUFFER1_SIZE vBIT(0xFFFF,16,16)
448 - #define MASK_BUFFER2_SIZE vBIT(0xFFFF,32,16)
449 - #define SET_BUFFER0_SIZE(val) vBIT(val,8,8)
450 - #define SET_BUFFER1_SIZE(val) vBIT(val,16,16)
451 - #define SET_BUFFER2_SIZE(val) vBIT(val,32,16)
452 - #endif
453 -
454 442 #define MASK_VLAN_TAG vBIT(0xFFFF,48,16)
455 443 #define SET_VLAN_TAG(val) vBIT(val,48,16)
456 444 #define SET_NUM_TAG(val) vBIT(val,16,32)
457 445
458 - #ifndef CONFIG_2BUFF_MODE
459 - #define RXD_GET_BUFFER0_SIZE(Control_2) (u64)((Control_2 & vBIT(0x3FFF,2,14)))
460 - #else
461 - #define RXD_GET_BUFFER0_SIZE(Control_2) (u8)((Control_2 & MASK_BUFFER0_SIZE) \
462 - >> 48)
463 - #define RXD_GET_BUFFER1_SIZE(Control_2) (u16)((Control_2 & MASK_BUFFER1_SIZE) \
464 - >> 32)
465 - #define RXD_GET_BUFFER2_SIZE(Control_2) (u16)((Control_2 & MASK_BUFFER2_SIZE) \
466 - >> 16)
446 +
447 + } RxD_t;
448 + /* Rx descriptor structure for 1 buffer mode */
449 + typedef struct _RxD1_t {
450 + struct _RxD_t h;
451 +
452 + #define MASK_BUFFER0_SIZE_1 vBIT(0x3FFF,2,14)
453 + #define SET_BUFFER0_SIZE_1(val) vBIT(val,2,14)
454 + #define RXD_GET_BUFFER0_SIZE_1(_Control_2) \
455 + (u16)((_Control_2 & MASK_BUFFER0_SIZE_1) >> 48)
456 + u64 Buffer0_ptr;
457 + } RxD1_t;
458 + /* Rx descriptor structure for 3 or 2 buffer mode */
459 +
460 + typedef struct _RxD3_t {
461 + struct _RxD_t h;
462 +
463 + #define MASK_BUFFER0_SIZE_3 vBIT(0xFF,2,14)
464 + #define MASK_BUFFER1_SIZE_3 vBIT(0xFFFF,16,16)
465 + #define MASK_BUFFER2_SIZE_3 vBIT(0xFFFF,32,16)
466 + #define SET_BUFFER0_SIZE_3(val) vBIT(val,8,8)
467 + #define SET_BUFFER1_SIZE_3(val) vBIT(val,16,16)
468 + #define SET_BUFFER2_SIZE_3(val) vBIT(val,32,16)
469 + #define RXD_GET_BUFFER0_SIZE_3(Control_2) \
470 + (u8)((Control_2 & MASK_BUFFER0_SIZE_3) >> 48)
471 + #define RXD_GET_BUFFER1_SIZE_3(Control_2) \
472 + (u16)((Control_2 & MASK_BUFFER1_SIZE_3) >> 32)
473 + #define RXD_GET_BUFFER2_SIZE_3(Control_2) \
474 + (u16)((Control_2 & MASK_BUFFER2_SIZE_3) >> 16)
467 475 #define BUF0_LEN 40
468 476 #define BUF1_LEN 1
469 - #endif
470 477
471 478 u64 Buffer0_ptr;
472 - #ifdef CONFIG_2BUFF_MODE
473 479 u64 Buffer1_ptr;
474 480 u64 Buffer2_ptr;
475 - #endif
476 - } RxD_t;
481 + } RxD3_t;
482 +
477 483
478 484 /* Structure that represents the Rx descriptor block which contains
479 485 * 128 Rx descriptors.
480 486 */
481 - #ifndef CONFIG_2BUFF_MODE
482 487 typedef struct _RxD_block {
483 - #define MAX_RXDS_PER_BLOCK 127
484 - RxD_t rxd[MAX_RXDS_PER_BLOCK];
488 + #define MAX_RXDS_PER_BLOCK_1 127
489 + RxD1_t rxd[MAX_RXDS_PER_BLOCK_1];
485 490
486 491 u64 reserved_0;
487 492 #define END_OF_BLOCK 0xFEFFFFFFFFFFFFFFULL
··· 497 492 * the upper 32 bits should
498 493 * be 0 */
499 494 } RxD_block_t;
500 - #else
501 - typedef struct _RxD_block {
502 - #define MAX_RXDS_PER_BLOCK 85
503 - RxD_t rxd[MAX_RXDS_PER_BLOCK];
504 495
505 - #define END_OF_BLOCK 0xFEFFFFFFFFFFFFFFULL
506 - u64 reserved_1; /* 0xFEFFFFFFFFFFFFFF to mark last Rxd
507 - * in this blk */
508 - u64 pNext_RxD_Blk_physical; /* Phy ponter to next blk. */
509 - } RxD_block_t;
510 496 #define SIZE_OF_BLOCK 4096
497 +
498 + #define RXD_MODE_1 0
499 + #define RXD_MODE_3A 1
500 + #define RXD_MODE_3B 2
511 501
512 502 /* Structure to hold virtual addresses of Buf0 and Buf1 in
513 503 * 2buf mode.
514 504 */
··· 512 512 void *ba_0;
513 513 void *ba_1;
514 514 } buffAdd_t;
515 - #endif
516 515
517 516 /* Structure which stores all the MAC control parameters */
518 517
··· 538 539
539 540 typedef tx_curr_get_info_t tx_curr_put_info_t;
540 541
542 +
543 + typedef struct rxd_info {
544 + void *virt_addr;
545 + dma_addr_t dma_addr;
546 + }rxd_info_t;
547 +
541 548 /* Structure that holds the Phy and virt addresses of the Blocks */
542 549 typedef struct rx_block_info {
543 - RxD_t *block_virt_addr;
550 + void *block_virt_addr;
544 551 dma_addr_t block_dma_addr;
552 + rxd_info_t *rxds;
545 553 } rx_block_info_t;
546 554
547 555 /* pre declaration of the nic structure */
··· 584 578 int put_pos;
585 579 #endif
586 580
587 - #ifdef CONFIG_2BUFF_MODE
588 581 /* Buffer Address store. */
589 582 buffAdd_t **ba;
590 - #endif
591 583 nic_t *nic;
592 584 } ring_info_t;
··· 651 647
652 648 /* Default Tunable parameters of the NIC. */
653 649 #define DEFAULT_FIFO_LEN 4096
654 - #define SMALL_RXD_CNT 30 * (MAX_RXDS_PER_BLOCK+1)
655 - #define LARGE_RXD_CNT 100 * (MAX_RXDS_PER_BLOCK+1)
656 650 #define SMALL_BLK_CNT 30
657 651 #define LARGE_BLK_CNT 100
658 652
··· 680 678
681 679 /* Structure representing one instance of the NIC */
682 680 struct s2io_nic {
681 + int rxd_mode;
683 682 #ifdef CONFIG_S2IO_NAPI
684 683 /*
685 684 * Count of packets to be processed in a given iteration, it will be indicated
+1 -1
drivers/net/wireless/airo.c
··· 2040 2040 return 1;
2041 2041 }
2042 2042
2043 - static void get_tx_error(struct airo_info *ai, u32 fid)
2043 + static void get_tx_error(struct airo_info *ai, s32 fid)
2044 2044 {
2045 2045 u16 status;
2046 2046
+3
include/linux/phy.h
··· 72 72 /* list of all PHYs on bus */
73 73 struct phy_device *phy_map[PHY_MAX_ADDR];
74 74
75 + /* Phy addresses to be ignored when probing */
76 + u32 phy_mask;
77 +
75 78 /* Pointer to an array of interrupts, each PHY's
76 79 * interrupt at the index matching its address */
77 80 int *irq;