
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Pull networking fixes from David Miller:
"Several networking final fixes and tidies for the merge window:

1) Changes during the merge window unintentionally took away the
ability to build bluetooth modular, fix from Geert Uytterhoeven.

2) Several phy_node reference count bug fixes from Uwe Kleine-König.

3) Fix ucc_geth build failures, also from Uwe Kleine-König.

4) Fix klog false positives when netlink messages go to network
taps, by properly resetting the network header. Fix from Daniel
Borkmann.

5) Sizing estimate of VF netlink messages is too small, from Jiri
Benc.

6) New APM X-Gene SoC ethernet driver, from Iyappan Subramanian.

7) VLAN untagging is erroneously dependent upon whether the VLAN
module is loaded or not, but there are generic dependencies that
matter wrt what can be expected as the SKB enters the stack.
Make the basic untagging generic code, and do it unconditionally.
From Vlad Yasevich.

8) xen-netfront only has so many slots in its transmit queue, so
linearize packets that have too many frags. From Zoltan Kiss.

9) Fix suspend/resume PHY handling in bcmgenet driver, from Florian
Fainelli"

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (55 commits)
net: bcmgenet: correctly resume adapter from Wake-on-LAN
net: bcmgenet: update UMAC_CMD only when link is detected
net: bcmgenet: correctly suspend and resume PHY device
net: bcmgenet: request and enable main clock earlier
net: ethernet: myricom: myri10ge: myri10ge.c: Cleaning up missing null-terminate after strncpy call
xen-netfront: Fix handling packets on compound pages with skb_linearize
net: fec: Support phys probed from devicetree and fixed-link
smsc: replace WARN_ON() with WARN_ON_SMP()
xen-netback: Don't deschedule NAPI when carrier off
net: ethernet: qlogic: qlcnic: Remove duplicate object file from Makefile
wan: wanxl: Remove typedefs from struct names
m68k/atari: EtherNEC - ethernet support (ne)
net: ethernet: ti: cpmac.c: Cleaning up missing null-terminate after strncpy call
hdlc: Remove typedefs from struct names
airo_cs: Remove typedef local_info_t
atmel: Remove typedef atmel_priv_ioctl
com20020_cs: Remove typedef com20020_dev_t
ethernet: amd: Remove typedef local_info_t
net: Always untag vlan-tagged traffic on input.
drivers: net: Add APM X-Gene SoC ethernet driver support.
...

+3119 -536
+66
Documentation/devicetree/bindings/net/apm-xgene-enet.txt
···
+ APM X-Gene SoC Ethernet nodes
+
+ Ethernet nodes are defined to describe on-chip ethernet interfaces in
+ APM X-Gene SoC.
+
+ Required properties:
+ - compatible: Should be "apm,xgene-enet"
+ - reg: Address and length of the register set for the device. It contains the
+   information of registers in the same order as described by reg-names
+ - reg-names: Should contain the register set names
+   - "enet_csr": Ethernet control and status register address space
+   - "ring_csr": Descriptor ring control and status register address space
+   - "ring_cmd": Descriptor ring command register address space
+ - interrupts: Ethernet main interrupt
+ - clocks: Reference to the clock entry.
+ - local-mac-address: MAC address assigned to this device
+ - phy-connection-type: Interface type between ethernet device and PHY device
+ - phy-handle: Reference to a PHY node connected to this device
+
+ - mdio: Device tree subnode with the following required properties:
+   - compatible: Must be "apm,xgene-mdio".
+   - #address-cells: Must be <1>.
+   - #size-cells: Must be <0>.
+
+ For the phy on the mdio bus, there must be a node with the following fields:
+ - compatible: PHY identifier. Please refer ./phy.txt for the format.
+ - reg: The ID number for the phy.
+
+ Optional properties:
+ - status: Should be "ok" or "disabled" for enabled/disabled. Default is "ok".
+
+ Example:
+ 	menetclk: menetclk {
+ 		compatible = "apm,xgene-device-clock";
+ 		clock-output-names = "menetclk";
+ 		status = "ok";
+ 	};
+
+ 	menet: ethernet@17020000 {
+ 		compatible = "apm,xgene-enet";
+ 		status = "disabled";
+ 		reg = <0x0 0x17020000 0x0 0xd100>,
+ 		      <0x0 0X17030000 0x0 0X400>,
+ 		      <0x0 0X10000000 0x0 0X200>;
+ 		reg-names = "enet_csr", "ring_csr", "ring_cmd";
+ 		interrupts = <0x0 0x3c 0x4>;
+ 		clocks = <&menetclk 0>;
+ 		local-mac-address = [00 01 73 00 00 01];
+ 		phy-connection-type = "rgmii";
+ 		phy-handle = <&menetphy>;
+ 		mdio {
+ 			compatible = "apm,xgene-mdio";
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 			menetphy: menetphy@3 {
+ 				compatible = "ethernet-phy-id001c.c915";
+ 				reg = <0x3>;
+ 			};
+
+ 		};
+ 	};
+
+ /* Board-specific peripheral configurations */
+ &menet {
+ 	status = "ok";
+ };
+28 -1
Documentation/devicetree/bindings/net/fsl-fec.txt
···
  only if property "phy-reset-gpios" is available. Missing the property
  will have the duration be 1 millisecond. Numbers greater than 1000 are
  invalid and 1 millisecond will be used instead.
- - phy-supply: regulator that powers the Ethernet PHY.
+ - phy-supply : regulator that powers the Ethernet PHY.
+ - phy-handle : phandle to the PHY device connected to this device.
+ - fixed-link : Assume a fixed link. See fixed-link.txt in the same directory.
+   Use instead of phy-handle.
+
+ Optional subnodes:
+ - mdio : specifies the mdio bus in the FEC, used as a container for phy nodes
+   according to phy.txt in the same directory
 
  Example:
 
···
  	phy-reset-gpios = <&gpio2 14 0>; /* GPIO2_14 */
  	local-mac-address = [00 04 9F 01 1B B9];
  	phy-supply = <&reg_fec_supply>;
+ };
+
+ Example with phy specified:
+
+ ethernet@83fec000 {
+ 	compatible = "fsl,imx51-fec", "fsl,imx27-fec";
+ 	reg = <0x83fec000 0x4000>;
+ 	interrupts = <87>;
+ 	phy-mode = "mii";
+ 	phy-reset-gpios = <&gpio2 14 0>; /* GPIO2_14 */
+ 	local-mac-address = [00 04 9F 01 1B B9];
+ 	phy-supply = <&reg_fec_supply>;
+ 	phy-handle = <&ethphy>;
+ 	mdio {
+ 		ethphy: ethernet-phy@6 {
+ 			compatible = "ethernet-phy-ieee802.3-c22";
+ 			reg = <6>;
+ 			max-speed = <100>;
+ 		};
+ 	};
  };
+8
MAINTAINERS
···
  F:	drivers/net/appletalk/
  F:	net/appletalk/
 
+ APPLIED MICRO (APM) X-GENE SOC ETHERNET DRIVER
+ M:	Iyappan Subramanian <isubramanian@apm.com>
+ M:	Keyur Chudgar <kchudgar@apm.com>
+ M:	Ravi Patel <rapatel@apm.com>
+ S:	Supported
+ F:	drivers/net/ethernet/apm/xgene/
+ F:	Documentation/devicetree/bindings/net/apm-xgene-enet.txt
+
  APTINA CAMERA SENSOR PLL
  M:	Laurent Pinchart <Laurent.pinchart@ideasonboard.com>
  L:	linux-media@vger.kernel.org
+4
arch/arm64/boot/dts/apm-mustang.dts
···
  &serial0 {
  	status = "ok";
  };
+
+ &menet {
+ 	status = "ok";
+ };
+27 -3
arch/arm64/boot/dts/apm-storm.dtsi
···
  		clock-output-names = "ethclk";
  	};
 
- 	eth8clk: eth8clk {
+ 	menetclk: menetclk {
  		compatible = "apm,xgene-device-clock";
  		#clock-cells = <1>;
  		clocks = <&ethclk 0>;
- 		clock-names = "eth8clk";
  		reg = <0x0 0x1702C000 0x0 0x1000>;
  		reg-names = "csr-reg";
- 		clock-output-names = "eth8clk";
+ 		clock-output-names = "menetclk";
  	};
 
  	sataphy1clk: sataphy1clk@1f21c000 {
···
  		interrupts = <0x0 0x46 0x4>;
  		#clock-cells = <1>;
  		clocks = <&rtcclk 0>;
+ 	};
+
+ 	menet: ethernet@17020000 {
+ 		compatible = "apm,xgene-enet";
+ 		status = "disabled";
+ 		reg = <0x0 0x17020000 0x0 0xd100>,
+ 		      <0x0 0X17030000 0x0 0X400>,
+ 		      <0x0 0X10000000 0x0 0X200>;
+ 		reg-names = "enet_csr", "ring_csr", "ring_cmd";
+ 		interrupts = <0x0 0x3c 0x4>;
+ 		dma-coherent;
+ 		clocks = <&menetclk 0>;
+ 		local-mac-address = [00 01 73 00 00 01];
+ 		phy-connection-type = "rgmii";
+ 		phy-handle = <&menetphy>;
+ 		mdio {
+ 			compatible = "apm,xgene-mdio";
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 			menetphy: menetphy@3 {
+ 				compatible = "ethernet-phy-id001c.c915";
+ 				reg = <0x3>;
+ 			};
+
+ 		};
  	};
  };
  };
+1
drivers/atm/atmtcp.c
···
  	out_vcc = find_vcc(dev, ntohs(hdr->vpi), ntohs(hdr->vci));
  	read_unlock(&vcc_sklist_lock);
  	if (!out_vcc) {
+ 		result = -EUNATCH;
  		atomic_inc(&vcc->stats->tx_err);
  		goto done;
  	}
+1
drivers/atm/solos-pci.c
···
  		card->dma_bounce = kmalloc(card->nr_ports * BUF_SIZE, GFP_KERNEL);
  		if (!card->dma_bounce) {
  			dev_warn(&card->dev->dev, "Failed to allocate DMA bounce buffers\n");
+ 			err = -ENOMEM;
  			/* Fallback to MMIO doesn't work */
  			goto out_unmap_both;
  		}
+8 -8
drivers/net/arcnet/com20020_cs.c
···
 
  /*====================================================================*/
 
- typedef struct com20020_dev_t {
+ struct com20020_dev {
  	struct net_device       *dev;
- } com20020_dev_t;
+ };
 
  static int com20020_probe(struct pcmcia_device *p_dev)
  {
- 	com20020_dev_t *info;
+ 	struct com20020_dev *info;
  	struct net_device *dev;
  	struct arcnet_local *lp;
 
  	dev_dbg(&p_dev->dev, "com20020_attach()\n");
 
  	/* Create new network device */
- 	info = kzalloc(sizeof(struct com20020_dev_t), GFP_KERNEL);
+ 	info = kzalloc(sizeof(*info), GFP_KERNEL);
  	if (!info)
  		goto fail_alloc_info;
 
···
 
  static void com20020_detach(struct pcmcia_device *link)
  {
- 	struct com20020_dev_t *info = link->priv;
+ 	struct com20020_dev *info = link->priv;
  	struct net_device *dev = info->dev;
 
  	dev_dbg(&link->dev, "detach...\n");
···
  static int com20020_config(struct pcmcia_device *link)
  {
  	struct arcnet_local *lp;
- 	com20020_dev_t *info;
+ 	struct com20020_dev *info;
  	struct net_device *dev;
  	int i, ret;
  	int ioaddr;
···
 
  static int com20020_suspend(struct pcmcia_device *link)
  {
- 	com20020_dev_t *info = link->priv;
+ 	struct com20020_dev *info = link->priv;
  	struct net_device *dev = info->dev;
 
  	if (link->open)
···
 
  static int com20020_resume(struct pcmcia_device *link)
  {
- 	com20020_dev_t *info = link->priv;
+ 	struct com20020_dev *info = link->priv;
  	struct net_device *dev = info->dev;
 
  	if (link->open) {
+2 -1
drivers/net/ethernet/8390/Kconfig
···
 
  config NE2000
  	tristate "NE2000/NE1000 support"
- 	depends on (ISA || (Q40 && m) || M32R || MACH_TX49XX)
+ 	depends on (ISA || (Q40 && m) || M32R || MACH_TX49XX || \
+ 		    ATARI_ETHERNEC)
  	select CRC32
  	---help---
  	If you have a network (Ethernet) card of this type, say Y and read
+13 -13
drivers/net/ethernet/8390/axnet_cs.c
···
 
  /*====================================================================*/
 
- typedef struct axnet_dev_t {
+ struct axnet_dev {
  	struct pcmcia_device	*p_dev;
  	caddr_t	base;
  	struct timer_list	watchdog;
···
  	int	phy_id;
  	int	flags;
  	int	active_low;
- } axnet_dev_t;
+ };
 
- static inline axnet_dev_t *PRIV(struct net_device *dev)
+ static inline struct axnet_dev *PRIV(struct net_device *dev)
  {
  	void *p = (char *)netdev_priv(dev) + sizeof(struct ei_device);
  	return p;
···
 
  static int axnet_probe(struct pcmcia_device *link)
  {
- 	axnet_dev_t *info;
+ 	struct axnet_dev *info;
  	struct net_device *dev;
  	struct ei_device *ei_local;
 
  	dev_dbg(&link->dev, "axnet_attach()\n");
 
- 	dev = alloc_etherdev(sizeof(struct ei_device) + sizeof(axnet_dev_t));
+ 	dev = alloc_etherdev(sizeof(struct ei_device) + sizeof(struct axnet_dev));
  	if (!dev)
  		return -ENOMEM;
 
···
  static int axnet_config(struct pcmcia_device *link)
  {
  	struct net_device *dev = link->priv;
- 	axnet_dev_t *info = PRIV(dev);
+ 	struct axnet_dev *info = PRIV(dev);
  	int i, j, j2, ret;
 
  	dev_dbg(&link->dev, "axnet_config(0x%p)\n", link);
···
  static int axnet_resume(struct pcmcia_device *link)
  {
  	struct net_device *dev = link->priv;
- 	axnet_dev_t *info = PRIV(dev);
+ 	struct axnet_dev *info = PRIV(dev);
 
  	if (link->open) {
  		if (info->active_low == 1)
···
  static int axnet_open(struct net_device *dev)
  {
  	int ret;
- 	axnet_dev_t *info = PRIV(dev);
+ 	struct axnet_dev *info = PRIV(dev);
  	struct pcmcia_device *link = info->p_dev;
  	unsigned int nic_base = dev->base_addr;
 
···
 
  static int axnet_close(struct net_device *dev)
  {
- 	axnet_dev_t *info = PRIV(dev);
+ 	struct axnet_dev *info = PRIV(dev);
  	struct pcmcia_device *link = info->p_dev;
 
  	dev_dbg(&link->dev, "axnet_close('%s')\n", dev->name);
···
  static void ei_watchdog(u_long arg)
  {
  	struct net_device *dev = (struct net_device *)(arg);
- 	axnet_dev_t *info = PRIV(dev);
+ 	struct axnet_dev *info = PRIV(dev);
  	unsigned int nic_base = dev->base_addr;
  	unsigned int mii_addr = nic_base + AXNET_MII_EEP;
  	u_short link;
···
 
  static int axnet_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
  {
- 	axnet_dev_t *info = PRIV(dev);
+ 	struct axnet_dev *info = PRIV(dev);
  	struct mii_ioctl_data *data = if_mii(rq);
  	unsigned int mii_addr = dev->base_addr + AXNET_MII_EEP;
  	switch (cmd) {
···
 
  static void ei_rx_overrun(struct net_device *dev)
  {
- 	axnet_dev_t *info = PRIV(dev);
+ 	struct axnet_dev *info = PRIV(dev);
  	long e8390_base = dev->base_addr;
  	unsigned char was_txing, must_resend = 0;
  	struct ei_device *ei_local = netdev_priv(dev);
···
 
  static void AX88190_init(struct net_device *dev, int startp)
  {
- 	axnet_dev_t *info = PRIV(dev);
+ 	struct axnet_dev *info = PRIV(dev);
  	long e8390_base = dev->base_addr;
  	struct ei_device *ei_local = netdev_priv(dev);
  	int i;
+2
drivers/net/ethernet/8390/ne.c
···
  #elif defined(CONFIG_PLAT_OAKS32R) || \
      defined(CONFIG_MACH_TX49XX)
  #  define DCR_VAL 0x48		/* 8-bit mode */
+ #elif defined(CONFIG_ATARI)	/* 8-bit mode on Atari, normal on Q40 */
+ #  define DCR_VAL (MACH_IS_ATARI ? 0x48 : 0x49)
  #else
  #  define DCR_VAL 0x49
  #endif
+34 -34
drivers/net/ethernet/8390/pcnet_cs.c
···
 
  /*====================================================================*/
 
- typedef struct hw_info_t {
+ struct hw_info {
  	u_int	offset;
  	u_char	a0, a1, a2;
  	u_int	flags;
- } hw_info_t;
+ };
 
  #define DELAY_OUTPUT	0x01
  #define HAS_MISC_REG	0x02
···
  #define MII_PHYID_REG1	0x02
  #define MII_PHYID_REG2	0x03
 
- static hw_info_t hw_info[] = {
+ static struct hw_info hw_info[] = {
  	{ /* Accton EN2212 */ 0x0ff0, 0x00, 0x00, 0xe8, DELAY_OUTPUT },
  	{ /* Allied Telesis LA-PCM */ 0x0ff0, 0x00, 0x00, 0xf4, 0 },
  	{ /* APEX MultiCard */ 0x03f4, 0x00, 0x20, 0xe5, 0 },
···
 
  #define NR_INFO		ARRAY_SIZE(hw_info)
 
- static hw_info_t default_info = { 0, 0, 0, 0, 0 };
- static hw_info_t dl10019_info = { 0, 0, 0, 0, IS_DL10019|HAS_MII };
- static hw_info_t dl10022_info = { 0, 0, 0, 0, IS_DL10022|HAS_MII };
+ static struct hw_info default_info = { 0, 0, 0, 0, 0 };
+ static struct hw_info dl10019_info = { 0, 0, 0, 0, IS_DL10019|HAS_MII };
+ static struct hw_info dl10022_info = { 0, 0, 0, 0, IS_DL10022|HAS_MII };
 
- typedef struct pcnet_dev_t {
+ struct pcnet_dev {
  	struct pcmcia_device	*p_dev;
  	u_int	flags;
  	void	__iomem *base;
···
  	u_char	eth_phy, pna_phy;
  	u_short	link_status;
  	u_long	mii_reset;
- } pcnet_dev_t;
+ };
 
- static inline pcnet_dev_t *PRIV(struct net_device *dev)
+ static inline struct pcnet_dev *PRIV(struct net_device *dev)
  {
  	char *p = netdev_priv(dev);
- 	return (pcnet_dev_t *)(p + sizeof(struct ei_device));
+ 	return (struct pcnet_dev *)(p + sizeof(struct ei_device));
  }
 
  static const struct net_device_ops pcnet_netdev_ops = {
···
 
  static int pcnet_probe(struct pcmcia_device *link)
  {
- 	pcnet_dev_t *info;
+ 	struct pcnet_dev *info;
  	struct net_device *dev;
 
  	dev_dbg(&link->dev, "pcnet_attach()\n");
 
  	/* Create new ethernet device */
- 	dev = __alloc_ei_netdev(sizeof(pcnet_dev_t));
+ 	dev = __alloc_ei_netdev(sizeof(struct pcnet_dev));
  	if (!dev) return -ENOMEM;
  	info = PRIV(dev);
  	info->p_dev = link;
···
 
  ======================================================================*/
 
- static hw_info_t *get_hwinfo(struct pcmcia_device *link)
+ static struct hw_info *get_hwinfo(struct pcmcia_device *link)
  {
  	struct net_device *dev = link->priv;
  	u_char __iomem *base, *virt;
···
 
  ======================================================================*/
 
- static hw_info_t *get_prom(struct pcmcia_device *link)
+ static struct hw_info *get_prom(struct pcmcia_device *link)
  {
  	struct net_device *dev = link->priv;
  	unsigned int ioaddr = dev->base_addr;
···
 
  ======================================================================*/
 
- static hw_info_t *get_dl10019(struct pcmcia_device *link)
+ static struct hw_info *get_dl10019(struct pcmcia_device *link)
  {
  	struct net_device *dev = link->priv;
  	int i;
···
 
  ======================================================================*/
 
- static hw_info_t *get_ax88190(struct pcmcia_device *link)
+ static struct hw_info *get_ax88190(struct pcmcia_device *link)
  {
  	struct net_device *dev = link->priv;
  	unsigned int ioaddr = dev->base_addr;
···
 
  ======================================================================*/
 
- static hw_info_t *get_hwired(struct pcmcia_device *link)
+ static struct hw_info *get_hwired(struct pcmcia_device *link)
  {
  	struct net_device *dev = link->priv;
  	int i;
···
  	return try_io_port(p_dev);
  }
 
- static hw_info_t *pcnet_try_config(struct pcmcia_device *link,
- 				   int *has_shmem, int try)
+ static struct hw_info *pcnet_try_config(struct pcmcia_device *link,
+ 					int *has_shmem, int try)
  {
  	struct net_device *dev = link->priv;
- 	hw_info_t *local_hw_info;
- 	pcnet_dev_t *info = PRIV(dev);
+ 	struct hw_info *local_hw_info;
+ 	struct pcnet_dev *info = PRIV(dev);
  	int priv = try;
  	int ret;
 
···
  static int pcnet_config(struct pcmcia_device *link)
  {
  	struct net_device *dev = link->priv;
- 	pcnet_dev_t *info = PRIV(dev);
+ 	struct pcnet_dev *info = PRIV(dev);
  	int start_pg, stop_pg, cm_offset;
  	int has_shmem = 0;
- 	hw_info_t *local_hw_info;
+ 	struct hw_info *local_hw_info;
  	struct ei_device *ei_local;
 
  	dev_dbg(&link->dev, "pcnet_config\n");
···
 
  static void pcnet_release(struct pcmcia_device *link)
  {
- 	pcnet_dev_t *info = PRIV(link->priv);
+ 	struct pcnet_dev *info = PRIV(link->priv);
 
  	dev_dbg(&link->dev, "pcnet_release\n");
 
···
  static void set_misc_reg(struct net_device *dev)
  {
  	unsigned int nic_base = dev->base_addr;
- 	pcnet_dev_t *info = PRIV(dev);
+ 	struct pcnet_dev *info = PRIV(dev);
  	u_char tmp;
 
  	if (info->flags & HAS_MISC_REG) {
···
 
  static void mii_phy_probe(struct net_device *dev)
  {
- 	pcnet_dev_t *info = PRIV(dev);
+ 	struct pcnet_dev *info = PRIV(dev);
  	unsigned int mii_addr = dev->base_addr + DLINK_GPIO;
  	int i;
  	u_int tmp, phyid;
···
  static int pcnet_open(struct net_device *dev)
  {
  	int ret;
- 	pcnet_dev_t *info = PRIV(dev);
+ 	struct pcnet_dev *info = PRIV(dev);
  	struct pcmcia_device *link = info->p_dev;
  	unsigned int nic_base = dev->base_addr;
 
···
 
  static int pcnet_close(struct net_device *dev)
  {
- 	pcnet_dev_t *info = PRIV(dev);
+ 	struct pcnet_dev *info = PRIV(dev);
  	struct pcmcia_device *link = info->p_dev;
 
  	dev_dbg(&link->dev, "pcnet_close('%s')\n", dev->name);
···
 
  static int set_config(struct net_device *dev, struct ifmap *map)
  {
- 	pcnet_dev_t *info = PRIV(dev);
+ 	struct pcnet_dev *info = PRIV(dev);
  	if ((map->port != (u_char)(-1)) && (map->port != dev->if_port)) {
  		if (!(info->flags & HAS_MISC_REG))
  			return -EOPNOTSUPP;
···
  static irqreturn_t ei_irq_wrapper(int irq, void *dev_id)
  {
  	struct net_device *dev = dev_id;
- 	pcnet_dev_t *info;
+ 	struct pcnet_dev *info;
  	irqreturn_t ret = ei_interrupt(irq, dev_id);
 
  	if (ret == IRQ_HANDLED) {
···
  static void ei_watchdog(u_long arg)
  {
  	struct net_device *dev = (struct net_device *)arg;
- 	pcnet_dev_t *info = PRIV(dev);
+ 	struct pcnet_dev *info = PRIV(dev);
  	unsigned int nic_base = dev->base_addr;
  	unsigned int mii_addr = nic_base + DLINK_GPIO;
  	u_short link;
···
 
  static int ei_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
  {
- 	pcnet_dev_t *info = PRIV(dev);
+ 	struct pcnet_dev *info = PRIV(dev);
  	struct mii_ioctl_data *data = if_mii(rq);
  	unsigned int mii_addr = dev->base_addr + DLINK_GPIO;
 
···
  			    const u_char *buf, const int start_page)
  {
  	unsigned int nic_base = dev->base_addr;
- 	pcnet_dev_t *info = PRIV(dev);
+ 	struct pcnet_dev *info = PRIV(dev);
  #ifdef PCMCIA_DEBUG
  	int retries = 0;
  	struct ei_device *ei_local = netdev_priv(dev);
···
  			   int stop_pg, int cm_offset)
  {
  	struct net_device *dev = link->priv;
- 	pcnet_dev_t *info = PRIV(dev);
+ 	struct pcnet_dev *info = PRIV(dev);
  	int i, window_size, offset, ret;
 
  	window_size = (stop_pg - start_pg) << 8;
+1
drivers/net/ethernet/Kconfig
···
  source "drivers/net/ethernet/alteon/Kconfig"
  source "drivers/net/ethernet/altera/Kconfig"
  source "drivers/net/ethernet/amd/Kconfig"
+ source "drivers/net/ethernet/apm/Kconfig"
  source "drivers/net/ethernet/apple/Kconfig"
  source "drivers/net/ethernet/arc/Kconfig"
  source "drivers/net/ethernet/atheros/Kconfig"
+1
drivers/net/ethernet/Makefile
···
  obj-$(CONFIG_NET_VENDOR_ALTEON) += alteon/
  obj-$(CONFIG_ALTERA_TSE) += altera/
  obj-$(CONFIG_NET_VENDOR_AMD) += amd/
+ obj-$(CONFIG_NET_XGENE) += apm/
  obj-$(CONFIG_NET_VENDOR_APPLE) += apple/
  obj-$(CONFIG_NET_VENDOR_ARC) += arc/
  obj-$(CONFIG_NET_VENDOR_ATHEROS) += atheros/
-1
drivers/net/ethernet/amd/xgbe/xgbe-drv.c
···
  #include <linux/spinlock.h>
  #include <linux/tcp.h>
  #include <linux/if_vlan.h>
- #include <linux/phy.h>
  #include <net/busy_poll.h>
  #include <linux/clk.h>
  #include <linux/if_ether.h>
+1
drivers/net/ethernet/apm/Kconfig
···
+ source "drivers/net/ethernet/apm/xgene/Kconfig"
+5
drivers/net/ethernet/apm/Makefile
···
+ #
+ # Makefile for APM X-GENE Ethernet driver.
+ #
+
+ obj-$(CONFIG_NET_XGENE) += xgene/
+9
drivers/net/ethernet/apm/xgene/Kconfig
···
+ config NET_XGENE
+ 	tristate "APM X-Gene SoC Ethernet Driver"
+ 	select PHYLIB
+ 	help
+ 	  This is the Ethernet driver for the on-chip ethernet interface on the
+ 	  APM X-Gene SoC.
+
+ 	  To compile this driver as a module, choose M here. This module will
+ 	  be called xgene_enet.
+6
drivers/net/ethernet/apm/xgene/Makefile
···
+ #
+ # Makefile for APM X-Gene Ethernet Driver.
+ #
+
+ xgene-enet-objs := xgene_enet_hw.o xgene_enet_main.o xgene_enet_ethtool.o
+ obj-$(CONFIG_NET_XGENE) += xgene-enet.o
+125
drivers/net/ethernet/apm/xgene/xgene_enet_ethtool.c
···
+ /* Applied Micro X-Gene SoC Ethernet Driver
+  *
+  * Copyright (c) 2014, Applied Micro Circuits Corporation
+  * Authors: Iyappan Subramanian <isubramanian@apm.com>
+  *
+  * This program is free software; you can redistribute it and/or modify it
+  * under the terms of the GNU General Public License as published by the
+  * Free Software Foundation; either version 2 of the License, or (at your
+  * option) any later version.
+  *
+  * This program is distributed in the hope that it will be useful,
+  * but WITHOUT ANY WARRANTY; without even the implied warranty of
+  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+  * GNU General Public License for more details.
+  *
+  * You should have received a copy of the GNU General Public License
+  * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+  */
+
+ #include <linux/ethtool.h>
+ #include "xgene_enet_main.h"
+
+ struct xgene_gstrings_stats {
+ 	char name[ETH_GSTRING_LEN];
+ 	int offset;
+ };
+
+ #define XGENE_STAT(m) { #m, offsetof(struct xgene_enet_pdata, stats.m) }
+
+ static const struct xgene_gstrings_stats gstrings_stats[] = {
+ 	XGENE_STAT(rx_packets),
+ 	XGENE_STAT(tx_packets),
+ 	XGENE_STAT(rx_bytes),
+ 	XGENE_STAT(tx_bytes),
+ 	XGENE_STAT(rx_errors),
+ 	XGENE_STAT(tx_errors),
+ 	XGENE_STAT(rx_length_errors),
+ 	XGENE_STAT(rx_crc_errors),
+ 	XGENE_STAT(rx_frame_errors),
+ 	XGENE_STAT(rx_fifo_errors)
+ };
+
+ #define XGENE_STATS_LEN ARRAY_SIZE(gstrings_stats)
+
+ static void xgene_get_drvinfo(struct net_device *ndev,
+ 			      struct ethtool_drvinfo *info)
+ {
+ 	struct xgene_enet_pdata *pdata = netdev_priv(ndev);
+ 	struct platform_device *pdev = pdata->pdev;
+
+ 	strcpy(info->driver, "xgene_enet");
+ 	strcpy(info->version, XGENE_DRV_VERSION);
+ 	snprintf(info->fw_version, ETHTOOL_FWVERS_LEN, "N/A");
+ 	sprintf(info->bus_info, "%s", pdev->name);
+ }
+
+ static int xgene_get_settings(struct net_device *ndev, struct ethtool_cmd *cmd)
+ {
+ 	struct xgene_enet_pdata *pdata = netdev_priv(ndev);
+ 	struct phy_device *phydev = pdata->phy_dev;
+
+ 	if (phydev == NULL)
+ 		return -ENODEV;
+
+ 	return phy_ethtool_gset(phydev, cmd);
+ }
+
+ static int xgene_set_settings(struct net_device *ndev, struct ethtool_cmd *cmd)
+ {
+ 	struct xgene_enet_pdata *pdata = netdev_priv(ndev);
+ 	struct phy_device *phydev = pdata->phy_dev;
+
+ 	if (phydev == NULL)
+ 		return -ENODEV;
+
+ 	return phy_ethtool_sset(phydev, cmd);
+ }
+
+ static void xgene_get_strings(struct net_device *ndev, u32 stringset, u8 *data)
+ {
+ 	int i;
+ 	u8 *p = data;
+
+ 	if (stringset != ETH_SS_STATS)
+ 		return;
+
+ 	for (i = 0; i < XGENE_STATS_LEN; i++) {
+ 		memcpy(p, gstrings_stats[i].name, ETH_GSTRING_LEN);
+ 		p += ETH_GSTRING_LEN;
+ 	}
+ }
+
+ static int xgene_get_sset_count(struct net_device *ndev, int sset)
+ {
+ 	if (sset != ETH_SS_STATS)
+ 		return -EINVAL;
+
+ 	return XGENE_STATS_LEN;
+ }
+
+ static void xgene_get_ethtool_stats(struct net_device *ndev,
+ 				    struct ethtool_stats *dummy,
+ 				    u64 *data)
+ {
+ 	void *pdata = netdev_priv(ndev);
+ 	int i;
+
+ 	for (i = 0; i < XGENE_STATS_LEN; i++)
+ 		*data++ = *(u64 *)(pdata + gstrings_stats[i].offset);
+ }
+
+ static const struct ethtool_ops xgene_ethtool_ops = {
+ 	.get_drvinfo = xgene_get_drvinfo,
+ 	.get_settings = xgene_get_settings,
+ 	.set_settings = xgene_set_settings,
+ 	.get_link = ethtool_op_get_link,
+ 	.get_strings = xgene_get_strings,
+ 	.get_sset_count = xgene_get_sset_count,
+ 	.get_ethtool_stats = xgene_get_ethtool_stats
+ };
+
+ void xgene_enet_set_ethtool_ops(struct net_device *ndev)
+ {
+ 	ndev->ethtool_ops = &xgene_ethtool_ops;
+ }
+728
drivers/net/ethernet/apm/xgene/xgene_enet_hw.c
··· 1 + /* Applied Micro X-Gene SoC Ethernet Driver 2 + * 3 + * Copyright (c) 2014, Applied Micro Circuits Corporation 4 + * Authors: Iyappan Subramanian <isubramanian@apm.com> 5 + * Ravi Patel <rapatel@apm.com> 6 + * Keyur Chudgar <kchudgar@apm.com> 7 + * 8 + * This program is free software; you can redistribute it and/or modify it 9 + * under the terms of the GNU General Public License as published by the 10 + * Free Software Foundation; either version 2 of the License, or (at your 11 + * option) any later version. 12 + * 13 + * This program is distributed in the hope that it will be useful, 14 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 15 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 16 + * GNU General Public License for more details. 17 + * 18 + * You should have received a copy of the GNU General Public License 19 + * along with this program. If not, see <http://www.gnu.org/licenses/>. 20 + */ 21 + 22 + #include "xgene_enet_main.h" 23 + #include "xgene_enet_hw.h" 24 + 25 + static void xgene_enet_ring_init(struct xgene_enet_desc_ring *ring) 26 + { 27 + u32 *ring_cfg = ring->state; 28 + u64 addr = ring->dma; 29 + enum xgene_enet_ring_cfgsize cfgsize = ring->cfgsize; 30 + 31 + ring_cfg[4] |= (1 << SELTHRSH_POS) & 32 + CREATE_MASK(SELTHRSH_POS, SELTHRSH_LEN); 33 + ring_cfg[3] |= ACCEPTLERR; 34 + ring_cfg[2] |= QCOHERENT; 35 + 36 + addr >>= 8; 37 + ring_cfg[2] |= (addr << RINGADDRL_POS) & 38 + CREATE_MASK_ULL(RINGADDRL_POS, RINGADDRL_LEN); 39 + addr >>= RINGADDRL_LEN; 40 + ring_cfg[3] |= addr & CREATE_MASK_ULL(RINGADDRH_POS, RINGADDRH_LEN); 41 + ring_cfg[3] |= ((u32)cfgsize << RINGSIZE_POS) & 42 + CREATE_MASK(RINGSIZE_POS, RINGSIZE_LEN); 43 + } 44 + 45 + static void xgene_enet_ring_set_type(struct xgene_enet_desc_ring *ring) 46 + { 47 + u32 *ring_cfg = ring->state; 48 + bool is_bufpool; 49 + u32 val; 50 + 51 + is_bufpool = xgene_enet_is_bufpool(ring->id); 52 + val = (is_bufpool) ? 
RING_BUFPOOL : RING_REGULAR; 53 + ring_cfg[4] |= (val << RINGTYPE_POS) & 54 + CREATE_MASK(RINGTYPE_POS, RINGTYPE_LEN); 55 + 56 + if (is_bufpool) { 57 + ring_cfg[3] |= (BUFPOOL_MODE << RINGMODE_POS) & 58 + CREATE_MASK(RINGMODE_POS, RINGMODE_LEN); 59 + } 60 + } 61 + 62 + static void xgene_enet_ring_set_recombbuf(struct xgene_enet_desc_ring *ring) 63 + { 64 + u32 *ring_cfg = ring->state; 65 + 66 + ring_cfg[3] |= RECOMBBUF; 67 + ring_cfg[3] |= (0xf << RECOMTIMEOUTL_POS) & 68 + CREATE_MASK(RECOMTIMEOUTL_POS, RECOMTIMEOUTL_LEN); 69 + ring_cfg[4] |= 0x7 & CREATE_MASK(RECOMTIMEOUTH_POS, RECOMTIMEOUTH_LEN); 70 + } 71 + 72 + static void xgene_enet_ring_wr32(struct xgene_enet_desc_ring *ring, 73 + u32 offset, u32 data) 74 + { 75 + struct xgene_enet_pdata *pdata = netdev_priv(ring->ndev); 76 + 77 + iowrite32(data, pdata->ring_csr_addr + offset); 78 + } 79 + 80 + static void xgene_enet_ring_rd32(struct xgene_enet_desc_ring *ring, 81 + u32 offset, u32 *data) 82 + { 83 + struct xgene_enet_pdata *pdata = netdev_priv(ring->ndev); 84 + 85 + *data = ioread32(pdata->ring_csr_addr + offset); 86 + } 87 + 88 + static void xgene_enet_write_ring_state(struct xgene_enet_desc_ring *ring) 89 + { 90 + int i; 91 + 92 + xgene_enet_ring_wr32(ring, CSR_RING_CONFIG, ring->num); 93 + for (i = 0; i < NUM_RING_CONFIG; i++) { 94 + xgene_enet_ring_wr32(ring, CSR_RING_WR_BASE + (i * 4), 95 + ring->state[i]); 96 + } 97 + } 98 + 99 + static void xgene_enet_clr_ring_state(struct xgene_enet_desc_ring *ring) 100 + { 101 + memset(ring->state, 0, sizeof(u32) * NUM_RING_CONFIG); 102 + xgene_enet_write_ring_state(ring); 103 + } 104 + 105 + static void xgene_enet_set_ring_state(struct xgene_enet_desc_ring *ring) 106 + { 107 + xgene_enet_ring_set_type(ring); 108 + 109 + if (xgene_enet_ring_owner(ring->id) == RING_OWNER_ETH0) 110 + xgene_enet_ring_set_recombbuf(ring); 111 + 112 + xgene_enet_ring_init(ring); 113 + xgene_enet_write_ring_state(ring); 114 + } 115 + 116 + static void xgene_enet_set_ring_id(struct 
xgene_enet_desc_ring *ring) 117 + { 118 + u32 ring_id_val, ring_id_buf; 119 + bool is_bufpool; 120 + 121 + is_bufpool = xgene_enet_is_bufpool(ring->id); 122 + 123 + ring_id_val = ring->id & GENMASK(9, 0); 124 + ring_id_val |= OVERWRITE; 125 + 126 + ring_id_buf = (ring->num << 9) & GENMASK(18, 9); 127 + ring_id_buf |= PREFETCH_BUF_EN; 128 + if (is_bufpool) 129 + ring_id_buf |= IS_BUFFER_POOL; 130 + 131 + xgene_enet_ring_wr32(ring, CSR_RING_ID, ring_id_val); 132 + xgene_enet_ring_wr32(ring, CSR_RING_ID_BUF, ring_id_buf); 133 + } 134 + 135 + static void xgene_enet_clr_desc_ring_id(struct xgene_enet_desc_ring *ring) 136 + { 137 + u32 ring_id; 138 + 139 + ring_id = ring->id | OVERWRITE; 140 + xgene_enet_ring_wr32(ring, CSR_RING_ID, ring_id); 141 + xgene_enet_ring_wr32(ring, CSR_RING_ID_BUF, 0); 142 + } 143 + 144 + struct xgene_enet_desc_ring *xgene_enet_setup_ring( 145 + struct xgene_enet_desc_ring *ring) 146 + { 147 + u32 size = ring->size; 148 + u32 i, data; 149 + bool is_bufpool; 150 + 151 + xgene_enet_clr_ring_state(ring); 152 + xgene_enet_set_ring_state(ring); 153 + xgene_enet_set_ring_id(ring); 154 + 155 + ring->slots = xgene_enet_get_numslots(ring->id, size); 156 + 157 + is_bufpool = xgene_enet_is_bufpool(ring->id); 158 + if (is_bufpool || xgene_enet_ring_owner(ring->id) != RING_OWNER_CPU) 159 + return ring; 160 + 161 + for (i = 0; i < ring->slots; i++) 162 + xgene_enet_mark_desc_slot_empty(&ring->raw_desc[i]); 163 + 164 + xgene_enet_ring_rd32(ring, CSR_RING_NE_INT_MODE, &data); 165 + data |= BIT(31 - xgene_enet_ring_bufnum(ring->id)); 166 + xgene_enet_ring_wr32(ring, CSR_RING_NE_INT_MODE, data); 167 + 168 + return ring; 169 + } 170 + 171 + void xgene_enet_clear_ring(struct xgene_enet_desc_ring *ring) 172 + { 173 + u32 data; 174 + bool is_bufpool; 175 + 176 + is_bufpool = xgene_enet_is_bufpool(ring->id); 177 + if (is_bufpool || xgene_enet_ring_owner(ring->id) != RING_OWNER_CPU) 178 + goto out; 179 + 180 + xgene_enet_ring_rd32(ring, CSR_RING_NE_INT_MODE, &data); 
181 + data &= ~BIT(31 - xgene_enet_ring_bufnum(ring->id)); 182 + xgene_enet_ring_wr32(ring, CSR_RING_NE_INT_MODE, data); 183 + 184 + out: 185 + xgene_enet_clr_desc_ring_id(ring); 186 + xgene_enet_clr_ring_state(ring); 187 + } 188 + 189 + void xgene_enet_parse_error(struct xgene_enet_desc_ring *ring, 190 + struct xgene_enet_pdata *pdata, 191 + enum xgene_enet_err_code status) 192 + { 193 + struct rtnl_link_stats64 *stats = &pdata->stats; 194 + 195 + switch (status) { 196 + case INGRESS_CRC: 197 + stats->rx_crc_errors++; 198 + break; 199 + case INGRESS_CHECKSUM: 200 + case INGRESS_CHECKSUM_COMPUTE: 201 + stats->rx_errors++; 202 + break; 203 + case INGRESS_TRUNC_FRAME: 204 + stats->rx_frame_errors++; 205 + break; 206 + case INGRESS_PKT_LEN: 207 + stats->rx_length_errors++; 208 + break; 209 + case INGRESS_PKT_UNDER: 210 + stats->rx_frame_errors++; 211 + break; 212 + case INGRESS_FIFO_OVERRUN: 213 + stats->rx_fifo_errors++; 214 + break; 215 + default: 216 + break; 217 + } 218 + } 219 + 220 + static void xgene_enet_wr_csr(struct xgene_enet_pdata *pdata, 221 + u32 offset, u32 val) 222 + { 223 + void __iomem *addr = pdata->eth_csr_addr + offset; 224 + 225 + iowrite32(val, addr); 226 + } 227 + 228 + static void xgene_enet_wr_ring_if(struct xgene_enet_pdata *pdata, 229 + u32 offset, u32 val) 230 + { 231 + void __iomem *addr = pdata->eth_ring_if_addr + offset; 232 + 233 + iowrite32(val, addr); 234 + } 235 + 236 + static void xgene_enet_wr_diag_csr(struct xgene_enet_pdata *pdata, 237 + u32 offset, u32 val) 238 + { 239 + void __iomem *addr = pdata->eth_diag_csr_addr + offset; 240 + 241 + iowrite32(val, addr); 242 + } 243 + 244 + static void xgene_enet_wr_mcx_csr(struct xgene_enet_pdata *pdata, 245 + u32 offset, u32 val) 246 + { 247 + void __iomem *addr = pdata->mcx_mac_csr_addr + offset; 248 + 249 + iowrite32(val, addr); 250 + } 251 + 252 + static bool xgene_enet_wr_indirect(void __iomem *addr, void __iomem *wr, 253 + void __iomem *cmd, void __iomem *cmd_done, 254 + u32 
wr_addr, u32 wr_data) 255 + { 256 + u32 done; 257 + u8 wait = 10; 258 + 259 + iowrite32(wr_addr, addr); 260 + iowrite32(wr_data, wr); 261 + iowrite32(XGENE_ENET_WR_CMD, cmd); 262 + 263 + /* wait for write command to complete */ 264 + while (!(done = ioread32(cmd_done)) && wait--) 265 + udelay(1); 266 + 267 + if (!done) 268 + return false; 269 + 270 + iowrite32(0, cmd); 271 + 272 + return true; 273 + } 274 + 275 + static void xgene_enet_wr_mcx_mac(struct xgene_enet_pdata *pdata, 276 + u32 wr_addr, u32 wr_data) 277 + { 278 + void __iomem *addr, *wr, *cmd, *cmd_done; 279 + 280 + addr = pdata->mcx_mac_addr + MAC_ADDR_REG_OFFSET; 281 + wr = pdata->mcx_mac_addr + MAC_WRITE_REG_OFFSET; 282 + cmd = pdata->mcx_mac_addr + MAC_COMMAND_REG_OFFSET; 283 + cmd_done = pdata->mcx_mac_addr + MAC_COMMAND_DONE_REG_OFFSET; 284 + 285 + if (!xgene_enet_wr_indirect(addr, wr, cmd, cmd_done, wr_addr, wr_data)) 286 + netdev_err(pdata->ndev, "MCX mac write failed, addr: %04x\n", 287 + wr_addr); 288 + } 289 + 290 + static void xgene_enet_rd_csr(struct xgene_enet_pdata *pdata, 291 + u32 offset, u32 *val) 292 + { 293 + void __iomem *addr = pdata->eth_csr_addr + offset; 294 + 295 + *val = ioread32(addr); 296 + } 297 + 298 + static void xgene_enet_rd_diag_csr(struct xgene_enet_pdata *pdata, 299 + u32 offset, u32 *val) 300 + { 301 + void __iomem *addr = pdata->eth_diag_csr_addr + offset; 302 + 303 + *val = ioread32(addr); 304 + } 305 + 306 + static void xgene_enet_rd_mcx_csr(struct xgene_enet_pdata *pdata, 307 + u32 offset, u32 *val) 308 + { 309 + void __iomem *addr = pdata->mcx_mac_csr_addr + offset; 310 + 311 + *val = ioread32(addr); 312 + } 313 + 314 + static bool xgene_enet_rd_indirect(void __iomem *addr, void __iomem *rd, 315 + void __iomem *cmd, void __iomem *cmd_done, 316 + u32 rd_addr, u32 *rd_data) 317 + { 318 + u32 done; 319 + u8 wait = 10; 320 + 321 + iowrite32(rd_addr, addr); 322 + iowrite32(XGENE_ENET_RD_CMD, cmd); 323 + 324 + /* wait for read command to complete */ 325 + while (!(done 
= ioread32(cmd_done)) && wait--) 326 + udelay(1); 327 + 328 + if (!done) 329 + return false; 330 + 331 + *rd_data = ioread32(rd); 332 + iowrite32(0, cmd); 333 + 334 + return true; 335 + } 336 + 337 + static void xgene_enet_rd_mcx_mac(struct xgene_enet_pdata *pdata, 338 + u32 rd_addr, u32 *rd_data) 339 + { 340 + void __iomem *addr, *rd, *cmd, *cmd_done; 341 + 342 + addr = pdata->mcx_mac_addr + MAC_ADDR_REG_OFFSET; 343 + rd = pdata->mcx_mac_addr + MAC_READ_REG_OFFSET; 344 + cmd = pdata->mcx_mac_addr + MAC_COMMAND_REG_OFFSET; 345 + cmd_done = pdata->mcx_mac_addr + MAC_COMMAND_DONE_REG_OFFSET; 346 + 347 + if (!xgene_enet_rd_indirect(addr, rd, cmd, cmd_done, rd_addr, rd_data)) 348 + netdev_err(pdata->ndev, "MCX mac read failed, addr: %04x\n", 349 + rd_addr); 350 + } 351 + 352 + static int xgene_mii_phy_write(struct xgene_enet_pdata *pdata, int phy_id, 353 + u32 reg, u16 data) 354 + { 355 + u32 addr = 0, wr_data = 0; 356 + u32 done; 357 + u8 wait = 10; 358 + 359 + PHY_ADDR_SET(&addr, phy_id); 360 + REG_ADDR_SET(&addr, reg); 361 + xgene_enet_wr_mcx_mac(pdata, MII_MGMT_ADDRESS_ADDR, addr); 362 + 363 + PHY_CONTROL_SET(&wr_data, data); 364 + xgene_enet_wr_mcx_mac(pdata, MII_MGMT_CONTROL_ADDR, wr_data); 365 + do { 366 + usleep_range(5, 10); 367 + xgene_enet_rd_mcx_mac(pdata, MII_MGMT_INDICATORS_ADDR, &done); 368 + } while ((done & BUSY_MASK) && wait--); 369 + 370 + if (done & BUSY_MASK) { 371 + netdev_err(pdata->ndev, "MII_MGMT write failed\n"); 372 + return -EBUSY; 373 + } 374 + 375 + return 0; 376 + } 377 + 378 + static int xgene_mii_phy_read(struct xgene_enet_pdata *pdata, 379 + u8 phy_id, u32 reg) 380 + { 381 + u32 addr = 0; 382 + u32 data, done; 383 + u8 wait = 10; 384 + 385 + PHY_ADDR_SET(&addr, phy_id); 386 + REG_ADDR_SET(&addr, reg); 387 + xgene_enet_wr_mcx_mac(pdata, MII_MGMT_ADDRESS_ADDR, addr); 388 + xgene_enet_wr_mcx_mac(pdata, MII_MGMT_COMMAND_ADDR, READ_CYCLE_MASK); 389 + do { 390 + usleep_range(5, 10); 391 + xgene_enet_rd_mcx_mac(pdata, 
MII_MGMT_INDICATORS_ADDR, &done); 392 + } while ((done & BUSY_MASK) && wait--); 393 + 394 + if (done & BUSY_MASK) { 395 + netdev_err(pdata->ndev, "MII_MGMT read failed\n"); 396 + return -EBUSY; 397 + } 398 + 399 + xgene_enet_rd_mcx_mac(pdata, MII_MGMT_STATUS_ADDR, &data); 400 + xgene_enet_wr_mcx_mac(pdata, MII_MGMT_COMMAND_ADDR, 0); 401 + 402 + return data; 403 + } 404 + 405 + void xgene_gmac_set_mac_addr(struct xgene_enet_pdata *pdata) 406 + { 407 + u32 addr0, addr1; 408 + u8 *dev_addr = pdata->ndev->dev_addr; 409 + 410 + addr0 = (dev_addr[3] << 24) | (dev_addr[2] << 16) | 411 + (dev_addr[1] << 8) | dev_addr[0]; 412 + addr1 = (dev_addr[5] << 24) | (dev_addr[4] << 16); 413 + addr1 |= pdata->phy_addr & 0xFFFF; 414 + 415 + xgene_enet_wr_mcx_mac(pdata, STATION_ADDR0_ADDR, addr0); 416 + xgene_enet_wr_mcx_mac(pdata, STATION_ADDR1_ADDR, addr1); 417 + } 418 + 419 + static int xgene_enet_ecc_init(struct xgene_enet_pdata *pdata) 420 + { 421 + struct net_device *ndev = pdata->ndev; 422 + u32 data; 423 + u8 wait = 10; 424 + 425 + xgene_enet_wr_diag_csr(pdata, ENET_CFG_MEM_RAM_SHUTDOWN_ADDR, 0x0); 426 + do { 427 + usleep_range(100, 110); 428 + xgene_enet_rd_diag_csr(pdata, ENET_BLOCK_MEM_RDY_ADDR, &data); 429 + } while ((data != 0xffffffff) && wait--); 430 + 431 + if (data != 0xffffffff) { 432 + netdev_err(ndev, "Failed to release memory from shutdown\n"); 433 + return -ENODEV; 434 + } 435 + 436 + return 0; 437 + } 438 + 439 + void xgene_gmac_reset(struct xgene_enet_pdata *pdata) 440 + { 441 + xgene_enet_wr_mcx_mac(pdata, MAC_CONFIG_1_ADDR, SOFT_RESET1); 442 + xgene_enet_wr_mcx_mac(pdata, MAC_CONFIG_1_ADDR, 0); 443 + } 444 + 445 + void xgene_gmac_init(struct xgene_enet_pdata *pdata, int speed) 446 + { 447 + u32 value, mc2; 448 + u32 intf_ctl, rgmii; 449 + u32 icm0, icm2; 450 + 451 + xgene_gmac_reset(pdata); 452 + 453 + xgene_enet_rd_mcx_csr(pdata, ICM_CONFIG0_REG_0_ADDR, &icm0); 454 + xgene_enet_rd_mcx_csr(pdata, ICM_CONFIG2_REG_0_ADDR, &icm2); 455 + 
xgene_enet_rd_mcx_mac(pdata, MAC_CONFIG_2_ADDR, &mc2); 456 + xgene_enet_rd_mcx_mac(pdata, INTERFACE_CONTROL_ADDR, &intf_ctl); 457 + xgene_enet_rd_csr(pdata, RGMII_REG_0_ADDR, &rgmii); 458 + 459 + switch (speed) { 460 + case SPEED_10: 461 + ENET_INTERFACE_MODE2_SET(&mc2, 1); 462 + CFG_MACMODE_SET(&icm0, 0); 463 + CFG_WAITASYNCRD_SET(&icm2, 500); 464 + rgmii &= ~CFG_SPEED_1250; 465 + break; 466 + case SPEED_100: 467 + ENET_INTERFACE_MODE2_SET(&mc2, 1); 468 + intf_ctl |= ENET_LHD_MODE; 469 + CFG_MACMODE_SET(&icm0, 1); 470 + CFG_WAITASYNCRD_SET(&icm2, 80); 471 + rgmii &= ~CFG_SPEED_1250; 472 + break; 473 + default: 474 + ENET_INTERFACE_MODE2_SET(&mc2, 2); 475 + intf_ctl |= ENET_GHD_MODE; 476 + CFG_TXCLK_MUXSEL0_SET(&rgmii, 4); 477 + xgene_enet_rd_csr(pdata, DEBUG_REG_ADDR, &value); 478 + value |= CFG_BYPASS_UNISEC_TX | CFG_BYPASS_UNISEC_RX; 479 + xgene_enet_wr_csr(pdata, DEBUG_REG_ADDR, value); 480 + break; 481 + } 482 + 483 + mc2 |= FULL_DUPLEX2; 484 + xgene_enet_wr_mcx_mac(pdata, MAC_CONFIG_2_ADDR, mc2); 485 + xgene_enet_wr_mcx_mac(pdata, INTERFACE_CONTROL_ADDR, intf_ctl); 486 + 487 + xgene_gmac_set_mac_addr(pdata); 488 + 489 + /* Adjust MDC clock frequency */ 490 + xgene_enet_rd_mcx_mac(pdata, MII_MGMT_CONFIG_ADDR, &value); 491 + MGMT_CLOCK_SEL_SET(&value, 7); 492 + xgene_enet_wr_mcx_mac(pdata, MII_MGMT_CONFIG_ADDR, value); 493 + 494 + /* Enable drop if bufpool not available */ 495 + xgene_enet_rd_csr(pdata, RSIF_CONFIG_REG_ADDR, &value); 496 + value |= CFG_RSIF_FPBUFF_TIMEOUT_EN; 497 + xgene_enet_wr_csr(pdata, RSIF_CONFIG_REG_ADDR, value); 498 + 499 + /* Rtype should be copied from FP */ 500 + xgene_enet_wr_csr(pdata, RSIF_RAM_DBG_REG0_ADDR, 0); 501 + xgene_enet_wr_csr(pdata, RGMII_REG_0_ADDR, rgmii); 502 + 503 + /* Rx-Tx traffic resume */ 504 + xgene_enet_wr_csr(pdata, CFG_LINK_AGGR_RESUME_0_ADDR, TX_PORT0); 505 + 506 + xgene_enet_wr_mcx_csr(pdata, ICM_CONFIG0_REG_0_ADDR, icm0); 507 + xgene_enet_wr_mcx_csr(pdata, ICM_CONFIG2_REG_0_ADDR, icm2); 508 + 509 + 
xgene_enet_rd_mcx_csr(pdata, RX_DV_GATE_REG_0_ADDR, &value); 510 + value &= ~TX_DV_GATE_EN0; 511 + value &= ~RX_DV_GATE_EN0; 512 + value |= RESUME_RX0; 513 + xgene_enet_wr_mcx_csr(pdata, RX_DV_GATE_REG_0_ADDR, value); 514 + 515 + xgene_enet_wr_csr(pdata, CFG_BYPASS_ADDR, RESUME_TX); 516 + } 517 + 518 + static void xgene_enet_config_ring_if_assoc(struct xgene_enet_pdata *pdata) 519 + { 520 + u32 val = 0xffffffff; 521 + 522 + xgene_enet_wr_ring_if(pdata, ENET_CFGSSQMIWQASSOC_ADDR, val); 523 + xgene_enet_wr_ring_if(pdata, ENET_CFGSSQMIFPQASSOC_ADDR, val); 524 + xgene_enet_wr_ring_if(pdata, ENET_CFGSSQMIQMLITEWQASSOC_ADDR, val); 525 + xgene_enet_wr_ring_if(pdata, ENET_CFGSSQMIQMLITEFPQASSOC_ADDR, val); 526 + } 527 + 528 + void xgene_enet_cle_bypass(struct xgene_enet_pdata *pdata, 529 + u32 dst_ring_num, u16 bufpool_id) 530 + { 531 + u32 cb; 532 + u32 fpsel; 533 + 534 + fpsel = xgene_enet_ring_bufnum(bufpool_id) - 0x20; 535 + 536 + xgene_enet_rd_csr(pdata, CLE_BYPASS_REG0_0_ADDR, &cb); 537 + cb |= CFG_CLE_BYPASS_EN0; 538 + CFG_CLE_IP_PROTOCOL0_SET(&cb, 3); 539 + xgene_enet_wr_csr(pdata, CLE_BYPASS_REG0_0_ADDR, cb); 540 + 541 + xgene_enet_rd_csr(pdata, CLE_BYPASS_REG1_0_ADDR, &cb); 542 + CFG_CLE_DSTQID0_SET(&cb, dst_ring_num); 543 + CFG_CLE_FPSEL0_SET(&cb, fpsel); 544 + xgene_enet_wr_csr(pdata, CLE_BYPASS_REG1_0_ADDR, cb); 545 + } 546 + 547 + void xgene_gmac_rx_enable(struct xgene_enet_pdata *pdata) 548 + { 549 + u32 data; 550 + 551 + xgene_enet_rd_mcx_mac(pdata, MAC_CONFIG_1_ADDR, &data); 552 + xgene_enet_wr_mcx_mac(pdata, MAC_CONFIG_1_ADDR, data | RX_EN); 553 + } 554 + 555 + void xgene_gmac_tx_enable(struct xgene_enet_pdata *pdata) 556 + { 557 + u32 data; 558 + 559 + xgene_enet_rd_mcx_mac(pdata, MAC_CONFIG_1_ADDR, &data); 560 + xgene_enet_wr_mcx_mac(pdata, MAC_CONFIG_1_ADDR, data | TX_EN); 561 + } 562 + 563 + void xgene_gmac_rx_disable(struct xgene_enet_pdata *pdata) 564 + { 565 + u32 data; 566 + 567 + xgene_enet_rd_mcx_mac(pdata, MAC_CONFIG_1_ADDR, &data); 568 + 
xgene_enet_wr_mcx_mac(pdata, MAC_CONFIG_1_ADDR, data & ~RX_EN); 569 + } 570 + 571 + void xgene_gmac_tx_disable(struct xgene_enet_pdata *pdata) 572 + { 573 + u32 data; 574 + 575 + xgene_enet_rd_mcx_mac(pdata, MAC_CONFIG_1_ADDR, &data); 576 + xgene_enet_wr_mcx_mac(pdata, MAC_CONFIG_1_ADDR, data & ~TX_EN); 577 + } 578 + 579 + void xgene_enet_reset(struct xgene_enet_pdata *pdata) 580 + { 581 + u32 val; 582 + 583 + clk_prepare_enable(pdata->clk); 584 + clk_disable_unprepare(pdata->clk); 585 + clk_prepare_enable(pdata->clk); 586 + xgene_enet_ecc_init(pdata); 587 + xgene_enet_config_ring_if_assoc(pdata); 588 + 589 + /* Enable auto-incr for scanning */ 590 + xgene_enet_rd_mcx_mac(pdata, MII_MGMT_CONFIG_ADDR, &val); 591 + val |= SCAN_AUTO_INCR; 592 + MGMT_CLOCK_SEL_SET(&val, 1); 593 + xgene_enet_wr_mcx_mac(pdata, MII_MGMT_CONFIG_ADDR, val); 594 + } 595 + 596 + void xgene_gport_shutdown(struct xgene_enet_pdata *pdata) 597 + { 598 + clk_disable_unprepare(pdata->clk); 599 + } 600 + 601 + static int xgene_enet_mdio_read(struct mii_bus *bus, int mii_id, int regnum) 602 + { 603 + struct xgene_enet_pdata *pdata = bus->priv; 604 + u32 val; 605 + 606 + val = xgene_mii_phy_read(pdata, mii_id, regnum); 607 + netdev_dbg(pdata->ndev, "mdio_rd: bus=%d reg=%d val=%x\n", 608 + mii_id, regnum, val); 609 + 610 + return val; 611 + } 612 + 613 + static int xgene_enet_mdio_write(struct mii_bus *bus, int mii_id, int regnum, 614 + u16 val) 615 + { 616 + struct xgene_enet_pdata *pdata = bus->priv; 617 + 618 + netdev_dbg(pdata->ndev, "mdio_wr: bus=%d reg=%d val=%x\n", 619 + mii_id, regnum, val); 620 + return xgene_mii_phy_write(pdata, mii_id, regnum, val); 621 + } 622 + 623 + static void xgene_enet_adjust_link(struct net_device *ndev) 624 + { 625 + struct xgene_enet_pdata *pdata = netdev_priv(ndev); 626 + struct phy_device *phydev = pdata->phy_dev; 627 + 628 + if (phydev->link) { 629 + if (pdata->phy_speed != phydev->speed) { 630 + xgene_gmac_init(pdata, phydev->speed); 631 + 
xgene_gmac_rx_enable(pdata); 632 + xgene_gmac_tx_enable(pdata); 633 + pdata->phy_speed = phydev->speed; 634 + phy_print_status(phydev); 635 + } 636 + } else { 637 + xgene_gmac_rx_disable(pdata); 638 + xgene_gmac_tx_disable(pdata); 639 + pdata->phy_speed = SPEED_UNKNOWN; 640 + phy_print_status(phydev); 641 + } 642 + } 643 + 644 + static int xgene_enet_phy_connect(struct net_device *ndev) 645 + { 646 + struct xgene_enet_pdata *pdata = netdev_priv(ndev); 647 + struct device_node *phy_np; 648 + struct phy_device *phy_dev; 649 + struct device *dev = &pdata->pdev->dev; 650 + 651 + phy_np = of_parse_phandle(dev->of_node, "phy-handle", 0); 652 + if (!phy_np) { 653 + netdev_dbg(ndev, "No phy-handle found\n"); 654 + return -ENODEV; 655 + } 656 + 657 + phy_dev = of_phy_connect(ndev, phy_np, &xgene_enet_adjust_link, 658 + 0, pdata->phy_mode); 659 + if (!phy_dev) { 660 + netdev_err(ndev, "Could not connect to PHY\n"); 661 + return -ENODEV; 662 + } 663 + 664 + pdata->phy_speed = SPEED_UNKNOWN; 665 + phy_dev->supported &= ~SUPPORTED_10baseT_Half & 666 + ~SUPPORTED_100baseT_Half & 667 + ~SUPPORTED_1000baseT_Half; 668 + phy_dev->advertising = phy_dev->supported; 669 + pdata->phy_dev = phy_dev; 670 + 671 + return 0; 672 + } 673 + 674 + int xgene_enet_mdio_config(struct xgene_enet_pdata *pdata) 675 + { 676 + struct net_device *ndev = pdata->ndev; 677 + struct device *dev = &pdata->pdev->dev; 678 + struct device_node *child_np; 679 + struct device_node *mdio_np = NULL; 680 + struct mii_bus *mdio_bus; 681 + int ret; 682 + 683 + for_each_child_of_node(dev->of_node, child_np) { 684 + if (of_device_is_compatible(child_np, "apm,xgene-mdio")) { 685 + mdio_np = child_np; 686 + break; 687 + } 688 + } 689 + 690 + if (!mdio_np) { 691 + netdev_dbg(ndev, "No mdio node in the dts\n"); 692 + return -ENXIO; 693 + } 694 + 695 + mdio_bus = mdiobus_alloc(); 696 + if (!mdio_bus) 697 + return -ENOMEM; 698 + 699 + mdio_bus->name = "APM X-Gene MDIO bus"; 700 + mdio_bus->read = xgene_enet_mdio_read; 701 + 
mdio_bus->write = xgene_enet_mdio_write; 702 + snprintf(mdio_bus->id, MII_BUS_ID_SIZE, "%s-%s", "xgene-mii", 703 + ndev->name); 704 + 705 + mdio_bus->priv = pdata; 706 + mdio_bus->parent = &ndev->dev; 707 + 708 + ret = of_mdiobus_register(mdio_bus, mdio_np); 709 + if (ret) { 710 + netdev_err(ndev, "Failed to register MDIO bus\n"); 711 + mdiobus_free(mdio_bus); 712 + return ret; 713 + } 714 + pdata->mdio_bus = mdio_bus; 715 + 716 + ret = xgene_enet_phy_connect(ndev); 717 + if (ret) 718 + xgene_enet_mdio_remove(pdata); 719 + 720 + return ret; 721 + } 722 + 723 + void xgene_enet_mdio_remove(struct xgene_enet_pdata *pdata) 724 + { 725 + mdiobus_unregister(pdata->mdio_bus); 726 + mdiobus_free(pdata->mdio_bus); 727 + pdata->mdio_bus = NULL; 728 + }
drivers/net/ethernet/apm/xgene/xgene_enet_hw.h (+337 lines)
··· 1 + /* Applied Micro X-Gene SoC Ethernet Driver 2 + * 3 + * Copyright (c) 2014, Applied Micro Circuits Corporation 4 + * Authors: Iyappan Subramanian <isubramanian@apm.com> 5 + * Ravi Patel <rapatel@apm.com> 6 + * Keyur Chudgar <kchudgar@apm.com> 7 + * 8 + * This program is free software; you can redistribute it and/or modify it 9 + * under the terms of the GNU General Public License as published by the 10 + * Free Software Foundation; either version 2 of the License, or (at your 11 + * option) any later version. 12 + * 13 + * This program is distributed in the hope that it will be useful, 14 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 15 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 16 + * GNU General Public License for more details. 17 + * 18 + * You should have received a copy of the GNU General Public License 19 + * along with this program. If not, see <http://www.gnu.org/licenses/>. 20 + */ 21 + 22 + #ifndef __XGENE_ENET_HW_H__ 23 + #define __XGENE_ENET_HW_H__ 24 + 25 + #include "xgene_enet_main.h" 26 + 27 + struct xgene_enet_pdata; 28 + struct xgene_enet_stats; 29 + 30 + /* clears and then set bits */ 31 + static inline void xgene_set_bits(u32 *dst, u32 val, u32 start, u32 len) 32 + { 33 + u32 end = start + len - 1; 34 + u32 mask = GENMASK(end, start); 35 + 36 + *dst &= ~mask; 37 + *dst |= (val << start) & mask; 38 + } 39 + 40 + static inline u32 xgene_get_bits(u32 val, u32 start, u32 end) 41 + { 42 + return (val & GENMASK(end, start)) >> start; 43 + } 44 + 45 + #define CSR_RING_ID 0x0008 46 + #define OVERWRITE BIT(31) 47 + #define IS_BUFFER_POOL BIT(20) 48 + #define PREFETCH_BUF_EN BIT(21) 49 + #define CSR_RING_ID_BUF 0x000c 50 + #define CSR_RING_NE_INT_MODE 0x017c 51 + #define CSR_RING_CONFIG 0x006c 52 + #define CSR_RING_WR_BASE 0x0070 53 + #define NUM_RING_CONFIG 5 54 + #define BUFPOOL_MODE 3 55 + #define RM3 3 56 + #define INC_DEC_CMD_ADDR 0x002c 57 + #define UDP_HDR_SIZE 2 58 + #define BUF_LEN_CODE_2K 0x5000 
59 + 60 + #define CREATE_MASK(pos, len) GENMASK((pos)+(len)-1, (pos)) 61 + #define CREATE_MASK_ULL(pos, len) GENMASK_ULL((pos)+(len)-1, (pos)) 62 + 63 + /* Empty slot soft signature */ 64 + #define EMPTY_SLOT_INDEX 1 65 + #define EMPTY_SLOT ~0ULL 66 + 67 + #define WORK_DESC_SIZE 32 68 + #define BUFPOOL_DESC_SIZE 16 69 + 70 + #define RING_OWNER_MASK GENMASK(9, 6) 71 + #define RING_BUFNUM_MASK GENMASK(5, 0) 72 + 73 + #define SELTHRSH_POS 3 74 + #define SELTHRSH_LEN 3 75 + #define RINGADDRL_POS 5 76 + #define RINGADDRL_LEN 27 77 + #define RINGADDRH_POS 0 78 + #define RINGADDRH_LEN 6 79 + #define RINGSIZE_POS 23 80 + #define RINGSIZE_LEN 3 81 + #define RINGTYPE_POS 19 82 + #define RINGTYPE_LEN 2 83 + #define RINGMODE_POS 20 84 + #define RINGMODE_LEN 3 85 + #define RECOMTIMEOUTL_POS 28 86 + #define RECOMTIMEOUTL_LEN 3 87 + #define RECOMTIMEOUTH_POS 0 88 + #define RECOMTIMEOUTH_LEN 2 89 + #define NUMMSGSINQ_POS 1 90 + #define NUMMSGSINQ_LEN 16 91 + #define ACCEPTLERR BIT(19) 92 + #define QCOHERENT BIT(4) 93 + #define RECOMBBUF BIT(27) 94 + 95 + #define BLOCK_ETH_CSR_OFFSET 0x2000 96 + #define BLOCK_ETH_RING_IF_OFFSET 0x9000 97 + #define BLOCK_ETH_CLKRST_CSR_OFFSET 0xC000 98 + #define BLOCK_ETH_DIAG_CSR_OFFSET 0xD000 99 + 100 + #define BLOCK_ETH_MAC_OFFSET 0x0000 101 + #define BLOCK_ETH_STATS_OFFSET 0x0014 102 + #define BLOCK_ETH_MAC_CSR_OFFSET 0x2800 103 + 104 + #define MAC_ADDR_REG_OFFSET 0x00 105 + #define MAC_COMMAND_REG_OFFSET 0x04 106 + #define MAC_WRITE_REG_OFFSET 0x08 107 + #define MAC_READ_REG_OFFSET 0x0c 108 + #define MAC_COMMAND_DONE_REG_OFFSET 0x10 109 + 110 + #define STAT_ADDR_REG_OFFSET 0x00 111 + #define STAT_COMMAND_REG_OFFSET 0x04 112 + #define STAT_WRITE_REG_OFFSET 0x08 113 + #define STAT_READ_REG_OFFSET 0x0c 114 + #define STAT_COMMAND_DONE_REG_OFFSET 0x10 115 + 116 + #define MII_MGMT_CONFIG_ADDR 0x20 117 + #define MII_MGMT_COMMAND_ADDR 0x24 118 + #define MII_MGMT_ADDRESS_ADDR 0x28 119 + #define MII_MGMT_CONTROL_ADDR 0x2c 120 + #define 
MII_MGMT_STATUS_ADDR 0x30 121 + #define MII_MGMT_INDICATORS_ADDR 0x34 122 + 123 + #define BUSY_MASK BIT(0) 124 + #define READ_CYCLE_MASK BIT(0) 125 + #define PHY_CONTROL_SET(dst, val) xgene_set_bits(dst, val, 0, 16) 126 + 127 + #define ENET_SPARE_CFG_REG_ADDR 0x0750 128 + #define RSIF_CONFIG_REG_ADDR 0x0010 129 + #define RSIF_RAM_DBG_REG0_ADDR 0x0048 130 + #define RGMII_REG_0_ADDR 0x07e0 131 + #define CFG_LINK_AGGR_RESUME_0_ADDR 0x07c8 132 + #define DEBUG_REG_ADDR 0x0700 133 + #define CFG_BYPASS_ADDR 0x0294 134 + #define CLE_BYPASS_REG0_0_ADDR 0x0490 135 + #define CLE_BYPASS_REG1_0_ADDR 0x0494 136 + #define CFG_RSIF_FPBUFF_TIMEOUT_EN BIT(31) 137 + #define RESUME_TX BIT(0) 138 + #define CFG_SPEED_1250 BIT(24) 139 + #define TX_PORT0 BIT(0) 140 + #define CFG_BYPASS_UNISEC_TX BIT(2) 141 + #define CFG_BYPASS_UNISEC_RX BIT(1) 142 + #define CFG_CLE_BYPASS_EN0 BIT(31) 143 + #define CFG_TXCLK_MUXSEL0_SET(dst, val) xgene_set_bits(dst, val, 29, 3) 144 + 145 + #define CFG_CLE_IP_PROTOCOL0_SET(dst, val) xgene_set_bits(dst, val, 16, 2) 146 + #define CFG_CLE_DSTQID0_SET(dst, val) xgene_set_bits(dst, val, 0, 12) 147 + #define CFG_CLE_FPSEL0_SET(dst, val) xgene_set_bits(dst, val, 16, 4) 148 + #define CFG_MACMODE_SET(dst, val) xgene_set_bits(dst, val, 18, 2) 149 + #define CFG_WAITASYNCRD_SET(dst, val) xgene_set_bits(dst, val, 0, 16) 150 + #define ICM_CONFIG0_REG_0_ADDR 0x0400 151 + #define ICM_CONFIG2_REG_0_ADDR 0x0410 152 + #define RX_DV_GATE_REG_0_ADDR 0x05fc 153 + #define TX_DV_GATE_EN0 BIT(2) 154 + #define RX_DV_GATE_EN0 BIT(1) 155 + #define RESUME_RX0 BIT(0) 156 + #define ENET_CFGSSQMIWQASSOC_ADDR 0xe0 157 + #define ENET_CFGSSQMIFPQASSOC_ADDR 0xdc 158 + #define ENET_CFGSSQMIQMLITEFPQASSOC_ADDR 0xf0 159 + #define ENET_CFGSSQMIQMLITEWQASSOC_ADDR 0xf4 160 + #define ENET_CFG_MEM_RAM_SHUTDOWN_ADDR 0x70 161 + #define ENET_BLOCK_MEM_RDY_ADDR 0x74 162 + #define MAC_CONFIG_1_ADDR 0x00 163 + #define MAC_CONFIG_2_ADDR 0x04 164 + #define MAX_FRAME_LEN_ADDR 0x10 165 + #define 
INTERFACE_CONTROL_ADDR 0x38 166 + #define STATION_ADDR0_ADDR 0x40 167 + #define STATION_ADDR1_ADDR 0x44 168 + #define PHY_ADDR_SET(dst, val) xgene_set_bits(dst, val, 8, 5) 169 + #define REG_ADDR_SET(dst, val) xgene_set_bits(dst, val, 0, 5) 170 + #define ENET_INTERFACE_MODE2_SET(dst, val) xgene_set_bits(dst, val, 8, 2) 171 + #define MGMT_CLOCK_SEL_SET(dst, val) xgene_set_bits(dst, val, 0, 3) 172 + #define SOFT_RESET1 BIT(31) 173 + #define TX_EN BIT(0) 174 + #define RX_EN BIT(2) 175 + #define ENET_LHD_MODE BIT(25) 176 + #define ENET_GHD_MODE BIT(26) 177 + #define FULL_DUPLEX2 BIT(0) 178 + #define SCAN_AUTO_INCR BIT(5) 179 + #define TBYT_ADDR 0x38 180 + #define TPKT_ADDR 0x39 181 + #define TDRP_ADDR 0x45 182 + #define TFCS_ADDR 0x47 183 + #define TUND_ADDR 0x4a 184 + 185 + #define TSO_IPPROTO_TCP 1 186 + #define FULL_DUPLEX 2 187 + 188 + #define USERINFO_POS 0 189 + #define USERINFO_LEN 32 190 + #define FPQNUM_POS 32 191 + #define FPQNUM_LEN 12 192 + #define LERR_POS 60 193 + #define LERR_LEN 3 194 + #define STASH_POS 52 195 + #define STASH_LEN 2 196 + #define BUFDATALEN_POS 48 197 + #define BUFDATALEN_LEN 12 198 + #define DATAADDR_POS 0 199 + #define DATAADDR_LEN 42 200 + #define COHERENT_POS 63 201 + #define HENQNUM_POS 48 202 + #define HENQNUM_LEN 12 203 + #define TYPESEL_POS 44 204 + #define TYPESEL_LEN 4 205 + #define ETHHDR_POS 12 206 + #define ETHHDR_LEN 8 207 + #define IC_POS 35 /* Insert CRC */ 208 + #define TCPHDR_POS 0 209 + #define TCPHDR_LEN 6 210 + #define IPHDR_POS 6 211 + #define IPHDR_LEN 6 212 + #define EC_POS 22 /* Enable checksum */ 213 + #define EC_LEN 1 214 + #define IS_POS 24 /* IP protocol select */ 215 + #define IS_LEN 1 216 + #define TYPE_ETH_WORK_MESSAGE_POS 44 217 + 218 + struct xgene_enet_raw_desc { 219 + __le64 m0; 220 + __le64 m1; 221 + __le64 m2; 222 + __le64 m3; 223 + }; 224 + 225 + struct xgene_enet_raw_desc16 { 226 + __le64 m0; 227 + __le64 m1; 228 + }; 229 + 230 + static inline void xgene_enet_mark_desc_slot_empty(void 
*desc_slot_ptr) 231 + { 232 + __le64 *desc_slot = desc_slot_ptr; 233 + 234 + desc_slot[EMPTY_SLOT_INDEX] = cpu_to_le64(EMPTY_SLOT); 235 + } 236 + 237 + static inline bool xgene_enet_is_desc_slot_empty(void *desc_slot_ptr) 238 + { 239 + __le64 *desc_slot = desc_slot_ptr; 240 + 241 + return (desc_slot[EMPTY_SLOT_INDEX] == cpu_to_le64(EMPTY_SLOT)); 242 + } 243 + 244 + enum xgene_enet_ring_cfgsize { 245 + RING_CFGSIZE_512B, 246 + RING_CFGSIZE_2KB, 247 + RING_CFGSIZE_16KB, 248 + RING_CFGSIZE_64KB, 249 + RING_CFGSIZE_512KB, 250 + RING_CFGSIZE_INVALID 251 + }; 252 + 253 + enum xgene_enet_ring_type { 254 + RING_DISABLED, 255 + RING_REGULAR, 256 + RING_BUFPOOL 257 + }; 258 + 259 + enum xgene_ring_owner { 260 + RING_OWNER_ETH0, 261 + RING_OWNER_CPU = 15, 262 + RING_OWNER_INVALID 263 + }; 264 + 265 + enum xgene_enet_ring_bufnum { 266 + RING_BUFNUM_REGULAR = 0x0, 267 + RING_BUFNUM_BUFPOOL = 0x20, 268 + RING_BUFNUM_INVALID 269 + }; 270 + 271 + enum xgene_enet_cmd { 272 + XGENE_ENET_WR_CMD = BIT(31), 273 + XGENE_ENET_RD_CMD = BIT(30) 274 + }; 275 + 276 + enum xgene_enet_err_code { 277 + HBF_READ_DATA = 3, 278 + HBF_LL_READ = 4, 279 + BAD_WORK_MSG = 6, 280 + BUFPOOL_TIMEOUT = 15, 281 + INGRESS_CRC = 16, 282 + INGRESS_CHECKSUM = 17, 283 + INGRESS_TRUNC_FRAME = 18, 284 + INGRESS_PKT_LEN = 19, 285 + INGRESS_PKT_UNDER = 20, 286 + INGRESS_FIFO_OVERRUN = 21, 287 + INGRESS_CHECKSUM_COMPUTE = 26, 288 + ERR_CODE_INVALID 289 + }; 290 + 291 + static inline enum xgene_ring_owner xgene_enet_ring_owner(u16 id) 292 + { 293 + return (id & RING_OWNER_MASK) >> 6; 294 + } 295 + 296 + static inline u8 xgene_enet_ring_bufnum(u16 id) 297 + { 298 + return id & RING_BUFNUM_MASK; 299 + } 300 + 301 + static inline bool xgene_enet_is_bufpool(u16 id) 302 + { 303 + return ((id & RING_BUFNUM_MASK) >= 0x20) ? true : false; 304 + } 305 + 306 + static inline u16 xgene_enet_get_numslots(u16 id, u32 size) 307 + { 308 + bool is_bufpool = xgene_enet_is_bufpool(id); 309 + 310 + return (is_bufpool) ? 
size / BUFPOOL_DESC_SIZE : 311 + size / WORK_DESC_SIZE; 312 + } 313 + 314 + struct xgene_enet_desc_ring *xgene_enet_setup_ring( 315 + struct xgene_enet_desc_ring *ring); 316 + void xgene_enet_clear_ring(struct xgene_enet_desc_ring *ring); 317 + void xgene_enet_parse_error(struct xgene_enet_desc_ring *ring, 318 + struct xgene_enet_pdata *pdata, 319 + enum xgene_enet_err_code status); 320 + 321 + void xgene_enet_reset(struct xgene_enet_pdata *priv); 322 + void xgene_gmac_reset(struct xgene_enet_pdata *priv); 323 + void xgene_gmac_init(struct xgene_enet_pdata *priv, int speed); 324 + void xgene_gmac_tx_enable(struct xgene_enet_pdata *priv); 325 + void xgene_gmac_rx_enable(struct xgene_enet_pdata *priv); 326 + void xgene_gmac_tx_disable(struct xgene_enet_pdata *priv); 327 + void xgene_gmac_rx_disable(struct xgene_enet_pdata *priv); 328 + void xgene_gmac_set_mac_addr(struct xgene_enet_pdata *pdata); 329 + void xgene_enet_cle_bypass(struct xgene_enet_pdata *pdata, 330 + u32 dst_ring_num, u16 bufpool_id); 331 + void xgene_gport_shutdown(struct xgene_enet_pdata *priv); 332 + void xgene_gmac_get_tx_stats(struct xgene_enet_pdata *pdata); 333 + 334 + int xgene_enet_mdio_config(struct xgene_enet_pdata *pdata); 335 + void xgene_enet_mdio_remove(struct xgene_enet_pdata *pdata); 336 + 337 + #endif /* __XGENE_ENET_HW_H__ */
drivers/net/ethernet/apm/xgene/xgene_enet_main.c (+951 lines)
/* Applied Micro X-Gene SoC Ethernet Driver
 *
 * Copyright (c) 2014, Applied Micro Circuits Corporation
 * Authors: Iyappan Subramanian <isubramanian@apm.com>
 *	    Ravi Patel <rapatel@apm.com>
 *	    Keyur Chudgar <kchudgar@apm.com>
 *
 * This program is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License as published by the
 * Free Software Foundation; either version 2 of the License, or (at your
 * option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */

#include "xgene_enet_main.h"
#include "xgene_enet_hw.h"

static void xgene_enet_init_bufpool(struct xgene_enet_desc_ring *buf_pool)
{
	struct xgene_enet_raw_desc16 *raw_desc;
	int i;

	for (i = 0; i < buf_pool->slots; i++) {
		raw_desc = &buf_pool->raw_desc16[i];

		/* Hardware expects descriptor in little endian format */
		raw_desc->m0 = cpu_to_le64(i |
				SET_VAL(FPQNUM, buf_pool->dst_ring_num) |
				SET_VAL(STASH, 3));
	}
}

static int xgene_enet_refill_bufpool(struct xgene_enet_desc_ring *buf_pool,
				     u32 nbuf)
{
	struct sk_buff *skb;
	struct xgene_enet_raw_desc16 *raw_desc;
	struct net_device *ndev;
	struct device *dev;
	dma_addr_t dma_addr;
	u32 tail = buf_pool->tail;
	u32 slots = buf_pool->slots - 1;
	u16 bufdatalen, len;
	int i;

	ndev = buf_pool->ndev;
	dev = ndev_to_dev(buf_pool->ndev);
	bufdatalen = BUF_LEN_CODE_2K | (SKB_BUFFER_SIZE & GENMASK(11, 0));
	len = XGENE_ENET_MAX_MTU;

	for (i = 0; i < nbuf; i++) {
		raw_desc = &buf_pool->raw_desc16[tail];

		skb = netdev_alloc_skb_ip_align(ndev, len);
		if (unlikely(!skb))
			return -ENOMEM;
		buf_pool->rx_skb[tail] = skb;

		dma_addr = dma_map_single(dev, skb->data, len, DMA_FROM_DEVICE);
		if (dma_mapping_error(dev, dma_addr)) {
			netdev_err(ndev, "DMA mapping error\n");
			dev_kfree_skb_any(skb);
			return -EINVAL;
		}

		raw_desc->m1 = cpu_to_le64(SET_VAL(DATAADDR, dma_addr) |
					   SET_VAL(BUFDATALEN, bufdatalen) |
					   SET_BIT(COHERENT));
		tail = (tail + 1) & slots;
	}

	iowrite32(nbuf, buf_pool->cmd);
	buf_pool->tail = tail;

	return 0;
}

static u16 xgene_enet_dst_ring_num(struct xgene_enet_desc_ring *ring)
{
	struct xgene_enet_pdata *pdata = netdev_priv(ring->ndev);

	return ((u16)pdata->rm << 10) | ring->num;
}

static u8 xgene_enet_hdr_len(const void *data)
{
	const struct ethhdr *eth = data;

	return (eth->h_proto == htons(ETH_P_8021Q)) ? VLAN_ETH_HLEN : ETH_HLEN;
}

static u32 xgene_enet_ring_len(struct xgene_enet_desc_ring *ring)
{
	u32 __iomem *cmd_base = ring->cmd_base;
	u32 ring_state, num_msgs;

	ring_state = ioread32(&cmd_base[1]);
	num_msgs = ring_state & CREATE_MASK(NUMMSGSINQ_POS, NUMMSGSINQ_LEN);

	return num_msgs >> NUMMSGSINQ_POS;
}

static void xgene_enet_delete_bufpool(struct xgene_enet_desc_ring *buf_pool)
{
	struct xgene_enet_raw_desc16 *raw_desc;
	u32 slots = buf_pool->slots - 1;
	u32 tail = buf_pool->tail;
	u32 userinfo;
	int i, len;

	len = xgene_enet_ring_len(buf_pool);
	for (i = 0; i < len; i++) {
		tail = (tail - 1) & slots;
		raw_desc = &buf_pool->raw_desc16[tail];

		/* Hardware stores descriptor in little endian format */
		userinfo = GET_VAL(USERINFO, le64_to_cpu(raw_desc->m0));
		dev_kfree_skb_any(buf_pool->rx_skb[userinfo]);
	}

	iowrite32(-len, buf_pool->cmd);
	buf_pool->tail = tail;
}

static irqreturn_t xgene_enet_rx_irq(const int irq, void *data)
{
	struct xgene_enet_desc_ring *rx_ring = data;

	if (napi_schedule_prep(&rx_ring->napi)) {
		disable_irq_nosync(irq);
		__napi_schedule(&rx_ring->napi);
	}

	return IRQ_HANDLED;
}

static int xgene_enet_tx_completion(struct xgene_enet_desc_ring *cp_ring,
				    struct xgene_enet_raw_desc *raw_desc)
{
	struct sk_buff *skb;
	struct device *dev;
	u16 skb_index;
	u8 status;
	int ret = 0;

	skb_index = GET_VAL(USERINFO, le64_to_cpu(raw_desc->m0));
	skb = cp_ring->cp_skb[skb_index];

	dev = ndev_to_dev(cp_ring->ndev);
	dma_unmap_single(dev, GET_VAL(DATAADDR, le64_to_cpu(raw_desc->m1)),
			 GET_VAL(BUFDATALEN, le64_to_cpu(raw_desc->m1)),
			 DMA_TO_DEVICE);

	/* Checking for error */
	status = GET_VAL(LERR, le64_to_cpu(raw_desc->m0));
	if (unlikely(status > 2)) {
		xgene_enet_parse_error(cp_ring, netdev_priv(cp_ring->ndev),
				       status);
		ret = -EIO;
	}

	if (likely(skb)) {
		dev_kfree_skb_any(skb);
	} else {
		netdev_err(cp_ring->ndev, "completion skb is NULL\n");
		ret = -EIO;
	}

	return ret;
}

static u64 xgene_enet_work_msg(struct sk_buff *skb)
{
	struct iphdr *iph;
	u8 l3hlen, l4hlen = 0;
	u8 csum_enable = 0;
	u8 proto = 0;
	u8 ethhdr;
	u64 hopinfo;

	if (unlikely(skb->protocol != htons(ETH_P_IP)) &&
	    unlikely(skb->protocol != htons(ETH_P_8021Q)))
		goto out;

	if (unlikely(!(skb->dev->features & NETIF_F_IP_CSUM)))
		goto out;

	iph = ip_hdr(skb);
	if (unlikely(ip_is_fragment(iph)))
		goto out;

	if (likely(iph->protocol == IPPROTO_TCP)) {
		l4hlen = tcp_hdrlen(skb) >> 2;
		csum_enable = 1;
		proto = TSO_IPPROTO_TCP;
	} else if (iph->protocol == IPPROTO_UDP) {
		l4hlen = UDP_HDR_SIZE;
		csum_enable = 1;
	}
out:
	l3hlen = ip_hdrlen(skb) >> 2;
	ethhdr = xgene_enet_hdr_len(skb->data);
	hopinfo = SET_VAL(TCPHDR, l4hlen) |
		  SET_VAL(IPHDR, l3hlen) |
		  SET_VAL(ETHHDR, ethhdr) |
		  SET_VAL(EC, csum_enable) |
		  SET_VAL(IS, proto) |
		  SET_BIT(IC) |
		  SET_BIT(TYPE_ETH_WORK_MESSAGE);

	return hopinfo;
}

static int xgene_enet_setup_tx_desc(struct xgene_enet_desc_ring *tx_ring,
				    struct sk_buff *skb)
{
	struct device *dev = ndev_to_dev(tx_ring->ndev);
	struct xgene_enet_raw_desc *raw_desc;
	dma_addr_t dma_addr;
	u16 tail = tx_ring->tail;
	u64 hopinfo;

	raw_desc = &tx_ring->raw_desc[tail];
	memset(raw_desc, 0, sizeof(struct xgene_enet_raw_desc));

	dma_addr = dma_map_single(dev, skb->data, skb->len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, dma_addr)) {
		netdev_err(tx_ring->ndev, "DMA mapping error\n");
		return -EINVAL;
	}

	/* Hardware expects descriptor in little endian format */
	raw_desc->m0 = cpu_to_le64(tail);
	raw_desc->m1 = cpu_to_le64(SET_VAL(DATAADDR, dma_addr) |
				   SET_VAL(BUFDATALEN, skb->len) |
				   SET_BIT(COHERENT));
	hopinfo = xgene_enet_work_msg(skb);
	raw_desc->m3 = cpu_to_le64(SET_VAL(HENQNUM, tx_ring->dst_ring_num) |
				   hopinfo);
	tx_ring->cp_ring->cp_skb[tail] = skb;

	return 0;
}

static netdev_tx_t xgene_enet_start_xmit(struct sk_buff *skb,
					 struct net_device *ndev)
{
	struct xgene_enet_pdata *pdata = netdev_priv(ndev);
	struct xgene_enet_desc_ring *tx_ring = pdata->tx_ring;
	struct xgene_enet_desc_ring *cp_ring = tx_ring->cp_ring;
	u32 tx_level, cq_level;

	tx_level = xgene_enet_ring_len(tx_ring);
	cq_level = xgene_enet_ring_len(cp_ring);
	if (unlikely(tx_level > pdata->tx_qcnt_hi ||
		     cq_level > pdata->cp_qcnt_hi)) {
		netif_stop_queue(ndev);
		return NETDEV_TX_BUSY;
	}

	if (xgene_enet_setup_tx_desc(tx_ring, skb)) {
		dev_kfree_skb_any(skb);
		return NETDEV_TX_OK;
	}

	iowrite32(1, tx_ring->cmd);
	skb_tx_timestamp(skb);
	tx_ring->tail = (tx_ring->tail + 1) & (tx_ring->slots - 1);

	pdata->stats.tx_packets++;
	pdata->stats.tx_bytes += skb->len;

	return NETDEV_TX_OK;
}

static void xgene_enet_skip_csum(struct sk_buff *skb)
{
	struct iphdr *iph = ip_hdr(skb);

	if (!ip_is_fragment(iph) ||
	    (iph->protocol != IPPROTO_TCP && iph->protocol != IPPROTO_UDP)) {
		skb->ip_summed = CHECKSUM_UNNECESSARY;
	}
}

static int xgene_enet_rx_frame(struct xgene_enet_desc_ring *rx_ring,
			       struct xgene_enet_raw_desc *raw_desc)
{
	struct net_device *ndev;
	struct xgene_enet_pdata *pdata;
	struct device *dev;
	struct xgene_enet_desc_ring *buf_pool;
	u32 datalen, skb_index;
	struct sk_buff *skb;
	u8 status;
	int ret = 0;

	ndev = rx_ring->ndev;
	pdata = netdev_priv(ndev);
	dev = ndev_to_dev(rx_ring->ndev);
	buf_pool = rx_ring->buf_pool;

	dma_unmap_single(dev, GET_VAL(DATAADDR, le64_to_cpu(raw_desc->m1)),
			 XGENE_ENET_MAX_MTU, DMA_FROM_DEVICE);
	skb_index = GET_VAL(USERINFO, le64_to_cpu(raw_desc->m0));
	skb = buf_pool->rx_skb[skb_index];

	/* checking for error */
	status = GET_VAL(LERR, le64_to_cpu(raw_desc->m0));
	if (unlikely(status > 2)) {
		dev_kfree_skb_any(skb);
		xgene_enet_parse_error(rx_ring, netdev_priv(rx_ring->ndev),
				       status);
		pdata->stats.rx_dropped++;
		ret = -EIO;
		goto out;
	}

	/* strip off CRC as HW isn't doing this */
	datalen = GET_VAL(BUFDATALEN, le64_to_cpu(raw_desc->m1));
	datalen -= 4;
	prefetch(skb->data - NET_IP_ALIGN);
	skb_put(skb, datalen);

	skb_checksum_none_assert(skb);
	skb->protocol = eth_type_trans(skb, ndev);
	if (likely((ndev->features & NETIF_F_IP_CSUM) &&
		   skb->protocol == htons(ETH_P_IP))) {
		xgene_enet_skip_csum(skb);
	}

	pdata->stats.rx_packets++;
	pdata->stats.rx_bytes += datalen;
	napi_gro_receive(&rx_ring->napi, skb);
out:
	if (--rx_ring->nbufpool == 0) {
		ret = xgene_enet_refill_bufpool(buf_pool, NUM_BUFPOOL);
		rx_ring->nbufpool = NUM_BUFPOOL;
	}

	return ret;
}

static bool is_rx_desc(struct xgene_enet_raw_desc *raw_desc)
{
	return GET_VAL(FPQNUM, le64_to_cpu(raw_desc->m0)) ? true : false;
}

static int xgene_enet_process_ring(struct xgene_enet_desc_ring *ring,
				   int budget)
{
	struct xgene_enet_pdata *pdata = netdev_priv(ring->ndev);
	struct xgene_enet_raw_desc *raw_desc;
	u16 head = ring->head;
	u16 slots = ring->slots - 1;
	int ret, count = 0;

	do {
		raw_desc = &ring->raw_desc[head];
		if (unlikely(xgene_enet_is_desc_slot_empty(raw_desc)))
			break;

		if (is_rx_desc(raw_desc))
			ret = xgene_enet_rx_frame(ring, raw_desc);
		else
			ret = xgene_enet_tx_completion(ring, raw_desc);
		xgene_enet_mark_desc_slot_empty(raw_desc);

		head = (head + 1) & slots;
		count++;

		if (ret)
			break;
	} while (--budget);

	if (likely(count)) {
		iowrite32(-count, ring->cmd);
		ring->head = head;

		if (netif_queue_stopped(ring->ndev)) {
			if (xgene_enet_ring_len(ring) < pdata->cp_qcnt_low)
				netif_wake_queue(ring->ndev);
		}
	}

	return budget;
}

static int xgene_enet_napi(struct napi_struct *napi, const int budget)
{
	struct xgene_enet_desc_ring *ring;
	int processed;

	ring = container_of(napi, struct xgene_enet_desc_ring, napi);
	processed = xgene_enet_process_ring(ring, budget);

	if (processed != budget) {
		napi_complete(napi);
		enable_irq(ring->irq);
	}

	return processed;
}

static void xgene_enet_timeout(struct net_device *ndev)
{
	struct xgene_enet_pdata *pdata = netdev_priv(ndev);

	xgene_gmac_reset(pdata);
}

static int xgene_enet_register_irq(struct net_device *ndev)
{
	struct xgene_enet_pdata *pdata = netdev_priv(ndev);
	struct device *dev = ndev_to_dev(ndev);
	int ret;

	ret = devm_request_irq(dev, pdata->rx_ring->irq, xgene_enet_rx_irq,
			       IRQF_SHARED, ndev->name, pdata->rx_ring);
	if (ret) {
		netdev_err(ndev, "rx%d interrupt request failed\n",
			   pdata->rx_ring->irq);
	}

	return ret;
}

static void xgene_enet_free_irq(struct net_device *ndev)
{
	struct xgene_enet_pdata *pdata;
	struct device *dev;

	pdata = netdev_priv(ndev);
	dev = ndev_to_dev(ndev);
	devm_free_irq(dev, pdata->rx_ring->irq, pdata->rx_ring);
}

static int xgene_enet_open(struct net_device *ndev)
{
	struct xgene_enet_pdata *pdata = netdev_priv(ndev);
	int ret;

	xgene_gmac_tx_enable(pdata);
	xgene_gmac_rx_enable(pdata);

	ret = xgene_enet_register_irq(ndev);
	if (ret)
		return ret;
	napi_enable(&pdata->rx_ring->napi);

	if (pdata->phy_dev)
		phy_start(pdata->phy_dev);

	netif_start_queue(ndev);

	return ret;
}

static int xgene_enet_close(struct net_device *ndev)
{
	struct xgene_enet_pdata *pdata = netdev_priv(ndev);

	netif_stop_queue(ndev);

	if (pdata->phy_dev)
		phy_stop(pdata->phy_dev);

	napi_disable(&pdata->rx_ring->napi);
	xgene_enet_free_irq(ndev);
	xgene_enet_process_ring(pdata->rx_ring, -1);

	xgene_gmac_tx_disable(pdata);
	xgene_gmac_rx_disable(pdata);

	return 0;
}

static void xgene_enet_delete_ring(struct xgene_enet_desc_ring *ring)
{
	struct xgene_enet_pdata *pdata;
	struct device *dev;

	pdata = netdev_priv(ring->ndev);
	dev = ndev_to_dev(ring->ndev);

	xgene_enet_clear_ring(ring);
	dma_free_coherent(dev, ring->size, ring->desc_addr, ring->dma);
}

static void xgene_enet_delete_desc_rings(struct xgene_enet_pdata *pdata)
{
	struct xgene_enet_desc_ring *buf_pool;

	if (pdata->tx_ring) {
		xgene_enet_delete_ring(pdata->tx_ring);
		pdata->tx_ring = NULL;
	}

	if (pdata->rx_ring) {
		buf_pool = pdata->rx_ring->buf_pool;
		xgene_enet_delete_bufpool(buf_pool);
		xgene_enet_delete_ring(buf_pool);
		xgene_enet_delete_ring(pdata->rx_ring);
		pdata->rx_ring = NULL;
	}
}

static int xgene_enet_get_ring_size(struct device *dev,
				    enum xgene_enet_ring_cfgsize cfgsize)
{
	int size = -EINVAL;

	switch (cfgsize) {
	case RING_CFGSIZE_512B:
		size = 0x200;
		break;
	case RING_CFGSIZE_2KB:
		size = 0x800;
		break;
	case RING_CFGSIZE_16KB:
		size = 0x4000;
		break;
	case RING_CFGSIZE_64KB:
		size = 0x10000;
		break;
	case RING_CFGSIZE_512KB:
		size = 0x80000;
		break;
	default:
		dev_err(dev, "Unsupported cfg ring size %d\n", cfgsize);
		break;
	}

	return size;
}

static void xgene_enet_free_desc_ring(struct xgene_enet_desc_ring *ring)
{
	struct device *dev;

	if (!ring)
		return;

	dev = ndev_to_dev(ring->ndev);

	if (ring->desc_addr) {
		xgene_enet_clear_ring(ring);
		dma_free_coherent(dev, ring->size, ring->desc_addr, ring->dma);
	}
	devm_kfree(dev, ring);
}

static void xgene_enet_free_desc_rings(struct xgene_enet_pdata *pdata)
{
	struct device *dev = &pdata->pdev->dev;
	struct xgene_enet_desc_ring *ring;

	ring = pdata->tx_ring;
	if (ring && ring->cp_ring && ring->cp_ring->cp_skb)
		devm_kfree(dev, ring->cp_ring->cp_skb);
	xgene_enet_free_desc_ring(ring);

	ring = pdata->rx_ring;
	if (ring && ring->buf_pool && ring->buf_pool->rx_skb)
		devm_kfree(dev, ring->buf_pool->rx_skb);
	xgene_enet_free_desc_ring(ring->buf_pool);
	xgene_enet_free_desc_ring(ring);
}

static struct xgene_enet_desc_ring *xgene_enet_create_desc_ring(
			struct net_device *ndev, u32 ring_num,
			enum xgene_enet_ring_cfgsize cfgsize, u32 ring_id)
{
	struct xgene_enet_desc_ring *ring;
	struct xgene_enet_pdata *pdata = netdev_priv(ndev);
	struct device *dev = ndev_to_dev(ndev);
	u32 size;

	ring = devm_kzalloc(dev, sizeof(struct xgene_enet_desc_ring),
			    GFP_KERNEL);
	if (!ring)
		return NULL;

	ring->ndev = ndev;
	ring->num = ring_num;
	ring->cfgsize = cfgsize;
	ring->id = ring_id;

	size = xgene_enet_get_ring_size(dev, cfgsize);
	ring->desc_addr = dma_zalloc_coherent(dev, size, &ring->dma,
					      GFP_KERNEL);
	if (!ring->desc_addr) {
		devm_kfree(dev, ring);
		return NULL;
	}
	ring->size = size;

	ring->cmd_base = pdata->ring_cmd_addr + (ring->num << 6);
	ring->cmd = ring->cmd_base + INC_DEC_CMD_ADDR;
	pdata->rm = RM3;
	ring = xgene_enet_setup_ring(ring);
	netdev_dbg(ndev, "ring info: num=%d size=%d id=%d slots=%d\n",
		   ring->num, ring->size, ring->id, ring->slots);

	return ring;
}

static u16 xgene_enet_get_ring_id(enum xgene_ring_owner owner, u8 bufnum)
{
	return (owner << 6) | (bufnum & GENMASK(5, 0));
}

static int xgene_enet_create_desc_rings(struct net_device *ndev)
{
	struct xgene_enet_pdata *pdata = netdev_priv(ndev);
	struct device *dev = ndev_to_dev(ndev);
	struct xgene_enet_desc_ring *rx_ring, *tx_ring, *cp_ring;
	struct xgene_enet_desc_ring *buf_pool = NULL;
	u8 cpu_bufnum = 0, eth_bufnum = 0;
	u8 bp_bufnum = 0x20;
	u16 ring_id, ring_num = 0;
	int ret;

	/* allocate rx descriptor ring */
	ring_id = xgene_enet_get_ring_id(RING_OWNER_CPU, cpu_bufnum++);
	rx_ring = xgene_enet_create_desc_ring(ndev, ring_num++,
					      RING_CFGSIZE_16KB, ring_id);
	if (!rx_ring) {
		ret = -ENOMEM;
		goto err;
	}

	/* allocate buffer pool for receiving packets */
	ring_id = xgene_enet_get_ring_id(RING_OWNER_ETH0, bp_bufnum++);
	buf_pool = xgene_enet_create_desc_ring(ndev, ring_num++,
					       RING_CFGSIZE_2KB, ring_id);
	if (!buf_pool) {
		ret = -ENOMEM;
		goto err;
	}

	rx_ring->nbufpool = NUM_BUFPOOL;
	rx_ring->buf_pool = buf_pool;
	rx_ring->irq = pdata->rx_irq;
	buf_pool->rx_skb = devm_kcalloc(dev, buf_pool->slots,
					sizeof(struct sk_buff *), GFP_KERNEL);
	if (!buf_pool->rx_skb) {
		ret = -ENOMEM;
		goto err;
	}

	buf_pool->dst_ring_num = xgene_enet_dst_ring_num(buf_pool);
	rx_ring->buf_pool = buf_pool;
	pdata->rx_ring = rx_ring;

	/* allocate tx descriptor ring */
	ring_id = xgene_enet_get_ring_id(RING_OWNER_ETH0, eth_bufnum++);
	tx_ring = xgene_enet_create_desc_ring(ndev, ring_num++,
					      RING_CFGSIZE_16KB, ring_id);
	if (!tx_ring) {
		ret = -ENOMEM;
		goto err;
	}
	pdata->tx_ring = tx_ring;

	cp_ring = pdata->rx_ring;
	cp_ring->cp_skb = devm_kcalloc(dev, tx_ring->slots,
				       sizeof(struct sk_buff *), GFP_KERNEL);
	if (!cp_ring->cp_skb) {
		ret = -ENOMEM;
		goto err;
	}
	pdata->tx_ring->cp_ring = cp_ring;
	pdata->tx_ring->dst_ring_num = xgene_enet_dst_ring_num(cp_ring);

	pdata->tx_qcnt_hi = pdata->tx_ring->slots / 2;
	pdata->cp_qcnt_hi = pdata->rx_ring->slots / 2;
	pdata->cp_qcnt_low = pdata->cp_qcnt_hi / 2;

	return 0;

err:
	xgene_enet_free_desc_rings(pdata);
	return ret;
}

static struct rtnl_link_stats64 *xgene_enet_get_stats64(
			struct net_device *ndev,
			struct rtnl_link_stats64 *storage)
{
	struct xgene_enet_pdata *pdata = netdev_priv(ndev);
	struct rtnl_link_stats64 *stats = &pdata->stats;

	stats->rx_errors += stats->rx_length_errors +
			    stats->rx_crc_errors +
			    stats->rx_frame_errors +
			    stats->rx_fifo_errors;
	memcpy(storage, &pdata->stats, sizeof(struct rtnl_link_stats64));

	return storage;
}

static int xgene_enet_set_mac_address(struct net_device *ndev, void *addr)
{
	struct xgene_enet_pdata *pdata = netdev_priv(ndev);
	int ret;

	ret = eth_mac_addr(ndev, addr);
	if (ret)
		return ret;
	xgene_gmac_set_mac_addr(pdata);

	return ret;
}

static const struct net_device_ops xgene_ndev_ops = {
	.ndo_open = xgene_enet_open,
	.ndo_stop = xgene_enet_close,
	.ndo_start_xmit = xgene_enet_start_xmit,
	.ndo_tx_timeout = xgene_enet_timeout,
	.ndo_get_stats64 = xgene_enet_get_stats64,
	.ndo_change_mtu = eth_change_mtu,
	.ndo_set_mac_address = xgene_enet_set_mac_address,
};

static int xgene_enet_get_resources(struct xgene_enet_pdata *pdata)
{
	struct platform_device *pdev;
	struct net_device *ndev;
	struct device *dev;
	struct resource *res;
	void __iomem *base_addr;
	const char *mac;
	int ret;

	pdev = pdata->pdev;
	dev = &pdev->dev;
	ndev = pdata->ndev;

	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "enet_csr");
	if (!res) {
		dev_err(dev, "Resource enet_csr not defined\n");
		return -ENODEV;
	}
	pdata->base_addr = devm_ioremap_resource(dev, res);
	if (IS_ERR(pdata->base_addr)) {
		dev_err(dev, "Unable to retrieve ENET Port CSR region\n");
		return PTR_ERR(pdata->base_addr);
	}

	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ring_csr");
	if (!res) {
		dev_err(dev, "Resource ring_csr not defined\n");
		return -ENODEV;
	}
	pdata->ring_csr_addr = devm_ioremap_resource(dev, res);
	if (IS_ERR(pdata->ring_csr_addr)) {
		dev_err(dev, "Unable to retrieve ENET Ring CSR region\n");
		return PTR_ERR(pdata->ring_csr_addr);
	}

	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ring_cmd");
	if (!res) {
		dev_err(dev, "Resource ring_cmd not defined\n");
		return -ENODEV;
	}
	pdata->ring_cmd_addr = devm_ioremap_resource(dev, res);
	if (IS_ERR(pdata->ring_cmd_addr)) {
		dev_err(dev, "Unable to retrieve ENET Ring command region\n");
		return PTR_ERR(pdata->ring_cmd_addr);
	}

	ret = platform_get_irq(pdev, 0);
	if (ret <= 0) {
		dev_err(dev, "Unable to get ENET Rx IRQ\n");
		ret = ret ? : -ENXIO;
		return ret;
	}
	pdata->rx_irq = ret;

	mac = of_get_mac_address(dev->of_node);
	if (mac)
		memcpy(ndev->dev_addr, mac, ndev->addr_len);
	else
		eth_hw_addr_random(ndev);
	memcpy(ndev->perm_addr, ndev->dev_addr, ndev->addr_len);

	pdata->phy_mode = of_get_phy_mode(pdev->dev.of_node);
	if (pdata->phy_mode < 0) {
		dev_err(dev, "Incorrect phy-connection-type in DTS\n");
		return -EINVAL;
	}

	pdata->clk = devm_clk_get(&pdev->dev, NULL);
	ret = IS_ERR(pdata->clk);
	if (IS_ERR(pdata->clk)) {
		dev_err(&pdev->dev, "can't get clock\n");
		ret = PTR_ERR(pdata->clk);
		return ret;
	}

	base_addr = pdata->base_addr;
	pdata->eth_csr_addr = base_addr + BLOCK_ETH_CSR_OFFSET;
	pdata->eth_ring_if_addr = base_addr + BLOCK_ETH_RING_IF_OFFSET;
	pdata->eth_diag_csr_addr = base_addr + BLOCK_ETH_DIAG_CSR_OFFSET;
	pdata->mcx_mac_addr = base_addr + BLOCK_ETH_MAC_OFFSET;
	pdata->mcx_stats_addr = base_addr + BLOCK_ETH_STATS_OFFSET;
	pdata->mcx_mac_csr_addr = base_addr + BLOCK_ETH_MAC_CSR_OFFSET;
	pdata->rx_buff_cnt = NUM_PKT_BUF;

	return ret;
}

static int xgene_enet_init_hw(struct xgene_enet_pdata *pdata)
{
	struct net_device *ndev = pdata->ndev;
	struct xgene_enet_desc_ring *buf_pool;
	u16 dst_ring_num;
	int ret;

	xgene_gmac_tx_disable(pdata);
	xgene_gmac_rx_disable(pdata);

	ret = xgene_enet_create_desc_rings(ndev);
	if (ret) {
		netdev_err(ndev, "Error in ring configuration\n");
		return ret;
	}

	/* setup buffer pool */
	buf_pool = pdata->rx_ring->buf_pool;
	xgene_enet_init_bufpool(buf_pool);
	ret = xgene_enet_refill_bufpool(buf_pool, pdata->rx_buff_cnt);
	if (ret) {
		xgene_enet_delete_desc_rings(pdata);
		return ret;
	}

	dst_ring_num = xgene_enet_dst_ring_num(pdata->rx_ring);
	xgene_enet_cle_bypass(pdata, dst_ring_num, buf_pool->id);

	return ret;
}

static int xgene_enet_probe(struct platform_device *pdev)
{
	struct net_device *ndev;
	struct xgene_enet_pdata *pdata;
	struct device *dev = &pdev->dev;
	struct napi_struct *napi;
	int ret;

	ndev = alloc_etherdev(sizeof(struct xgene_enet_pdata));
	if (!ndev)
		return -ENOMEM;

	pdata = netdev_priv(ndev);

	pdata->pdev = pdev;
	pdata->ndev = ndev;
	SET_NETDEV_DEV(ndev, dev);
	platform_set_drvdata(pdev, pdata);
	ndev->netdev_ops = &xgene_ndev_ops;
	xgene_enet_set_ethtool_ops(ndev);
	ndev->features |= NETIF_F_IP_CSUM |
			  NETIF_F_GSO |
			  NETIF_F_GRO;

	ret = xgene_enet_get_resources(pdata);
	if (ret)
		goto err;

	xgene_enet_reset(pdata);
	xgene_gmac_init(pdata, SPEED_1000);

	ret = register_netdev(ndev);
	if (ret) {
		netdev_err(ndev, "Failed to register netdev\n");
		goto err;
	}

	ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
	if (ret) {
		netdev_err(ndev, "No usable DMA configuration\n");
		goto err;
	}

	ret = xgene_enet_init_hw(pdata);
	if (ret)
		goto err;

	napi = &pdata->rx_ring->napi;
	netif_napi_add(ndev, napi, xgene_enet_napi, NAPI_POLL_WEIGHT);
	ret = xgene_enet_mdio_config(pdata);

	return ret;
err:
	free_netdev(ndev);
	return ret;
}

static int xgene_enet_remove(struct platform_device *pdev)
{
	struct xgene_enet_pdata *pdata;
	struct net_device *ndev;

	pdata = platform_get_drvdata(pdev);
	ndev = pdata->ndev;

	xgene_gmac_rx_disable(pdata);
	xgene_gmac_tx_disable(pdata);

	netif_napi_del(&pdata->rx_ring->napi);
	xgene_enet_mdio_remove(pdata);
	xgene_enet_delete_desc_rings(pdata);
	unregister_netdev(ndev);
	xgene_gport_shutdown(pdata);
	free_netdev(ndev);

	return 0;
}

static struct of_device_id xgene_enet_match[] = {
	{.compatible = "apm,xgene-enet",},
	{},
};

MODULE_DEVICE_TABLE(of, xgene_enet_match);

static struct platform_driver xgene_enet_driver = {
	.driver = {
		   .name = "xgene-enet",
		   .of_match_table = xgene_enet_match,
	},
	.probe = xgene_enet_probe,
	.remove = xgene_enet_remove,
};

module_platform_driver(xgene_enet_driver);

MODULE_DESCRIPTION("APM X-Gene SoC Ethernet driver");
MODULE_VERSION(XGENE_DRV_VERSION);
MODULE_AUTHOR("Keyur Chudgar <kchudgar@apm.com>");
MODULE_LICENSE("GPL");
+135
drivers/net/ethernet/apm/xgene/xgene_enet_main.h
/* Applied Micro X-Gene SoC Ethernet Driver
 *
 * Copyright (c) 2014, Applied Micro Circuits Corporation
 * Authors: Iyappan Subramanian <isubramanian@apm.com>
 *	    Ravi Patel <rapatel@apm.com>
 *	    Keyur Chudgar <kchudgar@apm.com>
 *
 * This program is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License as published by the
 * Free Software Foundation; either version 2 of the License, or (at your
 * option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */

#ifndef __XGENE_ENET_MAIN_H__
#define __XGENE_ENET_MAIN_H__

#include <linux/clk.h>
#include <linux/of_platform.h>
#include <linux/of_net.h>
#include <linux/of_mdio.h>
#include <linux/module.h>
#include <net/ip.h>
#include <linux/prefetch.h>
#include <linux/if_vlan.h>
#include <linux/phy.h>
#include "xgene_enet_hw.h"

#define XGENE_DRV_VERSION	"v1.0"
#define XGENE_ENET_MAX_MTU	1536
#define SKB_BUFFER_SIZE		(XGENE_ENET_MAX_MTU - NET_IP_ALIGN)
#define NUM_PKT_BUF	64
#define NUM_BUFPOOL	32

/* software context of a descriptor ring */
struct xgene_enet_desc_ring {
	struct net_device *ndev;
	u16 id;
	u16 num;
	u16 head;
	u16 tail;
	u16 slots;
	u16 irq;
	u32 size;
	u32 state[NUM_RING_CONFIG];
	void __iomem *cmd_base;
	void __iomem *cmd;
	dma_addr_t dma;
	u16 dst_ring_num;
	u8 nbufpool;
	struct sk_buff *(*rx_skb);
	struct sk_buff *(*cp_skb);
	enum xgene_enet_ring_cfgsize cfgsize;
	struct xgene_enet_desc_ring *cp_ring;
	struct xgene_enet_desc_ring *buf_pool;
	struct napi_struct napi;
	union {
		void *desc_addr;
		struct xgene_enet_raw_desc *raw_desc;
		struct xgene_enet_raw_desc16 *raw_desc16;
	};
};

/* ethernet private data */
struct xgene_enet_pdata {
	struct net_device *ndev;
	struct mii_bus *mdio_bus;
	struct phy_device *phy_dev;
	int phy_speed;
	struct clk *clk;
	struct platform_device *pdev;
	struct xgene_enet_desc_ring *tx_ring;
	struct xgene_enet_desc_ring *rx_ring;
	char *dev_name;
	u32 rx_buff_cnt;
	u32 tx_qcnt_hi;
	u32 cp_qcnt_hi;
	u32 cp_qcnt_low;
	u32 rx_irq;
	void __iomem *eth_csr_addr;
	void __iomem *eth_ring_if_addr;
	void __iomem *eth_diag_csr_addr;
	void __iomem *mcx_mac_addr;
	void __iomem *mcx_stats_addr;
	void __iomem *mcx_mac_csr_addr;
	void __iomem *base_addr;
	void __iomem *ring_csr_addr;
	void __iomem *ring_cmd_addr;
	u32 phy_addr;
	int phy_mode;
	u32 speed;
	u16 rm;
	struct rtnl_link_stats64 stats;
};

/* Set the specified value into a bit-field defined by its starting position
 * and length within a single u64.
 */
static inline u64 xgene_enet_set_field_value(int pos, int len, u64 val)
{
	return (val & ((1ULL << len) - 1)) << pos;
}

#define SET_VAL(field, val) \
		xgene_enet_set_field_value(field ## _POS, field ## _LEN, val)

#define SET_BIT(field) \
		xgene_enet_set_field_value(field ## _POS, 1, 1)

/* Get the value from a bit-field defined by its starting position
 * and length within the specified u64.
 */
static inline u64 xgene_enet_get_field_value(int pos, int len, u64 src)
{
	return (src >> pos) & ((1ULL << len) - 1);
}

#define GET_VAL(field, src) \
		xgene_enet_get_field_value(field ## _POS, field ## _LEN, src)

static inline struct device *ndev_to_dev(struct net_device *ndev)
{
	return ndev->dev.parent;
}

void xgene_enet_set_ethtool_ops(struct net_device *netdev);

#endif /* __XGENE_ENET_MAIN_H__ */
+24 -13
drivers/net/ethernet/broadcom/genet/bcmgenet.c
··· 739 739 740 740 case GENET_POWER_PASSIVE: 741 741 /* Power down LED */ 742 - bcmgenet_mii_reset(priv->dev); 743 742 if (priv->hw_params->flags & GENET_HAS_EXT) { 744 743 reg = bcmgenet_ext_readl(priv, EXT_EXT_PWR_MGMT); 745 744 reg |= (EXT_PWR_DOWN_PHY | ··· 778 779 } 779 780 780 781 bcmgenet_ext_writel(priv, reg, EXT_EXT_PWR_MGMT); 781 - bcmgenet_mii_reset(priv->dev); 782 + 783 + if (mode == GENET_POWER_PASSIVE) 784 + bcmgenet_mii_reset(priv->dev); 782 785 } 783 786 784 787 /* ioctl handle special commands that are not present in ethtool. */ ··· 1962 1961 static int bcmgenet_wol_resume(struct bcmgenet_priv *priv) 1963 1962 { 1964 1963 /* From WOL-enabled suspend, switch to regular clock */ 1965 - clk_disable_unprepare(priv->clk_wol); 1964 + if (priv->wolopts) 1965 + clk_disable_unprepare(priv->clk_wol); 1966 1966 1967 1967 phy_init_hw(priv->phydev); 1968 1968 /* Speed settings must be restored */ ··· 2166 2164 * disabled no new work will be scheduled. 2167 2165 */ 2168 2166 cancel_work_sync(&priv->bcmgenet_irq_work); 2167 + 2168 + priv->old_pause = -1; 2169 + priv->old_link = -1; 2170 + priv->old_duplex = -1; 2169 2171 } 2170 2172 2171 2173 static int bcmgenet_close(struct net_device *dev) ··· 2539 2533 priv->pdev = pdev; 2540 2534 priv->version = (enum bcmgenet_version)of_id->data; 2541 2535 2536 + priv->clk = devm_clk_get(&priv->pdev->dev, "enet"); 2537 + if (IS_ERR(priv->clk)) 2538 + dev_warn(&priv->pdev->dev, "failed to get enet clock\n"); 2539 + 2540 + if (!IS_ERR(priv->clk)) 2541 + clk_prepare_enable(priv->clk); 2542 + 2542 2543 bcmgenet_set_hw_params(priv); 2543 2544 2544 2545 /* Mii wait queue */ ··· 2554 2541 priv->rx_buf_len = RX_BUF_LENGTH; 2555 2542 INIT_WORK(&priv->bcmgenet_irq_work, bcmgenet_irq_task); 2556 2543 2557 - priv->clk = devm_clk_get(&priv->pdev->dev, "enet"); 2558 - if (IS_ERR(priv->clk)) 2559 - dev_warn(&priv->pdev->dev, "failed to get enet clock\n"); 2560 - 2561 2544 priv->clk_wol = devm_clk_get(&priv->pdev->dev, "enet-wol"); 2562 
2545 if (IS_ERR(priv->clk_wol)) 2563 2546 dev_warn(&priv->pdev->dev, "failed to get enet-wol clock\n"); 2564 - 2565 - if (!IS_ERR(priv->clk)) 2566 - clk_prepare_enable(priv->clk); 2567 2547 2568 2548 err = reset_umac(priv); 2569 2549 if (err) ··· 2617 2611 2618 2612 bcmgenet_netif_stop(dev); 2619 2613 2614 + phy_suspend(priv->phydev); 2615 + 2620 2616 netif_device_detach(dev); 2621 2617 2622 2618 /* Disable MAC receive */ ··· 2669 2661 if (ret) 2670 2662 goto out_clk_disable; 2671 2663 2672 - if (priv->wolopts) 2673 - ret = bcmgenet_wol_resume(priv); 2674 - 2664 + ret = bcmgenet_wol_resume(priv); 2675 2665 if (ret) 2676 2666 goto out_clk_disable; 2677 2667 ··· 2683 2677 reg |= EXT_ENERGY_DET_MASK; 2684 2678 bcmgenet_ext_writel(priv, reg, EXT_EXT_PWR_MGMT); 2685 2679 } 2680 + 2681 + if (priv->wolopts) 2682 + bcmgenet_power_up(priv, GENET_POWER_WOL_MAGIC); 2686 2683 2687 2684 /* Disable RX/TX DMA and flush TX queues */ 2688 2685 dma_ctrl = bcmgenet_dma_disable(priv); ··· 2701 2692 bcmgenet_enable_dma(priv, dma_ctrl); 2702 2693 2703 2694 netif_device_attach(dev); 2695 + 2696 + phy_resume(priv->phydev); 2704 2697 2705 2698 bcmgenet_netif_start(dev); 2706 2699
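The probe() reordering in the bcmgenet.c hunks above reduces to one pattern: acquire the "enet" clock before any register access, warn rather than fail if it is absent, and only ever enable or disable a handle that was actually obtained — the same guard bcmgenet_wol_resume() now applies before clk_disable_unprepare(). A minimal user-space sketch of that guard, using mocked stand-ins for the kernel clk API (`mock_clk`, `enable_if_valid` and friends are illustrative names, not kernel symbols):

```c
#include <assert.h>
#include <stdio.h>

/* Toy stand-ins for the kernel clk API (names and signatures are
 * simplified mocks, not the real <linux/clk.h> interface): a NULL
 * pointer plays the role of an ERR_PTR() return from devm_clk_get(). */
struct mock_clk { int enable_count; };

static int mock_is_err(const struct mock_clk *clk) { return clk == NULL; }
static void mock_enable(struct mock_clk *clk)  { clk->enable_count++; }
static void mock_disable(struct mock_clk *clk) { clk->enable_count--; }

/* The guard pattern from the reordered probe(): warn instead of
 * failing when the clock is missing, and never touch an
 * error-valued handle. */
static void enable_if_valid(struct mock_clk *clk)
{
	if (mock_is_err(clk)) {
		fprintf(stderr, "failed to get enet clock\n");
		return;
	}
	mock_enable(clk);
}

static void disable_if_valid(struct mock_clk *clk)
{
	if (!mock_is_err(clk))
		mock_disable(clk);
}
```

The kernel checks IS_ERR() rather than NULL, but the invariant is the same: every enable on a valid handle is balanced by exactly one disable, and error-valued handles are never dereferenced.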
+10 -4
drivers/net/ethernet/broadcom/genet/bcmmii.c
··· 129 129 cmd_bits |= CMD_RX_PAUSE_IGNORE | CMD_TX_PAUSE_IGNORE; 130 130 } 131 131 132 - if (status_changed) { 132 + if (!status_changed) 133 + return; 134 + 135 + if (phydev->link) { 133 136 reg = bcmgenet_umac_readl(priv, UMAC_CMD); 134 137 reg &= ~((CMD_SPEED_MASK << CMD_SPEED_SHIFT) | 135 138 CMD_HD_EN | ··· 140 137 reg |= cmd_bits; 141 138 bcmgenet_umac_writel(priv, reg, UMAC_CMD); 142 139 143 - phy_print_status(phydev); 144 140 } 141 + 142 + phy_print_status(phydev); 145 143 } 146 144 147 145 void bcmgenet_mii_reset(struct net_device *dev) ··· 307 303 /* In the case of a fixed PHY, the DT node associated 308 304 * to the PHY is the Ethernet MAC DT node. 309 305 */ 310 - if (of_phy_is_fixed_link(dn)) { 306 + if (!priv->phy_dn && of_phy_is_fixed_link(dn)) { 311 307 ret = of_phy_register_fixed_link(dn); 312 308 if (ret) 313 309 return ret; 314 310 315 - priv->phy_dn = dn; 311 + priv->phy_dn = of_node_get(dn); 316 312 } 317 313 318 314 phydev = of_phy_connect(dev, priv->phy_dn, bcmgenet_mii_setup, 0, ··· 448 444 return 0; 449 445 450 446 out: 447 + of_node_put(priv->phy_dn); 451 448 mdiobus_unregister(priv->mii_bus); 452 449 out_free: 453 450 kfree(priv->mii_bus->irq); ··· 460 455 { 461 456 struct bcmgenet_priv *priv = netdev_priv(dev); 462 457 458 + of_node_put(priv->phy_dn); 463 459 mdiobus_unregister(priv->mii_bus); 464 460 kfree(priv->mii_bus->irq); 465 461 mdiobus_free(priv->mii_bus);
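The bcmmii.c hunks fix two reference-count bugs at once: priv->phy_dn now holds a counted reference (of_node_get()), and that reference is dropped (of_node_put()) on both the error-unwind and teardown paths; the added !priv->phy_dn check also keeps a re-attach from leaking a second reference to the fixed-link node. A toy sketch of the invariant, with a plain counter standing in for the device_node kobject reference (all names here are illustrative):

```c
#include <assert.h>
#include <stddef.h>

/* Mocked device-tree node: the counter stands in for the kobject
 * reference that of_node_get()/of_node_put() manage in the kernel. */
struct dt_node { int refcount; };

static struct dt_node *node_get(struct dt_node *n)
{
	if (n)
		n->refcount++;
	return n;
}

static void node_put(struct dt_node *n)
{
	if (n)
		n->refcount--;
}

/* The fixed pattern: store a *counted* reference, take it at most
 * once, and drop it exactly once on every exit path. */
struct priv { struct dt_node *phy_dn; };

static void attach_phy(struct priv *p, struct dt_node *dn)
{
	if (!p->phy_dn)             /* don't leak a ref on re-attach */
		p->phy_dn = node_get(dn);
}

static void detach_phy(struct priv *p)
{
	node_put(p->phy_dn);
	p->phy_dn = NULL;
}
```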
+7 -4
drivers/net/ethernet/chelsio/cxgb4/cxgb4.h
··· 50 50 #include "cxgb4_uld.h" 51 51 52 52 #define T4FW_VERSION_MAJOR 0x01 53 - #define T4FW_VERSION_MINOR 0x09 54 - #define T4FW_VERSION_MICRO 0x17 53 + #define T4FW_VERSION_MINOR 0x0B 54 + #define T4FW_VERSION_MICRO 0x1B 55 55 #define T4FW_VERSION_BUILD 0x00 56 56 57 57 #define T5FW_VERSION_MAJOR 0x01 58 - #define T5FW_VERSION_MINOR 0x09 59 - #define T5FW_VERSION_MICRO 0x17 58 + #define T5FW_VERSION_MINOR 0x0B 59 + #define T5FW_VERSION_MICRO 0x1B 60 60 #define T5FW_VERSION_BUILD 0x00 61 61 62 62 #define CH_WARN(adap, fmt, ...) dev_warn(adap->pdev_dev, fmt, ## __VA_ARGS__) ··· 522 522 struct sge_eth_txq { /* state for an SGE Ethernet Tx queue */ 523 523 struct sge_txq q; 524 524 struct netdev_queue *txq; /* associated netdev TX queue */ 525 + #ifdef CONFIG_CHELSIO_T4_DCB 526 + u8 dcb_prio; /* DCB Priority bound to queue */ 527 + #endif 525 528 unsigned long tso; /* # of TSO requests */ 526 529 unsigned long tx_cso; /* # of Tx checksum offloads */ 527 530 unsigned long vlan_ins; /* # of Tx VLAN insertions */
+194 -74
drivers/net/ethernet/chelsio/cxgb4/cxgb4_dcb.c
··· 20 20 21 21 #include "cxgb4.h" 22 22 23 + /* DCBx version control 24 + */ 25 + char *dcb_ver_array[] = { 26 + "Unknown", 27 + "DCBx-CIN", 28 + "DCBx-CEE 1.01", 29 + "DCBx-IEEE", 30 + "", "", "", 31 + "Auto Negotiated" 32 + }; 33 + 23 34 /* Initialize a port's Data Center Bridging state. Typically used after a 24 35 * Link Down event. 25 36 */ ··· 38 27 { 39 28 struct port_info *pi = netdev2pinfo(dev); 40 29 struct port_dcb_info *dcb = &pi->dcb; 30 + int version_temp = dcb->dcb_version; 41 31 42 32 memset(dcb, 0, sizeof(struct port_dcb_info)); 43 33 dcb->state = CXGB4_DCB_STATE_START; 34 + if (version_temp) 35 + dcb->dcb_version = version_temp; 36 + 37 + netdev_dbg(dev, "%s: Initializing DCB state for port[%d]\n", 38 + __func__, pi->port_id); 39 + } 40 + 41 + void cxgb4_dcb_version_init(struct net_device *dev) 42 + { 43 + struct port_info *pi = netdev2pinfo(dev); 44 + struct port_dcb_info *dcb = &pi->dcb; 45 + 46 + /* Any writes here are only done on kernels that explicitly need 47 + * a specific version, say < 2.6.38 which only support CEE 48 + */ 49 + dcb->dcb_version = FW_PORT_DCB_VER_AUTO; 44 50 } 45 51 46 52 /* Finite State machine for Data Center Bridging. 
47 53 */ 48 54 void cxgb4_dcb_state_fsm(struct net_device *dev, 49 - enum cxgb4_dcb_state_input input) 55 + enum cxgb4_dcb_state_input transition_to) 50 56 { 51 57 struct port_info *pi = netdev2pinfo(dev); 52 58 struct port_dcb_info *dcb = &pi->dcb; 53 59 struct adapter *adap = pi->adapter; 60 + enum cxgb4_dcb_state current_state = dcb->state; 54 61 55 - switch (input) { 56 - case CXGB4_DCB_INPUT_FW_DISABLED: { 57 - /* Firmware tells us it's not doing DCB */ 58 - switch (dcb->state) { 59 - case CXGB4_DCB_STATE_START: { 62 + netdev_dbg(dev, "%s: State change from %d to %d for %s\n", 63 + __func__, dcb->state, transition_to, dev->name); 64 + 65 + switch (current_state) { 66 + case CXGB4_DCB_STATE_START: { 67 + switch (transition_to) { 68 + case CXGB4_DCB_INPUT_FW_DISABLED: { 60 69 /* we're going to use Host DCB */ 61 70 dcb->state = CXGB4_DCB_STATE_HOST; 62 71 dcb->supported = CXGB4_DCBX_HOST_SUPPORT; ··· 84 53 break; 85 54 } 86 55 87 - case CXGB4_DCB_STATE_HOST: { 88 - /* we're alreaady in Host DCB mode */ 89 - break; 90 - } 91 - 92 - default: 93 - goto bad_state_transition; 94 - } 95 - break; 96 - } 97 - 98 - case CXGB4_DCB_INPUT_FW_ENABLED: { 99 - /* Firmware tells us that it is doing DCB */ 100 - switch (dcb->state) { 101 - case CXGB4_DCB_STATE_START: { 56 + case CXGB4_DCB_INPUT_FW_ENABLED: { 102 57 /* we're going to use Firmware DCB */ 103 58 dcb->state = CXGB4_DCB_STATE_FW_INCOMPLETE; 104 59 dcb->supported = CXGB4_DCBX_FW_SUPPORT; 105 60 break; 106 61 } 107 62 108 - case CXGB4_DCB_STATE_FW_INCOMPLETE: 109 - case CXGB4_DCB_STATE_FW_ALLSYNCED: { 110 - /* we're alreaady in firmware DCB mode */ 63 + case CXGB4_DCB_INPUT_FW_INCOMPLETE: { 64 + /* expected transition */ 65 + break; 66 + } 67 + 68 + case CXGB4_DCB_INPUT_FW_ALLSYNCED: { 69 + dcb->state = CXGB4_DCB_STATE_FW_ALLSYNCED; 111 70 break; 112 71 } 113 72 114 73 default: 115 - goto bad_state_transition; 74 + goto bad_state_input; 116 75 } 117 76 break; 118 77 } 119 78 120 - case CXGB4_DCB_INPUT_FW_INCOMPLETE: { 
121 - /* Firmware tells us that its DCB state is incomplete */ 122 - switch (dcb->state) { 123 - case CXGB4_DCB_STATE_FW_INCOMPLETE: { 79 + case CXGB4_DCB_STATE_FW_INCOMPLETE: { 80 + switch (transition_to) { 81 + case CXGB4_DCB_INPUT_FW_ENABLED: { 82 + /* we're already in firmware DCB mode */ 83 + break; 84 + } 85 + 86 + case CXGB4_DCB_INPUT_FW_INCOMPLETE: { 124 87 /* we're already incomplete */ 125 88 break; 126 89 } 127 90 128 - case CXGB4_DCB_STATE_FW_ALLSYNCED: { 91 + case CXGB4_DCB_INPUT_FW_ALLSYNCED: { 92 + dcb->state = CXGB4_DCB_STATE_FW_ALLSYNCED; 93 + dcb->enabled = 1; 94 + linkwatch_fire_event(dev); 95 + break; 96 + } 97 + 98 + default: 99 + goto bad_state_input; 100 + } 101 + break; 102 + } 103 + 104 + case CXGB4_DCB_STATE_FW_ALLSYNCED: { 105 + switch (transition_to) { 106 + case CXGB4_DCB_INPUT_FW_ENABLED: { 107 + /* we're already in firmware DCB mode */ 108 + break; 109 + } 110 + 111 + case CXGB4_DCB_INPUT_FW_INCOMPLETE: { 129 112 /* We were successfully running with firmware DCB but 130 113 * now it's telling us that it's in an "incomplete 131 114 * state. 
We need to reset back to a ground state ··· 152 107 break; 153 108 } 154 109 155 - default: 156 - goto bad_state_transition; 157 - } 158 - break; 159 - } 160 - 161 - case CXGB4_DCB_INPUT_FW_ALLSYNCED: { 162 - /* Firmware tells us that its DCB state is complete */ 163 - switch (dcb->state) { 164 - case CXGB4_DCB_STATE_FW_INCOMPLETE: { 165 - dcb->state = CXGB4_DCB_STATE_FW_ALLSYNCED; 110 + case CXGB4_DCB_INPUT_FW_ALLSYNCED: { 111 + /* we're already all sync'ed 112 + * this is only applicable for IEEE or 113 + * when another VI already completed negotiation 114 + */ 166 115 dcb->enabled = 1; 167 116 linkwatch_fire_event(dev); 168 117 break; 169 118 } 170 119 171 - case CXGB4_DCB_STATE_FW_ALLSYNCED: { 172 - /* we're already all sync'ed */ 120 + default: 121 + goto bad_state_input; 122 + } 123 + break; 124 + } 125 + 126 + case CXGB4_DCB_STATE_HOST: { 127 + switch (transition_to) { 128 + case CXGB4_DCB_INPUT_FW_DISABLED: { 129 + /* we're already in Host DCB mode */ 173 130 break; 174 131 } 175 132 176 133 default: 177 - goto bad_state_transition; 134 + goto bad_state_input; 178 135 } 179 136 break; 180 137 } 181 138 182 139 default: 183 - goto bad_state_input; 140 + goto bad_state_transition; 184 141 } 185 142 return; 186 143 187 144 bad_state_input: 188 145 dev_err(adap->pdev_dev, "cxgb4_dcb_state_fsm: illegal input symbol %d\n", 189 - input); 146 + transition_to); 190 147 return; 191 148 192 149 bad_state_transition: 193 150 dev_err(adap->pdev_dev, "cxgb4_dcb_state_fsm: bad state transition, state = %d, input = %d\n", 194 - dcb->state, input); 151 + current_state, transition_to); 195 152 } 196 153 197 154 /* Handle a DCB/DCBX update message from the firmware. 
··· 218 170 FW_PORT_CMD_ALL_SYNCD) 219 171 ? CXGB4_DCB_STATE_FW_ALLSYNCED 220 172 : CXGB4_DCB_STATE_FW_INCOMPLETE); 173 + 174 + if (dcb->dcb_version != FW_PORT_DCB_VER_UNKNOWN) { 175 + dcb_running_version = FW_PORT_CMD_DCB_VERSION_GET( 176 + be16_to_cpu( 177 + pcmd->u.dcb.control.dcb_version_to_app_state)); 178 + if (dcb_running_version == FW_PORT_DCB_VER_CEE1D01 || 179 + dcb_running_version == FW_PORT_DCB_VER_IEEE) { 180 + dcb->dcb_version = dcb_running_version; 181 + dev_warn(adap->pdev_dev, "Interface %s is running %s\n", 182 + dev->name, 183 + dcb_ver_array[dcb->dcb_version]); 184 + } else { 185 + dev_warn(adap->pdev_dev, 186 + "Something screwed up, requested firmware for %s, but firmware returned %s instead\n", 187 + dcb_ver_array[dcb->dcb_version], 188 + dcb_ver_array[dcb_running_version]); 189 + dcb->dcb_version = FW_PORT_DCB_VER_UNKNOWN; 190 + } 191 + } 221 192 222 193 cxgb4_dcb_state_fsm(dev, input); 223 194 return; ··· 266 199 dcb->pg_num_tcs_supported = fwdcb->pgrate.num_tcs_supported; 267 200 memcpy(dcb->pgrate, &fwdcb->pgrate.pgrate, 268 201 sizeof(dcb->pgrate)); 202 + memcpy(dcb->tsa, &fwdcb->pgrate.tsa, 203 + sizeof(dcb->tsa)); 269 204 dcb->msgs |= CXGB4_DCB_FW_PGRATE; 205 + if (dcb->msgs & CXGB4_DCB_FW_PGID) 206 + IEEE_FAUX_SYNC(dev, dcb); 270 207 break; 271 208 272 209 case FW_PORT_DCB_TYPE_PRIORATE: ··· 283 212 dcb->pfcen = fwdcb->pfc.pfcen; 284 213 dcb->pfc_num_tcs_supported = fwdcb->pfc.max_pfc_tcs; 285 214 dcb->msgs |= CXGB4_DCB_FW_PFC; 215 + IEEE_FAUX_SYNC(dev, dcb); 286 216 break; 287 217 288 218 case FW_PORT_DCB_TYPE_APP_ID: { ··· 292 220 struct app_priority *ap = &dcb->app_priority[idx]; 293 221 294 222 struct dcb_app app = { 295 - .selector = fwap->sel_field, 296 223 .protocol = be16_to_cpu(fwap->protocolid), 297 - .priority = fwap->user_prio_map, 298 224 }; 299 225 int err; 300 226 301 - err = dcb_setapp(dev, &app); 227 + /* Convert from firmware format to relevant format 228 + * when using app selector 229 + */ 230 + if 
(dcb->dcb_version == FW_PORT_DCB_VER_IEEE) { 231 + app.selector = (fwap->sel_field + 1); 232 + app.priority = ffs(fwap->user_prio_map) - 1; 233 + err = dcb_ieee_setapp(dev, &app); 234 + IEEE_FAUX_SYNC(dev, dcb); 235 + } else { 236 + /* Default is CEE */ 237 + app.selector = !!(fwap->sel_field); 238 + app.priority = fwap->user_prio_map; 239 + err = dcb_setapp(dev, &app); 240 + } 241 + 302 242 if (err) 303 243 dev_err(adap->pdev_dev, 304 244 "Failed DCB Set Application Priority: sel=%d, prot=%d, prio=%d, err=%d\n", ··· 492 408 if (err != FW_PORT_DCB_CFG_SUCCESS) { 493 409 dev_err(adap->pdev_dev, "DCB read PGRATE failed with %d\n", 494 410 -err); 495 - } else { 496 - *bw_per = pcmd.u.dcb.pgrate.pgrate[pgid]; 411 + return; 497 412 } 413 + 414 + *bw_per = pcmd.u.dcb.pgrate.pgrate[pgid]; 498 415 } 499 416 500 417 static void cxgb4_getpgbwgcfg_tx(struct net_device *dev, int pgid, u8 *bw_per) ··· 722 637 return err; 723 638 } 724 639 if (be16_to_cpu(pcmd.u.dcb.app_priority.protocolid) == app_id) 725 - return pcmd.u.dcb.app_priority.user_prio_map; 640 + if (pcmd.u.dcb.app_priority.sel_field == app_idtype) 641 + return pcmd.u.dcb.app_priority.user_prio_map; 726 642 727 643 /* exhausted app list */ 728 644 if (!pcmd.u.dcb.app_priority.protocolid) ··· 743 657 744 658 /* Write a new Application User Priority Map for the specified Application ID 745 659 */ 746 - static int cxgb4_setapp(struct net_device *dev, u8 app_idtype, u16 app_id, 747 - u8 app_prio) 660 + static int __cxgb4_setapp(struct net_device *dev, u8 app_idtype, u16 app_id, 661 + u8 app_prio) 748 662 { 749 663 struct fw_port_cmd pcmd; 750 664 struct port_info *pi = netdev2pinfo(dev); ··· 758 672 /* DCB info gets thrown away on link up */ 759 673 if (!netif_carrier_ok(dev)) 760 674 return -ENOLINK; 761 - 762 - if (app_idtype != DCB_APP_IDTYPE_ETHTYPE && 763 - app_idtype != DCB_APP_IDTYPE_PORTNUM) 764 - return -EINVAL; 765 675 766 676 for (i = 0; i < CXGB4_MAX_DCBX_APP_SUPPORTED; i++) { 767 677 
INIT_PORT_DCB_READ_LOCAL_CMD(pcmd, pi->port_id); ··· 807 725 return 0; 808 726 } 809 727 728 + /* Priority for CEE inside dcb_app is bitmask, with 0 being an invalid value */ 729 + static int cxgb4_setapp(struct net_device *dev, u8 app_idtype, u16 app_id, 730 + u8 app_prio) 731 + { 732 + int ret; 733 + struct dcb_app app = { 734 + .selector = app_idtype, 735 + .protocol = app_id, 736 + .priority = app_prio, 737 + }; 738 + 739 + if (app_idtype != DCB_APP_IDTYPE_ETHTYPE && 740 + app_idtype != DCB_APP_IDTYPE_PORTNUM) 741 + return -EINVAL; 742 + 743 + /* Convert app_idtype to a format that firmware understands */ 744 + ret = __cxgb4_setapp(dev, app_idtype == DCB_APP_IDTYPE_ETHTYPE ? 745 + app_idtype : 3, app_id, app_prio); 746 + if (ret) 747 + return ret; 748 + 749 + return dcb_setapp(dev, &app); 750 + } 751 + 810 752 /* Return whether IEEE Data Center Bridging has been negotiated. 811 753 */ 812 754 static inline int cxgb4_ieee_negotiation_complete(struct net_device *dev) ··· 844 738 845 739 /* Fill in the Application User Priority Map associated with the 846 740 * specified Application. 741 + * Priority for IEEE dcb_app is an integer, with 0 being a valid value 847 742 */ 848 743 static int cxgb4_ieee_getapp(struct net_device *dev, struct dcb_app *app) 849 744 { ··· 855 748 if (!(app->selector && app->protocol)) 856 749 return -EINVAL; 857 750 858 - prio = dcb_getapp(dev, app); 859 - if (prio == 0) { 860 - /* If app doesn't exist in dcb_app table, try firmware 861 - * directly. 862 - */ 863 - prio = __cxgb4_getapp(dev, app->selector, app->protocol, 0); 864 - } 751 + /* Try querying firmware first, use firmware format */ 752 + prio = __cxgb4_getapp(dev, app->selector - 1, app->protocol, 0); 865 753 866 - app->priority = prio; 754 + if (prio < 0) 755 + prio = dcb_ieee_getapp_mask(dev, app); 756 + 757 + app->priority = ffs(prio) - 1; 867 758 return 0; 868 759 } 869 760 870 - /* Write a new Application User Priority Map for the specified App id. 
*/ 761 + /* Write a new Application User Priority Map for the specified Application ID. 762 + * Priority for IEEE dcb_app is an integer, with 0 being a valid value 763 + */ 871 764 static int cxgb4_ieee_setapp(struct net_device *dev, struct dcb_app *app) 872 765 { 766 + int ret; 767 + 873 768 if (!cxgb4_ieee_negotiation_complete(dev)) 874 769 return -EINVAL; 875 - if (!(app->selector && app->protocol && app->priority)) 770 + if (!(app->selector && app->protocol)) 876 771 return -EINVAL; 877 772 878 - cxgb4_setapp(dev, app->selector, app->protocol, app->priority); 879 - return dcb_setapp(dev, app); 773 + if (!(app->selector > IEEE_8021QAZ_APP_SEL_ETHERTYPE && 774 + app->selector < IEEE_8021QAZ_APP_SEL_ANY)) 775 + return -EINVAL; 776 + 777 + /* change selector to a format that firmware understands */ 778 + ret = __cxgb4_setapp(dev, app->selector - 1, app->protocol, 779 + (1 << app->priority)); 780 + if (ret) 781 + return ret; 782 + 783 + return dcb_ieee_setapp(dev, app); 880 784 } 881 785 882 786 /* Return our DCBX parameters. ··· 912 794 != dcb_request) 913 795 return 1; 914 796 915 - /* Can't set DCBX capabilities if DCBX isn't enabled. */ 916 - if (!pi->dcb.state) 797 + /* Can't enable DCB if we haven't successfully negotiated it. 798 + */ 799 + if (pi->dcb.state != CXGB4_DCB_STATE_FW_ALLSYNCED) 917 800 return 1; 918 801 919 802 /* There's currently no mechanism to allow for the firmware DCBX ··· 993 874 table[i].selector = pcmd.u.dcb.app_priority.sel_field; 994 875 table[i].protocol = 995 876 be16_to_cpu(pcmd.u.dcb.app_priority.protocolid); 996 - table[i].priority = pcmd.u.dcb.app_priority.user_prio_map; 877 + table[i].priority = 878 + ffs(pcmd.u.dcb.app_priority.user_prio_map) - 1; 997 879 } 998 880 return err; 999 881 }
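The cxgb4_dcb_state_fsm() rewrite above inverts the switch nesting: the outer switch is now on the current state and the inner one on the input, so each legal (state, input) pair appears exactly once and anything else falls through to one of two error labels. A compilable miniature in the same shape (state and input names echo the driver's, but this is an illustrative sketch, not the driver code):

```c
#include <assert.h>

/* Miniature of the rewritten cxgb4 DCB state machine: dispatch on the
 * current state first, then on the input. */
enum state { ST_START, ST_FW_INCOMPLETE, ST_FW_ALLSYNCED, ST_HOST };
enum input { IN_FW_DISABLED, IN_FW_ENABLED, IN_FW_INCOMPLETE,
	     IN_FW_ALLSYNCED };

/* Returns 0 on a legal transition, -1 for an illegal input in the
 * given state (the driver's bad_state_input label). */
static int fsm_step(enum state *st, enum input in)
{
	switch (*st) {
	case ST_START:
		switch (in) {
		case IN_FW_DISABLED:   *st = ST_HOST;          return 0;
		case IN_FW_ENABLED:    *st = ST_FW_INCOMPLETE; return 0;
		case IN_FW_INCOMPLETE: /* expected, no-op */   return 0;
		case IN_FW_ALLSYNCED:  *st = ST_FW_ALLSYNCED;  return 0;
		}
		return -1;
	case ST_FW_INCOMPLETE:
		switch (in) {
		case IN_FW_ENABLED:    /* already in FW mode */ return 0;
		case IN_FW_INCOMPLETE: /* still incomplete */   return 0;
		case IN_FW_ALLSYNCED:  *st = ST_FW_ALLSYNCED;   return 0;
		default:                                        return -1;
		}
	case ST_FW_ALLSYNCED:
		switch (in) {
		case IN_FW_ENABLED:    /* already in FW mode */  return 0;
		case IN_FW_INCOMPLETE: *st = ST_FW_INCOMPLETE;   return 0;
		case IN_FW_ALLSYNCED:  /* already synced */      return 0;
		default:                                         return -1;
		}
	case ST_HOST:
		switch (in) {
		case IN_FW_DISABLED:   /* already host mode */   return 0;
		default:                                         return -1;
		}
	}
	return -1; /* unreachable for valid states */
}
```

Nesting state-first means a new state only requires one new outer case, and the compiler's switch coverage warnings point straight at the pairs that were forgotten.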
+10
drivers/net/ethernet/chelsio/cxgb4/cxgb4_dcb.h
··· 63 63 #define INIT_PORT_DCB_WRITE_CMD(__pcmd, __port) \ 64 64 INIT_PORT_DCB_CMD(__pcmd, __port, EXEC, FW_PORT_ACTION_L2_DCB_CFG) 65 65 66 + #define IEEE_FAUX_SYNC(__dev, __dcb) \ 67 + do { \ 68 + if ((__dcb)->dcb_version == FW_PORT_DCB_VER_IEEE) \ 69 + cxgb4_dcb_state_fsm((__dev), \ 70 + CXGB4_DCB_STATE_FW_ALLSYNCED); \ 71 + } while (0) 72 + 66 73 /* States we can be in for a port's Data Center Bridging. 67 74 */ 68 75 enum cxgb4_dcb_state { ··· 115 108 * Native Endian format). 116 109 */ 117 110 u32 pgid; /* Priority Group[0..7] */ 111 + u8 dcb_version; /* Running DCBx version */ 118 112 u8 pfcen; /* Priority Flow Control[0..7] */ 119 113 u8 pg_num_tcs_supported; /* max PG Traffic Classes */ 120 114 u8 pfc_num_tcs_supported; /* max PFC Traffic Classes */ 121 115 u8 pgrate[8]; /* Priority Group Rate[0..7] */ 122 116 u8 priorate[8]; /* Priority Rate[0..7] */ 117 + u8 tsa[8]; /* TSA Algorithm[0..7] */ 123 118 struct app_priority { /* Application Information */ 124 119 u8 user_prio_map; /* Priority Map bitfield */ 125 120 u8 sel_field; /* Protocol ID interpretation */ ··· 130 121 }; 131 122 132 123 void cxgb4_dcb_state_init(struct net_device *); 124 + void cxgb4_dcb_version_init(struct net_device *); 133 125 void cxgb4_dcb_state_fsm(struct net_device *, enum cxgb4_dcb_state_input); 134 126 void cxgb4_dcb_handle_fw_update(struct adapter *, const struct fw_port_cmd *); 135 127 void cxgb4_dcb_set_caps(struct adapter *, const struct fw_port_cmd *);
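Behind IEEE_FAUX_SYNC and the getapp/setapp changes sits one conversion rule: firmware (and CEE) carry an application priority as a bitmask, while the IEEE dcb_app interface carries it as a plain integer, so the driver translates with `ffs(map) - 1` on the way out and `1 << prio` on the way in. A standalone sketch of that round-trip (the helper names are illustrative, not driver symbols):

```c
#include <assert.h>
#include <strings.h>   /* ffs() */

/* CEE/firmware format: bit N set means priority N.
 * IEEE dcb_app format: the priority as a plain integer. */
static int prio_from_mask(unsigned char user_prio_map)
{
	/* ffs() returns 1-based position of the lowest set bit,
	 * or 0 for an empty mask, so -1 flags "no priority". */
	return ffs(user_prio_map) - 1;
}

static unsigned char mask_from_prio(int prio)
{
	return (unsigned char)(1 << prio);
}
```

Note the asymmetry the driver comments call out: 0 is an *invalid* value in the CEE bitmask form but a perfectly *valid* priority in the IEEE integer form.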
+2
drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
··· 522 522 dev_err(adap->pdev_dev, 523 523 "Can't %s DCB Priority on port %d, TX Queue %d: err=%d\n", 524 524 enable ? "set" : "unset", pi->port_id, i, -err); 525 + else 526 + txq->dcb_prio = value; 525 527 } 526 528 } 527 529 #endif /* CONFIG_CHELSIO_T4_DCB */
+11 -1
drivers/net/ethernet/chelsio/cxgb4/t4fw_api.h
··· 1629 1629 FW_PORT_L2_CTLBF_TXIPG = 0x20 1630 1630 }; 1631 1631 1632 + enum fw_port_dcb_versions { 1633 + FW_PORT_DCB_VER_UNKNOWN, 1634 + FW_PORT_DCB_VER_CEE1D0, 1635 + FW_PORT_DCB_VER_CEE1D01, 1636 + FW_PORT_DCB_VER_IEEE, 1637 + FW_PORT_DCB_VER_AUTO = 7 1638 + }; 1639 + 1632 1640 enum fw_port_dcb_cfg { 1633 1641 FW_PORT_DCB_CFG_PG = 0x01, 1634 1642 FW_PORT_DCB_CFG_PFC = 0x02, ··· 1717 1709 __u8 r10_lo[5]; 1718 1710 __u8 num_tcs_supported; 1719 1711 __u8 pgrate[8]; 1712 + __u8 tsa[8]; 1720 1713 } pgrate; 1721 1714 struct fw_port_dcb_priorate { 1722 1715 __u8 type; ··· 1744 1735 struct fw_port_dcb_control { 1745 1736 __u8 type; 1746 1737 __u8 all_syncd_pkd; 1747 - __be16 pfc_state_to_app_state; 1738 + __be16 dcb_version_to_app_state; 1748 1739 __be32 r11; 1749 1740 __be64 r12; 1750 1741 } control; ··· 1787 1778 #define FW_PORT_CMD_DCBXDIS (1U << 7) 1788 1779 #define FW_PORT_CMD_APPLY (1U << 7) 1789 1780 #define FW_PORT_CMD_ALL_SYNCD (1U << 7) 1781 + #define FW_PORT_CMD_DCB_VERSION_GET(x) (((x) >> 8) & 0xf) 1790 1782 1791 1783 #define FW_PORT_CMD_PPPEN(x) ((x) << 31) 1792 1784 #define FW_PORT_CMD_TPSRC(x) ((x) << 28)
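The new FW_PORT_CMD_DCB_VERSION_GET() accessor pulls the 4-bit DCBx version out of bits 8..11 of the (already byte-swapped) dcb_version_to_app_state word, yielding one of the fw_port_dcb_versions codes. A self-contained sketch of the extraction — the macro and the enum values mirror the diff, while `extract_version()` is an illustrative stand-in:

```c
#include <assert.h>
#include <stdint.h>

/* Same shift-and-mask accessor as the t4fw_api.h addition. */
#define FW_PORT_CMD_DCB_VERSION_GET(x) (((x) >> 8) & 0xf)

/* Version codes from enum fw_port_dcb_versions in the diff. */
enum { VER_UNKNOWN = 0, VER_CEE1D0 = 1, VER_CEE1D01 = 2, VER_IEEE = 3,
       VER_AUTO = 7 };

/* The argument is assumed host-order, i.e. be16_to_cpu() has already
 * been applied as in cxgb4_dcb_handle_fw_update(). */
static int extract_version(uint16_t host_order_word)
{
	return FW_PORT_CMD_DCB_VERSION_GET(host_order_word);
}
```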
+46 -46
drivers/net/ethernet/davicom/dm9000.c
··· 93 93 }; 94 94 95 95 /* Structure/enum declaration ------------------------------- */ 96 - typedef struct board_info { 96 + struct board_info { 97 97 98 98 void __iomem *io_addr; /* Register I/O base address */ 99 99 void __iomem *io_data; /* Data I/O address */ ··· 141 141 u32 wake_state; 142 142 143 143 int ip_summed; 144 - } board_info_t; 144 + }; 145 145 146 146 /* debug code */ 147 147 ··· 151 151 } \ 152 152 } while (0) 153 153 154 - static inline board_info_t *to_dm9000_board(struct net_device *dev) 154 + static inline struct board_info *to_dm9000_board(struct net_device *dev) 155 155 { 156 156 return netdev_priv(dev); 157 157 } ··· 162 162 * Read a byte from I/O port 163 163 */ 164 164 static u8 165 - ior(board_info_t *db, int reg) 165 + ior(struct board_info *db, int reg) 166 166 { 167 167 writeb(reg, db->io_addr); 168 168 return readb(db->io_data); ··· 173 173 */ 174 174 175 175 static void 176 - iow(board_info_t *db, int reg, int value) 176 + iow(struct board_info *db, int reg, int value) 177 177 { 178 178 writeb(reg, db->io_addr); 179 179 writeb(value, db->io_data); 180 180 } 181 181 182 182 static void 183 - dm9000_reset(board_info_t *db) 183 + dm9000_reset(struct board_info *db) 184 184 { 185 185 dev_dbg(db->dev, "resetting device\n"); 186 186 ··· 272 272 * Sleep, either by using msleep() or if we are suspending, then 273 273 * use mdelay() to sleep. 
274 274 */ 275 - static void dm9000_msleep(board_info_t *db, unsigned int ms) 275 + static void dm9000_msleep(struct board_info *db, unsigned int ms) 276 276 { 277 277 if (db->in_suspend || db->in_timeout) 278 278 mdelay(ms); ··· 284 284 static int 285 285 dm9000_phy_read(struct net_device *dev, int phy_reg_unused, int reg) 286 286 { 287 - board_info_t *db = netdev_priv(dev); 287 + struct board_info *db = netdev_priv(dev); 288 288 unsigned long flags; 289 289 unsigned int reg_save; 290 290 int ret; ··· 330 330 dm9000_phy_write(struct net_device *dev, 331 331 int phyaddr_unused, int reg, int value) 332 332 { 333 - board_info_t *db = netdev_priv(dev); 333 + struct board_info *db = netdev_priv(dev); 334 334 unsigned long flags; 335 335 unsigned long reg_save; 336 336 ··· 408 408 } 409 409 } 410 410 411 - static void dm9000_schedule_poll(board_info_t *db) 411 + static void dm9000_schedule_poll(struct board_info *db) 412 412 { 413 413 if (db->type == TYPE_DM9000E) 414 414 schedule_delayed_work(&db->phy_poll, HZ * 2); ··· 416 416 417 417 static int dm9000_ioctl(struct net_device *dev, struct ifreq *req, int cmd) 418 418 { 419 - board_info_t *dm = to_dm9000_board(dev); 419 + struct board_info *dm = to_dm9000_board(dev); 420 420 421 421 if (!netif_running(dev)) 422 422 return -EINVAL; ··· 425 425 } 426 426 427 427 static unsigned int 428 - dm9000_read_locked(board_info_t *db, int reg) 428 + dm9000_read_locked(struct board_info *db, int reg) 429 429 { 430 430 unsigned long flags; 431 431 unsigned int ret; ··· 437 437 return ret; 438 438 } 439 439 440 - static int dm9000_wait_eeprom(board_info_t *db) 440 + static int dm9000_wait_eeprom(struct board_info *db) 441 441 { 442 442 unsigned int status; 443 443 int timeout = 8; /* wait max 8msec */ ··· 474 474 * Read a word data from EEPROM 475 475 */ 476 476 static void 477 - dm9000_read_eeprom(board_info_t *db, int offset, u8 *to) 477 + dm9000_read_eeprom(struct board_info *db, int offset, u8 *to) 478 478 { 479 479 unsigned long 
flags; 480 480 ··· 514 514 * Write a word data to SROM 515 515 */ 516 516 static void 517 - dm9000_write_eeprom(board_info_t *db, int offset, u8 *data) 517 + dm9000_write_eeprom(struct board_info *db, int offset, u8 *data) 518 518 { 519 519 unsigned long flags; 520 520 ··· 546 546 static void dm9000_get_drvinfo(struct net_device *dev, 547 547 struct ethtool_drvinfo *info) 548 548 { 549 - board_info_t *dm = to_dm9000_board(dev); 549 + struct board_info *dm = to_dm9000_board(dev); 550 550 551 551 strlcpy(info->driver, CARDNAME, sizeof(info->driver)); 552 552 strlcpy(info->version, DRV_VERSION, sizeof(info->version)); ··· 556 556 557 557 static u32 dm9000_get_msglevel(struct net_device *dev) 558 558 { 559 - board_info_t *dm = to_dm9000_board(dev); 559 + struct board_info *dm = to_dm9000_board(dev); 560 560 561 561 return dm->msg_enable; 562 562 } 563 563 564 564 static void dm9000_set_msglevel(struct net_device *dev, u32 value) 565 565 { 566 - board_info_t *dm = to_dm9000_board(dev); 566 + struct board_info *dm = to_dm9000_board(dev); 567 567 568 568 dm->msg_enable = value; 569 569 } 570 570 571 571 static int dm9000_get_settings(struct net_device *dev, struct ethtool_cmd *cmd) 572 572 { 573 - board_info_t *dm = to_dm9000_board(dev); 573 + struct board_info *dm = to_dm9000_board(dev); 574 574 575 575 mii_ethtool_gset(&dm->mii, cmd); 576 576 return 0; ··· 578 578 579 579 static int dm9000_set_settings(struct net_device *dev, struct ethtool_cmd *cmd) 580 580 { 581 - board_info_t *dm = to_dm9000_board(dev); 581 + struct board_info *dm = to_dm9000_board(dev); 582 582 583 583 return mii_ethtool_sset(&dm->mii, cmd); 584 584 } 585 585 586 586 static int dm9000_nway_reset(struct net_device *dev) 587 587 { 588 - board_info_t *dm = to_dm9000_board(dev); 588 + struct board_info *dm = to_dm9000_board(dev); 589 589 return mii_nway_restart(&dm->mii); 590 590 } 591 591 592 592 static int dm9000_set_features(struct net_device *dev, 593 593 netdev_features_t features) 594 594 { 595 - 
board_info_t *dm = to_dm9000_board(dev); 595 + struct board_info *dm = to_dm9000_board(dev); 596 596 netdev_features_t changed = dev->features ^ features; 597 597 unsigned long flags; 598 598 ··· 608 608 609 609 static u32 dm9000_get_link(struct net_device *dev) 610 610 { 611 - board_info_t *dm = to_dm9000_board(dev); 611 + struct board_info *dm = to_dm9000_board(dev); 612 612 u32 ret; 613 613 614 614 if (dm->flags & DM9000_PLATF_EXT_PHY) ··· 629 629 static int dm9000_get_eeprom(struct net_device *dev, 630 630 struct ethtool_eeprom *ee, u8 *data) 631 631 { 632 - board_info_t *dm = to_dm9000_board(dev); 632 + struct board_info *dm = to_dm9000_board(dev); 633 633 int offset = ee->offset; 634 634 int len = ee->len; 635 635 int i; ··· 653 653 static int dm9000_set_eeprom(struct net_device *dev, 654 654 struct ethtool_eeprom *ee, u8 *data) 655 655 { 656 - board_info_t *dm = to_dm9000_board(dev); 656 + struct board_info *dm = to_dm9000_board(dev); 657 657 int offset = ee->offset; 658 658 int len = ee->len; 659 659 int done; ··· 691 691 692 692 static void dm9000_get_wol(struct net_device *dev, struct ethtool_wolinfo *w) 693 693 { 694 - board_info_t *dm = to_dm9000_board(dev); 694 + struct board_info *dm = to_dm9000_board(dev); 695 695 696 696 memset(w, 0, sizeof(struct ethtool_wolinfo)); 697 697 ··· 702 702 703 703 static int dm9000_set_wol(struct net_device *dev, struct ethtool_wolinfo *w) 704 704 { 705 - board_info_t *dm = to_dm9000_board(dev); 705 + struct board_info *dm = to_dm9000_board(dev); 706 706 unsigned long flags; 707 707 u32 opts = w->wolopts; 708 708 u32 wcr = 0; ··· 752 752 .set_eeprom = dm9000_set_eeprom, 753 753 }; 754 754 755 - static void dm9000_show_carrier(board_info_t *db, 755 + static void dm9000_show_carrier(struct board_info *db, 756 756 unsigned carrier, unsigned nsr) 757 757 { 758 758 int lpa; ··· 775 775 dm9000_poll_work(struct work_struct *w) 776 776 { 777 777 struct delayed_work *dw = to_delayed_work(w); 778 - board_info_t *db = 
container_of(dw, board_info_t, phy_poll); 778 + struct board_info *db = container_of(dw, struct board_info, phy_poll); 779 779 struct net_device *ndev = db->ndev; 780 780 781 781 if (db->flags & DM9000_PLATF_SIMPLE_PHY && ··· 843 843 static void 844 844 dm9000_hash_table_unlocked(struct net_device *dev) 845 845 { 846 - board_info_t *db = netdev_priv(dev); 846 + struct board_info *db = netdev_priv(dev); 847 847 struct netdev_hw_addr *ha; 848 848 int i, oft; 849 849 u32 hash_val; ··· 879 879 static void 880 880 dm9000_hash_table(struct net_device *dev) 881 881 { 882 - board_info_t *db = netdev_priv(dev); 882 + struct board_info *db = netdev_priv(dev); 883 883 unsigned long flags; 884 884 885 885 spin_lock_irqsave(&db->lock, flags); ··· 888 888 } 889 889 890 890 static void 891 - dm9000_mask_interrupts(board_info_t *db) 891 + dm9000_mask_interrupts(struct board_info *db) 892 892 { 893 893 iow(db, DM9000_IMR, IMR_PAR); 894 894 } 895 895 896 896 static void 897 - dm9000_unmask_interrupts(board_info_t *db) 897 + dm9000_unmask_interrupts(struct board_info *db) 898 898 { 899 899 iow(db, DM9000_IMR, db->imr_all); 900 900 } ··· 905 905 static void 906 906 dm9000_init_dm9000(struct net_device *dev) 907 907 { 908 - board_info_t *db = netdev_priv(dev); 908 + struct board_info *db = netdev_priv(dev); 909 909 unsigned int imr; 910 910 unsigned int ncr; 911 911 ··· 970 970 /* Our watchdog timed out. Called by the networking layer */ 971 971 static void dm9000_timeout(struct net_device *dev) 972 972 { 973 - board_info_t *db = netdev_priv(dev); 973 + struct board_info *db = netdev_priv(dev); 974 974 u8 reg_save; 975 975 unsigned long flags; 976 976 ··· 996 996 int ip_summed, 997 997 u16 pkt_len) 998 998 { 999 - board_info_t *dm = to_dm9000_board(dev); 999 + struct board_info *dm = to_dm9000_board(dev); 1000 1000 1001 1001 /* The DM9000 is not smart enough to leave fragmented packets alone. 
*/ 1002 1002 if (dm->ip_summed != ip_summed) { ··· 1023 1023 dm9000_start_xmit(struct sk_buff *skb, struct net_device *dev) 1024 1024 { 1025 1025 unsigned long flags; 1026 - board_info_t *db = netdev_priv(dev); 1026 + struct board_info *db = netdev_priv(dev); 1027 1027 1028 1028 dm9000_dbg(db, 3, "%s:\n", __func__); 1029 1029 ··· 1062 1062 * receive the packet to upper layer, free the transmitted packet 1063 1063 */ 1064 1064 1065 - static void dm9000_tx_done(struct net_device *dev, board_info_t *db) 1065 + static void dm9000_tx_done(struct net_device *dev, struct board_info *db) 1066 1066 { 1067 1067 int tx_status = ior(db, DM9000_NSR); /* Got TX status */ 1068 1068 ··· 1094 1094 static void 1095 1095 dm9000_rx(struct net_device *dev) 1096 1096 { 1097 - board_info_t *db = netdev_priv(dev); 1097 + struct board_info *db = netdev_priv(dev); 1098 1098 struct dm9000_rxhdr rxhdr; 1099 1099 struct sk_buff *skb; 1100 1100 u8 rxbyte, *rdptr; ··· 1196 1196 static irqreturn_t dm9000_interrupt(int irq, void *dev_id) 1197 1197 { 1198 1198 struct net_device *dev = dev_id; 1199 - board_info_t *db = netdev_priv(dev); 1199 + struct board_info *db = netdev_priv(dev); 1200 1200 int int_status; 1201 1201 unsigned long flags; 1202 1202 u8 reg_save; ··· 1246 1246 static irqreturn_t dm9000_wol_interrupt(int irq, void *dev_id) 1247 1247 { 1248 1248 struct net_device *dev = dev_id; 1249 - board_info_t *db = netdev_priv(dev); 1249 + struct board_info *db = netdev_priv(dev); 1250 1250 unsigned long flags; 1251 1251 unsigned nsr, wcr; 1252 1252 ··· 1296 1296 static int 1297 1297 dm9000_open(struct net_device *dev) 1298 1298 { 1299 - board_info_t *db = netdev_priv(dev); 1299 + struct board_info *db = netdev_priv(dev); 1300 1300 unsigned long irqflags = db->irq_res->flags & IRQF_TRIGGER_MASK; 1301 1301 1302 1302 if (netif_msg_ifup(db)) ··· 1342 1342 static void 1343 1343 dm9000_shutdown(struct net_device *dev) 1344 1344 { 1345 - board_info_t *db = netdev_priv(dev); 1345 + struct board_info *db 
= netdev_priv(dev); 1346 1346 1347 1347 /* RESET device */ 1348 1348 dm9000_phy_write(dev, 0, MII_BMCR, BMCR_RESET); /* PHY RESET */ ··· 1358 1358 static int 1359 1359 dm9000_stop(struct net_device *ndev) 1360 1360 { 1361 - board_info_t *db = netdev_priv(ndev); 1361 + struct board_info *db = netdev_priv(ndev); 1362 1362 1363 1363 if (netif_msg_ifdown(db)) 1364 1364 dev_dbg(db->dev, "shutting down %s\n", ndev->name); ··· 1681 1681 { 1682 1682 struct platform_device *pdev = to_platform_device(dev); 1683 1683 struct net_device *ndev = platform_get_drvdata(pdev); 1684 - board_info_t *db; 1684 + struct board_info *db; 1685 1685 1686 1686 if (ndev) { 1687 1687 db = netdev_priv(ndev); ··· 1704 1704 { 1705 1705 struct platform_device *pdev = to_platform_device(dev); 1706 1706 struct net_device *ndev = platform_get_drvdata(pdev); 1707 - board_info_t *db = netdev_priv(ndev); 1707 + struct board_info *db = netdev_priv(ndev); 1708 1708 1709 1709 if (ndev) { 1710 1710 if (netif_running(ndev)) {
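The dm9000 hunks are a mechanical rename of the `board_info_t` typedef to plain `struct board_info`; kernel coding style discourages typedefs for structures. The access pattern around the rename is unchanged: the driver-private struct lives in memory that `alloc_etherdev()` reserves directly behind `struct net_device`. A minimal userspace sketch of that layout (toy types and names, not the real `<linux/netdevice.h>` definitions; the real `netdev_priv()` also rounds the offset up to a cache-line boundary, omitted here):

```c
#include <stddef.h>
#include <stdlib.h>

/* Toy stand-ins for the kernel types involved. */
struct net_device {
    int ifindex;
    /* driver-private area follows the struct, as alloc_etherdev() arranges */
};

struct board_info {
    int msg_enable;
};

/* Userspace analog of netdev_priv(): the private area sits right
 * after the net_device header (alignment padding omitted). */
static void *netdev_priv_toy(struct net_device *dev)
{
    return (char *)dev + sizeof(struct net_device);
}

static struct net_device *alloc_dev_with_priv(size_t priv_size)
{
    return calloc(1, sizeof(struct net_device) + priv_size);
}

/* Round-trip through the private area, mirroring the
 * "struct board_info *db = netdev_priv(dev)" idiom in the diff. */
static int priv_demo(void)
{
    struct net_device *dev = alloc_dev_with_priv(sizeof(struct board_info));
    struct board_info *db = netdev_priv_toy(dev);
    int v;

    db->msg_enable = 7;
    v = ((struct board_info *)netdev_priv_toy(dev))->msg_enable;
    free(dev);
    return v;
}
```

The rename is source-only: the struct's size and layout are identical, so no behavior changes.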
+1
drivers/net/ethernet/freescale/fec.h
··· 310 310 int mii_timeout; 311 311 uint phy_speed; 312 312 phy_interface_t phy_interface; 313 + struct device_node *phy_node; 313 314 int link; 314 315 int full_duplex; 315 316 int speed;
+56 -22
drivers/net/ethernet/freescale/fec_main.c
··· 52 52 #include <linux/of.h> 53 53 #include <linux/of_device.h> 54 54 #include <linux/of_gpio.h> 55 + #include <linux/of_mdio.h> 55 56 #include <linux/of_net.h> 56 57 #include <linux/regulator/consumer.h> 57 58 #include <linux/if_vlan.h> ··· 1649 1648 1650 1649 fep->phy_dev = NULL; 1651 1650 1652 - /* check for attached phy */ 1653 - for (phy_id = 0; (phy_id < PHY_MAX_ADDR); phy_id++) { 1654 - if ((fep->mii_bus->phy_mask & (1 << phy_id))) 1655 - continue; 1656 - if (fep->mii_bus->phy_map[phy_id] == NULL) 1657 - continue; 1658 - if (fep->mii_bus->phy_map[phy_id]->phy_id == 0) 1659 - continue; 1660 - if (dev_id--) 1661 - continue; 1662 - strncpy(mdio_bus_id, fep->mii_bus->id, MII_BUS_ID_SIZE); 1663 - break; 1651 + if (fep->phy_node) { 1652 + phy_dev = of_phy_connect(ndev, fep->phy_node, 1653 + &fec_enet_adjust_link, 0, 1654 + fep->phy_interface); 1655 + } else { 1656 + /* check for attached phy */ 1657 + for (phy_id = 0; (phy_id < PHY_MAX_ADDR); phy_id++) { 1658 + if ((fep->mii_bus->phy_mask & (1 << phy_id))) 1659 + continue; 1660 + if (fep->mii_bus->phy_map[phy_id] == NULL) 1661 + continue; 1662 + if (fep->mii_bus->phy_map[phy_id]->phy_id == 0) 1663 + continue; 1664 + if (dev_id--) 1665 + continue; 1666 + strncpy(mdio_bus_id, fep->mii_bus->id, MII_BUS_ID_SIZE); 1667 + break; 1668 + } 1669 + 1670 + if (phy_id >= PHY_MAX_ADDR) { 1671 + netdev_info(ndev, "no PHY, assuming direct connection to switch\n"); 1672 + strncpy(mdio_bus_id, "fixed-0", MII_BUS_ID_SIZE); 1673 + phy_id = 0; 1674 + } 1675 + 1676 + snprintf(phy_name, sizeof(phy_name), 1677 + PHY_ID_FMT, mdio_bus_id, phy_id); 1678 + phy_dev = phy_connect(ndev, phy_name, &fec_enet_adjust_link, 1679 + fep->phy_interface); 1664 1680 } 1665 1681 1666 - if (phy_id >= PHY_MAX_ADDR) { 1667 - netdev_info(ndev, "no PHY, assuming direct connection to switch\n"); 1668 - strncpy(mdio_bus_id, "fixed-0", MII_BUS_ID_SIZE); 1669 - phy_id = 0; 1670 - } 1671 - 1672 - snprintf(phy_name, sizeof(phy_name), PHY_ID_FMT, mdio_bus_id, 
phy_id); 1673 - phy_dev = phy_connect(ndev, phy_name, &fec_enet_adjust_link, 1674 - fep->phy_interface); 1675 1682 if (IS_ERR(phy_dev)) { 1676 1683 netdev_err(ndev, "could not attach to PHY\n"); 1677 1684 return PTR_ERR(phy_dev); ··· 1716 1707 struct fec_enet_private *fep = netdev_priv(ndev); 1717 1708 const struct platform_device_id *id_entry = 1718 1709 platform_get_device_id(fep->pdev); 1710 + struct device_node *node; 1719 1711 int err = -ENXIO, i; 1720 1712 1721 1713 /* ··· 1784 1774 for (i = 0; i < PHY_MAX_ADDR; i++) 1785 1775 fep->mii_bus->irq[i] = PHY_POLL; 1786 1776 1787 - if (mdiobus_register(fep->mii_bus)) 1777 + node = of_get_child_by_name(pdev->dev.of_node, "mdio"); 1778 + if (node) { 1779 + err = of_mdiobus_register(fep->mii_bus, node); 1780 + of_node_put(node); 1781 + } else { 1782 + err = mdiobus_register(fep->mii_bus); 1783 + } 1784 + 1785 + if (err) 1788 1786 goto err_out_free_mdio_irq; 1789 1787 1790 1788 mii_cnt++; ··· 2545 2527 struct resource *r; 2546 2528 const struct of_device_id *of_id; 2547 2529 static int dev_id; 2530 + struct device_node *np = pdev->dev.of_node, *phy_node; 2548 2531 2549 2532 of_id = of_match_device(fec_dt_ids, &pdev->dev); 2550 2533 if (of_id) ··· 2584 2565 fep->bufdesc_ex = 0; 2585 2566 2586 2567 platform_set_drvdata(pdev, ndev); 2568 + 2569 + phy_node = of_parse_phandle(np, "phy-handle", 0); 2570 + if (!phy_node && of_phy_is_fixed_link(np)) { 2571 + ret = of_phy_register_fixed_link(np); 2572 + if (ret < 0) { 2573 + dev_err(&pdev->dev, 2574 + "broken fixed-link specification\n"); 2575 + goto failed_phy; 2576 + } 2577 + phy_node = of_node_get(np); 2578 + } 2579 + fep->phy_node = phy_node; 2587 2580 2588 2581 ret = of_get_phy_mode(pdev->dev.of_node); 2589 2582 if (ret < 0) { ··· 2701 2670 failed_regulator: 2702 2671 fec_enet_clk_enable(ndev, false); 2703 2672 failed_clk: 2673 + failed_phy: 2674 + of_node_put(phy_node); 2704 2675 failed_ioremap: 2705 2676 free_netdev(ndev); 2706 2677 ··· 2724 2691 if (fep->ptp_clock) 2725 
2692 ptp_clock_unregister(fep->ptp_clock); 2726 2693 fec_enet_clk_enable(ndev, false); 2694 + of_node_put(fep->phy_node); 2727 2695 free_netdev(ndev); 2728 2696 2729 2697 return 0;
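The fec_main.c change prefers a devicetree `phy-handle` via `of_phy_connect()` and keeps the old MDIO bus scan only as the fallback. The fallback's skip logic can be sketched in userspace as follows (a toy model: `populated[]` stands in for `mii_bus->phy_map`, and a negative return corresponds to the "no PHY, assuming direct connection to switch" path that selects the `fixed-0` bus):

```c
#include <stddef.h>

/* PHY_MAX_ADDR matches the kernel constant; the mask semantics mirror
 * the retained legacy loop in the diff. */
#define PHY_MAX_ADDR 32

static int scan_mdio_bus(unsigned int phy_mask,
                         const int populated[PHY_MAX_ADDR])
{
    int phy_id;

    for (phy_id = 0; phy_id < PHY_MAX_ADDR; phy_id++) {
        if (phy_mask & (1u << phy_id))
            continue;           /* address masked off by the platform */
        if (!populated[phy_id])
            continue;           /* nothing answered at this address */
        return phy_id;          /* first live PHY wins */
    }
    return -1;                  /* caller falls back to "fixed-0" */
}

/* Demo bus with PHYs at addresses 3 and 5. */
static int scan_demo(unsigned int mask)
{
    int populated[PHY_MAX_ADDR] = { 0 };

    populated[3] = 1;
    populated[5] = 1;
    return scan_mdio_bus(mask, populated);
}
```

The ordering in the patch matters: the DT branch is tried first, so boards with a `phy-handle` property never touch the scan, while legacy boards keep their old behavior bit for bit.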
+1 -2
drivers/net/ethernet/freescale/fec_mpc52xx.c
··· 1015 1015 1016 1016 unregister_netdev(ndev); 1017 1017 1018 - if (priv->phy_node) 1019 - of_node_put(priv->phy_node); 1018 + of_node_put(priv->phy_node); 1020 1019 priv->phy_node = NULL; 1021 1020 1022 1021 irq_dispose_mapping(ndev->irq);
+1 -1
drivers/net/ethernet/freescale/fs_enet/fs_enet-main.c
··· 1033 1033 /* In the case of a fixed PHY, the DT node associated 1034 1034 * to the PHY is the Ethernet MAC DT node. 1035 1035 */ 1036 - fpi->phy_node = ofdev->dev.of_node; 1036 + fpi->phy_node = of_node_get(ofdev->dev.of_node); 1037 1037 } 1038 1038 1039 1039 if (of_device_is_compatible(ofdev->dev.of_node, "fsl,mpc5125-fec")) {
+6 -10
drivers/net/ethernet/freescale/gianfar.c
··· 892 892 /* In the case of a fixed PHY, the DT node associated 893 893 * to the PHY is the Ethernet MAC DT node. 894 894 */ 895 - if (of_phy_is_fixed_link(np)) { 895 + if (!priv->phy_node && of_phy_is_fixed_link(np)) { 896 896 err = of_phy_register_fixed_link(np); 897 897 if (err) 898 898 goto err_grp_init; 899 899 900 - priv->phy_node = np; 900 + priv->phy_node = of_node_get(np); 901 901 } 902 902 903 903 /* Find the TBI PHY. If it's not there, we don't support SGMII */ ··· 1435 1435 unmap_group_regs(priv); 1436 1436 gfar_free_rx_queues(priv); 1437 1437 gfar_free_tx_queues(priv); 1438 - if (priv->phy_node) 1439 - of_node_put(priv->phy_node); 1440 - if (priv->tbi_node) 1441 - of_node_put(priv->tbi_node); 1438 + of_node_put(priv->phy_node); 1439 + of_node_put(priv->tbi_node); 1442 1440 free_gfar_dev(priv); 1443 1441 return err; 1444 1442 } ··· 1445 1447 { 1446 1448 struct gfar_private *priv = platform_get_drvdata(ofdev); 1447 1449 1448 - if (priv->phy_node) 1449 - of_node_put(priv->phy_node); 1450 - if (priv->tbi_node) 1451 - of_node_put(priv->tbi_node); 1450 + of_node_put(priv->phy_node); 1451 + of_node_put(priv->tbi_node); 1452 1452 1453 1453 unregister_netdev(priv->ndev); 1454 1454 unmap_group_regs(priv);
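The gianfar cleanup relies on `of_node_put()` being a no-op for a NULL argument, which is why the surrounding `if (priv->phy_node)` checks can be dropped, and on `of_node_get()` taking the extra reference that the fixed-link case now aliases. A toy userspace model of that get/put contract (illustrative refcounting only, not the kernel's kobject-backed implementation):

```c
#include <stdlib.h>

struct toy_node { int refcount; };

static struct toy_node *toy_node_get(struct toy_node *n)
{
    if (n)
        n->refcount++;
    return n;
}

static void toy_node_put(struct toy_node *n)
{
    if (!n)
        return;                 /* NULL-safe: callers need no guard */
    if (--n->refcount == 0)
        free(n);
}

/* One get balanced by one put leaves the original reference intact;
 * putting NULL is harmless, exactly as in the patched cleanup paths. */
static int refcount_demo(void)
{
    struct toy_node *n = calloc(1, sizeof(*n));
    int rc;

    n->refcount = 1;            /* creation reference */
    toy_node_get(n);            /* e.g. of_node_get(np) for a fixed link */
    toy_node_put(n);            /* e.g. of_node_put(priv->phy_node) */
    toy_node_put(NULL);         /* dropped if-check: still safe */
    rc = n->refcount;           /* back to the creation reference */
    toy_node_put(n);            /* final put frees the node */
    return rc;
}
```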
+15 -9
drivers/net/ethernet/freescale/ucc_geth.c
··· 3785 3785 ug_info->uf_info.irq = irq_of_parse_and_map(np, 0); 3786 3786 3787 3787 ug_info->phy_node = of_parse_phandle(np, "phy-handle", 0); 3788 - if (!ug_info->phy_node) { 3789 - /* In the case of a fixed PHY, the DT node associated 3788 + if (!ug_info->phy_node && of_phy_is_fixed_link(np)) { 3789 + /* 3790 + * In the case of a fixed PHY, the DT node associated 3790 3791 * to the PHY is the Ethernet MAC DT node. 3791 3792 */ 3792 - if (of_phy_is_fixed_link(np)) { 3793 - err = of_phy_register_fixed_link(np); 3794 - if (err) 3795 - return err; 3796 - } 3797 - ug_info->phy_node = np; 3793 + err = of_phy_register_fixed_link(np); 3794 + if (err) 3795 + return err; 3796 + ug_info->phy_node = of_node_get(np); 3798 3797 } 3799 3798 3800 3799 /* Find the TBI PHY node. If it's not there, we don't support SGMII */ ··· 3861 3862 /* Create an ethernet device instance */ 3862 3863 dev = alloc_etherdev(sizeof(*ugeth)); 3863 3864 3864 - if (dev == NULL) 3865 + if (dev == NULL) { 3866 + of_node_put(ug_info->tbi_node); 3867 + of_node_put(ug_info->phy_node); 3865 3868 return -ENOMEM; 3869 + } 3866 3870 3867 3871 ugeth = netdev_priv(dev); 3868 3872 spin_lock_init(&ugeth->lock); ··· 3899 3897 pr_err("%s: Cannot register net device, aborting\n", 3900 3898 dev->name); 3901 3899 free_netdev(dev); 3900 + of_node_put(ug_info->tbi_node); 3901 + of_node_put(ug_info->phy_node); 3902 3902 return err; 3903 3903 } 3904 3904 ··· 3924 3920 unregister_netdev(dev); 3925 3921 free_netdev(dev); 3926 3922 ucc_geth_memclean(ugeth); 3923 + of_node_put(ugeth->ug_info->tbi_node); 3924 + of_node_put(ugeth->ug_info->phy_node); 3927 3925 3928 3926 return 0; 3929 3927 }
+17 -17
drivers/net/ethernet/fujitsu/fmvj18x_cs.c
··· 99 99 /* 100 100 card type 101 101 */ 102 - typedef enum { MBH10302, MBH10304, TDK, CONTEC, LA501, UNGERMANN, 102 + enum cardtype { MBH10302, MBH10304, TDK, CONTEC, LA501, UNGERMANN, 103 103 XXX10304, NEC, KME 104 - } cardtype_t; 104 + }; 105 105 106 106 /* 107 107 driver specific data structure 108 108 */ 109 - typedef struct local_info_t { 109 + struct local_info { 110 110 struct pcmcia_device *p_dev; 111 111 long open_time; 112 112 uint tx_started:1; 113 113 uint tx_queue; 114 114 u_short tx_queue_len; 115 - cardtype_t cardtype; 115 + enum cardtype cardtype; 116 116 u_short sent; 117 117 u_char __iomem *base; 118 - } local_info_t; 118 + }; 119 119 120 120 #define MC_FILTERBREAK 64 121 121 ··· 232 232 233 233 static int fmvj18x_probe(struct pcmcia_device *link) 234 234 { 235 - local_info_t *lp; 235 + struct local_info *lp; 236 236 struct net_device *dev; 237 237 238 238 dev_dbg(&link->dev, "fmvj18x_attach()\n"); 239 239 240 240 /* Make up a FMVJ18x specific data structure */ 241 - dev = alloc_etherdev(sizeof(local_info_t)); 241 + dev = alloc_etherdev(sizeof(struct local_info)); 242 242 if (!dev) 243 243 return -ENOMEM; 244 244 lp = netdev_priv(dev); ··· 327 327 static int fmvj18x_config(struct pcmcia_device *link) 328 328 { 329 329 struct net_device *dev = link->priv; 330 - local_info_t *lp = netdev_priv(dev); 330 + struct local_info *lp = netdev_priv(dev); 331 331 int i, ret; 332 332 unsigned int ioaddr; 333 - cardtype_t cardtype; 333 + enum cardtype cardtype; 334 334 char *card_name = "unknown"; 335 335 u8 *buf; 336 336 size_t len; ··· 584 584 int i; 585 585 struct net_device *dev = link->priv; 586 586 unsigned int ioaddr; 587 - local_info_t *lp = netdev_priv(dev); 587 + struct local_info *lp = netdev_priv(dev); 588 588 589 589 /* Allocate a small memory window */ 590 590 link->resource[3]->flags = WIN_DATA_WIDTH_8|WIN_MEMORY_TYPE_AM|WIN_ENABLE; ··· 626 626 { 627 627 628 628 struct net_device *dev = link->priv; 629 - local_info_t *lp = netdev_priv(dev); 629 
+ struct local_info *lp = netdev_priv(dev); 630 630 u_char __iomem *tmp; 631 631 632 632 dev_dbg(&link->dev, "fmvj18x_release\n"); ··· 711 711 static irqreturn_t fjn_interrupt(int dummy, void *dev_id) 712 712 { 713 713 struct net_device *dev = dev_id; 714 - local_info_t *lp = netdev_priv(dev); 714 + struct local_info *lp = netdev_priv(dev); 715 715 unsigned int ioaddr; 716 716 unsigned short tx_stat, rx_stat; 717 717 ··· 772 772 773 773 static void fjn_tx_timeout(struct net_device *dev) 774 774 { 775 - struct local_info_t *lp = netdev_priv(dev); 775 + struct local_info *lp = netdev_priv(dev); 776 776 unsigned int ioaddr = dev->base_addr; 777 777 778 778 netdev_notice(dev, "transmit timed out with status %04x, %s?\n", ··· 802 802 static netdev_tx_t fjn_start_xmit(struct sk_buff *skb, 803 803 struct net_device *dev) 804 804 { 805 - struct local_info_t *lp = netdev_priv(dev); 805 + struct local_info *lp = netdev_priv(dev); 806 806 unsigned int ioaddr = dev->base_addr; 807 807 short length = skb->len; 808 808 ··· 874 874 875 875 static void fjn_reset(struct net_device *dev) 876 876 { 877 - struct local_info_t *lp = netdev_priv(dev); 877 + struct local_info *lp = netdev_priv(dev); 878 878 unsigned int ioaddr = dev->base_addr; 879 879 int i; 880 880 ··· 1058 1058 1059 1059 static int fjn_open(struct net_device *dev) 1060 1060 { 1061 - struct local_info_t *lp = netdev_priv(dev); 1061 + struct local_info *lp = netdev_priv(dev); 1062 1062 struct pcmcia_device *link = lp->p_dev; 1063 1063 1064 1064 pr_debug("fjn_open('%s').\n", dev->name); ··· 1083 1083 1084 1084 static int fjn_close(struct net_device *dev) 1085 1085 { 1086 - struct local_info_t *lp = netdev_priv(dev); 1086 + struct local_info *lp = netdev_priv(dev); 1087 1087 struct pcmcia_device *link = lp->p_dev; 1088 1088 unsigned int ioaddr = dev->base_addr; 1089 1089
+6 -3
drivers/net/ethernet/marvell/mvneta.c
··· 2969 2969 /* In the case of a fixed PHY, the DT node associated 2970 2970 * to the PHY is the Ethernet MAC DT node. 2971 2971 */ 2972 - phy_node = dn; 2972 + phy_node = of_node_get(dn); 2973 2973 } 2974 2974 2975 2975 phy_mode = of_get_phy_mode(dn); 2976 2976 if (phy_mode < 0) { 2977 2977 dev_err(&pdev->dev, "incorrect phy-mode\n"); 2978 2978 err = -EINVAL; 2979 - goto err_free_irq; 2979 + goto err_put_phy_node; 2980 2980 } 2981 2981 2982 2982 dev->tx_queue_len = MVNETA_MAX_TXD; ··· 2992 2992 pp->clk = devm_clk_get(&pdev->dev, NULL); 2993 2993 if (IS_ERR(pp->clk)) { 2994 2994 err = PTR_ERR(pp->clk); 2995 - goto err_free_irq; 2995 + goto err_put_phy_node; 2996 2996 } 2997 2997 2998 2998 clk_prepare_enable(pp->clk); ··· 3071 3071 free_percpu(pp->stats); 3072 3072 err_clk: 3073 3073 clk_disable_unprepare(pp->clk); 3074 + err_put_phy_node: 3075 + of_node_put(phy_node); 3074 3076 err_free_irq: 3075 3077 irq_dispose_mapping(dev->irq); 3076 3078 err_free_netdev: ··· 3090 3088 clk_disable_unprepare(pp->clk); 3091 3089 free_percpu(pp->stats); 3092 3090 irq_dispose_mapping(dev->irq); 3091 + of_node_put(pp->phy_node); 3093 3092 free_netdev(dev); 3094 3093 3095 3094 return 0;
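The mvneta fix slots a new `err_put_phy_node:` label into the probe's cascading-goto unwind so that failures after the node reference is taken release it, while earlier failures skip it. A self-contained sketch of that label ordering, with `malloc` standing in for resource acquisition (toy code, not the driver's probe):

```c
#include <stdlib.h>

/* acquire() models any step that can fail and hand back a resource. */
static void *acquire(int ok)
{
    return ok ? malloc(16) : NULL;
}

static int probe_sketch(int fail_at)
{
    void *irq_map, *phy_ref, *clk;
    int err = -1;

    irq_map = acquire(fail_at != 1);
    if (!irq_map)
        goto err_out;
    phy_ref = acquire(fail_at != 2);    /* the new resource in the patch */
    if (!phy_ref)
        goto err_free_irq;
    clk = acquire(fail_at != 3);
    if (!clk)
        goto err_put_phy_node;          /* clk failure must drop phy_ref */

    /* success: a real probe keeps these; the toy releases them here */
    free(clk);
    free(phy_ref);
    free(irq_map);
    return 0;

err_put_phy_node:
    free(phy_ref);
err_free_irq:
    free(irq_map);
err_out:
    return err;
}
```

The invariant is that labels appear in reverse order of acquisition, and each `goto` targets the label that releases everything acquired so far but not the resource whose acquisition just failed.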
+1
drivers/net/ethernet/myricom/myri10ge/myri10ge.c
··· 574 574 575 575 /* save firmware version for ethtool */ 576 576 strncpy(mgp->fw_version, hdr->version, sizeof(mgp->fw_version)); 577 + mgp->fw_version[sizeof(mgp->fw_version) - 1] = '\0'; 577 578 578 579 sscanf(mgp->fw_version, "%d.%d.%d", &mgp->fw_ver_major, 579 580 &mgp->fw_ver_minor, &mgp->fw_ver_tiny);
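The myri10ge hunk fixes a classic `strncpy()` pitfall: when the source is at least as long as the destination, no NUL terminator is written, so the following `sscanf(mgp->fw_version, ...)` could read past the buffer. A userspace demo of the pattern (illustrative buffer size; the driver's real field length differs):

```c
#include <string.h>

#define FW_VER_LEN 8

static void copy_fw_version(char dst[FW_VER_LEN], const char *src)
{
    strncpy(dst, src, FW_VER_LEN);
    dst[FW_VER_LEN - 1] = '\0';   /* the line the patch adds */
}

/* With a too-long source, strncpy() alone fills all 8 bytes with no
 * terminator; forcing one truncates the string to 7 characters. */
static size_t fw_demo(void)
{
    char buf[FW_VER_LEN];

    copy_fw_version(buf, "123456789");
    return strlen(buf);
}
```

The same class of fix appears again below in cpmac.c, where the copied `mdio_bus_id` gets an explicit terminator after `strncpy()`.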
+1 -1
drivers/net/ethernet/qlogic/qlcnic/Makefile
··· 8 8 qlcnic_ethtool.o qlcnic_ctx.o qlcnic_io.o \ 9 9 qlcnic_sysfs.o qlcnic_minidump.o qlcnic_83xx_hw.o \ 10 10 qlcnic_83xx_init.o qlcnic_83xx_vnic.o \ 11 - qlcnic_minidump.o qlcnic_sriov_common.o 11 + qlcnic_sriov_common.o 12 12 13 13 qlcnic-$(CONFIG_QLCNIC_SRIOV) += qlcnic_sriov_pf.o 14 14
+1 -1
drivers/net/ethernet/smsc/smsc911x.h
··· 51 51 52 52 #ifdef CONFIG_DEBUG_SPINLOCK 53 53 #define SMSC_ASSERT_MAC_LOCK(pdata) \ 54 - WARN_ON(!spin_is_locked(&pdata->mac_lock)) 54 + WARN_ON_SMP(!spin_is_locked(&pdata->mac_lock)) 55 55 #else 56 56 #define SMSC_ASSERT_MAC_LOCK(pdata) do {} while (0) 57 57 #endif /* CONFIG_DEBUG_SPINLOCK */
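The smsc911x change accounts for uniprocessor builds: with `CONFIG_SMP` unset, the kernel's `spin_is_locked()` is a constant 0, so `WARN_ON(!spin_is_locked(...))` would fire on every call even with the lock held, while `WARN_ON_SMP()` compiles the check away. A toy macro model of the two behaviors (compile with `-DCONFIG_SMP` for the SMP semantics; these are stand-ins, not the kernel macros):

```c
#ifdef CONFIG_SMP
#define toy_spin_is_locked(l)  (*(l) != 0)
#define TOY_WARN_ON_SMP(cond)  ((cond) ? 1 : 0)
#else
/* UP build: spin_is_locked() cannot observe the lock, so the
 * assertion must be dropped; (void) casts keep the toy warning-free. */
#define toy_spin_is_locked(l)  ((void)(l), 0)
#define TOY_WARN_ON_SMP(cond)  ((void)(cond), 0)
#endif

/* Plain WARN_ON-style check: a false positive on every UP call. */
static int up_build_would_warn(void)
{
    int lock = 1;                        /* lock "held" */
    return !toy_spin_is_locked(&lock);
}

/* WARN_ON_SMP-style check: silent on UP, active only on SMP. */
static int up_build_warn_on_smp(void)
{
    int lock = 1;
    return TOY_WARN_ON_SMP(!toy_spin_is_locked(&lock));
}
```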
+1
drivers/net/ethernet/ti/cpmac.c
··· 1130 1130 strncpy(mdio_bus_id, "fixed-0", MII_BUS_ID_SIZE); /* fixed phys bus */ 1131 1131 phy_id = pdev->id; 1132 1132 } 1133 + mdio_bus_id[sizeof(mdio_bus_id) - 1] = '\0'; 1133 1134 1134 1135 dev = alloc_etherdev_mq(sizeof(*priv), CPMAC_QUEUES); 1135 1136 if (!dev)
+1 -2
drivers/net/ethernet/xilinx/ll_temac_main.c
··· 1148 1148 temac_mdio_teardown(lp); 1149 1149 unregister_netdev(ndev); 1150 1150 sysfs_remove_group(&lp->dev->kobj, &temac_attr_group); 1151 - if (lp->phy_node) 1152 - of_node_put(lp->phy_node); 1151 + of_node_put(lp->phy_node); 1153 1152 lp->phy_node = NULL; 1154 1153 iounmap(lp->regs); 1155 1154 if (lp->sdma_regs)
+1 -2
drivers/net/ethernet/xilinx/xilinx_axienet_main.c
··· 1630 1630 axienet_mdio_teardown(lp); 1631 1631 unregister_netdev(ndev); 1632 1632 1633 - if (lp->phy_node) 1634 - of_node_put(lp->phy_node); 1633 + of_node_put(lp->phy_node); 1635 1634 lp->phy_node = NULL; 1636 1635 1637 1636 iounmap(lp->regs);
+20 -20
drivers/net/ethernet/xircom/xirc2ps_cs.c
··· 266 266 267 267 static irqreturn_t xirc2ps_interrupt(int irq, void *dev_id); 268 268 269 - typedef struct local_info_t { 269 + struct local_info { 270 270 struct net_device *dev; 271 271 struct pcmcia_device *p_dev; 272 272 ··· 281 281 unsigned last_ptr_value; /* last packets transmitted value */ 282 282 const char *manf_str; 283 283 struct work_struct tx_timeout_task; 284 - } local_info_t; 284 + }; 285 285 286 286 /**************** 287 287 * Some more prototypes ··· 475 475 xirc2ps_probe(struct pcmcia_device *link) 476 476 { 477 477 struct net_device *dev; 478 - local_info_t *local; 478 + struct local_info *local; 479 479 480 480 dev_dbg(&link->dev, "attach()\n"); 481 481 482 482 /* Allocate the device structure */ 483 - dev = alloc_etherdev(sizeof(local_info_t)); 483 + dev = alloc_etherdev(sizeof(struct local_info)); 484 484 if (!dev) 485 485 return -ENOMEM; 486 486 local = netdev_priv(dev); ··· 536 536 set_card_type(struct pcmcia_device *link) 537 537 { 538 538 struct net_device *dev = link->priv; 539 - local_info_t *local = netdev_priv(dev); 539 + struct local_info *local = netdev_priv(dev); 540 540 u8 *buf; 541 541 unsigned int cisrev, mediaid, prodid; 542 542 size_t len; ··· 690 690 xirc2ps_config(struct pcmcia_device * link) 691 691 { 692 692 struct net_device *dev = link->priv; 693 - local_info_t *local = netdev_priv(dev); 693 + struct local_info *local = netdev_priv(dev); 694 694 unsigned int ioaddr; 695 695 int err; 696 696 u8 *buf; ··· 931 931 932 932 if (link->resource[2]->end) { 933 933 struct net_device *dev = link->priv; 934 - local_info_t *local = netdev_priv(dev); 934 + struct local_info *local = netdev_priv(dev); 935 935 if (local->dingo) 936 936 iounmap(local->dingo_ccr - 0x0800); 937 937 } ··· 975 975 xirc2ps_interrupt(int irq, void *dev_id) 976 976 { 977 977 struct net_device *dev = (struct net_device *)dev_id; 978 - local_info_t *lp = netdev_priv(dev); 978 + struct local_info *lp = netdev_priv(dev); 979 979 unsigned int ioaddr; 980 980 
u_char saved_page; 981 981 unsigned bytes_rcvd; ··· 1194 1194 static void 1195 1195 xirc2ps_tx_timeout_task(struct work_struct *work) 1196 1196 { 1197 - local_info_t *local = 1198 - container_of(work, local_info_t, tx_timeout_task); 1197 + struct local_info *local = 1198 + container_of(work, struct local_info, tx_timeout_task); 1199 1199 struct net_device *dev = local->dev; 1200 1200 /* reset the card */ 1201 1201 do_reset(dev,1); ··· 1206 1206 static void 1207 1207 xirc_tx_timeout(struct net_device *dev) 1208 1208 { 1209 - local_info_t *lp = netdev_priv(dev); 1209 + struct local_info *lp = netdev_priv(dev); 1210 1210 dev->stats.tx_errors++; 1211 1211 netdev_notice(dev, "transmit timed out\n"); 1212 1212 schedule_work(&lp->tx_timeout_task); ··· 1215 1215 static netdev_tx_t 1216 1216 do_start_xmit(struct sk_buff *skb, struct net_device *dev) 1217 1217 { 1218 - local_info_t *lp = netdev_priv(dev); 1218 + struct local_info *lp = netdev_priv(dev); 1219 1219 unsigned int ioaddr = dev->base_addr; 1220 1220 int okay; 1221 1221 unsigned freespace; ··· 1300 1300 static void set_addresses(struct net_device *dev) 1301 1301 { 1302 1302 unsigned int ioaddr = dev->base_addr; 1303 - local_info_t *lp = netdev_priv(dev); 1303 + struct local_info *lp = netdev_priv(dev); 1304 1304 struct netdev_hw_addr *ha; 1305 1305 struct set_address_info sa_info; 1306 1306 int i; ··· 1362 1362 static int 1363 1363 do_config(struct net_device *dev, struct ifmap *map) 1364 1364 { 1365 - local_info_t *local = netdev_priv(dev); 1365 + struct local_info *local = netdev_priv(dev); 1366 1366 1367 1367 pr_debug("do_config(%p)\n", dev); 1368 1368 if (map->port != 255 && map->port != dev->if_port) { ··· 1387 1387 static int 1388 1388 do_open(struct net_device *dev) 1389 1389 { 1390 - local_info_t *lp = netdev_priv(dev); 1390 + struct local_info *lp = netdev_priv(dev); 1391 1391 struct pcmcia_device *link = lp->p_dev; 1392 1392 1393 1393 dev_dbg(&link->dev, "do_open(%p)\n", dev); ··· 1421 1421 static int 
1422 1422 do_ioctl(struct net_device *dev, struct ifreq *rq, int cmd) 1423 1423 { 1424 - local_info_t *local = netdev_priv(dev); 1424 + struct local_info *local = netdev_priv(dev); 1425 1425 unsigned int ioaddr = dev->base_addr; 1426 1426 struct mii_ioctl_data *data = if_mii(rq); 1427 1427 ··· 1453 1453 static void 1454 1454 hardreset(struct net_device *dev) 1455 1455 { 1456 - local_info_t *local = netdev_priv(dev); 1456 + struct local_info *local = netdev_priv(dev); 1457 1457 unsigned int ioaddr = dev->base_addr; 1458 1458 1459 1459 SelectPage(4); ··· 1470 1470 static void 1471 1471 do_reset(struct net_device *dev, int full) 1472 1472 { 1473 - local_info_t *local = netdev_priv(dev); 1473 + struct local_info *local = netdev_priv(dev); 1474 1474 unsigned int ioaddr = dev->base_addr; 1475 1475 unsigned value; 1476 1476 ··· 1631 1631 static int 1632 1632 init_mii(struct net_device *dev) 1633 1633 { 1634 - local_info_t *local = netdev_priv(dev); 1634 + struct local_info *local = netdev_priv(dev); 1635 1635 unsigned int ioaddr = dev->base_addr; 1636 1636 unsigned control, status, linkpartner; 1637 1637 int i; ··· 1715 1715 do_stop(struct net_device *dev) 1716 1716 { 1717 1717 unsigned int ioaddr = dev->base_addr; 1718 - local_info_t *lp = netdev_priv(dev); 1718 + struct local_info *lp = netdev_priv(dev); 1719 1719 struct pcmcia_device *link = lp->p_dev; 1720 1720 1721 1721 dev_dbg(&link->dev, "do_stop(%p)\n", dev);
+32 -31
drivers/net/wan/hdlc_fr.c
··· 90 90 #define LMI_ANSI_LENGTH 14 91 91 92 92 93 - typedef struct { 93 + struct fr_hdr { 94 94 #if defined(__LITTLE_ENDIAN_BITFIELD) 95 95 unsigned ea1: 1; 96 96 unsigned cr: 1; ··· 112 112 unsigned de: 1; 113 113 unsigned ea2: 1; 114 114 #endif 115 - }__packed fr_hdr; 115 + } __packed; 116 116 117 117 118 - typedef struct pvc_device_struct { 118 + struct pvc_device { 119 119 struct net_device *frad; 120 120 struct net_device *main; 121 121 struct net_device *ether; /* bridged Ethernet interface */ 122 - struct pvc_device_struct *next; /* Sorted in ascending DLCI order */ 122 + struct pvc_device *next; /* Sorted in ascending DLCI order */ 123 123 int dlci; 124 124 int open_count; 125 125 ··· 132 132 unsigned int becn: 1; 133 133 unsigned int bandwidth; /* Cisco LMI reporting only */ 134 134 }state; 135 - }pvc_device; 135 + }; 136 136 137 137 struct frad_state { 138 138 fr_proto settings; 139 - pvc_device *first_pvc; 139 + struct pvc_device *first_pvc; 140 140 int dce_pvc_count; 141 141 142 142 struct timer_list timer; ··· 174 174 } 175 175 176 176 177 - static inline pvc_device* find_pvc(hdlc_device *hdlc, u16 dlci) 177 + static inline struct pvc_device *find_pvc(hdlc_device *hdlc, u16 dlci) 178 178 { 179 - pvc_device *pvc = state(hdlc)->first_pvc; 179 + struct pvc_device *pvc = state(hdlc)->first_pvc; 180 180 181 181 while (pvc) { 182 182 if (pvc->dlci == dlci) ··· 190 190 } 191 191 192 192 193 - static pvc_device* add_pvc(struct net_device *dev, u16 dlci) 193 + static struct pvc_device *add_pvc(struct net_device *dev, u16 dlci) 194 194 { 195 195 hdlc_device *hdlc = dev_to_hdlc(dev); 196 - pvc_device *pvc, **pvc_p = &state(hdlc)->first_pvc; 196 + struct pvc_device *pvc, **pvc_p = &state(hdlc)->first_pvc; 197 197 198 198 while (*pvc_p) { 199 199 if ((*pvc_p)->dlci == dlci) ··· 203 203 pvc_p = &(*pvc_p)->next; 204 204 } 205 205 206 - pvc = kzalloc(sizeof(pvc_device), GFP_ATOMIC); 206 + pvc = kzalloc(sizeof(*pvc), GFP_ATOMIC); 207 207 #ifdef DEBUG_PVC 208 208 
printk(KERN_DEBUG "add_pvc: allocated pvc %p, frad %p\n", pvc, dev); 209 209 #endif ··· 218 218 } 219 219 220 220 221 - static inline int pvc_is_used(pvc_device *pvc) 221 + static inline int pvc_is_used(struct pvc_device *pvc) 222 222 { 223 223 return pvc->main || pvc->ether; 224 224 } 225 225 226 226 227 - static inline void pvc_carrier(int on, pvc_device *pvc) 227 + static inline void pvc_carrier(int on, struct pvc_device *pvc) 228 228 { 229 229 if (on) { 230 230 if (pvc->main) ··· 246 246 247 247 static inline void delete_unused_pvcs(hdlc_device *hdlc) 248 248 { 249 - pvc_device **pvc_p = &state(hdlc)->first_pvc; 249 + struct pvc_device **pvc_p = &state(hdlc)->first_pvc; 250 250 251 251 while (*pvc_p) { 252 252 if (!pvc_is_used(*pvc_p)) { 253 - pvc_device *pvc = *pvc_p; 253 + struct pvc_device *pvc = *pvc_p; 254 254 #ifdef DEBUG_PVC 255 255 printk(KERN_DEBUG "freeing unused pvc: %p\n", pvc); 256 256 #endif ··· 263 263 } 264 264 265 265 266 - static inline struct net_device** get_dev_p(pvc_device *pvc, int type) 266 + static inline struct net_device **get_dev_p(struct pvc_device *pvc, 267 + int type) 267 268 { 268 269 if (type == ARPHRD_ETHER) 269 270 return &pvc->ether; ··· 343 342 344 343 static int pvc_open(struct net_device *dev) 345 344 { 346 - pvc_device *pvc = dev->ml_priv; 345 + struct pvc_device *pvc = dev->ml_priv; 347 346 348 347 if ((pvc->frad->flags & IFF_UP) == 0) 349 348 return -EIO; /* Frad must be UP in order to activate PVC */ ··· 363 362 364 363 static int pvc_close(struct net_device *dev) 365 364 { 366 - pvc_device *pvc = dev->ml_priv; 365 + struct pvc_device *pvc = dev->ml_priv; 367 366 368 367 if (--pvc->open_count == 0) { 369 368 hdlc_device *hdlc = dev_to_hdlc(pvc->frad); ··· 382 381 383 382 static int pvc_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd) 384 383 { 385 - pvc_device *pvc = dev->ml_priv; 384 + struct pvc_device *pvc = dev->ml_priv; 386 385 fr_proto_pvc_info info; 387 386 388 387 if (ifr->ifr_settings.type == 
IF_GET_PROTO) { ··· 410 409 411 410 static netdev_tx_t pvc_xmit(struct sk_buff *skb, struct net_device *dev) 412 411 { 413 - pvc_device *pvc = dev->ml_priv; 412 + struct pvc_device *pvc = dev->ml_priv; 414 413 415 414 if (pvc->state.active) { 416 415 if (dev->type == ARPHRD_ETHER) { ··· 445 444 return NETDEV_TX_OK; 446 445 } 447 446 448 - static inline void fr_log_dlci_active(pvc_device *pvc) 447 + static inline void fr_log_dlci_active(struct pvc_device *pvc) 449 448 { 450 449 netdev_info(pvc->frad, "DLCI %d [%s%s%s]%s %s\n", 451 450 pvc->dlci, ··· 470 469 { 471 470 hdlc_device *hdlc = dev_to_hdlc(dev); 472 471 struct sk_buff *skb; 473 - pvc_device *pvc = state(hdlc)->first_pvc; 472 + struct pvc_device *pvc = state(hdlc)->first_pvc; 474 473 int lmi = state(hdlc)->settings.lmi; 475 474 int dce = state(hdlc)->settings.dce; 476 475 int len = lmi == LMI_ANSI ? LMI_ANSI_LENGTH : LMI_CCITT_CISCO_LENGTH; ··· 567 566 static void fr_set_link_state(int reliable, struct net_device *dev) 568 567 { 569 568 hdlc_device *hdlc = dev_to_hdlc(dev); 570 - pvc_device *pvc = state(hdlc)->first_pvc; 569 + struct pvc_device *pvc = state(hdlc)->first_pvc; 571 570 572 571 state(hdlc)->reliable = reliable; 573 572 if (reliable) { ··· 653 652 static int fr_lmi_recv(struct net_device *dev, struct sk_buff *skb) 654 653 { 655 654 hdlc_device *hdlc = dev_to_hdlc(dev); 656 - pvc_device *pvc; 655 + struct pvc_device *pvc; 657 656 u8 rxseq, txseq; 658 657 int lmi = state(hdlc)->settings.lmi; 659 658 int dce = state(hdlc)->settings.dce; ··· 870 869 { 871 870 struct net_device *frad = skb->dev; 872 871 hdlc_device *hdlc = dev_to_hdlc(frad); 873 - fr_hdr *fh = (fr_hdr*)skb->data; 872 + struct fr_hdr *fh = (struct fr_hdr *)skb->data; 874 873 u8 *data = skb->data; 875 874 u16 dlci; 876 - pvc_device *pvc; 875 + struct pvc_device *pvc; 877 876 struct net_device *dev = NULL; 878 877 879 878 if (skb->len <= 4 || fh->ea1 || data[2] != FR_UI) ··· 1029 1028 static void fr_close(struct net_device *dev) 1030 
1029 { 1031 1030 hdlc_device *hdlc = dev_to_hdlc(dev); 1032 - pvc_device *pvc = state(hdlc)->first_pvc; 1031 + struct pvc_device *pvc = state(hdlc)->first_pvc; 1033 1032 1034 1033 while (pvc) { /* Shutdown all PVCs for this FRAD */ 1035 1034 if (pvc->main) ··· 1061 1060 static int fr_add_pvc(struct net_device *frad, unsigned int dlci, int type) 1062 1061 { 1063 1062 hdlc_device *hdlc = dev_to_hdlc(frad); 1064 - pvc_device *pvc; 1063 + struct pvc_device *pvc; 1065 1064 struct net_device *dev; 1066 1065 int used; 1067 1066 ··· 1118 1117 1119 1118 static int fr_del_pvc(hdlc_device *hdlc, unsigned int dlci, int type) 1120 1119 { 1121 - pvc_device *pvc; 1120 + struct pvc_device *pvc; 1122 1121 struct net_device *dev; 1123 1122 1124 1123 if ((pvc = find_pvc(hdlc, dlci)) == NULL) ··· 1146 1145 static void fr_destroy(struct net_device *frad) 1147 1146 { 1148 1147 hdlc_device *hdlc = dev_to_hdlc(frad); 1149 - pvc_device *pvc = state(hdlc)->first_pvc; 1148 + struct pvc_device *pvc = state(hdlc)->first_pvc; 1150 1149 state(hdlc)->first_pvc = NULL; /* All PVCs destroyed */ 1151 1150 state(hdlc)->dce_pvc_count = 0; 1152 1151 state(hdlc)->dce_changed = 1; 1153 1152 1154 1153 while (pvc) { 1155 - pvc_device *next = pvc->next; 1154 + struct pvc_device *next = pvc->next; 1156 1155 /* destructors will free_netdev() main and ether */ 1157 1156 if (pvc->main) 1158 1157 unregister_netdevice(pvc->main);
+32 -31
drivers/net/wan/wanxl.c
··· 54 54 #define MBX2_MEMSZ_MASK 0xFFFF0000 /* PUTS Memory Size Register mask */ 55 55 56 56 57 - typedef struct { 57 + struct port { 58 58 struct net_device *dev; 59 - struct card_t *card; 59 + struct card *card; 60 60 spinlock_t lock; /* for wanxl_xmit */ 61 61 int node; /* physical port #0 - 3 */ 62 62 unsigned int clock_type; 63 63 int tx_in, tx_out; 64 64 struct sk_buff *tx_skbs[TX_BUFFERS]; 65 - }port_t; 65 + }; 66 66 67 67 68 - typedef struct { 68 + struct card_status { 69 69 desc_t rx_descs[RX_QUEUE_LENGTH]; 70 70 port_status_t port_status[4]; 71 - }card_status_t; 71 + }; 72 72 73 73 74 - typedef struct card_t { 74 + struct card { 75 75 int n_ports; /* 1, 2 or 4 ports */ 76 76 u8 irq; 77 77 ··· 79 79 struct pci_dev *pdev; /* for pci_name(pdev) */ 80 80 int rx_in; 81 81 struct sk_buff *rx_skbs[RX_QUEUE_LENGTH]; 82 - card_status_t *status; /* shared between host and card */ 82 + struct card_status *status; /* shared between host and card */ 83 83 dma_addr_t status_address; 84 - port_t ports[0]; /* 1 - 4 port_t structures follow */ 85 - }card_t; 84 + struct port ports[0]; /* 1 - 4 port structures follow */ 85 + }; 86 86 87 87 88 88 89 - static inline port_t* dev_to_port(struct net_device *dev) 89 + static inline struct port *dev_to_port(struct net_device *dev) 90 90 { 91 - return (port_t *)dev_to_hdlc(dev)->priv; 91 + return (struct port *)dev_to_hdlc(dev)->priv; 92 92 } 93 93 94 94 95 - static inline port_status_t* get_status(port_t *port) 95 + static inline port_status_t *get_status(struct port *port) 96 96 { 97 97 return &port->card->status->port_status[port->node]; 98 98 } ··· 115 115 116 116 117 117 /* Cable and/or personality module change interrupt service */ 118 - static inline void wanxl_cable_intr(port_t *port) 118 + static inline void wanxl_cable_intr(struct port *port) 119 119 { 120 120 u32 value = get_status(port)->cable; 121 121 int valid = 1; ··· 160 160 161 161 162 162 /* Transmit complete interrupt service */ 163 - static inline void 
wanxl_tx_intr(port_t *port) 163 + static inline void wanxl_tx_intr(struct port *port) 164 164 { 165 165 struct net_device *dev = port->dev; 166 166 while (1) { ··· 193 193 194 194 195 195 /* Receive complete interrupt service */ 196 - static inline void wanxl_rx_intr(card_t *card) 196 + static inline void wanxl_rx_intr(struct card *card) 197 197 { 198 198 desc_t *desc; 199 199 while (desc = &card->status->rx_descs[card->rx_in], ··· 203 203 pci_name(card->pdev)); 204 204 else { 205 205 struct sk_buff *skb = card->rx_skbs[card->rx_in]; 206 - port_t *port = &card->ports[desc->stat & 206 + struct port *port = &card->ports[desc->stat & 207 207 PACKET_PORT_MASK]; 208 208 struct net_device *dev = port->dev; 209 209 ··· 245 245 246 246 static irqreturn_t wanxl_intr(int irq, void* dev_id) 247 247 { 248 - card_t *card = dev_id; 248 + struct card *card = dev_id; 249 249 int i; 250 250 u32 stat; 251 251 int handled = 0; ··· 272 272 273 273 static netdev_tx_t wanxl_xmit(struct sk_buff *skb, struct net_device *dev) 274 274 { 275 - port_t *port = dev_to_port(dev); 275 + struct port *port = dev_to_port(dev); 276 276 desc_t *desc; 277 277 278 278 spin_lock(&port->lock); ··· 319 319 static int wanxl_attach(struct net_device *dev, unsigned short encoding, 320 320 unsigned short parity) 321 321 { 322 - port_t *port = dev_to_port(dev); 322 + struct port *port = dev_to_port(dev); 323 323 324 324 if (encoding != ENCODING_NRZ && 325 325 encoding != ENCODING_NRZI) ··· 343 343 { 344 344 const size_t size = sizeof(sync_serial_settings); 345 345 sync_serial_settings line; 346 - port_t *port = dev_to_port(dev); 346 + struct port *port = dev_to_port(dev); 347 347 348 348 if (cmd != SIOCWANDEV) 349 349 return hdlc_ioctl(dev, ifr, cmd); ··· 393 393 394 394 static int wanxl_open(struct net_device *dev) 395 395 { 396 - port_t *port = dev_to_port(dev); 396 + struct port *port = dev_to_port(dev); 397 397 u8 __iomem *dbr = port->card->plx + PLX_DOORBELL_TO_CARD; 398 398 unsigned long timeout; 399 399 
int i; ··· 429 429 430 430 static int wanxl_close(struct net_device *dev) 431 431 { 432 - port_t *port = dev_to_port(dev); 432 + struct port *port = dev_to_port(dev); 433 433 unsigned long timeout; 434 434 int i; 435 435 ··· 467 467 468 468 static struct net_device_stats *wanxl_get_stats(struct net_device *dev) 469 469 { 470 - port_t *port = dev_to_port(dev); 470 + struct port *port = dev_to_port(dev); 471 471 472 472 dev->stats.rx_over_errors = get_status(port)->rx_overruns; 473 473 dev->stats.rx_frame_errors = get_status(port)->rx_frame_errors; ··· 478 478 479 479 480 480 481 - static int wanxl_puts_command(card_t *card, u32 cmd) 481 + static int wanxl_puts_command(struct card *card, u32 cmd) 482 482 { 483 483 unsigned long timeout = jiffies + 5 * HZ; 484 484 ··· 495 495 496 496 497 497 498 - static void wanxl_reset(card_t *card) 498 + static void wanxl_reset(struct card *card) 499 499 { 500 500 u32 old_value = readl(card->plx + PLX_CONTROL) & ~PLX_CTL_RESET; 501 501 ··· 511 511 512 512 static void wanxl_pci_remove_one(struct pci_dev *pdev) 513 513 { 514 - card_t *card = pci_get_drvdata(pdev); 514 + struct card *card = pci_get_drvdata(pdev); 515 515 int i; 516 516 517 517 for (i = 0; i < card->n_ports; i++) { ··· 537 537 iounmap(card->plx); 538 538 539 539 if (card->status) 540 - pci_free_consistent(pdev, sizeof(card_status_t), 540 + pci_free_consistent(pdev, sizeof(struct card_status), 541 541 card->status, card->status_address); 542 542 543 543 pci_release_regions(pdev); ··· 560 560 static int wanxl_pci_init_one(struct pci_dev *pdev, 561 561 const struct pci_device_id *ent) 562 562 { 563 - card_t *card; 563 + struct card *card; 564 564 u32 ramsize, stat; 565 565 unsigned long timeout; 566 566 u32 plx_phy; /* PLX PCI base address */ ··· 601 601 default: ports = 4; 602 602 } 603 603 604 - alloc_size = sizeof(card_t) + ports * sizeof(port_t); 604 + alloc_size = sizeof(struct card) + ports * sizeof(struct port); 605 605 card = kzalloc(alloc_size, GFP_KERNEL); 606 
606 if (card == NULL) { 607 607 pci_release_regions(pdev); ··· 612 612 pci_set_drvdata(pdev, card); 613 613 card->pdev = pdev; 614 614 615 - card->status = pci_alloc_consistent(pdev, sizeof(card_status_t), 615 + card->status = pci_alloc_consistent(pdev, 616 + sizeof(struct card_status), 616 617 &card->status_address); 617 618 if (card->status == NULL) { 618 619 wanxl_pci_remove_one(pdev); ··· 767 766 768 767 for (i = 0; i < ports; i++) { 769 768 hdlc_device *hdlc; 770 - port_t *port = &card->ports[i]; 769 + struct port *port = &card->ports[i]; 771 770 struct net_device *dev = alloc_hdlcdev(port); 772 771 if (!dev) { 773 772 pr_err("%s: unable to allocate memory\n",
+13 -12
drivers/net/wireless/airo_cs.c
··· 56 56 57 57 static void airo_detach(struct pcmcia_device *p_dev); 58 58 59 - typedef struct local_info_t { 59 + struct local_info { 60 60 struct net_device *eth_dev; 61 - } local_info_t; 61 + }; 62 62 63 63 static int airo_probe(struct pcmcia_device *p_dev) 64 64 { 65 - local_info_t *local; 65 + struct local_info *local; 66 66 67 67 dev_dbg(&p_dev->dev, "airo_attach()\n"); 68 68 69 69 /* Allocate space for private device-specific data */ 70 - local = kzalloc(sizeof(local_info_t), GFP_KERNEL); 70 + local = kzalloc(sizeof(*local), GFP_KERNEL); 71 71 if (!local) 72 72 return -ENOMEM; 73 73 ··· 82 82 83 83 airo_release(link); 84 84 85 - if (((local_info_t *)link->priv)->eth_dev) { 86 - stop_airo_card(((local_info_t *)link->priv)->eth_dev, 0); 85 + if (((struct local_info *)link->priv)->eth_dev) { 86 + stop_airo_card(((struct local_info *)link->priv)->eth_dev, 87 + 0); 87 88 } 88 - ((local_info_t *)link->priv)->eth_dev = NULL; 89 + ((struct local_info *)link->priv)->eth_dev = NULL; 89 90 90 91 kfree(link->priv); 91 92 } /* airo_detach */ ··· 102 101 103 102 static int airo_config(struct pcmcia_device *link) 104 103 { 105 - local_info_t *dev; 104 + struct local_info *dev; 106 105 int ret; 107 106 108 107 dev = link->priv; ··· 122 121 ret = pcmcia_enable_device(link); 123 122 if (ret) 124 123 goto failed; 125 - ((local_info_t *)link->priv)->eth_dev = 124 + ((struct local_info *)link->priv)->eth_dev = 126 125 init_airo_card(link->irq, 127 126 link->resource[0]->start, 1, &link->dev); 128 - if (!((local_info_t *)link->priv)->eth_dev) 127 + if (!((struct local_info *)link->priv)->eth_dev) 129 128 goto failed; 130 129 131 130 return 0; ··· 143 142 144 143 static int airo_suspend(struct pcmcia_device *link) 145 144 { 146 - local_info_t *local = link->priv; 145 + struct local_info *local = link->priv; 147 146 148 147 netif_device_detach(local->eth_dev); 149 148 ··· 152 151 153 152 static int airo_resume(struct pcmcia_device *link) 154 153 { 155 - local_info_t *local = 
link->priv; 154 + struct local_info *local = link->priv; 156 155 157 156 if (link->open) { 158 157 reset_airo_card(local->eth_dev);
+4 -4
drivers/net/wireless/atmel.c
··· 2598 2598 NULL, /* SIOCIWFIRSTPRIV */ 2599 2599 }; 2600 2600 2601 - typedef struct atmel_priv_ioctl { 2601 + struct atmel_priv_ioctl { 2602 2602 char id[32]; 2603 2603 unsigned char __user *data; 2604 2604 unsigned short len; 2605 - } atmel_priv_ioctl; 2605 + }; 2606 2606 2607 2607 #define ATMELFWL SIOCIWFIRSTPRIV 2608 2608 #define ATMELIDIFC ATMELFWL + 1 ··· 2615 2615 .cmd = ATMELFWL, 2616 2616 .set_args = IW_PRIV_TYPE_BYTE 2617 2617 | IW_PRIV_SIZE_FIXED 2618 - | sizeof (atmel_priv_ioctl), 2618 + | sizeof(struct atmel_priv_ioctl), 2619 2619 .get_args = IW_PRIV_TYPE_NONE, 2620 2620 .name = "atmelfwl" 2621 2621 }, { ··· 2645 2645 { 2646 2646 int i, rc = 0; 2647 2647 struct atmel_private *priv = netdev_priv(dev); 2648 - atmel_priv_ioctl com; 2648 + struct atmel_priv_ioctl com; 2649 2649 struct iwreq *wrq = (struct iwreq *) rq; 2650 2650 unsigned char *new_firmware; 2651 2651 char domain[REGDOMAINSZ + 1];
+1 -5
drivers/net/xen-netback/interface.c
··· 78 78 /* This vif is rogue, we pretend we've there is nothing to do 79 79 * for this vif to deschedule it from NAPI. But this interface 80 80 * will be turned off in thread context later. 81 - * Also, if a guest doesn't post enough slots to receive data on one of 82 - * its queues, the carrier goes down and NAPI is descheduled here so 83 - * the guest can't send more packets until it's ready to receive. 84 81 */ 85 - if (unlikely(queue->vif->disabled || 86 - !netif_carrier_ok(queue->vif->dev))) { 82 + if (unlikely(queue->vif->disabled)) { 87 83 napi_complete(napi); 88 84 return 0; 89 85 }
+8 -2
drivers/net/xen-netback/netback.c
··· 2025 2025 * context so we defer it here, if this thread is 2026 2026 * associated with queue 0. 2027 2027 */ 2028 - if (unlikely(queue->vif->disabled && queue->id == 0)) 2028 + if (unlikely(queue->vif->disabled && queue->id == 0)) { 2029 2029 xenvif_carrier_off(queue->vif); 2030 - else if (unlikely(test_and_clear_bit(QUEUE_STATUS_RX_PURGE_EVENT, 2030 + } else if (unlikely(queue->vif->disabled)) { 2031 + /* kthread_stop() would be called upon this thread soon, 2032 + * be a bit proactive 2033 + */ 2034 + skb_queue_purge(&queue->rx_queue); 2035 + queue->rx_last_skb_slots = 0; 2036 + } else if (unlikely(test_and_clear_bit(QUEUE_STATUS_RX_PURGE_EVENT, 2031 2037 &queue->status))) { 2032 2038 xenvif_rx_purge_event(queue); 2033 2039 } else if (!netif_carrier_ok(queue->vif->dev)) {
+4 -3
drivers/net/xen-netfront.c
··· 628 628 slots = DIV_ROUND_UP(offset + len, PAGE_SIZE) + 629 629 xennet_count_skb_frag_slots(skb); 630 630 if (unlikely(slots > MAX_SKB_FRAGS + 1)) { 631 - net_alert_ratelimited( 632 - "xennet: skb rides the rocket: %d slots\n", slots); 633 - goto drop; 631 + net_dbg_ratelimited("xennet: skb rides the rocket: %d slots, %d bytes\n", 632 + slots, skb->len); 633 + if (skb_linearize(skb)) 634 + goto drop; 634 635 } 635 636 636 637 spin_lock_irqsave(&queue->tx_lock, flags);
-6
include/linux/if_vlan.h
··· 187 187 } 188 188 189 189 extern bool vlan_do_receive(struct sk_buff **skb); 190 - extern struct sk_buff *vlan_untag(struct sk_buff *skb); 191 190 192 191 extern int vlan_vid_add(struct net_device *dev, __be16 proto, u16 vid); 193 192 extern void vlan_vid_del(struct net_device *dev, __be16 proto, u16 vid); ··· 238 239 static inline bool vlan_do_receive(struct sk_buff **skb) 239 240 { 240 241 return false; 241 - } 242 - 243 - static inline struct sk_buff *vlan_untag(struct sk_buff *skb) 244 - { 245 - return skb; 246 242 } 247 243 248 244 static inline int vlan_vid_add(struct net_device *dev, __be16 proto, u16 vid)
+1
include/linux/skbuff.h
··· 2555 2555 void skb_scrub_packet(struct sk_buff *skb, bool xnet); 2556 2556 unsigned int skb_gso_transport_seglen(const struct sk_buff *skb); 2557 2557 struct sk_buff *skb_segment(struct sk_buff *skb, netdev_features_t features); 2558 + struct sk_buff *skb_vlan_untag(struct sk_buff *skb); 2558 2559 2559 2560 struct skb_checksum_ops { 2560 2561 __wsum (*update)(const void *mem, int len, __wsum wsum);
+1 -1
net/6lowpan/Kconfig
··· 1 1 config 6LOWPAN 2 - bool "6LoWPAN Support" 2 + tristate "6LoWPAN Support" 3 3 depends on IPV6 4 4 ---help--- 5 5 This enables IPv6 over Low power Wireless Personal Area Network -
-53
net/8021q/vlan_core.c
··· 112 112 } 113 113 EXPORT_SYMBOL(vlan_dev_vlan_proto); 114 114 115 - static struct sk_buff *vlan_reorder_header(struct sk_buff *skb) 116 - { 117 - if (skb_cow(skb, skb_headroom(skb)) < 0) { 118 - kfree_skb(skb); 119 - return NULL; 120 - } 121 - 122 - memmove(skb->data - ETH_HLEN, skb->data - VLAN_ETH_HLEN, 2 * ETH_ALEN); 123 - skb->mac_header += VLAN_HLEN; 124 - return skb; 125 - } 126 - 127 - struct sk_buff *vlan_untag(struct sk_buff *skb) 128 - { 129 - struct vlan_hdr *vhdr; 130 - u16 vlan_tci; 131 - 132 - if (unlikely(vlan_tx_tag_present(skb))) { 133 - /* vlan_tci is already set-up so leave this for another time */ 134 - return skb; 135 - } 136 - 137 - skb = skb_share_check(skb, GFP_ATOMIC); 138 - if (unlikely(!skb)) 139 - goto err_free; 140 - 141 - if (unlikely(!pskb_may_pull(skb, VLAN_HLEN))) 142 - goto err_free; 143 - 144 - vhdr = (struct vlan_hdr *) skb->data; 145 - vlan_tci = ntohs(vhdr->h_vlan_TCI); 146 - __vlan_hwaccel_put_tag(skb, skb->protocol, vlan_tci); 147 - 148 - skb_pull_rcsum(skb, VLAN_HLEN); 149 - vlan_set_encap_proto(skb, vhdr); 150 - 151 - skb = vlan_reorder_header(skb); 152 - if (unlikely(!skb)) 153 - goto err_free; 154 - 155 - skb_reset_network_header(skb); 156 - skb_reset_transport_header(skb); 157 - skb_reset_mac_len(skb); 158 - 159 - return skb; 160 - 161 - err_free: 162 - kfree_skb(skb); 163 - return NULL; 164 - } 165 - EXPORT_SYMBOL(vlan_untag); 166 - 167 - 168 115 /* 169 116 * vlan info and vid list 170 117 */
-1
net/batman-adv/multicast.c
··· 20 20 #include "originator.h" 21 21 #include "hard-interface.h" 22 22 #include "translation-table.h" 23 - #include "multicast.h" 24 23 25 24 /** 26 25 * batadv_mcast_mla_softif_get - get softif multicast listeners
+1 -1
net/bridge/br_vlan.c
··· 181 181 */ 182 182 if (unlikely(!vlan_tx_tag_present(skb) && 183 183 skb->protocol == proto)) { 184 - skb = vlan_untag(skb); 184 + skb = skb_vlan_untag(skb); 185 185 if (unlikely(!skb)) 186 186 return false; 187 187 }
+2 -8
net/bridge/netfilter/ebtables.c
··· 327 327 char name[EBT_FUNCTION_MAXNAMELEN]; 328 328 } *e; 329 329 330 - *error = mutex_lock_interruptible(mutex); 331 - if (*error != 0) 332 - return NULL; 333 - 330 + mutex_lock(mutex); 334 331 list_for_each_entry(e, head, list) { 335 332 if (strcmp(e->name, name) == 0) 336 333 return e; ··· 1200 1203 1201 1204 table->private = newinfo; 1202 1205 rwlock_init(&table->lock); 1203 - ret = mutex_lock_interruptible(&ebt_mutex); 1204 - if (ret != 0) 1205 - goto free_chainstack; 1206 - 1206 + mutex_lock(&ebt_mutex); 1207 1207 list_for_each_entry(t, &net->xt.tables[NFPROTO_BRIDGE], list) { 1208 1208 if (strcmp(t->name, table->name) == 0) { 1209 1209 ret = -EEXIST;
+1 -1
net/core/dev.c
··· 3602 3602 3603 3603 if (skb->protocol == cpu_to_be16(ETH_P_8021Q) || 3604 3604 skb->protocol == cpu_to_be16(ETH_P_8021AD)) { 3605 - skb = vlan_untag(skb); 3605 + skb = skb_vlan_untag(skb); 3606 3606 if (unlikely(!skb)) 3607 3607 goto unlock; 3608 3608 }
+2 -1
net/core/rtnetlink.c
··· 804 804 (nla_total_size(sizeof(struct ifla_vf_mac)) + 805 805 nla_total_size(sizeof(struct ifla_vf_vlan)) + 806 806 nla_total_size(sizeof(struct ifla_vf_spoofchk)) + 807 - nla_total_size(sizeof(struct ifla_vf_rate))); 807 + nla_total_size(sizeof(struct ifla_vf_rate)) + 808 + nla_total_size(sizeof(struct ifla_vf_link_state))); 808 809 return size; 809 810 } else 810 811 return 0;
+53
net/core/skbuff.c
··· 62 62 #include <linux/scatterlist.h> 63 63 #include <linux/errqueue.h> 64 64 #include <linux/prefetch.h> 65 + #include <linux/if_vlan.h> 65 66 66 67 #include <net/protocol.h> 67 68 #include <net/dst.h> ··· 3974 3973 return shinfo->gso_size; 3975 3974 } 3976 3975 EXPORT_SYMBOL_GPL(skb_gso_transport_seglen); 3976 + 3977 + static struct sk_buff *skb_reorder_vlan_header(struct sk_buff *skb) 3978 + { 3979 + if (skb_cow(skb, skb_headroom(skb)) < 0) { 3980 + kfree_skb(skb); 3981 + return NULL; 3982 + } 3983 + 3984 + memmove(skb->data - ETH_HLEN, skb->data - VLAN_ETH_HLEN, 2 * ETH_ALEN); 3985 + skb->mac_header += VLAN_HLEN; 3986 + return skb; 3987 + } 3988 + 3989 + struct sk_buff *skb_vlan_untag(struct sk_buff *skb) 3990 + { 3991 + struct vlan_hdr *vhdr; 3992 + u16 vlan_tci; 3993 + 3994 + if (unlikely(vlan_tx_tag_present(skb))) { 3995 + /* vlan_tci is already set-up so leave this for another time */ 3996 + return skb; 3997 + } 3998 + 3999 + skb = skb_share_check(skb, GFP_ATOMIC); 4000 + if (unlikely(!skb)) 4001 + goto err_free; 4002 + 4003 + if (unlikely(!pskb_may_pull(skb, VLAN_HLEN))) 4004 + goto err_free; 4005 + 4006 + vhdr = (struct vlan_hdr *)skb->data; 4007 + vlan_tci = ntohs(vhdr->h_vlan_TCI); 4008 + __vlan_hwaccel_put_tag(skb, skb->protocol, vlan_tci); 4009 + 4010 + skb_pull_rcsum(skb, VLAN_HLEN); 4011 + vlan_set_encap_proto(skb, vhdr); 4012 + 4013 + skb = skb_reorder_vlan_header(skb); 4014 + if (unlikely(!skb)) 4015 + goto err_free; 4016 + 4017 + skb_reset_network_header(skb); 4018 + skb_reset_transport_header(skb); 4019 + skb_reset_mac_len(skb); 4020 + 4021 + return skb; 4022 + 4023 + err_free: 4024 + kfree_skb(skb); 4025 + return NULL; 4026 + } 4027 + EXPORT_SYMBOL(skb_vlan_untag);
-2
net/ipv4/route.c
··· 1798 1798 no_route: 1799 1799 RT_CACHE_STAT_INC(in_no_route); 1800 1800 res.type = RTN_UNREACHABLE; 1801 - if (err == -ESRCH) 1802 - err = -ENETUNREACH; 1803 1801 goto local_input; 1804 1802 1805 1803 /*
+2 -9
net/netfilter/core.c
··· 35 35 36 36 int nf_register_afinfo(const struct nf_afinfo *afinfo) 37 37 { 38 - int err; 39 - 40 - err = mutex_lock_interruptible(&afinfo_mutex); 41 - if (err < 0) 42 - return err; 38 + mutex_lock(&afinfo_mutex); 43 39 RCU_INIT_POINTER(nf_afinfo[afinfo->family], afinfo); 44 40 mutex_unlock(&afinfo_mutex); 45 41 return 0; ··· 64 68 int nf_register_hook(struct nf_hook_ops *reg) 65 69 { 66 70 struct nf_hook_ops *elem; 67 - int err; 68 71 69 - err = mutex_lock_interruptible(&nf_hook_mutex); 70 - if (err < 0) 71 - return err; 72 + mutex_lock(&nf_hook_mutex); 72 73 list_for_each_entry(elem, &nf_hooks[reg->pf][reg->hooknum], list) { 73 74 if (reg->priority < elem->priority) 74 75 break;
+4 -15
net/netfilter/ipvs/ip_vs_ctl.c
··· 2271 2271 cmd == IP_VS_SO_SET_STOPDAEMON) { 2272 2272 struct ip_vs_daemon_user *dm = (struct ip_vs_daemon_user *)arg; 2273 2273 2274 - if (mutex_lock_interruptible(&ipvs->sync_mutex)) { 2275 - ret = -ERESTARTSYS; 2276 - goto out_dec; 2277 - } 2274 + mutex_lock(&ipvs->sync_mutex); 2278 2275 if (cmd == IP_VS_SO_SET_STARTDAEMON) 2279 2276 ret = start_sync_thread(net, dm->state, dm->mcast_ifn, 2280 2277 dm->syncid); ··· 2281 2284 goto out_dec; 2282 2285 } 2283 2286 2284 - if (mutex_lock_interruptible(&__ip_vs_mutex)) { 2285 - ret = -ERESTARTSYS; 2286 - goto out_dec; 2287 - } 2288 - 2287 + mutex_lock(&__ip_vs_mutex); 2289 2288 if (cmd == IP_VS_SO_SET_FLUSH) { 2290 2289 /* Flush the virtual service */ 2291 2290 ret = ip_vs_flush(net, false); ··· 2566 2573 struct ip_vs_daemon_user d[2]; 2567 2574 2568 2575 memset(&d, 0, sizeof(d)); 2569 - if (mutex_lock_interruptible(&ipvs->sync_mutex)) 2570 - return -ERESTARTSYS; 2571 - 2576 + mutex_lock(&ipvs->sync_mutex); 2572 2577 if (ipvs->sync_state & IP_VS_STATE_MASTER) { 2573 2578 d[0].state = IP_VS_STATE_MASTER; 2574 2579 strlcpy(d[0].mcast_ifn, ipvs->master_mcast_ifn, ··· 2585 2594 return ret; 2586 2595 } 2587 2596 2588 - if (mutex_lock_interruptible(&__ip_vs_mutex)) 2589 - return -ERESTARTSYS; 2590 - 2597 + mutex_lock(&__ip_vs_mutex); 2591 2598 switch (cmd) { 2592 2599 case IP_VS_SO_GET_VERSION: 2593 2600 {
+2 -6
net/netfilter/nf_sockopt.c
··· 26 26 struct nf_sockopt_ops *ops; 27 27 int ret = 0; 28 28 29 - if (mutex_lock_interruptible(&nf_sockopt_mutex) != 0) 30 - return -EINTR; 31 - 29 + mutex_lock(&nf_sockopt_mutex); 32 30 list_for_each_entry(ops, &nf_sockopts, list) { 33 31 if (ops->pf == reg->pf 34 32 && (overlap(ops->set_optmin, ops->set_optmax, ··· 63 65 { 64 66 struct nf_sockopt_ops *ops; 65 67 66 - if (mutex_lock_interruptible(&nf_sockopt_mutex) != 0) 67 - return ERR_PTR(-EINTR); 68 - 68 + mutex_lock(&nf_sockopt_mutex); 69 69 list_for_each_entry(ops, &nf_sockopts, list) { 70 70 if (ops->pf == pf) { 71 71 if (!try_module_get(ops->owner))
+17 -13
net/netfilter/nf_tables_api.c
··· 899 899 static void nft_chain_stats_replace(struct nft_base_chain *chain, 900 900 struct nft_stats __percpu *newstats) 901 901 { 902 + if (newstats == NULL) 903 + return; 904 + 902 905 if (chain->stats) { 903 906 struct nft_stats __percpu *oldstats = 904 907 nft_dereference(chain->stats); ··· 3137 3134 goto err2; 3138 3135 3139 3136 trans = nft_trans_elem_alloc(ctx, NFT_MSG_DELSETELEM, set); 3140 - if (trans == NULL) 3137 + if (trans == NULL) { 3138 + err = -ENOMEM; 3141 3139 goto err2; 3140 + } 3142 3141 3143 3142 nft_trans_elem(trans) = elem; 3144 3143 list_add_tail(&trans->list, &ctx->net->nft.commit_list); 3145 - 3146 - nft_data_uninit(&elem.key, NFT_DATA_VALUE); 3147 - if (set->flags & NFT_SET_MAP) 3148 - nft_data_uninit(&elem.data, set->dtype); 3149 - 3150 3144 return 0; 3151 3145 err2: 3152 3146 nft_data_uninit(&elem.key, desc.type); ··· 3310 3310 { 3311 3311 struct net *net = sock_net(skb->sk); 3312 3312 struct nft_trans *trans, *next; 3313 - struct nft_set *set; 3313 + struct nft_trans_elem *te; 3314 3314 3315 3315 /* Bump generation counter, invalidate any dump in progress */ 3316 3316 while (++net->nft.base_seq == 0); ··· 3396 3396 nft_trans_destroy(trans); 3397 3397 break; 3398 3398 case NFT_MSG_DELSETELEM: 3399 - nf_tables_setelem_notify(&trans->ctx, 3400 - nft_trans_elem_set(trans), 3401 - &nft_trans_elem(trans), 3399 + te = (struct nft_trans_elem *)trans->data; 3400 + nf_tables_setelem_notify(&trans->ctx, te->set, 3401 + &te->elem, 3402 3402 NFT_MSG_DELSETELEM, 0); 3403 - set = nft_trans_elem_set(trans); 3404 - set->ops->get(set, &nft_trans_elem(trans)); 3405 - set->ops->remove(set, &nft_trans_elem(trans)); 3403 + te->set->ops->get(te->set, &te->elem); 3404 + te->set->ops->remove(te->set, &te->elem); 3405 + nft_data_uninit(&te->elem.key, NFT_DATA_VALUE); 3406 + if (te->elem.flags & NFT_SET_MAP) { 3407 + nft_data_uninit(&te->elem.data, 3408 + te->set->dtype); 3409 + } 3406 3410 nft_trans_destroy(trans); 3407 3411 break; 3408 3412 }
+12 -35
net/netfilter/x_tables.c
··· 71 71 static const unsigned int xt_jumpstack_multiplier = 2; 72 72 73 73 /* Registration hooks for targets. */ 74 - int 75 - xt_register_target(struct xt_target *target) 74 + int xt_register_target(struct xt_target *target) 76 75 { 77 76 u_int8_t af = target->family; 78 - int ret; 79 77 80 - ret = mutex_lock_interruptible(&xt[af].mutex); 81 - if (ret != 0) 82 - return ret; 78 + mutex_lock(&xt[af].mutex); 83 79 list_add(&target->list, &xt[af].target); 84 80 mutex_unlock(&xt[af].mutex); 85 - return ret; 81 + return 0; 86 82 } 87 83 EXPORT_SYMBOL(xt_register_target); 88 84 ··· 121 125 } 122 126 EXPORT_SYMBOL(xt_unregister_targets); 123 127 124 - int 125 - xt_register_match(struct xt_match *match) 128 + int xt_register_match(struct xt_match *match) 126 129 { 127 130 u_int8_t af = match->family; 128 - int ret; 129 131 130 - ret = mutex_lock_interruptible(&xt[af].mutex); 131 - if (ret != 0) 132 - return ret; 133 - 132 + mutex_lock(&xt[af].mutex); 134 133 list_add(&match->list, &xt[af].match); 135 134 mutex_unlock(&xt[af].mutex); 136 - 137 - return ret; 135 + return 0; 138 136 } 139 137 EXPORT_SYMBOL(xt_register_match); 140 138 ··· 184 194 struct xt_match *m; 185 195 int err = -ENOENT; 186 196 187 - if (mutex_lock_interruptible(&xt[af].mutex) != 0) 188 - return ERR_PTR(-EINTR); 189 - 197 + mutex_lock(&xt[af].mutex); 190 198 list_for_each_entry(m, &xt[af].match, list) { 191 199 if (strcmp(m->name, name) == 0) { 192 200 if (m->revision == revision) { ··· 227 239 struct xt_target *t; 228 240 int err = -ENOENT; 229 241 230 - if (mutex_lock_interruptible(&xt[af].mutex) != 0) 231 - return ERR_PTR(-EINTR); 232 - 242 + mutex_lock(&xt[af].mutex); 233 243 list_for_each_entry(t, &xt[af].target, list) { 234 244 if (strcmp(t->name, name) == 0) { 235 245 if (t->revision == revision) { ··· 309 323 { 310 324 int have_rev, best = -1; 311 325 312 - if (mutex_lock_interruptible(&xt[af].mutex) != 0) { 313 - *err = -EINTR; 314 - return 1; 315 - } 326 + mutex_lock(&xt[af].mutex); 316 327 
if (target == 1) 317 328 have_rev = target_revfn(af, name, revision, &best); 318 329 else ··· 715 732 { 716 733 struct xt_table *t; 717 734 718 - if (mutex_lock_interruptible(&xt[af].mutex) != 0) 719 - return ERR_PTR(-EINTR); 720 - 735 + mutex_lock(&xt[af].mutex); 721 736 list_for_each_entry(t, &net->xt.tables[af], list) 722 737 if (strcmp(t->name, name) == 0 && try_module_get(t->me)) 723 738 return t; ··· 864 883 goto out; 865 884 } 866 885 867 - ret = mutex_lock_interruptible(&xt[table->af].mutex); 868 - if (ret != 0) 869 - goto out_free; 870 - 886 + mutex_lock(&xt[table->af].mutex); 871 887 /* Don't autoload: we'd eat our tail... */ 872 888 list_for_each_entry(t, &net->xt.tables[table->af], list) { 873 889 if (strcmp(t->name, table->name) == 0) { ··· 889 911 mutex_unlock(&xt[table->af].mutex); 890 912 return table; 891 913 892 - unlock: 914 + unlock: 893 915 mutex_unlock(&xt[table->af].mutex); 894 - out_free: 895 916 kfree(table); 896 917 out: 897 918 return ERR_PTR(ret);
+1 -1
net/netlink/af_netlink.c
··· 213 213 nskb->protocol = htons((u16) sk->sk_protocol); 214 214 nskb->pkt_type = netlink_is_kernel(sk) ? 215 215 PACKET_KERNEL : PACKET_USER; 216 - 216 + skb_reset_network_header(nskb); 217 217 ret = dev_queue_xmit(nskb); 218 218 if (unlikely(ret > 0)) 219 219 ret = net_xmit_errno(ret);
-2
net/openvswitch/datapath.c
··· 47 47 #include <linux/openvswitch.h> 48 48 #include <linux/rculist.h> 49 49 #include <linux/dmi.h> 50 - #include <linux/genetlink.h> 51 - #include <net/genetlink.h> 52 50 #include <net/genetlink.h> 53 51 #include <net/net_namespace.h> 54 52 #include <net/netns/generic.h>