
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Pull networking fixes from David Miller:

1) Fix list tests in netfilter ingress support, from Florian Westphal.

2) Fix reversal of input and output interfaces in ingress hook
invocation, from Pablo Neira Ayuso.

3) We have a use after free in r8169, caught by Dave Jones, fixed by
Francois Romieu.

4) Splice use-after-free fix in AF_UNIX from Hannes Frederic Sowa.

5) Three ipv6 route handling bug fixes from Martin KaFai Lau:
a) Don't create clone routes not managed by the fib6 tree
b) Don't forget to check expiration of DST_NOCACHE routes.
c) Handle rt->dst.from == NULL properly.

6) Several AF_PACKET fixes wrt transport header setting and SKB
protocol setting, from Daniel Borkmann.

7) Fix thunder driver crash on shutdown, from Pavel Fedin.

8) Several Mellanox driver fixes (max MTU calculations, use of correct
DMA unmap in TX path, etc.) from Saeed Mahameed, Tariq Toukan, Doron
Tsur, Achiad Shochat, Eran Ben Elisha, and Noa Osherovich.

9) Several mv88e6060 DSA driver fixes (wrong bit definitions for
certain registers, etc.) from Neil Armstrong.

10) Make sure to disable preemption while updating per-cpu stats of ip
tunnels, from Jason A. Donenfeld.

11) Various ARM64 bpf JIT fixes, from Yang Shi.

12) Flush icache properly in ARM JITs, from Daniel Borkmann.

13) Fix masking of RX and TX interrupts in ravb driver, from Masaru
Nagai.

14) Fix netdev feature propagation for devices not implementing
->ndo_set_features(). From Nikolay Aleksandrov.

15) Big endian fix in vmxnet3 driver, from Shrikrishna Khare.

16) RAW socket code increments incorrect SNMP counters, fix from Ben
Cartwright-Cox.

17) IPv6 multicast SNMP counters are bumped twice, fix from Neil Horman.

18) Fix handling of VLAN headers on stacked devices when REORDER is
disabled. From Vlad Yasevich.

19) Fix SKB leaks and use-after-free in ipvlan and macvlan drivers, from
Sabrina Dubroca.

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (83 commits)
MAINTAINERS: Update Mellanox's Eth NIC driver entries
net/core: revert "net: fix __netdev_update_features return.." and add comment
af_unix: take receive queue lock while appending new skb
rtnetlink: fix frame size warning in rtnl_fill_ifinfo
net: use skb_clone to avoid alloc_pages failure.
packet: Use PAGE_ALIGNED macro
packet: Don't check frames_per_block against negative values
net: phy: Use interrupts when available in NOLINK state
phy: marvell: Add support for 88E1540 PHY
arm64: bpf: make BPF prologue and epilogue align with ARM64 AAPCS
macvlan: fix leak in macvlan_handle_frame
ipvlan: fix use after free of skb
ipvlan: fix leak in ipvlan_rcv_frame
vlan: Do not put vlan headers back on bridge and macvlan ports
vlan: Fix untag operations of stacked vlans with REORDER_HEADER off
via-velocity: unconditionally drop frames with bad l2 length
ipg: Remove ipg driver
dl2k: Add support for IP1000A-based cards
snmp: Remove duplicate OUTMCAST stat increment
net: thunder: Check for driver data in nicvf_remove()
...

Overall: +1038 -3652

MAINTAINERS (+9 -8)
···
 S:	Maintained
 F:	net/ipv4/netfilter/ipt_MASQUERADE.c
 
-IP1000A 10/100/1000 GIGABIT ETHERNET DRIVER
-M:	Francois Romieu <romieu@fr.zoreil.com>
-M:	Sorbica Shieh <sorbica@icplus.com.tw>
-L:	netdev@vger.kernel.org
-S:	Maintained
-F:	drivers/net/ethernet/icplus/ipg.*
-
 IPATH DRIVER
 M:	Mike Marciniszyn <infinipath@intel.com>
 L:	linux-rdma@vger.kernel.org
···
 F:	drivers/scsi/megaraid/
 
 MELLANOX ETHERNET DRIVER (mlx4_en)
-M:	Amir Vadai <amirv@mellanox.com>
+M:	Eugenia Emantayev <eugenia@mellanox.com>
 L:	netdev@vger.kernel.org
 S:	Supported
 W:	http://www.mellanox.com
 Q:	http://patchwork.ozlabs.org/project/netdev/list/
 F:	drivers/net/ethernet/mellanox/mlx4/en_*
+
+MELLANOX ETHERNET DRIVER (mlx5e)
+M:	Saeed Mahameed <saeedm@mellanox.com>
+L:	netdev@vger.kernel.org
+S:	Supported
+W:	http://www.mellanox.com
+Q:	http://patchwork.ozlabs.org/project/netdev/list/
+F:	drivers/net/ethernet/mellanox/mlx5/core/en_*
 
 MELLANOX ETHERNET SWITCH DRIVERS
 M:	Jiri Pirko <jiri@mellanox.com>
arch/arm/net/bpf_jit_32.c (+1 -1)
···
 	}
 	build_epilogue(&ctx);
 
-	flush_icache_range((u32)ctx.target, (u32)(ctx.target + ctx.idx));
+	flush_icache_range((u32)header, (u32)(ctx.target + ctx.idx));
 
 #if __LINUX_ARM_ARCH__ < 7
 	if (ctx.imm_count)
arch/arm64/net/bpf_jit_comp.c (+41 -7)
···
 	[BPF_REG_8] = A64_R(21),
 	[BPF_REG_9] = A64_R(22),
 	/* read-only frame pointer to access stack */
-	[BPF_REG_FP] = A64_FP,
+	[BPF_REG_FP] = A64_R(25),
 	/* temporary register for internal BPF JIT */
 	[TMP_REG_1] = A64_R(23),
 	[TMP_REG_2] = A64_R(24),
···
 	stack_size += 4; /* extra for skb_copy_bits buffer */
 	stack_size = STACK_ALIGN(stack_size);
 
+	/*
+	 * BPF prog stack layout
+	 *
+	 *                         high
+	 * original A64_SP =>   0:+-----+ BPF prologue
+	 *                        |FP/LR|
+	 * current A64_FP =>  -16:+-----+
+	 *                        | ... | callee saved registers
+	 *                        +-----+
+	 *                        |     | x25/x26
+	 * BPF fp register => -80:+-----+
+	 *                        |     |
+	 *                        | ... | BPF prog stack
+	 *                        |     |
+	 *                        |     |
+	 * current A64_SP =>      +-----+
+	 *                        |     |
+	 *                        | ... | Function call stack
+	 *                        |     |
+	 *                        +-----+
+	 *                          low
+	 *
+	 */
+
+	/* Save FP and LR registers to stay align with ARM64 AAPCS */
+	emit(A64_PUSH(A64_FP, A64_LR, A64_SP), ctx);
+	emit(A64_MOV(1, A64_FP, A64_SP), ctx);
+
 	/* Save callee-saved register */
 	emit(A64_PUSH(r6, r7, A64_SP), ctx);
 	emit(A64_PUSH(r8, r9, A64_SP), ctx);
 	if (ctx->tmp_used)
 		emit(A64_PUSH(tmp1, tmp2, A64_SP), ctx);
 
-	/* Set up BPF stack */
-	emit(A64_SUB_I(1, A64_SP, A64_SP, stack_size), ctx);
+	/* Save fp (x25) and x26. SP requires 16 bytes alignment */
+	emit(A64_PUSH(fp, A64_R(26), A64_SP), ctx);
 
-	/* Set up frame pointer */
+	/* Set up BPF prog stack base register (x25) */
 	emit(A64_MOV(1, fp, A64_SP), ctx);
+
+	/* Set up function call stack */
+	emit(A64_SUB_I(1, A64_SP, A64_SP, stack_size), ctx);
 
 	/* Clear registers A and X */
 	emit_a64_mov_i64(ra, 0, ctx);
···
 	/* We're done with BPF stack */
 	emit(A64_ADD_I(1, A64_SP, A64_SP, stack_size), ctx);
 
+	/* Restore fs (x25) and x26 */
+	emit(A64_POP(fp, A64_R(26), A64_SP), ctx);
+
 	/* Restore callee-saved register */
 	if (ctx->tmp_used)
 		emit(A64_POP(tmp1, tmp2, A64_SP), ctx);
 	emit(A64_POP(r8, r9, A64_SP), ctx);
 	emit(A64_POP(r6, r7, A64_SP), ctx);
 
-	/* Restore frame pointer */
-	emit(A64_MOV(1, fp, A64_SP), ctx);
+	/* Restore FP/LR registers */
+	emit(A64_POP(A64_FP, A64_LR, A64_SP), ctx);
 
 	/* Set return value */
 	emit(A64_MOV(1, A64_R(0), r0), ctx);
···
 	if (bpf_jit_enable > 1)
 		bpf_jit_dump(prog->len, image_size, 2, ctx.image);
 
-	bpf_flush_icache(ctx.image, ctx.image + ctx.idx);
+	bpf_flush_icache(header, ctx.image + ctx.idx);
 
 	set_memory_ro((unsigned long)header, header->pages);
 	prog->bpf_func = (void *)ctx.image;
drivers/net/dsa/mv88e6060.c (+38 -76)
···
 #include <linux/netdevice.h>
 #include <linux/phy.h>
 #include <net/dsa.h>
-
-#define REG_PORT(p)		(8 + (p))
-#define REG_GLOBAL		0x0f
+#include "mv88e6060.h"
 
 static int reg_read(struct dsa_switch *ds, int addr, int reg)
 {
···
 	if (bus == NULL)
 		return NULL;
 
-	ret = mdiobus_read(bus, sw_addr + REG_PORT(0), 0x03);
+	ret = mdiobus_read(bus, sw_addr + REG_PORT(0), PORT_SWITCH_ID);
 	if (ret >= 0) {
-		if (ret == 0x0600)
+		if (ret == PORT_SWITCH_ID_6060)
 			return "Marvell 88E6060 (A0)";
-		if (ret == 0x0601 || ret == 0x0602)
+		if (ret == PORT_SWITCH_ID_6060_R1 ||
+		    ret == PORT_SWITCH_ID_6060_R2)
 			return "Marvell 88E6060 (B0)";
-		if ((ret & 0xfff0) == 0x0600)
+		if ((ret & PORT_SWITCH_ID_6060_MASK) == PORT_SWITCH_ID_6060)
 			return "Marvell 88E6060";
 	}
 
···
 	unsigned long timeout;
 
 	/* Set all ports to the disabled state. */
-	for (i = 0; i < 6; i++) {
-		ret = REG_READ(REG_PORT(i), 0x04);
-		REG_WRITE(REG_PORT(i), 0x04, ret & 0xfffc);
+	for (i = 0; i < MV88E6060_PORTS; i++) {
+		ret = REG_READ(REG_PORT(i), PORT_CONTROL);
+		REG_WRITE(REG_PORT(i), PORT_CONTROL,
+			  ret & ~PORT_CONTROL_STATE_MASK);
 	}
 
 	/* Wait for transmit queues to drain. */
 	usleep_range(2000, 4000);
 
 	/* Reset the switch. */
-	REG_WRITE(REG_GLOBAL, 0x0a, 0xa130);
+	REG_WRITE(REG_GLOBAL, GLOBAL_ATU_CONTROL,
+		  GLOBAL_ATU_CONTROL_SWRESET |
+		  GLOBAL_ATU_CONTROL_ATUSIZE_1024 |
+		  GLOBAL_ATU_CONTROL_ATE_AGE_5MIN);
 
 	/* Wait up to one second for reset to complete. */
 	timeout = jiffies + 1 * HZ;
 	while (time_before(jiffies, timeout)) {
-		ret = REG_READ(REG_GLOBAL, 0x00);
-		if ((ret & 0x8000) == 0x0000)
+		ret = REG_READ(REG_GLOBAL, GLOBAL_STATUS);
+		if (ret & GLOBAL_STATUS_INIT_READY)
 			break;
 
 		usleep_range(1000, 2000);
···
 	 * set the maximum frame size to 1536 bytes, and mask all
 	 * interrupt sources.
 	 */
-	REG_WRITE(REG_GLOBAL, 0x04, 0x0800);
+	REG_WRITE(REG_GLOBAL, GLOBAL_CONTROL, GLOBAL_CONTROL_MAX_FRAME_1536);
 
 	/* Enable automatic address learning, set the address
 	 * database size to 1024 entries, and set the default aging
 	 * time to 5 minutes.
 	 */
-	REG_WRITE(REG_GLOBAL, 0x0a, 0x2130);
+	REG_WRITE(REG_GLOBAL, GLOBAL_ATU_CONTROL,
+		  GLOBAL_ATU_CONTROL_ATUSIZE_1024 |
+		  GLOBAL_ATU_CONTROL_ATE_AGE_5MIN);
 
 	return 0;
 }
···
 	 * state to Forwarding.  Additionally, if this is the CPU
 	 * port, enable Ingress and Egress Trailer tagging mode.
 	 */
-	REG_WRITE(addr, 0x04, dsa_is_cpu_port(ds, p) ? 0x4103 : 0x0003);
+	REG_WRITE(addr, PORT_CONTROL,
+		  dsa_is_cpu_port(ds, p) ?
+			PORT_CONTROL_TRAILER |
+			PORT_CONTROL_INGRESS_MODE |
+			PORT_CONTROL_STATE_FORWARDING :
+			PORT_CONTROL_STATE_FORWARDING);
 
 	/* Port based VLAN map: give each port its own address
 	 * database, allow the CPU port to talk to each of the 'real'
 	 * ports, and allow each of the 'real' ports to only talk to
 	 * the CPU port.
 	 */
-	REG_WRITE(addr, 0x06,
-		  ((p & 0xf) << 12) |
-		  (dsa_is_cpu_port(ds, p) ?
-			ds->phys_port_mask :
-			(1 << ds->dst->cpu_port)));
+	REG_WRITE(addr, PORT_VLAN_MAP,
+		  ((p & 0xf) << PORT_VLAN_MAP_DBNUM_SHIFT) |
+		  (dsa_is_cpu_port(ds, p) ?
+			ds->phys_port_mask :
+			BIT(ds->dst->cpu_port)));
 
 	/* Port Association Vector: when learning source addresses
 	 * of packets, add the address to the address database using
 	 * a port bitmap that has only the bit for this port set and
 	 * the other bits clear.
 	 */
-	REG_WRITE(addr, 0x0b, 1 << p);
+	REG_WRITE(addr, PORT_ASSOC_VECTOR, BIT(p));
 
 	return 0;
 }
···
 	if (ret < 0)
 		return ret;
 
-	for (i = 0; i < 6; i++) {
+	for (i = 0; i < MV88E6060_PORTS; i++) {
 		ret = mv88e6060_setup_port(ds, i);
 		if (ret < 0)
 			return ret;
···
 static int mv88e6060_set_addr(struct dsa_switch *ds, u8 *addr)
 {
-	REG_WRITE(REG_GLOBAL, 0x01, (addr[0] << 8) | addr[1]);
-	REG_WRITE(REG_GLOBAL, 0x02, (addr[2] << 8) | addr[3]);
-	REG_WRITE(REG_GLOBAL, 0x03, (addr[4] << 8) | addr[5]);
+	/* Use the same MAC Address as FD Pause frames for all ports */
+	REG_WRITE(REG_GLOBAL, GLOBAL_MAC_01, (addr[0] << 9) | addr[1]);
+	REG_WRITE(REG_GLOBAL, GLOBAL_MAC_23, (addr[2] << 8) | addr[3]);
+	REG_WRITE(REG_GLOBAL, GLOBAL_MAC_45, (addr[4] << 8) | addr[5]);
 
 	return 0;
 }
 
 static int mv88e6060_port_to_phy_addr(int port)
 {
-	if (port >= 0 && port <= 5)
+	if (port >= 0 && port < MV88E6060_PORTS)
 		return port;
 	return -1;
 }
···
 	return reg_write(ds, addr, regnum, val);
 }
 
-static void mv88e6060_poll_link(struct dsa_switch *ds)
-{
-	int i;
-
-	for (i = 0; i < DSA_MAX_PORTS; i++) {
-		struct net_device *dev;
-		int uninitialized_var(port_status);
-		int link;
-		int speed;
-		int duplex;
-		int fc;
-
-		dev = ds->ports[i];
-		if (dev == NULL)
-			continue;
-
-		link = 0;
-		if (dev->flags & IFF_UP) {
-			port_status = reg_read(ds, REG_PORT(i), 0x00);
-			if (port_status < 0)
-				continue;
-
-			link = !!(port_status & 0x1000);
-		}
-
-		if (!link) {
-			if (netif_carrier_ok(dev)) {
-				netdev_info(dev, "link down\n");
-				netif_carrier_off(dev);
-			}
-			continue;
-		}
-
-		speed = (port_status & 0x0100) ? 100 : 10;
-		duplex = (port_status & 0x0200) ? 1 : 0;
-		fc = ((port_status & 0xc000) == 0xc000) ? 1 : 0;
-
-		if (!netif_carrier_ok(dev)) {
-			netdev_info(dev,
-				    "link up, %d Mb/s, %s duplex, flow control %sabled\n",
-				    speed,
-				    duplex ? "full" : "half",
-				    fc ? "en" : "dis");
-			netif_carrier_on(dev);
-		}
-	}
-}
-
 static struct dsa_switch_driver mv88e6060_switch_driver = {
 	.tag_protocol	= DSA_TAG_PROTO_TRAILER,
 	.probe		= mv88e6060_probe,
···
 	.set_addr	= mv88e6060_set_addr,
 	.phy_read	= mv88e6060_phy_read,
 	.phy_write	= mv88e6060_phy_write,
-	.poll_link	= mv88e6060_poll_link,
 };
 
 static int __init mv88e6060_init(void)
drivers/net/dsa/mv88e6060.h (+111, new file)
···
+/*
+ * drivers/net/dsa/mv88e6060.h - Marvell 88e6060 switch chip support
+ * Copyright (c) 2015 Neil Armstrong
+ *
+ * Based on mv88e6xxx.h
+ * Copyright (c) 2008 Marvell Semiconductor
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#ifndef __MV88E6060_H
+#define __MV88E6060_H
+
+#define MV88E6060_PORTS	6
+
+#define REG_PORT(p)		(0x8 + (p))
+#define PORT_STATUS		0x00
+#define PORT_STATUS_PAUSE_EN	BIT(15)
+#define PORT_STATUS_MY_PAUSE	BIT(14)
+#define PORT_STATUS_FC		(PORT_STATUS_MY_PAUSE | PORT_STATUS_PAUSE_EN)
+#define PORT_STATUS_RESOLVED	BIT(13)
+#define PORT_STATUS_LINK	BIT(12)
+#define PORT_STATUS_PORTMODE	BIT(11)
+#define PORT_STATUS_PHYMODE	BIT(10)
+#define PORT_STATUS_DUPLEX	BIT(9)
+#define PORT_STATUS_SPEED	BIT(8)
+#define PORT_SWITCH_ID		0x03
+#define PORT_SWITCH_ID_6060	0x0600
+#define PORT_SWITCH_ID_6060_MASK	0xfff0
+#define PORT_SWITCH_ID_6060_R1	0x0601
+#define PORT_SWITCH_ID_6060_R2	0x0602
+#define PORT_CONTROL		0x04
+#define PORT_CONTROL_FORCE_FLOW_CTRL	BIT(15)
+#define PORT_CONTROL_TRAILER	BIT(14)
+#define PORT_CONTROL_HEADER	BIT(11)
+#define PORT_CONTROL_INGRESS_MODE	BIT(8)
+#define PORT_CONTROL_VLAN_TUNNEL	BIT(7)
+#define PORT_CONTROL_STATE_MASK	0x03
+#define PORT_CONTROL_STATE_DISABLED	0x00
+#define PORT_CONTROL_STATE_BLOCKING	0x01
+#define PORT_CONTROL_STATE_LEARNING	0x02
+#define PORT_CONTROL_STATE_FORWARDING	0x03
+#define PORT_VLAN_MAP		0x06
+#define PORT_VLAN_MAP_DBNUM_SHIFT	12
+#define PORT_VLAN_MAP_TABLE_MASK	0x1f
+#define PORT_ASSOC_VECTOR	0x0b
+#define PORT_ASSOC_VECTOR_MONITOR	BIT(15)
+#define PORT_ASSOC_VECTOR_PAV_MASK	0x1f
+#define PORT_RX_CNTR		0x10
+#define PORT_TX_CNTR		0x11
+
+#define REG_GLOBAL		0x0f
+#define GLOBAL_STATUS		0x00
+#define GLOBAL_STATUS_SW_MODE_MASK	(0x3 << 12)
+#define GLOBAL_STATUS_SW_MODE_0	(0x0 << 12)
+#define GLOBAL_STATUS_SW_MODE_1	(0x1 << 12)
+#define GLOBAL_STATUS_SW_MODE_2	(0x2 << 12)
+#define GLOBAL_STATUS_SW_MODE_3	(0x3 << 12)
+#define GLOBAL_STATUS_INIT_READY	BIT(11)
+#define GLOBAL_STATUS_ATU_FULL	BIT(3)
+#define GLOBAL_STATUS_ATU_DONE	BIT(2)
+#define GLOBAL_STATUS_PHY_INT	BIT(1)
+#define GLOBAL_STATUS_EEINT	BIT(0)
+#define GLOBAL_MAC_01		0x01
+#define GLOBAL_MAC_01_DIFF_ADDR	BIT(8)
+#define GLOBAL_MAC_23		0x02
+#define GLOBAL_MAC_45		0x03
+#define GLOBAL_CONTROL		0x04
+#define GLOBAL_CONTROL_DISCARD_EXCESS	BIT(13)
+#define GLOBAL_CONTROL_MAX_FRAME_1536	BIT(10)
+#define GLOBAL_CONTROL_RELOAD_EEPROM	BIT(9)
+#define GLOBAL_CONTROL_CTRMODE	BIT(8)
+#define GLOBAL_CONTROL_ATU_FULL_EN	BIT(3)
+#define GLOBAL_CONTROL_ATU_DONE_EN	BIT(2)
+#define GLOBAL_CONTROL_PHYINT_EN	BIT(1)
+#define GLOBAL_CONTROL_EEPROM_DONE_EN	BIT(0)
+#define GLOBAL_ATU_CONTROL	0x0a
+#define GLOBAL_ATU_CONTROL_SWRESET	BIT(15)
+#define GLOBAL_ATU_CONTROL_LEARNDIS	BIT(14)
+#define GLOBAL_ATU_CONTROL_ATUSIZE_256	(0x0 << 12)
+#define GLOBAL_ATU_CONTROL_ATUSIZE_512	(0x1 << 12)
+#define GLOBAL_ATU_CONTROL_ATUSIZE_1024	(0x2 << 12)
+#define GLOBAL_ATU_CONTROL_ATE_AGE_SHIFT	4
+#define GLOBAL_ATU_CONTROL_ATE_AGE_MASK	(0xff << 4)
+#define GLOBAL_ATU_CONTROL_ATE_AGE_5MIN	(0x13 << 4)
+#define GLOBAL_ATU_OP		0x0b
+#define GLOBAL_ATU_OP_BUSY	BIT(15)
+#define GLOBAL_ATU_OP_NOP		(0 << 12)
+#define GLOBAL_ATU_OP_FLUSH_ALL		((1 << 12) | GLOBAL_ATU_OP_BUSY)
+#define GLOBAL_ATU_OP_FLUSH_UNLOCKED	((2 << 12) | GLOBAL_ATU_OP_BUSY)
+#define GLOBAL_ATU_OP_LOAD_DB		((3 << 12) | GLOBAL_ATU_OP_BUSY)
+#define GLOBAL_ATU_OP_GET_NEXT_DB	((4 << 12) | GLOBAL_ATU_OP_BUSY)
+#define GLOBAL_ATU_OP_FLUSH_DB		((5 << 12) | GLOBAL_ATU_OP_BUSY)
+#define GLOBAL_ATU_OP_FLUSH_UNLOCKED_DB	((6 << 12) | GLOBAL_ATU_OP_BUSY)
+#define GLOBAL_ATU_DATA		0x0c
+#define GLOBAL_ATU_DATA_PORT_VECTOR_MASK	0x3f0
+#define GLOBAL_ATU_DATA_PORT_VECTOR_SHIFT	4
+#define GLOBAL_ATU_DATA_STATE_MASK	0x0f
+#define GLOBAL_ATU_DATA_STATE_UNUSED	0x00
+#define GLOBAL_ATU_DATA_STATE_UC_STATIC	0x0e
+#define GLOBAL_ATU_DATA_STATE_UC_LOCKED	0x0f
+#define GLOBAL_ATU_DATA_STATE_MC_STATIC	0x07
+#define GLOBAL_ATU_DATA_STATE_MC_LOCKED	0x0e
+#define GLOBAL_ATU_MAC_01	0x0d
+#define GLOBAL_ATU_MAC_23	0x0e
+#define GLOBAL_ATU_MAC_45	0x0f
+
+#endif
drivers/net/ethernet/Kconfig (-1)
···
 source "drivers/net/ethernet/intel/Kconfig"
 source "drivers/net/ethernet/i825xx/Kconfig"
 source "drivers/net/ethernet/xscale/Kconfig"
-source "drivers/net/ethernet/icplus/Kconfig"
 
 config JME
 	tristate "JMicron(R) PCI-Express Gigabit Ethernet support"
drivers/net/ethernet/Makefile (-1)
···
 obj-$(CONFIG_NET_VENDOR_INTEL) += intel/
 obj-$(CONFIG_NET_VENDOR_I825XX) += i825xx/
 obj-$(CONFIG_NET_VENDOR_XSCALE) += xscale/
-obj-$(CONFIG_IP1000) += icplus/
 obj-$(CONFIG_JME) += jme.o
 obj-$(CONFIG_KORINA) += korina.o
 obj-$(CONFIG_LANTIQ_ETOP) += lantiq_etop.o
drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c (+1 -1)
···
 	/* VF with OLD Hypervisor or old PF do not support filtering */
 	if (IS_PF(bp)) {
-		if (CHIP_IS_E1x(bp))
+		if (chip_is_e1x)
 			bp->accept_any_vlan = true;
 		else
 			dev->hw_features |= NETIF_F_HW_VLAN_CTAG_FILTER;
drivers/net/ethernet/cavium/liquidio/lio_main.c (+1 -1)
···
 #endif
 
 /* For PCI-E Advanced Error Recovery (AER) Interface */
-static struct pci_error_handlers liquidio_err_handler = {
+static const struct pci_error_handlers liquidio_err_handler = {
 	.error_detected = liquidio_pcie_error_detected,
 	.mmio_enabled	= liquidio_pcie_mmio_enabled,
 	.slot_reset	= liquidio_pcie_slot_reset,
drivers/net/ethernet/cavium/thunder/nicvf_main.c (+8 -2)
···
 static void nicvf_remove(struct pci_dev *pdev)
 {
 	struct net_device *netdev = pci_get_drvdata(pdev);
-	struct nicvf *nic = netdev_priv(netdev);
-	struct net_device *pnetdev = nic->pnicvf->netdev;
+	struct nicvf *nic;
+	struct net_device *pnetdev;
+
+	if (!netdev)
+		return;
+
+	nic = netdev_priv(netdev);
+	pnetdev = nic->pnicvf->netdev;
 
 	/* Check if this Qset is assigned to different VF.
 	 * If yes, clean primary and all secondary Qsets.
drivers/net/ethernet/dlink/Kconfig (+3 -2)
···
 if NET_VENDOR_DLINK
 
 config DL2K
-	tristate "DL2000/TC902x-based Gigabit Ethernet support"
+	tristate "DL2000/TC902x/IP1000A-based Gigabit Ethernet support"
 	depends on PCI
 	select CRC32
 	---help---
-	  This driver supports DL2000/TC902x-based Gigabit ethernet cards,
+	  This driver supports DL2000/TC902x/IP1000A-based Gigabit ethernet cards,
 	  which includes
 	  D-Link DGE-550T Gigabit Ethernet Adapter.
 	  D-Link DL2000-based Gigabit Ethernet Adapter.
 	  Sundance/Tamarack TC902x Gigabit Ethernet Adapter.
+	  ICPlus IP1000A-based cards
 
 	  To compile this driver as a module, choose M here: the
 	  module will be called dl2k.
drivers/net/ethernet/dlink/dl2k.c (+52 -3)
···
 	if (err)
 		goto err_out_unmap_rx;
 
+	if (np->chip_id == CHIP_IP1000A &&
+	    (np->pdev->revision == 0x40 || np->pdev->revision == 0x41)) {
+		/* PHY magic taken from ipg driver, undocumented registers */
+		mii_write(dev, np->phy_addr, 31, 0x0001);
+		mii_write(dev, np->phy_addr, 27, 0x01e0);
+		mii_write(dev, np->phy_addr, 31, 0x0002);
+		mii_write(dev, np->phy_addr, 27, 0xeb8e);
+		mii_write(dev, np->phy_addr, 31, 0x0000);
+		mii_write(dev, np->phy_addr, 30, 0x005e);
+		/* advertise 1000BASE-T half & full duplex, prefer MASTER */
+		mii_write(dev, np->phy_addr, MII_CTRL1000, 0x0700);
+	}
+
 	/* Fiber device? */
 	np->phy_media = (dr16(ASICCtrl) & PhyMedia) ? 1 : 0;
 	np->link_status = 0;
···
 	for (i = 0; i < 6; i++)
 		dev->dev_addr[i] = psrom->mac_addr[i];
 
+	if (np->chip_id == CHIP_IP1000A) {
+		np->led_mode = psrom->led_mode;
+		return 0;
+	}
+
 	if (np->pdev->vendor != PCI_VENDOR_ID_DLINK) {
 		return 0;
 	}
···
 	return 0;
 }
 
+static void rio_set_led_mode(struct net_device *dev)
+{
+	struct netdev_private *np = netdev_priv(dev);
+	void __iomem *ioaddr = np->ioaddr;
+	u32 mode;
+
+	if (np->chip_id != CHIP_IP1000A)
+		return;
+
+	mode = dr32(ASICCtrl);
+	mode &= ~(IPG_AC_LED_MODE_BIT_1 | IPG_AC_LED_MODE | IPG_AC_LED_SPEED);
+
+	if (np->led_mode & 0x01)
+		mode |= IPG_AC_LED_MODE;
+	if (np->led_mode & 0x02)
+		mode |= IPG_AC_LED_MODE_BIT_1;
+	if (np->led_mode & 0x08)
+		mode |= IPG_AC_LED_SPEED;
+
+	dw32(ASICCtrl, mode);
+}
+
 static int
 rio_open (struct net_device *dev)
 {
···
 	      GlobalReset | DMAReset | FIFOReset | NetworkReset | HostReset);
 	mdelay(10);
 
+	rio_set_led_mode(dev);
+
 	/* DebugCtrl bit 4, 5, 9 must set */
 	dw32(DebugCtrl, dr32(DebugCtrl) | 0x0230);
···
 	alloc_list (dev);
 
-	/* Get station address */
-	for (i = 0; i < 6; i++)
-		dw8(StationAddr0 + i, dev->dev_addr[i]);
+	/* Set station address */
+	/* 16 or 32-bit access is required by TC9020 datasheet but 8-bit works
+	 * too. However, it doesn't work on IP1000A so we use 16-bit access.
+	 */
+	for (i = 0; i < 3; i++)
+		dw16(StationAddr0 + 2 * i,
+		     cpu_to_le16(((u16 *)dev->dev_addr)[i]));
 
 	set_multicast (dev);
 	if (np->coalesce) {
···
 			break;
 		mdelay (1);
 	}
+	rio_set_led_mode(dev);
 	rio_free_tx (dev, 1);
 	/* Reset TFDListPtr */
 	dw32(TFDListPtr0, np->tx_ring_dma +
···
 			break;
 		mdelay (1);
 	}
+	rio_set_led_mode(dev);
 	/* Let TxStartThresh stay default value */
 	}
 	/* Maximum Collisions */
···
 			  dev->name, int_status);
 		dw16(ASICCtrl + 2, GlobalReset | HostReset);
 		mdelay (500);
+		rio_set_led_mode(dev);
 	}
 }
 
drivers/net/ethernet/dlink/dl2k.h (+14 -1)
···
 	ResetBusy = 0x0400,
 };
 
+#define IPG_AC_LED_MODE		BIT(14)
+#define IPG_AC_LED_SPEED	BIT(27)
+#define IPG_AC_LED_MODE_BIT_1	BIT(29)
+
 /* Transmit Frame Control bits */
 enum TFC_bits {
 	DwordAlign = 0x00000000,
···
 	u16 asic_ctrl;		/* 0x02 */
 	u16 sub_vendor_id;	/* 0x04 */
 	u16 sub_system_id;	/* 0x06 */
-	u16 reserved1[12];	/* 0x08-0x1f */
+	u16 pci_base_1;		/* 0x08 (IP1000A only) */
+	u16 pci_base_2;		/* 0x0a (IP1000A only) */
+	u16 led_mode;		/* 0x0c (IP1000A only) */
+	u16 reserved1[9];	/* 0x0e-0x1f */
 	u8 mac_addr[6];		/* 0x20-0x25 */
 	u8 reserved2[10];	/* 0x26-0x2f */
 	u8 sib[204];		/* 0x30-0xfb */
···
 	u16 advertising;	/* NWay media advertisement */
 	u16 negotiate;		/* Negotiated media */
 	int phy_addr;		/* PHY addresses. */
+	u16 led_mode;		/* LED mode read from EEPROM (IP1000A only) */
 };
 
 /* The station address location in the EEPROM. */
···
    class_mask of the class are honored during the comparison.
    driver_data	Data private to the driver.
 */
+#define CHIP_IP1000A	1
 
 static const struct pci_device_id rio_pci_tbl[] = {
 	{0x1186, 0x4000, PCI_ANY_ID, PCI_ANY_ID, },
 	{0x13f0, 0x1021, PCI_ANY_ID, PCI_ANY_ID, },
+	{ PCI_VDEVICE(SUNDANCE, 0x1023), CHIP_IP1000A },
+	{ PCI_VDEVICE(SUNDANCE, 0x2021), CHIP_IP1000A },
+	{ PCI_VDEVICE(DLINK, 0x9021), CHIP_IP1000A },
+	{ PCI_VDEVICE(DLINK, 0x4020), CHIP_IP1000A },
 	{ }
 };
 MODULE_DEVICE_TABLE (pci, rio_pci_tbl);
drivers/net/ethernet/emulex/benet/be_ethtool.c (+4 -15)
···
 static int be_set_rss_hash_opts(struct be_adapter *adapter,
 				struct ethtool_rxnfc *cmd)
 {
-	struct be_rx_obj *rxo;
-	int status = 0, i, j;
-	u8 rsstable[128];
+	int status;
 	u32 rss_flags = adapter->rss_info.rss_flags;
 
 	if (cmd->data != L3_RSS_FLAGS &&
···
 	}
 
 	if (rss_flags == adapter->rss_info.rss_flags)
-		return status;
-
-	if (be_multi_rxq(adapter)) {
-		for (j = 0; j < 128; j += adapter->num_rss_qs) {
-			for_all_rss_queues(adapter, rxo, i) {
-				if ((j + i) >= 128)
-					break;
-				rsstable[j + i] = rxo->rss_id;
-			}
-		}
-	}
+		return 0;
 
 	status = be_cmd_rss_config(adapter, adapter->rss_info.rsstable,
-				   rss_flags, 128, adapter->rss_info.rss_hkey);
+				   rss_flags, RSS_INDIR_TABLE_LEN,
+				   adapter->rss_info.rss_hkey);
 	if (!status)
 		adapter->rss_info.rss_flags = rss_flags;
 
drivers/net/ethernet/emulex/benet/be_main.c (+1 -1)
···
 	netdev_rss_key_fill(rss_key, RSS_HASH_KEY_LEN);
 	rc = be_cmd_rss_config(adapter, rss->rsstable, rss->rss_flags,
-			       128, rss_key);
+			       RSS_INDIR_TABLE_LEN, rss_key);
 	if (rc) {
 		rss->rss_flags = RSS_ENABLE_NONE;
 		return rc;
drivers/net/ethernet/icplus/Kconfig (-13, file deleted)
···
-#
-# IC Plus device configuration
-#
-
-config IP1000
-	tristate "IP1000 Gigabit Ethernet support"
-	depends on PCI
-	select MII
-	---help---
-	  This driver supports IP1000 gigabit Ethernet cards.
-
-	  To compile this driver as a module, choose M here: the module
-	  will be called ipg.  This is recommended.
drivers/net/ethernet/icplus/Makefile (-5, file deleted)
···
-#
-# Makefile for the IC Plus device drivers
-#
-
-obj-$(CONFIG_IP1000) += ipg.o
-2300
drivers/net/ethernet/icplus/ipg.c
··· 1 - /* 2 - * ipg.c: Device Driver for the IP1000 Gigabit Ethernet Adapter 3 - * 4 - * Copyright (C) 2003, 2007 IC Plus Corp 5 - * 6 - * Original Author: 7 - * 8 - * Craig Rich 9 - * Sundance Technology, Inc. 10 - * www.sundanceti.com 11 - * craig_rich@sundanceti.com 12 - * 13 - * Current Maintainer: 14 - * 15 - * Sorbica Shieh. 16 - * http://www.icplus.com.tw 17 - * sorbica@icplus.com.tw 18 - * 19 - * Jesse Huang 20 - * http://www.icplus.com.tw 21 - * jesse@icplus.com.tw 22 - */ 23 - 24 - #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 25 - 26 - #include <linux/crc32.h> 27 - #include <linux/ethtool.h> 28 - #include <linux/interrupt.h> 29 - #include <linux/gfp.h> 30 - #include <linux/mii.h> 31 - #include <linux/mutex.h> 32 - 33 - #include <asm/div64.h> 34 - 35 - #define IPG_RX_RING_BYTES (sizeof(struct ipg_rx) * IPG_RFDLIST_LENGTH) 36 - #define IPG_TX_RING_BYTES (sizeof(struct ipg_tx) * IPG_TFDLIST_LENGTH) 37 - #define IPG_RESET_MASK \ 38 - (IPG_AC_GLOBAL_RESET | IPG_AC_RX_RESET | IPG_AC_TX_RESET | \ 39 - IPG_AC_DMA | IPG_AC_FIFO | IPG_AC_NETWORK | IPG_AC_HOST | \ 40 - IPG_AC_AUTO_INIT) 41 - 42 - #define ipg_w32(val32, reg) iowrite32((val32), ioaddr + (reg)) 43 - #define ipg_w16(val16, reg) iowrite16((val16), ioaddr + (reg)) 44 - #define ipg_w8(val8, reg) iowrite8((val8), ioaddr + (reg)) 45 - 46 - #define ipg_r32(reg) ioread32(ioaddr + (reg)) 47 - #define ipg_r16(reg) ioread16(ioaddr + (reg)) 48 - #define ipg_r8(reg) ioread8(ioaddr + (reg)) 49 - 50 - enum { 51 - netdev_io_size = 128 52 - }; 53 - 54 - #include "ipg.h" 55 - #define DRV_NAME "ipg" 56 - 57 - MODULE_AUTHOR("IC Plus Corp. 
2003"); 58 - MODULE_DESCRIPTION("IC Plus IP1000 Gigabit Ethernet Adapter Linux Driver"); 59 - MODULE_LICENSE("GPL"); 60 - 61 - /* 62 - * Defaults 63 - */ 64 - #define IPG_MAX_RXFRAME_SIZE 0x0600 65 - #define IPG_RXFRAG_SIZE 0x0600 66 - #define IPG_RXSUPPORT_SIZE 0x0600 67 - #define IPG_IS_JUMBO false 68 - 69 - /* 70 - * Variable record -- index by leading revision/length 71 - * Revision/Length(=N*4), Address1, Data1, Address2, Data2,...,AddressN,DataN 72 - */ 73 - static const unsigned short DefaultPhyParam[] = { 74 - /* 11/12/03 IP1000A v1-3 rev=0x40 */ 75 - /*-------------------------------------------------------------------------- 76 - (0x4000|(15*4)), 31, 0x0001, 27, 0x01e0, 31, 0x0002, 22, 0x85bd, 24, 0xfff2, 77 - 27, 0x0c10, 28, 0x0c10, 29, 0x2c10, 31, 0x0003, 23, 0x92f6, 78 - 31, 0x0000, 23, 0x003d, 30, 0x00de, 20, 0x20e7, 9, 0x0700, 79 - --------------------------------------------------------------------------*/ 80 - /* 12/17/03 IP1000A v1-4 rev=0x40 */ 81 - (0x4000 | (07 * 4)), 31, 0x0001, 27, 0x01e0, 31, 0x0002, 27, 0xeb8e, 31, 82 - 0x0000, 83 - 30, 0x005e, 9, 0x0700, 84 - /* 01/09/04 IP1000A v1-5 rev=0x41 */ 85 - (0x4100 | (07 * 4)), 31, 0x0001, 27, 0x01e0, 31, 0x0002, 27, 0xeb8e, 31, 86 - 0x0000, 87 - 30, 0x005e, 9, 0x0700, 88 - 0x0000 89 - }; 90 - 91 - static const char * const ipg_brand_name[] = { 92 - "IC PLUS IP1000 1000/100/10 based NIC", 93 - "Sundance Technology ST2021 based NIC", 94 - "Tamarack Microelectronics TC9020/9021 based NIC", 95 - "D-Link NIC IP1000A" 96 - }; 97 - 98 - static const struct pci_device_id ipg_pci_tbl[] = { 99 - { PCI_VDEVICE(SUNDANCE, 0x1023), 0 }, 100 - { PCI_VDEVICE(SUNDANCE, 0x2021), 1 }, 101 - { PCI_VDEVICE(DLINK, 0x9021), 2 }, 102 - { PCI_VDEVICE(DLINK, 0x4020), 3 }, 103 - { 0, } 104 - }; 105 - 106 - MODULE_DEVICE_TABLE(pci, ipg_pci_tbl); 107 - 108 - static inline void __iomem *ipg_ioaddr(struct net_device *dev) 109 - { 110 - struct ipg_nic_private *sp = netdev_priv(dev); 111 - return sp->ioaddr; 112 - } 113 - 114 - 
#ifdef IPG_DEBUG
static void ipg_dump_rfdlist(struct net_device *dev)
{
	struct ipg_nic_private *sp = netdev_priv(dev);
	void __iomem *ioaddr = sp->ioaddr;
	unsigned int i;
	u32 offset;

	IPG_DEBUG_MSG("_dump_rfdlist\n");

	netdev_info(dev, "rx_current = %02x\n", sp->rx_current);
	netdev_info(dev, "rx_dirty = %02x\n", sp->rx_dirty);
	netdev_info(dev, "RFDList start address = %016lx\n",
		    (unsigned long)sp->rxd_map);
	netdev_info(dev, "RFDListPtr register = %08x%08x\n",
		    ipg_r32(IPG_RFDLISTPTR1), ipg_r32(IPG_RFDLISTPTR0));

	for (i = 0; i < IPG_RFDLIST_LENGTH; i++) {
		offset = (u32) &sp->rxd[i].next_desc - (u32) sp->rxd;
		netdev_info(dev, "%02x %04x RFDNextPtr = %016lx\n",
			    i, offset, (unsigned long)sp->rxd[i].next_desc);
		offset = (u32) &sp->rxd[i].rfs - (u32) sp->rxd;
		netdev_info(dev, "%02x %04x RFS = %016lx\n",
			    i, offset, (unsigned long)sp->rxd[i].rfs);
		offset = (u32) &sp->rxd[i].frag_info - (u32) sp->rxd;
		netdev_info(dev, "%02x %04x frag_info = %016lx\n",
			    i, offset, (unsigned long)sp->rxd[i].frag_info);
	}
}

static void ipg_dump_tfdlist(struct net_device *dev)
{
	struct ipg_nic_private *sp = netdev_priv(dev);
	void __iomem *ioaddr = sp->ioaddr;
	unsigned int i;
	u32 offset;

	IPG_DEBUG_MSG("_dump_tfdlist\n");

	netdev_info(dev, "tx_current = %02x\n", sp->tx_current);
	netdev_info(dev, "tx_dirty = %02x\n", sp->tx_dirty);
	netdev_info(dev, "TFDList start address = %016lx\n",
		    (unsigned long) sp->txd_map);
	netdev_info(dev, "TFDListPtr register = %08x%08x\n",
		    ipg_r32(IPG_TFDLISTPTR1), ipg_r32(IPG_TFDLISTPTR0));

	for (i = 0; i < IPG_TFDLIST_LENGTH; i++) {
		offset = (u32) &sp->txd[i].next_desc - (u32) sp->txd;
		netdev_info(dev, "%02x %04x TFDNextPtr = %016lx\n",
			    i, offset, (unsigned long)sp->txd[i].next_desc);
		offset = (u32) &sp->txd[i].tfc - (u32) sp->txd;
		netdev_info(dev, "%02x %04x TFC = %016lx\n",
			    i, offset, (unsigned long) sp->txd[i].tfc);
		offset = (u32) &sp->txd[i].frag_info - (u32) sp->txd;
		netdev_info(dev, "%02x %04x frag_info = %016lx\n",
			    i, offset, (unsigned long) sp->txd[i].frag_info);
	}
}
#endif

static void ipg_write_phy_ctl(void __iomem *ioaddr, u8 data)
{
	ipg_w8(IPG_PC_RSVD_MASK & data, PHY_CTRL);
	ndelay(IPG_PC_PHYCTRLWAIT_NS);
}

static void ipg_drive_phy_ctl_low_high(void __iomem *ioaddr, u8 data)
{
	ipg_write_phy_ctl(ioaddr, IPG_PC_MGMTCLK_LO | data);
	ipg_write_phy_ctl(ioaddr, IPG_PC_MGMTCLK_HI | data);
}

static void send_three_state(void __iomem *ioaddr, u8 phyctrlpolarity)
{
	phyctrlpolarity |= (IPG_PC_MGMTDATA & 0) | IPG_PC_MGMTDIR;

	ipg_drive_phy_ctl_low_high(ioaddr, phyctrlpolarity);
}

static void send_end(void __iomem *ioaddr, u8 phyctrlpolarity)
{
	ipg_w8((IPG_PC_MGMTCLK_LO | (IPG_PC_MGMTDATA & 0) | IPG_PC_MGMTDIR |
		phyctrlpolarity) & IPG_PC_RSVD_MASK, PHY_CTRL);
}

static u16 read_phy_bit(void __iomem *ioaddr, u8 phyctrlpolarity)
{
	u16 bit_data;

	ipg_write_phy_ctl(ioaddr, IPG_PC_MGMTCLK_LO | phyctrlpolarity);

	bit_data = ((ipg_r8(PHY_CTRL) & IPG_PC_MGMTDATA) >> 1) & 1;

	ipg_write_phy_ctl(ioaddr, IPG_PC_MGMTCLK_HI | phyctrlpolarity);

	return bit_data;
}

/*
 * Read a register from the Physical Layer device located
 * on the IPG NIC, using the IPG PHYCTRL register.
 */
static int mdio_read(struct net_device *dev, int phy_id, int phy_reg)
{
	void __iomem *ioaddr = ipg_ioaddr(dev);
	/*
	 * The GMII management frame structure for a read is as follows:
	 *
	 * |Preamble|st|op|phyad|regad|ta|      data      |idle|
	 * |< 32 1s>|01|10|AAAAA|RRRRR|z0|DDDDDDDDDDDDDDDD|z   |
	 *
	 * <32 1s> = 32 consecutive logic 1 values
	 * A = bit of Physical Layer device address (MSB first)
	 * R = bit of register address (MSB first)
	 * z = High impedance state
	 * D = bit of read data (MSB first)
	 *
	 * Transmission order is 'Preamble' field first, bits transmitted
	 * left to right (first to last).
	 */
	struct {
		u32 field;
		unsigned int len;
	} p[] = {
		{ GMII_PREAMBLE,	32 },	/* Preamble */
		{ GMII_ST,		2  },	/* ST */
		{ GMII_READ,		2  },	/* OP */
		{ phy_id,		5  },	/* PHYAD */
		{ phy_reg,		5  },	/* REGAD */
		{ 0x0000,		2  },	/* TA */
		{ 0x0000,		16 },	/* DATA */
		{ 0x0000,		1  }	/* IDLE */
	};
	unsigned int i, j;
	u8 polarity, data;

	polarity  = ipg_r8(PHY_CTRL);
	polarity &= (IPG_PC_DUPLEX_POLARITY | IPG_PC_LINK_POLARITY);

	/* Create the Preamble, ST, OP, PHYAD, and REGAD field. */
	for (j = 0; j < 5; j++) {
		for (i = 0; i < p[j].len; i++) {
			/* For each variable length field, the MSB must be
			 * transmitted first. Rotate through the field bits,
			 * starting with the MSB, and move each bit into the
			 * 1st (2^1) bit position (this is the bit position
			 * corresponding to the MgmtData bit of the PhyCtrl
			 * register for the IPG).
			 *
			 * Example: ST = 01;
			 *
			 * First write a '0' to bit 1 of the PhyCtrl
			 * register, then write a '1' to bit 1 of the
			 * PhyCtrl register.
			 *
			 * To do this, right shift the MSB of ST by the value:
			 * [field length - 1 - #ST bits already written]
			 * then left shift this result by 1.
			 */
			data  = (p[j].field >> (p[j].len - 1 - i)) << 1;
			data &= IPG_PC_MGMTDATA;
			data |= polarity | IPG_PC_MGMTDIR;

			ipg_drive_phy_ctl_low_high(ioaddr, data);
		}
	}

	send_three_state(ioaddr, polarity);

	read_phy_bit(ioaddr, polarity);

	/*
	 * For a read cycle, the bits for the next two fields (TA and
	 * DATA) are driven by the PHY (the IPG reads these bits).
	 */
	for (i = 0; i < p[6].len; i++) {
		p[6].field |=
		    (read_phy_bit(ioaddr, polarity) << (p[6].len - 1 - i));
	}

	send_three_state(ioaddr, polarity);
	send_three_state(ioaddr, polarity);
	send_three_state(ioaddr, polarity);
	send_end(ioaddr, polarity);

	/* Return the value of the DATA field. */
	return p[6].field;
}

/*
 * Write to a register from the Physical Layer device located
 * on the IPG NIC, using the IPG PHYCTRL register.
 */
static void mdio_write(struct net_device *dev, int phy_id, int phy_reg, int val)
{
	void __iomem *ioaddr = ipg_ioaddr(dev);
	/*
	 * The GMII management frame structure for a write is as follows:
	 *
	 * |Preamble|st|op|phyad|regad|ta|      data      |idle|
	 * |< 32 1s>|01|10|AAAAA|RRRRR|z0|DDDDDDDDDDDDDDDD|z   |
	 *
	 * <32 1s> = 32 consecutive logic 1 values
	 * A = bit of Physical Layer device address (MSB first)
	 * R = bit of register address (MSB first)
	 * z = High impedance state
	 * D = bit of write data (MSB first)
	 *
	 * Transmission order is 'Preamble' field first, bits transmitted
	 * left to right (first to last).
325 - */ 326 - struct { 327 - u32 field; 328 - unsigned int len; 329 - } p[] = { 330 - { GMII_PREAMBLE, 32 }, /* Preamble */ 331 - { GMII_ST, 2 }, /* ST */ 332 - { GMII_WRITE, 2 }, /* OP */ 333 - { phy_id, 5 }, /* PHYAD */ 334 - { phy_reg, 5 }, /* REGAD */ 335 - { 0x0002, 2 }, /* TA */ 336 - { val & 0xffff, 16 }, /* DATA */ 337 - { 0x0000, 1 } /* IDLE */ 338 - }; 339 - unsigned int i, j; 340 - u8 polarity, data; 341 - 342 - polarity = ipg_r8(PHY_CTRL); 343 - polarity &= (IPG_PC_DUPLEX_POLARITY | IPG_PC_LINK_POLARITY); 344 - 345 - /* Create the Preamble, ST, OP, PHYAD, and REGAD field. */ 346 - for (j = 0; j < 7; j++) { 347 - for (i = 0; i < p[j].len; i++) { 348 - /* For each variable length field, the MSB must be 349 - * transmitted first. Rotate through the field bits, 350 - * starting with the MSB, and move each bit into the 351 - * the 1st (2^1) bit position (this is the bit position 352 - * corresponding to the MgmtData bit of the PhyCtrl 353 - * register for the IPG). 354 - * 355 - * Example: ST = 01; 356 - * 357 - * First write a '0' to bit 1 of the PhyCtrl 358 - * register, then write a '1' to bit 1 of the 359 - * PhyCtrl register. 360 - * 361 - * To do this, right shift the MSB of ST by the value: 362 - * [field length - 1 - #ST bits already written] 363 - * then left shift this result by 1. 364 - */ 365 - data = (p[j].field >> (p[j].len - 1 - i)) << 1; 366 - data &= IPG_PC_MGMTDATA; 367 - data |= polarity | IPG_PC_MGMTDIR; 368 - 369 - ipg_drive_phy_ctl_low_high(ioaddr, data); 370 - } 371 - } 372 - 373 - /* The last cycle is a tri-state, so read from the PHY. 
	 */
	ipg_write_phy_ctl(ioaddr, IPG_PC_MGMTCLK_LO | polarity);
	ipg_r8(PHY_CTRL);
	ipg_write_phy_ctl(ioaddr, IPG_PC_MGMTCLK_HI | polarity);
}

static void ipg_set_led_mode(struct net_device *dev)
{
	struct ipg_nic_private *sp = netdev_priv(dev);
	void __iomem *ioaddr = sp->ioaddr;
	u32 mode;

	mode = ipg_r32(ASIC_CTRL);
	mode &= ~(IPG_AC_LED_MODE_BIT_1 | IPG_AC_LED_MODE | IPG_AC_LED_SPEED);

	if ((sp->led_mode & 0x03) > 1)
		mode |= IPG_AC_LED_MODE_BIT_1;	/* Write Asic Control Bit 29 */

	if ((sp->led_mode & 0x01) == 1)
		mode |= IPG_AC_LED_MODE;	/* Write Asic Control Bit 14 */

	if ((sp->led_mode & 0x08) == 8)
		mode |= IPG_AC_LED_SPEED;	/* Write Asic Control Bit 27 */

	ipg_w32(mode, ASIC_CTRL);
}

static void ipg_set_phy_set(struct net_device *dev)
{
	struct ipg_nic_private *sp = netdev_priv(dev);
	void __iomem *ioaddr = sp->ioaddr;
	int physet;

	physet = ipg_r8(PHY_SET);
	physet &= ~(IPG_PS_MEM_LENB9B | IPG_PS_MEM_LEN9 | IPG_PS_NON_COMPDET);
	physet |= ((sp->led_mode & 0x70) >> 4);
	ipg_w8(physet, PHY_SET);
}

static int ipg_reset(struct net_device *dev, u32 resetflags)
{
	/* Assert functional resets via the IPG AsicCtrl
	 * register as specified by the 'resetflags' input
	 * parameter.
	 */
	void __iomem *ioaddr = ipg_ioaddr(dev);
	unsigned int timeout_count = 0;

	IPG_DEBUG_MSG("_reset\n");

	ipg_w32(ipg_r32(ASIC_CTRL) | resetflags, ASIC_CTRL);

	/* Delay added to account for problem with 10Mbps reset.
	 */
	mdelay(IPG_AC_RESETWAIT);

	while (IPG_AC_RESET_BUSY & ipg_r32(ASIC_CTRL)) {
		mdelay(IPG_AC_RESETWAIT);
		if (++timeout_count > IPG_AC_RESET_TIMEOUT)
			return -ETIME;
	}
	/* Set LED Mode in Asic Control */
	ipg_set_led_mode(dev);

	/* Set PHYSet Register Value */
	ipg_set_phy_set(dev);
	return 0;
}

/* Find the GMII PHY address. */
static int ipg_find_phyaddr(struct net_device *dev)
{
	unsigned int phyaddr, i;

	for (i = 0; i < 32; i++) {
		u32 status;

		/* Search for the correct PHY address among 32 possible. */
		phyaddr = (IPG_NIC_PHY_ADDRESS + i) % 32;

		/* 10/22/03 Grace change verify from GMII_PHY_STATUS to
		   GMII_PHY_ID1
		 */

		status = mdio_read(dev, phyaddr, MII_BMSR);

		if ((status != 0xFFFF) && (status != 0))
			return phyaddr;
	}

	return 0x1f;
}

/*
 * Configure IPG based on result of IEEE 802.3 PHY
 * auto-negotiation.
 */
static int ipg_config_autoneg(struct net_device *dev)
{
	struct ipg_nic_private *sp = netdev_priv(dev);
	void __iomem *ioaddr = sp->ioaddr;
	unsigned int txflowcontrol;
	unsigned int rxflowcontrol;
	unsigned int fullduplex;
	u32 mac_ctrl_val;
	u32 asicctrl;
	u8 phyctrl;
	const char *speed;
	const char *duplex;
	const char *tx_desc;
	const char *rx_desc;

	IPG_DEBUG_MSG("_config_autoneg\n");

	asicctrl = ipg_r32(ASIC_CTRL);
	phyctrl = ipg_r8(PHY_CTRL);
	mac_ctrl_val = ipg_r32(MAC_CTRL);

	/* Set flags for use in resolving auto-negotiation, assuming
	 * non-1000Mbps, half duplex, no flow control.
	 */
	fullduplex = 0;
	txflowcontrol = 0;
	rxflowcontrol = 0;

	/* To accommodate a problem in 10Mbps operation,
	 * set a global flag if PHY running in 10Mbps mode.
	 */
	sp->tenmbpsmode = 0;

	/* Determine actual speed of operation. */
	switch (phyctrl & IPG_PC_LINK_SPEED) {
	case IPG_PC_LINK_SPEED_10MBPS:
		speed = "10Mbps";
		sp->tenmbpsmode = 1;
		break;
	case IPG_PC_LINK_SPEED_100MBPS:
		speed = "100Mbps";
		break;
	case IPG_PC_LINK_SPEED_1000MBPS:
		speed = "1000Mbps";
		break;
	default:
		speed = "undefined!";
		return 0;
	}

	netdev_info(dev, "Link speed = %s\n", speed);
	if (sp->tenmbpsmode == 1)
		netdev_info(dev, "10Mbps operational mode enabled\n");

	if (phyctrl & IPG_PC_DUPLEX_STATUS) {
		fullduplex = 1;
		txflowcontrol = 1;
		rxflowcontrol = 1;
	}

	/* Configure full duplex, and flow control. */
	if (fullduplex == 1) {

		/* Configure IPG for full duplex operation. */

		duplex = "full";

		mac_ctrl_val |= IPG_MC_DUPLEX_SELECT_FD;

		if (txflowcontrol == 1) {
			tx_desc = "";
			mac_ctrl_val |= IPG_MC_TX_FLOW_CONTROL_ENABLE;
		} else {
			tx_desc = "no ";
			mac_ctrl_val &= ~IPG_MC_TX_FLOW_CONTROL_ENABLE;
		}

		if (rxflowcontrol == 1) {
			rx_desc = "";
			mac_ctrl_val |= IPG_MC_RX_FLOW_CONTROL_ENABLE;
		} else {
			rx_desc = "no ";
			mac_ctrl_val &= ~IPG_MC_RX_FLOW_CONTROL_ENABLE;
		}
	} else {
		duplex = "half";
		tx_desc = "no ";
		rx_desc = "no ";
		mac_ctrl_val &= (~IPG_MC_DUPLEX_SELECT_FD &
				 ~IPG_MC_TX_FLOW_CONTROL_ENABLE &
				 ~IPG_MC_RX_FLOW_CONTROL_ENABLE);
	}

	netdev_info(dev, "setting %s duplex, %sTX, %sRX flow control\n",
		    duplex, tx_desc, rx_desc);
	ipg_w32(mac_ctrl_val, MAC_CTRL);

	return 0;
}

/* Determine and configure multicast operation and set
 * receive mode for IPG.
 */
static void ipg_nic_set_multicast_list(struct net_device *dev)
{
	void __iomem *ioaddr = ipg_ioaddr(dev);
	struct netdev_hw_addr *ha;
	unsigned int hashindex;
	u32 hashtable[2];
	u8 receivemode;

	IPG_DEBUG_MSG("_nic_set_multicast_list\n");

	receivemode = IPG_RM_RECEIVEUNICAST | IPG_RM_RECEIVEBROADCAST;

	if (dev->flags & IFF_PROMISC) {
		/* NIC to be configured in promiscuous mode. */
		receivemode = IPG_RM_RECEIVEALLFRAMES;
	} else if ((dev->flags & IFF_ALLMULTI) ||
		   ((dev->flags & IFF_MULTICAST) &&
		    (netdev_mc_count(dev) > IPG_MULTICAST_HASHTABLE_SIZE))) {
		/* NIC to be configured to receive all multicast
		 * frames. */
		receivemode |= IPG_RM_RECEIVEMULTICAST;
	} else if ((dev->flags & IFF_MULTICAST) && !netdev_mc_empty(dev)) {
		/* NIC to be configured to receive selected
		 * multicast addresses. */
		receivemode |= IPG_RM_RECEIVEMULTICASTHASH;
	}

	/* Calculate the bits to set for the 64 bit, IPG HASHTABLE.
	 * The IPG applies a cyclic-redundancy-check (the same CRC
	 * used to calculate the frame data FCS) to the destination
	 * address of all incoming multicast frames whose destination
	 * address has the multicast bit set. The least significant
	 * 6 bits of the CRC result are used as an addressing index
	 * into the hash table. If the value of the bit addressed by
	 * this index is a 1, the frame is passed to the host system.
	 */

	/* Clear hashtable. */
	hashtable[0] = 0x00000000;
	hashtable[1] = 0x00000000;

	/* Cycle through all multicast addresses to filter. */
	netdev_for_each_mc_addr(ha, dev) {
		/* Calculate CRC result for each multicast address. */
		hashindex = crc32_le(0xffffffff, ha->addr,
				     ETH_ALEN);

		/* Use only the least significant 6 bits.
		 */
		hashindex = hashindex & 0x3F;

		/* Within "hashtable", set bit number "hashindex"
		 * to a logic 1.
		 */
		set_bit(hashindex, (void *)hashtable);
	}

	/* Write the value of the hashtable, to the 4, 16 bit
	 * HASHTABLE IPG registers.
	 */
	ipg_w32(hashtable[0], HASHTABLE_0);
	ipg_w32(hashtable[1], HASHTABLE_1);

	ipg_w8(IPG_RM_RSVD_MASK & receivemode, RECEIVE_MODE);

	IPG_DEBUG_MSG("ReceiveMode = %x\n", ipg_r8(RECEIVE_MODE));
}

static int ipg_io_config(struct net_device *dev)
{
	struct ipg_nic_private *sp = netdev_priv(dev);
	void __iomem *ioaddr = ipg_ioaddr(dev);
	u32 origmacctrl;
	u32 restoremacctrl;

	IPG_DEBUG_MSG("_io_config\n");

	origmacctrl = ipg_r32(MAC_CTRL);

	restoremacctrl = origmacctrl | IPG_MC_STATISTICS_ENABLE;

	/* Based on compilation option, determine if FCS is to be
	 * stripped on receive frames by IPG.
	 */
	if (!IPG_STRIP_FCS_ON_RX)
		restoremacctrl |= IPG_MC_RCV_FCS;

	/* Determine if transmitter and/or receiver are
	 * enabled so we may restore MACCTRL correctly.
	 */
	if (origmacctrl & IPG_MC_TX_ENABLED)
		restoremacctrl |= IPG_MC_TX_ENABLE;

	if (origmacctrl & IPG_MC_RX_ENABLED)
		restoremacctrl |= IPG_MC_RX_ENABLE;

	/* Transmitter and receiver must be disabled before setting
	 * IFSSelect.
	 */
	ipg_w32((origmacctrl & (IPG_MC_RX_DISABLE | IPG_MC_TX_DISABLE)) &
		IPG_MC_RSVD_MASK, MAC_CTRL);

	/* Now that transmitter and receiver are disabled, write
	 * to IFSSelect.
	 */
	ipg_w32((origmacctrl & IPG_MC_IFS_96BIT) & IPG_MC_RSVD_MASK, MAC_CTRL);

	/* Set RECEIVEMODE register.
	 */
	ipg_nic_set_multicast_list(dev);

	ipg_w16(sp->max_rxframe_size, MAX_FRAME_SIZE);

	ipg_w8(IPG_RXDMAPOLLPERIOD_VALUE, RX_DMA_POLL_PERIOD);
	ipg_w8(IPG_RXDMAURGENTTHRESH_VALUE, RX_DMA_URGENT_THRESH);
	ipg_w8(IPG_RXDMABURSTTHRESH_VALUE, RX_DMA_BURST_THRESH);
	ipg_w8(IPG_TXDMAPOLLPERIOD_VALUE, TX_DMA_POLL_PERIOD);
	ipg_w8(IPG_TXDMAURGENTTHRESH_VALUE, TX_DMA_URGENT_THRESH);
	ipg_w8(IPG_TXDMABURSTTHRESH_VALUE, TX_DMA_BURST_THRESH);
	ipg_w16((IPG_IE_HOST_ERROR | IPG_IE_TX_DMA_COMPLETE |
		 IPG_IE_TX_COMPLETE | IPG_IE_INT_REQUESTED |
		 IPG_IE_UPDATE_STATS | IPG_IE_LINK_EVENT |
		 IPG_IE_RX_DMA_COMPLETE | IPG_IE_RX_DMA_PRIORITY), INT_ENABLE);
	ipg_w16(IPG_FLOWONTHRESH_VALUE, FLOW_ON_THRESH);
	ipg_w16(IPG_FLOWOFFTHRESH_VALUE, FLOW_OFF_THRESH);

	/* IPG multi-frag frame bug workaround.
	 * Per silicon revision B3 errata.
	 */
	ipg_w16(ipg_r16(DEBUG_CTRL) | 0x0200, DEBUG_CTRL);

	/* IPG TX poll now bug workaround.
	 * Per silicon revision B3 errata.
	 */
	ipg_w16(ipg_r16(DEBUG_CTRL) | 0x0010, DEBUG_CTRL);

	/* IPG RX poll now bug workaround.
	 * Per silicon revision B3 errata.
	 */
	ipg_w16(ipg_r16(DEBUG_CTRL) | 0x0020, DEBUG_CTRL);

	/* Now restore MACCTRL to original setting. */
	ipg_w32(IPG_MC_RSVD_MASK & restoremacctrl, MAC_CTRL);

	/* Disable unused RMON statistics. */
	ipg_w32(IPG_RZ_ALL, RMON_STATISTICS_MASK);

	/* Disable unused MIB statistics.
	 */
	ipg_w32(IPG_SM_MACCONTROLFRAMESXMTD | IPG_SM_MACCONTROLFRAMESRCVD |
		IPG_SM_BCSTOCTETXMTOK_BCSTFRAMESXMTDOK | IPG_SM_TXJUMBOFRAMES |
		IPG_SM_MCSTOCTETXMTOK_MCSTFRAMESXMTDOK | IPG_SM_RXJUMBOFRAMES |
		IPG_SM_BCSTOCTETRCVDOK_BCSTFRAMESRCVDOK |
		IPG_SM_UDPCHECKSUMERRORS | IPG_SM_TCPCHECKSUMERRORS |
		IPG_SM_IPCHECKSUMERRORS, STATISTICS_MASK);

	return 0;
}

/*
 * Create a receive buffer within system memory and update
 * NIC private structure appropriately.
 */
static int ipg_get_rxbuff(struct net_device *dev, int entry)
{
	struct ipg_nic_private *sp = netdev_priv(dev);
	struct ipg_rx *rxfd = sp->rxd + entry;
	struct sk_buff *skb;
	u64 rxfragsize;

	IPG_DEBUG_MSG("_get_rxbuff\n");

	skb = netdev_alloc_skb_ip_align(dev, sp->rxsupport_size);
	if (!skb) {
		sp->rx_buff[entry] = NULL;
		return -ENOMEM;
	}

	/* Save the address of the sk_buff structure. */
	sp->rx_buff[entry] = skb;

	rxfd->frag_info = cpu_to_le64(pci_map_single(sp->pdev, skb->data,
		sp->rx_buf_sz, PCI_DMA_FROMDEVICE));

	/* Set the RFD fragment length. */
	rxfragsize = sp->rxfrag_size;
	rxfd->frag_info |= cpu_to_le64((rxfragsize << 48) & IPG_RFI_FRAGLEN);

	return 0;
}

static int init_rfdlist(struct net_device *dev)
{
	struct ipg_nic_private *sp = netdev_priv(dev);
	void __iomem *ioaddr = sp->ioaddr;
	unsigned int i;

	IPG_DEBUG_MSG("_init_rfdlist\n");

	for (i = 0; i < IPG_RFDLIST_LENGTH; i++) {
		struct ipg_rx *rxfd = sp->rxd + i;

		if (sp->rx_buff[i]) {
			pci_unmap_single(sp->pdev,
				le64_to_cpu(rxfd->frag_info) & ~IPG_RFI_FRAGLEN,
				sp->rx_buf_sz, PCI_DMA_FROMDEVICE);
			dev_kfree_skb_irq(sp->rx_buff[i]);
			sp->rx_buff[i] = NULL;
		}

		/* Clear out the RFS field.
		 */
		rxfd->rfs = 0x0000000000000000;

		if (ipg_get_rxbuff(dev, i) < 0) {
			/*
			 * A receive buffer was not ready, break the
			 * RFD list here.
			 */
			IPG_DEBUG_MSG("Cannot allocate Rx buffer\n");

			/* Just in case we cannot allocate a single RFD.
			 * Should not occur.
			 */
			if (i == 0) {
				netdev_err(dev, "No memory available for RFD list\n");
				return -ENOMEM;
			}
		}

		rxfd->next_desc = cpu_to_le64(sp->rxd_map +
					      sizeof(struct ipg_rx)*(i + 1));
	}
	sp->rxd[i - 1].next_desc = cpu_to_le64(sp->rxd_map);

	sp->rx_current = 0;
	sp->rx_dirty = 0;

	/* Write the location of the RFDList to the IPG. */
	ipg_w32((u32) sp->rxd_map, RFD_LIST_PTR_0);
	ipg_w32(0x00000000, RFD_LIST_PTR_1);

	return 0;
}

static void init_tfdlist(struct net_device *dev)
{
	struct ipg_nic_private *sp = netdev_priv(dev);
	void __iomem *ioaddr = sp->ioaddr;
	unsigned int i;

	IPG_DEBUG_MSG("_init_tfdlist\n");

	for (i = 0; i < IPG_TFDLIST_LENGTH; i++) {
		struct ipg_tx *txfd = sp->txd + i;

		txfd->tfc = cpu_to_le64(IPG_TFC_TFDDONE);

		if (sp->tx_buff[i]) {
			dev_kfree_skb_irq(sp->tx_buff[i]);
			sp->tx_buff[i] = NULL;
		}

		txfd->next_desc = cpu_to_le64(sp->txd_map +
					      sizeof(struct ipg_tx)*(i + 1));
	}
	sp->txd[i - 1].next_desc = cpu_to_le64(sp->txd_map);

	sp->tx_current = 0;
	sp->tx_dirty = 0;

	/* Write the location of the TFDList to the IPG. */
	IPG_DDEBUG_MSG("Starting TFDListPtr = %08x\n",
		       (u32) sp->txd_map);
	ipg_w32((u32) sp->txd_map, TFD_LIST_PTR_0);
	ipg_w32(0x00000000, TFD_LIST_PTR_1);

	sp->reset_current_tfd = 1;
}

/*
 * Free all transmit buffers which have already been transferred
 * via DMA to the IPG.
 */
static void ipg_nic_txfree(struct net_device *dev)
{
	struct ipg_nic_private *sp = netdev_priv(dev);
	unsigned int released, pending, dirty;

	IPG_DEBUG_MSG("_nic_txfree\n");

	pending = sp->tx_current - sp->tx_dirty;
	dirty = sp->tx_dirty % IPG_TFDLIST_LENGTH;

	for (released = 0; released < pending; released++) {
		struct sk_buff *skb = sp->tx_buff[dirty];
		struct ipg_tx *txfd = sp->txd + dirty;

		IPG_DEBUG_MSG("TFC = %016lx\n", (unsigned long) txfd->tfc);

		/* Look at each TFD's TFC field beginning
		 * at the last freed TFD up to the current TFD.
		 * If the TFDDone bit is set, free the associated
		 * buffer.
		 */
		if (!(txfd->tfc & cpu_to_le64(IPG_TFC_TFDDONE)))
			break;

		/* Free the transmit buffer. */
		if (skb) {
			pci_unmap_single(sp->pdev,
				le64_to_cpu(txfd->frag_info) & ~IPG_TFI_FRAGLEN,
				skb->len, PCI_DMA_TODEVICE);

			dev_kfree_skb_irq(skb);

			sp->tx_buff[dirty] = NULL;
		}
		dirty = (dirty + 1) % IPG_TFDLIST_LENGTH;
	}

	sp->tx_dirty += released;

	if (netif_queue_stopped(dev) &&
	    (sp->tx_current != (sp->tx_dirty + IPG_TFDLIST_LENGTH))) {
		netif_wake_queue(dev);
	}
}

static void ipg_tx_timeout(struct net_device *dev)
{
	struct ipg_nic_private *sp = netdev_priv(dev);
	void __iomem *ioaddr = sp->ioaddr;

	ipg_reset(dev, IPG_AC_TX_RESET | IPG_AC_DMA | IPG_AC_NETWORK |
		  IPG_AC_FIFO);

	spin_lock_irq(&sp->lock);

	/* Re-configure after DMA reset.
	 */
	if (ipg_io_config(dev) < 0)
		netdev_info(dev, "Error during re-configuration\n");

	init_tfdlist(dev);

	spin_unlock_irq(&sp->lock);

	ipg_w32((ipg_r32(MAC_CTRL) | IPG_MC_TX_ENABLE) & IPG_MC_RSVD_MASK,
		MAC_CTRL);
}

/*
 * For TxComplete interrupts, free all transmit
 * buffers which have already been transferred via DMA
 * to the IPG.
 */
static void ipg_nic_txcleanup(struct net_device *dev)
{
	struct ipg_nic_private *sp = netdev_priv(dev);
	void __iomem *ioaddr = sp->ioaddr;
	unsigned int i;

	IPG_DEBUG_MSG("_nic_txcleanup\n");

	for (i = 0; i < IPG_TFDLIST_LENGTH; i++) {
		/* Reading the TXSTATUS register clears the
		 * TX_COMPLETE interrupt.
		 */
		u32 txstatusdword = ipg_r32(TX_STATUS);

		IPG_DEBUG_MSG("TxStatus = %08x\n", txstatusdword);

		/* Check for Transmit errors. Error bits only valid if
		 * TX_COMPLETE bit in the TXSTATUS register is a 1.
		 */
		if (!(txstatusdword & IPG_TS_TX_COMPLETE))
			break;

		/* If in 10Mbps mode, indicate transmit is ready. */
		if (sp->tenmbpsmode) {
			netif_wake_queue(dev);
		}

		/* Transmit error, increment stat counters. */
		if (txstatusdword & IPG_TS_TX_ERROR) {
			IPG_DEBUG_MSG("Transmit error\n");
			sp->stats.tx_errors++;
		}

		/* Late collision, re-enable transmitter. */
		if (txstatusdword & IPG_TS_LATE_COLLISION) {
			IPG_DEBUG_MSG("Late collision on transmit\n");
			ipg_w32((ipg_r32(MAC_CTRL) | IPG_MC_TX_ENABLE) &
				IPG_MC_RSVD_MASK, MAC_CTRL);
		}

		/* Maximum collisions, re-enable transmitter.
		 */
		if (txstatusdword & IPG_TS_TX_MAX_COLL) {
			IPG_DEBUG_MSG("Maximum collisions on transmit\n");
			ipg_w32((ipg_r32(MAC_CTRL) | IPG_MC_TX_ENABLE) &
				IPG_MC_RSVD_MASK, MAC_CTRL);
		}

		/* Transmit underrun, reset and re-enable
		 * transmitter.
		 */
		if (txstatusdword & IPG_TS_TX_UNDERRUN) {
			IPG_DEBUG_MSG("Transmitter underrun\n");
			sp->stats.tx_fifo_errors++;
			ipg_reset(dev, IPG_AC_TX_RESET | IPG_AC_DMA |
				  IPG_AC_NETWORK | IPG_AC_FIFO);

			/* Re-configure after DMA reset. */
			if (ipg_io_config(dev) < 0) {
				netdev_info(dev, "Error during re-configuration\n");
			}
			init_tfdlist(dev);

			ipg_w32((ipg_r32(MAC_CTRL) | IPG_MC_TX_ENABLE) &
				IPG_MC_RSVD_MASK, MAC_CTRL);
		}
	}

	ipg_nic_txfree(dev);
}

/* Provides statistical information about the IPG NIC. */
static struct net_device_stats *ipg_nic_get_stats(struct net_device *dev)
{
	struct ipg_nic_private *sp = netdev_priv(dev);
	void __iomem *ioaddr = sp->ioaddr;
	u16 temp1;
	u16 temp2;

	IPG_DEBUG_MSG("_nic_get_stats\n");

	/* Check to see if the NIC has been initialized via nic_open,
	 * before trying to read statistic registers.
	 */
	if (!netif_running(dev))
		return &sp->stats;

	sp->stats.rx_packets += ipg_r32(IPG_FRAMESRCVDOK);
	sp->stats.tx_packets += ipg_r32(IPG_FRAMESXMTDOK);
	sp->stats.rx_bytes += ipg_r32(IPG_OCTETRCVOK);
	sp->stats.tx_bytes += ipg_r32(IPG_OCTETXMTOK);
	temp1 = ipg_r16(IPG_FRAMESLOSTRXERRORS);
	sp->stats.rx_errors += temp1;
	sp->stats.rx_missed_errors += temp1;
	temp1 = ipg_r32(IPG_SINGLECOLFRAMES) + ipg_r32(IPG_MULTICOLFRAMES) +
		ipg_r32(IPG_LATECOLLISIONS);
	temp2 = ipg_r16(IPG_CARRIERSENSEERRORS);
	sp->stats.collisions += temp1;
	sp->stats.tx_dropped += ipg_r16(IPG_FRAMESABORTXSCOLLS);
	sp->stats.tx_errors += ipg_r16(IPG_FRAMESWEXDEFERRAL) +
		ipg_r32(IPG_FRAMESWDEFERREDXMT) + temp1 + temp2;
	sp->stats.multicast += ipg_r32(IPG_MCSTOCTETRCVDOK);

	/* detailed tx_errors */
	sp->stats.tx_carrier_errors += temp2;

	/* detailed rx_errors */
	sp->stats.rx_length_errors += ipg_r16(IPG_INRANGELENGTHERRORS) +
		ipg_r16(IPG_FRAMETOOLONGERRORS);
	sp->stats.rx_crc_errors += ipg_r16(IPG_FRAMECHECKSEQERRORS);

	/* Unutilized IPG statistic registers. */
	ipg_r32(IPG_MCSTFRAMESRCVDOK);

	return &sp->stats;
}

/* Restore used receive buffers. */
static int ipg_nic_rxrestore(struct net_device *dev)
{
	struct ipg_nic_private *sp = netdev_priv(dev);
	const unsigned int curr = sp->rx_current;
	unsigned int dirty = sp->rx_dirty;

	IPG_DEBUG_MSG("_nic_rxrestore\n");

	for (dirty = sp->rx_dirty; curr - dirty > 0; dirty++) {
		unsigned int entry = dirty % IPG_RFDLIST_LENGTH;

		/* rx_copybreak may poke hole here and there. */
		if (sp->rx_buff[entry])
			continue;

		/* Generate a new receive buffer to replace the
		 * current buffer (which will be released by the
		 * Linux system).
		 */
		if (ipg_get_rxbuff(dev, entry) < 0) {
			IPG_DEBUG_MSG("Cannot allocate new Rx buffer\n");

			break;
		}

		/* Reset the RFS field. */
		sp->rxd[entry].rfs = 0x0000000000000000;
	}
	sp->rx_dirty = dirty;

	return 0;
}

/* use jumboindex and jumbosize to control jumbo frame status
 * initial status is jumboindex=-1 and jumbosize=0
 * 1. jumboindex = -1 and jumbosize=0 : previous jumbo frame has been done.
 * 2. jumboindex != -1 and jumbosize != 0 : jumbo frame is not over size and receiving
 * 3. jumboindex = -1 and jumbosize != 0 : jumbo frame is over size, already dumped
 *    previous receiving and need to continue dumping the current one
 */
enum {
	NORMAL_PACKET,
	ERROR_PACKET
};

enum {
	FRAME_NO_START_NO_END	= 0,
	FRAME_WITH_START	= 1,
	FRAME_WITH_END		= 10,
	FRAME_WITH_START_WITH_END = 11
};

static void ipg_nic_rx_free_skb(struct net_device *dev)
{
	struct ipg_nic_private *sp = netdev_priv(dev);
	unsigned int entry = sp->rx_current % IPG_RFDLIST_LENGTH;

	if (sp->rx_buff[entry]) {
		struct ipg_rx *rxfd = sp->rxd + entry;

		pci_unmap_single(sp->pdev,
			le64_to_cpu(rxfd->frag_info) & ~IPG_RFI_FRAGLEN,
			sp->rx_buf_sz, PCI_DMA_FROMDEVICE);
		dev_kfree_skb_irq(sp->rx_buff[entry]);
		sp->rx_buff[entry] = NULL;
	}
}

static int ipg_nic_rx_check_frame_type(struct net_device *dev)
{
	struct ipg_nic_private *sp = netdev_priv(dev);
	struct ipg_rx *rxfd = sp->rxd + (sp->rx_current % IPG_RFDLIST_LENGTH);
	int type = FRAME_NO_START_NO_END;

	if (le64_to_cpu(rxfd->rfs) & IPG_RFS_FRAMESTART)
		type += FRAME_WITH_START;
	if (le64_to_cpu(rxfd->rfs) & IPG_RFS_FRAMEEND)
		type += FRAME_WITH_END;
	return type;
}
1121 - 1122 - static int ipg_nic_rx_check_error(struct net_device *dev) 1123 - { 1124 - struct ipg_nic_private *sp = netdev_priv(dev); 1125 - unsigned int entry = sp->rx_current % IPG_RFDLIST_LENGTH; 1126 - struct ipg_rx *rxfd = sp->rxd + entry; 1127 - 1128 - if (IPG_DROP_ON_RX_ETH_ERRORS && (le64_to_cpu(rxfd->rfs) & 1129 - (IPG_RFS_RXFIFOOVERRUN | IPG_RFS_RXRUNTFRAME | 1130 - IPG_RFS_RXALIGNMENTERROR | IPG_RFS_RXFCSERROR | 1131 - IPG_RFS_RXOVERSIZEDFRAME | IPG_RFS_RXLENGTHERROR))) { 1132 - IPG_DEBUG_MSG("Rx error, RFS = %016lx\n", 1133 - (unsigned long) rxfd->rfs); 1134 - 1135 - /* Increment general receive error statistic. */ 1136 - sp->stats.rx_errors++; 1137 - 1138 - /* Increment detailed receive error statistics. */ 1139 - if (le64_to_cpu(rxfd->rfs) & IPG_RFS_RXFIFOOVERRUN) { 1140 - IPG_DEBUG_MSG("RX FIFO overrun occurred\n"); 1141 - 1142 - sp->stats.rx_fifo_errors++; 1143 - } 1144 - 1145 - if (le64_to_cpu(rxfd->rfs) & IPG_RFS_RXRUNTFRAME) { 1146 - IPG_DEBUG_MSG("RX runt occurred\n"); 1147 - sp->stats.rx_length_errors++; 1148 - } 1149 - 1150 - /* Do nothing for IPG_RFS_RXOVERSIZEDFRAME, 1151 - * error count handled by a IPG statistic register. 1152 - */ 1153 - 1154 - if (le64_to_cpu(rxfd->rfs) & IPG_RFS_RXALIGNMENTERROR) { 1155 - IPG_DEBUG_MSG("RX alignment error occurred\n"); 1156 - sp->stats.rx_frame_errors++; 1157 - } 1158 - 1159 - /* Do nothing for IPG_RFS_RXFCSERROR, error count 1160 - * handled by a IPG statistic register. 1161 - */ 1162 - 1163 - /* Free the memory associated with the RX 1164 - * buffer since it is erroneous and we will 1165 - * not pass it to higher layer processes. 
1166 - */ 1167 - if (sp->rx_buff[entry]) { 1168 - pci_unmap_single(sp->pdev, 1169 - le64_to_cpu(rxfd->frag_info) & ~IPG_RFI_FRAGLEN, 1170 - sp->rx_buf_sz, PCI_DMA_FROMDEVICE); 1171 - 1172 - dev_kfree_skb_irq(sp->rx_buff[entry]); 1173 - sp->rx_buff[entry] = NULL; 1174 - } 1175 - return ERROR_PACKET; 1176 - } 1177 - return NORMAL_PACKET; 1178 - } 1179 - 1180 - static void ipg_nic_rx_with_start_and_end(struct net_device *dev, 1181 - struct ipg_nic_private *sp, 1182 - struct ipg_rx *rxfd, unsigned entry) 1183 - { 1184 - struct ipg_jumbo *jumbo = &sp->jumbo; 1185 - struct sk_buff *skb; 1186 - int framelen; 1187 - 1188 - if (jumbo->found_start) { 1189 - dev_kfree_skb_irq(jumbo->skb); 1190 - jumbo->found_start = 0; 1191 - jumbo->current_size = 0; 1192 - jumbo->skb = NULL; 1193 - } 1194 - 1195 - /* 1: found error, 0 no error */ 1196 - if (ipg_nic_rx_check_error(dev) != NORMAL_PACKET) 1197 - return; 1198 - 1199 - skb = sp->rx_buff[entry]; 1200 - if (!skb) 1201 - return; 1202 - 1203 - /* accept this frame and send to upper layer */ 1204 - framelen = le64_to_cpu(rxfd->rfs) & IPG_RFS_RXFRAMELEN; 1205 - if (framelen > sp->rxfrag_size) 1206 - framelen = sp->rxfrag_size; 1207 - 1208 - skb_put(skb, framelen); 1209 - skb->protocol = eth_type_trans(skb, dev); 1210 - skb_checksum_none_assert(skb); 1211 - netif_rx(skb); 1212 - sp->rx_buff[entry] = NULL; 1213 - } 1214 - 1215 - static void ipg_nic_rx_with_start(struct net_device *dev, 1216 - struct ipg_nic_private *sp, 1217 - struct ipg_rx *rxfd, unsigned entry) 1218 - { 1219 - struct ipg_jumbo *jumbo = &sp->jumbo; 1220 - struct pci_dev *pdev = sp->pdev; 1221 - struct sk_buff *skb; 1222 - 1223 - /* 1: found error, 0 no error */ 1224 - if (ipg_nic_rx_check_error(dev) != NORMAL_PACKET) 1225 - return; 1226 - 1227 - /* accept this frame and send to upper layer */ 1228 - skb = sp->rx_buff[entry]; 1229 - if (!skb) 1230 - return; 1231 - 1232 - if (jumbo->found_start) 1233 - dev_kfree_skb_irq(jumbo->skb); 1234 - 1235 - pci_unmap_single(pdev, 
le64_to_cpu(rxfd->frag_info) & ~IPG_RFI_FRAGLEN, 1236 - sp->rx_buf_sz, PCI_DMA_FROMDEVICE); 1237 - 1238 - skb_put(skb, sp->rxfrag_size); 1239 - 1240 - jumbo->found_start = 1; 1241 - jumbo->current_size = sp->rxfrag_size; 1242 - jumbo->skb = skb; 1243 - 1244 - sp->rx_buff[entry] = NULL; 1245 - } 1246 - 1247 - static void ipg_nic_rx_with_end(struct net_device *dev, 1248 - struct ipg_nic_private *sp, 1249 - struct ipg_rx *rxfd, unsigned entry) 1250 - { 1251 - struct ipg_jumbo *jumbo = &sp->jumbo; 1252 - 1253 - /* 1: found error, 0 no error */ 1254 - if (ipg_nic_rx_check_error(dev) == NORMAL_PACKET) { 1255 - struct sk_buff *skb = sp->rx_buff[entry]; 1256 - 1257 - if (!skb) 1258 - return; 1259 - 1260 - if (jumbo->found_start) { 1261 - int framelen, endframelen; 1262 - 1263 - framelen = le64_to_cpu(rxfd->rfs) & IPG_RFS_RXFRAMELEN; 1264 - 1265 - endframelen = framelen - jumbo->current_size; 1266 - if (framelen > sp->rxsupport_size) 1267 - dev_kfree_skb_irq(jumbo->skb); 1268 - else { 1269 - memcpy(skb_put(jumbo->skb, endframelen), 1270 - skb->data, endframelen); 1271 - 1272 - jumbo->skb->protocol = 1273 - eth_type_trans(jumbo->skb, dev); 1274 - 1275 - skb_checksum_none_assert(jumbo->skb); 1276 - netif_rx(jumbo->skb); 1277 - } 1278 - } 1279 - 1280 - jumbo->found_start = 0; 1281 - jumbo->current_size = 0; 1282 - jumbo->skb = NULL; 1283 - 1284 - ipg_nic_rx_free_skb(dev); 1285 - } else { 1286 - dev_kfree_skb_irq(jumbo->skb); 1287 - jumbo->found_start = 0; 1288 - jumbo->current_size = 0; 1289 - jumbo->skb = NULL; 1290 - } 1291 - } 1292 - 1293 - static void ipg_nic_rx_no_start_no_end(struct net_device *dev, 1294 - struct ipg_nic_private *sp, 1295 - struct ipg_rx *rxfd, unsigned entry) 1296 - { 1297 - struct ipg_jumbo *jumbo = &sp->jumbo; 1298 - 1299 - /* 1: found error, 0 no error */ 1300 - if (ipg_nic_rx_check_error(dev) == NORMAL_PACKET) { 1301 - struct sk_buff *skb = sp->rx_buff[entry]; 1302 - 1303 - if (skb) { 1304 - if (jumbo->found_start) { 1305 - jumbo->current_size += 
sp->rxfrag_size; 1306 - if (jumbo->current_size <= sp->rxsupport_size) { 1307 - memcpy(skb_put(jumbo->skb, 1308 - sp->rxfrag_size), 1309 - skb->data, sp->rxfrag_size); 1310 - } 1311 - } 1312 - ipg_nic_rx_free_skb(dev); 1313 - } 1314 - } else { 1315 - dev_kfree_skb_irq(jumbo->skb); 1316 - jumbo->found_start = 0; 1317 - jumbo->current_size = 0; 1318 - jumbo->skb = NULL; 1319 - } 1320 - } 1321 - 1322 - static int ipg_nic_rx_jumbo(struct net_device *dev) 1323 - { 1324 - struct ipg_nic_private *sp = netdev_priv(dev); 1325 - unsigned int curr = sp->rx_current; 1326 - void __iomem *ioaddr = sp->ioaddr; 1327 - unsigned int i; 1328 - 1329 - IPG_DEBUG_MSG("_nic_rx\n"); 1330 - 1331 - for (i = 0; i < IPG_MAXRFDPROCESS_COUNT; i++, curr++) { 1332 - unsigned int entry = curr % IPG_RFDLIST_LENGTH; 1333 - struct ipg_rx *rxfd = sp->rxd + entry; 1334 - 1335 - if (!(rxfd->rfs & cpu_to_le64(IPG_RFS_RFDDONE))) 1336 - break; 1337 - 1338 - switch (ipg_nic_rx_check_frame_type(dev)) { 1339 - case FRAME_WITH_START_WITH_END: 1340 - ipg_nic_rx_with_start_and_end(dev, sp, rxfd, entry); 1341 - break; 1342 - case FRAME_WITH_START: 1343 - ipg_nic_rx_with_start(dev, sp, rxfd, entry); 1344 - break; 1345 - case FRAME_WITH_END: 1346 - ipg_nic_rx_with_end(dev, sp, rxfd, entry); 1347 - break; 1348 - case FRAME_NO_START_NO_END: 1349 - ipg_nic_rx_no_start_no_end(dev, sp, rxfd, entry); 1350 - break; 1351 - } 1352 - } 1353 - 1354 - sp->rx_current = curr; 1355 - 1356 - if (i == IPG_MAXRFDPROCESS_COUNT) { 1357 - /* There are more RFDs to process, however the 1358 - * allocated amount of RFD processing time has 1359 - * expired. Assert Interrupt Requested to make 1360 - * sure we come back to process the remaining RFDs. 
1361 - */ 1362 - ipg_w32(ipg_r32(ASIC_CTRL) | IPG_AC_INT_REQUEST, ASIC_CTRL); 1363 - } 1364 - 1365 - ipg_nic_rxrestore(dev); 1366 - 1367 - return 0; 1368 - } 1369 - 1370 - static int ipg_nic_rx(struct net_device *dev) 1371 - { 1372 - /* Transfer received Ethernet frames to higher network layers. */ 1373 - struct ipg_nic_private *sp = netdev_priv(dev); 1374 - unsigned int curr = sp->rx_current; 1375 - void __iomem *ioaddr = sp->ioaddr; 1376 - struct ipg_rx *rxfd; 1377 - unsigned int i; 1378 - 1379 - IPG_DEBUG_MSG("_nic_rx\n"); 1380 - 1381 - #define __RFS_MASK \ 1382 - cpu_to_le64(IPG_RFS_RFDDONE | IPG_RFS_FRAMESTART | IPG_RFS_FRAMEEND) 1383 - 1384 - for (i = 0; i < IPG_MAXRFDPROCESS_COUNT; i++, curr++) { 1385 - unsigned int entry = curr % IPG_RFDLIST_LENGTH; 1386 - struct sk_buff *skb = sp->rx_buff[entry]; 1387 - unsigned int framelen; 1388 - 1389 - rxfd = sp->rxd + entry; 1390 - 1391 - if (((rxfd->rfs & __RFS_MASK) != __RFS_MASK) || !skb) 1392 - break; 1393 - 1394 - /* Get received frame length. */ 1395 - framelen = le64_to_cpu(rxfd->rfs) & IPG_RFS_RXFRAMELEN; 1396 - 1397 - /* Check for jumbo frame arrival with too small 1398 - * RXFRAG_SIZE. 1399 - */ 1400 - if (framelen > sp->rxfrag_size) { 1401 - IPG_DEBUG_MSG 1402 - ("RFS FrameLen > allocated fragment size\n"); 1403 - 1404 - framelen = sp->rxfrag_size; 1405 - } 1406 - 1407 - if ((IPG_DROP_ON_RX_ETH_ERRORS && (le64_to_cpu(rxfd->rfs) & 1408 - (IPG_RFS_RXFIFOOVERRUN | IPG_RFS_RXRUNTFRAME | 1409 - IPG_RFS_RXALIGNMENTERROR | IPG_RFS_RXFCSERROR | 1410 - IPG_RFS_RXOVERSIZEDFRAME | IPG_RFS_RXLENGTHERROR)))) { 1411 - 1412 - IPG_DEBUG_MSG("Rx error, RFS = %016lx\n", 1413 - (unsigned long int) rxfd->rfs); 1414 - 1415 - /* Increment general receive error statistic. */ 1416 - sp->stats.rx_errors++; 1417 - 1418 - /* Increment detailed receive error statistics. 
*/ 1419 - if (le64_to_cpu(rxfd->rfs) & IPG_RFS_RXFIFOOVERRUN) { 1420 - IPG_DEBUG_MSG("RX FIFO overrun occurred\n"); 1421 - sp->stats.rx_fifo_errors++; 1422 - } 1423 - 1424 - if (le64_to_cpu(rxfd->rfs) & IPG_RFS_RXRUNTFRAME) { 1425 - IPG_DEBUG_MSG("RX runt occurred\n"); 1426 - sp->stats.rx_length_errors++; 1427 - } 1428 - 1429 - if (le64_to_cpu(rxfd->rfs) & IPG_RFS_RXOVERSIZEDFRAME) ; 1430 - /* Do nothing, error count handled by a IPG 1431 - * statistic register. 1432 - */ 1433 - 1434 - if (le64_to_cpu(rxfd->rfs) & IPG_RFS_RXALIGNMENTERROR) { 1435 - IPG_DEBUG_MSG("RX alignment error occurred\n"); 1436 - sp->stats.rx_frame_errors++; 1437 - } 1438 - 1439 - if (le64_to_cpu(rxfd->rfs) & IPG_RFS_RXFCSERROR) ; 1440 - /* Do nothing, error count handled by a IPG 1441 - * statistic register. 1442 - */ 1443 - 1444 - /* Free the memory associated with the RX 1445 - * buffer since it is erroneous and we will 1446 - * not pass it to higher layer processes. 1447 - */ 1448 - if (skb) { 1449 - __le64 info = rxfd->frag_info; 1450 - 1451 - pci_unmap_single(sp->pdev, 1452 - le64_to_cpu(info) & ~IPG_RFI_FRAGLEN, 1453 - sp->rx_buf_sz, PCI_DMA_FROMDEVICE); 1454 - 1455 - dev_kfree_skb_irq(skb); 1456 - } 1457 - } else { 1458 - 1459 - /* Adjust the new buffer length to accommodate the size 1460 - * of the received frame. 1461 - */ 1462 - skb_put(skb, framelen); 1463 - 1464 - /* Set the buffer's protocol field to Ethernet. */ 1465 - skb->protocol = eth_type_trans(skb, dev); 1466 - 1467 - /* The IPG encountered an error with (or 1468 - * there were no) IP/TCP/UDP checksums. 1469 - * This may or may not indicate an invalid 1470 - * IP/TCP/UDP frame was received. Let the 1471 - * upper layer decide. 1472 - */ 1473 - skb_checksum_none_assert(skb); 1474 - 1475 - /* Hand off frame for higher layer processing. 1476 - * The function netif_rx() releases the sk_buff 1477 - * when processing completes. 1478 - */ 1479 - netif_rx(skb); 1480 - } 1481 - 1482 - /* Assure RX buffer is not reused by IPG. 
*/ 1483 - sp->rx_buff[entry] = NULL; 1484 - } 1485 - 1486 - /* 1487 - * If there are more RFDs to process and the allocated amount of RFD 1488 - * processing time has expired, assert Interrupt Requested to make 1489 - * sure we come back to process the remaining RFDs. 1490 - */ 1491 - if (i == IPG_MAXRFDPROCESS_COUNT) 1492 - ipg_w32(ipg_r32(ASIC_CTRL) | IPG_AC_INT_REQUEST, ASIC_CTRL); 1493 - 1494 - #ifdef IPG_DEBUG 1495 - /* Check if the RFD list contained no receive frame data. */ 1496 - if (!i) 1497 - sp->EmptyRFDListCount++; 1498 - #endif 1499 - while ((le64_to_cpu(rxfd->rfs) & IPG_RFS_RFDDONE) && 1500 - !((le64_to_cpu(rxfd->rfs) & IPG_RFS_FRAMESTART) && 1501 - (le64_to_cpu(rxfd->rfs) & IPG_RFS_FRAMEEND))) { 1502 - unsigned int entry = curr++ % IPG_RFDLIST_LENGTH; 1503 - 1504 - rxfd = sp->rxd + entry; 1505 - 1506 - IPG_DEBUG_MSG("Frame requires multiple RFDs\n"); 1507 - 1508 - /* An unexpected event, additional code needed to handle 1509 - * properly. So for the time being, just disregard the 1510 - * frame. 1511 - */ 1512 - 1513 - /* Free the memory associated with the RX 1514 - * buffer since it is erroneous and we will 1515 - * not pass it to higher layer processes. 1516 - */ 1517 - if (sp->rx_buff[entry]) { 1518 - pci_unmap_single(sp->pdev, 1519 - le64_to_cpu(rxfd->frag_info) & ~IPG_RFI_FRAGLEN, 1520 - sp->rx_buf_sz, PCI_DMA_FROMDEVICE); 1521 - dev_kfree_skb_irq(sp->rx_buff[entry]); 1522 - } 1523 - 1524 - /* Assure RX buffer is not reused by IPG. */ 1525 - sp->rx_buff[entry] = NULL; 1526 - } 1527 - 1528 - sp->rx_current = curr; 1529 - 1530 - /* Check to see if there are a minimum number of used 1531 - * RFDs before restoring any (should improve performance.) 
1532 - */ 1533 - if ((curr - sp->rx_dirty) >= IPG_MINUSEDRFDSTOFREE) 1534 - ipg_nic_rxrestore(dev); 1535 - 1536 - return 0; 1537 - } 1538 - 1539 - static void ipg_reset_after_host_error(struct work_struct *work) 1540 - { 1541 - struct ipg_nic_private *sp = 1542 - container_of(work, struct ipg_nic_private, task.work); 1543 - struct net_device *dev = sp->dev; 1544 - 1545 - /* 1546 - * Acknowledge HostError interrupt by resetting 1547 - * IPG DMA and HOST. 1548 - */ 1549 - ipg_reset(dev, IPG_AC_GLOBAL_RESET | IPG_AC_HOST | IPG_AC_DMA); 1550 - 1551 - init_rfdlist(dev); 1552 - init_tfdlist(dev); 1553 - 1554 - if (ipg_io_config(dev) < 0) { 1555 - netdev_info(dev, "Cannot recover from PCI error\n"); 1556 - schedule_delayed_work(&sp->task, HZ); 1557 - } 1558 - } 1559 - 1560 - static irqreturn_t ipg_interrupt_handler(int irq, void *dev_inst) 1561 - { 1562 - struct net_device *dev = dev_inst; 1563 - struct ipg_nic_private *sp = netdev_priv(dev); 1564 - void __iomem *ioaddr = sp->ioaddr; 1565 - unsigned int handled = 0; 1566 - u16 status; 1567 - 1568 - IPG_DEBUG_MSG("_interrupt_handler\n"); 1569 - 1570 - if (sp->is_jumbo) 1571 - ipg_nic_rxrestore(dev); 1572 - 1573 - spin_lock(&sp->lock); 1574 - 1575 - /* Get interrupt source information, and acknowledge 1576 - * some (i.e. TxDMAComplete, RxDMAComplete, RxEarly, 1577 - * IntRequested, MacControlFrame, LinkEvent) interrupts 1578 - * if issued. Also, all IPG interrupts are disabled by 1579 - * reading IntStatusAck. 1580 - */ 1581 - status = ipg_r16(INT_STATUS_ACK); 1582 - 1583 - IPG_DEBUG_MSG("IntStatusAck = %04x\n", status); 1584 - 1585 - /* Shared IRQ of remove event. */ 1586 - if (!(status & IPG_IS_RSVD_MASK)) 1587 - goto out_enable; 1588 - 1589 - handled = 1; 1590 - 1591 - if (unlikely(!netif_running(dev))) 1592 - goto out_unlock; 1593 - 1594 - /* If RFDListEnd interrupt, restore all used RFDs. 
*/ 1595 - if (status & IPG_IS_RFD_LIST_END) { 1596 - IPG_DEBUG_MSG("RFDListEnd Interrupt\n"); 1597 - 1598 - /* The RFD list end indicates an RFD was encountered 1599 - * with a 0 NextPtr, or with an RFDDone bit set to 1 1600 - * (indicating the RFD is not read for use by the 1601 - * IPG.) Try to restore all RFDs. 1602 - */ 1603 - ipg_nic_rxrestore(dev); 1604 - 1605 - #ifdef IPG_DEBUG 1606 - /* Increment the RFDlistendCount counter. */ 1607 - sp->RFDlistendCount++; 1608 - #endif 1609 - } 1610 - 1611 - /* If RFDListEnd, RxDMAPriority, RxDMAComplete, or 1612 - * IntRequested interrupt, process received frames. */ 1613 - if ((status & IPG_IS_RX_DMA_PRIORITY) || 1614 - (status & IPG_IS_RFD_LIST_END) || 1615 - (status & IPG_IS_RX_DMA_COMPLETE) || 1616 - (status & IPG_IS_INT_REQUESTED)) { 1617 - #ifdef IPG_DEBUG 1618 - /* Increment the RFD list checked counter if interrupted 1619 - * only to check the RFD list. */ 1620 - if (status & (~(IPG_IS_RX_DMA_PRIORITY | IPG_IS_RFD_LIST_END | 1621 - IPG_IS_RX_DMA_COMPLETE | IPG_IS_INT_REQUESTED) & 1622 - (IPG_IS_HOST_ERROR | IPG_IS_TX_DMA_COMPLETE | 1623 - IPG_IS_LINK_EVENT | IPG_IS_TX_COMPLETE | 1624 - IPG_IS_UPDATE_STATS))) 1625 - sp->RFDListCheckedCount++; 1626 - #endif 1627 - 1628 - if (sp->is_jumbo) 1629 - ipg_nic_rx_jumbo(dev); 1630 - else 1631 - ipg_nic_rx(dev); 1632 - } 1633 - 1634 - /* If TxDMAComplete interrupt, free used TFDs. */ 1635 - if (status & IPG_IS_TX_DMA_COMPLETE) 1636 - ipg_nic_txfree(dev); 1637 - 1638 - /* TxComplete interrupts indicate one of numerous actions. 1639 - * Determine what action to take based on TXSTATUS register. 1640 - */ 1641 - if (status & IPG_IS_TX_COMPLETE) 1642 - ipg_nic_txcleanup(dev); 1643 - 1644 - /* If UpdateStats interrupt, update Linux Ethernet statistics */ 1645 - if (status & IPG_IS_UPDATE_STATS) 1646 - ipg_nic_get_stats(dev); 1647 - 1648 - /* If HostError interrupt, reset IPG. 
*/ 1649 - if (status & IPG_IS_HOST_ERROR) { 1650 - IPG_DDEBUG_MSG("HostError Interrupt\n"); 1651 - 1652 - schedule_delayed_work(&sp->task, 0); 1653 - } 1654 - 1655 - /* If LinkEvent interrupt, resolve autonegotiation. */ 1656 - if (status & IPG_IS_LINK_EVENT) { 1657 - if (ipg_config_autoneg(dev) < 0) 1658 - netdev_info(dev, "Auto-negotiation error\n"); 1659 - } 1660 - 1661 - /* If MACCtrlFrame interrupt, do nothing. */ 1662 - if (status & IPG_IS_MAC_CTRL_FRAME) 1663 - IPG_DEBUG_MSG("MACCtrlFrame interrupt\n"); 1664 - 1665 - /* If RxComplete interrupt, do nothing. */ 1666 - if (status & IPG_IS_RX_COMPLETE) 1667 - IPG_DEBUG_MSG("RxComplete interrupt\n"); 1668 - 1669 - /* If RxEarly interrupt, do nothing. */ 1670 - if (status & IPG_IS_RX_EARLY) 1671 - IPG_DEBUG_MSG("RxEarly interrupt\n"); 1672 - 1673 - out_enable: 1674 - /* Re-enable IPG interrupts. */ 1675 - ipg_w16(IPG_IE_TX_DMA_COMPLETE | IPG_IE_RX_DMA_COMPLETE | 1676 - IPG_IE_HOST_ERROR | IPG_IE_INT_REQUESTED | IPG_IE_TX_COMPLETE | 1677 - IPG_IE_LINK_EVENT | IPG_IE_UPDATE_STATS, INT_ENABLE); 1678 - out_unlock: 1679 - spin_unlock(&sp->lock); 1680 - 1681 - return IRQ_RETVAL(handled); 1682 - } 1683 - 1684 - static void ipg_rx_clear(struct ipg_nic_private *sp) 1685 - { 1686 - unsigned int i; 1687 - 1688 - for (i = 0; i < IPG_RFDLIST_LENGTH; i++) { 1689 - if (sp->rx_buff[i]) { 1690 - struct ipg_rx *rxfd = sp->rxd + i; 1691 - 1692 - dev_kfree_skb_irq(sp->rx_buff[i]); 1693 - sp->rx_buff[i] = NULL; 1694 - pci_unmap_single(sp->pdev, 1695 - le64_to_cpu(rxfd->frag_info) & ~IPG_RFI_FRAGLEN, 1696 - sp->rx_buf_sz, PCI_DMA_FROMDEVICE); 1697 - } 1698 - } 1699 - } 1700 - 1701 - static void ipg_tx_clear(struct ipg_nic_private *sp) 1702 - { 1703 - unsigned int i; 1704 - 1705 - for (i = 0; i < IPG_TFDLIST_LENGTH; i++) { 1706 - if (sp->tx_buff[i]) { 1707 - struct ipg_tx *txfd = sp->txd + i; 1708 - 1709 - pci_unmap_single(sp->pdev, 1710 - le64_to_cpu(txfd->frag_info) & ~IPG_TFI_FRAGLEN, 1711 - sp->tx_buff[i]->len, PCI_DMA_TODEVICE); 
1712 - 1713 - dev_kfree_skb_irq(sp->tx_buff[i]); 1714 - 1715 - sp->tx_buff[i] = NULL; 1716 - } 1717 - } 1718 - } 1719 - 1720 - static int ipg_nic_open(struct net_device *dev) 1721 - { 1722 - struct ipg_nic_private *sp = netdev_priv(dev); 1723 - void __iomem *ioaddr = sp->ioaddr; 1724 - struct pci_dev *pdev = sp->pdev; 1725 - int rc; 1726 - 1727 - IPG_DEBUG_MSG("_nic_open\n"); 1728 - 1729 - sp->rx_buf_sz = sp->rxsupport_size; 1730 - 1731 - /* Check for interrupt line conflicts, and request interrupt 1732 - * line for IPG. 1733 - * 1734 - * IMPORTANT: Disable IPG interrupts prior to registering 1735 - * IRQ. 1736 - */ 1737 - ipg_w16(0x0000, INT_ENABLE); 1738 - 1739 - /* Register the interrupt line to be used by the IPG within 1740 - * the Linux system. 1741 - */ 1742 - rc = request_irq(pdev->irq, ipg_interrupt_handler, IRQF_SHARED, 1743 - dev->name, dev); 1744 - if (rc < 0) { 1745 - netdev_info(dev, "Error when requesting interrupt\n"); 1746 - goto out; 1747 - } 1748 - 1749 - dev->irq = pdev->irq; 1750 - 1751 - rc = -ENOMEM; 1752 - 1753 - sp->rxd = dma_alloc_coherent(&pdev->dev, IPG_RX_RING_BYTES, 1754 - &sp->rxd_map, GFP_KERNEL); 1755 - if (!sp->rxd) 1756 - goto err_free_irq_0; 1757 - 1758 - sp->txd = dma_alloc_coherent(&pdev->dev, IPG_TX_RING_BYTES, 1759 - &sp->txd_map, GFP_KERNEL); 1760 - if (!sp->txd) 1761 - goto err_free_rx_1; 1762 - 1763 - rc = init_rfdlist(dev); 1764 - if (rc < 0) { 1765 - netdev_info(dev, "Error during configuration\n"); 1766 - goto err_free_tx_2; 1767 - } 1768 - 1769 - init_tfdlist(dev); 1770 - 1771 - rc = ipg_io_config(dev); 1772 - if (rc < 0) { 1773 - netdev_info(dev, "Error during configuration\n"); 1774 - goto err_release_tfdlist_3; 1775 - } 1776 - 1777 - /* Resolve autonegotiation. 
*/ 1778 - if (ipg_config_autoneg(dev) < 0) 1779 - netdev_info(dev, "Auto-negotiation error\n"); 1780 - 1781 - /* initialize JUMBO Frame control variable */ 1782 - sp->jumbo.found_start = 0; 1783 - sp->jumbo.current_size = 0; 1784 - sp->jumbo.skb = NULL; 1785 - 1786 - /* Enable transmit and receive operation of the IPG. */ 1787 - ipg_w32((ipg_r32(MAC_CTRL) | IPG_MC_RX_ENABLE | IPG_MC_TX_ENABLE) & 1788 - IPG_MC_RSVD_MASK, MAC_CTRL); 1789 - 1790 - netif_start_queue(dev); 1791 - out: 1792 - return rc; 1793 - 1794 - err_release_tfdlist_3: 1795 - ipg_tx_clear(sp); 1796 - ipg_rx_clear(sp); 1797 - err_free_tx_2: 1798 - dma_free_coherent(&pdev->dev, IPG_TX_RING_BYTES, sp->txd, sp->txd_map); 1799 - err_free_rx_1: 1800 - dma_free_coherent(&pdev->dev, IPG_RX_RING_BYTES, sp->rxd, sp->rxd_map); 1801 - err_free_irq_0: 1802 - free_irq(pdev->irq, dev); 1803 - goto out; 1804 - } 1805 - 1806 - static int ipg_nic_stop(struct net_device *dev) 1807 - { 1808 - struct ipg_nic_private *sp = netdev_priv(dev); 1809 - void __iomem *ioaddr = sp->ioaddr; 1810 - struct pci_dev *pdev = sp->pdev; 1811 - 1812 - IPG_DEBUG_MSG("_nic_stop\n"); 1813 - 1814 - netif_stop_queue(dev); 1815 - 1816 - IPG_DUMPTFDLIST(dev); 1817 - 1818 - do { 1819 - (void) ipg_r16(INT_STATUS_ACK); 1820 - 1821 - ipg_reset(dev, IPG_AC_GLOBAL_RESET | IPG_AC_HOST | IPG_AC_DMA); 1822 - 1823 - synchronize_irq(pdev->irq); 1824 - } while (ipg_r16(INT_ENABLE) & IPG_IE_RSVD_MASK); 1825 - 1826 - ipg_rx_clear(sp); 1827 - 1828 - ipg_tx_clear(sp); 1829 - 1830 - pci_free_consistent(pdev, IPG_RX_RING_BYTES, sp->rxd, sp->rxd_map); 1831 - pci_free_consistent(pdev, IPG_TX_RING_BYTES, sp->txd, sp->txd_map); 1832 - 1833 - free_irq(pdev->irq, dev); 1834 - 1835 - return 0; 1836 - } 1837 - 1838 - static netdev_tx_t ipg_nic_hard_start_xmit(struct sk_buff *skb, 1839 - struct net_device *dev) 1840 - { 1841 - struct ipg_nic_private *sp = netdev_priv(dev); 1842 - void __iomem *ioaddr = sp->ioaddr; 1843 - unsigned int entry = sp->tx_current % 
IPG_TFDLIST_LENGTH; 1844 - unsigned long flags; 1845 - struct ipg_tx *txfd; 1846 - 1847 - IPG_DDEBUG_MSG("_nic_hard_start_xmit\n"); 1848 - 1849 - /* If in 10Mbps mode, stop the transmit queue so 1850 - * no more transmit frames are accepted. 1851 - */ 1852 - if (sp->tenmbpsmode) 1853 - netif_stop_queue(dev); 1854 - 1855 - if (sp->reset_current_tfd) { 1856 - sp->reset_current_tfd = 0; 1857 - entry = 0; 1858 - } 1859 - 1860 - txfd = sp->txd + entry; 1861 - 1862 - sp->tx_buff[entry] = skb; 1863 - 1864 - /* Clear all TFC fields, except TFDDONE. */ 1865 - txfd->tfc = cpu_to_le64(IPG_TFC_TFDDONE); 1866 - 1867 - /* Specify the TFC field within the TFD. */ 1868 - txfd->tfc |= cpu_to_le64(IPG_TFC_WORDALIGNDISABLED | 1869 - (IPG_TFC_FRAMEID & sp->tx_current) | 1870 - (IPG_TFC_FRAGCOUNT & (1 << 24))); 1871 - /* 1872 - * 16--17 (WordAlign) <- 3 (disable), 1873 - * 0--15 (FrameId) <- sp->tx_current, 1874 - * 24--27 (FragCount) <- 1 1875 - */ 1876 - 1877 - /* Request TxComplete interrupts at an interval defined 1878 - * by the constant IPG_FRAMESBETWEENTXCOMPLETES. 1879 - * Request TxComplete interrupt for every frame 1880 - * if in 10Mbps mode to accommodate problem with 10Mbps 1881 - * processing. 1882 - */ 1883 - if (sp->tenmbpsmode) 1884 - txfd->tfc |= cpu_to_le64(IPG_TFC_TXINDICATE); 1885 - txfd->tfc |= cpu_to_le64(IPG_TFC_TXDMAINDICATE); 1886 - /* Based on compilation option, determine if FCS is to be 1887 - * appended to transmit frame by IPG. 1888 - */ 1889 - if (!(IPG_APPEND_FCS_ON_TX)) 1890 - txfd->tfc |= cpu_to_le64(IPG_TFC_FCSAPPENDDISABLE); 1891 - 1892 - /* Based on compilation option, determine if IP, TCP and/or 1893 - * UDP checksums are to be added to transmit frame by IPG. 
1894 - */ 1895 - if (IPG_ADD_IPCHECKSUM_ON_TX) 1896 - txfd->tfc |= cpu_to_le64(IPG_TFC_IPCHECKSUMENABLE); 1897 - 1898 - if (IPG_ADD_TCPCHECKSUM_ON_TX) 1899 - txfd->tfc |= cpu_to_le64(IPG_TFC_TCPCHECKSUMENABLE); 1900 - 1901 - if (IPG_ADD_UDPCHECKSUM_ON_TX) 1902 - txfd->tfc |= cpu_to_le64(IPG_TFC_UDPCHECKSUMENABLE); 1903 - 1904 - /* Based on compilation option, determine if VLAN tag info is to be 1905 - * inserted into transmit frame by IPG. 1906 - */ 1907 - if (IPG_INSERT_MANUAL_VLAN_TAG) { 1908 - txfd->tfc |= cpu_to_le64(IPG_TFC_VLANTAGINSERT | 1909 - ((u64) IPG_MANUAL_VLAN_VID << 32) | 1910 - ((u64) IPG_MANUAL_VLAN_CFI << 44) | 1911 - ((u64) IPG_MANUAL_VLAN_USERPRIORITY << 45)); 1912 - } 1913 - 1914 - /* The fragment start location within system memory is defined 1915 - * by the sk_buff structure's data field. The physical address 1916 - * of this location within the system's virtual memory space 1917 - * is determined using the IPG_HOST2BUS_MAP function. 1918 - */ 1919 - txfd->frag_info = cpu_to_le64(pci_map_single(sp->pdev, skb->data, 1920 - skb->len, PCI_DMA_TODEVICE)); 1921 - 1922 - /* The length of the fragment within system memory is defined by 1923 - * the sk_buff structure's len field. 1924 - */ 1925 - txfd->frag_info |= cpu_to_le64(IPG_TFI_FRAGLEN & 1926 - ((u64) (skb->len & 0xffff) << 48)); 1927 - 1928 - /* Clear the TFDDone bit last to indicate the TFD is ready 1929 - * for transfer to the IPG. 
1930 - */ 1931 - txfd->tfc &= cpu_to_le64(~IPG_TFC_TFDDONE); 1932 - 1933 - spin_lock_irqsave(&sp->lock, flags); 1934 - 1935 - sp->tx_current++; 1936 - 1937 - mmiowb(); 1938 - 1939 - ipg_w32(IPG_DC_TX_DMA_POLL_NOW, DMA_CTRL); 1940 - 1941 - if (sp->tx_current == (sp->tx_dirty + IPG_TFDLIST_LENGTH)) 1942 - netif_stop_queue(dev); 1943 - 1944 - spin_unlock_irqrestore(&sp->lock, flags); 1945 - 1946 - return NETDEV_TX_OK; 1947 - } 1948 - 1949 - static void ipg_set_phy_default_param(unsigned char rev, 1950 - struct net_device *dev, int phy_address) 1951 - { 1952 - unsigned short length; 1953 - unsigned char revision; 1954 - const unsigned short *phy_param; 1955 - unsigned short address, value; 1956 - 1957 - phy_param = &DefaultPhyParam[0]; 1958 - length = *phy_param & 0x00FF; 1959 - revision = (unsigned char)((*phy_param) >> 8); 1960 - phy_param++; 1961 - while (length != 0) { 1962 - if (rev == revision) { 1963 - while (length > 1) { 1964 - address = *phy_param; 1965 - value = *(phy_param + 1); 1966 - phy_param += 2; 1967 - mdio_write(dev, phy_address, address, value); 1968 - length -= 4; 1969 - } 1970 - break; 1971 - } else { 1972 - phy_param += length / 2; 1973 - length = *phy_param & 0x00FF; 1974 - revision = (unsigned char)((*phy_param) >> 8); 1975 - phy_param++; 1976 - } 1977 - } 1978 - } 1979 - 1980 - static int read_eeprom(struct net_device *dev, int eep_addr) 1981 - { 1982 - void __iomem *ioaddr = ipg_ioaddr(dev); 1983 - unsigned int i; 1984 - int ret = 0; 1985 - u16 value; 1986 - 1987 - value = IPG_EC_EEPROM_READOPCODE | (eep_addr & 0xff); 1988 - ipg_w16(value, EEPROM_CTRL); 1989 - 1990 - for (i = 0; i < 1000; i++) { 1991 - u16 data; 1992 - 1993 - mdelay(10); 1994 - data = ipg_r16(EEPROM_CTRL); 1995 - if (!(data & IPG_EC_EEPROM_BUSY)) { 1996 - ret = ipg_r16(EEPROM_DATA); 1997 - break; 1998 - } 1999 - } 2000 - return ret; 2001 - } 2002 - 2003 - static void ipg_init_mii(struct net_device *dev) 2004 - { 2005 - struct ipg_nic_private *sp = netdev_priv(dev); 2006 - 
struct mii_if_info *mii_if = &sp->mii_if; 2007 - int phyaddr; 2008 - 2009 - mii_if->dev = dev; 2010 - mii_if->mdio_read = mdio_read; 2011 - mii_if->mdio_write = mdio_write; 2012 - mii_if->phy_id_mask = 0x1f; 2013 - mii_if->reg_num_mask = 0x1f; 2014 - 2015 - mii_if->phy_id = phyaddr = ipg_find_phyaddr(dev); 2016 - 2017 - if (phyaddr != 0x1f) { 2018 - u16 mii_phyctrl, mii_1000cr; 2019 - 2020 - mii_1000cr = mdio_read(dev, phyaddr, MII_CTRL1000); 2021 - mii_1000cr |= ADVERTISE_1000FULL | ADVERTISE_1000HALF | 2022 - GMII_PHY_1000BASETCONTROL_PreferMaster; 2023 - mdio_write(dev, phyaddr, MII_CTRL1000, mii_1000cr); 2024 - 2025 - mii_phyctrl = mdio_read(dev, phyaddr, MII_BMCR); 2026 - 2027 - /* Set default phyparam */ 2028 - ipg_set_phy_default_param(sp->pdev->revision, dev, phyaddr); 2029 - 2030 - /* Reset PHY */ 2031 - mii_phyctrl |= BMCR_RESET | BMCR_ANRESTART; 2032 - mdio_write(dev, phyaddr, MII_BMCR, mii_phyctrl); 2033 - 2034 - } 2035 - } 2036 - 2037 - static int ipg_hw_init(struct net_device *dev) 2038 - { 2039 - struct ipg_nic_private *sp = netdev_priv(dev); 2040 - void __iomem *ioaddr = sp->ioaddr; 2041 - unsigned int i; 2042 - int rc; 2043 - 2044 - /* Read/Write and Reset EEPROM Value */ 2045 - /* Read LED Mode Configuration from EEPROM */ 2046 - sp->led_mode = read_eeprom(dev, 6); 2047 - 2048 - /* Reset all functions within the IPG. Do not assert 2049 - * RST_OUT as not compatible with some PHYs. 2050 - */ 2051 - rc = ipg_reset(dev, IPG_RESET_MASK); 2052 - if (rc < 0) 2053 - goto out; 2054 - 2055 - ipg_init_mii(dev); 2056 - 2057 - /* Read MAC Address from EEPROM */ 2058 - for (i = 0; i < 3; i++) 2059 - sp->station_addr[i] = read_eeprom(dev, 16 + i); 2060 - 2061 - for (i = 0; i < 3; i++) 2062 - ipg_w16(sp->station_addr[i], STATION_ADDRESS_0 + 2*i); 2063 - 2064 - /* Set station address in ethernet_device structure. 
-	 */
-	dev->dev_addr[0] = ipg_r16(STATION_ADDRESS_0) & 0x00ff;
-	dev->dev_addr[1] = (ipg_r16(STATION_ADDRESS_0) & 0xff00) >> 8;
-	dev->dev_addr[2] = ipg_r16(STATION_ADDRESS_1) & 0x00ff;
-	dev->dev_addr[3] = (ipg_r16(STATION_ADDRESS_1) & 0xff00) >> 8;
-	dev->dev_addr[4] = ipg_r16(STATION_ADDRESS_2) & 0x00ff;
-	dev->dev_addr[5] = (ipg_r16(STATION_ADDRESS_2) & 0xff00) >> 8;
-out:
-	return rc;
-}
-
-static int ipg_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
-{
-	struct ipg_nic_private *sp = netdev_priv(dev);
-	int rc;
-
-	mutex_lock(&sp->mii_mutex);
-	rc = generic_mii_ioctl(&sp->mii_if, if_mii(ifr), cmd, NULL);
-	mutex_unlock(&sp->mii_mutex);
-
-	return rc;
-}
-
-static int ipg_nic_change_mtu(struct net_device *dev, int new_mtu)
-{
-	struct ipg_nic_private *sp = netdev_priv(dev);
-	int err;
-
-	/* Function to accommodate changes to Maximum Transfer Unit
-	 * (or MTU) of IPG NIC. Cannot use default function since
-	 * the default will not allow for MTU > 1500 bytes.
-	 */
-
-	IPG_DEBUG_MSG("_nic_change_mtu\n");
-
-	/*
-	 * Check that the new MTU value is between 68 (14 byte header, 46 byte
-	 * payload, 4 byte FCS) and 10 KB, which is the largest supported MTU.
-	 */
-	if (new_mtu < 68 || new_mtu > 10240)
-		return -EINVAL;
-
-	err = ipg_nic_stop(dev);
-	if (err)
-		return err;
-
-	dev->mtu = new_mtu;
-
-	sp->max_rxframe_size = new_mtu;
-
-	sp->rxfrag_size = new_mtu;
-	if (sp->rxfrag_size > 4088)
-		sp->rxfrag_size = 4088;
-
-	sp->rxsupport_size = sp->max_rxframe_size;
-
-	if (new_mtu > 0x0600)
-		sp->is_jumbo = true;
-	else
-		sp->is_jumbo = false;
-
-	return ipg_nic_open(dev);
-}
-
-static int ipg_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
-{
-	struct ipg_nic_private *sp = netdev_priv(dev);
-	int rc;
-
-	mutex_lock(&sp->mii_mutex);
-	rc = mii_ethtool_gset(&sp->mii_if, cmd);
-	mutex_unlock(&sp->mii_mutex);
-
-	return rc;
-}
-
-static int ipg_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
-{
-	struct ipg_nic_private *sp = netdev_priv(dev);
-	int rc;
-
-	mutex_lock(&sp->mii_mutex);
-	rc = mii_ethtool_sset(&sp->mii_if, cmd);
-	mutex_unlock(&sp->mii_mutex);
-
-	return rc;
-}
-
-static int ipg_nway_reset(struct net_device *dev)
-{
-	struct ipg_nic_private *sp = netdev_priv(dev);
-	int rc;
-
-	mutex_lock(&sp->mii_mutex);
-	rc = mii_nway_restart(&sp->mii_if);
-	mutex_unlock(&sp->mii_mutex);
-
-	return rc;
-}
-
-static const struct ethtool_ops ipg_ethtool_ops = {
-	.get_settings = ipg_get_settings,
-	.set_settings = ipg_set_settings,
-	.nway_reset   = ipg_nway_reset,
-};
-
-static void ipg_remove(struct pci_dev *pdev)
-{
-	struct net_device *dev = pci_get_drvdata(pdev);
-	struct ipg_nic_private *sp = netdev_priv(dev);
-
-	IPG_DEBUG_MSG("_remove\n");
-
-	/* Un-register Ethernet device. */
-	unregister_netdev(dev);
-
-	pci_iounmap(pdev, sp->ioaddr);
-
-	pci_release_regions(pdev);
-
-	free_netdev(dev);
-	pci_disable_device(pdev);
-}
-
-static const struct net_device_ops ipg_netdev_ops = {
-	.ndo_open		= ipg_nic_open,
-	.ndo_stop		= ipg_nic_stop,
-	.ndo_start_xmit		= ipg_nic_hard_start_xmit,
-	.ndo_get_stats		= ipg_nic_get_stats,
-	.ndo_set_rx_mode	= ipg_nic_set_multicast_list,
-	.ndo_do_ioctl		= ipg_ioctl,
-	.ndo_tx_timeout		= ipg_tx_timeout,
-	.ndo_change_mtu		= ipg_nic_change_mtu,
-	.ndo_set_mac_address	= eth_mac_addr,
-	.ndo_validate_addr	= eth_validate_addr,
-};
-
-static int ipg_probe(struct pci_dev *pdev, const struct pci_device_id *id)
-{
-	unsigned int i = id->driver_data;
-	struct ipg_nic_private *sp;
-	struct net_device *dev;
-	void __iomem *ioaddr;
-	int rc;
-
-	rc = pci_enable_device(pdev);
-	if (rc < 0)
-		goto out;
-
-	pr_info("%s: %s\n", pci_name(pdev), ipg_brand_name[i]);
-
-	pci_set_master(pdev);
-
-	rc = pci_set_dma_mask(pdev, DMA_BIT_MASK(40));
-	if (rc < 0) {
-		rc = pci_set_dma_mask(pdev, DMA_BIT_MASK(32));
-		if (rc < 0) {
-			pr_err("%s: DMA config failed\n", pci_name(pdev));
-			goto err_disable_0;
-		}
-	}
-
-	/*
-	 * Initialize net device.
-	 */
-	dev = alloc_etherdev(sizeof(struct ipg_nic_private));
-	if (!dev) {
-		rc = -ENOMEM;
-		goto err_disable_0;
-	}
-
-	sp = netdev_priv(dev);
-	spin_lock_init(&sp->lock);
-	mutex_init(&sp->mii_mutex);
-
-	sp->is_jumbo = IPG_IS_JUMBO;
-	sp->rxfrag_size = IPG_RXFRAG_SIZE;
-	sp->rxsupport_size = IPG_RXSUPPORT_SIZE;
-	sp->max_rxframe_size = IPG_MAX_RXFRAME_SIZE;
-
-	/* Declare IPG NIC functions for Ethernet device methods. */
-	dev->netdev_ops = &ipg_netdev_ops;
-	SET_NETDEV_DEV(dev, &pdev->dev);
-	dev->ethtool_ops = &ipg_ethtool_ops;
-
-	rc = pci_request_regions(pdev, DRV_NAME);
-	if (rc)
-		goto err_free_dev_1;
-
-	ioaddr = pci_iomap(pdev, 1, pci_resource_len(pdev, 1));
-	if (!ioaddr) {
-		pr_err("%s: cannot map MMIO\n", pci_name(pdev));
-		rc = -EIO;
-		goto err_release_regions_2;
-	}
-
-	/* Save the pointer to the PCI device information. */
-	sp->ioaddr = ioaddr;
-	sp->pdev = pdev;
-	sp->dev = dev;
-
-	INIT_DELAYED_WORK(&sp->task, ipg_reset_after_host_error);
-
-	pci_set_drvdata(pdev, dev);
-
-	rc = ipg_hw_init(dev);
-	if (rc < 0)
-		goto err_unmap_3;
-
-	rc = register_netdev(dev);
-	if (rc < 0)
-		goto err_unmap_3;
-
-	netdev_info(dev, "Ethernet device registered\n");
-out:
-	return rc;
-
-err_unmap_3:
-	pci_iounmap(pdev, ioaddr);
-err_release_regions_2:
-	pci_release_regions(pdev);
-err_free_dev_1:
-	free_netdev(dev);
-err_disable_0:
-	pci_disable_device(pdev);
-	goto out;
-}
-
-static struct pci_driver ipg_pci_driver = {
-	.name		= IPG_DRIVER_NAME,
-	.id_table	= ipg_pci_tbl,
-	.probe		= ipg_probe,
-	.remove		= ipg_remove,
-};
-
-module_pci_driver(ipg_pci_driver);
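The MTU handler removed above (ipg_nic_change_mtu) does more than range-check: it caps the receive fragment size at 4088 bytes and switches into jumbo mode above 0x600 (1536) bytes. A standalone sketch of just that sizing arithmetic, with a hypothetical struct standing in for the driver's private data:

```c
#include <stdbool.h>

/* Sketch of the sizing logic in the removed ipg_nic_change_mtu().
 * ipg_sizes is a stand-in for the relevant ipg_nic_private fields. */
struct ipg_sizes {
	unsigned long max_rxframe_size;
	unsigned long rxfrag_size;
	unsigned long rxsupport_size;
	bool is_jumbo;
};

static int ipg_compute_sizes(int new_mtu, struct ipg_sizes *s)
{
	/* 68 = 14-byte header + 46-byte payload + 4-byte FCS;
	 * 10240 is the largest MTU the hardware supports. */
	if (new_mtu < 68 || new_mtu > 10240)
		return -1;

	s->max_rxframe_size = new_mtu;
	/* One RX fragment holds at most 4088 bytes. */
	s->rxfrag_size = new_mtu > 4088 ? 4088 : (unsigned long)new_mtu;
	s->rxsupport_size = s->max_rxframe_size;
	/* Frames larger than 0x600 (1536) need the jumbo RX path. */
	s->is_jumbo = new_mtu > 0x0600;
	return 0;
}
```

The real handler additionally stops the NIC, commits dev->mtu, and reopens the device, since the RX ring must be rebuilt with the new fragment size.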
-748
drivers/net/ethernet/icplus/ipg.h
··· 1 - /* 2 - * Include file for Gigabit Ethernet device driver for Network 3 - * Interface Cards (NICs) utilizing the Tamarack Microelectronics 4 - * Inc. IPG Gigabit or Triple Speed Ethernet Media Access 5 - * Controller. 6 - */ 7 - #ifndef __LINUX_IPG_H 8 - #define __LINUX_IPG_H 9 - 10 - #include <linux/module.h> 11 - 12 - #include <linux/kernel.h> 13 - #include <linux/pci.h> 14 - #include <linux/ioport.h> 15 - #include <linux/errno.h> 16 - #include <asm/io.h> 17 - #include <linux/delay.h> 18 - #include <linux/types.h> 19 - #include <linux/netdevice.h> 20 - #include <linux/etherdevice.h> 21 - #include <linux/skbuff.h> 22 - #include <asm/bitops.h> 23 - 24 - /* 25 - * Constants 26 - */ 27 - 28 - /* GMII based PHY IDs */ 29 - #define NS 0x2000 30 - #define MARVELL 0x0141 31 - #define ICPLUS_PHY 0x243 32 - 33 - /* NIC Physical Layer Device MII register fields. */ 34 - #define MII_PHY_SELECTOR_IEEE8023 0x0001 35 - #define MII_PHY_TECHABILITYFIELD 0x1FE0 36 - 37 - /* GMII_PHY_1000 need to set to prefer master */ 38 - #define GMII_PHY_1000BASETCONTROL_PreferMaster 0x0400 39 - 40 - /* NIC Physical Layer Device GMII constants. */ 41 - #define GMII_PREAMBLE 0xFFFFFFFF 42 - #define GMII_ST 0x1 43 - #define GMII_READ 0x2 44 - #define GMII_WRITE 0x1 45 - #define GMII_TA_READ_MASK 0x1 46 - #define GMII_TA_WRITE 0x2 47 - 48 - /* I/O register offsets. 
*/ 49 - enum ipg_regs { 50 - DMA_CTRL = 0x00, 51 - RX_DMA_STATUS = 0x08, /* Unused + reserved */ 52 - TFD_LIST_PTR_0 = 0x10, 53 - TFD_LIST_PTR_1 = 0x14, 54 - TX_DMA_BURST_THRESH = 0x18, 55 - TX_DMA_URGENT_THRESH = 0x19, 56 - TX_DMA_POLL_PERIOD = 0x1a, 57 - RFD_LIST_PTR_0 = 0x1c, 58 - RFD_LIST_PTR_1 = 0x20, 59 - RX_DMA_BURST_THRESH = 0x24, 60 - RX_DMA_URGENT_THRESH = 0x25, 61 - RX_DMA_POLL_PERIOD = 0x26, 62 - DEBUG_CTRL = 0x2c, 63 - ASIC_CTRL = 0x30, 64 - FIFO_CTRL = 0x38, /* Unused */ 65 - FLOW_OFF_THRESH = 0x3c, 66 - FLOW_ON_THRESH = 0x3e, 67 - EEPROM_DATA = 0x48, 68 - EEPROM_CTRL = 0x4a, 69 - EXPROM_ADDR = 0x4c, /* Unused */ 70 - EXPROM_DATA = 0x50, /* Unused */ 71 - WAKE_EVENT = 0x51, /* Unused */ 72 - COUNTDOWN = 0x54, /* Unused */ 73 - INT_STATUS_ACK = 0x5a, 74 - INT_ENABLE = 0x5c, 75 - INT_STATUS = 0x5e, /* Unused */ 76 - TX_STATUS = 0x60, 77 - MAC_CTRL = 0x6c, 78 - VLAN_TAG = 0x70, /* Unused */ 79 - PHY_SET = 0x75, 80 - PHY_CTRL = 0x76, 81 - STATION_ADDRESS_0 = 0x78, 82 - STATION_ADDRESS_1 = 0x7a, 83 - STATION_ADDRESS_2 = 0x7c, 84 - MAX_FRAME_SIZE = 0x86, 85 - RECEIVE_MODE = 0x88, 86 - HASHTABLE_0 = 0x8c, 87 - HASHTABLE_1 = 0x90, 88 - RMON_STATISTICS_MASK = 0x98, 89 - STATISTICS_MASK = 0x9c, 90 - RX_JUMBO_FRAMES = 0xbc, /* Unused */ 91 - TCP_CHECKSUM_ERRORS = 0xc0, /* Unused */ 92 - IP_CHECKSUM_ERRORS = 0xc2, /* Unused */ 93 - UDP_CHECKSUM_ERRORS = 0xc4, /* Unused */ 94 - TX_JUMBO_FRAMES = 0xf4 /* Unused */ 95 - }; 96 - 97 - /* Ethernet MIB statistic register offsets. 
*/ 98 - #define IPG_OCTETRCVOK 0xA8 99 - #define IPG_MCSTOCTETRCVDOK 0xAC 100 - #define IPG_BCSTOCTETRCVOK 0xB0 101 - #define IPG_FRAMESRCVDOK 0xB4 102 - #define IPG_MCSTFRAMESRCVDOK 0xB8 103 - #define IPG_BCSTFRAMESRCVDOK 0xBE 104 - #define IPG_MACCONTROLFRAMESRCVD 0xC6 105 - #define IPG_FRAMETOOLONGERRORS 0xC8 106 - #define IPG_INRANGELENGTHERRORS 0xCA 107 - #define IPG_FRAMECHECKSEQERRORS 0xCC 108 - #define IPG_FRAMESLOSTRXERRORS 0xCE 109 - #define IPG_OCTETXMTOK 0xD0 110 - #define IPG_MCSTOCTETXMTOK 0xD4 111 - #define IPG_BCSTOCTETXMTOK 0xD8 112 - #define IPG_FRAMESXMTDOK 0xDC 113 - #define IPG_MCSTFRAMESXMTDOK 0xE0 114 - #define IPG_FRAMESWDEFERREDXMT 0xE4 115 - #define IPG_LATECOLLISIONS 0xE8 116 - #define IPG_MULTICOLFRAMES 0xEC 117 - #define IPG_SINGLECOLFRAMES 0xF0 118 - #define IPG_BCSTFRAMESXMTDOK 0xF6 119 - #define IPG_CARRIERSENSEERRORS 0xF8 120 - #define IPG_MACCONTROLFRAMESXMTDOK 0xFA 121 - #define IPG_FRAMESABORTXSCOLLS 0xFC 122 - #define IPG_FRAMESWEXDEFERRAL 0xFE 123 - 124 - /* RMON statistic register offsets. 
*/ 125 - #define IPG_ETHERSTATSCOLLISIONS 0x100 126 - #define IPG_ETHERSTATSOCTETSTRANSMIT 0x104 127 - #define IPG_ETHERSTATSPKTSTRANSMIT 0x108 128 - #define IPG_ETHERSTATSPKTS64OCTESTSTRANSMIT 0x10C 129 - #define IPG_ETHERSTATSPKTS65TO127OCTESTSTRANSMIT 0x110 130 - #define IPG_ETHERSTATSPKTS128TO255OCTESTSTRANSMIT 0x114 131 - #define IPG_ETHERSTATSPKTS256TO511OCTESTSTRANSMIT 0x118 132 - #define IPG_ETHERSTATSPKTS512TO1023OCTESTSTRANSMIT 0x11C 133 - #define IPG_ETHERSTATSPKTS1024TO1518OCTESTSTRANSMIT 0x120 134 - #define IPG_ETHERSTATSCRCALIGNERRORS 0x124 135 - #define IPG_ETHERSTATSUNDERSIZEPKTS 0x128 136 - #define IPG_ETHERSTATSFRAGMENTS 0x12C 137 - #define IPG_ETHERSTATSJABBERS 0x130 138 - #define IPG_ETHERSTATSOCTETS 0x134 139 - #define IPG_ETHERSTATSPKTS 0x138 140 - #define IPG_ETHERSTATSPKTS64OCTESTS 0x13C 141 - #define IPG_ETHERSTATSPKTS65TO127OCTESTS 0x140 142 - #define IPG_ETHERSTATSPKTS128TO255OCTESTS 0x144 143 - #define IPG_ETHERSTATSPKTS256TO511OCTESTS 0x148 144 - #define IPG_ETHERSTATSPKTS512TO1023OCTESTS 0x14C 145 - #define IPG_ETHERSTATSPKTS1024TO1518OCTESTS 0x150 146 - 147 - /* RMON statistic register equivalents. */ 148 - #define IPG_ETHERSTATSMULTICASTPKTSTRANSMIT 0xE0 149 - #define IPG_ETHERSTATSBROADCASTPKTSTRANSMIT 0xF6 150 - #define IPG_ETHERSTATSMULTICASTPKTS 0xB8 151 - #define IPG_ETHERSTATSBROADCASTPKTS 0xBE 152 - #define IPG_ETHERSTATSOVERSIZEPKTS 0xC8 153 - #define IPG_ETHERSTATSDROPEVENTS 0xCE 154 - 155 - /* Serial EEPROM offsets */ 156 - #define IPG_EEPROM_CONFIGPARAM 0x00 157 - #define IPG_EEPROM_ASICCTRL 0x01 158 - #define IPG_EEPROM_SUBSYSTEMVENDORID 0x02 159 - #define IPG_EEPROM_SUBSYSTEMID 0x03 160 - #define IPG_EEPROM_STATIONADDRESS0 0x10 161 - #define IPG_EEPROM_STATIONADDRESS1 0x11 162 - #define IPG_EEPROM_STATIONADDRESS2 0x12 163 - 164 - /* Register & data structure bit masks */ 165 - 166 - /* PCI register masks. 
*/ 167 - 168 - /* IOBaseAddress */ 169 - #define IPG_PIB_RSVD_MASK 0xFFFFFE01 170 - #define IPG_PIB_IOBASEADDRESS 0xFFFFFF00 171 - #define IPG_PIB_IOBASEADDRIND 0x00000001 172 - 173 - /* MemBaseAddress */ 174 - #define IPG_PMB_RSVD_MASK 0xFFFFFE07 175 - #define IPG_PMB_MEMBASEADDRIND 0x00000001 176 - #define IPG_PMB_MEMMAPTYPE 0x00000006 177 - #define IPG_PMB_MEMMAPTYPE0 0x00000002 178 - #define IPG_PMB_MEMMAPTYPE1 0x00000004 179 - #define IPG_PMB_MEMBASEADDRESS 0xFFFFFE00 180 - 181 - /* ConfigStatus */ 182 - #define IPG_CS_RSVD_MASK 0xFFB0 183 - #define IPG_CS_CAPABILITIES 0x0010 184 - #define IPG_CS_66MHZCAPABLE 0x0020 185 - #define IPG_CS_FASTBACK2BACK 0x0080 186 - #define IPG_CS_DATAPARITYREPORTED 0x0100 187 - #define IPG_CS_DEVSELTIMING 0x0600 188 - #define IPG_CS_SIGNALEDTARGETABORT 0x0800 189 - #define IPG_CS_RECEIVEDTARGETABORT 0x1000 190 - #define IPG_CS_RECEIVEDMASTERABORT 0x2000 191 - #define IPG_CS_SIGNALEDSYSTEMERROR 0x4000 192 - #define IPG_CS_DETECTEDPARITYERROR 0x8000 193 - 194 - /* TFD data structure masks. 
*/ 195 - 196 - /* TFDList, TFC */ 197 - #define IPG_TFC_RSVD_MASK 0x0000FFFF9FFFFFFFULL 198 - #define IPG_TFC_FRAMEID 0x000000000000FFFFULL 199 - #define IPG_TFC_WORDALIGN 0x0000000000030000ULL 200 - #define IPG_TFC_WORDALIGNTODWORD 0x0000000000000000ULL 201 - #define IPG_TFC_WORDALIGNTOWORD 0x0000000000020000ULL 202 - #define IPG_TFC_WORDALIGNDISABLED 0x0000000000030000ULL 203 - #define IPG_TFC_TCPCHECKSUMENABLE 0x0000000000040000ULL 204 - #define IPG_TFC_UDPCHECKSUMENABLE 0x0000000000080000ULL 205 - #define IPG_TFC_IPCHECKSUMENABLE 0x0000000000100000ULL 206 - #define IPG_TFC_FCSAPPENDDISABLE 0x0000000000200000ULL 207 - #define IPG_TFC_TXINDICATE 0x0000000000400000ULL 208 - #define IPG_TFC_TXDMAINDICATE 0x0000000000800000ULL 209 - #define IPG_TFC_FRAGCOUNT 0x000000000F000000ULL 210 - #define IPG_TFC_VLANTAGINSERT 0x0000000010000000ULL 211 - #define IPG_TFC_TFDDONE 0x0000000080000000ULL 212 - #define IPG_TFC_VID 0x00000FFF00000000ULL 213 - #define IPG_TFC_CFI 0x0000100000000000ULL 214 - #define IPG_TFC_USERPRIORITY 0x0000E00000000000ULL 215 - 216 - /* TFDList, FragInfo */ 217 - #define IPG_TFI_RSVD_MASK 0xFFFF00FFFFFFFFFFULL 218 - #define IPG_TFI_FRAGADDR 0x000000FFFFFFFFFFULL 219 - #define IPG_TFI_FRAGLEN 0xFFFF000000000000ULL 220 - 221 - /* RFD data structure masks. 
*/ 222 - 223 - /* RFDList, RFS */ 224 - #define IPG_RFS_RSVD_MASK 0x0000FFFFFFFFFFFFULL 225 - #define IPG_RFS_RXFRAMELEN 0x000000000000FFFFULL 226 - #define IPG_RFS_RXFIFOOVERRUN 0x0000000000010000ULL 227 - #define IPG_RFS_RXRUNTFRAME 0x0000000000020000ULL 228 - #define IPG_RFS_RXALIGNMENTERROR 0x0000000000040000ULL 229 - #define IPG_RFS_RXFCSERROR 0x0000000000080000ULL 230 - #define IPG_RFS_RXOVERSIZEDFRAME 0x0000000000100000ULL 231 - #define IPG_RFS_RXLENGTHERROR 0x0000000000200000ULL 232 - #define IPG_RFS_VLANDETECTED 0x0000000000400000ULL 233 - #define IPG_RFS_TCPDETECTED 0x0000000000800000ULL 234 - #define IPG_RFS_TCPERROR 0x0000000001000000ULL 235 - #define IPG_RFS_UDPDETECTED 0x0000000002000000ULL 236 - #define IPG_RFS_UDPERROR 0x0000000004000000ULL 237 - #define IPG_RFS_IPDETECTED 0x0000000008000000ULL 238 - #define IPG_RFS_IPERROR 0x0000000010000000ULL 239 - #define IPG_RFS_FRAMESTART 0x0000000020000000ULL 240 - #define IPG_RFS_FRAMEEND 0x0000000040000000ULL 241 - #define IPG_RFS_RFDDONE 0x0000000080000000ULL 242 - #define IPG_RFS_TCI 0x0000FFFF00000000ULL 243 - 244 - /* RFDList, FragInfo */ 245 - #define IPG_RFI_RSVD_MASK 0xFFFF00FFFFFFFFFFULL 246 - #define IPG_RFI_FRAGADDR 0x000000FFFFFFFFFFULL 247 - #define IPG_RFI_FRAGLEN 0xFFFF000000000000ULL 248 - 249 - /* I/O Register masks. 
*/ 250 - 251 - /* RMON Statistics Mask */ 252 - #define IPG_RZ_ALL 0x0FFFFFFF 253 - 254 - /* Statistics Mask */ 255 - #define IPG_SM_ALL 0x0FFFFFFF 256 - #define IPG_SM_OCTETRCVOK_FRAMESRCVDOK 0x00000001 257 - #define IPG_SM_MCSTOCTETRCVDOK_MCSTFRAMESRCVDOK 0x00000002 258 - #define IPG_SM_BCSTOCTETRCVDOK_BCSTFRAMESRCVDOK 0x00000004 259 - #define IPG_SM_RXJUMBOFRAMES 0x00000008 260 - #define IPG_SM_TCPCHECKSUMERRORS 0x00000010 261 - #define IPG_SM_IPCHECKSUMERRORS 0x00000020 262 - #define IPG_SM_UDPCHECKSUMERRORS 0x00000040 263 - #define IPG_SM_MACCONTROLFRAMESRCVD 0x00000080 264 - #define IPG_SM_FRAMESTOOLONGERRORS 0x00000100 265 - #define IPG_SM_INRANGELENGTHERRORS 0x00000200 266 - #define IPG_SM_FRAMECHECKSEQERRORS 0x00000400 267 - #define IPG_SM_FRAMESLOSTRXERRORS 0x00000800 268 - #define IPG_SM_OCTETXMTOK_FRAMESXMTOK 0x00001000 269 - #define IPG_SM_MCSTOCTETXMTOK_MCSTFRAMESXMTDOK 0x00002000 270 - #define IPG_SM_BCSTOCTETXMTOK_BCSTFRAMESXMTDOK 0x00004000 271 - #define IPG_SM_FRAMESWDEFERREDXMT 0x00008000 272 - #define IPG_SM_LATECOLLISIONS 0x00010000 273 - #define IPG_SM_MULTICOLFRAMES 0x00020000 274 - #define IPG_SM_SINGLECOLFRAMES 0x00040000 275 - #define IPG_SM_TXJUMBOFRAMES 0x00080000 276 - #define IPG_SM_CARRIERSENSEERRORS 0x00100000 277 - #define IPG_SM_MACCONTROLFRAMESXMTD 0x00200000 278 - #define IPG_SM_FRAMESABORTXSCOLLS 0x00400000 279 - #define IPG_SM_FRAMESWEXDEFERAL 0x00800000 280 - 281 - /* Countdown */ 282 - #define IPG_CD_RSVD_MASK 0x0700FFFF 283 - #define IPG_CD_COUNT 0x0000FFFF 284 - #define IPG_CD_COUNTDOWNSPEED 0x01000000 285 - #define IPG_CD_COUNTDOWNMODE 0x02000000 286 - #define IPG_CD_COUNTINTENABLED 0x04000000 287 - 288 - /* TxDMABurstThresh */ 289 - #define IPG_TB_RSVD_MASK 0xFF 290 - 291 - /* TxDMAUrgentThresh */ 292 - #define IPG_TU_RSVD_MASK 0xFF 293 - 294 - /* TxDMAPollPeriod */ 295 - #define IPG_TP_RSVD_MASK 0xFF 296 - 297 - /* RxDMAUrgentThresh */ 298 - #define IPG_RU_RSVD_MASK 0xFF 299 - 300 - /* RxDMAPollPeriod */ 301 - #define 
IPG_RP_RSVD_MASK 0xFF 302 - 303 - /* ReceiveMode */ 304 - #define IPG_RM_RSVD_MASK 0x3F 305 - #define IPG_RM_RECEIVEUNICAST 0x01 306 - #define IPG_RM_RECEIVEMULTICAST 0x02 307 - #define IPG_RM_RECEIVEBROADCAST 0x04 308 - #define IPG_RM_RECEIVEALLFRAMES 0x08 309 - #define IPG_RM_RECEIVEMULTICASTHASH 0x10 310 - #define IPG_RM_RECEIVEIPMULTICAST 0x20 311 - 312 - /* PhySet */ 313 - #define IPG_PS_MEM_LENB9B 0x01 314 - #define IPG_PS_MEM_LEN9 0x02 315 - #define IPG_PS_NON_COMPDET 0x04 316 - 317 - /* PhyCtrl */ 318 - #define IPG_PC_RSVD_MASK 0xFF 319 - #define IPG_PC_MGMTCLK_LO 0x00 320 - #define IPG_PC_MGMTCLK_HI 0x01 321 - #define IPG_PC_MGMTCLK 0x01 322 - #define IPG_PC_MGMTDATA 0x02 323 - #define IPG_PC_MGMTDIR 0x04 324 - #define IPG_PC_DUPLEX_POLARITY 0x08 325 - #define IPG_PC_DUPLEX_STATUS 0x10 326 - #define IPG_PC_LINK_POLARITY 0x20 327 - #define IPG_PC_LINK_SPEED 0xC0 328 - #define IPG_PC_LINK_SPEED_10MBPS 0x40 329 - #define IPG_PC_LINK_SPEED_100MBPS 0x80 330 - #define IPG_PC_LINK_SPEED_1000MBPS 0xC0 331 - 332 - /* DMACtrl */ 333 - #define IPG_DC_RSVD_MASK 0xC07D9818 334 - #define IPG_DC_RX_DMA_COMPLETE 0x00000008 335 - #define IPG_DC_RX_DMA_POLL_NOW 0x00000010 336 - #define IPG_DC_TX_DMA_COMPLETE 0x00000800 337 - #define IPG_DC_TX_DMA_POLL_NOW 0x00001000 338 - #define IPG_DC_TX_DMA_IN_PROG 0x00008000 339 - #define IPG_DC_RX_EARLY_DISABLE 0x00010000 340 - #define IPG_DC_MWI_DISABLE 0x00040000 341 - #define IPG_DC_TX_WRITE_BACK_DISABLE 0x00080000 342 - #define IPG_DC_TX_BURST_LIMIT 0x00700000 343 - #define IPG_DC_TARGET_ABORT 0x40000000 344 - #define IPG_DC_MASTER_ABORT 0x80000000 345 - 346 - /* ASICCtrl */ 347 - #define IPG_AC_RSVD_MASK 0x07FFEFF2 348 - #define IPG_AC_EXP_ROM_SIZE 0x00000002 349 - #define IPG_AC_PHY_SPEED10 0x00000010 350 - #define IPG_AC_PHY_SPEED100 0x00000020 351 - #define IPG_AC_PHY_SPEED1000 0x00000040 352 - #define IPG_AC_PHY_MEDIA 0x00000080 353 - #define IPG_AC_FORCED_CFG 0x00000700 354 - #define IPG_AC_D3RESETDISABLE 0x00000800 355 - 
#define IPG_AC_SPEED_UP_MODE 0x00002000 356 - #define IPG_AC_LED_MODE 0x00004000 357 - #define IPG_AC_RST_OUT_POLARITY 0x00008000 358 - #define IPG_AC_GLOBAL_RESET 0x00010000 359 - #define IPG_AC_RX_RESET 0x00020000 360 - #define IPG_AC_TX_RESET 0x00040000 361 - #define IPG_AC_DMA 0x00080000 362 - #define IPG_AC_FIFO 0x00100000 363 - #define IPG_AC_NETWORK 0x00200000 364 - #define IPG_AC_HOST 0x00400000 365 - #define IPG_AC_AUTO_INIT 0x00800000 366 - #define IPG_AC_RST_OUT 0x01000000 367 - #define IPG_AC_INT_REQUEST 0x02000000 368 - #define IPG_AC_RESET_BUSY 0x04000000 369 - #define IPG_AC_LED_SPEED 0x08000000 370 - #define IPG_AC_LED_MODE_BIT_1 0x20000000 371 - 372 - /* EepromCtrl */ 373 - #define IPG_EC_RSVD_MASK 0x83FF 374 - #define IPG_EC_EEPROM_ADDR 0x00FF 375 - #define IPG_EC_EEPROM_OPCODE 0x0300 376 - #define IPG_EC_EEPROM_SUBCOMMAD 0x0000 377 - #define IPG_EC_EEPROM_WRITEOPCODE 0x0100 378 - #define IPG_EC_EEPROM_READOPCODE 0x0200 379 - #define IPG_EC_EEPROM_ERASEOPCODE 0x0300 380 - #define IPG_EC_EEPROM_BUSY 0x8000 381 - 382 - /* FIFOCtrl */ 383 - #define IPG_FC_RSVD_MASK 0xC001 384 - #define IPG_FC_RAM_TEST_MODE 0x0001 385 - #define IPG_FC_TRANSMITTING 0x4000 386 - #define IPG_FC_RECEIVING 0x8000 387 - 388 - /* TxStatus */ 389 - #define IPG_TS_RSVD_MASK 0xFFFF00DD 390 - #define IPG_TS_TX_ERROR 0x00000001 391 - #define IPG_TS_LATE_COLLISION 0x00000004 392 - #define IPG_TS_TX_MAX_COLL 0x00000008 393 - #define IPG_TS_TX_UNDERRUN 0x00000010 394 - #define IPG_TS_TX_IND_REQD 0x00000040 395 - #define IPG_TS_TX_COMPLETE 0x00000080 396 - #define IPG_TS_TX_FRAMEID 0xFFFF0000 397 - 398 - /* WakeEvent */ 399 - #define IPG_WE_WAKE_PKT_ENABLE 0x01 400 - #define IPG_WE_MAGIC_PKT_ENABLE 0x02 401 - #define IPG_WE_LINK_EVT_ENABLE 0x04 402 - #define IPG_WE_WAKE_POLARITY 0x08 403 - #define IPG_WE_WAKE_PKT_EVT 0x10 404 - #define IPG_WE_MAGIC_PKT_EVT 0x20 405 - #define IPG_WE_LINK_EVT 0x40 406 - #define IPG_WE_WOL_ENABLE 0x80 407 - 408 - /* IntEnable */ 409 - #define 
IPG_IE_RSVD_MASK 0x1FFE 410 - #define IPG_IE_HOST_ERROR 0x0002 411 - #define IPG_IE_TX_COMPLETE 0x0004 412 - #define IPG_IE_MAC_CTRL_FRAME 0x0008 413 - #define IPG_IE_RX_COMPLETE 0x0010 414 - #define IPG_IE_RX_EARLY 0x0020 415 - #define IPG_IE_INT_REQUESTED 0x0040 416 - #define IPG_IE_UPDATE_STATS 0x0080 417 - #define IPG_IE_LINK_EVENT 0x0100 418 - #define IPG_IE_TX_DMA_COMPLETE 0x0200 419 - #define IPG_IE_RX_DMA_COMPLETE 0x0400 420 - #define IPG_IE_RFD_LIST_END 0x0800 421 - #define IPG_IE_RX_DMA_PRIORITY 0x1000 422 - 423 - /* IntStatus */ 424 - #define IPG_IS_RSVD_MASK 0x1FFF 425 - #define IPG_IS_INTERRUPT_STATUS 0x0001 426 - #define IPG_IS_HOST_ERROR 0x0002 427 - #define IPG_IS_TX_COMPLETE 0x0004 428 - #define IPG_IS_MAC_CTRL_FRAME 0x0008 429 - #define IPG_IS_RX_COMPLETE 0x0010 430 - #define IPG_IS_RX_EARLY 0x0020 431 - #define IPG_IS_INT_REQUESTED 0x0040 432 - #define IPG_IS_UPDATE_STATS 0x0080 433 - #define IPG_IS_LINK_EVENT 0x0100 434 - #define IPG_IS_TX_DMA_COMPLETE 0x0200 435 - #define IPG_IS_RX_DMA_COMPLETE 0x0400 436 - #define IPG_IS_RFD_LIST_END 0x0800 437 - #define IPG_IS_RX_DMA_PRIORITY 0x1000 438 - 439 - /* MACCtrl */ 440 - #define IPG_MC_RSVD_MASK 0x7FE33FA3 441 - #define IPG_MC_IFS_SELECT 0x00000003 442 - #define IPG_MC_IFS_4352BIT 0x00000003 443 - #define IPG_MC_IFS_1792BIT 0x00000002 444 - #define IPG_MC_IFS_1024BIT 0x00000001 445 - #define IPG_MC_IFS_96BIT 0x00000000 446 - #define IPG_MC_DUPLEX_SELECT 0x00000020 447 - #define IPG_MC_DUPLEX_SELECT_FD 0x00000020 448 - #define IPG_MC_DUPLEX_SELECT_HD 0x00000000 449 - #define IPG_MC_TX_FLOW_CONTROL_ENABLE 0x00000080 450 - #define IPG_MC_RX_FLOW_CONTROL_ENABLE 0x00000100 451 - #define IPG_MC_RCV_FCS 0x00000200 452 - #define IPG_MC_FIFO_LOOPBACK 0x00000400 453 - #define IPG_MC_MAC_LOOPBACK 0x00000800 454 - #define IPG_MC_AUTO_VLAN_TAGGING 0x00001000 455 - #define IPG_MC_AUTO_VLAN_UNTAGGING 0x00002000 456 - #define IPG_MC_COLLISION_DETECT 0x00010000 457 - #define IPG_MC_CARRIER_SENSE 0x00020000 458 - 
#define IPG_MC_STATISTICS_ENABLE 0x00200000 459 - #define IPG_MC_STATISTICS_DISABLE 0x00400000 460 - #define IPG_MC_STATISTICS_ENABLED 0x00800000 461 - #define IPG_MC_TX_ENABLE 0x01000000 462 - #define IPG_MC_TX_DISABLE 0x02000000 463 - #define IPG_MC_TX_ENABLED 0x04000000 464 - #define IPG_MC_RX_ENABLE 0x08000000 465 - #define IPG_MC_RX_DISABLE 0x10000000 466 - #define IPG_MC_RX_ENABLED 0x20000000 467 - #define IPG_MC_PAUSED 0x40000000 468 - 469 - /* 470 - * Tune 471 - */ 472 - 473 - /* Assign IPG_APPEND_FCS_ON_TX > 0 for auto FCS append on TX. */ 474 - #define IPG_APPEND_FCS_ON_TX 1 475 - 476 - /* Assign IPG_APPEND_FCS_ON_TX > 0 for auto FCS strip on RX. */ 477 - #define IPG_STRIP_FCS_ON_RX 1 478 - 479 - /* Assign IPG_DROP_ON_RX_ETH_ERRORS > 0 to drop RX frames with 480 - * Ethernet errors. 481 - */ 482 - #define IPG_DROP_ON_RX_ETH_ERRORS 1 483 - 484 - /* Assign IPG_INSERT_MANUAL_VLAN_TAG > 0 to insert VLAN tags manually 485 - * (via TFC). 486 - */ 487 - #define IPG_INSERT_MANUAL_VLAN_TAG 0 488 - 489 - /* Assign IPG_ADD_IPCHECKSUM_ON_TX > 0 for auto IP checksum on TX. */ 490 - #define IPG_ADD_IPCHECKSUM_ON_TX 0 491 - 492 - /* Assign IPG_ADD_TCPCHECKSUM_ON_TX > 0 for auto TCP checksum on TX. 493 - * DO NOT USE FOR SILICON REVISIONS B3 AND EARLIER. 494 - */ 495 - #define IPG_ADD_TCPCHECKSUM_ON_TX 0 496 - 497 - /* Assign IPG_ADD_UDPCHECKSUM_ON_TX > 0 for auto UDP checksum on TX. 498 - * DO NOT USE FOR SILICON REVISIONS B3 AND EARLIER. 499 - */ 500 - #define IPG_ADD_UDPCHECKSUM_ON_TX 0 501 - 502 - /* If inserting VLAN tags manually, assign the IPG_MANUAL_VLAN_xx 503 - * constants as desired. 
504 - */ 505 - #define IPG_MANUAL_VLAN_VID 0xABC 506 - #define IPG_MANUAL_VLAN_CFI 0x1 507 - #define IPG_MANUAL_VLAN_USERPRIORITY 0x5 508 - 509 - #define IPG_IO_REG_RANGE 0xFF 510 - #define IPG_MEM_REG_RANGE 0x154 511 - #define IPG_DRIVER_NAME "Sundance Technology IPG Triple-Speed Ethernet" 512 - #define IPG_NIC_PHY_ADDRESS 0x01 513 - #define IPG_DMALIST_ALIGN_PAD 0x07 514 - #define IPG_MULTICAST_HASHTABLE_SIZE 0x40 515 - 516 - /* Number of milliseconds to wait after issuing a software reset. 517 - * 0x05 <= IPG_AC_RESETWAIT to account for proper 10Mbps operation. 518 - */ 519 - #define IPG_AC_RESETWAIT 0x05 520 - 521 - /* Number of IPG_AC_RESETWAIT timeperiods before declaring timeout. */ 522 - #define IPG_AC_RESET_TIMEOUT 0x0A 523 - 524 - /* Minimum number of nanoseconds used to toggle MDC clock during 525 - * MII/GMII register access. 526 - */ 527 - #define IPG_PC_PHYCTRLWAIT_NS 200 528 - 529 - #define IPG_TFDLIST_LENGTH 0x100 530 - 531 - /* Number of frames between TxDMAComplete interrupt. 532 - * 0 < IPG_FRAMESBETWEENTXDMACOMPLETES <= IPG_TFDLIST_LENGTH 533 - */ 534 - #define IPG_FRAMESBETWEENTXDMACOMPLETES 0x1 535 - 536 - #define IPG_RFDLIST_LENGTH 0x100 537 - 538 - /* Maximum number of RFDs to process per interrupt. 539 - * 1 < IPG_MAXRFDPROCESS_COUNT < IPG_RFDLIST_LENGTH 540 - */ 541 - #define IPG_MAXRFDPROCESS_COUNT 0x80 542 - 543 - /* Minimum margin between last freed RFD, and current RFD. 544 - * 1 < IPG_MINUSEDRFDSTOFREE < IPG_RFDLIST_LENGTH 545 - */ 546 - #define IPG_MINUSEDRFDSTOFREE 0x80 547 - 548 - /* specify the jumbo frame maximum size 549 - * per unit is 0x600 (the rx_buffer size that one RFD can carry) 550 - */ 551 - #define MAX_JUMBOSIZE 0x8 /* max is 12K */ 552 - 553 - /* Key register values loaded at driver start up. */ 554 - 555 - /* TXDMAPollPeriod is specified in 320ns increments. 
556 - * 557 - * Value Time 558 - * --------------------- 559 - * 0x00-0x01 320ns 560 - * 0x03 ~1us 561 - * 0x1F ~10us 562 - * 0xFF ~82us 563 - */ 564 - #define IPG_TXDMAPOLLPERIOD_VALUE 0x26 565 - 566 - /* TxDMAUrgentThresh specifies the minimum amount of 567 - * data in the transmit FIFO before asserting an 568 - * urgent transmit DMA request. 569 - * 570 - * Value Min TxFIFO occupied space before urgent TX request 571 - * --------------------------------------------------------------- 572 - * 0x00-0x04 128 bytes (1024 bits) 573 - * 0x27 1248 bytes (~10000 bits) 574 - * 0x30 1536 bytes (12288 bits) 575 - * 0xFF 8192 bytes (65535 bits) 576 - */ 577 - #define IPG_TXDMAURGENTTHRESH_VALUE 0x04 578 - 579 - /* TxDMABurstThresh specifies the minimum amount of 580 - * free space in the transmit FIFO before asserting an 581 - * transmit DMA request. 582 - * 583 - * Value Min TxFIFO free space before TX request 584 - * ---------------------------------------------------- 585 - * 0x00-0x08 256 bytes 586 - * 0x30 1536 bytes 587 - * 0xFF 8192 bytes 588 - */ 589 - #define IPG_TXDMABURSTTHRESH_VALUE 0x30 590 - 591 - /* RXDMAPollPeriod is specified in 320ns increments. 592 - * 593 - * Value Time 594 - * --------------------- 595 - * 0x00-0x01 320ns 596 - * 0x03 ~1us 597 - * 0x1F ~10us 598 - * 0xFF ~82us 599 - */ 600 - #define IPG_RXDMAPOLLPERIOD_VALUE 0x01 601 - 602 - /* RxDMAUrgentThresh specifies the minimum amount of 603 - * free space within the receive FIFO before asserting 604 - * a urgent receive DMA request. 
605 - * 606 - * Value Min RxFIFO free space before urgent RX request 607 - * --------------------------------------------------------------- 608 - * 0x00-0x04 128 bytes (1024 bits) 609 - * 0x27 1248 bytes (~10000 bits) 610 - * 0x30 1536 bytes (12288 bits) 611 - * 0xFF 8192 bytes (65535 bits) 612 - */ 613 - #define IPG_RXDMAURGENTTHRESH_VALUE 0x30 614 - 615 - /* RxDMABurstThresh specifies the minimum amount of 616 - * occupied space within the receive FIFO before asserting 617 - * a receive DMA request. 618 - * 619 - * Value Min TxFIFO free space before TX request 620 - * ---------------------------------------------------- 621 - * 0x00-0x08 256 bytes 622 - * 0x30 1536 bytes 623 - * 0xFF 8192 bytes 624 - */ 625 - #define IPG_RXDMABURSTTHRESH_VALUE 0x30 626 - 627 - /* FlowOnThresh specifies the maximum amount of occupied 628 - * space in the receive FIFO before a PAUSE frame with 629 - * maximum pause time transmitted. 630 - * 631 - * Value Max RxFIFO occupied space before PAUSE 632 - * --------------------------------------------------- 633 - * 0x0000 0 bytes 634 - * 0x0740 29,696 bytes 635 - * 0x07FF 32,752 bytes 636 - */ 637 - #define IPG_FLOWONTHRESH_VALUE 0x0740 638 - 639 - /* FlowOffThresh specifies the minimum amount of occupied 640 - * space in the receive FIFO before a PAUSE frame with 641 - * zero pause time is transmitted. 642 - * 643 - * Value Max RxFIFO occupied space before PAUSE 644 - * --------------------------------------------------- 645 - * 0x0000 0 bytes 646 - * 0x00BF 3056 bytes 647 - * 0x07FF 32,752 bytes 648 - */ 649 - #define IPG_FLOWOFFTHRESH_VALUE 0x00BF 650 - 651 - /* 652 - * Miscellaneous macros. 653 - */ 654 - 655 - /* Macros for printing debug statements. */ 656 - #ifdef IPG_DEBUG 657 - # define IPG_DEBUG_MSG(fmt, args...) \ 658 - do { \ 659 - if (0) \ 660 - printk(KERN_DEBUG "IPG: " fmt, ##args); \ 661 - } while (0) 662 - # define IPG_DDEBUG_MSG(fmt, args...) 
\ 663 - printk(KERN_DEBUG "IPG: " fmt, ##args) 664 - # define IPG_DUMPRFDLIST(args) ipg_dump_rfdlist(args) 665 - # define IPG_DUMPTFDLIST(args) ipg_dump_tfdlist(args) 666 - #else 667 - # define IPG_DEBUG_MSG(fmt, args...) \ 668 - do { \ 669 - if (0) \ 670 - printk(KERN_DEBUG "IPG: " fmt, ##args); \ 671 - } while (0) 672 - # define IPG_DDEBUG_MSG(fmt, args...) \ 673 - do { \ 674 - if (0) \ 675 - printk(KERN_DEBUG "IPG: " fmt, ##args); \ 676 - } while (0) 677 - # define IPG_DUMPRFDLIST(args) 678 - # define IPG_DUMPTFDLIST(args) 679 - #endif 680 - 681 - /* 682 - * End miscellaneous macros. 683 - */ 684 - 685 - /* Transmit Frame Descriptor. The IPG supports 15 fragments, 686 - * however Linux requires only a single fragment. Note, each 687 - * TFD field is 64 bits wide. 688 - */ 689 - struct ipg_tx { 690 - __le64 next_desc; 691 - __le64 tfc; 692 - __le64 frag_info; 693 - }; 694 - 695 - /* Receive Frame Descriptor. Note, each RFD field is 64 bits wide. 696 - */ 697 - struct ipg_rx { 698 - __le64 next_desc; 699 - __le64 rfs; 700 - __le64 frag_info; 701 - }; 702 - 703 - struct ipg_jumbo { 704 - int found_start; 705 - int current_size; 706 - struct sk_buff *skb; 707 - }; 708 - 709 - /* Structure of IPG NIC specific data. 
*/ 710 - struct ipg_nic_private { 711 - void __iomem *ioaddr; 712 - struct ipg_tx *txd; 713 - struct ipg_rx *rxd; 714 - dma_addr_t txd_map; 715 - dma_addr_t rxd_map; 716 - struct sk_buff *tx_buff[IPG_TFDLIST_LENGTH]; 717 - struct sk_buff *rx_buff[IPG_RFDLIST_LENGTH]; 718 - unsigned int tx_current; 719 - unsigned int tx_dirty; 720 - unsigned int rx_current; 721 - unsigned int rx_dirty; 722 - bool is_jumbo; 723 - struct ipg_jumbo jumbo; 724 - unsigned long rxfrag_size; 725 - unsigned long rxsupport_size; 726 - unsigned long max_rxframe_size; 727 - unsigned int rx_buf_sz; 728 - struct pci_dev *pdev; 729 - struct net_device *dev; 730 - struct net_device_stats stats; 731 - spinlock_t lock; 732 - int tenmbpsmode; 733 - 734 - u16 led_mode; 735 - u16 station_addr[3]; /* Station Address in EEPROM Reg 0x10..0x12 */ 736 - 737 - struct mutex mii_mutex; 738 - struct mii_if_info mii_if; 739 - int reset_current_tfd; 740 - #ifdef IPG_DEBUG 741 - int RFDlistendCount; 742 - int RFDListCheckedCount; 743 - int EmptyRFDListCount; 744 - #endif 745 - struct delayed_work task; 746 - }; 747 - 748 - #endif /* __LINUX_IPG_H */
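The IPG_DEBUG_MSG / IPG_DDEBUG_MSG macros in the removed ipg.h use the `do { if (0) printk(...); } while (0)` idiom: with debugging off, the compiler removes the call as dead code, yet the format string and arguments are still type-checked, and the macro remains a single statement that is safe inside a braceless if/else. A minimal sketch of the same idiom using printf in place of printk (DBG_MSG and probe are illustrative names, not kernel API):

```c
#include <stdio.h>

/* Same shape as the removed IPG_DEBUG_MSG: the if (0) keeps the
 * arguments type-checked while compiling the call away entirely.
 * ##__VA_ARGS__ (a GNU extension, as in the kernel) swallows the
 * comma when no arguments follow the format string. */
#define DBG_MSG(fmt, ...)					\
	do {							\
		if (0)						\
			printf("IPG: " fmt, ##__VA_ARGS__);	\
	} while (0)

static int probe(int ok)
{
	if (ok)
		DBG_MSG("probe ok: %d\n", ok);	/* single statement: no dangling-else hazard */
	else
		return -1;
	return 0;
}
```

Had the macro expanded to nothing instead, the `else` above would silently bind to the wrong `if`.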
+5 -3
drivers/net/ethernet/mellanox/mlx4/main.c
···
 		dev->caps.qp1_proxy[i - 1] = func_cap.qp1_proxy_qpn;
 		dev->caps.port_mask[i] = dev->caps.port_type[i];
 		dev->caps.phys_port_id[i] = func_cap.phys_port_id;
-		if (mlx4_get_slave_pkey_gid_tbl_len(dev, i,
-						    &dev->caps.gid_table_len[i],
-						    &dev->caps.pkey_table_len[i]))
+		err = mlx4_get_slave_pkey_gid_tbl_len(dev, i,
+						      &dev->caps.gid_table_len[i],
+						      &dev->caps.pkey_table_len[i]);
+		if (err)
 			goto err_mem;
 	}
···
 			 dev->caps.uar_page_size * dev->caps.num_uars,
 			 (unsigned long long)
 			 pci_resource_len(dev->persist->pdev, 2));
+		err = -ENOMEM;
 		goto err_mem;
 	}
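Both changes in the main.c hunk above target the same bug class: taking `goto err_mem` while `err` still holds 0 (or a stale value) makes the function report success on a failure path. Capturing the callee's return in `err`, and explicitly assigning `err = -ENOMEM` before the jump, guarantees the error label returns a real error code. A minimal sketch of the pattern (helper and setup are hypothetical names, and ENOMEM is defined locally for self-containment):

```c
#define ENOMEM 12	/* local stand-in for <errno.h>'s value */

static int helper(int fail) { return fail ? -ENOMEM : 0; }

static int setup(int helper_fails, int alloc_fails)
{
	int err;

	err = helper(helper_fails);	/* capture the return, then branch on it */
	if (err)
		goto err_out;

	if (alloc_fails) {
		err = -ENOMEM;		/* set err explicitly before the goto */
		goto err_out;
	}

	return 0;

err_out:
	return err;			/* can no longer silently return 0 */
}
```

The shared error label keeps cleanup in one place; the discipline is simply that every path jumping to it must have assigned `err` first.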
+27 -12
drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
···
 	struct res_counter *counter;
 	struct res_counter *tmp;
 	int err;
-	int index;
+	int *counters_arr = NULL;
+	int i, j;

 	err = move_all_busy(dev, slave, RES_COUNTER);
 	if (err)
 		mlx4_warn(dev, "rem_slave_counters: Could not move all counters - too busy for slave %d\n",
 			  slave);

-	spin_lock_irq(mlx4_tlock(dev));
-	list_for_each_entry_safe(counter, tmp, counter_list, com.list) {
-		if (counter->com.owner == slave) {
-			index = counter->com.res_id;
-			rb_erase(&counter->com.node,
-				 &tracker->res_tree[RES_COUNTER]);
-			list_del(&counter->com.list);
-			kfree(counter);
-			__mlx4_counter_free(dev, index);
+	counters_arr = kmalloc_array(dev->caps.max_counters,
+				     sizeof(*counters_arr), GFP_KERNEL);
+	if (!counters_arr)
+		return;
+
+	do {
+		i = 0;
+		j = 0;
+		spin_lock_irq(mlx4_tlock(dev));
+		list_for_each_entry_safe(counter, tmp, counter_list, com.list) {
+			if (counter->com.owner == slave) {
+				counters_arr[i++] = counter->com.res_id;
+				rb_erase(&counter->com.node,
+					 &tracker->res_tree[RES_COUNTER]);
+				list_del(&counter->com.list);
+				kfree(counter);
+			}
+		}
+		spin_unlock_irq(mlx4_tlock(dev));
+
+		while (j < i) {
+			__mlx4_counter_free(dev, counters_arr[j++]);
 			mlx4_release_resource(dev, slave, RES_COUNTER, 1, 0);
 		}
-	}
-	spin_unlock_irq(mlx4_tlock(dev));
+	} while (i);
+
+	kfree(counters_arr);
 }

 static void rem_slave_xrcdns(struct mlx4_dev *dev, int slave)
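The resource_tracker hunk above restructures the loop so that the counter IDs are only collected while the list spinlock is held, and the actual free calls happen after the lock is dropped — the freeing path is not safe to run under that spinlock. A minimal sketch of the collect-under-lock, act-outside-lock pattern, using a pthread mutex as a stand-in for the spinlock (all names and the fixed-size list are illustrative, not mlx4 API):

```c
#include <pthread.h>

#define MAX_IDS 8

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
static int id_list[MAX_IDS];	/* stand-in for the tracked resource list */
static int id_count;
static int freed_count;

/* Stand-in for a free routine that must not run under list_lock. */
static void slow_free(int id)
{
	(void)id;
	freed_count++;
}

static void rem_all_ids(void)
{
	int ids[MAX_IDS];
	int i = 0, j = 0;

	pthread_mutex_lock(&list_lock);
	while (id_count > 0)		/* unlink and record IDs under the lock... */
		ids[i++] = id_list[--id_count];
	pthread_mutex_unlock(&list_lock);

	while (j < i)			/* ...then free them with the lock dropped */
		slow_free(ids[j++]);
}
```

The kernel version additionally loops the whole collect/free cycle (`do { ... } while (i)`) because new entries can appear between passes, and sizes the array from dev->caps.max_counters since the list length isn't known up front.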
+8 -2
drivers/net/ethernet/mellanox/mlx5/core/en.h
@@
 
 #define MLX5E_TX_SKB_CB(__skb) ((struct mlx5e_tx_skb_cb *)__skb->cb)
 
+enum mlx5e_dma_map_type {
+	MLX5E_DMA_MAP_SINGLE,
+	MLX5E_DMA_MAP_PAGE
+};
+
 struct mlx5e_sq_dma {
-	dma_addr_t addr;
-	u32 size;
+	dma_addr_t		addr;
+	u32			size;
+	enum mlx5e_dma_map_type	type;
 };
 
 enum {
+50
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@
 	return err;
 }
 
+static int mlx5e_refresh_tir_self_loopback_enable(struct mlx5_core_dev *mdev,
+						  u32 tirn)
+{
+	void *in;
+	int inlen;
+	int err;
+
+	inlen = MLX5_ST_SZ_BYTES(modify_tir_in);
+	in = mlx5_vzalloc(inlen);
+	if (!in)
+		return -ENOMEM;
+
+	MLX5_SET(modify_tir_in, in, bitmask.self_lb_en, 1);
+
+	err = mlx5_core_modify_tir(mdev, tirn, in, inlen);
+
+	kvfree(in);
+
+	return err;
+}
+
+static int mlx5e_refresh_tirs_self_loopback_enable(struct mlx5e_priv *priv)
+{
+	int err;
+	int i;
+
+	for (i = 0; i < MLX5E_NUM_TT; i++) {
+		err = mlx5e_refresh_tir_self_loopback_enable(priv->mdev,
+							     priv->tirn[i]);
+		if (err)
+			return err;
+	}
+
+	return 0;
+}
+
 static int mlx5e_set_dev_port_mtu(struct net_device *netdev)
 {
 	struct mlx5e_priv *priv = netdev_priv(netdev);
@@
 		goto err_clear_state_opened_flag;
 	}
 
+	err = mlx5e_refresh_tirs_self_loopback_enable(priv);
+	if (err) {
+		netdev_err(netdev, "%s: mlx5e_refresh_tirs_self_loopback_enable failed, %d\n",
+			   __func__, err);
+		goto err_close_channels;
+	}
+
 	mlx5e_update_carrier(priv);
 	mlx5e_redirect_rqts(priv);
@@
 
 	return 0;
 
+err_close_channels:
+	mlx5e_close_channels(priv);
 err_clear_state_opened_flag:
 	clear_bit(MLX5E_STATE_OPENED, &priv->state);
 	return err;
@@
 
 	mlx5_query_port_max_mtu(mdev, &max_mtu, 1);
 
+	max_mtu = MLX5E_HW2SW_MTU(max_mtu);
+
 	if (new_mtu > max_mtu) {
 		netdev_err(netdev,
 			   "%s: Bad MTU (%d) > (%d) Max\n",
@@
 			      "Not creating net device, some required device capabilities are missing\n");
 		return -ENOTSUPP;
 	}
+	if (!MLX5_CAP_ETH(mdev, self_lb_en_modifiable))
+		mlx5_core_warn(mdev, "Self loop back prevention is not supported\n");
+
 	return 0;
 }
+46 -34
drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
@@
 	}
 }
 
-static void mlx5e_dma_pop_last_pushed(struct mlx5e_sq *sq, dma_addr_t *addr,
-				      u32 *size)
+static inline void mlx5e_tx_dma_unmap(struct device *pdev,
+				      struct mlx5e_sq_dma *dma)
 {
-	sq->dma_fifo_pc--;
-	*addr = sq->dma_fifo[sq->dma_fifo_pc & sq->dma_fifo_mask].addr;
-	*size = sq->dma_fifo[sq->dma_fifo_pc & sq->dma_fifo_mask].size;
+	switch (dma->type) {
+	case MLX5E_DMA_MAP_SINGLE:
+		dma_unmap_single(pdev, dma->addr, dma->size, DMA_TO_DEVICE);
+		break;
+	case MLX5E_DMA_MAP_PAGE:
+		dma_unmap_page(pdev, dma->addr, dma->size, DMA_TO_DEVICE);
+		break;
+	default:
+		WARN_ONCE(true, "mlx5e_tx_dma_unmap unknown DMA type!\n");
+	}
+}
+
+static inline void mlx5e_dma_push(struct mlx5e_sq *sq,
+				  dma_addr_t addr,
+				  u32 size,
+				  enum mlx5e_dma_map_type map_type)
+{
+	sq->dma_fifo[sq->dma_fifo_pc & sq->dma_fifo_mask].addr = addr;
+	sq->dma_fifo[sq->dma_fifo_pc & sq->dma_fifo_mask].size = size;
+	sq->dma_fifo[sq->dma_fifo_pc & sq->dma_fifo_mask].type = map_type;
+	sq->dma_fifo_pc++;
+}
+
+static inline struct mlx5e_sq_dma *mlx5e_dma_get(struct mlx5e_sq *sq, u32 i)
+{
+	return &sq->dma_fifo[i & sq->dma_fifo_mask];
 }
 
 static void mlx5e_dma_unmap_wqe_err(struct mlx5e_sq *sq, struct sk_buff *skb)
 {
-	dma_addr_t addr;
-	u32 size;
 	int i;
 
 	for (i = 0; i < MLX5E_TX_SKB_CB(skb)->num_dma; i++) {
-		mlx5e_dma_pop_last_pushed(sq, &addr, &size);
-		dma_unmap_single(sq->pdev, addr, size, DMA_TO_DEVICE);
+		struct mlx5e_sq_dma *last_pushed_dma =
+			mlx5e_dma_get(sq, --sq->dma_fifo_pc);
+
+		mlx5e_tx_dma_unmap(sq->pdev, last_pushed_dma);
 	}
-}
-
-static inline void mlx5e_dma_push(struct mlx5e_sq *sq, dma_addr_t addr,
-				  u32 size)
-{
-	sq->dma_fifo[sq->dma_fifo_pc & sq->dma_fifo_mask].addr = addr;
-	sq->dma_fifo[sq->dma_fifo_pc & sq->dma_fifo_mask].size = size;
-	sq->dma_fifo_pc++;
-}
-
-static inline void mlx5e_dma_get(struct mlx5e_sq *sq, u32 i, dma_addr_t *addr,
-				 u32 *size)
-{
-	*addr = sq->dma_fifo[i & sq->dma_fifo_mask].addr;
-	*size = sq->dma_fifo[i & sq->dma_fifo_mask].size;
 }
 
 u16 mlx5e_select_queue(struct net_device *dev, struct sk_buff *skb,
@@
 	 */
 #define MLX5E_MIN_INLINE ETH_HLEN
 
-	if (bf && (skb_headlen(skb) <= sq->max_inline))
-		return skb_headlen(skb);
+	if (bf) {
+		u16 ihs = skb_headlen(skb);
+
+		if (skb_vlan_tag_present(skb))
+			ihs += VLAN_HLEN;
+
+		if (ihs <= sq->max_inline)
+			return skb_headlen(skb);
+	}
 
 	return MLX5E_MIN_INLINE;
 }
@@
 	dseg->lkey       = sq->mkey_be;
 	dseg->byte_count = cpu_to_be32(headlen);
 
-	mlx5e_dma_push(sq, dma_addr, headlen);
+	mlx5e_dma_push(sq, dma_addr, headlen, MLX5E_DMA_MAP_SINGLE);
 	MLX5E_TX_SKB_CB(skb)->num_dma++;
 
 	dseg++;
@@
 		dseg->lkey       = sq->mkey_be;
 		dseg->byte_count = cpu_to_be32(fsz);
 
-		mlx5e_dma_push(sq, dma_addr, fsz);
+		mlx5e_dma_push(sq, dma_addr, fsz, MLX5E_DMA_MAP_PAGE);
 		MLX5E_TX_SKB_CB(skb)->num_dma++;
 
 		dseg++;
@@
 			}
 
 			for (j = 0; j < MLX5E_TX_SKB_CB(skb)->num_dma; j++) {
-				dma_addr_t addr;
-				u32 size;
+				struct mlx5e_sq_dma *dma =
+					mlx5e_dma_get(sq, dma_fifo_cc++);
 
-				mlx5e_dma_get(sq, dma_fifo_cc, &addr, &size);
-				dma_fifo_cc++;
-				dma_unmap_single(sq->pdev, addr, size,
-						 DMA_TO_DEVICE);
+				mlx5e_tx_dma_unmap(sq->pdev, dma);
 			}
 
 			npkts++;
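The en_tx.c fix above records the mapping type next to each address/size pair in the descriptor FIFO, so the completion path can call the unmap routine that matches the original map (dma_unmap_single vs dma_unmap_page). A minimal sketch of that tagged power-of-two ring, with all names hypothetical:

```c
/* Ring size must be a power of two so a free-running counter can be
 * masked into an index, as the mlx5e dma_fifo does. */
#define FIFO_SIZE 4
#define FIFO_MASK (FIFO_SIZE - 1)

enum map_type { MAP_SINGLE, MAP_PAGE };	/* illustrative stand-ins */

struct dma_rec {
	unsigned long	addr;
	unsigned int	size;
	enum map_type	type;
};

static struct dma_rec fifo[FIFO_SIZE];
static unsigned int fifo_pc;		/* producer counter, wraps via mask */

/* Producer: store addr, size AND the map type in one record. */
static void dma_push(unsigned long addr, unsigned int size, enum map_type t)
{
	fifo[fifo_pc & FIFO_MASK] = (struct dma_rec){ addr, size, t };
	fifo_pc++;
}

/* Consumer: return a pointer to the record at counter position i. */
static struct dma_rec *dma_get(unsigned int i)
{
	return &fifo[i & FIFO_MASK];
}

/* Push two records and check the consumer sees the recorded types,
 * which is what lets the completion side pick the matching unmap. */
static int fifo_demo(void)
{
	dma_push(0x1000, 64, MAP_SINGLE);
	dma_push(0x2000, 128, MAP_PAGE);

	if (dma_get(0)->type != MAP_SINGLE || dma_get(0)->size != 64)
		return -1;
	if (dma_get(1)->type != MAP_PAGE || dma_get(1)->addr != 0x2000)
		return -1;
	return 0;
}
```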
+3 -3
drivers/net/ethernet/realtek/r8169.c
@@
 
 		rtl8169_rx_vlan_tag(desc, skb);
 
+		if (skb->pkt_type == PACKET_MULTICAST)
+			dev->stats.multicast++;
+
 		napi_gro_receive(&tp->napi, skb);
 
 		u64_stats_update_begin(&tp->rx_stats.syncp);
 		tp->rx_stats.packets++;
 		tp->rx_stats.bytes += pkt_size;
 		u64_stats_update_end(&tp->rx_stats.syncp);
-
-		if (skb->pkt_type == PACKET_MULTICAST)
-			dev->stats.multicast++;
 	}
release_descriptor:
 		desc->opts2 = 0;
+4 -4
drivers/net/ethernet/renesas/ravb_main.c
@@
 	/* Interrupt enable: */
 	/* Frame receive */
 	ravb_write(ndev, RIC0_FRE0 | RIC0_FRE1, RIC0);
-	/* Receive FIFO full warning */
-	ravb_write(ndev, RIC1_RFWE, RIC1);
 	/* Receive FIFO full error, descriptor empty */
 	ravb_write(ndev, RIC2_QFE0 | RIC2_QFE1 | RIC2_RFFE, RIC2);
 	/* Frame transmitted, timestamp FIFO updated */
@@
 	    ((tis & tic) & BIT(q))) {
 		if (napi_schedule_prep(&priv->napi[q])) {
 			/* Mask RX and TX interrupts */
-			ravb_write(ndev, ric0 & ~BIT(q), RIC0);
-			ravb_write(ndev, tic & ~BIT(q), TIC);
+			ric0 &= ~BIT(q);
+			tic &= ~BIT(q);
+			ravb_write(ndev, ric0, RIC0);
+			ravb_write(ndev, tic, TIC);
 			__napi_schedule(&priv->napi[q]);
 		} else {
 			netdev_warn(ndev,
+1 -1
drivers/net/ethernet/sfc/efx.c
@@
  * with our request for slot reset the mmio_enabled callback will never be
  * called, and the link_reset callback is not used by AER or EEH mechanisms.
  */
-static struct pci_error_handlers efx_err_handlers = {
+static const struct pci_error_handlers efx_err_handlers = {
 	.error_detected	= efx_io_error_detected,
 	.slot_reset	= efx_io_slot_reset,
 	.resume		= efx_io_resume,
+6 -11
drivers/net/ethernet/smsc/smsc911x.c
@@
 
 static int smsc911x_phy_reset(struct smsc911x_data *pdata)
 {
-	struct phy_device *phy_dev = pdata->phy_dev;
 	unsigned int temp;
 	unsigned int i = 100000;
 
-	BUG_ON(!phy_dev);
-	BUG_ON(!phy_dev->bus);
-
-	SMSC_TRACE(pdata, hw, "Performing PHY BCR Reset");
-	smsc911x_mii_write(phy_dev->bus, phy_dev->addr, MII_BMCR, BMCR_RESET);
+	temp = smsc911x_reg_read(pdata, PMT_CTRL);
+	smsc911x_reg_write(pdata, PMT_CTRL, temp | PMT_CTRL_PHY_RST_);
 	do {
 		msleep(1);
-		temp = smsc911x_mii_read(phy_dev->bus, phy_dev->addr,
-			MII_BMCR);
-	} while ((i--) && (temp & BMCR_RESET));
+		temp = smsc911x_reg_read(pdata, PMT_CTRL);
+	} while ((i--) && (temp & PMT_CTRL_PHY_RST_));
 
-	if (temp & BMCR_RESET) {
+	if (unlikely(temp & PMT_CTRL_PHY_RST_)) {
 		SMSC_WARN(pdata, hw, "PHY reset failed to complete");
 		return -EIO;
 	}
@@
 	}
 
 	/* Reset the LAN911x */
-	if (smsc911x_soft_reset(pdata))
+	if (smsc911x_phy_reset(pdata) || smsc911x_soft_reset(pdata))
 		return -ENODEV;
 
 	dev->flags |= IFF_MULTICAST;
+5 -5
drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c
@@
 		     QSGMII_PHY_RX_SIGNAL_DETECT_EN |
 		     QSGMII_PHY_TX_DRIVER_EN |
 		     QSGMII_PHY_QSGMII_EN |
-		     0x4 << QSGMII_PHY_PHASE_LOOP_GAIN_OFFSET |
-		     0x3 << QSGMII_PHY_RX_DC_BIAS_OFFSET |
-		     0x1 << QSGMII_PHY_RX_INPUT_EQU_OFFSET |
-		     0x2 << QSGMII_PHY_CDR_PI_SLEW_OFFSET |
-		     0xC << QSGMII_PHY_TX_DRV_AMP_OFFSET);
+		     0x4ul << QSGMII_PHY_PHASE_LOOP_GAIN_OFFSET |
+		     0x3ul << QSGMII_PHY_RX_DC_BIAS_OFFSET |
+		     0x1ul << QSGMII_PHY_RX_INPUT_EQU_OFFSET |
+		     0x2ul << QSGMII_PHY_CDR_PI_SLEW_OFFSET |
+		     0xCul << QSGMII_PHY_TX_DRV_AMP_OFFSET);
 }
 
 plat_dat->has_gmac = true;
+3 -21
drivers/net/ethernet/via/via-velocity.c
@@
  */
 VELOCITY_PARAM(speed_duplex, "Setting the speed and duplex mode");
 
-#define VAL_PKT_LEN_DEF     0
-/* ValPktLen[] is used for setting the checksum offload ability of NIC.
-   0: Receive frame with invalid layer 2 length (Default)
-   1: Drop frame with invalid layer 2 length
-*/
-VELOCITY_PARAM(ValPktLen, "Receiving or Drop invalid 802.3 frame");
-
 #define WOL_OPT_DEF     0
 #define WOL_OPT_MIN     0
 #define WOL_OPT_MAX     7
@@
 
 	velocity_set_int_opt(&opts->flow_cntl, flow_control[index], FLOW_CNTL_MIN, FLOW_CNTL_MAX, FLOW_CNTL_DEF, "flow_control", devname);
 	velocity_set_bool_opt(&opts->flags, IP_byte_align[index], IP_ALIG_DEF, VELOCITY_FLAGS_IP_ALIGN, "IP_byte_align", devname);
-	velocity_set_bool_opt(&opts->flags, ValPktLen[index], VAL_PKT_LEN_DEF, VELOCITY_FLAGS_VAL_PKT_LEN, "ValPktLen", devname);
 	velocity_set_int_opt((int *) &opts->spd_dpx, speed_duplex[index], MED_LNK_MIN, MED_LNK_MAX, MED_LNK_DEF, "Media link mode", devname);
 	velocity_set_int_opt(&opts->wol_opts, wol_opts[index], WOL_OPT_MIN, WOL_OPT_MAX, WOL_OPT_DEF, "Wake On Lan options", devname);
 	opts->numrx = (opts->numrx & ~3);
@@
 	int pkt_len = le16_to_cpu(rd->rdesc0.len) & 0x3fff;
 	struct sk_buff *skb;
 
-	if (rd->rdesc0.RSR & (RSR_STP | RSR_EDP)) {
-		VELOCITY_PRT(MSG_LEVEL_VERBOSE, KERN_ERR " %s : the received frame spans multiple RDs.\n", vptr->netdev->name);
+	if (unlikely(rd->rdesc0.RSR & (RSR_STP | RSR_EDP | RSR_RL))) {
+		if (rd->rdesc0.RSR & (RSR_STP | RSR_EDP))
+			VELOCITY_PRT(MSG_LEVEL_VERBOSE, KERN_ERR " %s : the received frame spans multiple RDs.\n", vptr->netdev->name);
 		stats->rx_length_errors++;
 		return -EINVAL;
 	}
@@
 
 	dma_sync_single_for_cpu(vptr->dev, rd_info->skb_dma,
 				vptr->rx.buf_sz, DMA_FROM_DEVICE);
-
-	/*
-	 * Drop frame not meeting IEEE 802.3
-	 */
-
-	if (vptr->flags & VELOCITY_FLAGS_VAL_PKT_LEN) {
-		if (rd->rdesc0.RSR & RSR_RL) {
-			stats->rx_length_errors++;
-			return -EINVAL;
-		}
-	}
 
 	velocity_rx_csum(rd, skb);
+1 -1
drivers/net/fjes/fjes_hw.c
@@
 		    FJES_CMD_REQ_RES_CODE_BUSY) &&
 	       (timeout > 0)) {
 			msleep(200 + hw->my_epid * 20);
-			timeout -= (200 + hw->my_epid * 20);
+		timeout -= (200 + hw->my_epid * 20);
 
 			res_buf->unshare_buffer.length = 0;
 			res_buf->unshare_buffer.code = 0;
+8 -6
drivers/net/ipvlan/ipvlan_core.c
@@
 	}
 }
 
-static int ipvlan_rcv_frame(struct ipvl_addr *addr, struct sk_buff *skb,
+static int ipvlan_rcv_frame(struct ipvl_addr *addr, struct sk_buff **pskb,
 			    bool local)
 {
 	struct ipvl_dev *ipvlan = addr->master;
@@
 	unsigned int len;
 	rx_handler_result_t ret = RX_HANDLER_CONSUMED;
 	bool success = false;
+	struct sk_buff *skb = *pskb;
 
 	len = skb->len + ETH_HLEN;
 	if (unlikely(!(dev->flags & IFF_UP))) {
@@
 	if (!skb)
 		goto out;
 
+	*pskb = skb;
 	skb->dev = dev;
 	skb->pkt_type = PACKET_HOST;
 
@@
 
 	addr = ipvlan_addr_lookup(ipvlan->port, lyr3h, addr_type, true);
 	if (addr)
-		return ipvlan_rcv_frame(addr, skb, true);
+		return ipvlan_rcv_frame(addr, &skb, true);
 
 out:
 	skb->dev = ipvlan->phy_dev;
@@
 	if (lyr3h) {
 		addr = ipvlan_addr_lookup(ipvlan->port, lyr3h, addr_type, true);
 		if (addr)
-			return ipvlan_rcv_frame(addr, skb, true);
+			return ipvlan_rcv_frame(addr, &skb, true);
 	}
 	skb = skb_share_check(skb, GFP_ATOMIC);
 	if (!skb)
@@
 
 	addr = ipvlan_addr_lookup(port, lyr3h, addr_type, true);
 	if (addr)
-		ret = ipvlan_rcv_frame(addr, skb, false);
+		ret = ipvlan_rcv_frame(addr, pskb, false);
 
 out:
 	return ret;
@@
 
 		addr = ipvlan_addr_lookup(port, lyr3h, addr_type, true);
 		if (addr)
-			ret = ipvlan_rcv_frame(addr, skb, false);
+			ret = ipvlan_rcv_frame(addr, pskb, false);
 	}
 
 	return ret;
@@
 	WARN_ONCE(true, "ipvlan_handle_frame() called for mode = [%hx]\n",
 		  port->mode);
 	kfree_skb(skb);
-	return NET_RX_DROP;
+	return RX_HANDLER_CONSUMED;
 }
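The ipvlan (and matching macvlan) fix works by passing `struct sk_buff **` instead of `struct sk_buff *`: helpers such as skb_share_check() may return a different skb than they were given, and without the extra indirection the caller keeps a stale pointer into freed memory. A small userspace sketch of the pointer-to-pointer idiom, with all names illustrative:

```c
#include <stdlib.h>

struct buf {
	int tag;
};

/* Stand-in for a helper like skb_share_check() that can hand back a
 * different object than it was given. Takes struct buf ** so the
 * caller's reference is refreshed rather than left dangling. */
static int maybe_replace(struct buf **pbuf)
{
	struct buf *fresh = malloc(sizeof(*fresh));

	if (!fresh)
		return -1;
	*fresh = **pbuf;	/* carry the payload over */
	free(*pbuf);
	*pbuf = fresh;		/* update the caller's pointer */
	return 0;
}

/* Caller keeps seeing live data through the updated pointer. */
static int pskb_demo(void)
{
	struct buf *b = malloc(sizeof(*b));
	int ok;

	if (!b)
		return -1;
	b->tag = 42;
	if (maybe_replace(&b))
		return -1;
	ok = (b->tag == 42);
	free(b);
	return ok ? 0 : -1;
}
```

Had `maybe_replace()` taken a plain `struct buf *`, the caller's `b` would still point at the freed original, which is exactly the use-after-free the driver fix closes.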
+2
drivers/net/macvlan.c
@@
 	skb = ip_check_defrag(dev_net(skb->dev), skb, IP_DEFRAG_MACVLAN);
 	if (!skb)
 		return RX_HANDLER_CONSUMED;
+	*pskb = skb;
 	eth = eth_hdr(skb);
 	macvlan_forward_source(skb, port, eth->h_source);
 	src = macvlan_hash_lookup(port, eth->h_source);
@@
 		goto out;
 	}
 
+	*pskb = skb;
 	skb->dev = dev;
 	skb->pkt_type = PACKET_HOST;
+4
drivers/net/phy/at803x.c
@@
 	.flags			= PHY_HAS_INTERRUPT,
 	.config_aneg		= genphy_config_aneg,
 	.read_status		= genphy_read_status,
+	.ack_interrupt		= at803x_ack_interrupt,
+	.config_intr		= at803x_config_intr,
 	.driver			= {
 		.owner = THIS_MODULE,
 	},
@@
 	.flags			= PHY_HAS_INTERRUPT,
 	.config_aneg		= genphy_config_aneg,
 	.read_status		= genphy_read_status,
+	.ack_interrupt		= at803x_ack_interrupt,
+	.config_intr		= at803x_config_intr,
 	.driver			= {
 		.owner = THIS_MODULE,
 	},
+16
drivers/net/phy/marvell.c
@@
 		.driver = { .owner = THIS_MODULE },
 	},
 	{
+		.phy_id = MARVELL_PHY_ID_88E1540,
+		.phy_id_mask = MARVELL_PHY_ID_MASK,
+		.name = "Marvell 88E1540",
+		.features = PHY_GBIT_FEATURES,
+		.flags = PHY_HAS_INTERRUPT,
+		.config_aneg = &m88e1510_config_aneg,
+		.read_status = &marvell_read_status,
+		.ack_interrupt = &marvell_ack_interrupt,
+		.config_intr = &marvell_config_intr,
+		.did_interrupt = &m88e1121_did_interrupt,
+		.resume = &genphy_resume,
+		.suspend = &genphy_suspend,
+		.driver = { .owner = THIS_MODULE },
+	},
+	{
 		.phy_id = MARVELL_PHY_ID_88E3016,
 		.phy_id_mask = MARVELL_PHY_ID_MASK,
 		.name = "Marvell 88E3016",
@@
 	{ MARVELL_PHY_ID_88E1318S, MARVELL_PHY_ID_MASK },
 	{ MARVELL_PHY_ID_88E1116R, MARVELL_PHY_ID_MASK },
 	{ MARVELL_PHY_ID_88E1510, MARVELL_PHY_ID_MASK },
+	{ MARVELL_PHY_ID_88E1540, MARVELL_PHY_ID_MASK },
 	{ MARVELL_PHY_ID_88E3016, MARVELL_PHY_ID_MASK },
 	{ }
 };
+3
drivers/net/phy/phy.c
@@
 			needs_aneg = true;
 		break;
 	case PHY_NOLINK:
+		if (phy_interrupt_is_valid(phydev))
+			break;
+
 		err = phy_read_status(phydev);
 		if (err)
 			break;
+15 -1
drivers/net/phy/vitesse.c
@@
 #define PHY_ID_VSC8244			0x000fc6c0
 #define PHY_ID_VSC8514			0x00070670
 #define PHY_ID_VSC8574			0x000704a0
+#define PHY_ID_VSC8601			0x00070420
 #define PHY_ID_VSC8662			0x00070660
 #define PHY_ID_VSC8221			0x000fc550
 #define PHY_ID_VSC8211			0x000fc4b0
@@
 		(phydev->drv->phy_id == PHY_ID_VSC8234 ||
 		 phydev->drv->phy_id == PHY_ID_VSC8244 ||
 		 phydev->drv->phy_id == PHY_ID_VSC8514 ||
-		 phydev->drv->phy_id == PHY_ID_VSC8574) ?
+		 phydev->drv->phy_id == PHY_ID_VSC8574 ||
+		 phydev->drv->phy_id == PHY_ID_VSC8601) ?
 			MII_VSC8244_IMASK_MASK :
 			MII_VSC8221_IMASK_MASK);
 	else {
@@
 	.flags          = PHY_HAS_INTERRUPT,
 	.config_init    = &vsc824x_config_init,
 	.config_aneg    = &vsc82x4_config_aneg,
+	.read_status    = &genphy_read_status,
+	.ack_interrupt  = &vsc824x_ack_interrupt,
+	.config_intr    = &vsc82xx_config_intr,
+	.driver         = { .owner = THIS_MODULE,},
+}, {
+	.phy_id         = PHY_ID_VSC8601,
+	.name           = "Vitesse VSC8601",
+	.phy_id_mask    = 0x000ffff0,
+	.features       = PHY_GBIT_FEATURES,
+	.flags          = PHY_HAS_INTERRUPT,
+	.config_init    = &genphy_config_init,
+	.config_aneg    = &genphy_config_aneg,
 	.read_status    = &genphy_read_status,
 	.ack_interrupt  = &vsc824x_ack_interrupt,
 	.config_intr    = &vsc82xx_config_intr,
+5
drivers/net/usb/cdc_ether.c
@@
 	  USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
 	.driver_info = (kernel_ulong_t) &wwan_info,
 }, {
+	/* Dell DW5580 modules */
+	USB_DEVICE_AND_INTERFACE_INFO(DELL_VENDOR_ID, 0x81ba, USB_CLASS_COMM,
+				      USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
+	.driver_info = (kernel_ulong_t)&wwan_info,
+}, {
 	USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_ETHERNET,
 			USB_CDC_PROTO_NONE),
 	.driver_info = (unsigned long) &cdc_info,
+4 -3
drivers/net/vmxnet3/vmxnet3_drv.c
@@
 	if (!netdev_mc_empty(netdev)) {
 		new_table = vmxnet3_copy_mc(netdev);
 		if (new_table) {
-			rxConf->mfTableLen = cpu_to_le16(
-				netdev_mc_count(netdev) * ETH_ALEN);
+			size_t sz = netdev_mc_count(netdev) * ETH_ALEN;
+
+			rxConf->mfTableLen = cpu_to_le16(sz);
 			new_table_pa = dma_map_single(
 						&adapter->pdev->dev,
 						new_table,
-						rxConf->mfTableLen,
+						sz,
 						PCI_DMA_TODEVICE);
 		}
+2 -2
drivers/net/vmxnet3/vmxnet3_int.h
@@
 /*
  * Version numbers
  */
-#define VMXNET3_DRIVER_VERSION_STRING   "1.4.3.0-k"
+#define VMXNET3_DRIVER_VERSION_STRING   "1.4.4.0-k"
 
 /* a 32-bit int, each byte encode a verion number in VMXNET3_DRIVER_VERSION */
-#define VMXNET3_DRIVER_VERSION_NUM      0x01040300
+#define VMXNET3_DRIVER_VERSION_NUM      0x01040400
 
 #if defined(CONFIG_PCI_MSI)
 /* RSS only makes sense if MSI-X is supported. */
+1
include/linux/marvell_phy.h
@@
 #define MARVELL_PHY_ID_88E1318S		0x01410e90
 #define MARVELL_PHY_ID_88E1116R		0x01410e40
 #define MARVELL_PHY_ID_88E1510		0x01410dd0
+#define MARVELL_PHY_ID_88E1540		0x01410eb0
 #define MARVELL_PHY_ID_88E3016		0x01410e60
 
 /* struct phy_device dev_flags definitions */
+14 -10
include/linux/mlx5/mlx5_ifc.h
@@
 	u8         lro_cap[0x1];
 	u8         lro_psh_flag[0x1];
 	u8         lro_time_stamp[0x1];
-	u8         reserved_0[0x6];
+	u8         reserved_0[0x3];
+	u8         self_lb_en_modifiable[0x1];
+	u8         reserved_1[0x2];
 	u8         max_lso_cap[0x5];
-	u8         reserved_1[0x4];
+	u8         reserved_2[0x4];
 	u8         rss_ind_tbl_cap[0x4];
-	u8         reserved_2[0x3];
+	u8         reserved_3[0x3];
 	u8         tunnel_lso_const_out_ip_id[0x1];
-	u8         reserved_3[0x2];
+	u8         reserved_4[0x2];
 	u8         tunnel_statless_gre[0x1];
 	u8         tunnel_stateless_vxlan[0x1];
 
-	u8         reserved_4[0x20];
+	u8         reserved_5[0x20];
 
-	u8         reserved_5[0x10];
+	u8         reserved_6[0x10];
 	u8         lro_min_mss_size[0x10];
 
-	u8         reserved_6[0x120];
+	u8         reserved_7[0x120];
 
 	u8         lro_timer_supported_periods[4][0x20];
 
-	u8         reserved_7[0x600];
+	u8         reserved_8[0x600];
 };
 
 struct mlx5_ifc_roce_cap_bits {
@@
 };
 
 struct mlx5_ifc_modify_tir_bitmask_bits {
-	u8         reserved[0x20];
+	u8         reserved_0[0x20];
 
-	u8         reserved1[0x1f];
+	u8         reserved_1[0x1b];
+	u8         self_lb_en[0x1];
+	u8         reserved_2[0x3];
 	u8         lro[0x1];
 };
+20 -12
include/linux/netdevice.h
@@
 	struct u64_stats_sync	syncp;
 };
 
-#define netdev_alloc_pcpu_stats(type)				\
-({								\
-	typeof(type) __percpu *pcpu_stats = alloc_percpu(type); \
-	if (pcpu_stats)	{					\
-		int __cpu;					\
-		for_each_possible_cpu(__cpu) {			\
-			typeof(type) *stat;			\
-			stat = per_cpu_ptr(pcpu_stats, __cpu);	\
-			u64_stats_init(&stat->syncp);		\
-		}						\
-	}							\
-	pcpu_stats;						\
+#define __netdev_alloc_pcpu_stats(type, gfp)				\
+({									\
+	typeof(type) __percpu *pcpu_stats = alloc_percpu_gfp(type, gfp);\
+	if (pcpu_stats)	{						\
+		int __cpu;						\
+		for_each_possible_cpu(__cpu) {				\
+			typeof(type) *stat;				\
+			stat = per_cpu_ptr(pcpu_stats, __cpu);		\
+			u64_stats_init(&stat->syncp);			\
+		}							\
+	}								\
+	pcpu_stats;							\
 })
+
+#define netdev_alloc_pcpu_stats(type)					\
+	__netdev_alloc_pcpu_stats(type, GFP_KERNEL);
 
 #include <linux/notifier.h>
@@
 static inline bool netif_is_bridge_master(const struct net_device *dev)
 {
 	return dev->priv_flags & IFF_EBRIDGE;
+}
+
+static inline bool netif_is_bridge_port(const struct net_device *dev)
+{
+	return dev->priv_flags & IFF_BRIDGE_PORT;
 }
 
 static inline bool netif_is_ovs_master(const struct net_device *dev)
+1 -1
include/linux/netfilter/ipset/ip_set.h
@@
 extern int ip_set_get_ipaddr4(struct nlattr *nla,  __be32 *ipaddr);
 extern int ip_set_get_ipaddr6(struct nlattr *nla, union nf_inet_addr *ipaddr);
 extern size_t ip_set_elem_len(struct ip_set *set, struct nlattr *tb[],
-			      size_t len);
+			      size_t len, size_t align);
 extern int ip_set_get_extensions(struct ip_set *set, struct nlattr *tb[],
 				 struct ip_set_ext *ext);
+8 -5
include/linux/netfilter_ingress.h
@@
 #include <linux/netdevice.h>
 
 #ifdef CONFIG_NETFILTER_INGRESS
-static inline int nf_hook_ingress_active(struct sk_buff *skb)
+static inline bool nf_hook_ingress_active(const struct sk_buff *skb)
 {
-	return nf_hook_list_active(&skb->dev->nf_hooks_ingress,
-				   NFPROTO_NETDEV, NF_NETDEV_INGRESS);
+#ifdef HAVE_JUMP_LABEL
+	if (!static_key_false(&nf_hooks_needed[NFPROTO_NETDEV][NF_NETDEV_INGRESS]))
+		return false;
+#endif
+	return !list_empty(&skb->dev->nf_hooks_ingress);
 }
 
 static inline int nf_hook_ingress(struct sk_buff *skb)
@@
 	struct nf_hook_state state;
 
 	nf_hook_state_init(&state, &skb->dev->nf_hooks_ingress,
-			   NF_NETDEV_INGRESS, INT_MIN, NFPROTO_NETDEV, NULL,
-			   skb->dev, NULL, dev_net(skb->dev), NULL);
+			   NF_NETDEV_INGRESS, INT_MIN, NFPROTO_NETDEV,
+			   skb->dev, NULL, NULL, dev_net(skb->dev), NULL);
 	return nf_hook_slow(skb, &state);
 }
+2 -1
include/net/ip6_fib.h
@@
 
 static inline u32 rt6_get_cookie(const struct rt6_info *rt)
 {
-	if (rt->rt6i_flags & RTF_PCPU || unlikely(rt->dst.flags & DST_NOCACHE))
+	if (rt->rt6i_flags & RTF_PCPU ||
+	    (unlikely(rt->dst.flags & DST_NOCACHE) && rt->dst.from))
 		rt = (struct rt6_info *)(rt->dst.from);
 
 	return rt->rt6i_node ? rt->rt6i_node->fn_sernum : 0;
+2 -1
include/net/ip6_tunnel.h
@@
 	err = ip6_local_out(dev_net(skb_dst(skb)->dev), sk, skb);
 
 	if (net_xmit_eval(err) == 0) {
-		struct pcpu_sw_netstats *tstats = this_cpu_ptr(dev->tstats);
+		struct pcpu_sw_netstats *tstats = get_cpu_ptr(dev->tstats);
 		u64_stats_update_begin(&tstats->syncp);
 		tstats->tx_bytes += pkt_len;
 		tstats->tx_packets++;
 		u64_stats_update_end(&tstats->syncp);
+		put_cpu_ptr(tstats);
 	} else {
 		stats->tx_errors++;
 		stats->tx_aborted_errors++;
+2 -1
include/net/ip_tunnels.h
@@
 					 struct pcpu_sw_netstats __percpu *stats)
 {
 	if (err > 0) {
-		struct pcpu_sw_netstats *tstats = this_cpu_ptr(stats);
+		struct pcpu_sw_netstats *tstats = get_cpu_ptr(stats);
 
 		u64_stats_update_begin(&tstats->syncp);
 		tstats->tx_bytes += err;
 		tstats->tx_packets++;
 		u64_stats_update_end(&tstats->syncp);
+		put_cpu_ptr(tstats);
 	} else if (err < 0) {
 		err_stats->tx_errors++;
 		err_stats->tx_aborted_errors++;
+14 -2
include/net/netfilter/nf_tables.h
@@
 	void				(*eval)(const struct nft_expr *expr,
 						struct nft_regs *regs,
 						const struct nft_pktinfo *pkt);
+	int				(*clone)(struct nft_expr *dst,
+						 const struct nft_expr *src);
 	unsigned int			size;
 
 	int				(*init)(const struct nft_ctx *ctx,
@@
 int nft_expr_dump(struct sk_buff *skb, unsigned int attr,
 		  const struct nft_expr *expr);
 
-static inline void nft_expr_clone(struct nft_expr *dst, struct nft_expr *src)
+static inline int nft_expr_clone(struct nft_expr *dst, struct nft_expr *src)
 {
+	int err;
+
 	__module_get(src->ops->type->owner);
-	memcpy(dst, src, src->ops->size);
+	if (src->ops->clone) {
+		dst->ops = src->ops;
+		err = src->ops->clone(dst, src);
+		if (err < 0)
+			return err;
+	} else {
+		memcpy(dst, src, src->ops->size);
+	}
+	return 0;
 }
 
 /**
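The nf_tables change above lets an expression type opt into a real deep copy via a new `->clone()` hook, falling back to the old shallow memcpy when no hook is provided. A standalone sketch of that optional-hook dispatch, with hypothetical names mirroring the shape of nft_expr_clone():

```c
#include <string.h>

struct expr;

/* Per-type ops with an optional clone hook, like nft_expr_ops. */
struct expr_ops {
	int		(*clone)(struct expr *dst, const struct expr *src);
	unsigned int	size;
};

struct expr {
	const struct expr_ops	*ops;
	int			data[2];
};

/* A type-specific deep copy (a real one might duplicate heap state). */
static int deep_clone(struct expr *dst, const struct expr *src)
{
	dst->data[0] = src->data[0];
	dst->data[1] = src->data[1];
	return 0;
}

static const struct expr_ops plain_ops = { .clone = NULL,
					   .size = sizeof(struct expr) };
static const struct expr_ops deep_ops = { .clone = deep_clone,
					  .size = sizeof(struct expr) };

/* Same shape as the new nft_expr_clone(): returns an error code so a
 * per-type clone failure can propagate to the caller. */
static int expr_clone(struct expr *dst, const struct expr *src)
{
	if (src->ops->clone) {
		dst->ops = src->ops;
		return src->ops->clone(dst, src);
	}
	memcpy(dst, src, src->ops->size);
	return 0;
}

static int clone_demo(void)
{
	struct expr a = { &plain_ops, { 1, 2 } };
	struct expr b = { &deep_ops, { 3, 4 } };
	struct expr ca, cb;

	if (expr_clone(&ca, &a) || expr_clone(&cb, &b))
		return -1;
	return (ca.data[1] == 2 && cb.ops == &deep_ops && cb.data[0] == 3)
		? 0 : -1;
}
```

Note the clone path must set `dst->ops` itself, since the hook only copies type-specific state; the memcpy path gets it for free.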
+25
include/net/sock.h
@@
 	return (1 << sk->sk_state) & (TCPF_LISTEN | TCPF_NEW_SYN_RECV);
 }
 
+/**
+ * sk_state_load - read sk->sk_state for lockless contexts
+ * @sk: socket pointer
+ *
+ * Paired with sk_state_store(). Used in places we do not hold socket lock :
+ * tcp_diag_get_info(), tcp_get_info(), tcp_poll(), get_tcp4_sock() ...
+ */
+static inline int sk_state_load(const struct sock *sk)
+{
+	return smp_load_acquire(&sk->sk_state);
+}
+
+/**
+ * sk_state_store - update sk->sk_state
+ * @sk: socket pointer
+ * @newstate: new state
+ *
+ * Paired with sk_state_load(). Should be used in contexts where
+ * state change might impact lockless readers.
+ */
+static inline void sk_state_store(struct sock *sk, int newstate)
+{
+	smp_store_release(&sk->sk_state, newstate);
+}
+
 void sock_enable_timestamp(struct sock *sk, int flag);
 int sock_get_timestamp(struct sock *, struct timeval __user *);
 int sock_get_timestampns(struct sock *, struct timespec __user *);
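The sk_state_load()/sk_state_store() pair added above is an acquire/release pairing: the release store orders all of the writer's prior initialization before the state change becomes visible, and the acquire load guarantees a reader that observes the new state also sees that initialization. A userspace sketch of the same pairing using C11 atomics (the kernel helpers use smp_load_acquire()/smp_store_release(); the names below are illustrative):

```c
#include <stdatomic.h>

static _Atomic int demo_state;

/* Reader side: acquire ensures everything the writer published before
 * its release store is visible once the new state is observed. */
static int state_load(void)
{
	return atomic_load_explicit(&demo_state, memory_order_acquire);
}

/* Writer side: release orders all prior writes before the state
 * change becomes visible to lockless readers. */
static void state_store(int newstate)
{
	atomic_store_explicit(&demo_state, newstate, memory_order_release);
}
```

On a single thread the pairing is invisible; its point is that concurrent readers such as tcp_poll() can consult the state without taking the socket lock.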
+1 -1
include/net/switchdev.h
@@
 					    struct net_device *filter_dev,
 					    int idx)
 {
-	return -EOPNOTSUPP;
+	return idx;
 }
 
 static inline void switchdev_port_fwd_mark_set(struct net_device *dev,
+3 -1
net/8021q/vlan_core.c
@@
 		skb->pkt_type = PACKET_HOST;
 	}
 
-	if (!(vlan_dev_priv(vlan_dev)->flags & VLAN_FLAG_REORDER_HDR)) {
+	if (!(vlan_dev_priv(vlan_dev)->flags & VLAN_FLAG_REORDER_HDR) &&
+	    !netif_is_macvlan_port(vlan_dev) &&
+	    !netif_is_bridge_port(vlan_dev)) {
 		unsigned int offset = skb->data - skb_mac_header(skb);
 
 		/*
+1 -1
net/bridge/br_stp.c
@@
 
 	p->state = state;
 	err = switchdev_port_attr_set(p->dev, &attr);
-	if (err)
+	if (err && err != -EOPNOTSUPP)
 		br_warn(p->br, "error setting offload STP state on port %u(%s)\n",
 				(unsigned int) p->port_no, p->dev->name);
 }
+1 -1
net/bridge/br_stp_if.c
@@
 	p->config_pending = 0;
 
 	err = switchdev_port_attr_set(p->dev, &attr);
-	if (err)
+	if (err && err != -EOPNOTSUPP)
 		netdev_err(p->dev, "failed to set HW ageing time\n");
 }
+13 -5
net/core/dev.c
@@
 {
 	static const netdev_features_t null_features = 0;
 	struct net_device *dev = skb->dev;
-	const char *driver = "";
+	const char *name = "";
 
 	if (!net_ratelimit())
 		return;
 
-	if (dev && dev->dev.parent)
-		driver = dev_driver_string(dev->dev.parent);
-
+	if (dev) {
+		if (dev->dev.parent)
+			name = dev_driver_string(dev->dev.parent);
+		else
+			name = netdev_name(dev);
+	}
 	WARN(1, "%s: caps=(%pNF, %pNF) len=%d data_len=%d gso_size=%d "
 	     "gso_type=%d ip_summed=%d\n",
-	     driver, dev ? &dev->features : &null_features,
+	     name, dev ? &dev->features : &null_features,
 	     skb->sk ? &skb->sk->sk_route_caps : &null_features,
 	     skb->len, skb->data_len, skb_shinfo(skb)->gso_size,
 	     skb_shinfo(skb)->gso_type, skb->ip_summed);
@@
 
 	if (dev->netdev_ops->ndo_set_features)
 		err = dev->netdev_ops->ndo_set_features(dev, features);
+	else
+		err = 0;
 
 	if (unlikely(err < 0)) {
 		netdev_err(dev,
 			"set_features() failed (%d); wanted %pNF, left %pNF\n",
 			err, &features, &dev->features);
+		/* return non-0 since some features might have changed and
+		 * it's better to fire a spurious notification than miss it
+		 */
 		return -1;
 	}
+1 -1
net/core/neighbour.c
@@
 		struct sk_buff *skb = skb_peek_tail(&neigh->arp_queue);
 		/* keep skb alive even if arp_queue overflows */
 		if (skb)
-			skb = skb_copy(skb, GFP_ATOMIC);
+			skb = skb_clone(skb, GFP_ATOMIC);
 		write_unlock(&neigh->lock);
 		neigh->ops->solicit(neigh, skb);
 		atomic_inc(&neigh->probes);
+152 -122
net/core/rtnetlink.c
@@
 	return 0;
 }
 
+static noinline_for_stack int rtnl_fill_stats(struct sk_buff *skb,
+					      struct net_device *dev)
+{
+	const struct rtnl_link_stats64 *stats;
+	struct rtnl_link_stats64 temp;
+	struct nlattr *attr;
+
+	stats = dev_get_stats(dev, &temp);
+
+	attr = nla_reserve(skb, IFLA_STATS,
+			   sizeof(struct rtnl_link_stats));
+	if (!attr)
+		return -EMSGSIZE;
+
+	copy_rtnl_link_stats(nla_data(attr), stats);
+
+	attr = nla_reserve(skb, IFLA_STATS64,
+			   sizeof(struct rtnl_link_stats64));
+	if (!attr)
+		return -EMSGSIZE;
+
+	copy_rtnl_link_stats64(nla_data(attr), stats);
+
+	return 0;
+}
+
+static noinline_for_stack int rtnl_fill_vfinfo(struct sk_buff *skb,
+					       struct net_device *dev,
+					       int vfs_num,
+					       struct nlattr *vfinfo)
+{
+	struct ifla_vf_rss_query_en vf_rss_query_en;
+	struct ifla_vf_link_state vf_linkstate;
+	struct ifla_vf_spoofchk vf_spoofchk;
+	struct ifla_vf_tx_rate vf_tx_rate;
+	struct ifla_vf_stats vf_stats;
+	struct ifla_vf_trust vf_trust;
+	struct ifla_vf_vlan vf_vlan;
+	struct ifla_vf_rate vf_rate;
+	struct nlattr *vf, *vfstats;
+	struct ifla_vf_mac vf_mac;
+	struct ifla_vf_info ivi;
+
+	/* Not all SR-IOV capable drivers support the
+	 * spoofcheck and "RSS query enable" query. Preset to
+	 * -1 so the user space tool can detect that the driver
+	 * didn't report anything.
+	 */
+	ivi.spoofchk = -1;
+	ivi.rss_query_en = -1;
+	ivi.trusted = -1;
+	memset(ivi.mac, 0, sizeof(ivi.mac));
+	/* The default value for VF link state is "auto"
+	 * IFLA_VF_LINK_STATE_AUTO which equals zero
+	 */
+	ivi.linkstate = 0;
+	if (dev->netdev_ops->ndo_get_vf_config(dev, vfs_num, &ivi))
+		return 0;
+
+	vf_mac.vf =
+		vf_vlan.vf =
+		vf_rate.vf =
+		vf_tx_rate.vf =
+		vf_spoofchk.vf =
+		vf_linkstate.vf =
+		vf_rss_query_en.vf =
+		vf_trust.vf = ivi.vf;
+
+	memcpy(vf_mac.mac, ivi.mac, sizeof(ivi.mac));
+	vf_vlan.vlan = ivi.vlan;
+	vf_vlan.qos = ivi.qos;
+	vf_tx_rate.rate = ivi.max_tx_rate;
+	vf_rate.min_tx_rate = ivi.min_tx_rate;
+	vf_rate.max_tx_rate = ivi.max_tx_rate;
+	vf_spoofchk.setting = ivi.spoofchk;
+	vf_linkstate.link_state = ivi.linkstate;
+	vf_rss_query_en.setting = ivi.rss_query_en;
+	vf_trust.setting = ivi.trusted;
+	vf = nla_nest_start(skb, IFLA_VF_INFO);
+	if (!vf) {
+		nla_nest_cancel(skb, vfinfo);
+		return -EMSGSIZE;
+	}
+	if (nla_put(skb, IFLA_VF_MAC, sizeof(vf_mac), &vf_mac) ||
+	    nla_put(skb, IFLA_VF_VLAN, sizeof(vf_vlan), &vf_vlan) ||
+	    nla_put(skb, IFLA_VF_RATE, sizeof(vf_rate),
+		    &vf_rate) ||
+	    nla_put(skb, IFLA_VF_TX_RATE, sizeof(vf_tx_rate),
+		    &vf_tx_rate) ||
+	    nla_put(skb, IFLA_VF_SPOOFCHK, sizeof(vf_spoofchk),
+		    &vf_spoofchk) ||
+	    nla_put(skb, IFLA_VF_LINK_STATE, sizeof(vf_linkstate),
+		    &vf_linkstate) ||
+	    nla_put(skb, IFLA_VF_RSS_QUERY_EN,
+		    sizeof(vf_rss_query_en),
+		    &vf_rss_query_en) ||
+	    nla_put(skb, IFLA_VF_TRUST,
+		    sizeof(vf_trust), &vf_trust))
+		return -EMSGSIZE;
+	memset(&vf_stats, 0, sizeof(vf_stats));
+	if (dev->netdev_ops->ndo_get_vf_stats)
+		dev->netdev_ops->ndo_get_vf_stats(dev, vfs_num,
+						  &vf_stats);
+	vfstats = nla_nest_start(skb, IFLA_VF_STATS);
1152 + if (!vfstats) { 1153 + nla_nest_cancel(skb, vf); 1154 + nla_nest_cancel(skb, vfinfo); 1155 + return -EMSGSIZE; 1156 + } 1157 + if (nla_put_u64(skb, IFLA_VF_STATS_RX_PACKETS, 1158 + vf_stats.rx_packets) || 1159 + nla_put_u64(skb, IFLA_VF_STATS_TX_PACKETS, 1160 + vf_stats.tx_packets) || 1161 + nla_put_u64(skb, IFLA_VF_STATS_RX_BYTES, 1162 + vf_stats.rx_bytes) || 1163 + nla_put_u64(skb, IFLA_VF_STATS_TX_BYTES, 1164 + vf_stats.tx_bytes) || 1165 + nla_put_u64(skb, IFLA_VF_STATS_BROADCAST, 1166 + vf_stats.broadcast) || 1167 + nla_put_u64(skb, IFLA_VF_STATS_MULTICAST, 1168 + vf_stats.multicast)) 1169 + return -EMSGSIZE; 1170 + nla_nest_end(skb, vfstats); 1171 + nla_nest_end(skb, vf); 1172 + return 0; 1173 + } 1174 + 1175 + static int rtnl_fill_link_ifmap(struct sk_buff *skb, struct net_device *dev) 1176 + { 1177 + struct rtnl_link_ifmap map = { 1178 + .mem_start = dev->mem_start, 1179 + .mem_end = dev->mem_end, 1180 + .base_addr = dev->base_addr, 1181 + .irq = dev->irq, 1182 + .dma = dev->dma, 1183 + .port = dev->if_port, 1184 + }; 1185 + if (nla_put(skb, IFLA_MAP, sizeof(map), &map)) 1186 + return -EMSGSIZE; 1187 + 1188 + return 0; 1189 + } 1190 + 1048 1191 static int rtnl_fill_ifinfo(struct sk_buff *skb, struct net_device *dev, 1049 1192 int type, u32 pid, u32 seq, u32 change, 1050 1193 unsigned int flags, u32 ext_filter_mask) 1051 1194 { 1052 1195 struct ifinfomsg *ifm; 1053 1196 struct nlmsghdr *nlh; 1054 - struct rtnl_link_stats64 temp; 1055 - const struct rtnl_link_stats64 *stats; 1056 - struct nlattr *attr, *af_spec; 1197 + struct nlattr *af_spec; 1057 1198 struct rtnl_af_ops *af_ops; 1058 1199 struct net_device *upper_dev = netdev_master_upper_dev_get(dev); 1059 1200 ··· 1237 1096 nla_put_u8(skb, IFLA_PROTO_DOWN, dev->proto_down)) 1238 1097 goto nla_put_failure; 1239 1098 1240 - if (1) { 1241 - struct rtnl_link_ifmap map = { 1242 - .mem_start = dev->mem_start, 1243 - .mem_end = dev->mem_end, 1244 - .base_addr = dev->base_addr, 1245 - .irq = dev->irq, 1246 - 
.dma = dev->dma, 1247 - .port = dev->if_port, 1248 - }; 1249 - if (nla_put(skb, IFLA_MAP, sizeof(map), &map)) 1250 - goto nla_put_failure; 1251 - } 1099 + if (rtnl_fill_link_ifmap(skb, dev)) 1100 + goto nla_put_failure; 1252 1101 1253 1102 if (dev->addr_len) { 1254 1103 if (nla_put(skb, IFLA_ADDRESS, dev->addr_len, dev->dev_addr) || ··· 1255 1124 if (rtnl_phys_switch_id_fill(skb, dev)) 1256 1125 goto nla_put_failure; 1257 1126 1258 - attr = nla_reserve(skb, IFLA_STATS, 1259 - sizeof(struct rtnl_link_stats)); 1260 - if (attr == NULL) 1127 + if (rtnl_fill_stats(skb, dev)) 1261 1128 goto nla_put_failure; 1262 - 1263 - stats = dev_get_stats(dev, &temp); 1264 - copy_rtnl_link_stats(nla_data(attr), stats); 1265 - 1266 - attr = nla_reserve(skb, IFLA_STATS64, 1267 - sizeof(struct rtnl_link_stats64)); 1268 - if (attr == NULL) 1269 - goto nla_put_failure; 1270 - copy_rtnl_link_stats64(nla_data(attr), stats); 1271 1129 1272 1130 if (dev->dev.parent && (ext_filter_mask & RTEXT_FILTER_VF) && 1273 1131 nla_put_u32(skb, IFLA_NUM_VF, dev_num_vf(dev->dev.parent))) 1274 1132 goto nla_put_failure; 1275 1133 1276 - if (dev->netdev_ops->ndo_get_vf_config && dev->dev.parent 1277 - && (ext_filter_mask & RTEXT_FILTER_VF)) { 1134 + if (dev->netdev_ops->ndo_get_vf_config && dev->dev.parent && 1135 + ext_filter_mask & RTEXT_FILTER_VF) { 1278 1136 int i; 1279 - 1280 - struct nlattr *vfinfo, *vf, *vfstats; 1137 + struct nlattr *vfinfo; 1281 1138 int num_vfs = dev_num_vf(dev->dev.parent); 1282 1139 1283 1140 vfinfo = nla_nest_start(skb, IFLA_VFINFO_LIST); 1284 1141 if (!vfinfo) 1285 1142 goto nla_put_failure; 1286 1143 for (i = 0; i < num_vfs; i++) { 1287 - struct ifla_vf_info ivi; 1288 - struct ifla_vf_mac vf_mac; 1289 - struct ifla_vf_vlan vf_vlan; 1290 - struct ifla_vf_rate vf_rate; 1291 - struct ifla_vf_tx_rate vf_tx_rate; 1292 - struct ifla_vf_spoofchk vf_spoofchk; 1293 - struct ifla_vf_link_state vf_linkstate; 1294 - struct ifla_vf_rss_query_en vf_rss_query_en; 1295 - struct ifla_vf_stats 
vf_stats; 1296 - struct ifla_vf_trust vf_trust; 1297 - 1298 - /* 1299 - * Not all SR-IOV capable drivers support the 1300 - * spoofcheck and "RSS query enable" query. Preset to 1301 - * -1 so the user space tool can detect that the driver 1302 - * didn't report anything. 1303 - */ 1304 - ivi.spoofchk = -1; 1305 - ivi.rss_query_en = -1; 1306 - ivi.trusted = -1; 1307 - memset(ivi.mac, 0, sizeof(ivi.mac)); 1308 - /* The default value for VF link state is "auto" 1309 - * IFLA_VF_LINK_STATE_AUTO which equals zero 1310 - */ 1311 - ivi.linkstate = 0; 1312 - if (dev->netdev_ops->ndo_get_vf_config(dev, i, &ivi)) 1313 - break; 1314 - vf_mac.vf = 1315 - vf_vlan.vf = 1316 - vf_rate.vf = 1317 - vf_tx_rate.vf = 1318 - vf_spoofchk.vf = 1319 - vf_linkstate.vf = 1320 - vf_rss_query_en.vf = 1321 - vf_trust.vf = ivi.vf; 1322 - 1323 - memcpy(vf_mac.mac, ivi.mac, sizeof(ivi.mac)); 1324 - vf_vlan.vlan = ivi.vlan; 1325 - vf_vlan.qos = ivi.qos; 1326 - vf_tx_rate.rate = ivi.max_tx_rate; 1327 - vf_rate.min_tx_rate = ivi.min_tx_rate; 1328 - vf_rate.max_tx_rate = ivi.max_tx_rate; 1329 - vf_spoofchk.setting = ivi.spoofchk; 1330 - vf_linkstate.link_state = ivi.linkstate; 1331 - vf_rss_query_en.setting = ivi.rss_query_en; 1332 - vf_trust.setting = ivi.trusted; 1333 - vf = nla_nest_start(skb, IFLA_VF_INFO); 1334 - if (!vf) { 1335 - nla_nest_cancel(skb, vfinfo); 1144 + if (rtnl_fill_vfinfo(skb, dev, i, vfinfo)) 1336 1145 goto nla_put_failure; 1337 - } 1338 - if (nla_put(skb, IFLA_VF_MAC, sizeof(vf_mac), &vf_mac) || 1339 - nla_put(skb, IFLA_VF_VLAN, sizeof(vf_vlan), &vf_vlan) || 1340 - nla_put(skb, IFLA_VF_RATE, sizeof(vf_rate), 1341 - &vf_rate) || 1342 - nla_put(skb, IFLA_VF_TX_RATE, sizeof(vf_tx_rate), 1343 - &vf_tx_rate) || 1344 - nla_put(skb, IFLA_VF_SPOOFCHK, sizeof(vf_spoofchk), 1345 - &vf_spoofchk) || 1346 - nla_put(skb, IFLA_VF_LINK_STATE, sizeof(vf_linkstate), 1347 - &vf_linkstate) || 1348 - nla_put(skb, IFLA_VF_RSS_QUERY_EN, 1349 - sizeof(vf_rss_query_en), 1350 - &vf_rss_query_en) || 1351 
- nla_put(skb, IFLA_VF_TRUST, 1352 - sizeof(vf_trust), &vf_trust)) 1353 - goto nla_put_failure; 1354 - memset(&vf_stats, 0, sizeof(vf_stats)); 1355 - if (dev->netdev_ops->ndo_get_vf_stats) 1356 - dev->netdev_ops->ndo_get_vf_stats(dev, i, 1357 - &vf_stats); 1358 - vfstats = nla_nest_start(skb, IFLA_VF_STATS); 1359 - if (!vfstats) { 1360 - nla_nest_cancel(skb, vf); 1361 - nla_nest_cancel(skb, vfinfo); 1362 - goto nla_put_failure; 1363 - } 1364 - if (nla_put_u64(skb, IFLA_VF_STATS_RX_PACKETS, 1365 - vf_stats.rx_packets) || 1366 - nla_put_u64(skb, IFLA_VF_STATS_TX_PACKETS, 1367 - vf_stats.tx_packets) || 1368 - nla_put_u64(skb, IFLA_VF_STATS_RX_BYTES, 1369 - vf_stats.rx_bytes) || 1370 - nla_put_u64(skb, IFLA_VF_STATS_TX_BYTES, 1371 - vf_stats.tx_bytes) || 1372 - nla_put_u64(skb, IFLA_VF_STATS_BROADCAST, 1373 - vf_stats.broadcast) || 1374 - nla_put_u64(skb, IFLA_VF_STATS_MULTICAST, 1375 - vf_stats.multicast)) 1376 - goto nla_put_failure; 1377 - nla_nest_end(skb, vfstats); 1378 - nla_nest_end(skb, vf); 1379 1146 } 1147 + 1380 1148 nla_nest_end(skb, vfinfo); 1381 1149 } 1382 1150
+2 -1
net/core/skbuff.c
··· 4268 4268 return NULL; 4269 4269 } 4270 4270 4271 - memmove(skb->data - ETH_HLEN, skb->data - VLAN_ETH_HLEN, 2 * ETH_ALEN); 4271 + memmove(skb->data - ETH_HLEN, skb->data - skb->mac_len, 4272 + 2 * ETH_ALEN); 4272 4273 skb->mac_header += VLAN_HLEN; 4273 4274 return skb; 4274 4275 }
+2 -2
net/ipv4/inet_connection_sock.c
··· 563 563 int max_retries, thresh; 564 564 u8 defer_accept; 565 565 566 - if (sk_listener->sk_state != TCP_LISTEN) 566 + if (sk_state_load(sk_listener) != TCP_LISTEN) 567 567 goto drop; 568 568 569 569 max_retries = icsk->icsk_syn_retries ? : sysctl_tcp_synack_retries; ··· 749 749 * It is OK, because this socket enters to hash table only 750 750 * after validation is complete. 751 751 */ 752 - sk->sk_state = TCP_LISTEN; 752 + sk_state_store(sk, TCP_LISTEN); 753 753 if (!sk->sk_prot->get_port(sk, inet->inet_num)) { 754 754 inet->inet_sport = htons(inet->inet_num); 755 755
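The sk_state_load()/sk_state_store() pair replaces plain reads and writes of sk->sk_state so that lockless readers, like the SYN-ACK timer above, see a consistently published value. The pairing can be modeled in userspace with C11 acquire/release atomics (an illustrative sketch with toy names, not the kernel's implementation):

```c
#include <assert.h>
#include <stdatomic.h>

#define TCP_LISTEN 10
#define TCP_CLOSE   7

/* Toy socket: state is published with release semantics and read
 * with acquire semantics, mirroring the store/load pairing. */
struct toy_sock {
    _Atomic int sk_state;
};

static void toy_state_store(struct toy_sock *sk, int state)
{
    atomic_store_explicit(&sk->sk_state, state, memory_order_release);
}

static int toy_state_load(struct toy_sock *sk)
{
    return atomic_load_explicit(&sk->sk_state, memory_order_acquire);
}
```

The release store guarantees that everything initialized before the state change is visible to any reader that observes the new state via the acquire load.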
+1 -1
net/ipv4/netfilter/nf_nat_pptp.c
··· 45 45 struct net *net = nf_ct_net(ct); 46 46 const struct nf_conn *master = ct->master; 47 47 struct nf_conntrack_expect *other_exp; 48 - struct nf_conntrack_tuple t; 48 + struct nf_conntrack_tuple t = {}; 49 49 const struct nf_ct_pptp_master *ct_pptp_info; 50 50 const struct nf_nat_pptp *nat_pptp_info; 51 51 struct nf_nat_range range;
+5 -3
net/ipv4/raw.c
··· 406 406 ip_select_ident(net, skb, NULL); 407 407 408 408 iph->check = ip_fast_csum((unsigned char *)iph, iph->ihl); 409 + skb->transport_header += iphlen; 410 + if (iph->protocol == IPPROTO_ICMP && 411 + length >= iphlen + sizeof(struct icmphdr)) 412 + icmp_out_count(net, ((struct icmphdr *) 413 + skb_transport_header(skb))->type); 409 414 } 410 - if (iph->protocol == IPPROTO_ICMP) 411 - icmp_out_count(net, ((struct icmphdr *) 412 - skb_transport_header(skb))->type); 413 415 414 416 err = NF_HOOK(NFPROTO_IPV4, NF_INET_LOCAL_OUT, 415 417 net, sk, skb, NULL, rt->dst.dev,
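The raw.c fix only bumps the ICMP output counter after verifying that the user-supplied packet actually contains a full ICMP header past the IP header; before, a too-short raw packet could be dereferenced past its end. A minimal standalone sketch of that check (toy struct, not the kernel's headers):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define TOY_IPPROTO_ICMP 1  /* local copy to avoid clashing with system headers */

struct toy_icmphdr {
    uint8_t  type;
    uint8_t  code;
    uint16_t checksum;
};

/* Return the ICMP type, or -1 when the packet is too short to hold a
 * full ICMP header after 'iphlen' bytes of IP header -- the length
 * check is the part the unpatched code was missing. */
static int icmp_type_or_err(const uint8_t *pkt, size_t length,
                            size_t iphlen, uint8_t protocol)
{
    struct toy_icmphdr hdr;

    if (protocol != TOY_IPPROTO_ICMP)
        return -1;
    if (length < iphlen + sizeof(hdr))
        return -1;
    memcpy(&hdr, pkt + iphlen, sizeof(hdr));
    return hdr.type;
}
```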
+11 -10
net/ipv4/tcp.c
··· 451 451 unsigned int mask; 452 452 struct sock *sk = sock->sk; 453 453 const struct tcp_sock *tp = tcp_sk(sk); 454 + int state; 454 455 455 456 sock_rps_record_flow(sk); 456 457 457 458 sock_poll_wait(file, sk_sleep(sk), wait); 458 - if (sk->sk_state == TCP_LISTEN) 459 + 460 + state = sk_state_load(sk); 461 + if (state == TCP_LISTEN) 459 462 return inet_csk_listen_poll(sk); 460 463 461 464 /* Socket is not locked. We are protected from async events ··· 495 492 * NOTE. Check for TCP_CLOSE is added. The goal is to prevent 496 493 * blocking on fresh not-connected or disconnected socket. --ANK 497 494 */ 498 - if (sk->sk_shutdown == SHUTDOWN_MASK || sk->sk_state == TCP_CLOSE) 495 + if (sk->sk_shutdown == SHUTDOWN_MASK || state == TCP_CLOSE) 499 496 mask |= POLLHUP; 500 497 if (sk->sk_shutdown & RCV_SHUTDOWN) 501 498 mask |= POLLIN | POLLRDNORM | POLLRDHUP; 502 499 503 500 /* Connected or passive Fast Open socket? */ 504 - if (sk->sk_state != TCP_SYN_SENT && 505 - (sk->sk_state != TCP_SYN_RECV || tp->fastopen_rsk)) { 501 + if (state != TCP_SYN_SENT && 502 + (state != TCP_SYN_RECV || tp->fastopen_rsk)) { 506 503 int target = sock_rcvlowat(sk, 0, INT_MAX); 507 504 508 505 if (tp->urg_seq == tp->copied_seq && ··· 510 507 tp->urg_data) 511 508 target++; 512 509 513 - /* Potential race condition. If read of tp below will 514 - * escape above sk->sk_state, we can be illegally awaken 515 - * in SYN_* states. */ 516 510 if (tp->rcv_nxt - tp->copied_seq >= target) 517 511 mask |= POLLIN | POLLRDNORM; 518 512 ··· 1934 1934 /* Change state AFTER socket is unhashed to avoid closed 1935 1935 * socket sitting in hash tables. 
1936 1936 */ 1937 - sk->sk_state = state; 1937 + sk_state_store(sk, state); 1938 1938 1939 1939 #ifdef STATE_TRACE 1940 1940 SOCK_DEBUG(sk, "TCP sk=%p, State %s -> %s\n", sk, statename[oldstate], statename[state]); ··· 2644 2644 if (sk->sk_type != SOCK_STREAM) 2645 2645 return; 2646 2646 2647 - info->tcpi_state = sk->sk_state; 2647 + info->tcpi_state = sk_state_load(sk); 2648 + 2648 2649 info->tcpi_ca_state = icsk->icsk_ca_state; 2649 2650 info->tcpi_retransmits = icsk->icsk_retransmits; 2650 2651 info->tcpi_probes = icsk->icsk_probes_out; ··· 2673 2672 info->tcpi_snd_mss = tp->mss_cache; 2674 2673 info->tcpi_rcv_mss = icsk->icsk_ack.rcv_mss; 2675 2674 2676 - if (sk->sk_state == TCP_LISTEN) { 2675 + if (info->tcpi_state == TCP_LISTEN) { 2677 2676 info->tcpi_unacked = sk->sk_ack_backlog; 2678 2677 info->tcpi_sacked = sk->sk_max_ack_backlog; 2679 2678 } else {
+1 -1
net/ipv4/tcp_diag.c
··· 21 21 { 22 22 struct tcp_info *info = _info; 23 23 24 - if (sk->sk_state == TCP_LISTEN) { 24 + if (sk_state_load(sk) == TCP_LISTEN) { 25 25 r->idiag_rqueue = sk->sk_ack_backlog; 26 26 r->idiag_wqueue = sk->sk_max_ack_backlog; 27 27 } else if (sk->sk_type == SOCK_STREAM) {
+8 -6
net/ipv4/tcp_ipv4.c
··· 2158 2158 __u16 destp = ntohs(inet->inet_dport); 2159 2159 __u16 srcp = ntohs(inet->inet_sport); 2160 2160 int rx_queue; 2161 + int state; 2161 2162 2162 2163 if (icsk->icsk_pending == ICSK_TIME_RETRANS || 2163 2164 icsk->icsk_pending == ICSK_TIME_EARLY_RETRANS || ··· 2176 2175 timer_expires = jiffies; 2177 2176 } 2178 2177 2179 - if (sk->sk_state == TCP_LISTEN) 2178 + state = sk_state_load(sk); 2179 + if (state == TCP_LISTEN) 2180 2180 rx_queue = sk->sk_ack_backlog; 2181 2181 else 2182 - /* 2183 - * because we dont lock socket, we might find a transient negative value 2182 + /* Because we don't lock the socket, 2183 + * we might find a transient negative value. 2184 2184 */ 2185 2185 rx_queue = max_t(int, tp->rcv_nxt - tp->copied_seq, 0); 2186 2186 2187 2187 seq_printf(f, "%4d: %08X:%04X %08X:%04X %02X %08X:%08X %02X:%08lX " 2188 2188 "%08X %5u %8d %lu %d %pK %lu %lu %u %u %d", 2189 - i, src, srcp, dest, destp, sk->sk_state, 2189 + i, src, srcp, dest, destp, state, 2190 2190 tp->write_seq - tp->snd_una, 2191 2191 rx_queue, 2192 2192 timer_active, ··· 2201 2199 jiffies_to_clock_t(icsk->icsk_ack.ato), 2202 2200 (icsk->icsk_ack.quick << 1) | icsk->icsk_ack.pingpong, 2203 2201 tp->snd_cwnd, 2204 - sk->sk_state == TCP_LISTEN ? 2205 - (fastopenq ? fastopenq->max_qlen : 0) : 2202 + state == TCP_LISTEN ? 2203 + fastopenq->max_qlen : 2206 2204 (tcp_in_initial_slowstart(tp) ? -1 : tp->snd_ssthresh)); 2207 2205 } 2208 2206
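Because the /proc reader does not lock the socket, rcv_nxt and copied_seq can be sampled inconsistently and the unsigned difference may be transiently "negative"; the max_t(int, ..., 0) clamp keeps that from being reported as a huge queue length. A standalone sketch of the clamp:

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors max_t(int, tp->rcv_nxt - tp->copied_seq, 0): the u32
 * subtraction wraps, the int cast makes a small overshoot negative,
 * and the clamp turns that transient into 0. */
static int rx_queue_len(uint32_t rcv_nxt, uint32_t copied_seq)
{
    int diff = (int)(rcv_nxt - copied_seq);

    return diff > 0 ? diff : 0;
}
```

The same pattern appears in the tcp_ipv6.c hunk below, which gained the identical clamp.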
-2
net/ipv6/mcast.c
··· 1651 1651 if (!err) { 1652 1652 ICMP6MSGOUT_INC_STATS(net, idev, ICMPV6_MLD2_REPORT); 1653 1653 ICMP6_INC_STATS(net, idev, ICMP6_MIB_OUTMSGS); 1654 - IP6_UPD_PO_STATS(net, idev, IPSTATS_MIB_OUTMCAST, payload_len); 1655 1654 } else { 1656 1655 IP6_INC_STATS(net, idev, IPSTATS_MIB_OUTDISCARDS); 1657 1656 } ··· 2014 2015 if (!err) { 2015 2016 ICMP6MSGOUT_INC_STATS(net, idev, type); 2016 2017 ICMP6_INC_STATS(net, idev, ICMP6_MIB_OUTMSGS); 2017 - IP6_UPD_PO_STATS(net, idev, IPSTATS_MIB_OUTMCAST, full_len); 2018 2018 } else 2019 2019 IP6_INC_STATS(net, idev, IPSTATS_MIB_OUTDISCARDS); 2020 2020
+19 -3
net/ipv6/route.c
··· 404 404 } 405 405 } 406 406 407 + static bool __rt6_check_expired(const struct rt6_info *rt) 408 + { 409 + if (rt->rt6i_flags & RTF_EXPIRES) 410 + return time_after(jiffies, rt->dst.expires); 411 + else 412 + return false; 413 + } 414 + 407 415 static bool rt6_check_expired(const struct rt6_info *rt) 408 416 { 409 417 if (rt->rt6i_flags & RTF_EXPIRES) { ··· 1260 1252 1261 1253 static struct dst_entry *rt6_dst_from_check(struct rt6_info *rt, u32 cookie) 1262 1254 { 1263 - if (rt->dst.obsolete == DST_OBSOLETE_FORCE_CHK && 1255 + if (!__rt6_check_expired(rt) && 1256 + rt->dst.obsolete == DST_OBSOLETE_FORCE_CHK && 1264 1257 rt6_check((struct rt6_info *)(rt->dst.from), cookie)) 1265 1258 return &rt->dst; 1266 1259 else ··· 1281 1272 1282 1273 rt6_dst_from_metrics_check(rt); 1283 1274 1284 - if ((rt->rt6i_flags & RTF_PCPU) || unlikely(dst->flags & DST_NOCACHE)) 1275 + if (rt->rt6i_flags & RTF_PCPU || 1276 + (unlikely(dst->flags & DST_NOCACHE) && rt->dst.from)) 1285 1277 return rt6_dst_from_check(rt, cookie); 1286 1278 else 1287 1279 return rt6_check(rt, cookie); ··· 1332 1322 rt6_update_expires(rt, net->ipv6.sysctl.ip6_rt_mtu_expires); 1333 1323 } 1334 1324 1325 + static bool rt6_cache_allowed_for_pmtu(const struct rt6_info *rt) 1326 + { 1327 + return !(rt->rt6i_flags & RTF_CACHE) && 1328 + (rt->rt6i_flags & RTF_PCPU || rt->rt6i_node); 1329 + } 1330 + 1335 1331 static void __ip6_rt_update_pmtu(struct dst_entry *dst, const struct sock *sk, 1336 1332 const struct ipv6hdr *iph, u32 mtu) 1337 1333 { ··· 1351 1335 if (mtu >= dst_mtu(dst)) 1352 1336 return; 1353 1337 1354 - if (rt6->rt6i_flags & RTF_CACHE) { 1338 + if (!rt6_cache_allowed_for_pmtu(rt6)) { 1355 1339 rt6_do_update_pmtu(rt6, mtu); 1356 1340 } else { 1357 1341 const struct in6_addr *daddr, *saddr;
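The new __rt6_check_expired() relies on time_after(jiffies, rt->dst.expires), which stays correct even when the jiffies counter wraps because it compares the signed difference rather than the raw values. A standalone rendition of the kernel macro:

```c
#include <assert.h>
#include <limits.h>

/* Wraparound-safe "a is after b", as in include/linux/jiffies.h:
 * true exactly when (long)(b - a) is negative. */
#define time_after(a, b) ((long)((b) - (a)) < 0)

static int expired(unsigned long jiffies_now, unsigned long expires)
{
    return time_after(jiffies_now, expires);
}
```

A plain `jiffies_now > expires` comparison would report "not expired" for an eternity right after the counter wraps; the signed-difference form only assumes the two timestamps are within half the counter range of each other.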
+15 -4
net/ipv6/tcp_ipv6.c
··· 1690 1690 const struct tcp_sock *tp = tcp_sk(sp); 1691 1691 const struct inet_connection_sock *icsk = inet_csk(sp); 1692 1692 const struct fastopen_queue *fastopenq = &icsk->icsk_accept_queue.fastopenq; 1693 + int rx_queue; 1694 + int state; 1693 1695 1694 1696 dest = &sp->sk_v6_daddr; 1695 1697 src = &sp->sk_v6_rcv_saddr; ··· 1712 1710 timer_expires = jiffies; 1713 1711 } 1714 1712 1713 + state = sk_state_load(sp); 1714 + if (state == TCP_LISTEN) 1715 + rx_queue = sp->sk_ack_backlog; 1716 + else 1717 + /* Because we don't lock the socket, 1718 + * we might find a transient negative value. 1719 + */ 1720 + rx_queue = max_t(int, tp->rcv_nxt - tp->copied_seq, 0); 1721 + 1715 1722 seq_printf(seq, 1716 1723 "%4d: %08X%08X%08X%08X:%04X %08X%08X%08X%08X:%04X " 1717 1724 "%02X %08X:%08X %02X:%08lX %08X %5u %8d %lu %d %pK %lu %lu %u %u %d\n", ··· 1729 1718 src->s6_addr32[2], src->s6_addr32[3], srcp, 1730 1719 dest->s6_addr32[0], dest->s6_addr32[1], 1731 1720 dest->s6_addr32[2], dest->s6_addr32[3], destp, 1732 - sp->sk_state, 1733 - tp->write_seq-tp->snd_una, 1734 - (sp->sk_state == TCP_LISTEN) ? sp->sk_ack_backlog : (tp->rcv_nxt - tp->copied_seq), 1721 + state, 1722 + tp->write_seq - tp->snd_una, 1723 + rx_queue, 1735 1724 timer_active, 1736 1725 jiffies_delta_to_clock_t(timer_expires - jiffies), 1737 1726 icsk->icsk_retransmits, ··· 1743 1732 jiffies_to_clock_t(icsk->icsk_ack.ato), 1744 1733 (icsk->icsk_ack.quick << 1) | icsk->icsk_ack.pingpong, 1745 1734 tp->snd_cwnd, 1746 - sp->sk_state == TCP_LISTEN ? 1735 + state == TCP_LISTEN ? 1747 1736 fastopenq->max_qlen : 1748 1737 (tcp_in_initial_slowstart(tp) ? -1 : tp->snd_ssthresh) 1749 1738 );
+3 -3
net/netfilter/Kconfig
··· 869 869 depends on IPV6 || IPV6=n 870 870 depends on !NF_CONNTRACK || NF_CONNTRACK 871 871 select NF_DUP_IPV4 872 - select NF_DUP_IPV6 if IP6_NF_IPTABLES 872 + select NF_DUP_IPV6 if IP6_NF_IPTABLES != n 873 873 ---help--- 874 874 This option adds a "TEE" target with which a packet can be cloned and 875 875 this clone be rerouted to another nexthop. ··· 882 882 depends on IP6_NF_IPTABLES || IP6_NF_IPTABLES=n 883 883 depends on IP_NF_MANGLE 884 884 select NF_DEFRAG_IPV4 885 - select NF_DEFRAG_IPV6 if IP6_NF_IPTABLES 885 + select NF_DEFRAG_IPV6 if IP6_NF_IPTABLES != n 886 886 help 887 887 This option adds a `TPROXY' target, which is somewhat similar to 888 888 REDIRECT. It can only be used in the mangle table and is useful ··· 1375 1375 depends on IPV6 || IPV6=n 1376 1376 depends on IP6_NF_IPTABLES || IP6_NF_IPTABLES=n 1377 1377 select NF_DEFRAG_IPV4 1378 - select NF_DEFRAG_IPV6 if IP6_NF_IPTABLES 1378 + select NF_DEFRAG_IPV6 if IP6_NF_IPTABLES != n 1379 1379 help 1380 1380 This option adds a `socket' match, which can be used to match 1381 1381 packets for which a TCP or UDP socket lookup finds a valid socket.
+6 -11
net/netfilter/ipset/ip_set_bitmap_gen.h
··· 33 33 #define mtype_gc IPSET_TOKEN(MTYPE, _gc) 34 34 #define mtype MTYPE 35 35 36 - #define get_ext(set, map, id) ((map)->extensions + (set)->dsize * (id)) 36 + #define get_ext(set, map, id) ((map)->extensions + ((set)->dsize * (id))) 37 37 38 38 static void 39 39 mtype_gc_init(struct ip_set *set, void (*gc)(unsigned long ul_set)) ··· 67 67 del_timer_sync(&map->gc); 68 68 69 69 ip_set_free(map->members); 70 - if (set->dsize) { 71 - if (set->extensions & IPSET_EXT_DESTROY) 72 - mtype_ext_cleanup(set); 73 - ip_set_free(map->extensions); 74 - } 75 - kfree(map); 70 + if (set->dsize && set->extensions & IPSET_EXT_DESTROY) 71 + mtype_ext_cleanup(set); 72 + ip_set_free(map); 76 73 77 74 set->data = NULL; 78 75 } ··· 89 92 { 90 93 const struct mtype *map = set->data; 91 94 struct nlattr *nested; 95 + size_t memsize = sizeof(*map) + map->memsize; 92 96 93 97 nested = ipset_nest_start(skb, IPSET_ATTR_DATA); 94 98 if (!nested) 95 99 goto nla_put_failure; 96 100 if (mtype_do_head(skb, map) || 97 101 nla_put_net32(skb, IPSET_ATTR_REFERENCES, htonl(set->ref - 1)) || 98 - nla_put_net32(skb, IPSET_ATTR_MEMSIZE, 99 - htonl(sizeof(*map) + 100 - map->memsize + 101 - set->dsize * map->elements))) 102 + nla_put_net32(skb, IPSET_ATTR_MEMSIZE, htonl(memsize))) 102 103 goto nla_put_failure; 103 104 if (unlikely(ip_set_put_flags(skb, set))) 104 105 goto nla_put_failure;
+4 -10
net/netfilter/ipset/ip_set_bitmap_ip.c
··· 41 41 /* Type structure */ 42 42 struct bitmap_ip { 43 43 void *members; /* the set members */ 44 - void *extensions; /* data extensions */ 45 44 u32 first_ip; /* host byte order, included in range */ 46 45 u32 last_ip; /* host byte order, included in range */ 47 46 u32 elements; /* number of max elements in the set */ ··· 48 49 size_t memsize; /* members size */ 49 50 u8 netmask; /* subnet netmask */ 50 51 struct timer_list gc; /* garbage collection */ 52 + unsigned char extensions[0] /* data extensions */ 53 + __aligned(__alignof__(u64)); 51 54 }; 52 55 53 56 /* ADT structure for generic function args */ ··· 225 224 map->members = ip_set_alloc(map->memsize); 226 225 if (!map->members) 227 226 return false; 228 - if (set->dsize) { 229 - map->extensions = ip_set_alloc(set->dsize * elements); 230 - if (!map->extensions) { 231 - kfree(map->members); 232 - return false; 233 - } 234 - } 235 227 map->first_ip = first_ip; 236 228 map->last_ip = last_ip; 237 229 map->elements = elements; ··· 310 316 pr_debug("hosts %u, elements %llu\n", 311 317 hosts, (unsigned long long)elements); 312 318 313 - map = kzalloc(sizeof(*map), GFP_KERNEL); 319 + set->dsize = ip_set_elem_len(set, tb, 0, 0); 320 + map = ip_set_alloc(sizeof(*map) + elements * set->dsize); 314 321 if (!map) 315 322 return -ENOMEM; 316 323 317 324 map->memsize = bitmap_bytes(0, elements - 1); 318 325 set->variant = &bitmap_ip; 319 - set->dsize = ip_set_elem_len(set, tb, 0); 320 326 if (!init_map_ip(set, map, first_ip, last_ip, 321 327 elements, hosts, netmask)) { 322 328 kfree(map);
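The ipset rework replaces the separate `extensions` allocation with a u64-aligned flexible array at the end of the map, so a single allocation covers the header and all per-element extension space, and get_ext() becomes plain pointer arithmetic. A userspace sketch of that layout (hypothetical names, GCC-style attribute as used by the patch):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* One allocation holds the fixed header plus per-element extension
 * space, like 'struct bitmap_ip' after the patch. */
struct toy_map {
    uint32_t elements;  /* number of elements */
    size_t dsize;       /* per-element extension size */
    unsigned char extensions[] __attribute__((aligned(8)));
};

#define get_ext(map, id) ((map)->extensions + ((map)->dsize * (id)))

static struct toy_map *toy_map_alloc(uint32_t elements, size_t dsize)
{
    struct toy_map *map = calloc(1, sizeof(*map) + elements * dsize);

    if (!map)
        return NULL;
    map->elements = elements;
    map->dsize = dsize;
    return map;
}
```

Beyond saving an allocation, this removes the partial-failure path the old code needed (members allocated but extensions not), which is exactly the cleanup visible in the init_map_ip() hunk above.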
+29 -35
net/netfilter/ipset/ip_set_bitmap_ipmac.c
··· 47 47 /* Type structure */ 48 48 struct bitmap_ipmac { 49 49 void *members; /* the set members */ 50 - void *extensions; /* MAC + data extensions */ 51 50 u32 first_ip; /* host byte order, included in range */ 52 51 u32 last_ip; /* host byte order, included in range */ 53 52 u32 elements; /* number of max elements in the set */ 54 53 size_t memsize; /* members size */ 55 54 struct timer_list gc; /* garbage collector */ 55 + unsigned char extensions[0] /* MAC + data extensions */ 56 + __aligned(__alignof__(u64)); 56 57 }; 57 58 58 59 /* ADT structure for generic function args */ 59 60 struct bitmap_ipmac_adt_elem { 61 + unsigned char ether[ETH_ALEN] __aligned(2); 60 62 u16 id; 61 - unsigned char *ether; 63 + u16 add_mac; 62 64 }; 63 65 64 66 struct bitmap_ipmac_elem { 65 67 unsigned char ether[ETH_ALEN]; 66 68 unsigned char filled; 67 - } __attribute__ ((aligned)); 69 + } __aligned(__alignof__(u64)); 68 70 69 71 static inline u32 70 72 ip_to_id(const struct bitmap_ipmac *m, u32 ip) ··· 74 72 return ip - m->first_ip; 75 73 } 76 74 77 - static inline struct bitmap_ipmac_elem * 78 - get_elem(void *extensions, u16 id, size_t dsize) 79 - { 80 - return (struct bitmap_ipmac_elem *)(extensions + id * dsize); 81 - } 75 + #define get_elem(extensions, id, dsize) \ 76 + (struct bitmap_ipmac_elem *)(extensions + (id) * (dsize)) 77 + 78 + #define get_const_elem(extensions, id, dsize) \ 79 + (const struct bitmap_ipmac_elem *)(extensions + (id) * (dsize)) 82 80 83 81 /* Common functions */ 84 82 ··· 90 88 91 89 if (!test_bit(e->id, map->members)) 92 90 return 0; 93 - elem = get_elem(map->extensions, e->id, dsize); 94 - if (elem->filled == MAC_FILLED) 95 - return !e->ether || 96 - ether_addr_equal(e->ether, elem->ether); 91 + elem = get_const_elem(map->extensions, e->id, dsize); 92 + if (e->add_mac && elem->filled == MAC_FILLED) 93 + return ether_addr_equal(e->ether, elem->ether); 97 94 /* Trigger kernel to fill out the ethernet address */ 98 95 return -EAGAIN; 99 96 } ··· 104 
103 105 104 if (!test_bit(id, map->members)) 106 105 return 0; 107 - elem = get_elem(map->extensions, id, dsize); 106 + elem = get_const_elem(map->extensions, id, dsize); 108 107 /* Timer not started for the incomplete elements */ 109 108 return elem->filled == MAC_FILLED; 110 109 } ··· 134 133 * and we can reuse it later when MAC is filled out, 135 134 * possibly by the kernel 136 135 */ 137 - if (e->ether) 136 + if (e->add_mac) 138 137 ip_set_timeout_set(timeout, t); 139 138 else 140 139 *timeout = t; ··· 151 150 elem = get_elem(map->extensions, e->id, dsize); 152 151 if (test_bit(e->id, map->members)) { 153 152 if (elem->filled == MAC_FILLED) { 154 - if (e->ether && 153 + if (e->add_mac && 155 154 (flags & IPSET_FLAG_EXIST) && 156 155 !ether_addr_equal(e->ether, elem->ether)) { 157 156 /* memcpy isn't atomic */ ··· 160 159 ether_addr_copy(elem->ether, e->ether); 161 160 } 162 161 return IPSET_ADD_FAILED; 163 - } else if (!e->ether) 162 + } else if (!e->add_mac) 164 163 /* Already added without ethernet address */ 165 164 return IPSET_ADD_FAILED; 166 165 /* Fill the MAC address and trigger the timer activation */ ··· 169 168 ether_addr_copy(elem->ether, e->ether); 170 169 elem->filled = MAC_FILLED; 171 170 return IPSET_ADD_START_STORED_TIMEOUT; 172 - } else if (e->ether) { 171 + } else if (e->add_mac) { 173 172 /* We can store MAC too */ 174 173 ether_addr_copy(elem->ether, e->ether); 175 174 elem->filled = MAC_FILLED; ··· 192 191 u32 id, size_t dsize) 193 192 { 194 193 const struct bitmap_ipmac_elem *elem = 195 - get_elem(map->extensions, id, dsize); 194 + get_const_elem(map->extensions, id, dsize); 196 195 197 196 return nla_put_ipaddr4(skb, IPSET_ATTR_IP, 198 197 htonl(map->first_ip + id)) || ··· 214 213 { 215 214 struct bitmap_ipmac *map = set->data; 216 215 ipset_adtfn adtfn = set->variant->adt[adt]; 217 - struct bitmap_ipmac_adt_elem e = { .id = 0 }; 216 + struct bitmap_ipmac_adt_elem e = { .id = 0, .add_mac = 1 }; 218 217 struct ip_set_ext ext = 
IP_SET_INIT_KEXT(skb, opt, set); 219 218 u32 ip; 220 219 ··· 232 231 return -EINVAL; 233 232 234 233 e.id = ip_to_id(map, ip); 235 - e.ether = eth_hdr(skb)->h_source; 234 + memcpy(e.ether, eth_hdr(skb)->h_source, ETH_ALEN); 236 235 237 236 return adtfn(set, &e, &ext, &opt->ext, opt->cmdflags); 238 237 } ··· 266 265 return -IPSET_ERR_BITMAP_RANGE; 267 266 268 267 e.id = ip_to_id(map, ip); 269 - if (tb[IPSET_ATTR_ETHER]) 270 - e.ether = nla_data(tb[IPSET_ATTR_ETHER]); 271 - else 272 - e.ether = NULL; 273 - 268 + if (tb[IPSET_ATTR_ETHER]) { 269 + memcpy(e.ether, nla_data(tb[IPSET_ATTR_ETHER]), ETH_ALEN); 270 + e.add_mac = 1; 271 + } 274 272 ret = adtfn(set, &e, &ext, &ext, flags); 275 273 276 274 return ip_set_eexist(ret, flags) ? 0 : ret; ··· 300 300 map->members = ip_set_alloc(map->memsize); 301 301 if (!map->members) 302 302 return false; 303 - if (set->dsize) { 304 - map->extensions = ip_set_alloc(set->dsize * elements); 305 - if (!map->extensions) { 306 - kfree(map->members); 307 - return false; 308 - } 309 - } 310 303 map->first_ip = first_ip; 311 304 map->last_ip = last_ip; 312 305 map->elements = elements; ··· 354 361 if (elements > IPSET_BITMAP_MAX_RANGE + 1) 355 362 return -IPSET_ERR_BITMAP_RANGE_SIZE; 356 363 357 - map = kzalloc(sizeof(*map), GFP_KERNEL); 364 + set->dsize = ip_set_elem_len(set, tb, 365 + sizeof(struct bitmap_ipmac_elem), 366 + __alignof__(struct bitmap_ipmac_elem)); 367 + map = ip_set_alloc(sizeof(*map) + elements * set->dsize); 358 368 if (!map) 359 369 return -ENOMEM; 360 370 361 371 map->memsize = bitmap_bytes(0, elements - 1); 362 372 set->variant = &bitmap_ipmac; 363 - set->dsize = ip_set_elem_len(set, tb, 364 - sizeof(struct bitmap_ipmac_elem)); 365 373 if (!init_map_ipmac(set, map, first_ip, last_ip, elements)) { 366 374 kfree(map); 367 375 return -ENOMEM;
+7 -11
net/netfilter/ipset/ip_set_bitmap_port.c
··· 35 35 /* Type structure */ 36 36 struct bitmap_port { 37 37 void *members; /* the set members */ 38 - void *extensions; /* data extensions */ 39 38 u16 first_port; /* host byte order, included in range */ 40 39 u16 last_port; /* host byte order, included in range */ 41 40 u32 elements; /* number of max elements in the set */ 42 41 size_t memsize; /* members size */ 43 42 struct timer_list gc; /* garbage collection */ 43 + unsigned char extensions[0] /* data extensions */ 44 + __aligned(__alignof__(u64)); 44 45 }; 45 46 46 47 /* ADT structure for generic function args */ ··· 210 209 map->members = ip_set_alloc(map->memsize); 211 210 if (!map->members) 212 211 return false; 213 - if (set->dsize) { 214 - map->extensions = ip_set_alloc(set->dsize * map->elements); 215 - if (!map->extensions) { 216 - kfree(map->members); 217 - return false; 218 - } 219 - } 220 212 map->first_port = first_port; 221 213 map->last_port = last_port; 222 214 set->timeout = IPSET_NO_TIMEOUT; ··· 226 232 { 227 233 struct bitmap_port *map; 228 234 u16 first_port, last_port; 235 + u32 elements; 229 236 230 237 if (unlikely(!ip_set_attr_netorder(tb, IPSET_ATTR_PORT) || 231 238 !ip_set_attr_netorder(tb, IPSET_ATTR_PORT_TO) || ··· 243 248 last_port = tmp; 244 249 } 245 250 246 - map = kzalloc(sizeof(*map), GFP_KERNEL); 251 + elements = last_port - first_port + 1; 252 + set->dsize = ip_set_elem_len(set, tb, 0, 0); 253 + map = ip_set_alloc(sizeof(*map) + elements * set->dsize); 247 254 if (!map) 248 255 return -ENOMEM; 249 256 250 - map->elements = last_port - first_port + 1; 257 + map->elements = elements; 251 258 map->memsize = bitmap_bytes(0, map->elements); 252 259 set->variant = &bitmap_port; 253 - set->dsize = ip_set_elem_len(set, tb, 0); 254 260 if (!init_map_port(set, map, first_port, last_port)) { 255 261 kfree(map); 256 262 return -ENOMEM;
+8 -6
net/netfilter/ipset/ip_set_core.c
··· 364 364 } 365 365 366 366 size_t 367 - ip_set_elem_len(struct ip_set *set, struct nlattr *tb[], size_t len) 367 + ip_set_elem_len(struct ip_set *set, struct nlattr *tb[], size_t len, 368 + size_t align) 368 369 { 369 370 enum ip_set_ext_id id; 370 - size_t offset = len; 371 371 u32 cadt_flags = 0; 372 372 373 373 if (tb[IPSET_ATTR_CADT_FLAGS]) 374 374 cadt_flags = ip_set_get_h32(tb[IPSET_ATTR_CADT_FLAGS]); 375 375 if (cadt_flags & IPSET_FLAG_WITH_FORCEADD) 376 376 set->flags |= IPSET_CREATE_FLAG_FORCEADD; 377 + if (!align) 378 + align = 1; 377 379 for (id = 0; id < IPSET_EXT_ID_MAX; id++) { 378 380 if (!add_extension(id, cadt_flags, tb)) 379 381 continue; 380 - offset = ALIGN(offset, ip_set_extensions[id].align); 381 - set->offset[id] = offset; 382 + len = ALIGN(len, ip_set_extensions[id].align); 383 + set->offset[id] = len; 382 384 set->extensions |= ip_set_extensions[id].type; 383 - offset += ip_set_extensions[id].len; 385 + len += ip_set_extensions[id].len; 384 386 } 385 - return offset; 387 + return ALIGN(len, align); 386 388 } 387 389 EXPORT_SYMBOL_GPL(ip_set_elem_len); 388 390
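ip_set_elem_len() now takes the element struct's alignment and rounds the final length up to it, so the `base + dsize * id` arithmetic in get_ext() can never leave a later element's extensions misaligned. The rounding is the standard power-of-two ALIGN idiom; a minimal sketch (toy names):

```c
#include <assert.h>
#include <stddef.h>

/* Round x up to the next multiple of a (a must be a power of two),
 * as in the kernel's ALIGN() macro. */
#define ALIGN_UP(x, a) (((x) + (a) - 1) & ~((size_t)(a) - 1))

/* Toy version of the element-length computation: append the extension
 * bytes, then pad to the element alignment (0 means "no requirement",
 * defaulted to 1 as in the patch). */
static size_t toy_elem_len(size_t len, size_t ext_len, size_t align)
{
    if (!align)
        align = 1;
    len += ext_len;
    return ALIGN_UP(len, align);
}
```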
+17 -9
net/netfilter/ipset/ip_set_hash_gen.h
··· 72 72 DECLARE_BITMAP(used, AHASH_MAX_TUNED); 73 73 u8 size; /* size of the array */ 74 74 u8 pos; /* position of the first free entry */ 75 - unsigned char value[0]; /* the array of the values */ 76 - } __attribute__ ((aligned)); 75 + unsigned char value[0] /* the array of the values */ 76 + __aligned(__alignof__(u64)); 77 + }; 77 78 78 79 /* The hash table: the table size stored here in order to make resizing easy */ 79 80 struct htable { ··· 476 475 mtype_expire(struct ip_set *set, struct htype *h, u8 nets_length, size_t dsize) 477 476 { 478 477 struct htable *t; 479 - struct hbucket *n; 478 + struct hbucket *n, *tmp; 480 479 struct mtype_elem *data; 481 480 u32 i, j, d; 482 481 #ifdef IP_SET_HASH_WITH_NETS ··· 511 510 } 512 511 } 513 512 if (d >= AHASH_INIT_SIZE) { 514 - struct hbucket *tmp = kzalloc(sizeof(*tmp) + 515 - (n->size - AHASH_INIT_SIZE) * dsize, 516 - GFP_ATOMIC); 513 + if (d >= n->size) { 514 + rcu_assign_pointer(hbucket(t, i), NULL); 515 + kfree_rcu(n, rcu); 516 + continue; 517 + } 518 + tmp = kzalloc(sizeof(*tmp) + 519 + (n->size - AHASH_INIT_SIZE) * dsize, 520 + GFP_ATOMIC); 517 521 if (!tmp) 518 522 /* Still try to delete expired elements */ 519 523 continue; ··· 528 522 continue; 529 523 data = ahash_data(n, j, dsize); 530 524 memcpy(tmp->value + d * dsize, data, dsize); 531 - set_bit(j, tmp->used); 525 + set_bit(d, tmp->used); 532 526 d++; 533 527 } 534 528 tmp->pos = d; ··· 1329 1323 #endif 1330 1324 set->variant = &IPSET_TOKEN(HTYPE, 4_variant); 1331 1325 set->dsize = ip_set_elem_len(set, tb, 1332 - sizeof(struct IPSET_TOKEN(HTYPE, 4_elem))); 1326 + sizeof(struct IPSET_TOKEN(HTYPE, 4_elem)), 1327 + __alignof__(struct IPSET_TOKEN(HTYPE, 4_elem))); 1333 1328 #ifndef IP_SET_PROTO_UNDEF 1334 1329 } else { 1335 1330 set->variant = &IPSET_TOKEN(HTYPE, 6_variant); 1336 1331 set->dsize = ip_set_elem_len(set, tb, 1337 - sizeof(struct IPSET_TOKEN(HTYPE, 6_elem))); 1332 + sizeof(struct IPSET_TOKEN(HTYPE, 6_elem)), 1333 + __alignof__(struct IPSET_TOKEN(HTYPE, 6_elem))); 1338 1334 } 1339 1335 #endif 1340 1336 if (tb[IPSET_ATTR_TIMEOUT]) {
+3 -2
net/netfilter/ipset/ip_set_list_set.c
··· 31 31 struct rcu_head rcu; 32 32 struct list_head list; 33 33 ip_set_id_t id; 34 - }; 34 + } __aligned(__alignof__(u64)); 35 35 36 36 struct set_adt_elem { 37 37 ip_set_id_t id; ··· 618 618 size = IP_SET_LIST_MIN_SIZE; 619 619 620 620 set->variant = &set_variant; 621 - set->dsize = ip_set_elem_len(set, tb, sizeof(struct set_elem)); 621 + set->dsize = ip_set_elem_len(set, tb, sizeof(struct set_elem), 622 + __alignof__(struct set_elem)); 622 623 if (!init_list_set(net, set, size)) 623 624 return -ENOMEM; 624 625 if (tb[IPSET_ATTR_TIMEOUT]) {
+8 -8
net/netfilter/ipvs/ip_vs_core.c
··· 1176 1176 struct ip_vs_protocol *pp; 1177 1177 struct ip_vs_proto_data *pd; 1178 1178 struct ip_vs_conn *cp; 1179 + struct sock *sk; 1179 1180 1180 1181 EnterFunction(11); 1181 1182 ··· 1184 1183 if (skb->ipvs_property) 1185 1184 return NF_ACCEPT; 1186 1185 1186 + sk = skb_to_full_sk(skb); 1187 1187 /* Bad... Do not break raw sockets */ 1188 - if (unlikely(skb->sk != NULL && hooknum == NF_INET_LOCAL_OUT && 1188 + if (unlikely(sk && hooknum == NF_INET_LOCAL_OUT && 1189 1189 af == AF_INET)) { 1190 - struct sock *sk = skb->sk; 1191 - struct inet_sock *inet = inet_sk(skb->sk); 1192 1190 1193 - if (inet && sk->sk_family == PF_INET && inet->nodefrag) 1191 + if (sk->sk_family == PF_INET && inet_sk(sk)->nodefrag) 1194 1192 return NF_ACCEPT; 1195 1193 } 1196 1194 ··· 1681 1681 struct ip_vs_conn *cp; 1682 1682 int ret, pkts; 1683 1683 int conn_reuse_mode; 1684 + struct sock *sk; 1684 1685 1685 1686 /* Already marked as IPVS request or reply? */ 1686 1687 if (skb->ipvs_property) ··· 1709 1708 ip_vs_fill_iph_skb(af, skb, false, &iph); 1710 1709 1711 1710 /* Bad... Do not break raw sockets */ 1712 - if (unlikely(skb->sk != NULL && hooknum == NF_INET_LOCAL_OUT && 1711 + sk = skb_to_full_sk(skb); 1712 + if (unlikely(sk && hooknum == NF_INET_LOCAL_OUT && 1713 1713 af == AF_INET)) { 1714 - struct sock *sk = skb->sk; 1715 - struct inet_sock *inet = inet_sk(skb->sk); 1716 1714 1717 - if (inet && sk->sk_family == PF_INET && inet->nodefrag) 1715 + if (sk->sk_family == PF_INET && inet_sk(sk)->nodefrag) 1718 1716 return NF_ACCEPT; 1719 1717 } 1720 1718
+41 -8
net/netfilter/nft_counter.c
··· 47 47 local_bh_enable(); 48 48 } 49 49 50 - static int nft_counter_dump(struct sk_buff *skb, const struct nft_expr *expr) 50 + static void nft_counter_fetch(const struct nft_counter_percpu __percpu *counter, 51 + struct nft_counter *total) 51 52 { 52 - struct nft_counter_percpu_priv *priv = nft_expr_priv(expr); 53 - struct nft_counter_percpu *cpu_stats; 54 - struct nft_counter total; 53 + const struct nft_counter_percpu *cpu_stats; 55 54 u64 bytes, packets; 56 55 unsigned int seq; 57 56 int cpu; 58 57 59 - memset(&total, 0, sizeof(total)); 58 + memset(total, 0, sizeof(*total)); 60 59 for_each_possible_cpu(cpu) { 61 - cpu_stats = per_cpu_ptr(priv->counter, cpu); 60 + cpu_stats = per_cpu_ptr(counter, cpu); 62 61 do { 63 62 seq = u64_stats_fetch_begin_irq(&cpu_stats->syncp); 64 63 bytes = cpu_stats->counter.bytes; 65 64 packets = cpu_stats->counter.packets; 66 65 } while (u64_stats_fetch_retry_irq(&cpu_stats->syncp, seq)); 67 66 68 - total.packets += packets; 69 - total.bytes += bytes; 67 + total->packets += packets; 68 + total->bytes += bytes; 70 69 } 70 + } 71 + 72 + static int nft_counter_dump(struct sk_buff *skb, const struct nft_expr *expr) 73 + { 74 + struct nft_counter_percpu_priv *priv = nft_expr_priv(expr); 75 + struct nft_counter total; 76 + 77 + nft_counter_fetch(priv->counter, &total); 71 78 72 79 if (nla_put_be64(skb, NFTA_COUNTER_BYTES, cpu_to_be64(total.bytes)) || 73 80 nla_put_be64(skb, NFTA_COUNTER_PACKETS, cpu_to_be64(total.packets))) ··· 125 118 free_percpu(priv->counter); 126 119 } 127 120 121 + static int nft_counter_clone(struct nft_expr *dst, const struct nft_expr *src) 122 + { 123 + struct nft_counter_percpu_priv *priv = nft_expr_priv(src); 124 + struct nft_counter_percpu_priv *priv_clone = nft_expr_priv(dst); 125 + struct nft_counter_percpu __percpu *cpu_stats; 126 + struct nft_counter_percpu *this_cpu; 127 + struct nft_counter total; 128 + 129 + nft_counter_fetch(priv->counter, &total); 130 + 131 + cpu_stats = __netdev_alloc_pcpu_stats(struct nft_counter_percpu, 132 + GFP_ATOMIC); 133 + if (cpu_stats == NULL) 134 + return ENOMEM; 135 + 136 + preempt_disable(); 137 + this_cpu = this_cpu_ptr(cpu_stats); 138 + this_cpu->counter.packets = total.packets; 139 + this_cpu->counter.bytes = total.bytes; 140 + preempt_enable(); 141 + 142 + priv_clone->counter = cpu_stats; 143 + return 0; 144 + } 145 + 128 146 static struct nft_expr_type nft_counter_type; 129 147 static const struct nft_expr_ops nft_counter_ops = { 130 148 .type = &nft_counter_type, ··· 158 126 .init = nft_counter_init, 159 127 .destroy = nft_counter_destroy, 160 128 .dump = nft_counter_dump, 129 + .clone = nft_counter_clone, 161 130 }; 162 131 163 132 static struct nft_expr_type nft_counter_type __read_mostly = {
+3 -2
net/netfilter/nft_dynset.c
··· 50 50 } 51 51 52 52 ext = nft_set_elem_ext(set, elem); 53 - if (priv->expr != NULL) 54 - nft_expr_clone(nft_set_ext_expr(ext), priv->expr); 53 + if (priv->expr != NULL && 54 + nft_expr_clone(nft_set_ext_expr(ext), priv->expr) < 0) 55 + return NULL; 55 56 56 57 return elem; 57 58 }
+47 -45
net/packet/af_packet.c
··· 1741 1741 kfree_rcu(po->rollover, rcu); 1742 1742 } 1743 1743 1744 + static bool packet_extra_vlan_len_allowed(const struct net_device *dev, 1745 + struct sk_buff *skb) 1746 + { 1747 + /* Earlier code assumed this would be a VLAN pkt, double-check 1748 + * this now that we have the actual packet in hand. We can only 1749 + * do this check on Ethernet devices. 1750 + */ 1751 + if (unlikely(dev->type != ARPHRD_ETHER)) 1752 + return false; 1753 + 1754 + skb_reset_mac_header(skb); 1755 + return likely(eth_hdr(skb)->h_proto == htons(ETH_P_8021Q)); 1756 + } 1757 + 1744 1758 static const struct proto_ops packet_ops; 1745 1759 1746 1760 static const struct proto_ops packet_ops_spkt; ··· 1916 1902 goto retry; 1917 1903 } 1918 1904 1919 - if (len > (dev->mtu + dev->hard_header_len + extra_len)) { 1920 - /* Earlier code assumed this would be a VLAN pkt, 1921 - * double-check this now that we have the actual 1922 - * packet in hand. 1923 - */ 1924 - struct ethhdr *ehdr; 1925 - skb_reset_mac_header(skb); 1926 - ehdr = eth_hdr(skb); 1927 - if (ehdr->h_proto != htons(ETH_P_8021Q)) { 1928 - err = -EMSGSIZE; 1929 - goto out_unlock; 1930 - } 1905 + if (len > (dev->mtu + dev->hard_header_len + extra_len) && 1906 + !packet_extra_vlan_len_allowed(dev, skb)) { 1907 + err = -EMSGSIZE; 1908 + goto out_unlock; 1931 1909 } 1932 1910 1933 1911 skb->protocol = proto; ··· 2338 2332 return false; 2339 2333 } 2340 2334 2335 + static void tpacket_set_protocol(const struct net_device *dev, 2336 + struct sk_buff *skb) 2337 + { 2338 + if (dev->type == ARPHRD_ETHER) { 2339 + skb_reset_mac_header(skb); 2340 + skb->protocol = eth_hdr(skb)->h_proto; 2341 + } 2342 + } 2343 + 2341 2344 static int tpacket_fill_skb(struct packet_sock *po, struct sk_buff *skb, 2342 2345 void *frame, struct net_device *dev, int size_max, 2343 2346 __be16 proto, unsigned char *addr, int hlen) ··· 2383 2368 skb_reserve(skb, hlen); 2384 2369 skb_reset_network_header(skb); 2385 2370 2386 - if (!packet_use_direct_xmit(po)) 2387 - skb_probe_transport_header(skb, 0); 2388 2371 if (unlikely(po->tp_tx_has_off)) { 2389 2372 int off_min, off_max, off; 2390 2373 off_min = po->tp_hdrlen - sizeof(struct sockaddr_ll); ··· 2428 2415 dev->hard_header_len); 2429 2416 if (unlikely(err)) 2430 2417 return err; 2418 + if (!skb->protocol) 2419 + tpacket_set_protocol(dev, skb); 2431 2420 2432 2421 data += dev->hard_header_len; 2433 2422 to_write -= dev->hard_header_len; ··· 2463 2448 len_max = PAGE_SIZE; 2464 2449 len = ((to_write > len_max) ? len_max : to_write); 2465 2450 } 2451 + 2452 + skb_probe_transport_header(skb, 0); 2466 2453 2467 2454 return tp_len; 2468 2455 } ··· 2510 2493 if (unlikely(!(dev->flags & IFF_UP))) 2511 2494 goto out_put; 2512 2495 2513 - reserve = dev->hard_header_len + VLAN_HLEN; 2496 + if (po->sk.sk_socket->type == SOCK_RAW) 2497 + reserve = dev->hard_header_len; 2514 2498 size_max = po->tx_ring.frame_size 2515 2499 - (po->tp_hdrlen - sizeof(struct sockaddr_ll)); 2516 2500 2517 - if (size_max > dev->mtu + reserve) 2518 - size_max = dev->mtu + reserve; 2501 + if (size_max > dev->mtu + reserve + VLAN_HLEN) 2502 + size_max = dev->mtu + reserve + VLAN_HLEN; 2519 2503 2520 2504 do { 2521 2505 ph = packet_current_frame(po, &po->tx_ring, ··· 2543 2525 tp_len = tpacket_fill_skb(po, skb, ph, dev, size_max, proto, 2544 2526 addr, hlen); 2545 2527 if (likely(tp_len >= 0) && 2546 - tp_len > dev->mtu + dev->hard_header_len) { 2547 - struct ethhdr *ehdr; 2548 - /* Earlier code assumed this would be a VLAN pkt, 2549 - * double-check this now that we have the actual 2550 - * packet in hand. 2551 - */ 2528 + tp_len > dev->mtu + reserve && 2529 + !packet_extra_vlan_len_allowed(dev, skb)) 2530 + tp_len = -EMSGSIZE; 2552 2531 2553 - skb_reset_mac_header(skb); 2554 - ehdr = eth_hdr(skb); 2555 - if (ehdr->h_proto != htons(ETH_P_8021Q)) 2556 - tp_len = -EMSGSIZE; 2557 - } 2558 2532 if (unlikely(tp_len < 0)) { 2559 2533 if (po->tp_loss) { 2560 2534 __packet_set_status(po, ph, ··· 2775 2765 2776 2766 sock_tx_timestamp(sk, &skb_shinfo(skb)->tx_flags); 2777 2767 2778 - if (!gso_type && (len > dev->mtu + reserve + extra_len)) { 2779 - /* Earlier code assumed this would be a VLAN pkt, 2780 - * double-check this now that we have the actual 2781 - * packet in hand. 2782 - */ 2783 - struct ethhdr *ehdr; 2784 - skb_reset_mac_header(skb); 2785 - ehdr = eth_hdr(skb); 2786 - if (ehdr->h_proto != htons(ETH_P_8021Q)) { 2787 - err = -EMSGSIZE; 2788 - goto out_free; 2789 - } 2768 + if (!gso_type && (len > dev->mtu + reserve + extra_len) && 2769 + !packet_extra_vlan_len_allowed(dev, skb)) { 2770 + err = -EMSGSIZE; 2771 + goto out_free; 2790 2772 } 2791 2773 2792 2774 skb->protocol = proto; ··· 2809 2807 len += vnet_hdr_len; 2810 2808 } 2811 2809 2812 - if (!packet_use_direct_xmit(po)) 2813 - skb_probe_transport_header(skb, reserve); 2810 + skb_probe_transport_header(skb, reserve); 2811 + 2814 2812 if (unlikely(extra_len == 4)) 2815 2813 skb->no_fcs = 1; 2816 2814 ··· 4109 4107 err = -EINVAL; 4110 4108 if (unlikely((int)req->tp_block_size <= 0)) 4111 4109 goto out; 4112 - if (unlikely(req->tp_block_size & (PAGE_SIZE - 1))) 4110 + if (unlikely(!PAGE_ALIGNED(req->tp_block_size))) 4113 4111 goto out; 4114 4112 if (po->tp_version >= TPACKET_V3 && 4115 4113 (int)(req->tp_block_size - ··· 4121 4119 if (unlikely(req->tp_frame_size & (TPACKET_ALIGNMENT - 1))) 4122 4120 goto out; 4123 4121 4124 - rb->frames_per_block = req->tp_block_size/req->tp_frame_size; 4125 - if (unlikely(rb->frames_per_block <= 0)) 4122 + rb->frames_per_block = req->tp_block_size / req->tp_frame_size; 4123 + if (unlikely(rb->frames_per_block == 0)) 4126 4124 goto out; 4127 4125 if (unlikely((rb->frames_per_block * req->tp_block_nr) != 4128 4126 req->tp_frame_nr))
+2 -2
net/sctp/auth.c
··· 809 809 if (!has_sha1) 810 810 return -EINVAL; 811 811 812 - memcpy(ep->auth_hmacs_list->hmac_ids, &hmacs->shmac_idents[0], 813 - hmacs->shmac_num_idents * sizeof(__u16)); 812 + for (i = 0; i < hmacs->shmac_num_idents; i++) 813 + ep->auth_hmacs_list->hmac_ids[i] = htons(hmacs->shmac_idents[i]); 814 814 ep->auth_hmacs_list->param_hdr.length = htons(sizeof(sctp_paramhdr_t) + 815 815 hmacs->shmac_num_idents * sizeof(__u16)); 816 816 return 0;
+23 -1
net/unix/af_unix.c
··· 441 441 if (state == TCP_LISTEN) 442 442 unix_release_sock(skb->sk, 1); 443 443 /* passed fds are erased in the kfree_skb hook */ 444 + UNIXCB(skb).consumed = skb->len; 444 445 kfree_skb(skb); 445 446 } 446 447 ··· 1800 1799 * this - does no harm 1801 1800 */ 1802 1801 consume_skb(newskb); 1802 + newskb = NULL; 1803 1803 } 1804 1804 1805 1805 if (skb_append_pagefrags(skb, page, offset, size)) { ··· 1813 1811 skb->truesize += size; 1814 1812 atomic_add(size, &sk->sk_wmem_alloc); 1815 1813 1816 - if (newskb) 1814 + if (newskb) { 1815 + spin_lock(&other->sk_receive_queue.lock); 1817 1816 __skb_queue_tail(&other->sk_receive_queue, newskb); 1817 + spin_unlock(&other->sk_receive_queue.lock); 1818 + } 1818 1819 1819 1820 unix_state_unlock(other); 1820 1821 mutex_unlock(&unix_sk(other)->readlock); ··· 2077 2072 2078 2073 do { 2079 2074 int chunk; 2075 + bool drop_skb; 2080 2076 struct sk_buff *skb, *last; 2081 2077 2082 2078 unix_state_lock(sk); ··· 2158 2152 } 2159 2153 2160 2154 chunk = min_t(unsigned int, unix_skb_len(skb) - skip, size); 2155 + skb_get(skb); 2161 2156 chunk = state->recv_actor(skb, skip, chunk, state); 2157 + drop_skb = !unix_skb_len(skb); 2158 + /* skb is only safe to use if !drop_skb */ 2159 + consume_skb(skb); 2162 2160 if (chunk < 0) { 2163 2161 if (copied == 0) 2164 2162 copied = -EFAULT; ··· 2170 2160 } 2171 2161 copied += chunk; 2172 2162 size -= chunk; 2163 + 2164 + if (drop_skb) { 2165 + /* the skb was touched by a concurrent reader; 2166 + * we should not expect anything from this skb 2167 + * anymore and assume it invalid - we can be 2168 + * sure it was dropped from the socket queue 2169 + * 2170 + * let's report a short read 2171 + */ 2172 + err = 0; 2173 + break; 2174 + } 2173 2175 2174 2176 /* Mark read part of skb as used */ 2175 2177 if (!(flags & MSG_PEEK)) {
+5 -2
samples/bpf/Makefile
··· 67 67 # point this to your LLVM backend with bpf support 68 68 LLC=$(srctree)/tools/bpf/llvm/bld/Debug+Asserts/bin/llc 69 69 70 + # asm/sysreg.h inline assembly used by it is incompatible with llvm. 71 + # But, there is no easy way to fix it, so just exclude it since it is 72 + # useless for BPF samples. 70 73 $(obj)/%.o: $(src)/%.c 71 74 clang $(NOSTDINC_FLAGS) $(LINUXINCLUDE) $(EXTRA_CFLAGS) \ 72 - -D__KERNEL__ -Wno-unused-value -Wno-pointer-sign \ 75 + -D__KERNEL__ -D__ASM_SYSREG_H -Wno-unused-value -Wno-pointer-sign \ 73 76 -O2 -emit-llvm -c $< -o -| $(LLC) -march=bpf -filetype=obj -o $@ 74 77 clang $(NOSTDINC_FLAGS) $(LINUXINCLUDE) $(EXTRA_CFLAGS) \ 75 78 -D__KERNEL__ -D__ASM_SYSREG_H -Wno-unused-value -Wno-pointer-sign \ 76 79 -O2 -emit-llvm -c $< -o -| $(LLC) -march=bpf -filetype=asm -o $@.s
+4 -3
tools/net/Makefile
··· 4 4 LEX = flex 5 5 YACC = bison 6 6 7 + CFLAGS += -Wall -O2 8 + CFLAGS += -D__EXPORTED_HEADERS__ -I../../include/uapi -I../../include 9 + 7 10 %.yacc.c: %.y 8 11 $(YACC) -o $@ -d $< 9 12 ··· 15 12 16 13 all : bpf_jit_disasm bpf_dbg bpf_asm 17 14 18 - bpf_jit_disasm : CFLAGS = -Wall -O2 -DPACKAGE='bpf_jit_disasm' 15 + bpf_jit_disasm : CFLAGS += -DPACKAGE='bpf_jit_disasm' 19 16 bpf_jit_disasm : LDLIBS = -lopcodes -lbfd -ldl 20 17 bpf_jit_disasm : bpf_jit_disasm.o 21 18 22 - bpf_dbg : CFLAGS = -Wall -O2 23 19 bpf_dbg : LDLIBS = -lreadline 24 20 bpf_dbg : bpf_dbg.o 25 21 26 - bpf_asm : CFLAGS = -Wall -O2 -I. 27 22 bpf_asm : LDLIBS = 28 23 bpf_asm : bpf_asm.o bpf_exp.yacc.o bpf_exp.lex.o 29 24 bpf_exp.lex.o : bpf_exp.yacc.c