
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Pull networking fixes from David Miller:

1) e1000e computes header length incorrectly wrt vlans, fix from Vlad
Yasevich.

2) ns_capable() check in sock_diag netlink code, from Andrew
Lutomirski.

3) Fix invalid queue pairs handling in virtio_net, from Amos Kong.

4) Checksum offloading busted in sxgbe driver due to incorrect
descriptor layout, fix from Byungho An.

5) Fix build failure with SMC_DEBUG set to 2 or larger, from Zi Shen
Lim.

6) Fix uninitialized A and X registers in BPF interpreter, from Alexei
Starovoitov.

7) Fix arch dependencies of Cadence driver.

8) Fix netlink capabilities checking tree-wide, from Eric W. Biederman.

9) Don't dump IFLA_VF_PORTS if netlink request didn't ask for it in
IFLA_EXT_MASK, from David Gibson.

10) IPv6 FIB dump restart doesn't handle table changes that happen
meanwhile, causing the code to loop forever or emit dups, fix from
Kumar Sundararajan.

11) Memory leak on VF removal in bnx2x, from Yuval Mintz.

12) Bug fixes for new Altera TSE driver from Vince Bridgers.

13) Fix route lookup key in SCTP, from Xugeng Zhang.

14) Use BH blocking spinlocks in SLIP, as per a similar fix to CAN/SLCAN
driver. From Oliver Hartkopp.

15) TCP doesn't bump retransmit counters in some code paths, fix from
Eric Dumazet.

16) Clamp delayed_ack in tcp_cubic to prevent theoretical divides by
zero. Fix from Liu Yu.

17) Fix locking imbalance in error paths of HHF packet scheduler, from
John Fastabend.

18) Properly reference the transport module when vsock_core_init() runs,
from Andy King.

19) Fix buffer overflow in cdc_ncm driver, from Bjørn Mork.

20) IP_ECN_decapsulate() doesn't see a correct SKB network header in
ip_tunnel_rcv(), fix from Ying Cai.

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (132 commits)
net: macb: Fix race between HW and driver
net: macb: Remove 'unlikely' optimization
net: macb: Re-enable RX interrupt only when RX is done
net: macb: Clear interrupt flags
net: macb: Pass same size to DMA_UNMAP as used for DMA_MAP
ip_tunnel: Set network header properly for IP_ECN_decapsulate()
e1000e: Restrict MDIO Slow Mode workaround to relevant parts
e1000e: Fix issue with link flap on 82579
e1000e: Expand workaround for 10Mb HD throughput bug
e1000e: Workaround for dropped packets in Gig/100 speeds on 82579
net/mlx4_core: Don't issue PCIe speed/width checks for VFs
net/mlx4_core: Load the Eth driver first
net/mlx4_core: Fix slave id computation for single port VF
net/mlx4_core: Adjust port number in qp_attach wrapper when detaching
net: cdc_ncm: fix buffer overflow
Altera TSE: ALTERA_TSE should depend on HAS_DMA
vsock: Make transport the proto owner
net: sched: lock imbalance in hhf qdisc
net: mvmdio: Check for a valid interrupt instead of an error
net phy: Check for aneg completion before setting state to PHY_RUNNING
...

+1498 -1045
+10 -2
Documentation/devicetree/bindings/net/arc_emac.txt
···
 - compatible: Should be "snps,arc-emac"
 - reg: Address and length of the register set for the device
 - interrupts: Should contain the EMAC interrupts
-- clock-frequency: CPU frequency. It is needed to calculate and set polling
-  period of EMAC.
 - max-speed: see ethernet.txt file in the same directory.
 - phy: see ethernet.txt file in the same directory.
+
+Clock handling:
+The clock frequency is needed to calculate and set polling period of EMAC.
+It must be provided by one of:
+- clock-frequency: CPU frequency.
+- clocks: reference to the clock supplying the EMAC.
 
 Child nodes of the driver are the individual PHY devices connected to the
 MDIO bus. They must have a "reg" property given the PHY address on the MDIO bus.
···
 	reg = <0xc0fc2000 0x3c>;
 	interrupts = <6>;
 	mac-address = [ 00 11 22 33 44 55 ];
+
 	clock-frequency = <80000000>;
+	/* or */
+	clocks = <&emac_clock>;
+
 	max-speed = <100>;
 	phy = <&phy0>;
+1 -1
Documentation/networking/scaling.txt
···
 (therbert@google.com)
 
 Accelerated RFS was introduced in 2.6.35. Original patches were
-submitted by Ben Hutchings (bhutchings@solarflare.com)
+submitted by Ben Hutchings (bwh@kernel.org)
 
 Authors:
 Tom Herbert (therbert@google.com)
-2
MAINTAINERS
···
 RALINK RT2X00 WIRELESS LAN DRIVER
 P:	rt2x00 project
 M:	Ivo van Doorn <IvDoorn@gmail.com>
-M:	Gertjan van Wingerde <gwingerde@gmail.com>
 M:	Helmut Schaa <helmut.schaa@googlemail.com>
 L:	linux-wireless@vger.kernel.org
 L:	users@rt2x00.serialmonkey.com (moderated for non-subscribers)
···
 SAMSUNG SXGBE DRIVERS
 M:	Byungho An <bh74.an@samsung.com>
 M:	Girish K S <ks.giri@samsung.com>
-M:	Siva Reddy Kallam <siva.kallam@samsung.com>
 M:	Vipul Pandya <vipul.pandya@samsung.com>
 S:	Supported
 L:	netdev@vger.kernel.org
+1 -1
crypto/crypto_user.c
···
 	type -= CRYPTO_MSG_BASE;
 	link = &crypto_dispatch[type];
 
-	if (!capable(CAP_NET_ADMIN))
+	if (!netlink_capable(skb, CAP_NET_ADMIN))
 		return -EPERM;
 
 	if ((type == (CRYPTO_MSG_GETALG - CRYPTO_MSG_BASE) &&
+2
drivers/bluetooth/ath3k.c
···
 	{ USB_DEVICE(0x04CA, 0x3004) },
 	{ USB_DEVICE(0x04CA, 0x3005) },
 	{ USB_DEVICE(0x04CA, 0x3006) },
+	{ USB_DEVICE(0x04CA, 0x3007) },
 	{ USB_DEVICE(0x04CA, 0x3008) },
 	{ USB_DEVICE(0x04CA, 0x300b) },
 	{ USB_DEVICE(0x0930, 0x0219) },
···
 	{ USB_DEVICE(0x04ca, 0x3004), .driver_info = BTUSB_ATH3012 },
 	{ USB_DEVICE(0x04ca, 0x3005), .driver_info = BTUSB_ATH3012 },
 	{ USB_DEVICE(0x04ca, 0x3006), .driver_info = BTUSB_ATH3012 },
+	{ USB_DEVICE(0x04ca, 0x3007), .driver_info = BTUSB_ATH3012 },
 	{ USB_DEVICE(0x04ca, 0x3008), .driver_info = BTUSB_ATH3012 },
 	{ USB_DEVICE(0x04ca, 0x300b), .driver_info = BTUSB_ATH3012 },
 	{ USB_DEVICE(0x0930, 0x0219), .driver_info = BTUSB_ATH3012 },
+2 -3
drivers/bluetooth/btusb.c
···
 	{ USB_DEVICE(0x04ca, 0x3004), .driver_info = BTUSB_ATH3012 },
 	{ USB_DEVICE(0x04ca, 0x3005), .driver_info = BTUSB_ATH3012 },
 	{ USB_DEVICE(0x04ca, 0x3006), .driver_info = BTUSB_ATH3012 },
+	{ USB_DEVICE(0x04ca, 0x3007), .driver_info = BTUSB_ATH3012 },
 	{ USB_DEVICE(0x04ca, 0x3008), .driver_info = BTUSB_ATH3012 },
 	{ USB_DEVICE(0x04ca, 0x300b), .driver_info = BTUSB_ATH3012 },
 	{ USB_DEVICE(0x0930, 0x0219), .driver_info = BTUSB_ATH3012 },
···
 	if (id->driver_info & BTUSB_BCM92035)
 		hdev->setup = btusb_setup_bcm92035;
 
-	if (id->driver_info & BTUSB_INTEL) {
-		usb_enable_autosuspend(data->udev);
+	if (id->driver_info & BTUSB_INTEL)
 		hdev->setup = btusb_setup_intel;
-	}
 
 	/* Interface numbers are hardcoded in the specification */
 	data->isoc = usb_ifnum_to_if(data->udev, 1);
+1 -1
drivers/connector/cn_proc.c
···
 		return;
 
 	/* Can only change if privileged. */
-	if (!capable(CAP_NET_ADMIN)) {
+	if (!__netlink_ns_capable(nsp, &init_user_ns, CAP_NET_ADMIN)) {
 		err = EPERM;
 		goto out;
 	}
+1 -1
drivers/isdn/hisax/icc.c
···
 			if (cs->debug & L1_DEB_MONITOR)
 				debugl1(cs, "ICC %02x -> MOX1", cs->dc.icc.mon_tx[cs->dc.icc.mon_txp - 1]);
 		}
-	AfterMOX1:
+	AfterMOX1: ;
 #endif
 	}
 }
+1 -1
drivers/net/bonding/bond_sysfs.c
···
 {
 	struct bonding *bond = to_bond(d);
 
-	return sprintf(buf, "%d\n", bond->params.min_links);
+	return sprintf(buf, "%u\n", bond->params.min_links);
 }
 
 static ssize_t bonding_store_min_links(struct device *d,
+7
drivers/net/can/c_can/Kconfig
···
 	  SPEAr1310 and SPEAr320 evaluation boards & TI (www.ti.com)
 	  boards like am335x, dm814x, dm813x and dm811x.
 
+config CAN_C_CAN_STRICT_FRAME_ORDERING
+	bool "Force a strict RX CAN frame order (may cause frame loss)"
+	---help---
+	  The RX split buffer prevents packet reordering but can cause packet
+	  loss. Only enable this option when you accept to lose CAN frames
+	  in favour of getting the received CAN frames in the correct order.
+
 config CAN_C_CAN_PCI
 	tristate "Generic PCI Bus based C_CAN/D_CAN driver"
 	depends on PCI
+309 -357
drivers/net/can/c_can/c_can.c
···
 #define CONTROL_IE		BIT(1)
 #define CONTROL_INIT		BIT(0)
 
+#define CONTROL_IRQMSK		(CONTROL_EIE | CONTROL_IE | CONTROL_SIE)
+
 /* test register */
 #define TEST_RX		BIT(7)
 #define TEST_TX1	BIT(6)
···
 #define IF_COMM_CONTROL		BIT(4)
 #define IF_COMM_CLR_INT_PND	BIT(3)
 #define IF_COMM_TXRQST		BIT(2)
+#define IF_COMM_CLR_NEWDAT	IF_COMM_TXRQST
 #define IF_COMM_DATAA		BIT(1)
 #define IF_COMM_DATAB		BIT(0)
-#define IF_COMM_ALL		(IF_COMM_MASK | IF_COMM_ARB | \
-				 IF_COMM_CONTROL | IF_COMM_TXRQST | \
-				 IF_COMM_DATAA | IF_COMM_DATAB)
+
+/* TX buffer setup */
+#define IF_COMM_TX		(IF_COMM_ARB | IF_COMM_CONTROL | \
+				 IF_COMM_TXRQST | \
+				 IF_COMM_DATAA | IF_COMM_DATAB)
 
 /* For the low buffers we clear the interrupt bit, but keep newdat */
 #define IF_COMM_RCV_LOW		(IF_COMM_MASK | IF_COMM_ARB | \
···
 				 IF_COMM_DATAA | IF_COMM_DATAB)
 
 /* For the high buffers we clear the interrupt bit and newdat */
-#define IF_COMM_RCV_HIGH	(IF_COMM_RCV_LOW | IF_COMM_TXRQST)
+#define IF_COMM_RCV_HIGH	(IF_COMM_RCV_LOW | IF_COMM_CLR_NEWDAT)
+
+/* Receive setup of message objects */
+#define IF_COMM_RCV_SETUP	(IF_COMM_MASK | IF_COMM_ARB | IF_COMM_CONTROL)
+
+/* Invalidation of message objects */
+#define IF_COMM_INVAL		(IF_COMM_ARB | IF_COMM_CONTROL)
 
 /* IFx arbitration */
-#define IF_ARB_MSGVAL		BIT(15)
-#define IF_ARB_MSGXTD		BIT(14)
-#define IF_ARB_TRANSMIT		BIT(13)
+#define IF_ARB_MSGVAL		BIT(31)
+#define IF_ARB_MSGXTD		BIT(30)
+#define IF_ARB_TRANSMIT		BIT(29)
 
 /* IFx message control */
 #define IF_MCONT_NEWDAT		BIT(15)
···
 #define IF_MCONT_EOB		BIT(7)
 #define IF_MCONT_DLC_MASK	0xf
 
+#define IF_MCONT_RCV		(IF_MCONT_RXIE | IF_MCONT_UMASK)
+#define IF_MCONT_RCV_EOB	(IF_MCONT_RCV | IF_MCONT_EOB)
+
+#define IF_MCONT_TX		(IF_MCONT_TXIE | IF_MCONT_EOB)
+
 /*
  * Use IF1 for RX and IF2 for TX
  */
 #define IF_RX			0
 #define IF_TX			1
-
-/* status interrupt */
-#define STATUS_INTERRUPT	0x8000
-
-/* global interrupt masks */
-#define ENABLE_ALL_INTERRUPTS	1
-#define DISABLE_ALL_INTERRUPTS	0
 
 /* minimum timeout for checking BUSY status */
 #define MIN_TIMEOUT_VALUE	6
···
 	LEC_BIT0_ERROR,
 	LEC_CRC_ERROR,
 	LEC_UNUSED,
+	LEC_MASK = LEC_UNUSED,
 };
 
 /*
···
 		priv->raminit(priv, enable);
 }
 
-static inline int get_tx_next_msg_obj(const struct c_can_priv *priv)
+static void c_can_irq_control(struct c_can_priv *priv, bool enable)
 {
-	return (priv->tx_next & C_CAN_NEXT_MSG_OBJ_MASK) +
-			C_CAN_MSG_OBJ_TX_FIRST;
-}
-
-static inline int get_tx_echo_msg_obj(int txecho)
-{
-	return (txecho & C_CAN_NEXT_MSG_OBJ_MASK) + C_CAN_MSG_OBJ_TX_FIRST;
-}
-
-static u32 c_can_read_reg32(struct c_can_priv *priv, enum reg index)
-{
-	u32 val = priv->read_reg(priv, index);
-	val |= ((u32) priv->read_reg(priv, index + 1)) << 16;
-	return val;
-}
-
-static void c_can_enable_all_interrupts(struct c_can_priv *priv,
-						int enable)
-{
-	unsigned int cntrl_save = priv->read_reg(priv,
-						C_CAN_CTRL_REG);
+	u32 ctrl = priv->read_reg(priv, C_CAN_CTRL_REG) & ~CONTROL_IRQMSK;
 
 	if (enable)
-		cntrl_save |= (CONTROL_SIE | CONTROL_EIE | CONTROL_IE);
-	else
-		cntrl_save &= ~(CONTROL_EIE | CONTROL_IE | CONTROL_SIE);
+		ctrl |= CONTROL_IRQMSK;
 
-	priv->write_reg(priv, C_CAN_CTRL_REG, cntrl_save);
+	priv->write_reg(priv, C_CAN_CTRL_REG, ctrl);
 }
 
-static inline int c_can_msg_obj_is_busy(struct c_can_priv *priv, int iface)
+static void c_can_obj_update(struct net_device *dev, int iface, u32 cmd, u32 obj)
 {
-	int count = MIN_TIMEOUT_VALUE;
+	struct c_can_priv *priv = netdev_priv(dev);
+	int cnt, reg = C_CAN_IFACE(COMREQ_REG, iface);
 
-	while (count && priv->read_reg(priv,
-				C_CAN_IFACE(COMREQ_REG, iface)) &
-				IF_COMR_BUSY) {
-		count--;
+	priv->write_reg(priv, reg + 1, cmd);
+	priv->write_reg(priv, reg, obj);
+
+	for (cnt = MIN_TIMEOUT_VALUE; cnt; cnt--) {
+		if (!(priv->read_reg(priv, reg) & IF_COMR_BUSY))
+			return;
 		udelay(1);
 	}
+	netdev_err(dev, "Updating object timed out\n");
 
-	if (!count)
-		return 1;
-
-	return 0;
 }
 
-static inline void c_can_object_get(struct net_device *dev,
-					int iface, int objno, int mask)
+static inline void c_can_object_get(struct net_device *dev, int iface,
+				    u32 obj, u32 cmd)
+{
+	c_can_obj_update(dev, iface, cmd, obj);
+}
+
+static inline void c_can_object_put(struct net_device *dev, int iface,
+				    u32 obj, u32 cmd)
+{
+	c_can_obj_update(dev, iface, cmd | IF_COMM_WR, obj);
+}
+
+/*
+ * Note: According to documentation clearing TXIE while MSGVAL is set
+ * is not allowed, but works nicely on C/DCAN. And that lowers the I/O
+ * load significantly.
+ */
+static void c_can_inval_tx_object(struct net_device *dev, int iface, int obj)
 {
 	struct c_can_priv *priv = netdev_priv(dev);
 
-	/*
-	 * As per specs, after writting the message object number in the
-	 * IF command request register the transfer b/w interface
-	 * register and message RAM must be complete in 6 CAN-CLK
-	 * period.
-	 */
-	priv->write_reg(priv, C_CAN_IFACE(COMMSK_REG, iface),
-			IFX_WRITE_LOW_16BIT(mask));
-	priv->write_reg(priv, C_CAN_IFACE(COMREQ_REG, iface),
-			IFX_WRITE_LOW_16BIT(objno));
-
-	if (c_can_msg_obj_is_busy(priv, iface))
-		netdev_err(dev, "timed out in object get\n");
+	priv->write_reg(priv, C_CAN_IFACE(MSGCTRL_REG, iface), 0);
+	c_can_object_put(dev, iface, obj, IF_COMM_INVAL);
 }
 
-static inline void c_can_object_put(struct net_device *dev,
-					int iface, int objno, int mask)
+static void c_can_inval_msg_object(struct net_device *dev, int iface, int obj)
 {
 	struct c_can_priv *priv = netdev_priv(dev);
 
-	/*
-	 * As per specs, after writting the message object number in the
-	 * IF command request register the transfer b/w interface
-	 * register and message RAM must be complete in 6 CAN-CLK
-	 * period.
-	 */
-	priv->write_reg(priv, C_CAN_IFACE(COMMSK_REG, iface),
-			(IF_COMM_WR | IFX_WRITE_LOW_16BIT(mask)));
-	priv->write_reg(priv, C_CAN_IFACE(COMREQ_REG, iface),
-			IFX_WRITE_LOW_16BIT(objno));
-
-	if (c_can_msg_obj_is_busy(priv, iface))
-		netdev_err(dev, "timed out in object put\n");
+	priv->write_reg(priv, C_CAN_IFACE(ARB1_REG, iface), 0);
+	priv->write_reg(priv, C_CAN_IFACE(ARB2_REG, iface), 0);
+	c_can_inval_tx_object(dev, iface, obj);
 }
 
-static void c_can_write_msg_object(struct net_device *dev,
-			int iface, struct can_frame *frame, int objno)
+static void c_can_setup_tx_object(struct net_device *dev, int iface,
+				  struct can_frame *frame, int idx)
 {
+	struct c_can_priv *priv = netdev_priv(dev);
+	u16 ctrl = IF_MCONT_TX | frame->can_dlc;
+	bool rtr = frame->can_id & CAN_RTR_FLAG;
+	u32 arb = IF_ARB_MSGVAL;
 	int i;
-	u16 flags = 0;
-	unsigned int id;
-	struct c_can_priv *priv = netdev_priv(dev);
-
-	if (!(frame->can_id & CAN_RTR_FLAG))
-		flags |= IF_ARB_TRANSMIT;
 
 	if (frame->can_id & CAN_EFF_FLAG) {
-		id = frame->can_id & CAN_EFF_MASK;
-		flags |= IF_ARB_MSGXTD;
-	} else
-		id = ((frame->can_id & CAN_SFF_MASK) << 18);
+		arb |= frame->can_id & CAN_EFF_MASK;
+		arb |= IF_ARB_MSGXTD;
+	} else {
+		arb |= (frame->can_id & CAN_SFF_MASK) << 18;
+	}
 
-	flags |= IF_ARB_MSGVAL;
+	if (!rtr)
+		arb |= IF_ARB_TRANSMIT;
 
-	priv->write_reg(priv, C_CAN_IFACE(ARB1_REG, iface),
-				IFX_WRITE_LOW_16BIT(id));
-	priv->write_reg(priv, C_CAN_IFACE(ARB2_REG, iface), flags |
-				IFX_WRITE_HIGH_16BIT(id));
+	/*
+	 * If we change the DIR bit, we need to invalidate the buffer
+	 * first, i.e. clear the MSGVAL flag in the arbiter.
+	 */
+	if (rtr != (bool)test_bit(idx, &priv->tx_dir)) {
+		u32 obj = idx + C_CAN_MSG_OBJ_TX_FIRST;
+
+		c_can_inval_msg_object(dev, iface, obj);
+		change_bit(idx, &priv->tx_dir);
+	}
+
+	priv->write_reg(priv, C_CAN_IFACE(ARB1_REG, iface), arb);
+	priv->write_reg(priv, C_CAN_IFACE(ARB2_REG, iface), arb >> 16);
+
+	priv->write_reg(priv, C_CAN_IFACE(MSGCTRL_REG, iface), ctrl);
 
 	for (i = 0; i < frame->can_dlc; i += 2) {
 		priv->write_reg(priv, C_CAN_IFACE(DATA1_REG, iface) + i / 2,
 				frame->data[i] | (frame->data[i + 1] << 8));
 	}
-
-	/* enable interrupt for this message object */
-	priv->write_reg(priv, C_CAN_IFACE(MSGCTRL_REG, iface),
-			IF_MCONT_TXIE | IF_MCONT_TXRQST | IF_MCONT_EOB |
-			frame->can_dlc);
-	c_can_object_put(dev, iface, objno, IF_COMM_ALL);
 }
 
 static inline void c_can_activate_all_lower_rx_msg_obj(struct net_device *dev,
-						int iface,
-						int ctrl_mask)
+						       int iface)
 {
 	int i;
-	struct c_can_priv *priv = netdev_priv(dev);
 
-	for (i = C_CAN_MSG_OBJ_RX_FIRST; i <= C_CAN_MSG_RX_LOW_LAST; i++) {
-		priv->write_reg(priv, C_CAN_IFACE(MSGCTRL_REG, iface),
-				ctrl_mask & ~IF_MCONT_NEWDAT);
-		c_can_object_put(dev, iface, i, IF_COMM_CONTROL);
-	}
+	for (i = C_CAN_MSG_OBJ_RX_FIRST; i <= C_CAN_MSG_RX_LOW_LAST; i++)
+		c_can_object_get(dev, iface, i, IF_COMM_CLR_NEWDAT);
 }
 
 static int c_can_handle_lost_msg_obj(struct net_device *dev,
···
 	priv->write_reg(priv, C_CAN_IFACE(MSGCTRL_REG, iface), ctrl);
 	c_can_object_put(dev, iface, objno, IF_COMM_CONTROL);
 
+	stats->rx_errors++;
+	stats->rx_over_errors++;
+
 	/* create an error msg */
 	skb = alloc_can_err_skb(dev, &frame);
 	if (unlikely(!skb))
···
 	frame->can_id |= CAN_ERR_CRTL;
 	frame->data[1] = CAN_ERR_CRTL_RX_OVERFLOW;
-	stats->rx_errors++;
-	stats->rx_over_errors++;
 
 	netif_receive_skb(skb);
 	return 1;
 }
 
-static int c_can_read_msg_object(struct net_device *dev, int iface, int ctrl)
+static int c_can_read_msg_object(struct net_device *dev, int iface, u32 ctrl)
 {
-	u16 flags, data;
-	int i;
-	unsigned int val;
-	struct c_can_priv *priv = netdev_priv(dev);
 	struct net_device_stats *stats = &dev->stats;
-	struct sk_buff *skb;
+	struct c_can_priv *priv = netdev_priv(dev);
 	struct can_frame *frame;
+	struct sk_buff *skb;
+	u32 arb, data;
 
 	skb = alloc_can_skb(dev, &frame);
 	if (!skb) {
···
 	frame->can_dlc = get_can_dlc(ctrl & 0x0F);
 
-	flags =	priv->read_reg(priv, C_CAN_IFACE(ARB2_REG, iface));
-	val = priv->read_reg(priv, C_CAN_IFACE(ARB1_REG, iface)) |
-		(flags << 16);
+	arb = priv->read_reg(priv, C_CAN_IFACE(ARB1_REG, iface));
+	arb |= priv->read_reg(priv, C_CAN_IFACE(ARB2_REG, iface)) << 16;
 
-	if (flags & IF_ARB_MSGXTD)
-		frame->can_id = (val & CAN_EFF_MASK) | CAN_EFF_FLAG;
+	if (arb & IF_ARB_MSGXTD)
+		frame->can_id = (arb & CAN_EFF_MASK) | CAN_EFF_FLAG;
 	else
-		frame->can_id = (val >> 18) & CAN_SFF_MASK;
+		frame->can_id = (arb >> 18) & CAN_SFF_MASK;
 
-	if (flags & IF_ARB_TRANSMIT)
+	if (arb & IF_ARB_TRANSMIT) {
 		frame->can_id |= CAN_RTR_FLAG;
-	else {
-		for (i = 0; i < frame->can_dlc; i += 2) {
-			data = priv->read_reg(priv,
-				C_CAN_IFACE(DATA1_REG, iface) + i / 2);
+	} else {
+		int i, dreg = C_CAN_IFACE(DATA1_REG, iface);
+
+		for (i = 0; i < frame->can_dlc; i += 2, dreg ++) {
+			data = priv->read_reg(priv, dreg);
 			frame->data[i] = data;
 			frame->data[i + 1] = data >> 8;
 		}
 	}
 
-	netif_receive_skb(skb);
-
 	stats->rx_packets++;
 	stats->rx_bytes += frame->can_dlc;
+
+	netif_receive_skb(skb);
 	return 0;
 }
 
 static void c_can_setup_receive_object(struct net_device *dev, int iface,
-					int objno, unsigned int mask,
-					unsigned int id, unsigned int mcont)
+				       u32 obj, u32 mask, u32 id, u32 mcont)
 {
 	struct c_can_priv *priv = netdev_priv(dev);
 
-	priv->write_reg(priv, C_CAN_IFACE(MASK1_REG, iface),
-			IFX_WRITE_LOW_16BIT(mask));
+	mask |= BIT(29);
+	priv->write_reg(priv, C_CAN_IFACE(MASK1_REG, iface), mask);
+	priv->write_reg(priv, C_CAN_IFACE(MASK2_REG, iface), mask >> 16);
 
-	/* According to C_CAN documentation, the reserved bit
-	 * in IFx_MASK2 register is fixed 1
-	 */
-	priv->write_reg(priv, C_CAN_IFACE(MASK2_REG, iface),
-			IFX_WRITE_HIGH_16BIT(mask) | BIT(13));
-
-	priv->write_reg(priv, C_CAN_IFACE(ARB1_REG, iface),
-			IFX_WRITE_LOW_16BIT(id));
-	priv->write_reg(priv, C_CAN_IFACE(ARB2_REG, iface),
-			(IF_ARB_MSGVAL | IFX_WRITE_HIGH_16BIT(id)));
+	id |= IF_ARB_MSGVAL;
+	priv->write_reg(priv, C_CAN_IFACE(ARB1_REG, iface), id);
+	priv->write_reg(priv, C_CAN_IFACE(ARB2_REG, iface), id >> 16);
 
 	priv->write_reg(priv, C_CAN_IFACE(MSGCTRL_REG, iface), mcont);
-	c_can_object_put(dev, iface, objno, IF_COMM_ALL & ~IF_COMM_TXRQST);
-
-	netdev_dbg(dev, "obj no:%d, msgval:0x%08x\n", objno,
-			c_can_read_reg32(priv, C_CAN_MSGVAL1_REG));
-}
-
-static void c_can_inval_msg_object(struct net_device *dev, int iface, int objno)
-{
-	struct c_can_priv *priv = netdev_priv(dev);
-
-	priv->write_reg(priv, C_CAN_IFACE(ARB1_REG, iface), 0);
-	priv->write_reg(priv, C_CAN_IFACE(ARB2_REG, iface), 0);
-	priv->write_reg(priv, C_CAN_IFACE(MSGCTRL_REG, iface), 0);
-
-	c_can_object_put(dev, iface, objno, IF_COMM_ARB | IF_COMM_CONTROL);
-
-	netdev_dbg(dev, "obj no:%d, msgval:0x%08x\n", objno,
-			c_can_read_reg32(priv, C_CAN_MSGVAL1_REG));
-}
-
-static inline int c_can_is_next_tx_obj_busy(struct c_can_priv *priv, int objno)
-{
-	int val = c_can_read_reg32(priv, C_CAN_TXRQST1_REG);
-
-	/*
-	 * as transmission request register's bit n-1 corresponds to
-	 * message object n, we need to handle the same properly.
-	 */
-	if (val & (1 << (objno - 1)))
-		return 1;
-
-	return 0;
+	c_can_object_put(dev, iface, obj, IF_COMM_RCV_SETUP);
 }
 
 static netdev_tx_t c_can_start_xmit(struct sk_buff *skb,
-					struct net_device *dev)
+				    struct net_device *dev)
 {
-	u32 msg_obj_no;
-	struct c_can_priv *priv = netdev_priv(dev);
 	struct can_frame *frame = (struct can_frame *)skb->data;
+	struct c_can_priv *priv = netdev_priv(dev);
+	u32 idx, obj;
 
 	if (can_dropped_invalid_skb(dev, skb))
 		return NETDEV_TX_OK;
-
-	spin_lock_bh(&priv->xmit_lock);
-	msg_obj_no = get_tx_next_msg_obj(priv);
-
-	/* prepare message object for transmission */
-	c_can_write_msg_object(dev, IF_TX, frame, msg_obj_no);
-	priv->dlc[msg_obj_no - C_CAN_MSG_OBJ_TX_FIRST] = frame->can_dlc;
-	can_put_echo_skb(skb, dev, msg_obj_no - C_CAN_MSG_OBJ_TX_FIRST);
-
 	/*
-	 * we have to stop the queue in case of a wrap around or
-	 * if the next TX message object is still in use
+	 * This is not a FIFO. C/D_CAN sends out the buffers
+	 * prioritized. The lowest buffer number wins.
 	 */
-	priv->tx_next++;
-	if (c_can_is_next_tx_obj_busy(priv, get_tx_next_msg_obj(priv)) ||
-			(priv->tx_next & C_CAN_NEXT_MSG_OBJ_MASK) == 0)
+	idx = fls(atomic_read(&priv->tx_active));
+	obj = idx + C_CAN_MSG_OBJ_TX_FIRST;
+
+	/* If this is the last buffer, stop the xmit queue */
+	if (idx == C_CAN_MSG_OBJ_TX_NUM - 1)
 		netif_stop_queue(dev);
-	spin_unlock_bh(&priv->xmit_lock);
+	/*
+	 * Store the message in the interface so we can call
+	 * can_put_echo_skb(). We must do this before we enable
+	 * transmit as we might race against do_tx().
+	 */
+	c_can_setup_tx_object(dev, IF_TX, frame, idx);
+	priv->dlc[idx] = frame->can_dlc;
+	can_put_echo_skb(skb, dev, idx);
+
+	/* Update the active bits */
+	atomic_add((1 << idx), &priv->tx_active);
+	/* Start transmission */
+	c_can_object_put(dev, IF_TX, obj, IF_COMM_TX);
 
 	return NETDEV_TX_OK;
 }
···
 	/* setup receive message objects */
 	for (i = C_CAN_MSG_OBJ_RX_FIRST; i < C_CAN_MSG_OBJ_RX_LAST; i++)
-		c_can_setup_receive_object(dev, IF_RX, i, 0, 0,
-			(IF_MCONT_RXIE | IF_MCONT_UMASK) & ~IF_MCONT_EOB);
+		c_can_setup_receive_object(dev, IF_RX, i, 0, 0, IF_MCONT_RCV);
 
 	c_can_setup_receive_object(dev, IF_RX, C_CAN_MSG_OBJ_RX_LAST, 0, 0,
-			IF_MCONT_EOB | IF_MCONT_RXIE | IF_MCONT_UMASK);
+				   IF_MCONT_RCV_EOB);
 }
 
 /*
···
 	struct c_can_priv *priv = netdev_priv(dev);
 
 	/* enable automatic retransmission */
-	priv->write_reg(priv, C_CAN_CTRL_REG,
-			CONTROL_ENABLE_AR);
+	priv->write_reg(priv, C_CAN_CTRL_REG, CONTROL_ENABLE_AR);
 
 	if ((priv->can.ctrlmode & CAN_CTRLMODE_LISTENONLY) &&
 	    (priv->can.ctrlmode & CAN_CTRLMODE_LOOPBACK)) {
 		/* loopback + silent mode : useful for hot self-test */
-		priv->write_reg(priv, C_CAN_CTRL_REG, CONTROL_EIE |
-				CONTROL_SIE | CONTROL_IE | CONTROL_TEST);
-		priv->write_reg(priv, C_CAN_TEST_REG,
-				TEST_LBACK | TEST_SILENT);
+		priv->write_reg(priv, C_CAN_CTRL_REG, CONTROL_TEST);
+		priv->write_reg(priv, C_CAN_TEST_REG, TEST_LBACK | TEST_SILENT);
 	} else if (priv->can.ctrlmode & CAN_CTRLMODE_LOOPBACK) {
 		/* loopback mode : useful for self-test function */
-		priv->write_reg(priv, C_CAN_CTRL_REG, CONTROL_EIE |
-				CONTROL_SIE | CONTROL_IE | CONTROL_TEST);
+		priv->write_reg(priv, C_CAN_CTRL_REG, CONTROL_TEST);
 		priv->write_reg(priv, C_CAN_TEST_REG, TEST_LBACK);
 	} else if (priv->can.ctrlmode & CAN_CTRLMODE_LISTENONLY) {
 		/* silent mode : bus-monitoring mode */
-		priv->write_reg(priv, C_CAN_CTRL_REG, CONTROL_EIE |
-				CONTROL_SIE | CONTROL_IE | CONTROL_TEST);
+		priv->write_reg(priv, C_CAN_CTRL_REG, CONTROL_TEST);
 		priv->write_reg(priv, C_CAN_TEST_REG, TEST_SILENT);
-	} else
-		/* normal mode*/
-		priv->write_reg(priv, C_CAN_CTRL_REG,
-				CONTROL_EIE | CONTROL_SIE | CONTROL_IE);
+	}
 
 	/* configure message objects */
 	c_can_configure_msg_objects(dev);
 
 	/* set a `lec` value so that we can check for updates later */
 	priv->write_reg(priv, C_CAN_STS_REG, LEC_UNUSED);
+
+	/* Clear all internal status */
+	atomic_set(&priv->tx_active, 0);
+	priv->rxmasked = 0;
+	priv->tx_dir = 0;
 
 	/* set bittiming params */
 	return c_can_set_bittiming(dev);
···
 	if (err)
 		return err;
 
+	/* Setup the command for new messages */
+	priv->comm_rcv_high = priv->type != BOSCH_D_CAN ?
+		IF_COMM_RCV_LOW : IF_COMM_RCV_HIGH;
+
 	priv->can.state = CAN_STATE_ERROR_ACTIVE;
-
-	/* reset tx helper pointers */
-	priv->tx_next = priv->tx_echo = 0;
-
-	/* enable status change, error and module interrupts */
-	c_can_enable_all_interrupts(priv, ENABLE_ALL_INTERRUPTS);
 
 	return 0;
 }
···
 {
 	struct c_can_priv *priv = netdev_priv(dev);
 
-	/* disable all interrupts */
-	c_can_enable_all_interrupts(priv, DISABLE_ALL_INTERRUPTS);
-
-	/* set the state as STOPPED */
+	c_can_irq_control(priv, false);
 	priv->can.state = CAN_STATE_STOPPED;
 }
 
 static int c_can_set_mode(struct net_device *dev, enum can_mode mode)
 {
+	struct c_can_priv *priv = netdev_priv(dev);
 	int err;
 
 	switch (mode) {
···
 		if (err)
 			return err;
 		netif_wake_queue(dev);
+		c_can_irq_control(priv, true);
 		break;
 	default:
 		return -EOPNOTSUPP;
···
 	return err;
 }
 
-/*
- * priv->tx_echo holds the number of the oldest can_frame put for
- * transmission into the hardware, but not yet ACKed by the CAN tx
- * complete IRQ.
- *
- * We iterate from priv->tx_echo to priv->tx_next and check if the
- * packet has been transmitted, echo it back to the CAN framework.
- * If we discover a not yet transmitted packet, stop looking for more.
- */
 static void c_can_do_tx(struct net_device *dev)
 {
 	struct c_can_priv *priv = netdev_priv(dev);
 	struct net_device_stats *stats = &dev->stats;
-	u32 val, obj, pkts = 0, bytes = 0;
+	u32 idx, obj, pkts = 0, bytes = 0, pend, clr;
 
-	spin_lock_bh(&priv->xmit_lock);
+	clr = pend = priv->read_reg(priv, C_CAN_INTPND2_REG);
 
-	for (; (priv->tx_next - priv->tx_echo) > 0; priv->tx_echo++) {
-		obj = get_tx_echo_msg_obj(priv->tx_echo);
-		val = c_can_read_reg32(priv, C_CAN_TXRQST1_REG);
-
-		if (val & (1 << (obj - 1)))
-			break;
-
-		can_get_echo_skb(dev, obj - C_CAN_MSG_OBJ_TX_FIRST);
-		bytes += priv->dlc[obj - C_CAN_MSG_OBJ_TX_FIRST];
+	while ((idx = ffs(pend))) {
+		idx--;
+		pend &= ~(1 << idx);
+		obj = idx + C_CAN_MSG_OBJ_TX_FIRST;
+		c_can_inval_tx_object(dev, IF_RX, obj);
+		can_get_echo_skb(dev, idx);
+		bytes += priv->dlc[idx];
 		pkts++;
-		c_can_inval_msg_object(dev, IF_TX, obj);
 	}
 
-	/* restart queue if wrap-up or if queue stalled on last pkt */
-	if (((priv->tx_next & C_CAN_NEXT_MSG_OBJ_MASK) != 0) ||
-			((priv->tx_echo & C_CAN_NEXT_MSG_OBJ_MASK) == 0))
-		netif_wake_queue(dev);
+	/* Clear the bits in the tx_active mask */
+	atomic_sub(clr, &priv->tx_active);
 
-	spin_unlock_bh(&priv->xmit_lock);
+	if (clr & (1 << (C_CAN_MSG_OBJ_TX_NUM - 1)))
+		netif_wake_queue(dev);
 
 	if (pkts) {
 		stats->tx_bytes += bytes;
···
 	return pend & ~((1 << lasts) - 1);
 }
 
+static inline void c_can_rx_object_get(struct net_device *dev,
+				       struct c_can_priv *priv, u32 obj)
+{
+#ifdef CONFIG_CAN_C_CAN_STRICT_FRAME_ORDERING
+	if (obj < C_CAN_MSG_RX_LOW_LAST)
+		c_can_object_get(dev, IF_RX, obj, IF_COMM_RCV_LOW);
+	else
+#endif
+		c_can_object_get(dev, IF_RX, obj, priv->comm_rcv_high);
+}
+
+static inline void c_can_rx_finalize(struct net_device *dev,
+				     struct c_can_priv *priv, u32 obj)
+{
+#ifdef CONFIG_CAN_C_CAN_STRICT_FRAME_ORDERING
+	if (obj < C_CAN_MSG_RX_LOW_LAST)
+		priv->rxmasked |= BIT(obj - 1);
+	else if (obj == C_CAN_MSG_RX_LOW_LAST) {
+		priv->rxmasked = 0;
+		/* activate all lower message objects */
+		c_can_activate_all_lower_rx_msg_obj(dev, IF_RX);
+	}
+#endif
+	if (priv->type != BOSCH_D_CAN)
+		c_can_object_get(dev, IF_RX, obj, IF_COMM_CLR_NEWDAT);
+}
+
 static int c_can_read_objects(struct net_device *dev, struct c_can_priv *priv,
 			      u32 pend, int quota)
 {
-	u32 pkts = 0, ctrl, obj, mcmd;
+	u32 pkts = 0, ctrl, obj;
 
 	while ((obj = ffs(pend)) && quota > 0) {
 		pend &= ~BIT(obj - 1);
 
-		mcmd = obj < C_CAN_MSG_RX_LOW_LAST ?
-			IF_COMM_RCV_LOW : IF_COMM_RCV_HIGH;
-
-		c_can_object_get(dev, IF_RX, obj, mcmd);
+		c_can_rx_object_get(dev, priv, obj);
 		ctrl = priv->read_reg(priv, C_CAN_IFACE(MSGCTRL_REG, IF_RX));
 
 		if (ctrl & IF_MCONT_MSGLST) {
···
 		/* read the data from the message object */
 		c_can_read_msg_object(dev, IF_RX, ctrl);
 
-		if (obj == C_CAN_MSG_RX_LOW_LAST)
-			/* activate all lower message objects */
-			c_can_activate_all_lower_rx_msg_obj(dev, IF_RX, ctrl);
+		c_can_rx_finalize(dev, priv, obj);
 
 		pkts++;
 		quota--;
 	}
 
 	return pkts;
+}
+
+static inline u32 c_can_get_pending(struct c_can_priv *priv)
+{
+	u32 pend = priv->read_reg(priv, C_CAN_NEWDAT1_REG);
+
+#ifdef CONFIG_CAN_C_CAN_STRICT_FRAME_ORDERING
+	pend &= ~priv->rxmasked;
+#endif
+	return pend;
 }
 
 /*
···
  * INTPND are set for this message object indicating that a new message
  * has arrived. To work-around this issue, we keep two groups of message
  * objects whose partitioning is defined by C_CAN_MSG_OBJ_RX_SPLIT.
+ *
+ * If CONFIG_CAN_C_CAN_STRICT_FRAME_ORDERING = y
  *
  * To ensure in-order frame reception we use the following
  * approach while re-activating a message object to receive further
···
  * - if the current message object number is greater than
  *   C_CAN_MSG_RX_LOW_LAST then clear the NEWDAT bit of
  *   only this message object.
+ *
+ * This can cause packet loss!
+ *
+ * If CONFIG_CAN_C_CAN_STRICT_FRAME_ORDERING = n
+ *
+ * We clear the newdat bit right away.
+ *
+ * This can result in packet reordering when the readout is slow.
  */
 static int c_can_do_rx_poll(struct net_device *dev, int quota)
 {
···
 	while (quota > 0) {
 		if (!pend) {
-			pend = priv->read_reg(priv, C_CAN_INTPND1_REG);
+			pend = c_can_get_pending(priv);
 			if (!pend)
 				break;
 			/*
···
 	return pkts;
 }
 
-static inline int c_can_has_and_handle_berr(struct c_can_priv *priv)
-{
-	return (priv->can.ctrlmode & CAN_CTRLMODE_BERR_REPORTING) &&
-		(priv->current_status & LEC_UNUSED);
-}
-
 static int c_can_handle_state_change(struct net_device *dev,
 				     enum c_can_bus_error_types error_type)
 {
···
 	struct can_frame *cf;
 	struct sk_buff *skb;
 	struct can_berr_counter bec;
+
+	switch (error_type) {
+	case C_CAN_ERROR_WARNING:
+		/* error warning state */
+		priv->can.can_stats.error_warning++;
+		priv->can.state = CAN_STATE_ERROR_WARNING;
+		break;
+	case C_CAN_ERROR_PASSIVE:
+		/* error passive state */
+		priv->can.can_stats.error_passive++;
+		priv->can.state = CAN_STATE_ERROR_PASSIVE;
+		break;
+	case C_CAN_BUS_OFF:
+		/* bus-off state */
+		priv->can.state = CAN_STATE_BUS_OFF;
+		can_bus_off(dev);
+		break;
+	default:
+		break;
+	}
 
 	/* propagate the error condition to the CAN stack */
 	skb = alloc_can_err_skb(dev, &cf);
···
 	switch (error_type) {
 	case C_CAN_ERROR_WARNING:
 		/* error warning state */
-		priv->can.can_stats.error_warning++;
-		priv->can.state = CAN_STATE_ERROR_WARNING;
 		cf->can_id |= CAN_ERR_CRTL;
 		cf->data[1] = (bec.txerr > bec.rxerr) ?
 			CAN_ERR_CRTL_TX_WARNING :
···
 		break;
 	case C_CAN_ERROR_PASSIVE:
 		/* error passive state */
-		priv->can.can_stats.error_passive++;
-		priv->can.state = CAN_STATE_ERROR_PASSIVE;
 		cf->can_id |= CAN_ERR_CRTL;
 		if (rx_err_passive)
 			cf->data[1] |= CAN_ERR_CRTL_RX_PASSIVE;
···
 		break;
 	case C_CAN_BUS_OFF:
 		/* bus-off state */
-		priv->can.state = CAN_STATE_BUS_OFF;
 		cf->can_id |= CAN_ERR_BUSOFF;
-		/*
-		 * disable all interrupts in bus-off mode to ensure that
-		 * the CPU is not hogged down
-		 */
-		c_can_enable_all_interrupts(priv, DISABLE_ALL_INTERRUPTS);
 		can_bus_off(dev);
 		break;
 	default:
 		break;
 	}
 
-	netif_receive_skb(skb);
 	stats->rx_packets++;
 	stats->rx_bytes += cf->can_dlc;
+	netif_receive_skb(skb);
 
 	return 1;
 }
···
 	if (lec_type == LEC_UNUSED || lec_type == LEC_NO_ERROR)
 		return 0;
 
+	if (!(priv->can.ctrlmode & CAN_CTRLMODE_BERR_REPORTING))
+		return 0;
+
+	/* common for all type of bus errors */
+	priv->can.can_stats.bus_error++;
+	stats->rx_errors++;
+
 	/* propagate the error condition to the CAN stack */
 	skb = alloc_can_err_skb(dev, &cf);
 	if (unlikely(!skb))
···
 	 * check for 'last error code' which tells us the
 	 * type of the last error to occur on the CAN bus
 	 */
-
-	/* common for all type of bus errors */
- priv->can.can_stats.bus_error++; 993 - stats->rx_errors++; 994 1008 cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR; 995 1009 cf->data[2] |= CAN_ERR_PROT_UNSPEC; 996 1010 ··· 1021 1043 break; 1022 1044 } 1023 1045 1024 - /* set a `lec` value so that we can check for updates later */ 1025 - priv->write_reg(priv, C_CAN_STS_REG, LEC_UNUSED); 1026 - 1027 - netif_receive_skb(skb); 1028 1046 stats->rx_packets++; 1029 1047 stats->rx_bytes += cf->can_dlc; 1030 - 1048 + netif_receive_skb(skb); 1031 1049 return 1; 1032 1050 } 1033 1051 1034 1052 static int c_can_poll(struct napi_struct *napi, int quota) 1035 1053 { 1036 - u16 irqstatus; 1037 - int lec_type = 0; 1038 - int work_done = 0; 1039 1054 struct net_device *dev = napi->dev; 1040 1055 struct c_can_priv *priv = netdev_priv(dev); 1056 + u16 curr, last = priv->last_status; 1057 + int work_done = 0; 1041 1058 1042 - irqstatus = priv->irqstatus; 1043 - if (!irqstatus) 1044 - goto end; 1059 + priv->last_status = curr = priv->read_reg(priv, C_CAN_STS_REG); 1060 + /* Ack status on C_CAN. 
D_CAN is self clearing */ 1061 + if (priv->type != BOSCH_D_CAN) 1062 + priv->write_reg(priv, C_CAN_STS_REG, LEC_UNUSED); 1045 1063 1046 - /* status events have the highest priority */ 1047 - if (irqstatus == STATUS_INTERRUPT) { 1048 - priv->current_status = priv->read_reg(priv, 1049 - C_CAN_STS_REG); 1050 - 1051 - /* handle Tx/Rx events */ 1052 - if (priv->current_status & STATUS_TXOK) 1053 - priv->write_reg(priv, C_CAN_STS_REG, 1054 - priv->current_status & ~STATUS_TXOK); 1055 - 1056 - if (priv->current_status & STATUS_RXOK) 1057 - priv->write_reg(priv, C_CAN_STS_REG, 1058 - priv->current_status & ~STATUS_RXOK); 1059 - 1060 - /* handle state changes */ 1061 - if ((priv->current_status & STATUS_EWARN) && 1062 - (!(priv->last_status & STATUS_EWARN))) { 1063 - netdev_dbg(dev, "entered error warning state\n"); 1064 - work_done += c_can_handle_state_change(dev, 1065 - C_CAN_ERROR_WARNING); 1066 - } 1067 - if ((priv->current_status & STATUS_EPASS) && 1068 - (!(priv->last_status & STATUS_EPASS))) { 1069 - netdev_dbg(dev, "entered error passive state\n"); 1070 - work_done += c_can_handle_state_change(dev, 1071 - C_CAN_ERROR_PASSIVE); 1072 - } 1073 - if ((priv->current_status & STATUS_BOFF) && 1074 - (!(priv->last_status & STATUS_BOFF))) { 1075 - netdev_dbg(dev, "entered bus off state\n"); 1076 - work_done += c_can_handle_state_change(dev, 1077 - C_CAN_BUS_OFF); 1078 - } 1079 - 1080 - /* handle bus recovery events */ 1081 - if ((!(priv->current_status & STATUS_BOFF)) && 1082 - (priv->last_status & STATUS_BOFF)) { 1083 - netdev_dbg(dev, "left bus off state\n"); 1084 - priv->can.state = CAN_STATE_ERROR_ACTIVE; 1085 - } 1086 - if ((!(priv->current_status & STATUS_EPASS)) && 1087 - (priv->last_status & STATUS_EPASS)) { 1088 - netdev_dbg(dev, "left error passive state\n"); 1089 - priv->can.state = CAN_STATE_ERROR_ACTIVE; 1090 - } 1091 - 1092 - priv->last_status = priv->current_status; 1093 - 1094 - /* handle lec errors on the bus */ 1095 - lec_type = 
c_can_has_and_handle_berr(priv); 1096 - if (lec_type) 1097 - work_done += c_can_handle_bus_err(dev, lec_type); 1098 - } else if ((irqstatus >= C_CAN_MSG_OBJ_RX_FIRST) && 1099 - (irqstatus <= C_CAN_MSG_OBJ_RX_LAST)) { 1100 - /* handle events corresponding to receive message objects */ 1101 - work_done += c_can_do_rx_poll(dev, (quota - work_done)); 1102 - } else if ((irqstatus >= C_CAN_MSG_OBJ_TX_FIRST) && 1103 - (irqstatus <= C_CAN_MSG_OBJ_TX_LAST)) { 1104 - /* handle events corresponding to transmit message objects */ 1105 - c_can_do_tx(dev); 1064 + /* handle state changes */ 1065 + if ((curr & STATUS_EWARN) && (!(last & STATUS_EWARN))) { 1066 + netdev_dbg(dev, "entered error warning state\n"); 1067 + work_done += c_can_handle_state_change(dev, C_CAN_ERROR_WARNING); 1106 1068 } 1069 + 1070 + if ((curr & STATUS_EPASS) && (!(last & STATUS_EPASS))) { 1071 + netdev_dbg(dev, "entered error passive state\n"); 1072 + work_done += c_can_handle_state_change(dev, C_CAN_ERROR_PASSIVE); 1073 + } 1074 + 1075 + if ((curr & STATUS_BOFF) && (!(last & STATUS_BOFF))) { 1076 + netdev_dbg(dev, "entered bus off state\n"); 1077 + work_done += c_can_handle_state_change(dev, C_CAN_BUS_OFF); 1078 + goto end; 1079 + } 1080 + 1081 + /* handle bus recovery events */ 1082 + if ((!(curr & STATUS_BOFF)) && (last & STATUS_BOFF)) { 1083 + netdev_dbg(dev, "left bus off state\n"); 1084 + priv->can.state = CAN_STATE_ERROR_ACTIVE; 1085 + } 1086 + if ((!(curr & STATUS_EPASS)) && (last & STATUS_EPASS)) { 1087 + netdev_dbg(dev, "left error passive state\n"); 1088 + priv->can.state = CAN_STATE_ERROR_ACTIVE; 1089 + } 1090 + 1091 + /* handle lec errors on the bus */ 1092 + work_done += c_can_handle_bus_err(dev, curr & LEC_MASK); 1093 + 1094 + /* Handle Tx/Rx events. 
We do this unconditionally */ 1095 + work_done += c_can_do_rx_poll(dev, (quota - work_done)); 1096 + c_can_do_tx(dev); 1107 1097 1108 1098 end: 1109 1099 if (work_done < quota) { 1110 1100 napi_complete(napi); 1111 - /* enable all IRQs */ 1112 - c_can_enable_all_interrupts(priv, ENABLE_ALL_INTERRUPTS); 1101 + /* enable all IRQs if we are not in bus off state */ 1102 + if (priv->can.state != CAN_STATE_BUS_OFF) 1103 + c_can_irq_control(priv, true); 1113 1104 } 1114 1105 1115 1106 return work_done; ··· 1089 1142 struct net_device *dev = (struct net_device *)dev_id; 1090 1143 struct c_can_priv *priv = netdev_priv(dev); 1091 1144 1092 - priv->irqstatus = priv->read_reg(priv, C_CAN_INT_REG); 1093 - if (!priv->irqstatus) 1145 + if (!priv->read_reg(priv, C_CAN_INT_REG)) 1094 1146 return IRQ_NONE; 1095 1147 1096 1148 /* disable all interrupts and schedule the NAPI */ 1097 - c_can_enable_all_interrupts(priv, DISABLE_ALL_INTERRUPTS); 1149 + c_can_irq_control(priv, false); 1098 1150 napi_schedule(&priv->napi); 1099 1151 1100 1152 return IRQ_HANDLED; ··· 1130 1184 can_led_event(dev, CAN_LED_EVENT_OPEN); 1131 1185 1132 1186 napi_enable(&priv->napi); 1187 + /* enable status change, error and module interrupts */ 1188 + c_can_irq_control(priv, true); 1133 1189 netif_start_queue(dev); 1134 1190 1135 1191 return 0; ··· 1174 1226 return NULL; 1175 1227 1176 1228 priv = netdev_priv(dev); 1177 - spin_lock_init(&priv->xmit_lock); 1178 1229 netif_napi_add(dev, &priv->napi, c_can_poll, C_CAN_NAPI_WEIGHT); 1179 1230 1180 1231 priv->dev = dev; ··· 1228 1281 u32 val; 1229 1282 unsigned long time_out; 1230 1283 struct c_can_priv *priv = netdev_priv(dev); 1284 + int ret; 1231 1285 1232 1286 if (!(dev->flags & IFF_UP)) 1233 1287 return 0; ··· 1255 1307 if (time_after(jiffies, time_out)) 1256 1308 return -ETIMEDOUT; 1257 1309 1258 - return c_can_start(dev); 1310 + ret = c_can_start(dev); 1311 + if (!ret) 1312 + c_can_irq_control(priv, true); 1313 + 1314 + return ret; 1259 1315 } 1260 1316 
EXPORT_SYMBOL_GPL(c_can_power_up); 1261 1317 #endif
+5 -18
drivers/net/can/c_can/c_can.h
··· 22 22 #ifndef C_CAN_H 23 23 #define C_CAN_H 24 24 25 - /* 26 - * IFx register masks: 27 - * allow easy operation on 16-bit registers when the 28 - * argument is 32-bit instead 29 - */ 30 - #define IFX_WRITE_LOW_16BIT(x) ((x) & 0xFFFF) 31 - #define IFX_WRITE_HIGH_16BIT(x) (((x) & 0xFFFF0000) >> 16) 32 - 33 25 /* message object split */ 34 26 #define C_CAN_NO_OF_OBJECTS 32 35 27 #define C_CAN_MSG_OBJ_RX_NUM 16 ··· 37 45 38 46 #define C_CAN_MSG_OBJ_RX_SPLIT 9 39 47 #define C_CAN_MSG_RX_LOW_LAST (C_CAN_MSG_OBJ_RX_SPLIT - 1) 40 - 41 - #define C_CAN_NEXT_MSG_OBJ_MASK (C_CAN_MSG_OBJ_TX_NUM - 1) 42 48 #define RECEIVE_OBJECT_BITS 0x0000ffff 43 49 44 50 enum reg { ··· 173 183 struct napi_struct napi; 174 184 struct net_device *dev; 175 185 struct device *device; 176 - spinlock_t xmit_lock; 177 - int tx_object; 178 - int current_status; 186 + atomic_t tx_active; 187 + unsigned long tx_dir; 179 188 int last_status; 180 189 u16 (*read_reg) (struct c_can_priv *priv, enum reg index); 181 190 void (*write_reg) (struct c_can_priv *priv, enum reg index, u16 val); 182 191 void __iomem *base; 183 192 const u16 *regs; 184 - unsigned long irq_flags; /* for request_irq() */ 185 - unsigned int tx_next; 186 - unsigned int tx_echo; 187 193 void *priv; /* for board-specific data */ 188 - u16 irqstatus; 189 194 enum c_can_dev_id type; 190 195 u32 __iomem *raminit_ctrlreg; 191 - unsigned int instance; 196 + int instance; 192 197 void (*raminit) (const struct c_can_priv *priv, bool enable); 198 + u32 comm_rcv_high; 199 + u32 rxmasked; 193 200 u32 dlc[C_CAN_MSG_OBJ_TX_NUM]; 194 201 }; 195 202
+7 -2
drivers/net/can/c_can/c_can_pci.c
··· 84 84 goto out_disable_device; 85 85 } 86 86 87 - pci_set_master(pdev); 88 - pci_enable_msi(pdev); 87 + ret = pci_enable_msi(pdev); 88 + if (!ret) { 89 + dev_info(&pdev->dev, "MSI enabled\n"); 90 + pci_set_master(pdev); 91 + } 89 92 90 93 addr = pci_iomap(pdev, 0, pci_resource_len(pdev, 0)); 91 94 if (!addr) { ··· 134 131 ret = -EINVAL; 135 132 goto out_free_c_can; 136 133 } 134 + 135 + priv->type = c_can_pci_data->type; 137 136 138 137 /* Configure access to registers */ 139 138 switch (c_can_pci_data->reg_align) {
+1 -1
drivers/net/can/c_can/c_can_platform.c
··· 222 222 223 223 res = platform_get_resource(pdev, IORESOURCE_MEM, 1); 224 224 priv->raminit_ctrlreg = devm_ioremap_resource(&pdev->dev, res); 225 - if (IS_ERR(priv->raminit_ctrlreg) || (int)priv->instance < 0) 225 + if (IS_ERR(priv->raminit_ctrlreg) || priv->instance < 0) 226 226 dev_info(&pdev->dev, "control memory is not used for raminit\n"); 227 227 else 228 228 priv->raminit = c_can_hw_raminit;
+1 -1
drivers/net/can/dev.c
··· 256 256 257 257 /* Check if the CAN device has bit-timing parameters */ 258 258 if (!btc) 259 - return -ENOTSUPP; 259 + return -EOPNOTSUPP; 260 260 261 261 /* 262 262 * Depending on the given can_bittiming parameter structure the CAN
+13 -3
drivers/net/can/sja1000/sja1000_isa.c
··· 46 46 static unsigned char cdr[MAXDEV] = {[0 ... (MAXDEV - 1)] = 0xff}; 47 47 static unsigned char ocr[MAXDEV] = {[0 ... (MAXDEV - 1)] = 0xff}; 48 48 static int indirect[MAXDEV] = {[0 ... (MAXDEV - 1)] = -1}; 49 + static spinlock_t indirect_lock[MAXDEV]; /* lock for indirect access mode */ 49 50 50 51 module_param_array(port, ulong, NULL, S_IRUGO); 51 52 MODULE_PARM_DESC(port, "I/O port number"); ··· 102 101 static u8 sja1000_isa_port_read_reg_indirect(const struct sja1000_priv *priv, 103 102 int reg) 104 103 { 105 - unsigned long base = (unsigned long)priv->reg_base; 104 + unsigned long flags, base = (unsigned long)priv->reg_base; 105 + u8 readval; 106 106 107 + spin_lock_irqsave(&indirect_lock[priv->dev->dev_id], flags); 107 108 outb(reg, base); 108 - return inb(base + 1); 109 + readval = inb(base + 1); 110 + spin_unlock_irqrestore(&indirect_lock[priv->dev->dev_id], flags); 111 + 112 + return readval; 109 113 } 110 114 111 115 static void sja1000_isa_port_write_reg_indirect(const struct sja1000_priv *priv, 112 116 int reg, u8 val) 113 117 { 114 - unsigned long base = (unsigned long)priv->reg_base; 118 + unsigned long flags, base = (unsigned long)priv->reg_base; 115 119 120 + spin_lock_irqsave(&indirect_lock[priv->dev->dev_id], flags); 116 121 outb(reg, base); 117 122 outb(val, base + 1); 123 + spin_unlock_irqrestore(&indirect_lock[priv->dev->dev_id], flags); 118 124 } 119 125 120 126 static int sja1000_isa_probe(struct platform_device *pdev) ··· 177 169 if (iosize == SJA1000_IOSIZE_INDIRECT) { 178 170 priv->read_reg = sja1000_isa_port_read_reg_indirect; 179 171 priv->write_reg = sja1000_isa_port_write_reg_indirect; 172 + spin_lock_init(&indirect_lock[idx]); 180 173 } else { 181 174 priv->read_reg = sja1000_isa_port_read_reg; 182 175 priv->write_reg = sja1000_isa_port_write_reg; ··· 207 198 208 199 platform_set_drvdata(pdev, dev); 209 200 SET_NETDEV_DEV(dev, &pdev->dev); 201 + dev->dev_id = idx; 210 202 211 203 err = register_sja1000dev(dev); 212 204 if (err) {
+3 -3
drivers/net/can/slcan.c
··· 322 322 if (!sl || sl->magic != SLCAN_MAGIC || !netif_running(sl->dev)) 323 323 return; 324 324 325 - spin_lock(&sl->lock); 325 + spin_lock_bh(&sl->lock); 326 326 if (sl->xleft <= 0) { 327 327 /* Now serial buffer is almost free & we can start 328 328 * transmission of another packet */ 329 329 sl->dev->stats.tx_packets++; 330 330 clear_bit(TTY_DO_WRITE_WAKEUP, &tty->flags); 331 - spin_unlock(&sl->lock); 331 + spin_unlock_bh(&sl->lock); 332 332 netif_wake_queue(sl->dev); 333 333 return; 334 334 } ··· 336 336 actual = tty->ops->write(tty, sl->xhead, sl->xleft); 337 337 sl->xleft -= actual; 338 338 sl->xhead += actual; 339 - spin_unlock(&sl->lock); 339 + spin_unlock_bh(&sl->lock); 340 340 } 341 341 342 342 /* Send a can_frame to a TTY queue. */
+1
drivers/net/ethernet/altera/Kconfig
··· 1 1 config ALTERA_TSE 2 2 tristate "Altera Triple-Speed Ethernet MAC support" 3 + depends on HAS_DMA 3 4 select PHYLIB 4 5 ---help--- 5 6 This driver supports the Altera Triple-Speed (TSE) Ethernet MAC.
+6 -2
drivers/net/ethernet/altera/altera_msgdma.c
··· 18 18 #include "altera_utils.h" 19 19 #include "altera_tse.h" 20 20 #include "altera_msgdmahw.h" 21 + #include "altera_msgdma.h" 21 22 22 23 /* No initialization work to do for MSGDMA */ 23 24 int msgdma_initialize(struct altera_tse_private *priv) ··· 27 26 } 28 27 29 28 void msgdma_uninitialize(struct altera_tse_private *priv) 29 + { 30 + } 31 + 32 + void msgdma_start_rxdma(struct altera_tse_private *priv) 30 33 { 31 34 } 32 35 ··· 159 154 160 155 /* Put buffer to the mSGDMA RX FIFO 161 156 */ 162 - int msgdma_add_rx_desc(struct altera_tse_private *priv, 157 + void msgdma_add_rx_desc(struct altera_tse_private *priv, 163 158 struct tse_buffer *rxbuffer) 164 159 { 165 160 struct msgdma_extended_desc *desc = priv->rx_dma_desc; ··· 180 175 iowrite32(0, &desc->burst_seq_num); 181 176 iowrite32(0x00010001, &desc->stride); 182 177 iowrite32(control, &desc->control); 183 - return 1; 184 178 } 185 179 186 180 /* status is returned on upper 16 bits,
+2 -1
drivers/net/ethernet/altera/altera_msgdma.h
··· 25 25 void msgdma_clear_rxirq(struct altera_tse_private *); 26 26 void msgdma_clear_txirq(struct altera_tse_private *); 27 27 u32 msgdma_tx_completions(struct altera_tse_private *); 28 - int msgdma_add_rx_desc(struct altera_tse_private *, struct tse_buffer *); 28 + void msgdma_add_rx_desc(struct altera_tse_private *, struct tse_buffer *); 29 29 int msgdma_tx_buffer(struct altera_tse_private *, struct tse_buffer *); 30 30 u32 msgdma_rx_status(struct altera_tse_private *); 31 31 int msgdma_initialize(struct altera_tse_private *); 32 32 void msgdma_uninitialize(struct altera_tse_private *); 33 + void msgdma_start_rxdma(struct altera_tse_private *); 33 34 34 35 #endif /* __ALTERA_MSGDMA_H__ */
+105 -72
drivers/net/ethernet/altera/altera_sgdma.c
··· 20 20 #include "altera_sgdmahw.h" 21 21 #include "altera_sgdma.h" 22 22 23 - static void sgdma_descrip(struct sgdma_descrip *desc, 24 - struct sgdma_descrip *ndesc, 25 - dma_addr_t ndesc_phys, 26 - dma_addr_t raddr, 27 - dma_addr_t waddr, 28 - u16 length, 29 - int generate_eop, 30 - int rfixed, 31 - int wfixed); 23 + static void sgdma_setup_descrip(struct sgdma_descrip *desc, 24 + struct sgdma_descrip *ndesc, 25 + dma_addr_t ndesc_phys, 26 + dma_addr_t raddr, 27 + dma_addr_t waddr, 28 + u16 length, 29 + int generate_eop, 30 + int rfixed, 31 + int wfixed); 32 32 33 33 static int sgdma_async_write(struct altera_tse_private *priv, 34 34 struct sgdma_descrip *desc); ··· 64 64 65 65 int sgdma_initialize(struct altera_tse_private *priv) 66 66 { 67 - priv->txctrlreg = SGDMA_CTRLREG_ILASTD; 67 + priv->txctrlreg = SGDMA_CTRLREG_ILASTD | 68 + SGDMA_CTRLREG_INTEN; 68 69 69 70 priv->rxctrlreg = SGDMA_CTRLREG_IDESCRIP | 71 + SGDMA_CTRLREG_INTEN | 70 72 SGDMA_CTRLREG_ILASTD; 73 + 74 + priv->sgdmadesclen = sizeof(struct sgdma_descrip); 71 75 72 76 INIT_LIST_HEAD(&priv->txlisthd); 73 77 INIT_LIST_HEAD(&priv->rxlisthd); ··· 96 92 netdev_err(priv->dev, "error mapping tx descriptor memory\n"); 97 93 return -EINVAL; 98 94 } 95 + 96 + /* Initialize descriptor memory to all 0's, sync memory to cache */ 97 + memset(priv->tx_dma_desc, 0, priv->txdescmem); 98 + memset(priv->rx_dma_desc, 0, priv->rxdescmem); 99 + 100 + dma_sync_single_for_device(priv->device, priv->txdescphys, 101 + priv->txdescmem, DMA_TO_DEVICE); 102 + 103 + dma_sync_single_for_device(priv->device, priv->rxdescphys, 104 + priv->rxdescmem, DMA_TO_DEVICE); 99 105 100 106 return 0; 101 107 } ··· 144 130 iowrite32(0, &prxsgdma->control); 145 131 } 146 132 133 + /* For SGDMA, interrupts remain enabled after initially enabling, 134 + * so no need to provide implementations for abstract enable 135 + * and disable 136 + */ 137 + 147 138 void sgdma_enable_rxirq(struct altera_tse_private *priv) 148 139 { 149 - struct sgdma_csr 
*csr = (struct sgdma_csr *)priv->rx_dma_csr; 150 - priv->rxctrlreg |= SGDMA_CTRLREG_INTEN; 151 - tse_set_bit(&csr->control, SGDMA_CTRLREG_INTEN); 152 140 } 153 141 154 142 void sgdma_enable_txirq(struct altera_tse_private *priv) 155 143 { 156 - struct sgdma_csr *csr = (struct sgdma_csr *)priv->tx_dma_csr; 157 - priv->txctrlreg |= SGDMA_CTRLREG_INTEN; 158 - tse_set_bit(&csr->control, SGDMA_CTRLREG_INTEN); 159 144 } 160 145 161 - /* for SGDMA, RX interrupts remain enabled after enabling */ 162 146 void sgdma_disable_rxirq(struct altera_tse_private *priv) 163 147 { 164 148 } 165 149 166 - /* for SGDMA, TX interrupts remain enabled after enabling */ 167 150 void sgdma_disable_txirq(struct altera_tse_private *priv) 168 151 { 169 152 } ··· 195 184 if (sgdma_txbusy(priv)) 196 185 return 0; 197 186 198 - sgdma_descrip(cdesc, /* current descriptor */ 199 - ndesc, /* next descriptor */ 200 - sgdma_txphysaddr(priv, ndesc), 201 - buffer->dma_addr, /* address of packet to xmit */ 202 - 0, /* write addr 0 for tx dma */ 203 - buffer->len, /* length of packet */ 204 - SGDMA_CONTROL_EOP, /* Generate EOP */ 205 - 0, /* read fixed */ 206 - SGDMA_CONTROL_WR_FIXED); /* Generate SOP */ 187 + sgdma_setup_descrip(cdesc, /* current descriptor */ 188 + ndesc, /* next descriptor */ 189 + sgdma_txphysaddr(priv, ndesc), 190 + buffer->dma_addr, /* address of packet to xmit */ 191 + 0, /* write addr 0 for tx dma */ 192 + buffer->len, /* length of packet */ 193 + SGDMA_CONTROL_EOP, /* Generate EOP */ 194 + 0, /* read fixed */ 195 + SGDMA_CONTROL_WR_FIXED); /* Generate SOP */ 207 196 208 197 pktstx = sgdma_async_write(priv, cdesc); 209 198 ··· 230 219 return ready; 231 220 } 232 221 233 - int sgdma_add_rx_desc(struct altera_tse_private *priv, 234 - struct tse_buffer *rxbuffer) 222 + void sgdma_start_rxdma(struct altera_tse_private *priv) 223 + { 224 + sgdma_async_read(priv); 225 + } 226 + 227 + void sgdma_add_rx_desc(struct altera_tse_private *priv, 228 + struct tse_buffer *rxbuffer) 235 229 { 236 
230 queue_rx(priv, rxbuffer); 237 - return sgdma_async_read(priv); 238 231 } 239 232 240 233 /* status is returned on upper 16 bits, ··· 255 240 unsigned int pktstatus = 0; 256 241 struct tse_buffer *rxbuffer = NULL; 257 242 258 - dma_sync_single_for_cpu(priv->device, 259 - priv->rxdescphys, 260 - priv->rxdescmem, 261 - DMA_BIDIRECTIONAL); 243 + u32 sts = ioread32(&csr->status); 262 244 263 245 desc = &base[0]; 264 - if ((ioread32(&csr->status) & SGDMA_STSREG_EOP) || 265 - (desc->status & SGDMA_STATUS_EOP)) { 246 + if (sts & SGDMA_STSREG_EOP) { 247 + dma_sync_single_for_cpu(priv->device, 248 + priv->rxdescphys, 249 + priv->sgdmadesclen, 250 + DMA_FROM_DEVICE); 251 + 266 252 pktlength = desc->bytes_xferred; 267 253 pktstatus = desc->status & 0x3f; 268 254 rxstatus = pktstatus; 269 255 rxstatus = rxstatus << 16; 270 256 rxstatus |= (pktlength & 0xffff); 271 257 272 - desc->status = 0; 258 + if (rxstatus) { 259 + desc->status = 0; 273 260 274 - rxbuffer = dequeue_rx(priv); 275 - if (rxbuffer == NULL) 261 + rxbuffer = dequeue_rx(priv); 262 + if (rxbuffer == NULL) 263 + netdev_info(priv->dev, 264 + "sgdma rx and rx queue empty!\n"); 265 + 266 + /* Clear control */ 267 + iowrite32(0, &csr->control); 268 + /* clear status */ 269 + iowrite32(0xf, &csr->status); 270 + 271 + /* kick the rx sgdma after reaping this descriptor */ 272 + pktsrx = sgdma_async_read(priv); 273 + 274 + } else { 275 + /* If the SGDMA indicated an end of packet on recv, 276 + * then it's expected that the rxstatus from the 277 + * descriptor is non-zero - meaning a valid packet 278 + * with a nonzero length, or an error has been 279 + * indicated. if not, then all we can do is signal 280 + * an error and return no packet received. 
Most likely 281 + * there is a system design error, or an error in the 282 + * underlying kernel (cache or cache management problem) 283 + */ 276 284 netdev_err(priv->dev, 277 - "sgdma rx and rx queue empty!\n"); 278 - 279 - /* kick the rx sgdma after reaping this descriptor */ 285 + "SGDMA RX Error Info: %x, %x, %x\n", 286 + sts, desc->status, rxstatus); 287 + } 288 + } else if (sts == 0) { 280 289 pktsrx = sgdma_async_read(priv); 281 290 } 282 291 ··· 309 270 310 271 311 272 /* Private functions */ 312 - static void sgdma_descrip(struct sgdma_descrip *desc, 313 - struct sgdma_descrip *ndesc, 314 - dma_addr_t ndesc_phys, 315 - dma_addr_t raddr, 316 - dma_addr_t waddr, 317 - u16 length, 318 - int generate_eop, 319 - int rfixed, 320 - int wfixed) 273 + static void sgdma_setup_descrip(struct sgdma_descrip *desc, 274 + struct sgdma_descrip *ndesc, 275 + dma_addr_t ndesc_phys, 276 + dma_addr_t raddr, 277 + dma_addr_t waddr, 278 + u16 length, 279 + int generate_eop, 280 + int rfixed, 281 + int wfixed) 321 282 { 322 283 /* Clear the next descriptor as not owned by hardware */ 323 284 u32 ctrl = ndesc->control; ··· 358 319 struct sgdma_descrip *cdesc = &descbase[0]; 359 320 struct sgdma_descrip *ndesc = &descbase[1]; 360 321 361 - unsigned int sts = ioread32(&csr->status); 362 322 struct tse_buffer *rxbuffer = NULL; 363 323 364 324 if (!sgdma_rxbusy(priv)) { 365 325 rxbuffer = queue_rx_peekhead(priv); 366 - if (rxbuffer == NULL) 326 + if (rxbuffer == NULL) { 327 + netdev_err(priv->dev, "no rx buffers available\n"); 367 328 return 0; 329 + } 368 330 369 - sgdma_descrip(cdesc, /* current descriptor */ 370 - ndesc, /* next descriptor */ 371 - sgdma_rxphysaddr(priv, ndesc), 372 - 0, /* read addr 0 for rx dma */ 373 - rxbuffer->dma_addr, /* write addr for rx dma */ 374 - 0, /* read 'til EOP */ 375 - 0, /* EOP: NA for rx dma */ 376 - 0, /* read fixed: NA for rx dma */ 377 - 0); /* SOP: NA for rx DMA */ 378 - 379 - /* clear control and status */ 380 - iowrite32(0, 
&csr->control); 381 - 382 - /* If status available, clear those bits */ 383 - if (sts & 0xf) 384 - iowrite32(0xf, &csr->status); 331 + sgdma_setup_descrip(cdesc, /* current descriptor */ 332 + ndesc, /* next descriptor */ 333 + sgdma_rxphysaddr(priv, ndesc), 334 + 0, /* read addr 0 for rx dma */ 335 + rxbuffer->dma_addr, /* write addr for rx dma */ 336 + 0, /* read 'til EOP */ 337 + 0, /* EOP: NA for rx dma */ 338 + 0, /* read fixed: NA for rx dma */ 339 + 0); /* SOP: NA for rx DMA */ 385 340 386 341 dma_sync_single_for_device(priv->device, 387 342 priv->rxdescphys, 388 - priv->rxdescmem, 389 - DMA_BIDIRECTIONAL); 343 + priv->sgdmadesclen, 344 + DMA_TO_DEVICE); 390 345 391 346 iowrite32(lower_32_bits(sgdma_rxphysaddr(priv, cdesc)), 392 347 &csr->next_descrip); ··· 407 374 iowrite32(0x1f, &csr->status); 408 375 409 376 dma_sync_single_for_device(priv->device, priv->txdescphys, 410 - priv->txdescmem, DMA_TO_DEVICE); 377 + priv->sgdmadesclen, DMA_TO_DEVICE); 411 378 412 379 iowrite32(lower_32_bits(sgdma_txphysaddr(priv, desc)), 413 380 &csr->next_descrip);
+2 -1
drivers/net/ethernet/altera/altera_sgdma.h
··· 26 26 void sgdma_clear_txirq(struct altera_tse_private *); 27 27 int sgdma_tx_buffer(struct altera_tse_private *priv, struct tse_buffer *); 28 28 u32 sgdma_tx_completions(struct altera_tse_private *); 29 - int sgdma_add_rx_desc(struct altera_tse_private *priv, struct tse_buffer *); 29 + void sgdma_add_rx_desc(struct altera_tse_private *priv, struct tse_buffer *); 30 30 void sgdma_status(struct altera_tse_private *); 31 31 u32 sgdma_rx_status(struct altera_tse_private *); 32 32 int sgdma_initialize(struct altera_tse_private *); 33 33 void sgdma_uninitialize(struct altera_tse_private *); 34 + void sgdma_start_rxdma(struct altera_tse_private *); 34 35 35 36 #endif /* __ALTERA_SGDMA_H__ */
+5 -1
drivers/net/ethernet/altera/altera_tse.h
··· 58 58 /* MAC function configuration default settings */ 59 59 #define ALTERA_TSE_TX_IPG_LENGTH 12 60 60 61 + #define ALTERA_TSE_PAUSE_QUANTA 0xffff 62 + 61 63 #define GET_BIT_VALUE(v, bit) (((v) >> (bit)) & 0x1) 62 64 63 65 /* MAC Command_Config Register Bit Definitions ··· 392 390 void (*clear_rxirq)(struct altera_tse_private *); 393 391 int (*tx_buffer)(struct altera_tse_private *, struct tse_buffer *); 394 392 u32 (*tx_completions)(struct altera_tse_private *); 395 - int (*add_rx_desc)(struct altera_tse_private *, struct tse_buffer *); 393 + void (*add_rx_desc)(struct altera_tse_private *, struct tse_buffer *); 396 394 u32 (*get_rx_status)(struct altera_tse_private *); 397 395 int (*init_dma)(struct altera_tse_private *); 398 396 void (*uninit_dma)(struct altera_tse_private *); 397 + void (*start_rxdma)(struct altera_tse_private *); 399 398 }; 400 399 401 400 /* This structure is private to each device. ··· 456 453 u32 rxctrlreg; 457 454 dma_addr_t rxdescphys; 458 455 dma_addr_t txdescphys; 456 + size_t sgdmadesclen; 459 457 460 458 struct list_head txlisthd; 461 459 struct list_head rxlisthd;
+7 -1
drivers/net/ethernet/altera/altera_tse_ethtool.c
··· 77 77 struct altera_tse_private *priv = netdev_priv(dev); 78 78 u32 rev = ioread32(&priv->mac_dev->megacore_revision); 79 79 80 - strcpy(info->driver, "Altera TSE MAC IP Driver"); 80 + strcpy(info->driver, "altera_tse"); 81 81 strcpy(info->version, "v8.0"); 82 82 snprintf(info->fw_version, ETHTOOL_FWVERS_LEN, "v%d.%d", 83 83 rev & 0xFFFF, (rev & 0xFFFF0000) >> 16); ··· 185 185 * how to do any special formatting of this data. 186 186 * This version number will need to change if and 187 187 * when this register table is changed. 188 + * 189 + * version[31:0] = 1: Dump the first 128 TSE Registers 190 + * Upper bits are all 0 by default 191 + * 192 + * Upper 16-bits will indicate feature presence for 193 + * Ethtool register decoding in future version. 188 194 */ 189 195 190 196 regs->version = 1;
+46 -31
drivers/net/ethernet/altera/altera_tse_main.c
···
         dev_kfree_skb_any(rxbuffer->skb);
         return -EINVAL;
     }
+    rxbuffer->dma_addr &= (dma_addr_t)~3;
     rxbuffer->len = len;
     return 0;
 }
···
         priv->dev->stats.rx_bytes += pktlength;
 
         entry = next_entry;
+
+        tse_rx_refill(priv);
     }
 
-    tse_rx_refill(priv);
     return count;
 }
···
     struct net_device *dev = dev_id;
     struct altera_tse_private *priv;
     unsigned long int flags;
-
 
     if (unlikely(!dev)) {
         pr_err("%s: invalid dev pointer\n", __func__);
···
     /* Disable RX/TX shift 16 for alignment of all received frames on 16-bit
      * start address
      */
-    tse_clear_bit(&mac->rx_cmd_stat, ALTERA_TSE_RX_CMD_STAT_RX_SHIFT16);
+    tse_set_bit(&mac->rx_cmd_stat, ALTERA_TSE_RX_CMD_STAT_RX_SHIFT16);
     tse_clear_bit(&mac->tx_cmd_stat, ALTERA_TSE_TX_CMD_STAT_TX_SHIFT16 |
                   ALTERA_TSE_TX_CMD_STAT_OMIT_CRC);
 
     /* Set the MAC options */
     cmd = ioread32(&mac->command_config);
-    cmd |= MAC_CMDCFG_PAD_EN;       /* Padding Removal on Receive */
+    cmd &= ~MAC_CMDCFG_PAD_EN;      /* No padding Removal on Receive */
     cmd &= ~MAC_CMDCFG_CRC_FWD;     /* CRC Removal */
     cmd |= MAC_CMDCFG_RX_ERR_DISC;  /* Automatically discard frames
                                      * with CRC errors
···
     cmd |= MAC_CMDCFG_CNTL_FRM_ENA;
     cmd &= ~MAC_CMDCFG_TX_ENA;
     cmd &= ~MAC_CMDCFG_RX_ENA;
+
+    /* Default speed and duplex setting, full/100 */
+    cmd &= ~MAC_CMDCFG_HD_ENA;
+    cmd &= ~MAC_CMDCFG_ETH_SPEED;
+    cmd &= ~MAC_CMDCFG_ENA_10;
+
     iowrite32(cmd, &mac->command_config);
+
+    iowrite32(ALTERA_TSE_PAUSE_QUANTA, &mac->pause_quanta);
 
     if (netif_msg_hw(priv))
         dev_dbg(priv->device,
···
     spin_unlock_irqrestore(&priv->rxdma_irq_lock, flags);
 
-    /* Start MAC Rx/Tx */
-    spin_lock(&priv->mac_cfg_lock);
-    tse_set_mac(priv, true);
-    spin_unlock(&priv->mac_cfg_lock);
-
     if (priv->phydev)
         phy_start(priv->phydev);
 
     napi_enable(&priv->napi);
     netif_start_queue(dev);
+
+    priv->dmaops->start_rxdma(priv);
+
+    /* Start MAC Rx/Tx */
+    spin_lock(&priv->mac_cfg_lock);
+    tse_set_mac(priv, true);
+    spin_unlock(&priv->mac_cfg_lock);
 
     return 0;
···
     .ndo_validate_addr = eth_validate_addr,
 };
 
-
 static int request_and_map(struct platform_device *pdev, const char *name,
                            struct resource **res, void __iomem **ptr)
 {
···
     /* Get the mapped address to the SGDMA descriptor memory */
     ret = request_and_map(pdev, "s1", &dma_res, &descmap);
     if (ret)
-        goto out_free;
+        goto err_free_netdev;
 
     /* Start of that memory is for transmit descriptors */
     priv->tx_dma_desc = descmap;
···
     if (upper_32_bits(priv->rxdescmem_busaddr)) {
         dev_dbg(priv->device,
                 "SGDMA bus addresses greater than 32-bits\n");
-        goto out_free;
+        goto err_free_netdev;
     }
     if (upper_32_bits(priv->txdescmem_busaddr)) {
         dev_dbg(priv->device,
                 "SGDMA bus addresses greater than 32-bits\n");
-        goto out_free;
+        goto err_free_netdev;
     }
 } else if (priv->dmaops &&
            priv->dmaops->altera_dtype == ALTERA_DTYPE_MSGDMA) {
     ret = request_and_map(pdev, "rx_resp", &dma_res,
                           &priv->rx_dma_resp);
     if (ret)
-        goto out_free;
+        goto err_free_netdev;
 
     ret = request_and_map(pdev, "tx_desc", &dma_res,
                           &priv->tx_dma_desc);
     if (ret)
-        goto out_free;
+        goto err_free_netdev;
 
     priv->txdescmem = resource_size(dma_res);
     priv->txdescmem_busaddr = dma_res->start;
···
     ret = request_and_map(pdev, "rx_desc", &dma_res,
                           &priv->rx_dma_desc);
     if (ret)
-        goto out_free;
+        goto err_free_netdev;
 
     priv->rxdescmem = resource_size(dma_res);
     priv->rxdescmem_busaddr = dma_res->start;
 
 } else {
-    goto out_free;
+    goto err_free_netdev;
 }
 
 if (!dma_set_mask(priv->device, DMA_BIT_MASK(priv->dmaops->dmamask)))
···
 else if (!dma_set_mask(priv->device, DMA_BIT_MASK(32)))
     dma_set_coherent_mask(priv->device, DMA_BIT_MASK(32));
 else
-    goto out_free;
+    goto err_free_netdev;
 
 /* MAC address space */
 ret = request_and_map(pdev, "control_port", &control_port,
                       (void __iomem **)&priv->mac_dev);
 if (ret)
-    goto out_free;
+    goto err_free_netdev;
 
 /* xSGDMA Rx Dispatcher address space */
 ret = request_and_map(pdev, "rx_csr", &dma_res,
                       &priv->rx_dma_csr);
 if (ret)
-    goto out_free;
+    goto err_free_netdev;
 
 
 /* xSGDMA Tx Dispatcher address space */
 ret = request_and_map(pdev, "tx_csr", &dma_res,
                       &priv->tx_dma_csr);
 if (ret)
-    goto out_free;
+    goto err_free_netdev;
 
 
 /* Rx IRQ */
···
 if (priv->rx_irq == -ENXIO) {
     dev_err(&pdev->dev, "cannot obtain Rx IRQ\n");
     ret = -ENXIO;
-    goto out_free;
+    goto err_free_netdev;
 }
 
 /* Tx IRQ */
···
 if (priv->tx_irq == -ENXIO) {
     dev_err(&pdev->dev, "cannot obtain Tx IRQ\n");
     ret = -ENXIO;
-    goto out_free;
+    goto err_free_netdev;
 }
 
 /* get FIFO depths from device tree */
···
                          &priv->rx_fifo_depth)) {
     dev_err(&pdev->dev, "cannot obtain rx-fifo-depth\n");
     ret = -ENXIO;
-    goto out_free;
+    goto err_free_netdev;
 }
 
 if (of_property_read_u32(pdev->dev.of_node, "tx-fifo-depth",
                          &priv->rx_fifo_depth)) {
     dev_err(&pdev->dev, "cannot obtain tx-fifo-depth\n");
     ret = -ENXIO;
-    goto out_free;
+    goto err_free_netdev;
 }
 
 /* get hash filter settings for this instance */
···
     ((priv->phy_addr >= 0) && (priv->phy_addr < PHY_MAX_ADDR)))) {
     dev_err(&pdev->dev, "invalid phy-addr specified %d\n",
             priv->phy_addr);
-    goto out_free;
+    goto err_free_netdev;
 }
 
 /* Create/attach to MDIO bus */
···
                 atomic_add_return(1, &instance_count));
 
 if (ret)
-    goto out_free;
+    goto err_free_netdev;
 
 /* initialize netdev */
 ether_setup(ndev);
···
 ret = register_netdev(ndev);
 if (ret) {
     dev_err(&pdev->dev, "failed to register TSE net device\n");
-    goto out_free_mdio;
+    goto err_register_netdev;
 }
 
 platform_set_drvdata(pdev, ndev);
···
 ret = init_phy(ndev);
 if (ret != 0) {
     netdev_err(ndev, "Cannot attach to PHY (error: %d)\n", ret);
-    goto out_free_mdio;
+    goto err_init_phy;
 }
 return 0;
 
-out_free_mdio:
+err_init_phy:
+    unregister_netdev(ndev);
+err_register_netdev:
+    netif_napi_del(&priv->napi);
     altera_tse_mdio_destroy(ndev);
-out_free:
+err_free_netdev:
     free_netdev(ndev);
     return ret;
 }
···
     .get_rx_status = sgdma_rx_status,
     .init_dma = sgdma_initialize,
     .uninit_dma = sgdma_uninitialize,
+    .start_rxdma = sgdma_start_rxdma,
 };
 
 struct altera_dmaops altera_dtype_msgdma = {
···
     .get_rx_status = msgdma_rx_status,
     .init_dma = msgdma_initialize,
     .uninit_dma = msgdma_uninitialize,
+    .start_rxdma = msgdma_start_rxdma,
 };
 
 static struct of_device_id altera_tse_ids[] = {
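The first hunk above masks off the two low-order bits of the mapped RX buffer address (`rxbuffer->dma_addr &= (dma_addr_t)~3;`), presumably so the address written into the DMA descriptor is 32-bit-word aligned as the SGDMA engine expects. A standalone model of that masking (plain C outside the kernel; `align_dma_addr` is a name invented here for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Model of the fix: clear the two low-order bits so the buffer address
 * handed to the descriptor is 4-byte aligned. Mirrors the effect of
 * "dma_addr &= (dma_addr_t)~3" in the hunk above. */
static uint64_t align_dma_addr(uint64_t dma_addr)
{
    return dma_addr & ~(uint64_t)3;
}
```

An address that is already aligned passes through unchanged; an odd offset (such as one produced by a NET_IP_ALIGN `skb_reserve()`) is rounded down to the previous word boundary.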
+2
drivers/net/ethernet/arc/emac.h
···
 #include <linux/dma-mapping.h>
 #include <linux/netdevice.h>
 #include <linux/phy.h>
+#include <linux/clk.h>
 
 /* STATUS and ENABLE Register bit masks */
 #define TXINT_MASK  (1<<0)  /* Transmit interrupt */
···
     struct mii_bus *bus;
 
     void __iomem *regs;
+    struct clk *clk;
 
     struct napi_struct napi;
     struct net_device_stats stats;
+59 -23
drivers/net/ethernet/arc/emac_main.c
···
     return NETDEV_TX_OK;
 }
 
+static void arc_emac_set_address_internal(struct net_device *ndev)
+{
+    struct arc_emac_priv *priv = netdev_priv(ndev);
+    unsigned int addr_low, addr_hi;
+
+    addr_low = le32_to_cpu(*(__le32 *) &ndev->dev_addr[0]);
+    addr_hi = le16_to_cpu(*(__le16 *) &ndev->dev_addr[4]);
+
+    arc_reg_set(priv, R_ADDRL, addr_low);
+    arc_reg_set(priv, R_ADDRH, addr_hi);
+}
+
 /**
  * arc_emac_set_address - Set the MAC address for this device.
  * @ndev: Pointer to net_device structure.
···
  */
 static int arc_emac_set_address(struct net_device *ndev, void *p)
 {
-    struct arc_emac_priv *priv = netdev_priv(ndev);
     struct sockaddr *addr = p;
-    unsigned int addr_low, addr_hi;
 
     if (netif_running(ndev))
         return -EBUSY;
···
 
     memcpy(ndev->dev_addr, addr->sa_data, ndev->addr_len);
 
-    addr_low = le32_to_cpu(*(__le32 *) &ndev->dev_addr[0]);
-    addr_hi = le16_to_cpu(*(__le16 *) &ndev->dev_addr[4]);
-
-    arc_reg_set(priv, R_ADDRL, addr_low);
-    arc_reg_set(priv, R_ADDRH, addr_hi);
+    arc_emac_set_address_internal(ndev);
 
     return 0;
 }
···
         return -ENODEV;
     }
 
-    /* Get CPU clock frequency from device tree */
-    if (of_property_read_u32(pdev->dev.of_node, "clock-frequency",
-                             &clock_frequency)) {
-        dev_err(&pdev->dev, "failed to retrieve <clock-frequency> from device tree\n");
-        return -EINVAL;
-    }
-
     /* Get IRQ from device tree */
     irq = irq_of_parse_and_map(pdev->dev.of_node, 0);
     if (!irq) {
···
     priv->regs = devm_ioremap_resource(&pdev->dev, &res_regs);
     if (IS_ERR(priv->regs)) {
         err = PTR_ERR(priv->regs);
-        goto out;
+        goto out_netdev;
     }
     dev_dbg(&pdev->dev, "Registers base address is 0x%p\n", priv->regs);
+
+    priv->clk = of_clk_get(pdev->dev.of_node, 0);
+    if (IS_ERR(priv->clk)) {
+        /* Get CPU clock frequency from device tree */
+        if (of_property_read_u32(pdev->dev.of_node, "clock-frequency",
+                                 &clock_frequency)) {
+            dev_err(&pdev->dev, "failed to retrieve <clock-frequency> from device tree\n");
+            err = -EINVAL;
+            goto out_netdev;
+        }
+    } else {
+        err = clk_prepare_enable(priv->clk);
+        if (err) {
+            dev_err(&pdev->dev, "failed to enable clock\n");
+            goto out_clkget;
+        }
+
+        clock_frequency = clk_get_rate(priv->clk);
+    }
 
     id = arc_reg_get(priv, R_ID);
 
···
     if (!(id == 0x0005fd02 || id == 0x0007fd02)) {
         dev_err(&pdev->dev, "ARC EMAC not detected, id=0x%x\n", id);
         err = -ENODEV;
-        goto out;
+        goto out_clken;
     }
     dev_info(&pdev->dev, "ARC EMAC detected with id: 0x%x\n", id);
 
···
                    ndev->name, ndev);
     if (err) {
         dev_err(&pdev->dev, "could not allocate IRQ\n");
-        goto out;
+        goto out_clken;
     }
 
     /* Get MAC address from device tree */
···
     else
         eth_hw_addr_random(ndev);
 
+    arc_emac_set_address_internal(ndev);
     dev_info(&pdev->dev, "MAC address is now %pM\n", ndev->dev_addr);
 
     /* Do 1 allocation instead of 2 separate ones for Rx and Tx BD rings */
···
     if (!priv->rxbd) {
         dev_err(&pdev->dev, "failed to allocate data buffers\n");
         err = -ENOMEM;
-        goto out;
+        goto out_clken;
     }
 
     priv->txbd = priv->rxbd + RX_BD_NUM;
···
     err = arc_mdio_probe(pdev, priv);
     if (err) {
         dev_err(&pdev->dev, "failed to probe MII bus\n");
-        goto out;
+        goto out_clken;
     }
 
     priv->phy_dev = of_phy_connect(ndev, phy_node, arc_emac_adjust_link, 0,
···
     if (!priv->phy_dev) {
         dev_err(&pdev->dev, "of_phy_connect() failed\n");
         err = -ENODEV;
-        goto out;
+        goto out_mdio;
     }
 
     dev_info(&pdev->dev, "connected to %s phy with id 0x%x\n",
···
 
     err = register_netdev(ndev);
     if (err) {
-        netif_napi_del(&priv->napi);
         dev_err(&pdev->dev, "failed to register network device\n");
-        goto out;
+        goto out_netif_api;
     }
 
     return 0;
 
-out:
+out_netif_api:
+    netif_napi_del(&priv->napi);
+    phy_disconnect(priv->phy_dev);
+    priv->phy_dev = NULL;
+out_mdio:
+    arc_mdio_remove(priv);
+out_clken:
+    if (!IS_ERR(priv->clk))
+        clk_disable_unprepare(priv->clk);
+out_clkget:
+    if (!IS_ERR(priv->clk))
+        clk_put(priv->clk);
+out_netdev:
     free_netdev(ndev);
     return err;
 }
···
     arc_mdio_remove(priv);
     unregister_netdev(ndev);
     netif_napi_del(&priv->napi);
+
+    if (!IS_ERR(priv->clk)) {
+        clk_disable_unprepare(priv->clk);
+        clk_put(priv->clk);
+    }
+
     free_netdev(ndev);
 
     return 0;
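The new `arc_emac_set_address_internal()` above packs the six MAC address bytes into the ADDRL/ADDRH registers via little-endian loads (`le32_to_cpu`/`le16_to_cpu` on the byte buffer). A host-endianness-independent model of that packing (plain C; the helper names here are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdint.h>

/* Equivalent of le32_to_cpu(*(__le32 *)&mac[0]): assemble the low
 * register value byte-by-byte, regardless of host endianness. */
static uint32_t mac_addr_low(const uint8_t mac[6])
{
    return (uint32_t)mac[0] | ((uint32_t)mac[1] << 8) |
           ((uint32_t)mac[2] << 16) | ((uint32_t)mac[3] << 24);
}

/* Equivalent of le16_to_cpu(*(__le16 *)&mac[4]): last two bytes. */
static uint32_t mac_addr_hi(const uint8_t mac[6])
{
    return (uint32_t)mac[4] | ((uint32_t)mac[5] << 8);
}
```

So for `00:11:22:33:44:55`, ADDRL receives `0x33221100` and ADDRH receives `0x5544`, which is what the little-endian casts in the driver produce.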
+2
drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
···
         iounmap(bp->doorbells);
 
         bnx2x_release_firmware(bp);
+    } else {
+        bnx2x_vf_pci_dealloc(bp);
     }
     bnx2x_free_mem_bp(bp);
+44 -14
drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
···
     if (filter->add && filter->type == BNX2X_VF_FILTER_VLAN &&
         (atomic_read(&bnx2x_vfq(vf, qid, vlan_count)) >=
          vf_vlan_rules_cnt(vf))) {
-        BNX2X_ERR("No credits for vlan\n");
+        BNX2X_ERR("No credits for vlan [%d >= %d]\n",
+                  atomic_read(&bnx2x_vfq(vf, qid, vlan_count)),
+                  vf_vlan_rules_cnt(vf));
         return -ENOMEM;
     }
 
···
     }
 
     /* add new mcasts */
+    mcast.mcast_list_len = mc_num;
     rc = bnx2x_config_mcast(bp, &mcast, BNX2X_MCAST_CMD_ADD);
     if (rc)
         BNX2X_ERR("Faled to add multicasts\n");
···
     return 0;
 }
 
+static void bnx2x_iov_re_set_vlan_filters(struct bnx2x *bp,
+                                          struct bnx2x_virtf *vf,
+                                          int new)
+{
+    int num = vf_vlan_rules_cnt(vf);
+    int diff = new - num;
+    bool rc = true;
+
+    DP(BNX2X_MSG_IOV, "vf[%d] - %d vlan filter credits [previously %d]\n",
+       vf->abs_vfid, new, num);
+
+    if (diff > 0)
+        rc = bp->vlans_pool.get(&bp->vlans_pool, diff);
+    else if (diff < 0)
+        rc = bp->vlans_pool.put(&bp->vlans_pool, -diff);
+
+    if (rc)
+        vf_vlan_rules_cnt(vf) = new;
+    else
+        DP(BNX2X_MSG_IOV, "vf[%d] - Failed to configure vlan filter credits change\n",
+           vf->abs_vfid);
+}
+
 /* must be called after the number of PF queues and the number of VFs are
  * both known
  */
···
     resc->num_mac_filters = 1;
 
     /* divvy up vlan rules */
+    bnx2x_iov_re_set_vlan_filters(bp, vf, 0);
     vlan_count = bp->vlans_pool.check(&bp->vlans_pool);
     vlan_count = 1 << ilog2(vlan_count);
-    resc->num_vlan_filters = vlan_count / BNX2X_NR_VIRTFN(bp);
+    bnx2x_iov_re_set_vlan_filters(bp, vf,
+                                  vlan_count / BNX2X_NR_VIRTFN(bp));
 
     /* no real limitation */
     resc->num_mc_filters = 0;
···
     bnx2x_iov_static_resc(bp, vf);
 
     /* queues are initialized during VF-ACQUIRE */
-
-    /* reserve the vf vlan credit */
-    bp->vlans_pool.get(&bp->vlans_pool, vf_vlan_rules_cnt(vf));
-
     vf->filter_state = 0;
     vf->sp_cl_id = bnx2x_fp(bp, 0, cl_id);
 
···
     u8 rxq_cnt = vf_rxq_count(vf) ? : bnx2x_vf_max_queue_cnt(bp, vf);
     u8 txq_cnt = vf_txq_count(vf) ? : bnx2x_vf_max_queue_cnt(bp, vf);
 
+    /* Save a vlan filter for the Hypervisor */
     return ((req_resc->num_rxqs <= rxq_cnt) &&
             (req_resc->num_txqs <= txq_cnt) &&
             (req_resc->num_sbs <= vf_sb_count(vf)) &&
             (req_resc->num_mac_filters <= vf_mac_rules_cnt(vf)) &&
-            (req_resc->num_vlan_filters <= vf_vlan_rules_cnt(vf)));
+            (req_resc->num_vlan_filters <= vf_vlan_rules_visible_cnt(vf)));
 }
 
 /* CORE VF API */
···
     vf_txq_count(vf) = resc->num_txqs ? : bnx2x_vf_max_queue_cnt(bp, vf);
     if (resc->num_mac_filters)
         vf_mac_rules_cnt(vf) = resc->num_mac_filters;
-    if (resc->num_vlan_filters)
-        vf_vlan_rules_cnt(vf) = resc->num_vlan_filters;
+    /* Add an additional vlan filter credit for the hypervisor */
+    bnx2x_iov_re_set_vlan_filters(bp, vf, resc->num_vlan_filters + 1);
 
     DP(BNX2X_MSG_IOV,
        "Fulfilling vf request: sb count %d, tx_count %d, rx_count %d, mac_rules_count %d, vlan_rules_count %d\n",
        vf_sb_count(vf), vf_rxq_count(vf),
        vf_txq_count(vf), vf_mac_rules_cnt(vf),
-       vf_vlan_rules_cnt(vf));
+       vf_vlan_rules_visible_cnt(vf));
 
     /* Initialize the queues */
     if (!vf->vfqs) {
···
     return bp->regview + PXP_VF_ADDR_DB_START;
 }
 
+void bnx2x_vf_pci_dealloc(struct bnx2x *bp)
+{
+    BNX2X_PCI_FREE(bp->vf2pf_mbox, bp->vf2pf_mbox_mapping,
+                   sizeof(struct bnx2x_vf_mbx_msg));
+    BNX2X_PCI_FREE(bp->vf2pf_mbox, bp->pf2vf_bulletin_mapping,
+                   sizeof(union pf_vf_bulletin));
+}
+
 int bnx2x_vf_pci_alloc(struct bnx2x *bp)
 {
     mutex_init(&bp->vf2pf_mutex);
···
     return 0;
 
 alloc_mem_err:
-    BNX2X_PCI_FREE(bp->vf2pf_mbox, bp->vf2pf_mbox_mapping,
-                   sizeof(struct bnx2x_vf_mbx_msg));
-    BNX2X_PCI_FREE(bp->vf2pf_mbox, bp->pf2vf_bulletin_mapping,
-                   sizeof(union pf_vf_bulletin));
+    bnx2x_vf_pci_dealloc(bp);
     return -ENOMEM;
 }
+4
drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
···
 #define vf_mac_rules_cnt(vf)  ((vf)->alloc_resc.num_mac_filters)
 #define vf_vlan_rules_cnt(vf) ((vf)->alloc_resc.num_vlan_filters)
 #define vf_mc_rules_cnt(vf)   ((vf)->alloc_resc.num_mc_filters)
+/* Hide a single vlan filter credit for the hypervisor */
+#define vf_vlan_rules_visible_cnt(vf) (vf_vlan_rules_cnt(vf) - 1)
 
     u8 sb_count;    /* actual number of SBs */
     u8 igu_base_id; /* base igu status block id */
···
 enum sample_bulletin_result bnx2x_sample_bulletin(struct bnx2x *bp);
 void bnx2x_timer_sriov(struct bnx2x *bp);
 void __iomem *bnx2x_vf_doorbells(struct bnx2x *bp);
+void bnx2x_vf_pci_dealloc(struct bnx2x *bp);
 int bnx2x_vf_pci_alloc(struct bnx2x *bp);
 int bnx2x_enable_sriov(struct bnx2x *bp);
 void bnx2x_disable_sriov(struct bnx2x *bp);
···
     return NULL;
 }
 
+static inline void bnx2x_vf_pci_dealloc(struct bnx2x *bp) {}
 static inline int bnx2x_vf_pci_alloc(struct bnx2x *bp) {return 0; }
 static inline void bnx2x_pf_set_vfs_vlan(struct bnx2x *bp) {}
 static inline int bnx2x_sriov_configure(struct pci_dev *dev, int num_vfs) {return 0; }
+1 -1
drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c
···
                     bnx2x_vf_max_queue_cnt(bp, vf);
     resc->num_sbs = vf_sb_count(vf);
     resc->num_mac_filters = vf_mac_rules_cnt(vf);
-    resc->num_vlan_filters = vf_vlan_rules_cnt(vf);
+    resc->num_vlan_filters = vf_vlan_rules_visible_cnt(vf);
     resc->num_mc_filters = 0;
 
     if (status == PFVF_STATUS_SUCCESS) {
+2 -2
drivers/net/ethernet/cadence/Kconfig
···
 
 config NET_CADENCE
     bool "Cadence devices"
-    depends on HAS_IOMEM && (ARM || AVR32 || COMPILE_TEST)
+    depends on HAS_IOMEM && (ARM || AVR32 || MICROBLAZE || COMPILE_TEST)
     default y
     ---help---
       If you have a network (Ethernet) card belonging to this class, say Y.
···
 
 config MACB
     tristate "Cadence MACB/GEM support"
-    depends on HAS_DMA && (PLATFORM_AT32AP || ARCH_AT91 || ARCH_PICOXCELL || ARCH_ZYNQ || COMPILE_TEST)
+    depends on HAS_DMA && (PLATFORM_AT32AP || ARCH_AT91 || ARCH_PICOXCELL || ARCH_ZYNQ || MICROBLAZE || COMPILE_TEST)
     select PHYLIB
     ---help---
       The Cadence MACB ethernet interface is found on many Atmel AT32 and
+17 -18
drivers/net/ethernet/cadence/macb.c
···
 {
     unsigned int entry;
     struct sk_buff *skb;
-    struct macb_dma_desc *desc;
     dma_addr_t paddr;
 
     while (CIRC_SPACE(bp->rx_prepared_head, bp->rx_tail, RX_RING_SIZE) > 0) {
-        u32 addr, ctrl;
-
         entry = macb_rx_ring_wrap(bp->rx_prepared_head);
-        desc = &bp->rx_ring[entry];
 
         /* Make hw descriptor updates visible to CPU */
         rmb();
 
-        addr = desc->addr;
-        ctrl = desc->ctrl;
         bp->rx_prepared_head++;
-
-        if ((addr & MACB_BIT(RX_USED)))
-            continue;
 
         if (bp->rx_skbuff[entry] == NULL) {
             /* allocate sk_buff for this free entry in ring */
···
         if (!(addr & MACB_BIT(RX_USED)))
             break;
 
-        desc->addr &= ~MACB_BIT(RX_USED);
         bp->rx_tail++;
         count++;
 
···
     if (work_done < budget) {
         napi_complete(napi);
 
-        /*
-         * We've done what we can to clean the buffers. Make sure we
-         * get notified when new packets arrive.
-         */
-        macb_writel(bp, IER, MACB_RX_INT_FLAGS);
-
         /* Packets received while interrupts were disabled */
         status = macb_readl(bp, RSR);
-        if (unlikely(status))
+        if (status) {
+            if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
+                macb_writel(bp, ISR, MACB_BIT(RCOMP));
             napi_reschedule(napi);
+        } else {
+            macb_writel(bp, IER, MACB_RX_INT_FLAGS);
+        }
     }
 
     /* TODO: Handle errors */
···
         if (unlikely(status & (MACB_TX_ERR_FLAGS))) {
             macb_writel(bp, IDR, MACB_TX_INT_FLAGS);
             schedule_work(&bp->tx_error_task);
+
+            if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
+                macb_writel(bp, ISR, MACB_TX_ERR_FLAGS);
+
             break;
         }
 
···
                 bp->hw_stats.gem.rx_overruns++;
             else
                 bp->hw_stats.macb.rx_overruns++;
+
+            if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
+                macb_writel(bp, ISR, MACB_BIT(ISR_ROVR));
         }
 
         if (status & MACB_BIT(HRESP)) {
···
              * (work queue?)
              */
             netdev_err(dev, "DMA bus error: HRESP not OK\n");
+
+            if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
+                macb_writel(bp, ISR, MACB_BIT(HRESP));
         }
 
         status = macb_readl(bp, ISR);
···
 
         desc = &bp->rx_ring[i];
         addr = MACB_BF(RX_WADDR, MACB_BFEXT(RX_WADDR, desc->addr));
-        dma_unmap_single(&bp->pdev->dev, addr, skb->len,
+        dma_unmap_single(&bp->pdev->dev, addr, bp->rx_buffer_size,
                          DMA_FROM_DEVICE);
         dev_kfree_skb_any(skb);
         skb = NULL;
+7 -6
drivers/net/ethernet/chelsio/Kconfig
···
       will be called cxgb3.
 
 config CHELSIO_T4
-    tristate "Chelsio Communications T4 Ethernet support"
+    tristate "Chelsio Communications T4/T5 Ethernet support"
     depends on PCI
     select FW_LOADER
     select MDIO
     ---help---
-      This driver supports Chelsio T4-based gigabit and 10Gb Ethernet
-      adapters.
+      This driver supports Chelsio T4 and T5 based gigabit, 10Gb Ethernet
+      adapter and T5 based 40Gb Ethernet adapter.
 
       For general information about Chelsio and our products, visit
       our website at <http://www.chelsio.com>.
···
       will be called cxgb4.
 
 config CHELSIO_T4VF
-    tristate "Chelsio Communications T4 Virtual Function Ethernet support"
+    tristate "Chelsio Communications T4/T5 Virtual Function Ethernet support"
     depends on PCI
     ---help---
-      This driver supports Chelsio T4-based gigabit and 10Gb Ethernet
-      adapters with PCI-E SR-IOV Virtual Functions.
+      This driver supports Chelsio T4 and T5 based gigabit, 10Gb Ethernet
+      adapters and T5 based 40Gb Ethernet adapters with PCI-E SR-IOV Virtual
+      Functions.
 
       For general information about Chelsio and our products, visit
       our website at <http://www.chelsio.com>.
+2
drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
···
         spd = " 2.5 GT/s";
     else if (adap->params.pci.speed == PCI_EXP_LNKSTA_CLS_5_0GB)
         spd = " 5 GT/s";
+    else if (adap->params.pci.speed == PCI_EXP_LNKSTA_CLS_8_0GB)
+        spd = " 8 GT/s";
 
     if (pi->link_cfg.supported & FW_PORT_CAP_SPEED_100M)
         bufp += sprintf(bufp, "100/");
+113 -110
drivers/net/ethernet/freescale/gianfar.c
···
 static irqreturn_t gfar_transmit(int irq, void *dev_id);
 static irqreturn_t gfar_interrupt(int irq, void *dev_id);
 static void adjust_link(struct net_device *dev);
+static noinline void gfar_update_link_state(struct gfar_private *priv);
 static int init_phy(struct net_device *dev);
 static int gfar_probe(struct platform_device *ofdev);
 static int gfar_remove(struct platform_device *ofdev);
···
     return IRQ_HANDLED;
 }
 
-static u32 gfar_get_flowctrl_cfg(struct gfar_private *priv)
-{
-    struct phy_device *phydev = priv->phydev;
-    u32 val = 0;
-
-    if (!phydev->duplex)
-        return val;
-
-    if (!priv->pause_aneg_en) {
-        if (priv->tx_pause_en)
-            val |= MACCFG1_TX_FLOW;
-        if (priv->rx_pause_en)
-            val |= MACCFG1_RX_FLOW;
-    } else {
-        u16 lcl_adv, rmt_adv;
-        u8 flowctrl;
-        /* get link partner capabilities */
-        rmt_adv = 0;
-        if (phydev->pause)
-            rmt_adv = LPA_PAUSE_CAP;
-        if (phydev->asym_pause)
-            rmt_adv |= LPA_PAUSE_ASYM;
-
-        lcl_adv = mii_advertise_flowctrl(phydev->advertising);
-
-        flowctrl = mii_resolve_flowctrl_fdx(lcl_adv, rmt_adv);
-        if (flowctrl & FLOW_CTRL_TX)
-            val |= MACCFG1_TX_FLOW;
-        if (flowctrl & FLOW_CTRL_RX)
-            val |= MACCFG1_RX_FLOW;
-    }
-
-    return val;
-}
-
 /* Called every time the controller might need to be made
  * aware of new link state.  The PHY code conveys this
  * information through variables in the phydev structure, and this
···
 static void adjust_link(struct net_device *dev)
 {
     struct gfar_private *priv = netdev_priv(dev);
-    struct gfar __iomem *regs = priv->gfargrp[0].regs;
     struct phy_device *phydev = priv->phydev;
-    int new_state = 0;
 
-    if (test_bit(GFAR_RESETTING, &priv->state))
-        return;
-
-    if (phydev->link) {
-        u32 tempval1 = gfar_read(&regs->maccfg1);
-        u32 tempval = gfar_read(&regs->maccfg2);
-        u32 ecntrl = gfar_read(&regs->ecntrl);
-
-        /* Now we make sure that we can be in full duplex mode.
-         * If not, we operate in half-duplex mode.
-         */
-        if (phydev->duplex != priv->oldduplex) {
-            new_state = 1;
-            if (!(phydev->duplex))
-                tempval &= ~(MACCFG2_FULL_DUPLEX);
-            else
-                tempval |= MACCFG2_FULL_DUPLEX;
-
-            priv->oldduplex = phydev->duplex;
-        }
-
-        if (phydev->speed != priv->oldspeed) {
-            new_state = 1;
-            switch (phydev->speed) {
-            case 1000:
-                tempval =
-                    ((tempval & ~(MACCFG2_IF)) | MACCFG2_GMII);
-
-                ecntrl &= ~(ECNTRL_R100);
-                break;
-            case 100:
-            case 10:
-                tempval =
-                    ((tempval & ~(MACCFG2_IF)) | MACCFG2_MII);
-
-                /* Reduced mode distinguishes
-                 * between 10 and 100
-                 */
-                if (phydev->speed == SPEED_100)
-                    ecntrl |= ECNTRL_R100;
-                else
-                    ecntrl &= ~(ECNTRL_R100);
-                break;
-            default:
-                netif_warn(priv, link, dev,
-                           "Ack!  Speed (%d) is not 10/100/1000!\n",
-                           phydev->speed);
-                break;
-            }
-
-            priv->oldspeed = phydev->speed;
-        }
-
-        tempval1 &= ~(MACCFG1_TX_FLOW | MACCFG1_RX_FLOW);
-        tempval1 |= gfar_get_flowctrl_cfg(priv);
-
-        gfar_write(&regs->maccfg1, tempval1);
-        gfar_write(&regs->maccfg2, tempval);
-        gfar_write(&regs->ecntrl, ecntrl);
-
-        if (!priv->oldlink) {
-            new_state = 1;
-            priv->oldlink = 1;
-        }
-    } else if (priv->oldlink) {
-        new_state = 1;
-        priv->oldlink = 0;
-        priv->oldspeed = 0;
-        priv->oldduplex = -1;
-    }
-
-    if (new_state && netif_msg_link(priv))
-        phy_print_status(phydev);
+    if (unlikely(phydev->link != priv->oldlink ||
+                 phydev->duplex != priv->oldduplex ||
+                 phydev->speed != priv->oldspeed))
+        gfar_update_link_state(priv);
 }
 
 /* Update the hash table based on the current list of multicast
···
         netif_dbg(priv, tx_err, dev, "babbling TX error\n");
     }
     return IRQ_HANDLED;
+}
+
+static u32 gfar_get_flowctrl_cfg(struct gfar_private *priv)
+{
+    struct phy_device *phydev = priv->phydev;
+    u32 val = 0;
+
+    if (!phydev->duplex)
+        return val;
+
+    if (!priv->pause_aneg_en) {
+        if (priv->tx_pause_en)
+            val |= MACCFG1_TX_FLOW;
+        if (priv->rx_pause_en)
+            val |= MACCFG1_RX_FLOW;
+    } else {
+        u16 lcl_adv, rmt_adv;
+        u8 flowctrl;
+        /* get link partner capabilities */
+        rmt_adv = 0;
+        if (phydev->pause)
+            rmt_adv = LPA_PAUSE_CAP;
+        if (phydev->asym_pause)
+            rmt_adv |= LPA_PAUSE_ASYM;
+
+        lcl_adv = mii_advertise_flowctrl(phydev->advertising);
+
+        flowctrl = mii_resolve_flowctrl_fdx(lcl_adv, rmt_adv);
+        if (flowctrl & FLOW_CTRL_TX)
+            val |= MACCFG1_TX_FLOW;
+        if (flowctrl & FLOW_CTRL_RX)
+            val |= MACCFG1_RX_FLOW;
+    }
+
+    return val;
+}
+
+static noinline void gfar_update_link_state(struct gfar_private *priv)
+{
+    struct gfar __iomem *regs = priv->gfargrp[0].regs;
+    struct phy_device *phydev = priv->phydev;
+
+    if (unlikely(test_bit(GFAR_RESETTING, &priv->state)))
+        return;
+
+    if (phydev->link) {
+        u32 tempval1 = gfar_read(&regs->maccfg1);
+        u32 tempval = gfar_read(&regs->maccfg2);
+        u32 ecntrl = gfar_read(&regs->ecntrl);
+
+        if (phydev->duplex != priv->oldduplex) {
+            if (!(phydev->duplex))
+                tempval &= ~(MACCFG2_FULL_DUPLEX);
+            else
+                tempval |= MACCFG2_FULL_DUPLEX;
+
+            priv->oldduplex = phydev->duplex;
+        }
+
+        if (phydev->speed != priv->oldspeed) {
+            switch (phydev->speed) {
+            case 1000:
+                tempval =
+                    ((tempval & ~(MACCFG2_IF)) | MACCFG2_GMII);
+
+                ecntrl &= ~(ECNTRL_R100);
+                break;
+            case 100:
+            case 10:
+                tempval =
+                    ((tempval & ~(MACCFG2_IF)) | MACCFG2_MII);
+
+                /* Reduced mode distinguishes
+                 * between 10 and 100
+                 */
+                if (phydev->speed == SPEED_100)
+                    ecntrl |= ECNTRL_R100;
+                else
+                    ecntrl &= ~(ECNTRL_R100);
+                break;
+            default:
+                netif_warn(priv, link, priv->ndev,
+                           "Ack!  Speed (%d) is not 10/100/1000!\n",
+                           phydev->speed);
+                break;
+            }
+
+            priv->oldspeed = phydev->speed;
+        }
+
+        tempval1 &= ~(MACCFG1_TX_FLOW | MACCFG1_RX_FLOW);
+        tempval1 |= gfar_get_flowctrl_cfg(priv);
+
+        gfar_write(&regs->maccfg1, tempval1);
+        gfar_write(&regs->maccfg2, tempval);
+        gfar_write(&regs->ecntrl, ecntrl);
+
+        if (!priv->oldlink)
+            priv->oldlink = 1;
+
+    } else if (priv->oldlink) {
+        priv->oldlink = 0;
+        priv->oldspeed = 0;
+        priv->oldduplex = -1;
+    }
+
+    if (netif_msg_link(priv))
+        phy_print_status(phydev);
 }
 
 static struct of_device_id gfar_match[] =
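The gianfar flow-control path above resolves local and link-partner pause advertisements through `mii_resolve_flowctrl_fdx()` before mapping the result onto the MACCFG1 TX/RX flow bits. A standalone model of that resolution (plain C, mirroring the helper in `include/linux/mii.h`; the constant values are the standard MII definitions):

```c
#include <assert.h>
#include <stdint.h>

#define ADVERTISE_PAUSE_CAP  0x0400  /* symmetric pause advertised */
#define ADVERTISE_PAUSE_ASYM 0x0800  /* asymmetric pause advertised */
#define FLOW_CTRL_TX 0x01
#define FLOW_CTRL_RX 0x02

/* Model of mii_resolve_flowctrl_fdx(): resolve full-duplex flow
 * control from local and link-partner advertisements, per the
 * IEEE 802.3 pause-resolution table. */
static uint8_t resolve_flowctrl_fdx(uint16_t lcladv, uint16_t rmtadv)
{
    uint8_t cap = 0;

    if (lcladv & rmtadv & ADVERTISE_PAUSE_CAP) {
        cap = FLOW_CTRL_TX | FLOW_CTRL_RX;  /* both sides symmetric */
    } else if (lcladv & rmtadv & ADVERTISE_PAUSE_ASYM) {
        if (lcladv & ADVERTISE_PAUSE_CAP)
            cap = FLOW_CTRL_RX;             /* we honor partner's pause */
        else if (rmtadv & ADVERTISE_PAUSE_CAP)
            cap = FLOW_CTRL_TX;             /* partner honors our pause */
    }
    return cap;
}
```

In the driver, `FLOW_CTRL_TX`/`FLOW_CTRL_RX` in the result then turn into `MACCFG1_TX_FLOW`/`MACCFG1_RX_FLOW` bits respectively.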
+3
drivers/net/ethernet/freescale/gianfar_ethtool.c
···
     struct gfar __iomem *regs = priv->gfargrp[0].regs;
     u32 oldadv, newadv;
 
+    if (!phydev)
+        return -ENODEV;
+
     if (!(phydev->supported & SUPPORTED_Pause) ||
         (!(phydev->supported & SUPPORTED_Asym_Pause) &&
         (epause->rx_pause != epause->tx_pause)))
+42 -29
drivers/net/ethernet/intel/e1000e/ich8lan.c
···
 {
 	u16 phy_reg = 0;
 	u32 phy_id = 0;
-	s32 ret_val;
+	s32 ret_val = 0;
 	u16 retry_count;
 	u32 mac_reg = 0;
···
 	/* In case the PHY needs to be in mdio slow mode,
 	 * set slow mode and try to get the PHY id again.
 	 */
-	hw->phy.ops.release(hw);
-	ret_val = e1000_set_mdio_slow_mode_hv(hw);
-	if (!ret_val)
-		ret_val = e1000e_get_phy_id(hw);
-	hw->phy.ops.acquire(hw);
+	if (hw->mac.type < e1000_pch_lpt) {
+		hw->phy.ops.release(hw);
+		ret_val = e1000_set_mdio_slow_mode_hv(hw);
+		if (!ret_val)
+			ret_val = e1000e_get_phy_id(hw);
+		hw->phy.ops.acquire(hw);
+	}
 
 	if (ret_val)
 		return false;
···
 		}
 	}
 
+	if (hw->phy.type == e1000_phy_82579) {
+		ret_val = e1000_read_emi_reg_locked(hw, I82579_LPI_PLL_SHUT,
+						    &data);
+		if (ret_val)
+			goto release;
+
+		data &= ~I82579_LPI_100_PLL_SHUT;
+		ret_val = e1000_write_emi_reg_locked(hw, I82579_LPI_PLL_SHUT,
+						     data);
+	}
+
 	/* R/Clr IEEE MMD 3.1 bits 11:10 - Tx/Rx LPI Received */
 	ret_val = e1000_read_emi_reg_locked(hw, pcs_status, &data);
 	if (ret_val)
···
 		return ret_val;
 	}
 
-	/* When connected at 10Mbps half-duplex, 82579 parts are excessively
+	/* When connected at 10Mbps half-duplex, some parts are excessively
 	 * aggressive resulting in many collisions. To avoid this, increase
 	 * the IPG and reduce Rx latency in the PHY.
 	 */
-	if ((hw->mac.type == e1000_pch2lan) && link) {
+	if (((hw->mac.type == e1000_pch2lan) ||
+	     (hw->mac.type == e1000_pch_lpt)) && link) {
 		u32 reg;
 		reg = er32(STATUS);
 		if (!(reg & (E1000_STATUS_FD | E1000_STATUS_SPEED_MASK))) {
+			u16 emi_addr;
+
 			reg = er32(TIPG);
 			reg &= ~E1000_TIPG_IPGT_MASK;
 			reg |= 0xFF;
···
 			if (ret_val)
 				return ret_val;
 
-			ret_val =
-			    e1000_write_emi_reg_locked(hw, I82579_RX_CONFIG, 0);
+			if (hw->mac.type == e1000_pch2lan)
+				emi_addr = I82579_RX_CONFIG;
+			else
+				emi_addr = I217_RX_CONFIG;
+
+			ret_val = e1000_write_emi_reg_locked(hw, emi_addr, 0);
 
 			hw->phy.ops.release(hw);
···
  * e1000_k1_gig_workaround_lv - K1 Si workaround
  * @hw: pointer to the HW structure
  *
- * Workaround to set the K1 beacon duration for 82579 parts
+ * Workaround to set the K1 beacon duration for 82579 parts in 10Mbps
+ * Disable K1 in 1000Mbps and 100Mbps
  **/
 static s32 e1000_k1_workaround_lv(struct e1000_hw *hw)
 {
 	s32 ret_val = 0;
 	u16 status_reg = 0;
-	u32 mac_reg;
-	u16 phy_reg;
 
 	if (hw->mac.type != e1000_pch2lan)
 		return 0;
 
-	/* Set K1 beacon duration based on 1Gbps speed or otherwise */
+	/* Set K1 beacon duration based on 10Mbs speed */
 	ret_val = e1e_rphy(hw, HV_M_STATUS, &status_reg);
 	if (ret_val)
 		return ret_val;
 
 	if ((status_reg & (HV_M_STATUS_LINK_UP | HV_M_STATUS_AUTONEG_COMPLETE))
 	    == (HV_M_STATUS_LINK_UP | HV_M_STATUS_AUTONEG_COMPLETE)) {
-		mac_reg = er32(FEXTNVM4);
-		mac_reg &= ~E1000_FEXTNVM4_BEACON_DURATION_MASK;
-
-		ret_val = e1e_rphy(hw, I82579_LPI_CTRL, &phy_reg);
-		if (ret_val)
-			return ret_val;
-
-		if (status_reg & HV_M_STATUS_SPEED_1000) {
+		if (status_reg &
+		    (HV_M_STATUS_SPEED_1000 | HV_M_STATUS_SPEED_100)) {
 			u16 pm_phy_reg;
 
-			mac_reg |= E1000_FEXTNVM4_BEACON_DURATION_8USEC;
-			phy_reg &= ~I82579_LPI_CTRL_FORCE_PLL_LOCK_COUNT;
-			/* LV 1G Packet drop issue wa */
+			/* LV 1G/100 Packet drop issue wa */
 			ret_val = e1e_rphy(hw, HV_PM_CTRL, &pm_phy_reg);
 			if (ret_val)
 				return ret_val;
-			pm_phy_reg &= ~HV_PM_CTRL_PLL_STOP_IN_K1_GIGA;
+			pm_phy_reg &= ~HV_PM_CTRL_K1_ENABLE;
 			ret_val = e1e_wphy(hw, HV_PM_CTRL, pm_phy_reg);
 			if (ret_val)
 				return ret_val;
 		} else {
+			u32 mac_reg;
+
+			mac_reg = er32(FEXTNVM4);
+			mac_reg &= ~E1000_FEXTNVM4_BEACON_DURATION_MASK;
 			mac_reg |= E1000_FEXTNVM4_BEACON_DURATION_16USEC;
-			phy_reg |= I82579_LPI_CTRL_FORCE_PLL_LOCK_COUNT;
+			ew32(FEXTNVM4, mac_reg);
 		}
-		ew32(FEXTNVM4, mac_reg);
-		ret_val = e1e_wphy(hw, I82579_LPI_CTRL, phy_reg);
 	}
 
 	return ret_val;
+3
drivers/net/ethernet/intel/e1000e/ich8lan.h
···
 #define I82577_MSE_THRESHOLD	0x0887	/* 82577 Mean Square Error Threshold */
 #define I82579_MSE_LINK_DOWN	0x2411	/* MSE count before dropping link */
 #define I82579_RX_CONFIG	0x3412	/* Receive configuration */
+#define I82579_LPI_PLL_SHUT	0x4412	/* LPI PLL Shut Enable */
 #define I82579_EEE_PCS_STATUS	0x182E	/* IEEE MMD Register 3.1 >> 8 */
 #define I82579_EEE_CAPABILITY	0x0410	/* IEEE MMD Register 3.20 */
 #define I82579_EEE_ADVERTISEMENT	0x040E	/* IEEE MMD Register 7.60 */
 #define I82579_EEE_LP_ABILITY	0x040F	/* IEEE MMD Register 7.61 */
 #define I82579_EEE_100_SUPPORTED	(1 << 1)	/* 100BaseTx EEE */
 #define I82579_EEE_1000_SUPPORTED	(1 << 2)	/* 1000BaseTx EEE */
+#define I82579_LPI_100_PLL_SHUT	(1 << 2)	/* 100M LPI PLL Shut Enabled */
 #define I217_EEE_PCS_STATUS	0x9401	/* IEEE MMD Register 3.1 */
 #define I217_EEE_CAPABILITY	0x8000	/* IEEE MMD Register 3.20 */
 #define I217_EEE_ADVERTISEMENT	0x8001	/* IEEE MMD Register 7.60 */
 #define I217_EEE_LP_ABILITY	0x8002	/* IEEE MMD Register 7.61 */
+#define I217_RX_CONFIG		0xB20C	/* Receive configuration */
 
 #define E1000_EEE_RX_LPI_RCVD	0x0400	/* Tx LP idle received */
 #define E1000_EEE_TX_LPI_RCVD	0x0800	/* Rx LP idle received */
+3 -3
drivers/net/ethernet/intel/e1000e/netdev.c
···
 		dev_kfree_skb_any(adapter->tx_hwtstamp_skb);
 		adapter->tx_hwtstamp_skb = NULL;
 		adapter->tx_hwtstamp_timeouts++;
-		e_warn("clearing Tx timestamp hang");
+		e_warn("clearing Tx timestamp hang\n");
 	} else {
 		/* reschedule to check later */
 		schedule_work(&adapter->tx_hwtstamp_work);
···
 static int e1000_change_mtu(struct net_device *netdev, int new_mtu)
 {
 	struct e1000_adapter *adapter = netdev_priv(netdev);
-	int max_frame = new_mtu + ETH_HLEN + ETH_FCS_LEN;
+	int max_frame = new_mtu + VLAN_HLEN + ETH_HLEN + ETH_FCS_LEN;
 
 	/* Jumbo frame support */
 	if ((max_frame > ETH_FRAME_LEN + ETH_FCS_LEN) &&
···
 	return 0;
 }
 
+#ifdef CONFIG_PM_SLEEP
 static int e1000e_pm_thaw(struct device *dev)
 {
 	struct net_device *netdev = pci_get_drvdata(to_pci_dev(dev));
···
 	return 0;
 }
 
-#ifdef CONFIG_PM_SLEEP
 static int e1000e_pm_suspend(struct device *dev)
 {
 	struct pci_dev *pdev = to_pci_dev(dev);
+1
drivers/net/ethernet/intel/e1000e/phy.h
···
 #define HV_M_STATUS_AUTONEG_COMPLETE	0x1000
 #define HV_M_STATUS_SPEED_MASK		0x0300
 #define HV_M_STATUS_SPEED_1000		0x0200
+#define HV_M_STATUS_SPEED_100		0x0100
 #define HV_M_STATUS_LINK_UP		0x0040
 
 #define IGP01E1000_PHY_PCS_INIT_REG	0x00B4
+10 -4
drivers/net/ethernet/intel/i40e/i40e_main.c
···
 		u32 prttsyn_stat = rd32(hw, I40E_PRTTSYN_STAT_0);
 
 		if (prttsyn_stat & I40E_PRTTSYN_STAT_0_TXTIME_MASK) {
-			ena_mask &= ~I40E_PFINT_ICR0_ENA_TIMESYNC_MASK;
+			icr0 &= ~I40E_PFINT_ICR0_ENA_TIMESYNC_MASK;
 			i40e_ptp_tx_hwtstamp(pf);
-			prttsyn_stat &= ~I40E_PRTTSYN_STAT_0_TXTIME_MASK;
 		}
-
-		wr32(hw, I40E_PRTTSYN_STAT_0, prttsyn_stat);
 	}
 
 	/* If a critical error is pending we have no choice but to reset the
···
 	err = i40e_vsi_open(vsi);
 	if (err)
 		return err;
+
+	/* configure global TSO hardware offload settings */
+	wr32(&pf->hw, I40E_GLLAN_TSOMSK_F, be32_to_cpu(TCP_FLAG_PSH |
+						       TCP_FLAG_FIN) >> 16);
+	wr32(&pf->hw, I40E_GLLAN_TSOMSK_M, be32_to_cpu(TCP_FLAG_PSH |
+						       TCP_FLAG_FIN |
+						       TCP_FLAG_CWR) >> 16);
+	wr32(&pf->hw, I40E_GLLAN_TSOMSK_L, be32_to_cpu(TCP_FLAG_CWR) >> 16);
 
 #ifdef CONFIG_I40E_VXLAN
 	vxlan_get_rx_port(netdev);
···
 		   NETIF_F_HW_VLAN_CTAG_FILTER |
 		   NETIF_F_IPV6_CSUM |
 		   NETIF_F_TSO |
+		   NETIF_F_TSO_ECN |
 		   NETIF_F_TSO6 |
 		   NETIF_F_RXCSUM |
 		   NETIF_F_NTUPLE |
+1 -1
drivers/net/ethernet/intel/i40e/i40e_nvm.c
···
 		udelay(5);
 	}
 	if (ret_code == I40E_ERR_TIMEOUT)
-		hw_dbg(hw, "Done bit in GLNVM_SRCTL not set");
+		hw_dbg(hw, "Done bit in GLNVM_SRCTL not set\n");
 	return ret_code;
 }
+2 -2
drivers/net/ethernet/intel/i40e/i40e_ptp.c
···
 		dev_kfree_skb_any(pf->ptp_tx_skb);
 		pf->ptp_tx_skb = NULL;
 		pf->tx_hwtstamp_timeouts++;
-		dev_warn(&pf->pdev->dev, "clearing Tx timestamp hang");
+		dev_warn(&pf->pdev->dev, "clearing Tx timestamp hang\n");
 		return;
 	}
···
 		pf->last_rx_ptp_check = jiffies;
 		pf->rx_hwtstamp_cleared++;
 		dev_warn(&vsi->back->pdev->dev,
-			 "%s: clearing Rx timestamp hang",
+			 "%s: clearing Rx timestamp hang\n",
 			 __func__);
 	}
 }
+11 -11
drivers/net/ethernet/intel/i40e/i40e_txrx.c
···
 		}
 		break;
 	default:
-		dev_info(&pf->pdev->dev, "Could not specify spec type %d",
+		dev_info(&pf->pdev->dev, "Could not specify spec type %d\n",
 			 input->flow_type);
 		ret = -EINVAL;
 	}
···
 			pf->flags |= I40E_FLAG_FDIR_REQUIRES_REINIT;
 		}
 	} else {
-		dev_info(&pdev->dev, "FD filter programming error");
+		dev_info(&pdev->dev, "FD filter programming error\n");
 	}
 } else if (error ==
 	   (0x1 << I40E_RX_PROG_STATUS_DESC_NO_FD_ENTRY_SHIFT)) {
···
 			 I40E_TX_FLAGS_VLAN_PRIO_SHIFT;
 	if (tx_flags & I40E_TX_FLAGS_SW_VLAN) {
 		struct vlan_ethhdr *vhdr;
-		if (skb_header_cloned(skb) &&
-		    pskb_expand_head(skb, 0, 0, GFP_ATOMIC))
-			return -ENOMEM;
+		int rc;
+
+		rc = skb_cow_head(skb, 0);
+		if (rc < 0)
+			return rc;
 		vhdr = (struct vlan_ethhdr *)skb->data;
 		vhdr->h_vlan_TCI = htons(tx_flags >>
 					 I40E_TX_FLAGS_VLAN_SHIFT);
···
 			    u64 *cd_type_cmd_tso_mss, u32 *cd_tunneling)
 {
 	u32 cd_cmd, cd_tso_len, cd_mss;
+	struct ipv6hdr *ipv6h;
 	struct tcphdr *tcph;
 	struct iphdr *iph;
 	u32 l4len;
 	int err;
-	struct ipv6hdr *ipv6h;
 
 	if (!skb_is_gso(skb))
 		return 0;
 
-	if (skb_header_cloned(skb)) {
-		err = pskb_expand_head(skb, 0, 0, GFP_ATOMIC);
-		if (err)
-			return err;
-	}
+	err = skb_cow_head(skb, 0);
+	if (err < 0)
+		return err;
 
 	if (protocol == htons(ETH_P_IP)) {
 		iph = skb->encapsulation ? inner_ip_hdr(skb) : ip_hdr(skb);
+1 -1
drivers/net/ethernet/intel/igb/e1000_i210.c
···
 		word_address = INVM_DWORD_TO_WORD_ADDRESS(invm_dword);
 		if (word_address == address) {
 			*data = INVM_DWORD_TO_WORD_DATA(invm_dword);
-			hw_dbg("Read INVM Word 0x%02x = %x",
+			hw_dbg("Read INVM Word 0x%02x = %x\n",
 			       address, *data);
 			status = E1000_SUCCESS;
 			break;
+6 -7
drivers/net/ethernet/intel/igb/e1000_mac.c
···
 		 */
 		if (hw->fc.requested_mode == e1000_fc_full) {
 			hw->fc.current_mode = e1000_fc_full;
-			hw_dbg("Flow Control = FULL.\r\n");
+			hw_dbg("Flow Control = FULL.\n");
 		} else {
 			hw->fc.current_mode = e1000_fc_rx_pause;
-			hw_dbg("Flow Control = "
-			       "RX PAUSE frames only.\r\n");
+			hw_dbg("Flow Control = RX PAUSE frames only.\n");
 		}
 	}
 	/* For receiving PAUSE frames ONLY.
···
 		 (mii_nway_lp_ability_reg & NWAY_LPAR_PAUSE) &&
 		 (mii_nway_lp_ability_reg & NWAY_LPAR_ASM_DIR)) {
 		hw->fc.current_mode = e1000_fc_tx_pause;
-		hw_dbg("Flow Control = TX PAUSE frames only.\r\n");
+		hw_dbg("Flow Control = TX PAUSE frames only.\n");
 	}
 	/* For transmitting PAUSE frames ONLY.
 	 *
···
 		 !(mii_nway_lp_ability_reg & NWAY_LPAR_PAUSE) &&
 		 (mii_nway_lp_ability_reg & NWAY_LPAR_ASM_DIR)) {
 		hw->fc.current_mode = e1000_fc_rx_pause;
-		hw_dbg("Flow Control = RX PAUSE frames only.\r\n");
+		hw_dbg("Flow Control = RX PAUSE frames only.\n");
 	}
 	/* Per the IEEE spec, at this point flow control should be
 	 * disabled. However, we want to consider that we could
···
 		 (hw->fc.requested_mode == e1000_fc_tx_pause) ||
 		 (hw->fc.strict_ieee)) {
 		hw->fc.current_mode = e1000_fc_none;
-		hw_dbg("Flow Control = NONE.\r\n");
+		hw_dbg("Flow Control = NONE.\n");
 	} else {
 		hw->fc.current_mode = e1000_fc_rx_pause;
-		hw_dbg("Flow Control = RX PAUSE frames only.\r\n");
+		hw_dbg("Flow Control = RX PAUSE frames only.\n");
 	}
 
 	/* Now we need to do one last check... If we auto-
+3 -1
drivers/net/ethernet/intel/igb/igb_main.c
···
 	rcu_read_lock();
 	for (i = 0; i < adapter->num_rx_queues; i++) {
-		u32 rqdpc = rd32(E1000_RQDPC(i));
 		struct igb_ring *ring = adapter->rx_ring[i];
+		u32 rqdpc = rd32(E1000_RQDPC(i));
+		if (hw->mac.type >= e1000_i210)
+			wr32(E1000_RQDPC(i), 0);
 
 		if (rqdpc) {
 			ring->rx_stats.drops += rqdpc;
+2 -2
drivers/net/ethernet/intel/igb/igb_ptp.c
···
 		adapter->ptp_tx_skb = NULL;
 		clear_bit_unlock(__IGB_PTP_TX_IN_PROGRESS, &adapter->state);
 		adapter->tx_hwtstamp_timeouts++;
-		dev_warn(&adapter->pdev->dev, "clearing Tx timestamp hang");
+		dev_warn(&adapter->pdev->dev, "clearing Tx timestamp hang\n");
 		return;
 	}
···
 		rd32(E1000_RXSTMPH);
 		adapter->last_rx_ptp_check = jiffies;
 		adapter->rx_hwtstamp_cleared++;
-		dev_warn(&adapter->pdev->dev, "clearing Rx timestamp hang");
+		dev_warn(&adapter->pdev->dev, "clearing Rx timestamp hang\n");
 	}
 }
+2 -19
drivers/net/ethernet/intel/ixgbe/ixgbe.h
···
 		struct ixgbe_tx_buffer *tx_buffer_info;
 		struct ixgbe_rx_buffer *rx_buffer_info;
 	};
-	unsigned long last_rx_timestamp;
 	unsigned long state;
 	u8 __iomem *tail;
 	dma_addr_t dma;			/* phys. address of descriptor ring */
···
 	unsigned long ptp_tx_start;
 	unsigned long last_overflow_check;
 	unsigned long last_rx_ptp_check;
+	unsigned long last_rx_timestamp;
 	spinlock_t tmreg_lock;
 	struct cyclecounter cc;
 	struct timecounter tc;
···
 void ixgbe_ptp_stop(struct ixgbe_adapter *adapter);
 void ixgbe_ptp_overflow_check(struct ixgbe_adapter *adapter);
 void ixgbe_ptp_rx_hang(struct ixgbe_adapter *adapter);
-void __ixgbe_ptp_rx_hwtstamp(struct ixgbe_q_vector *q_vector,
-			     struct sk_buff *skb);
-static inline void ixgbe_ptp_rx_hwtstamp(struct ixgbe_ring *rx_ring,
-					 union ixgbe_adv_rx_desc *rx_desc,
-					 struct sk_buff *skb)
-{
-	if (unlikely(!ixgbe_test_staterr(rx_desc, IXGBE_RXDADV_STAT_TS)))
-		return;
-
-	__ixgbe_ptp_rx_hwtstamp(rx_ring->q_vector, skb);
-
-	/*
-	 * Update the last_rx_timestamp timer in order to enable watchdog check
-	 * for error case of latched timestamp on a dropped packet.
-	 */
-	rx_ring->last_rx_timestamp = jiffies;
-}
-
+void ixgbe_ptp_rx_hwtstamp(struct ixgbe_adapter *adapter, struct sk_buff *skb);
 int ixgbe_ptp_set_ts_config(struct ixgbe_adapter *adapter, struct ifreq *ifr);
 int ixgbe_ptp_get_ts_config(struct ixgbe_adapter *adapter, struct ifreq *ifr);
 void ixgbe_ptp_start_cyclecounter(struct ixgbe_adapter *adapter);
+1 -1
drivers/net/ethernet/intel/ixgbe/ixgbe_common.c
···
 	 */
 	hw->eeprom.word_page_size = IXGBE_EEPROM_PAGE_SIZE_MAX - data[0];
 
-	hw_dbg(hw, "Detected EEPROM page size = %d words.",
+	hw_dbg(hw, "Detected EEPROM page size = %d words.\n",
 	       hw->eeprom.word_page_size);
 out:
 	return status;
+2 -1
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
···
 	ixgbe_rx_checksum(rx_ring, rx_desc, skb);
 
-	ixgbe_ptp_rx_hwtstamp(rx_ring, rx_desc, skb);
+	if (unlikely(ixgbe_test_staterr(rx_desc, IXGBE_RXDADV_STAT_TS)))
+		ixgbe_ptp_rx_hwtstamp(rx_ring->q_vector->adapter, skb);
 
 	if ((dev->features & NETIF_F_HW_VLAN_CTAG_RX) &&
 	    ixgbe_test_staterr(rx_desc, IXGBE_RXD_STAT_VP)) {
+3 -3
drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c
···
 	if (time_out == max_time_out) {
 		status = IXGBE_ERR_LINK_SETUP;
-		hw_dbg(hw, "ixgbe_setup_phy_link_generic: time out");
+		hw_dbg(hw, "ixgbe_setup_phy_link_generic: time out\n");
 	}
 
 	return status;
···
 	if (time_out == max_time_out) {
 		status = IXGBE_ERR_LINK_SETUP;
-		hw_dbg(hw, "ixgbe_setup_phy_link_tnx: time out");
+		hw_dbg(hw, "ixgbe_setup_phy_link_tnx: time out\n");
 	}
 
 	return status;
···
 		status = 0;
 	} else {
 		if (hw->allow_unsupported_sfp) {
-			e_warn(drv, "WARNING: Intel (R) Network Connections are quality tested using Intel (R) Ethernet Optics. Using untested modules is not supported and may cause unstable operation or damage to the module or the adapter. Intel Corporation is not responsible for any harm caused by using untested modules.");
+			e_warn(drv, "WARNING: Intel (R) Network Connections are quality tested using Intel (R) Ethernet Optics. Using untested modules is not supported and may cause unstable operation or damage to the module or the adapter. Intel Corporation is not responsible for any harm caused by using untested modules.\n");
 			status = 0;
 		} else {
 			hw_dbg(hw,
+13 -27
drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c
···
 void ixgbe_ptp_rx_hang(struct ixgbe_adapter *adapter)
 {
 	struct ixgbe_hw *hw = &adapter->hw;
-	struct ixgbe_ring *rx_ring;
 	u32 tsyncrxctl = IXGBE_READ_REG(hw, IXGBE_TSYNCRXCTL);
 	unsigned long rx_event;
-	int n;
 
 	/* if we don't have a valid timestamp in the registers, just update the
 	 * timeout counter and exit
···
 	/* determine the most recent watchdog or rx_timestamp event */
 	rx_event = adapter->last_rx_ptp_check;
-	for (n = 0; n < adapter->num_rx_queues; n++) {
-		rx_ring = adapter->rx_ring[n];
-		if (time_after(rx_ring->last_rx_timestamp, rx_event))
-			rx_event = rx_ring->last_rx_timestamp;
-	}
+	if (time_after(adapter->last_rx_timestamp, rx_event))
+		rx_event = adapter->last_rx_timestamp;
 
 	/* only need to read the high RXSTMP register to clear the lock */
 	if (time_is_before_jiffies(rx_event + 5*HZ)) {
 		IXGBE_READ_REG(hw, IXGBE_RXSTMPH);
 		adapter->last_rx_ptp_check = jiffies;
 
-		e_warn(drv, "clearing RX Timestamp hang");
+		e_warn(drv, "clearing RX Timestamp hang\n");
 	}
 }
···
 		dev_kfree_skb_any(adapter->ptp_tx_skb);
 		adapter->ptp_tx_skb = NULL;
 		clear_bit_unlock(__IXGBE_PTP_TX_IN_PROGRESS, &adapter->state);
-		e_warn(drv, "clearing Tx Timestamp hang");
+		e_warn(drv, "clearing Tx Timestamp hang\n");
 		return;
 	}
···
 }
 
 /**
- * __ixgbe_ptp_rx_hwtstamp - utility function which checks for RX time stamp
- * @q_vector: structure containing interrupt and ring information
+ * ixgbe_ptp_rx_hwtstamp - utility function which checks for RX time stamp
+ * @adapter: pointer to adapter struct
  * @skb: particular skb to send timestamp with
  *
  * if the timestamp is valid, we convert it into the timecounter ns
  * value, then store that result into the shhwtstamps structure which
  * is passed up the network stack
  */
-void __ixgbe_ptp_rx_hwtstamp(struct ixgbe_q_vector *q_vector,
-			     struct sk_buff *skb)
+void ixgbe_ptp_rx_hwtstamp(struct ixgbe_adapter *adapter, struct sk_buff *skb)
 {
-	struct ixgbe_adapter *adapter;
-	struct ixgbe_hw *hw;
+	struct ixgbe_hw *hw = &adapter->hw;
 	struct skb_shared_hwtstamps *shhwtstamps;
 	u64 regval = 0, ns;
 	u32 tsyncrxctl;
 	unsigned long flags;
 
-	/* we cannot process timestamps on a ring without a q_vector */
-	if (!q_vector || !q_vector->adapter)
-		return;
-
-	adapter = q_vector->adapter;
-	hw = &adapter->hw;
-
-	/*
-	 * Read the tsyncrxctl register afterwards in order to prevent taking an
-	 * I/O hit on every packet.
-	 */
 	tsyncrxctl = IXGBE_READ_REG(hw, IXGBE_TSYNCRXCTL);
 	if (!(tsyncrxctl & IXGBE_TSYNCRXCTL_VALID))
 		return;
···
 	regval |= (u64)IXGBE_READ_REG(hw, IXGBE_RXSTMPL);
 	regval |= (u64)IXGBE_READ_REG(hw, IXGBE_RXSTMPH) << 32;
 
-
 	spin_lock_irqsave(&adapter->tmreg_lock, flags);
 	ns = timecounter_cyc2time(&adapter->tc, regval);
 	spin_unlock_irqrestore(&adapter->tmreg_lock, flags);
 
 	shhwtstamps = skb_hwtstamps(skb);
 	shhwtstamps->hwtstamp = ns_to_ktime(ns);
+
+	/* Update the last_rx_timestamp timer in order to enable watchdog check
+	 * for error case of latched timestamp on a dropped packet.
+	 */
+	adapter->last_rx_timestamp = jiffies;
 }
 
 int ixgbe_ptp_get_ts_config(struct ixgbe_adapter *adapter, struct ifreq *ifr)
+4 -1
drivers/net/ethernet/marvell/mvmdio.c
···
 	clk_prepare_enable(dev->clk);
 
 	dev->err_interrupt = platform_get_irq(pdev, 0);
-	if (dev->err_interrupt != -ENXIO) {
+	if (dev->err_interrupt > 0) {
 		ret = devm_request_irq(&pdev->dev, dev->err_interrupt,
 				       orion_mdio_err_irq,
 				       IRQF_SHARED, pdev->name, dev);
···
 
 		writel(MVMDIO_ERR_INT_SMI_DONE,
 		       dev->regs + MVMDIO_ERR_INT_MASK);
+
+	} else if (dev->err_interrupt == -EPROBE_DEFER) {
+		return -EPROBE_DEFER;
 	}
 
 	mutex_init(&dev->lock);
+4 -3
drivers/net/ethernet/mellanox/mlx4/main.c
···
 			has_eth_port = true;
 	}
 
-	if (has_ib_port || (dev->caps.flags & MLX4_DEV_CAP_FLAG_IBOE))
-		request_module_nowait(IB_DRV_NAME);
 	if (has_eth_port)
 		request_module_nowait(EN_DRV_NAME);
+	if (has_ib_port || (dev->caps.flags & MLX4_DEV_CAP_FLAG_IBOE))
+		request_module_nowait(IB_DRV_NAME);
 }
 
 /*
···
 	 * No return code for this call, just warn the user in case of PCI
 	 * express device capabilities are under-satisfied by the bus.
 	 */
-	mlx4_check_pcie_caps(dev);
+	if (!mlx4_is_slave(dev))
+		mlx4_check_pcie_caps(dev);
 
 	/* In master functions, the communication channel must be initialized
 	 * after obtaining its address from fw */
+20 -15
drivers/net/ethernet/mellanox/mlx4/port.c
···
 	}
 
 	if (found_ix >= 0) {
+		/* Calculate a slave_gid which is the slave number in the gid
+		 * table and not a globally unique slave number.
+		 */
 		if (found_ix < MLX4_ROCE_PF_GIDS)
 			slave_gid = 0;
 		else if (found_ix < MLX4_ROCE_PF_GIDS + (vf_gids % num_vfs) *
···
 			    ((vf_gids % num_vfs) * ((vf_gids / num_vfs + 1)))) /
 			     (vf_gids / num_vfs)) + vf_gids % num_vfs + 1;
 
+		/* Calculate the globally unique slave id */
 		if (slave_gid) {
 			struct mlx4_active_ports exclusive_ports;
 			struct mlx4_active_ports actv_ports;
 			struct mlx4_slaves_pport slaves_pport_actv;
 			unsigned max_port_p_one;
-			int num_slaves_before = 1;
+			int num_vfs_before = 0;
+			int candidate_slave_gid;
 
+			/* Calculate how many VFs are on the previous port, if exists */
 			for (i = 1; i < port; i++) {
 				bitmap_zero(exclusive_ports.ports, dev->caps.num_ports);
-				set_bit(i, exclusive_ports.ports);
+				set_bit(i - 1, exclusive_ports.ports);
 				slaves_pport_actv =
 					mlx4_phys_to_slaves_pport_actv(
 							dev, &exclusive_ports);
-				num_slaves_before += bitmap_weight(
+				num_vfs_before += bitmap_weight(
 						slaves_pport_actv.slaves,
 						dev->num_vfs + 1);
 			}
 
-			if (slave_gid < num_slaves_before) {
-				bitmap_zero(exclusive_ports.ports, dev->caps.num_ports);
-				set_bit(port - 1, exclusive_ports.ports);
-				slaves_pport_actv =
-					mlx4_phys_to_slaves_pport_actv(
-							dev, &exclusive_ports);
-				slave_gid += bitmap_weight(
-						slaves_pport_actv.slaves,
-						dev->num_vfs + 1) -
-						num_slaves_before;
-			}
-			actv_ports = mlx4_get_active_ports(dev, slave_gid);
+			/* candidate_slave_gid isn't necessarily the correct slave, but
+			 * it has the same number of ports and is assigned to the same
+			 * ports as the real slave we're looking for. On dual port VF,
+			 * slave_gid = [single port VFs on port <port>] +
+			 * [offset of the current slave from the first dual port VF] +
+			 * 1 (for the PF).
+			 */
+			candidate_slave_gid = slave_gid + num_vfs_before;
+
+			actv_ports = mlx4_get_active_ports(dev, candidate_slave_gid);
 			max_port_p_one = find_first_bit(
 				actv_ports.ports, dev->caps.num_ports) +
 				bitmap_weight(actv_ports.ports,
 					      dev->caps.num_ports) + 1;
 
+			/* Calculate the real slave number */
 			for (i = 1; i < max_port_p_one; i++) {
 				if (i == port)
 					continue;
+23
drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
···
 	}
 }
 
+static int mlx4_adjust_port(struct mlx4_dev *dev, int slave,
+			    u8 *gid, enum mlx4_protocol prot)
+{
+	int real_port;
+
+	if (prot != MLX4_PROT_ETH)
+		return 0;
+
+	if (dev->caps.steering_mode == MLX4_STEERING_MODE_B0 ||
+	    dev->caps.steering_mode == MLX4_STEERING_MODE_DEVICE_MANAGED) {
+		real_port = mlx4_slave_convert_port(dev, slave, gid[5]);
+		if (real_port < 0)
+			return -EINVAL;
+		gid[5] = real_port;
+	}
+
+	return 0;
+}
+
 int mlx4_QP_ATTACH_wrapper(struct mlx4_dev *dev, int slave,
 			   struct mlx4_vhcr *vhcr,
 			   struct mlx4_cmd_mailbox *inbox,
···
 		if (err)
 			goto ex_detach;
 	} else {
+		err = mlx4_adjust_port(dev, slave, gid, prot);
+		if (err)
+			goto ex_put;
+
 		err = rem_mcg_res(dev, slave, rqp, gid, prot, type, &reg_id);
 		if (err)
 			goto ex_put;
+9
drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c
···
 	qlcnic_fw_cmd_set_drv_version(adapter, fw_cmd);
 }
 
+/* Reset firmware API lock */
+static void qlcnic_reset_api_lock(struct qlcnic_adapter *adapter)
+{
+	qlcnic_api_lock(adapter);
+	qlcnic_api_unlock(adapter);
+}
+
+
 static int
 qlcnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 {
···
 	if (qlcnic_82xx_check(adapter)) {
 		qlcnic_check_vf(adapter, ent);
 		adapter->portnum = adapter->ahw->pci_func;
+		qlcnic_reset_api_lock(adapter);
 		err = qlcnic_start_firmware(adapter);
 		if (err) {
 			dev_err(&pdev->dev, "Loading fw failed.Please Reboot\n"
+8 -1
drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_common.c
···
 	rsp = qlcnic_sriov_alloc_bc_trans(&trans);
 	if (rsp)
-		return rsp;
+		goto free_cmd;
 
 	rsp = qlcnic_sriov_prepare_bc_hdr(trans, cmd, seq, QLC_BC_COMMAND);
 	if (rsp)
···
 
 cleanup_transaction:
 	qlcnic_sriov_cleanup_transaction(trans);
+
+free_cmd:
+	if (cmd->type == QLC_83XX_MBX_CMD_NO_WAIT) {
+		qlcnic_free_mbx_args(cmd);
+		kfree(cmd);
+	}
+
 	return rsp;
 }
+2
drivers/net/ethernet/samsung/sxgbe/sxgbe_common.h
···
 	/* Enable disable checksum offload operations */
 	void (*enable_rx_csum)(void __iomem *ioaddr);
 	void (*disable_rx_csum)(void __iomem *ioaddr);
+	void (*enable_rxqueue)(void __iomem *ioaddr, int queue_num);
+	void (*disable_rxqueue)(void __iomem *ioaddr, int queue_num);
 };
 
 const struct sxgbe_core_ops *sxgbe_get_core_ops(void);
+22
drivers/net/ethernet/samsung/sxgbe/sxgbe_core.c
···
 	writel(tx_cfg, ioaddr + SXGBE_CORE_TX_CONFIG_REG);
 }
 
+static void sxgbe_core_enable_rxqueue(void __iomem *ioaddr, int queue_num)
+{
+	u32 reg_val;
+
+	reg_val = readl(ioaddr + SXGBE_CORE_RX_CTL0_REG);
+	reg_val &= ~(SXGBE_CORE_RXQ_ENABLE_MASK << queue_num);
+	reg_val |= SXGBE_CORE_RXQ_ENABLE;
+	writel(reg_val, ioaddr + SXGBE_CORE_RX_CTL0_REG);
+}
+
+static void sxgbe_core_disable_rxqueue(void __iomem *ioaddr, int queue_num)
+{
+	u32 reg_val;
+
+	reg_val = readl(ioaddr + SXGBE_CORE_RX_CTL0_REG);
+	reg_val &= ~(SXGBE_CORE_RXQ_ENABLE_MASK << queue_num);
+	reg_val |= SXGBE_CORE_RXQ_DISABLE;
+	writel(reg_val, ioaddr + SXGBE_CORE_RX_CTL0_REG);
+}
+
 static void sxgbe_set_eee_mode(void __iomem *ioaddr)
 {
 	u32 ctrl;
···
 	.set_eee_pls = sxgbe_set_eee_pls,
 	.enable_rx_csum = sxgbe_enable_rx_csum,
 	.disable_rx_csum = sxgbe_disable_rx_csum,
+	.enable_rxqueue = sxgbe_core_enable_rxqueue,
+	.disable_rxqueue = sxgbe_core_disable_rxqueue,
 };
 
 const struct sxgbe_core_ops *sxgbe_get_core_ops(void)
+9 -2
drivers/net/ethernet/samsung/sxgbe/sxgbe_desc.c
···
 	p->tdes23.tx_rd_des23.first_desc = is_fd;
 	p->tdes23.tx_rd_des23.buf1_size = buf1_len;
 
-	p->tdes23.tx_rd_des23.tx_pkt_len.cksum_pktlen.total_pkt_len = pkt_len;
+	p->tdes23.tx_rd_des23.tx_pkt_len.pkt_len.total_pkt_len = pkt_len;
 
 	if (cksum)
-		p->tdes23.tx_rd_des23.tx_pkt_len.cksum_pktlen.cksum_ctl = cic_full;
+		p->tdes23.tx_rd_des23.cksum_ctl = cic_full;
 }
 
 /* Set VLAN control information */
···
 static void sxgbe_set_rx_owner(struct sxgbe_rx_norm_desc *p)
 {
 	p->rdes23.rx_rd_des23.own_bit = 1;
+}
+
+/* Set Interrupt on completion bit */
+static void sxgbe_set_rx_int_on_com(struct sxgbe_rx_norm_desc *p)
+{
+	p->rdes23.rx_rd_des23.int_on_com = 1;
 }
 
 /* Get the receive frame size */
···
 	.init_rx_desc = sxgbe_init_rx_desc,
 	.get_rx_owner = sxgbe_get_rx_owner,
 	.set_rx_owner = sxgbe_set_rx_owner,
+	.set_rx_int_on_com = sxgbe_set_rx_int_on_com,
 	.get_rx_frame_len = sxgbe_get_rx_frame_len,
 	.get_rx_fd_status = sxgbe_get_rx_fd_status,
 	.get_rx_ld_status = sxgbe_get_rx_ld_status,
+20 -22
drivers/net/ethernet/samsung/sxgbe/sxgbe_desc.h
···
 	u32 int_on_com:1;
 	/* TDES3 */
 	union {
-		u32 tcp_payload_len:18;
+		u16 tcp_payload_len;
 		struct {
 			u32 total_pkt_len:15;
 			u32 reserved1:1;
-			u32 cksum_ctl:2;
-		} cksum_pktlen;
+		} pkt_len;
 	} tx_pkt_len;
 
-	u32 tse_bit:1;
-	u32 tcp_hdr_len:4;
-	u32 sa_insert_ctl:3;
-	u32 crc_pad_ctl:2;
-	u32 last_desc:1;
-	u32 first_desc:1;
-	u32 ctxt_bit:1;
-	u32 own_bit:1;
+	u16 cksum_ctl:2;
+	u16 tse_bit:1;
+	u16 tcp_hdr_len:4;
+	u16 sa_insert_ctl:3;
+	u16 crc_pad_ctl:2;
+	u16 last_desc:1;
+	u16 first_desc:1;
+	u16 ctxt_bit:1;
+	u16 own_bit:1;
 } tx_rd_des23;
 
 /* tx write back Desc 2,3 */
···
 struct sxgbe_rx_norm_desc {
 	union {
-		u32 rdes0; /* buf1 address */
-		struct {
+		u64 rdes01; /* buf1 address */
+		union {
 			u32 out_vlan_tag:16;
 			u32 in_vlan_tag:16;
-		} wb_rx_des0;
-	} rd_wb_des0;
-
-	union {
-		u32 rdes1;	/* buf2 address or buf1[63:32] */
-		u32 rss_hash;	/* Write-back RX */
-	} rd_wb_des1;
+			u32 rss_hash;
+		} rx_wb_des01;
+	} rdes01;
 
 	union {
 		/* RX Read format Desc 2,3 */
 		struct{
 			/* RDES2 */
-			u32 buf2_addr;
+			u64 buf2_addr:62;
 			/* RDES3 */
-			u32 buf2_hi_addr:30;
 			u32 int_on_com:1;
 			u32 own_bit:1;
 		} rx_rd_des23;
···
 
 	/* Set own bit */
 	void (*set_rx_owner)(struct sxgbe_rx_norm_desc *p);
+
+	/* Set Interrupt on completion bit */
+	void (*set_rx_int_on_com)(struct sxgbe_rx_norm_desc *p);
 
 	/* Get the receive frame size */
 	int (*get_rx_frame_len)(struct sxgbe_rx_norm_desc *p);
-13
drivers/net/ethernet/samsung/sxgbe/sxgbe_dma.c
···
 /* DMA core initialization */
 static int sxgbe_dma_init(void __iomem *ioaddr, int fix_burst, int burst_map)
 {
-	int retry_count = 10;
 	u32 reg_val;
-
-	/* reset the DMA */
-	writel(SXGBE_DMA_SOFT_RESET, ioaddr + SXGBE_DMA_MODE_REG);
-	while (retry_count--) {
-		if (!(readl(ioaddr + SXGBE_DMA_MODE_REG) &
-		      SXGBE_DMA_SOFT_RESET))
-			break;
-		mdelay(10);
-	}
-
-	if (retry_count < 0)
-		return -EBUSY;
 
 	reg_val = readl(ioaddr + SXGBE_DMA_SYSBUS_MODE_REG);
+31
drivers/net/ethernet/samsung/sxgbe/sxgbe_main.c
··· 1076 1076 1077 1077 /* Initialize the MAC Core */ 1078 1078 priv->hw->mac->core_init(priv->ioaddr); 1079 + SXGBE_FOR_EACH_QUEUE(SXGBE_RX_QUEUES, queue_num) { 1080 + priv->hw->mac->enable_rxqueue(priv->ioaddr, queue_num); 1081 + } 1079 1082 1080 1083 /* Request the IRQ lines */ 1081 1084 ret = devm_request_irq(priv->device, priv->irq, sxgbe_common_interrupt, ··· 1456 1453 /* Added memory barrier for RX descriptor modification */ 1457 1454 wmb(); 1458 1455 priv->hw->desc->set_rx_owner(p); 1456 + priv->hw->desc->set_rx_int_on_com(p); 1459 1457 /* Added memory barrier for RX descriptor modification */ 1460 1458 wmb(); 1461 1459 } ··· 2074 2070 return 0; 2075 2071 } 2076 2072 2073 + static int sxgbe_sw_reset(void __iomem *addr) 2074 + { 2075 + int retry_count = 10; 2076 + 2077 + writel(SXGBE_DMA_SOFT_RESET, addr + SXGBE_DMA_MODE_REG); 2078 + while (retry_count--) { 2079 + if (!(readl(addr + SXGBE_DMA_MODE_REG) & 2080 + SXGBE_DMA_SOFT_RESET)) 2081 + break; 2082 + mdelay(10); 2083 + } 2084 + 2085 + if (retry_count < 0) 2086 + return -EBUSY; 2087 + 2088 + return 0; 2089 + } 2090 + 2077 2091 /** 2078 2092 * sxgbe_drv_probe 2079 2093 * @device: device pointer ··· 2123 2101 sxgbe_set_ethtool_ops(ndev); 2124 2102 priv->plat = plat_dat; 2125 2103 priv->ioaddr = addr; 2104 + 2105 + ret = sxgbe_sw_reset(priv->ioaddr); 2106 + if (ret) 2107 + goto error_free_netdev; 2126 2108 2127 2109 /* Verify driver arguments */ 2128 2110 sxgbe_verify_args(); ··· 2244 2218 int sxgbe_drv_remove(struct net_device *ndev) 2245 2219 { 2246 2220 struct sxgbe_priv_data *priv = netdev_priv(ndev); 2221 + u8 queue_num; 2247 2222 2248 2223 netdev_info(ndev, "%s: removing driver\n", __func__); 2224 + 2225 + SXGBE_FOR_EACH_QUEUE(SXGBE_RX_QUEUES, queue_num) { 2226 + priv->hw->mac->disable_rxqueue(priv->ioaddr, queue_num); 2227 + } 2249 2228 2250 2229 priv->hw->dma->stop_rx(priv->ioaddr, SXGBE_RX_QUEUES); 2251 2230 priv->hw->dma->stop_tx(priv->ioaddr, SXGBE_TX_QUEUES);
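The `sxgbe_sw_reset()` added above re-homes the bounded reset poll removed from `sxgbe_dma_init()`: kick a self-clearing bit, poll a fixed number of times with a delay between reads, and return `-EBUSY` on exhaustion. The pattern can be sketched in user space against a fake register (`EBUSY_ERR` and the fake `readl` are stand-ins, not kernel APIs):

```c
#include <stdint.h>

#define DMA_SOFT_RESET 0x1
#define EBUSY_ERR 16	/* stand-in for the kernel's -EBUSY */

/* Fake device: the self-clearing reset bit drops after N reads. */
static uint32_t fake_reg;
static int polls_until_clear;

static uint32_t fake_readl(void)
{
	if (polls_until_clear > 0 && --polls_until_clear == 0)
		fake_reg &= ~DMA_SOFT_RESET;
	return fake_reg;
}

/* Sketch of the sxgbe_sw_reset() pattern: note that the post-decrement
 * in the loop condition leaves retry_count at -1 only when every poll
 * still saw the bit set, which is exactly the timeout test below. */
static int sw_reset(int clears_after)
{
	int retry_count = 10;

	fake_reg = DMA_SOFT_RESET;	/* writel(SXGBE_DMA_SOFT_RESET, ...) */
	polls_until_clear = clears_after;

	while (retry_count--) {
		if (!(fake_readl() & DMA_SOFT_RESET))
			break;
		/* mdelay(10) in the driver */
	}

	if (retry_count < 0)
		return -EBUSY_ERR;
	return 0;
}
```

A device that clears within the 10 allowed polls succeeds; one that needs an 11th poll times out.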
+12 -2
drivers/net/ethernet/samsung/sxgbe/sxgbe_mdio.c
··· 27 27 #define SXGBE_SMA_PREAD_CMD 0x02 /* post read increament address */ 28 28 #define SXGBE_SMA_READ_CMD 0x03 /* read command */ 29 29 #define SXGBE_SMA_SKIP_ADDRFRM 0x00040000 /* skip the address frame */ 30 - #define SXGBE_MII_BUSY 0x00800000 /* mii busy */ 30 + #define SXGBE_MII_BUSY 0x00400000 /* mii busy */ 31 31 32 32 static int sxgbe_mdio_busy_wait(void __iomem *ioaddr, unsigned int mii_data) 33 33 { ··· 147 147 struct sxgbe_mdio_bus_data *mdio_data = priv->plat->mdio_bus_data; 148 148 int err, phy_addr; 149 149 int *irqlist; 150 + bool phy_found = false; 150 151 bool act; 151 152 152 153 /* allocate the new mdio bus */ ··· 163 162 irqlist = priv->mii_irq; 164 163 165 164 /* assign mii bus fields */ 166 - mdio_bus->name = "samsxgbe"; 165 + mdio_bus->name = "sxgbe"; 167 166 mdio_bus->read = &sxgbe_mdio_read; 168 167 mdio_bus->write = &sxgbe_mdio_write; 169 168 snprintf(mdio_bus->id, MII_BUS_ID_SIZE, "%s-%x", ··· 217 216 netdev_info(ndev, "PHY ID %08x at %d IRQ %s (%s)%s\n", 218 217 phy->phy_id, phy_addr, irq_str, 219 218 dev_name(&phy->dev), act ? " active" : ""); 219 + phy_found = true; 220 220 } 221 + } 222 + 223 + if (!phy_found) { 224 + netdev_err(ndev, "PHY not found\n"); 225 + goto phyfound_err; 221 226 } 222 227 223 228 priv->mii = mdio_bus; 224 229 225 230 return 0; 226 231 232 + phyfound_err: 233 + err = -ENODEV; 234 + mdiobus_unregister(mdio_bus); 227 235 mdiobus_err: 228 236 mdiobus_free(mdio_bus); 229 237 return err;
+4
drivers/net/ethernet/samsung/sxgbe/sxgbe_reg.h
··· 52 52 #define SXGBE_CORE_RX_CTL2_REG 0x00A8 53 53 #define SXGBE_CORE_RX_CTL3_REG 0x00AC 54 54 55 + #define SXGBE_CORE_RXQ_ENABLE_MASK 0x0003 56 + #define SXGBE_CORE_RXQ_ENABLE 0x0002 57 + #define SXGBE_CORE_RXQ_DISABLE 0x0000 58 + 55 59 /* Interrupt Registers */ 56 60 #define SXGBE_CORE_INT_STATUS_REG 0x00B0 57 61 #define SXGBE_CORE_INT_ENABLE_REG 0x00B4
+13 -12
drivers/net/ethernet/smsc/smc91x.c
··· 147 147 */ 148 148 #define MII_DELAY 1 149 149 150 - #if SMC_DEBUG > 0 151 - #define DBG(n, dev, args...) \ 152 - do { \ 153 - if (SMC_DEBUG >= (n)) \ 154 - netdev_dbg(dev, args); \ 150 + #define DBG(n, dev, fmt, ...) \ 151 + do { \ 152 + if (SMC_DEBUG >= (n)) \ 153 + netdev_dbg(dev, fmt, ##__VA_ARGS__); \ 155 154 } while (0) 156 155 157 - #define PRINTK(dev, args...) netdev_info(dev, args) 158 - #else 159 - #define DBG(n, dev, args...) do { } while (0) 160 - #define PRINTK(dev, args...) netdev_dbg(dev, args) 161 - #endif 156 + #define PRINTK(dev, fmt, ...) \ 157 + do { \ 158 + if (SMC_DEBUG > 0) \ 159 + netdev_info(dev, fmt, ##__VA_ARGS__); \ 160 + else \ 161 + netdev_dbg(dev, fmt, ##__VA_ARGS__); \ 162 + } while (0) 162 163 163 164 #if SMC_DEBUG > 3 164 165 static void PRINT_PKT(u_char *buf, int length) ··· 192 191 pr_cont("\n"); 193 192 } 194 193 #else 195 - #define PRINT_PKT(x...) do { } while (0) 194 + static inline void PRINT_PKT(u_char *buf, int length) { } 196 195 #endif 197 196 198 197 ··· 1782 1781 int timeout = 20; 1783 1782 unsigned long cookie; 1784 1783 1785 - DBG(2, dev, "%s: %s\n", CARDNAME, __func__); 1784 + DBG(2, lp->dev, "%s: %s\n", CARDNAME, __func__); 1786 1785 1787 1786 cookie = probe_irq_on(); 1788 1787
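The smc91x hunk above is the SMC_DEBUG >= 2 build fix: the old `#if` blocks defined `DBG(n, dev, args...)` only for SMC_DEBUG > 0, and the new version folds the level test into the macro body while using `##__VA_ARGS__` (a GNU extension, standardized via `__VA_OPT__` in C23) so a call with no variadic arguments still compiles. A minimal user-space sketch of the same macro shape, logging into a buffer instead of `netdev_dbg()`:

```c
#include <stdio.h>
#include <string.h>

#define SMC_DEBUG 1

static char logbuf[128];

/* ##__VA_ARGS__ swallows the trailing comma when no variadic arguments
 * are given, so DBG(1, "plain") is valid at every debug level; the
 * run-time level check replaces the old conditional compilation. */
#define DBG(n, fmt, ...) \
	do { \
		if (SMC_DEBUG >= (n)) \
			snprintf(logbuf, sizeof(logbuf), fmt, ##__VA_ARGS__); \
	} while (0)

static const char *dbg_demo(void)
{
	logbuf[0] = '\0';
	DBG(1, "irq=%d", 9);	/* level 1 <= SMC_DEBUG: logged */
	DBG(2, "dropped");	/* level 2 >  SMC_DEBUG: compiled, not logged */
	return logbuf;
}
```

Because both branches are always compiled, a format-string mistake in a high-level `DBG()` now breaks the build at every SMC_DEBUG setting instead of only at 2 or larger.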
+4
drivers/net/hyperv/netvsc_drv.c
··· 382 382 if (skb_is_gso(skb)) 383 383 goto do_lso; 384 384 385 + if ((skb->ip_summed == CHECKSUM_NONE) || 386 + (skb->ip_summed == CHECKSUM_UNNECESSARY)) 387 + goto do_send; 388 + 385 389 rndis_msg_size += NDIS_CSUM_PPI_SIZE; 386 390 ppi = init_ppi_data(rndis_msg, NDIS_CSUM_PPI_SIZE, 387 391 TCPIP_CHKSUM_PKTINFO);
-3
drivers/net/macvlan.c
··· 263 263 const struct macvlan_dev *vlan = netdev_priv(dev); 264 264 const struct macvlan_port *port = vlan->port; 265 265 const struct macvlan_dev *dest; 266 - __u8 ip_summed = skb->ip_summed; 267 266 268 267 if (vlan->mode == MACVLAN_MODE_BRIDGE) { 269 268 const struct ethhdr *eth = (void *)skb->data; 270 - skb->ip_summed = CHECKSUM_UNNECESSARY; 271 269 272 270 /* send to other bridge ports directly */ 273 271 if (is_multicast_ether_addr(eth->h_dest)) { ··· 283 285 } 284 286 285 287 xmit_world: 286 - skb->ip_summed = ip_summed; 287 288 skb->dev = vlan->lowerdev; 288 289 return dev_queue_xmit(skb); 289 290 }
+9
drivers/net/macvtap.c
··· 322 322 segs = nskb; 323 323 } 324 324 } else { 325 + /* If we receive a partial checksum and the tap side 326 + * doesn't support checksum offload, compute the checksum. 327 + * Note: it doesn't matter which checksum feature to 328 + * check, we either support them all or none. 329 + */ 330 + if (skb->ip_summed == CHECKSUM_PARTIAL && 331 + !(features & NETIF_F_ALL_CSUM) && 332 + skb_checksum_help(skb)) 333 + goto drop; 325 334 skb_queue_tail(&q->sk.sk_receive_queue, skb); 326 335 } 327 336
+3 -3
drivers/net/phy/micrel.c
··· 246 246 if (val1 != -1) 247 247 newval = ((newval & 0xfff0) | ((val1 / PS_TO_REG) & 0xf) << 0); 248 248 249 - if (val2 != -1) 249 + if (val2 != -2) 250 250 newval = ((newval & 0xff0f) | ((val2 / PS_TO_REG) & 0xf) << 4); 251 251 252 - if (val3 != -1) 252 + if (val3 != -3) 253 253 newval = ((newval & 0xf0ff) | ((val3 / PS_TO_REG) & 0xf) << 8); 254 254 255 - if (val4 != -1) 255 + if (val4 != -4) 256 256 newval = ((newval & 0x0fff) | ((val4 / PS_TO_REG) & 0xf) << 12); 257 257 258 258 return kszphy_extended_write(phydev, reg, newval);
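The micrel hunk above looks odd in isolation: the four optional skew values are initialized to the *distinct* sentinels -1, -2, -3 and -4 (so the caller can tell which device-tree properties were actually present), but the old code compared every one of them against -1 and so wrote stale nibbles. A self-contained sketch of the fixed nibble assembly (`PS_TO_REG` mirrors the driver's 200 ps-per-step scale; treat the exact value as an assumption here):

```c
#include <stdint.h>

#define PS_TO_REG 200	/* assumed picoseconds-per-register-step scale */

/* Each optional value carries a DISTINCT "not provided" sentinel, so
 * "caller left val2 alone" can only be detected by comparing val2
 * against -2 -- the comparison the patch fixes. */
static uint16_t build_skew(uint16_t oldval,
			   int val1, int val2, int val3, int val4)
{
	uint16_t newval = oldval;

	if (val1 != -1)
		newval = (newval & 0xfff0) | (((val1 / PS_TO_REG) & 0xf) << 0);
	if (val2 != -2)
		newval = (newval & 0xff0f) | (((val2 / PS_TO_REG) & 0xf) << 4);
	if (val3 != -3)
		newval = (newval & 0xf0ff) | (((val3 / PS_TO_REG) & 0xf) << 8);
	if (val4 != -4)
		newval = (newval & 0x0fff) | (((val4 / PS_TO_REG) & 0xf) << 12);
	return newval;
}
```

With the old `!= -1` tests, passing the defaults for val2..val4 would have divided the sentinels themselves and corrupted the upper nibbles; with the fixed tests those fields pass through untouched.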
+11
drivers/net/phy/phy.c
··· 765 765 break; 766 766 767 767 if (phydev->link) { 768 + if (AUTONEG_ENABLE == phydev->autoneg) { 769 + err = phy_aneg_done(phydev); 770 + if (err < 0) 771 + break; 772 + 773 + if (!err) { 774 + phydev->state = PHY_AN; 775 + phydev->link_timeout = PHY_AN_TIMEOUT; 776 + break; 777 + } 778 + } 768 779 phydev->state = PHY_RUNNING; 769 780 netif_carrier_on(phydev->attached_dev); 770 781 phydev->adjust_link(phydev->attached_dev);
+3 -3
drivers/net/slip/slip.c
··· 429 429 if (!sl || sl->magic != SLIP_MAGIC || !netif_running(sl->dev)) 430 430 return; 431 431 432 - spin_lock(&sl->lock); 432 + spin_lock_bh(&sl->lock); 433 433 if (sl->xleft <= 0) { 434 434 /* Now serial buffer is almost free & we can start 435 435 * transmission of another packet */ 436 436 sl->dev->stats.tx_packets++; 437 437 clear_bit(TTY_DO_WRITE_WAKEUP, &tty->flags); 438 - spin_unlock(&sl->lock); 438 + spin_unlock_bh(&sl->lock); 439 439 sl_unlock(sl); 440 440 return; 441 441 } ··· 443 443 actual = tty->ops->write(tty, sl->xhead, sl->xleft); 444 444 sl->xleft -= actual; 445 445 sl->xhead += actual; 446 - spin_unlock(&sl->lock); 446 + spin_unlock_bh(&sl->lock); 447 447 } 448 448 449 449 static void sl_tx_timeout(struct net_device *dev)
+2
drivers/net/team/team.c
··· 2834 2834 case NETDEV_UP: 2835 2835 if (netif_carrier_ok(dev)) 2836 2836 team_port_change_check(port, true); 2837 + break; 2837 2838 case NETDEV_DOWN: 2838 2839 team_port_change_check(port, false); 2840 + break; 2839 2841 case NETDEV_CHANGE: 2840 2842 if (netif_running(port->dev)) 2841 2843 team_port_change_check(port,
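The two `break` statements added to team.c above close an unintended switch fall-through: a NETDEV_UP event also executed the NETDEV_DOWN and NETDEV_CHANGE handling. A minimal reproduction of the control flow, where a returned bitmask stands in for which `team_port_change_check()` calls ran:

```c
enum { NETDEV_UP, NETDEV_DOWN, NETDEV_CHANGE };

/* Pre-fix shape: bit0 = up handler ran, bit1 = down, bit2 = change. */
static unsigned int handle_buggy(int event)
{
	unsigned int ran = 0;

	switch (event) {
	case NETDEV_UP:
		ran |= 1;	/* falls through: the bug */
	case NETDEV_DOWN:
		ran |= 2;	/* falls through: the bug */
	case NETDEV_CHANGE:
		ran |= 4;
		break;
	}
	return ran;
}

/* Post-fix shape: each case is terminated. */
static unsigned int handle_fixed(int event)
{
	unsigned int ran = 0;

	switch (event) {
	case NETDEV_UP:
		ran |= 1;
		break;		/* the added break */
	case NETDEV_DOWN:
		ran |= 2;
		break;		/* the added break */
	case NETDEV_CHANGE:
		ran |= 4;
		break;
	}
	return ran;
}
```

This is the class of bug GCC's later `-Wimplicit-fallthrough` warning was added to catch.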
+1 -1
drivers/net/usb/cdc_ncm.c
··· 785 785 skb_out->len > CDC_NCM_MIN_TX_PKT) 786 786 memset(skb_put(skb_out, ctx->tx_max - skb_out->len), 0, 787 787 ctx->tx_max - skb_out->len); 788 - else if ((skb_out->len % dev->maxpacket) == 0) 788 + else if (skb_out->len < ctx->tx_max && (skb_out->len % dev->maxpacket) == 0) 789 789 *skb_put(skb_out, 1) = 0; /* force short packet */ 790 790 791 791 /* set final frame length */
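The cdc_ncm one-liner above guards the short-packet byte: a USB bulk transfer whose length is an exact multiple of wMaxPacketSize needs one extra byte (or a ZLP) so the device sees the transfer end, but appending that byte to an NTB already filled to `tx_max` writes past the allocated buffer — the overflow being fixed. A simplified sketch of the framing decision (the ZLP driver-flag handling is omitted, and the `CDC_NCM_MIN_TX_PKT` value is an assumption):

```c
#include <stddef.h>

#define CDC_NCM_MIN_TX_PKT 512	/* assumed threshold for this sketch */

/* Final transfer length for an NTB of length len in a buffer of
 * capacity tx_max, on an endpoint with the given wMaxPacketSize.
 * The "len < tx_max" guard is the added bounds check: never append
 * the short-packet byte to an already-full buffer. */
static size_t ncm_final_len(size_t len, size_t tx_max, size_t maxpacket)
{
	if (len > CDC_NCM_MIN_TX_PKT)
		return tx_max;		/* zero-pad to the full NTB size */
	else if (len < tx_max && len % maxpacket == 0)
		return len + 1;		/* force a short packet */
	return len;
}
```

Without the `len < tx_max` test, a full buffer whose size is a multiple of `maxpacket` would grow by one byte beyond its allocation.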
+28
drivers/net/usb/qmi_wwan.c
··· 669 669 {QMI_FIXED_INTF(0x05c6, 0x920d, 5)}, 670 670 {QMI_FIXED_INTF(0x12d1, 0x140c, 1)}, /* Huawei E173 */ 671 671 {QMI_FIXED_INTF(0x12d1, 0x14ac, 1)}, /* Huawei E1820 */ 672 + {QMI_FIXED_INTF(0x16d8, 0x6003, 0)}, /* CMOTech 6003 */ 673 + {QMI_FIXED_INTF(0x16d8, 0x6007, 0)}, /* CMOTech CHE-628S */ 674 + {QMI_FIXED_INTF(0x16d8, 0x6008, 0)}, /* CMOTech CMU-301 */ 675 + {QMI_FIXED_INTF(0x16d8, 0x6280, 0)}, /* CMOTech CHU-628 */ 676 + {QMI_FIXED_INTF(0x16d8, 0x7001, 0)}, /* CMOTech CHU-720S */ 677 + {QMI_FIXED_INTF(0x16d8, 0x7002, 0)}, /* CMOTech 7002 */ 678 + {QMI_FIXED_INTF(0x16d8, 0x7003, 4)}, /* CMOTech CHU-629K */ 679 + {QMI_FIXED_INTF(0x16d8, 0x7004, 3)}, /* CMOTech 7004 */ 680 + {QMI_FIXED_INTF(0x16d8, 0x7006, 5)}, /* CMOTech CGU-629 */ 681 + {QMI_FIXED_INTF(0x16d8, 0x700a, 4)}, /* CMOTech CHU-629S */ 682 + {QMI_FIXED_INTF(0x16d8, 0x7211, 0)}, /* CMOTech CHU-720I */ 683 + {QMI_FIXED_INTF(0x16d8, 0x7212, 0)}, /* CMOTech 7212 */ 684 + {QMI_FIXED_INTF(0x16d8, 0x7213, 0)}, /* CMOTech 7213 */ 685 + {QMI_FIXED_INTF(0x16d8, 0x7251, 1)}, /* CMOTech 7251 */ 686 + {QMI_FIXED_INTF(0x16d8, 0x7252, 1)}, /* CMOTech 7252 */ 687 + {QMI_FIXED_INTF(0x16d8, 0x7253, 1)}, /* CMOTech 7253 */ 672 688 {QMI_FIXED_INTF(0x19d2, 0x0002, 1)}, 673 689 {QMI_FIXED_INTF(0x19d2, 0x0012, 1)}, 674 690 {QMI_FIXED_INTF(0x19d2, 0x0017, 3)}, ··· 746 730 {QMI_FIXED_INTF(0x114f, 0x68a2, 8)}, /* Sierra Wireless MC7750 */ 747 731 {QMI_FIXED_INTF(0x1199, 0x68a2, 8)}, /* Sierra Wireless MC7710 in QMI mode */ 748 732 {QMI_FIXED_INTF(0x1199, 0x68a2, 19)}, /* Sierra Wireless MC7710 in QMI mode */ 733 + {QMI_FIXED_INTF(0x1199, 0x68c0, 8)}, /* Sierra Wireless MC73xx */ 734 + {QMI_FIXED_INTF(0x1199, 0x68c0, 10)}, /* Sierra Wireless MC73xx */ 735 + {QMI_FIXED_INTF(0x1199, 0x68c0, 11)}, /* Sierra Wireless MC73xx */ 749 736 {QMI_FIXED_INTF(0x1199, 0x901c, 8)}, /* Sierra Wireless EM7700 */ 737 + {QMI_FIXED_INTF(0x1199, 0x901f, 8)}, /* Sierra Wireless EM7355 */ 738 + {QMI_FIXED_INTF(0x1199, 0x9041, 8)}, /* Sierra Wireless MC7305/MC7355 */ 750 739 {QMI_FIXED_INTF(0x1199, 0x9051, 8)}, /* Netgear AirCard 340U */ 751 740 {QMI_FIXED_INTF(0x1bbb, 0x011e, 4)}, /* Telekom Speedstick LTE II (Alcatel One Touch L100V LTE) */ 741 + {QMI_FIXED_INTF(0x1bbb, 0x0203, 2)}, /* Alcatel L800MA */ 752 742 {QMI_FIXED_INTF(0x2357, 0x0201, 4)}, /* TP-LINK HSUPA Modem MA180 */ 753 743 {QMI_FIXED_INTF(0x2357, 0x9000, 4)}, /* TP-LINK MA260 */ 754 744 {QMI_FIXED_INTF(0x1bc7, 0x1200, 5)}, /* Telit LE920 */ 755 745 {QMI_FIXED_INTF(0x1bc7, 0x1201, 2)}, /* Telit LE920 */ 756 746 {QMI_FIXED_INTF(0x0b3c, 0xc005, 6)}, /* Olivetti Olicard 200 */ 747 + {QMI_FIXED_INTF(0x0b3c, 0xc00b, 4)}, /* Olivetti Olicard 500 */ 757 748 {QMI_FIXED_INTF(0x1e2d, 0x0060, 4)}, /* Cinterion PLxx */ 758 749 {QMI_FIXED_INTF(0x1e2d, 0x0053, 4)}, /* Cinterion PHxx,PXxx */ 750 + {QMI_FIXED_INTF(0x413c, 0x81a2, 8)}, /* Dell Wireless 5806 Gobi(TM) 4G LTE Mobile Broadband Card */ 751 + {QMI_FIXED_INTF(0x413c, 0x81a3, 8)}, /* Dell Wireless 5570 HSPA+ (42Mbps) Mobile Broadband Card */ 752 + {QMI_FIXED_INTF(0x413c, 0x81a4, 8)}, /* Dell Wireless 5570e HSPA+ (42Mbps) Mobile Broadband Card */ 753 + {QMI_FIXED_INTF(0x413c, 0x81a8, 8)}, /* Dell Wireless 5808 Gobi(TM) 4G LTE Mobile Broadband Card */ 754 + {QMI_FIXED_INTF(0x413c, 0x81a9, 8)}, /* Dell Wireless 5808e Gobi(TM) 4G LTE Mobile Broadband Card */ 759 755 760 756 /* 4. Gobi 1000 devices */ 761 757 {QMI_GOBI1K_DEVICE(0x05c6, 0x9212)}, /* Acer Gobi Modem Device */
+1 -1
drivers/net/virtio_net.c
··· 1285 1285 if (channels->rx_count || channels->tx_count || channels->other_count) 1286 1286 return -EINVAL; 1287 1287 1288 - if (queue_pairs > vi->max_queue_pairs) 1288 + if (queue_pairs > vi->max_queue_pairs || queue_pairs == 0) 1289 1289 return -EINVAL; 1290 1290 1291 1291 get_online_cpus();
+21 -17
drivers/net/vxlan.c
··· 389 389 + nla_total_size(sizeof(struct nda_cacheinfo)); 390 390 391 391 392 - static void vxlan_fdb_notify(struct vxlan_dev *vxlan, 393 - struct vxlan_fdb *fdb, int type) 392 + static void vxlan_fdb_notify(struct vxlan_dev *vxlan, struct vxlan_fdb *fdb, 393 + struct vxlan_rdst *rd, int type) 394 394 { 395 395 struct net *net = dev_net(vxlan->dev); 396 396 struct sk_buff *skb; ··· 400 400 if (skb == NULL) 401 401 goto errout; 402 402 403 - err = vxlan_fdb_info(skb, vxlan, fdb, 0, 0, type, 0, 404 - first_remote_rtnl(fdb)); 403 + err = vxlan_fdb_info(skb, vxlan, fdb, 0, 0, type, 0, rd); 405 404 if (err < 0) { 406 405 /* -EMSGSIZE implies BUG in vxlan_nlmsg_size() */ 407 406 WARN_ON(err == -EMSGSIZE); ··· 426 427 .remote_vni = VXLAN_N_VID, 427 428 }; 428 429 429 - INIT_LIST_HEAD(&f.remotes); 430 - list_add_rcu(&remote.list, &f.remotes); 431 - 432 - vxlan_fdb_notify(vxlan, &f, RTM_GETNEIGH); 430 + vxlan_fdb_notify(vxlan, &f, &remote, RTM_GETNEIGH); 433 431 } 434 432 435 433 static void vxlan_fdb_miss(struct vxlan_dev *vxlan, const u8 eth_addr[ETH_ALEN]) ··· 434 438 struct vxlan_fdb f = { 435 439 .state = NUD_STALE, 436 440 }; 441 + struct vxlan_rdst remote = { }; 437 442 438 - INIT_LIST_HEAD(&f.remotes); 439 443 memcpy(f.eth_addr, eth_addr, ETH_ALEN); 440 444 441 - vxlan_fdb_notify(vxlan, &f, RTM_GETNEIGH); 445 + vxlan_fdb_notify(vxlan, &f, &remote, RTM_GETNEIGH); 442 446 } 443 447 444 448 /* Hash Ethernet address */ ··· 529 533 530 534 /* Add/update destinations for multicast */ 531 535 static int vxlan_fdb_append(struct vxlan_fdb *f, 532 - union vxlan_addr *ip, __be16 port, __u32 vni, __u32 ifindex) 536 + union vxlan_addr *ip, __be16 port, __u32 vni, 537 + __u32 ifindex, struct vxlan_rdst **rdp) 533 538 { 534 539 struct vxlan_rdst *rd; 535 540 ··· 548 551 549 552 list_add_tail_rcu(&rd->list, &f->remotes); 550 553 554 + *rdp = rd; 551 555 return 1; 552 556 } 553 557 ··· 688 690 __be16 port, __u32 vni, __u32 ifindex, 689 691 __u8 ndm_flags) 690 692 { 693 + struct vxlan_rdst *rd = NULL; 691 694 struct vxlan_fdb *f; 692 695 int notify = 0; 693 696 ··· 725 726 if ((flags & NLM_F_APPEND) && 726 727 (is_multicast_ether_addr(f->eth_addr) || 727 728 is_zero_ether_addr(f->eth_addr))) { 728 - int rc = vxlan_fdb_append(f, ip, port, vni, ifindex); 729 + int rc = vxlan_fdb_append(f, ip, port, vni, ifindex, 730 + &rd); 729 731 730 732 if (rc < 0) 731 733 return rc; ··· 756 756 INIT_LIST_HEAD(&f->remotes); 757 757 memcpy(f->eth_addr, mac, ETH_ALEN); 758 758 759 - vxlan_fdb_append(f, ip, port, vni, ifindex); 759 + vxlan_fdb_append(f, ip, port, vni, ifindex, &rd); 760 760 761 761 ++vxlan->addrcnt; 762 762 hlist_add_head_rcu(&f->hlist, 763 763 vxlan_fdb_head(vxlan, mac)); 764 764 } 765 765 766 - if (notify) 767 - vxlan_fdb_notify(vxlan, f, RTM_NEWNEIGH); 766 + if (notify) { 767 + if (rd == NULL) 768 + rd = first_remote_rtnl(f); 769 + vxlan_fdb_notify(vxlan, f, rd, RTM_NEWNEIGH); 770 + } 768 771 769 772 return 0; 770 773 } ··· 788 785 "delete %pM\n", f->eth_addr); 789 786 790 787 --vxlan->addrcnt; 791 - vxlan_fdb_notify(vxlan, f, RTM_DELNEIGH); 788 + vxlan_fdb_notify(vxlan, f, first_remote_rtnl(f), RTM_DELNEIGH); 792 789 793 790 hlist_del_rcu(&f->hlist); 794 791 call_rcu(&f->rcu, vxlan_fdb_free); ··· 922 919 */ 923 920 if (rd && !list_is_singular(&f->remotes)) { 924 921 list_del_rcu(&rd->list); 922 + vxlan_fdb_notify(vxlan, f, rd, RTM_DELNEIGH); 925 923 kfree_rcu(rd, rcu); 926 924 goto out; 927 925 } ··· 997 993 998 994 rdst->remote_ip = *src_ip; 999 995 f->updated = jiffies; 1000 - vxlan_fdb_notify(vxlan, f, RTM_NEWNEIGH); 996 + vxlan_fdb_notify(vxlan, f, rdst, RTM_NEWNEIGH); 1001 997 } else { 1002 998 /* learned new entry */ 1003 999 spin_lock(&vxlan->hash_lock);
-4
drivers/net/wireless/ath/ath9k/ahb.c
··· 86 86 int irq; 87 87 int ret = 0; 88 88 struct ath_hw *ah; 89 - struct ath_common *common; 90 89 char hw_name[64]; 91 90 92 91 if (!dev_get_platdata(&pdev->dev)) { ··· 145 146 wiphy_info(hw->wiphy, "%s mem=0x%lx, irq=%d\n", 146 147 hw_name, (unsigned long)mem, irq); 147 148 148 - common = ath9k_hw_common(sc->sc_ah); 149 - /* Will be cleared in ath9k_start() */ 150 - set_bit(ATH_OP_INVALID, &common->op_flags); 151 149 return 0; 152 150 153 151 err_irq:
+6
drivers/net/wireless/ath/ath9k/ani.c
··· 155 155 ATH9K_ANI_RSSI_THR_LOW, 156 156 ATH9K_ANI_RSSI_THR_HIGH); 157 157 158 + if (AR_SREV_9100(ah) && immunityLevel < ATH9K_ANI_OFDM_DEF_LEVEL) 159 + immunityLevel = ATH9K_ANI_OFDM_DEF_LEVEL; 160 + 158 161 if (!scan) 159 162 aniState->ofdmNoiseImmunityLevel = immunityLevel; 160 163 ··· 237 234 aniState->cckNoiseImmunityLevel, immunityLevel, 238 235 BEACON_RSSI(ah), ATH9K_ANI_RSSI_THR_LOW, 239 236 ATH9K_ANI_RSSI_THR_HIGH); 237 + 238 + if (AR_SREV_9100(ah) && immunityLevel < ATH9K_ANI_CCK_DEF_LEVEL) 239 + immunityLevel = ATH9K_ANI_CCK_DEF_LEVEL; 240 240 241 241 if (ah->opmode == NL80211_IFTYPE_STATION && 242 242 BEACON_RSSI(ah) <= ATH9K_ANI_RSSI_THR_LOW &&
-1
drivers/net/wireless/ath/ath9k/ath9k.h
··· 251 251 252 252 s8 bar_index; 253 253 bool sched; 254 - bool paused; 255 254 bool active; 256 255 }; 257 256
+2 -3
drivers/net/wireless/ath/ath9k/debug_sta.c
··· 72 72 ath_txq_lock(sc, txq); 73 73 if (tid->active) { 74 74 len += scnprintf(buf + len, size - len, 75 - "%3d%11d%10d%10d%10d%10d%9d%6d%8d\n", 75 + "%3d%11d%10d%10d%10d%10d%9d%6d\n", 76 76 tid->tidno, 77 77 tid->seq_start, 78 78 tid->seq_next, ··· 80 80 tid->baw_head, 81 81 tid->baw_tail, 82 82 tid->bar_index, 83 - tid->sched, 84 - tid->paused); 83 + tid->sched); 85 84 } 86 85 ath_txq_unlock(sc, txq); 87 86 }
+3
drivers/net/wireless/ath/ath9k/init.c
··· 783 783 common = ath9k_hw_common(ah); 784 784 ath9k_set_hw_capab(sc, hw); 785 785 786 + /* Will be cleared in ath9k_start() */ 787 + set_bit(ATH_OP_INVALID, &common->op_flags); 788 + 786 789 /* Initialize regulatory */ 787 790 error = ath_regd_init(&common->regulatory, sc->hw->wiphy, 788 791 ath9k_reg_notifier);
-5
drivers/net/wireless/ath/ath9k/pci.c
··· 784 784 { 785 785 struct ath_softc *sc; 786 786 struct ieee80211_hw *hw; 787 - struct ath_common *common; 788 787 u8 csz; 789 788 u32 val; 790 789 int ret = 0; ··· 875 876 ath9k_hw_name(sc->sc_ah, hw_name, sizeof(hw_name)); 876 877 wiphy_info(hw->wiphy, "%s mem=0x%lx, irq=%d\n", 877 878 hw_name, (unsigned long)sc->mem, pdev->irq); 878 - 879 - /* Will be cleared in ath9k_start() */ 880 - common = ath9k_hw_common(sc->sc_ah); 881 - set_bit(ATH_OP_INVALID, &common->op_flags); 882 879 883 880 return 0; 884 881
+6 -3
drivers/net/wireless/ath/ath9k/recv.c
··· 975 975 u64 tsf = 0; 976 976 unsigned long flags; 977 977 dma_addr_t new_buf_addr; 978 + unsigned int budget = 512; 978 979 979 980 if (edma) 980 981 dma_type = DMA_BIDIRECTIONAL; ··· 1114 1113 } 1115 1114 requeue: 1116 1115 list_add_tail(&bf->list, &sc->rx.rxbuf); 1117 - if (flush) 1118 - continue; 1119 1116 1120 1117 if (edma) { 1121 1118 ath_rx_edma_buf_link(sc, qtype); 1122 1119 } else { 1123 1120 ath_rx_buf_relink(sc, bf); 1124 - ath9k_hw_rxena(ah); 1121 + if (!flush) 1122 + ath9k_hw_rxena(ah); 1125 1123 } 1124 + 1125 + if (!budget--) 1126 + break; 1126 1127 } while (1); 1127 1128 1128 1129 if (!(ah->imask & ATH9K_INT_RXEOL)) {
+1 -13
drivers/net/wireless/ath/ath9k/xmit.c
··· 107 107 { 108 108 struct ath_atx_ac *ac = tid->ac; 109 109 110 - if (tid->paused) 111 - return; 112 - 113 110 if (tid->sched) 114 111 return; 115 112 ··· 1404 1407 ath_tx_tid_change_state(sc, txtid); 1405 1408 1406 1409 txtid->active = true; 1407 - txtid->paused = true; 1408 1410 *ssn = txtid->seq_start = txtid->seq_next; 1409 1411 txtid->bar_index = -1; 1410 1412 ··· 1423 1427 1424 1428 ath_txq_lock(sc, txq); 1425 1429 txtid->active = false; 1426 - txtid->paused = false; 1427 1430 ath_tx_flush_tid(sc, txtid); 1428 1431 ath_tx_tid_change_state(sc, txtid); 1429 1432 ath_txq_unlock_complete(sc, txq); ··· 1482 1487 ath_txq_lock(sc, txq); 1483 1488 ac->clear_ps_filter = true; 1484 1489 1485 - if (!tid->paused && ath_tid_has_buffered(tid)) { 1490 + if (ath_tid_has_buffered(tid)) { 1486 1491 ath_tx_queue_tid(txq, tid); 1487 1492 ath_txq_schedule(sc, txq); 1488 1493 } ··· 1505 1510 ath_txq_lock(sc, txq); 1506 1511 1507 1512 tid->baw_size = IEEE80211_MIN_AMPDU_BUF << sta->ht_cap.ampdu_factor; 1508 - tid->paused = false; 1509 1513 1510 1514 if (ath_tid_has_buffered(tid)) { 1511 1515 ath_tx_queue_tid(txq, tid); ··· 1538 1544 continue; 1539 1545 1540 1546 tid = ATH_AN_2_TID(an, i); 1541 - if (tid->paused) 1542 - continue; 1543 1547 1544 1548 ath_txq_lock(sc, tid->ac->txq); 1545 1549 while (nframes > 0) { ··· 1835 1843 list); 1836 1844 list_del(&tid->list); 1837 1845 tid->sched = false; 1838 - 1839 - if (tid->paused) 1840 - continue; 1841 1846 1842 1847 if (ath_tx_sched_aggr(sc, txq, tid, &stop)) 1843 1848 sent = true; ··· 2687 2698 tid->baw_size = WME_MAX_BA; 2688 2699 tid->baw_head = tid->baw_tail = 0; 2689 2700 tid->sched = false; 2690 - tid->paused = false; 2691 2701 tid->active = false; 2692 2702 __skb_queue_head_init(&tid->buf_q); 2693 2703 __skb_queue_head_init(&tid->retry_q);
+3 -2
drivers/net/wireless/brcm80211/brcmfmac/chip.c
··· 303 303 304 304 ci = core->chip; 305 305 306 - /* if core is already in reset, just return */ 306 + /* if core is already in reset, skip reset */ 307 307 regdata = ci->ops->read32(ci->ctx, core->wrapbase + BCMA_RESET_CTL); 308 308 if ((regdata & BCMA_RESET_CTL_RESET) != 0) 309 - return; 309 + goto in_reset_configure; 310 310 311 311 /* configure reset */ 312 312 ci->ops->write32(ci->ctx, core->wrapbase + BCMA_IOCTL, ··· 322 322 SPINWAIT(ci->ops->read32(ci->ctx, core->wrapbase + BCMA_RESET_CTL) != 323 323 BCMA_RESET_CTL_RESET, 300); 324 324 325 + in_reset_configure: 325 326 /* in-reset configure */ 326 327 ci->ops->write32(ci->ctx, core->wrapbase + BCMA_IOCTL, 327 328 reset | BCMA_IOCTL_FGC | BCMA_IOCTL_CLK);
+12 -10
drivers/net/wireless/rt2x00/rt2x00mac.c
··· 621 621 bss_conf->bssid); 622 622 623 623 /* 624 - * Update the beacon. This is only required on USB devices. PCI 625 - * devices fetch beacons periodically. 626 - */ 627 - if (changes & BSS_CHANGED_BEACON && rt2x00_is_usb(rt2x00dev)) 628 - rt2x00queue_update_beacon(rt2x00dev, vif); 629 - 630 - /* 631 624 * Start/stop beaconing. 632 625 */ 633 626 if (changes & BSS_CHANGED_BEACON_ENABLED) { 634 627 if (!bss_conf->enable_beacon && intf->enable_beacon) { 635 - rt2x00queue_clear_beacon(rt2x00dev, vif); 636 628 rt2x00dev->intf_beaconing--; 637 629 intf->enable_beacon = false; 630 + /* 631 + * Clear beacon in the H/W for this vif. This is needed 632 + * to disable beaconing on this particular interface 633 + * and keep it running on other interfaces. 634 + */ 635 + rt2x00queue_clear_beacon(rt2x00dev, vif); 638 636 639 637 if (rt2x00dev->intf_beaconing == 0) { 640 638 /* ··· 643 645 rt2x00queue_stop_queue(rt2x00dev->bcn); 644 646 mutex_unlock(&intf->beacon_skb_mutex); 645 647 } 646 - 647 - 648 648 } else if (bss_conf->enable_beacon && !intf->enable_beacon) { 649 649 rt2x00dev->intf_beaconing++; 650 650 intf->enable_beacon = true; 651 + /* 652 + * Upload beacon to the H/W. This is only required on 653 + * USB devices. PCI devices fetch beacons periodically. 654 + */ 655 + if (rt2x00_is_usb(rt2x00dev)) 656 + rt2x00queue_update_beacon(rt2x00dev, vif); 651 657 652 658 if (rt2x00dev->intf_beaconing == 1) { 653 659 /*
+1 -1
drivers/net/wireless/rtlwifi/rtl8188ee/trx.c
··· 293 293 u8 *psaddr; 294 294 __le16 fc; 295 295 u16 type, ufc; 296 - bool match_bssid, packet_toself, packet_beacon, addr; 296 + bool match_bssid, packet_toself, packet_beacon = false, addr; 297 297 298 298 tmp_buf = skb->data + pstatus->rx_drvinfo_size + pstatus->rx_bufshift; 299 299
+1 -1
drivers/net/wireless/rtlwifi/rtl8192cu/hw.c
··· 1001 1001 err = _rtl92cu_init_mac(hw); 1002 1002 if (err) { 1003 1003 RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG, "init mac failed!\n"); 1004 - return err; 1004 + goto exit; 1005 1005 } 1006 1006 err = rtl92c_download_fw(hw); 1007 1007 if (err) {
+6
drivers/net/wireless/rtlwifi/rtl8192se/trx.c
··· 49 49 if (ieee80211_is_nullfunc(fc)) 50 50 return QSLT_HIGH; 51 51 52 + /* Kernel commit 1bf4bbb4024dcdab changed EAPOL packets to use 53 + * queue V0 at priority 7; however, the RTL8192SE appears to have 54 + * that queue at priority 6 55 + */ 56 + if (skb->priority == 7) 57 + return QSLT_VO; 52 58 return skb->priority; 53 59 } 54 60
+1 -1
drivers/scsi/scsi_netlink.c
··· 77 77 goto next_msg; 78 78 } 79 79 80 - if (!capable(CAP_SYS_ADMIN)) { 80 + if (!netlink_capable(skb, CAP_SYS_ADMIN)) { 81 81 err = -EPERM; 82 82 goto next_msg; 83 83 }
+7
include/linux/netlink.h
··· 169 169 extern int netlink_add_tap(struct netlink_tap *nt); 170 170 extern int netlink_remove_tap(struct netlink_tap *nt); 171 171 172 + bool __netlink_ns_capable(const struct netlink_skb_parms *nsp, 173 + struct user_namespace *ns, int cap); 174 + bool netlink_ns_capable(const struct sk_buff *skb, 175 + struct user_namespace *ns, int cap); 176 + bool netlink_capable(const struct sk_buff *skb, int cap); 177 + bool netlink_net_capable(const struct sk_buff *skb, int cap); 178 + 172 179 #endif /* __LINUX_NETLINK_H */
+1 -1
include/linux/sock_diag.h
··· 23 23 void sock_diag_save_cookie(void *sk, __u32 *cookie); 24 24 25 25 int sock_diag_put_meminfo(struct sock *sk, struct sk_buff *skb, int attr); 26 - int sock_diag_put_filterinfo(struct user_namespace *user_ns, struct sock *sk, 26 + int sock_diag_put_filterinfo(bool may_report_filterinfo, struct sock *sk, 27 27 struct sk_buff *skb, int attrtype); 28 28 29 29 #endif
+5 -1
include/net/af_vsock.h
··· 155 155 156 156 /**** CORE ****/ 157 157 158 - int vsock_core_init(const struct vsock_transport *t); 158 + int __vsock_core_init(const struct vsock_transport *t, struct module *owner); 159 + static inline int vsock_core_init(const struct vsock_transport *t) 160 + { 161 + return __vsock_core_init(t, THIS_MODULE); 162 + } 159 163 void vsock_core_exit(void); 160 164 161 165 /**** UTILS ****/
+5
include/net/sock.h
··· 2255 2255 int sock_recv_errqueue(struct sock *sk, struct msghdr *msg, int len, int level, 2256 2256 int type); 2257 2257 2258 + bool sk_ns_capable(const struct sock *sk, 2259 + struct user_namespace *user_ns, int cap); 2260 + bool sk_capable(const struct sock *sk, int cap); 2261 + bool sk_net_capable(const struct sock *sk, int cap); 2262 + 2258 2263 /* 2259 2264 * Enable debug/info messages 2260 2265 */
+2 -2
kernel/audit.c
··· 643 643 if ((task_active_pid_ns(current) != &init_pid_ns)) 644 644 return -EPERM; 645 645 646 - if (!capable(CAP_AUDIT_CONTROL)) 646 + if (!netlink_capable(skb, CAP_AUDIT_CONTROL)) 647 647 err = -EPERM; 648 648 break; 649 649 case AUDIT_USER: 650 650 case AUDIT_FIRST_USER_MSG ... AUDIT_LAST_USER_MSG: 651 651 case AUDIT_FIRST_USER_MSG2 ... AUDIT_LAST_USER_MSG2: 652 - if (!capable(CAP_AUDIT_WRITE)) 652 + if (!netlink_capable(skb, CAP_AUDIT_WRITE)) 653 653 err = -EPERM; 654 654 break; 655 655 default: /* bad msg */
+6 -3
net/bluetooth/hci_conn.c
··· 819 819 if (!test_and_set_bit(HCI_CONN_AUTH_PEND, &conn->flags)) { 820 820 struct hci_cp_auth_requested cp; 821 821 822 - /* encrypt must be pending if auth is also pending */ 823 - set_bit(HCI_CONN_ENCRYPT_PEND, &conn->flags); 824 - 825 822 cp.handle = cpu_to_le16(conn->handle); 826 823 hci_send_cmd(conn->hdev, HCI_OP_AUTH_REQUESTED, 827 824 sizeof(cp), &cp); 825 + 826 + /* If we're already encrypted set the REAUTH_PEND flag, 827 + * otherwise set the ENCRYPT_PEND. 828 + */ 828 829 if (conn->key_type != 0xff) 829 830 set_bit(HCI_CONN_REAUTH_PEND, &conn->flags); 831 + else 832 + set_bit(HCI_CONN_ENCRYPT_PEND, &conn->flags); 830 833 } 831 834 832 835 return 0;
+6
net/bluetooth/hci_event.c
··· 3330 3330 if (!conn) 3331 3331 goto unlock; 3332 3332 3333 + /* For BR/EDR the necessary steps are taken through the 3334 + * auth_complete event. 3335 + */ 3336 + if (conn->type != LE_LINK) 3337 + goto unlock; 3338 + 3333 3339 if (!ev->status) 3334 3340 conn->sec_level = conn->pending_sec_level; 3335 3341
+15
net/bridge/br_netlink.c
··· 445 445 return 0; 446 446 } 447 447 448 + static int br_dev_newlink(struct net *src_net, struct net_device *dev, 449 + struct nlattr *tb[], struct nlattr *data[]) 450 + { 451 + struct net_bridge *br = netdev_priv(dev); 452 + 453 + if (tb[IFLA_ADDRESS]) { 454 + spin_lock_bh(&br->lock); 455 + br_stp_change_bridge_id(br, nla_data(tb[IFLA_ADDRESS])); 456 + spin_unlock_bh(&br->lock); 457 + } 458 + 459 + return register_netdevice(dev); 460 + } 461 + 448 462 static size_t br_get_link_af_size(const struct net_device *dev) 449 463 { 450 464 struct net_port_vlans *pv; ··· 487 473 .priv_size = sizeof(struct net_bridge), 488 474 .setup = br_dev_setup, 489 475 .validate = br_validate, 476 + .newlink = br_dev_newlink, 490 477 .dellink = br_dev_delete, 491 478 }; 492 479
+2 -2
net/can/gw.c
··· 804 804 u8 limhops = 0; 805 805 int err = 0; 806 806 807 - if (!capable(CAP_NET_ADMIN)) 807 + if (!netlink_capable(skb, CAP_NET_ADMIN)) 808 808 return -EPERM; 809 809 810 810 if (nlmsg_len(nlh) < sizeof(*r)) ··· 893 893 u8 limhops = 0; 894 894 int err = 0; 895 895 896 - if (!capable(CAP_NET_ADMIN)) 896 + if (!netlink_capable(skb, CAP_NET_ADMIN)) 897 897 return -EPERM; 898 898 899 899 if (nlmsg_len(nlh) < sizeof(*r))
+9 -7
net/core/filter.c
··· 122 122 return 0; 123 123 } 124 124 125 + /* Register mappings for user programs. */ 126 + #define A_REG 0 127 + #define X_REG 7 128 + #define TMP_REG 8 129 + #define ARG2_REG 2 130 + #define ARG3_REG 3 131 + 125 132 /** 126 133 * __sk_run_filter - run a filter on a given context 127 134 * @ctx: buffer to run the filter on ··· 249 242 250 243 regs[FP_REG] = (u64) (unsigned long) &stack[ARRAY_SIZE(stack)]; 251 244 regs[ARG1_REG] = (u64) (unsigned long) ctx; 245 + regs[A_REG] = 0; 246 + regs[X_REG] = 0; 252 247 253 248 select_insn: 254 249 goto *jumptable[insn->code]; ··· 651 642 { 652 643 return raw_smp_processor_id(); 653 644 } 654 - 655 - /* Register mappings for user programs. */ 656 - #define A_REG 0 657 - #define X_REG 7 658 - #define TMP_REG 8 659 - #define ARG2_REG 2 660 - #define ARG3_REG 3 661 645 662 646 static bool convert_bpf_extensions(struct sock_filter *fp, 663 647 struct sock_filter_int **insnp)
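The filter.c hunk above zeroes the A and X register slots before the interpreter dispatches its first instruction, so a filter that reads a register before writing it sees 0 rather than stack garbage. The idea in miniature, as a toy register machine (the opcodes here are invented, not BPF encodings):

```c
#include <stdint.h>

/* Toy filter machine: regs[0] plays A, regs[1] plays X. */
enum { OP_SET_A, OP_ADD_A_X, OP_RET_A };

struct insn {
	int op;
	uint64_t imm;
};

static uint64_t run_filter(const struct insn *prog, int len)
{
	uint64_t regs[2];
	int i;

	regs[0] = 0;	/* A: the fix -- start from a known value */
	regs[1] = 0;	/* X */

	for (i = 0; i < len; i++) {
		switch (prog[i].op) {
		case OP_SET_A:
			regs[0] = prog[i].imm;
			break;
		case OP_ADD_A_X:
			regs[0] += regs[1];
			break;
		case OP_RET_A:
			return regs[0];
		}
	}
	return 0;
}
```

A program that adds X into A before either is written now deterministically returns 0 instead of leaking whatever the interpreter's stack happened to hold.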
+33 -20
net/core/rtnetlink.c
··· 774 774 return 0; 775 775 } 776 776 777 - static size_t rtnl_port_size(const struct net_device *dev) 777 + static size_t rtnl_port_size(const struct net_device *dev, 778 + u32 ext_filter_mask) 778 779 { 779 780 size_t port_size = nla_total_size(4) /* PORT_VF */ 780 781 + nla_total_size(PORT_PROFILE_MAX) /* PORT_PROFILE */ ··· 791 790 size_t port_self_size = nla_total_size(sizeof(struct nlattr)) 792 791 + port_size; 793 792 794 - if (!dev->netdev_ops->ndo_get_vf_port || !dev->dev.parent) 793 + if (!dev->netdev_ops->ndo_get_vf_port || !dev->dev.parent || 794 + !(ext_filter_mask & RTEXT_FILTER_VF)) 795 795 return 0; 796 796 if (dev_num_vf(dev->dev.parent)) 797 797 return port_self_size + vf_ports_size + ··· 828 826 + nla_total_size(ext_filter_mask 829 827 & RTEXT_FILTER_VF ? 4 : 0) /* IFLA_NUM_VF */ 830 828 + rtnl_vfinfo_size(dev, ext_filter_mask) /* IFLA_VFINFO_LIST */ 831 - + rtnl_port_size(dev) /* IFLA_VF_PORTS + IFLA_PORT_SELF */ 829 + + rtnl_port_size(dev, ext_filter_mask) /* IFLA_VF_PORTS + IFLA_PORT_SELF */ 832 830 + rtnl_link_get_size(dev) /* IFLA_LINKINFO */ 833 831 + rtnl_link_get_af_size(dev) /* IFLA_AF_SPEC */ 834 832 + nla_total_size(MAX_PHYS_PORT_ID_LEN); /* IFLA_PHYS_PORT_ID */ ··· 890 888 return 0; 891 889 } 892 890 893 - static int rtnl_port_fill(struct sk_buff *skb, struct net_device *dev) 891 + static int rtnl_port_fill(struct sk_buff *skb, struct net_device *dev, 892 + u32 ext_filter_mask) 894 893 { 895 894 int err; 896 895 897 - if (!dev->netdev_ops->ndo_get_vf_port || !dev->dev.parent) 896 + if (!dev->netdev_ops->ndo_get_vf_port || !dev->dev.parent || 897 + !(ext_filter_mask & RTEXT_FILTER_VF)) 898 898 return 0; 899 899 900 900 err = rtnl_port_self_fill(skb, dev); ··· 1083 1079 nla_nest_end(skb, vfinfo); 1084 1080 } 1085 1081 1086 - if (rtnl_port_fill(skb, dev)) 1082 + if (rtnl_port_fill(skb, dev, ext_filter_mask)) 1087 1083 goto nla_put_failure; 1088 1084 1089 1085 if (dev->rtnl_link_ops || rtnl_have_link_slave_info(dev)) { ··· 1202 1198 struct hlist_head *head; 1203 1199 struct nlattr *tb[IFLA_MAX+1]; 1204 1200 u32 ext_filter_mask = 0; 1201 + int err; 1205 1202 1206 1203 s_h = cb->args[0]; 1207 1204 s_idx = cb->args[1]; ··· 1223 1218 hlist_for_each_entry_rcu(dev, head, index_hlist) { 1224 1219 if (idx < s_idx) 1225 1220 goto cont; 1226 - if (rtnl_fill_ifinfo(skb, dev, RTM_NEWLINK, 1227 - NETLINK_CB(cb->skb).portid, 1228 - cb->nlh->nlmsg_seq, 0, 1229 - NLM_F_MULTI, 1230 - ext_filter_mask) <= 0) 1221 + err = rtnl_fill_ifinfo(skb, dev, RTM_NEWLINK, 1222 + NETLINK_CB(cb->skb).portid, 1223 + cb->nlh->nlmsg_seq, 0, 1224 + NLM_F_MULTI, 1225 + ext_filter_mask); 1226 + /* If we ran out of room on the first message, 1227 + * we're in trouble 1228 + */ 1229 + WARN_ON((err == -EMSGSIZE) && (skb->len == 0)); 1230 + 1231 + if (err <= 0) 1231 1232 goto out; 1232 1233 1233 1234 nl_dump_check_consistent(cb, nlmsg_hdr(skb)); ··· 1406 1395 return 0; 1407 1396 } 1408 1397 1409 - static int do_setlink(struct net_device *dev, struct ifinfomsg *ifm, 1398 + static int do_setlink(const struct sk_buff *skb, 1399 + struct net_device *dev, struct ifinfomsg *ifm, 1410 1400 struct nlattr **tb, char *ifname, int modified) 1411 1401 { 1412 1402 const struct net_device_ops *ops = dev->netdev_ops; ··· 1419 1407 err = PTR_ERR(net); 1420 1408 goto errout; 1421 1409 } 1422 - if (!ns_capable(net->user_ns, CAP_NET_ADMIN)) { 1410 + if (!netlink_ns_capable(skb, net->user_ns, CAP_NET_ADMIN)) { 1423 1411 err = -EPERM; 1424 1412 goto errout; 1425 1413 } ··· 1673 1661 if (err < 0) 1674 1662 goto errout; 1675 1663 1676 - err = do_setlink(dev, ifm, tb, ifname, 0); 1664 + err = do_setlink(skb, dev, ifm, tb, ifname, 0); 1677 1665 errout: 1678 1666 return err; 1679 1667 } ··· 1790 1778 } 1791 1779 EXPORT_SYMBOL(rtnl_create_link); 1792 1780 1793 - static int rtnl_group_changelink(struct net *net, int group, 1781 + static int rtnl_group_changelink(const struct sk_buff *skb, 1782 + struct net *net, int group, 1794 1783 struct ifinfomsg *ifm, 1795 1784 struct nlattr **tb) 1796 1785 { ··· 1800 1787 1801 1788 for_each_netdev(net, dev) { 1802 1789 if (dev->group == group) { 1803 - err = do_setlink(dev, ifm, tb, NULL, 0); 1790 + err = do_setlink(skb, dev, ifm, tb, NULL, 0); 1804 1791 if (err < 0) 1805 1792 return err; 1806 1793 } ··· 1942 1929 modified = 1; 1943 1930 } 1944 1931 1945 - return do_setlink(dev, ifm, tb, ifname, modified); 1932 + return do_setlink(skb, dev, ifm, tb, ifname, modified); 1946 1933 } 1947 1934 1948 1935 if (!(nlh->nlmsg_flags & NLM_F_CREATE)) { 1949 1936 if (ifm->ifi_index == 0 && tb[IFLA_GROUP]) 1950 - return rtnl_group_changelink(net, 1937 + return rtnl_group_changelink(skb, net, 1951 1938 nla_get_u32(tb[IFLA_GROUP]), 1952 1939 ifm, tb); 1953 1940 return -ENODEV; ··· 2334 2321 int err = -EINVAL; 2335 2322 __u8 *addr; 2336 2323 2337 - if (!capable(CAP_NET_ADMIN)) 2324 + if (!netlink_capable(skb, CAP_NET_ADMIN)) 2338 2325 return -EPERM; 2339 2326 2340 2327 err = nlmsg_parse(nlh, sizeof(*ndm), tb, NDA_MAX, NULL); ··· 2786 2773 sz_idx = type>>2; 2787 2774 kind = type&3; 2788 2775 2789 - if (kind != 2 && !ns_capable(net->user_ns, CAP_NET_ADMIN)) 2776 + if (kind != 2 && !netlink_net_capable(skb, CAP_NET_ADMIN)) 2790 2777 return -EPERM; 2791 2778 2792 2779 if (kind == 2 && nlh->nlmsg_flags&NLM_F_DUMP) {
+49
net/core/sock.c
@@ -145 +145 @@
 static DEFINE_MUTEX(proto_list_mutex);
 static LIST_HEAD(proto_list);
 
+/**
+ * sk_ns_capable - General socket capability test
+ * @sk: Socket to use a capability on or through
+ * @user_ns: The user namespace of the capability to use
+ * @cap: The capability to use
+ *
+ * Test to see if the opener of the socket had the capability @cap
+ * when the socket was created and the current process has it in the
+ * user namespace @user_ns.
+ */
+bool sk_ns_capable(const struct sock *sk,
+		   struct user_namespace *user_ns, int cap)
+{
+	return file_ns_capable(sk->sk_socket->file, user_ns, cap) &&
+		ns_capable(user_ns, cap);
+}
+EXPORT_SYMBOL(sk_ns_capable);
+
+/**
+ * sk_capable - Socket global capability test
+ * @sk: Socket to use a capability on or through
+ * @cap: The global capability to use
+ *
+ * Test to see if the opener of the socket had the capability @cap
+ * when the socket was created and the current process has it in all
+ * user namespaces.
+ */
+bool sk_capable(const struct sock *sk, int cap)
+{
+	return sk_ns_capable(sk, &init_user_ns, cap);
+}
+EXPORT_SYMBOL(sk_capable);
+
+/**
+ * sk_net_capable - Network namespace socket capability test
+ * @sk: Socket to use a capability on or through
+ * @cap: The capability to use
+ *
+ * Test to see if the opener of the socket had the capability @cap
+ * when the socket was created and the current process has it over
+ * the network namespace the socket is a member of.
+ */
+bool sk_net_capable(const struct sock *sk, int cap)
+{
+	return sk_ns_capable(sk, sock_net(sk)->user_ns, cap);
+}
+EXPORT_SYMBOL(sk_net_capable);
+
+
 #ifdef CONFIG_MEMCG_KMEM
 int mem_cgroup_sockets_init(struct mem_cgroup *memcg, struct cgroup_subsys *ss)
 {
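The new `sk_ns_capable()` helper above passes only when *both* checks hold: the process that opened the socket had the capability at creation time, and the current process has it now. A minimal userspace sketch of that two-factor AND (the `toy_*` names are illustrative stand-ins, not kernel APIs):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the sk_ns_capable() logic: the test passes only when
 * the socket's opener held the capability when the socket was created
 * AND the current process holds it now. */
struct toy_sock {
	bool opener_had_cap;	/* recorded at socket-creation time */
};

static bool toy_current_has_cap;	/* stands in for ns_capable() */

static bool toy_sk_ns_capable(const struct toy_sock *sk)
{
	return sk->opener_had_cap && toy_current_has_cap;
}
```

Either side failing is enough to deny: a privileged opener cannot lend its capability to an unprivileged sender, and vice versa.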
+2 -2
net/core/sock_diag.c
@@ -49 +49 @@
 }
 EXPORT_SYMBOL_GPL(sock_diag_put_meminfo);
 
-int sock_diag_put_filterinfo(struct user_namespace *user_ns, struct sock *sk,
+int sock_diag_put_filterinfo(bool may_report_filterinfo, struct sock *sk,
 			     struct sk_buff *skb, int attrtype)
 {
 	struct sock_fprog_kern *fprog;
@@ -58 +58 @@
 	unsigned int flen;
 	int err = 0;
 
-	if (!ns_capable(user_ns, CAP_NET_ADMIN)) {
+	if (!may_report_filterinfo) {
 		nla_reserve(skb, attrtype, 0);
 		return 0;
 	}
+1 -1
net/dcb/dcbnl.c
@@ -1669 +1669 @@
 	struct nlmsghdr *reply_nlh = NULL;
 	const struct reply_func *fn;
 
-	if ((nlh->nlmsg_type == RTM_SETDCB) && !capable(CAP_NET_ADMIN))
+	if ((nlh->nlmsg_type == RTM_SETDCB) && !netlink_capable(skb, CAP_NET_ADMIN))
 		return -EPERM;
 
 	ret = nlmsg_parse(nlh, sizeof(*dcb), tb, DCB_ATTR_MAX,
+2 -2
net/decnet/dn_dev.c
@@ -574 +574 @@
 	struct dn_ifaddr __rcu **ifap;
 	int err = -EINVAL;
 
-	if (!capable(CAP_NET_ADMIN))
+	if (!netlink_capable(skb, CAP_NET_ADMIN))
 		return -EPERM;
 
 	if (!net_eq(net, &init_net))
@@ -618 +618 @@
 	struct dn_ifaddr *ifa;
 	int err;
 
-	if (!capable(CAP_NET_ADMIN))
+	if (!netlink_capable(skb, CAP_NET_ADMIN))
 		return -EPERM;
 
 	if (!net_eq(net, &init_net))
+2 -2
net/decnet/dn_fib.c
@@ -505 +505 @@
 	struct nlattr *attrs[RTA_MAX+1];
 	int err;
 
-	if (!capable(CAP_NET_ADMIN))
+	if (!netlink_capable(skb, CAP_NET_ADMIN))
 		return -EPERM;
 
 	if (!net_eq(net, &init_net))
@@ -530 +530 @@
 	struct nlattr *attrs[RTA_MAX+1];
 	int err;
 
-	if (!capable(CAP_NET_ADMIN))
+	if (!netlink_capable(skb, CAP_NET_ADMIN))
 		return -EPERM;
 
 	if (!net_eq(net, &init_net))
+1 -1
net/decnet/netfilter/dn_rtmsg.c
@@ -107 +107 @@
 	if (nlh->nlmsg_len < sizeof(*nlh) || skb->len < nlh->nlmsg_len)
 		return;
 
-	if (!capable(CAP_NET_ADMIN))
+	if (!netlink_capable(skb, CAP_NET_ADMIN))
 		RCV_SKB_FAIL(-EPERM);
 
 	/* Eventually we might send routing messages too */
+2
net/ipv4/ip_tunnel.c
@@ -442 +442 @@
 		tunnel->i_seqno = ntohl(tpi->seq) + 1;
 	}
 
+	skb_reset_network_header(skb);
+
 	err = IP_ECN_decapsulate(iph, skb);
 	if (unlikely(err)) {
 		if (log_ecn_error)
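The one-line `skb_reset_network_header()` addition matters because `IP_ECN_decapsulate()` locates the inner header via the skb's network-header offset; if that offset still points at the outer header, the wrong TOS/ECN bits are read. A toy sketch of the stale-offset problem (the `toy_*` layout is illustrative, not the real skb):

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of an skb: 'data' points at the inner packet, while
 * 'network_header' is an offset from the buffer head. Before the
 * reset, the offset can still reference the outer header. */
struct toy_skb {
	const unsigned char *data;	/* current packet start (inner) */
	size_t network_header;		/* offset of the "network header" */
};

/* TOS is the second byte of an IPv4 header; ECN is its low 2 bits. */
static unsigned char toy_ecn_bits(const struct toy_skb *skb,
				  const unsigned char *head)
{
	return head[skb->network_header + 1] & 0x03;
}

/* Mirrors skb_reset_network_header(): point the offset at 'data'. */
static void toy_reset_network_header(struct toy_skb *skb,
				     const unsigned char *head)
{
	skb->network_header = (size_t)(skb->data - head);
}
```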
+1 -1
net/ipv4/tcp_cubic.c
@@ -409 +409 @@
 	ratio -= ca->delayed_ack >> ACK_RATIO_SHIFT;
 	ratio += cnt;
 
-	ca->delayed_ack = min(ratio, ACK_RATIO_LIMIT);
+	ca->delayed_ack = clamp(ratio, 1U, ACK_RATIO_LIMIT);
 }
 
 /* Some calls are for duplicates without timestamps */
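The `min()` to `clamp()` change closes the theoretical divide-by-zero: `min(ratio, LIMIT)` can still store 0 in `delayed_ack`, which is later used as a divisor, while `clamp(ratio, 1U, LIMIT)` enforces a floor of 1. A minimal sketch with stand-in helpers (the kernel's `min()`/`clamp()` are macros; `TOY_ACK_RATIO_LIMIT` mirrors tcp_cubic's 32 << ACK_RATIO_SHIFT = 512):

```c
#include <assert.h>

#define TOY_ACK_RATIO_LIMIT	512u	/* mirrors 32 << ACK_RATIO_SHIFT */

/* Plain-function stand-ins for the kernel's min()/clamp() macros. */
static unsigned int toy_min(unsigned int a, unsigned int b)
{
	return a < b ? a : b;
}

static unsigned int toy_clamp(unsigned int v, unsigned int lo, unsigned int hi)
{
	return v < lo ? lo : (v > hi ? hi : v);
}
```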
+7 -7
net/ipv4/tcp_output.c
@@ -2441 +2441 @@
 		err = tcp_transmit_skb(sk, skb, 1, GFP_ATOMIC);
 	}
 
-	if (likely(!err))
+	if (likely(!err)) {
 		TCP_SKB_CB(skb)->sacked |= TCPCB_EVER_RETRANS;
+		/* Update global TCP statistics. */
+		TCP_INC_STATS(sock_net(sk), TCP_MIB_RETRANSSEGS);
+		if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_SYN)
+			NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPSYNRETRANS);
+		tp->total_retrans++;
+	}
 	return err;
 }
 
@@ -2458 +2452 @@
 	int err = __tcp_retransmit_skb(sk, skb);
 
 	if (err == 0) {
-		/* Update global TCP statistics. */
-		TCP_INC_STATS(sock_net(sk), TCP_MIB_RETRANSSEGS);
-		if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_SYN)
-			NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPSYNRETRANS);
-		tp->total_retrans++;
-
 #if FASTRETRANS_DEBUG > 0
 		if (TCP_SKB_CB(skb)->sacked & TCPCB_SACKED_RETRANS) {
 			net_dbg_ratelimited("retrans_out leaked\n");
+2 -1
net/ipv6/ip6_fib.c
@@ -1459 +1459 @@
 
 			if (w->skip) {
 				w->skip--;
-				continue;
+				goto skip;
 			}
 
 			err = w->func(w);
@@ -1469 +1469 @@
 			w->count++;
 			continue;
 		}
+skip:
 		w->state = FWS_U;
 	case FWS_U:
 		if (fn == w->root)
+1 -1
net/ipv6/ip6mr.c
@@ -1633 +1633 @@
 {
 	struct mr6_table *mrt;
 	struct flowi6 fl6 = {
-		.flowi6_iif	= skb->skb_iif,
+		.flowi6_iif	= skb->skb_iif ? : LOOPBACK_IFINDEX,
 		.flowi6_oif	= skb->dev->ifindex,
 		.flowi6_mark	= skb->mark,
 	};
+1
net/ipv6/netfilter/ip6t_rpfilter.c
@@ -33 +33 @@
 	struct ipv6hdr *iph = ipv6_hdr(skb);
 	bool ret = false;
 	struct flowi6 fl6 = {
+		.flowi6_iif = LOOPBACK_IFINDEX,
 		.flowlabel = (* (__be32 *) iph) & IPV6_FLOWINFO_MASK,
 		.flowi6_proto = iph->nexthdr,
 		.daddr = iph->saddr,
+2
net/ipv6/route.c
@@ -1273 +1273 @@
 	struct flowi6 fl6;
 
 	memset(&fl6, 0, sizeof(fl6));
+	fl6.flowi6_iif = LOOPBACK_IFINDEX;
 	fl6.flowi6_oif = oif;
 	fl6.flowi6_mark = mark;
 	fl6.daddr = iph->daddr;
@@ -1295 +1294 @@
 	struct flowi6 fl6;
 
 	memset(&fl6, 0, sizeof(fl6));
+	fl6.flowi6_iif = LOOPBACK_IFINDEX;
 	fl6.flowi6_oif = oif;
 	fl6.flowi6_mark = mark;
 	fl6.daddr = msg->dest;
+1 -2
net/netfilter/nfnetlink.c
@@ -368 +368 @@
 static void nfnetlink_rcv(struct sk_buff *skb)
 {
 	struct nlmsghdr *nlh = nlmsg_hdr(skb);
-	struct net *net = sock_net(skb->sk);
 	int msglen;
 
 	if (nlh->nlmsg_len < NLMSG_HDRLEN ||
 	    skb->len < nlh->nlmsg_len)
 		return;
 
-	if (!ns_capable(net->user_ns, CAP_NET_ADMIN)) {
+	if (!netlink_net_capable(skb, CAP_NET_ADMIN)) {
 		netlink_ack(skb, nlh, -EPERM);
 		return;
 	}
+70 -5
net/netlink/af_netlink.c
@@ -1360 +1360 @@
 	return err;
 }
 
-static inline int netlink_capable(const struct socket *sock, unsigned int flag)
+/**
+ * __netlink_ns_capable - General netlink message capability test
+ * @nsp: NETLINK_CB of the socket buffer holding a netlink command from userspace
+ * @user_ns: The user namespace of the capability to use
+ * @cap: The capability to use
+ *
+ * Test to see if the opener of the socket we received the message
+ * from had the capability @cap in the user namespace @user_ns when
+ * the netlink socket was created and the sender of the message has
+ * it as well.
+ */
+bool __netlink_ns_capable(const struct netlink_skb_parms *nsp,
+			  struct user_namespace *user_ns, int cap)
+{
+	return sk_ns_capable(nsp->sk, user_ns, cap);
+}
+EXPORT_SYMBOL(__netlink_ns_capable);
+
+/**
+ * netlink_ns_capable - General netlink message capability test
+ * @skb: socket buffer holding a netlink command from userspace
+ * @user_ns: The user namespace of the capability to use
+ * @cap: The capability to use
+ *
+ * Test to see if the opener of the socket we received the message
+ * from had the capability @cap in the user namespace @user_ns when
+ * the netlink socket was created and the sender of the message has
+ * it as well.
+ */
+bool netlink_ns_capable(const struct sk_buff *skb,
+			struct user_namespace *user_ns, int cap)
+{
+	return __netlink_ns_capable(&NETLINK_CB(skb), user_ns, cap);
+}
+EXPORT_SYMBOL(netlink_ns_capable);
+
+/**
+ * netlink_capable - Netlink global message capability test
+ * @skb: socket buffer holding a netlink command from userspace
+ * @cap: The capability to use
+ *
+ * Test to see if the opener of the socket we received the message
+ * from had the capability @cap in all user namespaces when the
+ * netlink socket was created and the sender of the message has
+ * it as well.
+ */
+bool netlink_capable(const struct sk_buff *skb, int cap)
+{
+	return netlink_ns_capable(skb, &init_user_ns, cap);
+}
+EXPORT_SYMBOL(netlink_capable);
+
+/**
+ * netlink_net_capable - Netlink network namespace message capability test
+ * @skb: socket buffer holding a netlink command from userspace
+ * @cap: The capability to use
+ *
+ * Test to see if the opener of the socket we received the message
+ * from had the capability @cap over the network namespace of the
+ * socket when the netlink socket was created and the sender of the
+ * message has it as well.
+ */
+bool netlink_net_capable(const struct sk_buff *skb, int cap)
+{
+	return netlink_ns_capable(skb, sock_net(skb->sk)->user_ns, cap);
+}
+EXPORT_SYMBOL(netlink_net_capable);
+
+static inline int netlink_allowed(const struct socket *sock, unsigned int flag)
 {
 	return (nl_table[sock->sk->sk_protocol].flags & flag) ||
 		ns_capable(sock_net(sock->sk)->user_ns, CAP_NET_ADMIN);
@@ -1493 +1428 @@
 
 	/* Only superuser is allowed to listen multicasts */
 	if (nladdr->nl_groups) {
-		if (!netlink_capable(sock, NL_CFG_F_NONROOT_RECV))
+		if (!netlink_allowed(sock, NL_CFG_F_NONROOT_RECV))
 			return -EPERM;
 		err = netlink_realloc_groups(sk);
 		if (err)
@@ -1555 +1490 @@
 		return -EINVAL;
 
 	if ((nladdr->nl_groups || nladdr->nl_pid) &&
-	    !netlink_capable(sock, NL_CFG_F_NONROOT_SEND))
+	    !netlink_allowed(sock, NL_CFG_F_NONROOT_SEND))
 		return -EPERM;
 
 	if (!nlk->portid)
@@ -2161 +2096 @@
 		break;
 	case NETLINK_ADD_MEMBERSHIP:
 	case NETLINK_DROP_MEMBERSHIP: {
-		if (!netlink_capable(sock, NL_CFG_F_NONROOT_RECV))
+		if (!netlink_allowed(sock, NL_CFG_F_NONROOT_RECV))
 			return -EPERM;
 		err = netlink_realloc_groups(sk);
 		if (err)
@@ -2312 +2247 @@
 		dst_group = ffs(addr->nl_groups);
 		err = -EPERM;
 		if ((dst_group || dst_portid) &&
-		    !netlink_capable(sock, NL_CFG_F_NONROOT_SEND))
+		    !netlink_allowed(sock, NL_CFG_F_NONROOT_SEND))
 			goto out;
 	} else {
 		dst_portid = nlk->dst_portid;
+1 -1
net/netlink/genetlink.c
@@ -561 +561 @@
 		return -EOPNOTSUPP;
 
 	if ((ops->flags & GENL_ADMIN_PERM) &&
-	    !capable(CAP_NET_ADMIN))
+	    !netlink_capable(skb, CAP_NET_ADMIN))
 		return -EPERM;
 
 	if ((nlh->nlmsg_flags & NLM_F_DUMP) == NLM_F_DUMP) {
+6 -1
net/packet/diag.c
@@ -128 +128 @@
 
 static int sk_diag_fill(struct sock *sk, struct sk_buff *skb,
 			struct packet_diag_req *req,
+			bool may_report_filterinfo,
 			struct user_namespace *user_ns,
 			u32 portid, u32 seq, u32 flags, int sk_ino)
 {
@@ -173 +172 @@
 		goto out_nlmsg_trim;
 
 	if ((req->pdiag_show & PACKET_SHOW_FILTER) &&
-	    sock_diag_put_filterinfo(user_ns, sk, skb, PACKET_DIAG_FILTER))
+	    sock_diag_put_filterinfo(may_report_filterinfo, sk, skb,
+				     PACKET_DIAG_FILTER))
 		goto out_nlmsg_trim;
 
 	return nlmsg_end(skb, nlh);
@@ -190 +188 @@
 	struct packet_diag_req *req;
 	struct net *net;
 	struct sock *sk;
+	bool may_report_filterinfo;
 
 	net = sock_net(skb->sk);
 	req = nlmsg_data(cb->nlh);
+	may_report_filterinfo = netlink_net_capable(cb->skb, CAP_NET_ADMIN);
 
 	mutex_lock(&net->packet.sklist_lock);
 	sk_for_each(sk, &net->packet.sklist) {
@@ -204 +200 @@
 			goto next;
 
 		if (sk_diag_fill(sk, skb, req,
+				 may_report_filterinfo,
 				 sk_user_ns(NETLINK_CB(cb->skb).sk),
 				 NETLINK_CB(cb->skb).portid,
 				 cb->nlh->nlmsg_seq, NLM_F_MULTI,
+4 -4
net/phonet/pn_netlink.c
@@ -70 +70 @@
 	int err;
 	u8 pnaddr;
 
-	if (!capable(CAP_NET_ADMIN))
+	if (!netlink_capable(skb, CAP_NET_ADMIN))
 		return -EPERM;
 
-	if (!capable(CAP_SYS_ADMIN))
+	if (!netlink_capable(skb, CAP_SYS_ADMIN))
 		return -EPERM;
 
 	ASSERT_RTNL();
@@ -233 +233 @@
 	int err;
 	u8 dst;
 
-	if (!capable(CAP_NET_ADMIN))
+	if (!netlink_capable(skb, CAP_NET_ADMIN))
 		return -EPERM;
 
-	if (!capable(CAP_SYS_ADMIN))
+	if (!netlink_capable(skb, CAP_SYS_ADMIN))
 		return -EPERM;
 
 	ASSERT_RTNL();
+1 -1
net/sched/act_api.c
@@ -948 +948 @@
 	u32 portid = skb ? NETLINK_CB(skb).portid : 0;
 	int ret = 0, ovr = 0;
 
-	if ((n->nlmsg_type != RTM_GETACTION) && !capable(CAP_NET_ADMIN))
+	if ((n->nlmsg_type != RTM_GETACTION) && !netlink_capable(skb, CAP_NET_ADMIN))
 		return -EPERM;
 
 	ret = nlmsg_parse(n, sizeof(struct tcamsg), tca, TCA_ACT_MAX, NULL);
+1 -1
net/sched/cls_api.c
@@ -134 +134 @@
 	int err;
 	int tp_created = 0;
 
-	if ((n->nlmsg_type != RTM_GETTFILTER) && !capable(CAP_NET_ADMIN))
+	if ((n->nlmsg_type != RTM_GETTFILTER) && !netlink_capable(skb, CAP_NET_ADMIN))
 		return -EPERM;
 
 replay:
+3 -3
net/sched/sch_api.c
@@ -1084 +1084 @@
 	struct Qdisc *p = NULL;
 	int err;
 
-	if ((n->nlmsg_type != RTM_GETQDISC) && !capable(CAP_NET_ADMIN))
+	if ((n->nlmsg_type != RTM_GETQDISC) && !netlink_capable(skb, CAP_NET_ADMIN))
 		return -EPERM;
 
 	err = nlmsg_parse(n, sizeof(*tcm), tca, TCA_MAX, NULL);
@@ -1151 +1151 @@
 	struct Qdisc *q, *p;
 	int err;
 
-	if (!capable(CAP_NET_ADMIN))
+	if (!netlink_capable(skb, CAP_NET_ADMIN))
 		return -EPERM;
 
 replay:
@@ -1490 +1490 @@
 	u32 qid;
 	int err;
 
-	if ((n->nlmsg_type != RTM_GETTCLASS) && !capable(CAP_NET_ADMIN))
+	if ((n->nlmsg_type != RTM_GETTCLASS) && !netlink_capable(skb, CAP_NET_ADMIN))
 		return -EPERM;
 
 	err = nlmsg_parse(n, sizeof(*tcm), tca, TCA_MAX, NULL);
+6 -5
net/sched/sch_hhf.c
@@ -553 +553 @@
 	if (err < 0)
 		return err;
 
-	sch_tree_lock(sch);
-
-	if (tb[TCA_HHF_BACKLOG_LIMIT])
-		sch->limit = nla_get_u32(tb[TCA_HHF_BACKLOG_LIMIT]);
-
 	if (tb[TCA_HHF_QUANTUM])
 		new_quantum = nla_get_u32(tb[TCA_HHF_QUANTUM]);
 
@@ -562 +567 @@
 	non_hh_quantum = (u64)new_quantum * new_hhf_non_hh_weight;
 	if (non_hh_quantum > INT_MAX)
 		return -EINVAL;
+
+	sch_tree_lock(sch);
+
+	if (tb[TCA_HHF_BACKLOG_LIMIT])
+		sch->limit = nla_get_u32(tb[TCA_HHF_BACKLOG_LIMIT]);
+
 	q->quantum = new_quantum;
 	q->hhf_non_hh_weight = new_hhf_non_hh_weight;
 
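The hhf hunk fixes a locking imbalance: the old code took `sch_tree_lock()` before validating, so the `-EINVAL` path returned with the lock still held. The fix is the general "validate first, then lock, then commit" pattern, sketched here with illustrative stand-ins for the lock and config (not the qdisc API):

```c
#include <assert.h>
#include <stdbool.h>

static bool toy_locked;				/* stands in for sch_tree_lock */

static void toy_lock(void)   { toy_locked = true; }
static void toy_unlock(void) { toy_locked = false; }

/* Returns 0 on success, -1 on invalid input. All validation happens
 * before toy_lock(), so no exit path can leave the lock held. */
static int toy_change(unsigned long long quantum, unsigned int weight,
		      unsigned long long *out)
{
	unsigned long long product = (unsigned long long)quantum * weight;

	/* Validation first, while unlocked (mirrors the moved -EINVAL check). */
	if (product > 0x7fffffffULL)
		return -1;

	toy_lock();
	*out = product;		/* commit the new configuration under the lock */
	toy_unlock();
	return 0;
}
```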
+6 -1
net/sctp/protocol.c
@@ -491 +491 @@
 			continue;
 		if ((laddr->state == SCTP_ADDR_SRC) &&
 		    (AF_INET == laddr->a.sa.sa_family)) {
-			fl4->saddr = laddr->a.v4.sin_addr.s_addr;
 			fl4->fl4_sport = laddr->a.v4.sin_port;
+			flowi4_update_output(fl4,
+					     asoc->base.sk->sk_bound_dev_if,
+					     RT_CONN_FLAGS(asoc->base.sk),
+					     daddr->v4.sin_addr.s_addr,
+					     laddr->a.v4.sin_addr.s_addr);
+
 			rt = ip_route_output_key(sock_net(sk), fl4);
 			if (!IS_ERR(rt)) {
 				dst = &rt->dst;
+3 -4
net/sctp/sm_sideeffect.c
@@ -496 +496 @@
 
 	/* If the transport error count is greater than the pf_retrans
 	 * threshold, and less than pathmaxrtx, and if the current state
-	 * is not SCTP_UNCONFIRMED, then mark this transport as Partially
-	 * Failed, see SCTP Quick Failover Draft, section 5.1
+	 * is SCTP_ACTIVE, then mark this transport as Partially Failed,
+	 * see SCTP Quick Failover Draft, section 5.1
 	 */
-	if ((transport->state != SCTP_PF) &&
-	    (transport->state != SCTP_UNCONFIRMED) &&
+	if ((transport->state == SCTP_ACTIVE) &&
 	    (asoc->pf_retrans < transport->pathmaxrxt) &&
 	    (transport->error_count > asoc->pf_retrans)) {
 
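This hunk narrows the condition, not just its spelling: `state != PF && state != UNCONFIRMED` also matched other states (e.g. an inactive transport), while the fix marks only an ACTIVE transport as Partially Failed. A sketch of the behavioral difference, with illustrative enum values rather than the real SCTP constants:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-ins for the SCTP transport states. */
enum toy_state { TOY_ACTIVE, TOY_PF, TOY_UNCONFIRMED, TOY_INACTIVE };

/* Pre-fix condition: exclude PF and UNCONFIRMED, admit everything else. */
static bool old_check(enum toy_state s)
{
	return s != TOY_PF && s != TOY_UNCONFIRMED;
}

/* Post-fix condition: only an ACTIVE transport qualifies. */
static bool new_check(enum toy_state s)
{
	return s == TOY_ACTIVE;
}
```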
+1 -1
net/tipc/netlink.c
@@ -47 +47 @@
 	int hdr_space = nlmsg_total_size(GENL_HDRLEN + TIPC_GENL_HDRLEN);
 	u16 cmd;
 
-	if ((req_userhdr->cmd & 0xC000) && (!capable(CAP_NET_ADMIN)))
+	if ((req_userhdr->cmd & 0xC000) && (!netlink_capable(skb, CAP_NET_ADMIN)))
 		cmd = TIPC_CMD_NOT_NET_ADMIN;
 	else
 		cmd = req_userhdr->cmd;
+22 -25
net/vmw_vsock/af_vsock.c
@@ -1925 +1925 @@
 	.fops = &vsock_device_ops,
 };
 
-static int __vsock_core_init(void)
+int __vsock_core_init(const struct vsock_transport *t, struct module *owner)
 {
-	int err;
+	int err = mutex_lock_interruptible(&vsock_register_mutex);
+
+	if (err)
+		return err;
+
+	if (transport) {
+		err = -EBUSY;
+		goto err_busy;
+	}
+
+	/* Transport must be the owner of the protocol so that it can't
+	 * unload while there are open sockets.
+	 */
+	vsock_proto.owner = owner;
+	transport = t;
 
 	vsock_init_tables();
@@ -1965 +1951 @@
 		goto err_unregister_proto;
 	}
 
+	mutex_unlock(&vsock_register_mutex);
 	return 0;
 
 err_unregister_proto:
 	proto_unregister(&vsock_proto);
 err_misc_deregister:
 	misc_deregister(&vsock_device);
+	transport = NULL;
+err_busy:
+	mutex_unlock(&vsock_register_mutex);
 	return err;
 }
-
-int vsock_core_init(const struct vsock_transport *t)
-{
-	int retval = mutex_lock_interruptible(&vsock_register_mutex);
-	if (retval)
-		return retval;
-
-	if (transport) {
-		retval = -EBUSY;
-		goto out;
-	}
-
-	transport = t;
-	retval = __vsock_core_init();
-	if (retval)
-		transport = NULL;
-
-out:
-	mutex_unlock(&vsock_register_mutex);
-	return retval;
-}
-EXPORT_SYMBOL_GPL(vsock_core_init);
+EXPORT_SYMBOL_GPL(__vsock_core_init);
 
 void vsock_core_exit(void)
 {
@@ -1997 +2000 @@
 
 MODULE_AUTHOR("VMware, Inc.");
 MODULE_DESCRIPTION("VMware Virtual Socket Family");
-MODULE_VERSION("1.0.0.0-k");
+MODULE_VERSION("1.0.1.0-k");
 MODULE_LICENSE("GPL v2");
+1 -1
net/xfrm/xfrm_user.c
@@ -2377 +2377 @@
 	link = &xfrm_dispatch[type];
 
 	/* All operations require privileges, even GET */
-	if (!ns_capable(net->user_ns, CAP_NET_ADMIN))
+	if (!netlink_net_capable(skb, CAP_NET_ADMIN))
 		return -EPERM;
 
 	if ((type == (XFRM_MSG_GETSA - XFRM_MSG_BASE) ||
+1 -1
tools/net/bpf_dbg.c
@@ -820 +820 @@
 		r->A &= r->X;
 		break;
 	case BPF_ALU_AND | BPF_K:
-		r->A &= r->X;
+		r->A &= K;
 		break;
 	case BPF_ALU_OR | BPF_X:
 		r->A |= r->X;
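The bpf_dbg hunk fixes a copy-paste bug: the `BPF_ALU_AND | BPF_K` case must AND the accumulator with the immediate K, but the pre-fix code reused the `| BPF_X` body and ANDed with register X instead (the uninitialized-A/X interpreter issue is item 6 in the pull description). A toy sketch of the two behaviors (illustrative helpers, not the bpf_dbg code):

```c
#include <assert.h>
#include <stdint.h>

/* Minimal accumulator/index register pair of a classic BPF machine. */
struct toy_regs { uint32_t A, X; };

/* Pre-fix behavior: the immediate K is ignored, X is used instead. */
static uint32_t and_k_buggy(const struct toy_regs *r, uint32_t K)
{
	(void)K;
	return r->A & r->X;
}

/* Post-fix behavior: AND with the immediate, as BPF_K requires. */
static uint32_t and_k_fixed(const struct toy_regs *r, uint32_t K)
{
	return r->A & K;
}
```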