
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Pull networking changes from David Miller:

1) Multiply in netfilter IPVS can overflow when calculating destination
weight. From Simon Kirby.

2) Use after free fixes in IPVS from Julian Anastasov.

3) SFC driver bug fixes from Daniel Pieczko.

4) Memory leak in pcan_usb_core failure paths, from Alexey Khoroshilov.

5) Locking and encapsulation fixes to serial line CAN driver, from
Andrew Naujoks.

6) Duplex and VF handling fixes to bnx2x driver from Yaniv Rosner,
Eilon Greenstein, and Ariel Elior.

7) In lapb, if no other packets are outstanding, T1 timeouts actually
stall things and no packet gets sent. Fix from Josselin Costanzi.

8) ICMP redirects should not make it to the socket error queues, from
Duan Jiong.

9) Fix bugs in skge DMA mapping error handling, from Mikulas Patocka.

10) Fix setting of VLAN priority field on via-rhine driver, from Roger
    Luethi.

11) Fix TX stalls and VLAN promisc programming in be2net driver from
Ajit Khaparde.

12) Packet padding doesn't get handled correctly in new usbnet SG
support code, from Ming Lei.

13) Fix races in netdevice teardown wrt. network namespace closing.
From Eric W. Biederman.

14) Fix potential missed initialization of net_secret if no TCP
    connections are opened.  From Eric Dumazet.

15) Cinterion PLXX product ID in qmi_wwan driver is wrong, from
Aleksander Morgado.

16) skb_cow_head() can change skb->data and thus packet header pointers,
don't use stale ip_hdr reference in ip_tunnel code.

17) Backend state transition handling fixes in xen-netback, from Paul
Durrant.

18) Packet offset for AH protocol is handled wrong in flow dissector,
from Eric Dumazet.

19) Taking down an fq packet scheduler instance can leave stale packets
in the queues, fix from Eric Dumazet.

20) Fix performance regressions introduced by TCP Small Queues. From
Eric Dumazet.

21) IPV6 GRE tunneling code calculates max_headroom incorrectly, from
Hannes Frederic Sowa.

22) Multicast timer handlers in ipv4 and ipv6 can be the last and final
reference to the ipv4/ipv6 specific network device state, so use the
reference put that will check and release the object if the
reference hits zero. From Salam Noureddine.

23) Fix memory corruption in ip_tunnel driver, and use skb_push()
    instead of __skb_push() so that similar bugs are easier to find.
    From Steffen Klassert.

24) Add forgotten hookup of rtnl_ops in SIT and ip6tnl drivers, from
Nicolas Dichtel.

25) fq scheduler doesn't accurately rate limit in certain circumstances,
from Eric Dumazet.

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (103 commits)
pkt_sched: fq: rate limiting improvements
ip6tnl: allow to use rtnl ops on fb tunnel
sit: allow to use rtnl ops on fb tunnel
ip_tunnel: Remove double unregister of the fallback device
ip_tunnel_core: Change __skb_push back to skb_push
ip_tunnel: Add fallback tunnels to the hash lists
ip_tunnel: Fix a memory corruption in ip_tunnel_xmit
qlcnic: Fix SR-IOV configuration
ll_temac: Reset dma descriptors indexes on ndo_open
skbuff: size of hole is wrong in a comment
ipv6 mcast: use in6_dev_put in timer handlers instead of __in6_dev_put
ipv4 igmp: use in_dev_put in timer handlers instead of __in_dev_put
ethernet: moxa: fix incorrect placement of __initdata tag
ipv6: gre: correct calculation of max_headroom
powerpc/83xx: gianfar_ptp: select 1588 clock source through dts file
Revert "powerpc/83xx: gianfar_ptp: select 1588 clock source through dts file"
bonding: Fix broken promiscuity reference counting issue
tcp: TSQ can use a dynamic limit
dm9601: fix IFF_ALLMULTI handling
pkt_sched: fq: qdisc dismantle fixes
...

+1305 -849
+17 -1
Documentation/devicetree/bindings/net/fsl-tsec-phy.txt
···

 Clock Properties:

+- fsl,cksel Timer reference clock source.
 - fsl,tclk-period Timer reference clock period in nanoseconds.
 - fsl,tmr-prsc Prescaler, divides the output clock.
 - fsl,tmr-add Frequency compensation value.
···
 clock. You must choose these carefully for the clock to work right.
 Here is how to figure good values:

-  TimerOsc = system clock MHz
+  TimerOsc = selected reference clock MHz
   tclk_period = desired clock period nanoseconds
   NominalFreq = 1000 / tclk_period MHz
   FreqDivRatio = TimerOsc / NominalFreq (must be greater that 1.0)
···
 Pulse Per Second (PPS) signal, since this will be offered to the PPS
 subsystem to synchronize the Linux clock.

+Reference clock source is determined by the value, which is holded
+in CKSEL bits in TMR_CTRL register. "fsl,cksel" property keeps the
+value, which will be directly written in those bits, that is why,
+according to reference manual, the next clock sources can be used:
+
+<0> - external high precision timer reference clock (TSEC_TMR_CLK
+      input is used for this purpose);
+<1> - eTSEC system clock;
+<2> - eTSEC1 transmit clock;
+<3> - RTC clock input.
+
+When this attribute is not used, eTSEC system clock will serve as
+IEEE 1588 timer reference clock.
+
 Example:

 ptp_clock@24E00 {
···
	reg = <0x24E00 0xB0>;
	interrupts = <12 0x8 13 0x8>;
	interrupt-parent = < &ipic >;
+	fsl,cksel = <1>;
	fsl,tclk-period = <10>;
	fsl,tmr-prsc = <100>;
	fsl,tmr-add = <0x999999A4>;
+1
MAINTAINERS
···

 XEN NETWORK BACKEND DRIVER
 M:	Ian Campbell <ian.campbell@citrix.com>
+M:	Wei Liu <wei.liu2@citrix.com>
 L:	xen-devel@lists.xenproject.org (moderated for non-subscribers)
 L:	netdev@vger.kernel.org
 S:	Supported
+26 -23
drivers/bcma/driver_pci.c
···
 	}
 }

-static void bcma_core_pci_power_save(struct bcma_drv_pci *pc, bool up)
-{
-	u16 data;
-
-	if (pc->core->id.rev >= 15 && pc->core->id.rev <= 20) {
-		data = up ? 0x74 : 0x7C;
-		bcma_pcie_mdio_writeread(pc, BCMA_CORE_PCI_MDIO_BLK1,
-					 BCMA_CORE_PCI_MDIO_BLK1_MGMT1, 0x7F64);
-		bcma_pcie_mdio_writeread(pc, BCMA_CORE_PCI_MDIO_BLK1,
-					 BCMA_CORE_PCI_MDIO_BLK1_MGMT3, data);
-	} else if (pc->core->id.rev >= 21 && pc->core->id.rev <= 22) {
-		data = up ? 0x75 : 0x7D;
-		bcma_pcie_mdio_writeread(pc, BCMA_CORE_PCI_MDIO_BLK1,
-					 BCMA_CORE_PCI_MDIO_BLK1_MGMT1, 0x7E65);
-		bcma_pcie_mdio_writeread(pc, BCMA_CORE_PCI_MDIO_BLK1,
-					 BCMA_CORE_PCI_MDIO_BLK1_MGMT3, data);
-	}
-}
-
 /**************************************************
  * Init.
  **************************************************/
···
 	if (!pc->hostmode)
 		bcma_core_pci_clientmode_init(pc);
 }
+
+void bcma_core_pci_power_save(struct bcma_bus *bus, bool up)
+{
+	struct bcma_drv_pci *pc;
+	u16 data;
+
+	if (bus->hosttype != BCMA_HOSTTYPE_PCI)
+		return;
+
+	pc = &bus->drv_pci[0];
+
+	if (pc->core->id.rev >= 15 && pc->core->id.rev <= 20) {
+		data = up ? 0x74 : 0x7C;
+		bcma_pcie_mdio_writeread(pc, BCMA_CORE_PCI_MDIO_BLK1,
+					 BCMA_CORE_PCI_MDIO_BLK1_MGMT1, 0x7F64);
+		bcma_pcie_mdio_writeread(pc, BCMA_CORE_PCI_MDIO_BLK1,
+					 BCMA_CORE_PCI_MDIO_BLK1_MGMT3, data);
+	} else if (pc->core->id.rev >= 21 && pc->core->id.rev <= 22) {
+		data = up ? 0x75 : 0x7D;
+		bcma_pcie_mdio_writeread(pc, BCMA_CORE_PCI_MDIO_BLK1,
+					 BCMA_CORE_PCI_MDIO_BLK1_MGMT1, 0x7E65);
+		bcma_pcie_mdio_writeread(pc, BCMA_CORE_PCI_MDIO_BLK1,
+					 BCMA_CORE_PCI_MDIO_BLK1_MGMT3, data);
+	}
+}
+EXPORT_SYMBOL_GPL(bcma_core_pci_power_save);

 int bcma_core_pci_irq_ctl(struct bcma_drv_pci *pc, struct bcma_device *core,
 			  bool enable)
···

 	pc = &bus->drv_pci[0];

-	bcma_core_pci_power_save(pc, true);
-
 	bcma_core_pci_extend_L1timer(pc, true);
 }
 EXPORT_SYMBOL_GPL(bcma_core_pci_up);
···
 	pc = &bus->drv_pci[0];

 	bcma_core_pci_extend_L1timer(pc, false);
-
-	bcma_core_pci_power_save(pc, false);
 }
 EXPORT_SYMBOL_GPL(bcma_core_pci_down);
+2
drivers/bluetooth/ath3k.c
···
 	{ USB_DEVICE(0x04CA, 0x3008) },
 	{ USB_DEVICE(0x13d3, 0x3362) },
 	{ USB_DEVICE(0x0CF3, 0xE004) },
+	{ USB_DEVICE(0x0CF3, 0xE005) },
 	{ USB_DEVICE(0x0930, 0x0219) },
 	{ USB_DEVICE(0x0489, 0xe057) },
 	{ USB_DEVICE(0x13d3, 0x3393) },
···
 	{ USB_DEVICE(0x04ca, 0x3008), .driver_info = BTUSB_ATH3012 },
 	{ USB_DEVICE(0x13d3, 0x3362), .driver_info = BTUSB_ATH3012 },
 	{ USB_DEVICE(0x0cf3, 0xe004), .driver_info = BTUSB_ATH3012 },
+	{ USB_DEVICE(0x0cf3, 0xe005), .driver_info = BTUSB_ATH3012 },
 	{ USB_DEVICE(0x0930, 0x0219), .driver_info = BTUSB_ATH3012 },
 	{ USB_DEVICE(0x0489, 0xe057), .driver_info = BTUSB_ATH3012 },
 	{ USB_DEVICE(0x13d3, 0x3393), .driver_info = BTUSB_ATH3012 },
+5
drivers/bluetooth/btusb.c
···

 	/* Broadcom BCM20702A0 */
 	{ USB_DEVICE(0x0b05, 0x17b5) },
+	{ USB_DEVICE(0x0b05, 0x17cb) },
 	{ USB_DEVICE(0x04ca, 0x2003) },
 	{ USB_DEVICE(0x0489, 0xe042) },
 	{ USB_DEVICE(0x413c, 0x8197) },
···

 	/*Broadcom devices with vendor specific id */
 	{ USB_VENDOR_AND_INTERFACE_INFO(0x0a5c, 0xff, 0x01, 0x01) },
+
+	/* Belkin F8065bf - Broadcom based */
+	{ USB_VENDOR_AND_INTERFACE_INFO(0x050d, 0xff, 0x01, 0x01) },

 	{ }	/* Terminating entry */
 };
···
 	{ USB_DEVICE(0x04ca, 0x3008), .driver_info = BTUSB_ATH3012 },
 	{ USB_DEVICE(0x13d3, 0x3362), .driver_info = BTUSB_ATH3012 },
 	{ USB_DEVICE(0x0cf3, 0xe004), .driver_info = BTUSB_ATH3012 },
+	{ USB_DEVICE(0x0cf3, 0xe005), .driver_info = BTUSB_ATH3012 },
 	{ USB_DEVICE(0x0930, 0x0219), .driver_info = BTUSB_ATH3012 },
 	{ USB_DEVICE(0x0489, 0xe057), .driver_info = BTUSB_ATH3012 },
 	{ USB_DEVICE(0x13d3, 0x3393), .driver_info = BTUSB_ATH3012 },
+10 -3
drivers/net/bonding/bond_main.c
···
 	struct bonding *bond = netdev_priv(bond_dev);
 	struct slave *slave, *oldcurrent;
 	struct sockaddr addr;
+	int old_flags = bond_dev->flags;
 	netdev_features_t old_features = bond_dev->features;

 	/* slave is not a slave or master is not master of this slave */
···
 	 * bond_change_active_slave(..., NULL)
 	 */
 	if (!USES_PRIMARY(bond->params.mode)) {
-		/* unset promiscuity level from slave */
-		if (bond_dev->flags & IFF_PROMISC)
+		/* unset promiscuity level from slave
+		 * NOTE: The NETDEV_CHANGEADDR call above may change the value
+		 * of the IFF_PROMISC flag in the bond_dev, but we need the
+		 * value of that flag before that change, as that was the value
+		 * when this slave was attached, so we cache at the start of the
+		 * function and use it here. Same goes for ALLMULTI below
+		 */
+		if (old_flags & IFF_PROMISC)
 			dev_set_promiscuity(slave_dev, -1);

 		/* unset allmulti level from slave */
-		if (bond_dev->flags & IFF_ALLMULTI)
+		if (old_flags & IFF_ALLMULTI)
 			dev_set_allmulti(slave_dev, -1);

 		bond_hw_addr_flush(bond_dev, slave_dev);
-12
drivers/net/can/flexcan.c
···
 {
 	struct flexcan_priv *priv = netdev_priv(dev);
 	struct flexcan_regs __iomem *regs = priv->base;
-	unsigned int i;
 	int err;
 	u32 reg_mcr, reg_ctrl;
···
 	priv->reg_ctrl_default = reg_ctrl;
 	netdev_dbg(dev, "%s: writing ctrl=0x%08x", __func__, reg_ctrl);
 	flexcan_write(reg_ctrl, &regs->ctrl);
-
-	for (i = 0; i < ARRAY_SIZE(regs->cantxfg); i++) {
-		flexcan_write(0, &regs->cantxfg[i].can_ctrl);
-		flexcan_write(0, &regs->cantxfg[i].can_id);
-		flexcan_write(0, &regs->cantxfg[i].data[0]);
-		flexcan_write(0, &regs->cantxfg[i].data[1]);
-
-		/* put MB into rx queue */
-		flexcan_write(FLEXCAN_MB_CNT_CODE(0x4),
-			      &regs->cantxfg[i].can_ctrl);
-	}

 	/* acceptance mask/acceptance code (accept everything) */
 	flexcan_write(0x0, &regs->rxgmask);
+92 -51
drivers/net/can/slcan.c
···
 /* maximum rx buffer len: extended CAN frame with timestamp */
 #define SLC_MTU (sizeof("T1111222281122334455667788EA5F\r")+1)

+#define SLC_CMD_LEN 1
+#define SLC_SFF_ID_LEN 3
+#define SLC_EFF_ID_LEN 8
+
 struct slcan {
 	int			magic;
···
 {
 	struct sk_buff *skb;
 	struct can_frame cf;
-	int i, dlc_pos, tmp;
-	unsigned long ultmp;
-	char cmd = sl->rbuff[0];
+	int i, tmp;
+	u32 tmpid;
+	char *cmd = sl->rbuff;

-	if ((cmd != 't') && (cmd != 'T') && (cmd != 'r') && (cmd != 'R'))
-		return;
+	cf.can_id = 0;

-	if (cmd & 0x20) /* tiny chars 'r' 't' => standard frame format */
-		dlc_pos = 4; /* dlc position tiiid */
-	else
-		dlc_pos = 9; /* dlc position Tiiiiiiiid */
-
-	if (!((sl->rbuff[dlc_pos] >= '0') && (sl->rbuff[dlc_pos] < '9')))
-		return;
-
-	cf.can_dlc = sl->rbuff[dlc_pos] - '0'; /* get can_dlc from ASCII val */
-
-	sl->rbuff[dlc_pos] = 0; /* terminate can_id string */
-
-	if (kstrtoul(sl->rbuff+1, 16, &ultmp))
-		return;
-
-	cf.can_id = ultmp;
-
-	if (!(cmd & 0x20)) /* NO tiny chars => extended frame format */
+	switch (*cmd) {
+	case 'r':
+		cf.can_id = CAN_RTR_FLAG;
+		/* fallthrough */
+	case 't':
+		/* store dlc ASCII value and terminate SFF CAN ID string */
+		cf.can_dlc = sl->rbuff[SLC_CMD_LEN + SLC_SFF_ID_LEN];
+		sl->rbuff[SLC_CMD_LEN + SLC_SFF_ID_LEN] = 0;
+		/* point to payload data behind the dlc */
+		cmd += SLC_CMD_LEN + SLC_SFF_ID_LEN + 1;
+		break;
+	case 'R':
+		cf.can_id = CAN_RTR_FLAG;
+		/* fallthrough */
+	case 'T':
 		cf.can_id |= CAN_EFF_FLAG;
+		/* store dlc ASCII value and terminate EFF CAN ID string */
+		cf.can_dlc = sl->rbuff[SLC_CMD_LEN + SLC_EFF_ID_LEN];
+		sl->rbuff[SLC_CMD_LEN + SLC_EFF_ID_LEN] = 0;
+		/* point to payload data behind the dlc */
+		cmd += SLC_CMD_LEN + SLC_EFF_ID_LEN + 1;
+		break;
+	default:
+		return;
+	}

-	if ((cmd | 0x20) == 'r') /* RTR frame */
-		cf.can_id |= CAN_RTR_FLAG;
+	if (kstrtou32(sl->rbuff + SLC_CMD_LEN, 16, &tmpid))
+		return;
+
+	cf.can_id |= tmpid;
+
+	/* get can_dlc from sanitized ASCII value */
+	if (cf.can_dlc >= '0' && cf.can_dlc < '9')
+		cf.can_dlc -= '0';
+	else
+		return;

 	*(u64 *) (&cf.data) = 0; /* clear payload */

-	for (i = 0, dlc_pos++; i < cf.can_dlc; i++) {
-		tmp = hex_to_bin(sl->rbuff[dlc_pos++]);
-		if (tmp < 0)
-			return;
-		cf.data[i] = (tmp << 4);
-		tmp = hex_to_bin(sl->rbuff[dlc_pos++]);
-		if (tmp < 0)
-			return;
-		cf.data[i] |= tmp;
+	/* RTR frames may have a dlc > 0 but they never have any data bytes */
+	if (!(cf.can_id & CAN_RTR_FLAG)) {
+		for (i = 0; i < cf.can_dlc; i++) {
+			tmp = hex_to_bin(*cmd++);
+			if (tmp < 0)
+				return;
+			cf.data[i] = (tmp << 4);
+			tmp = hex_to_bin(*cmd++);
+			if (tmp < 0)
+				return;
+			cf.data[i] |= tmp;
+		}
 	}

 	skb = dev_alloc_skb(sizeof(struct can_frame) +
···
 /* parse tty input stream */
 static void slcan_unesc(struct slcan *sl, unsigned char s)
 {
-
 	if ((s == '\r') || (s == '\a')) { /* CR or BEL ends the pdu */
 		if (!test_and_clear_bit(SLF_ERROR, &sl->flags) &&
 		    (sl->rcount > 4)) {
···
 /* Encapsulate one can_frame and stuff into a TTY queue. */
 static void slc_encaps(struct slcan *sl, struct can_frame *cf)
 {
-	int actual, idx, i;
-	char cmd;
+	int actual, i;
+	unsigned char *pos;
+	unsigned char *endpos;
+	canid_t id = cf->can_id;
+
+	pos = sl->xbuff;

 	if (cf->can_id & CAN_RTR_FLAG)
-		cmd = 'R'; /* becomes 'r' in standard frame format */
+		*pos = 'R'; /* becomes 'r' in standard frame format (SFF) */
 	else
-		cmd = 'T'; /* becomes 't' in standard frame format */
+		*pos = 'T'; /* becomes 't' in standard frame format (SSF) */

-	if (cf->can_id & CAN_EFF_FLAG)
-		sprintf(sl->xbuff, "%c%08X%d", cmd,
-			cf->can_id & CAN_EFF_MASK, cf->can_dlc);
-	else
-		sprintf(sl->xbuff, "%c%03X%d", cmd | 0x20,
-			cf->can_id & CAN_SFF_MASK, cf->can_dlc);
+	/* determine number of chars for the CAN-identifier */
+	if (cf->can_id & CAN_EFF_FLAG) {
+		id &= CAN_EFF_MASK;
+		endpos = pos + SLC_EFF_ID_LEN;
+	} else {
+		*pos |= 0x20; /* convert R/T to lower case for SFF */
+		id &= CAN_SFF_MASK;
+		endpos = pos + SLC_SFF_ID_LEN;
+	}

-	idx = strlen(sl->xbuff);
+	/* build 3 (SFF) or 8 (EFF) digit CAN identifier */
+	pos++;
+	while (endpos >= pos) {
+		*endpos-- = hex_asc_upper[id & 0xf];
+		id >>= 4;
+	}

-	for (i = 0; i < cf->can_dlc; i++)
-		sprintf(&sl->xbuff[idx + 2*i], "%02X", cf->data[i]);
+	pos += (cf->can_id & CAN_EFF_FLAG) ? SLC_EFF_ID_LEN : SLC_SFF_ID_LEN;

-	strcat(sl->xbuff, "\r"); /* add terminating character */
+	*pos++ = cf->can_dlc + '0';
+
+	/* RTR frames may have a dlc > 0 but they never have any data bytes */
+	if (!(cf->can_id & CAN_RTR_FLAG)) {
+		for (i = 0; i < cf->can_dlc; i++)
+			pos = hex_byte_pack_upper(pos, cf->data[i]);
+	}
+
+	*pos++ = '\r';

 	/* Order of next two lines is *very* important.
 	 * When we are sending a little amount of data,
···
 	 * 14 Oct 1994  Dmitry Gorodchanin.
 	 */
 	set_bit(TTY_DO_WRITE_WAKEUP, &sl->tty->flags);
-	actual = sl->tty->ops->write(sl->tty, sl->xbuff, strlen(sl->xbuff));
-	sl->xleft = strlen(sl->xbuff) - actual;
+	actual = sl->tty->ops->write(sl->tty, sl->xbuff, pos - sl->xbuff);
+	sl->xleft = (pos - sl->xbuff) - actual;
 	sl->xhead = sl->xbuff + actual;
 	sl->dev->stats.tx_bytes += cf->can_dlc;
 }
···
 	if (!sl || sl->magic != SLCAN_MAGIC || !netif_running(sl->dev))
 		return;

+	spin_lock(&sl->lock);
 	if (sl->xleft <= 0)  {
 		/* Now serial buffer is almost free & we can start
 		 * transmission of another packet */
 		sl->dev->stats.tx_packets++;
 		clear_bit(TTY_DO_WRITE_WAKEUP, &tty->flags);
+		spin_unlock(&sl->lock);
 		netif_wake_queue(sl->dev);
 		return;
 	}
···
 	actual = tty->ops->write(tty, sl->xhead, sl->xleft);
 	sl->xleft -= actual;
 	sl->xhead += actual;
+	spin_unlock(&sl->lock);
 }

 /* Send a can_frame to a TTY queue. */
+11 -4
drivers/net/can/usb/peak_usb/pcan_usb_core.c
···
 	if (i < PCAN_USB_MAX_TX_URBS) {
 		if (i == 0) {
 			netdev_err(netdev, "couldn't setup any tx URB\n");
-			return err;
+			goto err_tx;
 		}

 		netdev_warn(netdev, "tx performance may be slow\n");
···
 	if (dev->adapter->dev_start) {
 		err = dev->adapter->dev_start(dev);
 		if (err)
-			goto failed;
+			goto err_adapter;
 	}

 	dev->state |= PCAN_USB_STATE_STARTED;
···
 	if (dev->adapter->dev_set_bus) {
 		err = dev->adapter->dev_set_bus(dev, 1);
 		if (err)
-			goto failed;
+			goto err_adapter;
 	}

 	dev->can.state = CAN_STATE_ERROR_ACTIVE;

 	return 0;

-failed:
+err_adapter:
 	if (err == -ENODEV)
 		netif_device_detach(dev->netdev);

 	netdev_warn(netdev, "couldn't submit control: %d\n", err);
+
+	for (i = 0; i < PCAN_USB_MAX_TX_URBS; i++) {
+		usb_free_urb(dev->tx_contexts[i].urb);
+		dev->tx_contexts[i].urb = NULL;
+	}
+err_tx:
+	usb_kill_anchored_urbs(&dev->rx_submitted);

 	return err;
 }
+1 -2
drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
···
 load_error_cnic1:
 	bnx2x_napi_disable_cnic(bp);
 	/* Update the number of queues without the cnic queues */
-	rc = bnx2x_set_real_num_queues(bp, 0);
-	if (rc)
+	if (bnx2x_set_real_num_queues(bp, 0))
 		BNX2X_ERR("Unable to set real_num_queues not including cnic\n");
 load_error_cnic0:
 	BNX2X_ERR("CNIC-related load failed\n");
+99 -90
drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.c
···
 #define EDC_MODE_LINEAR				0x0022
 #define EDC_MODE_LIMITING			0x0044
 #define EDC_MODE_PASSIVE_DAC			0x0055
+#define EDC_MODE_ACTIVE_DAC			0x0066

 /* ETS defines*/
 #define DCBX_INVALID_COS			(0xFF)
···
 	bnx2x_update_link_attr(params, vars->link_attr_sync);
 }

+static void bnx2x_disable_kr2(struct link_params *params,
+			      struct link_vars *vars,
+			      struct bnx2x_phy *phy)
+{
+	struct bnx2x *bp = params->bp;
+	int i;
+	static struct bnx2x_reg_set reg_set[] = {
+		/* Step 1 - Program the TX/RX alignment markers */
+		{MDIO_WC_DEVAD, MDIO_WC_REG_CL82_USERB1_TX_CTRL5, 0x7690},
+		{MDIO_WC_DEVAD, MDIO_WC_REG_CL82_USERB1_TX_CTRL7, 0xe647},
+		{MDIO_WC_DEVAD, MDIO_WC_REG_CL82_USERB1_TX_CTRL6, 0xc4f0},
+		{MDIO_WC_DEVAD, MDIO_WC_REG_CL82_USERB1_TX_CTRL9, 0x7690},
+		{MDIO_WC_DEVAD, MDIO_WC_REG_CL82_USERB1_RX_CTRL11, 0xe647},
+		{MDIO_WC_DEVAD, MDIO_WC_REG_CL82_USERB1_RX_CTRL10, 0xc4f0},
+		{MDIO_WC_DEVAD, MDIO_WC_REG_CL73_USERB0_CTRL, 0x000c},
+		{MDIO_WC_DEVAD, MDIO_WC_REG_CL73_BAM_CTRL1, 0x6000},
+		{MDIO_WC_DEVAD, MDIO_WC_REG_CL73_BAM_CTRL3, 0x0000},
+		{MDIO_WC_DEVAD, MDIO_WC_REG_CL73_BAM_CODE_FIELD, 0x0002},
+		{MDIO_WC_DEVAD, MDIO_WC_REG_ETA_CL73_OUI1, 0x0000},
+		{MDIO_WC_DEVAD, MDIO_WC_REG_ETA_CL73_OUI2, 0x0af7},
+		{MDIO_WC_DEVAD, MDIO_WC_REG_ETA_CL73_OUI3, 0x0af7},
+		{MDIO_WC_DEVAD, MDIO_WC_REG_ETA_CL73_LD_BAM_CODE, 0x0002},
+		{MDIO_WC_DEVAD, MDIO_WC_REG_ETA_CL73_LD_UD_CODE, 0x0000}
+	};
+	DP(NETIF_MSG_LINK, "Disabling 20G-KR2\n");
+
+	for (i = 0; i < ARRAY_SIZE(reg_set); i++)
+		bnx2x_cl45_write(bp, phy, reg_set[i].devad, reg_set[i].reg,
+				 reg_set[i].val);
+	vars->link_attr_sync &= ~LINK_ATTR_SYNC_KR2_ENABLE;
+	bnx2x_update_link_attr(params, vars->link_attr_sync);
+
+	vars->check_kr2_recovery_cnt = CHECK_KR2_RECOVERY_CNT;
+}
+
 static void bnx2x_warpcore_set_lpi_passthrough(struct bnx2x_phy *phy,
 					       struct link_params *params)
 {
···
 			struct link_params *params,
 			struct link_vars *vars) {
 	u16 lane, i, cl72_ctrl, an_adv = 0;
-	u16 ucode_ver;
 	struct bnx2x *bp = params->bp;
 	static struct bnx2x_reg_set reg_set[] = {
 		{MDIO_WC_DEVAD, MDIO_WC_REG_SERDESDIGITAL_CONTROL1000X2, 0x7},
···

 	/* Advertise pause */
 	bnx2x_ext_phy_set_pause(params, phy, vars);
-	/* Set KR Autoneg Work-Around flag for Warpcore version older than D108
-	 */
-	bnx2x_cl45_read(bp, phy, MDIO_WC_DEVAD,
-			MDIO_WC_REG_UC_INFO_B1_VERSION, &ucode_ver);
-	if (ucode_ver < 0xd108) {
-		DP(NETIF_MSG_LINK, "Enable AN KR work-around. WC ver:0x%x\n",
-		   ucode_ver);
-		vars->rx_tx_asic_rst = MAX_KR_LINK_RETRY;
-	}
+	vars->rx_tx_asic_rst = MAX_KR_LINK_RETRY;
 	bnx2x_cl45_read_or_write(bp, phy, MDIO_WC_DEVAD,
 				 MDIO_WC_REG_DIGITAL5_MISC7, 0x100);
···
 		bnx2x_set_aer_mmd(params, phy);

 		bnx2x_warpcore_enable_AN_KR2(phy, params, vars);
+	} else {
+		bnx2x_disable_kr2(params, vars, phy);
 	}

 	/* Enable Autoneg: only on the main lane */
···
 	struct bnx2x *bp = params->bp;
 	u32 serdes_net_if;
 	u16 gp_status1 = 0, lnkup = 0, lnkup_kr = 0;
-	u16 lane = bnx2x_get_warpcore_lane(phy, params);

 	vars->turn_to_run_wc_rt = vars->turn_to_run_wc_rt ? 0 : 1;

 	if (!vars->turn_to_run_wc_rt)
 		return;

-	/* Return if there is no link partner */
-	if (!(bnx2x_warpcore_get_sigdet(phy, params))) {
-		DP(NETIF_MSG_LINK, "bnx2x_warpcore_get_sigdet false\n");
-		return;
-	}
-
 	if (vars->rx_tx_asic_rst) {
+		u16 lane = bnx2x_get_warpcore_lane(phy, params);
 		serdes_net_if = (REG_RD(bp, params->shmem_base +
 				offsetof(struct shmem_region, dev_info.
 				port_hw_config[params->port].default_cfg)) &
···
 		/*10G KR*/
 		lnkup_kr = (gp_status1 >> (12+lane)) & 0x1;

-		DP(NETIF_MSG_LINK,
-		   "gp_status1 0x%x\n", gp_status1);
-
 		if (lnkup_kr || lnkup) {
-			vars->rx_tx_asic_rst = 0;
-			DP(NETIF_MSG_LINK,
-			   "link up, rx_tx_asic_rst 0x%x\n",
-			   vars->rx_tx_asic_rst);
+			vars->rx_tx_asic_rst = 0;
 		} else {
 			/* Reset the lane to see if link comes up.*/
 			bnx2x_warpcore_reset_lane(bp, phy, 1);
···
 		 * enabled transmitter to avoid current leakage in case
 		 * no module is connected
 		 */
-		if (bnx2x_is_sfp_module_plugged(phy, params))
-			bnx2x_sfp_module_detection(phy, params);
-		else
-			bnx2x_sfp_e3_set_transmitter(params, phy, 1);
+		if ((params->loopback_mode == LOOPBACK_NONE) ||
+		    (params->loopback_mode == LOOPBACK_EXT)) {
+			if (bnx2x_is_sfp_module_plugged(phy, params))
+				bnx2x_sfp_module_detection(phy, params);
+			else
+				bnx2x_sfp_e3_set_transmitter(params,
+							     phy, 1);
+		}

 		bnx2x_warpcore_config_sfi(phy, params);
 		break;
···
 	rc = bnx2x_get_link_speed_duplex(phy, params, vars, link_up, gp_speed,
 					 duplex);

+	/* In case of KR link down, start up the recovering procedure */
+	if ((!link_up) && (phy->media_type == ETH_PHY_KR) &&
+	    (!(phy->flags & FLAGS_WC_DUAL_MODE)))
+		vars->rx_tx_asic_rst = MAX_KR_LINK_RETRY;
+
 	DP(NETIF_MSG_LINK, "duplex %x  flow_ctrl 0x%x link_status 0x%x\n",
 	   vars->duplex, vars->flow_ctrl, vars->link_status);
 	return rc;
···
 		if (params->phy[INT_PHY].config_init)
 			params->phy[INT_PHY].config_init(phy, params, vars);
 	}
+
+	/* Re-read this value in case it was changed inside config_init due to
+	 * limitations of optic module
+	 */
+	vars->line_speed = params->phy[INT_PHY].req_line_speed;

 	/* Init external phy*/
 	if (non_ext_phy) {
···
 		if (copper_module_type &
 		    SFP_EEPROM_FC_TX_TECH_BITMASK_COPPER_ACTIVE) {
 			DP(NETIF_MSG_LINK, "Active Copper cable detected\n");
-			check_limiting_mode = 1;
+			if (phy->type == PORT_HW_CFG_XGXS_EXT_PHY_TYPE_DIRECT)
+				*edc_mode = EDC_MODE_ACTIVE_DAC;
+			else
+				check_limiting_mode = 1;
 		} else if (copper_module_type &
 			   SFP_EEPROM_FC_TX_TECH_BITMASK_COPPER_PASSIVE) {
 			DP(NETIF_MSG_LINK,
···
 		mode = MDIO_WC_REG_UC_INFO_B1_FIRMWARE_MODE_DEFAULT;
 		break;
 	case EDC_MODE_PASSIVE_DAC:
+	case EDC_MODE_ACTIVE_DAC:
 		mode = MDIO_WC_REG_UC_INFO_B1_FIRMWARE_MODE_SFP_DAC;
 		break;
 	default:
···
 			 MDIO_AN_DEVAD, MDIO_AN_REG_8481_1000T_CTRL,
 			 an_1000_val);

-	/* set 100 speed advertisement */
-	if ((phy->req_line_speed == SPEED_AUTO_NEG) &&
-	    (phy->speed_cap_mask &
-	     (PORT_HW_CFG_SPEED_CAPABILITY_D0_100M_FULL |
-	      PORT_HW_CFG_SPEED_CAPABILITY_D0_100M_HALF))) {
-		an_10_100_val |= (1<<7);
-		/* Enable autoneg and restart autoneg for legacy speeds */
-		autoneg_val |= (1<<9 | 1<<12);
-
-		if (phy->req_duplex == DUPLEX_FULL)
+	/* Set 10/100 speed advertisement */
+	if (phy->req_line_speed == SPEED_AUTO_NEG) {
+		if (phy->speed_cap_mask &
+		    PORT_HW_CFG_SPEED_CAPABILITY_D0_100M_FULL) {
+			/* Enable autoneg and restart autoneg for legacy speeds
+			 */
+			autoneg_val |= (1<<9 | 1<<12);
 			an_10_100_val |= (1<<8);
-		DP(NETIF_MSG_LINK, "Advertising 100M\n");
-	}
-	/* set 10 speed advertisement */
-	if (((phy->req_line_speed == SPEED_AUTO_NEG) &&
-	     (phy->speed_cap_mask &
-	      (PORT_HW_CFG_SPEED_CAPABILITY_D0_10M_FULL |
-	       PORT_HW_CFG_SPEED_CAPABILITY_D0_10M_HALF)) &&
-	     (phy->supported &
-	      (SUPPORTED_10baseT_Half |
-	       SUPPORTED_10baseT_Full)))) {
-		an_10_100_val |= (1<<5);
-		autoneg_val |= (1<<9 | 1<<12);
-		if (phy->req_duplex == DUPLEX_FULL)
+			DP(NETIF_MSG_LINK, "Advertising 100M-FD\n");
+		}
+
+		if (phy->speed_cap_mask &
+		    PORT_HW_CFG_SPEED_CAPABILITY_D0_100M_HALF) {
+			/* Enable autoneg and restart autoneg for legacy speeds
+			 */
+			autoneg_val |= (1<<9 | 1<<12);
+			an_10_100_val |= (1<<7);
+			DP(NETIF_MSG_LINK, "Advertising 100M-HD\n");
+		}
+
+		if ((phy->speed_cap_mask &
+		     PORT_HW_CFG_SPEED_CAPABILITY_D0_10M_FULL) &&
+		    (phy->supported & SUPPORTED_10baseT_Full)) {
 			an_10_100_val |= (1<<6);
-		DP(NETIF_MSG_LINK, "Advertising 10M\n");
+			autoneg_val |= (1<<9 | 1<<12);
+			DP(NETIF_MSG_LINK, "Advertising 10M-FD\n");
+		}
+
+		if ((phy->speed_cap_mask &
+		     PORT_HW_CFG_SPEED_CAPABILITY_D0_10M_HALF) &&
+		    (phy->supported & SUPPORTED_10baseT_Half)) {
+			an_10_100_val |= (1<<5);
+			autoneg_val |= (1<<9 | 1<<12);
+			DP(NETIF_MSG_LINK, "Advertising 10M-HD\n");
+		}
 	}

 	/* Only 10/100 are allowed to work in FORCE mode */
···
 		}
 	}
 }
-static void bnx2x_disable_kr2(struct link_params *params,
-			      struct link_vars *vars,
-			      struct bnx2x_phy *phy)
-{
-	struct bnx2x *bp = params->bp;
-	int i;
-	static struct bnx2x_reg_set reg_set[] = {
-		/* Step 1 - Program the TX/RX alignment markers */
-		{MDIO_WC_DEVAD, MDIO_WC_REG_CL82_USERB1_TX_CTRL5, 0x7690},
-		{MDIO_WC_DEVAD, MDIO_WC_REG_CL82_USERB1_TX_CTRL7, 0xe647},
-		{MDIO_WC_DEVAD, MDIO_WC_REG_CL82_USERB1_TX_CTRL6, 0xc4f0},
-		{MDIO_WC_DEVAD, MDIO_WC_REG_CL82_USERB1_TX_CTRL9, 0x7690},
-		{MDIO_WC_DEVAD, MDIO_WC_REG_CL82_USERB1_RX_CTRL11, 0xe647},
-		{MDIO_WC_DEVAD, MDIO_WC_REG_CL82_USERB1_RX_CTRL10, 0xc4f0},
-		{MDIO_WC_DEVAD, MDIO_WC_REG_CL73_USERB0_CTRL, 0x000c},
-		{MDIO_WC_DEVAD, MDIO_WC_REG_CL73_BAM_CTRL1, 0x6000},
-		{MDIO_WC_DEVAD, MDIO_WC_REG_CL73_BAM_CTRL3, 0x0000},
-		{MDIO_WC_DEVAD, MDIO_WC_REG_CL73_BAM_CODE_FIELD, 0x0002},
-		{MDIO_WC_DEVAD, MDIO_WC_REG_ETA_CL73_OUI1, 0x0000},
-		{MDIO_WC_DEVAD, MDIO_WC_REG_ETA_CL73_OUI2, 0x0af7},
-		{MDIO_WC_DEVAD, MDIO_WC_REG_ETA_CL73_OUI3, 0x0af7},
-		{MDIO_WC_DEVAD, MDIO_WC_REG_ETA_CL73_LD_BAM_CODE, 0x0002},
-		{MDIO_WC_DEVAD, MDIO_WC_REG_ETA_CL73_LD_UD_CODE, 0x0000}
-	};
-	DP(NETIF_MSG_LINK, "Disabling 20G-KR2\n");
-
-	for (i = 0; i < ARRAY_SIZE(reg_set); i++)
-		bnx2x_cl45_write(bp, phy, reg_set[i].devad, reg_set[i].reg,
-				 reg_set[i].val);
-	vars->link_attr_sync &= ~LINK_ATTR_SYNC_KR2_ENABLE;
-	bnx2x_update_link_attr(params, vars->link_attr_sync);
-
-	vars->check_kr2_recovery_cnt = CHECK_KR2_RECOVERY_CNT;
-	/* Restart AN on leading lane */
-	bnx2x_warpcore_restart_AN_KR(phy, params);
-}
-
 static void bnx2x_kr2_recovery(struct link_params *params,
 			       struct link_vars *vars,
 			       struct bnx2x_phy *phy)
···
 		/* Disable KR2 on both lanes */
 		DP(NETIF_MSG_LINK, "BP=0x%x, NP=0x%x\n", base_page, next_page);
 		bnx2x_disable_kr2(params, vars, phy);
+		/* Restart AN on leading lane */
+		bnx2x_warpcore_restart_AN_KR(phy, params);
 		return;
 	}
 }
+15 -9
drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
···
 		attn.sig[3] = REG_RD(bp,
 			MISC_REG_AEU_AFTER_INVERT_4_FUNC_0 +
 			port*4);
+		/* Since MCP attentions can't be disabled inside the block, we need to
+		 * read AEU registers to see whether they're currently disabled
+		 */
+		attn.sig[3] &= ((REG_RD(bp,
+			!port ? MISC_REG_AEU_ENABLE4_FUNC_0_OUT_0
+			      : MISC_REG_AEU_ENABLE4_FUNC_1_OUT_0) &
+			MISC_AEU_ENABLE_MCP_PRTY_BITS) |
+			~MISC_AEU_ENABLE_MCP_PRTY_BITS);
 
 		if (!CHIP_IS_E1x(bp))
 			attn.sig[4] = REG_RD(bp,
···
 	if (IS_PF(bp) &&
 	    !BP_NOMCP(bp)) {
 		int mb_idx = BP_FW_MB_IDX(bp);
-		u32 drv_pulse;
-		u32 mcp_pulse;
+		u16 drv_pulse;
+		u16 mcp_pulse;
 
 		++bp->fw_drv_pulse_wr_seq;
 		bp->fw_drv_pulse_wr_seq &= DRV_PULSE_SEQ_MASK;
-		/* TBD - add SYSTEM_TIME */
 		drv_pulse = bp->fw_drv_pulse_wr_seq;
 		bnx2x_drv_pulse(bp);
 
 		mcp_pulse = (SHMEM_RD(bp, func_mb[mb_idx].mcp_pulse_mb) &
 			     MCP_PULSE_SEQ_MASK);
 		/* The delta between driver pulse and mcp response
-		 * should be 1 (before mcp response) or 0 (after mcp response)
+		 * should not get too big. If the MFW is more than 5 pulses
+		 * behind, we should worry about it enough to generate an error
+		 * log.
 		 */
-		if ((drv_pulse != mcp_pulse) &&
-		    (drv_pulse != ((mcp_pulse + 1) & MCP_PULSE_SEQ_MASK))) {
-			/* someone lost a heartbeat... */
-			BNX2X_ERR("drv_pulse (0x%x) != mcp_pulse (0x%x)\n",
+		if (((drv_pulse - mcp_pulse) & MCP_PULSE_SEQ_MASK) > 5)
+			BNX2X_ERR("MFW seems hanged: drv_pulse (0x%x) != mcp_pulse (0x%x)\n",
 				  drv_pulse, mcp_pulse);
-		}
 	}
 
 	if (bp->state == BNX2X_STATE_OPEN)
+4 -1
drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
···
 		fid = GET_FIELD((val), IGU_REG_MAPPING_MEMORY_FID);
 		if (fid & IGU_FID_ENCODE_IS_PF)
 			current_pf = fid & IGU_FID_PF_NUM_MASK;
-		else if (current_pf == BP_ABS_FUNC(bp))
+		else if (current_pf == BP_FUNC(bp))
 			bnx2x_vf_set_igu_info(bp, sb_id,
 					      (fid & IGU_FID_VF_NUM_MASK));
 		DP(BNX2X_MSG_IOV, "%s[%d], igu_sb_id=%d, msix=%d\n",
···
 		/* set local queue arrays */
 		vf->vfqs = &bp->vfdb->vfqs[qcount];
 		qcount += vf_sb_count(vf);
+		bnx2x_iov_static_resc(bp, vf);
 	}
 
 	/* prepare msix vectors in VF configuration space */
···
 		bnx2x_pretend_func(bp, HW_VF_HANDLE(bp, vf_idx));
 		REG_WR(bp, PCICFG_OFFSET + GRC_CONFIG_REG_VF_MSIX_CONTROL,
 		       num_vf_queues);
+		DP(BNX2X_MSG_IOV, "set msix vec num in VF %d cfg space to %d\n",
+		   vf_idx, num_vf_queues);
 	}
 	bnx2x_pretend_func(bp, BP_ABS_FUNC(bp));
+24 -26
drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c
··· 1765 1765 switch (mbx->first_tlv.tl.type) { 1766 1766 case CHANNEL_TLV_ACQUIRE: 1767 1767 bnx2x_vf_mbx_acquire(bp, vf, mbx); 1768 - break; 1768 + return; 1769 1769 case CHANNEL_TLV_INIT: 1770 1770 bnx2x_vf_mbx_init_vf(bp, vf, mbx); 1771 - break; 1771 + return; 1772 1772 case CHANNEL_TLV_SETUP_Q: 1773 1773 bnx2x_vf_mbx_setup_q(bp, vf, mbx); 1774 - break; 1774 + return; 1775 1775 case CHANNEL_TLV_SET_Q_FILTERS: 1776 1776 bnx2x_vf_mbx_set_q_filters(bp, vf, mbx); 1777 - break; 1777 + return; 1778 1778 case CHANNEL_TLV_TEARDOWN_Q: 1779 1779 bnx2x_vf_mbx_teardown_q(bp, vf, mbx); 1780 - break; 1780 + return; 1781 1781 case CHANNEL_TLV_CLOSE: 1782 1782 bnx2x_vf_mbx_close_vf(bp, vf, mbx); 1783 - break; 1783 + return; 1784 1784 case CHANNEL_TLV_RELEASE: 1785 1785 bnx2x_vf_mbx_release_vf(bp, vf, mbx); 1786 - break; 1786 + return; 1787 1787 case CHANNEL_TLV_UPDATE_RSS: 1788 1788 bnx2x_vf_mbx_update_rss(bp, vf, mbx); 1789 - break; 1789 + return; 1790 1790 } 1791 1791 1792 1792 } else { ··· 1802 1802 for (i = 0; i < 20; i++) 1803 1803 DP_CONT(BNX2X_MSG_IOV, "%x ", 1804 1804 mbx->msg->req.tlv_buf_size.tlv_buffer[i]); 1805 + } 1805 1806 1806 - /* test whether we can respond to the VF (do we have an address 1807 - * for it?) 1807 + /* can we respond to VF (do we have an address for it?) */ 1808 + if (vf->state == VF_ACQUIRED || vf->state == VF_ENABLED) { 1809 + /* mbx_resp uses the op_rc of the VF */ 1810 + vf->op_rc = PFVF_STATUS_NOT_SUPPORTED; 1811 + 1812 + /* notify the VF that we do not support this request */ 1813 + bnx2x_vf_mbx_resp(bp, vf); 1814 + } else { 1815 + /* can't send a response since this VF is unknown to us 1816 + * just ack the FW to release the mailbox and unlock 1817 + * the channel. 
1808 1818 */ 1809 - if (vf->state == VF_ACQUIRED || vf->state == VF_ENABLED) { 1810 - /* mbx_resp uses the op_rc of the VF */ 1811 - vf->op_rc = PFVF_STATUS_NOT_SUPPORTED; 1812 - 1813 - /* notify the VF that we do not support this request */ 1814 - bnx2x_vf_mbx_resp(bp, vf); 1815 - } else { 1816 - /* can't send a response since this VF is unknown to us 1817 - * just ack the FW to release the mailbox and unlock 1818 - * the channel. 1819 - */ 1820 - storm_memset_vf_mbx_ack(bp, vf->abs_vfid); 1821 - mmiowb(); 1822 - bnx2x_unlock_vf_pf_channel(bp, vf, 1823 - mbx->first_tlv.tl.type); 1824 - } 1819 + storm_memset_vf_mbx_ack(bp, vf->abs_vfid); 1820 + /* Firmware ack should be written before unlocking channel */ 1821 + mmiowb(); 1822 + bnx2x_unlock_vf_pf_channel(bp, vf, mbx->first_tlv.tl.type); 1825 1823 } 1826 1824 } 1827 1825
+2
drivers/net/ethernet/emulex/benet/be.h
···
 #define BE_MIN_MTU			256
 
 #define BE_NUM_VLANS_SUPPORTED		64
+#define BE_UMC_NUM_VLANS_SUPPORTED	15
 #define BE_MAX_EQD			96u
 #define BE_MAX_TX_FRAG_COUNT		30
···
 #define BE_FLAGS_LINK_STATUS_INIT	1
 #define BE_FLAGS_WORKER_SCHEDULED	(1 << 3)
+#define BE_FLAGS_VLAN_PROMISC		(1 << 4)
 #define BE_FLAGS_NAPI_ENABLED		(1 << 9)
 #define BE_UC_PMAC_COUNT		30
 #define BE_VF_UC_PMAC_COUNT		2
+9
drivers/net/ethernet/emulex/benet/be_cmds.c
···
 			dev_err(&adapter->pdev->dev,
 				"opcode %d-%d failed:status %d-%d\n",
 				opcode, subsystem, compl_status, extd_status);
+
+			if (extd_status == MCC_ADDL_STS_INSUFFICIENT_RESOURCES)
+				return extd_status;
 		}
 	}
 done:
···
 	} else if (flags & IFF_ALLMULTI) {
 		req->if_flags_mask = req->if_flags =
 				cpu_to_le32(BE_IF_FLAGS_MCAST_PROMISCUOUS);
+	} else if (flags & BE_FLAGS_VLAN_PROMISC) {
+		req->if_flags_mask = cpu_to_le32(BE_IF_FLAGS_VLAN_PROMISCUOUS);
+
+		if (value == ON)
+			req->if_flags =
+				cpu_to_le32(BE_IF_FLAGS_VLAN_PROMISCUOUS);
 	} else {
 		struct netdev_hw_addr *ha;
 		int i = 0;
+3 -1
drivers/net/ethernet/emulex/benet/be_cmds.h
···
 	MCC_STATUS_NOT_SUPPORTED = 66
 };
 
+#define MCC_ADDL_STS_INSUFFICIENT_RESOURCES	0x16
+
 #define CQE_STATUS_COMPL_MASK		0xFFFF
 #define CQE_STATUS_COMPL_SHIFT		0	/* bits 0 - 15 */
 #define CQE_STATUS_EXTD_MASK		0xFFFF
···
 	u8 acpi_params;
 	u8 wol_param;
 	u16 rsvd7;
-	u32 rsvd8[3];
+	u32 rsvd8[7];
 } __packed;
 
 struct be_cmd_req_get_func_config {
+46 -30
drivers/net/ethernet/emulex/benet/be_main.c
···
 	unsigned int eth_hdr_len;
 	struct iphdr *ip;
 
-	/* Lancer ASIC has a bug wherein packets that are 32 bytes or less
+	/* Lancer, SH-R ASICs have a bug wherein Packets that are 32 bytes or less
 	 * may cause a transmit stall on that port. So the work-around is to
-	 * pad such packets to a 36-byte length.
+	 * pad short packets (<= 32 bytes) to a 36-byte length.
 	 */
-	if (unlikely(lancer_chip(adapter) && skb->len <= 32)) {
+	if (unlikely(!BEx_chip(adapter) && skb->len <= 32)) {
 		if (skb_padto(skb, 36))
 			goto tx_drop;
 		skb->len = 36;
···
 	status = be_cmd_vlan_config(adapter, adapter->if_handle,
 				    vids, num, 1, 0);
 
-	/* Set to VLAN promisc mode as setting VLAN filter failed */
 	if (status) {
-		dev_info(&adapter->pdev->dev, "Exhausted VLAN HW filters.\n");
-		dev_info(&adapter->pdev->dev, "Disabling HW VLAN filtering.\n");
-		goto set_vlan_promisc;
+		/* Set to VLAN promisc mode as setting VLAN filter failed */
+		if (status == MCC_ADDL_STS_INSUFFICIENT_RESOURCES)
+			goto set_vlan_promisc;
+		dev_err(&adapter->pdev->dev,
+			"Setting HW VLAN filtering failed.\n");
+	} else {
+		if (adapter->flags & BE_FLAGS_VLAN_PROMISC) {
+			/* hw VLAN filtering re-enabled. */
+			status = be_cmd_rx_filter(adapter,
+						  BE_FLAGS_VLAN_PROMISC, OFF);
+			if (!status) {
+				dev_info(&adapter->pdev->dev,
+					 "Disabling VLAN Promiscuous mode.\n");
+				adapter->flags &= ~BE_FLAGS_VLAN_PROMISC;
+				dev_info(&adapter->pdev->dev,
+					 "Re-Enabling HW VLAN filtering\n");
+			}
+		}
 	}
 
 	return status;
 
 set_vlan_promisc:
-	status = be_cmd_vlan_config(adapter, adapter->if_handle,
-				    NULL, 0, 1, 1);
+	dev_warn(&adapter->pdev->dev, "Exhausted VLAN HW filters.\n");
+
+	status = be_cmd_rx_filter(adapter, BE_FLAGS_VLAN_PROMISC, ON);
+	if (!status) {
+		dev_info(&adapter->pdev->dev, "Enable VLAN Promiscuous mode\n");
+		dev_info(&adapter->pdev->dev, "Disabling HW VLAN filtering\n");
+		adapter->flags |= BE_FLAGS_VLAN_PROMISC;
+	} else
+		dev_err(&adapter->pdev->dev,
+			"Failed to enable VLAN Promiscuous mode.\n");
 	return status;
 }
···
 	struct be_adapter *adapter = netdev_priv(netdev);
 	int status = 0;
 
-	if (!lancer_chip(adapter) && !be_physfn(adapter)) {
-		status = -EINVAL;
-		goto ret;
-	}
 
 	/* Packets with VID 0 are always received by Lancer by default */
 	if (lancer_chip(adapter) && vid == 0)
···
 	struct be_adapter *adapter = netdev_priv(netdev);
 	int status = 0;
-
-	if (!lancer_chip(adapter) && !be_physfn(adapter)) {
-		status = -EINVAL;
-		goto ret;
-	}
 
 	/* Packets with VID 0 are always received by Lancer by default */
 	if (lancer_chip(adapter) && vid == 0)
···
 	vi->vf = vf;
 	vi->tx_rate = vf_cfg->tx_rate;
-	vi->vlan = vf_cfg->vlan_tag;
-	vi->qos = 0;
+	vi->vlan = vf_cfg->vlan_tag & VLAN_VID_MASK;
+	vi->qos = vf_cfg->vlan_tag >> VLAN_PRIO_SHIFT;
 	memcpy(&vi->mac, vf_cfg->mac_addr, ETH_ALEN);
 
 	return 0;
···
 			  int vf, u16 vlan, u8 qos)
 {
 	struct be_adapter *adapter = netdev_priv(netdev);
+	struct be_vf_cfg *vf_cfg = &adapter->vf_cfg[vf];
 	int status = 0;
 
 	if (!sriov_enabled(adapter))
 		return -EPERM;
 
-	if (vf >= adapter->num_vfs || vlan > 4095)
+	if (vf >= adapter->num_vfs || vlan > 4095 || qos > 7)
 		return -EINVAL;
 
-	if (vlan) {
-		if (adapter->vf_cfg[vf].vlan_tag != vlan) {
+	if (vlan || qos) {
+		vlan |= qos << VLAN_PRIO_SHIFT;
+		if (vf_cfg->vlan_tag != vlan) {
 			/* If this is new value, program it. Else skip. */
-			adapter->vf_cfg[vf].vlan_tag = vlan;
-
-			status = be_cmd_set_hsw_config(adapter, vlan,
-				vf + 1, adapter->vf_cfg[vf].if_handle, 0);
+			vf_cfg->vlan_tag = vlan;
+			status = be_cmd_set_hsw_config(adapter, vlan, vf + 1,
+						       vf_cfg->if_handle, 0);
 		}
 	} else {
 		/* Reset Transparent Vlan Tagging. */
-		adapter->vf_cfg[vf].vlan_tag = 0;
-		vlan = adapter->vf_cfg[vf].def_vid;
+		vf_cfg->vlan_tag = 0;
+		vlan = vf_cfg->def_vid;
 		status = be_cmd_set_hsw_config(adapter, vlan, vf + 1,
-					       adapter->vf_cfg[vf].if_handle, 0);
+					       vf_cfg->if_handle, 0);
 	}
 
···
 	if (adapter->function_mode & FLEX10_MODE)
 		res->max_vlans = BE_NUM_VLANS_SUPPORTED/8;
+	else if (adapter->function_mode & UMC_ENABLED)
+		res->max_vlans = BE_UMC_NUM_VLANS_SUPPORTED;
 	else
 		res->max_vlans = BE_NUM_VLANS_SUPPORTED;
 	res->max_mcast_mac = BE_MAX_MC;
+3 -1
drivers/net/ethernet/freescale/gianfar_ptp.c
···
 	err = -ENODEV;
 
 	etsects->caps = ptp_gianfar_caps;
-	etsects->cksel = DEFAULT_CKSEL;
+
+	if (get_of_u32(node, "fsl,cksel", &etsects->cksel))
+		etsects->cksel = DEFAULT_CKSEL;
 
 	if (get_of_u32(node, "fsl,tclk-period", &etsects->tclk_period) ||
 	    get_of_u32(node, "fsl,tmr-prsc", &etsects->tmr_prsc) ||
+3 -4
drivers/net/ethernet/intel/i40e/i40e_adminq.c
···
 	details = I40E_ADMINQ_DETAILS(hw->aq.asq, hw->aq.asq.next_to_use);
 	if (cmd_details) {
-		memcpy(details, cmd_details,
-		       sizeof(struct i40e_asq_cmd_details));
+		*details = *cmd_details;
 
 		/* If the cmd_details are defined copy the cookie. The
 		 * cpu_to_le32 is not needed here because the data is ignored
···
 	desc_on_ring = I40E_ADMINQ_DESC(hw->aq.asq, hw->aq.asq.next_to_use);
 
 	/* if the desc is available copy the temp desc to the right place */
-	memcpy(desc_on_ring, desc, sizeof(struct i40e_aq_desc));
+	*desc_on_ring = *desc;
 
 	/* if buff is not NULL assume indirect command */
 	if (buff != NULL) {
···
 	/* if ready, copy the desc back to temp */
 	if (i40e_asq_done(hw)) {
-		memcpy(desc, desc_on_ring, sizeof(struct i40e_aq_desc));
+		*desc = *desc_on_ring;
 		if (buff != NULL)
 			memcpy(buff, dma_buff->va, buff_size);
 		retval = le16_to_cpu(desc->retval);
+1 -1
drivers/net/ethernet/intel/i40e/i40e_common.c
···
 	/* save link status information */
 	if (link)
-		memcpy(link, hw_link_info, sizeof(struct i40e_link_status));
+		*link = *hw_link_info;
 
 	/* flag cleared so helper functions don't call AQ again */
 	hw->phy.get_link_info = false;
+82 -80
drivers/net/ethernet/intel/i40e/i40e_main.c
···
 	mem->size = ALIGN(size, alignment);
 	mem->va = dma_zalloc_coherent(&pf->pdev->dev, mem->size,
 				      &mem->pa, GFP_KERNEL);
-	if (mem->va)
-		return 0;
+	if (!mem->va)
+		return -ENOMEM;
 
-	return -ENOMEM;
+	return 0;
 }
 
 /**
···
 	mem->size = size;
 	mem->va = kzalloc(size, GFP_KERNEL);
 
-	if (mem->va)
-		return 0;
+	if (!mem->va)
+		return -ENOMEM;
 
-	return -ENOMEM;
+	return 0;
 }
 
 /**
···
 			 u16 needed, u16 id)
 {
 	int ret = -ENOMEM;
-	int i = 0;
-	int j = 0;
+	int i, j;
 
 	if (!pile || needed == 0 || id >= I40E_PILE_VALID_BIT) {
 		dev_info(&pf->pdev->dev,
···
 	/* start the linear search with an imperfect hint */
 	i = pile->search_hint;
-	while (i < pile->num_entries && ret < 0) {
+	while (i < pile->num_entries) {
 		/* skip already allocated entries */
 		if (pile->list[i] & I40E_PILE_VALID_BIT) {
 			i++;
···
 			pile->list[i+j] = id | I40E_PILE_VALID_BIT;
 			ret = i;
 			pile->search_hint = i + j;
+			break;
 		} else {
 			/* not enough, so skip over it and continue looking */
 			i += j;
···
 	bool add_happened = false;
 	int filter_list_len = 0;
 	u32 changed_flags = 0;
-	i40e_status ret = 0;
+	i40e_status aq_ret = 0;
 	struct i40e_pf *pf;
 	int num_add = 0;
 	int num_del = 0;
···
 			/* flush a full buffer */
 			if (num_del == filter_list_len) {
-				ret = i40e_aq_remove_macvlan(&pf->hw,
+				aq_ret = i40e_aq_remove_macvlan(&pf->hw,
 					    vsi->seid, del_list, num_del,
 					    NULL);
 				num_del = 0;
 				memset(del_list, 0, sizeof(*del_list));
 
-				if (ret)
+				if (aq_ret)
 					dev_info(&pf->pdev->dev,
 						 "ignoring delete macvlan error, err %d, aq_err %d while flushing a full buffer\n",
-						 ret,
+						 aq_ret,
 						 pf->hw.aq.asq_last_status);
 			}
 		}
 		if (num_del) {
-			ret = i40e_aq_remove_macvlan(&pf->hw, vsi->seid,
+			aq_ret = i40e_aq_remove_macvlan(&pf->hw, vsi->seid,
 						    del_list, num_del, NULL);
 			num_del = 0;
 
-			if (ret)
+			if (aq_ret)
 				dev_info(&pf->pdev->dev,
 					 "ignoring delete macvlan error, err %d, aq_err %d\n",
-					 ret, pf->hw.aq.asq_last_status);
+					 aq_ret, pf->hw.aq.asq_last_status);
 		}
 
 		kfree(del_list);
···
 			/* flush a full buffer */
 			if (num_add == filter_list_len) {
-				ret = i40e_aq_add_macvlan(&pf->hw,
-							  vsi->seid,
-							  add_list,
-							  num_add,
-							  NULL);
+				aq_ret = i40e_aq_add_macvlan(&pf->hw, vsi->seid,
+							     add_list, num_add,
+							     NULL);
 				num_add = 0;
 
-				if (ret)
+				if (aq_ret)
 					break;
 				memset(add_list, 0, sizeof(*add_list));
 			}
 		}
 		if (num_add) {
-			ret = i40e_aq_add_macvlan(&pf->hw, vsi->seid,
-						  add_list, num_add, NULL);
+			aq_ret = i40e_aq_add_macvlan(&pf->hw, vsi->seid,
+						     add_list, num_add, NULL);
 			num_add = 0;
 		}
 		kfree(add_list);
 		add_list = NULL;
 
-		if (add_happened && (!ret)) {
+		if (add_happened && (!aq_ret)) {
 			/* do nothing */;
-		} else if (add_happened && (ret)) {
+		} else if (add_happened && (aq_ret)) {
 			dev_info(&pf->pdev->dev,
 				 "add filter failed, err %d, aq_err %d\n",
-				 ret, pf->hw.aq.asq_last_status);
+				 aq_ret, pf->hw.aq.asq_last_status);
 			if ((pf->hw.aq.asq_last_status == I40E_AQ_RC_ENOSPC) &&
 			    !test_bit(__I40E_FILTER_OVERFLOW_PROMISC,
 				      &vsi->state)) {
···
 	if (changed_flags & IFF_ALLMULTI) {
 		bool cur_multipromisc;
 		cur_multipromisc = !!(vsi->current_netdev_flags & IFF_ALLMULTI);
-		ret = i40e_aq_set_vsi_multicast_promiscuous(&vsi->back->hw,
-							    vsi->seid,
-							    cur_multipromisc,
-							    NULL);
-		if (ret)
+		aq_ret = i40e_aq_set_vsi_multicast_promiscuous(&vsi->back->hw,
+							       vsi->seid,
+							       cur_multipromisc,
+							       NULL);
+		if (aq_ret)
 			dev_info(&pf->pdev->dev,
 				 "set multi promisc failed, err %d, aq_err %d\n",
-				 ret, pf->hw.aq.asq_last_status);
+				 aq_ret, pf->hw.aq.asq_last_status);
 	}
 	if ((changed_flags & IFF_PROMISC) || promisc_forced_on) {
 		bool cur_promisc;
 		cur_promisc = (!!(vsi->current_netdev_flags & IFF_PROMISC) ||
 			       test_bit(__I40E_FILTER_OVERFLOW_PROMISC,
 					&vsi->state));
-		ret = i40e_aq_set_vsi_unicast_promiscuous(&vsi->back->hw,
-							  vsi->seid,
-							  cur_promisc,
-							  NULL);
-		if (ret)
+		aq_ret = i40e_aq_set_vsi_unicast_promiscuous(&vsi->back->hw,
+							     vsi->seid,
+							     cur_promisc, NULL);
+		if (aq_ret)
 			dev_info(&pf->pdev->dev,
 				 "set uni promisc failed, err %d, aq_err %d\n",
-				 ret, pf->hw.aq.asq_last_status);
+				 aq_ret, pf->hw.aq.asq_last_status);
 	}
 
 	clear_bit(__I40E_CONFIG_BUSY, &vsi->state);
···
 * i40e_vsi_kill_vlan - Remove vsi membership for given vlan
 * @vsi: the vsi being configured
 * @vid: vlan id to be removed (0 = untagged only , -1 = any)
+ *
+ * Return: 0 on success or negative otherwise
 **/
 int i40e_vsi_kill_vlan(struct i40e_vsi *vsi, s16 vid)
 {
···
 * i40e_vlan_rx_add_vid - Add a vlan id filter to HW offload
 * @netdev: network interface to be adjusted
 * @vid: vlan id to be added
+ *
+ * net_device_ops implementation for adding vlan ids
 **/
 static int i40e_vlan_rx_add_vid(struct net_device *netdev,
				 __always_unused __be16 proto, u16 vid)
 {
 	struct i40e_netdev_priv *np = netdev_priv(netdev);
 	struct i40e_vsi *vsi = np->vsi;
-	int ret;
+	int ret = 0;
 
 	if (vid > 4095)
-		return 0;
+		return -EINVAL;
 
-	netdev_info(vsi->netdev, "adding %pM vid=%d\n",
-		    netdev->dev_addr, vid);
+	netdev_info(netdev, "adding %pM vid=%d\n", netdev->dev_addr, vid);
+
 	/* If the network stack called us with vid = 0, we should
 	 * indicate to i40e_vsi_add_vlan() that we want to receive
 	 * any traffic (i.e. with any vlan tag, or untagged)
 	 */
 	ret = i40e_vsi_add_vlan(vsi, vid ? vid : I40E_VLAN_ANY);
 
-	if (!ret) {
-		if (vid < VLAN_N_VID)
-			set_bit(vid, vsi->active_vlans);
-	}
+	if (!ret && (vid < VLAN_N_VID))
+		set_bit(vid, vsi->active_vlans);
 
-	return 0;
+	return ret;
 }
 
 /**
 * i40e_vlan_rx_kill_vid - Remove a vlan id filter from HW offload
 * @netdev: network interface to be adjusted
 * @vid: vlan id to be removed
+ *
+ * net_device_ops implementation for adding vlan ids
 **/
 static int i40e_vlan_rx_kill_vid(struct net_device *netdev,
				  __always_unused __be16 proto, u16 vid)
 {
 	struct i40e_netdev_priv *np = netdev_priv(netdev);
 	struct i40e_vsi *vsi = np->vsi;
 
-	netdev_info(vsi->netdev, "removing %pM vid=%d\n",
-		    netdev->dev_addr, vid);
+	netdev_info(netdev, "removing %pM vid=%d\n", netdev->dev_addr, vid);
+
 	/* return code is ignored as there is nothing a user
 	 * can do about failure to remove and a log message was
-	 * already printed from another function
+	 * already printed from the other function
 	 */
 	i40e_vsi_kill_vlan(vsi, vid);
 
 	clear_bit(vid, vsi->active_vlans);
+
 	return 0;
 }
···
 * @vsi: the vsi being adjusted
 * @vid: the vlan id to set as a PVID
 **/
-i40e_status i40e_vsi_add_pvid(struct i40e_vsi *vsi, u16 vid)
+int i40e_vsi_add_pvid(struct i40e_vsi *vsi, u16 vid)
 {
 	struct i40e_vsi_context ctxt;
-	i40e_status ret;
+	i40e_status aq_ret;
 
 	vsi->info.valid_sections = cpu_to_le16(I40E_AQ_VSI_PROP_VLAN_VALID);
 	vsi->info.pvid = cpu_to_le16(vid);
···
 	ctxt.seid = vsi->seid;
 	memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
-	ret = i40e_aq_update_vsi_params(&vsi->back->hw, &ctxt, NULL);
-	if (ret) {
+	aq_ret = i40e_aq_update_vsi_params(&vsi->back->hw, &ctxt, NULL);
+	if (aq_ret) {
 		dev_info(&vsi->back->pdev->dev,
 			 "%s: update vsi failed, aq_err=%d\n",
 			 __func__, vsi->back->hw.aq.asq_last_status);
+		return -ENOENT;
 	}
 
-	return ret;
+	return 0;
 }
 
 /**
···
 static u8 i40e_dcb_get_num_tc(struct i40e_dcbx_config *dcbcfg)
 {
-	int num_tc = 0, i;
+	u8 num_tc = 0;
+	int i;
 
 	/* Scan the ETS Config Priority Table to find
 	 * traffic class enabled for a given priority
···
 	/* Traffic class index starts from zero so
 	 * increment to return the actual count
 	 */
-	num_tc++;
-
-	return num_tc;
+	return num_tc + 1;
 }
 
 /**
···
 	struct i40e_aqc_query_vsi_bw_config_resp bw_config = {0};
 	struct i40e_pf *pf = vsi->back;
 	struct i40e_hw *hw = &pf->hw;
+	i40e_status aq_ret;
 	u32 tc_bw_max;
-	int ret;
 	int i;
 
 	/* Get the VSI level BW configuration */
-	ret = i40e_aq_query_vsi_bw_config(hw, vsi->seid, &bw_config, NULL);
-	if (ret) {
+	aq_ret = i40e_aq_query_vsi_bw_config(hw, vsi->seid, &bw_config, NULL);
+	if (aq_ret) {
 		dev_info(&pf->pdev->dev,
 			 "couldn't get pf vsi bw config, err %d, aq_err %d\n",
-			 ret, pf->hw.aq.asq_last_status);
-		return ret;
+			 aq_ret, pf->hw.aq.asq_last_status);
+		return -EINVAL;
 	}
 
 	/* Get the VSI level BW configuration per TC */
-	ret = i40e_aq_query_vsi_ets_sla_config(hw, vsi->seid,
-					       &bw_ets_config,
-					       NULL);
-	if (ret) {
+	aq_ret = i40e_aq_query_vsi_ets_sla_config(hw, vsi->seid, &bw_ets_config,
+						  NULL);
+	if (aq_ret) {
 		dev_info(&pf->pdev->dev,
 			 "couldn't get pf vsi ets bw config, err %d, aq_err %d\n",
-			 ret, pf->hw.aq.asq_last_status);
-		return ret;
+			 aq_ret, pf->hw.aq.asq_last_status);
+		return -EINVAL;
 	}
 
 	if (bw_config.tc_valid_bits != bw_ets_config.tc_valid_bits) {
···
 		/* 3 bits out of 4 for each TC */
 		vsi->bw_ets_max_quanta[i] = (u8)((tc_bw_max >> (i*4)) & 0x7);
 	}
-	return ret;
+
+	return 0;
 }
 
 /**
···
 *
 * Returns 0 on success, negative value on failure
 **/
-static int i40e_vsi_configure_bw_alloc(struct i40e_vsi *vsi,
-				       u8 enabled_tc,
+static int i40e_vsi_configure_bw_alloc(struct i40e_vsi *vsi, u8 enabled_tc,
				        u8 *bw_share)
 {
 	struct i40e_aqc_configure_vsi_tc_bw_data bw_data;
-	int i, ret = 0;
+	i40e_status aq_ret;
+	int i;
 
 	bw_data.tc_valid_bits = enabled_tc;
 	for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++)
 		bw_data.tc_bw_credits[i] = bw_share[i];
 
-	ret = i40e_aq_config_vsi_tc_bw(&vsi->back->hw, vsi->seid,
-				       &bw_data, NULL);
-	if (ret) {
+	aq_ret = i40e_aq_config_vsi_tc_bw(&vsi->back->hw, vsi->seid, &bw_data,
+					  NULL);
+	if (aq_ret) {
 		dev_info(&vsi->back->pdev->dev,
 			 "%s: AQ command Config VSI BW allocation per TC failed = %d\n",
 			 __func__, vsi->back->hw.aq.asq_last_status);
-		return ret;
+		return -EINVAL;
 	}
 
 	for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++)
 		vsi->info.qs_handle[i] = bw_data.qs_handles[i];
 
-	return ret;
+	return 0;
 }
 
 /**
+3
drivers/net/ethernet/intel/igb/igb_ethtool.c
···
 			igb_write_phy_reg(hw, I347AT4_PAGE_SELECT, 0);
 			igb_write_phy_reg(hw, PHY_CONTROL, 0x4140);
 		}
+	} else if (hw->phy.type == e1000_phy_82580) {
+		/* enable MII loopback */
+		igb_write_phy_reg(hw, I82580_PHY_LBK_CTRL, 0x8041);
 	}
 
 	/* add small delay to avoid loopback test failure */
+6 -3
drivers/net/ethernet/marvell/skge.c
···
 				       PCI_DMA_FROMDEVICE);
 		skge_rx_reuse(e, skge->rx_buf_size);
 	} else {
+		struct skge_element ee;
 		struct sk_buff *nskb;
 
 		nskb = netdev_alloc_skb_ip_align(dev, skge->rx_buf_size);
 		if (!nskb)
 			goto resubmit;
 
-		skb = e->skb;
+		ee = *e;
+
+		skb = ee.skb;
 		prefetch(skb->data);
 
 		if (skge_rx_setup(skge, e, nskb, skge->rx_buf_size) < 0) {
···
 		}
 
 		pci_unmap_single(skge->hw->pdev,
-				 dma_unmap_addr(e, mapaddr),
-				 dma_unmap_len(e, maplen),
+				 dma_unmap_addr(&ee, mapaddr),
+				 dma_unmap_len(&ee, maplen),
 				 PCI_DMA_FROMDEVICE);
 	}
+1 -1
drivers/net/ethernet/moxa/moxart_ether.c
···
 	{ }
 };
 
-struct __initdata platform_driver moxart_mac_driver = {
+static struct platform_driver moxart_mac_driver = {
 	.probe	= moxart_mac_probe,
 	.remove	= moxart_remove,
 	.driver	= {
+8
drivers/net/ethernet/qlogic/qlcnic/qlcnic_ethtool.c
···
 	.set_msglevel = qlcnic_set_msglevel,
 	.get_msglevel = qlcnic_get_msglevel,
 };
+
+const struct ethtool_ops qlcnic_ethtool_failed_ops = {
+	.get_settings = qlcnic_get_settings,
+	.get_drvinfo = qlcnic_get_drvinfo,
+	.set_msglevel = qlcnic_set_msglevel,
+	.get_msglevel = qlcnic_get_msglevel,
+	.set_dump = qlcnic_set_dump,
+};
+37 -2
drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c
···
 	while (test_and_set_bit(__QLCNIC_RESETTING, &adapter->state))
 		usleep_range(10000, 11000);
 
+	if (!adapter->fw_work.work.func)
+		return;
+
 	cancel_delayed_work_sync(&adapter->fw_work);
 }
···
 	adapter->portnum = adapter->ahw->pci_func;
 	err = qlcnic_start_firmware(adapter);
 	if (err) {
-		dev_err(&pdev->dev, "Loading fw failed.Please Reboot\n");
-		goto err_out_free_hw;
+		dev_err(&pdev->dev, "Loading fw failed.Please Reboot\n"
+			"\t\tIf reboot doesn't help, try flashing the card\n");
+		goto err_out_maintenance_mode;
 	}
 
 	qlcnic_get_multiq_capability(adapter);
···
 	pci_set_drvdata(pdev, NULL);
 	pci_disable_device(pdev);
 	return err;
+
+err_out_maintenance_mode:
+	netdev->netdev_ops = &qlcnic_netdev_failed_ops;
+	SET_ETHTOOL_OPS(netdev, &qlcnic_ethtool_failed_ops);
+	err = register_netdev(netdev);
+
+	if (err) {
+		dev_err(&pdev->dev, "Failed to register net device\n");
+		qlcnic_clr_all_drv_state(adapter, 0);
+		goto err_out_free_hw;
+	}
+
+	pci_set_drvdata(pdev, adapter);
+	qlcnic_add_sysfs(adapter);
+
+	return 0;
 }
 
 static void qlcnic_remove(struct pci_dev *pdev)
···
 static int qlcnic_open(struct net_device *netdev)
 {
 	struct qlcnic_adapter *adapter = netdev_priv(netdev);
+	u32 state;
 	int err;
+
+	state = QLC_SHARED_REG_RD32(adapter, QLCNIC_CRB_DEV_STATE);
+	if (state == QLCNIC_DEV_FAILED || state == QLCNIC_DEV_BADBAD) {
+		netdev_err(netdev, "%s: Device is in FAILED state\n", __func__);
+
+		return -EIO;
+	}
 
 	netif_carrier_off(netdev);
···
 		return;
 
 	state = QLC_SHARED_REG_RD32(adapter, QLCNIC_CRB_DEV_STATE);
+	if (state == QLCNIC_DEV_FAILED || state == QLCNIC_DEV_BADBAD) {
+		netdev_err(adapter->netdev, "%s: Device is in FAILED state\n",
+			   __func__);
+		qlcnic_api_unlock(adapter);
+
+		return;
+	}
 
 	if (state == QLCNIC_DEV_READY) {
 		QLC_SHARED_REG_WR32(adapter, QLCNIC_CRB_DEV_STATE,
+7 -1
drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_pf.c
···
 {
 	struct net_device *netdev = adapter->netdev;
 
+	rtnl_lock();
 	if (netif_running(netdev))
 		__qlcnic_down(adapter, netdev);
 
···
 	/* After disabling SRIOV re-init the driver in default mode
 	   configure opmode based on op_mode of function
 	 */
-	if (qlcnic_83xx_configure_opmode(adapter))
+	if (qlcnic_83xx_configure_opmode(adapter)) {
+		rtnl_unlock();
 		return -EIO;
+	}
 
 	if (netif_running(netdev))
 		__qlcnic_up(adapter, netdev);
 
+	rtnl_unlock();
 	return 0;
 }
···
 		return -EIO;
 	}
 
+	rtnl_lock();
 	if (netif_running(netdev))
 		__qlcnic_down(adapter, netdev);
···
 		__qlcnic_up(adapter, netdev);
 
 error:
+	rtnl_unlock();
 	return err;
 }
+12
drivers/net/ethernet/qlogic/qlcnic/qlcnic_sysfs.c
···
 void qlcnic_create_diag_entries(struct qlcnic_adapter *adapter)
 {
 	struct device *dev = &adapter->pdev->dev;
+	u32 state;
 
 	if (device_create_bin_file(dev, &bin_attr_port_stats))
 		dev_info(dev, "failed to create port stats sysfs entry");
···
 	if (device_create_bin_file(dev, &bin_attr_mem))
 		dev_info(dev, "failed to create mem sysfs entry\n");
 
+	state = QLC_SHARED_REG_RD32(adapter, QLCNIC_CRB_DEV_STATE);
+	if (state == QLCNIC_DEV_FAILED || state == QLCNIC_DEV_BADBAD)
+		return;
+
 	if (device_create_bin_file(dev, &bin_attr_pci_config))
 		dev_info(dev, "failed to create pci config sysfs entry");
+
 	if (device_create_file(dev, &dev_attr_beacon))
 		dev_info(dev, "failed to create beacon sysfs entry");
 
···
 void qlcnic_remove_diag_entries(struct qlcnic_adapter *adapter)
 {
 	struct device *dev = &adapter->pdev->dev;
+	u32 state;
 
 	device_remove_bin_file(dev, &bin_attr_port_stats);
 
···
 	device_remove_file(dev, &dev_attr_diag_mode);
 	device_remove_bin_file(dev, &bin_attr_crb);
 	device_remove_bin_file(dev, &bin_attr_mem);
+
+	state = QLC_SHARED_REG_RD32(adapter, QLCNIC_CRB_DEV_STATE);
+	if (state == QLCNIC_DEV_FAILED || state == QLCNIC_DEV_BADBAD)
+		return;
+
 	device_remove_bin_file(dev, &bin_attr_pci_config);
 	device_remove_file(dev, &dev_attr_beacon);
 	if (!(adapter->flags & QLCNIC_ESWITCH_ENABLED))
drivers/net/ethernet/qlogic/qlge/qlge_dbg.c (+2 -2)
···
 	int i;
 
 	if (!mpi_coredump) {
-		netif_err(qdev, drv, qdev->ndev, "No memory available\n");
-		return -ENOMEM;
+		netif_err(qdev, drv, qdev->ndev, "No memory allocated\n");
+		return -EINVAL;
 	}
 
 	/* Try to get the spinlock, but dont worry if
drivers/net/ethernet/qlogic/qlge/qlge_mpi.c (+1 -1)
···
 		return;
 	}
 
-	if (!ql_core_dump(qdev, qdev->mpi_coredump)) {
+	if (qdev->mpi_coredump && !ql_core_dump(qdev, qdev->mpi_coredump)) {
 		netif_err(qdev, drv, qdev->ndev, "Core is dumped!\n");
 		qdev->core_is_dumped = 1;
 		queue_delayed_work(qdev->workqueue,
drivers/net/ethernet/sfc/mcdi.c (+5 -5)
···
 /* A reboot/assertion causes the MCDI status word to be set after the
  * command word is set or a REBOOT event is sent. If we notice a reboot
- * via these mechanisms then wait 20ms for the status word to be set.
+ * via these mechanisms then wait 250ms for the status word to be set.
  */
 #define MCDI_STATUS_DELAY_US		100
-#define MCDI_STATUS_DELAY_COUNT		200
+#define MCDI_STATUS_DELAY_COUNT		2500
 #define MCDI_STATUS_SLEEP_MS \
 	(MCDI_STATUS_DELAY_US * MCDI_STATUS_DELAY_COUNT / 1000)
···
 	} else {
 		int count;
 
-		/* Nobody was waiting for an MCDI request, so trigger a reset */
-		efx_schedule_reset(efx, RESET_TYPE_MC_FAILURE);
-
 		/* Consume the status word since efx_mcdi_rpc_finish() won't */
 		for (count = 0; count < MCDI_STATUS_DELAY_COUNT; ++count) {
 			if (efx_mcdi_poll_reboot(efx))
···
 			udelay(MCDI_STATUS_DELAY_US);
 		}
 		mcdi->new_epoch = true;
+
+		/* Nobody was waiting for an MCDI request, so trigger a reset */
+		efx_schedule_reset(efx, RESET_TYPE_MC_FAILURE);
 	}
 
 	spin_unlock(&mcdi->iface_lock);
drivers/net/ethernet/via/via-rhine.c (+7 -2)
···
 #define pr_fmt(fmt)	KBUILD_MODNAME ": " fmt
 
 #define DRV_NAME	"via-rhine"
-#define DRV_VERSION	"1.5.0"
+#define DRV_VERSION	"1.5.1"
 #define DRV_RELDATE	"2010-10-09"
 
 #include <linux/types.h>
···
 		cpu_to_le32(TXDESC | (skb->len >= ETH_ZLEN ? skb->len : ETH_ZLEN));
 
 	if (unlikely(vlan_tx_tag_present(skb))) {
-		rp->tx_ring[entry].tx_status = cpu_to_le32((vlan_tx_tag_get(skb)) << 16);
+		u16 vid_pcp = vlan_tx_tag_get(skb);
+
+		/* drop CFI/DEI bit, register needs VID and PCP */
+		vid_pcp = (vid_pcp & VLAN_VID_MASK) |
+			  ((vid_pcp & VLAN_PRIO_MASK) >> 1);
+		rp->tx_ring[entry].tx_status = cpu_to_le32((vid_pcp) << 16);
 		/* request tagging */
 		rp->tx_ring[entry].desc_length |= cpu_to_le32(0x020000);
 	}
drivers/net/ethernet/xilinx/ll_temac_main.c (+6)
···
 		  lp->rx_bd_p + (sizeof(*lp->rx_bd_v) * (RX_BD_NUM - 1)));
 	lp->dma_out(lp, TX_CURDESC_PTR, lp->tx_bd_p);
 
+	/* Init descriptor indexes */
+	lp->tx_bd_ci = 0;
+	lp->tx_bd_next = 0;
+	lp->tx_bd_tail = 0;
+	lp->rx_bd_ci = 0;
+
 	return 0;
 
 out:
drivers/net/slip/slip.c (+3)
···
 	if (!sl || sl->magic != SLIP_MAGIC || !netif_running(sl->dev))
 		return;
 
+	spin_lock(&sl->lock);
 	if (sl->xleft <= 0) {
 		/* Now serial buffer is almost free & we can start
 		 * transmission of another packet */
 		sl->dev->stats.tx_packets++;
 		clear_bit(TTY_DO_WRITE_WAKEUP, &tty->flags);
+		spin_unlock(&sl->lock);
 		sl_unlock(sl);
 		return;
 	}
···
 	actual = tty->ops->write(tty, sl->xhead, sl->xleft);
 	sl->xleft -= actual;
 	sl->xhead += actual;
+	spin_unlock(&sl->lock);
 }
 
 static void sl_tx_timeout(struct net_device *dev)
drivers/net/usb/dm9601.c (+1 -1)
···
 		rx_ctl |= 0x02;
 	} else if (net->flags & IFF_ALLMULTI ||
 		   netdev_mc_count(net) > DM_MAX_MCAST) {
-		rx_ctl |= 0x04;
+		rx_ctl |= 0x08;
 	} else if (!netdev_mc_empty(net)) {
 		struct netdev_hw_addr *ha;
drivers/net/usb/qmi_wwan.c (+1 -1)
···
 	{QMI_FIXED_INTF(0x2357, 0x0201, 4)},	/* TP-LINK HSUPA Modem MA180 */
 	{QMI_FIXED_INTF(0x2357, 0x9000, 4)},	/* TP-LINK MA260 */
 	{QMI_FIXED_INTF(0x1bc7, 0x1200, 5)},	/* Telit LE920 */
-	{QMI_FIXED_INTF(0x1e2d, 0x12d1, 4)},	/* Cinterion PLxx */
+	{QMI_FIXED_INTF(0x1e2d, 0x0060, 4)},	/* Cinterion PLxx */
 
 	/* 4. Gobi 1000 devices */
 	{QMI_GOBI1K_DEVICE(0x05c6, 0x9212)},	/* Acer Gobi Modem Device */
drivers/net/usb/usbnet.c (+21 -6)
···
 	if (num_sgs == 1)
 		return 0;
 
-	urb->sg = kmalloc(num_sgs * sizeof(struct scatterlist), GFP_ATOMIC);
+	/* reserve one for zero packet */
+	urb->sg = kmalloc((num_sgs + 1) * sizeof(struct scatterlist),
+			  GFP_ATOMIC);
 	if (!urb->sg)
 		return -ENOMEM;
···
 		if (build_dma_sg(skb, urb) < 0)
 			goto drop;
 	}
-	entry->length = length = urb->transfer_buffer_length;
+	length = urb->transfer_buffer_length;
 
 	/* don't assume the hardware handles USB_ZERO_PACKET
 	 * NOTE: strictly conforming cdc-ether devices should expect
···
 	if (length % dev->maxpacket == 0) {
 		if (!(info->flags & FLAG_SEND_ZLP)) {
 			if (!(info->flags & FLAG_MULTI_PACKET)) {
-				urb->transfer_buffer_length++;
-				if (skb_tailroom(skb)) {
+				length++;
+				if (skb_tailroom(skb) && !urb->num_sgs) {
 					skb->data[skb->len] = 0;
 					__skb_put(skb, 1);
-				}
+				} else if (urb->num_sgs)
+					sg_set_buf(&urb->sg[urb->num_sgs++],
+						   dev->padding_pkt, 1);
 			}
 		} else
 			urb->transfer_flags |= URB_ZERO_PACKET;
 	}
+	entry->length = urb->transfer_buffer_length = length;
 
 	spin_lock_irqsave(&dev->txq.lock, flags);
 	retval = usb_autopm_get_interface_async(dev->intf);
···
 	usb_kill_urb(dev->interrupt);
 	usb_free_urb(dev->interrupt);
+	kfree(dev->padding_pkt);
 
 	free_netdev(net);
 }
···
 	/* initialize max rx_qlen and tx_qlen */
 	usbnet_update_max_qlen(dev);
 
+	if (dev->can_dma_sg && !(info->flags & FLAG_SEND_ZLP) &&
+	    !(info->flags & FLAG_MULTI_PACKET)) {
+		dev->padding_pkt = kzalloc(1, GFP_KERNEL);
+		if (!dev->padding_pkt)
+			goto out4;
+	}
+
 	status = register_netdev (net);
 	if (status)
-		goto out4;
+		goto out5;
 	netif_info(dev, probe, dev->net,
 		   "register '%s' at usb-%s-%s, %s, %pM\n",
 		   udev->dev.driver->name,
···
 	return 0;
 
+out5:
+	kfree(dev->padding_pkt);
 out4:
 	usb_free_urb(dev->interrupt);
 out3:
drivers/net/vxlan.c (+3 -6)
···
 	spin_lock(&vn->sock_lock);
 	hlist_del_rcu(&vs->hlist);
-	smp_wmb();
-	vs->sock->sk->sk_user_data = NULL;
+	rcu_assign_sk_user_data(vs->sock->sk, NULL);
 	vxlan_notify_del_rx_port(sk);
 	spin_unlock(&vn->sock_lock);
···
 	port = inet_sk(sk)->inet_sport;
 
-	smp_read_barrier_depends();
-	vs = (struct vxlan_sock *)sk->sk_user_data;
+	vs = rcu_dereference_sk_user_data(sk);
 	if (!vs)
 		goto drop;
···
 	atomic_set(&vs->refcnt, 1);
 	vs->rcv = rcv;
 	vs->data = data;
-	smp_wmb();
-	vs->sock->sk->sk_user_data = vs;
+	rcu_assign_sk_user_data(vs->sock->sk, vs);
 
 	spin_lock(&vn->sock_lock);
 	hlist_add_head_rcu(&vs->hlist, vs_head(net, port));
drivers/net/wireless/ath/ath9k/recv.c (-7)
···
 		return;
 
 	/*
-	 * All MPDUs in an aggregate will use the same LNA
-	 * as the first MPDU.
-	 */
-	if (rs->rs_isaggr && !rs->rs_firstaggr)
-		return;
-
-	/*
 	 * Change the default rx antenna if rx diversity
 	 * chooses the other antenna 3 times in a row.
 	 */
drivers/net/wireless/ath/ath9k/xmit.c (+14 -3)
···
 	tbf->bf_buf_addr = bf->bf_buf_addr;
 	memcpy(tbf->bf_desc, bf->bf_desc, sc->sc_ah->caps.tx_desc_len);
 	tbf->bf_state = bf->bf_state;
+	tbf->bf_state.stale = false;
 
 	return tbf;
 }
···
 		      u16 tid, u16 *ssn)
 {
 	struct ath_atx_tid *txtid;
+	struct ath_txq *txq;
 	struct ath_node *an;
 	u8 density;
 
 	an = (struct ath_node *)sta->drv_priv;
 	txtid = ATH_AN_2_TID(an, tid);
+	txq = txtid->ac->txq;
+
+	ath_txq_lock(sc, txq);
 
 	/* update ampdu factor/density, they may have changed. This may happen
 	 * in HT IBSS when a beacon with HT-info is received after the station
···
 	memset(txtid->tx_buf, 0, sizeof(txtid->tx_buf));
 	txtid->baw_head = txtid->baw_tail = 0;
+
+	ath_txq_unlock_complete(sc, txq);
 
 	return 0;
 }
···
 		__skb_unlink(bf->bf_mpdu, tid_q);
 		list_add_tail(&bf->list, &bf_q);
 		ath_set_rates(tid->an->vif, tid->an->sta, bf);
-		ath_tx_addto_baw(sc, tid, bf);
-		bf->bf_state.bf_type &= ~BUF_AGGR;
+		if (bf_isampdu(bf)) {
+			ath_tx_addto_baw(sc, tid, bf);
+			bf->bf_state.bf_type &= ~BUF_AGGR;
+		}
 		if (bf_tail)
 			bf_tail->bf_next = bf;
···
 			if (bf_is_ampdu_not_probing(bf))
 				txq->axq_ampdu_depth++;
 
-			bf = bf->bf_lastbf->bf_next;
+			bf_last = bf->bf_lastbf;
+			bf = bf_last->bf_next;
+			bf_last->bf_next = NULL;
 		}
 	}
 }
drivers/net/wireless/brcm80211/brcmfmac/bcmsdh_sdmmc.c (+13 -15)
···
 static int brcmf_sdio_pd_probe(struct platform_device *pdev)
 {
-	int ret;
-
 	brcmf_dbg(SDIO, "Enter\n");
 
 	brcmfmac_sdio_pdata = pdev->dev.platform_data;
···
 	if (brcmfmac_sdio_pdata->power_on)
 		brcmfmac_sdio_pdata->power_on();
 
-	ret = sdio_register_driver(&brcmf_sdmmc_driver);
-	if (ret)
-		brcmf_err("sdio_register_driver failed: %d\n", ret);
-
-	return ret;
+	return 0;
 }
 
 static int brcmf_sdio_pd_remove(struct platform_device *pdev)
···
 	}
 };
 
+void brcmf_sdio_register(void)
+{
+	int ret;
+
+	ret = sdio_register_driver(&brcmf_sdmmc_driver);
+	if (ret)
+		brcmf_err("sdio_register_driver failed: %d\n", ret);
+}
+
 void brcmf_sdio_exit(void)
 {
 	brcmf_dbg(SDIO, "Enter\n");
···
 	sdio_unregister_driver(&brcmf_sdmmc_driver);
 }
 
-void brcmf_sdio_init(void)
+void __init brcmf_sdio_init(void)
 {
 	int ret;
 
 	brcmf_dbg(SDIO, "Enter\n");
 
 	ret = platform_driver_probe(&brcmf_sdio_pd, brcmf_sdio_pd_probe);
-	if (ret == -ENODEV) {
-		brcmf_dbg(SDIO, "No platform data available, registering without.\n");
-		ret = sdio_register_driver(&brcmf_sdmmc_driver);
-	}
-
-	if (ret)
-		brcmf_err("driver registration failed: %d\n", ret);
+	if (ret == -ENODEV)
+		brcmf_dbg(SDIO, "No platform data available.\n");
 }
drivers/net/wireless/brcm80211/brcmfmac/dhd_bus.h (+2 -1)
···
 #ifdef CONFIG_BRCMFMAC_SDIO
 extern void brcmf_sdio_exit(void);
 extern void brcmf_sdio_init(void);
+extern void brcmf_sdio_register(void);
 #endif
 #ifdef CONFIG_BRCMFMAC_USB
 extern void brcmf_usb_exit(void);
-extern void brcmf_usb_init(void);
+extern void brcmf_usb_register(void);
 #endif
 
 #endif /* _BRCMF_BUS_H_ */
drivers/net/wireless/brcm80211/brcmfmac/dhd_linux.c (+8 -6)
···
 	return bus->chip << 4 | bus->chiprev;
 }
 
-static void brcmf_driver_init(struct work_struct *work)
+static void brcmf_driver_register(struct work_struct *work)
 {
-	brcmf_debugfs_init();
-
 #ifdef CONFIG_BRCMFMAC_SDIO
-	brcmf_sdio_init();
+	brcmf_sdio_register();
 #endif
 #ifdef CONFIG_BRCMFMAC_USB
-	brcmf_usb_init();
+	brcmf_usb_register();
 #endif
 }
-static DECLARE_WORK(brcmf_driver_work, brcmf_driver_init);
+static DECLARE_WORK(brcmf_driver_work, brcmf_driver_register);
 
 static int __init brcmfmac_module_init(void)
 {
+	brcmf_debugfs_init();
+#ifdef CONFIG_BRCMFMAC_SDIO
+	brcmf_sdio_init();
+#endif
 	if (!schedule_work(&brcmf_driver_work))
 		return -EBUSY;
drivers/net/wireless/brcm80211/brcmfmac/usb.c (+1 -1)
···
 	brcmf_release_fw(&fw_image_list);
 }
 
-void brcmf_usb_init(void)
+void brcmf_usb_register(void)
 {
 	brcmf_dbg(USB, "Enter\n");
 	INIT_LIST_HEAD(&fw_image_list);
drivers/net/wireless/brcm80211/brcmsmac/mac80211_if.c (+4)
···
 	if (err != 0)
 		brcms_err(wl->wlc->hw->d11core, "%s: brcms_up() returned %d\n",
 			  __func__, err);
+
+	bcma_core_pci_power_save(wl->wlc->hw->d11core->bus, true);
 	return err;
 }
···
 			  "wl: brcms_ops_stop: chipmatch failed\n");
 		return;
 	}
+
+	bcma_core_pci_power_save(wl->wlc->hw->d11core->bus, false);
 
 	/* put driver in down state */
 	spin_lock_bh(&wl->lock);
drivers/net/wireless/cw1200/cw1200_spi.c (+7 -19)
···
 	spinlock_t		lock; /* Serialize all bus operations */
 	wait_queue_head_t	wq;
 	int			claimed;
-	int			irq_disabled;
 };
 
 #define SDIO_TO_SPI_ADDR(addr) ((addr & 0x1f)>>2)
···
 	struct hwbus_priv *self = dev_id;
 
 	if (self->core) {
-		disable_irq_nosync(self->func->irq);
-		self->irq_disabled = 1;
 		cw1200_irq_handler(self->core);
 		return IRQ_HANDLED;
 	} else {
···
 	pr_debug("SW IRQ subscribe\n");
 
-	ret = request_any_context_irq(self->func->irq, cw1200_spi_irq_handler,
-				      IRQF_TRIGGER_HIGH,
-				      "cw1200_wlan_irq", self);
+	ret = request_threaded_irq(self->func->irq, NULL,
+				   cw1200_spi_irq_handler,
+				   IRQF_TRIGGER_HIGH | IRQF_ONESHOT,
+				   "cw1200_wlan_irq", self);
 	if (WARN_ON(ret < 0))
 		goto exit;
···
 static int cw1200_spi_irq_unsubscribe(struct hwbus_priv *self)
 {
+	int ret = 0;
+
 	pr_debug("SW IRQ unsubscribe\n");
 	disable_irq_wake(self->func->irq);
 	free_irq(self->func->irq, self);
 
-	return 0;
-}
-
-static int cw1200_spi_irq_enable(struct hwbus_priv *self, int enable)
-{
-	/* Disables are handled by the interrupt handler */
-	if (enable && self->irq_disabled) {
-		enable_irq(self->func->irq);
-		self->irq_disabled = 0;
-	}
-
-	return 0;
+	return ret;
 }
 
 static int cw1200_spi_off(const struct cw1200_platform_data_spi *pdata)
···
 	.unlock			= cw1200_spi_unlock,
 	.align_size		= cw1200_spi_align_size,
 	.power_mgmt		= cw1200_spi_pm,
-	.irq_enable		= cw1200_spi_irq_enable,
 };
 
 /* Probe Function to be called by SPI stack when device is discovered */
drivers/net/wireless/cw1200/fwio.c (+1 -1)
···
 	/* Enable interrupt signalling */
 	priv->hwbus_ops->lock(priv->hwbus_priv);
-	ret = __cw1200_irq_enable(priv, 2);
+	ret = __cw1200_irq_enable(priv, 1);
 	priv->hwbus_ops->unlock(priv->hwbus_priv);
 	if (ret < 0)
 		goto unsubscribe;
drivers/net/wireless/cw1200/hwbus.h (-1)
···
 	void (*unlock)(struct hwbus_priv *self);
 	size_t (*align_size)(struct hwbus_priv *self, size_t size);
 	int (*power_mgmt)(struct hwbus_priv *self, bool suspend);
-	int (*irq_enable)(struct hwbus_priv *self, int enable);
 };
 
 #endif /* CW1200_HWBUS_H */
drivers/net/wireless/cw1200/hwio.c (-15)
···
 	u16 val16;
 	int ret;
 
-	/* We need to do this hack because the SPI layer can sleep on I/O
-	   and the general path involves I/O to the device in interrupt
-	   context.
-
-	   However, the initial enable call needs to go to the hardware.
-
-	   We don't worry about shutdown because we do a full reset which
-	   clears the interrupt enabled bits.
-	*/
-	if (priv->hwbus_ops->irq_enable) {
-		ret = priv->hwbus_ops->irq_enable(priv->hwbus_priv, enable);
-		if (ret || enable < 2)
-			return ret;
-	}
-
 	if (HIF_8601_SILICON == priv->hw_type) {
 		ret = __cw1200_reg_read_32(priv, ST90TDS_CONFIG_REG_ID, &val32);
 		if (ret < 0) {
drivers/net/wireless/mwifiex/11n_aggr.c (+2 -1)
···
  */
 int
 mwifiex_11n_aggregate_pkt(struct mwifiex_private *priv,
-			  struct mwifiex_ra_list_tbl *pra_list, int headroom,
+			  struct mwifiex_ra_list_tbl *pra_list,
 			  int ptrindex, unsigned long ra_list_flags)
 			  __releases(&priv->wmm.ra_list_spinlock)
 {
···
 	int pad = 0, ret;
 	struct mwifiex_tx_param tx_param;
 	struct txpd *ptx_pd = NULL;
+	int headroom = adapter->iface_type == MWIFIEX_USB ? 0 : INTF_HEADER_LEN;
 
 	skb_src = skb_peek(&pra_list->skb_head);
 	if (!skb_src) {
drivers/net/wireless/mwifiex/11n_aggr.h (+1 -1)
···
 int mwifiex_11n_deaggregate_pkt(struct mwifiex_private *priv,
 				struct sk_buff *skb);
 int mwifiex_11n_aggregate_pkt(struct mwifiex_private *priv,
-			      struct mwifiex_ra_list_tbl *ptr, int headroom,
+			      struct mwifiex_ra_list_tbl *ptr,
 			      int ptr_index, unsigned long flags)
 			      __releases(&priv->wmm.ra_list_spinlock);
drivers/net/wireless/mwifiex/cmdevt.c (+2 -3)
···
 	uint32_t conditions = le32_to_cpu(phs_cfg->params.hs_config.conditions);
 
 	if (phs_cfg->action == cpu_to_le16(HS_ACTIVATE) &&
-	    adapter->iface_type == MWIFIEX_SDIO) {
+	    adapter->iface_type != MWIFIEX_USB) {
 		mwifiex_hs_activated_event(priv, true);
 		return 0;
 	} else {
···
 	}
 	if (conditions != HS_CFG_CANCEL) {
 		adapter->is_hs_configured = true;
-		if (adapter->iface_type == MWIFIEX_USB ||
-		    adapter->iface_type == MWIFIEX_PCIE)
+		if (adapter->iface_type == MWIFIEX_USB)
 			mwifiex_hs_activated_event(priv, true);
 	} else {
 		adapter->is_hs_configured = false;
drivers/net/wireless/mwifiex/usb.c (-7)
···
 	 */
 	adapter->is_suspended = true;
 
-	for (i = 0; i < adapter->priv_num; i++)
-		netif_carrier_off(adapter->priv[i]->netdev);
-
 	if (atomic_read(&card->rx_cmd_urb_pending) && card->rx_cmd.urb)
 		usb_kill_urb(card->rx_cmd.urb);
···
 		mwifiex_usb_submit_rx_urb(&card->rx_cmd,
 					  MWIFIEX_RX_CMD_BUF_SIZE);
 	}
-
-	for (i = 0; i < adapter->priv_num; i++)
-		if (adapter->priv[i]->media_connected)
-			netif_carrier_on(adapter->priv[i]->netdev);
 
 	/* Disable Host Sleep */
 	if (adapter->hs_activated)
drivers/net/wireless/mwifiex/wmm.c (+1 -2)
···
 	if (enable_tx_amsdu && mwifiex_is_amsdu_allowed(priv, tid) &&
 	    mwifiex_is_11n_aggragation_possible(priv, ptr,
 						adapter->tx_buf_size))
-		mwifiex_11n_aggregate_pkt(priv, ptr, INTF_HEADER_LEN,
-					  ptr_index, flags);
+		mwifiex_11n_aggregate_pkt(priv, ptr, ptr_index, flags);
 		/* ra_list_spinlock has been freed in
 		   mwifiex_11n_aggregate_pkt() */
 	else
drivers/net/wireless/p54/p54usb.c (+2)
···
 	{USB_DEVICE(0x06a9, 0x000e)},	/* Westell 802.11g USB (A90-211WG-01) */
 	{USB_DEVICE(0x06b9, 0x0121)},	/* Thomson SpeedTouch 121g */
 	{USB_DEVICE(0x0707, 0xee13)},	/* SMC 2862W-G version 2 */
+	{USB_DEVICE(0x07aa, 0x0020)},	/* Corega WLUSB2GTST USB */
 	{USB_DEVICE(0x0803, 0x4310)},	/* Zoom 4410a */
 	{USB_DEVICE(0x083a, 0x4521)},	/* Siemens Gigaset USB Adapter 54 version 2 */
 	{USB_DEVICE(0x083a, 0x4531)},	/* T-Com Sinus 154 data II */
···
 	if (err) {
 		dev_err(&priv->udev->dev, "(p54usb) cannot load firmware %s "
 					  "(%d)!\n", p54u_fwlist[i].fw, err);
+		usb_put_dev(udev);
 	}
 
 	return err;
drivers/net/wireless/rtlwifi/wifi.h (+1 -1)
···
 				   that it points to the data allocated
 				   beyond this structure like:
 				   rtl_pci_priv or rtl_usb_priv */
-	u8 priv[0];
+	u8 priv[0] __aligned(sizeof(void *));
 };
 
 #define rtl_priv(hw) (((struct rtl_priv *)(hw)->priv))
drivers/net/xen-netback/xenbus.c (+118 -30)
···
 struct backend_info {
 	struct xenbus_device *dev;
 	struct xenvif *vif;
+
+	/* This is the state that will be reflected in xenstore when any
+	 * active hotplug script completes.
+	 */
+	enum xenbus_state state;
+
 	enum xenbus_state frontend_state;
 	struct xenbus_watch hotplug_status_watch;
 	u8 have_hotplug_status_watch:1;
···
 	if (err)
 		goto fail;
 
+	be->state = XenbusStateInitWait;
+
 	/* This kicks hotplug scripts, so do it immediately. */
 	backend_create_xenvif(be);
···
 		kobject_uevent(&dev->dev.kobj, KOBJ_ONLINE);
 }
 
-
-static void disconnect_backend(struct xenbus_device *dev)
+static void backend_disconnect(struct backend_info *be)
 {
-	struct backend_info *be = dev_get_drvdata(&dev->dev);
-
 	if (be->vif)
 		xenvif_disconnect(be->vif);
 }
 
-static void destroy_backend(struct xenbus_device *dev)
+static void backend_connect(struct backend_info *be)
 {
-	struct backend_info *be = dev_get_drvdata(&dev->dev);
+	if (be->vif)
+		connect(be);
+}
 
-	if (be->vif) {
-		kobject_uevent(&dev->dev.kobj, KOBJ_OFFLINE);
-		xenbus_rm(XBT_NIL, dev->nodename, "hotplug-status");
-		xenvif_free(be->vif);
-		be->vif = NULL;
+static inline void backend_switch_state(struct backend_info *be,
+					enum xenbus_state state)
+{
+	struct xenbus_device *dev = be->dev;
+
+	pr_debug("%s -> %s\n", dev->nodename, xenbus_strstate(state));
+	be->state = state;
+
+	/* If we are waiting for a hotplug script then defer the
+	 * actual xenbus state change.
+	 */
+	if (!be->have_hotplug_status_watch)
+		xenbus_switch_state(dev, state);
+}
+
+/* Handle backend state transitions:
+ *
+ * The backend state starts in InitWait and the following transitions are
+ * allowed.
+ *
+ * InitWait -> Connected
+ *
+ *    ^    \         |
+ *    |     \        |
+ *    |      \       |
+ *    |       \      |
+ *    |        \     |
+ *    |         \    |
+ *    |          V   V
+ *
+ *  Closed  <-> Closing
+ *
+ * The state argument specifies the eventual state of the backend and the
+ * function transitions to that state via the shortest path.
+ */
+static void set_backend_state(struct backend_info *be,
+			      enum xenbus_state state)
+{
+	while (be->state != state) {
+		switch (be->state) {
+		case XenbusStateClosed:
+			switch (state) {
+			case XenbusStateInitWait:
+			case XenbusStateConnected:
+				pr_info("%s: prepare for reconnect\n",
+					be->dev->nodename);
+				backend_switch_state(be, XenbusStateInitWait);
+				break;
+			case XenbusStateClosing:
+				backend_switch_state(be, XenbusStateClosing);
+				break;
+			default:
+				BUG();
+			}
+			break;
+		case XenbusStateInitWait:
+			switch (state) {
+			case XenbusStateConnected:
+				backend_connect(be);
+				backend_switch_state(be, XenbusStateConnected);
+				break;
+			case XenbusStateClosing:
+			case XenbusStateClosed:
+				backend_switch_state(be, XenbusStateClosing);
+				break;
+			default:
+				BUG();
+			}
+			break;
+		case XenbusStateConnected:
+			switch (state) {
+			case XenbusStateInitWait:
+			case XenbusStateClosing:
+			case XenbusStateClosed:
+				backend_disconnect(be);
+				backend_switch_state(be, XenbusStateClosing);
+				break;
+			default:
+				BUG();
+			}
+			break;
+		case XenbusStateClosing:
+			switch (state) {
+			case XenbusStateInitWait:
+			case XenbusStateConnected:
+			case XenbusStateClosed:
+				backend_switch_state(be, XenbusStateClosed);
+				break;
+			default:
+				BUG();
+			}
+			break;
+		default:
+			BUG();
+		}
 	}
 }
···
 {
 	struct backend_info *be = dev_get_drvdata(&dev->dev);
 
-	pr_debug("frontend state %s\n", xenbus_strstate(frontend_state));
+	pr_debug("%s -> %s\n", dev->otherend, xenbus_strstate(frontend_state));
 
 	be->frontend_state = frontend_state;
 
 	switch (frontend_state) {
 	case XenbusStateInitialising:
-		if (dev->state == XenbusStateClosed) {
-			pr_info("%s: prepare for reconnect\n", dev->nodename);
-			xenbus_switch_state(dev, XenbusStateInitWait);
-		}
+		set_backend_state(be, XenbusStateInitWait);
 		break;
 
 	case XenbusStateInitialised:
 		break;
 
 	case XenbusStateConnected:
-		if (dev->state == XenbusStateConnected)
-			break;
-		if (be->vif)
-			connect(be);
+		set_backend_state(be, XenbusStateConnected);
 		break;
 
 	case XenbusStateClosing:
-		disconnect_backend(dev);
-		xenbus_switch_state(dev, XenbusStateClosing);
+		set_backend_state(be, XenbusStateClosing);
 		break;
 
 	case XenbusStateClosed:
-		xenbus_switch_state(dev, XenbusStateClosed);
+		set_backend_state(be, XenbusStateClosed);
 		if (xenbus_dev_is_online(dev))
 			break;
-		destroy_backend(dev);
 		/* fall through if not online */
 	case XenbusStateUnknown:
+		set_backend_state(be, XenbusStateClosed);
 		device_unregister(&dev->dev);
 		break;
···
 	if (IS_ERR(str))
 		return;
 	if (len == sizeof("connected")-1 && !memcmp(str, "connected", len)) {
-		xenbus_switch_state(be->dev, XenbusStateConnected);
+		/* Complete any pending state change */
+		xenbus_switch_state(be->dev, be->state);
+
 		/* Not interested in this watch anymore. */
 		unregister_hotplug_status_watch(be);
 	}
···
 	err = xenbus_watch_pathfmt(dev, &be->hotplug_status_watch,
 				   hotplug_status_changed,
 				   "%s/%s", dev->nodename, "hotplug-status");
-	if (err) {
-		/* Switch now, since we can't do a watch. */
-		xenbus_switch_state(dev, XenbusStateConnected);
-	} else {
+	if (!err)
 		be->have_hotplug_status_watch = 1;
-	}
 
 	netif_wake_queue(be->vif->dev);
 }
include/linux/bcma/bcma_driver_pci.h (+1)
···
 			  struct bcma_device *core, bool enable);
 extern void bcma_core_pci_up(struct bcma_bus *bus);
 extern void bcma_core_pci_down(struct bcma_bus *bus);
+extern void bcma_core_pci_power_save(struct bcma_bus *bus, bool up);
 
 extern int bcma_core_pci_pcibios_map_irq(const struct pci_dev *dev);
 extern int bcma_core_pci_plat_dev_init(struct pci_dev *dev);
include/linux/kernel.h (+11)
···
 	return buf;
 }
 
+extern const char hex_asc_upper[];
+#define hex_asc_upper_lo(x)	hex_asc_upper[((x) & 0x0f)]
+#define hex_asc_upper_hi(x)	hex_asc_upper[((x) & 0xf0) >> 4]
+
+static inline char *hex_byte_pack_upper(char *buf, u8 byte)
+{
+	*buf++ = hex_asc_upper_hi(byte);
+	*buf++ = hex_asc_upper_lo(byte);
+	return buf;
+}
+
 static inline char * __deprecated pack_hex_byte(char *buf, u8 byte)
 {
 	return hex_byte_pack(buf, byte);
include/linux/skbuff.h (+1 -1)
···
 	 * headers if needed
 	 */
 	__u8			encapsulation:1;
-	/* 7/9 bit hole (depending on ndisc_nodetype presence) */
+	/* 6/8 bit hole (depending on ndisc_nodetype presence) */
 	kmemcheck_bitfield_end(flags2);
 
 #if defined CONFIG_NET_DMA || defined CONFIG_NET_RX_BUSY_POLL
include/linux/usb/usbnet.h (+1)
···
 	struct usb_host_endpoint *status;
 	unsigned		maxpacket;
 	struct timer_list	delay;
+	const char		*padding_pkt;
 
 	/* protocol/interface state */
 	struct net_device	*net;
include/net/addrconf.h (+4)
···
 int ipv6_chk_home_addr(struct net *net, const struct in6_addr *addr);
 #endif
 
+bool ipv6_chk_custom_prefix(const struct in6_addr *addr,
+			    const unsigned int prefix_len,
+			    struct net_device *dev);
+
 int ipv6_chk_prefix(const struct in6_addr *addr, struct net_device *dev);
 
 struct inet6_ifaddr *ipv6_get_ifaddr(struct net *net,
include/net/bluetooth/hci.h (+1)
···
 enum {
 	HCI_SETUP,
 	HCI_AUTO_OFF,
+	HCI_RFKILLED,
 	HCI_MGMT,
 	HCI_PAIRABLE,
 	HCI_SERVICE_CACHE,
include/net/ip_vs.h (+3 -6)
···
 	struct rcu_head		rcu_head;
 };
 
-/* In grace period after removing */
-#define IP_VS_DEST_STATE_REMOVING	0x01
 /*
  *	The real server destination forwarding entry
  *	with ip address, port number, and so on.
···
 	atomic_t		refcnt;		/* reference counter */
 	struct ip_vs_stats	stats;		/* statistics */
-	unsigned long		state;		/* state flags */
+	unsigned long		idle_start;	/* start time, jiffies */
 
 	/* connection counters and thresholds */
 	atomic_t		activeconns;	/* active connections */
···
 	struct ip_vs_dest_dst __rcu *dest_dst;	/* cached dst info */
 
 	/* for virtual service */
-	struct ip_vs_service	*svc;		/* service it belongs to */
+	struct ip_vs_service __rcu *svc;	/* service it belongs to */
 	__u16			protocol;	/* which protocol (TCP/UDP) */
 	__be16			vport;		/* virtual port number */
 	union nf_inet_addr	vaddr;		/* virtual IP address */
 	__u32			vfwmark;	/* firewall mark of service */
 
 	struct list_head	t_list;		/* in dest_trash */
-	struct rcu_head		rcu_head;
 	unsigned int		in_rs_table:1;	/* we are in rs_table */
 };
···
 /* CONFIG_IP_VS_NFCT */
 #endif
 
-static inline unsigned int
+static inline int
 ip_vs_dest_conn_overhead(struct ip_vs_dest *dest)
 {
 	/*
include/net/mrp.h (+1)
···
 	struct mrp_application	*app;
 	struct net_device	*dev;
 	struct timer_list	join_timer;
+	struct timer_list	periodic_timer;
 
 	spinlock_t		lock;
 	struct sk_buff_head	queue;
include/net/net_namespace.h (+1)
···
 	struct hlist_head	*dev_index_head;
 	unsigned int		dev_base_seq;	/* protected by rtnl_mutex */
 	int			ifindex;
+	unsigned int		dev_unreg_count;
 
 	/* core fib_rules */
 	struct list_head	rules_ops;
include/net/netfilter/nf_conntrack_synproxy.h (+1 -1)
···
 struct tcphdr;
 struct xt_synproxy_info;
-extern void synproxy_parse_options(const struct sk_buff *skb, unsigned int doff,
+extern bool synproxy_parse_options(const struct sk_buff *skb, unsigned int doff,
 				   const struct tcphdr *th,
 				   struct synproxy_options *opts);
 extern unsigned int synproxy_options_size(const struct synproxy_options *opts);
include/net/secure_seq.h (-1)
···
 #include <linux/types.h>
 
-extern void net_secret_init(void);
 extern __u32 secure_ip_id(__be32 daddr);
 extern __u32 secure_ipv6_id(const __be32 daddr[4]);
 extern u32 secure_ipv4_port_ephemeral(__be32 saddr, __be32 daddr, __be16 dport);
include/net/sock.h (+5)
···
 	void			(*sk_destruct)(struct sock *sk);
 };
 
+#define __sk_user_data(sk) ((*((void __rcu **)&(sk)->sk_user_data)))
+
+#define rcu_dereference_sk_user_data(sk)	rcu_dereference(__sk_user_data((sk)))
+#define rcu_assign_sk_user_data(sk, ptr)	rcu_assign_pointer(__sk_user_data((sk)), ptr)
+
 /*
  * SK_CAN_REUSE and SK_NO_REUSE on a socket mean that the socket is OK
  * or not whether his port will be reused by someone else. SK_FORCE_REUSE
+2
lib/hexdump.c
··· 14 14 15 15 const char hex_asc[] = "0123456789abcdef"; 16 16 EXPORT_SYMBOL(hex_asc); 17 + const char hex_asc_upper[] = "0123456789ABCDEF"; 18 + EXPORT_SYMBOL(hex_asc_upper); 17 19 18 20 /** 19 21 * hex_to_bin - convert a hex digit to its real value
+27
net/802/mrp.c
··· 24 24 static unsigned int mrp_join_time __read_mostly = 200; 25 25 module_param(mrp_join_time, uint, 0644); 26 26 MODULE_PARM_DESC(mrp_join_time, "Join time in ms (default 200ms)"); 27 + 28 + static unsigned int mrp_periodic_time __read_mostly = 1000; 29 + module_param(mrp_periodic_time, uint, 0644); 30 + MODULE_PARM_DESC(mrp_periodic_time, "Periodic time in ms (default 1s)"); 31 + 27 32 MODULE_LICENSE("GPL"); 28 33 29 34 static const u8 ··· 600 595 mrp_join_timer_arm(app); 601 596 } 602 597 598 + static void mrp_periodic_timer_arm(struct mrp_applicant *app) 599 + { 600 + mod_timer(&app->periodic_timer, 601 + jiffies + msecs_to_jiffies(mrp_periodic_time)); 602 + } 603 + 604 + static void mrp_periodic_timer(unsigned long data) 605 + { 606 + struct mrp_applicant *app = (struct mrp_applicant *)data; 607 + 608 + spin_lock(&app->lock); 609 + mrp_mad_event(app, MRP_EVENT_PERIODIC); 610 + mrp_pdu_queue(app); 611 + spin_unlock(&app->lock); 612 + 613 + mrp_periodic_timer_arm(app); 614 + } 615 + 603 616 static int mrp_pdu_parse_end_mark(struct sk_buff *skb, int *offset) 604 617 { 605 618 __be16 endmark; ··· 868 845 rcu_assign_pointer(dev->mrp_port->applicants[appl->type], app); 869 846 setup_timer(&app->join_timer, mrp_join_timer, (unsigned long)app); 870 847 mrp_join_timer_arm(app); 848 + setup_timer(&app->periodic_timer, mrp_periodic_timer, 849 + (unsigned long)app); 850 + mrp_periodic_timer_arm(app); 871 851 return 0; 872 852 873 853 err3: ··· 896 870 * all pending messages before the applicant is gone. 897 871 */ 898 872 del_timer_sync(&app->join_timer); 873 + del_timer_sync(&app->periodic_timer); 899 874 900 875 spin_lock_bh(&app->lock); 901 876 mrp_mad_event(app, MRP_EVENT_TX);
+20 -6
net/bluetooth/hci_core.c
··· 1146 1146 goto done; 1147 1147 } 1148 1148 1149 - if (hdev->rfkill && rfkill_blocked(hdev->rfkill)) { 1149 + /* Check for rfkill but allow the HCI setup stage to proceed 1150 + * (which in itself doesn't cause any RF activity). 1151 + */ 1152 + if (test_bit(HCI_RFKILLED, &hdev->dev_flags) && 1153 + !test_bit(HCI_SETUP, &hdev->dev_flags)) { 1150 1154 ret = -ERFKILL; 1151 1155 goto done; 1152 1156 } ··· 1570 1566 1571 1567 BT_DBG("%p name %s blocked %d", hdev, hdev->name, blocked); 1572 1568 1573 - if (!blocked) 1574 - return 0; 1575 - 1576 - hci_dev_do_close(hdev); 1569 + if (blocked) { 1570 + set_bit(HCI_RFKILLED, &hdev->dev_flags); 1571 + if (!test_bit(HCI_SETUP, &hdev->dev_flags)) 1572 + hci_dev_do_close(hdev); 1573 + } else { 1574 + clear_bit(HCI_RFKILLED, &hdev->dev_flags); 1575 + } 1577 1576 1578 1577 return 0; 1579 1578 } ··· 1598 1591 return; 1599 1592 } 1600 1593 1601 - if (test_bit(HCI_AUTO_OFF, &hdev->dev_flags)) 1594 + if (test_bit(HCI_RFKILLED, &hdev->dev_flags)) { 1595 + clear_bit(HCI_AUTO_OFF, &hdev->dev_flags); 1596 + hci_dev_do_close(hdev); 1597 + } else if (test_bit(HCI_AUTO_OFF, &hdev->dev_flags)) { 1602 1598 queue_delayed_work(hdev->req_workqueue, &hdev->power_off, 1603 1599 HCI_AUTO_OFF_TIMEOUT); 1600 + } 1604 1601 1605 1602 if (test_and_clear_bit(HCI_SETUP, &hdev->dev_flags)) 1606 1603 mgmt_index_added(hdev); ··· 2219 2208 hdev->rfkill = NULL; 2220 2209 } 2221 2210 } 2211 + 2212 + if (hdev->rfkill && rfkill_blocked(hdev->rfkill)) 2213 + set_bit(HCI_RFKILLED, &hdev->dev_flags); 2222 2214 2223 2215 set_bit(HCI_SETUP, &hdev->dev_flags); 2224 2216
+5 -1
net/bluetooth/hci_event.c
··· 3557 3557 cp.handle = cpu_to_le16(conn->handle); 3558 3558 3559 3559 if (ltk->authenticated) 3560 - conn->sec_level = BT_SECURITY_HIGH; 3560 + conn->pending_sec_level = BT_SECURITY_HIGH; 3561 + else 3562 + conn->pending_sec_level = BT_SECURITY_MEDIUM; 3563 + 3564 + conn->enc_key_size = ltk->enc_size; 3561 3565 3562 3566 hci_send_cmd(hdev, HCI_OP_LE_LTK_REPLY, sizeof(cp), &cp); 3563 3567
+7
net/bluetooth/l2cap_core.c
··· 3755 3755 3756 3756 sk = chan->sk; 3757 3757 3758 + /* For certain devices (ex: HID mouse), support for authentication, 3759 + * pairing and bonding is optional. For such devices, in order to avoid 3760 + * keeping the ACL alive for too long after L2CAP disconnection, reset the ACL 3761 + * disc_timeout back to HCI_DISCONN_TIMEOUT during L2CAP connect. 3762 + */ 3763 + conn->hcon->disc_timeout = HCI_DISCONN_TIMEOUT; 3764 + 3758 3765 bacpy(&bt_sk(sk)->src, conn->src); 3759 3766 bacpy(&bt_sk(sk)->dst, conn->dst); 3760 3767 chan->psm = psm;
+2 -33
net/bluetooth/rfcomm/tty.c
··· 569 569 static void rfcomm_dev_state_change(struct rfcomm_dlc *dlc, int err) 570 570 { 571 571 struct rfcomm_dev *dev = dlc->owner; 572 - struct tty_struct *tty; 573 572 if (!dev) 574 573 return; 575 574 ··· 580 581 DPM_ORDER_DEV_AFTER_PARENT); 581 582 582 583 wake_up_interruptible(&dev->port.open_wait); 583 - } else if (dlc->state == BT_CLOSED) { 584 - tty = tty_port_tty_get(&dev->port); 585 - if (!tty) { 586 - if (test_bit(RFCOMM_RELEASE_ONHUP, &dev->flags)) { 587 - /* Drop DLC lock here to avoid deadlock 588 - * 1. rfcomm_dev_get will take rfcomm_dev_lock 589 - * but in rfcomm_dev_add there's lock order: 590 - * rfcomm_dev_lock -> dlc lock 591 - * 2. tty_port_put will deadlock if it's 592 - * the last reference 593 - * 594 - * FIXME: when we release the lock anything 595 - * could happen to dev, even its destruction 596 - */ 597 - rfcomm_dlc_unlock(dlc); 598 - if (rfcomm_dev_get(dev->id) == NULL) { 599 - rfcomm_dlc_lock(dlc); 600 - return; 601 - } 602 - 603 - if (!test_and_set_bit(RFCOMM_TTY_RELEASED, 604 - &dev->flags)) 605 - tty_port_put(&dev->port); 606 - 607 - tty_port_put(&dev->port); 608 - rfcomm_dlc_lock(dlc); 609 - } 610 - } else { 611 - tty_hangup(tty); 612 - tty_kref_put(tty); 613 - } 614 - } 584 + } else if (dlc->state == BT_CLOSED) 585 + tty_port_tty_hangup(&dev->port, false); 615 586 } 616 587 617 588 static void rfcomm_dev_modem_status(struct rfcomm_dlc *dlc, u8 v24_sig)
+48 -1
net/core/dev.c
··· 5247 5247 5248 5248 /* Delayed registration/unregisteration */ 5249 5249 static LIST_HEAD(net_todo_list); 5250 + static DECLARE_WAIT_QUEUE_HEAD(netdev_unregistering_wq); 5250 5251 5251 5252 static void net_set_todo(struct net_device *dev) 5252 5253 { 5253 5254 list_add_tail(&dev->todo_list, &net_todo_list); 5255 + dev_net(dev)->dev_unreg_count++; 5254 5256 } 5255 5257 5256 5258 static void rollback_registered_many(struct list_head *head) ··· 5919 5917 5920 5918 if (dev->destructor) 5921 5919 dev->destructor(dev); 5920 + 5921 + /* Report a network device has been unregistered */ 5922 + rtnl_lock(); 5923 + dev_net(dev)->dev_unreg_count--; 5924 + __rtnl_unlock(); 5925 + wake_up(&netdev_unregistering_wq); 5922 5926 5923 5927 /* Free network device */ 5924 5928 kobject_put(&dev->dev.kobj); ··· 6611 6603 rtnl_unlock(); 6612 6604 } 6613 6605 6606 + static void __net_exit rtnl_lock_unregistering(struct list_head *net_list) 6607 + { 6608 + /* Return with the rtnl_lock held when there are no network 6609 + * devices unregistering in any network namespace in net_list. 
6610 + */ 6611 + struct net *net; 6612 + bool unregistering; 6613 + DEFINE_WAIT(wait); 6614 + 6615 + for (;;) { 6616 + prepare_to_wait(&netdev_unregistering_wq, &wait, 6617 + TASK_UNINTERRUPTIBLE); 6618 + unregistering = false; 6619 + rtnl_lock(); 6620 + list_for_each_entry(net, net_list, exit_list) { 6621 + if (net->dev_unreg_count > 0) { 6622 + unregistering = true; 6623 + break; 6624 + } 6625 + } 6626 + if (!unregistering) 6627 + break; 6628 + __rtnl_unlock(); 6629 + schedule(); 6630 + } 6631 + finish_wait(&netdev_unregistering_wq, &wait); 6632 + } 6633 + 6614 6634 static void __net_exit default_device_exit_batch(struct list_head *net_list) 6615 6635 { 6616 6636 /* At exit all network devices must be removed from a network ··· 6650 6614 struct net *net; 6651 6615 LIST_HEAD(dev_kill_list); 6652 6616 6653 - rtnl_lock(); 6617 + /* To prevent network device cleanup code from dereferencing 6618 + * loopback devices or network devices that have been freed, 6619 + * wait here for all pending unregistrations to complete, 6620 + * before unregistering the loopback device and allowing the 6621 + * network namespace to be freed. 6622 + * 6623 + * The netdev todo list containing all network devices 6624 + * unregistrations that happen in default_device_exit_batch 6625 + * will run in the rtnl_unlock() at the end of 6626 + * default_device_exit_batch. 6627 + */ 6628 + rtnl_lock_unregistering(net_list); 6654 6629 list_for_each_entry(net, net_list, exit_list) { 6655 6630 for_each_netdev_reverse(net, dev) { 6656 6631 if (dev->rtnl_link_ops)
+2 -2
net/core/flow_dissector.c
··· 154 154 if (poff >= 0) { 155 155 __be32 *ports, _ports; 156 156 157 - nhoff += poff; 158 - ports = skb_header_pointer(skb, nhoff, sizeof(_ports), &_ports); 157 + ports = skb_header_pointer(skb, nhoff + poff, 158 + sizeof(_ports), &_ports); 159 159 if (ports) 160 160 flow->ports = *ports; 161 161 }
+24 -3
net/core/secure_seq.c
··· 10 10 11 11 #include <net/secure_seq.h> 12 12 13 - static u32 net_secret[MD5_MESSAGE_BYTES / 4] ____cacheline_aligned; 13 + #define NET_SECRET_SIZE (MD5_MESSAGE_BYTES / 4) 14 14 15 - void net_secret_init(void) 15 + static u32 net_secret[NET_SECRET_SIZE] ____cacheline_aligned; 16 + 17 + static void net_secret_init(void) 16 18 { 17 - get_random_bytes(net_secret, sizeof(net_secret)); 19 + u32 tmp; 20 + int i; 21 + 22 + if (likely(net_secret[0])) 23 + return; 24 + 25 + for (i = NET_SECRET_SIZE; i > 0;) { 26 + do { 27 + get_random_bytes(&tmp, sizeof(tmp)); 28 + } while (!tmp); 29 + cmpxchg(&net_secret[--i], 0, tmp); 30 + } 18 31 } 19 32 20 33 #ifdef CONFIG_INET ··· 55 42 u32 hash[MD5_DIGEST_WORDS]; 56 43 u32 i; 57 44 45 + net_secret_init(); 58 46 memcpy(hash, saddr, 16); 59 47 for (i = 0; i < 4; i++) 60 48 secret[i] = net_secret[i] + (__force u32)daddr[i]; ··· 77 63 u32 hash[MD5_DIGEST_WORDS]; 78 64 u32 i; 79 65 66 + net_secret_init(); 80 67 memcpy(hash, saddr, 16); 81 68 for (i = 0; i < 4; i++) 82 69 secret[i] = net_secret[i] + (__force u32) daddr[i]; ··· 97 82 { 98 83 u32 hash[MD5_DIGEST_WORDS]; 99 84 85 + net_secret_init(); 100 86 hash[0] = (__force __u32) daddr; 101 87 hash[1] = net_secret[13]; 102 88 hash[2] = net_secret[14]; ··· 112 96 { 113 97 __u32 hash[4]; 114 98 99 + net_secret_init(); 115 100 memcpy(hash, daddr, 16); 116 101 md5_transform(hash, net_secret); 117 102 ··· 124 107 { 125 108 u32 hash[MD5_DIGEST_WORDS]; 126 109 110 + net_secret_init(); 127 111 hash[0] = (__force u32)saddr; 128 112 hash[1] = (__force u32)daddr; 129 113 hash[2] = ((__force u16)sport << 16) + (__force u16)dport; ··· 139 121 { 140 122 u32 hash[MD5_DIGEST_WORDS]; 141 123 124 + net_secret_init(); 142 125 hash[0] = (__force u32)saddr; 143 126 hash[1] = (__force u32)daddr; 144 127 hash[2] = (__force u32)dport ^ net_secret[14]; ··· 159 140 u32 hash[MD5_DIGEST_WORDS]; 160 141 u64 seq; 161 142 143 + net_secret_init(); 162 144 hash[0] = (__force u32)saddr; 163 145 hash[1] = (__force 
u32)daddr; 164 146 hash[2] = ((__force u16)sport << 16) + (__force u16)dport; ··· 184 164 u64 seq; 185 165 u32 i; 186 166 167 + net_secret_init(); 187 168 memcpy(hash, saddr, 16); 188 169 for (i = 0; i < 4; i++) 189 170 secret[i] = net_secret[i] + daddr[i];
+1 -3
net/ipv4/af_inet.c
··· 263 263 get_random_bytes(&rnd, sizeof(rnd)); 264 264 } while (rnd == 0); 265 265 266 - if (cmpxchg(&inet_ehash_secret, 0, rnd) == 0) { 266 + if (cmpxchg(&inet_ehash_secret, 0, rnd) == 0) 267 267 get_random_bytes(&ipv6_hash_secret, sizeof(ipv6_hash_secret)); 268 - net_secret_init(); 269 - } 270 268 } 271 269 EXPORT_SYMBOL(build_ehash_secret); 272 270
+2 -2
net/ipv4/igmp.c
··· 736 736 737 737 in_dev->mr_gq_running = 0; 738 738 igmpv3_send_report(in_dev, NULL); 739 - __in_dev_put(in_dev); 739 + in_dev_put(in_dev); 740 740 } 741 741 742 742 static void igmp_ifc_timer_expire(unsigned long data) ··· 749 749 igmp_ifc_start_timer(in_dev, 750 750 unsolicited_report_interval(in_dev)); 751 751 } 752 - __in_dev_put(in_dev); 752 + in_dev_put(in_dev); 753 753 } 754 754 755 755 static void igmp_ifc_event(struct in_device *in_dev)
+11 -11
net/ipv4/ip_tunnel.c
··· 623 623 tunnel->err_count = 0; 624 624 } 625 625 626 + tos = ip_tunnel_ecn_encap(tos, inner_iph, skb); 626 627 ttl = tnl_params->ttl; 627 628 if (ttl == 0) { 628 629 if (skb->protocol == htons(ETH_P_IP)) ··· 642 641 643 642 max_headroom = LL_RESERVED_SPACE(rt->dst.dev) + sizeof(struct iphdr) 644 643 + rt->dst.header_len; 645 - if (max_headroom > dev->needed_headroom) { 644 + if (max_headroom > dev->needed_headroom) 646 645 dev->needed_headroom = max_headroom; 647 - if (skb_cow_head(skb, dev->needed_headroom)) { 648 - dev->stats.tx_dropped++; 649 - dev_kfree_skb(skb); 650 - return; 651 - } 646 + 647 + if (skb_cow_head(skb, dev->needed_headroom)) { 648 + dev->stats.tx_dropped++; 649 + dev_kfree_skb(skb); 650 + return; 652 651 } 653 652 654 653 err = iptunnel_xmit(rt, skb, fl4.saddr, fl4.daddr, protocol, 655 - ip_tunnel_ecn_encap(tos, inner_iph, skb), ttl, df, 656 - !net_eq(tunnel->net, dev_net(dev))); 654 + tos, ttl, df, !net_eq(tunnel->net, dev_net(dev))); 657 655 iptunnel_xmit_stats(err, &dev->stats, dev->tstats); 658 656 659 657 return; ··· 853 853 /* FB netdevice is special: we have one, and only one per netns. 854 854 * Allowing to move it to another netns is clearly unsafe. 855 855 */ 856 - if (!IS_ERR(itn->fb_tunnel_dev)) 856 + if (!IS_ERR(itn->fb_tunnel_dev)) { 857 857 itn->fb_tunnel_dev->features |= NETIF_F_NETNS_LOCAL; 858 + ip_tunnel_add(itn, netdev_priv(itn->fb_tunnel_dev)); 859 + } 858 860 rtnl_unlock(); 859 861 860 862 return PTR_RET(itn->fb_tunnel_dev); ··· 886 884 if (!net_eq(dev_net(t->dev), net)) 887 885 unregister_netdevice_queue(t->dev, head); 888 886 } 889 - if (itn->fb_tunnel_dev) 890 - unregister_netdevice_queue(itn->fb_tunnel_dev, head); 891 887 } 892 888 893 889 void ip_tunnel_delete_net(struct ip_tunnel_net *itn, struct rtnl_link_ops *ops)
+1 -1
net/ipv4/ip_tunnel_core.c
··· 61 61 memset(IPCB(skb), 0, sizeof(*IPCB(skb))); 62 62 63 63 /* Push down and install the IP header. */ 64 - __skb_push(skb, sizeof(struct iphdr)); 64 + skb_push(skb, sizeof(struct iphdr)); 65 65 skb_reset_network_header(skb); 66 66 67 67 iph = ip_hdr(skb);
+7 -3
net/ipv4/netfilter/ipt_SYNPROXY.c
··· 267 267 if (th == NULL) 268 268 return NF_DROP; 269 269 270 - synproxy_parse_options(skb, par->thoff, th, &opts); 270 + if (!synproxy_parse_options(skb, par->thoff, th, &opts)) 271 + return NF_DROP; 271 272 272 273 if (th->syn && !(th->ack || th->fin || th->rst)) { 273 274 /* Initial SYN from client */ ··· 351 350 352 351 /* fall through */ 353 352 case TCP_CONNTRACK_SYN_SENT: 354 - synproxy_parse_options(skb, thoff, th, &opts); 353 + if (!synproxy_parse_options(skb, thoff, th, &opts)) 354 + return NF_DROP; 355 355 356 356 if (!th->syn && th->ack && 357 357 CTINFO2DIR(ctinfo) == IP_CT_DIR_ORIGINAL) { ··· 375 373 if (!th->syn || !th->ack) 376 374 break; 377 375 378 - synproxy_parse_options(skb, thoff, th, &opts); 376 + if (!synproxy_parse_options(skb, thoff, th, &opts)) 377 + return NF_DROP; 378 + 379 379 if (opts.options & XT_SYNPROXY_OPT_TIMESTAMP) 380 380 synproxy->tsoff = opts.tsval - synproxy->its; 381 381
+3 -1
net/ipv4/raw.c
··· 218 218 219 219 if (type == ICMP_DEST_UNREACH && code == ICMP_FRAG_NEEDED) 220 220 ipv4_sk_update_pmtu(skb, sk, info); 221 - else if (type == ICMP_REDIRECT) 221 + else if (type == ICMP_REDIRECT) { 222 222 ipv4_sk_redirect(skb, sk); 223 + return; 224 + } 223 225 224 226 /* Report error on raw socket, if: 225 227 1. User requested ip_recverr.
+11 -6
net/ipv4/tcp_output.c
··· 895 895 896 896 skb_orphan(skb); 897 897 skb->sk = sk; 898 - skb->destructor = (sysctl_tcp_limit_output_bytes > 0) ? 899 - tcp_wfree : sock_wfree; 898 + skb->destructor = tcp_wfree; 900 899 atomic_add(skb->truesize, &sk->sk_wmem_alloc); 901 900 902 901 /* Build TCP header and checksum it. */ ··· 1839 1840 while ((skb = tcp_send_head(sk))) { 1840 1841 unsigned int limit; 1841 1842 1842 - 1843 1843 tso_segs = tcp_init_tso_segs(sk, skb, mss_now); 1844 1844 BUG_ON(!tso_segs); 1845 1845 ··· 1867 1869 break; 1868 1870 } 1869 1871 1870 - /* TSQ : sk_wmem_alloc accounts skb truesize, 1871 - * including skb overhead. But thats OK. 1872 + /* TCP Small Queues : 1873 + * Control number of packets in qdisc/devices to two packets / or ~1 ms. 1874 + * This allows for : 1875 + * - better RTT estimation and ACK scheduling 1876 + * - faster recovery 1877 + * - high rates 1872 1878 */ 1873 - if (atomic_read(&sk->sk_wmem_alloc) >= sysctl_tcp_limit_output_bytes) { 1879 + limit = max(skb->truesize, sk->sk_pacing_rate >> 10); 1880 + 1881 + if (atomic_read(&sk->sk_wmem_alloc) > limit) { 1874 1882 set_bit(TSQ_THROTTLED, &tp->tsq_flags); 1875 1883 break; 1876 1884 } 1885 + 1877 1886 limit = mss_now; 1878 1887 if (tso_segs > 1 && !tcp_urg_mode(tp)) 1879 1888 limit = tcp_mss_split_point(sk, skb, mss_now,
+1 -1
net/ipv4/udp.c
··· 658 658 break; 659 659 case ICMP_REDIRECT: 660 660 ipv4_sk_redirect(skb, sk); 661 - break; 661 + goto out; 662 662 } 663 663 664 664 /*
+42 -37
net/ipv6/addrconf.c
··· 1499 1499 return false; 1500 1500 } 1501 1501 1502 + /* Compares an address/prefix_len with addresses on device @dev. 1503 + * If one is found it returns true. 1504 + */ 1505 + bool ipv6_chk_custom_prefix(const struct in6_addr *addr, 1506 + const unsigned int prefix_len, struct net_device *dev) 1507 + { 1508 + struct inet6_dev *idev; 1509 + struct inet6_ifaddr *ifa; 1510 + bool ret = false; 1511 + 1512 + rcu_read_lock(); 1513 + idev = __in6_dev_get(dev); 1514 + if (idev) { 1515 + read_lock_bh(&idev->lock); 1516 + list_for_each_entry(ifa, &idev->addr_list, if_list) { 1517 + ret = ipv6_prefix_equal(addr, &ifa->addr, prefix_len); 1518 + if (ret) 1519 + break; 1520 + } 1521 + read_unlock_bh(&idev->lock); 1522 + } 1523 + rcu_read_unlock(); 1524 + 1525 + return ret; 1526 + } 1527 + EXPORT_SYMBOL(ipv6_chk_custom_prefix); 1528 + 1502 1529 int ipv6_chk_prefix(const struct in6_addr *addr, struct net_device *dev) 1503 1530 { 1504 1531 struct inet6_dev *idev; ··· 2220 2193 else 2221 2194 stored_lft = 0; 2222 2195 if (!update_lft && !create && stored_lft) { 2223 - if (valid_lft > MIN_VALID_LIFETIME || 2224 - valid_lft > stored_lft) 2225 - update_lft = 1; 2226 - else if (stored_lft <= MIN_VALID_LIFETIME) { 2227 - /* valid_lft <= stored_lft is always true */ 2228 - /* 2229 - * RFC 4862 Section 5.5.3e: 2230 - * "Note that the preferred lifetime of 2231 - * the corresponding address is always 2232 - * reset to the Preferred Lifetime in 2233 - * the received Prefix Information 2234 - * option, regardless of whether the 2235 - * valid lifetime is also reset or 2236 - * ignored." 2237 - * 2238 - * So if the preferred lifetime in 2239 - * this advertisement is different 2240 - * than what we have stored, but the 2241 - * valid lifetime is invalid, just 2242 - * reset prefered_lft. 2243 - * 2244 - * We must set the valid lifetime 2245 - * to the stored lifetime since we'll 2246 - * be updating the timestamp below, 2247 - * else we'll set it back to the 2248 - * minimum. 
2249 - */ 2250 - if (prefered_lft != ifp->prefered_lft) { 2251 - valid_lft = stored_lft; 2252 - update_lft = 1; 2253 - } 2254 - } else { 2255 - valid_lft = MIN_VALID_LIFETIME; 2256 - if (valid_lft < prefered_lft) 2257 - prefered_lft = valid_lft; 2258 - update_lft = 1; 2259 - } 2196 + const u32 minimum_lft = min( 2197 + stored_lft, (u32)MIN_VALID_LIFETIME); 2198 + valid_lft = max(valid_lft, minimum_lft); 2199 + 2200 + /* RFC4862 Section 5.5.3e: 2201 + * "Note that the preferred lifetime of the 2202 + * corresponding address is always reset to 2203 + * the Preferred Lifetime in the received 2204 + * Prefix Information option, regardless of 2205 + * whether the valid lifetime is also reset or 2206 + * ignored." 2207 + * 2208 + * So we should always update prefered_lft here. 2209 + */ 2210 + update_lft = 1; 2260 2211 } 2261 2212 2262 2213 if (update_lft) {
+2 -2
net/ipv6/ip6_gre.c
··· 618 618 struct ip6_tnl *tunnel = netdev_priv(dev); 619 619 struct net_device *tdev; /* Device to other host */ 620 620 struct ipv6hdr *ipv6h; /* Our new IP header */ 621 - unsigned int max_headroom; /* The extra header space needed */ 621 + unsigned int max_headroom = 0; /* The extra header space needed */ 622 622 int gre_hlen; 623 623 struct ipv6_tel_txoption opt; 624 624 int mtu; ··· 693 693 694 694 skb_scrub_packet(skb, !net_eq(tunnel->net, dev_net(dev))); 695 695 696 - max_headroom = LL_RESERVED_SPACE(tdev) + gre_hlen + dst->header_len; 696 + max_headroom += LL_RESERVED_SPACE(tdev) + gre_hlen + dst->header_len; 697 697 698 698 if (skb_headroom(skb) < max_headroom || skb_shared(skb) || 699 699 (skb_cloned(skb) && !skb_clone_writable(skb, 0))) {
+23 -32
net/ipv6/ip6_output.c
··· 1015 1015 * udp datagram 1016 1016 */ 1017 1017 if ((skb = skb_peek_tail(&sk->sk_write_queue)) == NULL) { 1018 + struct frag_hdr fhdr; 1019 + 1018 1020 skb = sock_alloc_send_skb(sk, 1019 1021 hh_len + fragheaderlen + transhdrlen + 20, 1020 1022 (flags & MSG_DONTWAIT), &err); ··· 1038 1036 skb->protocol = htons(ETH_P_IPV6); 1039 1037 skb->ip_summed = CHECKSUM_PARTIAL; 1040 1038 skb->csum = 0; 1041 - } 1042 - 1043 - err = skb_append_datato_frags(sk,skb, getfrag, from, 1044 - (length - transhdrlen)); 1045 - if (!err) { 1046 - struct frag_hdr fhdr; 1047 1039 1048 1040 /* Specify the length of each IPv6 datagram fragment. 1049 1041 * It has to be a multiple of 8. ··· 1048 1052 ipv6_select_ident(&fhdr, rt); 1049 1053 skb_shinfo(skb)->ip6_frag_id = fhdr.identification; 1050 1054 __skb_queue_tail(&sk->sk_write_queue, skb); 1051 - 1052 - return 0; 1053 1055 } 1054 - /* There is not enough support do UPD LSO, 1055 - * so follow normal path 1056 - */ 1057 - kfree_skb(skb); 1058 1056 1059 - return err; 1057 + return skb_append_datato_frags(sk, skb, getfrag, from, 1058 + (length - transhdrlen)); 1060 1059 } 1061 1060 1062 1061 static inline struct ipv6_opt_hdr *ip6_opt_dup(struct ipv6_opt_hdr *src, ··· 1218 1227 * --yoshfuji 1219 1228 */ 1220 1229 1221 - cork->length += length; 1222 - if (length > mtu) { 1223 - int proto = sk->sk_protocol; 1224 - if (dontfrag && (proto == IPPROTO_UDP || proto == IPPROTO_RAW)){ 1225 - ipv6_local_rxpmtu(sk, fl6, mtu-exthdrlen); 1226 - return -EMSGSIZE; 1227 - } 1228 - 1229 - if (proto == IPPROTO_UDP && 1230 - (rt->dst.dev->features & NETIF_F_UFO)) { 1231 - 1232 - err = ip6_ufo_append_data(sk, getfrag, from, length, 1233 - hh_len, fragheaderlen, 1234 - transhdrlen, mtu, flags, rt); 1235 - if (err) 1236 - goto error; 1237 - return 0; 1238 - } 1230 + if ((length > mtu) && dontfrag && (sk->sk_protocol == IPPROTO_UDP || 1231 + sk->sk_protocol == IPPROTO_RAW)) { 1232 + ipv6_local_rxpmtu(sk, fl6, mtu-exthdrlen); 1233 + return -EMSGSIZE; 1239 1234 } 
1240 1235 1241 - if ((skb = skb_peek_tail(&sk->sk_write_queue)) == NULL) 1236 + skb = skb_peek_tail(&sk->sk_write_queue); 1237 + cork->length += length; 1238 + if (((length > mtu) || 1239 + (skb && skb_is_gso(skb))) && 1240 + (sk->sk_protocol == IPPROTO_UDP) && 1241 + (rt->dst.dev->features & NETIF_F_UFO)) { 1242 + err = ip6_ufo_append_data(sk, getfrag, from, length, 1243 + hh_len, fragheaderlen, 1244 + transhdrlen, mtu, flags, rt); 1245 + if (err) 1246 + goto error; 1247 + return 0; 1248 + } 1249 + 1250 + if (!skb) 1242 1251 goto alloc_new_skb; 1243 1252 1244 1253 while (length > 0) {
+1 -2
net/ipv6/ip6_tunnel.c
··· 1731 1731 } 1732 1732 } 1733 1733 1734 - t = rtnl_dereference(ip6n->tnls_wc[0]); 1735 - unregister_netdevice_queue(t->dev, &list); 1736 1734 unregister_netdevice_many(&list); 1737 1735 } 1738 1736 ··· 1750 1752 if (!ip6n->fb_tnl_dev) 1751 1753 goto err_alloc_dev; 1752 1754 dev_net_set(ip6n->fb_tnl_dev, net); 1755 + ip6n->fb_tnl_dev->rtnl_link_ops = &ip6_link_ops; 1753 1756 /* FB netdevice is special: we have one, and only one per netns. 1754 1757 * Allowing to move it to another netns is clearly unsafe. 1755 1758 */
+3 -3
net/ipv6/mcast.c
··· 2034 2034 if (idev->mc_dad_count) 2035 2035 mld_dad_start_timer(idev, idev->mc_maxdelay); 2036 2036 } 2037 - __in6_dev_put(idev); 2037 + in6_dev_put(idev); 2038 2038 } 2039 2039 2040 2040 static int ip6_mc_del1_src(struct ifmcaddr6 *pmc, int sfmode, ··· 2379 2379 2380 2380 idev->mc_gq_running = 0; 2381 2381 mld_send_report(idev, NULL); 2382 - __in6_dev_put(idev); 2382 + in6_dev_put(idev); 2383 2383 } 2384 2384 2385 2385 static void mld_ifc_timer_expire(unsigned long data) ··· 2392 2392 if (idev->mc_ifc_count) 2393 2393 mld_ifc_start_timer(idev, idev->mc_maxdelay); 2394 2394 } 2395 - __in6_dev_put(idev); 2395 + in6_dev_put(idev); 2396 2396 } 2397 2397 2398 2398 static void mld_ifc_event(struct inet6_dev *idev)
+7 -3
net/ipv6/netfilter/ip6t_SYNPROXY.c
··· 282 282 if (th == NULL) 283 283 return NF_DROP; 284 284 285 - synproxy_parse_options(skb, par->thoff, th, &opts); 285 + if (!synproxy_parse_options(skb, par->thoff, th, &opts)) 286 + return NF_DROP; 286 287 287 288 if (th->syn && !(th->ack || th->fin || th->rst)) { 288 289 /* Initial SYN from client */ ··· 373 372 374 373 /* fall through */ 375 374 case TCP_CONNTRACK_SYN_SENT: 376 - synproxy_parse_options(skb, thoff, th, &opts); 375 + if (!synproxy_parse_options(skb, thoff, th, &opts)) 376 + return NF_DROP; 377 377 378 378 if (!th->syn && th->ack && 379 379 CTINFO2DIR(ctinfo) == IP_CT_DIR_ORIGINAL) { ··· 397 395 if (!th->syn || !th->ack) 398 396 break; 399 397 400 - synproxy_parse_options(skb, thoff, th, &opts); 398 + if (!synproxy_parse_options(skb, thoff, th, &opts)) 399 + return NF_DROP; 400 + 401 401 if (opts.options & XT_SYNPROXY_OPT_TIMESTAMP) 402 402 synproxy->tsoff = opts.tsval - synproxy->its; 403 403
+3 -1
net/ipv6/raw.c
··· 335 335 ip6_sk_update_pmtu(skb, sk, info); 336 336 harderr = (np->pmtudisc == IPV6_PMTUDISC_DO); 337 337 } 338 - if (type == NDISC_REDIRECT) 338 + if (type == NDISC_REDIRECT) { 339 339 ip6_sk_redirect(skb, sk); 340 + return; 341 + } 340 342 if (np->recverr) { 341 343 u8 *payload = skb->data; 342 344 if (!inet->hdrincl)
+70 -16
net/ipv6/sit.c
··· 566 566 return false; 567 567 } 568 568 569 + /* Checks if an address matches an address on the tunnel interface. 570 + * Used to detect the NAT of proto 41 packets and let them pass the spoofing test. 571 + * Long story: 572 + * This function is called after we considered the packet as spoofed 573 + * in is_spoofed_6rd. 574 + * We may have a router that is doing NAT for proto 41 packets 575 + * for an internal station. Destination a.a.a.a/PREFIX:bbbb:bbbb 576 + * will be translated to n.n.n.n/PREFIX:bbbb:bbbb. And is_spoofed_6rd 577 + * function will return true, dropping the packet. 578 + * But we can still check if it is spoofed against the IP 579 + * addresses associated with the interface. 580 + */ 581 + static bool only_dnatted(const struct ip_tunnel *tunnel, 582 + const struct in6_addr *v6dst) 583 + { 584 + int prefix_len; 585 + 586 + #ifdef CONFIG_IPV6_SIT_6RD 587 + prefix_len = tunnel->ip6rd.prefixlen + 32 588 + - tunnel->ip6rd.relay_prefixlen; 589 + #else 590 + prefix_len = 48; 591 + #endif 592 + return ipv6_chk_custom_prefix(v6dst, prefix_len, tunnel->dev); 593 + } 594 + 595 + /* Returns true if a packet is spoofed */ 596 + static bool packet_is_spoofed(struct sk_buff *skb, 597 + const struct iphdr *iph, 598 + struct ip_tunnel *tunnel) 599 + { 600 + const struct ipv6hdr *ipv6h; 601 + 602 + if (tunnel->dev->priv_flags & IFF_ISATAP) { 603 + if (!isatap_chksrc(skb, iph, tunnel)) 604 + return true; 605 + 606 + return false; 607 + } 608 + 609 + if (tunnel->dev->flags & IFF_POINTOPOINT) 610 + return false; 611 + 612 + ipv6h = ipv6_hdr(skb); 613 + 614 + if (unlikely(is_spoofed_6rd(tunnel, iph->saddr, &ipv6h->saddr))) { 615 + net_warn_ratelimited("Src spoofed %pI4/%pI6c -> %pI4/%pI6c\n", 616 + &iph->saddr, &ipv6h->saddr, 617 + &iph->daddr, &ipv6h->daddr); 618 + return true; 619 + } 620 + 621 + if (likely(!is_spoofed_6rd(tunnel, iph->daddr, &ipv6h->daddr))) 622 + return false; 623 + 624 + if (only_dnatted(tunnel, &ipv6h->daddr)) 625 + return false; 626 + 627 + 
net_warn_ratelimited("Dst spoofed %pI4/%pI6c -> %pI4/%pI6c\n", 628 + &iph->saddr, &ipv6h->saddr, 629 + &iph->daddr, &ipv6h->daddr); 630 + return true; 631 + } 632 + 569 633 static int ipip6_rcv(struct sk_buff *skb) 570 634 { 571 635 const struct iphdr *iph = ip_hdr(skb); ··· 650 586 IPCB(skb)->flags = 0; 651 587 skb->protocol = htons(ETH_P_IPV6); 652 588 653 - if (tunnel->dev->priv_flags & IFF_ISATAP) { 654 - if (!isatap_chksrc(skb, iph, tunnel)) { 655 - tunnel->dev->stats.rx_errors++; 656 - goto out; 657 - } 658 - } else if (!(tunnel->dev->flags&IFF_POINTOPOINT)) { 659 - if (is_spoofed_6rd(tunnel, iph->saddr, 660 - &ipv6_hdr(skb)->saddr) || 661 - is_spoofed_6rd(tunnel, iph->daddr, 662 - &ipv6_hdr(skb)->daddr)) { 663 - tunnel->dev->stats.rx_errors++; 664 - goto out; 665 - } 589 + if (packet_is_spoofed(skb, iph, tunnel)) { 590 + tunnel->dev->stats.rx_errors++; 591 + goto out; 666 592 } 667 593 668 594 __skb_tunnel_rx(skb, tunnel->dev, tunnel->net); ··· 802 748 neigh = dst_neigh_lookup(skb_dst(skb), &iph6->daddr); 803 749 804 750 if (neigh == NULL) { 805 - net_dbg_ratelimited("sit: nexthop == NULL\n"); 751 + net_dbg_ratelimited("nexthop == NULL\n"); 806 752 goto tx_error; 807 753 } 808 754 ··· 831 777 neigh = dst_neigh_lookup(skb_dst(skb), &iph6->daddr); 832 778 833 779 if (neigh == NULL) { 834 - net_dbg_ratelimited("sit: nexthop == NULL\n"); 780 + net_dbg_ratelimited("nexthop == NULL\n"); 835 781 goto tx_error; 836 782 } 837 783 ··· 1666 1612 goto err_alloc_dev; 1667 1613 } 1668 1614 dev_net_set(sitn->fb_tunnel_dev, net); 1615 + sitn->fb_tunnel_dev->rtnl_link_ops = &sit_link_ops; 1669 1616 /* FB netdevice is special: we have one, and only one per netns. 1670 1617 * Allowing to move it to another netns is clearly unsafe. 1671 1618 */ ··· 1701 1646 1702 1647 rtnl_lock(); 1703 1648 sit_destroy_tunnels(sitn, &list); 1704 - unregister_netdevice_queue(sitn->fb_tunnel_dev, &list); 1705 1649 unregister_netdevice_many(&list); 1706 1650 rtnl_unlock(); 1707 1651 }
+3 -1
net/ipv6/udp.c
··· 525 525 526 526 if (type == ICMPV6_PKT_TOOBIG) 527 527 ip6_sk_update_pmtu(skb, sk, info); 528 - if (type == NDISC_REDIRECT) 528 + if (type == NDISC_REDIRECT) { 529 529 ip6_sk_redirect(skb, sk); 530 + goto out; 531 + } 530 532 531 533 np = inet6_sk(sk); 532 534
+1
net/lapb/lapb_timer.c
··· 154 154 } else { 155 155 lapb->n2count++; 156 156 lapb_requeue_frames(lapb); 157 + lapb_kick(lapb); 157 158 } 158 159 break; 159 160
+10 -2
net/netfilter/ipvs/ip_vs_core.c
··· 116 116 117 117 if (dest && (dest->flags & IP_VS_DEST_F_AVAILABLE)) { 118 118 struct ip_vs_cpu_stats *s; 119 + struct ip_vs_service *svc; 119 120 120 121 s = this_cpu_ptr(dest->stats.cpustats); 121 122 s->ustats.inpkts++; ··· 124 123 s->ustats.inbytes += skb->len; 125 124 u64_stats_update_end(&s->syncp); 126 125 127 - s = this_cpu_ptr(dest->svc->stats.cpustats); 126 + rcu_read_lock(); 127 + svc = rcu_dereference(dest->svc); 128 + s = this_cpu_ptr(svc->stats.cpustats); 128 129 s->ustats.inpkts++; 129 130 u64_stats_update_begin(&s->syncp); 130 131 s->ustats.inbytes += skb->len; 131 132 u64_stats_update_end(&s->syncp); 133 + rcu_read_unlock(); 132 134 133 135 s = this_cpu_ptr(ipvs->tot_stats.cpustats); 134 136 s->ustats.inpkts++; ··· 150 146 151 147 if (dest && (dest->flags & IP_VS_DEST_F_AVAILABLE)) { 152 148 struct ip_vs_cpu_stats *s; 149 + struct ip_vs_service *svc; 153 150 154 151 s = this_cpu_ptr(dest->stats.cpustats); 155 152 s->ustats.outpkts++; ··· 158 153 s->ustats.outbytes += skb->len; 159 154 u64_stats_update_end(&s->syncp); 160 155 161 - s = this_cpu_ptr(dest->svc->stats.cpustats); 156 + rcu_read_lock(); 157 + svc = rcu_dereference(dest->svc); 158 + s = this_cpu_ptr(svc->stats.cpustats); 162 159 s->ustats.outpkts++; 163 160 u64_stats_update_begin(&s->syncp); 164 161 s->ustats.outbytes += skb->len; 165 162 u64_stats_update_end(&s->syncp); 163 + rcu_read_unlock(); 166 164 167 165 s = this_cpu_ptr(ipvs->tot_stats.cpustats); 168 166 s->ustats.outpkts++;
+35 -51
net/netfilter/ipvs/ip_vs_ctl.c
···
 __ip_vs_bind_svc(struct ip_vs_dest *dest, struct ip_vs_service *svc)
 {
 	atomic_inc(&svc->refcnt);
-	dest->svc = svc;
+	rcu_assign_pointer(dest->svc, svc);
 }
 
 static void ip_vs_service_free(struct ip_vs_service *svc)
···
 	kfree(svc);
 }
 
-static void
-__ip_vs_unbind_svc(struct ip_vs_dest *dest)
+static void ip_vs_service_rcu_free(struct rcu_head *head)
 {
-	struct ip_vs_service *svc = dest->svc;
+	struct ip_vs_service *svc;
 
-	dest->svc = NULL;
+	svc = container_of(head, struct ip_vs_service, rcu_head);
+	ip_vs_service_free(svc);
+}
+
+static void __ip_vs_svc_put(struct ip_vs_service *svc, bool do_delay)
+{
 	if (atomic_dec_and_test(&svc->refcnt)) {
 		IP_VS_DBG_BUF(3, "Removing service %u/%s:%u\n",
 			      svc->fwmark,
 			      IP_VS_DBG_ADDR(svc->af, &svc->addr),
 			      ntohs(svc->port));
-		ip_vs_service_free(svc);
+		if (do_delay)
+			call_rcu(&svc->rcu_head, ip_vs_service_rcu_free);
+		else
+			ip_vs_service_free(svc);
 	}
 }
···
 			      IP_VS_DBG_ADDR(svc->af, &dest->addr),
 			      ntohs(dest->port),
 			      atomic_read(&dest->refcnt));
-		/* We can not reuse dest while in grace period
-		 * because conns still can use dest->svc
-		 */
-		if (test_bit(IP_VS_DEST_STATE_REMOVING, &dest->state))
-			continue;
 		if (dest->af == svc->af &&
 		    ip_vs_addr_equal(svc->af, &dest->addr, daddr) &&
 		    dest->port == dport &&
···
 
 static void ip_vs_dest_free(struct ip_vs_dest *dest)
 {
+	struct ip_vs_service *svc = rcu_dereference_protected(dest->svc, 1);
+
 	__ip_vs_dst_cache_reset(dest);
-	__ip_vs_unbind_svc(dest);
+	__ip_vs_svc_put(svc, false);
 	free_percpu(dest->stats.cpustats);
 	kfree(dest);
 }
···
 		    struct ip_vs_dest_user_kern *udest, int add)
 {
 	struct netns_ipvs *ipvs = net_ipvs(svc->net);
+	struct ip_vs_service *old_svc;
 	struct ip_vs_scheduler *sched;
 	int conn_flags;
 
···
 	atomic_set(&dest->conn_flags, conn_flags);
 
 	/* bind the service */
-	if (!dest->svc) {
+	old_svc = rcu_dereference_protected(dest->svc, 1);
+	if (!old_svc) {
 		__ip_vs_bind_svc(dest, svc);
 	} else {
-		if (dest->svc != svc) {
-			__ip_vs_unbind_svc(dest);
+		if (old_svc != svc) {
 			ip_vs_zero_stats(&dest->stats);
 			__ip_vs_bind_svc(dest, svc);
+			__ip_vs_svc_put(old_svc, true);
 		}
 	}
···
 	return 0;
 }
 
-static void ip_vs_dest_wait_readers(struct rcu_head *head)
-{
-	struct ip_vs_dest *dest = container_of(head, struct ip_vs_dest,
-					       rcu_head);
-
-	/* End of grace period after unlinking */
-	clear_bit(IP_VS_DEST_STATE_REMOVING, &dest->state);
-}
-
-
 /*
  *	Delete a destination (must be already unlinked from the service)
  */
···
 	 */
 	ip_vs_rs_unhash(dest);
 
-	if (!cleanup) {
-		set_bit(IP_VS_DEST_STATE_REMOVING, &dest->state);
-		call_rcu(&dest->rcu_head, ip_vs_dest_wait_readers);
-	}
-
 	spin_lock_bh(&ipvs->dest_trash_lock);
 	IP_VS_DBG_BUF(3, "Moving dest %s:%u into trash, dest->refcnt=%d\n",
 		      IP_VS_DBG_ADDR(dest->af, &dest->addr), ntohs(dest->port),
 		      atomic_read(&dest->refcnt));
 	if (list_empty(&ipvs->dest_trash) && !cleanup)
 		mod_timer(&ipvs->dest_trash_timer,
-			  jiffies + IP_VS_DEST_TRASH_PERIOD);
+			  jiffies + (IP_VS_DEST_TRASH_PERIOD >> 1));
 	/* dest lives in trash without reference */
 	list_add(&dest->t_list, &ipvs->dest_trash);
+	dest->idle_start = 0;
 	spin_unlock_bh(&ipvs->dest_trash_lock);
 	ip_vs_dest_put(dest);
 }
···
 	struct net *net = (struct net *) data;
 	struct netns_ipvs *ipvs = net_ipvs(net);
 	struct ip_vs_dest *dest, *next;
+	unsigned long now = jiffies;
 
 	spin_lock(&ipvs->dest_trash_lock);
 	list_for_each_entry_safe(dest, next, &ipvs->dest_trash, t_list) {
-		/* Skip if dest is in grace period */
-		if (test_bit(IP_VS_DEST_STATE_REMOVING, &dest->state))
-			continue;
 		if (atomic_read(&dest->refcnt) > 0)
 			continue;
+		if (dest->idle_start) {
+			if (time_before(now, dest->idle_start +
+					     IP_VS_DEST_TRASH_PERIOD))
+				continue;
+		} else {
+			dest->idle_start = max(1UL, now);
+			continue;
+		}
 		IP_VS_DBG_BUF(3, "Removing destination %u/%s:%u from trash\n",
 			      dest->vfwmark,
-			      IP_VS_DBG_ADDR(dest->svc->af, &dest->addr),
+			      IP_VS_DBG_ADDR(dest->af, &dest->addr),
 			      ntohs(dest->port));
 		list_del(&dest->t_list);
 		ip_vs_dest_free(dest);
 	}
 	if (!list_empty(&ipvs->dest_trash))
 		mod_timer(&ipvs->dest_trash_timer,
-			  jiffies + IP_VS_DEST_TRASH_PERIOD);
+			  jiffies + (IP_VS_DEST_TRASH_PERIOD >> 1));
 	spin_unlock(&ipvs->dest_trash_lock);
 }
···
 	return ret;
 }
 
-static void ip_vs_service_rcu_free(struct rcu_head *head)
-{
-	struct ip_vs_service *svc;
-
-	svc = container_of(head, struct ip_vs_service, rcu_head);
-	ip_vs_service_free(svc);
-}
-
 /*
  *	Delete a service from the service list
  *	- The service must be unlinked, unlocked and not referenced!
···
 	/*
 	 *    Free the service if nobody refers to it
 	 */
-	if (atomic_dec_and_test(&svc->refcnt)) {
-		IP_VS_DBG_BUF(3, "Removing service %u/%s:%u\n",
-			      svc->fwmark,
-			      IP_VS_DBG_ADDR(svc->af, &svc->addr),
-			      ntohs(svc->port));
-		call_rcu(&svc->rcu_head, ip_vs_service_rcu_free);
-	}
+	__ip_vs_svc_put(svc, true);
 
 	/* decrease the module use count */
 	ip_vs_use_count_dec();
+3 -1
net/netfilter/ipvs/ip_vs_est.c
···
 			 struct ip_vs_cpu_stats __percpu *stats)
 {
 	int i;
+	bool add = false;
 
 	for_each_possible_cpu(i) {
 		struct ip_vs_cpu_stats *s = per_cpu_ptr(stats, i);
 		unsigned int start;
 		__u64 inbytes, outbytes;
-		if (i) {
+		if (add) {
 			sum->conns += s->ustats.conns;
 			sum->inpkts += s->ustats.inpkts;
 			sum->outpkts += s->ustats.outpkts;
···
 			sum->inbytes += inbytes;
 			sum->outbytes += outbytes;
 		} else {
+			add = true;
 			sum->conns = s->ustats.conns;
 			sum->inpkts = s->ustats.inpkts;
 			sum->outpkts = s->ustats.outpkts;
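The ip_vs_est.c change above swaps the `if (i)` test for a dedicated flag because the first CPU returned by `for_each_possible_cpu()` need not be CPU 0, so using the CPU index to decide "seed vs. accumulate" is wrong. A minimal user-space sketch of the corrected pattern (the flat array and `sum_stats()` helper are illustrative stand-ins for the kernel's per-cpu accessors):

```c
#include <stdbool.h>

/* Hypothetical per-CPU counter record, standing in for ip_vs_cpu_stats. */
struct cpu_stats {
	unsigned long inpkts;
};

/* Sum counters over a sparse set of possible CPU ids. A bool tracks
 * whether the running sum has been seeded yet, so the first iterated
 * CPU seeds it regardless of its numeric id. */
static unsigned long sum_stats(const struct cpu_stats *per_cpu,
			       const int *possible, int n)
{
	unsigned long inpkts = 0;
	bool add = false;
	int k;

	for (k = 0; k < n; k++) {
		int i = possible[k];

		if (add) {
			inpkts += per_cpu[i].inpkts;
		} else {
			add = true;
			/* seed from the first CPU, whatever its id */
			inpkts = per_cpu[i].inpkts;
		}
	}
	return inpkts;
}
```

With the old `if (i)` logic, a possible-CPU set that starts at CPU 2 would skip the seeding branch and misaccount the first CPU's counters.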
+35 -41
net/netfilter/ipvs/ip_vs_lblc.c
···
 	struct hlist_node	list;
 	int			af;		/* address family */
 	union nf_inet_addr	addr;		/* destination IP address */
-	struct ip_vs_dest __rcu	*dest;		/* real server (cache) */
+	struct ip_vs_dest	*dest;		/* real server (cache) */
 	unsigned long		lastuse;	/* last used time */
 	struct rcu_head		rcu_head;
 };
···
 };
 #endif
 
-static inline void ip_vs_lblc_free(struct ip_vs_lblc_entry *en)
+static void ip_vs_lblc_rcu_free(struct rcu_head *head)
 {
-	struct ip_vs_dest *dest;
+	struct ip_vs_lblc_entry *en = container_of(head,
+						   struct ip_vs_lblc_entry,
+						   rcu_head);
 
-	hlist_del_rcu(&en->list);
-	/*
-	 * We don't kfree dest because it is referred either by its service
-	 * or the trash dest list.
-	 */
-	dest = rcu_dereference_protected(en->dest, 1);
-	ip_vs_dest_put(dest);
-	kfree_rcu(en, rcu_head);
+	ip_vs_dest_put(en->dest);
+	kfree(en);
 }
 
+static inline void ip_vs_lblc_del(struct ip_vs_lblc_entry *en)
+{
+	hlist_del_rcu(&en->list);
+	call_rcu(&en->rcu_head, ip_vs_lblc_rcu_free);
+}
 
 /*
  *	Returns hash value for IPVS LBLC entry
···
 	struct ip_vs_lblc_entry *en;
 
 	en = ip_vs_lblc_get(dest->af, tbl, daddr);
-	if (!en) {
-		en = kmalloc(sizeof(*en), GFP_ATOMIC);
-		if (!en)
-			return NULL;
-
-		en->af = dest->af;
-		ip_vs_addr_copy(dest->af, &en->addr, daddr);
-		en->lastuse = jiffies;
-
-		ip_vs_dest_hold(dest);
-		RCU_INIT_POINTER(en->dest, dest);
-
-		ip_vs_lblc_hash(tbl, en);
-	} else {
-		struct ip_vs_dest *old_dest;
-
-		old_dest = rcu_dereference_protected(en->dest, 1);
-		if (old_dest != dest) {
-			ip_vs_dest_put(old_dest);
-			ip_vs_dest_hold(dest);
-			/* No ordering constraints for refcnt */
-			RCU_INIT_POINTER(en->dest, dest);
-		}
+	if (en) {
+		if (en->dest == dest)
+			return en;
+		ip_vs_lblc_del(en);
 	}
+	en = kmalloc(sizeof(*en), GFP_ATOMIC);
+	if (!en)
+		return NULL;
+
+	en->af = dest->af;
+	ip_vs_addr_copy(dest->af, &en->addr, daddr);
+	en->lastuse = jiffies;
+
+	ip_vs_dest_hold(dest);
+	en->dest = dest;
+
+	ip_vs_lblc_hash(tbl, en);
 
 	return en;
 }
···
 	tbl->dead = 1;
 	for (i=0; i<IP_VS_LBLC_TAB_SIZE; i++) {
 		hlist_for_each_entry_safe(en, next, &tbl->bucket[i], list) {
-			ip_vs_lblc_free(en);
+			ip_vs_lblc_del(en);
 			atomic_dec(&tbl->entries);
 		}
 	}
···
 					sysctl_lblc_expiration(svc)))
 				continue;
 
-			ip_vs_lblc_free(en);
+			ip_vs_lblc_del(en);
 			atomic_dec(&tbl->entries);
 		}
 		spin_unlock(&svc->sched_lock);
···
 			if (time_before(now, en->lastuse + ENTRY_TIMEOUT))
 				continue;
 
-			ip_vs_lblc_free(en);
+			ip_vs_lblc_del(en);
 			atomic_dec(&tbl->entries);
 			goal--;
 		}
···
 			continue;
 
 		doh = ip_vs_dest_conn_overhead(dest);
-		if (loh * atomic_read(&dest->weight) >
-		    doh * atomic_read(&least->weight)) {
+		if ((__s64)loh * atomic_read(&dest->weight) >
+		    (__s64)doh * atomic_read(&least->weight)) {
 			least = dest;
 			loh = doh;
 		}
···
 	 * free up entries from the trash at any time.
 	 */
 
-	dest = rcu_dereference(en->dest);
+	dest = en->dest;
 	if ((dest->flags & IP_VS_DEST_F_AVAILABLE) &&
 	    atomic_read(&dest->weight) > 0 && !is_overloaded(dest, svc))
 		goto out;
···
 {
 	unregister_ip_vs_scheduler(&ip_vs_lblc_scheduler);
 	unregister_pernet_subsys(&ip_vs_lblc_ops);
-	synchronize_rcu();
+	rcu_barrier();
 }
 
+26 -36
net/netfilter/ipvs/ip_vs_lblcr.c
···
  */
 struct ip_vs_dest_set_elem {
 	struct list_head	list;		/* list link */
-	struct ip_vs_dest __rcu *dest;		/* destination server */
+	struct ip_vs_dest	*dest;		/* destination server */
 	struct rcu_head		rcu_head;
 };
···
 
 	if (check) {
 		list_for_each_entry(e, &set->list, list) {
-			struct ip_vs_dest *d;
-
-			d = rcu_dereference_protected(e->dest, 1);
-			if (d == dest)
-				/* already existed */
+			if (e->dest == dest)
 				return;
 		}
 	}
···
 		return;
 
 	ip_vs_dest_hold(dest);
-	RCU_INIT_POINTER(e->dest, dest);
+	e->dest = dest;
 
 	list_add_rcu(&e->list, &set->list);
 	atomic_inc(&set->size);
 
 	set->lastmod = jiffies;
+}
+
+static void ip_vs_lblcr_elem_rcu_free(struct rcu_head *head)
+{
+	struct ip_vs_dest_set_elem *e;
+
+	e = container_of(head, struct ip_vs_dest_set_elem, rcu_head);
+	ip_vs_dest_put(e->dest);
+	kfree(e);
 }
 
 static void
···
 	struct ip_vs_dest_set_elem *e;
 
 	list_for_each_entry(e, &set->list, list) {
-		struct ip_vs_dest *d;
-
-		d = rcu_dereference_protected(e->dest, 1);
-		if (d == dest) {
+		if (e->dest == dest) {
 			/* HIT */
 			atomic_dec(&set->size);
 			set->lastmod = jiffies;
-			ip_vs_dest_put(dest);
 			list_del_rcu(&e->list);
-			kfree_rcu(e, rcu_head);
+			call_rcu(&e->rcu_head, ip_vs_lblcr_elem_rcu_free);
 			break;
 		}
 	}
···
 	struct ip_vs_dest_set_elem *e, *ep;
 
 	list_for_each_entry_safe(e, ep, &set->list, list) {
-		struct ip_vs_dest *d;
-
-		d = rcu_dereference_protected(e->dest, 1);
-		/*
-		 * We don't kfree dest because it is referred either
-		 * by its service or by the trash dest list.
-		 */
-		ip_vs_dest_put(d);
 		list_del_rcu(&e->list);
-		kfree_rcu(e, rcu_head);
+		call_rcu(&e->rcu_head, ip_vs_lblcr_elem_rcu_free);
 	}
 }
···
 	struct ip_vs_dest *dest, *least;
 	int loh, doh;
 
-	if (set == NULL)
-		return NULL;
-
 	/* select the first destination server, whose weight > 0 */
 	list_for_each_entry_rcu(e, &set->list, list) {
-		least = rcu_dereference(e->dest);
+		least = e->dest;
 		if (least->flags & IP_VS_DEST_F_OVERLOAD)
 			continue;
 
···
 	/* find the destination with the weighted least load */
   nextstage:
 	list_for_each_entry_continue_rcu(e, &set->list, list) {
-		dest = rcu_dereference(e->dest);
+		dest = e->dest;
 		if (dest->flags & IP_VS_DEST_F_OVERLOAD)
 			continue;
 
 		doh = ip_vs_dest_conn_overhead(dest);
-		if ((loh * atomic_read(&dest->weight) >
-		     doh * atomic_read(&least->weight))
+		if (((__s64)loh * atomic_read(&dest->weight) >
+		     (__s64)doh * atomic_read(&least->weight))
 		    && (dest->flags & IP_VS_DEST_F_AVAILABLE)) {
 			least = dest;
 			loh = doh;
···
 
 	/* select the first destination server, whose weight > 0 */
 	list_for_each_entry(e, &set->list, list) {
-		most = rcu_dereference_protected(e->dest, 1);
+		most = e->dest;
 		if (atomic_read(&most->weight) > 0) {
 			moh = ip_vs_dest_conn_overhead(most);
 			goto nextstage;
···
 	/* find the destination with the weighted most load */
   nextstage:
 	list_for_each_entry_continue(e, &set->list, list) {
-		dest = rcu_dereference_protected(e->dest, 1);
+		dest = e->dest;
 		doh = ip_vs_dest_conn_overhead(dest);
 		/* moh/mw < doh/dw ==> moh*dw < doh*mw, where mw,dw>0 */
-		if ((moh * atomic_read(&dest->weight) <
-		     doh * atomic_read(&most->weight))
+		if (((__s64)moh * atomic_read(&dest->weight) <
+		     (__s64)doh * atomic_read(&most->weight))
 		    && (atomic_read(&dest->weight) > 0)) {
 			most = dest;
 			moh = doh;
···
 			continue;
 
 		doh = ip_vs_dest_conn_overhead(dest);
-		if (loh * atomic_read(&dest->weight) >
-		    doh * atomic_read(&least->weight)) {
+		if ((__s64)loh * atomic_read(&dest->weight) >
+		    (__s64)doh * atomic_read(&least->weight)) {
 			least = dest;
 			loh = doh;
 		}
···
 {
 	unregister_ip_vs_scheduler(&ip_vs_lblcr_scheduler);
 	unregister_pernet_subsys(&ip_vs_lblcr_ops);
-	synchronize_rcu();
+	rcu_barrier();
 }
 
+4 -4
net/netfilter/ipvs/ip_vs_nq.c
···
 #include <net/ip_vs.h>
 
 
-static inline unsigned int
+static inline int
 ip_vs_nq_dest_overhead(struct ip_vs_dest *dest)
 {
 	/*
···
 		  struct ip_vs_iphdr *iph)
 {
 	struct ip_vs_dest *dest, *least = NULL;
-	unsigned int loh = 0, doh;
+	int loh = 0, doh;
 
 	IP_VS_DBG(6, "%s(): Scheduling...\n", __func__);
 
···
 		}
 
 		if (!least ||
-		    (loh * atomic_read(&dest->weight) >
-		     doh * atomic_read(&least->weight))) {
+		    ((__s64)loh * atomic_read(&dest->weight) >
+		     (__s64)doh * atomic_read(&least->weight))) {
 			least = dest;
 			loh = doh;
 		}
+4 -4
net/netfilter/ipvs/ip_vs_sed.c
···
 #include <net/ip_vs.h>
 
 
-static inline unsigned int
+static inline int
 ip_vs_sed_dest_overhead(struct ip_vs_dest *dest)
 {
 	/*
···
 		  struct ip_vs_iphdr *iph)
 {
 	struct ip_vs_dest *dest, *least;
-	unsigned int loh, doh;
+	int loh, doh;
 
 	IP_VS_DBG(6, "%s(): Scheduling...\n", __func__);
 
···
 		if (dest->flags & IP_VS_DEST_F_OVERLOAD)
 			continue;
 		doh = ip_vs_sed_dest_overhead(dest);
-		if (loh * atomic_read(&dest->weight) >
-		    doh * atomic_read(&least->weight)) {
+		if ((__s64)loh * atomic_read(&dest->weight) >
+		    (__s64)doh * atomic_read(&least->weight)) {
 			least = dest;
 			loh = doh;
 		}
+3 -3
net/netfilter/ipvs/ip_vs_wlc.c
···
 		  struct ip_vs_iphdr *iph)
 {
 	struct ip_vs_dest *dest, *least;
-	unsigned int loh, doh;
+	int loh, doh;
 
 	IP_VS_DBG(6, "ip_vs_wlc_schedule(): Scheduling...\n");
 
···
 		if (dest->flags & IP_VS_DEST_F_OVERLOAD)
 			continue;
 		doh = ip_vs_dest_conn_overhead(dest);
-		if (loh * atomic_read(&dest->weight) >
-		    doh * atomic_read(&least->weight)) {
+		if ((__s64)loh * atomic_read(&dest->weight) >
+		    (__s64)doh * atomic_read(&least->weight)) {
 			least = dest;
 			loh = doh;
 		}
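The lblc, lblcr, nq, sed, and wlc scheduler diffs all apply the same one-line fix: widening the 32-bit `overhead * weight` product to 64 bits before comparing, since a large configured weight can wrap the product negative and make the scheduler pick the wrong server. A small stand-alone sketch (values and the `prefer_dest()` helper are illustrative, not IPVS code) shows the corrected comparison:

```c
#include <stdint.h>

/* Weighted-least-connection style comparison: returns 1 when "dest"
 * should replace the current "least" pick, i.e. when
 *   loh / least_w > doh / dest_w, cross-multiplied to avoid division.
 * The int64_t casts widen the multiply before it happens, mirroring the
 * (__s64) casts in the IPVS scheduler fixes. */
static int prefer_dest(int loh, int least_w, int doh, int dest_w)
{
	return (int64_t)loh * dest_w > (int64_t)doh * least_w;
}
```

With plain `int` math, an overhead of 100000 against a weight of 65535 overflows to a negative 32-bit value, so the comparison silently fails; the widened multiply evaluates it correctly.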
+7 -5
net/netfilter/nf_synproxy_core.c
···
 int synproxy_net_id;
 EXPORT_SYMBOL_GPL(synproxy_net_id);
 
-void
+bool
 synproxy_parse_options(const struct sk_buff *skb, unsigned int doff,
 		       const struct tcphdr *th, struct synproxy_options *opts)
 {
···
 	u8 buf[40], *ptr;
 
 	ptr = skb_header_pointer(skb, doff + sizeof(*th), length, buf);
-	BUG_ON(ptr == NULL);
+	if (ptr == NULL)
+		return false;
 
 	opts->options = 0;
 	while (length > 0) {
···
 
 		switch (opcode) {
 		case TCPOPT_EOL:
-			return;
+			return true;
 		case TCPOPT_NOP:
 			length--;
 			continue;
 		default:
 			opsize = *ptr++;
 			if (opsize < 2)
-				return;
+				return true;
 			if (opsize > length)
-				return;
+				return true;
 
 			switch (opcode) {
 			case TCPOPT_MSS:
···
 			length -= opsize;
 		}
 	}
+	return true;
 }
 EXPORT_SYMBOL_GPL(synproxy_parse_options);
 
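The synproxy change above turns a `BUG_ON()` into a `bool` return, so a packet whose TCP header cannot be pulled is dropped by the caller instead of crashing the kernel. The option walk itself can be sketched in user space; `parse_mss()`, its buffer layout, and the extra length check before reading `opsize` are illustrative, not the kernel function:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>

#define TCPOPT_EOL	0
#define TCPOPT_NOP	1
#define TCPOPT_MSS	2

/* Walk a TCP options block, extracting the MSS value if present.
 * Returns false only if the buffer itself is unusable (here never,
 * since the caller hands us the bytes directly); malformed opsize
 * values simply end the walk, matching the kernel's "return true"
 * on short or oversized options. */
static bool parse_mss(const uint8_t *opts, size_t length, uint16_t *mss)
{
	while (length > 0) {
		uint8_t opcode = *opts++;
		uint8_t opsize;

		switch (opcode) {
		case TCPOPT_EOL:
			return true;
		case TCPOPT_NOP:
			length--;
			continue;
		default:
			if (length < 2)
				return true;
			opsize = *opts++;
			if (opsize < 2 || opsize > length)
				return true;
			if (opcode == TCPOPT_MSS && opsize == 4)
				*mss = (uint16_t)(opts[0] << 8 | opts[1]);
			opts += opsize - 2;
			length -= opsize;
		}
	}
	return true;
}
```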
+62 -38
net/sched/sch_fq.c
···
 
 
 /* remove one skb from head of flow queue */
-static struct sk_buff *fq_dequeue_head(struct fq_flow *flow)
+static struct sk_buff *fq_dequeue_head(struct Qdisc *sch, struct fq_flow *flow)
 {
 	struct sk_buff *skb = flow->head;
···
 		flow->head = skb->next;
 		skb->next = NULL;
 		flow->qlen--;
+		sch->qstats.backlog -= qdisc_pkt_len(skb);
+		sch->q.qlen--;
 	}
 	return skb;
 }
···
 	struct fq_flow_head *head;
 	struct sk_buff *skb;
 	struct fq_flow *f;
+	u32 rate;
 
-	skb = fq_dequeue_head(&q->internal);
+	skb = fq_dequeue_head(sch, &q->internal);
 	if (skb)
 		goto out;
 	fq_check_throttled(q, now);
···
 		goto begin;
 	}
 
-	skb = fq_dequeue_head(f);
+	skb = fq_dequeue_head(sch, f);
 	if (!skb) {
 		head->first = f->next;
 		/* force a pass through old_flows to prevent starvation */
···
 	f->time_next_packet = now;
 	f->credit -= qdisc_pkt_len(skb);
 
-	if (f->credit <= 0 &&
-	    q->rate_enable &&
-	    skb->sk && skb->sk->sk_state != TCP_TIME_WAIT) {
-		u32 rate = skb->sk->sk_pacing_rate ?: q->flow_default_rate;
+	if (f->credit > 0 || !q->rate_enable)
+		goto out;
+
+	if (skb->sk && skb->sk->sk_state != TCP_TIME_WAIT) {
+		rate = skb->sk->sk_pacing_rate ?: q->flow_default_rate;
 
-		rate = min(rate, q->flow_max_rate);
-		if (rate) {
-			u64 len = (u64)qdisc_pkt_len(skb) * NSEC_PER_SEC;
+		rate = min(rate, q->flow_max_rate);
+	} else {
+		rate = q->flow_max_rate;
+		if (rate == ~0U)
+			goto out;
+	}
+	if (rate) {
+		u32 plen = max(qdisc_pkt_len(skb), q->quantum);
+		u64 len = (u64)plen * NSEC_PER_SEC;
 
-			do_div(len, rate);
-			/* Since socket rate can change later,
-			 * clamp the delay to 125 ms.
-			 * TODO: maybe segment the too big skb, as in commit
-			 * e43ac79a4bc ("sch_tbf: segment too big GSO packets")
-			 */
-			if (unlikely(len > 125 * NSEC_PER_MSEC)) {
-				len = 125 * NSEC_PER_MSEC;
-				q->stat_pkts_too_long++;
-			}
-
-			f->time_next_packet = now + len;
+		do_div(len, rate);
+		/* Since socket rate can change later,
+		 * clamp the delay to 125 ms.
+		 * TODO: maybe segment the too big skb, as in commit
+		 * e43ac79a4bc ("sch_tbf: segment too big GSO packets")
+		 */
+		if (unlikely(len > 125 * NSEC_PER_MSEC)) {
+			len = 125 * NSEC_PER_MSEC;
+			q->stat_pkts_too_long++;
 		}
+
+		f->time_next_packet = now + len;
 	}
 out:
-	sch->qstats.backlog -= qdisc_pkt_len(skb);
 	qdisc_bstats_update(sch, skb);
-	sch->q.qlen--;
 	qdisc_unthrottled(sch);
 	return skb;
 }
 
 static void fq_reset(struct Qdisc *sch)
 {
+	struct fq_sched_data *q = qdisc_priv(sch);
+	struct rb_root *root;
 	struct sk_buff *skb;
+	struct rb_node *p;
+	struct fq_flow *f;
+	unsigned int idx;
 
-	while ((skb = fq_dequeue(sch)) != NULL)
+	while ((skb = fq_dequeue_head(sch, &q->internal)) != NULL)
 		kfree_skb(skb);
+
+	if (!q->fq_root)
+		return;
+
+	for (idx = 0; idx < (1U << q->fq_trees_log); idx++) {
+		root = &q->fq_root[idx];
+		while ((p = rb_first(root)) != NULL) {
+			f = container_of(p, struct fq_flow, fq_node);
+			rb_erase(p, root);
+
+			while ((skb = fq_dequeue_head(sch, f)) != NULL)
+				kfree_skb(skb);
+
+			kmem_cache_free(fq_flow_cachep, f);
+		}
+	}
+	q->new_flows.first	= NULL;
+	q->old_flows.first	= NULL;
+	q->delayed		= RB_ROOT;
+	q->flows		= 0;
+	q->inactive_flows	= 0;
+	q->throttled_flows	= 0;
 }
 
 static void fq_rehash(struct fq_sched_data *q,
···
 	while (sch->q.qlen > sch->limit) {
 		struct sk_buff *skb = fq_dequeue(sch);
 
+		if (!skb)
+			break;
 		kfree_skb(skb);
 		drop_count++;
 	}
···
 static void fq_destroy(struct Qdisc *sch)
 {
 	struct fq_sched_data *q = qdisc_priv(sch);
-	struct rb_root *root;
-	struct rb_node *p;
-	unsigned int idx;
 
-	if (q->fq_root) {
-		for (idx = 0; idx < (1U << q->fq_trees_log); idx++) {
-			root = &q->fq_root[idx];
-			while ((p = rb_first(root)) != NULL) {
-				rb_erase(p, root);
-				kmem_cache_free(fq_flow_cachep,
-						container_of(p, struct fq_flow, fq_node));
-			}
-		}
-		kfree(q->fq_root);
-	}
+	fq_reset(sch);
+	kfree(q->fq_root);
 	qdisc_watchdog_cancel(&q->watchdog);
 }
 
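The pacing arithmetic the sch_fq diff rearranges can be sketched in isolation: the inter-packet delay is `len * NSEC_PER_SEC / rate`, with the packet length floored at the flow quantum (so tiny packets are not paced absurdly fast) and the result clamped to 125 ms because the socket's rate can change later. This is a hedged user-space rendition of that arithmetic, not the kernel helpers (`do_div`, `qdisc_pkt_len`):

```c
#include <stdint.h>

#define NSEC_PER_SEC	1000000000ULL
#define NSEC_PER_MSEC	1000000ULL

/* Compute the pacing delay in nanoseconds for one packet:
 * delay = max(pkt_len, quantum) * NSEC_PER_SEC / rate, clamped to 125 ms. */
static uint64_t fq_pacing_delay_ns(uint32_t pkt_len, uint32_t quantum,
				   uint32_t rate_bytes_per_sec)
{
	uint32_t plen = pkt_len > quantum ? pkt_len : quantum;
	uint64_t len = (uint64_t)plen * NSEC_PER_SEC / rate_bytes_per_sec;

	if (len > 125 * NSEC_PER_MSEC)
		len = 125 * NSEC_PER_MSEC;	/* rate may change; cap delay */
	return len;
}
```

For example, a 1500-byte packet with a 3028-byte quantum at 1 MB/s is paced on the quantum, giving about 3 ms between transmissions, while a huge GSO packet at a tiny rate hits the 125 ms clamp (and, in the kernel, bumps `stat_pkts_too_long`).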