Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Pull networking updates from David Miller:

1) Fix flexcan build on big endian, from Arnd Bergmann

2) Correctly attach cpsw to GPIO bitbang MDIO driver, from Stefan Roese

3) udp_add_offload has to use GFP_ATOMIC since it can be invoked from
non-sleepable contexts. From Or Gerlitz

4) vxlan_gro_receive() does not iterate over all possible flows
properly, fix also from Or Gerlitz

5) CAN core doesn't use a proper SKB destructor when it hooks up
sockets to SKBs. Fix from Oliver Hartkopp

6) ip_tunnel_xmit() can use an uninitialized route pointer, fix from
Eric Dumazet

7) Fix address family assignment in IPVS, from Michal Kubecek

8) Fix ath9k build on ARM, from Sujith Manoharan

9) Make sure fail_over_mac only applies for the correct bonding modes,
from Ding Tianhong

10) The udp offload code doesn't use RCU correctly, from Shlomo Pongratz

11) Handle gigabit features properly in generic PHY code, from Florian
Fainelli

12) Don't blindly invoke link operations in
rtnl_link_get_slave_info_data_size, they are optional. Fix from
Fernando Luis Vazquez Cao

13) Add USB IDs for Netgear Aircard 340U, from Bjørn Mork

14) Handle netlink packet padding properly in openvswitch, from Thomas
Graf

15) Fix oops when deleting chains in nf_tables, from Patrick McHardy

16) Fix RX stalls in xen-netback driver, from Zoltan Kiss

17) Fix deadlock in mac80211 stack, from Emmanuel Grumbach

18) inet_nlmsg_size() forgets to consider ifa_cacheinfo, fix from Geert
Uytterhoeven

19) tg3_change_mtu() can deadlock, fix from Nithin Sujir

20) Fix regression in setting SCTP local source addresses on accepted
sockets, caused by some generic ipv6 socket changes. Fix from
Matija Glavinic Pecotic

21) IPPROTO_* must be pure defines, otherwise module aliases don't get
constructed properly. Fix from Jan Moskyto

22) IPV6 netconsole setup doesn't work properly unless an explicit
source address is specified, fix from Sabrina Dubroca

23) Use __GFP_NORETRY for high order skb page allocations in
sock_alloc_send_pskb and skb_page_frag_refill. From Eric Dumazet

24) Fix a regression added in netconsole over bridging, from Cong Wang

25) TCP uses an artificial offset of 1ms for SRTT, but this doesn't jibe
well with TCP pacing, which needs the SRTT to be accurate. Fix from
Eric Dumazet

26) Several cases of missing header file includes from Rashika Kheria

27) Add ZTE MF667 device ID to qmi_wwan driver, from Raymond Wanyoike

28) TCP Small Queues doesn't handle nonagle properly in some corner
cases, fix from Eric Dumazet

29) Remove extraneous read_unlock in bond_enslave, whoops. From Ding
Tianhong

30) Fix 9p trans_virtio handling of vmalloc buffers, from Richard Yao

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (136 commits)
6lowpan: fix lockdep splats
alx: add missing stats_lock spinlock init
9p/trans_virtio.c: Fix broken zero-copy on vmalloc() buffers
bonding: remove unwanted bond lock for enslave processing
USB2NET : SR9800 : One chip USB2.0 USB2NET SR9800 Device Driver Support
tcp: tsq: fix nonagle handling
bridge: Prevent possible race condition in br_fdb_change_mac_address
bridge: Properly check if local fdb entry can be deleted when deleting vlan
bridge: Properly check if local fdb entry can be deleted in br_fdb_delete_by_port
bridge: Properly check if local fdb entry can be deleted in br_fdb_change_mac_address
bridge: Fix the way to check if a local fdb entry can be deleted
bridge: Change local fdb entries whenever mac address of bridge device changes
bridge: Fix the way to find old local fdb entries in br_fdb_change_mac_address
bridge: Fix the way to insert new local fdb entries in br_fdb_changeaddr
bridge: Fix the way to find old local fdb entries in br_fdb_changeaddr
tcp: correct code comment stating 3 min timeout for FIN_WAIT2, we only do 1 min
net: vxge: Remove unused device pointer
net: qmi_wwan: add ZTE MF667
3c59x: Remove unused pointer in vortex_eisa_cleanup()
net: fix 'ip rule' iif/oif device rename
...

+2459 -804
+3 -2
Documentation/devicetree/bindings/net/allwinner,sun4i-emac.txt
···
 * Allwinner EMAC ethernet controller
 
 Required properties:
-- compatible: should be "allwinner,sun4i-emac".
+- compatible: should be "allwinner,sun4i-a10-emac" (Deprecated:
+  "allwinner,sun4i-emac")
 - reg: address and length of the register set for the device.
 - interrupts: interrupt for the device
 - phy: A phandle to a phy node defining the PHY address (as the reg
···
 Example:
 
 emac: ethernet@01c0b000 {
-	compatible = "allwinner,sun4i-emac";
+	compatible = "allwinner,sun4i-a10-emac";
 	reg = <0x01c0b000 0x1000>;
 	interrupts = <55>;
 	clocks = <&ahb_gates 17>;
+3 -2
Documentation/devicetree/bindings/net/allwinner,sun4i-mdio.txt
···
 * Allwinner A10 MDIO Ethernet Controller interface
 
 Required properties:
-- compatible: should be "allwinner,sun4i-mdio".
+- compatible: should be "allwinner,sun4i-a10-mdio"
+  (Deprecated: "allwinner,sun4i-mdio").
 - reg: address and length of the register set for the device.
 
 Optional properties:
···
 Example at the SoC level:
 mdio@01c0b080 {
-	compatible = "allwinner,sun4i-mdio";
+	compatible = "allwinner,sun4i-a10-mdio";
 	reg = <0x01c0b080 0x14>;
 	#address-cells = <1>;
 	#size-cells = <0>;
+8 -3
Documentation/ptp/testptp.c
···
 	" -f val    adjust the ptp clock frequency by 'val' ppb\n"
 	" -g         get the ptp clock time\n"
 	" -h         prints this message\n"
+	" -i val     index for event/trigger\n"
 	" -k val     measure the time offset between system and phc clock\n"
 	"            for 'val' times (Maximum 25)\n"
 	" -p val     enable output with a period of 'val' nanoseconds\n"
···
 	int capabilities = 0;
 	int extts = 0;
 	int gettime = 0;
+	int index = 0;
 	int oneshot = 0;
 	int pct_offset = 0;
 	int n_samples = 0;
···
 	progname = strrchr(argv[0], '/');
 	progname = progname ? 1+progname : argv[0];
-	while (EOF != (c = getopt(argc, argv, "a:A:cd:e:f:ghk:p:P:sSt:v"))) {
+	while (EOF != (c = getopt(argc, argv, "a:A:cd:e:f:ghi:k:p:P:sSt:v"))) {
 		switch (c) {
 		case 'a':
 			oneshot = atoi(optarg);
···
 			break;
 		case 'g':
 			gettime = 1;
+			break;
+		case 'i':
+			index = atoi(optarg);
 			break;
 		case 'k':
 			pct_offset = 1;
···
 	if (extts) {
 		memset(&extts_request, 0, sizeof(extts_request));
-		extts_request.index = 0;
+		extts_request.index = index;
 		extts_request.flags = PTP_ENABLE_FEATURE;
 		if (ioctl(fd, PTP_EXTTS_REQUEST, &extts_request)) {
 			perror("PTP_EXTTS_REQUEST");
···
 			return -1;
 		}
 		memset(&perout_request, 0, sizeof(perout_request));
-		perout_request.index = 0;
+		perout_request.index = index;
 		perout_request.start.sec = ts.tv_sec + 2;
 		perout_request.start.nsec = 0;
 		perout_request.period.sec = 0;
+1 -1
MAINTAINERS
···
 F:	drivers/net/ethernet/rdc/r6040.c
 
 RDS - RELIABLE DATAGRAM SOCKETS
-M:	Venkat Venkatsubra <venkat.x.venkatsubra@oracle.com>
+M:	Chien Yen <chien.yen@oracle.com>
 L:	rds-devel@oss.oracle.com (moderated for non-subscribers)
 S:	Supported
 F:	net/rds/
+2 -2
arch/arm/boot/dts/sun4i-a10.dtsi
···
 		ranges;
 
 		emac: ethernet@01c0b000 {
-			compatible = "allwinner,sun4i-emac";
+			compatible = "allwinner,sun4i-a10-emac";
 			reg = <0x01c0b000 0x1000>;
 			interrupts = <55>;
 			clocks = <&ahb_gates 17>;
···
 		};
 
 		mdio@01c0b080 {
-			compatible = "allwinner,sun4i-mdio";
+			compatible = "allwinner,sun4i-a10-mdio";
 			reg = <0x01c0b080 0x14>;
 			status = "disabled";
 			#address-cells = <1>;
+2 -2
arch/arm/boot/dts/sun5i-a10s.dtsi
···
 		ranges;
 
 		emac: ethernet@01c0b000 {
-			compatible = "allwinner,sun4i-emac";
+			compatible = "allwinner,sun4i-a10-emac";
 			reg = <0x01c0b000 0x1000>;
 			interrupts = <55>;
 			clocks = <&ahb_gates 17>;
···
 		};
 
 		mdio@01c0b080 {
-			compatible = "allwinner,sun4i-mdio";
+			compatible = "allwinner,sun4i-a10-mdio";
 			reg = <0x01c0b080 0x14>;
 			status = "disabled";
 			#address-cells = <1>;
+2 -2
arch/arm/boot/dts/sun7i-a20.dtsi
···
 		ranges;
 
 		emac: ethernet@01c0b000 {
-			compatible = "allwinner,sun4i-emac";
+			compatible = "allwinner,sun4i-a10-emac";
 			reg = <0x01c0b000 0x1000>;
 			interrupts = <0 55 4>;
 			clocks = <&ahb_gates 17>;
···
 		};
 
 		mdio@01c0b080 {
-			compatible = "allwinner,sun4i-mdio";
+			compatible = "allwinner,sun4i-a10-mdio";
 			reg = <0x01c0b080 0x14>;
 			status = "disabled";
 			#address-cells = <1>;
+1 -1
drivers/isdn/hisax/q931.c
···
 		dp += sprintf(dp, "   octet 3 ");
 		dp += prbits(dp, *p, 8, 8);
 		*dp++ = '\n';
-		if (!(*p++ & 80)) {
+		if (!(*p++ & 0x80)) {
 			dp += sprintf(dp, "   octet 4 ");
 			dp += prbits(dp, *p++, 8, 8);
 			*dp++ = '\n';
+17 -9
drivers/net/bonding/bond_main.c
···
 	if (slave_ops->ndo_set_mac_address == NULL) {
 		if (!bond_has_slaves(bond)) {
-			pr_warning("%s: Warning: The first slave device specified does not support setting the MAC address. Setting fail_over_mac to active.",
-				   bond_dev->name);
-			bond->params.fail_over_mac = BOND_FOM_ACTIVE;
+			pr_warn("%s: Warning: The first slave device specified does not support setting the MAC address.\n",
+				bond_dev->name);
+			if (bond->params.mode == BOND_MODE_ACTIVEBACKUP) {
+				bond->params.fail_over_mac = BOND_FOM_ACTIVE;
+				pr_warn("%s: Setting fail_over_mac to active for active-backup mode.\n",
+					bond_dev->name);
+			}
 		} else if (bond->params.fail_over_mac != BOND_FOM_ACTIVE) {
 			pr_err("%s: Error: The slave device specified does not support setting the MAC address, but fail_over_mac is not set to active.\n",
 			       bond_dev->name);
···
 	 */
 	memcpy(new_slave->perm_hwaddr, slave_dev->dev_addr, ETH_ALEN);
 
-	if (!bond->params.fail_over_mac) {
+	if (!bond->params.fail_over_mac ||
+	    bond->params.mode != BOND_MODE_ACTIVEBACKUP) {
 		/*
 		 * Set slave to master's mac address.  The application already
 		 * set the master's mac address to that of the first slave
···
 	slave_dev->npinfo = bond->dev->npinfo;
 	if (slave_dev->npinfo) {
 		if (slave_enable_netpoll(new_slave)) {
-			read_unlock(&bond->lock);
 			pr_info("Error, %s: master_dev is using netpoll, "
				"but new slave device does not support netpoll.\n",
				bond_dev->name);
···
 	dev_close(slave_dev);
 
 err_restore_mac:
-	if (!bond->params.fail_over_mac) {
+	if (!bond->params.fail_over_mac ||
+	    bond->params.mode != BOND_MODE_ACTIVEBACKUP) {
 		/* XXX TODO - fom follow mode needs to change master's
 		 * MAC if this slave's MAC is in use by the bond, or at
 		 * least print a warning.
···
 
 	bond->current_arp_slave = NULL;
 
-	if (!all && !bond->params.fail_over_mac) {
+	if (!all && (!bond->params.fail_over_mac ||
+		     bond->params.mode != BOND_MODE_ACTIVEBACKUP)) {
 		if (ether_addr_equal_64bits(bond_dev->dev_addr, slave->perm_hwaddr) &&
 		    bond_has_slaves(bond))
 			pr_warn("%s: Warning: the permanent HWaddr of %s - %pM - is still in use by %s. Set the HWaddr of %s to a different address to avoid conflicts.\n",
···
 	/* close slave before restoring its mac address */
 	dev_close(slave_dev);
 
-	if (bond->params.fail_over_mac != BOND_FOM_ACTIVE) {
+	if (bond->params.fail_over_mac != BOND_FOM_ACTIVE ||
+	    bond->params.mode != BOND_MODE_ACTIVEBACKUP) {
 		/* restore original ("permanent") mac address */
 		memcpy(addr.sa_data, slave->perm_hwaddr, ETH_ALEN);
 		addr.sa_family = slave_dev->type;
···
 	/* If fail_over_mac is enabled, do nothing and return success.
 	 * Returning an error causes ifenslave to fail.
 	 */
-	if (bond->params.fail_over_mac)
+	if (bond->params.fail_over_mac &&
+	    bond->params.mode == BOND_MODE_ACTIVEBACKUP)
 		return 0;
 
 	if (!is_valid_ether_addr(sa->sa_data))
+1 -1
drivers/net/can/Kconfig
···
 
 config CAN_FLEXCAN
 	tristate "Support for Freescale FLEXCAN based chips"
-	depends on (ARM && CPU_LITTLE_ENDIAN) || PPC
+	depends on ARM || PPC
 	---help---
 	  Say Y here if you want to support for Freescale FlexCAN.
 
+3 -12
drivers/net/can/dev.c
···
 	}
 
 	if (!priv->echo_skb[idx]) {
-		struct sock *srcsk = skb->sk;
 
-		if (atomic_read(&skb->users) != 1) {
-			struct sk_buff *old_skb = skb;
-
-			skb = skb_clone(old_skb, GFP_ATOMIC);
-			kfree_skb(old_skb);
-			if (!skb)
-				return;
-		} else
-			skb_orphan(skb);
-
-		skb->sk = srcsk;
+		skb = can_create_echo_skb(skb);
+		if (!skb)
+			return;
 
 		/* make settings for echo to reduce code in irq context */
 		skb->protocol = htons(ETH_P_CAN);
+5 -2
drivers/net/can/flexcan.c
···
 };
 
 /*
- * Abstract off the read/write for arm versus ppc.
+ * Abstract off the read/write for arm versus ppc. This
+ * assumes that PPC uses big-endian registers and everything
+ * else uses little-endian registers, independent of CPU
+ * endianess.
  */
-#if defined(__BIG_ENDIAN)
+#if defined(CONFIG_PPC)
 static inline u32 flexcan_read(void __iomem *addr)
 {
 	return in_be32(addr);
+5 -15
drivers/net/can/janz-ican3.c
···
 #include <linux/netdevice.h>
 #include <linux/can.h>
 #include <linux/can/dev.h>
+#include <linux/can/skb.h>
 #include <linux/can/error.h>
 
 #include <linux/mfd/janz.h>
···
  */
 static void ican3_put_echo_skb(struct ican3_dev *mod, struct sk_buff *skb)
 {
-	struct sock *srcsk = skb->sk;
-
-	if (atomic_read(&skb->users) != 1) {
-		struct sk_buff *old_skb = skb;
-
-		skb = skb_clone(old_skb, GFP_ATOMIC);
-		kfree_skb(old_skb);
-		if (!skb)
-			return;
-	} else {
-		skb_orphan(skb);
-	}
-
-	skb->sk = srcsk;
+	skb = can_create_echo_skb(skb);
+	if (!skb)
+		return;
 
 	/* save this skb for tx interrupt echo handling */
 	skb_queue_tail(&mod->echoq, skb);
···
 
 	/* process all communication messages */
 	while (true) {
-		struct ican3_msg msg;
+		struct ican3_msg uninitialized_var(msg);
 		ret = ican3_recv_msg(mod, &msg);
 		if (ret)
 			break;
+4 -5
drivers/net/can/vcan.c
···
 #include <linux/if_ether.h>
 #include <linux/can.h>
 #include <linux/can/dev.h>
+#include <linux/can/skb.h>
 #include <linux/slab.h>
 #include <net/rtnetlink.h>
 
···
 			stats->rx_packets++;
 			stats->rx_bytes += cfd->len;
 		}
-		kfree_skb(skb);
+		consume_skb(skb);
 		return NETDEV_TX_OK;
 	}
 
 	/* perform standard echo handling for CAN network interfaces */
 
 	if (loop) {
-		struct sock *srcsk = skb->sk;
 
-		skb = skb_share_check(skb, GFP_ATOMIC);
+		skb = can_create_echo_skb(skb);
 		if (!skb)
 			return NETDEV_TX_OK;
 
 		/* receive with packet counting */
-		skb->sk = srcsk;
 		vcan_rx(skb, dev);
 	} else {
 		/* no looped packets => no counting */
-		kfree_skb(skb);
+		consume_skb(skb);
 	}
 	return NETDEV_TX_OK;
 }
-2
drivers/net/ethernet/3com/3c59x.c
···
 
 static void __exit vortex_eisa_cleanup(void)
 {
-	struct vortex_private *vp;
 	void __iomem *ioaddr;
 
 #ifdef CONFIG_EISA
···
 #endif
 
 	if (compaq_net_device) {
-		vp = netdev_priv(compaq_net_device);
 		ioaddr = ioport_map(compaq_net_device->base_addr,
 				    VORTEX_TOTAL_SIZE);
 
+3
drivers/net/ethernet/allwinner/sun4i-emac.c
···
 }
 
 static const struct of_device_id emac_of_match[] = {
+	{.compatible = "allwinner,sun4i-a10-emac",},
+
+	/* Deprecated */
 	{.compatible = "allwinner,sun4i-emac",},
 	{},
 };
+1
drivers/net/ethernet/atheros/alx/main.c
···
 	alx = netdev_priv(netdev);
 	spin_lock_init(&alx->hw.mdio_lock);
 	spin_lock_init(&alx->irq_lock);
+	spin_lock_init(&alx->stats_lock);
 	alx->dev = netdev;
 	alx->hw.pdev = pdev;
 	alx->msg_enable = NETIF_MSG_LINK | NETIF_MSG_HW | NETIF_MSG_IFUP |
+1 -1
drivers/net/ethernet/broadcom/bnx2.c
···
 
 static int disable_msi = 0;
 
-module_param(disable_msi, int, 0);
+module_param(disable_msi, int, S_IRUGO);
 MODULE_PARM_DESC(disable_msi, "Disable Message Signaled Interrupt (MSI)");
 
 typedef enum {
+1 -1
drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
···
 	else /* CHIP_IS_E1X */
 		start_params->network_cos_mode = FW_WRR;
 
-	start_params->gre_tunnel_mode = IPGRE_TUNNEL;
+	start_params->gre_tunnel_mode = L2GRE_TUNNEL;
 	start_params->gre_tunnel_rss = GRE_INNER_HEADERS_RSS;
 
 	return bnx2x_func_state_change(bp, &func_params);
+6 -6
drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
···
 MODULE_FIRMWARE(FW_FILE_NAME_E2);
 
 int bnx2x_num_queues;
-module_param_named(num_queues, bnx2x_num_queues, int, 0);
+module_param_named(num_queues, bnx2x_num_queues, int, S_IRUGO);
 MODULE_PARM_DESC(num_queues,
 		 " Set number of queues (default is as a number of CPUs)");
 
 static int disable_tpa;
-module_param(disable_tpa, int, 0);
+module_param(disable_tpa, int, S_IRUGO);
 MODULE_PARM_DESC(disable_tpa, " Disable the TPA (LRO) feature");
 
 static int int_mode;
-module_param(int_mode, int, 0);
+module_param(int_mode, int, S_IRUGO);
 MODULE_PARM_DESC(int_mode, " Force interrupt mode other than MSI-X "
 		 "(1 INT#x; 2 MSI)");
 
 static int dropless_fc;
-module_param(dropless_fc, int, 0);
+module_param(dropless_fc, int, S_IRUGO);
 MODULE_PARM_DESC(dropless_fc, " Pause on exhausted host ring");
 
 static int mrrs = -1;
-module_param(mrrs, int, 0);
+module_param(mrrs, int, S_IRUGO);
 MODULE_PARM_DESC(mrrs, " Force Max Read Req Size (0..3) (for debug)");
 
 static int debug;
-module_param(debug, int, 0);
+module_param(debug, int, S_IRUGO);
 MODULE_PARM_DESC(debug, " Default debug msglevel");
 
 struct workqueue_struct *bnx2x_wq;
+3 -3
drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
···
 	if (vf->cfg_flags & VF_CFG_INT_SIMD)
 		val |= IGU_VF_CONF_SINGLE_ISR_EN;
 	val &= ~IGU_VF_CONF_PARENT_MASK;
-	val |= BP_FUNC(bp) << IGU_VF_CONF_PARENT_SHIFT;	/* parent PF */
+	val |= (BP_ABS_FUNC(bp) >> 1) << IGU_VF_CONF_PARENT_SHIFT;
 	REG_WR(bp, IGU_REG_VF_CONFIGURATION, val);
 
 	DP(BNX2X_MSG_IOV,
-	   "value in IGU_REG_VF_CONFIGURATION of vf %d after write %x\n",
-	   vf->abs_vfid, REG_RD(bp, IGU_REG_VF_CONFIGURATION));
+	   "value in IGU_REG_VF_CONFIGURATION of vf %d after write is 0x%08x\n",
+	   vf->abs_vfid, val);
 
 	bnx2x_pretend_func(bp, BP_ABS_FUNC(bp));
+9 -8
drivers/net/ethernet/broadcom/tg3.c
···
 
 	tg3_writephy(tp, MII_CTRL1000, phy9_orig);
 
-	if (!tg3_readphy(tp, MII_TG3_EXT_CTRL, &reg32)) {
-		reg32 &= ~0x3000;
-		tg3_writephy(tp, MII_TG3_EXT_CTRL, reg32);
-	} else if (!err)
-		err = -EBUSY;
+	err = tg3_readphy(tp, MII_TG3_EXT_CTRL, &reg32);
+	if (err)
+		return err;
 
-	return err;
+	reg32 &= ~0x3000;
+	tg3_writephy(tp, MII_TG3_EXT_CTRL, reg32);
+
+	return 0;
 }
 
 static void tg3_carrier_off(struct tg3 *tp)
···
 
 	tg3_netif_stop(tp);
 
+	tg3_set_mtu(dev, tp, new_mtu);
+
 	tg3_full_lock(tp, 1);
 
 	tg3_halt(tp, RESET_KIND_SHUTDOWN, 1);
-
-	tg3_set_mtu(dev, tp, new_mtu);
 
 	/* Reset PHY, otherwise the read DMA engine will be in a mode that
 	 * breaks all requests to 256 bytes.
+136 -2
drivers/net/ethernet/ethoc.c
···
 
 #include <linux/dma-mapping.h>
 #include <linux/etherdevice.h>
+#include <linux/clk.h>
 #include <linux/crc32.h>
 #include <linux/interrupt.h>
 #include <linux/io.h>
···
 #define	ETH_HASH0	0x48
 #define	ETH_HASH1	0x4c
 #define	ETH_TXCTRL	0x50
+#define	ETH_END		0x54
 
 /* mode register */
 #define	MODER_RXEN	(1 << 0)	/* receive enable */
···
  * @membase:	pointer to buffer memory region
  * @dma_alloc:	dma allocated buffer size
  * @io_region_size:	I/O memory region size
+ * @num_bd:	number of buffer descriptors
  * @num_tx:	number of send buffers
  * @cur_tx:	last send buffer written
  * @dty_tx:	last buffer actually sent
···
 	int dma_alloc;
 	resource_size_t io_region_size;
 
+	unsigned int num_bd;
 	unsigned int num_tx;
 	unsigned int cur_tx;
 	unsigned int dty_tx;
···
 
 	struct phy_device *phy;
 	struct mii_bus *mdio;
+	struct clk *clk;
 	s8 phy_id;
 };
···
 	}
 
 	priv->phy = phy;
+	phy->advertising &= ~(ADVERTISED_1000baseT_Full |
+			      ADVERTISED_1000baseT_Half);
+	phy->supported &= ~(SUPPORTED_1000baseT_Full |
+			    SUPPORTED_1000baseT_Half);
+
 	return 0;
 }
···
 	return NETDEV_TX_OK;
 }
 
+static int ethoc_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
+{
+	struct ethoc *priv = netdev_priv(dev);
+	struct phy_device *phydev = priv->phy;
+
+	if (!phydev)
+		return -EOPNOTSUPP;
+
+	return phy_ethtool_gset(phydev, cmd);
+}
+
+static int ethoc_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
+{
+	struct ethoc *priv = netdev_priv(dev);
+	struct phy_device *phydev = priv->phy;
+
+	if (!phydev)
+		return -EOPNOTSUPP;
+
+	return phy_ethtool_sset(phydev, cmd);
+}
+
+static int ethoc_get_regs_len(struct net_device *netdev)
+{
+	return ETH_END;
+}
+
+static void ethoc_get_regs(struct net_device *dev, struct ethtool_regs *regs,
+			   void *p)
+{
+	struct ethoc *priv = netdev_priv(dev);
+	u32 *regs_buff = p;
+	unsigned i;
+
+	regs->version = 0;
+	for (i = 0; i < ETH_END / sizeof(u32); ++i)
+		regs_buff[i] = ethoc_read(priv, i * sizeof(u32));
+}
+
+static void ethoc_get_ringparam(struct net_device *dev,
+				struct ethtool_ringparam *ring)
+{
+	struct ethoc *priv = netdev_priv(dev);
+
+	ring->rx_max_pending = priv->num_bd - 1;
+	ring->rx_mini_max_pending = 0;
+	ring->rx_jumbo_max_pending = 0;
+	ring->tx_max_pending = priv->num_bd - 1;
+
+	ring->rx_pending = priv->num_rx;
+	ring->rx_mini_pending = 0;
+	ring->rx_jumbo_pending = 0;
+	ring->tx_pending = priv->num_tx;
+}
+
+static int ethoc_set_ringparam(struct net_device *dev,
+			       struct ethtool_ringparam *ring)
+{
+	struct ethoc *priv = netdev_priv(dev);
+
+	if (ring->tx_pending < 1 || ring->rx_pending < 1 ||
+	    ring->tx_pending + ring->rx_pending > priv->num_bd)
+		return -EINVAL;
+	if (ring->rx_mini_pending || ring->rx_jumbo_pending)
+		return -EINVAL;
+
+	if (netif_running(dev)) {
+		netif_tx_disable(dev);
+		ethoc_disable_rx_and_tx(priv);
+		ethoc_disable_irq(priv, INT_MASK_TX | INT_MASK_RX);
+		synchronize_irq(dev->irq);
+	}
+
+	priv->num_tx = rounddown_pow_of_two(ring->tx_pending);
+	priv->num_rx = ring->rx_pending;
+	ethoc_init_ring(priv, dev->mem_start);
+
+	if (netif_running(dev)) {
+		ethoc_enable_irq(priv, INT_MASK_TX | INT_MASK_RX);
+		ethoc_enable_rx_and_tx(priv);
+		netif_wake_queue(dev);
+	}
+	return 0;
+}
+
+const struct ethtool_ops ethoc_ethtool_ops = {
+	.get_settings = ethoc_get_settings,
+	.set_settings = ethoc_set_settings,
+	.get_regs_len = ethoc_get_regs_len,
+	.get_regs = ethoc_get_regs,
+	.get_link = ethtool_op_get_link,
+	.get_ringparam = ethoc_get_ringparam,
+	.set_ringparam = ethoc_set_ringparam,
+	.get_ts_info = ethtool_op_get_ts_info,
+};
+
 static const struct net_device_ops ethoc_netdev_ops = {
 	.ndo_open = ethoc_open,
 	.ndo_stop = ethoc_stop,
···
 	int num_bd;
 	int ret = 0;
 	bool random_mac = false;
+	struct ethoc_platform_data *pdata = dev_get_platdata(&pdev->dev);
+	u32 eth_clkfreq = pdata ? pdata->eth_clkfreq : 0;
 
 	/* allocate networking device */
 	netdev = alloc_etherdev(sizeof(struct ethoc));
···
 		ret = -ENODEV;
 		goto error;
 	}
+	priv->num_bd = num_bd;
 	/* num_tx must be a power of two */
 	priv->num_tx = rounddown_pow_of_two(num_bd >> 1);
 	priv->num_rx = num_bd - priv->num_tx;
···
 	}
 
 	/* Allow the platform setup code to pass in a MAC address. */
-	if (dev_get_platdata(&pdev->dev)) {
-		struct ethoc_platform_data *pdata = dev_get_platdata(&pdev->dev);
+	if (pdata) {
 		memcpy(netdev->dev_addr, pdata->hwaddr, IFHWADDRLEN);
 		priv->phy_id = pdata->phy_id;
 	} else {
···
 
 	if (random_mac)
 		netdev->addr_assign_type = NET_ADDR_RANDOM;
+
+	/* Allow the platform setup code to adjust MII management bus clock. */
+	if (!eth_clkfreq) {
+		struct clk *clk = devm_clk_get(&pdev->dev, NULL);
+
+		if (!IS_ERR(clk)) {
+			priv->clk = clk;
+			clk_prepare_enable(clk);
+			eth_clkfreq = clk_get_rate(clk);
+		}
+	}
+	if (eth_clkfreq) {
+		u32 clkdiv = MIIMODER_CLKDIV(eth_clkfreq / 2500000 + 1);
+
+		if (!clkdiv)
+			clkdiv = 2;
+		dev_dbg(&pdev->dev, "setting MII clkdiv to %u\n", clkdiv);
+		ethoc_write(priv, MIIMODER,
+			    (ethoc_read(priv, MIIMODER) & MIIMODER_NOPRE) |
+			    clkdiv);
+	}
 
 	/* register MII bus */
 	priv->mdio = mdiobus_alloc();
···
 	netdev->netdev_ops = &ethoc_netdev_ops;
 	netdev->watchdog_timeo = ETHOC_TIMEOUT;
 	netdev->features |= 0;
+	netdev->ethtool_ops = &ethoc_ethtool_ops;
 
 	/* setup NAPI */
 	netif_napi_add(netdev, &priv->napi, ethoc_poll, 64);
···
 	kfree(priv->mdio->irq);
 	mdiobus_free(priv->mdio);
 free:
+	if (priv->clk)
+		clk_disable_unprepare(priv->clk);
 	free_netdev(netdev);
 out:
 	return ret;
···
 		kfree(priv->mdio->irq);
 		mdiobus_free(priv->mdio);
 	}
+	if (priv->clk)
+		clk_disable_unprepare(priv->clk);
 	unregister_netdev(netdev);
 	free_netdev(netdev);
 }
+1 -1
drivers/net/ethernet/intel/e100.c
···
 		*enable_wake = false;
 	}
 
-	pci_disable_device(pdev);
+	pci_clear_master(pdev);
 }
 
 static int __e100_power_off(struct pci_dev *pdev, bool wake)
-6
drivers/net/ethernet/neterion/vxge/vxge-main.c
···
 	int vpath_idx = 0;
 	enum vxge_hw_status status = VXGE_HW_OK;
 	struct vxge_vpath *vpath = NULL;
-	struct __vxge_hw_device *hldev;
-
-	hldev = pci_get_drvdata(vdev->pdev);
 
 	mac_address = (u8 *)&mac_addr;
 	memcpy(mac_address, mac_header, ETH_ALEN);
···
 
 static void vxge_rem_isr(struct vxgedev *vdev)
 {
-	struct __vxge_hw_device *hldev;
-	hldev = pci_get_drvdata(vdev->pdev);
-
 #ifdef CONFIG_PCI_MSI
 	if (vdev->config.intr_type == MSI_X) {
 		vxge_rem_msix_isr(vdev);
+2
drivers/net/ethernet/sfc/tx.c
···
 	}
 
 	/* Transfer ownership of the skb to the final buffer */
+#ifdef EFX_USE_PIO
 finish_packet:
+#endif
 	buffer->skb = skb;
 	buffer->flags = EFX_TX_BUF_SKB | dma_flags;
 
+12 -2
drivers/net/ethernet/ti/cpsw.c
···
 	mdio_node = of_find_node_by_phandle(be32_to_cpup(parp));
 	phyid = be32_to_cpup(parp+1);
 	mdio = of_find_device_by_node(mdio_node);
-	snprintf(slave_data->phy_id, sizeof(slave_data->phy_id),
-		 PHY_ID_FMT, mdio->name, phyid);
+
+	if (strncmp(mdio->name, "gpio", 4) == 0) {
+		/* GPIO bitbang MDIO driver attached */
+		struct mii_bus *bus = dev_get_drvdata(&mdio->dev);
+
+		snprintf(slave_data->phy_id, sizeof(slave_data->phy_id),
+			 PHY_ID_FMT, bus->id, phyid);
+	} else {
+		/* davinci MDIO driver attached */
+		snprintf(slave_data->phy_id, sizeof(slave_data->phy_id),
+			 PHY_ID_FMT, mdio->name, phyid);
+	}
 
 	mac_addr = of_get_mac_address(slave_node);
 	if (mac_addr)
-7
drivers/net/irda/Kconfig
···
 	  To compile it as a module, choose M here: the module will be called
 	  kingsun-sir.
 
-config EP7211_DONGLE
-	tristate "Cirrus Logic clps711x I/R support"
-	depends on IRTTY_SIR && ARCH_CLPS711X && IRDA
-	help
-	  Say Y here if you want to build support for the Cirrus logic
-	  EP7211 chipset's infrared module.
-
 config KSDAZZLE_DONGLE
 	tristate "KingSun Dazzle IrDA-USB dongle"
 	depends on IRDA && USB
-1
drivers/net/irda/Makefile
···
 obj-$(CONFIG_ACT200L_DONGLE)	+= act200l-sir.o
 obj-$(CONFIG_MA600_DONGLE)	+= ma600-sir.o
 obj-$(CONFIG_TOIM3232_DONGLE)	+= toim3232-sir.o
-obj-$(CONFIG_EP7211_DONGLE)	+= ep7211-sir.o
 obj-$(CONFIG_KINGSUN_DONGLE)	+= kingsun-sir.o
 obj-$(CONFIG_KSDAZZLE_DONGLE)	+= ksdazzle-sir.o
 obj-$(CONFIG_KS959_DONGLE)	+= ks959-sir.o
-70
drivers/net/irda/ep7211-sir.c
-/*
- * IR port driver for the Cirrus Logic CLPS711X processors
- *
- * Copyright 2001, Blue Mug Inc.  All rights reserved.
- * Copyright 2007, Samuel Ortiz <samuel@sortiz.org>
- */
-
-#include <linux/module.h>
-#include <linux/platform_device.h>
-
-#include <mach/hardware.h>
-
-#include "sir-dev.h"
-
-static int clps711x_dongle_open(struct sir_dev *dev)
-{
-	unsigned int syscon;
-
-	/* Turn on the SIR encoder. */
-	syscon = clps_readl(SYSCON1);
-	syscon |= SYSCON1_SIREN;
-	clps_writel(syscon, SYSCON1);
-
-	return 0;
-}
-
-static int clps711x_dongle_close(struct sir_dev *dev)
-{
-	unsigned int syscon;
-
-	/* Turn off the SIR encoder. */
-	syscon = clps_readl(SYSCON1);
-	syscon &= ~SYSCON1_SIREN;
-	clps_writel(syscon, SYSCON1);
-
-	return 0;
-}
-
-static struct dongle_driver clps711x_dongle = {
-	.owner		= THIS_MODULE,
-	.driver_name	= "EP7211 IR driver",
-	.type		= IRDA_EP7211_DONGLE,
-	.open		= clps711x_dongle_open,
-	.close		= clps711x_dongle_close,
-};
-
-static int clps711x_sir_probe(struct platform_device *pdev)
-{
-	return irda_register_dongle(&clps711x_dongle);
-}
-
-static int clps711x_sir_remove(struct platform_device *pdev)
-{
-	return irda_unregister_dongle(&clps711x_dongle);
-}
-
-static struct platform_driver clps711x_sir_driver = {
-	.driver	= {
-		.name	= "sir-clps711x",
-		.owner	= THIS_MODULE,
-	},
-	.probe	= clps711x_sir_probe,
-	.remove	= clps711x_sir_remove,
-};
-module_platform_driver(clps711x_sir_driver);
-
-MODULE_AUTHOR("Samuel Ortiz <samuel@sortiz.org>");
-MODULE_DESCRIPTION("EP7211 IR dongle driver");
-MODULE_LICENSE("GPL");
-MODULE_ALIAS("irda-dongle-13"); /* IRDA_EP7211_DONGLE */
+13 -6
drivers/net/phy/dp83640.c
··· 437 437 if (on) { 438 438 gpio_num = gpio_tab[EXTTS0_GPIO + index]; 439 439 evnt |= (gpio_num & EVNT_GPIO_MASK) << EVNT_GPIO_SHIFT; 440 - evnt |= EVNT_RISE; 440 + if (rq->extts.flags & PTP_FALLING_EDGE) 441 + evnt |= EVNT_FALL; 442 + else 443 + evnt |= EVNT_RISE; 441 444 } 442 445 ext_write(0, phydev, PAGE5, PTP_EVNT, evnt); 443 446 return 0; ··· 1061 1058 kfree(dp83640); 1062 1059 } 1063 1060 1061 + static int dp83640_config_init(struct phy_device *phydev) 1062 + { 1063 + enable_status_frames(phydev, true); 1064 + ext_write(0, phydev, PAGE4, PTP_CTL, PTP_ENABLE); 1065 + return 0; 1066 + } 1067 + 1064 1068 static int dp83640_ack_interrupt(struct phy_device *phydev) 1065 1069 { 1066 1070 int err = phy_read(phydev, MII_DP83640_MISR); ··· 1205 1195 1206 1196 mutex_lock(&dp83640->clock->extreg_lock); 1207 1197 1208 - if (dp83640->hwts_tx_en || dp83640->hwts_rx_en) { 1209 - enable_status_frames(phydev, true); 1210 - ext_write(0, phydev, PAGE4, PTP_CTL, PTP_ENABLE); 1211 - } 1212 - 1213 1198 ext_write(0, phydev, PAGE5, PTP_TXCFG0, txcfg0); 1214 1199 ext_write(0, phydev, PAGE5, PTP_RXCFG0, rxcfg0); 1215 1200 ··· 1286 1281 } 1287 1282 /* fall through */ 1288 1283 case HWTSTAMP_TX_ON: 1284 + skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS; 1289 1285 skb_queue_tail(&dp83640->tx_queue, skb); 1290 1286 schedule_work(&dp83640->ts_work); 1291 1287 break; ··· 1336 1330 .flags = PHY_HAS_INTERRUPT, 1337 1331 .probe = dp83640_probe, 1338 1332 .remove = dp83640_remove, 1333 + .config_init = dp83640_config_init, 1339 1334 .config_aneg = genphy_config_aneg, 1340 1335 .read_status = genphy_read_status, 1341 1336 .ack_interrupt = dp83640_ack_interrupt,
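The dp83640 hunk above stops hard-coding rising-edge capture and honors a falling-edge request in the external-timestamp path. That selection can be sketched in isolation; the bit values below are illustrative stand-ins for the driver's register bits and the PTP request flag, not the real constants:

```c
#include <assert.h>

/* Illustrative stand-ins: the real values live in the PTP uapi header and
 * the dp83640 driver, so treat these definitions as assumptions. */
#define PTP_FALLING_EDGE  (1 << 2)
#define EVNT_RISE         (1 << 6)
#define EVNT_FALL         (1 << 7)

/* Pick the capture-edge bit the way the patched event-enable path does:
 * honor a falling-edge request instead of always arming the rising edge. */
static unsigned int evnt_edge_bit(unsigned int req_flags)
{
    return (req_flags & PTP_FALLING_EDGE) ? EVNT_FALL : EVNT_RISE;
}
```

The selected bit is then OR'd into the event word before it is written to the PTP_EVNT register, exactly as in the hunk.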
+3
drivers/net/phy/mdio-sun4i.c
··· 170 170 } 171 171 172 172 static const struct of_device_id sun4i_mdio_dt_ids[] = { 173 + { .compatible = "allwinner,sun4i-a10-mdio" }, 174 + 175 + /* Deprecated */ 173 176 { .compatible = "allwinner,sun4i-mdio" }, 174 177 { } 175 178 };
+24 -14
drivers/net/phy/phy_device.c
··· 719 719 static int genphy_config_advert(struct phy_device *phydev) 720 720 { 721 721 u32 advertise; 722 - int oldadv, adv; 722 + int oldadv, adv, bmsr; 723 723 int err, changed = 0; 724 724 725 725 /* Only allow advertising what this PHY supports */ ··· 744 744 changed = 1; 745 745 } 746 746 747 + bmsr = phy_read(phydev, MII_BMSR); 748 + if (bmsr < 0) 749 + return bmsr; 750 + 751 + /* Per 802.3-2008, Section 22.2.4.2.16 Extended status all 752 + * 1000Mbits/sec capable PHYs shall have the BMSR_ESTATEN bit set to a 753 + * logical 1. 754 + */ 755 + if (!(bmsr & BMSR_ESTATEN)) 756 + return changed; 757 + 747 758 /* Configure gigabit if it's supported */ 759 + adv = phy_read(phydev, MII_CTRL1000); 760 + if (adv < 0) 761 + return adv; 762 + 763 + oldadv = adv; 764 + adv &= ~(ADVERTISE_1000FULL | ADVERTISE_1000HALF); 765 + 748 766 if (phydev->supported & (SUPPORTED_1000baseT_Half | 749 767 SUPPORTED_1000baseT_Full)) { 750 - adv = phy_read(phydev, MII_CTRL1000); 751 - if (adv < 0) 752 - return adv; 753 - 754 - oldadv = adv; 755 - adv &= ~(ADVERTISE_1000FULL | ADVERTISE_1000HALF); 756 768 adv |= ethtool_adv_to_mii_ctrl1000_t(advertise); 757 - 758 - if (adv != oldadv) { 759 - err = phy_write(phydev, MII_CTRL1000, adv); 760 - 761 - if (err < 0) 762 - return err; 769 + if (adv != oldadv) 763 770 changed = 1; 764 - } 765 771 } 772 + 773 + err = phy_write(phydev, MII_CTRL1000, adv); 774 + if (err < 0) 775 + return err; 766 776 767 777 return changed; 768 778 }
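The restructured gigabit path above always clears the 1000BASE-T bits and rewrites MII_CTRL1000, so a PHY whose supported mask dropped gigabit cannot keep advertising stale modes. A minimal sketch of that register update follows; ADVERTISE_1000FULL/HALF carry their usual mii.h values, and the helper name is ours:

```c
#include <assert.h>
#include <stdint.h>

#define ADVERTISE_1000FULL 0x0200  /* values as in the mii.h uapi header */
#define ADVERTISE_1000HALF 0x0100

/* Compute the new MII_CTRL1000 value: stale gigabit bits are cleared
 * unconditionally, then re-added only if the PHY still supports gigabit.
 * The caller writes the result back even when nothing was re-added,
 * which is the behavioral change the hunk makes. */
static uint16_t ctrl1000_update(uint16_t adv, int supports_gbit,
                                uint16_t wanted_bits)
{
    adv &= ~(ADVERTISE_1000FULL | ADVERTISE_1000HALF);
    if (supports_gbit)
        adv |= wanted_bits & (ADVERTISE_1000FULL | ADVERTISE_1000HALF);
    return adv;
}
```

The hunk also gates this whole path on BMSR_ESTATEN, since 802.3-2008 22.2.4.2.16 requires that bit on any gigabit-capable PHY.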
+16
drivers/net/usb/Kconfig
··· 292 292 This option adds support for CoreChip-sz SR9700 based USB 1.1 293 293 10/100 Ethernet adapters. 294 294 295 + config USB_NET_SR9800 296 + tristate "CoreChip-sz SR9800 based USB 2.0 10/100 ethernet devices" 297 + depends on USB_USBNET 298 + select CRC32 299 + default y 300 + ---help--- 301 + Say Y if you want to use one of the following 100Mbps USB Ethernet 302 + devices based on the CoreChip-sz SR9800 chip. 303 + 304 + This driver makes the adapter appear as a normal Ethernet interface, 305 + typically on eth0, if it is the only ethernet device, or perhaps on 306 + eth1, if you have a PCI or ISA ethernet card installed. 307 + 308 + To compile this driver as a module, choose M here: the 309 + module will be called sr9800. 310 + 295 311 config USB_NET_SMSC75XX 296 312 tristate "SMSC LAN75XX based USB 2.0 gigabit ethernet devices" 297 313 depends on USB_USBNET

+1
drivers/net/usb/Makefile
··· 15 15 obj-$(CONFIG_USB_NET_CDC_EEM) += cdc_eem.o 16 16 obj-$(CONFIG_USB_NET_DM9601) += dm9601.o 17 17 obj-$(CONFIG_USB_NET_SR9700) += sr9700.o 18 + obj-$(CONFIG_USB_NET_SR9800) += sr9800.o 18 19 obj-$(CONFIG_USB_NET_SMSC75XX) += smsc75xx.o 19 20 obj-$(CONFIG_USB_NET_SMSC95XX) += smsc95xx.o 20 21 obj-$(CONFIG_USB_NET_GL620A) += gl620a.o
+11 -21
drivers/net/usb/hso.c
··· 1201 1201 struct hso_serial *serial = urb->context; 1202 1202 int status = urb->status; 1203 1203 1204 + D4("\n--- Got serial_read_bulk callback %02x ---", status); 1205 + 1204 1206 /* sanity check */ 1205 1207 if (!serial) { 1206 1208 D1("serial == NULL"); 1207 1209 return; 1208 - } else if (status) { 1210 + } 1211 + if (status) { 1209 1212 handle_usb_error(status, __func__, serial->parent); 1210 1213 return; 1211 1214 } 1212 1215 1213 - D4("\n--- Got serial_read_bulk callback %02x ---", status); 1214 1216 D1("Actual length = %d\n", urb->actual_length); 1215 1217 DUMP1(urb->transfer_buffer, urb->actual_length); 1216 1218 ··· 1220 1218 if (serial->port.count == 0) 1221 1219 return; 1222 1220 1223 - if (status == 0) { 1224 - if (serial->parent->port_spec & HSO_INFO_CRC_BUG) 1225 - fix_crc_bug(urb, serial->in_endp->wMaxPacketSize); 1226 - /* Valid data, handle RX data */ 1227 - spin_lock(&serial->serial_lock); 1228 - serial->rx_urb_filled[hso_urb_to_index(serial, urb)] = 1; 1229 - put_rxbuf_data_and_resubmit_bulk_urb(serial); 1230 - spin_unlock(&serial->serial_lock); 1231 - } else if (status == -ENOENT || status == -ECONNRESET) { 1232 - /* Unlinked - check for throttled port. */ 1233 - D2("Port %d, successfully unlinked urb", serial->minor); 1234 - spin_lock(&serial->serial_lock); 1235 - serial->rx_urb_filled[hso_urb_to_index(serial, urb)] = 0; 1236 - hso_resubmit_rx_bulk_urb(serial, urb); 1237 - spin_unlock(&serial->serial_lock); 1238 - } else { 1239 - D2("Port %d, status = %d for read urb", serial->minor, status); 1240 - return; 1241 - } 1221 + if (serial->parent->port_spec & HSO_INFO_CRC_BUG) 1222 + fix_crc_bug(urb, serial->in_endp->wMaxPacketSize); 1223 + /* Valid data, handle RX data */ 1224 + spin_lock(&serial->serial_lock); 1225 + serial->rx_urb_filled[hso_urb_to_index(serial, urb)] = 1; 1226 + put_rxbuf_data_and_resubmit_bulk_urb(serial); 1227 + spin_unlock(&serial->serial_lock); 1242 1228 } 1243 1229 1244 1230 /*
+2
drivers/net/usb/qmi_wwan.c
··· 712 712 {QMI_FIXED_INTF(0x19d2, 0x1255, 3)}, 713 713 {QMI_FIXED_INTF(0x19d2, 0x1255, 4)}, 714 714 {QMI_FIXED_INTF(0x19d2, 0x1256, 4)}, 715 + {QMI_FIXED_INTF(0x19d2, 0x1270, 5)}, /* ZTE MF667 */ 715 716 {QMI_FIXED_INTF(0x19d2, 0x1401, 2)}, 716 717 {QMI_FIXED_INTF(0x19d2, 0x1402, 2)}, /* ZTE MF60 */ 717 718 {QMI_FIXED_INTF(0x19d2, 0x1424, 2)}, ··· 724 723 {QMI_FIXED_INTF(0x1199, 0x68a2, 8)}, /* Sierra Wireless MC7710 in QMI mode */ 725 724 {QMI_FIXED_INTF(0x1199, 0x68a2, 19)}, /* Sierra Wireless MC7710 in QMI mode */ 726 725 {QMI_FIXED_INTF(0x1199, 0x901c, 8)}, /* Sierra Wireless EM7700 */ 726 + {QMI_FIXED_INTF(0x1199, 0x9051, 8)}, /* Netgear AirCard 340U */ 727 727 {QMI_FIXED_INTF(0x1bbb, 0x011e, 4)}, /* Telekom Speedstick LTE II (Alcatel One Touch L100V LTE) */ 728 728 {QMI_FIXED_INTF(0x2357, 0x0201, 4)}, /* TP-LINK HSUPA Modem MA180 */ 729 729 {QMI_FIXED_INTF(0x2357, 0x9000, 4)}, /* TP-LINK MA260 */
+9 -10
drivers/net/usb/r8152.c
··· 2273 2273 struct r8152 *tp = netdev_priv(netdev); 2274 2274 int res = 0; 2275 2275 2276 - res = usb_submit_urb(tp->intr_urb, GFP_KERNEL); 2277 - if (res) { 2278 - if (res == -ENODEV) 2279 - netif_device_detach(tp->netdev); 2280 - netif_warn(tp, ifup, netdev, "intr_urb submit failed: %d\n", 2281 - res); 2282 - return res; 2283 - } 2284 - 2285 2276 rtl8152_set_speed(tp, AUTONEG_ENABLE, 2286 2277 tp->mii.supports_gmii ? SPEED_1000 : SPEED_100, 2287 2278 DUPLEX_FULL); ··· 2280 2289 netif_carrier_off(netdev); 2281 2290 netif_start_queue(netdev); 2282 2291 set_bit(WORK_ENABLE, &tp->flags); 2292 + res = usb_submit_urb(tp->intr_urb, GFP_KERNEL); 2293 + if (res) { 2294 + if (res == -ENODEV) 2295 + netif_device_detach(tp->netdev); 2296 + netif_warn(tp, ifup, netdev, "intr_urb submit failed: %d\n", 2297 + res); 2298 + } 2299 + 2283 2300 2284 2301 return res; 2285 2302 } ··· 2297 2298 struct r8152 *tp = netdev_priv(netdev); 2298 2299 int res = 0; 2299 2300 2300 - usb_kill_urb(tp->intr_urb); 2301 2301 clear_bit(WORK_ENABLE, &tp->flags); 2302 + usb_kill_urb(tp->intr_urb); 2302 2303 cancel_delayed_work_sync(&tp->schedule); 2303 2304 netif_stop_queue(netdev); 2304 2305 tasklet_disable(&tp->tl);
+870
drivers/net/usb/sr9800.c
··· 1 + /* CoreChip-sz SR9800 one chip USB 2.0 Ethernet Devices 2 + * 3 + * Author : Liu Junliang <liujunliang_ljl@163.com> 4 + * 5 + * Based on asix_common.c, asix_devices.c 6 + * 7 + * This file is licensed under the terms of the GNU General Public License 8 + * version 2. This program is licensed "as is" without any warranty of any 9 + * kind, whether express or implied.* 10 + */ 11 + 12 + #include <linux/module.h> 13 + #include <linux/kmod.h> 14 + #include <linux/init.h> 15 + #include <linux/netdevice.h> 16 + #include <linux/etherdevice.h> 17 + #include <linux/ethtool.h> 18 + #include <linux/workqueue.h> 19 + #include <linux/mii.h> 20 + #include <linux/usb.h> 21 + #include <linux/crc32.h> 22 + #include <linux/usb/usbnet.h> 23 + #include <linux/slab.h> 24 + #include <linux/if_vlan.h> 25 + 26 + #include "sr9800.h" 27 + 28 + static int sr_read_cmd(struct usbnet *dev, u8 cmd, u16 value, u16 index, 29 + u16 size, void *data) 30 + { 31 + int err; 32 + 33 + err = usbnet_read_cmd(dev, cmd, SR_REQ_RD_REG, value, index, 34 + data, size); 35 + if ((err != size) && (err >= 0)) 36 + err = -EINVAL; 37 + 38 + return err; 39 + } 40 + 41 + static int sr_write_cmd(struct usbnet *dev, u8 cmd, u16 value, u16 index, 42 + u16 size, void *data) 43 + { 44 + int err; 45 + 46 + err = usbnet_write_cmd(dev, cmd, SR_REQ_WR_REG, value, index, 47 + data, size); 48 + if ((err != size) && (err >= 0)) 49 + err = -EINVAL; 50 + 51 + return err; 52 + } 53 + 54 + static void 55 + sr_write_cmd_async(struct usbnet *dev, u8 cmd, u16 value, u16 index, 56 + u16 size, void *data) 57 + { 58 + usbnet_write_cmd_async(dev, cmd, SR_REQ_WR_REG, value, index, data, 59 + size); 60 + } 61 + 62 + static int sr_rx_fixup(struct usbnet *dev, struct sk_buff *skb) 63 + { 64 + int offset = 0; 65 + 66 + while (offset + sizeof(u32) < skb->len) { 67 + struct sk_buff *sr_skb; 68 + u16 size; 69 + u32 header = get_unaligned_le32(skb->data + offset); 70 + 71 + offset += sizeof(u32); 72 + /* get the packet length */ 73 + size = 
(u16) (header & 0x7ff); 74 + if (size != ((~header >> 16) & 0x07ff)) { 75 + netdev_err(dev->net, "%s : Bad Header Length\n", 76 + __func__); 77 + return 0; 78 + } 79 + 80 + if ((size > dev->net->mtu + ETH_HLEN + VLAN_HLEN) || 81 + (size + offset > skb->len)) { 82 + netdev_err(dev->net, "%s : Bad RX Length %d\n", 83 + __func__, size); 84 + return 0; 85 + } 86 + sr_skb = netdev_alloc_skb_ip_align(dev->net, size); 87 + if (!sr_skb) 88 + return 0; 89 + 90 + skb_put(sr_skb, size); 91 + memcpy(sr_skb->data, skb->data + offset, size); 92 + usbnet_skb_return(dev, sr_skb); 93 + 94 + offset += (size + 1) & 0xfffe; 95 + } 96 + 97 + if (skb->len != offset) { 98 + netdev_err(dev->net, "%s : Bad SKB Length %d\n", __func__, 99 + skb->len); 100 + return 0; 101 + } 102 + 103 + return 1; 104 + } 105 + 106 + static struct sk_buff *sr_tx_fixup(struct usbnet *dev, struct sk_buff *skb, 107 + gfp_t flags) 108 + { 109 + int headroom = skb_headroom(skb); 110 + int tailroom = skb_tailroom(skb); 111 + u32 padbytes = 0xffff0000; 112 + u32 packet_len; 113 + int padlen; 114 + 115 + padlen = ((skb->len + 4) % (dev->maxpacket - 1)) ? 
0 : 4; 116 + 117 + if ((!skb_cloned(skb)) && ((headroom + tailroom) >= (4 + padlen))) { 118 + if ((headroom < 4) || (tailroom < padlen)) { 119 + skb->data = memmove(skb->head + 4, skb->data, 120 + skb->len); 121 + skb_set_tail_pointer(skb, skb->len); 122 + } 123 + } else { 124 + struct sk_buff *skb2; 125 + skb2 = skb_copy_expand(skb, 4, padlen, flags); 126 + dev_kfree_skb_any(skb); 127 + skb = skb2; 128 + if (!skb) 129 + return NULL; 130 + } 131 + 132 + skb_push(skb, 4); 133 + packet_len = (((skb->len - 4) ^ 0x0000ffff) << 16) + (skb->len - 4); 134 + cpu_to_le32s(&packet_len); 135 + skb_copy_to_linear_data(skb, &packet_len, sizeof(packet_len)); 136 + 137 + if (padlen) { 138 + cpu_to_le32s(&padbytes); 139 + memcpy(skb_tail_pointer(skb), &padbytes, sizeof(padbytes)); 140 + skb_put(skb, sizeof(padbytes)); 141 + } 142 + 143 + return skb; 144 + } 145 + 146 + static void sr_status(struct usbnet *dev, struct urb *urb) 147 + { 148 + struct sr9800_int_data *event; 149 + int link; 150 + 151 + if (urb->actual_length < 8) 152 + return; 153 + 154 + event = urb->transfer_buffer; 155 + link = event->link & 0x01; 156 + if (netif_carrier_ok(dev->net) != link) { 157 + usbnet_link_change(dev, link, 1); 158 + netdev_dbg(dev->net, "Link Status is: %d\n", link); 159 + } 160 + 161 + return; 162 + } 163 + 164 + static inline int sr_set_sw_mii(struct usbnet *dev) 165 + { 166 + int ret; 167 + 168 + ret = sr_write_cmd(dev, SR_CMD_SET_SW_MII, 0x0000, 0, 0, NULL); 169 + if (ret < 0) 170 + netdev_err(dev->net, "Failed to enable software MII access\n"); 171 + return ret; 172 + } 173 + 174 + static inline int sr_set_hw_mii(struct usbnet *dev) 175 + { 176 + int ret; 177 + 178 + ret = sr_write_cmd(dev, SR_CMD_SET_HW_MII, 0x0000, 0, 0, NULL); 179 + if (ret < 0) 180 + netdev_err(dev->net, "Failed to enable hardware MII access\n"); 181 + return ret; 182 + } 183 + 184 + static inline int sr_get_phy_addr(struct usbnet *dev) 185 + { 186 + u8 buf[2]; 187 + int ret; 188 + 189 + ret = sr_read_cmd(dev, 
SR_CMD_READ_PHY_ID, 0, 0, 2, buf); 190 + if (ret < 0) { 191 + netdev_err(dev->net, "%s : Error reading PHYID register:%02x\n", 192 + __func__, ret); 193 + goto out; 194 + } 195 + netdev_dbg(dev->net, "%s : returning 0x%04x\n", __func__, 196 + *((__le16 *)buf)); 197 + 198 + ret = buf[1]; 199 + 200 + out: 201 + return ret; 202 + } 203 + 204 + static int sr_sw_reset(struct usbnet *dev, u8 flags) 205 + { 206 + int ret; 207 + 208 + ret = sr_write_cmd(dev, SR_CMD_SW_RESET, flags, 0, 0, NULL); 209 + if (ret < 0) 210 + netdev_err(dev->net, "Failed to send software reset:%02x\n", 211 + ret); 212 + 213 + return ret; 214 + } 215 + 216 + static u16 sr_read_rx_ctl(struct usbnet *dev) 217 + { 218 + __le16 v; 219 + int ret; 220 + 221 + ret = sr_read_cmd(dev, SR_CMD_READ_RX_CTL, 0, 0, 2, &v); 222 + if (ret < 0) { 223 + netdev_err(dev->net, "Error reading RX_CTL register:%02x\n", 224 + ret); 225 + goto out; 226 + } 227 + 228 + ret = le16_to_cpu(v); 229 + out: 230 + return ret; 231 + } 232 + 233 + static int sr_write_rx_ctl(struct usbnet *dev, u16 mode) 234 + { 235 + int ret; 236 + 237 + netdev_dbg(dev->net, "%s : mode = 0x%04x\n", __func__, mode); 238 + ret = sr_write_cmd(dev, SR_CMD_WRITE_RX_CTL, mode, 0, 0, NULL); 239 + if (ret < 0) 240 + netdev_err(dev->net, 241 + "Failed to write RX_CTL mode to 0x%04x:%02x\n", 242 + mode, ret); 243 + 244 + return ret; 245 + } 246 + 247 + static u16 sr_read_medium_status(struct usbnet *dev) 248 + { 249 + __le16 v; 250 + int ret; 251 + 252 + ret = sr_read_cmd(dev, SR_CMD_READ_MEDIUM_STATUS, 0, 0, 2, &v); 253 + if (ret < 0) { 254 + netdev_err(dev->net, 255 + "Error reading Medium Status register:%02x\n", ret); 256 + return ret; /* TODO: callers not checking for error ret */ 257 + } 258 + 259 + return le16_to_cpu(v); 260 + } 261 + 262 + static int sr_write_medium_mode(struct usbnet *dev, u16 mode) 263 + { 264 + int ret; 265 + 266 + netdev_dbg(dev->net, "%s : mode = 0x%04x\n", __func__, mode); 267 + ret = sr_write_cmd(dev, SR_CMD_WRITE_MEDIUM_MODE, 
mode, 0, 0, NULL); 268 + if (ret < 0) 269 + netdev_err(dev->net, 270 + "Failed to write Medium Mode mode to 0x%04x:%02x\n", 271 + mode, ret); 272 + return ret; 273 + } 274 + 275 + static int sr_write_gpio(struct usbnet *dev, u16 value, int sleep) 276 + { 277 + int ret; 278 + 279 + netdev_dbg(dev->net, "%s : value = 0x%04x\n", __func__, value); 280 + ret = sr_write_cmd(dev, SR_CMD_WRITE_GPIOS, value, 0, 0, NULL); 281 + if (ret < 0) 282 + netdev_err(dev->net, "Failed to write GPIO value 0x%04x:%02x\n", 283 + value, ret); 284 + if (sleep) 285 + msleep(sleep); 286 + 287 + return ret; 288 + } 289 + 290 + /* SR9800 have a 16-bit RX_CTL value */ 291 + static void sr_set_multicast(struct net_device *net) 292 + { 293 + struct usbnet *dev = netdev_priv(net); 294 + struct sr_data *data = (struct sr_data *)&dev->data; 295 + u16 rx_ctl = SR_DEFAULT_RX_CTL; 296 + 297 + if (net->flags & IFF_PROMISC) { 298 + rx_ctl |= SR_RX_CTL_PRO; 299 + } else if (net->flags & IFF_ALLMULTI || 300 + netdev_mc_count(net) > SR_MAX_MCAST) { 301 + rx_ctl |= SR_RX_CTL_AMALL; 302 + } else if (netdev_mc_empty(net)) { 303 + /* just broadcast and directed */ 304 + } else { 305 + /* We use the 20 byte dev->data 306 + * for our 8 byte filter buffer 307 + * to avoid allocating memory that 308 + * is tricky to free later 309 + */ 310 + struct netdev_hw_addr *ha; 311 + u32 crc_bits; 312 + 313 + memset(data->multi_filter, 0, SR_MCAST_FILTER_SIZE); 314 + 315 + /* Build the multicast hash filter. 
*/ 316 + netdev_for_each_mc_addr(ha, net) { 317 + crc_bits = ether_crc(ETH_ALEN, ha->addr) >> 26; 318 + data->multi_filter[crc_bits >> 3] |= 319 + 1 << (crc_bits & 7); 320 + } 321 + 322 + sr_write_cmd_async(dev, SR_CMD_WRITE_MULTI_FILTER, 0, 0, 323 + SR_MCAST_FILTER_SIZE, data->multi_filter); 324 + 325 + rx_ctl |= SR_RX_CTL_AM; 326 + } 327 + 328 + sr_write_cmd_async(dev, SR_CMD_WRITE_RX_CTL, rx_ctl, 0, 0, NULL); 329 + } 330 + 331 + static int sr_mdio_read(struct net_device *net, int phy_id, int loc) 332 + { 333 + struct usbnet *dev = netdev_priv(net); 334 + __le16 res; 335 + 336 + mutex_lock(&dev->phy_mutex); 337 + sr_set_sw_mii(dev); 338 + sr_read_cmd(dev, SR_CMD_READ_MII_REG, phy_id, (__u16)loc, 2, &res); 339 + sr_set_hw_mii(dev); 340 + mutex_unlock(&dev->phy_mutex); 341 + 342 + netdev_dbg(dev->net, 343 + "%s : phy_id=0x%02x, loc=0x%02x, returns=0x%04x\n", __func__, 344 + phy_id, loc, le16_to_cpu(res)); 345 + 346 + return le16_to_cpu(res); 347 + } 348 + 349 + static void 350 + sr_mdio_write(struct net_device *net, int phy_id, int loc, int val) 351 + { 352 + struct usbnet *dev = netdev_priv(net); 353 + __le16 res = cpu_to_le16(val); 354 + 355 + netdev_dbg(dev->net, 356 + "%s : phy_id=0x%02x, loc=0x%02x, val=0x%04x\n", __func__, 357 + phy_id, loc, val); 358 + mutex_lock(&dev->phy_mutex); 359 + sr_set_sw_mii(dev); 360 + sr_write_cmd(dev, SR_CMD_WRITE_MII_REG, phy_id, (__u16)loc, 2, &res); 361 + sr_set_hw_mii(dev); 362 + mutex_unlock(&dev->phy_mutex); 363 + } 364 + 365 + /* Get the PHY Identifier from the PHYSID1 & PHYSID2 MII registers */ 366 + static u32 sr_get_phyid(struct usbnet *dev) 367 + { 368 + int phy_reg; 369 + u32 phy_id; 370 + int i; 371 + 372 + /* Poll for the rare case the FW or phy isn't ready yet. 
*/ 373 + for (i = 0; i < 100; i++) { 374 + phy_reg = sr_mdio_read(dev->net, dev->mii.phy_id, MII_PHYSID1); 375 + if (phy_reg != 0 && phy_reg != 0xFFFF) 376 + break; 377 + mdelay(1); 378 + } 379 + 380 + if (phy_reg <= 0 || phy_reg == 0xFFFF) 381 + return 0; 382 + 383 + phy_id = (phy_reg & 0xffff) << 16; 384 + 385 + phy_reg = sr_mdio_read(dev->net, dev->mii.phy_id, MII_PHYSID2); 386 + if (phy_reg < 0) 387 + return 0; 388 + 389 + phy_id |= (phy_reg & 0xffff); 390 + 391 + return phy_id; 392 + } 393 + 394 + static void 395 + sr_get_wol(struct net_device *net, struct ethtool_wolinfo *wolinfo) 396 + { 397 + struct usbnet *dev = netdev_priv(net); 398 + u8 opt; 399 + 400 + if (sr_read_cmd(dev, SR_CMD_READ_MONITOR_MODE, 0, 0, 1, &opt) < 0) { 401 + wolinfo->supported = 0; 402 + wolinfo->wolopts = 0; 403 + return; 404 + } 405 + wolinfo->supported = WAKE_PHY | WAKE_MAGIC; 406 + wolinfo->wolopts = 0; 407 + if (opt & SR_MONITOR_LINK) 408 + wolinfo->wolopts |= WAKE_PHY; 409 + if (opt & SR_MONITOR_MAGIC) 410 + wolinfo->wolopts |= WAKE_MAGIC; 411 + } 412 + 413 + static int 414 + sr_set_wol(struct net_device *net, struct ethtool_wolinfo *wolinfo) 415 + { 416 + struct usbnet *dev = netdev_priv(net); 417 + u8 opt = 0; 418 + 419 + if (wolinfo->wolopts & WAKE_PHY) 420 + opt |= SR_MONITOR_LINK; 421 + if (wolinfo->wolopts & WAKE_MAGIC) 422 + opt |= SR_MONITOR_MAGIC; 423 + 424 + if (sr_write_cmd(dev, SR_CMD_WRITE_MONITOR_MODE, 425 + opt, 0, 0, NULL) < 0) 426 + return -EINVAL; 427 + 428 + return 0; 429 + } 430 + 431 + static int sr_get_eeprom_len(struct net_device *net) 432 + { 433 + struct usbnet *dev = netdev_priv(net); 434 + struct sr_data *data = (struct sr_data *)&dev->data; 435 + 436 + return data->eeprom_len; 437 + } 438 + 439 + static int sr_get_eeprom(struct net_device *net, 440 + struct ethtool_eeprom *eeprom, u8 *data) 441 + { 442 + struct usbnet *dev = netdev_priv(net); 443 + __le16 *ebuf = (__le16 *)data; 444 + int ret; 445 + int i; 446 + 447 + /* Crude hack to ensure that we 
don't overwrite memory 448 + * if an odd length is supplied 449 + */ 450 + if (eeprom->len % 2) 451 + return -EINVAL; 452 + 453 + eeprom->magic = SR_EEPROM_MAGIC; 454 + 455 + /* sr9800 returns 2 bytes from eeprom on read */ 456 + for (i = 0; i < eeprom->len / 2; i++) { 457 + ret = sr_read_cmd(dev, SR_CMD_READ_EEPROM, eeprom->offset + i, 458 + 0, 2, &ebuf[i]); 459 + if (ret < 0) 460 + return -EINVAL; 461 + } 462 + return 0; 463 + } 464 + 465 + static void sr_get_drvinfo(struct net_device *net, 466 + struct ethtool_drvinfo *info) 467 + { 468 + struct usbnet *dev = netdev_priv(net); 469 + struct sr_data *data = (struct sr_data *)&dev->data; 470 + 471 + /* Inherit standard device info */ 472 + usbnet_get_drvinfo(net, info); 473 + strncpy(info->driver, DRIVER_NAME, sizeof(info->driver)); 474 + strncpy(info->version, DRIVER_VERSION, sizeof(info->version)); 475 + info->eedump_len = data->eeprom_len; 476 + } 477 + 478 + static u32 sr_get_link(struct net_device *net) 479 + { 480 + struct usbnet *dev = netdev_priv(net); 481 + 482 + return mii_link_ok(&dev->mii); 483 + } 484 + 485 + static int sr_ioctl(struct net_device *net, struct ifreq *rq, int cmd) 486 + { 487 + struct usbnet *dev = netdev_priv(net); 488 + 489 + return generic_mii_ioctl(&dev->mii, if_mii(rq), cmd, NULL); 490 + } 491 + 492 + static int sr_set_mac_address(struct net_device *net, void *p) 493 + { 494 + struct usbnet *dev = netdev_priv(net); 495 + struct sr_data *data = (struct sr_data *)&dev->data; 496 + struct sockaddr *addr = p; 497 + 498 + if (netif_running(net)) 499 + return -EBUSY; 500 + if (!is_valid_ether_addr(addr->sa_data)) 501 + return -EADDRNOTAVAIL; 502 + 503 + memcpy(net->dev_addr, addr->sa_data, ETH_ALEN); 504 + 505 + /* We use the 20 byte dev->data 506 + * for our 6 byte mac buffer 507 + * to avoid allocating memory that 508 + * is tricky to free later 509 + */ 510 + memcpy(data->mac_addr, addr->sa_data, ETH_ALEN); 511 + sr_write_cmd_async(dev, SR_CMD_WRITE_NODE_ID, 0, 0, ETH_ALEN, 512 + 
data->mac_addr); 513 + 514 + return 0; 515 + } 516 + 517 + static const struct ethtool_ops sr9800_ethtool_ops = { 518 + .get_drvinfo = sr_get_drvinfo, 519 + .get_link = sr_get_link, 520 + .get_msglevel = usbnet_get_msglevel, 521 + .set_msglevel = usbnet_set_msglevel, 522 + .get_wol = sr_get_wol, 523 + .set_wol = sr_set_wol, 524 + .get_eeprom_len = sr_get_eeprom_len, 525 + .get_eeprom = sr_get_eeprom, 526 + .get_settings = usbnet_get_settings, 527 + .set_settings = usbnet_set_settings, 528 + .nway_reset = usbnet_nway_reset, 529 + }; 530 + 531 + static int sr9800_link_reset(struct usbnet *dev) 532 + { 533 + struct ethtool_cmd ecmd = { .cmd = ETHTOOL_GSET }; 534 + u16 mode; 535 + 536 + mii_check_media(&dev->mii, 1, 1); 537 + mii_ethtool_gset(&dev->mii, &ecmd); 538 + mode = SR9800_MEDIUM_DEFAULT; 539 + 540 + if (ethtool_cmd_speed(&ecmd) != SPEED_100) 541 + mode &= ~SR_MEDIUM_PS; 542 + 543 + if (ecmd.duplex != DUPLEX_FULL) 544 + mode &= ~SR_MEDIUM_FD; 545 + 546 + netdev_dbg(dev->net, "%s : speed: %u duplex: %d mode: 0x%04x\n", 547 + __func__, ethtool_cmd_speed(&ecmd), ecmd.duplex, mode); 548 + 549 + sr_write_medium_mode(dev, mode); 550 + 551 + return 0; 552 + } 553 + 554 + 555 + static int sr9800_set_default_mode(struct usbnet *dev) 556 + { 557 + u16 rx_ctl; 558 + int ret; 559 + 560 + sr_mdio_write(dev->net, dev->mii.phy_id, MII_BMCR, BMCR_RESET); 561 + sr_mdio_write(dev->net, dev->mii.phy_id, MII_ADVERTISE, 562 + ADVERTISE_ALL | ADVERTISE_CSMA); 563 + mii_nway_restart(&dev->mii); 564 + 565 + ret = sr_write_medium_mode(dev, SR9800_MEDIUM_DEFAULT); 566 + if (ret < 0) 567 + goto out; 568 + 569 + ret = sr_write_cmd(dev, SR_CMD_WRITE_IPG012, 570 + SR9800_IPG0_DEFAULT | SR9800_IPG1_DEFAULT, 571 + SR9800_IPG2_DEFAULT, 0, NULL); 572 + if (ret < 0) { 573 + netdev_dbg(dev->net, "Write IPG,IPG1,IPG2 failed: %d\n", ret); 574 + goto out; 575 + } 576 + 577 + /* Set RX_CTL to default values with 2k buffer, and enable cactus */ 578 + ret = sr_write_rx_ctl(dev, SR_DEFAULT_RX_CTL); 579 
+ if (ret < 0) 580 + goto out; 581 + 582 + rx_ctl = sr_read_rx_ctl(dev); 583 + netdev_dbg(dev->net, "RX_CTL is 0x%04x after all initializations\n", 584 + rx_ctl); 585 + 586 + rx_ctl = sr_read_medium_status(dev); 587 + netdev_dbg(dev->net, "Medium Status:0x%04x after all initializations\n", 588 + rx_ctl); 589 + 590 + return 0; 591 + out: 592 + return ret; 593 + } 594 + 595 + static int sr9800_reset(struct usbnet *dev) 596 + { 597 + struct sr_data *data = (struct sr_data *)&dev->data; 598 + int ret, embd_phy; 599 + u16 rx_ctl; 600 + 601 + ret = sr_write_gpio(dev, 602 + SR_GPIO_RSE | SR_GPIO_GPO_2 | SR_GPIO_GPO2EN, 5); 603 + if (ret < 0) 604 + goto out; 605 + 606 + embd_phy = ((sr_get_phy_addr(dev) & 0x1f) == 0x10 ? 1 : 0); 607 + 608 + ret = sr_write_cmd(dev, SR_CMD_SW_PHY_SELECT, embd_phy, 0, 0, NULL); 609 + if (ret < 0) { 610 + netdev_dbg(dev->net, "Select PHY #1 failed: %d\n", ret); 611 + goto out; 612 + } 613 + 614 + ret = sr_sw_reset(dev, SR_SWRESET_IPPD | SR_SWRESET_PRL); 615 + if (ret < 0) 616 + goto out; 617 + 618 + msleep(150); 619 + 620 + ret = sr_sw_reset(dev, SR_SWRESET_CLEAR); 621 + if (ret < 0) 622 + goto out; 623 + 624 + msleep(150); 625 + 626 + if (embd_phy) { 627 + ret = sr_sw_reset(dev, SR_SWRESET_IPRL); 628 + if (ret < 0) 629 + goto out; 630 + } else { 631 + ret = sr_sw_reset(dev, SR_SWRESET_PRTE); 632 + if (ret < 0) 633 + goto out; 634 + } 635 + 636 + msleep(150); 637 + rx_ctl = sr_read_rx_ctl(dev); 638 + netdev_dbg(dev->net, "RX_CTL is 0x%04x after software reset\n", rx_ctl); 639 + ret = sr_write_rx_ctl(dev, 0x0000); 640 + if (ret < 0) 641 + goto out; 642 + 643 + rx_ctl = sr_read_rx_ctl(dev); 644 + netdev_dbg(dev->net, "RX_CTL is 0x%04x setting to 0x0000\n", rx_ctl); 645 + 646 + ret = sr_sw_reset(dev, SR_SWRESET_PRL); 647 + if (ret < 0) 648 + goto out; 649 + 650 + msleep(150); 651 + 652 + ret = sr_sw_reset(dev, SR_SWRESET_IPRL | SR_SWRESET_PRL); 653 + if (ret < 0) 654 + goto out; 655 + 656 + msleep(150); 657 + 658 + ret = 
sr9800_set_default_mode(dev); 659 + if (ret < 0) 660 + goto out; 661 + 662 + /* Rewrite MAC address */ 663 + memcpy(data->mac_addr, dev->net->dev_addr, ETH_ALEN); 664 + ret = sr_write_cmd(dev, SR_CMD_WRITE_NODE_ID, 0, 0, ETH_ALEN, 665 + data->mac_addr); 666 + if (ret < 0) 667 + goto out; 668 + 669 + return 0; 670 + 671 + out: 672 + return ret; 673 + } 674 + 675 + static const struct net_device_ops sr9800_netdev_ops = { 676 + .ndo_open = usbnet_open, 677 + .ndo_stop = usbnet_stop, 678 + .ndo_start_xmit = usbnet_start_xmit, 679 + .ndo_tx_timeout = usbnet_tx_timeout, 680 + .ndo_change_mtu = usbnet_change_mtu, 681 + .ndo_set_mac_address = sr_set_mac_address, 682 + .ndo_validate_addr = eth_validate_addr, 683 + .ndo_do_ioctl = sr_ioctl, 684 + .ndo_set_rx_mode = sr_set_multicast, 685 + }; 686 + 687 + static int sr9800_phy_powerup(struct usbnet *dev) 688 + { 689 + int ret; 690 + 691 + /* set the embedded Ethernet PHY in power-down state */ 692 + ret = sr_sw_reset(dev, SR_SWRESET_IPPD | SR_SWRESET_IPRL); 693 + if (ret < 0) { 694 + netdev_err(dev->net, "Failed to power down PHY : %d\n", ret); 695 + return ret; 696 + } 697 + msleep(20); 698 + 699 + /* set the embedded Ethernet PHY in power-up state */ 700 + ret = sr_sw_reset(dev, SR_SWRESET_IPRL); 701 + if (ret < 0) { 702 + netdev_err(dev->net, "Failed to reset PHY: %d\n", ret); 703 + return ret; 704 + } 705 + msleep(600); 706 + 707 + /* set the embedded Ethernet PHY in reset state */ 708 + ret = sr_sw_reset(dev, SR_SWRESET_CLEAR); 709 + if (ret < 0) { 710 + netdev_err(dev->net, "Failed to power up PHY: %d\n", ret); 711 + return ret; 712 + } 713 + msleep(20); 714 + 715 + /* set the embedded Ethernet PHY in power-up state */ 716 + ret = sr_sw_reset(dev, SR_SWRESET_IPRL); 717 + if (ret < 0) { 718 + netdev_err(dev->net, "Failed to reset PHY: %d\n", ret); 719 + return ret; 720 + } 721 + 722 + return 0; 723 + } 724 + 725 + static int sr9800_bind(struct usbnet *dev, struct usb_interface *intf) 726 + { 727 + struct sr_data *data = 
(struct sr_data *)&dev->data; 728 + u16 led01_mux, led23_mux; 729 + int ret, embd_phy; 730 + u32 phyid; 731 + u16 rx_ctl; 732 + 733 + data->eeprom_len = SR9800_EEPROM_LEN; 734 + 735 + usbnet_get_endpoints(dev, intf); 736 + 737 + /* LED Setting Rule : 738 + * AABB:CCDD 739 + * AA : MFA0(LED0) 740 + * BB : MFA1(LED1) 741 + * CC : MFA2(LED2), Reserved for SR9800 742 + * DD : MFA3(LED3), Reserved for SR9800 743 + */ 744 + led01_mux = (SR_LED_MUX_LINK_ACTIVE << 8) | SR_LED_MUX_LINK; 745 + led23_mux = (SR_LED_MUX_LINK_ACTIVE << 8) | SR_LED_MUX_TX_ACTIVE; 746 + ret = sr_write_cmd(dev, SR_CMD_LED_MUX, led01_mux, led23_mux, 0, NULL); 747 + if (ret < 0) { 748 + netdev_err(dev->net, "set LINK LED failed : %d\n", ret); 749 + goto out; 750 + } 751 + 752 + /* Get the MAC address */ 753 + ret = sr_read_cmd(dev, SR_CMD_READ_NODE_ID, 0, 0, ETH_ALEN, 754 + dev->net->dev_addr); 755 + if (ret < 0) { 756 + netdev_dbg(dev->net, "Failed to read MAC address: %d\n", ret); 757 + return ret; 758 + } 759 + netdev_dbg(dev->net, "mac addr : %pM\n", dev->net->dev_addr); 760 + 761 + /* Initialize MII structure */ 762 + dev->mii.dev = dev->net; 763 + dev->mii.mdio_read = sr_mdio_read; 764 + dev->mii.mdio_write = sr_mdio_write; 765 + dev->mii.phy_id_mask = 0x1f; 766 + dev->mii.reg_num_mask = 0x1f; 767 + dev->mii.phy_id = sr_get_phy_addr(dev); 768 + 769 + dev->net->netdev_ops = &sr9800_netdev_ops; 770 + dev->net->ethtool_ops = &sr9800_ethtool_ops; 771 + 772 + embd_phy = ((dev->mii.phy_id & 0x1f) == 0x10 ? 
1 : 0); 773 + /* Reset the PHY to normal operation mode */ 774 + ret = sr_write_cmd(dev, SR_CMD_SW_PHY_SELECT, embd_phy, 0, 0, NULL); 775 + if (ret < 0) { 776 + netdev_dbg(dev->net, "Select PHY #1 failed: %d\n", ret); 777 + return ret; 778 + } 779 + 780 + /* Init PHY routine */ 781 + ret = sr9800_phy_powerup(dev); 782 + if (ret < 0) 783 + goto out; 784 + 785 + rx_ctl = sr_read_rx_ctl(dev); 786 + netdev_dbg(dev->net, "RX_CTL is 0x%04x after software reset\n", rx_ctl); 787 + ret = sr_write_rx_ctl(dev, 0x0000); 788 + if (ret < 0) 789 + goto out; 790 + 791 + rx_ctl = sr_read_rx_ctl(dev); 792 + netdev_dbg(dev->net, "RX_CTL is 0x%04x setting to 0x0000\n", rx_ctl); 793 + 794 + /* Read PHYID register *AFTER* the PHY was reset properly */ 795 + phyid = sr_get_phyid(dev); 796 + netdev_dbg(dev->net, "PHYID=0x%08x\n", phyid); 797 + 798 + /* medium mode setting */ 799 + ret = sr9800_set_default_mode(dev); 800 + if (ret < 0) 801 + goto out; 802 + 803 + if (dev->udev->speed == USB_SPEED_HIGH) { 804 + ret = sr_write_cmd(dev, SR_CMD_BULKIN_SIZE, 805 + SR9800_BULKIN_SIZE[SR9800_MAX_BULKIN_4K].byte_cnt, 806 + SR9800_BULKIN_SIZE[SR9800_MAX_BULKIN_4K].threshold, 807 + 0, NULL); 808 + if (ret < 0) { 809 + netdev_err(dev->net, "Reset RX_CTL failed: %d\n", ret); 810 + goto out; 811 + } 812 + dev->rx_urb_size = 813 + SR9800_BULKIN_SIZE[SR9800_MAX_BULKIN_4K].size; 814 + } else { 815 + ret = sr_write_cmd(dev, SR_CMD_BULKIN_SIZE, 816 + SR9800_BULKIN_SIZE[SR9800_MAX_BULKIN_2K].byte_cnt, 817 + SR9800_BULKIN_SIZE[SR9800_MAX_BULKIN_2K].threshold, 818 + 0, NULL); 819 + if (ret < 0) { 820 + netdev_err(dev->net, "Reset RX_CTL failed: %d\n", ret); 821 + goto out; 822 + } 823 + dev->rx_urb_size = 824 + SR9800_BULKIN_SIZE[SR9800_MAX_BULKIN_2K].size; 825 + } 826 + netdev_dbg(dev->net, "%s : setting rx_urb_size with : %ld\n", __func__, 827 + dev->rx_urb_size); 828 + return 0; 829 + 830 + out: 831 + return ret; 832 + } 833 + 834 + static const struct driver_info sr9800_driver_info = { 835 + .description = 
"CoreChip SR9800 USB 2.0 Ethernet", 836 + .bind = sr9800_bind, 837 + .status = sr_status, 838 + .link_reset = sr9800_link_reset, 839 + .reset = sr9800_reset, 840 + .flags = DRIVER_FLAG, 841 + .rx_fixup = sr_rx_fixup, 842 + .tx_fixup = sr_tx_fixup, 843 + }; 844 + 845 + static const struct usb_device_id products[] = { 846 + { 847 + USB_DEVICE(0x0fe6, 0x9800), /* SR9800 Device */ 848 + .driver_info = (unsigned long) &sr9800_driver_info, 849 + }, 850 + {}, /* END */ 851 + }; 852 + 853 + MODULE_DEVICE_TABLE(usb, products); 854 + 855 + static struct usb_driver sr_driver = { 856 + .name = DRIVER_NAME, 857 + .id_table = products, 858 + .probe = usbnet_probe, 859 + .suspend = usbnet_suspend, 860 + .resume = usbnet_resume, 861 + .disconnect = usbnet_disconnect, 862 + .supports_autosuspend = 1, 863 + }; 864 + 865 + module_usb_driver(sr_driver); 866 + 867 + MODULE_AUTHOR("Liu Junliang <liujunliang_ljl@163.com>"); 868 + MODULE_VERSION(DRIVER_VERSION); 869 + MODULE_DESCRIPTION("SR9800 USB 2.0 USB2NET Dev : http://www.corechip-sz.com"); 870 + MODULE_LICENSE("GPL");
+202
drivers/net/usb/sr9800.h
··· 1 + /* CoreChip-sz SR9800 one chip USB 2.0 Ethernet Devices 2 + * 3 + * Author : Liu Junliang <liujunliang_ljl@163.com> 4 + * 5 + * This file is licensed under the terms of the GNU General Public License 6 + * version 2. This program is licensed "as is" without any warranty of any 7 + * kind, whether express or implied. 8 + */ 9 + 10 + #ifndef _SR9800_H 11 + #define _SR9800_H 12 + 13 + /* SR9800 spec. command table on Linux Platform */ 14 + 15 + /* command : Software Station Management Control Reg */ 16 + #define SR_CMD_SET_SW_MII 0x06 17 + /* command : PHY Read Reg */ 18 + #define SR_CMD_READ_MII_REG 0x07 19 + /* command : PHY Write Reg */ 20 + #define SR_CMD_WRITE_MII_REG 0x08 21 + /* command : Hardware Station Management Control Reg */ 22 + #define SR_CMD_SET_HW_MII 0x0a 23 + /* command : SROM Read Reg */ 24 + #define SR_CMD_READ_EEPROM 0x0b 25 + /* command : SROM Write Reg */ 26 + #define SR_CMD_WRITE_EEPROM 0x0c 27 + /* command : SROM Write Enable Reg */ 28 + #define SR_CMD_WRITE_ENABLE 0x0d 29 + /* command : SROM Write Disable Reg */ 30 + #define SR_CMD_WRITE_DISABLE 0x0e 31 + /* command : RX Control Read Reg */ 32 + #define SR_CMD_READ_RX_CTL 0x0f 33 + #define SR_RX_CTL_PRO (1 << 0) 34 + #define SR_RX_CTL_AMALL (1 << 1) 35 + #define SR_RX_CTL_SEP (1 << 2) 36 + #define SR_RX_CTL_AB (1 << 3) 37 + #define SR_RX_CTL_AM (1 << 4) 38 + #define SR_RX_CTL_AP (1 << 5) 39 + #define SR_RX_CTL_ARP (1 << 6) 40 + #define SR_RX_CTL_SO (1 << 7) 41 + #define SR_RX_CTL_RH1M (1 << 8) 42 + #define SR_RX_CTL_RH2M (1 << 9) 43 + #define SR_RX_CTL_RH3M (1 << 10) 44 + /* command : RX Control Write Reg */ 45 + #define SR_CMD_WRITE_RX_CTL 0x10 46 + /* command : IPG0/IPG1/IPG2 Control Read Reg */ 47 + #define SR_CMD_READ_IPG012 0x11 48 + /* command : IPG0/IPG1/IPG2 Control Write Reg */ 49 + #define SR_CMD_WRITE_IPG012 0x12 50 + /* command : Node ID Read Reg */ 51 + #define SR_CMD_READ_NODE_ID 0x13 52 + /* command : Node ID Write Reg */ 53 + #define SR_CMD_WRITE_NODE_ID 0x14 54 + /* 
command : Multicast Filter Array Read Reg */ 55 + #define SR_CMD_READ_MULTI_FILTER 0x15 56 + /* command : Multicast Filter Array Write Reg */ 57 + #define SR_CMD_WRITE_MULTI_FILTER 0x16 58 + /* command : Eth/HomePNA PHY Address Reg */ 59 + #define SR_CMD_READ_PHY_ID 0x19 60 + /* command : Medium Status Read Reg */ 61 + #define SR_CMD_READ_MEDIUM_STATUS 0x1a 62 + #define SR_MONITOR_LINK (1 << 1) 63 + #define SR_MONITOR_MAGIC (1 << 2) 64 + #define SR_MONITOR_HSFS (1 << 4) 65 + /* command : Medium Status Write Reg */ 66 + #define SR_CMD_WRITE_MEDIUM_MODE 0x1b 67 + #define SR_MEDIUM_GM (1 << 0) 68 + #define SR_MEDIUM_FD (1 << 1) 69 + #define SR_MEDIUM_AC (1 << 2) 70 + #define SR_MEDIUM_ENCK (1 << 3) 71 + #define SR_MEDIUM_RFC (1 << 4) 72 + #define SR_MEDIUM_TFC (1 << 5) 73 + #define SR_MEDIUM_JFE (1 << 6) 74 + #define SR_MEDIUM_PF (1 << 7) 75 + #define SR_MEDIUM_RE (1 << 8) 76 + #define SR_MEDIUM_PS (1 << 9) 77 + #define SR_MEDIUM_RSV (1 << 10) 78 + #define SR_MEDIUM_SBP (1 << 11) 79 + #define SR_MEDIUM_SM (1 << 12) 80 + /* command : Monitor Mode Status Read Reg */ 81 + #define SR_CMD_READ_MONITOR_MODE 0x1c 82 + /* command : Monitor Mode Status Write Reg */ 83 + #define SR_CMD_WRITE_MONITOR_MODE 0x1d 84 + /* command : GPIO Status Read Reg */ 85 + #define SR_CMD_READ_GPIOS 0x1e 86 + #define SR_GPIO_GPO0EN (1 << 0) /* GPIO0 Output enable */ 87 + #define SR_GPIO_GPO_0 (1 << 1) /* GPIO0 Output value */ 88 + #define SR_GPIO_GPO1EN (1 << 2) /* GPIO1 Output enable */ 89 + #define SR_GPIO_GPO_1 (1 << 3) /* GPIO1 Output value */ 90 + #define SR_GPIO_GPO2EN (1 << 4) /* GPIO2 Output enable */ 91 + #define SR_GPIO_GPO_2 (1 << 5) /* GPIO2 Output value */ 92 + #define SR_GPIO_RESERVED (1 << 6) /* Reserved */ 93 + #define SR_GPIO_RSE (1 << 7) /* Reload serial EEPROM */ 94 + /* command : GPIO Status Write Reg */ 95 + #define SR_CMD_WRITE_GPIOS 0x1f 96 + /* command : Eth PHY Power and Reset Control Reg */ 97 + #define SR_CMD_SW_RESET 0x20 98 + #define SR_SWRESET_CLEAR 0x00 99 + #define 
SR_SWRESET_RR (1 << 0) 100 + #define SR_SWRESET_RT (1 << 1) 101 + #define SR_SWRESET_PRTE (1 << 2) 102 + #define SR_SWRESET_PRL (1 << 3) 103 + #define SR_SWRESET_BZ (1 << 4) 104 + #define SR_SWRESET_IPRL (1 << 5) 105 + #define SR_SWRESET_IPPD (1 << 6) 106 + /* command : Software Interface Selection Status Read Reg */ 107 + #define SR_CMD_SW_PHY_STATUS 0x21 108 + /* command : Software Interface Selection Status Write Reg */ 109 + #define SR_CMD_SW_PHY_SELECT 0x22 110 + /* command : BULK in Buffer Size Reg */ 111 + #define SR_CMD_BULKIN_SIZE 0x2A 112 + /* command : LED_MUX Control Reg */ 113 + #define SR_CMD_LED_MUX 0x70 114 + #define SR_LED_MUX_TX_ACTIVE (1 << 0) 115 + #define SR_LED_MUX_RX_ACTIVE (1 << 1) 116 + #define SR_LED_MUX_COLLISION (1 << 2) 117 + #define SR_LED_MUX_DUP_COL (1 << 3) 118 + #define SR_LED_MUX_DUP (1 << 4) 119 + #define SR_LED_MUX_SPEED (1 << 5) 120 + #define SR_LED_MUX_LINK_ACTIVE (1 << 6) 121 + #define SR_LED_MUX_LINK (1 << 7) 122 + 123 + /* Register Access Flags */ 124 + #define SR_REQ_RD_REG (USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE) 125 + #define SR_REQ_WR_REG (USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE) 126 + 127 + /* Multicast Filter Array size & Max Number */ 128 + #define SR_MCAST_FILTER_SIZE 8 129 + #define SR_MAX_MCAST 64 130 + 131 + /* IPG0/1/2 Default Value */ 132 + #define SR9800_IPG0_DEFAULT 0x15 133 + #define SR9800_IPG1_DEFAULT 0x0c 134 + #define SR9800_IPG2_DEFAULT 0x12 135 + 136 + /* Medium Status Default Mode */ 137 + #define SR9800_MEDIUM_DEFAULT \ 138 + (SR_MEDIUM_FD | SR_MEDIUM_RFC | \ 139 + SR_MEDIUM_TFC | SR_MEDIUM_PS | \ 140 + SR_MEDIUM_AC | SR_MEDIUM_RE) 141 + 142 + /* RX Control Default Setting */ 143 + #define SR_DEFAULT_RX_CTL \ 144 + (SR_RX_CTL_SO | SR_RX_CTL_AB | SR_RX_CTL_RH1M) 145 + 146 + /* EEPROM Magic Number & EEPROM Size */ 147 + #define SR_EEPROM_MAGIC 0xdeadbeef 148 + #define SR9800_EEPROM_LEN 0xff 149 + 150 + /* SR9800 Driver Version and Driver Name */ 151 + #define DRIVER_VERSION 
"11-Nov-2013" 152 + #define DRIVER_NAME "CoreChips" 153 + #define DRIVER_FLAG \ 154 + (FLAG_ETHER | FLAG_FRAMING_AX | FLAG_LINK_INTR | FLAG_MULTI_PACKET) 155 + 156 + /* SR9800 BULKIN Buffer Size */ 157 + #define SR9800_MAX_BULKIN_2K 0 158 + #define SR9800_MAX_BULKIN_4K 1 159 + #define SR9800_MAX_BULKIN_6K 2 160 + #define SR9800_MAX_BULKIN_8K 3 161 + #define SR9800_MAX_BULKIN_16K 4 162 + #define SR9800_MAX_BULKIN_20K 5 163 + #define SR9800_MAX_BULKIN_24K 6 164 + #define SR9800_MAX_BULKIN_32K 7 165 + 166 + struct {unsigned short size, byte_cnt, threshold; } SR9800_BULKIN_SIZE[] = { 167 + /* 2k */ 168 + {2048, 0x8000, 0x8001}, 169 + /* 4k */ 170 + {4096, 0x8100, 0x8147}, 171 + /* 6k */ 172 + {6144, 0x8200, 0x81EB}, 173 + /* 8k */ 174 + {8192, 0x8300, 0x83D7}, 175 + /* 16 */ 176 + {16384, 0x8400, 0x851E}, 177 + /* 20k */ 178 + {20480, 0x8500, 0x8666}, 179 + /* 24k */ 180 + {24576, 0x8600, 0x87AE}, 181 + /* 32k */ 182 + {32768, 0x8700, 0x8A3D}, 183 + }; 184 + 185 + /* This structure cannot exceed sizeof(unsigned long [5]) AKA 20 bytes */ 186 + struct sr_data { 187 + u8 multi_filter[SR_MCAST_FILTER_SIZE]; 188 + u8 mac_addr[ETH_ALEN]; 189 + u8 phymode; 190 + u8 ledmode; 191 + u8 eeprom_len; 192 + }; 193 + 194 + struct sr9800_int_data { 195 + __le16 res1; 196 + u8 link; 197 + __le16 res2; 198 + u8 status; 199 + __le16 res3; 200 + } __packed; 201 + 202 + #endif /* _SR9800_H */
-3
drivers/net/vxlan.c
··· 469 469 /* Look up Ethernet address in forwarding table */ 470 470 static struct vxlan_fdb *__vxlan_find_mac(struct vxlan_dev *vxlan, 471 471 const u8 *mac) 472 - 473 472 { 474 473 struct hlist_head *head = vxlan_fdb_head(vxlan, mac); 475 474 struct vxlan_fdb *f; ··· 595 596 NAPI_GRO_CB(p)->same_flow = 0; 596 597 continue; 597 598 } 598 - goto found; 599 599 } 600 600 601 - found: 602 601 type = eh->h_proto; 603 602 604 603 rcu_read_lock();
-5
drivers/net/wan/dlci.c
··· 71 71 const void *saddr, unsigned len) 72 72 { 73 73 struct frhdr hdr; 74 - struct dlci_local *dlp; 75 74 unsigned int hlen; 76 75 char *dest; 77 - 78 - dlp = netdev_priv(dev); 79 76 80 77 hdr.control = FRAD_I_UI; 81 78 switch (type) ··· 104 107 105 108 static void dlci_receive(struct sk_buff *skb, struct net_device *dev) 106 109 { 107 - struct dlci_local *dlp; 108 110 struct frhdr *hdr; 109 111 int process, header; 110 112 111 - dlp = netdev_priv(dev); 112 113 if (!pskb_may_pull(skb, sizeof(*hdr))) { 113 114 netdev_notice(dev, "invalid data no header\n"); 114 115 dev->stats.rx_errors++;
+1 -1
drivers/net/wireless/ath/ar5523/ar5523.c
··· 1764 1764 AR5523_DEVICE_UG(0x07d1, 0x3a07), /* D-Link / WUA-2340 rev A1 */ 1765 1765 AR5523_DEVICE_UG(0x1690, 0x0712), /* Gigaset / AR5523 */ 1766 1766 AR5523_DEVICE_UG(0x1690, 0x0710), /* Gigaset / SMCWUSBTG */ 1767 - AR5523_DEVICE_UG(0x129b, 0x160c), /* Gigaset / USB stick 108 1767 + AR5523_DEVICE_UG(0x129b, 0x160b), /* Gigaset / USB stick 108 1768 1768 (CyberTAN Technology) */ 1769 1769 AR5523_DEVICE_UG(0x16ab, 0x7801), /* Globalsun / AR5523_1 */ 1770 1770 AR5523_DEVICE_UX(0x16ab, 0x7811), /* Globalsun / AR5523_2 */
+4
drivers/net/wireless/ath/ath9k/ar9003_eeprom.c
··· 5065 5065 break; 5066 5066 } 5067 5067 } 5068 + 5069 + if (is2GHz && !twiceMaxEdgePower) 5070 + twiceMaxEdgePower = 60; 5071 + 5068 5072 return twiceMaxEdgePower; 5069 5073 } 5070 5074
+2
drivers/net/wireless/ath/ath9k/htc.h
··· 262 262 struct ath9k_htc_sta { 263 263 u8 index; 264 264 enum tid_aggr_state tid_state[ATH9K_HTC_MAX_TID]; 265 + struct work_struct rc_update_work; 266 + struct ath9k_htc_priv *htc_priv; 265 267 }; 266 268 267 269 #define ATH9K_HTC_RXBUF 256
+7 -1
drivers/net/wireless/ath/ath9k/htc_drv_init.c
··· 34 34 module_param_named(btcoex_enable, ath9k_htc_btcoex_enable, int, 0444); 35 35 MODULE_PARM_DESC(btcoex_enable, "Enable wifi-BT coexistence"); 36 36 37 + static int ath9k_ps_enable; 38 + module_param_named(ps_enable, ath9k_ps_enable, int, 0444); 39 + MODULE_PARM_DESC(ps_enable, "Enable WLAN PowerSave"); 40 + 37 41 #define CHAN2G(_freq, _idx) { \ 38 42 .center_freq = (_freq), \ 39 43 .hw_value = (_idx), \ ··· 729 725 IEEE80211_HW_SPECTRUM_MGMT | 730 726 IEEE80211_HW_HAS_RATE_CONTROL | 731 727 IEEE80211_HW_RX_INCLUDES_FCS | 732 - IEEE80211_HW_SUPPORTS_PS | 733 728 IEEE80211_HW_PS_NULLFUNC_STACK | 734 729 IEEE80211_HW_REPORTS_TX_ACK_STATUS | 735 730 IEEE80211_HW_MFP_CAPABLE | 736 731 IEEE80211_HW_HOST_BROADCAST_PS_BUFFERING; 732 + 733 + if (ath9k_ps_enable) 734 + hw->flags |= IEEE80211_HW_SUPPORTS_PS; 737 735 738 736 hw->wiphy->interface_modes = 739 737 BIT(NL80211_IFTYPE_STATION) |
+40 -23
drivers/net/wireless/ath/ath9k/htc_drv_main.c
··· 1270 1270 mutex_unlock(&priv->mutex); 1271 1271 } 1272 1272 1273 + static void ath9k_htc_sta_rc_update_work(struct work_struct *work) 1274 + { 1275 + struct ath9k_htc_sta *ista = 1276 + container_of(work, struct ath9k_htc_sta, rc_update_work); 1277 + struct ieee80211_sta *sta = 1278 + container_of((void *)ista, struct ieee80211_sta, drv_priv); 1279 + struct ath9k_htc_priv *priv = ista->htc_priv; 1280 + struct ath_common *common = ath9k_hw_common(priv->ah); 1281 + struct ath9k_htc_target_rate trate; 1282 + 1283 + mutex_lock(&priv->mutex); 1284 + ath9k_htc_ps_wakeup(priv); 1285 + 1286 + memset(&trate, 0, sizeof(struct ath9k_htc_target_rate)); 1287 + ath9k_htc_setup_rate(priv, sta, &trate); 1288 + if (!ath9k_htc_send_rate_cmd(priv, &trate)) 1289 + ath_dbg(common, CONFIG, 1290 + "Supported rates for sta: %pM updated, rate caps: 0x%X\n", 1291 + sta->addr, be32_to_cpu(trate.capflags)); 1292 + else 1293 + ath_dbg(common, CONFIG, 1294 + "Unable to update supported rates for sta: %pM\n", 1295 + sta->addr); 1296 + 1297 + ath9k_htc_ps_restore(priv); 1298 + mutex_unlock(&priv->mutex); 1299 + } 1300 + 1273 1301 static int ath9k_htc_sta_add(struct ieee80211_hw *hw, 1274 1302 struct ieee80211_vif *vif, 1275 1303 struct ieee80211_sta *sta) 1276 1304 { 1277 1305 struct ath9k_htc_priv *priv = hw->priv; 1306 + struct ath9k_htc_sta *ista = (struct ath9k_htc_sta *) sta->drv_priv; 1278 1307 int ret; 1279 1308 1280 1309 mutex_lock(&priv->mutex); 1281 1310 ath9k_htc_ps_wakeup(priv); 1282 1311 ret = ath9k_htc_add_station(priv, vif, sta); 1283 - if (!ret) 1312 + if (!ret) { 1313 + INIT_WORK(&ista->rc_update_work, ath9k_htc_sta_rc_update_work); 1314 + ista->htc_priv = priv; 1284 1315 ath9k_htc_init_rate(priv, sta); 1316 + } 1285 1317 ath9k_htc_ps_restore(priv); 1286 1318 mutex_unlock(&priv->mutex); 1287 1319 ··· 1325 1293 struct ieee80211_sta *sta) 1326 1294 { 1327 1295 struct ath9k_htc_priv *priv = hw->priv; 1328 - struct ath9k_htc_sta *ista; 1296 + struct ath9k_htc_sta *ista = (struct 
ath9k_htc_sta *) sta->drv_priv; 1329 1297 int ret; 1298 + 1299 + cancel_work_sync(&ista->rc_update_work); 1330 1300 1331 1301 mutex_lock(&priv->mutex); 1332 1302 ath9k_htc_ps_wakeup(priv); 1333 - ista = (struct ath9k_htc_sta *) sta->drv_priv; 1334 1303 htc_sta_drain(priv->htc, ista->index); 1335 1304 ret = ath9k_htc_remove_station(priv, vif, sta); 1336 1305 ath9k_htc_ps_restore(priv); ··· 1344 1311 struct ieee80211_vif *vif, 1345 1312 struct ieee80211_sta *sta, u32 changed) 1346 1313 { 1347 - struct ath9k_htc_priv *priv = hw->priv; 1348 - struct ath_common *common = ath9k_hw_common(priv->ah); 1349 - struct ath9k_htc_target_rate trate; 1314 + struct ath9k_htc_sta *ista = (struct ath9k_htc_sta *) sta->drv_priv; 1350 1315 1351 - mutex_lock(&priv->mutex); 1352 - ath9k_htc_ps_wakeup(priv); 1316 + if (!(changed & IEEE80211_RC_SUPP_RATES_CHANGED)) 1317 + return; 1353 1318 1354 - if (changed & IEEE80211_RC_SUPP_RATES_CHANGED) { 1355 - memset(&trate, 0, sizeof(struct ath9k_htc_target_rate)); 1356 - ath9k_htc_setup_rate(priv, sta, &trate); 1357 - if (!ath9k_htc_send_rate_cmd(priv, &trate)) 1358 - ath_dbg(common, CONFIG, 1359 - "Supported rates for sta: %pM updated, rate caps: 0x%X\n", 1360 - sta->addr, be32_to_cpu(trate.capflags)); 1361 - else 1362 - ath_dbg(common, CONFIG, 1363 - "Unable to update supported rates for sta: %pM\n", 1364 - sta->addr); 1365 - } 1366 - 1367 - ath9k_htc_ps_restore(priv); 1368 - mutex_unlock(&priv->mutex); 1319 + schedule_work(&ista->rc_update_work); 1369 1320 } 1370 1321 1371 1322 static int ath9k_htc_conf_tx(struct ieee80211_hw *hw,
+2 -3
drivers/net/wireless/ath/ath9k/hw.c
··· 1316 1316 if (AR_SREV_9300_20_OR_LATER(ah)) 1317 1317 udelay(50); 1318 1318 else if (AR_SREV_9100(ah)) 1319 - udelay(10000); 1319 + mdelay(10); 1320 1320 else 1321 1321 udelay(100); 1322 1322 ··· 2051 2051 2052 2052 REG_SET_BIT(ah, AR_RTC_FORCE_WAKE, 2053 2053 AR_RTC_FORCE_WAKE_EN); 2054 - 2055 2054 if (AR_SREV_9100(ah)) 2056 - udelay(10000); 2055 + mdelay(10); 2057 2056 else 2058 2057 udelay(50); 2059 2058
+7 -1
drivers/net/wireless/ath/ath9k/init.c
··· 57 57 module_param_named(bt_ant_diversity, ath9k_bt_ant_diversity, int, 0444); 58 58 MODULE_PARM_DESC(bt_ant_diversity, "Enable WLAN/BT RX antenna diversity"); 59 59 60 + static int ath9k_ps_enable; 61 + module_param_named(ps_enable, ath9k_ps_enable, int, 0444); 62 + MODULE_PARM_DESC(ps_enable, "Enable WLAN PowerSave"); 63 + 60 64 bool is_ath9k_unloaded; 61 65 /* We use the hw_value as an index into our private channel structure */ 62 66 ··· 907 903 hw->flags = IEEE80211_HW_RX_INCLUDES_FCS | 908 904 IEEE80211_HW_HOST_BROADCAST_PS_BUFFERING | 909 905 IEEE80211_HW_SIGNAL_DBM | 910 - IEEE80211_HW_SUPPORTS_PS | 911 906 IEEE80211_HW_PS_NULLFUNC_STACK | 912 907 IEEE80211_HW_SPECTRUM_MGMT | 913 908 IEEE80211_HW_REPORTS_TX_ACK_STATUS | 914 909 IEEE80211_HW_SUPPORTS_RC_TABLE | 915 910 IEEE80211_HW_SUPPORTS_HT_CCK_RATES; 911 + 912 + if (ath9k_ps_enable) 913 + hw->flags |= IEEE80211_HW_SUPPORTS_PS; 916 914 917 915 if (sc->sc_ah->caps.hw_caps & ATH9K_HW_CAP_HT) { 918 916 hw->flags |= IEEE80211_HW_AMPDU_AGGREGATION;
+5
drivers/net/wireless/iwlwifi/iwl-nvm-parse.c
··· 182 182 183 183 for (ch_idx = 0; ch_idx < IWL_NUM_CHANNELS; ch_idx++) { 184 184 ch_flags = __le16_to_cpup(nvm_ch_flags + ch_idx); 185 + 186 + if (ch_idx >= NUM_2GHZ_CHANNELS && 187 + !data->sku_cap_band_52GHz_enable) 188 + ch_flags &= ~NVM_CHANNEL_VALID; 189 + 185 190 if (!(ch_flags & NVM_CHANNEL_VALID)) { 186 191 IWL_DEBUG_EEPROM(dev, 187 192 "Ch. %d Flags %x [%sGHz] - No traffic\n",
+3 -1
drivers/net/wireless/iwlwifi/mvm/fw-api-scan.h
··· 504 504 * @match_notify: clients waiting for match found notification 505 505 * @pass_match: clients waiting for the results 506 506 * @active_clients: active clients bitmap - enum scan_framework_client 507 + * @any_beacon_notify: clients waiting for match notification without match 507 508 */ 508 509 struct iwl_scan_offload_profile_cfg { 509 510 struct iwl_scan_offload_profile profiles[IWL_SCAN_MAX_PROFILES]; ··· 513 512 u8 match_notify; 514 513 u8 pass_match; 515 514 u8 active_clients; 516 - u8 reserved[3]; 515 + u8 any_beacon_notify; 516 + u8 reserved[2]; 517 517 } __packed; 518 518 519 519 /**
+1 -1
drivers/net/wireless/iwlwifi/mvm/mac80211.c
··· 246 246 else 247 247 hw->wiphy->flags &= ~WIPHY_FLAG_PS_ON_BY_DEFAULT; 248 248 249 - if (mvm->fw->ucode_capa.flags & IWL_UCODE_TLV_FLAGS_SCHED_SCAN) { 249 + if (0 && mvm->fw->ucode_capa.flags & IWL_UCODE_TLV_FLAGS_SCHED_SCAN) { 250 250 hw->wiphy->flags |= WIPHY_FLAG_SUPPORTS_SCHED_SCAN; 251 251 hw->wiphy->max_sched_scan_ssids = PROBE_OPTION_MAX; 252 252 hw->wiphy->max_match_sets = IWL_SCAN_MAX_PROFILES;
+4 -1
drivers/net/wireless/iwlwifi/mvm/scan.c
··· 344 344 345 345 iwl_mvm_scan_fill_ssids(cmd, req, basic_ssid ? 1 : 0); 346 346 347 - cmd->tx_cmd.tx_flags = cpu_to_le32(TX_CMD_FLG_SEQ_CTL); 347 + cmd->tx_cmd.tx_flags = cpu_to_le32(TX_CMD_FLG_SEQ_CTL | 348 + TX_CMD_FLG_BT_DIS); 348 349 cmd->tx_cmd.sta_id = mvm->aux_sta.sta_id; 349 350 cmd->tx_cmd.life_time = cpu_to_le32(TX_CMD_LIFE_TIME_INFINITE); 350 351 cmd->tx_cmd.rate_n_flags = ··· 808 807 profile_cfg->active_clients = SCAN_CLIENT_SCHED_SCAN; 809 808 profile_cfg->pass_match = SCAN_CLIENT_SCHED_SCAN; 810 809 profile_cfg->match_notify = SCAN_CLIENT_SCHED_SCAN; 810 + if (!req->n_match_sets || !req->match_sets[0].ssid.ssid_len) 811 + profile_cfg->any_beacon_notify = SCAN_CLIENT_SCHED_SCAN; 811 812 812 813 for (i = 0; i < req->n_match_sets; i++) { 813 814 profile = &profile_cfg->profiles[i];
+1 -1
drivers/net/wireless/iwlwifi/mvm/sta.c
··· 652 652 { 653 653 struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif); 654 654 static const u8 _baddr[] = {0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF}; 655 - static const u8 *baddr = _baddr; 655 + const u8 *baddr = _baddr; 656 656 657 657 lockdep_assert_held(&mvm->mutex); 658 658
+37 -36
drivers/net/wireless/iwlwifi/mvm/tx.c
··· 659 659 rcu_read_lock(); 660 660 661 661 sta = rcu_dereference(mvm->fw_id_to_mac_id[sta_id]); 662 + /* 663 + * sta can't be NULL otherwise it'd mean that the sta has been freed in 664 + * the firmware while we still have packets for it in the Tx queues. 665 + */ 666 + if (WARN_ON_ONCE(!sta)) 667 + goto out; 662 668 663 - if (!IS_ERR_OR_NULL(sta)) { 669 + if (!IS_ERR(sta)) { 664 670 mvmsta = iwl_mvm_sta_from_mac80211(sta); 665 671 666 672 if (tid != IWL_TID_NON_QOS) { ··· 681 675 spin_unlock_bh(&mvmsta->lock); 682 676 } 683 677 } else { 684 - sta = NULL; 685 678 mvmsta = NULL; 686 679 } 687 680 ··· 688 683 * If the txq is not an AMPDU queue, there is no chance we freed 689 684 * several skbs. Check that out... 690 685 */ 691 - if (txq_id < mvm->first_agg_queue && !WARN_ON(skb_freed > 1) && 692 - atomic_sub_and_test(skb_freed, &mvm->pending_frames[sta_id])) { 693 - if (mvmsta) { 694 - /* 695 - * If there are no pending frames for this STA, notify 696 - * mac80211 that this station can go to sleep in its 697 - * STA table. 698 - */ 699 - if (mvmsta->vif->type == NL80211_IFTYPE_AP) 700 - ieee80211_sta_block_awake(mvm->hw, sta, false); 701 - /* 702 - * We might very well have taken mvmsta pointer while 703 - * the station was being removed. The remove flow might 704 - * have seen a pending_frame (because we didn't take 705 - * the lock) even if now the queues are drained. So make 706 - * really sure now that this the station is not being 707 - * removed. If it is, run the drain worker to remove it. 708 - */ 709 - spin_lock_bh(&mvmsta->lock); 710 - sta = rcu_dereference(mvm->fw_id_to_mac_id[sta_id]); 711 - if (!sta || PTR_ERR(sta) == -EBUSY) { 712 - /* 713 - * Station disappeared in the meantime: 714 - * so we are draining. 
715 - */ 716 - set_bit(sta_id, mvm->sta_drained); 717 - schedule_work(&mvm->sta_drained_wk); 718 - } 719 - spin_unlock_bh(&mvmsta->lock); 720 - } else if (!mvmsta && PTR_ERR(sta) == -EBUSY) { 721 - /* Tx response without STA, so we are draining */ 722 - set_bit(sta_id, mvm->sta_drained); 723 - schedule_work(&mvm->sta_drained_wk); 724 - } 686 + if (txq_id >= mvm->first_agg_queue) 687 + goto out; 688 + 689 + /* We can't free more than one frame at once on a shared queue */ 690 + WARN_ON(skb_freed > 1); 691 + 692 + /* If we have still frames from this STA nothing to do here */ 693 + if (!atomic_sub_and_test(skb_freed, &mvm->pending_frames[sta_id])) 694 + goto out; 695 + 696 + if (mvmsta && mvmsta->vif->type == NL80211_IFTYPE_AP) { 697 + /* 698 + * If there are no pending frames for this STA, notify 699 + * mac80211 that this station can go to sleep in its 700 + * STA table. 701 + * If mvmsta is not NULL, sta is valid. 702 + */ 703 + ieee80211_sta_block_awake(mvm->hw, sta, false); 725 704 } 726 705 706 + if (PTR_ERR(sta) == -EBUSY || PTR_ERR(sta) == -ENOENT) { 707 + /* 708 + * We are draining and this was the last packet - pre_rcu_remove 709 + * has been called already. We might be after the 710 + * synchronize_net already. 711 + * Don't rely on iwl_mvm_rm_sta to see the empty Tx queues. 712 + */ 713 + set_bit(sta_id, mvm->sta_drained); 714 + schedule_work(&mvm->sta_drained_wk); 715 + } 716 + 717 + out: 727 718 rcu_read_unlock(); 728 719 } 729 720
+2
drivers/net/wireless/iwlwifi/mvm/utils.c
··· 411 411 mvm->status, table.valid); 412 412 } 413 413 414 + IWL_ERR(mvm, "Loaded firmware version: %s\n", mvm->fw->fw_version); 415 + 414 416 trace_iwlwifi_dev_ucode_error(trans->dev, table.error_id, table.tsf_low, 415 417 table.data1, table.data2, table.data3, 416 418 table.blink1, table.blink2, table.ilink1,
+6 -1
drivers/net/wireless/iwlwifi/pcie/drv.c
··· 359 359 /* 7265 Series */ 360 360 {IWL_PCI_DEVICE(0x095A, 0x5010, iwl7265_2ac_cfg)}, 361 361 {IWL_PCI_DEVICE(0x095A, 0x5110, iwl7265_2ac_cfg)}, 362 + {IWL_PCI_DEVICE(0x095A, 0x5112, iwl7265_2ac_cfg)}, 363 + {IWL_PCI_DEVICE(0x095A, 0x5100, iwl7265_2ac_cfg)}, 364 + {IWL_PCI_DEVICE(0x095A, 0x510A, iwl7265_2ac_cfg)}, 362 365 {IWL_PCI_DEVICE(0x095B, 0x5310, iwl7265_2ac_cfg)}, 363 366 {IWL_PCI_DEVICE(0x095B, 0x5302, iwl7265_2ac_cfg)}, 364 367 {IWL_PCI_DEVICE(0x095B, 0x5210, iwl7265_2ac_cfg)}, 365 368 {IWL_PCI_DEVICE(0x095A, 0x5012, iwl7265_2ac_cfg)}, 366 - {IWL_PCI_DEVICE(0x095A, 0x500A, iwl7265_2ac_cfg)}, 367 369 {IWL_PCI_DEVICE(0x095A, 0x5410, iwl7265_2ac_cfg)}, 368 370 {IWL_PCI_DEVICE(0x095A, 0x5400, iwl7265_2ac_cfg)}, 369 371 {IWL_PCI_DEVICE(0x095A, 0x1010, iwl7265_2ac_cfg)}, 370 372 {IWL_PCI_DEVICE(0x095A, 0x5000, iwl7265_2n_cfg)}, 373 + {IWL_PCI_DEVICE(0x095A, 0x500A, iwl7265_2n_cfg)}, 371 374 {IWL_PCI_DEVICE(0x095B, 0x5200, iwl7265_2n_cfg)}, 372 375 {IWL_PCI_DEVICE(0x095A, 0x5002, iwl7265_n_cfg)}, 373 376 {IWL_PCI_DEVICE(0x095B, 0x5202, iwl7265_n_cfg)}, 374 377 {IWL_PCI_DEVICE(0x095A, 0x9010, iwl7265_2ac_cfg)}, 378 + {IWL_PCI_DEVICE(0x095A, 0x9012, iwl7265_2ac_cfg)}, 375 379 {IWL_PCI_DEVICE(0x095A, 0x9110, iwl7265_2ac_cfg)}, 380 + {IWL_PCI_DEVICE(0x095A, 0x9112, iwl7265_2ac_cfg)}, 376 381 {IWL_PCI_DEVICE(0x095A, 0x9210, iwl7265_2ac_cfg)}, 377 382 {IWL_PCI_DEVICE(0x095A, 0x9510, iwl7265_2ac_cfg)}, 378 383 {IWL_PCI_DEVICE(0x095A, 0x9310, iwl7265_2ac_cfg)},
+5
drivers/net/wireless/rt2x00/rt2500pci.c
··· 1877 1877 EEPROM_MAC_ADDR_0)); 1878 1878 1879 1879 /* 1880 + * Disable powersaving as default. 1881 + */ 1882 + rt2x00dev->hw->wiphy->flags &= ~WIPHY_FLAG_PS_ON_BY_DEFAULT; 1883 + 1884 + /* 1880 1885 * Initialize hw_mode information. 1881 1886 */ 1882 1887 spec->supported_bands = SUPPORT_BAND_2GHZ;
+5
drivers/net/wireless/rt2x00/rt2500usb.c
··· 1706 1706 IEEE80211_HW_SUPPORTS_PS | 1707 1707 IEEE80211_HW_PS_NULLFUNC_STACK; 1708 1708 1709 + /* 1710 + * Disable powersaving as default. 1711 + */ 1712 + rt2x00dev->hw->wiphy->flags &= ~WIPHY_FLAG_PS_ON_BY_DEFAULT; 1713 + 1709 1714 SET_IEEE80211_DEV(rt2x00dev->hw, rt2x00dev->dev); 1710 1715 SET_IEEE80211_PERM_ADDR(rt2x00dev->hw, 1711 1716 rt2x00_eeprom_addr(rt2x00dev,
+2 -3
drivers/net/wireless/rt2x00/rt2800lib.c
··· 7458 7458 u32 reg; 7459 7459 7460 7460 /* 7461 - * Disable powersaving as default on PCI devices. 7461 + * Disable powersaving as default. 7462 7462 */ 7463 - if (rt2x00_is_pci(rt2x00dev) || rt2x00_is_soc(rt2x00dev)) 7464 - rt2x00dev->hw->wiphy->flags &= ~WIPHY_FLAG_PS_ON_BY_DEFAULT; 7463 + rt2x00dev->hw->wiphy->flags &= ~WIPHY_FLAG_PS_ON_BY_DEFAULT; 7465 7464 7466 7465 /* 7467 7466 * Initialize all hw fields.
+20 -3
drivers/net/wireless/rtl818x/rtl8180/dev.c
··· 107 107 struct rtl8180_priv *priv = dev->priv; 108 108 unsigned int count = 32; 109 109 u8 signal, agc, sq; 110 + dma_addr_t mapping; 110 111 111 112 while (count--) { 112 113 struct rtl8180_rx_desc *entry = &priv->rx_ring[priv->rx_idx]; ··· 128 127 129 128 if (unlikely(!new_skb)) 130 129 goto done; 130 + 131 + mapping = pci_map_single(priv->pdev, 132 + skb_tail_pointer(new_skb), 133 + MAX_RX_SIZE, PCI_DMA_FROMDEVICE); 134 + 135 + if (pci_dma_mapping_error(priv->pdev, mapping)) { 136 + kfree_skb(new_skb); 137 + dev_err(&priv->pdev->dev, "RX DMA map error\n"); 138 + 139 + goto done; 140 + } 131 141 132 142 pci_unmap_single(priv->pdev, 133 143 *((dma_addr_t *)skb->cb), ··· 170 158 171 159 skb = new_skb; 172 160 priv->rx_buf[priv->rx_idx] = skb; 173 - *((dma_addr_t *) skb->cb) = 174 - pci_map_single(priv->pdev, skb_tail_pointer(skb), 175 - MAX_RX_SIZE, PCI_DMA_FROMDEVICE); 161 + *((dma_addr_t *) skb->cb) = mapping; 176 162 } 177 163 178 164 done: ··· 275 265 276 266 mapping = pci_map_single(priv->pdev, skb->data, 277 267 skb->len, PCI_DMA_TODEVICE); 268 + 269 + if (pci_dma_mapping_error(priv->pdev, mapping)) { 270 + kfree_skb(skb); 271 + dev_err(&priv->pdev->dev, "TX DMA mapping error\n"); 272 + return; 273 + 274 + } 278 275 279 276 tx_flags = RTL818X_TX_DESC_FLAG_OWN | RTL818X_TX_DESC_FLAG_FS | 280 277 RTL818X_TX_DESC_FLAG_LS |
+1 -5
drivers/net/xen-netback/common.h
··· 143 143 char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */ 144 144 struct xen_netif_rx_back_ring rx; 145 145 struct sk_buff_head rx_queue; 146 - bool rx_queue_stopped; 147 - /* Set when the RX interrupt is triggered by the frontend. 148 - * The worker thread may need to wake the queue. 149 - */ 150 - bool rx_event; 146 + RING_IDX rx_last_skb_slots; 151 147 152 148 /* This array is allocated seperately as it is large */ 153 149 struct gnttab_copy *grant_copy_op;
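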
-1
drivers/net/xen-netback/interface.c
··· 100 100 { 101 101 struct xenvif *vif = dev_id; 102 102 103 - vif->rx_event = true; 104 103 xenvif_kick_thread(vif); 105 104 106 105 return IRQ_HANDLED;
+6 -10
drivers/net/xen-netback/netback.c
··· 476 476 unsigned long offset; 477 477 struct skb_cb_overlay *sco; 478 478 bool need_to_notify = false; 479 - bool ring_full = false; 480 479 481 480 struct netrx_pending_operations npo = { 482 481 .copy = vif->grant_copy_op, ··· 485 486 skb_queue_head_init(&rxq); 486 487 487 488 while ((skb = skb_dequeue(&vif->rx_queue)) != NULL) { 488 - int max_slots_needed; 489 + RING_IDX max_slots_needed; 489 490 int i; 490 491 491 492 /* We need a cheap worse case estimate for the number of ··· 508 509 if (!xenvif_rx_ring_slots_available(vif, max_slots_needed)) { 509 510 skb_queue_head(&vif->rx_queue, skb); 510 511 need_to_notify = true; 511 - ring_full = true; 512 + vif->rx_last_skb_slots = max_slots_needed; 512 513 break; 513 - } 514 + } else 515 + vif->rx_last_skb_slots = 0; 514 516 515 517 sco = (struct skb_cb_overlay *)skb->cb; 516 518 sco->meta_slots_used = xenvif_gop_skb(skb, &npo); ··· 521 521 } 522 522 523 523 BUG_ON(npo.meta_prod > ARRAY_SIZE(vif->meta)); 524 - 525 - vif->rx_queue_stopped = !npo.copy_prod && ring_full; 526 524 527 525 if (!npo.copy_prod) 528 526 goto done; ··· 1471 1473 1472 1474 static inline int rx_work_todo(struct xenvif *vif) 1473 1475 { 1474 - return (!skb_queue_empty(&vif->rx_queue) && !vif->rx_queue_stopped) || 1475 - vif->rx_event; 1476 + return !skb_queue_empty(&vif->rx_queue) && 1477 + xenvif_rx_ring_slots_available(vif, vif->rx_last_skb_slots); 1476 1478 } 1477 1479 1478 1480 static inline int tx_work_todo(struct xenvif *vif) ··· 1557 1559 1558 1560 if (!skb_queue_empty(&vif->rx_queue)) 1559 1561 xenvif_rx_action(vif); 1560 - 1561 - vif->rx_event = false; 1562 1562 1563 1563 if (skb_queue_empty(&vif->rx_queue) && 1564 1564 netif_queue_stopped(vif->dev))
+4 -1
drivers/net/xen-netfront.c
··· 1832 1832 case XenbusStateReconfiguring: 1833 1833 case XenbusStateReconfigured: 1834 1834 case XenbusStateUnknown: 1835 - case XenbusStateClosed: 1836 1835 break; 1837 1836 1838 1837 case XenbusStateInitWait: ··· 1846 1847 netdev_notify_peers(netdev); 1847 1848 break; 1848 1849 1850 + case XenbusStateClosed: 1851 + if (dev->state == XenbusStateClosed) 1852 + break; 1853 + /* Missed the backend's CLOSING state -- fallthrough */ 1849 1854 case XenbusStateClosing: 1850 1855 xenbus_frontend_closed(dev); 1851 1856 break;
+38
include/linux/can/skb.h
··· 11 11 #define CAN_SKB_H 12 12 13 13 #include <linux/types.h> 14 + #include <linux/skbuff.h> 14 15 #include <linux/can.h> 16 + #include <net/sock.h> 15 17 16 18 /* 17 19 * The struct can_skb_priv is used to transport additional information along ··· 42 40 static inline void can_skb_reserve(struct sk_buff *skb) 43 41 { 44 42 skb_reserve(skb, sizeof(struct can_skb_priv)); 43 + } 44 + 45 + static inline void can_skb_destructor(struct sk_buff *skb) 46 + { 47 + sock_put(skb->sk); 48 + } 49 + 50 + static inline void can_skb_set_owner(struct sk_buff *skb, struct sock *sk) 51 + { 52 + if (sk) { 53 + sock_hold(sk); 54 + skb->destructor = can_skb_destructor; 55 + skb->sk = sk; 56 + } 57 + } 58 + 59 + /* 60 + * returns an unshared skb owned by the original sock to be echo'ed back 61 + */ 62 + static inline struct sk_buff *can_create_echo_skb(struct sk_buff *skb) 63 + { 64 + if (skb_shared(skb)) { 65 + struct sk_buff *nskb = skb_clone(skb, GFP_ATOMIC); 66 + 67 + if (likely(nskb)) { 68 + can_skb_set_owner(nskb, skb->sk); 69 + consume_skb(skb); 70 + return nskb; 71 + } else { 72 + kfree_skb(skb); 73 + return NULL; 74 + } 75 + } 76 + 77 + /* we can assume to have an unshared skb with proper owner */ 78 + return skb; 45 79 } 46 80 47 81 #endif /* CAN_SKB_H */
+2
include/net/datalink.h
··· 15 15 struct list_head node; 16 16 }; 17 17 18 + struct datalink_proto *make_EII_client(void); 19 + void destroy_EII_client(struct datalink_proto *dl); 18 20 #endif
+2
include/net/dn.h
··· 200 200 } 201 201 202 202 unsigned int dn_mss_from_pmtu(struct net_device *dev, int mtu); 203 + void dn_register_sysctl(void); 204 + void dn_unregister_sysctl(void); 203 205 204 206 #define DN_MENUVER_ACC 0x01 205 207 #define DN_MENUVER_USR 0x02
+2
include/net/dn_route.h
··· 20 20 struct sock *sk, int flags); 21 21 int dn_cache_dump(struct sk_buff *skb, struct netlink_callback *cb); 22 22 void dn_rt_cache_flush(int delay); 23 + int dn_route_rcv(struct sk_buff *skb, struct net_device *dev, 24 + struct packet_type *pt, struct net_device *orig_dev); 23 25 24 26 /* Masks for flags field */ 25 27 #define DN_RT_F_PID 0x07 /* Mask for packet type */
+1
include/net/ethoc.h
··· 16 16 struct ethoc_platform_data { 17 17 u8 hwaddr[IFHWADDRLEN]; 18 18 s8 phy_id; 19 + u32 eth_clkfreq; 19 20 }; 20 21 21 22 #endif /* !LINUX_NET_ETHOC_H */
+11
include/net/ipx.h
··· 140 140 } 141 141 142 142 void ipxitf_down(struct ipx_interface *intrfc); 143 + struct ipx_interface *ipxitf_find_using_net(__be32 net); 144 + int ipxitf_send(struct ipx_interface *intrfc, struct sk_buff *skb, char *node); 145 + __be16 ipx_cksum(struct ipxhdr *packet, int length); 146 + int ipxrtr_add_route(__be32 network, struct ipx_interface *intrfc, 147 + unsigned char *node); 148 + void ipxrtr_del_routes(struct ipx_interface *intrfc); 149 + int ipxrtr_route_packet(struct sock *sk, struct sockaddr_ipx *usipx, 150 + struct iovec *iov, size_t len, int noblock); 151 + int ipxrtr_route_skb(struct sk_buff *skb); 152 + struct ipx_route *ipxrtr_lookup(__be32 net); 153 + int ipxrtr_ioctl(unsigned int cmd, void __user *arg); 143 154 144 155 static __inline__ void ipxitf_put(struct ipx_interface *intrfc) 145 156 {
+8
include/net/net_namespace.h
··· 162 162 struct net *get_net_ns_by_pid(pid_t pid); 163 163 struct net *get_net_ns_by_fd(int pid); 164 164 165 + #ifdef CONFIG_SYSCTL 166 + void ipx_register_sysctl(void); 167 + void ipx_unregister_sysctl(void); 168 + #else 169 + #define ipx_register_sysctl() 170 + #define ipx_unregister_sysctl() 171 + #endif 172 + 165 173 #ifdef CONFIG_NET_NS 166 174 void __put_net(struct net *net); 167 175
+2
include/net/netfilter/nf_conntrack.h
··· 284 284 extern unsigned int nf_conntrack_hash_rnd; 285 285 void init_nf_conntrack_hash_rnd(void); 286 286 287 + void nf_conntrack_tmpl_insert(struct net *net, struct nf_conn *tmpl); 288 + 287 289 #define NF_CT_STAT_INC(net, count) __this_cpu_inc((net)->ct.stat->count) 288 290 #define NF_CT_STAT_INC_ATOMIC(net, count) this_cpu_inc((net)->ct.stat->count) 289 291
+5 -4
include/net/netfilter/nf_tables.h
··· 252 252 * @owner: module reference 253 253 * @policy: netlink attribute policy 254 254 * @maxattr: highest netlink attribute number 255 + * @family: address family for AF-specific types 255 256 */ 256 257 struct nft_expr_type { 257 258 const struct nft_expr_ops *(*select_ops)(const struct nft_ctx *, ··· 263 262 struct module *owner; 264 263 const struct nla_policy *policy; 265 264 unsigned int maxattr; 265 + u8 family; 266 266 }; 267 267 268 268 /** ··· 322 320 * struct nft_rule - nf_tables rule 323 321 * 324 322 * @list: used internally 325 - * @rcu_head: used internally for rcu 326 323 * @handle: rule handle 327 324 * @genmask: generation mask 328 325 * @dlen: length of expression data ··· 329 328 */ 330 329 struct nft_rule { 331 330 struct list_head list; 332 - struct rcu_head rcu_head; 333 331 u64 handle:46, 334 332 genmask:2, 335 333 dlen:16; ··· 389 389 * 390 390 * @rules: list of rules in the chain 391 391 * @list: used internally 392 - * @rcu_head: used internally 393 392 * @net: net namespace that this chain belongs to 394 393 * @table: table that this chain belongs to 395 394 * @handle: chain handle ··· 400 401 struct nft_chain { 401 402 struct list_head rules; 402 403 struct list_head list; 403 - struct rcu_head rcu_head; 404 404 struct net *net; 405 405 struct nft_table *table; 406 406 u64 handle; ··· 526 528 527 529 #define MODULE_ALIAS_NFT_CHAIN(family, name) \ 528 530 MODULE_ALIAS("nft-chain-" __stringify(family) "-" name) 531 + 532 + #define MODULE_ALIAS_NFT_AF_EXPR(family, name) \ 533 + MODULE_ALIAS("nft-expr-" __stringify(family) "-" name) 529 534 530 535 #define MODULE_ALIAS_NFT_EXPR(name) \ 531 536 MODULE_ALIAS("nft-expr-" name)
+25
include/net/netfilter/nft_reject.h
··· 1 + #ifndef _NFT_REJECT_H_ 2 + #define _NFT_REJECT_H_ 3 + 4 + struct nft_reject { 5 + enum nft_reject_types type:8; 6 + u8 icmp_code; 7 + }; 8 + 9 + extern const struct nla_policy nft_reject_policy[]; 10 + 11 + int nft_reject_init(const struct nft_ctx *ctx, 12 + const struct nft_expr *expr, 13 + const struct nlattr * const tb[]); 14 + 15 + int nft_reject_dump(struct sk_buff *skb, const struct nft_expr *expr); 16 + 17 + void nft_reject_ipv4_eval(const struct nft_expr *expr, 18 + struct nft_data data[NFT_REG_MAX + 1], 19 + const struct nft_pktinfo *pkt); 20 + 21 + void nft_reject_ipv6_eval(const struct nft_expr *expr, 22 + struct nft_data data[NFT_REG_MAX + 1], 23 + const struct nft_pktinfo *pkt); 24 + 25 + #endif
+7 -16
include/uapi/linux/in6.h
··· 128 128 * IPV6 extension headers 129 129 */ 130 130 #if __UAPI_DEF_IPPROTO_V6 131 - enum { 132 - IPPROTO_HOPOPTS = 0, /* IPv6 hop-by-hop options */ 133 - #define IPPROTO_HOPOPTS IPPROTO_HOPOPTS 134 - IPPROTO_ROUTING = 43, /* IPv6 routing header */ 135 - #define IPPROTO_ROUTING IPPROTO_ROUTING 136 - IPPROTO_FRAGMENT = 44, /* IPv6 fragmentation header */ 137 - #define IPPROTO_FRAGMENT IPPROTO_FRAGMENT 138 - IPPROTO_ICMPV6 = 58, /* ICMPv6 */ 139 - #define IPPROTO_ICMPV6 IPPROTO_ICMPV6 140 - IPPROTO_NONE = 59, /* IPv6 no next header */ 141 - #define IPPROTO_NONE IPPROTO_NONE 142 - IPPROTO_DSTOPTS = 60, /* IPv6 destination options */ 143 - #define IPPROTO_DSTOPTS IPPROTO_DSTOPTS 144 - IPPROTO_MH = 135, /* IPv6 mobility header */ 145 - #define IPPROTO_MH IPPROTO_MH 146 - }; 131 + #define IPPROTO_HOPOPTS 0 /* IPv6 hop-by-hop options */ 132 + #define IPPROTO_ROUTING 43 /* IPv6 routing header */ 133 + #define IPPROTO_FRAGMENT 44 /* IPv6 fragmentation header */ 134 + #define IPPROTO_ICMPV6 58 /* ICMPv6 */ 135 + #define IPPROTO_NONE 59 /* IPv6 no next header */ 136 + #define IPPROTO_DSTOPTS 60 /* IPv6 destination options */ 137 + #define IPPROTO_MH 135 /* IPv6 mobility header */ 147 138 #endif /* __UAPI_DEF_IPPROTO_V6 */ 148 139 149 140 /*
+1 -1
net/9p/client.c
··· 204 204 return ret; 205 205 } 206 206 207 - struct p9_fcall *p9_fcall_alloc(int alloc_msize) 207 + static struct p9_fcall *p9_fcall_alloc(int alloc_msize) 208 208 { 209 209 struct p9_fcall *fc; 210 210 fc = kmalloc(sizeof(struct p9_fcall) + alloc_msize, GFP_NOFS);
+4 -1
net/9p/trans_virtio.c
··· 340 340 int count = nr_pages; 341 341 while (nr_pages) { 342 342 s = rest_of_page(data); 343 - pages[index++] = kmap_to_page(data); 343 + if (is_vmalloc_addr(data)) 344 + pages[index++] = vmalloc_to_page(data); 345 + else 346 + pages[index++] = kmap_to_page(data); 344 347 data += s; 345 348 nr_pages--; 346 349 }
+33 -29
net/bridge/br_device.c
··· 187 187 188 188 spin_lock_bh(&br->lock); 189 189 if (!ether_addr_equal(dev->dev_addr, addr->sa_data)) { 190 - memcpy(dev->dev_addr, addr->sa_data, ETH_ALEN); 191 - br_fdb_change_mac_address(br, addr->sa_data); 190 + /* Mac address will be changed in br_stp_change_bridge_id(). */ 192 191 br_stp_change_bridge_id(br, addr->sa_data); 193 192 } 194 193 spin_unlock_bh(&br->lock); ··· 225 226 br_netpoll_disable(p); 226 227 } 227 228 228 - static int br_netpoll_setup(struct net_device *dev, struct netpoll_info *ni, 229 - gfp_t gfp) 230 - { 231 - struct net_bridge *br = netdev_priv(dev); 232 - struct net_bridge_port *p; 233 - int err = 0; 234 - 235 - list_for_each_entry(p, &br->port_list, list) { 236 - if (!p->dev) 237 - continue; 238 - err = br_netpoll_enable(p, gfp); 239 - if (err) 240 - goto fail; 241 - } 242 - 243 - out: 244 - return err; 245 - 246 - fail: 247 - br_netpoll_cleanup(dev); 248 - goto out; 249 - } 250 - 251 - int br_netpoll_enable(struct net_bridge_port *p, gfp_t gfp) 229 + static int __br_netpoll_enable(struct net_bridge_port *p, gfp_t gfp) 252 230 { 253 231 struct netpoll *np; 254 232 int err; 255 - 256 - if (!p->br->dev->npinfo) 257 - return 0; 258 233 259 234 np = kzalloc(sizeof(*p->np), gfp); 260 235 if (!np) ··· 242 269 243 270 p->np = np; 244 271 return err; 272 + } 273 + 274 + int br_netpoll_enable(struct net_bridge_port *p, gfp_t gfp) 275 + { 276 + if (!p->br->dev->npinfo) 277 + return 0; 278 + 279 + return __br_netpoll_enable(p, gfp); 280 + } 281 + 282 + static int br_netpoll_setup(struct net_device *dev, struct netpoll_info *ni, 283 + gfp_t gfp) 284 + { 285 + struct net_bridge *br = netdev_priv(dev); 286 + struct net_bridge_port *p; 287 + int err = 0; 288 + 289 + list_for_each_entry(p, &br->port_list, list) { 290 + if (!p->dev) 291 + continue; 292 + err = __br_netpoll_enable(p, gfp); 293 + if (err) 294 + goto fail; 295 + } 296 + 297 + out: 298 + return err; 299 + 300 + fail: 301 + br_netpoll_cleanup(dev); 302 + goto out; 245 303 } 246 304 247 305 void br_netpoll_disable(struct net_bridge_port *p)
+89 -48
net/bridge/br_fdb.c
··· 27 27 #include "br_private.h" 28 28 29 29 static struct kmem_cache *br_fdb_cache __read_mostly; 30 + static struct net_bridge_fdb_entry *fdb_find(struct hlist_head *head, 31 + const unsigned char *addr, 32 + __u16 vid); 30 33 static int fdb_insert(struct net_bridge *br, struct net_bridge_port *source, 31 34 const unsigned char *addr, u16 vid); 32 35 static void fdb_notify(struct net_bridge *br, ··· 92 89 call_rcu(&f->rcu, fdb_rcu_free); 93 90 } 94 91 92 + /* Delete a local entry if no other port had the same address. */ 93 + static void fdb_delete_local(struct net_bridge *br, 94 + const struct net_bridge_port *p, 95 + struct net_bridge_fdb_entry *f) 96 + { 97 + const unsigned char *addr = f->addr.addr; 98 + u16 vid = f->vlan_id; 99 + struct net_bridge_port *op; 100 + 101 + /* Maybe another port has same hw addr? */ 102 + list_for_each_entry(op, &br->port_list, list) { 103 + if (op != p && ether_addr_equal(op->dev->dev_addr, addr) && 104 + (!vid || nbp_vlan_find(op, vid))) { 105 + f->dst = op; 106 + f->added_by_user = 0; 107 + return; 108 + } 109 + } 110 + 111 + /* Maybe bridge device has same hw addr? */ 112 + if (p && ether_addr_equal(br->dev->dev_addr, addr) && 113 + (!vid || br_vlan_find(br, vid))) { 114 + f->dst = NULL; 115 + f->added_by_user = 0; 116 + return; 117 + } 118 + 119 + fdb_delete(br, f); 120 + } 121 + 122 + void br_fdb_find_delete_local(struct net_bridge *br, 123 + const struct net_bridge_port *p, 124 + const unsigned char *addr, u16 vid) 125 + { 126 + struct hlist_head *head = &br->hash[br_mac_hash(addr, vid)]; 127 + struct net_bridge_fdb_entry *f; 128 + 129 + spin_lock_bh(&br->hash_lock); 130 + f = fdb_find(head, addr, vid); 131 + if (f && f->is_local && !f->added_by_user && f->dst == p) 132 + fdb_delete_local(br, p, f); 133 + spin_unlock_bh(&br->hash_lock); 134 + } 135 + 95 136 void br_fdb_changeaddr(struct net_bridge_port *p, const unsigned char *newaddr) 96 137 { 97 138 struct net_bridge *br = p->br; 98 - bool no_vlan = (nbp_get_vlan_info(p) == NULL) ? true : false; 139 + struct net_port_vlans *pv = nbp_get_vlan_info(p); 140 + bool no_vlan = !pv; 99 141 int i; 142 + u16 vid; 100 143 101 144 spin_lock_bh(&br->hash_lock); 102 145 ··· 153 104 struct net_bridge_fdb_entry *f; 154 105 155 106 f = hlist_entry(h, struct net_bridge_fdb_entry, hlist); 156 - if (f->dst == p && f->is_local) { 157 - /* maybe another port has same hw addr? */ 158 - struct net_bridge_port *op; 159 - u16 vid = f->vlan_id; 160 - list_for_each_entry(op, &br->port_list, list) { 161 - if (op != p && 162 - ether_addr_equal(op->dev->dev_addr, 163 - f->addr.addr) && 164 - nbp_vlan_find(op, vid)) { 165 - f->dst = op; 166 - goto insert; 167 - } 168 - } 169 - 107 + if (f->dst == p && f->is_local && !f->added_by_user) { 170 108 /* delete old one */ 171 - fdb_delete(br, f); 172 - insert: 173 - /* insert new address, may fail if invalid 174 - * address or dup. 175 - */ 176 - fdb_insert(br, p, newaddr, vid); 109 + fdb_delete_local(br, p, f); 177 110 178 111 /* if this port has no vlan information 179 112 * configured, we can safely be done at 180 113 * this point. 181 114 */ 182 115 if (no_vlan) 183 - goto done; 116 + goto insert; 184 117 } 185 118 } 186 119 } 120 + 121 + insert: 122 + /* insert new address, may fail if invalid address or dup. */ 123 + fdb_insert(br, p, newaddr, 0); 124 + 125 + if (no_vlan) 126 + goto done; 127 + 128 + /* Now add entries for every VLAN configured on the port. 129 + * This function runs under RTNL so the bitmap will not change 130 + * from under us. 131 + */ 132 + for_each_set_bit(vid, pv->vlan_bitmap, VLAN_N_VID) 133 + fdb_insert(br, p, newaddr, vid); 187 134 188 135 done: 189 136 spin_unlock_bh(&br->hash_lock); ··· 191 146 struct net_port_vlans *pv; 192 147 u16 vid = 0; 193 148 149 + spin_lock_bh(&br->hash_lock); 150 + 194 151 /* If old entry was unassociated with any port, then delete it. */ 195 152 f = __br_fdb_get(br, br->dev->dev_addr, 0); 196 153 if (f && f->is_local && !f->dst) 197 - fdb_delete(br, f); 154 + fdb_delete_local(br, NULL, f); 198 155 199 156 fdb_insert(br, NULL, newaddr, 0); 200 157 ··· 206 159 */ 207 160 pv = br_get_vlan_info(br); 208 161 if (!pv) 209 - return; 162 + goto out; 210 163 211 164 for_each_set_bit_from(vid, pv->vlan_bitmap, VLAN_N_VID) { 212 165 f = __br_fdb_get(br, br->dev->dev_addr, vid); 213 166 if (f && f->is_local && !f->dst) 214 - fdb_delete(br, f); 167 + fdb_delete_local(br, NULL, f); 215 168 fdb_insert(br, NULL, newaddr, vid); 216 169 } 170 + out: 171 + spin_unlock_bh(&br->hash_lock); 217 172 } 218 173 219 174 void br_fdb_cleanup(unsigned long _data) ··· 284 235 285 236 if (f->is_static && !do_all) 286 237 continue; 287 - /* 288 - * if multiple ports all have the same device address 289 - * then when one port is deleted, assign 290 - * the local entry to other port 291 - */ 292 - if (f->is_local) { 293 - struct net_bridge_port *op; 294 - list_for_each_entry(op, &br->port_list, list) { 295 - if (op != p && 296 - ether_addr_equal(op->dev->dev_addr, 297 - f->addr.addr)) { 298 - f->dst = op; 299 - goto skip_delete; 300 - } 301 - } 302 - } 303 238 304 - fdb_delete(br, f); 305 - skip_delete: ; 239 + if (f->is_local) 240 + fdb_delete_local(br, p, f); 241 + else 242 + fdb_delete(br, f); 306 243 } 307 244 } 308 245 spin_unlock_bh(&br->hash_lock); ··· 432 397 fdb->vlan_id = vid; 433 398 fdb->is_local = 0; 434 399 fdb->is_static = 0; 400 + fdb->added_by_user = 0; 435 401 fdb->updated = fdb->used = jiffies; 436 402 hlist_add_head_rcu(&fdb->hlist, head); 437 403 } ··· 483 447 } 484 448 485 449 void br_fdb_update(struct net_bridge *br, struct net_bridge_port *source, 486 - const unsigned char *addr, u16 vid) 450 + const unsigned char *addr, u16 vid, bool added_by_user) 487 451 { 488 452 struct hlist_head *head = &br->hash[br_mac_hash(addr, vid)]; 489 453 struct net_bridge_fdb_entry *fdb; ··· 509 473 /* fastpath: update of existing entry */ 510 474 fdb->dst = source; 511 475 fdb->updated = jiffies; 476 + if (unlikely(added_by_user)) 477 + fdb->added_by_user = 1; 512 478 } 513 479 } else { 514 480 spin_lock(&br->hash_lock); 515 481 if (likely(!fdb_find(head, addr, vid))) { 516 482 fdb = fdb_create(head, source, addr, vid); 517 - if (fdb) 483 + if (fdb) { 484 + if (unlikely(added_by_user)) 485 + fdb->added_by_user = 1; 518 486 fdb_notify(br, fdb, RTM_NEWNEIGH); 487 + } 519 488 } 520 489 /* else we lose race and someone else inserts 521 490 * it first, don't bother updating ··· 688 647 689 648 modified = true; 690 649 } 650 + fdb->added_by_user = 1; 691 651 692 652 fdb->used = jiffies; 693 653 if (modified) { ··· 706 664 707 665 if (ndm->ndm_flags & NTF_USE) { 708 666 rcu_read_lock(); 709 - br_fdb_update(p->br, p, addr, vid); 667 + br_fdb_update(p->br, p, addr, vid, true); 710 668 rcu_read_unlock(); 711 669 } else { 712 670 spin_lock_bh(&p->br->hash_lock); ··· 791 749 return err; 792 750 } 793 751 794 - int fdb_delete_by_addr(struct net_bridge *br, const u8 *addr, 795 - u16 vlan) 752 + static int fdb_delete_by_addr(struct net_bridge *br, const u8 *addr, u16 vlan) 796 753 { 797 754 struct hlist_head *head = &br->hash[br_mac_hash(addr, vlan)]; 798 755 struct net_bridge_fdb_entry *fdb;
+3 -3
net/bridge/br_if.c
··· 389 389 if (br->dev->needed_headroom < dev->needed_headroom) 390 390 br->dev->needed_headroom = dev->needed_headroom; 391 391 392 + if (br_fdb_insert(br, p, dev->dev_addr, 0)) 393 + netdev_err(dev, "failed insert local address bridge forwarding table\n"); 394 + 392 395 spin_lock_bh(&br->lock); 393 396 changed_addr = br_stp_recalculate_bridge_id(br); 394 397 ··· 406 403 call_netdevice_notifiers(NETDEV_CHANGEADDR, br->dev); 407 404 408 405 dev_set_mtu(br->dev, br_min_mtu(br)); 409 - 410 - if (br_fdb_insert(br, p, dev->dev_addr, 0)) 411 - netdev_err(dev, "failed insert local address bridge forwarding table\n"); 412 406 413 407 kobject_uevent(&p->kobj, KOBJ_ADD); 414 408
+2 -2
net/bridge/br_input.c
··· 77 77 /* insert into forwarding database after filtering to avoid spoofing */ 78 78 br = p->br; 79 79 if (p->flags & BR_LEARNING) 80 - br_fdb_update(br, p, eth_hdr(skb)->h_source, vid); 80 + br_fdb_update(br, p, eth_hdr(skb)->h_source, vid, false); 81 81 82 82 if (!is_broadcast_ether_addr(dest) && is_multicast_ether_addr(dest) && 83 83 br_multicast_rcv(br, p, skb, vid)) ··· 148 148 149 149 br_vlan_get_tag(skb, &vid); 150 150 if (p->flags & BR_LEARNING) 151 - br_fdb_update(p->br, p, eth_hdr(skb)->h_source, vid); 151 + br_fdb_update(p->br, p, eth_hdr(skb)->h_source, vid, false); 152 152 return 0; /* process further */ 153 153 } 154 154
+11 -2
net/bridge/br_private.h
··· 104 104 mac_addr addr; 105 105 unsigned char is_local; 106 106 unsigned char is_static; 107 + unsigned char added_by_user; 107 108 __u16 vlan_id; 108 109 }; 109 110 ··· 371 370 int br_fdb_init(void); 372 371 void br_fdb_fini(void); 373 372 void br_fdb_flush(struct net_bridge *br); 373 + void br_fdb_find_delete_local(struct net_bridge *br, 374 + const struct net_bridge_port *p, 375 + const unsigned char *addr, u16 vid); 374 376 void br_fdb_changeaddr(struct net_bridge_port *p, const unsigned char *newaddr); 375 377 void br_fdb_change_mac_address(struct net_bridge *br, const u8 *newaddr); 376 378 void br_fdb_cleanup(unsigned long arg); ··· 387 383 int br_fdb_insert(struct net_bridge *br, struct net_bridge_port *source, 388 384 const unsigned char *addr, u16 vid); 389 385 void br_fdb_update(struct net_bridge *br, struct net_bridge_port *source, 390 - const unsigned char *addr, u16 vid); 391 - int fdb_delete_by_addr(struct net_bridge *br, const u8 *addr, u16 vid); 386 + const unsigned char *addr, u16 vid, bool added_by_user); 392 387 393 388 int br_fdb_delete(struct ndmsg *ndm, struct nlattr *tb[], 394 389 struct net_device *dev, const unsigned char *addr); ··· 587 584 int br_vlan_add(struct net_bridge *br, u16 vid, u16 flags); 588 585 int br_vlan_delete(struct net_bridge *br, u16 vid); 589 586 void br_vlan_flush(struct net_bridge *br); 587 + bool br_vlan_find(struct net_bridge *br, u16 vid); 590 588 int br_vlan_filter_toggle(struct net_bridge *br, unsigned long val); 591 589 int nbp_vlan_add(struct net_bridge_port *port, u16 vid, u16 flags); 592 590 int nbp_vlan_delete(struct net_bridge_port *port, u16 vid); ··· 667 663 668 664 static inline void br_vlan_flush(struct net_bridge *br) 669 665 { 666 + } 667 + 668 + static inline bool br_vlan_find(struct net_bridge *br, u16 vid) 669 + { 670 + return false; 670 671 } 671 672 672 673 static inline int nbp_vlan_add(struct net_bridge_port *port, u16 vid, u16 flags)
+2
net/bridge/br_stp_if.c
··· 194 194 195 195 wasroot = br_is_root_bridge(br); 196 196 197 + br_fdb_change_mac_address(br, addr); 198 + 197 199 memcpy(oldaddr, br->bridge_id.addr, ETH_ALEN); 198 200 memcpy(br->bridge_id.addr, addr, ETH_ALEN); 199 201 memcpy(br->dev->dev_addr, addr, ETH_ALEN);
+21 -6
net/bridge/br_vlan.c
··· 275 275 if (!pv) 276 276 return -EINVAL; 277 277 278 - spin_lock_bh(&br->hash_lock); 279 - fdb_delete_by_addr(br, br->dev->dev_addr, vid); 280 - spin_unlock_bh(&br->hash_lock); 278 + br_fdb_find_delete_local(br, NULL, br->dev->dev_addr, vid); 281 279 282 280 __vlan_del(pv, vid); 283 281 return 0; ··· 291 293 return; 292 294 293 295 __vlan_flush(pv); 296 + } 297 + 298 + bool br_vlan_find(struct net_bridge *br, u16 vid) 299 + { 300 + struct net_port_vlans *pv; 301 + bool found = false; 302 + 303 + rcu_read_lock(); 304 + pv = rcu_dereference(br->vlan_info); 305 + 306 + if (!pv) 307 + goto out; 308 + 309 + if (test_bit(vid, pv->vlan_bitmap)) 310 + found = true; 311 + 312 + out: 313 + rcu_read_unlock(); 314 + return found; 294 315 } 295 316 296 317 int br_vlan_filter_toggle(struct net_bridge *br, unsigned long val) ··· 376 359 if (!pv) 377 360 return -EINVAL; 378 361 379 - spin_lock_bh(&port->br->hash_lock); 380 - fdb_delete_by_addr(port->br, port->dev->dev_addr, vid); 381 - spin_unlock_bh(&port->br->hash_lock); 362 + br_fdb_find_delete_local(port->br, port, port->dev->dev_addr, vid); 382 363 383 364 return __vlan_del(pv, vid); 384 365 }
+1
net/caif/caif_dev.c
··· 22 22 #include <net/pkt_sched.h> 23 23 #include <net/caif/caif_device.h> 24 24 #include <net/caif/caif_layer.h> 25 + #include <net/caif/caif_dev.h> 25 26 #include <net/caif/cfpkt.h> 26 27 #include <net/caif/cfcnfg.h> 27 28 #include <net/caif/cfserl.h>
+1
net/caif/cfsrvl.c
··· 15 15 #include <net/caif/caif_layer.h> 16 16 #include <net/caif/cfsrvl.h> 17 17 #include <net/caif/cfpkt.h> 18 + #include <net/caif/caif_dev.h> 18 19 19 20 #define SRVL_CTRL_PKT_SIZE 1 20 21 #define SRVL_FLOW_OFF 0x81
+2 -1
net/can/af_can.c
··· 57 57 #include <linux/skbuff.h> 58 58 #include <linux/can.h> 59 59 #include <linux/can/core.h> 60 + #include <linux/can/skb.h> 60 61 #include <linux/ratelimit.h> 61 62 #include <net/net_namespace.h> 62 63 #include <net/sock.h> ··· 291 290 return -ENOMEM; 292 291 } 293 292 294 - newskb->sk = skb->sk; 293 + can_skb_set_owner(newskb, skb->sk); 295 294 newskb->ip_summed = CHECKSUM_UNNECESSARY; 296 295 newskb->pkt_type = PACKET_BROADCAST; 297 296 }
+2 -2
net/can/bcm.c
··· 268 268 269 269 /* send with loopback */ 270 270 skb->dev = dev; 271 - skb->sk = op->sk; 271 + can_skb_set_owner(skb, op->sk); 272 272 can_send(skb, 1); 273 273 274 274 /* update statistics */ ··· 1223 1223 1224 1224 can_skb_prv(skb)->ifindex = dev->ifindex; 1225 1225 skb->dev = dev; 1226 - skb->sk = sk; 1226 + can_skb_set_owner(skb, sk); 1227 1227 err = can_send(skb, 1); /* send with loopback */ 1228 1228 dev_put(dev); 1229 1229
+1
net/can/raw.c
··· 715 715 716 716 skb->dev = dev; 717 717 skb->sk = sk; 718 + skb->priority = sk->sk_priority; 718 719 719 720 err = can_send(skb, ro->loopback); 720 721
+3 -3
net/core/dev.c
··· 2803 2803 * the BH enable code must have IRQs enabled so that it will not deadlock. 2804 2804 * --BLG 2805 2805 */ 2806 - int __dev_queue_xmit(struct sk_buff *skb, void *accel_priv) 2806 + static int __dev_queue_xmit(struct sk_buff *skb, void *accel_priv) 2807 2807 { 2808 2808 struct net_device *dev = skb->dev; 2809 2809 struct netdev_queue *txq; ··· 4637 4637 } 4638 4638 EXPORT_SYMBOL(netdev_master_upper_dev_get_rcu); 4639 4639 4640 - int netdev_adjacent_sysfs_add(struct net_device *dev, 4640 + static int netdev_adjacent_sysfs_add(struct net_device *dev, 4641 4641 struct net_device *adj_dev, 4642 4642 struct list_head *dev_list) 4643 4643 { ··· 4647 4647 return sysfs_create_link(&(dev->dev.kobj), &(adj_dev->dev.kobj), 4648 4648 linkname); 4649 4649 } 4650 - void netdev_adjacent_sysfs_del(struct net_device *dev, 4650 + static void netdev_adjacent_sysfs_del(struct net_device *dev, 4651 4651 char *name, 4652 4652 struct list_head *dev_list) 4653 4653 {
+7
net/core/fib_rules.c
··· 745 745 attach_rules(&ops->rules_list, dev); 746 746 break; 747 747 748 + case NETDEV_CHANGENAME: 749 + list_for_each_entry(ops, &net->rules_ops, list) { 750 + detach_rules(&ops->rules_list, dev); 751 + attach_rules(&ops->rules_list, dev); 752 + } 753 + break; 754 + 748 755 case NETDEV_UNREGISTER: 749 756 list_for_each_entry(ops, &net->rules_ops, list) 750 757 detach_rules(&ops->rules_list, dev);
+3 -1
net/core/netpoll.c
··· 948 948 { 949 949 char *cur=opt, *delim; 950 950 int ipv6; 951 + bool ipversion_set = false; 951 952 952 953 if (*cur != '@') { 953 954 if ((delim = strchr(cur, '@')) == NULL) ··· 961 960 cur++; 962 961 963 962 if (*cur != '/') { 963 + ipversion_set = true; 964 964 if ((delim = strchr(cur, '/')) == NULL) 965 965 goto parse_failed; 966 966 *delim = 0; ··· 1004 1002 ipv6 = netpoll_parse_ip_addr(cur, &np->remote_ip); 1005 1003 if (ipv6 < 0) 1006 1004 goto parse_failed; 1007 - else if (np->ipv6 != (bool)ipv6) 1005 + else if (ipversion_set && np->ipv6 != (bool)ipv6) 1008 1006 goto parse_failed; 1009 1007 else 1010 1008 np->ipv6 = (bool)ipv6;
+1 -1
net/core/rtnetlink.c
··· 374 374 if (!master_dev) 375 375 return 0; 376 376 ops = master_dev->rtnl_link_ops; 377 - if (!ops->get_slave_size) 377 + if (!ops || !ops->get_slave_size) 378 378 return 0; 379 379 /* IFLA_INFO_SLAVE_DATA + nested data */ 380 380 return nla_total_size(sizeof(struct nlattr)) +
+4 -2
net/core/sock.c
··· 1775 1775 while (order) { 1776 1776 if (npages >= 1 << order) { 1777 1777 page = alloc_pages(sk->sk_allocation | 1778 - __GFP_COMP | __GFP_NOWARN, 1778 + __GFP_COMP | 1779 + __GFP_NOWARN | 1780 + __GFP_NORETRY, 1779 1781 order); 1780 1782 if (page) 1781 1783 goto fill_page; ··· 1847 1845 gfp_t gfp = prio; 1848 1846 1849 1847 if (order) 1850 - gfp |= __GFP_COMP | __GFP_NOWARN; 1848 + gfp |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY; 1851 1849 pfrag->page = alloc_pages(gfp, order); 1852 1850 if (likely(pfrag->page)) { 1853 1851 pfrag->offset = 0;
-5
net/decnet/af_decnet.c
··· 2104 2104 .notifier_call = dn_device_event, 2105 2105 }; 2106 2106 2107 - extern int dn_route_rcv(struct sk_buff *, struct net_device *, struct packet_type *, struct net_device *); 2108 - 2109 2107 static struct packet_type dn_dix_packet_type __read_mostly = { 2110 2108 .type = cpu_to_be16(ETH_P_DNA_RT), 2111 2109 .func = dn_route_rcv, ··· 2350 2352 .mmap = sock_no_mmap, 2351 2353 .sendpage = sock_no_sendpage, 2352 2354 }; 2353 - 2354 - void dn_register_sysctl(void); 2355 - void dn_unregister_sysctl(void); 2356 2355 2357 2356 MODULE_DESCRIPTION("The Linux DECnet Network Protocol"); 2358 2357 MODULE_AUTHOR("Linux DECnet Project Team");
+20 -3
net/ieee802154/6lowpan.c
··· 106 106 unsigned short type, const void *_daddr, 107 107 const void *_saddr, unsigned int len) 108 108 { 109 - struct ipv6hdr *hdr; 110 109 const u8 *saddr = _saddr; 111 110 const u8 *daddr = _daddr; 112 111 struct ieee802154_addr sa, da; ··· 115 116 */ 116 117 if (type != ETH_P_IPV6) 117 118 return 0; 118 - 119 - hdr = ipv6_hdr(skb); 120 119 121 120 if (!saddr) 122 121 saddr = dev->dev_addr; ··· 530 533 .create = lowpan_header_create, 531 534 }; 532 535 536 + static struct lock_class_key lowpan_tx_busylock; 537 + static struct lock_class_key lowpan_netdev_xmit_lock_key; 538 + 539 + static void lowpan_set_lockdep_class_one(struct net_device *dev, 540 + struct netdev_queue *txq, 541 + void *_unused) 542 + { 543 + lockdep_set_class(&txq->_xmit_lock, 544 + &lowpan_netdev_xmit_lock_key); 545 + } 546 + 547 + 548 + static int lowpan_dev_init(struct net_device *dev) 549 + { 550 + netdev_for_each_tx_queue(dev, lowpan_set_lockdep_class_one, NULL); 551 + dev->qdisc_tx_busylock = &lowpan_tx_busylock; 552 + return 0; 553 + } 554 + 533 555 static const struct net_device_ops lowpan_netdev_ops = { 556 + .ndo_init = lowpan_dev_init, 534 557 .ndo_start_xmit = lowpan_xmit, 535 558 .ndo_set_mac_address = lowpan_set_address, 536 559 };
+2 -1
net/ipv4/devinet.c
··· 1443 1443 + nla_total_size(4) /* IFA_LOCAL */ 1444 1444 + nla_total_size(4) /* IFA_BROADCAST */ 1445 1445 + nla_total_size(IFNAMSIZ) /* IFA_LABEL */ 1446 - + nla_total_size(4); /* IFA_FLAGS */ 1446 + + nla_total_size(4) /* IFA_FLAGS */ 1447 + + nla_total_size(sizeof(struct ifa_cacheinfo)); /* IFA_CACHEINFO */ 1447 1448 } 1448 1449 1449 1450 static inline u32 cstamp_delta(unsigned long cstamp)
+11 -18
net/ipv4/ip_tunnel.c
··· 101 101 __tunnel_dst_set(per_cpu_ptr(t->dst_cache, i), NULL); 102 102 } 103 103 104 - static struct dst_entry *tunnel_dst_get(struct ip_tunnel *t) 104 + static struct rtable *tunnel_rtable_get(struct ip_tunnel *t, u32 cookie) 105 105 { 106 106 struct dst_entry *dst; 107 107 108 108 rcu_read_lock(); 109 109 dst = rcu_dereference(this_cpu_ptr(t->dst_cache)->dst); 110 - if (dst) 110 + if (dst) { 111 + if (dst->obsolete && dst->ops->check(dst, cookie) == NULL) { 112 + rcu_read_unlock(); 113 + tunnel_dst_reset(t); 114 + return NULL; 115 + } 111 116 dst_hold(dst); 112 - rcu_read_unlock(); 113 - return dst; 114 - } 115 - 116 - static struct dst_entry *tunnel_dst_check(struct ip_tunnel *t, u32 cookie) 117 - { 118 - struct dst_entry *dst = tunnel_dst_get(t); 119 - 120 - if (dst && dst->obsolete && dst->ops->check(dst, cookie) == NULL) { 121 - tunnel_dst_reset(t); 122 - return NULL; 123 117 } 124 - 125 - return dst; 118 + rcu_read_unlock(); 119 + return (struct rtable *)dst; 126 120 } 127 121 128 122 /* Often modified stats are per cpu, other are shared (netdev->stats) */ ··· 578 584 struct flowi4 fl4; 579 585 u8 tos, ttl; 580 586 __be16 df; 581 - struct rtable *rt = NULL; /* Route to the other host */ 587 + struct rtable *rt; /* Route to the other host */ 582 588 unsigned int max_headroom; /* The extra header space needed */ 583 589 __be32 dst; 584 590 int err; ··· 651 657 init_tunnel_flow(&fl4, protocol, dst, tnl_params->saddr, 652 658 tunnel->parms.o_key, RT_TOS(tos), tunnel->parms.link); 653 659 654 - if (connected) 655 - rt = (struct rtable *)tunnel_dst_check(tunnel, 0); 660 + rt = connected ? tunnel_rtable_get(tunnel, 0) : NULL; 656 661 657 662 if (!rt) { 658 663 rt = ip_route_output_key(tunnel->net, &fl4);
+5
net/ipv4/netfilter/Kconfig
··· 61 61 packet transformations such as the source, destination address and 62 62 source and destination ports. 63 63 64 + config NFT_REJECT_IPV4 65 + depends on NF_TABLES_IPV4 66 + default NFT_REJECT 67 + tristate 68 + 64 69 config NF_TABLES_ARP 65 70 depends on NF_TABLES 66 71 tristate "ARP nf_tables support"
+1
net/ipv4/netfilter/Makefile
··· 30 30 obj-$(CONFIG_NF_TABLES_IPV4) += nf_tables_ipv4.o 31 31 obj-$(CONFIG_NFT_CHAIN_ROUTE_IPV4) += nft_chain_route_ipv4.o 32 32 obj-$(CONFIG_NFT_CHAIN_NAT_IPV4) += nft_chain_nat_ipv4.o 33 + obj-$(CONFIG_NFT_REJECT_IPV4) += nft_reject_ipv4.o 33 34 obj-$(CONFIG_NF_TABLES_ARP) += nf_tables_arp.o 34 35 35 36 # generic IP tables
+4 -1
net/ipv4/netfilter/nf_nat_h323.c
··· 229 229 ret = nf_ct_expect_related(rtcp_exp); 230 230 if (ret == 0) 231 231 break; 232 - else if (ret != -EBUSY) { 232 + else if (ret == -EBUSY) { 233 + nf_ct_unexpect_related(rtp_exp); 234 + continue; 235 + } else if (ret < 0) { 233 236 nf_ct_unexpect_related(rtp_exp); 234 237 nated_port = 0; 235 238 break;
+75
net/ipv4/netfilter/nft_reject_ipv4.c
··· 1 + /* 2 + * Copyright (c) 2008-2009 Patrick McHardy <kaber@trash.net> 3 + * Copyright (c) 2013 Eric Leblond <eric@regit.org> 4 + * 5 + * This program is free software; you can redistribute it and/or modify 6 + * it under the terms of the GNU General Public License version 2 as 7 + * published by the Free Software Foundation. 8 + * 9 + * Development of this code funded by Astaro AG (http://www.astaro.com/) 10 + */ 11 + 12 + #include <linux/kernel.h> 13 + #include <linux/init.h> 14 + #include <linux/module.h> 15 + #include <linux/netlink.h> 16 + #include <linux/netfilter.h> 17 + #include <linux/netfilter/nf_tables.h> 18 + #include <net/netfilter/nf_tables.h> 19 + #include <net/icmp.h> 20 + #include <net/netfilter/ipv4/nf_reject.h> 21 + #include <net/netfilter/nft_reject.h> 22 + 23 + void nft_reject_ipv4_eval(const struct nft_expr *expr, 24 + struct nft_data data[NFT_REG_MAX + 1], 25 + const struct nft_pktinfo *pkt) 26 + { 27 + struct nft_reject *priv = nft_expr_priv(expr); 28 + 29 + switch (priv->type) { 30 + case NFT_REJECT_ICMP_UNREACH: 31 + nf_send_unreach(pkt->skb, priv->icmp_code); 32 + break; 33 + case NFT_REJECT_TCP_RST: 34 + nf_send_reset(pkt->skb, pkt->ops->hooknum); 35 + break; 36 + } 37 + 38 + data[NFT_REG_VERDICT].verdict = NF_DROP; 39 + } 40 + EXPORT_SYMBOL_GPL(nft_reject_ipv4_eval); 41 + 42 + static struct nft_expr_type nft_reject_ipv4_type; 43 + static const struct nft_expr_ops nft_reject_ipv4_ops = { 44 + .type = &nft_reject_ipv4_type, 45 + .size = NFT_EXPR_SIZE(sizeof(struct nft_reject)), 46 + .eval = nft_reject_ipv4_eval, 47 + .init = nft_reject_init, 48 + .dump = nft_reject_dump, 49 + }; 50 + 51 + static struct nft_expr_type nft_reject_ipv4_type __read_mostly = { 52 + .family = NFPROTO_IPV4, 53 + .name = "reject", 54 + .ops = &nft_reject_ipv4_ops, 55 + .policy = nft_reject_policy, 56 + .maxattr = NFTA_REJECT_MAX, 57 + .owner = THIS_MODULE, 58 + }; 59 + 60 + static int __init nft_reject_ipv4_module_init(void) 61 + { 62 + return nft_register_expr(&nft_reject_ipv4_type); 63 + } 64 + 65 + static void __exit nft_reject_ipv4_module_exit(void) 66 + { 67 + nft_unregister_expr(&nft_reject_ipv4_type); 68 + } 69 + 70 + module_init(nft_reject_ipv4_module_init); 71 + module_exit(nft_reject_ipv4_module_exit); 72 + 73 + MODULE_LICENSE("GPL"); 74 + MODULE_AUTHOR("Patrick McHardy <kaber@trash.net>"); 75 + MODULE_ALIAS_NFT_AF_EXPR(AF_INET, "reject");
+1 -1
net/ipv4/tcp.c
···
          /* This is a (useful) BSD violating of the RFC. There is a
           * problem with TCP as specified in that the other end could
           * keep a socket open forever with no application left this end.
-          * We use a 3 minute timeout (about the same as BSD) then kill
+          * We use a 1 minute timeout (about the same as BSD) then kill
           * our end. If they send after that then tough - BUT: long enough
           * that we won't make the old 4*rto = almost no time - whoops
           * reset mistake.
+10 -8
net/ipv4/tcp_input.c
···
  {
          struct tcp_sock *tp = tcp_sk(sk);
          long m = mrtt; /* RTT */
+         u32 srtt = tp->srtt;

          /* The following amusing code comes from Jacobson's
           * article in SIGCOMM '88.  Note that rtt and mdev
···
           * does not matter how to _calculate_ it. Seems, it was trap
           * that VJ failed to avoid. 8)
           */
-         if (m == 0)
-                 m = 1;
-         if (tp->srtt != 0) {
-                 m -= (tp->srtt >> 3);   /* m is now error in rtt est */
-                 tp->srtt += m;          /* rtt = 7/8 rtt + 1/8 new */
+         if (srtt != 0) {
+                 m -= (srtt >> 3);       /* m is now error in rtt est */
+                 srtt += m;              /* rtt = 7/8 rtt + 1/8 new */
                  if (m < 0) {
                          m = -m;         /* m is now abs(error) */
                          m -= (tp->mdev >> 2);   /* similar update on mdev */
···
                  }
          } else {
                  /* no previous measure. */
-                 tp->srtt = m << 3;      /* take the measured time to be rtt */
+                 srtt = m << 3;          /* take the measured time to be rtt */
                  tp->mdev = m << 1;      /* make sure rto = 3*rtt */
                  tp->mdev_max = tp->rttvar = max(tp->mdev, tcp_rto_min(sk));
                  tp->rtt_seq = tp->snd_nxt;
          }
+         tp->srtt = max(1U, srtt);
  }

  /* Set the sk_pacing_rate to allow proper sizing of TSO packets.
···

          rate *= max(tp->snd_cwnd, tp->packets_out);

-         /* Correction for small srtt : minimum srtt being 8 (1 jiffy << 3),
-          * be conservative and assume srtt = 1 (125 us instead of 1.25 ms)
+         /* Correction for small srtt and scheduling constraints.
+          * For small rtt, consider noise is too high, and use
+          * the minimal value (srtt = 1 -> 125 us for HZ=1000)
+          *
           * We probably need usec resolution in the future.
           * Note: This also takes care of possible srtt=0 case,
           * when tcp_rtt_estimator() was not yet called.
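The tcp_input.c change above reworks Jacobson's RTT estimator so the smoothed RTT is accumulated in a local variable and written back once, clamped to a minimum of 1, letting `tp->srtt == 0` reliably mean "no sample yet". Below is a hypothetical userspace sketch of that update logic (not the kernel code itself); the mdev under-estimation filter of the real function is omitted for brevity, and `rtt_estimator` and `struct rtt_est` are illustrative names:

```c
#include <assert.h>

/* Fixed point as in the kernel: srtt is scaled by 8 (<<3), mdev by 4 (<<2). */
struct rtt_est {
        unsigned int srtt;      /* smoothed RTT << 3 */
        unsigned int mdev;      /* mean deviation << 2 */
};

static void rtt_estimator(struct rtt_est *e, long m)
{
        unsigned int srtt = e->srtt;    /* work on a local copy */

        if (srtt != 0) {
                m -= (srtt >> 3);       /* m is now error in rtt est */
                srtt += m;              /* rtt = 7/8 rtt + 1/8 new */
                if (m < 0)
                        m = -m;         /* m is now abs(error) */
                m -= (e->mdev >> 2);    /* similar update on mdev */
                e->mdev += m;           /* mdev = 3/4 mdev + 1/4 |err| */
        } else {
                /* no previous measure: take the sample as-is */
                srtt = m << 3;          /* measured time becomes rtt */
                e->mdev = m << 1;       /* make sure rto = 3*rtt */
        }
        /* write back once, clamped: max(1U, srtt) in the kernel */
        e->srtt = srtt > 1 ? srtt : 1;
}
```

A zero-valued first sample now yields `srtt == 1` instead of the old `if (m == 0) m = 1;` pre-adjustment, which is exactly what lets other code treat zero as "never measured".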
+12 -3
net/ipv4/tcp_output.c
···
          if ((1 << sk->sk_state) &
              (TCPF_ESTABLISHED | TCPF_FIN_WAIT1 | TCPF_CLOSING |
               TCPF_CLOSE_WAIT  | TCPF_LAST_ACK))
-                 tcp_write_xmit(sk, tcp_current_mss(sk), 0, 0, GFP_ATOMIC);
+                 tcp_write_xmit(sk, tcp_current_mss(sk), tcp_sk(sk)->nonagle,
+                                0, GFP_ATOMIC);
  }
  /*
   * One tasklet per cpu tries to send more skbs.
···

          if (atomic_read(&sk->sk_wmem_alloc) > limit) {
                  set_bit(TSQ_THROTTLED, &tp->tsq_flags);
-                 break;
+                 /* It is possible TX completion already happened
+                  * before we set TSQ_THROTTLED, so we must
+                  * test again the condition.
+                  * We abuse smp_mb__after_clear_bit() because
+                  * there is no smp_mb__after_set_bit() yet
+                  */
+                 smp_mb__after_clear_bit();
+                 if (atomic_read(&sk->sk_wmem_alloc) > limit)
+                         break;
          }

          limit = mss_now;
···
          /* Schedule a loss probe in 2*RTT for SACK capable connections
           * in Open state, that are either limited by cwnd or application.
           */
-         if (sysctl_tcp_early_retrans < 3 || !rtt || !tp->packets_out ||
+         if (sysctl_tcp_early_retrans < 3 || !tp->srtt || !tp->packets_out ||
              !tcp_is_sack(tp) || inet_csk(sk)->icsk_ca_state != TCP_CA_Open)
                  return false;
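The TSQ fix above is an instance of the classic "set flag, full barrier, re-test the condition" pattern: a TX completion may fire between the first `sk_wmem_alloc` check and the `TSQ_THROTTLED` store, and without the re-test the socket could stay throttled forever. A hypothetical userspace sketch using C11 atomics (the seq_cst store stands in for the kernel's `smp_mb__after_clear_bit()`; names here are illustrative):

```c
#include <stdatomic.h>
#include <stdbool.h>

static atomic_long wmem_alloc;  /* models sk->sk_wmem_alloc */
static atomic_bool throttled;   /* models the TSQ_THROTTLED bit */

/* Returns true if the transmit loop should stop. */
static bool should_stop(long limit)
{
        if (atomic_load(&wmem_alloc) > limit) {
                /* publish the throttled flag first... */
                atomic_store(&throttled, true);
                /* ...then re-test: a completion that ran in between
                 * will have dropped wmem_alloc, and we must not stop
                 * or nobody would ever restart transmission. */
                if (atomic_load(&wmem_alloc) > limit)
                        return true;    /* still over limit: stop */
                /* completion fired meanwhile: keep transmitting */
        }
        return false;
}
```

The ordering matters: the flag must be visible before the second read, otherwise the completion path could observe the flag unset and skip rescheduling while this path simultaneously concludes it is safe to stop.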
+9 -8
net/ipv4/udp_offload.c
···
  static DEFINE_SPINLOCK(udp_offload_lock);
  static struct udp_offload_priv __rcu *udp_offload_base __read_mostly;

+ #define udp_deref_protected(X) rcu_dereference_protected(X, lockdep_is_held(&udp_offload_lock))
+
  struct udp_offload_priv {
          struct udp_offload      *offload;
          struct rcu_head         rcu;
···

  int udp_add_offload(struct udp_offload *uo)
  {
-         struct udp_offload_priv __rcu **head = &udp_offload_base;
-         struct udp_offload_priv *new_offload = kzalloc(sizeof(*new_offload), GFP_KERNEL);
+         struct udp_offload_priv *new_offload = kzalloc(sizeof(*new_offload), GFP_ATOMIC);

          if (!new_offload)
                  return -ENOMEM;
···
          new_offload->offload = uo;

          spin_lock(&udp_offload_lock);
-         rcu_assign_pointer(new_offload->next, rcu_dereference(*head));
-         rcu_assign_pointer(*head, new_offload);
+         new_offload->next = udp_offload_base;
+         rcu_assign_pointer(udp_offload_base, new_offload);
          spin_unlock(&udp_offload_lock);

          return 0;
···

          spin_lock(&udp_offload_lock);

-         uo_priv = rcu_dereference(*head);
+         uo_priv = udp_deref_protected(*head);
          for (; uo_priv != NULL;
-              uo_priv = rcu_dereference(*head)) {
-
+              uo_priv = udp_deref_protected(*head)) {
                  if (uo_priv->offload == uo) {
-                         rcu_assign_pointer(*head, rcu_dereference(uo_priv->next));
+                         rcu_assign_pointer(*head,
+                                            udp_deref_protected(uo_priv->next));
                          goto unlock;
                  }
                  head = &uo_priv->next;
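The udp_offload change above is a head-insert / indirect-pointer-walk singly linked list: additions link at the head under the lock, deletions walk a `**head` pointer and splice the match out. In the kernel the update-side loads use `rcu_dereference_protected()` (the new `udp_deref_protected()` macro) since `udp_offload_lock` is held; the sketch below is a hypothetical userspace model without the RCU machinery, with illustrative names:

```c
#include <stddef.h>
#include <stdlib.h>
#include <assert.h>

struct offload_priv {
        int id;                         /* stands in for the udp_offload pointer */
        struct offload_priv *next;
};

static struct offload_priv *offload_base;

static int offload_add(int id)
{
        struct offload_priv *p = calloc(1, sizeof(*p));

        if (!p)
                return -1;
        p->id = id;
        /* link before publish; the real code publishes with
         * rcu_assign_pointer() so readers see a fully set-up node */
        p->next = offload_base;
        offload_base = p;
        return 0;
}

static void offload_del(int id)
{
        struct offload_priv **head = &offload_base;
        struct offload_priv *p;

        /* walk the indirect head pointer so unlinking is one store */
        for (p = *head; p != NULL; p = *head) {
                if (p->id == id) {
                        *head = p->next;        /* unlink */
                        free(p);                /* kfree_rcu() in the kernel */
                        return;
                }
                head = &p->next;
        }
}
```

The original buggy loop re-read `*head` without advancing `head` past non-matching nodes, which is one of the iteration problems the patch series addresses.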
+1 -1
net/ipv6/icmp.c
···
          addr_type = ipv6_addr_type(&hdr->daddr);

          if (ipv6_chk_addr(net, &hdr->daddr, skb->dev, 0) ||
-             ipv6_anycast_destination(skb))
+             ipv6_chk_acast_addr_src(net, skb->dev, &hdr->daddr))
                  saddr = &hdr->daddr;

          /*
+5
net/ipv6/netfilter/Kconfig
···
            packet transformations such as the source, destination address and
            source and destination ports.

+ config NFT_REJECT_IPV6
+         depends on NF_TABLES_IPV6
+         default NFT_REJECT
+         tristate
+
  config IP6_NF_IPTABLES
          tristate "IP6 tables support (required for filtering)"
          depends on INET && IPV6
+1
net/ipv6/netfilter/Makefile
···
  obj-$(CONFIG_NF_TABLES_IPV6) += nf_tables_ipv6.o
  obj-$(CONFIG_NFT_CHAIN_ROUTE_IPV6) += nft_chain_route_ipv6.o
  obj-$(CONFIG_NFT_CHAIN_NAT_IPV6) += nft_chain_nat_ipv6.o
+ obj-$(CONFIG_NFT_REJECT_IPV6) += nft_reject_ipv6.o

  # matches
  obj-$(CONFIG_IP6_NF_MATCH_AH) += ip6t_ah.o
+76
net/ipv6/netfilter/nft_reject_ipv6.c
···
+ /*
+  * Copyright (c) 2008-2009 Patrick McHardy <kaber@trash.net>
+  * Copyright (c) 2013 Eric Leblond <eric@regit.org>
+  *
+  * This program is free software; you can redistribute it and/or modify
+  * it under the terms of the GNU General Public License version 2 as
+  * published by the Free Software Foundation.
+  *
+  * Development of this code funded by Astaro AG (http://www.astaro.com/)
+  */
+
+ #include <linux/kernel.h>
+ #include <linux/init.h>
+ #include <linux/module.h>
+ #include <linux/netlink.h>
+ #include <linux/netfilter.h>
+ #include <linux/netfilter/nf_tables.h>
+ #include <net/netfilter/nf_tables.h>
+ #include <net/netfilter/nft_reject.h>
+ #include <net/netfilter/ipv6/nf_reject.h>
+
+ void nft_reject_ipv6_eval(const struct nft_expr *expr,
+                           struct nft_data data[NFT_REG_MAX + 1],
+                           const struct nft_pktinfo *pkt)
+ {
+         struct nft_reject *priv = nft_expr_priv(expr);
+         struct net *net = dev_net((pkt->in != NULL) ? pkt->in : pkt->out);
+
+         switch (priv->type) {
+         case NFT_REJECT_ICMP_UNREACH:
+                 nf_send_unreach6(net, pkt->skb, priv->icmp_code,
+                                  pkt->ops->hooknum);
+                 break;
+         case NFT_REJECT_TCP_RST:
+                 nf_send_reset6(net, pkt->skb, pkt->ops->hooknum);
+                 break;
+         }
+
+         data[NFT_REG_VERDICT].verdict = NF_DROP;
+ }
+ EXPORT_SYMBOL_GPL(nft_reject_ipv6_eval);
+
+ static struct nft_expr_type nft_reject_ipv6_type;
+ static const struct nft_expr_ops nft_reject_ipv6_ops = {
+         .type           = &nft_reject_ipv6_type,
+         .size           = NFT_EXPR_SIZE(sizeof(struct nft_reject)),
+         .eval           = nft_reject_ipv6_eval,
+         .init           = nft_reject_init,
+         .dump           = nft_reject_dump,
+ };
+
+ static struct nft_expr_type nft_reject_ipv6_type __read_mostly = {
+         .family         = NFPROTO_IPV6,
+         .name           = "reject",
+         .ops            = &nft_reject_ipv6_ops,
+         .policy         = nft_reject_policy,
+         .maxattr        = NFTA_REJECT_MAX,
+         .owner          = THIS_MODULE,
+ };
+
+ static int __init nft_reject_ipv6_module_init(void)
+ {
+         return nft_register_expr(&nft_reject_ipv6_type);
+ }
+
+ static void __exit nft_reject_ipv6_module_exit(void)
+ {
+         nft_unregister_expr(&nft_reject_ipv6_type);
+ }
+
+ module_init(nft_reject_ipv6_module_init);
+ module_exit(nft_reject_ipv6_module_exit);
+
+ MODULE_LICENSE("GPL");
+ MODULE_AUTHOR("Patrick McHardy <kaber@trash.net>");
+ MODULE_ALIAS_NFT_AF_EXPR(AF_INET6, "reject");
+2 -20
net/ipx/af_ipx.c
···
  #include <net/p8022.h>
  #include <net/psnap.h>
  #include <net/sock.h>
+ #include <net/datalink.h>
  #include <net/tcp_states.h>
+ #include <net/net_namespace.h>

  #include <asm/uaccess.h>
-
- #ifdef CONFIG_SYSCTL
- extern void ipx_register_sysctl(void);
- extern void ipx_unregister_sysctl(void);
- #else
- #define ipx_register_sysctl()
- #define ipx_unregister_sysctl()
- #endif

  /* Configuration Variables */
  static unsigned char ipxcfg_max_hops = 16;
···

  struct ipx_interface *ipx_primary_net;
  struct ipx_interface *ipx_internal_net;
-
- extern int ipxrtr_add_route(__be32 network, struct ipx_interface *intrfc,
-                             unsigned char *node);
- extern void ipxrtr_del_routes(struct ipx_interface *intrfc);
- extern int ipxrtr_route_packet(struct sock *sk, struct sockaddr_ipx *usipx,
-                                struct iovec *iov, size_t len, int noblock);
- extern int ipxrtr_route_skb(struct sk_buff *skb);
- extern struct ipx_route *ipxrtr_lookup(__be32 net);
- extern int ipxrtr_ioctl(unsigned int cmd, void __user *arg);

  struct ipx_interface *ipx_interfaces_head(void)
  {
···
  static struct notifier_block ipx_dev_notifier = {
          .notifier_call = ipxitf_device_event,
  };
-
- extern struct datalink_proto *make_EII_client(void);
- extern void destroy_EII_client(struct datalink_proto *);

  static const unsigned char ipx_8022_type = 0xE0;
  static const unsigned char ipx_snap_id[5] = { 0x0, 0x0, 0x0, 0x81, 0x37 };
-4
net/ipx/ipx_route.c
···

  extern struct ipx_interface *ipx_internal_net;

- extern __be16 ipx_cksum(struct ipxhdr *packet, int length);
  extern struct ipx_interface *ipxitf_find_using_net(__be32 net);
  extern int ipxitf_demux_socket(struct ipx_interface *intrfc,
                                 struct sk_buff *skb, int copy);
  extern int ipxitf_demux_socket(struct ipx_interface *intrfc,
                                 struct sk_buff *skb, int copy);
- extern int ipxitf_send(struct ipx_interface *intrfc, struct sk_buff *skb,
-                        char *node);
- extern struct ipx_interface *ipxitf_find_using_net(__be32 net);

  struct ipx_route *ipxrtr_lookup(__be32 net)
  {
+23 -21
net/mac80211/cfg.c
···
                             IEEE80211_P2P_OPPPS_ENABLE_BIT;

          err = ieee80211_assign_beacon(sdata, &params->beacon);
-         if (err < 0)
+         if (err < 0) {
+                 ieee80211_vif_release_channel(sdata);
                  return err;
+         }
          changed |= err;

          err = drv_start_ap(sdata->local, sdata);
···
          if (old)
                  kfree_rcu(old, rcu_head);
          RCU_INIT_POINTER(sdata->u.ap.beacon, NULL);
+         ieee80211_vif_release_channel(sdata);
          return err;
  }
···
          kfree(sdata->u.ap.next_beacon);
          sdata->u.ap.next_beacon = NULL;

-         cancel_work_sync(&sdata->u.ap.request_smps_work);
-
          /* turn off carrier for this interface and dependent VLANs */
          list_for_each_entry(vlan, &sdata->u.ap.vlans, u.vlan.list)
                  netif_carrier_off(vlan->dev);
···
                  kfree_rcu(old_beacon, rcu_head);
          if (old_probe_resp)
                  kfree_rcu(old_probe_resp, rcu_head);
+         sdata->u.ap.driver_smps_mode = IEEE80211_SMPS_OFF;

          __sta_info_flush(sdata, true);
          ieee80211_free_keys(sdata, true);
···
          INIT_DELAYED_WORK(&roc->work, ieee80211_sw_roc_work);
          INIT_LIST_HEAD(&roc->dependents);

+         /*
+          * cookie is either the roc cookie (for normal roc)
+          * or the SKB (for mgmt TX)
+          */
+         if (!txskb) {
+                 /* local->mtx protects this */
+                 local->roc_cookie_counter++;
+                 roc->cookie = local->roc_cookie_counter;
+                 /* wow, you wrapped 64 bits ... more likely a bug */
+                 if (WARN_ON(roc->cookie == 0)) {
+                         roc->cookie = 1;
+                         local->roc_cookie_counter++;
+                 }
+                 *cookie = roc->cookie;
+         } else {
+                 *cookie = (unsigned long)txskb;
+         }
+
          /* if there's one pending or we're scanning, queue this one */
          if (!list_empty(&local->roc_list) ||
              local->scanning || local->radar_detect_enabled)
···
   out_queue:
          if (!queued)
                  list_add_tail(&roc->list, &local->roc_list);
-
-         /*
-          * cookie is either the roc cookie (for normal roc)
-          * or the SKB (for mgmt TX)
-          */
-         if (!txskb) {
-                 /* local->mtx protects this */
-                 local->roc_cookie_counter++;
-                 roc->cookie = local->roc_cookie_counter;
-                 /* wow, you wrapped 64 bits ... more likely a bug */
-                 if (WARN_ON(roc->cookie == 0)) {
-                         roc->cookie = 1;
-                         local->roc_cookie_counter++;
-                 }
-                 *cookie = roc->cookie;
-         } else {
-                 *cookie = (unsigned long)txskb;
-         }

          return 0;
  }
+3 -1
net/mac80211/ht.c
···
                               u.ap.request_smps_work);

          sdata_lock(sdata);
-         __ieee80211_request_smps_ap(sdata, sdata->u.ap.driver_smps_mode);
+         if (sdata_dereference(sdata->u.ap.beacon, sdata))
+                 __ieee80211_request_smps_ap(sdata,
+                                             sdata->u.ap.driver_smps_mode);
          sdata_unlock(sdata);
  }
+1 -4
net/mac80211/ibss.c
···
          struct cfg80211_bss *cbss;
          struct beacon_data *presp;
          struct sta_info *sta;
-         int active_ibss;
          u16 capability;

-         active_ibss = ieee80211_sta_active_ibss(sdata);
-
-         if (!active_ibss && !is_zero_ether_addr(ifibss->bssid)) {
+         if (!is_zero_ether_addr(ifibss->bssid)) {
                  capability = WLAN_CAPABILITY_IBSS;

                  if (ifibss->privacy)
+19 -8
net/mac80211/iface.c
···
                  return ret;
          }

+         mutex_lock(&local->iflist_mtx);
+         rcu_assign_pointer(local->monitor_sdata, sdata);
+         mutex_unlock(&local->iflist_mtx);
+
          mutex_lock(&local->mtx);
          ret = ieee80211_vif_use_channel(sdata, &local->monitor_chandef,
                                          IEEE80211_CHANCTX_EXCLUSIVE);
          mutex_unlock(&local->mtx);
          if (ret) {
+                 mutex_lock(&local->iflist_mtx);
+                 rcu_assign_pointer(local->monitor_sdata, NULL);
+                 mutex_unlock(&local->iflist_mtx);
+                 synchronize_net();
                  drv_remove_interface(local, sdata);
                  kfree(sdata);
                  return ret;
          }
-
-         mutex_lock(&local->iflist_mtx);
-         rcu_assign_pointer(local->monitor_sdata, sdata);
-         mutex_unlock(&local->iflist_mtx);

          return 0;
  }
···

          ieee80211_roc_purge(local, sdata);

-         if (sdata->vif.type == NL80211_IFTYPE_STATION)
+         switch (sdata->vif.type) {
+         case NL80211_IFTYPE_STATION:
                  ieee80211_mgd_stop(sdata);
-
-         if (sdata->vif.type == NL80211_IFTYPE_ADHOC)
+                 break;
+         case NL80211_IFTYPE_ADHOC:
                  ieee80211_ibss_stop(sdata);
-
+                 break;
+         case NL80211_IFTYPE_AP:
+                 cancel_work_sync(&sdata->u.ap.request_smps_work);
+                 break;
+         default:
+                 break;
+         }

          /*
           * Remove all stations associated with this interface.
+1 -1
net/mac80211/tx.c
···
          }

          /* adjust first fragment's length */
-         skb->len = hdrlen + per_fragm;
+         skb_trim(skb, hdrlen + per_fragm);
          return 0;
  }
+5 -1
net/netfilter/Kconfig
···

  config NFT_REJECT
          depends on NF_TABLES
-         depends on NF_TABLES_IPV6 || !NF_TABLES_IPV6
          default m if NETFILTER_ADVANCED=n
          tristate "Netfilter nf_tables reject support"
          help
            This option adds the "reject" expression that you can use to
            explicitly deny and notify via TCP reset/ICMP informational errors
            unallowed traffic.
+
+ config NFT_REJECT_INET
+         depends on NF_TABLES_INET
+         default NFT_REJECT
+         tristate

  config NFT_COMPAT
          depends on NF_TABLES
+1
net/netfilter/Makefile
···
  obj-$(CONFIG_NFT_NAT)           += nft_nat.o
  obj-$(CONFIG_NFT_QUEUE)         += nft_queue.o
  obj-$(CONFIG_NFT_REJECT)        += nft_reject.o
+ obj-$(CONFIG_NFT_REJECT_INET)   += nft_reject_inet.o
  obj-$(CONFIG_NFT_RBTREE)        += nft_rbtree.o
  obj-$(CONFIG_NFT_HASH)          += nft_hash.o
  obj-$(CONFIG_NFT_COUNTER)       += nft_counter.o
+4 -4
net/netfilter/ipvs/ip_vs_conn.c
···
          cp->protocol = p->protocol;
          ip_vs_addr_set(p->af, &cp->caddr, p->caddr);
          cp->cport = p->cport;
-         ip_vs_addr_set(p->af, &cp->vaddr, p->vaddr);
-         cp->vport = p->vport;
-         /* proto should only be IPPROTO_IP if d_addr is a fwmark */
+         /* proto should only be IPPROTO_IP if p->vaddr is a fwmark */
          ip_vs_addr_set(p->protocol == IPPROTO_IP ? AF_UNSPEC : p->af,
-                        &cp->daddr, daddr);
+                        &cp->vaddr, p->vaddr);
+         cp->vport = p->vport;
+         ip_vs_addr_set(p->af, &cp->daddr, daddr);
          cp->dport = dport;
          cp->flags = flags;
          cp->fwmark = fwmark;
+46 -9
net/netfilter/nf_conntrack_core.c
···
          nf_ct_delete((struct nf_conn *)ul_conntrack, 0, 0);
  }

+ static inline bool
+ nf_ct_key_equal(struct nf_conntrack_tuple_hash *h,
+                 const struct nf_conntrack_tuple *tuple,
+                 u16 zone)
+ {
+         struct nf_conn *ct = nf_ct_tuplehash_to_ctrack(h);
+
+         /* A conntrack can be recreated with the equal tuple,
+          * so we need to check that the conntrack is confirmed
+          */
+         return nf_ct_tuple_equal(tuple, &h->tuple) &&
+                nf_ct_zone(ct) == zone &&
+                nf_ct_is_confirmed(ct);
+ }
+
  /*
   * Warning :
   * - Caller must take a reference on returned object
···
          local_bh_disable();
  begin:
          hlist_nulls_for_each_entry_rcu(h, n, &net->ct.hash[bucket], hnnode) {
-                 if (nf_ct_tuple_equal(tuple, &h->tuple) &&
-                     nf_ct_zone(nf_ct_tuplehash_to_ctrack(h)) == zone) {
+                 if (nf_ct_key_equal(h, tuple, zone)) {
                          NF_CT_STAT_INC(net, found);
                          local_bh_enable();
                          return h;
···
                       !atomic_inc_not_zero(&ct->ct_general.use)))
                          h = NULL;
                  else {
-                         if (unlikely(!nf_ct_tuple_equal(tuple, &h->tuple) ||
-                                      nf_ct_zone(ct) != zone)) {
+                         if (unlikely(!nf_ct_key_equal(h, tuple, zone))) {
                                  nf_ct_put(ct);
                                  goto begin;
                          }
···
                  goto out;

          add_timer(&ct->timeout);
-         nf_conntrack_get(&ct->ct_general);
+         smp_wmb();
+         /* The caller holds a reference to this object */
+         atomic_set(&ct->ct_general.use, 2);
          __nf_conntrack_hash_insert(ct, hash, repl_hash);
          NF_CT_STAT_INC(net, insert);
          spin_unlock_bh(&nf_conntrack_lock);
···
          return -EEXIST;
  }
  EXPORT_SYMBOL_GPL(nf_conntrack_hash_check_insert);
+
+ /* deletion from this larval template list happens via nf_ct_put() */
+ void nf_conntrack_tmpl_insert(struct net *net, struct nf_conn *tmpl)
+ {
+         __set_bit(IPS_TEMPLATE_BIT, &tmpl->status);
+         __set_bit(IPS_CONFIRMED_BIT, &tmpl->status);
+         nf_conntrack_get(&tmpl->ct_general);
+
+         spin_lock_bh(&nf_conntrack_lock);
+         /* Overload tuple linked list to put us in template list. */
+         hlist_nulls_add_head_rcu(&tmpl->tuplehash[IP_CT_DIR_ORIGINAL].hnnode,
+                                  &net->ct.tmpl);
+         spin_unlock_bh(&nf_conntrack_lock);
+ }
+ EXPORT_SYMBOL_GPL(nf_conntrack_tmpl_insert);

  /* Confirm a connection given skb; places it in hash table */
  int
···
                  nf_ct_zone->id = zone;
          }
  #endif
-         /*
-          * changes to lookup keys must be done before setting refcnt to 1
+         /* Because we use RCU lookups, we set ct_general.use to zero before
+          * this is inserted in any list.
           */
-         smp_wmb();
-         atomic_set(&ct->ct_general.use, 1);
+         atomic_set(&ct->ct_general.use, 0);
          return ct;

  #ifdef CONFIG_NF_CONNTRACK_ZONES
···
  void nf_conntrack_free(struct nf_conn *ct)
  {
          struct net *net = nf_ct_net(ct);
+
+         /* A freed object has refcnt == 0, that's
+          * the golden rule for SLAB_DESTROY_BY_RCU
+          */
+         NF_CT_ASSERT(atomic_read(&ct->ct_general.use) == 0);

          nf_ct_ext_destroy(ct);
          nf_ct_ext_free(ct);
···
                  __nf_ct_try_assign_helper(ct, tmpl, GFP_ATOMIC);
                  NF_CT_STAT_INC(net, new);
          }
+
+         /* Now it is inserted into the unconfirmed list, bump refcount */
+         nf_conntrack_get(&ct->ct_general);

          /* Overload tuple linked list to put us in unconfirmed list. */
          hlist_nulls_add_head_rcu(&ct->tuplehash[IP_CT_DIR_ORIGINAL].hnnode,
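The new `nf_ct_key_equal()` helper above exists because SLAB_DESTROY_BY_RCU allows a lookup to observe a freed-and-recycled entry: the tuple and zone can match a brand-new, not-yet-confirmed conntrack, so the confirmed bit must be part of the key comparison. A hypothetical userspace model of just that predicate (struct fields and names are illustrative, not the kernel's):

```c
#include <stdbool.h>
#include <string.h>

struct tuple {
        unsigned int src, dst;  /* stand-ins for the full conntrack tuple */
};

struct conn {
        struct tuple tuple;
        unsigned short zone;
        bool confirmed;         /* models nf_ct_is_confirmed() */
};

/* Mirror of the patched check: a tuple+zone match alone is not enough,
 * because the slot may have been recycled into a new unconfirmed entry
 * between the RCU lookup and this comparison. */
static bool ct_key_equal(const struct conn *c, const struct tuple *t,
                         unsigned short zone)
{
        return memcmp(&c->tuple, t, sizeof(*t)) == 0 &&
               c->zone == zone &&
               c->confirmed;
}
```

Rejecting unconfirmed matches forces the lookup to retry (the `goto begin` path), which is cheap compared to handing back an entry whose identity may change underneath the caller.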
+2 -3
net/netfilter/nf_synproxy_core.c
···
                  goto err2;
          if (!nfct_synproxy_ext_add(ct))
                  goto err2;
-         __set_bit(IPS_TEMPLATE_BIT, &ct->status);
-         __set_bit(IPS_CONFIRMED_BIT, &ct->status);

+         nf_conntrack_tmpl_insert(net, ct);
          snet->tmpl = ct;

          snet->stats = alloc_percpu(struct synproxy_stats);
···
  {
          struct synproxy_net *snet = synproxy_pernet(net);

-         nf_conntrack_free(snet->tmpl);
+         nf_ct_put(snet->tmpl);
          synproxy_proc_exit(net);
          free_percpu(snet->stats);
  }
+53 -29
net/netfilter/nf_tables_api.c
···
          return 0;
  }

- static void nf_tables_rcu_chain_destroy(struct rcu_head *head)
+ static void nf_tables_chain_destroy(struct nft_chain *chain)
  {
-         struct nft_chain *chain = container_of(head, struct nft_chain, rcu_head);
-
          BUG_ON(chain->use > 0);

          if (chain->flags & NFT_BASE_CHAIN) {
···
          if (IS_ERR(chain))
                  return PTR_ERR(chain);

-         if (!list_empty(&chain->rules))
+         if (!list_empty(&chain->rules) || chain->use > 0)
                  return -EBUSY;

          list_del(&chain->list);
···
                                 family);

          /* Make sure all rule references are gone before this is released */
-         call_rcu(&chain->rcu_head, nf_tables_rcu_chain_destroy);
+         synchronize_rcu();
+
+         nf_tables_chain_destroy(chain);
          return 0;
  }
···
  }
  EXPORT_SYMBOL_GPL(nft_unregister_expr);

- static const struct nft_expr_type *__nft_expr_type_get(struct nlattr *nla)
+ static const struct nft_expr_type *__nft_expr_type_get(u8 family,
+                                                        struct nlattr *nla)
  {
          const struct nft_expr_type *type;

          list_for_each_entry(type, &nf_tables_expressions, list) {
-                 if (!nla_strcmp(nla, type->name))
+                 if (!nla_strcmp(nla, type->name) &&
+                     (!type->family || type->family == family))
                          return type;
          }
          return NULL;
  }

- static const struct nft_expr_type *nft_expr_type_get(struct nlattr *nla)
+ static const struct nft_expr_type *nft_expr_type_get(u8 family,
+                                                      struct nlattr *nla)
  {
          const struct nft_expr_type *type;

          if (nla == NULL)
                  return ERR_PTR(-EINVAL);

-         type = __nft_expr_type_get(nla);
+         type = __nft_expr_type_get(family, nla);
          if (type != NULL && try_module_get(type->owner))
                  return type;

  #ifdef CONFIG_MODULES
          if (type == NULL) {
                  nfnl_unlock(NFNL_SUBSYS_NFTABLES);
+                 request_module("nft-expr-%u-%.*s", family,
+                                nla_len(nla), (char *)nla_data(nla));
+                 nfnl_lock(NFNL_SUBSYS_NFTABLES);
+                 if (__nft_expr_type_get(family, nla))
+                         return ERR_PTR(-EAGAIN);
+
+                 nfnl_unlock(NFNL_SUBSYS_NFTABLES);
                  request_module("nft-expr-%.*s",
                                 nla_len(nla), (char *)nla_data(nla));
                  nfnl_lock(NFNL_SUBSYS_NFTABLES);
-                 if (__nft_expr_type_get(nla))
+                 if (__nft_expr_type_get(family, nla))
                          return ERR_PTR(-EAGAIN);
          }
  #endif
···
          if (err < 0)
                  return err;

-         type = nft_expr_type_get(tb[NFTA_EXPR_NAME]);
+         type = nft_expr_type_get(ctx->afi->family, tb[NFTA_EXPR_NAME]);
          if (IS_ERR(type))
                  return PTR_ERR(type);
···
          return err;
  }

- static void nf_tables_rcu_rule_destroy(struct rcu_head *head)
+ static void nf_tables_rule_destroy(struct nft_rule *rule)
  {
-         struct nft_rule *rule = container_of(head, struct nft_rule, rcu_head);
          struct nft_expr *expr;

          /*
···
                  expr = nft_expr_next(expr);
          }
          kfree(rule);
- }
-
- static void nf_tables_rule_destroy(struct nft_rule *rule)
- {
-         call_rcu(&rule->rcu_head, nf_tables_rcu_rule_destroy);
  }

  #define NFT_RULE_MAXEXPRS 128
···
          synchronize_rcu();

          list_for_each_entry_safe(rupd, tmp, &net->nft.commit_list, list) {
-                 /* Delete this rule from the dirty list */
-                 list_del(&rupd->list);
-
                  /* This rule was inactive in the past and just became active.
                   * Clear the next bit of the genmask since its meaning has
                   * changed, now it is the future.
···
                                                rupd->chain, rupd->rule,
                                                NFT_MSG_NEWRULE, 0,
                                                rupd->family);
+                         list_del(&rupd->list);
                          kfree(rupd);
                          continue;
                  }
···
                  nf_tables_rule_notify(skb, rupd->nlh, rupd->table, rupd->chain,
                                        rupd->rule, NFT_MSG_DELRULE, 0,
                                        rupd->family);
+         }
+
+         /* Make sure we don't see any packet traversing old rules */
+         synchronize_rcu();
+
+         /* Now we can safely release unused old rules */
+         list_for_each_entry_safe(rupd, tmp, &net->nft.commit_list, list) {
                  nf_tables_rule_destroy(rupd->rule);
+                 list_del(&rupd->list);
                  kfree(rupd);
          }
···
          struct nft_rule_trans *rupd, *tmp;

          list_for_each_entry_safe(rupd, tmp, &net->nft.commit_list, list) {
                  if (!nft_rule_is_active_next(net, rupd->rule)) {
                          nft_rule_clear(net, rupd->rule);
+                         list_del(&rupd->list);
                          kfree(rupd);
                          continue;
                  }

                  /* This rule is inactive, get rid of it */
                  list_del_rcu(&rupd->rule->list);
+         }
+
+         /* Make sure we don't see any packet accessing aborted rules */
+         synchronize_rcu();
+
+         list_for_each_entry_safe(rupd, tmp, &net->nft.commit_list, list) {
                  nf_tables_rule_destroy(rupd->rule);
+                 list_del(&rupd->list);
                  kfree(rupd);
          }
+
          return 0;
  }
···
          }

          if (nla[NFTA_SET_TABLE] != NULL) {
+                 if (afi == NULL)
+                         return -EAFNOSUPPORT;
+
                  table = nf_tables_table_lookup(afi, nla[NFTA_SET_TABLE]);
                  if (IS_ERR(table))
                          return PTR_ERR(table);
···

                  if (!sscanf(i->name, name, &tmp))
                          continue;
-                 if (tmp < 0 || tmp > BITS_PER_LONG * PAGE_SIZE)
+                 if (tmp < 0 || tmp >= BITS_PER_BYTE * PAGE_SIZE)
                          continue;

                  set_bit(tmp, inuse);
          }

-         n = find_first_zero_bit(inuse, BITS_PER_LONG * PAGE_SIZE);
+         n = find_first_zero_bit(inuse, BITS_PER_BYTE * PAGE_SIZE);
          free_page((unsigned long)inuse);
  }
···
          struct nft_ctx ctx;
          int err;

+         if (nfmsg->nfgen_family == NFPROTO_UNSPEC)
+                 return -EAFNOSUPPORT;
          if (nla[NFTA_SET_TABLE] == NULL)
                  return -EINVAL;

          err = nft_ctx_init_from_setattr(&ctx, skb, nlh, nla);
          if (err < 0)
                  return err;
-
-         if (nfmsg->nfgen_family == NFPROTO_UNSPEC)
-                 return -EAFNOSUPPORT;

          set = nf_tables_set_lookup(ctx.table, nla[NFTA_SET_NAME]);
          if (IS_ERR(set))
···
                  if (nla[NFTA_SET_ELEM_DATA] == NULL &&
                      !(elem.flags & NFT_SET_ELEM_INTERVAL_END))
                          return -EINVAL;
+                 if (nla[NFTA_SET_ELEM_DATA] != NULL &&
+                     elem.flags & NFT_SET_ELEM_INTERVAL_END)
+                         return -EINVAL;
          } else {
                  if (nla[NFTA_SET_ELEM_DATA] != NULL)
                          return -EINVAL;
···
                               const struct nft_set_iter *iter,
                               const struct nft_set_elem *elem)
  {
+         if (elem->flags & NFT_SET_ELEM_INTERVAL_END)
+                 return 0;
+
          switch (elem->data.verdict) {
          case NFT_JUMP:
          case NFT_GOTO:
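The reworked commit/abort paths above follow a two-phase pattern: pass one unlinks rules from the active list while keeping them on the commit list, a grace period (`synchronize_rcu()`) lets in-flight packet traversals drain, and only then does pass two actually destroy them. A deliberately toy userspace model of that ordering, with illustrative names and a no-op standing in for the grace period:

```c
#include <stdbool.h>
#include <stddef.h>

struct rule {
        bool active;    /* survives the transaction? */
        bool unlinked;  /* pass 1: removed from the active list */
        bool freed;     /* pass 2: destroyed after the grace period */
};

/* stands in for synchronize_rcu(): in the kernel, returning from it
 * guarantees no reader still holds a reference obtained before it */
static void grace_period(void) { }

static void commit(struct rule *rules, size_t n)
{
        size_t i;

        /* Pass 1: unlink rules that lost their active bit, but do not
         * free them yet -- a packet may still be walking them. */
        for (i = 0; i < n; i++)
                if (!rules[i].active)
                        rules[i].unlinked = true;

        grace_period();

        /* Pass 2: no traversal can see the unlinked rules any more,
         * so releasing them is now safe. */
        for (i = 0; i < n; i++)
                if (rules[i].unlinked)
                        rules[i].freed = true;
}
```

The bug the patch fixes is exactly the collapse of these two passes into one: freeing a rule in the same iteration that unlinks it races with packets still traversing the old ruleset.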
+3 -3
net/netfilter/nf_tables_core.c
···
          },
  };

- static inline void nft_trace_packet(const struct nft_pktinfo *pkt,
-                                     const struct nft_chain *chain,
-                                     int rulenum, enum nft_trace type)
+ static void nft_trace_packet(const struct nft_pktinfo *pkt,
+                              const struct nft_chain *chain,
+                              int rulenum, enum nft_trace type)
  {
          struct net *net = dev_net(pkt->in ? pkt->in : pkt->out);
+14 -2
net/netfilter/nft_ct.c
···
                  if (tb[NFTA_CT_DIRECTION] != NULL)
                          return -EINVAL;
                  break;
+         case NFT_CT_L3PROTOCOL:
          case NFT_CT_PROTOCOL:
          case NFT_CT_SRC:
          case NFT_CT_DST:
···
                  goto nla_put_failure;
          if (nla_put_be32(skb, NFTA_CT_KEY, htonl(priv->key)))
                  goto nla_put_failure;
-         if (nla_put_u8(skb, NFTA_CT_DIRECTION, priv->dir))
-                 goto nla_put_failure;
+
+         switch (priv->key) {
+         case NFT_CT_PROTOCOL:
+         case NFT_CT_SRC:
+         case NFT_CT_DST:
+         case NFT_CT_PROTO_SRC:
+         case NFT_CT_PROTO_DST:
+                 if (nla_put_u8(skb, NFTA_CT_DIRECTION, priv->dir))
+                         goto nla_put_failure;
+         default:
+                 break;
+         }
+
          return 0;

  nla_put_failure:
+1 -4
net/netfilter/nft_log.c
···
  struct nft_log {
          struct nf_loginfo       loginfo;
          char                    *prefix;
-         int                     family;
  };

  static void nft_log_eval(const struct nft_expr *expr,
···
          const struct nft_log *priv = nft_expr_priv(expr);
          struct net *net = dev_net(pkt->in ? pkt->in : pkt->out);

-         nf_log_packet(net, priv->family, pkt->ops->hooknum, pkt->skb, pkt->in,
+         nf_log_packet(net, pkt->ops->pf, pkt->ops->hooknum, pkt->skb, pkt->in,
                        pkt->out, &priv->loginfo, "%s", priv->prefix);
  }
···
          struct nft_log *priv = nft_expr_priv(expr);
          struct nf_loginfo *li = &priv->loginfo;
          const struct nlattr *nla;
-
-         priv->family = ctx->afi->family;

          nla = tb[NFTA_LOG_PREFIX];
          if (nla != NULL) {
+1
net/netfilter/nft_lookup.c
···
  #include <linux/netfilter.h>
  #include <linux/netfilter/nf_tables.h>
  #include <net/netfilter/nf_tables.h>
+ #include <net/netfilter/nf_tables_core.h>

  struct nft_lookup {
          struct nft_set          *set;
+1 -3
net/netfilter/nft_queue.c
··· 25 25 u16 queuenum; 26 26 u16 queues_total; 27 27 u16 flags; 28 - u8 family; 29 28 }; 30 29 31 30 static void nft_queue_eval(const struct nft_expr *expr, ··· 42 43 queue = priv->queuenum + cpu % priv->queues_total; 43 44 } else { 44 45 queue = nfqueue_hash(pkt->skb, queue, 45 - priv->queues_total, priv->family, 46 + priv->queues_total, pkt->ops->pf, 46 47 jhash_initval); 47 48 } 48 49 } ··· 70 71 return -EINVAL; 71 72 72 73 init_hashrandom(&jhash_initval); 73 - priv->family = ctx->afi->family; 74 74 priv->queuenum = ntohs(nla_get_be16(tb[NFTA_QUEUE_NUM])); 75 75 76 76 if (tb[NFTA_QUEUE_TOTAL] != NULL)
+11 -5
net/netfilter/nft_rbtree.c
··· 69 69 struct nft_rbtree_elem *rbe) 70 70 { 71 71 nft_data_uninit(&rbe->key, NFT_DATA_VALUE); 72 - if (set->flags & NFT_SET_MAP) 72 + if (set->flags & NFT_SET_MAP && 73 + !(rbe->flags & NFT_SET_ELEM_INTERVAL_END)) 73 74 nft_data_uninit(rbe->data, set->dtype); 75 + 74 76 kfree(rbe); 75 77 } 76 78 ··· 110 108 int err; 111 109 112 110 size = sizeof(*rbe); 113 - if (set->flags & NFT_SET_MAP) 111 + if (set->flags & NFT_SET_MAP && 112 + !(elem->flags & NFT_SET_ELEM_INTERVAL_END)) 114 113 size += sizeof(rbe->data[0]); 115 114 116 115 rbe = kzalloc(size, GFP_KERNEL); ··· 120 117 121 118 rbe->flags = elem->flags; 122 119 nft_data_copy(&rbe->key, &elem->key); 123 - if (set->flags & NFT_SET_MAP) 120 + if (set->flags & NFT_SET_MAP && 121 + !(rbe->flags & NFT_SET_ELEM_INTERVAL_END)) 124 122 nft_data_copy(rbe->data, &elem->data); 125 123 126 124 err = __nft_rbtree_insert(set, rbe); ··· 157 153 parent = parent->rb_right; 158 154 else { 159 155 elem->cookie = rbe; 160 - if (set->flags & NFT_SET_MAP) 156 + if (set->flags & NFT_SET_MAP && 157 + !(rbe->flags & NFT_SET_ELEM_INTERVAL_END)) 161 158 nft_data_copy(&elem->data, rbe->data); 162 159 elem->flags = rbe->flags; 163 160 return 0; ··· 182 177 183 178 rbe = rb_entry(node, struct nft_rbtree_elem, node); 184 179 nft_data_copy(&elem.key, &rbe->key); 185 - if (set->flags & NFT_SET_MAP) 180 + if (set->flags & NFT_SET_MAP && 181 + !(rbe->flags & NFT_SET_ELEM_INTERVAL_END)) 186 182 nft_data_copy(&elem.data, rbe->data); 187 183 elem.flags = rbe->flags; 188 184
+9 -80
net/netfilter/nft_reject.c
··· 16 16 #include <linux/netfilter.h> 17 17 #include <linux/netfilter/nf_tables.h> 18 18 #include <net/netfilter/nf_tables.h> 19 - #include <net/icmp.h> 20 - #include <net/netfilter/ipv4/nf_reject.h> 19 + #include <net/netfilter/nft_reject.h> 21 20 22 - #if IS_ENABLED(CONFIG_NF_TABLES_IPV6) 23 - #include <net/netfilter/ipv6/nf_reject.h> 24 - #endif 25 - 26 - struct nft_reject { 27 - enum nft_reject_types type:8; 28 - u8 icmp_code; 29 - u8 family; 30 - }; 31 - 32 - static void nft_reject_eval(const struct nft_expr *expr, 33 - struct nft_data data[NFT_REG_MAX + 1], 34 - const struct nft_pktinfo *pkt) 35 - { 36 - struct nft_reject *priv = nft_expr_priv(expr); 37 - #if IS_ENABLED(CONFIG_NF_TABLES_IPV6) 38 - struct net *net = dev_net((pkt->in != NULL) ? pkt->in : pkt->out); 39 - #endif 40 - switch (priv->type) { 41 - case NFT_REJECT_ICMP_UNREACH: 42 - if (priv->family == NFPROTO_IPV4) 43 - nf_send_unreach(pkt->skb, priv->icmp_code); 44 - #if IS_ENABLED(CONFIG_NF_TABLES_IPV6) 45 - else if (priv->family == NFPROTO_IPV6) 46 - nf_send_unreach6(net, pkt->skb, priv->icmp_code, 47 - pkt->ops->hooknum); 48 - #endif 49 - break; 50 - case NFT_REJECT_TCP_RST: 51 - if (priv->family == NFPROTO_IPV4) 52 - nf_send_reset(pkt->skb, pkt->ops->hooknum); 53 - #if IS_ENABLED(CONFIG_NF_TABLES_IPV6) 54 - else if (priv->family == NFPROTO_IPV6) 55 - nf_send_reset6(net, pkt->skb, pkt->ops->hooknum); 56 - #endif 57 - break; 58 - } 59 - 60 - data[NFT_REG_VERDICT].verdict = NF_DROP; 61 - } 62 - 63 - static const struct nla_policy nft_reject_policy[NFTA_REJECT_MAX + 1] = { 21 + const struct nla_policy nft_reject_policy[NFTA_REJECT_MAX + 1] = { 64 22 [NFTA_REJECT_TYPE] = { .type = NLA_U32 }, 65 23 [NFTA_REJECT_ICMP_CODE] = { .type = NLA_U8 }, 66 24 }; 25 + EXPORT_SYMBOL_GPL(nft_reject_policy); 67 26 68 - static int nft_reject_init(const struct nft_ctx *ctx, 69 - const struct nft_expr *expr, 70 - const struct nlattr * const tb[]) 27 + int nft_reject_init(const struct nft_ctx *ctx, 28 + const struct nft_expr *expr, 29 + const struct nlattr * const tb[]) 71 30 { 72 31 struct nft_reject *priv = nft_expr_priv(expr); 73 32 74 33 if (tb[NFTA_REJECT_TYPE] == NULL) 75 34 return -EINVAL; 76 35 77 - priv->family = ctx->afi->family; 78 36 priv->type = ntohl(nla_get_be32(tb[NFTA_REJECT_TYPE])); 79 37 switch (priv->type) { 80 38 case NFT_REJECT_ICMP_UNREACH: ··· 47 89 48 90 return 0; 49 91 } 92 + EXPORT_SYMBOL_GPL(nft_reject_init); 50 93 51 - static int nft_reject_dump(struct sk_buff *skb, const struct nft_expr *expr) 94 + int nft_reject_dump(struct sk_buff *skb, const struct nft_expr *expr) 52 95 { 53 96 const struct nft_reject *priv = nft_expr_priv(expr); 54 97 ··· 68 109 nla_put_failure: 69 110 return -1; 70 111 } 71 - 72 - static struct nft_expr_type nft_reject_type; 73 - static const struct nft_expr_ops nft_reject_ops = { 74 - .type = &nft_reject_type, 75 - .size = NFT_EXPR_SIZE(sizeof(struct nft_reject)), 76 - .eval = nft_reject_eval, 77 - .init = nft_reject_init, 78 - .dump = nft_reject_dump, 79 - }; 80 - 81 - static struct nft_expr_type nft_reject_type __read_mostly = { 82 - .name = "reject", 83 - .ops = &nft_reject_ops, 84 - .policy = nft_reject_policy, 85 - .maxattr = NFTA_REJECT_MAX, 86 - .owner = THIS_MODULE, 87 - }; 88 - 89 - static int __init nft_reject_module_init(void) 90 - { 91 - return nft_register_expr(&nft_reject_type); 92 - } 93 - 94 - static void __exit nft_reject_module_exit(void) 95 - { 96 - nft_unregister_expr(&nft_reject_type); 97 - } 98 - 99 - module_init(nft_reject_module_init); 100 - module_exit(nft_reject_module_exit); 112 + EXPORT_SYMBOL_GPL(nft_reject_dump); 101 113 102 114 MODULE_LICENSE("GPL"); 103 115 MODULE_AUTHOR("Patrick McHardy <kaber@trash.net>"); 104 - MODULE_ALIAS_NFT_EXPR("reject");
+63
net/netfilter/nft_reject_inet.c
··· 1 + /* 2 + * Copyright (c) 2014 Patrick McHardy <kaber@trash.net> 3 + * 4 + * This program is free software; you can redistribute it and/or modify 5 + * it under the terms of the GNU General Public License version 2 as 6 + * published by the Free Software Foundation. 7 + */ 8 + 9 + #include <linux/kernel.h> 10 + #include <linux/init.h> 11 + #include <linux/module.h> 12 + #include <linux/netlink.h> 13 + #include <linux/netfilter.h> 14 + #include <linux/netfilter/nf_tables.h> 15 + #include <net/netfilter/nf_tables.h> 16 + #include <net/netfilter/nft_reject.h> 17 + 18 + static void nft_reject_inet_eval(const struct nft_expr *expr, 19 + struct nft_data data[NFT_REG_MAX + 1], 20 + const struct nft_pktinfo *pkt) 21 + { 22 + switch (pkt->ops->pf) { 23 + case NFPROTO_IPV4: 24 + nft_reject_ipv4_eval(expr, data, pkt); 25 + case NFPROTO_IPV6: 26 + nft_reject_ipv6_eval(expr, data, pkt); 27 + } 28 + } 29 + 30 + static struct nft_expr_type nft_reject_inet_type; 31 + static const struct nft_expr_ops nft_reject_inet_ops = { 32 + .type = &nft_reject_inet_type, 33 + .size = NFT_EXPR_SIZE(sizeof(struct nft_reject)), 34 + .eval = nft_reject_inet_eval, 35 + .init = nft_reject_init, 36 + .dump = nft_reject_dump, 37 + }; 38 + 39 + static struct nft_expr_type nft_reject_inet_type __read_mostly = { 40 + .family = NFPROTO_INET, 41 + .name = "reject", 42 + .ops = &nft_reject_inet_ops, 43 + .policy = nft_reject_policy, 44 + .maxattr = NFTA_REJECT_MAX, 45 + .owner = THIS_MODULE, 46 + }; 47 + 48 + static int __init nft_reject_inet_module_init(void) 49 + { 50 + return nft_register_expr(&nft_reject_inet_type); 51 + } 52 + 53 + static void __exit nft_reject_inet_module_exit(void) 54 + { 55 + nft_unregister_expr(&nft_reject_inet_type); 56 + } 57 + 58 + module_init(nft_reject_inet_module_init); 59 + module_exit(nft_reject_inet_module_exit); 60 + 61 + MODULE_LICENSE("GPL"); 62 + MODULE_AUTHOR("Patrick McHardy <kaber@trash.net>"); 63 + MODULE_ALIAS_NFT_AF_EXPR(1, "reject");
+1 -6
net/netfilter/xt_CT.c
··· 228 228 goto err3; 229 229 } 230 230 231 - __set_bit(IPS_TEMPLATE_BIT, &ct->status); 232 - __set_bit(IPS_CONFIRMED_BIT, &ct->status); 233 - 234 - /* Overload tuple linked list to put us in template list. */ 235 - hlist_nulls_add_head_rcu(&ct->tuplehash[IP_CT_DIR_ORIGINAL].hnnode, 236 - &par->net->ct.tmpl); 231 + nf_conntrack_tmpl_insert(par->net, ct); 237 232 out: 238 233 info->ct = ct; 239 234 return 0;
+16 -7
net/openvswitch/datapath.c
··· 55 55 56 56 #include "datapath.h" 57 57 #include "flow.h" 58 + #include "flow_table.h" 58 59 #include "flow_netlink.h" 59 60 #include "vport-internal_dev.h" 60 61 #include "vport-netdev.h" ··· 161 160 { 162 161 struct datapath *dp = container_of(rcu, struct datapath, rcu); 163 162 164 - ovs_flow_tbl_destroy(&dp->table); 165 163 free_percpu(dp->stats_percpu); 166 164 release_net(ovs_dp_get_net(dp)); 167 165 kfree(dp->ports); ··· 465 465 nla->nla_len = nla_attr_size(skb->len); 466 466 467 467 skb_zerocopy(user_skb, skb, skb->len, hlen); 468 + 469 + /* Pad OVS_PACKET_ATTR_PACKET if linear copy was performed */ 470 + if (!(dp->user_features & OVS_DP_F_UNALIGNED)) { 471 + size_t plen = NLA_ALIGN(user_skb->len) - user_skb->len; 472 + 473 + if (plen > 0) 474 + memset(skb_put(user_skb, plen), 0, plen); 475 + } 468 476 469 477 ((struct nlmsghdr *) user_skb->data)->nlmsg_len = user_skb->len; 470 478 ··· 860 852 goto err_unlock_ovs; 861 853 862 854 /* The unmasked key has to be the same for flow updates. */ 863 - error = -EINVAL; 864 - if (!ovs_flow_cmp_unmasked_key(flow, &match)) { 865 - OVS_NLERR("Flow modification message rejected, unmasked key does not match.\n"); 855 + if (!ovs_flow_cmp_unmasked_key(flow, &match)) 866 856 goto err_unlock_ovs; 867 - } 868 857 869 858 /* Update actions. */ 870 859 old_acts = ovsl_dereference(flow->sf_acts); ··· 1084 1079 msgsize += nla_total_size(IFNAMSIZ); 1085 1080 msgsize += nla_total_size(sizeof(struct ovs_dp_stats)); 1086 1081 msgsize += nla_total_size(sizeof(struct ovs_dp_megaflow_stats)); 1082 + msgsize += nla_total_size(sizeof(u32)); /* OVS_DP_ATTR_USER_FEATURES */ 1087 1083 1088 1084 return msgsize; 1089 1085 } ··· 1285 1279 err_destroy_percpu: 1286 1280 free_percpu(dp->stats_percpu); 1287 1281 err_destroy_table: 1288 - ovs_flow_tbl_destroy(&dp->table); 1282 + ovs_flow_tbl_destroy(&dp->table, false); 1289 1283 err_free_dp: 1290 1284 release_net(ovs_dp_get_net(dp)); 1291 1285 kfree(dp); ··· 1312 1306 list_del_rcu(&dp->list_node); 1313 1307 1314 1308 /* OVSP_LOCAL is datapath internal port. We need to make sure that 1315 - * all port in datapath are destroyed first before freeing datapath. 1309 + * all ports in datapath are destroyed first before freeing datapath. 1316 1310 */ 1317 1311 ovs_dp_detach_port(ovs_vport_ovsl(dp, OVSP_LOCAL)); 1312 + 1313 + /* RCU destroy the flow table */ 1314 + ovs_flow_tbl_destroy(&dp->table, true); 1318 1315 1319 1316 call_rcu(&dp->rcu, destroy_dp_rcu); 1320 1317 }
+43 -45
net/openvswitch/flow_table.c
··· 153 153 flow_free(flow); 154 154 } 155 155 156 - static void flow_mask_del_ref(struct sw_flow_mask *mask, bool deferred) 157 - { 158 - if (!mask) 159 - return; 160 - 161 - BUG_ON(!mask->ref_count); 162 - mask->ref_count--; 163 - 164 - if (!mask->ref_count) { 165 - list_del_rcu(&mask->list); 166 - if (deferred) 167 - kfree_rcu(mask, rcu); 168 - else 169 - kfree(mask); 170 - } 171 - } 172 - 173 156 void ovs_flow_free(struct sw_flow *flow, bool deferred) 174 157 { 175 158 if (!flow) 176 159 return; 177 160 178 - flow_mask_del_ref(flow->mask, deferred); 161 + if (flow->mask) { 162 + struct sw_flow_mask *mask = flow->mask; 163 + 164 + /* ovs-lock is required to protect mask-refcount and 165 + * mask list. 166 + */ 167 + ASSERT_OVSL(); 168 + BUG_ON(!mask->ref_count); 169 + mask->ref_count--; 170 + 171 + if (!mask->ref_count) { 172 + list_del_rcu(&mask->list); 173 + if (deferred) 174 + kfree_rcu(mask, rcu); 175 + else 176 + kfree(mask); 177 + } 178 + } 179 179 180 180 if (deferred) 181 181 call_rcu(&flow->rcu, rcu_free_flow_callback); ··· 188 188 flex_array_free(buckets); 189 189 } 190 190 191 + 191 192 static void __table_instance_destroy(struct table_instance *ti) 192 193 { 193 - int i; 194 - 195 - if (ti->keep_flows) 196 - goto skip_flows; 197 - 198 - for (i = 0; i < ti->n_buckets; i++) { 199 - struct sw_flow *flow; 200 - struct hlist_head *head = flex_array_get(ti->buckets, i); 201 - struct hlist_node *n; 202 - int ver = ti->node_ver; 203 - 204 - hlist_for_each_entry_safe(flow, n, head, hash_node[ver]) { 205 - hlist_del(&flow->hash_node[ver]); 206 - ovs_flow_free(flow, false); 207 - } 208 - } 209 - 210 - skip_flows: 211 194 free_buckets(ti->buckets); 212 195 kfree(ti); 213 196 } ··· 241 258 242 259 static void table_instance_destroy(struct table_instance *ti, bool deferred) 243 260 { 261 + int i; 262 + 244 263 if (!ti) 245 264 return; 246 265 266 + if (ti->keep_flows) 267 + goto skip_flows; 268 + 269 + for (i = 0; i < ti->n_buckets; i++) { 270 + struct sw_flow *flow; 271 + struct hlist_head *head = flex_array_get(ti->buckets, i); 272 + struct hlist_node *n; 273 + int ver = ti->node_ver; 274 + 275 + hlist_for_each_entry_safe(flow, n, head, hash_node[ver]) { 276 + hlist_del_rcu(&flow->hash_node[ver]); 277 + ovs_flow_free(flow, deferred); 278 + } 279 + } 280 + 281 + skip_flows: 247 282 if (deferred) 248 283 call_rcu(&ti->rcu, flow_tbl_destroy_rcu_cb); 249 284 else 250 285 __table_instance_destroy(ti); 251 286 } 252 287 253 - void ovs_flow_tbl_destroy(struct flow_table *table) 288 + void ovs_flow_tbl_destroy(struct flow_table *table, bool deferred) 254 289 { 255 290 struct table_instance *ti = ovsl_dereference(table->ti); 256 291 257 - table_instance_destroy(ti, false); 292 + table_instance_destroy(ti, deferred); 258 293 } 259 294 260 295 struct sw_flow *ovs_flow_tbl_dump_next(struct table_instance *ti, ··· 505 504 506 505 mask = kmalloc(sizeof(*mask), GFP_KERNEL); 507 506 if (mask) 508 - mask->ref_count = 0; 507 + mask->ref_count = 1; 509 508 510 509 return mask; 511 - } 512 - 513 - static void mask_add_ref(struct sw_flow_mask *mask) 514 - { 515 - mask->ref_count++; 516 510 } 517 511 518 512 static bool mask_equal(const struct sw_flow_mask *a, ··· 550 554 mask->key = new->key; 551 555 mask->range = new->range; 552 556 list_add_rcu(&mask->list, &tbl->mask_list); 557 + } else { 558 + BUG_ON(!mask->ref_count); 559 + mask->ref_count++; 553 560 } 554 561 555 - mask_add_ref(mask); 556 562 flow->mask = mask; 557 563 return 0; 558 564 }
+1 -1
net/openvswitch/flow_table.h
··· 60 60 61 61 int ovs_flow_tbl_init(struct flow_table *); 62 62 int ovs_flow_tbl_count(struct flow_table *table); 63 - void ovs_flow_tbl_destroy(struct flow_table *table); 63 + void ovs_flow_tbl_destroy(struct flow_table *table, bool deferred); 64 64 int ovs_flow_tbl_flush(struct flow_table *flow_table); 65 65 66 66 int ovs_flow_tbl_insert(struct flow_table *table, struct sw_flow *flow,
+2
net/sctp/ipv6.c
··· 662 662 */ 663 663 sctp_v6_to_sk_daddr(&asoc->peer.primary_addr, newsk); 664 664 665 + newsk->sk_v6_rcv_saddr = sk->sk_v6_rcv_saddr; 666 + 665 667 sk_refcnt_debug_inc(newsk); 666 668 667 669 if (newsk->sk_prot->init(newsk)) {
+3 -3
net/sunrpc/svc_xprt.c
··· 571 571 } 572 572 } 573 573 574 - int svc_alloc_arg(struct svc_rqst *rqstp) 574 + static int svc_alloc_arg(struct svc_rqst *rqstp) 575 575 { 576 576 struct svc_serv *serv = rqstp->rq_server; 577 577 struct xdr_buf *arg; ··· 612 612 return 0; 613 613 } 614 614 615 - struct svc_xprt *svc_get_next_xprt(struct svc_rqst *rqstp, long timeout) 615 + static struct svc_xprt *svc_get_next_xprt(struct svc_rqst *rqstp, long timeout) 616 616 { 617 617 struct svc_xprt *xprt; 618 618 struct svc_pool *pool = rqstp->rq_pool; ··· 691 691 return xprt; 692 692 } 693 693 694 - void svc_add_new_temp_xprt(struct svc_serv *serv, struct svc_xprt *newxpt) 694 + static void svc_add_new_temp_xprt(struct svc_serv *serv, struct svc_xprt *newxpt) 695 695 { 696 696 spin_lock_bh(&serv->sv_lock); 697 697 set_bit(XPT_TEMP, &newxpt->xpt_flags);
+10 -7
net/wireless/core.c
··· 203 203 204 204 rdev->opencount--; 205 205 206 - WARN_ON(rdev->scan_req && rdev->scan_req->wdev == wdev && 207 - !rdev->scan_req->notified); 206 + if (rdev->scan_req && rdev->scan_req->wdev == wdev) { 207 + if (WARN_ON(!rdev->scan_req->notified)) 208 + rdev->scan_req->aborted = true; 209 + ___cfg80211_scan_done(rdev, false); 210 + } 208 211 } 209 212 210 213 static int cfg80211_rfkill_set_block(void *data, bool blocked) ··· 442 439 bool have_band = false; 443 440 int i; 444 441 u16 ifmodes = wiphy->interface_modes; 445 - 446 - /* support for 5/10 MHz is broken due to nl80211 API mess - disable */ 447 - wiphy->flags &= ~WIPHY_FLAG_SUPPORTS_5_10_MHZ; 448 442 449 443 /* 450 444 * There are major locking problems in nl80211/mac80211 for CSA, ··· 859 859 break; 860 860 case NETDEV_DOWN: 861 861 cfg80211_update_iface_num(rdev, wdev->iftype, -1); 862 - WARN_ON(rdev->scan_req && rdev->scan_req->wdev == wdev && 863 - !rdev->scan_req->notified); 862 + if (rdev->scan_req && rdev->scan_req->wdev == wdev) { 863 + if (WARN_ON(!rdev->scan_req->notified)) 864 + rdev->scan_req->aborted = true; 865 + ___cfg80211_scan_done(rdev, false); 866 + } 864 867 865 868 if (WARN_ON(rdev->sched_scan_req && 866 869 rdev->sched_scan_req->dev == wdev->netdev)) {
+3 -1
net/wireless/core.h
··· 62 62 struct rb_root bss_tree; 63 63 u32 bss_generation; 64 64 struct cfg80211_scan_request *scan_req; /* protected by RTNL */ 65 + struct sk_buff *scan_msg; 65 66 struct cfg80211_sched_scan_request *sched_scan_req; 66 67 unsigned long suspend_at; 67 68 struct work_struct scan_done_wk; ··· 362 361 struct key_params *params, int key_idx, 363 362 bool pairwise, const u8 *mac_addr); 364 363 void __cfg80211_scan_done(struct work_struct *wk); 365 - void ___cfg80211_scan_done(struct cfg80211_registered_device *rdev); 364 + void ___cfg80211_scan_done(struct cfg80211_registered_device *rdev, 365 + bool send_message); 366 366 void __cfg80211_sched_scan_results(struct work_struct *wk); 367 367 int __cfg80211_stop_sched_scan(struct cfg80211_registered_device *rdev, 368 368 bool driver_initiated);
+12 -20
net/wireless/nl80211.c
··· 1719 1719 * We can then retry with the larger buffer. 1720 1720 */ 1721 1721 if ((ret == -ENOBUFS || ret == -EMSGSIZE) && 1722 - !skb->len && 1722 + !skb->len && !state->split && 1723 1723 cb->min_dump_alloc < 4096) { 1724 1724 cb->min_dump_alloc = 4096; 1725 + state->split_start = 0; 1725 1726 rtnl_unlock(); 1726 1727 return 1; 1727 1728 } ··· 5245 5244 if (!rdev->ops->scan) 5246 5245 return -EOPNOTSUPP; 5247 5246 5248 - if (rdev->scan_req) { 5247 + if (rdev->scan_req || rdev->scan_msg) { 5249 5248 err = -EBUSY; 5250 5249 goto unlock; 5251 5250 } ··· 10012 10011 NL80211_MCGRP_SCAN, GFP_KERNEL); 10013 10012 } 10014 10013 10015 - void nl80211_send_scan_done(struct cfg80211_registered_device *rdev, 10016 - struct wireless_dev *wdev) 10014 + struct sk_buff *nl80211_build_scan_msg(struct cfg80211_registered_device *rdev, 10015 + struct wireless_dev *wdev, bool aborted) 10017 10016 { 10018 10017 struct sk_buff *msg; 10019 10018 10020 10019 msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); 10021 10020 if (!msg) 10022 - return; 10021 + return NULL; 10023 10022 10024 10023 if (nl80211_send_scan_msg(msg, rdev, wdev, 0, 0, 0, 10025 - NL80211_CMD_NEW_SCAN_RESULTS) < 0) { 10024 + aborted ? NL80211_CMD_SCAN_ABORTED : 10025 + NL80211_CMD_NEW_SCAN_RESULTS) < 0) { 10026 10026 nlmsg_free(msg); 10027 - return; 10027 + return NULL; 10028 10028 } 10029 10029 10030 - genlmsg_multicast_netns(&nl80211_fam, wiphy_net(&rdev->wiphy), msg, 0, 10031 - NL80211_MCGRP_SCAN, GFP_KERNEL); 10030 + return msg; 10032 10031 } 10033 10032 10034 - void nl80211_send_scan_aborted(struct cfg80211_registered_device *rdev, 10035 - struct wireless_dev *wdev) 10033 + void nl80211_send_scan_result(struct cfg80211_registered_device *rdev, 10034 + struct sk_buff *msg) 10036 10035 { 10037 - struct sk_buff *msg; 10038 - 10039 - msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); 10040 10036 if (!msg) 10041 10037 return; 10042 - 10043 - if (nl80211_send_scan_msg(msg, rdev, wdev, 0, 0, 0, 10044 - NL80211_CMD_SCAN_ABORTED) < 0) { 10045 - nlmsg_free(msg); 10046 - return; 10047 - } 10048 10038 10049 10039 genlmsg_multicast_netns(&nl80211_fam, wiphy_net(&rdev->wiphy), msg, 0, 10050 10040 NL80211_MCGRP_SCAN, GFP_KERNEL);
+4 -4
net/wireless/nl80211.h
··· 8 8 void nl80211_notify_dev_rename(struct cfg80211_registered_device *rdev); 9 9 void nl80211_send_scan_start(struct cfg80211_registered_device *rdev, 10 10 struct wireless_dev *wdev); 11 - void nl80211_send_scan_done(struct cfg80211_registered_device *rdev, 12 - struct wireless_dev *wdev); 13 - void nl80211_send_scan_aborted(struct cfg80211_registered_device *rdev, 14 - struct wireless_dev *wdev); 11 + struct sk_buff *nl80211_build_scan_msg(struct cfg80211_registered_device *rdev, 12 + struct wireless_dev *wdev, bool aborted); 13 + void nl80211_send_scan_result(struct cfg80211_registered_device *rdev, 14 + struct sk_buff *msg); 15 15 void nl80211_send_sched_scan(struct cfg80211_registered_device *rdev, 16 16 struct net_device *netdev, u32 cmd); 17 17 void nl80211_send_sched_scan_results(struct cfg80211_registered_device *rdev,
+25 -15
net/wireless/scan.c
··· 161 161 dev->bss_generation++; 162 162 } 163 163 164 - void ___cfg80211_scan_done(struct cfg80211_registered_device *rdev) 164 + void ___cfg80211_scan_done(struct cfg80211_registered_device *rdev, 165 + bool send_message) 165 166 { 166 167 struct cfg80211_scan_request *request; 167 168 struct wireless_dev *wdev; 169 + struct sk_buff *msg; 168 170 #ifdef CONFIG_CFG80211_WEXT 169 171 union iwreq_data wrqu; 170 172 #endif 171 173 172 174 ASSERT_RTNL(); 173 175 174 - request = rdev->scan_req; 176 + if (rdev->scan_msg) { 177 + nl80211_send_scan_result(rdev, rdev->scan_msg); 178 + rdev->scan_msg = NULL; 179 + return; 180 + } 175 181 182 + request = rdev->scan_req; 176 183 if (!request) 177 184 return; 178 185 ··· 193 186 if (wdev->netdev) 194 187 cfg80211_sme_scan_done(wdev->netdev); 195 188 196 - if (request->aborted) { 197 - nl80211_send_scan_aborted(rdev, wdev); 198 - } else { 199 - if (request->flags & NL80211_SCAN_FLAG_FLUSH) { 200 - /* flush entries from previous scans */ 201 - spin_lock_bh(&rdev->bss_lock); 202 - __cfg80211_bss_expire(rdev, request->scan_start); 203 - spin_unlock_bh(&rdev->bss_lock); 204 - } 205 - nl80211_send_scan_done(rdev, wdev); 189 + if (!request->aborted && 190 + request->flags & NL80211_SCAN_FLAG_FLUSH) { 191 + /* flush entries from previous scans */ 192 + spin_lock_bh(&rdev->bss_lock); 193 + __cfg80211_bss_expire(rdev, request->scan_start); 194 + spin_unlock_bh(&rdev->bss_lock); 206 195 } 196 + 197 + msg = nl80211_build_scan_msg(rdev, wdev, request->aborted); 207 198 208 199 #ifdef CONFIG_CFG80211_WEXT 209 200 if (wdev->netdev && !request->aborted) { ··· 216 211 217 212 rdev->scan_req = NULL; 218 213 kfree(request); 214 + 215 + if (!send_message) 216 + rdev->scan_msg = msg; 217 + else 218 + nl80211_send_scan_result(rdev, msg); 219 219 } 220 220 221 221 void __cfg80211_scan_done(struct work_struct *wk) ··· 231 221 scan_done_wk); 232 222 233 223 rtnl_lock(); 234 - ___cfg80211_scan_done(rdev); 224 + ___cfg80211_scan_done(rdev, true); 235 225 rtnl_unlock(); 236 226 ··· 1089 1079 if (IS_ERR(rdev)) 1090 1080 return PTR_ERR(rdev); 1091 1081 1092 - if (rdev->scan_req) { 1082 + if (rdev->scan_req || rdev->scan_msg) { 1093 1083 err = -EBUSY; 1094 1084 goto out; 1095 1085 } ··· 1491 1481 if (IS_ERR(rdev)) 1492 1482 return PTR_ERR(rdev); 1493 1483 1494 - if (rdev->scan_req) 1484 + if (rdev->scan_req || rdev->scan_msg) 1495 1485 return -EAGAIN; 1496 1486 1497 1487 res = ieee80211_scan_results(rdev, info, extra, data->length);
+1 -1
net/wireless/sme.c
··· 67 67 ASSERT_RDEV_LOCK(rdev); 68 68 ASSERT_WDEV_LOCK(wdev); 69 69 70 - if (rdev->scan_req) 70 + if (rdev->scan_req || rdev->scan_msg) 71 71 return -EBUSY; 72 72 73 73 if (wdev->conn->params.channel)