Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Pull networking fixes from David Miller:

1) Don't ignore user initiated wireless regulatory settings on cards
with custom regulatory domains, from Arik Nemtsov.

2) Fix length check of bluetooth information responses, from Jaganath
Kanakkassery.

3) Fix misuse of PTR_ERR in btusb, from Adam Lee.

4) Handle rfkill properly while iwlwifi devices are offline, from
Emmanuel Grumbach.

5) Fix r815x devices DMA'ing to stack buffers, from Hayes Wang.

6) Kernel info leak in ATM packet scheduler, from Dan Carpenter.

7) 8139cp doesn't check for DMA mapping errors, from Neil Horman.

8) Fix bridge multicast code to not snoop when no querier exists,
otherwise multicast traffic is lost. From Linus Lüssing.


9) Avoid soft lockups in fib6_run_gc(), from Michal Kubecek.

10) Fix races in automatic address assignment on ipv6, which can result
in incorrect lifetime assignments. From Jiri Benc.

11) Cure build bustage when CONFIG_NET_LL_RX_POLL is not set and rename
it to CONFIG_NET_RX_BUSY_POLL to eliminate the last reference to the
original naming of this feature. From Cong Wang.

12) Fix crash in TIPC when server socket creation fails, from Ying Xue.

13) macvlan_changelink() silently succeeds when it shouldn't, from
Michael S. Tsirkin.

14) HTB packet scheduler can crash due to sign extension, fix from
Stephen Hemminger.

15) With the cable unplugged, r8169 prints out a message every 10
seconds, make it netif_dbg() instead of netif_warn(). From Peter
Wu.

16) Fix memory leak in rtm_to_ifaddr(), from Daniel Borkmann.

17) sis900 gets spurious TX queue timeouts due to mismanagement of link
carrier state, from Denis Kirjanov.

18) Validate somaxconn sysctl to make sure it fits inside of a u16.
From Roman Gushchin.

19) Fix MAC address filtering on qlcnic, from Shahed Shaikh.

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (68 commits)
qlcnic: Fix for flash update failure on 83xx adapter
qlcnic: Fix link speed and duplex display for 83xx adapter
qlcnic: Fix link speed display for 82xx adapter
qlcnic: Fix external loopback test.
qlcnic: Removed adapter series name from warning messages.
qlcnic: Free up memory in error path.
qlcnic: Fix ingress MAC learning
qlcnic: Fix MAC address filter issue on 82xx adapter
net: ethernet: davinci_emac: drop IRQF_DISABLED
netlabel: use domain based selectors when address based selectors are not available
net: check net.core.somaxconn sysctl values
sis900: Fix the tx queue timeout issue
net: rtm_to_ifaddr: free ifa if ifa_cacheinfo processing fails
r8169: remove "PHY reset until link up" log spam
net: ethernet: cpsw: drop IRQF_DISABLED
htb: fix sign extension bug
macvlan: handle set_promiscuity failures
macvlan: better mode validation
tipc: fix oops when creating server socket fails
net: rename CONFIG_NET_LL_RX_POLL to CONFIG_NET_RX_BUSY_POLL
...

+2 -2
Documentation/sysctl/net.txt
··· 52 52 53 53 busy_read 54 54 ---------------- 55 - Low latency busy poll timeout for socket reads. (needs CONFIG_NET_LL_RX_POLL) 55 + Low latency busy poll timeout for socket reads. (needs CONFIG_NET_RX_BUSY_POLL) 56 56 Approximate time in us to busy loop waiting for packets on the device queue. 57 57 This sets the default value of the SO_BUSY_POLL socket option. 58 58 Can be set or overridden per socket by setting socket option SO_BUSY_POLL, ··· 63 63 64 64 busy_poll 65 65 ---------------- 66 - Low latency busy poll timeout for poll and select. (needs CONFIG_NET_LL_RX_POLL) 66 + Low latency busy poll timeout for poll and select. (needs CONFIG_NET_RX_BUSY_POLL) 67 67 Approximate time in us to busy loop waiting for events. 68 68 Recommended value depends on the number of sockets you poll on. 69 69 For several sockets 50, for several hundreds 100.
+10 -2
MAINTAINERS
··· 1406 1406 M: Kalle Valo <kvalo@qca.qualcomm.com> 1407 1407 L: linux-wireless@vger.kernel.org 1408 1408 W: http://wireless.kernel.org/en/users/Drivers/ath6kl 1409 - T: git git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/ath6kl.git 1409 + T: git git://github.com/kvalo/ath.git 1410 1410 S: Supported 1411 1411 F: drivers/net/wireless/ath/ath6kl/ 1412 1412 ··· 6726 6726 S: Maintained 6727 6727 F: drivers/media/tuners/qt1010* 6728 6728 6729 + QUALCOMM ATHEROS ATH10K WIRELESS DRIVER 6730 + M: Kalle Valo <kvalo@qca.qualcomm.com> 6731 + L: ath10k@lists.infradead.org 6732 + W: http://wireless.kernel.org/en/users/Drivers/ath10k 6733 + T: git git://github.com/kvalo/ath.git 6734 + S: Supported 6735 + F: drivers/net/wireless/ath/ath10k/ 6736 + 6729 6737 QUALCOMM HEXAGON ARCHITECTURE 6730 6738 M: Richard Kuo <rkuo@codeaurora.org> 6731 6739 L: linux-hexagon@vger.kernel.org ··· 8278 8270 F: sound/soc/codecs/twl4030* 8279 8271 8280 8272 TI WILINK WIRELESS DRIVERS 8281 - M: Luciano Coelho <coelho@ti.com> 8273 + M: Luciano Coelho <luca@coelho.fi> 8282 8274 L: linux-wireless@vger.kernel.org 8283 8275 W: http://wireless.kernel.org/en/users/Drivers/wl12xx 8284 8276 W: http://wireless.kernel.org/en/users/Drivers/wl1251
+37 -9
drivers/bluetooth/ath3k.c
··· 91 91 { USB_DEVICE(0x0489, 0xe04e) }, 92 92 { USB_DEVICE(0x0489, 0xe056) }, 93 93 { USB_DEVICE(0x0489, 0xe04d) }, 94 + { USB_DEVICE(0x04c5, 0x1330) }, 95 + { USB_DEVICE(0x13d3, 0x3402) }, 96 + { USB_DEVICE(0x0cf3, 0x3121) }, 97 + { USB_DEVICE(0x0cf3, 0xe003) }, 94 98 95 99 /* Atheros AR5BBU12 with sflash firmware */ 96 100 { USB_DEVICE(0x0489, 0xE02C) }, ··· 132 128 { USB_DEVICE(0x0489, 0xe04e), .driver_info = BTUSB_ATH3012 }, 133 129 { USB_DEVICE(0x0489, 0xe056), .driver_info = BTUSB_ATH3012 }, 134 130 { USB_DEVICE(0x0489, 0xe04d), .driver_info = BTUSB_ATH3012 }, 131 + { USB_DEVICE(0x04c5, 0x1330), .driver_info = BTUSB_ATH3012 }, 132 + { USB_DEVICE(0x13d3, 0x3402), .driver_info = BTUSB_ATH3012 }, 133 + { USB_DEVICE(0x0cf3, 0x3121), .driver_info = BTUSB_ATH3012 }, 134 + { USB_DEVICE(0x0cf3, 0xe003), .driver_info = BTUSB_ATH3012 }, 135 135 136 136 /* Atheros AR5BBU22 with sflash firmware */ 137 137 { USB_DEVICE(0x0489, 0xE03C), .driver_info = BTUSB_ATH3012 }, ··· 201 193 202 194 static int ath3k_get_state(struct usb_device *udev, unsigned char *state) 203 195 { 204 - int pipe = 0; 196 + int ret, pipe = 0; 197 + char *buf; 198 + 199 + buf = kmalloc(sizeof(*buf), GFP_KERNEL); 200 + if (!buf) 201 + return -ENOMEM; 205 202 206 203 pipe = usb_rcvctrlpipe(udev, 0); 207 - return usb_control_msg(udev, pipe, ATH3K_GETSTATE, 208 - USB_TYPE_VENDOR | USB_DIR_IN, 0, 0, 209 - state, 0x01, USB_CTRL_SET_TIMEOUT); 204 + ret = usb_control_msg(udev, pipe, ATH3K_GETSTATE, 205 + USB_TYPE_VENDOR | USB_DIR_IN, 0, 0, 206 + buf, sizeof(*buf), USB_CTRL_SET_TIMEOUT); 207 + 208 + *state = *buf; 209 + kfree(buf); 210 + 211 + return ret; 210 212 } 211 213 212 214 static int ath3k_get_version(struct usb_device *udev, 213 215 struct ath3k_version *version) 214 216 { 215 - int pipe = 0; 217 + int ret, pipe = 0; 218 + struct ath3k_version *buf; 219 + const int size = sizeof(*buf); 220 + 221 + buf = kmalloc(size, GFP_KERNEL); 222 + if (!buf) 223 + return -ENOMEM; 216 224 217 225 pipe = 
usb_rcvctrlpipe(udev, 0); 218 - return usb_control_msg(udev, pipe, ATH3K_GETVERSION, 219 - USB_TYPE_VENDOR | USB_DIR_IN, 0, 0, version, 220 - sizeof(struct ath3k_version), 221 - USB_CTRL_SET_TIMEOUT); 226 + ret = usb_control_msg(udev, pipe, ATH3K_GETVERSION, 227 + USB_TYPE_VENDOR | USB_DIR_IN, 0, 0, 228 + buf, size, USB_CTRL_SET_TIMEOUT); 229 + 230 + memcpy(version, buf, size); 231 + kfree(buf); 232 + 233 + return ret; 222 234 } 223 235 224 236 static int ath3k_load_fwfile(struct usb_device *udev,
+11 -7
drivers/bluetooth/btusb.c
··· 154 154 { USB_DEVICE(0x0489, 0xe04e), .driver_info = BTUSB_ATH3012 }, 155 155 { USB_DEVICE(0x0489, 0xe056), .driver_info = BTUSB_ATH3012 }, 156 156 { USB_DEVICE(0x0489, 0xe04d), .driver_info = BTUSB_ATH3012 }, 157 + { USB_DEVICE(0x04c5, 0x1330), .driver_info = BTUSB_ATH3012 }, 158 + { USB_DEVICE(0x13d3, 0x3402), .driver_info = BTUSB_ATH3012 }, 159 + { USB_DEVICE(0x0cf3, 0x3121), .driver_info = BTUSB_ATH3012 }, 160 + { USB_DEVICE(0x0cf3, 0xe003), .driver_info = BTUSB_ATH3012 }, 157 161 158 162 /* Atheros AR5BBU12 with sflash firmware */ 159 163 { USB_DEVICE(0x0489, 0xe02c), .driver_info = BTUSB_IGNORE }, ··· 1099 1095 if (IS_ERR(skb)) { 1100 1096 BT_ERR("%s sending Intel patch command (0x%4.4x) failed (%ld)", 1101 1097 hdev->name, cmd->opcode, PTR_ERR(skb)); 1102 - return -PTR_ERR(skb); 1098 + return PTR_ERR(skb); 1103 1099 } 1104 1100 1105 1101 /* It ensures that the returned event matches the event data read from ··· 1151 1147 if (IS_ERR(skb)) { 1152 1148 BT_ERR("%s sending initial HCI reset command failed (%ld)", 1153 1149 hdev->name, PTR_ERR(skb)); 1154 - return -PTR_ERR(skb); 1150 + return PTR_ERR(skb); 1155 1151 } 1156 1152 kfree_skb(skb); 1157 1153 ··· 1165 1161 if (IS_ERR(skb)) { 1166 1162 BT_ERR("%s reading Intel fw version command failed (%ld)", 1167 1163 hdev->name, PTR_ERR(skb)); 1168 - return -PTR_ERR(skb); 1164 + return PTR_ERR(skb); 1169 1165 } 1170 1166 1171 1167 if (skb->len != sizeof(*ver)) { ··· 1223 1219 BT_ERR("%s entering Intel manufacturer mode failed (%ld)", 1224 1220 hdev->name, PTR_ERR(skb)); 1225 1221 release_firmware(fw); 1226 - return -PTR_ERR(skb); 1222 + return PTR_ERR(skb); 1227 1223 } 1228 1224 1229 1225 if (skb->data[0]) { ··· 1280 1276 if (IS_ERR(skb)) { 1281 1277 BT_ERR("%s exiting Intel manufacturer mode failed (%ld)", 1282 1278 hdev->name, PTR_ERR(skb)); 1283 - return -PTR_ERR(skb); 1279 + return PTR_ERR(skb); 1284 1280 } 1285 1281 kfree_skb(skb); 1286 1282 ··· 1296 1292 if (IS_ERR(skb)) { 1297 1293 BT_ERR("%s exiting Intel 
manufacturer mode failed (%ld)", 1298 1294 hdev->name, PTR_ERR(skb)); 1299 - return -PTR_ERR(skb); 1295 + return PTR_ERR(skb); 1300 1296 } 1301 1297 kfree_skb(skb); 1302 1298 ··· 1314 1310 if (IS_ERR(skb)) { 1315 1311 BT_ERR("%s exiting Intel manufacturer mode failed (%ld)", 1316 1312 hdev->name, PTR_ERR(skb)); 1317 - return -PTR_ERR(skb); 1313 + return PTR_ERR(skb); 1318 1314 } 1319 1315 kfree_skb(skb); 1320 1316
+4 -4
drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
··· 486 486 487 487 struct napi_struct napi; 488 488 489 - #ifdef CONFIG_NET_LL_RX_POLL 489 + #ifdef CONFIG_NET_RX_BUSY_POLL 490 490 unsigned int state; 491 491 #define BNX2X_FP_STATE_IDLE 0 492 492 #define BNX2X_FP_STATE_NAPI (1 << 0) /* NAPI owns this FP */ ··· 498 498 #define BNX2X_FP_USER_PEND (BNX2X_FP_STATE_POLL | BNX2X_FP_STATE_POLL_YIELD) 499 499 /* protect state */ 500 500 spinlock_t lock; 501 - #endif /* CONFIG_NET_LL_RX_POLL */ 501 + #endif /* CONFIG_NET_RX_BUSY_POLL */ 502 502 503 503 union host_hc_status_block status_blk; 504 504 /* chip independent shortcuts into sb structure */ ··· 572 572 #define bnx2x_fp_stats(bp, fp) (&((bp)->fp_stats[(fp)->index])) 573 573 #define bnx2x_fp_qstats(bp, fp) (&((bp)->fp_stats[(fp)->index].eth_q_stats)) 574 574 575 - #ifdef CONFIG_NET_LL_RX_POLL 575 + #ifdef CONFIG_NET_RX_BUSY_POLL 576 576 static inline void bnx2x_fp_init_lock(struct bnx2x_fastpath *fp) 577 577 { 578 578 spin_lock_init(&fp->lock); ··· 680 680 { 681 681 return false; 682 682 } 683 - #endif /* CONFIG_NET_LL_RX_POLL */ 683 + #endif /* CONFIG_NET_RX_BUSY_POLL */ 684 684 685 685 /* Use 2500 as a mini-jumbo MTU for FCoE */ 686 686 #define BNX2X_FCOE_MINI_JUMBO_MTU 2500
+1 -1
drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
··· 3117 3117 return work_done; 3118 3118 } 3119 3119 3120 - #ifdef CONFIG_NET_LL_RX_POLL 3120 + #ifdef CONFIG_NET_RX_BUSY_POLL 3121 3121 /* must be called with local_bh_disable()d */ 3122 3122 int bnx2x_low_latency_recv(struct napi_struct *napi) 3123 3123 {
+1 -1
drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
··· 12026 12026 .ndo_fcoe_get_wwn = bnx2x_fcoe_get_wwn, 12027 12027 #endif 12028 12028 12029 - #ifdef CONFIG_NET_LL_RX_POLL 12029 + #ifdef CONFIG_NET_RX_BUSY_POLL 12030 12030 .ndo_busy_poll = bnx2x_low_latency_recv, 12031 12031 #endif 12032 12032 };
+6 -6
drivers/net/ethernet/intel/ixgbe/ixgbe.h
··· 54 54 55 55 #include <net/busy_poll.h> 56 56 57 - #ifdef CONFIG_NET_LL_RX_POLL 57 + #ifdef CONFIG_NET_RX_BUSY_POLL 58 58 #define LL_EXTENDED_STATS 59 59 #endif 60 60 /* common prefix used by pr_<> macros */ ··· 366 366 struct rcu_head rcu; /* to avoid race with update stats on free */ 367 367 char name[IFNAMSIZ + 9]; 368 368 369 - #ifdef CONFIG_NET_LL_RX_POLL 369 + #ifdef CONFIG_NET_RX_BUSY_POLL 370 370 unsigned int state; 371 371 #define IXGBE_QV_STATE_IDLE 0 372 372 #define IXGBE_QV_STATE_NAPI 1 /* NAPI owns this QV */ ··· 377 377 #define IXGBE_QV_YIELD (IXGBE_QV_STATE_NAPI_YIELD | IXGBE_QV_STATE_POLL_YIELD) 378 378 #define IXGBE_QV_USER_PEND (IXGBE_QV_STATE_POLL | IXGBE_QV_STATE_POLL_YIELD) 379 379 spinlock_t lock; 380 - #endif /* CONFIG_NET_LL_RX_POLL */ 380 + #endif /* CONFIG_NET_RX_BUSY_POLL */ 381 381 382 382 /* for dynamic allocation of rings associated with this q_vector */ 383 383 struct ixgbe_ring ring[0] ____cacheline_internodealigned_in_smp; 384 384 }; 385 - #ifdef CONFIG_NET_LL_RX_POLL 385 + #ifdef CONFIG_NET_RX_BUSY_POLL 386 386 static inline void ixgbe_qv_init_lock(struct ixgbe_q_vector *q_vector) 387 387 { 388 388 ··· 462 462 WARN_ON(!(q_vector->state & IXGBE_QV_LOCKED)); 463 463 return q_vector->state & IXGBE_QV_USER_PEND; 464 464 } 465 - #else /* CONFIG_NET_LL_RX_POLL */ 465 + #else /* CONFIG_NET_RX_BUSY_POLL */ 466 466 static inline void ixgbe_qv_init_lock(struct ixgbe_q_vector *q_vector) 467 467 { 468 468 } ··· 491 491 { 492 492 return false; 493 493 } 494 - #endif /* CONFIG_NET_LL_RX_POLL */ 494 + #endif /* CONFIG_NET_RX_BUSY_POLL */ 495 495 496 496 #ifdef CONFIG_IXGBE_HWMON 497 497
+3 -3
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
··· 1998 1998 return total_rx_packets; 1999 1999 } 2000 2000 2001 - #ifdef CONFIG_NET_LL_RX_POLL 2001 + #ifdef CONFIG_NET_RX_BUSY_POLL 2002 2002 /* must be called with local_bh_disable()d */ 2003 2003 static int ixgbe_low_latency_recv(struct napi_struct *napi) 2004 2004 { ··· 2030 2030 2031 2031 return found; 2032 2032 } 2033 - #endif /* CONFIG_NET_LL_RX_POLL */ 2033 + #endif /* CONFIG_NET_RX_BUSY_POLL */ 2034 2034 2035 2035 /** 2036 2036 * ixgbe_configure_msix - Configure MSI-X hardware ··· 7227 7227 #ifdef CONFIG_NET_POLL_CONTROLLER 7228 7228 .ndo_poll_controller = ixgbe_netpoll, 7229 7229 #endif 7230 - #ifdef CONFIG_NET_LL_RX_POLL 7230 + #ifdef CONFIG_NET_RX_BUSY_POLL 7231 7231 .ndo_busy_poll = ixgbe_low_latency_recv, 7232 7232 #endif 7233 7233 #ifdef IXGBE_FCOE
+3 -3
drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
··· 223 223 case ETH_SS_STATS: 224 224 return (priv->stats_bitmap ? bit_count : NUM_ALL_STATS) + 225 225 (priv->tx_ring_num * 2) + 226 - #ifdef CONFIG_NET_LL_RX_POLL 226 + #ifdef CONFIG_NET_RX_BUSY_POLL 227 227 (priv->rx_ring_num * 5); 228 228 #else 229 229 (priv->rx_ring_num * 2); ··· 276 276 for (i = 0; i < priv->rx_ring_num; i++) { 277 277 data[index++] = priv->rx_ring[i].packets; 278 278 data[index++] = priv->rx_ring[i].bytes; 279 - #ifdef CONFIG_NET_LL_RX_POLL 279 + #ifdef CONFIG_NET_RX_BUSY_POLL 280 280 data[index++] = priv->rx_ring[i].yields; 281 281 data[index++] = priv->rx_ring[i].misses; 282 282 data[index++] = priv->rx_ring[i].cleaned; ··· 344 344 "rx%d_packets", i); 345 345 sprintf(data + (index++) * ETH_GSTRING_LEN, 346 346 "rx%d_bytes", i); 347 - #ifdef CONFIG_NET_LL_RX_POLL 347 + #ifdef CONFIG_NET_RX_BUSY_POLL 348 348 sprintf(data + (index++) * ETH_GSTRING_LEN, 349 349 "rx%d_napi_yield", i); 350 350 sprintf(data + (index++) * ETH_GSTRING_LEN,
+3 -3
drivers/net/ethernet/mellanox/mlx4/en_netdev.c
··· 68 68 return 0; 69 69 } 70 70 71 - #ifdef CONFIG_NET_LL_RX_POLL 71 + #ifdef CONFIG_NET_RX_BUSY_POLL 72 72 /* must be called with local_bh_disable()d */ 73 73 static int mlx4_en_low_latency_recv(struct napi_struct *napi) 74 74 { ··· 94 94 95 95 return done; 96 96 } 97 - #endif /* CONFIG_NET_LL_RX_POLL */ 97 + #endif /* CONFIG_NET_RX_BUSY_POLL */ 98 98 99 99 #ifdef CONFIG_RFS_ACCEL 100 100 ··· 2140 2140 #ifdef CONFIG_RFS_ACCEL 2141 2141 .ndo_rx_flow_steer = mlx4_en_filter_rfs, 2142 2142 #endif 2143 - #ifdef CONFIG_NET_LL_RX_POLL 2143 + #ifdef CONFIG_NET_RX_BUSY_POLL 2144 2144 .ndo_busy_poll = mlx4_en_low_latency_recv, 2145 2145 #endif 2146 2146 };
+1 -10
drivers/net/ethernet/mellanox/mlx4/fw.c
··· 845 845 MLX4_CMD_NATIVE); 846 846 847 847 if (!err && dev->caps.function != slave) { 848 - /* if config MAC in DB use it */ 849 - if (priv->mfunc.master.vf_oper[slave].vport[vhcr->in_modifier].state.mac) 850 - def_mac = priv->mfunc.master.vf_oper[slave].vport[vhcr->in_modifier].state.mac; 851 - else { 852 - /* set slave default_mac address */ 853 - MLX4_GET(def_mac, outbox->buf, QUERY_PORT_MAC_OFFSET); 854 - def_mac += slave << 8; 855 - priv->mfunc.master.vf_admin[slave].vport[vhcr->in_modifier].mac = def_mac; 856 - } 857 - 848 + def_mac = priv->mfunc.master.vf_oper[slave].vport[vhcr->in_modifier].state.mac; 858 849 MLX4_PUT(outbox->buf, def_mac, QUERY_PORT_MAC_OFFSET); 859 850 860 851 /* get port type - currently only eth is enabled */
+1 -1
drivers/net/ethernet/mellanox/mlx4/main.c
··· 371 371 372 372 dev->caps.sqp_demux = (mlx4_is_master(dev)) ? MLX4_MAX_NUM_SLAVES : 0; 373 373 374 - if (!enable_64b_cqe_eqe) { 374 + if (!enable_64b_cqe_eqe && !mlx4_is_slave(dev)) { 375 375 if (dev_cap->flags & 376 376 (MLX4_DEV_CAP_FLAG_64B_CQE | MLX4_DEV_CAP_FLAG_64B_EQE)) { 377 377 mlx4_warn(dev, "64B EQEs/CQEs supported by the device but not enabled\n");
+5 -5
drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
··· 292 292 void *rx_info; 293 293 unsigned long bytes; 294 294 unsigned long packets; 295 - #ifdef CONFIG_NET_LL_RX_POLL 295 + #ifdef CONFIG_NET_RX_BUSY_POLL 296 296 unsigned long yields; 297 297 unsigned long misses; 298 298 unsigned long cleaned; ··· 318 318 struct mlx4_cqe *buf; 319 319 #define MLX4_EN_OPCODE_ERROR 0x1e 320 320 321 - #ifdef CONFIG_NET_LL_RX_POLL 321 + #ifdef CONFIG_NET_RX_BUSY_POLL 322 322 unsigned int state; 323 323 #define MLX4_EN_CQ_STATE_IDLE 0 324 324 #define MLX4_EN_CQ_STATE_NAPI 1 /* NAPI owns this CQ */ ··· 329 329 #define CQ_YIELD (MLX4_EN_CQ_STATE_NAPI_YIELD | MLX4_EN_CQ_STATE_POLL_YIELD) 330 330 #define CQ_USER_PEND (MLX4_EN_CQ_STATE_POLL | MLX4_EN_CQ_STATE_POLL_YIELD) 331 331 spinlock_t poll_lock; /* protects from LLS/napi conflicts */ 332 - #endif /* CONFIG_NET_LL_RX_POLL */ 332 + #endif /* CONFIG_NET_RX_BUSY_POLL */ 333 333 }; 334 334 335 335 struct mlx4_en_port_profile { ··· 580 580 struct rcu_head rcu; 581 581 }; 582 582 583 - #ifdef CONFIG_NET_LL_RX_POLL 583 + #ifdef CONFIG_NET_RX_BUSY_POLL 584 584 static inline void mlx4_en_cq_init_lock(struct mlx4_en_cq *cq) 585 585 { 586 586 spin_lock_init(&cq->poll_lock); ··· 687 687 { 688 688 return false; 689 689 } 690 - #endif /* CONFIG_NET_LL_RX_POLL */ 690 + #endif /* CONFIG_NET_RX_BUSY_POLL */ 691 691 692 692 #define MLX4_EN_WOL_DO_MODIFY (1ULL << 63) 693 693
+3 -9
drivers/net/ethernet/qlogic/qlcnic/qlcnic.h
··· 1400 1400 #define ADDR_IN_RANGE(addr, low, high) \ 1401 1401 (((addr) < (high)) && ((addr) >= (low))) 1402 1402 1403 - #define QLCRD32(adapter, off) \ 1404 - (adapter->ahw->hw_ops->read_reg)(adapter, off) 1403 + #define QLCRD32(adapter, off, err) \ 1404 + (adapter->ahw->hw_ops->read_reg)(adapter, off, err) 1405 1405 1406 1406 #define QLCWR32(adapter, off, val) \ 1407 1407 adapter->ahw->hw_ops->write_reg(adapter, off, val) ··· 1604 1604 struct qlcnic_hardware_ops { 1605 1605 void (*read_crb) (struct qlcnic_adapter *, char *, loff_t, size_t); 1606 1606 void (*write_crb) (struct qlcnic_adapter *, char *, loff_t, size_t); 1607 - int (*read_reg) (struct qlcnic_adapter *, ulong); 1607 + int (*read_reg) (struct qlcnic_adapter *, ulong, int *); 1608 1608 int (*write_reg) (struct qlcnic_adapter *, ulong, u32); 1609 1609 void (*get_ocm_win) (struct qlcnic_hardware_context *); 1610 1610 int (*get_mac_address) (struct qlcnic_adapter *, u8 *); ··· 1660 1660 loff_t offset, size_t size) 1661 1661 { 1662 1662 adapter->ahw->hw_ops->write_crb(adapter, buf, offset, size); 1663 - } 1664 - 1665 - static inline int qlcnic_hw_read_wx_2M(struct qlcnic_adapter *adapter, 1666 - ulong off) 1667 - { 1668 - return adapter->ahw->hw_ops->read_reg(adapter, off); 1669 1663 } 1670 1664 1671 1665 static inline int qlcnic_hw_write_wx_2M(struct qlcnic_adapter *adapter,
+76 -50
drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c
··· 228 228 return 0; 229 229 } 230 230 231 - int qlcnic_83xx_rd_reg_indirect(struct qlcnic_adapter *adapter, ulong addr) 231 + int qlcnic_83xx_rd_reg_indirect(struct qlcnic_adapter *adapter, ulong addr, 232 + int *err) 232 233 { 233 - int ret; 234 234 struct qlcnic_hardware_context *ahw = adapter->ahw; 235 235 236 - ret = __qlcnic_set_win_base(adapter, (u32) addr); 237 - if (!ret) { 236 + *err = __qlcnic_set_win_base(adapter, (u32) addr); 237 + if (!*err) { 238 238 return QLCRDX(ahw, QLCNIC_WILDCARD); 239 239 } else { 240 240 dev_err(&adapter->pdev->dev, 241 - "%s failed, addr = 0x%x\n", __func__, (int)addr); 241 + "%s failed, addr = 0x%lx\n", __func__, addr); 242 242 return -EIO; 243 243 } 244 244 } ··· 561 561 void qlcnic_83xx_read_crb(struct qlcnic_adapter *adapter, char *buf, 562 562 loff_t offset, size_t size) 563 563 { 564 - int ret; 564 + int ret = 0; 565 565 u32 data; 566 566 567 567 if (qlcnic_api_lock(adapter)) { ··· 571 571 return; 572 572 } 573 573 574 - ret = qlcnic_83xx_rd_reg_indirect(adapter, (u32) offset); 574 + data = QLCRD32(adapter, (u32) offset, &ret); 575 575 qlcnic_api_unlock(adapter); 576 576 577 577 if (ret == -EIO) { ··· 580 580 __func__, (u32)offset); 581 581 return; 582 582 } 583 - data = ret; 584 583 memcpy(buf, &data, size); 585 584 } 586 585 ··· 2074 2075 static void qlcnic_83xx_handle_link_aen(struct qlcnic_adapter *adapter, 2075 2076 u32 data[]) 2076 2077 { 2078 + struct qlcnic_hardware_context *ahw = adapter->ahw; 2077 2079 u8 link_status, duplex; 2078 2080 /* link speed */ 2079 2081 link_status = LSB(data[3]) & 1; 2080 - adapter->ahw->link_speed = MSW(data[2]); 2081 - adapter->ahw->link_autoneg = MSB(MSW(data[3])); 2082 - adapter->ahw->module_type = MSB(LSW(data[3])); 2083 - duplex = LSB(MSW(data[3])); 2084 - if (duplex) 2085 - adapter->ahw->link_duplex = DUPLEX_FULL; 2086 - else 2087 - adapter->ahw->link_duplex = DUPLEX_HALF; 2088 - adapter->ahw->has_link_events = 1; 2082 + if (link_status) { 2083 + ahw->link_speed = 
MSW(data[2]); 2084 + duplex = LSB(MSW(data[3])); 2085 + if (duplex) 2086 + ahw->link_duplex = DUPLEX_FULL; 2087 + else 2088 + ahw->link_duplex = DUPLEX_HALF; 2089 + } else { 2090 + ahw->link_speed = SPEED_UNKNOWN; 2091 + ahw->link_duplex = DUPLEX_UNKNOWN; 2092 + } 2093 + 2094 + ahw->link_autoneg = MSB(MSW(data[3])); 2095 + ahw->module_type = MSB(LSW(data[3])); 2096 + ahw->has_link_events = 1; 2089 2097 qlcnic_advert_link_change(adapter, link_status); 2090 2098 } 2091 2099 ··· 2390 2384 u32 flash_addr, u8 *p_data, 2391 2385 int count) 2392 2386 { 2393 - int i, ret; 2394 - u32 word, range, flash_offset, addr = flash_addr; 2387 + u32 word, range, flash_offset, addr = flash_addr, ret; 2395 2388 ulong indirect_add, direct_window; 2389 + int i, err = 0; 2396 2390 2397 2391 flash_offset = addr & (QLCNIC_FLASH_SECTOR_SIZE - 1); 2398 2392 if (addr & 0x3) { ··· 2410 2404 /* Multi sector read */ 2411 2405 for (i = 0; i < count; i++) { 2412 2406 indirect_add = QLC_83XX_FLASH_DIRECT_DATA(addr); 2413 - ret = qlcnic_83xx_rd_reg_indirect(adapter, 2414 - indirect_add); 2415 - if (ret == -EIO) 2416 - return -EIO; 2407 + ret = QLCRD32(adapter, indirect_add, &err); 2408 + if (err == -EIO) 2409 + return err; 2417 2410 2418 2411 word = ret; 2419 2412 *(u32 *)p_data = word; ··· 2433 2428 /* Single sector read */ 2434 2429 for (i = 0; i < count; i++) { 2435 2430 indirect_add = QLC_83XX_FLASH_DIRECT_DATA(addr); 2436 - ret = qlcnic_83xx_rd_reg_indirect(adapter, 2437 - indirect_add); 2438 - if (ret == -EIO) 2439 - return -EIO; 2431 + ret = QLCRD32(adapter, indirect_add, &err); 2432 + if (err == -EIO) 2433 + return err; 2440 2434 2441 2435 word = ret; 2442 2436 *(u32 *)p_data = word; ··· 2451 2447 { 2452 2448 u32 status; 2453 2449 int retries = QLC_83XX_FLASH_READ_RETRY_COUNT; 2450 + int err = 0; 2454 2451 2455 2452 do { 2456 - status = qlcnic_83xx_rd_reg_indirect(adapter, 2457 - QLC_83XX_FLASH_STATUS); 2453 + status = QLCRD32(adapter, QLC_83XX_FLASH_STATUS, &err); 2454 + if (err == -EIO) 
2455 + return err; 2456 + 2458 2457 if ((status & QLC_83XX_FLASH_STATUS_READY) == 2459 2458 QLC_83XX_FLASH_STATUS_READY) 2460 2459 break; ··· 2509 2502 2510 2503 int qlcnic_83xx_read_flash_mfg_id(struct qlcnic_adapter *adapter) 2511 2504 { 2512 - int ret, mfg_id; 2505 + int ret, err = 0; 2506 + u32 mfg_id; 2513 2507 2514 2508 if (qlcnic_83xx_lock_flash(adapter)) 2515 2509 return -EIO; ··· 2525 2517 return -EIO; 2526 2518 } 2527 2519 2528 - mfg_id = qlcnic_83xx_rd_reg_indirect(adapter, QLC_83XX_FLASH_RDDATA); 2529 - if (mfg_id == -EIO) 2530 - return -EIO; 2520 + mfg_id = QLCRD32(adapter, QLC_83XX_FLASH_RDDATA, &err); 2521 + if (err == -EIO) { 2522 + qlcnic_83xx_unlock_flash(adapter); 2523 + return err; 2524 + } 2531 2525 2532 2526 adapter->flash_mfg_id = (mfg_id & 0xFF); 2533 2527 qlcnic_83xx_unlock_flash(adapter); ··· 2646 2636 u32 *p_data, int count) 2647 2637 { 2648 2638 u32 temp; 2649 - int ret = -EIO; 2639 + int ret = -EIO, err = 0; 2650 2640 2651 2641 if ((count < QLC_83XX_FLASH_WRITE_MIN) || 2652 2642 (count > QLC_83XX_FLASH_WRITE_MAX)) { ··· 2655 2645 return -EIO; 2656 2646 } 2657 2647 2658 - temp = qlcnic_83xx_rd_reg_indirect(adapter, 2659 - QLC_83XX_FLASH_SPI_CONTROL); 2648 + temp = QLCRD32(adapter, QLC_83XX_FLASH_SPI_CONTROL, &err); 2649 + if (err == -EIO) 2650 + return err; 2651 + 2660 2652 qlcnic_83xx_wrt_reg_indirect(adapter, QLC_83XX_FLASH_SPI_CONTROL, 2661 2653 (temp | QLC_83XX_FLASH_SPI_CTRL)); 2662 2654 qlcnic_83xx_wrt_reg_indirect(adapter, QLC_83XX_FLASH_ADDR, ··· 2707 2695 return -EIO; 2708 2696 } 2709 2697 2710 - ret = qlcnic_83xx_rd_reg_indirect(adapter, QLC_83XX_FLASH_SPI_STATUS); 2698 + ret = QLCRD32(adapter, QLC_83XX_FLASH_SPI_STATUS, &err); 2699 + if (err == -EIO) 2700 + return err; 2701 + 2711 2702 if ((ret & QLC_83XX_FLASH_SPI_CTRL) == QLC_83XX_FLASH_SPI_CTRL) { 2712 2703 dev_err(&adapter->pdev->dev, "%s: failed at %d\n", 2713 2704 __func__, __LINE__); 2714 2705 /* Operation failed, clear error bit */ 2715 - temp = 
qlcnic_83xx_rd_reg_indirect(adapter, 2716 - QLC_83XX_FLASH_SPI_CONTROL); 2706 + temp = QLCRD32(adapter, QLC_83XX_FLASH_SPI_CONTROL, &err); 2707 + if (err == -EIO) 2708 + return err; 2709 + 2717 2710 qlcnic_83xx_wrt_reg_indirect(adapter, 2718 2711 QLC_83XX_FLASH_SPI_CONTROL, 2719 2712 (temp | QLC_83XX_FLASH_SPI_CTRL)); ··· 2840 2823 { 2841 2824 int i, j, ret = 0; 2842 2825 u32 temp; 2826 + int err = 0; 2843 2827 2844 2828 /* Check alignment */ 2845 2829 if (addr & 0xF) ··· 2873 2855 QLCNIC_TA_WRITE_START); 2874 2856 2875 2857 for (j = 0; j < MAX_CTL_CHECK; j++) { 2876 - temp = qlcnic_83xx_rd_reg_indirect(adapter, 2877 - QLCNIC_MS_CTRL); 2858 + temp = QLCRD32(adapter, QLCNIC_MS_CTRL, &err); 2859 + if (err == -EIO) { 2860 + mutex_unlock(&adapter->ahw->mem_lock); 2861 + return err; 2862 + } 2863 + 2878 2864 if ((temp & TA_CTL_BUSY) == 0) 2879 2865 break; 2880 2866 } ··· 2900 2878 int qlcnic_83xx_flash_read32(struct qlcnic_adapter *adapter, u32 flash_addr, 2901 2879 u8 *p_data, int count) 2902 2880 { 2903 - int i, ret; 2904 - u32 word, addr = flash_addr; 2881 + u32 word, addr = flash_addr, ret; 2905 2882 ulong indirect_addr; 2883 + int i, err = 0; 2906 2884 2907 2885 if (qlcnic_83xx_lock_flash(adapter) != 0) 2908 2886 return -EIO; ··· 2922 2900 } 2923 2901 2924 2902 indirect_addr = QLC_83XX_FLASH_DIRECT_DATA(addr); 2925 - ret = qlcnic_83xx_rd_reg_indirect(adapter, 2926 - indirect_addr); 2927 - if (ret == -EIO) 2928 - return -EIO; 2903 + ret = QLCRD32(adapter, indirect_addr, &err); 2904 + if (err == -EIO) 2905 + return err; 2906 + 2929 2907 word = ret; 2930 2908 *(u32 *)p_data = word; 2931 2909 p_data = p_data + 4; ··· 3391 3369 3392 3370 static int qlcnic_83xx_read_flash_status_reg(struct qlcnic_adapter *adapter) 3393 3371 { 3394 - int ret; 3372 + int ret, err = 0; 3373 + u32 temp; 3395 3374 3396 3375 qlcnic_83xx_wrt_reg_indirect(adapter, QLC_83XX_FLASH_ADDR, 3397 3376 QLC_83XX_FLASH_OEM_READ_SIG); ··· 3402 3379 if (ret) 3403 3380 return -EIO; 3404 3381 3405 - ret = 
qlcnic_83xx_rd_reg_indirect(adapter, QLC_83XX_FLASH_RDDATA); 3406 - return ret & 0xFF; 3382 + temp = QLCRD32(adapter, QLC_83XX_FLASH_RDDATA, &err); 3383 + if (err == -EIO) 3384 + return err; 3385 + 3386 + return temp & 0xFF; 3407 3387 } 3408 3388 3409 3389 int qlcnic_83xx_flash_test(struct qlcnic_adapter *adapter)
+1 -1
drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.h
··· 508 508 void qlcnic_83xx_remove_sysfs(struct qlcnic_adapter *); 509 509 void qlcnic_83xx_write_crb(struct qlcnic_adapter *, char *, loff_t, size_t); 510 510 void qlcnic_83xx_read_crb(struct qlcnic_adapter *, char *, loff_t, size_t); 511 - int qlcnic_83xx_rd_reg_indirect(struct qlcnic_adapter *, ulong); 511 + int qlcnic_83xx_rd_reg_indirect(struct qlcnic_adapter *, ulong, int *); 512 512 int qlcnic_83xx_wrt_reg_indirect(struct qlcnic_adapter *, ulong, u32); 513 513 void qlcnic_83xx_process_rcv_diag(struct qlcnic_adapter *, int, u64 []); 514 514 int qlcnic_83xx_nic_set_promisc(struct qlcnic_adapter *, u32);
+61 -30
drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_init.c
··· 1303 1303 { 1304 1304 int i, j; 1305 1305 u32 val = 0, val1 = 0, reg = 0; 1306 + int err = 0; 1306 1307 1307 - val = QLCRD32(adapter, QLC_83XX_SRE_SHIM_REG); 1308 + val = QLCRD32(adapter, QLC_83XX_SRE_SHIM_REG, &err); 1309 + if (err == -EIO) 1310 + return; 1308 1311 dev_info(&adapter->pdev->dev, "SRE-Shim Ctrl:0x%x\n", val); 1309 1312 1310 1313 for (j = 0; j < 2; j++) { ··· 1321 1318 reg = QLC_83XX_PORT1_THRESHOLD; 1322 1319 } 1323 1320 for (i = 0; i < 8; i++) { 1324 - val = QLCRD32(adapter, reg + (i * 0x4)); 1321 + val = QLCRD32(adapter, reg + (i * 0x4), &err); 1322 + if (err == -EIO) 1323 + return; 1325 1324 dev_info(&adapter->pdev->dev, "0x%x ", val); 1326 1325 } 1327 1326 dev_info(&adapter->pdev->dev, "\n"); ··· 1340 1335 reg = QLC_83XX_PORT1_TC_MC_REG; 1341 1336 } 1342 1337 for (i = 0; i < 4; i++) { 1343 - val = QLCRD32(adapter, reg + (i * 0x4)); 1344 - dev_info(&adapter->pdev->dev, "0x%x ", val); 1338 + val = QLCRD32(adapter, reg + (i * 0x4), &err); 1339 + if (err == -EIO) 1340 + return; 1341 + dev_info(&adapter->pdev->dev, "0x%x ", val); 1345 1342 } 1346 1343 dev_info(&adapter->pdev->dev, "\n"); 1347 1344 } ··· 1359 1352 reg = QLC_83XX_PORT1_TC_STATS; 1360 1353 } 1361 1354 for (i = 7; i >= 0; i--) { 1362 - val = QLCRD32(adapter, reg); 1355 + val = QLCRD32(adapter, reg, &err); 1356 + if (err == -EIO) 1357 + return; 1363 1358 val &= ~(0x7 << 29); /* Reset bits 29 to 31 */ 1364 1359 QLCWR32(adapter, reg, (val | (i << 29))); 1365 - val = QLCRD32(adapter, reg); 1360 + val = QLCRD32(adapter, reg, &err); 1361 + if (err == -EIO) 1362 + return; 1366 1363 dev_info(&adapter->pdev->dev, "0x%x ", val); 1367 1364 } 1368 1365 dev_info(&adapter->pdev->dev, "\n"); 1369 1366 } 1370 1367 1371 - val = QLCRD32(adapter, QLC_83XX_PORT2_IFB_THRESHOLD); 1372 - val1 = QLCRD32(adapter, QLC_83XX_PORT3_IFB_THRESHOLD); 1368 + val = QLCRD32(adapter, QLC_83XX_PORT2_IFB_THRESHOLD, &err); 1369 + if (err == -EIO) 1370 + return; 1371 + val1 = QLCRD32(adapter, QLC_83XX_PORT3_IFB_THRESHOLD, 
+				  &err);
+	if (err == -EIO)
+		return;
 	dev_info(&adapter->pdev->dev,
 		 "IFB-Pause Thresholds: Port 2:0x%x, Port 3:0x%x\n",
 		 val, val1);
···
 static int qlcnic_83xx_check_heartbeat(struct qlcnic_adapter *p_dev)
 {
 	u32 heartbeat, peg_status;
-	int retries, ret = -EIO;
+	int retries, ret = -EIO, err = 0;
 
 	retries = QLCNIC_HEARTBEAT_CHECK_RETRY_COUNT;
 	p_dev->heartbeat = QLC_SHARED_REG_RD32(p_dev,
···
 			"PEG_NET_2_PC: 0x%x, PEG_NET_3_PC: 0x%x,\n"
 			"PEG_NET_4_PC: 0x%x\n", peg_status,
 			QLC_SHARED_REG_RD32(p_dev, QLCNIC_PEG_HALT_STATUS2),
-			QLCRD32(p_dev, QLC_83XX_CRB_PEG_NET_0),
-			QLCRD32(p_dev, QLC_83XX_CRB_PEG_NET_1),
-			QLCRD32(p_dev, QLC_83XX_CRB_PEG_NET_2),
-			QLCRD32(p_dev, QLC_83XX_CRB_PEG_NET_3),
-			QLCRD32(p_dev, QLC_83XX_CRB_PEG_NET_4));
+			QLCRD32(p_dev, QLC_83XX_CRB_PEG_NET_0, &err),
+			QLCRD32(p_dev, QLC_83XX_CRB_PEG_NET_1, &err),
+			QLCRD32(p_dev, QLC_83XX_CRB_PEG_NET_2, &err),
+			QLCRD32(p_dev, QLC_83XX_CRB_PEG_NET_3, &err),
+			QLCRD32(p_dev, QLC_83XX_CRB_PEG_NET_4, &err));
 
 	if (QLCNIC_FWERROR_CODE(peg_status) == 0x67)
 		dev_err(&p_dev->pdev->dev,
···
 static int qlcnic_83xx_poll_reg(struct qlcnic_adapter *p_dev, u32 addr,
 				int duration, u32 mask, u32 status)
 {
+	int timeout_error, err = 0;
 	u32 value;
-	int timeout_error;
 	u8 retries;
 
-	value = qlcnic_83xx_rd_reg_indirect(p_dev, addr);
+	value = QLCRD32(p_dev, addr, &err);
+	if (err == -EIO)
+		return err;
 	retries = duration / 10;
 
 	do {
 		if ((value & mask) != status) {
 			timeout_error = 1;
 			msleep(duration / 10);
-			value = qlcnic_83xx_rd_reg_indirect(p_dev, addr);
+			value = QLCRD32(p_dev, addr, &err);
+			if (err == -EIO)
+				return err;
 		} else {
 			timeout_error = 0;
 			break;
···
 static void qlcnic_83xx_read_write_crb_reg(struct qlcnic_adapter *p_dev,
 					   u32 raddr, u32 waddr)
 {
-	int value;
+	int err = 0;
+	u32 value;
 
-	value = qlcnic_83xx_rd_reg_indirect(p_dev, raddr);
+	value = QLCRD32(p_dev, raddr, &err);
+	if (err == -EIO)
+		return;
 	qlcnic_83xx_wrt_reg_indirect(p_dev, waddr, value);
 }
 
···
 					 u32 raddr, u32 waddr,
 					 struct qlc_83xx_rmw *p_rmw_hdr)
 {
-	int value;
+	int err = 0;
+	u32 value;
 
-	if (p_rmw_hdr->index_a)
+	if (p_rmw_hdr->index_a) {
 		value = p_dev->ahw->reset.array[p_rmw_hdr->index_a];
-	else
-		value = qlcnic_83xx_rd_reg_indirect(p_dev, raddr);
+	} else {
+		value = QLCRD32(p_dev, raddr, &err);
+		if (err == -EIO)
+			return;
+	}
 
 	value &= p_rmw_hdr->mask;
 	value <<= p_rmw_hdr->shl;
···
 	long delay;
 	struct qlc_83xx_entry *entry;
 	struct qlc_83xx_poll *poll;
-	int i;
+	int i, err = 0;
 	unsigned long arg1, arg2;
 
 	poll = (struct qlc_83xx_poll *)((char *)p_hdr +
···
 						  arg1, delay,
 						  poll->mask,
 						  poll->status)){
-				qlcnic_83xx_rd_reg_indirect(p_dev,
-							    arg1);
-				qlcnic_83xx_rd_reg_indirect(p_dev,
-							    arg2);
+				QLCRD32(p_dev, arg1, &err);
+				if (err == -EIO)
+					return;
+				QLCRD32(p_dev, arg2, &err);
+				if (err == -EIO)
+					return;
 			}
 		}
 	}
···
 				      struct qlc_83xx_entry_hdr *p_hdr)
 {
 	long delay;
-	int index, i, j;
+	int index, i, j, err;
 	struct qlc_83xx_quad_entry *entry;
 	struct qlc_83xx_poll *poll;
 	unsigned long addr;
···
 					  poll->mask, poll->status)){
 			index = p_dev->ahw->reset.array_index;
 			addr = entry->dr_addr;
-			j = qlcnic_83xx_rd_reg_indirect(p_dev, addr);
+			j = QLCRD32(p_dev, addr, &err);
+			if (err == -EIO)
+				return;
+
 			p_dev->ahw->reset.array[index++] = j;
 
 			if (index == QLC_83XX_MAX_RESET_SEQ_ENTRIES)
+8 -8
drivers/net/ethernet/qlogic/qlcnic/qlcnic_ctx.c
···
 qlcnic_poll_rsp(struct qlcnic_adapter *adapter)
 {
 	u32 rsp;
-	int timeout = 0;
+	int timeout = 0, err = 0;
 
 	do {
 		/* give atleast 1ms for firmware to respond */
···
 		if (++timeout > QLCNIC_OS_CRB_RETRY_COUNT)
 			return QLCNIC_CDRP_RSP_TIMEOUT;
 
-		rsp = QLCRD32(adapter, QLCNIC_CDRP_CRB_OFFSET);
+		rsp = QLCRD32(adapter, QLCNIC_CDRP_CRB_OFFSET, &err);
 	} while (!QLCNIC_CDRP_IS_RSP(rsp));
 
 	return rsp;
···
 int qlcnic_82xx_issue_cmd(struct qlcnic_adapter *adapter,
 			  struct qlcnic_cmd_args *cmd)
 {
-	int i;
+	int i, err = 0;
 	u32 rsp;
 	u32 signature;
 	struct pci_dev *pdev = adapter->pdev;
···
 		dev_err(&pdev->dev, "card response timeout.\n");
 		cmd->rsp.arg[0] = QLCNIC_RCODE_TIMEOUT;
 	} else if (rsp == QLCNIC_CDRP_RSP_FAIL) {
-		cmd->rsp.arg[0] = QLCRD32(adapter, QLCNIC_CDRP_ARG(1));
+		cmd->rsp.arg[0] = QLCRD32(adapter, QLCNIC_CDRP_ARG(1), &err);
 		switch (cmd->rsp.arg[0]) {
 		case QLCNIC_RCODE_INVALID_ARGS:
 			fmt = "CDRP invalid args: [%d]\n";
···
 		cmd->rsp.arg[0] = QLCNIC_RCODE_SUCCESS;
 
 	for (i = 1; i < cmd->rsp.num; i++)
-		cmd->rsp.arg[i] = QLCRD32(adapter, QLCNIC_CDRP_ARG(i));
+		cmd->rsp.arg[i] = QLCRD32(adapter, QLCNIC_CDRP_ARG(i), &err);
 
 	/* Release semaphore */
 	qlcnic_api_unlock(adapter);
···
 	if (err) {
 		dev_info(&adapter->pdev->dev,
 			 "Failed to set driver version in firmware\n");
-		return -EIO;
+		err = -EIO;
 	}
-
-	return 0;
+	qlcnic_free_mbx_args(&cmd);
+	return err;
 }
 
 int
+62 -21
drivers/net/ethernet/qlogic/qlcnic/qlcnic_ethtool.c
···
 	"Link_Test_on_offline",
 	"Interrupt_Test_offline",
 	"Internal_Loopback_offline",
+	"External_Loopback_offline",
 	"EEPROM_Test_offline"
 };
 
···
 {
 	struct qlcnic_hardware_context *ahw = adapter->ahw;
 	u32 speed, reg;
-	int check_sfp_module = 0;
+	int check_sfp_module = 0, err = 0;
 	u16 pcifn = ahw->pci_func;
 
 	/* read which mode */
···
 
 	} else if (adapter->ahw->port_type == QLCNIC_XGBE) {
 		u32 val = 0;
-		val = QLCRD32(adapter, QLCNIC_PORT_MODE_ADDR);
+		val = QLCRD32(adapter, QLCNIC_PORT_MODE_ADDR, &err);
 
 		if (val == QLCNIC_PORT_MODE_802_3_AP) {
 			ecmd->supported = SUPPORTED_1000baseT_Full;
···
 	}
 
 	if (netif_running(adapter->netdev) && ahw->has_link_events) {
-		reg = QLCRD32(adapter, P3P_LINK_SPEED_REG(pcifn));
-		speed = P3P_LINK_SPEED_VAL(pcifn, reg);
-		ahw->link_speed = speed * P3P_LINK_SPEED_MHZ;
+		if (ahw->linkup) {
+			reg = QLCRD32(adapter,
+				      P3P_LINK_SPEED_REG(pcifn), &err);
+			speed = P3P_LINK_SPEED_VAL(pcifn, reg);
+			ahw->link_speed = speed * P3P_LINK_SPEED_MHZ;
+		}
+
 		ethtool_cmd_speed_set(ecmd, ahw->link_speed);
 		ecmd->autoneg = ahw->link_autoneg;
 		ecmd->duplex = ahw->link_duplex;
···
 static int qlcnic_82xx_get_registers(struct qlcnic_adapter *adapter,
 				     u32 *regs_buff)
 {
-	int i, j = 0;
+	int i, j = 0, err = 0;
 
 	for (i = QLCNIC_DEV_INFO_SIZE + 1; diag_registers[j] != -1; j++, i++)
 		regs_buff[i] = QLC_SHARED_REG_RD32(adapter, diag_registers[j]);
 	j = 0;
 	while (ext_diag_registers[j] != -1)
-		regs_buff[i++] = QLCRD32(adapter, ext_diag_registers[j++]);
+		regs_buff[i++] = QLCRD32(adapter, ext_diag_registers[j++],
+					 &err);
 	return i;
 }
 
···
 static u32 qlcnic_test_link(struct net_device *dev)
 {
 	struct qlcnic_adapter *adapter = netdev_priv(dev);
+	int err = 0;
 	u32 val;
 
 	if (qlcnic_83xx_check(adapter)) {
 		val = qlcnic_83xx_test_link(adapter);
 		return (val & 1) ? 0 : 1;
 	}
-	val = QLCRD32(adapter, CRB_XG_STATE_P3P);
+	val = QLCRD32(adapter, CRB_XG_STATE_P3P, &err);
+	if (err == -EIO)
+		return err;
 	val = XG_LINK_STATE_P3P(adapter->ahw->pci_func, val);
 	return (val == XG_LINK_UP_P3P) ? 0 : 1;
 }
···
 {
 	struct qlcnic_adapter *adapter = netdev_priv(netdev);
 	int port = adapter->ahw->physical_port;
+	int err = 0;
 	__u32 val;
 
 	if (qlcnic_83xx_check(adapter)) {
···
 	if ((port < 0) || (port > QLCNIC_NIU_MAX_GBE_PORTS))
 		return;
 	/* get flow control settings */
-	val = QLCRD32(adapter, QLCNIC_NIU_GB_MAC_CONFIG_0(port));
+	val = QLCRD32(adapter, QLCNIC_NIU_GB_MAC_CONFIG_0(port), &err);
+	if (err == -EIO)
+		return;
 	pause->rx_pause = qlcnic_gb_get_rx_flowctl(val);
-	val = QLCRD32(adapter, QLCNIC_NIU_GB_PAUSE_CTL);
+	val = QLCRD32(adapter, QLCNIC_NIU_GB_PAUSE_CTL, &err);
+	if (err == -EIO)
+		return;
 	switch (port) {
 	case 0:
 		pause->tx_pause = !(qlcnic_gb_get_gb0_mask(val));
···
 	if ((port < 0) || (port > QLCNIC_NIU_MAX_XG_PORTS))
 		return;
 	pause->rx_pause = 1;
-	val = QLCRD32(adapter, QLCNIC_NIU_XG_PAUSE_CTL);
+	val = QLCRD32(adapter, QLCNIC_NIU_XG_PAUSE_CTL, &err);
+	if (err == -EIO)
+		return;
 	if (port == 0)
 		pause->tx_pause = !(qlcnic_xg_get_xg0_mask(val));
 	else
···
 {
 	struct qlcnic_adapter *adapter = netdev_priv(netdev);
 	int port = adapter->ahw->physical_port;
+	int err = 0;
 	__u32 val;
 
 	if (qlcnic_83xx_check(adapter))
···
 	if ((port < 0) || (port > QLCNIC_NIU_MAX_GBE_PORTS))
 		return -EIO;
 	/* set flow control */
-	val = QLCRD32(adapter, QLCNIC_NIU_GB_MAC_CONFIG_0(port));
+	val = QLCRD32(adapter, QLCNIC_NIU_GB_MAC_CONFIG_0(port), &err);
+	if (err == -EIO)
+		return err;
 
 	if (pause->rx_pause)
 		qlcnic_gb_rx_flowctl(val);
···
 			val);
 	QLCWR32(adapter, QLCNIC_NIU_GB_MAC_CONFIG_0(port), val);
 	/* set autoneg */
-	val = QLCRD32(adapter, QLCNIC_NIU_GB_PAUSE_CTL);
+	val = QLCRD32(adapter, QLCNIC_NIU_GB_PAUSE_CTL, &err);
+	if (err == -EIO)
+		return err;
 	switch (port) {
 	case 0:
 		if (pause->tx_pause)
···
 	if ((port < 0) || (port > QLCNIC_NIU_MAX_XG_PORTS))
 		return -EIO;
 
-	val = QLCRD32(adapter, QLCNIC_NIU_XG_PAUSE_CTL);
+	val = QLCRD32(adapter, QLCNIC_NIU_XG_PAUSE_CTL, &err);
+	if (err == -EIO)
+		return err;
 	if (port == 0) {
 		if (pause->tx_pause)
 			qlcnic_xg_unset_xg0_mask(val);
···
 {
 	struct qlcnic_adapter *adapter = netdev_priv(dev);
 	u32 data_read;
+	int err = 0;
 
 	if (qlcnic_83xx_check(adapter))
 		return qlcnic_83xx_reg_test(adapter);
 
-	data_read = QLCRD32(adapter, QLCNIC_PCIX_PH_REG(0));
+	data_read = QLCRD32(adapter, QLCNIC_PCIX_PH_REG(0), &err);
+	if (err == -EIO)
+		return err;
 	if ((data_read & 0xffff) != adapter->pdev->vendor)
 		return 1;
 
···
 	if (data[3])
 		eth_test->flags |= ETH_TEST_FL_FAILED;
 
-	data[4] = qlcnic_eeprom_test(dev);
-	if (data[4])
+	if (eth_test->flags & ETH_TEST_FL_EXTERNAL_LB) {
+		data[4] = qlcnic_loopback_test(dev, QLCNIC_ELB_MODE);
+		if (data[4])
+			eth_test->flags |= ETH_TEST_FL_FAILED;
+		eth_test->flags |= ETH_TEST_FL_EXTERNAL_LB_DONE;
+	}
+
+	data[5] = qlcnic_eeprom_test(dev);
+	if (data[5])
 		eth_test->flags |= ETH_TEST_FL_FAILED;
 	}
 }
···
 {
 	struct qlcnic_adapter *adapter = netdev_priv(dev);
 	u32 wol_cfg;
+	int err = 0;
 
 	if (qlcnic_83xx_check(adapter))
 		return;
 	wol->supported = 0;
 	wol->wolopts = 0;
 
-	wol_cfg = QLCRD32(adapter, QLCNIC_WOL_CONFIG_NV);
+	wol_cfg = QLCRD32(adapter, QLCNIC_WOL_CONFIG_NV, &err);
+	if (err == -EIO)
+		return;
 	if (wol_cfg & (1UL << adapter->portnum))
 		wol->supported |= WAKE_MAGIC;
 
-	wol_cfg = QLCRD32(adapter, QLCNIC_WOL_CONFIG);
+	wol_cfg = QLCRD32(adapter, QLCNIC_WOL_CONFIG, &err);
 	if (wol_cfg & (1UL << adapter->portnum))
 		wol->wolopts |= WAKE_MAGIC;
 }
···
 {
 	struct qlcnic_adapter *adapter = netdev_priv(dev);
 	u32 wol_cfg;
+	int err = 0;
 
 	if (qlcnic_83xx_check(adapter))
 		return -EOPNOTSUPP;
 	if (wol->wolopts & ~WAKE_MAGIC)
 		return -EINVAL;
 
-	wol_cfg = QLCRD32(adapter, QLCNIC_WOL_CONFIG_NV);
+	wol_cfg = QLCRD32(adapter, QLCNIC_WOL_CONFIG_NV, &err);
+	if (err == -EIO)
+		return err;
 	if (!(wol_cfg & (1 << adapter->portnum)))
 		return -EOPNOTSUPP;
 
-	wol_cfg = QLCRD32(adapter, QLCNIC_WOL_CONFIG);
+	wol_cfg = QLCRD32(adapter, QLCNIC_WOL_CONFIG, &err);
+	if (err == -EIO)
+		return err;
 	if (wol->wolopts & WAKE_MAGIC)
 		wol_cfg |= 1UL << adapter->portnum;
 	else
+27 -13
drivers/net/ethernet/qlogic/qlcnic/qlcnic_hw.c
···
 int
 qlcnic_pcie_sem_lock(struct qlcnic_adapter *adapter, int sem, u32 id_reg)
 {
-	int done = 0, timeout = 0;
+	int timeout = 0;
+	int err = 0;
+	u32 done = 0;
 
 	while (!done) {
-		done = QLCRD32(adapter, QLCNIC_PCIE_REG(PCIE_SEM_LOCK(sem)));
+		done = QLCRD32(adapter, QLCNIC_PCIE_REG(PCIE_SEM_LOCK(sem)),
+			       &err);
 		if (done == 1)
 			break;
 		if (++timeout >= QLCNIC_PCIE_SEM_TIMEOUT) {
 			dev_err(&adapter->pdev->dev,
 				"Failed to acquire sem=%d lock; holdby=%d\n",
-				sem, id_reg ? QLCRD32(adapter, id_reg) : -1);
+				sem,
+				id_reg ? QLCRD32(adapter, id_reg, &err) : -1);
 			return -EIO;
 		}
 		msleep(1);
···
 void
 qlcnic_pcie_sem_unlock(struct qlcnic_adapter *adapter, int sem)
 {
-	QLCRD32(adapter, QLCNIC_PCIE_REG(PCIE_SEM_UNLOCK(sem)));
+	int err = 0;
+
+	QLCRD32(adapter, QLCNIC_PCIE_REG(PCIE_SEM_UNLOCK(sem)), &err);
 }
 
 int qlcnic_ind_rd(struct qlcnic_adapter *adapter, u32 addr)
 {
+	int err = 0;
 	u32 data;
 
 	if (qlcnic_82xx_check(adapter))
 		qlcnic_read_window_reg(addr, adapter->ahw->pci_base0, &data);
 	else {
-		data = qlcnic_83xx_rd_reg_indirect(adapter, addr);
-		if (data == -EIO)
-			return -EIO;
+		data = QLCRD32(adapter, addr, &err);
+		if (err == -EIO)
+			return err;
 	}
 	return data;
 }
···
 	return -EIO;
 }
 
-int qlcnic_82xx_hw_read_wx_2M(struct qlcnic_adapter *adapter, ulong off)
+int qlcnic_82xx_hw_read_wx_2M(struct qlcnic_adapter *adapter, ulong off,
+			      int *err)
 {
 	unsigned long flags;
 	int rv;
···
 
 int qlcnic_82xx_get_board_info(struct qlcnic_adapter *adapter)
 {
-	int offset, board_type, magic;
+	int offset, board_type, magic, err = 0;
 	struct pci_dev *pdev = adapter->pdev;
 
 	offset = QLCNIC_FW_MAGIC_OFFSET;
···
 	adapter->ahw->board_type = board_type;
 
 	if (board_type == QLCNIC_BRDTYPE_P3P_4_GB_MM) {
-		u32 gpio = QLCRD32(adapter, QLCNIC_ROMUSB_GLB_PAD_GPIO_I);
+		u32 gpio = QLCRD32(adapter, QLCNIC_ROMUSB_GLB_PAD_GPIO_I, &err);
+		if (err == -EIO)
+			return err;
 		if ((gpio & 0x8000) == 0)
 			board_type = QLCNIC_BRDTYPE_P3P_10G_TP;
 	}
···
 qlcnic_wol_supported(struct qlcnic_adapter *adapter)
 {
 	u32 wol_cfg;
+	int err = 0;
 
-	wol_cfg = QLCRD32(adapter, QLCNIC_WOL_CONFIG_NV);
+	wol_cfg = QLCRD32(adapter, QLCNIC_WOL_CONFIG_NV, &err);
 	if (wol_cfg & (1UL << adapter->portnum)) {
-		wol_cfg = QLCRD32(adapter, QLCNIC_WOL_CONFIG);
+		wol_cfg = QLCRD32(adapter, QLCNIC_WOL_CONFIG, &err);
+		if (err == -EIO)
+			return err;
 		if (wol_cfg & (1 << adapter->portnum))
 			return 1;
 	}
···
 void qlcnic_82xx_read_crb(struct qlcnic_adapter *adapter, char *buf,
 			  loff_t offset, size_t size)
 {
+	int err = 0;
 	u32 data;
 	u64 qmdata;
 
···
 		qlcnic_pci_camqm_read_2M(adapter, offset, &qmdata);
 		memcpy(buf, &qmdata, size);
 	} else {
-		data = QLCRD32(adapter, offset);
+		data = QLCRD32(adapter, offset, &err);
 		memcpy(buf, &data, size);
 	}
 }
+1 -1
drivers/net/ethernet/qlogic/qlcnic/qlcnic_hw.h
···
 struct qlcnic_adapter;
 
 int qlcnic_82xx_start_firmware(struct qlcnic_adapter *);
-int qlcnic_82xx_hw_read_wx_2M(struct qlcnic_adapter *adapter, ulong);
+int qlcnic_82xx_hw_read_wx_2M(struct qlcnic_adapter *adapter, ulong, int *);
 int qlcnic_82xx_hw_write_wx_2M(struct qlcnic_adapter *, ulong, u32);
 int qlcnic_82xx_config_hw_lro(struct qlcnic_adapter *adapter, int);
 int qlcnic_82xx_nic_set_promisc(struct qlcnic_adapter *adapter, u32);
+17 -8
drivers/net/ethernet/qlogic/qlcnic/qlcnic_init.c
···
 {
 	long timeout = 0;
 	long done = 0;
+	int err = 0;
 
 	cond_resched();
 	while (done == 0) {
-		done = QLCRD32(adapter, QLCNIC_ROMUSB_GLB_STATUS);
+		done = QLCRD32(adapter, QLCNIC_ROMUSB_GLB_STATUS, &err);
 		done &= 2;
 		if (++timeout >= QLCNIC_MAX_ROM_WAIT_USEC) {
 			dev_err(&adapter->pdev->dev,
···
 static int do_rom_fast_read(struct qlcnic_adapter *adapter,
 			    u32 addr, u32 *valp)
 {
+	int err = 0;
+
 	QLCWR32(adapter, QLCNIC_ROMUSB_ROM_ADDRESS, addr);
 	QLCWR32(adapter, QLCNIC_ROMUSB_ROM_DUMMY_BYTE_CNT, 0);
 	QLCWR32(adapter, QLCNIC_ROMUSB_ROM_ABYTE_CNT, 3);
···
 	udelay(10);
 	QLCWR32(adapter, QLCNIC_ROMUSB_ROM_DUMMY_BYTE_CNT, 0);
 
-	*valp = QLCRD32(adapter, QLCNIC_ROMUSB_ROM_RDATA);
+	*valp = QLCRD32(adapter, QLCNIC_ROMUSB_ROM_RDATA, &err);
+	if (err == -EIO)
+		return err;
 	return 0;
 }
···
 
 int qlcnic_pinit_from_rom(struct qlcnic_adapter *adapter)
 {
-	int addr, val;
+	int addr, err = 0;
 	int i, n, init_delay;
 	struct crb_addr_pair *buf;
 	unsigned offset;
-	u32 off;
+	u32 off, val;
 	struct pci_dev *pdev = adapter->pdev;
 
 	QLC_SHARED_REG_WR32(adapter, QLCNIC_CMDPEG_STATE, 0);
···
 	QLCWR32(adapter, QLCNIC_CRB_NIU + 0xb0000, 0x00);
 
 	/* halt sre */
-	val = QLCRD32(adapter, QLCNIC_CRB_SRE + 0x1000);
+	val = QLCRD32(adapter, QLCNIC_CRB_SRE + 0x1000, &err);
+	if (err == -EIO)
+		return err;
 	QLCWR32(adapter, QLCNIC_CRB_SRE + 0x1000, val & (~(0x1)));
 
 	/* halt epg */
···
 static int
 qlcnic_has_mn(struct qlcnic_adapter *adapter)
 {
-	u32 capability;
-	capability = 0;
+	u32 capability = 0;
+	int err = 0;
 
-	capability = QLCRD32(adapter, QLCNIC_PEG_TUNE_CAPABILITY);
+	capability = QLCRD32(adapter, QLCNIC_PEG_TUNE_CAPABILITY, &err);
+	if (err == -EIO)
+		return err;
 	if (capability & QLCNIC_PEG_TUNE_MN_PRESENT)
 		return 1;
 
+67 -34
drivers/net/ethernet/qlogic/qlcnic/qlcnic_io.c
···
 	return (qlcnic_get_sts_status(sts_data) == STATUS_CKSUM_LOOP) ? 1 : 0;
 }
 
+static void qlcnic_delete_rx_list_mac(struct qlcnic_adapter *adapter,
+				      struct qlcnic_filter *fil,
+				      void *addr, u16 vlan_id)
+{
+	int ret;
+	u8 op;
+
+	op = vlan_id ? QLCNIC_MAC_VLAN_ADD : QLCNIC_MAC_ADD;
+	ret = qlcnic_sre_macaddr_change(adapter, addr, vlan_id, op);
+	if (ret)
+		return;
+
+	op = vlan_id ? QLCNIC_MAC_VLAN_DEL : QLCNIC_MAC_DEL;
+	ret = qlcnic_sre_macaddr_change(adapter, addr, vlan_id, op);
+	if (!ret) {
+		hlist_del(&fil->fnode);
+		adapter->rx_fhash.fnum--;
+	}
+}
+
+static struct qlcnic_filter *qlcnic_find_mac_filter(struct hlist_head *head,
+						    void *addr, u16 vlan_id)
+{
+	struct qlcnic_filter *tmp_fil = NULL;
+	struct hlist_node *n;
+
+	hlist_for_each_entry_safe(tmp_fil, n, head, fnode) {
+		if (!memcmp(tmp_fil->faddr, addr, ETH_ALEN) &&
+		    tmp_fil->vlan_id == vlan_id)
+			return tmp_fil;
+	}
+
+	return NULL;
+}
+
 void qlcnic_add_lb_filter(struct qlcnic_adapter *adapter, struct sk_buff *skb,
 			  int loopback_pkt, u16 vlan_id)
 {
 	struct ethhdr *phdr = (struct ethhdr *)(skb->data);
 	struct qlcnic_filter *fil, *tmp_fil;
-	struct hlist_node *n;
 	struct hlist_head *head;
 	unsigned long time;
 	u64 src_addr = 0;
-	u8 hindex, found = 0, op;
+	u8 hindex, op;
 	int ret;
 
 	memcpy(&src_addr, phdr->h_source, ETH_ALEN);
+	hindex = qlcnic_mac_hash(src_addr) &
+		 (adapter->fhash.fbucket_size - 1);
 
 	if (loopback_pkt) {
 		if (adapter->rx_fhash.fnum >= adapter->rx_fhash.fmax)
 			return;
 
-		hindex = qlcnic_mac_hash(src_addr) &
-			 (adapter->fhash.fbucket_size - 1);
 		head = &(adapter->rx_fhash.fhead[hindex]);
 
-		hlist_for_each_entry_safe(tmp_fil, n, head, fnode) {
-			if (!memcmp(tmp_fil->faddr, &src_addr, ETH_ALEN) &&
-			    tmp_fil->vlan_id == vlan_id) {
-				time = tmp_fil->ftime;
-				if (jiffies > (QLCNIC_READD_AGE * HZ + time))
-					tmp_fil->ftime = jiffies;
-				return;
-			}
+		tmp_fil = qlcnic_find_mac_filter(head, &src_addr, vlan_id);
+		if (tmp_fil) {
+			time = tmp_fil->ftime;
+			if (time_after(jiffies, QLCNIC_READD_AGE * HZ + time))
+				tmp_fil->ftime = jiffies;
+			return;
 		}
 
 		fil = kzalloc(sizeof(struct qlcnic_filter), GFP_ATOMIC);
···
 		adapter->rx_fhash.fnum++;
 		spin_unlock(&adapter->rx_mac_learn_lock);
 	} else {
-		hindex = qlcnic_mac_hash(src_addr) &
-			 (adapter->fhash.fbucket_size - 1);
-		head = &(adapter->rx_fhash.fhead[hindex]);
-		spin_lock(&adapter->rx_mac_learn_lock);
-		hlist_for_each_entry_safe(tmp_fil, n, head, fnode) {
-			if (!memcmp(tmp_fil->faddr, &src_addr, ETH_ALEN) &&
-			    tmp_fil->vlan_id == vlan_id) {
-				found = 1;
-				break;
-			}
-		}
+		head = &adapter->fhash.fhead[hindex];
 
-		if (!found) {
-			spin_unlock(&adapter->rx_mac_learn_lock);
-			return;
-		}
+		spin_lock(&adapter->mac_learn_lock);
 
-		op = vlan_id ? QLCNIC_MAC_VLAN_ADD : QLCNIC_MAC_ADD;
-		ret = qlcnic_sre_macaddr_change(adapter, (u8 *)&src_addr,
-						vlan_id, op);
-		if (!ret) {
+		tmp_fil = qlcnic_find_mac_filter(head, &src_addr, vlan_id);
+		if (tmp_fil) {
 			op = vlan_id ? QLCNIC_MAC_VLAN_DEL : QLCNIC_MAC_DEL;
 			ret = qlcnic_sre_macaddr_change(adapter,
 							(u8 *)&src_addr,
 							vlan_id, op);
 			if (!ret) {
-				hlist_del(&(tmp_fil->fnode));
-				adapter->rx_fhash.fnum--;
+				hlist_del(&tmp_fil->fnode);
+				adapter->fhash.fnum--;
 			}
+
+			spin_unlock(&adapter->mac_learn_lock);
+
+			return;
 		}
+
+		spin_unlock(&adapter->mac_learn_lock);
+
+		head = &adapter->rx_fhash.fhead[hindex];
+
+		spin_lock(&adapter->rx_mac_learn_lock);
+
+		tmp_fil = qlcnic_find_mac_filter(head, &src_addr, vlan_id);
+		if (tmp_fil)
+			qlcnic_delete_rx_list_mac(adapter, tmp_fil, &src_addr,
+						  vlan_id);
+
 		spin_unlock(&adapter->rx_mac_learn_lock);
 	}
 }
···
 
 	mac_req = (struct qlcnic_mac_req *)&(req->words[0]);
 	mac_req->op = vlan_id ? QLCNIC_MAC_VLAN_ADD : QLCNIC_MAC_ADD;
-	memcpy(mac_req->mac_addr, &uaddr, ETH_ALEN);
+	memcpy(mac_req->mac_addr, uaddr, ETH_ALEN);
 
 	vlan_req = (struct qlcnic_vlan_req *)&req->words[1];
 	vlan_req->vlan_id = cpu_to_le16(vlan_id);
+11 -8
drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c
···
 static int
 qlcnic_initialize_nic(struct qlcnic_adapter *adapter)
 {
-	int err;
 	struct qlcnic_info nic_info;
+	int err = 0;
 
 	memset(&nic_info, 0, sizeof(struct qlcnic_info));
 	err = qlcnic_get_nic_info(adapter, &nic_info, adapter->ahw->pci_func);
···
 
 	if (adapter->ahw->capabilities & QLCNIC_FW_CAPABILITY_MORE_CAPS) {
 		u32 temp;
-		temp = QLCRD32(adapter, CRB_FW_CAPABILITIES_2);
+		temp = QLCRD32(adapter, CRB_FW_CAPABILITIES_2, &err);
+		if (err == -EIO)
+			return err;
 		adapter->ahw->extra_capability[0] = temp;
 	}
 	adapter->ahw->max_mac_filters = nic_info.max_mac_filters;
···
 	if (qlcnic_83xx_check(adapter) && !qlcnic_use_msi_x &&
 	    !!qlcnic_use_msi)
 		dev_warn(&pdev->dev,
-			 "83xx adapter do not support MSI interrupts\n");
+			 "Device does not support MSI interrupts\n");
 
 	err = qlcnic_setup_intr(adapter, 0);
 	if (err) {
···
 {
 	u32 state = 0, heartbeat;
 	u32 peg_status;
+	int err = 0;
 
 	if (qlcnic_check_temp(adapter))
 		goto detach;
···
 			"PEG_NET_4_PC: 0x%x\n",
 			peg_status,
 			QLC_SHARED_REG_RD32(adapter, QLCNIC_PEG_HALT_STATUS2),
-			QLCRD32(adapter, QLCNIC_CRB_PEG_NET_0 + 0x3c),
-			QLCRD32(adapter, QLCNIC_CRB_PEG_NET_1 + 0x3c),
-			QLCRD32(adapter, QLCNIC_CRB_PEG_NET_2 + 0x3c),
-			QLCRD32(adapter, QLCNIC_CRB_PEG_NET_3 + 0x3c),
-			QLCRD32(adapter, QLCNIC_CRB_PEG_NET_4 + 0x3c));
+			QLCRD32(adapter, QLCNIC_CRB_PEG_NET_0 + 0x3c, &err),
+			QLCRD32(adapter, QLCNIC_CRB_PEG_NET_1 + 0x3c, &err),
+			QLCRD32(adapter, QLCNIC_CRB_PEG_NET_2 + 0x3c, &err),
+			QLCRD32(adapter, QLCNIC_CRB_PEG_NET_3 + 0x3c, &err),
+			QLCRD32(adapter, QLCNIC_CRB_PEG_NET_4 + 0x3c, &err));
 	if (QLCNIC_FWERROR_CODE(peg_status) == 0x67)
 		dev_err(&adapter->pdev->dev,
 			"Firmware aborted with error code 0x00006700. "
+1 -1
drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_common.c
···
 	INIT_LIST_HEAD(&adapter->vf_mc_list);
 	if (!qlcnic_use_msi_x && !!qlcnic_use_msi)
 		dev_warn(&adapter->pdev->dev,
-			 "83xx adapter do not support MSI interrupts\n");
+			 "Device does not support MSI interrupts\n");
 
 	err = qlcnic_setup_intr(adapter, 1);
 	if (err) {
+45 -3
drivers/net/ethernet/realtek/8139cp.c
···
 
 	while (1) {
 		u32 status, len;
-		dma_addr_t mapping;
+		dma_addr_t mapping, new_mapping;
 		struct sk_buff *skb, *new_skb;
 		struct cp_desc *desc;
 		const unsigned buflen = cp->rx_buf_sz;
···
 			goto rx_next;
 		}
 
+		new_mapping = dma_map_single(&cp->pdev->dev, new_skb->data, buflen,
+					     PCI_DMA_FROMDEVICE);
+		if (dma_mapping_error(&cp->pdev->dev, new_mapping)) {
+			dev->stats.rx_dropped++;
+			goto rx_next;
+		}
+
 		dma_unmap_single(&cp->pdev->dev, mapping,
 				 buflen, PCI_DMA_FROMDEVICE);
···
 
 		skb_put(skb, len);
 
-		mapping = dma_map_single(&cp->pdev->dev, new_skb->data, buflen,
-					 PCI_DMA_FROMDEVICE);
 		cp->rx_skb[rx_tail] = new_skb;
 
 		cp_rx_skb(cp, skb, desc);
 		rx++;
+		mapping = new_mapping;
 
 rx_next:
 		cp->rx_ring[rx_tail].opts2 = 0;
···
 			TxVlanTag | swab16(vlan_tx_tag_get(skb)) : 0x00;
 }
 
+static void unwind_tx_frag_mapping(struct cp_private *cp, struct sk_buff *skb,
+				   int first, int entry_last)
+{
+	int frag, index;
+	struct cp_desc *txd;
+	skb_frag_t *this_frag;
+	for (frag = 0; frag+first < entry_last; frag++) {
+		index = first+frag;
+		cp->tx_skb[index] = NULL;
+		txd = &cp->tx_ring[index];
+		this_frag = &skb_shinfo(skb)->frags[frag];
+		dma_unmap_single(&cp->pdev->dev, le64_to_cpu(txd->addr),
+				 skb_frag_size(this_frag), PCI_DMA_TODEVICE);
+	}
+}
+
 static netdev_tx_t cp_start_xmit (struct sk_buff *skb,
 				  struct net_device *dev)
 {
···
 
 		len = skb->len;
 		mapping = dma_map_single(&cp->pdev->dev, skb->data, len, PCI_DMA_TODEVICE);
+		if (dma_mapping_error(&cp->pdev->dev, mapping))
+			goto out_dma_error;
+
 		txd->opts2 = opts2;
 		txd->addr = cpu_to_le64(mapping);
 		wmb();
···
 		first_len = skb_headlen(skb);
 		first_mapping = dma_map_single(&cp->pdev->dev, skb->data,
 					       first_len, PCI_DMA_TODEVICE);
+		if (dma_mapping_error(&cp->pdev->dev, first_mapping))
+			goto out_dma_error;
+
 		cp->tx_skb[entry] = skb;
 		entry = NEXT_TX(entry);
 
···
 			mapping = dma_map_single(&cp->pdev->dev,
 						 skb_frag_address(this_frag),
 						 len, PCI_DMA_TODEVICE);
+			if (dma_mapping_error(&cp->pdev->dev, mapping)) {
+				unwind_tx_frag_mapping(cp, skb, first_entry, entry);
+				goto out_dma_error;
+			}
+
 			eor = (entry == (CP_TX_RING_SIZE - 1)) ? RingEnd : 0;
 
 			ctrl = eor | len | DescOwn;
···
 	if (TX_BUFFS_AVAIL(cp) <= (MAX_SKB_FRAGS + 1))
 		netif_stop_queue(dev);
 
+out_unlock:
 	spin_unlock_irqrestore(&cp->lock, intr_flags);
 
 	cpw8(TxPoll, NormalTxPoll);
 
 	return NETDEV_TX_OK;
+out_dma_error:
+	kfree_skb(skb);
+	cp->dev->stats.tx_dropped++;
+	goto out_unlock;
 }
 
 /* Set or clear the multicast filter for this adaptor.
···
 
 		mapping = dma_map_single(&cp->pdev->dev, skb->data,
 					 cp->rx_buf_sz, PCI_DMA_FROMDEVICE);
+		if (dma_mapping_error(&cp->pdev->dev, mapping)) {
+			kfree_skb(skb);
+			goto err_out;
+		}
 		cp->rx_skb[i] = skb;
 
 		cp->rx_ring[i].opts2 = 0;
+1 -1
drivers/net/ethernet/realtek/r8169.c
···
 	if (tp->link_ok(ioaddr))
 		return;
 
-	netif_warn(tp, link, tp->dev, "PHY reset until link up\n");
+	netif_dbg(tp, link, tp->dev, "PHY reset until link up\n");
 
 	tp->phy_reset_enable(tp);
 
+2 -10
drivers/net/ethernet/sis/sis900.c
···
 	if (duplex){
 		sis900_set_mode(sis_priv, speed, duplex);
 		sis630_set_eq(net_dev, sis_priv->chipset_rev);
-		netif_start_queue(net_dev);
+		netif_carrier_on(net_dev);
 	}
 
 	sis_priv->timer.expires = jiffies + HZ;
···
 		status = sis900_default_phy(net_dev);
 		mii_phy = sis_priv->mii;
 
-		if (status & MII_STAT_LINK){
+		if (status & MII_STAT_LINK)
 			sis900_check_mode(net_dev, mii_phy);
-			netif_carrier_on(net_dev);
-		}
 	} else {
 	/* Link ON -> OFF */
 		if (!(status & MII_STAT_LINK)){
···
 	unsigned long flags;
 	unsigned int  index_cur_tx, index_dirty_tx;
 	unsigned int  count_dirty_tx;
-
-	/* Don't transmit data before the complete of auto-negotiation */
-	if(!sis_priv->autong_complete){
-		netif_stop_queue(net_dev);
-		return NETDEV_TX_BUSY;
-	}
 
 	spin_lock_irqsave(&sis_priv->lock, flags);
 
+1 -1
drivers/net/ethernet/ti/cpsw.c
···
 
 	while ((res = platform_get_resource(priv->pdev, IORESOURCE_IRQ, k))) {
 		for (i = res->start; i <= res->end; i++) {
-			if (request_irq(i, cpsw_interrupt, IRQF_DISABLED,
+			if (request_irq(i, cpsw_interrupt, 0,
 					dev_name(&pdev->dev), priv)) {
 				dev_err(priv->dev, "error attaching irq\n");
 				goto clean_ale_ret;
+1 -2
drivers/net/ethernet/ti/davinci_emac.c
···
 	while ((res = platform_get_resource(priv->pdev, IORESOURCE_IRQ, k))) {
 		for (i = res->start; i <= res->end; i++) {
 			if (devm_request_irq(&priv->pdev->dev, i, emac_irq,
-					     IRQF_DISABLED,
-					     ndev->name, ndev))
+					     0, ndev->name, ndev))
 				goto rollback;
 		}
 		k++;
+19 -4
drivers/net/macvlan.c
···
 	int err;
 
 	if (vlan->port->passthru) {
-		if (!(vlan->flags & MACVLAN_FLAG_NOPROMISC))
-			dev_set_promiscuity(lowerdev, 1);
+		if (!(vlan->flags & MACVLAN_FLAG_NOPROMISC)) {
+			err = dev_set_promiscuity(lowerdev, 1);
+			if (err < 0)
+				goto out;
+		}
 		goto hash_add;
 	}
 
···
 			   struct nlattr *tb[], struct nlattr *data[])
 {
 	struct macvlan_dev *vlan = netdev_priv(dev);
+	enum macvlan_mode mode;
+	bool set_mode = false;
+
+	/* Validate mode, but don't set yet: setting flags may fail. */
+	if (data && data[IFLA_MACVLAN_MODE]) {
+		set_mode = true;
+		mode = nla_get_u32(data[IFLA_MACVLAN_MODE]);
+		/* Passthrough mode can't be set or cleared dynamically */
+		if ((mode == MACVLAN_MODE_PASSTHRU) !=
+		    (vlan->mode == MACVLAN_MODE_PASSTHRU))
+			return -EINVAL;
+	}
 
 	if (data && data[IFLA_MACVLAN_FLAGS]) {
 		__u16 flags = nla_get_u16(data[IFLA_MACVLAN_FLAGS]);
···
 		}
 		vlan->flags = flags;
 	}
-	if (data && data[IFLA_MACVLAN_MODE])
-		vlan->mode = nla_get_u32(data[IFLA_MACVLAN_MODE]);
+	if (set_mode)
+		vlan->mode = mode;
 	return 0;
 }
 
+58 -68
drivers/net/usb/r8152.c
···
 static
 int get_registers(struct r8152 *tp, u16 value, u16 index, u16 size, void *data)
 {
-	return usb_control_msg(tp->udev, usb_rcvctrlpipe(tp->udev, 0),
+	int ret;
+	void *tmp;
+
+	tmp = kmalloc(size, GFP_KERNEL);
+	if (!tmp)
+		return -ENOMEM;
+
+	ret = usb_control_msg(tp->udev, usb_rcvctrlpipe(tp->udev, 0),
 			       RTL8152_REQ_GET_REGS, RTL8152_REQT_READ,
-			       value, index, data, size, 500);
+			       value, index, tmp, size, 500);
+
+	memcpy(data, tmp, size);
+	kfree(tmp);
+
+	return ret;
 }
 
 static
 int set_registers(struct r8152 *tp, u16 value, u16 index, u16 size, void *data)
 {
-	return usb_control_msg(tp->udev, usb_sndctrlpipe(tp->udev, 0),
+	int ret;
+	void *tmp;
+
+	tmp = kmalloc(size, GFP_KERNEL);
+	if (!tmp)
+		return -ENOMEM;
+
+	memcpy(tmp, data, size);
+
+	ret = usb_control_msg(tp->udev, usb_sndctrlpipe(tp->udev, 0),
 			       RTL8152_REQ_SET_REGS, RTL8152_REQT_WRITE,
-			       value, index, data, size, 500);
+			       value, index, tmp, size, 500);
+
+	kfree(tmp);
+	return ret;
 }
 
 static int generic_ocp_read(struct r8152 *tp, u16 index, u16 size,
···
 
 static u32 ocp_read_dword(struct r8152 *tp, u16 type, u16 index)
 {
-	u32 data;
+	__le32 data;
 
-	if (type == MCU_TYPE_PLA)
-		pla_ocp_read(tp, index, sizeof(data), &data);
-	else
-		usb_ocp_read(tp, index, sizeof(data), &data);
+	generic_ocp_read(tp, index, sizeof(data), &data, type);
 
 	return __le32_to_cpu(data);
 }
 
 static void ocp_write_dword(struct r8152 *tp, u16 type, u16 index, u32 data)
 {
-	if (type == MCU_TYPE_PLA)
-		pla_ocp_write(tp, index, BYTE_EN_DWORD, sizeof(data), &data);
-	else
-		usb_ocp_write(tp, index, BYTE_EN_DWORD, sizeof(data), &data);
+	__le32 tmp = __cpu_to_le32(data);
+
+	generic_ocp_write(tp, index, BYTE_EN_DWORD, sizeof(tmp), &tmp, type);
 }
 
 static u16 ocp_read_word(struct r8152 *tp, u16 type, u16 index)
 {
 	u32 data;
+	__le32 tmp;
 	u8 shift = index & 2;
 
 	index &= ~3;
 
-	if (type == MCU_TYPE_PLA)
-		pla_ocp_read(tp, index, sizeof(data), &data);
-	else
-		usb_ocp_read(tp, index, sizeof(data), &data);
+	generic_ocp_read(tp, index, sizeof(tmp), &tmp, type);
 
-	data = __le32_to_cpu(data);
+	data = __le32_to_cpu(tmp);
 	data >>= (shift * 8);
 	data &= 0xffff;
 
···
 
 static void ocp_write_word(struct r8152 *tp, u16 type, u16 index, u32 data)
 {
-	u32 tmp, mask = 0xffff;
+	u32 mask = 0xffff;
+	__le32 tmp;
 	u16 byen = BYTE_EN_WORD;
 	u8 shift = index & 2;
 
···
 		index &= ~3;
 	}
 
-	if (type == MCU_TYPE_PLA)
-		pla_ocp_read(tp, index, sizeof(tmp), &tmp);
-	else
-		usb_ocp_read(tp, index, sizeof(tmp), &tmp);
+	generic_ocp_read(tp, index, sizeof(tmp), &tmp, type);
 
-	tmp = __le32_to_cpu(tmp) & ~mask;
-	tmp |= data;
-	tmp = __cpu_to_le32(tmp);
+	data |= __le32_to_cpu(tmp) & ~mask;
+	tmp = __cpu_to_le32(data);
 
-	if (type == MCU_TYPE_PLA)
-		pla_ocp_write(tp, index, byen, sizeof(tmp), &tmp);
-	else
-		usb_ocp_write(tp, index, byen, sizeof(tmp), &tmp);
+	generic_ocp_write(tp, index, byen, sizeof(tmp), &tmp, type);
 }
 
 static u8 ocp_read_byte(struct r8152 *tp, u16 type, u16 index)
 {
 	u32 data;
+	__le32 tmp;
 	u8 shift = index & 3;
 
 	index &= ~3;
 
-	if (type == MCU_TYPE_PLA)
-		pla_ocp_read(tp, index, sizeof(data), &data);
-	else
-		usb_ocp_read(tp, index, sizeof(data), &data);
+	generic_ocp_read(tp, index, sizeof(tmp), &tmp, type);
 
-	data = __le32_to_cpu(data);
+	data = __le32_to_cpu(tmp);
 	data >>= (shift * 8);
 	data &= 0xff;
 
···
 
 static void ocp_write_byte(struct r8152 *tp, u16 type, u16 index, u32 data)
 {
-	u32 tmp, mask = 0xff;
+	u32 mask = 0xff;
+	__le32 tmp;
 	u16 byen = BYTE_EN_BYTE;
 	u8 shift = index & 3;
 
···
 		index &= ~3;
 	}
 
-	if (type == MCU_TYPE_PLA)
-		pla_ocp_read(tp, index, sizeof(tmp), &tmp);
-	else
-		usb_ocp_read(tp, index, sizeof(tmp), &tmp);
+	generic_ocp_read(tp, index, sizeof(tmp), &tmp, type);
 
-	tmp = __le32_to_cpu(tmp) & ~mask;
-	tmp |= data;
-	tmp = __cpu_to_le32(tmp);
+	data |= __le32_to_cpu(tmp) & ~mask;
+	tmp = __cpu_to_le32(data);
 
-	if (type == MCU_TYPE_PLA)
-		pla_ocp_write(tp, index, byen, sizeof(tmp), &tmp);
-	else
-		usb_ocp_write(tp, index, byen, sizeof(tmp), &tmp);
+	generic_ocp_write(tp, index, byen, sizeof(tmp), &tmp, type);
 }
 
 static void r8152_mdio_write(struct r8152 *tp, u32 reg_addr, u32 value)
···
 static inline void set_ethernet_addr(struct r8152 *tp)
 {
 	struct net_device *dev = tp->netdev;
-	u8 *node_id;
+	u8 node_id[8] = {0};
 
-	node_id = kmalloc(sizeof(u8) * 8, GFP_KERNEL);
-	if (!node_id) {
-		netif_err(tp, probe, dev, "out of memory");
-		return;
-	}
-
-	if (pla_ocp_read(tp, PLA_IDR, sizeof(u8) * 8, node_id) < 0)
+	if (pla_ocp_read(tp, PLA_IDR, sizeof(node_id), node_id) < 0)
 		netif_notice(tp, probe, dev, "inet addr fail\n");
 	else {
 		memcpy(dev->dev_addr, node_id, dev->addr_len);
 		memcpy(dev->perm_addr, dev->dev_addr, dev->addr_len);
 	}
 }
 
 static int rtl8152_set_mac_address(struct net_device *netdev, void *p)
···
 static void _rtl8152_set_rx_mode(struct net_device *netdev)
 {
 	struct r8152 *tp = netdev_priv(netdev);
 	u32 
tmp, *mc_filter; /* Multicast hash filter */ 885 + u32 mc_filter[2]; /* Multicast hash filter */ 886 + __le32 tmp[2]; 883 887 u32 ocp_data; 884 - 885 - mc_filter = kmalloc(sizeof(u32) * 2, GFP_KERNEL); 886 - if (!mc_filter) { 887 - netif_err(tp, link, netdev, "out of memory"); 888 - return; 889 - } 890 888 891 889 clear_bit(RTL8152_SET_RX_MODE, &tp->flags); 892 890 netif_stop_queue(netdev); ··· 910 918 } 911 919 } 912 920 913 - tmp = mc_filter[0]; 914 - mc_filter[0] = __cpu_to_le32(swab32(mc_filter[1])); 915 - mc_filter[1] = __cpu_to_le32(swab32(tmp)); 921 + tmp[0] = __cpu_to_le32(swab32(mc_filter[1])); 922 + tmp[1] = __cpu_to_le32(swab32(mc_filter[0])); 916 923 917 - pla_ocp_write(tp, PLA_MAR, BYTE_EN_DWORD, sizeof(u32) * 2, mc_filter); 924 + pla_ocp_write(tp, PLA_MAR, BYTE_EN_DWORD, sizeof(tmp), tmp); 918 925 ocp_write_dword(tp, MCU_TYPE_PLA, PLA_RCR, ocp_data); 919 926 netif_wake_queue(netdev); 920 - kfree(mc_filter); 921 927 } 922 928 923 929 static netdev_tx_t rtl8152_start_xmit(struct sk_buff *skb,
+42 -20
drivers/net/usb/r815x.c
···
24 24
25 25 static int pla_read_word(struct usb_device *udev, u16 index)
26 26 {
27 - int data, ret;
27 + int ret;
28 28 u8 shift = index & 2;
29 - __le32 ocp_data;
29 + __le32 *tmp;
30 +
31 + tmp = kmalloc(sizeof(*tmp), GFP_KERNEL);
32 + if (!tmp)
33 + return -ENOMEM;
30 34
31 35 index &= ~3;
32 36
33 37 ret = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
34 38 RTL815x_REQ_GET_REGS, RTL815x_REQT_READ,
35 - index, MCU_TYPE_PLA, &ocp_data, sizeof(ocp_data),
36 - 500);
39 + index, MCU_TYPE_PLA, tmp, sizeof(*tmp), 500);
37 40 if (ret < 0)
38 - return ret;
41 + goto out2;
39 42
40 - data = __le32_to_cpu(ocp_data);
41 - data >>= (shift * 8);
42 - data &= 0xffff;
43 + ret = __le32_to_cpu(*tmp);
44 + ret >>= (shift * 8);
45 + ret &= 0xffff;
43 46
44 - return data;
47 + out2:
48 + kfree(tmp);
49 + return ret;
45 50 }
46 51
47 52 static int pla_write_word(struct usb_device *udev, u16 index, u32 data)
48 53 {
49 - __le32 ocp_data;
54 + __le32 *tmp;
50 55 u32 mask = 0xffff;
51 56 u16 byen = BYTE_EN_WORD;
52 57 u8 shift = index & 2;
53 58 int ret;
59 +
60 + tmp = kmalloc(sizeof(*tmp), GFP_KERNEL);
61 + if (!tmp)
62 + return -ENOMEM;
54 63
55 64 data &= mask;
56 65
···
72 63
73 64 ret = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
74 65 RTL815x_REQ_GET_REGS, RTL815x_REQT_READ,
75 - index, MCU_TYPE_PLA, &ocp_data, sizeof(ocp_data),
76 - 500);
66 + index, MCU_TYPE_PLA, tmp, sizeof(*tmp), 500);
77 67 if (ret < 0)
78 - return ret;
68 + goto out3;
79 69
80 - data |= __le32_to_cpu(ocp_data) & ~mask;
81 - ocp_data = __cpu_to_le32(data);
70 + data |= __le32_to_cpu(*tmp) & ~mask;
71 + *tmp = __cpu_to_le32(data);
82 72
83 73 ret = usb_control_msg(udev, usb_sndctrlpipe(udev, 0),
84 74 RTL815x_REQ_SET_REGS, RTL815x_REQT_WRITE,
85 - index, MCU_TYPE_PLA | byen, &ocp_data,
86 - sizeof(ocp_data), 500);
75 + index, MCU_TYPE_PLA | byen, tmp, sizeof(*tmp),
76 + 500);
87 77
78 + out3:
79 + kfree(tmp);
88 80 return ret;
89 81 }
···
126 116 static int r815x_mdio_read(struct net_device *netdev, int phy_id, int reg)
127 117 {
128 118 struct usbnet *dev = netdev_priv(netdev);
119 + int ret;
129 120
130 121 if (phy_id != R815x_PHY_ID)
131 122 return -EINVAL;
132 123
133 - return ocp_reg_read(dev, BASE_MII + reg * 2);
124 + if (usb_autopm_get_interface(dev->intf) < 0)
125 + return -ENODEV;
126 +
127 + ret = ocp_reg_read(dev, BASE_MII + reg * 2);
128 +
129 + usb_autopm_put_interface(dev->intf);
130 + return ret;
134 131 }
135 132
136 133 static
···
148 131 if (phy_id != R815x_PHY_ID)
149 132 return;
150 133
134 + if (usb_autopm_get_interface(dev->intf) < 0)
135 + return;
136 +
151 137 ocp_reg_write(dev, BASE_MII + reg * 2, val);
138 +
139 + usb_autopm_put_interface(dev->intf);
152 140 }
153 141
154 142 static int r8153_bind(struct usbnet *dev, struct usb_interface *intf)
···
172 150 dev->mii.phy_id = R815x_PHY_ID;
173 151 dev->mii.supports_gmii = 1;
174 152
175 - return 0;
153 + return status;
176 154 }
177 155
178 156 static int r8152_bind(struct usbnet *dev, struct usb_interface *intf)
···
191 169 dev->mii.phy_id = R815x_PHY_ID;
192 170 dev->mii.supports_gmii = 0;
193 171
194 - return 0;
172 + return status;
195 173 }
196 174
197 175 static const struct driver_info r8152_info = {
+1 -1
drivers/net/wireless/ath/ath10k/Kconfig
···
1 1 config ATH10K
2 2 tristate "Atheros 802.11ac wireless cards support"
3 - depends on MAC80211
3 + depends on MAC80211 && HAS_DMA
4 4 select ATH_COMMON
5 5 ---help---
6 6 This module adds support for wireless adapters based on
+4 -1
drivers/net/wireless/brcm80211/brcmfmac/wl_cfg80211.c
···
1093 1093 brcmf_dbg(INFO, "Call WLC_DISASSOC to stop excess roaming\n ");
1094 1094 err = brcmf_fil_cmd_data_set(vif->ifp,
1095 1095 BRCMF_C_DISASSOC, NULL, 0);
1096 - if (err)
1096 + if (err) {
1097 1097 brcmf_err("WLC_DISASSOC failed (%d)\n", err);
1098 + cfg80211_disconnected(vif->wdev.netdev, 0,
1099 + NULL, 0, GFP_KERNEL);
1100 + }
1098 1101 clear_bit(BRCMF_VIF_STATUS_CONNECTED, &vif->sme_state);
1099 1102 }
1100 1103 clear_bit(BRCMF_VIF_STATUS_CONNECTING, &vif->sme_state);
+2
drivers/net/wireless/iwlwifi/iwl-prph.h
···
97 97
98 98 #define APMG_PCIDEV_STT_VAL_L1_ACT_DIS (0x00000800)
99 99
100 + #define APMG_RTC_INT_STT_RFKILL (0x10000000)
101 +
100 102 /* Device system time */
101 103 #define DEVICE_SYSTEM_TIME_REG 0xA0206C
102 104
+10 -5
drivers/net/wireless/iwlwifi/mvm/d3.c
···
134 134 struct iwl_wowlan_rsc_tsc_params_cmd *rsc_tsc;
135 135 struct iwl_wowlan_tkip_params_cmd *tkip;
136 136 bool error, use_rsc_tsc, use_tkip;
137 - int gtk_key_idx;
137 + int wep_key_idx;
138 138 };
139 139
140 140 static void iwl_mvm_wowlan_program_keys(struct ieee80211_hw *hw,
···
188 188 wkc.wep_key.key_offset = 0;
189 189 } else {
190 190 /* others start at 1 */
191 - data->gtk_key_idx++;
192 - wkc.wep_key.key_offset = data->gtk_key_idx;
191 + data->wep_key_idx++;
192 + wkc.wep_key.key_offset = data->wep_key_idx;
193 193 }
194 194
195 195 ret = iwl_mvm_send_cmd_pdu(mvm, WEP_KEY, CMD_SYNC,
···
316 316 mvm->ptk_ivlen = key->iv_len;
317 317 mvm->ptk_icvlen = key->icv_len;
318 318 } else {
319 - data->gtk_key_idx++;
320 - key->hw_key_idx = data->gtk_key_idx;
319 + /*
320 + * firmware only supports TSC/RSC for a single key,
321 + * so if there are multiple keep overwriting them
322 + * with new ones -- this relies on mac80211 doing
323 + * list_add_tail().
324 + */
325 + key->hw_key_idx = 1;
321 326 mvm->gtk_ivlen = key->iv_len;
322 327 mvm->gtk_icvlen = key->icv_len;
323 328 }
-1
drivers/net/wireless/iwlwifi/mvm/fw-api-scan.h
···
69 69 /* Scan Commands, Responses, Notifications */
70 70
71 71 /* Masks for iwl_scan_channel.type flags */
72 - #define SCAN_CHANNEL_TYPE_PASSIVE 0
73 72 #define SCAN_CHANNEL_TYPE_ACTIVE BIT(0)
74 73 #define SCAN_CHANNEL_NARROW_BAND BIT(22)
75 74
+21 -21
drivers/net/wireless/iwlwifi/mvm/mac80211.c
···
512 512 goto out_unlock;
513 513
514 514 /*
515 + * TODO: remove this temporary code.
516 + * Currently MVM FW supports power management only on single MAC.
517 + * If new interface added, disable PM on existing interface.
518 + * P2P device is a special case, since it is handled by FW similary to
519 + * scan. If P2P deviced is added, PM remains enabled on existing
520 + * interface.
521 + * Note: the method below does not count the new interface being added
522 + * at this moment.
523 + */
524 + if (vif->type != NL80211_IFTYPE_P2P_DEVICE)
525 + mvm->vif_count++;
526 + if (mvm->vif_count > 1) {
527 + IWL_DEBUG_MAC80211(mvm,
528 + "Disable power on existing interfaces\n");
529 + ieee80211_iterate_active_interfaces_atomic(
530 + mvm->hw,
531 + IEEE80211_IFACE_ITER_NORMAL,
532 + iwl_mvm_pm_disable_iterator, mvm);
533 + }
534 +
535 + /*
515 536 * The AP binding flow can be done only after the beacon
516 537 * template is configured (which happens only in the mac80211
517 538 * start_ap() flow), and adding the broadcast station can happen
···
553 532 }
554 533
555 534 goto out_unlock;
556 - }
557 -
558 - /*
559 - * TODO: remove this temporary code.
560 - * Currently MVM FW supports power management only on single MAC.
561 - * If new interface added, disable PM on existing interface.
562 - * P2P device is a special case, since it is handled by FW similary to
563 - * scan. If P2P deviced is added, PM remains enabled on existing
564 - * interface.
565 - * Note: the method below does not count the new interface being added
566 - * at this moment.
567 - */
568 - if (vif->type != NL80211_IFTYPE_P2P_DEVICE)
569 - mvm->vif_count++;
570 - if (mvm->vif_count > 1) {
571 - IWL_DEBUG_MAC80211(mvm,
572 - "Disable power on existing interfaces\n");
573 - ieee80211_iterate_active_interfaces_atomic(
574 - mvm->hw,
575 - IEEE80211_IFACE_ITER_NORMAL,
576 - iwl_mvm_pm_disable_iterator, mvm);
577 535 }
578 536
579 537 ret = iwl_mvm_mac_ctxt_add(mvm, vif);
+2 -9
drivers/net/wireless/iwlwifi/mvm/scan.c
···
178 178 struct iwl_scan_channel *chan = (struct iwl_scan_channel *)
179 179 (cmd->data + le16_to_cpu(cmd->tx_cmd.len));
180 180 int i;
181 - __le32 chan_type_value;
182 -
183 - if (req->n_ssids > 0)
184 - chan_type_value = cpu_to_le32(BIT(req->n_ssids) - 1);
185 - else
186 - chan_type_value = SCAN_CHANNEL_TYPE_PASSIVE;
187 181
188 182 for (i = 0; i < cmd->channel_count; i++) {
189 183 chan->channel = cpu_to_le16(req->channels[i]->hw_value);
184 + chan->type = cpu_to_le32(BIT(req->n_ssids) - 1);
190 185 if (req->channels[i]->flags & IEEE80211_CHAN_PASSIVE_SCAN)
191 - chan->type = SCAN_CHANNEL_TYPE_PASSIVE;
192 - else
193 - chan->type = chan_type_value;
186 + chan->type &= cpu_to_le32(~SCAN_CHANNEL_TYPE_ACTIVE);
194 187 chan->active_dwell = cpu_to_le16(active_dwell);
195 188 chan->passive_dwell = cpu_to_le16(passive_dwell);
196 189 chan->iteration_count = cpu_to_le16(1);
+8 -3
drivers/net/wireless/iwlwifi/mvm/sta.c
···
915 915 struct iwl_mvm_sta *mvmsta = (void *)sta->drv_priv;
916 916 struct iwl_mvm_tid_data *tid_data = &mvmsta->tid_data[tid];
917 917 u16 txq_id;
918 + enum iwl_mvm_agg_state old_state;
918 919
919 920 /*
920 921 * First set the agg state to OFF to avoid calling
···
925 924 txq_id = tid_data->txq_id;
926 925 IWL_DEBUG_TX_QUEUES(mvm, "Flush AGG: sta %d tid %d q %d state %d\n",
927 926 mvmsta->sta_id, tid, txq_id, tid_data->state);
927 + old_state = tid_data->state;
928 928 tid_data->state = IWL_AGG_OFF;
929 929 spin_unlock_bh(&mvmsta->lock);
930 930
931 - if (iwl_mvm_flush_tx_path(mvm, BIT(txq_id), true))
932 - IWL_ERR(mvm, "Couldn't flush the AGG queue\n");
931 + if (old_state >= IWL_AGG_ON) {
932 + if (iwl_mvm_flush_tx_path(mvm, BIT(txq_id), true))
933 + IWL_ERR(mvm, "Couldn't flush the AGG queue\n");
933 934
934 - iwl_trans_txq_disable(mvm->trans, tid_data->txq_id);
935 + iwl_trans_txq_disable(mvm->trans, tid_data->txq_id);
936 + }
937 +
935 938 mvm->queue_to_mac80211[tid_data->txq_id] =
936 939 IWL_INVALID_MAC80211_QUEUE;
937 940
+1
drivers/net/wireless/iwlwifi/pcie/drv.c
···
130 130 {IWL_PCI_DEVICE(0x423C, 0x1306, iwl5150_abg_cfg)}, /* Half Mini Card */
131 131 {IWL_PCI_DEVICE(0x423C, 0x1221, iwl5150_agn_cfg)}, /* Mini Card */
132 132 {IWL_PCI_DEVICE(0x423C, 0x1321, iwl5150_agn_cfg)}, /* Half Mini Card */
133 + {IWL_PCI_DEVICE(0x423C, 0x1326, iwl5150_abg_cfg)}, /* Half Mini Card */
133 134
134 135 {IWL_PCI_DEVICE(0x423D, 0x1211, iwl5150_agn_cfg)}, /* Mini Card */
135 136 {IWL_PCI_DEVICE(0x423D, 0x1311, iwl5150_agn_cfg)}, /* Half Mini Card */
+8
drivers/net/wireless/iwlwifi/pcie/rx.c
···
888 888
889 889 iwl_op_mode_hw_rf_kill(trans->op_mode, hw_rfkill);
890 890 if (hw_rfkill) {
891 + /*
892 + * Clear the interrupt in APMG if the NIC is going down.
893 + * Note that when the NIC exits RFkill (else branch), we
894 + * can't access prph and the NIC will be reset in
895 + * start_hw anyway.
896 + */
897 + iwl_write_prph(trans, APMG_RTC_INT_STT_REG,
898 + APMG_RTC_INT_STT_RFKILL);
891 899 set_bit(STATUS_RFKILL, &trans_pcie->status);
892 900 if (test_and_clear_bit(STATUS_HCMD_ACTIVE,
893 901 &trans_pcie->status))
+5
drivers/net/wireless/iwlwifi/pcie/trans.c
···
670 670 return err;
671 671 }
672 672
673 + /* Reset the entire device */
674 + iwl_set_bit(trans, CSR_RESET, CSR_RESET_REG_FLAG_SW_RESET);
675 +
676 + usleep_range(10, 15);
677 +
673 678 iwl_pcie_apm_init(trans);
674 679
675 680 /* From now on, the op_mode will be kept updated about RF kill state */
+2 -2
drivers/net/wireless/mwifiex/cfg80211.c
···
1716 1716 struct mwifiex_private *priv = mwifiex_netdev_get_priv(dev);
1717 1717 int ret;
1718 1718
1719 - if (priv->bss_mode != NL80211_IFTYPE_STATION) {
1719 + if (GET_BSS_ROLE(priv) != MWIFIEX_BSS_ROLE_STA) {
1720 1720 wiphy_err(wiphy,
1721 - "%s: reject infra assoc request in non-STA mode\n",
1721 + "%s: reject infra assoc request in non-STA role\n",
1722 1722 dev->name);
1723 1723 return -EINVAL;
1724 1724 }
+2 -1
drivers/net/wireless/mwifiex/cfp.c
···
415 415 u32 k = 0;
416 416 struct mwifiex_adapter *adapter = priv->adapter;
417 417
418 - if (priv->bss_mode == NL80211_IFTYPE_STATION) {
418 + if (priv->bss_mode == NL80211_IFTYPE_STATION ||
419 + priv->bss_mode == NL80211_IFTYPE_P2P_CLIENT) {
419 420 switch (adapter->config_bands) {
420 421 case BAND_B:
421 422 dev_dbg(adapter->dev, "info: infra band=%d "
+4 -2
drivers/net/wireless/mwifiex/join.c
···
1291 1291 {
1292 1292 u8 current_bssid[ETH_ALEN];
1293 1293
1294 - /* Return error if the adapter or table entry is not marked as infra */
1295 - if ((priv->bss_mode != NL80211_IFTYPE_STATION) ||
1294 + /* Return error if the adapter is not STA role or table entry
1295 + * is not marked as infra.
1296 + */
1297 + if ((GET_BSS_ROLE(priv) != MWIFIEX_BSS_ROLE_STA) ||
1296 1298 (bss_desc->bss_mode != NL80211_IFTYPE_STATION))
1297 1299 return -1;
1298 1300
+2 -2
drivers/net/wireless/mwifiex/sdio.c
···
1639 1639 /* Allocate buffer and copy payload */
1640 1640 blk_size = MWIFIEX_SDIO_BLOCK_SIZE;
1641 1641 buf_block_len = (pkt_len + blk_size - 1) / blk_size;
1642 - *(u16 *) &payload[0] = (u16) pkt_len;
1643 - *(u16 *) &payload[2] = type;
1642 + *(__le16 *)&payload[0] = cpu_to_le16((u16)pkt_len);
1643 + *(__le16 *)&payload[2] = cpu_to_le16(type);
1644 1644
1645 1645 /*
1646 1646 * This is SDIO specific header
+2 -2
drivers/net/wireless/mwifiex/sta_ioctl.c
···
257 257 goto done;
258 258 }
259 259
260 - if (priv->bss_mode == NL80211_IFTYPE_STATION) {
260 + if (priv->bss_mode == NL80211_IFTYPE_STATION ||
261 + priv->bss_mode == NL80211_IFTYPE_P2P_CLIENT) {
261 262 u8 config_bands;
262 263
263 - /* Infra mode */
264 264 ret = mwifiex_deauthenticate(priv, NULL);
265 265 if (ret)
266 266 goto done;
+11 -7
drivers/net/wireless/rt2x00/rt2x00queue.c
···
936 936 spin_unlock_irqrestore(&queue->index_lock, irqflags);
937 937 }
938 938
939 - void rt2x00queue_pause_queue(struct data_queue *queue)
939 + void rt2x00queue_pause_queue_nocheck(struct data_queue *queue)
940 940 {
941 - if (!test_bit(DEVICE_STATE_PRESENT, &queue->rt2x00dev->flags) ||
942 - !test_bit(QUEUE_STARTED, &queue->flags) ||
943 - test_and_set_bit(QUEUE_PAUSED, &queue->flags))
944 - return;
945 -
946 941 switch (queue->qid) {
947 942 case QID_AC_VO:
948 943 case QID_AC_VI:
···
952 957 default:
953 958 break;
954 959 }
960 + }
961 + void rt2x00queue_pause_queue(struct data_queue *queue)
962 + {
963 + if (!test_bit(DEVICE_STATE_PRESENT, &queue->rt2x00dev->flags) ||
964 + !test_bit(QUEUE_STARTED, &queue->flags) ||
965 + test_and_set_bit(QUEUE_PAUSED, &queue->flags))
966 + return;
967 +
968 + rt2x00queue_pause_queue_nocheck(queue);
955 969 }
956 970 EXPORT_SYMBOL_GPL(rt2x00queue_pause_queue);
957 971
···
1023 1019 return;
1024 1020 }
1025 1021
1026 - rt2x00queue_pause_queue(queue);
1022 + rt2x00queue_pause_queue_nocheck(queue);
1027 1023
1028 1024 queue->rt2x00dev->ops->lib->stop_queue(queue);
1029 1025
+1 -1
include/linux/netdevice.h
···
973 973 gfp_t gfp);
974 974 void (*ndo_netpoll_cleanup)(struct net_device *dev);
975 975 #endif
976 - #ifdef CONFIG_NET_LL_RX_POLL
976 + #ifdef CONFIG_NET_RX_BUSY_POLL
977 977 int (*ndo_busy_poll)(struct napi_struct *dev);
978 978 #endif
979 979 int (*ndo_set_vf_mac)(struct net_device *dev,
+1 -1
include/linux/skbuff.h
···
501 501 /* 7/9 bit hole (depending on ndisc_nodetype presence) */
502 502 kmemcheck_bitfield_end(flags2);
503 503
504 - #if defined CONFIG_NET_DMA || defined CONFIG_NET_LL_RX_POLL
504 + #if defined CONFIG_NET_DMA || defined CONFIG_NET_RX_BUSY_POLL
505 505 union {
506 506 unsigned int napi_id;
507 507 dma_cookie_t dma_cookie;
+8 -3
include/net/busy_poll.h
···
27 27 #include <linux/netdevice.h>
28 28 #include <net/ip.h>
29 29
30 - #ifdef CONFIG_NET_LL_RX_POLL
30 + #ifdef CONFIG_NET_RX_BUSY_POLL
31 31
32 32 struct napi_struct;
33 33 extern unsigned int sysctl_net_busy_read __read_mostly;
···
146 146 sk->sk_napi_id = skb->napi_id;
147 147 }
148 148
149 - #else /* CONFIG_NET_LL_RX_POLL */
149 + #else /* CONFIG_NET_RX_BUSY_POLL */
150 150 static inline unsigned long net_busy_loop_on(void)
151 151 {
152 152 return 0;
···
181 181 return true;
182 182 }
183 183
184 - #endif /* CONFIG_NET_LL_RX_POLL */
184 + static inline bool sk_busy_loop(struct sock *sk, int nonblock)
185 + {
186 + return false;
187 + }
188 +
189 + #endif /* CONFIG_NET_RX_BUSY_POLL */
185 190 #endif /* _LINUX_NET_BUSY_POLL_H */
+1 -1
include/net/ip6_fib.h
···
300 300 struct nl_info *info);
301 301
302 302 extern void fib6_run_gc(unsigned long expires,
303 - struct net *net);
303 + struct net *net, bool force);
304 304
305 305 extern void fib6_gc_cleanup(void);
306 306
+1 -1
include/net/ndisc.h
···
119 119 * if RFC 3831 IPv6-over-Fibre Channel is ever implemented it may
120 120 * also need a pad of 2.
121 121 */
122 - static int ndisc_addr_option_pad(unsigned short type)
122 + static inline int ndisc_addr_option_pad(unsigned short type)
123 123 {
124 124 switch (type) {
125 125 case ARPHRD_INFINIBAND: return 2;
+1 -1
include/net/nfc/hci.h
···
59 59 struct nfc_target *target);
60 60 int (*event_received)(struct nfc_hci_dev *hdev, u8 gate, u8 event,
61 61 struct sk_buff *skb);
62 - int (*fw_upload)(struct nfc_hci_dev *hdev, const char *firmware_name);
62 + int (*fw_download)(struct nfc_hci_dev *hdev, const char *firmware_name);
63 63 int (*discover_se)(struct nfc_hci_dev *dev);
64 64 int (*enable_se)(struct nfc_hci_dev *dev, u32 se_idx);
65 65 int (*disable_se)(struct nfc_hci_dev *dev, u32 se_idx);
+2 -2
include/net/nfc/nfc.h
···
68 68 void *cb_context);
69 69 int (*tm_send)(struct nfc_dev *dev, struct sk_buff *skb);
70 70 int (*check_presence)(struct nfc_dev *dev, struct nfc_target *target);
71 - int (*fw_upload)(struct nfc_dev *dev, const char *firmware_name);
71 + int (*fw_download)(struct nfc_dev *dev, const char *firmware_name);
72 72
73 73 /* Secure Element API */
74 74 int (*discover_se)(struct nfc_dev *dev);
···
127 127 int targets_generation;
128 128 struct device dev;
129 129 bool dev_up;
130 - bool fw_upload_in_progress;
130 + bool fw_download_in_progress;
131 131 u8 rf_mode;
132 132 bool polling;
133 133 struct nfc_target *active_target;
+1 -1
include/net/sock.h
···
327 327 #ifdef CONFIG_RPS
328 328 __u32 sk_rxhash;
329 329 #endif
330 - #ifdef CONFIG_NET_LL_RX_POLL
330 + #ifdef CONFIG_NET_RX_BUSY_POLL
331 331 unsigned int sk_napi_id;
332 332 unsigned int sk_ll_usec;
333 333 #endif
+3 -3
include/uapi/linux/nfc.h
···
69 69 * starting a poll from a device which has a secure element enabled means
70 70 * we want to do SE based card emulation.
71 71 * @NFC_CMD_DISABLE_SE: Disable the physical link to a specific secure element.
72 - * @NFC_CMD_FW_UPLOAD: Request to Load/flash firmware, or event to inform that
73 - * some firmware was loaded
72 + * @NFC_CMD_FW_DOWNLOAD: Request to Load/flash firmware, or event to inform
73 + * that some firmware was loaded
74 74 */
75 75 enum nfc_commands {
76 76 NFC_CMD_UNSPEC,
···
94 94 NFC_CMD_DISABLE_SE,
95 95 NFC_CMD_LLC_SDREQ,
96 96 NFC_EVENT_LLC_SDRES,
97 - NFC_CMD_FW_UPLOAD,
97 + NFC_CMD_FW_DOWNLOAD,
98 98 NFC_EVENT_SE_ADDED,
99 99 NFC_EVENT_SE_REMOVED,
100 100 /* private: internal use only */
+1 -1
net/Kconfig
···
244 244 Cgroup subsystem for use in assigning processes to network priorities on
245 245 a per-interface basis
246 246
247 - config NET_LL_RX_POLL
247 + config NET_RX_BUSY_POLL
248 248 boolean
249 249 default y
250 250
+17 -9
net/bluetooth/hci_core.c
···
513 513
514 514 hci_setup_event_mask(req);
515 515
516 - if (hdev->hci_ver > BLUETOOTH_VER_1_1)
516 + /* AVM Berlin (31), aka "BlueFRITZ!", doesn't support the read
517 + * local supported commands HCI command.
518 + */
519 + if (hdev->manufacturer != 31 && hdev->hci_ver > BLUETOOTH_VER_1_1)
517 520 hci_req_add(req, HCI_OP_READ_LOCAL_COMMANDS, 0, NULL);
518 521
519 522 if (lmp_ssp_capable(hdev)) {
···
2168 2165
2169 2166 BT_DBG("%p name %s bus %d", hdev, hdev->name, hdev->bus);
2170 2167
2171 - write_lock(&hci_dev_list_lock);
2172 - list_add(&hdev->list, &hci_dev_list);
2173 - write_unlock(&hci_dev_list_lock);
2174 -
2175 2168 hdev->workqueue = alloc_workqueue("%s", WQ_HIGHPRI | WQ_UNBOUND |
2176 2169 WQ_MEM_RECLAIM, 1, hdev->name);
2177 2170 if (!hdev->workqueue) {
···
2202 2203 if (hdev->dev_type != HCI_AMP)
2203 2204 set_bit(HCI_AUTO_OFF, &hdev->dev_flags);
2204 2205
2206 + write_lock(&hci_dev_list_lock);
2207 + list_add(&hdev->list, &hci_dev_list);
2208 + write_unlock(&hci_dev_list_lock);
2209 +
2205 2210 hci_notify(hdev, HCI_DEV_REG);
2206 2211 hci_dev_hold(hdev);
2207 2212
···
2218 2215 destroy_workqueue(hdev->req_workqueue);
2219 2216 err:
2220 2217 ida_simple_remove(&hci_index_ida, hdev->id);
2221 - write_lock(&hci_dev_list_lock);
2222 - list_del(&hdev->list);
2223 - write_unlock(&hci_dev_list_lock);
2224 2218
2225 2219 return error;
2226 2220 }
···
3399 3399 */
3400 3400 if (hdev->sent_cmd) {
3401 3401 req_complete = bt_cb(hdev->sent_cmd)->req.complete;
3402 - if (req_complete)
3402 +
3403 + if (req_complete) {
3404 + /* We must set the complete callback to NULL to
3405 + * avoid calling the callback more than once if
3406 + * this function gets called again.
3407 + */
3408 + bt_cb(hdev->sent_cmd)->req.complete = NULL;
3409 +
3403 3410 goto call_complete;
3411 + }
3404 3412 }
3405 3413
3406 3414 /* Remove all pending commands belonging to this request */
+2 -1
net/bridge/br_device.c
···
70 70 }
71 71
72 72 mdst = br_mdb_get(br, skb, vid);
73 - if (mdst || BR_INPUT_SKB_CB_MROUTERS_ONLY(skb))
73 + if ((mdst || BR_INPUT_SKB_CB_MROUTERS_ONLY(skb)) &&
74 + br_multicast_querier_exists(br))
74 75 br_multicast_deliver(mdst, skb);
75 76 else
76 77 br_flood_deliver(br, skb, false);
+2 -1
net/bridge/br_input.c
···
101 101 unicast = false;
102 102 } else if (is_multicast_ether_addr(dest)) {
103 103 mdst = br_mdb_get(br, skb, vid);
104 - if (mdst || BR_INPUT_SKB_CB_MROUTERS_ONLY(skb)) {
104 + if ((mdst || BR_INPUT_SKB_CB_MROUTERS_ONLY(skb)) &&
105 + br_multicast_querier_exists(br)) {
105 106 if ((mdst && mdst->mglist) ||
106 107 br_multicast_is_router(br))
107 108 skb2 = skb;
+30 -9
net/bridge/br_multicast.c
···
1014 1014 }
1015 1015 #endif
1016 1016
1017 + static void br_multicast_update_querier_timer(struct net_bridge *br,
1018 + unsigned long max_delay)
1019 + {
1020 + if (!timer_pending(&br->multicast_querier_timer))
1021 + br->multicast_querier_delay_time = jiffies + max_delay;
1022 +
1023 + mod_timer(&br->multicast_querier_timer,
1024 + jiffies + br->multicast_querier_interval);
1025 + }
1026 +
1017 1027 /*
1018 1028 * Add port to router_list
1019 1029 * list is maintained ordered by pointer value
···
1074 1064
1075 1065 static void br_multicast_query_received(struct net_bridge *br,
1076 1066 struct net_bridge_port *port,
1077 - int saddr)
1067 + int saddr,
1068 + unsigned long max_delay)
1078 1069 {
1079 1070 if (saddr)
1080 - mod_timer(&br->multicast_querier_timer,
1081 - jiffies + br->multicast_querier_interval);
1071 + br_multicast_update_querier_timer(br, max_delay);
1082 1072 else if (timer_pending(&br->multicast_querier_timer))
1083 1073 return;
1084 1074
···
1106 1096 (port && port->state == BR_STATE_DISABLED))
1107 1097 goto out;
1108 1098
1109 - br_multicast_query_received(br, port, !!iph->saddr);
1110 -
1111 1099 group = ih->group;
1112 1100
1113 1101 if (skb->len == sizeof(*ih)) {
···
1128 1120 max_delay = ih3->code ?
1129 1121 IGMPV3_MRC(ih3->code) * (HZ / IGMP_TIMER_SCALE) : 1;
1130 1122 }
1123 +
1124 + br_multicast_query_received(br, port, !!iph->saddr, max_delay);
1131 1125
1132 1126 if (!group)
1133 1127 goto out;
···
1186 1176 (port && port->state == BR_STATE_DISABLED))
1187 1177 goto out;
1188 1178
1189 - br_multicast_query_received(br, port, !ipv6_addr_any(&ip6h->saddr));
1190 -
1191 1179 if (skb->len == sizeof(*mld)) {
1192 1180 if (!pskb_may_pull(skb, sizeof(*mld))) {
1193 1181 err = -EINVAL;
···
1205 1197 group = &mld2q->mld2q_mca;
1206 1198 max_delay = mld2q->mld2q_mrc ? MLDV2_MRC(ntohs(mld2q->mld2q_mrc)) : 1;
1207 1199 }
1200 +
1201 + br_multicast_query_received(br, port, !ipv6_addr_any(&ip6h->saddr),
1202 + max_delay);
1208 1203
1209 1204 if (!group)
1210 1205 goto out;
···
1654 1643 br->multicast_querier_interval = 255 * HZ;
1655 1644 br->multicast_membership_interval = 260 * HZ;
1656 1645
1646 + br->multicast_querier_delay_time = 0;
1647 +
1657 1648 spin_lock_init(&br->multicast_lock);
1658 1649 setup_timer(&br->multicast_router_timer,
1659 1650 br_multicast_local_router_expired, 0);
···
1844 1831
1845 1832 int br_multicast_set_querier(struct net_bridge *br, unsigned long val)
1846 1833 {
1834 + unsigned long max_delay;
1835 +
1847 1836 val = !!val;
1848 1837
1849 1838 spin_lock_bh(&br->multicast_lock);
···
1853 1838 goto unlock;
1854 1839
1855 1840 br->multicast_querier = val;
1856 - if (val)
1857 - br_multicast_start_querier(br);
1841 + if (!val)
1842 + goto unlock;
1843 +
1844 + max_delay = br->multicast_query_response_interval;
1845 + if (!timer_pending(&br->multicast_querier_timer))
1846 + br->multicast_querier_delay_time = jiffies + max_delay;
1847 +
1848 + br_multicast_start_querier(br);
1858 1849
1859 1850 unlock:
1860 1851 spin_unlock_bh(&br->multicast_lock);
+12
net/bridge/br_private.h
···
267 267 unsigned long multicast_query_interval;
268 268 unsigned long multicast_query_response_interval;
269 269 unsigned long multicast_startup_query_interval;
270 + unsigned long multicast_querier_delay_time;
270 271
271 272 spinlock_t multicast_lock;
272 273 struct net_bridge_mdb_htable __rcu *mdb;
···
502 501 (br->multicast_router == 1 &&
503 502 timer_pending(&br->multicast_router_timer));
504 503 }
504 +
505 + static inline bool br_multicast_querier_exists(struct net_bridge *br)
506 + {
507 + return time_is_before_jiffies(br->multicast_querier_delay_time) &&
508 + (br->multicast_querier ||
509 + timer_pending(&br->multicast_querier_timer));
510 + }
505 511 #else
506 512 static inline int br_multicast_rcv(struct net_bridge *br,
507 513 struct net_bridge_port *port,
···
564 556 static inline bool br_multicast_is_router(struct net_bridge *br)
565 557 {
566 558 return 0;
567 559 }
560 + static inline bool br_multicast_querier_exists(struct net_bridge *br)
561 + {
562 + return false;
563 + }
568 564 static inline void br_mdb_init(void)
569 565 {
+1 -1
net/core/skbuff.c
···
740 740
741 741 skb_copy_secmark(new, old);
742 742
743 - #ifdef CONFIG_NET_LL_RX_POLL
743 + #ifdef CONFIG_NET_RX_BUSY_POLL
744 744 new->napi_id = old->napi_id;
745 745 #endif
746 746 }
+3 -3
net/core/sock.c
···
900 900 sock_valbool_flag(sk, SOCK_SELECT_ERR_QUEUE, valbool);
901 901 break;
902 902
903 - #ifdef CONFIG_NET_LL_RX_POLL
903 + #ifdef CONFIG_NET_RX_BUSY_POLL
904 904 case SO_BUSY_POLL:
905 905 /* allow unprivileged users to decrease the value */
906 906 if ((val > sk->sk_ll_usec) && !capable(CAP_NET_ADMIN))
···
1170 1170 v.val = sock_flag(sk, SOCK_SELECT_ERR_QUEUE);
1171 1171 break;
1172 1172
1173 - #ifdef CONFIG_NET_LL_RX_POLL
1173 + #ifdef CONFIG_NET_RX_BUSY_POLL
1174 1174 case SO_BUSY_POLL:
1175 1175 v.val = sk->sk_ll_usec;
1176 1176 break;
···
2292 2292
2293 2293 sk->sk_stamp = ktime_set(-1L, 0);
2294 2294
2295 - #ifdef CONFIG_NET_LL_RX_POLL
2295 + #ifdef CONFIG_NET_RX_BUSY_POLL
2296 2296 sk->sk_napi_id = 0;
2297 2297 sk->sk_ll_usec = sysctl_net_busy_read;
2298 2298 #endif
+6 -2
net/core/sysctl_net_core.c
···
 #include <net/net_ratelimit.h>
 #include <net/busy_poll.h>
 
+static int zero = 0;
 static int one = 1;
+static int ushort_max = USHRT_MAX;
 
 #ifdef CONFIG_RPS
 static int rps_sock_flow_sysctl(struct ctl_table *table, int write,
···
 		.proc_handler	= flow_limit_table_len_sysctl
 	},
 #endif /* CONFIG_NET_FLOW_LIMIT */
-#ifdef CONFIG_NET_LL_RX_POLL
+#ifdef CONFIG_NET_RX_BUSY_POLL
 	{
 		.procname	= "busy_poll",
 		.data		= &sysctl_net_busy_poll,
···
 		.data		= &init_net.core.sysctl_somaxconn,
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
-		.proc_handler	= proc_dointvec
+		.extra1		= &zero,
+		.extra2		= &ushort_max,
+		.proc_handler	= proc_dointvec_minmax
 	},
 	{ }
 };
+3 -1
net/ipv4/devinet.c
···
 		ci = nla_data(tb[IFA_CACHEINFO]);
 		if (!ci->ifa_valid || ci->ifa_prefered > ci->ifa_valid) {
 			err = -EINVAL;
-			goto errout;
+			goto errout_free;
 		}
 		*pvalid_lft = ci->ifa_valid;
 		*pprefered_lft = ci->ifa_prefered;
···
 
 	return ifa;
 
+errout_free:
+	inet_free_ifa(ifa);
 errout:
 	return ERR_PTR(err);
 }
+22 -21
net/ipv6/addrconf.c
···
 /* On success it returns ifp with increased reference count */
 
 static struct inet6_ifaddr *
-ipv6_add_addr(struct inet6_dev *idev, const struct in6_addr *addr, int pfxlen,
-	      int scope, u32 flags)
+ipv6_add_addr(struct inet6_dev *idev, const struct in6_addr *addr,
+	      const struct in6_addr *peer_addr, int pfxlen,
+	      int scope, u32 flags, u32 valid_lft, u32 prefered_lft)
 {
 	struct inet6_ifaddr *ifa = NULL;
 	struct rt6_info *rt;
···
 	}
 
 	ifa->addr = *addr;
+	if (peer_addr)
+		ifa->peer_addr = *peer_addr;
 
 	spin_lock_init(&ifa->lock);
 	spin_lock_init(&ifa->state_lock);
···
 	ifa->scope = scope;
 	ifa->prefix_len = pfxlen;
 	ifa->flags = flags | IFA_F_TENTATIVE;
+	ifa->valid_lft = valid_lft;
+	ifa->prefered_lft = prefered_lft;
 	ifa->cstamp = ifa->tstamp = jiffies;
 	ifa->tokenized = false;
···
 
 	ift = !max_addresses ||
 	      ipv6_count_addresses(idev) < max_addresses ?
-		ipv6_add_addr(idev, &addr, tmp_plen, ipv6_addr_scope(&addr),
-			      addr_flags) : NULL;
+		ipv6_add_addr(idev, &addr, NULL, tmp_plen,
+			      ipv6_addr_scope(&addr), addr_flags,
+			      tmp_valid_lft, tmp_prefered_lft) : NULL;
 	if (IS_ERR_OR_NULL(ift)) {
 		in6_ifa_put(ifp);
 		in6_dev_put(idev);
···
 
 	spin_lock_bh(&ift->lock);
 	ift->ifpub = ifp;
-	ift->valid_lft = tmp_valid_lft;
-	ift->prefered_lft = tmp_prefered_lft;
 	ift->cstamp = now;
 	ift->tstamp = tmp_tstamp;
 	spin_unlock_bh(&ift->lock);
···
 	 */
 	if (!max_addresses ||
 	    ipv6_count_addresses(in6_dev) < max_addresses)
-		ifp = ipv6_add_addr(in6_dev, &addr, pinfo->prefix_len,
+		ifp = ipv6_add_addr(in6_dev, &addr, NULL,
+				    pinfo->prefix_len,
 				    addr_type&IPV6_ADDR_SCOPE_MASK,
-				    addr_flags);
+				    addr_flags, valid_lft,
+				    prefered_lft);
 
 	if (IS_ERR_OR_NULL(ifp)) {
 		in6_dev_put(in6_dev);
 		return;
 	}
 
-	update_lft = create = 1;
+	update_lft = 0;
+	create = 1;
 	ifp->cstamp = jiffies;
 	ifp->tokenized = tokenized;
 	addrconf_dad_start(ifp);
···
 		stored_lft = ifp->valid_lft - (now - ifp->tstamp) / HZ;
 	else
 		stored_lft = 0;
-	if (!update_lft && stored_lft) {
+	if (!update_lft && !create && stored_lft) {
 		if (valid_lft > MIN_VALID_LIFETIME ||
 		    valid_lft > stored_lft)
 			update_lft = 1;
···
 		prefered_lft = timeout;
 	}
 
-	ifp = ipv6_add_addr(idev, pfx, plen, scope, ifa_flags);
+	ifp = ipv6_add_addr(idev, pfx, peer_pfx, plen, scope, ifa_flags,
+			    valid_lft, prefered_lft);
 
 	if (!IS_ERR(ifp)) {
-		spin_lock_bh(&ifp->lock);
-		ifp->valid_lft = valid_lft;
-		ifp->prefered_lft = prefered_lft;
-		ifp->tstamp = jiffies;
-		if (peer_pfx)
-			ifp->peer_addr = *peer_pfx;
-		spin_unlock_bh(&ifp->lock);
-
 		addrconf_prefix_route(&ifp->addr, ifp->prefix_len, dev,
 				      expires, flags);
 		/*
···
 {
 	struct inet6_ifaddr *ifp;
 
-	ifp = ipv6_add_addr(idev, addr, plen, scope, IFA_F_PERMANENT);
+	ifp = ipv6_add_addr(idev, addr, NULL, plen,
+			    scope, IFA_F_PERMANENT, 0, 0);
 	if (!IS_ERR(ifp)) {
 		spin_lock_bh(&ifp->lock);
 		ifp->flags &= ~IFA_F_TENTATIVE;
···
 #endif
 
 
-	ifp = ipv6_add_addr(idev, addr, 64, IFA_LINK, addr_flags);
+	ifp = ipv6_add_addr(idev, addr, NULL, 64, IFA_LINK, addr_flags, 0, 0);
 	if (!IS_ERR(ifp)) {
 		addrconf_prefix_route(&ifp->addr, ifp->prefix_len, idev->dev, 0, 0);
 		addrconf_dad_start(ifp);
+13 -12
net/ipv6/ip6_fib.c
···
 
 static DEFINE_SPINLOCK(fib6_gc_lock);
 
-void fib6_run_gc(unsigned long expires, struct net *net)
+void fib6_run_gc(unsigned long expires, struct net *net, bool force)
 {
-	if (expires != ~0UL) {
+	unsigned long now;
+
+	if (force) {
 		spin_lock_bh(&fib6_gc_lock);
-		gc_args.timeout = expires ? (int)expires :
-			net->ipv6.sysctl.ip6_rt_gc_interval;
-	} else {
-		if (!spin_trylock_bh(&fib6_gc_lock)) {
-			mod_timer(&net->ipv6.ip6_fib_timer, jiffies + HZ);
-			return;
-		}
-		gc_args.timeout = net->ipv6.sysctl.ip6_rt_gc_interval;
+	} else if (!spin_trylock_bh(&fib6_gc_lock)) {
+		mod_timer(&net->ipv6.ip6_fib_timer, jiffies + HZ);
+		return;
 	}
+	gc_args.timeout = expires ? (int)expires :
+			  net->ipv6.sysctl.ip6_rt_gc_interval;
 
 	gc_args.more = icmp6_dst_gc();
 
 	fib6_clean_all(net, fib6_age, 0, NULL);
+	now = jiffies;
+	net->ipv6.ip6_rt_last_gc = now;
 
 	if (gc_args.more)
 		mod_timer(&net->ipv6.ip6_fib_timer,
-			  round_jiffies(jiffies
+			  round_jiffies(now
 					+ net->ipv6.sysctl.ip6_rt_gc_interval));
 	else
 		del_timer(&net->ipv6.ip6_fib_timer);
···
 
 static void fib6_gc_timer_cb(unsigned long arg)
 {
-	fib6_run_gc(0, (struct net *)arg);
+	fib6_run_gc(0, (struct net *)arg, true);
 }
 
 static int __net_init fib6_net_init(struct net *net)
+2 -2
net/ipv6/ndisc.c
···
 	switch (event) {
 	case NETDEV_CHANGEADDR:
 		neigh_changeaddr(&nd_tbl, dev);
-		fib6_run_gc(~0UL, net);
+		fib6_run_gc(0, net, false);
 		idev = in6_dev_get(dev);
 		if (!idev)
 			break;
···
 		break;
 	case NETDEV_DOWN:
 		neigh_ifdown(&nd_tbl, dev);
-		fib6_run_gc(~0UL, net);
+		fib6_run_gc(0, net, false);
 		break;
 	case NETDEV_NOTIFY_PEERS:
 		ndisc_send_unsol_na(dev);
+3 -5
net/ipv6/route.c
···
 
 static int ip6_dst_gc(struct dst_ops *ops)
 {
-	unsigned long now = jiffies;
 	struct net *net = container_of(ops, struct net, ipv6.ip6_dst_ops);
 	int rt_min_interval = net->ipv6.sysctl.ip6_rt_gc_min_interval;
 	int rt_max_size = net->ipv6.sysctl.ip6_rt_max_size;
···
 	int entries;
 
 	entries = dst_entries_get_fast(ops);
-	if (time_after(rt_last_gc + rt_min_interval, now) &&
+	if (time_after(rt_last_gc + rt_min_interval, jiffies) &&
 	    entries <= rt_max_size)
 		goto out;
 
 	net->ipv6.ip6_rt_gc_expire++;
-	fib6_run_gc(net->ipv6.ip6_rt_gc_expire, net);
-	net->ipv6.ip6_rt_last_gc = now;
+	fib6_run_gc(net->ipv6.ip6_rt_gc_expire, net, entries > rt_max_size);
 	entries = dst_entries_get_slow(ops);
 	if (entries < ops->gc_thresh)
 		net->ipv6.ip6_rt_gc_expire = rt_gc_timeout>>1;
···
 	net = (struct net *)ctl->extra1;
 	delay = net->ipv6.sysctl.flush_delay;
 	proc_dointvec(ctl, write, buffer, lenp, ppos);
-	fib6_run_gc(delay <= 0 ? ~0UL : (unsigned long)delay, net);
+	fib6_run_gc(delay <= 0 ? 0 : (unsigned long)delay, net, delay > 0);
 	return 0;
 }
 
+4
net/mac80211/mesh_ps.c
···
 	enum nl80211_mesh_power_mode pm;
 	bool do_buffer;
 
+	/* For non-assoc STA, prevent buffering or frame transmission */
+	if (sta->sta_state < IEEE80211_STA_ASSOC)
+		return;
+
 	/*
 	 * use peer-specific power mode if peering is established and the
 	 * peer's power mode is known
+5 -2
net/mac80211/pm.c
···
 	}
 	mutex_unlock(&local->sta_mtx);
 
-	/* remove all interfaces */
+	/* remove all interfaces that were created in the driver */
 	list_for_each_entry(sdata, &local->interfaces, list) {
-		if (!ieee80211_sdata_running(sdata))
+		if (!ieee80211_sdata_running(sdata) ||
+		    sdata->vif.type == NL80211_IFTYPE_AP_VLAN ||
+		    sdata->vif.type == NL80211_IFTYPE_MONITOR)
 			continue;
+
 		drv_remove_interface(local, sdata);
 	}
 
+2 -2
net/netlabel/netlabel_cipso_v4.c
···
 {
 	struct netlbl_domhsh_walk_arg *cb_arg = arg;
 
-	if (entry->type == NETLBL_NLTYPE_CIPSOV4 &&
-	    entry->type_def.cipsov4->doi == cb_arg->doi)
+	if (entry->def.type == NETLBL_NLTYPE_CIPSOV4 &&
+	    entry->def.cipso->doi == cb_arg->doi)
 		return netlbl_domhsh_remove_entry(entry, cb_arg->audit_info);
 
 	return 0;
+49 -55
net/netlabel/netlabel_domainhash.c
···
 #endif /* IPv6 */
 
 	ptr = container_of(entry, struct netlbl_dom_map, rcu);
-	if (ptr->type == NETLBL_NLTYPE_ADDRSELECT) {
+	if (ptr->def.type == NETLBL_NLTYPE_ADDRSELECT) {
 		netlbl_af4list_foreach_safe(iter4, tmp4,
-					    &ptr->type_def.addrsel->list4) {
+					    &ptr->def.addrsel->list4) {
 			netlbl_af4list_remove_entry(iter4);
 			kfree(netlbl_domhsh_addr4_entry(iter4));
 		}
 #if IS_ENABLED(CONFIG_IPV6)
 		netlbl_af6list_foreach_safe(iter6, tmp6,
-					    &ptr->type_def.addrsel->list6) {
+					    &ptr->def.addrsel->list6) {
 			netlbl_af6list_remove_entry(iter6);
 			kfree(netlbl_domhsh_addr6_entry(iter6));
 		}
···
 	if (addr4 != NULL) {
 		struct netlbl_domaddr4_map *map4;
 		map4 = netlbl_domhsh_addr4_entry(addr4);
-		type = map4->type;
-		cipsov4 = map4->type_def.cipsov4;
+		type = map4->def.type;
+		cipsov4 = map4->def.cipso;
 		netlbl_af4list_audit_addr(audit_buf, 0, NULL,
 					  addr4->addr, addr4->mask);
 #if IS_ENABLED(CONFIG_IPV6)
 	} else if (addr6 != NULL) {
 		struct netlbl_domaddr6_map *map6;
 		map6 = netlbl_domhsh_addr6_entry(addr6);
-		type = map6->type;
+		type = map6->def.type;
 		netlbl_af6list_audit_addr(audit_buf, 0, NULL,
 					  &addr6->addr, &addr6->mask);
 #endif /* IPv6 */
 	} else {
-		type = entry->type;
-		cipsov4 = entry->type_def.cipsov4;
+		type = entry->def.type;
+		cipsov4 = entry->def.cipso;
 	}
 	switch (type) {
 	case NETLBL_NLTYPE_UNLABELED:
···
 	if (entry == NULL)
 		return -EINVAL;
 
-	switch (entry->type) {
+	switch (entry->def.type) {
 	case NETLBL_NLTYPE_UNLABELED:
-		if (entry->type_def.cipsov4 != NULL ||
-		    entry->type_def.addrsel != NULL)
+		if (entry->def.cipso != NULL || entry->def.addrsel != NULL)
 			return -EINVAL;
 		break;
 	case NETLBL_NLTYPE_CIPSOV4:
-		if (entry->type_def.cipsov4 == NULL)
+		if (entry->def.cipso == NULL)
 			return -EINVAL;
 		break;
 	case NETLBL_NLTYPE_ADDRSELECT:
-		netlbl_af4list_foreach(iter4, &entry->type_def.addrsel->list4) {
+		netlbl_af4list_foreach(iter4, &entry->def.addrsel->list4) {
 			map4 = netlbl_domhsh_addr4_entry(iter4);
-			switch (map4->type) {
+			switch (map4->def.type) {
 			case NETLBL_NLTYPE_UNLABELED:
-				if (map4->type_def.cipsov4 != NULL)
+				if (map4->def.cipso != NULL)
 					return -EINVAL;
 				break;
 			case NETLBL_NLTYPE_CIPSOV4:
-				if (map4->type_def.cipsov4 == NULL)
+				if (map4->def.cipso == NULL)
 					return -EINVAL;
 				break;
 			default:
···
 			}
 		}
 #if IS_ENABLED(CONFIG_IPV6)
-		netlbl_af6list_foreach(iter6, &entry->type_def.addrsel->list6) {
+		netlbl_af6list_foreach(iter6, &entry->def.addrsel->list6) {
 			map6 = netlbl_domhsh_addr6_entry(iter6);
-			switch (map6->type) {
+			switch (map6->def.type) {
 			case NETLBL_NLTYPE_UNLABELED:
 				break;
 			default:
···
 		rcu_assign_pointer(netlbl_domhsh_def, entry);
 	}
 
-	if (entry->type == NETLBL_NLTYPE_ADDRSELECT) {
+	if (entry->def.type == NETLBL_NLTYPE_ADDRSELECT) {
 		netlbl_af4list_foreach_rcu(iter4,
-					   &entry->type_def.addrsel->list4)
+					   &entry->def.addrsel->list4)
 			netlbl_domhsh_audit_add(entry, iter4, NULL,
 						ret_val, audit_info);
 #if IS_ENABLED(CONFIG_IPV6)
 		netlbl_af6list_foreach_rcu(iter6,
-					   &entry->type_def.addrsel->list6)
+					   &entry->def.addrsel->list6)
 			netlbl_domhsh_audit_add(entry, NULL, iter6,
 						ret_val, audit_info);
 #endif /* IPv6 */
 	} else
 		netlbl_domhsh_audit_add(entry, NULL, NULL,
 					ret_val, audit_info);
-	} else if (entry_old->type == NETLBL_NLTYPE_ADDRSELECT &&
-		   entry->type == NETLBL_NLTYPE_ADDRSELECT) {
+	} else if (entry_old->def.type == NETLBL_NLTYPE_ADDRSELECT &&
+		   entry->def.type == NETLBL_NLTYPE_ADDRSELECT) {
 		struct list_head *old_list4;
 		struct list_head *old_list6;
 
-		old_list4 = &entry_old->type_def.addrsel->list4;
-		old_list6 = &entry_old->type_def.addrsel->list6;
+		old_list4 = &entry_old->def.addrsel->list4;
+		old_list6 = &entry_old->def.addrsel->list6;
 
 		/* we only allow the addition of address selectors if all of
 		 * the selectors do not exist in the existing domain map */
-		netlbl_af4list_foreach_rcu(iter4,
-					   &entry->type_def.addrsel->list4)
+		netlbl_af4list_foreach_rcu(iter4, &entry->def.addrsel->list4)
 			if (netlbl_af4list_search_exact(iter4->addr,
 							iter4->mask,
 							old_list4)) {
···
 				goto add_return;
 			}
 #if IS_ENABLED(CONFIG_IPV6)
-		netlbl_af6list_foreach_rcu(iter6,
-					   &entry->type_def.addrsel->list6)
+		netlbl_af6list_foreach_rcu(iter6, &entry->def.addrsel->list6)
 			if (netlbl_af6list_search_exact(&iter6->addr,
 							&iter6->mask,
 							old_list6)) {
···
 #endif /* IPv6 */
 
 		netlbl_af4list_foreach_safe(iter4, tmp4,
-					    &entry->type_def.addrsel->list4) {
+					    &entry->def.addrsel->list4) {
 			netlbl_af4list_remove_entry(iter4);
 			iter4->valid = 1;
 			ret_val = netlbl_af4list_add(iter4, old_list4);
···
 		}
 #if IS_ENABLED(CONFIG_IPV6)
 		netlbl_af6list_foreach_safe(iter6, tmp6,
-					    &entry->type_def.addrsel->list6) {
+					    &entry->def.addrsel->list6) {
 			netlbl_af6list_remove_entry(iter6);
 			iter6->valid = 1;
 			ret_val = netlbl_af6list_add(iter6, old_list6);
···
 	struct netlbl_af4list *iter4;
 	struct netlbl_domaddr4_map *map4;
 
-	switch (entry->type) {
+	switch (entry->def.type) {
 	case NETLBL_NLTYPE_ADDRSELECT:
 		netlbl_af4list_foreach_rcu(iter4,
-					   &entry->type_def.addrsel->list4) {
+					   &entry->def.addrsel->list4) {
 			map4 = netlbl_domhsh_addr4_entry(iter4);
-			cipso_v4_doi_putdef(map4->type_def.cipsov4);
+			cipso_v4_doi_putdef(map4->def.cipso);
 		}
 		/* no need to check the IPv6 list since we currently
 		 * support only unlabeled protocols for IPv6 */
 		break;
 	case NETLBL_NLTYPE_CIPSOV4:
-		cipso_v4_doi_putdef(entry->type_def.cipsov4);
+		cipso_v4_doi_putdef(entry->def.cipso);
 		break;
 	}
 	call_rcu(&entry->rcu, netlbl_domhsh_free_entry);
···
 		entry_map = netlbl_domhsh_search(domain);
 	else
 		entry_map = netlbl_domhsh_search_def(domain);
-	if (entry_map == NULL || entry_map->type != NETLBL_NLTYPE_ADDRSELECT)
+	if (entry_map == NULL ||
+	    entry_map->def.type != NETLBL_NLTYPE_ADDRSELECT)
 		goto remove_af4_failure;
 
 	spin_lock(&netlbl_domhsh_lock);
 	entry_addr = netlbl_af4list_remove(addr->s_addr, mask->s_addr,
-					   &entry_map->type_def.addrsel->list4);
+					   &entry_map->def.addrsel->list4);
 	spin_unlock(&netlbl_domhsh_lock);
 
 	if (entry_addr == NULL)
 		goto remove_af4_failure;
-	netlbl_af4list_foreach_rcu(iter4, &entry_map->type_def.addrsel->list4)
+	netlbl_af4list_foreach_rcu(iter4, &entry_map->def.addrsel->list4)
 		goto remove_af4_single_addr;
 #if IS_ENABLED(CONFIG_IPV6)
-	netlbl_af6list_foreach_rcu(iter6, &entry_map->type_def.addrsel->list6)
+	netlbl_af6list_foreach_rcu(iter6, &entry_map->def.addrsel->list6)
 		goto remove_af4_single_addr;
 #endif /* IPv6 */
 	/* the domain mapping is empty so remove it from the mapping table */
···
 	 * shouldn't be a problem */
 	synchronize_rcu();
 	entry = netlbl_domhsh_addr4_entry(entry_addr);
-	cipso_v4_doi_putdef(entry->type_def.cipsov4);
+	cipso_v4_doi_putdef(entry->def.cipso);
 	kfree(entry);
 	return 0;
···
 * responsible for ensuring that rcu_read_[un]lock() is called.
 *
 */
-struct netlbl_domaddr4_map *netlbl_domhsh_getentry_af4(const char *domain,
-						       __be32 addr)
+struct netlbl_dommap_def *netlbl_domhsh_getentry_af4(const char *domain,
+						     __be32 addr)
 {
 	struct netlbl_dom_map *dom_iter;
 	struct netlbl_af4list *addr_iter;
···
 	dom_iter = netlbl_domhsh_search_def(domain);
 	if (dom_iter == NULL)
 		return NULL;
-	if (dom_iter->type != NETLBL_NLTYPE_ADDRSELECT)
-		return NULL;
 
-	addr_iter = netlbl_af4list_search(addr,
-					  &dom_iter->type_def.addrsel->list4);
+	if (dom_iter->def.type != NETLBL_NLTYPE_ADDRSELECT)
+		return &dom_iter->def;
+	addr_iter = netlbl_af4list_search(addr, &dom_iter->def.addrsel->list4);
 	if (addr_iter == NULL)
 		return NULL;
-
-	return netlbl_domhsh_addr4_entry(addr_iter);
+	return &(netlbl_domhsh_addr4_entry(addr_iter)->def);
 }
 
 #if IS_ENABLED(CONFIG_IPV6)
···
 * responsible for ensuring that rcu_read_[un]lock() is called.
 *
 */
-struct netlbl_domaddr6_map *netlbl_domhsh_getentry_af6(const char *domain,
+struct netlbl_dommap_def *netlbl_domhsh_getentry_af6(const char *domain,
 						     const struct in6_addr *addr)
 {
 	struct netlbl_dom_map *dom_iter;
···
 	dom_iter = netlbl_domhsh_search_def(domain);
 	if (dom_iter == NULL)
 		return NULL;
-	if (dom_iter->type != NETLBL_NLTYPE_ADDRSELECT)
-		return NULL;
 
-	addr_iter = netlbl_af6list_search(addr,
-					  &dom_iter->type_def.addrsel->list6);
+	if (dom_iter->def.type != NETLBL_NLTYPE_ADDRSELECT)
+		return &dom_iter->def;
+	addr_iter = netlbl_af6list_search(addr, &dom_iter->def.addrsel->list6);
 	if (addr_iter == NULL)
 		return NULL;
-
-	return netlbl_domhsh_addr6_entry(addr_iter);
+	return &(netlbl_domhsh_addr6_entry(addr_iter)->def);
 }
 #endif /* IPv6 */
 
+22 -24
net/netlabel/netlabel_domainhash.h
···
 #define NETLBL_DOMHSH_BITSIZE       7
 
 /* Domain mapping definition structures */
+struct netlbl_domaddr_map {
+	struct list_head list4;
+	struct list_head list6;
+};
+struct netlbl_dommap_def {
+	u32 type;
+	union {
+		struct netlbl_domaddr_map *addrsel;
+		struct cipso_v4_doi *cipso;
+	};
+};
 #define netlbl_domhsh_addr4_entry(iter) \
 	container_of(iter, struct netlbl_domaddr4_map, list)
 struct netlbl_domaddr4_map {
-	u32 type;
-	union {
-		struct cipso_v4_doi *cipsov4;
-	} type_def;
+	struct netlbl_dommap_def def;
 
 	struct netlbl_af4list list;
 };
 #define netlbl_domhsh_addr6_entry(iter) \
 	container_of(iter, struct netlbl_domaddr6_map, list)
 struct netlbl_domaddr6_map {
-	u32 type;
-
-	/* NOTE: no 'type_def' union needed at present since we don't currently
-	 * support any IPv6 labeling protocols */
+	struct netlbl_dommap_def def;
 
 	struct netlbl_af6list list;
 };
-struct netlbl_domaddr_map {
-	struct list_head list4;
-	struct list_head list6;
-};
+
 struct netlbl_dom_map {
 	char *domain;
-	u32 type;
-	union {
-		struct cipso_v4_doi *cipsov4;
-		struct netlbl_domaddr_map *addrsel;
-	} type_def;
+	struct netlbl_dommap_def def;
 
 	u32 valid;
 	struct list_head list;
···
 int netlbl_domhsh_remove(const char *domain, struct netlbl_audit *audit_info);
 int netlbl_domhsh_remove_default(struct netlbl_audit *audit_info);
 struct netlbl_dom_map *netlbl_domhsh_getentry(const char *domain);
-struct netlbl_domaddr4_map *netlbl_domhsh_getentry_af4(const char *domain,
-						       __be32 addr);
+struct netlbl_dommap_def *netlbl_domhsh_getentry_af4(const char *domain,
+						     __be32 addr);
+#if IS_ENABLED(CONFIG_IPV6)
+struct netlbl_dommap_def *netlbl_domhsh_getentry_af6(const char *domain,
+						   const struct in6_addr *addr);
+#endif /* IPv6 */
+
 int netlbl_domhsh_walk(u32 *skip_bkt,
 		       u32 *skip_chain,
 		       int (*callback) (struct netlbl_dom_map *entry, void *arg),
 		       void *cb_arg);
-
-#if IS_ENABLED(CONFIG_IPV6)
-struct netlbl_domaddr6_map *netlbl_domhsh_getentry_af6(const char *domain,
-						       const struct in6_addr *addr);
-#endif /* IPv6 */
 
 #endif
+35 -53
net/netlabel/netlabel_kapi.c
···
 	}
 
 	if (addr == NULL && mask == NULL)
-		entry->type = NETLBL_NLTYPE_UNLABELED;
+		entry->def.type = NETLBL_NLTYPE_UNLABELED;
 	else if (addr != NULL && mask != NULL) {
 		addrmap = kzalloc(sizeof(*addrmap), GFP_ATOMIC);
 		if (addrmap == NULL)
···
 		map4 = kzalloc(sizeof(*map4), GFP_ATOMIC);
 		if (map4 == NULL)
 			goto cfg_unlbl_map_add_failure;
-		map4->type = NETLBL_NLTYPE_UNLABELED;
+		map4->def.type = NETLBL_NLTYPE_UNLABELED;
 		map4->list.addr = addr4->s_addr & mask4->s_addr;
 		map4->list.mask = mask4->s_addr;
 		map4->list.valid = 1;
···
 		map6 = kzalloc(sizeof(*map6), GFP_ATOMIC);
 		if (map6 == NULL)
 			goto cfg_unlbl_map_add_failure;
-		map6->type = NETLBL_NLTYPE_UNLABELED;
+		map6->def.type = NETLBL_NLTYPE_UNLABELED;
 		map6->list.addr = *addr6;
 		map6->list.addr.s6_addr32[0] &= mask6->s6_addr32[0];
 		map6->list.addr.s6_addr32[1] &= mask6->s6_addr32[1];
···
 			break;
 		}
 
-		entry->type_def.addrsel = addrmap;
-		entry->type = NETLBL_NLTYPE_ADDRSELECT;
+		entry->def.addrsel = addrmap;
+		entry->def.type = NETLBL_NLTYPE_ADDRSELECT;
 	} else {
 		ret_val = -EINVAL;
 		goto cfg_unlbl_map_add_failure;
···
 	}
 
 	if (addr == NULL && mask == NULL) {
-		entry->type_def.cipsov4 = doi_def;
-		entry->type = NETLBL_NLTYPE_CIPSOV4;
+		entry->def.cipso = doi_def;
+		entry->def.type = NETLBL_NLTYPE_CIPSOV4;
 	} else if (addr != NULL && mask != NULL) {
 		addrmap = kzalloc(sizeof(*addrmap), GFP_ATOMIC);
 		if (addrmap == NULL)
···
 		addrinfo = kzalloc(sizeof(*addrinfo), GFP_ATOMIC);
 		if (addrinfo == NULL)
 			goto out_addrinfo;
-		addrinfo->type_def.cipsov4 = doi_def;
-		addrinfo->type = NETLBL_NLTYPE_CIPSOV4;
+		addrinfo->def.cipso = doi_def;
+		addrinfo->def.type = NETLBL_NLTYPE_CIPSOV4;
 		addrinfo->list.addr = addr->s_addr & mask->s_addr;
 		addrinfo->list.mask = mask->s_addr;
 		addrinfo->list.valid = 1;
···
 		if (ret_val != 0)
 			goto cfg_cipsov4_map_add_failure;
 
-		entry->type_def.addrsel = addrmap;
-		entry->type = NETLBL_NLTYPE_ADDRSELECT;
+		entry->def.addrsel = addrmap;
+		entry->def.type = NETLBL_NLTYPE_ADDRSELECT;
 	} else {
 		ret_val = -EINVAL;
 		goto out_addrmap;
···
 	}
 	switch (family) {
 	case AF_INET:
-		switch (dom_entry->type) {
+		switch (dom_entry->def.type) {
 		case NETLBL_NLTYPE_ADDRSELECT:
 			ret_val = -EDESTADDRREQ;
 			break;
 		case NETLBL_NLTYPE_CIPSOV4:
 			ret_val = cipso_v4_sock_setattr(sk,
-						dom_entry->type_def.cipsov4,
-						secattr);
+							dom_entry->def.cipso,
+							secattr);
 			break;
 		case NETLBL_NLTYPE_UNLABELED:
 			ret_val = 0;
···
 {
 	int ret_val;
 	struct sockaddr_in *addr4;
-	struct netlbl_domaddr4_map *af4_entry;
+	struct netlbl_dommap_def *entry;
 
 	rcu_read_lock();
 	switch (addr->sa_family) {
 	case AF_INET:
 		addr4 = (struct sockaddr_in *)addr;
-		af4_entry = netlbl_domhsh_getentry_af4(secattr->domain,
-						       addr4->sin_addr.s_addr);
-		if (af4_entry == NULL) {
+		entry = netlbl_domhsh_getentry_af4(secattr->domain,
+						   addr4->sin_addr.s_addr);
+		if (entry == NULL) {
 			ret_val = -ENOENT;
 			goto conn_setattr_return;
 		}
-		switch (af4_entry->type) {
+		switch (entry->type) {
 		case NETLBL_NLTYPE_CIPSOV4:
 			ret_val = cipso_v4_sock_setattr(sk,
-						af4_entry->type_def.cipsov4,
-						secattr);
+							entry->cipso, secattr);
 			break;
 		case NETLBL_NLTYPE_UNLABELED:
 			/* just delete the protocols we support for right now
···
 			const struct netlbl_lsm_secattr *secattr)
 {
 	int ret_val;
-	struct netlbl_dom_map *dom_entry;
-	struct netlbl_domaddr4_map *af4_entry;
-	u32 proto_type;
-	struct cipso_v4_doi *proto_cv4;
+	struct netlbl_dommap_def *entry;
 
 	rcu_read_lock();
-	dom_entry = netlbl_domhsh_getentry(secattr->domain);
-	if (dom_entry == NULL) {
-		ret_val = -ENOENT;
-		goto req_setattr_return;
-	}
 	switch (req->rsk_ops->family) {
 	case AF_INET:
-		if (dom_entry->type == NETLBL_NLTYPE_ADDRSELECT) {
-			struct inet_request_sock *req_inet = inet_rsk(req);
-			af4_entry = netlbl_domhsh_getentry_af4(secattr->domain,
-							      req_inet->rmt_addr);
-			if (af4_entry == NULL) {
-				ret_val = -ENOENT;
-				goto req_setattr_return;
-			}
-			proto_type = af4_entry->type;
-			proto_cv4 = af4_entry->type_def.cipsov4;
-		} else {
-			proto_type = dom_entry->type;
-			proto_cv4 = dom_entry->type_def.cipsov4;
+		entry = netlbl_domhsh_getentry_af4(secattr->domain,
+						   inet_rsk(req)->rmt_addr);
+		if (entry == NULL) {
+			ret_val = -ENOENT;
+			goto req_setattr_return;
 		}
-		switch (proto_type) {
+		switch (entry->type) {
 		case NETLBL_NLTYPE_CIPSOV4:
-			ret_val = cipso_v4_req_setattr(req, proto_cv4, secattr);
+			ret_val = cipso_v4_req_setattr(req,
+						       entry->cipso, secattr);
 			break;
 		case NETLBL_NLTYPE_UNLABELED:
 			/* just delete the protocols we support for right now
···
 {
 	int ret_val;
 	struct iphdr *hdr4;
-	struct netlbl_domaddr4_map *af4_entry;
+	struct netlbl_dommap_def *entry;
 
 	rcu_read_lock();
 	switch (family) {
 	case AF_INET:
 		hdr4 = ip_hdr(skb);
-		af4_entry = netlbl_domhsh_getentry_af4(secattr->domain,
-						       hdr4->daddr);
-		if (af4_entry == NULL) {
+		entry = netlbl_domhsh_getentry_af4(secattr->domain,hdr4->daddr);
+		if (entry == NULL) {
 			ret_val = -ENOENT;
 			goto skbuff_setattr_return;
 		}
-		switch (af4_entry->type) {
+		switch (entry->type) {
 		case NETLBL_NLTYPE_CIPSOV4:
-			ret_val = cipso_v4_skbuff_setattr(skb,
-					af4_entry->type_def.cipsov4,
-					secattr);
+			ret_val = cipso_v4_skbuff_setattr(skb, entry->cipso,
+							  secattr);
 			break;
 		case NETLBL_NLTYPE_UNLABELED:
 			/* just delete the protocols we support for right now
+21 -23
net/netlabel/netlabel_mgmt.c
···
 		ret_val = -ENOMEM;
 		goto add_failure;
 	}
-	entry->type = nla_get_u32(info->attrs[NLBL_MGMT_A_PROTOCOL]);
+	entry->def.type = nla_get_u32(info->attrs[NLBL_MGMT_A_PROTOCOL]);
 	if (info->attrs[NLBL_MGMT_A_DOMAIN]) {
 		size_t tmp_size = nla_len(info->attrs[NLBL_MGMT_A_DOMAIN]);
 		entry->domain = kmalloc(tmp_size, GFP_KERNEL);
···
 			   info->attrs[NLBL_MGMT_A_DOMAIN], tmp_size);
 	}
 
-	/* NOTE: internally we allow/use a entry->type value of
+	/* NOTE: internally we allow/use a entry->def.type value of
 	 * NETLBL_NLTYPE_ADDRSELECT but we don't currently allow users
 	 * to pass that as a protocol value because we need to know the
 	 * "real" protocol */
 
-	switch (entry->type) {
+	switch (entry->def.type) {
 	case NETLBL_NLTYPE_UNLABELED:
 		break;
 	case NETLBL_NLTYPE_CIPSOV4:
···
 		cipsov4 = cipso_v4_doi_getdef(tmp_val);
 		if (cipsov4 == NULL)
 			goto add_failure;
-		entry->type_def.cipsov4 = cipsov4;
+		entry->def.cipso = cipsov4;
 		break;
 	default:
 		goto add_failure;
···
 		map->list.addr = addr->s_addr & mask->s_addr;
 		map->list.mask = mask->s_addr;
 		map->list.valid = 1;
-		map->type = entry->type;
+		map->def.type = entry->def.type;
 		if (cipsov4)
-			map->type_def.cipsov4 = cipsov4;
+			map->def.cipso = cipsov4;
 
 		ret_val = netlbl_af4list_add(&map->list, &addrmap->list4);
 		if (ret_val != 0) {
···
 			goto add_failure;
 		}
 
-		entry->type = NETLBL_NLTYPE_ADDRSELECT;
-		entry->type_def.addrsel = addrmap;
+		entry->def.type = NETLBL_NLTYPE_ADDRSELECT;
+		entry->def.addrsel = addrmap;
 #if IS_ENABLED(CONFIG_IPV6)
 	} else if (info->attrs[NLBL_MGMT_A_IPV6ADDR]) {
 		struct in6_addr *addr;
···
 		map->list.addr.s6_addr32[3] &= mask->s6_addr32[3];
 		map->list.mask = *mask;
 		map->list.valid = 1;
-		map->type = entry->type;
+		map->def.type = entry->def.type;
 
 		ret_val = netlbl_af6list_add(&map->list, &addrmap->list6);
 		if (ret_val != 0) {
···
 			goto add_failure;
 		}
 
-		entry->type = NETLBL_NLTYPE_ADDRSELECT;
-		entry->type_def.addrsel = addrmap;
+		entry->def.type = NETLBL_NLTYPE_ADDRSELECT;
+		entry->def.addrsel = addrmap;
 #endif /* IPv6 */
 	}
 
···
 		return ret_val;
 	}
 
-	switch (entry->type) {
+	switch (entry->def.type) {
 	case NETLBL_NLTYPE_ADDRSELECT:
 		nla_a = nla_nest_start(skb, NLBL_MGMT_A_SELECTORLIST);
 		if (nla_a == NULL)
 			return -ENOMEM;
 
-		netlbl_af4list_foreach_rcu(iter4,
-					   &entry->type_def.addrsel->list4) {
+		netlbl_af4list_foreach_rcu(iter4, &entry->def.addrsel->list4) {
 			struct netlbl_domaddr4_map *map4;
 			struct in_addr addr_struct;
···
 				return ret_val;
 			map4 = netlbl_domhsh_addr4_entry(iter4);
 			ret_val = nla_put_u32(skb, NLBL_MGMT_A_PROTOCOL,
-					      map4->type);
+					      map4->def.type);
 			if (ret_val != 0)
 				return ret_val;
-			switch (map4->type) {
+			switch (map4->def.type) {
 			case NETLBL_NLTYPE_CIPSOV4:
 				ret_val = nla_put_u32(skb, NLBL_MGMT_A_CV4DOI,
-						  map4->type_def.cipsov4->doi);
+						      map4->def.cipso->doi);
 				if (ret_val != 0)
 					return ret_val;
 				break;
···
 			nla_nest_end(skb, nla_b);
 		}
 #if IS_ENABLED(CONFIG_IPV6)
-		netlbl_af6list_foreach_rcu(iter6,
-					   &entry->type_def.addrsel->list6) {
+		netlbl_af6list_foreach_rcu(iter6, &entry->def.addrsel->list6) {
 			struct netlbl_domaddr6_map *map6;
 
 			nla_b = nla_nest_start(skb, NLBL_MGMT_A_ADDRSELECTOR);
···
 				return ret_val;
 			map6 = netlbl_domhsh_addr6_entry(iter6);
 			ret_val = nla_put_u32(skb, NLBL_MGMT_A_PROTOCOL,
-					      map6->type);
+					      map6->def.type);
 			if (ret_val != 0)
 				return ret_val;
···
 		nla_nest_end(skb, nla_a);
 		break;
 	case NETLBL_NLTYPE_UNLABELED:
-		ret_val = nla_put_u32(skb, NLBL_MGMT_A_PROTOCOL, entry->type);
+		ret_val = nla_put_u32(skb,NLBL_MGMT_A_PROTOCOL,entry->def.type);
 		break;
 	case NETLBL_NLTYPE_CIPSOV4:
-		ret_val = nla_put_u32(skb, NLBL_MGMT_A_PROTOCOL, entry->type);
+		ret_val = nla_put_u32(skb,NLBL_MGMT_A_PROTOCOL,entry->def.type);
 		if (ret_val != 0)
 			return ret_val;
 		ret_val = nla_put_u32(skb, NLBL_MGMT_A_CV4DOI,
-				      entry->type_def.cipsov4->doi);
+				      entry->def.cipso->doi);
 		break;
 	}
 
+1 -1
net/netlabel/netlabel_unlabeled.c
···
 	entry = kzalloc(sizeof(*entry), GFP_KERNEL);
 	if (entry == NULL)
 		return -ENOMEM;
-	entry->type = NETLBL_NLTYPE_UNLABELED;
+	entry->def.type = NETLBL_NLTYPE_UNLABELED;
 	ret_val = netlbl_domhsh_add_default(entry, &audit_info);
 	if (ret_val != 0)
 		return ret_val;
+10 -10
net/nfc/core.c
···
 /* NFC device ID bitmap */
 static DEFINE_IDA(nfc_index_ida);
 
-int nfc_fw_upload(struct nfc_dev *dev, const char *firmware_name)
+int nfc_fw_download(struct nfc_dev *dev, const char *firmware_name)
 {
 	int rc = 0;
···
 		goto error;
 	}
 
-	if (!dev->ops->fw_upload) {
+	if (!dev->ops->fw_download) {
 		rc = -EOPNOTSUPP;
 		goto error;
 	}
 
-	dev->fw_upload_in_progress = true;
-	rc = dev->ops->fw_upload(dev, firmware_name);
+	dev->fw_download_in_progress = true;
+	rc = dev->ops->fw_download(dev, firmware_name);
 	if (rc)
-		dev->fw_upload_in_progress = false;
+		dev->fw_download_in_progress = false;
 
 error:
 	device_unlock(&dev->dev);
 	return rc;
 }
 
-int nfc_fw_upload_done(struct nfc_dev *dev, const char *firmware_name)
+int nfc_fw_download_done(struct nfc_dev *dev, const char *firmware_name)
 {
-	dev->fw_upload_in_progress = false;
+	dev->fw_download_in_progress = false;
 
-	return nfc_genl_fw_upload_done(dev, firmware_name);
+	return nfc_genl_fw_download_done(dev, firmware_name);
 }
-EXPORT_SYMBOL(nfc_fw_upload_done);
+EXPORT_SYMBOL(nfc_fw_download_done);
 
 /**
  * nfc_dev_up - turn on the NFC device
···
 		goto error;
 	}
 
-	if (dev->fw_upload_in_progress) {
+	if (dev->fw_download_in_progress) {
 		rc = -EBUSY;
 		goto error;
 	}
+4 -4
net/nfc/hci/core.c
···
 	}
 }
 
-static int hci_fw_upload(struct nfc_dev *nfc_dev, const char *firmware_name)
+static int hci_fw_download(struct nfc_dev *nfc_dev, const char *firmware_name)
 {
 	struct nfc_hci_dev *hdev = nfc_get_drvdata(nfc_dev);
 
-	if (!hdev->ops->fw_upload)
+	if (!hdev->ops->fw_download)
 		return -ENOTSUPP;
 
-	return hdev->ops->fw_upload(hdev, firmware_name);
+	return hdev->ops->fw_download(hdev, firmware_name);
 }
 
 static struct nfc_ops hci_nfc_ops = {
···
 	.im_transceive = hci_transceive,
 	.tm_send = hci_tm_send,
 	.check_presence = hci_check_presence,
-	.fw_upload = hci_fw_upload,
+	.fw_download = hci_fw_download,
 	.discover_se = hci_discover_se,
 	.enable_se = hci_enable_se,
 	.disable_se = hci_disable_se,
+1
net/nfc/nci/Kconfig
···
 
 config NFC_NCI_SPI
 	depends on NFC_NCI && SPI
+	select CRC_CCITT
 	bool "NCI over SPI protocol support"
 	default n
 	help
+6 -6
net/nfc/netlink.c
···
 	return rc;
 }
 
-static int nfc_genl_fw_upload(struct sk_buff *skb, struct genl_info *info)
+static int nfc_genl_fw_download(struct sk_buff *skb, struct genl_info *info)
 {
 	struct nfc_dev *dev;
 	int rc;
···
 	nla_strlcpy(firmware_name, info->attrs[NFC_ATTR_FIRMWARE_NAME],
 		    sizeof(firmware_name));
 
-	rc = nfc_fw_upload(dev, firmware_name);
+	rc = nfc_fw_download(dev, firmware_name);
 
 	nfc_put_device(dev);
 	return rc;
 }
 
-int nfc_genl_fw_upload_done(struct nfc_dev *dev, const char *firmware_name)
+int nfc_genl_fw_download_done(struct nfc_dev *dev, const char *firmware_name)
 {
 	struct sk_buff *msg;
 	void *hdr;
···
 		return -ENOMEM;
 
 	hdr = genlmsg_put(msg, 0, 0, &nfc_genl_family, 0,
-			  NFC_CMD_FW_UPLOAD);
+			  NFC_CMD_FW_DOWNLOAD);
 	if (!hdr)
 		goto free_msg;
···
 		.policy = nfc_genl_policy,
 	},
 	{
-		.cmd = NFC_CMD_FW_UPLOAD,
-		.doit = nfc_genl_fw_upload,
+		.cmd = NFC_CMD_FW_DOWNLOAD,
+		.doit = nfc_genl_fw_download,
 		.policy = nfc_genl_policy,
 	},
 	{
+3 -3
net/nfc/nfc.h
···
 	class_dev_iter_exit(iter);
 }
 
-int nfc_fw_upload(struct nfc_dev *dev, const char *firmware_name);
-int nfc_genl_fw_upload_done(struct nfc_dev *dev, const char *firmware_name);
+int nfc_fw_download(struct nfc_dev *dev, const char *firmware_name);
+int nfc_genl_fw_download_done(struct nfc_dev *dev, const char *firmware_name);
 
-int nfc_fw_upload_done(struct nfc_dev *dev, const char *firmware_name);
+int nfc_fw_download_done(struct nfc_dev *dev, const char *firmware_name);
 
 int nfc_dev_up(struct nfc_dev *dev);
+1
net/sched/sch_atm.c
···
 		struct sockaddr_atmpvc pvc;
 		int state;
 
+		memset(&pvc, 0, sizeof(pvc));
 		pvc.sap_family = AF_ATMPVC;
 		pvc.sap_addr.itf = flow->vcc->dev ? flow->vcc->dev->number : -1;
 		pvc.sap_addr.vpi = flow->vcc->vpi;
+1 -1
net/sched/sch_htb.c
···
 	struct psched_ratecfg ceil;
 	s64 buffer, cbuffer;	/* token bucket depth/rate */
 	s64 mbuffer;		/* max wait time */
-	int prio;		/* these two are used only by leaves... */
+	u32 prio;		/* these two are used only by leaves... */
 	int quantum;		/* but stored for parent-to-leaf return */
 
 	struct tcf_proto *filter_list;	/* class attached filters */
+1 -1
net/socket.c
···
 #include <linux/atalk.h>
 #include <net/busy_poll.h>
 
-#ifdef CONFIG_NET_LL_RX_POLL
+#ifdef CONFIG_NET_RX_BUSY_POLL
 unsigned int sysctl_net_busy_read __read_mostly;
 unsigned int sysctl_net_busy_poll __read_mostly;
 #endif
+12 -3
net/tipc/server.c
···
 		return PTR_ERR(con);
 
 	sock = tipc_create_listen_sock(con);
-	if (!sock)
+	if (!sock) {
+		idr_remove(&s->conn_idr, con->conid);
+		s->idr_in_use--;
+		kfree(con);
 		return -EINVAL;
+	}
 
 	tipc_register_callbacks(sock, con);
 	return 0;
···
 		kmem_cache_destroy(s->rcvbuf_cache);
 		return ret;
 	}
+	ret = tipc_open_listening_sock(s);
+	if (ret < 0) {
+		tipc_work_stop(s);
+		kmem_cache_destroy(s->rcvbuf_cache);
+		return ret;
+	}
 	s->enabled = 1;
-
-	return tipc_open_listening_sock(s);
+	return ret;
 }
 
 void tipc_server_stop(struct tipc_server *s)
+4 -1
net/wireless/reg.c
···
 
 void wiphy_regulatory_register(struct wiphy *wiphy)
 {
+	struct regulatory_request *lr;
+
 	if (!reg_dev_ignore_cell_hint(wiphy))
 		reg_num_devs_support_basehint++;
 
-	wiphy_update_regulatory(wiphy, NL80211_REGDOM_SET_BY_CORE);
+	lr = get_last_request();
+	wiphy_update_regulatory(wiphy, lr->initiator);
 }
 
 void wiphy_regulatory_deregister(struct wiphy *wiphy)