Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Pull networking fixes from David Miller:

1) Don't ignore user-initiated wireless regulatory settings on cards
   with custom regulatory domains, from Arik Nemtsov.

2) Fix length check of Bluetooth information responses, from Jaganath
   Kanakkassery.

3) Fix misuse of PTR_ERR in btusb, from Adam Lee.
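
   PTR_ERR() already returns the negative errno encoded in the error
   pointer, so negating it hands callers a positive value where a
   negative return means failure (the btusb hunks below are the
   one-character fix). A minimal sketch of the correct idiom, with
   foo_get()/foo_alloc() as hypothetical names:

       #include <linux/err.h>
       #include <linux/skbuff.h>

       /* foo_alloc() stands in for any helper that returns a valid
        * pointer on success or ERR_PTR(-Exxx) on failure */
       static int foo_get(struct sk_buff *(*foo_alloc)(void))
       {
               struct sk_buff *skb = foo_alloc();

               if (IS_ERR(skb))
                       return PTR_ERR(skb);    /* already negative, e.g. -EIO */

               kfree_skb(skb);
               return 0;
       }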

4) Handle rfkill properly while iwlwifi devices are offline, from
Emmanuel Grumbach.

5) Fix r815x devices DMA'ing to stack buffers, from Hayes Wang.
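
   USB control transfers are DMA-mapped, and DMA to on-stack storage is
   not allowed; the fix is to bounce through a heap buffer (the ath3k
   hunks below apply the same pattern). A minimal sketch under that
   assumption; the request code 0x05 and foo_read_state() are made up:

       #include <linux/slab.h>
       #include <linux/usb.h>

       /* read one byte via a vendor request; buf must be kmalloc'd
        * because usb_control_msg() DMA-maps the data buffer */
       static int foo_read_state(struct usb_device *udev, u8 *state)
       {
               u8 *buf;
               int ret;

               buf = kmalloc(1, GFP_KERNEL);
               if (!buf)
                       return -ENOMEM;

               ret = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0), 0x05,
                                     USB_TYPE_VENDOR | USB_DIR_IN, 0, 0,
                                     buf, 1, USB_CTRL_SET_TIMEOUT);
               if (ret >= 0)
                       *state = *buf;

               kfree(buf);
               return ret;
       }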

6) Kernel info leak in ATM packet scheduler, from Dan Carpenter.
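
   Info leaks of this kind usually come from compiler padding or
   unwritten fields in a struct that is copied out to userspace;
   zeroing the whole struct before filling it is the standard cure. An
   illustrative sketch (foo_qopt is not the actual qdisc layout):

       #include <linux/string.h>
       #include <linux/types.h>

       struct foo_qopt {
               u32 packets;
               u8  flags;
               /* 3 bytes of padding here hold stale kernel memory
                * unless the struct is zeroed before copy-out */
       };

       static void foo_fill_qopt(struct foo_qopt *opt)
       {
               memset(opt, 0, sizeof(*opt));   /* clears padding too */
               opt->packets = 42;
               opt->flags = 1;
       }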

7) 8139cp doesn't check for DMA mapping errors, from Neil Horman.
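
   dma_map_single() can fail (IOMMU or bounce-buffer exhaustion), and
   the result must be checked with dma_mapping_error() before the
   address reaches hardware. A minimal sketch of this class of check,
   with foo_map_tx() as a hypothetical helper:

       #include <linux/dma-mapping.h>
       #include <linux/errno.h>
       #include <linux/skbuff.h>

       static int foo_map_tx(struct device *dev, struct sk_buff *skb,
                             dma_addr_t *mapping)
       {
               *mapping = dma_map_single(dev, skb->data, skb->len,
                                         DMA_TO_DEVICE);
               if (dma_mapping_error(dev, *mapping)) {
                       /* never hand an unchecked address to the NIC */
                       dev_kfree_skb_any(skb);
                       return -ENOMEM;
               }
               return 0;
       }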

8) Fix bridge multicast code to not snoop when no querier exists,
   otherwise multicast traffic is lost. From Linus Lüssing.

9) Avoid soft lockups in fib6_run_gc(), from Michal Kubecek.

10) Fix races in automatic address assignment on ipv6, which can result
    in incorrect lifetime assignments. From Jiri Benc.

11) Cure build bustage when CONFIG_NET_LL_RX_POLL is not set, and
    rename it to CONFIG_NET_RX_BUSY_POLL to eliminate the last
    reference to the original naming of this feature. From Cong Wang.

12) Fix crash in TIPC when server socket creation fails, from Ying Xue.

13) macvlan_changelink() silently succeeds when it shouldn't, from
    Michael S. Tsirkin.

14) HTB packet scheduler can crash due to sign extension, fix from
Stephen Hemminger.
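
    The failure mode with this class of bug: a large unsigned value
    from userspace lands in a signed int, turns negative, defeats a
    ">= limit" bounds check, and sign-extends when widened.
    Illustrative userspace C, not the actual HTB code:

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
                uint32_t prio = 0x80000000u;    /* large userspace value */
                int sprio = prio;               /* wraps negative */
                int64_t widened = sprio;        /* 0xffffffff80000000 */

                /* "sprio >= LIMIT" is false for negative sprio, so the
                 * bogus value survives validation */
                printf("%d 0x%llx\n", sprio, (unsigned long long)widened);
                return 0;
        }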

15) With the cable unplugged, r8169 prints out a message every 10
    seconds; make it netif_dbg() instead of netif_warn(). From Peter
    Wu.
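
    netif_warn() and netif_dbg() take identical arguments, so the
    demotion is a one-word change; netif_dbg() stays silent unless
    dynamic debug (or a DEBUG build) switches it on. A sketch, with
    foo_priv standing in for the driver's private struct:

        #include <linux/netdevice.h>

        /* the netif_* helpers gate on a msg_enable field in the
         * driver's private struct, as r8169's does */
        struct foo_priv {
                int msg_enable;
        };

        static void foo_log_link(struct foo_priv *tp, struct net_device *dev)
        {
                /* was netif_warn(): fired every 10s while unplugged */
                netif_dbg(tp, link, dev, "PHY reset until link up\n");
        }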

16) Fix memory leak in rtm_to_ifaddr(), from Daniel Borkmann.

17) sis900 gets spurious TX queue timeouts due to mismanagement of link
carrier state, from Denis Kirjanov.

18) Validate the somaxconn sysctl to make sure it fits in a u16. From
    Roman Gushchin.
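
    The standard way to enforce such a bound is proc_dointvec_minmax
    with extra1/extra2 pointing at the limits, which rejects
    out-of-range writes instead of letting the value silently truncate
    in a 16-bit backlog field. A sketch with illustrative names:

        #include <linux/sysctl.h>

        static int zero;
        static int ushort_max = 65535;
        static int sysctl_somaxconn = 128;      /* stand-in for the real knob */

        static struct ctl_table foo_net_table[] = {
                {
                        .procname       = "somaxconn",
                        .data           = &sysctl_somaxconn,
                        .maxlen         = sizeof(int),
                        .mode           = 0644,
                        .proc_handler   = proc_dointvec_minmax,
                        .extra1         = &zero,        /* floor */
                        .extra2         = &ushort_max,  /* fits in u16 */
                },
                { }
        };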

19) Fix MAC address filtering on qlcnic, from Shahed Shaikh.

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (68 commits)
qlcnic: Fix for flash update failure on 83xx adapter
qlcnic: Fix link speed and duplex display for 83xx adapter
qlcnic: Fix link speed display for 82xx adapter
qlcnic: Fix external loopback test.
qlcnic: Removed adapter series name from warning messages.
qlcnic: Free up memory in error path.
qlcnic: Fix ingress MAC learning
qlcnic: Fix MAC address filter issue on 82xx adapter
net: ethernet: davinci_emac: drop IRQF_DISABLED
netlabel: use domain based selectors when address based selectors are not available
net: check net.core.somaxconn sysctl values
sis900: Fix the tx queue timeout issue
net: rtm_to_ifaddr: free ifa if ifa_cacheinfo processing fails
r8169: remove "PHY reset until link up" log spam
net: ethernet: cpsw: drop IRQF_DISABLED
htb: fix sign extension bug
macvlan: handle set_promiscuity failures
macvlan: better mode validation
tipc: fix oops when creating server socket fails
net: rename CONFIG_NET_LL_RX_POLL to CONFIG_NET_RX_BUSY_POLL
...

+995 -678
+2 -2
Documentation/sysctl/net.txt
··· 52 53 busy_read 54 ---------------- 55 - Low latency busy poll timeout for socket reads. (needs CONFIG_NET_LL_RX_POLL) 56 Approximate time in us to busy loop waiting for packets on the device queue. 57 This sets the default value of the SO_BUSY_POLL socket option. 58 Can be set or overridden per socket by setting socket option SO_BUSY_POLL, ··· 63 64 busy_poll 65 ---------------- 66 - Low latency busy poll timeout for poll and select. (needs CONFIG_NET_LL_RX_POLL) 67 Approximate time in us to busy loop waiting for events. 68 Recommended value depends on the number of sockets you poll on. 69 For several sockets 50, for several hundreds 100.
··· 52 53 busy_read 54 ---------------- 55 + Low latency busy poll timeout for socket reads. (needs CONFIG_NET_RX_BUSY_POLL) 56 Approximate time in us to busy loop waiting for packets on the device queue. 57 This sets the default value of the SO_BUSY_POLL socket option. 58 Can be set or overridden per socket by setting socket option SO_BUSY_POLL, ··· 63 64 busy_poll 65 ---------------- 66 + Low latency busy poll timeout for poll and select. (needs CONFIG_NET_RX_BUSY_POLL) 67 Approximate time in us to busy loop waiting for events. 68 Recommended value depends on the number of sockets you poll on. 69 For several sockets 50, for several hundreds 100.
+10 -2
MAINTAINERS
··· 1406 M: Kalle Valo <kvalo@qca.qualcomm.com> 1407 L: linux-wireless@vger.kernel.org 1408 W: http://wireless.kernel.org/en/users/Drivers/ath6kl 1409 - T: git git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/ath6kl.git 1410 S: Supported 1411 F: drivers/net/wireless/ath/ath6kl/ 1412 ··· 6726 S: Maintained 6727 F: drivers/media/tuners/qt1010* 6728 6729 QUALCOMM HEXAGON ARCHITECTURE 6730 M: Richard Kuo <rkuo@codeaurora.org> 6731 L: linux-hexagon@vger.kernel.org ··· 8278 F: sound/soc/codecs/twl4030* 8279 8280 TI WILINK WIRELESS DRIVERS 8281 - M: Luciano Coelho <coelho@ti.com> 8282 L: linux-wireless@vger.kernel.org 8283 W: http://wireless.kernel.org/en/users/Drivers/wl12xx 8284 W: http://wireless.kernel.org/en/users/Drivers/wl1251
··· 1406 M: Kalle Valo <kvalo@qca.qualcomm.com> 1407 L: linux-wireless@vger.kernel.org 1408 W: http://wireless.kernel.org/en/users/Drivers/ath6kl 1409 + T: git git://github.com/kvalo/ath.git 1410 S: Supported 1411 F: drivers/net/wireless/ath/ath6kl/ 1412 ··· 6726 S: Maintained 6727 F: drivers/media/tuners/qt1010* 6728 6729 + QUALCOMM ATHEROS ATH10K WIRELESS DRIVER 6730 + M: Kalle Valo <kvalo@qca.qualcomm.com> 6731 + L: ath10k@lists.infradead.org 6732 + W: http://wireless.kernel.org/en/users/Drivers/ath10k 6733 + T: git git://github.com/kvalo/ath.git 6734 + S: Supported 6735 + F: drivers/net/wireless/ath/ath10k/ 6736 + 6737 QUALCOMM HEXAGON ARCHITECTURE 6738 M: Richard Kuo <rkuo@codeaurora.org> 6739 L: linux-hexagon@vger.kernel.org ··· 8270 F: sound/soc/codecs/twl4030* 8271 8272 TI WILINK WIRELESS DRIVERS 8273 + M: Luciano Coelho <luca@coelho.fi> 8274 L: linux-wireless@vger.kernel.org 8275 W: http://wireless.kernel.org/en/users/Drivers/wl12xx 8276 W: http://wireless.kernel.org/en/users/Drivers/wl1251
+37 -9
drivers/bluetooth/ath3k.c
··· 91 { USB_DEVICE(0x0489, 0xe04e) }, 92 { USB_DEVICE(0x0489, 0xe056) }, 93 { USB_DEVICE(0x0489, 0xe04d) }, 94 95 /* Atheros AR5BBU12 with sflash firmware */ 96 { USB_DEVICE(0x0489, 0xE02C) }, ··· 132 { USB_DEVICE(0x0489, 0xe04e), .driver_info = BTUSB_ATH3012 }, 133 { USB_DEVICE(0x0489, 0xe056), .driver_info = BTUSB_ATH3012 }, 134 { USB_DEVICE(0x0489, 0xe04d), .driver_info = BTUSB_ATH3012 }, 135 136 /* Atheros AR5BBU22 with sflash firmware */ 137 { USB_DEVICE(0x0489, 0xE03C), .driver_info = BTUSB_ATH3012 }, ··· 201 202 static int ath3k_get_state(struct usb_device *udev, unsigned char *state) 203 { 204 - int pipe = 0; 205 206 pipe = usb_rcvctrlpipe(udev, 0); 207 - return usb_control_msg(udev, pipe, ATH3K_GETSTATE, 208 - USB_TYPE_VENDOR | USB_DIR_IN, 0, 0, 209 - state, 0x01, USB_CTRL_SET_TIMEOUT); 210 } 211 212 static int ath3k_get_version(struct usb_device *udev, 213 struct ath3k_version *version) 214 { 215 - int pipe = 0; 216 217 pipe = usb_rcvctrlpipe(udev, 0); 218 - return usb_control_msg(udev, pipe, ATH3K_GETVERSION, 219 - USB_TYPE_VENDOR | USB_DIR_IN, 0, 0, version, 220 - sizeof(struct ath3k_version), 221 - USB_CTRL_SET_TIMEOUT); 222 } 223 224 static int ath3k_load_fwfile(struct usb_device *udev,
··· 91 { USB_DEVICE(0x0489, 0xe04e) }, 92 { USB_DEVICE(0x0489, 0xe056) }, 93 { USB_DEVICE(0x0489, 0xe04d) }, 94 + { USB_DEVICE(0x04c5, 0x1330) }, 95 + { USB_DEVICE(0x13d3, 0x3402) }, 96 + { USB_DEVICE(0x0cf3, 0x3121) }, 97 + { USB_DEVICE(0x0cf3, 0xe003) }, 98 99 /* Atheros AR5BBU12 with sflash firmware */ 100 { USB_DEVICE(0x0489, 0xE02C) }, ··· 128 { USB_DEVICE(0x0489, 0xe04e), .driver_info = BTUSB_ATH3012 }, 129 { USB_DEVICE(0x0489, 0xe056), .driver_info = BTUSB_ATH3012 }, 130 { USB_DEVICE(0x0489, 0xe04d), .driver_info = BTUSB_ATH3012 }, 131 + { USB_DEVICE(0x04c5, 0x1330), .driver_info = BTUSB_ATH3012 }, 132 + { USB_DEVICE(0x13d3, 0x3402), .driver_info = BTUSB_ATH3012 }, 133 + { USB_DEVICE(0x0cf3, 0x3121), .driver_info = BTUSB_ATH3012 }, 134 + { USB_DEVICE(0x0cf3, 0xe003), .driver_info = BTUSB_ATH3012 }, 135 136 /* Atheros AR5BBU22 with sflash firmware */ 137 { USB_DEVICE(0x0489, 0xE03C), .driver_info = BTUSB_ATH3012 }, ··· 193 194 static int ath3k_get_state(struct usb_device *udev, unsigned char *state) 195 { 196 + int ret, pipe = 0; 197 + char *buf; 198 + 199 + buf = kmalloc(sizeof(*buf), GFP_KERNEL); 200 + if (!buf) 201 + return -ENOMEM; 202 203 pipe = usb_rcvctrlpipe(udev, 0); 204 + ret = usb_control_msg(udev, pipe, ATH3K_GETSTATE, 205 + USB_TYPE_VENDOR | USB_DIR_IN, 0, 0, 206 + buf, sizeof(*buf), USB_CTRL_SET_TIMEOUT); 207 + 208 + *state = *buf; 209 + kfree(buf); 210 + 211 + return ret; 212 } 213 214 static int ath3k_get_version(struct usb_device *udev, 215 struct ath3k_version *version) 216 { 217 + int ret, pipe = 0; 218 + struct ath3k_version *buf; 219 + const int size = sizeof(*buf); 220 + 221 + buf = kmalloc(size, GFP_KERNEL); 222 + if (!buf) 223 + return -ENOMEM; 224 225 pipe = usb_rcvctrlpipe(udev, 0); 226 + ret = usb_control_msg(udev, pipe, ATH3K_GETVERSION, 227 + USB_TYPE_VENDOR | USB_DIR_IN, 0, 0, 228 + buf, size, USB_CTRL_SET_TIMEOUT); 229 + 230 + memcpy(version, buf, size); 231 + kfree(buf); 232 + 233 + return ret; 234 } 235 236 static int ath3k_load_fwfile(struct usb_device *udev,
+11 -7
drivers/bluetooth/btusb.c
··· 154 { USB_DEVICE(0x0489, 0xe04e), .driver_info = BTUSB_ATH3012 }, 155 { USB_DEVICE(0x0489, 0xe056), .driver_info = BTUSB_ATH3012 }, 156 { USB_DEVICE(0x0489, 0xe04d), .driver_info = BTUSB_ATH3012 }, 157 158 /* Atheros AR5BBU12 with sflash firmware */ 159 { USB_DEVICE(0x0489, 0xe02c), .driver_info = BTUSB_IGNORE }, ··· 1099 if (IS_ERR(skb)) { 1100 BT_ERR("%s sending Intel patch command (0x%4.4x) failed (%ld)", 1101 hdev->name, cmd->opcode, PTR_ERR(skb)); 1102 - return -PTR_ERR(skb); 1103 } 1104 1105 /* It ensures that the returned event matches the event data read from ··· 1151 if (IS_ERR(skb)) { 1152 BT_ERR("%s sending initial HCI reset command failed (%ld)", 1153 hdev->name, PTR_ERR(skb)); 1154 - return -PTR_ERR(skb); 1155 } 1156 kfree_skb(skb); 1157 ··· 1165 if (IS_ERR(skb)) { 1166 BT_ERR("%s reading Intel fw version command failed (%ld)", 1167 hdev->name, PTR_ERR(skb)); 1168 - return -PTR_ERR(skb); 1169 } 1170 1171 if (skb->len != sizeof(*ver)) { ··· 1223 BT_ERR("%s entering Intel manufacturer mode failed (%ld)", 1224 hdev->name, PTR_ERR(skb)); 1225 release_firmware(fw); 1226 - return -PTR_ERR(skb); 1227 } 1228 1229 if (skb->data[0]) { ··· 1280 if (IS_ERR(skb)) { 1281 BT_ERR("%s exiting Intel manufacturer mode failed (%ld)", 1282 hdev->name, PTR_ERR(skb)); 1283 - return -PTR_ERR(skb); 1284 } 1285 kfree_skb(skb); 1286 ··· 1296 if (IS_ERR(skb)) { 1297 BT_ERR("%s exiting Intel manufacturer mode failed (%ld)", 1298 hdev->name, PTR_ERR(skb)); 1299 - return -PTR_ERR(skb); 1300 } 1301 kfree_skb(skb); 1302 ··· 1314 if (IS_ERR(skb)) { 1315 BT_ERR("%s exiting Intel manufacturer mode failed (%ld)", 1316 hdev->name, PTR_ERR(skb)); 1317 - return -PTR_ERR(skb); 1318 } 1319 kfree_skb(skb); 1320
··· 154 { USB_DEVICE(0x0489, 0xe04e), .driver_info = BTUSB_ATH3012 }, 155 { USB_DEVICE(0x0489, 0xe056), .driver_info = BTUSB_ATH3012 }, 156 { USB_DEVICE(0x0489, 0xe04d), .driver_info = BTUSB_ATH3012 }, 157 + { USB_DEVICE(0x04c5, 0x1330), .driver_info = BTUSB_ATH3012 }, 158 + { USB_DEVICE(0x13d3, 0x3402), .driver_info = BTUSB_ATH3012 }, 159 + { USB_DEVICE(0x0cf3, 0x3121), .driver_info = BTUSB_ATH3012 }, 160 + { USB_DEVICE(0x0cf3, 0xe003), .driver_info = BTUSB_ATH3012 }, 161 162 /* Atheros AR5BBU12 with sflash firmware */ 163 { USB_DEVICE(0x0489, 0xe02c), .driver_info = BTUSB_IGNORE }, ··· 1095 if (IS_ERR(skb)) { 1096 BT_ERR("%s sending Intel patch command (0x%4.4x) failed (%ld)", 1097 hdev->name, cmd->opcode, PTR_ERR(skb)); 1098 + return PTR_ERR(skb); 1099 } 1100 1101 /* It ensures that the returned event matches the event data read from ··· 1147 if (IS_ERR(skb)) { 1148 BT_ERR("%s sending initial HCI reset command failed (%ld)", 1149 hdev->name, PTR_ERR(skb)); 1150 + return PTR_ERR(skb); 1151 } 1152 kfree_skb(skb); 1153 ··· 1161 if (IS_ERR(skb)) { 1162 BT_ERR("%s reading Intel fw version command failed (%ld)", 1163 hdev->name, PTR_ERR(skb)); 1164 + return PTR_ERR(skb); 1165 } 1166 1167 if (skb->len != sizeof(*ver)) { ··· 1219 BT_ERR("%s entering Intel manufacturer mode failed (%ld)", 1220 hdev->name, PTR_ERR(skb)); 1221 release_firmware(fw); 1222 + return PTR_ERR(skb); 1223 } 1224 1225 if (skb->data[0]) { ··· 1276 if (IS_ERR(skb)) { 1277 BT_ERR("%s exiting Intel manufacturer mode failed (%ld)", 1278 hdev->name, PTR_ERR(skb)); 1279 + return PTR_ERR(skb); 1280 } 1281 kfree_skb(skb); 1282 ··· 1292 if (IS_ERR(skb)) { 1293 BT_ERR("%s exiting Intel manufacturer mode failed (%ld)", 1294 hdev->name, PTR_ERR(skb)); 1295 + return PTR_ERR(skb); 1296 } 1297 kfree_skb(skb); 1298 ··· 1310 if (IS_ERR(skb)) { 1311 BT_ERR("%s exiting Intel manufacturer mode failed (%ld)", 1312 hdev->name, PTR_ERR(skb)); 1313 + return PTR_ERR(skb); 1314 } 1315 kfree_skb(skb); 1316
+4 -4
drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
··· 486 487 struct napi_struct napi; 488 489 - #ifdef CONFIG_NET_LL_RX_POLL 490 unsigned int state; 491 #define BNX2X_FP_STATE_IDLE 0 492 #define BNX2X_FP_STATE_NAPI (1 << 0) /* NAPI owns this FP */ ··· 498 #define BNX2X_FP_USER_PEND (BNX2X_FP_STATE_POLL | BNX2X_FP_STATE_POLL_YIELD) 499 /* protect state */ 500 spinlock_t lock; 501 - #endif /* CONFIG_NET_LL_RX_POLL */ 502 503 union host_hc_status_block status_blk; 504 /* chip independent shortcuts into sb structure */ ··· 572 #define bnx2x_fp_stats(bp, fp) (&((bp)->fp_stats[(fp)->index])) 573 #define bnx2x_fp_qstats(bp, fp) (&((bp)->fp_stats[(fp)->index].eth_q_stats)) 574 575 - #ifdef CONFIG_NET_LL_RX_POLL 576 static inline void bnx2x_fp_init_lock(struct bnx2x_fastpath *fp) 577 { 578 spin_lock_init(&fp->lock); ··· 680 { 681 return false; 682 } 683 - #endif /* CONFIG_NET_LL_RX_POLL */ 684 685 /* Use 2500 as a mini-jumbo MTU for FCoE */ 686 #define BNX2X_FCOE_MINI_JUMBO_MTU 2500
··· 486 487 struct napi_struct napi; 488 489 + #ifdef CONFIG_NET_RX_BUSY_POLL 490 unsigned int state; 491 #define BNX2X_FP_STATE_IDLE 0 492 #define BNX2X_FP_STATE_NAPI (1 << 0) /* NAPI owns this FP */ ··· 498 #define BNX2X_FP_USER_PEND (BNX2X_FP_STATE_POLL | BNX2X_FP_STATE_POLL_YIELD) 499 /* protect state */ 500 spinlock_t lock; 501 + #endif /* CONFIG_NET_RX_BUSY_POLL */ 502 503 union host_hc_status_block status_blk; 504 /* chip independent shortcuts into sb structure */ ··· 572 #define bnx2x_fp_stats(bp, fp) (&((bp)->fp_stats[(fp)->index])) 573 #define bnx2x_fp_qstats(bp, fp) (&((bp)->fp_stats[(fp)->index].eth_q_stats)) 574 575 + #ifdef CONFIG_NET_RX_BUSY_POLL 576 static inline void bnx2x_fp_init_lock(struct bnx2x_fastpath *fp) 577 { 578 spin_lock_init(&fp->lock); ··· 680 { 681 return false; 682 } 683 + #endif /* CONFIG_NET_RX_BUSY_POLL */ 684 685 /* Use 2500 as a mini-jumbo MTU for FCoE */ 686 #define BNX2X_FCOE_MINI_JUMBO_MTU 2500
+1 -1
drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
··· 3117 return work_done; 3118 } 3119 3120 - #ifdef CONFIG_NET_LL_RX_POLL 3121 /* must be called with local_bh_disable()d */ 3122 int bnx2x_low_latency_recv(struct napi_struct *napi) 3123 {
··· 3117 return work_done; 3118 } 3119 3120 + #ifdef CONFIG_NET_RX_BUSY_POLL 3121 /* must be called with local_bh_disable()d */ 3122 int bnx2x_low_latency_recv(struct napi_struct *napi) 3123 {
+1 -1
drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
··· 12026 .ndo_fcoe_get_wwn = bnx2x_fcoe_get_wwn, 12027 #endif 12028 12029 - #ifdef CONFIG_NET_LL_RX_POLL 12030 .ndo_busy_poll = bnx2x_low_latency_recv, 12031 #endif 12032 };
··· 12026 .ndo_fcoe_get_wwn = bnx2x_fcoe_get_wwn, 12027 #endif 12028 12029 + #ifdef CONFIG_NET_RX_BUSY_POLL 12030 .ndo_busy_poll = bnx2x_low_latency_recv, 12031 #endif 12032 };
+6 -6
drivers/net/ethernet/intel/ixgbe/ixgbe.h
··· 54 55 #include <net/busy_poll.h> 56 57 - #ifdef CONFIG_NET_LL_RX_POLL 58 #define LL_EXTENDED_STATS 59 #endif 60 /* common prefix used by pr_<> macros */ ··· 366 struct rcu_head rcu; /* to avoid race with update stats on free */ 367 char name[IFNAMSIZ + 9]; 368 369 - #ifdef CONFIG_NET_LL_RX_POLL 370 unsigned int state; 371 #define IXGBE_QV_STATE_IDLE 0 372 #define IXGBE_QV_STATE_NAPI 1 /* NAPI owns this QV */ ··· 377 #define IXGBE_QV_YIELD (IXGBE_QV_STATE_NAPI_YIELD | IXGBE_QV_STATE_POLL_YIELD) 378 #define IXGBE_QV_USER_PEND (IXGBE_QV_STATE_POLL | IXGBE_QV_STATE_POLL_YIELD) 379 spinlock_t lock; 380 - #endif /* CONFIG_NET_LL_RX_POLL */ 381 382 /* for dynamic allocation of rings associated with this q_vector */ 383 struct ixgbe_ring ring[0] ____cacheline_internodealigned_in_smp; 384 }; 385 - #ifdef CONFIG_NET_LL_RX_POLL 386 static inline void ixgbe_qv_init_lock(struct ixgbe_q_vector *q_vector) 387 { 388 ··· 462 WARN_ON(!(q_vector->state & IXGBE_QV_LOCKED)); 463 return q_vector->state & IXGBE_QV_USER_PEND; 464 } 465 - #else /* CONFIG_NET_LL_RX_POLL */ 466 static inline void ixgbe_qv_init_lock(struct ixgbe_q_vector *q_vector) 467 { 468 } ··· 491 { 492 return false; 493 } 494 - #endif /* CONFIG_NET_LL_RX_POLL */ 495 496 #ifdef CONFIG_IXGBE_HWMON 497
··· 54 55 #include <net/busy_poll.h> 56 57 + #ifdef CONFIG_NET_RX_BUSY_POLL 58 #define LL_EXTENDED_STATS 59 #endif 60 /* common prefix used by pr_<> macros */ ··· 366 struct rcu_head rcu; /* to avoid race with update stats on free */ 367 char name[IFNAMSIZ + 9]; 368 369 + #ifdef CONFIG_NET_RX_BUSY_POLL 370 unsigned int state; 371 #define IXGBE_QV_STATE_IDLE 0 372 #define IXGBE_QV_STATE_NAPI 1 /* NAPI owns this QV */ ··· 377 #define IXGBE_QV_YIELD (IXGBE_QV_STATE_NAPI_YIELD | IXGBE_QV_STATE_POLL_YIELD) 378 #define IXGBE_QV_USER_PEND (IXGBE_QV_STATE_POLL | IXGBE_QV_STATE_POLL_YIELD) 379 spinlock_t lock; 380 + #endif /* CONFIG_NET_RX_BUSY_POLL */ 381 382 /* for dynamic allocation of rings associated with this q_vector */ 383 struct ixgbe_ring ring[0] ____cacheline_internodealigned_in_smp; 384 }; 385 + #ifdef CONFIG_NET_RX_BUSY_POLL 386 static inline void ixgbe_qv_init_lock(struct ixgbe_q_vector *q_vector) 387 { 388 ··· 462 WARN_ON(!(q_vector->state & IXGBE_QV_LOCKED)); 463 return q_vector->state & IXGBE_QV_USER_PEND; 464 } 465 + #else /* CONFIG_NET_RX_BUSY_POLL */ 466 static inline void ixgbe_qv_init_lock(struct ixgbe_q_vector *q_vector) 467 { 468 } ··· 491 { 492 return false; 493 } 494 + #endif /* CONFIG_NET_RX_BUSY_POLL */ 495 496 #ifdef CONFIG_IXGBE_HWMON 497
+3 -3
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
··· 1998 return total_rx_packets; 1999 } 2000 2001 - #ifdef CONFIG_NET_LL_RX_POLL 2002 /* must be called with local_bh_disable()d */ 2003 static int ixgbe_low_latency_recv(struct napi_struct *napi) 2004 { ··· 2030 2031 return found; 2032 } 2033 - #endif /* CONFIG_NET_LL_RX_POLL */ 2034 2035 /** 2036 * ixgbe_configure_msix - Configure MSI-X hardware ··· 7227 #ifdef CONFIG_NET_POLL_CONTROLLER 7228 .ndo_poll_controller = ixgbe_netpoll, 7229 #endif 7230 - #ifdef CONFIG_NET_LL_RX_POLL 7231 .ndo_busy_poll = ixgbe_low_latency_recv, 7232 #endif 7233 #ifdef IXGBE_FCOE
··· 1998 return total_rx_packets; 1999 } 2000 2001 + #ifdef CONFIG_NET_RX_BUSY_POLL 2002 /* must be called with local_bh_disable()d */ 2003 static int ixgbe_low_latency_recv(struct napi_struct *napi) 2004 { ··· 2030 2031 return found; 2032 } 2033 + #endif /* CONFIG_NET_RX_BUSY_POLL */ 2034 2035 /** 2036 * ixgbe_configure_msix - Configure MSI-X hardware ··· 7227 #ifdef CONFIG_NET_POLL_CONTROLLER 7228 .ndo_poll_controller = ixgbe_netpoll, 7229 #endif 7230 + #ifdef CONFIG_NET_RX_BUSY_POLL 7231 .ndo_busy_poll = ixgbe_low_latency_recv, 7232 #endif 7233 #ifdef IXGBE_FCOE
+3 -3
drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
··· 223 case ETH_SS_STATS: 224 return (priv->stats_bitmap ? bit_count : NUM_ALL_STATS) + 225 (priv->tx_ring_num * 2) + 226 - #ifdef CONFIG_NET_LL_RX_POLL 227 (priv->rx_ring_num * 5); 228 #else 229 (priv->rx_ring_num * 2); ··· 276 for (i = 0; i < priv->rx_ring_num; i++) { 277 data[index++] = priv->rx_ring[i].packets; 278 data[index++] = priv->rx_ring[i].bytes; 279 - #ifdef CONFIG_NET_LL_RX_POLL 280 data[index++] = priv->rx_ring[i].yields; 281 data[index++] = priv->rx_ring[i].misses; 282 data[index++] = priv->rx_ring[i].cleaned; ··· 344 "rx%d_packets", i); 345 sprintf(data + (index++) * ETH_GSTRING_LEN, 346 "rx%d_bytes", i); 347 - #ifdef CONFIG_NET_LL_RX_POLL 348 sprintf(data + (index++) * ETH_GSTRING_LEN, 349 "rx%d_napi_yield", i); 350 sprintf(data + (index++) * ETH_GSTRING_LEN,
··· 223 case ETH_SS_STATS: 224 return (priv->stats_bitmap ? bit_count : NUM_ALL_STATS) + 225 (priv->tx_ring_num * 2) + 226 + #ifdef CONFIG_NET_RX_BUSY_POLL 227 (priv->rx_ring_num * 5); 228 #else 229 (priv->rx_ring_num * 2); ··· 276 for (i = 0; i < priv->rx_ring_num; i++) { 277 data[index++] = priv->rx_ring[i].packets; 278 data[index++] = priv->rx_ring[i].bytes; 279 + #ifdef CONFIG_NET_RX_BUSY_POLL 280 data[index++] = priv->rx_ring[i].yields; 281 data[index++] = priv->rx_ring[i].misses; 282 data[index++] = priv->rx_ring[i].cleaned; ··· 344 "rx%d_packets", i); 345 sprintf(data + (index++) * ETH_GSTRING_LEN, 346 "rx%d_bytes", i); 347 + #ifdef CONFIG_NET_RX_BUSY_POLL 348 sprintf(data + (index++) * ETH_GSTRING_LEN, 349 "rx%d_napi_yield", i); 350 sprintf(data + (index++) * ETH_GSTRING_LEN,
+3 -3
drivers/net/ethernet/mellanox/mlx4/en_netdev.c
··· 68 return 0; 69 } 70 71 - #ifdef CONFIG_NET_LL_RX_POLL 72 /* must be called with local_bh_disable()d */ 73 static int mlx4_en_low_latency_recv(struct napi_struct *napi) 74 { ··· 94 95 return done; 96 } 97 - #endif /* CONFIG_NET_LL_RX_POLL */ 98 99 #ifdef CONFIG_RFS_ACCEL 100 ··· 2140 #ifdef CONFIG_RFS_ACCEL 2141 .ndo_rx_flow_steer = mlx4_en_filter_rfs, 2142 #endif 2143 - #ifdef CONFIG_NET_LL_RX_POLL 2144 .ndo_busy_poll = mlx4_en_low_latency_recv, 2145 #endif 2146 };
··· 68 return 0; 69 } 70 71 + #ifdef CONFIG_NET_RX_BUSY_POLL 72 /* must be called with local_bh_disable()d */ 73 static int mlx4_en_low_latency_recv(struct napi_struct *napi) 74 { ··· 94 95 return done; 96 } 97 + #endif /* CONFIG_NET_RX_BUSY_POLL */ 98 99 #ifdef CONFIG_RFS_ACCEL 100 ··· 2140 #ifdef CONFIG_RFS_ACCEL 2141 .ndo_rx_flow_steer = mlx4_en_filter_rfs, 2142 #endif 2143 + #ifdef CONFIG_NET_RX_BUSY_POLL 2144 .ndo_busy_poll = mlx4_en_low_latency_recv, 2145 #endif 2146 };
+1 -10
drivers/net/ethernet/mellanox/mlx4/fw.c
··· 845 MLX4_CMD_NATIVE); 846 847 if (!err && dev->caps.function != slave) { 848 - /* if config MAC in DB use it */ 849 - if (priv->mfunc.master.vf_oper[slave].vport[vhcr->in_modifier].state.mac) 850 - def_mac = priv->mfunc.master.vf_oper[slave].vport[vhcr->in_modifier].state.mac; 851 - else { 852 - /* set slave default_mac address */ 853 - MLX4_GET(def_mac, outbox->buf, QUERY_PORT_MAC_OFFSET); 854 - def_mac += slave << 8; 855 - priv->mfunc.master.vf_admin[slave].vport[vhcr->in_modifier].mac = def_mac; 856 - } 857 - 858 MLX4_PUT(outbox->buf, def_mac, QUERY_PORT_MAC_OFFSET); 859 860 /* get port type - currently only eth is enabled */
··· 845 MLX4_CMD_NATIVE); 846 847 if (!err && dev->caps.function != slave) { 848 + def_mac = priv->mfunc.master.vf_oper[slave].vport[vhcr->in_modifier].state.mac; 849 MLX4_PUT(outbox->buf, def_mac, QUERY_PORT_MAC_OFFSET); 850 851 /* get port type - currently only eth is enabled */
+1 -1
drivers/net/ethernet/mellanox/mlx4/main.c
··· 371 372 dev->caps.sqp_demux = (mlx4_is_master(dev)) ? MLX4_MAX_NUM_SLAVES : 0; 373 374 - if (!enable_64b_cqe_eqe) { 375 if (dev_cap->flags & 376 (MLX4_DEV_CAP_FLAG_64B_CQE | MLX4_DEV_CAP_FLAG_64B_EQE)) { 377 mlx4_warn(dev, "64B EQEs/CQEs supported by the device but not enabled\n");
··· 371 372 dev->caps.sqp_demux = (mlx4_is_master(dev)) ? MLX4_MAX_NUM_SLAVES : 0; 373 374 + if (!enable_64b_cqe_eqe && !mlx4_is_slave(dev)) { 375 if (dev_cap->flags & 376 (MLX4_DEV_CAP_FLAG_64B_CQE | MLX4_DEV_CAP_FLAG_64B_EQE)) { 377 mlx4_warn(dev, "64B EQEs/CQEs supported by the device but not enabled\n");
+5 -5
drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
··· 292 void *rx_info; 293 unsigned long bytes; 294 unsigned long packets; 295 - #ifdef CONFIG_NET_LL_RX_POLL 296 unsigned long yields; 297 unsigned long misses; 298 unsigned long cleaned; ··· 318 struct mlx4_cqe *buf; 319 #define MLX4_EN_OPCODE_ERROR 0x1e 320 321 - #ifdef CONFIG_NET_LL_RX_POLL 322 unsigned int state; 323 #define MLX4_EN_CQ_STATE_IDLE 0 324 #define MLX4_EN_CQ_STATE_NAPI 1 /* NAPI owns this CQ */ ··· 329 #define CQ_YIELD (MLX4_EN_CQ_STATE_NAPI_YIELD | MLX4_EN_CQ_STATE_POLL_YIELD) 330 #define CQ_USER_PEND (MLX4_EN_CQ_STATE_POLL | MLX4_EN_CQ_STATE_POLL_YIELD) 331 spinlock_t poll_lock; /* protects from LLS/napi conflicts */ 332 - #endif /* CONFIG_NET_LL_RX_POLL */ 333 }; 334 335 struct mlx4_en_port_profile { ··· 580 struct rcu_head rcu; 581 }; 582 583 - #ifdef CONFIG_NET_LL_RX_POLL 584 static inline void mlx4_en_cq_init_lock(struct mlx4_en_cq *cq) 585 { 586 spin_lock_init(&cq->poll_lock); ··· 687 { 688 return false; 689 } 690 - #endif /* CONFIG_NET_LL_RX_POLL */ 691 692 #define MLX4_EN_WOL_DO_MODIFY (1ULL << 63) 693
··· 292 void *rx_info; 293 unsigned long bytes; 294 unsigned long packets; 295 + #ifdef CONFIG_NET_RX_BUSY_POLL 296 unsigned long yields; 297 unsigned long misses; 298 unsigned long cleaned; ··· 318 struct mlx4_cqe *buf; 319 #define MLX4_EN_OPCODE_ERROR 0x1e 320 321 + #ifdef CONFIG_NET_RX_BUSY_POLL 322 unsigned int state; 323 #define MLX4_EN_CQ_STATE_IDLE 0 324 #define MLX4_EN_CQ_STATE_NAPI 1 /* NAPI owns this CQ */ ··· 329 #define CQ_YIELD (MLX4_EN_CQ_STATE_NAPI_YIELD | MLX4_EN_CQ_STATE_POLL_YIELD) 330 #define CQ_USER_PEND (MLX4_EN_CQ_STATE_POLL | MLX4_EN_CQ_STATE_POLL_YIELD) 331 spinlock_t poll_lock; /* protects from LLS/napi conflicts */ 332 + #endif /* CONFIG_NET_RX_BUSY_POLL */ 333 }; 334 335 struct mlx4_en_port_profile { ··· 580 struct rcu_head rcu; 581 }; 582 583 + #ifdef CONFIG_NET_RX_BUSY_POLL 584 static inline void mlx4_en_cq_init_lock(struct mlx4_en_cq *cq) 585 { 586 spin_lock_init(&cq->poll_lock); ··· 687 { 688 return false; 689 } 690 + #endif /* CONFIG_NET_RX_BUSY_POLL */ 691 692 #define MLX4_EN_WOL_DO_MODIFY (1ULL << 63) 693
+3 -9
drivers/net/ethernet/qlogic/qlcnic/qlcnic.h
··· 1400 #define ADDR_IN_RANGE(addr, low, high) \ 1401 (((addr) < (high)) && ((addr) >= (low))) 1402 1403 - #define QLCRD32(adapter, off) \ 1404 - (adapter->ahw->hw_ops->read_reg)(adapter, off) 1405 1406 #define QLCWR32(adapter, off, val) \ 1407 adapter->ahw->hw_ops->write_reg(adapter, off, val) ··· 1604 struct qlcnic_hardware_ops { 1605 void (*read_crb) (struct qlcnic_adapter *, char *, loff_t, size_t); 1606 void (*write_crb) (struct qlcnic_adapter *, char *, loff_t, size_t); 1607 - int (*read_reg) (struct qlcnic_adapter *, ulong); 1608 int (*write_reg) (struct qlcnic_adapter *, ulong, u32); 1609 void (*get_ocm_win) (struct qlcnic_hardware_context *); 1610 int (*get_mac_address) (struct qlcnic_adapter *, u8 *); ··· 1660 loff_t offset, size_t size) 1661 { 1662 adapter->ahw->hw_ops->write_crb(adapter, buf, offset, size); 1663 - } 1664 - 1665 - static inline int qlcnic_hw_read_wx_2M(struct qlcnic_adapter *adapter, 1666 - ulong off) 1667 - { 1668 - return adapter->ahw->hw_ops->read_reg(adapter, off); 1669 } 1670 1671 static inline int qlcnic_hw_write_wx_2M(struct qlcnic_adapter *adapter,
··· 1400 #define ADDR_IN_RANGE(addr, low, high) \ 1401 (((addr) < (high)) && ((addr) >= (low))) 1402 1403 + #define QLCRD32(adapter, off, err) \ 1404 + (adapter->ahw->hw_ops->read_reg)(adapter, off, err) 1405 1406 #define QLCWR32(adapter, off, val) \ 1407 adapter->ahw->hw_ops->write_reg(adapter, off, val) ··· 1604 struct qlcnic_hardware_ops { 1605 void (*read_crb) (struct qlcnic_adapter *, char *, loff_t, size_t); 1606 void (*write_crb) (struct qlcnic_adapter *, char *, loff_t, size_t); 1607 + int (*read_reg) (struct qlcnic_adapter *, ulong, int *); 1608 int (*write_reg) (struct qlcnic_adapter *, ulong, u32); 1609 void (*get_ocm_win) (struct qlcnic_hardware_context *); 1610 int (*get_mac_address) (struct qlcnic_adapter *, u8 *); ··· 1660 loff_t offset, size_t size) 1661 { 1662 adapter->ahw->hw_ops->write_crb(adapter, buf, offset, size); 1663 } 1664 1665 static inline int qlcnic_hw_write_wx_2M(struct qlcnic_adapter *adapter,
+76 -50
drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c
··· 228 return 0; 229 } 230 231 - int qlcnic_83xx_rd_reg_indirect(struct qlcnic_adapter *adapter, ulong addr) 232 { 233 - int ret; 234 struct qlcnic_hardware_context *ahw = adapter->ahw; 235 236 - ret = __qlcnic_set_win_base(adapter, (u32) addr); 237 - if (!ret) { 238 return QLCRDX(ahw, QLCNIC_WILDCARD); 239 } else { 240 dev_err(&adapter->pdev->dev, 241 - "%s failed, addr = 0x%x\n", __func__, (int)addr); 242 return -EIO; 243 } 244 } ··· 561 void qlcnic_83xx_read_crb(struct qlcnic_adapter *adapter, char *buf, 562 loff_t offset, size_t size) 563 { 564 - int ret; 565 u32 data; 566 567 if (qlcnic_api_lock(adapter)) { ··· 571 return; 572 } 573 574 - ret = qlcnic_83xx_rd_reg_indirect(adapter, (u32) offset); 575 qlcnic_api_unlock(adapter); 576 577 if (ret == -EIO) { ··· 580 __func__, (u32)offset); 581 return; 582 } 583 - data = ret; 584 memcpy(buf, &data, size); 585 } 586 ··· 2074 static void qlcnic_83xx_handle_link_aen(struct qlcnic_adapter *adapter, 2075 u32 data[]) 2076 { 2077 u8 link_status, duplex; 2078 /* link speed */ 2079 link_status = LSB(data[3]) & 1; 2080 - adapter->ahw->link_speed = MSW(data[2]); 2081 - adapter->ahw->link_autoneg = MSB(MSW(data[3])); 2082 - adapter->ahw->module_type = MSB(LSW(data[3])); 2083 - duplex = LSB(MSW(data[3])); 2084 - if (duplex) 2085 - adapter->ahw->link_duplex = DUPLEX_FULL; 2086 - else 2087 - adapter->ahw->link_duplex = DUPLEX_HALF; 2088 - adapter->ahw->has_link_events = 1; 2089 qlcnic_advert_link_change(adapter, link_status); 2090 } 2091 ··· 2390 u32 flash_addr, u8 *p_data, 2391 int count) 2392 { 2393 - int i, ret; 2394 - u32 word, range, flash_offset, addr = flash_addr; 2395 ulong indirect_add, direct_window; 2396 2397 flash_offset = addr & (QLCNIC_FLASH_SECTOR_SIZE - 1); 2398 if (addr & 0x3) { ··· 2410 /* Multi sector read */ 2411 for (i = 0; i < count; i++) { 2412 indirect_add = QLC_83XX_FLASH_DIRECT_DATA(addr); 2413 - ret = qlcnic_83xx_rd_reg_indirect(adapter, 2414 - indirect_add); 2415 - if (ret == -EIO) 2416 - return -EIO; 2417 2418 word = ret; 2419 *(u32 *)p_data = word; ··· 2433 /* Single sector read */ 2434 for (i = 0; i < count; i++) { 2435 indirect_add = QLC_83XX_FLASH_DIRECT_DATA(addr); 2436 - ret = qlcnic_83xx_rd_reg_indirect(adapter, 2437 - indirect_add); 2438 - if (ret == -EIO) 2439 - return -EIO; 2440 2441 word = ret; 2442 *(u32 *)p_data = word; ··· 2451 { 2452 u32 status; 2453 int retries = QLC_83XX_FLASH_READ_RETRY_COUNT; 2454 2455 do { 2456 - status = qlcnic_83xx_rd_reg_indirect(adapter, 2457 - QLC_83XX_FLASH_STATUS); 2458 if ((status & QLC_83XX_FLASH_STATUS_READY) == 2459 QLC_83XX_FLASH_STATUS_READY) 2460 break; ··· 2509 2510 int qlcnic_83xx_read_flash_mfg_id(struct qlcnic_adapter *adapter) 2511 { 2512 - int ret, mfg_id; 2513 2514 if (qlcnic_83xx_lock_flash(adapter)) 2515 return -EIO; ··· 2525 return -EIO; 2526 } 2527 2528 - mfg_id = qlcnic_83xx_rd_reg_indirect(adapter, QLC_83XX_FLASH_RDDATA); 2529 - if (mfg_id == -EIO) 2530 - return -EIO; 2531 2532 adapter->flash_mfg_id = (mfg_id & 0xFF); 2533 qlcnic_83xx_unlock_flash(adapter); ··· 2646 u32 *p_data, int count) 2647 { 2648 u32 temp; 2649 - int ret = -EIO; 2650 2651 if ((count < QLC_83XX_FLASH_WRITE_MIN) || 2652 (count > QLC_83XX_FLASH_WRITE_MAX)) { ··· 2655 return -EIO; 2656 } 2657 2658 - temp = qlcnic_83xx_rd_reg_indirect(adapter, 2659 - QLC_83XX_FLASH_SPI_CONTROL); 2660 qlcnic_83xx_wrt_reg_indirect(adapter, QLC_83XX_FLASH_SPI_CONTROL, 2661 (temp | QLC_83XX_FLASH_SPI_CTRL)); 2662 qlcnic_83xx_wrt_reg_indirect(adapter, QLC_83XX_FLASH_ADDR, ··· 2707 return -EIO; 2708 } 2709 2710 - ret = 
qlcnic_83xx_rd_reg_indirect(adapter, QLC_83XX_FLASH_SPI_STATUS); 2711 if ((ret & QLC_83XX_FLASH_SPI_CTRL) == QLC_83XX_FLASH_SPI_CTRL) { 2712 dev_err(&adapter->pdev->dev, "%s: failed at %d\n", 2713 __func__, __LINE__); 2714 /* Operation failed, clear error bit */ 2715 - temp = qlcnic_83xx_rd_reg_indirect(adapter, 2716 - QLC_83XX_FLASH_SPI_CONTROL); 2717 qlcnic_83xx_wrt_reg_indirect(adapter, 2718 QLC_83XX_FLASH_SPI_CONTROL, 2719 (temp | QLC_83XX_FLASH_SPI_CTRL)); ··· 2840 { 2841 int i, j, ret = 0; 2842 u32 temp; 2843 2844 /* Check alignment */ 2845 if (addr & 0xF) ··· 2873 QLCNIC_TA_WRITE_START); 2874 2875 for (j = 0; j < MAX_CTL_CHECK; j++) { 2876 - temp = qlcnic_83xx_rd_reg_indirect(adapter, 2877 - QLCNIC_MS_CTRL); 2878 if ((temp & TA_CTL_BUSY) == 0) 2879 break; 2880 } ··· 2900 int qlcnic_83xx_flash_read32(struct qlcnic_adapter *adapter, u32 flash_addr, 2901 u8 *p_data, int count) 2902 { 2903 - int i, ret; 2904 - u32 word, addr = flash_addr; 2905 ulong indirect_addr; 2906 2907 if (qlcnic_83xx_lock_flash(adapter) != 0) 2908 return -EIO; ··· 2922 } 2923 2924 indirect_addr = QLC_83XX_FLASH_DIRECT_DATA(addr); 2925 - ret = qlcnic_83xx_rd_reg_indirect(adapter, 2926 - indirect_addr); 2927 - if (ret == -EIO) 2928 - return -EIO; 2929 word = ret; 2930 *(u32 *)p_data = word; 2931 p_data = p_data + 4; ··· 3391 3392 static int qlcnic_83xx_read_flash_status_reg(struct qlcnic_adapter *adapter) 3393 { 3394 - int ret; 3395 3396 qlcnic_83xx_wrt_reg_indirect(adapter, QLC_83XX_FLASH_ADDR, 3397 QLC_83XX_FLASH_OEM_READ_SIG); ··· 3402 if (ret) 3403 return -EIO; 3404 3405 - ret = qlcnic_83xx_rd_reg_indirect(adapter, QLC_83XX_FLASH_RDDATA); 3406 - return ret & 0xFF; 3407 } 3408 3409 int qlcnic_83xx_flash_test(struct qlcnic_adapter *adapter)
··· 228 return 0; 229 } 230 231 + int qlcnic_83xx_rd_reg_indirect(struct qlcnic_adapter *adapter, ulong addr, 232 + int *err) 233 { 234 struct qlcnic_hardware_context *ahw = adapter->ahw; 235 236 + *err = __qlcnic_set_win_base(adapter, (u32) addr); 237 + if (!*err) { 238 return QLCRDX(ahw, QLCNIC_WILDCARD); 239 } else { 240 dev_err(&adapter->pdev->dev, 241 + "%s failed, addr = 0x%lx\n", __func__, addr); 242 return -EIO; 243 } 244 } ··· 561 void qlcnic_83xx_read_crb(struct qlcnic_adapter *adapter, char *buf, 562 loff_t offset, size_t size) 563 { 564 + int ret = 0; 565 u32 data; 566 567 if (qlcnic_api_lock(adapter)) { ··· 571 return; 572 } 573 574 + data = QLCRD32(adapter, (u32) offset, &ret); 575 qlcnic_api_unlock(adapter); 576 577 if (ret == -EIO) { ··· 580 __func__, (u32)offset); 581 return; 582 } 583 memcpy(buf, &data, size); 584 } 585 ··· 2075 static void qlcnic_83xx_handle_link_aen(struct qlcnic_adapter *adapter, 2076 u32 data[]) 2077 { 2078 + struct qlcnic_hardware_context *ahw = adapter->ahw; 2079 u8 link_status, duplex; 2080 /* link speed */ 2081 link_status = LSB(data[3]) & 1; 2082 + if (link_status) { 2083 + ahw->link_speed = MSW(data[2]); 2084 + duplex = LSB(MSW(data[3])); 2085 + if (duplex) 2086 + ahw->link_duplex = DUPLEX_FULL; 2087 + else 2088 + ahw->link_duplex = DUPLEX_HALF; 2089 + } else { 2090 + ahw->link_speed = SPEED_UNKNOWN; 2091 + ahw->link_duplex = DUPLEX_UNKNOWN; 2092 + } 2093 + 2094 + ahw->link_autoneg = MSB(MSW(data[3])); 2095 + ahw->module_type = MSB(LSW(data[3])); 2096 + ahw->has_link_events = 1; 2097 qlcnic_advert_link_change(adapter, link_status); 2098 } 2099 ··· 2384 u32 flash_addr, u8 *p_data, 2385 int count) 2386 { 2387 + u32 word, range, flash_offset, addr = flash_addr, ret; 2388 ulong indirect_add, direct_window; 2389 + int i, err = 0; 2390 2391 flash_offset = addr & (QLCNIC_FLASH_SECTOR_SIZE - 1); 2392 if (addr & 0x3) { ··· 2404 /* Multi sector read */ 2405 for (i = 0; i < count; i++) { 2406 indirect_add = QLC_83XX_FLASH_DIRECT_DATA(addr); 2407 + ret = QLCRD32(adapter, indirect_add, &err); 2408 + if (err == -EIO) 2409 + return err; 2410 2411 word = ret; 2412 *(u32 *)p_data = word; ··· 2428 /* Single sector read */ 2429 for (i = 0; i < count; i++) { 2430 indirect_add = QLC_83XX_FLASH_DIRECT_DATA(addr); 2431 + ret = QLCRD32(adapter, indirect_add, &err); 2432 + if (err == -EIO) 2433 + return err; 2434 2435 word = ret; 2436 *(u32 *)p_data = word; ··· 2447 { 2448 u32 status; 2449 int retries = QLC_83XX_FLASH_READ_RETRY_COUNT; 2450 + int err = 0; 2451 2452 do { 2453 + status = QLCRD32(adapter, QLC_83XX_FLASH_STATUS, &err); 2454 + if (err == -EIO) 2455 + return err; 2456 + 2457 if ((status & QLC_83XX_FLASH_STATUS_READY) == 2458 QLC_83XX_FLASH_STATUS_READY) 2459 break; ··· 2502 2503 int qlcnic_83xx_read_flash_mfg_id(struct qlcnic_adapter *adapter) 2504 { 2505 + int ret, err = 0; 2506 + u32 mfg_id; 2507 2508 if (qlcnic_83xx_lock_flash(adapter)) 2509 return -EIO; ··· 2517 return -EIO; 2518 } 2519 2520 + mfg_id = QLCRD32(adapter, QLC_83XX_FLASH_RDDATA, &err); 2521 + if (err == -EIO) { 2522 + qlcnic_83xx_unlock_flash(adapter); 2523 + return err; 2524 + } 2525 2526 adapter->flash_mfg_id = (mfg_id & 0xFF); 2527 qlcnic_83xx_unlock_flash(adapter); ··· 2636 u32 *p_data, int count) 2637 { 2638 u32 temp; 2639 + int ret = -EIO, err = 0; 2640 2641 if ((count < QLC_83XX_FLASH_WRITE_MIN) || 2642 (count > QLC_83XX_FLASH_WRITE_MAX)) { ··· 2645 return -EIO; 2646 } 2647 2648 + temp = QLCRD32(adapter, QLC_83XX_FLASH_SPI_CONTROL, &err); 2649 + if (err == -EIO) 2650 + return err; 2651 
+ 2652 qlcnic_83xx_wrt_reg_indirect(adapter, QLC_83XX_FLASH_SPI_CONTROL, 2653 (temp | QLC_83XX_FLASH_SPI_CTRL)); 2654 qlcnic_83xx_wrt_reg_indirect(adapter, QLC_83XX_FLASH_ADDR, ··· 2695 return -EIO; 2696 } 2697 2698 + ret = QLCRD32(adapter, QLC_83XX_FLASH_SPI_STATUS, &err); 2699 + if (err == -EIO) 2700 + return err; 2701 + 2702 if ((ret & QLC_83XX_FLASH_SPI_CTRL) == QLC_83XX_FLASH_SPI_CTRL) { 2703 dev_err(&adapter->pdev->dev, "%s: failed at %d\n", 2704 __func__, __LINE__); 2705 /* Operation failed, clear error bit */ 2706 + temp = QLCRD32(adapter, QLC_83XX_FLASH_SPI_CONTROL, &err); 2707 + if (err == -EIO) 2708 + return err; 2709 + 2710 qlcnic_83xx_wrt_reg_indirect(adapter, 2711 QLC_83XX_FLASH_SPI_CONTROL, 2712 (temp | QLC_83XX_FLASH_SPI_CTRL)); ··· 2823 { 2824 int i, j, ret = 0; 2825 u32 temp; 2826 + int err = 0; 2827 2828 /* Check alignment */ 2829 if (addr & 0xF) ··· 2855 QLCNIC_TA_WRITE_START); 2856 2857 for (j = 0; j < MAX_CTL_CHECK; j++) { 2858 + temp = QLCRD32(adapter, QLCNIC_MS_CTRL, &err); 2859 + if (err == -EIO) { 2860 + mutex_unlock(&adapter->ahw->mem_lock); 2861 + return err; 2862 + } 2863 + 2864 if ((temp & TA_CTL_BUSY) == 0) 2865 break; 2866 } ··· 2878 int qlcnic_83xx_flash_read32(struct qlcnic_adapter *adapter, u32 flash_addr, 2879 u8 *p_data, int count) 2880 { 2881 + u32 word, addr = flash_addr, ret; 2882 ulong indirect_addr; 2883 + int i, err = 0; 2884 2885 if (qlcnic_83xx_lock_flash(adapter) != 0) 2886 return -EIO; ··· 2900 } 2901 2902 indirect_addr = QLC_83XX_FLASH_DIRECT_DATA(addr); 2903 + ret = QLCRD32(adapter, indirect_addr, &err); 2904 + if (err == -EIO) 2905 + return err; 2906 + 2907 word = ret; 2908 *(u32 *)p_data = word; 2909 p_data = p_data + 4; ··· 3369 3370 static int qlcnic_83xx_read_flash_status_reg(struct qlcnic_adapter *adapter) 3371 { 3372 + int ret, err = 0; 3373 + u32 temp; 3374 3375 qlcnic_83xx_wrt_reg_indirect(adapter, QLC_83XX_FLASH_ADDR, 3376 QLC_83XX_FLASH_OEM_READ_SIG); ··· 3379 if (ret) 3380 return -EIO; 3381 3382 + temp = QLCRD32(adapter, QLC_83XX_FLASH_RDDATA, &err); 3383 + if (err == -EIO) 3384 + return err; 3385 + 3386 + return temp & 0xFF; 3387 } 3388 3389 int qlcnic_83xx_flash_test(struct qlcnic_adapter *adapter)
+1 -1
drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.h
··· 508 void qlcnic_83xx_remove_sysfs(struct qlcnic_adapter *); 509 void qlcnic_83xx_write_crb(struct qlcnic_adapter *, char *, loff_t, size_t); 510 void qlcnic_83xx_read_crb(struct qlcnic_adapter *, char *, loff_t, size_t); 511 - int qlcnic_83xx_rd_reg_indirect(struct qlcnic_adapter *, ulong); 512 int qlcnic_83xx_wrt_reg_indirect(struct qlcnic_adapter *, ulong, u32); 513 void qlcnic_83xx_process_rcv_diag(struct qlcnic_adapter *, int, u64 []); 514 int qlcnic_83xx_nic_set_promisc(struct qlcnic_adapter *, u32);
··· 508 void qlcnic_83xx_remove_sysfs(struct qlcnic_adapter *); 509 void qlcnic_83xx_write_crb(struct qlcnic_adapter *, char *, loff_t, size_t); 510 void qlcnic_83xx_read_crb(struct qlcnic_adapter *, char *, loff_t, size_t); 511 + int qlcnic_83xx_rd_reg_indirect(struct qlcnic_adapter *, ulong, int *); 512 int qlcnic_83xx_wrt_reg_indirect(struct qlcnic_adapter *, ulong, u32); 513 void qlcnic_83xx_process_rcv_diag(struct qlcnic_adapter *, int, u64 []); 514 int qlcnic_83xx_nic_set_promisc(struct qlcnic_adapter *, u32);
+61 -30
drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_init.c
··· 1303 { 1304 int i, j; 1305 u32 val = 0, val1 = 0, reg = 0; 1306 1307 - val = QLCRD32(adapter, QLC_83XX_SRE_SHIM_REG); 1308 dev_info(&adapter->pdev->dev, "SRE-Shim Ctrl:0x%x\n", val); 1309 1310 for (j = 0; j < 2; j++) { ··· 1321 reg = QLC_83XX_PORT1_THRESHOLD; 1322 } 1323 for (i = 0; i < 8; i++) { 1324 - val = QLCRD32(adapter, reg + (i * 0x4)); 1325 dev_info(&adapter->pdev->dev, "0x%x ", val); 1326 } 1327 dev_info(&adapter->pdev->dev, "\n"); ··· 1340 reg = QLC_83XX_PORT1_TC_MC_REG; 1341 } 1342 for (i = 0; i < 4; i++) { 1343 - val = QLCRD32(adapter, reg + (i * 0x4)); 1344 - dev_info(&adapter->pdev->dev, "0x%x ", val); 1345 } 1346 dev_info(&adapter->pdev->dev, "\n"); 1347 } ··· 1359 reg = QLC_83XX_PORT1_TC_STATS; 1360 } 1361 for (i = 7; i >= 0; i--) { 1362 - val = QLCRD32(adapter, reg); 1363 val &= ~(0x7 << 29); /* Reset bits 29 to 31 */ 1364 QLCWR32(adapter, reg, (val | (i << 29))); 1365 - val = QLCRD32(adapter, reg); 1366 dev_info(&adapter->pdev->dev, "0x%x ", val); 1367 } 1368 dev_info(&adapter->pdev->dev, "\n"); 1369 } 1370 1371 - val = QLCRD32(adapter, QLC_83XX_PORT2_IFB_THRESHOLD); 1372 - val1 = QLCRD32(adapter, QLC_83XX_PORT3_IFB_THRESHOLD); 1373 dev_info(&adapter->pdev->dev, 1374 "IFB-Pause Thresholds: Port 2:0x%x, Port 3:0x%x\n", 1375 val, val1); ··· 1440 static int qlcnic_83xx_check_heartbeat(struct qlcnic_adapter *p_dev) 1441 { 1442 u32 heartbeat, peg_status; 1443 - int retries, ret = -EIO; 1444 1445 retries = QLCNIC_HEARTBEAT_CHECK_RETRY_COUNT; 1446 p_dev->heartbeat = QLC_SHARED_REG_RD32(p_dev, ··· 1468 "PEG_NET_2_PC: 0x%x, PEG_NET_3_PC: 0x%x,\n" 1469 "PEG_NET_4_PC: 0x%x\n", peg_status, 1470 QLC_SHARED_REG_RD32(p_dev, QLCNIC_PEG_HALT_STATUS2), 1471 - QLCRD32(p_dev, QLC_83XX_CRB_PEG_NET_0), 1472 - QLCRD32(p_dev, QLC_83XX_CRB_PEG_NET_1), 1473 - QLCRD32(p_dev, QLC_83XX_CRB_PEG_NET_2), 1474 - QLCRD32(p_dev, QLC_83XX_CRB_PEG_NET_3), 1475 - QLCRD32(p_dev, QLC_83XX_CRB_PEG_NET_4)); 1476 1477 if (QLCNIC_FWERROR_CODE(peg_status) == 0x67) 1478 dev_err(&p_dev->pdev->dev, ··· 1516 static int qlcnic_83xx_poll_reg(struct qlcnic_adapter *p_dev, u32 addr, 1517 int duration, u32 mask, u32 status) 1518 { 1519 u32 value; 1520 - int timeout_error; 1521 u8 retries; 1522 1523 - value = qlcnic_83xx_rd_reg_indirect(p_dev, addr); 1524 retries = duration / 10; 1525 1526 do { 1527 if ((value & mask) != status) { 1528 timeout_error = 1; 1529 msleep(duration / 10); 1530 - value = qlcnic_83xx_rd_reg_indirect(p_dev, addr); 1531 } else { 1532 timeout_error = 0; 1533 break; ··· 1625 static void qlcnic_83xx_read_write_crb_reg(struct qlcnic_adapter *p_dev, 1626 u32 raddr, u32 waddr) 1627 { 1628 - int value; 1629 1630 - value = qlcnic_83xx_rd_reg_indirect(p_dev, raddr); 1631 qlcnic_83xx_wrt_reg_indirect(p_dev, waddr, value); 1632 } 1633 ··· 1639 u32 raddr, u32 waddr, 1640 struct qlc_83xx_rmw *p_rmw_hdr) 1641 { 1642 - int value; 1643 1644 - if (p_rmw_hdr->index_a) 1645 value = p_dev->ahw->reset.array[p_rmw_hdr->index_a]; 1646 - else 1647 - value = qlcnic_83xx_rd_reg_indirect(p_dev, raddr); 1648 1649 value &= p_rmw_hdr->mask; 1650 value <<= p_rmw_hdr->shl; ··· 1701 long delay; 1702 struct qlc_83xx_entry *entry; 1703 struct qlc_83xx_poll *poll; 1704 - int i; 1705 unsigned long arg1, arg2; 1706 1707 poll = (struct qlc_83xx_poll *)((char *)p_hdr + ··· 1725 arg1, delay, 1726 poll->mask, 1727 poll->status)){ 1728 - qlcnic_83xx_rd_reg_indirect(p_dev, 1729 - arg1); 1730 - qlcnic_83xx_rd_reg_indirect(p_dev, 1731 - arg2); 1732 } 1733 } 1734 } ··· 1796 struct qlc_83xx_entry_hdr *p_hdr) 1797 { 1798 long delay; 1799 - int 
index, i, j; 1800 struct qlc_83xx_quad_entry *entry; 1801 struct qlc_83xx_poll *poll; 1802 unsigned long addr; ··· 1816 poll->mask, poll->status)){ 1817 index = p_dev->ahw->reset.array_index; 1818 addr = entry->dr_addr; 1819 - j = qlcnic_83xx_rd_reg_indirect(p_dev, addr); 1820 p_dev->ahw->reset.array[index++] = j; 1821 1822 if (index == QLC_83XX_MAX_RESET_SEQ_ENTRIES)
··· 1303 { 1304 int i, j; 1305 u32 val = 0, val1 = 0, reg = 0; 1306 + int err = 0; 1307 1308 + val = QLCRD32(adapter, QLC_83XX_SRE_SHIM_REG, &err); 1309 + if (err == -EIO) 1310 + return; 1311 dev_info(&adapter->pdev->dev, "SRE-Shim Ctrl:0x%x\n", val); 1312 1313 for (j = 0; j < 2; j++) { ··· 1318 reg = QLC_83XX_PORT1_THRESHOLD; 1319 } 1320 for (i = 0; i < 8; i++) { 1321 + val = QLCRD32(adapter, reg + (i * 0x4), &err); 1322 + if (err == -EIO) 1323 + return; 1324 dev_info(&adapter->pdev->dev, "0x%x ", val); 1325 } 1326 dev_info(&adapter->pdev->dev, "\n"); ··· 1335 reg = QLC_83XX_PORT1_TC_MC_REG; 1336 } 1337 for (i = 0; i < 4; i++) { 1338 + val = QLCRD32(adapter, reg + (i * 0x4), &err); 1339 + if (err == -EIO) 1340 + return; 1341 + dev_info(&adapter->pdev->dev, "0x%x ", val); 1342 } 1343 dev_info(&adapter->pdev->dev, "\n"); 1344 } ··· 1352 reg = QLC_83XX_PORT1_TC_STATS; 1353 } 1354 for (i = 7; i >= 0; i--) { 1355 + val = QLCRD32(adapter, reg, &err); 1356 + if (err == -EIO) 1357 + return; 1358 val &= ~(0x7 << 29); /* Reset bits 29 to 31 */ 1359 QLCWR32(adapter, reg, (val | (i << 29))); 1360 + val = QLCRD32(adapter, reg, &err); 1361 + if (err == -EIO) 1362 + return; 1363 dev_info(&adapter->pdev->dev, "0x%x ", val); 1364 } 1365 dev_info(&adapter->pdev->dev, "\n"); 1366 } 1367 1368 + val = QLCRD32(adapter, QLC_83XX_PORT2_IFB_THRESHOLD, &err); 1369 + if (err == -EIO) 1370 + return; 1371 + val1 = QLCRD32(adapter, QLC_83XX_PORT3_IFB_THRESHOLD, &err); 1372 + if (err == -EIO) 1373 + return; 1374 dev_info(&adapter->pdev->dev, 1375 "IFB-Pause Thresholds: Port 2:0x%x, Port 3:0x%x\n", 1376 val, val1); ··· 1425 static int qlcnic_83xx_check_heartbeat(struct qlcnic_adapter *p_dev) 1426 { 1427 u32 heartbeat, peg_status; 1428 + int retries, ret = -EIO, err = 0; 1429 1430 retries = QLCNIC_HEARTBEAT_CHECK_RETRY_COUNT; 1431 p_dev->heartbeat = QLC_SHARED_REG_RD32(p_dev, ··· 1453 "PEG_NET_2_PC: 0x%x, PEG_NET_3_PC: 0x%x,\n" 1454 "PEG_NET_4_PC: 0x%x\n", peg_status, 1455 QLC_SHARED_REG_RD32(p_dev, QLCNIC_PEG_HALT_STATUS2), 1456 + QLCRD32(p_dev, QLC_83XX_CRB_PEG_NET_0, &err), 1457 + QLCRD32(p_dev, QLC_83XX_CRB_PEG_NET_1, &err), 1458 + QLCRD32(p_dev, QLC_83XX_CRB_PEG_NET_2, &err), 1459 + QLCRD32(p_dev, QLC_83XX_CRB_PEG_NET_3, &err), 1460 + QLCRD32(p_dev, QLC_83XX_CRB_PEG_NET_4, &err)); 1461 1462 if (QLCNIC_FWERROR_CODE(peg_status) == 0x67) 1463 dev_err(&p_dev->pdev->dev, ··· 1501 static int qlcnic_83xx_poll_reg(struct qlcnic_adapter *p_dev, u32 addr, 1502 int duration, u32 mask, u32 status) 1503 { 1504 + int timeout_error, err = 0; 1505 u32 value; 1506 u8 retries; 1507 1508 + value = QLCRD32(p_dev, addr, &err); 1509 + if (err == -EIO) 1510 + return err; 1511 retries = duration / 10; 1512 1513 do { 1514 if ((value & mask) != status) { 1515 timeout_error = 1; 1516 msleep(duration / 10); 1517 + value = QLCRD32(p_dev, addr, &err); 1518 + if (err == -EIO) 1519 + return err; 1520 } else { 1521 timeout_error = 0; 1522 break; ··· 1606 static void qlcnic_83xx_read_write_crb_reg(struct qlcnic_adapter *p_dev, 1607 u32 raddr, u32 waddr) 1608 { 1609 + int err = 0; 1610 + u32 value; 1611 1612 + value = QLCRD32(p_dev, raddr, &err); 1613 + if (err == -EIO) 1614 + return; 1615 qlcnic_83xx_wrt_reg_indirect(p_dev, waddr, value); 1616 } 1617 ··· 1617 u32 raddr, u32 waddr, 1618 struct qlc_83xx_rmw *p_rmw_hdr) 1619 { 1620 + int err = 0; 1621 + u32 value; 1622 1623 + if (p_rmw_hdr->index_a) { 1624 value = p_dev->ahw->reset.array[p_rmw_hdr->index_a]; 1625 + } else { 1626 + value = QLCRD32(p_dev, raddr, &err); 1627 + if (err == -EIO) 1628 + 
return; 1629 + } 1630 1631 value &= p_rmw_hdr->mask; 1632 value <<= p_rmw_hdr->shl; ··· 1675 long delay; 1676 struct qlc_83xx_entry *entry; 1677 struct qlc_83xx_poll *poll; 1678 + int i, err = 0; 1679 unsigned long arg1, arg2; 1680 1681 poll = (struct qlc_83xx_poll *)((char *)p_hdr + ··· 1699 arg1, delay, 1700 poll->mask, 1701 poll->status)){ 1702 + QLCRD32(p_dev, arg1, &err); 1703 + if (err == -EIO) 1704 + return; 1705 + QLCRD32(p_dev, arg2, &err); 1706 + if (err == -EIO) 1707 + return; 1708 } 1709 } 1710 } ··· 1768 struct qlc_83xx_entry_hdr *p_hdr) 1769 { 1770 long delay; 1771 + int index, i, j, err; 1772 struct qlc_83xx_quad_entry *entry; 1773 struct qlc_83xx_poll *poll; 1774 unsigned long addr; ··· 1788 poll->mask, poll->status)){ 1789 index = p_dev->ahw->reset.array_index; 1790 addr = entry->dr_addr; 1791 + j = QLCRD32(p_dev, addr, &err); 1792 + if (err == -EIO) 1793 + return; 1794 + 1795 p_dev->ahw->reset.array[index++] = j; 1796 1797 if (index == QLC_83XX_MAX_RESET_SEQ_ENTRIES)
+8 -8
drivers/net/ethernet/qlogic/qlcnic/qlcnic_ctx.c
··· 104 qlcnic_poll_rsp(struct qlcnic_adapter *adapter) 105 { 106 u32 rsp; 107 - int timeout = 0; 108 109 do { 110 /* give atleast 1ms for firmware to respond */ ··· 113 if (++timeout > QLCNIC_OS_CRB_RETRY_COUNT) 114 return QLCNIC_CDRP_RSP_TIMEOUT; 115 116 - rsp = QLCRD32(adapter, QLCNIC_CDRP_CRB_OFFSET); 117 } while (!QLCNIC_CDRP_IS_RSP(rsp)); 118 119 return rsp; ··· 122 int qlcnic_82xx_issue_cmd(struct qlcnic_adapter *adapter, 123 struct qlcnic_cmd_args *cmd) 124 { 125 - int i; 126 u32 rsp; 127 u32 signature; 128 struct pci_dev *pdev = adapter->pdev; ··· 148 dev_err(&pdev->dev, "card response timeout.\n"); 149 cmd->rsp.arg[0] = QLCNIC_RCODE_TIMEOUT; 150 } else if (rsp == QLCNIC_CDRP_RSP_FAIL) { 151 - cmd->rsp.arg[0] = QLCRD32(adapter, QLCNIC_CDRP_ARG(1)); 152 switch (cmd->rsp.arg[0]) { 153 case QLCNIC_RCODE_INVALID_ARGS: 154 fmt = "CDRP invalid args: [%d]\n"; ··· 175 cmd->rsp.arg[0] = QLCNIC_RCODE_SUCCESS; 176 177 for (i = 1; i < cmd->rsp.num; i++) 178 - cmd->rsp.arg[i] = QLCRD32(adapter, QLCNIC_CDRP_ARG(i)); 179 180 /* Release semaphore */ 181 qlcnic_api_unlock(adapter); ··· 210 if (err) { 211 dev_info(&adapter->pdev->dev, 212 "Failed to set driver version in firmware\n"); 213 - return -EIO; 214 } 215 - 216 - return 0; 217 } 218 219 int
··· 104 qlcnic_poll_rsp(struct qlcnic_adapter *adapter) 105 { 106 u32 rsp; 107 + int timeout = 0, err = 0; 108 109 do { 110 /* give atleast 1ms for firmware to respond */ ··· 113 if (++timeout > QLCNIC_OS_CRB_RETRY_COUNT) 114 return QLCNIC_CDRP_RSP_TIMEOUT; 115 116 + rsp = QLCRD32(adapter, QLCNIC_CDRP_CRB_OFFSET, &err); 117 } while (!QLCNIC_CDRP_IS_RSP(rsp)); 118 119 return rsp; ··· 122 int qlcnic_82xx_issue_cmd(struct qlcnic_adapter *adapter, 123 struct qlcnic_cmd_args *cmd) 124 { 125 + int i, err = 0; 126 u32 rsp; 127 u32 signature; 128 struct pci_dev *pdev = adapter->pdev; ··· 148 dev_err(&pdev->dev, "card response timeout.\n"); 149 cmd->rsp.arg[0] = QLCNIC_RCODE_TIMEOUT; 150 } else if (rsp == QLCNIC_CDRP_RSP_FAIL) { 151 + cmd->rsp.arg[0] = QLCRD32(adapter, QLCNIC_CDRP_ARG(1), &err); 152 switch (cmd->rsp.arg[0]) { 153 case QLCNIC_RCODE_INVALID_ARGS: 154 fmt = "CDRP invalid args: [%d]\n"; ··· 175 cmd->rsp.arg[0] = QLCNIC_RCODE_SUCCESS; 176 177 for (i = 1; i < cmd->rsp.num; i++) 178 + cmd->rsp.arg[i] = QLCRD32(adapter, QLCNIC_CDRP_ARG(i), &err); 179 180 /* Release semaphore */ 181 qlcnic_api_unlock(adapter); ··· 210 if (err) { 211 dev_info(&adapter->pdev->dev, 212 "Failed to set driver version in firmware\n"); 213 + err = -EIO; 214 } 215 + qlcnic_free_mbx_args(&cmd); 216 + return err; 217 } 218 219 int
+62 -21
drivers/net/ethernet/qlogic/qlcnic/qlcnic_ethtool.c
··· 150 "Link_Test_on_offline", 151 "Interrupt_Test_offline", 152 "Internal_Loopback_offline", 153 "EEPROM_Test_offline" 154 }; 155 ··· 267 { 268 struct qlcnic_hardware_context *ahw = adapter->ahw; 269 u32 speed, reg; 270 - int check_sfp_module = 0; 271 u16 pcifn = ahw->pci_func; 272 273 /* read which mode */ ··· 290 291 } else if (adapter->ahw->port_type == QLCNIC_XGBE) { 292 u32 val = 0; 293 - val = QLCRD32(adapter, QLCNIC_PORT_MODE_ADDR); 294 295 if (val == QLCNIC_PORT_MODE_802_3_AP) { 296 ecmd->supported = SUPPORTED_1000baseT_Full; ··· 301 } 302 303 if (netif_running(adapter->netdev) && ahw->has_link_events) { 304 - reg = QLCRD32(adapter, P3P_LINK_SPEED_REG(pcifn)); 305 - speed = P3P_LINK_SPEED_VAL(pcifn, reg); 306 - ahw->link_speed = speed * P3P_LINK_SPEED_MHZ; 307 ethtool_cmd_speed_set(ecmd, ahw->link_speed); 308 ecmd->autoneg = ahw->link_autoneg; 309 ecmd->duplex = ahw->link_duplex; ··· 468 static int qlcnic_82xx_get_registers(struct qlcnic_adapter *adapter, 469 u32 *regs_buff) 470 { 471 - int i, j = 0; 472 473 for (i = QLCNIC_DEV_INFO_SIZE + 1; diag_registers[j] != -1; j++, i++) 474 regs_buff[i] = QLC_SHARED_REG_RD32(adapter, diag_registers[j]); 475 j = 0; 476 while (ext_diag_registers[j] != -1) 477 - regs_buff[i++] = QLCRD32(adapter, ext_diag_registers[j++]); 478 return i; 479 } 480 ··· 525 static u32 qlcnic_test_link(struct net_device *dev) 526 { 527 struct qlcnic_adapter *adapter = netdev_priv(dev); 528 u32 val; 529 530 if (qlcnic_83xx_check(adapter)) { 531 val = qlcnic_83xx_test_link(adapter); 532 return (val & 1) ? 0 : 1; 533 } 534 - val = QLCRD32(adapter, CRB_XG_STATE_P3P); 535 val = XG_LINK_STATE_P3P(adapter->ahw->pci_func, val); 536 return (val == XG_LINK_UP_P3P) ? 0 : 1; 537 } ··· 667 { 668 struct qlcnic_adapter *adapter = netdev_priv(netdev); 669 int port = adapter->ahw->physical_port; 670 __u32 val; 671 672 if (qlcnic_83xx_check(adapter)) { ··· 678 if ((port < 0) || (port > QLCNIC_NIU_MAX_GBE_PORTS)) 679 return; 680 /* get flow control settings */ 681 - val = QLCRD32(adapter, QLCNIC_NIU_GB_MAC_CONFIG_0(port)); 682 pause->rx_pause = qlcnic_gb_get_rx_flowctl(val); 683 - val = QLCRD32(adapter, QLCNIC_NIU_GB_PAUSE_CTL); 684 switch (port) { 685 case 0: 686 pause->tx_pause = !(qlcnic_gb_get_gb0_mask(val)); ··· 704 if ((port < 0) || (port > QLCNIC_NIU_MAX_XG_PORTS)) 705 return; 706 pause->rx_pause = 1; 707 - val = QLCRD32(adapter, QLCNIC_NIU_XG_PAUSE_CTL); 708 if (port == 0) 709 pause->tx_pause = !(qlcnic_xg_get_xg0_mask(val)); 710 else ··· 723 { 724 struct qlcnic_adapter *adapter = netdev_priv(netdev); 725 int port = adapter->ahw->physical_port; 726 __u32 val; 727 728 if (qlcnic_83xx_check(adapter)) ··· 734 if ((port < 0) || (port > QLCNIC_NIU_MAX_GBE_PORTS)) 735 return -EIO; 736 /* set flow control */ 737 - val = QLCRD32(adapter, QLCNIC_NIU_GB_MAC_CONFIG_0(port)); 738 739 if (pause->rx_pause) 740 qlcnic_gb_rx_flowctl(val); ··· 747 val); 748 QLCWR32(adapter, QLCNIC_NIU_GB_MAC_CONFIG_0(port), val); 749 /* set autoneg */ 750 - val = QLCRD32(adapter, QLCNIC_NIU_GB_PAUSE_CTL); 751 switch (port) { 752 case 0: 753 if (pause->tx_pause) ··· 785 if ((port < 0) || (port > QLCNIC_NIU_MAX_XG_PORTS)) 786 return -EIO; 787 788 - val = QLCRD32(adapter, QLCNIC_NIU_XG_PAUSE_CTL); 789 if (port == 0) { 790 if (pause->tx_pause) 791 qlcnic_xg_unset_xg0_mask(val); ··· 811 { 812 struct qlcnic_adapter *adapter = netdev_priv(dev); 813 u32 data_read; 814 815 if (qlcnic_83xx_check(adapter)) 816 return qlcnic_83xx_reg_test(adapter); 817 818 - data_read = QLCRD32(adapter, QLCNIC_PCIX_PH_REG(0)); 819 if 
((data_read & 0xffff) != adapter->pdev->vendor) 820 return 1; 821 ··· 1052 if (data[3]) 1053 eth_test->flags |= ETH_TEST_FL_FAILED; 1054 1055 - data[4] = qlcnic_eeprom_test(dev); 1056 - if (data[4]) 1057 eth_test->flags |= ETH_TEST_FL_FAILED; 1058 } 1059 } ··· 1290 { 1291 struct qlcnic_adapter *adapter = netdev_priv(dev); 1292 u32 wol_cfg; 1293 1294 if (qlcnic_83xx_check(adapter)) 1295 return; 1296 wol->supported = 0; 1297 wol->wolopts = 0; 1298 1299 - wol_cfg = QLCRD32(adapter, QLCNIC_WOL_CONFIG_NV); 1300 if (wol_cfg & (1UL << adapter->portnum)) 1301 wol->supported |= WAKE_MAGIC; 1302 1303 - wol_cfg = QLCRD32(adapter, QLCNIC_WOL_CONFIG); 1304 if (wol_cfg & (1UL << adapter->portnum)) 1305 wol->wolopts |= WAKE_MAGIC; 1306 } ··· 1313 { 1314 struct qlcnic_adapter *adapter = netdev_priv(dev); 1315 u32 wol_cfg; 1316 1317 if (qlcnic_83xx_check(adapter)) 1318 return -EOPNOTSUPP; 1319 if (wol->wolopts & ~WAKE_MAGIC) 1320 return -EINVAL; 1321 1322 - wol_cfg = QLCRD32(adapter, QLCNIC_WOL_CONFIG_NV); 1323 if (!(wol_cfg & (1 << adapter->portnum))) 1324 return -EOPNOTSUPP; 1325 1326 - wol_cfg = QLCRD32(adapter, QLCNIC_WOL_CONFIG); 1327 if (wol->wolopts & WAKE_MAGIC) 1328 wol_cfg |= 1UL << adapter->portnum; 1329 else
··· 150 "Link_Test_on_offline", 151 "Interrupt_Test_offline", 152 "Internal_Loopback_offline", 153 + "External_Loopback_offline", 154 "EEPROM_Test_offline" 155 }; 156 ··· 266 { 267 struct qlcnic_hardware_context *ahw = adapter->ahw; 268 u32 speed, reg; 269 + int check_sfp_module = 0, err = 0; 270 u16 pcifn = ahw->pci_func; 271 272 /* read which mode */ ··· 289 290 } else if (adapter->ahw->port_type == QLCNIC_XGBE) { 291 u32 val = 0; 292 + val = QLCRD32(adapter, QLCNIC_PORT_MODE_ADDR, &err); 293 294 if (val == QLCNIC_PORT_MODE_802_3_AP) { 295 ecmd->supported = SUPPORTED_1000baseT_Full; ··· 300 } 301 302 if (netif_running(adapter->netdev) && ahw->has_link_events) { 303 + if (ahw->linkup) { 304 + reg = QLCRD32(adapter, 305 + P3P_LINK_SPEED_REG(pcifn), &err); 306 + speed = P3P_LINK_SPEED_VAL(pcifn, reg); 307 + ahw->link_speed = speed * P3P_LINK_SPEED_MHZ; 308 + } 309 + 310 ethtool_cmd_speed_set(ecmd, ahw->link_speed); 311 ecmd->autoneg = ahw->link_autoneg; 312 ecmd->duplex = ahw->link_duplex; ··· 463 static int qlcnic_82xx_get_registers(struct qlcnic_adapter *adapter, 464 u32 *regs_buff) 465 { 466 + int i, j = 0, err = 0; 467 468 for (i = QLCNIC_DEV_INFO_SIZE + 1; diag_registers[j] != -1; j++, i++) 469 regs_buff[i] = QLC_SHARED_REG_RD32(adapter, diag_registers[j]); 470 j = 0; 471 while (ext_diag_registers[j] != -1) 472 + regs_buff[i++] = QLCRD32(adapter, ext_diag_registers[j++], 473 + &err); 474 return i; 475 } 476 ··· 519 static u32 qlcnic_test_link(struct net_device *dev) 520 { 521 struct qlcnic_adapter *adapter = netdev_priv(dev); 522 + int err = 0; 523 u32 val; 524 525 if (qlcnic_83xx_check(adapter)) { 526 val = qlcnic_83xx_test_link(adapter); 527 return (val & 1) ? 0 : 1; 528 } 529 + val = QLCRD32(adapter, CRB_XG_STATE_P3P, &err); 530 + if (err == -EIO) 531 + return err; 532 val = XG_LINK_STATE_P3P(adapter->ahw->pci_func, val); 533 return (val == XG_LINK_UP_P3P) ? 
0 : 1; 534 } ··· 658 { 659 struct qlcnic_adapter *adapter = netdev_priv(netdev); 660 int port = adapter->ahw->physical_port; 661 + int err = 0; 662 __u32 val; 663 664 if (qlcnic_83xx_check(adapter)) { ··· 668 if ((port < 0) || (port > QLCNIC_NIU_MAX_GBE_PORTS)) 669 return; 670 /* get flow control settings */ 671 + val = QLCRD32(adapter, QLCNIC_NIU_GB_MAC_CONFIG_0(port), &err); 672 + if (err == -EIO) 673 + return; 674 pause->rx_pause = qlcnic_gb_get_rx_flowctl(val); 675 + val = QLCRD32(adapter, QLCNIC_NIU_GB_PAUSE_CTL, &err); 676 + if (err == -EIO) 677 + return; 678 switch (port) { 679 case 0: 680 pause->tx_pause = !(qlcnic_gb_get_gb0_mask(val)); ··· 690 if ((port < 0) || (port > QLCNIC_NIU_MAX_XG_PORTS)) 691 return; 692 pause->rx_pause = 1; 693 + val = QLCRD32(adapter, QLCNIC_NIU_XG_PAUSE_CTL, &err); 694 + if (err == -EIO) 695 + return; 696 if (port == 0) 697 pause->tx_pause = !(qlcnic_xg_get_xg0_mask(val)); 698 else ··· 707 { 708 struct qlcnic_adapter *adapter = netdev_priv(netdev); 709 int port = adapter->ahw->physical_port; 710 + int err = 0; 711 __u32 val; 712 713 if (qlcnic_83xx_check(adapter)) ··· 717 if ((port < 0) || (port > QLCNIC_NIU_MAX_GBE_PORTS)) 718 return -EIO; 719 /* set flow control */ 720 + val = QLCRD32(adapter, QLCNIC_NIU_GB_MAC_CONFIG_0(port), &err); 721 + if (err == -EIO) 722 + return err; 723 724 if (pause->rx_pause) 725 qlcnic_gb_rx_flowctl(val); ··· 728 val); 729 QLCWR32(adapter, QLCNIC_NIU_GB_MAC_CONFIG_0(port), val); 730 /* set autoneg */ 731 + val = QLCRD32(adapter, QLCNIC_NIU_GB_PAUSE_CTL, &err); 732 + if (err == -EIO) 733 + return err; 734 switch (port) { 735 case 0: 736 if (pause->tx_pause) ··· 764 if ((port < 0) || (port > QLCNIC_NIU_MAX_XG_PORTS)) 765 return -EIO; 766 767 + val = QLCRD32(adapter, QLCNIC_NIU_XG_PAUSE_CTL, &err); 768 + if (err == -EIO) 769 + return err; 770 if (port == 0) { 771 if (pause->tx_pause) 772 qlcnic_xg_unset_xg0_mask(val); ··· 788 { 789 struct qlcnic_adapter *adapter = netdev_priv(dev); 790 u32 data_read; 791 + int err = 0; 792 793 if (qlcnic_83xx_check(adapter)) 794 return qlcnic_83xx_reg_test(adapter); 795 796 + data_read = QLCRD32(adapter, QLCNIC_PCIX_PH_REG(0), &err); 797 + if (err == -EIO) 798 + return err; 799 if ((data_read & 0xffff) != adapter->pdev->vendor) 800 return 1; 801 ··· 1026 if (data[3]) 1027 eth_test->flags |= ETH_TEST_FL_FAILED; 1028 1029 + if (eth_test->flags & ETH_TEST_FL_EXTERNAL_LB) { 1030 + data[4] = qlcnic_loopback_test(dev, QLCNIC_ELB_MODE); 1031 + if (data[4]) 1032 + eth_test->flags |= ETH_TEST_FL_FAILED; 1033 + eth_test->flags |= ETH_TEST_FL_EXTERNAL_LB_DONE; 1034 + } 1035 + 1036 + data[5] = qlcnic_eeprom_test(dev); 1037 + if (data[5]) 1038 eth_test->flags |= ETH_TEST_FL_FAILED; 1039 } 1040 } ··· 1257 { 1258 struct qlcnic_adapter *adapter = netdev_priv(dev); 1259 u32 wol_cfg; 1260 + int err = 0; 1261 1262 if (qlcnic_83xx_check(adapter)) 1263 return; 1264 wol->supported = 0; 1265 wol->wolopts = 0; 1266 1267 + wol_cfg = QLCRD32(adapter, QLCNIC_WOL_CONFIG_NV, &err); 1268 + if (err == -EIO) 1269 + return; 1270 if (wol_cfg & (1UL << adapter->portnum)) 1271 wol->supported |= WAKE_MAGIC; 1272 1273 + wol_cfg = QLCRD32(adapter, QLCNIC_WOL_CONFIG, &err); 1274 if (wol_cfg & (1UL << adapter->portnum)) 1275 wol->wolopts |= WAKE_MAGIC; 1276 } ··· 1277 { 1278 struct qlcnic_adapter *adapter = netdev_priv(dev); 1279 u32 wol_cfg; 1280 + int err = 0; 1281 1282 if (qlcnic_83xx_check(adapter)) 1283 return -EOPNOTSUPP; 1284 if (wol->wolopts & ~WAKE_MAGIC) 1285 return -EINVAL; 1286 1287 + wol_cfg = QLCRD32(adapter, 
QLCNIC_WOL_CONFIG_NV, &err); 1288 + if (err == -EIO) 1289 + return err; 1290 if (!(wol_cfg & (1 << adapter->portnum))) 1291 return -EOPNOTSUPP; 1292 1293 + wol_cfg = QLCRD32(adapter, QLCNIC_WOL_CONFIG, &err); 1294 + if (err == -EIO) 1295 + return err; 1296 if (wol->wolopts & WAKE_MAGIC) 1297 wol_cfg |= 1UL << adapter->portnum; 1298 else
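A note on the pattern running through this file: QLCRD32() gains an "int *err" out-parameter because the indirect register read behind it (see the qlcnic_82xx_hw_read_wx_2M() prototype change below) can fail, and a failed read was previously indistinguishable from valid register data. Callers now test *err for -EIO before trusting the value. A minimal sketch of the resulting caller shape — example_get_wol_state() is an illustration, not a function from the patch:

static int example_get_wol_state(struct qlcnic_adapter *adapter)
{
	int err = 0;
	u32 wol_cfg;

	wol_cfg = QLCRD32(adapter, QLCNIC_WOL_CONFIG_NV, &err);
	if (err == -EIO)	/* the read itself failed */
		return err;

	return !!(wol_cfg & (1UL << adapter->portnum));
}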
+27 -13
drivers/net/ethernet/qlogic/qlcnic/qlcnic_hw.c
··· 317 int 318 qlcnic_pcie_sem_lock(struct qlcnic_adapter *adapter, int sem, u32 id_reg) 319 { 320 - int done = 0, timeout = 0; 321 322 while (!done) { 323 - done = QLCRD32(adapter, QLCNIC_PCIE_REG(PCIE_SEM_LOCK(sem))); 324 if (done == 1) 325 break; 326 if (++timeout >= QLCNIC_PCIE_SEM_TIMEOUT) { 327 dev_err(&adapter->pdev->dev, 328 "Failed to acquire sem=%d lock; holdby=%d\n", 329 - sem, id_reg ? QLCRD32(adapter, id_reg) : -1); 330 return -EIO; 331 } 332 msleep(1); ··· 345 void 346 qlcnic_pcie_sem_unlock(struct qlcnic_adapter *adapter, int sem) 347 { 348 - QLCRD32(adapter, QLCNIC_PCIE_REG(PCIE_SEM_UNLOCK(sem))); 349 } 350 351 int qlcnic_ind_rd(struct qlcnic_adapter *adapter, u32 addr) 352 { 353 u32 data; 354 355 if (qlcnic_82xx_check(adapter)) 356 qlcnic_read_window_reg(addr, adapter->ahw->pci_base0, &data); 357 else { 358 - data = qlcnic_83xx_rd_reg_indirect(adapter, addr); 359 - if (data == -EIO) 360 - return -EIO; 361 } 362 return data; 363 } ··· 1166 return -EIO; 1167 } 1168 1169 - int qlcnic_82xx_hw_read_wx_2M(struct qlcnic_adapter *adapter, ulong off) 1170 { 1171 unsigned long flags; 1172 int rv; ··· 1423 1424 int qlcnic_82xx_get_board_info(struct qlcnic_adapter *adapter) 1425 { 1426 - int offset, board_type, magic; 1427 struct pci_dev *pdev = adapter->pdev; 1428 1429 offset = QLCNIC_FW_MAGIC_OFFSET; ··· 1443 adapter->ahw->board_type = board_type; 1444 1445 if (board_type == QLCNIC_BRDTYPE_P3P_4_GB_MM) { 1446 - u32 gpio = QLCRD32(adapter, QLCNIC_ROMUSB_GLB_PAD_GPIO_I); 1447 if ((gpio & 0x8000) == 0) 1448 board_type = QLCNIC_BRDTYPE_P3P_10G_TP; 1449 } ··· 1485 qlcnic_wol_supported(struct qlcnic_adapter *adapter) 1486 { 1487 u32 wol_cfg; 1488 1489 - wol_cfg = QLCRD32(adapter, QLCNIC_WOL_CONFIG_NV); 1490 if (wol_cfg & (1UL << adapter->portnum)) { 1491 - wol_cfg = QLCRD32(adapter, QLCNIC_WOL_CONFIG); 1492 if (wol_cfg & (1 << adapter->portnum)) 1493 return 1; 1494 } ··· 1552 void qlcnic_82xx_read_crb(struct qlcnic_adapter *adapter, char *buf, 1553 loff_t offset, size_t size) 1554 { 1555 u32 data; 1556 u64 qmdata; 1557 ··· 1560 qlcnic_pci_camqm_read_2M(adapter, offset, &qmdata); 1561 memcpy(buf, &qmdata, size); 1562 } else { 1563 - data = QLCRD32(adapter, offset); 1564 memcpy(buf, &data, size); 1565 } 1566 }
··· 317 int 318 qlcnic_pcie_sem_lock(struct qlcnic_adapter *adapter, int sem, u32 id_reg) 319 { 320 + int timeout = 0; 321 + int err = 0; 322 + u32 done = 0; 323 324 while (!done) { 325 + done = QLCRD32(adapter, QLCNIC_PCIE_REG(PCIE_SEM_LOCK(sem)), 326 + &err); 327 if (done == 1) 328 break; 329 if (++timeout >= QLCNIC_PCIE_SEM_TIMEOUT) { 330 dev_err(&adapter->pdev->dev, 331 "Failed to acquire sem=%d lock; holdby=%d\n", 332 + sem, 333 + id_reg ? QLCRD32(adapter, id_reg, &err) : -1); 334 return -EIO; 335 } 336 msleep(1); ··· 341 void 342 qlcnic_pcie_sem_unlock(struct qlcnic_adapter *adapter, int sem) 343 { 344 + int err = 0; 345 + 346 + QLCRD32(adapter, QLCNIC_PCIE_REG(PCIE_SEM_UNLOCK(sem)), &err); 347 } 348 349 int qlcnic_ind_rd(struct qlcnic_adapter *adapter, u32 addr) 350 { 351 + int err = 0; 352 u32 data; 353 354 if (qlcnic_82xx_check(adapter)) 355 qlcnic_read_window_reg(addr, adapter->ahw->pci_base0, &data); 356 else { 357 + data = QLCRD32(adapter, addr, &err); 358 + if (err == -EIO) 359 + return err; 360 } 361 return data; 362 } ··· 1159 return -EIO; 1160 } 1161 1162 + int qlcnic_82xx_hw_read_wx_2M(struct qlcnic_adapter *adapter, ulong off, 1163 + int *err) 1164 { 1165 unsigned long flags; 1166 int rv; ··· 1415 1416 int qlcnic_82xx_get_board_info(struct qlcnic_adapter *adapter) 1417 { 1418 + int offset, board_type, magic, err = 0; 1419 struct pci_dev *pdev = adapter->pdev; 1420 1421 offset = QLCNIC_FW_MAGIC_OFFSET; ··· 1435 adapter->ahw->board_type = board_type; 1436 1437 if (board_type == QLCNIC_BRDTYPE_P3P_4_GB_MM) { 1438 + u32 gpio = QLCRD32(adapter, QLCNIC_ROMUSB_GLB_PAD_GPIO_I, &err); 1439 + if (err == -EIO) 1440 + return err; 1441 if ((gpio & 0x8000) == 0) 1442 board_type = QLCNIC_BRDTYPE_P3P_10G_TP; 1443 } ··· 1475 qlcnic_wol_supported(struct qlcnic_adapter *adapter) 1476 { 1477 u32 wol_cfg; 1478 + int err = 0; 1479 1480 + wol_cfg = QLCRD32(adapter, QLCNIC_WOL_CONFIG_NV, &err); 1481 if (wol_cfg & (1UL << adapter->portnum)) { 1482 + wol_cfg = QLCRD32(adapter, QLCNIC_WOL_CONFIG, &err); 1483 + if (err == -EIO) 1484 + return err; 1485 if (wol_cfg & (1 << adapter->portnum)) 1486 return 1; 1487 } ··· 1539 void qlcnic_82xx_read_crb(struct qlcnic_adapter *adapter, char *buf, 1540 loff_t offset, size_t size) 1541 { 1542 + int err = 0; 1543 u32 data; 1544 u64 qmdata; 1545 ··· 1546 qlcnic_pci_camqm_read_2M(adapter, offset, &qmdata); 1547 memcpy(buf, &qmdata, size); 1548 } else { 1549 + data = QLCRD32(adapter, offset, &err); 1550 memcpy(buf, &data, size); 1551 } 1552 }
+1 -1
drivers/net/ethernet/qlogic/qlcnic/qlcnic_hw.h
··· 154 struct qlcnic_adapter; 155 156 int qlcnic_82xx_start_firmware(struct qlcnic_adapter *); 157 - int qlcnic_82xx_hw_read_wx_2M(struct qlcnic_adapter *adapter, ulong); 158 int qlcnic_82xx_hw_write_wx_2M(struct qlcnic_adapter *, ulong, u32); 159 int qlcnic_82xx_config_hw_lro(struct qlcnic_adapter *adapter, int); 160 int qlcnic_82xx_nic_set_promisc(struct qlcnic_adapter *adapter, u32);
··· 154 struct qlcnic_adapter; 155 156 int qlcnic_82xx_start_firmware(struct qlcnic_adapter *); 157 + int qlcnic_82xx_hw_read_wx_2M(struct qlcnic_adapter *adapter, ulong, int *); 158 int qlcnic_82xx_hw_write_wx_2M(struct qlcnic_adapter *, ulong, u32); 159 int qlcnic_82xx_config_hw_lro(struct qlcnic_adapter *adapter, int); 160 int qlcnic_82xx_nic_set_promisc(struct qlcnic_adapter *adapter, u32);
+17 -8
drivers/net/ethernet/qlogic/qlcnic/qlcnic_init.c
··· 286 { 287 long timeout = 0; 288 long done = 0; 289 290 cond_resched(); 291 while (done == 0) { 292 - done = QLCRD32(adapter, QLCNIC_ROMUSB_GLB_STATUS); 293 done &= 2; 294 if (++timeout >= QLCNIC_MAX_ROM_WAIT_USEC) { 295 dev_err(&adapter->pdev->dev, ··· 305 static int do_rom_fast_read(struct qlcnic_adapter *adapter, 306 u32 addr, u32 *valp) 307 { 308 QLCWR32(adapter, QLCNIC_ROMUSB_ROM_ADDRESS, addr); 309 QLCWR32(adapter, QLCNIC_ROMUSB_ROM_DUMMY_BYTE_CNT, 0); 310 QLCWR32(adapter, QLCNIC_ROMUSB_ROM_ABYTE_CNT, 3); ··· 320 udelay(10); 321 QLCWR32(adapter, QLCNIC_ROMUSB_ROM_DUMMY_BYTE_CNT, 0); 322 323 - *valp = QLCRD32(adapter, QLCNIC_ROMUSB_ROM_RDATA); 324 return 0; 325 } 326 ··· 374 375 int qlcnic_pinit_from_rom(struct qlcnic_adapter *adapter) 376 { 377 - int addr, val; 378 int i, n, init_delay; 379 struct crb_addr_pair *buf; 380 unsigned offset; 381 - u32 off; 382 struct pci_dev *pdev = adapter->pdev; 383 384 QLC_SHARED_REG_WR32(adapter, QLCNIC_CMDPEG_STATE, 0); ··· 407 QLCWR32(adapter, QLCNIC_CRB_NIU + 0xb0000, 0x00); 408 409 /* halt sre */ 410 - val = QLCRD32(adapter, QLCNIC_CRB_SRE + 0x1000); 411 QLCWR32(adapter, QLCNIC_CRB_SRE + 0x1000, val & (~(0x1))); 412 413 /* halt epg */ ··· 726 static int 727 qlcnic_has_mn(struct qlcnic_adapter *adapter) 728 { 729 - u32 capability; 730 - capability = 0; 731 732 - capability = QLCRD32(adapter, QLCNIC_PEG_TUNE_CAPABILITY); 733 if (capability & QLCNIC_PEG_TUNE_MN_PRESENT) 734 return 1; 735
··· 286 { 287 long timeout = 0; 288 long done = 0; 289 + int err = 0; 290 291 cond_resched(); 292 while (done == 0) { 293 + done = QLCRD32(adapter, QLCNIC_ROMUSB_GLB_STATUS, &err); 294 done &= 2; 295 if (++timeout >= QLCNIC_MAX_ROM_WAIT_USEC) { 296 dev_err(&adapter->pdev->dev, ··· 304 static int do_rom_fast_read(struct qlcnic_adapter *adapter, 305 u32 addr, u32 *valp) 306 { 307 + int err = 0; 308 + 309 QLCWR32(adapter, QLCNIC_ROMUSB_ROM_ADDRESS, addr); 310 QLCWR32(adapter, QLCNIC_ROMUSB_ROM_DUMMY_BYTE_CNT, 0); 311 QLCWR32(adapter, QLCNIC_ROMUSB_ROM_ABYTE_CNT, 3); ··· 317 udelay(10); 318 QLCWR32(adapter, QLCNIC_ROMUSB_ROM_DUMMY_BYTE_CNT, 0); 319 320 + *valp = QLCRD32(adapter, QLCNIC_ROMUSB_ROM_RDATA, &err); 321 + if (err == -EIO) 322 + return err; 323 return 0; 324 } 325 ··· 369 370 int qlcnic_pinit_from_rom(struct qlcnic_adapter *adapter) 371 { 372 + int addr, err = 0; 373 int i, n, init_delay; 374 struct crb_addr_pair *buf; 375 unsigned offset; 376 + u32 off, val; 377 struct pci_dev *pdev = adapter->pdev; 378 379 QLC_SHARED_REG_WR32(adapter, QLCNIC_CMDPEG_STATE, 0); ··· 402 QLCWR32(adapter, QLCNIC_CRB_NIU + 0xb0000, 0x00); 403 404 /* halt sre */ 405 + val = QLCRD32(adapter, QLCNIC_CRB_SRE + 0x1000, &err); 406 + if (err == -EIO) 407 + return err; 408 QLCWR32(adapter, QLCNIC_CRB_SRE + 0x1000, val & (~(0x1))); 409 410 /* halt epg */ ··· 719 static int 720 qlcnic_has_mn(struct qlcnic_adapter *adapter) 721 { 722 + u32 capability = 0; 723 + int err = 0; 724 725 + capability = QLCRD32(adapter, QLCNIC_PEG_TUNE_CAPABILITY, &err); 726 + if (err == -EIO) 727 + return err; 728 if (capability & QLCNIC_PEG_TUNE_MN_PRESENT) 729 return 1; 730
+67 -34
drivers/net/ethernet/qlogic/qlcnic/qlcnic_io.c
··· 161 return (qlcnic_get_sts_status(sts_data) == STATUS_CKSUM_LOOP) ? 1 : 0; 162 } 163 164 void qlcnic_add_lb_filter(struct qlcnic_adapter *adapter, struct sk_buff *skb, 165 int loopback_pkt, u16 vlan_id) 166 { 167 struct ethhdr *phdr = (struct ethhdr *)(skb->data); 168 struct qlcnic_filter *fil, *tmp_fil; 169 - struct hlist_node *n; 170 struct hlist_head *head; 171 unsigned long time; 172 u64 src_addr = 0; 173 - u8 hindex, found = 0, op; 174 int ret; 175 176 memcpy(&src_addr, phdr->h_source, ETH_ALEN); 177 178 if (loopback_pkt) { 179 if (adapter->rx_fhash.fnum >= adapter->rx_fhash.fmax) 180 return; 181 182 - hindex = qlcnic_mac_hash(src_addr) & 183 - (adapter->fhash.fbucket_size - 1); 184 head = &(adapter->rx_fhash.fhead[hindex]); 185 186 - hlist_for_each_entry_safe(tmp_fil, n, head, fnode) { 187 - if (!memcmp(tmp_fil->faddr, &src_addr, ETH_ALEN) && 188 - tmp_fil->vlan_id == vlan_id) { 189 - time = tmp_fil->ftime; 190 - if (jiffies > (QLCNIC_READD_AGE * HZ + time)) 191 - tmp_fil->ftime = jiffies; 192 - return; 193 - } 194 } 195 196 fil = kzalloc(sizeof(struct qlcnic_filter), GFP_ATOMIC); ··· 237 adapter->rx_fhash.fnum++; 238 spin_unlock(&adapter->rx_mac_learn_lock); 239 } else { 240 - hindex = qlcnic_mac_hash(src_addr) & 241 - (adapter->fhash.fbucket_size - 1); 242 - head = &(adapter->rx_fhash.fhead[hindex]); 243 - spin_lock(&adapter->rx_mac_learn_lock); 244 - hlist_for_each_entry_safe(tmp_fil, n, head, fnode) { 245 - if (!memcmp(tmp_fil->faddr, &src_addr, ETH_ALEN) && 246 - tmp_fil->vlan_id == vlan_id) { 247 - found = 1; 248 - break; 249 - } 250 - } 251 252 - if (!found) { 253 - spin_unlock(&adapter->rx_mac_learn_lock); 254 - return; 255 - } 256 257 - op = vlan_id ? QLCNIC_MAC_VLAN_ADD : QLCNIC_MAC_ADD; 258 - ret = qlcnic_sre_macaddr_change(adapter, (u8 *)&src_addr, 259 - vlan_id, op); 260 - if (!ret) { 261 op = vlan_id ? QLCNIC_MAC_VLAN_DEL : QLCNIC_MAC_DEL; 262 ret = qlcnic_sre_macaddr_change(adapter, 263 (u8 *)&src_addr, 264 vlan_id, op); 265 if (!ret) { 266 - hlist_del(&(tmp_fil->fnode)); 267 - adapter->rx_fhash.fnum--; 268 } 269 } 270 spin_unlock(&adapter->rx_mac_learn_lock); 271 } 272 } ··· 295 296 mac_req = (struct qlcnic_mac_req *)&(req->words[0]); 297 mac_req->op = vlan_id ? QLCNIC_MAC_VLAN_ADD : QLCNIC_MAC_ADD; 298 - memcpy(mac_req->mac_addr, &uaddr, ETH_ALEN); 299 300 vlan_req = (struct qlcnic_vlan_req *)&req->words[1]; 301 vlan_req->vlan_id = cpu_to_le16(vlan_id);
··· 161 return (qlcnic_get_sts_status(sts_data) == STATUS_CKSUM_LOOP) ? 1 : 0; 162 } 163 164 + static void qlcnic_delete_rx_list_mac(struct qlcnic_adapter *adapter, 165 + struct qlcnic_filter *fil, 166 + void *addr, u16 vlan_id) 167 + { 168 + int ret; 169 + u8 op; 170 + 171 + op = vlan_id ? QLCNIC_MAC_VLAN_ADD : QLCNIC_MAC_ADD; 172 + ret = qlcnic_sre_macaddr_change(adapter, addr, vlan_id, op); 173 + if (ret) 174 + return; 175 + 176 + op = vlan_id ? QLCNIC_MAC_VLAN_DEL : QLCNIC_MAC_DEL; 177 + ret = qlcnic_sre_macaddr_change(adapter, addr, vlan_id, op); 178 + if (!ret) { 179 + hlist_del(&fil->fnode); 180 + adapter->rx_fhash.fnum--; 181 + } 182 + } 183 + 184 + static struct qlcnic_filter *qlcnic_find_mac_filter(struct hlist_head *head, 185 + void *addr, u16 vlan_id) 186 + { 187 + struct qlcnic_filter *tmp_fil = NULL; 188 + struct hlist_node *n; 189 + 190 + hlist_for_each_entry_safe(tmp_fil, n, head, fnode) { 191 + if (!memcmp(tmp_fil->faddr, addr, ETH_ALEN) && 192 + tmp_fil->vlan_id == vlan_id) 193 + return tmp_fil; 194 + } 195 + 196 + return NULL; 197 + } 198 + 199 void qlcnic_add_lb_filter(struct qlcnic_adapter *adapter, struct sk_buff *skb, 200 int loopback_pkt, u16 vlan_id) 201 { 202 struct ethhdr *phdr = (struct ethhdr *)(skb->data); 203 struct qlcnic_filter *fil, *tmp_fil; 204 struct hlist_head *head; 205 unsigned long time; 206 u64 src_addr = 0; 207 + u8 hindex, op; 208 int ret; 209 210 memcpy(&src_addr, phdr->h_source, ETH_ALEN); 211 + hindex = qlcnic_mac_hash(src_addr) & 212 + (adapter->fhash.fbucket_size - 1); 213 214 if (loopback_pkt) { 215 if (adapter->rx_fhash.fnum >= adapter->rx_fhash.fmax) 216 return; 217 218 head = &(adapter->rx_fhash.fhead[hindex]); 219 220 + tmp_fil = qlcnic_find_mac_filter(head, &src_addr, vlan_id); 221 + if (tmp_fil) { 222 + time = tmp_fil->ftime; 223 + if (time_after(jiffies, QLCNIC_READD_AGE * HZ + time)) 224 + tmp_fil->ftime = jiffies; 225 + return; 226 } 227 228 fil = kzalloc(sizeof(struct qlcnic_filter), GFP_ATOMIC); ··· 205 adapter->rx_fhash.fnum++; 206 spin_unlock(&adapter->rx_mac_learn_lock); 207 } else { 208 + head = &adapter->fhash.fhead[hindex]; 209 210 + spin_lock(&adapter->mac_learn_lock); 211 212 + tmp_fil = qlcnic_find_mac_filter(head, &src_addr, vlan_id); 213 + if (tmp_fil) { 214 op = vlan_id ? QLCNIC_MAC_VLAN_DEL : QLCNIC_MAC_DEL; 215 ret = qlcnic_sre_macaddr_change(adapter, 216 (u8 *)&src_addr, 217 vlan_id, op); 218 if (!ret) { 219 + hlist_del(&tmp_fil->fnode); 220 + adapter->fhash.fnum--; 221 } 222 + 223 + spin_unlock(&adapter->mac_learn_lock); 224 + 225 + return; 226 } 227 + 228 + spin_unlock(&adapter->mac_learn_lock); 229 + 230 + head = &adapter->rx_fhash.fhead[hindex]; 231 + 232 + spin_lock(&adapter->rx_mac_learn_lock); 233 + 234 + tmp_fil = qlcnic_find_mac_filter(head, &src_addr, vlan_id); 235 + if (tmp_fil) 236 + qlcnic_delete_rx_list_mac(adapter, tmp_fil, &src_addr, 237 + vlan_id); 238 + 239 spin_unlock(&adapter->rx_mac_learn_lock); 240 } 241 } ··· 262 263 mac_req = (struct qlcnic_mac_req *)&(req->words[0]); 264 mac_req->op = vlan_id ? QLCNIC_MAC_VLAN_ADD : QLCNIC_MAC_ADD; 265 + memcpy(mac_req->mac_addr, uaddr, ETH_ALEN); 266 267 vlan_req = (struct qlcnic_vlan_req *)&req->words[1]; 268 vlan_req->vlan_id = cpu_to_le16(vlan_id);
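Buried in this refactor is a small correctness fix: the filter-aging check moves from a raw comparison, jiffies > (QLCNIC_READD_AGE * HZ + time), to time_after(), which remains correct when the jiffies counter wraps. A minimal illustration — filter_needs_readd() is hypothetical:

/* time_after(a, b) compares via signed subtraction, so it is
 * wrap-safe while the two stamps are within ~LONG_MAX ticks of
 * each other; a plain '>' is not.
 */
static bool filter_needs_readd(unsigned long ftime)
{
	return time_after(jiffies, ftime + QLCNIC_READD_AGE * HZ);
}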
+11 -8
drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c
··· 977 static int 978 qlcnic_initialize_nic(struct qlcnic_adapter *adapter) 979 { 980 - int err; 981 struct qlcnic_info nic_info; 982 983 memset(&nic_info, 0, sizeof(struct qlcnic_info)); 984 err = qlcnic_get_nic_info(adapter, &nic_info, adapter->ahw->pci_func); ··· 993 994 if (adapter->ahw->capabilities & QLCNIC_FW_CAPABILITY_MORE_CAPS) { 995 u32 temp; 996 - temp = QLCRD32(adapter, CRB_FW_CAPABILITIES_2); 997 adapter->ahw->extra_capability[0] = temp; 998 } 999 adapter->ahw->max_mac_filters = nic_info.max_mac_filters; ··· 2143 if (qlcnic_83xx_check(adapter) && !qlcnic_use_msi_x && 2144 !!qlcnic_use_msi) 2145 dev_warn(&pdev->dev, 2146 - "83xx adapter do not support MSI interrupts\n"); 2147 2148 err = qlcnic_setup_intr(adapter, 0); 2149 if (err) { ··· 3097 { 3098 u32 state = 0, heartbeat; 3099 u32 peg_status; 3100 3101 if (qlcnic_check_temp(adapter)) 3102 goto detach; ··· 3144 "PEG_NET_4_PC: 0x%x\n", 3145 peg_status, 3146 QLC_SHARED_REG_RD32(adapter, QLCNIC_PEG_HALT_STATUS2), 3147 - QLCRD32(adapter, QLCNIC_CRB_PEG_NET_0 + 0x3c), 3148 - QLCRD32(adapter, QLCNIC_CRB_PEG_NET_1 + 0x3c), 3149 - QLCRD32(adapter, QLCNIC_CRB_PEG_NET_2 + 0x3c), 3150 - QLCRD32(adapter, QLCNIC_CRB_PEG_NET_3 + 0x3c), 3151 - QLCRD32(adapter, QLCNIC_CRB_PEG_NET_4 + 0x3c)); 3152 if (QLCNIC_FWERROR_CODE(peg_status) == 0x67) 3153 dev_err(&adapter->pdev->dev, 3154 "Firmware aborted with error code 0x00006700. "
··· 977 static int 978 qlcnic_initialize_nic(struct qlcnic_adapter *adapter) 979 { 980 struct qlcnic_info nic_info; 981 + int err = 0; 982 983 memset(&nic_info, 0, sizeof(struct qlcnic_info)); 984 err = qlcnic_get_nic_info(adapter, &nic_info, adapter->ahw->pci_func); ··· 993 994 if (adapter->ahw->capabilities & QLCNIC_FW_CAPABILITY_MORE_CAPS) { 995 u32 temp; 996 + temp = QLCRD32(adapter, CRB_FW_CAPABILITIES_2, &err); 997 + if (err == -EIO) 998 + return err; 999 adapter->ahw->extra_capability[0] = temp; 1000 } 1001 adapter->ahw->max_mac_filters = nic_info.max_mac_filters; ··· 2141 if (qlcnic_83xx_check(adapter) && !qlcnic_use_msi_x && 2142 !!qlcnic_use_msi) 2143 dev_warn(&pdev->dev, 2144 + "Device does not support MSI interrupts\n"); 2145 2146 err = qlcnic_setup_intr(adapter, 0); 2147 if (err) { ··· 3095 { 3096 u32 state = 0, heartbeat; 3097 u32 peg_status; 3098 + int err = 0; 3099 3100 if (qlcnic_check_temp(adapter)) 3101 goto detach; ··· 3141 "PEG_NET_4_PC: 0x%x\n", 3142 peg_status, 3143 QLC_SHARED_REG_RD32(adapter, QLCNIC_PEG_HALT_STATUS2), 3144 + QLCRD32(adapter, QLCNIC_CRB_PEG_NET_0 + 0x3c, &err), 3145 + QLCRD32(adapter, QLCNIC_CRB_PEG_NET_1 + 0x3c, &err), 3146 + QLCRD32(adapter, QLCNIC_CRB_PEG_NET_2 + 0x3c, &err), 3147 + QLCRD32(adapter, QLCNIC_CRB_PEG_NET_3 + 0x3c, &err), 3148 + QLCRD32(adapter, QLCNIC_CRB_PEG_NET_4 + 0x3c, &err)); 3149 if (QLCNIC_FWERROR_CODE(peg_status) == 0x67) 3150 dev_err(&adapter->pdev->dev, 3151 "Firmware aborted with error code 0x00006700. "
+1 -1
drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_common.c
··· 562 INIT_LIST_HEAD(&adapter->vf_mc_list); 563 if (!qlcnic_use_msi_x && !!qlcnic_use_msi) 564 dev_warn(&adapter->pdev->dev, 565 - "83xx adapter do not support MSI interrupts\n"); 566 567 err = qlcnic_setup_intr(adapter, 1); 568 if (err) {
··· 562 INIT_LIST_HEAD(&adapter->vf_mc_list); 563 if (!qlcnic_use_msi_x && !!qlcnic_use_msi) 564 dev_warn(&adapter->pdev->dev, 565 + "Device does not support MSI interrupts\n"); 566 567 err = qlcnic_setup_intr(adapter, 1); 568 if (err) {
+45 -3
drivers/net/ethernet/realtek/8139cp.c
··· 478 479 while (1) { 480 u32 status, len; 481 - dma_addr_t mapping; 482 struct sk_buff *skb, *new_skb; 483 struct cp_desc *desc; 484 const unsigned buflen = cp->rx_buf_sz; ··· 520 goto rx_next; 521 } 522 523 dma_unmap_single(&cp->pdev->dev, mapping, 524 buflen, PCI_DMA_FROMDEVICE); 525 ··· 538 539 skb_put(skb, len); 540 541 - mapping = dma_map_single(&cp->pdev->dev, new_skb->data, buflen, 542 - PCI_DMA_FROMDEVICE); 543 cp->rx_skb[rx_tail] = new_skb; 544 545 cp_rx_skb(cp, skb, desc); 546 rx++; 547 548 rx_next: 549 cp->rx_ring[rx_tail].opts2 = 0; ··· 722 TxVlanTag | swab16(vlan_tx_tag_get(skb)) : 0x00; 723 } 724 725 static netdev_tx_t cp_start_xmit (struct sk_buff *skb, 726 struct net_device *dev) 727 { ··· 771 772 len = skb->len; 773 mapping = dma_map_single(&cp->pdev->dev, skb->data, len, PCI_DMA_TODEVICE); 774 txd->opts2 = opts2; 775 txd->addr = cpu_to_le64(mapping); 776 wmb(); ··· 811 first_len = skb_headlen(skb); 812 first_mapping = dma_map_single(&cp->pdev->dev, skb->data, 813 first_len, PCI_DMA_TODEVICE); 814 cp->tx_skb[entry] = skb; 815 entry = NEXT_TX(entry); 816 ··· 827 mapping = dma_map_single(&cp->pdev->dev, 828 skb_frag_address(this_frag), 829 len, PCI_DMA_TODEVICE); 830 eor = (entry == (CP_TX_RING_SIZE - 1)) ? RingEnd : 0; 831 832 ctrl = eor | len | DescOwn; ··· 892 if (TX_BUFFS_AVAIL(cp) <= (MAX_SKB_FRAGS + 1)) 893 netif_stop_queue(dev); 894 895 spin_unlock_irqrestore(&cp->lock, intr_flags); 896 897 cpw8(TxPoll, NormalTxPoll); 898 899 return NETDEV_TX_OK; 900 } 901 902 /* Set or clear the multicast filter for this adaptor. ··· 1092 1093 mapping = dma_map_single(&cp->pdev->dev, skb->data, 1094 cp->rx_buf_sz, PCI_DMA_FROMDEVICE); 1095 cp->rx_skb[i] = skb; 1096 1097 cp->rx_ring[i].opts2 = 0;
··· 478 479 while (1) { 480 u32 status, len; 481 + dma_addr_t mapping, new_mapping; 482 struct sk_buff *skb, *new_skb; 483 struct cp_desc *desc; 484 const unsigned buflen = cp->rx_buf_sz; ··· 520 goto rx_next; 521 } 522 523 + new_mapping = dma_map_single(&cp->pdev->dev, new_skb->data, buflen, 524 + PCI_DMA_FROMDEVICE); 525 + if (dma_mapping_error(&cp->pdev->dev, new_mapping)) { 526 + dev->stats.rx_dropped++; 527 + goto rx_next; 528 + } 529 + 530 dma_unmap_single(&cp->pdev->dev, mapping, 531 buflen, PCI_DMA_FROMDEVICE); 532 ··· 531 532 skb_put(skb, len); 533 534 cp->rx_skb[rx_tail] = new_skb; 535 536 cp_rx_skb(cp, skb, desc); 537 rx++; 538 + mapping = new_mapping; 539 540 rx_next: 541 cp->rx_ring[rx_tail].opts2 = 0; ··· 716 TxVlanTag | swab16(vlan_tx_tag_get(skb)) : 0x00; 717 } 718 719 + static void unwind_tx_frag_mapping(struct cp_private *cp, struct sk_buff *skb, 720 + int first, int entry_last) 721 + { 722 + int frag, index; 723 + struct cp_desc *txd; 724 + skb_frag_t *this_frag; 725 + for (frag = 0; frag+first < entry_last; frag++) { 726 + index = first+frag; 727 + cp->tx_skb[index] = NULL; 728 + txd = &cp->tx_ring[index]; 729 + this_frag = &skb_shinfo(skb)->frags[frag]; 730 + dma_unmap_single(&cp->pdev->dev, le64_to_cpu(txd->addr), 731 + skb_frag_size(this_frag), PCI_DMA_TODEVICE); 732 + } 733 + } 734 + 735 static netdev_tx_t cp_start_xmit (struct sk_buff *skb, 736 struct net_device *dev) 737 { ··· 749 750 len = skb->len; 751 mapping = dma_map_single(&cp->pdev->dev, skb->data, len, PCI_DMA_TODEVICE); 752 + if (dma_mapping_error(&cp->pdev->dev, mapping)) 753 + goto out_dma_error; 754 + 755 txd->opts2 = opts2; 756 txd->addr = cpu_to_le64(mapping); 757 wmb(); ··· 786 first_len = skb_headlen(skb); 787 first_mapping = dma_map_single(&cp->pdev->dev, skb->data, 788 first_len, PCI_DMA_TODEVICE); 789 + if (dma_mapping_error(&cp->pdev->dev, first_mapping)) 790 + goto out_dma_error; 791 + 792 cp->tx_skb[entry] = skb; 793 entry = NEXT_TX(entry); 794 ··· 799 mapping = dma_map_single(&cp->pdev->dev, 800 skb_frag_address(this_frag), 801 len, PCI_DMA_TODEVICE); 802 + if (dma_mapping_error(&cp->pdev->dev, mapping)) { 803 + unwind_tx_frag_mapping(cp, skb, first_entry, entry); 804 + goto out_dma_error; 805 + } 806 + 807 eor = (entry == (CP_TX_RING_SIZE - 1)) ? RingEnd : 0; 808 809 ctrl = eor | len | DescOwn; ··· 859 if (TX_BUFFS_AVAIL(cp) <= (MAX_SKB_FRAGS + 1)) 860 netif_stop_queue(dev); 861 862 + out_unlock: 863 spin_unlock_irqrestore(&cp->lock, intr_flags); 864 865 cpw8(TxPoll, NormalTxPoll); 866 867 return NETDEV_TX_OK; 868 + out_dma_error: 869 + kfree_skb(skb); 870 + cp->dev->stats.tx_dropped++; 871 + goto out_unlock; 872 } 873 874 /* Set or clear the multicast filter for this adaptor. ··· 1054 1055 mapping = dma_map_single(&cp->pdev->dev, skb->data, 1056 cp->rx_buf_sz, PCI_DMA_FROMDEVICE); 1057 + if (dma_mapping_error(&cp->pdev->dev, mapping)) { 1058 + kfree_skb(skb); 1059 + goto err_out; 1060 + } 1061 cp->rx_skb[i] = skb; 1062 1063 cp->rx_ring[i].opts2 = 0;
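The 8139cp RX fix follows the usual map-before-unmap refill ordering: the replacement buffer is mapped and checked with dma_mapping_error() first, and only then is the old mapping torn down, so a mapping failure leaves the old, still-valid buffer in the ring at the cost of one dropped packet. A generic sketch of that ordering — struct ring_slot and rx_refill() are hypothetical, and the modern dma_* API stands in for the PCI wrappers the driver uses:

struct ring_slot {
	struct sk_buff *skb;
	dma_addr_t mapping;
};

/* On mapping failure the old buffer stays in the ring untouched. */
static int rx_refill(struct device *dev, struct ring_slot *slot,
		     struct sk_buff *new_skb, unsigned int buflen)
{
	dma_addr_t new_mapping;

	new_mapping = dma_map_single(dev, new_skb->data, buflen,
				     DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, new_mapping))
		return -ENOMEM;

	dma_unmap_single(dev, slot->mapping, buflen, DMA_FROM_DEVICE);
	slot->skb = new_skb;
	slot->mapping = new_mapping;
	return 0;
}

The TX side gets the matching treatment: every dma_map_single() result is checked, and unwind_tx_frag_mapping() backs out any fragment mappings already made before the skb is dropped.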
+1 -1
drivers/net/ethernet/realtek/r8169.c
··· 3689 if (tp->link_ok(ioaddr)) 3690 return; 3691 3692 - netif_warn(tp, link, tp->dev, "PHY reset until link up\n"); 3693 3694 tp->phy_reset_enable(tp); 3695
··· 3689 if (tp->link_ok(ioaddr)) 3690 return; 3691 3692 + netif_dbg(tp, link, tp->dev, "PHY reset until link up\n"); 3693 3694 tp->phy_reset_enable(tp); 3695
+2 -10
drivers/net/ethernet/sis/sis900.c
··· 1318 if (duplex){ 1319 sis900_set_mode(sis_priv, speed, duplex); 1320 sis630_set_eq(net_dev, sis_priv->chipset_rev); 1321 - netif_start_queue(net_dev); 1322 } 1323 1324 sis_priv->timer.expires = jiffies + HZ; ··· 1336 status = sis900_default_phy(net_dev); 1337 mii_phy = sis_priv->mii; 1338 1339 - if (status & MII_STAT_LINK){ 1340 sis900_check_mode(net_dev, mii_phy); 1341 - netif_carrier_on(net_dev); 1342 - } 1343 } else { 1344 /* Link ON -> OFF */ 1345 if (!(status & MII_STAT_LINK)){ ··· 1609 unsigned long flags; 1610 unsigned int index_cur_tx, index_dirty_tx; 1611 unsigned int count_dirty_tx; 1612 - 1613 - /* Don't transmit data before the complete of auto-negotiation */ 1614 - if(!sis_priv->autong_complete){ 1615 - netif_stop_queue(net_dev); 1616 - return NETDEV_TX_BUSY; 1617 - } 1618 1619 spin_lock_irqsave(&sis_priv->lock, flags); 1620
··· 1318 if (duplex){ 1319 sis900_set_mode(sis_priv, speed, duplex); 1320 sis630_set_eq(net_dev, sis_priv->chipset_rev); 1321 + netif_carrier_on(net_dev); 1322 } 1323 1324 sis_priv->timer.expires = jiffies + HZ; ··· 1336 status = sis900_default_phy(net_dev); 1337 mii_phy = sis_priv->mii; 1338 1339 + if (status & MII_STAT_LINK) 1340 sis900_check_mode(net_dev, mii_phy); 1341 } else { 1342 /* Link ON -> OFF */ 1343 if (!(status & MII_STAT_LINK)){ ··· 1611 unsigned long flags; 1612 unsigned int index_cur_tx, index_dirty_tx; 1613 unsigned int count_dirty_tx; 1614 1615 spin_lock_irqsave(&sis_priv->lock, flags); 1616
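The sis900 fix trades queue control for carrier control: stopping the TX queue until autonegotiation completed (and returning NETDEV_TX_BUSY meanwhile) left the queue stopped long enough for the netdev watchdog to declare a timeout. Reporting link state with netif_carrier_on()/netif_carrier_off() instead lets the core suppress the watchdog while the carrier is down. The general shape, with check_link() as a hypothetical stand-in for the driver's periodic link poll:

static void poll_link(struct net_device *dev)
{
	if (check_link(dev)) {
		if (!netif_carrier_ok(dev))
			netif_carrier_on(dev);	/* watchdog armed again */
	} else {
		if (netif_carrier_ok(dev))
			netif_carrier_off(dev);	/* watchdog suppressed */
	}
}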
+1 -1
drivers/net/ethernet/ti/cpsw.c
··· 1867 1868 while ((res = platform_get_resource(priv->pdev, IORESOURCE_IRQ, k))) { 1869 for (i = res->start; i <= res->end; i++) { 1870 - if (request_irq(i, cpsw_interrupt, IRQF_DISABLED, 1871 dev_name(&pdev->dev), priv)) { 1872 dev_err(priv->dev, "error attaching irq\n"); 1873 goto clean_ale_ret;
··· 1867 1868 while ((res = platform_get_resource(priv->pdev, IORESOURCE_IRQ, k))) { 1869 for (i = res->start; i <= res->end; i++) { 1870 + if (request_irq(i, cpsw_interrupt, 0, 1871 dev_name(&pdev->dev), priv)) { 1872 dev_err(priv->dev, "error attaching irq\n"); 1873 goto clean_ale_ret;
+1 -2
drivers/net/ethernet/ti/davinci_emac.c
··· 1568 while ((res = platform_get_resource(priv->pdev, IORESOURCE_IRQ, k))) { 1569 for (i = res->start; i <= res->end; i++) { 1570 if (devm_request_irq(&priv->pdev->dev, i, emac_irq, 1571 - IRQF_DISABLED, 1572 - ndev->name, ndev)) 1573 goto rollback; 1574 } 1575 k++;
··· 1568 while ((res = platform_get_resource(priv->pdev, IORESOURCE_IRQ, k))) { 1569 for (i = res->start; i <= res->end; i++) { 1570 if (devm_request_irq(&priv->pdev->dev, i, emac_irq, 1571 + 0, ndev->name, ndev)) 1572 goto rollback; 1573 } 1574 k++;
+19 -4
drivers/net/macvlan.c
··· 337 int err; 338 339 if (vlan->port->passthru) { 340 - if (!(vlan->flags & MACVLAN_FLAG_NOPROMISC)) 341 - dev_set_promiscuity(lowerdev, 1); 342 goto hash_add; 343 } 344 ··· 866 struct nlattr *tb[], struct nlattr *data[]) 867 { 868 struct macvlan_dev *vlan = netdev_priv(dev); 869 870 if (data && data[IFLA_MACVLAN_FLAGS]) { 871 __u16 flags = nla_get_u16(data[IFLA_MACVLAN_FLAGS]); ··· 894 } 895 vlan->flags = flags; 896 } 897 - if (data && data[IFLA_MACVLAN_MODE]) 898 - vlan->mode = nla_get_u32(data[IFLA_MACVLAN_MODE]); 899 return 0; 900 } 901
··· 337 int err; 338 339 if (vlan->port->passthru) { 340 + if (!(vlan->flags & MACVLAN_FLAG_NOPROMISC)) { 341 + err = dev_set_promiscuity(lowerdev, 1); 342 + if (err < 0) 343 + goto out; 344 + } 345 goto hash_add; 346 } 347 ··· 863 struct nlattr *tb[], struct nlattr *data[]) 864 { 865 struct macvlan_dev *vlan = netdev_priv(dev); 866 + enum macvlan_mode mode; 867 + bool set_mode = false; 868 + 869 + /* Validate mode, but don't set yet: setting flags may fail. */ 870 + if (data && data[IFLA_MACVLAN_MODE]) { 871 + set_mode = true; 872 + mode = nla_get_u32(data[IFLA_MACVLAN_MODE]); 873 + /* Passthrough mode can't be set or cleared dynamically */ 874 + if ((mode == MACVLAN_MODE_PASSTHRU) != 875 + (vlan->mode == MACVLAN_MODE_PASSTHRU)) 876 + return -EINVAL; 877 + } 878 879 if (data && data[IFLA_MACVLAN_FLAGS]) { 880 __u16 flags = nla_get_u16(data[IFLA_MACVLAN_FLAGS]); ··· 879 } 880 vlan->flags = flags; 881 } 882 + if (set_mode) 883 + vlan->mode = mode; 884 return 0; 885 } 886
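macvlan_changelink() is reworked into the usual validate-then-commit netlink shape: the requested mode is parsed and checked first (including the rule that passthru can be neither set nor cleared on a live device), the fallible flags update runs next, and the mode is assigned only once nothing can fail, so a rejected request leaves the device untouched. Schematically — every name below is hypothetical:

static int changelink(struct my_dev *p, const struct my_req *req)
{
	int err;

	/* 1. validate everything up front */
	if (req->has_mode && !mode_change_allowed(p->mode, req->mode))
		return -EINVAL;

	/* 2. apply the parts that can fail */
	if (req->has_flags) {
		err = apply_flags(p, req->flags);
		if (err)
			return err;	/* nothing committed yet */
	}

	/* 3. commit the parts that cannot fail */
	if (req->has_mode)
		p->mode = req->mode;
	return 0;
}

The macvlan_open() hunk is the companion fix: dev_set_promiscuity() on the lower device can fail, and its return value is now propagated instead of being ignored.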
+58 -68
drivers/net/usb/r8152.c
··· 344 static 345 int get_registers(struct r8152 *tp, u16 value, u16 index, u16 size, void *data) 346 { 347 - return usb_control_msg(tp->udev, usb_rcvctrlpipe(tp->udev, 0), 348 RTL8152_REQ_GET_REGS, RTL8152_REQT_READ, 349 - value, index, data, size, 500); 350 } 351 352 static 353 int set_registers(struct r8152 *tp, u16 value, u16 index, u16 size, void *data) 354 { 355 - return usb_control_msg(tp->udev, usb_sndctrlpipe(tp->udev, 0), 356 RTL8152_REQ_SET_REGS, RTL8152_REQT_WRITE, 357 - value, index, data, size, 500); 358 } 359 360 static int generic_ocp_read(struct r8152 *tp, u16 index, u16 size, ··· 514 515 static u32 ocp_read_dword(struct r8152 *tp, u16 type, u16 index) 516 { 517 - u32 data; 518 519 - if (type == MCU_TYPE_PLA) 520 - pla_ocp_read(tp, index, sizeof(data), &data); 521 - else 522 - usb_ocp_read(tp, index, sizeof(data), &data); 523 524 return __le32_to_cpu(data); 525 } 526 527 static void ocp_write_dword(struct r8152 *tp, u16 type, u16 index, u32 data) 528 { 529 - if (type == MCU_TYPE_PLA) 530 - pla_ocp_write(tp, index, BYTE_EN_DWORD, sizeof(data), &data); 531 - else 532 - usb_ocp_write(tp, index, BYTE_EN_DWORD, sizeof(data), &data); 533 } 534 535 static u16 ocp_read_word(struct r8152 *tp, u16 type, u16 index) 536 { 537 u32 data; 538 u8 shift = index & 2; 539 540 index &= ~3; 541 542 - if (type == MCU_TYPE_PLA) 543 - pla_ocp_read(tp, index, sizeof(data), &data); 544 - else 545 - usb_ocp_read(tp, index, sizeof(data), &data); 546 547 - data = __le32_to_cpu(data); 548 data >>= (shift * 8); 549 data &= 0xffff; 550 ··· 547 548 static void ocp_write_word(struct r8152 *tp, u16 type, u16 index, u32 data) 549 { 550 - u32 tmp, mask = 0xffff; 551 u16 byen = BYTE_EN_WORD; 552 u8 shift = index & 2; 553 ··· 561 index &= ~3; 562 } 563 564 - if (type == MCU_TYPE_PLA) 565 - pla_ocp_read(tp, index, sizeof(tmp), &tmp); 566 - else 567 - usb_ocp_read(tp, index, sizeof(tmp), &tmp); 568 569 - tmp = __le32_to_cpu(tmp) & ~mask; 570 - tmp |= data; 571 - tmp = __cpu_to_le32(tmp); 572 573 - if (type == MCU_TYPE_PLA) 574 - pla_ocp_write(tp, index, byen, sizeof(tmp), &tmp); 575 - else 576 - usb_ocp_write(tp, index, byen, sizeof(tmp), &tmp); 577 } 578 579 static u8 ocp_read_byte(struct r8152 *tp, u16 type, u16 index) 580 { 581 u32 data; 582 u8 shift = index & 3; 583 584 index &= ~3; 585 586 - if (type == MCU_TYPE_PLA) 587 - pla_ocp_read(tp, index, sizeof(data), &data); 588 - else 589 - usb_ocp_read(tp, index, sizeof(data), &data); 590 591 - data = __le32_to_cpu(data); 592 data >>= (shift * 8); 593 data &= 0xff; 594 ··· 588 589 static void ocp_write_byte(struct r8152 *tp, u16 type, u16 index, u32 data) 590 { 591 - u32 tmp, mask = 0xff; 592 u16 byen = BYTE_EN_BYTE; 593 u8 shift = index & 3; 594 ··· 602 index &= ~3; 603 } 604 605 - if (type == MCU_TYPE_PLA) 606 - pla_ocp_read(tp, index, sizeof(tmp), &tmp); 607 - else 608 - usb_ocp_read(tp, index, sizeof(tmp), &tmp); 609 610 - tmp = __le32_to_cpu(tmp) & ~mask; 611 - tmp |= data; 612 - tmp = __cpu_to_le32(tmp); 613 614 - if (type == MCU_TYPE_PLA) 615 - pla_ocp_write(tp, index, byen, sizeof(tmp), &tmp); 616 - else 617 - usb_ocp_write(tp, index, byen, sizeof(tmp), &tmp); 618 } 619 620 static void r8152_mdio_write(struct r8152 *tp, u32 reg_addr, u32 value) ··· 689 static inline void set_ethernet_addr(struct r8152 *tp) 690 { 691 struct net_device *dev = tp->netdev; 692 - u8 *node_id; 693 694 - node_id = kmalloc(sizeof(u8) * 8, GFP_KERNEL); 695 - if (!node_id) { 696 - netif_err(tp, probe, dev, "out of memory"); 697 - return; 698 - } 699 - 700 - if (pla_ocp_read(tp, 
PLA_IDR, sizeof(u8) * 8, node_id) < 0) 701 netif_notice(tp, probe, dev, "inet addr fail\n"); 702 else { 703 memcpy(dev->dev_addr, node_id, dev->addr_len); 704 memcpy(dev->perm_addr, dev->dev_addr, dev->addr_len); 705 } 706 - kfree(node_id); 707 } 708 709 static int rtl8152_set_mac_address(struct net_device *netdev, void *p) ··· 879 static void _rtl8152_set_rx_mode(struct net_device *netdev) 880 { 881 struct r8152 *tp = netdev_priv(netdev); 882 - u32 tmp, *mc_filter; /* Multicast hash filter */ 883 u32 ocp_data; 884 - 885 - mc_filter = kmalloc(sizeof(u32) * 2, GFP_KERNEL); 886 - if (!mc_filter) { 887 - netif_err(tp, link, netdev, "out of memory"); 888 - return; 889 - } 890 891 clear_bit(RTL8152_SET_RX_MODE, &tp->flags); 892 netif_stop_queue(netdev); ··· 910 } 911 } 912 913 - tmp = mc_filter[0]; 914 - mc_filter[0] = __cpu_to_le32(swab32(mc_filter[1])); 915 - mc_filter[1] = __cpu_to_le32(swab32(tmp)); 916 917 - pla_ocp_write(tp, PLA_MAR, BYTE_EN_DWORD, sizeof(u32) * 2, mc_filter); 918 ocp_write_dword(tp, MCU_TYPE_PLA, PLA_RCR, ocp_data); 919 netif_wake_queue(netdev); 920 - kfree(mc_filter); 921 } 922 923 static netdev_tx_t rtl8152_start_xmit(struct sk_buff *skb,
··· 344 static 345 int get_registers(struct r8152 *tp, u16 value, u16 index, u16 size, void *data) 346 { 347 + int ret; 348 + void *tmp; 349 + 350 + tmp = kmalloc(size, GFP_KERNEL); 351 + if (!tmp) 352 + return -ENOMEM; 353 + 354 + ret = usb_control_msg(tp->udev, usb_rcvctrlpipe(tp->udev, 0), 355 RTL8152_REQ_GET_REGS, RTL8152_REQT_READ, 356 + value, index, tmp, size, 500); 357 + 358 + memcpy(data, tmp, size); 359 + kfree(tmp); 360 + 361 + return ret; 362 } 363 364 static 365 int set_registers(struct r8152 *tp, u16 value, u16 index, u16 size, void *data) 366 { 367 + int ret; 368 + void *tmp; 369 + 370 + tmp = kmalloc(size, GFP_KERNEL); 371 + if (!tmp) 372 + return -ENOMEM; 373 + 374 + memcpy(tmp, data, size); 375 + 376 + ret = usb_control_msg(tp->udev, usb_sndctrlpipe(tp->udev, 0), 377 RTL8152_REQ_SET_REGS, RTL8152_REQT_WRITE, 378 + value, index, tmp, size, 500); 379 + 380 + kfree(tmp); 381 + return ret; 382 } 383 384 static int generic_ocp_read(struct r8152 *tp, u16 index, u16 size, ··· 490 491 static u32 ocp_read_dword(struct r8152 *tp, u16 type, u16 index) 492 { 493 + __le32 data; 494 495 + generic_ocp_read(tp, index, sizeof(data), &data, type); 496 497 return __le32_to_cpu(data); 498 } 499 500 static void ocp_write_dword(struct r8152 *tp, u16 type, u16 index, u32 data) 501 { 502 + __le32 tmp = __cpu_to_le32(data); 503 + 504 + generic_ocp_write(tp, index, BYTE_EN_DWORD, sizeof(tmp), &tmp, type); 505 } 506 507 static u16 ocp_read_word(struct r8152 *tp, u16 type, u16 index) 508 { 509 u32 data; 510 + __le32 tmp; 511 u8 shift = index & 2; 512 513 index &= ~3; 514 515 + generic_ocp_read(tp, index, sizeof(tmp), &tmp, type); 516 517 + data = __le32_to_cpu(tmp); 518 data >>= (shift * 8); 519 data &= 0xffff; 520 ··· 529 530 static void ocp_write_word(struct r8152 *tp, u16 type, u16 index, u32 data) 531 { 532 + u32 mask = 0xffff; 533 + __le32 tmp; 534 u16 byen = BYTE_EN_WORD; 535 u8 shift = index & 2; 536 ··· 542 index &= ~3; 543 } 544 545 + generic_ocp_read(tp, index, sizeof(tmp), &tmp, type); 546 547 + data |= __le32_to_cpu(tmp) & ~mask; 548 + tmp = __cpu_to_le32(data); 549 550 + generic_ocp_write(tp, index, byen, sizeof(tmp), &tmp, type); 551 } 552 553 static u8 ocp_read_byte(struct r8152 *tp, u16 type, u16 index) 554 { 555 u32 data; 556 + __le32 tmp; 557 u8 shift = index & 3; 558 559 index &= ~3; 560 561 + generic_ocp_read(tp, index, sizeof(tmp), &tmp, type); 562 563 + data = __le32_to_cpu(tmp); 564 data >>= (shift * 8); 565 data &= 0xff; 566 ··· 578 579 static void ocp_write_byte(struct r8152 *tp, u16 type, u16 index, u32 data) 580 { 581 + u32 mask = 0xff; 582 + __le32 tmp; 583 u16 byen = BYTE_EN_BYTE; 584 u8 shift = index & 3; 585 ··· 591 index &= ~3; 592 } 593 594 + generic_ocp_read(tp, index, sizeof(tmp), &tmp, type); 595 596 + data |= __le32_to_cpu(tmp) & ~mask; 597 + tmp = __cpu_to_le32(data); 598 599 + generic_ocp_write(tp, index, byen, sizeof(tmp), &tmp, type); 600 } 601 602 static void r8152_mdio_write(struct r8152 *tp, u32 reg_addr, u32 value) ··· 685 static inline void set_ethernet_addr(struct r8152 *tp) 686 { 687 struct net_device *dev = tp->netdev; 688 + u8 node_id[8] = {0}; 689 690 + if (pla_ocp_read(tp, PLA_IDR, sizeof(node_id), node_id) < 0) 691 netif_notice(tp, probe, dev, "inet addr fail\n"); 692 else { 693 memcpy(dev->dev_addr, node_id, dev->addr_len); 694 memcpy(dev->perm_addr, dev->dev_addr, dev->addr_len); 695 } 696 } 697 698 static int rtl8152_set_mac_address(struct net_device *netdev, void *p) ··· 882 static void _rtl8152_set_rx_mode(struct net_device *netdev) 883 { 884 
struct r8152 *tp = netdev_priv(netdev); 885 + u32 mc_filter[2]; /* Multicast hash filter */ 886 + __le32 tmp[2]; 887 u32 ocp_data; 888 889 clear_bit(RTL8152_SET_RX_MODE, &tp->flags); 890 netif_stop_queue(netdev); ··· 918 } 919 } 920 921 + tmp[0] = __cpu_to_le32(swab32(mc_filter[1])); 922 + tmp[1] = __cpu_to_le32(swab32(mc_filter[0])); 923 924 + pla_ocp_write(tp, PLA_MAR, BYTE_EN_DWORD, sizeof(tmp), tmp); 925 ocp_write_dword(tp, MCU_TYPE_PLA, PLA_RCR, ocp_data); 926 netif_wake_queue(netdev); 927 } 928 929 static netdev_tx_t rtl8152_start_xmit(struct sk_buff *skb,
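All of the r8152 churn serves one rule: a buffer handed to usb_control_msg() is DMA-mapped by the USB core, and stack memory is not guaranteed to be DMA-safe — it can share cache lines with adjacent stack data and on some architectures cannot be DMA-mapped at all. Hence the kmalloc'ed bounce buffers in get_registers()/set_registers(); node_id[] and mc_filter[] can move onto the stack precisely because the bouncing now happens once, inside those two helpers. Condensed to its essence (a sketch of the read side, not the full function):

/* Read size bytes of registers via a heap bounce buffer. */
static int usb_read_reg(struct usb_device *udev, u16 value,
			u16 index, void *data, u16 size)
{
	void *tmp = kmalloc(size, GFP_KERNEL);
	int ret;

	if (!tmp)
		return -ENOMEM;

	ret = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
			      RTL8152_REQ_GET_REGS, RTL8152_REQT_READ,
			      value, index, tmp, size, 500);
	if (ret > 0)
		memcpy(data, tmp, size);

	kfree(tmp);
	return ret;
}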
+42 -20
drivers/net/usb/r815x.c
··· 24 25 static int pla_read_word(struct usb_device *udev, u16 index) 26 { 27 - int data, ret; 28 u8 shift = index & 2; 29 - __le32 ocp_data; 30 31 index &= ~3; 32 33 ret = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0), 34 RTL815x_REQ_GET_REGS, RTL815x_REQT_READ, 35 - index, MCU_TYPE_PLA, &ocp_data, sizeof(ocp_data), 36 - 500); 37 if (ret < 0) 38 - return ret; 39 40 - data = __le32_to_cpu(ocp_data); 41 - data >>= (shift * 8); 42 - data &= 0xffff; 43 44 - return data; 45 } 46 47 static int pla_write_word(struct usb_device *udev, u16 index, u32 data) 48 { 49 - __le32 ocp_data; 50 u32 mask = 0xffff; 51 u16 byen = BYTE_EN_WORD; 52 u8 shift = index & 2; 53 int ret; 54 55 data &= mask; 56 ··· 72 73 ret = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0), 74 RTL815x_REQ_GET_REGS, RTL815x_REQT_READ, 75 - index, MCU_TYPE_PLA, &ocp_data, sizeof(ocp_data), 76 - 500); 77 if (ret < 0) 78 - return ret; 79 80 - data |= __le32_to_cpu(ocp_data) & ~mask; 81 - ocp_data = __cpu_to_le32(data); 82 83 ret = usb_control_msg(udev, usb_sndctrlpipe(udev, 0), 84 RTL815x_REQ_SET_REGS, RTL815x_REQT_WRITE, 85 - index, MCU_TYPE_PLA | byen, &ocp_data, 86 - sizeof(ocp_data), 500); 87 88 return ret; 89 } 90 ··· 126 static int r815x_mdio_read(struct net_device *netdev, int phy_id, int reg) 127 { 128 struct usbnet *dev = netdev_priv(netdev); 129 130 if (phy_id != R815x_PHY_ID) 131 return -EINVAL; 132 133 - return ocp_reg_read(dev, BASE_MII + reg * 2); 134 } 135 136 static ··· 148 if (phy_id != R815x_PHY_ID) 149 return; 150 151 ocp_reg_write(dev, BASE_MII + reg * 2, val); 152 } 153 154 static int r8153_bind(struct usbnet *dev, struct usb_interface *intf) ··· 172 dev->mii.phy_id = R815x_PHY_ID; 173 dev->mii.supports_gmii = 1; 174 175 - return 0; 176 } 177 178 static int r8152_bind(struct usbnet *dev, struct usb_interface *intf) ··· 191 dev->mii.phy_id = R815x_PHY_ID; 192 dev->mii.supports_gmii = 0; 193 194 - return 0; 195 } 196 197 static const struct driver_info r8152_info = {
··· 24 25 static int pla_read_word(struct usb_device *udev, u16 index) 26 { 27 + int ret; 28 u8 shift = index & 2; 29 + __le32 *tmp; 30 + 31 + tmp = kmalloc(sizeof(*tmp), GFP_KERNEL); 32 + if (!tmp) 33 + return -ENOMEM; 34 35 index &= ~3; 36 37 ret = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0), 38 RTL815x_REQ_GET_REGS, RTL815x_REQT_READ, 39 + index, MCU_TYPE_PLA, tmp, sizeof(*tmp), 500); 40 if (ret < 0) 41 + goto out2; 42 43 + ret = __le32_to_cpu(*tmp); 44 + ret >>= (shift * 8); 45 + ret &= 0xffff; 46 47 + out2: 48 + kfree(tmp); 49 + return ret; 50 } 51 52 static int pla_write_word(struct usb_device *udev, u16 index, u32 data) 53 { 54 + __le32 *tmp; 55 u32 mask = 0xffff; 56 u16 byen = BYTE_EN_WORD; 57 u8 shift = index & 2; 58 int ret; 59 + 60 + tmp = kmalloc(sizeof(*tmp), GFP_KERNEL); 61 + if (!tmp) 62 + return -ENOMEM; 63 64 data &= mask; 65 ··· 63 64 ret = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0), 65 RTL815x_REQ_GET_REGS, RTL815x_REQT_READ, 66 + index, MCU_TYPE_PLA, tmp, sizeof(*tmp), 500); 67 if (ret < 0) 68 + goto out3; 69 70 + data |= __le32_to_cpu(*tmp) & ~mask; 71 + *tmp = __cpu_to_le32(data); 72 73 ret = usb_control_msg(udev, usb_sndctrlpipe(udev, 0), 74 RTL815x_REQ_SET_REGS, RTL815x_REQT_WRITE, 75 + index, MCU_TYPE_PLA | byen, tmp, sizeof(*tmp), 76 + 500); 77 78 + out3: 79 + kfree(tmp); 80 return ret; 81 } 82 ··· 116 static int r815x_mdio_read(struct net_device *netdev, int phy_id, int reg) 117 { 118 struct usbnet *dev = netdev_priv(netdev); 119 + int ret; 120 121 if (phy_id != R815x_PHY_ID) 122 return -EINVAL; 123 124 + if (usb_autopm_get_interface(dev->intf) < 0) 125 + return -ENODEV; 126 + 127 + ret = ocp_reg_read(dev, BASE_MII + reg * 2); 128 + 129 + usb_autopm_put_interface(dev->intf); 130 + return ret; 131 } 132 133 static ··· 131 if (phy_id != R815x_PHY_ID) 132 return; 133 134 + if (usb_autopm_get_interface(dev->intf) < 0) 135 + return; 136 + 137 ocp_reg_write(dev, BASE_MII + reg * 2, val); 138 + 139 + usb_autopm_put_interface(dev->intf); 140 } 141 142 static int r8153_bind(struct usbnet *dev, struct usb_interface *intf) ··· 150 dev->mii.phy_id = R815x_PHY_ID; 151 dev->mii.supports_gmii = 1; 152 153 + return status; 154 } 155 156 static int r8152_bind(struct usbnet *dev, struct usb_interface *intf) ··· 169 dev->mii.phy_id = R815x_PHY_ID; 170 dev->mii.supports_gmii = 0; 171 172 + return status; 173 } 174 175 static const struct driver_info r8152_info = {
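Two separate fixes here: the same heap-bounce-buffer rule for usb_control_msg() as in r8152.c, plus runtime-PM handling — the MDIO ops can be invoked while the interface is autosuspended, so they now take a reference with usb_autopm_get_interface() (which resumes the device or fails) and release it with usb_autopm_put_interface() afterwards. The canonical wrap, condensed from the diff:

	if (usb_autopm_get_interface(dev->intf) < 0)
		return -ENODEV;		/* device stayed suspended */

	ret = ocp_reg_read(dev, BASE_MII + reg * 2);

	usb_autopm_put_interface(dev->intf);
	return ret;

The bind hunks also stop hardcoding "return 0", propagating the actual bind status instead.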
+1 -1
drivers/net/wireless/ath/ath10k/Kconfig
··· 1 config ATH10K 2 tristate "Atheros 802.11ac wireless cards support" 3 - depends on MAC80211 4 select ATH_COMMON 5 ---help--- 6 This module adds support for wireless adapters based on
··· 1 config ATH10K 2 tristate "Atheros 802.11ac wireless cards support" 3 + depends on MAC80211 && HAS_DMA 4 select ATH_COMMON 5 ---help--- 6 This module adds support for wireless adapters based on
+4 -1
drivers/net/wireless/brcm80211/brcmfmac/wl_cfg80211.c
··· 1093 brcmf_dbg(INFO, "Call WLC_DISASSOC to stop excess roaming\n "); 1094 err = brcmf_fil_cmd_data_set(vif->ifp, 1095 BRCMF_C_DISASSOC, NULL, 0); 1096 - if (err) 1097 brcmf_err("WLC_DISASSOC failed (%d)\n", err); 1098 clear_bit(BRCMF_VIF_STATUS_CONNECTED, &vif->sme_state); 1099 } 1100 clear_bit(BRCMF_VIF_STATUS_CONNECTING, &vif->sme_state);
··· 1093 brcmf_dbg(INFO, "Call WLC_DISASSOC to stop excess roaming\n "); 1094 err = brcmf_fil_cmd_data_set(vif->ifp, 1095 BRCMF_C_DISASSOC, NULL, 0); 1096 + if (err) { 1097 brcmf_err("WLC_DISASSOC failed (%d)\n", err); 1098 + cfg80211_disconnected(vif->wdev.netdev, 0, 1099 + NULL, 0, GFP_KERNEL); 1100 + } 1101 clear_bit(BRCMF_VIF_STATUS_CONNECTED, &vif->sme_state); 1102 } 1103 clear_bit(BRCMF_VIF_STATUS_CONNECTING, &vif->sme_state);
+2
drivers/net/wireless/iwlwifi/iwl-prph.h
··· 97 98 #define APMG_PCIDEV_STT_VAL_L1_ACT_DIS (0x00000800) 99 100 /* Device system time */ 101 #define DEVICE_SYSTEM_TIME_REG 0xA0206C 102
··· 97 98 #define APMG_PCIDEV_STT_VAL_L1_ACT_DIS (0x00000800) 99 100 + #define APMG_RTC_INT_STT_RFKILL (0x10000000) 101 + 102 /* Device system time */ 103 #define DEVICE_SYSTEM_TIME_REG 0xA0206C 104
+10 -5
drivers/net/wireless/iwlwifi/mvm/d3.c
··· 134 struct iwl_wowlan_rsc_tsc_params_cmd *rsc_tsc; 135 struct iwl_wowlan_tkip_params_cmd *tkip; 136 bool error, use_rsc_tsc, use_tkip; 137 - int gtk_key_idx; 138 }; 139 140 static void iwl_mvm_wowlan_program_keys(struct ieee80211_hw *hw, ··· 188 wkc.wep_key.key_offset = 0; 189 } else { 190 /* others start at 1 */ 191 - data->gtk_key_idx++; 192 - wkc.wep_key.key_offset = data->gtk_key_idx; 193 } 194 195 ret = iwl_mvm_send_cmd_pdu(mvm, WEP_KEY, CMD_SYNC, ··· 316 mvm->ptk_ivlen = key->iv_len; 317 mvm->ptk_icvlen = key->icv_len; 318 } else { 319 - data->gtk_key_idx++; 320 - key->hw_key_idx = data->gtk_key_idx; 321 mvm->gtk_ivlen = key->iv_len; 322 mvm->gtk_icvlen = key->icv_len; 323 }
··· 134 struct iwl_wowlan_rsc_tsc_params_cmd *rsc_tsc; 135 struct iwl_wowlan_tkip_params_cmd *tkip; 136 bool error, use_rsc_tsc, use_tkip; 137 + int wep_key_idx; 138 }; 139 140 static void iwl_mvm_wowlan_program_keys(struct ieee80211_hw *hw, ··· 188 wkc.wep_key.key_offset = 0; 189 } else { 190 /* others start at 1 */ 191 + data->wep_key_idx++; 192 + wkc.wep_key.key_offset = data->wep_key_idx; 193 } 194 195 ret = iwl_mvm_send_cmd_pdu(mvm, WEP_KEY, CMD_SYNC, ··· 316 mvm->ptk_ivlen = key->iv_len; 317 mvm->ptk_icvlen = key->icv_len; 318 } else { 319 + /* 320 + * firmware only supports TSC/RSC for a single key, 321 + * so if there are multiple keep overwriting them 322 + * with new ones -- this relies on mac80211 doing 323 + * list_add_tail(). 324 + */ 325 + key->hw_key_idx = 1; 326 mvm->gtk_ivlen = key->iv_len; 327 mvm->gtk_icvlen = key->icv_len; 328 }
-1
drivers/net/wireless/iwlwifi/mvm/fw-api-scan.h
··· 69 /* Scan Commands, Responses, Notifications */ 70 71 /* Masks for iwl_scan_channel.type flags */ 72 - #define SCAN_CHANNEL_TYPE_PASSIVE 0 73 #define SCAN_CHANNEL_TYPE_ACTIVE BIT(0) 74 #define SCAN_CHANNEL_NARROW_BAND BIT(22) 75
··· 69 /* Scan Commands, Responses, Notifications */ 70 71 /* Masks for iwl_scan_channel.type flags */ 72 #define SCAN_CHANNEL_TYPE_ACTIVE BIT(0) 73 #define SCAN_CHANNEL_NARROW_BAND BIT(22) 74
+21 -21
drivers/net/wireless/iwlwifi/mvm/mac80211.c
··· 512 goto out_unlock; 513 514 /* 515 * The AP binding flow can be done only after the beacon 516 * template is configured (which happens only in the mac80211 517 * start_ap() flow), and adding the broadcast station can happen ··· 553 } 554 555 goto out_unlock; 556 - } 557 - 558 - /* 559 - * TODO: remove this temporary code. 560 - * Currently MVM FW supports power management only on single MAC. 561 - * If new interface added, disable PM on existing interface. 562 - * P2P device is a special case, since it is handled by FW similary to 563 - * scan. If P2P deviced is added, PM remains enabled on existing 564 - * interface. 565 - * Note: the method below does not count the new interface being added 566 - * at this moment. 567 - */ 568 - if (vif->type != NL80211_IFTYPE_P2P_DEVICE) 569 - mvm->vif_count++; 570 - if (mvm->vif_count > 1) { 571 - IWL_DEBUG_MAC80211(mvm, 572 - "Disable power on existing interfaces\n"); 573 - ieee80211_iterate_active_interfaces_atomic( 574 - mvm->hw, 575 - IEEE80211_IFACE_ITER_NORMAL, 576 - iwl_mvm_pm_disable_iterator, mvm); 577 } 578 579 ret = iwl_mvm_mac_ctxt_add(mvm, vif);
··· 512 goto out_unlock; 513 514 /* 515 + * TODO: remove this temporary code. 516 + * Currently MVM FW supports power management only on single MAC. 517 + * If new interface added, disable PM on existing interface. 518 + * P2P device is a special case, since it is handled by FW similary to 519 + * scan. If P2P deviced is added, PM remains enabled on existing 520 + * interface. 521 + * Note: the method below does not count the new interface being added 522 + * at this moment. 523 + */ 524 + if (vif->type != NL80211_IFTYPE_P2P_DEVICE) 525 + mvm->vif_count++; 526 + if (mvm->vif_count > 1) { 527 + IWL_DEBUG_MAC80211(mvm, 528 + "Disable power on existing interfaces\n"); 529 + ieee80211_iterate_active_interfaces_atomic( 530 + mvm->hw, 531 + IEEE80211_IFACE_ITER_NORMAL, 532 + iwl_mvm_pm_disable_iterator, mvm); 533 + } 534 + 535 + /* 536 * The AP binding flow can be done only after the beacon 537 * template is configured (which happens only in the mac80211 538 * start_ap() flow), and adding the broadcast station can happen ··· 532 } 533 534 goto out_unlock; 535 } 536 537 ret = iwl_mvm_mac_ctxt_add(mvm, vif);
+2 -9
drivers/net/wireless/iwlwifi/mvm/scan.c
··· 178 struct iwl_scan_channel *chan = (struct iwl_scan_channel *) 179 (cmd->data + le16_to_cpu(cmd->tx_cmd.len)); 180 int i; 181 - __le32 chan_type_value; 182 - 183 - if (req->n_ssids > 0) 184 - chan_type_value = cpu_to_le32(BIT(req->n_ssids) - 1); 185 - else 186 - chan_type_value = SCAN_CHANNEL_TYPE_PASSIVE; 187 188 for (i = 0; i < cmd->channel_count; i++) { 189 chan->channel = cpu_to_le16(req->channels[i]->hw_value); 190 if (req->channels[i]->flags & IEEE80211_CHAN_PASSIVE_SCAN) 191 - chan->type = SCAN_CHANNEL_TYPE_PASSIVE; 192 - else 193 - chan->type = chan_type_value; 194 chan->active_dwell = cpu_to_le16(active_dwell); 195 chan->passive_dwell = cpu_to_le16(passive_dwell); 196 chan->iteration_count = cpu_to_le16(1);
··· 178 struct iwl_scan_channel *chan = (struct iwl_scan_channel *) 179 (cmd->data + le16_to_cpu(cmd->tx_cmd.len)); 180 int i; 181 182 for (i = 0; i < cmd->channel_count; i++) { 183 chan->channel = cpu_to_le16(req->channels[i]->hw_value); 184 + chan->type = cpu_to_le32(BIT(req->n_ssids) - 1); 185 if (req->channels[i]->flags & IEEE80211_CHAN_PASSIVE_SCAN) 186 + chan->type &= cpu_to_le32(~SCAN_CHANNEL_TYPE_ACTIVE); 187 chan->active_dwell = cpu_to_le16(active_dwell); 188 chan->passive_dwell = cpu_to_le16(passive_dwell); 189 chan->iteration_count = cpu_to_le16(1);
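With SCAN_CHANNEL_TYPE_PASSIVE removed from fw-api-scan.h above, the channel type field is always built as the per-SSID bitmap, and a passive channel now clears only the ACTIVE bit instead of zeroing the whole field, so the SSID-selection bits survive. A worked example of the masks, assuming a request with two SSIDs:

	u32 n_ssids = 2;
	u32 active  = BIT(n_ssids) - 1;			  /* 0x3 */
	u32 passive = active & ~SCAN_CHANNEL_TYPE_ACTIVE; /* 0x2 */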
+8 -3
drivers/net/wireless/iwlwifi/mvm/sta.c
··· 915 struct iwl_mvm_sta *mvmsta = (void *)sta->drv_priv; 916 struct iwl_mvm_tid_data *tid_data = &mvmsta->tid_data[tid]; 917 u16 txq_id; 918 919 /* 920 * First set the agg state to OFF to avoid calling ··· 925 txq_id = tid_data->txq_id; 926 IWL_DEBUG_TX_QUEUES(mvm, "Flush AGG: sta %d tid %d q %d state %d\n", 927 mvmsta->sta_id, tid, txq_id, tid_data->state); 928 tid_data->state = IWL_AGG_OFF; 929 spin_unlock_bh(&mvmsta->lock); 930 931 - if (iwl_mvm_flush_tx_path(mvm, BIT(txq_id), true)) 932 - IWL_ERR(mvm, "Couldn't flush the AGG queue\n"); 933 934 - iwl_trans_txq_disable(mvm->trans, tid_data->txq_id); 935 mvm->queue_to_mac80211[tid_data->txq_id] = 936 IWL_INVALID_MAC80211_QUEUE; 937
··· 915 struct iwl_mvm_sta *mvmsta = (void *)sta->drv_priv; 916 struct iwl_mvm_tid_data *tid_data = &mvmsta->tid_data[tid]; 917 u16 txq_id; 918 + enum iwl_mvm_agg_state old_state; 919 920 /* 921 * First set the agg state to OFF to avoid calling ··· 924 txq_id = tid_data->txq_id; 925 IWL_DEBUG_TX_QUEUES(mvm, "Flush AGG: sta %d tid %d q %d state %d\n", 926 mvmsta->sta_id, tid, txq_id, tid_data->state); 927 + old_state = tid_data->state; 928 tid_data->state = IWL_AGG_OFF; 929 spin_unlock_bh(&mvmsta->lock); 930 931 + if (old_state >= IWL_AGG_ON) { 932 + if (iwl_mvm_flush_tx_path(mvm, BIT(txq_id), true)) 933 + IWL_ERR(mvm, "Couldn't flush the AGG queue\n"); 934 935 + iwl_trans_txq_disable(mvm->trans, tid_data->txq_id); 936 + } 937 + 938 mvm->queue_to_mac80211[tid_data->txq_id] = 939 IWL_INVALID_MAC80211_QUEUE; 940
+1
drivers/net/wireless/iwlwifi/pcie/drv.c
··· 130 {IWL_PCI_DEVICE(0x423C, 0x1306, iwl5150_abg_cfg)}, /* Half Mini Card */ 131 {IWL_PCI_DEVICE(0x423C, 0x1221, iwl5150_agn_cfg)}, /* Mini Card */ 132 {IWL_PCI_DEVICE(0x423C, 0x1321, iwl5150_agn_cfg)}, /* Half Mini Card */ 133 134 {IWL_PCI_DEVICE(0x423D, 0x1211, iwl5150_agn_cfg)}, /* Mini Card */ 135 {IWL_PCI_DEVICE(0x423D, 0x1311, iwl5150_agn_cfg)}, /* Half Mini Card */
··· 130 {IWL_PCI_DEVICE(0x423C, 0x1306, iwl5150_abg_cfg)}, /* Half Mini Card */ 131 {IWL_PCI_DEVICE(0x423C, 0x1221, iwl5150_agn_cfg)}, /* Mini Card */ 132 {IWL_PCI_DEVICE(0x423C, 0x1321, iwl5150_agn_cfg)}, /* Half Mini Card */ 133 + {IWL_PCI_DEVICE(0x423C, 0x1326, iwl5150_abg_cfg)}, /* Half Mini Card */ 134 135 {IWL_PCI_DEVICE(0x423D, 0x1211, iwl5150_agn_cfg)}, /* Mini Card */ 136 {IWL_PCI_DEVICE(0x423D, 0x1311, iwl5150_agn_cfg)}, /* Half Mini Card */
+8
drivers/net/wireless/iwlwifi/pcie/rx.c
··· 888 889 iwl_op_mode_hw_rf_kill(trans->op_mode, hw_rfkill); 890 if (hw_rfkill) { 891 set_bit(STATUS_RFKILL, &trans_pcie->status); 892 if (test_and_clear_bit(STATUS_HCMD_ACTIVE, 893 &trans_pcie->status))
··· 888 889 iwl_op_mode_hw_rf_kill(trans->op_mode, hw_rfkill); 890 if (hw_rfkill) { 891 + /* 892 + * Clear the interrupt in APMG if the NIC is going down. 893 + * Note that when the NIC exits RFkill (else branch), we 894 + * can't access prph and the NIC will be reset in 895 + * start_hw anyway. 896 + */ 897 + iwl_write_prph(trans, APMG_RTC_INT_STT_REG, 898 + APMG_RTC_INT_STT_RFKILL); 899 set_bit(STATUS_RFKILL, &trans_pcie->status); 900 if (test_and_clear_bit(STATUS_HCMD_ACTIVE, 901 &trans_pcie->status))
+5
drivers/net/wireless/iwlwifi/pcie/trans.c
··· 670 return err; 671 } 672 673 iwl_pcie_apm_init(trans); 674 675 /* From now on, the op_mode will be kept updated about RF kill state */
··· 670 return err; 671 } 672 673 + /* Reset the entire device */ 674 + iwl_set_bit(trans, CSR_RESET, CSR_RESET_REG_FLAG_SW_RESET); 675 + 676 + usleep_range(10, 15); 677 + 678 iwl_pcie_apm_init(trans); 679 680 /* From now on, the op_mode will be kept updated about RF kill state */
+2 -2
drivers/net/wireless/mwifiex/cfg80211.c
··· 1716 struct mwifiex_private *priv = mwifiex_netdev_get_priv(dev); 1717 int ret; 1718 1719 - if (priv->bss_mode != NL80211_IFTYPE_STATION) { 1720 wiphy_err(wiphy, 1721 - "%s: reject infra assoc request in non-STA mode\n", 1722 dev->name); 1723 return -EINVAL; 1724 }
··· 1716 struct mwifiex_private *priv = mwifiex_netdev_get_priv(dev); 1717 int ret; 1718 1719 + if (GET_BSS_ROLE(priv) != MWIFIEX_BSS_ROLE_STA) { 1720 wiphy_err(wiphy, 1721 + "%s: reject infra assoc request in non-STA role\n", 1722 dev->name); 1723 return -EINVAL; 1724 }
+2 -1
drivers/net/wireless/mwifiex/cfp.c
··· 415 u32 k = 0; 416 struct mwifiex_adapter *adapter = priv->adapter; 417 418 - if (priv->bss_mode == NL80211_IFTYPE_STATION) { 419 switch (adapter->config_bands) { 420 case BAND_B: 421 dev_dbg(adapter->dev, "info: infra band=%d "
··· 415 u32 k = 0; 416 struct mwifiex_adapter *adapter = priv->adapter; 417 418 + if (priv->bss_mode == NL80211_IFTYPE_STATION || 419 + priv->bss_mode == NL80211_IFTYPE_P2P_CLIENT) { 420 switch (adapter->config_bands) { 421 case BAND_B: 422 dev_dbg(adapter->dev, "info: infra band=%d "
+4 -2
drivers/net/wireless/mwifiex/join.c
··· 1291 { 1292 u8 current_bssid[ETH_ALEN]; 1293 1294 - /* Return error if the adapter or table entry is not marked as infra */ 1295 - if ((priv->bss_mode != NL80211_IFTYPE_STATION) || 1296 (bss_desc->bss_mode != NL80211_IFTYPE_STATION)) 1297 return -1; 1298
··· 1291 { 1292 u8 current_bssid[ETH_ALEN]; 1293 1294 + /* Return error if the adapter is not STA role or table entry 1295 + * is not marked as infra. 1296 + */ 1297 + if ((GET_BSS_ROLE(priv) != MWIFIEX_BSS_ROLE_STA) || 1298 (bss_desc->bss_mode != NL80211_IFTYPE_STATION)) 1299 return -1; 1300
+2 -2
drivers/net/wireless/mwifiex/sdio.c
··· 1639 /* Allocate buffer and copy payload */ 1640 blk_size = MWIFIEX_SDIO_BLOCK_SIZE; 1641 buf_block_len = (pkt_len + blk_size - 1) / blk_size; 1642 - *(u16 *) &payload[0] = (u16) pkt_len; 1643 - *(u16 *) &payload[2] = type; 1644 1645 /* 1646 * This is SDIO specific header
··· 1639 /* Allocate buffer and copy payload */ 1640 blk_size = MWIFIEX_SDIO_BLOCK_SIZE; 1641 buf_block_len = (pkt_len + blk_size - 1) / blk_size; 1642 + *(__le16 *)&payload[0] = cpu_to_le16((u16)pkt_len); 1643 + *(__le16 *)&payload[2] = cpu_to_le16(type); 1644 1645 /* 1646 * This is SDIO specific header
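The two rewritten stores above are the whole fix: assigning through a (u16 *) cast writes host byte order, so the SDIO header would come out byte-swapped on big-endian machines. A minimal userspace sketch of the same idea (put_le16() is an invented stand-in for the kernel's cpu_to_le16() plus __le16 store):

#include <stdint.h>
#include <stdio.h>

/* Store low byte first, so the layout is little-endian regardless of
 * the host byte order. */
static void put_le16(uint8_t *p, uint16_t v)
{
        p[0] = (uint8_t)(v & 0xff);
        p[1] = (uint8_t)(v >> 8);
}

int main(void)
{
        uint8_t hdr[4];
        uint16_t pkt_len = 0x0123, type = 2;

        /* Like the fixed code: explicit LE stores instead of
         * *(u16 *)p = v, which would emit 01 23 on a big-endian CPU. */
        put_le16(&hdr[0], pkt_len);
        put_le16(&hdr[2], type);

        printf("%02x %02x %02x %02x\n", hdr[0], hdr[1], hdr[2], hdr[3]);
        return 0;       /* prints "23 01 02 00" on any host */
}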
+2 -2
drivers/net/wireless/mwifiex/sta_ioctl.c
··· 257 goto done; 258 } 259 260 - if (priv->bss_mode == NL80211_IFTYPE_STATION) { 261 u8 config_bands; 262 263 - /* Infra mode */ 264 ret = mwifiex_deauthenticate(priv, NULL); 265 if (ret) 266 goto done;
··· 257 goto done; 258 } 259 260 + if (priv->bss_mode == NL80211_IFTYPE_STATION || 261 + priv->bss_mode == NL80211_IFTYPE_P2P_CLIENT) { 262 u8 config_bands; 263 264 ret = mwifiex_deauthenticate(priv, NULL); 265 if (ret) 266 goto done;
+11 -7
drivers/net/wireless/rt2x00/rt2x00queue.c
··· 936 spin_unlock_irqrestore(&queue->index_lock, irqflags); 937 } 938 939 - void rt2x00queue_pause_queue(struct data_queue *queue) 940 { 941 - if (!test_bit(DEVICE_STATE_PRESENT, &queue->rt2x00dev->flags) || 942 - !test_bit(QUEUE_STARTED, &queue->flags) || 943 - test_and_set_bit(QUEUE_PAUSED, &queue->flags)) 944 - return; 945 - 946 switch (queue->qid) { 947 case QID_AC_VO: 948 case QID_AC_VI: ··· 952 default: 953 break; 954 } 955 } 956 EXPORT_SYMBOL_GPL(rt2x00queue_pause_queue); 957 ··· 1023 return; 1024 } 1025 1026 - rt2x00queue_pause_queue(queue); 1027 1028 queue->rt2x00dev->ops->lib->stop_queue(queue); 1029
··· 936 spin_unlock_irqrestore(&queue->index_lock, irqflags); 937 } 938 939 + void rt2x00queue_pause_queue_nocheck(struct data_queue *queue) 940 { 941 switch (queue->qid) { 942 case QID_AC_VO: 943 case QID_AC_VI: ··· 957 default: 958 break; 959 } 960 + } 961 + void rt2x00queue_pause_queue(struct data_queue *queue) 962 + { 963 + if (!test_bit(DEVICE_STATE_PRESENT, &queue->rt2x00dev->flags) || 964 + !test_bit(QUEUE_STARTED, &queue->flags) || 965 + test_and_set_bit(QUEUE_PAUSED, &queue->flags)) 966 + return; 967 + 968 + rt2x00queue_pause_queue_nocheck(queue); 969 } 970 EXPORT_SYMBOL_GPL(rt2x00queue_pause_queue); 971 ··· 1019 return; 1020 } 1021 1022 + rt2x00queue_pause_queue_nocheck(queue); 1023 1024 queue->rt2x00dev->ops->lib->stop_queue(queue); 1025
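The rt2x00 hunk separates the guarded entry point from the raw pause operation so that rt2x00queue_stop_queue() can pause the queue even when the QUEUE_PAUSED bit is already set. A rough userspace sketch of that *_nocheck split, all names invented:

#include <stdbool.h>
#include <stdio.h>

static bool paused;

/* Raw operation: acts unconditionally, like the new
 * rt2x00queue_pause_queue_nocheck(). */
static void pause_queue_nocheck(void)
{
        printf("queue paused\n");
}

/* Public entry point keeps the state checks. */
static void pause_queue(void)
{
        if (paused)
                return;
        paused = true;
        pause_queue_nocheck();
}

/* Stop must pause even if someone already set the flag. */
static void stop_queue(void)
{
        pause_queue_nocheck();
        printf("queue stopped\n");
}

int main(void)
{
        pause_queue();
        pause_queue();          /* second call is a no-op */
        stop_queue();           /* still pauses unconditionally */
        return 0;
}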
+1 -1
include/linux/netdevice.h
··· 973 gfp_t gfp); 974 void (*ndo_netpoll_cleanup)(struct net_device *dev); 975 #endif 976 - #ifdef CONFIG_NET_LL_RX_POLL 977 int (*ndo_busy_poll)(struct napi_struct *dev); 978 #endif 979 int (*ndo_set_vf_mac)(struct net_device *dev,
··· 973 gfp_t gfp); 974 void (*ndo_netpoll_cleanup)(struct net_device *dev); 975 #endif 976 + #ifdef CONFIG_NET_RX_BUSY_POLL 977 int (*ndo_busy_poll)(struct napi_struct *dev); 978 #endif 979 int (*ndo_set_vf_mac)(struct net_device *dev,
+1 -1
include/linux/skbuff.h
··· 501 /* 7/9 bit hole (depending on ndisc_nodetype presence) */ 502 kmemcheck_bitfield_end(flags2); 503 504 - #if defined CONFIG_NET_DMA || defined CONFIG_NET_LL_RX_POLL 505 union { 506 unsigned int napi_id; 507 dma_cookie_t dma_cookie;
··· 501 /* 7/9 bit hole (depending on ndisc_nodetype presence) */ 502 kmemcheck_bitfield_end(flags2); 503 504 + #if defined CONFIG_NET_DMA || defined CONFIG_NET_RX_BUSY_POLL 505 union { 506 unsigned int napi_id; 507 dma_cookie_t dma_cookie;
+8 -3
include/net/busy_poll.h
··· 27 #include <linux/netdevice.h> 28 #include <net/ip.h> 29 30 - #ifdef CONFIG_NET_LL_RX_POLL 31 32 struct napi_struct; 33 extern unsigned int sysctl_net_busy_read __read_mostly; ··· 146 sk->sk_napi_id = skb->napi_id; 147 } 148 149 - #else /* CONFIG_NET_LL_RX_POLL */ 150 static inline unsigned long net_busy_loop_on(void) 151 { 152 return 0; ··· 181 return true; 182 } 183 184 - #endif /* CONFIG_NET_LL_RX_POLL */ 185 #endif /* _LINUX_NET_BUSY_POLL_H */
··· 27 #include <linux/netdevice.h> 28 #include <net/ip.h> 29 30 + #ifdef CONFIG_NET_RX_BUSY_POLL 31 32 struct napi_struct; 33 extern unsigned int sysctl_net_busy_read __read_mostly; ··· 146 sk->sk_napi_id = skb->napi_id; 147 } 148 149 + #else /* CONFIG_NET_RX_BUSY_POLL */ 150 static inline unsigned long net_busy_loop_on(void) 151 { 152 return 0; ··· 181 return true; 182 } 183 184 + static inline bool sk_busy_loop(struct sock *sk, int nonblock) 185 + { 186 + return false; 187 + } 188 + 189 + #endif /* CONFIG_NET_RX_BUSY_POLL */ 190 #endif /* _LINUX_NET_BUSY_POLL_H */
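The new sk_busy_loop() stub rounds out the usual compiled-out-feature pattern: for !CONFIG_NET_RX_BUSY_POLL the header provides inline no-ops with the real signatures, so call sites build without their own #ifdefs (the build breakage in item 11 of the summary). A self-contained sketch of the pattern, with a made-up FEATURE_X standing in for the Kconfig symbol:

#include <stdbool.h>
#include <stdio.h>

#ifdef FEATURE_X                        /* stands in for the config option */
bool feature_x_poll(int fd);            /* real version lives elsewhere */
#else
/* Compiled-out case: an inline no-op with the same signature, the role
 * played by the sk_busy_loop() stub above. */
static inline bool feature_x_poll(int fd)
{
        (void)fd;
        return false;                   /* compiled out: never makes progress */
}
#endif

int main(void)
{
        /* The caller is identical whether or not FEATURE_X is set. */
        printf("made progress: %d\n", feature_x_poll(0));
        return 0;
}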
+1 -1
include/net/ip6_fib.h
··· 300 struct nl_info *info); 301 302 extern void fib6_run_gc(unsigned long expires, 303 - struct net *net); 304 305 extern void fib6_gc_cleanup(void); 306
··· 300 struct nl_info *info); 301 302 extern void fib6_run_gc(unsigned long expires, 303 + struct net *net, bool force); 304 305 extern void fib6_gc_cleanup(void); 306
+1 -1
include/net/ndisc.h
··· 119 * if RFC 3831 IPv6-over-Fibre Channel is ever implemented it may 120 * also need a pad of 2. 121 */ 122 - static int ndisc_addr_option_pad(unsigned short type) 123 { 124 switch (type) { 125 case ARPHRD_INFINIBAND: return 2;
··· 119 * if RFC 3831 IPv6-over-Fibre Channel is ever implemented it may 120 * also need a pad of 2. 121 */ 122 + static inline int ndisc_addr_option_pad(unsigned short type) 123 { 124 switch (type) { 125 case ARPHRD_INFINIBAND: return 2;
+1 -1
include/net/nfc/hci.h
··· 59 struct nfc_target *target); 60 int (*event_received)(struct nfc_hci_dev *hdev, u8 gate, u8 event, 61 struct sk_buff *skb); 62 - int (*fw_upload)(struct nfc_hci_dev *hdev, const char *firmware_name); 63 int (*discover_se)(struct nfc_hci_dev *dev); 64 int (*enable_se)(struct nfc_hci_dev *dev, u32 se_idx); 65 int (*disable_se)(struct nfc_hci_dev *dev, u32 se_idx);
··· 59 struct nfc_target *target); 60 int (*event_received)(struct nfc_hci_dev *hdev, u8 gate, u8 event, 61 struct sk_buff *skb); 62 + int (*fw_download)(struct nfc_hci_dev *hdev, const char *firmware_name); 63 int (*discover_se)(struct nfc_hci_dev *dev); 64 int (*enable_se)(struct nfc_hci_dev *dev, u32 se_idx); 65 int (*disable_se)(struct nfc_hci_dev *dev, u32 se_idx);
+2 -2
include/net/nfc/nfc.h
··· 68 void *cb_context); 69 int (*tm_send)(struct nfc_dev *dev, struct sk_buff *skb); 70 int (*check_presence)(struct nfc_dev *dev, struct nfc_target *target); 71 - int (*fw_upload)(struct nfc_dev *dev, const char *firmware_name); 72 73 /* Secure Element API */ 74 int (*discover_se)(struct nfc_dev *dev); ··· 127 int targets_generation; 128 struct device dev; 129 bool dev_up; 130 - bool fw_upload_in_progress; 131 u8 rf_mode; 132 bool polling; 133 struct nfc_target *active_target;
··· 68 void *cb_context); 69 int (*tm_send)(struct nfc_dev *dev, struct sk_buff *skb); 70 int (*check_presence)(struct nfc_dev *dev, struct nfc_target *target); 71 + int (*fw_download)(struct nfc_dev *dev, const char *firmware_name); 72 73 /* Secure Element API */ 74 int (*discover_se)(struct nfc_dev *dev); ··· 127 int targets_generation; 128 struct device dev; 129 bool dev_up; 130 + bool fw_download_in_progress; 131 u8 rf_mode; 132 bool polling; 133 struct nfc_target *active_target;
+1 -1
include/net/sock.h
··· 327 #ifdef CONFIG_RPS 328 __u32 sk_rxhash; 329 #endif 330 - #ifdef CONFIG_NET_LL_RX_POLL 331 unsigned int sk_napi_id; 332 unsigned int sk_ll_usec; 333 #endif
··· 327 #ifdef CONFIG_RPS 328 __u32 sk_rxhash; 329 #endif 330 + #ifdef CONFIG_NET_RX_BUSY_POLL 331 unsigned int sk_napi_id; 332 unsigned int sk_ll_usec; 333 #endif
+3 -3
include/uapi/linux/nfc.h
··· 69 * starting a poll from a device which has a secure element enabled means 70 * we want to do SE based card emulation. 71 * @NFC_CMD_DISABLE_SE: Disable the physical link to a specific secure element. 72 - * @NFC_CMD_FW_UPLOAD: Request to Load/flash firmware, or event to inform that 73 - * some firmware was loaded 74 */ 75 enum nfc_commands { 76 NFC_CMD_UNSPEC, ··· 94 NFC_CMD_DISABLE_SE, 95 NFC_CMD_LLC_SDREQ, 96 NFC_EVENT_LLC_SDRES, 97 - NFC_CMD_FW_UPLOAD, 98 NFC_EVENT_SE_ADDED, 99 NFC_EVENT_SE_REMOVED, 100 /* private: internal use only */
··· 69 * starting a poll from a device which has a secure element enabled means 70 * we want to do SE based card emulation. 71 * @NFC_CMD_DISABLE_SE: Disable the physical link to a specific secure element. 72 + * @NFC_CMD_FW_DOWNLOAD: Request to Load/flash firmware, or event to inform 73 + * that some firmware was loaded 74 */ 75 enum nfc_commands { 76 NFC_CMD_UNSPEC, ··· 94 NFC_CMD_DISABLE_SE, 95 NFC_CMD_LLC_SDREQ, 96 NFC_EVENT_LLC_SDRES, 97 + NFC_CMD_FW_DOWNLOAD, 98 NFC_EVENT_SE_ADDED, 99 NFC_EVENT_SE_REMOVED, 100 /* private: internal use only */
+1 -1
net/Kconfig
··· 244 Cgroup subsystem for use in assigning processes to network priorities on 245 a per-interface basis 246 247 - config NET_LL_RX_POLL 248 boolean 249 default y 250
··· 244 Cgroup subsystem for use in assigning processes to network priorities on 245 a per-interface basis 246 247 + config NET_RX_BUSY_POLL 248 boolean 249 default y 250
+17 -9
net/bluetooth/hci_core.c
··· 513 514 hci_setup_event_mask(req); 515 516 - if (hdev->hci_ver > BLUETOOTH_VER_1_1) 517 hci_req_add(req, HCI_OP_READ_LOCAL_COMMANDS, 0, NULL); 518 519 if (lmp_ssp_capable(hdev)) { ··· 2168 2169 BT_DBG("%p name %s bus %d", hdev, hdev->name, hdev->bus); 2170 2171 - write_lock(&hci_dev_list_lock); 2172 - list_add(&hdev->list, &hci_dev_list); 2173 - write_unlock(&hci_dev_list_lock); 2174 - 2175 hdev->workqueue = alloc_workqueue("%s", WQ_HIGHPRI | WQ_UNBOUND | 2176 WQ_MEM_RECLAIM, 1, hdev->name); 2177 if (!hdev->workqueue) { ··· 2202 if (hdev->dev_type != HCI_AMP) 2203 set_bit(HCI_AUTO_OFF, &hdev->dev_flags); 2204 2205 hci_notify(hdev, HCI_DEV_REG); 2206 hci_dev_hold(hdev); 2207 ··· 2218 destroy_workqueue(hdev->req_workqueue); 2219 err: 2220 ida_simple_remove(&hci_index_ida, hdev->id); 2221 - write_lock(&hci_dev_list_lock); 2222 - list_del(&hdev->list); 2223 - write_unlock(&hci_dev_list_lock); 2224 2225 return error; 2226 } ··· 3399 */ 3400 if (hdev->sent_cmd) { 3401 req_complete = bt_cb(hdev->sent_cmd)->req.complete; 3402 - if (req_complete) 3403 goto call_complete; 3404 } 3405 3406 /* Remove all pending commands belonging to this request */
··· 513 514 hci_setup_event_mask(req); 515 516 + /* AVM Berlin (31), aka "BlueFRITZ!", doesn't support the read 517 + * local supported commands HCI command. 518 + */ 519 + if (hdev->manufacturer != 31 && hdev->hci_ver > BLUETOOTH_VER_1_1) 520 hci_req_add(req, HCI_OP_READ_LOCAL_COMMANDS, 0, NULL); 521 522 if (lmp_ssp_capable(hdev)) { ··· 2165 2166 BT_DBG("%p name %s bus %d", hdev, hdev->name, hdev->bus); 2167 2168 hdev->workqueue = alloc_workqueue("%s", WQ_HIGHPRI | WQ_UNBOUND | 2169 WQ_MEM_RECLAIM, 1, hdev->name); 2170 if (!hdev->workqueue) { ··· 2203 if (hdev->dev_type != HCI_AMP) 2204 set_bit(HCI_AUTO_OFF, &hdev->dev_flags); 2205 2206 + write_lock(&hci_dev_list_lock); 2207 + list_add(&hdev->list, &hci_dev_list); 2208 + write_unlock(&hci_dev_list_lock); 2209 + 2210 hci_notify(hdev, HCI_DEV_REG); 2211 hci_dev_hold(hdev); 2212 ··· 2215 destroy_workqueue(hdev->req_workqueue); 2216 err: 2217 ida_simple_remove(&hci_index_ida, hdev->id); 2218 2219 return error; 2220 } ··· 3399 */ 3400 if (hdev->sent_cmd) { 3401 req_complete = bt_cb(hdev->sent_cmd)->req.complete; 3402 + 3403 + if (req_complete) { 3404 + /* We must set the complete callback to NULL to 3405 + * avoid calling the callback more than once if 3406 + * this function gets called again. 3407 + */ 3408 + bt_cb(hdev->sent_cmd)->req.complete = NULL; 3409 + 3410 goto call_complete; 3411 + } 3412 } 3413 3414 /* Remove all pending commands belonging to this request */
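The hci_req_cmd_complete() hunk makes the stored completion callback one-shot by clearing it before jumping to call_complete, so a re-entry cannot fire it twice. A small sketch of that consume-then-call idiom with invented names:

#include <stdio.h>

struct request {
        void (*complete)(int status);
};

/* Fetch-and-clear before invoking: a second call sees NULL and does
 * nothing, so the callback runs at most once. */
static void run_complete(struct request *req, int status)
{
        void (*cb)(int) = req->complete;

        req->complete = NULL;
        if (cb)
                cb(status);
}

static void done(int status)
{
        printf("done, status=%d\n", status);
}

int main(void)
{
        struct request req = { .complete = done };

        run_complete(&req, 0);  /* prints once */
        run_complete(&req, 0);  /* safe no-op */
        return 0;
}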
+2 -1
net/bridge/br_device.c
··· 70 } 71 72 mdst = br_mdb_get(br, skb, vid); 73 - if (mdst || BR_INPUT_SKB_CB_MROUTERS_ONLY(skb)) 74 br_multicast_deliver(mdst, skb); 75 else 76 br_flood_deliver(br, skb, false);
··· 70 } 71 72 mdst = br_mdb_get(br, skb, vid); 73 + if ((mdst || BR_INPUT_SKB_CB_MROUTERS_ONLY(skb)) && 74 + br_multicast_querier_exists(br)) 75 br_multicast_deliver(mdst, skb); 76 else 77 br_flood_deliver(br, skb, false);
+2 -1
net/bridge/br_input.c
··· 101 unicast = false; 102 } else if (is_multicast_ether_addr(dest)) { 103 mdst = br_mdb_get(br, skb, vid); 104 - if (mdst || BR_INPUT_SKB_CB_MROUTERS_ONLY(skb)) { 105 if ((mdst && mdst->mglist) || 106 br_multicast_is_router(br)) 107 skb2 = skb;
··· 101 unicast = false; 102 } else if (is_multicast_ether_addr(dest)) { 103 mdst = br_mdb_get(br, skb, vid); 104 + if ((mdst || BR_INPUT_SKB_CB_MROUTERS_ONLY(skb)) && 105 + br_multicast_querier_exists(br)) { 106 if ((mdst && mdst->mglist) || 107 br_multicast_is_router(br)) 108 skb2 = skb;
+30 -9
net/bridge/br_multicast.c
··· 1014 } 1015 #endif 1016 1017 /* 1018 * Add port to router_list 1019 * list is maintained ordered by pointer value ··· 1074 1075 static void br_multicast_query_received(struct net_bridge *br, 1076 struct net_bridge_port *port, 1077 - int saddr) 1078 { 1079 if (saddr) 1080 - mod_timer(&br->multicast_querier_timer, 1081 - jiffies + br->multicast_querier_interval); 1082 else if (timer_pending(&br->multicast_querier_timer)) 1083 return; 1084 ··· 1106 (port && port->state == BR_STATE_DISABLED)) 1107 goto out; 1108 1109 - br_multicast_query_received(br, port, !!iph->saddr); 1110 - 1111 group = ih->group; 1112 1113 if (skb->len == sizeof(*ih)) { ··· 1128 max_delay = ih3->code ? 1129 IGMPV3_MRC(ih3->code) * (HZ / IGMP_TIMER_SCALE) : 1; 1130 } 1131 1132 if (!group) 1133 goto out; ··· 1186 (port && port->state == BR_STATE_DISABLED)) 1187 goto out; 1188 1189 - br_multicast_query_received(br, port, !ipv6_addr_any(&ip6h->saddr)); 1190 - 1191 if (skb->len == sizeof(*mld)) { 1192 if (!pskb_may_pull(skb, sizeof(*mld))) { 1193 err = -EINVAL; ··· 1205 group = &mld2q->mld2q_mca; 1206 max_delay = mld2q->mld2q_mrc ? MLDV2_MRC(ntohs(mld2q->mld2q_mrc)) : 1; 1207 } 1208 1209 if (!group) 1210 goto out; ··· 1654 br->multicast_querier_interval = 255 * HZ; 1655 br->multicast_membership_interval = 260 * HZ; 1656 1657 spin_lock_init(&br->multicast_lock); 1658 setup_timer(&br->multicast_router_timer, 1659 br_multicast_local_router_expired, 0); ··· 1844 1845 int br_multicast_set_querier(struct net_bridge *br, unsigned long val) 1846 { 1847 val = !!val; 1848 1849 spin_lock_bh(&br->multicast_lock); ··· 1853 goto unlock; 1854 1855 br->multicast_querier = val; 1856 - if (val) 1857 - br_multicast_start_querier(br); 1858 1859 unlock: 1860 spin_unlock_bh(&br->multicast_lock);
··· 1014 } 1015 #endif 1016 1017 + static void br_multicast_update_querier_timer(struct net_bridge *br, 1018 + unsigned long max_delay) 1019 + { 1020 + if (!timer_pending(&br->multicast_querier_timer)) 1021 + br->multicast_querier_delay_time = jiffies + max_delay; 1022 + 1023 + mod_timer(&br->multicast_querier_timer, 1024 + jiffies + br->multicast_querier_interval); 1025 + } 1026 + 1027 /* 1028 * Add port to router_list 1029 * list is maintained ordered by pointer value ··· 1064 1065 static void br_multicast_query_received(struct net_bridge *br, 1066 struct net_bridge_port *port, 1067 + int saddr, 1068 + unsigned long max_delay) 1069 { 1070 if (saddr) 1071 + br_multicast_update_querier_timer(br, max_delay); 1072 else if (timer_pending(&br->multicast_querier_timer)) 1073 return; 1074 ··· 1096 (port && port->state == BR_STATE_DISABLED)) 1097 goto out; 1098 1099 group = ih->group; 1100 1101 if (skb->len == sizeof(*ih)) { ··· 1120 max_delay = ih3->code ? 1121 IGMPV3_MRC(ih3->code) * (HZ / IGMP_TIMER_SCALE) : 1; 1122 } 1123 + 1124 + br_multicast_query_received(br, port, !!iph->saddr, max_delay); 1125 1126 if (!group) 1127 goto out; ··· 1176 (port && port->state == BR_STATE_DISABLED)) 1177 goto out; 1178 1179 if (skb->len == sizeof(*mld)) { 1180 if (!pskb_may_pull(skb, sizeof(*mld))) { 1181 err = -EINVAL; ··· 1197 group = &mld2q->mld2q_mca; 1198 max_delay = mld2q->mld2q_mrc ? MLDV2_MRC(ntohs(mld2q->mld2q_mrc)) : 1; 1199 } 1200 + 1201 + br_multicast_query_received(br, port, !ipv6_addr_any(&ip6h->saddr), 1202 + max_delay); 1203 1204 if (!group) 1205 goto out; ··· 1643 br->multicast_querier_interval = 255 * HZ; 1644 br->multicast_membership_interval = 260 * HZ; 1645 1646 + br->multicast_querier_delay_time = 0; 1647 + 1648 spin_lock_init(&br->multicast_lock); 1649 setup_timer(&br->multicast_router_timer, 1650 br_multicast_local_router_expired, 0); ··· 1831 1832 int br_multicast_set_querier(struct net_bridge *br, unsigned long val) 1833 { 1834 + unsigned long max_delay; 1835 + 1836 val = !!val; 1837 1838 spin_lock_bh(&br->multicast_lock); ··· 1838 goto unlock; 1839 1840 br->multicast_querier = val; 1841 + if (!val) 1842 + goto unlock; 1843 + 1844 + max_delay = br->multicast_query_response_interval; 1845 + if (!timer_pending(&br->multicast_querier_timer)) 1846 + br->multicast_querier_delay_time = jiffies + max_delay; 1847 + 1848 + br_multicast_start_querier(br); 1849 1850 unlock: 1851 spin_unlock_bh(&br->multicast_lock);
+12
net/bridge/br_private.h
··· 267 unsigned long multicast_query_interval; 268 unsigned long multicast_query_response_interval; 269 unsigned long multicast_startup_query_interval; 270 271 spinlock_t multicast_lock; 272 struct net_bridge_mdb_htable __rcu *mdb; ··· 502 (br->multicast_router == 1 && 503 timer_pending(&br->multicast_router_timer)); 504 } 505 #else 506 static inline int br_multicast_rcv(struct net_bridge *br, 507 struct net_bridge_port *port, ··· 564 static inline bool br_multicast_is_router(struct net_bridge *br) 565 { 566 return 0; 567 } 568 static inline void br_mdb_init(void) 569 {
··· 267 unsigned long multicast_query_interval; 268 unsigned long multicast_query_response_interval; 269 unsigned long multicast_startup_query_interval; 270 + unsigned long multicast_querier_delay_time; 271 272 spinlock_t multicast_lock; 273 struct net_bridge_mdb_htable __rcu *mdb; ··· 501 (br->multicast_router == 1 && 502 timer_pending(&br->multicast_router_timer)); 503 } 504 + 505 + static inline bool br_multicast_querier_exists(struct net_bridge *br) 506 + { 507 + return time_is_before_jiffies(br->multicast_querier_delay_time) && 508 + (br->multicast_querier || 509 + timer_pending(&br->multicast_querier_timer)); 510 + } 511 #else 512 static inline int br_multicast_rcv(struct net_bridge *br, 513 struct net_bridge_port *port, ··· 556 static inline bool br_multicast_is_router(struct net_bridge *br) 557 { 558 return 0; 559 + } 560 + static inline bool br_multicast_querier_exists(struct net_bridge *br) 561 + { 562 + return false; 563 } 564 static inline void br_mdb_init(void) 565 {
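br_multicast_querier_exists() above is the core of the bridge snooping fix: the MDB is only trusted once the startup delay has passed and either we are the querier ourselves or a foreign querier's timer is still pending; otherwise traffic is flooded. A userspace approximation, with time() standing in for jiffies/time_is_before_jiffies() and every name invented:

#include <stdbool.h>
#include <stdio.h>
#include <time.h>

struct bridge {
        time_t querier_delay_time;      /* don't trust snooping before this */
        bool   local_querier;           /* we send queries ourselves */
        bool   other_querier_pending;   /* timer armed by a received query */
};

static bool querier_exists(const struct bridge *br)
{
        return time(NULL) >= br->querier_delay_time &&
               (br->local_querier || br->other_querier_pending);
}

int main(void)
{
        struct bridge br = {
                .querier_delay_time = time(NULL) - 1,   /* delay elapsed */
                .local_querier = true,
        };

        printf("use MDB: %s\n", querier_exists(&br) ? "yes" : "flood");
        return 0;
}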
+1 -1
net/core/skbuff.c
··· 740 741 skb_copy_secmark(new, old); 742 743 - #ifdef CONFIG_NET_LL_RX_POLL 744 new->napi_id = old->napi_id; 745 #endif 746 }
··· 740 741 skb_copy_secmark(new, old); 742 743 + #ifdef CONFIG_NET_RX_BUSY_POLL 744 new->napi_id = old->napi_id; 745 #endif 746 }
+3 -3
net/core/sock.c
··· 900 sock_valbool_flag(sk, SOCK_SELECT_ERR_QUEUE, valbool); 901 break; 902 903 - #ifdef CONFIG_NET_LL_RX_POLL 904 case SO_BUSY_POLL: 905 /* allow unprivileged users to decrease the value */ 906 if ((val > sk->sk_ll_usec) && !capable(CAP_NET_ADMIN)) ··· 1170 v.val = sock_flag(sk, SOCK_SELECT_ERR_QUEUE); 1171 break; 1172 1173 - #ifdef CONFIG_NET_LL_RX_POLL 1174 case SO_BUSY_POLL: 1175 v.val = sk->sk_ll_usec; 1176 break; ··· 2292 2293 sk->sk_stamp = ktime_set(-1L, 0); 2294 2295 - #ifdef CONFIG_NET_LL_RX_POLL 2296 sk->sk_napi_id = 0; 2297 sk->sk_ll_usec = sysctl_net_busy_read; 2298 #endif
··· 900 sock_valbool_flag(sk, SOCK_SELECT_ERR_QUEUE, valbool); 901 break; 902 903 + #ifdef CONFIG_NET_RX_BUSY_POLL 904 case SO_BUSY_POLL: 905 /* allow unprivileged users to decrease the value */ 906 if ((val > sk->sk_ll_usec) && !capable(CAP_NET_ADMIN)) ··· 1170 v.val = sock_flag(sk, SOCK_SELECT_ERR_QUEUE); 1171 break; 1172 1173 + #ifdef CONFIG_NET_RX_BUSY_POLL 1174 case SO_BUSY_POLL: 1175 v.val = sk->sk_ll_usec; 1176 break; ··· 2292 2293 sk->sk_stamp = ktime_set(-1L, 0); 2294 2295 + #ifdef CONFIG_NET_RX_BUSY_POLL 2296 sk->sk_napi_id = 0; 2297 sk->sk_ll_usec = sysctl_net_busy_read; 2298 #endif
+6 -2
net/core/sysctl_net_core.c
··· 21 #include <net/net_ratelimit.h> 22 #include <net/busy_poll.h> 23 24 static int one = 1; 25 26 #ifdef CONFIG_RPS 27 static int rps_sock_flow_sysctl(struct ctl_table *table, int write, ··· 300 .proc_handler = flow_limit_table_len_sysctl 301 }, 302 #endif /* CONFIG_NET_FLOW_LIMIT */ 303 - #ifdef CONFIG_NET_LL_RX_POLL 304 { 305 .procname = "busy_poll", 306 .data = &sysctl_net_busy_poll, ··· 341 .data = &init_net.core.sysctl_somaxconn, 342 .maxlen = sizeof(int), 343 .mode = 0644, 344 - .proc_handler = proc_dointvec 345 }, 346 { } 347 };
··· 21 #include <net/net_ratelimit.h> 22 #include <net/busy_poll.h> 23 24 + static int zero = 0; 25 static int one = 1; 26 + static int ushort_max = USHRT_MAX; 27 28 #ifdef CONFIG_RPS 29 static int rps_sock_flow_sysctl(struct ctl_table *table, int write, ··· 298 .proc_handler = flow_limit_table_len_sysctl 299 }, 300 #endif /* CONFIG_NET_FLOW_LIMIT */ 301 + #ifdef CONFIG_NET_RX_BUSY_POLL 302 { 303 .procname = "busy_poll", 304 .data = &sysctl_net_busy_poll, ··· 339 .data = &init_net.core.sysctl_somaxconn, 340 .maxlen = sizeof(int), 341 .mode = 0644, 342 + .extra1 = &zero, 343 + .extra2 = &ushort_max, 344 + .proc_handler = proc_dointvec_minmax 345 }, 346 { } 347 };
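somaxconn now goes through proc_dointvec_minmax() with extra1/extra2 bounding it to [0, USHRT_MAX], since the value eventually lands in a 16-bit field and an unchecked write would be silently truncated. A sketch of the reject-rather-than-truncate behaviour as a plain function, not the procfs handler itself:

#include <errno.h>
#include <limits.h>
#include <stdio.h>

/* Analogous to proc_dointvec_minmax() with .extra1 = &zero and
 * .extra2 = &ushort_max: out-of-range writes fail, the slot keeps its
 * old value. */
static int set_bounded_int(int *slot, long val, int min, int max)
{
        if (val < min || val > max)
                return -EINVAL;
        *slot = (int)val;
        return 0;
}

int main(void)
{
        int somaxconn = 128;

        if (set_bounded_int(&somaxconn, 70000, 0, USHRT_MAX) != 0)
                printf("rejected; somaxconn still %d\n", somaxconn);
        return 0;
}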
+3 -1
net/ipv4/devinet.c
··· 772 ci = nla_data(tb[IFA_CACHEINFO]); 773 if (!ci->ifa_valid || ci->ifa_prefered > ci->ifa_valid) { 774 err = -EINVAL; 775 - goto errout; 776 } 777 *pvalid_lft = ci->ifa_valid; 778 *pprefered_lft = ci->ifa_prefered; ··· 780 781 return ifa; 782 783 errout: 784 return ERR_PTR(err); 785 }
··· 772 ci = nla_data(tb[IFA_CACHEINFO]); 773 if (!ci->ifa_valid || ci->ifa_prefered > ci->ifa_valid) { 774 err = -EINVAL; 775 + goto errout_free; 776 } 777 *pvalid_lft = ci->ifa_valid; 778 *pprefered_lft = ci->ifa_prefered; ··· 780 781 return ifa; 782 783 + errout_free: 784 + inet_free_ifa(ifa); 785 errout: 786 return ERR_PTR(err); 787 }
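The rtm_to_ifaddr() fix is the canonical goto-unwind shape: once ifa exists, every later failure has to leave through a label that frees it. A compact userspace sketch of the idiom, names invented:

#include <stdlib.h>

struct ifaddr {
        int valid_lft;
};

/* Each failure exits through a label that undoes exactly what has been
 * set up so far; the devinet fix added the missing errout_free hop. */
static struct ifaddr *make_ifaddr(int valid_lft)
{
        struct ifaddr *ifa = malloc(sizeof(*ifa));

        if (!ifa)
                goto errout;            /* nothing to undo yet */
        if (valid_lft <= 0)
                goto errout_free;       /* ifa must not leak */
        ifa->valid_lft = valid_lft;
        return ifa;

errout_free:
        free(ifa);
errout:
        return NULL;
}

int main(void)
{
        struct ifaddr *a = make_ifaddr(0);      /* fails, no leak */

        free(make_ifaddr(3600));
        return a ? 1 : 0;
}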
+22 -21
net/ipv6/addrconf.c
··· 813 /* On success it returns ifp with increased reference count */ 814 815 static struct inet6_ifaddr * 816 - ipv6_add_addr(struct inet6_dev *idev, const struct in6_addr *addr, int pfxlen, 817 - int scope, u32 flags) 818 { 819 struct inet6_ifaddr *ifa = NULL; 820 struct rt6_info *rt; ··· 864 } 865 866 ifa->addr = *addr; 867 868 spin_lock_init(&ifa->lock); 869 spin_lock_init(&ifa->state_lock); ··· 875 ifa->scope = scope; 876 ifa->prefix_len = pfxlen; 877 ifa->flags = flags | IFA_F_TENTATIVE; 878 ifa->cstamp = ifa->tstamp = jiffies; 879 ifa->tokenized = false; 880 ··· 1128 1129 ift = !max_addresses || 1130 ipv6_count_addresses(idev) < max_addresses ? 1131 - ipv6_add_addr(idev, &addr, tmp_plen, ipv6_addr_scope(&addr), 1132 - addr_flags) : NULL; 1133 if (IS_ERR_OR_NULL(ift)) { 1134 in6_ifa_put(ifp); 1135 in6_dev_put(idev); ··· 1142 1143 spin_lock_bh(&ift->lock); 1144 ift->ifpub = ifp; 1145 - ift->valid_lft = tmp_valid_lft; 1146 - ift->prefered_lft = tmp_prefered_lft; 1147 ift->cstamp = now; 1148 ift->tstamp = tmp_tstamp; 1149 spin_unlock_bh(&ift->lock); ··· 2183 */ 2184 if (!max_addresses || 2185 ipv6_count_addresses(in6_dev) < max_addresses) 2186 - ifp = ipv6_add_addr(in6_dev, &addr, pinfo->prefix_len, 2187 addr_type&IPV6_ADDR_SCOPE_MASK, 2188 - addr_flags); 2189 2190 if (IS_ERR_OR_NULL(ifp)) { 2191 in6_dev_put(in6_dev); 2192 return; 2193 } 2194 2195 - update_lft = create = 1; 2196 ifp->cstamp = jiffies; 2197 ifp->tokenized = tokenized; 2198 addrconf_dad_start(ifp); ··· 2216 stored_lft = ifp->valid_lft - (now - ifp->tstamp) / HZ; 2217 else 2218 stored_lft = 0; 2219 - if (!update_lft && stored_lft) { 2220 if (valid_lft > MIN_VALID_LIFETIME || 2221 valid_lft > stored_lft) 2222 update_lft = 1; ··· 2462 prefered_lft = timeout; 2463 } 2464 2465 - ifp = ipv6_add_addr(idev, pfx, plen, scope, ifa_flags); 2466 2467 if (!IS_ERR(ifp)) { 2468 - spin_lock_bh(&ifp->lock); 2469 - ifp->valid_lft = valid_lft; 2470 - ifp->prefered_lft = prefered_lft; 2471 - ifp->tstamp = jiffies; 2472 - if (peer_pfx) 2473 - ifp->peer_addr = *peer_pfx; 2474 - spin_unlock_bh(&ifp->lock); 2475 - 2476 addrconf_prefix_route(&ifp->addr, ifp->prefix_len, dev, 2477 expires, flags); 2478 /* ··· 2557 { 2558 struct inet6_ifaddr *ifp; 2559 2560 - ifp = ipv6_add_addr(idev, addr, plen, scope, IFA_F_PERMANENT); 2561 if (!IS_ERR(ifp)) { 2562 spin_lock_bh(&ifp->lock); 2563 ifp->flags &= ~IFA_F_TENTATIVE; ··· 2684 #endif 2685 2686 2687 - ifp = ipv6_add_addr(idev, addr, 64, IFA_LINK, addr_flags); 2688 if (!IS_ERR(ifp)) { 2689 addrconf_prefix_route(&ifp->addr, ifp->prefix_len, idev->dev, 0, 0); 2690 addrconf_dad_start(ifp);
··· 813 /* On success it returns ifp with increased reference count */ 814 815 static struct inet6_ifaddr * 816 + ipv6_add_addr(struct inet6_dev *idev, const struct in6_addr *addr, 817 + const struct in6_addr *peer_addr, int pfxlen, 818 + int scope, u32 flags, u32 valid_lft, u32 prefered_lft) 819 { 820 struct inet6_ifaddr *ifa = NULL; 821 struct rt6_info *rt; ··· 863 } 864 865 ifa->addr = *addr; 866 + if (peer_addr) 867 + ifa->peer_addr = *peer_addr; 868 869 spin_lock_init(&ifa->lock); 870 spin_lock_init(&ifa->state_lock); ··· 872 ifa->scope = scope; 873 ifa->prefix_len = pfxlen; 874 ifa->flags = flags | IFA_F_TENTATIVE; 875 + ifa->valid_lft = valid_lft; 876 + ifa->prefered_lft = prefered_lft; 877 ifa->cstamp = ifa->tstamp = jiffies; 878 ifa->tokenized = false; 879 ··· 1123 1124 ift = !max_addresses || 1125 ipv6_count_addresses(idev) < max_addresses ? 1126 + ipv6_add_addr(idev, &addr, NULL, tmp_plen, 1127 + ipv6_addr_scope(&addr), addr_flags, 1128 + tmp_valid_lft, tmp_prefered_lft) : NULL; 1129 if (IS_ERR_OR_NULL(ift)) { 1130 in6_ifa_put(ifp); 1131 in6_dev_put(idev); ··· 1136 1137 spin_lock_bh(&ift->lock); 1138 ift->ifpub = ifp; 1139 ift->cstamp = now; 1140 ift->tstamp = tmp_tstamp; 1141 spin_unlock_bh(&ift->lock); ··· 2179 */ 2180 if (!max_addresses || 2181 ipv6_count_addresses(in6_dev) < max_addresses) 2182 + ifp = ipv6_add_addr(in6_dev, &addr, NULL, 2183 + pinfo->prefix_len, 2184 addr_type&IPV6_ADDR_SCOPE_MASK, 2185 + addr_flags, valid_lft, 2186 + prefered_lft); 2187 2188 if (IS_ERR_OR_NULL(ifp)) { 2189 in6_dev_put(in6_dev); 2190 return; 2191 } 2192 2193 + update_lft = 0; 2194 + create = 1; 2195 ifp->cstamp = jiffies; 2196 ifp->tokenized = tokenized; 2197 addrconf_dad_start(ifp); ··· 2209 stored_lft = ifp->valid_lft - (now - ifp->tstamp) / HZ; 2210 else 2211 stored_lft = 0; 2212 + if (!update_lft && !create && stored_lft) { 2213 if (valid_lft > MIN_VALID_LIFETIME || 2214 valid_lft > stored_lft) 2215 update_lft = 1; ··· 2455 prefered_lft = timeout; 2456 } 2457 2458 + ifp = ipv6_add_addr(idev, pfx, peer_pfx, plen, scope, ifa_flags, 2459 + valid_lft, prefered_lft); 2460 2461 if (!IS_ERR(ifp)) { 2462 addrconf_prefix_route(&ifp->addr, ifp->prefix_len, dev, 2463 expires, flags); 2464 /* ··· 2557 { 2558 struct inet6_ifaddr *ifp; 2559 2560 + ifp = ipv6_add_addr(idev, addr, NULL, plen, 2561 + scope, IFA_F_PERMANENT, 0, 0); 2562 if (!IS_ERR(ifp)) { 2563 spin_lock_bh(&ifp->lock); 2564 ifp->flags &= ~IFA_F_TENTATIVE; ··· 2683 #endif 2684 2685 2686 + ifp = ipv6_add_addr(idev, addr, NULL, 64, IFA_LINK, addr_flags, 0, 0); 2687 if (!IS_ERR(ifp)) { 2688 addrconf_prefix_route(&ifp->addr, ifp->prefix_len, idev->dev, 0, 0); 2689 addrconf_dad_start(ifp);
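Passing peer_addr and both lifetimes into ipv6_add_addr() means the address is fully initialized before it becomes visible, instead of being patched afterwards; per the merge summary this closes the race that produced incorrect lifetime assignments. A single-threaded sketch of the initialize-before-publish rule (the kernel publishes under lock/RCU; these names are invented):

#include <stdlib.h>

struct addr_entry {
        unsigned int valid_lft;
        unsigned int prefered_lft;
};

static struct addr_entry *published;    /* what other contexts can see */

/* Fill in every field first, publish last.  A plain store keeps the
 * sketch single-threaded; real code would use rcu_assign_pointer() or
 * equivalent. */
static int add_addr(unsigned int valid_lft, unsigned int prefered_lft)
{
        struct addr_entry *e = malloc(sizeof(*e));

        if (!e)
                return -1;
        e->valid_lft = valid_lft;
        e->prefered_lft = prefered_lft;
        published = e;
        return 0;
}

int main(void)
{
        return add_addr(3600, 1800) ? 1 : 0;
}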
+13 -12
net/ipv6/ip6_fib.c
··· 1632 1633 static DEFINE_SPINLOCK(fib6_gc_lock); 1634 1635 - void fib6_run_gc(unsigned long expires, struct net *net) 1636 { 1637 - if (expires != ~0UL) { 1638 spin_lock_bh(&fib6_gc_lock); 1639 - gc_args.timeout = expires ? (int)expires : 1640 - net->ipv6.sysctl.ip6_rt_gc_interval; 1641 - } else { 1642 - if (!spin_trylock_bh(&fib6_gc_lock)) { 1643 - mod_timer(&net->ipv6.ip6_fib_timer, jiffies + HZ); 1644 - return; 1645 - } 1646 - gc_args.timeout = net->ipv6.sysctl.ip6_rt_gc_interval; 1647 } 1648 1649 gc_args.more = icmp6_dst_gc(); 1650 1651 fib6_clean_all(net, fib6_age, 0, NULL); 1652 1653 if (gc_args.more) 1654 mod_timer(&net->ipv6.ip6_fib_timer, 1655 - round_jiffies(jiffies 1656 + net->ipv6.sysctl.ip6_rt_gc_interval)); 1657 else 1658 del_timer(&net->ipv6.ip6_fib_timer); ··· 1662 1663 static void fib6_gc_timer_cb(unsigned long arg) 1664 { 1665 - fib6_run_gc(0, (struct net *)arg); 1666 } 1667 1668 static int __net_init fib6_net_init(struct net *net)
··· 1632 1633 static DEFINE_SPINLOCK(fib6_gc_lock); 1634 1635 + void fib6_run_gc(unsigned long expires, struct net *net, bool force) 1636 { 1637 + unsigned long now; 1638 + 1639 + if (force) { 1640 spin_lock_bh(&fib6_gc_lock); 1641 + } else if (!spin_trylock_bh(&fib6_gc_lock)) { 1642 + mod_timer(&net->ipv6.ip6_fib_timer, jiffies + HZ); 1643 + return; 1644 } 1645 + gc_args.timeout = expires ? (int)expires : 1646 + net->ipv6.sysctl.ip6_rt_gc_interval; 1647 1648 gc_args.more = icmp6_dst_gc(); 1649 1650 fib6_clean_all(net, fib6_age, 0, NULL); 1651 + now = jiffies; 1652 + net->ipv6.ip6_rt_last_gc = now; 1653 1654 if (gc_args.more) 1655 mod_timer(&net->ipv6.ip6_fib_timer, 1656 + round_jiffies(now 1657 + net->ipv6.sysctl.ip6_rt_gc_interval)); 1658 else 1659 del_timer(&net->ipv6.ip6_fib_timer); ··· 1661 1662 static void fib6_gc_timer_cb(unsigned long arg) 1663 { 1664 + fib6_run_gc(0, (struct net *)arg, true); 1665 } 1666 1667 static int __net_init fib6_net_init(struct net *net)
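fib6_run_gc() now takes an explicit force flag: a forced run blocks on fib6_gc_lock, while an opportunistic run trylocks and simply re-arms the timer on contention, which is what avoids the soft lockups. A pthreads sketch of the block-versus-trylock split, with reschedule() as a stand-in for mod_timer():

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t gc_lock = PTHREAD_MUTEX_INITIALIZER;

static void reschedule(void)
{
        printf("gc busy, re-arming timer\n");   /* mod_timer() stand-in */
}

static void run_gc(bool force)
{
        if (force) {
                pthread_mutex_lock(&gc_lock);   /* must run: wait for it */
        } else if (pthread_mutex_trylock(&gc_lock) != 0) {
                reschedule();                   /* don't spin, try later */
                return;
        }
        printf("collecting (force=%d)\n", force);
        pthread_mutex_unlock(&gc_lock);
}

int main(void)
{
        run_gc(false);
        run_gc(true);
        return 0;       /* build with -lpthread */
}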
+2 -2
net/ipv6/ndisc.c
··· 1576 switch (event) { 1577 case NETDEV_CHANGEADDR: 1578 neigh_changeaddr(&nd_tbl, dev); 1579 - fib6_run_gc(~0UL, net); 1580 idev = in6_dev_get(dev); 1581 if (!idev) 1582 break; ··· 1586 break; 1587 case NETDEV_DOWN: 1588 neigh_ifdown(&nd_tbl, dev); 1589 - fib6_run_gc(~0UL, net); 1590 break; 1591 case NETDEV_NOTIFY_PEERS: 1592 ndisc_send_unsol_na(dev);
··· 1576 switch (event) { 1577 case NETDEV_CHANGEADDR: 1578 neigh_changeaddr(&nd_tbl, dev); 1579 + fib6_run_gc(0, net, false); 1580 idev = in6_dev_get(dev); 1581 if (!idev) 1582 break; ··· 1586 break; 1587 case NETDEV_DOWN: 1588 neigh_ifdown(&nd_tbl, dev); 1589 + fib6_run_gc(0, net, false); 1590 break; 1591 case NETDEV_NOTIFY_PEERS: 1592 ndisc_send_unsol_na(dev);
+3 -5
net/ipv6/route.c
··· 1311 1312 static int ip6_dst_gc(struct dst_ops *ops) 1313 { 1314 - unsigned long now = jiffies; 1315 struct net *net = container_of(ops, struct net, ipv6.ip6_dst_ops); 1316 int rt_min_interval = net->ipv6.sysctl.ip6_rt_gc_min_interval; 1317 int rt_max_size = net->ipv6.sysctl.ip6_rt_max_size; ··· 1320 int entries; 1321 1322 entries = dst_entries_get_fast(ops); 1323 - if (time_after(rt_last_gc + rt_min_interval, now) && 1324 entries <= rt_max_size) 1325 goto out; 1326 1327 net->ipv6.ip6_rt_gc_expire++; 1328 - fib6_run_gc(net->ipv6.ip6_rt_gc_expire, net); 1329 - net->ipv6.ip6_rt_last_gc = now; 1330 entries = dst_entries_get_slow(ops); 1331 if (entries < ops->gc_thresh) 1332 net->ipv6.ip6_rt_gc_expire = rt_gc_timeout>>1; ··· 2825 net = (struct net *)ctl->extra1; 2826 delay = net->ipv6.sysctl.flush_delay; 2827 proc_dointvec(ctl, write, buffer, lenp, ppos); 2828 - fib6_run_gc(delay <= 0 ? ~0UL : (unsigned long)delay, net); 2829 return 0; 2830 } 2831
··· 1311 1312 static int ip6_dst_gc(struct dst_ops *ops) 1313 { 1314 struct net *net = container_of(ops, struct net, ipv6.ip6_dst_ops); 1315 int rt_min_interval = net->ipv6.sysctl.ip6_rt_gc_min_interval; 1316 int rt_max_size = net->ipv6.sysctl.ip6_rt_max_size; ··· 1321 int entries; 1322 1323 entries = dst_entries_get_fast(ops); 1324 + if (time_after(rt_last_gc + rt_min_interval, jiffies) && 1325 entries <= rt_max_size) 1326 goto out; 1327 1328 net->ipv6.ip6_rt_gc_expire++; 1329 + fib6_run_gc(net->ipv6.ip6_rt_gc_expire, net, entries > rt_max_size); 1330 entries = dst_entries_get_slow(ops); 1331 if (entries < ops->gc_thresh) 1332 net->ipv6.ip6_rt_gc_expire = rt_gc_timeout>>1; ··· 2827 net = (struct net *)ctl->extra1; 2828 delay = net->ipv6.sysctl.flush_delay; 2829 proc_dointvec(ctl, write, buffer, lenp, ppos); 2830 + fib6_run_gc(delay <= 0 ? 0 : (unsigned long)delay, net, delay > 0); 2831 return 0; 2832 } 2833
+4
net/mac80211/mesh_ps.c
··· 229 enum nl80211_mesh_power_mode pm; 230 bool do_buffer; 231 232 /* 233 * use peer-specific power mode if peering is established and the 234 * peer's power mode is known
··· 229 enum nl80211_mesh_power_mode pm; 230 bool do_buffer; 231 232 + /* For non-assoc STA, prevent buffering or frame transmission */ 233 + if (sta->sta_state < IEEE80211_STA_ASSOC) 234 + return; 235 + 236 /* 237 * use peer-specific power mode if peering is established and the 238 * peer's power mode is known
+5 -2
net/mac80211/pm.c
··· 99 } 100 mutex_unlock(&local->sta_mtx); 101 102 - /* remove all interfaces */ 103 list_for_each_entry(sdata, &local->interfaces, list) { 104 - if (!ieee80211_sdata_running(sdata)) 105 continue; 106 drv_remove_interface(local, sdata); 107 } 108
··· 99 } 100 mutex_unlock(&local->sta_mtx); 101 102 + /* remove all interfaces that were created in the driver */ 103 list_for_each_entry(sdata, &local->interfaces, list) { 104 + if (!ieee80211_sdata_running(sdata) || 105 + sdata->vif.type == NL80211_IFTYPE_AP_VLAN || 106 + sdata->vif.type == NL80211_IFTYPE_MONITOR) 107 continue; 108 + 109 drv_remove_interface(local, sdata); 110 } 111
+2 -2
net/netlabel/netlabel_cipso_v4.c
··· 691 { 692 struct netlbl_domhsh_walk_arg *cb_arg = arg; 693 694 - if (entry->type == NETLBL_NLTYPE_CIPSOV4 && 695 - entry->type_def.cipsov4->doi == cb_arg->doi) 696 return netlbl_domhsh_remove_entry(entry, cb_arg->audit_info); 697 698 return 0;
··· 691 { 692 struct netlbl_domhsh_walk_arg *cb_arg = arg; 693 694 + if (entry->def.type == NETLBL_NLTYPE_CIPSOV4 && 695 + entry->def.cipso->doi == cb_arg->doi) 696 return netlbl_domhsh_remove_entry(entry, cb_arg->audit_info); 697 698 return 0;
+49 -55
net/netlabel/netlabel_domainhash.c
··· 84 #endif /* IPv6 */ 85 86 ptr = container_of(entry, struct netlbl_dom_map, rcu); 87 - if (ptr->type == NETLBL_NLTYPE_ADDRSELECT) { 88 netlbl_af4list_foreach_safe(iter4, tmp4, 89 - &ptr->type_def.addrsel->list4) { 90 netlbl_af4list_remove_entry(iter4); 91 kfree(netlbl_domhsh_addr4_entry(iter4)); 92 } 93 #if IS_ENABLED(CONFIG_IPV6) 94 netlbl_af6list_foreach_safe(iter6, tmp6, 95 - &ptr->type_def.addrsel->list6) { 96 netlbl_af6list_remove_entry(iter6); 97 kfree(netlbl_domhsh_addr6_entry(iter6)); 98 } ··· 213 if (addr4 != NULL) { 214 struct netlbl_domaddr4_map *map4; 215 map4 = netlbl_domhsh_addr4_entry(addr4); 216 - type = map4->type; 217 - cipsov4 = map4->type_def.cipsov4; 218 netlbl_af4list_audit_addr(audit_buf, 0, NULL, 219 addr4->addr, addr4->mask); 220 #if IS_ENABLED(CONFIG_IPV6) 221 } else if (addr6 != NULL) { 222 struct netlbl_domaddr6_map *map6; 223 map6 = netlbl_domhsh_addr6_entry(addr6); 224 - type = map6->type; 225 netlbl_af6list_audit_addr(audit_buf, 0, NULL, 226 &addr6->addr, &addr6->mask); 227 #endif /* IPv6 */ 228 } else { 229 - type = entry->type; 230 - cipsov4 = entry->type_def.cipsov4; 231 } 232 switch (type) { 233 case NETLBL_NLTYPE_UNLABELED: ··· 265 if (entry == NULL) 266 return -EINVAL; 267 268 - switch (entry->type) { 269 case NETLBL_NLTYPE_UNLABELED: 270 - if (entry->type_def.cipsov4 != NULL || 271 - entry->type_def.addrsel != NULL) 272 return -EINVAL; 273 break; 274 case NETLBL_NLTYPE_CIPSOV4: 275 - if (entry->type_def.cipsov4 == NULL) 276 return -EINVAL; 277 break; 278 case NETLBL_NLTYPE_ADDRSELECT: 279 - netlbl_af4list_foreach(iter4, &entry->type_def.addrsel->list4) { 280 map4 = netlbl_domhsh_addr4_entry(iter4); 281 - switch (map4->type) { 282 case NETLBL_NLTYPE_UNLABELED: 283 - if (map4->type_def.cipsov4 != NULL) 284 return -EINVAL; 285 break; 286 case NETLBL_NLTYPE_CIPSOV4: 287 - if (map4->type_def.cipsov4 == NULL) 288 return -EINVAL; 289 break; 290 default: ··· 291 } 292 } 293 #if IS_ENABLED(CONFIG_IPV6) 294 - netlbl_af6list_foreach(iter6, &entry->type_def.addrsel->list6) { 295 map6 = netlbl_domhsh_addr6_entry(iter6); 296 - switch (map6->type) { 297 case NETLBL_NLTYPE_UNLABELED: 298 break; 299 default: ··· 401 rcu_assign_pointer(netlbl_domhsh_def, entry); 402 } 403 404 - if (entry->type == NETLBL_NLTYPE_ADDRSELECT) { 405 netlbl_af4list_foreach_rcu(iter4, 406 - &entry->type_def.addrsel->list4) 407 netlbl_domhsh_audit_add(entry, iter4, NULL, 408 ret_val, audit_info); 409 #if IS_ENABLED(CONFIG_IPV6) 410 netlbl_af6list_foreach_rcu(iter6, 411 - &entry->type_def.addrsel->list6) 412 netlbl_domhsh_audit_add(entry, NULL, iter6, 413 ret_val, audit_info); 414 #endif /* IPv6 */ 415 } else 416 netlbl_domhsh_audit_add(entry, NULL, NULL, 417 ret_val, audit_info); 418 - } else if (entry_old->type == NETLBL_NLTYPE_ADDRSELECT && 419 - entry->type == NETLBL_NLTYPE_ADDRSELECT) { 420 struct list_head *old_list4; 421 struct list_head *old_list6; 422 423 - old_list4 = &entry_old->type_def.addrsel->list4; 424 - old_list6 = &entry_old->type_def.addrsel->list6; 425 426 /* we only allow the addition of address selectors if all of 427 * the selectors do not exist in the existing domain map */ 428 - netlbl_af4list_foreach_rcu(iter4, 429 - &entry->type_def.addrsel->list4) 430 if (netlbl_af4list_search_exact(iter4->addr, 431 iter4->mask, 432 old_list4)) { ··· 433 goto add_return; 434 } 435 #if IS_ENABLED(CONFIG_IPV6) 436 - netlbl_af6list_foreach_rcu(iter6, 437 - &entry->type_def.addrsel->list6) 438 if (netlbl_af6list_search_exact(&iter6->addr, 439 &iter6->mask, 440 old_list6)) { ··· 443 #endif /* IPv6 */ 444 445 netlbl_af4list_foreach_safe(iter4, tmp4, 446 - &entry->type_def.addrsel->list4) { 447 netlbl_af4list_remove_entry(iter4); 448 iter4->valid = 1; 449 ret_val = netlbl_af4list_add(iter4, old_list4); ··· 454 } 455 #if IS_ENABLED(CONFIG_IPV6) 456 netlbl_af6list_foreach_safe(iter6, tmp6, 457 - &entry->type_def.addrsel->list6) { 458 netlbl_af6list_remove_entry(iter6); 459 iter6->valid = 1; 460 ret_val = netlbl_af6list_add(iter6, old_list6); ··· 535 struct netlbl_af4list *iter4; 536 struct netlbl_domaddr4_map *map4; 537 538 - switch (entry->type) { 539 case NETLBL_NLTYPE_ADDRSELECT: 540 netlbl_af4list_foreach_rcu(iter4, 541 - &entry->type_def.addrsel->list4) { 542 map4 = netlbl_domhsh_addr4_entry(iter4); 543 - cipso_v4_doi_putdef(map4->type_def.cipsov4); 544 } 545 /* no need to check the IPv6 list since we currently 546 * support only unlabeled protocols for IPv6 */ 547 break; 548 case NETLBL_NLTYPE_CIPSOV4: 549 - cipso_v4_doi_putdef(entry->type_def.cipsov4); 550 break; 551 } 552 call_rcu(&entry->rcu, netlbl_domhsh_free_entry); ··· 587 entry_map = netlbl_domhsh_search(domain); 588 else 589 entry_map = netlbl_domhsh_search_def(domain); 590 - if (entry_map == NULL || entry_map->type != NETLBL_NLTYPE_ADDRSELECT) 591 goto remove_af4_failure; 592 593 spin_lock(&netlbl_domhsh_lock); 594 entry_addr = netlbl_af4list_remove(addr->s_addr, mask->s_addr, 595 - &entry_map->type_def.addrsel->list4); 596 spin_unlock(&netlbl_domhsh_lock); 597 598 if (entry_addr == NULL) 599 goto remove_af4_failure; 600 - netlbl_af4list_foreach_rcu(iter4, &entry_map->type_def.addrsel->list4) 601 goto remove_af4_single_addr; 602 #if IS_ENABLED(CONFIG_IPV6) 603 - netlbl_af6list_foreach_rcu(iter6, &entry_map->type_def.addrsel->list6) 604 goto remove_af4_single_addr; 605 #endif /* IPv6 */ 606 /* the domain mapping is empty so remove it from the mapping table */ ··· 614 * shouldn't be a problem */ 615 synchronize_rcu(); 616 entry = netlbl_domhsh_addr4_entry(entry_addr); 617 - cipso_v4_doi_putdef(entry->type_def.cipsov4); 618 kfree(entry); 619 return 0; ··· 691 * responsible for ensuring that rcu_read_[un]lock() is called. 692 * 693 */ 694 - struct netlbl_domaddr4_map *netlbl_domhsh_getentry_af4(const char *domain, 695 - __be32 addr) 696 { 697 struct netlbl_dom_map *dom_iter; 698 struct netlbl_af4list *addr_iter; ··· 700 dom_iter = netlbl_domhsh_search_def(domain); 701 if (dom_iter == NULL) 702 return NULL; 703 - if (dom_iter->type != NETLBL_NLTYPE_ADDRSELECT) 704 - return NULL; 705 706 - addr_iter = netlbl_af4list_search(addr, 707 - &dom_iter->type_def.addrsel->list4); 708 if (addr_iter == NULL) 709 return NULL; 710 - 711 - return netlbl_domhsh_addr4_entry(addr_iter); 712 } 713 714 #if IS_ENABLED(CONFIG_IPV6) ··· 721 * responsible for ensuring that rcu_read_[un]lock() is called. 722 * 723 */ 724 - struct netlbl_domaddr6_map *netlbl_domhsh_getentry_af6(const char *domain, 725 const struct in6_addr *addr) 726 { 727 struct netlbl_dom_map *dom_iter; ··· 730 dom_iter = netlbl_domhsh_search_def(domain); 731 if (dom_iter == NULL) 732 return NULL; 733 - if (dom_iter->type != NETLBL_NLTYPE_ADDRSELECT) 734 - return NULL; 735 736 - addr_iter = netlbl_af6list_search(addr, 737 - &dom_iter->type_def.addrsel->list6); 738 if (addr_iter == NULL) 739 return NULL; 740 - 741 - return netlbl_domhsh_addr6_entry(addr_iter); 742 } 743 #endif /* IPv6 */ 744
··· 84 #endif /* IPv6 */ 85 86 ptr = container_of(entry, struct netlbl_dom_map, rcu); 87 + if (ptr->def.type == NETLBL_NLTYPE_ADDRSELECT) { 88 netlbl_af4list_foreach_safe(iter4, tmp4, 89 + &ptr->def.addrsel->list4) { 90 netlbl_af4list_remove_entry(iter4); 91 kfree(netlbl_domhsh_addr4_entry(iter4)); 92 } 93 #if IS_ENABLED(CONFIG_IPV6) 94 netlbl_af6list_foreach_safe(iter6, tmp6, 95 + &ptr->def.addrsel->list6) { 96 netlbl_af6list_remove_entry(iter6); 97 kfree(netlbl_domhsh_addr6_entry(iter6)); 98 } ··· 213 if (addr4 != NULL) { 214 struct netlbl_domaddr4_map *map4; 215 map4 = netlbl_domhsh_addr4_entry(addr4); 216 + type = map4->def.type; 217 + cipsov4 = map4->def.cipso; 218 netlbl_af4list_audit_addr(audit_buf, 0, NULL, 219 addr4->addr, addr4->mask); 220 #if IS_ENABLED(CONFIG_IPV6) 221 } else if (addr6 != NULL) { 222 struct netlbl_domaddr6_map *map6; 223 map6 = netlbl_domhsh_addr6_entry(addr6); 224 + type = map6->def.type; 225 netlbl_af6list_audit_addr(audit_buf, 0, NULL, 226 &addr6->addr, &addr6->mask); 227 #endif /* IPv6 */ 228 } else { 229 + type = entry->def.type; 230 + cipsov4 = entry->def.cipso; 231 } 232 switch (type) { 233 case NETLBL_NLTYPE_UNLABELED: ··· 265 if (entry == NULL) 266 return -EINVAL; 267 268 + switch (entry->def.type) { 269 case NETLBL_NLTYPE_UNLABELED: 270 + if (entry->def.cipso != NULL || entry->def.addrsel != NULL) 271 return -EINVAL; 272 break; 273 case NETLBL_NLTYPE_CIPSOV4: 274 + if (entry->def.cipso == NULL) 275 return -EINVAL; 276 break; 277 case NETLBL_NLTYPE_ADDRSELECT: 278 + netlbl_af4list_foreach(iter4, &entry->def.addrsel->list4) { 279 map4 = netlbl_domhsh_addr4_entry(iter4); 280 + switch (map4->def.type) { 281 case NETLBL_NLTYPE_UNLABELED: 282 + if (map4->def.cipso != NULL) 283 return -EINVAL; 284 break; 285 case NETLBL_NLTYPE_CIPSOV4: 286 + if (map4->def.cipso == NULL) 287 return -EINVAL; 288 break; 289 default: ··· 292 } 293 } 294 #if IS_ENABLED(CONFIG_IPV6) 295 + netlbl_af6list_foreach(iter6, &entry->def.addrsel->list6) { 296 map6 = netlbl_domhsh_addr6_entry(iter6); 297 + switch (map6->def.type) { 298 case NETLBL_NLTYPE_UNLABELED: 299 break; 300 default: ··· 402 rcu_assign_pointer(netlbl_domhsh_def, entry); 403 } 404 405 + if (entry->def.type == NETLBL_NLTYPE_ADDRSELECT) { 406 netlbl_af4list_foreach_rcu(iter4, 407 + &entry->def.addrsel->list4) 408 netlbl_domhsh_audit_add(entry, iter4, NULL, 409 ret_val, audit_info); 410 #if IS_ENABLED(CONFIG_IPV6) 411 netlbl_af6list_foreach_rcu(iter6, 412 + &entry->def.addrsel->list6) 413 netlbl_domhsh_audit_add(entry, NULL, iter6, 414 ret_val, audit_info); 415 #endif /* IPv6 */ 416 } else 417 netlbl_domhsh_audit_add(entry, NULL, NULL, 418 ret_val, audit_info); 419 + } else if (entry_old->def.type == NETLBL_NLTYPE_ADDRSELECT && 420 + entry->def.type == NETLBL_NLTYPE_ADDRSELECT) { 421 struct list_head *old_list4; 422 struct list_head *old_list6; 423 424 + old_list4 = &entry_old->def.addrsel->list4; 425 + old_list6 = &entry_old->def.addrsel->list6; 426 427 /* we only allow the addition of address selectors if all of 428 * the selectors do not exist in the existing domain map */ 429 + netlbl_af4list_foreach_rcu(iter4, &entry->def.addrsel->list4) 430 if (netlbl_af4list_search_exact(iter4->addr, 431 iter4->mask, 432 old_list4)) { ··· 435 goto add_return; 436 } 437 #if IS_ENABLED(CONFIG_IPV6) 438 + netlbl_af6list_foreach_rcu(iter6, &entry->def.addrsel->list6) 439 if (netlbl_af6list_search_exact(&iter6->addr, 440 &iter6->mask, 441 old_list6)) { ··· 446 #endif /* IPv6 */ 447 448 netlbl_af4list_foreach_safe(iter4, tmp4, 449 + &entry->def.addrsel->list4) { 450 netlbl_af4list_remove_entry(iter4); 451 iter4->valid = 1; 452 ret_val = netlbl_af4list_add(iter4, old_list4); ··· 457 } 458 #if IS_ENABLED(CONFIG_IPV6) 459 netlbl_af6list_foreach_safe(iter6, tmp6, 460 + &entry->def.addrsel->list6) { 461 netlbl_af6list_remove_entry(iter6); 462 iter6->valid = 1; 463 ret_val = netlbl_af6list_add(iter6, old_list6); ··· 538 struct netlbl_af4list *iter4; 539 struct netlbl_domaddr4_map *map4; 540 541 + switch (entry->def.type) { 542 case NETLBL_NLTYPE_ADDRSELECT: 543 netlbl_af4list_foreach_rcu(iter4, 544 + &entry->def.addrsel->list4) { 545 map4 = netlbl_domhsh_addr4_entry(iter4); 546 + cipso_v4_doi_putdef(map4->def.cipso); 547 } 548 /* no need to check the IPv6 list since we currently 549 * support only unlabeled protocols for IPv6 */ 550 break; 551 case NETLBL_NLTYPE_CIPSOV4: 552 + cipso_v4_doi_putdef(entry->def.cipso); 553 break; 554 } 555 call_rcu(&entry->rcu, netlbl_domhsh_free_entry); ··· 590 entry_map = netlbl_domhsh_search(domain); 591 else 592 entry_map = netlbl_domhsh_search_def(domain); 593 + if (entry_map == NULL || 594 + entry_map->def.type != NETLBL_NLTYPE_ADDRSELECT) 595 goto remove_af4_failure; 596 597 spin_lock(&netlbl_domhsh_lock); 598 entry_addr = netlbl_af4list_remove(addr->s_addr, mask->s_addr, 599 + &entry_map->def.addrsel->list4); 600 spin_unlock(&netlbl_domhsh_lock); 601 602 if (entry_addr == NULL) 603 goto remove_af4_failure; 604 + netlbl_af4list_foreach_rcu(iter4, &entry_map->def.addrsel->list4) 605 goto remove_af4_single_addr; 606 #if IS_ENABLED(CONFIG_IPV6) 607 + netlbl_af6list_foreach_rcu(iter6, &entry_map->def.addrsel->list6) 608 goto remove_af4_single_addr; 609 #endif /* IPv6 */ 610 /* the domain mapping is empty so remove it from the mapping table */ ··· 616 * shouldn't be a problem */ 617 synchronize_rcu(); 618 entry = netlbl_domhsh_addr4_entry(entry_addr); 619 + cipso_v4_doi_putdef(entry->def.cipso); 620 kfree(entry); 621 return 0; ··· 693 * responsible for ensuring that rcu_read_[un]lock() is called. 694 * 695 */ 696 + struct netlbl_dommap_def *netlbl_domhsh_getentry_af4(const char *domain, 697 + __be32 addr) 698 { 699 struct netlbl_dom_map *dom_iter; 700 struct netlbl_af4list *addr_iter; ··· 702 dom_iter = netlbl_domhsh_search_def(domain); 703 if (dom_iter == NULL) 704 return NULL; 705 706 + if (dom_iter->def.type != NETLBL_NLTYPE_ADDRSELECT) 707 + return &dom_iter->def; 708 + addr_iter = netlbl_af4list_search(addr, &dom_iter->def.addrsel->list4); 709 if (addr_iter == NULL) 710 return NULL; 711 + return &(netlbl_domhsh_addr4_entry(addr_iter)->def); 712 } 713 714 #if IS_ENABLED(CONFIG_IPV6) ··· 725 * responsible for ensuring that rcu_read_[un]lock() is called. 726 * 727 */ 728 + struct netlbl_dommap_def *netlbl_domhsh_getentry_af6(const char *domain, 729 const struct in6_addr *addr) 730 { 731 struct netlbl_dom_map *dom_iter; ··· 734 dom_iter = netlbl_domhsh_search_def(domain); 735 if (dom_iter == NULL) 736 return NULL; 737 738 + if (dom_iter->def.type != NETLBL_NLTYPE_ADDRSELECT) 739 + return &dom_iter->def; 740 + addr_iter = netlbl_af6list_search(addr, &dom_iter->def.addrsel->list6); 741 if (addr_iter == NULL) 742 return NULL; 743 + return &(netlbl_domhsh_addr6_entry(addr_iter)->def); 744 } 745 #endif /* IPv6 */ 746
+22 -24
net/netlabel/netlabel_domainhash.h
··· 43 #define NETLBL_DOMHSH_BITSIZE 7 44 45 /* Domain mapping definition structures */ 46 #define netlbl_domhsh_addr4_entry(iter) \ 47 container_of(iter, struct netlbl_domaddr4_map, list) 48 struct netlbl_domaddr4_map { 49 - u32 type; 50 - union { 51 - struct cipso_v4_doi *cipsov4; 52 - } type_def; 53 54 struct netlbl_af4list list; 55 }; 56 #define netlbl_domhsh_addr6_entry(iter) \ 57 container_of(iter, struct netlbl_domaddr6_map, list) 58 struct netlbl_domaddr6_map { 59 - u32 type; 60 - 61 - /* NOTE: no 'type_def' union needed at present since we don't currently 62 - * support any IPv6 labeling protocols */ 63 64 struct netlbl_af6list list; 65 }; 66 - struct netlbl_domaddr_map { 67 - struct list_head list4; 68 - struct list_head list6; 69 - }; 70 struct netlbl_dom_map { 71 char *domain; 72 - u32 type; 73 - union { 74 - struct cipso_v4_doi *cipsov4; 75 - struct netlbl_domaddr_map *addrsel; 76 - } type_def; 77 78 u32 valid; 79 struct list_head list; ··· 95 int netlbl_domhsh_remove(const char *domain, struct netlbl_audit *audit_info); 96 int netlbl_domhsh_remove_default(struct netlbl_audit *audit_info); 97 struct netlbl_dom_map *netlbl_domhsh_getentry(const char *domain); 98 - struct netlbl_domaddr4_map *netlbl_domhsh_getentry_af4(const char *domain, 99 - __be32 addr); 100 int netlbl_domhsh_walk(u32 *skip_bkt, 101 u32 *skip_chain, 102 int (*callback) (struct netlbl_dom_map *entry, void *arg), 103 void *cb_arg); 104 - 105 - #if IS_ENABLED(CONFIG_IPV6) 106 - struct netlbl_domaddr6_map *netlbl_domhsh_getentry_af6(const char *domain, 107 - const struct in6_addr *addr); 108 - #endif /* IPv6 */ 109 110 #endif
··· 43 #define NETLBL_DOMHSH_BITSIZE 7 44 45 /* Domain mapping definition structures */ 46 + struct netlbl_domaddr_map { 47 + struct list_head list4; 48 + struct list_head list6; 49 + }; 50 + struct netlbl_dommap_def { 51 + u32 type; 52 + union { 53 + struct netlbl_domaddr_map *addrsel; 54 + struct cipso_v4_doi *cipso; 55 + }; 56 + }; 57 #define netlbl_domhsh_addr4_entry(iter) \ 58 container_of(iter, struct netlbl_domaddr4_map, list) 59 struct netlbl_domaddr4_map { 60 + struct netlbl_dommap_def def; 61 62 struct netlbl_af4list list; 63 }; 64 #define netlbl_domhsh_addr6_entry(iter) \ 65 container_of(iter, struct netlbl_domaddr6_map, list) 66 struct netlbl_domaddr6_map { 67 + struct netlbl_dommap_def def; 68 69 struct netlbl_af6list list; 70 }; 71 + 72 struct netlbl_dom_map { 73 char *domain; 74 + struct netlbl_dommap_def def; 75 76 u32 valid; 77 struct list_head list; ··· 97 int netlbl_domhsh_remove(const char *domain, struct netlbl_audit *audit_info); 98 int netlbl_domhsh_remove_default(struct netlbl_audit *audit_info); 99 struct netlbl_dom_map *netlbl_domhsh_getentry(const char *domain); 100 + struct netlbl_dommap_def *netlbl_domhsh_getentry_af4(const char *domain, 101 + __be32 addr); 102 + #if IS_ENABLED(CONFIG_IPV6) 103 + struct netlbl_dommap_def *netlbl_domhsh_getentry_af6(const char *domain, 104 + const struct in6_addr *addr); 105 + #endif /* IPv6 */ 106 + 107 int netlbl_domhsh_walk(u32 *skip_bkt, 108 u32 *skip_chain, 109 int (*callback) (struct netlbl_dom_map *entry, void *arg), 110 void *cb_arg); 111 112 #endif
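The header refactor folds the repeated type-plus-union pair into a single struct netlbl_dommap_def shared by domain entries and per-address entries, which is what lets the getentry_af4/af6 lookups return the same definition and the kapi code fall back to domain based selectors. A toy C11 version of that tagged-union consolidation, names invented:

#include <stdio.h>

enum map_type { MAP_UNLABELED, MAP_CIPSO, MAP_ADDRSEL };

struct cipso_doi {
        unsigned int doi;
};

/* One definition struct usable from several container types, in the
 * spirit of struct netlbl_dommap_def (anonymous union needs C11). */
struct map_def {
        enum map_type type;
        union {
                struct cipso_doi *cipso;
                void *addrsel;
        };
};

int main(void)
{
        struct cipso_doi doi = { .doi = 3 };
        struct map_def def = { .type = MAP_CIPSO, .cipso = &doi };

        if (def.type == MAP_CIPSO)      /* check the tag before the union */
                printf("doi=%u\n", def.cipso->doi);
        return 0;
}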
+35 -53
net/netlabel/netlabel_kapi.c
··· 122 } 123 124 if (addr == NULL && mask == NULL) 125 - entry->type = NETLBL_NLTYPE_UNLABELED; 126 else if (addr != NULL && mask != NULL) { 127 addrmap = kzalloc(sizeof(*addrmap), GFP_ATOMIC); 128 if (addrmap == NULL) ··· 137 map4 = kzalloc(sizeof(*map4), GFP_ATOMIC); 138 if (map4 == NULL) 139 goto cfg_unlbl_map_add_failure; 140 - map4->type = NETLBL_NLTYPE_UNLABELED; 141 map4->list.addr = addr4->s_addr & mask4->s_addr; 142 map4->list.mask = mask4->s_addr; 143 map4->list.valid = 1; ··· 154 map6 = kzalloc(sizeof(*map6), GFP_ATOMIC); 155 if (map6 == NULL) 156 goto cfg_unlbl_map_add_failure; 157 - map6->type = NETLBL_NLTYPE_UNLABELED; 158 map6->list.addr = *addr6; 159 map6->list.addr.s6_addr32[0] &= mask6->s6_addr32[0]; 160 map6->list.addr.s6_addr32[1] &= mask6->s6_addr32[1]; ··· 174 break; 175 } 176 177 - entry->type_def.addrsel = addrmap; 178 - entry->type = NETLBL_NLTYPE_ADDRSELECT; 179 } else { 180 ret_val = -EINVAL; 181 goto cfg_unlbl_map_add_failure; ··· 355 } 356 357 if (addr == NULL && mask == NULL) { 358 - entry->type_def.cipsov4 = doi_def; 359 - entry->type = NETLBL_NLTYPE_CIPSOV4; 360 } else if (addr != NULL && mask != NULL) { 361 addrmap = kzalloc(sizeof(*addrmap), GFP_ATOMIC); 362 if (addrmap == NULL) ··· 367 addrinfo = kzalloc(sizeof(*addrinfo), GFP_ATOMIC); 368 if (addrinfo == NULL) 369 goto out_addrinfo; 370 - addrinfo->type_def.cipsov4 = doi_def; 371 - addrinfo->type = NETLBL_NLTYPE_CIPSOV4; 372 addrinfo->list.addr = addr->s_addr & mask->s_addr; 373 addrinfo->list.mask = mask->s_addr; 374 addrinfo->list.valid = 1; ··· 376 if (ret_val != 0) 377 goto cfg_cipsov4_map_add_failure; 378 379 - entry->type_def.addrsel = addrmap; 380 - entry->type = NETLBL_NLTYPE_ADDRSELECT; 381 } else { 382 ret_val = -EINVAL; 383 goto out_addrmap; ··· 657 } 658 switch (family) { 659 case AF_INET: 660 - switch (dom_entry->type) { 661 case NETLBL_NLTYPE_ADDRSELECT: 662 ret_val = -EDESTADDRREQ; 663 break; 664 case NETLBL_NLTYPE_CIPSOV4: 665 ret_val = cipso_v4_sock_setattr(sk, 666 - dom_entry->type_def.cipsov4, 667 - secattr); 668 break; 669 case NETLBL_NLTYPE_UNLABELED: 670 ret_val = 0; ··· 754 { 755 int ret_val; 756 struct sockaddr_in *addr4; 757 - struct netlbl_domaddr4_map *af4_entry; 758 759 rcu_read_lock(); 760 switch (addr->sa_family) { 761 case AF_INET: 762 addr4 = (struct sockaddr_in *)addr; 763 - af4_entry = netlbl_domhsh_getentry_af4(secattr->domain, 764 - addr4->sin_addr.s_addr); 765 - if (af4_entry == NULL) { 766 ret_val = -ENOENT; 767 goto conn_setattr_return; 768 } 769 - switch (af4_entry->type) { 770 case NETLBL_NLTYPE_CIPSOV4: 771 ret_val = cipso_v4_sock_setattr(sk, 772 - af4_entry->type_def.cipsov4, 773 - secattr); 774 break; 775 case NETLBL_NLTYPE_UNLABELED: 776 /* just delete the protocols we support for right now ··· 811 const struct netlbl_lsm_secattr *secattr) 812 { 813 int ret_val; 814 - struct netlbl_dom_map *dom_entry; 815 - struct netlbl_domaddr4_map *af4_entry; 816 - u32 proto_type; 817 - struct cipso_v4_doi *proto_cv4; 818 819 rcu_read_lock(); 820 - dom_entry = netlbl_domhsh_getentry(secattr->domain); 821 - if (dom_entry == NULL) { 822 - ret_val = -ENOENT; 823 - goto req_setattr_return; 824 - } 825 switch (req->rsk_ops->family) { 826 case AF_INET: 827 - if (dom_entry->type == NETLBL_NLTYPE_ADDRSELECT) { 828 - struct inet_request_sock *req_inet = inet_rsk(req); 829 - af4_entry = netlbl_domhsh_getentry_af4(secattr->domain, 830 - req_inet->rmt_addr); 831 - if (af4_entry == NULL) { 832 - ret_val = -ENOENT; 833 - goto req_setattr_return; 834 - } 835 - proto_type = af4_entry->type; 836 - proto_cv4 = af4_entry->type_def.cipsov4; 837 - } else { 838 - proto_type = dom_entry->type; 839 - proto_cv4 = dom_entry->type_def.cipsov4; 840 } 841 - switch (proto_type) { 842 case NETLBL_NLTYPE_CIPSOV4: 843 - ret_val = cipso_v4_req_setattr(req, proto_cv4, secattr); 844 break; 845 case NETLBL_NLTYPE_UNLABELED: 846 /* just delete the protocols we support for right now ··· 883 { 884 int ret_val; 885 struct iphdr *hdr4; 886 - struct netlbl_domaddr4_map *af4_entry; 887 888 rcu_read_lock(); 889 switch (family) { 890 case AF_INET: 891 hdr4 = ip_hdr(skb); 892 - af4_entry = netlbl_domhsh_getentry_af4(secattr->domain, 893 - hdr4->daddr); 894 - if (af4_entry == NULL) { 895 ret_val = -ENOENT; 896 goto skbuff_setattr_return; 897 } 898 - switch (af4_entry->type) { 899 case NETLBL_NLTYPE_CIPSOV4: 900 - ret_val = cipso_v4_skbuff_setattr(skb, 901 - af4_entry->type_def.cipsov4, 902 - secattr); 903 break; 904 case NETLBL_NLTYPE_UNLABELED: 905 /* just delete the protocols we support for right now
··· 122 } 123 124 if (addr == NULL && mask == NULL) 125 + entry->def.type = NETLBL_NLTYPE_UNLABELED; 126 else if (addr != NULL && mask != NULL) { 127 addrmap = kzalloc(sizeof(*addrmap), GFP_ATOMIC); 128 if (addrmap == NULL) ··· 137 map4 = kzalloc(sizeof(*map4), GFP_ATOMIC); 138 if (map4 == NULL) 139 goto cfg_unlbl_map_add_failure; 140 + map4->def.type = NETLBL_NLTYPE_UNLABELED; 141 map4->list.addr = addr4->s_addr & mask4->s_addr; 142 map4->list.mask = mask4->s_addr; 143 map4->list.valid = 1; ··· 154 map6 = kzalloc(sizeof(*map6), GFP_ATOMIC); 155 if (map6 == NULL) 156 goto cfg_unlbl_map_add_failure; 157 + map6->def.type = NETLBL_NLTYPE_UNLABELED; 158 map6->list.addr = *addr6; 159 map6->list.addr.s6_addr32[0] &= mask6->s6_addr32[0]; 160 map6->list.addr.s6_addr32[1] &= mask6->s6_addr32[1]; ··· 174 break; 175 } 176 177 + entry->def.addrsel = addrmap; 178 + entry->def.type = NETLBL_NLTYPE_ADDRSELECT; 179 } else { 180 ret_val = -EINVAL; 181 goto cfg_unlbl_map_add_failure; ··· 355 } 356 357 if (addr == NULL && mask == NULL) { 358 + entry->def.cipso = doi_def; 359 + entry->def.type = NETLBL_NLTYPE_CIPSOV4; 360 } else if (addr != NULL && mask != NULL) { 361 addrmap = kzalloc(sizeof(*addrmap), GFP_ATOMIC); 362 if (addrmap == NULL) ··· 367 addrinfo = kzalloc(sizeof(*addrinfo), GFP_ATOMIC); 368 if (addrinfo == NULL) 369 goto out_addrinfo; 370 + addrinfo->def.cipso = doi_def; 371 + addrinfo->def.type = NETLBL_NLTYPE_CIPSOV4; 372 addrinfo->list.addr = addr->s_addr & mask->s_addr; 373 addrinfo->list.mask = mask->s_addr; 374 addrinfo->list.valid = 1; ··· 376 if (ret_val != 0) 377 goto cfg_cipsov4_map_add_failure; 378 379 + entry->def.addrsel = addrmap; 380 + entry->def.type = NETLBL_NLTYPE_ADDRSELECT; 381 } else { 382 ret_val = -EINVAL; 383 goto out_addrmap; ··· 657 } 658 switch (family) { 659 case AF_INET: 660 + switch (dom_entry->def.type) { 661 case NETLBL_NLTYPE_ADDRSELECT: 662 ret_val = -EDESTADDRREQ; 663 break; 664 case NETLBL_NLTYPE_CIPSOV4: 665 ret_val = cipso_v4_sock_setattr(sk, 666 + dom_entry->def.cipso, 667 + secattr); 668 break; 669 case NETLBL_NLTYPE_UNLABELED: 670 ret_val = 0; ··· 754 { 755 int ret_val; 756 struct sockaddr_in *addr4; 757 + struct netlbl_dommap_def *entry; 758 759 rcu_read_lock(); 760 switch (addr->sa_family) { 761 case AF_INET: 762 addr4 = (struct sockaddr_in *)addr; 763 + entry = netlbl_domhsh_getentry_af4(secattr->domain, 764 + addr4->sin_addr.s_addr); 765 + if (entry == NULL) { 766 ret_val = -ENOENT; 767 goto conn_setattr_return; 768 } 769 + switch (entry->type) { 770 case NETLBL_NLTYPE_CIPSOV4: 771 ret_val = cipso_v4_sock_setattr(sk, 772 + entry->cipso, secattr); 773 break; 774 case NETLBL_NLTYPE_UNLABELED: 775 /* just delete the protocols we support for right now ··· 812 const struct netlbl_lsm_secattr *secattr) 813 { 814 int ret_val; 815 + struct netlbl_dommap_def *entry; 816 817 rcu_read_lock(); 818 switch (req->rsk_ops->family) { 819 case AF_INET: 820 + entry = netlbl_domhsh_getentry_af4(secattr->domain, 821 + inet_rsk(req)->rmt_addr); 822 + if (entry == NULL) { 823 + ret_val = -ENOENT; 824 + goto req_setattr_return; 825 } 826 + switch (entry->type) { 827 case NETLBL_NLTYPE_CIPSOV4: 828 + ret_val = cipso_v4_req_setattr(req, 829 + entry->cipso, secattr); 830 break; 831 case NETLBL_NLTYPE_UNLABELED: 832 /* just delete the protocols we support for right now ··· 899 { 900 int ret_val; 901 struct iphdr *hdr4; 902 + struct netlbl_dommap_def *entry; 903 904 rcu_read_lock(); 905 switch (family) { 906 case AF_INET: 907 hdr4 = ip_hdr(skb); 908 + entry = 
netlbl_domhsh_getentry_af4(secattr->domain, hdr4->daddr);
 909 + if (entry == NULL) {
 910 ret_val = -ENOENT;
 911 goto skbuff_setattr_return;
 912 }
 913 + switch (entry->type) {
 914 case NETLBL_NLTYPE_CIPSOV4:
 915 + ret_val = cipso_v4_skbuff_setattr(skb, entry->cipso,
 916 + secattr);
 917 break;
 918 case NETLBL_NLTYPE_UNLABELED:
 919 /* just delete the protocols we support for right now
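Taken together, the netlabel hunks replace the parallel type/type_def fields with a single embedded definition, and teach the IPv4 lookup to fall back to the domain-wide mapping when no address-based selector exists. A minimal userspace sketch of that shape, with simplified, hypothetical names rather than the exact kernel definitions:

    #include <stddef.h>
    #include <stdint.h>

    enum { NLTYPE_UNLABELED, NLTYPE_CIPSOV4, NLTYPE_ADDRSELECT };

    struct cipso_doi;                       /* opaque in this sketch */
    struct addr_map;                        /* opaque in this sketch */

    /* One definition embedded in both domain-wide and per-address entries. */
    struct dommap_def {
            uint32_t type;                  /* NLTYPE_* selector */
            union {
                    struct addr_map *addrsel;
                    struct cipso_doi *cipso;
            };
    };

    struct dom_entry      { struct dommap_def def; };
    struct domaddr4_entry { struct dommap_def def; };

    /* Stub: the real code walks the per-address selector list. */
    static struct domaddr4_entry *af4_search(struct dom_entry *d, uint32_t daddr)
    {
            (void)d; (void)daddr;
            return NULL;
    }

    /* The behavioural change: prefer an address-scoped mapping, but fall
     * back to the domain-based selector rather than failing outright. */
    struct dommap_def *getentry_af4(struct dom_entry *dom, uint32_t daddr)
    {
            struct domaddr4_entry *a4;

            if (dom == NULL)
                    return NULL;
            if (dom->def.type != NLTYPE_ADDRSELECT)
                    return &dom->def;       /* domain-wide fallback */
            a4 = af4_search(dom, daddr);
            return a4 ? &a4->def : NULL;
    }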
+21 -23
net/netlabel/netlabel_mgmt.c
··· 104 ret_val = -ENOMEM; 105 goto add_failure; 106 } 107 - entry->type = nla_get_u32(info->attrs[NLBL_MGMT_A_PROTOCOL]); 108 if (info->attrs[NLBL_MGMT_A_DOMAIN]) { 109 size_t tmp_size = nla_len(info->attrs[NLBL_MGMT_A_DOMAIN]); 110 entry->domain = kmalloc(tmp_size, GFP_KERNEL); ··· 116 info->attrs[NLBL_MGMT_A_DOMAIN], tmp_size); 117 } 118 119 - /* NOTE: internally we allow/use a entry->type value of 120 * NETLBL_NLTYPE_ADDRSELECT but we don't currently allow users 121 * to pass that as a protocol value because we need to know the 122 * "real" protocol */ 123 124 - switch (entry->type) { 125 case NETLBL_NLTYPE_UNLABELED: 126 break; 127 case NETLBL_NLTYPE_CIPSOV4: ··· 132 cipsov4 = cipso_v4_doi_getdef(tmp_val); 133 if (cipsov4 == NULL) 134 goto add_failure; 135 - entry->type_def.cipsov4 = cipsov4; 136 break; 137 default: 138 goto add_failure; ··· 172 map->list.addr = addr->s_addr & mask->s_addr; 173 map->list.mask = mask->s_addr; 174 map->list.valid = 1; 175 - map->type = entry->type; 176 if (cipsov4) 177 - map->type_def.cipsov4 = cipsov4; 178 179 ret_val = netlbl_af4list_add(&map->list, &addrmap->list4); 180 if (ret_val != 0) { ··· 182 goto add_failure; 183 } 184 185 - entry->type = NETLBL_NLTYPE_ADDRSELECT; 186 - entry->type_def.addrsel = addrmap; 187 #if IS_ENABLED(CONFIG_IPV6) 188 } else if (info->attrs[NLBL_MGMT_A_IPV6ADDR]) { 189 struct in6_addr *addr; ··· 223 map->list.addr.s6_addr32[3] &= mask->s6_addr32[3]; 224 map->list.mask = *mask; 225 map->list.valid = 1; 226 - map->type = entry->type; 227 228 ret_val = netlbl_af6list_add(&map->list, &addrmap->list6); 229 if (ret_val != 0) { ··· 231 goto add_failure; 232 } 233 234 - entry->type = NETLBL_NLTYPE_ADDRSELECT; 235 - entry->type_def.addrsel = addrmap; 236 #endif /* IPv6 */ 237 } 238 ··· 281 return ret_val; 282 } 283 284 - switch (entry->type) { 285 case NETLBL_NLTYPE_ADDRSELECT: 286 nla_a = nla_nest_start(skb, NLBL_MGMT_A_SELECTORLIST); 287 if (nla_a == NULL) 288 return -ENOMEM; 289 290 - netlbl_af4list_foreach_rcu(iter4, 291 - &entry->type_def.addrsel->list4) { 292 struct netlbl_domaddr4_map *map4; 293 struct in_addr addr_struct; 294 ··· 309 return ret_val; 310 map4 = netlbl_domhsh_addr4_entry(iter4); 311 ret_val = nla_put_u32(skb, NLBL_MGMT_A_PROTOCOL, 312 - map4->type); 313 if (ret_val != 0) 314 return ret_val; 315 - switch (map4->type) { 316 case NETLBL_NLTYPE_CIPSOV4: 317 ret_val = nla_put_u32(skb, NLBL_MGMT_A_CV4DOI, 318 - map4->type_def.cipsov4->doi); 319 if (ret_val != 0) 320 return ret_val; 321 break; ··· 324 nla_nest_end(skb, nla_b); 325 } 326 #if IS_ENABLED(CONFIG_IPV6) 327 - netlbl_af6list_foreach_rcu(iter6, 328 - &entry->type_def.addrsel->list6) { 329 struct netlbl_domaddr6_map *map6; 330 331 nla_b = nla_nest_start(skb, NLBL_MGMT_A_ADDRSELECTOR); ··· 343 return ret_val; 344 map6 = netlbl_domhsh_addr6_entry(iter6); 345 ret_val = nla_put_u32(skb, NLBL_MGMT_A_PROTOCOL, 346 - map6->type); 347 if (ret_val != 0) 348 return ret_val; 349 ··· 354 nla_nest_end(skb, nla_a); 355 break; 356 case NETLBL_NLTYPE_UNLABELED: 357 - ret_val = nla_put_u32(skb, NLBL_MGMT_A_PROTOCOL, entry->type); 358 break; 359 case NETLBL_NLTYPE_CIPSOV4: 360 - ret_val = nla_put_u32(skb, NLBL_MGMT_A_PROTOCOL, entry->type); 361 if (ret_val != 0) 362 return ret_val; 363 ret_val = nla_put_u32(skb, NLBL_MGMT_A_CV4DOI, 364 - entry->type_def.cipsov4->doi); 365 break; 366 } 367
··· 104 ret_val = -ENOMEM; 105 goto add_failure; 106 } 107 + entry->def.type = nla_get_u32(info->attrs[NLBL_MGMT_A_PROTOCOL]); 108 if (info->attrs[NLBL_MGMT_A_DOMAIN]) { 109 size_t tmp_size = nla_len(info->attrs[NLBL_MGMT_A_DOMAIN]); 110 entry->domain = kmalloc(tmp_size, GFP_KERNEL); ··· 116 info->attrs[NLBL_MGMT_A_DOMAIN], tmp_size); 117 } 118 119 + /* NOTE: internally we allow/use a entry->def.type value of 120 * NETLBL_NLTYPE_ADDRSELECT but we don't currently allow users 121 * to pass that as a protocol value because we need to know the 122 * "real" protocol */ 123 124 + switch (entry->def.type) { 125 case NETLBL_NLTYPE_UNLABELED: 126 break; 127 case NETLBL_NLTYPE_CIPSOV4: ··· 132 cipsov4 = cipso_v4_doi_getdef(tmp_val); 133 if (cipsov4 == NULL) 134 goto add_failure; 135 + entry->def.cipso = cipsov4; 136 break; 137 default: 138 goto add_failure; ··· 172 map->list.addr = addr->s_addr & mask->s_addr; 173 map->list.mask = mask->s_addr; 174 map->list.valid = 1; 175 + map->def.type = entry->def.type; 176 if (cipsov4) 177 + map->def.cipso = cipsov4; 178 179 ret_val = netlbl_af4list_add(&map->list, &addrmap->list4); 180 if (ret_val != 0) { ··· 182 goto add_failure; 183 } 184 185 + entry->def.type = NETLBL_NLTYPE_ADDRSELECT; 186 + entry->def.addrsel = addrmap; 187 #if IS_ENABLED(CONFIG_IPV6) 188 } else if (info->attrs[NLBL_MGMT_A_IPV6ADDR]) { 189 struct in6_addr *addr; ··· 223 map->list.addr.s6_addr32[3] &= mask->s6_addr32[3]; 224 map->list.mask = *mask; 225 map->list.valid = 1; 226 + map->def.type = entry->def.type; 227 228 ret_val = netlbl_af6list_add(&map->list, &addrmap->list6); 229 if (ret_val != 0) { ··· 231 goto add_failure; 232 } 233 234 + entry->def.type = NETLBL_NLTYPE_ADDRSELECT; 235 + entry->def.addrsel = addrmap; 236 #endif /* IPv6 */ 237 } 238 ··· 281 return ret_val; 282 } 283 284 + switch (entry->def.type) { 285 case NETLBL_NLTYPE_ADDRSELECT: 286 nla_a = nla_nest_start(skb, NLBL_MGMT_A_SELECTORLIST); 287 if (nla_a == NULL) 288 return -ENOMEM; 289 290 + netlbl_af4list_foreach_rcu(iter4, &entry->def.addrsel->list4) { 291 struct netlbl_domaddr4_map *map4; 292 struct in_addr addr_struct; 293 ··· 310 return ret_val; 311 map4 = netlbl_domhsh_addr4_entry(iter4); 312 ret_val = nla_put_u32(skb, NLBL_MGMT_A_PROTOCOL, 313 + map4->def.type); 314 if (ret_val != 0) 315 return ret_val; 316 + switch (map4->def.type) { 317 case NETLBL_NLTYPE_CIPSOV4: 318 ret_val = nla_put_u32(skb, NLBL_MGMT_A_CV4DOI, 319 + map4->def.cipso->doi); 320 if (ret_val != 0) 321 return ret_val; 322 break; ··· 325 nla_nest_end(skb, nla_b); 326 } 327 #if IS_ENABLED(CONFIG_IPV6) 328 + netlbl_af6list_foreach_rcu(iter6, &entry->def.addrsel->list6) { 329 struct netlbl_domaddr6_map *map6; 330 331 nla_b = nla_nest_start(skb, NLBL_MGMT_A_ADDRSELECTOR); ··· 345 return ret_val; 346 map6 = netlbl_domhsh_addr6_entry(iter6); 347 ret_val = nla_put_u32(skb, NLBL_MGMT_A_PROTOCOL, 348 + map6->def.type); 349 if (ret_val != 0) 350 return ret_val; 351 ··· 356 nla_nest_end(skb, nla_a); 357 break; 358 case NETLBL_NLTYPE_UNLABELED: 359 + ret_val = nla_put_u32(skb,NLBL_MGMT_A_PROTOCOL,entry->def.type); 360 break; 361 case NETLBL_NLTYPE_CIPSOV4: 362 + ret_val = nla_put_u32(skb,NLBL_MGMT_A_PROTOCOL,entry->def.type); 363 if (ret_val != 0) 364 return ret_val; 365 ret_val = nla_put_u32(skb, NLBL_MGMT_A_CV4DOI, 366 + entry->def.cipso->doi); 367 break; 368 } 369
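The management-interface hunks show the caller-side payoff: one def accessor works for both domain entries and per-address entries, so setattr and dump code needs a single switch on the type selector. Continuing the sketch above (same hypothetical names):

    #include <errno.h>

    /* Every setattr-style caller reduces to one lookup plus one switch,
     * regardless of whether the mapping was domain-wide or address-scoped. */
    int apply_mapping(struct dom_entry *dom, uint32_t daddr)
    {
            struct dommap_def *entry = getentry_af4(dom, daddr);

            if (entry == NULL)
                    return -ENOENT;
            switch (entry->type) {
            case NLTYPE_CIPSOV4:
                    /* label the flow via entry->cipso */
                    return 0;
            case NLTYPE_UNLABELED:
                    return 0;
            default:
                    return -EINVAL;
            }
    }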
+1 -1
net/netlabel/netlabel_unlabeled.c
··· 1541 entry = kzalloc(sizeof(*entry), GFP_KERNEL); 1542 if (entry == NULL) 1543 return -ENOMEM; 1544 - entry->type = NETLBL_NLTYPE_UNLABELED; 1545 ret_val = netlbl_domhsh_add_default(entry, &audit_info); 1546 if (ret_val != 0) 1547 return ret_val;
··· 1541 entry = kzalloc(sizeof(*entry), GFP_KERNEL); 1542 if (entry == NULL) 1543 return -ENOMEM; 1544 + entry->def.type = NETLBL_NLTYPE_UNLABELED; 1545 ret_val = netlbl_domhsh_add_default(entry, &audit_info); 1546 if (ret_val != 0) 1547 return ret_val;
+10 -10
net/nfc/core.c
··· 44 /* NFC device ID bitmap */ 45 static DEFINE_IDA(nfc_index_ida); 46 47 - int nfc_fw_upload(struct nfc_dev *dev, const char *firmware_name) 48 { 49 int rc = 0; 50 ··· 62 goto error; 63 } 64 65 - if (!dev->ops->fw_upload) { 66 rc = -EOPNOTSUPP; 67 goto error; 68 } 69 70 - dev->fw_upload_in_progress = true; 71 - rc = dev->ops->fw_upload(dev, firmware_name); 72 if (rc) 73 - dev->fw_upload_in_progress = false; 74 75 error: 76 device_unlock(&dev->dev); 77 return rc; 78 } 79 80 - int nfc_fw_upload_done(struct nfc_dev *dev, const char *firmware_name) 81 { 82 - dev->fw_upload_in_progress = false; 83 84 - return nfc_genl_fw_upload_done(dev, firmware_name); 85 } 86 - EXPORT_SYMBOL(nfc_fw_upload_done); 87 88 /** 89 * nfc_dev_up - turn on the NFC device ··· 110 goto error; 111 } 112 113 - if (dev->fw_upload_in_progress) { 114 rc = -EBUSY; 115 goto error; 116 }
··· 44 /* NFC device ID bitmap */ 45 static DEFINE_IDA(nfc_index_ida); 46 47 + int nfc_fw_download(struct nfc_dev *dev, const char *firmware_name) 48 { 49 int rc = 0; 50 ··· 62 goto error; 63 } 64 65 + if (!dev->ops->fw_download) { 66 rc = -EOPNOTSUPP; 67 goto error; 68 } 69 70 + dev->fw_download_in_progress = true; 71 + rc = dev->ops->fw_download(dev, firmware_name); 72 if (rc) 73 + dev->fw_download_in_progress = false; 74 75 error: 76 device_unlock(&dev->dev); 77 return rc; 78 } 79 80 + int nfc_fw_download_done(struct nfc_dev *dev, const char *firmware_name) 81 { 82 + dev->fw_download_in_progress = false; 83 84 + return nfc_genl_fw_download_done(dev, firmware_name); 85 } 86 + EXPORT_SYMBOL(nfc_fw_download_done); 87 88 /** 89 * nfc_dev_up - turn on the NFC device ··· 110 goto error; 111 } 112 113 + if (dev->fw_download_in_progress) { 114 rc = -EBUSY; 115 goto error; 116 }
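Beyond renaming upload to download (the firmware is pushed down to the NFC chip, making "download" the accurate term), note the in-progress flag pattern: it is set before the driver callback runs and cleared either on synchronous failure or later by the _done() completion path. A generic sketch of that guard pattern, hypothetical names throughout:

    #include <errno.h>
    #include <stdbool.h>
    #include <stddef.h>

    struct dev {
            bool fw_download_in_progress;
            int (*fw_download)(struct dev *d, const char *fw_name);
    };

    int do_fw_download(struct dev *d, const char *fw_name)
    {
            int rc;

            if (d->fw_download == NULL)
                    return -EOPNOTSUPP;     /* driver cannot do it */

            /* Set the busy flag before calling down, so concurrent
             * device-up requests can bail out with -EBUSY; clear it only
             * on synchronous failure, since on success the completion
             * handler clears it later. */
            d->fw_download_in_progress = true;
            rc = d->fw_download(d, fw_name);
            if (rc)
                    d->fw_download_in_progress = false;
            return rc;
    }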
+4 -4
net/nfc/hci/core.c
··· 809 } 810 } 811 812 - static int hci_fw_upload(struct nfc_dev *nfc_dev, const char *firmware_name) 813 { 814 struct nfc_hci_dev *hdev = nfc_get_drvdata(nfc_dev); 815 816 - if (!hdev->ops->fw_upload) 817 return -ENOTSUPP; 818 819 - return hdev->ops->fw_upload(hdev, firmware_name); 820 } 821 822 static struct nfc_ops hci_nfc_ops = { ··· 831 .im_transceive = hci_transceive, 832 .tm_send = hci_tm_send, 833 .check_presence = hci_check_presence, 834 - .fw_upload = hci_fw_upload, 835 .discover_se = hci_discover_se, 836 .enable_se = hci_enable_se, 837 .disable_se = hci_disable_se,
··· 809 } 810 } 811 812 + static int hci_fw_download(struct nfc_dev *nfc_dev, const char *firmware_name) 813 { 814 struct nfc_hci_dev *hdev = nfc_get_drvdata(nfc_dev); 815 816 + if (!hdev->ops->fw_download) 817 return -ENOTSUPP; 818 819 + return hdev->ops->fw_download(hdev, firmware_name); 820 } 821 822 static struct nfc_ops hci_nfc_ops = { ··· 831 .im_transceive = hci_transceive, 832 .tm_send = hci_tm_send, 833 .check_presence = hci_check_presence, 834 + .fw_download = hci_fw_download, 835 .discover_se = hci_discover_se, 836 .enable_se = hci_enable_se, 837 .disable_se = hci_disable_se,
+1
net/nfc/nci/Kconfig
··· 11 12 config NFC_NCI_SPI 13 depends on NFC_NCI && SPI 14 bool "NCI over SPI protocol support" 15 default n 16 help
··· 11 12 config NFC_NCI_SPI 13 depends on NFC_NCI && SPI 14 + select CRC_CCITT 15 bool "NCI over SPI protocol support" 16 default n 17 help
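The added "select CRC_CCITT" matters because the NCI-over-SPI framing checksums its payload with CRC-CCITT; without the select, configs that enable NFC_NCI_SPI but nothing else using the library fail to link. For reference, a bit-at-a-time userspace equivalent of what that library routine computes (LSB-first, polynomial 0x8408), believed to match the kernel's table-driven helper:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    static uint16_t crc_ccitt(uint16_t crc, const uint8_t *buf, size_t len)
    {
            while (len--) {
                    crc ^= *buf++;
                    for (int i = 0; i < 8; i++)
                            crc = (crc & 1) ? (crc >> 1) ^ 0x8408 : crc >> 1;
            }
            return crc;
    }

    int main(void)
    {
            const uint8_t frame[] = { 0x01, 0x02, 0x03 };

            printf("fcs=%04x\n", crc_ccitt(0xffff, frame, sizeof(frame)));
            return 0;
    }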
+6 -6
net/nfc/netlink.c
··· 1089 return rc; 1090 } 1091 1092 - static int nfc_genl_fw_upload(struct sk_buff *skb, struct genl_info *info) 1093 { 1094 struct nfc_dev *dev; 1095 int rc; ··· 1108 nla_strlcpy(firmware_name, info->attrs[NFC_ATTR_FIRMWARE_NAME], 1109 sizeof(firmware_name)); 1110 1111 - rc = nfc_fw_upload(dev, firmware_name); 1112 1113 nfc_put_device(dev); 1114 return rc; 1115 } 1116 1117 - int nfc_genl_fw_upload_done(struct nfc_dev *dev, const char *firmware_name) 1118 { 1119 struct sk_buff *msg; 1120 void *hdr; ··· 1124 return -ENOMEM; 1125 1126 hdr = genlmsg_put(msg, 0, 0, &nfc_genl_family, 0, 1127 - NFC_CMD_FW_UPLOAD); 1128 if (!hdr) 1129 goto free_msg; 1130 ··· 1251 .policy = nfc_genl_policy, 1252 }, 1253 { 1254 - .cmd = NFC_CMD_FW_UPLOAD, 1255 - .doit = nfc_genl_fw_upload, 1256 .policy = nfc_genl_policy, 1257 }, 1258 {
··· 1089 return rc; 1090 } 1091 1092 + static int nfc_genl_fw_download(struct sk_buff *skb, struct genl_info *info) 1093 { 1094 struct nfc_dev *dev; 1095 int rc; ··· 1108 nla_strlcpy(firmware_name, info->attrs[NFC_ATTR_FIRMWARE_NAME], 1109 sizeof(firmware_name)); 1110 1111 + rc = nfc_fw_download(dev, firmware_name); 1112 1113 nfc_put_device(dev); 1114 return rc; 1115 } 1116 1117 + int nfc_genl_fw_download_done(struct nfc_dev *dev, const char *firmware_name) 1118 { 1119 struct sk_buff *msg; 1120 void *hdr; ··· 1124 return -ENOMEM; 1125 1126 hdr = genlmsg_put(msg, 0, 0, &nfc_genl_family, 0, 1127 + NFC_CMD_FW_DOWNLOAD); 1128 if (!hdr) 1129 goto free_msg; 1130 ··· 1251 .policy = nfc_genl_policy, 1252 }, 1253 { 1254 + .cmd = NFC_CMD_FW_DOWNLOAD, 1255 + .doit = nfc_genl_fw_download, 1256 .policy = nfc_genl_policy, 1257 }, 1258 {
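The netlink hunk renames the uapi command and its handler together; userspace selects the handler by NFC_CMD_* value, so the enum and the op-table entry have to move in lockstep. A simplified sketch of that table shape (made-up values, hypothetical names):

    struct genl_op {
            unsigned int cmd;
            int (*doit)(void);
    };

    static int fw_download_doit(void)
    {
            return 0;                       /* parse attrs, start download */
    }

    static const struct genl_op nfc_ops[] = {
            { .cmd = 20 /* NFC_CMD_FW_DOWNLOAD, value made up */,
              .doit = fw_download_doit },
    };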
+3 -3
net/nfc/nfc.h
··· 123 class_dev_iter_exit(iter); 124 } 125 126 - int nfc_fw_upload(struct nfc_dev *dev, const char *firmware_name); 127 - int nfc_genl_fw_upload_done(struct nfc_dev *dev, const char *firmware_name); 128 129 - int nfc_fw_upload_done(struct nfc_dev *dev, const char *firmware_name); 130 131 int nfc_dev_up(struct nfc_dev *dev); 132
··· 123 class_dev_iter_exit(iter); 124 } 125 126 + int nfc_fw_download(struct nfc_dev *dev, const char *firmware_name); 127 + int nfc_genl_fw_download_done(struct nfc_dev *dev, const char *firmware_name); 128 129 + int nfc_fw_download_done(struct nfc_dev *dev, const char *firmware_name); 130 131 int nfc_dev_up(struct nfc_dev *dev); 132
+1
net/sched/sch_atm.c
··· 605 struct sockaddr_atmpvc pvc; 606 int state; 607 608 pvc.sap_family = AF_ATMPVC; 609 pvc.sap_addr.itf = flow->vcc->dev ? flow->vcc->dev->number : -1; 610 pvc.sap_addr.vpi = flow->vcc->vpi;
··· 605 struct sockaddr_atmpvc pvc; 606 int state; 607 608 + memset(&pvc, 0, sizeof(pvc)); 609 pvc.sap_family = AF_ATMPVC; 610 pvc.sap_addr.itf = flow->vcc->dev ? flow->vcc->dev->number : -1; 611 pvc.sap_addr.vpi = flow->vcc->vpi;
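The one-line memset closes an info leak: the sockaddr is assembled on the kernel stack and copied out to userspace during a dump, so any padding or unset bytes would otherwise carry stale stack contents. A standalone illustration of the hazard class (not the kernel struct):

    #include <stdio.h>
    #include <string.h>

    struct reply {
            char tag;       /* the compiler pads after this member ... */
            long value;     /* ... so sizeof exceeds the members' total */
    };

    void fill(struct reply *r)
    {
            /* Without this memset the padding bytes keep whatever was on
             * the stack and would be copied out to userspace verbatim. */
            memset(r, 0, sizeof(*r));
            r->tag = 'A';
            r->value = 42;
    }

    int main(void)
    {
            struct reply r;

            fill(&r);
            printf("%zu bytes, every one initialized\n", sizeof(r));
            return 0;
    }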
+1 -1
net/sched/sch_htb.c
··· 100 struct psched_ratecfg ceil; 101 s64 buffer, cbuffer;/* token bucket depth/rate */ 102 s64 mbuffer; /* max wait time */ 103 - int prio; /* these two are used only by leaves... */ 104 int quantum; /* but stored for parent-to-leaf return */ 105 106 struct tcf_proto *filter_list; /* class attached filters */
··· 100 struct psched_ratecfg ceil; 101 s64 buffer, cbuffer;/* token bucket depth/rate */ 102 s64 mbuffer; /* max wait time */ 103 + u32 prio; /* these two are used only by leaves... */ 104 int quantum; /* but stored for parent-to-leaf return */ 105 106 struct tcf_proto *filter_list; /* class attached filters */
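Changing prio from int to u32 fixes a sign-extension bug: the value arrives from userspace as a u32, and storing it in a signed int lets a huge value turn negative, slip past an upper-bound-only check, and later index an array out of bounds. A compact demonstration:

    #include <stdint.h>
    #include <stdio.h>

    #define NUMPRIO 8       /* stands in for TC_HTB_NUMPRIO */

    int main(void)
    {
            uint32_t from_user = 0xffffffffu;  /* attacker-chosen netlink u32 */
            int as_int = (int)from_user;       /* becomes -1 */
            uint32_t as_u32 = from_user;

            /* The signed copy slips past an upper-bound-only check ... */
            if (as_int >= NUMPRIO)
                    puts("signed: rejected");
            else
                    puts("signed: accepted, would index out of bounds");

            /* ... while the u32 copy is caught. */
            if (as_u32 >= NUMPRIO)
                    puts("unsigned: rejected");
            return 0;
    }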
+1 -1
net/socket.c
··· 106 #include <linux/atalk.h> 107 #include <net/busy_poll.h> 108 109 - #ifdef CONFIG_NET_LL_RX_POLL 110 unsigned int sysctl_net_busy_read __read_mostly; 111 unsigned int sysctl_net_busy_poll __read_mostly; 112 #endif
··· 106 #include <linux/atalk.h> 107 #include <net/busy_poll.h> 108 109 + #ifdef CONFIG_NET_RX_BUSY_POLL 110 unsigned int sysctl_net_busy_read __read_mostly; 111 unsigned int sysctl_net_busy_poll __read_mostly; 112 #endif
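A rename like this has to reach every preprocessor test at once: a file still checking the old symbol silently loses its guarded code at compile time, and other files referencing those now-missing definitions fail to build. A toy demonstration of the failure mode:

    #include <stdio.h>

    #define CONFIG_NET_RX_BUSY_POLL 1       /* the new symbol */

    #ifdef CONFIG_NET_LL_RX_POLL            /* stale name: never defined */
    unsigned int sysctl_net_busy_read;      /* silently dropped from the build */
    #endif

    int main(void)
    {
    #ifdef CONFIG_NET_RX_BUSY_POLL
            puts("busy-poll knobs compiled in");
    #endif
            return 0;
    }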
+12 -3
net/tipc/server.c
··· 355 return PTR_ERR(con); 356 357 sock = tipc_create_listen_sock(con); 358 - if (!sock) 359 return -EINVAL; 360 361 tipc_register_callbacks(sock, con); 362 return 0; ··· 567 kmem_cache_destroy(s->rcvbuf_cache); 568 return ret; 569 } 570 s->enabled = 1; 571 - 572 - return tipc_open_listening_sock(s); 573 } 574 575 void tipc_server_stop(struct tipc_server *s)
··· 355 return PTR_ERR(con); 356 357 sock = tipc_create_listen_sock(con); 358 + if (!sock) { 359 + idr_remove(&s->conn_idr, con->conid); 360 + s->idr_in_use--; 361 + kfree(con); 362 return -EINVAL; 363 + } 364 365 tipc_register_callbacks(sock, con); 366 return 0; ··· 563 kmem_cache_destroy(s->rcvbuf_cache); 564 return ret; 565 } 566 + ret = tipc_open_listening_sock(s); 567 + if (ret < 0) { 568 + tipc_work_stop(s); 569 + kmem_cache_destroy(s->rcvbuf_cache); 570 + return ret; 571 + } 572 s->enabled = 1; 573 + return ret; 574 } 575 576 void tipc_server_stop(struct tipc_server *s)
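Both tipc hunks apply the same error-path discipline: when a later setup step fails, everything acquired by the earlier steps must be released in reverse order (the connection and its IDR slot in the first hunk; the worker threads and receive-buffer cache in the second), and the server is only marked enabled once every step has succeeded. In sketch form, hypothetical names:

    struct server { int enabled; };

    static int create_cache(struct server *s)   { (void)s; return 0; }
    static void destroy_cache(struct server *s) { (void)s; }
    static int start_workers(struct server *s)  { (void)s; return 0; }
    static void stop_workers(struct server *s)  { (void)s; }
    static int open_listener(struct server *s)  { (void)s; return 0; }

    int server_start(struct server *s)
    {
            int ret;

            ret = create_cache(s);
            if (ret < 0)
                    return ret;
            ret = start_workers(s);
            if (ret < 0) {
                    destroy_cache(s);
                    return ret;
            }
            ret = open_listener(s);          /* the step the fix handles */
            if (ret < 0) {
                    stop_workers(s);         /* unwind in reverse order */
                    destroy_cache(s);
                    return ret;
            }
            s->enabled = 1;                  /* only on full success */
            return 0;
    }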
+4 -1
net/wireless/reg.c
··· 2247 2248 void wiphy_regulatory_register(struct wiphy *wiphy) 2249 { 2250 if (!reg_dev_ignore_cell_hint(wiphy)) 2251 reg_num_devs_support_basehint++; 2252 2253 - wiphy_update_regulatory(wiphy, NL80211_REGDOM_SET_BY_CORE); 2254 } 2255 2256 void wiphy_regulatory_deregister(struct wiphy *wiphy)
··· 2247 2248 void wiphy_regulatory_register(struct wiphy *wiphy) 2249 { 2250 + struct regulatory_request *lr; 2251 + 2252 if (!reg_dev_ignore_cell_hint(wiphy)) 2253 reg_num_devs_support_basehint++; 2254 2255 + lr = get_last_request(); 2256 + wiphy_update_regulatory(wiphy, lr->initiator); 2257 } 2258 2259 void wiphy_regulatory_deregister(struct wiphy *wiphy)
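The regulatory fix stops pretending that every late-registering wiphy sees only the core default: instead of hard-coding NL80211_REGDOM_SET_BY_CORE, registration replays the initiator of the last regulatory request, so a user-initiated domain set before the device appeared still takes effect. Schematically, with hypothetical names:

    enum initiator { SET_BY_CORE, SET_BY_USER, SET_BY_DRIVER };

    struct request { enum initiator initiator; };

    static struct request core_default = { SET_BY_CORE };
    static struct request *last_request = &core_default;  /* updated on hints */

    static void update_regulatory(enum initiator who)
    {
            (void)who;      /* recompute channel flags for this device */
    }

    void register_wiphy_regulatory(void)
    {
            /* Before the fix: update_regulatory(SET_BY_CORE) unconditionally,
             * discarding e.g. a user hint processed before this device
             * appeared. Now the last request's initiator is replayed. */
            update_regulatory(last_request->initiator);
    }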