
Merge tag 'net-6.10-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
"Including fixes from bluetooth, wireless and netfilter.

There's one fix for power management with Intel's e1000e here;
Thorsten tells us there's another problem that started in v6.9. We're
trying to wrap that up, but I don't think it's blocking.

Current release - new code bugs:

- wifi: mac80211: disable softirqs for queued frame handling

- af_unix: fix uninit-value in __unix_walk_scc(), with the new
garbage collection algo

Previous releases - regressions:

- Bluetooth:
- qca: fix BT enable failure for QCA6390 after warm reboot
- add quirk to ignore reserved PHY bits in LE Extended Adv Report,
abused by some Broadcom controllers found on Apple machines

- wifi: wilc1000: fix ies_len type in connect path

Previous releases - always broken:

- tcp: fix DSACK undo in fast recovery to call tcp_try_to_open(),
avoid premature timeouts

- net: make sure skb_datagram_iter maps fragments page by page, in
case we somehow get compound highmem mixed in

- eth: bnx2x: fix multiple UBSAN array-index-out-of-bounds when more
queues are used

Misc:

- MAINTAINERS: Remembering Larry Finger"

* tag 'net-6.10-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (62 commits)
bnxt_en: Fix the resource check condition for RSS contexts
mlxsw: core_linecards: Fix double memory deallocation in case of invalid INI file
inet_diag: Initialize pad field in struct inet_diag_req_v2
tcp: Don't flag tcp_sk(sk)->rx_opt.saw_unknown for TCP AO.
selftests: make order checking verbose in msg_zerocopy selftest
selftests: fix OOM in msg_zerocopy selftest
ice: use proper macro for testing bit
ice: Reject pin requests with unsupported flags
ice: Don't process extts if PTP is disabled
ice: Fix improper extts handling
selftest: af_unix: Add test case for backtrack after finalising SCC.
af_unix: Fix uninit-value in __unix_walk_scc()
bonding: Fix out-of-bounds read in bond_option_arp_ip_targets_set()
net: rswitch: Avoid use-after-free in rswitch_poll()
netfilter: nf_tables: unconditionally flush pending work before notifier
wifi: iwlwifi: mvm: check vif for NULL/ERR_PTR before dereference
wifi: iwlwifi: mvm: avoid link lookup in statistics
wifi: iwlwifi: mvm: don't wake up rx_sync_waitq upon RFKILL
wifi: iwlwifi: properly set WIPHY_FLAG_SUPPORTS_EXT_KEK_KCK
wifi: wilc1000: fix ies_len type in connect path
...

+629 -377
+4
CREDITS
···
 S: (ask for current address)
 S: USA
 
+N: Larry Finger
+E: Larry.Finger@lwfinger.net
+D: Maintainer of wireless drivers, too many to list here
+
 N: Jürgen Fischer
 E: fischer@norbit.de
 D: Author of Adaptec AHA-152x SCSI driver
+5 -8
MAINTAINERS
···
 F: drivers/net/wireless/broadcom/b43/
 
 B43LEGACY WIRELESS DRIVER
-M: Larry Finger <Larry.Finger@lwfinger.net>
 L: linux-wireless@vger.kernel.org
 L: b43-dev@lists.infradead.org
-S: Maintained
+S: Orphan
 W: https://wireless.wiki.kernel.org/en/users/Drivers/b43
 F: drivers/net/wireless/broadcom/b43legacy/
···
 L: ath12k@lists.infradead.org
 S: Supported
 W: https://wireless.wiki.kernel.org/en/users/Drivers/ath12k
-T: git git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/ath.git
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/ath/ath.git
 F: drivers/net/wireless/ath/ath12k/
 N: ath12k
···
 L: ath10k@lists.infradead.org
 S: Supported
 W: https://wireless.wiki.kernel.org/en/users/Drivers/ath10k
-T: git git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/ath.git
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/ath/ath.git
 F: drivers/net/wireless/ath/ath10k/
 N: ath10k
···
 S: Supported
 W: https://wireless.wiki.kernel.org/en/users/Drivers/ath11k
 B: https://wireless.wiki.kernel.org/en/users/Drivers/ath11k/bugreport
-T: git git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/ath.git
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/ath/ath.git
 F: drivers/net/wireless/ath/ath11k/
 N: ath11k
···
 L: linux-wireless@vger.kernel.org
 S: Maintained
 W: https://wireless.wiki.kernel.org/en/users/Drivers/ath9k
-T: git git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/ath.git
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/ath/ath.git
 F: Documentation/devicetree/bindings/net/wireless/qca,ath9k.yaml
 F: drivers/net/wireless/ath/ath9k/
···
 RTL8187 WIRELESS DRIVER
 M: Hin-Tak Leung <hintak.leung@gmail.com>
-M: Larry Finger <Larry.Finger@lwfinger.net>
 L: linux-wireless@vger.kernel.org
 S: Maintained
 T: git https://github.com/pkshih/rtw.git
···
 F: drivers/staging/olpc_dcon/
 
 STAGING - REALTEK RTL8712U DRIVERS
-M: Larry Finger <Larry.Finger@lwfinger.net>
 M: Florian Schilhabel <florian.c.schilhabel@googlemail.com>.
 S: Odd Fixes
 F: drivers/staging/rtl8712/
+1 -1
drivers/bluetooth/btintel_pcie.c
···
 
     /* The first 4 bytes indicates the Intel PCIe specific packet type */
     pdata = skb_pull_data(skb, BTINTEL_PCIE_HCI_TYPE_LEN);
-    if (!data) {
+    if (!pdata) {
         bt_dev_err(hdev, "Corrupted packet received");
         ret = -EILSEQ;
         goto exit_error;
+1 -1
drivers/bluetooth/btnxpuart.c
···
 
 /* Default configurations */
 #define DEFAULT_H2C_WAKEUP_MODE    WAKEUP_METHOD_BREAK
-#define DEFAULT_PS_MODE            PS_MODE_DISABLE
+#define DEFAULT_PS_MODE            PS_MODE_ENABLE
 #define FW_INIT_BAUDRATE           HCI_NXP_PRI_BAUDRATE
 
 static struct sk_buff *nxp_drv_send_cmd(struct hci_dev *hdev, u16 opcode,
+9 -1
drivers/bluetooth/hci_bcm4377.c
···
  *     extended scanning
  * broken_mws_transport_config: Set to true if the chip erroneously claims to
  *     support MWS Transport Configuration
+ * broken_le_ext_adv_report_phy: Set to true if this chip stuffs flags inside
+ *     reserved bits of Primary/Secondary_PHY inside
+ *     LE Extended Advertising Report events which
+ *     have to be ignored
  * send_calibration: Optional callback to send calibration data
  * send_ptb: Callback to send "PTB" regulatory/calibration data
  */
···
     unsigned long broken_ext_scan : 1;
     unsigned long broken_mws_transport_config : 1;
     unsigned long broken_le_coded : 1;
+    unsigned long broken_le_ext_adv_report_phy : 1;
 
     int (*send_calibration)(struct bcm4377_data *bcm4377);
     int (*send_ptb)(struct bcm4377_data *bcm4377,
···
         ring->events[msgid] = NULL;
     }
 
-    bitmap_release_region(ring->msgids, msgid, ring->n_entries);
+    bitmap_release_region(ring->msgids, msgid, 0);
 
 unlock:
     spin_unlock_irqrestore(&ring->lock, flags);
···
         set_bit(HCI_QUIRK_BROKEN_EXT_SCAN, &hdev->quirks);
     if (bcm4377->hw->broken_le_coded)
         set_bit(HCI_QUIRK_BROKEN_LE_CODED, &hdev->quirks);
+    if (bcm4377->hw->broken_le_ext_adv_report_phy)
+        set_bit(HCI_QUIRK_FIXUP_LE_EXT_ADV_REPORT_PHY, &hdev->quirks);
 
     pci_set_drvdata(pdev, bcm4377);
     hci_set_drvdata(hdev, bcm4377);
···
     .clear_pciecfg_subsystem_ctrl_bit19 = true,
     .broken_mws_transport_config = true,
     .broken_le_coded = true,
+    .broken_le_ext_adv_report_phy = true,
     .send_calibration = bcm4387_send_calibration,
     .send_ptb = bcm4378_send_ptb,
 },
+15 -3
drivers/bluetooth/hci_qca.c
···
     struct qca_serdev *qcadev = serdev_device_get_drvdata(serdev);
     struct hci_uart *hu = &qcadev->serdev_hu;
     struct hci_dev *hdev = hu->hdev;
-    struct qca_data *qca = hu->priv;
     const u8 ibs_wake_cmd[] = { 0xFD };
     const u8 edl_reset_soc_cmd[] = { 0x01, 0x00, 0xFC, 0x01, 0x05 };
 
     if (qcadev->btsoc_type == QCA_QCA6390) {
-        if (test_bit(QCA_BT_OFF, &qca->flags) ||
-            !test_bit(HCI_RUNNING, &hdev->flags))
+        /* The purpose of sending the VSC is to reset SOC into a initial
+         * state and the state will ensure next hdev->setup() success.
+         * if HCI_QUIRK_NON_PERSISTENT_SETUP is set, it means that
+         * hdev->setup() can do its job regardless of SoC state, so
+         * don't need to send the VSC.
+         * if HCI_SETUP is set, it means that hdev->setup() was never
+         * invoked and the SOC is already in the initial state, so
+         * don't also need to send the VSC.
+         */
+        if (test_bit(HCI_QUIRK_NON_PERSISTENT_SETUP, &hdev->quirks) ||
+            hci_dev_test_flag(hdev, HCI_SETUP))
             return;
 
+        /* The serdev must be in open state when conrol logic arrives
+         * here, so also fix the use-after-free issue caused by that
+         * the serdev is flushed or wrote after it is closed.
+         */
         serdev_device_write_flush(serdev);
         ret = serdev_device_write_buf(serdev, ibs_wake_cmd,
                                       sizeof(ibs_wake_cmd));
+3 -3
drivers/net/bonding/bond_options.c
···
     __be32 target;
 
     if (newval->string) {
-        if (!in4_pton(newval->string+1, -1, (u8 *)&target, -1, NULL)) {
-            netdev_err(bond->dev, "invalid ARP target %pI4 specified\n",
-                       &target);
+        if (strlen(newval->string) < 1 ||
+            !in4_pton(newval->string + 1, -1, (u8 *)&target, -1, NULL)) {
+            netdev_err(bond->dev, "invalid ARP target specified\n");
             return ret;
         }
         if (newval->string[0] == '+')
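The bonding fix above is an instance of a general pattern: validate the string's length before dereferencing past a prefix character, since `string + 1` on an empty string points past the terminator. A minimal standalone sketch of that pattern, using hypothetical names (and a slightly stricter length check) rather than the bonding driver's API:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical parser for an option of the form "+1.2.3.4" or "-1.2.3.4":
 * a one-character operation prefix followed by a payload. Checking the
 * length first makes the later s[0] and s + 1 accesses safe. */
static int parse_prefixed_target(const char *s, char *op, const char **payload)
{
    if (s == NULL || strlen(s) < 2)     /* need prefix plus at least one char */
        return -1;
    if (s[0] != '+' && s[0] != '-')
        return -1;
    *op = s[0];
    *payload = s + 1;                   /* only now is s + 1 known to be valid */
    return 0;
}
```

The buggy shape of the code reads `s + 1` unconditionally, which an empty option string turns into an out-of-bounds read; the length check closes that off before any indexing happens.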
+1
drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
···
 
 static const struct kvaser_usb_driver_info kvaser_usb_driver_info_leafimx = {
     .quirks = 0,
+    .family = KVASER_LEAF,
     .ops = &kvaser_usb_leaf_dev_ops,
 };
 
+1 -1
drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
···
 
 struct bnx2x_fw_stats_req {
     struct stats_query_header hdr;
-    struct stats_query_entry query[FP_SB_MAX_E1x +
+    struct stats_query_entry query[FP_SB_MAX_E2 +
                                    BNX2X_FIRST_QUEUE_QUERY_IDX];
 };
 
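The bnx2x change sizes the per-queue array for the chip family with the most status blocks instead of the smaller one, which is what the UBSAN array-index-out-of-bounds reports were flagging. An illustrative sketch with made-up constants (not the real bnx2x values):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sizes for two chip families sharing one request layout.
 * The array is indexed per queue at runtime, so it must be dimensioned
 * for the larger family (E2), not the smaller (E1x), or high queue
 * indices run past the end of the struct. */
#define FP_SB_MAX_E1X          16
#define FP_SB_MAX_E2           64
#define FIRST_QUEUE_QUERY_IDX   3

struct stats_query {
    int entry[FP_SB_MAX_E2 + FIRST_QUEUE_QUERY_IDX];  /* the fix: max size */
};

static size_t query_capacity(void)
{
    /* number of entries the array can actually hold */
    return sizeof(((struct stats_query *)0)->entry) / sizeof(int);
}
```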
+5 -1
drivers/net/ethernet/broadcom/bnxt/bnxt.c
···
     if (!BNXT_NEW_RM(bp))
         return true;
 
-    if (hwr.vnic == bp->hw_resc.resv_vnics &&
+    /* Do not reduce VNIC and RSS ctx reservations. There is a FW
+     * issue that will mess up the default VNIC if we reduce the
+     * reservations.
+     */
+    if (hwr.vnic <= bp->hw_resc.resv_vnics &&
         hwr.rss_ctx <= bp->hw_resc.resv_rsscos_ctxs)
         return true;
 
+66 -66
drivers/net/ethernet/intel/e1000e/netdev.c
···
     mac_data |= E1000_EXTCNF_CTRL_GATE_PHY_CFG;
     ew32(EXTCNF_CTRL, mac_data);
 
-    /* Enable the Dynamic Power Gating in the MAC */
-    mac_data = er32(FEXTNVM7);
-    mac_data |= BIT(22);
-    ew32(FEXTNVM7, mac_data);
-
     /* Disable disconnected cable conditioning for Power Gating */
     mac_data = er32(DPGFR);
     mac_data |= BIT(2);
     ew32(DPGFR, mac_data);
 
-    /* Don't wake from dynamic Power Gating with clock request */
-    mac_data = er32(FEXTNVM12);
-    mac_data |= BIT(12);
-    ew32(FEXTNVM12, mac_data);
-
-    /* Ungate PGCB clock */
-    mac_data = er32(FEXTNVM9);
-    mac_data &= ~BIT(28);
-    ew32(FEXTNVM9, mac_data);
-
-    /* Enable K1 off to enable mPHY Power Gating */
-    mac_data = er32(FEXTNVM6);
-    mac_data |= BIT(31);
-    ew32(FEXTNVM6, mac_data);
-
-    /* Enable mPHY power gating for any link and speed */
-    mac_data = er32(FEXTNVM8);
-    mac_data |= BIT(9);
-    ew32(FEXTNVM8, mac_data);
-
     /* Enable the Dynamic Clock Gating in the DMA and MAC */
     mac_data = er32(CTRL_EXT);
     mac_data |= E1000_CTRL_EXT_DMA_DYN_CLK_EN;
     ew32(CTRL_EXT, mac_data);
-
-    /* No MAC DPG gating SLP_S0 in modern standby
-     * Switch the logic of the lanphypc to use PMC counter
-     */
-    mac_data = er32(FEXTNVM5);
-    mac_data |= BIT(7);
-    ew32(FEXTNVM5, mac_data);
 }
+
+/* Enable the Dynamic Power Gating in the MAC */
+mac_data = er32(FEXTNVM7);
+mac_data |= BIT(22);
+ew32(FEXTNVM7, mac_data);
+
+/* Don't wake from dynamic Power Gating with clock request */
+mac_data = er32(FEXTNVM12);
+mac_data |= BIT(12);
+ew32(FEXTNVM12, mac_data);
+
+/* Ungate PGCB clock */
+mac_data = er32(FEXTNVM9);
+mac_data &= ~BIT(28);
+ew32(FEXTNVM9, mac_data);
+
+/* Enable K1 off to enable mPHY Power Gating */
+mac_data = er32(FEXTNVM6);
+mac_data |= BIT(31);
+ew32(FEXTNVM6, mac_data);
+
+/* Enable mPHY power gating for any link and speed */
+mac_data = er32(FEXTNVM8);
+mac_data |= BIT(9);
+ew32(FEXTNVM8, mac_data);
+
+/* No MAC DPG gating SLP_S0 in modern standby
+ * Switch the logic of the lanphypc to use PMC counter
+ */
+mac_data = er32(FEXTNVM5);
+mac_data |= BIT(7);
+ew32(FEXTNVM5, mac_data);
 
 /* Disable the time synchronization clock */
 mac_data = er32(FEXTNVM7);
···
 } else {
     /* Request driver unconfigure the device from S0ix */
 
-    /* Disable the Dynamic Power Gating in the MAC */
-    mac_data = er32(FEXTNVM7);
-    mac_data &= 0xFFBFFFFF;
-    ew32(FEXTNVM7, mac_data);
-
-    /* Disable mPHY power gating for any link and speed */
-    mac_data = er32(FEXTNVM8);
-    mac_data &= ~BIT(9);
-    ew32(FEXTNVM8, mac_data);
-
-    /* Disable K1 off */
-    mac_data = er32(FEXTNVM6);
-    mac_data &= ~BIT(31);
-    ew32(FEXTNVM6, mac_data);
-
-    /* Disable Ungate PGCB clock */
-    mac_data = er32(FEXTNVM9);
-    mac_data |= BIT(28);
-    ew32(FEXTNVM9, mac_data);
-
-    /* Cancel not waking from dynamic
-     * Power Gating with clock request
-     */
-    mac_data = er32(FEXTNVM12);
-    mac_data &= ~BIT(12);
-    ew32(FEXTNVM12, mac_data);
-
     /* Cancel disable disconnected cable conditioning
      * for Power Gating
      */
···
     mac_data = er32(CTRL_EXT);
     mac_data &= 0xFFF7FFFF;
     ew32(CTRL_EXT, mac_data);
-
-    /* Revert the lanphypc logic to use the internal Gbe counter
-     * and not the PMC counter
-     */
-    mac_data = er32(FEXTNVM5);
-    mac_data &= 0xFFFFFF7F;
-    ew32(FEXTNVM5, mac_data);
 
     /* Enable the periodic inband message,
      * Request PCIe clock in K1 page770_17[10:9] =01b
···
     mac_data &= ~BIT(31);
     mac_data |= BIT(0);
     ew32(FEXTNVM7, mac_data);
+
+    /* Disable the Dynamic Power Gating in the MAC */
+    mac_data = er32(FEXTNVM7);
+    mac_data &= 0xFFBFFFFF;
+    ew32(FEXTNVM7, mac_data);
+
+    /* Disable mPHY power gating for any link and speed */
+    mac_data = er32(FEXTNVM8);
+    mac_data &= ~BIT(9);
+    ew32(FEXTNVM8, mac_data);
+
+    /* Disable K1 off */
+    mac_data = er32(FEXTNVM6);
+    mac_data &= ~BIT(31);
+    ew32(FEXTNVM6, mac_data);
+
+    /* Disable Ungate PGCB clock */
+    mac_data = er32(FEXTNVM9);
+    mac_data |= BIT(28);
+    ew32(FEXTNVM9, mac_data);
+
+    /* Cancel not waking from dynamic
+     * Power Gating with clock request
+     */
+    mac_data = er32(FEXTNVM12);
+    mac_data &= ~BIT(12);
+    ew32(FEXTNVM12, mac_data);
+
+    /* Revert the lanphypc logic to use the internal Gbe counter
+     * and not the PMC counter
+     */
+    mac_data = er32(FEXTNVM5);
+    mac_data &= 0xFFFFFF7F;
+    ew32(FEXTNVM5, mac_data);
 }
 
 static int e1000e_pm_freeze(struct device *dev)
+1 -1
drivers/net/ethernet/intel/ice/ice_hwmon.c
···
 
     unsigned long sensors = pf->hw.dev_caps.supported_sensors;
 
-    return _test_bit(ICE_SENSOR_SUPPORT_E810_INT_TEMP_BIT, &sensors);
+    return test_bit(ICE_SENSOR_SUPPORT_E810_INT_TEMP_BIT, &sensors);
 };
 
 void ice_hwmon_init(struct ice_pf *pf)
+101 -30
drivers/net/ethernet/intel/ice/ice_ptp.c
···
     u8 chan, tmr_idx;
     u32 hi, lo;
 
+    /* Don't process timestamp events if PTP is not ready */
+    if (pf->ptp.state != ICE_PTP_READY)
+        return;
+
     tmr_idx = hw->func_caps.ts_func_info.tmr_index_owned;
     /* Event time is captured by one of the two matched registers
      * GLTSYN_EVNT_L: 32 LSB of sampled time event
···
 /**
  * ice_ptp_cfg_extts - Configure EXTTS pin and channel
  * @pf: Board private structure
- * @ena: true to enable; false to disable
  * @chan: GPIO channel (0-3)
- * @gpio_pin: GPIO pin
- * @extts_flags: request flags from the ptp_extts_request.flags
+ * @config: desired EXTTS configuration.
+ * @store: If set to true, the values will be stored
+ *
+ * Configure an external timestamp event on the requested channel.
+ *
+ * Return: 0 on success, -EOPNOTUSPP on unsupported flags
  */
-static int
-ice_ptp_cfg_extts(struct ice_pf *pf, bool ena, unsigned int chan, u32 gpio_pin,
-                  unsigned int extts_flags)
+static int ice_ptp_cfg_extts(struct ice_pf *pf, unsigned int chan,
+                             struct ice_extts_channel *config, bool store)
 {
     u32 func, aux_reg, gpio_reg, irq_reg;
     struct ice_hw *hw = &pf->hw;
     u8 tmr_idx;
 
-    if (chan > (unsigned int)pf->ptp.info.n_ext_ts)
-        return -EINVAL;
+    /* Reject requests with unsupported flags */
+    if (config->flags & ~(PTP_ENABLE_FEATURE |
+                          PTP_RISING_EDGE |
+                          PTP_FALLING_EDGE |
+                          PTP_STRICT_FLAGS))
+        return -EOPNOTSUPP;
 
     tmr_idx = hw->func_caps.ts_func_info.tmr_index_owned;
 
     irq_reg = rd32(hw, PFINT_OICR_ENA);
 
-    if (ena) {
+    if (config->ena) {
         /* Enable the interrupt */
         irq_reg |= PFINT_OICR_TSYN_EVNT_M;
         aux_reg = GLTSYN_AUX_IN_0_INT_ENA_M;
···
 #define GLTSYN_AUX_IN_0_EVNTLVL_FALLING_EDGE BIT(1)
 
         /* set event level to requested edge */
-        if (extts_flags & PTP_FALLING_EDGE)
+        if (config->flags & PTP_FALLING_EDGE)
             aux_reg |= GLTSYN_AUX_IN_0_EVNTLVL_FALLING_EDGE;
-        if (extts_flags & PTP_RISING_EDGE)
+        if (config->flags & PTP_RISING_EDGE)
             aux_reg |= GLTSYN_AUX_IN_0_EVNTLVL_RISING_EDGE;
 
         /* Write GPIO CTL reg.
···
 
     wr32(hw, PFINT_OICR_ENA, irq_reg);
     wr32(hw, GLTSYN_AUX_IN(chan, tmr_idx), aux_reg);
-    wr32(hw, GLGEN_GPIO_CTL(gpio_pin), gpio_reg);
+    wr32(hw, GLGEN_GPIO_CTL(config->gpio_pin), gpio_reg);
+
+    if (store)
+        memcpy(&pf->ptp.extts_channels[chan], config, sizeof(*config));
 
     return 0;
+}
+
+/**
+ * ice_ptp_disable_all_extts - Disable all EXTTS channels
+ * @pf: Board private structure
+ */
+static void ice_ptp_disable_all_extts(struct ice_pf *pf)
+{
+    struct ice_extts_channel extts_cfg = {};
+    int i;
+
+    for (i = 0; i < pf->ptp.info.n_ext_ts; i++) {
+        if (pf->ptp.extts_channels[i].ena) {
+            extts_cfg.gpio_pin = pf->ptp.extts_channels[i].gpio_pin;
+            extts_cfg.ena = false;
+            ice_ptp_cfg_extts(pf, i, &extts_cfg, false);
+        }
+    }
+
+    synchronize_irq(pf->oicr_irq.virq);
+}
+
+/**
+ * ice_ptp_enable_all_extts - Enable all EXTTS channels
+ * @pf: Board private structure
+ *
+ * Called during reset to restore user configuration.
+ */
+static void ice_ptp_enable_all_extts(struct ice_pf *pf)
+{
+    int i;
+
+    for (i = 0; i < pf->ptp.info.n_ext_ts; i++) {
+        if (pf->ptp.extts_channels[i].ena)
+            ice_ptp_cfg_extts(pf, i, &pf->ptp.extts_channels[i],
+                              false);
+    }
 }
 
 /**
···
     struct ice_hw *hw = &pf->hw;
     u32 func, val, gpio_pin;
     u8 tmr_idx;
+
+    if (config && config->flags & ~PTP_PEROUT_PHASE)
+        return -EOPNOTSUPP;
 
     tmr_idx = hw->func_caps.ts_func_info.tmr_index_owned;
 
···
                 struct ptp_clock_request *rq, int on)
 {
     struct ice_pf *pf = ptp_info_to_pf(info);
-    struct ice_perout_channel clk_cfg = {0};
     bool sma_pres = false;
     unsigned int chan;
     u32 gpio_pin;
-    int err;
 
     if (ice_is_feature_supported(pf, ICE_F_SMA_CTRL))
         sma_pres = true;
 
     switch (rq->type) {
     case PTP_CLK_REQ_PEROUT:
+    {
+        struct ice_perout_channel clk_cfg = {};
+
         chan = rq->perout.index;
         if (sma_pres) {
             if (chan == ice_pin_desc_e810t[SMA1].chan)
···
             clk_cfg.gpio_pin = chan;
         }
 
+        clk_cfg.flags = rq->perout.flags;
         clk_cfg.period = ((rq->perout.period.sec * NSEC_PER_SEC) +
                           rq->perout.period.nsec);
         clk_cfg.start_time = ((rq->perout.start.sec * NSEC_PER_SEC) +
                               rq->perout.start.nsec);
         clk_cfg.ena = !!on;
 
-        err = ice_ptp_cfg_clkout(pf, chan, &clk_cfg, true);
-        break;
+        return ice_ptp_cfg_clkout(pf, chan, &clk_cfg, true);
+    }
     case PTP_CLK_REQ_EXTTS:
+    {
+        struct ice_extts_channel extts_cfg = {};
+
         chan = rq->extts.index;
         if (sma_pres) {
             if (chan < ice_pin_desc_e810t[SMA2].chan)
···
             gpio_pin = chan;
         }
 
-        err = ice_ptp_cfg_extts(pf, !!on, chan, gpio_pin,
-                                rq->extts.flags);
-        break;
+        extts_cfg.flags = rq->extts.flags;
+        extts_cfg.gpio_pin = gpio_pin;
+        extts_cfg.ena = !!on;
+
+        return ice_ptp_cfg_extts(pf, chan, &extts_cfg, true);
+    }
     default:
         return -EOPNOTSUPP;
     }
-
-    return err;
 }
 
 /**
···
                  struct ptp_clock_request *rq, int on)
 {
     struct ice_pf *pf = ptp_info_to_pf(info);
-    struct ice_perout_channel clk_cfg = {0};
-    int err;
 
     switch (rq->type) {
     case PTP_CLK_REQ_PPS:
+    {
+        struct ice_perout_channel clk_cfg = {};
+
+        clk_cfg.flags = rq->perout.flags;
         clk_cfg.gpio_pin = PPS_PIN_INDEX;
         clk_cfg.period = NSEC_PER_SEC;
         clk_cfg.ena = !!on;
 
-        err = ice_ptp_cfg_clkout(pf, PPS_CLK_GEN_CHAN, &clk_cfg, true);
-        break;
+        return ice_ptp_cfg_clkout(pf, PPS_CLK_GEN_CHAN, &clk_cfg, true);
+    }
     case PTP_CLK_REQ_EXTTS:
-        err = ice_ptp_cfg_extts(pf, !!on, rq->extts.index,
-                                TIME_SYNC_PIN_INDEX, rq->extts.flags);
-        break;
+    {
+        struct ice_extts_channel extts_cfg = {};
+
+        extts_cfg.flags = rq->extts.flags;
+        extts_cfg.gpio_pin = TIME_SYNC_PIN_INDEX;
+        extts_cfg.ena = !!on;
+
+        return ice_ptp_cfg_extts(pf, rq->extts.index, &extts_cfg, true);
+    }
     default:
         return -EOPNOTSUPP;
     }
-
-    return err;
 }
 
 /**
···
         ice_ptp_restart_all_phy(pf);
     }
 
+    /* Re-enable all periodic outputs and external timestamp events */
+    ice_ptp_enable_all_clkout(pf);
+    ice_ptp_enable_all_extts(pf);
+
     return 0;
 }
 
···
     ice_ptp_remove_auxbus_device(pf);
 
     ice_ptp_release_tx_tracker(pf, &pf->ptp.port.tx);
+
+    ice_ptp_disable_all_extts(pf);
 
     kthread_cancel_delayed_work_sync(&pf->ptp.work);
 
+9
drivers/net/ethernet/intel/ice/ice_ptp.h
···
 struct ice_perout_channel {
     bool ena;
     u32 gpio_pin;
+    u32 flags;
     u64 period;
     u64 start_time;
+};
+
+struct ice_extts_channel {
+    bool ena;
+    u32 gpio_pin;
+    u32 flags;
 };
 
 /* The ice hardware captures Tx hardware timestamps in the PHY. The timestamp
···
  * @ext_ts_irq: the external timestamp IRQ in use
  * @kworker: kwork thread for handling periodic work
  * @perout_channels: periodic output data
+ * @extts_channels: channels for external timestamps
  * @info: structure defining PTP hardware capabilities
  * @clock: pointer to registered PTP clock device
  * @tstamp_config: hardware timestamping configuration
···
     u8 ext_ts_irq;
     struct kthread_worker *kworker;
     struct ice_perout_channel perout_channels[GLTSYN_TGT_H_IDX_MAX];
+    struct ice_extts_channel extts_channels[GLTSYN_TGT_H_IDX_MAX];
     struct ptp_clock_info info;
     struct ptp_clock *clock;
     struct hwtstamp_config tstamp_config;
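The `extts_channels` array added here exists so the driver can replay user configuration around a device reset: enabling a channel stores its settings, and a restore pass re-applies every stored channel without re-storing it. A toy, self-contained version of that save/restore pattern, with hypothetical types standing in for the ice driver's register writes:

```c
#include <assert.h>
#include <string.h>

#define MAX_CHANNELS 4

struct extts_channel {
    int ena;
    unsigned int gpio_pin;
    unsigned int flags;
};

static struct extts_channel saved[MAX_CHANNELS];
static int applied[MAX_CHANNELS];   /* stands in for hardware register state */

/* Apply a channel config; optionally remember it for later restore. */
static int cfg_extts(unsigned int chan, const struct extts_channel *cfg,
                     int store)
{
    if (chan >= MAX_CHANNELS)
        return -1;
    applied[chan] = cfg->ena;
    if (store)
        memcpy(&saved[chan], cfg, sizeof(*cfg));
    return 0;
}

/* After a reset: re-apply every previously enabled channel, without
 * re-storing (store == 0), mirroring ice_ptp_enable_all_extts(). */
static void enable_all_extts(void)
{
    for (unsigned int i = 0; i < MAX_CHANNELS; i++)
        if (saved[i].ena)
            cfg_extts(i, &saved[i], 0);
}
```

The `store` flag matters: the restore path must not overwrite the saved copy it is reading from, which is why the real patch passes `false` in its disable/enable-all helpers.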
+34 -12
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
···
     struct mlx5e_ipsec_sa_entry *sa_entry = to_ipsec_sa_entry(x);
     struct mlx5e_ipsec_rule *ipsec_rule = &sa_entry->ipsec_rule;
     struct net *net = dev_net(x->xso.dev);
+    u64 trailer_packets = 0, trailer_bytes = 0;
+    u64 replay_packets = 0, replay_bytes = 0;
+    u64 auth_packets = 0, auth_bytes = 0;
+    u64 success_packets, success_bytes;
     u64 packets, bytes, lastuse;
+    size_t headers;
 
     lockdep_assert(lockdep_is_held(&x->lock) ||
                    lockdep_is_held(&dev_net(x->xso.real_dev)->xfrm.xfrm_cfg_mutex) ||
···
         return;
 
     if (sa_entry->attrs.dir == XFRM_DEV_OFFLOAD_IN) {
-        mlx5_fc_query_cached(ipsec_rule->auth.fc, &bytes, &packets, &lastuse);
-        x->stats.integrity_failed += packets;
-        XFRM_ADD_STATS(net, LINUX_MIB_XFRMINSTATEPROTOERROR, packets);
+        mlx5_fc_query_cached(ipsec_rule->auth.fc, &auth_bytes,
+                             &auth_packets, &lastuse);
+        x->stats.integrity_failed += auth_packets;
+        XFRM_ADD_STATS(net, LINUX_MIB_XFRMINSTATEPROTOERROR, auth_packets);
 
-        mlx5_fc_query_cached(ipsec_rule->trailer.fc, &bytes, &packets, &lastuse);
-        XFRM_ADD_STATS(net, LINUX_MIB_XFRMINHDRERROR, packets);
+        mlx5_fc_query_cached(ipsec_rule->trailer.fc, &trailer_bytes,
+                             &trailer_packets, &lastuse);
+        XFRM_ADD_STATS(net, LINUX_MIB_XFRMINHDRERROR, trailer_packets);
     }
 
     if (x->xso.type != XFRM_DEV_OFFLOAD_PACKET)
         return;
 
-    mlx5_fc_query_cached(ipsec_rule->fc, &bytes, &packets, &lastuse);
-    x->curlft.packets += packets;
-    x->curlft.bytes += bytes;
-
     if (sa_entry->attrs.dir == XFRM_DEV_OFFLOAD_IN) {
-        mlx5_fc_query_cached(ipsec_rule->replay.fc, &bytes, &packets, &lastuse);
-        x->stats.replay += packets;
-        XFRM_ADD_STATS(net, LINUX_MIB_XFRMINSTATESEQERROR, packets);
+        mlx5_fc_query_cached(ipsec_rule->replay.fc, &replay_bytes,
+                             &replay_packets, &lastuse);
+        x->stats.replay += replay_packets;
+        XFRM_ADD_STATS(net, LINUX_MIB_XFRMINSTATESEQERROR, replay_packets);
     }
+
+    mlx5_fc_query_cached(ipsec_rule->fc, &bytes, &packets, &lastuse);
+    success_packets = packets - auth_packets - trailer_packets - replay_packets;
+    x->curlft.packets += success_packets;
+    /* NIC counts all bytes passed through flow steering and doesn't have
+     * an ability to count payload data size which is needed for SA.
+     *
+     * To overcome HW limitestion, let's approximate the payload size
+     * by removing always available headers.
+     */
+    headers = sizeof(struct ethhdr);
+    if (sa_entry->attrs.family == AF_INET)
+        headers += sizeof(struct iphdr);
+    else
+        headers += sizeof(struct ipv6hdr);
+
+    success_bytes = bytes - auth_bytes - trailer_bytes - replay_bytes;
+    x->curlft.bytes += success_bytes - headers * success_packets;
 }
 
 static int mlx5e_xfrm_validate_policy(struct mlx5_core_dev *mdev,
+5
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
···
         kfree(priv->htb_qos_sq_stats[i]);
     kvfree(priv->htb_qos_sq_stats);
 
+    if (priv->mqprio_rl) {
+        mlx5e_mqprio_rl_cleanup(priv->mqprio_rl);
+        mlx5e_mqprio_rl_free(priv->mqprio_rl);
+    }
+
     memset(priv, 0, sizeof(*priv));
 }
 
+1 -3
drivers/net/ethernet/mellanox/mlx5/core/eq.c
···
     if (!mlx5_core_is_eth_enabled(dev) && mlx5_eth_supported(dev))
         return 1;
 
-    max_dev_eqs = MLX5_CAP_GEN(dev, max_num_eqs) ?
-                  MLX5_CAP_GEN(dev, max_num_eqs) :
-                  1 << MLX5_CAP_GEN(dev, log_max_eq);
+    max_dev_eqs = mlx5_max_eq_cap_get(dev);
 
     num_eqs = min_t(int, mlx5_irq_table_get_num_comp(eq_table->irq_table),
                     max_dev_eqs - MLX5_MAX_ASYNC_EQS);
+29 -8
drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_ofld.c
···
 #include "helper.h"
 #include "ofld.h"
 
+static int
+acl_ingress_ofld_setup(struct mlx5_eswitch *esw, struct mlx5_vport *vport);
+
 static bool
 esw_acl_ingress_prio_tag_enabled(struct mlx5_eswitch *esw,
                                  const struct mlx5_vport *vport)
···
 {
     struct mlx5_flow_act flow_act = {};
     struct mlx5_flow_handle *flow_rule;
+    bool created = false;
     int err = 0;
+
+    if (!vport->ingress.acl) {
+        err = acl_ingress_ofld_setup(esw, vport);
+        if (err)
+            return err;
+        created = true;
+    }
 
     flow_act.action = MLX5_FLOW_CONTEXT_ACTION_DROP;
     flow_act.fg = vport->ingress.offloads.drop_grp;
     flow_rule = mlx5_add_flow_rules(vport->ingress.acl, NULL, &flow_act, NULL, 0);
     if (IS_ERR(flow_rule)) {
         err = PTR_ERR(flow_rule);
-        goto out;
+        goto err_out;
     }
 
     vport->ingress.offloads.drop_rule = flow_rule;
-out:
+
+    return 0;
+err_out:
+    /* Only destroy ingress acl created in this function. */
+    if (created)
+        esw_acl_ingress_ofld_cleanup(esw, vport);
     return err;
 }
···
     }
 }
 
-int esw_acl_ingress_ofld_setup(struct mlx5_eswitch *esw,
-                               struct mlx5_vport *vport)
+static int
+acl_ingress_ofld_setup(struct mlx5_eswitch *esw, struct mlx5_vport *vport)
 {
     int num_ftes = 0;
     int err;
-
-    if (!mlx5_eswitch_vport_match_metadata_enabled(esw) &&
-        !esw_acl_ingress_prio_tag_enabled(esw, vport))
-        return 0;
 
     esw_acl_ingress_allow_rule_destroy(vport);
 
···
 group_err:
     esw_acl_ingress_table_destroy(vport);
     return err;
+}
+
+int esw_acl_ingress_ofld_setup(struct mlx5_eswitch *esw, struct mlx5_vport *vport)
+{
+    if (!mlx5_eswitch_vport_match_metadata_enabled(esw) &&
+        !esw_acl_ingress_prio_tag_enabled(esw, vport))
+        return 0;
+
+    return acl_ingress_ofld_setup(esw, vport);
 }
 
 void esw_acl_ingress_ofld_cleanup(struct mlx5_eswitch *esw,
+17 -5
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
···
         return -EOPNOTSUPP;
     }
 
+    if (!MLX5_CAP_GEN_2(esw->dev, max_num_eqs_24b)) {
+        NL_SET_ERR_MSG_MOD(extack,
+                           "Device doesn't support getting the max number of EQs");
+        return -EOPNOTSUPP;
+    }
+
     query_ctx = kzalloc(query_out_sz, GFP_KERNEL);
     if (!query_ctx)
         return -ENOMEM;
 
     mutex_lock(&esw->state_lock);
     err = mlx5_vport_get_other_func_cap(esw->dev, vport_num, query_ctx,
-                                        MLX5_CAP_GENERAL);
+                                        MLX5_CAP_GENERAL_2);
     if (err) {
         NL_SET_ERR_MSG_MOD(extack, "Failed getting HCA caps");
         goto out;
     }
 
     hca_caps = MLX5_ADDR_OF(query_hca_cap_out, query_ctx, capability);
-    max_eqs = MLX5_GET(cmd_hca_cap, hca_caps, max_num_eqs);
+    max_eqs = MLX5_GET(cmd_hca_cap_2, hca_caps, max_num_eqs_24b);
     if (max_eqs < MLX5_ESW_MAX_CTRL_EQS)
         *max_io_eqs = 0;
     else
···
         return -EOPNOTSUPP;
     }
 
+    if (!MLX5_CAP_GEN_2(esw->dev, max_num_eqs_24b)) {
+        NL_SET_ERR_MSG_MOD(extack,
+                           "Device doesn't support changing the max number of EQs");
+        return -EOPNOTSUPP;
+    }
+
     if (check_add_overflow(max_io_eqs, MLX5_ESW_MAX_CTRL_EQS, &max_eqs)) {
         NL_SET_ERR_MSG_MOD(extack, "Supplied value out of range");
         return -EINVAL;
···
     mutex_lock(&esw->state_lock);
     err = mlx5_vport_get_other_func_cap(esw->dev, vport_num, query_ctx,
-                                        MLX5_CAP_GENERAL);
+                                        MLX5_CAP_GENERAL_2);
     if (err) {
         NL_SET_ERR_MSG_MOD(extack, "Failed getting HCA caps");
         goto out;
     }
 
     hca_caps = MLX5_ADDR_OF(query_hca_cap_out, query_ctx, capability);
-    MLX5_SET(cmd_hca_cap, hca_caps, max_num_eqs, max_eqs);
+    MLX5_SET(cmd_hca_cap_2, hca_caps, max_num_eqs_24b, max_eqs);
 
     err = mlx5_vport_set_other_func_cap(esw->dev, hca_caps, vport_num,
-                                        MLX5_SET_HCA_CAP_OP_MOD_GENERAL_DEVICE);
+                                        MLX5_SET_HCA_CAP_OP_MOD_GENERAL_DEVICE2);
     if (err)
         NL_SET_ERR_MSG_MOD(extack, "Failed setting HCA caps");
 
+10
drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
··· 383 383 : vport; 384 384 } 385 385 386 + static inline int mlx5_max_eq_cap_get(const struct mlx5_core_dev *dev) 387 + { 388 + if (MLX5_CAP_GEN_2(dev, max_num_eqs_24b)) 389 + return MLX5_CAP_GEN_2(dev, max_num_eqs_24b); 390 + 391 + if (MLX5_CAP_GEN(dev, max_num_eqs)) 392 + return MLX5_CAP_GEN(dev, max_num_eqs); 393 + 394 + return 1 << MLX5_CAP_GEN(dev, log_max_eq); 395 + } 386 396 #endif /* __MLX5_CORE_H__ */
+1 -3
drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
··· 711 711 712 712 int mlx5_irq_table_create(struct mlx5_core_dev *dev) 713 713 { 714 - int num_eqs = MLX5_CAP_GEN(dev, max_num_eqs) ? 715 - MLX5_CAP_GEN(dev, max_num_eqs) : 716 - 1 << MLX5_CAP_GEN(dev, log_max_eq); 714 + int num_eqs = mlx5_max_eq_cap_get(dev); 717 715 int total_vec; 718 716 int pcif_vec; 719 717 int req_vec;
+1
drivers/net/ethernet/mellanox/mlxsw/core_linecards.c
··· 1484 1484 vfree(types_info->data); 1485 1485 err_data_alloc: 1486 1486 kfree(types_info); 1487 + linecards->types_info = NULL; 1487 1488 return err; 1488 1489 } 1489 1490
+2 -2
drivers/net/ethernet/renesas/rswitch.c
··· 871 871 dma_rmb(); 872 872 skb = gq->skbs[gq->dirty]; 873 873 if (skb) { 874 + rdev->ndev->stats.tx_packets++; 875 + rdev->ndev->stats.tx_bytes += skb->len; 874 876 dma_unmap_single(ndev->dev.parent, 875 877 gq->unmap_addrs[gq->dirty], 876 878 skb->len, DMA_TO_DEVICE); 877 879 dev_kfree_skb_any(gq->skbs[gq->dirty]); 878 880 gq->skbs[gq->dirty] = NULL; 879 - rdev->ndev->stats.tx_packets++; 880 - rdev->ndev->stats.tx_bytes += skb->len; 881 881 } 882 882 desc->desc.die_dt = DT_EEMPTY; 883 883 }
+1 -1
drivers/net/ethernet/stmicro/stmmac/dwmac-qcom-ethqos.c
··· 272 272 273 273 static const struct ethqos_emac_driver_data emac_v4_0_0_data = { 274 274 .por = emac_v4_0_0_por, 275 - .num_por = ARRAY_SIZE(emac_v3_0_0_por), 275 + .num_por = ARRAY_SIZE(emac_v4_0_0_por), 276 276 .rgmii_config_loopback_en = false, 277 277 .has_emac_ge_3 = true, 278 278 .link_clk_name = "phyaux",
+4 -3
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 7662 7662 #ifdef STMMAC_VLAN_TAG_USED 7663 7663 /* Both mac100 and gmac support receive VLAN tag detection */ 7664 7664 ndev->features |= NETIF_F_HW_VLAN_CTAG_RX | NETIF_F_HW_VLAN_STAG_RX; 7665 - ndev->hw_features |= NETIF_F_HW_VLAN_CTAG_RX; 7666 - priv->hw->hw_vlan_en = true; 7667 - 7665 + if (priv->plat->has_gmac4) { 7666 + ndev->hw_features |= NETIF_F_HW_VLAN_CTAG_RX; 7667 + priv->hw->hw_vlan_en = true; 7668 + } 7668 7669 if (priv->dma_cap.vlhash) { 7669 7670 ndev->features |= NETIF_F_HW_VLAN_CTAG_FILTER; 7670 7671 ndev->features |= NETIF_F_HW_VLAN_STAG_FILTER;
+1
drivers/net/ethernet/wangxun/libwx/wx_hw.c
··· 1959 1959 } 1960 1960 1961 1961 bitmap_zero(wx->state, WX_STATE_NBITS); 1962 + wx->misc_irq_domain = false; 1962 1963 1963 1964 return 0; 1964 1965 }
+7 -3
drivers/net/ethernet/wangxun/libwx/wx_lib.c
··· 1686 1686 } 1687 1687 1688 1688 pdev->irq = pci_irq_vector(pdev, 0); 1689 + wx->num_q_vectors = 1; 1689 1690 1690 1691 return 0; 1691 1692 } ··· 1997 1996 int vector; 1998 1997 1999 1998 if (!(pdev->msix_enabled)) { 2000 - free_irq(pdev->irq, wx); 1999 + if (!wx->misc_irq_domain) 2000 + free_irq(pdev->irq, wx); 2001 2001 return; 2002 2002 } 2003 2003 ··· 2013 2011 free_irq(entry->vector, q_vector); 2014 2012 } 2015 2013 2016 - if (wx->mac.type == wx_mac_em) 2014 + if (!wx->misc_irq_domain) 2017 2015 free_irq(wx->msix_entry->vector, wx); 2018 2016 } 2019 2017 EXPORT_SYMBOL(wx_free_irq); ··· 2027 2025 int wx_setup_isb_resources(struct wx *wx) 2028 2026 { 2029 2027 struct pci_dev *pdev = wx->pdev; 2028 + 2029 + if (wx->isb_mem) 2030 + return 0; 2030 2031 2031 2032 wx->isb_mem = dma_alloc_coherent(&pdev->dev, 2032 2033 sizeof(u32) * 4, ··· 2390 2385 2391 2386 void wx_free_resources(struct wx *wx) 2392 2387 { 2393 - wx_free_isb_resources(wx); 2394 2388 wx_free_all_rx_resources(wx); 2395 2389 wx_free_all_tx_resources(wx); 2396 2390 }
+1
drivers/net/ethernet/wangxun/libwx/wx_type.h
··· 1058 1058 dma_addr_t isb_dma; 1059 1059 u32 *isb_mem; 1060 1060 u32 isb_tag[WX_ISB_MAX]; 1061 + bool misc_irq_domain; 1061 1062 1062 1063 #define WX_MAX_RETA_ENTRIES 128 1063 1064 #define WX_RSS_INDIR_TBL_MAX 64
+2
drivers/net/ethernet/wangxun/ngbe/ngbe_main.c
··· 387 387 err_free_irq: 388 388 wx_free_irq(wx); 389 389 err_free_resources: 390 + wx_free_isb_resources(wx); 390 391 wx_free_resources(wx); 391 392 return err; 392 393 } ··· 409 408 410 409 ngbe_down(wx); 411 410 wx_free_irq(wx); 411 + wx_free_isb_resources(wx); 412 412 wx_free_resources(wx); 413 413 phylink_disconnect_phy(wx->phylink); 414 414 wx_control_hw(wx, false);
+47 -77
drivers/net/ethernet/wangxun/txgbe/txgbe_irq.c
··· 27 27 } 28 28 29 29 /** 30 - * txgbe_intr - msi/legacy mode Interrupt Handler 31 - * @irq: interrupt number 32 - * @data: pointer to a network interface device structure 33 - **/ 34 - static irqreturn_t txgbe_intr(int __always_unused irq, void *data) 35 - { 36 - struct wx_q_vector *q_vector; 37 - struct wx *wx = data; 38 - struct pci_dev *pdev; 39 - u32 eicr; 40 - 41 - q_vector = wx->q_vector[0]; 42 - pdev = wx->pdev; 43 - 44 - eicr = wx_misc_isb(wx, WX_ISB_VEC0); 45 - if (!eicr) { 46 - /* shared interrupt alert! 47 - * the interrupt that we masked before the ICR read. 48 - */ 49 - if (netif_running(wx->netdev)) 50 - txgbe_irq_enable(wx, true); 51 - return IRQ_NONE; /* Not our interrupt */ 52 - } 53 - wx->isb_mem[WX_ISB_VEC0] = 0; 54 - if (!(pdev->msi_enabled)) 55 - wr32(wx, WX_PX_INTA, 1); 56 - 57 - wx->isb_mem[WX_ISB_MISC] = 0; 58 - /* would disable interrupts here but it is auto disabled */ 59 - napi_schedule_irqoff(&q_vector->napi); 60 - 61 - /* re-enable link(maybe) and non-queue interrupts, no flush. 62 - * txgbe_poll will re-enable the queue interrupts 63 - */ 64 - if (netif_running(wx->netdev)) 65 - txgbe_irq_enable(wx, false); 66 - 67 - return IRQ_HANDLED; 68 - } 69 - 70 - /** 71 - * txgbe_request_msix_irqs - Initialize MSI-X interrupts 30 + * txgbe_request_queue_irqs - Initialize MSI-X queue interrupts 72 31 * @wx: board private structure 73 32 * 74 - * Allocate MSI-X vectors and request interrupts from the kernel. 33 + * Allocate MSI-X queue vectors and request interrupts from the kernel. 
75 34 **/ 76 - static int txgbe_request_msix_irqs(struct wx *wx) 35 + int txgbe_request_queue_irqs(struct wx *wx) 77 36 { 78 37 struct net_device *netdev = wx->netdev; 79 38 int vector, err; 39 + 40 + if (!wx->pdev->msix_enabled) 41 + return 0; 80 42 81 43 for (vector = 0; vector < wx->num_q_vectors; vector++) { 82 44 struct wx_q_vector *q_vector = wx->q_vector[vector]; ··· 69 107 wx->q_vector[vector]); 70 108 } 71 109 wx_reset_interrupt_capability(wx); 72 - return err; 73 - } 74 - 75 - /** 76 - * txgbe_request_irq - initialize interrupts 77 - * @wx: board private structure 78 - * 79 - * Attempt to configure interrupts using the best available 80 - * capabilities of the hardware and kernel. 81 - **/ 82 - int txgbe_request_irq(struct wx *wx) 83 - { 84 - struct net_device *netdev = wx->netdev; 85 - struct pci_dev *pdev = wx->pdev; 86 - int err; 87 - 88 - if (pdev->msix_enabled) 89 - err = txgbe_request_msix_irqs(wx); 90 - else if (pdev->msi_enabled) 91 - err = request_irq(wx->pdev->irq, &txgbe_intr, 0, 92 - netdev->name, wx); 93 - else 94 - err = request_irq(wx->pdev->irq, &txgbe_intr, IRQF_SHARED, 95 - netdev->name, wx); 96 - 97 - if (err) 98 - wx_err(wx, "request_irq failed, Error %d\n", err); 99 - 100 110 return err; 101 111 } 102 112 ··· 111 177 }; 112 178 113 179 static irqreturn_t txgbe_misc_irq_handle(int irq, void *data) 180 + { 181 + struct wx_q_vector *q_vector; 182 + struct txgbe *txgbe = data; 183 + struct wx *wx = txgbe->wx; 184 + u32 eicr; 185 + 186 + if (wx->pdev->msix_enabled) 187 + return IRQ_WAKE_THREAD; 188 + 189 + eicr = wx_misc_isb(wx, WX_ISB_VEC0); 190 + if (!eicr) { 191 + /* shared interrupt alert! 192 + * the interrupt that we masked before the ICR read. 
193 + */ 194 + if (netif_running(wx->netdev)) 195 + txgbe_irq_enable(wx, true); 196 + return IRQ_NONE; /* Not our interrupt */ 197 + } 198 + wx->isb_mem[WX_ISB_VEC0] = 0; 199 + if (!(wx->pdev->msi_enabled)) 200 + wr32(wx, WX_PX_INTA, 1); 201 + 202 + /* would disable interrupts here but it is auto disabled */ 203 + q_vector = wx->q_vector[0]; 204 + napi_schedule_irqoff(&q_vector->napi); 205 + 206 + return IRQ_WAKE_THREAD; 207 + } 208 + 209 + static irqreturn_t txgbe_misc_irq_thread_fn(int irq, void *data) 114 210 { 115 211 struct txgbe *txgbe = data; 116 212 struct wx *wx = txgbe->wx; ··· 187 223 188 224 int txgbe_setup_misc_irq(struct txgbe *txgbe) 189 225 { 226 + unsigned long flags = IRQF_ONESHOT; 190 227 struct wx *wx = txgbe->wx; 191 228 int hwirq, err; 192 229 ··· 201 236 irq_create_mapping(txgbe->misc.domain, hwirq); 202 237 203 238 txgbe->misc.chip = txgbe_irq_chip; 204 - if (wx->pdev->msix_enabled) 239 + if (wx->pdev->msix_enabled) { 205 240 txgbe->misc.irq = wx->msix_entry->vector; 206 - else 241 + } else { 207 242 txgbe->misc.irq = wx->pdev->irq; 243 + if (!wx->pdev->msi_enabled) 244 + flags |= IRQF_SHARED; 245 + } 208 246 209 - err = request_threaded_irq(txgbe->misc.irq, NULL, 210 - txgbe_misc_irq_handle, 211 - IRQF_ONESHOT, 247 + err = request_threaded_irq(txgbe->misc.irq, txgbe_misc_irq_handle, 248 + txgbe_misc_irq_thread_fn, 249 + flags, 212 250 wx->netdev->name, txgbe); 213 251 if (err) 214 252 goto del_misc_irq; ··· 223 255 err = txgbe_request_link_irq(txgbe); 224 256 if (err) 225 257 goto free_gpio_irq; 258 + 259 + wx->misc_irq_domain = true; 226 260 227 261 return 0; 228 262
+1 -1
drivers/net/ethernet/wangxun/txgbe/txgbe_irq.h
··· 2 2 /* Copyright (c) 2015 - 2024 Beijing WangXun Technology Co., Ltd. */ 3 3 4 4 void txgbe_irq_enable(struct wx *wx, bool queues); 5 - int txgbe_request_irq(struct wx *wx); 5 + int txgbe_request_queue_irqs(struct wx *wx); 6 6 void txgbe_free_misc_irq(struct txgbe *txgbe); 7 7 int txgbe_setup_misc_irq(struct txgbe *txgbe);
+5 -4
drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
··· 294 294 295 295 wx_configure(wx); 296 296 297 - err = txgbe_request_irq(wx); 297 + err = txgbe_request_queue_irqs(wx); 298 298 if (err) 299 - goto err_free_isb; 299 + goto err_free_resources; 300 300 301 301 /* Notify the stack of the actual queue counts. */ 302 302 err = netif_set_real_num_tx_queues(netdev, wx->num_tx_queues); ··· 313 313 314 314 err_free_irq: 315 315 wx_free_irq(wx); 316 - err_free_isb: 317 - wx_free_isb_resources(wx); 316 + err_free_resources: 317 + wx_free_resources(wx); 318 318 err_reset: 319 319 txgbe_reset(wx); 320 320 ··· 729 729 730 730 txgbe_remove_phy(txgbe); 731 731 txgbe_free_misc_irq(txgbe); 732 + wx_free_isb_resources(wx); 732 733 733 734 pci_release_selected_regions(pdev, 734 735 pci_select_bars(pdev, IORESOURCE_MEM));
+1 -1
drivers/net/ntb_netdev.c
··· 119 119 skb->protocol = eth_type_trans(skb, ndev); 120 120 skb->ip_summed = CHECKSUM_NONE; 121 121 122 - if (__netif_rx(skb) == NET_RX_DROP) { 122 + if (netif_rx(skb) == NET_RX_DROP) { 123 123 ndev->stats.rx_errors++; 124 124 ndev->stats.rx_dropped++; 125 125 } else {
+5
drivers/net/phy/aquantia/aquantia.h
··· 6 6 * Author: Heiner Kallweit <hkallweit1@gmail.com> 7 7 */ 8 8 9 + #ifndef AQUANTIA_H 10 + #define AQUANTIA_H 11 + 9 12 #include <linux/device.h> 10 13 #include <linux/phy.h> 11 14 ··· 123 120 #endif 124 121 125 122 int aqr_firmware_load(struct phy_device *phydev); 123 + 124 + #endif /* AQUANTIA_H */
+7 -7
drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
··· 654 654 hw->wiphy->features |= NL80211_FEATURE_WFA_TPC_IE_IN_PROBES; 655 655 656 656 if (iwl_fw_lookup_cmd_ver(mvm->fw, WOWLAN_KEK_KCK_MATERIAL, 657 - IWL_FW_CMD_VER_UNKNOWN) == 3) 657 + IWL_FW_CMD_VER_UNKNOWN) >= 3) 658 658 hw->wiphy->flags |= WIPHY_FLAG_SUPPORTS_EXT_KEK_KCK; 659 659 660 660 if (fw_has_api(&mvm->fw->ucode_capa, ··· 1656 1656 struct iwl_mvm_vif *mvmvif = 1657 1657 container_of(wk, struct iwl_mvm_vif, prevent_esr_done_wk.work); 1658 1658 struct iwl_mvm *mvm = mvmvif->mvm; 1659 - struct ieee80211_vif *vif = iwl_mvm_get_bss_vif(mvm); 1659 + struct ieee80211_vif *vif = 1660 + container_of((void *)mvmvif, struct ieee80211_vif, drv_priv); 1660 1661 1661 1662 mutex_lock(&mvm->mutex); 1662 1663 iwl_mvm_unblock_esr(mvm, vif, IWL_MVM_ESR_BLOCKED_PREVENTION); ··· 1683 1682 struct iwl_mvm_vif *mvmvif = 1684 1683 container_of(wk, struct iwl_mvm_vif, unblock_esr_tpt_wk); 1685 1684 struct iwl_mvm *mvm = mvmvif->mvm; 1686 - struct ieee80211_vif *vif = iwl_mvm_get_bss_vif(mvm); 1685 + struct ieee80211_vif *vif = 1686 + container_of((void *)mvmvif, struct ieee80211_vif, drv_priv); 1687 1687 1688 1688 mutex_lock(&mvm->mutex); 1689 1689 iwl_mvm_unblock_esr(mvm, vif, IWL_MVM_ESR_BLOCKED_TPT); ··· 6412 6410 if (sync) { 6413 6411 lockdep_assert_held(&mvm->mutex); 6414 6412 ret = wait_event_timeout(mvm->rx_sync_waitq, 6415 - READ_ONCE(mvm->queue_sync_state) == 0 || 6416 - iwl_mvm_is_radio_hw_killed(mvm), 6413 + READ_ONCE(mvm->queue_sync_state) == 0, 6417 6414 SYNC_RX_QUEUE_TIMEOUT); 6418 - WARN_ONCE(!ret && !iwl_mvm_is_radio_hw_killed(mvm), 6419 - "queue sync: failed to sync, state is 0x%lx, cookie %d\n", 6415 + WARN_ONCE(!ret, "queue sync: failed to sync, state is 0x%lx, cookie %d\n", 6420 6416 mvm->queue_sync_state, 6421 6417 mvm->queue_sync_cookie); 6422 6418 }
+3 -5
drivers/net/wireless/intel/iwlwifi/mvm/ops.c
··· 153 153 struct ieee80211_vif *vif = iwl_mvm_get_bss_vif(mvm); 154 154 155 155 /* FW recommendations is only for entering EMLSR */ 156 - if (!vif || iwl_mvm_vif_from_mac80211(vif)->esr_active) 156 + if (IS_ERR_OR_NULL(vif) || iwl_mvm_vif_from_mac80211(vif)->esr_active) 157 157 return; 158 158 159 159 if (le32_to_cpu(notif->action) == ESR_RECOMMEND_ENTER) ··· 1912 1912 bool rfkill_safe_init_done = READ_ONCE(mvm->rfkill_safe_init_done); 1913 1913 bool unified = iwl_mvm_has_unified_ucode(mvm); 1914 1914 1915 - if (state) { 1915 + if (state) 1916 1916 set_bit(IWL_MVM_STATUS_HW_RFKILL, &mvm->status); 1917 - wake_up(&mvm->rx_sync_waitq); 1918 - } else { 1917 + else 1919 1918 clear_bit(IWL_MVM_STATUS_HW_RFKILL, &mvm->status); 1920 - } 1921 1919 1922 1920 iwl_mvm_set_rfkill_state(mvm); 1923 1921
+7 -8
drivers/net/wireless/intel/iwlwifi/mvm/rx.c
··· 557 557 }; 558 558 559 559 static void iwl_mvm_update_link_sig(struct ieee80211_vif *vif, int sig, 560 - struct iwl_mvm_vif_link_info *link_info) 560 + struct iwl_mvm_vif_link_info *link_info, 561 + struct ieee80211_bss_conf *bss_conf) 561 562 { 562 563 struct iwl_mvm *mvm = iwl_mvm_vif_from_mac80211(vif)->mvm; 563 - struct ieee80211_bss_conf *bss_conf = 564 - iwl_mvm_rcu_fw_link_id_to_link_conf(mvm, link_info->fw_link_id, 565 - false); 566 564 int thold = bss_conf->cqm_rssi_thold; 567 565 int hyst = bss_conf->cqm_rssi_hyst; 568 566 int last_event; ··· 668 670 mvmvif->deflink.beacon_stats.num_beacons; 669 671 670 672 /* This is used in pre-MLO API so use deflink */ 671 - iwl_mvm_update_link_sig(vif, sig, &mvmvif->deflink); 673 + iwl_mvm_update_link_sig(vif, sig, &mvmvif->deflink, &vif->bss_conf); 672 674 } 673 675 674 676 static void iwl_mvm_stat_iterator_all_macs(void *_data, u8 *mac, ··· 703 705 sig = -le32_to_cpu(mac_stats->beacon_filter_average_energy); 704 706 705 707 /* This is used in pre-MLO API so use deflink */ 706 - iwl_mvm_update_link_sig(vif, sig, &mvmvif->deflink); 708 + iwl_mvm_update_link_sig(vif, sig, &mvmvif->deflink, &vif->bss_conf); 707 709 } 708 710 709 711 static inline void ··· 919 921 mvmvif->link[link_id]->beacon_stats.num_beacons; 920 922 921 923 sig = -le32_to_cpu(link_stats->beacon_filter_average_energy); 922 - iwl_mvm_update_link_sig(bss_conf->vif, sig, link_info); 924 + iwl_mvm_update_link_sig(bss_conf->vif, sig, link_info, 925 + bss_conf); 923 926 924 927 if (WARN_ONCE(mvmvif->id >= MAC_INDEX_AUX, 925 928 "invalid mvmvif id: %d", mvmvif->id)) ··· 966 967 967 968 lockdep_assert_held(&mvm->mutex); 968 969 969 - if (!bss_vif) 970 + if (IS_ERR_OR_NULL(bss_vif)) 970 971 return; 971 972 972 973 mvmvif = iwl_mvm_vif_from_mac80211(bss_vif);
+2 -1
drivers/net/wireless/microchip/wilc1000/hif.c
··· 382 382 struct ieee80211_p2p_noa_attr noa_attr; 383 383 const struct cfg80211_bss_ies *ies; 384 384 struct wilc_join_bss_param *param; 385 - u8 rates_len = 0, ies_len; 385 + u8 rates_len = 0; 386 + int ies_len; 386 387 int ret; 387 388 388 389 param = kzalloc(sizeof(*param), GFP_KERNEL);
-7
drivers/net/wireless/ti/wlcore/cmd.c
··· 1566 1566 cpu_to_le32(wl1271_tx_enabled_rates_get(wl, sta_rates, 1567 1567 wlvif->band)); 1568 1568 1569 - if (!cmd->supported_rates) { 1570 - wl1271_debug(DEBUG_CMD, 1571 - "peer has no supported rates yet, configuring basic rates: 0x%x", 1572 - wlvif->basic_rate_set); 1573 - cmd->supported_rates = cpu_to_le32(wlvif->basic_rate_set); 1574 - } 1575 - 1576 1569 wl1271_debug(DEBUG_CMD, "new peer rates=0x%x queues=0x%x", 1577 1570 cmd->supported_rates, sta->uapsd_queues); 1578 1571
+8 -9
drivers/net/wireless/ti/wlcore/main.c
··· 5139 5139 5140 5140 /* Add station (AP mode) */ 5141 5141 if (is_ap && 5142 - old_state == IEEE80211_STA_NOTEXIST && 5143 - new_state == IEEE80211_STA_NONE) { 5142 + old_state == IEEE80211_STA_AUTH && 5143 + new_state == IEEE80211_STA_ASSOC) { 5144 5144 ret = wl12xx_sta_add(wl, wlvif, sta); 5145 5145 if (ret) 5146 5146 return ret; 5147 + 5148 + wl_sta->fw_added = true; 5147 5149 5148 5150 wlcore_update_inconn_sta(wl, wlvif, wl_sta, true); 5149 5151 } 5150 5152 5151 5153 /* Remove station (AP mode) */ 5152 5154 if (is_ap && 5153 - old_state == IEEE80211_STA_NONE && 5154 - new_state == IEEE80211_STA_NOTEXIST) { 5155 + old_state == IEEE80211_STA_ASSOC && 5156 + new_state == IEEE80211_STA_AUTH) { 5157 + wl_sta->fw_added = false; 5158 + 5155 5159 /* must not fail */ 5156 5160 wl12xx_sta_remove(wl, wlvif, sta); 5157 5161 ··· 5166 5162 if (is_ap && 5167 5163 new_state == IEEE80211_STA_AUTHORIZED) { 5168 5164 ret = wl12xx_cmd_set_peer_state(wl, wlvif, wl_sta->hlid); 5169 - if (ret < 0) 5170 - return ret; 5171 - 5172 - /* reconfigure rates */ 5173 - ret = wl12xx_cmd_add_peer(wl, wlvif, sta, wl_sta->hlid); 5174 5165 if (ret < 0) 5175 5166 return ret; 5176 5167
+2 -5
drivers/net/wireless/ti/wlcore/tx.c
··· 140 140 static u8 wl12xx_tx_get_hlid_ap(struct wl1271 *wl, struct wl12xx_vif *wlvif, 141 141 struct sk_buff *skb, struct ieee80211_sta *sta) 142 142 { 143 - if (sta) { 144 - struct wl1271_station *wl_sta; 145 - 146 - wl_sta = (struct wl1271_station *)sta->drv_priv; 147 - return wl_sta->hlid; 143 + if (sta && wl1271_station(sta)->fw_added) { 144 + return wl1271_station(sta)->hlid; 148 145 } else { 149 146 struct ieee80211_hdr *hdr; 150 147
+6
drivers/net/wireless/ti/wlcore/wlcore_i.h
··· 324 324 325 325 struct wl1271_station { 326 326 u8 hlid; 327 + bool fw_added; 327 328 bool in_connection; 328 329 329 330 /* ··· 335 334 */ 336 335 u64 total_freed_pkts; 337 336 }; 337 + 338 + static inline struct wl1271_station *wl1271_station(struct ieee80211_sta *sta) 339 + { 340 + return (struct wl1271_station *)sta->drv_priv; 341 + } 338 342 339 343 struct wl12xx_vif { 340 344 struct wl1271 *wl;
+5 -1
include/linux/mlx5/mlx5_ifc.h
··· 2029 2029 u8 pcc_ifa2[0x1]; 2030 2030 u8 reserved_at_3f1[0xf]; 2031 2031 2032 - u8 reserved_at_400[0x400]; 2032 + u8 reserved_at_400[0x40]; 2033 + 2034 + u8 reserved_at_440[0x8]; 2035 + u8 max_num_eqs_24b[0x18]; 2036 + u8 reserved_at_460[0x3a0]; 2033 2037 }; 2034 2038 2035 2039 enum mlx5_ifc_flow_destination_type {
+1 -1
include/linux/phy.h
··· 1122 1122 u8 index, enum led_brightness value); 1123 1123 1124 1124 /** 1125 - * @led_blink_set: Set a PHY LED brightness. Index indicates 1125 + * @led_blink_set: Set a PHY LED blinking. Index indicates 1126 1126 * which of the PHYs led should be configured to blink. Delays 1127 1127 * are in milliseconds and if both are zero then a sensible 1128 1128 * default should be chosen. The call should adjust the
+11
include/net/bluetooth/hci.h
··· 324 324 * claim to support it. 325 325 */ 326 326 HCI_QUIRK_BROKEN_READ_ENC_KEY_SIZE, 327 + 328 + /* 329 + * When this quirk is set, the reserved bits of Primary/Secondary_PHY 330 + * inside the LE Extended Advertising Report events are discarded. 331 + * This is required for some Apple/Broadcom controllers which 332 + * abuse these reserved bits for unrelated flags. 333 + * 334 + * This quirk can be set before hci_register_dev is called or 335 + * during the hdev->setup vendor callback. 336 + */ 337 + HCI_QUIRK_FIXUP_LE_EXT_ADV_REPORT_PHY, 327 338 }; 328 339 329 340 /* HCI device flags */
+2
include/net/bluetooth/hci_sync.h
··· 38 38 int __hci_cmd_sync_status_sk(struct hci_dev *hdev, u16 opcode, u32 plen, 39 39 const void *param, u8 event, u32 timeout, 40 40 struct sock *sk); 41 + int hci_cmd_sync_status(struct hci_dev *hdev, u16 opcode, u32 plen, 42 + const void *param, u32 timeout); 41 43 42 44 void hci_cmd_sync_init(struct hci_dev *hdev); 43 45 void hci_cmd_sync_clear(struct hci_dev *hdev);
+1 -1
include/net/mac80211.h
··· 395 395 BSS_CHANGED_HE_OBSS_PD = 1<<28, 396 396 BSS_CHANGED_HE_BSS_COLOR = 1<<29, 397 397 BSS_CHANGED_FILS_DISCOVERY = 1<<30, 398 - BSS_CHANGED_UNSOL_BCAST_PROBE_RESP = 1<<31, 398 + BSS_CHANGED_UNSOL_BCAST_PROBE_RESP = BIT_ULL(31), 399 399 BSS_CHANGED_MLD_VALID_LINKS = BIT_ULL(33), 400 400 BSS_CHANGED_MLD_TTLM = BIT_ULL(34), 401 401
+12 -3
net/bluetooth/hci_conn.c
··· 899 899 U16_MAX, GFP_ATOMIC); 900 900 } 901 901 902 - struct hci_conn *hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t *dst, 903 - u8 role, u16 handle) 902 + static struct hci_conn *__hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t *dst, 903 + u8 role, u16 handle) 904 904 { 905 905 struct hci_conn *conn; 906 906 ··· 1041 1041 if (unlikely(handle < 0)) 1042 1042 return ERR_PTR(-ECONNREFUSED); 1043 1043 1044 - return hci_conn_add(hdev, type, dst, role, handle); 1044 + return __hci_conn_add(hdev, type, dst, role, handle); 1045 + } 1046 + 1047 + struct hci_conn *hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t *dst, 1048 + u8 role, u16 handle) 1049 + { 1050 + if (handle > HCI_CONN_HANDLE_MAX) 1051 + return ERR_PTR(-EINVAL); 1052 + 1053 + return __hci_conn_add(hdev, type, dst, role, handle); 1045 1054 } 1046 1055 1047 1056 static void hci_conn_cleanup_child(struct hci_conn *conn, u8 reason)
+22 -54
net/bluetooth/hci_core.c
··· 63 63 /* HCI ID Numbering */ 64 64 static DEFINE_IDA(hci_index_ida); 65 65 66 - static int hci_scan_req(struct hci_request *req, unsigned long opt) 67 - { 68 - __u8 scan = opt; 69 - 70 - BT_DBG("%s %x", req->hdev->name, scan); 71 - 72 - /* Inquiry and Page scans */ 73 - hci_req_add(req, HCI_OP_WRITE_SCAN_ENABLE, 1, &scan); 74 - return 0; 75 - } 76 - 77 - static int hci_auth_req(struct hci_request *req, unsigned long opt) 78 - { 79 - __u8 auth = opt; 80 - 81 - BT_DBG("%s %x", req->hdev->name, auth); 82 - 83 - /* Authentication */ 84 - hci_req_add(req, HCI_OP_WRITE_AUTH_ENABLE, 1, &auth); 85 - return 0; 86 - } 87 - 88 - static int hci_encrypt_req(struct hci_request *req, unsigned long opt) 89 - { 90 - __u8 encrypt = opt; 91 - 92 - BT_DBG("%s %x", req->hdev->name, encrypt); 93 - 94 - /* Encryption */ 95 - hci_req_add(req, HCI_OP_WRITE_ENCRYPT_MODE, 1, &encrypt); 96 - return 0; 97 - } 98 - 99 - static int hci_linkpol_req(struct hci_request *req, unsigned long opt) 100 - { 101 - __le16 policy = cpu_to_le16(opt); 102 - 103 - BT_DBG("%s %x", req->hdev->name, policy); 104 - 105 - /* Default link policy */ 106 - hci_req_add(req, HCI_OP_WRITE_DEF_LINK_POLICY, 2, &policy); 107 - return 0; 108 - } 109 - 110 66 /* Get HCI device by index. 111 67 * Device is held on return. 
*/ 112 68 struct hci_dev *hci_dev_get(int index) ··· 691 735 { 692 736 struct hci_dev *hdev; 693 737 struct hci_dev_req dr; 738 + __le16 policy; 694 739 int err = 0; 695 740 696 741 if (copy_from_user(&dr, arg, sizeof(dr))) ··· 718 761 719 762 switch (cmd) { 720 763 case HCISETAUTH: 721 - err = hci_req_sync(hdev, hci_auth_req, dr.dev_opt, 722 - HCI_INIT_TIMEOUT, NULL); 764 + err = __hci_cmd_sync_status(hdev, HCI_OP_WRITE_AUTH_ENABLE, 765 + 1, &dr.dev_opt, HCI_CMD_TIMEOUT); 723 766 break; 724 767 725 768 case HCISETENCRYPT: ··· 730 773 731 774 if (!test_bit(HCI_AUTH, &hdev->flags)) { 732 775 /* Auth must be enabled first */ 733 - err = hci_req_sync(hdev, hci_auth_req, dr.dev_opt, 734 - HCI_INIT_TIMEOUT, NULL); 776 + err = __hci_cmd_sync_status(hdev, 777 + HCI_OP_WRITE_AUTH_ENABLE, 778 + 1, &dr.dev_opt, 779 + HCI_CMD_TIMEOUT); 735 780 if (err) 736 781 break; 737 782 } 738 783 739 - err = hci_req_sync(hdev, hci_encrypt_req, dr.dev_opt, 740 - HCI_INIT_TIMEOUT, NULL); 784 + err = __hci_cmd_sync_status(hdev, HCI_OP_WRITE_ENCRYPT_MODE, 785 + 1, &dr.dev_opt, 786 + HCI_CMD_TIMEOUT); 741 787 break; 742 788 743 789 case HCISETSCAN: 744 - err = hci_req_sync(hdev, hci_scan_req, dr.dev_opt, 745 - HCI_INIT_TIMEOUT, NULL); 790 + err = __hci_cmd_sync_status(hdev, HCI_OP_WRITE_SCAN_ENABLE, 791 + 1, &dr.dev_opt, 792 + HCI_CMD_TIMEOUT); 746 793 747 794 /* Ensure that the connectable and discoverable states 748 795 * get correctly modified as this was a non-mgmt change. 
··· 756 795 break; 757 796 758 797 case HCISETLINKPOL: 759 - err = hci_req_sync(hdev, hci_linkpol_req, dr.dev_opt, 760 - HCI_INIT_TIMEOUT, NULL); 798 + policy = cpu_to_le16(dr.dev_opt); 799 + 800 + err = __hci_cmd_sync_status(hdev, HCI_OP_WRITE_DEF_LINK_POLICY, 801 + 2, &policy, 802 + HCI_CMD_TIMEOUT); 761 803 break; 762 804 763 805 case HCISETLINKMODE: ··· 2715 2751 list_del(&hdev->list); 2716 2752 write_unlock(&hci_dev_list_lock); 2717 2753 2754 + cancel_work_sync(&hdev->rx_work); 2755 + cancel_work_sync(&hdev->cmd_work); 2756 + cancel_work_sync(&hdev->tx_work); 2718 2757 cancel_work_sync(&hdev->power_on); 2758 + cancel_work_sync(&hdev->error_reset); 2719 2759 2720 2760 hci_cmd_sync_clear(hdev); 2721 2761
+30 -3
net/bluetooth/hci_event.c
··· 6311 6311 6312 6312 evt_type = __le16_to_cpu(info->type) & LE_EXT_ADV_EVT_TYPE_MASK; 6313 6313 legacy_evt_type = ext_evt_type_to_legacy(hdev, evt_type); 6314 + 6315 + if (test_bit(HCI_QUIRK_FIXUP_LE_EXT_ADV_REPORT_PHY, 6316 + &hdev->quirks)) { 6317 + info->primary_phy &= 0x1f; 6318 + info->secondary_phy &= 0x1f; 6319 + } 6320 + 6314 6321 if (legacy_evt_type != LE_ADV_INVALID) { 6315 6322 process_adv_report(hdev, legacy_evt_type, &info->bdaddr, 6316 6323 info->bdaddr_type, NULL, 0, ··· 6667 6660 struct bt_iso_qos *qos; 6668 6661 bool pending = false; 6669 6662 u16 handle = __le16_to_cpu(ev->handle); 6663 + u32 c_sdu_interval, p_sdu_interval; 6670 6664 6671 6665 bt_dev_dbg(hdev, "status 0x%2.2x", ev->status); 6672 6666 ··· 6692 6684 6693 6685 pending = test_and_clear_bit(HCI_CONN_CREATE_CIS, &conn->flags); 6694 6686 6695 - /* Convert ISO Interval (1.25 ms slots) to SDU Interval (us) */ 6696 - qos->ucast.in.interval = le16_to_cpu(ev->interval) * 1250; 6697 - qos->ucast.out.interval = qos->ucast.in.interval; 6687 + /* BLUETOOTH CORE SPECIFICATION Version 5.4 | Vol 6, Part G 6688 + * page 3075: 6689 + * Transport_Latency_C_To_P = CIG_Sync_Delay + (FT_C_To_P) × 6690 + * ISO_Interval + SDU_Interval_C_To_P 6691 + * ... 
6692 + * SDU_Interval = (CIG_Sync_Delay + (FT) x ISO_Interval) - 6693 + * Transport_Latency 6694 + */ 6695 + c_sdu_interval = (get_unaligned_le24(ev->cig_sync_delay) + 6696 + (ev->c_ft * le16_to_cpu(ev->interval) * 1250)) - 6697 + get_unaligned_le24(ev->c_latency); 6698 + p_sdu_interval = (get_unaligned_le24(ev->cig_sync_delay) + 6699 + (ev->p_ft * le16_to_cpu(ev->interval) * 1250)) - 6700 + get_unaligned_le24(ev->p_latency); 6698 6701 6699 6702 switch (conn->role) { 6700 6703 case HCI_ROLE_SLAVE: 6704 + qos->ucast.in.interval = c_sdu_interval; 6705 + qos->ucast.out.interval = p_sdu_interval; 6701 6706 /* Convert Transport Latency (us) to Latency (msec) */ 6702 6707 qos->ucast.in.latency = 6703 6708 DIV_ROUND_CLOSEST(get_unaligned_le24(ev->c_latency), ··· 6724 6703 qos->ucast.out.phy = ev->p_phy; 6725 6704 break; 6726 6705 case HCI_ROLE_MASTER: 6706 + qos->ucast.in.interval = p_sdu_interval; 6707 + qos->ucast.out.interval = c_sdu_interval; 6727 6708 /* Convert Transport Latency (us) to Latency (msec) */ 6728 6709 qos->ucast.out.latency = 6729 6710 DIV_ROUND_CLOSEST(get_unaligned_le24(ev->c_latency), ··· 6916 6893 6917 6894 bis = hci_conn_hash_lookup_handle(hdev, handle); 6918 6895 if (!bis) { 6896 + if (handle > HCI_CONN_HANDLE_MAX) { 6897 + bt_dev_dbg(hdev, "ignore too large handle %u", handle); 6898 + continue; 6899 + } 6919 6900 bis = hci_conn_add(hdev, ISO_LINK, BDADDR_ANY, 6920 6901 HCI_ROLE_SLAVE, handle); 6921 6902 if (IS_ERR(bis))
+13
net/bluetooth/hci_sync.c
··· 280 280 } 281 281 EXPORT_SYMBOL(__hci_cmd_sync_status); 282 282 283 + int hci_cmd_sync_status(struct hci_dev *hdev, u16 opcode, u32 plen, 284 + const void *param, u32 timeout) 285 + { 286 + int err; 287 + 288 + hci_req_sync_lock(hdev); 289 + err = __hci_cmd_sync_status(hdev, opcode, plen, param, timeout); 290 + hci_req_sync_unlock(hdev); 291 + 292 + return err; 293 + } 294 + EXPORT_SYMBOL(hci_cmd_sync_status); 295 + 283 296 static void hci_cmd_sync_work(struct work_struct *work) 284 297 { 285 298 struct hci_dev *hdev = container_of(work, struct hci_dev, cmd_sync_work);
+1 -2
net/bluetooth/iso.c
··· 1356 1356 lock_sock(sk); 1357 1357 switch (sk->sk_state) { 1358 1358 case BT_CONNECT2: 1359 - if (pi->conn->hcon && 1360 - test_bit(HCI_CONN_PA_SYNC, &pi->conn->hcon->flags)) { 1359 + if (test_bit(BT_SK_PA_SYNC, &pi->flags)) { 1361 1360 iso_conn_big_sync(sk); 1362 1361 sk->sk_state = BT_LISTEN; 1363 1362 } else {
+3
net/bluetooth/l2cap_core.c
··· 6761 6761 6762 6762 BT_DBG("chan %p, len %d", chan, skb->len); 6763 6763 6764 + l2cap_chan_lock(chan); 6765 + 6764 6766 if (chan->state != BT_BOUND && chan->state != BT_CONNECTED) 6765 6767 goto drop; 6766 6768 ··· 6779 6777 } 6780 6778 6781 6779 drop: 6780 + l2cap_chan_unlock(chan); 6782 6781 l2cap_chan_put(chan); 6783 6782 free_skb: 6784 6783 kfree_skb(skb);
+11 -3
net/bluetooth/l2cap_sock.c
··· 1239 1239 1240 1240 BT_DBG("sk %p state %s", sk, state_to_string(sk->sk_state)); 1241 1241 1242 + /* Sock is dead, so set chan data to NULL, avoid other task use invalid 1243 + * sock pointer. 1244 + */ 1245 + l2cap_pi(sk)->chan->data = NULL; 1242 1246 /* Kill poor orphan */ 1243 1247 1244 1248 l2cap_chan_put(l2cap_pi(sk)->chan); ··· 1485 1481 1486 1482 static int l2cap_sock_recv_cb(struct l2cap_chan *chan, struct sk_buff *skb) 1487 1483 { 1488 - struct sock *sk = chan->data; 1489 - struct l2cap_pinfo *pi = l2cap_pi(sk); 1484 + struct sock *sk; 1485 + struct l2cap_pinfo *pi; 1490 1486 int err; 1491 1487 1492 - lock_sock(sk); 1488 + sk = chan->data; 1489 + if (!sk) 1490 + return -ENXIO; 1493 1491 1492 + pi = l2cap_pi(sk); 1493 + lock_sock(sk); 1494 1494 if (chan->mode == L2CAP_MODE_ERTM && !list_empty(&pi->rx_busy)) { 1495 1495 err = -ENOMEM; 1496 1496 goto done;
+13 -6
net/core/datagram.c
··· 416 416 417 417 end = start + skb_frag_size(frag); 418 418 if ((copy = end - offset) > 0) { 419 - struct page *page = skb_frag_page(frag); 420 - u8 *vaddr = kmap(page); 419 + u32 p_off, p_len, copied; 420 + struct page *p; 421 + u8 *vaddr; 421 422 422 423 if (copy > len) 423 424 copy = len; 424 - n = INDIRECT_CALL_1(cb, simple_copy_to_iter, 425 - vaddr + skb_frag_off(frag) + offset - start, 426 - copy, data, to); 427 - kunmap(page); 425 + 426 + skb_frag_foreach_page(frag, 427 + skb_frag_off(frag) + offset - start, 428 + copy, p, p_off, p_len, copied) { 429 + vaddr = kmap_local_page(p); 430 + n = INDIRECT_CALL_1(cb, simple_copy_to_iter, 431 + vaddr + p_off, p_len, data, to); 432 + kunmap_local(vaddr); 433 + } 434 + 428 435 offset += n; 429 436 if (n != copy) 430 437 goto short_copy;
+2
net/ipv4/inet_diag.c
··· 1383 1383 req.sdiag_family = AF_UNSPEC; /* compatibility */ 1384 1384 req.sdiag_protocol = inet_diag_type2proto(cb->nlh->nlmsg_type); 1385 1385 req.idiag_ext = rc->idiag_ext; 1386 + req.pad = 0; 1386 1387 req.idiag_states = rc->idiag_states; 1387 1388 req.id = rc->id; 1388 1389 ··· 1399 1398 req.sdiag_family = rc->idiag_family; 1400 1399 req.sdiag_protocol = inet_diag_type2proto(nlh->nlmsg_type); 1401 1400 req.idiag_ext = rc->idiag_ext; 1401 + req.pad = 0; 1402 1402 req.idiag_states = rc->idiag_states; 1403 1403 req.id = rc->id; 1404 1404
+8 -1
net/ipv4/tcp_input.c
···
 		return;
 
 	if (tcp_try_undo_dsack(sk))
-		tcp_try_keep_open(sk);
+		tcp_try_to_open(sk, flag);
 
 	tcp_identify_packet_loss(sk, ack_flag);
 	if (icsk->icsk_ca_state != TCP_CA_Recovery) {
···
 		case TCPOPT_MD5SIG:
 			/* The MD5 Hash has already been
 			 * checked (see tcp_v{4,6}_rcv()).
+			 */
+			break;
+#endif
+#ifdef CONFIG_TCP_AO
+		case TCPOPT_AO:
+			/* TCP AO has already been checked
+			 * (see tcp_inbound_ao_hash()).
 			 */
 			break;
 #endif
+1
net/ipv4/tcp_metrics.c
···
 	[TCP_METRICS_ATTR_ADDR_IPV4]	= { .type = NLA_U32, },
 	[TCP_METRICS_ATTR_ADDR_IPV6]	= { .type = NLA_BINARY,
 					    .len = sizeof(struct in6_addr), },
+	[TCP_METRICS_ATTR_SADDR_IPV4]	= { .type = NLA_U32, },
 	/* Following attributes are not received for GET/DEL,
 	 * we keep them for reference
 	 */
+1
net/mac80211/main.c
···
 			BSS_CHANGED_ERP_SLOT;
 	}
 
+/* context: requires softirqs disabled */
 void ieee80211_handle_queued_frames(struct ieee80211_local *local)
 {
 	struct sk_buff *skb;
+2
net/mac80211/util.c
···
 
 void ieee80211_stop_device(struct ieee80211_local *local)
 {
+	local_bh_disable();
 	ieee80211_handle_queued_frames(local);
+	local_bh_enable();
 
 	ieee80211_led_radio(local, false);
 	ieee80211_mod_tpt_led_trig(local, 0, IEEE80211_TPT_LEDTRIG_FL_RADIO);
+8 -6
net/mac802154/main.c
···
 	}
 
 	phy->symbol_duration = duration;
-	phy->lifs_period = (IEEE802154_LIFS_PERIOD * phy->symbol_duration) / NSEC_PER_SEC;
-	phy->sifs_period = (IEEE802154_SIFS_PERIOD * phy->symbol_duration) / NSEC_PER_SEC;
+	phy->lifs_period =
+		(IEEE802154_LIFS_PERIOD * phy->symbol_duration) / NSEC_PER_USEC;
+	phy->sifs_period =
+		(IEEE802154_SIFS_PERIOD * phy->symbol_duration) / NSEC_PER_USEC;
 }
 EXPORT_SYMBOL(ieee802154_configure_durations);
 
···
 	 * Should be done when all drivers sets this value.
 	 */
 
-	wpan_phy->lifs_period =
-		(IEEE802154_LIFS_PERIOD * wpan_phy->symbol_duration) / 1000;
-	wpan_phy->sifs_period =
-		(IEEE802154_SIFS_PERIOD * wpan_phy->symbol_duration) / 1000;
+	wpan_phy->lifs_period = (IEEE802154_LIFS_PERIOD *
+				 wpan_phy->symbol_duration) / NSEC_PER_USEC;
+	wpan_phy->sifs_period = (IEEE802154_SIFS_PERIOD *
+				 wpan_phy->symbol_duration) / NSEC_PER_USEC;
 }
 
 int ieee802154_register_hw(struct ieee802154_hw *hw)
+4 -4
net/mac802154/tx.c
···
 	if (res)
 		goto err_tx;
 
-	dev->stats.tx_packets++;
-	dev->stats.tx_bytes += skb->len;
+	DEV_STATS_INC(dev, tx_packets);
+	DEV_STATS_ADD(dev, tx_bytes, skb->len);
 
 	ieee802154_xmit_complete(&local->hw, skb, false);
 
···
 		if (ret)
 			goto err_wake_netif_queue;
 
-		dev->stats.tx_packets++;
-		dev->stats.tx_bytes += len;
+		DEV_STATS_INC(dev, tx_packets);
+		DEV_STATS_ADD(dev, tx_bytes, len);
 	} else {
 		local->tx_skb = skb;
 		queue_work(local->workqueue, &local->sync_tx_work);
+1 -2
net/netfilter/nf_tables_api.c
···
 
 	gc_seq = nft_gc_seq_begin(nft_net);
 
-	if (!list_empty(&nf_tables_destroy_list))
-		nf_tables_trans_destroy_flush_work();
+	nf_tables_trans_destroy_flush_work();
 again:
 	list_for_each_entry(table, &nft_net->tables, list) {
 		if (nft_table_has_owner(table) &&
+5 -4
net/unix/garbage.c
···
 	}
 
 	if (vertex->index == vertex->scc_index) {
+		struct unix_vertex *v;
 		struct list_head scc;
 		bool scc_dead = true;
 
···
 		 */
 		__list_cut_position(&scc, &vertex_stack, &vertex->scc_entry);
 
-		list_for_each_entry_reverse(vertex, &scc, scc_entry) {
+		list_for_each_entry_reverse(v, &scc, scc_entry) {
 			/* Don't restart DFS from this vertex in unix_walk_scc(). */
-			list_move_tail(&vertex->entry, &unix_visited_vertices);
+			list_move_tail(&v->entry, &unix_visited_vertices);
 
 			/* Mark vertex as off-stack. */
-			vertex->index = unix_vertex_grouped_index;
+			v->index = unix_vertex_grouped_index;
 
 			if (scc_dead)
-				scc_dead = unix_vertex_dead(vertex);
+				scc_dead = unix_vertex_dead(v);
 		}
 
 		if (scc_dead)
+5 -1
net/wireless/nl80211.c
···
 	.max = 0xffff,
 };
 
+static const struct netlink_range_validation q_range = {
+	.max = INT_MAX,
+};
+
 static const struct nla_policy nl80211_policy[NUM_NL80211_ATTR] = {
 	[0] = { .strict_start_type = NL80211_ATTR_HE_OBSS_PD },
 	[NL80211_ATTR_WIPHY] = { .type = NLA_U32 },
···
 
 	[NL80211_ATTR_TXQ_LIMIT] = { .type = NLA_U32 },
 	[NL80211_ATTR_TXQ_MEMORY_LIMIT] = { .type = NLA_U32 },
-	[NL80211_ATTR_TXQ_QUANTUM] = { .type = NLA_U32 },
+	[NL80211_ATTR_TXQ_QUANTUM] = NLA_POLICY_FULL_RANGE(NLA_U32, &q_range),
 	[NL80211_ATTR_HE_CAPABILITY] =
 		NLA_POLICY_VALIDATE_FN(NLA_BINARY, validate_he_capa,
 				       NL80211_HE_MAX_CAPABILITY_LEN),
+23 -2
tools/testing/selftests/net/af_unix/scm_rights.c
···
 
 FIXTURE(scm_rights)
 {
-	int fd[16];
+	int fd[32];
 };
 
 FIXTURE_VARIANT(scm_rights)
 {
-	char name[16];
+	char name[32];
 	int type;
 	int flags;
 	bool test_listener;
···
 			   const FIXTURE_VARIANT(scm_rights) *variant,
 			   int n)
 {
+	ASSERT_LE(n * 2, sizeof(self->fd) / sizeof(self->fd[0]));
+
 	if (variant->test_listener)
 		create_listeners(_metadata, self, n);
 	else
···
 	send_fd(7, 6);
 
 	close_sockets(8);
+}
+
+TEST_F(scm_rights, backtrack_from_scc)
+{
+	create_sockets(10);
+
+	send_fd(0, 1);
+	send_fd(0, 4);
+	send_fd(1, 2);
+	send_fd(2, 3);
+	send_fd(3, 1);
+
+	send_fd(5, 6);
+	send_fd(5, 9);
+	send_fd(6, 7);
+	send_fd(7, 8);
+	send_fd(8, 6);
+
+	close_sockets(10);
 }
 
 TEST_HARNESS_MAIN
+12 -2
tools/testing/selftests/net/msg_zerocopy.c
···
 static int cfg_runtime_ms	= 4200;
 static int cfg_verbose;
 static int cfg_waittime_ms	= 500;
+static int cfg_notification_limit = 32;
 static bool cfg_zerocopy;
 
 static socklen_t cfg_alen;
···
 static long packets, bytes, completions, expected_completions;
 static int  zerocopied = -1;
 static uint32_t next_completion;
+static uint32_t sends_since_notify;
 
 static unsigned long gettimeofday_ms(void)
 {
···
 		error(1, errno, "send");
 	if (cfg_verbose && ret != len)
 		fprintf(stderr, "send: ret=%u != %u\n", ret, len);
+	sends_since_notify++;
 
 	if (len) {
 		packets++;
···
 	/* Detect notification gaps. These should not happen often, if at all.
 	 * Gaps can occur due to drops, reordering and retransmissions.
 	 */
-	if (lo != next_completion)
+	if (cfg_verbose && lo != next_completion)
 		fprintf(stderr, "gap: %u..%u does not append to %u\n",
 			lo, hi, next_completion);
 	next_completion = hi + 1;
···
 static void do_recv_completions(int fd, int domain)
 {
 	while (do_recv_completion(fd, domain)) {}
+	sends_since_notify = 0;
 }
 
 /* Wait for all remaining completions on the errqueue */
···
 			do_sendmsg_corked(fd, &msg);
 		else
 			do_sendmsg(fd, &msg, cfg_zerocopy, domain);
+
+		if (cfg_zerocopy && sends_since_notify >= cfg_notification_limit)
+			do_recv_completions(fd, domain);
 
 		while (!do_poll(fd, POLLOUT)) {
 			if (cfg_zerocopy)
···
 
 	cfg_payload_len = max_payload_len;
 
-	while ((c = getopt(argc, argv, "46c:C:D:i:mp:rs:S:t:vz")) != -1) {
+	while ((c = getopt(argc, argv, "46c:C:D:i:l:mp:rs:S:t:vz")) != -1) {
 		switch (c) {
 		case '4':
 			if (cfg_family != PF_UNSPEC)
···
 			cfg_ifindex = if_nametoindex(optarg);
 			if (cfg_ifindex == 0)
 				error(1, errno, "invalid iface: %s", optarg);
 			break;
+		case 'l':
+			cfg_notification_limit = strtoul(optarg, NULL, 0);
+			break;
 		case 'm':
 			cfg_cork_mixed = true;