Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'net-6.18-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Paolo Abeni:
"Including fixes from wireless, Bluetooth and netfilter.

Current release - regressions:

- tcp: fix too slow tcp_rcvbuf_grow() action

- bluetooth: fix corruption in h4_recv_buf() after cleanup

Previous releases - regressions:

- mptcp: restore window probe

- bluetooth:
- fix connection cleanup with BIG with 2 or more BIS
- fix crash in set_mesh_sync and set_mesh_complete

- batman-adv: release references to inactive interfaces

- nic:
- ice: fix usage of logical PF id
- sfc: fix potential memory leak in efx_mae_process_mport()

Previous releases - always broken:

- devmem: refresh devmem TX dst in case of route invalidation

- netfilter: add seqadj extension for natted connections

- wifi:
- iwlwifi: fix potential use after free in iwl_mld_remove_link()
- brcmfmac: fix crash while sending action frames in standalone AP Mode

- eth:
- mlx5e: cancel tls RX async resync request in error flows
- ixgbe: fix memory leak and use-after-free in ixgbe_recovery_probe()
- hibmcge: fix rx buf avl irq is not re-enabled in irq_handle issue
- cxgb4: fix potential use-after-free in ipsec callback
- nfp: fix memory leak in nfp_net_alloc()"

* tag 'net-6.18-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (75 commits)
net: sctp: fix KMSAN uninit-value in sctp_inq_pop
net: devmem: refresh devmem TX dst in case of route invalidation
net: stmmac: est: Fix GCL bounds checks
net: stmmac: Consider Tx VLAN offload tag length for maxSDU
net: stmmac: vlan: Disable 802.1AD tag insertion offload
net/mlx5e: kTLS, Cancel RX async resync request in error flows
net: tls: Cancel RX async resync request on rcd_delta overflow
net: tls: Change async resync helpers argument
net: phy: dp83869: fix STRAP_OPMODE bitmask
selftests: net: use BASH for bareudp testing
net: mctp: Fix tx queue stall
net/mlx5: Don't zero user_count when destroying FDB tables
net: usb: asix_devices: Check return value of usbnet_get_endpoints
mptcp: zero window probe mib
mptcp: restore window probe
mptcp: fix MSG_PEEK stream corruption
mptcp: drop bogus optimization in __mptcp_check_push()
netconsole: Fix race condition in between reader and writer of userdata
Documentation: netconsole: Remove obsolete contact people
nfp: xsk: fix memory leak in nfp_net_alloc()
...

+598 -286
+4
CREDITS
··· 2036 2036 S: 602 00 Brno 2037 2037 S: Czech Republic 2038 2038 2039 + N: Karsten Keil 2040 + E: isdn@linux-pingi.de 2041 + D: ISDN subsystem maintainer 2042 + 2039 2043 N: Jakob Kemi 2040 2044 E: jakob.kemi@telia.com 2041 2045 D: V4L W9966 Webcam driver
+2 -2
Documentation/devicetree/bindings/net/microchip,sparx5-switch.yaml
··· 180 180 then: 181 181 properties: 182 182 reg: 183 - minItems: 2 183 + maxItems: 2 184 184 reg-names: 185 - minItems: 2 185 + maxItems: 2 186 186 else: 187 187 properties: 188 188 reg:
+2
Documentation/netlink/specs/dpll.yaml
··· 605 605 reply: &pin-attrs 606 606 attributes: 607 607 - id 608 + - module-name 609 + - clock-id 608 610 - board-label 609 611 - panel-label 610 612 - package-label
-3
Documentation/networking/netconsole.rst
··· 19 19 20 20 Sysdata append support by Breno Leitao <leitao@debian.org>, Jan 15 2025 21 21 22 - Please send bug reports to Matt Mackall <mpm@selenic.com> 23 - Satyam Sharma <satyam.sharma@gmail.com>, and Cong Wang <xiyou.wangcong@gmail.com> 24 - 25 22 Introduction: 26 23 ============= 27 24
+3 -6
MAINTAINERS
··· 13260 13260 F: drivers/infiniband/ulp/isert 13261 13261 13262 13262 ISDN/CMTP OVER BLUETOOTH 13263 - M: Karsten Keil <isdn@linux-pingi.de> 13264 - L: isdn4linux@listserv.isdn4linux.de (subscribers-only) 13265 13263 L: netdev@vger.kernel.org 13266 - S: Odd Fixes 13264 + S: Orphan 13267 13265 W: http://www.isdn4linux.de 13268 13266 F: Documentation/isdn/ 13269 13267 F: drivers/isdn/capi/ ··· 13270 13272 F: net/bluetooth/cmtp/ 13271 13273 13272 13274 ISDN/mISDN SUBSYSTEM 13273 - M: Karsten Keil <isdn@linux-pingi.de> 13274 - L: isdn4linux@listserv.isdn4linux.de (subscribers-only) 13275 13275 L: netdev@vger.kernel.org 13276 - S: Maintained 13276 + S: Orphan 13277 13277 W: http://www.isdn4linux.de 13278 13278 F: drivers/isdn/Kconfig 13279 13279 F: drivers/isdn/Makefile ··· 21328 21332 QUALCOMM WCN36XX WIRELESS DRIVER 21329 21333 M: Loic Poulain <loic.poulain@oss.qualcomm.com> 21330 21334 L: wcn36xx@lists.infradead.org 21335 + L: linux-wireless@vger.kernel.org 21331 21336 S: Supported 21332 21337 W: https://wireless.wiki.kernel.org/en/users/Drivers/wcn36xx 21333 21338 F: drivers/net/wireless/ath/wcn36xx/
+6
drivers/bcma/main.c
··· 294 294 int err; 295 295 296 296 list_for_each_entry(core, &bus->cores, list) { 297 + struct device_node *np; 298 + 297 299 /* We support that core ourselves */ 298 300 switch (core->id.id) { 299 301 case BCMA_CORE_4706_CHIPCOMMON: ··· 311 309 312 310 /* Early cores were already registered */ 313 311 if (bcma_is_core_needed_early(core->id.id)) 312 + continue; 313 + 314 + np = core->dev.of_node; 315 + if (np && !of_device_is_available(np)) 314 316 continue; 315 317 316 318 /* Only first GMAC core on BCM4706 is connected and working */
+3 -1
drivers/bluetooth/bpa10x.c
··· 41 41 struct usb_anchor rx_anchor; 42 42 43 43 struct sk_buff *rx_skb[2]; 44 + struct hci_uart hu; 44 45 }; 45 46 46 47 static void bpa10x_tx_complete(struct urb *urb) ··· 97 96 if (urb->status == 0) { 98 97 bool idx = usb_pipebulk(urb->pipe); 99 98 100 - data->rx_skb[idx] = h4_recv_buf(hdev, data->rx_skb[idx], 99 + data->rx_skb[idx] = h4_recv_buf(&data->hu, data->rx_skb[idx], 101 100 urb->transfer_buffer, 102 101 urb->actual_length, 103 102 bpa10x_recv_pkts, ··· 389 388 hci_set_drvdata(hdev, data); 390 389 391 390 data->hdev = hdev; 391 + data->hu.hdev = hdev; 392 392 393 393 SET_HCIDEV_DEV(hdev, &intf->dev); 394 394
+6 -5
drivers/bluetooth/btintel_pcie.c
··· 1467 1467 if (intr_hw & BTINTEL_PCIE_MSIX_HW_INT_CAUSES_GP1) 1468 1468 btintel_pcie_msix_gp1_handler(data); 1469 1469 1470 - /* This interrupt is triggered by the firmware after updating 1471 - * boot_stage register and image_response register 1472 - */ 1473 - if (intr_hw & BTINTEL_PCIE_MSIX_HW_INT_CAUSES_GP0) 1474 - btintel_pcie_msix_gp0_handler(data); 1475 1470 1476 1471 /* For TX */ 1477 1472 if (intr_fh & BTINTEL_PCIE_MSIX_FH_INT_CAUSES_0) { ··· 1481 1486 if (!btintel_pcie_is_txackq_empty(data)) 1482 1487 btintel_pcie_msix_tx_handle(data); 1483 1488 } 1489 + 1490 + /* This interrupt is triggered by the firmware after updating 1491 + * boot_stage register and image_response register 1492 + */ 1493 + if (intr_hw & BTINTEL_PCIE_MSIX_HW_INT_CAUSES_GP0) 1494 + btintel_pcie_msix_gp0_handler(data); 1484 1495 1485 1496 /* 1486 1497 * Before sending the interrupt the HW disables it to prevent a nested
+12
drivers/bluetooth/btmtksdio.c
··· 1270 1270 1271 1271 sdio_claim_host(bdev->func); 1272 1272 1273 + /* set drv_pmctrl if BT is closed before doing reset */ 1274 + if (!test_bit(BTMTKSDIO_FUNC_ENABLED, &bdev->tx_state)) { 1275 + sdio_enable_func(bdev->func); 1276 + btmtksdio_drv_pmctrl(bdev); 1277 + } 1278 + 1273 1279 sdio_writel(bdev->func, C_INT_EN_CLR, MTK_REG_CHLPCR, NULL); 1274 1280 skb_queue_purge(&bdev->txq); 1275 1281 cancel_work_sync(&bdev->txrx_work); ··· 1289 1283 if (err < 0) { 1290 1284 bt_dev_err(hdev, "Failed to reset (%d)", err); 1291 1285 goto err; 1286 + } 1287 + 1288 + /* set fw_pmctrl back if BT is closed after doing reset */ 1289 + if (!test_bit(BTMTKSDIO_FUNC_ENABLED, &bdev->tx_state)) { 1290 + btmtksdio_fw_pmctrl(bdev); 1291 + sdio_disable_func(bdev->func); 1292 1292 } 1293 1293 1294 1294 clear_bit(BTMTKSDIO_PATCH_ENABLED, &bdev->tx_state);
+3 -1
drivers/bluetooth/btmtkuart.c
··· 79 79 u16 stp_dlen; 80 80 81 81 const struct btmtkuart_data *data; 82 + struct hci_uart hu; 82 83 }; 83 84 84 85 #define btmtkuart_is_standalone(bdev) \ ··· 369 368 sz_left -= adv; 370 369 p_left += adv; 371 370 372 - bdev->rx_skb = h4_recv_buf(bdev->hdev, bdev->rx_skb, p_h4, 371 + bdev->rx_skb = h4_recv_buf(&bdev->hu, bdev->rx_skb, p_h4, 373 372 sz_h4, mtk_recv_pkts, 374 373 ARRAY_SIZE(mtk_recv_pkts)); 375 374 if (IS_ERR(bdev->rx_skb)) { ··· 859 858 } 860 859 861 860 bdev->hdev = hdev; 861 + bdev->hu.hdev = hdev; 862 862 863 863 hdev->bus = HCI_UART; 864 864 hci_set_drvdata(hdev, bdev);
+3 -1
drivers/bluetooth/btnxpuart.c
··· 212 212 struct ps_data psdata; 213 213 struct btnxpuart_data *nxp_data; 214 214 struct reset_control *pdn; 215 + struct hci_uart hu; 215 216 }; 216 217 217 218 #define NXP_V1_FW_REQ_PKT 0xa5 ··· 1757 1756 1758 1757 ps_start_timer(nxpdev); 1759 1758 1760 - nxpdev->rx_skb = h4_recv_buf(nxpdev->hdev, nxpdev->rx_skb, data, count, 1759 + nxpdev->rx_skb = h4_recv_buf(&nxpdev->hu, nxpdev->rx_skb, data, count, 1761 1760 nxp_recv_pkts, ARRAY_SIZE(nxp_recv_pkts)); 1762 1761 if (IS_ERR(nxpdev->rx_skb)) { 1763 1762 int err = PTR_ERR(nxpdev->rx_skb); ··· 1876 1875 reset_control_deassert(nxpdev->pdn); 1877 1876 1878 1877 nxpdev->hdev = hdev; 1878 + nxpdev->hu.hdev = hdev; 1879 1879 1880 1880 hdev->bus = HCI_UART; 1881 1881 hci_set_drvdata(hdev, nxpdev);
+1 -1
drivers/bluetooth/hci_ag6xx.c
··· 105 105 if (!test_bit(HCI_UART_REGISTERED, &hu->flags)) 106 106 return -EUNATCH; 107 107 108 - ag6xx->rx_skb = h4_recv_buf(hu->hdev, ag6xx->rx_skb, data, count, 108 + ag6xx->rx_skb = h4_recv_buf(hu, ag6xx->rx_skb, data, count, 109 109 ag6xx_recv_pkts, 110 110 ARRAY_SIZE(ag6xx_recv_pkts)); 111 111 if (IS_ERR(ag6xx->rx_skb)) {
+1 -1
drivers/bluetooth/hci_aml.c
··· 650 650 struct aml_data *aml_data = hu->priv; 651 651 int err; 652 652 653 - aml_data->rx_skb = h4_recv_buf(hu->hdev, aml_data->rx_skb, data, count, 653 + aml_data->rx_skb = h4_recv_buf(hu, aml_data->rx_skb, data, count, 654 654 aml_recv_pkts, 655 655 ARRAY_SIZE(aml_recv_pkts)); 656 656 if (IS_ERR(aml_data->rx_skb)) {
+1 -1
drivers/bluetooth/hci_ath.c
··· 191 191 { 192 192 struct ath_struct *ath = hu->priv; 193 193 194 - ath->rx_skb = h4_recv_buf(hu->hdev, ath->rx_skb, data, count, 194 + ath->rx_skb = h4_recv_buf(hu, ath->rx_skb, data, count, 195 195 ath_recv_pkts, ARRAY_SIZE(ath_recv_pkts)); 196 196 if (IS_ERR(ath->rx_skb)) { 197 197 int err = PTR_ERR(ath->rx_skb);
+1 -1
drivers/bluetooth/hci_bcm.c
··· 698 698 if (!test_bit(HCI_UART_REGISTERED, &hu->flags)) 699 699 return -EUNATCH; 700 700 701 - bcm->rx_skb = h4_recv_buf(hu->hdev, bcm->rx_skb, data, count, 701 + bcm->rx_skb = h4_recv_buf(hu, bcm->rx_skb, data, count, 702 702 bcm_recv_pkts, ARRAY_SIZE(bcm_recv_pkts)); 703 703 if (IS_ERR(bcm->rx_skb)) { 704 704 int err = PTR_ERR(bcm->rx_skb);
+3 -3
drivers/bluetooth/hci_h4.c
··· 112 112 if (!test_bit(HCI_UART_REGISTERED, &hu->flags)) 113 113 return -EUNATCH; 114 114 115 - h4->rx_skb = h4_recv_buf(hu->hdev, h4->rx_skb, data, count, 115 + h4->rx_skb = h4_recv_buf(hu, h4->rx_skb, data, count, 116 116 h4_recv_pkts, ARRAY_SIZE(h4_recv_pkts)); 117 117 if (IS_ERR(h4->rx_skb)) { 118 118 int err = PTR_ERR(h4->rx_skb); ··· 151 151 return hci_uart_unregister_proto(&h4p); 152 152 } 153 153 154 - struct sk_buff *h4_recv_buf(struct hci_dev *hdev, struct sk_buff *skb, 154 + struct sk_buff *h4_recv_buf(struct hci_uart *hu, struct sk_buff *skb, 155 155 const unsigned char *buffer, int count, 156 156 const struct h4_recv_pkt *pkts, int pkts_count) 157 157 { 158 - struct hci_uart *hu = hci_get_drvdata(hdev); 159 158 u8 alignment = hu->alignment ? hu->alignment : 1; 159 + struct hci_dev *hdev = hu->hdev; 160 160 161 161 /* Check for error from previous call */ 162 162 if (IS_ERR(skb))
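Editor's note: the h4_recv_buf() hunk above changes the first argument from `struct hci_dev *` to `struct hci_uart *`, so callers that are not hci_uart-based (bpa10x, btmtkuart, btnxpuart in this series) no longer need the `hci_get_drvdata()` cast that corrupted memory. A minimal sketch of the before/after contract, using simplified stand-in types (these are not the real kernel definitions):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for the kernel types; the real definitions live
 * in include/net/bluetooth/hci_core.h and drivers/bluetooth/hci_uart.h. */
struct hci_dev { void *drvdata; };
struct hci_uart { struct hci_dev *hdev; unsigned char alignment; };

/* Old lookup: h4_recv_buf() derived the hci_uart from the hci_dev's
 * driver data. For drivers not built on hci_uart, drvdata holds a
 * different private struct, so this cast read unrelated memory -- the
 * corruption this series fixes by adding an embedded hci_uart. */
static struct hci_uart *old_lookup(struct hci_dev *hdev)
{
	return (struct hci_uart *)hdev->drvdata; /* unsafe assumption */
}

/* New contract: the caller hands over the hci_uart itself, and the
 * alignment fallback from the h4_recv_buf() hunk needs no drvdata guess. */
static unsigned char recv_alignment(const struct hci_uart *hu)
{
	return hu->alignment ? hu->alignment : 1;
}
```

With the new shape, a driver that embeds a dummy `struct hci_uart` (as the bpa10x hunk does) only has to set `hu.hdev` before calling in.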
+1 -1
drivers/bluetooth/hci_intel.c
··· 972 972 if (!test_bit(HCI_UART_REGISTERED, &hu->flags)) 973 973 return -EUNATCH; 974 974 975 - intel->rx_skb = h4_recv_buf(hu->hdev, intel->rx_skb, data, count, 975 + intel->rx_skb = h4_recv_buf(hu, intel->rx_skb, data, count, 976 976 intel_recv_pkts, 977 977 ARRAY_SIZE(intel_recv_pkts)); 978 978 if (IS_ERR(intel->rx_skb)) {
+1 -1
drivers/bluetooth/hci_ll.c
··· 429 429 if (!test_bit(HCI_UART_REGISTERED, &hu->flags)) 430 430 return -EUNATCH; 431 431 432 - ll->rx_skb = h4_recv_buf(hu->hdev, ll->rx_skb, data, count, 432 + ll->rx_skb = h4_recv_buf(hu, ll->rx_skb, data, count, 433 433 ll_recv_pkts, ARRAY_SIZE(ll_recv_pkts)); 434 434 if (IS_ERR(ll->rx_skb)) { 435 435 int err = PTR_ERR(ll->rx_skb);
+3 -3
drivers/bluetooth/hci_mrvl.c
··· 264 264 !test_bit(STATE_FW_LOADED, &mrvl->flags)) 265 265 return count; 266 266 267 - mrvl->rx_skb = h4_recv_buf(hu->hdev, mrvl->rx_skb, data, count, 268 - mrvl_recv_pkts, 269 - ARRAY_SIZE(mrvl_recv_pkts)); 267 + mrvl->rx_skb = h4_recv_buf(hu, mrvl->rx_skb, data, count, 268 + mrvl_recv_pkts, 269 + ARRAY_SIZE(mrvl_recv_pkts)); 270 270 if (IS_ERR(mrvl->rx_skb)) { 271 271 int err = PTR_ERR(mrvl->rx_skb); 272 272 bt_dev_err(hu->hdev, "Frame reassembly failed (%d)", err);
+2 -2
drivers/bluetooth/hci_nokia.c
··· 624 624 if (!test_bit(HCI_UART_REGISTERED, &hu->flags)) 625 625 return -EUNATCH; 626 626 627 - btdev->rx_skb = h4_recv_buf(hu->hdev, btdev->rx_skb, data, count, 628 - nokia_recv_pkts, ARRAY_SIZE(nokia_recv_pkts)); 627 + btdev->rx_skb = h4_recv_buf(hu, btdev->rx_skb, data, count, 628 + nokia_recv_pkts, ARRAY_SIZE(nokia_recv_pkts)); 629 629 if (IS_ERR(btdev->rx_skb)) { 630 630 err = PTR_ERR(btdev->rx_skb); 631 631 dev_err(dev, "Frame reassembly failed (%d)", err);
+1 -1
drivers/bluetooth/hci_qca.c
··· 1277 1277 if (!test_bit(HCI_UART_REGISTERED, &hu->flags)) 1278 1278 return -EUNATCH; 1279 1279 1280 - qca->rx_skb = h4_recv_buf(hu->hdev, qca->rx_skb, data, count, 1280 + qca->rx_skb = h4_recv_buf(hu, qca->rx_skb, data, count, 1281 1281 qca_recv_pkts, ARRAY_SIZE(qca_recv_pkts)); 1282 1282 if (IS_ERR(qca->rx_skb)) { 1283 1283 int err = PTR_ERR(qca->rx_skb);
+1 -1
drivers/bluetooth/hci_uart.h
··· 162 162 int h4_init(void); 163 163 int h4_deinit(void); 164 164 165 - struct sk_buff *h4_recv_buf(struct hci_dev *hdev, struct sk_buff *skb, 165 + struct sk_buff *h4_recv_buf(struct hci_uart *hu, struct sk_buff *skb, 166 166 const unsigned char *buffer, int count, 167 167 const struct h4_recv_pkt *pkts, int pkts_count); 168 168 #endif
+20 -16
drivers/dpll/dpll_netlink.c
··· 1559 1559 return -EMSGSIZE; 1560 1560 } 1561 1561 pin = dpll_pin_find_from_nlattr(info); 1562 - if (!IS_ERR(pin)) { 1563 - if (!dpll_pin_available(pin)) { 1564 - nlmsg_free(msg); 1565 - return -ENODEV; 1566 - } 1567 - ret = dpll_msg_add_pin_handle(msg, pin); 1568 - if (ret) { 1569 - nlmsg_free(msg); 1570 - return ret; 1571 - } 1562 + if (IS_ERR(pin)) { 1563 + nlmsg_free(msg); 1564 + return PTR_ERR(pin); 1565 + } 1566 + if (!dpll_pin_available(pin)) { 1567 + nlmsg_free(msg); 1568 + return -ENODEV; 1569 + } 1570 + ret = dpll_msg_add_pin_handle(msg, pin); 1571 + if (ret) { 1572 + nlmsg_free(msg); 1573 + return ret; 1572 1574 } 1573 1575 genlmsg_end(msg, hdr); 1574 1576 ··· 1737 1735 } 1738 1736 1739 1737 dpll = dpll_device_find_from_nlattr(info); 1740 - if (!IS_ERR(dpll)) { 1741 - ret = dpll_msg_add_dev_handle(msg, dpll); 1742 - if (ret) { 1743 - nlmsg_free(msg); 1744 - return ret; 1745 - } 1738 + if (IS_ERR(dpll)) { 1739 + nlmsg_free(msg); 1740 + return PTR_ERR(dpll); 1741 + } 1742 + ret = dpll_msg_add_dev_handle(msg, dpll); 1743 + if (ret) { 1744 + nlmsg_free(msg); 1745 + return ret; 1746 1746 } 1747 1747 genlmsg_end(msg, hdr); 1748 1748
+1 -1
drivers/dpll/zl3073x/dpll.c
··· 1904 1904 } 1905 1905 1906 1906 is_diff = zl3073x_out_is_diff(zldev, out); 1907 - is_enabled = zl3073x_out_is_enabled(zldev, out); 1907 + is_enabled = zl3073x_output_pin_is_enabled(zldev, index); 1908 1908 } 1909 1909 1910 1910 /* Skip N-pin if the corresponding input/output is differential */
+6 -1
drivers/net/ethernet/chelsio/inline_crypto/ch_ipsec/chcr_ipsec.c
··· 290 290 return -EINVAL; 291 291 } 292 292 293 + if (unlikely(!try_module_get(THIS_MODULE))) { 294 + NL_SET_ERR_MSG_MOD(extack, "Failed to acquire module reference"); 295 + return -ENODEV; 296 + } 297 + 293 298 sa_entry = kzalloc(sizeof(*sa_entry), GFP_KERNEL); 294 299 if (!sa_entry) { 295 300 res = -ENOMEM; 301 + module_put(THIS_MODULE); 296 302 goto out; 297 303 } 298 304 ··· 307 301 sa_entry->esn = 1; 308 302 ch_ipsec_setkey(x, sa_entry); 309 303 x->xso.offload_handle = (unsigned long)sa_entry; 310 - try_module_get(THIS_MODULE); 311 304 out: 312 305 return res; 313 306 }
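Editor's note: the chcr_ipsec hunk above moves `try_module_get()` ahead of the allocation and drops the reference on the error path, closing the window where an offload callback could run in an unpinned module. A toy model of that ordering, with a simulated reference counter standing in for `try_module_get(THIS_MODULE)` / `module_put(THIS_MODULE)` (names and error values are illustrative):

```c
#include <assert.h>
#include <stdlib.h>

/* Simulated module reference count. */
static int module_refs;

static int try_module_get_sim(void)
{
	module_refs++;
	return 1; /* pretend the module is always live */
}

static void module_put_sim(void)
{
	module_refs--;
}

/* Fixed flow: pin the module before any failure point, and release the
 * reference on every error path so it can neither leak nor be skipped.
 * alloc_ok stands in for the kzalloc() that may fail. */
static int add_sa_sim(int alloc_ok)
{
	void *sa;

	if (!try_module_get_sim())
		return -19; /* -ENODEV stand-in */

	sa = alloc_ok ? malloc(16) : NULL;
	if (!sa) {
		module_put_sim();
		return -12; /* -ENOMEM stand-in */
	}
	free(sa); /* teardown elided; a real SA keeps the ref until delete */
	return 0;
}
```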
+1
drivers/net/ethernet/hisilicon/hibmcge/hbg_common.h
··· 17 17 #define HBG_PCU_CACHE_LINE_SIZE 32 18 18 #define HBG_TX_TIMEOUT_BUF_LEN 1024 19 19 #define HBG_RX_DESCR 0x01 20 + #define HBG_NO_PHY 0xFF 20 21 21 22 #define HBG_PACKET_HEAD_SIZE ((HBG_RX_SKIP1 + HBG_RX_SKIP2 + \ 22 23 HBG_RX_DESCR) * HBG_PCU_CACHE_LINE_SIZE)
+6 -4
drivers/net/ethernet/hisilicon/hibmcge/hbg_err.c
··· 136 136 { 137 137 struct net_device *netdev = pci_get_drvdata(pdev); 138 138 139 - netif_device_detach(netdev); 140 - 141 - if (state == pci_channel_io_perm_failure) 139 + if (state == pci_channel_io_perm_failure) { 140 + netif_device_detach(netdev); 142 141 return PCI_ERS_RESULT_DISCONNECT; 142 + } 143 143 144 - pci_disable_device(pdev); 145 144 return PCI_ERS_RESULT_NEED_RESET; 146 145 } 147 146 ··· 148 149 { 149 150 struct net_device *netdev = pci_get_drvdata(pdev); 150 151 struct hbg_priv *priv = netdev_priv(netdev); 152 + 153 + netif_device_detach(netdev); 154 + pci_disable_device(pdev); 151 155 152 156 if (pci_enable_device(pdev)) { 153 157 dev_err(&pdev->dev,
+3
drivers/net/ethernet/hisilicon/hibmcge/hbg_hw.c
··· 244 244 245 245 hbg_hw_mac_enable(priv, HBG_STATUS_ENABLE); 246 246 247 + if (priv->mac.phy_addr == HBG_NO_PHY) 248 + return; 249 + 247 250 /* wait MAC link up */ 248 251 ret = readl_poll_timeout(priv->io_base + HBG_REG_AN_NEG_STATE_ADDR, 249 252 link_status,
+1
drivers/net/ethernet/hisilicon/hibmcge/hbg_irq.c
··· 32 32 const struct hbg_irq_info *irq_info) 33 33 { 34 34 priv->stats.rx_fifo_less_empty_thrsld_cnt++; 35 + hbg_hw_irq_enable(priv, irq_info->mask, true); 35 36 } 36 37 37 38 #define HBG_IRQ_I(name, handle) \
-1
drivers/net/ethernet/hisilicon/hibmcge/hbg_mdio.c
··· 20 20 #define HBG_MDIO_OP_INTERVAL_US (5 * 1000) 21 21 22 22 #define HBG_NP_LINK_FAIL_RETRY_TIMES 5 23 - #define HBG_NO_PHY 0xFF 24 23 25 24 static void hbg_mdio_set_command(struct hbg_mac *mac, u32 cmd) 26 25 {
+1 -2
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
··· 9429 9429 /* this command reads phy id and register at the same time */ 9430 9430 fallthrough; 9431 9431 case SIOCGMIIREG: 9432 - data->val_out = hclge_read_phy_reg(hdev, data->reg_num); 9433 - return 0; 9432 + return hclge_read_phy_reg(hdev, data->reg_num, &data->val_out); 9434 9433 9435 9434 case SIOCSMIIREG: 9436 9435 return hclge_write_phy_reg(hdev, data->reg_num, data->val_in);
+6 -3
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c
··· 274 274 phy_stop(phydev); 275 275 } 276 276 277 - u16 hclge_read_phy_reg(struct hclge_dev *hdev, u16 reg_addr) 277 + int hclge_read_phy_reg(struct hclge_dev *hdev, u16 reg_addr, u16 *val) 278 278 { 279 279 struct hclge_phy_reg_cmd *req; 280 280 struct hclge_desc desc; ··· 286 286 req->reg_addr = cpu_to_le16(reg_addr); 287 287 288 288 ret = hclge_cmd_send(&hdev->hw, &desc, 1); 289 - if (ret) 289 + if (ret) { 290 290 dev_err(&hdev->pdev->dev, 291 291 "failed to read phy reg, ret = %d.\n", ret); 292 + return ret; 293 + } 292 294 293 - return le16_to_cpu(req->reg_val); 295 + *val = le16_to_cpu(req->reg_val); 296 + return 0; 294 297 } 295 298 296 299 int hclge_write_phy_reg(struct hclge_dev *hdev, u16 reg_addr, u16 val)
+1 -1
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.h
··· 13 13 void hclge_mac_disconnect_phy(struct hnae3_handle *handle); 14 14 void hclge_mac_start_phy(struct hclge_dev *hdev); 15 15 void hclge_mac_stop_phy(struct hclge_dev *hdev); 16 - u16 hclge_read_phy_reg(struct hclge_dev *hdev, u16 reg_addr); 16 + int hclge_read_phy_reg(struct hclge_dev *hdev, u16 reg_addr, u16 *val); 17 17 int hclge_write_phy_reg(struct hclge_dev *hdev, u16 reg_addr, u16 val); 18 18 19 19 #endif
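Editor's note: the hclge hunks above convert a helper that returned the register value as a `u16` (silently swallowing mailbox errors) into one that returns an `int` status with an out-parameter. A small sketch of the two shapes, with a simulated transport in place of the hclge firmware mailbox (all names here are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Simulated transport state standing in for the hclge command queue. */
static int fake_bus_ok;
static uint16_t fake_reg_val;

/* Old shape: the 16-bit register value doubles as the return channel,
 * so a transport failure is indistinguishable from a register that
 * legitimately reads 0. */
static uint16_t read_reg_old(uint16_t addr)
{
	(void)addr;
	if (!fake_bus_ok)
		return 0; /* error silently looks like data */
	return fake_reg_val;
}

/* New shape, mirroring hclge_read_phy_reg() after the hunk above:
 * status in the return value, data through an out-parameter. */
static int read_reg_new(uint16_t addr, uint16_t *val)
{
	(void)addr;
	if (!fake_bus_ok)
		return -5; /* -EIO stand-in */
	*val = fake_reg_val;
	return 0;
}
```

This is why the hclge_main.c hunk can now forward the helper's return code straight out of the SIOCGMIIREG ioctl instead of unconditionally reporting success.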
+33 -2
drivers/net/ethernet/intel/ice/ice_common.c
··· 4382 4382 unsigned int lane; 4383 4383 int err; 4384 4384 4385 + /* E82X does not have sequential IDs, lane number is PF ID. 4386 + * For E825 device, the exception is the variant with external 4387 + * PHY (0x579F), in which there is also 1:1 pf_id -> lane_number 4388 + * mapping. 4389 + */ 4390 + if (hw->mac_type == ICE_MAC_GENERIC || 4391 + hw->device_id == ICE_DEV_ID_E825C_SGMII) 4392 + return hw->pf_id; 4393 + 4385 4394 options = kcalloc(ICE_AQC_PORT_OPT_MAX, sizeof(*options), GFP_KERNEL); 4386 4395 if (!options) 4387 4396 return -ENOMEM; ··· 6506 6497 } 6507 6498 6508 6499 /** 6500 + * ice_get_dest_cgu - get destination CGU dev for given HW 6501 + * @hw: pointer to the HW struct 6502 + * 6503 + * Get CGU client id for CGU register read/write operations. 6504 + * 6505 + * Return: CGU device id to use in SBQ transactions. 6506 + */ 6507 + static enum ice_sbq_dev_id ice_get_dest_cgu(struct ice_hw *hw) 6508 + { 6509 + /* On dual complex E825 only complex 0 has functional CGU powering all 6510 + * the PHYs. 6511 + * SBQ destination device cgu points to CGU on a current complex and to 6512 + * access primary CGU from the secondary complex, the driver should use 6513 + * cgu_peer as a destination device. 6514 + */ 6515 + if (hw->mac_type == ICE_MAC_GENERIC_3K_E825 && ice_is_dual(hw) && 6516 + !ice_is_primary(hw)) 6517 + return ice_sbq_dev_cgu_peer; 6518 + return ice_sbq_dev_cgu; 6519 + } 6520 + 6521 + /** 6509 6522 * ice_read_cgu_reg - Read a CGU register 6510 6523 * @hw: Pointer to the HW struct 6511 6524 * @addr: Register address to read ··· 6541 6510 int ice_read_cgu_reg(struct ice_hw *hw, u32 addr, u32 *val) 6542 6511 { 6543 6512 struct ice_sbq_msg_input cgu_msg = { 6513 + .dest_dev = ice_get_dest_cgu(hw), 6544 6514 .opcode = ice_sbq_msg_rd, 6545 - .dest_dev = ice_sbq_dev_cgu, 6546 6515 .msg_addr_low = addr 6547 6516 }; 6548 6517 int err; ··· 6573 6542 int ice_write_cgu_reg(struct ice_hw *hw, u32 addr, u32 val) 6574 6543 { 6575 6544 struct ice_sbq_msg_input cgu_msg = { 6545 + .dest_dev = ice_get_dest_cgu(hw), 6576 6546 .opcode = ice_sbq_msg_wr, 6577 - .dest_dev = ice_sbq_dev_cgu, 6578 6547 .msg_addr_low = addr, 6579 6548 .data = val 6580 6549 };
+1 -1
drivers/net/ethernet/intel/ice/ice_flex_pipe.c
··· 1479 1479 per_pf = ICE_PROF_MASK_COUNT / hw->dev_caps.num_funcs; 1480 1480 1481 1481 hw->blk[blk].masks.count = per_pf; 1482 - hw->blk[blk].masks.first = hw->pf_id * per_pf; 1482 + hw->blk[blk].masks.first = hw->logical_pf_id * per_pf; 1483 1483 1484 1484 memset(hw->blk[blk].masks.masks, 0, sizeof(hw->blk[blk].masks.masks)); 1485 1485
+1
drivers/net/ethernet/intel/ice/ice_sbq_cmd.h
··· 50 50 ice_sbq_dev_phy_0 = 0x02, 51 51 ice_sbq_dev_cgu = 0x06, 52 52 ice_sbq_dev_phy_0_peer = 0x0D, 53 + ice_sbq_dev_cgu_peer = 0x0F, 53 54 }; 54 55 55 56 enum ice_sbq_msg_opcode {
+1 -1
drivers/net/ethernet/intel/igb/igb_ethtool.c
··· 2281 2281 case ETH_SS_PRIV_FLAGS: 2282 2282 return IGB_PRIV_FLAGS_STR_LEN; 2283 2283 default: 2284 - return -ENOTSUPP; 2284 + return -EOPNOTSUPP; 2285 2285 } 2286 2286 } 2287 2287
+4 -1
drivers/net/ethernet/intel/igc/igc_ethtool.c
··· 810 810 case ETH_SS_PRIV_FLAGS: 811 811 return IGC_PRIV_FLAGS_STR_LEN; 812 812 default: 813 - return -ENOTSUPP; 813 + return -EOPNOTSUPP; 814 814 } 815 815 } 816 816 ··· 2093 2093 if (eth_test->flags == ETH_TEST_FL_OFFLINE) { 2094 2094 netdev_info(adapter->netdev, "Offline testing starting"); 2095 2095 set_bit(__IGC_TESTING, &adapter->state); 2096 + 2097 + /* power up PHY for link test */ 2098 + igc_power_up_phy_copper(&adapter->hw); 2096 2099 2097 2100 /* Link test performed before hardware reset so autoneg doesn't 2098 2101 * interfere with test result
+1 -1
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
··· 11507 11507 shutdown_aci: 11508 11508 mutex_destroy(&adapter->hw.aci.lock); 11509 11509 ixgbe_release_hw_control(adapter); 11510 - devlink_free(adapter->devlink); 11511 11510 clean_up_probe: 11512 11511 disable_dev = !test_and_set_bit(__IXGBE_DISABLED, &adapter->state); 11513 11512 free_netdev(netdev); 11513 + devlink_free(adapter->devlink); 11514 11514 pci_release_mem_regions(pdev); 11515 11515 if (disable_dev) 11516 11516 pci_disable_device(pdev);
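Editor's note: the ixgbe hunk above reorders the probe error unwind so `devlink_free()` runs after `free_netdev()`, since the devlink allocation owns state the netdev teardown still touches. A toy model of that ordering rule (flags and names are illustrative, not the real ixgbe structures):

```c
#include <assert.h>
#include <stdlib.h>

static int devlink_freed;
static int touched_after_free;

struct fake_netdev { int priv; };

static void devlink_free_sim(void)
{
	devlink_freed = 1;
}

/* free_netdev() stands in for teardown that may still dereference
 * state owned by the devlink allocation. */
static void free_netdev_sim(struct fake_netdev *nd)
{
	if (devlink_freed)
		touched_after_free = 1; /* would be a use-after-free */
	free(nd);
}

/* Ordering after the fix: tear down the netdev first, then free the
 * devlink that owns the surrounding allocation. */
static int probe_unwind_fixed(void)
{
	struct fake_netdev *nd = malloc(sizeof(*nd));

	if (!nd)
		return -12; /* -ENOMEM stand-in */
	free_netdev_sim(nd);
	devlink_free_sim();
	return 0;
}
```

The general rule the fix restores: error unwinds release resources in the reverse order of acquisition, so nothing is freed while a later step can still reach it.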
+1 -1
drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c
··· 641 641 * disabled 642 642 */ 643 643 if (rq->type != PTP_CLK_REQ_PPS || !adapter->ptp_setup_sdp) 644 - return -ENOTSUPP; 644 + return -EOPNOTSUPP; 645 645 646 646 if (on) 647 647 adapter->flags2 |= IXGBE_FLAG2_PTP_PPS_ENABLED;
+35 -6
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_rx.c
··· 320 320 err_free: 321 321 kfree(buf); 322 322 err_out: 323 - priv_rx->rq_stats->tls_resync_req_skip++; 324 323 return err; 325 324 } 326 325 ··· 338 339 339 340 if (unlikely(test_bit(MLX5E_PRIV_RX_FLAG_DELETING, priv_rx->flags))) { 340 341 mlx5e_ktls_priv_rx_put(priv_rx); 342 + priv_rx->rq_stats->tls_resync_req_skip++; 343 + tls_offload_rx_resync_async_request_cancel(&resync->core); 341 344 return; 342 345 } 343 346 344 347 c = resync->priv->channels.c[priv_rx->rxq]; 345 348 sq = &c->async_icosq; 346 349 347 350 if (resync_post_get_progress_params(sq, priv_rx)) { 351 + priv_rx->rq_stats->tls_resync_req_skip++; 352 + tls_offload_rx_resync_async_request_cancel(&resync->core); 348 353 mlx5e_ktls_priv_rx_put(priv_rx); 354 + } 349 355 } 350 356 351 357 static void resync_init(struct mlx5e_ktls_rx_resync_ctx *resync, ··· 429 425 { 430 426 struct mlx5e_ktls_rx_resync_buf *buf = wi->tls_get_params.buf; 431 427 struct mlx5e_ktls_offload_context_rx *priv_rx; 428 + struct tls_offload_resync_async *async_resync; 429 + struct tls_offload_context_rx *rx_ctx; 432 430 u8 tracker_state, auth_state, *ctx; 433 431 struct device *dev; 434 432 u32 hw_seq; 435 433 436 434 priv_rx = buf->priv_rx; 437 435 dev = mlx5_core_dma_dev(sq->channel->mdev); 438 - if (unlikely(test_bit(MLX5E_PRIV_RX_FLAG_DELETING, priv_rx->flags))) 436 + rx_ctx = tls_offload_ctx_rx(tls_get_ctx(priv_rx->sk)); 437 + async_resync = rx_ctx->resync_async; 438 + if (unlikely(test_bit(MLX5E_PRIV_RX_FLAG_DELETING, priv_rx->flags))) { 439 + priv_rx->rq_stats->tls_resync_req_skip++; 440 + tls_offload_rx_resync_async_request_cancel(async_resync); 439 441 goto out; 442 + } 440 443 441 444 dma_sync_single_for_cpu(dev, buf->dma_addr, PROGRESS_PARAMS_PADDED_SIZE, 442 445 DMA_FROM_DEVICE); ··· 454 443 if (tracker_state != MLX5E_TLS_PROGRESS_PARAMS_RECORD_TRACKER_STATE_TRACKING || 455 444 auth_state != MLX5E_TLS_PROGRESS_PARAMS_AUTH_STATE_NO_OFFLOAD) { 456 445 priv_rx->rq_stats->tls_resync_req_skip++; 446 + tls_offload_rx_resync_async_request_cancel(async_resync); 457 447 goto out; 458 448 } 459 449 460 450 hw_seq = MLX5_GET(tls_progress_params, ctx, hw_resync_tcp_sn); 461 - tls_offload_rx_resync_async_request_end(priv_rx->sk, cpu_to_be32(hw_seq)); 451 + tls_offload_rx_resync_async_request_end(async_resync, 452 + cpu_to_be32(hw_seq)); 462 453 priv_rx->rq_stats->tls_resync_req_end++; 463 454 out: 464 455 mlx5e_ktls_priv_rx_put(priv_rx); ··· 485 472 486 473 resync = &priv_rx->resync; 487 474 mlx5e_ktls_priv_rx_get(priv_rx); 488 - if (unlikely(!queue_work(resync->priv->tls->rx_wq, &resync->work))) 475 + if (unlikely(!queue_work(resync->priv->tls->rx_wq, &resync->work))) { 489 476 mlx5e_ktls_priv_rx_put(priv_rx); 477 + return false; 478 + } 490 479 491 480 return true; 492 481 } ··· 497 482 static void resync_update_sn(struct mlx5e_rq *rq, struct sk_buff *skb) 498 483 { 499 484 struct ethhdr *eth = (struct ethhdr *)(skb->data); 485 + struct tls_offload_resync_async *resync_async; 500 486 struct net_device *netdev = rq->netdev; 501 487 struct net *net = dev_net(netdev); 502 488 struct sock *sk = NULL; ··· 543 527 544 528 seq = th->seq; 545 529 datalen = skb->len - depth; 546 - tls_offload_rx_resync_async_request_start(sk, seq, datalen); 530 + resync_async = tls_offload_ctx_rx(tls_get_ctx(sk))->resync_async; 531 + tls_offload_rx_resync_async_request_start(resync_async, seq, datalen); 547 532 rq->stats->tls_resync_req_start++; 548 533 549 534 unref: ··· 571 554 c = priv->channels.c[priv_rx->rxq]; 572 555 573 556 resync_handle_seq_match(priv_rx, c); 557 + } 558 + 559 + void 560 + mlx5e_ktls_rx_resync_async_request_cancel(struct mlx5e_icosq_wqe_info *wi) 561 + { 562 + struct mlx5e_ktls_offload_context_rx *priv_rx; 563 + struct mlx5e_ktls_rx_resync_buf *buf; 564 + 565 + buf = wi->tls_get_params.buf; 566 + priv_rx = buf->priv_rx; 567 + priv_rx->rq_stats->tls_resync_req_skip++; 568 + tls_offload_rx_resync_async_request_cancel(&priv_rx->resync.core); 574 569 } 575 570 576 571 /* End of resync section */
+4
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_txrx.h
··· 29 29 void mlx5e_ktls_tx_handle_resync_dump_comp(struct mlx5e_txqsq *sq, 30 30 struct mlx5e_tx_wqe_info *wi, 31 31 u32 *dma_fifo_cc); 32 + 33 + void 34 + mlx5e_ktls_rx_resync_async_request_cancel(struct mlx5e_icosq_wqe_info *wi); 35 + 32 36 static inline bool 33 37 mlx5e_ktls_tx_try_handle_resync_dump_comp(struct mlx5e_txqsq *sq, 34 38 struct mlx5e_tx_wqe_info *wi,
+4
drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
··· 1036 1036 netdev_WARN_ONCE(cq->netdev, 1037 1037 "Bad OP in ICOSQ CQE: 0x%x\n", 1038 1038 get_cqe_opcode(cqe)); 1039 + #ifdef CONFIG_MLX5_EN_TLS 1040 + if (wi->wqe_type == MLX5E_ICOSQ_WQE_GET_PSV_TLS) 1041 + mlx5e_ktls_rx_resync_async_request_cancel(wi); 1042 + #endif 1039 1043 mlx5e_dump_error_cqe(&sq->cq, sq->sqn, 1040 1044 (struct mlx5_err_cqe *)cqe); 1041 1045 mlx5_wq_cyc_wqe_dump(&sq->wq, ci, wi->num_wqebbs);
-1
drivers/net/ethernet/mellanox/mlx5/core/esw/legacy.c
··· 66 66 esw->fdb_table.legacy.addr_grp = NULL; 67 67 esw->fdb_table.legacy.allmulti_grp = NULL; 68 68 esw->fdb_table.legacy.promisc_grp = NULL; 69 - atomic64_set(&esw->user_count, 0); 70 69 } 71 70 72 71 static int esw_create_legacy_fdb_table(struct mlx5_eswitch *esw)
-1
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
··· 1978 1978 /* Holds true only as long as DMFS is the default */ 1979 1979 mlx5_flow_namespace_set_mode(esw->fdb_table.offloads.ns, 1980 1980 MLX5_FLOW_STEERING_MODE_DMFS); 1981 - atomic64_set(&esw->user_count, 0); 1982 1981 } 1983 1982 1984 1983 static int esw_get_nr_ft_offloads_steering_src_ports(struct mlx5_eswitch *esw)
+4 -2
drivers/net/ethernet/netronome/nfp/nfp_net_common.c
··· 2557 2557 err = nfp_net_tlv_caps_parse(&nn->pdev->dev, nn->dp.ctrl_bar, 2558 2558 &nn->tlv_caps); 2559 2559 if (err) 2560 - goto err_free_nn; 2560 + goto err_free_xsk_pools; 2561 2561 2562 2562 err = nfp_ccm_mbox_alloc(nn); 2563 2563 if (err) 2564 - goto err_free_nn; 2564 + goto err_free_xsk_pools; 2565 2565 2566 2566 return nn; 2567 2567 2568 + err_free_xsk_pools: 2569 + kfree(nn->dp.xsk_pools); 2568 2570 err_free_nn: 2569 2571 if (nn->dp.netdev) 2570 2572 free_netdev(nn->dp.netdev);
+4
drivers/net/ethernet/sfc/mae.c
··· 1090 1090 kfree(mport); 1091 1091 } 1092 1092 1093 + /* 1094 + * Takes ownership of @desc, even if it returns an error 1095 + */ 1093 1096 static int efx_mae_process_mport(struct efx_nic *efx, 1094 1097 struct mae_mport_desc *desc) 1095 1098 { ··· 1103 1100 if (!IS_ERR_OR_NULL(mport)) { 1104 1101 netif_err(efx, drv, efx->net_dev, 1105 1102 "mport with id %u does exist!!!\n", desc->mport_id); 1103 + kfree(desc); 1106 1104 return -EEXIST; 1107 1105 } 1108 1106
+14 -18
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 4089 4089 static bool stmmac_vlan_insert(struct stmmac_priv *priv, struct sk_buff *skb, 4090 4090 struct stmmac_tx_queue *tx_q) 4091 4091 { 4092 - u16 tag = 0x0, inner_tag = 0x0; 4093 - u32 inner_type = 0x0; 4094 4092 struct dma_desc *p; 4093 + u16 tag = 0x0; 4095 4094 4096 - if (!priv->dma_cap.vlins) 4095 + if (!priv->dma_cap.vlins || !skb_vlan_tag_present(skb)) 4097 4096 return false; 4098 - if (!skb_vlan_tag_present(skb)) 4099 - return false; 4100 - if (skb->vlan_proto == htons(ETH_P_8021AD)) { 4101 - inner_tag = skb_vlan_tag_get(skb); 4102 - inner_type = STMMAC_VLAN_INSERT; 4103 - } 4104 4097 4105 4098 tag = skb_vlan_tag_get(skb); 4106 4099 ··· 4102 4109 else 4103 4110 p = &tx_q->dma_tx[tx_q->cur_tx]; 4104 4111 4105 - if (stmmac_set_desc_vlan_tag(priv, p, tag, inner_tag, inner_type)) 4112 + if (stmmac_set_desc_vlan_tag(priv, p, tag, 0x0, 0x0)) 4106 4113 return false; 4107 4114 4108 4115 stmmac_set_tx_owner(priv, p); ··· 4500 4507 bool has_vlan, set_ic; 4501 4508 int entry, first_tx; 4502 4509 dma_addr_t des; 4510 + u32 sdu_len; 4503 4511 4504 4512 tx_q = &priv->dma_conf.tx_queue[queue]; 4505 4513 txq_stats = &priv->xstats.txq_stats[queue]; ··· 4518 4524 } 4519 4525 4520 4526 if (priv->est && priv->est->enable && 4521 - priv->est->max_sdu[queue] && 4522 - skb->len > priv->est->max_sdu[queue]){ 4523 - priv->xstats.max_sdu_txq_drop[queue]++; 4524 - goto max_sdu_err; 4527 + priv->est->max_sdu[queue]) { 4528 + sdu_len = skb->len; 4529 + /* Add VLAN tag length if VLAN tag insertion offload is requested */ 4530 + if (priv->dma_cap.vlins && skb_vlan_tag_present(skb)) 4531 + sdu_len += VLAN_HLEN; 4532 + if (sdu_len > priv->est->max_sdu[queue]) { 4533 + priv->xstats.max_sdu_txq_drop[queue]++; 4534 + goto max_sdu_err; 4535 + } 4525 4536 } 4526 4537 4527 4538 if (unlikely(stmmac_tx_avail(priv, queue) < nfrags + 1)) { ··· 7572 7573 ndev->features |= NETIF_F_HW_VLAN_CTAG_FILTER; 7573 7574 ndev->features |= NETIF_F_HW_VLAN_STAG_FILTER; 7574 7575 } 7575 - if (priv->dma_cap.vlins) { 7576 + if (priv->dma_cap.vlins) 7576 7577 ndev->features |= NETIF_F_HW_VLAN_CTAG_TX; 7577 - if (priv->dma_cap.dvlan) 7578 - ndev->features |= NETIF_F_HW_VLAN_STAG_TX; 7579 - } 7580 7578 #endif 7581 7579 priv->msg_enable = netif_msg_init(debug, default_msg_level); 7582 7580
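The stmmac hunk above reworks the EST maxSDU check to account for a MAC-inserted VLAN tag that is not yet part of skb->len when the check runs. A minimal userspace sketch of that comparison (`est_would_drop` is an illustrative name; only VLAN_HLEN = 4 matches the kernel definition):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define VLAN_HLEN 4 /* 802.1Q tag: 2-byte TPID + 2-byte TCI */

/*
 * Sketch of the corrected maxSDU check: when VLAN tag insertion is
 * offloaded to the MAC, the 4-byte tag is added by hardware after
 * this point, so it must be counted into the SDU length up front.
 */
static bool est_would_drop(uint32_t skb_len, bool vlan_insert_offload,
			   uint32_t max_sdu)
{
	uint32_t sdu_len = skb_len;

	if (!max_sdu)	/* maxSDU not configured for this queue */
		return false;
	if (vlan_insert_offload)
		sdu_len += VLAN_HLEN;
	return sdu_len > max_sdu;
}
```

A 1500-byte frame with tag insertion offloaded occupies 1504 bytes on the wire, so it must be dropped against a 1503-byte maxSDU even though skb->len alone would pass.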
+2 -2
drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
··· 981 981 if (qopt->cmd == TAPRIO_CMD_DESTROY) 982 982 goto disable; 983 983 984 - if (qopt->num_entries >= dep) 984 + if (qopt->num_entries > dep) 985 985 return -EINVAL; 986 986 if (!qopt->cycle_time) 987 987 return -ERANGE; ··· 1012 1012 s64 delta_ns = qopt->entries[i].interval; 1013 1013 u32 gates = qopt->entries[i].gate_mask; 1014 1014 1015 - if (delta_ns > GENMASK(wid, 0)) 1015 + if (delta_ns > GENMASK(wid - 1, 0)) 1016 1016 return -ERANGE; 1017 1017 if (gates > GENMASK(31 - wid, 0)) 1018 1018 return -ERANGE;
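Both stmmac_tc fixes above are off-by-one issues: a gate control list of depth `dep` can hold exactly `dep` entries, and a `wid`-bit interval field tops out at `GENMASK(wid - 1, 0)`. A small sketch of the latter, with a userspace stand-in for the kernel's GENMASK_ULL():

```c
#include <assert.h>
#include <stdint.h>

/* Userspace equivalent of the kernel's GENMASK_ULL(h, l) */
#define GENMASK_ULL(h, l) \
	((~0ULL >> (63 - (h))) & (~0ULL << (l)))

/* Largest interval representable in a wid-bit hardware field */
static uint64_t max_interval(unsigned int wid)
{
	return GENMASK_ULL(wid - 1, 0);
}
```

With a 16-bit interval field, the old bound `GENMASK(wid, 0)` wrongly admits 17-bit values up to 0x1FFFF, which the hardware would truncate; the fix rejects anything above 0xFFFF.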
+1 -1
drivers/net/ethernet/stmicro/stmmac/stmmac_vlan.c
··· 212 212 213 213 value = readl(ioaddr + VLAN_INCL); 214 214 value |= VLAN_VLTI; 215 - value |= VLAN_CSVL; /* Only use SVLAN */ 215 + value &= ~VLAN_CSVL; /* Only use CVLAN */ 216 216 value &= ~VLAN_VLC; 217 217 value |= (type << VLAN_VLC_SHIFT) & VLAN_VLC; 218 218 writel(value, ioaddr + VLAN_INCL);
+5 -3
drivers/net/mctp/mctp-usb.c
··· 96 96 skb->data, skb->len, 97 97 mctp_usb_out_complete, skb); 98 98 99 + /* Stops TX queue first to prevent race condition with URB complete */ 100 + netif_stop_queue(dev); 99 101 rc = usb_submit_urb(urb, GFP_ATOMIC); 100 - if (rc) 102 + if (rc) { 103 + netif_wake_queue(dev); 101 104 goto err_drop; 102 - else 103 - netif_stop_queue(dev); 105 + } 104 106 105 107 return NETDEV_TX_OK; 106 108
+13 -8
drivers/net/netconsole.c
··· 886 886 887 887 static void update_userdata(struct netconsole_target *nt) 888 888 { 889 - int complete_idx = 0, child_count = 0; 890 889 struct list_head *entry; 890 + int child_count = 0; 891 + unsigned long flags; 892 + 893 + spin_lock_irqsave(&target_list_lock, flags); 891 894 892 895 /* Clear the current string in case the last userdatum was deleted */ 893 896 nt->userdata_length = 0; ··· 900 897 struct userdatum *udm_item; 901 898 struct config_item *item; 902 899 903 - if (WARN_ON_ONCE(child_count >= MAX_EXTRADATA_ITEMS)) 904 - break; 900 + if (child_count >= MAX_EXTRADATA_ITEMS) { 901 + spin_unlock_irqrestore(&target_list_lock, flags); 902 + WARN_ON_ONCE(1); 903 + return; 904 + } 905 905 child_count++; 906 906 907 907 item = container_of(entry, struct config_item, ci_entry); ··· 918 912 * one entry length (1/MAX_EXTRADATA_ITEMS long), entry count is 919 913 * checked to not exceed MAX items with child_count above 920 914 */ 921 - complete_idx += scnprintf(&nt->extradata_complete[complete_idx], 922 - MAX_EXTRADATA_ENTRY_LEN, " %s=%s\n", 923 - item->ci_name, udm_item->value); 915 + nt->userdata_length += scnprintf(&nt->extradata_complete[nt->userdata_length], 916 + MAX_EXTRADATA_ENTRY_LEN, " %s=%s\n", 917 + item->ci_name, udm_item->value); 924 918 } 925 - nt->userdata_length = strnlen(nt->extradata_complete, 926 - sizeof(nt->extradata_complete)); 919 + spin_unlock_irqrestore(&target_list_lock, flags); 927 920 } 928 921 929 922 static ssize_t userdatum_value_store(struct config_item *item, const char *buf,
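The netconsole hunk drops the separate `complete_idx` and accumulates `nt->userdata_length` directly from scnprintf() return values, which is safe because scnprintf(), unlike snprintf(), returns the number of characters actually stored rather than the would-be length. A userspace sketch of that accumulation pattern (all names here are illustrative):

```c
#include <assert.h>
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

/* scnprintf()-like wrapper: returns chars written, never the would-be length */
static int scnprintf_like(char *buf, size_t size, const char *fmt, ...)
{
	va_list args;
	int i;

	va_start(args, fmt);
	i = vsnprintf(buf, size, fmt, args);
	va_end(args);

	if (i < 0)
		return 0;
	if ((size_t)i >= size)
		return size ? (int)size - 1 : 0;
	return i;
}

/* Append " key=value\n" entries, tracking the total length as we go */
static size_t append_entries(char *out, size_t cap, const char **keys,
			     const char **vals, size_t n)
{
	size_t len = 0;

	for (size_t i = 0; i < n && len < cap; i++)
		len += scnprintf_like(out + len, cap - len, " %s=%s\n",
				      keys[i], vals[i]);
	return len;
}

static int userdata_demo_ok(void)
{
	const char *k[] = { "abc", "de" };
	const char *v[] = { "1", "23" };
	char buf[64];
	size_t len = append_entries(buf, sizeof(buf), k, v, 2);

	return len == strlen(buf) && !strcmp(buf, " abc=1\n de=23\n");
}
```

Because the return value is clamped to what was written, the running offset always equals strlen() of the buffer, so the trailing strnlen() in the old code becomes unnecessary.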
+6
drivers/net/phy/dp83867.c
··· 738 738 return ret; 739 739 } 740 740 741 + /* Although the DP83867 reports EEE capability through the 742 + * MDIO_PCS_EEE_ABLE and MDIO_AN_EEE_ADV registers, the feature 743 + * is not actually implemented in hardware. 744 + */ 745 + phy_disable_eee(phydev); 746 + 741 747 if (phy_interface_is_rgmii(phydev) || 742 748 phydev->interface == PHY_INTERFACE_MODE_SGMII) { 743 749 val = phy_read(phydev, MII_DP83867_PHYCTRL);
+2 -2
drivers/net/phy/dp83869.c
··· 84 84 #define DP83869_CLK_DELAY_DEF 7 85 85 86 86 /* STRAP_STS1 bits */ 87 - #define DP83869_STRAP_OP_MODE_MASK GENMASK(2, 0) 87 + #define DP83869_STRAP_OP_MODE_MASK GENMASK(11, 9) 88 88 #define DP83869_STRAP_STS1_RESERVED BIT(11) 89 89 #define DP83869_STRAP_MIRROR_ENABLED BIT(12) 90 90 ··· 528 528 if (val < 0) 529 529 return val; 530 530 531 - dp83869->mode = val & DP83869_STRAP_OP_MODE_MASK; 531 + dp83869->mode = FIELD_GET(DP83869_STRAP_OP_MODE_MASK, val); 532 532 533 533 return 0; 534 534 }
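The dp83869 fix shows why a strap field needs both the right mask and a down-shift: STRAP_OP_MODE lives in bits 11:9, so masking alone leaves the mode value multiplied by 512. A sketch with userspace stand-ins for the kernel's GENMASK()/FIELD_GET() (the `_SKETCH` macro name is illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define GENMASK(h, l) ((~0U >> (31 - (h))) & (~0U << (l)))

/*
 * FIELD_GET()-style helper: mask the field, then divide by the mask's
 * lowest set bit, which is equivalent to shifting down by the offset.
 */
#define FIELD_GET_SKETCH(mask, reg) \
	(((reg) & (mask)) / ((mask) & -(mask)))

#define DP83869_STRAP_OP_MODE_MASK GENMASK(11, 9)
```

With STRAP_STS1 reading 0x0600, the operating mode is 3; the old code (`val & GENMASK(2, 0)`) read unrelated strap bits entirely.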
+9 -3
drivers/net/usb/asix_devices.c
··· 230 230 int i; 231 231 unsigned long gpio_bits = dev->driver_info->data; 232 232 233 - usbnet_get_endpoints(dev,intf); 233 + ret = usbnet_get_endpoints(dev, intf); 234 + if (ret) 235 + goto out; 234 236 235 237 /* Toggle the GPIOs in a manufacturer/model specific way */ 236 238 for (i = 2; i >= 0; i--) { ··· 850 848 851 849 dev->driver_priv = priv; 852 850 853 - usbnet_get_endpoints(dev, intf); 851 + ret = usbnet_get_endpoints(dev, intf); 852 + if (ret) 853 + return ret; 854 854 855 855 /* Maybe the boot loader passed the MAC address via device tree */ 856 856 if (!eth_platform_get_mac_address(&dev->udev->dev, buf)) { ··· 1285 1281 int ret; 1286 1282 u8 buf[ETH_ALEN] = {0}; 1287 1283 1288 - usbnet_get_endpoints(dev,intf); 1284 + ret = usbnet_get_endpoints(dev, intf); 1285 + if (ret) 1286 + return ret; 1289 1287 1290 1288 /* Get the MAC address */ 1291 1289 ret = asix_read_cmd(dev, AX_CMD_READ_NODE_ID, 0, 0, ETH_ALEN, buf, 0);
+2
drivers/net/usb/usbnet.c
··· 1659 1659 net = dev->net; 1660 1660 unregister_netdev (net); 1661 1661 1662 + cancel_work_sync(&dev->kevent); 1663 + 1662 1664 while ((urb = usb_get_from_anchor(&dev->deferred))) { 1663 1665 dev_kfree_skb(urb->context); 1664 1666 kfree(urb->sg);
+8 -3
drivers/net/virtio_net.c
··· 1379 1379 ret = XDP_PASS; 1380 1380 rcu_read_lock(); 1381 1381 prog = rcu_dereference(rq->xdp_prog); 1382 - /* TODO: support multi buffer. */ 1383 - if (prog && num_buf == 1) 1384 - ret = virtnet_xdp_handler(prog, xdp, dev, xdp_xmit, stats); 1382 + if (prog) { 1383 + /* TODO: support multi buffer. */ 1384 + if (num_buf == 1) 1385 + ret = virtnet_xdp_handler(prog, xdp, dev, xdp_xmit, 1386 + stats); 1387 + else 1388 + ret = XDP_ABORTED; 1389 + } 1385 1390 rcu_read_unlock(); 1386 1391 1387 1392 switch (ret) {
+1
drivers/net/wireless/ath/ath10k/wmi.c
··· 1937 1937 if (cmd_id == WMI_CMD_UNSUPPORTED) { 1938 1938 ath10k_warn(ar, "wmi command %d is not supported by firmware\n", 1939 1939 cmd_id); 1940 + dev_kfree_skb_any(skb); 1940 1941 return ret; 1941 1942 } 1942 1943
+48 -6
drivers/net/wireless/ath/ath11k/core.c
··· 912 912 static const struct dmi_system_id ath11k_pm_quirk_table[] = { 913 913 { 914 914 .driver_data = (void *)ATH11K_PM_WOW, 915 - .matches = { 915 + .matches = { /* X13 G4 AMD #1 */ 916 + DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 917 + DMI_MATCH(DMI_PRODUCT_NAME, "21J3"), 918 + }, 919 + }, 920 + { 921 + .driver_data = (void *)ATH11K_PM_WOW, 922 + .matches = { /* X13 G4 AMD #2 */ 916 923 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 917 924 DMI_MATCH(DMI_PRODUCT_NAME, "21J4"), 918 925 }, 919 926 }, 920 927 { 921 928 .driver_data = (void *)ATH11K_PM_WOW, 922 - .matches = { 929 + .matches = { /* T14 G4 AMD #1 */ 930 + DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 931 + DMI_MATCH(DMI_PRODUCT_NAME, "21K3"), 932 + }, 933 + }, 934 + { 935 + .driver_data = (void *)ATH11K_PM_WOW, 936 + .matches = { /* T14 G4 AMD #2 */ 923 937 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 924 938 DMI_MATCH(DMI_PRODUCT_NAME, "21K4"), 925 939 }, 926 940 }, 927 941 { 928 942 .driver_data = (void *)ATH11K_PM_WOW, 929 - .matches = { 943 + .matches = { /* P14s G4 AMD #1 */ 944 + DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 945 + DMI_MATCH(DMI_PRODUCT_NAME, "21K5"), 946 + }, 947 + }, 948 + { 949 + .driver_data = (void *)ATH11K_PM_WOW, 950 + .matches = { /* P14s G4 AMD #2 */ 930 951 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 931 952 DMI_MATCH(DMI_PRODUCT_NAME, "21K6"), 932 953 }, 933 954 }, 934 955 { 935 956 .driver_data = (void *)ATH11K_PM_WOW, 936 - .matches = { 957 + .matches = { /* T16 G2 AMD #1 */ 958 + DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 959 + DMI_MATCH(DMI_PRODUCT_NAME, "21K7"), 960 + }, 961 + }, 962 + { 963 + .driver_data = (void *)ATH11K_PM_WOW, 964 + .matches = { /* T16 G2 AMD #2 */ 937 965 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 938 966 DMI_MATCH(DMI_PRODUCT_NAME, "21K8"), 939 967 }, 940 968 }, 941 969 { 942 970 .driver_data = (void *)ATH11K_PM_WOW, 943 - .matches = { 971 + .matches = { /* P16s G2 AMD #1 */ 972 + DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 973 + DMI_MATCH(DMI_PRODUCT_NAME, "21K9"), 974 + }, 975 + }, 
976 + { 977 + .driver_data = (void *)ATH11K_PM_WOW, 978 + .matches = { /* P16s G2 AMD #2 */ 944 979 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 945 980 DMI_MATCH(DMI_PRODUCT_NAME, "21KA"), 946 981 }, 947 982 }, 948 983 { 949 984 .driver_data = (void *)ATH11K_PM_WOW, 950 - .matches = { 985 + .matches = { /* T14s G4 AMD #1 */ 986 + DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 987 + DMI_MATCH(DMI_PRODUCT_NAME, "21F8"), 988 + }, 989 + }, 990 + { 991 + .driver_data = (void *)ATH11K_PM_WOW, 992 + .matches = { /* T14s G4 AMD #2 */ 951 993 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 952 994 DMI_MATCH(DMI_PRODUCT_NAME, "21F9"), 953 995 },
+5 -5
drivers/net/wireless/ath/ath11k/mac.c
··· 1 1 // SPDX-License-Identifier: BSD-3-Clause-Clear 2 2 /* 3 3 * Copyright (c) 2018-2019 The Linux Foundation. All rights reserved. 4 - * Copyright (c) 2021-2025 Qualcomm Innovation Center, Inc. All rights reserved. 4 + * Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries. 5 5 */ 6 6 7 7 #include <net/mac80211.h> ··· 4417 4417 } 4418 4418 4419 4419 if (key->flags & IEEE80211_KEY_FLAG_PAIRWISE) 4420 - flags |= WMI_KEY_PAIRWISE; 4420 + flags = WMI_KEY_PAIRWISE; 4421 4421 else 4422 - flags |= WMI_KEY_GROUP; 4422 + flags = WMI_KEY_GROUP; 4423 4423 4424 4424 ath11k_dbg(ar->ab, ATH11K_DBG_MAC, 4425 4425 "%s for peer %pM on vdev %d flags 0x%X, type = %d, num_sta %d\n", ··· 4456 4456 4457 4457 is_ap_with_no_sta = (vif->type == NL80211_IFTYPE_AP && 4458 4458 !arvif->num_stations); 4459 - if ((flags & WMI_KEY_PAIRWISE) || cmd == SET_KEY || is_ap_with_no_sta) { 4459 + if (flags == WMI_KEY_PAIRWISE || cmd == SET_KEY || is_ap_with_no_sta) { 4460 4460 ret = ath11k_install_key(arvif, key, cmd, peer_addr, flags); 4461 4461 if (ret) { 4462 4462 ath11k_warn(ab, "ath11k_install_key failed (%d)\n", ret); ··· 4470 4470 goto exit; 4471 4471 } 4472 4472 4473 - if ((flags & WMI_KEY_GROUP) && cmd == SET_KEY && is_ap_with_no_sta) 4473 + if (flags == WMI_KEY_GROUP && cmd == SET_KEY && is_ap_with_no_sta) 4474 4474 arvif->reinstall_group_keys = true; 4475 4475 } 4476 4476
+18 -16
drivers/net/wireless/ath/ath12k/mac.c
··· 8290 8290 wake_up(&ar->txmgmt_empty_waitq); 8291 8291 } 8292 8292 8293 - int ath12k_mac_tx_mgmt_pending_free(int buf_id, void *skb, void *ctx) 8293 + static void ath12k_mac_tx_mgmt_free(struct ath12k *ar, int buf_id) 8294 8294 { 8295 - struct sk_buff *msdu = skb; 8295 + struct sk_buff *msdu; 8296 8296 struct ieee80211_tx_info *info; 8297 - struct ath12k *ar = ctx; 8298 - struct ath12k_base *ab = ar->ab; 8299 8297 8300 8298 spin_lock_bh(&ar->txmgmt_idr_lock); 8301 - idr_remove(&ar->txmgmt_idr, buf_id); 8299 + msdu = idr_remove(&ar->txmgmt_idr, buf_id); 8302 8300 spin_unlock_bh(&ar->txmgmt_idr_lock); 8303 - dma_unmap_single(ab->dev, ATH12K_SKB_CB(msdu)->paddr, msdu->len, 8301 + 8302 + if (!msdu) 8303 + return; 8304 + 8305 + dma_unmap_single(ar->ab->dev, ATH12K_SKB_CB(msdu)->paddr, msdu->len, 8304 8306 DMA_TO_DEVICE); 8305 8307 8306 8308 info = IEEE80211_SKB_CB(msdu); 8307 8309 memset(&info->status, 0, sizeof(info->status)); 8308 8310 8309 - ath12k_mgmt_over_wmi_tx_drop(ar, skb); 8311 + ath12k_mgmt_over_wmi_tx_drop(ar, msdu); 8312 + } 8313 + 8314 + int ath12k_mac_tx_mgmt_pending_free(int buf_id, void *skb, void *ctx) 8315 + { 8316 + struct ath12k *ar = ctx; 8317 + 8318 + ath12k_mac_tx_mgmt_free(ar, buf_id); 8310 8319 8311 8320 return 0; 8312 8321 } ··· 8324 8315 { 8325 8316 struct ieee80211_vif *vif = ctx; 8326 8317 struct ath12k_skb_cb *skb_cb = ATH12K_SKB_CB(skb); 8327 - struct sk_buff *msdu = skb; 8328 8318 struct ath12k *ar = skb_cb->ar; 8329 - struct ath12k_base *ab = ar->ab; 8330 8319 8331 - if (skb_cb->vif == vif) { 8332 - spin_lock_bh(&ar->txmgmt_idr_lock); 8333 - idr_remove(&ar->txmgmt_idr, buf_id); 8334 - spin_unlock_bh(&ar->txmgmt_idr_lock); 8335 - dma_unmap_single(ab->dev, skb_cb->paddr, msdu->len, 8336 - DMA_TO_DEVICE); 8337 - } 8320 + if (skb_cb->vif == vif) 8321 + ath12k_mac_tx_mgmt_free(ar, buf_id); 8338 8322 8339 8323 return 0; 8340 8324 }
+1 -2
drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
··· 5627 5627 *cookie, le16_to_cpu(action_frame->len), 5628 5628 le32_to_cpu(af_params->channel)); 5629 5629 5630 - ack = brcmf_p2p_send_action_frame(cfg, cfg_to_ndev(cfg), 5631 - af_params); 5630 + ack = brcmf_p2p_send_action_frame(vif->ifp, af_params); 5632 5631 5633 5632 cfg80211_mgmt_tx_status(wdev, *cookie, buf, len, ack, 5634 5633 GFP_KERNEL);
+10 -18
drivers/net/wireless/broadcom/brcm80211/brcmfmac/p2p.c
··· 1529 1529 /** 1530 1530 * brcmf_p2p_tx_action_frame() - send action frame over fil. 1531 1531 * 1532 + * @ifp: interface to transmit on. 1532 1533 * @p2p: p2p info struct for vif. 1533 1534 * @af_params: action frame data/info. 1534 1535 * ··· 1539 1538 * The WLC_E_ACTION_FRAME_COMPLETE event will be received when the action 1540 1539 * frame is transmitted. 1541 1540 */ 1542 - static s32 brcmf_p2p_tx_action_frame(struct brcmf_p2p_info *p2p, 1541 + static s32 brcmf_p2p_tx_action_frame(struct brcmf_if *ifp, 1542 + struct brcmf_p2p_info *p2p, 1543 1543 struct brcmf_fil_af_params_le *af_params) 1544 1544 { 1545 1545 struct brcmf_pub *drvr = p2p->cfg->pub; 1546 - struct brcmf_cfg80211_vif *vif; 1547 - struct brcmf_p2p_action_frame *p2p_af; 1548 1546 s32 err = 0; 1549 1547 1550 1548 brcmf_dbg(TRACE, "Enter\n"); ··· 1552 1552 clear_bit(BRCMF_P2P_STATUS_ACTION_TX_COMPLETED, &p2p->status); 1553 1553 clear_bit(BRCMF_P2P_STATUS_ACTION_TX_NOACK, &p2p->status); 1554 1554 1555 - /* check if it is a p2p_presence response */ 1556 - p2p_af = (struct brcmf_p2p_action_frame *)af_params->action_frame.data; 1557 - if (p2p_af->subtype == P2P_AF_PRESENCE_RSP) 1558 - vif = p2p->bss_idx[P2PAPI_BSSCFG_CONNECTION].vif; 1559 - else 1560 - vif = p2p->bss_idx[P2PAPI_BSSCFG_DEVICE].vif; 1561 - 1562 - err = brcmf_fil_bsscfg_data_set(vif->ifp, "actframe", af_params, 1555 + err = brcmf_fil_bsscfg_data_set(ifp, "actframe", af_params, 1563 1556 sizeof(*af_params)); 1564 1557 if (err) { 1565 1558 bphy_err(drvr, " sending action frame has failed\n"); ··· 1704 1711 /** 1705 1712 * brcmf_p2p_send_action_frame() - send action frame . 1706 1713 * 1707 - * @cfg: driver private data for cfg80211 interface. 1708 - * @ndev: net device to transmit on. 1714 + * @ifp: interface to transmit on. 1709 1715 * @af_params: configuration data for action frame. 
1710 1716 */ 1711 - bool brcmf_p2p_send_action_frame(struct brcmf_cfg80211_info *cfg, 1712 - struct net_device *ndev, 1717 + bool brcmf_p2p_send_action_frame(struct brcmf_if *ifp, 1713 1718 struct brcmf_fil_af_params_le *af_params) 1714 1719 { 1720 + struct brcmf_cfg80211_info *cfg = ifp->drvr->config; 1715 1721 struct brcmf_p2p_info *p2p = &cfg->p2p; 1716 - struct brcmf_if *ifp = netdev_priv(ndev); 1717 1722 struct brcmf_fil_action_frame_le *action_frame; 1718 1723 struct brcmf_config_af_params config_af_params; 1719 1724 struct afx_hdl *afx_hdl = &p2p->afx_hdl; ··· 1848 1857 if (af_params->channel) 1849 1858 msleep(P2P_AF_RETRY_DELAY_TIME); 1850 1859 1851 - ack = !brcmf_p2p_tx_action_frame(p2p, af_params); 1860 + ack = !brcmf_p2p_tx_action_frame(ifp, p2p, af_params); 1852 1861 tx_retry++; 1853 1862 dwell_overflow = brcmf_p2p_check_dwell_overflow(requested_dwell, 1854 1863 dwell_jiffies); ··· 2208 2217 2209 2218 WARN_ON(p2p_ifp->bsscfgidx != bsscfgidx); 2210 2219 2211 - init_completion(&p2p->send_af_done); 2212 2220 INIT_WORK(&p2p->afx_hdl.afx_work, brcmf_p2p_afx_handler); 2213 2221 init_completion(&p2p->afx_hdl.act_frm_scan); 2214 2222 init_completion(&p2p->wait_next_af); ··· 2502 2512 2503 2513 pri_ifp = brcmf_get_ifp(cfg->pub, 0); 2504 2514 p2p->bss_idx[P2PAPI_BSSCFG_PRIMARY].vif = pri_ifp->vif; 2515 + 2516 + init_completion(&p2p->send_af_done); 2505 2517 2506 2518 if (p2pdev_forced) { 2507 2519 err_ptr = brcmf_p2p_create_p2pdev(p2p, NULL, NULL);
+1 -2
drivers/net/wireless/broadcom/brcm80211/brcmfmac/p2p.h
··· 168 168 int brcmf_p2p_notify_action_tx_complete(struct brcmf_if *ifp, 169 169 const struct brcmf_event_msg *e, 170 170 void *data); 171 - bool brcmf_p2p_send_action_frame(struct brcmf_cfg80211_info *cfg, 172 - struct net_device *ndev, 171 + bool brcmf_p2p_send_action_frame(struct brcmf_if *ifp, 173 172 struct brcmf_fil_af_params_le *af_params); 174 173 bool brcmf_p2p_scan_finding_common_channel(struct brcmf_cfg80211_info *cfg, 175 174 struct brcmf_bss_info_le *bi);
+3 -2
drivers/net/wireless/intel/iwlwifi/mld/link.c
··· 501 501 struct iwl_mld_vif *mld_vif = iwl_mld_vif_from_mac80211(bss_conf->vif); 502 502 struct iwl_mld_link *link = iwl_mld_link_from_mac80211(bss_conf); 503 503 bool is_deflink = link == &mld_vif->deflink; 504 + u8 fw_id = link->fw_id; 504 505 505 506 if (WARN_ON(!link || link->active)) 506 507 return; ··· 514 513 515 514 RCU_INIT_POINTER(mld_vif->link[bss_conf->link_id], NULL); 516 515 517 - if (WARN_ON(link->fw_id >= mld->fw->ucode_capa.num_links)) 516 + if (WARN_ON(fw_id >= mld->fw->ucode_capa.num_links)) 518 517 return; 519 518 520 - RCU_INIT_POINTER(mld->fw_id_to_bss_conf[link->fw_id], NULL); 519 + RCU_INIT_POINTER(mld->fw_id_to_bss_conf[fw_id], NULL); 521 520 } 522 521 523 522 void iwl_mld_handle_missed_beacon_notif(struct iwl_mld *mld,
+1
include/net/bluetooth/hci.h
··· 434 434 HCI_USER_CHANNEL, 435 435 HCI_EXT_CONFIGURED, 436 436 HCI_LE_ADV, 437 + HCI_LE_ADV_0, 437 438 HCI_LE_PER_ADV, 438 439 HCI_LE_SCAN, 439 440 HCI_SSP_ENABLED,
+1
include/net/bluetooth/hci_core.h
··· 244 244 bool enabled; 245 245 bool pending; 246 246 bool periodic; 247 + bool periodic_enabled; 247 248 __u8 mesh; 248 249 __u8 instance; 249 250 __u8 handle;
+2 -2
include/net/bluetooth/l2cap.h
··· 38 38 #define L2CAP_DEFAULT_TX_WINDOW 63 39 39 #define L2CAP_DEFAULT_EXT_WINDOW 0x3FFF 40 40 #define L2CAP_DEFAULT_MAX_TX 3 41 - #define L2CAP_DEFAULT_RETRANS_TO 2 /* seconds */ 42 - #define L2CAP_DEFAULT_MONITOR_TO 12 /* seconds */ 41 + #define L2CAP_DEFAULT_RETRANS_TO 2000 /* 2 seconds */ 42 + #define L2CAP_DEFAULT_MONITOR_TO 12000 /* 12 seconds */ 43 43 #define L2CAP_DEFAULT_MAX_PDU_SIZE 1492 /* Sized for AMP packet */ 44 44 #define L2CAP_DEFAULT_ACK_TO 200 45 45 #define L2CAP_DEFAULT_MAX_SDU_SIZE 0xFFFF
+1 -1
include/net/bluetooth/mgmt.h
··· 853 853 __le16 window; 854 854 __le16 period; 855 855 __u8 num_ad_types; 856 - __u8 ad_types[]; 856 + __u8 ad_types[] __counted_by(num_ad_types); 857 857 } __packed; 858 858 #define MGMT_SET_MESH_RECEIVER_SIZE 6 859 859
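`__counted_by(num_ad_types)` ties the flexible array's element count to the preceding member, so FORTIFY and sanitizer bounds checks know the array's real size at runtime. A userspace sketch of the pattern (the struct mirrors mgmt_cp_set_mesh only loosely, and the COUNTED_BY fallback macro is an assumption for compilers without the attribute):

```c
#include <assert.h>
#include <stdlib.h>

#if defined(__has_attribute)
#if __has_attribute(counted_by)
#define COUNTED_BY(m) __attribute__((counted_by(m)))
#endif
#endif
#ifndef COUNTED_BY
#define COUNTED_BY(m)
#endif

struct mesh_cp_sketch {
	unsigned char enable;
	unsigned char num_ad_types;
	unsigned char ad_types[] COUNTED_BY(num_ad_types);
};

/* Allocate header plus n elements, keeping the count and the size in sync */
static struct mesh_cp_sketch *mesh_cp_alloc(unsigned char n)
{
	struct mesh_cp_sketch *cp;

	cp = calloc(1, sizeof(*cp) + n * sizeof(cp->ad_types[0]));
	if (cp)
		cp->num_ad_types = n;
	return cp;
}

static int mesh_demo_ok(void)
{
	struct mesh_cp_sketch *cp = mesh_cp_alloc(3);
	int ok = cp && cp->num_ad_types == 3;

	free(cp);
	return ok;
}
```

The annotation carries no ABI cost: the struct layout is unchanged, only the compiler's view of `ad_types` bounds improves.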
+1 -1
include/net/tcp.h
··· 370 370 int tcp_ioctl(struct sock *sk, int cmd, int *karg); 371 371 enum skb_drop_reason tcp_rcv_state_process(struct sock *sk, struct sk_buff *skb); 372 372 void tcp_rcv_established(struct sock *sk, struct sk_buff *skb); 373 - void tcp_rcvbuf_grow(struct sock *sk); 373 + void tcp_rcvbuf_grow(struct sock *sk, u32 newval); 374 374 void tcp_rcv_space_adjust(struct sock *sk); 375 375 int tcp_twsk_unique(struct sock *sk, struct sock *sktw, void *twp); 376 376 void tcp_twsk_destructor(struct sock *sk);
+13 -12
include/net/tls.h
··· 451 451 452 452 /* Log all TLS record header TCP sequences in [seq, seq+len] */ 453 453 static inline void 454 - tls_offload_rx_resync_async_request_start(struct sock *sk, __be32 seq, u16 len) 454 + tls_offload_rx_resync_async_request_start(struct tls_offload_resync_async *resync_async, 455 + __be32 seq, u16 len) 455 456 { 456 - struct tls_context *tls_ctx = tls_get_ctx(sk); 457 - struct tls_offload_context_rx *rx_ctx = tls_offload_ctx_rx(tls_ctx); 458 - 459 - atomic64_set(&rx_ctx->resync_async->req, ((u64)ntohl(seq) << 32) | 457 + atomic64_set(&resync_async->req, ((u64)ntohl(seq) << 32) | 460 458 ((u64)len << 16) | RESYNC_REQ | RESYNC_REQ_ASYNC); 461 - rx_ctx->resync_async->loglen = 0; 462 - rx_ctx->resync_async->rcd_delta = 0; 459 + resync_async->loglen = 0; 460 + resync_async->rcd_delta = 0; 463 461 } 464 462 465 463 static inline void 466 - tls_offload_rx_resync_async_request_end(struct sock *sk, __be32 seq) 464 + tls_offload_rx_resync_async_request_end(struct tls_offload_resync_async *resync_async, 465 + __be32 seq) 467 466 { 468 - struct tls_context *tls_ctx = tls_get_ctx(sk); 469 - struct tls_offload_context_rx *rx_ctx = tls_offload_ctx_rx(tls_ctx); 467 + atomic64_set(&resync_async->req, ((u64)ntohl(seq) << 32) | RESYNC_REQ); 468 + } 470 469 471 - atomic64_set(&rx_ctx->resync_async->req, 472 - ((u64)ntohl(seq) << 32) | RESYNC_REQ); 470 + static inline void 471 + tls_offload_rx_resync_async_request_cancel(struct tls_offload_resync_async *resync_async) 472 + { 473 + atomic64_set(&resync_async->req, 0); 473 474 } 474 475 475 476 static inline void
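The TLS helpers above pack the whole resync request into one 64-bit word so it can be read and updated atomically; the new cancel helper simply stores 0. A sketch of the layout (the flag bit positions follow the RESYNC_REQ/RESYNC_REQ_ASYNC usage visible in the hunk, but treat the exact values as an assumption):

```c
#include <assert.h>
#include <stdint.h>

#define RESYNC_REQ	 (1ULL << 0)
#define RESYNC_REQ_ASYNC (1ULL << 1)

/* Pack seq (high 32 bits), record-log length (bits 31:16) and flag bits */
static uint64_t resync_req_word(uint32_t seq, uint16_t len, int async)
{
	return ((uint64_t)seq << 32) | ((uint64_t)len << 16) |
	       RESYNC_REQ | (async ? RESYNC_REQ_ASYNC : 0);
}

/* Recover the TCP sequence from a packed request word */
static uint32_t resync_req_seq(uint64_t req)
{
	return (uint32_t)(req >> 32);
}
```

Because the whole state fits in a single atomic64, a driver error path can cancel an outstanding request with one store, with no risk of a reader seeing a half-updated seq/len pair.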
+9
include/trace/events/tcp.h
··· 218 218 __field(__u32, space) 219 219 __field(__u32, ooo_space) 220 220 __field(__u32, rcvbuf) 221 + __field(__u32, rcv_ssthresh) 222 + __field(__u32, window_clamp) 223 + __field(__u32, rcv_wnd) 221 224 __field(__u8, scaling_ratio) 222 225 __field(__u16, sport) 223 226 __field(__u16, dport) ··· 248 245 tp->rcv_nxt; 249 246 250 247 __entry->rcvbuf = sk->sk_rcvbuf; 248 + __entry->rcv_ssthresh = tp->rcv_ssthresh; 249 + __entry->window_clamp = tp->window_clamp; 250 + __entry->rcv_wnd = tp->rcv_wnd; 251 251 __entry->scaling_ratio = tp->scaling_ratio; 252 252 __entry->sport = ntohs(inet->inet_sport); 253 253 __entry->dport = ntohs(inet->inet_dport); ··· 270 264 ), 271 265 272 266 TP_printk("time=%u rtt_us=%u copied=%u inq=%u space=%u ooo=%u scaling_ratio=%u rcvbuf=%u " 267 + "rcv_ssthresh=%u window_clamp=%u rcv_wnd=%u " 273 268 "family=%s sport=%hu dport=%hu saddr=%pI4 daddr=%pI4 " 274 269 "saddrv6=%pI6c daddrv6=%pI6c skaddr=%p sock_cookie=%llx", 275 270 __entry->time, __entry->rtt_us, __entry->copied, 276 271 __entry->inq, __entry->space, __entry->ooo_space, 277 272 __entry->scaling_ratio, __entry->rcvbuf, 273 + __entry->rcv_ssthresh, __entry->window_clamp, 274 + __entry->rcv_wnd, 278 275 show_family_name(__entry->family), 279 276 __entry->sport, __entry->dport, 280 277 __entry->saddr, __entry->daddr,
+12 -2
net/batman-adv/originator.c
··· 763 763 bat_priv = netdev_priv(mesh_iface); 764 764 765 765 primary_if = batadv_primary_if_get_selected(bat_priv); 766 - if (!primary_if || primary_if->if_status != BATADV_IF_ACTIVE) { 766 + if (!primary_if) { 767 767 ret = -ENOENT; 768 768 goto out_put_mesh_iface; 769 + } 770 + 771 + if (primary_if->if_status != BATADV_IF_ACTIVE) { 772 + ret = -ENOENT; 773 + goto out_put_primary_if; 769 774 } 770 775 771 776 hard_iface = batadv_netlink_get_hardif(bat_priv, cb); ··· 1332 1327 bat_priv = netdev_priv(mesh_iface); 1333 1328 1334 1329 primary_if = batadv_primary_if_get_selected(bat_priv); 1335 - if (!primary_if || primary_if->if_status != BATADV_IF_ACTIVE) { 1330 + if (!primary_if) { 1336 1331 ret = -ENOENT; 1337 1332 goto out_put_mesh_iface; 1333 + } 1334 + 1335 + if (primary_if->if_status != BATADV_IF_ACTIVE) { 1336 + ret = -ENOENT; 1337 + goto out_put_primary_if; 1338 1338 } 1339 1339 1340 1340 hard_iface = batadv_netlink_get_hardif(bat_priv, cb);
+7
net/bluetooth/hci_conn.c
··· 843 843 if (bis) 844 844 return; 845 845 846 + bis = hci_conn_hash_lookup_big_state(hdev, 847 + conn->iso_qos.bcast.big, 848 + BT_OPEN, 849 + HCI_ROLE_MASTER); 850 + if (bis) 851 + return; 852 + 846 853 hci_le_terminate_big(hdev, conn); 847 854 } else { 848 855 hci_le_big_terminate(hdev, conn->iso_qos.bcast.big,
+9 -2
net/bluetooth/hci_event.c
··· 1607 1607 1608 1608 hci_dev_set_flag(hdev, HCI_LE_ADV); 1609 1609 1610 - if (adv && !adv->periodic) 1610 + if (adv) 1611 1611 adv->enabled = true; 1612 + else if (!set->handle) 1613 + hci_dev_set_flag(hdev, HCI_LE_ADV_0); 1612 1614 1613 1615 conn = hci_lookup_le_connect(hdev); 1614 1616 if (conn) ··· 1621 1619 if (cp->num_of_sets) { 1622 1620 if (adv) 1623 1621 adv->enabled = false; 1622 + else if (!set->handle) 1623 + hci_dev_clear_flag(hdev, HCI_LE_ADV_0); 1624 1624 1625 1625 /* If just one instance was disabled check if there are 1626 1626 * any other instance enabled before clearing HCI_LE_ADV ··· 3963 3959 hci_dev_set_flag(hdev, HCI_LE_PER_ADV); 3964 3960 3965 3961 if (adv) 3966 - adv->enabled = true; 3962 + adv->periodic_enabled = true; 3967 3963 } else { 3964 + if (adv) 3965 + adv->periodic_enabled = false; 3966 + 3968 3967 /* If just one instance was disabled check if there are 3969 3968 * any other instance enabled before clearing HCI_LE_PER_ADV. 3970 3969 * The current periodic adv instance will be marked as
+14 -9
net/bluetooth/hci_sync.c
··· 863 863 { 864 864 struct hci_cmd_sync_work_entry *entry; 865 865 866 - entry = hci_cmd_sync_lookup_entry(hdev, func, data, destroy); 867 - if (!entry) 868 - return false; 866 + mutex_lock(&hdev->cmd_sync_work_lock); 869 867 870 - hci_cmd_sync_cancel_entry(hdev, entry); 868 + entry = _hci_cmd_sync_lookup_entry(hdev, func, data, destroy); 869 + if (!entry) { 870 + mutex_unlock(&hdev->cmd_sync_work_lock); 871 + return false; 872 + } 873 + 874 + _hci_cmd_sync_cancel_entry(hdev, entry, -ECANCELED); 875 + 876 + mutex_unlock(&hdev->cmd_sync_work_lock); 871 877 872 878 return true; 873 879 } ··· 1607 1601 1608 1602 /* If periodic advertising already disabled there is nothing to do. */ 1609 1603 adv = hci_find_adv_instance(hdev, instance); 1610 - if (!adv || !adv->periodic || !adv->enabled) 1604 + if (!adv || !adv->periodic_enabled) 1611 1605 return 0; 1612 1606 1613 1607 memset(&cp, 0, sizeof(cp)); ··· 1672 1666 1673 1667 /* If periodic advertising already enabled there is nothing to do. */ 1674 1668 adv = hci_find_adv_instance(hdev, instance); 1675 - if (adv && adv->periodic && adv->enabled) 1669 + if (adv && adv->periodic_enabled) 1676 1670 return 0; 1677 1671 1678 1672 memset(&cp, 0, sizeof(cp)); ··· 2606 2600 /* If current advertising instance is set to instance 0x00 2607 2601 * then we need to re-enable it. 2608 2602 */ 2609 - if (!hdev->cur_adv_instance) 2610 - err = hci_enable_ext_advertising_sync(hdev, 2611 - hdev->cur_adv_instance); 2603 + if (hci_dev_test_and_clear_flag(hdev, HCI_LE_ADV_0)) 2604 + err = hci_enable_ext_advertising_sync(hdev, 0x00); 2612 2605 } else { 2613 2606 /* Schedule for most recent instance to be restarted and begin 2614 2607 * the software rotation loop
+8 -2
net/bluetooth/iso.c
··· 2032 2032 */ 2033 2033 if (!bacmp(&hcon->dst, BDADDR_ANY)) { 2034 2034 bacpy(&hcon->dst, &iso_pi(parent)->dst); 2035 - hcon->dst_type = iso_pi(parent)->dst_type; 2035 + hcon->dst_type = le_addr_type(iso_pi(parent)->dst_type); 2036 2036 } 2037 2037 2038 2038 if (test_bit(HCI_CONN_PA_SYNC, &hcon->flags)) { ··· 2046 2046 } 2047 2047 2048 2048 bacpy(&iso_pi(sk)->dst, &hcon->dst); 2049 - iso_pi(sk)->dst_type = hcon->dst_type; 2049 + 2050 + /* Convert from HCI to three-value type */ 2051 + if (hcon->dst_type == ADDR_LE_DEV_PUBLIC) 2052 + iso_pi(sk)->dst_type = BDADDR_LE_PUBLIC; 2053 + else 2054 + iso_pi(sk)->dst_type = BDADDR_LE_RANDOM; 2055 + 2050 2056 iso_pi(sk)->sync_handle = iso_pi(parent)->sync_handle; 2051 2057 memcpy(iso_pi(sk)->base, iso_pi(parent)->base, iso_pi(parent)->base_len); 2052 2058 iso_pi(sk)->base_len = iso_pi(parent)->base_len;
+2 -2
net/bluetooth/l2cap_core.c
··· 282 282 if (!delayed_work_pending(&chan->monitor_timer) && 283 283 chan->retrans_timeout) { 284 284 l2cap_set_timer(chan, &chan->retrans_timer, 285 - secs_to_jiffies(chan->retrans_timeout)); 285 + msecs_to_jiffies(chan->retrans_timeout)); 286 286 } 287 287 } 288 288 ··· 291 291 __clear_retrans_timer(chan); 292 292 if (chan->monitor_timeout) { 293 293 l2cap_set_timer(chan, &chan->monitor_timer, 294 - secs_to_jiffies(chan->monitor_timeout)); 294 + msecs_to_jiffies(chan->monitor_timeout)); 295 295 } 296 296 } 297 297
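This l2cap_core change pairs with the l2cap.h hunk earlier in the series: with the RETRANS/MONITOR defaults now expressed in milliseconds, the timers must be armed with msecs_to_jiffies() rather than secs_to_jiffies(), which would have produced a 2000-second retransmit timeout. A userspace sketch of the conversion, assuming HZ = 100 purely for illustration:

```c
#include <assert.h>

#define HZ 100 /* assumed tick rate for illustration */

/* msecs_to_jiffies()-style conversion, rounding up like the kernel does */
static unsigned long msecs_to_jiffies_sketch(unsigned long ms)
{
	return (ms * HZ + 999) / 1000;
}
```

With the defaults stored as 2000 and 12000 ms, this arms the intended 2 s retransmit and 12 s monitor timers.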
+15 -11
net/bluetooth/mgmt.c
··· 2175 2175 sk = cmd->sk; 2176 2176 2177 2177 if (status) { 2178 + mgmt_cmd_status(cmd->sk, hdev->id, MGMT_OP_SET_MESH_RECEIVER, 2179 + status); 2178 2180 mgmt_pending_foreach(MGMT_OP_SET_MESH_RECEIVER, hdev, true, 2179 2181 cmd_status_rsp, &status); 2180 - return; 2182 + goto done; 2181 2183 } 2182 2184 2183 - mgmt_pending_remove(cmd); 2184 2185 mgmt_cmd_complete(sk, hdev->id, MGMT_OP_SET_MESH_RECEIVER, 0, NULL, 0); 2186 + 2187 + done: 2188 + mgmt_pending_free(cmd); 2185 2189 } 2186 2190 2187 2191 static int set_mesh_sync(struct hci_dev *hdev, void *data) 2188 2192 { 2189 2193 struct mgmt_pending_cmd *cmd = data; 2190 - struct mgmt_cp_set_mesh cp; 2194 + DEFINE_FLEX(struct mgmt_cp_set_mesh, cp, ad_types, num_ad_types, 2195 + sizeof(hdev->mesh_ad_types)); 2191 2196 size_t len; 2192 2197 2193 2198 mutex_lock(&hdev->mgmt_pending_lock); ··· 2202 2197 return -ECANCELED; 2203 2198 } 2204 2199 2205 - memcpy(&cp, cmd->param, sizeof(cp)); 2200 + len = cmd->param_len; 2201 + memcpy(cp, cmd->param, min(__struct_size(cp), len)); 2206 2202 2207 2203 mutex_unlock(&hdev->mgmt_pending_lock); 2208 2204 2209 - len = cmd->param_len; 2210 - 2211 2205 memset(hdev->mesh_ad_types, 0, sizeof(hdev->mesh_ad_types)); 2212 2206 2213 - if (cp.enable) 2207 + if (cp->enable) 2214 2208 hci_dev_set_flag(hdev, HCI_MESH); 2215 2209 else 2216 2210 hci_dev_clear_flag(hdev, HCI_MESH); 2217 2211 2218 - hdev->le_scan_interval = __le16_to_cpu(cp.period); 2219 - hdev->le_scan_window = __le16_to_cpu(cp.window); 2212 + hdev->le_scan_interval = __le16_to_cpu(cp->period); 2213 + hdev->le_scan_window = __le16_to_cpu(cp->window); 2220 2214 2221 - len -= sizeof(cp); 2215 + len -= sizeof(struct mgmt_cp_set_mesh); 2222 2216 2223 2217 /* If filters don't fit, forward all adv pkts */ 2224 2218 if (len <= sizeof(hdev->mesh_ad_types)) 2225 - memcpy(hdev->mesh_ad_types, cp.ad_types, len); 2219 + memcpy(hdev->mesh_ad_types, cp->ad_types, len); 2226 2220 2227 2221 hci_update_passive_scan_sync(hdev); 2228 2222 return 0;
+11 -15
net/bluetooth/rfcomm/tty.c
··· 643 643 tty_port_tty_hangup(&dev->port, true); 644 644 645 645 dev->modem_status = 646 - ((v24_sig & RFCOMM_V24_RTC) ? (TIOCM_DSR | TIOCM_DTR) : 0) | 647 - ((v24_sig & RFCOMM_V24_RTR) ? (TIOCM_RTS | TIOCM_CTS) : 0) | 646 + ((v24_sig & RFCOMM_V24_RTC) ? TIOCM_DSR : 0) | 647 + ((v24_sig & RFCOMM_V24_RTR) ? TIOCM_CTS : 0) | 648 648 ((v24_sig & RFCOMM_V24_IC) ? TIOCM_RI : 0) | 649 649 ((v24_sig & RFCOMM_V24_DV) ? TIOCM_CD : 0); 650 650 } ··· 1055 1055 static int rfcomm_tty_tiocmget(struct tty_struct *tty) 1056 1056 { 1057 1057 struct rfcomm_dev *dev = tty->driver_data; 1058 + struct rfcomm_dlc *dlc = dev->dlc; 1059 + u8 v24_sig; 1058 1060 1059 1061 BT_DBG("tty %p dev %p", tty, dev); 1060 1062 1061 - return dev->modem_status; 1063 + rfcomm_dlc_get_modem_status(dlc, &v24_sig); 1064 + 1065 + return (v24_sig & (TIOCM_DTR | TIOCM_RTS)) | dev->modem_status; 1062 1066 } 1063 1067 1064 1068 static int rfcomm_tty_tiocmset(struct tty_struct *tty, unsigned int set, unsigned int clear) ··· 1075 1071 1076 1072 rfcomm_dlc_get_modem_status(dlc, &v24_sig); 1077 1073 1078 - if (set & TIOCM_DSR || set & TIOCM_DTR) 1074 + if (set & TIOCM_DTR) 1079 1075 v24_sig |= RFCOMM_V24_RTC; 1080 - if (set & TIOCM_RTS || set & TIOCM_CTS) 1076 + if (set & TIOCM_RTS) 1081 1077 v24_sig |= RFCOMM_V24_RTR; 1082 - if (set & TIOCM_RI) 1083 - v24_sig |= RFCOMM_V24_IC; 1084 - if (set & TIOCM_CD) 1085 - v24_sig |= RFCOMM_V24_DV; 1086 1078 1087 - if (clear & TIOCM_DSR || clear & TIOCM_DTR) 1079 + if (clear & TIOCM_DTR) 1088 1080 v24_sig &= ~RFCOMM_V24_RTC; 1089 - if (clear & TIOCM_RTS || clear & TIOCM_CTS) 1081 + if (clear & TIOCM_RTS) 1090 1082 v24_sig &= ~RFCOMM_V24_RTR; 1091 - if (clear & TIOCM_RI) 1092 - v24_sig &= ~RFCOMM_V24_IC; 1093 - if (clear & TIOCM_CD) 1094 - v24_sig &= ~RFCOMM_V24_DV; 1095 1083 1096 1084 rfcomm_dlc_set_modem_status(dlc, v24_sig); 1097 1085
+24 -3
net/core/devmem.c
··· 17 17 #include <net/page_pool/helpers.h> 18 18 #include <net/page_pool/memory_provider.h> 19 19 #include <net/sock.h> 20 + #include <net/tcp.h> 20 21 #include <trace/events/page_pool.h> 21 22 22 23 #include "devmem.h" ··· 358 357 unsigned int dmabuf_id) 359 358 { 360 359 struct net_devmem_dmabuf_binding *binding; 361 - struct dst_entry *dst = __sk_dst_get(sk); 360 + struct net_device *dst_dev; 361 + struct dst_entry *dst; 362 362 int err = 0; 363 363 364 364 binding = net_devmem_lookup_dmabuf(dmabuf_id); ··· 368 366 goto out_err; 369 367 } 370 368 369 + rcu_read_lock(); 370 + dst = __sk_dst_get(sk); 371 + /* If dst is NULL (route expired), attempt to rebuild it. */ 372 + if (unlikely(!dst)) { 373 + if (inet_csk(sk)->icsk_af_ops->rebuild_header(sk)) { 374 + err = -EHOSTUNREACH; 375 + goto out_unlock; 376 + } 377 + dst = __sk_dst_get(sk); 378 + if (unlikely(!dst)) { 379 + err = -ENODEV; 380 + goto out_unlock; 381 + } 382 + } 383 + 371 384 /* The dma-addrs in this binding are only reachable to the corresponding 372 385 * net_device. 373 386 */ 374 - if (!dst || !dst->dev || dst->dev->ifindex != binding->dev->ifindex) { 387 + dst_dev = dst_dev_rcu(dst); 388 + if (unlikely(!dst_dev) || unlikely(dst_dev != binding->dev)) { 375 389 err = -ENODEV; 376 - goto out_err; 390 + goto out_unlock; 377 391 } 378 392 393 + rcu_read_unlock(); 379 394 return binding; 380 395 396 + out_unlock: 397 + rcu_read_unlock(); 381 398 out_err: 382 399 if (binding) 383 400 net_devmem_dmabuf_binding_put(binding);
+14 -7
net/ipv4/tcp_input.c
···
 	}
 }
 
-void tcp_rcvbuf_grow(struct sock *sk)
+void tcp_rcvbuf_grow(struct sock *sk, u32 newval)
 {
 	const struct net *net = sock_net(sk);
 	struct tcp_sock *tp = tcp_sk(sk);
-	int rcvwin, rcvbuf, cap;
+	u32 rcvwin, rcvbuf, cap, oldval;
+	u64 grow;
+
+	oldval = tp->rcvq_space.space;
+	tp->rcvq_space.space = newval;
 
 	if (!READ_ONCE(net->ipv4.sysctl_tcp_moderate_rcvbuf) ||
 	    (sk->sk_userlocks & SOCK_RCVBUF_LOCK))
 		return;
 
+	/* DRS is always one RTT late. */
+	rcvwin = newval << 1;
+
 	/* slow start: allow the sender to double its rate. */
-	rcvwin = tp->rcvq_space.space << 1;
+	grow = (u64)rcvwin * (newval - oldval);
+	do_div(grow, oldval);
+	rcvwin += grow << 1;
 
 	if (!RB_EMPTY_ROOT(&tp->out_of_order_queue))
 		rcvwin += TCP_SKB_CB(tp->ooo_last_skb)->end_seq - tp->rcv_nxt;
···
 
 	trace_tcp_rcvbuf_grow(sk, time);
 
-	tp->rcvq_space.space = copied;
-
-	tcp_rcvbuf_grow(sk);
+	tcp_rcvbuf_grow(sk, copied);
 
 new_measure:
 	tp->rcvq_space.seq = tp->copied_seq;
···
 	}
 	/* do not grow rcvbuf for not-yet-accepted or orphaned sockets. */
 	if (sk->sk_socket)
-		tcp_rcvbuf_grow(sk);
+		tcp_rcvbuf_grow(sk, tp->rcvq_space.space);
 }
 
 static int __must_check tcp_queue_rcv(struct sock *sk, struct sk_buff *skb,
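The tcp_rcvbuf_grow() change above replaces the fixed "double the window" heuristic with growth proportional to how much the per-round measurement actually increased. A minimal sketch of the new arithmetic (a hypothetical standalone Python function mirroring the C math, not the kernel code itself):

```python
def rcvwin_estimate(oldval: int, newval: int) -> int:
    """Sketch of the updated tcp_rcvbuf_grow() arithmetic.

    oldval: previous rcvq_space.space (bytes delivered in the prior round)
    newval: bytes delivered in the most recent round
    """
    # DRS is always one RTT late: start from twice the latest measurement.
    rcvwin = newval << 1
    # Slow start: allow the sender to double its rate, scaling the extra
    # allowance by the relative growth observed this round
    # (grow = rcvwin * (newval - oldval) / oldval, as in the C code).
    grow = (rcvwin * (newval - oldval)) // oldval
    rcvwin += grow << 1
    return rcvwin

# Steady state: no measured growth, so the window is just 2 * newval.
print(rcvwin_estimate(100_000, 100_000))  # 200000
# A doubled measurement earns proportionally more headroom.
print(rcvwin_estimate(100_000, 200_000))  # 1200000
```

The point of the fix is the middle step: the old code only ever doubled `rcvq_space.space`, which grew the buffer too slowly when the delivery rate jumped within one round.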
+3
net/mac80211/cfg.c
···
 	link_conf->nontransmitted = false;
 	link_conf->ema_ap = false;
 	link_conf->bssid_indicator = 0;
+	link_conf->fils_discovery.min_interval = 0;
+	link_conf->fils_discovery.max_interval = 0;
+	link_conf->unsol_bcast_probe_resp_interval = 0;
 
 	__sta_info_flush(sdata, true, link_id, NULL);
 
+8 -3
net/mac80211/key.c
···
 			ret = ieee80211_key_enable_hw_accel(new);
 		}
 	} else {
-		if (!new->local->wowlan)
+		if (!new->local->wowlan) {
 			ret = ieee80211_key_enable_hw_accel(new);
-		else if (link_id < 0 || !sdata->vif.active_links ||
-			 BIT(link_id) & sdata->vif.active_links)
+		} else if (link_id < 0 || !sdata->vif.active_links ||
+			   BIT(link_id) & sdata->vif.active_links) {
 			new->flags |= KEY_FLAG_UPLOADED_TO_HARDWARE;
+			if (!(new->conf.flags & (IEEE80211_KEY_FLAG_GENERATE_MMIC |
+						 IEEE80211_KEY_FLAG_PUT_MIC_SPACE |
+						 IEEE80211_KEY_FLAG_RESERVE_TAILROOM)))
+				decrease_tailroom_need_count(sdata, 1);
+		}
 	}
 
 	if (ret)
+1
net/mptcp/mib.c
···
 	SNMP_MIB_ITEM("DssFallback", MPTCP_MIB_DSSFALLBACK),
 	SNMP_MIB_ITEM("SimultConnectFallback", MPTCP_MIB_SIMULTCONNFALLBACK),
 	SNMP_MIB_ITEM("FallbackFailed", MPTCP_MIB_FALLBACKFAILED),
+	SNMP_MIB_ITEM("WinProbe", MPTCP_MIB_WINPROBE),
 };
 
 /* mptcp_mib_alloc - allocate percpu mib counters
+1
net/mptcp/mib.h
···
 	MPTCP_MIB_DSSFALLBACK,		/* Bad or missing DSS */
 	MPTCP_MIB_SIMULTCONNFALLBACK,	/* Simultaneous connect */
 	MPTCP_MIB_FALLBACKFAILED,	/* Can't fallback due to msk status */
+	MPTCP_MIB_WINPROBE,		/* MPTCP-level zero window probe */
 	__MPTCP_MIB_MAX
 };
 
+53 -30
net/mptcp/protocol.c
···
  * - mptcp does not maintain a msk-level window clamp
  * - returns true when the receive buffer is actually updated
  */
-static bool mptcp_rcvbuf_grow(struct sock *sk)
+static bool mptcp_rcvbuf_grow(struct sock *sk, u32 newval)
 {
 	struct mptcp_sock *msk = mptcp_sk(sk);
 	const struct net *net = sock_net(sk);
-	int rcvwin, rcvbuf, cap;
+	u32 rcvwin, rcvbuf, cap, oldval;
+	u64 grow;
 
+	oldval = msk->rcvq_space.space;
+	msk->rcvq_space.space = newval;
 	if (!READ_ONCE(net->ipv4.sysctl_tcp_moderate_rcvbuf) ||
 	    (sk->sk_userlocks & SOCK_RCVBUF_LOCK))
 		return false;
 
-	rcvwin = msk->rcvq_space.space << 1;
+	/* DRS is always one RTT late. */
+	rcvwin = newval << 1;
+
+	/* slow start: allow the sender to double its rate. */
+	grow = (u64)rcvwin * (newval - oldval);
+	do_div(grow, oldval);
+	rcvwin += grow << 1;
 
 	if (!RB_EMPTY_ROOT(&msk->out_of_order_queue))
 		rcvwin += MPTCP_SKB_CB(msk->ooo_last_skb)->end_seq - msk->ack_seq;
···
 	skb_set_owner_r(skb, sk);
 	/* do not grow rcvbuf for not-yet-accepted or orphaned sockets. */
 	if (sk->sk_socket)
-		mptcp_rcvbuf_grow(sk);
+		mptcp_rcvbuf_grow(sk, msk->rcvq_space.space);
 }
 
 static void mptcp_init_skb(struct sock *ssk, struct sk_buff *skb, int offset,
···
 		if (WARN_ON_ONCE(!msk->recovery))
 			break;
 
-		WRITE_ONCE(msk->first_pending, mptcp_send_next(sk));
+		msk->first_pending = mptcp_send_next(sk);
 	}
 
 	dfrag_clear(sk, dfrag);
···
 	if (copy == 0) {
 		u64 snd_una = READ_ONCE(msk->snd_una);
 
-		if (snd_una != msk->snd_nxt || tcp_write_queue_tail(ssk)) {
+		/* No need for zero probe if there are any data pending
+		 * either at the msk or ssk level; skb is the current write
+		 * queue tail and can be empty at this point.
+		 */
+		if (snd_una != msk->snd_nxt || skb->len ||
+		    skb != tcp_send_head(ssk)) {
 			tcp_remove_empty_skb(ssk);
 			return 0;
 		}
···
 		 mpext->dsn64);
 
 	if (zero_window_probe) {
+		MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_WINPROBE);
 		mptcp_subflow_ctx(ssk)->rel_write_seq += copy;
 		mpext->frozen = 1;
 		if (READ_ONCE(msk->csum_enabled))
···
 
 		mptcp_update_post_push(msk, dfrag, ret);
 	}
-	WRITE_ONCE(msk->first_pending, mptcp_send_next(sk));
+	msk->first_pending = mptcp_send_next(sk);
 
 	if (msk->snd_burst <= 0 ||
 	    !sk_stream_memory_free(ssk) ||
···
 		get_page(dfrag->page);
 		list_add_tail(&dfrag->list, &msk->rtx_queue);
 		if (!msk->first_pending)
-			WRITE_ONCE(msk->first_pending, dfrag);
+			msk->first_pending = dfrag;
 	}
 	pr_debug("msk=%p dfrag at seq=%llu len=%u sent=%u new=%d\n", msk,
 		 dfrag->data_seq, dfrag->data_len, dfrag->already_sent,
···
 
 static void mptcp_rcv_space_adjust(struct mptcp_sock *msk, int copied);
 
-static int __mptcp_recvmsg_mskq(struct sock *sk,
-				struct msghdr *msg,
-				size_t len, int flags,
+static int __mptcp_recvmsg_mskq(struct sock *sk, struct msghdr *msg,
+				size_t len, int flags, int copied_total,
 				struct scm_timestamping_internal *tss,
 				int *cmsg_flags)
 {
 	struct mptcp_sock *msk = mptcp_sk(sk);
 	struct sk_buff *skb, *tmp;
+	int total_data_len = 0;
 	int copied = 0;
 
 	skb_queue_walk_safe(&sk->sk_receive_queue, skb, tmp) {
-		u32 offset = MPTCP_SKB_CB(skb)->offset;
+		u32 delta, offset = MPTCP_SKB_CB(skb)->offset;
 		u32 data_len = skb->len - offset;
-		u32 count = min_t(size_t, len - copied, data_len);
+		u32 count;
 		int err;
 
+		if (flags & MSG_PEEK) {
+			/* skip already peeked skbs */
+			if (total_data_len + data_len <= copied_total) {
+				total_data_len += data_len;
+				continue;
+			}
+
+			/* skip the already peeked data in the current skb */
+			delta = copied_total - total_data_len;
+			offset += delta;
+			data_len -= delta;
+		}
+
+		count = min_t(size_t, len - copied, data_len);
 		if (!(flags & MSG_TRUNC)) {
 			err = skb_copy_datagram_msg(skb, offset, msg, count);
 			if (unlikely(err < 0)) {
···
 
 		copied += count;
 
-		if (count < data_len) {
-			if (!(flags & MSG_PEEK)) {
+		if (!(flags & MSG_PEEK)) {
+			msk->bytes_consumed += count;
+			if (count < data_len) {
 				MPTCP_SKB_CB(skb)->offset += count;
 				MPTCP_SKB_CB(skb)->map_seq += count;
-				msk->bytes_consumed += count;
+				break;
 			}
-			break;
-		}
 
-		if (!(flags & MSG_PEEK)) {
 			/* avoid the indirect call, we know the destructor is sock_rfree */
 			skb->destructor = NULL;
 			skb->sk = NULL;
···
 			sk_mem_uncharge(sk, skb->truesize);
 			__skb_unlink(skb, &sk->sk_receive_queue);
 			skb_attempt_defer_free(skb);
-			msk->bytes_consumed += count;
 		}
 
 		if (copied >= len)
···
 	if (msk->rcvq_space.copied <= msk->rcvq_space.space)
 		goto new_measure;
 
-	msk->rcvq_space.space = msk->rcvq_space.copied;
-	if (mptcp_rcvbuf_grow(sk)) {
-
+	if (mptcp_rcvbuf_grow(sk, msk->rcvq_space.copied)) {
 		/* Make subflows follow along.  If we do not do this, we
 		 * get drops at subflow level if skbs can't be moved to
 		 * the mptcp rx queue fast enough (announced rcv_win can
···
 
 			ssk = mptcp_subflow_tcp_sock(subflow);
 			slow = lock_sock_fast(ssk);
-			tcp_sk(ssk)->rcvq_space.space = msk->rcvq_space.copied;
-			tcp_rcvbuf_grow(ssk);
+			/* subflows can be added before tcp_init_transfer() */
+			if (tcp_sk(ssk)->rcvq_space.space)
+				tcp_rcvbuf_grow(ssk, msk->rcvq_space.copied);
 			unlock_sock_fast(ssk, slow);
 		}
 	}
···
 	while (copied < len) {
 		int err, bytes_read;
 
-		bytes_read = __mptcp_recvmsg_mskq(sk, msg, len - copied, flags, &tss, &cmsg_flags);
+		bytes_read = __mptcp_recvmsg_mskq(sk, msg, len - copied, flags,
+						  copied, &tss, &cmsg_flags);
 		if (unlikely(bytes_read < 0)) {
 			if (!copied)
 				copied = bytes_read;
···
 	struct mptcp_sock *msk = mptcp_sk(sk);
 	struct mptcp_data_frag *dtmp, *dfrag;
 
-	WRITE_ONCE(msk->first_pending, NULL);
+	msk->first_pending = NULL;
 	list_for_each_entry_safe(dfrag, dtmp, &msk->rtx_queue, list)
 		dfrag_clear(sk, dfrag);
 }
···
 
 void __mptcp_check_push(struct sock *sk, struct sock *ssk)
 {
-	if (!mptcp_send_head(sk))
-		return;
-
 	if (!sock_owned_by_user(sk))
 		__mptcp_subflow_push_pending(sk, ssk, false);
 	else
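The MSG_PEEK part of the protocol.c change above makes repeated peeks skip data that an earlier peek already returned. A simplified Python model of that resume computation (hypothetical standalone form; the kernel performs the same accounting while walking real sk_buffs in __mptcp_recvmsg_mskq()):

```python
def peek_resume(skb_lens, copied_total):
    """Locate where a MSG_PEEK walk resumes after `copied_total` bytes
    were already handed out by earlier peeks: returns (index of the first
    skb holding unpeeked data, offset of that data within it).
    """
    total_data_len = 0
    for i, data_len in enumerate(skb_lens):
        # skip skbs fully covered by a previous peek
        if total_data_len + data_len <= copied_total:
            total_data_len += data_len
            continue
        # skip only the already-peeked prefix of the current skb
        return i, copied_total - total_data_len
    # everything currently queued has been peeked already
    return len(skb_lens), 0

# Three queued skbs of 100, 100 and 50 bytes; 120 bytes already peeked:
print(peek_resume([100, 100, 50], 120))  # (1, 20)
```

Without this skip, a second peek would restart from the queue head and return the same bytes again, which is the stream corruption the fix addresses.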
+1 -1
net/mptcp/protocol.h
···
 {
 	const struct mptcp_sock *msk = mptcp_sk(sk);
 
-	return READ_ONCE(msk->first_pending);
+	return msk->first_pending;
 }
 
 static inline struct mptcp_data_frag *mptcp_send_next(struct sock *sk)
+1 -1
net/netfilter/nft_connlimit.c
···
 		return;
 	}
 
-	count = priv->list->count;
+	count = READ_ONCE(priv->list->count);
 
 	if ((count > priv->limit) ^ priv->invert) {
 		regs->verdict.code = NFT_BREAK;
+27 -3
net/netfilter/nft_ct.c
···
 #include <net/netfilter/nf_conntrack_timeout.h>
 #include <net/netfilter/nf_conntrack_l4proto.h>
 #include <net/netfilter/nf_conntrack_expect.h>
+#include <net/netfilter/nf_conntrack_seqadj.h>
 
 struct nft_ct_helper_obj {
 	struct nf_conntrack_helper *helper4;
···
 }
 #endif
 
+static void __nft_ct_get_destroy(const struct nft_ctx *ctx, struct nft_ct *priv)
+{
+#ifdef CONFIG_NF_CONNTRACK_LABELS
+	if (priv->key == NFT_CT_LABELS)
+		nf_connlabels_put(ctx->net);
+#endif
+}
+
 static int nft_ct_get_init(const struct nft_ctx *ctx,
 			   const struct nft_expr *expr,
 			   const struct nlattr * const tb[])
···
 		if (tb[NFTA_CT_DIRECTION] != NULL)
 			return -EINVAL;
 		len = NF_CT_LABELS_MAX_SIZE;
+
+		err = nf_connlabels_get(ctx->net, (len * BITS_PER_BYTE) - 1);
+		if (err)
+			return err;
 		break;
 #endif
 	case NFT_CT_HELPER:
···
 		case IP_CT_DIR_REPLY:
 			break;
 		default:
-			return -EINVAL;
+			err = -EINVAL;
+			goto err;
 		}
 	}
 
···
 	err = nft_parse_register_store(ctx, tb[NFTA_CT_DREG], &priv->dreg, NULL,
 				       NFT_DATA_VALUE, len);
 	if (err < 0)
-		return err;
+		goto err;
 
 	err = nf_ct_netns_get(ctx->net, ctx->family);
 	if (err < 0)
-		return err;
+		goto err;
 
 	if (priv->key == NFT_CT_BYTES ||
 	    priv->key == NFT_CT_PKTS ||
···
 		nf_ct_set_acct(ctx->net, true);
 
 	return 0;
+err:
+	__nft_ct_get_destroy(ctx, priv);
+	return err;
 }
 
 static void __nft_ct_set_destroy(const struct nft_ctx *ctx, struct nft_ct *priv)
···
 static void nft_ct_get_destroy(const struct nft_ctx *ctx,
 			       const struct nft_expr *expr)
 {
+	struct nft_ct *priv = nft_expr_priv(expr);
+
+	__nft_ct_get_destroy(ctx, priv);
 	nf_ct_netns_put(ctx->net, ctx->family);
 }
 
···
 	if (help) {
 		rcu_assign_pointer(help->helper, to_assign);
 		set_bit(IPS_HELPER_BIT, &ct->status);
+
+		if ((ct->status & IPS_NAT_MASK) && !nfct_seqadj(ct))
+			if (!nfct_seqadj_ext_add(ct))
+				regs->verdict.code = NF_DROP;
 	}
 }
 
+1 -1
net/sctp/input.c
···
 		goto discard_release;
 	nf_reset_ct(skb);
 
-	if (sk_filter(sk, skb))
+	if (sk_filter(sk, skb) || skb->len < sizeof(struct sctp_chunkhdr))
 		goto discard_release;
 
 	/* Create an SCTP packet structure. */
+3 -1
net/tls/tls_device.c
···
 	/* shouldn't get to wraparound:
 	 * too long in async stage, something bad happened
 	 */
-	if (WARN_ON_ONCE(resync_async->rcd_delta == USHRT_MAX))
+	if (WARN_ON_ONCE(resync_async->rcd_delta == USHRT_MAX)) {
+		tls_offload_rx_resync_async_request_cancel(resync_async);
 		return false;
+	}
 
 	/* asynchronous stage: log all headers seq such that
 	 * req_seq <= seq <= end_seq, and wait for real resync request
+1 -2
net/wireless/nl80211.c
···
 		rdev->wiphy.txq_quantum = old_txq_quantum;
 	}
 
-	if (old_rts_threshold)
-		kfree(old_radio_rts_threshold);
+	kfree(old_radio_rts_threshold);
 	return result;
 }
 
+2 -2
tools/net/ynl/lib/ynl-priv.h
···
 	struct nlattr *attr;
 	size_t len;
 
-	len = strlen(str);
+	len = strlen(str) + 1;
 	if (__ynl_attr_put_overflow(nlh, len))
 		return;
 
···
 	attr->nla_type = attr_type;
 
 	strcpy((char *)ynl_attr_data(attr), str);
-	attr->nla_len = NLA_HDRLEN + NLA_ALIGN(len);
+	attr->nla_len = NLA_HDRLEN + len;
 
 	nlh->nlmsg_len += NLMSG_ALIGN(attr->nla_len);
 }
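The ynl fix above has two parts: the NUL terminator is now counted in the string attribute's payload, and `nla_len` records the unpadded size, with alignment padding applied only when advancing `nlmsg_len`. A small Python sketch of the corrected length math (hypothetical helper names, mirroring the netlink attribute layout):

```python
NLA_HDRLEN = 4  # size of struct nlattr, already 4-byte aligned

def nla_align(n: int) -> int:
    """Round up to the 4-byte netlink attribute alignment."""
    return (n + 3) & ~3

def strz_attr_sizes(s: str) -> tuple[int, int]:
    """Return (nla_len, space consumed in the message) for a
    NUL-terminated string attribute: nla_len covers header + string + NUL,
    while alignment padding is only counted toward the message length.
    """
    payload = len(s.encode()) + 1       # include the trailing NUL
    nla_len = NLA_HDRLEN + payload      # nla_len itself is not pre-aligned
    return nla_len, nla_align(nla_len)  # padded size added to nlmsg_len

print(strz_attr_sizes("eth0"))  # (9, 12)
```

Pre-aligning `nla_len`, as the old code did, overstates the payload length to the receiver; dropping the NUL leaves C consumers with an unterminated string.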
+3
tools/net/ynl/pyynl/ethtool.py
···
     Pretty-print a set of fields from the reply. desc specifies the
     fields and the optional type (bool/yn).
     """
+    if not reply:
+        return
+
     if len(desc) == 0:
         return print_field(reply, *zip(reply.keys(), reply.keys()))
 
+1 -1
tools/testing/selftests/net/bareudp.sh
···
-#!/bin/sh
+#!/bin/bash
 # SPDX-License-Identifier: GPL-2.0
 
 # Test various bareudp tunnel configurations.