Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'net-6.12-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Paolo Abeni:
"Including fixes from ieee802154, bluetooth and netfilter.

Current release - regressions:

- eth: mlx5: fix wrong reserved field in hca_cap_2 in mlx5_ifc

- eth: am65-cpsw: fix forever loop in cleanup code

Current release - new code bugs:

- eth: mlx5: HWS, fixed double-free in error flow of creating SQ

Previous releases - regressions:

- core: avoid potential underflow in qdisc_pkt_len_init() with UFO

- core: test for not too small csum_start in virtio_net_hdr_to_skb()

- vrf: revert "vrf: remove unnecessary RCU-bh critical section"

- bluetooth:
- fix uaf in l2cap_connect
- fix possible crash on mgmt_index_removed

- dsa: improve shutdown sequence

- eth: mlx5e: SHAMPO, fix overflow of hd_per_wq

- eth: ip_gre: fix drops of small packets in ipgre_xmit

Previous releases - always broken:

- core: fix gso_features_check to check for both
dev->gso_{ipv4_,}max_size

- core: fix tcp fraglist segmentation after pull from frag_list

- netfilter: nf_tables: prevent nf_skb_duplicated corruption

- sctp: set sk_state back to CLOSED if autobind fails in
sctp_listen_start

- mac802154: fix potential RCU dereference issue in
mac802154_scan_worker

- eth: fec: restart PPS after link state change"

* tag 'net-6.12-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (48 commits)
sctp: set sk_state back to CLOSED if autobind fails in sctp_listen_start
dt-bindings: net: xlnx,axi-ethernet: Add missing reg minItems
doc: net: napi: Update documentation for napi_schedule_irqoff
net/ncsi: Disable the ncsi work before freeing the associated structure
net: phy: qt2025: Fix warning: unused import DeviceId
gso: fix udp gso fraglist segmentation after pull from frag_list
bridge: mcast: Fail MDB get request on empty entry
vrf: revert "vrf: Remove unnecessary RCU-bh critical section"
net: ethernet: ti: am65-cpsw: Fix forever loop in cleanup code
net: phy: realtek: Check the index value in led_hw_control_get
ppp: do not assume bh is held in ppp_channel_bridge_input()
selftests: rds: move include.sh to TEST_FILES
net: test for not too small csum_start in virtio_net_hdr_to_skb()
net: gso: fix tcp fraglist segmentation after pull from frag_list
ipv4: ip_gre: Fix drops of small packets in ipgre_xmit
net: stmmac: dwmac4: extend timeout for VLAN Tag register busy bit check
net: add more sanity checks to qdisc_pkt_len_init()
net: avoid potential underflow in qdisc_pkt_len_init() with UFO
net: ethernet: ti: cpsw_ale: Fix warning on some platforms
net: microchip: Make FDMA config symbol invisible
...

+310 -127
+2 -1
Documentation/devicetree/bindings/net/xlnx,axi-ethernet.yaml
··· 34 34 and length of the AXI DMA controller IO space, unless 35 35 axistream-connected is specified, in which case the reg 36 36 attribute of the node referenced by it is used. 37 + minItems: 1 37 38 maxItems: 2 38 39 39 40 interrupts: ··· 182 181 clock-names = "s_axi_lite_clk", "axis_clk", "ref_clk", "mgt_clk"; 183 182 clocks = <&axi_clk>, <&axi_clk>, <&pl_enet_ref_clk>, <&mgt_clk>; 184 183 phy-mode = "mii"; 185 - reg = <0x00 0x40000000 0x00 0x40000>; 184 + reg = <0x40000000 0x40000>; 186 185 xlnx,rxcsum = <0x2>; 187 186 xlnx,rxmem = <0x800>; 188 187 xlnx,txcsum = <0x2>;
+2 -3
Documentation/networking/napi.rst
··· 144 144 145 145 napi_schedule_irqoff() is a variant of napi_schedule() which takes advantage 146 146 of guarantees given by being invoked in IRQ context (no need to 147 - mask interrupts). Note that PREEMPT_RT forces all interrupts 148 - to be threaded so the interrupt may need to be marked ``IRQF_NO_THREAD`` 149 - to avoid issues on real-time kernel configurations. 147 + mask interrupts). napi_schedule_irqoff() will fall back to napi_schedule() if 148 + IRQs are threaded (such as if ``PREEMPT_RT`` is enabled). 150 149 151 150 Instance to queue mapping 152 151 -------------------------
+1 -2
drivers/bluetooth/btmrvl_sdio.c
··· 92 92 } else { 93 93 ret = devm_request_irq(dev, cfg->irq_bt, 94 94 btmrvl_wake_irq_bt, 95 - 0, "bt_wake", card); 95 + IRQF_NO_AUTOEN, "bt_wake", card); 96 96 if (ret) { 97 97 dev_err(dev, 98 98 "Failed to request irq_bt %d (%d)\n", ··· 101 101 102 102 /* Configure wakeup (enabled by default) */ 103 103 device_init_wakeup(dev, true); 104 - disable_irq(cfg->irq_bt); 105 104 } 106 105 } 107 106
+9
drivers/net/ethernet/freescale/fec.h
··· 691 691 /* XDP BPF Program */ 692 692 struct bpf_prog *xdp_prog; 693 693 694 + struct { 695 + int pps_enable; 696 + u64 ns_sys, ns_phc; 697 + u32 at_corr; 698 + u8 at_inc_corr; 699 + } ptp_saved_state; 700 + 694 701 u64 ethtool_stats[]; 695 702 }; 696 703 697 704 void fec_ptp_init(struct platform_device *pdev, int irq_idx); 705 + void fec_ptp_restore_state(struct fec_enet_private *fep); 706 + void fec_ptp_save_state(struct fec_enet_private *fep); 698 707 void fec_ptp_stop(struct platform_device *pdev); 699 708 void fec_ptp_start_cyclecounter(struct net_device *ndev); 700 709 int fec_ptp_set(struct net_device *ndev, struct kernel_hwtstamp_config *config,
+10 -1
drivers/net/ethernet/freescale/fec_main.c
··· 1077 1077 u32 rcntl = OPT_FRAME_SIZE | 0x04; 1078 1078 u32 ecntl = FEC_ECR_ETHEREN; 1079 1079 1080 + fec_ptp_save_state(fep); 1081 + 1080 1082 /* Whack a reset. We should wait for this. 1081 1083 * For i.MX6SX SOC, enet use AXI bus, we use disable MAC 1082 1084 * instead of reset MAC itself. ··· 1246 1244 writel(ecntl, fep->hwp + FEC_ECNTRL); 1247 1245 fec_enet_active_rxring(ndev); 1248 1246 1249 - if (fep->bufdesc_ex) 1247 + if (fep->bufdesc_ex) { 1250 1248 fec_ptp_start_cyclecounter(ndev); 1249 + fec_ptp_restore_state(fep); 1250 + } 1251 1251 1252 1252 /* Enable interrupts we wish to service */ 1253 1253 if (fep->link) ··· 1340 1336 netdev_err(ndev, "Graceful transmit stop did not complete!\n"); 1341 1337 } 1342 1338 1339 + fec_ptp_save_state(fep); 1340 + 1343 1341 /* Whack a reset. We should wait for this. 1344 1342 * For i.MX6SX SOC, enet use AXI bus, we use disable MAC 1345 1343 * instead of reset MAC itself. ··· 1372 1366 val = readl(fep->hwp + FEC_ECNTRL); 1373 1367 val |= FEC_ECR_EN1588; 1374 1368 writel(val, fep->hwp + FEC_ECNTRL); 1369 + 1370 + fec_ptp_start_cyclecounter(ndev); 1371 + fec_ptp_restore_state(fep); 1375 1372 } 1376 1373 } 1377 1374
+50
drivers/net/ethernet/freescale/fec_ptp.c
··· 764 764 schedule_delayed_work(&fep->time_keep, HZ); 765 765 } 766 766 767 + void fec_ptp_save_state(struct fec_enet_private *fep) 768 + { 769 + unsigned long flags; 770 + u32 atime_inc_corr; 771 + 772 + spin_lock_irqsave(&fep->tmreg_lock, flags); 773 + 774 + fep->ptp_saved_state.pps_enable = fep->pps_enable; 775 + 776 + fep->ptp_saved_state.ns_phc = timecounter_read(&fep->tc); 777 + fep->ptp_saved_state.ns_sys = ktime_get_ns(); 778 + 779 + fep->ptp_saved_state.at_corr = readl(fep->hwp + FEC_ATIME_CORR); 780 + atime_inc_corr = readl(fep->hwp + FEC_ATIME_INC) & FEC_T_INC_CORR_MASK; 781 + fep->ptp_saved_state.at_inc_corr = (u8)(atime_inc_corr >> FEC_T_INC_CORR_OFFSET); 782 + 783 + spin_unlock_irqrestore(&fep->tmreg_lock, flags); 784 + } 785 + 786 + /* Restore PTP functionality after a reset */ 787 + void fec_ptp_restore_state(struct fec_enet_private *fep) 788 + { 789 + u32 atime_inc = readl(fep->hwp + FEC_ATIME_INC) & FEC_T_INC_MASK; 790 + unsigned long flags; 791 + u32 counter; 792 + u64 ns; 793 + 794 + spin_lock_irqsave(&fep->tmreg_lock, flags); 795 + 796 + /* Reset turned it off, so adjust our status flag */ 797 + fep->pps_enable = 0; 798 + 799 + writel(fep->ptp_saved_state.at_corr, fep->hwp + FEC_ATIME_CORR); 800 + atime_inc |= ((u32)fep->ptp_saved_state.at_inc_corr) << FEC_T_INC_CORR_OFFSET; 801 + writel(atime_inc, fep->hwp + FEC_ATIME_INC); 802 + 803 + ns = ktime_get_ns() - fep->ptp_saved_state.ns_sys + fep->ptp_saved_state.ns_phc; 804 + counter = ns & fep->cc.mask; 805 + writel(counter, fep->hwp + FEC_ATIME); 806 + timecounter_init(&fep->tc, &fep->cc, ns); 807 + 808 + spin_unlock_irqrestore(&fep->tmreg_lock, flags); 809 + 810 + /* Restart PPS if needed */ 811 + if (fep->ptp_saved_state.pps_enable) { 812 + /* Re-enable PPS */ 813 + fec_ptp_enable_pps(fep, 1); 814 + } 815 + } 816 + 767 817 void fec_ptp_stop(struct platform_device *pdev) 768 818 { 769 819 struct net_device *ndev = platform_get_drvdata(pdev);
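The restore path above reconstructs the PHC time as `ktime_get_ns() - ns_sys + ns_phc`, i.e. it advances the saved PHC timestamp by however much system time elapsed while the MAC was held in reset. A standalone userspace sketch of that arithmetic (function name is illustrative, not the driver's):

```c
#include <assert.h>
#include <stdint.h>

/* Reconstruct a PHC timestamp after a reset, the way
 * fec_ptp_restore_state() does: carry the saved (PHC, system-clock)
 * pair forward by the system time that elapsed during the reset. */
static uint64_t restore_phc_ns(uint64_t sys_now, uint64_t saved_sys,
			       uint64_t saved_phc)
{
	return sys_now - saved_sys + saved_phc;
}
```

The saved system timestamp acts as the reference point, so the PHC clock keeps ticking "through" the reset instead of jumping back to its pre-reset value.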
+3 -1
drivers/net/ethernet/lantiq_etop.c
··· 481 481 unsigned long flags; 482 482 u32 byte_offset; 483 483 484 - len = skb->len < ETH_ZLEN ? ETH_ZLEN : skb->len; 484 + if (skb_put_padto(skb, ETH_ZLEN)) 485 + return NETDEV_TX_OK; 486 + len = skb->len; 485 487 486 488 if ((desc->ctl & (LTQ_DMA_OWN | LTQ_DMA_C)) || ch->skb[ch->dma.desc]) { 487 489 netdev_err(dev, "tx ring full\n");
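`skb_put_padto()` both extends the skb to the minimum Ethernet length and zeroes the padding, where the replaced code only inflated the DMA length and could transmit stale memory after the payload. A userspace sketch of the pad-to-minimum idea, over a plain buffer (helper name is illustrative):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define ETH_ZLEN 60	/* minimum Ethernet frame length, without FCS */

/* Pad a frame in `buf` (capacity `cap`, current length `*len`) with
 * zeroes up to `min_len`, mirroring what skb_put_padto() does for an
 * sk_buff. Returns 0 on success, -1 if the buffer is too small. */
static int pad_to(unsigned char *buf, size_t cap, size_t *len,
		  size_t min_len)
{
	if (*len >= min_len)
		return 0;
	if (cap < min_len)
		return -1;
	memset(buf + *len, 0, min_len - *len);
	*len = min_len;
	return 0;
}
```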
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/en.h
··· 627 627 struct mlx5e_dma_info *info; 628 628 struct mlx5e_frag_page *pages; 629 629 u16 curr_page_index; 630 - u16 hd_per_wq; 630 + u32 hd_per_wq; 631 631 u16 hd_per_wqe; 632 632 unsigned long *bitmap; 633 633 u16 pi;
+3
drivers/net/ethernet/mellanox/mlx5/core/en/tir.c
··· 23 23 struct mlx5e_tir_builder *builder; 24 24 25 25 builder = kvzalloc(sizeof(*builder), GFP_KERNEL); 26 + if (!builder) 27 + return NULL; 28 + 26 29 builder->modify = modify; 27 30 28 31 return builder;
+7 -1
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
··· 67 67 return; 68 68 69 69 spin_lock_bh(&x->lock); 70 - xfrm_state_check_expire(x); 71 70 if (x->km.state == XFRM_STATE_EXPIRED) { 72 71 sa_entry->attrs.drop = true; 73 72 spin_unlock_bh(&x->lock); ··· 74 75 mlx5e_accel_ipsec_fs_modify(sa_entry); 75 76 return; 76 77 } 78 + 79 + if (x->km.state != XFRM_STATE_VALID) { 80 + spin_unlock_bh(&x->lock); 81 + return; 82 + } 83 + 84 + xfrm_state_check_expire(x); 77 85 spin_unlock_bh(&x->lock); 78 86 79 87 queue_delayed_work(sa_entry->ipsec->wq, &dwork->dwork,
-1
drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
··· 642 642 return; 643 643 644 644 err_unmap: 645 - mlx5e_dma_unmap_wqe_err(sq, 1); 646 645 sq->stats->dropped++; 647 646 dev_kfree_skb_any(skb); 648 647 mlx5e_tx_flush(sq);
+10
drivers/net/ethernet/mellanox/mlx5/core/lib/pci_vsc.c
··· 24 24 pci_write_config_dword((dev)->pdev, (dev)->vsc_addr + (offset), (val)) 25 25 #define VSC_MAX_RETRIES 2048 26 26 27 + /* Reading VSC registers can take relatively long time. 28 + * Yield the cpu every 128 registers read. 29 + */ 30 + #define VSC_GW_READ_BLOCK_COUNT 128 31 + 27 32 enum { 28 33 VSC_CTRL_OFFSET = 0x4, 29 34 VSC_COUNTER_OFFSET = 0x8, ··· 278 273 { 279 274 unsigned int next_read_addr = 0; 280 275 unsigned int read_addr = 0; 276 + unsigned int count = 0; 281 277 282 278 while (read_addr < length) { 283 279 if (mlx5_vsc_gw_read_fast(dev, read_addr, &next_read_addr, ··· 286 280 return read_addr; 287 281 288 282 read_addr = next_read_addr; 283 + if (++count == VSC_GW_READ_BLOCK_COUNT) { 284 + cond_resched(); 285 + count = 0; 286 + } 289 287 } 290 288 return length; 291 289 }
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws_bwc_complex.c
··· 33 33 * and let the usual match creation path handle it, 34 34 * both for good and bad flows. 35 35 */ 36 - if (ret == E2BIG) { 36 + if (ret == -E2BIG) { 37 37 is_complex = true; 38 38 mlx5hws_dbg(ctx, "Matcher definer layout: need complex matcher\n"); 39 39 } else {
+2 -2
drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws_definer.c
··· 1845 1845 return 0; 1846 1846 } 1847 1847 1848 - return E2BIG; 1848 + return -E2BIG; 1849 1849 } 1850 1850 1851 1851 static void ··· 1931 1931 /* Find the match definer layout for header layout match union */ 1932 1932 ret = hws_definer_find_best_match_fit(ctx, match_definer, match_hl); 1933 1933 if (ret) { 1934 - if (ret == E2BIG) 1934 + if (ret == -E2BIG) 1935 1935 mlx5hws_dbg(ctx, 1936 1936 "Failed to create match definer from header layout - E2BIG\n"); 1937 1937 else
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws_matcher.c
··· 675 675 if (!(matcher->flags & MLX5HWS_MATCHER_FLAGS_COLLISION)) { 676 676 ret = mlx5hws_definer_mt_init(ctx, matcher->mt); 677 677 if (ret) { 678 - if (ret == E2BIG) 678 + if (ret == -E2BIG) 679 679 mlx5hws_err(ctx, "Failed to set matcher templates with match definers\n"); 680 680 return ret; 681 681 }
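The three HWS hunks above fix a sign confusion: the helper returned the positive constant `E2BIG` while kernel convention (and the callers' checks) expects the negative errno `-E2BIG`, so `ret == E2BIG` and `ret == -E2BIG` could never both be right. A minimal userspace illustration of the convention (the probe function is made up for the example):

```c
#include <assert.h>
#include <errno.h>

/* A probe that fails "too big to fit", kernel-style: it reports the
 * error as a negative errno, never as the positive constant. */
static int find_best_fit(int needed, int available)
{
	if (needed > available)
		return -E2BIG;
	return 0;
}
```

Callers comparing against the positive `E2BIG` silently miss the error path, which is exactly what the matcher code above was doing before the fix.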
+7 -1
drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws_send.c
··· 653 653 return err; 654 654 } 655 655 656 + static void hws_send_ring_destroy_sq(struct mlx5_core_dev *mdev, 657 + struct mlx5hws_send_ring_sq *sq) 658 + { 659 + mlx5_core_destroy_sq(mdev, sq->sqn); 660 + } 661 + 656 662 static int hws_send_ring_set_sq_rdy(struct mlx5_core_dev *mdev, u32 sqn) 657 663 { 658 664 void *in, *sqc; ··· 702 696 703 697 err = hws_send_ring_set_sq_rdy(mdev, sq->sqn); 704 698 if (err) 705 - hws_send_ring_close_sq(sq); 699 + hws_send_ring_destroy_sq(mdev, sq); 706 700 707 701 return err; 708 702 }
+1 -1
drivers/net/ethernet/microchip/fdma/Kconfig
··· 6 6 if NET_VENDOR_MICROCHIP 7 7 8 8 config FDMA 9 - bool "FDMA API" 9 + bool "FDMA API" if COMPILE_TEST 10 10 help 11 11 Provides the basic FDMA functionality for multiple Microchip 12 12 switchcores.
+5 -1
drivers/net/ethernet/microchip/sparx5/sparx5_packet.c
··· 45 45 fwd = (fwd >> 5); 46 46 info->src_port = FIELD_GET(GENMASK(7, 1), fwd); 47 47 48 + /* 49 + * Bit 270-271 are occasionally unexpectedly set by the hardware, 50 + * clear bits before extracting timestamp 51 + */ 48 52 info->timestamp = 49 - ((u64)xtr_hdr[2] << 24) | 53 + ((u64)(xtr_hdr[2] & GENMASK(5, 0)) << 24) | 50 54 ((u64)xtr_hdr[3] << 16) | 51 55 ((u64)xtr_hdr[4] << 8) | 52 56 ((u64)xtr_hdr[5] << 0);
+9 -9
drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
··· 14 14 #include <linux/slab.h> 15 15 #include <linux/ethtool.h> 16 16 #include <linux/io.h> 17 + #include <linux/iopoll.h> 17 18 #include "stmmac.h" 18 19 #include "stmmac_pcs.h" 19 20 #include "dwmac4.h" ··· 472 471 u8 index, u32 data) 473 472 { 474 473 void __iomem *ioaddr = (void __iomem *)dev->base_addr; 475 - int i, timeout = 10; 474 + int ret; 476 475 u32 val; 477 476 478 477 if (index >= hw->num_vlan) ··· 488 487 489 488 writel(val, ioaddr + GMAC_VLAN_TAG); 490 489 491 - for (i = 0; i < timeout; i++) { 492 - val = readl(ioaddr + GMAC_VLAN_TAG); 493 - if (!(val & GMAC_VLAN_TAG_CTRL_OB)) 494 - return 0; 495 - udelay(1); 490 + ret = readl_poll_timeout(ioaddr + GMAC_VLAN_TAG, val, 491 + !(val & GMAC_VLAN_TAG_CTRL_OB), 492 + 1000, 500000); 493 + if (ret) { 494 + netdev_err(dev, "Timeout accessing MAC_VLAN_Tag_Filter\n"); 495 + return -EBUSY; 496 496 } 497 497 498 - netdev_err(dev, "Timeout accessing MAC_VLAN_Tag_Filter\n"); 499 - 500 - return -EBUSY; 498 + return 0; 501 499 } 502 500 503 501 static int dwmac4_add_hw_vlan_rx_fltr(struct net_device *dev,
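`readl_poll_timeout()` replaces the open-coded 10 x 1 µs loop with a 500 ms bound, polling every 1 ms. Its shape can be sketched in plain C against a fake register; the countdown register below is purely illustrative, standing in for hardware that eventually clears its busy bit:

```c
#include <assert.h>

#define BUSY_BIT 0x1u

/* Fake MMIO read: the high byte counts down on each read, and the
 * busy bit clears once the countdown reaches zero. */
static unsigned int fake_readl(unsigned int *reg)
{
	if (*reg & 0xff00u) {
		*reg -= 0x100u;
		if (!(*reg & 0xff00u))
			*reg &= ~BUSY_BIT;
	}
	return *reg;
}

/* Poll until the busy bit clears or `max_polls` attempts pass,
 * mirroring the shape of readl_poll_timeout(). Returns 0 on
 * success, -1 on timeout (-EBUSY in the driver). */
static int poll_not_busy(unsigned int *reg, int max_polls)
{
	unsigned int val;
	int i;

	for (i = 0; i < max_polls; i++) {
		val = fake_readl(reg);
		if (!(val & BUSY_BIT))
			return 0;
	}
	return -1;
}
```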
+1 -1
drivers/net/ethernet/ti/am65-cpsw-nuss.c
··· 763 763 k3_udma_glue_disable_rx_chn(rx_chn->rx_chn); 764 764 765 765 fail_rx: 766 - for (i = 0; i < common->rx_ch_num_flows; i--) 766 + for (i = 0; i < common->rx_ch_num_flows; i++) 767 767 k3_udma_glue_reset_rx_chn(rx_chn->rx_chn, i, &rx_chn->flows[i], 768 768 am65_cpsw_nuss_rx_cleanup, 0); 769 769
+11 -1
drivers/net/ethernet/ti/cpsw_ale.c
··· 96 96 * @features: features supported by ALE 97 97 * @tbl_entries: number of ALE entries 98 98 * @reg_fields: pointer to array of register field configuration 99 + * @num_fields: number of fields in the reg_fields array 99 100 * @nu_switch_ale: NU Switch ALE 100 101 * @vlan_entry_tbl: ALE vlan entry fields description tbl 101 102 */ ··· 105 104 u32 features; 106 105 u32 tbl_entries; 107 106 const struct reg_field *reg_fields; 107 + int num_fields; 108 108 bool nu_switch_ale; 109 109 const struct ale_entry_fld *vlan_entry_tbl; 110 110 }; ··· 1402 1400 .dev_id = "cpsw", 1403 1401 .tbl_entries = 1024, 1404 1402 .reg_fields = ale_fields_cpsw, 1403 + .num_fields = ARRAY_SIZE(ale_fields_cpsw), 1405 1404 .vlan_entry_tbl = vlan_entry_cpsw, 1406 1405 }, 1407 1406 { ··· 1410 1407 .dev_id = "66ak2h-xgbe", 1411 1408 .tbl_entries = 2048, 1412 1409 .reg_fields = ale_fields_cpsw, 1410 + .num_fields = ARRAY_SIZE(ale_fields_cpsw), 1413 1411 .vlan_entry_tbl = vlan_entry_cpsw, 1414 1412 }, 1415 1413 { 1416 1414 .dev_id = "66ak2el", 1417 1415 .features = CPSW_ALE_F_STATUS_REG, 1418 1416 .reg_fields = ale_fields_cpsw_nu, 1417 + .num_fields = ARRAY_SIZE(ale_fields_cpsw_nu), 1419 1418 .nu_switch_ale = true, 1420 1419 .vlan_entry_tbl = vlan_entry_nu, 1421 1420 }, ··· 1426 1421 .features = CPSW_ALE_F_STATUS_REG, 1427 1422 .tbl_entries = 64, 1428 1423 .reg_fields = ale_fields_cpsw_nu, 1424 + .num_fields = ARRAY_SIZE(ale_fields_cpsw_nu), 1429 1425 .nu_switch_ale = true, 1430 1426 .vlan_entry_tbl = vlan_entry_nu, 1431 1427 }, ··· 1435 1429 .features = CPSW_ALE_F_STATUS_REG | CPSW_ALE_F_HW_AUTOAGING, 1436 1430 .tbl_entries = 64, 1437 1431 .reg_fields = ale_fields_cpsw_nu, 1432 + .num_fields = ARRAY_SIZE(ale_fields_cpsw_nu), 1438 1433 .nu_switch_ale = true, 1439 1434 .vlan_entry_tbl = vlan_entry_nu, 1440 1435 }, ··· 1443 1436 .dev_id = "j721e-cpswxg", 1444 1437 .features = CPSW_ALE_F_STATUS_REG | CPSW_ALE_F_HW_AUTOAGING, 1445 1438 .reg_fields = ale_fields_cpsw_nu, 1439 + .num_fields = ARRAY_SIZE(ale_fields_cpsw_nu), 1446 1440 .vlan_entry_tbl = vlan_entry_k3_cpswxg, 1447 1441 }, 1448 1442 { 1449 1443 .dev_id = "am64-cpswxg", 1450 1444 .features = CPSW_ALE_F_STATUS_REG | CPSW_ALE_F_HW_AUTOAGING, 1451 1445 .reg_fields = ale_fields_cpsw_nu, 1446 + .num_fields = ARRAY_SIZE(ale_fields_cpsw_nu), 1452 1447 .vlan_entry_tbl = vlan_entry_k3_cpswxg, 1453 1448 .tbl_entries = 512, 1454 1449 }, ··· 1486 1477 struct regmap *regmap = ale->regmap; 1487 1478 int i; 1488 1479 1489 - for (i = 0; i < ALE_FIELDS_MAX; i++) { 1480 + for (i = 0; i < ale->params.num_fields; i++) { 1490 1481 ale->fields[i] = devm_regmap_field_alloc(dev, regmap, 1491 1482 reg_fields[i]); 1492 1483 if (IS_ERR(ale->fields[i])) { ··· 1512 1503 params->ale_entries = ale_dev_id->tbl_entries; 1513 1504 params->nu_switch_ale = ale_dev_id->nu_switch_ale; 1514 1505 params->reg_fields = ale_dev_id->reg_fields; 1506 + params->num_fields = ale_dev_id->num_fields; 1515 1507 1516 1508 ale = devm_kzalloc(params->dev, sizeof(*ale), GFP_KERNEL); 1517 1509 if (!ale)
+1
drivers/net/ethernet/ti/cpsw_ale.h
··· 24 24 */ 25 25 bool nu_switch_ale; 26 26 const struct reg_field *reg_fields; 27 + int num_fields; 27 28 const char *dev_id; 28 29 unsigned long bus_freq; 29 30 };
+1
drivers/net/ieee802154/Kconfig
··· 101 101 102 102 config IEEE802154_MCR20A 103 103 tristate "MCR20A transceiver driver" 104 + select REGMAP_SPI 104 105 depends on IEEE802154_DRIVERS && MAC802154 105 106 depends on SPI 106 107 help
+1 -4
drivers/net/ieee802154/mcr20a.c
··· 1302 1302 irq_type = IRQF_TRIGGER_FALLING; 1303 1303 1304 1304 ret = devm_request_irq(&spi->dev, spi->irq, mcr20a_irq_isr, 1305 - irq_type, dev_name(&spi->dev), lp); 1305 + irq_type | IRQF_NO_AUTOEN, dev_name(&spi->dev), lp); 1306 1306 if (ret) { 1307 1307 dev_err(&spi->dev, "could not request_irq for mcr20a\n"); 1308 1308 ret = -ENODEV; 1309 1309 goto free_dev; 1310 1310 } 1311 - 1312 - /* disable_irq by default and wait for starting hardware */ 1313 - disable_irq(spi->irq); 1314 1311 1315 1312 ret = ieee802154_register_hw(hw); 1316 1313 if (ret) {
+1 -1
drivers/net/pcs/pcs-xpcs-wx.c
··· 109 109 txgbe_write_pma(xpcs, TXGBE_DFE_TAP_CTL0, 0); 110 110 val = txgbe_read_pma(xpcs, TXGBE_RX_GEN_CTL3); 111 111 val = u16_replace_bits(val, 0x4, TXGBE_RX_GEN_CTL3_LOS_TRSHLD0); 112 - txgbe_write_pma(xpcs, TXGBE_RX_EQ_ATTN_CTL, val); 112 + txgbe_write_pma(xpcs, TXGBE_RX_GEN_CTL3, val); 113 113 114 114 txgbe_write_pma(xpcs, TXGBE_MPLLA_CTL0, 0x20); 115 115 txgbe_write_pma(xpcs, TXGBE_MPLLA_CTL3, 0x46);
+2 -2
drivers/net/phy/qt2025.rs
··· 15 15 use kernel::net::phy::{ 16 16 self, 17 17 reg::{Mmd, C45}, 18 - DeviceId, Driver, 18 + Driver, 19 19 }; 20 20 use kernel::prelude::*; 21 21 use kernel::sizes::{SZ_16K, SZ_8K}; ··· 23 23 kernel::module_phy_driver! { 24 24 drivers: [PhyQT2025], 25 25 device_table: [ 26 - DeviceId::new_with_driver::<PhyQT2025>(), 26 + phy::DeviceId::new_with_driver::<PhyQT2025>(), 27 27 ], 28 28 name: "qt2025_phy", 29 29 author: "FUJITA Tomonori <fujita.tomonori@gmail.com>",
+3
drivers/net/phy/realtek.c
··· 527 527 { 528 528 int val; 529 529 530 + if (index >= RTL8211F_LED_COUNT) 531 + return -EINVAL; 532 + 530 533 val = phy_read_paged(phydev, 0xd04, RTL8211F_LEDCR); 531 534 if (val < 0) 532 535 return val;
+2 -2
drivers/net/ppp/ppp_generic.c
··· 2269 2269 if (!pchb) 2270 2270 goto out_rcu; 2271 2271 2272 - spin_lock(&pchb->downl); 2272 + spin_lock_bh(&pchb->downl); 2273 2273 if (!pchb->chan) { 2274 2274 /* channel got unregistered */ 2275 2275 kfree_skb(skb); ··· 2281 2281 kfree_skb(skb); 2282 2282 2283 2283 outl: 2284 - spin_unlock(&pchb->downl); 2284 + spin_unlock_bh(&pchb->downl); 2285 2285 out_rcu: 2286 2286 rcu_read_unlock(); 2287 2287
+2
drivers/net/vrf.c
··· 608 608 eth_zero_addr(eth->h_dest); 609 609 eth->h_proto = skb->protocol; 610 610 611 + rcu_read_lock_bh(); 611 612 dev_queue_xmit_nit(skb, vrf_dev); 613 + rcu_read_unlock_bh(); 612 614 613 615 skb_pull(skb, ETH_HLEN); 614 616 }
+8 -3
drivers/net/wwan/qcom_bam_dmux.c
··· 823 823 ret = devm_request_threaded_irq(dev, pc_ack_irq, NULL, bam_dmux_pc_ack_irq, 824 824 IRQF_ONESHOT, NULL, dmux); 825 825 if (ret) 826 - return ret; 826 + goto err_disable_pm; 827 827 828 828 ret = devm_request_threaded_irq(dev, dmux->pc_irq, NULL, bam_dmux_pc_irq, 829 829 IRQF_ONESHOT, NULL, dmux); 830 830 if (ret) 831 - return ret; 831 + goto err_disable_pm; 832 832 833 833 ret = irq_get_irqchip_state(dmux->pc_irq, IRQCHIP_STATE_LINE_LEVEL, 834 834 &dmux->pc_state); 835 835 if (ret) 836 - return ret; 836 + goto err_disable_pm; 837 837 838 838 /* Check if remote finished initialization before us */ 839 839 if (dmux->pc_state) { ··· 844 844 } 845 845 846 846 return 0; 847 + 848 + err_disable_pm: 849 + pm_runtime_disable(dev); 850 + pm_runtime_dont_use_autosuspend(dev); 851 + return ret; 847 852 } 848 853 849 854 static void bam_dmux_remove(struct platform_device *pdev)
+1 -1
include/linux/mlx5/mlx5_ifc.h
··· 2138 2138 u8 ts_cqe_metadata_size2wqe_counter[0x5]; 2139 2139 u8 reserved_at_250[0x10]; 2140 2140 2141 - u8 reserved_at_260[0x120]; 2141 + u8 reserved_at_260[0x20]; 2142 2142 2143 2143 u8 format_select_dw_gtpu_dw_0[0x8]; 2144 2144 u8 format_select_dw_gtpu_dw_1[0x8];
+18
include/linux/netdevice.h
··· 5029 5029 void netif_inherit_tso_max(struct net_device *to, 5030 5030 const struct net_device *from); 5031 5031 5032 + static inline unsigned int 5033 + netif_get_gro_max_size(const struct net_device *dev, const struct sk_buff *skb) 5034 + { 5035 + /* pairs with WRITE_ONCE() in netif_set_gro(_ipv4)_max_size() */ 5036 + return skb->protocol == htons(ETH_P_IPV6) ? 5037 + READ_ONCE(dev->gro_max_size) : 5038 + READ_ONCE(dev->gro_ipv4_max_size); 5039 + } 5040 + 5041 + static inline unsigned int 5042 + netif_get_gso_max_size(const struct net_device *dev, const struct sk_buff *skb) 5043 + { 5044 + /* pairs with WRITE_ONCE() in netif_set_gso(_ipv4)_max_size() */ 5045 + return skb->protocol == htons(ETH_P_IPV6) ? 5046 + READ_ONCE(dev->gso_max_size) : 5047 + READ_ONCE(dev->gso_ipv4_max_size); 5048 + } 5049 + 5032 5050 static inline bool netif_is_macsec(const struct net_device *dev) 5033 5051 { 5034 5052 return dev->priv_flags & IFF_MACSEC;
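The new helpers pick the IPv6 or IPv4 limit based on `skb->protocol`, so `gso_features_check()` no longer tests only `dev->gso_max_size` for both families. A userspace sketch of the selection (the struct and names are illustrative, and the `READ_ONCE()`/`htons()` handling of the real helpers is omitted):

```c
#include <assert.h>
#include <stdint.h>

#define ETH_P_IPV6 0x86DDu
#define ETH_P_IP   0x0800u

struct dev_limits {
	unsigned int gso_max_size;	/* IPv6 limit */
	unsigned int gso_ipv4_max_size;	/* IPv4 limit */
};

/* Pick the GSO size limit that matches the packet's protocol, as the
 * new netif_get_gso_max_size() helper does. */
static unsigned int get_gso_max_size(const struct dev_limits *dev,
				     uint16_t protocol)
{
	return protocol == ETH_P_IPV6 ? dev->gso_max_size
				      : dev->gso_ipv4_max_size;
}
```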
+3 -1
include/linux/virtio_net.h
··· 103 103 104 104 if (!skb_partial_csum_set(skb, start, off)) 105 105 return -EINVAL; 106 + if (skb_transport_offset(skb) < nh_min_len) 107 + return -EINVAL; 106 108 107 - nh_min_len = max_t(u32, nh_min_len, skb_transport_offset(skb)); 109 + nh_min_len = skb_transport_offset(skb); 108 110 p_off = nh_min_len + thlen; 109 111 if (!pskb_may_pull(skb, p_off)) 110 112 return -EINVAL;
+1 -1
include/uapi/linux/netfilter/nf_tables.h
··· 1694 1694 * 1695 1695 * @NFTA_FLOWTABLE_TABLE: name of the table containing the expression (NLA_STRING) 1696 1696 * @NFTA_FLOWTABLE_NAME: name of this flow table (NLA_STRING) 1697 - * @NFTA_FLOWTABLE_HOOK: netfilter hook configuration(NLA_U32) 1697 + * @NFTA_FLOWTABLE_HOOK: netfilter hook configuration (NLA_NESTED) 1698 1698 * @NFTA_FLOWTABLE_USE: number of references to this flow table (NLA_U32) 1699 1699 * @NFTA_FLOWTABLE_HANDLE: object handle (NLA_U64) 1700 1700 * @NFTA_FLOWTABLE_FLAGS: flags (NLA_U32)
+2
net/bluetooth/hci_core.c
··· 3782 3782 3783 3783 hci_dev_lock(hdev); 3784 3784 conn = hci_conn_hash_lookup_handle(hdev, handle); 3785 + if (conn && hci_dev_test_flag(hdev, HCI_MGMT)) 3786 + mgmt_device_connected(hdev, conn, NULL, 0); 3785 3787 hci_dev_unlock(hdev); 3786 3788 3787 3789 if (conn) {
+6 -9
net/bluetooth/hci_event.c
··· 3706 3706 goto unlock; 3707 3707 } 3708 3708 3709 - if (!ev->status && !test_bit(HCI_CONN_MGMT_CONNECTED, &conn->flags)) { 3709 + if (!ev->status) { 3710 3710 struct hci_cp_remote_name_req cp; 3711 3711 memset(&cp, 0, sizeof(cp)); 3712 3712 bacpy(&cp.bdaddr, &conn->dst); ··· 5324 5324 goto unlock; 5325 5325 } 5326 5326 5327 - /* If no side requires MITM protection; auto-accept */ 5327 + /* If no side requires MITM protection; use JUST_CFM method */ 5328 5328 if ((!loc_mitm || conn->remote_cap == HCI_IO_NO_INPUT_OUTPUT) && 5329 5329 (!rem_mitm || conn->io_capability == HCI_IO_NO_INPUT_OUTPUT)) { 5330 5330 5331 - /* If we're not the initiators request authorization to 5332 - * proceed from user space (mgmt_user_confirm with 5333 - * confirm_hint set to 1). The exception is if neither 5334 - * side had MITM or if the local IO capability is 5335 - * NoInputNoOutput, in which case we do auto-accept 5331 + /* If we're not the initiator of request authorization and the 5332 + * local IO capability is not NoInputNoOutput, use JUST_WORKS 5333 + * method (mgmt_user_confirm with confirm_hint set to 1). 5336 5334 */ 5337 5335 if (!test_bit(HCI_CONN_AUTH_PEND, &conn->flags) && 5338 - conn->io_capability != HCI_IO_NO_INPUT_OUTPUT && 5339 - (loc_mitm || rem_mitm)) { 5336 + conn->io_capability != HCI_IO_NO_INPUT_OUTPUT) { 5340 5337 bt_dev_dbg(hdev, "Confirming auto-accept as acceptor"); 5341 5338 confirm_hint = 1; 5342 5339 goto confirm;
-8
net/bluetooth/l2cap_core.c
··· 4066 4066 static int l2cap_connect_req(struct l2cap_conn *conn, 4067 4067 struct l2cap_cmd_hdr *cmd, u16 cmd_len, u8 *data) 4068 4068 { 4069 - struct hci_dev *hdev = conn->hcon->hdev; 4070 - struct hci_conn *hcon = conn->hcon; 4071 - 4072 4069 if (cmd_len < sizeof(struct l2cap_conn_req)) 4073 4070 return -EPROTO; 4074 - 4075 - hci_dev_lock(hdev); 4076 - if (hci_dev_test_flag(hdev, HCI_MGMT)) 4077 - mgmt_device_connected(hdev, hcon, NULL, 0); 4078 - hci_dev_unlock(hdev); 4079 4071 4080 4072 l2cap_connect(conn, cmd, data, L2CAP_CONN_RSP); 4081 4073 return 0;
+14 -9
net/bluetooth/mgmt.c
··· 1453 1453 1454 1454 static void cmd_complete_rsp(struct mgmt_pending_cmd *cmd, void *data) 1455 1455 { 1456 - if (cmd->cmd_complete) { 1457 - u8 *status = data; 1456 + struct cmd_lookup *match = data; 1458 1457 1459 - cmd->cmd_complete(cmd, *status); 1458 + /* dequeue cmd_sync entries using cmd as data as that is about to be 1459 + * removed/freed. 1460 + */ 1461 + hci_cmd_sync_dequeue(match->hdev, NULL, cmd, NULL); 1462 + 1463 + if (cmd->cmd_complete) { 1464 + cmd->cmd_complete(cmd, match->mgmt_status); 1460 1465 mgmt_pending_remove(cmd); 1461 1466 1462 1467 return; ··· 9399 9394 void mgmt_index_removed(struct hci_dev *hdev) 9400 9395 { 9401 9396 struct mgmt_ev_ext_index ev; 9402 - u8 status = MGMT_STATUS_INVALID_INDEX; 9397 + struct cmd_lookup match = { NULL, hdev, MGMT_STATUS_INVALID_INDEX }; 9403 9398 9404 9399 if (test_bit(HCI_QUIRK_RAW_DEVICE, &hdev->quirks)) 9405 9400 return; 9406 9401 9407 - mgmt_pending_foreach(0, hdev, cmd_complete_rsp, &status); 9402 + mgmt_pending_foreach(0, hdev, cmd_complete_rsp, &match); 9408 9403 9409 9404 if (hci_dev_test_flag(hdev, HCI_UNCONFIGURED)) { 9410 9405 mgmt_index_event(MGMT_EV_UNCONF_INDEX_REMOVED, hdev, NULL, 0, ··· 9455 9450 void __mgmt_power_off(struct hci_dev *hdev) 9456 9451 { 9457 9452 struct cmd_lookup match = { NULL, hdev }; 9458 - u8 status, zero_cod[] = { 0, 0, 0 }; 9453 + u8 zero_cod[] = { 0, 0, 0 }; 9459 9454 9460 9455 mgmt_pending_foreach(MGMT_OP_SET_POWERED, hdev, settings_rsp, &match); 9461 9456 ··· 9467 9462 * status responses. 9468 9463 */ 9469 9464 if (hci_dev_test_flag(hdev, HCI_UNREGISTER)) 9470 - status = MGMT_STATUS_INVALID_INDEX; 9465 + match.mgmt_status = MGMT_STATUS_INVALID_INDEX; 9471 9466 else 9472 - status = MGMT_STATUS_NOT_POWERED; 9467 + match.mgmt_status = MGMT_STATUS_NOT_POWERED; 9473 9468 9474 - mgmt_pending_foreach(0, hdev, cmd_complete_rsp, &status); 9469 + mgmt_pending_foreach(0, hdev, cmd_complete_rsp, &match); 9475 9470 9476 9471 if (memcmp(hdev->dev_class, zero_cod, sizeof(zero_cod)) != 0) { 9477 9472 mgmt_limited_event(MGMT_EV_CLASS_OF_DEV_CHANGED, hdev,
+1 -1
net/bridge/br_mdb.c
··· 1674 1674 spin_lock_bh(&br->multicast_lock); 1675 1675 1676 1676 mp = br_mdb_ip_get(br, &group); 1677 - if (!mp) { 1677 + if (!mp || (!mp->ports && !mp->host_joined)) { 1678 1678 NL_SET_ERR_MSG_MOD(extack, "MDB entry not found"); 1679 1679 err = -ENOENT; 1680 1680 goto unlock;
+9 -5
net/core/dev.c
··· 3512 3512 if (gso_segs > READ_ONCE(dev->gso_max_segs)) 3513 3513 return features & ~NETIF_F_GSO_MASK; 3514 3514 3515 - if (unlikely(skb->len >= READ_ONCE(dev->gso_max_size))) 3515 + if (unlikely(skb->len >= netif_get_gso_max_size(dev, skb))) 3516 3516 return features & ~NETIF_F_GSO_MASK; 3517 3517 3518 3518 if (!skb_shinfo(skb)->gso_type) { ··· 3758 3758 sizeof(_tcphdr), &_tcphdr); 3759 3759 if (likely(th)) 3760 3760 hdr_len += __tcp_hdrlen(th); 3761 - } else { 3761 + } else if (shinfo->gso_type & SKB_GSO_UDP_L4) { 3762 3762 struct udphdr _udphdr; 3763 3763 3764 3764 if (skb_header_pointer(skb, hdr_len, ··· 3766 3766 hdr_len += sizeof(struct udphdr); 3767 3767 } 3768 3768 3769 - if (shinfo->gso_type & SKB_GSO_DODGY) 3770 - gso_segs = DIV_ROUND_UP(skb->len - hdr_len, 3771 - shinfo->gso_size); 3769 + if (unlikely(shinfo->gso_type & SKB_GSO_DODGY)) { 3770 + int payload = skb->len - hdr_len; 3772 3771 3772 + /* Malicious packet. */ 3773 + if (payload <= 0) 3774 + return; 3775 + gso_segs = DIV_ROUND_UP(payload, shinfo->gso_size); 3776 + } 3773 3777 qdisc_skb_cb(skb)->pkt_len += (gso_segs - 1) * hdr_len; 3774 3778 } 3775 3779 }
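The qdisc_pkt_len_init() fix guards the `skb->len - hdr_len` subtraction: for a malicious ("dodgy") packet whose claimed headers are longer than the packet itself, the unsigned difference would wrap before DIV_ROUND_UP, producing a huge segment count. A userspace sketch of the guarded estimate (helper name is illustrative):

```c
#include <assert.h>

/* Estimate the GSO segment count from total length, header length and
 * gso_size, guarding against hdr_len >= len: without the guard the
 * unsigned subtraction would wrap around before the round-up divide. */
static unsigned int est_segs(unsigned int len, unsigned int hdr_len,
			     unsigned int gso_size)
{
	int payload = (int)len - (int)hdr_len;

	if (payload <= 0 || gso_size == 0)
		return 0;	/* malformed packet */
	return ((unsigned int)payload + gso_size - 1) / gso_size;
}
```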
+2 -7
net/core/gro.c
··· 98 98 unsigned int headlen = skb_headlen(skb); 99 99 unsigned int len = skb_gro_len(skb); 100 100 unsigned int delta_truesize; 101 - unsigned int gro_max_size; 102 101 unsigned int new_truesize; 103 102 struct sk_buff *lp; 104 103 int segs; ··· 111 112 if (p->pp_recycle != skb->pp_recycle) 112 113 return -ETOOMANYREFS; 113 114 114 - /* pairs with WRITE_ONCE() in netif_set_gro(_ipv4)_max_size() */ 115 - gro_max_size = p->protocol == htons(ETH_P_IPV6) ? 116 - READ_ONCE(p->dev->gro_max_size) : 117 - READ_ONCE(p->dev->gro_ipv4_max_size); 118 - 119 - if (unlikely(p->len + len >= gro_max_size || NAPI_GRO_CB(skb)->flush)) 115 + if (unlikely(p->len + len >= netif_get_gro_max_size(p->dev, p) || 116 + NAPI_GRO_CB(skb)->flush)) 120 117 return -E2BIG; 121 118 122 119 if (unlikely(p->len + len >= GRO_LEGACY_MAX_SIZE)) {
+7
net/dsa/dsa.c
··· 1577 1577 void dsa_switch_shutdown(struct dsa_switch *ds) 1578 1578 { 1579 1579 struct net_device *conduit, *user_dev; 1580 + LIST_HEAD(close_list); 1580 1581 struct dsa_port *dp; 1581 1582 1582 1583 mutex_lock(&dsa2_mutex); ··· 1587 1586 1588 1587 rtnl_lock(); 1589 1588 1589 + dsa_switch_for_each_cpu_port(dp, ds) 1590 + list_add(&dp->conduit->close_list, &close_list); 1591 + 1592 + dev_close_many(&close_list, true); 1593 + 1590 1594 dsa_switch_for_each_user_port(dp, ds) { 1591 1595 conduit = dsa_port_to_conduit(dp); 1592 1596 user_dev = dp->user; 1593 1597 1598 + netif_device_detach(user_dev); 1594 1599 netdev_upper_dev_unlink(conduit, user_dev); 1595 1600 } 1596 1601
+3 -3
net/ipv4/ip_gre.c
··· 662 662 if (skb_cow_head(skb, 0)) 663 663 goto free_skb; 664 664 665 - tnl_params = (const struct iphdr *)skb->data; 666 - 667 - if (!pskb_network_may_pull(skb, pull_len)) 665 + if (!pskb_may_pull(skb, pull_len)) 668 666 goto free_skb; 667 + 668 + tnl_params = (const struct iphdr *)skb->data; 669 669 670 670 /* ip_tunnel_xmit() needs skb->data pointing to gre header. */ 671 671 skb_pull(skb, pull_len);
net/ipv4/netfilter/nf_dup_ipv4.c (+5 -2)
···
 {
 	struct iphdr *iph;

+	local_bh_disable();
 	if (this_cpu_read(nf_skb_duplicated))
-		return;
+		goto out;
 	/*
 	 * Copy the skb, and route the copy. Will later return %XT_CONTINUE for
 	 * the original skb, which should continue on its way as if nothing has
···
 	 */
 	skb = pskb_copy(skb, GFP_ATOMIC);
 	if (skb == NULL)
-		return;
+		goto out;

 #if IS_ENABLED(CONFIG_NF_CONNTRACK)
 	/* Avoid counting cloned packets towards the original connection. */
···
 	} else {
 		kfree_skb(skb);
 	}
+out:
+	local_bh_enable();
 }
 EXPORT_SYMBOL_GPL(nf_dup_ipv4);
net/ipv4/tcp_offload.c (+8 -2)
···
 	if (!pskb_may_pull(skb, sizeof(struct tcphdr)))
 		return ERR_PTR(-EINVAL);

-	if (skb_shinfo(skb)->gso_type & SKB_GSO_FRAGLIST)
-		return __tcp4_gso_segment_list(skb, features);
+	if (skb_shinfo(skb)->gso_type & SKB_GSO_FRAGLIST) {
+		struct tcphdr *th = tcp_hdr(skb);
+
+		if (skb_pagelen(skb) - th->doff * 4 == skb_shinfo(skb)->gso_size)
+			return __tcp4_gso_segment_list(skb, features);
+
+		skb->ip_summed = CHECKSUM_NONE;
+	}

 	if (unlikely(skb->ip_summed != CHECKSUM_PARTIAL)) {
 		const struct iphdr *iph = ip_hdr(skb);
net/ipv4/udp_offload.c (+20 -2)
···
 		return NULL;
 	}

-	if (skb_shinfo(gso_skb)->gso_type & SKB_GSO_FRAGLIST)
-		return __udp_gso_segment_list(gso_skb, features, is_ipv6);
+	if (skb_shinfo(gso_skb)->gso_type & SKB_GSO_FRAGLIST) {
+		/* Detect modified geometry and pass those to skb_segment. */
+		if (skb_pagelen(gso_skb) - sizeof(*uh) == skb_shinfo(gso_skb)->gso_size)
+			return __udp_gso_segment_list(gso_skb, features, is_ipv6);
+
+		/* Setup csum, as fraglist skips this in udp4_gro_receive. */
+		gso_skb->csum_start = skb_transport_header(gso_skb) - gso_skb->head;
+		gso_skb->csum_offset = offsetof(struct udphdr, check);
+		gso_skb->ip_summed = CHECKSUM_PARTIAL;
+
+		uh = udp_hdr(gso_skb);
+		if (is_ipv6)
+			uh->check = ~udp_v6_check(gso_skb->len,
+						  &ipv6_hdr(gso_skb)->saddr,
+						  &ipv6_hdr(gso_skb)->daddr, 0);
+		else
+			uh->check = ~udp_v4_check(gso_skb->len,
+						  ip_hdr(gso_skb)->saddr,
+						  ip_hdr(gso_skb)->daddr, 0);
+	}

 	skb_pull(gso_skb, sizeof(*uh));
net/ipv6/netfilter/nf_dup_ipv6.c (+5 -2)
···
 void nf_dup_ipv6(struct net *net, struct sk_buff *skb, unsigned int hooknum,
 		 const struct in6_addr *gw, int oif)
 {
+	local_bh_disable();
 	if (this_cpu_read(nf_skb_duplicated))
-		return;
+		goto out;
 	skb = pskb_copy(skb, GFP_ATOMIC);
 	if (skb == NULL)
-		return;
+		goto out;

 #if IS_ENABLED(CONFIG_NF_CONNTRACK)
 	nf_reset_ct(skb);
···
 	} else {
 		kfree_skb(skb);
 	}
+out:
+	local_bh_enable();
 }
 EXPORT_SYMBOL_GPL(nf_dup_ipv6);
net/ipv6/tcpv6_offload.c (+8 -2)
···
 	if (!pskb_may_pull(skb, sizeof(*th)))
 		return ERR_PTR(-EINVAL);

-	if (skb_shinfo(skb)->gso_type & SKB_GSO_FRAGLIST)
-		return __tcp6_gso_segment_list(skb, features);
+	if (skb_shinfo(skb)->gso_type & SKB_GSO_FRAGLIST) {
+		struct tcphdr *th = tcp_hdr(skb);
+
+		if (skb_pagelen(skb) - th->doff * 4 == skb_shinfo(skb)->gso_size)
+			return __tcp6_gso_segment_list(skb, features);
+
+		skb->ip_summed = CHECKSUM_NONE;
+	}

 	if (unlikely(skb->ip_summed != CHECKSUM_PARTIAL)) {
 		const struct ipv6hdr *ipv6h = ipv6_hdr(skb);
net/mac802154/scan.c (+3 -1)
···
 	struct ieee802154_local *local =
 		container_of(work, struct ieee802154_local, scan_work.work);
 	struct cfg802154_scan_request *scan_req;
+	enum nl802154_scan_types scan_req_type;
 	struct ieee802154_sub_if_data *sdata;
 	unsigned int scan_duration = 0;
 	struct wpan_phy *wpan_phy;
···
 	}

 	wpan_phy = scan_req->wpan_phy;
+	scan_req_type = scan_req->type;
 	scan_req_duration = scan_req->duration;

 	/* Look for the next valid chan */
···
 		goto end_scan;
 	}

-	if (scan_req->type == NL802154_SCAN_ACTIVE) {
+	if (scan_req_type == NL802154_SCAN_ACTIVE) {
 		ret = mac802154_transmit_beacon_req(local, sdata);
 		if (ret)
 			dev_err(&sdata->dev->dev,
net/ncsi/ncsi-manage.c (+2)
···
 	list_del_rcu(&ndp->node);
 	spin_unlock_irqrestore(&ncsi_dev_lock, flags);

+	disable_work_sync(&ndp->work);
+
 	kfree(ndp);
 }
 EXPORT_SYMBOL_GPL(ncsi_unregister_dev);
net/sctp/socket.c (+3 -1)
···
 	 */
 	inet_sk_set_state(sk, SCTP_SS_LISTENING);
 	if (!ep->base.bind_addr.port) {
-		if (sctp_autobind(sk))
+		if (sctp_autobind(sk)) {
+			inet_sk_set_state(sk, SCTP_SS_CLOSED);
 			return -EAGAIN;
+		}
 	} else {
 		if (sctp_get_port(sk, inet_sk(sk)->inet_num)) {
 			inet_sk_set_state(sk, SCTP_SS_CLOSED);
tools/testing/selftests/net/netfilter/conntrack_dump_flush.c (+1)
···
 static int count_entries(const struct nlmsghdr *nlh, void *data)
 {
 	reply_counter++;
+	return MNL_CB_OK;
 }

 static int conntracK_count_zone(struct mnl_socket *sock, uint16_t zone)
tools/testing/selftests/net/netfilter/nft_audit.sh (+29 -28)
···
 trap 'kill $logread_pid; rm -f $logfile $rulefile' EXIT
 exec 3<"$logfile"

+lsplit='s/^\(.*\) entries=\([^ ]*\) \(.*\)$/pfx="\1"\nval="\2"\nsfx="\3"/'
+summarize_logs() {
+	sum=0
+	while read line; do
+		eval $(sed "$lsplit" <<< "$line")
+		[[ $sum -gt 0 ]] && {
+			[[ "$pfx $sfx" == "$tpfx $tsfx" ]] && {
+				let "sum += val"
+				continue
+			}
+			echo "$tpfx entries=$sum $tsfx"
+		}
+		tpfx="$pfx"
+		tsfx="$sfx"
+		sum=$val
+	done
+	echo "$tpfx entries=$sum $tsfx"
+}
+
 do_test() { # (cmd, log)
 	echo -n "testing for cmd: $1 ... "
 	cat <&3 >/dev/null
 	$1 >/dev/null || exit 1
 	sleep 0.1
-	res=$(diff -a -u <(echo "$2") - <&3)
+	res=$(diff -a -u <(echo "$2") <(summarize_logs <&3))
 	[ $? -eq 0 ] && { echo "OK"; return; }
 	echo "FAIL"
 	grep -v '^\(---\|+++\|@@\)' <<< "$res"
···
 'table=t1 family=2 entries=3 op=nft_reset_rule'

 do_test 'nft reset rules table t1' \
-'table=t1 family=2 entries=3 op=nft_reset_rule
-table=t1 family=2 entries=3 op=nft_reset_rule
-table=t1 family=2 entries=3 op=nft_reset_rule'
+'table=t1 family=2 entries=9 op=nft_reset_rule'

 do_test 'nft reset rules t2 c3' \
-'table=t2 family=2 entries=189 op=nft_reset_rule
-table=t2 family=2 entries=188 op=nft_reset_rule
-table=t2 family=2 entries=126 op=nft_reset_rule'
+'table=t2 family=2 entries=503 op=nft_reset_rule'

 do_test 'nft reset rules t2' \
-'table=t2 family=2 entries=3 op=nft_reset_rule
-table=t2 family=2 entries=3 op=nft_reset_rule
-table=t2 family=2 entries=186 op=nft_reset_rule
-table=t2 family=2 entries=188 op=nft_reset_rule
-table=t2 family=2 entries=129 op=nft_reset_rule'
+'table=t2 family=2 entries=509 op=nft_reset_rule'

 do_test 'nft reset rules' \
-'table=t1 family=2 entries=3 op=nft_reset_rule
-table=t1 family=2 entries=3 op=nft_reset_rule
-table=t1 family=2 entries=3 op=nft_reset_rule
-table=t2 family=2 entries=3 op=nft_reset_rule
-table=t2 family=2 entries=3 op=nft_reset_rule
-table=t2 family=2 entries=180 op=nft_reset_rule
-table=t2 family=2 entries=188 op=nft_reset_rule
-table=t2 family=2 entries=135 op=nft_reset_rule'
+'table=t1 family=2 entries=9 op=nft_reset_rule
+table=t2 family=2 entries=509 op=nft_reset_rule'

 # resetting sets and elements

···
 'table=t1 family=2 entries=1 op=nft_reset_obj'

 do_test 'nft reset counters t2' \
-'table=t2 family=2 entries=342 op=nft_reset_obj
-table=t2 family=2 entries=158 op=nft_reset_obj'
+'table=t2 family=2 entries=500 op=nft_reset_obj'

 do_test 'nft reset counters' \
 'table=t1 family=2 entries=1 op=nft_reset_obj
-table=t2 family=2 entries=341 op=nft_reset_obj
-table=t2 family=2 entries=159 op=nft_reset_obj'
+table=t2 family=2 entries=500 op=nft_reset_obj'

 # resetting quotas

···
 'table=t1 family=2 entries=1 op=nft_reset_obj'

 do_test 'nft reset quotas t2' \
-'table=t2 family=2 entries=315 op=nft_reset_obj
-table=t2 family=2 entries=185 op=nft_reset_obj'
+'table=t2 family=2 entries=500 op=nft_reset_obj'

 do_test 'nft reset quotas' \
 'table=t1 family=2 entries=1 op=nft_reset_obj
-table=t2 family=2 entries=314 op=nft_reset_obj
-table=t2 family=2 entries=186 op=nft_reset_obj'
+table=t2 family=2 entries=500 op=nft_reset_obj'

 # deleting rules

tools/testing/selftests/net/rds/Makefile (+2 -1)
···
 	@echo mk_build_dir="$(shell pwd)" > include.sh

 TEST_PROGS := run.sh \
-	include.sh \
 	test.py
+
+TEST_FILES := include.sh

 EXTRA_CLEAN := /tmp/rds_logs
tools/testing/selftests/net/rds/test.py