
Merge tag 'net-6.3-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
"Including fixes from CAN and WPAN.

Still quite a few bugs from this release. This pull is a bit smaller
because major subtrees went into the previous one. Or maybe people
took spring break off?

Current release - regressions:

- phy: micrel: correct KSZ9131RNX EEE capabilities and advertisement

Current release - new code bugs:

- eth: wangxun: fix vector length of interrupt cause

- vsock/loopback: consistently protect the packet queue with
sk_buff_head.lock

- virtio/vsock: fix header length on skb merging

- wpan: ca8210: fix unsigned mac_len comparison with zero

Previous releases - regressions:

- eth: stmmac: don't reject VLANs when IFF_PROMISC is set

- eth: smsc911x: avoid PHY being resumed when interface is not up

- eth: mtk_eth_soc: fix tx throughput regression with direct 1G links

- eth: bnx2x: use the right build_skb() helper after core rework

- wwan: iosm: fix 7560 modem crash on use on unsupported channel

Previous releases - always broken:

- eth: sfc: don't overwrite offload features at NIC reset

- eth: r8169: fix RTL8168H and RTL8107E rx crc error

- can: j1939: prevent deadlock by moving j1939_sk_errqueue()

- virt: vmxnet3: use GRO callback when UPT is enabled

- virt: xen: don't do grant copy across page boundary

- phy: dp83869: fix default value for tx-/rx-internal-delay

- dsa: ksz8: fix multiple issues with ksz8_fdb_dump

- eth: mvpp2: fix classification/RSS of VLAN and fragmented packets

- eth: mtk_eth_soc: fix flow block refcounting logic

Misc:

- constify fwnode pointers in SFP handling"

* tag 'net-6.3-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (55 commits)
net: ethernet: mtk_eth_soc: add missing ppe cache flush when deleting a flow
net: ethernet: mtk_eth_soc: fix L2 offloading with DSA untag offload
net: ethernet: mtk_eth_soc: fix flow block refcounting logic
net: mvneta: fix potential double-frees in mvneta_txq_sw_deinit()
net: dsa: sync unicast and multicast addresses for VLAN filters too
net: dsa: mv88e6xxx: Enable IGMP snooping on user ports only
xen/netback: use same error messages for same errors
test/vsock: new skbuff appending test
virtio/vsock: WARN_ONCE() for invalid state of socket
virtio/vsock: fix header length on skb merging
bnxt_en: Add missing 200G link speed reporting
bnxt_en: Fix typo in PCI id to device description string mapping
bnxt_en: Fix reporting of test result in ethtool selftest
i40e: fix registers dump after run ethtool adapter self test
bnx2x: use the right build_skb() helper
net: ipa: compute DMA pool size properly
net: wwan: iosm: fixes 7560 modem crash
net: ethernet: mtk_eth_soc: fix tx throughput regression with direct 1G links
ice: fix invalid check for empty list in ice_sched_assoc_vsi_to_agg()
ice: add profile conflict check for AVF FDIR
...

+568 -263
+1 -6
MAINTAINERS
···
 
 FREESCALE QORIQ DPAA FMAN DRIVER
 M: Madalin Bucur <madalin.bucur@nxp.com>
+R: Sean Anderson <sean.anderson@seco.com>
 L: netdev@vger.kernel.org
 S: Maintained
 F: Documentation/devicetree/bindings/net/fsl-fman.txt
···
 
 NFC SUBSYSTEM
 M: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
-L: linux-nfc@lists.01.org (subscribers-only)
 L: netdev@vger.kernel.org
 S: Maintained
-B: mailto:linux-nfc@lists.01.org
 F: Documentation/devicetree/bindings/net/nfc/
 F: drivers/nfc/
 F: include/linux/platform_data/nfcmrvl.h
···
 NFC VIRTUAL NCI DEVICE DRIVER
 M: Bongsu Jeon <bongsu.jeon@samsung.com>
 L: netdev@vger.kernel.org
-L: linux-nfc@lists.01.org (subscribers-only)
 S: Supported
 F: drivers/nfc/virtual_ncidev.c
 F: tools/testing/selftests/nci/
···
 F: sound/soc/codecs/tfa989x.c
 
 NXP-NCI NFC DRIVER
-L: linux-nfc@lists.01.org (subscribers-only)
 S: Orphan
 F: Documentation/devicetree/bindings/net/nfc/nxp,nci.yaml
 F: drivers/nfc/nxp-nci
···
 
 SAMSUNG S3FWRN5 NFC DRIVER
 M: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
-L: linux-nfc@lists.01.org (subscribers-only)
 S: Maintained
 F: Documentation/devicetree/bindings/net/nfc/samsung,s3fwrn5.yaml
 F: drivers/nfc/s3fwrn5
···
 TI TRF7970A NFC DRIVER
 M: Mark Greer <mgreer@animalcreek.com>
 L: linux-wireless@vger.kernel.org
-L: linux-nfc@lists.01.org (subscribers-only)
 S: Supported
 F: Documentation/devicetree/bindings/net/nfc/ti,trf7970a.yaml
 F: drivers/nfc/trf7970a.c
+14
drivers/net/dsa/b53/b53_mmap.c
···
 	return 0;
 }
 
+static int b53_mmap_phy_read16(struct b53_device *dev, int addr, int reg,
+			       u16 *value)
+{
+	return -EIO;
+}
+
+static int b53_mmap_phy_write16(struct b53_device *dev, int addr, int reg,
+				u16 value)
+{
+	return -EIO;
+}
+
 static const struct b53_io_ops b53_mmap_ops = {
 	.read8 = b53_mmap_read8,
 	.read16 = b53_mmap_read16,
···
 	.write32 = b53_mmap_write32,
 	.write48 = b53_mmap_write48,
 	.write64 = b53_mmap_write64,
+	.phy_read16 = b53_mmap_phy_read16,
+	.phy_write16 = b53_mmap_phy_write16,
 };
 
 static int b53_mmap_probe_of(struct platform_device *pdev,
+5 -6
drivers/net/dsa/microchip/ksz8795.c
···
 	u16 entries = 0;
 	u8 timestamp = 0;
 	u8 fid;
-	u8 member;
-	struct alu_struct alu;
+	u8 src_port;
+	u8 mac[ETH_ALEN];
 
 	do {
-		alu.is_static = false;
-		ret = ksz8_r_dyn_mac_table(dev, i, alu.mac, &fid, &member,
+		ret = ksz8_r_dyn_mac_table(dev, i, mac, &fid, &src_port,
 					   &timestamp, &entries);
-		if (!ret && (member & BIT(port))) {
-			ret = cb(alu.mac, alu.fid, alu.is_static, data);
+		if (!ret && port == src_port) {
+			ret = cb(mac, fid, false, data);
 			if (ret)
 				break;
 		}
-9
drivers/net/dsa/microchip/ksz8863_smi.c
···
 	{
 		.read = ksz8863_mdio_read,
 		.write = ksz8863_mdio_write,
-		.max_raw_read = 1,
-		.max_raw_write = 1,
 	},
 	{
 		.read = ksz8863_mdio_read,
 		.write = ksz8863_mdio_write,
 		.val_format_endian_default = REGMAP_ENDIAN_BIG,
-		.max_raw_read = 2,
-		.max_raw_write = 2,
 	},
 	{
 		.read = ksz8863_mdio_read,
 		.write = ksz8863_mdio_write,
 		.val_format_endian_default = REGMAP_ENDIAN_BIG,
-		.max_raw_read = 4,
-		.max_raw_write = 4,
 	}
 };
···
 		.pad_bits = 24,
 		.val_bits = 8,
 		.cache_type = REGCACHE_NONE,
-		.use_single_read = 1,
 		.lock = ksz_regmap_lock,
 		.unlock = ksz_regmap_unlock,
 	},
···
 		.pad_bits = 24,
 		.val_bits = 16,
 		.cache_type = REGCACHE_NONE,
-		.use_single_read = 1,
 		.lock = ksz_regmap_lock,
 		.unlock = ksz_regmap_unlock,
 	},
···
 		.pad_bits = 24,
 		.val_bits = 32,
 		.cache_type = REGCACHE_NONE,
-		.use_single_read = 1,
 		.lock = ksz_regmap_lock,
 		.unlock = ksz_regmap_unlock,
 	}
+6 -6
drivers/net/dsa/microchip/ksz_common.c
···
 	[VLAN_TABLE_VALID]		= BIT(19),
 	[STATIC_MAC_TABLE_VALID]	= BIT(19),
 	[STATIC_MAC_TABLE_USE_FID]	= BIT(21),
-	[STATIC_MAC_TABLE_FID]		= GENMASK(29, 26),
+	[STATIC_MAC_TABLE_FID]		= GENMASK(25, 22),
 	[STATIC_MAC_TABLE_OVERRIDE]	= BIT(20),
 	[STATIC_MAC_TABLE_FWD_PORTS]	= GENMASK(18, 16),
-	[DYNAMIC_MAC_TABLE_ENTRIES_H]	= GENMASK(5, 0),
-	[DYNAMIC_MAC_TABLE_MAC_EMPTY]	= BIT(7),
+	[DYNAMIC_MAC_TABLE_ENTRIES_H]	= GENMASK(1, 0),
+	[DYNAMIC_MAC_TABLE_MAC_EMPTY]	= BIT(2),
 	[DYNAMIC_MAC_TABLE_NOT_READY]	= BIT(7),
-	[DYNAMIC_MAC_TABLE_ENTRIES]	= GENMASK(31, 28),
+	[DYNAMIC_MAC_TABLE_ENTRIES]	= GENMASK(31, 24),
 	[DYNAMIC_MAC_TABLE_FID]		= GENMASK(19, 16),
 	[DYNAMIC_MAC_TABLE_SRC_PORT]	= GENMASK(21, 20),
 	[DYNAMIC_MAC_TABLE_TIMESTAMP]	= GENMASK(23, 22),
···
 	[VLAN_TABLE_MEMBERSHIP_S]	= 16,
 	[STATIC_MAC_FWD_PORTS]		= 16,
 	[STATIC_MAC_FID]		= 22,
-	[DYNAMIC_MAC_ENTRIES_H]		= 3,
+	[DYNAMIC_MAC_ENTRIES_H]		= 8,
 	[DYNAMIC_MAC_ENTRIES]		= 24,
 	[DYNAMIC_MAC_FID]		= 16,
-	[DYNAMIC_MAC_TIMESTAMP]		= 24,
+	[DYNAMIC_MAC_TIMESTAMP]		= 22,
 	[DYNAMIC_MAC_SRC_PORT]		= 20,
 };
+7 -2
drivers/net/dsa/mv88e6xxx/chip.c
···
 	 * If this is the upstream port for this switch, enable
 	 * forwarding of unknown unicasts and multicasts.
 	 */
-	reg = MV88E6XXX_PORT_CTL0_IGMP_MLD_SNOOP |
-		MV88E6185_PORT_CTL0_USE_TAG | MV88E6185_PORT_CTL0_USE_IP |
+	reg = MV88E6185_PORT_CTL0_USE_TAG | MV88E6185_PORT_CTL0_USE_IP |
 		MV88E6XXX_PORT_CTL0_STATE_FORWARDING;
+	/* Forward any IPv4 IGMP or IPv6 MLD frames received
+	 * by a USER port to the CPU port to allow snooping.
+	 */
+	if (dsa_is_user_port(ds, port))
+		reg |= MV88E6XXX_PORT_CTL0_IGMP_MLD_SNOOP;
+
 	err = mv88e6xxx_port_write(chip, port, MV88E6XXX_PORT_CTL0, reg);
 	if (err)
 		return err;
+4 -1
drivers/net/dsa/realtek/realtek-mdio.c
···
 
 #include <linux/module.h>
 #include <linux/of_device.h>
+#include <linux/overflow.h>
 #include <linux/regmap.h>
 
 #include "realtek.h"
···
 	if (!var)
 		return -EINVAL;
 
-	priv = devm_kzalloc(&mdiodev->dev, sizeof(*priv), GFP_KERNEL);
+	priv = devm_kzalloc(&mdiodev->dev,
+			    size_add(sizeof(*priv), var->chip_data_sz),
+			    GFP_KERNEL);
 	if (!priv)
 		return -ENOMEM;
 
+14 -2
drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
···
 	return 0;
 }
 
+static struct sk_buff *
+bnx2x_build_skb(const struct bnx2x_fastpath *fp, void *data)
+{
+	struct sk_buff *skb;
+
+	if (fp->rx_frag_size)
+		skb = build_skb(data, fp->rx_frag_size);
+	else
+		skb = slab_build_skb(data);
+	return skb;
+}
+
 static void bnx2x_frag_free(const struct bnx2x_fastpath *fp, void *data)
 {
 	if (fp->rx_frag_size)
···
 	dma_unmap_single(&bp->pdev->dev, dma_unmap_addr(rx_buf, mapping),
 			 fp->rx_buf_size, DMA_FROM_DEVICE);
 	if (likely(new_data))
-		skb = build_skb(data, fp->rx_frag_size);
+		skb = bnx2x_build_skb(fp, data);
 
 	if (likely(skb)) {
 #ifdef BNX2X_STOP_ON_ERROR
···
 				 dma_unmap_addr(rx_buf, mapping),
 				 fp->rx_buf_size,
 				 DMA_FROM_DEVICE);
-		skb = build_skb(data, fp->rx_frag_size);
+		skb = bnx2x_build_skb(fp, data);
 		if (unlikely(!skb)) {
 			bnx2x_frag_free(fp, data);
 			bnx2x_fp_qstats(bp, fp)->
+4 -4
drivers/net/ethernet/broadcom/bnxt/bnxt.c
···
 	{ PCI_VDEVICE(BROADCOM, 0x1750), .driver_data = BCM57508 },
 	{ PCI_VDEVICE(BROADCOM, 0x1751), .driver_data = BCM57504 },
 	{ PCI_VDEVICE(BROADCOM, 0x1752), .driver_data = BCM57502 },
-	{ PCI_VDEVICE(BROADCOM, 0x1800), .driver_data = BCM57508_NPAR },
+	{ PCI_VDEVICE(BROADCOM, 0x1800), .driver_data = BCM57502_NPAR },
 	{ PCI_VDEVICE(BROADCOM, 0x1801), .driver_data = BCM57504_NPAR },
-	{ PCI_VDEVICE(BROADCOM, 0x1802), .driver_data = BCM57502_NPAR },
-	{ PCI_VDEVICE(BROADCOM, 0x1803), .driver_data = BCM57508_NPAR },
+	{ PCI_VDEVICE(BROADCOM, 0x1802), .driver_data = BCM57508_NPAR },
+	{ PCI_VDEVICE(BROADCOM, 0x1803), .driver_data = BCM57502_NPAR },
 	{ PCI_VDEVICE(BROADCOM, 0x1804), .driver_data = BCM57504_NPAR },
-	{ PCI_VDEVICE(BROADCOM, 0x1805), .driver_data = BCM57502_NPAR },
+	{ PCI_VDEVICE(BROADCOM, 0x1805), .driver_data = BCM57508_NPAR },
 	{ PCI_VDEVICE(BROADCOM, 0xd802), .driver_data = BCM58802 },
 	{ PCI_VDEVICE(BROADCOM, 0xd804), .driver_data = BCM58804 },
 #ifdef CONFIG_BNXT_SRIOV
+1
drivers/net/ethernet/broadcom/bnxt/bnxt.h
···
 #define BNXT_LINK_SPEED_40GB	PORT_PHY_QCFG_RESP_LINK_SPEED_40GB
 #define BNXT_LINK_SPEED_50GB	PORT_PHY_QCFG_RESP_LINK_SPEED_50GB
 #define BNXT_LINK_SPEED_100GB	PORT_PHY_QCFG_RESP_LINK_SPEED_100GB
+#define BNXT_LINK_SPEED_200GB	PORT_PHY_QCFG_RESP_LINK_SPEED_200GB
 	u16			support_speeds;
 	u16			support_pam4_speeds;
 	u16			auto_link_speeds;	/* fw adv setting */
+3
drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
···
 		return SPEED_50000;
 	case BNXT_LINK_SPEED_100GB:
 		return SPEED_100000;
+	case BNXT_LINK_SPEED_200GB:
+		return SPEED_200000;
 	default:
 		return SPEED_UNKNOWN;
 	}
···
 	bnxt_ulp_stop(bp);
 	rc = bnxt_close_nic(bp, true, false);
 	if (rc) {
+		etest->flags |= ETH_TEST_FL_FAILED;
 		bnxt_ulp_start(bp, rc);
 		return;
 	}
+6 -5
drivers/net/ethernet/intel/i40e/i40e_diag.c
···
 	return 0;
 }
 
-struct i40e_diag_reg_test_info i40e_reg_list[] = {
+const struct i40e_diag_reg_test_info i40e_reg_list[] = {
 	/* offset               mask         elements stride */
 	{I40E_QTX_CTL(0),       0x0000FFBF, 1,
 		I40E_QTX_CTL(1) - I40E_QTX_CTL(0)},
···
 {
 	int ret_code = 0;
 	u32 reg, mask;
+	u32 elements;
 	u32 i, j;
 
 	for (i = 0; i40e_reg_list[i].offset != 0 &&
 		    !ret_code; i++) {
 
+		elements = i40e_reg_list[i].elements;
 		/* set actual reg range for dynamically allocated resources */
 		if (i40e_reg_list[i].offset == I40E_QTX_CTL(0) &&
 		    hw->func_caps.num_tx_qp != 0)
-			i40e_reg_list[i].elements = hw->func_caps.num_tx_qp;
+			elements = hw->func_caps.num_tx_qp;
 		if ((i40e_reg_list[i].offset == I40E_PFINT_ITRN(0, 0) ||
 		     i40e_reg_list[i].offset == I40E_PFINT_ITRN(1, 0) ||
 		     i40e_reg_list[i].offset == I40E_PFINT_ITRN(2, 0) ||
 		     i40e_reg_list[i].offset == I40E_QINT_TQCTL(0) ||
 		     i40e_reg_list[i].offset == I40E_QINT_RQCTL(0)) &&
 		    hw->func_caps.num_msix_vectors != 0)
-			i40e_reg_list[i].elements =
-				hw->func_caps.num_msix_vectors - 1;
+			elements = hw->func_caps.num_msix_vectors - 1;
 
 		/* test register access */
 		mask = i40e_reg_list[i].mask;
-		for (j = 0; j < i40e_reg_list[i].elements && !ret_code; j++) {
+		for (j = 0; j < elements && !ret_code; j++) {
 			reg = i40e_reg_list[i].offset +
 			      (j * i40e_reg_list[i].stride);
 			ret_code = i40e_diag_reg_pattern_test(hw, reg, mask);
+1 -1
drivers/net/ethernet/intel/i40e/i40e_diag.h
···
 	u32 stride;	/* bytes between each element */
 };
 
-extern struct i40e_diag_reg_test_info i40e_reg_list[];
+extern const struct i40e_diag_reg_test_info i40e_reg_list[];
 
 int i40e_diag_reg_test(struct i40e_hw *hw);
 int i40e_diag_eeprom_test(struct i40e_hw *hw);
+5 -3
drivers/net/ethernet/intel/ice/ice_sched.c
···
 ice_sched_assoc_vsi_to_agg(struct ice_port_info *pi, u32 agg_id,
 			   u16 vsi_handle, unsigned long *tc_bitmap)
 {
-	struct ice_sched_agg_vsi_info *agg_vsi_info, *old_agg_vsi_info = NULL;
+	struct ice_sched_agg_vsi_info *agg_vsi_info, *iter, *old_agg_vsi_info = NULL;
 	struct ice_sched_agg_info *agg_info, *old_agg_info;
 	struct ice_hw *hw = pi->hw;
 	int status = 0;
···
 	if (old_agg_info && old_agg_info != agg_info) {
 		struct ice_sched_agg_vsi_info *vtmp;
 
-		list_for_each_entry_safe(old_agg_vsi_info, vtmp,
+		list_for_each_entry_safe(iter, vtmp,
 					 &old_agg_info->agg_vsi_list,
 					 list_entry)
-			if (old_agg_vsi_info->vsi_handle == vsi_handle)
+			if (iter->vsi_handle == vsi_handle) {
+				old_agg_vsi_info = iter;
 				break;
+			}
 	}
 
 	/* check if entry already exist */
+22 -4
drivers/net/ethernet/intel/ice/ice_switch.c
···
 int
 ice_cfg_rdma_fltr(struct ice_hw *hw, u16 vsi_handle, bool enable)
 {
-	struct ice_vsi_ctx *ctx;
+	struct ice_vsi_ctx *ctx, *cached_ctx;
+	int status;
 
-	ctx = ice_get_vsi_ctx(hw, vsi_handle);
+	cached_ctx = ice_get_vsi_ctx(hw, vsi_handle);
+	if (!cached_ctx)
+		return -ENOENT;
+
+	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
 	if (!ctx)
-		return -EIO;
+		return -ENOMEM;
+
+	ctx->info.q_opt_rss = cached_ctx->info.q_opt_rss;
+	ctx->info.q_opt_tc = cached_ctx->info.q_opt_tc;
+	ctx->info.q_opt_flags = cached_ctx->info.q_opt_flags;
+
+	ctx->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_Q_OPT_VALID);
 
 	if (enable)
 		ctx->info.q_opt_flags |= ICE_AQ_VSI_Q_OPT_PE_FLTR_EN;
 	else
 		ctx->info.q_opt_flags &= ~ICE_AQ_VSI_Q_OPT_PE_FLTR_EN;
 
-	return ice_update_vsi(hw, vsi_handle, ctx, NULL);
+	status = ice_update_vsi(hw, vsi_handle, ctx, NULL);
+	if (!status) {
+		cached_ctx->info.q_opt_flags = ctx->info.q_opt_flags;
+		cached_ctx->info.valid_sections |= ctx->info.valid_sections;
+	}
+
+	kfree(ctx);
+	return status;
 }
 
 /**
+1 -1
drivers/net/ethernet/intel/ice/ice_txrx.c
···
 * ice_get_rx_buf - Fetch Rx buffer and synchronize data for use
 * @rx_ring: Rx descriptor ring to transact packets on
 * @size: size of buffer to add to skb
+ * @ntc: index of next to clean element
 *
 * This function will pull an Rx buffer from the ring and synchronize it
 * for use by the CPU.
···
 /**
 * ice_construct_skb - Allocate skb and populate it
 * @rx_ring: Rx descriptor ring to transact packets on
- * @rx_buf: Rx buffer to pull data from
 * @xdp: xdp_buff pointing to the data
 *
 * This function allocates an skb. It then populates it with the page
+1
drivers/net/ethernet/intel/ice/ice_txrx_lib.c
···
 * ice_finalize_xdp_rx - Bump XDP Tx tail and/or flush redirect map
 * @xdp_ring: XDP ring
 * @xdp_res: Result of the receive batch
+ * @first_idx: index to write from caller
 *
 * This function bumps XDP Tx tail and/or flush redirect map, and
 * should be called when a batch of packets has been processed in the
+73
drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c
···
 }
 
 /**
+ * ice_vc_fdir_has_prof_conflict
+ * @vf: pointer to the VF structure
+ * @conf: FDIR configuration for each filter
+ *
+ * Check if @conf has conflicting profile with existing profiles
+ *
+ * Return: true on success, and false on error.
+ */
+static bool
+ice_vc_fdir_has_prof_conflict(struct ice_vf *vf,
+			      struct virtchnl_fdir_fltr_conf *conf)
+{
+	struct ice_fdir_fltr *desc;
+
+	list_for_each_entry(desc, &vf->fdir.fdir_rule_list, fltr_node) {
+		struct virtchnl_fdir_fltr_conf *existing_conf;
+		enum ice_fltr_ptype flow_type_a, flow_type_b;
+		struct ice_fdir_fltr *a, *b;
+
+		existing_conf = to_fltr_conf_from_desc(desc);
+		a = &existing_conf->input;
+		b = &conf->input;
+		flow_type_a = a->flow_type;
+		flow_type_b = b->flow_type;
+
+		/* No need to compare two rules with different tunnel types or
+		 * with the same protocol type.
+		 */
+		if (existing_conf->ttype != conf->ttype ||
+		    flow_type_a == flow_type_b)
+			continue;
+
+		switch (flow_type_a) {
+		case ICE_FLTR_PTYPE_NONF_IPV4_UDP:
+		case ICE_FLTR_PTYPE_NONF_IPV4_TCP:
+		case ICE_FLTR_PTYPE_NONF_IPV4_SCTP:
+			if (flow_type_b == ICE_FLTR_PTYPE_NONF_IPV4_OTHER)
+				return true;
+			break;
+		case ICE_FLTR_PTYPE_NONF_IPV4_OTHER:
+			if (flow_type_b == ICE_FLTR_PTYPE_NONF_IPV4_UDP ||
+			    flow_type_b == ICE_FLTR_PTYPE_NONF_IPV4_TCP ||
+			    flow_type_b == ICE_FLTR_PTYPE_NONF_IPV4_SCTP)
+				return true;
+			break;
+		case ICE_FLTR_PTYPE_NONF_IPV6_UDP:
+		case ICE_FLTR_PTYPE_NONF_IPV6_TCP:
+		case ICE_FLTR_PTYPE_NONF_IPV6_SCTP:
+			if (flow_type_b == ICE_FLTR_PTYPE_NONF_IPV6_OTHER)
+				return true;
+			break;
+		case ICE_FLTR_PTYPE_NONF_IPV6_OTHER:
+			if (flow_type_b == ICE_FLTR_PTYPE_NONF_IPV6_UDP ||
+			    flow_type_b == ICE_FLTR_PTYPE_NONF_IPV6_TCP ||
+			    flow_type_b == ICE_FLTR_PTYPE_NONF_IPV6_SCTP)
+				return true;
+			break;
+		default:
+			break;
+		}
+	}
+
+	return false;
+}
+
+/**
 * ice_vc_fdir_write_flow_prof
 * @vf: pointer to the VF structure
 * @flow: filter flow type
···
 	struct ice_flow_seg_info *seg;
 	enum ice_fltr_ptype flow;
 	int ret;
+
+	ret = ice_vc_fdir_has_prof_conflict(vf, conf);
+	if (ret) {
+		dev_dbg(dev, "Found flow profile conflict for VF %d\n",
+			vf->vf_id);
+		return ret;
+	}
 
 	flow = input->flow_type;
 	ret = ice_vc_fdir_alloc_prof(vf, flow);
+2
drivers/net/ethernet/marvell/mvneta.c
···
 
 	netdev_tx_reset_queue(nq);
 
+	txq->buf = NULL;
+	txq->tso_hdrs = NULL;
 	txq->descs = NULL;
 	txq->last_desc = 0;
 	txq->next_desc_to_proc = 0;
+18 -12
drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
···
 	MVPP2_DEF_FLOW(MVPP22_FLOW_TCP4, MVPP2_FL_IP4_TCP_FRAG_UNTAG,
 		       MVPP22_CLS_HEK_IP4_2T,
 		       MVPP2_PRS_RI_VLAN_NONE | MVPP2_PRS_RI_L3_IP4 |
-		       MVPP2_PRS_RI_L4_TCP,
+		       MVPP2_PRS_RI_IP_FRAG_TRUE | MVPP2_PRS_RI_L4_TCP,
 		       MVPP2_PRS_IP_MASK | MVPP2_PRS_RI_VLAN_MASK),
 
 	MVPP2_DEF_FLOW(MVPP22_FLOW_TCP4, MVPP2_FL_IP4_TCP_FRAG_UNTAG,
 		       MVPP22_CLS_HEK_IP4_2T,
 		       MVPP2_PRS_RI_VLAN_NONE | MVPP2_PRS_RI_L3_IP4_OPT |
-		       MVPP2_PRS_RI_L4_TCP,
+		       MVPP2_PRS_RI_IP_FRAG_TRUE | MVPP2_PRS_RI_L4_TCP,
 		       MVPP2_PRS_IP_MASK | MVPP2_PRS_RI_VLAN_MASK),
 
 	MVPP2_DEF_FLOW(MVPP22_FLOW_TCP4, MVPP2_FL_IP4_TCP_FRAG_UNTAG,
 		       MVPP22_CLS_HEK_IP4_2T,
 		       MVPP2_PRS_RI_VLAN_NONE | MVPP2_PRS_RI_L3_IP4_OTHER |
-		       MVPP2_PRS_RI_L4_TCP,
+		       MVPP2_PRS_RI_IP_FRAG_TRUE | MVPP2_PRS_RI_L4_TCP,
 		       MVPP2_PRS_IP_MASK | MVPP2_PRS_RI_VLAN_MASK),
 
 	/* TCP over IPv4 flows, fragmented, with vlan tag */
 	MVPP2_DEF_FLOW(MVPP22_FLOW_TCP4, MVPP2_FL_IP4_TCP_FRAG_TAG,
 		       MVPP22_CLS_HEK_IP4_2T | MVPP22_CLS_HEK_TAGGED,
-		       MVPP2_PRS_RI_L3_IP4 | MVPP2_PRS_RI_L4_TCP,
+		       MVPP2_PRS_RI_L3_IP4 | MVPP2_PRS_RI_IP_FRAG_TRUE |
+		       MVPP2_PRS_RI_L4_TCP,
 		       MVPP2_PRS_IP_MASK),
 
 	MVPP2_DEF_FLOW(MVPP22_FLOW_TCP4, MVPP2_FL_IP4_TCP_FRAG_TAG,
 		       MVPP22_CLS_HEK_IP4_2T | MVPP22_CLS_HEK_TAGGED,
-		       MVPP2_PRS_RI_L3_IP4_OPT | MVPP2_PRS_RI_L4_TCP,
+		       MVPP2_PRS_RI_L3_IP4_OPT | MVPP2_PRS_RI_IP_FRAG_TRUE |
+		       MVPP2_PRS_RI_L4_TCP,
 		       MVPP2_PRS_IP_MASK),
 
 	MVPP2_DEF_FLOW(MVPP22_FLOW_TCP4, MVPP2_FL_IP4_TCP_FRAG_TAG,
 		       MVPP22_CLS_HEK_IP4_2T | MVPP22_CLS_HEK_TAGGED,
-		       MVPP2_PRS_RI_L3_IP4_OTHER | MVPP2_PRS_RI_L4_TCP,
+		       MVPP2_PRS_RI_L3_IP4_OTHER | MVPP2_PRS_RI_IP_FRAG_TRUE |
+		       MVPP2_PRS_RI_L4_TCP,
 		       MVPP2_PRS_IP_MASK),
 
 	/* UDP over IPv4 flows, Not fragmented, no vlan tag */
···
 	MVPP2_DEF_FLOW(MVPP22_FLOW_UDP4, MVPP2_FL_IP4_UDP_FRAG_UNTAG,
 		       MVPP22_CLS_HEK_IP4_2T,
 		       MVPP2_PRS_RI_VLAN_NONE | MVPP2_PRS_RI_L3_IP4 |
-		       MVPP2_PRS_RI_L4_UDP,
+		       MVPP2_PRS_RI_IP_FRAG_TRUE | MVPP2_PRS_RI_L4_UDP,
 		       MVPP2_PRS_IP_MASK | MVPP2_PRS_RI_VLAN_MASK),
 
 	MVPP2_DEF_FLOW(MVPP22_FLOW_UDP4, MVPP2_FL_IP4_UDP_FRAG_UNTAG,
 		       MVPP22_CLS_HEK_IP4_2T,
 		       MVPP2_PRS_RI_VLAN_NONE | MVPP2_PRS_RI_L3_IP4_OPT |
-		       MVPP2_PRS_RI_L4_UDP,
+		       MVPP2_PRS_RI_IP_FRAG_TRUE | MVPP2_PRS_RI_L4_UDP,
 		       MVPP2_PRS_IP_MASK | MVPP2_PRS_RI_VLAN_MASK),
 
 	MVPP2_DEF_FLOW(MVPP22_FLOW_UDP4, MVPP2_FL_IP4_UDP_FRAG_UNTAG,
 		       MVPP22_CLS_HEK_IP4_2T,
 		       MVPP2_PRS_RI_VLAN_NONE | MVPP2_PRS_RI_L3_IP4_OTHER |
-		       MVPP2_PRS_RI_L4_UDP,
+		       MVPP2_PRS_RI_IP_FRAG_TRUE | MVPP2_PRS_RI_L4_UDP,
 		       MVPP2_PRS_IP_MASK | MVPP2_PRS_RI_VLAN_MASK),
 
 	/* UDP over IPv4 flows, fragmented, with vlan tag */
 	MVPP2_DEF_FLOW(MVPP22_FLOW_UDP4, MVPP2_FL_IP4_UDP_FRAG_TAG,
 		       MVPP22_CLS_HEK_IP4_2T | MVPP22_CLS_HEK_TAGGED,
-		       MVPP2_PRS_RI_L3_IP4 | MVPP2_PRS_RI_L4_UDP,
+		       MVPP2_PRS_RI_L3_IP4 | MVPP2_PRS_RI_IP_FRAG_TRUE |
+		       MVPP2_PRS_RI_L4_UDP,
 		       MVPP2_PRS_IP_MASK),
 
 	MVPP2_DEF_FLOW(MVPP22_FLOW_UDP4, MVPP2_FL_IP4_UDP_FRAG_TAG,
 		       MVPP22_CLS_HEK_IP4_2T | MVPP22_CLS_HEK_TAGGED,
-		       MVPP2_PRS_RI_L3_IP4_OPT | MVPP2_PRS_RI_L4_UDP,
+		       MVPP2_PRS_RI_L3_IP4_OPT | MVPP2_PRS_RI_IP_FRAG_TRUE |
+		       MVPP2_PRS_RI_L4_UDP,
 		       MVPP2_PRS_IP_MASK),
 
 	MVPP2_DEF_FLOW(MVPP22_FLOW_UDP4, MVPP2_FL_IP4_UDP_FRAG_TAG,
 		       MVPP22_CLS_HEK_IP4_2T | MVPP22_CLS_HEK_TAGGED,
-		       MVPP2_PRS_RI_L3_IP4_OTHER | MVPP2_PRS_RI_L4_UDP,
+		       MVPP2_PRS_RI_L3_IP4_OTHER | MVPP2_PRS_RI_IP_FRAG_TRUE |
+		       MVPP2_PRS_RI_L4_UDP,
 		       MVPP2_PRS_IP_MASK),
 
 	/* TCP over IPv6 flows, not fragmented, no vlan tag */
+36 -50
drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c
···
 	if (!priv->prs_double_vlans)
 		return -ENOMEM;
 
-	/* Double VLAN: 0x8100, 0x88A8 */
-	err = mvpp2_prs_double_vlan_add(priv, ETH_P_8021Q, ETH_P_8021AD,
+	/* Double VLAN: 0x88A8, 0x8100 */
+	err = mvpp2_prs_double_vlan_add(priv, ETH_P_8021AD, ETH_P_8021Q,
 					MVPP2_PRS_PORT_MASK);
 	if (err)
 		return err;
···
 static int mvpp2_prs_pppoe_init(struct mvpp2 *priv)
 {
 	struct mvpp2_prs_entry pe;
-	int tid;
+	int tid, ihl;
 
-	/* IPv4 over PPPoE with options */
-	tid = mvpp2_prs_tcam_first_free(priv, MVPP2_PE_FIRST_FREE_TID,
-					MVPP2_PE_LAST_FREE_TID);
-	if (tid < 0)
-		return tid;
+	/* IPv4 over PPPoE with header length >= 5 */
+	for (ihl = MVPP2_PRS_IPV4_IHL_MIN; ihl <= MVPP2_PRS_IPV4_IHL_MAX; ihl++) {
+		tid = mvpp2_prs_tcam_first_free(priv, MVPP2_PE_FIRST_FREE_TID,
+						MVPP2_PE_LAST_FREE_TID);
+		if (tid < 0)
+			return tid;
 
-	memset(&pe, 0, sizeof(pe));
-	mvpp2_prs_tcam_lu_set(&pe, MVPP2_PRS_LU_PPPOE);
-	pe.index = tid;
+		memset(&pe, 0, sizeof(pe));
+		mvpp2_prs_tcam_lu_set(&pe, MVPP2_PRS_LU_PPPOE);
+		pe.index = tid;
 
-	mvpp2_prs_match_etype(&pe, 0, PPP_IP);
+		mvpp2_prs_match_etype(&pe, 0, PPP_IP);
+		mvpp2_prs_tcam_data_byte_set(&pe, MVPP2_ETH_TYPE_LEN,
+					     MVPP2_PRS_IPV4_HEAD | ihl,
+					     MVPP2_PRS_IPV4_HEAD_MASK |
+					     MVPP2_PRS_IPV4_IHL_MASK);
 
-	mvpp2_prs_sram_next_lu_set(&pe, MVPP2_PRS_LU_IP4);
-	mvpp2_prs_sram_ri_update(&pe, MVPP2_PRS_RI_L3_IP4_OPT,
-				 MVPP2_PRS_RI_L3_PROTO_MASK);
-	/* goto ipv4 dest-address (skip eth_type + IP-header-size - 4) */
-	mvpp2_prs_sram_shift_set(&pe, MVPP2_ETH_TYPE_LEN +
-				 sizeof(struct iphdr) - 4,
-				 MVPP2_PRS_SRAM_OP_SEL_SHIFT_ADD);
-	/* Set L3 offset */
-	mvpp2_prs_sram_offset_set(&pe, MVPP2_PRS_SRAM_UDF_TYPE_L3,
-				  MVPP2_ETH_TYPE_LEN,
-				  MVPP2_PRS_SRAM_OP_SEL_UDF_ADD);
+		mvpp2_prs_sram_next_lu_set(&pe, MVPP2_PRS_LU_IP4);
+		mvpp2_prs_sram_ri_update(&pe, MVPP2_PRS_RI_L3_IP4,
+					 MVPP2_PRS_RI_L3_PROTO_MASK);
+		/* goto ipv4 dst-address (skip eth_type + IP-header-size - 4) */
+		mvpp2_prs_sram_shift_set(&pe, MVPP2_ETH_TYPE_LEN +
+					 sizeof(struct iphdr) - 4,
+					 MVPP2_PRS_SRAM_OP_SEL_SHIFT_ADD);
+		/* Set L3 offset */
+		mvpp2_prs_sram_offset_set(&pe, MVPP2_PRS_SRAM_UDF_TYPE_L3,
+					  MVPP2_ETH_TYPE_LEN,
+					  MVPP2_PRS_SRAM_OP_SEL_UDF_ADD);
+		/* Set L4 offset */
+		mvpp2_prs_sram_offset_set(&pe, MVPP2_PRS_SRAM_UDF_TYPE_L4,
+					  MVPP2_ETH_TYPE_LEN + (ihl * 4),
+					  MVPP2_PRS_SRAM_OP_SEL_UDF_ADD);
 
-	/* Update shadow table and hw entry */
-	mvpp2_prs_shadow_set(priv, pe.index, MVPP2_PRS_LU_PPPOE);
-	mvpp2_prs_hw_write(priv, &pe);
-
-	/* IPv4 over PPPoE without options */
-	tid = mvpp2_prs_tcam_first_free(priv, MVPP2_PE_FIRST_FREE_TID,
-					MVPP2_PE_LAST_FREE_TID);
-	if (tid < 0)
-		return tid;
-
-	pe.index = tid;
-
-	mvpp2_prs_tcam_data_byte_set(&pe, MVPP2_ETH_TYPE_LEN,
-				     MVPP2_PRS_IPV4_HEAD |
-				     MVPP2_PRS_IPV4_IHL_MIN,
-				     MVPP2_PRS_IPV4_HEAD_MASK |
-				     MVPP2_PRS_IPV4_IHL_MASK);
-
-	/* Clear ri before updating */
-	pe.sram[MVPP2_PRS_SRAM_RI_WORD] = 0x0;
-	pe.sram[MVPP2_PRS_SRAM_RI_CTRL_WORD] = 0x0;
-	mvpp2_prs_sram_ri_update(&pe, MVPP2_PRS_RI_L3_IP4,
-				 MVPP2_PRS_RI_L3_PROTO_MASK);
-
-	/* Update shadow table and hw entry */
-	mvpp2_prs_shadow_set(priv, pe.index, MVPP2_PRS_LU_PPPOE);
-	mvpp2_prs_hw_write(priv, &pe);
+		/* Update shadow table and hw entry */
+		mvpp2_prs_shadow_set(priv, pe.index, MVPP2_PRS_LU_PPPOE);
+		mvpp2_prs_hw_write(priv, &pe);
+	}
 
 	/* IPv6 over PPPoE */
 	tid = mvpp2_prs_tcam_first_free(priv, MVPP2_PE_FIRST_FREE_TID,
+3 -5
drivers/net/ethernet/mediatek/mtk_eth_soc.c
···
 		break;
 	}
 
-	mtk_set_queue_speed(mac->hw, mac->id, speed);
-
 	/* Configure duplex */
 	if (duplex == DUPLEX_FULL)
 		mcr |= MAC_MCR_FORCE_DPX;
···
 		skb_checksum_none_assert(skb);
 		skb->protocol = eth_type_trans(skb, netdev);
 
-		if (reason == MTK_PPE_CPU_REASON_HIT_UNBIND_RATE_REACHED)
-			mtk_ppe_check_skb(eth->ppe[0], skb, hash);
-
 		if (netdev->features & NETIF_F_HW_VLAN_CTAG_RX) {
 			if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2)) {
 				if (trxd.rxd3 & RX_DMA_VTAG_V2) {
···
 		} else if (has_hwaccel_tag) {
 			__vlan_hwaccel_put_tag(skb, htons(vlan_proto), vlan_tci);
 		}
+
+		if (reason == MTK_PPE_CPU_REASON_HIT_UNBIND_RATE_REACHED)
+			mtk_ppe_check_skb(eth->ppe[0], skb, hash);
 
 		skb_record_rx_queue(skb, 0);
 		napi_gro_receive(napi, skb);
+5 -1
drivers/net/ethernet/mediatek/mtk_ppe.c
···
 #include <linux/platform_device.h>
 #include <linux/if_ether.h>
 #include <linux/if_vlan.h>
+#include <net/dst_metadata.h>
 #include <net/dsa.h>
 #include "mtk_eth_soc.h"
 #include "mtk_ppe.h"
···
 		hwe->ib1 &= ~MTK_FOE_IB1_STATE;
 		hwe->ib1 |= FIELD_PREP(MTK_FOE_IB1_STATE, MTK_FOE_STATE_INVALID);
 		dma_wmb();
+		mtk_ppe_cache_clear(ppe);
 	}
 	entry->hash = 0xffff;
 
···
 		    skb->dev->dsa_ptr->tag_ops->proto != DSA_TAG_PROTO_MTK)
 			goto out;
 
-		tag += 4;
+		if (!skb_metadata_dst(skb))
+			tag += 4;
+
 		if (get_unaligned_be16(tag) != ETH_P_8021Q)
 			break;
+2 -1
drivers/net/ethernet/mediatek/mtk_ppe_offload.c
···
 	if (IS_ERR(block_cb))
 		return PTR_ERR(block_cb);
 
+	flow_block_cb_incref(block_cb);
 	flow_block_cb_add(block_cb, f);
 	list_add_tail(&block_cb->driver_list, &block_cb_list);
 	return 0;
···
 	if (!block_cb)
 		return -ENOENT;
 
-	if (flow_block_cb_decref(block_cb)) {
+	if (!flow_block_cb_decref(block_cb)) {
 		flow_block_cb_remove(block_cb, f);
 		list_del(&block_cb->driver_list);
 	}
+3
drivers/net/ethernet/realtek/r8169_phy_config.c
···
 	/* disable phy pfm mode */
 	phy_modify_paged(phydev, 0x0a44, 0x11, BIT(7), 0);
 
+	/* disable 10m pll off */
+	phy_modify_paged(phydev, 0x0a43, 0x10, BIT(0), 0);
+
 	rtl8168g_disable_aldps(phydev);
 	rtl8168g_config_eee_phy(phydev);
 }
+27 -13
drivers/net/ethernet/sfc/ef10.c
···
 static int efx_ef10_init_nic(struct efx_nic *efx)
 {
 	struct efx_ef10_nic_data *nic_data = efx->nic_data;
-	netdev_features_t hw_enc_features = 0;
+	struct net_device *net_dev = efx->net_dev;
+	netdev_features_t tun_feats, tso_feats;
 	int rc;
 
 	if (nic_data->must_check_datapath_caps) {
···
 	nic_data->must_restore_piobufs = false;
 	}
 
-	/* add encapsulated checksum offload features */
+	/* encap features might change during reset if fw variant changed */
 	if (efx_has_cap(efx, VXLAN_NVGRE) && !efx_ef10_is_vf(efx))
-		hw_enc_features |= NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM;
-	/* add encapsulated TSO features */
+		net_dev->hw_enc_features |= NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM;
+	else
+		net_dev->hw_enc_features &= ~(NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM);
+
+	tun_feats = NETIF_F_GSO_UDP_TUNNEL | NETIF_F_GSO_GRE |
+		    NETIF_F_GSO_UDP_TUNNEL_CSUM | NETIF_F_GSO_GRE_CSUM;
+	tso_feats = NETIF_F_TSO | NETIF_F_TSO6;
+
 	if (efx_has_cap(efx, TX_TSO_V2_ENCAP)) {
-		netdev_features_t encap_tso_features;
-
-		encap_tso_features = NETIF_F_GSO_UDP_TUNNEL | NETIF_F_GSO_GRE |
-			NETIF_F_GSO_UDP_TUNNEL_CSUM | NETIF_F_GSO_GRE_CSUM;
-
-		hw_enc_features |= encap_tso_features | NETIF_F_TSO;
-		efx->net_dev->features |= encap_tso_features;
+		/* If this is first nic_init, or if it is a reset and a new fw
+		 * variant has added new features, enable them by default.
+		 * If the features are not new, maintain their current value.
+		 */
+		if (!(net_dev->hw_features & tun_feats))
+			net_dev->features |= tun_feats;
+		net_dev->hw_enc_features |= tun_feats | tso_feats;
+		net_dev->hw_features |= tun_feats;
+	} else {
+		net_dev->hw_enc_features &= ~(tun_feats | tso_feats);
+		net_dev->hw_features &= ~tun_feats;
+		net_dev->features &= ~tun_feats;
 	}
-	efx->net_dev->hw_enc_features = hw_enc_features;
 
 	/* don't fail init if RSS setup doesn't work */
 	rc = efx->type->rx_push_rss_config(efx, false,
···
 	 NETIF_F_HW_VLAN_CTAG_FILTER |	\
 	 NETIF_F_IPV6_CSUM |		\
 	 NETIF_F_RXHASH |		\
-	 NETIF_F_NTUPLE)
+	 NETIF_F_NTUPLE |		\
+	 NETIF_F_SG |			\
+	 NETIF_F_RXCSUM |		\
+	 NETIF_F_RXALL)
 
 const struct efx_nic_type efx_hunt_a0_vf_nic_type = {
 	.is_vf = true,
+7 -10
drivers/net/ethernet/sfc/efx.c
···
 	}
 
 	/* Determine netdevice features */
-	net_dev->features |= (efx->type->offload_features | NETIF_F_SG |
-			      NETIF_F_TSO | NETIF_F_RXCSUM | NETIF_F_RXALL);
-	if (efx->type->offload_features & (NETIF_F_IPV6_CSUM | NETIF_F_HW_CSUM)) {
-		net_dev->features |= NETIF_F_TSO6;
-		if (efx_has_cap(efx, TX_TSO_V2_ENCAP))
-			net_dev->hw_enc_features |= NETIF_F_TSO6;
-	}
-	/* Check whether device supports TSO */
-	if (!efx->type->tso_versions || !efx->type->tso_versions(efx))
-		net_dev->features &= ~NETIF_F_ALL_TSO;
+	net_dev->features |= efx->type->offload_features;
+
+	/* Add TSO features */
+	if (efx->type->tso_versions && efx->type->tso_versions(efx))
+		net_dev->features |= NETIF_F_TSO | NETIF_F_TSO6;
+
 	/* Mask for features that also apply to VLAN devices */
 	net_dev->vlan_features |= (NETIF_F_HW_CSUM | NETIF_F_SG |
				   NETIF_F_HIGHDMA | NETIF_F_ALL_TSO |
				   NETIF_F_RXCSUM);
 
+	/* Determine user configurable features */
 	net_dev->hw_features |= net_dev->features & ~efx->fixed_features;
 
 	/* Disable receiving frames with bad FCS, by default. */
+5 -2
drivers/net/ethernet/smsc/smsc911x.c
···
 		return ret;
 	}
 
-	/* Indicate that the MAC is responsible for managing PHY PM */
-	phydev->mac_managed_pm = true;
 	phy_attached_info(phydev);
 
 	phy_set_max_speed(phydev, SPEED_100);
···
 			       struct net_device *dev)
 {
 	struct smsc911x_data *pdata = netdev_priv(dev);
+	struct phy_device *phydev;
 	int err = -ENXIO;
 
 	pdata->mii_bus = mdiobus_alloc();
···
 		SMSC_WARN(pdata, probe, "Error registering mii bus");
 		goto err_out_free_bus_2;
 	}
+
+	phydev = phy_find_first(pdata->mii_bus);
+	if (phydev)
+		phydev->mac_managed_pm = true;
 
 	return 0;
-1
drivers/net/ethernet/stmicro/stmmac/common.h
···
 	unsigned int xlgmac;
 	unsigned int num_vlan;
 	u32 vlan_filter[32];
-	unsigned int promisc;
 	bool vlan_fail_q_en;
 	u8 vlan_fail_q;
 };
+3 -58
drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
···
 	if (vid > 4095)
 		return -EINVAL;
 
-	if (hw->promisc) {
-		netdev_err(dev,
-			   "Adding VLAN in promisc mode not supported\n");
-		return -EPERM;
-	}
-
 	/* Single Rx VLAN Filter */
 	if (hw->num_vlan == 1) {
 		/* For single VLAN filter, VID 0 means VLAN promiscuous */
···
 {
 	int i, ret = 0;
 
-	if (hw->promisc) {
-		netdev_err(dev,
-			   "Deleting VLAN in promisc mode not supported\n");
-		return -EPERM;
-	}
-
 	/* Single Rx VLAN Filter */
 	if (hw->num_vlan == 1) {
 		if ((hw->vlan_filter[0] & GMAC_VLAN_TAG_VID) == vid) {
···
 	}
 
 	return ret;
-}
-
-static void dwmac4_vlan_promisc_enable(struct net_device *dev,
-				       struct mac_device_info *hw)
-{
-	void __iomem *ioaddr = hw->pcsr;
-	u32 value;
-	u32 hash;
-	u32 val;
-	int i;
-
-	/* Single Rx VLAN Filter */
-	if (hw->num_vlan == 1) {
-		dwmac4_write_single_vlan(dev, 0);
-		return;
-	}
-
-	/* Extended Rx VLAN Filter Enable */
-	for (i = 0; i < hw->num_vlan; i++) {
-		if (hw->vlan_filter[i] & GMAC_VLAN_TAG_DATA_VEN) {
-			val = hw->vlan_filter[i] & ~GMAC_VLAN_TAG_DATA_VEN;
-			dwmac4_write_vlan_filter(dev, hw, i, val);
-		}
-	}
-
-	hash = readl(ioaddr + GMAC_VLAN_HASH_TABLE);
-	if (hash & GMAC_VLAN_VLHT) {
-		value = readl(ioaddr + GMAC_VLAN_TAG);
-		if (value & GMAC_VLAN_VTHM) {
-			value &= ~GMAC_VLAN_VTHM;
-			writel(value, ioaddr + GMAC_VLAN_TAG);
-		}
-	}
 }
 
 static void dwmac4_restore_hw_vlan_rx_fltr(struct net_device *dev,
···
 	}
 
 	/* VLAN filtering */
-	if (dev->features & NETIF_F_HW_VLAN_CTAG_FILTER)
+	if (dev->flags & IFF_PROMISC && !hw->vlan_fail_q_en)
+		value &= ~GMAC_PACKET_FILTER_VTFE;
+	else if (dev->features & NETIF_F_HW_VLAN_CTAG_FILTER)
 		value |= GMAC_PACKET_FILTER_VTFE;
 
 	writel(value, ioaddr + GMAC_PACKET_FILTER);
-
-	if (dev->flags & IFF_PROMISC && !hw->vlan_fail_q_en) {
-		if (!hw->promisc) {
-			hw->promisc = 1;
-			dwmac4_vlan_promisc_enable(dev, hw);
-		}
-	} else {
-		if (hw->promisc) {
-			hw->promisc = 0;
-			dwmac4_restore_hw_vlan_rx_fltr(dev, hw);
-		}
-	}
 }
 
 static void dwmac4_flow_ctrl(struct mac_device_info *hw, unsigned int duplex,
+1 -1
drivers/net/ethernet/wangxun/libwx/wx_type.h
···
 #define WX_PX_INTA		0x110
 #define WX_PX_GPIE		0x118
 #define WX_PX_GPIE_MODEL	BIT(0)
-#define WX_PX_IC		0x120
+#define WX_PX_IC(_i)		(0x120 + (_i) * 4)
 #define WX_PX_IMS(_i)		(0x140 + (_i) * 4)
 #define WX_PX_IMC(_i)		(0x150 + (_i) * 4)
 #define WX_PX_ISB_ADDR_L	0x160
+1 -1
drivers/net/ethernet/wangxun/ngbe/ngbe_main.c
···
 	netif_tx_start_all_queues(wx->netdev);
 
 	/* clear any pending interrupts, may auto mask */
-	rd32(wx, WX_PX_IC);
+	rd32(wx, WX_PX_IC(0));
 	rd32(wx, WX_PX_MISC_IC);
 	ngbe_irq_enable(wx, true);
 	if (wx->gpio_ctrl)
+2 -1
drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
···
 	wx_napi_enable_all(wx);
 
 	/* clear any pending interrupts, may auto mask */
-	rd32(wx, WX_PX_IC);
+	rd32(wx, WX_PX_IC(0));
+	rd32(wx, WX_PX_IC(1));
 	rd32(wx, WX_PX_MISC_IC);
 	txgbe_irq_enable(wx, true);
+1 -2
drivers/net/ieee802154/ca8210.c
···
 	struct ca8210_priv *priv
 )
 {
-	int status;
 	struct ieee802154_hdr header = { };
 	struct secspec secspec;
-	unsigned int mac_len;
+	int mac_len, status;
 
 	dev_dbg(&priv->spi->dev, "%s called\n", __func__);
+1 -1
drivers/net/ipa/gsi_trans.c
···
 	 * gsi_trans_pool_exit_dma() can assume the total allocated
 	 * size is exactly (count * size).
 	 */
-	total_size = get_order(total_size) << PAGE_SHIFT;
+	total_size = PAGE_SIZE << get_order(total_size);
 
 	virt = dma_alloc_coherent(dev, total_size, &addr, GFP_KERNEL);
 	if (!virt)
+2 -6
drivers/net/net_failover.c
···
 		txq = ops->ndo_select_queue(primary_dev, skb, sb_dev);
 	else
 		txq = netdev_pick_tx(primary_dev, skb, NULL);
-
-		qdisc_skb_cb(skb)->slave_dev_queue_mapping = skb->queue_mapping;
-
-		return txq;
+	} else {
+		txq = skb_rx_queue_recorded(skb) ? skb_get_rx_queue(skb) : 0;
 	}
-
-	txq = skb_rx_queue_recorded(skb) ? skb_get_rx_queue(skb) : 0;
 
 	/* Save the original txq to restore before passing to the driver */
 	qdisc_skb_cb(skb)->slave_dev_queue_mapping = skb->queue_mapping;
+2 -4
drivers/net/phy/dp83869.c
···
 						       &dp83869_internal_delay[0],
 						       delay_size, true);
 	if (dp83869->rx_int_delay < 0)
-		dp83869->rx_int_delay =
-			dp83869_internal_delay[DP83869_CLK_DELAY_DEF];
+		dp83869->rx_int_delay = DP83869_CLK_DELAY_DEF;
 
 	dp83869->tx_int_delay = phy_get_internal_delay(phydev, dev,
 						       &dp83869_internal_delay[0],
 						       delay_size, false);
 	if (dp83869->tx_int_delay < 0)
-		dp83869->tx_int_delay =
-			dp83869_internal_delay[DP83869_CLK_DELAY_DEF];
+		dp83869->tx_int_delay = DP83869_CLK_DELAY_DEF;
 
 	return ret;
 }
+1
drivers/net/phy/micrel.c
···
 	.resume		= kszphy_resume,
 	.cable_test_start	= ksz9x31_cable_test_start,
 	.cable_test_get_status	= ksz9x31_cable_test_get_status,
+	.get_features	= ksz9477_get_features,
 }, {
 	.phy_id		= PHY_ID_KSZ8873MLL,
 	.phy_id_mask	= MICREL_PHY_ID_MASK,
+1 -1
drivers/net/phy/phy_device.c
···
  * and "phy-device" are not supported in ACPI. DT supports all the three
  * named references to the phy node.
  */
-struct fwnode_handle *fwnode_get_phy_node(struct fwnode_handle *fwnode)
+struct fwnode_handle *fwnode_get_phy_node(const struct fwnode_handle *fwnode)
 {
 	struct fwnode_handle *phy_node;
+3 -3
drivers/net/phy/sfp-bus.c
···
 	/* private: */
 	struct kref kref;
 	struct list_head node;
-	struct fwnode_handle *fwnode;
+	const struct fwnode_handle *fwnode;
 
 	const struct sfp_socket_ops *socket_ops;
 	struct device *sfp_dev;
···
 	return bus->registered ? bus->upstream_ops : NULL;
 }
 
-static struct sfp_bus *sfp_bus_get(struct fwnode_handle *fwnode)
+static struct sfp_bus *sfp_bus_get(const struct fwnode_handle *fwnode)
 {
 	struct sfp_bus *sfp, *new, *found = NULL;
 
···
  * - %-ENOMEM if we failed to allocate the bus.
  * - an error from the upstream's connect_phy() method.
  */
-struct sfp_bus *sfp_bus_find_fwnode(struct fwnode_handle *fwnode)
+struct sfp_bus *sfp_bus_find_fwnode(const struct fwnode_handle *fwnode)
 {
 	struct fwnode_reference_args ref;
 	struct sfp_bus *bus;
+3 -1
drivers/net/vmxnet3/vmxnet3_drv.c
···
 	if (unlikely(rcd->ts))
 		__vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), rcd->tci);
 
-	if (adapter->netdev->features & NETIF_F_LRO)
+	/* Use GRO callback if UPT is enabled */
+	if ((adapter->netdev->features & NETIF_F_LRO) &&
+	    !rq->shared->updateRxProd)
 		netif_receive_skb(skb);
 	else
 		napi_gro_receive(&rq->napi, skb);
+7
drivers/net/wwan/iosm/iosm_ipc_imem.c
···
 	while (ctrl_chl_idx < IPC_MEM_MAX_CHANNELS) {
 		if (!ipc_chnl_cfg_get(&chnl_cfg_port, ctrl_chl_idx)) {
 			ipc_imem->ipc_port[ctrl_chl_idx] = NULL;
+
+			if (ipc_imem->pcie->pci->device == INTEL_CP_DEVICE_7560_ID &&
+			    chnl_cfg_port.wwan_port_type == WWAN_PORT_XMMRPC) {
+				ctrl_chl_idx++;
+				continue;
+			}
+
 			if (ipc_imem->pcie->pci->device == INTEL_CP_DEVICE_7360_ID &&
 			    chnl_cfg_port.wwan_port_type == WWAN_PORT_MBIM) {
 				ctrl_chl_idx++;
+1 -1
drivers/net/xen-netback/common.h
···
 	struct pending_tx_info pending_tx_info[MAX_PENDING_REQS];
 	grant_handle_t grant_tx_handle[MAX_PENDING_REQS];
 
-	struct gnttab_copy tx_copy_ops[MAX_PENDING_REQS];
+	struct gnttab_copy tx_copy_ops[2 * MAX_PENDING_REQS];
 	struct gnttab_map_grant_ref tx_map_ops[MAX_PENDING_REQS];
 	struct gnttab_unmap_grant_ref tx_unmap_ops[MAX_PENDING_REQS];
 	/* passed to gnttab_[un]map_refs with pages under (un)mapping */
+25 -10
drivers/net/xen-netback/netback.c
···
 struct xenvif_tx_cb {
 	u16 copy_pending_idx[XEN_NETBK_LEGACY_SLOTS_MAX + 1];
 	u8 copy_count;
+	u32 split_mask;
 };
 
 #define XENVIF_TX_CB(skb) ((struct xenvif_tx_cb *)(skb)->cb)
···
 	struct sk_buff *skb =
 		alloc_skb(size + NET_SKB_PAD + NET_IP_ALIGN,
 			  GFP_ATOMIC | __GFP_NOWARN);
+
+	BUILD_BUG_ON(sizeof(*XENVIF_TX_CB(skb)) > sizeof(skb->cb));
 	if (unlikely(skb == NULL))
 		return NULL;
···
 	nr_slots = shinfo->nr_frags + 1;
 
 	copy_count(skb) = 0;
+	XENVIF_TX_CB(skb)->split_mask = 0;
 
 	/* Create copy ops for exactly data_len bytes into the skb head. */
 	__skb_put(skb, data_len);
 	while (data_len > 0) {
 		int amount = data_len > txp->size ? txp->size : data_len;
+		bool split = false;
 
 		cop->source.u.ref = txp->gref;
 		cop->source.domid = queue->vif->domid;
···
 		cop->dest.u.gmfn = virt_to_gfn(skb->data + skb_headlen(skb)
 				       - data_len);
 
+		/* Don't cross local page boundary! */
+		if (cop->dest.offset + amount > XEN_PAGE_SIZE) {
+			amount = XEN_PAGE_SIZE - cop->dest.offset;
+			XENVIF_TX_CB(skb)->split_mask |= 1U << copy_count(skb);
+			split = true;
+		}
+
 		cop->len = amount;
 		cop->flags = GNTCOPY_source_gref;
···
 		pending_idx = queue->pending_ring[index];
 		callback_param(queue, pending_idx).ctx = NULL;
 		copy_pending_idx(skb, copy_count(skb)) = pending_idx;
-		copy_count(skb)++;
+		if (!split)
+			copy_count(skb)++;
 
 		cop++;
 		data_len -= amount;
···
 			nr_slots--;
 		} else {
 			/* The copy op partially covered the tx_request.
-			 * The remainder will be mapped.
+			 * The remainder will be mapped or copied in the next
+			 * iteration.
 			 */
 			txp->offset += amount;
 			txp->size -= amount;
···
 		pending_idx = copy_pending_idx(skb, i);
 
 		newerr = (*gopp_copy)->status;
+
+		/* Split copies need to be handled together. */
+		if (XENVIF_TX_CB(skb)->split_mask & (1U << i)) {
+			(*gopp_copy)++;
+			if (!newerr)
+				newerr = (*gopp_copy)->status;
+		}
 		if (likely(!newerr)) {
 			/* The first frag might still have this slot mapped */
 			if (i < copy_count(skb) - 1 || !sharedslot)
···
 		/* No crossing a page as the payload mustn't fragment. */
 		if (unlikely((txreq.offset + txreq.size) > XEN_PAGE_SIZE)) {
-			netdev_err(queue->vif->dev,
-				   "txreq.offset: %u, size: %u, end: %lu\n",
-				   txreq.offset, txreq.size,
-				   (unsigned long)(txreq.offset&~XEN_PAGE_MASK) + txreq.size);
+			netdev_err(queue->vif->dev, "Cross page boundary, txreq.offset: %u, size: %u\n",
+				   txreq.offset, txreq.size);
 			xenvif_fatal_tx_err(queue->vif);
 			break;
 		}
···
 		__skb_queue_tail(&queue->tx_queue, skb);
 
 		queue->tx.req_cons = idx;
-
-		if ((*map_ops >= ARRAY_SIZE(queue->tx_map_ops)) ||
-		    (*copy_ops >= ARRAY_SIZE(queue->tx_copy_ops)))
-			break;
 	}
 
 	return;
+1 -1
drivers/ptp/ptp_qoriq.c
···
 	return 0;
 
 no_clock:
-	iounmap(ptp_qoriq->base);
+	iounmap(base);
 no_ioremap:
 	release_resource(ptp_qoriq->rsrc);
 no_resource:
+1 -1
include/linux/phy.h
···
 struct mdio_device *fwnode_mdio_find_device(struct fwnode_handle *fwnode);
 struct phy_device *fwnode_phy_find_device(struct fwnode_handle *phy_fwnode);
 struct phy_device *device_phy_find_device(struct device *dev);
-struct fwnode_handle *fwnode_get_phy_node(struct fwnode_handle *fwnode);
+struct fwnode_handle *fwnode_get_phy_node(const struct fwnode_handle *fwnode);
 struct phy_device *get_phy_device(struct mii_bus *bus, int addr, bool is_c45);
 int phy_device_register(struct phy_device *phy);
 void phy_device_free(struct phy_device *phydev);
+3 -2
include/linux/sfp.h
···
 void sfp_upstream_start(struct sfp_bus *bus);
 void sfp_upstream_stop(struct sfp_bus *bus);
 void sfp_bus_put(struct sfp_bus *bus);
-struct sfp_bus *sfp_bus_find_fwnode(struct fwnode_handle *fwnode);
+struct sfp_bus *sfp_bus_find_fwnode(const struct fwnode_handle *fwnode);
 int sfp_bus_add_upstream(struct sfp_bus *bus, void *upstream,
			  const struct sfp_upstream_ops *ops);
 void sfp_bus_del_upstream(struct sfp_bus *bus);
···
 {
 }
 
-static inline struct sfp_bus *sfp_bus_find_fwnode(struct fwnode_handle *fwnode)
+static inline struct sfp_bus *
+sfp_bus_find_fwnode(const struct fwnode_handle *fwnode)
 {
 	return NULL;
 }
+10 -6
net/can/bcm.c
···
 		cf = op->frames + op->cfsiz * i;
 		err = memcpy_from_msg((u8 *)cf, msg, op->cfsiz);
+		if (err < 0)
+			goto free_op;
 
 		if (op->flags & CAN_FD_FRAME) {
 			if (cf->len > 64)
···
 				err = -EINVAL;
 		}
 
-		if (err < 0) {
-			if (op->frames != &op->sframe)
-				kfree(op->frames);
-			kfree(op);
-			return err;
-		}
+		if (err < 0)
+			goto free_op;
 
 		if (msg_head->flags & TX_CP_CAN_ID) {
 			/* copy can_id into frame */
···
 	bcm_tx_start_timer(op);
 
 	return msg_head->nframes * op->cfsiz + MHSIZ;
+
+free_op:
+	if (op->frames != &op->sframe)
+		kfree(op->frames);
+	kfree(op);
+	return err;
 }
 
 /*
+6 -2
net/can/j1939/transport.c
···
 	if (session->sk)
 		j1939_sk_send_loop_abort(session->sk, session->err);
-	else
-		j1939_sk_errqueue(session, J1939_ERRQUEUE_RX_ABORT);
 }
 
 static void j1939_session_cancel(struct j1939_session *session,
···
 	}
 
 	j1939_session_list_unlock(session->priv);
+
+	if (!session->sk)
+		j1939_sk_errqueue(session, J1939_ERRQUEUE_RX_ABORT);
 }
 
 static enum hrtimer_restart j1939_tp_txtimer(struct hrtimer *hrtimer)
···
 			__j1939_session_cancel(session, J1939_XTP_ABORT_TIMEOUT);
 		}
 		j1939_session_list_unlock(session->priv);
+
+		if (!session->sk)
+			j1939_sk_errqueue(session, J1939_ERRQUEUE_RX_ABORT);
 	}
 
 	j1939_session_put(session);
+116 -5
net/dsa/slave.c
···
 	u16 vid;
 };
 
+struct dsa_host_vlan_rx_filtering_ctx {
+	struct net_device *dev;
+	const unsigned char *addr;
+	enum dsa_standalone_event event;
+};
+
 static bool dsa_switch_supports_uc_filtering(struct dsa_switch *ds)
 {
 	return ds->ops->port_fdb_add && ds->ops->port_fdb_del &&
···
 	return 0;
 }
 
+static int dsa_slave_host_vlan_rx_filtering(struct net_device *vdev, int vid,
+					    void *arg)
+{
+	struct dsa_host_vlan_rx_filtering_ctx *ctx = arg;
+
+	return dsa_slave_schedule_standalone_work(ctx->dev, ctx->event,
+						  ctx->addr, vid);
+}
+
 static int dsa_slave_sync_uc(struct net_device *dev,
			     const unsigned char *addr)
 {
 	struct net_device *master = dsa_slave_to_master(dev);
 	struct dsa_port *dp = dsa_slave_to_port(dev);
+	struct dsa_host_vlan_rx_filtering_ctx ctx = {
+		.dev = dev,
+		.addr = addr,
+		.event = DSA_UC_ADD,
+	};
+	int err;
 
 	dev_uc_add(master, addr);
 
 	if (!dsa_switch_supports_uc_filtering(dp->ds))
 		return 0;
 
-	return dsa_slave_schedule_standalone_work(dev, DSA_UC_ADD, addr, 0);
+	err = dsa_slave_schedule_standalone_work(dev, DSA_UC_ADD, addr, 0);
+	if (err)
+		return err;
+
+	return vlan_for_each(dev, dsa_slave_host_vlan_rx_filtering, &ctx);
 }
 
 static int dsa_slave_unsync_uc(struct net_device *dev,
···
 {
 	struct net_device *master = dsa_slave_to_master(dev);
 	struct dsa_port *dp = dsa_slave_to_port(dev);
+	struct dsa_host_vlan_rx_filtering_ctx ctx = {
+		.dev = dev,
+		.addr = addr,
+		.event = DSA_UC_DEL,
+	};
+	int err;
 
 	dev_uc_del(master, addr);
 
 	if (!dsa_switch_supports_uc_filtering(dp->ds))
 		return 0;
 
-	return dsa_slave_schedule_standalone_work(dev, DSA_UC_DEL, addr, 0);
+	err = dsa_slave_schedule_standalone_work(dev, DSA_UC_DEL, addr, 0);
+	if (err)
+		return err;
+
+	return vlan_for_each(dev, dsa_slave_host_vlan_rx_filtering, &ctx);
 }
 
 static int dsa_slave_sync_mc(struct net_device *dev,
···
 {
 	struct net_device *master = dsa_slave_to_master(dev);
 	struct dsa_port *dp = dsa_slave_to_port(dev);
+	struct dsa_host_vlan_rx_filtering_ctx ctx = {
+		.dev = dev,
+		.addr = addr,
+		.event = DSA_MC_ADD,
+	};
+	int err;
 
 	dev_mc_add(master, addr);
 
 	if (!dsa_switch_supports_mc_filtering(dp->ds))
 		return 0;
 
-	return dsa_slave_schedule_standalone_work(dev, DSA_MC_ADD, addr, 0);
+	err = dsa_slave_schedule_standalone_work(dev, DSA_MC_ADD, addr, 0);
+	if (err)
+		return err;
+
+	return vlan_for_each(dev, dsa_slave_host_vlan_rx_filtering, &ctx);
 }
 
 static int dsa_slave_unsync_mc(struct net_device *dev,
···
 {
 	struct net_device *master = dsa_slave_to_master(dev);
 	struct dsa_port *dp = dsa_slave_to_port(dev);
+	struct dsa_host_vlan_rx_filtering_ctx ctx = {
+		.dev = dev,
+		.addr = addr,
+		.event = DSA_MC_DEL,
+	};
+	int err;
 
 	dev_mc_del(master, addr);
 
 	if (!dsa_switch_supports_mc_filtering(dp->ds))
 		return 0;
 
-	return dsa_slave_schedule_standalone_work(dev, DSA_MC_DEL, addr, 0);
+	err = dsa_slave_schedule_standalone_work(dev, DSA_MC_DEL, addr, 0);
+	if (err)
+		return err;
+
+	return vlan_for_each(dev, dsa_slave_host_vlan_rx_filtering, &ctx);
 }
 
 void dsa_slave_sync_ha(struct net_device *dev)
···
 		.flags = 0,
 	};
 	struct netlink_ext_ack extack = {0};
+	struct dsa_switch *ds = dp->ds;
+	struct netdev_hw_addr *ha;
 	int ret;
 
 	/* User port... */
···
 		return ret;
 	}
 
+	if (!dsa_switch_supports_uc_filtering(ds) &&
+	    !dsa_switch_supports_mc_filtering(ds))
+		return 0;
+
+	netif_addr_lock_bh(dev);
+
+	if (dsa_switch_supports_mc_filtering(ds)) {
+		netdev_for_each_synced_mc_addr(ha, dev) {
+			dsa_slave_schedule_standalone_work(dev, DSA_MC_ADD,
+							   ha->addr, vid);
+		}
+	}
+
+	if (dsa_switch_supports_uc_filtering(ds)) {
+		netdev_for_each_synced_uc_addr(ha, dev) {
+			dsa_slave_schedule_standalone_work(dev, DSA_UC_ADD,
+							   ha->addr, vid);
+		}
+	}
+
+	netif_addr_unlock_bh(dev);
+
+	dsa_flush_workqueue();
+
 	return 0;
 }
 
···
 		/* This API only allows programming tagged, non-PVID VIDs */
 		.flags = 0,
 	};
+	struct dsa_switch *ds = dp->ds;
+	struct netdev_hw_addr *ha;
 	int err;
 
 	err = dsa_port_vlan_del(dp, &vlan);
 	if (err)
 		return err;
 
-	return dsa_port_host_vlan_del(dp, &vlan);
+	err = dsa_port_host_vlan_del(dp, &vlan);
+	if (err)
+		return err;
+
+	if (!dsa_switch_supports_uc_filtering(ds) &&
+	    !dsa_switch_supports_mc_filtering(ds))
+		return 0;
+
+	netif_addr_lock_bh(dev);
+
+	if (dsa_switch_supports_mc_filtering(ds)) {
+		netdev_for_each_synced_mc_addr(ha, dev) {
+			dsa_slave_schedule_standalone_work(dev, DSA_MC_DEL,
+							   ha->addr, vid);
+		}
+	}
+
+	if (dsa_switch_supports_uc_filtering(ds)) {
+		netdev_for_each_synced_uc_addr(ha, dev) {
+			dsa_slave_schedule_standalone_work(dev, DSA_UC_DEL,
+							   ha->addr, vid);
+		}
+	}
+
+	netif_addr_unlock_bh(dev);
+
+	dsa_flush_workqueue();
+
+	return 0;
 }
 
 static int dsa_slave_restore_vlan(struct net_device *vdev, int vid, void *arg)
+1 -2
net/ieee802154/nl802154.c
···
 	if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR)
 		return -EOPNOTSUPP;
 
-	if (!info->attrs[NL802154_ATTR_SEC_LEVEL] ||
-	    llsec_parse_seclevel(info->attrs[NL802154_ATTR_SEC_LEVEL],
+	if (llsec_parse_seclevel(info->attrs[NL802154_ATTR_SEC_LEVEL],
 				 &sl) < 0)
 		return -EINVAL;
+8 -1
net/vmw_vsock/virtio_transport_common.c
···
 	u32 free_space;
 
 	spin_lock_bh(&vvs->rx_lock);
+
+	if (WARN_ONCE(skb_queue_empty(&vvs->rx_queue) && vvs->rx_bytes,
+		      "rx_queue is empty, but rx_bytes is non-zero\n")) {
+		spin_unlock_bh(&vvs->rx_lock);
+		return err;
+	}
+
 	while (total < len && !skb_queue_empty(&vvs->rx_queue)) {
 		skb = skb_peek(&vvs->rx_queue);
···
 			memcpy(skb_put(last_skb, skb->len), skb->data, skb->len);
 			free_pkt = true;
 			last_hdr->flags |= hdr->flags;
-			last_hdr->len = cpu_to_le32(last_skb->len);
+			le32_add_cpu(&last_hdr->len, len);
 			goto out;
 		}
 	}
+2 -8
net/vmw_vsock/vsock_loopback.c
···
 struct vsock_loopback {
 	struct workqueue_struct *workqueue;
 
-	spinlock_t pkt_list_lock; /* protects pkt_list */
 	struct sk_buff_head pkt_queue;
 	struct work_struct pkt_work;
 };
···
 	struct vsock_loopback *vsock = &the_vsock_loopback;
 	int len = skb->len;
 
-	spin_lock_bh(&vsock->pkt_list_lock);
 	skb_queue_tail(&vsock->pkt_queue, skb);
-	spin_unlock_bh(&vsock->pkt_list_lock);
 
 	queue_work(vsock->workqueue, &vsock->pkt_work);
 
···
 	skb_queue_head_init(&pkts);
 
-	spin_lock_bh(&vsock->pkt_list_lock);
+	spin_lock_bh(&vsock->pkt_queue.lock);
 	skb_queue_splice_init(&vsock->pkt_queue, &pkts);
-	spin_unlock_bh(&vsock->pkt_list_lock);
+	spin_unlock_bh(&vsock->pkt_queue.lock);
 
 	while ((skb = __skb_dequeue(&pkts))) {
 		virtio_transport_deliver_tap_pkt(skb);
···
 	if (!vsock->workqueue)
 		return -ENOMEM;
 
-	spin_lock_init(&vsock->pkt_list_lock);
 	skb_queue_head_init(&vsock->pkt_queue);
 	INIT_WORK(&vsock->pkt_work, vsock_loopback_work);
 
···
 	flush_work(&vsock->pkt_work);
 
-	spin_lock_bh(&vsock->pkt_list_lock);
 	virtio_vsock_skb_queue_purge(&vsock->pkt_queue);
-	spin_unlock_bh(&vsock->pkt_list_lock);
 
 	destroy_workqueue(vsock->workqueue);
 }
+90
tools/testing/vsock/vsock_test.c
···
 	test_inv_buf_server(opts, false);
 }
 
+#define HELLO_STR "HELLO"
+#define WORLD_STR "WORLD"
+
+static void test_stream_virtio_skb_merge_client(const struct test_opts *opts)
+{
+	ssize_t res;
+	int fd;
+
+	fd = vsock_stream_connect(opts->peer_cid, 1234);
+	if (fd < 0) {
+		perror("connect");
+		exit(EXIT_FAILURE);
+	}
+
+	/* Send first skbuff. */
+	res = send(fd, HELLO_STR, strlen(HELLO_STR), 0);
+	if (res != strlen(HELLO_STR)) {
+		fprintf(stderr, "unexpected send(2) result %zi\n", res);
+		exit(EXIT_FAILURE);
+	}
+
+	control_writeln("SEND0");
+	/* Peer reads part of first skbuff. */
+	control_expectln("REPLY0");
+
+	/* Send second skbuff, it will be appended to the first. */
+	res = send(fd, WORLD_STR, strlen(WORLD_STR), 0);
+	if (res != strlen(WORLD_STR)) {
+		fprintf(stderr, "unexpected send(2) result %zi\n", res);
+		exit(EXIT_FAILURE);
+	}
+
+	control_writeln("SEND1");
+	/* Peer reads merged skbuff packet. */
+	control_expectln("REPLY1");
+
+	close(fd);
+}
+
+static void test_stream_virtio_skb_merge_server(const struct test_opts *opts)
+{
+	unsigned char buf[64];
+	ssize_t res;
+	int fd;
+
+	fd = vsock_stream_accept(VMADDR_CID_ANY, 1234, NULL);
+	if (fd < 0) {
+		perror("accept");
+		exit(EXIT_FAILURE);
+	}
+
+	control_expectln("SEND0");
+
+	/* Read skbuff partially. */
+	res = recv(fd, buf, 2, 0);
+	if (res != 2) {
+		fprintf(stderr, "expected recv(2) returns 2 bytes, got %zi\n", res);
+		exit(EXIT_FAILURE);
+	}
+
+	control_writeln("REPLY0");
+	control_expectln("SEND1");
+
+	res = recv(fd, buf + 2, sizeof(buf) - 2, 0);
+	if (res != 8) {
+		fprintf(stderr, "expected recv(2) returns 8 bytes, got %zi\n", res);
+		exit(EXIT_FAILURE);
+	}
+
+	res = recv(fd, buf, sizeof(buf) - 8 - 2, MSG_DONTWAIT);
+	if (res != -1) {
+		fprintf(stderr, "expected recv(2) failure, got %zi\n", res);
+		exit(EXIT_FAILURE);
+	}
+
+	if (memcmp(buf, HELLO_STR WORLD_STR, strlen(HELLO_STR WORLD_STR))) {
+		fprintf(stderr, "pattern mismatch\n");
+		exit(EXIT_FAILURE);
+	}
+
+	control_writeln("REPLY1");
+
+	close(fd);
+}
+
 static struct test_case test_cases[] = {
 	{
 		.name = "SOCK_STREAM connection reset",
···
 		.name = "SOCK_SEQPACKET test invalid buffer",
 		.run_client = test_seqpacket_inv_buf_client,
 		.run_server = test_seqpacket_inv_buf_server,
+	},
+	{
+		.name = "SOCK_STREAM virtio skb merge",
+		.run_client = test_stream_virtio_skb_merge_client,
+		.run_server = test_stream_virtio_skb_merge_server,
 	},
 	{},
 };