
Merge tag 'net-7.0-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Paolo Abeni:
"Including fixes from IPsec, Bluetooth and netfilter

Current release - regressions:

- wifi: fix dev_alloc_name() return value check

- rds: fix recursive lock in rds_tcp_conn_slots_available

Current release - new code bugs:

- vsock: lock down child_ns_mode as write-once

Previous releases - regressions:

- core:
- do not pass flow_id to set_rps_cpu()
- consume xmit errors of GSO frames

- netconsole: avoid OOB reads, msg is not nul-terminated

- netfilter: h323: fix OOB read in decode_choice()

- tcp: re-enable acceptance of FIN packets when RWIN is 0

- udplite: fix null-ptr-deref in __udp_enqueue_schedule_skb().

- wifi: brcmfmac: fix potential kernel oops when probe fails

- phy: register phy led_triggers during probe to avoid AB-BA deadlock

- eth:
- bnxt_en: fix deleting of Ntuple filters
- wan: farsync: fix use-after-free bugs caused by unfinished tasklets
- xscale: check for PTP support properly

Previous releases - always broken:

- tcp: fix potential race in tcp_v6_syn_recv_sock()

- kcm: fix zero-frag skb in frag_list on partial sendmsg error

- xfrm:
- fix race condition in espintcp_close()
- always flush state and policy upon NETDEV_UNREGISTER event

- bluetooth:
- purge error queues in socket destructors
- fix response to L2CAP_ECRED_CONN_REQ

- eth:
- mlx5:
- fix circular locking dependency in dump
- fix "scheduling while atomic" in IPsec MAC address query
- gve: fix incorrect buffer cleanup for QPL
- team: avoid NETDEV_CHANGEMTU event when unregistering slave
- usb: validate USB endpoints"

* tag 'net-7.0-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (72 commits)
netfilter: nf_conntrack_h323: fix OOB read in decode_choice()
dpaa2-switch: validate num_ifs to prevent out-of-bounds write
net: consume xmit errors of GSO frames
vsock: document write-once behavior of the child_ns_mode sysctl
vsock: lock down child_ns_mode as write-once
selftests/vsock: change tests to respect write-once child ns mode
net/mlx5e: Fix "scheduling while atomic" in IPsec MAC address query
net/mlx5: Fix missing devlink lock in SRIOV enable error path
net/mlx5: E-switch, Clear legacy flag when moving to switchdev
net/mlx5: LAG, disable MPESW in lag_disable_change()
net/mlx5: DR, Fix circular locking dependency in dump
selftests: team: Add a reference count leak test
team: avoid NETDEV_CHANGEMTU event when unregistering slave
net: mana: Fix double destroy_workqueue on service rescan PCI path
MAINTAINERS: Update maintainer entry for QUALCOMM ETHQOS ETHERNET DRIVER
dpll: zl3073x: Remove redundant cleanup in devm_dpll_init()
selftests/net: packetdrill: Verify acceptance of FIN packets when RWIN is 0
tcp: re-enable acceptance of FIN packets when RWIN is 0
vsock: Use container_of() to get net namespace in sysctl handlers
net: usb: kaweth: validate USB endpoints
...

+829 -334
+3
Documentation/admin-guide/sysctl/net.rst
···
 their sockets will only be able to connect within their own
 namespace.

+The first write to ``child_ns_mode`` locks its value. Subsequent writes of the
+same value succeed, but writing a different value returns ``-EBUSY``.
+
 Changing ``child_ns_mode`` only affects namespaces created after the change;
 it does not modify the current namespace or any existing children.
+4 -6
MAINTAINERS
···
 F:	include/uapi/drm/amdxdna_accel.h

 AMD XGBE DRIVER
-M:	"Shyam Sundar S K" <Shyam-sundar.S-k@amd.com>
 M:	Raju Rangoju <Raju.Rangoju@amd.com>
 L:	netdev@vger.kernel.org
 S:	Maintained
···
 F:	drivers/scsi/snic/

 CISCO VIC ETHERNET NIC DRIVER
-M:	Christian Benvenuti <benve@cisco.com>
 M:	Satish Kharat <satishkh@cisco.com>
 S:	Maintained
 F:	drivers/net/ethernet/cisco/enic/

 CISCO VIC LOW LATENCY NIC DRIVER
-M:	Christian Benvenuti <benve@cisco.com>
 M:	Nelson Escobar <neescoba@cisco.com>
+M:	Satish Kharat <satishkh@cisco.com>
 S:	Supported
 F:	drivers/infiniband/hw/usnic/
···
 M:	Herve Codina <herve.codina@bootlin.com>
 S:	Maintained
 F:	Documentation/devicetree/bindings/net/lantiq,pef2256.yaml
-F:	drivers/net/wan/framer/pef2256/
+F:	drivers/net/wan/framer/
 F:	drivers/pinctrl/pinctrl-pef2256.c
-F:	include/linux/framer/pef2256.h
+F:	include/linux/framer/

 LASI 53c700 driver for PARISC
 M:	"James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
···
 F:	drivers/net/ethernet/qualcomm/emac/

 QUALCOMM ETHQOS ETHERNET DRIVER
-M:	Vinod Koul <vkoul@kernel.org>
+M:	Mohd Ayaan Anwar <mohd.anwar@oss.qualcomm.com>
 L:	netdev@vger.kernel.org
 L:	linux-arm-msm@vger.kernel.org
 S:	Maintained
+14 -10
drivers/bluetooth/hci_qca.c
···
 	}

 out:
-	if (ret && retries < MAX_INIT_RETRIES) {
-		bt_dev_warn(hdev, "Retry BT power ON:%d", retries);
+	if (ret) {
 		qca_power_shutdown(hu);
-		if (hu->serdev) {
-			serdev_device_close(hu->serdev);
-			ret = serdev_device_open(hu->serdev);
-			if (ret) {
-				bt_dev_err(hdev, "failed to open port");
-				return ret;
+
+		if (retries < MAX_INIT_RETRIES) {
+			bt_dev_warn(hdev, "Retry BT power ON:%d", retries);
+			if (hu->serdev) {
+				serdev_device_close(hu->serdev);
+				ret = serdev_device_open(hu->serdev);
+				if (ret) {
+					bt_dev_err(hdev, "failed to open port");
+					return ret;
+				}
 			}
+			retries++;
+			goto retry;
 		}
-		retries++;
-		goto retry;
+		return ret;
 	}

 	/* Setup bdaddr */
+2 -5
drivers/dpll/zl3073x/core.c
···
 	}

 	/* Add devres action to release DPLL related resources */
-	rc = devm_add_action_or_reset(zldev->dev, zl3073x_dev_dpll_fini, zldev);
-	if (rc)
-		goto error;
-
-	return 0;
+	return devm_add_action_or_reset(zldev->dev, zl3073x_dev_dpll_fini, zldev);

 error:
 	zl3073x_dev_dpll_fini(zldev);
···
 			 "Unknown or non-match chip ID: 0x%0x\n",
 			 id);
 	}
+	zldev->chip_id = id;

 	/* Read revision, firmware version and custom config version */
 	rc = zl3073x_read_u16(zldev, ZL_REG_REVISION, &revision);
+28
drivers/dpll/zl3073x/core.h
···
  * @dev: pointer to device
  * @regmap: regmap to access device registers
  * @multiop_lock: to serialize multiple register operations
+ * @chip_id: chip ID read from hardware
  * @ref: array of input references' invariants
  * @out: array of outs' invariants
  * @synth: array of synths' invariants
···
 	struct device *dev;
 	struct regmap *regmap;
 	struct mutex multiop_lock;
+	u16 chip_id;

 	/* Invariants */
 	struct zl3073x_ref ref[ZL3073X_NUM_REFS];
···
  *****************/

 int zl3073x_ref_phase_offsets_update(struct zl3073x_dev *zldev, int channel);
+
+/**
+ * zl3073x_dev_is_ref_phase_comp_32bit - check ref phase comp register size
+ * @zldev: pointer to zl3073x device
+ *
+ * Some chip IDs have a 32-bit wide ref_phase_offset_comp register instead
+ * of the default 48-bit.
+ *
+ * Return: true if the register is 32-bit, false if 48-bit
+ */
+static inline bool
+zl3073x_dev_is_ref_phase_comp_32bit(struct zl3073x_dev *zldev)
+{
+	switch (zldev->chip_id) {
+	case 0x0E30:
+	case 0x0E93:
+	case 0x0E94:
+	case 0x0E95:
+	case 0x0E96:
+	case 0x0E97:
+	case 0x1F60:
+		return true;
+	default:
+		return false;
+	}
+}

 static inline bool
 zl3073x_is_n_pin(u8 id)
+5 -2
drivers/dpll/zl3073x/dpll.c
···
 	ref_id = zl3073x_input_pin_ref_get(pin->id);
 	ref = zl3073x_ref_state_get(zldev, ref_id);

-	/* Perform sign extension for 48bit signed value */
-	phase_comp = sign_extend64(ref->phase_comp, 47);
+	/* Perform sign extension based on register width */
+	if (zl3073x_dev_is_ref_phase_comp_32bit(zldev))
+		phase_comp = sign_extend64(ref->phase_comp, 31);
+	else
+		phase_comp = sign_extend64(ref->phase_comp, 47);

 	/* Reverse two's complement negation applied during set and convert
 	 * to 32bit signed int
+20 -5
drivers/dpll/zl3073x/ref.c
···
 		return rc;

 	/* Read phase compensation register */
-	rc = zl3073x_read_u48(zldev, ZL_REG_REF_PHASE_OFFSET_COMP,
-			      &ref->phase_comp);
+	if (zl3073x_dev_is_ref_phase_comp_32bit(zldev)) {
+		u32 val;
+
+		rc = zl3073x_read_u32(zldev, ZL_REG_REF_PHASE_OFFSET_COMP_32,
+				      &val);
+		ref->phase_comp = val;
+	} else {
+		rc = zl3073x_read_u48(zldev, ZL_REG_REF_PHASE_OFFSET_COMP,
+				      &ref->phase_comp);
+	}
 	if (rc)
 		return rc;
···
 	if (!rc && dref->sync_ctrl != ref->sync_ctrl)
 		rc = zl3073x_write_u8(zldev, ZL_REG_REF_SYNC_CTRL,
 				      ref->sync_ctrl);
-	if (!rc && dref->phase_comp != ref->phase_comp)
-		rc = zl3073x_write_u48(zldev, ZL_REG_REF_PHASE_OFFSET_COMP,
-				       ref->phase_comp);
+	if (!rc && dref->phase_comp != ref->phase_comp) {
+		if (zl3073x_dev_is_ref_phase_comp_32bit(zldev))
+			rc = zl3073x_write_u32(zldev,
+					       ZL_REG_REF_PHASE_OFFSET_COMP_32,
+					       ref->phase_comp);
+		else
+			rc = zl3073x_write_u48(zldev,
+					       ZL_REG_REF_PHASE_OFFSET_COMP,
+					       ref->phase_comp);
+	}
 	if (rc)
 		return rc;
+1
drivers/dpll/zl3073x/regs.h
···
 #define ZL_REF_CONFIG_DIFF_EN		BIT(2)

 #define ZL_REG_REF_PHASE_OFFSET_COMP	ZL_REG(10, 0x28, 6)
+#define ZL_REG_REF_PHASE_OFFSET_COMP_32	ZL_REG(10, 0x28, 4)

 #define ZL_REG_REF_SYNC_CTRL		ZL_REG(10, 0x2e, 1)
 #define ZL_REF_SYNC_CTRL_MODE		GENMASK(2, 0)
+8 -1
drivers/net/dsa/sja1105/sja1105_main.c
···
 	 * change it through the dynamic interface later.
 	 */
 	dsa_switch_for_each_available_port(dp, ds) {
+		/* May be called during unbind when we unoffload a VLAN-aware
+		 * bridge from port 1 while port 0 was already torn down
+		 */
+		if (!dp->pl)
+			continue;
+
 		phylink_replay_link_begin(dp->pl);
 		mac[dp->index].speed = priv->info->port_speed[SJA1105_SPEED_AUTO];
 	}
···
 	}

 	dsa_switch_for_each_available_port(dp, ds)
-		phylink_replay_link_end(dp->pl);
+		if (dp->pl)
+			phylink_replay_link_end(dp->pl);

 	rc = sja1105_reload_cbs(priv);
 	if (rc < 0)
+7 -6
drivers/net/ethernet/broadcom/bnxt/bnxt.c
···
 	int rc;

 	set_bit(BNXT_FLTR_FW_DELETED, &fltr->base.state);
+	if (!test_bit(BNXT_STATE_OPEN, &bp->state))
+		return 0;
+
 	rc = hwrm_req_init(bp, req, HWRM_CFA_NTUPLE_FILTER_FREE);
 	if (rc)
 		return rc;
···
 	struct bnxt_ntuple_filter *ntp_fltr;
 	int i;

-	if (netif_running(bp->dev)) {
-		bnxt_hwrm_vnic_free_one(bp, &rss_ctx->vnic);
-		for (i = 0; i < BNXT_MAX_CTX_PER_VNIC; i++) {
-			if (vnic->fw_rss_cos_lb_ctx[i] != INVALID_HW_RING_ID)
-				bnxt_hwrm_vnic_ctx_free_one(bp, vnic, i);
-		}
+	bnxt_hwrm_vnic_free_one(bp, &rss_ctx->vnic);
+	for (i = 0; i < BNXT_MAX_CTX_PER_VNIC; i++) {
+		if (vnic->fw_rss_cos_lb_ctx[i] != INVALID_HW_RING_ID)
+			bnxt_hwrm_vnic_ctx_free_one(bp, vnic, i);
 	}
 	if (!all)
 		return;
+7
drivers/net/ethernet/freescale/dpaa2/dpaa2-switch.c
···
 		goto err_close;
 	}

+	if (ethsw->sw_attr.num_ifs >= DPSW_MAX_IF) {
+		dev_err(dev, "DPSW num_ifs %u exceeds max %u\n",
+			ethsw->sw_attr.num_ifs, DPSW_MAX_IF);
+		err = -EINVAL;
+		goto err_close;
+	}
+
 	err = dpsw_get_api_version(ethsw->mc_io, 0,
 				   &ethsw->major,
 				   &ethsw->minor);
+24 -30
drivers/net/ethernet/google/gve/gve_tx_dqo.c
···
 	}
 }

+static void gve_unmap_packet(struct device *dev,
+			     struct gve_tx_pending_packet_dqo *pkt)
+{
+	int i;
+
+	if (!pkt->num_bufs)
+		return;
+
+	/* SKB linear portion is guaranteed to be mapped */
+	dma_unmap_single(dev, dma_unmap_addr(pkt, dma[0]),
+			 dma_unmap_len(pkt, len[0]), DMA_TO_DEVICE);
+	for (i = 1; i < pkt->num_bufs; i++) {
+		netmem_dma_unmap_page_attrs(dev, dma_unmap_addr(pkt, dma[i]),
+					    dma_unmap_len(pkt, len[i]),
+					    DMA_TO_DEVICE, 0);
+	}
+	pkt->num_bufs = 0;
+}
+
 /* gve_tx_free_desc - Cleans up all pending tx requests and buffers.
  */
 static void gve_tx_clean_pending_packets(struct gve_tx_ring *tx)
···
 	for (i = 0; i < tx->dqo.num_pending_packets; i++) {
 		struct gve_tx_pending_packet_dqo *cur_state =
 			&tx->dqo.pending_packets[i];
-		int j;

-		for (j = 0; j < cur_state->num_bufs; j++) {
-			if (j == 0) {
-				dma_unmap_single(tx->dev,
-					dma_unmap_addr(cur_state, dma[j]),
-					dma_unmap_len(cur_state, len[j]),
-					DMA_TO_DEVICE);
-			} else {
-				dma_unmap_page(tx->dev,
-					dma_unmap_addr(cur_state, dma[j]),
-					dma_unmap_len(cur_state, len[j]),
-					DMA_TO_DEVICE);
-			}
-		}
+		if (tx->dqo.qpl)
+			gve_free_tx_qpl_bufs(tx, cur_state);
+		else
+			gve_unmap_packet(tx->dev, cur_state);
+
 		if (cur_state->skb) {
 			dev_consume_skb_any(cur_state->skb);
 			cur_state->skb = NULL;
···
 	} else {
 		tx->dqo.pending_packets[next_index].prev = prev_index;
 	}
-}
-
-static void gve_unmap_packet(struct device *dev,
-			     struct gve_tx_pending_packet_dqo *pkt)
-{
-	int i;
-
-	/* SKB linear portion is guaranteed to be mapped */
-	dma_unmap_single(dev, dma_unmap_addr(pkt, dma[0]),
-			 dma_unmap_len(pkt, len[0]), DMA_TO_DEVICE);
-	for (i = 1; i < pkt->num_bufs; i++) {
-		netmem_dma_unmap_page_attrs(dev, dma_unmap_addr(pkt, dma[i]),
-					    dma_unmap_len(pkt, len[i]),
-					    DMA_TO_DEVICE, 0);
-	}
-	pkt->num_bufs = 0;
 }

 /* Completion types and expected behavior:
+1 -2
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
···
 static void mlx5e_ipsec_init_macs(struct mlx5e_ipsec_sa_entry *sa_entry,
 				  struct mlx5_accel_esp_xfrm_attrs *attrs)
 {
-	struct mlx5_core_dev *mdev = mlx5e_ipsec_sa2dev(sa_entry);
 	struct mlx5e_ipsec_addr *addrs = &attrs->addrs;
 	struct net_device *netdev = sa_entry->dev;
 	struct xfrm_state *x = sa_entry->x;
···
 	    attrs->type != XFRM_DEV_OFFLOAD_PACKET)
 		return;

-	mlx5_query_mac_address(mdev, addr);
+	ether_addr_copy(addr, netdev->dev_addr);
 	switch (attrs->dir) {
 	case XFRM_DEV_OFFLOAD_IN:
 		src = attrs->dmac;
+2
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
···

 	if (mlx5_mode == MLX5_ESWITCH_LEGACY)
 		esw->dev->priv.flags |= MLX5_PRIV_FLAGS_SWITCH_LEGACY;
+	if (mlx5_mode == MLX5_ESWITCH_OFFLOADS)
+		esw->dev->priv.flags &= ~MLX5_PRIV_FLAGS_SWITCH_LEGACY;
 	mlx5_eswitch_disable_locked(esw);
 	if (mlx5_mode == MLX5_ESWITCH_OFFLOADS) {
 		if (mlx5_devlink_trap_get_num_active(esw->dev)) {
+6 -2
drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c
···
 	mutex_lock(&ldev->lock);

 	ldev->mode_changes_in_progress++;
-	if (__mlx5_lag_is_active(ldev))
-		mlx5_disable_lag(ldev);
+	if (__mlx5_lag_is_active(ldev)) {
+		if (ldev->mode == MLX5_LAG_MODE_MPESW)
+			mlx5_lag_disable_mpesw(ldev);
+		else
+			mlx5_disable_lag(ldev);
+	}

 	mutex_unlock(&ldev->lock);
 	mlx5_devcom_comp_unlock(dev->priv.hca_devcom_comp);
+4 -4
drivers/net/ethernet/mellanox/mlx5/core/lag/mpesw.c
···
 	return err;
 }

-static int enable_mpesw(struct mlx5_lag *ldev)
+static int mlx5_lag_enable_mpesw(struct mlx5_lag *ldev)
 {
 	struct mlx5_core_dev *dev0;
 	int err;
···
 	return err;
 }

-static void disable_mpesw(struct mlx5_lag *ldev)
+void mlx5_lag_disable_mpesw(struct mlx5_lag *ldev)
 {
 	if (ldev->mode == MLX5_LAG_MODE_MPESW) {
 		mlx5_mpesw_metadata_cleanup(ldev);
···
 	}

 	if (mpesww->op == MLX5_MPESW_OP_ENABLE)
-		mpesww->result = enable_mpesw(ldev);
+		mpesww->result = mlx5_lag_enable_mpesw(ldev);
 	else if (mpesww->op == MLX5_MPESW_OP_DISABLE)
-		disable_mpesw(ldev);
+		mlx5_lag_disable_mpesw(ldev);
 unlock:
 	mutex_unlock(&ldev->lock);
 	mlx5_devcom_comp_unlock(devcom);
+5
drivers/net/ethernet/mellanox/mlx5/core/lag/mpesw.h
···
 bool mlx5_lag_is_mpesw(struct mlx5_core_dev *dev);
 void mlx5_lag_mpesw_disable(struct mlx5_core_dev *dev);
 int mlx5_lag_mpesw_enable(struct mlx5_core_dev *dev);
+#ifdef CONFIG_MLX5_ESWITCH
+void mlx5_lag_disable_mpesw(struct mlx5_lag *ldev);
+#else
+static inline void mlx5_lag_disable_mpesw(struct mlx5_lag *ldev) {}
+#endif /* CONFIG_MLX5_ESWITCH */

 #ifdef CONFIG_MLX5_ESWITCH
 void mlx5_mpesw_speed_update_work(struct work_struct *work);
+2
drivers/net/ethernet/mellanox/mlx5/core/sriov.c
···
 	err = pci_enable_sriov(pdev, num_vfs);
 	if (err) {
 		mlx5_core_warn(dev, "pci_enable_sriov failed : %d\n", err);
+		devl_lock(devlink);
 		mlx5_device_disable_sriov(dev, num_vfs, true, true);
+		devl_unlock(devlink);
 	}
 	return err;
 }
+2 -2
drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_dbg.c
···
 	struct mlx5dr_table *tbl;
 	int ret;

-	mutex_lock(&dmn->dump_info.dbg_mutex);
 	mlx5dr_domain_lock(dmn);
+	mutex_lock(&dmn->dump_info.dbg_mutex);

 	ret = dr_dump_domain(file, dmn);
 	if (ret < 0)
···
 	}

 unlock_mutex:
-	mlx5dr_domain_unlock(dmn);
 	mutex_unlock(&dmn->dump_info.dbg_mutex);
+	mlx5dr_domain_unlock(dmn);
 	return ret;
 }
+4 -1
drivers/net/ethernet/microsoft/mana/gdma_main.c
···

 	mana_gd_remove_irqs(pdev);

-	destroy_workqueue(gc->service_wq);
+	if (gc->service_wq) {
+		destroy_workqueue(gc->service_wq);
+		gc->service_wq = NULL;
+	}
 	dev_dbg(&pdev->dev, "mana gdma cleanup successful\n");
 }
+3 -1
drivers/net/ethernet/microsoft/mana/mana_en.c
···
 	}

 	WRITE_ONCE(gd->rdma_teardown, true);
-	flush_workqueue(gc->service_wq);
+
+	if (gc->service_wq)
+		flush_workqueue(gc->service_wq);

 	if (gd->adev)
 		remove_adev(gd);
+1
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
···
 	netdev_info(priv->dev,
 		    "IEEE 1588-2008 Advanced Timestamp supported\n");

+	memset(&priv->tstamp_config, 0, sizeof(priv->tstamp_config));
 	priv->hwts_tx_en = 0;
 	priv->hwts_rx_en = 0;
+1 -4
drivers/net/ethernet/xscale/ixp4xx_eth.c
···
 	int ret;
 	int ch;

-	if (!cpu_is_ixp46x())
-		return -EOPNOTSUPP;
-
 	if (!netif_running(netdev))
 		return -EINVAL;

 	ret = ixp46x_ptp_find(&port->timesync_regs, &port->phc_index);
 	if (ret)
-		return ret;
+		return -EOPNOTSUPP;

 	ch = PORT2CHANNEL(port);
 	regs = port->timesync_regs;
+3
drivers/net/ethernet/xscale/ptp_ixp46x.c
···

 int ixp46x_ptp_find(struct ixp46x_ts_regs *__iomem *regs, int *phc_index)
 {
+	if (!cpu_is_ixp46x())
+		return -ENODEV;
+
 	*regs = ixp_clock.regs;
 	*phc_index = ptp_clock_index(ixp_clock.ptp_clock);
+2 -1
drivers/net/netconsole.c
···
 	if (release_len) {
 		release = init_utsname()->release;

-		scnprintf(nt->buf, MAX_PRINT_CHUNK, "%s,%s", release, msg);
+		scnprintf(nt->buf, MAX_PRINT_CHUNK, "%s,%.*s", release,
+			  msg_len, msg);
 		msg_len += release_len;
 	} else {
 		memcpy(nt->buf, msg, msg_len);
+38 -19
drivers/net/ovpn/tcp.c
···
 	peer->tcp.sk_cb.sk_data_ready(sk);
 }

+static struct sk_buff *ovpn_tcp_skb_packet(const struct ovpn_peer *peer,
+					   struct sk_buff *orig_skb,
+					   const int pkt_len, const int pkt_off)
+{
+	struct sk_buff *ovpn_skb;
+	int err;
+
+	/* create a new skb with only the content of the current packet */
+	ovpn_skb = netdev_alloc_skb(peer->ovpn->dev, pkt_len);
+	if (unlikely(!ovpn_skb))
+		goto err;
+
+	skb_copy_header(ovpn_skb, orig_skb);
+	err = skb_copy_bits(orig_skb, pkt_off, skb_put(ovpn_skb, pkt_len),
+			    pkt_len);
+	if (unlikely(err)) {
+		net_warn_ratelimited("%s: skb_copy_bits failed for peer %u\n",
+				     netdev_name(peer->ovpn->dev), peer->id);
+		kfree_skb(ovpn_skb);
+		goto err;
+	}
+
+	consume_skb(orig_skb);
+	return ovpn_skb;
+err:
+	kfree_skb(orig_skb);
+	return NULL;
+}
+
 static void ovpn_tcp_rcv(struct strparser *strp, struct sk_buff *skb)
 {
 	struct ovpn_peer *peer = container_of(strp, struct ovpn_peer, tcp.strp);
 	struct strp_msg *msg = strp_msg(skb);
-	size_t pkt_len = msg->full_len - 2;
-	size_t off = msg->offset + 2;
+	int pkt_len = msg->full_len - 2;
 	u8 opcode;

-	/* ensure skb->data points to the beginning of the openvpn packet */
-	if (!pskb_pull(skb, off)) {
-		net_warn_ratelimited("%s: packet too small for peer %u\n",
-				     netdev_name(peer->ovpn->dev), peer->id);
-		goto err;
-	}
-
-	/* strparser does not trim the skb for us, therefore we do it now */
-	if (pskb_trim(skb, pkt_len) != 0) {
-		net_warn_ratelimited("%s: trimming skb failed for peer %u\n",
-				     netdev_name(peer->ovpn->dev), peer->id);
-		goto err;
-	}
-
-	/* we need the first 4 bytes of data to be accessible
+	/* we need at least 4 bytes of data in the packet
 	 * to extract the opcode and the key ID later on
 	 */
-	if (!pskb_may_pull(skb, OVPN_OPCODE_SIZE)) {
+	if (unlikely(pkt_len < OVPN_OPCODE_SIZE)) {
 		net_warn_ratelimited("%s: packet too small to fetch opcode for peer %u\n",
 				     netdev_name(peer->ovpn->dev), peer->id);
 		goto err;
 	}
+
+	/* extract the packet into a new skb */
+	skb = ovpn_tcp_skb_packet(peer, skb, pkt_len, msg->offset + 2);
+	if (unlikely(!skb))
+		goto err;

 	/* DATA_V2 packets are handled in kernel, the rest goes to user space */
 	opcode = ovpn_opcode_from_skb(skb, 0);
···
 		/* The packet size header must be there when sending the packet
 		 * to userspace, therefore we put it back
 		 */
-		skb_push(skb, 2);
+		*(__be16 *)__skb_push(skb, sizeof(u16)) = htons(pkt_len);
 		ovpn_tcp_to_userspace(peer, strp->sk, skb);
 		return;
 	}
+17 -8
drivers/net/phy/phy_device.c
···
 		goto error;

 	phy_resume(phydev);
-	if (!phydev->is_on_sfp_module)
-		phy_led_triggers_register(phydev);

 	/**
 	 * If the external phy used by current mac interface is managed by
···

 	phydev->phy_link_change = NULL;
 	phydev->phylink = NULL;
-
-	if (!phydev->is_on_sfp_module)
-		phy_led_triggers_unregister(phydev);

 	if (phydev->mdio.dev.driver)
 		module_put(phydev->mdio.dev.driver->owner);
···
 	/* Set the state to READY by default */
 	phydev->state = PHY_READY;

+	/* Register the PHY LED triggers */
+	if (!phydev->is_on_sfp_module)
+		phy_led_triggers_register(phydev);
+
 	/* Get the LEDs from the device tree, and instantiate standard
 	 * LEDs for them.
 	 */
-	if (IS_ENABLED(CONFIG_PHYLIB_LEDS) && !phy_driver_is_genphy(phydev))
+	if (IS_ENABLED(CONFIG_PHYLIB_LEDS) && !phy_driver_is_genphy(phydev)) {
 		err = of_phy_leds(phydev);
+		if (err)
+			goto out;
+	}
+
+	return 0;

 out:
+	if (!phydev->is_on_sfp_module)
+		phy_led_triggers_unregister(phydev);
+
 	/* Re-assert the reset signal on error */
-	if (err)
-		phy_device_reset(phydev, 1);
+	phy_device_reset(phydev, 1);

 	return err;
 }
···

 	if (IS_ENABLED(CONFIG_PHYLIB_LEDS) && !phy_driver_is_genphy(phydev))
 		phy_leds_unregister(phydev);
+
+	if (!phydev->is_on_sfp_module)
+		phy_led_triggers_unregister(phydev);

 	phydev->state = PHY_DOWN;
+1 -1
drivers/net/phy/qcom/qca807x.c
···
 	reg = QCA807X_MMD7_LED_FORCE_CTRL(offset);
 	val = phy_read_mmd(priv->phy, MDIO_MMD_AN, reg);

-	return FIELD_GET(QCA807X_GPIO_FORCE_MODE_MASK, val);
+	return !!FIELD_GET(QCA807X_GPIO_FORCE_MODE_MASK, val);
 }

 static int qca807x_gpio_set(struct gpio_chip *gc, unsigned int offset, int value)
+21 -5
drivers/net/team/team_core.c
···

 static void __team_port_change_port_removed(struct team_port *port);

-static int team_port_del(struct team *team, struct net_device *port_dev)
+static int team_port_del(struct team *team, struct net_device *port_dev, bool unregister)
 {
 	struct net_device *dev = team->dev;
 	struct team_port *port;
···
 	__team_port_change_port_removed(port);

 	team_port_set_orig_dev_addr(port);
-	dev_set_mtu(port_dev, port->orig.mtu);
+	if (unregister) {
+		netdev_lock_ops(port_dev);
+		__netif_set_mtu(port_dev, port->orig.mtu);
+		netdev_unlock_ops(port_dev);
+	} else {
+		dev_set_mtu(port_dev, port->orig.mtu);
+	}
 	kfree_rcu(port, rcu);
 	netdev_info(dev, "Port device %s removed\n", portname);
 	netdev_compute_master_upper_features(team->dev, true);
···
 	ASSERT_RTNL();

 	list_for_each_entry_safe(port, tmp, &team->port_list, list)
-		team_port_del(team, port->dev);
+		team_port_del(team, port->dev, false);

 	__team_change_mode(team, NULL); /* cleanup */
 	__team_options_unregister(team, team_options, ARRAY_SIZE(team_options));
···

 	ASSERT_RTNL();

-	return team_port_del(team, port_dev);
+	return team_port_del(team, port_dev, false);
+}
+
+static int team_del_slave_on_unregister(struct net_device *dev, struct net_device *port_dev)
+{
+	struct team *team = netdev_priv(dev);
+
+	ASSERT_RTNL();
+
+	return team_port_del(team, port_dev, true);
 }

 static netdev_features_t team_fix_features(struct net_device *dev,
···
 					    !!netif_oper_up(port->dev));
 		break;
 	case NETDEV_UNREGISTER:
-		team_del_slave(port->team->dev, dev);
+		team_del_slave_on_unregister(port->team->dev, dev);
 		break;
 	case NETDEV_FEAT_CHANGE:
 		if (!port->team->notifier_ctx) {
···
 MODULE_AUTHOR("Jiri Pirko <jpirko@redhat.com>");
 MODULE_DESCRIPTION("Ethernet team device driver");
 MODULE_ALIAS_RTNL_LINK(DRV_NAME);
+MODULE_IMPORT_NS("NETDEV_INTERNAL");
+7
drivers/net/usb/kalmia.c
···
 {
 	int status;
 	u8 ethernet_addr[ETH_ALEN];
+	static const u8 ep_addr[] = {
+		1 | USB_DIR_IN,
+		2 | USB_DIR_OUT,
+		0};

 	/* Don't bind to AT command interface */
 	if (intf->cur_altsetting->desc.bInterfaceClass != USB_CLASS_VENDOR_SPEC)
 		return -EINVAL;
+
+	if (!usb_check_bulk_endpoints(intf, ep_addr))
+		return -ENODEV;

 	dev->in = usb_rcvbulkpipe(dev->udev, 0x81 & USB_ENDPOINT_NUMBER_MASK);
 	dev->out = usb_sndbulkpipe(dev->udev, 0x02 & USB_ENDPOINT_NUMBER_MASK);
+13 -2
drivers/net/usb/kaweth.c
···

 	netdev_dbg(net, "Setting Rx mode to %d\n", packet_filter_bitmap);

-	netif_stop_queue(net);

 	if (net->flags & IFF_PROMISC) {
 		packet_filter_bitmap |= KAWETH_PACKET_FILTER_PROMISCUOUS;
···
 	}

 	kaweth->packet_filter_bitmap = packet_filter_bitmap;
-	netif_wake_queue(net);
 }

 /****************************************************************
···
 	const eth_addr_t bcast_addr = { 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF };
 	int result = 0;
 	int rv = -EIO;
+	static const u8 bulk_ep_addr[] = {
+		1 | USB_DIR_IN,
+		2 | USB_DIR_OUT,
+		0};
+	static const u8 int_ep_addr[] = {
+		3 | USB_DIR_IN,
+		0};

 	dev_dbg(dev,
 		"Kawasaki Device Probe (Device number:%d): 0x%4.4x:0x%4.4x:0x%4.4x\n",
···
 	dev_dbg(dev, "Descriptor length: %x type: %x\n",
 		(int)udev->descriptor.bLength,
 		(int)udev->descriptor.bDescriptorType);
+
+	if (!usb_check_bulk_endpoints(intf, bulk_ep_addr) ||
+	    !usb_check_int_endpoints(intf, int_ep_addr)) {
+		dev_err(dev, "couldn't find required endpoints\n");
+		return -ENODEV;
+	}

 	netdev = alloc_etherdev(sizeof(*kaweth));
 	if (!netdev)
-2
drivers/net/usb/lan78xx.c
···
 		dev->mdiobus->phy_mask = ~(1 << 1);
 		break;
 	case ID_REV_CHIP_ID_7801_:
-		/* scan thru PHYAD[2..0] */
-		dev->mdiobus->phy_mask = ~(0xFF);
 		break;
 	}
+42 -6
drivers/net/usb/pegasus.c
··· 28 28 BMSR_100FULL | BMSR_ANEGCAPABLE) 29 29 #define CARRIER_CHECK_DELAY (2 * HZ) 30 30 31 + /* 32 + * USB endpoints. 33 + */ 34 + 35 + enum pegasus_usb_ep { 36 + PEGASUS_USB_EP_CONTROL = 0, 37 + PEGASUS_USB_EP_BULK_IN = 1, 38 + PEGASUS_USB_EP_BULK_OUT = 2, 39 + PEGASUS_USB_EP_INT_IN = 3, 40 + }; 41 + 31 42 static bool loopback; 32 43 static bool mii_mode; 33 44 static char *devid; ··· 553 542 goto tl_sched; 554 543 goon: 555 544 usb_fill_bulk_urb(pegasus->rx_urb, pegasus->usb, 556 - usb_rcvbulkpipe(pegasus->usb, 1), 545 + usb_rcvbulkpipe(pegasus->usb, PEGASUS_USB_EP_BULK_IN), 557 546 pegasus->rx_skb->data, PEGASUS_MTU, 558 547 read_bulk_callback, pegasus); 559 548 rx_status = usb_submit_urb(pegasus->rx_urb, GFP_ATOMIC); ··· 593 582 return; 594 583 } 595 584 usb_fill_bulk_urb(pegasus->rx_urb, pegasus->usb, 596 - usb_rcvbulkpipe(pegasus->usb, 1), 585 + usb_rcvbulkpipe(pegasus->usb, PEGASUS_USB_EP_BULK_IN), 597 586 pegasus->rx_skb->data, PEGASUS_MTU, 598 587 read_bulk_callback, pegasus); 599 588 try_again: ··· 721 710 ((__le16 *) pegasus->tx_buff)[0] = cpu_to_le16(l16); 722 711 skb_copy_from_linear_data(skb, pegasus->tx_buff + 2, skb->len); 723 712 usb_fill_bulk_urb(pegasus->tx_urb, pegasus->usb, 724 - usb_sndbulkpipe(pegasus->usb, 2), 713 + usb_sndbulkpipe(pegasus->usb, PEGASUS_USB_EP_BULK_OUT), 725 714 pegasus->tx_buff, count, 726 715 write_bulk_callback, pegasus); 727 716 if ((res = usb_submit_urb(pegasus->tx_urb, GFP_ATOMIC))) { ··· 812 801 813 802 static int alloc_urbs(pegasus_t *pegasus) 814 803 { 804 + static const u8 bulk_ep_addr[] = { 805 + 1 | USB_DIR_IN, 806 + 2 | USB_DIR_OUT, 807 + 0}; 808 + static const u8 int_ep_addr[] = { 809 + 3 | USB_DIR_IN, 810 + 0}; 815 811 int res = -ENOMEM; 812 + 813 + if (!usb_check_bulk_endpoints(pegasus->intf, bulk_ep_addr) || 814 + !usb_check_int_endpoints(pegasus->intf, int_ep_addr)) 815 + return -ENODEV; 816 816 817 817 pegasus->rx_urb = usb_alloc_urb(0, GFP_KERNEL); 818 818 if (!pegasus->rx_urb) { ··· 859 837 
set_registers(pegasus, EthID, 6, net->dev_addr); 860 838 861 839 usb_fill_bulk_urb(pegasus->rx_urb, pegasus->usb, 862 - usb_rcvbulkpipe(pegasus->usb, 1), 840 + usb_rcvbulkpipe(pegasus->usb, PEGASUS_USB_EP_BULK_IN), 863 841 pegasus->rx_skb->data, PEGASUS_MTU, 864 842 read_bulk_callback, pegasus); 865 843 if ((res = usb_submit_urb(pegasus->rx_urb, GFP_KERNEL))) { ··· 870 848 } 871 849 872 850 usb_fill_int_urb(pegasus->intr_urb, pegasus->usb, 873 - usb_rcvintpipe(pegasus->usb, 3), 851 + usb_rcvintpipe(pegasus->usb, PEGASUS_USB_EP_INT_IN), 874 852 pegasus->intr_buff, sizeof(pegasus->intr_buff), 875 853 intr_callback, pegasus, pegasus->intr_interval); 876 854 if ((res = usb_submit_urb(pegasus->intr_urb, GFP_KERNEL))) { ··· 1155 1133 pegasus_t *pegasus; 1156 1134 int dev_index = id - pegasus_ids; 1157 1135 int res = -ENOMEM; 1136 + static const u8 bulk_ep_addr[] = { 1137 + PEGASUS_USB_EP_BULK_IN | USB_DIR_IN, 1138 + PEGASUS_USB_EP_BULK_OUT | USB_DIR_OUT, 1139 + 0}; 1140 + static const u8 int_ep_addr[] = { 1141 + PEGASUS_USB_EP_INT_IN | USB_DIR_IN, 1142 + 0}; 1158 1143 1159 1144 if (pegasus_blacklisted(dev)) 1160 1145 return -ENODEV; 1146 + 1147 + /* Verify that all required endpoints are present */ 1148 + if (!usb_check_bulk_endpoints(intf, bulk_ep_addr) || 1149 + !usb_check_int_endpoints(intf, int_ep_addr)) { 1150 + dev_err(&intf->dev, "Missing or invalid endpoints\n"); 1151 + return -ENODEV; 1152 + } 1161 1153 1162 1154 net = alloc_etherdev(sizeof(struct pegasus)); 1163 1155 if (!net) ··· 1179 1143 1180 1144 pegasus = netdev_priv(net); 1181 1145 pegasus->dev_index = dev_index; 1146 + pegasus->intf = intf; 1182 1147 1183 1148 res = alloc_urbs(pegasus); 1184 1149 if (res < 0) { ··· 1191 1154 1192 1155 INIT_DELAYED_WORK(&pegasus->carrier_check, check_carrier); 1193 1156 1194 - pegasus->intf = intf; 1195 1157 pegasus->usb = dev; 1196 1158 pegasus->net = net; 1197 1159
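The pegasus hunks above pass zero-terminated lists of expected endpoint addresses to usb_check_bulk_endpoints() / usb_check_int_endpoints(); each entry is an endpoint number OR-ed with a direction bit. A minimal userspace sketch of that encoding (the USB_DIR_* values come from the USB spec; `has_endpoint()` is a hypothetical stand-in for the in-kernel helpers):

```c
#include <stdbool.h>
#include <stdint.h>

#define USB_DIR_OUT 0x00 /* host-to-device */
#define USB_DIR_IN  0x80 /* device-to-host */

/* Endpoint numbers used by the pegasus driver */
enum pegasus_usb_ep {
	PEGASUS_USB_EP_CONTROL = 0,
	PEGASUS_USB_EP_BULK_IN = 1,
	PEGASUS_USB_EP_BULK_OUT = 2,
	PEGASUS_USB_EP_INT_IN = 3,
};

/* Return true if addr appears in a zero-terminated endpoint address
 * list, mirroring how the usb_check_*_endpoints() helpers walk their
 * argument. Address 0x00 is the control OUT endpoint and never appears
 * in such lists, which is why 0 can serve as the terminator.
 */
static bool has_endpoint(const uint8_t *list, uint8_t addr)
{
	for (; *list; list++)
		if (*list == addr)
			return true;
	return false;
}
```

Probing fails with -ENODEV when any required address is missing, before any URB is allocated.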
+2
drivers/net/wan/farsync.c
··· 2550 2550 2551 2551 fst_disable_intr(card); 2552 2552 free_irq(card->irq, card); 2553 + tasklet_kill(&fst_tx_task); 2554 + tasklet_kill(&fst_int_task); 2553 2555 2554 2556 iounmap(card->ctlmem); 2555 2557 iounmap(card->mem);
+3 -4
drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c
··· 951 951 goto out; 952 952 953 953 /* try to attach to the target device */ 954 - sdiodev->bus = brcmf_sdio_probe(sdiodev); 955 - if (IS_ERR(sdiodev->bus)) { 956 - ret = PTR_ERR(sdiodev->bus); 954 + ret = brcmf_sdio_probe(sdiodev); 955 + if (ret) 957 956 goto out; 958 - } 957 + 959 958 brcmf_sdiod_host_fixup(sdiodev->func2->card->host); 960 959 out: 961 960 if (ret)
+4 -3
drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
··· 4445 4445 return fwreq; 4446 4446 } 4447 4447 4448 - struct brcmf_sdio *brcmf_sdio_probe(struct brcmf_sdio_dev *sdiodev) 4448 + int brcmf_sdio_probe(struct brcmf_sdio_dev *sdiodev) 4449 4449 { 4450 4450 int ret; 4451 4451 struct brcmf_sdio *bus; ··· 4551 4551 goto fail; 4552 4552 } 4553 4553 4554 - return bus; 4554 + return 0; 4555 4555 4556 4556 fail: 4557 4557 brcmf_sdio_remove(bus); 4558 - return ERR_PTR(ret); 4558 + sdiodev->bus = NULL; 4559 + return ret; 4559 4560 } 4560 4561 4561 4562 /* Detach and free everything */
+1 -1
drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.h
··· 358 358 int brcmf_sdiod_probe(struct brcmf_sdio_dev *sdiodev); 359 359 int brcmf_sdiod_remove(struct brcmf_sdio_dev *sdiodev); 360 360 361 - struct brcmf_sdio *brcmf_sdio_probe(struct brcmf_sdio_dev *sdiodev); 361 + int brcmf_sdio_probe(struct brcmf_sdio_dev *sdiodev); 362 362 void brcmf_sdio_remove(struct brcmf_sdio *bus); 363 363 void brcmf_sdio_isr(struct brcmf_sdio *bus, bool in_isr); 364 364
+2 -2
drivers/net/wireless/marvell/libertas/main.c
··· 799 799 { 800 800 lbs_free_cmd_buffer(priv); 801 801 kfifo_free(&priv->event_fifo); 802 - timer_delete(&priv->command_timer); 803 - timer_delete(&priv->tx_lockup_timer); 802 + timer_delete_sync(&priv->command_timer); 803 + timer_delete_sync(&priv->tx_lockup_timer); 804 804 } 805 805 806 806 static const struct net_device_ops lbs_netdev_ops = {
+1 -1
drivers/net/wireless/marvell/mwifiex/cfg80211.c
··· 3148 3148 SET_NETDEV_DEV(dev, adapter->dev); 3149 3149 3150 3150 ret = dev_alloc_name(dev, name); 3151 - if (ret) 3151 + if (ret < 0) 3152 3152 goto err_alloc_name; 3153 3153 3154 3154 priv->dfs_cac_workqueue = alloc_workqueue("MWIFIEX_DFS_CAC-%s",
+1
drivers/nfc/pn533/usb.c
··· 628 628 usb_free_urb(phy->out_urb); 629 629 usb_free_urb(phy->ack_urb); 630 630 kfree(phy->ack_buffer); 631 + usb_put_dev(phy->udev); 631 632 632 633 nfc_info(&interface->dev, "NXP PN533 NFC device disconnected\n"); 633 634 }
+11 -2
include/net/af_vsock.h
··· 276 276 return vsock_net_mode(sock_net(sk_vsock(vsk))) == VSOCK_NET_MODE_GLOBAL; 277 277 } 278 278 279 - static inline void vsock_net_set_child_mode(struct net *net, 279 + static inline bool vsock_net_set_child_mode(struct net *net, 280 280 enum vsock_net_mode mode) 281 281 { 282 - WRITE_ONCE(net->vsock.child_ns_mode, mode); 282 + int new_locked = mode + 1; 283 + int old_locked = 0; /* unlocked */ 284 + 285 + if (try_cmpxchg(&net->vsock.child_ns_mode_locked, 286 + &old_locked, new_locked)) { 287 + WRITE_ONCE(net->vsock.child_ns_mode, mode); 288 + return true; 289 + } 290 + 291 + return old_locked == new_locked; 283 292 } 284 293 285 294 static inline enum vsock_net_mode vsock_net_child_mode(struct net *net)
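The new vsock_net_set_child_mode() makes child_ns_mode effectively write-once: the first try_cmpxchg() latches the requested mode (stored as mode + 1 so that 0 can mean "unset"), and later calls succeed only when they ask for the mode already latched. The same pattern in portable C11 atomics (types and names here are illustrative, not the kernel API):

```c
#include <stdatomic.h>
#include <stdbool.h>

enum net_mode { MODE_GLOBAL, MODE_LOCAL };

struct ns_cfg {
	atomic_int mode_locked; /* 0 = unset, otherwise mode + 1 */
	enum net_mode mode;
};

/* The first caller latches the mode; later callers succeed only when
 * they request the mode that was already latched.
 */
static bool set_child_mode_once(struct ns_cfg *cfg, enum net_mode mode)
{
	int expected = 0;
	int latched = (int)mode + 1;

	if (atomic_compare_exchange_strong(&cfg->mode_locked,
					   &expected, latched)) {
		cfg->mode = mode;
		return true;
	}
	/* on failure, expected holds the value latched by the winner */
	return expected == latched;
}
```

The compare-exchange resolves races between concurrent writers without a lock, which is why no additional serialization is needed around the WRITE_ONCE() of the mode itself.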
+5 -3
include/net/bluetooth/l2cap.h
··· 284 284 #define L2CAP_CR_LE_BAD_KEY_SIZE 0x0007 285 285 #define L2CAP_CR_LE_ENCRYPTION 0x0008 286 286 #define L2CAP_CR_LE_INVALID_SCID 0x0009 287 - #define L2CAP_CR_LE_SCID_IN_USE 0X000A 288 - #define L2CAP_CR_LE_UNACCEPT_PARAMS 0X000B 289 - #define L2CAP_CR_LE_INVALID_PARAMS 0X000C 287 + #define L2CAP_CR_LE_SCID_IN_USE 0x000A 288 + #define L2CAP_CR_LE_UNACCEPT_PARAMS 0x000B 289 + #define L2CAP_CR_LE_INVALID_PARAMS 0x000C 290 290 291 291 /* connect/create channel status */ 292 292 #define L2CAP_CS_NO_INFO 0x0000 ··· 493 493 #define L2CAP_RECONF_SUCCESS 0x0000 494 494 #define L2CAP_RECONF_INVALID_MTU 0x0001 495 495 #define L2CAP_RECONF_INVALID_MPS 0x0002 496 + #define L2CAP_RECONF_INVALID_CID 0x0003 497 + #define L2CAP_RECONF_INVALID_PARAMS 0x0004 496 498 497 499 struct l2cap_ecred_reconf_rsp { 498 500 __le16 result;
+3 -1
include/net/inet_connection_sock.h
··· 42 42 struct request_sock *req, 43 43 struct dst_entry *dst, 44 44 struct request_sock *req_unhash, 45 - bool *own_req); 45 + bool *own_req, 46 + void (*opt_child_init)(struct sock *newsk, 47 + const struct sock *sk)); 46 48 u16 net_header_len; 47 49 int (*setsockopt)(struct sock *sk, int level, int optname, 48 50 sockptr_t optval, unsigned int optlen);
+3
include/net/netns/vsock.h
··· 17 17 18 18 enum vsock_net_mode mode; 19 19 enum vsock_net_mode child_ns_mode; 20 + 21 + /* 0 = unlocked, 1 = locked to global, 2 = locked to local */ 22 + int child_ns_mode_locked; 20 23 }; 21 24 #endif /* __NET_NET_NAMESPACE_VSOCK_H */
+1 -1
include/net/sock.h
··· 2098 2098 2099 2099 static inline void sk_set_socket(struct sock *sk, struct socket *sock) 2100 2100 { 2101 - sk->sk_socket = sock; 2101 + WRITE_ONCE(sk->sk_socket, sock); 2102 2102 if (sock) { 2103 2103 WRITE_ONCE(sk->sk_uid, SOCK_INODE(sock)->i_uid); 2104 2104 WRITE_ONCE(sk->sk_ino, SOCK_INODE(sock)->i_ino);
+3 -1
include/net/tcp.h
··· 544 544 struct request_sock *req, 545 545 struct dst_entry *dst, 546 546 struct request_sock *req_unhash, 547 - bool *own_req); 547 + bool *own_req, 548 + void (*opt_child_init)(struct sock *newsk, 549 + const struct sock *sk)); 548 550 int tcp_v4_do_rcv(struct sock *sk, struct sk_buff *skb); 549 551 int tcp_v4_connect(struct sock *sk, struct sockaddr_unsized *uaddr, int addr_len); 550 552 int tcp_connect(struct sock *sk);
+1
net/bluetooth/hci_sock.c
··· 2166 2166 mgmt_cleanup(sk); 2167 2167 skb_queue_purge(&sk->sk_receive_queue); 2168 2168 skb_queue_purge(&sk->sk_write_queue); 2169 + skb_queue_purge(&sk->sk_error_queue); 2169 2170 } 2170 2171 2171 2172 static const struct proto_ops hci_sock_ops = {
+1 -1
net/bluetooth/hci_sync.c
··· 4592 4592 { 4593 4593 int err; 4594 4594 4595 - if (iso_capable(hdev)) { 4595 + if (cis_capable(hdev)) { 4596 4596 /* Connected Isochronous Channels (Host Support) */ 4597 4597 err = hci_le_set_host_feature_sync(hdev, 32, 4598 4598 (iso_enabled(hdev) ? 0x01 :
+1
net/bluetooth/iso.c
··· 746 746 747 747 skb_queue_purge(&sk->sk_receive_queue); 748 748 skb_queue_purge(&sk->sk_write_queue); 749 + skb_queue_purge(&sk->sk_error_queue); 749 750 } 750 751 751 752 static void iso_sock_cleanup_listen(struct sock *parent)
+74 -29
net/bluetooth/l2cap_core.c
··· 4916 4916 goto response_unlock; 4917 4917 } 4918 4918 4919 + /* Check if Key Size is sufficient for the security level */ 4920 + if (!l2cap_check_enc_key_size(conn->hcon, pchan)) { 4921 + result = L2CAP_CR_LE_BAD_KEY_SIZE; 4922 + chan = NULL; 4923 + goto response_unlock; 4924 + } 4925 + 4919 4926 /* Check for valid dynamic CID range */ 4920 4927 if (scid < L2CAP_CID_DYN_START || scid > L2CAP_CID_LE_DYN_END) { 4921 4928 result = L2CAP_CR_LE_INVALID_SCID; ··· 5058 5051 struct l2cap_chan *chan, *pchan; 5059 5052 u16 mtu, mps; 5060 5053 __le16 psm; 5061 - u8 result, len = 0; 5054 + u8 result, rsp_len = 0; 5062 5055 int i, num_scid; 5063 5056 bool defer = false; 5064 5057 5065 5058 if (!enable_ecred) 5066 5059 return -EINVAL; 5060 + 5061 + memset(pdu, 0, sizeof(*pdu)); 5067 5062 5068 5063 if (cmd_len < sizeof(*req) || (cmd_len - sizeof(*req)) % sizeof(u16)) { 5069 5064 result = L2CAP_CR_LE_INVALID_PARAMS; ··· 5074 5065 5075 5066 cmd_len -= sizeof(*req); 5076 5067 num_scid = cmd_len / sizeof(u16); 5068 + 5069 + /* Always respond with the same number of scids as in the request */ 5070 + rsp_len = cmd_len; 5077 5071 5078 5072 if (num_scid > L2CAP_ECRED_MAX_CID) { 5079 5073 result = L2CAP_CR_LE_INVALID_PARAMS; ··· 5087 5075 mps = __le16_to_cpu(req->mps); 5088 5076 5089 5077 if (mtu < L2CAP_ECRED_MIN_MTU || mps < L2CAP_ECRED_MIN_MPS) { 5090 - result = L2CAP_CR_LE_UNACCEPT_PARAMS; 5078 + result = L2CAP_CR_LE_INVALID_PARAMS; 5091 5079 goto response; 5092 5080 } 5093 5081 ··· 5107 5095 5108 5096 BT_DBG("psm 0x%2.2x mtu %u mps %u", __le16_to_cpu(psm), mtu, mps); 5109 5097 5110 - memset(pdu, 0, sizeof(*pdu)); 5111 - 5112 5098 /* Check if we have socket listening on psm */ 5113 5099 pchan = l2cap_global_chan_by_psm(BT_LISTEN, psm, &conn->hcon->src, 5114 5100 &conn->hcon->dst, LE_LINK); ··· 5119 5109 5120 5110 if (!smp_sufficient_security(conn->hcon, pchan->sec_level, 5121 5111 SMP_ALLOW_STK)) { 5122 - result = L2CAP_CR_LE_AUTHENTICATION; 5112 + result = pchan->sec_level == 
BT_SECURITY_MEDIUM ? 5113 + L2CAP_CR_LE_ENCRYPTION : L2CAP_CR_LE_AUTHENTICATION; 5114 + goto unlock; 5115 + } 5116 + 5117 + /* Check if the listening channel has set an output MTU then the 5118 + * requested MTU shall be less than or equal to that value. 5119 + */ 5120 + if (pchan->omtu && mtu < pchan->omtu) { 5121 + result = L2CAP_CR_LE_UNACCEPT_PARAMS; 5123 5122 goto unlock; 5124 5123 } 5125 5124 ··· 5140 5121 BT_DBG("scid[%d] 0x%4.4x", i, scid); 5141 5122 5142 5123 pdu->dcid[i] = 0x0000; 5143 - len += sizeof(*pdu->dcid); 5144 5124 5145 5125 /* Check for valid dynamic CID range */ 5146 5126 if (scid < L2CAP_CID_DYN_START || scid > L2CAP_CID_LE_DYN_END) { ··· 5206 5188 return 0; 5207 5189 5208 5190 l2cap_send_cmd(conn, cmd->ident, L2CAP_ECRED_CONN_RSP, 5209 - sizeof(*pdu) + len, pdu); 5191 + sizeof(*pdu) + rsp_len, pdu); 5210 5192 5211 5193 return 0; 5212 5194 } ··· 5328 5310 struct l2cap_ecred_reconf_req *req = (void *) data; 5329 5311 struct l2cap_ecred_reconf_rsp rsp; 5330 5312 u16 mtu, mps, result; 5331 - struct l2cap_chan *chan; 5313 + struct l2cap_chan *chan[L2CAP_ECRED_MAX_CID] = {}; 5332 5314 int i, num_scid; 5333 5315 5334 5316 if (!enable_ecred) 5335 5317 return -EINVAL; 5336 5318 5337 - if (cmd_len < sizeof(*req) || cmd_len - sizeof(*req) % sizeof(u16)) { 5338 - result = L2CAP_CR_LE_INVALID_PARAMS; 5319 + if (cmd_len < sizeof(*req) || (cmd_len - sizeof(*req)) % sizeof(u16)) { 5320 + result = L2CAP_RECONF_INVALID_CID; 5339 5321 goto respond; 5340 5322 } 5341 5323 ··· 5345 5327 BT_DBG("mtu %u mps %u", mtu, mps); 5346 5328 5347 5329 if (mtu < L2CAP_ECRED_MIN_MTU) { 5348 - result = L2CAP_RECONF_INVALID_MTU; 5330 + result = L2CAP_RECONF_INVALID_PARAMS; 5349 5331 goto respond; 5350 5332 } 5351 5333 5352 5334 if (mps < L2CAP_ECRED_MIN_MPS) { 5353 - result = L2CAP_RECONF_INVALID_MPS; 5335 + result = L2CAP_RECONF_INVALID_PARAMS; 5354 5336 goto respond; 5355 5337 } 5356 5338 5357 5339 cmd_len -= sizeof(*req); 5358 5340 num_scid = cmd_len / sizeof(u16); 5341 + 
5342 + if (num_scid > L2CAP_ECRED_MAX_CID) { 5343 + result = L2CAP_RECONF_INVALID_PARAMS; 5344 + goto respond; 5345 + } 5346 + 5359 5347 result = L2CAP_RECONF_SUCCESS; 5360 5348 5349 + /* Check if each SCID, MTU and MPS are valid */ 5361 5350 for (i = 0; i < num_scid; i++) { 5362 5351 u16 scid; 5363 5352 5364 5353 scid = __le16_to_cpu(req->scid[i]); 5365 - if (!scid) 5366 - return -EPROTO; 5367 - 5368 - chan = __l2cap_get_chan_by_dcid(conn, scid); 5369 - if (!chan) 5370 - continue; 5371 - 5372 - /* If the MTU value is decreased for any of the included 5373 - * channels, then the receiver shall disconnect all 5374 - * included channels. 5375 - */ 5376 - if (chan->omtu > mtu) { 5377 - BT_ERR("chan %p decreased MTU %u -> %u", chan, 5378 - chan->omtu, mtu); 5379 - result = L2CAP_RECONF_INVALID_MTU; 5354 + if (!scid) { 5355 + result = L2CAP_RECONF_INVALID_CID; 5356 + goto respond; 5380 5357 } 5381 5358 5382 - chan->omtu = mtu; 5383 - chan->remote_mps = mps; 5359 + chan[i] = __l2cap_get_chan_by_dcid(conn, scid); 5360 + if (!chan[i]) { 5361 + result = L2CAP_RECONF_INVALID_CID; 5362 + goto respond; 5363 + } 5364 + 5365 + /* The MTU field shall be greater than or equal to the greatest 5366 + * current MTU size of these channels. 5367 + */ 5368 + if (chan[i]->omtu > mtu) { 5369 + BT_ERR("chan %p decreased MTU %u -> %u", chan[i], 5370 + chan[i]->omtu, mtu); 5371 + result = L2CAP_RECONF_INVALID_MTU; 5372 + goto respond; 5373 + } 5374 + 5375 + /* If more than one channel is being configured, the MPS field 5376 + * shall be greater than or equal to the current MPS size of 5377 + * each of these channels. If only one channel is being 5378 + * configured, the MPS field may be less than the current MPS 5379 + * of that channel. 
*/ 5381 + if (chan[i]->remote_mps > mps && num_scid > 1) { 5382 + BT_ERR("chan %p decreased MPS %u -> %u", chan[i], 5383 + chan[i]->remote_mps, mps); 5384 + result = L2CAP_RECONF_INVALID_MPS; 5385 + goto respond; 5386 + } 5387 + } 5388 + 5389 + /* Commit the new MTU and MPS values after checking they are valid */ 5390 + for (i = 0; i < num_scid; i++) { 5391 + chan[i]->omtu = mtu; 5392 + chan[i]->remote_mps = mps; 5384 5393 } 5385 5394 5386 5395 respond:
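The reconfiguration handler above now resolves every SCID into a local array and validates the whole request before touching any channel state; only when every check passes does a second loop commit the new values. That two-phase shape, reduced to plain C (the structures are illustrative, not the kernel's):

```c
#include <stdbool.h>
#include <stddef.h>

struct chan { unsigned int omtu; };

/* Two-phase update: validate every entry first, then commit, so a
 * failure on a later entry cannot leave earlier entries half-updated.
 */
static bool reconf_channels(struct chan *chans, size_t n, unsigned int new_mtu)
{
	size_t i;

	/* Phase 1: checks only, no side effects */
	for (i = 0; i < n; i++)
		if (chans[i].omtu > new_mtu) /* the MTU may not shrink */
			return false;

	/* Phase 2: commit; nothing can fail past this point */
	for (i = 0; i < n; i++)
		chans[i].omtu = new_mtu;

	return true;
}
```

The pre-fix code committed per channel inside the validation loop, so a request that failed on its second SCID had already mutated the first channel.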
+12 -4
net/bluetooth/l2cap_sock.c
··· 1029 1029 break; 1030 1030 } 1031 1031 1032 - /* Setting is not supported as it's the remote side that 1033 - * decides this. 1034 - */ 1035 - err = -EPERM; 1032 + /* Only allow setting output MTU when not connected */ 1033 + if (sk->sk_state == BT_CONNECTED) { 1034 + err = -EISCONN; 1035 + break; 1036 + } 1037 + 1038 + err = copy_safe_from_sockptr(&mtu, sizeof(mtu), optval, optlen); 1039 + if (err) 1040 + break; 1041 + 1042 + chan->omtu = mtu; 1036 1043 break; 1037 1044 1038 1045 case BT_RCVMTU: ··· 1823 1816 1824 1817 skb_queue_purge(&sk->sk_receive_queue); 1825 1818 skb_queue_purge(&sk->sk_write_queue); 1819 + skb_queue_purge(&sk->sk_error_queue); 1826 1820 } 1827 1821 1828 1822 static void l2cap_skb_msg_name(struct sk_buff *skb, void *msg_name,
+1
net/bluetooth/sco.c
··· 470 470 471 471 skb_queue_purge(&sk->sk_receive_queue); 472 472 skb_queue_purge(&sk->sk_write_queue); 473 + skb_queue_purge(&sk->sk_error_queue); 473 474 } 474 475 475 476 static void sco_sock_cleanup_listen(struct sock *parent)
+23 -12
net/core/dev.c
··· 4822 4822 * to -1 or to their cpu id, but not to our id. 4823 4823 */ 4824 4824 if (READ_ONCE(txq->xmit_lock_owner) != cpu) { 4825 + bool is_list = false; 4826 + 4825 4827 if (dev_xmit_recursion()) 4826 4828 goto recursion_alert; 4827 4829 ··· 4834 4832 HARD_TX_LOCK(dev, txq, cpu); 4835 4833 4836 4834 if (!netif_xmit_stopped(txq)) { 4835 + is_list = !!skb->next; 4836 + 4837 4837 dev_xmit_recursion_inc(); 4838 4838 skb = dev_hard_start_xmit(skb, dev, txq, &rc); 4839 4839 dev_xmit_recursion_dec(); 4840 - if (dev_xmit_complete(rc)) { 4841 - HARD_TX_UNLOCK(dev, txq); 4842 - goto out; 4843 - } 4840 + 4841 + /* GSO segments a single SKB into 4842 + * a list of frames. TCP expects error 4843 + * to mean none of the data was sent. 4844 + */ 4845 + if (is_list) 4846 + rc = NETDEV_TX_OK; 4844 4847 } 4845 4848 HARD_TX_UNLOCK(dev, txq); 4849 + if (!skb) /* xmit completed */ 4850 + goto out; 4851 + 4846 4852 net_crit_ratelimited("Virtual device %s asks to queue packet!\n", 4847 4853 dev->name); 4854 + /* NETDEV_TX_BUSY or queue was stopped */ 4855 + if (!is_list) 4856 + rc = -ENETDOWN; 4848 4857 } else { 4849 4858 /* Recursion is detected! 
It is possible, 4850 4859 * unfortunately ··· 4863 4850 recursion_alert: 4864 4851 net_crit_ratelimited("Dead loop on virtual device %s, fix it urgently!\n", 4865 4852 dev->name); 4853 + rc = -ENETDOWN; 4866 4854 } 4867 4855 } 4868 4856 4869 - rc = -ENETDOWN; 4870 4857 rcu_read_unlock_bh(); 4871 4858 4872 4859 dev_core_stats_tx_dropped_inc(dev); ··· 5005 4992 5006 4993 static struct rps_dev_flow * 5007 4994 set_rps_cpu(struct net_device *dev, struct sk_buff *skb, 5008 - struct rps_dev_flow *rflow, u16 next_cpu, u32 hash, 5009 - u32 flow_id) 4995 + struct rps_dev_flow *rflow, u16 next_cpu, u32 hash) 5010 4996 { 5011 4997 if (next_cpu < nr_cpu_ids) { 5012 4998 u32 head; ··· 5016 5004 struct rps_dev_flow *tmp_rflow; 5017 5005 unsigned int tmp_cpu; 5018 5006 u16 rxq_index; 5007 + u32 flow_id; 5019 5008 int rc; 5020 5009 5021 5010 /* Should we steer this flow to a different hardware queue? */ ··· 5032 5019 if (!flow_table) 5033 5020 goto out; 5034 5021 5022 + flow_id = rfs_slot(hash, flow_table); 5035 5023 tmp_rflow = &flow_table->flows[flow_id]; 5036 5024 tmp_cpu = READ_ONCE(tmp_rflow->cpu); 5037 5025 ··· 5080 5066 struct rps_dev_flow_table *flow_table; 5081 5067 struct rps_map *map; 5082 5068 int cpu = -1; 5083 - u32 flow_id; 5084 5069 u32 tcpu; 5085 5070 u32 hash; 5086 5071 ··· 5126 5113 /* OK, now we know there is a match, 5127 5114 * we can look at the local (per receive queue) flow table 5128 5115 */ 5129 - flow_id = rfs_slot(hash, flow_table); 5130 - rflow = &flow_table->flows[flow_id]; 5116 + rflow = &flow_table->flows[rfs_slot(hash, flow_table)]; 5131 5117 tcpu = rflow->cpu; 5132 5118 5133 5119 /* ··· 5145 5133 ((int)(READ_ONCE(per_cpu(softnet_data, tcpu).input_queue_head) - 5146 5134 rflow->last_qtail)) >= 0)) { 5147 5135 tcpu = next_cpu; 5148 - rflow = set_rps_cpu(dev, skb, rflow, next_cpu, hash, 5149 - flow_id); 5136 + rflow = set_rps_cpu(dev, skb, rflow, next_cpu, hash); 5150 5137 } 5151 5138 5152 5139 if (tcpu < nr_cpu_ids && cpu_online(tcpu)) {
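The dev.c change alters what a partial failure means for a GSO segment list: once any segment has hit the wire, returning an error would make TCP resend data that was already transmitted, so the return code is forced to NETDEV_TX_OK for lists. A reduced model of that rule (function and constants are illustrative, not kernel API):

```c
#define TX_OK   0
#define TX_BUSY (-1)

/* Model: transmit up to n segments; the device stops accepting after
 * accept_limit of them. A multi-segment (GSO) list must still report
 * success on a partial send, because TCP treats an error as "none of
 * the data was sent" and would retransmit everything.
 */
static int xmit_segments(int n, int accept_limit, int *sent)
{
	int rc = TX_OK;

	*sent = 0;
	while (*sent < n) {
		if (*sent >= accept_limit) {
			rc = TX_BUSY;
			break;
		}
		(*sent)++;
	}
	if (n > 1) /* list: consume the error */
		rc = TX_OK;
	return rc;
}
```

A single skb keeps its real return code, since nothing was sent when it fails.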
+18 -5
net/core/skbuff.c
··· 5590 5590 5591 5591 static bool skb_may_tx_timestamp(struct sock *sk, bool tsonly) 5592 5592 { 5593 - bool ret; 5593 + struct socket *sock; 5594 + struct file *file; 5595 + bool ret = false; 5594 5596 5595 5597 if (likely(tsonly || READ_ONCE(sock_net(sk)->core.sysctl_tstamp_allow_data))) 5596 5598 return true; 5597 5599 5598 - read_lock_bh(&sk->sk_callback_lock); 5599 - ret = sk->sk_socket && sk->sk_socket->file && 5600 - file_ns_capable(sk->sk_socket->file, &init_user_ns, CAP_NET_RAW); 5601 - read_unlock_bh(&sk->sk_callback_lock); 5600 + /* The sk pointer remains valid as long as the skb is. The sk_socket and 5601 + * file pointer may become NULL if the socket is closed. Both structures 5602 + * (including file->cred) are RCU freed which means they can be accessed 5603 + * within a RCU read section. 5604 + */ 5605 + rcu_read_lock(); 5606 + sock = READ_ONCE(sk->sk_socket); 5607 + if (!sock) 5608 + goto out; 5609 + file = READ_ONCE(sock->file); 5610 + if (!file) 5611 + goto out; 5612 + ret = file_ns_capable(file, &init_user_ns, CAP_NET_RAW); 5613 + out: 5614 + rcu_read_unlock(); 5602 5615 return ret; 5603 5616 } 5604 5617
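The skb_may_tx_timestamp() rewrite loads sk->sk_socket and sock->file exactly once each (READ_ONCE) into locals before testing them, so a concurrent close() that NULLs the fields cannot be observed as non-NULL at the check yet NULL at the dereference; RCU keeps the snapshotted objects alive. The load-once shape in C11 atomics (the structures are illustrative stand-ins, and this sketch omits the RCU lifetime guarantee):

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct file { bool cap_net_raw; };
struct socket { _Atomic(struct file *) file; };
struct sock { _Atomic(struct socket *) sk_socket; };

/* Snapshot each shared pointer into a local exactly once; all later
 * uses go through the local, never back through the shared field.
 */
static bool may_timestamp(struct sock *sk)
{
	struct socket *sock = atomic_load(&sk->sk_socket);
	struct file *file;

	if (!sock)
		return false;
	file = atomic_load(&sock->file);
	if (!file)
		return false;
	return file->cap_net_raw;
}
```

This is also why the companion hunks in sock.h and socket.c convert the writers of sk_socket and sock->file to WRITE_ONCE().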
+1 -1
net/ipv4/syncookies.c
··· 203 203 bool own_req; 204 204 205 205 child = icsk->icsk_af_ops->syn_recv_sock(sk, skb, req, dst, 206 - NULL, &own_req); 206 + NULL, &own_req, NULL); 207 207 if (child) { 208 208 refcount_set(&req->rsk_refcnt, 1); 209 209 sock_rps_save_rxhash(child, skb);
+1 -1
net/ipv4/tcp_fastopen.c
··· 333 333 bool own_req; 334 334 335 335 child = inet_csk(sk)->icsk_af_ops->syn_recv_sock(sk, skb, req, NULL, 336 - NULL, &own_req); 336 + NULL, &own_req, NULL); 337 337 if (!child) 338 338 return NULL; 339 339
+14 -4
net/ipv4/tcp_input.c
··· 4858 4858 */ 4859 4859 4860 4860 static enum skb_drop_reason tcp_sequence(const struct sock *sk, 4861 - u32 seq, u32 end_seq) 4861 + u32 seq, u32 end_seq, 4862 + const struct tcphdr *th) 4862 4863 { 4863 4864 const struct tcp_sock *tp = tcp_sk(sk); 4865 + u32 seq_limit; 4864 4866 4865 4867 if (before(end_seq, tp->rcv_wup)) 4866 4868 return SKB_DROP_REASON_TCP_OLD_SEQUENCE; 4867 4869 4868 - if (after(end_seq, tp->rcv_nxt + tcp_receive_window(tp))) { 4869 - if (after(seq, tp->rcv_nxt + tcp_receive_window(tp))) 4870 + seq_limit = tp->rcv_nxt + tcp_receive_window(tp); 4871 + if (unlikely(after(end_seq, seq_limit))) { 4872 + /* Some stacks are known to handle FIN incorrectly; allow the 4873 + * FIN to extend beyond the window and check it in detail later. 4874 + */ 4875 + if (!after(end_seq - th->fin, seq_limit)) 4876 + return SKB_NOT_DROPPED_YET; 4877 + 4878 + if (after(seq, seq_limit)) 4870 4879 return SKB_DROP_REASON_TCP_INVALID_SEQUENCE; 4871 4880 4872 4881 /* Only accept this packet if receive queue is empty. */ ··· 6388 6379 6389 6380 step1: 6390 6381 /* Step 1: check sequence number */ 6391 - reason = tcp_sequence(sk, TCP_SKB_CB(skb)->seq, TCP_SKB_CB(skb)->end_seq); 6382 + reason = tcp_sequence(sk, TCP_SKB_CB(skb)->seq, 6383 + TCP_SKB_CB(skb)->end_seq, th); 6392 6384 if (reason) { 6393 6385 /* RFC793, page 37: "In all states except SYN-SENT, all reset 6394 6386 * (RST) segments are validated by checking their SEQ-fields."
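TCP sequence numbers wrap, so the kernel compares them with before()/after(), i.e. signed 32-bit differences; the tcp_sequence() fix subtracts th->fin from end_seq before the window test so a bare FIN at the right edge of a zero receive window is no longer dropped. A simplified model of the fixed check (the real function also consults rcv_wup and the receive-queue state):

```c
#include <stdbool.h>
#include <stdint.h>

/* Wrap-safe sequence comparisons, as in include/net/tcp.h */
static bool seq_before(uint32_t a, uint32_t b) { return (int32_t)(a - b) < 0; }
static bool seq_after(uint32_t a, uint32_t b)  { return seq_before(b, a); }

/* Simplified model of the fixed window check: a FIN flag may sit one
 * byte past the advertised window without causing a drop.
 */
static bool seq_in_window(uint32_t end_seq, bool fin,
			  uint32_t rcv_nxt, uint32_t window)
{
	uint32_t seq_limit = rcv_nxt + window;

	if (seq_before(end_seq, rcv_nxt))
		return false; /* entirely old data */
	if (seq_after(end_seq - (fin ? 1u : 0u), seq_limit))
		return false; /* beyond the window even ignoring the FIN */
	return true;
}
```

Because only a difference is ever compared, the test behaves identically across the 2^32 wrap.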
+7 -1
net/ipv4/tcp_ipv4.c
··· 1705 1705 struct request_sock *req, 1706 1706 struct dst_entry *dst, 1707 1707 struct request_sock *req_unhash, 1708 - bool *own_req) 1708 + bool *own_req, 1709 + void (*opt_child_init)(struct sock *newsk, 1710 + const struct sock *sk)) 1709 1711 { 1710 1712 struct inet_request_sock *ireq; 1711 1713 bool found_dup_sk = false; ··· 1759 1757 } 1760 1758 sk_setup_caps(newsk, dst); 1761 1759 1760 + #if IS_ENABLED(CONFIG_IPV6) 1761 + if (opt_child_init) 1762 + opt_child_init(newsk, sk); 1763 + #endif 1762 1764 tcp_ca_openreq_child(newsk, dst); 1763 1765 1764 1766 tcp_sync_mss(newsk, dst4_mtu(dst));
+1 -1
net/ipv4/tcp_minisocks.c
··· 925 925 * socket is created, wait for troubles. 926 926 */ 927 927 child = inet_csk(sk)->icsk_af_ops->syn_recv_sock(sk, skb, req, NULL, 928 - req, &own_req); 928 + req, &own_req, NULL); 929 929 if (!child) 930 930 goto listen_overflow; 931 931
+1 -2
net/ipv4/udplite.c
··· 20 20 /* Designate sk as UDP-Lite socket */ 21 21 static int udplite_sk_init(struct sock *sk) 22 22 { 23 - udp_init_sock(sk); 24 23 pr_warn_once("UDP-Lite is deprecated and scheduled to be removed in 2025, " 25 24 "please contact the netdev mailing list\n"); 26 - return 0; 25 + return udp_init_sock(sk); 27 26 } 28 27 29 28 static int udplite_rcv(struct sk_buff *skb)
+42 -56
net/ipv6/tcp_ipv6.c
··· 1312 1312 sizeof(struct inet6_skb_parm)); 1313 1313 } 1314 1314 1315 + /* Called from tcp_v4_syn_recv_sock() for v6_mapped children. */ 1316 + static void tcp_v6_mapped_child_init(struct sock *newsk, const struct sock *sk) 1317 + { 1318 + struct inet_sock *newinet = inet_sk(newsk); 1319 + struct ipv6_pinfo *newnp; 1320 + 1321 + newinet->pinet6 = newnp = tcp_inet6_sk(newsk); 1322 + newinet->ipv6_fl_list = NULL; 1323 + 1324 + memcpy(newnp, tcp_inet6_sk(sk), sizeof(struct ipv6_pinfo)); 1325 + 1326 + newnp->saddr = newsk->sk_v6_rcv_saddr; 1327 + 1328 + inet_csk(newsk)->icsk_af_ops = &ipv6_mapped; 1329 + if (sk_is_mptcp(newsk)) 1330 + mptcpv6_handle_mapped(newsk, true); 1331 + newsk->sk_backlog_rcv = tcp_v4_do_rcv; 1332 + #if defined(CONFIG_TCP_MD5SIG) || defined(CONFIG_TCP_AO) 1333 + tcp_sk(newsk)->af_specific = &tcp_sock_ipv6_mapped_specific; 1334 + #endif 1335 + 1336 + newnp->ipv6_mc_list = NULL; 1337 + newnp->ipv6_ac_list = NULL; 1338 + newnp->pktoptions = NULL; 1339 + newnp->opt = NULL; 1340 + 1341 + /* tcp_v4_syn_recv_sock() has initialized newinet->mc_{index,ttl} */ 1342 + newnp->mcast_oif = newinet->mc_index; 1343 + newnp->mcast_hops = newinet->mc_ttl; 1344 + 1345 + newnp->rcv_flowinfo = 0; 1346 + if (inet6_test_bit(REPFLOW, sk)) 1347 + newnp->flow_label = 0; 1348 + } 1349 + 1315 1350 static struct sock *tcp_v6_syn_recv_sock(const struct sock *sk, struct sk_buff *skb, 1316 1351 struct request_sock *req, 1317 1352 struct dst_entry *dst, 1318 1353 struct request_sock *req_unhash, 1319 - bool *own_req) 1354 + bool *own_req, 1355 + void (*opt_child_init)(struct sock *newsk, 1356 + const struct sock *sk)) 1320 1357 { 1321 1358 const struct ipv6_pinfo *np = tcp_inet6_sk(sk); 1322 1359 struct inet_request_sock *ireq; ··· 1369 1332 #endif 1370 1333 struct flowi6 fl6; 1371 1334 1372 - if (skb->protocol == htons(ETH_P_IP)) { 1373 - /* 1374 - * v6 mapped 1375 - */ 1376 - 1377 - newsk = tcp_v4_syn_recv_sock(sk, skb, req, dst, 1378 - req_unhash, own_req); 1379 - 1380 - 
if (!newsk) 1381 - return NULL; 1382 - 1383 - newinet = inet_sk(newsk); 1384 - newinet->pinet6 = tcp_inet6_sk(newsk); 1385 - newinet->ipv6_fl_list = NULL; 1386 - 1387 - newnp = tcp_inet6_sk(newsk); 1388 - newtp = tcp_sk(newsk); 1389 - 1390 - memcpy(newnp, np, sizeof(struct ipv6_pinfo)); 1391 - 1392 - newnp->saddr = newsk->sk_v6_rcv_saddr; 1393 - 1394 - inet_csk(newsk)->icsk_af_ops = &ipv6_mapped; 1395 - if (sk_is_mptcp(newsk)) 1396 - mptcpv6_handle_mapped(newsk, true); 1397 - newsk->sk_backlog_rcv = tcp_v4_do_rcv; 1398 - #if defined(CONFIG_TCP_MD5SIG) || defined(CONFIG_TCP_AO) 1399 - newtp->af_specific = &tcp_sock_ipv6_mapped_specific; 1400 - #endif 1401 - 1402 - newnp->ipv6_mc_list = NULL; 1403 - newnp->ipv6_ac_list = NULL; 1404 - newnp->pktoptions = NULL; 1405 - newnp->opt = NULL; 1406 - newnp->mcast_oif = inet_iif(skb); 1407 - newnp->mcast_hops = ip_hdr(skb)->ttl; 1408 - newnp->rcv_flowinfo = 0; 1409 - if (inet6_test_bit(REPFLOW, sk)) 1410 - newnp->flow_label = 0; 1411 - 1412 - /* 1413 - * No need to charge this sock to the relevant IPv6 refcnt debug socks count 1414 - * here, tcp_create_openreq_child now does this for us, see the comment in 1415 - * that function for the gory details. -acme 1416 - */ 1417 - 1418 - /* It is tricky place. Until this moment IPv4 tcp 1419 - worked with IPv6 icsk.icsk_af_ops. 1420 - Sync it now. 1421 - */ 1422 - tcp_sync_mss(newsk, inet_csk(newsk)->icsk_pmtu_cookie); 1423 - 1424 - return newsk; 1425 - } 1426 - 1335 + if (skb->protocol == htons(ETH_P_IP)) 1336 + return tcp_v4_syn_recv_sock(sk, skb, req, dst, 1337 + req_unhash, own_req, 1338 + tcp_v6_mapped_child_init); 1427 1339 ireq = inet_rsk(req); 1428 1340 1429 1341 if (sk_acceptq_is_full(sk))
+1 -2
net/ipv6/udplite.c
··· 16 16 17 17 static int udplitev6_sk_init(struct sock *sk) 18 18 { 19 - udpv6_init_sock(sk); 20 19 pr_warn_once("UDP-Lite is deprecated and scheduled to be removed in 2025, " 21 20 "please contact the netdev mailing list\n"); 22 - return 0; 21 + return udpv6_init_sock(sk); 23 22 } 24 23 25 24 static int udplitev6_rcv(struct sk_buff *skb)
+5 -2
net/ipv6/xfrm6_policy.c
··· 57 57 struct dst_entry *dst; 58 58 struct net_device *dev; 59 59 struct inet6_dev *idev; 60 + int err; 60 61 61 62 dst = xfrm6_dst_lookup(params); 62 63 if (IS_ERR(dst)) ··· 69 68 return -EHOSTUNREACH; 70 69 } 71 70 dev = idev->dev; 72 - ipv6_dev_get_saddr(dev_net(dev), dev, &params->daddr->in6, 0, 73 - &saddr->in6); 71 + err = ipv6_dev_get_saddr(dev_net(dev), dev, &params->daddr->in6, 0, 72 + &saddr->in6); 74 73 dst_release(dst); 74 + if (err) 75 + return -EHOSTUNREACH; 75 76 return 0; 76 77 } 77 78
+19 -2
net/kcm/kcmsock.c
··· 628 628 skb = txm->frag_skb; 629 629 } 630 630 631 - if (WARN_ON(!skb_shinfo(skb)->nr_frags) || 631 + if (WARN_ON_ONCE(!skb_shinfo(skb)->nr_frags) || 632 632 WARN_ON_ONCE(!skb_frag_page(&skb_shinfo(skb)->frags[0]))) { 633 633 ret = -EINVAL; 634 634 goto out; ··· 749 749 { 750 750 struct sock *sk = sock->sk; 751 751 struct kcm_sock *kcm = kcm_sk(sk); 752 - struct sk_buff *skb = NULL, *head = NULL; 752 + struct sk_buff *skb = NULL, *head = NULL, *frag_prev = NULL; 753 753 size_t copy, copied = 0; 754 754 long timeo = sock_sndtimeo(sk, msg->msg_flags & MSG_DONTWAIT); 755 755 int eor = (sock->type == SOCK_DGRAM) ? ··· 824 824 else 825 825 skb->next = tskb; 826 826 827 + frag_prev = skb; 827 828 skb = tskb; 828 829 skb->ip_summed = CHECKSUM_UNNECESSARY; 829 830 continue; ··· 933 932 934 933 out_error: 935 934 kcm_push(kcm); 935 + 936 + /* When MAX_SKB_FRAGS was reached, a new skb was allocated and 937 + * linked into the frag_list before data copy. If the copy 938 + * subsequently failed, this skb has zero frags. Remove it from 939 + * the frag_list to prevent kcm_write_msgs from later hitting 940 + * WARN_ON(!skb_shinfo(skb)->nr_frags). 941 + */ 942 + if (frag_prev && !skb_shinfo(skb)->nr_frags) { 943 + if (head == frag_prev) 944 + skb_shinfo(head)->frag_list = NULL; 945 + else 946 + frag_prev->next = NULL; 947 + kfree_skb(skb); 948 + /* Update skb as it may be saved in partial_message via goto */ 949 + skb = frag_prev; 950 + } 936 951 937 952 if (sock->type == SOCK_SEQPACKET) { 938 953 /* Wrote some bytes before encountering an
+2
net/mac80211/link.c
··· 281 281 struct ieee80211_bss_conf *old[IEEE80211_MLD_MAX_NUM_LINKS]; 282 282 struct ieee80211_link_data *old_data[IEEE80211_MLD_MAX_NUM_LINKS]; 283 283 bool use_deflink = old_links == 0; /* set for error case */ 284 + bool non_sta = sdata->vif.type != NL80211_IFTYPE_STATION; 284 285 285 286 lockdep_assert_wiphy(sdata->local->hw.wiphy); 286 287 ··· 338 337 link = links[link_id]; 339 338 ieee80211_link_init(sdata, link_id, &link->data, &link->conf); 340 339 ieee80211_link_setup(&link->data); 340 + ieee80211_set_wmm_default(&link->data, true, non_sta); 341 341 } 342 342 343 343 if (new_links == 0)
+3
net/mac80211/mesh.c
··· 1635 1635 if (!mesh_matches_local(sdata, elems)) 1636 1636 goto free; 1637 1637 1638 + if (!elems->mesh_chansw_params_ie) 1639 + goto free; 1640 + 1638 1641 ifmsh->chsw_ttl = elems->mesh_chansw_params_ie->mesh_ttl; 1639 1642 if (!--ifmsh->chsw_ttl) 1640 1643 fwd_csa = false;
+3
net/mac80211/mlme.c
··· 7085 7085 control = le16_to_cpu(prof->control); 7086 7086 link_id = control & IEEE80211_MLE_STA_RECONF_CONTROL_LINK_ID; 7087 7087 7088 + if (link_id >= IEEE80211_MLD_MAX_NUM_LINKS) 7089 + continue; 7090 + 7088 7091 removed_links |= BIT(link_id); 7089 7092 7090 7093 /* the MAC address should not be included, but handle it */
+4 -2
net/mptcp/subflow.c
··· 808 808 struct request_sock *req, 809 809 struct dst_entry *dst, 810 810 struct request_sock *req_unhash, 811 - bool *own_req) 811 + bool *own_req, 812 + void (*opt_child_init)(struct sock *newsk, 813 + const struct sock *sk)) 812 814 { 813 815 struct mptcp_subflow_context *listener = mptcp_subflow_ctx(sk); 814 816 struct mptcp_subflow_request_sock *subflow_req; ··· 857 855 858 856 create_child: 859 857 child = listener->icsk_af_ops->syn_recv_sock(sk, skb, req, dst, 860 - req_unhash, own_req); 858 + req_unhash, own_req, opt_child_init); 861 859 862 860 if (child && *own_req) { 863 861 struct mptcp_subflow_context *ctx = mptcp_subflow_ctx(child);
+1 -1
net/netfilter/nf_conntrack_h323_asn1.c
··· 796 796 797 797 if (ext || (son->attr & OPEN)) { 798 798 BYTE_ALIGN(bs); 799 - if (nf_h323_error_boundary(bs, len, 0)) 799 + if (nf_h323_error_boundary(bs, 2, 0)) 800 800 return H323_ERROR_BOUND; 801 801 len = get_len(bs); 802 802 if (nf_h323_error_boundary(bs, len, 0))
+38 -1
net/psp/psp_main.c
··· 166 166 { 167 167 struct udphdr *uh = udp_hdr(skb); 168 168 struct psphdr *psph = (struct psphdr *)(uh + 1); 169 + const struct sock *sk = skb->sk; 169 170 170 171 uh->dest = htons(PSP_DEFAULT_UDP_PORT); 171 - uh->source = udp_flow_src_port(net, skb, 0, 0, false); 172 + 173 + /* A bit of theory: Selection of the source port. 174 + * 175 + * We need some entropy, so that multiple flows use different 176 + * source ports for better RSS spreading at the receiver. 177 + * 178 + * We also need that all packets belonging to one TCP flow 179 + * use the same source port through their duration, 180 + * so that all these packets land in the same receive queue. 181 + * 182 + * udp_flow_src_port() is using sk_txhash, inherited from 183 + * skb_set_hash_from_sk() call in __tcp_transmit_skb(). 184 + * This field is subject to reshuffling, thanks to 185 + * sk_rethink_txhash() calls in various TCP functions. 186 + * 187 + * Instead, use sk->sk_hash which is constant through 188 + * the whole flow duration. 189 + */ 190 + if (likely(sk)) { 191 + u32 hash = sk->sk_hash; 192 + int min, max; 193 + 194 + /* These operations are cheap, no need to cache the result 195 + * in another socket field. 196 + */ 197 + inet_get_local_port_range(net, &min, &max); 198 + /* Since this is being sent on the wire obfuscate hash a bit 199 + * to minimize possibility that any useful information to an 200 + * attacker is leaked. Only upper 16 bits are relevant in the 201 + * computation for 16 bit port value because we use a 202 + * reciprocal divide. 203 + */ 204 + hash ^= hash << 16; 205 + uh->source = htons((((u64)hash * (max - min)) >> 32) + min); 206 + } else { 207 + uh->source = udp_flow_src_port(net, skb, 0, 0, false); 208 + } 172 209 uh->check = 0; 173 210 uh->len = htons(udp_len); 174 211
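The multiply-shift above is the standard reciprocal-divide trick: for a 32-bit hash, ((u64)hash * (max - min)) >> 32 is always strictly below max - min, so the result lands in [min, max) without a modulo, and the pre-xor folds the low hash bits into the upper half that dominates the product. A standalone sketch of the port derivation:

```c
#include <stdint.h>

/* Map a 32-bit flow hash onto the local port range [min, max) with a
 * multiply-shift instead of a modulo, as the PSP hunk above does.
 */
static uint16_t psp_src_port(uint32_t hash, uint16_t min, uint16_t max)
{
	/* Fold the low bits upward: mainly the upper 16 bits of the
	 * hash influence the multiply-shift result below.
	 */
	hash ^= hash << 16;
	return (uint16_t)((((uint64_t)hash * (max - min)) >> 32) + min);
}
```

Because sk->sk_hash is constant for the flow's lifetime, every packet of the flow derives the same source port and lands in the same receive queue.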
+3
net/rds/connection.c
··· 455 455 rcu_read_unlock(); 456 456 } 457 457 458 + /* we do not hold the socket lock here but it is safe because 459 + * fan-out is disabled when calling conn_slots_available() 460 + */ 458 461 if (conn->c_trans->conn_slots_available) 459 462 conn->c_trans->conn_slots_available(conn, false); 460 463 }
+4 -22
net/rds/tcp_listen.c
··· 59 59 static int 60 60 rds_tcp_get_peer_sport(struct socket *sock) 61 61 { 62 - union { 63 - struct sockaddr_storage storage; 64 - struct sockaddr addr; 65 - struct sockaddr_in sin; 66 - struct sockaddr_in6 sin6; 67 - } saddr; 68 - int sport; 62 + struct sock *sk = sock->sk; 69 63 70 - if (kernel_getpeername(sock, &saddr.addr) >= 0) { 71 - switch (saddr.addr.sa_family) { 72 - case AF_INET: 73 - sport = ntohs(saddr.sin.sin_port); 74 - break; 75 - case AF_INET6: 76 - sport = ntohs(saddr.sin6.sin6_port); 77 - break; 78 - default: 79 - sport = -1; 80 - } 81 - } else { 82 - sport = -1; 83 - } 64 + if (!sk) 65 + return -1; 84 66 85 - return sport; 67 + return ntohs(READ_ONCE(inet_sk(sk)->inet_dport)); 86 68 } 87 69 88 70 /* rds_tcp_accept_one_path(): if accepting on cp_index > 0, make sure the
+4 -2
net/smc/af_smc.c
··· 124 124 struct request_sock *req, 125 125 struct dst_entry *dst, 126 126 struct request_sock *req_unhash, 127 - bool *own_req) 127 + bool *own_req, 128 + void (*opt_child_init)(struct sock *newsk, 129 + const struct sock *sk)) 128 130 { 129 131 struct smc_sock *smc; 130 132 struct sock *child; ··· 144 142 145 143 /* passthrough to original syn recv sock fct */ 146 144 child = smc->ori_af_ops->syn_recv_sock(sk, skb, req, dst, req_unhash, 147 - own_req); 145 + own_req, opt_child_init); 148 146 /* child must not inherit smc or its ops */ 149 147 if (child) { 150 148 rcu_assign_sk_user_data(child, NULL);
+1 -1
net/socket.c
··· 674 674 iput(SOCK_INODE(sock)); 675 675 return; 676 676 } 677 - sock->file = NULL; 677 + WRITE_ONCE(sock->file, NULL); 678 678 } 679 679 680 680 /**
+4 -2
net/tipc/name_table.c
··· 348 348 349 349 /* Return if the publication already exists */ 350 350 list_for_each_entry(_p, &sr->all_publ, all_publ) { 351 - if (_p->key == key && (!_p->sk.node || _p->sk.node == node)) { 351 + if (_p->key == key && _p->sk.ref == p->sk.ref && 352 + (!_p->sk.node || _p->sk.node == node)) { 352 353 pr_debug("Failed to bind duplicate %u,%u,%u/%u:%u/%u\n", 353 354 p->sr.type, p->sr.lower, p->sr.upper, 354 355 node, p->sk.ref, key); ··· 389 388 u32 node = sk->node; 390 389 391 390 list_for_each_entry(p, &r->all_publ, all_publ) { 392 - if (p->key != key || (node && node != p->sk.node)) 391 + if (p->key != key || p->sk.ref != sk->ref || 392 + (node && node != p->sk.node)) 393 393 continue; 394 394 list_del(&p->all_publ); 395 395 list_del(&p->local_publ);
+1 -1
net/tls/tls_sw.c
··· 2533 2533 2534 2534 set_bit(BIT_TX_CLOSING, &ctx->tx_bitmask); 2535 2535 set_bit(BIT_TX_SCHEDULED, &ctx->tx_bitmask); 2536 - cancel_delayed_work_sync(&ctx->tx_work.work); 2536 + disable_delayed_work_sync(&ctx->tx_work.work); 2537 2537 } 2538 2538 2539 2539 void tls_sw_release_resources_tx(struct sock *sk)
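The tls_sw hunk above, and the espintcp hunk further down, both move from `cancel_*_sync()` to `disable_*_sync()`. The difference, as I understand the workqueue API, is that disabling not only flushes a pending item but also rejects later queueing attempts, so the work cannot re-arm itself after teardown has begun. A toy Python model of that assumed distinction (not the kernel implementation):

```python
class Work:
    """Toy model of cancel vs. disable for a work item: cancelling
    stops a pending run but the item may be queued again; disabling
    also blocks all future queueing."""
    def __init__(self):
        self.pending = False
        self.disabled = False

    def queue(self):
        if self.disabled:
            return False        # queue_work() refuses disabled work
        self.pending = True
        return True

    def cancel_sync(self):
        self.pending = False    # flushed, but may be re-queued later

    def disable_sync(self):
        self.pending = False
        self.disabled = True    # stays off until re-enabled
```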
+12 -7
net/vmw_vsock/af_vsock.c
··· 90 90 * 91 91 * - /proc/sys/net/vsock/ns_mode (read-only) reports the current namespace's 92 92 * mode, which is set at namespace creation and immutable thereafter. 93 - * - /proc/sys/net/vsock/child_ns_mode (writable) controls what mode future 93 + * - /proc/sys/net/vsock/child_ns_mode (write-once) controls what mode future 94 94 * child namespaces will inherit when created. The initial value matches 95 95 * the namespace's own ns_mode. 96 96 * 97 97 * Changing child_ns_mode only affects newly created namespaces, not the 98 98 * current namespace or existing children. A "local" namespace cannot set 99 - * child_ns_mode to "global". At namespace creation, ns_mode is inherited 100 - * from the parent's child_ns_mode. 99 + * child_ns_mode to "global". child_ns_mode is write-once, so that it may be 100 + * configured and locked down by a namespace manager. Writing a different 101 + * value after the first write returns -EBUSY. At namespace creation, ns_mode 102 + * is inherited from the parent's child_ns_mode. 101 103 * 102 - * The init_net mode is "global" and cannot be modified. 104 + * The init_net mode is "global" and cannot be modified. The init_net 105 + * child_ns_mode is also write-once, so an init process (e.g. systemd) can 106 + * set it to "local" to ensure all new namespaces inherit local mode. 
103 107 * 104 108 * The modes affect the allocation and accessibility of CIDs as follows: 105 109 * ··· 2829 2825 if (write) 2830 2826 return -EPERM; 2831 2827 2832 - net = current->nsproxy->net_ns; 2828 + net = container_of(table->data, struct net, vsock.mode); 2833 2829 2834 2830 return __vsock_net_mode_string(table, write, buffer, lenp, ppos, 2835 2831 vsock_net_mode(net), NULL); ··· 2842 2838 struct net *net; 2843 2839 int ret; 2844 2840 2845 - net = current->nsproxy->net_ns; 2841 + net = container_of(table->data, struct net, vsock.child_ns_mode); 2846 2842 2847 2843 ret = __vsock_net_mode_string(table, write, buffer, lenp, ppos, 2848 2844 vsock_net_child_mode(net), &new_mode); ··· 2857 2853 new_mode == VSOCK_NET_MODE_GLOBAL) 2858 2854 return -EPERM; 2859 2855 2860 - vsock_net_set_child_mode(net, new_mode); 2856 + if (!vsock_net_set_child_mode(net, new_mode)) 2857 + return -EBUSY; 2861 2858 } 2862 2859 2863 2860 return 0;
+1
net/wireless/core.c
··· 1211 1211 /* this has nothing to do now but make sure it's gone */ 1212 1212 cancel_work_sync(&rdev->wiphy_work); 1213 1213 1214 + cancel_work_sync(&rdev->rfkill_block); 1214 1215 cancel_work_sync(&rdev->conn_work); 1215 1216 flush_work(&rdev->event_work); 1216 1217 cancel_delayed_work_sync(&rdev->dfs_update_channels_wk);
+2 -2
net/wireless/radiotap.c
··· 239 239 default: 240 240 if (!iterator->current_namespace || 241 241 iterator->_arg_index >= iterator->current_namespace->n_bits) { 242 - if (iterator->current_namespace == &radiotap_ns) 243 - return -ENOENT; 244 242 align = 0; 245 243 } else { 246 244 align = iterator->current_namespace->align_size[iterator->_arg_index].align; 247 245 size = iterator->current_namespace->align_size[iterator->_arg_index].size; 248 246 } 249 247 if (!align) { 248 + if (iterator->current_namespace == &radiotap_ns) 249 + return -ENOENT; 250 250 /* skip all subsequent data */ 251 251 iterator->_arg = iterator->_next_ns_data; 252 252 /* give up on this namespace */
+1 -1
net/wireless/wext-compat.c
··· 683 683 684 684 idx = erq->flags & IW_ENCODE_INDEX; 685 685 if (cipher == WLAN_CIPHER_SUITE_AES_CMAC) { 686 - if (idx < 4 || idx > 5) { 686 + if (idx < 5 || idx > 6) { 687 687 idx = wdev->wext.default_mgmt_key; 688 688 if (idx < 0) 689 689 return -EINVAL;
+1 -1
net/xfrm/espintcp.c
··· 536 536 sk->sk_prot = &tcp_prot; 537 537 barrier(); 538 538 539 - cancel_work_sync(&ctx->work); 539 + disable_work_sync(&ctx->work); 540 540 strp_done(&ctx->strp); 541 541 542 542 skb_queue_purge(&ctx->out_queue);
+11 -1
net/xfrm/xfrm_device.c
··· 544 544 return NOTIFY_DONE; 545 545 } 546 546 547 + static int xfrm_dev_unregister(struct net_device *dev) 548 + { 549 + xfrm_dev_state_flush(dev_net(dev), dev, true); 550 + xfrm_dev_policy_flush(dev_net(dev), dev, true); 551 + 552 + return NOTIFY_DONE; 553 + } 554 + 547 555 static int xfrm_dev_event(struct notifier_block *this, unsigned long event, void *ptr) 548 556 { 549 557 struct net_device *dev = netdev_notifier_info_to_dev(ptr); ··· 564 556 return xfrm_api_check(dev); 565 557 566 558 case NETDEV_DOWN: 567 - case NETDEV_UNREGISTER: 568 559 return xfrm_dev_down(dev); 560 + 561 + case NETDEV_UNREGISTER: 562 + return xfrm_dev_unregister(dev); 569 563 } 570 564 return NOTIFY_DONE; 571 565 }
+9 -2
net/xfrm/xfrm_policy.c
··· 3801 3801 struct xfrm_tmpl *tp[XFRM_MAX_DEPTH]; 3802 3802 struct xfrm_tmpl *stp[XFRM_MAX_DEPTH]; 3803 3803 struct xfrm_tmpl **tpp = tp; 3804 + int i, k = 0; 3804 3805 int ti = 0; 3805 - int i, k; 3806 3806 3807 3807 sp = skb_sec_path(skb); 3808 3808 if (!sp) ··· 3828 3828 tpp = stp; 3829 3829 } 3830 3830 3831 + if (pol->xdo.type == XFRM_DEV_OFFLOAD_PACKET && sp == &dummy) 3832 + /* This policy template was already checked by HW 3833 + * and secpath was removed in __xfrm_policy_check2. 3834 + */ 3835 + goto out; 3836 + 3831 3837 /* For each tunnel xfrm, find the first matching tmpl. 3832 3838 * For each tmpl before that, find corresponding xfrm. 3833 3839 * Order is _important_. Later we will implement ··· 3843 3837 * verified to allow them to be skipped in future policy 3844 3838 * checks (e.g. nested tunnels). 3845 3839 */ 3846 - for (i = xfrm_nr-1, k = 0; i >= 0; i--) { 3840 + for (i = xfrm_nr - 1; i >= 0; i--) { 3847 3841 k = xfrm_policy_ok(tpp[i], sp, k, family, if_id); 3848 3842 if (k < 0) { 3849 3843 if (k < -1) ··· 3859 3853 goto reject; 3860 3854 } 3861 3855 3856 + out: 3862 3857 xfrm_pols_put(pols, npols); 3863 3858 sp->verified_cnt = k; 3864 3859
+98 -2
tools/testing/selftests/drivers/net/hw/rss_ctx.py
··· 4 4 import datetime 5 5 import random 6 6 import re 7 + import time 7 8 from lib.py import ksft_run, ksft_pr, ksft_exit 8 9 from lib.py import ksft_eq, ksft_ne, ksft_ge, ksft_in, ksft_lt, ksft_true, ksft_raises 9 10 from lib.py import NetDrvEpEnv 10 11 from lib.py import EthtoolFamily, NetdevFamily 11 12 from lib.py import KsftSkipEx, KsftFailEx 13 + from lib.py import ksft_disruptive 12 14 from lib.py import rand_port 13 - from lib.py import ethtool, ip, defer, GenerateTraffic, CmdExitFailure 15 + from lib.py import cmd, ethtool, ip, defer, GenerateTraffic, CmdExitFailure, wait_file 14 16 15 17 16 18 def _rss_key_str(key): ··· 811 809 'noise' : (0, 1) }) 812 810 813 811 812 + @ksft_disruptive 813 + def test_rss_context_persist_ifupdown(cfg, pre_down=False): 814 + """ 815 + Test that RSS contexts and their associated ntuple filters persist across 816 + an interface down/up cycle. 817 + 818 + """ 819 + 820 + require_ntuple(cfg) 821 + 822 + qcnt = len(_get_rx_cnts(cfg)) 823 + if qcnt < 6: 824 + try: 825 + ethtool(f"-L {cfg.ifname} combined 6") 826 + defer(ethtool, f"-L {cfg.ifname} combined {qcnt}") 827 + except Exception as exc: 828 + raise KsftSkipEx("Not enough queues for the test") from exc 829 + 830 + ethtool(f"-X {cfg.ifname} equal 2") 831 + defer(ethtool, f"-X {cfg.ifname} default") 832 + 833 + ifup = defer(ip, f"link set dev {cfg.ifname} up") 834 + if pre_down: 835 + ip(f"link set dev {cfg.ifname} down") 836 + 837 + try: 838 + ctx1_id = ethtool_create(cfg, "-X", "context new start 2 equal 2") 839 + defer(ethtool, f"-X {cfg.ifname} context {ctx1_id} delete") 840 + except CmdExitFailure as exc: 841 + raise KsftSkipEx("Create context not supported with interface down") from exc 842 + 843 + ctx2_id = ethtool_create(cfg, "-X", "context new start 4 equal 2") 844 + defer(ethtool, f"-X {cfg.ifname} context {ctx2_id} delete") 845 + 846 + port_ctx2 = rand_port() 847 + flow = f"flow-type tcp{cfg.addr_ipver} dst-ip {cfg.addr} dst-port {port_ctx2} context {ctx2_id}" 
848 + ntuple_id = ethtool_create(cfg, "-N", flow) 849 + defer(ethtool, f"-N {cfg.ifname} delete {ntuple_id}") 850 + 851 + if not pre_down: 852 + ip(f"link set dev {cfg.ifname} down") 853 + ifup.exec() 854 + 855 + wait_file(f"/sys/class/net/{cfg.ifname}/carrier", 856 + lambda x: x.strip() == "1", deadline=20) 857 + 858 + remote_addr = cfg.remote_addr_v[cfg.addr_ipver] 859 + for _ in range(10): 860 + if cmd(f"ping -c 1 -W 1 {remote_addr}", fail=False).ret == 0: 861 + break 862 + time.sleep(1) 863 + else: 864 + raise KsftSkipEx("Cannot reach remote host after interface up") 865 + 866 + ctxs = cfg.ethnl.rss_get({'header': {'dev-name': cfg.ifname}}, dump=True) 867 + 868 + data1 = [c for c in ctxs if c.get('context') == ctx1_id] 869 + ksft_eq(len(data1), 1, f"Context {ctx1_id} should persist after ifup") 870 + 871 + data2 = [c for c in ctxs if c.get('context') == ctx2_id] 872 + ksft_eq(len(data2), 1, f"Context {ctx2_id} should persist after ifup") 873 + 874 + _ntuple_rule_check(cfg, ntuple_id, ctx2_id) 875 + 876 + cnts = _get_rx_cnts(cfg) 877 + GenerateTraffic(cfg).wait_pkts_and_stop(20000) 878 + cnts = _get_rx_cnts(cfg, prev=cnts) 879 + 880 + main_traffic = sum(cnts[0:2]) 881 + ksft_ge(main_traffic, 18000, f"Main context traffic distribution: {cnts}") 882 + ksft_lt(sum(cnts[2:6]), 500, f"Other context queues should be mostly empty: {cnts}") 883 + 884 + _send_traffic_check(cfg, port_ctx2, f"context {ctx2_id}", 885 + {'target': (4, 5), 886 + 'noise': (0, 1), 887 + 'empty': (2, 3)}) 888 + 889 + 890 + def test_rss_context_persist_create_and_ifdown(cfg): 891 + """ 892 + Create RSS contexts then cycle the interface down and up. 893 + """ 894 + test_rss_context_persist_ifupdown(cfg, pre_down=False) 895 + 896 + 897 + def test_rss_context_persist_ifdown_and_create(cfg): 898 + """ 899 + Bring interface down first, then create RSS contexts and bring up. 
900 + """ 901 + test_rss_context_persist_ifupdown(cfg, pre_down=True) 902 + 903 + 814 904 def main() -> None: 815 905 with NetDrvEpEnv(__file__, nsim_test=False) as cfg: 816 906 cfg.context_cnt = None ··· 917 823 test_rss_context_out_of_order, test_rss_context4_create_with_cfg, 918 824 test_flow_add_context_missing, 919 825 test_delete_rss_context_busy, test_rss_ntuple_addition, 920 - test_rss_default_context_rule], 826 + test_rss_default_context_rule, 827 + test_rss_context_persist_create_and_ifdown, 828 + test_rss_context_persist_ifdown_and_create], 921 829 args=(cfg, )) 922 830 ksft_exit() 923 831
+1
tools/testing/selftests/drivers/net/team/Makefile
··· 5 5 dev_addr_lists.sh \ 6 6 options.sh \ 7 7 propagation.sh \ 8 + refleak.sh \ 8 9 # end of TEST_PROGS 9 10 10 11 TEST_INCLUDES := \
+17
tools/testing/selftests/drivers/net/team/refleak.sh
··· 1 + #!/bin/bash 2 + # SPDX-License-Identifier: GPL-2.0 3 + # shellcheck disable=SC2154 4 + 5 + lib_dir=$(dirname "$0") 6 + source "$lib_dir"/../../../net/lib.sh 7 + 8 + trap cleanup_all_ns EXIT 9 + 10 + # Test that there is no reference count leak and that dummy1 can be deleted. 11 + # https://lore.kernel.org/netdev/4d69abe1-ca8d-4f0b-bcf8-13899b211e57@I-love.SAKURA.ne.jp/ 12 + setup_ns ns1 ns2 13 + ip -n "$ns1" link add name team1 type team 14 + ip -n "$ns1" link add name dummy1 mtu 1499 type dummy 15 + ip -n "$ns1" link set dev dummy1 master team1 16 + ip -n "$ns1" link set dev dummy1 netns "$ns2" 17 + ip -n "$ns2" link del dev dummy1
+27
tools/testing/selftests/net/packetdrill/tcp_rcv_zero_wnd_fin.pkt
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + // Some TCP stacks send FINs even though the window is closed. We break 4 + // a possible FIN/ACK loop by accepting the FIN. 5 + 6 + --mss=1000 7 + 8 + `./defaults.sh` 9 + 10 + // Establish a connection. 11 + +0 socket(..., SOCK_STREAM, IPPROTO_TCP) = 3 12 + +0 setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0 13 + +0 setsockopt(3, SOL_SOCKET, SO_RCVBUF, [20000], 4) = 0 14 + +0 bind(3, ..., ...) = 0 15 + +0 listen(3, 1) = 0 16 + 17 + +0 < S 0:0(0) win 32792 <mss 1000,nop,wscale 7> 18 + +0 > S. 0:0(0) ack 1 <mss 1460,nop,wscale 0> 19 + +0 < . 1:1(0) ack 1 win 257 20 + 21 + +0 accept(3, ..., ...) = 4 22 + 23 + +0 < P. 1:60001(60000) ack 1 win 257 24 + * > . 1:1(0) ack 60001 win 0 25 + 26 + +0 < F. 60001:60001(0) ack 1 win 257 27 + +0 > . 1:1(0) ack 60002 win 0
+20 -19
tools/testing/selftests/vsock/vmtest.sh
··· 210 210 } 211 211 212 212 add_namespaces() { 213 - local orig_mode 214 - orig_mode=$(cat /proc/sys/net/vsock/child_ns_mode) 213 + ip netns add "global-parent" 2>/dev/null 214 + echo "global" | ip netns exec "global-parent" \ 215 + tee /proc/sys/net/vsock/child_ns_mode &>/dev/null 216 + ip netns add "local-parent" 2>/dev/null 217 + echo "local" | ip netns exec "local-parent" \ 218 + tee /proc/sys/net/vsock/child_ns_mode &>/dev/null 215 219 216 - for mode in "${NS_MODES[@]}"; do 217 - echo "${mode}" > /proc/sys/net/vsock/child_ns_mode 218 - ip netns add "${mode}0" 2>/dev/null 219 - ip netns add "${mode}1" 2>/dev/null 220 - done 221 - 222 - echo "${orig_mode}" > /proc/sys/net/vsock/child_ns_mode 220 + nsenter --net=/var/run/netns/global-parent \ 221 + ip netns add "global0" 2>/dev/null 222 + nsenter --net=/var/run/netns/global-parent \ 223 + ip netns add "global1" 2>/dev/null 224 + nsenter --net=/var/run/netns/local-parent \ 225 + ip netns add "local0" 2>/dev/null 226 + nsenter --net=/var/run/netns/local-parent \ 227 + ip netns add "local1" 2>/dev/null 223 228 } 224 229 225 230 init_namespaces() { ··· 242 237 log_host "removed ns ${mode}0" 243 238 log_host "removed ns ${mode}1" 244 239 done 240 + ip netns del "global-parent" &>/dev/null 241 + ip netns del "local-parent" &>/dev/null 245 242 } 246 243 247 244 vm_ssh() { ··· 294 287 } 295 288 296 289 check_deps() { 297 - for dep in vng ${QEMU} busybox pkill ssh ss socat; do 290 + for dep in vng ${QEMU} busybox pkill ssh ss socat nsenter; do 298 291 if [[ ! -x $(command -v "${dep}") ]]; then
299 292 echo -e "skip: dependency ${dep} not found!\n" 300 293 exit "${KSFT_SKIP}" ··· 1238 1231 } 1239 1232 1240 1233 test_ns_host_vsock_child_ns_mode_ok() { 1241 - local orig_mode 1242 - local rc 1234 + local rc="${KSFT_PASS}" 1243 1235 1244 - orig_mode=$(cat /proc/sys/net/vsock/child_ns_mode) 1245 - 1246 - rc="${KSFT_PASS}" 1247 1236 for mode in "${NS_MODES[@]}"; do 1248 1237 local ns="${mode}0" ··· 1249 1246 continue 1250 1247 fi 1251 1248 1252 - if ! echo "${mode}" > /proc/sys/net/vsock/child_ns_mode; then 1253 - log_host "child_ns_mode should be writable to ${mode}" 1249 + if ! echo "${mode}" | ip netns exec "${ns}" \ 1250 + tee /proc/sys/net/vsock/child_ns_mode &>/dev/null; then 1254 1251 rc="${KSFT_FAIL}" 1255 1252 continue 1256 1253 fi 1257 1254 done 1258 - 1259 - echo "${orig_mode}" > /proc/sys/net/vsock/child_ns_mode 1260 1255 1261 1256 return "${rc}" 1262 1257 }