
Merge tag 'net-6.12-rc8' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Paolo Abeni:
"Including fixes from bluetooth.

Quite calm week. No new regression under investigation.

Current release - regressions:

- eth: revert "igb: Disable threaded IRQ for igb_msix_other"

Current release - new code bugs:

- bluetooth: btintel: direct exception event to bluetooth stack

Previous releases - regressions:

- core: fix data-races around sk->sk_forward_alloc

- netlink: terminate outstanding dump on socket close

- mptcp: error out earlier on disconnect

- vsock: fix accept_queue memory leak

- phylink: ensure PHY momentary link-fails are handled

- eth: mlx5:
- fix null-ptr-deref in add rule err flow
- lock FTE when checking if active

- eth: dwmac-mediatek: fix inverted handling of mediatek,mac-wol

Previous releases - always broken:

- sched: fix u32's systematic failure to free IDR entries for hnodes.

- sctp: fix possible UAF in sctp_v6_available()

- eth: bonding: add ns target multicast address to slave device

- eth: mlx5: fix msix vectors to respect platform limit

- eth: icssg-prueth: fix 1 PPS sync"

* tag 'net-6.12-rc8' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (38 commits)
net: sched: u32: Add test case for systematic hnode IDR leaks
selftests: bonding: add ns multicast group testing
bonding: add ns target multicast address to slave device
net: ti: icssg-prueth: Fix 1 PPS sync
stmmac: dwmac-intel-plat: fix call balance of tx_clk handling routines
net: Make copy_safe_from_sockptr() match documentation
net: stmmac: dwmac-mediatek: Fix inverted handling of mediatek,mac-wol
ipmr: Fix access to mfc_cache_list without lock held
samples: pktgen: correct dev to DEV
net: phylink: ensure PHY momentary link-fails are handled
mptcp: pm: use _rcu variant under rcu_read_lock
mptcp: hold pm lock when deleting entry
mptcp: update local address flags when setting it
net: sched: cls_u32: Fix u32's systematic failure to free IDR entries for hnodes.
MAINTAINERS: Re-add cancelled Renesas driver sections
Revert "igb: Disable threaded IRQ for igb_msix_other"
Bluetooth: btintel: Direct exception event to bluetooth stack
Bluetooth: hci_core: Fix calling mgmt_device_connected
virtio/vsock: Improve MSG_ZEROCOPY error handling
vsock: Fix sk_error_queue memory leak
...

+545 -111
+9
Documentation/networking/devmem.rst
··· 225 225 Failure to do so will exhaust the limited dmabuf that is bound to the RX queue 226 226 and will lead to packet drops. 227 227 228 + The user must pass no more than 128 tokens, with no more than 1024 total frags 229 + among the token->token_count across all the tokens. If the user provides more 230 + than 1024 frags, the kernel will free up to 1024 frags and return early. 231 + 232 + The kernel returns the number of actual frags freed. The number of frags freed 233 + can be less than the tokens provided by the user in case of: 234 + 235 + (a) an internal kernel leak bug. 236 + (b) the user passed more than 1024 frags. 228 237 229 238 Implementation & Caveats 230 239 ========================
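
For readers unfamiliar with the interface this documentation constrains, a minimal userspace sketch of returning tokens via SO_DEVMEM_DONTNEED follows. It is illustrative only: it assumes uapi headers recent enough to provide struct dmabuf_token, falls back to the asm-generic value for the option constant, and omits the dmabuf binding and the recvmsg() loop that produced the tokens (see tools/testing/selftests/net/ncdevmem.c for the complete flow).

/* Hedged sketch: batch tokens obtained from SCM_DEVMEM_DMABUF cmsgs and hand
 * them back to the kernel, respecting the 128-token limit described above.
 * Per the documentation above, the kernel returns the number of frags freed. */
#include <sys/socket.h>
#include <linux/uio.h>              /* struct dmabuf_token */

#ifndef SO_DEVMEM_DONTNEED
#define SO_DEVMEM_DONTNEED 80       /* include/uapi/asm-generic/socket.h */
#endif

#define MAX_DONTNEED_TOKENS 128     /* kernel-side cap, see text above */

static int devmem_return_tokens(int sock_fd,
                                const struct dmabuf_token *tokens,
                                unsigned int num_tokens)
{
        if (num_tokens > MAX_DONTNEED_TOKENS)
                return -1;  /* split across several setsockopt() calls */

        /* A return value smaller than the summed token_count means the
         * 1024-frag cap was hit and the remainder must be resubmitted. */
        return setsockopt(sock_fd, SOL_SOCKET, SO_DEVMEM_DONTNEED,
                          tokens, num_tokens * sizeof(tokens[0]));
}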
+30
MAINTAINERS
··· 19579 19579 F: Documentation/devicetree/bindings/i2c/renesas,iic-emev2.yaml 19580 19580 F: drivers/i2c/busses/i2c-emev2.c 19581 19581 19582 + RENESAS ETHERNET AVB DRIVER 19583 + M: Paul Barker <paul.barker.ct@bp.renesas.com> 19584 + M: Niklas Söderlund <niklas.soderlund@ragnatech.se> 19585 + L: netdev@vger.kernel.org 19586 + L: linux-renesas-soc@vger.kernel.org 19587 + S: Supported 19588 + F: Documentation/devicetree/bindings/net/renesas,etheravb.yaml 19589 + F: drivers/net/ethernet/renesas/Kconfig 19590 + F: drivers/net/ethernet/renesas/Makefile 19591 + F: drivers/net/ethernet/renesas/ravb* 19592 + 19582 19593 RENESAS ETHERNET SWITCH DRIVER 19583 19594 R: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com> 19584 19595 L: netdev@vger.kernel.org ··· 19638 19627 F: Documentation/devicetree/bindings/i2c/renesas,rmobile-iic.yaml 19639 19628 F: drivers/i2c/busses/i2c-rcar.c 19640 19629 F: drivers/i2c/busses/i2c-sh_mobile.c 19630 + 19631 + RENESAS R-CAR SATA DRIVER 19632 + M: Geert Uytterhoeven <geert+renesas@glider.be> 19633 + L: linux-ide@vger.kernel.org 19634 + L: linux-renesas-soc@vger.kernel.org 19635 + S: Supported 19636 + F: Documentation/devicetree/bindings/ata/renesas,rcar-sata.yaml 19637 + F: drivers/ata/sata_rcar.c 19641 19638 19642 19639 RENESAS R-CAR THERMAL DRIVERS 19643 19640 M: Niklas Söderlund <niklas.soderlund@ragnatech.se> ··· 19721 19702 S: Supported 19722 19703 F: Documentation/devicetree/bindings/i2c/renesas,rzv2m.yaml 19723 19704 F: drivers/i2c/busses/i2c-rzv2m.c 19705 + 19706 + RENESAS SUPERH ETHERNET DRIVER 19707 + M: Niklas Söderlund <niklas.soderlund@ragnatech.se> 19708 + L: netdev@vger.kernel.org 19709 + L: linux-renesas-soc@vger.kernel.org 19710 + S: Supported 19711 + F: Documentation/devicetree/bindings/net/renesas,ether.yaml 19712 + F: drivers/net/ethernet/renesas/Kconfig 19713 + F: drivers/net/ethernet/renesas/Makefile 19714 + F: drivers/net/ethernet/renesas/sh_eth* 19715 + F: include/linux/sh_eth.h 19724 19716 19725 19717 RENESAS USB PHY DRIVER 19726 19718 M: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
+2 -3
drivers/bluetooth/btintel.c
··· 3288 3288 case INTEL_TLV_TEST_EXCEPTION: 3289 3289 /* Generate devcoredump from exception */ 3290 3290 if (!hci_devcd_init(hdev, skb->len)) { 3291 - hci_devcd_append(hdev, skb); 3291 + hci_devcd_append(hdev, skb_clone(skb, GFP_ATOMIC)); 3292 3292 hci_devcd_complete(hdev); 3293 3293 } else { 3294 3294 bt_dev_err(hdev, "Failed to generate devcoredump"); 3295 - kfree_skb(skb); 3296 3295 } 3297 - return 0; 3296 + break; 3298 3297 default: 3299 3298 bt_dev_err(hdev, "Invalid exception type %02X", tlv->val[0]); 3300 3299 }
+15 -1
drivers/net/bonding/bond_main.c
··· 1008 1008 1009 1009 if (bond->dev->flags & IFF_UP) 1010 1010 bond_hw_addr_flush(bond->dev, old_active->dev); 1011 + 1012 + bond_slave_ns_maddrs_add(bond, old_active); 1011 1013 } 1012 1014 1013 1015 if (new_active) { ··· 1026 1024 dev_mc_sync(new_active->dev, bond->dev); 1027 1025 netif_addr_unlock_bh(bond->dev); 1028 1026 } 1027 + 1028 + bond_slave_ns_maddrs_del(bond, new_active); 1029 1029 } 1030 1030 } 1031 1031 ··· 2345 2341 bond_compute_features(bond); 2346 2342 bond_set_carrier(bond); 2347 2343 2344 + /* Needs to be called before bond_select_active_slave(), which will 2345 + * remove the maddrs if the slave is selected as active slave. 2346 + */ 2347 + bond_slave_ns_maddrs_add(bond, new_slave); 2348 + 2348 2349 if (bond_uses_primary(bond)) { 2349 2350 block_netpoll_tx(); 2350 2351 bond_select_active_slave(bond); ··· 2358 2349 2359 2350 if (bond_mode_can_use_xmit_hash(bond)) 2360 2351 bond_update_slave_arr(bond, NULL); 2361 - 2362 2352 2363 2353 if (!slave_dev->netdev_ops->ndo_bpf || 2364 2354 !slave_dev->netdev_ops->ndo_xdp_xmit) { ··· 2555 2547 2556 2548 if (oldcurrent == slave) 2557 2549 bond_change_active_slave(bond, NULL); 2550 + 2551 + /* Must be called after bond_change_active_slave () as the slave 2552 + * might change from an active slave to a backup slave. Then it is 2553 + * necessary to clear the maddrs on the backup slave. 2554 + */ 2555 + bond_slave_ns_maddrs_del(bond, slave); 2558 2556 2559 2557 if (bond_is_lb(bond)) { 2560 2558 /* Must be called only after the slave has been
+81 -1
drivers/net/bonding/bond_options.c
··· 15 15 #include <linux/sched/signal.h> 16 16 17 17 #include <net/bonding.h> 18 + #include <net/ndisc.h> 18 19 19 20 static int bond_option_active_slave_set(struct bonding *bond, 20 21 const struct bond_opt_value *newval); ··· 1235 1234 } 1236 1235 1237 1236 #if IS_ENABLED(CONFIG_IPV6) 1237 + static bool slave_can_set_ns_maddr(const struct bonding *bond, struct slave *slave) 1238 + { 1239 + return BOND_MODE(bond) == BOND_MODE_ACTIVEBACKUP && 1240 + !bond_is_active_slave(slave) && 1241 + slave->dev->flags & IFF_MULTICAST; 1242 + } 1243 + 1244 + static void slave_set_ns_maddrs(struct bonding *bond, struct slave *slave, bool add) 1245 + { 1246 + struct in6_addr *targets = bond->params.ns_targets; 1247 + char slot_maddr[MAX_ADDR_LEN]; 1248 + int i; 1249 + 1250 + if (!slave_can_set_ns_maddr(bond, slave)) 1251 + return; 1252 + 1253 + for (i = 0; i < BOND_MAX_NS_TARGETS; i++) { 1254 + if (ipv6_addr_any(&targets[i])) 1255 + break; 1256 + 1257 + if (!ndisc_mc_map(&targets[i], slot_maddr, slave->dev, 0)) { 1258 + if (add) 1259 + dev_mc_add(slave->dev, slot_maddr); 1260 + else 1261 + dev_mc_del(slave->dev, slot_maddr); 1262 + } 1263 + } 1264 + } 1265 + 1266 + void bond_slave_ns_maddrs_add(struct bonding *bond, struct slave *slave) 1267 + { 1268 + if (!bond->params.arp_validate) 1269 + return; 1270 + slave_set_ns_maddrs(bond, slave, true); 1271 + } 1272 + 1273 + void bond_slave_ns_maddrs_del(struct bonding *bond, struct slave *slave) 1274 + { 1275 + if (!bond->params.arp_validate) 1276 + return; 1277 + slave_set_ns_maddrs(bond, slave, false); 1278 + } 1279 + 1280 + static void slave_set_ns_maddr(struct bonding *bond, struct slave *slave, 1281 + struct in6_addr *target, struct in6_addr *slot) 1282 + { 1283 + char target_maddr[MAX_ADDR_LEN], slot_maddr[MAX_ADDR_LEN]; 1284 + 1285 + if (!bond->params.arp_validate || !slave_can_set_ns_maddr(bond, slave)) 1286 + return; 1287 + 1288 + /* remove the previous maddr from slave */ 1289 + if (!ipv6_addr_any(slot) && 1290 + !ndisc_mc_map(slot, slot_maddr, slave->dev, 0)) 1291 + dev_mc_del(slave->dev, slot_maddr); 1292 + 1293 + /* add new maddr on slave if target is set */ 1294 + if (!ipv6_addr_any(target) && 1295 + !ndisc_mc_map(target, target_maddr, slave->dev, 0)) 1296 + dev_mc_add(slave->dev, target_maddr); 1297 + } 1298 + 1238 1299 static void _bond_options_ns_ip6_target_set(struct bonding *bond, int slot, 1239 1300 struct in6_addr *target, 1240 1301 unsigned long last_rx) ··· 1306 1243 struct slave *slave; 1307 1244 1308 1245 if (slot >= 0 && slot < BOND_MAX_NS_TARGETS) { 1309 - bond_for_each_slave(bond, slave, iter) 1246 + bond_for_each_slave(bond, slave, iter) { 1310 1247 slave->target_last_arp_rx[slot] = last_rx; 1248 + slave_set_ns_maddr(bond, slave, target, &targets[slot]); 1249 + } 1311 1250 targets[slot] = *target; 1312 1251 } 1313 1252 } ··· 1361 1296 { 1362 1297 return -EPERM; 1363 1298 } 1299 + 1300 + static void slave_set_ns_maddrs(struct bonding *bond, struct slave *slave, bool add) {} 1301 + 1302 + void bond_slave_ns_maddrs_add(struct bonding *bond, struct slave *slave) {} 1303 + 1304 + void bond_slave_ns_maddrs_del(struct bonding *bond, struct slave *slave) {} 1364 1305 #endif 1365 1306 1366 1307 static int bond_option_arp_validate_set(struct bonding *bond, 1367 1308 const struct bond_opt_value *newval) 1368 1309 { 1310 + bool changed = !!bond->params.arp_validate != !!newval->value; 1311 + struct list_head *iter; 1312 + struct slave *slave; 1313 + 1369 1314 netdev_dbg(bond->dev, "Setting arp_validate to %s (%llu)\n", 1370 1315 newval->string, 
newval->value); 1371 1316 bond->params.arp_validate = newval->value; 1317 + 1318 + if (changed) { 1319 + bond_for_each_slave(bond, slave, iter) 1320 + slave_set_ns_maddrs(bond, slave, !!bond->params.arp_validate); 1321 + } 1372 1322 1373 1323 return 0; 1374 1324 }
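
As context for the bond_options.c change: the addresses passed to dev_mc_add()/dev_mc_del() are what ndisc_mc_map() computes for each NS target, which for Ethernet slaves is the RFC 2464 mapping, 33:33 followed by the low 32 bits of the IPv6 address. A standalone illustration (not kernel code; the target value is a placeholder chosen to match the 33:33:00:00:02:54 group checked by the new selftest further down):

#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>

int main(void)
{
        struct in6_addr target;
        unsigned char maddr[6] = { 0x33, 0x33 };

        /* hypothetical ns_ip6_target (the selftest topology's gateway) */
        inet_pton(AF_INET6, "2001:db8::254", &target);

        /* the low 32 bits of the target become the tail of the group MAC */
        memcpy(&maddr[2], &target.s6_addr[12], 4);

        printf("group joined on backup slaves: %02x:%02x:%02x:%02x:%02x:%02x\n",
               maddr[0], maddr[1], maddr[2], maddr[3], maddr[4], maddr[5]);
        /* -> 33:33:00:00:02:54 */
        return 0;
}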
+1 -1
drivers/net/ethernet/intel/igb/igb_main.c
··· 907 907 int i, err = 0, vector = 0, free_vector = 0; 908 908 909 909 err = request_irq(adapter->msix_entries[vector].vector, 910 - igb_msix_other, IRQF_NO_THREAD, netdev->name, adapter); 910 + igb_msix_other, 0, netdev->name, adapter); 911 911 if (err) 912 912 goto err_out; 913 913
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
··· 866 866 return 0; 867 867 868 868 err_rule: 869 - mlx5_tc_ct_entry_destroy_mod_hdr(ct_priv, zone_rule->attr, zone_rule->mh); 869 + mlx5_tc_ct_entry_destroy_mod_hdr(ct_priv, attr, zone_rule->mh); 870 870 mlx5_put_label_mapping(ct_priv, attr->ct_attr.ct_labels_id); 871 871 err_mod_hdr: 872 872 kfree(attr);
+4 -4
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
··· 660 660 while (remaining > 0) { 661 661 skb_frag_t *frag = &record->frags[i]; 662 662 663 - get_page(skb_frag_page(frag)); 663 + page_ref_inc(skb_frag_page(frag)); 664 664 remaining -= skb_frag_size(frag); 665 665 info->frags[i++] = *frag; 666 666 } ··· 763 763 stats = sq->stats; 764 764 765 765 mlx5e_tx_dma_unmap(sq->pdev, dma); 766 - put_page(wi->resync_dump_frag_page); 766 + page_ref_dec(wi->resync_dump_frag_page); 767 767 stats->tls_dump_packets++; 768 768 stats->tls_dump_bytes += wi->num_bytes; 769 769 } ··· 816 816 817 817 err_out: 818 818 for (; i < info.nr_frags; i++) 819 - /* The put_page() here undoes the page ref obtained in tx_sync_info_get(). 819 + /* The page_ref_dec() here undoes the page ref obtained in tx_sync_info_get(). 820 820 * Page refs obtained for the DUMP WQEs above (by page_ref_add) will be 821 821 * released only upon their completions (or in mlx5e_free_txqsq_descs, 822 822 * if channel closes). 823 823 */ 824 - put_page(skb_frag_page(&info.frags[i])); 824 + page_ref_dec(skb_frag_page(&info.frags[i])); 825 825 826 826 return MLX5E_KTLS_SYNC_FAIL; 827 827 }
+2 -1
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
··· 4267 4267 struct mlx5e_params *params = &priv->channels.params; 4268 4268 xdp_features_t val; 4269 4269 4270 - if (params->packet_merge.type != MLX5E_PACKET_MERGE_NONE) { 4270 + if (!netdev->netdev_ops->ndo_bpf || 4271 + params->packet_merge.type != MLX5E_PACKET_MERGE_NONE) { 4271 4272 xdp_clear_features_flag(netdev); 4272 4273 return; 4273 4274 }
+4
drivers/net/ethernet/mellanox/mlx5/core/en_selftest.c
··· 36 36 #include "en.h" 37 37 #include "en/port.h" 38 38 #include "eswitch.h" 39 + #include "lib/mlx5.h" 39 40 40 41 static int mlx5e_test_health_info(struct mlx5e_priv *priv) 41 42 { ··· 246 245 static int mlx5e_cond_loopback(struct mlx5e_priv *priv) 247 246 { 248 247 if (is_mdev_switchdev_mode(priv->mdev)) 248 + return -EOPNOTSUPP; 249 + 250 + if (mlx5_get_sd(priv->mdev)) 249 251 return -EOPNOTSUPP; 250 252 251 253 return 0;
+4 -1
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
··· 2527 2527 struct mlx5_eswitch_rep *rep, u8 rep_type) 2528 2528 { 2529 2529 if (atomic_cmpxchg(&rep->rep_data[rep_type].state, 2530 - REP_LOADED, REP_REGISTERED) == REP_LOADED) 2530 + REP_LOADED, REP_REGISTERED) == REP_LOADED) { 2531 + if (rep_type == REP_ETH) 2532 + __esw_offloads_unload_rep(esw, rep, REP_IB); 2531 2533 esw->offloads.rep_ops[rep_type]->unload(rep); 2534 + } 2532 2535 } 2533 2536 2534 2537 static void __unload_reps_all_vport(struct mlx5_eswitch *esw, u8 rep_type)
+14 -5
drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
··· 2105 2105 fte_tmp = NULL; 2106 2106 goto out; 2107 2107 } 2108 - if (!fte_tmp->node.active) { 2109 - tree_put_node(&fte_tmp->node, false); 2110 - fte_tmp = NULL; 2111 - goto out; 2112 - } 2113 2108 2114 2109 nested_down_write_ref_node(&fte_tmp->node, FS_LOCK_CHILD); 2110 + 2111 + if (!fte_tmp->node.active) { 2112 + up_write_ref_node(&fte_tmp->node, false); 2113 + 2114 + if (take_write) 2115 + up_write_ref_node(&g->node, false); 2116 + else 2117 + up_read_ref_node(&g->node); 2118 + 2119 + tree_put_node(&fte_tmp->node, false); 2120 + 2121 + return NULL; 2122 + } 2123 + 2115 2124 out: 2116 2125 if (take_write) 2117 2126 up_write_ref_node(&g->node, false);
+27 -5
drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
··· 593 593 kvfree(pool); 594 594 } 595 595 596 - static int irq_pools_init(struct mlx5_core_dev *dev, int sf_vec, int pcif_vec) 596 + static int irq_pools_init(struct mlx5_core_dev *dev, int sf_vec, int pcif_vec, 597 + bool dynamic_vec) 597 598 { 598 599 struct mlx5_irq_table *table = dev->priv.irq_table; 600 + int sf_vec_available = sf_vec; 599 601 int num_sf_ctrl; 600 602 int err; 601 603 ··· 618 616 num_sf_ctrl = DIV_ROUND_UP(mlx5_sf_max_functions(dev), 619 617 MLX5_SFS_PER_CTRL_IRQ); 620 618 num_sf_ctrl = min_t(int, MLX5_IRQ_CTRL_SF_MAX, num_sf_ctrl); 619 + if (!dynamic_vec && (num_sf_ctrl + 1) > sf_vec_available) { 620 + mlx5_core_dbg(dev, 621 + "Not enough IRQs for SFs control and completion pool, required=%d avail=%d\n", 622 + num_sf_ctrl + 1, sf_vec_available); 623 + return 0; 624 + } 625 + 621 626 table->sf_ctrl_pool = irq_pool_alloc(dev, pcif_vec, num_sf_ctrl, 622 627 "mlx5_sf_ctrl", 623 628 MLX5_EQ_SHARE_IRQ_MIN_CTRL, ··· 633 624 err = PTR_ERR(table->sf_ctrl_pool); 634 625 goto err_pf; 635 626 } 636 - /* init sf_comp_pool */ 627 + sf_vec_available -= num_sf_ctrl; 628 + 629 + /* init sf_comp_pool, remaining vectors are for the SF completions */ 637 630 table->sf_comp_pool = irq_pool_alloc(dev, pcif_vec + num_sf_ctrl, 638 - sf_vec - num_sf_ctrl, "mlx5_sf_comp", 631 + sf_vec_available, "mlx5_sf_comp", 639 632 MLX5_EQ_SHARE_IRQ_MIN_COMP, 640 633 MLX5_EQ_SHARE_IRQ_MAX_COMP); 641 634 if (IS_ERR(table->sf_comp_pool)) { ··· 726 715 int mlx5_irq_table_create(struct mlx5_core_dev *dev) 727 716 { 728 717 int num_eqs = mlx5_max_eq_cap_get(dev); 718 + bool dynamic_vec; 729 719 int total_vec; 730 720 int pcif_vec; 731 721 int req_vec; ··· 736 724 if (mlx5_core_is_sf(dev)) 737 725 return 0; 738 726 727 + /* PCI PF vectors usage is limited by online cpus, device EQs and 728 + * PCI MSI-X capability. 729 + */ 739 730 pcif_vec = MLX5_CAP_GEN(dev, num_ports) * num_online_cpus() + 1; 740 731 pcif_vec = min_t(int, pcif_vec, num_eqs); 732 + pcif_vec = min_t(int, pcif_vec, pci_msix_vec_count(dev->pdev)); 741 733 742 734 total_vec = pcif_vec; 743 735 if (mlx5_sf_max_functions(dev)) 744 736 total_vec += MLX5_MAX_MSIX_PER_SF * mlx5_sf_max_functions(dev); 745 737 total_vec = min_t(int, total_vec, pci_msix_vec_count(dev->pdev)); 746 - pcif_vec = min_t(int, pcif_vec, pci_msix_vec_count(dev->pdev)); 747 738 748 739 req_vec = pci_msix_can_alloc_dyn(dev->pdev) ? 1 : total_vec; 749 740 n = pci_alloc_irq_vectors(dev->pdev, 1, req_vec, PCI_IRQ_MSIX); 750 741 if (n < 0) 751 742 return n; 752 743 753 - err = irq_pools_init(dev, total_vec - pcif_vec, pcif_vec); 744 + /* Further limit vectors of the pools based on platform for non dynamic case */ 745 + dynamic_vec = pci_msix_can_alloc_dyn(dev->pdev); 746 + if (!dynamic_vec) { 747 + pcif_vec = min_t(int, n, pcif_vec); 748 + total_vec = min_t(int, n, total_vec); 749 + } 750 + 751 + err = irq_pools_init(dev, total_vec - pcif_vec, pcif_vec, dynamic_vec); 754 752 if (err) 755 753 pci_free_irq_vectors(dev->pdev); 756 754
+17 -8
drivers/net/ethernet/stmicro/stmmac/dwmac-intel-plat.c
··· 108 108 if (IS_ERR(dwmac->tx_clk)) 109 109 return PTR_ERR(dwmac->tx_clk); 110 110 111 - clk_prepare_enable(dwmac->tx_clk); 111 + ret = clk_prepare_enable(dwmac->tx_clk); 112 + if (ret) { 113 + dev_err(&pdev->dev, 114 + "Failed to enable tx_clk\n"); 115 + return ret; 116 + } 112 117 113 118 /* Check and configure TX clock rate */ 114 119 rate = clk_get_rate(dwmac->tx_clk); ··· 124 119 if (ret) { 125 120 dev_err(&pdev->dev, 126 121 "Failed to set tx_clk\n"); 127 - return ret; 122 + goto err_tx_clk_disable; 128 123 } 129 124 } 130 125 } ··· 138 133 if (ret) { 139 134 dev_err(&pdev->dev, 140 135 "Failed to set clk_ptp_ref\n"); 141 - return ret; 136 + goto err_tx_clk_disable; 142 137 } 143 138 } 144 139 } ··· 154 149 } 155 150 156 151 ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res); 157 - if (ret) { 158 - clk_disable_unprepare(dwmac->tx_clk); 159 - return ret; 160 - } 152 + if (ret) 153 + goto err_tx_clk_disable; 161 154 162 155 return 0; 156 + 157 + err_tx_clk_disable: 158 + if (dwmac->data->tx_clk_en) 159 + clk_disable_unprepare(dwmac->tx_clk); 160 + return ret; 163 161 } 164 162 165 163 static void intel_eth_plat_remove(struct platform_device *pdev) ··· 170 162 struct intel_dwmac *dwmac = get_stmmac_bsp_priv(&pdev->dev); 171 163 172 164 stmmac_pltfr_remove(pdev); 173 - clk_disable_unprepare(dwmac->tx_clk); 165 + if (dwmac->data->tx_clk_en) 166 + clk_disable_unprepare(dwmac->tx_clk); 174 167 } 175 168 176 169 static struct platform_driver intel_eth_plat_driver = {
+2 -2
drivers/net/ethernet/stmicro/stmmac/dwmac-mediatek.c
··· 589 589 590 590 plat->mac_interface = priv_plat->phy_mode; 591 591 if (priv_plat->mac_wol) 592 - plat->flags |= STMMAC_FLAG_USE_PHY_WOL; 593 - else 594 592 plat->flags &= ~STMMAC_FLAG_USE_PHY_WOL; 593 + else 594 + plat->flags |= STMMAC_FLAG_USE_PHY_WOL; 595 595 plat->riwt_off = 1; 596 596 plat->maxmtu = ETH_DATA_LEN; 597 597 plat->host_dma_width = priv_plat->variant->dma_bit_mask;
+11 -2
drivers/net/ethernet/ti/icssg/icssg_prueth.c
··· 16 16 #include <linux/if_hsr.h> 17 17 #include <linux/if_vlan.h> 18 18 #include <linux/interrupt.h> 19 + #include <linux/io-64-nonatomic-hi-lo.h> 19 20 #include <linux/kernel.h> 20 21 #include <linux/mfd/syscon.h> 21 22 #include <linux/module.h> ··· 412 411 struct prueth_emac *emac = clockops_data; 413 412 u32 reduction_factor = 0, offset = 0; 414 413 struct timespec64 ts; 414 + u64 current_cycle; 415 + u64 start_offset; 415 416 u64 ns_period; 416 417 417 418 if (!on) ··· 452 449 writel(reduction_factor, emac->prueth->shram.va + 453 450 TIMESYNC_FW_WC_SYNCOUT_REDUCTION_FACTOR_OFFSET); 454 451 455 - writel(0, emac->prueth->shram.va + 456 - TIMESYNC_FW_WC_SYNCOUT_START_TIME_CYCLECOUNT_OFFSET); 452 + current_cycle = icssg_read_time(emac->prueth->shram.va + 453 + TIMESYNC_FW_WC_CYCLECOUNT_OFFSET); 454 + 455 + /* Rounding of current_cycle count to next second */ 456 + start_offset = roundup(current_cycle, MSEC_PER_SEC); 457 + 458 + hi_lo_writeq(start_offset, emac->prueth->shram.va + 459 + TIMESYNC_FW_WC_SYNCOUT_START_TIME_CYCLECOUNT_OFFSET); 457 460 458 461 return 0; 459 462 }
+12
drivers/net/ethernet/ti/icssg/icssg_prueth.h
··· 330 330 extern const struct ethtool_ops icssg_ethtool_ops; 331 331 extern const struct dev_pm_ops prueth_dev_pm_ops; 332 332 333 + static inline u64 icssg_read_time(const void __iomem *addr) 334 + { 335 + u32 low, high; 336 + 337 + do { 338 + high = readl(addr + 4); 339 + low = readl(addr); 340 + } while (high != readl(addr + 4)); 341 + 342 + return low + ((u64)high << 32); 343 + } 344 + 333 345 /* Classifier helpers */ 334 346 void icssg_class_set_mac_addr(struct regmap *miig_rt, int slice, u8 *mac); 335 347 void icssg_class_set_host_mac_addr(struct regmap *miig_rt, const u8 *mac);
+3 -1
drivers/net/ethernet/vertexcom/mse102x.c
··· 437 437 mse = &mses->mse102x; 438 438 439 439 while ((txb = skb_dequeue(&mse->txq))) { 440 + unsigned int len = max_t(unsigned int, txb->len, ETH_ZLEN); 441 + 440 442 mutex_lock(&mses->lock); 441 443 ret = mse102x_tx_pkt_spi(mse, txb, work_timeout); 442 444 mutex_unlock(&mses->lock); 443 445 if (ret) { 444 446 mse->ndev->stats.tx_dropped++; 445 447 } else { 446 - mse->ndev->stats.tx_bytes += txb->len; 448 + mse->ndev->stats.tx_bytes += len; 447 449 mse->ndev->stats.tx_packets++; 448 450 } 449 451
+8 -6
drivers/net/phy/phylink.c
··· 78 78 unsigned int pcs_neg_mode; 79 79 unsigned int pcs_state; 80 80 81 - bool mac_link_dropped; 81 + bool link_failed; 82 82 bool using_mac_select_pcs; 83 83 84 84 struct sfp_bus *sfp_bus; ··· 1475 1475 cur_link_state = pl->old_link_state; 1476 1476 1477 1477 if (pl->phylink_disable_state) { 1478 - pl->mac_link_dropped = false; 1478 + pl->link_failed = false; 1479 1479 link_state.link = false; 1480 - } else if (pl->mac_link_dropped) { 1480 + } else if (pl->link_failed) { 1481 1481 link_state.link = false; 1482 1482 retrigger = true; 1483 1483 } else { ··· 1572 1572 phylink_link_up(pl, link_state); 1573 1573 } 1574 1574 if (!link_state.link && retrigger) { 1575 - pl->mac_link_dropped = false; 1575 + pl->link_failed = false; 1576 1576 queue_work(system_power_efficient_wq, &pl->resolve); 1577 1577 } 1578 1578 mutex_unlock(&pl->state_mutex); ··· 1835 1835 pl->phy_state.pause |= MLO_PAUSE_RX; 1836 1836 pl->phy_state.interface = phydev->interface; 1837 1837 pl->phy_state.link = up; 1838 + if (!up) 1839 + pl->link_failed = true; 1838 1840 mutex_unlock(&pl->state_mutex); 1839 1841 1840 1842 phylink_run_resolve(pl); ··· 2160 2158 static void phylink_link_changed(struct phylink *pl, bool up, const char *what) 2161 2159 { 2162 2160 if (!up) 2163 - pl->mac_link_dropped = true; 2161 + pl->link_failed = true; 2164 2162 phylink_run_resolve(pl); 2165 2163 phylink_dbg(pl, "%s link %s\n", what, up ? "up" : "down"); 2166 2164 } ··· 2794 2792 * link will cycle. 2795 2793 */ 2796 2794 if (manual_changed) { 2797 - pl->mac_link_dropped = true; 2795 + pl->link_failed = true; 2798 2796 phylink_run_resolve(pl); 2799 2797 } 2800 2798
+3 -1
include/linux/sockptr.h
··· 77 77 { 78 78 if (optlen < ksize) 79 79 return -EINVAL; 80 - return copy_from_sockptr(dst, optval, ksize); 80 + if (copy_from_sockptr(dst, optval, ksize)) 81 + return -EFAULT; 82 + return 0; 81 83 } 82 84 83 85 static inline int copy_struct_from_sockptr(void *dst, size_t ksize,
+2
include/net/bond_options.h
··· 161 161 #if IS_ENABLED(CONFIG_IPV6) 162 162 void bond_option_ns_ip6_targets_clear(struct bonding *bond); 163 163 #endif 164 + void bond_slave_ns_maddrs_add(struct bonding *bond, struct slave *slave); 165 + void bond_slave_ns_maddrs_del(struct bonding *bond, struct slave *slave); 164 166 165 167 #endif /* _NET_BOND_OPTIONS_H */
-2
net/bluetooth/hci_core.c
··· 3788 3788 3789 3789 hci_dev_lock(hdev); 3790 3790 conn = hci_conn_hash_lookup_handle(hdev, handle); 3791 - if (conn && hci_dev_test_flag(hdev, HCI_MGMT)) 3792 - mgmt_device_connected(hdev, conn, NULL, 0); 3793 3791 hci_dev_unlock(hdev); 3794 3792 3795 3793 if (conn) {
+24 -18
net/core/sock.c
··· 1052 1052 1053 1053 #ifdef CONFIG_PAGE_POOL 1054 1054 1055 - /* This is the number of tokens that the user can SO_DEVMEM_DONTNEED in 1056 - * 1 syscall. The limit exists to limit the amount of memory the kernel 1057 - * allocates to copy these tokens. 1055 + /* This is the number of tokens and frags that the user can SO_DEVMEM_DONTNEED 1056 + * in 1 syscall. The limit exists to limit the amount of memory the kernel 1057 + * allocates to copy these tokens, and to prevent looping over the frags for 1058 + * too long. 1058 1059 */ 1059 1060 #define MAX_DONTNEED_TOKENS 128 1061 + #define MAX_DONTNEED_FRAGS 1024 1060 1062 1061 1063 static noinline_for_stack int 1062 1064 sock_devmem_dontneed(struct sock *sk, sockptr_t optval, unsigned int optlen) 1063 1065 { 1064 1066 unsigned int num_tokens, i, j, k, netmem_num = 0; 1065 1067 struct dmabuf_token *tokens; 1068 + int ret = 0, num_frags = 0; 1066 1069 netmem_ref netmems[16]; 1067 - int ret = 0; 1068 1070 1069 1071 if (!sk_is_tcp(sk)) 1070 1072 return -EBADF; 1071 1073 1072 - if (optlen % sizeof(struct dmabuf_token) || 1074 + if (optlen % sizeof(*tokens) || 1073 1075 optlen > sizeof(*tokens) * MAX_DONTNEED_TOKENS) 1074 1076 return -EINVAL; 1075 1077 1076 - tokens = kvmalloc_array(optlen, sizeof(*tokens), GFP_KERNEL); 1078 + num_tokens = optlen / sizeof(*tokens); 1079 + tokens = kvmalloc_array(num_tokens, sizeof(*tokens), GFP_KERNEL); 1077 1080 if (!tokens) 1078 1081 return -ENOMEM; 1079 1082 1080 - num_tokens = optlen / sizeof(struct dmabuf_token); 1081 1083 if (copy_from_sockptr(tokens, optval, optlen)) { 1082 1084 kvfree(tokens); 1083 1085 return -EFAULT; ··· 1088 1086 xa_lock_bh(&sk->sk_user_frags); 1089 1087 for (i = 0; i < num_tokens; i++) { 1090 1088 for (j = 0; j < tokens[i].token_count; j++) { 1089 + if (++num_frags > MAX_DONTNEED_FRAGS) 1090 + goto frag_limit_reached; 1091 + 1091 1092 netmem_ref netmem = (__force netmem_ref)__xa_erase( 1092 1093 &sk->sk_user_frags, tokens[i].token_start + j); 1093 1094 1094 - if (netmem && 1095 - !WARN_ON_ONCE(!netmem_is_net_iov(netmem))) { 1096 - netmems[netmem_num++] = netmem; 1097 - if (netmem_num == ARRAY_SIZE(netmems)) { 1098 - xa_unlock_bh(&sk->sk_user_frags); 1099 - for (k = 0; k < netmem_num; k++) 1100 - WARN_ON_ONCE(!napi_pp_put_page(netmems[k])); 1101 - netmem_num = 0; 1102 - xa_lock_bh(&sk->sk_user_frags); 1103 - } 1104 - ret++; 1095 + if (!netmem || WARN_ON_ONCE(!netmem_is_net_iov(netmem))) 1096 + continue; 1097 + 1098 + netmems[netmem_num++] = netmem; 1099 + if (netmem_num == ARRAY_SIZE(netmems)) { 1100 + xa_unlock_bh(&sk->sk_user_frags); 1101 + for (k = 0; k < netmem_num; k++) 1102 + WARN_ON_ONCE(!napi_pp_put_page(netmems[k])); 1103 + netmem_num = 0; 1104 + xa_lock_bh(&sk->sk_user_frags); 1105 1105 } 1106 + ret++; 1106 1107 } 1107 1108 } 1108 1109 1110 + frag_limit_reached: 1109 1111 xa_unlock_bh(&sk->sk_user_frags); 1110 1112 for (k = 0; k < netmem_num; k++) 1111 1113 WARN_ON_ONCE(!napi_pp_put_page(netmems[k]));
+1 -1
net/dccp/ipv6.c
··· 618 618 by tcp. Feel free to propose better solution. 619 619 --ANK (980728) 620 620 */ 621 - if (np->rxopt.all) 621 + if (np->rxopt.all && sk->sk_state != DCCP_LISTEN) 622 622 opt_skb = skb_clone_and_charge_r(skb, sk); 623 623 624 624 if (sk->sk_state == DCCP_OPEN) { /* Fast path */
+2 -1
net/ipv4/ipmr_base.c
··· 310 310 if (filter->filter_set) 311 311 flags |= NLM_F_DUMP_FILTERED; 312 312 313 - list_for_each_entry_rcu(mfc, &mrt->mfc_cache_list, list) { 313 + list_for_each_entry_rcu(mfc, &mrt->mfc_cache_list, list, 314 + lockdep_rtnl_is_held()) { 314 315 if (e < s_e) 315 316 goto next_entry; 316 317 if (filter->dev &&
+1 -3
net/ipv6/tcp_ipv6.c
··· 1618 1618 by tcp. Feel free to propose better solution. 1619 1619 --ANK (980728) 1620 1620 */ 1621 - if (np->rxopt.all) 1621 + if (np->rxopt.all && sk->sk_state != TCP_LISTEN) 1622 1622 opt_skb = skb_clone_and_charge_r(skb, sk); 1623 1623 1624 1624 if (sk->sk_state == TCP_ESTABLISHED) { /* Fast path */ ··· 1656 1656 if (reason) 1657 1657 goto reset; 1658 1658 } 1659 - if (opt_skb) 1660 - __kfree_skb(opt_skb); 1661 1659 return 0; 1662 1660 } 1663 1661 } else
+2 -1
net/mptcp/pm_netlink.c
··· 524 524 { 525 525 struct mptcp_pm_addr_entry *entry; 526 526 527 - list_for_each_entry(entry, &pernet->local_addr_list, list) { 527 + list_for_each_entry_rcu(entry, &pernet->local_addr_list, list, 528 + lockdep_is_held(&pernet->lock)) { 528 529 if (mptcp_addresses_equal(&entry->addr, info, entry->addr.port)) 529 530 return entry; 530 531 }
+15
net/mptcp/pm_userspace.c
··· 308 308 309 309 lock_sock(sk); 310 310 311 + spin_lock_bh(&msk->pm.lock); 311 312 match = mptcp_userspace_pm_lookup_addr_by_id(msk, id_val); 312 313 if (!match) { 313 314 GENL_SET_ERR_MSG(info, "address with specified id not found"); 315 + spin_unlock_bh(&msk->pm.lock); 314 316 release_sock(sk); 315 317 goto out; 316 318 } 317 319 318 320 list_move(&match->list, &free_list); 321 + spin_unlock_bh(&msk->pm.lock); 319 322 320 323 mptcp_pm_remove_addrs(msk, &free_list); 321 324 ··· 563 560 struct nlattr *token = info->attrs[MPTCP_PM_ATTR_TOKEN]; 564 561 struct nlattr *attr = info->attrs[MPTCP_PM_ATTR_ADDR]; 565 562 struct net *net = sock_net(skb->sk); 563 + struct mptcp_pm_addr_entry *entry; 566 564 struct mptcp_sock *msk; 567 565 int ret = -EINVAL; 568 566 struct sock *sk; ··· 604 600 605 601 if (loc.flags & MPTCP_PM_ADDR_FLAG_BACKUP) 606 602 bkup = 1; 603 + 604 + spin_lock_bh(&msk->pm.lock); 605 + list_for_each_entry(entry, &msk->pm.userspace_pm_local_addr_list, list) { 606 + if (mptcp_addresses_equal(&entry->addr, &loc.addr, false)) { 607 + if (bkup) 608 + entry->flags |= MPTCP_PM_ADDR_FLAG_BACKUP; 609 + else 610 + entry->flags &= ~MPTCP_PM_ADDR_FLAG_BACKUP; 611 + } 612 + } 613 + spin_unlock_bh(&msk->pm.lock); 607 614 608 615 lock_sock(sk); 609 616 ret = mptcp_pm_nl_mp_prio_send_ack(msk, &loc.addr, &rem.addr, bkup);
+11 -5
net/mptcp/protocol.c
··· 2082 2082 slow = lock_sock_fast(ssk); 2083 2083 WRITE_ONCE(ssk->sk_rcvbuf, rcvbuf); 2084 2084 WRITE_ONCE(tcp_sk(ssk)->window_clamp, window_clamp); 2085 - tcp_cleanup_rbuf(ssk, 1); 2085 + if (tcp_can_send_ack(ssk)) 2086 + tcp_cleanup_rbuf(ssk, 1); 2086 2087 unlock_sock_fast(ssk, slow); 2087 2088 } 2088 2089 } ··· 2206 2205 cmsg_flags = MPTCP_CMSG_INQ; 2207 2206 2208 2207 while (copied < len) { 2209 - int bytes_read; 2208 + int err, bytes_read; 2210 2209 2211 2210 bytes_read = __mptcp_recvmsg_mskq(msk, msg, len - copied, flags, &tss, &cmsg_flags); 2212 2211 if (unlikely(bytes_read < 0)) { ··· 2268 2267 } 2269 2268 2270 2269 pr_debug("block timeout %ld\n", timeo); 2271 - sk_wait_data(sk, &timeo, NULL); 2270 + mptcp_rcv_space_adjust(msk, copied); 2271 + err = sk_wait_data(sk, &timeo, NULL); 2272 + if (err < 0) { 2273 + err = copied ? : err; 2274 + goto out_err; 2275 + } 2272 2276 } 2277 + 2278 + mptcp_rcv_space_adjust(msk, copied); 2273 2279 2274 2280 out_err: 2275 2281 if (cmsg_flags && copied >= 0) { ··· 2293 2285 pr_debug("msk=%p rx queue empty=%d:%d copied=%d\n", 2294 2286 msk, skb_queue_empty_lockless(&sk->sk_receive_queue), 2295 2287 skb_queue_empty(&msk->receive_queue), copied); 2296 - if (!(flags & MSG_PEEK)) 2297 - mptcp_rcv_space_adjust(msk, copied); 2298 2288 2299 2289 release_sock(sk); 2300 2290 return copied;
+8 -23
net/netlink/af_netlink.c
··· 393 393 394 394 static void netlink_sock_destruct(struct sock *sk) 395 395 { 396 - struct netlink_sock *nlk = nlk_sk(sk); 397 - 398 - if (nlk->cb_running) { 399 - if (nlk->cb.done) 400 - nlk->cb.done(&nlk->cb); 401 - module_put(nlk->cb.module); 402 - kfree_skb(nlk->cb.skb); 403 - } 404 - 405 396 skb_queue_purge(&sk->sk_receive_queue); 406 397 407 398 if (!sock_flag(sk, SOCK_DEAD)) { ··· 403 412 WARN_ON(atomic_read(&sk->sk_rmem_alloc)); 404 413 WARN_ON(refcount_read(&sk->sk_wmem_alloc)); 405 414 WARN_ON(nlk_sk(sk)->groups); 406 - } 407 - 408 - static void netlink_sock_destruct_work(struct work_struct *work) 409 - { 410 - struct netlink_sock *nlk = container_of(work, struct netlink_sock, 411 - work); 412 - 413 - sk_free(&nlk->sk); 414 415 } 415 416 416 417 /* This lock without WQ_FLAG_EXCLUSIVE is good on UP and it is _very_ bad on ··· 714 731 if (!refcount_dec_and_test(&sk->sk_refcnt)) 715 732 return; 716 733 717 - if (nlk->cb_running && nlk->cb.done) { 718 - INIT_WORK(&nlk->work, netlink_sock_destruct_work); 719 - schedule_work(&nlk->work); 720 - return; 721 - } 722 - 723 734 sk_free(sk); 724 735 } 725 736 ··· 763 786 }; 764 787 blocking_notifier_call_chain(&netlink_chain, 765 788 NETLINK_URELEASE, &n); 789 + } 790 + 791 + /* Terminate any outstanding dump */ 792 + if (nlk->cb_running) { 793 + if (nlk->cb.done) 794 + nlk->cb.done(&nlk->cb); 795 + module_put(nlk->cb.module); 796 + kfree_skb(nlk->cb.skb); 766 797 } 767 798 768 799 module_put(nlk->module);
-2
net/netlink/af_netlink.h
··· 4 4 5 5 #include <linux/rhashtable.h> 6 6 #include <linux/atomic.h> 7 - #include <linux/workqueue.h> 8 7 #include <net/sock.h> 9 8 10 9 /* flags */ ··· 49 50 50 51 struct rhash_head node; 51 52 struct rcu_head rcu; 52 - struct work_struct work; 53 53 }; 54 54 55 55 static inline struct netlink_sock *nlk_sk(struct sock *sk)
+14 -4
net/sched/cls_u32.c
··· 92 92 long knodes; 93 93 }; 94 94 95 + static u32 handle2id(u32 h) 96 + { 97 + return ((h & 0x80000000) ? ((h >> 20) & 0x7FF) : h); 98 + } 99 + 100 + static u32 id2handle(u32 id) 101 + { 102 + return (id | 0x800U) << 20; 103 + } 104 + 95 105 static inline unsigned int u32_hash_fold(__be32 key, 96 106 const struct tc_u32_sel *sel, 97 107 u8 fshift) ··· 320 310 int id = idr_alloc_cyclic(&tp_c->handle_idr, ptr, 1, 0x7FF, GFP_KERNEL); 321 311 if (id < 0) 322 312 return 0; 323 - return (id | 0x800U) << 20; 313 + return id2handle(id); 324 314 } 325 315 326 316 static struct hlist_head *tc_u_common_hash; ··· 370 360 return -ENOBUFS; 371 361 372 362 refcount_set(&root_ht->refcnt, 1); 373 - root_ht->handle = tp_c ? gen_new_htid(tp_c, root_ht) : 0x80000000; 363 + root_ht->handle = tp_c ? gen_new_htid(tp_c, root_ht) : id2handle(0); 374 364 root_ht->prio = tp->prio; 375 365 root_ht->is_root = true; 376 366 idr_init(&root_ht->handle_idr); ··· 622 612 if (phn == ht) { 623 613 u32_clear_hw_hnode(tp, ht, extack); 624 614 idr_destroy(&ht->handle_idr); 625 - idr_remove(&tp_c->handle_idr, ht->handle); 615 + idr_remove(&tp_c->handle_idr, handle2id(ht->handle)); 626 616 RCU_INIT_POINTER(*hn, ht->next); 627 617 kfree_rcu(ht, rcu); 628 618 return 0; ··· 999 989 1000 990 err = u32_replace_hw_hnode(tp, ht, userflags, extack); 1001 991 if (err) { 1002 - idr_remove(&tp_c->handle_idr, handle); 992 + idr_remove(&tp_c->handle_idr, handle2id(handle)); 1003 993 kfree(ht); 1004 994 return err; 1005 995 }
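
The helpers added at the top of cls_u32.c make the handle/IDR-id relationship explicit: gen_new_htid() stores id2handle(id) as the hnode handle, so teardown paths must convert back with handle2id() before idr_remove(), which is exactly what the leak fix does. A small user-space round-trip check of the same two helpers (illustration only):

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* copies of the kernel helpers above: ids come from
 * idr_alloc_cyclic(..., 1, 0x7FF, ...) and land in bits 20..30 of the
 * handle, with bit 31 always set */
static uint32_t handle2id(uint32_t h)
{
        return (h & 0x80000000) ? ((h >> 20) & 0x7FF) : h;
}

static uint32_t id2handle(uint32_t id)
{
        return (id | 0x800U) << 20;
}

int main(void)
{
        assert(id2handle(0) == 0x80000000);   /* the root hnode handle */
        assert(id2handle(1) == 0x80100000);
        assert(handle2id(id2handle(1)) == 1); /* what idr_remove() needs */
        printf("id 1 <-> handle 0x%08x\n", id2handle(1));
        return 0;
}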
+13 -6
net/sctp/ipv6.c
··· 683 683 struct sock *sk = &sp->inet.sk; 684 684 struct net *net = sock_net(sk); 685 685 struct net_device *dev = NULL; 686 - int type; 686 + int type, res, bound_dev_if; 687 687 688 688 type = ipv6_addr_type(in6); 689 689 if (IPV6_ADDR_ANY == type) ··· 697 697 if (!(type & IPV6_ADDR_UNICAST)) 698 698 return 0; 699 699 700 - if (sk->sk_bound_dev_if) { 701 - dev = dev_get_by_index_rcu(net, sk->sk_bound_dev_if); 700 + rcu_read_lock(); 701 + bound_dev_if = READ_ONCE(sk->sk_bound_dev_if); 702 + if (bound_dev_if) { 703 + res = 0; 704 + dev = dev_get_by_index_rcu(net, bound_dev_if); 702 705 if (!dev) 703 - return 0; 706 + goto out; 704 707 } 705 708 706 - return ipv6_can_nonlocal_bind(net, &sp->inet) || 707 - ipv6_chk_addr(net, in6, dev, 0); 709 + res = ipv6_can_nonlocal_bind(net, &sp->inet) || 710 + ipv6_chk_addr(net, in6, dev, 0); 711 + 712 + out: 713 + rcu_read_unlock(); 714 + return res; 708 715 } 709 716 710 717 /* This function checks if the address is a valid address to be used for
+3
net/vmw_vsock/af_vsock.c
··· 836 836 { 837 837 struct vsock_sock *vsk = vsock_sk(sk); 838 838 839 + /* Flush MSG_ZEROCOPY leftovers. */ 840 + __skb_queue_purge(&sk->sk_error_queue); 841 + 839 842 vsock_deassign_transport(vsk); 840 843 841 844 /* When clearing these addresses, there's no need to set the family and
+9
net/vmw_vsock/virtio_transport_common.c
··· 400 400 if (virtio_transport_init_zcopy_skb(vsk, skb, 401 401 info->msg, 402 402 can_zcopy)) { 403 + kfree_skb(skb); 403 404 ret = -ENOMEM; 404 405 break; 405 406 } ··· 1512 1511 if (sk_acceptq_is_full(sk)) { 1513 1512 virtio_transport_reset_no_sock(t, skb); 1514 1513 return -ENOMEM; 1514 + } 1515 + 1516 + /* __vsock_release() might have already flushed accept_queue. 1517 + * Subsequent enqueues would lead to a memory leak. 1518 + */ 1519 + if (sk->sk_shutdown == SHUTDOWN_MASK) { 1520 + virtio_transport_reset_no_sock(t, skb); 1521 + return -ESHUTDOWN; 1515 1522 } 1516 1523 1517 1524 child = vsock_create_connected(sk);
+1 -1
samples/pktgen/pktgen_sample01_simple.sh
··· 76 76 pg_set $DEV "udp_dst_max $UDP_DST_MAX" 77 77 fi 78 78 79 - [ ! -z "$UDP_CSUM" ] && pg_set $dev "flag UDPCSUM" 79 + [ ! -z "$UDP_CSUM" ] && pg_set $DEV "flag UDPCSUM" 80 80 81 81 # Setup random UDP port src range 82 82 pg_set $DEV "flag UDPSRC_RND"
+53 -1
tools/testing/selftests/drivers/net/bonding/bond_options.sh
··· 11 11 12 12 lib_dir=$(dirname "$0") 13 13 source ${lib_dir}/bond_topo_3d1c.sh 14 + c_maddr="33:33:00:00:00:10" 15 + g_maddr="33:33:00:00:02:54" 14 16 15 17 skip_prio() 16 18 { ··· 242 240 done 243 241 } 244 242 243 + # Testing correct multicast groups are added to slaves for ns targets 244 + arp_validate_mcast() 245 + { 246 + RET=0 247 + local arp_valid=$(cmd_jq "ip -n ${s_ns} -j -d link show bond0" ".[].linkinfo.info_data.arp_validate") 248 + local active_slave=$(cmd_jq "ip -n ${s_ns} -d -j link show bond0" ".[].linkinfo.info_data.active_slave") 249 + 250 + for i in $(seq 0 2); do 251 + maddr_list=$(ip -n ${s_ns} maddr show dev eth${i}) 252 + 253 + # arp_valid == 0 or active_slave should not join any maddrs 254 + if { [ "$arp_valid" == "null" ] || [ "eth${i}" == ${active_slave} ]; } && \ 255 + echo "$maddr_list" | grep -qE "${c_maddr}|${g_maddr}"; then 256 + RET=1 257 + check_err 1 "arp_valid $arp_valid active_slave $active_slave, eth$i has mcast group" 258 + # arp_valid != 0 and backup_slave should join both maddrs 259 + elif [ "$arp_valid" != "null" ] && [ "eth${i}" != ${active_slave} ] && \ 260 + ( ! echo "$maddr_list" | grep -q "${c_maddr}" || \ 261 + ! echo "$maddr_list" | grep -q "${m_maddr}"); then 262 + RET=1 263 + check_err 1 "arp_valid $arp_valid active_slave $active_slave, eth$i has mcast group" 264 + fi 265 + done 266 + 267 + # Do failover 268 + ip -n ${s_ns} link set ${active_slave} down 269 + # wait for active link change 270 + slowwait 2 active_slave_changed $active_slave 271 + active_slave=$(cmd_jq "ip -n ${s_ns} -d -j link show bond0" ".[].linkinfo.info_data.active_slave") 272 + 273 + for i in $(seq 0 2); do 274 + maddr_list=$(ip -n ${s_ns} maddr show dev eth${i}) 275 + 276 + # arp_valid == 0 or active_slave should not join any maddrs 277 + if { [ "$arp_valid" == "null" ] || [ "eth${i}" == ${active_slave} ]; } && \ 278 + echo "$maddr_list" | grep -qE "${c_maddr}|${g_maddr}"; then 279 + RET=1 280 + check_err 1 "arp_valid $arp_valid active_slave $active_slave, eth$i has mcast group" 281 + # arp_valid != 0 and backup_slave should join both maddrs 282 + elif [ "$arp_valid" != "null" ] && [ "eth${i}" != ${active_slave} ] && \ 283 + ( ! echo "$maddr_list" | grep -q "${c_maddr}" || \ 284 + ! echo "$maddr_list" | grep -q "${m_maddr}"); then 285 + RET=1 286 + check_err 1 "arp_valid $arp_valid active_slave $active_slave, eth$i has mcast group" 287 + fi 288 + done 289 + } 290 + 245 291 arp_validate_arp() 246 292 { 247 293 local mode=$1 ··· 311 261 fi 312 262 313 263 for val in $(seq 0 6); do 314 - arp_validate_test "mode $mode arp_interval 100 ns_ip6_target ${g_ip6} arp_validate $val" 264 + arp_validate_test "mode $mode arp_interval 100 ns_ip6_target ${g_ip6},${c_ip6} arp_validate $val" 315 265 log_test "arp_validate" "$mode ns_ip6_target arp_validate $val" 266 + arp_validate_mcast 267 + log_test "arp_validate" "join mcast group" 316 268 done 317 269 } 318 270
+1
tools/testing/selftests/net/.gitignore
··· 19 19 msg_oob 20 20 msg_zerocopy 21 21 ncdevmem 22 + netlink-dumps 22 23 nettest 23 24 psock_fanout 24 25 psock_snd
+1
tools/testing/selftests/net/Makefile
··· 78 78 TEST_GEN_FILES += io_uring_zerocopy_tx 79 79 TEST_PROGS += io_uring_zerocopy_tx.sh 80 80 TEST_GEN_FILES += bind_bhash 81 + TEST_GEN_PROGS += netlink-dumps 81 82 TEST_GEN_PROGS += sk_bind_sendto_listen 82 83 TEST_GEN_PROGS += sk_connect_zero_addr 83 84 TEST_GEN_PROGS += sk_so_peek_off
+24
tools/testing/selftests/tc-testing/tc-tests/filters/u32.json
··· 329 329 "teardown": [ 330 330 "$TC qdisc del dev $DEV1 parent root drr" 331 331 ] 332 + }, 333 + { 334 + "id": "1234", 335 + "name": "Exercise IDR leaks by creating/deleting a filter many (2048) times", 336 + "category": [ 337 + "filter", 338 + "u32" 339 + ], 340 + "plugins": { 341 + "requires": "nsPlugin" 342 + }, 343 + "setup": [ 344 + "$TC qdisc add dev $DEV1 parent root handle 10: drr", 345 + "$TC filter add dev $DEV1 parent 10:0 protocol ip prio 2 u32 match ip src 0.0.0.2/32 action drop", 346 + "$TC filter add dev $DEV1 parent 10:0 protocol ip prio 3 u32 match ip src 0.0.0.3/32 action drop" 347 + ], 348 + "cmdUnderTest": "bash -c 'for i in {1..2048} ;do echo filter delete dev $DEV1 pref 3;echo filter add dev $DEV1 parent 10:0 protocol ip prio 3 u32 match ip src 0.0.0.3/32 action drop;done | $TC -b -'", 349 + "expExitCode": "0", 350 + "verifyCmd": "$TC filter show dev $DEV1", 351 + "matchPattern": "protocol ip pref 3 u32", 352 + "matchCount": "3", 353 + "teardown": [ 354 + "$TC qdisc del dev $DEV1 parent root drr" 355 + ] 332 356 } 333 357 ]