Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'net-5.15-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes and stragglers from Jakub Kicinski:
"Networking stragglers and fixes, including changes from netfilter,
wireless and can.

Current release - regressions:

- qrtr: revert check in qrtr_endpoint_post(), fixes audio and wifi

- ip_gre: validate csum_start only on pull

- bnxt_en: fix 64-bit doorbell operation on 32-bit kernels

- ionic: fix double use of queue-lock, fix a sleeping in atomic

- can: c_can: fix null-ptr-deref on ioctl()

- cs89x0: disable compile testing on powerpc

Current release - new code bugs:

- bridge: mcast: fix vlan port router deadlock, consistently disable
BH

Previous releases - regressions:

- dsa: tag_rtl4_a: fix egress tags, only port 0 was working

- mptcp: fix possible divide by zero

- netfilter: nft_ct: protect nft_ct_pcpu_template_refcnt with mutex

- netfilter: socket: icmp6: fix use-after-scope

- stmmac: fix MAC not working when system resume back with WoL active

Previous releases - always broken:

- ip/ip6_gre: use the same logic as SIT interfaces when computing
v6LL address

- seg6: set fc_nlinfo in nh_create_ipv4, nh_create_ipv6

- mptcp: only send extra TCP acks in eligible socket states

- dsa: lantiq_gswip: fix maximum frame length

- stmmac: fix overall budget calculation for rxtx_napi

- bnxt_en: fix firmware version reporting via devlink

- renesas: sh_eth: add missing barrier to fix freeing wrong tx
descriptor

Stragglers:

- netfilter: conntrack: switch to siphash

- netfilter: refuse insertion if chain has grown too large

- ncsi: add get MAC address command to get Intel i210 MAC address"

* tag 'net-5.15-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (76 commits)
ieee802154: Remove redundant initialization of variable ret
net: stmmac: fix MAC not working when system resume back with WoL active
net: phylink: add suspend/resume support
net: renesas: sh_eth: Fix freeing wrong tx descriptor
bonding: 3ad: pass parameter bond_params by reference
cxgb3: fix oops on module removal
can: c_can: fix null-ptr-deref on ioctl()
can: rcar_canfd: add __maybe_unused annotation to silence warning
net: wwan: iosm: Unify IO accessors used in the driver
net: wwan: iosm: Replace io.*64_lo_hi() with regular accessors
net: qcom/emac: Replace strlcpy with strscpy
ip6_gre: Revert "ip6_gre: add validation for csum_start"
net: hns3: make hclgevf_cmd_caps_bit_map0 and hclge_cmd_caps_bit_map0 static
selftests/bpf: Test XDP bonding nest and unwind
bonding: Fix negative jump label count on nested bonding
MAINTAINERS: add VM SOCKETS (AF_VSOCK) entry
stmmac: dwmac-loongson:Fix missing return value
iwlwifi: fix printk format warnings in uefi.c
net: create netdev->dev_addr assignment helpers
bnxt_en: Fix possible unintended driver initiated error recovery
...

+1066 -389
+8 -5
Documentation/networking/nf_conntrack-sysctl.rst
···
 nf_conntrack_buckets - INTEGER
     Size of hash table. If not specified as parameter during module
     loading, the default size is calculated by dividing total memory
-    by 16384 to determine the number of buckets but the hash table will
-    never have fewer than 32 and limited to 16384 buckets. For systems
-    with more than 4GB of memory it will be 65536 buckets.
+    by 16384 to determine the number of buckets. The hash table will
+    never have fewer than 1024 and never more than 262144 buckets.
     This sysctl is only writeable in the initial net namespace.

 nf_conntrack_checksum - BOOLEAN
···
     Log invalid packets of a type specified by value.

 nf_conntrack_max - INTEGER
-    Size of connection tracking table. Default value is
-    nf_conntrack_buckets value * 4.
+    Maximum number of allowed connection tracking entries. This value is set
+    to nf_conntrack_buckets by default.
+    Note that connection tracking entries are added to the table twice -- once
+    for the original direction and once for the reply direction (i.e., with
+    the reversed address). This means that with default settings a maxed-out
+    table will have a average hash chain length of 2, not 1.

 nf_conntrack_tcp_be_liberal - BOOLEAN
     - 0 - disabled (default)
+13 -7
MAINTAINERS
···
 L:  virtualization@lists.linux-foundation.org
 L:  netdev@vger.kernel.org
 S:  Maintained
-F:  drivers/net/vsockmon.c
 F:  drivers/vhost/vsock.c
 F:  include/linux/virtio_vsock.h
 F:  include/uapi/linux/virtio_vsock.h
-F:  include/uapi/linux/vm_sockets_diag.h
-F:  include/uapi/linux/vsockmon.h
-F:  net/vmw_vsock/af_vsock_tap.c
-F:  net/vmw_vsock/diag.c
 F:  net/vmw_vsock/virtio_transport.c
 F:  net/vmw_vsock/virtio_transport_common.c
-F:  net/vmw_vsock/vsock_loopback.c
-F:  tools/testing/vsock/

 VIRTIO BLOCK AND SCSI DRIVERS
 M:  "Michael S. Tsirkin" <mst@redhat.com>
···
 F:  drivers/staging/vme/
 F:  drivers/vme/
 F:  include/linux/vme*
+
+VM SOCKETS (AF_VSOCK)
+M:  Stefano Garzarella <sgarzare@redhat.com>
+L:  virtualization@lists.linux-foundation.org
+L:  netdev@vger.kernel.org
+S:  Maintained
+F:  drivers/net/vsockmon.c
+F:  include/net/af_vsock.h
+F:  include/uapi/linux/vm_sockets.h
+F:  include/uapi/linux/vm_sockets_diag.h
+F:  include/uapi/linux/vsockmon.h
+F:  net/vmw_vsock/
+F:  tools/testing/vsock/

 VMWARE BALLOON DRIVER
 M:  Nadav Amit <namit@vmware.com>
+4 -4
drivers/net/bonding/bond_3ad.c
···
 static void ad_mux_machine(struct port *port, bool *update_slave_arr);
 static void ad_rx_machine(struct lacpdu *lacpdu, struct port *port);
 static void ad_tx_machine(struct port *port);
-static void ad_periodic_machine(struct port *port, struct bond_params bond_params);
+static void ad_periodic_machine(struct port *port, struct bond_params *bond_params);
 static void ad_port_selection_logic(struct port *port, bool *update_slave_arr);
 static void ad_agg_selection_logic(struct aggregator *aggregator,
                                    bool *update_slave_arr);
···
  *
  * Turn ntt flag on priodically to perform periodic transmission of lacpdu's.
  */
-static void ad_periodic_machine(struct port *port, struct bond_params bond_params)
+static void ad_periodic_machine(struct port *port, struct bond_params *bond_params)
 {
     periodic_states_t last_state;

···
     /* check if port was reinitialized */
     if (((port->sm_vars & AD_PORT_BEGIN) || !(port->sm_vars & AD_PORT_LACP_ENABLED) || !port->is_enabled) ||
         (!(port->actor_oper_port_state & LACP_STATE_LACP_ACTIVITY) && !(port->partner_oper.port_state & LACP_STATE_LACP_ACTIVITY)) ||
-        !bond_params.lacp_active) {
+        !bond_params->lacp_active) {
         port->sm_periodic_state = AD_NO_PERIODIC;
     }
     /* check if state machine should change state */
···
     }

     ad_rx_machine(NULL, port);
-    ad_periodic_machine(port, bond->params);
+    ad_periodic_machine(port, &bond->params);
     ad_port_selection_logic(port, &update_slave_arr);
     ad_mux_machine(port, &update_slave_arr);
     ad_tx_machine(port);
+8 -9
drivers/net/bonding/bond_main.c
···
             res = -EOPNOTSUPP;
             goto err_sysfs_del;
         }
-    } else {
+    } else if (bond->xdp_prog) {
         struct netdev_bpf xdp = {
             .command = XDP_SETUP_PROG,
             .flags   = 0,
···
      * probe to generate any traffic (arp_validate=0)
      */
     if (bond->params.arp_validate)
-        net_warn_ratelimited("%s: no route to arp_ip_target %pI4 and arp_validate is set\n",
-                             bond->dev->name,
-                             &targets[i]);
+        pr_warn_once("%s: no route to arp_ip_target %pI4 and arp_validate is set\n",
+                     bond->dev->name,
+                     &targets[i]);
     bond_arp_send(slave, ARPOP_REQUEST, targets[i],
                   0, tags);
     continue;
···
         bpf_prog_inc(prog);
     }

-    if (old_prog)
-        bpf_prog_put(old_prog);
-
-    if (prog)
+    if (prog) {
         static_branch_inc(&bpf_master_redirect_enabled_key);
-    else
+    } else if (old_prog) {
+        bpf_prog_put(old_prog);
         static_branch_dec(&bpf_master_redirect_enabled_key);
+    }

     return 0;
+1 -3
drivers/net/can/c_can/c_can_ethtool.c
···
                   struct ethtool_drvinfo *info)
 {
     struct c_can_priv *priv = netdev_priv(netdev);
-    struct platform_device *pdev = to_platform_device(priv->device);
-
     strscpy(info->driver, "c_can", sizeof(info->driver));
-    strscpy(info->bus_info, pdev->name, sizeof(info->bus_info));
+    strscpy(info->bus_info, dev_name(priv->device), sizeof(info->bus_info));
 }

 static void c_can_get_ringparam(struct net_device *netdev,
+1 -1
drivers/net/can/rcar/rcar_canfd.c
···
 static SIMPLE_DEV_PM_OPS(rcar_canfd_pm_ops, rcar_canfd_suspend,
                          rcar_canfd_resume);

-static const struct of_device_id rcar_canfd_of_table[] = {
+static const __maybe_unused struct of_device_id rcar_canfd_of_table[] = {
     { .compatible = "renesas,rcar-gen3-canfd", .data = (void *)RENESAS_RCAR_GEN3 },
     { .compatible = "renesas,rzg2l-canfd", .data = (void *)RENESAS_RZG2L },
     { }
+28 -6
drivers/net/dsa/b53/b53_common.c
···
     u8 reg, val, off;

     /* Override the port settings */
-    if (port == dev->cpu_port) {
+    if (port == dev->imp_port) {
         off = B53_PORT_OVERRIDE_CTRL;
         val = PORT_OVERRIDE_EN;
     } else {
···
     u8 reg, val, off;

     /* Override the port settings */
-    if (port == dev->cpu_port) {
+    if (port == dev->imp_port) {
         off = B53_PORT_OVERRIDE_CTRL;
         val = PORT_OVERRIDE_EN;
     } else {
···
     b53_force_link(dev, port, phydev->link);

     if (is531x5(dev) && phy_interface_is_rgmii(phydev)) {
-        if (port == 8)
+        if (port == dev->imp_port)
             off = B53_RGMII_CTRL_IMP;
         else
             off = B53_RGMII_CTRL_P(port);
···
     const char *dev_name;
     u16 vlans;
     u16 enabled_ports;
+    u8 imp_port;
     u8 cpu_port;
     u8 vta_regs[3];
     u8 arl_bins;
···
     .enabled_ports = 0x1f,
     .arl_bins = 2,
     .arl_buckets = 1024,
+    .imp_port = 5,
     .cpu_port = B53_CPU_PORT_25,
     .duplex_reg = B53_DUPLEX_STAT_FE,
···
     .enabled_ports = 0x1f,
     .arl_bins = 2,
     .arl_buckets = 1024,
+    .imp_port = 5,
     .cpu_port = B53_CPU_PORT_25,
     .duplex_reg = B53_DUPLEX_STAT_FE,
···
     .enabled_ports = 0x1f,
     .arl_bins = 4,
     .arl_buckets = 1024,
+    .imp_port = 8,
     .cpu_port = B53_CPU_PORT,
     .vta_regs = B53_VTA_REGS,
     .duplex_reg = B53_DUPLEX_STAT_GE,
···
     .enabled_ports = 0x1f,
     .arl_bins = 4,
     .arl_buckets = 1024,
+    .imp_port = 8,
     .cpu_port = B53_CPU_PORT,
     .vta_regs = B53_VTA_REGS,
     .duplex_reg = B53_DUPLEX_STAT_GE,
···
     .enabled_ports = 0x1f,
     .arl_bins = 4,
     .arl_buckets = 1024,
+    .imp_port = 8,
     .cpu_port = B53_CPU_PORT,
     .vta_regs = B53_VTA_REGS_9798,
     .duplex_reg = B53_DUPLEX_STAT_GE,
···
     .enabled_ports = 0x7f,
     .arl_bins = 4,
     .arl_buckets = 1024,
+    .imp_port = 8,
     .cpu_port = B53_CPU_PORT,
     .vta_regs = B53_VTA_REGS_9798,
     .duplex_reg = B53_DUPLEX_STAT_GE,
···
     .arl_bins = 4,
     .arl_buckets = 1024,
     .vta_regs = B53_VTA_REGS,
+    .imp_port = 8,
     .cpu_port = B53_CPU_PORT,
     .duplex_reg = B53_DUPLEX_STAT_GE,
     .jumbo_pm_reg = B53_JUMBO_PORT_MASK,
···
     .enabled_ports = 0xff,
     .arl_bins = 4,
     .arl_buckets = 1024,
+    .imp_port = 8,
     .cpu_port = B53_CPU_PORT,
     .vta_regs = B53_VTA_REGS,
     .duplex_reg = B53_DUPLEX_STAT_GE,
···
     .enabled_ports = 0x1ff,
     .arl_bins = 4,
     .arl_buckets = 1024,
+    .imp_port = 8,
     .cpu_port = B53_CPU_PORT,
     .vta_regs = B53_VTA_REGS,
     .duplex_reg = B53_DUPLEX_STAT_GE,
···
     .enabled_ports = 0, /* pdata must provide them */
     .arl_bins = 4,
     .arl_buckets = 1024,
+    .imp_port = 8,
     .cpu_port = B53_CPU_PORT,
     .vta_regs = B53_VTA_REGS_63XX,
     .duplex_reg = B53_DUPLEX_STAT_63XX,
···
     .enabled_ports = 0x1f,
     .arl_bins = 4,
     .arl_buckets = 1024,
+    .imp_port = 8,
     .cpu_port = B53_CPU_PORT_25, /* TODO: auto detect */
     .vta_regs = B53_VTA_REGS,
     .duplex_reg = B53_DUPLEX_STAT_GE,
···
     .enabled_ports = 0x1bf,
     .arl_bins = 4,
     .arl_buckets = 1024,
+    .imp_port = 8,
     .cpu_port = B53_CPU_PORT_25, /* TODO: auto detect */
     .vta_regs = B53_VTA_REGS,
     .duplex_reg = B53_DUPLEX_STAT_GE,
···
     .enabled_ports = 0x1bf,
     .arl_bins = 4,
     .arl_buckets = 1024,
+    .imp_port = 8,
     .cpu_port = B53_CPU_PORT_25, /* TODO: auto detect */
     .vta_regs = B53_VTA_REGS,
     .duplex_reg = B53_DUPLEX_STAT_GE,
···
     .enabled_ports = 0x1f,
     .arl_bins = 4,
     .arl_buckets = 1024,
+    .imp_port = 8,
     .cpu_port = B53_CPU_PORT_25, /* TODO: auto detect */
     .vta_regs = B53_VTA_REGS,
     .duplex_reg = B53_DUPLEX_STAT_GE,
···
     .enabled_ports = 0x1f,
     .arl_bins = 4,
     .arl_buckets = 1024,
+    .imp_port = 8,
     .cpu_port = B53_CPU_PORT_25, /* TODO: auto detect */
     .vta_regs = B53_VTA_REGS,
     .duplex_reg = B53_DUPLEX_STAT_GE,
···
     .enabled_ports = 0x1ff,
     .arl_bins = 4,
     .arl_buckets = 1024,
+    .imp_port = 8,
     .cpu_port = B53_CPU_PORT,
     .vta_regs = B53_VTA_REGS,
     .duplex_reg = B53_DUPLEX_STAT_GE,
···
     .enabled_ports = 0x103,
     .arl_bins = 4,
     .arl_buckets = 1024,
+    .imp_port = 8,
     .cpu_port = B53_CPU_PORT,
     .vta_regs = B53_VTA_REGS,
     .duplex_reg = B53_DUPLEX_STAT_GE,
···
     .enabled_ports = 0x1bf,
     .arl_bins = 4,
     .arl_buckets = 256,
+    .imp_port = 8,
     .cpu_port = 8, /* TODO: ports 4, 5, 8 */
     .vta_regs = B53_VTA_REGS,
     .duplex_reg = B53_DUPLEX_STAT_GE,
···
     .enabled_ports = 0x1ff,
     .arl_bins = 4,
     .arl_buckets = 1024,
+    .imp_port = 8,
     .cpu_port = B53_CPU_PORT,
     .vta_regs = B53_VTA_REGS,
     .duplex_reg = B53_DUPLEX_STAT_GE,
···
     .enabled_ports = 0x1ff,
     .arl_bins = 4,
     .arl_buckets = 256,
+    .imp_port = 8,
     .cpu_port = B53_CPU_PORT,
     .vta_regs = B53_VTA_REGS,
     .duplex_reg = B53_DUPLEX_STAT_GE,
···
     dev->vta_regs[1] = chip->vta_regs[1];
     dev->vta_regs[2] = chip->vta_regs[2];
     dev->jumbo_pm_reg = chip->jumbo_pm_reg;
+    dev->imp_port = chip->imp_port;
     dev->cpu_port = chip->cpu_port;
     dev->num_vlans = chip->vlans;
     dev->num_arl_bins = chip->arl_bins;
···
         dev->cpu_port = 5;
     }

-    /* cpu port is always last */
-    dev->num_ports = dev->cpu_port + 1;
     dev->enabled_ports |= BIT(dev->cpu_port);
+    dev->num_ports = fls(dev->enabled_ports);
+
+    dev->ds->num_ports = min_t(unsigned int, dev->num_ports, DSA_MAX_PORTS);

     /* Include non standard CPU port built-in PHYs to be probed */
     if (is539x(dev) || is531x5(dev)) {
···
         return NULL;

     ds->dev = base;
-    ds->num_ports = DSA_MAX_PORTS;

     dev = devm_kzalloc(base, sizeof(*dev), GFP_KERNEL);
     if (!dev)
+1
drivers/net/dsa/b53/b53_priv.h
···

     /* used ports mask */
     u16 enabled_ports;
+    unsigned int imp_port;
     unsigned int cpu_port;

     /* connect specific data */
+2 -1
drivers/net/dsa/lantiq_gswip.c
···

     gswip_switch_mask(priv, 0, GSWIP_MAC_CTRL_2_MLEN,
                       GSWIP_MAC_CTRL_2p(cpu_port));
-    gswip_switch_w(priv, VLAN_ETH_FRAME_LEN + 8, GSWIP_MAC_FLEN);
+    gswip_switch_w(priv, VLAN_ETH_FRAME_LEN + 8 + ETH_FCS_LEN,
+                   GSWIP_MAC_FLEN);
     gswip_switch_mask(priv, 0, GSWIP_BM_QUEUE_GCTRL_GL_MOD,
                       GSWIP_BM_QUEUE_GCTRL);
+1 -1
drivers/net/ethernet/3com/3c59x.c
···
 dump_tx_ring(struct net_device *dev)
 {
     if (vortex_debug > 0) {
-    struct vortex_private *vp = netdev_priv(dev);
+        struct vortex_private *vp = netdev_priv(dev);
         void __iomem *ioaddr = vp->ioaddr;

         if (vp->full_bus_master_tx) {
+43 -24
drivers/net/ethernet/broadcom/bnxt/bnxt.c
···
     writel(DB_CP_FLAGS | RING_CMP(idx), (db)->doorbell)

 #define BNXT_DB_NQ_P5(db, idx)                                              \
-    writeq((db)->db_key64 | DBR_TYPE_NQ | RING_CMP(idx), (db)->doorbell)
+    bnxt_writeq(bp, (db)->db_key64 | DBR_TYPE_NQ | RING_CMP(idx),           \
+                (db)->doorbell)

 #define BNXT_DB_CQ_ARM(db, idx)                                             \
     writel(DB_CP_REARM_FLAGS | RING_CMP(idx), (db)->doorbell)

 #define BNXT_DB_NQ_ARM_P5(db, idx)                                          \
-    writeq((db)->db_key64 | DBR_TYPE_NQ_ARM | RING_CMP(idx), (db)->doorbell)
+    bnxt_writeq(bp, (db)->db_key64 | DBR_TYPE_NQ_ARM | RING_CMP(idx),       \
+                (db)->doorbell)

 static void bnxt_db_nq(struct bnxt *bp, struct bnxt_db_info *db, u32 idx)
 {
···
 static void bnxt_db_cq(struct bnxt *bp, struct bnxt_db_info *db, u32 idx)
 {
     if (bp->flags & BNXT_FLAG_CHIP_P5)
-        writeq(db->db_key64 | DBR_TYPE_CQ_ARMALL | RING_CMP(idx),
-               db->doorbell);
+        bnxt_writeq(bp, db->db_key64 | DBR_TYPE_CQ_ARMALL |
+                    RING_CMP(idx), db->doorbell);
     else
         BNXT_DB_CQ(db, idx);
 }
···
     if (!fw_health)
         goto async_event_process_exit;

-    fw_health->enabled = EVENT_DATA1_RECOVERY_ENABLED(data1);
-    fw_health->master = EVENT_DATA1_RECOVERY_MASTER_FUNC(data1);
-    if (!fw_health->enabled) {
+    if (!EVENT_DATA1_RECOVERY_ENABLED(data1)) {
+        fw_health->enabled = false;
         netif_info(bp, drv, bp->dev,
                    "Error recovery info: error recovery[0]\n");
         break;
     }
+    fw_health->master = EVENT_DATA1_RECOVERY_MASTER_FUNC(data1);
     fw_health->tmr_multiplier =
         DIV_ROUND_UP(fw_health->polling_dsecs * HZ,
                      bp->current_interval * 10);
     fw_health->tmr_counter = fw_health->tmr_multiplier;
-    fw_health->last_fw_heartbeat =
-        bnxt_fw_health_readl(bp, BNXT_FW_HEARTBEAT_REG);
-    fw_health->last_fw_reset_cnt =
-        bnxt_fw_health_readl(bp, BNXT_FW_RESET_CNT_REG);
+    if (!fw_health->enabled) {
+        fw_health->last_fw_heartbeat =
+            bnxt_fw_health_readl(bp, BNXT_FW_HEARTBEAT_REG);
+        fw_health->last_fw_reset_cnt =
+            bnxt_fw_health_readl(bp, BNXT_FW_RESET_CNT_REG);
+    }
     netif_info(bp, drv, bp->dev,
                "Error recovery info: error recovery[1], master[%d], reset count[%u], health status: 0x%x\n",
                fw_health->master, fw_health->last_fw_reset_cnt,
                bnxt_fw_health_readl(bp, BNXT_FW_HEALTH_REG));
+    if (!fw_health->enabled) {
+        /* Make sure tmr_counter is set and visible to
+         * bnxt_health_check() before setting enabled to true.
+         */
+        smp_wmb();
+        fw_health->enabled = true;
+    }
     goto async_event_process_exit;
 }
 case ASYNC_EVENT_CMPL_EVENT_ID_DEBUG_NOTIFICATION:
···
     if (cpr2 && cpr2->had_work_done) {
         db = &cpr2->cp_db;
-        writeq(db->db_key64 | dbr_type |
-               RING_CMP(cpr2->cp_raw_cons), db->doorbell);
+        bnxt_writeq(bp, db->db_key64 | dbr_type |
+                    RING_CMP(cpr2->cp_raw_cons), db->doorbell);
         cpr2->had_work_done = 0;
     }
 }
···
     struct hwrm_tunnel_dst_port_free_input *req;
     int rc;

+    if (tunnel_type == TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_VXLAN &&
+        bp->vxlan_fw_dst_port_id == INVALID_HW_RING_ID)
+        return 0;
+    if (tunnel_type == TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_GENEVE &&
+        bp->nge_fw_dst_port_id == INVALID_HW_RING_ID)
+        return 0;
+
     rc = hwrm_req_init(bp, req, HWRM_TUNNEL_DST_PORT_FREE);
     if (rc)
         return rc;
···
     switch (tunnel_type) {
     case TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_VXLAN:
         req->tunnel_dst_port_id = cpu_to_le16(bp->vxlan_fw_dst_port_id);
+        bp->vxlan_port = 0;
         bp->vxlan_fw_dst_port_id = INVALID_HW_RING_ID;
         break;
     case TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_GENEVE:
         req->tunnel_dst_port_id = cpu_to_le16(bp->nge_fw_dst_port_id);
+        bp->nge_port = 0;
         bp->nge_fw_dst_port_id = INVALID_HW_RING_ID;
         break;
     default:
···
     switch (tunnel_type) {
     case TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_VXLAN:
+        bp->vxlan_port = port;
         bp->vxlan_fw_dst_port_id =
             le16_to_cpu(resp->tunnel_dst_port_id);
         break;
     case TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_GENEVE:
+        bp->nge_port = port;
         bp->nge_fw_dst_port_id = le16_to_cpu(resp->tunnel_dst_port_id);
         break;
     default:
···

 static void bnxt_hwrm_free_tunnel_ports(struct bnxt *bp)
 {
-    if (bp->vxlan_fw_dst_port_id != INVALID_HW_RING_ID)
-        bnxt_hwrm_tunnel_dst_port_free(
-            bp, TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_VXLAN);
-    if (bp->nge_fw_dst_port_id != INVALID_HW_RING_ID)
-        bnxt_hwrm_tunnel_dst_port_free(
-            bp, TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_GENEVE);
+    bnxt_hwrm_tunnel_dst_port_free(bp,
+        TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_VXLAN);
+    bnxt_hwrm_tunnel_dst_port_free(bp,
+        TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_GENEVE);
 }

 static int bnxt_set_tpa(struct bnxt *bp, bool set_tpa)
···
     if (!fw_health->enabled || test_bit(BNXT_STATE_IN_FW_RESET, &bp->state))
         return;

+    /* Make sure it is enabled before checking the tmr_counter. */
+    smp_rmb();
     if (fw_health->tmr_counter) {
         fw_health->tmr_counter--;
         return;
     }
···
     unsigned int cmd;

     udp_tunnel_nic_get_port(netdev, table, 0, &ti);
-    if (ti.type == UDP_TUNNEL_TYPE_VXLAN) {
-        bp->vxlan_port = ti.port;
+    if (ti.type == UDP_TUNNEL_TYPE_VXLAN)
         cmd = TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_VXLAN;
-    } else {
-        bp->nge_port = ti.port;
+    else
         cmd = TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_GENEVE;
-    }

     if (ti.port)
         return bnxt_hwrm_tunnel_dst_port_alloc(bp, ti.port, cmd);
+26 -13
drivers/net/ethernet/broadcom/bnxt/bnxt.h
···
 #include <net/dst_metadata.h>
 #include <net/xdp.h>
 #include <linux/dim.h>
+#include <linux/io-64-nonatomic-lo-hi.h>
 #ifdef CONFIG_TEE_BNXT_FW
 #include <linux/firmware/broadcom/tee_bnxt_fw.h>
 #endif
···
     struct mutex sriov_lock;
 #endif

-#ifndef writeq
+#if BITS_PER_LONG == 32
     /* ensure atomic 64-bit doorbell writes on 32-bit systems. */
     spinlock_t db_lock;
 #endif
···
         ((txr->tx_prod - txr->tx_cons) & bp->tx_ring_mask);
 }

-#ifndef writeq
-#define writeq(val64, db)                       \
-do {                                            \
-    spin_lock(&bp->db_lock);                    \
-    writel((val64) & 0xffffffff, db);           \
-    writel((val64) >> 32, (db) + 4);            \
-    spin_unlock(&bp->db_lock);                  \
-} while (0)
-
-#define writeq_relaxed writeq
+static inline void bnxt_writeq(struct bnxt *bp, u64 val,
+                               volatile void __iomem *addr)
+{
+#if BITS_PER_LONG == 32
+    spin_lock(&bp->db_lock);
+    lo_hi_writeq(val, addr);
+    spin_unlock(&bp->db_lock);
+#else
+    writeq(val, addr);
 #endif
+}
+
+static inline void bnxt_writeq_relaxed(struct bnxt *bp, u64 val,
+                                       volatile void __iomem *addr)
+{
+#if BITS_PER_LONG == 32
+    spin_lock(&bp->db_lock);
+    lo_hi_writeq_relaxed(val, addr);
+    spin_unlock(&bp->db_lock);
+#else
+    writeq_relaxed(val, addr);
+#endif
+}

 /* For TX and RX ring doorbells with no ordering guarantee*/
 static inline void bnxt_db_write_relaxed(struct bnxt *bp,
                                          struct bnxt_db_info *db, u32 idx)
 {
     if (bp->flags & BNXT_FLAG_CHIP_P5) {
-        writeq_relaxed(db->db_key64 | idx, db->doorbell);
+        bnxt_writeq_relaxed(bp, db->db_key64 | idx, db->doorbell);
     } else {
         u32 db_val = db->db_key32 | idx;
···
                                  u32 idx)
 {
     if (bp->flags & BNXT_FLAG_CHIP_P5) {
-        writeq(db->db_key64 | idx, db->doorbell);
+        bnxt_writeq(bp, db->db_key64 | idx, db->doorbell);
     } else {
         u32 db_val = db->db_key32 | idx;
+35 -16
drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c
···
     dst->vu8 = (u8)val32;
 }

-static int bnxt_hwrm_get_nvm_cfg_ver(struct bnxt *bp,
-                                     union devlink_param_value *nvm_cfg_ver)
+static int bnxt_hwrm_get_nvm_cfg_ver(struct bnxt *bp, u32 *nvm_cfg_ver)
 {
     struct hwrm_nvm_get_variable_input *req;
+    u16 bytes = BNXT_NVM_CFG_VER_BYTES;
+    u16 bits = BNXT_NVM_CFG_VER_BITS;
+    union devlink_param_value ver;
     union bnxt_nvm_data *data;
     dma_addr_t data_dma_addr;
-    int rc;
+    int rc, i = 2;
+    u16 dim = 1;

     rc = hwrm_req_init(bp, req, HWRM_NVM_GET_VARIABLE);
     if (rc)
···
         goto exit;
     }

+    /* earlier devices present as an array of raw bytes */
+    if (!BNXT_CHIP_P5(bp)) {
+        dim = 0;
+        i = 0;
+        bits *= 3;  /* array of 3 version components */
+        bytes *= 4; /* copy whole word */
+    }
+
     hwrm_req_hold(bp, req);
     req->dest_data_addr = cpu_to_le64(data_dma_addr);
-    req->data_len = cpu_to_le16(BNXT_NVM_CFG_VER_BITS);
+    req->data_len = cpu_to_le16(bits);
     req->option_num = cpu_to_le16(NVM_OFF_NVM_CFG_VER);
+    req->dimensions = cpu_to_le16(dim);

-    rc = hwrm_req_send_silent(bp, req);
-    if (!rc)
-        bnxt_copy_from_nvm_data(nvm_cfg_ver, data,
-                                BNXT_NVM_CFG_VER_BITS,
-                                BNXT_NVM_CFG_VER_BYTES);
+    while (i >= 0) {
+        req->index_0 = cpu_to_le16(i--);
+        rc = hwrm_req_send_silent(bp, req);
+        if (rc)
+            goto exit;
+        bnxt_copy_from_nvm_data(&ver, data, bits, bytes);
+
+        if (BNXT_CHIP_P5(bp)) {
+            *nvm_cfg_ver <<= 8;
+            *nvm_cfg_ver |= ver.vu8;
+        } else {
+            *nvm_cfg_ver = ver.vu32;
+        }
+    }

 exit:
     hwrm_req_drop(bp, req);
···
 {
     struct hwrm_nvm_get_dev_info_output nvm_dev_info;
     struct bnxt *bp = bnxt_get_bp_from_dl(dl);
-    union devlink_param_value nvm_cfg_ver;
     struct hwrm_ver_get_output *ver_resp;
     char mgmt_ver[FW_VER_STR_LEN];
     char roce_ver[FW_VER_STR_LEN];
     char ncsi_ver[FW_VER_STR_LEN];
     char buf[32];
+    u32 ver = 0;
     int rc;

     rc = devlink_info_driver_name_put(req, DRV_MODULE_NAME);
···
         return rc;

     ver_resp = &bp->ver_resp;
-    sprintf(buf, "%X", ver_resp->chip_rev);
+    sprintf(buf, "%c%d", 'A' + ver_resp->chip_rev, ver_resp->chip_metal);
     rc = bnxt_dl_info_put(bp, req, BNXT_VERSION_FIXED,
                           DEVLINK_INFO_VERSION_GENERIC_ASIC_REV, buf);
     if (rc)
···
     if (rc)
         return rc;

-    if (BNXT_PF(bp) && !bnxt_hwrm_get_nvm_cfg_ver(bp, &nvm_cfg_ver)) {
-        u32 ver = nvm_cfg_ver.vu32;
-
-        sprintf(buf, "%d.%d.%d", (ver >> 16) & 0xf, (ver >> 8) & 0xf,
-                ver & 0xf);
+    if (BNXT_PF(bp) && !bnxt_hwrm_get_nvm_cfg_ver(bp, &ver)) {
+        sprintf(buf, "%d.%d.%d", (ver >> 16) & 0xff, (ver >> 8) & 0xff,
+                ver & 0xff);
         rc = bnxt_dl_info_put(bp, req, BNXT_VERSION_STORED,
                               DEVLINK_INFO_VERSION_GENERIC_FW_PSID,
                               buf);
+2 -2
drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.h
···
 #define NVM_OFF_ENABLE_SRIOV        401
 #define NVM_OFF_NVM_CFG_VER         602

-#define BNXT_NVM_CFG_VER_BITS       24
-#define BNXT_NVM_CFG_VER_BYTES      4
+#define BNXT_NVM_CFG_VER_BITS       8
+#define BNXT_NVM_CFG_VER_BYTES      1

 #define BNXT_MSIX_VEC_MAX           512
 #define BNXT_MSIX_VEC_MIN_MAX       128
+7 -7
drivers/net/ethernet/broadcom/bnxt/bnxt_hwrm.c
···
  * @bp: The driver context.
  * @req: The request for which calls to hwrm_req_dma_slice() will have altered
  *    allocation flags.
- * @flags: A bitmask of GFP flags. These flags are passed to
- *    dma_alloc_coherent() whenever it is used to allocate backing memory
- *    for slices. Note that calls to hwrm_req_dma_slice() will not always
- *    result in new allocations, however, memory suballocated from the
- *    request buffer is already __GFP_ZERO.
+ * @gfp: A bitmask of GFP flags. These flags are passed to dma_alloc_coherent()
+ *    whenever it is used to allocate backing memory for slices. Note that
+ *    calls to hwrm_req_dma_slice() will not always result in new allocations,
+ *    however, memory suballocated from the request buffer is already
+ *    __GFP_ZERO.
  *
  * Sets the GFP allocation flags associated with the request for subsequent
  * calls to hwrm_req_dma_slice(). This can be useful for specifying __GFP_ZERO
···
  * @bp: The driver context.
  * @req: The request for which indirect data will be associated.
  * @size: The size of the allocation.
- * @dma: The bus address associated with the allocation. The HWRM API has no
- *    knowledge about the type of the request and so cannot infer how the
+ * @dma_handle: The bus address associated with the allocation. The HWRM API has
+ *    no knowledge about the type of the request and so cannot infer how the
  *    caller intends to use the indirect data. Thus, the caller is
  *    responsible for configuring the request object appropriately to
  *    point to the associated indirect memory. Note, DMA handle has the
+1
drivers/net/ethernet/chelsio/cxgb/cxgb2.c
···
     if (!adapter->registered_device_map) {
         pr_err("%s: could not register any net devices\n",
                pci_name(pdev));
+        err = -EINVAL;
         goto out_release_adapter_res;
     }
+3
drivers/net/ethernet/chelsio/cxgb3/sge.c
···

     t3_sge_stop_dma(adap);

+    /* workqueues aren't initialized otherwise */
+    if (!(adap->flags & FULL_INIT_DONE))
+        return;
     for (i = 0; i < SGE_QSETS; ++i) {
         struct sge_qset *qs = &adap->sge.qs[i];
+1 -1
drivers/net/ethernet/cirrus/Kconfig
···

 config CS89x0_PLATFORM
     tristate "CS89x0 platform driver support"
-    depends on ARM || COMPILE_TEST
+    depends on ARM || (COMPILE_TEST && !PPC)
     select CS89x0
     help
       Say Y to compile the cs89x0 platform driver. This makes this driver
+1 -1
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c
···
362 362	}
363 363 }
364 364
365 - const struct hclge_caps_bit_map hclge_cmd_caps_bit_map0[] = {
365 + static const struct hclge_caps_bit_map hclge_cmd_caps_bit_map0[] = {
366 366	{HCLGE_CAP_UDP_GSO_B, HNAE3_DEV_SUPPORT_UDP_GSO_B},
367 367	{HCLGE_CAP_PTP_B, HNAE3_DEV_SUPPORT_PTP_B},
368 368	{HCLGE_CAP_INT_QL_B, HNAE3_DEV_SUPPORT_INT_QL_B},
+1 -1
drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c
···
342 342		set_bit(HNAE3_DEV_SUPPORT_FEC_B, ae_dev->caps);
343 343 }
344 344
345 - const struct hclgevf_caps_bit_map hclgevf_cmd_caps_bit_map0[] = {
345 + static const struct hclgevf_caps_bit_map hclgevf_cmd_caps_bit_map0[] = {
346 346	{HCLGEVF_CAP_UDP_GSO_B, HNAE3_DEV_SUPPORT_UDP_GSO_B},
347 347	{HCLGEVF_CAP_INT_QL_B, HNAE3_DEV_SUPPORT_INT_QL_B},
348 348	{HCLGEVF_CAP_TQP_TXRX_INDEP_B, HNAE3_DEV_SUPPORT_TQP_TXRX_INDEP_B},
+1 -1
drivers/net/ethernet/i825xx/sun3_82586.c
···
314 314	err = register_netdev(dev);
315 315	if (err)
316 316		goto out2;
317 -	return dev;
317 +	return 0;
318 318
319 319 out2:
320 320	release_region(ioaddr, SUN3_82586_TOTAL_SIZE);
+5 -3
drivers/net/ethernet/marvell/octeontx2/af/cgx.c
···
1487 1487		MAX_DMAC_ENTRIES_PER_CGX / cgx->lmac_count;
1488 1488	err = rvu_alloc_bitmap(&lmac->mac_to_index_bmap);
1489 1489	if (err)
1490 -		return err;
1490 +		goto err_name_free;
1491 1491
1492 1492	/* Reserve first entry for default MAC address */
1493 1493	set_bit(0, lmac->mac_to_index_bmap.bmap);
···
1497 1497	spin_lock_init(&lmac->event_cb_lock);
1498 1498	err = cgx_configure_interrupt(cgx, lmac, lmac->lmac_id, false);
1499 1499	if (err)
1500 -		goto err_irq;
1500 +		goto err_bitmap_free;
1501 1501
1502 1502	/* Add reference */
1503 1503	cgx->lmac_idmap[lmac->lmac_id] = lmac;
···
1507 1507
1508 1508	return cgx_lmac_verify_fwi_version(cgx);
1509 1509
1510 - err_irq:
1510 + err_bitmap_free:
1511 +	rvu_free_bitmap(&lmac->mac_to_index_bmap);
1512 + err_name_free:
1511 1513	kfree(lmac->name);
1512 1514 err_lmac_free:
1513 1515	kfree(lmac);
+16 -1
drivers/net/ethernet/marvell/octeontx2/af/rvu.c
···
92 92  */
93 93 int rvu_poll_reg(struct rvu *rvu, u64 block, u64 offset, u64 mask, bool zero)
94 94 {
95 -	unsigned long timeout = jiffies + usecs_to_jiffies(10000);
95 +	unsigned long timeout = jiffies + usecs_to_jiffies(20000);
96 +	bool twice = false;
96 97	void __iomem *reg;
97 98	u64 reg_val;
98 99
···
106 105		return 0;
107 106	if (time_before(jiffies, timeout)) {
108 107		usleep_range(1, 5);
108 +		goto again;
109 +	}
110 +	/* In scenarios where CPU is scheduled out before checking
111 +	 * 'time_before' (above) and gets scheduled in such that
112 +	 * jiffies are beyond timeout value, then check again if HW is
113 +	 * done with the operation in the meantime.
114 +	 */
115 +	if (!twice) {
116 +		twice = true;
109 117		goto again;
110 118	}
111 119	return -EBUSY;
···
209 199	if (!rsrc->bmap)
210 200		return -ENOMEM;
211 201	return 0;
202 + }
203 +
204 + void rvu_free_bitmap(struct rsrc_bmap *rsrc)
205 + {
206 +	kfree(rsrc->bmap);
212 207 }
213 208
214 209 /* Get block LF's HW index from a PF_FUNC's block slot number */
+1
drivers/net/ethernet/marvell/octeontx2/af/rvu.h
···
638 638 }
639 639
640 640 int rvu_alloc_bitmap(struct rsrc_bmap *rsrc);
641 + void rvu_free_bitmap(struct rsrc_bmap *rsrc);
641 642 int rvu_alloc_rsrc(struct rsrc_bmap *rsrc);
642 643 void rvu_free_rsrc(struct rsrc_bmap *rsrc, int id);
643 644 bool is_rsrc_free(struct rsrc_bmap *rsrc, int id);
+19 -23
drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c
···
27 27 {
28 28
29 29	struct lmtst_tbl_setup_req *req;
30 -	int qcount, err;
30 +	struct otx2_lmt_info *lmt_info;
31 +	int err, cpu;
31 32
32 33	if (!test_bit(CN10K_LMTST, &pfvf->hw.cap_flag)) {
33 34		pfvf->hw_ops = &otx2_hw_ops;
···
36 35	}
37 36
38 37	pfvf->hw_ops = &cn10k_hw_ops;
39 -	qcount = pfvf->hw.max_queues;
40 -	/* LMTST lines allocation
41 -	 * qcount = num_online_cpus();
42 -	 * NPA = TX + RX + XDP.
43 -	 * NIX = TX * 32 (For Burst SQE flush).
44 -	 */
45 -	pfvf->tot_lmt_lines = (qcount * 3) + (qcount * 32);
46 -	pfvf->npa_lmt_lines = qcount * 3;
47 -	pfvf->nix_lmt_size = LMT_BURST_SIZE * LMT_LINE_SIZE;
38 +	/* Total LMTLINES = num_online_cpus() * 32 (For Burst flush).*/
39 +	pfvf->tot_lmt_lines = (num_online_cpus() * LMT_BURST_SIZE);
40 +	pfvf->hw.lmt_info = alloc_percpu(struct otx2_lmt_info);
48 41
49 42	mutex_lock(&pfvf->mbox.lock);
50 43	req = otx2_mbox_alloc_msg_lmtst_tbl_setup(&pfvf->mbox);
···
61 66	err = otx2_sync_mbox_msg(&pfvf->mbox);
62 67	mutex_unlock(&pfvf->mbox.lock);
63 68
69 +	for_each_possible_cpu(cpu) {
70 +		lmt_info = per_cpu_ptr(pfvf->hw.lmt_info, cpu);
71 +		lmt_info->lmt_addr = ((u64)pfvf->hw.lmt_base +
72 +				      (cpu * LMT_BURST_SIZE * LMT_LINE_SIZE));
73 +		lmt_info->lmt_id = cpu * LMT_BURST_SIZE;
74 +	}
75 +
64 76	return 0;
65 77 }
66 78 EXPORT_SYMBOL(cn10k_lmtst_init);
···
76 74 {
77 75	struct nix_cn10k_aq_enq_req *aq;
78 76	struct otx2_nic *pfvf = dev;
79 -	struct otx2_snd_queue *sq;
80 -
81 -	sq = &pfvf->qset.sq[qidx];
82 -	sq->lmt_addr = (u64 *)((u64)pfvf->hw.nix_lmt_base +
83 -			       (qidx * pfvf->nix_lmt_size));
84 -
85 -	sq->lmt_id = pfvf->npa_lmt_lines + (qidx * LMT_BURST_SIZE);
86 77
87 78	/* Get memory to put this msg */
88 79	aq = otx2_mbox_alloc_msg_nix_cn10k_aq_enq(&pfvf->mbox);
···
120 125		if (otx2_alloc_buffer(pfvf, cq, &bufptr)) {
121 126			if (num_ptrs--)
122 127				__cn10k_aura_freeptr(pfvf, cq->cq_idx, ptrs,
123 -						     num_ptrs,
124 -						     cq->rbpool->lmt_addr);
128 +						     num_ptrs);
125 129			break;
126 130		}
127 131		cq->pool_ptrs--;
···
128 134		num_ptrs++;
129 135		if (num_ptrs == NPA_MAX_BURST || cq->pool_ptrs == 0) {
130 136			__cn10k_aura_freeptr(pfvf, cq->cq_idx, ptrs,
131 -					     num_ptrs,
132 -					     cq->rbpool->lmt_addr);
137 +					     num_ptrs);
133 138			num_ptrs = 1;
134 139		}
135 140	}
···
136 143
137 144 void cn10k_sqe_flush(void *dev, struct otx2_snd_queue *sq, int size, int qidx)
138 145 {
146 +	struct otx2_lmt_info *lmt_info;
147 +	struct otx2_nic *pfvf = dev;
139 148	u64 val = 0, tar_addr = 0;
140 149
150 +	lmt_info = per_cpu_ptr(pfvf->hw.lmt_info, smp_processor_id());
141 151	/* FIXME: val[0:10] LMT_ID.
142 152	 * [12:15] no of LMTST - 1 in the burst.
143 153	 * [19:63] data size of each LMTST in the burst except first.
144 154	 */
145 -	val = (sq->lmt_id & 0x7FF);
155 +	val = (lmt_info->lmt_id & 0x7FF);
146 156	/* Target address for LMTST flush tells HW how many 128bit
147 157	 * words are present.
148 158	 * tar_addr[6:4] size of first LMTST - 1 in units of 128b.
149 159	 */
150 160	tar_addr |= sq->io_addr | (((size / 16) - 1) & 0x7) << 4;
151 161	dma_wmb();
152 -	memcpy(sq->lmt_addr, sq->sqe_base, size);
162 +	memcpy((u64 *)lmt_info->lmt_addr, sq->sqe_base, size);
153 163	cn10k_lmt_flush(val, tar_addr);
154 164
155 165	sq->head++;
-5
drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
···
1230 1230
1231 1231	pool->rbsize = buf_size;
1232 1232
1233 -	/* Set LMTST addr for NPA batch free */
1234 -	if (test_bit(CN10K_LMTST, &pfvf->hw.cap_flag))
1235 -		pool->lmt_addr = (__force u64 *)((u64)pfvf->hw.npa_lmt_base +
1236 -						 (pool_id * LMT_LINE_SIZE));
1237 -
1238 1233	/* Initialize this pool's context via AF */
1239 1234	aq = otx2_mbox_alloc_msg_npa_aq_enq(&pfvf->mbox);
1240 1235	if (!aq) {
+16 -12
drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
···
53 53 /* Send skid of 2000 packets required for CQ size of 4K CQEs. */
54 54 #define SEND_CQ_SKID	2000
55 55
56 + struct otx2_lmt_info {
57 +	u64 lmt_addr;
58 +	u16 lmt_id;
59 + };
56 60 /* RSS configuration */
57 61 struct otx2_rss_ctx {
58 62	u8 ind_tbl[MAX_RSS_INDIR_TBL_SIZE];
···
228 224 #define LMT_LINE_SIZE	128
229 225 #define LMT_BURST_SIZE	32 /* 32 LMTST lines for burst SQE flush */
230 226	u64 *lmt_base;
231 -	u64 *npa_lmt_base;
232 -	u64 *nix_lmt_base;
227 +	struct otx2_lmt_info __percpu *lmt_info;
233 228 };
234 229
235 230 enum vfperm {
···
410 407  */
411 408 #define PCI_REVISION_ID_96XX	0x00
412 409 #define PCI_REVISION_ID_95XX	0x10
413 -	#define PCI_REVISION_ID_LOKI	0x20
410 +	#define PCI_REVISION_ID_95XXN	0x20
414 411 #define PCI_REVISION_ID_98XX	0x30
415 412 #define PCI_REVISION_ID_95XXMM	0x40
413 + #define PCI_REVISION_ID_95XXO	0xE0
416 414
417 415 static inline bool is_dev_otx2(struct pci_dev *pdev)
418 416 {
419 417	u8 midr = pdev->revision & 0xF0;
420 418
421 419	return (midr == PCI_REVISION_ID_96XX || midr == PCI_REVISION_ID_95XX ||
422 -		midr == PCI_REVISION_ID_LOKI || midr == PCI_REVISION_ID_98XX ||
423 -		midr == PCI_REVISION_ID_95XXMM);
420 +		midr == PCI_REVISION_ID_95XXN || midr == PCI_REVISION_ID_98XX ||
421 +		midr == PCI_REVISION_ID_95XXMM || midr == PCI_REVISION_ID_95XXO);
424 422 }
425 423
426 424 static inline void otx2_setup_dev_hw_settings(struct otx2_nic *pfvf)
···
566 562 #endif
567 563
568 564 static inline void __cn10k_aura_freeptr(struct otx2_nic *pfvf, u64 aura,
569 -					u64 *ptrs, u64 num_ptrs,
570 -					u64 *lmt_addr)
565 +					u64 *ptrs, u64 num_ptrs)
571 566 {
567 +	struct otx2_lmt_info *lmt_info;
572 568	u64 size = 0, count_eot = 0;
573 569	u64 tar_addr, val = 0;
574 570
571 +	lmt_info = per_cpu_ptr(pfvf->hw.lmt_info, smp_processor_id());
575 572	tar_addr = (__force u64)otx2_get_regaddr(pfvf, NPA_LF_AURA_BATCH_FREE0);
576 573	/* LMTID is same as AURA Id */
577 -	val = (aura & 0x7FF) | BIT_ULL(63);
574 +	val = (lmt_info->lmt_id & 0x7FF) | BIT_ULL(63);
578 575	/* Set if [127:64] of last 128bit word has a valid pointer */
579 576	count_eot = (num_ptrs % 2) ? 0ULL : 1ULL;
580 577	/* Set AURA ID to free pointer */
···
591 586		size++;
592 587	tar_addr |= ((size - 1) & 0x7) << 4;
593 588	}
594 -	memcpy(lmt_addr, ptrs, sizeof(u64) * num_ptrs);
589 +	memcpy((u64 *)lmt_info->lmt_addr, ptrs, sizeof(u64) * num_ptrs);
595 590	/* Perform LMTST flush */
596 591	cn10k_lmt_flush(val, tar_addr);
597 592 }
···
599 594 static inline void cn10k_aura_freeptr(void *dev, int aura, u64 buf)
600 595 {
601 596	struct otx2_nic *pfvf = dev;
602 -	struct otx2_pool *pool;
603 597	u64 ptrs[2];
604 598
605 -	pool = &pfvf->qset.pool[aura];
606 599	ptrs[1] = buf;
607 -	__cn10k_aura_freeptr(pfvf, aura, ptrs, 2, pool->lmt_addr);
600 +	/* Free only one buffer at time during init and teardown */
601 +	__cn10k_aura_freeptr(pfvf, aura, ptrs, 2);
608 602 }
609 603
610 604 /* Alloc pointer from pool/aura */
+2 -2
drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c
···
16 16 #include "otx2_common.h"
17 17 #include "otx2_ptp.h"
18 18
19 - #define DRV_NAME	"octeontx2-nicpf"
20 - #define DRV_VF_NAME	"octeontx2-nicvf"
19 + #define DRV_NAME	"rvu-nicpf"
20 + #define DRV_VF_NAME	"rvu-nicvf"
21 21
22 22 struct otx2_stat {
23 23	char name[ETH_GSTRING_LEN];
+4 -8
drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
···
1533 1533	if (!qset->rq)
1534 1534		goto err_free_mem;
1535 1535
1536 -	if (test_bit(CN10K_LMTST, &pf->hw.cap_flag)) {
1537 -		/* Reserve LMT lines for NPA AURA batch free */
1538 -		pf->hw.npa_lmt_base = pf->hw.lmt_base;
1539 -		/* Reserve LMT lines for NIX TX */
1540 -		pf->hw.nix_lmt_base = (u64 *)((u64)pf->hw.npa_lmt_base +
1541 -					      (pf->npa_lmt_lines * LMT_LINE_SIZE));
1542 -	}
1543 -
1544 1536	err = otx2_init_hw_resources(pf);
1545 1537	if (err)
1546 1538		goto err_free_mem;
···
2660 2668 err_ptp_destroy:
2661 2669	otx2_ptp_destroy(pf);
2662 2670 err_detach_rsrc:
2671 +	if (pf->hw.lmt_info)
2672 +		free_percpu(pf->hw.lmt_info);
2663 2673	if (test_bit(CN10K_LMTST, &pf->hw.cap_flag))
2664 2674		qmem_free(pf->dev, pf->dync_lmt);
2665 2675	otx2_detach_resources(&pf->mbox);
···
2805 2811	otx2_mcam_flow_del(pf);
2806 2812	otx2_shutdown_tc(pf);
2807 2813	otx2_detach_resources(&pf->mbox);
2814 +	if (pf->hw.lmt_info)
2815 +		free_percpu(pf->hw.lmt_info);
2808 2816	if (test_bit(CN10K_LMTST, &pf->hw.cap_flag))
2809 2817		qmem_free(pf->dev, pf->dync_lmt);
2810 2818	otx2_disable_mbox_intr(pf);
-2
drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h
···
80 80	u16 num_sqbs;
81 81	u16 sqe_thresh;
82 82	u8 sqe_per_sqb;
83 -	u32 lmt_id;
84 83	u64 io_addr;
85 84	u64 *aura_fc_addr;
86 85	u64 *lmt_addr;
···
110 111 struct otx2_pool {
111 112	struct qmem *stack;
112 113	struct qmem *fc_addr;
113 -	u64 *lmt_addr;
114 114	u16 rbsize;
115 115
116 116
+5
drivers/net/ethernet/pensando/ionic/ionic_ethtool.c
···
582 582
583 583	qparam.ntxq_descs = ring->tx_pending;
584 584	qparam.nrxq_descs = ring->rx_pending;
585 +
586 +	mutex_lock(&lif->queue_lock);
585 587	err = ionic_reconfigure_queues(lif, &qparam);
588 +	mutex_unlock(&lif->queue_lock);
586 589	if (err)
587 590		netdev_info(netdev, "Ring reconfiguration failed, changes canceled: %d\n", err);
588 591
···
682 679		return 0;
683 680	}
684 681
682 +	mutex_lock(&lif->queue_lock);
685 683	err = ionic_reconfigure_queues(lif, &qparam);
684 +	mutex_unlock(&lif->queue_lock);
686 685	if (err)
687 686		netdev_info(netdev, "Queue reconfiguration failed, changes canceled: %d\n", err);
688 687
+8 -4
drivers/net/ethernet/pensando/ionic/ionic_lif.c
···
1715 1715 static void ionic_stop_queues_reconfig(struct ionic_lif *lif)
1716 1716 {
1717 1717	/* Stop and clean the queues before reconfiguration */
1718 -	mutex_lock(&lif->queue_lock);
1719 1718	netif_device_detach(lif->netdev);
1720 1719	ionic_stop_queues(lif);
1721 1720	ionic_txrx_deinit(lif);
···
1733 1734	 * DOWN and UP to try to reset and clear the issue.
1734 1735	 */
1735 1736	err = ionic_txrx_init(lif);
1736 -	mutex_unlock(&lif->queue_lock);
1737 -	ionic_link_status_check_request(lif, CAN_SLEEP);
1737 +	ionic_link_status_check_request(lif, CAN_NOT_SLEEP);
1738 1738	netif_device_attach(lif->netdev);
1739 1739
1740 1740	return err;
···
1763 1765		return 0;
1764 1766	}
1765 1767
1768 +	mutex_lock(&lif->queue_lock);
1766 1769	ionic_stop_queues_reconfig(lif);
1767 1770	netdev->mtu = new_mtu;
1768 -	return ionic_start_queues_reconfig(lif);
1771 +	err = ionic_start_queues_reconfig(lif);
1772 +	mutex_unlock(&lif->queue_lock);
1773 +
1774 +	return err;
1769 1775 }
1770 1776
1771 1777 static void ionic_tx_timeout_work(struct work_struct *ws)
···
1785 1783	if (!netif_running(lif->netdev))
1786 1784		return;
1787 1785
1786 +	mutex_lock(&lif->queue_lock);
1788 1787	ionic_stop_queues_reconfig(lif);
1789 1788	ionic_start_queues_reconfig(lif);
1789 +	mutex_unlock(&lif->queue_lock);
1790 1790
1791 1791
1792 1792 static void ionic_tx_timeout(struct net_device *netdev, unsigned int txqueue)
+1 -1
drivers/net/ethernet/pensando/ionic/ionic_rx_filter.c
···
318 318	if (f->state == IONIC_FILTER_STATE_NEW ||
319 319	    f->state == IONIC_FILTER_STATE_OLD) {
320 320		sync_item = devm_kzalloc(dev, sizeof(*sync_item),
321 -					 GFP_KERNEL);
321 +					 GFP_ATOMIC);
322 322		if (!sync_item)
323 323			goto loop_out;
324 324
-1
drivers/net/ethernet/qlogic/qlcnic/qlcnic_init.c
···
437 437	QLCWR32(adapter, QLCNIC_CRB_PEG_NET_4 + 0x3c, 1);
438 438	msleep(20);
439 439
440 -	qlcnic_rom_unlock(adapter);
441 440	/* big hammer don't reset CAM block on reset */
442 441	QLCWR32(adapter, QLCNIC_ROMUSB_GLB_SW_RESET, 0xfeffffff);
443 442
+1 -1
drivers/net/ethernet/qualcomm/emac/emac-ethtool.c
···
100 100
101 101	case ETH_SS_STATS:
102 102		for (i = 0; i < EMAC_STATS_LEN; i++) {
103 -			strlcpy(data, emac_ethtool_stat_strings[i],
103 +			strscpy(data, emac_ethtool_stat_strings[i],
104 104				ETH_GSTRING_LEN);
105 105			data += ETH_GSTRING_LEN;
106 106		}
+1
drivers/net/ethernet/renesas/sh_eth.c
···
2533 2533	else
2534 2534		txdesc->status |= cpu_to_le32(TD_TACT);
2535 2535
2536 +	wmb(); /* cur_tx must be incremented after TACT bit was set */
2536 2537	mdp->cur_tx++;
2537 2538
2538 2539	if (!(sh_eth_read(ndev, EDTRR) & mdp->cd->edtrr_trns))
+6 -6
drivers/net/ethernet/smsc/smc911x.c
···
1550 1550 }
1551 1551
1552 1552 static void smc911x_ethtool_getregs(struct net_device *dev,
1553 -				    struct ethtool_regs* regs, void *buf)
1553 +				    struct ethtool_regs *regs, void *buf)
1554 1554 {
1555 1555	struct smc911x_local *lp = netdev_priv(dev);
1556 1556	unsigned long flags;
···
1600 1600 }
1601 1601
1602 1602 static inline int smc911x_ethtool_write_eeprom_cmd(struct net_device *dev,
1603 -						   int cmd, int addr)
1603 +						   int cmd, int addr)
1604 1604 {
1605 1605	struct smc911x_local *lp = netdev_priv(dev);
1606 1606	int ret;
···
1614 1614 }
1615 1615
1616 1616 static inline int smc911x_ethtool_read_eeprom_byte(struct net_device *dev,
1617 -						   u8 *data)
1617 +						   u8 *data)
1618 1618 {
1619 1619	struct smc911x_local *lp = netdev_priv(dev);
1620 1620	int ret;
···
1626 1626 }
1627 1627
1628 1628 static inline int smc911x_ethtool_write_eeprom_byte(struct net_device *dev,
1629 -						    u8 data)
1629 +						    u8 data)
1630 1630 {
1631 1631	struct smc911x_local *lp = netdev_priv(dev);
1632 1632	int ret;
···
1638 1638 }
1639 1639
1640 1640 static int smc911x_ethtool_geteeprom(struct net_device *dev,
1641 -				     struct ethtool_eeprom *eeprom, u8 *data)
1641 +				     struct ethtool_eeprom *eeprom, u8 *data)
1642 1642 {
1643 1643	u8 eebuf[SMC911X_EEPROM_LEN];
1644 1644	int i, ret;
···
1654 1654 }
1655 1655
1656 1656 static int smc911x_ethtool_seteeprom(struct net_device *dev,
1657 -				     struct ethtool_eeprom *eeprom, u8 *data)
1657 +				     struct ethtool_eeprom *eeprom, u8 *data)
1658 1658 {
1659 1659	int i, ret;
1660 1660
+3 -1
drivers/net/ethernet/stmicro/stmmac/dwmac-loongson.c
···
109 109	plat->bus_id = pci_dev_id(pdev);
110 110
111 111	phy_mode = device_get_phy_mode(&pdev->dev);
112 -	if (phy_mode < 0)
112 +	if (phy_mode < 0) {
113 113		dev_err(&pdev->dev, "phy_mode not found\n");
114 +		return phy_mode;
115 +	}
114 116
115 117	plat->phy_interface = phy_mode;
116 118	plat->interface = PHY_INTERFACE_MODE_GMII;
+24 -22
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
···
5347 5347	struct stmmac_channel *ch =
5348 5348		container_of(napi, struct stmmac_channel, rxtx_napi);
5349 5349	struct stmmac_priv *priv = ch->priv_data;
5350 -	int rx_done, tx_done;
5350 +	int rx_done, tx_done, rxtx_done;
5351 5351	u32 chan = ch->index;
5352 5352
5353 5353	priv->xstats.napi_poll++;
···
5357 5357
5358 5358	rx_done = stmmac_rx_zc(priv, budget, chan);
5359 5359
5360 +	rxtx_done = max(tx_done, rx_done);
5361 +
5360 5362	/* If either TX or RX work is not complete, return budget
5361 5363	 * and keep pooling
5362 5364	 */
5363 -	if (tx_done >= budget || rx_done >= budget)
5365 +	if (rxtx_done >= budget)
5364 5366		return budget;
5365 5367
5366 5368	/* all work done, exit the polling mode */
5367 -	if (napi_complete_done(napi, rx_done)) {
5369 +	if (napi_complete_done(napi, rxtx_done)) {
5368 5370		unsigned long flags;
5369 5371
5370 5372		spin_lock_irqsave(&ch->lock, flags);
···
5377 5375		spin_unlock_irqrestore(&ch->lock, flags);
5378 5376	}
5379 5377
5380 -	return min(rx_done, budget - 1);
5378 +	return min(rxtx_done, budget - 1);
5381 5379 }
5382 5380
5383 5381 /**
···
7123 7121	if (!ndev || !netif_running(ndev))
7124 7122		return 0;
7125 7123
7126 -	phylink_mac_change(priv->phylink, false);
7127 -
7128 7124	mutex_lock(&priv->lock);
7129 7125
7130 7126	netif_device_detach(ndev);
···
7148 7148		stmmac_pmt(priv, priv->hw, priv->wolopts);
7149 7149		priv->irq_wake = 1;
7150 7150	} else {
7151 -		mutex_unlock(&priv->lock);
7152 -		rtnl_lock();
7153 -		if (device_may_wakeup(priv->device))
7154 -			phylink_speed_down(priv->phylink, false);
7155 -		phylink_stop(priv->phylink);
7156 -		rtnl_unlock();
7157 -		mutex_lock(&priv->lock);
7158 -
7159 7151		stmmac_mac_set(priv, priv->ioaddr, false);
7160 7152		pinctrl_pm_select_sleep_state(priv->device);
7161 7153		/* Disable clock in case of PWM is off */
···
7160 7168	}
7161 7169
7162 7170	mutex_unlock(&priv->lock);
7171 +
7172 +	rtnl_lock();
7173 +	if (device_may_wakeup(priv->device) && priv->plat->pmt) {
7174 +		phylink_suspend(priv->phylink, true);
7175 +	} else {
7176 +		if (device_may_wakeup(priv->device))
7177 +			phylink_speed_down(priv->phylink, false);
7178 +		phylink_suspend(priv->phylink, false);
7179 +	}
7180 +	rtnl_unlock();
7163 7181
7164 7182	if (priv->dma_cap.fpesel) {
7165 7183		/* Disable FPE */
···
7261 7259		return ret;
7262 7260	}
7263 7261
7264 -	if (!device_may_wakeup(priv->device) || !priv->plat->pmt) {
7265 -		rtnl_lock();
7266 -		phylink_start(priv->phylink);
7267 -		/* We may have called phylink_speed_down before */
7268 -		phylink_speed_up(priv->phylink);
7269 -		rtnl_unlock();
7262 +	rtnl_lock();
7263 +	if (device_may_wakeup(priv->device) && priv->plat->pmt) {
7264 +		phylink_resume(priv->phylink);
7265 +	} else {
7266 +		phylink_resume(priv->phylink);
7267 +		if (device_may_wakeup(priv->device))
7268 +			phylink_speed_up(priv->phylink);
7270 7269	}
7270 +	rtnl_unlock();
7271 7271
7272 7272	rtnl_lock();
7273 7273	mutex_lock(&priv->lock);
···
7289 7285
7290 7286	mutex_unlock(&priv->lock);
7291 7287	rtnl_unlock();
7292 -
7293 -	phylink_mac_change(priv->phylink, true);
7294 7288
7295 7289	netif_device_attach(ndev);
7296 7290
-1
drivers/net/ethernet/xscale/ptp_ixp46x.c
···
16 16 #include <linux/ptp_clock_kernel.h>
17 17 #include <linux/platform_device.h>
18 18 #include <linux/soc/ixp4xx/cpu.h>
19 - #include <linux/module.h>
20 19 #include <mach/ixp4xx-regs.h>
21 20
22 21 #include "ixp46x_ts.h"
+82
drivers/net/phy/phylink.c
···
33 33 enum {
34 34	PHYLINK_DISABLE_STOPPED,
35 35	PHYLINK_DISABLE_LINK,
36 +	PHYLINK_DISABLE_MAC_WOL,
36 37 };
37 38
38 39 /**
···
1283 1282  * network device driver's &struct net_device_ops ndo_stop() method. The
1284 1283  * network device's carrier state should not be changed prior to calling this
1285 1284  * function.
1285 +  *
1286 +  * This will synchronously bring down the link if the link is not already
1287 +  * down (in other words, it will trigger a mac_link_down() method call.)
1286 1288  */
1287 1289 void phylink_stop(struct phylink *pl)
1288 1290 {
···
1304 1300	phylink_run_resolve_and_disable(pl, PHYLINK_DISABLE_STOPPED);
1305 1301 }
1306 1302 EXPORT_SYMBOL_GPL(phylink_stop);
1303 +
1304 + /**
1305 +  * phylink_suspend() - handle a network device suspend event
1306 +  * @pl: a pointer to a &struct phylink returned from phylink_create()
1307 +  * @mac_wol: true if the MAC needs to receive packets for Wake-on-Lan
1308 +  *
1309 +  * Handle a network device suspend event. There are several cases:
1310 +  * - If Wake-on-Lan is not active, we can bring down the link between
1311 +  *   the MAC and PHY by calling phylink_stop().
1312 +  * - If Wake-on-Lan is active, and being handled only by the PHY, we
1313 +  *   can also bring down the link between the MAC and PHY.
1314 +  * - If Wake-on-Lan is active, but being handled by the MAC, the MAC
1315 +  *   still needs to receive packets, so we can not bring the link down.
1316 +  */
1317 + void phylink_suspend(struct phylink *pl, bool mac_wol)
1318 + {
1319 +	ASSERT_RTNL();
1320 +
1321 +	if (mac_wol && (!pl->netdev || pl->netdev->wol_enabled)) {
1322 +		/* Wake-on-Lan enabled, MAC handling */
1323 +		mutex_lock(&pl->state_mutex);
1324 +
1325 +		/* Stop the resolver bringing the link up */
1326 +		__set_bit(PHYLINK_DISABLE_MAC_WOL, &pl->phylink_disable_state);
1327 +
1328 +		/* Disable the carrier, to prevent transmit timeouts,
1329 +		 * but one would hope all packets have been sent. This
1330 +		 * also means phylink_resolve() will do nothing.
1331 +		 */
1332 +		netif_carrier_off(pl->netdev);
1333 +
1334 +		/* We do not call mac_link_down() here as we want the
1335 +		 * link to remain up to receive the WoL packets.
1336 +		 */
1337 +		mutex_unlock(&pl->state_mutex);
1338 +	} else {
1339 +		phylink_stop(pl);
1340 +	}
1341 + }
1342 + EXPORT_SYMBOL_GPL(phylink_suspend);
1343 +
1344 + /**
1345 +  * phylink_resume() - handle a network device resume event
1346 +  * @pl: a pointer to a &struct phylink returned from phylink_create()
1347 +  *
1348 +  * Undo the effects of phylink_suspend(), returning the link to an
1349 +  * operational state.
1350 +  */
1351 + void phylink_resume(struct phylink *pl)
1352 + {
1353 +	ASSERT_RTNL();
1354 +
1355 +	if (test_bit(PHYLINK_DISABLE_MAC_WOL, &pl->phylink_disable_state)) {
1356 +		/* Wake-on-Lan enabled, MAC handling */
1357 +
1358 +		/* Call mac_link_down() so we keep the overall state balanced.
1359 +		 * Do this under the state_mutex lock for consistency. This
1360 +		 * will cause a "Link Down" message to be printed during
1361 +		 * resume, which is harmless - the true link state will be
1362 +		 * printed when we run a resolve.
1363 +		 */
1364 +		mutex_lock(&pl->state_mutex);
1365 +		phylink_link_down(pl);
1366 +		mutex_unlock(&pl->state_mutex);
1367 +
1368 +		/* Re-apply the link parameters so that all the settings get
1369 +		 * restored to the MAC.
1370 +		 */
1371 +		phylink_mac_initial_config(pl, true);
1372 +
1373 +		/* Re-enable and re-resolve the link parameters */
1374 +		clear_bit(PHYLINK_DISABLE_MAC_WOL, &pl->phylink_disable_state);
1375 +		phylink_run_resolve(pl);
1376 +	} else {
1377 +		phylink_start(pl);
1378 +	}
1379 + }
1380 + EXPORT_SYMBOL_GPL(phylink_resume);
1307 1381
1308 1382 /**
1309 1383  * phylink_ethtool_get_wol() - get the wake on lan parameters for the PHY
+5
drivers/net/usb/cdc_mbim.c
···
654 654	.driver_info = (unsigned long)&cdc_mbim_info_avoid_altsetting_toggle,
655 655 },
656 656
657 + /* Telit LN920 */
658 + { USB_DEVICE_AND_INTERFACE_INFO(0x1bc7, 0x1061, USB_CLASS_COMM, USB_CDC_SUBCLASS_MBIM, USB_CDC_PROTO_NONE),
659 +	.driver_info = (unsigned long)&cdc_mbim_info_avoid_altsetting_toggle,
660 + },
661 +
657 662 /* default entry */
658 663 { USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_MBIM, USB_CDC_PROTO_NONE),
659 664	.driver_info = (unsigned long)&cdc_mbim_info_zlp,
+8 -3
drivers/net/usb/hso.c
···
2535 2535	if (!hso_net->mux_bulk_tx_buf)
2536 2536		goto err_free_tx_urb;
2537 2537
2538 -	add_net_device(hso_dev);
2538 +	result = add_net_device(hso_dev);
2539 +	if (result) {
2540 +		dev_err(&interface->dev, "Failed to add net device\n");
2541 +		goto err_free_tx_buf;
2542 +	}
2539 2543
2540 2544	/* registering our net device */
2541 2545	result = register_netdev(net);
2542 2546	if (result) {
2543 2547		dev_err(&interface->dev, "Failed to register device\n");
2544 -		goto err_free_tx_buf;
2548 +		goto err_rmv_ndev;
2545 2549	}
2546 2550
2547 2551	hso_log_port(hso_dev);
···
2554 2550
2555 2551	return hso_dev;
2556 2552
2557 - err_free_tx_buf:
2553 + err_rmv_ndev:
2558 2554	remove_net_device(hso_dev);
2555 + err_free_tx_buf:
2559 2556	kfree(hso_net->mux_bulk_tx_buf);
2560 2557 err_free_tx_urb:
2561 2558	usb_free_urb(hso_net->mux_bulk_tx_urb);
+1
drivers/net/usb/qmi_wwan.c
···
1354 1354	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1031, 3)},	/* Telit LE910C1-EUX */
1355 1355	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1040, 2)},	/* Telit LE922A */
1356 1356	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1050, 2)},	/* Telit FN980 */
1357 +	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1060, 2)},	/* Telit LN920 */
1357 1358	{QMI_FIXED_INTF(0x1bc7, 0x1100, 3)},	/* Telit ME910 */
1358 1359	{QMI_FIXED_INTF(0x1bc7, 0x1101, 3)},	/* Telit ME910 dual modem */
1359 1360	{QMI_FIXED_INTF(0x1bc7, 0x1200, 5)},	/* Telit LE920 */
+1 -1
drivers/net/wireless/intel/iwlwifi/cfg/22000.c
···
9 9 #include "iwl-prph.h"
10 10
11 11 /* Highest firmware API version supported */
12 - #define IWL_22000_UCODE_API_MAX	65
12 + #define IWL_22000_UCODE_API_MAX	66
13 13
14 14 /* Lowest firmware API version supported */
15 15 #define IWL_22000_UCODE_API_MIN	39
+5 -1
drivers/net/wireless/intel/iwlwifi/fw/pnvm.c
···
231 231 {
232 232	const struct firmware *pnvm;
233 233	char pnvm_name[MAX_PNVM_NAME];
234 +	size_t new_len;
234 235	int ret;
235 236
236 237	iwl_pnvm_get_fs_name(trans, pnvm_name, sizeof(pnvm_name));
···
243 242		return ret;
244 243	}
245 244
245 +	new_len = pnvm->size;
246 246	*data = kmemdup(pnvm->data, pnvm->size, GFP_KERNEL);
247 +	release_firmware(pnvm);
248 +
247 249	if (!*data)
248 250		return -ENOMEM;
249 251
250 -	*len = pnvm->size;
252 +	*len = new_len;
251 253
252 254	return 0;
253 255 }
+1
drivers/net/wireless/intel/iwlwifi/pcie/drv.c
···
558 558	IWL_DEV_INFO(0xA0F0, 0x1652, killer1650i_2ax_cfg_qu_b0_hr_b0, NULL),
559 559	IWL_DEV_INFO(0xA0F0, 0x2074, iwl_ax201_cfg_qu_hr, NULL),
560 560	IWL_DEV_INFO(0xA0F0, 0x4070, iwl_ax201_cfg_qu_hr, NULL),
561 +	IWL_DEV_INFO(0xA0F0, 0x6074, iwl_ax201_cfg_qu_hr, NULL),
561 562	IWL_DEV_INFO(0x02F0, 0x0070, iwl_ax201_cfg_quz_hr, NULL),
562 563	IWL_DEV_INFO(0x02F0, 0x0074, iwl_ax201_cfg_quz_hr, NULL),
563 564	IWL_DEV_INFO(0x02F0, 0x6074, iwl_ax201_cfg_quz_hr, NULL),
+16 -14
drivers/net/wwan/iosm/iosm_ipc_mmio.c
···
69 69	unsigned int ver;
70 70
71 71	ver = ipc_mmio_get_cp_version(ipc_mmio);
72 -	cp_cap = readl(ipc_mmio->base + ipc_mmio->offset.cp_capability);
72 +	cp_cap = ioread32(ipc_mmio->base + ipc_mmio->offset.cp_capability);
73 73
74 74	ipc_mmio->has_mux_lite = (ver >= IOSM_CP_VERSION) &&
75 75				 !(cp_cap & DL_AGGR) && !(cp_cap & UL_AGGR);
···
150 150	if (!ipc_mmio)
151 151		return IPC_MEM_EXEC_STAGE_INVALID;
152 152
153 -	return (enum ipc_mem_exec_stage)readl(ipc_mmio->base +
154 -					      ipc_mmio->offset.exec_stage);
153 +	return (enum ipc_mem_exec_stage)ioread32(ipc_mmio->base +
154 +						 ipc_mmio->offset.exec_stage);
155 155 }
156 156
157 157 void ipc_mmio_copy_chip_info(struct iosm_mmio *ipc_mmio, void *dest,
···
167 167	if (!ipc_mmio)
168 168		return IPC_MEM_DEVICE_IPC_INVALID;
169 169
170 -	return (enum ipc_mem_device_ipc_state)
171 -		readl(ipc_mmio->base + ipc_mmio->offset.ipc_status);
170 +	return (enum ipc_mem_device_ipc_state)ioread32(ipc_mmio->base +
171 +						       ipc_mmio->offset.ipc_status);
172 172 }
173 173
174 174 enum rom_exit_code ipc_mmio_get_rom_exit_code(struct iosm_mmio *ipc_mmio)
···
176 176	if (!ipc_mmio)
177 177		return IMEM_ROM_EXIT_FAIL;
178 178
179 -	return (enum rom_exit_code)readl(ipc_mmio->base +
180 -					 ipc_mmio->offset.rom_exit_code);
179 +	return (enum rom_exit_code)ioread32(ipc_mmio->base +
180 +					    ipc_mmio->offset.rom_exit_code);
181 181 }
182 182
183 183 void ipc_mmio_config(struct iosm_mmio *ipc_mmio)
···
188 188	/* AP memory window (full window is open and active so that modem checks
189 189	 * each AP address) 0 means don't check on modem side.
190 190	 */
191 -	iowrite64_lo_hi(0, ipc_mmio->base + ipc_mmio->offset.ap_win_base);
192 -	iowrite64_lo_hi(0, ipc_mmio->base + ipc_mmio->offset.ap_win_end);
191 +	iowrite64(0, ipc_mmio->base + ipc_mmio->offset.ap_win_base);
192 +	iowrite64(0, ipc_mmio->base + ipc_mmio->offset.ap_win_end);
193 193
194 -	iowrite64_lo_hi(ipc_mmio->context_info_addr,
194 +	iowrite64(ipc_mmio->context_info_addr,
195 195		  ipc_mmio->base + ipc_mmio->offset.context_info);
196 196 }
197 197
···
201 201	if (!ipc_mmio)
202 202		return;
203 203
204 -	iowrite64_lo_hi(addr, ipc_mmio->base + ipc_mmio->offset.psi_address);
205 -	writel(size, ipc_mmio->base + ipc_mmio->offset.psi_size);
204 +	iowrite64(addr, ipc_mmio->base + ipc_mmio->offset.psi_address);
205 +	iowrite32(size, ipc_mmio->base + ipc_mmio->offset.psi_size);
206 206 }
207 207
208 208 void ipc_mmio_set_contex_info_addr(struct iosm_mmio *ipc_mmio, phys_addr_t addr)
···
218 218
219 219 int ipc_mmio_get_cp_version(struct iosm_mmio *ipc_mmio)
220 220 {
221 -	return ipc_mmio ? readl(ipc_mmio->base + ipc_mmio->offset.cp_version) :
222 -			  -EFAULT;
221 +	if (ipc_mmio)
222 +		return ioread32(ipc_mmio->base + ipc_mmio->offset.cp_version);
223 +
224 +	return -EFAULT;
223 225 }
+12
include/linux/etherdevice.h
···
300 300 }
301 301
302 302 /**
303 +  * eth_hw_addr_set - Assign Ethernet address to a net_device
304 +  * @dev: pointer to net_device structure
305 +  * @addr: address to assign
306 +  *
307 +  * Assign given address to the net_device, addr_assign_type is not changed.
308 +  */
309 + static inline void eth_hw_addr_set(struct net_device *dev, const u8 *addr)
310 + {
311 +	ether_addr_copy(dev->dev_addr, addr);
312 + }
313 +
314 + /**
303 315  * eth_hw_addr_inherit - Copy dev_addr from another net_device
304 316  * @dst: pointer to net_device to copy dev_addr to
305 317  * @src: pointer to net_device to copy dev_addr from
+18
include/linux/netdevice.h
···
4641 4641 void __hw_addr_init(struct netdev_hw_addr_list *list);
4642 4642
4643 4643 /* Functions used for device addresses handling */
4644 + static inline void
4645 + __dev_addr_set(struct net_device *dev, const u8 *addr, size_t len)
4646 + {
4647 +	memcpy(dev->dev_addr, addr, len);
4648 + }
4649 +
4650 + static inline void dev_addr_set(struct net_device *dev, const u8 *addr)
4651 + {
4652 +	__dev_addr_set(dev, addr, dev->addr_len);
4653 + }
4654 +
4655 + static inline void
4656 + dev_addr_mod(struct net_device *dev, unsigned int offset,
4657 +	      const u8 *addr, size_t len)
4658 + {
4659 +	memcpy(&dev->dev_addr[offset], addr, len);
4660 + }
4661 +
4644 4662 int dev_addr_add(struct net_device *dev, const unsigned char *addr,
4645 4663		 unsigned char addr_type);
4646 4664 int dev_addr_del(struct net_device *dev, const unsigned char *addr,
+1
include/linux/netfilter/nf_conntrack_common.h
··· 18 18 unsigned int expect_create; 19 19 unsigned int expect_delete; 20 20 unsigned int search_restart; 21 + unsigned int chaintoolong; 21 22 }; 22 23 23 24 #define NFCT_INFOMASK 7UL
+3
include/linux/phylink.h
··· 451 451 void phylink_start(struct phylink *); 452 452 void phylink_stop(struct phylink *); 453 453 454 + void phylink_suspend(struct phylink *pl, bool mac_wol); 455 + void phylink_resume(struct phylink *pl); 456 + 454 457 void phylink_ethtool_get_wol(struct phylink *, struct ethtool_wolinfo *); 455 458 int phylink_ethtool_set_wol(struct phylink *, struct ethtool_wolinfo *); 456 459
+8 -3
include/linux/soc/marvell/octeontx2/asm.h
··· 22 22 : [rs]"r" (ioaddr)); \ 23 23 (result); \ 24 24 }) 25 + /* 26 + * STEORL store to memory with release semantics. 27 + * This will avoid using DMB barrier after each LMTST 28 + * operation. 29 + */ 25 30 #define cn10k_lmt_flush(val, addr) \ 26 31 ({ \ 27 32 __asm__ volatile(".cpu generic+lse\n" \ 28 - "steor %x[rf],[%[rs]]" \ 29 - : [rf]"+r"(val) \ 30 - : [rs]"r"(addr)); \ 33 + "steorl %x[rf],[%[rs]]" \ 34 + : [rf] "+r"(val) \ 35 + : [rs] "r"(addr)); \ 31 36 }) 32 37 #else 33 38 #define otx2_lmt_flush(ioaddr) ({ 0; })
+2 -2
include/net/flow.h
··· 194 194 195 195 static inline struct flowi_common *flowi4_to_flowi_common(struct flowi4 *fl4) 196 196 { 197 - return &(flowi4_to_flowi(fl4)->u.__fl_common); 197 + return &(fl4->__fl_common); 198 198 } 199 199 200 200 static inline struct flowi *flowi6_to_flowi(struct flowi6 *fl6) ··· 204 204 205 205 static inline struct flowi_common *flowi6_to_flowi_common(struct flowi6 *fl6) 206 206 { 207 - return &(flowi6_to_flowi(fl6)->u.__fl_common); 207 + return &(fl6->__fl_common); 208 208 } 209 209 210 210 static inline struct flowi *flowidn_to_flowi(struct flowidn *fldn)
+2
include/uapi/linux/pkt_sched.h
··· 827 827 828 828 /* FQ_CODEL */ 829 829 830 + #define FQ_CODEL_QUANTUM_MAX (1 << 20) 831 + 830 832 enum { 831 833 TCA_FQ_CODEL_UNSPEC, 832 834 TCA_FQ_CODEL_TARGET,
+2 -2
net/bridge/br_multicast.c
··· 4255 4255 bool del = false; 4256 4256 4257 4257 brmctx = br_multicast_port_ctx_get_global(pmctx); 4258 - spin_lock(&brmctx->br->multicast_lock); 4258 + spin_lock_bh(&brmctx->br->multicast_lock); 4259 4259 if (pmctx->multicast_router == val) { 4260 4260 /* Refresh the temp router port timer */ 4261 4261 if (pmctx->multicast_router == MDB_RTR_TYPE_TEMP) { ··· 4305 4305 } 4306 4306 err = 0; 4307 4307 unlock: 4308 - spin_unlock(&brmctx->br->multicast_lock); 4308 + spin_unlock_bh(&brmctx->br->multicast_lock); 4309 4309 4310 4310 return err; 4311 4311 }
-1
net/core/pktgen.c
··· 3602 3602 3603 3603 static int pktgen_thread_worker(void *arg) 3604 3604 { 3605 - DEFINE_WAIT(wait); 3606 3605 struct pktgen_thread *t = arg; 3607 3606 struct pktgen_dev *pkt_dev = NULL; 3608 3607 int cpu = t->cpu;
+1 -1
net/core/skbuff.c
··· 3884 3884 skb_push(nskb, -skb_network_offset(nskb) + offset); 3885 3885 3886 3886 skb_release_head_state(nskb); 3887 - __copy_skb_header(nskb, skb); 3887 + __copy_skb_header(nskb, skb); 3888 3888 3889 3889 skb_headers_offset_update(nskb, skb_headroom(nskb) - skb_headroom(skb)); 3890 3890 skb_copy_from_linear_data_offset(skb, -tnl_hlen,
+4 -3
net/dsa/tag_rtl4_a.c
··· 54 54 p = (__be16 *)tag; 55 55 *p = htons(RTL4_A_ETHERTYPE); 56 56 57 - out = (RTL4_A_PROTOCOL_RTL8366RB << 12) | (2 << 8); 58 - /* The lower bits is the port number */ 59 - out |= (u8)dp->index; 57 + out = (RTL4_A_PROTOCOL_RTL8366RB << RTL4_A_PROTOCOL_SHIFT) | (2 << 8); 58 + /* The lower bits indicate the port number */ 59 + out |= BIT(dp->index); 60 + 60 61 p = (__be16 *)(tag + 2); 61 62 *p = htons(out); 62 63
+8 -10
net/ipv4/cipso_ipv4.c
··· 465 465 if (!doi_def) 466 466 return; 467 467 468 - if (doi_def->map.std) { 469 - switch (doi_def->type) { 470 - case CIPSO_V4_MAP_TRANS: 471 - kfree(doi_def->map.std->lvl.cipso); 472 - kfree(doi_def->map.std->lvl.local); 473 - kfree(doi_def->map.std->cat.cipso); 474 - kfree(doi_def->map.std->cat.local); 475 - kfree(doi_def->map.std); 476 - break; 477 - } 468 + switch (doi_def->type) { 469 + case CIPSO_V4_MAP_TRANS: 470 + kfree(doi_def->map.std->lvl.cipso); 471 + kfree(doi_def->map.std->lvl.local); 472 + kfree(doi_def->map.std->cat.cipso); 473 + kfree(doi_def->map.std->cat.local); 474 + kfree(doi_def->map.std); 475 + break; 478 476 } 479 477 kfree(doi_def); 480 478 }
+6 -3
net/ipv4/ip_gre.c
··· 473 473 474 474 static int gre_handle_offloads(struct sk_buff *skb, bool csum) 475 475 { 476 - if (csum && skb_checksum_start(skb) < skb->data) 477 - return -EINVAL; 478 476 return iptunnel_handle_offloads(skb, csum ? SKB_GSO_GRE_CSUM : SKB_GSO_GRE); 479 477 } 480 478 ··· 630 632 } 631 633 632 634 if (dev->header_ops) { 635 + const int pull_len = tunnel->hlen + sizeof(struct iphdr); 636 + 633 637 if (skb_cow_head(skb, 0)) 634 638 goto free_skb; 635 639 636 640 tnl_params = (const struct iphdr *)skb->data; 637 641 642 + if (pull_len > skb_transport_offset(skb)) 643 + goto free_skb; 644 + 638 645 /* Pull skb since ip_tunnel_xmit() needs skb->data pointing 639 646 * to gre header. 640 647 */ 641 - skb_pull(skb, tunnel->hlen + sizeof(struct iphdr)); 648 + skb_pull(skb, pull_len); 642 649 skb_reset_mac_header(skb); 643 650 } else { 644 651 if (skb_cow_head(skb, dev->needed_headroom))
+2
net/ipv4/nexthop.c
··· 2490 2490 .fc_gw4 = cfg->gw.ipv4, 2491 2491 .fc_gw_family = cfg->gw.ipv4 ? AF_INET : 0, 2492 2492 .fc_flags = cfg->nh_flags, 2493 + .fc_nlinfo = cfg->nlinfo, 2493 2494 .fc_encap = cfg->nh_encap, 2494 2495 .fc_encap_type = cfg->nh_encap_type, 2495 2496 }; ··· 2529 2528 .fc_ifindex = cfg->nh_ifindex, 2530 2529 .fc_gateway = cfg->gw.ipv6, 2531 2530 .fc_flags = cfg->nh_flags, 2531 + .fc_nlinfo = cfg->nlinfo, 2532 2532 .fc_encap = cfg->nh_encap, 2533 2533 .fc_encap_type = cfg->nh_encap_type, 2534 2534 .fc_is_fdb = cfg->nh_fdb,
+18 -10
net/ipv6/addrconf.c
··· 3092 3092 } 3093 3093 } 3094 3094 3095 - #if IS_ENABLED(CONFIG_IPV6_SIT) 3096 - static void sit_add_v4_addrs(struct inet6_dev *idev) 3095 + #if IS_ENABLED(CONFIG_IPV6_SIT) || IS_ENABLED(CONFIG_NET_IPGRE) || IS_ENABLED(CONFIG_IPV6_GRE) 3096 + static void add_v4_addrs(struct inet6_dev *idev) 3097 3097 { 3098 3098 struct in6_addr addr; 3099 3099 struct net_device *dev; 3100 3100 struct net *net = dev_net(idev->dev); 3101 - int scope, plen; 3101 + int scope, plen, offset = 0; 3102 3102 u32 pflags = 0; 3103 3103 3104 3104 ASSERT_RTNL(); 3105 3105 3106 3106 memset(&addr, 0, sizeof(struct in6_addr)); 3107 - memcpy(&addr.s6_addr32[3], idev->dev->dev_addr, 4); 3107 + /* in case of IP6GRE the dev_addr is an IPv6 and therefore we use only the last 4 bytes */ 3108 + if (idev->dev->addr_len == sizeof(struct in6_addr)) 3109 + offset = sizeof(struct in6_addr) - 4; 3110 + memcpy(&addr.s6_addr32[3], idev->dev->dev_addr + offset, 4); 3108 3111 3109 3112 if (idev->dev->flags&IFF_POINTOPOINT) { 3110 3113 addr.s6_addr32[0] = htonl(0xfe800000); ··· 3345 3342 (dev->type != ARPHRD_IEEE1394) && 3346 3343 (dev->type != ARPHRD_TUNNEL6) && 3347 3344 (dev->type != ARPHRD_6LOWPAN) && 3348 - (dev->type != ARPHRD_IP6GRE) && 3349 - (dev->type != ARPHRD_IPGRE) && 3350 3345 (dev->type != ARPHRD_TUNNEL) && 3351 3346 (dev->type != ARPHRD_NONE) && 3352 3347 (dev->type != ARPHRD_RAWIP)) { ··· 3392 3391 return; 3393 3392 } 3394 3393 3395 - sit_add_v4_addrs(idev); 3394 + add_v4_addrs(idev); 3396 3395 3397 3396 if (dev->flags&IFF_POINTOPOINT) 3398 3397 addrconf_add_mroute(dev); 3399 3398 } 3400 3399 #endif 3401 3400 3402 - #if IS_ENABLED(CONFIG_NET_IPGRE) 3401 + #if IS_ENABLED(CONFIG_NET_IPGRE) || IS_ENABLED(CONFIG_IPV6_GRE) 3403 3402 static void addrconf_gre_config(struct net_device *dev) 3404 3403 { 3405 3404 struct inet6_dev *idev; ··· 3412 3411 return; 3413 3412 } 3414 3413 3415 - addrconf_addr_gen(idev, true); 3414 + if (dev->type == ARPHRD_ETHER) { 3415 + addrconf_addr_gen(idev, true); 3416 + return; 3417 + } 3418 + 3419 + add_v4_addrs(idev); 3420 + 3416 3421 if (dev->flags & IFF_POINTOPOINT) 3417 3422 addrconf_add_mroute(dev); 3418 3423 } ··· 3594 3587 addrconf_sit_config(dev); 3595 3588 break; 3596 3589 #endif 3597 - #if IS_ENABLED(CONFIG_NET_IPGRE) 3590 + #if IS_ENABLED(CONFIG_NET_IPGRE) || IS_ENABLED(CONFIG_IPV6_GRE) 3591 + case ARPHRD_IP6GRE: 3598 3592 case ARPHRD_IPGRE: 3599 3593 addrconf_gre_config(dev); 3600 3594 break;
-2
net/ipv6/ip6_gre.c
··· 629 629 630 630 static int gre_handle_offloads(struct sk_buff *skb, bool csum) 631 631 { 632 - if (csum && skb_checksum_start(skb) < skb->data) 633 - return -EINVAL; 634 632 return iptunnel_handle_offloads(skb, 635 633 csum ? SKB_GSO_GRE_CSUM : SKB_GSO_GRE); 636 634 }
+4 -6
net/ipv6/mcast.c
··· 1356 1356 return 0; 1357 1357 } 1358 1358 1359 - static int mld_process_v2(struct inet6_dev *idev, struct mld2_query *mld, 1360 - unsigned long *max_delay) 1359 + static void mld_process_v2(struct inet6_dev *idev, struct mld2_query *mld, 1360 + unsigned long *max_delay) 1361 1361 { 1362 1362 *max_delay = max(msecs_to_jiffies(mldv2_mrc(mld)), 1UL); 1363 1363 ··· 1367 1367 1368 1368 idev->mc_maxdelay = *max_delay; 1369 1369 1370 - return 0; 1370 + return; 1371 1371 } 1372 1372 1373 1373 /* called with rcu_read_lock() */ ··· 1454 1454 1455 1455 mlh2 = (struct mld2_query *)skb_transport_header(skb); 1456 1456 1457 - err = mld_process_v2(idev, mlh2, &max_delay); 1458 - if (err < 0) 1459 - goto out; 1457 + mld_process_v2(idev, mlh2, &max_delay); 1460 1458 1461 1459 if (group_type == IPV6_ADDR_ANY) { /* general query */ 1462 1460 if (mlh2->mld2q_nsrcs)
+1 -3
net/ipv6/netfilter/nf_socket_ipv6.c
··· 99 99 { 100 100 __be16 dport, sport; 101 101 const struct in6_addr *daddr = NULL, *saddr = NULL; 102 - struct ipv6hdr *iph = ipv6_hdr(skb); 102 + struct ipv6hdr *iph = ipv6_hdr(skb), ipv6_var; 103 103 struct sk_buff *data_skb = NULL; 104 104 int doff = 0; 105 105 int thoff = 0, tproto; ··· 129 129 thoff + sizeof(*hp); 130 130 131 131 } else if (tproto == IPPROTO_ICMPV6) { 132 - struct ipv6hdr ipv6_var; 133 - 134 132 if (extract_icmp6_fields(skb, thoff, &tproto, &saddr, &daddr, 135 133 &sport, &dport, &ipv6_var)) 136 134 return NULL;
+1 -1
net/ipv6/seg6_iptunnel.c
··· 385 385 struct dst_entry *orig_dst = skb_dst(skb); 386 386 struct dst_entry *dst = NULL; 387 387 struct seg6_lwt *slwt; 388 - int err = -EINVAL; 388 + int err; 389 389 390 390 err = seg6_do_srh(skb); 391 391 if (unlikely(err))
+1 -1
net/mac802154/iface.c
··· 617 617 { 618 618 struct net_device *ndev = NULL; 619 619 struct ieee802154_sub_if_data *sdata = NULL; 620 - int ret = -ENOMEM; 620 + int ret; 621 621 622 622 ASSERT_RTNL(); 623 623
+2 -8
net/mptcp/pm_netlink.c
··· 644 644 subflow = list_first_entry_or_null(&msk->conn_list, typeof(*subflow), node); 645 645 if (subflow) { 646 646 struct sock *ssk = mptcp_subflow_tcp_sock(subflow); 647 - bool slow; 648 647 649 648 spin_unlock_bh(&msk->pm.lock); 650 649 pr_debug("send ack for %s", 651 650 mptcp_pm_should_add_signal(msk) ? "add_addr" : "rm_addr"); 652 651 653 - slow = lock_sock_fast(ssk); 654 - tcp_send_ack(ssk); 655 - unlock_sock_fast(ssk, slow); 652 + mptcp_subflow_send_ack(ssk); 656 653 spin_lock_bh(&msk->pm.lock); 657 654 } 658 655 } ··· 666 669 struct sock *ssk = mptcp_subflow_tcp_sock(subflow); 667 670 struct sock *sk = (struct sock *)msk; 668 671 struct mptcp_addr_info local; 669 - bool slow; 670 672 671 673 local_address((struct sock_common *)ssk, &local); 672 674 if (!addresses_equal(&local, addr, addr->port)) ··· 678 682 679 683 spin_unlock_bh(&msk->pm.lock); 680 684 pr_debug("send ack for mp_prio"); 681 - slow = lock_sock_fast(ssk); 682 - tcp_send_ack(ssk); 683 - unlock_sock_fast(ssk, slow); 685 + mptcp_subflow_send_ack(ssk); 684 686 spin_lock_bh(&msk->pm.lock); 685 687 686 688 return 0;
+47 -50
net/mptcp/protocol.c
··· 440 440 (TCPF_SYN_SENT | TCPF_SYN_RECV | TCPF_TIME_WAIT | TCPF_CLOSE | TCPF_LISTEN)); 441 441 } 442 442 443 + void mptcp_subflow_send_ack(struct sock *ssk) 444 + { 445 + bool slow; 446 + 447 + slow = lock_sock_fast(ssk); 448 + if (tcp_can_send_ack(ssk)) 449 + tcp_send_ack(ssk); 450 + unlock_sock_fast(ssk, slow); 451 + } 452 + 443 453 static void mptcp_send_ack(struct mptcp_sock *msk) 444 454 { 445 455 struct mptcp_subflow_context *subflow; 446 456 447 - mptcp_for_each_subflow(msk, subflow) { 448 - struct sock *ssk = mptcp_subflow_tcp_sock(subflow); 449 - bool slow; 450 - 451 - slow = lock_sock_fast(ssk); 452 - if (tcp_can_send_ack(ssk)) 453 - tcp_send_ack(ssk); 454 - unlock_sock_fast(ssk, slow); 455 - } 457 + mptcp_for_each_subflow(msk, subflow) 458 + mptcp_subflow_send_ack(mptcp_subflow_tcp_sock(subflow)); 456 459 } 457 460 458 461 static void mptcp_subflow_cleanup_rbuf(struct sock *ssk) ··· 1006 1003 msk->wmem_reserved += size; 1007 1004 } 1008 1005 1006 + static void __mptcp_mem_reclaim_partial(struct sock *sk) 1007 + { 1008 + lockdep_assert_held_once(&sk->sk_lock.slock); 1009 + __mptcp_update_wmem(sk); 1010 + sk_mem_reclaim_partial(sk); 1011 + } 1012 + 1009 1013 static void mptcp_mem_reclaim_partial(struct sock *sk) 1010 1014 { 1011 1015 struct mptcp_sock *msk = mptcp_sk(sk); ··· 1104 1094 msk->recovery = false; 1105 1095 1106 1096 out: 1107 - if (cleaned) { 1108 - if (tcp_under_memory_pressure(sk)) { 1109 - __mptcp_update_wmem(sk); 1110 - sk_mem_reclaim_partial(sk); 1111 - } 1112 - } 1097 + if (cleaned && tcp_under_memory_pressure(sk)) 1098 + __mptcp_mem_reclaim_partial(sk); 1113 1099 1114 1100 if (snd_una == READ_ONCE(msk->snd_nxt) && !msk->recovery) { 1115 1101 if (mptcp_timer_pending(sk) && !mptcp_data_fin_enabled(msk)) ··· 1185 1179 u16 limit; 1186 1180 u16 sent; 1187 1181 unsigned int flags; 1182 + bool data_lock_held; 1188 1183 }; 1189 1184 1190 1185 static int mptcp_check_allowed_size(struct mptcp_sock *msk, u64 data_seq, ··· 1257 1250 return false; 
1258 1251 } 1259 1252 1260 - static bool mptcp_must_reclaim_memory(struct sock *sk, struct sock *ssk) 1253 + static bool mptcp_alloc_tx_skb(struct sock *sk, struct sock *ssk, bool data_lock_held) 1261 1254 { 1262 - return !ssk->sk_tx_skb_cache && 1263 - tcp_under_memory_pressure(sk); 1264 - } 1255 + gfp_t gfp = data_lock_held ? GFP_ATOMIC : sk->sk_allocation; 1265 1256 1266 - static bool mptcp_alloc_tx_skb(struct sock *sk, struct sock *ssk) 1267 - { 1268 - if (unlikely(mptcp_must_reclaim_memory(sk, ssk))) 1269 - mptcp_mem_reclaim_partial(sk); 1270 - return __mptcp_alloc_tx_skb(sk, ssk, sk->sk_allocation); 1257 + if (unlikely(tcp_under_memory_pressure(sk))) { 1258 + if (data_lock_held) 1259 + __mptcp_mem_reclaim_partial(sk); 1260 + else 1261 + mptcp_mem_reclaim_partial(sk); 1262 + } 1263 + return __mptcp_alloc_tx_skb(sk, ssk, gfp); 1271 1264 } 1272 1265 1273 1266 /* note: this always recompute the csum on the whole skb, even ··· 1291 1284 bool zero_window_probe = false; 1292 1285 struct mptcp_ext *mpext = NULL; 1293 1286 struct sk_buff *skb, *tail; 1294 - bool can_collapse = false; 1287 + bool must_collapse = false; 1295 1288 int size_bias = 0; 1296 1289 int avail_size; 1297 1290 size_t ret = 0; ··· 1311 1304 * SSN association set here 1312 1305 */ 1313 1306 mpext = skb_ext_find(skb, SKB_EXT_MPTCP); 1314 - can_collapse = (info->size_goal - skb->len > 0) && 1315 - mptcp_skb_can_collapse_to(data_seq, skb, mpext); 1316 - if (!can_collapse) { 1307 + if (!mptcp_skb_can_collapse_to(data_seq, skb, mpext)) { 1317 1308 TCP_SKB_CB(skb)->eor = 1; 1318 - } else { 1309 + goto alloc_skb; 1310 + } 1311 + 1312 + must_collapse = (info->size_goal - skb->len > 0) && 1313 + (skb_shinfo(skb)->nr_frags < sysctl_max_skb_frags); 1314 + if (must_collapse) { 1319 1315 size_bias = skb->len; 1320 1316 avail_size = info->size_goal - skb->len; 1321 1317 } 1322 1318 } 1319 + 1320 + alloc_skb: 1321 + if (!must_collapse && !ssk->sk_tx_skb_cache && 1322 + !mptcp_alloc_tx_skb(sk, ssk, info->data_lock_held)) 1323 + return 0; 1323 1324 1324 1325 /* Zero window and all data acked? Probe. */ 1325 1326 avail_size = mptcp_check_allowed_size(msk, data_seq, avail_size); ··· 1358 1343 if (skb == tail) { 1359 1344 TCP_SKB_CB(tail)->tcp_flags &= ~TCPHDR_PSH; 1360 1345 mpext->data_len += ret; 1361 - WARN_ON_ONCE(!can_collapse); 1362 1346 WARN_ON_ONCE(zero_window_probe); 1363 1347 goto out; 1364 1348 } ··· 1544 1530 if (ssk != prev_ssk) 1545 1531 lock_sock(ssk); 1546 1532 1547 - /* keep it simple and always provide a new skb for the 1548 - * subflow, even if we will not use it when collapsing 1549 - * on the pending one 1550 - */ 1551 - if (!mptcp_alloc_tx_skb(sk, ssk)) { 1552 - mptcp_push_release(sk, ssk, &info); 1553 - goto out; 1554 - } 1555 - 1556 1533 ret = mptcp_sendmsg_frag(sk, ssk, dfrag, &info); 1557 1534 if (ret <= 0) { 1558 1535 mptcp_push_release(sk, ssk, &info); ··· 1576 1571 static void __mptcp_subflow_push_pending(struct sock *sk, struct sock *ssk) 1577 1572 { 1578 1573 struct mptcp_sock *msk = mptcp_sk(sk); 1579 - struct mptcp_sendmsg_info info; 1574 + struct mptcp_sendmsg_info info = { 1575 + .data_lock_held = true, 1576 + }; 1580 1577 struct mptcp_data_frag *dfrag; 1581 1578 struct sock *xmit_ssk; 1582 1579 int len, copied = 0; ··· 1603 1596 mptcp_subflow_delegate(mptcp_subflow_ctx(xmit_ssk)); 1604 1597 goto out; 1605 1598 } 1606 - 1607 - if (unlikely(mptcp_must_reclaim_memory(sk, ssk))) { 1608 - __mptcp_update_wmem(sk); 1609 - sk_mem_reclaim_partial(sk); 1610 - } 1611 - if (!__mptcp_alloc_tx_skb(sk, ssk, GFP_ATOMIC)) 1612 - goto out; 1613 1599 1614 1600 ret = mptcp_sendmsg_frag(sk, ssk, dfrag, &info); 1615 1601 if (ret <= 0) ··· 2409 2409 info.sent = 0; 2410 2410 info.limit = READ_ONCE(msk->csum_enabled) ? dfrag->data_len : dfrag->already_sent; 2411 2411 while (info.sent < info.limit) { 2412 - if (!mptcp_alloc_tx_skb(sk, ssk)) 2413 - break; 2414 - 2415 2412 ret = mptcp_sendmsg_frag(sk, ssk, dfrag, &info); 2416 2413 if (ret <= 0) 2417 2414 break;
+2 -1
net/mptcp/protocol.h
··· 34 34 #define OPTIONS_MPTCP_MPC (OPTION_MPTCP_MPC_SYN | OPTION_MPTCP_MPC_SYNACK | \ 35 35 OPTION_MPTCP_MPC_ACK) 36 36 #define OPTIONS_MPTCP_MPJ (OPTION_MPTCP_MPJ_SYN | OPTION_MPTCP_MPJ_SYNACK | \ 37 - OPTION_MPTCP_MPJ_SYNACK) 37 + OPTION_MPTCP_MPJ_ACK) 38 38 39 39 /* MPTCP option subtypes */ 40 40 #define MPTCPOPT_MP_CAPABLE 0 ··· 573 573 void mptcp_subflow_shutdown(struct sock *sk, struct sock *ssk, int how); 574 574 void mptcp_close_ssk(struct sock *sk, struct sock *ssk, 575 575 struct mptcp_subflow_context *subflow); 576 + void mptcp_subflow_send_ack(struct sock *ssk); 576 577 void mptcp_subflow_reset(struct sock *ssk); 577 578 void mptcp_sock_graft(struct sock *sk, struct socket *parent); 578 579 struct socket *__mptcp_nmpc_socket(const struct mptcp_sock *msk);
+3
net/ncsi/internal.h
··· 80 80 #define NCSI_OEM_MFR_BCM_ID 0x113d 81 81 #define NCSI_OEM_MFR_INTEL_ID 0x157 82 82 /* Intel specific OEM command */ 83 + #define NCSI_OEM_INTEL_CMD_GMA 0x06 /* CMD ID for Get MAC */ 83 84 #define NCSI_OEM_INTEL_CMD_KEEP_PHY 0x20 /* CMD ID for Keep PHY up */ 84 85 /* Broadcom specific OEM Command */ 85 86 #define NCSI_OEM_BCM_CMD_GMA 0x01 /* CMD ID for Get MAC */ ··· 90 89 #define NCSI_OEM_MLX_CMD_SMAF 0x01 /* CMD ID for Set MC Affinity */ 91 90 #define NCSI_OEM_MLX_CMD_SMAF_PARAM 0x07 /* Parameter for SMAF */ 92 91 /* OEM Command payload lengths*/ 92 + #define NCSI_OEM_INTEL_CMD_GMA_LEN 5 93 93 #define NCSI_OEM_INTEL_CMD_KEEP_PHY_LEN 7 94 94 #define NCSI_OEM_BCM_CMD_GMA_LEN 12 95 95 #define NCSI_OEM_MLX_CMD_GMA_LEN 8 ··· 101 99 /* Mac address offset in OEM response */ 102 100 #define BCM_MAC_ADDR_OFFSET 28 103 101 #define MLX_MAC_ADDR_OFFSET 8 102 + #define INTEL_MAC_ADDR_OFFSET 1 104 103 105 104 106 105 struct ncsi_channel_version {
+24 -1
net/ncsi/ncsi-manage.c
··· 795 795 return ret; 796 796 } 797 797 798 + static int ncsi_oem_gma_handler_intel(struct ncsi_cmd_arg *nca) 799 + { 800 + unsigned char data[NCSI_OEM_INTEL_CMD_GMA_LEN]; 801 + int ret = 0; 802 + 803 + nca->payload = NCSI_OEM_INTEL_CMD_GMA_LEN; 804 + 805 + memset(data, 0, NCSI_OEM_INTEL_CMD_GMA_LEN); 806 + *(unsigned int *)data = ntohl((__force __be32)NCSI_OEM_MFR_INTEL_ID); 807 + data[4] = NCSI_OEM_INTEL_CMD_GMA; 808 + 809 + nca->data = data; 810 + 811 + ret = ncsi_xmit_cmd(nca); 812 + if (ret) 813 + netdev_err(nca->ndp->ndev.dev, 814 + "NCSI: Failed to transmit cmd 0x%x during configure\n", 815 + nca->type); 816 + 817 + return ret; 818 + } 819 + 798 820 /* OEM Command handlers initialization */ 799 821 static struct ncsi_oem_gma_handler { 800 822 unsigned int mfr_id; 801 823 int (*handler)(struct ncsi_cmd_arg *nca); 802 824 } ncsi_oem_gma_handlers[] = { 803 825 { NCSI_OEM_MFR_BCM_ID, ncsi_oem_gma_handler_bcm }, 804 - { NCSI_OEM_MFR_MLX_ID, ncsi_oem_gma_handler_mlx } 826 + { NCSI_OEM_MFR_MLX_ID, ncsi_oem_gma_handler_mlx }, 827 + { NCSI_OEM_MFR_INTEL_ID, ncsi_oem_gma_handler_intel } 805 828 }; 806 829 807 830 static int ncsi_gma_handler(struct ncsi_cmd_arg *nca, unsigned int mf_id)
+6
net/ncsi/ncsi-pkt.h
··· 178 178 unsigned char data[]; /* Cmd specific Data */ 179 179 }; 180 180 181 + /* Intel Response Data */ 182 + struct ncsi_rsp_oem_intel_pkt { 183 + unsigned char cmd; /* OEM Command ID */ 184 + unsigned char data[]; /* Cmd specific Data */ 185 + }; 186 + 181 187 /* Get Link Status */ 182 188 struct ncsi_rsp_gls_pkt { 183 189 struct ncsi_rsp_pkt_hdr rsp; /* Response header */
+42
net/ncsi/ncsi-rsp.c
··· 699 699 return 0; 700 700 } 701 701 702 + /* Response handler for Intel command Get Mac Address */ 703 + static int ncsi_rsp_handler_oem_intel_gma(struct ncsi_request *nr) 704 + { 705 + struct ncsi_dev_priv *ndp = nr->ndp; 706 + struct net_device *ndev = ndp->ndev.dev; 707 + const struct net_device_ops *ops = ndev->netdev_ops; 708 + struct ncsi_rsp_oem_pkt *rsp; 709 + struct sockaddr saddr; 710 + int ret = 0; 711 + 712 + /* Get the response header */ 713 + rsp = (struct ncsi_rsp_oem_pkt *)skb_network_header(nr->rsp); 714 + 715 + saddr.sa_family = ndev->type; 716 + ndev->priv_flags |= IFF_LIVE_ADDR_CHANGE; 717 + memcpy(saddr.sa_data, &rsp->data[INTEL_MAC_ADDR_OFFSET], ETH_ALEN); 718 + /* Increase mac address by 1 for BMC's address */ 719 + eth_addr_inc((u8 *)saddr.sa_data); 720 + if (!is_valid_ether_addr((const u8 *)saddr.sa_data)) 721 + return -ENXIO; 722 + 723 + /* Set the flag for GMA command which should only be called once */ 724 + ndp->gma_flag = 1; 725 + 726 + ret = ops->ndo_set_mac_address(ndev, &saddr); 727 + if (ret < 0) 728 + netdev_warn(ndev, 729 + "NCSI: 'Writing mac address to device failed\n"); 730 + 731 + return ret; 732 + } 733 + 702 734 /* Response handler for Intel card */ 703 735 static int ncsi_rsp_handler_oem_intel(struct ncsi_request *nr) 704 736 { 737 + struct ncsi_rsp_oem_intel_pkt *intel; 738 + struct ncsi_rsp_oem_pkt *rsp; 739 + 740 + /* Get the response header */ 741 + rsp = (struct ncsi_rsp_oem_pkt *)skb_network_header(nr->rsp); 742 + intel = (struct ncsi_rsp_oem_intel_pkt *)(rsp->data); 743 + 744 + if (intel->cmd == NCSI_OEM_INTEL_CMD_GMA) 745 + return ncsi_rsp_handler_oem_intel_gma(nr); 746 + 705 747 return 0; 706 748 } 707 749
+66 -35
net/netfilter/nf_conntrack_core.c
··· 21 21 #include <linux/stddef.h> 22 22 #include <linux/slab.h> 23 23 #include <linux/random.h> 24 - #include <linux/jhash.h> 25 24 #include <linux/siphash.h> 26 25 #include <linux/err.h> 27 26 #include <linux/percpu.h> ··· 76 77 77 78 #define GC_SCAN_INTERVAL (120u * HZ) 78 79 #define GC_SCAN_MAX_DURATION msecs_to_jiffies(10) 80 + 81 + #define MAX_CHAINLEN 64u 79 82 80 83 static struct conntrack_gc_work conntrack_gc_work; 81 84 ··· 185 184 unsigned int nf_conntrack_max __read_mostly; 186 185 EXPORT_SYMBOL_GPL(nf_conntrack_max); 187 186 seqcount_spinlock_t nf_conntrack_generation __read_mostly; 188 - static unsigned int nf_conntrack_hash_rnd __read_mostly; 187 + static siphash_key_t nf_conntrack_hash_rnd __read_mostly; 189 188 190 189 static u32 hash_conntrack_raw(const struct nf_conntrack_tuple *tuple, 191 190 const struct net *net) 192 191 { 193 - unsigned int n; 194 - u32 seed; 192 + struct { 193 + struct nf_conntrack_man src; 194 + union nf_inet_addr dst_addr; 195 + u32 net_mix; 196 + u16 dport; 197 + u16 proto; 198 + } __aligned(SIPHASH_ALIGNMENT) combined; 195 199 196 200 get_random_once(&nf_conntrack_hash_rnd, sizeof(nf_conntrack_hash_rnd)); 197 201 198 - /* The direction must be ignored, so we hash everything up to the 199 - * destination ports (which is a multiple of 4) and treat the last 200 - * three bytes manually. 201 - */ 202 - seed = nf_conntrack_hash_rnd ^ net_hash_mix(net); 203 - n = (sizeof(tuple->src) + sizeof(tuple->dst.u3)) / sizeof(u32); 204 - return jhash2((u32 *)tuple, n, seed ^ 205 - (((__force __u16)tuple->dst.u.all << 16) | 206 - tuple->dst.protonum)); 202 + memset(&combined, 0, sizeof(combined)); 203 + 204 + /* The direction must be ignored, so handle usable members manually. */ 205 + combined.src = tuple->src; 206 + combined.dst_addr = tuple->dst.u3; 207 + combined.net_mix = net_hash_mix(net); 208 + combined.dport = (__force __u16)tuple->dst.u.all; 209 + combined.proto = tuple->dst.protonum; 210 + 211 + return (u32)siphash(&combined, sizeof(combined), &nf_conntrack_hash_rnd); 207 212 } 208 213 209 214 static u32 scale_hash(u32 hash) ··· 842 835 unsigned int hash, reply_hash; 843 836 struct nf_conntrack_tuple_hash *h; 844 837 struct hlist_nulls_node *n; 838 + unsigned int chainlen = 0; 845 839 unsigned int sequence; 840 + int err = -EEXIST; 846 841 847 842 zone = nf_ct_zone(ct); 848 843 ··· 858 849 } while (nf_conntrack_double_lock(net, hash, reply_hash, sequence)); 859 850 860 851 /* See if there's one in the list already, including reverse */ 861 - hlist_nulls_for_each_entry(h, n, &nf_conntrack_hash[hash], hnnode) 852 + hlist_nulls_for_each_entry(h, n, &nf_conntrack_hash[hash], hnnode) { 862 853 if (nf_ct_key_equal(h, &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple, 863 854 zone, net)) 864 855 goto out; 865 856 866 - hlist_nulls_for_each_entry(h, n, &nf_conntrack_hash[reply_hash], hnnode) 857 + if (chainlen++ > MAX_CHAINLEN) 858 + goto chaintoolong; 859 + } 860 + 861 + chainlen = 0; 862 + 863 + hlist_nulls_for_each_entry(h, n, &nf_conntrack_hash[reply_hash], hnnode) { 867 864 if (nf_ct_key_equal(h, &ct->tuplehash[IP_CT_DIR_REPLY].tuple, 868 865 zone, net)) 869 866 goto out; 867 + if (chainlen++ > MAX_CHAINLEN) 868 + goto chaintoolong; 869 + } 870 870 871 871 smp_wmb(); 872 872 /* The caller holds a reference to this object */ ··· 885 867 NF_CT_STAT_INC(net, insert); 886 868 local_bh_enable(); 887 869 return 0; 888 - 870 + chaintoolong: 871 + NF_CT_STAT_INC(net, chaintoolong); 872 + err = -ENOSPC; 889 873 out: 890 874 nf_conntrack_double_unlock(hash, reply_hash); 891 875 local_bh_enable(); 892 - return -EEXIST; 876 + return err; 893 877 } 894 878 EXPORT_SYMBOL_GPL(nf_conntrack_hash_check_insert); 895 879 ··· 1104 1084 __nf_conntrack_confirm(struct sk_buff *skb) 1105 1085 { 1106 1086 const struct nf_conntrack_zone *zone; 1087 + unsigned int chainlen = 0, sequence; 1107 1088 unsigned int hash, reply_hash; 1108 1089 struct nf_conntrack_tuple_hash *h; 1109 1090 struct nf_conn *ct; ··· 1112 1091 struct hlist_nulls_node *n; 1113 1092 enum ip_conntrack_info ctinfo; 1114 1093 struct net *net; 1115 - unsigned int sequence; 1116 1094 int ret = NF_DROP; 1117 1095 1118 1096 ct = nf_ct_get(skb, &ctinfo); ··· 1171 1151 /* See if there's one in the list already, including reverse: 1172 1152 NAT could have grabbed it without realizing, since we're 1173 1153 not in the hash. If there is, we lost race. */ 1174 - hlist_nulls_for_each_entry(h, n, &nf_conntrack_hash[hash], hnnode) 1154 + hlist_nulls_for_each_entry(h, n, &nf_conntrack_hash[hash], hnnode) { 1175 1155 if (nf_ct_key_equal(h, &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple, 1176 1156 zone, net)) 1177 1157 goto out; 1158 + if (chainlen++ > MAX_CHAINLEN) 1159 + goto chaintoolong; 1160 + } 1178 1161 1179 - hlist_nulls_for_each_entry(h, n, &nf_conntrack_hash[reply_hash], hnnode) 1162 + chainlen = 0; 1163 + hlist_nulls_for_each_entry(h, n, &nf_conntrack_hash[reply_hash], hnnode) { 1180 1164 if (nf_ct_key_equal(h, &ct->tuplehash[IP_CT_DIR_REPLY].tuple, 1181 1165 zone, net)) 1182 1166 goto out; 1167 + if (chainlen++ > MAX_CHAINLEN) { 1168 + chaintoolong: 1169 + nf_ct_add_to_dying_list(ct); 1170 + NF_CT_STAT_INC(net, chaintoolong); 1171 + NF_CT_STAT_INC(net, insert_failed); 1172 + ret = NF_DROP; 1173 + goto dying; 1174 + } 1175 + } 1183 1176 1184 1177 /* Timer relative to confirmation time, not original 1185 1178 setting time, otherwise we'd get timer wrap in ··· 2627 2594 spin_lock_init(&nf_conntrack_locks[i]); 2628 2595 2629 2596 if (!nf_conntrack_htable_size) { 2630 - /* Idea from tcp.c: use 1/16384 of memory. 2631 - * On i386: 32MB machine has 512 buckets. 2632 - * >= 1GB machines have 16384 buckets. 2633 - * >= 4GB machines have 65536 buckets. 2634 - */ 2635 2597 nf_conntrack_htable_size 2636 2598 = (((nr_pages << PAGE_SHIFT) / 16384) 2637 2599 / sizeof(struct hlist_head)); 2638 - if (nr_pages > (4 * (1024 * 1024 * 1024 / PAGE_SIZE))) 2639 - nf_conntrack_htable_size = 65536; 2600 + if (BITS_PER_LONG >= 64 && 2601 + nr_pages > (4 * (1024 * 1024 * 1024 / PAGE_SIZE))) 2602 + nf_conntrack_htable_size = 262144; 2640 2603 else if (nr_pages > (1024 * 1024 * 1024 / PAGE_SIZE)) 2641 - nf_conntrack_htable_size = 16384; 2642 - if (nf_conntrack_htable_size < 32) 2643 - nf_conntrack_htable_size = 32; 2604 + nf_conntrack_htable_size = 65536; 2644 2605 2645 - /* Use a max. factor of four by default to get the same max as 2646 - * with the old struct list_heads. When a table size is given 2647 - * we use the old value of 8 to avoid reducing the max. 2648 - * entries. */ 2649 - max_factor = 4; 2606 + if (nf_conntrack_htable_size < 1024) 2607 + nf_conntrack_htable_size = 1024; 2608 + /* Use a max. factor of one by default to keep the average 2609 + * hash chain length at 2 entries. Each entry has to be added 2610 + * twice (once for original direction, once for reply). 2611 + * When a table size is given we use the old value of 8 to 2612 + * avoid implicit reduction of the max entries setting. 2613 + */ 2614 + max_factor = 1; 2650 2615 } 2651 2616 2652 2617 nf_conntrack_hash = nf_ct_alloc_hashtable(&nf_conntrack_htable_size, 1);
+18 -7
net/netfilter/nf_conntrack_expect.c
··· 17 17 #include <linux/err.h> 18 18 #include <linux/percpu.h> 19 19 #include <linux/kernel.h> 20 - #include <linux/jhash.h> 20 + #include <linux/siphash.h> 21 21 #include <linux/moduleparam.h> 22 22 #include <linux/export.h> 23 23 #include <net/net_namespace.h> ··· 41 41 unsigned int nf_ct_expect_max __read_mostly; 42 42 43 43 static struct kmem_cache *nf_ct_expect_cachep __read_mostly; 44 - static unsigned int nf_ct_expect_hashrnd __read_mostly; 44 + static siphash_key_t nf_ct_expect_hashrnd __read_mostly; 45 45 46 46 /* nf_conntrack_expect helper functions */ 47 47 void nf_ct_unlink_expect_report(struct nf_conntrack_expect *exp, ··· 81 81 82 82 static unsigned int nf_ct_expect_dst_hash(const struct net *n, const struct nf_conntrack_tuple *tuple) 83 83 { 84 - unsigned int hash, seed; 84 + struct { 85 + union nf_inet_addr dst_addr; 86 + u32 net_mix; 87 + u16 dport; 88 + u8 l3num; 89 + u8 protonum; 90 + } __aligned(SIPHASH_ALIGNMENT) combined; 91 + u32 hash; 85 92 86 93 get_random_once(&nf_ct_expect_hashrnd, sizeof(nf_ct_expect_hashrnd)); 87 94 88 - seed = nf_ct_expect_hashrnd ^ net_hash_mix(n); 95 + memset(&combined, 0, sizeof(combined)); 89 96 90 - hash = jhash2(tuple->dst.u3.all, ARRAY_SIZE(tuple->dst.u3.all), 91 - (((tuple->dst.protonum ^ tuple->src.l3num) << 16) | 92 - (__force __u16)tuple->dst.u.all) ^ seed); 97 + combined.dst_addr = tuple->dst.u3; 98 + combined.net_mix = net_hash_mix(n); 99 + combined.dport = (__force __u16)tuple->dst.u.all; 100 + combined.l3num = tuple->src.l3num; 101 + combined.protonum = tuple->dst.protonum; 102 + 103 + hash = siphash(&combined, sizeof(combined), &nf_ct_expect_hashrnd); 93 104 94 105 return reciprocal_scale(hash, nf_ct_expect_hsize); 95 106 }
+3 -1
net/netfilter/nf_conntrack_netlink.c
··· 2528 2528 nla_put_be32(skb, CTA_STATS_SEARCH_RESTART, 2529 2529 htonl(st->search_restart)) || 2530 2530 nla_put_be32(skb, CTA_STATS_CLASH_RESOLVE, 2531 - htonl(st->clash_resolve))) 2531 + htonl(st->clash_resolve)) || 2532 + nla_put_be32(skb, CTA_STATS_CHAIN_TOOLONG, 2533 + htonl(st->chaintoolong))) 2532 2534 goto nla_put_failure; 2533 2535 2534 2536 nlmsg_end(skb, nlh);
+2 -2
net/netfilter/nf_conntrack_standalone.c
··· 432 432 unsigned int nr_conntracks; 433 433 434 434 if (v == SEQ_START_TOKEN) { 435 - seq_puts(seq, "entries clashres found new invalid ignore delete delete_list insert insert_failed drop early_drop icmp_error expect_new expect_create expect_delete search_restart\n"); 435 + seq_puts(seq, "entries clashres found new invalid ignore delete chainlength insert insert_failed drop early_drop icmp_error expect_new expect_create expect_delete search_restart\n"); 436 436 return 0; 437 437 } 438 438 ··· 447 447 st->invalid, 448 448 0, 449 449 0, 450 - 0, 450 + st->chaintoolong, 451 451 st->insert, 452 452 st->insert_failed, 453 453 st->drop,
+14 -4
net/netfilter/nf_nat_core.c
··· 13 13 #include <linux/skbuff.h> 14 14 #include <linux/gfp.h> 15 15 #include <net/xfrm.h> 16 - #include <linux/jhash.h> 16 + #include <linux/siphash.h> 17 17 #include <linux/rtnetlink.h> 18 18 19 19 #include <net/netfilter/nf_conntrack.h> ··· 34 34 35 35 static struct hlist_head *nf_nat_bysource __read_mostly; 36 36 static unsigned int nf_nat_htable_size __read_mostly; 37 - static unsigned int nf_nat_hash_rnd __read_mostly; 37 + static siphash_key_t nf_nat_hash_rnd __read_mostly; 38 38 39 39 struct nf_nat_lookup_hook_priv { 40 40 struct nf_hook_entries __rcu *entries; ··· 153 153 hash_by_src(const struct net *n, const struct nf_conntrack_tuple *tuple) 154 154 { 155 155 unsigned int hash; 156 + struct { 157 + struct nf_conntrack_man src; 158 + u32 net_mix; 159 + u32 protonum; 160 + } __aligned(SIPHASH_ALIGNMENT) combined; 156 161 157 162 get_random_once(&nf_nat_hash_rnd, sizeof(nf_nat_hash_rnd)); 158 163 164 + memset(&combined, 0, sizeof(combined)); 165 + 159 166 /* Original src, to ensure we map it consistently if poss. */ 160 - hash = jhash2((u32 *)&tuple->src, sizeof(tuple->src) / sizeof(u32), 161 - tuple->dst.protonum ^ nf_nat_hash_rnd ^ net_hash_mix(n)); 167 + combined.src = tuple->src; 168 + combined.net_mix = net_hash_mix(n); 169 + combined.protonum = tuple->dst.protonum; 170 + 171 + hash = siphash(&combined, sizeof(combined), &nf_nat_hash_rnd); 162 172 163 173 return reciprocal_scale(hash, nf_nat_htable_size); 164 174 }
+8 -1
net/netfilter/nft_ct.c
··· 41 41 #ifdef CONFIG_NF_CONNTRACK_ZONES 42 42 static DEFINE_PER_CPU(struct nf_conn *, nft_ct_pcpu_template); 43 43 static unsigned int nft_ct_pcpu_template_refcnt __read_mostly; 44 + static DEFINE_MUTEX(nft_ct_pcpu_mutex); 44 45 #endif 45 46 46 47 static u64 nft_ct_get_eval_counter(const struct nf_conn_counter *c, ··· 526 525 #endif 527 526 #ifdef CONFIG_NF_CONNTRACK_ZONES 528 527 case NFT_CT_ZONE: 528 + mutex_lock(&nft_ct_pcpu_mutex); 529 529 if (--nft_ct_pcpu_template_refcnt == 0) 530 530 nft_ct_tmpl_put_pcpu(); 531 + mutex_unlock(&nft_ct_pcpu_mutex); 531 532 break; 532 533 #endif 533 534 default: ··· 567 564 #endif 568 565 #ifdef CONFIG_NF_CONNTRACK_ZONES 569 566 case NFT_CT_ZONE: 570 - if (!nft_ct_tmpl_alloc_pcpu()) 567 + mutex_lock(&nft_ct_pcpu_mutex); 568 + if (!nft_ct_tmpl_alloc_pcpu()) { 569 + mutex_unlock(&nft_ct_pcpu_mutex); 571 570 return -ENOMEM; 571 + } 572 572 nft_ct_pcpu_template_refcnt++; 573 + mutex_unlock(&nft_ct_pcpu_mutex); 573 574 len = sizeof(u16); 574 575 break; 575 576 #endif
+1 -1
net/qrtr/qrtr.c
··· 493 493 goto err; 494 494 } 495 495 496 - if (!size || size & 3 || len != size + hdrlen) 496 + if (!size || len != ALIGN(size, 4) + hdrlen) 497 497 goto err; 498 498 499 499 if (cb->dst_port != QRTR_PORT_CTRL && cb->type != QRTR_TYPE_DATA &&
+10 -2
net/sched/sch_fq_codel.c
··· 369 369 { 370 370 struct fq_codel_sched_data *q = qdisc_priv(sch); 371 371 struct nlattr *tb[TCA_FQ_CODEL_MAX + 1]; 372 + u32 quantum = 0; 372 373 int err; 373 374 374 375 if (!opt) ··· 386 385 if (!q->flows_cnt || 387 386 q->flows_cnt > 65536) 388 387 return -EINVAL; 388 + } 389 + if (tb[TCA_FQ_CODEL_QUANTUM]) { 390 + quantum = max(256U, nla_get_u32(tb[TCA_FQ_CODEL_QUANTUM])); 391 + if (quantum > FQ_CODEL_QUANTUM_MAX) { 392 + NL_SET_ERR_MSG(extack, "Invalid quantum"); 393 + return -EINVAL; 394 + } 389 395 } 390 396 sch_tree_lock(sch); 391 397 ··· 420 412 if (tb[TCA_FQ_CODEL_ECN]) 421 413 q->cparams.ecn = !!nla_get_u32(tb[TCA_FQ_CODEL_ECN]); 422 414 423 - if (tb[TCA_FQ_CODEL_QUANTUM]) 424 - q->quantum = max(256U, nla_get_u32(tb[TCA_FQ_CODEL_QUANTUM])); 415 + if (quantum) 416 + q->quantum = quantum; 425 417 426 418 if (tb[TCA_FQ_CODEL_DROP_BATCH_SIZE]) 427 419 q->drop_batch_size = max(1U, nla_get_u32(tb[TCA_FQ_CODEL_DROP_BATCH_SIZE]));
+1 -1
net/tipc/socket.c
··· 1426 1426 if (ua) { 1427 1427 if (!tipc_uaddr_valid(ua, m->msg_namelen)) 1428 1428 return -EINVAL; 1429 - atype = ua->addrtype; 1429 + atype = ua->addrtype; 1430 1430 } 1431 1431 1432 1432 /* If socket belongs to a communication group follow other paths */
+64 -10
tools/testing/selftests/bpf/prog_tests/xdp_bonding.c
··· 384 384 { 385 385 struct bpf_link *link = NULL; 386 386 struct bpf_link *link2 = NULL; 387 - int veth, bond; 388 - int err; 387 + int veth, bond, err; 389 388 390 389 if (!ASSERT_OK(system("ip link add veth type veth"), "add veth")) 391 390 goto out; ··· 398 399 if (!ASSERT_GE(bond, 0, "if_nametoindex bond")) 399 400 goto out; 400 401 401 402 /* enslaving with a XDP program loaded is allowed */ 402 403 link = bpf_program__attach_xdp(skeletons->xdp_dummy->progs.xdp_dummy_prog, veth); 403 404 if (!ASSERT_OK_PTR(link, "attach program to veth")) 404 405 goto out; 405 406 406 407 err = system("ip link set veth master bond"); 407 - if (!ASSERT_NEQ(err, 0, "attaching slave with xdp program expected to fail")) 408 + if (!ASSERT_OK(err, "set veth master")) 408 409 goto out; 409 410 410 411 bpf_link__destroy(link); 411 412 link = NULL; 412 - 413 - err = system("ip link set veth master bond"); 414 - if (!ASSERT_OK(err, "set veth master")) 415 - goto out; 416 413 417 414 /* attaching to slave when master has no program is allowed */ 418 415 link = bpf_program__attach_xdp(skeletons->xdp_dummy->progs.xdp_dummy_prog, veth); ··· 429 434 goto out; 430 435 431 436 /* attaching to slave not allowed when master has program loaded */ 432 - link2 = bpf_program__attach_xdp(skeletons->xdp_dummy->progs.xdp_dummy_prog, bond); 433 - ASSERT_ERR_PTR(link2, "attach program to slave when master has program"); 437 + link2 = bpf_program__attach_xdp(skeletons->xdp_dummy->progs.xdp_dummy_prog, veth); 438 + if (!ASSERT_ERR_PTR(link2, "attach program to slave when master has program")) 439 + goto out; 440 + 441 + bpf_link__destroy(link); 442 + link = NULL; 443 + 444 + /* test program unwinding with a non-XDP slave */ 445 + if (!ASSERT_OK(system("ip link add vxlan type vxlan id 1 remote 1.2.3.4 dstport 0 dev lo"), 446 + "add vxlan")) 447 + goto out; 448 + 449 + err = system("ip link set vxlan master bond"); 450 + if (!ASSERT_OK(err, "set vxlan master"))
451 + goto out; 452 + 453 + /* attaching not allowed when one slave does not support XDP */ 454 + link = bpf_program__attach_xdp(skeletons->xdp_dummy->progs.xdp_dummy_prog, bond); 455 + if (!ASSERT_ERR_PTR(link, "attach program to master when slave does not support XDP")) 456 + goto out; 434 457 435 458 out: 436 459 bpf_link__destroy(link); ··· 456 443 457 444 system("ip link del veth"); 458 445 system("ip link del bond"); 446 + system("ip link del vxlan"); 447 + } 448 + 449 + /* Test with nested bonding devices to catch issue with negative jump label count */ 450 + static void test_xdp_bonding_nested(struct skeletons *skeletons) 451 + { 452 + struct bpf_link *link = NULL; 453 + int bond, err; 454 + 455 + if (!ASSERT_OK(system("ip link add bond type bond"), "add bond")) 456 + goto out; 457 + 458 + bond = if_nametoindex("bond"); 459 + if (!ASSERT_GE(bond, 0, "if_nametoindex bond")) 460 + goto out; 461 + 462 + if (!ASSERT_OK(system("ip link add bond_nest1 type bond"), "add bond_nest1")) 463 + goto out; 464 + 465 + err = system("ip link set bond_nest1 master bond"); 466 + if (!ASSERT_OK(err, "set bond_nest1 master")) 467 + goto out; 468 + 469 + if (!ASSERT_OK(system("ip link add bond_nest2 type bond"), "add bond_nest1")) 470 + goto out; 471 + 472 + err = system("ip link set bond_nest2 master bond_nest1"); 473 + if (!ASSERT_OK(err, "set bond_nest2 master")) 474 + goto out; 475 + 476 + link = bpf_program__attach_xdp(skeletons->xdp_dummy->progs.xdp_dummy_prog, bond); 477 + ASSERT_OK_PTR(link, "attach program to master"); 478 + 479 + out: 480 + bpf_link__destroy(link); 481 + system("ip link del bond"); 482 + system("ip link del bond_nest1"); 483 + system("ip link del bond_nest2"); 459 484 } 460 485 461 486 static int libbpf_debug_print(enum libbpf_print_level level, ··· 546 495 547 496 if (test__start_subtest("xdp_bonding_attach")) 548 497 test_xdp_bonding_attach(&skeletons); 498 + 499 + if (test__start_subtest("xdp_bonding_nested")) 500 + test_xdp_bonding_nested(&skeletons);
549 501 550 502 for (i = 0; i < ARRAY_SIZE(bond_test_cases); i++) { 551 503 struct bond_test_case *test_case = &bond_test_cases[i];
+1
tools/testing/selftests/net/Makefile
··· 27 27 TEST_PROGS += veth.sh 28 28 TEST_PROGS += ioam6.sh 29 29 TEST_PROGS += gro.sh 30 + TEST_PROGS += gre_gso.sh 30 31 TEST_PROGS_EXTENDED := in_netns.sh 31 32 TEST_GEN_FILES = socket nettest 32 33 TEST_GEN_FILES += psock_fanout psock_tpacket msg_zerocopy reuseport_addr_any
+236
tools/testing/selftests/net/gre_gso.sh
··· 1 + #!/bin/bash 2 + # SPDX-License-Identifier: GPL-2.0 3 + 4 + # This test is for checking GRE GSO. 5 + 6 + ret=0 7 + # Kselftest framework requirement - SKIP code is 4. 8 + ksft_skip=4 9 + 10 + # all tests in this script. Can be overridden with -t option 11 + TESTS="gre_gso" 12 + 13 + VERBOSE=0 14 + PAUSE_ON_FAIL=no 15 + PAUSE=no 16 + IP="ip -netns ns1" 17 + NS_EXEC="ip netns exec ns1" 18 + TMPFILE=`mktemp` 19 + PID= 20 + 21 + log_test() 22 + { 23 + local rc=$1 24 + local expected=$2 25 + local msg="$3" 26 + 27 + if [ ${rc} -eq ${expected} ]; then 28 + printf " TEST: %-60s [ OK ]\n" "${msg}" 29 + nsuccess=$((nsuccess+1)) 30 + else 31 + ret=1 32 + nfail=$((nfail+1)) 33 + printf " TEST: %-60s [FAIL]\n" "${msg}" 34 + if [ "${PAUSE_ON_FAIL}" = "yes" ]; then 35 + echo 36 + echo "hit enter to continue, 'q' to quit" 37 + read a 38 + [ "$a" = "q" ] && exit 1 39 + fi 40 + fi 41 + 42 + if [ "${PAUSE}" = "yes" ]; then 43 + echo 44 + echo "hit enter to continue, 'q' to quit" 45 + read a 46 + [ "$a" = "q" ] && exit 1 47 + fi 48 + } 49 + 50 + setup() 51 + { 52 + set -e 53 + ip netns add ns1 54 + ip netns set ns1 auto 55 + $IP link set dev lo up 56 + 57 + ip link add veth0 type veth peer name veth1 58 + ip link set veth0 up 59 + ip link set veth1 netns ns1 60 + $IP link set veth1 name veth0 61 + $IP link set veth0 up 62 + 63 + dd if=/dev/urandom of=$TMPFILE bs=1024 count=2048 &>/dev/null 64 + set +e 65 + } 66 + 67 + cleanup() 68 + { 69 + rm -rf $TMPFILE 70 + [ -n "$PID" ] && kill $PID 71 + ip link del dev gre1 &> /dev/null 72 + ip link del dev veth0 &> /dev/null 73 + ip netns del ns1 74 + } 75 + 76 + get_linklocal() 77 + { 78 + local dev=$1 79 + local ns=$2 80 + local addr 81 + 82 + [ -n "$ns" ] && ns="-netns $ns" 83 + 84 + addr=$(ip -6 -br $ns addr show dev ${dev} | \ 85 + awk '{ 86 + for (i = 3; i <= NF; ++i) { 87 + if ($i ~ /^fe80/) 88 + print $i 89 + } 90 + }' 91 + ) 92 + addr=${addr/\/*} 93 + 94 + [ -z "$addr" ] && return 1 95 + 96 + echo $addr 97 + 98 + return 0 99 + } 
100 + 101 + gre_create_tun() 102 + { 103 + local a1=$1 104 + local a2=$2 105 + local mode 106 + 107 + [[ $a1 =~ ^[0-9.]*$ ]] && mode=gre || mode=ip6gre 108 + 109 + ip tunnel add gre1 mode $mode local $a1 remote $a2 dev veth0 110 + ip link set gre1 up 111 + $IP tunnel add gre1 mode $mode local $a2 remote $a1 dev veth0 112 + $IP link set gre1 up 113 + } 114 + 115 + gre_gst_test_checks() 116 + { 117 + local name=$1 118 + local addr=$2 119 + 120 + $NS_EXEC nc -kl $port >/dev/null & 121 + PID=$! 122 + while ! $NS_EXEC ss -ltn | grep -q $port; do ((i++)); sleep 0.01; done 123 + 124 + cat $TMPFILE | timeout 1 nc $addr $port 125 + log_test $? 0 "$name - copy file w/ TSO" 126 + 127 + ethtool -K veth0 tso off 128 + 129 + cat $TMPFILE | timeout 1 nc $addr $port 130 + log_test $? 0 "$name - copy file w/ GSO" 131 + 132 + ethtool -K veth0 tso on 133 + 134 + kill $PID 135 + PID= 136 + } 137 + 138 + gre6_gso_test() 139 + { 140 + local port=7777 141 + 142 + setup 143 + 144 + a1=$(get_linklocal veth0) 145 + a2=$(get_linklocal veth0 ns1) 146 + 147 + gre_create_tun $a1 $a2 148 + 149 + ip addr add 172.16.2.1/24 dev gre1 150 + $IP addr add 172.16.2.2/24 dev gre1 151 + 152 + ip -6 addr add 2001:db8:1::1/64 dev gre1 nodad 153 + $IP -6 addr add 2001:db8:1::2/64 dev gre1 nodad 154 + 155 + sleep 2 156 + 157 + gre_gst_test_checks GREv6/v4 172.16.2.2 158 + gre_gst_test_checks GREv6/v6 2001:db8:1::2 159 + 160 + cleanup 161 + } 162 + 163 + gre_gso_test() 164 + { 165 + gre6_gso_test 166 + } 167 + 168 + ################################################################################ 169 + # usage 170 + 171 + usage() 172 + { 173 + cat <<EOF 174 + usage: ${0##*/} OPTS 175 + 176 + -t <test> Test(s) to run (default: all) 177 + (options: $TESTS) 178 + -p Pause on fail 179 + -P Pause after each test before cleanup 180 + -v verbose mode (show commands and output) 181 + EOF 182 + } 183 + 184 + ################################################################################ 185 + # main 186 + 187 + while getopts :t:pPhv o
188 + do 189 + case $o in 190 + t) TESTS=$OPTARG;; 191 + p) PAUSE_ON_FAIL=yes;; 192 + P) PAUSE=yes;; 193 + v) VERBOSE=$(($VERBOSE + 1));; 194 + h) usage; exit 0;; 195 + *) usage; exit 1;; 196 + esac 197 + done 198 + 199 + PEER_CMD="ip netns exec ${PEER_NS}" 200 + 201 + # make sure we don't pause twice 202 + [ "${PAUSE}" = "yes" ] && PAUSE_ON_FAIL=no 203 + 204 + if [ "$(id -u)" -ne 0 ];then 205 + echo "SKIP: Need root privileges" 206 + exit $ksft_skip; 207 + fi 208 + 209 + if [ ! -x "$(command -v ip)" ]; then 210 + echo "SKIP: Could not run test without ip tool" 211 + exit $ksft_skip 212 + fi 213 + 214 + if [ ! -x "$(command -v nc)" ]; then 215 + echo "SKIP: Could not run test without nc tool" 216 + exit $ksft_skip 217 + fi 218 + 219 + # start clean 220 + cleanup &> /dev/null 221 + 222 + for t in $TESTS 223 + do 224 + case $t in 225 + gre_gso) gre_gso_test;; 226 + 227 + help) echo "Test names: $TESTS"; exit 0;; 228 + esac 229 + done 230 + 231 + if [ "$TESTS" != "none" ]; then 232 + printf "\nTests passed: %3d\n" ${nsuccess} 233 + printf "Tests failed: %3d\n" ${nfail} 234 + fi 235 + 236 + exit $ret
+2 -2
tools/testing/selftests/net/mptcp/simult_flows.sh
··· 22 22 23 23 cleanup() 24 24 { 25 - rm -f "$cin" "$cout" 26 - rm -f "$sin" "$sout" 25 + rm -f "$cout" "$sout" 26 + rm -f "$large" "$small" 27 27 rm -f "$capout" 28 28 29 29 local netns