Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'net-5.16-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
"Including fixes from bpf, can and netfilter.

Current release - regressions:

- bpf: do not reject when the stack read size is different from the
tracked scalar size

- net: fix premature exit from NAPI state polling in napi_disable()

- riscv, bpf: fix RV32 broken build, and silence RV64 warning

Current release - new code bugs:

- net: fix possible NULL deref in sock_reserve_memory

- amt: fix error return code in amt_init(); fix stopping the
workqueue

- ax88796c: use the correct ioctl callback

Previous releases - always broken:

- bpf: stop caching subprog index in the bpf_pseudo_func insn

- security: fixups for the security hooks in sctp

- nfc: add necessary privilege flags in netlink layer, limit
operations to admin only

- vsock: prevent unnecessary refcnt inc for non-blocking connect

- net/smc: fix sk_refcnt underflow on link down and fallback

- nfnetlink_queue: fix OOB when mac header was cleared

- can: j1939: ignore invalid messages per standard

- bpf, sockmap:
- fix race in ingress receive verdict with redirect to self
- fix incorrect sk_skb data_end access when src_reg = dst_reg
- strparser, and tls are reusing qdisc_skb_cb and colliding

- ethtool: fix ethtool msg len calculation for pause stats

- vlan: fix a UAF in vlan_dev_real_dev() when ref-holder tries to
access an unregistering real_dev

- udp6: make encap_rcv() bump the v6 not v4 stats

- drv: prestera: add explicit padding to fix m68k build

- drv: felix: fix broken VLAN-tagged PTP under VLAN-aware bridge

- drv: mvpp2: fix wrong SerDes reconfiguration order

Misc & small latecomers:

- ipvs: auto-load ipvs on genl access

- mctp: sanity check the struct sockaddr_mctp padding fields

- libfs: support RENAME_EXCHANGE in simple_rename()

- avoid double accounting for pure zerocopy skbs"

* tag 'net-5.16-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (123 commits)
selftests/net: udpgso_bench_rx: fix port argument
net: wwan: iosm: fix compilation warning
cxgb4: fix eeprom len when diagnostics not implemented
net: fix premature exit from NAPI state polling in napi_disable()
net/smc: fix sk_refcnt underflow on linkdown and fallback
net/mlx5: Lag, fix a potential Oops with mlx5_lag_create_definer()
gve: fix unmatched u64_stats_update_end()
net: ethernet: lantiq_etop: Fix compilation error
selftests: forwarding: Fix packet matching in mirroring selftests
vsock: prevent unnecessary refcnt inc for nonblocking connect
net: marvell: mvpp2: Fix wrong SerDes reconfiguration order
net: ethernet: ti: cpsw_ale: Fix access to un-initialized memory
net: stmmac: allow a tc-taprio base-time of zero
selftests: net: test_vxlan_under_vrf: fix HV connectivity test
net: hns3: allow configure ETS bandwidth of all TCs
net: hns3: remove check VF uc mac exist when set by PF
net: hns3: fix some mac statistics is always 0 in device version V2
net: hns3: fix kernel crash when unload VF while it is being reset
net: hns3: sync rx ring head in echo common pull
net: hns3: fix pfc packet number incorrect after querying pfc parameters
...

+1243 -731
+2 -4
Documentation/networking/ip-sysctl.rst
··· 1004 1004 udp_mem - vector of 3 INTEGERs: min, pressure, max 1005 1005 Number of pages allowed for queueing by all UDP sockets. 1006 1006 1007 - min: Below this number of pages UDP is not bothered about its 1008 - memory appetite. When amount of memory allocated by UDP exceeds 1009 - this number, UDP starts to moderate memory usage. 1007 + min: Number of pages allowed for queueing by all UDP sockets. 1010 1008 1011 1009 pressure: This value was introduced to follow format of tcp_mem. 1012 1010 1013 - max: Number of pages allowed for queueing by all UDP sockets. 1011 + max: This value was introduced to follow format of tcp_mem. 1014 1012 1015 1013 Default is calculated at boot time from amount of available memory. 1016 1014
+33 -32
Documentation/security/SCTP.rst
··· 15 15 security_sctp_assoc_request() 16 16 security_sctp_bind_connect() 17 17 security_sctp_sk_clone() 18 - 19 - Also the following security hook has been utilised:: 20 - 21 - security_inet_conn_established() 18 + security_sctp_assoc_established() 22 19 23 20 The usage of these hooks are described below with the SELinux implementation 24 21 described in the `SCTP SELinux Support`_ chapter. ··· 23 26 24 27 security_sctp_assoc_request() 25 28 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 26 - Passes the ``@ep`` and ``@chunk->skb`` of the association INIT packet to the 29 + Passes the ``@asoc`` and ``@chunk->skb`` of the association INIT packet to the 27 30 security module. Returns 0 on success, error on failure. 28 31 :: 29 32 30 - @ep - pointer to sctp endpoint structure. 33 + @asoc - pointer to sctp association structure. 31 34 @skb - pointer to skbuff of association packet. 32 35 33 36 ··· 114 117 calls **sctp_peeloff**\(3). 115 118 :: 116 119 117 - @ep - pointer to current sctp endpoint structure. 120 + @asoc - pointer to current sctp association structure. 118 121 @sk - pointer to current sock structure. 119 - @sk - pointer to new sock structure. 122 + @newsk - pointer to new sock structure. 120 123 121 124 122 - security_inet_conn_established() 125 + security_sctp_assoc_established() 123 126 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 124 - Called when a COOKIE ACK is received:: 127 + Called when a COOKIE ACK is received, and the peer secid will be 128 + saved into ``@asoc->peer_secid`` for client:: 125 129 126 - @sk - pointer to sock structure. 130 + @asoc - pointer to sctp association structure. 127 131 @skb - pointer to skbuff of the COOKIE ACK packet. 
128 132 129 133 ··· 132 134 ------------------------------------------------- 133 135 134 136 The following diagram shows the use of ``security_sctp_bind_connect()``, 135 - ``security_sctp_assoc_request()``, ``security_inet_conn_established()`` when 137 + ``security_sctp_assoc_request()``, ``security_sctp_assoc_established()`` when 136 138 establishing an association. 137 139 :: 138 140 ··· 149 151 INIT ---------------------------------------------> 150 152 sctp_sf_do_5_1B_init() 151 153 Respond to an INIT chunk. 152 - SCTP peer endpoint "A" is 153 - asking for an association. Call 154 - security_sctp_assoc_request() 154 + SCTP peer endpoint "A" is asking 155 + for a temporary association. 156 + Call security_sctp_assoc_request() 155 157 to set the peer label if first 156 158 association. 157 159 If not first association, check ··· 161 163 | discard the packet. 162 164 | 163 165 COOKIE ECHO ------------------------------------------> 164 - | 165 - | 166 - | 166 + sctp_sf_do_5_1D_ce() 167 + Respond to an COOKIE ECHO chunk. 168 + Confirm the cookie and create a 169 + permanent association. 170 + Call security_sctp_assoc_request() to 171 + do the same as for INIT chunk Response. 167 172 <------------------------------------------- COOKIE ACK 168 173 | | 169 174 sctp_sf_do_5_1E_ca | 170 - Call security_inet_conn_established() | 175 + Call security_sctp_assoc_established() | 171 176 to set the peer label. | 172 177 | | 173 178 | If SCTP_SOCKET_TCP or peeled off ··· 196 195 security_sctp_assoc_request() 197 196 security_sctp_bind_connect() 198 197 security_sctp_sk_clone() 199 - security_inet_conn_established() 198 + security_sctp_assoc_established() 200 199 201 200 202 201 security_sctp_assoc_request() 203 202 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 204 - Passes the ``@ep`` and ``@chunk->skb`` of the association INIT packet to the 203 + Passes the ``@asoc`` and ``@chunk->skb`` of the association INIT packet to the 205 204 security module. Returns 0 on success, error on failure. 
206 205 :: 207 206 208 - @ep - pointer to sctp endpoint structure. 207 + @asoc - pointer to sctp association structure. 209 208 @skb - pointer to skbuff of association packet. 210 209 211 210 The security module performs the following operations: 212 - IF this is the first association on ``@ep->base.sk``, then set the peer 211 + IF this is the first association on ``@asoc->base.sk``, then set the peer 213 212 sid to that in ``@skb``. This will ensure there is only one peer sid 214 - assigned to ``@ep->base.sk`` that may support multiple associations. 213 + assigned to ``@asoc->base.sk`` that may support multiple associations. 215 214 216 - ELSE validate the ``@ep->base.sk peer_sid`` against the ``@skb peer sid`` 215 + ELSE validate the ``@asoc->base.sk peer_sid`` against the ``@skb peer sid`` 217 216 to determine whether the association should be allowed or denied. 218 217 219 - Set the sctp ``@ep sid`` to socket's sid (from ``ep->base.sk``) with 218 + Set the sctp ``@asoc sid`` to socket's sid (from ``asoc->base.sk``) with 220 219 MLS portion taken from ``@skb peer sid``. This will be used by SCTP 221 220 TCP style sockets and peeled off connections as they cause a new socket 222 221 to be generated. ··· 260 259 Called whenever a new socket is created by **accept**\(2) (i.e. a TCP style 261 260 socket) or when a socket is 'peeled off' e.g userspace calls 262 261 **sctp_peeloff**\(3). ``security_sctp_sk_clone()`` will set the new 263 - sockets sid and peer sid to that contained in the ``@ep sid`` and 264 - ``@ep peer sid`` respectively. 262 + sockets sid and peer sid to that contained in the ``@asoc sid`` and 263 + ``@asoc peer sid`` respectively. 265 264 :: 266 265 267 - @ep - pointer to current sctp endpoint structure. 266 + @asoc - pointer to current sctp association structure. 268 267 @sk - pointer to current sock structure. 269 - @sk - pointer to new sock structure. 268 + @newsk - pointer to new sock structure. 
270 269 271 270 272 - security_inet_conn_established() 271 + security_sctp_assoc_established() 273 272 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 274 273 Called when a COOKIE ACK is received where it sets the connection's peer sid 275 274 to that in ``@skb``:: 276 275 277 - @sk - pointer to sock structure. 276 + @asoc - pointer to sctp association structure. 278 277 @skb - pointer to skbuff of the COOKIE ACK packet. 279 278 280 279
+3 -2
MAINTAINERS
··· 872 872 F: drivers/thermal/thermal_mmio.c 873 873 874 874 AMAZON ETHERNET DRIVERS 875 - M: Netanel Belgazal <netanel@amazon.com> 875 + M: Shay Agroskin <shayagr@amazon.com> 876 876 M: Arthur Kiyanovski <akiyano@amazon.com> 877 - R: Guy Tzalik <gtzalik@amazon.com> 877 + R: David Arinzon <darinzon@amazon.com> 878 + R: Noam Dagan <ndagan@amazon.com> 878 879 R: Saeed Bishara <saeedb@amazon.com> 879 880 L: netdev@vger.kernel.org 880 881 S: Supported
+2 -2
arch/riscv/mm/extable.c
··· 11 11 #include <linux/module.h> 12 12 #include <linux/uaccess.h> 13 13 14 - #ifdef CONFIG_BPF_JIT 14 + #if defined(CONFIG_BPF_JIT) && defined(CONFIG_ARCH_RV64I) 15 15 int rv_bpf_fixup_exception(const struct exception_table_entry *ex, struct pt_regs *regs); 16 16 #endif 17 17 ··· 23 23 if (!fixup) 24 24 return 0; 25 25 26 - #ifdef CONFIG_BPF_JIT 26 + #if defined(CONFIG_BPF_JIT) && defined(CONFIG_ARCH_RV64I) 27 27 if (regs->epc >= BPF_JIT_REGION_START && regs->epc < BPF_JIT_REGION_END) 28 28 return rv_bpf_fixup_exception(fixup, regs); 29 29 #endif
+2
arch/riscv/net/bpf_jit_comp64.c
··· 460 460 #define BPF_FIXUP_REG_MASK GENMASK(31, 27) 461 461 462 462 int rv_bpf_fixup_exception(const struct exception_table_entry *ex, 463 + struct pt_regs *regs); 464 + int rv_bpf_fixup_exception(const struct exception_table_entry *ex, 463 465 struct pt_regs *regs) 464 466 { 465 467 off_t offset = FIELD_GET(BPF_FIXUP_OFFSET_MASK, ex->fixup);
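The two added lines above are a self-referencing prototype: `rv_bpf_fixup_exception()` is called from `arch/riscv/mm/extable.c`, and without a declaration in scope before its definition an RV64 build with `-Wmissing-prototypes` warns. A minimal userspace sketch of the same pattern (the function name and body here are hypothetical stand-ins):

```c
#include <assert.h>

/* Hypothetical stand-in for rv_bpf_fixup_exception(): a non-static
 * function intended to be called from another file. Placing the
 * prototype immediately before the definition keeps
 * -Wmissing-prototypes quiet without needing a shared header. */
int fixup_demo(int epc);	/* declaration first ... */
int fixup_demo(int epc)		/* ... then the definition */
{
	return epc < 0 ? -epc : epc;	/* toy body, not the real fixup */
}
```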
+1
drivers/net/Kconfig
··· 294 294 config AMT 295 295 tristate "Automatic Multicast Tunneling (AMT)" 296 296 depends on INET && IP_MULTICAST 297 + depends on IPV6 || !IPV6 297 298 select NET_UDP_TUNNEL 298 299 help 299 300 This allows one to create AMT(Automatic Multicast Tunneling)
+6 -5
drivers/net/amt.c
··· 12 12 #include <linux/igmp.h> 13 13 #include <linux/workqueue.h> 14 14 #include <net/net_namespace.h> 15 - #include <net/protocol.h> 16 15 #include <net/ip.h> 17 16 #include <net/udp.h> 18 17 #include <net/udp_tunnel.h> ··· 22 23 #include <linux/security.h> 23 24 #include <net/gro_cells.h> 24 25 #include <net/ipv6.h> 25 - #include <net/protocol.h> 26 26 #include <net/if_inet6.h> 27 27 #include <net/ndisc.h> 28 28 #include <net/addrconf.h> ··· 2765 2767 rcu_read_lock_bh(); 2766 2768 amt = rcu_dereference_sk_user_data(sk); 2767 2769 if (!amt) 2768 - goto drop; 2770 + goto out; 2769 2771 2770 2772 if (amt->mode != AMT_MODE_GATEWAY) 2771 2773 goto drop; ··· 2787 2789 default: 2788 2790 goto drop; 2789 2791 } 2792 + out: 2790 2793 rcu_read_unlock_bh(); 2791 2794 return 0; 2792 2795 drop: ··· 3258 3259 goto unregister_notifier; 3259 3260 3260 3261 amt_wq = alloc_workqueue("amt", WQ_UNBOUND, 1); 3261 - if (!amt_wq) 3262 + if (!amt_wq) { 3263 + err = -ENOMEM; 3262 3264 goto rtnl_unregister; 3265 + } 3263 3266 3264 3267 spin_lock_init(&source_gc_lock); 3265 3268 spin_lock_bh(&source_gc_lock); ··· 3286 3285 { 3287 3286 rtnl_link_unregister(&amt_link_ops); 3288 3287 unregister_netdevice_notifier(&amt_notifier_block); 3289 - flush_delayed_work(&source_gc_wq); 3288 + cancel_delayed_work(&source_gc_wq); 3290 3289 __amt_source_gc_work(); 3291 3290 destroy_workqueue(amt_wq); 3292 3291 }
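The amt_init() hunk above shows a common error-path bug: when alloc_workqueue() fails after earlier steps succeeded, the error code must be set before the goto, or the function returns the stale 0 and reports false success. A plain-C sketch of that pattern, with illustrative names and `-12` standing in for `-ENOMEM`:

```c
#include <stdlib.h>

/* Sketch of the amt_init() fix: set the error code explicitly before
 * jumping to the unwind label, so the caller never sees a stale value. */
static int demo_init(int fail_alloc)
{
	void *wq;
	int err = 0;	/* earlier init steps succeeded */

	wq = fail_alloc ? NULL : malloc(16);
	if (!wq) {
		err = -12;	/* stands in for -ENOMEM: set before the goto */
		goto out_unwind;
	}

	free(wq);
	return 0;

out_unwind:
	/* undo the registrations that succeeded before the failure */
	return err;
}
```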
+11 -25
drivers/net/bonding/bond_sysfs_slave.c
··· 108 108 } 109 109 static SLAVE_ATTR_RO(ad_partner_oper_port_state); 110 110 111 - static const struct slave_attribute *slave_attrs[] = { 112 - &slave_attr_state, 113 - &slave_attr_mii_status, 114 - &slave_attr_link_failure_count, 115 - &slave_attr_perm_hwaddr, 116 - &slave_attr_queue_id, 117 - &slave_attr_ad_aggregator_id, 118 - &slave_attr_ad_actor_oper_port_state, 119 - &slave_attr_ad_partner_oper_port_state, 111 + static const struct attribute *slave_attrs[] = { 112 + &slave_attr_state.attr, 113 + &slave_attr_mii_status.attr, 114 + &slave_attr_link_failure_count.attr, 115 + &slave_attr_perm_hwaddr.attr, 116 + &slave_attr_queue_id.attr, 117 + &slave_attr_ad_aggregator_id.attr, 118 + &slave_attr_ad_actor_oper_port_state.attr, 119 + &slave_attr_ad_partner_oper_port_state.attr, 120 120 NULL 121 121 }; 122 122 ··· 137 137 138 138 int bond_sysfs_slave_add(struct slave *slave) 139 139 { 140 - const struct slave_attribute **a; 141 - int err; 142 - 143 - for (a = slave_attrs; *a; ++a) { 144 - err = sysfs_create_file(&slave->kobj, &((*a)->attr)); 145 - if (err) { 146 - kobject_put(&slave->kobj); 147 - return err; 148 - } 149 - } 150 - 151 - return 0; 140 + return sysfs_create_files(&slave->kobj, slave_attrs); 152 141 } 153 142 154 143 void bond_sysfs_slave_del(struct slave *slave) 155 144 { 156 - const struct slave_attribute **a; 157 - 158 - for (a = slave_attrs; *a; ++a) 159 - sysfs_remove_file(&slave->kobj, &((*a)->attr)); 145 + sysfs_remove_files(&slave->kobj, slave_attrs); 160 146 }
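The refactor above replaces a hand-rolled loop over a NULL-terminated attribute array (with manual cleanup on failure) by sysfs_create_files()/sysfs_remove_files(), which register the whole array and unwind themselves. A rough userspace model of what the batch helper does internally; `create_one()` and `remove_one()` are hypothetical stand-ins:

```c
#include <stddef.h>
#include <string.h>

static int create_one(const char *name)
{
	return strcmp(name, "bad") == 0 ? -1 : 0;	/* simulate one failure */
}

static void remove_one(const char *name)
{
	(void)name;	/* nothing to do in this toy model */
}

/* Register every entry of a NULL-terminated array; on failure, remove
 * the entries already created and report the error, which is roughly
 * what sysfs_create_files() does for the caller. */
static int create_files(const char **names)
{
	size_t i;

	for (i = 0; names[i]; i++)
		if (create_one(names[i]) != 0)
			goto undo;
	return 0;
undo:
	while (i--)
		remove_one(names[i]);
	return -1;
}
```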
+4 -2
drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
··· 1092 1092 1093 1093 err = mcp251xfd_chip_rx_int_enable(priv); 1094 1094 if (err) 1095 - return err; 1095 + goto out_chip_stop; 1096 1096 1097 1097 err = mcp251xfd_chip_ecc_init(priv); 1098 1098 if (err) ··· 2290 2290 * check will fail, too. So leave IRQ handler 2291 2291 * directly. 2292 2292 */ 2293 - if (priv->can.state == CAN_STATE_BUS_OFF) 2293 + if (priv->can.state == CAN_STATE_BUS_OFF) { 2294 + can_rx_offload_threaded_irq_finish(&priv->offload); 2294 2295 return IRQ_HANDLED; 2296 + } 2295 2297 } 2296 2298 2297 2299 handled = IRQ_HANDLED;
+2 -4
drivers/net/can/usb/etas_es58x/es58x_core.c
··· 664 664 struct can_device_stats *can_stats = &can->can_stats; 665 665 struct can_frame *cf = NULL; 666 666 struct sk_buff *skb; 667 - int ret; 667 + int ret = 0; 668 668 669 669 if (!netif_running(netdev)) { 670 670 if (net_ratelimit()) ··· 823 823 can->state = CAN_STATE_BUS_OFF; 824 824 can_bus_off(netdev); 825 825 ret = can->do_set_mode(netdev, CAN_MODE_STOP); 826 - if (ret) 827 - return ret; 828 826 } 829 827 break; 830 828 ··· 879 881 ES58X_EVENT_BUSOFF, timestamp); 880 882 } 881 883 882 - return 0; 884 + return ret; 883 885 } 884 886 885 887 /**
+13 -14
drivers/net/can/usb/peak_usb/pcan_usb.c
··· 841 841 pdev->bec.rxerr = 0; 842 842 pdev->bec.txerr = 0; 843 843 844 - /* be notified on error counter changes (if requested by user) */ 845 - if (dev->can.ctrlmode & CAN_CTRLMODE_BERR_REPORTING) { 846 - err = pcan_usb_set_err_frame(dev, PCAN_USB_BERR_MASK); 847 - if (err) 848 - netdev_warn(dev->netdev, 849 - "Asking for BERR reporting error %u\n", 850 - err); 851 - } 844 + /* always ask the device for BERR reporting, to be able to switch from 845 + * WARNING to PASSIVE state 846 + */ 847 + err = pcan_usb_set_err_frame(dev, PCAN_USB_BERR_MASK); 848 + if (err) 849 + netdev_warn(dev->netdev, 850 + "Asking for BERR reporting error %u\n", 851 + err); 852 852 853 853 /* if revision greater than 3, can put silent mode on/off */ 854 854 if (dev->device_rev > 3) { ··· 883 883 return err; 884 884 } 885 885 886 + dev_info(dev->netdev->dev.parent, 887 + "PEAK-System %s adapter hwrev %u serial %08X (%u channel)\n", 888 + pcan_usb.name, dev->device_rev, serial_number, 889 + pcan_usb.ctrl_count); 890 + 886 891 /* Since rev 4.1, PCAN-USB is able to make single-shot as well as 887 892 * looped back frames. 888 893 */ ··· 900 895 dev_info(dev->netdev->dev.parent, 901 896 "Firmware update available. Please contact support@peak-system.com\n"); 902 897 } 903 - 904 - dev_info(dev->netdev->dev.parent, 905 - "PEAK-System %s adapter hwrev %u serial %08X (%u channel)\n", 906 - pcan_usb.name, dev->device_rev, serial_number, 907 - pcan_usb.ctrl_count); 908 898 909 899 return 0; 910 900 } ··· 986 986 .device_id = PCAN_USB_PRODUCT_ID, 987 987 .ctrl_count = 1, 988 988 .ctrlmode_supported = CAN_CTRLMODE_3_SAMPLES | CAN_CTRLMODE_LISTENONLY | 989 - CAN_CTRLMODE_BERR_REPORTING | 990 989 CAN_CTRLMODE_CC_LEN8_DLC, 991 990 .clock = { 992 991 .freq = PCAN_USB_CRYSTAL_HZ / 2,
+4 -1
drivers/net/dsa/mv88e6xxx/chip.c
··· 640 640 unsigned long *mask, 641 641 struct phylink_link_state *state) 642 642 { 643 - if (port == 0 || port == 9 || port == 10) { 643 + bool is_6191x = 644 + chip->info->prod_num == MV88E6XXX_PORT_SWITCH_ID_PROD_6191X; 645 + 646 + if (((port == 0 || port == 9) && !is_6191x) || port == 10) { 644 647 phylink_set(mask, 10000baseT_Full); 645 648 phylink_set(mask, 10000baseKR_Full); 646 649 phylink_set(mask, 10000baseCR_Full);
+3 -6
drivers/net/dsa/ocelot/felix.c
··· 1370 1370 static bool felix_rxtstamp(struct dsa_switch *ds, int port, 1371 1371 struct sk_buff *skb, unsigned int type) 1372 1372 { 1373 - u8 *extraction = skb->data - ETH_HLEN - OCELOT_TAG_LEN; 1373 + u32 tstamp_lo = OCELOT_SKB_CB(skb)->tstamp_lo; 1374 1374 struct skb_shared_hwtstamps *shhwtstamps; 1375 1375 struct ocelot *ocelot = ds->priv; 1376 - u32 tstamp_lo, tstamp_hi; 1377 1376 struct timespec64 ts; 1378 - u64 tstamp, val; 1377 + u32 tstamp_hi; 1378 + u64 tstamp; 1379 1379 1380 1380 /* If the "no XTR IRQ" workaround is in use, tell DSA to defer this skb 1381 1381 * for RX timestamping. Then free it, and poll for its copy through ··· 1389 1389 1390 1390 ocelot_ptp_gettime64(&ocelot->ptp_info, &ts); 1391 1391 tstamp = ktime_set(ts.tv_sec, ts.tv_nsec); 1392 - 1393 - ocelot_xfh_get_rew_val(extraction, &val); 1394 - tstamp_lo = (u32)val; 1395 1392 1396 1393 tstamp_hi = tstamp >> 32; 1397 1394 if ((tstamp & 0xffffffff) < tstamp_lo)
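The hunk above makes felix_rxtstamp() use the `tstamp_lo` captured earlier in `OCELOT_SKB_CB(skb)` instead of re-parsing an extraction header that may no longer precede `skb->data`. The splice logic visible at the end of the hunk is worth spelling out: the hardware stamps only the low 32 bits, and the driver rebuilds the full timestamp from the current PTP time, borrowing one from the high word if the counter wrapped between capture and readout. A self-contained sketch:

```c
#include <stdint.h>

/* Rebuild a full 64-bit timestamp from the current time and a 32-bit
 * partial timestamp captured earlier by hardware. If the low word of
 * "now" is smaller than the captured low word, the counter wrapped
 * since capture, so borrow one from the high word. */
static uint64_t splice_tstamp(uint64_t now, uint32_t tstamp_lo)
{
	uint32_t tstamp_hi = (uint32_t)(now >> 32);

	if ((uint32_t)now < tstamp_lo)
		tstamp_hi--;	/* low word wrapped between capture and readout */

	return ((uint64_t)tstamp_hi << 32) | tstamp_lo;
}
```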
+8
drivers/net/dsa/qca8k.c
··· 1109 1109 if (ret) 1110 1110 return ret; 1111 1111 1112 + /* Make sure MAC06 is disabled */ 1113 + ret = qca8k_reg_clear(priv, QCA8K_REG_PORT0_PAD_CTRL, 1114 + QCA8K_PORT0_PAD_MAC06_EXCHANGE_EN); 1115 + if (ret) { 1116 + dev_err(priv->dev, "failed disabling MAC06 exchange"); 1117 + return ret; 1118 + } 1119 + 1112 1120 /* Enable CPU Port */ 1113 1121 ret = qca8k_reg_set(priv, QCA8K_REG_GLOBAL_FW_CTRL0, 1114 1122 QCA8K_GLOBAL_FW_CTRL0_CPU_PORT_EN);
+1
drivers/net/dsa/qca8k.h
··· 34 34 #define QCA8K_MASK_CTRL_DEVICE_ID_MASK GENMASK(15, 8) 35 35 #define QCA8K_MASK_CTRL_DEVICE_ID(x) ((x) >> 8) 36 36 #define QCA8K_REG_PORT0_PAD_CTRL 0x004 37 + #define QCA8K_PORT0_PAD_MAC06_EXCHANGE_EN BIT(31) 37 38 #define QCA8K_PORT0_PAD_SGMII_RXCLK_FALLING_EDGE BIT(19) 38 39 #define QCA8K_PORT0_PAD_SGMII_TXCLK_FALLING_EDGE BIT(18) 39 40 #define QCA8K_REG_PORT5_PAD_CTRL 0x008
+3 -1
drivers/net/ethernet/asix/ax88796c_main.c
··· 934 934 .ndo_stop = ax88796c_close, 935 935 .ndo_start_xmit = ax88796c_start_xmit, 936 936 .ndo_get_stats64 = ax88796c_get_stats64, 937 - .ndo_do_ioctl = ax88796c_ioctl, 937 + .ndo_eth_ioctl = ax88796c_ioctl, 938 938 .ndo_set_mac_address = eth_mac_addr, 939 939 .ndo_set_features = ax88796c_set_features, 940 940 }; ··· 1114 1114 return 0; 1115 1115 } 1116 1116 1117 + #ifdef CONFIG_OF 1117 1118 static const struct of_device_id ax88796c_dt_ids[] = { 1118 1119 { .compatible = "asix,ax88796c" }, 1119 1120 {}, 1120 1121 }; 1121 1122 MODULE_DEVICE_TABLE(of, ax88796c_dt_ids); 1123 + #endif 1122 1124 1123 1125 static const struct spi_device_id asix_id[] = { 1124 1126 { "ax88796c", 0 },
+1 -1
drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c
··· 443 443 case DEVLINK_RELOAD_ACTION_DRIVER_REINIT: { 444 444 if (BNXT_PF(bp) && bp->pf.active_vfs) { 445 445 NL_SET_ERR_MSG_MOD(extack, 446 - "reload is unsupported when VFs are allocated\n"); 446 + "reload is unsupported when VFs are allocated"); 447 447 return -EOPNOTSUPP; 448 448 } 449 449 rtnl_lock();
-1
drivers/net/ethernet/broadcom/tg3.c
··· 5503 5503 int workaround, port_a; 5504 5504 5505 5505 serdes_cfg = 0; 5506 - expected_sg_dig_ctrl = 0; 5507 5506 workaround = 0; 5508 5507 port_a = 1; 5509 5508 current_link_up = false;
+5 -2
drivers/net/ethernet/chelsio/cxgb4/cxgb4_ethtool.c
··· 2015 2015 if (ret) 2016 2016 return ret; 2017 2017 2018 - if (!sff8472_comp || (sff_diag_type & 4)) { 2018 + if (!sff8472_comp || (sff_diag_type & SFP_DIAG_ADDRMODE)) { 2019 2019 modinfo->type = ETH_MODULE_SFF_8079; 2020 2020 modinfo->eeprom_len = ETH_MODULE_SFF_8079_LEN; 2021 2021 } else { 2022 2022 modinfo->type = ETH_MODULE_SFF_8472; 2023 - modinfo->eeprom_len = ETH_MODULE_SFF_8472_LEN; 2023 + if (sff_diag_type & SFP_DIAG_IMPLEMENTED) 2024 + modinfo->eeprom_len = ETH_MODULE_SFF_8472_LEN; 2025 + else 2026 + modinfo->eeprom_len = ETH_MODULE_SFF_8472_LEN / 2; 2024 2027 } 2025 2028 break; 2026 2029
+2
drivers/net/ethernet/chelsio/cxgb4/t4_hw.h
··· 293 293 #define I2C_PAGE_SIZE 0x100 294 294 #define SFP_DIAG_TYPE_ADDR 0x5c 295 295 #define SFP_DIAG_TYPE_LEN 0x1 296 + #define SFP_DIAG_ADDRMODE BIT(2) 297 + #define SFP_DIAG_IMPLEMENTED BIT(6) 296 298 #define SFF_8472_COMP_ADDR 0x5e 297 299 #define SFF_8472_COMP_LEN 0x1 298 300 #define SFF_REV_ADDR 0x1
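Together with the cxgb4_ethtool.c hunk above, these definitions replace the magic `4` with named bits from the SFF-8472 diagnostic-monitoring-type byte. The resulting length selection can be modeled directly; the length constants below mirror ETH_MODULE_SFF_8079_LEN (256) and ETH_MODULE_SFF_8472_LEN (512):

```c
#define BIT(n)			(1u << (n))
#define SFP_DIAG_ADDRMODE	BIT(2)	/* values from the patch above */
#define SFP_DIAG_IMPLEMENTED	BIT(6)
#define SFF_8079_LEN		256
#define SFF_8472_LEN		512

/* Sketch of the get_module_info() decision: no SFF-8472 compliance, or
 * an address-change requirement, falls back to SFF-8079; SFF-8472
 * without the diagnostics page implemented has only half the EEPROM. */
static int eeprom_len(int sff8472_comp, unsigned int diag_type)
{
	if (!sff8472_comp || (diag_type & SFP_DIAG_ADDRMODE))
		return SFF_8079_LEN;
	if (diag_type & SFP_DIAG_IMPLEMENTED)
		return SFF_8472_LEN;
	return SFF_8472_LEN / 2;
}
```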
+1 -1
drivers/net/ethernet/google/gve/gve_main.c
··· 1137 1137 goto reset; 1138 1138 1139 1139 ntfy_idx = gve_tx_idx_to_ntfy(priv, txqueue); 1140 - if (ntfy_idx > priv->num_ntfy_blks) 1140 + if (ntfy_idx >= priv->num_ntfy_blks) 1141 1141 goto reset; 1142 1142 1143 1143 block = &priv->ntfy_blocks[ntfy_idx];
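The one-character gve change above is a classic off-by-one: for `num_ntfy_blks` blocks the valid indices are `0 .. num_ntfy_blks - 1`, so an index equal to the count must be rejected as well. In miniature:

```c
/* An index into a table of nblocks entries is valid only when it is
 * strictly less than the count; idx == nblocks is already out of range. */
static int idx_valid(unsigned int idx, unsigned int nblocks)
{
	return idx < nblocks;	/* i.e. reject idx >= nblocks, not just idx > nblocks */
}
```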
+2 -1
drivers/net/ethernet/google/gve/gve_rx.c
··· 500 500 rx->rx_copied_pkt++; 501 501 rx->rx_frag_copy_cnt++; 502 502 rx->rx_copybreak_pkt++; 503 - } u64_stats_update_end(&rx->statss); 503 + u64_stats_update_end(&rx->statss); 504 + } 504 505 } else { 505 506 if (rx->data.raw_addressing) { 506 507 int recycle = gve_rx_can_recycle_buffer(page_info);
+7
drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
··· 4210 4210 } 4211 4211 4212 4212 out: 4213 + /* sync head pointer before exiting, since hardware will calculate 4214 + * FBD number with head pointer 4215 + */ 4216 + if (unused_count > 0) 4217 + failure = failure || 4218 + hns3_nic_alloc_rx_buffers(ring, unused_count); 4219 + 4213 4220 return failure ? budget : recv_pkts; 4214 4221 } 4215 4222
+4 -2
drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
··· 238 238 } 239 239 240 240 /** 241 - * hns3_lp_run_test - run loopback test 241 + * hns3_lp_run_test - run loopback test 242 242 * @ndev: net device 243 243 * @mode: loopback type 244 + * 245 + * Return: %0 for success or a NIC loopback test error code on failure 244 246 */ 245 247 static int hns3_lp_run_test(struct net_device *ndev, enum hnae3_loop mode) 246 248 { ··· 400 398 } 401 399 402 400 /** 403 - * hns3_nic_self_test - self test 401 + * hns3_self_test - self test 404 402 * @ndev: net device 405 403 * @eth_test: test cmd 406 404 * @data: test result
+1
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c
··· 483 483 if (hnae3_dev_phy_imp_supported(hdev)) 484 484 hnae3_set_bit(compat, HCLGE_PHY_IMP_EN_B, 1); 485 485 hnae3_set_bit(compat, HCLGE_MAC_STATS_EXT_EN_B, 1); 486 + hnae3_set_bit(compat, HCLGE_SYNC_RX_RING_HEAD_EN_B, 1); 486 487 487 488 req->compat = cpu_to_le32(compat); 488 489 }
+1
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h
··· 1151 1151 #define HCLGE_NCSI_ERROR_REPORT_EN_B 1 1152 1152 #define HCLGE_PHY_IMP_EN_B 2 1153 1153 #define HCLGE_MAC_STATS_EXT_EN_B 3 1154 + #define HCLGE_SYNC_RX_RING_HEAD_EN_B 4 1154 1155 struct hclge_firmware_compat_cmd { 1155 1156 __le32 compat; 1156 1157 u8 rsv[20];
+9 -13
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c
··· 129 129 u32 total_ets_bw = 0; 130 130 u8 i; 131 131 132 - for (i = 0; i < hdev->tc_max; i++) { 132 + for (i = 0; i < HNAE3_MAX_TC; i++) { 133 133 switch (ets->tc_tsa[i]) { 134 134 case IEEE_8021QAZ_TSA_STRICT: 135 135 if (hdev->tm_info.tc_info[i].tc_sch_mode != ··· 286 286 287 287 static int hclge_ieee_getpfc(struct hnae3_handle *h, struct ieee_pfc *pfc) 288 288 { 289 - u64 requests[HNAE3_MAX_TC], indications[HNAE3_MAX_TC]; 290 289 struct hclge_vport *vport = hclge_get_vport(h); 291 290 struct hclge_dev *hdev = vport->back; 292 291 int ret; 293 - u8 i; 294 292 295 293 memset(pfc, 0, sizeof(*pfc)); 296 294 pfc->pfc_cap = hdev->pfc_max; 297 295 pfc->pfc_en = hdev->tm_info.pfc_en; 298 296 299 - ret = hclge_pfc_tx_stats_get(hdev, requests); 300 - if (ret) 297 + ret = hclge_mac_update_stats(hdev); 298 + if (ret) { 299 + dev_err(&hdev->pdev->dev, 300 + "failed to update MAC stats, ret = %d.\n", ret); 301 301 return ret; 302 - 303 - ret = hclge_pfc_rx_stats_get(hdev, indications); 304 - if (ret) 305 - return ret; 306 - 307 - for (i = 0; i < HCLGE_MAX_TC_NUM; i++) { 308 - pfc->requests[i] = requests[i]; 309 - pfc->indications[i] = indications[i]; 310 302 } 303 + 304 + hclge_pfc_tx_stats_get(hdev, pfc->requests); 305 + hclge_pfc_rx_stats_get(hdev, pfc->indications); 306 + 311 307 return 0; 312 308 } 313 309
+39 -67
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
··· 26 26 #include "hclge_devlink.h" 27 27 28 28 #define HCLGE_NAME "hclge" 29 - #define HCLGE_STATS_READ(p, offset) (*(u64 *)((u8 *)(p) + (offset))) 30 - #define HCLGE_MAC_STATS_FIELD_OFF(f) (offsetof(struct hclge_mac_stats, f)) 31 29 32 30 #define HCLGE_BUF_SIZE_UNIT 256U 33 31 #define HCLGE_BUF_MUL_BY 2 ··· 566 568 struct hclge_desc desc; 567 569 int ret; 568 570 571 + /* Driver needs total register number of both valid registers and 572 + * reserved registers, but the old firmware only returns number 573 + * of valid registers in device V2. To be compatible with these 574 + * devices, driver uses a fixed value. 575 + */ 576 + if (hdev->ae_dev->dev_version == HNAE3_DEVICE_VERSION_V2) { 577 + *reg_num = HCLGE_MAC_STATS_MAX_NUM_V1; 578 + return 0; 579 + } 580 + 569 581 hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_QUERY_MAC_REG_NUM, true); 570 582 ret = hclge_cmd_send(&hdev->hw, &desc, 1); 571 583 if (ret) { ··· 595 587 return 0; 596 588 } 597 589 598 - static int hclge_mac_update_stats(struct hclge_dev *hdev) 590 + int hclge_mac_update_stats(struct hclge_dev *hdev) 599 591 { 600 592 /* The firmware supports the new statistics acquisition method */ 601 593 if (hdev->ae_dev->dev_specs.mac_stats_num) ··· 2589 2581 if (hdev->num_msi < hdev->num_nic_msi + hdev->num_roce_msi) 2590 2582 return -EINVAL; 2591 2583 2592 - roce->rinfo.base_vector = hdev->roce_base_vector; 2584 + roce->rinfo.base_vector = hdev->num_nic_msi; 2593 2585 2594 2586 roce->rinfo.netdev = nic->kinfo.netdev; 2595 2587 roce->rinfo.roce_io_base = hdev->hw.io_base; ··· 2624 2616 2625 2617 hdev->num_msi = vectors; 2626 2618 hdev->num_msi_left = vectors; 2627 - 2628 - hdev->base_msi_vector = pdev->irq; 2629 - hdev->roce_base_vector = hdev->base_msi_vector + 2630 - hdev->num_nic_msi; 2631 2619 2632 2620 hdev->vector_status = devm_kcalloc(&pdev->dev, hdev->num_msi, 2633 2621 sizeof(u16), GFP_KERNEL); ··· 8953 8949 8954 8950 err_no_space: 8955 8951 /* if already overflow, not to print each time */ 8956 - if 
(!(vport->overflow_promisc_flags & HNAE3_OVERFLOW_MPE)) 8952 + if (!(vport->overflow_promisc_flags & HNAE3_OVERFLOW_MPE)) { 8953 + vport->overflow_promisc_flags |= HNAE3_OVERFLOW_MPE; 8957 8954 dev_err(&hdev->pdev->dev, "mc mac vlan table is full\n"); 8955 + } 8956 + 8958 8957 return -ENOSPC; 8959 8958 } 8960 8959 ··· 9013 9006 9014 9007 static void hclge_sync_vport_mac_list(struct hclge_vport *vport, 9015 9008 struct list_head *list, 9016 - int (*sync)(struct hclge_vport *, 9017 - const unsigned char *)) 9009 + enum HCLGE_MAC_ADDR_TYPE mac_type) 9018 9010 { 9011 + int (*sync)(struct hclge_vport *vport, const unsigned char *addr); 9019 9012 struct hclge_mac_node *mac_node, *tmp; 9020 9013 int ret; 9014 + 9015 + if (mac_type == HCLGE_MAC_ADDR_UC) 9016 + sync = hclge_add_uc_addr_common; 9017 + else 9018 + sync = hclge_add_mc_addr_common; 9021 9019 9022 9020 list_for_each_entry_safe(mac_node, tmp, list, node) { 9023 9021 ret = sync(vport, mac_node->mac_addr); ··· 9035 9023 /* If one unicast mac address is existing in hardware, 9036 9024 * we need to try whether other unicast mac addresses 9037 9025 * are new addresses that can be added. 9026 + * Multicast mac address can be reusable, even though 9027 + * there is no space to add new multicast mac address, 9028 + * we should check whether other mac addresses are 9029 + * existing in hardware for reuse. 
9038 9030 */ 9039 - if (ret != -EEXIST) 9031 + if ((mac_type == HCLGE_MAC_ADDR_UC && ret != -EEXIST) || 9032 + (mac_type == HCLGE_MAC_ADDR_MC && ret != -ENOSPC)) 9040 9033 break; 9041 9034 } 9042 9035 } ··· 9049 9032 9050 9033 static void hclge_unsync_vport_mac_list(struct hclge_vport *vport, 9051 9034 struct list_head *list, 9052 - int (*unsync)(struct hclge_vport *, 9053 - const unsigned char *)) 9035 + enum HCLGE_MAC_ADDR_TYPE mac_type) 9054 9036 { 9037 + int (*unsync)(struct hclge_vport *vport, const unsigned char *addr); 9055 9038 struct hclge_mac_node *mac_node, *tmp; 9056 9039 int ret; 9040 + 9041 + if (mac_type == HCLGE_MAC_ADDR_UC) 9042 + unsync = hclge_rm_uc_addr_common; 9043 + else 9044 + unsync = hclge_rm_mc_addr_common; 9057 9045 9058 9046 list_for_each_entry_safe(mac_node, tmp, list, node) { 9059 9047 ret = unsync(vport, mac_node->mac_addr); ··· 9190 9168 spin_unlock_bh(&vport->mac_list_lock); 9191 9169 9192 9170 /* delete first, in order to get max mac table space for adding */ 9193 - if (mac_type == HCLGE_MAC_ADDR_UC) { 9194 - hclge_unsync_vport_mac_list(vport, &tmp_del_list, 9195 - hclge_rm_uc_addr_common); 9196 - hclge_sync_vport_mac_list(vport, &tmp_add_list, 9197 - hclge_add_uc_addr_common); 9198 - } else { 9199 - hclge_unsync_vport_mac_list(vport, &tmp_del_list, 9200 - hclge_rm_mc_addr_common); 9201 - hclge_sync_vport_mac_list(vport, &tmp_add_list, 9202 - hclge_add_mc_addr_common); 9203 - } 9171 + hclge_unsync_vport_mac_list(vport, &tmp_del_list, mac_type); 9172 + hclge_sync_vport_mac_list(vport, &tmp_add_list, mac_type); 9204 9173 9205 9174 /* if some mac addresses were added/deleted fail, move back to the 9206 9175 * mac_list, and retry at next time. 
··· 9350 9337 9351 9338 spin_unlock_bh(&vport->mac_list_lock); 9352 9339 9353 - if (mac_type == HCLGE_MAC_ADDR_UC) 9354 - hclge_unsync_vport_mac_list(vport, &tmp_del_list, 9355 - hclge_rm_uc_addr_common); 9356 - else 9357 - hclge_unsync_vport_mac_list(vport, &tmp_del_list, 9358 - hclge_rm_mc_addr_common); 9340 + hclge_unsync_vport_mac_list(vport, &tmp_del_list, mac_type); 9359 9341 9360 9342 if (!list_empty(&tmp_del_list)) 9361 9343 dev_warn(&hdev->pdev->dev, ··· 9418 9410 return return_status; 9419 9411 } 9420 9412 9421 - static bool hclge_check_vf_mac_exist(struct hclge_vport *vport, int vf_idx, 9422 - u8 *mac_addr) 9423 - { 9424 - struct hclge_mac_vlan_tbl_entry_cmd req; 9425 - struct hclge_dev *hdev = vport->back; 9426 - struct hclge_desc desc; 9427 - u16 egress_port = 0; 9428 - int i; 9429 - 9430 - if (is_zero_ether_addr(mac_addr)) 9431 - return false; 9432 - 9433 - memset(&req, 0, sizeof(req)); 9434 - hnae3_set_field(egress_port, HCLGE_MAC_EPORT_VFID_M, 9435 - HCLGE_MAC_EPORT_VFID_S, vport->vport_id); 9436 - req.egress_port = cpu_to_le16(egress_port); 9437 - hclge_prepare_mac_addr(&req, mac_addr, false); 9438 - 9439 - if (hclge_lookup_mac_vlan_tbl(vport, &req, &desc, false) != -ENOENT) 9440 - return true; 9441 - 9442 - vf_idx += HCLGE_VF_VPORT_START_NUM; 9443 - for (i = HCLGE_VF_VPORT_START_NUM; i < hdev->num_alloc_vport; i++) 9444 - if (i != vf_idx && 9445 - ether_addr_equal(mac_addr, hdev->vport[i].vf_info.mac)) 9446 - return true; 9447 - 9448 - return false; 9449 - } 9450 - 9451 9413 static int hclge_set_vf_mac(struct hnae3_handle *handle, int vf, 9452 9414 u8 *mac_addr) 9453 9415 { ··· 9433 9455 "Specified MAC(=%pM) is same as before, no change committed!\n", 9434 9456 mac_addr); 9435 9457 return 0; 9436 - } 9437 - 9438 - if (hclge_check_vf_mac_exist(vport, vf, mac_addr)) { 9439 - dev_err(&hdev->pdev->dev, "Specified MAC(=%pM) exists!\n", 9440 - mac_addr); 9441 - return -EEXIST; 9442 9458 } 9443 9459 9444 9460 ether_addr_copy(vport->vf_info.mac, mac_addr);
+5 -3
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
···
 };
 
 /* max number of mac statistics on each version */
-#define HCLGE_MAC_STATS_MAX_NUM_V1    84
+#define HCLGE_MAC_STATS_MAX_NUM_V1    87
 #define HCLGE_MAC_STATS_MAX_NUM_V2    105
 
 struct hclge_comm_stats_str {
···
         (y) = (_k_ ^ ~_v_) & (_k_); \
     } while (0)
 
+#define HCLGE_MAC_STATS_FIELD_OFF(f) (offsetof(struct hclge_mac_stats, f))
+#define HCLGE_STATS_READ(p, offset) (*(u64 *)((u8 *)(p) + (offset)))
+
 #define HCLGE_MAC_TNL_LOG_SIZE    8
 #define HCLGE_VPORT_NUM 256
 struct hclge_dev {
···
     u16 num_msi;
     u16 num_msi_left;
     u16 num_msi_used;
-    u32 base_msi_vector;
     u16 *vector_status;
     int *vector_irq;
     u16 num_nic_msi;    /* Num of nic vectors for this PF */
     u16 num_roce_msi;    /* Num of roce vectors for this PF */
-    int roce_base_vector;
 
     unsigned long service_timer_period;
     unsigned long service_timer_previous;
···
 int hclge_dbg_dump_rst_info(struct hclge_dev *hdev, char *buf, int len);
 int hclge_push_vf_link_status(struct hclge_vport *vport);
 int hclge_enable_vport_vlan_filter(struct hclge_vport *vport, bool request_en);
+int hclge_mac_update_stats(struct hclge_dev *hdev);
 #endif
+36 -43
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
···
     return 0;
 }
 
-static int hclge_pfc_stats_get(struct hclge_dev *hdev,
-                               enum hclge_opcode_type opcode, u64 *stats)
+static const u16 hclge_pfc_tx_stats_offset[] = {
+    HCLGE_MAC_STATS_FIELD_OFF(mac_tx_pfc_pri0_pkt_num),
+    HCLGE_MAC_STATS_FIELD_OFF(mac_tx_pfc_pri1_pkt_num),
+    HCLGE_MAC_STATS_FIELD_OFF(mac_tx_pfc_pri2_pkt_num),
+    HCLGE_MAC_STATS_FIELD_OFF(mac_tx_pfc_pri3_pkt_num),
+    HCLGE_MAC_STATS_FIELD_OFF(mac_tx_pfc_pri4_pkt_num),
+    HCLGE_MAC_STATS_FIELD_OFF(mac_tx_pfc_pri5_pkt_num),
+    HCLGE_MAC_STATS_FIELD_OFF(mac_tx_pfc_pri6_pkt_num),
+    HCLGE_MAC_STATS_FIELD_OFF(mac_tx_pfc_pri7_pkt_num)
+};
+
+static const u16 hclge_pfc_rx_stats_offset[] = {
+    HCLGE_MAC_STATS_FIELD_OFF(mac_rx_pfc_pri0_pkt_num),
+    HCLGE_MAC_STATS_FIELD_OFF(mac_rx_pfc_pri1_pkt_num),
+    HCLGE_MAC_STATS_FIELD_OFF(mac_rx_pfc_pri2_pkt_num),
+    HCLGE_MAC_STATS_FIELD_OFF(mac_rx_pfc_pri3_pkt_num),
+    HCLGE_MAC_STATS_FIELD_OFF(mac_rx_pfc_pri4_pkt_num),
+    HCLGE_MAC_STATS_FIELD_OFF(mac_rx_pfc_pri5_pkt_num),
+    HCLGE_MAC_STATS_FIELD_OFF(mac_rx_pfc_pri6_pkt_num),
+    HCLGE_MAC_STATS_FIELD_OFF(mac_rx_pfc_pri7_pkt_num)
+};
+
+static void hclge_pfc_stats_get(struct hclge_dev *hdev, bool tx, u64 *stats)
 {
-    struct hclge_desc desc[HCLGE_TM_PFC_PKT_GET_CMD_NUM];
-    int ret, i, j;
+    const u16 *offset;
+    int i;
 
-    if (!(opcode == HCLGE_OPC_QUERY_PFC_RX_PKT_CNT ||
-          opcode == HCLGE_OPC_QUERY_PFC_TX_PKT_CNT))
-        return -EINVAL;
+    if (tx)
+        offset = hclge_pfc_tx_stats_offset;
+    else
+        offset = hclge_pfc_rx_stats_offset;
 
-    for (i = 0; i < HCLGE_TM_PFC_PKT_GET_CMD_NUM - 1; i++) {
-        hclge_cmd_setup_basic_desc(&desc[i], opcode, true);
-        desc[i].flag |= cpu_to_le16(HCLGE_CMD_FLAG_NEXT);
-    }
-
-    hclge_cmd_setup_basic_desc(&desc[i], opcode, true);
-
-    ret = hclge_cmd_send(&hdev->hw, desc, HCLGE_TM_PFC_PKT_GET_CMD_NUM);
-    if (ret)
-        return ret;
-
-    for (i = 0; i < HCLGE_TM_PFC_PKT_GET_CMD_NUM; i++) {
-        struct hclge_pfc_stats_cmd *pfc_stats =
-                (struct hclge_pfc_stats_cmd *)desc[i].data;
-
-        for (j = 0; j < HCLGE_TM_PFC_NUM_GET_PER_CMD; j++) {
-            u32 index = i * HCLGE_TM_PFC_PKT_GET_CMD_NUM + j;
-
-            if (index < HCLGE_MAX_TC_NUM)
-                stats[index] =
-                    le64_to_cpu(pfc_stats->pkt_num[j]);
-        }
-    }
-    return 0;
+    for (i = 0; i < HCLGE_MAX_TC_NUM; i++)
+        stats[i] = HCLGE_STATS_READ(&hdev->mac_stats, offset[i]);
 }
 
-int hclge_pfc_rx_stats_get(struct hclge_dev *hdev, u64 *stats)
+void hclge_pfc_rx_stats_get(struct hclge_dev *hdev, u64 *stats)
 {
-    return hclge_pfc_stats_get(hdev, HCLGE_OPC_QUERY_PFC_RX_PKT_CNT, stats);
+    hclge_pfc_stats_get(hdev, false, stats);
 }
 
-int hclge_pfc_tx_stats_get(struct hclge_dev *hdev, u64 *stats)
+void hclge_pfc_tx_stats_get(struct hclge_dev *hdev, u64 *stats)
 {
-    return hclge_pfc_stats_get(hdev, HCLGE_OPC_QUERY_PFC_TX_PKT_CNT, stats);
+    hclge_pfc_stats_get(hdev, true, stats);
 }
 
 int hclge_mac_pause_en_cfg(struct hclge_dev *hdev, bool tx, bool rx)
···
 static int hclge_tm_ets_tc_dwrr_cfg(struct hclge_dev *hdev)
 {
-#define DEFAULT_TC_WEIGHT    1
 #define DEFAULT_TC_OFFSET    14
 
     struct hclge_ets_tc_weight_cmd *ets_weight;
···
     for (i = 0; i < HNAE3_MAX_TC; i++) {
         struct hclge_pg_info *pg_info;
 
-        ets_weight->tc_weight[i] = DEFAULT_TC_WEIGHT;
-
-        if (!(hdev->hw_tc_map & BIT(i)))
-            continue;
-
-        pg_info =
-            &hdev->tm_info.pg_info[hdev->tm_info.tc_info[i].pgid];
+        pg_info = &hdev->tm_info.pg_info[hdev->tm_info.tc_info[i].pgid];
         ets_weight->tc_weight[i] = pg_info->tc_dwrr[i];
     }
+2 -2
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.h
···
 int hclge_tm_init_hw(struct hclge_dev *hdev, bool init);
 int hclge_mac_pause_en_cfg(struct hclge_dev *hdev, bool tx, bool rx);
 int hclge_pause_addr_cfg(struct hclge_dev *hdev, const u8 *mac_addr);
-int hclge_pfc_rx_stats_get(struct hclge_dev *hdev, u64 *stats);
-int hclge_pfc_tx_stats_get(struct hclge_dev *hdev, u64 *stats);
+void hclge_pfc_rx_stats_get(struct hclge_dev *hdev, u64 *stats);
+void hclge_pfc_tx_stats_get(struct hclge_dev *hdev, u64 *stats);
 int hclge_tm_qs_shaper_cfg(struct hclge_vport *vport, int max_tx_rate);
 int hclge_tm_get_qset_num(struct hclge_dev *hdev, u16 *qset_num);
 int hclge_tm_get_pri_num(struct hclge_dev *hdev, u8 *pri_num);
+32
drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c
···
     return ret;
 }
 
+static int hclgevf_firmware_compat_config(struct hclgevf_dev *hdev, bool en)
+{
+    struct hclgevf_firmware_compat_cmd *req;
+    struct hclgevf_desc desc;
+    u32 compat = 0;
+
+    hclgevf_cmd_setup_basic_desc(&desc, HCLGEVF_OPC_IMP_COMPAT_CFG, false);
+
+    if (en) {
+        req = (struct hclgevf_firmware_compat_cmd *)desc.data;
+
+        hnae3_set_bit(compat, HCLGEVF_SYNC_RX_RING_HEAD_EN_B, 1);
+
+        req->compat = cpu_to_le32(compat);
+    }
+
+    return hclgevf_cmd_send(&hdev->hw, &desc, 1);
+}
+
 int hclgevf_cmd_init(struct hclgevf_dev *hdev)
 {
+    struct hnae3_ae_dev *ae_dev = pci_get_drvdata(hdev->pdev);
     int ret;
 
     spin_lock_bh(&hdev->hw.cmq.csq.lock);
···
     hnae3_get_field(hdev->fw_version, HNAE3_FW_VERSION_BYTE0_MASK,
                     HNAE3_FW_VERSION_BYTE0_SHIFT));
 
+    if (ae_dev->dev_version >= HNAE3_DEVICE_VERSION_V3) {
+        /* ask the firmware to enable some features, driver can work
+         * without it.
+         */
+        ret = hclgevf_firmware_compat_config(hdev, true);
+        if (ret)
+            dev_warn(&hdev->pdev->dev,
+                     "Firmware compatible features not enabled(%d).\n",
+                     ret);
+    }
+
     return 0;
 
 err_cmd_init:
···
 void hclgevf_cmd_uninit(struct hclgevf_dev *hdev)
 {
+    hclgevf_firmware_compat_config(hdev, false);
     set_bit(HCLGEVF_STATE_CMD_DISABLE, &hdev->state);
     /* wait to ensure that the firmware completes the possible left
      * over commands.
+9
drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.h
···
 struct hclgevf_hw;
 struct hclgevf_dev;
 
+#define HCLGEVF_SYNC_RX_RING_HEAD_EN_B    4
+struct hclgevf_firmware_compat_cmd {
+    __le32 compat;
+    u8 rsv[20];
+};
+
 struct hclgevf_desc {
     __le16 opcode;
     __le16 flag;
···
     HCLGEVF_OPC_RSS_TC_MODE = 0x0D08,
     /* Mailbox cmd */
     HCLGEVF_OPC_MBX_VF_TO_PF = 0x2001,
+
+    /* IMP stats command */
+    HCLGEVF_OPC_IMP_COMPAT_CFG = 0x701A,
 };
 
 #define HCLGEVF_TQP_REG_OFFSET    0x80000
+6 -4
drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
···
         hdev->num_msi_left == 0)
         return -EINVAL;
 
-    roce->rinfo.base_vector = hdev->roce_base_vector;
+    roce->rinfo.base_vector = hdev->roce_base_msix_offset;
 
     roce->rinfo.netdev = nic->kinfo.netdev;
     roce->rinfo.roce_io_base = hdev->hw.io_base;
···
     hdev->num_msi = vectors;
     hdev->num_msi_left = vectors;
 
-    hdev->base_msi_vector = pdev->irq;
-    hdev->roce_base_vector = pdev->irq + hdev->roce_base_msix_offset;
-
     hdev->vector_status = devm_kcalloc(&pdev->dev, hdev->num_msi,
                                        sizeof(u16), GFP_KERNEL);
     if (!hdev->vector_status) {
···
     /* un-init roce, if it exists */
     if (hdev->roce_client) {
+        while (test_bit(HCLGEVF_STATE_RST_HANDLING, &hdev->state))
+            msleep(HCLGEVF_WAIT_RESET_DONE);
         clear_bit(HCLGEVF_STATE_ROCE_REGISTERED, &hdev->state);
+
         hdev->roce_client->ops->uninit_instance(&hdev->roce, 0);
         hdev->roce_client = NULL;
         hdev->roce.client = NULL;
···
     /* un-init nic/unic, if this was not called by roce client */
     if (client->ops->uninit_instance && hdev->nic_client &&
         client->type != HNAE3_CLIENT_ROCE) {
+        while (test_bit(HCLGEVF_STATE_RST_HANDLING, &hdev->state))
+            msleep(HCLGEVF_WAIT_RESET_DONE);
         clear_bit(HCLGEVF_STATE_NIC_REGISTERED, &hdev->state);
 
         client->ops->uninit_instance(&hdev->nic, 0);
+2 -2
drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h
···
 #define HCLGEVF_VF_RST_ING        0x07008
 #define HCLGEVF_VF_RST_ING_BIT    BIT(16)
 
+#define HCLGEVF_WAIT_RESET_DONE    100
+
 #define HCLGEVF_RSS_IND_TBL_SIZE    512
 #define HCLGEVF_RSS_SET_BITMAP_MSK    0xffff
 #define HCLGEVF_RSS_KEY_SIZE    40
···
     u16 num_nic_msix;    /* Num of nic vectors for this VF */
     u16 num_roce_msix;    /* Num of roce vectors for this VF */
     u16 roce_base_msix_offset;
-    int roce_base_vector;
-    u32 base_msi_vector;
     u16 *vector_status;
     int *vector_irq;
 
+1 -4
drivers/net/ethernet/intel/ice/ice.h
···
 #define ice_for_each_chnl_tc(i)    \
     for ((i) = ICE_CHNL_START_TC; (i) < ICE_CHNL_MAX_TC; (i)++)
 
-#define ICE_UCAST_PROMISC_BITS (ICE_PROMISC_UCAST_TX | ICE_PROMISC_MCAST_TX | \
-                                ICE_PROMISC_UCAST_RX | ICE_PROMISC_MCAST_RX)
+#define ICE_UCAST_PROMISC_BITS (ICE_PROMISC_UCAST_TX | ICE_PROMISC_UCAST_RX)
 
 #define ICE_UCAST_VLAN_PROMISC_BITS (ICE_PROMISC_UCAST_TX | \
-                                     ICE_PROMISC_MCAST_TX | \
                                      ICE_PROMISC_UCAST_RX | \
-                                     ICE_PROMISC_MCAST_RX | \
                                      ICE_PROMISC_VLAN_TX | \
                                      ICE_PROMISC_VLAN_RX)
 
+1 -1
drivers/net/ethernet/intel/ice/ice_base.c
···
     } else if (status == ICE_ERR_DOES_NOT_EXIST) {
         dev_dbg(ice_pf_to_dev(vsi->back), "LAN Tx queues do not exist, nothing to disable\n");
     } else if (status) {
-        dev_err(ice_pf_to_dev(vsi->back), "Failed to disable LAN Tx queues, error: %s\n",
+        dev_dbg(ice_pf_to_dev(vsi->back), "Failed to disable LAN Tx queues, error: %s\n",
             ice_stat_str(status));
         return -ENODEV;
     }
+74 -65
drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
···
     /* Avoid wait time by stopping all VFs at the same time */
     ice_for_each_vf(pf, i)
-        if (test_bit(ICE_VF_STATE_QS_ENA, pf->vf[i].vf_states))
-            ice_dis_vf_qs(&pf->vf[i]);
+        ice_dis_vf_qs(&pf->vf[i]);
 
     tmp = pf->num_alloc_vfs;
     pf->num_qps_per_vf = 0;
···
         set_bit(ICE_VF_STATE_DIS, pf->vf[i].vf_states);
         ice_free_vf_res(&pf->vf[i]);
+
+        mutex_destroy(&pf->vf[i].cfg_lock);
     }
 
     if (ice_sriov_free_msix_res(pf))
···
     vsi = ice_get_vf_vsi(vf);
 
-    if (test_bit(ICE_VF_STATE_QS_ENA, vf->vf_states))
-        ice_dis_vf_qs(vf);
+    ice_dis_vf_qs(vf);
 
     /* Call Disable LAN Tx queue AQ whether or not queues are
      * enabled. This is needed for successful completion of VFR.
···
         ice_vf_fdir_init(vf);
 
         ice_vc_set_dflt_vf_ops(&vf->vc_ops);
+
+        mutex_init(&vf->cfg_lock);
     }
 }
···
 static int ice_vc_cfg_promiscuous_mode_msg(struct ice_vf *vf, u8 *msg)
 {
     enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS;
+    enum ice_status mcast_status = 0, ucast_status = 0;
     bool rm_promisc, alluni = false, allmulti = false;
     struct virtchnl_promisc_info *info =
         (struct virtchnl_promisc_info *)msg;
···
     rm_promisc = !allmulti && !alluni;
 
     if (vsi->num_vlan || vf->port_vlan_info) {
-        struct ice_vsi *pf_vsi = ice_get_main_vsi(pf);
-        struct net_device *pf_netdev;
-
-        if (!pf_vsi) {
-            v_ret = VIRTCHNL_STATUS_ERR_PARAM;
-            goto error_param;
-        }
-
-        pf_netdev = pf_vsi->netdev;
-
-        ret = ice_set_vf_spoofchk(pf_netdev, vf->vf_id, rm_promisc);
-        if (ret) {
-            dev_err(dev, "Failed to update spoofchk to %s for VF %d VSI %d when setting promiscuous mode\n",
-                rm_promisc ? "ON" : "OFF", vf->vf_id,
-                vsi->vsi_num);
-            v_ret = VIRTCHNL_STATUS_ERR_PARAM;
-        }
-
         if (rm_promisc)
             ret = ice_cfg_vlan_pruning(vsi, true);
         else
···
             goto error_param;
         }
     } else {
-        enum ice_status status;
-        u8 promisc_m;
+        u8 mcast_m, ucast_m;
 
-        if (alluni) {
-            if (vf->port_vlan_info || vsi->num_vlan)
-                promisc_m = ICE_UCAST_VLAN_PROMISC_BITS;
-            else
-                promisc_m = ICE_UCAST_PROMISC_BITS;
-        } else if (allmulti) {
-            if (vf->port_vlan_info || vsi->num_vlan)
-                promisc_m = ICE_MCAST_VLAN_PROMISC_BITS;
-            else
-                promisc_m = ICE_MCAST_PROMISC_BITS;
+        if (vf->port_vlan_info || vsi->num_vlan > 1) {
+            mcast_m = ICE_MCAST_VLAN_PROMISC_BITS;
+            ucast_m = ICE_UCAST_VLAN_PROMISC_BITS;
         } else {
-            if (vf->port_vlan_info || vsi->num_vlan)
-                promisc_m = ICE_UCAST_VLAN_PROMISC_BITS;
-            else
-                promisc_m = ICE_UCAST_PROMISC_BITS;
+            mcast_m = ICE_MCAST_PROMISC_BITS;
+            ucast_m = ICE_UCAST_PROMISC_BITS;
         }
 
-        /* Configure multicast/unicast with or without VLAN promiscuous
-         * mode
-         */
-        status = ice_vf_set_vsi_promisc(vf, vsi, promisc_m, rm_promisc);
-        if (status) {
-            dev_err(dev, "%sable Tx/Rx filter promiscuous mode on VF-%d failed, error: %s\n",
-                rm_promisc ? "dis" : "en", vf->vf_id,
-                ice_stat_str(status));
-            v_ret = ice_err_to_virt_err(status);
-            goto error_param;
-        } else {
-            dev_dbg(dev, "%sable Tx/Rx filter promiscuous mode on VF-%d succeeded\n",
-                rm_promisc ? "dis" : "en", vf->vf_id);
+        ucast_status = ice_vf_set_vsi_promisc(vf, vsi, ucast_m,
+                                              !alluni);
+        if (ucast_status) {
+            dev_err(dev, "%sable Tx/Rx filter promiscuous mode on VF-%d failed\n",
+                alluni ? "en" : "dis", vf->vf_id);
+            v_ret = ice_err_to_virt_err(ucast_status);
+        }
+
+        mcast_status = ice_vf_set_vsi_promisc(vf, vsi, mcast_m,
+                                              !allmulti);
+        if (mcast_status) {
+            dev_err(dev, "%sable Tx/Rx filter promiscuous mode on VF-%d failed\n",
+                allmulti ? "en" : "dis", vf->vf_id);
+            v_ret = ice_err_to_virt_err(mcast_status);
         }
     }
 
-    if (allmulti &&
-        !test_and_set_bit(ICE_VF_STATE_MC_PROMISC, vf->vf_states))
-        dev_info(dev, "VF %u successfully set multicast promiscuous mode\n", vf->vf_id);
-    else if (!allmulti && test_and_clear_bit(ICE_VF_STATE_MC_PROMISC, vf->vf_states))
-        dev_info(dev, "VF %u successfully unset multicast promiscuous mode\n", vf->vf_id);
+    if (!mcast_status) {
+        if (allmulti &&
+            !test_and_set_bit(ICE_VF_STATE_MC_PROMISC, vf->vf_states))
+            dev_info(dev, "VF %u successfully set multicast promiscuous mode\n",
+                     vf->vf_id);
+        else if (!allmulti && test_and_clear_bit(ICE_VF_STATE_MC_PROMISC, vf->vf_states))
+            dev_info(dev, "VF %u successfully unset multicast promiscuous mode\n",
+                     vf->vf_id);
+    }
 
-    if (alluni && !test_and_set_bit(ICE_VF_STATE_UC_PROMISC, vf->vf_states))
-        dev_info(dev, "VF %u successfully set unicast promiscuous mode\n", vf->vf_id);
-    else if (!alluni && test_and_clear_bit(ICE_VF_STATE_UC_PROMISC, vf->vf_states))
-        dev_info(dev, "VF %u successfully unset unicast promiscuous mode\n", vf->vf_id);
+    if (!ucast_status) {
+        if (alluni && !test_and_set_bit(ICE_VF_STATE_UC_PROMISC, vf->vf_states))
+            dev_info(dev, "VF %u successfully set unicast promiscuous mode\n",
+                     vf->vf_id);
+        else if (!alluni && test_and_clear_bit(ICE_VF_STATE_UC_PROMISC, vf->vf_states))
+            dev_info(dev, "VF %u successfully unset unicast promiscuous mode\n",
+                     vf->vf_id);
+    }
 
 error_param:
     return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE,
···
     struct device *dev = ice_pf_to_dev(vf->pf);
     u8 *mac_addr = vc_ether_addr->addr;
     enum ice_status status;
+    int ret = 0;
 
     /* device MAC already added */
     if (ether_addr_equal(mac_addr, vf->dev_lan_addr.addr))
···
     status = ice_fltr_add_mac(vsi, mac_addr, ICE_FWD_TO_VSI);
     if (status == ICE_ERR_ALREADY_EXISTS) {
-        dev_err(dev, "MAC %pM already exists for VF %d\n", mac_addr,
+        dev_dbg(dev, "MAC %pM already exists for VF %d\n", mac_addr,
             vf->vf_id);
-        return -EEXIST;
+        /* don't return since we might need to update
+         * the primary MAC in ice_vfhw_mac_add() below
+         */
+        ret = -EEXIST;
     } else if (status) {
         dev_err(dev, "Failed to add MAC %pM for VF %d\n, error %s\n",
             mac_addr, vf->vf_id, ice_stat_str(status));
         return -EIO;
+    } else {
+        vf->num_mac++;
     }
 
     ice_vfhw_mac_add(vf, vc_ether_addr);
 
-    vf->num_mac++;
-
-    return 0;
+    return ret;
 }
 
 /**
···
         return 0;
     }
 
+    mutex_lock(&vf->cfg_lock);
+
     vf->port_vlan_info = vlanprio;
 
     if (vf->port_vlan_info)
···
         dev_info(dev, "Clearing port VLAN on VF %d\n", vf_id);
 
     ice_vc_reset_vf(vf);
+    mutex_unlock(&vf->cfg_lock);
 
     return 0;
 }
···
         return;
     }
 
+    /* VF is being configured in another context that triggers a VFR, so no
+     * need to process this message
+     */
+    if (!mutex_trylock(&vf->cfg_lock)) {
+        dev_info(dev, "VF %u is being configured in another context that will trigger a VFR, so there is no need to handle this message\n",
+                 vf->vf_id);
+        return;
+    }
+
     switch (v_opcode) {
     case VIRTCHNL_OP_VERSION:
         err = ops->get_ver_msg(vf, msg);
···
         dev_info(dev, "PF failed to honor VF %d, opcode %d, error %d\n",
                  vf_id, v_opcode, err);
     }
+
+    mutex_unlock(&vf->cfg_lock);
 }
 
 /**
···
         return -EINVAL;
     }
 
+    mutex_lock(&vf->cfg_lock);
+
     /* VF is notified of its new MAC via the PF's response to the
      * VIRTCHNL_OP_GET_VF_RESOURCES message after the VF has been reset
      */
···
     }
 
     ice_vc_reset_vf(vf);
+    mutex_unlock(&vf->cfg_lock);
     return 0;
 }
···
     if (trusted == vf->trusted)
         return 0;
 
+    mutex_lock(&vf->cfg_lock);
+
     vf->trusted = trusted;
     ice_vc_reset_vf(vf);
     dev_info(ice_pf_to_dev(pf), "VF %u is now %strusted\n",
              vf_id, trusted ? "" : "un");
+
+    mutex_unlock(&vf->cfg_lock);
 
     return 0;
 }
+5
drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h
···
 struct ice_vf {
     struct ice_pf *pf;
 
+    /* Used during virtchnl message handling and NDO ops against the VF
+     * that will trigger a VFR
+     */
+    struct mutex cfg_lock;
+
     u16 vf_id;        /* VF ID in the PF space */
     u16 lan_vsi_idx;    /* index into PF struct */
     u16 ctrl_vsi_idx;
+1 -1
drivers/net/ethernet/lantiq_etop.c
···
     /* enable crc generation */
     ltq_etop_w32(PPE32_CGEN, LQ_PPE32_ENET_MAC_CFG);
 
-    ltq_dma_init_port(DMA_PORT_ETOP, priv->tx_burst_len, rx_burst_len);
+    ltq_dma_init_port(DMA_PORT_ETOP, priv->tx_burst_len, priv->rx_burst_len);
 
     for (i = 0; i < MAX_DMA_CHAN; i++) {
         int irq = LTQ_DMA_CH0_INT + i;
+1 -4
drivers/net/ethernet/litex/litex_liteeth.c
···
     priv->dev = &pdev->dev;
 
     irq = platform_get_irq(pdev, 0);
-    if (irq < 0) {
-        dev_err(&pdev->dev, "Failed to get IRQ %d\n", irq);
+    if (irq < 0)
         return irq;
-    }
     netdev->irq = irq;
 
     priv->base = devm_platform_ioremap_resource_byname(pdev, "mac");
···
     struct net_device *netdev = platform_get_drvdata(pdev);
 
     unregister_netdev(netdev);
-    free_netdev(netdev);
 
     return 0;
 }
+20 -18
drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
···
     mvpp22_gop_fca_enable_periodic(port, true);
 }
 
-static int mvpp22_gop_init(struct mvpp2_port *port)
+static int mvpp22_gop_init(struct mvpp2_port *port, phy_interface_t interface)
 {
     struct mvpp2 *priv = port->priv;
     u32 val;
···
     if (!priv->sysctrl_base)
         return 0;
 
-    switch (port->phy_interface) {
+    switch (interface) {
     case PHY_INTERFACE_MODE_RGMII:
     case PHY_INTERFACE_MODE_RGMII_ID:
     case PHY_INTERFACE_MODE_RGMII_RXID:
···
  * lanes by the physical layer. This is why configurations like
  * "PPv2 (2500BaseX) - COMPHY (2500SGMII)" are valid.
  */
-static int mvpp22_comphy_init(struct mvpp2_port *port)
+static int mvpp22_comphy_init(struct mvpp2_port *port,
+                              phy_interface_t interface)
 {
     int ret;
 
     if (!port->comphy)
         return 0;
 
-    ret = phy_set_mode_ext(port->comphy, PHY_MODE_ETHERNET,
-                           port->phy_interface);
+    ret = phy_set_mode_ext(port->comphy, PHY_MODE_ETHERNET, interface);
     if (ret)
         return ret;
···
     writel(val & ~MVPP22_XPCS_CFG0_RESET_DIS, xpcs + MVPP22_XPCS_CFG0);
 }
 
-static void mvpp22_pcs_reset_deassert(struct mvpp2_port *port)
+static void mvpp22_pcs_reset_deassert(struct mvpp2_port *port,
+                                      phy_interface_t interface)
 {
     struct mvpp2 *priv = port->priv;
     void __iomem *mpcs, *xpcs;
···
     mpcs = priv->iface_base + MVPP22_MPCS_BASE(port->gop_id);
     xpcs = priv->iface_base + MVPP22_XPCS_BASE(port->gop_id);
 
-    switch (port->phy_interface) {
+    switch (interface) {
     case PHY_INTERFACE_MODE_10GBASER:
         val = readl(mpcs + MVPP22_MPCS_CLK_RESET);
         val |= MAC_CLK_RESET_MAC | MAC_CLK_RESET_SD_RX |
···
     return rx_done;
 }
 
-static void mvpp22_mode_reconfigure(struct mvpp2_port *port)
+static void mvpp22_mode_reconfigure(struct mvpp2_port *port,
+                                    phy_interface_t interface)
 {
     u32 ctrl3;
 
···
     mvpp22_pcs_reset_assert(port);
 
     /* comphy reconfiguration */
-    mvpp22_comphy_init(port);
+    mvpp22_comphy_init(port, interface);
 
     /* gop reconfiguration */
-    mvpp22_gop_init(port);
+    mvpp22_gop_init(port, interface);
 
-    mvpp22_pcs_reset_deassert(port);
+    mvpp22_pcs_reset_deassert(port, interface);
 
     if (mvpp2_port_supports_xlg(port)) {
         ctrl3 = readl(port->base + MVPP22_XLG_CTRL3_REG);
         ctrl3 &= ~MVPP22_XLG_CTRL3_MACMODESELECT_MASK;
 
-        if (mvpp2_is_xlg(port->phy_interface))
+        if (mvpp2_is_xlg(interface))
             ctrl3 |= MVPP22_XLG_CTRL3_MACMODESELECT_10G;
         else
             ctrl3 |= MVPP22_XLG_CTRL3_MACMODESELECT_GMAC;
···
         writel(ctrl3, port->base + MVPP22_XLG_CTRL3_REG);
     }
 
-    if (mvpp2_port_supports_xlg(port) && mvpp2_is_xlg(port->phy_interface))
+    if (mvpp2_port_supports_xlg(port) && mvpp2_is_xlg(interface))
         mvpp2_xlg_max_rx_size_set(port);
     else
         mvpp2_gmac_max_rx_size_set(port);
···
     mvpp2_interrupts_enable(port);
 
     if (port->priv->hw_version >= MVPP22)
-        mvpp22_mode_reconfigure(port);
+        mvpp22_mode_reconfigure(port, port->phy_interface);
 
     if (port->phylink) {
         phylink_start(port->phylink);
···
         mvpp22_gop_mask_irq(port);
 
         phy_power_off(port->comphy);
+
+        /* Reconfigure the serdes lanes */
+        mvpp22_mode_reconfigure(port, interface);
     }
 }
···
     if (port->priv->hw_version >= MVPP22 &&
         port->phy_interface != interface) {
         port->phy_interface = interface;
-
-        /* Reconfigure the serdes lanes */
-        mvpp22_mode_reconfigure(port);
 
         /* Unmask interrupts */
         mvpp22_gop_unmask_irq(port);
···
      * driver does this, we can remove this code.
      */
     if (port->comphy) {
-        err = mvpp22_comphy_init(port);
+        err = mvpp22_comphy_init(port, port->phy_interface);
         if (err == 0)
             phy_power_off(port->comphy);
     }
+1
drivers/net/ethernet/marvell/octeontx2/Kconfig
···
 config OCTEONTX2_PF
     tristate "Marvell OcteonTX2 NIC Physical Function driver"
     select OCTEONTX2_MBOX
+    select NET_DEVLINK
     depends on (64BIT && COMPILE_TEST) || ARM64
     depends on PCI
     depends on PTP_1588_CLOCK_OPTIONAL
+1 -3
drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
···
         bmap = mcam->bmap_reverse;
         start = mcam->bmap_entries - start;
         end = mcam->bmap_entries - end;
-        index = start;
-        start = end;
-        end = index;
+        swap(start, end);
     } else {
         bmap = mcam->bmap;
     }
+1 -1
drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
···
     .ndo_set_features = otx2vf_set_features,
     .ndo_get_stats64 = otx2_get_stats64,
     .ndo_tx_timeout = otx2_tx_timeout,
-    .ndo_do_ioctl = otx2_ioctl,
+    .ndo_eth_ioctl = otx2_ioctl,
 };
 
 static int otx2_wq_init(struct otx2_nic *vf)
+2 -1
drivers/net/ethernet/marvell/prestera/prestera_ethtool.c
···
 {
     struct prestera_port_phy_state *state = &port->state_phy;
 
-    if (prestera_hw_port_phy_mode_get(port, &state->mdix, NULL, NULL, NULL)) {
+    if (prestera_hw_port_phy_mode_get(port,
+                                      &state->mdix, NULL, NULL, NULL)) {
         netdev_warn(port->dev, "MDIX params get failed");
         state->mdix = ETH_TP_MDI_INVALID;
     }
+81 -63
drivers/net/ethernet/marvell/prestera/prestera_hw.c
···
     struct prestera_msg_ret ret;
 };
 
-union prestera_msg_switch_param {
-    u8 mac[ETH_ALEN];
-    __le32 ageing_timeout_ms;
-} __packed;
-
 struct prestera_msg_switch_attr_req {
     struct prestera_msg_cmd cmd;
     __le32 attr;
-    union prestera_msg_switch_param param;
+    union {
+        __le32 ageing_timeout_ms;
+        struct {
+            u8 mac[ETH_ALEN];
+            u8 __pad[2];
+        };
+    } param;
 };
 
 struct prestera_msg_switch_init_resp {
     struct prestera_msg_ret ret;
     __le32 port_count;
     __le32 mtu_max;
-    u8 switch_id;
-    u8 lag_max;
-    u8 lag_member_max;
     __le32 size_tbl_router_nexthop;
-} __packed __aligned(4);
+    u8 switch_id;
+    u8 lag_max;
+    u8 lag_member_max;
+};
 
 struct prestera_msg_event_port_param {
     union {
         struct {
-            u8 oper;
             __le32 mode;
             __le32 speed;
+            u8 oper;
             u8 duplex;
             u8 fc;
             u8 fec;
-        } __packed mac;
+        } mac;
         struct {
-            u8 mdix;
             __le64 lmode_bmap;
+            u8 mdix;
             u8 fc;
-        } __packed phy;
-    } __packed;
-} __packed __aligned(4);
+            u8 __pad[2];
+        } __packed phy; /* make sure always 12 bytes size */
+    };
+};
 
 struct prestera_msg_port_cap_param {
     __le64 link_mode;
-    u8 type;
-    u8 fec;
-    u8 fc;
-    u8 transceiver;
+    u8 type;
+    u8 fec;
+    u8 fc;
+    u8 transceiver;
 };
 
 struct prestera_msg_port_flood_param {
     u8 type;
     u8 enable;
+    u8 __pad[2];
 };
 
 union prestera_msg_port_param {
+    __le32 mtu;
+    __le32 speed;
+    __le32 link_mode;
     u8 admin_state;
     u8 oper_state;
-    __le32 mtu;
     u8 mac[ETH_ALEN];
     u8 accept_frm_type;
-    __le32 speed;
     u8 learning;
     u8 flood;
-    __le32 link_mode;
     u8 type;
     u8 duplex;
     u8 fec;
     u8 fc;
-
     union {
         struct {
-            u8 admin:1;
+            u8 admin;
             u8 fc;
             u8 ap_enable;
+            u8 __reserved[5];
             union {
                 struct {
                     __le32 mode;
-                    u8 inband:1;
                     __le32 speed;
-                    u8 duplex;
-                    u8 fec;
-                    u8 fec_supp;
-                } __packed reg_mode;
+                    u8 inband;
+                    u8 duplex;
+                    u8 fec;
+                    u8 fec_supp;
+                } reg_mode;
                 struct {
                     __le32 mode;
                     __le32 speed;
-                    u8 fec;
-                    u8 fec_supp;
-                } __packed ap_modes[PRESTERA_AP_PORT_MAX];
-            } __packed;
-        } __packed mac;
+                    u8 fec;
+                    u8 fec_supp;
+                    u8 __pad[2];
+                } ap_modes[PRESTERA_AP_PORT_MAX];
+            };
+        } mac;
         struct {
-            u8 admin:1;
-            u8 adv_enable;
             __le64 modes;
             __le32 mode;
+            u8 admin;
+            u8 adv_enable;
             u8 mdix;
-        } __packed phy;
-    } __packed link;
+            u8 __pad;
+        } phy;
+    } link;
 
     struct prestera_msg_port_cap_param cap;
     struct prestera_msg_port_flood_param flood_ext;
     struct prestera_msg_event_port_param link_evt;
-} __packed;
+};
 
 struct prestera_msg_port_attr_req {
     struct prestera_msg_cmd cmd;
···
     __le32 port;
     __le32 dev;
     union prestera_msg_port_param param;
-} __packed __aligned(4);
-
+};
 
 struct prestera_msg_port_attr_resp {
     struct prestera_msg_ret ret;
     union prestera_msg_port_param param;
-} __packed __aligned(4);
-
+};
 
 struct prestera_msg_port_stats_resp {
     struct prestera_msg_ret ret;
···
     __le32 hw_id;
     __le32 dev_id;
     __le16 fp_id;
+    u8 pad[2];
 };
 
 struct prestera_msg_vlan_req {
···
     __le32 port;
     __le32 dev;
     __le16 vid;
-    u8 is_member;
-    u8 is_tagged;
+    u8 is_member;
+    u8 is_tagged;
 };
 
 struct prestera_msg_fdb_req {
     struct prestera_msg_cmd cmd;
-    u8 dest_type;
+    __le32 flush_mode;
     union {
         struct {
             __le32 port;
···
         };
         __le16 lag_id;
     } dest;
-    u8 mac[ETH_ALEN];
     __le16 vid;
-    u8 dynamic;
-    __le32 flush_mode;
-} __packed __aligned(4);
+    u8 dest_type;
+    u8 dynamic;
+    u8 mac[ETH_ALEN];
+    u8 __pad[2];
+};
 
 struct prestera_msg_bridge_req {
     struct prestera_msg_cmd cmd;
     __le32 port;
     __le32 dev;
     __le16 bridge;
+    u8 pad[2];
 };
 
 struct prestera_msg_bridge_resp {
     struct prestera_msg_ret ret;
     __le16 bridge;
+    u8 pad[2];
 };
 
 struct prestera_msg_acl_action {
···
 struct prestera_msg_acl_match {
     __le32 type;
+    __le32 __reserved;
     union {
         struct {
             u8 key;
             u8 mask;
-        } __packed u8;
+        } u8;
         struct {
             __le16 key;
             __le16 mask;
···
         struct {
             u8 key[ETH_ALEN];
             u8 mask[ETH_ALEN];
-        } __packed mac;
+        } mac;
     } keymask;
 };
···
     __le32 port;
     __le32 dev;
     __le16 ruleset_id;
+    u8 pad[2];
 };
 
 struct prestera_msg_acl_ruleset_req {
     struct prestera_msg_cmd cmd;
     __le16 id;
+    u8 pad[2];
 };
 
 struct prestera_msg_acl_ruleset_resp {
     struct prestera_msg_ret ret;
     __le16 id;
+    u8 pad[2];
 };
 
 struct prestera_msg_span_req {
···
     __le32 port;
     __le32 dev;
     u8 id;
+    u8 pad[3];
 };
 
 struct prestera_msg_span_resp {
     struct prestera_msg_ret ret;
     u8 id;
+    u8 pad[3];
 };
 
 struct prestera_msg_stp_req {
···
     __le32 port;
     __le32 dev;
     __le16 vid;
-    u8 state;
+    u8 state;
+    u8 __pad;
 };
 
 struct prestera_msg_rxtx_req {
     struct prestera_msg_cmd cmd;
     u8 use_sdma;
+    u8 pad[3];
 };
 
 struct prestera_msg_rxtx_resp {
···
     __le32 port;
     __le32 dev;
     __le16 lag_id;
+    u8 pad[2];
 };
 
 struct prestera_msg_cpu_code_counter_req {
     struct prestera_msg_cmd cmd;
     u8 counter_type;
     u8 code;
+    u8 pad[2];
 };
 
 struct mvsw_msg_cpu_code_counter_ret {
···
 struct prestera_msg_event_fdb {
     struct prestera_msg_event id;
-    u8 dest_type;
+    __le32 vid;
     union {
         __le32 port_id;
         __le16 lag_id;
     } dest;
-    __le32 vid;
     union prestera_msg_event_fdb_param param;
-} __packed __aligned(4);
+    u8 dest_type;
+};
 
-static inline void prestera_hw_build_tests(void)
+static void prestera_hw_build_tests(void)
 {
     /* check requests */
     BUILD_BUG_ON(sizeof(struct prestera_msg_common_req) != 4);
     BUILD_BUG_ON(sizeof(struct prestera_msg_switch_attr_req) != 16);
-    BUILD_BUG_ON(sizeof(struct prestera_msg_port_attr_req) != 120);
+    BUILD_BUG_ON(sizeof(struct prestera_msg_port_attr_req) != 144);
     BUILD_BUG_ON(sizeof(struct prestera_msg_port_info_req) != 8);
     BUILD_BUG_ON(sizeof(struct prestera_msg_vlan_req) != 16);
     BUILD_BUG_ON(sizeof(struct prestera_msg_fdb_req) != 28);
···
     /* check responses */
     BUILD_BUG_ON(sizeof(struct prestera_msg_common_resp) != 8);
     BUILD_BUG_ON(sizeof(struct prestera_msg_switch_init_resp) != 24);
-    BUILD_BUG_ON(sizeof(struct prestera_msg_port_attr_resp) != 112);
+    BUILD_BUG_ON(sizeof(struct prestera_msg_port_attr_resp) != 136);
     BUILD_BUG_ON(sizeof(struct prestera_msg_port_stats_resp) != 248);
     BUILD_BUG_ON(sizeof(struct prestera_msg_port_info_resp) != 20);
     BUILD_BUG_ON(sizeof(struct prestera_msg_bridge_resp) != 12);
···
     if (err)
         return err;
 
-    if (__le32_to_cpu(ret->cmd.type) != PRESTERA_CMD_TYPE_ACK)
+    if (ret->cmd.type != __cpu_to_le32(PRESTERA_CMD_TYPE_ACK))
         return -EBADE;
-    if (__le32_to_cpu(ret->status) != PRESTERA_CMD_ACK_OK)
+    if (ret->status != __cpu_to_le32(PRESTERA_CMD_ACK_OK))
         return -EINVAL;
 
     return 0;
···
 int prestera_hw_port_autoneg_restart(struct prestera_port *port)
 {
     struct prestera_msg_port_attr_req req = {
-        .attr = __cpu_to_le32(PRESTERA_CMD_PORT_ATTR_PHY_AUTONEG_RESTART),
+        .attr =
+            __cpu_to_le32(PRESTERA_CMD_PORT_ATTR_PHY_AUTONEG_RESTART),
         .port = __cpu_to_le32(port->hw_id),
         .dev = __cpu_to_le32(port->dev_id),
     };
+4 -2
drivers/net/ethernet/marvell/prestera/prestera_main.c
··· 405 405 406 406 err = prestera_port_cfg_mac_write(port, &cfg_mac); 407 407 if (err) { 408 - dev_err(prestera_dev(sw), "Failed to set port(%u) mac mode\n", id); 408 + dev_err(prestera_dev(sw), 409 + "Failed to set port(%u) mac mode\n", id); 409 410 goto err_port_init; 410 411 } 411 412 ··· 419 418 false, 0, 0, 420 419 port->cfg_phy.mdix); 421 420 if (err) { 422 - dev_err(prestera_dev(sw), "Failed to set port(%u) phy mode\n", id); 421 + dev_err(prestera_dev(sw), 422 + "Failed to set port(%u) phy mode\n", id); 423 423 goto err_port_init; 424 424 } 425 425 }
+2 -1
drivers/net/ethernet/marvell/prestera/prestera_pci.c
··· 411 411 goto cmd_exit; 412 412 } 413 413 414 - memcpy_fromio(out_msg, prestera_fw_cmdq_buf(fw, qid) + in_size, ret_size); 414 + memcpy_fromio(out_msg, 415 + prestera_fw_cmdq_buf(fw, qid) + in_size, ret_size); 415 416 416 417 cmd_exit: 417 418 prestera_fw_write(fw, PRESTERA_CMDQ_REQ_CTL_REG(qid),
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/lag/port_sel.c
··· 289 289 290 290 lag_definer = kzalloc(sizeof(*lag_definer), GFP_KERNEL); 291 291 if (!lag_definer) 292 - return ERR_PTR(ENOMEM); 292 + return ERR_PTR(-ENOMEM); 293 293 294 294 match_definer_mask = kvzalloc(MLX5_FLD_SZ_BYTES(match_definer, 295 295 match_mask),
+1 -1
drivers/net/ethernet/microsoft/mana/gdma_main.c
··· 1424 1424 { 1425 1425 struct gdma_context *gc = pci_get_drvdata(pdev); 1426 1426 1427 - dev_info(&pdev->dev, "Shutdown was calledd\n"); 1427 + dev_info(&pdev->dev, "Shutdown was called\n"); 1428 1428 1429 1429 mana_remove(&gc->mana, true); 1430 1430
+2 -6
drivers/net/ethernet/sfc/falcon/efx.c
··· 817 817 efx->rxq_entries = rxq_entries; 818 818 efx->txq_entries = txq_entries; 819 819 for (i = 0; i < efx->n_channels; i++) { 820 - channel = efx->channel[i]; 821 - efx->channel[i] = other_channel[i]; 822 - other_channel[i] = channel; 820 + swap(efx->channel[i], other_channel[i]); 823 821 } 824 822 825 823 /* Restart buffer table allocation */ ··· 861 863 efx->rxq_entries = old_rxq_entries; 862 864 efx->txq_entries = old_txq_entries; 863 865 for (i = 0; i < efx->n_channels; i++) { 864 - channel = efx->channel[i]; 865 - efx->channel[i] = other_channel[i]; 866 - other_channel[i] = channel; 866 + swap(efx->channel[i], other_channel[i]); 867 867 } 868 868 goto out; 869 869 }
-2
drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
··· 786 786 goto disable; 787 787 if (qopt->num_entries >= dep) 788 788 return -EINVAL; 789 - if (!qopt->base_time) 790 - return -ERANGE; 791 789 if (!qopt->cycle_time) 792 790 return -ERANGE; 793 791
+2 -4
drivers/net/ethernet/ti/cpsw_ale.c
··· 1299 1299 if (!ale) 1300 1300 return ERR_PTR(-ENOMEM); 1301 1301 1302 - ale->p0_untag_vid_mask = 1303 - devm_kmalloc_array(params->dev, BITS_TO_LONGS(VLAN_N_VID), 1304 - sizeof(unsigned long), 1305 - GFP_KERNEL); 1302 + ale->p0_untag_vid_mask = devm_bitmap_zalloc(params->dev, VLAN_N_VID, 1303 + GFP_KERNEL); 1306 1304 if (!ale->p0_untag_vid_mask) 1307 1305 return ERR_PTR(-ENOMEM); 1308 1306
+14 -2
drivers/net/ethernet/ti/davinci_emac.c
··· 420 420 u32 int_ctrl, num_interrupts = 0; 421 421 u32 prescale = 0, addnl_dvdr = 1, coal_intvl = 0; 422 422 423 - if (!coal->rx_coalesce_usecs) 424 - return -EINVAL; 423 + if (!coal->rx_coalesce_usecs) { 424 + priv->coal_intvl = 0; 425 + 426 + switch (priv->version) { 427 + case EMAC_VERSION_2: 428 + emac_ctrl_write(EMAC_DM646X_CMINTCTRL, 0); 429 + break; 430 + default: 431 + emac_ctrl_write(EMAC_CTRL_EWINTTCNT, 0); 432 + break; 433 + } 434 + 435 + return 0; 436 + } 425 437 426 438 coal_intvl = coal->rx_coalesce_usecs; 427 439
+4 -2
drivers/net/hamradio/6pack.c
··· 672 672 del_timer_sync(&sp->tx_t); 673 673 del_timer_sync(&sp->resync_t); 674 674 675 - /* Free all 6pack frame buffers. */ 675 + unregister_netdev(sp->dev); 676 + 677 + /* Free all 6pack frame buffers after unreg. */ 676 678 kfree(sp->rbuff); 677 679 kfree(sp->xbuff); 678 680 679 - unregister_netdev(sp->dev); 681 + free_netdev(sp->dev); 680 682 } 681 683 682 684 /* Perform I/O control on an active 6pack channel. */
+5 -4
drivers/net/hamradio/mkiss.c
··· 792 792 */ 793 793 netif_stop_queue(ax->dev); 794 794 795 - /* Free all AX25 frame buffers. */ 796 - kfree(ax->rbuff); 797 - kfree(ax->xbuff); 798 - 799 795 ax->tty = NULL; 800 796 801 797 unregister_netdev(ax->dev); 798 + 799 + /* Free all AX25 frame buffers after unreg. */ 800 + kfree(ax->rbuff); 801 + kfree(ax->xbuff); 802 + 802 803 free_netdev(ax->dev); 803 804 } 804 805
+43 -1
drivers/net/phy/microchip_t1.c
··· 28 28 #define LAN87XX_MASK_LINK_UP (0x0004) 29 29 #define LAN87XX_MASK_LINK_DOWN (0x0002) 30 30 31 + /* MISC Control 1 Register */ 32 + #define LAN87XX_CTRL_1 (0x11) 33 + #define LAN87XX_MASK_RGMII_TXC_DLY_EN (0x4000) 34 + #define LAN87XX_MASK_RGMII_RXC_DLY_EN (0x2000) 35 + 31 36 /* phyaccess nested types */ 32 37 #define PHYACC_ATTR_MODE_READ 0 33 38 #define PHYACC_ATTR_MODE_WRITE 1 ··· 117 112 return rc; 118 113 } 119 114 115 + static int lan87xx_config_rgmii_delay(struct phy_device *phydev) 116 + { 117 + int rc; 118 + 119 + if (!phy_interface_is_rgmii(phydev)) 120 + return 0; 121 + 122 + rc = access_ereg(phydev, PHYACC_ATTR_MODE_READ, 123 + PHYACC_ATTR_BANK_MISC, LAN87XX_CTRL_1, 0); 124 + if (rc < 0) 125 + return rc; 126 + 127 + switch (phydev->interface) { 128 + case PHY_INTERFACE_MODE_RGMII: 129 + rc &= ~LAN87XX_MASK_RGMII_TXC_DLY_EN; 130 + rc &= ~LAN87XX_MASK_RGMII_RXC_DLY_EN; 131 + break; 132 + case PHY_INTERFACE_MODE_RGMII_ID: 133 + rc |= LAN87XX_MASK_RGMII_TXC_DLY_EN; 134 + rc |= LAN87XX_MASK_RGMII_RXC_DLY_EN; 135 + break; 136 + case PHY_INTERFACE_MODE_RGMII_RXID: 137 + rc &= ~LAN87XX_MASK_RGMII_TXC_DLY_EN; 138 + rc |= LAN87XX_MASK_RGMII_RXC_DLY_EN; 139 + break; 140 + case PHY_INTERFACE_MODE_RGMII_TXID: 141 + rc |= LAN87XX_MASK_RGMII_TXC_DLY_EN; 142 + rc &= ~LAN87XX_MASK_RGMII_RXC_DLY_EN; 143 + break; 144 + default: 145 + return 0; 146 + } 147 + 148 + return access_ereg(phydev, PHYACC_ATTR_MODE_WRITE, 149 + PHYACC_ATTR_BANK_MISC, LAN87XX_CTRL_1, rc); 150 + } 151 + 120 152 static int lan87xx_phy_init(struct phy_device *phydev) 121 153 { 122 154 static const struct access_ereg_val init[] = { ··· 227 185 return rc; 228 186 } 229 187 230 - return 0; 188 + return lan87xx_config_rgmii_delay(phydev); 231 189 } 232 190 233 191 static int lan87xx_phy_config_intr(struct phy_device *phydev)
+6 -1
drivers/net/phy/phy.c
··· 815 815 phydev->mdix_ctrl = cmd->base.eth_tp_mdix_ctrl; 816 816 817 817 /* Restart the PHY */ 818 - _phy_start_aneg(phydev); 818 + if (phy_is_started(phydev)) { 819 + phydev->state = PHY_UP; 820 + phy_trigger_machine(phydev); 821 + } else { 822 + _phy_start_aneg(phydev); 823 + } 819 824 820 825 mutex_unlock(&phydev->lock); 821 826 return 0;
+1 -1
drivers/net/sungem_phy.c
··· 409 409 * though magic-aneg shouldn't prevent this case from occurring 410 410 */ 411 411 412 - return 0; 412 + return 0; 413 413 } 414 414 415 415 static int generic_suspend(struct mii_phy* phy)
-2
drivers/net/wwan/iosm/iosm_ipc_imem_ops.c
··· 394 394 int boot_check_timeout = BOOT_CHECK_DEFAULT_TIMEOUT; 395 395 enum ipc_mem_exec_stage exec_stage; 396 396 struct ipc_mem_channel *channel; 397 - enum ipc_phase curr_phase; 398 397 int status = 0; 399 398 u32 tail = 0; 400 399 401 400 channel = ipc_imem->ipc_devlink->devlink_sio.channel; 402 - curr_phase = ipc_imem->phase; 403 401 /* Increase the total wait time to boot_check_timeout */ 404 402 do { 405 403 exec_stage = ipc_mmio_get_exec_stage(ipc_imem->mmio);
+3 -3
drivers/nfc/pn533/pn533.c
··· 2216 2216 frag = pn533_alloc_skb(dev, frag_size); 2217 2217 if (!frag) { 2218 2218 skb_queue_purge(&dev->fragment_skb); 2219 - break; 2219 + return -ENOMEM; 2220 2220 } 2221 2221 2222 2222 if (!dev->tgt_mode) { ··· 2285 2285 /* jumbo frame ? */ 2286 2286 if (skb->len > PN533_CMD_DATAEXCH_DATA_MAXLEN) { 2287 2287 rc = pn533_fill_fragment_skbs(dev, skb); 2288 - if (rc <= 0) 2288 + if (rc < 0) 2289 2289 goto error; 2290 2290 2291 2291 skb = skb_dequeue(&dev->fragment_skb); ··· 2353 2353 /* let's split in multiple chunks if size's too big */ 2354 2354 if (skb->len > PN533_CMD_DATAEXCH_DATA_MAXLEN) { 2355 2355 rc = pn533_fill_fragment_skbs(dev, skb); 2356 - if (rc <= 0) 2356 + if (rc < 0) 2357 2357 goto error; 2358 2358 2359 2359 /* get the first skb */
+3 -3
drivers/nfc/port100.c
··· 624 624 break; /* success */ 625 625 case -ECONNRESET: 626 626 case -ENOENT: 627 - nfc_err(&dev->interface->dev, 627 + nfc_dbg(&dev->interface->dev, 628 628 "The urb has been canceled (status %d)\n", urb->status); 629 629 goto sched_wq; 630 630 case -ESHUTDOWN: ··· 678 678 break; /* success */ 679 679 case -ECONNRESET: 680 680 case -ENOENT: 681 - nfc_err(&dev->interface->dev, 681 + nfc_dbg(&dev->interface->dev, 682 682 "The urb has been stopped (status %d)\n", urb->status); 683 683 goto sched_wq; 684 684 case -ESHUTDOWN: ··· 942 942 break; /* success */ 943 943 case -ECONNRESET: 944 944 case -ENOENT: 945 - nfc_err(&dev->interface->dev, 945 + nfc_dbg(&dev->interface->dev, 946 946 "The urb has been stopped (status %d)\n", urb->status); 947 947 break; 948 948 case -ESHUTDOWN:
+28 -1
fs/libfs.c
··· 448 448 } 449 449 EXPORT_SYMBOL(simple_rmdir); 450 450 451 + int simple_rename_exchange(struct inode *old_dir, struct dentry *old_dentry, 452 + struct inode *new_dir, struct dentry *new_dentry) 453 + { 454 + bool old_is_dir = d_is_dir(old_dentry); 455 + bool new_is_dir = d_is_dir(new_dentry); 456 + 457 + if (old_dir != new_dir && old_is_dir != new_is_dir) { 458 + if (old_is_dir) { 459 + drop_nlink(old_dir); 460 + inc_nlink(new_dir); 461 + } else { 462 + drop_nlink(new_dir); 463 + inc_nlink(old_dir); 464 + } 465 + } 466 + old_dir->i_ctime = old_dir->i_mtime = 467 + new_dir->i_ctime = new_dir->i_mtime = 468 + d_inode(old_dentry)->i_ctime = 469 + d_inode(new_dentry)->i_ctime = current_time(old_dir); 470 + 471 + return 0; 472 + } 473 + EXPORT_SYMBOL_GPL(simple_rename_exchange); 474 + 451 475 int simple_rename(struct user_namespace *mnt_userns, struct inode *old_dir, 452 476 struct dentry *old_dentry, struct inode *new_dir, 453 477 struct dentry *new_dentry, unsigned int flags) ··· 479 455 struct inode *inode = d_inode(old_dentry); 480 456 int they_are_dirs = d_is_dir(old_dentry); 481 457 482 - if (flags & ~RENAME_NOREPLACE) 458 + if (flags & ~(RENAME_NOREPLACE | RENAME_EXCHANGE)) 483 459 return -EINVAL; 460 + 461 + if (flags & RENAME_EXCHANGE) 462 + return simple_rename_exchange(old_dir, old_dentry, new_dir, new_dentry); 484 463 485 464 if (!simple_empty(new_dentry)) 486 465 return -ENOTEMPTY;
+6
include/linux/bpf.h
··· 484 484 aux->ctx_field_size = size; 485 485 } 486 486 487 + static inline bool bpf_pseudo_func(const struct bpf_insn *insn) 488 + { 489 + return insn->code == (BPF_LD | BPF_IMM | BPF_DW) && 490 + insn->src_reg == BPF_PSEUDO_FUNC; 491 + } 492 + 487 493 struct bpf_prog_ops { 488 494 int (*test_run)(struct bpf_prog *prog, const union bpf_attr *kattr, 489 495 union bpf_attr __user *uattr);
+1
include/linux/dsa/ocelot.h
··· 12 12 struct ocelot_skb_cb { 13 13 struct sk_buff *clone; 14 14 unsigned int ptp_class; /* valid only for clones */ 15 + u32 tstamp_lo; 15 16 u8 ptp_cmd; 16 17 u8 ts_id; 17 18 };
+3
include/linux/ethtool_netlink.h
··· 10 10 #define __ETHTOOL_LINK_MODE_MASK_NWORDS \ 11 11 DIV_ROUND_UP(__ETHTOOL_LINK_MODE_MASK_NBITS, 32) 12 12 13 + #define ETHTOOL_PAUSE_STAT_CNT (__ETHTOOL_A_PAUSE_STAT_CNT - \ 14 + ETHTOOL_A_PAUSE_STAT_TX_FRAMES) 15 + 13 16 enum ethtool_multicast_groups { 14 17 ETHNL_MCGRP_MONITOR, 15 18 };
+2
include/linux/fs.h
··· 3385 3385 extern int simple_link(struct dentry *, struct inode *, struct dentry *); 3386 3386 extern int simple_unlink(struct inode *, struct dentry *); 3387 3387 extern int simple_rmdir(struct inode *, struct dentry *); 3388 + extern int simple_rename_exchange(struct inode *old_dir, struct dentry *old_dentry, 3389 + struct inode *new_dir, struct dentry *new_dentry); 3388 3390 extern int simple_rename(struct user_namespace *, struct inode *, 3389 3391 struct dentry *, struct inode *, struct dentry *, 3390 3392 unsigned int);
+4 -2
include/linux/lsm_hook_defs.h
··· 329 329 LSM_HOOK(int, 0, tun_dev_attach_queue, void *security) 330 330 LSM_HOOK(int, 0, tun_dev_attach, struct sock *sk, void *security) 331 331 LSM_HOOK(int, 0, tun_dev_open, void *security) 332 - LSM_HOOK(int, 0, sctp_assoc_request, struct sctp_endpoint *ep, 332 + LSM_HOOK(int, 0, sctp_assoc_request, struct sctp_association *asoc, 333 333 struct sk_buff *skb) 334 334 LSM_HOOK(int, 0, sctp_bind_connect, struct sock *sk, int optname, 335 335 struct sockaddr *address, int addrlen) 336 - LSM_HOOK(void, LSM_RET_VOID, sctp_sk_clone, struct sctp_endpoint *ep, 336 + LSM_HOOK(void, LSM_RET_VOID, sctp_sk_clone, struct sctp_association *asoc, 337 337 struct sock *sk, struct sock *newsk) 338 + LSM_HOOK(void, LSM_RET_VOID, sctp_assoc_established, struct sctp_association *asoc, 339 + struct sk_buff *skb) 338 340 #endif /* CONFIG_SECURITY_NETWORK */ 339 341 340 342 #ifdef CONFIG_SECURITY_INFINIBAND
+9 -4
include/linux/lsm_hooks.h
··· 1027 1027 * Security hooks for SCTP 1028 1028 * 1029 1029 * @sctp_assoc_request: 1030 - * Passes the @ep and @chunk->skb of the association INIT packet to 1030 + * Passes the @asoc and @chunk->skb of the association INIT packet to 1031 1031 * the security module. 1032 - * @ep pointer to sctp endpoint structure. 1032 + * @asoc pointer to sctp association structure. 1033 1033 * @skb pointer to skbuff of association packet. 1034 1034 * Return 0 on success, error on failure. 1035 1035 * @sctp_bind_connect: ··· 1047 1047 * Called whenever a new socket is created by accept(2) (i.e. a TCP 1048 1048 * style socket) or when a socket is 'peeled off' e.g userspace 1049 1049 * calls sctp_peeloff(3). 1050 - * @ep pointer to current sctp endpoint structure. 1050 + * @asoc pointer to current sctp association structure. 1051 1051 * @sk pointer to current sock structure. 1052 - * @sk pointer to new sock structure. 1052 + * @newsk pointer to new sock structure. 1053 + * @sctp_assoc_established: 1054 + * Passes the @asoc and @chunk->skb of the association COOKIE_ACK packet 1055 + * to the security module. 1056 + * @asoc pointer to sctp association structure. 1057 + * @skb pointer to skbuff of association packet. 1053 1058 * 1054 1059 * Security hooks for Infiniband 1055 1060 *
+12 -5
include/linux/security.h
··· 179 179 struct xfrm_state; 180 180 struct xfrm_user_sec_ctx; 181 181 struct seq_file; 182 - struct sctp_endpoint; 182 + struct sctp_association; 183 183 184 184 #ifdef CONFIG_MMU 185 185 extern unsigned long mmap_min_addr; ··· 1425 1425 int security_tun_dev_attach_queue(void *security); 1426 1426 int security_tun_dev_attach(struct sock *sk, void *security); 1427 1427 int security_tun_dev_open(void *security); 1428 - int security_sctp_assoc_request(struct sctp_endpoint *ep, struct sk_buff *skb); 1428 + int security_sctp_assoc_request(struct sctp_association *asoc, struct sk_buff *skb); 1429 1429 int security_sctp_bind_connect(struct sock *sk, int optname, 1430 1430 struct sockaddr *address, int addrlen); 1431 - void security_sctp_sk_clone(struct sctp_endpoint *ep, struct sock *sk, 1431 + void security_sctp_sk_clone(struct sctp_association *asoc, struct sock *sk, 1432 1432 struct sock *newsk); 1433 + void security_sctp_assoc_established(struct sctp_association *asoc, 1434 + struct sk_buff *skb); 1433 1435 1434 1436 #else /* CONFIG_SECURITY_NETWORK */ 1435 1437 static inline int security_unix_stream_connect(struct sock *sock, ··· 1633 1631 return 0; 1634 1632 } 1635 1633 1636 - static inline int security_sctp_assoc_request(struct sctp_endpoint *ep, 1634 + static inline int security_sctp_assoc_request(struct sctp_association *asoc, 1637 1635 struct sk_buff *skb) 1638 1636 { 1639 1637 return 0; ··· 1646 1644 return 0; 1647 1645 } 1648 1646 1649 - static inline void security_sctp_sk_clone(struct sctp_endpoint *ep, 1647 + static inline void security_sctp_sk_clone(struct sctp_association *asoc, 1650 1648 struct sock *sk, 1651 1649 struct sock *newsk) 1650 + { 1651 + } 1652 + 1653 + static inline void security_sctp_assoc_established(struct sctp_association *asoc, 1654 + struct sk_buff *skb) 1652 1655 { 1653 1656 } 1654 1657 #endif /* CONFIG_SECURITY_NETWORK */
+34 -1
include/linux/skbuff.h
··· 454 454 * all frags to avoid possible bad checksum 455 455 */ 456 456 SKBFL_SHARED_FRAG = BIT(1), 457 + 458 + /* segment contains only zerocopy data and should not be 459 + * charged to the kernel memory. 460 + */ 461 + SKBFL_PURE_ZEROCOPY = BIT(2), 457 462 }; 458 463 459 464 #define SKBFL_ZEROCOPY_FRAG (SKBFL_ZEROCOPY_ENABLE | SKBFL_SHARED_FRAG) 465 + #define SKBFL_ALL_ZEROCOPY (SKBFL_ZEROCOPY_FRAG | SKBFL_PURE_ZEROCOPY) 460 466 461 467 /* 462 468 * The callback notifies userspace to release buffers when skb DMA is done in ··· 1470 1464 return is_zcopy ? skb_uarg(skb) : NULL; 1471 1465 } 1472 1466 1467 + static inline bool skb_zcopy_pure(const struct sk_buff *skb) 1468 + { 1469 + return skb_shinfo(skb)->flags & SKBFL_PURE_ZEROCOPY; 1470 + } 1471 + 1472 + static inline bool skb_pure_zcopy_same(const struct sk_buff *skb1, 1473 + const struct sk_buff *skb2) 1474 + { 1475 + return skb_zcopy_pure(skb1) == skb_zcopy_pure(skb2); 1476 + } 1477 + 1473 1478 static inline void net_zcopy_get(struct ubuf_info *uarg) 1474 1479 { 1475 1480 refcount_inc(&uarg->refcnt); ··· 1545 1528 if (!skb_zcopy_is_nouarg(skb)) 1546 1529 uarg->callback(skb, uarg, zerocopy_success); 1547 1530 1548 - skb_shinfo(skb)->flags &= ~SKBFL_ZEROCOPY_FRAG; 1531 + skb_shinfo(skb)->flags &= ~SKBFL_ALL_ZEROCOPY; 1549 1532 } 1550 1533 } 1551 1534 ··· 1689 1672 if (skb_cloned(skb)) 1690 1673 return pskb_expand_head(skb, 0, 0, pri); 1691 1674 1675 + return 0; 1676 + } 1677 + 1678 + /* This variant of skb_unclone() makes sure skb->truesize is not changed */ 1679 + static inline int skb_unclone_keeptruesize(struct sk_buff *skb, gfp_t pri) 1680 + { 1681 + might_sleep_if(gfpflags_allow_blocking(pri)); 1682 + 1683 + if (skb_cloned(skb)) { 1684 + unsigned int save = skb->truesize; 1685 + int res; 1686 + 1687 + res = pskb_expand_head(skb, 0, 0, pri); 1688 + skb->truesize = save; 1689 + return res; 1690 + } 1692 1691 return 0; 1693 1692 } 1694 1693
+12
include/linux/skmsg.h
··· 507 507 return !!psock->saved_data_ready; 508 508 } 509 509 510 + static inline bool sk_is_tcp(const struct sock *sk) 511 + { 512 + return sk->sk_type == SOCK_STREAM && 513 + sk->sk_protocol == IPPROTO_TCP; 514 + } 515 + 516 + static inline bool sk_is_udp(const struct sock *sk) 517 + { 518 + return sk->sk_type == SOCK_DGRAM && 519 + sk->sk_protocol == IPPROTO_UDP; 520 + } 521 + 510 522 #if IS_ENABLED(CONFIG_NET_SOCK_MSG) 511 523 512 524 #define BPF_F_STRPARSER (1UL << 1)
+3 -1
include/net/llc.h
··· 72 72 static inline 73 73 struct hlist_head *llc_sk_dev_hash(struct llc_sap *sap, int ifindex) 74 74 { 75 - return &sap->sk_dev_hash[ifindex % LLC_SK_DEV_HASH_ENTRIES]; 75 + u32 bucket = hash_32(ifindex, LLC_SK_DEV_HASH_BITS); 76 + 77 + return &sap->sk_dev_hash[bucket]; 76 78 } 77 79 78 80 static inline
+10 -10
include/net/sctp/structs.h
··· 1355 1355 reconf_enable:1; 1356 1356 1357 1357 __u8 strreset_enable; 1358 - 1359 - /* Security identifiers from incoming (INIT). These are set by 1360 - * security_sctp_assoc_request(). These will only be used by 1361 - * SCTP TCP type sockets and peeled off connections as they 1362 - * cause a new socket to be generated. security_sctp_sk_clone() 1363 - * will then plug these into the new socket. 1364 - */ 1365 - 1366 - u32 secid; 1367 - u32 peer_secid; 1368 1358 }; 1369 1359 1370 1360 /* Recover the outter endpoint structure. */ ··· 2093 2103 2094 2104 __u64 abandoned_unsent[SCTP_PR_INDEX(MAX) + 1]; 2095 2105 __u64 abandoned_sent[SCTP_PR_INDEX(MAX) + 1]; 2106 + 2107 + /* Security identifiers from incoming (INIT). These are set by 2108 + * security_sctp_assoc_request(). These will only be used by 2109 + * SCTP TCP type sockets and peeled off connections as they 2110 + * cause a new socket to be generated. security_sctp_sk_clone() 2111 + * will then plug these into the new socket. 2112 + */ 2113 + 2114 + u32 secid; 2115 + u32 peer_secid; 2096 2116 2097 2117 struct rcu_head rcu; 2098 2118 };
+19 -1
include/net/strparser.h
··· 54 54 int offset; 55 55 }; 56 56 57 + struct _strp_msg { 58 + /* Internal cb structure. struct strp_msg must be first for passing 59 + * to upper layer. 60 + */ 61 + struct strp_msg strp; 62 + int accum_len; 63 + }; 64 + 65 + struct sk_skb_cb { 66 + #define SK_SKB_CB_PRIV_LEN 20 67 + unsigned char data[SK_SKB_CB_PRIV_LEN]; 68 + struct _strp_msg strp; 69 + /* temp_reg is a temporary register used for bpf_convert_data_end_access 70 + * when dst_reg == src_reg. 71 + */ 72 + u64 temp_reg; 73 + }; 74 + 57 75 static inline struct strp_msg *strp_msg(struct sk_buff *skb) 58 76 { 59 77 return (struct strp_msg *)((void *)skb->cb + 60 - offsetof(struct qdisc_skb_cb, data)); 78 + offsetof(struct sk_skb_cb, strp)); 61 79 } 62 80 63 81 /* Structure for an attached lower socket */
+6 -2
include/net/tcp.h
··· 293 293 static inline void tcp_wmem_free_skb(struct sock *sk, struct sk_buff *skb) 294 294 { 295 295 sk_wmem_queued_add(sk, -skb->truesize); 296 - sk_mem_uncharge(sk, skb->truesize); 296 + if (!skb_zcopy_pure(skb)) 297 + sk_mem_uncharge(sk, skb->truesize); 298 + else 299 + sk_mem_uncharge(sk, SKB_TRUESIZE(skb_end_offset(skb))); 297 300 __kfree_skb(skb); 298 301 } 299 302 ··· 977 974 const struct sk_buff *from) 978 975 { 979 976 return likely(tcp_skb_can_collapse_to(to) && 980 - mptcp_skb_can_collapse(to, from)); 977 + mptcp_skb_can_collapse(to, from) && 978 + skb_pure_zcopy_same(to, from)); 981 979 } 982 980 983 981 /* Events passed to congestion control interface */
+3 -1
include/uapi/linux/ethtool_netlink.h
··· 411 411 ETHTOOL_A_PAUSE_STAT_TX_FRAMES, 412 412 ETHTOOL_A_PAUSE_STAT_RX_FRAMES, 413 413 414 - /* add new constants above here */ 414 + /* add new constants above here 415 + * adjust ETHTOOL_PAUSE_STAT_CNT if adding non-stats! 416 + */ 415 417 __ETHTOOL_A_PAUSE_STAT_CNT, 416 418 ETHTOOL_A_PAUSE_STAT_MAX = (__ETHTOOL_A_PAUSE_STAT_CNT - 1) 417 419 };
+7
kernel/bpf/core.c
··· 390 390 i = end_new; 391 391 insn = prog->insnsi + end_old; 392 392 } 393 + if (bpf_pseudo_func(insn)) { 394 + ret = bpf_adj_delta_to_imm(insn, pos, end_old, 395 + end_new, i, probe_pass); 396 + if (ret) 397 + return ret; 398 + continue; 399 + } 393 400 code = insn->code; 394 401 if ((BPF_CLASS(code) != BPF_JMP && 395 402 BPF_CLASS(code) != BPF_JMP32) ||
+20 -35
kernel/bpf/verifier.c
··· 240 240 insn->src_reg == BPF_PSEUDO_KFUNC_CALL; 241 241 } 242 242 243 - static bool bpf_pseudo_func(const struct bpf_insn *insn) 244 - { 245 - return insn->code == (BPF_LD | BPF_IMM | BPF_DW) && 246 - insn->src_reg == BPF_PSEUDO_FUNC; 247 - } 248 - 249 243 struct bpf_call_arg_meta { 250 244 struct bpf_map *map_ptr; 251 245 bool raw_mode; ··· 1954 1960 return -EPERM; 1955 1961 } 1956 1962 1957 - if (bpf_pseudo_func(insn)) { 1963 + if (bpf_pseudo_func(insn) || bpf_pseudo_call(insn)) 1958 1964 ret = add_subprog(env, i + insn->imm + 1); 1959 - if (ret >= 0) 1960 - /* remember subprog */ 1961 - insn[1].imm = ret; 1962 - } else if (bpf_pseudo_call(insn)) { 1963 - ret = add_subprog(env, i + insn->imm + 1); 1964 - } else { 1965 + else 1965 1966 ret = add_kfunc_call(env, insn->imm, insn->off); 1966 - } 1967 1967 1968 1968 if (ret < 0) 1969 1969 return ret; ··· 3076 3088 reg = &reg_state->stack[spi].spilled_ptr; 3077 3089 3078 3090 if (is_spilled_reg(&reg_state->stack[spi])) { 3079 - if (size != BPF_REG_SIZE) { 3080 - u8 scalar_size = 0; 3091 + u8 spill_size = 1; 3081 3092 3093 + for (i = BPF_REG_SIZE - 1; i > 0 && stype[i - 1] == STACK_SPILL; i--) 3094 + spill_size++; 3095 + 3096 + if (size != BPF_REG_SIZE || spill_size != BPF_REG_SIZE) { 3082 3097 if (reg->type != SCALAR_VALUE) { 3083 3098 verbose_linfo(env, env->insn_idx, "; "); 3084 3099 verbose(env, "invalid size of register fill\n"); ··· 3092 3101 if (dst_regno < 0) 3093 3102 return 0; 3094 3103 3095 - for (i = BPF_REG_SIZE; i > 0 && stype[i - 1] == STACK_SPILL; i--) 3096 - scalar_size++; 3097 - 3098 - if (!(off % BPF_REG_SIZE) && size == scalar_size) { 3104 + if (!(off % BPF_REG_SIZE) && size == spill_size) { 3099 3105 /* The earlier check_reg_arg() has decided the 3100 3106 * subreg_def for this insn. Save it first. 
3101 3107 */ ··· 3115 3127 } 3116 3128 state->regs[dst_regno].live |= REG_LIVE_WRITTEN; 3117 3129 return 0; 3118 - } 3119 - for (i = 1; i < BPF_REG_SIZE; i++) { 3120 - if (stype[(slot - i) % BPF_REG_SIZE] != STACK_SPILL) { 3121 - verbose(env, "corrupted spill memory\n"); 3122 - return -EACCES; 3123 - } 3124 3130 } 3125 3131 3126 3132 if (dst_regno >= 0) { ··· 9375 9393 9376 9394 if (insn->src_reg == BPF_PSEUDO_FUNC) { 9377 9395 struct bpf_prog_aux *aux = env->prog->aux; 9378 - u32 subprogno = insn[1].imm; 9396 + u32 subprogno = find_subprog(env, 9397 + env->insn_idx + insn->imm + 1); 9379 9398 9380 9399 if (!aux->func_info) { 9381 9400 verbose(env, "missing btf func_info\n"); ··· 12546 12563 return 0; 12547 12564 12548 12565 for (i = 0, insn = prog->insnsi; i < prog->len; i++, insn++) { 12549 - if (bpf_pseudo_func(insn)) { 12550 - env->insn_aux_data[i].call_imm = insn->imm; 12551 - /* subprog is encoded in insn[1].imm */ 12566 + if (!bpf_pseudo_func(insn) && !bpf_pseudo_call(insn)) 12552 12567 continue; 12553 - } 12554 12568 12555 - if (!bpf_pseudo_call(insn)) 12556 - continue; 12557 12569 /* Upon error here we cannot fall back to interpreter but 12558 12570 * need a hard reject of the program. Thus -EFAULT is 12559 12571 * propagated in any case. ··· 12569 12591 env->insn_aux_data[i].call_imm = insn->imm; 12570 12592 /* point imm to __bpf_call_base+1 from JITs point of view */ 12571 12593 insn->imm = 1; 12594 + if (bpf_pseudo_func(insn)) 12595 + /* jit (e.g. x86_64) may emit fewer instructions 12596 + * if it learns a u32 imm is the same as a u64 imm. 12597 + * Force a non zero here. 
12598 + */ 12599 + insn[1].imm = 1; 12572 12600 } 12573 12601 12574 12602 err = bpf_prog_alloc_jited_linfo(prog); ··· 12659 12675 insn = func[i]->insnsi; 12660 12676 for (j = 0; j < func[i]->len; j++, insn++) { 12661 12677 if (bpf_pseudo_func(insn)) { 12662 - subprog = insn[1].imm; 12678 + subprog = insn->off; 12663 12679 insn[0].imm = (u32)(long)func[subprog]->bpf_func; 12664 12680 insn[1].imm = ((u64)(long)func[subprog]->bpf_func) >> 32; 12665 12681 continue; ··· 12710 12726 for (i = 0, insn = prog->insnsi; i < prog->len; i++, insn++) { 12711 12727 if (bpf_pseudo_func(insn)) { 12712 12728 insn[0].imm = env->insn_aux_data[i].call_imm; 12713 - insn[1].imm = find_subprog(env, i + insn[0].imm + 1); 12729 + insn[1].imm = insn->off; 12730 + insn->off = 0; 12714 12731 continue; 12715 12732 } 12716 12733 if (!bpf_pseudo_call(insn))
+1 -23
mm/shmem.c
··· 2960 2960 return shmem_unlink(dir, dentry); 2961 2961 } 2962 2962 2963 - static int shmem_exchange(struct inode *old_dir, struct dentry *old_dentry, struct inode *new_dir, struct dentry *new_dentry) 2964 - { 2965 - bool old_is_dir = d_is_dir(old_dentry); 2966 - bool new_is_dir = d_is_dir(new_dentry); 2967 - 2968 - if (old_dir != new_dir && old_is_dir != new_is_dir) { 2969 - if (old_is_dir) { 2970 - drop_nlink(old_dir); 2971 - inc_nlink(new_dir); 2972 - } else { 2973 - drop_nlink(new_dir); 2974 - inc_nlink(old_dir); 2975 - } 2976 - } 2977 - old_dir->i_ctime = old_dir->i_mtime = 2978 - new_dir->i_ctime = new_dir->i_mtime = 2979 - d_inode(old_dentry)->i_ctime = 2980 - d_inode(new_dentry)->i_ctime = current_time(old_dir); 2981 - 2982 - return 0; 2983 - } 2984 - 2985 2963 static int shmem_whiteout(struct user_namespace *mnt_userns, 2986 2964 struct inode *old_dir, struct dentry *old_dentry) 2987 2965 { ··· 3005 3027 return -EINVAL; 3006 3028 3007 3029 if (flags & RENAME_EXCHANGE) 3008 - return shmem_exchange(old_dir, old_dentry, new_dir, new_dentry); 3030 + return simple_rename_exchange(old_dir, old_dentry, new_dir, new_dentry); 3009 3031 3010 3032 if (!simple_empty(new_dentry)) 3011 3033 return -ENOTEMPTY;
-3
net/8021q/vlan.c
··· 123 123 } 124 124 125 125 vlan_vid_del(real_dev, vlan->vlan_proto, vlan_id); 126 - 127 - /* Get rid of the vlan's reference to real_dev */ 128 - dev_put(real_dev); 129 126 } 130 127 131 128 int vlan_check_real_dev(struct net_device *real_dev,
+3
net/8021q/vlan_dev.c
··· 843 843 844 844 free_percpu(vlan->vlan_pcpu_stats); 845 845 vlan->vlan_pcpu_stats = NULL; 846 + 847 + /* Get rid of the vlan's reference to real_dev */ 848 + dev_put(vlan->real_dev); 846 849 } 847 850 848 851 void vlan_setup(struct net_device *dev)
+7
net/can/j1939/main.c
··· 75 75 skcb->addr.pgn = (cf->can_id >> 8) & J1939_PGN_MAX; 76 76 /* set default message type */ 77 77 skcb->addr.type = J1939_TP; 78 + 79 + if (!j1939_address_is_valid(skcb->addr.sa)) { 80 + netdev_err_once(priv->ndev, "%s: sa is broadcast address, ignoring!\n", 81 + __func__); 82 + goto done; 83 + } 84 + 78 85 if (j1939_pgn_is_pdu1(skcb->addr.pgn)) { 79 86 /* Type 1: with destination address */ 80 87 skcb->addr.da = skcb->addr.pgn;
+11
net/can/j1939/transport.c
··· 2023 2023 extd = J1939_ETP; 2024 2024 fallthrough; 2025 2025 case J1939_TP_CMD_BAM: 2026 + if (cmd == J1939_TP_CMD_BAM && !j1939_cb_is_broadcast(skcb)) { 2027 + netdev_err_once(priv->ndev, "%s: BAM to unicast (%02x), ignoring!\n", 2028 + __func__, skcb->addr.sa); 2029 + return; 2030 + } 2026 2031 fallthrough; 2027 2032 case J1939_TP_CMD_RTS: 2028 2033 if (skcb->addr.type != extd) ··· 2090 2085 break; 2091 2086 2092 2087 case J1939_ETP_CMD_ABORT: /* && J1939_TP_CMD_ABORT */ 2088 + if (j1939_cb_is_broadcast(skcb)) { 2089 + netdev_err_once(priv->ndev, "%s: abort to broadcast (%02x), ignoring!\n", 2090 + __func__, skcb->addr.sa); 2091 + return; 2092 + } 2093 + 2093 2094 if (j1939_tp_im_transmitter(skcb)) 2094 2095 j1939_xtp_rx_abort(priv, skb, true); 2095 2096
+2 -1
net/core/datagram.c
···
 		skb->truesize += truesize;
 		if (sk && sk->sk_type == SOCK_STREAM) {
 			sk_wmem_queued_add(sk, truesize);
-			sk_mem_charge(sk, truesize);
+			if (!skb_zcopy_pure(skb))
+				sk_mem_charge(sk, truesize);
 		} else {
 			refcount_add(truesize, &skb->sk->sk_wmem_alloc);
 		}
+5 -2
net/core/dev.c
···
 	might_sleep();
 	set_bit(NAPI_STATE_DISABLE, &n->state);
 
-	do {
+	for ( ; ; ) {
 		val = READ_ONCE(n->state);
 		if (val & (NAPIF_STATE_SCHED | NAPIF_STATE_NPSVC)) {
 			usleep_range(20, 200);
···
 
 		new = val | NAPIF_STATE_SCHED | NAPIF_STATE_NPSVC;
 		new &= ~(NAPIF_STATE_THREADED | NAPIF_STATE_PREFER_BUSY_POLL);
+
+		if (cmpxchg(&n->state, val, new) == val)
+			break;
+	}
-	} while (cmpxchg(&n->state, val, new) != val);
 
 	hrtimer_cancel(&n->timer);
 
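The napi_disable() fix above keeps the classic lock-free read/modify/compare-exchange shape but restructures the loop so the state is re-read before every retry. A userspace sketch of the same pattern using C11 atomics (flag names and values are illustrative, matching the NAPI bits only in spirit):

```c
#include <stdatomic.h>

#define F_SCHED    (1u << 0)
#define F_NPSVC    (1u << 1)
#define F_THREADED (1u << 2)
#define F_PREFER   (1u << 3)

/* Atomically set SCHED|NPSVC while clearing THREADED|PREFER, retrying if
 * another thread modified the word between our load and the CAS. */
static unsigned int set_disable_bits(_Atomic unsigned int *state)
{
	unsigned int val = atomic_load(state);
	unsigned int new;

	for (;;) {
		new = (val | F_SCHED | F_NPSVC) & ~(F_THREADED | F_PREFER);
		/* On failure, "val" is refreshed with the current value,
		 * so "new" is recomputed from up-to-date state. */
		if (atomic_compare_exchange_weak(state, &val, new))
			return new;
	}
}
```

The bug being fixed was of exactly the kind this shape avoids: computing `new` from a stale `val` outside the retry path.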
+1 -1
net/core/devlink.c
···
 	u8 reload_failed:1;
 	refcount_t refcount;
 	struct completion comp;
-	char priv[0] __aligned(NETDEV_ALIGN);
+	char priv[] __aligned(NETDEV_ALIGN);
 };
 
 void *devlink_priv(struct devlink *devlink)
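The devlink change replaces the old GNU zero-length array (`char priv[0]`) with a C99 flexible array member (`char priv[]`): `sizeof` the struct excludes it, and the trailing storage is sized at allocation time. A minimal userspace sketch of the idiom (names are illustrative):

```c
#include <stdlib.h>
#include <string.h>

struct box {
	int refcount;
	char priv[];	/* flexible array member, must be the last field */
};

/* Allocate the struct plus priv_size bytes of trailing private storage,
 * the same shape as devlink_alloc() sizing its priv area. */
static struct box *box_alloc(size_t priv_size)
{
	struct box *b = malloc(sizeof(*b) + priv_size);

	if (b) {
		b->refcount = 1;
		memset(b->priv, 0, priv_size);
	}
	return b;
}
```

Flexible array members also let compilers bounds-check the trailing area, which `[0]` arrays defeat.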
+56 -8
net/core/filter.c
···
 static struct bpf_insn *bpf_convert_data_end_access(const struct bpf_insn *si,
 						    struct bpf_insn *insn)
 {
-	/* si->dst_reg = skb->data */
+	int reg;
+	int temp_reg_off = offsetof(struct sk_buff, cb) +
+			   offsetof(struct sk_skb_cb, temp_reg);
+
+	if (si->src_reg == si->dst_reg) {
+		/* We need an extra register, choose and save a register. */
+		reg = BPF_REG_9;
+		if (si->src_reg == reg || si->dst_reg == reg)
+			reg--;
+		if (si->src_reg == reg || si->dst_reg == reg)
+			reg--;
+		*insn++ = BPF_STX_MEM(BPF_DW, si->src_reg, reg, temp_reg_off);
+	} else {
+		reg = si->dst_reg;
+	}
+
+	/* reg = skb->data */
 	*insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF(struct sk_buff, data),
-			      si->dst_reg, si->src_reg,
+			      reg, si->src_reg,
 			      offsetof(struct sk_buff, data));
 	/* AX = skb->len */
 	*insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF(struct sk_buff, len),
 			      BPF_REG_AX, si->src_reg,
 			      offsetof(struct sk_buff, len));
-	/* si->dst_reg = skb->data + skb->len */
-	*insn++ = BPF_ALU64_REG(BPF_ADD, si->dst_reg, BPF_REG_AX);
+	/* reg = skb->data + skb->len */
+	*insn++ = BPF_ALU64_REG(BPF_ADD, reg, BPF_REG_AX);
 	/* AX = skb->data_len */
 	*insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF(struct sk_buff, data_len),
 			      BPF_REG_AX, si->src_reg,
 			      offsetof(struct sk_buff, data_len));
-	/* si->dst_reg = skb->data + skb->len - skb->data_len */
-	*insn++ = BPF_ALU64_REG(BPF_SUB, si->dst_reg, BPF_REG_AX);
+
+	/* reg = skb->data + skb->len - skb->data_len */
+	*insn++ = BPF_ALU64_REG(BPF_SUB, reg, BPF_REG_AX);
+
+	if (si->src_reg == si->dst_reg) {
+		/* Restore the saved register */
+		*insn++ = BPF_MOV64_REG(BPF_REG_AX, si->src_reg);
+		*insn++ = BPF_MOV64_REG(si->dst_reg, reg);
+		*insn++ = BPF_LDX_MEM(BPF_DW, reg, BPF_REG_AX, temp_reg_off);
+	}
 
 	return insn;
 }
···
 				     struct bpf_prog *prog, u32 *target_size)
 {
 	struct bpf_insn *insn = insn_buf;
+	int off;
 
 	switch (si->off) {
 	case offsetof(struct __sk_buff, data_end):
 		insn = bpf_convert_data_end_access(si, insn);
 		break;
+	case offsetof(struct __sk_buff, cb[0]) ...
+	     offsetofend(struct __sk_buff, cb[4]) - 1:
+		BUILD_BUG_ON(sizeof_field(struct sk_skb_cb, data) < 20);
+		BUILD_BUG_ON((offsetof(struct sk_buff, cb) +
+			      offsetof(struct sk_skb_cb, data)) %
+			     sizeof(__u64));
+
+		prog->cb_access = 1;
+		off  = si->off;
+		off -= offsetof(struct __sk_buff, cb[0]);
+		off += offsetof(struct sk_buff, cb);
+		off += offsetof(struct sk_skb_cb, data);
+		if (type == BPF_WRITE)
+			*insn++ = BPF_STX_MEM(BPF_SIZE(si->code), si->dst_reg,
+					      si->src_reg, off);
+		else
+			*insn++ = BPF_LDX_MEM(BPF_SIZE(si->code), si->dst_reg,
+					      si->src_reg, off);
+		break;
 	default:
 		return bpf_convert_ctx_access(type, si, insn_buf, prog,
 					      target_size);
···
 		return -EINVAL;
 	if (unlikely(sk && sk_is_refcounted(sk)))
 		return -ESOCKTNOSUPPORT; /* reject non-RCU freed sockets */
-	if (unlikely(sk && sk->sk_state == TCP_ESTABLISHED))
-		return -ESOCKTNOSUPPORT; /* reject connected sockets */
+	if (unlikely(sk && sk_is_tcp(sk) && sk->sk_state != TCP_LISTEN))
+		return -ESOCKTNOSUPPORT; /* only accept TCP socket in LISTEN */
+	if (unlikely(sk && sk_is_udp(sk) && sk->sk_state != TCP_CLOSE))
+		return -ESOCKTNOSUPPORT; /* only accept UDP socket in CLOSE */
 
 	/* Check if socket is suitable for packet L3/L4 protocol */
 	if (sk && sk->sk_protocol != ctx->protocol)
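The filter.c fix above needs a scratch register when `src_reg == dst_reg`, and picks one by starting at R9 and stepping down at most twice: with only two registers to avoid, two decrements always yield a free one. That selection logic in isolation:

```c
/* Pick a register distinct from both src and dst, mirroring the choice
 * in bpf_convert_data_end_access(): start at 9, decrement while it
 * collides. Two checks suffice because only two registers can collide. */
static int pick_scratch_reg(int src, int dst)
{
	int reg = 9;

	if (src == reg || dst == reg)
		reg--;
	if (src == reg || dst == reg)
		reg--;
	return reg;
}
```

The chosen register's old value is spilled to `temp_reg` in the skb control block before use and restored afterwards, which is why the emitted sequence ends with the save/restore dance.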
+3 -14
net/core/skbuff.c
···
 void skb_split(struct sk_buff *skb, struct sk_buff *skb1, const u32 len)
 {
 	int pos = skb_headlen(skb);
+	const int zc_flags = SKBFL_SHARED_FRAG | SKBFL_PURE_ZEROCOPY;
 
-	skb_shinfo(skb1)->flags |= skb_shinfo(skb)->flags & SKBFL_SHARED_FRAG;
+	skb_shinfo(skb1)->flags |= skb_shinfo(skb)->flags & zc_flags;
 	skb_zerocopy_clone(skb1, skb, 0);
 	if (len < pos)	/* Split line is inside header. */
 		skb_split_inside_header(skb, skb1, len, pos);
···
  */
 static int skb_prepare_for_shift(struct sk_buff *skb)
 {
-	int ret = 0;
-
-	if (skb_cloned(skb)) {
-		/* Save and restore truesize: pskb_expand_head() may reallocate
-		 * memory where ksize(kmalloc(S)) != ksize(kmalloc(S)), but we
-		 * cannot change truesize at this point.
-		 */
-		unsigned int save_truesize = skb->truesize;
-
-		ret = pskb_expand_head(skb, 0, 0, GFP_ATOMIC);
-		skb->truesize = save_truesize;
-	}
-	return ret;
+	return skb_unclone_keeptruesize(skb, GFP_ATOMIC);
 }
 
 /**
+1 -1
net/core/sock.c
···
 	bool charged;
 	int pages;
 
-	if (!mem_cgroup_sockets_enabled || !sk->sk_memcg)
+	if (!mem_cgroup_sockets_enabled || !sk->sk_memcg || !sk_has_account(sk))
 		return -EOPNOTSUPP;
 
 	if (!bytes)
-6
net/core/sock_map.c
···
 	       ops->op == BPF_SOCK_OPS_TCP_LISTEN_CB;
 }
 
-static bool sk_is_tcp(const struct sock *sk)
-{
-	return sk->sk_type == SOCK_STREAM &&
-	       sk->sk_protocol == IPPROTO_TCP;
-}
-
 static bool sock_map_redirect_allowed(const struct sock *sk)
 {
 	if (sk_is_tcp(sk))
+3
net/dsa/tag_ocelot.c
···
 	struct dsa_port *dp;
 	u8 *extraction;
 	u16 vlan_tpid;
+	u64 rew_val;
 
 	/* Revert skb->data by the amount consumed by the DSA master,
 	 * so it points to the beginning of the frame.
···
 	ocelot_xfh_get_qos_class(extraction, &qos_class);
 	ocelot_xfh_get_tag_type(extraction, &tag_type);
 	ocelot_xfh_get_vlan_tci(extraction, &vlan_tci);
+	ocelot_xfh_get_rew_val(extraction, &rew_val);
 
 	skb->dev = dsa_master_find_slave(netdev, 0, src_port);
 	if (!skb->dev)
···
 
 	dsa_default_offload_fwd_mark(skb);
 	skb->priority = qos_class;
+	OCELOT_SKB_CB(skb)->tstamp_lo = rew_val;
 
 	/* Ocelot switches copy frames unmodified to the CPU. However, it is
 	 * possible for the user to request a VLAN modification through
+1 -2
net/ethtool/pause.c
···
 
 	if (req_base->flags & ETHTOOL_FLAG_STATS)
 		n += nla_total_size(0) +	/* _PAUSE_STATS */
-		     nla_total_size_64bit(sizeof(u64)) *
-		     (ETHTOOL_A_PAUSE_STAT_MAX - 2);
+		     nla_total_size_64bit(sizeof(u64)) * ETHTOOL_PAUSE_STAT_CNT;
 	return n;
 }
+20 -2
net/ipv4/tcp.c
···
 	if (likely(skb)) {
 		bool mem_scheduled;
 
+		skb->truesize = SKB_TRUESIZE(skb_end_offset(skb));
 		if (force_schedule) {
 			mem_scheduled = true;
 			sk_forced_mem_schedule(sk, skb->truesize);
···
 
 		copy = min_t(int, copy, pfrag->size - pfrag->offset);
 
+		/* skb changing from pure zc to mixed, must charge zc */
+		if (unlikely(skb_zcopy_pure(skb))) {
+			if (!sk_wmem_schedule(sk, skb->data_len))
+				goto wait_for_space;
+
+			sk_mem_charge(sk, skb->data_len);
+			skb_shinfo(skb)->flags &= ~SKBFL_PURE_ZEROCOPY;
+		}
+
 		if (!sk_wmem_schedule(sk, copy))
 			goto wait_for_space;
···
 			}
 			pfrag->offset += copy;
 		} else {
-			if (!sk_wmem_schedule(sk, copy))
-				goto wait_for_space;
+			/* First append to a fragless skb builds initial
+			 * pure zerocopy skb
+			 */
+			if (!skb->len)
+				skb_shinfo(skb)->flags |= SKBFL_PURE_ZEROCOPY;
+
+			if (!skb_zcopy_pure(skb)) {
+				if (!sk_wmem_schedule(sk, copy))
+					goto wait_for_space;
+			}
 
 			err = skb_zerocopy_iter_stream(sk, skb, msg, copy, uarg);
 			if (err == -EMSGSIZE || err == -EEXIST) {
+47 -1
net/ipv4/tcp_bpf.c
···
 	return ret;
 }
 
+static int tcp_bpf_recvmsg_parser(struct sock *sk,
+				  struct msghdr *msg,
+				  size_t len,
+				  int nonblock,
+				  int flags,
+				  int *addr_len)
+{
+	struct sk_psock *psock;
+	int copied;
+
+	if (unlikely(flags & MSG_ERRQUEUE))
+		return inet_recv_error(sk, msg, len, addr_len);
+
+	psock = sk_psock_get(sk);
+	if (unlikely(!psock))
+		return tcp_recvmsg(sk, msg, len, nonblock, flags, addr_len);
+
+	lock_sock(sk);
+msg_bytes_ready:
+	copied = sk_msg_recvmsg(sk, psock, msg, len, flags);
+	if (!copied) {
+		long timeo;
+		int data;
+
+		timeo = sock_rcvtimeo(sk, nonblock);
+		data = tcp_msg_wait_data(sk, psock, timeo);
+		if (data && !sk_psock_queue_empty(psock))
+			goto msg_bytes_ready;
+		copied = -EAGAIN;
+	}
+	release_sock(sk);
+	sk_psock_put(sk, psock);
+	return copied;
+}
+
 static int tcp_bpf_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
 			   int nonblock, int flags, int *addr_len)
 {
···
 enum {
 	TCP_BPF_BASE,
 	TCP_BPF_TX,
+	TCP_BPF_RX,
+	TCP_BPF_TXRX,
 	TCP_BPF_NUM_CFGS,
 };
···
 				   struct proto *base)
 {
 	prot[TCP_BPF_BASE]			= *base;
-	prot[TCP_BPF_BASE].unhash		= sock_map_unhash;
 	prot[TCP_BPF_BASE].close		= sock_map_close;
 	prot[TCP_BPF_BASE].recvmsg		= tcp_bpf_recvmsg;
 	prot[TCP_BPF_BASE].sock_is_readable	= sk_msg_is_readable;
···
 	prot[TCP_BPF_TX]			= prot[TCP_BPF_BASE];
 	prot[TCP_BPF_TX].sendmsg		= tcp_bpf_sendmsg;
 	prot[TCP_BPF_TX].sendpage		= tcp_bpf_sendpage;
+
+	prot[TCP_BPF_RX]			= prot[TCP_BPF_BASE];
+	prot[TCP_BPF_RX].recvmsg		= tcp_bpf_recvmsg_parser;
+
+	prot[TCP_BPF_TXRX]			= prot[TCP_BPF_TX];
+	prot[TCP_BPF_TXRX].recvmsg		= tcp_bpf_recvmsg_parser;
 }
 
 static void tcp_bpf_check_v6_needs_rebuild(struct proto *ops)
···
 {
 	int family = sk->sk_family == AF_INET6 ? TCP_BPF_IPV6 : TCP_BPF_IPV4;
 	int config = psock->progs.msg_parser   ? TCP_BPF_TX   : TCP_BPF_BASE;
+
+	if (psock->progs.stream_verdict || psock->progs.skb_verdict) {
+		config = (config == TCP_BPF_TX) ? TCP_BPF_TXRX : TCP_BPF_RX;
+	}
 
 	if (restore) {
 		if (inet_csk_has_ulp(sk)) {
+15 -12
net/ipv4/tcp_output.c
···
 	return tp->snd_una != tp->snd_up;
 }
 
-#define OPTION_SACK_ADVERTISE	(1 << 0)
-#define OPTION_TS		(1 << 1)
-#define OPTION_MD5		(1 << 2)
-#define OPTION_WSCALE		(1 << 3)
-#define OPTION_FAST_OPEN_COOKIE	(1 << 8)
-#define OPTION_SMC		(1 << 9)
-#define OPTION_MPTCP		(1 << 10)
+#define OPTION_SACK_ADVERTISE	BIT(0)
+#define OPTION_TS		BIT(1)
+#define OPTION_MD5		BIT(2)
+#define OPTION_WSCALE		BIT(3)
+#define OPTION_FAST_OPEN_COOKIE	BIT(8)
+#define OPTION_SMC		BIT(9)
+#define OPTION_MPTCP		BIT(10)
 
 static void smc_options_write(__be32 *ptr, u16 *options)
 {
···
 		return -ENOMEM;
 	}
 
-	if (skb_unclone(skb, gfp))
+	if (skb_unclone_keeptruesize(skb, gfp))
 		return -ENOMEM;
 
 	/* Get a new skb... force flag on. */
···
 {
 	u32 delta_truesize;
 
-	if (skb_unclone(skb, GFP_ATOMIC))
+	if (skb_unclone_keeptruesize(skb, GFP_ATOMIC))
 		return -ENOMEM;
 
 	delta_truesize = __pskb_trim_head(skb, len);
···
 	if (delta_truesize) {
 		skb->truesize	   -= delta_truesize;
 		sk_wmem_queued_add(sk, -delta_truesize);
-		sk_mem_uncharge(sk, delta_truesize);
+		if (!skb_zcopy_pure(skb))
+			sk_mem_uncharge(sk, delta_truesize);
 	}
 
 	/* Any change of skb->len requires recalculation of tso factor. */
···
 		if (len <= skb->len)
 			break;
 
-		if (unlikely(TCP_SKB_CB(skb)->eor) || tcp_has_tx_tstamp(skb))
+		if (unlikely(TCP_SKB_CB(skb)->eor) ||
+		    tcp_has_tx_tstamp(skb) ||
+		    !skb_pure_zcopy_same(skb, next))
 			return false;
 
 		len -= skb->len;
···
 				 cur_mss, GFP_ATOMIC))
 			return -ENOMEM; /* We'll try again later. */
 	} else {
-		if (skb_unclone(skb, GFP_ATOMIC))
+		if (skb_unclone_keeptruesize(skb, GFP_ATOMIC))
 			return -ENOMEM;
 
 		diff = tcp_skb_pcount(skb);
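The tcp_output.c hunk converts the option flags to the kernel's `BIT()` macro from include/linux/bits.h, which expands to `(1UL << (n))`: besides being terser, the unsigned-long type sidesteps the signed-shift pitfalls `(1 << n)` has for large n. A userspace illustration with a local definition of the macro:

```c
/* Local stand-in for the kernel's BIT() from include/linux/bits.h. */
#define BIT(nr) (1UL << (nr))

#define OPTION_SACK_ADVERTISE	BIT(0)
#define OPTION_TS		BIT(1)
#define OPTION_MD5		BIT(2)
#define OPTION_WSCALE		BIT(3)
#define OPTION_FAST_OPEN_COOKIE	BIT(8)
```

The values are unchanged from the `(1 << n)` forms; only the type and readability improve.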
+1 -1
net/ipv6/seg6.c
···
 		kfree(rcu_dereference_raw(sdata->tun_src));
 		kfree(sdata);
 		return -ENOMEM;
-	};
+	}
 #endif
 
 	return 0;
-1
net/ipv6/tcp_ipv6.c
···
 
 	inet_sk(newsk)->pinet6 = tcp_inet6_sk(newsk);
 
-	newinet = inet_sk(newsk);
 	newnp = tcp_inet6_sk(newsk);
 	newtp = tcp_sk(newsk);
 
+3 -3
net/ipv6/udp.c
···
 
 			ret = encap_rcv(sk, skb);
 			if (ret <= 0) {
-				__UDP_INC_STATS(sock_net(sk),
-						UDP_MIB_INDATAGRAMS,
-						is_udplite);
+				__UDP6_INC_STATS(sock_net(sk),
+						 UDP_MIB_INDATAGRAMS,
+						 is_udplite);
 				return -ret;
 			}
 		}
+23 -1
net/mctp/af_mctp.c
···
 	return 0;
 }
 
+/* Generic sockaddr checks, padding checks only so far */
+static bool mctp_sockaddr_is_ok(const struct sockaddr_mctp *addr)
+{
+	return !addr->__smctp_pad0 && !addr->__smctp_pad1;
+}
+
+static bool mctp_sockaddr_ext_is_ok(const struct sockaddr_mctp_ext *addr)
+{
+	return !addr->__smctp_pad0[0] &&
+	       !addr->__smctp_pad0[1] &&
+	       !addr->__smctp_pad0[2];
+}
+
 static int mctp_bind(struct socket *sock, struct sockaddr *addr, int addrlen)
 {
 	struct sock *sk = sock->sk;
···
 
 	/* it's a valid sockaddr for MCTP, cast and do protocol checks */
 	smctp = (struct sockaddr_mctp *)addr;
+
+	if (!mctp_sockaddr_is_ok(smctp))
+		return -EINVAL;
 
 	lock_sock(sk);
···
 	if (addrlen < sizeof(struct sockaddr_mctp))
 		return -EINVAL;
 	if (addr->smctp_family != AF_MCTP)
 		return -EINVAL;
+	if (!mctp_sockaddr_is_ok(addr))
+		return -EINVAL;
 	if (addr->smctp_tag & ~(MCTP_TAG_MASK | MCTP_TAG_OWNER))
 		return -EINVAL;
···
 		DECLARE_SOCKADDR(struct sockaddr_mctp_ext *,
 				 extaddr, msg->msg_name);
 
-		if (extaddr->smctp_halen > sizeof(cb->haddr)) {
+		if (!mctp_sockaddr_ext_is_ok(extaddr) ||
+		    extaddr->smctp_halen > sizeof(cb->haddr)) {
 			rc = -EINVAL;
 			goto err_free;
 		}
···
 
 		addr = msg->msg_name;
 		addr->smctp_family = AF_MCTP;
+		addr->__smctp_pad0 = 0;
 		addr->smctp_network = cb->net;
 		addr->smctp_addr.s_addr = hdr->src;
 		addr->smctp_type = type;
 		addr->smctp_tag = hdr->flags_seq_tag &
 					(MCTP_HDR_TAG_MASK | MCTP_HDR_FLAG_TO);
+		addr->__smctp_pad1 = 0;
 		msg->msg_namelen = sizeof(*addr);
 
 		if (msk->addr_ext) {
···
 			msg->msg_namelen = sizeof(*ae);
 			ae->smctp_ifindex = cb->ifindex;
 			ae->smctp_halen = cb->halen;
+			memset(ae->__smctp_pad0, 0x0, sizeof(ae->__smctp_pad0));
 			memset(ae->smctp_haddr, 0x0, sizeof(ae->smctp_haddr));
 			memcpy(ae->smctp_haddr, cb->haddr, cb->halen);
 		}
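The MCTP hunk above enforces that reserved padding fields in a uapi sockaddr are zero, so those bytes can later be given meaning without breaking existing binaries. A self-contained sketch of the same check; the struct layout here is illustrative only, not the real `struct sockaddr_mctp`:

```c
/* Demo sockaddr with reserved pad fields that must stay zero. */
struct sockaddr_demo {
	unsigned short family;
	unsigned short __pad0;	/* reserved, must be zero */
	unsigned int   network;
	unsigned char  addr;
	unsigned char  type;
	unsigned char  tag;
	unsigned char  __pad1;	/* reserved, must be zero */
};

/* Reject any sockaddr whose reserved bytes are non-zero, the same shape
 * as mctp_sockaddr_is_ok() above. */
static int sockaddr_demo_is_ok(const struct sockaddr_demo *a)
{
	return !a->__pad0 && !a->__pad1;
}
```

The kernel must also zero these fields on the way out (as the recvmsg side of the hunk does) so stack garbage never leaks to userspace.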
+2
net/netfilter/ipvs/ip_vs_ctl.c
··· 47 47 48 48 #include <net/ip_vs.h> 49 49 50 + MODULE_ALIAS_GENL_FAMILY(IPVS_GENL_NAME); 51 + 50 52 /* semaphore for IPVS sockopts. And, [gs]etsockopt may sleep. */ 51 53 static DEFINE_MUTEX(__ip_vs_mutex); 52 54
+15
net/nfc/netlink.c
···
 		.cmd = NFC_CMD_DEV_UP,
 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
 		.doit = nfc_genl_dev_up,
+		.flags = GENL_ADMIN_PERM,
 	},
 	{
 		.cmd = NFC_CMD_DEV_DOWN,
 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
 		.doit = nfc_genl_dev_down,
+		.flags = GENL_ADMIN_PERM,
 	},
 	{
 		.cmd = NFC_CMD_START_POLL,
 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
 		.doit = nfc_genl_start_poll,
+		.flags = GENL_ADMIN_PERM,
 	},
 	{
 		.cmd = NFC_CMD_STOP_POLL,
 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
 		.doit = nfc_genl_stop_poll,
+		.flags = GENL_ADMIN_PERM,
 	},
 	{
 		.cmd = NFC_CMD_DEP_LINK_UP,
 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
 		.doit = nfc_genl_dep_link_up,
+		.flags = GENL_ADMIN_PERM,
 	},
 	{
 		.cmd = NFC_CMD_DEP_LINK_DOWN,
 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
 		.doit = nfc_genl_dep_link_down,
+		.flags = GENL_ADMIN_PERM,
 	},
 	{
 		.cmd = NFC_CMD_GET_TARGET,
···
 		.cmd = NFC_CMD_LLC_SET_PARAMS,
 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
 		.doit = nfc_genl_llc_set_params,
+		.flags = GENL_ADMIN_PERM,
 	},
 	{
 		.cmd = NFC_CMD_LLC_SDREQ,
 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
 		.doit = nfc_genl_llc_sdreq,
+		.flags = GENL_ADMIN_PERM,
 	},
 	{
 		.cmd = NFC_CMD_FW_DOWNLOAD,
 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
 		.doit = nfc_genl_fw_download,
+		.flags = GENL_ADMIN_PERM,
 	},
 	{
 		.cmd = NFC_CMD_ENABLE_SE,
 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
 		.doit = nfc_genl_enable_se,
+		.flags = GENL_ADMIN_PERM,
 	},
 	{
 		.cmd = NFC_CMD_DISABLE_SE,
 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
 		.doit = nfc_genl_disable_se,
+		.flags = GENL_ADMIN_PERM,
 	},
 	{
 		.cmd = NFC_CMD_GET_SE,
···
 		.cmd = NFC_CMD_SE_IO,
 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
 		.doit = nfc_genl_se_io,
+		.flags = GENL_ADMIN_PERM,
 	},
 	{
 		.cmd = NFC_CMD_ACTIVATE_TARGET,
 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
 		.doit = nfc_genl_activate_target,
+		.flags = GENL_ADMIN_PERM,
 	},
 	{
 		.cmd = NFC_CMD_VENDOR,
 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
 		.doit = nfc_genl_vendor_cmd,
+		.flags = GENL_ADMIN_PERM,
 	},
 	{
 		.cmd = NFC_CMD_DEACTIVATE_TARGET,
 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
 		.doit = nfc_genl_deactivate_target,
+		.flags = GENL_ADMIN_PERM,
 	},
 };
 
+17 -10
net/sched/sch_taprio.c
···
 	return ns_to_ktime(sched->base_time);
 }
 
-static ktime_t taprio_get_time(struct taprio_sched *q)
+static ktime_t taprio_mono_to_any(const struct taprio_sched *q, ktime_t mono)
 {
-	ktime_t mono = ktime_get();
+	/* This pairs with WRITE_ONCE() in taprio_parse_clockid() */
+	enum tk_offsets tk_offset = READ_ONCE(q->tk_offset);
 
-	switch (q->tk_offset) {
+	switch (tk_offset) {
 	case TK_OFFS_MAX:
 		return mono;
 	default:
-		return ktime_mono_to_any(mono, q->tk_offset);
+		return ktime_mono_to_any(mono, tk_offset);
 	}
+}
 
-	return KTIME_MAX;
+static ktime_t taprio_get_time(const struct taprio_sched *q)
+{
+	return taprio_mono_to_any(q, ktime_get());
 }
 
 static void taprio_free_sched_cb(struct rcu_head *head)
···
 		return 0;
 	}
 
-	return ktime_mono_to_any(skb->skb_mstamp_ns, q->tk_offset);
+	return taprio_mono_to_any(q, skb->skb_mstamp_ns);
 }
 
 /* There are a few scenarios where we will have to modify the txtime from
···
 		}
 	} else if (tb[TCA_TAPRIO_ATTR_SCHED_CLOCKID]) {
 		int clockid = nla_get_s32(tb[TCA_TAPRIO_ATTR_SCHED_CLOCKID]);
+		enum tk_offsets tk_offset;
 
 		/* We only support static clockids and we don't allow
 		 * for it to be modified after the first init.
···
 
 		switch (clockid) {
 		case CLOCK_REALTIME:
-			q->tk_offset = TK_OFFS_REAL;
+			tk_offset = TK_OFFS_REAL;
 			break;
 		case CLOCK_MONOTONIC:
-			q->tk_offset = TK_OFFS_MAX;
+			tk_offset = TK_OFFS_MAX;
 			break;
 		case CLOCK_BOOTTIME:
-			q->tk_offset = TK_OFFS_BOOT;
+			tk_offset = TK_OFFS_BOOT;
 			break;
 		case CLOCK_TAI:
-			q->tk_offset = TK_OFFS_TAI;
+			tk_offset = TK_OFFS_TAI;
 			break;
 		default:
 			NL_SET_ERR_MSG(extack, "Invalid 'clockid'");
 			err = -EINVAL;
 			goto out;
 		}
+		/* This pairs with READ_ONCE() in taprio_mono_to_any */
+		WRITE_ONCE(q->tk_offset, tk_offset);
 
 		q->clockid = clockid;
 	} else {
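The taprio hunk wraps a field read and written locklessly across threads in `READ_ONCE()`/`WRITE_ONCE()`, forcing the compiler to emit exactly one untorn load or store and never refetch. A rough userspace analogue (names illustrative) uses relaxed C11 atomics, which give the same single-access guarantee:

```c
#include <stdatomic.h>

enum tk_offs { TK_OFFS_REAL, TK_OFFS_BOOT, TK_OFFS_TAI, TK_OFFS_MAX };

static _Atomic int tk_offset = TK_OFFS_MAX;

/* Writer side: pairs with the relaxed load in get_offset(), just as the
 * kernel's WRITE_ONCE() pairs with READ_ONCE(). */
static void set_offset(enum tk_offs off)
{
	atomic_store_explicit(&tk_offset, off, memory_order_relaxed);
}

/* Reader side: load the value once and use the local copy everywhere,
 * so a concurrent update cannot change it mid-function. */
static enum tk_offs get_offset(void)
{
	return atomic_load_explicit(&tk_offset, memory_order_relaxed);
}
```

The other half of the kernel fix is the same "load once into a local" discipline: `taprio_mono_to_any()` reads `tk_offset` a single time and switches on the snapshot, rather than rereading the shared field.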
+18 -16
net/sctp/sm_statefuns.c
···
 	struct sctp_packet *packet;
 	int len;
 
-	/* Update socket peer label if first association. */
-	if (security_sctp_assoc_request((struct sctp_endpoint *)ep,
-					chunk->skb))
-		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
-
 	/* 6.10 Bundling
 	 * An endpoint MUST NOT bundle INIT, INIT ACK or
 	 * SHUTDOWN COMPLETE with any other chunks.
···
 	new_asoc = sctp_make_temp_asoc(ep, chunk, GFP_ATOMIC);
 	if (!new_asoc)
 		goto nomem;
+
+	/* Update socket peer label if first association. */
+	if (security_sctp_assoc_request(new_asoc, chunk->skb)) {
+		sctp_association_free(new_asoc);
+		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
+	}
 
 	if (sctp_assoc_set_bind_addr_from_ep(new_asoc,
 					     sctp_scope(sctp_source(chunk)),
···
 		}
 	}
 
+	if (security_sctp_assoc_request(new_asoc, chunk->skb)) {
+		sctp_association_free(new_asoc);
+		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
+	}
 
 	/* Delay state machine commands until later.
 	 *
···
 	sctp_add_cmd_sf(commands, SCTP_CMD_INIT_COUNTER_RESET, SCTP_NULL());
 
 	/* Set peer label for connection. */
-	security_inet_conn_established(ep->base.sk, chunk->skb);
+	security_sctp_assoc_established((struct sctp_association *)asoc, chunk->skb);
 
 	/* RFC 2960 5.1 Normal Establishment of an Association
 	 *
···
 	struct sctp_packet *packet;
 	int len;
 
-	/* Update socket peer label if first association. */
-	if (security_sctp_assoc_request((struct sctp_endpoint *)ep,
-					chunk->skb))
-		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
-
 	/* 6.10 Bundling
 	 * An endpoint MUST NOT bundle INIT, INIT ACK or
 	 * SHUTDOWN COMPLETE with any other chunks.
···
 	new_asoc = sctp_make_temp_asoc(ep, chunk, GFP_ATOMIC);
 	if (!new_asoc)
 		goto nomem;
+
+	/* Update socket peer label if first association. */
+	if (security_sctp_assoc_request(new_asoc, chunk->skb)) {
+		sctp_association_free(new_asoc);
+		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
+	}
 
 	if (sctp_assoc_set_bind_addr_from_ep(new_asoc,
 			sctp_scope(sctp_source(chunk)), GFP_ATOMIC) < 0)
···
 	}
 
 	/* Update socket peer label if first association. */
-	if (security_sctp_assoc_request((struct sctp_endpoint *)ep,
-					chunk->skb)) {
+	if (security_sctp_assoc_request(new_asoc, chunk->skb)) {
 		sctp_association_free(new_asoc);
 		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
 	}
···
 				  struct sctp_cmd_seq *commands)
 {
 	static const char err_str[] = "The following chunk violates protocol:";
-
-	if (!asoc)
-		return sctp_sf_violation(net, ep, asoc, type, arg, commands);
 
 	return sctp_sf_abort_violation(net, ep, asoc, arg, commands, err_str,
 				       sizeof(err_str));
+2 -3
net/sctp/socket.c
···
 	struct inet_sock *inet = inet_sk(sk);
 	struct inet_sock *newinet;
 	struct sctp_sock *sp = sctp_sk(sk);
-	struct sctp_endpoint *ep = sp->ep;
 
 	newsk->sk_type = sk->sk_type;
 	newsk->sk_bound_dev_if = sk->sk_bound_dev_if;
···
 	net_enable_timestamp();
 
 	/* Set newsk security attributes from original sk and connection
-	 * security attribute from ep.
+	 * security attribute from asoc.
 	 */
-	security_sctp_sk_clone(ep, sk, newsk);
+	security_sctp_sk_clone(asoc, sk, newsk);
 }
 
 static inline void sctp_copy_descendant(struct sock *sk_to,
+11 -7
net/smc/af_smc.c
···
 		sock_set_flag(sk, SOCK_DEAD);
 		sk->sk_shutdown |= SHUTDOWN_MASK;
 	} else {
-		if (sk->sk_state != SMC_LISTEN && sk->sk_state != SMC_INIT)
-			sock_put(sk); /* passive closing */
-		if (sk->sk_state == SMC_LISTEN) {
-			/* wake up clcsock accept */
-			rc = kernel_sock_shutdown(smc->clcsock, SHUT_RDWR);
+		if (sk->sk_state != SMC_CLOSED) {
+			if (sk->sk_state != SMC_LISTEN &&
+			    sk->sk_state != SMC_INIT)
+				sock_put(sk); /* passive closing */
+			if (sk->sk_state == SMC_LISTEN) {
+				/* wake up clcsock accept */
+				rc = kernel_sock_shutdown(smc->clcsock,
+							  SHUT_RDWR);
+			}
+			sk->sk_state = SMC_CLOSED;
+			sk->sk_state_change(sk);
 		}
-		sk->sk_state = SMC_CLOSED;
-		sk->sk_state_change(sk);
 		smc_restore_fallback_changes(smc);
 	}
 
+1 -1
net/smc/smc_tracepoint.h
···
 		   __entry->location = location;
 	),
 
-	TP_printk("lnk=%p lgr=%p state=%d dev=%s location=%p",
+	TP_printk("lnk=%p lgr=%p state=%d dev=%s location=%pS",
 		  __entry->lnk, __entry->lgr,
 		  __entry->state, __get_str(name),
 		  __entry->location)
+1 -9
net/strparser/strparser.c
···
 
 static struct workqueue_struct *strp_wq;
 
-struct _strp_msg {
-	/* Internal cb structure. struct strp_msg must be first for passing
-	 * to upper layer.
-	 */
-	struct strp_msg strp;
-	int accum_len;
-};
-
 static inline struct _strp_msg *_strp_msg(struct sk_buff *skb)
 {
 	return (struct _strp_msg *)((void *)skb->cb +
-		offsetof(struct qdisc_skb_cb, data));
+		offsetof(struct sk_skb_cb, strp));
 }
 
 /* Lower lock held */
+2
net/vmw_vsock/af_vsock.c
···
 			 * non-blocking call.
 			 */
 			err = -EALREADY;
+			if (flags & O_NONBLOCK)
+				goto out;
 			break;
 		default:
 			if ((sk->sk_state == TCP_LISTEN) ||
+11 -4
security/security.c
···
 }
 EXPORT_SYMBOL(security_tun_dev_open);
 
-int security_sctp_assoc_request(struct sctp_endpoint *ep, struct sk_buff *skb)
+int security_sctp_assoc_request(struct sctp_association *asoc, struct sk_buff *skb)
 {
-	return call_int_hook(sctp_assoc_request, 0, ep, skb);
+	return call_int_hook(sctp_assoc_request, 0, asoc, skb);
 }
 EXPORT_SYMBOL(security_sctp_assoc_request);
···
 }
 EXPORT_SYMBOL(security_sctp_bind_connect);
 
-void security_sctp_sk_clone(struct sctp_endpoint *ep, struct sock *sk,
+void security_sctp_sk_clone(struct sctp_association *asoc, struct sock *sk,
 			    struct sock *newsk)
 {
-	call_void_hook(sctp_sk_clone, ep, sk, newsk);
+	call_void_hook(sctp_sk_clone, asoc, sk, newsk);
 }
 EXPORT_SYMBOL(security_sctp_sk_clone);
+
+void security_sctp_assoc_established(struct sctp_association *asoc,
+				     struct sk_buff *skb)
+{
+	call_void_hook(sctp_assoc_established, asoc, skb);
+}
+EXPORT_SYMBOL(security_sctp_assoc_established);
 
 #endif	/* CONFIG_SECURITY_NETWORK */
 
+23 -11
security/selinux/hooks.c
···
  * connect(2), sctp_connectx(3) or sctp_sendmsg(3) (with no association
  * already present).
  */
-static int selinux_sctp_assoc_request(struct sctp_endpoint *ep,
+static int selinux_sctp_assoc_request(struct sctp_association *asoc,
 				      struct sk_buff *skb)
 {
-	struct sk_security_struct *sksec = ep->base.sk->sk_security;
+	struct sk_security_struct *sksec = asoc->base.sk->sk_security;
 	struct common_audit_data ad;
 	struct lsm_network_audit net = {0,};
 	u8 peerlbl_active;
···
 	/* This will return peer_sid = SECSID_NULL if there are
 	 * no peer labels, see security_net_peersid_resolve().
 	 */
-	err = selinux_skb_peerlbl_sid(skb, ep->base.sk->sk_family,
+	err = selinux_skb_peerlbl_sid(skb, asoc->base.sk->sk_family,
 				      &peer_sid);
 	if (err)
 		return err;
···
 		 */
 		ad.type = LSM_AUDIT_DATA_NET;
 		ad.u.net = &net;
-		ad.u.net->sk = ep->base.sk;
+		ad.u.net->sk = asoc->base.sk;
 		err = avc_has_perm(&selinux_state,
 				   sksec->peer_sid, peer_sid, sksec->sclass,
 				   SCTP_SOCKET__ASSOCIATION, &ad);
···
 	}
 
 	/* Compute the MLS component for the connection and store
-	 * the information in ep. This will be used by SCTP TCP type
+	 * the information in asoc. This will be used by SCTP TCP type
 	 * sockets and peeled off connections as they cause a new
 	 * socket to be generated. selinux_sctp_sk_clone() will then
 	 * plug this into the new socket.
···
 	if (err)
 		return err;
 
-	ep->secid = conn_sid;
-	ep->peer_secid = peer_sid;
+	asoc->secid = conn_sid;
+	asoc->peer_secid = peer_sid;
 
 	/* Set any NetLabel labels including CIPSO/CALIPSO options. */
-	return selinux_netlbl_sctp_assoc_request(ep, skb);
+	return selinux_netlbl_sctp_assoc_request(asoc, skb);
 }
 
 /* Check if sctp IPv4/IPv6 addresses are valid for binding or connecting
···
 }
 
 /* Called whenever a new socket is created by accept(2) or sctp_peeloff(3). */
-static void selinux_sctp_sk_clone(struct sctp_endpoint *ep, struct sock *sk,
+static void selinux_sctp_sk_clone(struct sctp_association *asoc, struct sock *sk,
 				  struct sock *newsk)
 {
 	struct sk_security_struct *sksec = sk->sk_security;
···
 	if (!selinux_policycap_extsockclass())
 		return selinux_sk_clone_security(sk, newsk);
 
-	newsksec->sid = ep->secid;
-	newsksec->peer_sid = ep->peer_secid;
+	if (asoc->secid != SECSID_WILD)
+		newsksec->sid = asoc->secid;
+	newsksec->peer_sid = asoc->peer_secid;
 	newsksec->sclass = sksec->sclass;
 	selinux_netlbl_sctp_sk_clone(sk, newsk);
 }
···
 		family = PF_INET;
 
 	selinux_skb_peerlbl_sid(skb, family, &sksec->peer_sid);
+}
+
+static void selinux_sctp_assoc_established(struct sctp_association *asoc,
+					   struct sk_buff *skb)
+{
+	struct sk_security_struct *sksec = asoc->base.sk->sk_security;
+
+	selinux_inet_conn_established(asoc->base.sk, skb);
+	asoc->peer_secid = sksec->peer_sid;
+	asoc->secid = SECSID_WILD;
 }
 
 static int selinux_secmark_relabel_packet(u32 sid)
···
 	LSM_HOOK_INIT(sctp_assoc_request, selinux_sctp_assoc_request),
 	LSM_HOOK_INIT(sctp_sk_clone, selinux_sctp_sk_clone),
 	LSM_HOOK_INIT(sctp_bind_connect, selinux_sctp_bind_connect),
+	LSM_HOOK_INIT(sctp_assoc_established, selinux_sctp_assoc_established),
 	LSM_HOOK_INIT(inet_conn_request, selinux_inet_conn_request),
 	LSM_HOOK_INIT(inet_csk_clone, selinux_inet_csk_clone),
 	LSM_HOOK_INIT(inet_conn_established, selinux_inet_conn_established),
+2 -2
security/selinux/include/netlabel.h
··· 39 39 int selinux_netlbl_skbuff_setsid(struct sk_buff *skb, 40 40 u16 family, 41 41 u32 sid); 42 - int selinux_netlbl_sctp_assoc_request(struct sctp_endpoint *ep, 42 + int selinux_netlbl_sctp_assoc_request(struct sctp_association *asoc, 43 43 struct sk_buff *skb); 44 44 int selinux_netlbl_inet_conn_request(struct request_sock *req, u16 family); 45 45 void selinux_netlbl_inet_csk_clone(struct sock *sk, u16 family); ··· 98 98 return 0; 99 99 } 100 100 101 - static inline int selinux_netlbl_sctp_assoc_request(struct sctp_endpoint *ep, 101 + static inline int selinux_netlbl_sctp_assoc_request(struct sctp_association *asoc, 102 102 struct sk_buff *skb) 103 103 { 104 104 return 0;
+9 -9
security/selinux/netlabel.c
··· 261 261 262 262 /** 263 263 * selinux_netlbl_sctp_assoc_request - Label an incoming sctp association. 264 - * @ep: incoming association endpoint. 264 + * @asoc: incoming association. 265 265 * @skb: the packet. 266 266 * 267 267 * Description: 268 - * A new incoming connection is represented by @ep, ...... 268 + * A new incoming connection is represented by @asoc, ...... 269 269 * Returns zero on success, negative values on failure. 270 270 * 271 271 */ 272 - int selinux_netlbl_sctp_assoc_request(struct sctp_endpoint *ep, 272 + int selinux_netlbl_sctp_assoc_request(struct sctp_association *asoc, 273 273 struct sk_buff *skb) 274 274 { 275 275 int rc; 276 276 struct netlbl_lsm_secattr secattr; 277 - struct sk_security_struct *sksec = ep->base.sk->sk_security; 277 + struct sk_security_struct *sksec = asoc->base.sk->sk_security; 278 278 struct sockaddr_in addr4; 279 279 struct sockaddr_in6 addr6; 280 280 281 - if (ep->base.sk->sk_family != PF_INET && 282 - ep->base.sk->sk_family != PF_INET6) 281 + if (asoc->base.sk->sk_family != PF_INET && 282 + asoc->base.sk->sk_family != PF_INET6) 283 283 return 0; 284 284 285 285 netlbl_secattr_init(&secattr); 286 286 rc = security_netlbl_sid_to_secattr(&selinux_state, 287 - ep->secid, &secattr); 287 + asoc->secid, &secattr); 288 288 if (rc != 0) 289 289 goto assoc_request_return; 290 290 ··· 294 294 if (ip_hdr(skb)->version == 4) { 295 295 addr4.sin_family = AF_INET; 296 296 addr4.sin_addr.s_addr = ip_hdr(skb)->saddr; 297 - rc = netlbl_conn_setattr(ep->base.sk, (void *)&addr4, &secattr); 297 + rc = netlbl_conn_setattr(asoc->base.sk, (void *)&addr4, &secattr); 298 298 } else if (IS_ENABLED(CONFIG_IPV6) && ip_hdr(skb)->version == 6) { 299 299 addr6.sin6_family = AF_INET6; 300 300 addr6.sin6_addr = ipv6_hdr(skb)->saddr; 301 - rc = netlbl_conn_setattr(ep->base.sk, (void *)&addr6, &secattr); 301 + rc = netlbl_conn_setattr(asoc->base.sk, (void *)&addr6, &secattr); 302 302 } else { 303 303 rc = -EAFNOSUPPORT; 304 304 }
+22 -10
tools/bpf/bpftool/Makefile
··· 22 22 _OUTPUT := $(CURDIR) 23 23 endif 24 24 BOOTSTRAP_OUTPUT := $(_OUTPUT)/bootstrap/ 25 + 25 26 LIBBPF_OUTPUT := $(_OUTPUT)/libbpf/ 26 27 LIBBPF_DESTDIR := $(LIBBPF_OUTPUT) 27 28 LIBBPF_INCLUDE := $(LIBBPF_DESTDIR)/include 28 29 LIBBPF_HDRS_DIR := $(LIBBPF_INCLUDE)/bpf 30 + LIBBPF := $(LIBBPF_OUTPUT)libbpf.a 29 31 30 - LIBBPF = $(LIBBPF_OUTPUT)libbpf.a 31 - LIBBPF_BOOTSTRAP_OUTPUT = $(BOOTSTRAP_OUTPUT)libbpf/ 32 - LIBBPF_BOOTSTRAP = $(LIBBPF_BOOTSTRAP_OUTPUT)libbpf.a 32 + LIBBPF_BOOTSTRAP_OUTPUT := $(BOOTSTRAP_OUTPUT)libbpf/ 33 + LIBBPF_BOOTSTRAP_DESTDIR := $(LIBBPF_BOOTSTRAP_OUTPUT) 34 + LIBBPF_BOOTSTRAP_INCLUDE := $(LIBBPF_BOOTSTRAP_DESTDIR)/include 35 + LIBBPF_BOOTSTRAP_HDRS_DIR := $(LIBBPF_BOOTSTRAP_INCLUDE)/bpf 36 + LIBBPF_BOOTSTRAP := $(LIBBPF_BOOTSTRAP_OUTPUT)libbpf.a 33 37 34 38 # We need to copy hashmap.h and nlattr.h which is not otherwise exported by 35 39 # libbpf, but still required by bpftool. 36 40 LIBBPF_INTERNAL_HDRS := $(addprefix $(LIBBPF_HDRS_DIR)/,hashmap.h nlattr.h) 41 + LIBBPF_BOOTSTRAP_INTERNAL_HDRS := $(addprefix $(LIBBPF_BOOTSTRAP_HDRS_DIR)/,hashmap.h) 37 42 38 43 ifeq ($(BPFTOOL_VERSION),) 39 44 BPFTOOL_VERSION := $(shell make -rR --no-print-directory -sC ../../.. 
kernelversion) 40 45 endif 41 46 42 - $(LIBBPF_OUTPUT) $(BOOTSTRAP_OUTPUT) $(LIBBPF_BOOTSTRAP_OUTPUT) $(LIBBPF_HDRS_DIR): 47 + $(LIBBPF_OUTPUT) $(BOOTSTRAP_OUTPUT) $(LIBBPF_BOOTSTRAP_OUTPUT) $(LIBBPF_HDRS_DIR) $(LIBBPF_BOOTSTRAP_HDRS_DIR): 43 48 $(QUIET_MKDIR)mkdir -p $@ 44 49 45 50 $(LIBBPF): $(wildcard $(BPF_DIR)/*.[ch] $(BPF_DIR)/Makefile) | $(LIBBPF_OUTPUT) ··· 57 52 58 53 $(LIBBPF_BOOTSTRAP): $(wildcard $(BPF_DIR)/*.[ch] $(BPF_DIR)/Makefile) | $(LIBBPF_BOOTSTRAP_OUTPUT) 59 54 $(Q)$(MAKE) -C $(BPF_DIR) OUTPUT=$(LIBBPF_BOOTSTRAP_OUTPUT) \ 60 - ARCH= CC=$(HOSTCC) LD=$(HOSTLD) $@ 55 + DESTDIR=$(LIBBPF_BOOTSTRAP_DESTDIR) prefix= \ 56 + ARCH= CC=$(HOSTCC) LD=$(HOSTLD) $@ install_headers 57 + 58 + $(LIBBPF_BOOTSTRAP_INTERNAL_HDRS): $(LIBBPF_BOOTSTRAP_HDRS_DIR)/%.h: $(BPF_DIR)/%.h | $(LIBBPF_BOOTSTRAP_HDRS_DIR) 59 + $(call QUIET_INSTALL, $@) 60 + $(Q)install -m 644 -t $(LIBBPF_BOOTSTRAP_HDRS_DIR) $< 61 61 62 62 $(LIBBPF)-clean: FORCE | $(LIBBPF_OUTPUT) 63 63 $(call QUIET_CLEAN, libbpf) ··· 182 172 $(Q)cp "$(VMLINUX_H)" $@ 183 173 endif 184 174 185 - $(OUTPUT)%.bpf.o: skeleton/%.bpf.c $(OUTPUT)vmlinux.h $(LIBBPF) 175 + $(OUTPUT)%.bpf.o: skeleton/%.bpf.c $(OUTPUT)vmlinux.h $(LIBBPF_BOOTSTRAP) 186 176 $(QUIET_CLANG)$(CLANG) \ 187 177 -I$(if $(OUTPUT),$(OUTPUT),.) 
\ 188 178 -I$(srctree)/tools/include/uapi/ \ 189 - -I$(LIBBPF_INCLUDE) \ 179 + -I$(LIBBPF_BOOTSTRAP_INCLUDE) \ 190 180 -g -O2 -Wall -target bpf -c $< -o $@ && $(LLVM_STRIP) -g $@ 191 181 192 182 $(OUTPUT)%.skel.h: $(OUTPUT)%.bpf.o $(BPFTOOL_BOOTSTRAP) ··· 219 209 $(OUTPUT)bpftool: $(OBJS) $(LIBBPF) 220 210 $(QUIET_LINK)$(CC) $(CFLAGS) $(LDFLAGS) -o $@ $(OBJS) $(LIBS) 221 211 222 - $(BOOTSTRAP_OUTPUT)%.o: %.c $(LIBBPF_INTERNAL_HDRS) | $(BOOTSTRAP_OUTPUT) 223 - $(QUIET_CC)$(HOSTCC) $(CFLAGS) -c -MMD -o $@ $< 212 + $(BOOTSTRAP_OUTPUT)%.o: %.c $(LIBBPF_BOOTSTRAP_INTERNAL_HDRS) | $(BOOTSTRAP_OUTPUT) 213 + $(QUIET_CC)$(HOSTCC) \ 214 + $(subst -I$(LIBBPF_INCLUDE),-I$(LIBBPF_BOOTSTRAP_INCLUDE),$(CFLAGS)) \ 215 + -c -MMD -o $@ $< 224 216 225 217 $(OUTPUT)%.o: %.c 226 218 $(QUIET_CC)$(CC) $(CFLAGS) -c -MMD -o $@ $< ··· 269 257 FORCE: 270 258 271 259 .SECONDARY: 272 - .PHONY: all FORCE clean install-bin install uninstall 260 + .PHONY: all FORCE bootstrap clean install-bin install uninstall 273 261 .PHONY: doc doc-clean doc-install doc-uninstall 274 262 .DEFAULT_GOAL := all
+3 -1
tools/lib/bpf/bpf.c
··· 515 515 int bpf_map_lookup_and_delete_elem_flags(int fd, const void *key, void *value, __u64 flags) 516 516 { 517 517 union bpf_attr attr; 518 + int ret; 518 519 519 520 memset(&attr, 0, sizeof(attr)); 520 521 attr.map_fd = fd; ··· 523 522 attr.value = ptr_to_u64(value); 524 523 attr.flags = flags; 525 524 526 - return sys_bpf(BPF_MAP_LOOKUP_AND_DELETE_ELEM, &attr, sizeof(attr)); 525 + ret = sys_bpf(BPF_MAP_LOOKUP_AND_DELETE_ELEM, &attr, sizeof(attr)); 526 + return libbpf_err_errno(ret); 527 527 } 528 528 529 529 int bpf_map_delete_elem(int fd, const void *key)
+1 -1
tools/testing/selftests/bpf/prog_tests/netcnt.c
··· 8 8 9 9 #define CG_NAME "/netcnt" 10 10 11 - void test_netcnt(void) 11 + void serial_test_netcnt(void) 12 12 { 13 13 union percpu_net_cnt *percpu_netcnt = NULL; 14 14 struct bpf_cgroup_storage_key key;
+74 -11
tools/testing/selftests/bpf/prog_tests/test_bpffs.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 /* Copyright (c) 2020 Facebook */ 3 3 #define _GNU_SOURCE 4 + #include <stdio.h> 4 5 #include <sched.h> 5 6 #include <sys/mount.h> 6 7 #include <sys/stat.h> ··· 30 29 31 30 static int fn(void) 32 31 { 33 - int err, duration = 0; 32 + struct stat a, b, c; 33 + int err, map; 34 34 35 35 err = unshare(CLONE_NEWNS); 36 - if (CHECK(err, "unshare", "failed: %d\n", errno)) 36 + if (!ASSERT_OK(err, "unshare")) 37 37 goto out; 38 38 39 39 err = mount("", "/", "", MS_REC | MS_PRIVATE, NULL); 40 - if (CHECK(err, "mount /", "failed: %d\n", errno)) 40 + if (!ASSERT_OK(err, "mount /")) 41 41 goto out; 42 42 43 43 err = umount(TDIR); 44 - if (CHECK(err, "umount " TDIR, "failed: %d\n", errno)) 44 + if (!ASSERT_OK(err, "umount " TDIR)) 45 45 goto out; 46 46 47 47 err = mount("none", TDIR, "tmpfs", 0, NULL); 48 - if (CHECK(err, "mount", "mount root failed: %d\n", errno)) 48 + if (!ASSERT_OK(err, "mount tmpfs")) 49 49 goto out; 50 50 51 51 err = mkdir(TDIR "/fs1", 0777); 52 - if (CHECK(err, "mkdir "TDIR"/fs1", "failed: %d\n", errno)) 52 + if (!ASSERT_OK(err, "mkdir " TDIR "/fs1")) 53 53 goto out; 54 54 err = mkdir(TDIR "/fs2", 0777); 55 - if (CHECK(err, "mkdir "TDIR"/fs2", "failed: %d\n", errno)) 55 + if (!ASSERT_OK(err, "mkdir " TDIR "/fs2")) 56 56 goto out; 57 57 58 58 err = mount("bpf", TDIR "/fs1", "bpf", 0, NULL); 59 - if (CHECK(err, "mount bpffs "TDIR"/fs1", "failed: %d\n", errno)) 59 + if (!ASSERT_OK(err, "mount bpffs " TDIR "/fs1")) 60 60 goto out; 61 61 err = mount("bpf", TDIR "/fs2", "bpf", 0, NULL); 62 - if (CHECK(err, "mount bpffs " TDIR "/fs2", "failed: %d\n", errno)) 62 + if (!ASSERT_OK(err, "mount bpffs " TDIR "/fs2")) 63 63 goto out; 64 64 65 65 err = read_iter(TDIR "/fs1/maps.debug"); 66 - if (CHECK(err, "reading " TDIR "/fs1/maps.debug", "failed\n")) 66 + if (!ASSERT_OK(err, "reading " TDIR "/fs1/maps.debug")) 67 67 goto out; 68 68 err = read_iter(TDIR "/fs2/progs.debug"); 69 - if (CHECK(err, "reading " TDIR 
"/fs2/progs.debug", "failed\n")) 69 + if (!ASSERT_OK(err, "reading " TDIR "/fs2/progs.debug")) 70 70 goto out; 71 + 72 + err = mkdir(TDIR "/fs1/a", 0777); 73 + if (!ASSERT_OK(err, "creating " TDIR "/fs1/a")) 74 + goto out; 75 + err = mkdir(TDIR "/fs1/a/1", 0777); 76 + if (!ASSERT_OK(err, "creating " TDIR "/fs1/a/1")) 77 + goto out; 78 + err = mkdir(TDIR "/fs1/b", 0777); 79 + if (!ASSERT_OK(err, "creating " TDIR "/fs1/b")) 80 + goto out; 81 + 82 + map = bpf_create_map(BPF_MAP_TYPE_ARRAY, 4, 4, 1, 0); 83 + if (!ASSERT_GT(map, 0, "create_map(ARRAY)")) 84 + goto out; 85 + err = bpf_obj_pin(map, TDIR "/fs1/c"); 86 + if (!ASSERT_OK(err, "pin map")) 87 + goto out; 88 + close(map); 89 + 90 + /* Check that RENAME_EXCHANGE works for directories. */ 91 + err = stat(TDIR "/fs1/a", &a); 92 + if (!ASSERT_OK(err, "stat(" TDIR "/fs1/a)")) 93 + goto out; 94 + err = renameat2(0, TDIR "/fs1/a", 0, TDIR "/fs1/b", RENAME_EXCHANGE); 95 + if (!ASSERT_OK(err, "renameat2(/fs1/a, /fs1/b, RENAME_EXCHANGE)")) 96 + goto out; 97 + err = stat(TDIR "/fs1/b", &b); 98 + if (!ASSERT_OK(err, "stat(" TDIR "/fs1/b)")) 99 + goto out; 100 + if (!ASSERT_EQ(a.st_ino, b.st_ino, "b should have a's inode")) 101 + goto out; 102 + err = access(TDIR "/fs1/b/1", F_OK); 103 + if (!ASSERT_OK(err, "access(" TDIR "/fs1/b/1)")) 104 + goto out; 105 + 106 + /* Check that RENAME_EXCHANGE works for mixed file types. 
*/ 107 + err = stat(TDIR "/fs1/c", &c); 108 + if (!ASSERT_OK(err, "stat(" TDIR "/fs1/map)")) 109 + goto out; 110 + err = renameat2(0, TDIR "/fs1/c", 0, TDIR "/fs1/b", RENAME_EXCHANGE); 111 + if (!ASSERT_OK(err, "renameat2(/fs1/c, /fs1/b, RENAME_EXCHANGE)")) 112 + goto out; 113 + err = stat(TDIR "/fs1/b", &b); 114 + if (!ASSERT_OK(err, "stat(" TDIR "/fs1/b)")) 115 + goto out; 116 + if (!ASSERT_EQ(c.st_ino, b.st_ino, "b should have c's inode")) 117 + goto out; 118 + err = access(TDIR "/fs1/c/1", F_OK); 119 + if (!ASSERT_OK(err, "access(" TDIR "/fs1/c/1)")) 120 + goto out; 121 + 122 + /* Check that RENAME_NOREPLACE works. */ 123 + err = renameat2(0, TDIR "/fs1/b", 0, TDIR "/fs1/a", RENAME_NOREPLACE); 124 + if (!ASSERT_ERR(err, "renameat2(RENAME_NOREPLACE)")) { 125 + err = -EINVAL; 126 + goto out; 127 + } 128 + err = access(TDIR "/fs1/b", F_OK); 129 + if (!ASSERT_OK(err, "access(" TDIR "/fs1/b)")) 130 + goto out; 131 + 71 132 out: 72 133 umount(TDIR "/fs1"); 73 134 umount(TDIR "/fs2");
+12
tools/testing/selftests/bpf/progs/for_each_array_map_elem.c
··· 23 23 int output; 24 24 }; 25 25 26 + const volatile int bypass_unused = 1; 27 + 28 + static __u64 29 + unused_subprog(struct bpf_map *map, __u32 *key, __u64 *val, 30 + struct callback_ctx *data) 31 + { 32 + data->output = 0; 33 + return 1; 34 + } 35 + 26 36 static __u64 27 37 check_array_elem(struct bpf_map *map, __u32 *key, __u64 *val, 28 38 struct callback_ctx *data) ··· 64 54 65 55 data.output = 0; 66 56 bpf_for_each_map_elem(&arraymap, check_array_elem, &data, 0); 57 + if (!bypass_unused) 58 + bpf_for_each_map_elem(&arraymap, unused_subprog, &data, 0); 67 59 arraymap_output = data.output; 68 60 69 61 bpf_for_each_map_elem(&percpu_map, check_percpu_elem, (void *)0, 0);
+35 -27
tools/testing/selftests/bpf/test_xdp_redirect_multi.sh
··· 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 # 4 4 # Test topology: 5 - # - - - - - - - - - - - - - - - - - - - - - - - - - 6 - # | veth1 veth2 veth3 | ... init net 5 + # - - - - - - - - - - - - - - - - - - - 6 + # | veth1 veth2 veth3 | ns0 7 7 # - -| - - - - - - | - - - - - - | - - 8 8 # --------- --------- --------- 9 - # | veth0 | | veth0 | | veth0 | ... 9 + # | veth0 | | veth0 | | veth0 | 10 10 # --------- --------- --------- 11 11 # ns1 ns2 ns3 12 12 # ··· 31 31 DRV_MODE="xdpgeneric xdpdrv xdpegress" 32 32 PASS=0 33 33 FAIL=0 34 + LOG_DIR=$(mktemp -d) 34 35 35 36 test_pass() 36 37 { ··· 51 50 ip link del veth$i 2> /dev/null 52 51 ip netns del ns$i 2> /dev/null 53 52 done 53 + ip netns del ns0 2> /dev/null 54 54 } 55 55 56 56 # Kselftest framework requirement - SKIP code is 4. ··· 79 77 mode="xdpdrv" 80 78 fi 81 79 80 + ip netns add ns0 82 81 for i in $(seq $NUM); do 83 82 ip netns add ns$i 84 - ip link add veth$i type veth peer name veth0 netns ns$i 85 - ip link set veth$i up 83 + ip -n ns$i link add veth0 index 2 type veth \ 84 + peer name veth$i netns ns0 index $((1 + $i)) 85 + ip -n ns0 link set veth$i up 86 86 ip -n ns$i link set veth0 up 87 87 88 88 ip -n ns$i addr add 192.0.2.$i/24 dev veth0 ··· 95 91 xdp_dummy.o sec xdp &> /dev/null || \ 96 92 { test_fail "Unable to load dummy xdp" && exit 1; } 97 93 IFACES="$IFACES veth$i" 98 - veth_mac[$i]=$(ip link show veth$i | awk '/link\/ether/ {print $2}') 94 + veth_mac[$i]=$(ip -n ns0 link show veth$i | awk '/link\/ether/ {print $2}') 99 95 done 100 96 } 101 97 ··· 104 100 local mode=$1 105 101 106 102 # mac test 107 - ip netns exec ns2 tcpdump -e -i veth0 -nn -l -e &> mac_ns1-2_${mode}.log & 108 - ip netns exec ns3 tcpdump -e -i veth0 -nn -l -e &> mac_ns1-3_${mode}.log & 103 + ip netns exec ns2 tcpdump -e -i veth0 -nn -l -e &> ${LOG_DIR}/mac_ns1-2_${mode}.log & 104 + ip netns exec ns3 tcpdump -e -i veth0 -nn -l -e &> ${LOG_DIR}/mac_ns1-3_${mode}.log & 109 105 sleep 0.5 110 106 ip netns exec ns1 ping 
192.0.2.254 -i 0.1 -c 4 &> /dev/null 111 107 sleep 0.5 112 - pkill -9 tcpdump 108 + pkill tcpdump 113 109 114 110 # mac check 115 - grep -q "${veth_mac[2]} > ff:ff:ff:ff:ff:ff" mac_ns1-2_${mode}.log && \ 111 + grep -q "${veth_mac[2]} > ff:ff:ff:ff:ff:ff" ${LOG_DIR}/mac_ns1-2_${mode}.log && \ 116 112 test_pass "$mode mac ns1-2" || test_fail "$mode mac ns1-2" 117 - grep -q "${veth_mac[3]} > ff:ff:ff:ff:ff:ff" mac_ns1-3_${mode}.log && \ 113 + grep -q "${veth_mac[3]} > ff:ff:ff:ff:ff:ff" ${LOG_DIR}/mac_ns1-3_${mode}.log && \ 118 114 test_pass "$mode mac ns1-3" || test_fail "$mode mac ns1-3" 119 115 } 120 116 ··· 125 121 # ping6 test: echo request should be redirect back to itself, not others 126 122 ip netns exec ns1 ip neigh add 2001:db8::2 dev veth0 lladdr 00:00:00:00:00:02 127 123 128 - ip netns exec ns1 tcpdump -i veth0 -nn -l -e &> ns1-1_${mode}.log & 129 - ip netns exec ns2 tcpdump -i veth0 -nn -l -e &> ns1-2_${mode}.log & 130 - ip netns exec ns3 tcpdump -i veth0 -nn -l -e &> ns1-3_${mode}.log & 124 + ip netns exec ns1 tcpdump -i veth0 -nn -l -e &> ${LOG_DIR}/ns1-1_${mode}.log & 125 + ip netns exec ns2 tcpdump -i veth0 -nn -l -e &> ${LOG_DIR}/ns1-2_${mode}.log & 126 + ip netns exec ns3 tcpdump -i veth0 -nn -l -e &> ${LOG_DIR}/ns1-3_${mode}.log & 131 127 sleep 0.5 132 128 # ARP test 133 - ip netns exec ns1 ping 192.0.2.254 -i 0.1 -c 4 &> /dev/null 129 + ip netns exec ns1 arping -q -c 2 -I veth0 192.0.2.254 134 130 # IPv4 test 135 131 ip netns exec ns1 ping 192.0.2.253 -i 0.1 -c 4 &> /dev/null 136 132 # IPv6 test 137 133 ip netns exec ns1 ping6 2001:db8::2 -i 0.1 -c 2 &> /dev/null 138 134 sleep 0.5 139 - pkill -9 tcpdump 135 + pkill tcpdump 140 136 141 137 # All netns should receive the redirect arp requests 142 - [ $(grep -c "who-has 192.0.2.254" ns1-1_${mode}.log) -gt 4 ] && \ 138 + [ $(grep -cF "who-has 192.0.2.254" ${LOG_DIR}/ns1-1_${mode}.log) -eq 4 ] && \ 143 139 test_pass "$mode arp(F_BROADCAST) ns1-1" || \ 144 140 test_fail "$mode arp(F_BROADCAST) ns1-1" 
145 - [ $(grep -c "who-has 192.0.2.254" ns1-2_${mode}.log) -le 4 ] && \ 141 + [ $(grep -cF "who-has 192.0.2.254" ${LOG_DIR}/ns1-2_${mode}.log) -eq 2 ] && \ 146 142 test_pass "$mode arp(F_BROADCAST) ns1-2" || \ 147 143 test_fail "$mode arp(F_BROADCAST) ns1-2" 148 - [ $(grep -c "who-has 192.0.2.254" ns1-3_${mode}.log) -le 4 ] && \ 144 + [ $(grep -cF "who-has 192.0.2.254" ${LOG_DIR}/ns1-3_${mode}.log) -eq 2 ] && \ 149 145 test_pass "$mode arp(F_BROADCAST) ns1-3" || \ 150 146 test_fail "$mode arp(F_BROADCAST) ns1-3" 151 147 152 148 # ns1 should not receive the redirect echo request, others should 153 - [ $(grep -c "ICMP echo request" ns1-1_${mode}.log) -eq 4 ] && \ 149 + [ $(grep -c "ICMP echo request" ${LOG_DIR}/ns1-1_${mode}.log) -eq 4 ] && \ 154 150 test_pass "$mode IPv4 (F_BROADCAST|F_EXCLUDE_INGRESS) ns1-1" || \ 155 151 test_fail "$mode IPv4 (F_BROADCAST|F_EXCLUDE_INGRESS) ns1-1" 156 - [ $(grep -c "ICMP echo request" ns1-2_${mode}.log) -eq 4 ] && \ 152 + [ $(grep -c "ICMP echo request" ${LOG_DIR}/ns1-2_${mode}.log) -eq 4 ] && \ 157 153 test_pass "$mode IPv4 (F_BROADCAST|F_EXCLUDE_INGRESS) ns1-2" || \ 158 154 test_fail "$mode IPv4 (F_BROADCAST|F_EXCLUDE_INGRESS) ns1-2" 159 - [ $(grep -c "ICMP echo request" ns1-3_${mode}.log) -eq 4 ] && \ 155 + [ $(grep -c "ICMP echo request" ${LOG_DIR}/ns1-3_${mode}.log) -eq 4 ] && \ 160 156 test_pass "$mode IPv4 (F_BROADCAST|F_EXCLUDE_INGRESS) ns1-3" || \ 161 157 test_fail "$mode IPv4 (F_BROADCAST|F_EXCLUDE_INGRESS) ns1-3" 162 158 163 159 # ns1 should receive the echo request, ns2 should not 164 - [ $(grep -c "ICMP6, echo request" ns1-1_${mode}.log) -eq 4 ] && \ 160 + [ $(grep -c "ICMP6, echo request" ${LOG_DIR}/ns1-1_${mode}.log) -eq 4 ] && \ 165 161 test_pass "$mode IPv6 (no flags) ns1-1" || \ 166 162 test_fail "$mode IPv6 (no flags) ns1-1" 167 - [ $(grep -c "ICMP6, echo request" ns1-2_${mode}.log) -eq 0 ] && \ 163 + [ $(grep -c "ICMP6, echo request" ${LOG_DIR}/ns1-2_${mode}.log) -eq 0 ] && \ 168 164 test_pass "$mode IPv6 (no 
flags) ns1-2" || \ 169 165 test_fail "$mode IPv6 (no flags) ns1-2" 170 166 } ··· 180 176 xdpgeneric) drv_p="-S";; 181 177 esac 182 178 183 - ./xdp_redirect_multi $drv_p $IFACES &> xdp_redirect_${mode}.log & 179 + ip netns exec ns0 ./xdp_redirect_multi $drv_p $IFACES &> ${LOG_DIR}/xdp_redirect_${mode}.log & 184 180 xdp_pid=$! 185 181 sleep 1 182 + if ! ps -p $xdp_pid > /dev/null; then 183 + test_fail "$mode xdp_redirect_multi start failed" 184 + return 1 185 + fi 186 186 187 187 if [ "$mode" = "xdpegress" ]; then 188 188 do_egress_tests $mode ··· 197 189 kill $xdp_pid 198 190 } 199 191 200 - trap clean_up 0 2 3 6 9 192 + trap clean_up EXIT 201 193 202 194 check_env 203 - rm -f xdp_redirect_*.log ns*.log mac_ns*.log 204 195 205 196 for mode in ${DRV_MODE}; do 206 197 setup_ns $mode 207 198 do_tests $mode 208 199 clean_up 209 200 done 201 + rm -rf ${LOG_DIR} 210 202 211 203 echo "Summary: PASS $PASS, FAIL $FAIL" 212 204 [ $FAIL -eq 0 ] && exit 0 || exit 1
+17
tools/testing/selftests/bpf/verifier/spill_fill.c
··· 265 265 .result = ACCEPT, 266 266 .prog_type = BPF_PROG_TYPE_SCHED_CLS, 267 267 }, 268 + { 269 + "Spill a u32 scalar at fp-4 and then at fp-8", 270 + .insns = { 271 + /* r4 = 4321 */ 272 + BPF_MOV32_IMM(BPF_REG_4, 4321), 273 + /* *(u32 *)(r10 -4) = r4 */ 274 + BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_4, -4), 275 + /* *(u32 *)(r10 -8) = r4 */ 276 + BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_4, -8), 277 + /* r4 = *(u64 *)(r10 -8) */ 278 + BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_10, -8), 279 + BPF_MOV64_IMM(BPF_REG_0, 0), 280 + BPF_EXIT_INSN(), 281 + }, 282 + .result = ACCEPT, 283 + .prog_type = BPF_PROG_TYPE_SCHED_CLS, 284 + },
+2 -2
tools/testing/selftests/bpf/xdp_redirect_multi.c
··· 129 129 goto err_out; 130 130 } 131 131 132 - printf("Get interfaces"); 132 + printf("Get interfaces:"); 133 133 for (i = 0; i < MAX_IFACE_NUM && argv[optind + i]; i++) { 134 134 ifaces[i] = if_nametoindex(argv[optind + i]); 135 135 if (!ifaces[i]) ··· 139 139 goto err_out; 140 140 } 141 141 if (ifaces[i] > MAX_INDEX_NUM) { 142 - printf("Interface index to large\n"); 142 + printf(" interface index too large\n"); 143 143 goto err_out; 144 144 } 145 145 printf(" %d", ifaces[i]);
+7 -2
tools/testing/selftests/net/Makefile
··· 12 12 TEST_PROGS += test_vxlan_fdb_changelink.sh so_txtime.sh ipv6_flowlabel.sh 13 13 TEST_PROGS += tcp_fastopen_backup_key.sh fcnal-test.sh l2tp.sh traceroute.sh 14 14 TEST_PROGS += fin_ack_lat.sh fib_nexthop_multiprefix.sh fib_nexthops.sh 15 - TEST_PROGS += altnames.sh icmp_redirect.sh ip6_gre_headroom.sh 15 + TEST_PROGS += altnames.sh icmp.sh icmp_redirect.sh ip6_gre_headroom.sh 16 16 TEST_PROGS += route_localnet.sh 17 17 TEST_PROGS += reuseaddr_ports_exhausted.sh 18 18 TEST_PROGS += txtimestamp.sh ··· 30 30 TEST_PROGS += gro.sh 31 31 TEST_PROGS += gre_gso.sh 32 32 TEST_PROGS += cmsg_so_mark.sh 33 - TEST_PROGS_EXTENDED := in_netns.sh 33 + TEST_PROGS += srv6_end_dt46_l3vpn_test.sh 34 + TEST_PROGS += srv6_end_dt4_l3vpn_test.sh 35 + TEST_PROGS += srv6_end_dt6_l3vpn_test.sh 36 + TEST_PROGS += vrf_strict_mode_test.sh 37 + TEST_PROGS_EXTENDED := in_netns.sh setup_loopback.sh setup_veth.sh 38 + TEST_PROGS_EXTENDED += toeplitz_client.sh toeplitz.sh 34 39 TEST_GEN_FILES = socket nettest 35 40 TEST_GEN_FILES += psock_fanout psock_tpacket msg_zerocopy reuseport_addr_any 36 41 TEST_GEN_FILES += tcp_mmap tcp_inq psock_snd txring_overwrite
+1 -1
tools/testing/selftests/net/forwarding/mirror_gre_bridge_1d_vlan.sh
··· 80 80 81 81 test_ip6gretap() 82 82 { 83 - test_vlan_match gt6 'skip_hw vlan_id 555 vlan_ethtype ip' \ 83 + test_vlan_match gt6 'skip_hw vlan_id 555 vlan_ethtype ipv6' \ 84 84 "mirror to ip6gretap" 85 85 } 86 86
+1 -1
tools/testing/selftests/net/forwarding/mirror_gre_changes.sh
··· 74 74 75 75 mirror_install $swp1 ingress $tundev "matchall $tcflags" 76 76 tc filter add dev $h3 ingress pref 77 prot $prot \ 77 - flower ip_ttl 50 action pass 77 + flower skip_hw ip_ttl 50 action pass 78 78 79 79 mirror_test v$h1 192.0.2.1 192.0.2.2 $h3 77 0 80 80
+7 -6
tools/testing/selftests/net/forwarding/mirror_gre_vlan_bridge_1q.sh
··· 141 141 142 142 test_ip6gretap() 143 143 { 144 - test_vlan_match gt6 'skip_hw vlan_id 555 vlan_ethtype ip' \ 144 + test_vlan_match gt6 'skip_hw vlan_id 555 vlan_ethtype ipv6' \ 145 145 "mirror to ip6gretap" 146 146 } 147 147 ··· 218 218 test_span_gre_untagged_egress() 219 219 { 220 220 local tundev=$1; shift 221 + local ul_proto=$1; shift 221 222 local what=$1; shift 222 223 223 224 RET=0 ··· 226 225 mirror_install $swp1 ingress $tundev "matchall $tcflags" 227 226 228 227 quick_test_span_gre_dir $tundev ingress 229 - quick_test_span_vlan_dir $h3 555 ingress 228 + quick_test_span_vlan_dir $h3 555 ingress "$ul_proto" 230 229 231 230 h3_addr_add_del del $h3.555 232 231 bridge vlan add dev $swp3 vid 555 pvid untagged ··· 234 233 sleep 5 235 234 236 235 quick_test_span_gre_dir $tundev ingress 237 - fail_test_span_vlan_dir $h3 555 ingress 236 + fail_test_span_vlan_dir $h3 555 ingress "$ul_proto" 238 237 239 238 h3_addr_add_del del $h3 240 239 bridge vlan add dev $swp3 vid 555 ··· 242 241 sleep 5 243 242 244 243 quick_test_span_gre_dir $tundev ingress 245 - quick_test_span_vlan_dir $h3 555 ingress 244 + quick_test_span_vlan_dir $h3 555 ingress "$ul_proto" 246 245 247 246 mirror_uninstall $swp1 ingress 248 247 ··· 251 250 252 251 test_gretap_untagged_egress() 253 252 { 254 - test_span_gre_untagged_egress gt4 "mirror to gretap" 253 + test_span_gre_untagged_egress gt4 ip "mirror to gretap" 255 254 } 256 255 257 256 test_ip6gretap_untagged_egress() 258 257 { 259 - test_span_gre_untagged_egress gt6 "mirror to ip6gretap" 258 + test_span_gre_untagged_egress gt6 ipv6 "mirror to ip6gretap" 260 259 } 261 260 262 261 test_span_gre_fdb_roaming()
+2 -1
tools/testing/selftests/net/forwarding/mirror_lib.sh
··· 115 115 local dev=$1; shift 116 116 local vid=$1; shift 117 117 local direction=$1; shift 118 + local ul_proto=$1; shift 118 119 local ip1=$1; shift 119 120 local ip2=$1; shift 120 121 121 122 # Install the capture as skip_hw to avoid double-counting of packets. 122 123 # The traffic is meant for local box anyway, so will be trapped to 123 124 # kernel. 124 - vlan_capture_install $dev "skip_hw vlan_id $vid vlan_ethtype ip" 125 + vlan_capture_install $dev "skip_hw vlan_id $vid vlan_ethtype $ul_proto" 125 126 mirror_test v$h1 $ip1 $ip2 $dev 100 $expect 126 127 mirror_test v$h2 $ip2 $ip1 $dev 100 $expect 127 128 vlan_capture_uninstall $dev
+2 -2
tools/testing/selftests/net/forwarding/mirror_vlan.sh
··· 85 85 RET=0 86 86 87 87 mirror_install $swp1 $direction $swp3.555 "matchall $tcflags" 88 - do_test_span_vlan_dir_ips 10 "$h3.555" 111 "$direction" \ 88 + do_test_span_vlan_dir_ips 10 "$h3.555" 111 "$direction" ip \ 89 89 192.0.2.17 192.0.2.18 90 - do_test_span_vlan_dir_ips 0 "$h3.555" 555 "$direction" \ 90 + do_test_span_vlan_dir_ips 0 "$h3.555" 555 "$direction" ip \ 91 91 192.0.2.17 192.0.2.18 92 92 mirror_uninstall $swp1 $direction 93 93
+5 -4
tools/testing/selftests/net/gre_gso.sh
··· 116 116 { 117 117 local name=$1 118 118 local addr=$2 119 + local proto=$3 119 120 120 - $NS_EXEC nc -kl $port >/dev/null & 121 + $NS_EXEC nc $proto -kl $port >/dev/null & 121 122 PID=$! 122 123 while ! $NS_EXEC ss -ltn | grep -q $port; do ((i++)); sleep 0.01; done 123 124 124 - cat $TMPFILE | timeout 1 nc $addr $port 125 + cat $TMPFILE | timeout 1 nc $proto -N $addr $port 125 126 log_test $? 0 "$name - copy file w/ TSO" 126 127 127 128 ethtool -K veth0 tso off 128 129 129 - cat $TMPFILE | timeout 1 nc $addr $port 130 + cat $TMPFILE | timeout 1 nc $proto -N $addr $port 130 131 log_test $? 0 "$name - copy file w/ GSO" 131 132 132 133 ethtool -K veth0 tso on ··· 156 155 sleep 2 157 156 158 157 gre_gst_test_checks GREv6/v4 172.16.2.2 159 - gre_gst_test_checks GREv6/v6 2001:db8:1::2 158 + gre_gst_test_checks GREv6/v6 2001:db8:1::2 -6 160 159 161 160 cleanup 162 161 }
+4
tools/testing/selftests/net/reuseport_bpf_numa.c
··· 211 211 212 212 /* Forward iterate */ 213 213 for (node = 0; node < len; ++node) { 214 + if (!numa_bitmask_isbitset(numa_nodes_ptr, node)) 215 + continue; 214 216 send_from_node(node, family, proto); 215 217 receive_on_node(rcv_fd, len, epfd, node, proto); 216 218 } 217 219 218 220 /* Reverse iterate */ 219 221 for (node = len - 1; node >= 0; --node) { 222 + if (!numa_bitmask_isbitset(numa_nodes_ptr, node)) 223 + continue; 220 224 send_from_node(node, family, proto); 221 225 receive_on_node(rcv_fd, len, epfd, node, proto); 222 226 }
+2
tools/testing/selftests/net/test_vxlan_under_vrf.sh
··· 101 101 ip -netns hv-$id link set veth-tap master br0 102 102 ip -netns hv-$id link set veth-tap up 103 103 104 + ip link set veth-hv address 02:1d:8d:dd:0c:6$id 105 + 104 106 ip link set veth-hv netns vm-$id 105 107 ip -netns vm-$id addr add 10.0.0.$id/24 dev veth-hv 106 108 ip -netns vm-$id link set veth-hv up
-3
tools/testing/selftests/net/tls.c
··· 654 654 TEST_F(tls, recvmsg_multiple) 655 655 { 656 656 unsigned int msg_iovlen = 1024; 657 - unsigned int len_compared = 0; 658 657 struct iovec vec[1024]; 659 658 char *iov_base[1024]; 660 659 unsigned int iov_len = 16; ··· 674 675 hdr.msg_iovlen = msg_iovlen; 675 676 hdr.msg_iov = vec; 676 677 EXPECT_NE(recvmsg(self->cfd, &hdr, 0), -1); 677 - for (i = 0; i < msg_iovlen; i++) 678 - len_compared += iov_len; 679 678 680 679 for (i = 0; i < msg_iovlen; i++) 681 680 free(iov_base[i]);
+7 -4
tools/testing/selftests/net/udpgso_bench_rx.c
··· 293 293 294 294 static void parse_opts(int argc, char **argv) 295 295 { 296 + const char *bind_addr = NULL; 296 297 int c; 297 298 298 - /* bind to any by default */ 299 - setup_sockaddr(PF_INET6, "::", &cfg_bind_addr); 300 299 while ((c = getopt(argc, argv, "4b:C:Gl:n:p:rR:S:tv")) != -1) { 301 300 switch (c) { 302 301 case '4': 303 302 cfg_family = PF_INET; 304 303 cfg_alen = sizeof(struct sockaddr_in); 305 - setup_sockaddr(PF_INET, "0.0.0.0", &cfg_bind_addr); 306 304 break; 307 305 case 'b': 308 - setup_sockaddr(cfg_family, optarg, &cfg_bind_addr); 306 + bind_addr = optarg; 309 307 break; 310 308 case 'C': 311 309 cfg_connect_timeout_ms = strtoul(optarg, NULL, 0); ··· 338 340 break; 339 341 } 340 342 } 343 + 344 + if (!bind_addr) 345 + bind_addr = cfg_family == PF_INET6 ? "::" : "0.0.0.0"; 346 + 347 + setup_sockaddr(cfg_family, bind_addr, &cfg_bind_addr); 341 348 342 349 if (optind != argc) 343 350 usage(argv[0]);