
Merge tag 'net-6.0-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
"Including fixes from bluetooth, bpf, can and netfilter.

A little larger than usual but it's all fixes, no late features. It's
large partially because of timing, and partially because of follow-ups
to stuff that got merged a week or so before the merge window and
wasn't as widely tested. Maybe the Bluetooth fixes are a little
alarming so we'll address that, but the rest seems okay and not scary.

Notably we're including a fix for the netfilter Kconfig [1], your WiFi
warning [2] and a Bluetooth fix which should unblock syzbot [3].

Current release - regressions:

- Bluetooth:
- don't try to cancel uninitialized works [3]
- L2CAP: fix use-after-free caused by l2cap_chan_put

- tls: rx: fix device offload after recent rework

- devlink: fix UAF on failed reload and leftover locks in mlxsw

Current release - new code bugs:

- netfilter:
- flowtable: fix incorrect Kconfig dependencies [1]
- nf_tables: fix crash when nf_trace is enabled

- bpf:
- use proper target btf when exporting attach_btf_obj_id
- arm64: fixes for bpf trampoline support

- Bluetooth:
- ISO: unlock on error path in iso_sock_setsockopt()
- ISO: fix info leak in iso_sock_getsockopt()
- ISO: fix iso_sock_getsockopt for BT_DEFER_SETUP
- ISO: fix memory corruption on iso_pinfo.base
- ISO: fix not using the correct QoS
- hci_conn: fix updating ISO QoS PHY

- phy: dp83867: fix get nvmem cell fail

Previous releases - regressions:

- wifi: cfg80211: fix validating BSS pointers in
__cfg80211_connect_result [2]

- atm: bring back zatm uAPI after ATM had been removed

- properly fix an old bug that kept the bonding ARP monitor from
working with software devices using lockless Tx

- tap: fix null-deref on skb->dev in dev_parse_header_protocol

- revert "net: usb: ax88179_178a needs FLAG_SEND_ZLP"; it helps some
devices and breaks others

- netfilter:
- nf_tables: many fixes rejecting cross-object linking which may
lead to UAFs
- nf_tables: fix null deref due to zeroed list head
- nf_tables: validate variable length element extension

- bgmac: fix a BUG triggered by wrong bytes_compl

- bcmgenet: indicate MAC is in charge of PHY PM

Previous releases - always broken:

- bpf:
- fix bad pointer deref in bpf_sys_bpf() injected via test infra
- disallow non-builtin bpf programs calling the prog_run command
- don't reinit map value in prealloc_lru_pop
- fix UAFs during the read of map iterator fd
- fix invalidity check for values in sk local storage map
- reject sleepable program for non-resched map iterator

- mptcp:
- move subflow cleanup in mptcp_destroy_common()
- do not queue data on closed subflows

- virtio_net: fix memory leak inside XDP_TX with mergeable

- vsock: fix memory leak when multiple threads try to connect()

- rework sk_user_data sharing to prevent psock leaks

- geneve: fix TOS inheriting for ipv4

- tunnels & drivers: do not use RT_TOS for IPv6 flowlabel

- phy: c45 baset1: do not skip aneg configuration if clock role is
not specified

- rose: avoid overflow when /proc displays timer information

- x25: fix call timeouts in blocking connects

- can: mcp251x: fix race condition on receive interrupt

- can: j1939:
- replace user-reachable WARN_ON_ONCE() with netdev_warn_once()
- fix memory leak of skbs in j1939_session_destroy()

Misc:

- docs: bpf: clarify that many things are not uAPI

- seg6: initialize induction variable to first valid array index (to
silence clang vs objtool warning)

- can: ems_usb: fix clang 14's -Wunaligned-access warning"

* tag 'net-6.0-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (117 commits)
net: atm: bring back zatm uAPI
dpaa2-eth: trace the allocated address instead of page struct
net: add missing kdoc for struct genl_multicast_group::flags
nfp: fix use-after-free in area_cache_get()
MAINTAINERS: use my korg address for mt7601u
mlxsw: minimal: Fix deadlock in ports creation
bonding: fix reference count leak in balance-alb mode
net: usb: qmi_wwan: Add support for Cinterion MV32
bpf: Shut up kern_sys_bpf warning.
net/tls: Use RCU API to access tls_ctx->netdev
tls: rx: device: don't try to copy too much on detach
tls: rx: device: bound the frag walk
net_sched: cls_route: remove from list when handle is 0
selftests: forwarding: Fix failing tests with old libnet
net: refactor bpf_sk_reuseport_detach()
net: fix refcount bug in sk_psock_get (2)
selftests/bpf: Ensure sleepable program is rejected by hash map iter
selftests/bpf: Add write tests for sk local storage map iterator
selftests/bpf: Add tests for reading a dangling map iter fd
bpf: Only allow sleepable program for resched-able iterator
...

+1867 -648
+25
Documentation/bpf/bpf_design_QA.rst
···
  subject to change and can break with newer kernels. BPF programs need to change
  accordingly when this happens.

+ Q: Are places where kprobes can attach part of the stable ABI?
+ --------------------------------------------------------------
+ A: NO. The places to which kprobes can attach are internal implementation
+ details, which means that they are subject to change and can break with
+ newer kernels. BPF programs need to change accordingly when this happens.
+
  Q: How much stack space a BPF program uses?
  -------------------------------------------
  A: Currently all program types are limited to 512 bytes of stack
···
  functions has changed, both the in-tree and out-of-tree kernel tcp cc
  implementations have to be changed. The same goes for the bpf
  programs and they have to be adjusted accordingly.
+
+ Q: Attaching to arbitrary kernel functions is an ABI?
+ -----------------------------------------------------
+ Q: BPF programs can be attached to many kernel functions. Do these
+ kernel functions become part of the ABI?
+
+ A: NO.
+
+ The kernel function prototypes will change, and BPF programs attaching to
+ them will need to change. The BPF compile-once-run-everywhere (CO-RE)
+ should be used in order to make it easier to adapt your BPF programs to
+ different versions of the kernel.
+
+ Q: Marking a function with BTF_ID makes that function an ABI?
+ -------------------------------------------------------------
+ A: NO.
+
+ The BTF_ID macro does not cause a function to become part of the ABI
+ any more than does the EXPORT_SYMBOL_GPL macro.
-9
Documentation/networking/bonding.rst
···
  gives some assurance that traffic is actually flowing to and from one
  or more peers on the local network.

- The ARP monitor relies on the device driver itself to verify
- that traffic is flowing. In particular, the driver must keep up to
- date the last receive time, dev->last_rx. Drivers that use NETIF_F_LLTX
- flag must also update netdev_queue->trans_start. If they do not, then the
- ARP monitor will immediately fail any slaves using that driver, and
- those slaves will stay down. If networking monitoring (tcpdump, etc)
- shows the ARP requests and replies on the network, then it may be that
- your device driver is not updating last_rx and trans_start.
-
  7.2 Configuring Multiple ARP Targets
  ------------------------------------
+2 -2
MAINTAINERS
···
  F: arch/powerpc/platforms/powernv/vas*

  IBM Power Virtual Ethernet Device Driver
- M: Cristobal Forno <cforno12@linux.ibm.com>
+ M: Nick Child <nnac123@linux.ibm.com>
  L: netdev@vger.kernel.org
  S: Supported
  F: drivers/net/ethernet/ibm/ibmveth.*
···
  F: drivers/net/wireless/mediatek/mt76/

  MEDIATEK MT7601U WIRELESS LAN DRIVER
- M: Jakub Kicinski <kubakici@wp.pl>
+ M: Jakub Kicinski <kuba@kernel.org>
  L: linux-wireless@vger.kernel.org
  S: Maintained
  F: drivers/net/wireless/mediatek/mt7601u/
+8 -8
arch/arm64/net/bpf_jit_comp.c
···
	memset(&ctx, 0, sizeof(ctx));
	ctx.prog = prog;

-	ctx.offset = kcalloc(prog->len + 1, sizeof(int), GFP_KERNEL);
+	ctx.offset = kvcalloc(prog->len + 1, sizeof(int), GFP_KERNEL);
	if (ctx.offset == NULL) {
		prog = orig_prog;
		goto out_off;
···
		ctx.offset[i] *= AARCH64_INSN_SIZE;
	bpf_prog_fill_jited_linfo(prog, ctx.offset + 1);
out_off:
-	kfree(ctx.offset);
+	kvfree(ctx.offset);
	kfree(jit_data);
	prog->aux->jit_data = NULL;
}
···
			      int args_off, int retval_off, int run_ctx_off,
			      bool save_ret)
{
-	u32 *branch;
+	__le32 *branch;
	u64 enter_prog;
	u64 exit_prog;
	struct bpf_prog *p = l->link.prog;
···

	if (ctx->image) {
		int offset = &ctx->image[ctx->idx] - branch;
-		*branch = A64_CBZ(1, A64_R(0), offset);
+		*branch = cpu_to_le32(A64_CBZ(1, A64_R(0), offset));
	}

	/* arg1: prog */
···
static void invoke_bpf_mod_ret(struct jit_ctx *ctx, struct bpf_tramp_links *tl,
			       int args_off, int retval_off, int run_ctx_off,
-			       u32 **branches)
+			       __le32 **branches)
{
	int i;

···
	struct bpf_tramp_links *fexit = &tlinks[BPF_TRAMP_FEXIT];
	struct bpf_tramp_links *fmod_ret = &tlinks[BPF_TRAMP_MODIFY_RETURN];
	bool save_ret;
-	u32 **branches = NULL;
+	__le32 **branches = NULL;

	/* trampoline stack layout:
	 * [ parent ip ]
···
			      flags & BPF_TRAMP_F_RET_FENTRY_RET);

	if (fmod_ret->nr_links) {
-		branches = kcalloc(fmod_ret->nr_links, sizeof(u32 *),
+		branches = kcalloc(fmod_ret->nr_links, sizeof(__le32 *),
				   GFP_KERNEL);
		if (!branches)
			return -ENOMEM;
···
	/* update the branches saved in invoke_bpf_mod_ret with cbnz */
	for (i = 0; i < fmod_ret->nr_links && ctx->image != NULL; i++) {
		int offset = &ctx->image[ctx->idx] - branches[i];
-		*branches[i] = A64_CBNZ(1, A64_R(10), offset);
+		*branches[i] = cpu_to_le32(A64_CBNZ(1, A64_R(10), offset));
	}

	for (i = 0; i < fexit->nr_links; i++)
+1
drivers/atm/idt77252.c
···
		card = idt77252_chain;
		dev = card->atmdev;
		idt77252_chain = card->next;
+		del_timer_sync(&card->tst_timer);

		if (dev->phy->stop)
			dev->phy->stop(dev);
+10
drivers/net/bonding/bond_alb.c
···
static struct slave *rlb_arp_xmit(struct sk_buff *skb, struct bonding *bond)
{
	struct slave *tx_slave = NULL;
+	struct net_device *dev;
	struct arp_pkt *arp;

	if (!pskb_network_may_pull(skb, sizeof(*arp)))
···
	 */
	if (!bond_slave_has_mac_rx(bond, arp->mac_src))
		return NULL;
+
+	dev = ip_dev_find(dev_net(bond->dev), arp->ip_src);
+	if (dev) {
+		if (netif_is_bridge_master(dev)) {
+			dev_put(dev);
+			return NULL;
+		}
+		dev_put(dev);
+	}

	if (arp->op_code == htons(ARPOP_REPLY)) {
		/* the arp must be sent on the selected rx channel */
+28 -17
drivers/net/bonding/bond_main.c
···
	for (i = 0; i < BOND_MAX_ARP_TARGETS; i++)
		new_slave->target_last_arp_rx[i] = new_slave->last_rx;

+	new_slave->last_tx = new_slave->last_rx;
+
	if (bond->params.miimon && !bond->params.use_carrier) {
		link_reporting = bond_check_dev_link(bond, slave_dev, 1);

···
			return;
		}

-		if (bond_handle_vlan(slave, tags, skb))
+		if (bond_handle_vlan(slave, tags, skb)) {
+			slave_update_last_tx(slave);
			arp_xmit(skb);
+		}
+
		return;
	}

···
		     curr_active_slave->last_link_up))
		bond_validate_arp(bond, slave, tip, sip);
	else if (curr_arp_slave && (arp->ar_op == htons(ARPOP_REPLY)) &&
-		 bond_time_in_interval(bond,
-				       dev_trans_start(curr_arp_slave->dev), 1))
+		 bond_time_in_interval(bond, slave_last_tx(curr_arp_slave), 1))
		bond_validate_arp(bond, slave, sip, tip);

out_unlock:
···
	}

	addrconf_addr_solict_mult(daddr, &mcaddr);
-	if (bond_handle_vlan(slave, tags, skb))
+	if (bond_handle_vlan(slave, tags, skb)) {
+		slave_update_last_tx(slave);
		ndisc_send_skb(skb, &mcaddr, saddr);
+	}
}

static void bond_ns_send_all(struct bonding *bond, struct slave *slave)
···
		     curr_active_slave->last_link_up))
		bond_validate_ns(bond, slave, saddr, daddr);
	else if (curr_arp_slave &&
-		 bond_time_in_interval(bond,
-				       dev_trans_start(curr_arp_slave->dev), 1))
+		 bond_time_in_interval(bond, slave_last_tx(curr_arp_slave), 1))
		bond_validate_ns(bond, slave, saddr, daddr);

out:
···
	 * so it can wait
	 */
	bond_for_each_slave_rcu(bond, slave, iter) {
-		unsigned long trans_start = dev_trans_start(slave->dev);
+		unsigned long last_tx = slave_last_tx(slave);

		bond_propose_link_state(slave, BOND_LINK_NOCHANGE);

		if (slave->link != BOND_LINK_UP) {
-			if (bond_time_in_interval(bond, trans_start, 1) &&
+			if (bond_time_in_interval(bond, last_tx, 1) &&
			    bond_time_in_interval(bond, slave->last_rx, 1)) {

				bond_propose_link_state(slave, BOND_LINK_UP);
···
			 * when the source ip is 0, so don't take the link down
			 * if we don't know our ip yet
			 */
-			if (!bond_time_in_interval(bond, trans_start, bond->params.missed_max) ||
+			if (!bond_time_in_interval(bond, last_tx, bond->params.missed_max) ||
			    !bond_time_in_interval(bond, slave->last_rx, bond->params.missed_max)) {

				bond_propose_link_state(slave, BOND_LINK_DOWN);
···
 */
static int bond_ab_arp_inspect(struct bonding *bond)
{
-	unsigned long trans_start, last_rx;
+	unsigned long last_tx, last_rx;
	struct list_head *iter;
	struct slave *slave;
	int commit = 0;
···
		 *    - (more than missed_max*delta since receive AND
		 *       the bond has an IP address)
		 */
-		trans_start = dev_trans_start(slave->dev);
+		last_tx = slave_last_tx(slave);
		if (bond_is_active_slave(slave) &&
-		    (!bond_time_in_interval(bond, trans_start, bond->params.missed_max) ||
+		    (!bond_time_in_interval(bond, last_tx, bond->params.missed_max) ||
		     !bond_time_in_interval(bond, last_rx, bond->params.missed_max))) {
			bond_propose_link_state(slave, BOND_LINK_DOWN);
			commit++;
···
 */
static void bond_ab_arp_commit(struct bonding *bond)
{
-	unsigned long trans_start;
	struct list_head *iter;
+	unsigned long last_tx;
	struct slave *slave;

	bond_for_each_slave(bond, slave, iter) {
···
			continue;

		case BOND_LINK_UP:
-			trans_start = dev_trans_start(slave->dev);
+			last_tx = slave_last_tx(slave);
			if (rtnl_dereference(bond->curr_active_slave) != slave ||
			    (!rtnl_dereference(bond->curr_active_slave) &&
-			     bond_time_in_interval(bond, trans_start, 1))) {
+			     bond_time_in_interval(bond, last_tx, 1))) {
				struct slave *current_arp_slave;

				current_arp_slave = rtnl_dereference(bond->current_arp_slave);
···
static netdev_tx_t bond_tls_device_xmit(struct bonding *bond, struct sk_buff *skb,
					struct net_device *dev)
{
-	if (likely(bond_get_slave_by_dev(bond, tls_get_ctx(skb->sk)->netdev)))
-		return bond_dev_queue_xmit(bond, skb, tls_get_ctx(skb->sk)->netdev);
+	struct net_device *tls_netdev = rcu_dereference(tls_get_ctx(skb->sk)->netdev);
+
+	/* tls_netdev might become NULL, even if tls_is_sk_tx_device_offloaded
+	 * was true, if tls_device_down is running in parallel, but it's OK,
+	 * because bond_get_slave_by_dev has a NULL check.
+	 */
+	if (likely(bond_get_slave_by_dev(bond, tls_netdev)))
+		return bond_dev_queue_xmit(bond, skb, tls_netdev);
	return bond_tx_drop(dev, skb);
}
#endif
+15 -3
drivers/net/can/spi/mcp251x.c
···

	mcp251x_read_2regs(spi, CANINTF, &intf, &eflag);

-	/* mask out flags we don't care about */
-	intf &= CANINTF_RX | CANINTF_TX | CANINTF_ERR;
-
	/* receive buffer 0 */
	if (intf & CANINTF_RX0IF) {
		mcp251x_hw_rx(spi, 0);
···
		if (mcp251x_is_2510(spi))
			mcp251x_write_bits(spi, CANINTF,
					   CANINTF_RX0IF, 0x00);
+
+		/* check if buffer 1 is already known to be full, no need to re-read */
+		if (!(intf & CANINTF_RX1IF)) {
+			u8 intf1, eflag1;
+
+			/* intf needs to be read again to avoid a race condition */
+			mcp251x_read_2regs(spi, CANINTF, &intf1, &eflag1);
+
+			/* combine flags from both operations for error handling */
+			intf |= intf1;
+			eflag |= eflag1;
+		}
	}

	/* receive buffer 1 */
···
		if (mcp251x_is_2510(spi))
			clear_intf |= CANINTF_RX1IF;
	}
+
+	/* mask out flags we don't care about */
+	intf &= CANINTF_RX | CANINTF_TX | CANINTF_ERR;

	/* any error or tx interrupt we need to clear? */
	if (intf & (CANINTF_ERR | CANINTF_TX))
+1 -1
drivers/net/can/usb/ems_usb.c
···
	__le32 ts_sec;	/* timestamp in seconds */
	__le32 ts_nsec;	/* timestamp in nano seconds */

-	union {
+	union __packed {
		u8 generic[64];
		struct cpc_can_msg can_msg;
		struct cpc_can_params can_params;
+3
drivers/net/dsa/ocelot/felix.c
···

	old_proto_ops = felix->tag_proto_ops;

+	if (proto_ops == old_proto_ops)
+		return 0;
+
	err = proto_ops->setup(ds);
	if (err)
		goto setup_failed;
+14 -1
drivers/net/dsa/ocelot/felix_vsc9959.c
···
{
	struct tc_taprio_sched_entry *entry;
	u64 gate_len[OCELOT_NUM_TC];
+	u8 gates_ever_opened = 0;
	int tc, i, n;

	/* Initialize arrays */
···
		for (tc = 0; tc < OCELOT_NUM_TC; tc++) {
			if (entry->gate_mask & BIT(tc)) {
				gate_len[tc] += entry->interval;
+				gates_ever_opened |= BIT(tc);
			} else {
				/* Gate closes now, record a potential new
				 * minimum and reinitialize length
				 */
-				if (min_gate_len[tc] > gate_len[tc])
+				if (min_gate_len[tc] > gate_len[tc] &&
+				    gate_len[tc])
					min_gate_len[tc] = gate_len[tc];
				gate_len[tc] = 0;
			}
		}
	}
+
+	/* min_gate_len[tc] actually tracks minimum *open* gate time, so for
+	 * permanently closed gates, min_gate_len[tc] will still be U64_MAX.
+	 * Therefore they are currently indistinguishable from permanently
+	 * open gates. Overwrite the gate len with 0 when we know they're
+	 * actually permanently closed, i.e. after the loop above.
+	 */
+	for (tc = 0; tc < OCELOT_NUM_TC; tc++)
+		if (!(gates_ever_opened & BIT(tc)))
+			min_gate_len[tc] = 0;
}

/* Update QSYS_PORT_MAX_SDU to make sure the static guard bands added by the
+8 -13
drivers/net/ethernet/aquantia/atlantic/aq_nic.c
···
static void aq_nic_polling_timer_cb(struct timer_list *t)
{
	struct aq_nic_s *self = from_timer(self, t, polling_timer);
-	struct aq_vec_s *aq_vec = NULL;
	unsigned int i = 0U;

-	for (i = 0U, aq_vec = self->aq_vec[0];
-		self->aq_vecs > i; ++i, aq_vec = self->aq_vec[i])
-		aq_vec_isr(i, (void *)aq_vec);
+	for (i = 0U; self->aq_vecs > i; ++i)
+		aq_vec_isr(i, (void *)self->aq_vec[i]);

	mod_timer(&self->polling_timer, jiffies +
		  AQ_CFG_POLLING_TIMER_INTERVAL);
···

u64 *aq_nic_get_stats(struct aq_nic_s *self, u64 *data)
{
-	struct aq_vec_s *aq_vec = NULL;
	struct aq_stats_s *stats;
	unsigned int count = 0U;
	unsigned int i = 0U;
···
	data += i;

	for (tc = 0U; tc < self->aq_nic_cfg.tcs; tc++) {
-		for (i = 0U, aq_vec = self->aq_vec[0];
-		     aq_vec && self->aq_vecs > i;
-		     ++i, aq_vec = self->aq_vec[i]) {
+		for (i = 0U; self->aq_vecs > i; ++i) {
+			if (!self->aq_vec[i])
+				break;
			data += count;
-			count = aq_vec_get_sw_stats(aq_vec, tc, data);
+			count = aq_vec_get_sw_stats(self->aq_vec[i], tc, data);
		}
	}

···

int aq_nic_stop(struct aq_nic_s *self)
{
-	struct aq_vec_s *aq_vec = NULL;
	unsigned int i = 0U;

	netif_tx_disable(self->ndev);
···

	aq_ptp_irq_free(self);

-	for (i = 0U, aq_vec = self->aq_vec[0];
-		self->aq_vecs > i; ++i, aq_vec = self->aq_vec[i])
-		aq_vec_stop(aq_vec);
+	for (i = 0U; self->aq_vecs > i; ++i)
+		aq_vec_stop(self->aq_vec[i]);

	aq_ptp_ring_stop(self);

+1 -1
drivers/net/ethernet/broadcom/bgmac.c
···
	}

	slot->skb = skb;
-	ring->end += nr_frags + 1;
	netdev_sent_queue(net_dev, skb->len);
+	ring->end += nr_frags + 1;

	wmb();

-1
drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c
···
#include "bnxt_ptp.h"
#include "bnxt_coredump.h"
#include "bnxt_nvm_defs.h"
-#include "bnxt_ethtool.h"

static void __bnxt_fw_recover(struct bnxt *bp)
{
+3
drivers/net/ethernet/broadcom/genet/bcmmii.c
···
	if (priv->internal_phy && !GENET_IS_V5(priv))
		dev->phydev->irq = PHY_MAC_INTERRUPT;

+	/* Indicate that the MAC is responsible for PHY PM */
+	dev->phydev->mac_managed_pm = true;
+
	return 0;
}

+7 -1
drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c
···
	int data_len, qidx, ret = 0, mss;
	struct tls_record_info *record;
	struct chcr_ktls_info *tx_info;
+	struct net_device *tls_netdev;
	struct tls_context *tls_ctx;
	struct sge_eth_txq *q;
	struct adapter *adap;
···
	mss = skb_is_gso(skb) ? skb_shinfo(skb)->gso_size : data_len;

	tls_ctx = tls_get_ctx(skb->sk);
-	if (unlikely(tls_ctx->netdev != dev))
+	tls_netdev = rcu_dereference_bh(tls_ctx->netdev);
+	/* Don't quit on NULL: if tls_device_down is running in parallel,
+	 * netdev might become NULL, even if tls_is_sk_tx_device_offloaded was
+	 * true. Rather continue processing this packet.
+	 */
+	if (unlikely(tls_netdev && tls_netdev != dev))
		goto out;

	tx_ctx = chcr_get_ktls_tx_context(tls_ctx);
+5 -5
drivers/net/ethernet/engleder/tsnep_main.c
···
	return 0;
}

-static void tsnep_tx_unmap(struct tsnep_tx *tx, int count)
+static void tsnep_tx_unmap(struct tsnep_tx *tx, int index, int count)
{
	struct device *dmadev = tx->adapter->dmadev;
	struct tsnep_tx_entry *entry;
	int i;

	for (i = 0; i < count; i++) {
-		entry = &tx->entry[(tx->read + i) % TSNEP_RING_SIZE];
+		entry = &tx->entry[(index + i) % TSNEP_RING_SIZE];

		if (entry->len) {
			if (i == 0)
···

	retval = tsnep_tx_map(skb, tx, count);
	if (retval != 0) {
-		tsnep_tx_unmap(tx, count);
+		tsnep_tx_unmap(tx, tx->write, count);
		dev_kfree_skb_any(entry->skb);
		entry->skb = NULL;

···
	if (skb_shinfo(entry->skb)->nr_frags > 0)
		count += skb_shinfo(entry->skb)->nr_frags;

-	tsnep_tx_unmap(tx, count);
+	tsnep_tx_unmap(tx, tx->read, count);

	if ((skb_shinfo(entry->skb)->tx_flags & SKBTX_IN_PROGRESS) &&
	    (__le32_to_cpu(entry->desc_wb->properties) &
···
static struct platform_driver tsnep_driver = {
	.driver = {
		.name = TSNEP,
-		.of_match_table = of_match_ptr(tsnep_of_match),
+		.of_match_table = tsnep_of_match,
	},
	.probe = tsnep_probe,
	.remove = tsnep_remove,
+2 -2
drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
···
		buf_array[i] = addr;

		/* tracing point */
-		trace_dpaa2_eth_buf_seed(priv->net_dev,
-					 page, DPAA2_ETH_RX_BUF_RAW_SIZE,
+		trace_dpaa2_eth_buf_seed(priv->net_dev, page_address(page),
+					 DPAA2_ETH_RX_BUF_RAW_SIZE,
					 addr, priv->rx_buf_size,
					 bpid);
	}
+6
drivers/net/ethernet/marvell/octeontx2/af/rvu.c
···
	rvu_blklf_teardown(rvu, pcifunc, BLKADDR_NPA);
	rvu_reset_lmt_map_tbl(rvu, pcifunc);
	rvu_detach_rsrcs(rvu, NULL, pcifunc);
+	/* In scenarios where PF/VF drivers detach NIXLF without freeing MCAM
+	 * entries, check and free the MCAM entries explicitly to avoid leak.
+	 * Since LF is detached use LF number as -1.
+	 */
+	rvu_npc_free_mcam_entries(rvu, pcifunc, -1);
+
	mutex_unlock(&rvu->flr_lock);
}

+11 -4
drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
···

void rvu_npc_disable_default_entries(struct rvu *rvu, u16 pcifunc, int nixlf)
{
+	if (nixlf < 0)
+		return;
+
	npc_enadis_default_entries(rvu, pcifunc, nixlf, false);

	/* Delete multicast and promisc MCAM entries */
···

void rvu_npc_enable_default_entries(struct rvu *rvu, u16 pcifunc, int nixlf)
{
+	if (nixlf < 0)
+		return;
+
	/* Enables only broadcast match entry. Promisc/Allmulti are enabled
	 * in set_rx_mode mbox handler.
	 */
···
	 * Firmware database method.
	 * Default KPU profile.
	 */
-	if (!request_firmware(&fw, kpu_profile, rvu->dev)) {
+	if (!request_firmware_direct(&fw, kpu_profile, rvu->dev)) {
		dev_info(rvu->dev, "Loading KPU profile from firmware: %s\n",
			 kpu_profile);
		rvu->kpu_fwdata = kzalloc(fw->size, GFP_KERNEL);
···

static void rvu_npc_setup_interfaces(struct rvu *rvu, int blkaddr)
{
+	struct npc_mcam_kex *mkex = rvu->kpu.mkex;
	struct npc_mcam *mcam = &rvu->hw->mcam;
	struct rvu_hwinfo *hw = rvu->hw;
	u64 nibble_ena, rx_kex, tx_kex;
···
	mcam->counters.max--;
	mcam->rx_miss_act_cntr = mcam->counters.max;

-	rx_kex = npc_mkex_default.keyx_cfg[NIX_INTF_RX];
-	tx_kex = npc_mkex_default.keyx_cfg[NIX_INTF_TX];
+	rx_kex = mkex->keyx_cfg[NIX_INTF_RX];
+	tx_kex = mkex->keyx_cfg[NIX_INTF_TX];
	nibble_ena = FIELD_GET(NPC_PARSE_NIBBLE, rx_kex);

	nibble_ena = rvu_npc_get_tx_nibble_cfg(rvu, nibble_ena);
	if (nibble_ena) {
		tx_kex &= ~NPC_PARSE_NIBBLE;
		tx_kex |= FIELD_PREP(NPC_PARSE_NIBBLE, nibble_ena);
-		npc_mkex_default.keyx_cfg[NIX_INTF_TX] = tx_kex;
+		mkex->keyx_cfg[NIX_INTF_TX] = tx_kex;
	}

	/* Configure RX interfaces */
+2 -1
drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c
···
	NPC_SCAN_HDR(NPC_VLAN_TAG1, NPC_LID_LB, NPC_LT_LB_CTAG, 2, 2);
	NPC_SCAN_HDR(NPC_VLAN_TAG2, NPC_LID_LB, NPC_LT_LB_STAG_QINQ, 2, 2);
	NPC_SCAN_HDR(NPC_DMAC, NPC_LID_LA, la_ltype, la_start, 6);
-	NPC_SCAN_HDR(NPC_SMAC, NPC_LID_LA, la_ltype, la_start, 6);
+	/* SMAC follows the DMAC(which is 6 bytes) */
+	NPC_SCAN_HDR(NPC_SMAC, NPC_LID_LA, la_ltype, la_start + 6, 6);
	/* PF_FUNC is 2 bytes at 0th byte of NPC_LT_LA_IH_NIX_ETHER */
	NPC_SCAN_HDR(NPC_PF_FUNC, NPC_LID_LA, NPC_LT_LA_IH_NIX_ETHER, 0, 2);
}
+14 -5
drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
···
		req->num_regs++;
		req->reg[1] = NIX_AF_TL3X_SCHEDULE(schq);
		req->regval[1] = dwrr_val;
+		if (lvl == hw->txschq_link_cfg_lvl) {
+			req->num_regs++;
+			req->reg[2] = NIX_AF_TL3_TL2X_LINKX_CFG(schq, hw->tx_link);
+			/* Enable this queue and backpressure */
+			req->regval[2] = BIT_ULL(13) | BIT_ULL(12);
+		}
	} else if (lvl == NIX_TXSCH_LVL_TL2) {
		parent = hw->txschq_list[NIX_TXSCH_LVL_TL1][0];
		req->reg[0] = NIX_AF_TL2X_PARENT(schq);
···
		req->reg[1] = NIX_AF_TL2X_SCHEDULE(schq);
		req->regval[1] = TXSCH_TL1_DFLT_RR_PRIO << 24 | dwrr_val;

-		req->num_regs++;
-		req->reg[2] = NIX_AF_TL3_TL2X_LINKX_CFG(schq, hw->tx_link);
-		/* Enable this queue and backpressure */
-		req->regval[2] = BIT_ULL(13) | BIT_ULL(12);
-
+		if (lvl == hw->txschq_link_cfg_lvl) {
+			req->num_regs++;
+			req->reg[2] = NIX_AF_TL3_TL2X_LINKX_CFG(schq, hw->tx_link);
+			/* Enable this queue and backpressure */
+			req->regval[2] = BIT_ULL(13) | BIT_ULL(12);
+		}
	} else if (lvl == NIX_TXSCH_LVL_TL1) {
		/* Default config for TL1.
		 * For VF this is always ignored.
···
		for (schq = 0; schq < rsp->schq[lvl]; schq++)
			pf->hw.txschq_list[lvl][schq] =
				rsp->schq_list[lvl][schq];
+
+	pf->hw.txschq_link_cfg_lvl = rsp->link_cfg_lvl;
}
EXPORT_SYMBOL(mbox_handler_nix_txsch_alloc);

+1
drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
···
	u16 sqb_size;

	/* NIX */
+	u8 txschq_link_cfg_lvl;
	u16 txschq_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
	u16 matchall_ipolicer;
	u32 dwrr_mtu;
+2 -2
drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
···
	int err;

	attr.ttl = tun_key->ttl;
-	attr.fl.fl6.flowlabel = ip6_make_flowinfo(RT_TOS(tun_key->tos), tun_key->label);
+	attr.fl.fl6.flowlabel = ip6_make_flowinfo(tun_key->tos, tun_key->label);
	attr.fl.fl6.daddr = tun_key->u.ipv6.dst;
	attr.fl.fl6.saddr = tun_key->u.ipv6.src;

···

	attr.ttl = tun_key->ttl;

-	attr.fl.fl6.flowlabel = ip6_make_flowinfo(RT_TOS(tun_key->tos), tun_key->label);
+	attr.fl.fl6.flowlabel = ip6_make_flowinfo(tun_key->tos, tun_key->label);
	attr.fl.fl6.daddr = tun_key->u.ipv6.dst;
	attr.fl.fl6.saddr = tun_key->u.ipv6.src;

+7 -1
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
···
{
	struct mlx5e_ktls_offload_context_tx *priv_tx;
	struct mlx5e_sq_stats *stats = sq->stats;
+	struct net_device *tls_netdev;
	struct tls_context *tls_ctx;
	int datalen;
	u32 seq;
···
	mlx5e_tx_mpwqe_ensure_complete(sq);

	tls_ctx = tls_get_ctx(skb->sk);
-	if (WARN_ON_ONCE(tls_ctx->netdev != netdev))
+	tls_netdev = rcu_dereference_bh(tls_ctx->netdev);
+	/* Don't WARN on NULL: if tls_device_down is running in parallel,
+	 * netdev might become NULL, even if tls_is_sk_tx_device_offloaded was
+	 * true. Rather continue processing this packet.
+	 */
+	if (WARN_ON_ONCE(tls_netdev && tls_netdev != netdev))
		goto err_out;

	priv_tx = mlx5e_get_ktls_tx_priv_ctx(tls_ctx);
-7
drivers/net/ethernet/mellanox/mlxsw/minimal.c
···
static int mlxsw_m_ports_create(struct mlxsw_m *mlxsw_m)
{
	unsigned int max_ports = mlxsw_core_max_ports(mlxsw_m->core);
-	struct devlink *devlink = priv_to_devlink(mlxsw_m->core);
	u8 last_module = max_ports;
	int i;
	int err;
···
	}

	/* Create port objects for each valid entry */
-	devl_lock(devlink);
	for (i = 0; i < mlxsw_m->max_ports; i++) {
		if (mlxsw_m->module_to_port[i] > 0) {
			err = mlxsw_m_port_create(mlxsw_m,
···
				goto err_module_to_port_create;
		}
	}
-	devl_unlock(devlink);

	return 0;

···
			mlxsw_m_port_remove(mlxsw_m,
					    mlxsw_m->module_to_port[i]);
	}
-	devl_unlock(devlink);
	i = max_ports;
err_module_to_port_map:
	for (i--; i > 0; i--)
···

static void mlxsw_m_ports_remove(struct mlxsw_m *mlxsw_m)
{
-	struct devlink *devlink = priv_to_devlink(mlxsw_m->core);
	int i;

-	devl_lock(devlink);
	for (i = 0; i < mlxsw_m->max_ports; i++) {
		if (mlxsw_m->module_to_port[i] > 0) {
			mlxsw_m_port_remove(mlxsw_m,
···
					    mlxsw_m->module_to_port[i]);
			mlxsw_m_port_module_unmap(mlxsw_m, i);
		}
	}
-	devl_unlock(devlink);

	kfree(mlxsw_m->module_to_port);
	kfree(mlxsw_m->ports);
+2
drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
··· 1395 1395 u8 data; 1396 1396 1397 1397 port = nfp_port_from_netdev(netdev); 1398 + /* update port state to get latest interface */ 1399 + set_bit(NFP_PORT_CHANGED, &port->flags); 1398 1400 eth_port = nfp_port_get_eth_port(port); 1399 1401 if (!eth_port) 1400 1402 return -EOPNOTSUPP;
+2 -1
drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cppcore.c
··· 874 874 } 875 875 876 876 /* Adjust the start address to be cache size aligned */ 877 - cache->id = id; 878 877 cache->addr = addr & ~(u64)(cache->size - 1); 879 878 880 879 /* Re-init to the new ID and address */ ··· 892 893 mutex_unlock(&cpp->area_cache_mutex); 893 894 return NULL; 894 895 } 896 + 897 + cache->id = id; 895 898 896 899 exit: 897 900 /* Adjust offset */
+3 -3
drivers/net/ethernet/wangxun/Kconfig
··· 7 7 bool "Wangxun devices" 8 8 default y 9 9 help 10 - If you have a network (Ethernet) card belonging to this class, say Y. 10 + If you have a network (Ethernet) card from Wangxun(R), say Y. 11 11 12 12 Note that the answer to this question doesn't directly affect the 13 13 kernel: saying N will just cause the configurator to skip all 14 - the questions about Intel cards. If you say Y, you will be asked for 15 - your specific card in the following questions. 14 + the questions about Wangxun(R) cards. If you say Y, you will 15 + be asked for your specific card in the following questions. 16 16 17 17 if NET_VENDOR_WANGXUN 18 18
+9 -6
drivers/net/geneve.c
··· 797 797 struct geneve_sock *gs4, 798 798 struct flowi4 *fl4, 799 799 const struct ip_tunnel_info *info, 800 - __be16 dport, __be16 sport) 800 + __be16 dport, __be16 sport, 801 + __u8 *full_tos) 801 802 { 802 803 bool use_cache = ip_tunnel_dst_cache_usable(skb, info); 803 804 struct geneve_dev *geneve = netdev_priv(dev); ··· 824 823 use_cache = false; 825 824 } 826 825 fl4->flowi4_tos = RT_TOS(tos); 826 + if (full_tos) 827 + *full_tos = tos; 827 828 828 829 dst_cache = (struct dst_cache *)&info->dst_cache; 829 830 if (use_cache) { ··· 879 876 use_cache = false; 880 877 } 881 878 882 - fl6->flowlabel = ip6_make_flowinfo(RT_TOS(prio), 883 - info->key.label); 879 + fl6->flowlabel = ip6_make_flowinfo(prio, info->key.label); 884 880 dst_cache = (struct dst_cache *)&info->dst_cache; 885 881 if (use_cache) { 886 882 dst = dst_cache_get_ip6(dst_cache, &fl6->saddr); ··· 913 911 const struct ip_tunnel_key *key = &info->key; 914 912 struct rtable *rt; 915 913 struct flowi4 fl4; 914 + __u8 full_tos; 916 915 __u8 tos, ttl; 917 916 __be16 df = 0; 918 917 __be16 sport; ··· 924 921 925 922 sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true); 926 923 rt = geneve_get_v4_rt(skb, dev, gs4, &fl4, info, 927 - geneve->cfg.info.key.tp_dst, sport); 924 + geneve->cfg.info.key.tp_dst, sport, &full_tos); 928 925 if (IS_ERR(rt)) 929 926 return PTR_ERR(rt); 930 927 ··· 968 965 969 966 df = key->tun_flags & TUNNEL_DONT_FRAGMENT ? htons(IP_DF) : 0; 970 967 } else { 971 - tos = ip_tunnel_ecn_encap(fl4.flowi4_tos, ip_hdr(skb), skb); 968 + tos = ip_tunnel_ecn_encap(full_tos, ip_hdr(skb), skb); 972 969 if (geneve->cfg.ttl_inherit) 973 970 ttl = ip_tunnel_get_ttl(ip_hdr(skb), skb); 974 971 else ··· 1152 1149 1, USHRT_MAX, true); 1153 1150 1154 1151 rt = geneve_get_v4_rt(skb, dev, gs4, &fl4, info, 1155 - geneve->cfg.info.key.tp_dst, sport); 1152 + geneve->cfg.info.key.tp_dst, sport, NULL); 1156 1153 if (IS_ERR(rt)) 1157 1154 return PTR_ERR(rt); 1158 1155
+49 -9
drivers/net/macsec.c
··· 162 162 return sa; 163 163 } 164 164 165 + static struct macsec_rx_sa *macsec_active_rxsa_get(struct macsec_rx_sc *rx_sc) 166 + { 167 + struct macsec_rx_sa *sa = NULL; 168 + int an; 169 + 170 + for (an = 0; an < MACSEC_NUM_AN; an++) { 171 + sa = macsec_rxsa_get(rx_sc->sa[an]); 172 + if (sa) 173 + break; 174 + } 175 + return sa; 176 + } 177 + 165 178 static void free_rx_sc_rcu(struct rcu_head *head) 166 179 { 167 180 struct macsec_rx_sc *rx_sc = container_of(head, struct macsec_rx_sc, rcu_head); ··· 513 500 skb->protocol = eth_hdr(skb)->h_proto; 514 501 } 515 502 503 + static unsigned int macsec_msdu_len(struct sk_buff *skb) 504 + { 505 + struct macsec_dev *macsec = macsec_priv(skb->dev); 506 + struct macsec_secy *secy = &macsec->secy; 507 + bool sci_present = macsec_skb_cb(skb)->has_sci; 508 + 509 + return skb->len - macsec_hdr_len(sci_present) - secy->icv_len; 510 + } 511 + 516 512 static void macsec_count_tx(struct sk_buff *skb, struct macsec_tx_sc *tx_sc, 517 513 struct macsec_tx_sa *tx_sa) 518 514 { 515 + unsigned int msdu_len = macsec_msdu_len(skb); 519 516 struct pcpu_tx_sc_stats *txsc_stats = this_cpu_ptr(tx_sc->stats); 520 517 521 518 u64_stats_update_begin(&txsc_stats->syncp); 522 519 if (tx_sc->encrypt) { 523 - txsc_stats->stats.OutOctetsEncrypted += skb->len; 520 + txsc_stats->stats.OutOctetsEncrypted += msdu_len; 524 521 txsc_stats->stats.OutPktsEncrypted++; 525 522 this_cpu_inc(tx_sa->stats->OutPktsEncrypted); 526 523 } else { 527 - txsc_stats->stats.OutOctetsProtected += skb->len; 524 + txsc_stats->stats.OutOctetsProtected += msdu_len; 528 525 txsc_stats->stats.OutPktsProtected++; 529 526 this_cpu_inc(tx_sa->stats->OutPktsProtected); 530 527 } ··· 564 541 aead_request_free(macsec_skb_cb(skb)->req); 565 542 566 543 rcu_read_lock_bh(); 567 - macsec_encrypt_finish(skb, dev); 568 544 macsec_count_tx(skb, &macsec->secy.tx_sc, macsec_skb_cb(skb)->tx_sa); 569 - len = skb->len; 545 + /* packet is encrypted/protected so tx_bytes must be calculated */ 546 + 
len = macsec_msdu_len(skb) + 2 * ETH_ALEN; 547 + macsec_encrypt_finish(skb, dev); 570 548 ret = dev_queue_xmit(skb); 571 549 count_tx(dev, ret, len); 572 550 rcu_read_unlock_bh(); ··· 726 702 727 703 macsec_skb_cb(skb)->req = req; 728 704 macsec_skb_cb(skb)->tx_sa = tx_sa; 705 + macsec_skb_cb(skb)->has_sci = sci_present; 729 706 aead_request_set_callback(req, 0, macsec_encrypt_done, skb); 730 707 731 708 dev_hold(skb->dev); ··· 768 743 u64_stats_update_begin(&rxsc_stats->syncp); 769 744 rxsc_stats->stats.InPktsLate++; 770 745 u64_stats_update_end(&rxsc_stats->syncp); 746 + secy->netdev->stats.rx_dropped++; 771 747 return false; 772 748 } 773 749 774 750 if (secy->validate_frames != MACSEC_VALIDATE_DISABLED) { 751 + unsigned int msdu_len = macsec_msdu_len(skb); 775 752 u64_stats_update_begin(&rxsc_stats->syncp); 776 753 if (hdr->tci_an & MACSEC_TCI_E) 777 - rxsc_stats->stats.InOctetsDecrypted += skb->len; 754 + rxsc_stats->stats.InOctetsDecrypted += msdu_len; 778 755 else 779 - rxsc_stats->stats.InOctetsValidated += skb->len; 756 + rxsc_stats->stats.InOctetsValidated += msdu_len; 780 757 u64_stats_update_end(&rxsc_stats->syncp); 781 758 } 782 759 ··· 791 764 u64_stats_update_begin(&rxsc_stats->syncp); 792 765 rxsc_stats->stats.InPktsNotValid++; 793 766 u64_stats_update_end(&rxsc_stats->syncp); 767 + this_cpu_inc(rx_sa->stats->InPktsNotValid); 768 + secy->netdev->stats.rx_errors++; 794 769 return false; 795 770 } 796 771 ··· 885 856 886 857 macsec_finalize_skb(skb, macsec->secy.icv_len, 887 858 macsec_extra_len(macsec_skb_cb(skb)->has_sci)); 859 + len = skb->len; 888 860 macsec_reset_skb(skb, macsec->secy.netdev); 889 861 890 - len = skb->len; 891 862 if (gro_cells_receive(&macsec->gro_cells, skb) == NET_RX_SUCCESS) 892 863 count_rx(dev, len); 893 864 ··· 1078 1049 u64_stats_update_begin(&secy_stats->syncp); 1079 1050 secy_stats->stats.InPktsNoTag++; 1080 1051 u64_stats_update_end(&secy_stats->syncp); 1052 + macsec->secy.netdev->stats.rx_dropped++; 1081 1053 
continue; 1082 1054 } 1083 1055 ··· 1188 1158 u64_stats_update_begin(&secy_stats->syncp); 1189 1159 secy_stats->stats.InPktsBadTag++; 1190 1160 u64_stats_update_end(&secy_stats->syncp); 1161 + secy->netdev->stats.rx_errors++; 1191 1162 goto drop_nosa; 1192 1163 } 1193 1164 ··· 1199 1168 /* If validateFrames is Strict or the C bit in the 1200 1169 * SecTAG is set, discard 1201 1170 */ 1171 + struct macsec_rx_sa *active_rx_sa = macsec_active_rxsa_get(rx_sc); 1202 1172 if (hdr->tci_an & MACSEC_TCI_C || 1203 1173 secy->validate_frames == MACSEC_VALIDATE_STRICT) { 1204 1174 u64_stats_update_begin(&rxsc_stats->syncp); 1205 1175 rxsc_stats->stats.InPktsNotUsingSA++; 1206 1176 u64_stats_update_end(&rxsc_stats->syncp); 1177 + secy->netdev->stats.rx_errors++; 1178 + if (active_rx_sa) 1179 + this_cpu_inc(active_rx_sa->stats->InPktsNotUsingSA); 1207 1180 goto drop_nosa; 1208 1181 } 1209 1182 ··· 1217 1182 u64_stats_update_begin(&rxsc_stats->syncp); 1218 1183 rxsc_stats->stats.InPktsUnusedSA++; 1219 1184 u64_stats_update_end(&rxsc_stats->syncp); 1185 + if (active_rx_sa) 1186 + this_cpu_inc(active_rx_sa->stats->InPktsUnusedSA); 1220 1187 goto deliver; 1221 1188 } 1222 1189 ··· 1239 1202 u64_stats_update_begin(&rxsc_stats->syncp); 1240 1203 rxsc_stats->stats.InPktsLate++; 1241 1204 u64_stats_update_end(&rxsc_stats->syncp); 1205 + macsec->secy.netdev->stats.rx_dropped++; 1242 1206 goto drop; 1243 1207 } 1244 1208 } ··· 1268 1230 deliver: 1269 1231 macsec_finalize_skb(skb, secy->icv_len, 1270 1232 macsec_extra_len(macsec_skb_cb(skb)->has_sci)); 1233 + len = skb->len; 1271 1234 macsec_reset_skb(skb, secy->netdev); 1272 1235 1273 1236 if (rx_sa) ··· 1276 1237 macsec_rxsc_put(rx_sc); 1277 1238 1278 1239 skb_orphan(skb); 1279 - len = skb->len; 1280 1240 ret = gro_cells_receive(&macsec->gro_cells, skb); 1281 1241 if (ret == NET_RX_SUCCESS) 1282 1242 count_rx(dev, len); ··· 1317 1279 u64_stats_update_begin(&secy_stats->syncp); 1318 1280 secy_stats->stats.InPktsNoSCI++; 1319 1281 
u64_stats_update_end(&secy_stats->syncp); 1282 + macsec->secy.netdev->stats.rx_errors++; 1320 1283 continue; 1321 1284 } 1322 1285 ··· 3443 3404 return NETDEV_TX_OK; 3444 3405 } 3445 3406 3407 + len = skb->len; 3446 3408 skb = macsec_encrypt(skb, dev); 3447 3409 if (IS_ERR(skb)) { 3448 3410 if (PTR_ERR(skb) != -EINPROGRESS) ··· 3454 3414 macsec_count_tx(skb, &macsec->secy.tx_sc, macsec_skb_cb(skb)->tx_sa); 3455 3415 3456 3416 macsec_encrypt_finish(skb, dev); 3457 - len = skb->len; 3458 3417 ret = dev_queue_xmit(skb); 3459 3418 count_tx(dev, ret, len); 3460 3419 return ret; ··· 3701 3662 3702 3663 s->rx_dropped = dev->stats.rx_dropped; 3703 3664 s->tx_dropped = dev->stats.tx_dropped; 3665 + s->rx_errors = dev->stats.rx_errors; 3704 3666 } 3705 3667 3706 3668 static int macsec_get_iflink(const struct net_device *dev)
+1 -1
drivers/net/phy/dp83867.c
··· 535 535 cell = of_nvmem_cell_get(of_node, "io_impedance_ctrl"); 536 536 if (IS_ERR(cell)) { 537 537 ret = PTR_ERR(cell); 538 - if (ret != -ENOENT) 538 + if (ret != -ENOENT && ret != -EOPNOTSUPP) 539 539 return phydev_err_probe(phydev, ret, 540 540 "failed to get nvmem cell io_impedance_ctrl\n"); 541 541
+16 -18
drivers/net/phy/phy-c45.c
··· 190 190 */ 191 191 static int genphy_c45_baset1_an_config_aneg(struct phy_device *phydev) 192 192 { 193 + u16 adv_l_mask, adv_l = 0; 194 + u16 adv_m_mask, adv_m = 0; 193 195 int changed = 0; 194 - u16 adv_l = 0; 195 - u16 adv_m = 0; 196 196 int ret; 197 + 198 + adv_l_mask = MDIO_AN_T1_ADV_L_FORCE_MS | MDIO_AN_T1_ADV_L_PAUSE_CAP | 199 + MDIO_AN_T1_ADV_L_PAUSE_ASYM; 200 + adv_m_mask = MDIO_AN_T1_ADV_M_MST | MDIO_AN_T1_ADV_M_B10L; 197 201 198 202 switch (phydev->master_slave_set) { 199 203 case MASTER_SLAVE_CFG_MASTER_FORCE: 204 + adv_m |= MDIO_AN_T1_ADV_M_MST; 205 + fallthrough; 200 206 case MASTER_SLAVE_CFG_SLAVE_FORCE: 201 207 adv_l |= MDIO_AN_T1_ADV_L_FORCE_MS; 202 208 break; 203 209 case MASTER_SLAVE_CFG_MASTER_PREFERRED: 210 + adv_m |= MDIO_AN_T1_ADV_M_MST; 211 + fallthrough; 204 212 case MASTER_SLAVE_CFG_SLAVE_PREFERRED: 205 213 break; 206 214 case MASTER_SLAVE_CFG_UNKNOWN: 207 215 case MASTER_SLAVE_CFG_UNSUPPORTED: 208 - return 0; 216 + /* if master/slave role is not specified, do not overwrite it */ 217 + adv_l_mask &= ~MDIO_AN_T1_ADV_L_FORCE_MS; 218 + adv_m_mask &= ~MDIO_AN_T1_ADV_M_MST; 219 + break; 209 220 default: 210 221 phydev_warn(phydev, "Unsupported Master/Slave mode\n"); 211 222 return -EOPNOTSUPP; 212 223 } 213 224 214 - switch (phydev->master_slave_set) { 215 - case MASTER_SLAVE_CFG_MASTER_FORCE: 216 - case MASTER_SLAVE_CFG_MASTER_PREFERRED: 217 - adv_m |= MDIO_AN_T1_ADV_M_MST; 218 - break; 219 - case MASTER_SLAVE_CFG_SLAVE_FORCE: 220 - case MASTER_SLAVE_CFG_SLAVE_PREFERRED: 221 - break; 222 - default: 223 - break; 224 - } 225 - 226 225 adv_l |= linkmode_adv_to_mii_t1_adv_l_t(phydev->advertising); 227 226 228 227 ret = phy_modify_mmd_changed(phydev, MDIO_MMD_AN, MDIO_AN_T1_ADV_L, 229 - (MDIO_AN_T1_ADV_L_FORCE_MS | MDIO_AN_T1_ADV_L_PAUSE_CAP 230 - | MDIO_AN_T1_ADV_L_PAUSE_ASYM), adv_l); 228 + adv_l_mask, adv_l); 231 229 if (ret < 0) 232 230 return ret; 233 231 if (ret > 0) ··· 234 236 adv_m |= linkmode_adv_to_mii_t1_adv_m_t(phydev->advertising); 235 237 236 238 ret = phy_modify_mmd_changed(phydev, MDIO_MMD_AN, MDIO_AN_T1_ADV_M, 237 - MDIO_AN_T1_ADV_M_MST | MDIO_AN_T1_ADV_M_B10L, adv_m); 239 + adv_m_mask, adv_m); 238 240 if (ret < 0) 239 241 return ret; 240 242 if (ret > 0)
+6
drivers/net/phy/phy_device.c
··· 316 316 317 317 phydev->suspended_by_mdio_bus = 0; 318 318 319 + /* If we managed to get here with the PHY state machine in a state other 320 + * than PHY_HALTED this is an indication that something went wrong and 321 + * we should most likely be using MAC managed PM and we are not. 322 + */ 323 + WARN_ON(phydev->state != PHY_HALTED && !phydev->mac_managed_pm); 324 + 319 325 ret = phy_init_hw(phydev); 320 326 if (ret < 0) 321 327 return ret;
+1 -1
drivers/net/plip/plip.c
··· 1111 1111 /* Any address will do - we take the first. We already 1112 1112 have the first two bytes filled with 0xfc, from 1113 1113 plip_init_dev(). */ 1114 - const struct in_ifaddr *ifa = rcu_dereference(in_dev->ifa_list); 1114 + const struct in_ifaddr *ifa = rtnl_dereference(in_dev->ifa_list); 1115 1115 if (ifa != NULL) { 1116 1116 dev_addr_mod(dev, 2, &ifa->ifa_local, 4); 1117 1117 }
+11 -9
drivers/net/tap.c
··· 716 716 skb_reset_mac_header(skb); 717 717 skb->protocol = eth_hdr(skb)->h_proto; 718 718 719 + rcu_read_lock(); 720 + tap = rcu_dereference(q->tap); 721 + if (!tap) { 722 + kfree_skb(skb); 723 + rcu_read_unlock(); 724 + return total_len; 725 + } 726 + skb->dev = tap->dev; 727 + 719 728 if (vnet_hdr_len) { 720 729 err = virtio_net_hdr_to_skb(skb, &vnet_hdr, 721 730 tap_is_little_endian(q)); 722 731 if (err) { 732 + rcu_read_unlock(); 723 733 drop_reason = SKB_DROP_REASON_DEV_HDR; 724 734 goto err_kfree; 725 735 } ··· 742 732 __vlan_get_protocol(skb, skb->protocol, &depth) != 0) 743 733 skb_set_network_header(skb, depth); 744 734 745 - rcu_read_lock(); 746 - tap = rcu_dereference(q->tap); 747 735 /* copy skb_ubuf_info for callback when skb has no error */ 748 736 if (zerocopy) { 749 737 skb_zcopy_init(skb, msg_control); ··· 750 742 uarg->callback(NULL, uarg, false); 751 743 } 752 744 753 - if (tap) { 754 - skb->dev = tap->dev; 755 - dev_queue_xmit(skb); 756 - } else { 757 - kfree_skb(skb); 758 - } 745 + dev_queue_xmit(skb); 759 746 rcu_read_unlock(); 760 - 761 747 return total_len; 762 748 763 749 err_kfree:
+13 -13
drivers/net/usb/ax88179_178a.c
··· 1680 1680 .link_reset = ax88179_link_reset, 1681 1681 .reset = ax88179_reset, 1682 1682 .stop = ax88179_stop, 1683 - .flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP, 1683 + .flags = FLAG_ETHER | FLAG_FRAMING_AX, 1684 1684 .rx_fixup = ax88179_rx_fixup, 1685 1685 .tx_fixup = ax88179_tx_fixup, 1686 1686 }; ··· 1693 1693 .link_reset = ax88179_link_reset, 1694 1694 .reset = ax88179_reset, 1695 1695 .stop = ax88179_stop, 1696 - .flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP, 1696 + .flags = FLAG_ETHER | FLAG_FRAMING_AX, 1697 1697 .rx_fixup = ax88179_rx_fixup, 1698 1698 .tx_fixup = ax88179_tx_fixup, 1699 1699 }; ··· 1706 1706 .link_reset = ax88179_link_reset, 1707 1707 .reset = ax88179_reset, 1708 1708 .stop = ax88179_stop, 1709 - .flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP, 1709 + .flags = FLAG_ETHER | FLAG_FRAMING_AX, 1710 1710 .rx_fixup = ax88179_rx_fixup, 1711 1711 .tx_fixup = ax88179_tx_fixup, 1712 1712 }; ··· 1719 1719 .link_reset = ax88179_link_reset, 1720 1720 .reset = ax88179_reset, 1721 1721 .stop = ax88179_stop, 1722 - .flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP, 1722 + .flags = FLAG_ETHER | FLAG_FRAMING_AX, 1723 1723 .rx_fixup = ax88179_rx_fixup, 1724 1724 .tx_fixup = ax88179_tx_fixup, 1725 1725 }; ··· 1732 1732 .link_reset = ax88179_link_reset, 1733 1733 .reset = ax88179_reset, 1734 1734 .stop = ax88179_stop, 1735 - .flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP, 1735 + .flags = FLAG_ETHER | FLAG_FRAMING_AX, 1736 1736 .rx_fixup = ax88179_rx_fixup, 1737 1737 .tx_fixup = ax88179_tx_fixup, 1738 1738 }; ··· 1745 1745 .link_reset = ax88179_link_reset, 1746 1746 .reset = ax88179_reset, 1747 1747 .stop = ax88179_stop, 1748 - .flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP, 1748 + .flags = FLAG_ETHER | FLAG_FRAMING_AX, 1749 1749 .rx_fixup = ax88179_rx_fixup, 1750 1750 .tx_fixup = ax88179_tx_fixup, 1751 1751 }; ··· 1758 1758 .link_reset = ax88179_link_reset, 1759 1759 .reset = ax88179_reset, 1760 1760 .stop = 
ax88179_stop, 1761 - .flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP, 1761 + .flags = FLAG_ETHER | FLAG_FRAMING_AX, 1762 1762 .rx_fixup = ax88179_rx_fixup, 1763 1763 .tx_fixup = ax88179_tx_fixup, 1764 1764 }; ··· 1771 1771 .link_reset = ax88179_link_reset, 1772 1772 .reset = ax88179_reset, 1773 1773 .stop = ax88179_stop, 1774 - .flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP, 1774 + .flags = FLAG_ETHER | FLAG_FRAMING_AX, 1775 1775 .rx_fixup = ax88179_rx_fixup, 1776 1776 .tx_fixup = ax88179_tx_fixup, 1777 1777 }; ··· 1784 1784 .link_reset = ax88179_link_reset, 1785 1785 .reset = ax88179_reset, 1786 1786 .stop = ax88179_stop, 1787 - .flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP, 1787 + .flags = FLAG_ETHER | FLAG_FRAMING_AX, 1788 1788 .rx_fixup = ax88179_rx_fixup, 1789 1789 .tx_fixup = ax88179_tx_fixup, 1790 1790 }; ··· 1797 1797 .link_reset = ax88179_link_reset, 1798 1798 .reset = ax88179_reset, 1799 1799 .stop = ax88179_stop, 1800 - .flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP, 1800 + .flags = FLAG_ETHER | FLAG_FRAMING_AX, 1801 1801 .rx_fixup = ax88179_rx_fixup, 1802 1802 .tx_fixup = ax88179_tx_fixup, 1803 1803 }; ··· 1810 1810 .link_reset = ax88179_link_reset, 1811 1811 .reset = ax88179_reset, 1812 1812 .stop = ax88179_stop, 1813 - .flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP, 1813 + .flags = FLAG_ETHER | FLAG_FRAMING_AX, 1814 1814 .rx_fixup = ax88179_rx_fixup, 1815 1815 .tx_fixup = ax88179_tx_fixup, 1816 1816 }; ··· 1823 1823 .link_reset = ax88179_link_reset, 1824 1824 .reset = ax88179_reset, 1825 1825 .stop = ax88179_stop, 1826 - .flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP, 1826 + .flags = FLAG_ETHER | FLAG_FRAMING_AX, 1827 1827 .rx_fixup = ax88179_rx_fixup, 1828 1828 .tx_fixup = ax88179_tx_fixup, 1829 1829 }; ··· 1836 1836 .link_reset = ax88179_link_reset, 1837 1837 .reset = ax88179_reset, 1838 1838 .stop = ax88179_stop, 1839 - .flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP, 1839 + .flags = FLAG_ETHER | 
FLAG_FRAMING_AX, 1840 1840 .rx_fixup = ax88179_rx_fixup, 1841 1841 .tx_fixup = ax88179_tx_fixup, 1842 1842 };
+2
drivers/net/usb/qmi_wwan.c
··· 1390 1390 {QMI_QUIRK_SET_DTR(0x1e2d, 0x00b0, 4)}, /* Cinterion CLS8 */ 1391 1391 {QMI_FIXED_INTF(0x1e2d, 0x00b7, 0)}, /* Cinterion MV31 RmNet */ 1392 1392 {QMI_FIXED_INTF(0x1e2d, 0x00b9, 0)}, /* Cinterion MV31 RmNet based on new baseline */ 1393 + {QMI_FIXED_INTF(0x1e2d, 0x00f3, 0)}, /* Cinterion MV32-W-A RmNet */ 1394 + {QMI_FIXED_INTF(0x1e2d, 0x00f4, 0)}, /* Cinterion MV32-W-B RmNet */ 1393 1395 {QMI_FIXED_INTF(0x413c, 0x81a2, 8)}, /* Dell Wireless 5806 Gobi(TM) 4G LTE Mobile Broadband Card */ 1394 1396 {QMI_FIXED_INTF(0x413c, 0x81a3, 8)}, /* Dell Wireless 5570 HSPA+ (42Mbps) Mobile Broadband Card */ 1395 1397 {QMI_FIXED_INTF(0x413c, 0x81a4, 8)}, /* Dell Wireless 5570e HSPA+ (42Mbps) Mobile Broadband Card */
-4
drivers/net/veth.c
··· 312 312 static netdev_tx_t veth_xmit(struct sk_buff *skb, struct net_device *dev) 313 313 { 314 314 struct veth_priv *rcv_priv, *priv = netdev_priv(dev); 315 - struct netdev_queue *queue = NULL; 316 315 struct veth_rq *rq = NULL; 317 316 struct net_device *rcv; 318 317 int length = skb->len; ··· 329 330 rxq = skb_get_queue_mapping(skb); 330 331 if (rxq < rcv->real_num_rx_queues) { 331 332 rq = &rcv_priv->rq[rxq]; 332 - queue = netdev_get_tx_queue(dev, rxq); 333 333 334 334 /* The napi pointer is available when an XDP program is 335 335 * attached or when GRO is enabled ··· 340 342 341 343 skb_tx_timestamp(skb); 342 344 if (likely(veth_forward_skb(rcv, skb, rq, use_napi) == NET_RX_SUCCESS)) { 343 - if (queue) 344 - txq_trans_cond_update(queue); 345 345 if (!use_napi) 346 346 dev_lstats_add(dev, length); 347 347 } else {
+4 -1
drivers/net/virtio_net.c
··· 1057 1057 case XDP_TX: 1058 1058 stats->xdp_tx++; 1059 1059 xdpf = xdp_convert_buff_to_frame(&xdp); 1060 - if (unlikely(!xdpf)) 1060 + if (unlikely(!xdpf)) { 1061 + if (unlikely(xdp_page != page)) 1062 + put_page(xdp_page); 1061 1063 goto err_xdp; 1064 + } 1062 1065 err = virtnet_xdp_xmit(dev, 1, &xdpf, 0); 1063 1066 if (unlikely(!err)) { 1064 1067 xdp_return_frame_rx_napi(xdpf);
+1 -1
drivers/net/vxlan/vxlan_core.c
··· 2321 2321 fl6.flowi6_oif = oif; 2322 2322 fl6.daddr = *daddr; 2323 2323 fl6.saddr = *saddr; 2324 - fl6.flowlabel = ip6_make_flowinfo(RT_TOS(tos), label); 2324 + fl6.flowlabel = ip6_make_flowinfo(tos, label); 2325 2325 fl6.flowi6_mark = skb->mark; 2326 2326 fl6.flowi6_proto = IPPROTO_UDP; 2327 2327 fl6.fl6_dport = dport;
+1 -1
drivers/net/wireless/microchip/wilc1000/hif.c
··· 635 635 conn_info->req_ies_len = 0; 636 636 } 637 637 638 - inline void wilc_handle_disconnect(struct wilc_vif *vif) 638 + void wilc_handle_disconnect(struct wilc_vif *vif) 639 639 { 640 640 struct host_if_drv *hif_drv = vif->hif_drv; 641 641
+2 -1
drivers/net/wireless/microchip/wilc1000/hif.h
··· 215 215 void *wilc_parse_join_bss_param(struct cfg80211_bss *bss, 216 216 struct cfg80211_crypto_settings *crypto); 217 217 int wilc_set_default_mgmt_key_index(struct wilc_vif *vif, u8 index); 218 - inline void wilc_handle_disconnect(struct wilc_vif *vif); 218 + void wilc_handle_disconnect(struct wilc_vif *vif); 219 + 219 220 #endif
+47 -121
drivers/s390/net/qeth_core_main.c
··· 763 763 ipa_name, com, CARD_DEVID(card)); 764 764 } 765 765 766 + static void qeth_default_link_info(struct qeth_card *card) 767 + { 768 + struct qeth_link_info *link_info = &card->info.link_info; 769 + 770 + QETH_CARD_TEXT(card, 2, "dftlinfo"); 771 + link_info->duplex = DUPLEX_FULL; 772 + 773 + if (IS_IQD(card) || IS_VM_NIC(card)) { 774 + link_info->speed = SPEED_10000; 775 + link_info->port = PORT_FIBRE; 776 + link_info->link_mode = QETH_LINK_MODE_FIBRE_SHORT; 777 + } else { 778 + switch (card->info.link_type) { 779 + case QETH_LINK_TYPE_FAST_ETH: 780 + case QETH_LINK_TYPE_LANE_ETH100: 781 + link_info->speed = SPEED_100; 782 + link_info->port = PORT_TP; 783 + break; 784 + case QETH_LINK_TYPE_GBIT_ETH: 785 + case QETH_LINK_TYPE_LANE_ETH1000: 786 + link_info->speed = SPEED_1000; 787 + link_info->port = PORT_FIBRE; 788 + break; 789 + case QETH_LINK_TYPE_10GBIT_ETH: 790 + link_info->speed = SPEED_10000; 791 + link_info->port = PORT_FIBRE; 792 + break; 793 + case QETH_LINK_TYPE_25GBIT_ETH: 794 + link_info->speed = SPEED_25000; 795 + link_info->port = PORT_FIBRE; 796 + break; 797 + default: 798 + dev_info(&card->gdev->dev, 799 + "Unknown link type %x\n", 800 + card->info.link_type); 801 + link_info->speed = SPEED_UNKNOWN; 802 + link_info->port = PORT_OTHER; 803 + } 804 + 805 + link_info->link_mode = QETH_LINK_MODE_UNKNOWN; 806 + } 807 + } 808 + 766 809 static struct qeth_ipa_cmd *qeth_check_ipa_data(struct qeth_card *card, 767 810 struct qeth_ipa_cmd *cmd) 768 811 { ··· 833 790 netdev_name(card->dev), card->info.chpid); 834 791 qeth_issue_ipa_msg(cmd, cmd->hdr.return_code, card); 835 792 netif_carrier_off(card->dev); 793 + qeth_default_link_info(card); 836 794 } 837 795 return NULL; 838 796 case IPA_CMD_STARTLAN: ··· 4788 4744 return rc; 4789 4745 } 4790 4746 4791 - static int qeth_query_card_info_cb(struct qeth_card *card, 4792 - struct qeth_reply *reply, unsigned long data) 4793 - { 4794 - struct qeth_ipa_cmd *cmd = (struct qeth_ipa_cmd *)data; 4795 - struct 
qeth_link_info *link_info = reply->param; 4796 - struct qeth_query_card_info *card_info; 4797 - 4798 - QETH_CARD_TEXT(card, 2, "qcrdincb"); 4799 - if (qeth_setadpparms_inspect_rc(cmd)) 4800 - return -EIO; 4801 - 4802 - card_info = &cmd->data.setadapterparms.data.card_info; 4803 - netdev_dbg(card->dev, 4804 - "card info: card_type=0x%02x, port_mode=0x%04x, port_speed=0x%08x\n", 4805 - card_info->card_type, card_info->port_mode, 4806 - card_info->port_speed); 4807 - 4808 - switch (card_info->port_mode) { 4809 - case CARD_INFO_PORTM_FULLDUPLEX: 4810 - link_info->duplex = DUPLEX_FULL; 4811 - break; 4812 - case CARD_INFO_PORTM_HALFDUPLEX: 4813 - link_info->duplex = DUPLEX_HALF; 4814 - break; 4815 - default: 4816 - link_info->duplex = DUPLEX_UNKNOWN; 4817 - } 4818 - 4819 - switch (card_info->card_type) { 4820 - case CARD_INFO_TYPE_1G_COPPER_A: 4821 - case CARD_INFO_TYPE_1G_COPPER_B: 4822 - link_info->speed = SPEED_1000; 4823 - link_info->port = PORT_TP; 4824 - break; 4825 - case CARD_INFO_TYPE_1G_FIBRE_A: 4826 - case CARD_INFO_TYPE_1G_FIBRE_B: 4827 - link_info->speed = SPEED_1000; 4828 - link_info->port = PORT_FIBRE; 4829 - break; 4830 - case CARD_INFO_TYPE_10G_FIBRE_A: 4831 - case CARD_INFO_TYPE_10G_FIBRE_B: 4832 - link_info->speed = SPEED_10000; 4833 - link_info->port = PORT_FIBRE; 4834 - break; 4835 - default: 4836 - switch (card_info->port_speed) { 4837 - case CARD_INFO_PORTS_10M: 4838 - link_info->speed = SPEED_10; 4839 - break; 4840 - case CARD_INFO_PORTS_100M: 4841 - link_info->speed = SPEED_100; 4842 - break; 4843 - case CARD_INFO_PORTS_1G: 4844 - link_info->speed = SPEED_1000; 4845 - break; 4846 - case CARD_INFO_PORTS_10G: 4847 - link_info->speed = SPEED_10000; 4848 - break; 4849 - case CARD_INFO_PORTS_25G: 4850 - link_info->speed = SPEED_25000; 4851 - break; 4852 - default: 4853 - link_info->speed = SPEED_UNKNOWN; 4854 - } 4855 - 4856 - link_info->port = PORT_OTHER; 4857 - } 4858 - 4859 - return 0; 4860 - } 4861 - 4862 - int qeth_query_card_info(struct 
qeth_card *card, 4863 - struct qeth_link_info *link_info) 4864 - { 4865 - struct qeth_cmd_buffer *iob; 4866 - 4867 - QETH_CARD_TEXT(card, 2, "qcrdinfo"); 4868 - if (!qeth_adp_supported(card, IPA_SETADP_QUERY_CARD_INFO)) 4869 - return -EOPNOTSUPP; 4870 - iob = qeth_get_adapter_cmd(card, IPA_SETADP_QUERY_CARD_INFO, 0); 4871 - if (!iob) 4872 - return -ENOMEM; 4873 - 4874 - return qeth_send_ipa_cmd(card, iob, qeth_query_card_info_cb, link_info); 4875 - } 4876 - 4877 4747 static int qeth_init_link_info_oat_cb(struct qeth_card *card, 4878 4748 struct qeth_reply *reply_priv, 4879 4749 unsigned long data) ··· 4797 4839 struct qeth_query_oat_physical_if *phys_if; 4798 4840 struct qeth_query_oat_reply *reply; 4799 4841 4842 + QETH_CARD_TEXT(card, 2, "qoatincb"); 4800 4843 if (qeth_setadpparms_inspect_rc(cmd)) 4801 4844 return -EIO; 4802 4845 ··· 4877 4918 4878 4919 static void qeth_init_link_info(struct qeth_card *card) 4879 4920 { 4880 - card->info.link_info.duplex = DUPLEX_FULL; 4881 - 4882 - if (IS_IQD(card) || IS_VM_NIC(card)) { 4883 - card->info.link_info.speed = SPEED_10000; 4884 - card->info.link_info.port = PORT_FIBRE; 4885 - card->info.link_info.link_mode = QETH_LINK_MODE_FIBRE_SHORT; 4886 - } else { 4887 - switch (card->info.link_type) { 4888 - case QETH_LINK_TYPE_FAST_ETH: 4889 - case QETH_LINK_TYPE_LANE_ETH100: 4890 - card->info.link_info.speed = SPEED_100; 4891 - card->info.link_info.port = PORT_TP; 4892 - break; 4893 - case QETH_LINK_TYPE_GBIT_ETH: 4894 - case QETH_LINK_TYPE_LANE_ETH1000: 4895 - card->info.link_info.speed = SPEED_1000; 4896 - card->info.link_info.port = PORT_FIBRE; 4897 - break; 4898 - case QETH_LINK_TYPE_10GBIT_ETH: 4899 - card->info.link_info.speed = SPEED_10000; 4900 - card->info.link_info.port = PORT_FIBRE; 4901 - break; 4902 - case QETH_LINK_TYPE_25GBIT_ETH: 4903 - card->info.link_info.speed = SPEED_25000; 4904 - card->info.link_info.port = PORT_FIBRE; 4905 - break; 4906 - default: 4907 - dev_info(&card->gdev->dev, "Unknown link type 
%x\n", 4908 - card->info.link_type); 4909 - card->info.link_info.speed = SPEED_UNKNOWN; 4910 - card->info.link_info.port = PORT_OTHER; 4911 - } 4912 - 4913 - card->info.link_info.link_mode = QETH_LINK_MODE_UNKNOWN; 4914 - } 4921 + qeth_default_link_info(card); 4915 4922 4916 4923 /* Get more accurate data via QUERY OAT: */ 4917 4924 if (qeth_adp_supported(card, IPA_SETADP_QUERY_OAT)) { ··· 5386 5461 qeth_clear_working_pool_list(card); 5387 5462 qeth_flush_local_addrs(card); 5388 5463 card->info.promisc_mode = 0; 5464 + qeth_default_link_info(card); 5389 5465 5390 5466 rc = qeth_stop_channel(&card->data); 5391 5467 rc2 = qeth_stop_channel(&card->write);
+1 -11
drivers/s390/net/qeth_ethtool.c
··· 428 428 struct ethtool_link_ksettings *cmd) 429 429 { 430 430 struct qeth_card *card = netdev->ml_priv; 431 - struct qeth_link_info link_info; 432 431 432 + QETH_CARD_TEXT(card, 4, "ethtglks"); 433 433 cmd->base.speed = card->info.link_info.speed; 434 434 cmd->base.duplex = card->info.link_info.duplex; 435 435 cmd->base.port = card->info.link_info.port; ··· 438 438 cmd->base.mdio_support = 0; 439 439 cmd->base.eth_tp_mdix = ETH_TP_MDI_INVALID; 440 440 cmd->base.eth_tp_mdix_ctrl = ETH_TP_MDI_INVALID; 441 - 442 - /* Check if we can obtain more accurate information. */ 443 - if (!qeth_query_card_info(card, &link_info)) { 444 - if (link_info.speed != SPEED_UNKNOWN) 445 - cmd->base.speed = link_info.speed; 446 - if (link_info.duplex != DUPLEX_UNKNOWN) 447 - cmd->base.duplex = link_info.duplex; 448 - if (link_info.port != PORT_OTHER) 449 - cmd->base.port = link_info.port; 450 - } 451 441 452 442 qeth_set_ethtool_link_modes(cmd, card->info.link_info.link_mode); 453 443
+6 -2
include/linux/bpfptr.h
··· 49 49 static inline int copy_from_bpfptr_offset(void *dst, bpfptr_t src, 50 50 size_t offset, size_t size) 51 51 { 52 - return copy_from_sockptr_offset(dst, (sockptr_t) src, offset, size); 52 + if (!bpfptr_is_kernel(src)) 53 + return copy_from_user(dst, src.user + offset, size); 54 + return copy_from_kernel_nofault(dst, src.kernel + offset, size); 53 55 } 54 56 55 57 static inline int copy_from_bpfptr(void *dst, bpfptr_t src, size_t size) ··· 80 78 81 79 static inline long strncpy_from_bpfptr(char *dst, bpfptr_t src, size_t count) 82 80 { 83 - return strncpy_from_sockptr(dst, (sockptr_t) src, count); 81 + if (bpfptr_is_kernel(src)) 82 + return strncpy_from_kernel_nofault(dst, src.kernel, count); 83 + return strncpy_from_user(dst, src.user, count); 84 84 } 85 85 86 86 #endif /* _LINUX_BPFPTR_H */
+2 -1
include/linux/skmsg.h
··· 278 278 279 279 static inline struct sk_psock *sk_psock(const struct sock *sk) 280 280 { 281 - return rcu_dereference_sk_user_data(sk); 281 + return __rcu_dereference_sk_user_data_with_flags(sk, 282 + SK_USER_DATA_PSOCK); 282 283 } 283 284 284 285 static inline void sk_psock_set_state(struct sk_psock *psock,
+2 -2
include/net/ax88796.h
··· 34 34 const unsigned char *buf, int star_page); 35 35 void (*block_input)(struct net_device *dev, int count, 36 36 struct sk_buff *skb, int ring_offset); 37 - /* returns nonzero if a pending interrupt request might by caused by 38 - * the ax88786. Handles all interrupts if set to NULL 37 + /* returns nonzero if a pending interrupt request might be caused by 38 + * the ax88796. Handles all interrupts if set to NULL 39 39 */ 40 40 int (*check_irq)(struct platform_device *pdev); 41 41 };
+12 -1
include/net/bonding.h
··· 161 161 struct net_device *dev; /* first - useful for panic debug */ 162 162 struct bonding *bond; /* our master */ 163 163 int delay; 164 - /* all three in jiffies */ 164 + /* all 4 in jiffies */ 165 165 unsigned long last_link_up; 166 + unsigned long last_tx; 166 167 unsigned long last_rx; 167 168 unsigned long target_last_arp_rx[BOND_MAX_ARP_TARGETS]; 168 169 s8 link; /* one of BOND_LINK_XXXX */ ··· 539 538 return slave_oldest_target_arp_rx(bond, slave); 540 539 541 540 return slave->last_rx; 541 + } 542 + 543 + static inline void slave_update_last_tx(struct slave *slave) 544 + { 545 + WRITE_ONCE(slave->last_tx, jiffies); 546 + } 547 + 548 + static inline unsigned long slave_last_tx(struct slave *slave) 549 + { 550 + return READ_ONCE(slave->last_tx); 542 551 } 543 552 544 553 #ifdef CONFIG_NET_POLL_CONTROLLER
+3 -2
include/net/genetlink.h
··· 11 11 /** 12 12 * struct genl_multicast_group - generic netlink multicast group 13 13 * @name: name of the multicast group, names are per-family 14 + * @flags: GENL_* flags (%GENL_ADMIN_PERM or %GENL_UNS_ADMIN_PERM) 14 15 */ 15 16 struct genl_multicast_group { 16 17 char name[GENL_NAMSIZ]; ··· 117 116 * struct genl_small_ops - generic netlink operations (small version) 118 117 * @cmd: command identifier 119 118 * @internal_flags: flags used by the family 120 - * @flags: flags 119 + * @flags: GENL_* flags (%GENL_ADMIN_PERM or %GENL_UNS_ADMIN_PERM) 121 120 * @validate: validation flags from enum genl_validate_flags 122 121 * @doit: standard command callback 123 122 * @dumpit: callback for dumpers ··· 138 137 * struct genl_ops - generic netlink operations 139 138 * @cmd: command identifier 140 139 * @internal_flags: flags used by the family 141 - * @flags: flags 140 + * @flags: GENL_* flags (%GENL_ADMIN_PERM or %GENL_UNS_ADMIN_PERM) 142 141 * @maxattr: maximum number of attributes supported 143 142 * @policy: netlink policy (takes precedence over family policy) 144 143 * @validate: validation flags from enum genl_validate_flags
+4
include/net/mptcp.h
··· 291 291 static inline struct mptcp_sock *bpf_mptcp_sock_from_subflow(struct sock *sk) { return NULL; } 292 292 #endif 293 293 294 + #if !IS_ENABLED(CONFIG_MPTCP) 295 + struct mptcp_sock { }; 296 + #endif 297 + 294 298 #endif /* __NET_MPTCP_H */
+13 -6
include/net/netfilter/nf_tables.h
··· 221 221 bool report; 222 222 }; 223 223 224 - struct nft_data_desc { 225 - enum nft_data_types type; 226 - unsigned int len; 224 + enum nft_data_desc_flags { 225 + NFT_DATA_DESC_SETELEM = (1 << 0), 227 226 }; 228 227 229 - int nft_data_init(const struct nft_ctx *ctx, 230 - struct nft_data *data, unsigned int size, 228 + struct nft_data_desc { 229 + enum nft_data_types type; 230 + unsigned int size; 231 + unsigned int len; 232 + unsigned int flags; 233 + }; 234 + 235 + int nft_data_init(const struct nft_ctx *ctx, struct nft_data *data, 231 236 struct nft_data_desc *desc, const struct nlattr *nla); 232 237 void nft_data_hold(const struct nft_data *data, enum nft_data_types type); 233 238 void nft_data_release(const struct nft_data *data, enum nft_data_types type); ··· 656 651 struct nft_set_ext_tmpl { 657 652 u16 len; 658 653 u8 offset[NFT_SET_EXT_NUM]; 654 + u8 ext_len[NFT_SET_EXT_NUM]; 659 655 }; 660 656 661 657 /** ··· 686 680 return -EINVAL; 687 681 688 682 tmpl->offset[id] = tmpl->len; 689 - tmpl->len += nft_set_ext_types[id].len + len; 683 + tmpl->ext_len[id] = nft_set_ext_types[id].len + len; 684 + tmpl->len += tmpl->ext_len[id]; 690 685 691 686 return 0; 692 687 }
+48 -20
include/net/sock.h
··· 545 545 SK_PACING_FQ = 2, 546 546 }; 547 547 548 - /* Pointer stored in sk_user_data might not be suitable for copying 549 - * when cloning the socket. For instance, it can point to a reference 550 - * counted object. sk_user_data bottom bit is set if pointer must not 551 - * be copied. 548 + /* flag bits in sk_user_data 549 + * 550 + * - SK_USER_DATA_NOCOPY: Pointer stored in sk_user_data might 551 + * not be suitable for copying when cloning the socket. For instance, 552 + * it can point to a reference counted object. sk_user_data bottom 553 + * bit is set if pointer must not be copied. 554 + * 555 + * - SK_USER_DATA_BPF: Mark whether sk_user_data field is 556 + * managed/owned by a BPF reuseport array. This bit should be set 557 + * when sk_user_data's sk is added to the bpf's reuseport_array. 558 + * 559 + * - SK_USER_DATA_PSOCK: Mark whether pointer stored in 560 + * sk_user_data points to psock type. This bit should be set 561 + * when sk_user_data is assigned to a psock object. 552 562 */ 553 563 #define SK_USER_DATA_NOCOPY 1UL 554 - #define SK_USER_DATA_BPF 2UL /* Managed by BPF */ 555 - #define SK_USER_DATA_PTRMASK ~(SK_USER_DATA_NOCOPY | SK_USER_DATA_BPF) 564 + #define SK_USER_DATA_BPF 2UL 565 + #define SK_USER_DATA_PSOCK 4UL 566 + #define SK_USER_DATA_PTRMASK ~(SK_USER_DATA_NOCOPY | SK_USER_DATA_BPF |\ 567 + SK_USER_DATA_PSOCK) 556 568 557 569 /** 558 570 * sk_user_data_is_nocopy - Test if sk_user_data pointer must not be copied ··· 577 565 578 566 #define __sk_user_data(sk) ((*((void __rcu **)&(sk)->sk_user_data))) 579 567 568 + /** 569 + * __rcu_dereference_sk_user_data_with_flags - return the pointer 570 + * only if argument flags all has been set in sk_user_data. Otherwise 571 + * return NULL 572 + * 573 + * @sk: socket 574 + * @flags: flag bits 575 + */ 576 + static inline void * 577 + __rcu_dereference_sk_user_data_with_flags(const struct sock *sk, 578 + uintptr_t flags) 579 + { 580 + uintptr_t sk_user_data = (uintptr_t)rcu_dereference(__sk_user_data(sk)); 581 + 582 + WARN_ON_ONCE(flags & SK_USER_DATA_PTRMASK); 583 + 584 + if ((sk_user_data & flags) == flags) 585 + return (void *)(sk_user_data & SK_USER_DATA_PTRMASK); 586 + return NULL; 587 + } 588 + 580 589 #define rcu_dereference_sk_user_data(sk) \ 590 + __rcu_dereference_sk_user_data_with_flags(sk, 0) 591 + #define __rcu_assign_sk_user_data_with_flags(sk, ptr, flags) \ 581 592 ({ \ 582 - void *__tmp = rcu_dereference(__sk_user_data((sk))); \ 583 - (void *)((uintptr_t)__tmp & SK_USER_DATA_PTRMASK); \ 593 + uintptr_t __tmp1 = (uintptr_t)(ptr), \ 594 + __tmp2 = (uintptr_t)(flags); \ 595 + WARN_ON_ONCE(__tmp1 & ~SK_USER_DATA_PTRMASK); \ 596 + WARN_ON_ONCE(__tmp2 & SK_USER_DATA_PTRMASK); \ 597 + rcu_assign_pointer(__sk_user_data((sk)), \ 598 + __tmp1 | __tmp2); \ 584 599 }) 585 600 #define rcu_assign_sk_user_data(sk, ptr) \ 586 - ({ \ 587 - uintptr_t __tmp = (uintptr_t)(ptr); \ 588 - WARN_ON_ONCE(__tmp & ~SK_USER_DATA_PTRMASK); \ 589 - rcu_assign_pointer(__sk_user_data((sk)), __tmp); \ 590 - }) 591 - #define rcu_assign_sk_user_data_nocopy(sk, ptr) \ 592 - ({ \ 593 - uintptr_t __tmp = (uintptr_t)(ptr); \ 594 - WARN_ON_ONCE(__tmp & ~SK_USER_DATA_PTRMASK); \ 595 - rcu_assign_pointer(__sk_user_data((sk)), \ 596 - __tmp | SK_USER_DATA_NOCOPY); \ 597 - }) 601 + __rcu_assign_sk_user_data_with_flags(sk, ptr, 0) 598 602 599 603 static inline 600 604 struct net *sock_net(const struct sock *sk)
+1 -1
include/net/tls.h
··· 237 237 void *priv_ctx_tx; 238 238 void *priv_ctx_rx; 239 239 240 - struct net_device *netdev; 240 + struct net_device __rcu *netdev; 241 241 242 242 /* rw cache line */ 243 243 struct cipher_context tx;
+47
include/uapi/linux/atm_zatm.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ 2 + /* atm_zatm.h - Driver-specific declarations of the ZATM driver (for use by 3 + driver-specific utilities) */ 4 + 5 + /* Written 1995-1999 by Werner Almesberger, EPFL LRC/ICA */ 6 + 7 + 8 + #ifndef LINUX_ATM_ZATM_H 9 + #define LINUX_ATM_ZATM_H 10 + 11 + /* 12 + * Note: non-kernel programs including this file must also include 13 + * sys/types.h for struct timeval 14 + */ 15 + 16 + #include <linux/atmapi.h> 17 + #include <linux/atmioc.h> 18 + 19 + #define ZATM_GETPOOL _IOW('a',ATMIOC_SARPRV+1,struct atmif_sioc) 20 + /* get pool statistics */ 21 + #define ZATM_GETPOOLZ _IOW('a',ATMIOC_SARPRV+2,struct atmif_sioc) 22 + /* get statistics and zero */ 23 + #define ZATM_SETPOOL _IOW('a',ATMIOC_SARPRV+3,struct atmif_sioc) 24 + /* set pool parameters */ 25 + 26 + struct zatm_pool_info { 27 + int ref_count; /* free buffer pool usage counters */ 28 + int low_water,high_water; /* refill parameters */ 29 + int rqa_count,rqu_count; /* queue condition counters */ 30 + int offset,next_off; /* alignment optimizations: offset */ 31 + int next_cnt,next_thres; /* repetition counter and threshold */ 32 + }; 33 + 34 + struct zatm_pool_req { 35 + int pool_num; /* pool number */ 36 + struct zatm_pool_info info; /* actual information */ 37 + }; 38 + 39 + #define ZATM_OAM_POOL 0 /* free buffer pool for OAM cells */ 40 + #define ZATM_AAL0_POOL 1 /* free buffer pool for AAL0 cells */ 41 + #define ZATM_AAL5_POOL_BASE 2 /* first AAL5 free buffer pool */ 42 + #define ZATM_LAST_POOL ZATM_AAL5_POOL_BASE+10 /* max. 64 kB */ 43 + 44 + #define ZATM_TIMER_HISTORY_SIZE 16 /* number of timer adjustments to 45 + record; must be 2^n */ 46 + 47 + #endif
+3 -2
include/uapi/linux/genetlink.h
··· 87 87 __CTRL_ATTR_MCAST_GRP_MAX, 88 88 }; 89 89 90 + #define CTRL_ATTR_MCAST_GRP_MAX (__CTRL_ATTR_MCAST_GRP_MAX - 1) 91 + 90 92 enum { 91 93 CTRL_ATTR_POLICY_UNSPEC, 92 94 CTRL_ATTR_POLICY_DO, ··· 98 96 CTRL_ATTR_POLICY_DUMP_MAX = __CTRL_ATTR_POLICY_DUMP_MAX - 1 99 97 }; 100 98 101 - #define CTRL_ATTR_MCAST_GRP_MAX (__CTRL_ATTR_MCAST_GRP_MAX - 1) 102 - 99 + #define CTRL_ATTR_POLICY_MAX (__CTRL_ATTR_POLICY_DUMP_MAX - 1) 103 100 104 101 #endif /* _UAPI__LINUX_GENERIC_NETLINK_H */
+1 -1
include/uapi/linux/netfilter_ipv6/ip6t_LOG.h
··· 17 17 char prefix[30]; 18 18 }; 19 19 20 - #endif /*_IPT_LOG_H*/ 20 + #endif /* _IP6T_LOG_H */
+6
kernel/bpf/arraymap.c
··· 649 649 seq_info->percpu_value_buf = value_buf; 650 650 } 651 651 652 + /* bpf_iter_attach_map() acquires a map uref, and the uref may be 653 + * released before or in the middle of iterating map elements, so 654 + * acquire an extra map uref for iterator. 655 + */ 656 + bpf_map_inc_with_uref(map); 652 657 seq_info->map = map; 653 658 return 0; 654 659 } ··· 662 657 { 663 658 struct bpf_iter_seq_array_map_info *seq_info = priv_data; 664 659 660 + bpf_map_put_with_uref(seq_info->map); 665 661 kfree(seq_info->percpu_value_buf); 666 662 } 667 663
+10 -1
kernel/bpf/bpf_iter.c
··· 68 68 iter_priv->done_stop = true; 69 69 } 70 70 71 + static inline bool bpf_iter_target_support_resched(const struct bpf_iter_target_info *tinfo) 72 + { 73 + return tinfo->reg_info->feature & BPF_ITER_RESCHED; 74 + } 75 + 71 76 static bool bpf_iter_support_resched(struct seq_file *seq) 72 77 { 73 78 struct bpf_iter_priv_data *iter_priv; 74 79 75 80 iter_priv = container_of(seq->private, struct bpf_iter_priv_data, 76 81 target_private); 77 - return iter_priv->tinfo->reg_info->feature & BPF_ITER_RESCHED; 82 + return bpf_iter_target_support_resched(iter_priv->tinfo); 78 83 } 79 84 80 85 /* maximum visited objects before bailing out */ ··· 541 536 mutex_unlock(&targets_mutex); 542 537 if (!tinfo) 543 538 return -ENOENT; 539 + 540 + /* Only allow sleepable program for resched-able iterator */ 541 + if (prog->aux->sleepable && !bpf_iter_target_support_resched(tinfo)) 542 + return -EINVAL; 544 543 545 544 link = kzalloc(sizeof(*link), GFP_USER | __GFP_NOWARN); 546 545 if (!link)
+3 -5
kernel/bpf/hashtab.c
··· 311 311 struct htab_elem *l; 312 312 313 313 if (node) { 314 - u32 key_size = htab->map.key_size; 315 - 316 314 l = container_of(node, struct htab_elem, lru_node); 317 - memcpy(l->key, key, key_size); 318 - check_and_init_map_value(&htab->map, 319 - l->key + round_up(key_size, 8)); 315 + memcpy(l->key, key, htab->map.key_size); 320 316 return l; 321 317 } 322 318 ··· 2060 2064 seq_info->percpu_value_buf = value_buf; 2061 2065 } 2062 2066 2067 + bpf_map_inc_with_uref(map); 2063 2068 seq_info->map = map; 2064 2069 seq_info->htab = container_of(map, struct bpf_htab, map); 2065 2070 return 0; ··· 2070 2073 { 2071 2074 struct bpf_iter_seq_hash_map_info *seq_info = priv_data; 2072 2075 2076 + bpf_map_put_with_uref(seq_info->map); 2073 2077 kfree(seq_info->percpu_value_buf); 2074 2078 } 2075 2079
+3 -6
kernel/bpf/reuseport_array.c
··· 21 21 /* The caller must hold the reuseport_lock */ 22 22 void bpf_sk_reuseport_detach(struct sock *sk) 23 23 { 24 - uintptr_t sk_user_data; 24 + struct sock __rcu **socks; 25 25 26 26 write_lock_bh(&sk->sk_callback_lock); 27 - sk_user_data = (uintptr_t)sk->sk_user_data; 28 - if (sk_user_data & SK_USER_DATA_BPF) { 29 - struct sock __rcu **socks; 30 - 31 - socks = (void *)(sk_user_data & SK_USER_DATA_PTRMASK); 27 + socks = __rcu_dereference_sk_user_data_with_flags(sk, SK_USER_DATA_BPF); 28 + if (socks) { 32 29 WRITE_ONCE(sk->sk_user_data, NULL); 33 30 /* 34 31 * Do not move this NULL assignment outside of
+25 -10
kernel/bpf/syscall.c
··· 3886 3886 union bpf_attr __user *uattr) 3887 3887 { 3888 3888 struct bpf_prog_info __user *uinfo = u64_to_user_ptr(attr->info.info); 3889 + struct btf *attach_btf = bpf_prog_get_target_btf(prog); 3889 3890 struct bpf_prog_info info; 3890 3891 u32 info_len = attr->info.info_len; 3891 3892 struct bpf_prog_kstats stats; ··· 4089 4088 if (prog->aux->btf) 4090 4089 info.btf_id = btf_obj_id(prog->aux->btf); 4091 4090 info.attach_btf_id = prog->aux->attach_btf_id; 4092 - if (prog->aux->attach_btf) 4093 - info.attach_btf_obj_id = btf_obj_id(prog->aux->attach_btf); 4094 - else if (prog->aux->dst_prog) 4095 - info.attach_btf_obj_id = btf_obj_id(prog->aux->dst_prog->aux->attach_btf); 4091 + if (attach_btf) 4092 + info.attach_btf_obj_id = btf_obj_id(attach_btf); 4096 4093 4097 4094 ulen = info.nr_func_info; 4098 4095 info.nr_func_info = prog->aux->func_info_cnt; ··· 5071 5072 5072 5073 BPF_CALL_3(bpf_sys_bpf, int, cmd, union bpf_attr *, attr, u32, attr_size) 5073 5074 { 5074 - struct bpf_prog * __maybe_unused prog; 5075 - struct bpf_tramp_run_ctx __maybe_unused run_ctx; 5076 - 5077 5075 switch (cmd) { 5078 5076 case BPF_MAP_CREATE: 5079 5077 case BPF_MAP_UPDATE_ELEM: ··· 5080 5084 case BPF_LINK_CREATE: 5081 5085 case BPF_RAW_TRACEPOINT_OPEN: 5082 5086 break; 5087 + default: 5088 + return -EINVAL; 5089 + } 5090 + return __sys_bpf(cmd, KERNEL_BPFPTR(attr), attr_size); 5091 + } 5092 + 5093 + 5094 + /* To shut up -Wmissing-prototypes. 5095 + * This function is used by the kernel light skeleton 5096 + * to load bpf programs when modules are loaded or during kernel boot. 5097 + * See tools/lib/bpf/skel_internal.h 5098 + */ 5099 + int kern_sys_bpf(int cmd, union bpf_attr *attr, unsigned int size); 5100 + 5101 + int kern_sys_bpf(int cmd, union bpf_attr *attr, unsigned int size) 5102 + { 5103 + struct bpf_prog * __maybe_unused prog; 5104 + struct bpf_tramp_run_ctx __maybe_unused run_ctx; 5105 + 5106 + switch (cmd) { 5083 5107 #ifdef CONFIG_BPF_JIT /* __bpf_prog_enter_sleepable used by trampoline and JIT */ 5084 5108 case BPF_PROG_TEST_RUN: 5085 5109 if (attr->test.data_in || attr->test.data_out || ··· 5130 5114 return 0; 5131 5115 #endif 5132 5116 default: 5133 - return -EINVAL; 5117 + return ____bpf_sys_bpf(cmd, attr, size); 5134 5118 } 5135 - return __sys_bpf(cmd, KERNEL_BPFPTR(attr), attr_size); 5136 5119 } 5137 - EXPORT_SYMBOL(bpf_sys_bpf); 5120 + EXPORT_SYMBOL(kern_sys_bpf); 5138 5121 5139 5122 static const struct bpf_func_proto bpf_sys_bpf_proto = { 5140 5123 .func = bpf_sys_bpf,
+4 -1
kernel/bpf/trampoline.c
··· 841 841 * multiple rcu callbacks. 842 842 */ 843 843 hlist_del(&tr->hlist); 844 - kfree(tr->fops); 844 + if (tr->fops) { 845 + ftrace_free_filter(tr->fops); 846 + kfree(tr->fops); 847 + } 845 848 kfree(tr); 846 849 out: 847 850 mutex_unlock(&trampoline_mutex);
+3 -1
net/ax25/ax25_timer.c
··· 108 108 109 109 unsigned long ax25_display_timer(struct timer_list *timer) 110 110 { 111 + long delta = timer->expires - jiffies; 112 + 111 113 if (!timer_pending(timer)) 112 114 return 0; 113 115 114 - return timer->expires - jiffies; 116 + return max(0L, delta); 115 117 } 116 118 117 119 EXPORT_SYMBOL(ax25_display_timer);
+12 -3
net/bluetooth/aosp.c
··· 54 54 /* LE Get Vendor Capabilities Command */ 55 55 skb = __hci_cmd_sync(hdev, hci_opcode_pack(0x3f, 0x153), 0, NULL, 56 56 HCI_CMD_TIMEOUT); 57 - if (IS_ERR(skb)) { 57 + if (IS_ERR_OR_NULL(skb)) { 58 + if (!skb) 59 + skb = ERR_PTR(-EIO); 60 + 58 61 bt_dev_err(hdev, "AOSP get vendor capabilities (%ld)", 59 62 PTR_ERR(skb)); 60 63 return; ··· 155 152 156 153 skb = __hci_cmd_sync(hdev, BQR_OPCODE, sizeof(cp), &cp, 157 154 HCI_CMD_TIMEOUT); 158 - if (IS_ERR(skb)) { 155 + if (IS_ERR_OR_NULL(skb)) { 156 + if (!skb) 157 + skb = ERR_PTR(-EIO); 158 + 159 159 bt_dev_err(hdev, "Enabling Android BQR failed (%ld)", 160 160 PTR_ERR(skb)); 161 161 return PTR_ERR(skb); ··· 177 171 178 172 skb = __hci_cmd_sync(hdev, BQR_OPCODE, sizeof(cp), &cp, 179 173 HCI_CMD_TIMEOUT); 180 - if (IS_ERR(skb)) { 174 + if (IS_ERR_OR_NULL(skb)) { 175 + if (!skb) 176 + skb = ERR_PTR(-EIO); 177 + 181 178 bt_dev_err(hdev, "Disabling Android BQR failed (%ld)", 182 179 PTR_ERR(skb)); 183 180 return PTR_ERR(skb);
+2 -9
net/bluetooth/hci_conn.c
··· 1551 1551 cis->cis_id = qos->cis; 1552 1552 cis->c_sdu = cpu_to_le16(qos->out.sdu); 1553 1553 cis->p_sdu = cpu_to_le16(qos->in.sdu); 1554 - cis->c_phy = qos->out.phy; 1555 - cis->p_phy = qos->in.phy; 1554 + cis->c_phy = qos->out.phy ? qos->out.phy : qos->in.phy; 1555 + cis->p_phy = qos->in.phy ? qos->in.phy : qos->out.phy; 1556 1556 cis->c_rtn = qos->out.rtn; 1557 1557 cis->p_rtn = qos->in.rtn; 1558 1558 ··· 1734 1734 */ 1735 1735 if (!qos->in.latency) 1736 1736 qos->in.latency = qos->out.latency; 1737 - 1738 - /* Mirror PHYs that are disabled as SDU will be set to 0 */ 1739 - if (!qos->in.phy) 1740 - qos->in.phy = qos->out.phy; 1741 - 1742 - if (!qos->out.phy) 1743 - qos->out.phy = qos->in.phy; 1744 1737 1745 1738 if (!hci_le_set_cig_params(cis, qos)) { 1746 1739 hci_conn_drop(cis);
+5 -2
net/bluetooth/hci_event.c
··· 328 328 struct sk_buff *skb) 329 329 { 330 330 struct hci_rp_delete_stored_link_key *rp = data; 331 + u16 num_keys; 331 332 332 333 bt_dev_dbg(hdev, "status 0x%2.2x", rp->status); 333 334 334 335 if (rp->status) 335 336 return rp->status; 336 337 337 - if (rp->num_keys <= hdev->stored_num_keys) 338 - hdev->stored_num_keys -= le16_to_cpu(rp->num_keys); 338 + num_keys = le16_to_cpu(rp->num_keys); 339 + 340 + if (num_keys <= hdev->stored_num_keys) 341 + hdev->stored_num_keys -= num_keys; 339 342 else 340 343 hdev->stored_num_keys = 0; 341 344
+23 -12
net/bluetooth/iso.c
··· 44 44 /* ----- ISO socket info ----- */ 45 45 #define iso_pi(sk) ((struct iso_pinfo *)sk) 46 46 47 + #define EIR_SERVICE_DATA_LENGTH 4 48 + #define BASE_MAX_LENGTH (HCI_MAX_PER_AD_LENGTH - EIR_SERVICE_DATA_LENGTH) 49 + 47 50 struct iso_pinfo { 48 51 struct bt_sock bt; 49 52 bdaddr_t src; ··· 60 57 __u32 flags; 61 58 struct bt_iso_qos qos; 62 59 __u8 base_len; 63 - __u8 base[HCI_MAX_PER_AD_LENGTH]; 60 + __u8 base[BASE_MAX_LENGTH]; 64 61 struct iso_conn *conn; 65 62 }; 66 63 ··· 373 370 return err; 374 371 } 375 372 373 + static struct bt_iso_qos *iso_sock_get_qos(struct sock *sk) 374 + { 375 + if (sk->sk_state == BT_CONNECTED || sk->sk_state == BT_CONNECT2) 376 + return &iso_pi(sk)->conn->hcon->iso_qos; 377 + 378 + return &iso_pi(sk)->qos; 379 + } 380 + 376 381 static int iso_send_frame(struct sock *sk, struct sk_buff *skb) 377 382 { 378 383 struct iso_conn *conn = iso_pi(sk)->conn; 384 + struct bt_iso_qos *qos = iso_sock_get_qos(sk); 379 385 struct hci_iso_data_hdr *hdr; 380 386 int len = 0; 381 387 382 388 BT_DBG("sk %p len %d", sk, skb->len); 383 389 384 - if (skb->len > iso_pi(sk)->qos.out.sdu) 390 + if (skb->len > qos->out.sdu) 385 391 return -EMSGSIZE; 386 392 387 393 len = skb->len; ··· 1189 1177 } 1190 1178 1191 1179 len = min_t(unsigned int, sizeof(qos), optlen); 1192 - if (len != sizeof(qos)) 1193 - return -EINVAL; 1180 + if (len != sizeof(qos)) { 1181 + err = -EINVAL; 1182 + break; 1183 + } 1194 1184 1195 1185 memset(&qos, 0, sizeof(qos)); 1196 1186 ··· 1247 1233 { 1248 1234 struct sock *sk = sock->sk; 1249 1235 int len, err = 0; 1250 - struct bt_iso_qos qos; 1236 + struct bt_iso_qos *qos; 1251 1237 u8 base_len; 1252 1238 u8 *base; 1253 1239 ··· 1260 1246 1261 1247 switch (optname) { 1262 1248 case BT_DEFER_SETUP: 1263 - if (sk->sk_state != BT_BOUND && sk->sk_state != BT_LISTEN) { 1249 + if (sk->sk_state == BT_CONNECTED) { 1264 1250 err = -EINVAL; 1265 1251 break; 1266 1252 } ··· 1272 1258 break; 1273 1259 1274 1260 case BT_ISO_QOS: 1275 - if (sk->sk_state == BT_CONNECTED || sk->sk_state == BT_CONNECT2) 1276 - qos = iso_pi(sk)->conn->hcon->iso_qos; 1277 - else 1278 - qos = iso_pi(sk)->qos; 1261 + qos = iso_sock_get_qos(sk); 1279 1262 1280 - len = min_t(unsigned int, len, sizeof(qos)); 1281 - if (copy_to_user(optval, (char *)&qos, len)) 1263 + len = min_t(unsigned int, len, sizeof(*qos)); 1264 + if (copy_to_user(optval, qos, len)) 1282 1265 err = -EFAULT; 1283 1266 1284 1267 break;
+6 -7
net/bluetooth/l2cap_core.c
··· 1970 1970 bdaddr_t *dst, 1971 1971 u8 link_type) 1972 1972 { 1973 - struct l2cap_chan *c, *c1 = NULL; 1973 + struct l2cap_chan *c, *tmp, *c1 = NULL; 1974 1974 1975 1975 read_lock(&chan_list_lock); 1976 1976 1977 - list_for_each_entry(c, &chan_list, global_l) { 1977 + list_for_each_entry_safe(c, tmp, &chan_list, global_l) { 1978 1978 if (state && c->state != state) 1979 1979 continue; 1980 1980 ··· 1993 1993 dst_match = !bacmp(&c->dst, dst); 1994 1994 if (src_match && dst_match) { 1995 1995 c = l2cap_chan_hold_unless_zero(c); 1996 - if (!c) 1997 - continue; 1998 - 1999 - read_unlock(&chan_list_lock); 2000 - return c; 1996 + if (c) { 1997 + read_unlock(&chan_list_lock); 1998 + return c; 1999 + } 2001 2000 } 2002 2001 2003 2002 /* Closest match */
+4 -3
net/bluetooth/mgmt.c
··· 3819 3819 3820 3820 hci_blocked_keys_clear(hdev); 3821 3821 3822 - for (i = 0; i < keys->key_count; ++i) { 3822 + for (i = 0; i < key_count; ++i) { 3823 3823 struct blocked_key *b = kzalloc(sizeof(*b), GFP_KERNEL); 3824 3824 3825 3825 if (!b) { ··· 4624 4624 u32 current_flags = __le32_to_cpu(cp->current_flags); 4625 4625 4626 4626 bt_dev_dbg(hdev, "Set device flags %pMR (type 0x%x) = 0x%x", 4627 - &cp->addr.bdaddr, cp->addr.type, 4628 - __le32_to_cpu(current_flags)); 4627 + &cp->addr.bdaddr, cp->addr.type, current_flags); 4629 4628 4630 4629 // We should take hci_dev_lock() early, I think.. conn_flags can change 4631 4630 supported_flags = hdev->conn_flags; ··· 8935 8936 HCI_MGMT_EXT_INDEX_EVENTS); 8936 8937 8937 8938 /* Cancel any remaining timed work */ 8939 + if (!hci_dev_test_flag(hdev, HCI_MGMT)) 8940 + return; 8938 8941 cancel_delayed_work_sync(&hdev->discov_off); 8939 8942 cancel_delayed_work_sync(&hdev->service_cache); 8940 8943 cancel_delayed_work_sync(&hdev->rpa_expired);
+12 -3
net/bluetooth/msft.c
··· 120 120 121 121 skb = __hci_cmd_sync(hdev, hdev->msft_opcode, sizeof(cp), &cp, 122 122 HCI_CMD_TIMEOUT); 123 - if (IS_ERR(skb)) { 123 + if (IS_ERR_OR_NULL(skb)) { 124 + if (!skb) 125 + skb = ERR_PTR(-EIO); 126 + 124 127 bt_dev_err(hdev, "Failed to read MSFT supported features (%ld)", 125 128 PTR_ERR(skb)); 126 129 return false; ··· 322 319 323 320 skb = __hci_cmd_sync(hdev, hdev->msft_opcode, sizeof(cp), &cp, 324 321 HCI_CMD_TIMEOUT); 325 - if (IS_ERR(skb)) 322 + if (IS_ERR_OR_NULL(skb)) { 323 + if (!skb) 324 + return -EIO; 326 325 return PTR_ERR(skb); 326 + } 327 327 328 328 return msft_le_cancel_monitor_advertisement_cb(hdev, hdev->msft_opcode, 329 329 monitor, skb); ··· 438 432 HCI_CMD_TIMEOUT); 439 433 kfree(cp); 440 434 441 - if (IS_ERR(skb)) 435 + if (IS_ERR_OR_NULL(skb)) { 436 + if (!skb) 437 + return -EIO; 442 438 return PTR_ERR(skb); 439 + } 443 440 444 441 return msft_le_monitor_advertisement_cb(hdev, hdev->msft_opcode, 445 442 monitor, skb);
+1
net/bpf/test_run.c
··· 1628 1628 int ret; 1629 1629 1630 1630 ret = register_btf_kfunc_id_set(BPF_PROG_TYPE_SCHED_CLS, &bpf_prog_test_kfunc_set); 1631 + ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING, &bpf_prog_test_kfunc_set); 1631 1632 return ret ?: register_btf_id_dtor_kfuncs(bpf_prog_test_dtor_kfunc, 1632 1633 ARRAY_SIZE(bpf_prog_test_dtor_kfunc), 1633 1634 THIS_MODULE);
+4 -1
net/can/j1939/socket.c
··· 178 178 if (!first) 179 179 return; 180 180 181 - if (WARN_ON_ONCE(j1939_session_activate(first))) { 181 + if (j1939_session_activate(first)) { 182 + netdev_warn_once(first->priv->ndev, 183 + "%s: 0x%p: Identical session is already activated.\n", 184 + __func__, first); 182 185 first->err = -EBUSY; 183 186 goto activate_next; 184 187 } else {
+7 -1
net/can/j1939/transport.c
··· 260 260 261 261 static void j1939_session_destroy(struct j1939_session *session) 262 262 { 263 + struct sk_buff *skb; 264 + 263 265 if (session->transmission) { 264 266 if (session->err) 265 267 j1939_sk_errqueue(session, J1939_ERRQUEUE_TX_ABORT); ··· 276 274 WARN_ON_ONCE(!list_empty(&session->sk_session_queue_entry)); 277 275 WARN_ON_ONCE(!list_empty(&session->active_session_list_entry)); 278 276 279 - skb_queue_purge(&session->skb_queue); 277 + while ((skb = skb_dequeue(&session->skb_queue)) != NULL) { 278 + /* drop ref taken in j1939_session_skb_queue() */ 279 + skb_unref(skb); 280 + kfree_skb(skb); 281 + } 280 282 __j1939_session_drop(session); 281 283 j1939_priv_put(session->priv); 282 284 kfree(session);
+10 -2
net/core/bpf_sk_storage.c
··· 875 875 { 876 876 struct bpf_iter_seq_sk_storage_map_info *seq_info = priv_data; 877 877 878 + bpf_map_inc_with_uref(aux->map); 878 879 seq_info->map = aux->map; 879 880 return 0; 881 + } 882 + 883 + static void bpf_iter_fini_sk_storage_map(void *priv_data) 884 + { 885 + struct bpf_iter_seq_sk_storage_map_info *seq_info = priv_data; 886 + 887 + bpf_map_put_with_uref(seq_info->map); 880 888 } 881 889 882 890 static int bpf_iter_attach_map(struct bpf_prog *prog, ··· 904 896 if (map->map_type != BPF_MAP_TYPE_SK_STORAGE) 905 897 goto put_map; 906 898 907 - if (prog->aux->max_rdonly_access > map->value_size) { 899 + if (prog->aux->max_rdwr_access > map->value_size) { 908 900 err = -EACCES; 909 901 goto put_map; 910 902 } ··· 932 924 static const struct bpf_iter_seq_info iter_seq_info = { 933 925 .seq_ops = &bpf_sk_storage_map_seq_ops, 934 926 .init_seq_private = bpf_iter_init_sk_storage_map, 935 - .fini_seq_private = NULL, 927 + .fini_seq_private = bpf_iter_fini_sk_storage_map, 936 928 .seq_priv_size = sizeof(struct bpf_iter_seq_sk_storage_map_info), 937 929 }; 938 930
+2 -2
net/core/devlink.c
··· 5147 5147 const struct devlink_param *param, 5148 5148 struct devlink_param_gset_ctx *ctx) 5149 5149 { 5150 - if (!param->get) 5150 + if (!param->get || devlink->reload_failed) 5151 5151 return -EOPNOTSUPP; 5152 5152 return param->get(devlink, param->id, ctx); 5153 5153 } ··· 5156 5156 const struct devlink_param *param, 5157 5157 struct devlink_param_gset_ctx *ctx) 5158 5158 { 5159 - if (!param->set) 5159 + if (!param->set || devlink->reload_failed) 5160 5160 return -EOPNOTSUPP; 5161 5161 return param->set(devlink, param->id, ctx); 5162 5162 }
+4 -1
net/core/filter.c
··· 5063 5063 case SO_RCVLOWAT: 5064 5064 if (val < 0) 5065 5065 val = INT_MAX; 5066 - WRITE_ONCE(sk->sk_rcvlowat, val ? : 1); 5066 + if (sk->sk_socket && sk->sk_socket->ops->set_rcvlowat) 5067 + ret = sk->sk_socket->ops->set_rcvlowat(sk, val); 5068 + else 5069 + WRITE_ONCE(sk->sk_rcvlowat, val ? : 1); 5067 5070 break; 5068 5071 case SO_MARK: 5069 5072 if (sk->sk_mark != val) {
+3 -1
net/core/skmsg.c
··· 738 738 sk_psock_set_state(psock, SK_PSOCK_TX_ENABLED); 739 739 refcount_set(&psock->refcnt, 1); 740 740 741 - rcu_assign_sk_user_data_nocopy(sk, psock); 741 + __rcu_assign_sk_user_data_with_flags(sk, psock, 742 + SK_USER_DATA_NOCOPY | 743 + SK_USER_DATA_PSOCK); 742 744 sock_hold(sk); 743 745 744 746 out:
+19 -1
net/core/sock_map.c
··· 783 783 { 784 784 struct sock_map_seq_info *info = priv_data; 785 785 786 + bpf_map_inc_with_uref(aux->map); 786 787 info->map = aux->map; 787 788 return 0; 789 + } 790 + 791 + static void sock_map_fini_seq_private(void *priv_data) 792 + { 793 + struct sock_map_seq_info *info = priv_data; 794 + 795 + bpf_map_put_with_uref(info->map); 788 796 } 789 797 790 798 static const struct bpf_iter_seq_info sock_map_iter_seq_info = { 791 799 .seq_ops = &sock_map_seq_ops, 792 800 .init_seq_private = sock_map_init_seq_private, 801 + .fini_seq_private = sock_map_fini_seq_private, 793 802 .seq_priv_size = sizeof(struct sock_map_seq_info), 794 803 }; 795 804 ··· 1378 1369 }; 1379 1370 1380 1371 static int sock_hash_init_seq_private(void *priv_data, 1381 - struct bpf_iter_aux_info *aux) 1372 + struct bpf_iter_aux_info *aux) 1382 1373 { 1383 1374 struct sock_hash_seq_info *info = priv_data; 1384 1375 1376 + bpf_map_inc_with_uref(aux->map); 1385 1377 info->map = aux->map; 1386 1378 info->htab = container_of(aux->map, struct bpf_shtab, map); 1387 1379 return 0; 1388 1380 } 1389 1381 1382 + static void sock_hash_fini_seq_private(void *priv_data) 1383 + { 1384 + struct sock_hash_seq_info *info = priv_data; 1385 + 1386 + bpf_map_put_with_uref(info->map); 1387 + } 1388 + 1390 1389 static const struct bpf_iter_seq_info sock_hash_iter_seq_info = { 1391 1390 .seq_ops = &sock_hash_seq_ops, 1392 1391 .init_seq_private = sock_hash_init_seq_private, 1392 + .fini_seq_private = sock_hash_fini_seq_private, 1393 1393 .seq_priv_size = sizeof(struct sock_hash_seq_info), 1394 1394 }; 1395 1395
+1 -2
net/ipv6/ip6_output.c
··· 1311 1311 fl6.daddr = info->key.u.ipv6.dst; 1312 1312 fl6.saddr = info->key.u.ipv6.src; 1313 1313 prio = info->key.tos; 1314 - fl6.flowlabel = ip6_make_flowinfo(RT_TOS(prio), 1315 - info->key.label); 1314 + fl6.flowlabel = ip6_make_flowinfo(prio, info->key.label); 1316 1315 1317 1316 dst = ipv6_stub->ipv6_dst_lookup_flow(net, sock->sk, &fl6, 1318 1317 NULL);
+5 -5
net/ipv6/seg6_local.c
··· 1614 1614 * callback. If the callback is not available, then we skip to the next 1615 1615 * attribute; otherwise, we call the destroy() callback. 1616 1616 */ 1617 - for (i = 0; i < max_parsed; ++i) { 1617 + for (i = SEG6_LOCAL_SRH; i < max_parsed; ++i) { 1618 1618 if (!(parsed_attrs & SEG6_F_ATTR(i))) 1619 1619 continue; 1620 1620 ··· 1643 1643 struct seg6_action_param *param; 1644 1644 int err, i; 1645 1645 1646 - for (i = 0; i < SEG6_LOCAL_MAX + 1; ++i) { 1646 + for (i = SEG6_LOCAL_SRH; i < SEG6_LOCAL_MAX + 1; ++i) { 1647 1647 if (!(desc->optattrs & SEG6_F_ATTR(i)) || !attrs[i]) 1648 1648 continue; 1649 1649 ··· 1742 1742 } 1743 1743 1744 1744 /* parse the required attributes */ 1745 - for (i = 0; i < SEG6_LOCAL_MAX + 1; i++) { 1745 + for (i = SEG6_LOCAL_SRH; i < SEG6_LOCAL_MAX + 1; i++) { 1746 1746 if (desc->attrs & SEG6_F_ATTR(i)) { 1747 1747 if (!attrs[i]) 1748 1748 return -EINVAL; ··· 1847 1847 1848 1848 attrs = slwt->desc->attrs | slwt->parsed_optattrs; 1849 1849 1850 - for (i = 0; i < SEG6_LOCAL_MAX + 1; i++) { 1850 + for (i = SEG6_LOCAL_SRH; i < SEG6_LOCAL_MAX + 1; i++) { 1851 1851 if (attrs & SEG6_F_ATTR(i)) { 1852 1852 param = &seg6_action_params[i]; 1853 1853 err = param->put(skb, slwt); ··· 1927 1927 if (attrs_a != attrs_b) 1928 1928 return 1; 1929 1929 1930 - for (i = 0; i < SEG6_LOCAL_MAX + 1; i++) { 1930 + for (i = SEG6_LOCAL_SRH; i < SEG6_LOCAL_MAX + 1; i++) { 1931 1931 if (attrs_a & SEG6_F_ATTR(i)) { 1932 1932 param = &seg6_action_params[i]; 1933 1933 if (param->cmp(slwt_a, slwt_b))
+22 -25
net/mptcp/protocol.c
··· 1240 1240 info->limit > dfrag->data_len)) 1241 1241 return 0; 1242 1242 1243 + if (unlikely(!__tcp_can_send(ssk))) 1244 + return -EAGAIN; 1245 + 1243 1246 /* compute send limit */ 1244 1247 info->mss_now = tcp_send_mss(ssk, &info->size_goal, info->flags); 1245 1248 copy = info->size_goal; ··· 1416 1413 if (__mptcp_check_fallback(msk)) { 1417 1414 if (!msk->first) 1418 1415 return NULL; 1419 - return sk_stream_memory_free(msk->first) ? msk->first : NULL; 1416 + return __tcp_can_send(msk->first) && 1417 + sk_stream_memory_free(msk->first) ? msk->first : NULL; 1420 1418 } 1421 1419 1422 1420 /* re-use last subflow, if the burst allow that */ ··· 1568 1564 1569 1565 ret = mptcp_sendmsg_frag(sk, ssk, dfrag, &info); 1570 1566 if (ret <= 0) { 1567 + if (ret == -EAGAIN) 1568 + continue; 1571 1569 mptcp_push_release(ssk, &info); 1572 1570 goto out; 1573 1571 } ··· 2775 2769 2776 2770 static void __mptcp_destroy_sock(struct sock *sk) 2777 2771 { 2778 - struct mptcp_subflow_context *subflow, *tmp; 2779 2772 struct mptcp_sock *msk = mptcp_sk(sk); 2780 - LIST_HEAD(conn_list); 2781 2773 2782 2774 pr_debug("msk=%p", msk); 2783 2775 2784 2776 might_sleep(); 2785 2777 2786 - /* join list will be eventually flushed (with rst) at sock lock release time*/ 2787 - list_splice_init(&msk->conn_list, &conn_list); 2788 - 2789 2778 mptcp_stop_timer(sk); 2790 2779 sk_stop_timer(sk, &sk->sk_timer); 2791 2780 msk->pm.status = 0; 2792 - 2793 - /* clears msk->subflow, allowing the following loop to close 2794 - * even the initial subflow 2795 - */ 2796 - mptcp_dispose_initial_subflow(msk); 2797 - list_for_each_entry_safe(subflow, tmp, &conn_list, node) { 2798 - struct sock *ssk = mptcp_subflow_tcp_sock(subflow); 2799 - __mptcp_close_ssk(sk, ssk, subflow, 0); 2800 - } 2801 2781 2802 2782 sk->sk_prot->destroy(sk); 2803 2783 ··· 2876 2884 2877 2885 static int mptcp_disconnect(struct sock *sk, int flags) 2878 2886 { 2879 - struct mptcp_subflow_context *subflow, *tmp; 2880 2887 struct mptcp_sock *msk = mptcp_sk(sk); 2881 2888 2882 2889 inet_sk_state_store(sk, TCP_CLOSE); 2883 - 2884 - list_for_each_entry_safe(subflow, tmp, &msk->conn_list, node) { 2885 - struct sock *ssk = mptcp_subflow_tcp_sock(subflow); 2886 - 2887 - __mptcp_close_ssk(sk, ssk, subflow, MPTCP_CF_FASTCLOSE); 2888 - } 2889 2890 2890 2891 mptcp_stop_timer(sk); 2891 2892 sk_stop_timer(sk, &sk->sk_timer); ··· 2886 2901 if (mptcp_sk(sk)->token) 2887 2902 mptcp_event(MPTCP_EVENT_CLOSED, mptcp_sk(sk), NULL, GFP_KERNEL); 2888 2903 2889 - mptcp_destroy_common(msk); 2904 + /* msk->subflow is still intact, the following will not free the first 2905 + * subflow 2906 + */ 2907 + mptcp_destroy_common(msk, MPTCP_CF_FASTCLOSE); 2890 2908 msk->last_snd = NULL; 2891 2909 WRITE_ONCE(msk->flags, 0); 2892 2910 msk->cb_flags = 0; ··· 3039 3051 return newsk; 3040 3052 } 3041 3053 3042 - void mptcp_destroy_common(struct mptcp_sock *msk) 3054 + void mptcp_destroy_common(struct mptcp_sock *msk, unsigned int flags) 3043 3055 { 3056 + struct mptcp_subflow_context *subflow, *tmp; 3044 3057 struct sock *sk = (struct sock *)msk; 3045 3058 3046 3059 __mptcp_clear_xmit(sk); 3060 + 3061 + /* join list will be eventually flushed (with rst) at sock lock release time */ 3062 + list_for_each_entry_safe(subflow, tmp, &msk->conn_list, node) 3063 + __mptcp_close_ssk(sk, mptcp_subflow_tcp_sock(subflow), subflow, flags); 3047 3064 3048 3065 /* move to sk_receive_queue, sk_stream_kill_queues will purge it */ 3049 3066 mptcp_data_lock(sk); ··· 3071 3078 { 3072 3079 struct mptcp_sock *msk = mptcp_sk(sk); 3073 3080 3074 - mptcp_destroy_common(msk); 3081 + /* clears msk->subflow, allowing the following to close 3082 + * even the initial subflow 3083 + */ 3084 + mptcp_dispose_initial_subflow(msk); 3085 + mptcp_destroy_common(msk, 0); 3075 3086 sk_sockets_allocated_dec(sk); 3076 3087 }
+8 -5
net/mptcp/protocol.h
···
 			   struct sockaddr_storage *addr,
 			   unsigned short family);
 
+static inline bool __tcp_can_send(const struct sock *ssk)
+{
+	/* only send if our side has not closed yet */
+	return ((1 << inet_sk_state_load(ssk)) & (TCPF_ESTABLISHED | TCPF_CLOSE_WAIT));
+}
+
 static inline bool __mptcp_subflow_active(struct mptcp_subflow_context *subflow)
 {
-	struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
-
 	/* can't send if JOIN hasn't completed yet (i.e. is usable for mptcp) */
 	if (subflow->request_join && !subflow->fully_established)
 		return false;
 
-	/* only send if our side has not closed yet */
-	return ((1 << ssk->sk_state) & (TCPF_ESTABLISHED | TCPF_CLOSE_WAIT));
+	return __tcp_can_send(mptcp_subflow_tcp_sock(subflow));
 }
 
 void mptcp_subflow_set_active(struct mptcp_subflow_context *subflow);
···
 	}
 }
 
-void mptcp_destroy_common(struct mptcp_sock *msk);
+void mptcp_destroy_common(struct mptcp_sock *msk, unsigned int flags);
 
 #define MPTCP_TOKEN_MAX_RETRIES	4
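The new `__tcp_can_send()` helper relies on the kernel convention that each `TCPF_*` flag is `1 << TCP_*`, so membership in a set of socket states is a single mask test. A minimal userspace sketch of the same pattern (state numbers are stand-ins matching the kernel's values, not taken from this diff):

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-ins for the kernel's TCP state numbers; TCPF_x == 1 << TCP_x. */
enum { TCP_ESTABLISHED = 1, TCP_FIN_WAIT1 = 4, TCP_CLOSE = 7, TCP_CLOSE_WAIT = 8 };
#define TCPF_ESTABLISHED (1 << TCP_ESTABLISHED)
#define TCPF_CLOSE_WAIT  (1 << TCP_CLOSE_WAIT)

/* Same shape as __tcp_can_send(): one mask test covers every state in the
 * allowed set, instead of comparing against each state individually. */
static bool can_send(int state)
{
	return (1 << state) & (TCPF_ESTABLISHED | TCPF_CLOSE_WAIT);
}
```

The mask form is what lets the fix gate both the per-subflow check and the fallback path on the same cheap predicate.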
+2 -1
net/mptcp/subflow.c
···
 		sock_orphan(sk);
 	}
 
-	mptcp_destroy_common(mptcp_sk(sk));
+	/* We don't need to clear msk->subflow, as it's still NULL at this point */
+	mptcp_destroy_common(mptcp_sk(sk), 0);
 	inet_sock_destruct(sk);
 }
+1 -2
net/netfilter/Kconfig
···
 
 config NF_FLOW_TABLE_PROCFS
 	bool "Supply flow table statistics in procfs"
-	default y
+	depends on NF_FLOW_TABLE
 	depends on PROC_FS
-	depends on SYSCTL
 	help
 	  This option enables for the flow table offload statistics
 	  to be shown in procfs under net/netfilter/nf_flowtable.
+126 -58
net/netfilter/nf_tables_api.c
···
 	if (trans == NULL)
 		return NULL;
 
+	INIT_LIST_HEAD(&trans->list);
 	trans->msg_type = msg_type;
 	trans->ctx	= *ctx;
···
 }
 
 static struct nft_chain *nft_chain_lookup_byid(const struct net *net,
+					       const struct nft_table *table,
 					       const struct nlattr *nla)
 {
 	struct nftables_pernet *nft_net = nft_pernet(net);
···
 		struct nft_chain *chain = trans->ctx.chain;
 
 		if (trans->msg_type == NFT_MSG_NEWCHAIN &&
+		    chain->table == table &&
 		    id == nft_trans_chain_id(trans))
 			return chain;
 	}
···
 }
 
 static struct nft_rule *nft_rule_lookup_byid(const struct net *net,
+					     const struct nft_chain *chain,
 					     const struct nlattr *nla);
 
 #define NFT_RULE_MAXEXPRS 128
···
 			return -EOPNOTSUPP;
 
 	} else if (nla[NFTA_RULE_CHAIN_ID]) {
-		chain = nft_chain_lookup_byid(net, nla[NFTA_RULE_CHAIN_ID]);
+		chain = nft_chain_lookup_byid(net, table, nla[NFTA_RULE_CHAIN_ID]);
 		if (IS_ERR(chain)) {
 			NL_SET_BAD_ATTR(extack, nla[NFTA_RULE_CHAIN_ID]);
 			return PTR_ERR(chain);
···
 				return PTR_ERR(old_rule);
 			}
 		} else if (nla[NFTA_RULE_POSITION_ID]) {
-			old_rule = nft_rule_lookup_byid(net, nla[NFTA_RULE_POSITION_ID]);
+			old_rule = nft_rule_lookup_byid(net, chain, nla[NFTA_RULE_POSITION_ID]);
 			if (IS_ERR(old_rule)) {
 				NL_SET_BAD_ATTR(extack, nla[NFTA_RULE_POSITION_ID]);
 				return PTR_ERR(old_rule);
···
 }
 
 static struct nft_rule *nft_rule_lookup_byid(const struct net *net,
+					     const struct nft_chain *chain,
 					     const struct nlattr *nla)
 {
 	struct nftables_pernet *nft_net = nft_pernet(net);
···
 		struct nft_rule *rule = nft_trans_rule(trans);
 
 		if (trans->msg_type == NFT_MSG_NEWRULE &&
+		    trans->ctx.chain == chain &&
 		    id == nft_trans_rule_id(trans))
 			return rule;
 	}
···
 
 		err = nft_delrule(&ctx, rule);
 	} else if (nla[NFTA_RULE_ID]) {
-		rule = nft_rule_lookup_byid(net, nla[NFTA_RULE_ID]);
+		rule = nft_rule_lookup_byid(net, chain, nla[NFTA_RULE_ID]);
 		if (IS_ERR(rule)) {
 			NL_SET_BAD_ATTR(extack, nla[NFTA_RULE_ID]);
 			return PTR_ERR(rule);
···
 }
 
 static struct nft_set *nft_set_lookup_byid(const struct net *net,
+					   const struct nft_table *table,
 					   const struct nlattr *nla, u8 genmask)
 {
 	struct nftables_pernet *nft_net = nft_pernet(net);
···
 			struct nft_set *set = nft_trans_set(trans);
 
 			if (id == nft_trans_set_id(trans) &&
+			    set->table == table &&
 			    nft_active_genmask(set, genmask))
 				return set;
 		}
···
 		if (!nla_set_id)
 			return set;
 
-		set = nft_set_lookup_byid(net, nla_set_id, genmask);
+		set = nft_set_lookup_byid(net, table, nla_set_id, genmask);
 	}
 	return set;
 }
···
 static int nft_setelem_parse_key(struct nft_ctx *ctx, struct nft_set *set,
 				 struct nft_data *key, struct nlattr *attr)
 {
-	struct nft_data_desc desc;
-	int err;
+	struct nft_data_desc desc = {
+		.type	= NFT_DATA_VALUE,
+		.size	= NFT_DATA_VALUE_MAXLEN,
+		.len	= set->klen,
+	};
 
-	err = nft_data_init(ctx, key, NFT_DATA_VALUE_MAXLEN, &desc, attr);
-	if (err < 0)
-		return err;
-
-	if (desc.type != NFT_DATA_VALUE || desc.len != set->klen) {
-		nft_data_release(key, desc.type);
-		return -EINVAL;
-	}
-
-	return 0;
+	return nft_data_init(ctx, key, &desc, attr);
 }
 
 static int nft_setelem_parse_data(struct nft_ctx *ctx, struct nft_set *set,
···
 				  struct nlattr *attr)
 {
 	u32 dtype;
-	int err;
-
-	err = nft_data_init(ctx, data, NFT_DATA_VALUE_MAXLEN, desc, attr);
-	if (err < 0)
-		return err;
 
 	if (set->dtype == NFT_DATA_VERDICT)
 		dtype = NFT_DATA_VERDICT;
 	else
 		dtype = NFT_DATA_VALUE;
 
-	if (dtype != desc->type ||
-	    set->dlen != desc->len) {
-		nft_data_release(data, desc->type);
-		return -EINVAL;
-	}
+	desc->type = dtype;
+	desc->size = NFT_DATA_VALUE_MAXLEN;
+	desc->len = set->dlen;
+	desc->flags = NFT_DATA_DESC_SETELEM;
 
-	return 0;
+	return nft_data_init(ctx, data, desc, attr);
 }
 
 static void *nft_setelem_catchall_get(const struct net *net,
···
 	return ERR_PTR(err);
 }
 
+static int nft_set_ext_check(const struct nft_set_ext_tmpl *tmpl, u8 id, u32 len)
+{
+	len += nft_set_ext_types[id].len;
+	if (len > tmpl->ext_len[id] ||
+	    len > U8_MAX)
+		return -1;
+
+	return 0;
+}
+
+static int nft_set_ext_memcpy(const struct nft_set_ext_tmpl *tmpl, u8 id,
+			      void *to, const void *from, u32 len)
+{
+	if (nft_set_ext_check(tmpl, id, len) < 0)
+		return -1;
+
+	memcpy(to, from, len);
+
+	return 0;
+}
+
 void *nft_set_elem_init(const struct nft_set *set,
 			const struct nft_set_ext_tmpl *tmpl,
 			const u32 *key, const u32 *key_end,
···
 
 	elem = kzalloc(set->ops->elemsize + tmpl->len, gfp);
 	if (elem == NULL)
-		return NULL;
+		return ERR_PTR(-ENOMEM);
 
 	ext = nft_set_elem_ext(set, elem);
 	nft_set_ext_init(ext, tmpl);
 
-	if (nft_set_ext_exists(ext, NFT_SET_EXT_KEY))
-		memcpy(nft_set_ext_key(ext), key, set->klen);
-	if (nft_set_ext_exists(ext, NFT_SET_EXT_KEY_END))
-		memcpy(nft_set_ext_key_end(ext), key_end, set->klen);
-	if (nft_set_ext_exists(ext, NFT_SET_EXT_DATA))
-		memcpy(nft_set_ext_data(ext), data, set->dlen);
+	if (nft_set_ext_exists(ext, NFT_SET_EXT_KEY) &&
+	    nft_set_ext_memcpy(tmpl, NFT_SET_EXT_KEY,
+			       nft_set_ext_key(ext), key, set->klen) < 0)
+		goto err_ext_check;
+
+	if (nft_set_ext_exists(ext, NFT_SET_EXT_KEY_END) &&
+	    nft_set_ext_memcpy(tmpl, NFT_SET_EXT_KEY_END,
+			       nft_set_ext_key_end(ext), key_end, set->klen) < 0)
+		goto err_ext_check;
+
+	if (nft_set_ext_exists(ext, NFT_SET_EXT_DATA) &&
+	    nft_set_ext_memcpy(tmpl, NFT_SET_EXT_DATA,
+			       nft_set_ext_data(ext), data, set->dlen) < 0)
+		goto err_ext_check;
+
 	if (nft_set_ext_exists(ext, NFT_SET_EXT_EXPIRATION)) {
 		*nft_set_ext_expiration(ext) = get_jiffies_64() + expiration;
 		if (expiration == 0)
···
 		*nft_set_ext_timeout(ext) = timeout;
 
 	return elem;
+
+err_ext_check:
+	kfree(elem);
+
+	return ERR_PTR(-EINVAL);
 }
 
 static void __nft_set_elem_expr_destroy(const struct nft_ctx *ctx,
···
 }
 
 static int nft_set_elem_expr_setup(struct nft_ctx *ctx,
+				   const struct nft_set_ext_tmpl *tmpl,
 				   const struct nft_set_ext *ext,
 				   struct nft_expr *expr_array[],
 				   u32 num_exprs)
 {
 	struct nft_set_elem_expr *elem_expr = nft_set_ext_expr(ext);
+	u32 len = sizeof(struct nft_set_elem_expr);
 	struct nft_expr *expr;
 	int i, err;
+
+	if (num_exprs == 0)
+		return 0;
+
+	for (i = 0; i < num_exprs; i++)
+		len += expr_array[i]->ops->size;
+
+	if (nft_set_ext_check(tmpl, NFT_SET_EXT_EXPRESSIONS, len) < 0)
+		return -EINVAL;
 
 	for (i = 0; i < num_exprs; i++) {
 		expr = nft_setelem_expr_at(elem_expr, elem_expr->size);
···
 		}
 	}
 
-	err = -ENOMEM;
 	elem.priv = nft_set_elem_init(set, &tmpl, elem.key.val.data,
 				      elem.key_end.val.data, elem.data.val.data,
 				      timeout, expiration, GFP_KERNEL_ACCOUNT);
-	if (elem.priv == NULL)
+	if (IS_ERR(elem.priv)) {
+		err = PTR_ERR(elem.priv);
 		goto err_parse_data;
+	}
 
 	ext = nft_set_elem_ext(set, elem.priv);
 	if (flags)
 		*nft_set_ext_flags(ext) = flags;
+
 	if (ulen > 0) {
+		if (nft_set_ext_check(&tmpl, NFT_SET_EXT_USERDATA, ulen) < 0) {
+			err = -EINVAL;
+			goto err_elem_userdata;
+		}
 		udata = nft_set_ext_userdata(ext);
 		udata->len = ulen - 1;
 		nla_memcpy(&udata->data, nla[NFTA_SET_ELEM_USERDATA], ulen);
···
 		*nft_set_ext_obj(ext) = obj;
 		obj->use++;
 	}
-	err = nft_set_elem_expr_setup(ctx, ext, expr_array, num_exprs);
+	err = nft_set_elem_expr_setup(ctx, &tmpl, ext, expr_array, num_exprs);
 	if (err < 0)
-		goto err_elem_expr;
+		goto err_elem_free;
 
 	trans = nft_trans_elem_alloc(ctx, NFT_MSG_NEWSETELEM, set);
 	if (trans == NULL) {
 		err = -ENOMEM;
-		goto err_elem_expr;
+		goto err_elem_free;
 	}
 
 	ext->genmask = nft_genmask_cur(ctx->net) | NFT_SET_ELEM_BUSY_MASK;
···
 	nft_setelem_remove(ctx->net, set, &elem);
 err_element_clash:
 	kfree(trans);
-err_elem_expr:
+err_elem_free:
 	if (obj)
 		obj->use--;
-
+err_elem_userdata:
 	nf_tables_set_elem_destroy(ctx, set, elem.priv);
 err_parse_data:
 	if (nla[NFTA_SET_ELEM_DATA] != NULL)
···
 	elem.priv = nft_set_elem_init(set, &tmpl, elem.key.val.data,
 				      elem.key_end.val.data, NULL, 0, 0,
 				      GFP_KERNEL_ACCOUNT);
-	if (elem.priv == NULL)
+	if (IS_ERR(elem.priv)) {
+		err = PTR_ERR(elem.priv);
 		goto fail_elem_key_end;
+	}
 
 	ext = nft_set_elem_ext(set, elem.priv);
 	if (flags)
···
 					      tb[NFTA_VERDICT_CHAIN],
 					      genmask);
 	} else if (tb[NFTA_VERDICT_CHAIN_ID]) {
-		chain = nft_chain_lookup_byid(ctx->net,
+		chain = nft_chain_lookup_byid(ctx->net, ctx->table,
 					      tb[NFTA_VERDICT_CHAIN_ID]);
 		if (IS_ERR(chain))
 			return PTR_ERR(chain);
···
 			return PTR_ERR(chain);
 		if (nft_is_base_chain(chain))
 			return -EOPNOTSUPP;
+		if (desc->flags & NFT_DATA_DESC_SETELEM &&
+		    chain->flags & NFT_CHAIN_BINDING)
+			return -EINVAL;
 
 		chain->use++;
 		data->verdict.chain = chain;
···
 	}
 
 	desc->len = sizeof(data->verdict);
-	desc->type = NFT_DATA_VERDICT;
+
 	return 0;
 }
···
 }
 
 static int nft_value_init(const struct nft_ctx *ctx,
-			  struct nft_data *data, unsigned int size,
-			  struct nft_data_desc *desc, const struct nlattr *nla)
+			  struct nft_data *data, struct nft_data_desc *desc,
+			  const struct nlattr *nla)
 {
 	unsigned int len;
 
 	len = nla_len(nla);
 	if (len == 0)
 		return -EINVAL;
-	if (len > size)
+	if (len > desc->size)
 		return -EOVERFLOW;
+	if (desc->len) {
+		if (len != desc->len)
+			return -EINVAL;
+	} else {
+		desc->len = len;
+	}
 
 	nla_memcpy(data->data, nla, len);
-	desc->type = NFT_DATA_VALUE;
-	desc->len  = len;
+
 	return 0;
 }
···
 *
 *	@ctx: context of the expression using the data
 *	@data: destination struct nft_data
- *	@size: maximum data length
 *	@desc: data description
 *	@nla: netlink attribute containing data
 *
···
 *	The caller can indicate that it only wants to accept data of type
 *	NFT_DATA_VALUE by passing NULL for the ctx argument.
 */
-int nft_data_init(const struct nft_ctx *ctx,
-		  struct nft_data *data, unsigned int size,
+int nft_data_init(const struct nft_ctx *ctx, struct nft_data *data,
 		  struct nft_data_desc *desc, const struct nlattr *nla)
 {
 	struct nlattr *tb[NFTA_DATA_MAX + 1];
 	int err;
+
+	if (WARN_ON_ONCE(!desc->size))
+		return -EINVAL;
 
 	err = nla_parse_nested_deprecated(tb, NFTA_DATA_MAX, nla,
 					  nft_data_policy, NULL);
 	if (err < 0)
 		return err;
 
-	if (tb[NFTA_DATA_VALUE])
-		return nft_value_init(ctx, data, size, desc,
-				      tb[NFTA_DATA_VALUE]);
-	if (tb[NFTA_DATA_VERDICT] && ctx != NULL)
-		return nft_verdict_init(ctx, data, desc, tb[NFTA_DATA_VERDICT]);
-	return -EINVAL;
+	if (tb[NFTA_DATA_VALUE]) {
+		if (desc->type != NFT_DATA_VALUE)
+			return -EINVAL;
+
+		err = nft_value_init(ctx, data, desc, tb[NFTA_DATA_VALUE]);
+	} else if (tb[NFTA_DATA_VERDICT] && ctx != NULL) {
+		if (desc->type != NFT_DATA_VERDICT)
+			return -EINVAL;
+
+		err = nft_verdict_init(ctx, data, desc, tb[NFTA_DATA_VERDICT]);
+	} else {
+		err = -EINVAL;
+	}
+
+	return err;
 }
 EXPORT_SYMBOL_GPL(nft_data_init);
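The reworked `nft_data_init()` inverts the old flow: callers now pre-fill a descriptor with the expected type, buffer capacity, and (optionally) exact length, and the init routine validates against it, instead of every caller re-checking and releasing the data afterwards. A simplified userspace model of that contract (struct and function names here are illustrative, not the kernel's):

```c
#include <assert.h>
#include <string.h>

enum data_type { DATA_VALUE, DATA_VERDICT };

/* Caller-supplied expectations, loosely modelled on struct nft_data_desc. */
struct data_desc {
	enum data_type type;
	unsigned int size;	/* capacity of the destination buffer */
	unsigned int len;	/* exact expected length, 0 = accept any */
};

static int data_init(void *dst, struct data_desc *desc,
		     const void *src, unsigned int src_len)
{
	if (src_len == 0 || src_len > desc->size)
		return -1;		/* empty, or too big for the buffer */
	if (desc->len && src_len != desc->len)
		return -1;		/* caller demanded an exact length */
	if (!desc->len)
		desc->len = src_len;	/* report back what was accepted */
	memcpy(dst, src, src_len);
	return 0;
}
```

Centralizing the check is what lets the diff delete the repeated `desc.type != NFT_DATA_VALUE || desc.len != ...` blocks (and their easy-to-miss `nft_data_release()` calls) from every expression's init path.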
+10 -11
net/netfilter/nf_tables_core.c
···
 	nft_trace_notify(info);
 }
 
-static inline void nft_trace_packet(struct nft_traceinfo *info,
+static inline void nft_trace_packet(const struct nft_pktinfo *pkt,
+				    struct nft_traceinfo *info,
 				    const struct nft_chain *chain,
 				    const struct nft_rule_dp *rule,
 				    enum nft_trace_types type)
 {
 	if (static_branch_unlikely(&nft_trace_enabled)) {
-		const struct nft_pktinfo *pkt = info->pkt;
-
 		info->nf_trace = pkt->skb->nf_trace;
 		info->rule = rule;
 		__nft_trace_packet(info, chain, type);
 	}
 }
 
-static inline void nft_trace_copy_nftrace(struct nft_traceinfo *info)
+static inline void nft_trace_copy_nftrace(const struct nft_pktinfo *pkt,
+					  struct nft_traceinfo *info)
 {
 	if (static_branch_unlikely(&nft_trace_enabled)) {
-		const struct nft_pktinfo *pkt = info->pkt;
-
 		if (info->trace)
 			info->nf_trace = pkt->skb->nf_trace;
 	}
···
 				  const struct nft_chain *chain,
 				  const struct nft_regs *regs)
 {
-	const struct nft_pktinfo *pkt = info->pkt;
 	enum nft_trace_types type;
 
 	switch (regs->verdict.code) {
···
 		break;
 	default:
 		type = NFT_TRACETYPE_RULE;
-		info->nf_trace = pkt->skb->nf_trace;
+
+		if (info->trace)
+			info->nf_trace = info->pkt->skb->nf_trace;
 		break;
 	}
···
 		switch (regs.verdict.code) {
 		case NFT_BREAK:
 			regs.verdict.code = NFT_CONTINUE;
-			nft_trace_copy_nftrace(&info);
+			nft_trace_copy_nftrace(pkt, &info);
 			continue;
 		case NFT_CONTINUE:
-			nft_trace_packet(&info, chain, rule,
+			nft_trace_packet(pkt, &info, chain, rule,
 					 NFT_TRACETYPE_RULE);
 			continue;
 		}
···
 		goto next_rule;
 	}
 
-	nft_trace_packet(&info, basechain, NULL, NFT_TRACETYPE_POLICY);
+	nft_trace_packet(pkt, &info, basechain, NULL, NFT_TRACETYPE_POLICY);
 
 	if (static_branch_unlikely(&nft_counters_enabled))
 		nft_update_chain_stats(basechain, pkt);
+33 -33
net/netfilter/nft_bitwise.c
···
 static int nft_bitwise_init_bool(struct nft_bitwise *priv,
 				 const struct nlattr *const tb[])
 {
-	struct nft_data_desc mask, xor;
+	struct nft_data_desc mask = {
+		.type	= NFT_DATA_VALUE,
+		.size	= sizeof(priv->mask),
+		.len	= priv->len,
+	};
+	struct nft_data_desc xor = {
+		.type	= NFT_DATA_VALUE,
+		.size	= sizeof(priv->xor),
+		.len	= priv->len,
+	};
 	int err;
 
 	if (tb[NFTA_BITWISE_DATA])
···
 	    !tb[NFTA_BITWISE_XOR])
 		return -EINVAL;
 
-	err = nft_data_init(NULL, &priv->mask, sizeof(priv->mask), &mask,
-			    tb[NFTA_BITWISE_MASK]);
+	err = nft_data_init(NULL, &priv->mask, &mask, tb[NFTA_BITWISE_MASK]);
 	if (err < 0)
 		return err;
-	if (mask.type != NFT_DATA_VALUE || mask.len != priv->len) {
-		err = -EINVAL;
-		goto err_mask_release;
-	}
 
-	err = nft_data_init(NULL, &priv->xor, sizeof(priv->xor), &xor,
-			    tb[NFTA_BITWISE_XOR]);
+	err = nft_data_init(NULL, &priv->xor, &xor, tb[NFTA_BITWISE_XOR]);
 	if (err < 0)
-		goto err_mask_release;
-	if (xor.type != NFT_DATA_VALUE || xor.len != priv->len) {
-		err = -EINVAL;
-		goto err_xor_release;
-	}
+		goto err_xor_err;
 
 	return 0;
 
-err_xor_release:
-	nft_data_release(&priv->xor, xor.type);
-err_mask_release:
+err_xor_err:
 	nft_data_release(&priv->mask, mask.type);
+
 	return err;
 }
 
 static int nft_bitwise_init_shift(struct nft_bitwise *priv,
 				  const struct nlattr *const tb[])
 {
-	struct nft_data_desc d;
+	struct nft_data_desc desc = {
+		.type	= NFT_DATA_VALUE,
+		.size	= sizeof(priv->data),
+		.len	= sizeof(u32),
+	};
 	int err;
 
 	if (tb[NFTA_BITWISE_MASK] ||
···
 	if (!tb[NFTA_BITWISE_DATA])
 		return -EINVAL;
 
-	err = nft_data_init(NULL, &priv->data, sizeof(priv->data), &d,
-			    tb[NFTA_BITWISE_DATA]);
+	err = nft_data_init(NULL, &priv->data, &desc, tb[NFTA_BITWISE_DATA]);
 	if (err < 0)
 		return err;
-	if (d.type != NFT_DATA_VALUE || d.len != sizeof(u32) ||
-	    priv->data.data[0] >= BITS_PER_TYPE(u32)) {
-		nft_data_release(&priv->data, d.type);
+
+	if (priv->data.data[0] >= BITS_PER_TYPE(u32)) {
+		nft_data_release(&priv->data, desc.type);
 		return -EINVAL;
 	}
···
 static int
 nft_bitwise_extract_u32_data(const struct nlattr * const tb, u32 *out)
 {
-	struct nft_data_desc desc;
 	struct nft_data data;
-	int err = 0;
+	struct nft_data_desc desc = {
+		.type	= NFT_DATA_VALUE,
+		.size	= sizeof(data),
+		.len	= sizeof(u32),
+	};
+	int err;
 
-	err = nft_data_init(NULL, &data, sizeof(data), &desc, tb);
+	err = nft_data_init(NULL, &data, &desc, tb);
 	if (err < 0)
 		return err;
 
-	if (desc.type != NFT_DATA_VALUE || desc.len != sizeof(u32)) {
-		err = -EINVAL;
-		goto err;
-	}
 	*out = data.data[0];
-err:
-	nft_data_release(&data, desc.type);
-	return err;
+
+	return 0;
 }
 
 static int nft_bitwise_fast_init(const struct nft_ctx *ctx,
+20 -24
net/netfilter/nft_cmp.c
···
 			const struct nlattr * const tb[])
 {
 	struct nft_cmp_expr *priv = nft_expr_priv(expr);
-	struct nft_data_desc desc;
+	struct nft_data_desc desc = {
+		.type	= NFT_DATA_VALUE,
+		.size	= sizeof(priv->data),
+	};
 	int err;
 
-	err = nft_data_init(NULL, &priv->data, sizeof(priv->data), &desc,
-			    tb[NFTA_CMP_DATA]);
+	err = nft_data_init(NULL, &priv->data, &desc, tb[NFTA_CMP_DATA]);
 	if (err < 0)
 		return err;
-
-	if (desc.type != NFT_DATA_VALUE) {
-		err = -EINVAL;
-		nft_data_release(&priv->data, desc.type);
-		return err;
-	}
 
 	err = nft_parse_register_load(tb[NFTA_CMP_SREG], &priv->sreg, desc.len);
 	if (err < 0)
···
 			      const struct nlattr * const tb[])
 {
 	struct nft_cmp_fast_expr *priv = nft_expr_priv(expr);
-	struct nft_data_desc desc;
 	struct nft_data data;
+	struct nft_data_desc desc = {
+		.type	= NFT_DATA_VALUE,
+		.size	= sizeof(data),
+	};
 	int err;
 
-	err = nft_data_init(NULL, &data, sizeof(data), &desc,
-			    tb[NFTA_CMP_DATA]);
+	err = nft_data_init(NULL, &data, &desc, tb[NFTA_CMP_DATA]);
 	if (err < 0)
 		return err;
···
 				const struct nlattr * const tb[])
 {
 	struct nft_cmp16_fast_expr *priv = nft_expr_priv(expr);
-	struct nft_data_desc desc;
+	struct nft_data_desc desc = {
+		.type	= NFT_DATA_VALUE,
+		.size	= sizeof(priv->data),
+	};
 	int err;
 
-	err = nft_data_init(NULL, &priv->data, sizeof(priv->data), &desc,
-			    tb[NFTA_CMP_DATA]);
+	err = nft_data_init(NULL, &priv->data, &desc, tb[NFTA_CMP_DATA]);
 	if (err < 0)
 		return err;
···
 static const struct nft_expr_ops *
 nft_cmp_select_ops(const struct nft_ctx *ctx, const struct nlattr * const tb[])
 {
-	struct nft_data_desc desc;
 	struct nft_data data;
+	struct nft_data_desc desc = {
+		.type	= NFT_DATA_VALUE,
+		.size	= sizeof(data),
+	};
 	enum nft_cmp_ops op;
 	u8 sreg;
 	int err;
···
 		return ERR_PTR(-EINVAL);
 	}
 
-	err = nft_data_init(NULL, &data, sizeof(data), &desc,
-			    tb[NFTA_CMP_DATA]);
+	err = nft_data_init(NULL, &data, &desc, tb[NFTA_CMP_DATA]);
 	if (err < 0)
 		return ERR_PTR(err);
-
-	if (desc.type != NFT_DATA_VALUE)
-		goto err1;
 
 	sreg = ntohl(nla_get_be32(tb[NFTA_CMP_SREG]));
···
 			return &nft_cmp16_fast_ops;
 	}
 	return &nft_cmp_ops;
-err1:
-	nft_data_release(&data, desc.type);
-	return ERR_PTR(-EINVAL);
 }
 
 struct nft_expr_type nft_cmp_type __read_mostly = {
+1 -1
net/netfilter/nft_dynset.c
···
 					&regs->data[priv->sreg_key], NULL,
 					&regs->data[priv->sreg_data],
 					timeout, 0, GFP_ATOMIC);
-	if (elem == NULL)
+	if (IS_ERR(elem))
 		goto err1;
 
 	ext = nft_set_elem_ext(set, elem);
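This one-liner is the caller-side half of the `nft_set_elem_init()` change above: the allocator now returns `ERR_PTR(-ENOMEM)`/`ERR_PTR(-EINVAL)` instead of NULL, so testing `elem == NULL` would miss every failure. A userspace sketch of the kernel's ERR_PTR convention (the error pointer lives in the top, never-mappable page of the address space):

```c
#include <assert.h>
#include <stdlib.h>

#define MAX_ERRNO 4095	/* same bound the kernel uses in include/linux/err.h */

static inline void *ERR_PTR(long err)     { return (void *)err; }
static inline long PTR_ERR(const void *p) { return (long)p; }
static inline int IS_ERR(const void *p)
{
	/* errnos encode as the last MAX_ERRNO addresses, which are never
	 * valid pointers, so one unsigned compare distinguishes them */
	return (unsigned long)p >= (unsigned long)-MAX_ERRNO;
}

/* Toy allocator mirroring the new nft_set_elem_init() contract: the caller
 * must check IS_ERR(), not == NULL. */
static void *elem_alloc(size_t n)
{
	if (n == 0)
		return ERR_PTR(-22);	/* -EINVAL */
	void *p = malloc(n);
	return p ? p : ERR_PTR(-12);	/* -ENOMEM */
}
```

The convention lets a single pointer return carry either a valid object or a distinct errno, which is why the diff can now propagate `-EINVAL` from the new extension-length checks without changing the function signature.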
+19 -3
net/netfilter/nft_immediate.c
···
 	[NFTA_IMMEDIATE_DATA]	= { .type = NLA_NESTED },
 };
 
+static enum nft_data_types nft_reg_to_type(const struct nlattr *nla)
+{
+	enum nft_data_types type;
+	u8 reg;
+
+	reg = ntohl(nla_get_be32(nla));
+	if (reg == NFT_REG_VERDICT)
+		type = NFT_DATA_VERDICT;
+	else
+		type = NFT_DATA_VALUE;
+
+	return type;
+}
+
 static int nft_immediate_init(const struct nft_ctx *ctx,
 			      const struct nft_expr *expr,
 			      const struct nlattr * const tb[])
 {
 	struct nft_immediate_expr *priv = nft_expr_priv(expr);
-	struct nft_data_desc desc;
+	struct nft_data_desc desc = {
+		.size	= sizeof(priv->data),
+	};
 	int err;
 
 	if (tb[NFTA_IMMEDIATE_DREG] == NULL ||
 	    tb[NFTA_IMMEDIATE_DATA] == NULL)
 		return -EINVAL;
 
-	err = nft_data_init(ctx, &priv->data, sizeof(priv->data), &desc,
-			    tb[NFTA_IMMEDIATE_DATA]);
+	desc.type = nft_reg_to_type(tb[NFTA_IMMEDIATE_DREG]);
+	err = nft_data_init(ctx, &priv->data, &desc, tb[NFTA_IMMEDIATE_DATA]);
 	if (err < 0)
 		return err;
+12 -15
net/netfilter/nft_range.c
···
 			  const struct nlattr * const tb[])
 {
 	struct nft_range_expr *priv = nft_expr_priv(expr);
-	struct nft_data_desc desc_from, desc_to;
+	struct nft_data_desc desc_from = {
+		.type	= NFT_DATA_VALUE,
+		.size	= sizeof(priv->data_from),
+	};
+	struct nft_data_desc desc_to = {
+		.type	= NFT_DATA_VALUE,
+		.size	= sizeof(priv->data_to),
+	};
 	int err;
 	u32 op;
···
 	    !tb[NFTA_RANGE_TO_DATA])
 		return -EINVAL;
 
-	err = nft_data_init(NULL, &priv->data_from, sizeof(priv->data_from),
-			    &desc_from, tb[NFTA_RANGE_FROM_DATA]);
+	err = nft_data_init(NULL, &priv->data_from, &desc_from,
+			    tb[NFTA_RANGE_FROM_DATA]);
 	if (err < 0)
 		return err;
 
-	if (desc_from.type != NFT_DATA_VALUE) {
-		err = -EINVAL;
-		goto err1;
-	}
-
-	err = nft_data_init(NULL, &priv->data_to, sizeof(priv->data_to),
-			    &desc_to, tb[NFTA_RANGE_TO_DATA]);
+	err = nft_data_init(NULL, &priv->data_to, &desc_to,
+			    tb[NFTA_RANGE_TO_DATA]);
 	if (err < 0)
 		goto err1;
-
-	if (desc_to.type != NFT_DATA_VALUE) {
-		err = -EINVAL;
-		goto err2;
-	}
 
 	if (desc_from.len != desc_to.len) {
 		err = -EINVAL;
+1 -1
net/netlabel/netlabel_unlabeled.c
···
 
 	/* Don't allow users to add both IPv4 and IPv6 addresses for a
 	 * single entry.  However, allow users to create two entries, one each
-	 * for IPv4 and IPv4, with the same LSM security context which should
+	 * for IPv4 and IPv6, with the same LSM security context which should
 	 * achieve the same result. */
 	if (!info->attrs[NLBL_UNLABEL_A_SECCTX] ||
 	    !info->attrs[NLBL_UNLABEL_A_IFACE] ||
+1 -1
net/sched/cls_route.c
···
 	rcu_assign_pointer(f->next, f1);
 	rcu_assign_pointer(*fp, f);
 
-	if (fold && fold->handle && f->handle != fold->handle) {
+	if (fold) {
 		th = to_hash(fold->handle);
 		h = from_hash(fold->handle >> 16);
 		b = rtnl_dereference(head->table[th]);
+2 -6
net/sched/sch_generic.c
···
 
 unsigned long dev_trans_start(struct net_device *dev)
 {
-	unsigned long val, res;
+	unsigned long res = READ_ONCE(netdev_get_tx_queue(dev, 0)->trans_start);
+	unsigned long val;
 	unsigned int i;
 
-	if (is_vlan_dev(dev))
-		dev = vlan_dev_real_dev(dev);
-	else if (netif_is_macvlan(dev))
-		dev = macvlan_dev_real_dev(dev);
-	res = READ_ONCE(netdev_get_tx_queue(dev, 0)->trans_start);
 	for (i = 1; i < dev->num_tx_queues; i++) {
 		val = READ_ONCE(netdev_get_tx_queue(dev, i)->trans_start);
 		if (val && time_after(val, res))
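`dev_trans_start()` seeds its result from queue 0 and then takes the most recent non-zero timestamp across the remaining TX queues, using the kernel's wrap-safe `time_after()` comparison (this is the bonding ARP-monitor fix mentioned in the pull message: the vlan/macvlan special-casing is dropped so software devices report their own timestamps). A small sketch of that loop, with `time_after()` modelled the way the kernel defines it:

```c
#include <assert.h>

/* Wrap-safe "a is later than b", like the kernel's time_after(): the signed
 * difference stays correct even when the jiffies counter has wrapped. */
static int time_after(unsigned long a, unsigned long b)
{
	return (long)(b - a) < 0;
}

/* Sketch of the fixed dev_trans_start() loop over per-queue timestamps. */
static unsigned long trans_start_max(const unsigned long *q, unsigned int n)
{
	unsigned long res = q[0];
	unsigned int i;

	for (i = 1; i < n; i++)
		if (q[i] && time_after(q[i], res))
			res = q[i];
	return res;
}
```

The zero check matters because an idle queue's `trans_start` is 0, which must not be treated as "older than everything" nor clobber a real timestamp.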
+36 -10
net/tls/tls_device.c
···
 	struct tls_offload_context_tx *offload_ctx =
 		container_of(work, struct tls_offload_context_tx, destruct_work);
 	struct tls_context *ctx = offload_ctx->ctx;
-	struct net_device *netdev = ctx->netdev;
+	struct net_device *netdev;
+
+	/* Safe, because this is the destroy flow, refcount is 0, so
+	 * tls_device_down can't store this field in parallel.
+	 */
+	netdev = rcu_dereference_protected(ctx->netdev,
+					   !refcount_read(&ctx->refcount));
 
 	netdev->tlsdev_ops->tls_dev_del(netdev, ctx, TLS_OFFLOAD_CTX_DIR_TX);
 	dev_put(netdev);
···
 
 static void tls_device_queue_ctx_destruction(struct tls_context *ctx)
 {
+	struct net_device *netdev;
 	unsigned long flags;
 	bool async_cleanup;
···
 	}
 
 	list_del(&ctx->list); /* Remove from tls_device_list / tls_device_down_list */
-	async_cleanup = ctx->netdev && ctx->tx_conf == TLS_HW;
+
+	/* Safe, because this is the destroy flow, refcount is 0, so
+	 * tls_device_down can't store this field in parallel.
+	 */
+	netdev = rcu_dereference_protected(ctx->netdev,
+					   !refcount_read(&ctx->refcount));
+
+	async_cleanup = netdev && ctx->tx_conf == TLS_HW;
 	if (async_cleanup) {
 		struct tls_offload_context_tx *offload_ctx = tls_offload_ctx_tx(ctx);
···
 
 	trace_tls_device_tx_resync_send(sk, seq, rcd_sn);
 	down_read(&device_offload_lock);
-	netdev = tls_ctx->netdev;
+	netdev = rcu_dereference_protected(tls_ctx->netdev,
+					   lockdep_is_held(&device_offload_lock));
 	if (netdev)
 		err = netdev->tlsdev_ops->tls_dev_resync(netdev, sk, seq,
 							 rcd_sn,
···
 
 	trace_tls_device_rx_resync_send(sk, seq, rcd_sn, rx_ctx->resync_type);
 	rcu_read_lock();
-	netdev = READ_ONCE(tls_ctx->netdev);
+	netdev = rcu_dereference(tls_ctx->netdev);
 	if (netdev)
 		netdev->tlsdev_ops->tls_dev_resync(netdev, sk, seq, rcd_sn,
 						   TLS_OFFLOAD_CTX_DIR_RX);
···
 	int is_decrypted = skb->decrypted;
 	int is_encrypted = !is_decrypted;
 	struct sk_buff *skb_iter;
+	int left;
 
+	left = rxm->full_len - skb->len;
 	/* Check if all the data is decrypted already */
-	skb_walk_frags(skb, skb_iter) {
+	skb_iter = skb_shinfo(skb)->frag_list;
+	while (skb_iter && left > 0) {
 		is_decrypted &= skb_iter->decrypted;
 		is_encrypted &= !skb_iter->decrypted;
+
+		left -= skb_iter->len;
+		skb_iter = skb_iter->next;
 	}
 
 	trace_tls_device_decrypted(sk, tcp_sk(sk)->copied_seq - rxm->full_len,
···
 	if (sk->sk_destruct != tls_device_sk_destruct) {
 		refcount_set(&ctx->refcount, 1);
 		dev_hold(netdev);
-		ctx->netdev = netdev;
+		RCU_INIT_POINTER(ctx->netdev, netdev);
 		spin_lock_irq(&tls_device_lock);
 		list_add_tail(&ctx->list, &tls_device_list);
 		spin_unlock_irq(&tls_device_lock);
···
 	struct net_device *netdev;
 
 	down_read(&device_offload_lock);
-	netdev = tls_ctx->netdev;
+	netdev = rcu_dereference_protected(tls_ctx->netdev,
+					   lockdep_is_held(&device_offload_lock));
 	if (!netdev)
 		goto out;
···
 
 	if (tls_ctx->tx_conf != TLS_HW) {
 		dev_put(netdev);
-		tls_ctx->netdev = NULL;
+		rcu_assign_pointer(tls_ctx->netdev, NULL);
 	} else {
 		set_bit(TLS_RX_DEV_CLOSED, &tls_ctx->flags);
 	}
···
 
 	spin_lock_irqsave(&tls_device_lock, flags);
 	list_for_each_entry_safe(ctx, tmp, &tls_device_list, list) {
-		if (ctx->netdev != netdev ||
+		struct net_device *ctx_netdev =
+			rcu_dereference_protected(ctx->netdev,
+						  lockdep_is_held(&device_offload_lock));
+
+		if (ctx_netdev != netdev ||
 		    !refcount_inc_not_zero(&ctx->refcount))
 			continue;
···
 		/* Stop the RX and TX resync.
 		 * tls_dev_resync must not be called after tls_dev_del.
 		 */
-		WRITE_ONCE(ctx->netdev, NULL);
+		rcu_assign_pointer(ctx->netdev, NULL);
 
 		/* Start skipping the RX resync logic completely. */
 		set_bit(TLS_RX_DEV_DEGRADED, &ctx->flags);
+2 -1
net/tls/tls_device_fallback.c
··· 426 426 struct net_device *dev,
427 427 struct sk_buff *skb)
428 428 {
429 - if (dev == tls_get_ctx(sk)->netdev || netif_is_bond_master(dev))
429 + if (dev == rcu_dereference_bh(tls_get_ctx(sk)->netdev) ||
430 + netif_is_bond_master(dev))
430 431 return skb;
431 432
432 433 return tls_sw_fallback(sk, skb);
+1 -1
net/tls/tls_strp.c
··· 41 41 struct sk_buff *skb;
42 42 int i, err, offset;
43 43
44 - skb = alloc_skb_with_frags(0, strp->anchor->len, TLS_PAGE_ORDER,
44 + skb = alloc_skb_with_frags(0, strp->stm.full_len, TLS_PAGE_ORDER,
45 45 &err, strp->sk->sk_allocation);
46 46 if (!skb)
47 47 return NULL;
+9 -1
net/vmw_vsock/af_vsock.c
··· 1286 1286 if (sk->sk_state == TCP_SYN_SENT &&
1287 1287 (sk->sk_shutdown != SHUTDOWN_MASK)) {
1288 1288 sk->sk_state = TCP_CLOSE;
1289 + sk->sk_socket->state = SS_UNCONNECTED;
1289 1290 sk->sk_err = ETIMEDOUT;
1290 1291 sk_error_report(sk);
1291 1292 vsock_transport_cancel_pkt(vsk);
··· 1392 1391 * timeout fires.
1393 1392 */
1394 1393 sock_hold(sk);
1395 - schedule_delayed_work(&vsk->connect_work, timeout);
1394 +
1395 + /* If the timeout function is already scheduled,
1396 + * reschedule it, then ungrab the socket refcount to
1397 + * keep it balanced.
1398 + */
1399 + if (mod_delayed_work(system_wq, &vsk->connect_work,
1400 + timeout))
1401 + sock_put(sk);
1396 1402
1397 1403 /* Skip ahead to preserve error code set above. */
1398 1404 goto out_wait;
+5 -3
net/wireless/sme.c
··· 782 782 #endif
783 783
784 784 if (cr->status == WLAN_STATUS_SUCCESS) {
785 - for_each_valid_link(cr, link) {
786 - if (WARN_ON_ONCE(!cr->links[link].bss))
787 - break;
785 + if (!wiphy_to_rdev(wdev->wiphy)->ops->connect) {
786 + for_each_valid_link(cr, link) {
787 + if (WARN_ON_ONCE(!cr->links[link].bss))
788 + break;
789 + }
788 790 }
789 791
790 792 for_each_valid_link(cr, link) {
+5
net/x25/af_x25.c
··· 719 719 sk->sk_socket->state = SS_UNCONNECTED;
720 720 break;
721 721 }
722 + rc = -ENOTCONN;
723 + if (sk->sk_state == TCP_CLOSE) {
724 + sk->sk_socket->state = SS_UNCONNECTED;
725 + break;
726 + }
722 727 rc = 0;
723 728 if (sk->sk_state != TCP_ESTABLISHED) {
724 729 release_sock(sk);
+2 -2
tools/lib/bpf/skel_internal.h
··· 66 66 const char *errstr;
67 67 };
68 68
69 - long bpf_sys_bpf(__u32 cmd, void *attr, __u32 attr_size);
69 + long kern_sys_bpf(__u32 cmd, void *attr, __u32 attr_size);
70 70
71 71 static inline int skel_sys_bpf(enum bpf_cmd cmd, union bpf_attr *attr,
72 72 unsigned int size)
73 73 {
74 74 #ifdef __KERNEL__
75 - return bpf_sys_bpf(cmd, attr, size);
75 + return kern_sys_bpf(cmd, attr, size);
76 76 #else
77 77 return syscall(__NR_bpf, cmd, attr, size);
78 78 #endif
+115 -1
tools/testing/selftests/bpf/prog_tests/bpf_iter.c
··· 28 28 #include "bpf_iter_test_kern6.skel.h" 29 29 #include "bpf_iter_bpf_link.skel.h" 30 30 #include "bpf_iter_ksym.skel.h" 31 + #include "bpf_iter_sockmap.skel.h" 31 32 32 33 static int duration; 33 34 ··· 66 65 67 66 free_link: 68 67 bpf_link__destroy(link); 68 + } 69 + 70 + static void do_read_map_iter_fd(struct bpf_object_skeleton **skel, struct bpf_program *prog, 71 + struct bpf_map *map) 72 + { 73 + DECLARE_LIBBPF_OPTS(bpf_iter_attach_opts, opts); 74 + union bpf_iter_link_info linfo; 75 + struct bpf_link *link; 76 + char buf[16] = {}; 77 + int iter_fd, len; 78 + 79 + memset(&linfo, 0, sizeof(linfo)); 80 + linfo.map.map_fd = bpf_map__fd(map); 81 + opts.link_info = &linfo; 82 + opts.link_info_len = sizeof(linfo); 83 + link = bpf_program__attach_iter(prog, &opts); 84 + if (!ASSERT_OK_PTR(link, "attach_map_iter")) 85 + return; 86 + 87 + iter_fd = bpf_iter_create(bpf_link__fd(link)); 88 + if (!ASSERT_GE(iter_fd, 0, "create_map_iter")) { 89 + bpf_link__destroy(link); 90 + return; 91 + } 92 + 93 + /* Close link and map fd prematurely */ 94 + bpf_link__destroy(link); 95 + bpf_object__destroy_skeleton(*skel); 96 + *skel = NULL; 97 + 98 + /* Try to let map free work to run first if map is freed */ 99 + usleep(100); 100 + /* Memory used by both sock map and sock local storage map are 101 + * freed after two synchronize_rcu() calls, so wait for it 102 + */ 103 + kern_sync_rcu(); 104 + kern_sync_rcu(); 105 + 106 + /* Read after both map fd and link fd are closed */ 107 + while ((len = read(iter_fd, buf, sizeof(buf))) > 0) 108 + ; 109 + ASSERT_GE(len, 0, "read_iterator"); 110 + 111 + close(iter_fd); 69 112 } 70 113 71 114 static int read_fd_into_buffer(int fd, char *buf, int size) ··· 679 634 goto out; 680 635 } 681 636 637 + /* Sleepable program is prohibited for hash map iterator */ 638 + linfo.map.map_fd = map_fd; 639 + link = bpf_program__attach_iter(skel->progs.sleepable_dummy_dump, &opts); 640 + if (!ASSERT_ERR_PTR(link, "attach_sleepable_prog_to_iter")) 641 + 
goto out; 642 + 682 643 linfo.map.map_fd = map_fd; 683 644 link = bpf_program__attach_iter(skel->progs.dump_bpf_hash_map, &opts); 684 645 if (!ASSERT_OK_PTR(link, "attach_iter")) ··· 878 827 bpf_iter_bpf_array_map__destroy(skel); 879 828 } 880 829 830 + static void test_bpf_array_map_iter_fd(void) 831 + { 832 + struct bpf_iter_bpf_array_map *skel; 833 + 834 + skel = bpf_iter_bpf_array_map__open_and_load(); 835 + if (!ASSERT_OK_PTR(skel, "bpf_iter_bpf_array_map__open_and_load")) 836 + return; 837 + 838 + do_read_map_iter_fd(&skel->skeleton, skel->progs.dump_bpf_array_map, 839 + skel->maps.arraymap1); 840 + 841 + bpf_iter_bpf_array_map__destroy(skel); 842 + } 843 + 881 844 static void test_bpf_percpu_array_map(void) 882 845 { 883 846 DECLARE_LIBBPF_OPTS(bpf_iter_attach_opts, opts); ··· 1074 1009 bpf_iter_bpf_sk_storage_helpers__destroy(skel); 1075 1010 } 1076 1011 1012 + static void test_bpf_sk_stoarge_map_iter_fd(void) 1013 + { 1014 + struct bpf_iter_bpf_sk_storage_map *skel; 1015 + 1016 + skel = bpf_iter_bpf_sk_storage_map__open_and_load(); 1017 + if (!ASSERT_OK_PTR(skel, "bpf_iter_bpf_sk_storage_map__open_and_load")) 1018 + return; 1019 + 1020 + do_read_map_iter_fd(&skel->skeleton, skel->progs.rw_bpf_sk_storage_map, 1021 + skel->maps.sk_stg_map); 1022 + 1023 + bpf_iter_bpf_sk_storage_map__destroy(skel); 1024 + } 1025 + 1077 1026 static void test_bpf_sk_storage_map(void) 1078 1027 { 1079 1028 DECLARE_LIBBPF_OPTS(bpf_iter_attach_opts, opts); ··· 1123 1044 linfo.map.map_fd = map_fd; 1124 1045 opts.link_info = &linfo; 1125 1046 opts.link_info_len = sizeof(linfo); 1126 - link = bpf_program__attach_iter(skel->progs.dump_bpf_sk_storage_map, &opts); 1047 + link = bpf_program__attach_iter(skel->progs.oob_write_bpf_sk_storage_map, &opts); 1048 + err = libbpf_get_error(link); 1049 + if (!ASSERT_EQ(err, -EACCES, "attach_oob_write_iter")) { 1050 + if (!err) 1051 + bpf_link__destroy(link); 1052 + goto out; 1053 + } 1054 + 1055 + link = 
bpf_program__attach_iter(skel->progs.rw_bpf_sk_storage_map, &opts); 1127 1056 if (!ASSERT_OK_PTR(link, "attach_iter")) 1128 1057 goto out; 1129 1058 ··· 1139 1052 if (!ASSERT_GE(iter_fd, 0, "create_iter")) 1140 1053 goto free_link; 1141 1054 1055 + skel->bss->to_add_val = time(NULL); 1142 1056 /* do some tests */ 1143 1057 while ((len = read(iter_fd, buf, sizeof(buf))) > 0) 1144 1058 ; ··· 1152 1064 1153 1065 if (!ASSERT_EQ(skel->bss->val_sum, expected_val, "val_sum")) 1154 1066 goto close_iter; 1067 + 1068 + for (i = 0; i < num_sockets; i++) { 1069 + err = bpf_map_lookup_elem(map_fd, &sock_fd[i], &val); 1070 + if (!ASSERT_OK(err, "map_lookup") || 1071 + !ASSERT_EQ(val, i + 1 + skel->bss->to_add_val, "check_map_value")) 1072 + break; 1073 + } 1155 1074 1156 1075 close_iter: 1157 1076 close(iter_fd); ··· 1312 1217 bpf_iter_task_vma__destroy(skel); 1313 1218 } 1314 1219 1220 + void test_bpf_sockmap_map_iter_fd(void) 1221 + { 1222 + struct bpf_iter_sockmap *skel; 1223 + 1224 + skel = bpf_iter_sockmap__open_and_load(); 1225 + if (!ASSERT_OK_PTR(skel, "bpf_iter_sockmap__open_and_load")) 1226 + return; 1227 + 1228 + do_read_map_iter_fd(&skel->skeleton, skel->progs.copy, skel->maps.sockmap); 1229 + 1230 + bpf_iter_sockmap__destroy(skel); 1231 + } 1232 + 1315 1233 void test_bpf_iter(void) 1316 1234 { 1317 1235 if (test__start_subtest("btf_id_or_null")) ··· 1375 1267 test_bpf_percpu_hash_map(); 1376 1268 if (test__start_subtest("bpf_array_map")) 1377 1269 test_bpf_array_map(); 1270 + if (test__start_subtest("bpf_array_map_iter_fd")) 1271 + test_bpf_array_map_iter_fd(); 1378 1272 if (test__start_subtest("bpf_percpu_array_map")) 1379 1273 test_bpf_percpu_array_map(); 1380 1274 if (test__start_subtest("bpf_sk_storage_map")) 1381 1275 test_bpf_sk_storage_map(); 1276 + if (test__start_subtest("bpf_sk_storage_map_iter_fd")) 1277 + test_bpf_sk_stoarge_map_iter_fd(); 1382 1278 if (test__start_subtest("bpf_sk_storage_delete")) 1383 1279 test_bpf_sk_storage_delete(); 1384 1280 if 
(test__start_subtest("bpf_sk_storage_get")) ··· 1395 1283 test_link_iter(); 1396 1284 if (test__start_subtest("ksym")) 1397 1285 test_ksym_iter(); 1286 + if (test__start_subtest("bpf_sockmap_map_iter_fd")) 1287 + test_bpf_sockmap_map_iter_fd(); 1398 1288 }
+95
tools/testing/selftests/bpf/prog_tests/fexit_bpf2bpf.c
··· 3 3 #include <test_progs.h> 4 4 #include <network_helpers.h> 5 5 #include <bpf/btf.h> 6 + #include "bind4_prog.skel.h" 6 7 7 8 typedef int (*test_cb)(struct bpf_object *obj); 8 9 ··· 408 407 prog_name, false, NULL); 409 408 } 410 409 410 + static int find_prog_btf_id(const char *name, __u32 attach_prog_fd) 411 + { 412 + struct bpf_prog_info info = {}; 413 + __u32 info_len = sizeof(info); 414 + struct btf *btf; 415 + int ret; 416 + 417 + ret = bpf_obj_get_info_by_fd(attach_prog_fd, &info, &info_len); 418 + if (ret) 419 + return ret; 420 + 421 + if (!info.btf_id) 422 + return -EINVAL; 423 + 424 + btf = btf__load_from_kernel_by_id(info.btf_id); 425 + ret = libbpf_get_error(btf); 426 + if (ret) 427 + return ret; 428 + 429 + ret = btf__find_by_name_kind(btf, name, BTF_KIND_FUNC); 430 + btf__free(btf); 431 + return ret; 432 + } 433 + 434 + static int load_fentry(int attach_prog_fd, int attach_btf_id) 435 + { 436 + LIBBPF_OPTS(bpf_prog_load_opts, opts, 437 + .expected_attach_type = BPF_TRACE_FENTRY, 438 + .attach_prog_fd = attach_prog_fd, 439 + .attach_btf_id = attach_btf_id, 440 + ); 441 + struct bpf_insn insns[] = { 442 + BPF_MOV64_IMM(BPF_REG_0, 0), 443 + BPF_EXIT_INSN(), 444 + }; 445 + 446 + return bpf_prog_load(BPF_PROG_TYPE_TRACING, 447 + "bind4_fentry", 448 + "GPL", 449 + insns, 450 + ARRAY_SIZE(insns), 451 + &opts); 452 + } 453 + 454 + static void test_fentry_to_cgroup_bpf(void) 455 + { 456 + struct bind4_prog *skel = NULL; 457 + struct bpf_prog_info info = {}; 458 + __u32 info_len = sizeof(info); 459 + int cgroup_fd = -1; 460 + int fentry_fd = -1; 461 + int btf_id; 462 + 463 + cgroup_fd = test__join_cgroup("/fentry_to_cgroup_bpf"); 464 + if (!ASSERT_GE(cgroup_fd, 0, "cgroup_fd")) 465 + return; 466 + 467 + skel = bind4_prog__open_and_load(); 468 + if (!ASSERT_OK_PTR(skel, "skel")) 469 + goto cleanup; 470 + 471 + skel->links.bind_v4_prog = bpf_program__attach_cgroup(skel->progs.bind_v4_prog, cgroup_fd); 472 + if (!ASSERT_OK_PTR(skel->links.bind_v4_prog, 
"bpf_program__attach_cgroup")) 473 + goto cleanup; 474 + 475 + btf_id = find_prog_btf_id("bind_v4_prog", bpf_program__fd(skel->progs.bind_v4_prog)); 476 + if (!ASSERT_GE(btf_id, 0, "find_prog_btf_id")) 477 + goto cleanup; 478 + 479 + fentry_fd = load_fentry(bpf_program__fd(skel->progs.bind_v4_prog), btf_id); 480 + if (!ASSERT_GE(fentry_fd, 0, "load_fentry")) 481 + goto cleanup; 482 + 483 + /* Make sure bpf_obj_get_info_by_fd works correctly when attaching 484 + * to another BPF program. 485 + */ 486 + 487 + ASSERT_OK(bpf_obj_get_info_by_fd(fentry_fd, &info, &info_len), 488 + "bpf_obj_get_info_by_fd"); 489 + 490 + ASSERT_EQ(info.btf_id, 0, "info.btf_id"); 491 + ASSERT_EQ(info.attach_btf_id, btf_id, "info.attach_btf_id"); 492 + ASSERT_GT(info.attach_btf_obj_id, 0, "info.attach_btf_obj_id"); 493 + 494 + cleanup: 495 + if (cgroup_fd >= 0) 496 + close(cgroup_fd); 497 + if (fentry_fd >= 0) 498 + close(fentry_fd); 499 + bind4_prog__destroy(skel); 500 + } 501 + 411 502 /* NOTE: affect other tests, must run in serial mode */ 412 503 void serial_test_fexit_bpf2bpf(void) 413 504 { ··· 523 430 test_fmod_ret_freplace(); 524 431 if (test__start_subtest("func_replace_global_func")) 525 432 test_func_replace_global_func(); 433 + if (test__start_subtest("fentry_to_cgroup_bpf")) 434 + test_fentry_to_cgroup_bpf(); 526 435 }
+21
tools/testing/selftests/bpf/prog_tests/lru_bug.c
··· 1 + // SPDX-License-Identifier: GPL-2.0
2 + #include <test_progs.h>
3 +
4 + #include "lru_bug.skel.h"
5 +
6 + void test_lru_bug(void)
7 + {
8 + struct lru_bug *skel;
9 + int ret;
10 +
11 + skel = lru_bug__open_and_load();
12 + if (!ASSERT_OK_PTR(skel, "lru_bug__open_and_load"))
13 + return;
14 + ret = lru_bug__attach(skel);
15 + if (!ASSERT_OK(ret, "lru_bug__attach"))
16 + goto end;
17 + usleep(1);
18 + ASSERT_OK(skel->data->result, "prealloc_lru_pop doesn't call check_and_init_map_value");
19 + end:
20 + lru_bug__destroy(skel);
21 + }
+9
tools/testing/selftests/bpf/progs/bpf_iter_bpf_hash_map.c
··· 112 112
113 113 return 0;
114 114 }
115 +
116 + SEC("iter.s/bpf_map_elem")
117 + int sleepable_dummy_dump(struct bpf_iter__bpf_map_elem *ctx)
118 + {
119 + if (ctx->meta->seq_num == 0)
120 + BPF_SEQ_PRINTF(ctx->meta->seq, "map dump starts\n");
121 +
122 + return 0;
123 + }
+20 -2
tools/testing/selftests/bpf/progs/bpf_iter_bpf_sk_storage_map.c
··· 16 16
17 17 __u32 val_sum = 0;
18 18 __u32 ipv6_sk_count = 0;
19 + __u32 to_add_val = 0;
19 20
20 21 SEC("iter/bpf_sk_storage_map")
21 - int dump_bpf_sk_storage_map(struct bpf_iter__bpf_sk_storage_map *ctx)
22 + int rw_bpf_sk_storage_map(struct bpf_iter__bpf_sk_storage_map *ctx)
22 23 {
23 24 struct sock *sk = ctx->sk;
24 25 __u32 *val = ctx->value;
25 26
26 - if (sk == (void *)0 || val == (void *)0)
27 + if (sk == NULL || val == NULL)
27 28 return 0;
28 29
29 30 if (sk->sk_family == AF_INET6)
30 31 ipv6_sk_count++;
31 32
32 33 val_sum += *val;
34 +
35 + *val += to_add_val;
36 +
37 + return 0;
38 + }
39 +
40 + SEC("iter/bpf_sk_storage_map")
41 + int oob_write_bpf_sk_storage_map(struct bpf_iter__bpf_sk_storage_map *ctx)
42 + {
43 + struct sock *sk = ctx->sk;
44 + __u32 *val = ctx->value;
45 +
46 + if (sk == NULL || val == NULL)
47 + return 0;
48 +
49 + *(val + 1) = 0xdeadbeef;
50 +
33 51 return 0;
34 52 }
+49
tools/testing/selftests/bpf/progs/lru_bug.c
··· 1 + // SPDX-License-Identifier: GPL-2.0
2 + #include <vmlinux.h>
3 + #include <bpf/bpf_tracing.h>
4 + #include <bpf/bpf_helpers.h>
5 +
6 + struct map_value {
7 + struct task_struct __kptr *ptr;
8 + };
9 +
10 + struct {
11 + __uint(type, BPF_MAP_TYPE_LRU_HASH);
12 + __uint(max_entries, 1);
13 + __type(key, int);
14 + __type(value, struct map_value);
15 + } lru_map SEC(".maps");
16 +
17 + int pid = 0;
18 + int result = 1;
19 +
20 + SEC("fentry/bpf_ktime_get_ns")
21 + int printk(void *ctx)
22 + {
23 + struct map_value v = {};
24 +
25 + if (pid == bpf_get_current_task_btf()->pid)
26 + bpf_map_update_elem(&lru_map, &(int){0}, &v, 0);
27 + return 0;
28 + }
29 +
30 + SEC("fentry/do_nanosleep")
31 + int nanosleep(void *ctx)
32 + {
33 + struct map_value val = {}, *v;
34 + struct task_struct *current;
35 +
36 + bpf_map_update_elem(&lru_map, &(int){0}, &val, 0);
37 + v = bpf_map_lookup_elem(&lru_map, &(int){0});
38 + if (!v)
39 + return 0;
40 + bpf_map_delete_elem(&lru_map, &(int){0});
41 + current = bpf_get_current_task_btf();
42 + v->ptr = current;
43 + pid = current->pid;
44 + bpf_ktime_get_ns();
45 + result = !v->ptr;
46 + return 0;
47 + }
48 +
49 + char _license[] SEC("license") = "GPL";
+1
tools/testing/selftests/net/.gitignore
··· 39 39 tun
40 40 cmsg_sender
41 41 unix_connect
42 + tap
+1 -1
tools/testing/selftests/net/Makefile
··· 57 57 TEST_GEN_FILES += ioam6_parser
58 58 TEST_GEN_FILES += gro
59 59 TEST_GEN_PROGS = reuseport_bpf reuseport_bpf_cpu reuseport_bpf_numa
60 - TEST_GEN_PROGS += reuseport_dualstack reuseaddr_conflict tls tun
60 + TEST_GEN_PROGS += reuseport_dualstack reuseaddr_conflict tls tun tap
61 61 TEST_GEN_FILES += toeplitz
62 62 TEST_GEN_FILES += cmsg_sender
63 63 TEST_GEN_FILES += stress_reuseport_listen
+16 -8
tools/testing/selftests/net/forwarding/custom_multipath_hash.sh
··· 181 181 182 182 send_src_ipv4() 183 183 { 184 - $MZ $h1 -q -p 64 -A "198.51.100.2-198.51.100.253" -B 203.0.113.2 \ 184 + ip vrf exec v$h1 $MZ $h1 -q -p 64 \ 185 + -A "198.51.100.2-198.51.100.253" -B 203.0.113.2 \ 185 186 -d 1msec -c 50 -t udp "sp=20000,dp=30000" 186 187 } 187 188 188 189 send_dst_ipv4() 189 190 { 190 - $MZ $h1 -q -p 64 -A 198.51.100.2 -B "203.0.113.2-203.0.113.253" \ 191 + ip vrf exec v$h1 $MZ $h1 -q -p 64 \ 192 + -A 198.51.100.2 -B "203.0.113.2-203.0.113.253" \ 191 193 -d 1msec -c 50 -t udp "sp=20000,dp=30000" 192 194 } 193 195 194 196 send_src_udp4() 195 197 { 196 - $MZ $h1 -q -p 64 -A 198.51.100.2 -B 203.0.113.2 \ 198 + ip vrf exec v$h1 $MZ $h1 -q -p 64 \ 199 + -A 198.51.100.2 -B 203.0.113.2 \ 197 200 -d 1msec -t udp "sp=0-32768,dp=30000" 198 201 } 199 202 200 203 send_dst_udp4() 201 204 { 202 - $MZ $h1 -q -p 64 -A 198.51.100.2 -B 203.0.113.2 \ 205 + ip vrf exec v$h1 $MZ $h1 -q -p 64 \ 206 + -A 198.51.100.2 -B 203.0.113.2 \ 203 207 -d 1msec -t udp "sp=20000,dp=0-32768" 204 208 } 205 209 206 210 send_src_ipv6() 207 211 { 208 - $MZ -6 $h1 -q -p 64 -A "2001:db8:1::2-2001:db8:1::fd" -B 2001:db8:4::2 \ 212 + ip vrf exec v$h1 $MZ -6 $h1 -q -p 64 \ 213 + -A "2001:db8:1::2-2001:db8:1::fd" -B 2001:db8:4::2 \ 209 214 -d 1msec -c 50 -t udp "sp=20000,dp=30000" 210 215 } 211 216 212 217 send_dst_ipv6() 213 218 { 214 - $MZ -6 $h1 -q -p 64 -A 2001:db8:1::2 -B "2001:db8:4::2-2001:db8:4::fd" \ 219 + ip vrf exec v$h1 $MZ -6 $h1 -q -p 64 \ 220 + -A 2001:db8:1::2 -B "2001:db8:4::2-2001:db8:4::fd" \ 215 221 -d 1msec -c 50 -t udp "sp=20000,dp=30000" 216 222 } 217 223 ··· 232 226 233 227 send_src_udp6() 234 228 { 235 - $MZ -6 $h1 -q -p 64 -A 2001:db8:1::2 -B 2001:db8:4::2 \ 229 + ip vrf exec v$h1 $MZ -6 $h1 -q -p 64 \ 230 + -A 2001:db8:1::2 -B 2001:db8:4::2 \ 236 231 -d 1msec -t udp "sp=0-32768,dp=30000" 237 232 } 238 233 239 234 send_dst_udp6() 240 235 { 241 - $MZ -6 $h1 -q -p 64 -A 2001:db8:1::2 -B 2001:db8:4::2 \ 236 + ip vrf exec v$h1 $MZ -6 $h1 -q -p 64 \ 237 
+ -A 2001:db8:1::2 -B 2001:db8:4::2 \ 242 238 -d 1msec -t udp "sp=20000,dp=0-32768" 243 239 } 244 240
+16 -8
tools/testing/selftests/net/forwarding/gre_custom_multipath_hash.sh
··· 276 276 277 277 send_src_ipv4() 278 278 { 279 - $MZ $h1 -q -p 64 -A "198.51.100.2-198.51.100.253" -B 203.0.113.2 \ 279 + ip vrf exec v$h1 $MZ $h1 -q -p 64 \ 280 + -A "198.51.100.2-198.51.100.253" -B 203.0.113.2 \ 280 281 -d 1msec -c 50 -t udp "sp=20000,dp=30000" 281 282 } 282 283 283 284 send_dst_ipv4() 284 285 { 285 - $MZ $h1 -q -p 64 -A 198.51.100.2 -B "203.0.113.2-203.0.113.253" \ 286 + ip vrf exec v$h1 $MZ $h1 -q -p 64 \ 287 + -A 198.51.100.2 -B "203.0.113.2-203.0.113.253" \ 286 288 -d 1msec -c 50 -t udp "sp=20000,dp=30000" 287 289 } 288 290 289 291 send_src_udp4() 290 292 { 291 - $MZ $h1 -q -p 64 -A 198.51.100.2 -B 203.0.113.2 \ 293 + ip vrf exec v$h1 $MZ $h1 -q -p 64 \ 294 + -A 198.51.100.2 -B 203.0.113.2 \ 292 295 -d 1msec -t udp "sp=0-32768,dp=30000" 293 296 } 294 297 295 298 send_dst_udp4() 296 299 { 297 - $MZ $h1 -q -p 64 -A 198.51.100.2 -B 203.0.113.2 \ 300 + ip vrf exec v$h1 $MZ $h1 -q -p 64 \ 301 + -A 198.51.100.2 -B 203.0.113.2 \ 298 302 -d 1msec -t udp "sp=20000,dp=0-32768" 299 303 } 300 304 301 305 send_src_ipv6() 302 306 { 303 - $MZ -6 $h1 -q -p 64 -A "2001:db8:1::2-2001:db8:1::fd" -B 2001:db8:2::2 \ 307 + ip vrf exec v$h1 $MZ -6 $h1 -q -p 64 \ 308 + -A "2001:db8:1::2-2001:db8:1::fd" -B 2001:db8:2::2 \ 304 309 -d 1msec -c 50 -t udp "sp=20000,dp=30000" 305 310 } 306 311 307 312 send_dst_ipv6() 308 313 { 309 - $MZ -6 $h1 -q -p 64 -A 2001:db8:1::2 -B "2001:db8:2::2-2001:db8:2::fd" \ 314 + ip vrf exec v$h1 $MZ -6 $h1 -q -p 64 \ 315 + -A 2001:db8:1::2 -B "2001:db8:2::2-2001:db8:2::fd" \ 310 316 -d 1msec -c 50 -t udp "sp=20000,dp=30000" 311 317 } 312 318 ··· 327 321 328 322 send_src_udp6() 329 323 { 330 - $MZ -6 $h1 -q -p 64 -A 2001:db8:1::2 -B 2001:db8:2::2 \ 324 + ip vrf exec v$h1 $MZ -6 $h1 -q -p 64 \ 325 + -A 2001:db8:1::2 -B 2001:db8:2::2 \ 331 326 -d 1msec -t udp "sp=0-32768,dp=30000" 332 327 } 333 328 334 329 send_dst_udp6() 335 330 { 336 - $MZ -6 $h1 -q -p 64 -A 2001:db8:1::2 -B 2001:db8:2::2 \ 331 + ip vrf exec v$h1 $MZ -6 $h1 -q -p 64 \ 332 
+ -A 2001:db8:1::2 -B 2001:db8:2::2 \ 337 333 -d 1msec -t udp "sp=20000,dp=0-32768" 338 334 } 339 335
+16 -8
tools/testing/selftests/net/forwarding/ip6gre_custom_multipath_hash.sh
··· 278 278 279 279 send_src_ipv4() 280 280 { 281 - $MZ $h1 -q -p 64 -A "198.51.100.2-198.51.100.253" -B 203.0.113.2 \ 281 + ip vrf exec v$h1 $MZ $h1 -q -p 64 \ 282 + -A "198.51.100.2-198.51.100.253" -B 203.0.113.2 \ 282 283 -d 1msec -c 50 -t udp "sp=20000,dp=30000" 283 284 } 284 285 285 286 send_dst_ipv4() 286 287 { 287 - $MZ $h1 -q -p 64 -A 198.51.100.2 -B "203.0.113.2-203.0.113.253" \ 288 + ip vrf exec v$h1 $MZ $h1 -q -p 64 \ 289 + -A 198.51.100.2 -B "203.0.113.2-203.0.113.253" \ 288 290 -d 1msec -c 50 -t udp "sp=20000,dp=30000" 289 291 } 290 292 291 293 send_src_udp4() 292 294 { 293 - $MZ $h1 -q -p 64 -A 198.51.100.2 -B 203.0.113.2 \ 295 + ip vrf exec v$h1 $MZ $h1 -q -p 64 \ 296 + -A 198.51.100.2 -B 203.0.113.2 \ 294 297 -d 1msec -t udp "sp=0-32768,dp=30000" 295 298 } 296 299 297 300 send_dst_udp4() 298 301 { 299 - $MZ $h1 -q -p 64 -A 198.51.100.2 -B 203.0.113.2 \ 302 + ip vrf exec v$h1 $MZ $h1 -q -p 64 \ 303 + -A 198.51.100.2 -B 203.0.113.2 \ 300 304 -d 1msec -t udp "sp=20000,dp=0-32768" 301 305 } 302 306 303 307 send_src_ipv6() 304 308 { 305 - $MZ -6 $h1 -q -p 64 -A "2001:db8:1::2-2001:db8:1::fd" -B 2001:db8:2::2 \ 309 + ip vrf exec v$h1 $MZ -6 $h1 -q -p 64 \ 310 + -A "2001:db8:1::2-2001:db8:1::fd" -B 2001:db8:2::2 \ 306 311 -d 1msec -c 50 -t udp "sp=20000,dp=30000" 307 312 } 308 313 309 314 send_dst_ipv6() 310 315 { 311 - $MZ -6 $h1 -q -p 64 -A 2001:db8:1::2 -B "2001:db8:2::2-2001:db8:2::fd" \ 316 + ip vrf exec v$h1 $MZ -6 $h1 -q -p 64 \ 317 + -A 2001:db8:1::2 -B "2001:db8:2::2-2001:db8:2::fd" \ 312 318 -d 1msec -c 50 -t udp "sp=20000,dp=30000" 313 319 } 314 320 ··· 329 323 330 324 send_src_udp6() 331 325 { 332 - $MZ -6 $h1 -q -p 64 -A 2001:db8:1::2 -B 2001:db8:2::2 \ 326 + ip vrf exec v$h1 $MZ -6 $h1 -q -p 64 \ 327 + -A 2001:db8:1::2 -B 2001:db8:2::2 \ 333 328 -d 1msec -t udp "sp=0-32768,dp=30000" 334 329 } 335 330 336 331 send_dst_udp6() 337 332 { 338 - $MZ -6 $h1 -q -p 64 -A 2001:db8:1::2 -B 2001:db8:2::2 \ 333 + ip vrf exec v$h1 $MZ -6 $h1 -q -p 64 \ 334 
+ -A 2001:db8:1::2 -B 2001:db8:2::2 \ 339 335 -d 1msec -t udp "sp=20000,dp=0-32768" 340 336 } 341 337
+17 -9
tools/testing/selftests/net/mptcp/mptcp_connect.c
··· 553 553 fcntl(fd, F_SETFL, flags & ~O_NONBLOCK);
554 554 }
555 555
556 + static void shut_wr(int fd)
557 + {
558 + /* Close our write side, ev. give some time
559 + * for address notification and/or checking
560 + * the current status
561 + */
562 + if (cfg_wait)
563 + usleep(cfg_wait);
564 +
565 + shutdown(fd, SHUT_WR);
566 + }
567 +
556 568 static int copyfd_io_poll(int infd, int peerfd, int outfd, bool *in_closed_after_out)
557 569 {
558 570 struct pollfd fds = {
··· 642 630 /* ... and peer also closed already */
643 631 break;
644 632
645 - /* ... but we still receive.
646 - * Close our write side, ev. give some time
647 - * for address notification and/or checking
648 - * the current status
649 - */
650 - if (cfg_wait)
651 - usleep(cfg_wait);
652 - shutdown(peerfd, SHUT_WR);
633 + shut_wr(peerfd);
653 634 } else {
654 635 if (errno == EINTR)
655 636 continue;
··· 772 767 if (err)
773 768 return err;
774 769
775 - shutdown(peerfd, SHUT_WR);
770 + shut_wr(peerfd);
776 771
777 772 err = do_recvfile(peerfd, outfd);
778 773 *in_closed_after_out = true;
··· 796 791 err = do_sendfile(infd, peerfd, size);
797 792 if (err)
798 793 return err;
794 +
795 + shut_wr(peerfd);
796 +
799 797 err = do_recvfile(peerfd, outfd);
800 798 *in_closed_after_out = true;
801 799 }
+434
tools/testing/selftests/net/tap.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + #define _GNU_SOURCE 4 + 5 + #include <errno.h> 6 + #include <fcntl.h> 7 + #include <stdio.h> 8 + #include <stdlib.h> 9 + #include <string.h> 10 + #include <unistd.h> 11 + #include <net/if.h> 12 + #include <linux/if_tun.h> 13 + #include <linux/netlink.h> 14 + #include <linux/rtnetlink.h> 15 + #include <sys/ioctl.h> 16 + #include <sys/socket.h> 17 + #include <linux/virtio_net.h> 18 + #include <netinet/ip.h> 19 + #include <netinet/udp.h> 20 + #include "../kselftest_harness.h" 21 + 22 + static const char param_dev_tap_name[] = "xmacvtap0"; 23 + static const char param_dev_dummy_name[] = "xdummy0"; 24 + static unsigned char param_hwaddr_src[] = { 0x00, 0xfe, 0x98, 0x14, 0x22, 0x42 }; 25 + static unsigned char param_hwaddr_dest[] = { 26 + 0x00, 0xfe, 0x98, 0x94, 0xd2, 0x43 27 + }; 28 + 29 + #define MAX_RTNL_PAYLOAD (2048) 30 + #define PKT_DATA 0xCB 31 + #define TEST_PACKET_SZ (sizeof(struct virtio_net_hdr) + ETH_HLEN + ETH_MAX_MTU) 32 + 33 + static struct rtattr *rtattr_add(struct nlmsghdr *nh, unsigned short type, 34 + unsigned short len) 35 + { 36 + struct rtattr *rta = 37 + (struct rtattr *)((uint8_t *)nh + RTA_ALIGN(nh->nlmsg_len)); 38 + rta->rta_type = type; 39 + rta->rta_len = RTA_LENGTH(len); 40 + nh->nlmsg_len = RTA_ALIGN(nh->nlmsg_len) + RTA_ALIGN(rta->rta_len); 41 + return rta; 42 + } 43 + 44 + static struct rtattr *rtattr_begin(struct nlmsghdr *nh, unsigned short type) 45 + { 46 + return rtattr_add(nh, type, 0); 47 + } 48 + 49 + static void rtattr_end(struct nlmsghdr *nh, struct rtattr *attr) 50 + { 51 + uint8_t *end = (uint8_t *)nh + nh->nlmsg_len; 52 + 53 + attr->rta_len = end - (uint8_t *)attr; 54 + } 55 + 56 + static struct rtattr *rtattr_add_str(struct nlmsghdr *nh, unsigned short type, 57 + const char *s) 58 + { 59 + struct rtattr *rta = rtattr_add(nh, type, strlen(s)); 60 + 61 + memcpy(RTA_DATA(rta), s, strlen(s)); 62 + return rta; 63 + } 64 + 65 + static struct rtattr *rtattr_add_strsz(struct nlmsghdr 
*nh, unsigned short type, 66 + const char *s) 67 + { 68 + struct rtattr *rta = rtattr_add(nh, type, strlen(s) + 1); 69 + 70 + strcpy(RTA_DATA(rta), s); 71 + return rta; 72 + } 73 + 74 + static struct rtattr *rtattr_add_any(struct nlmsghdr *nh, unsigned short type, 75 + const void *arr, size_t len) 76 + { 77 + struct rtattr *rta = rtattr_add(nh, type, len); 78 + 79 + memcpy(RTA_DATA(rta), arr, len); 80 + return rta; 81 + } 82 + 83 + static int dev_create(const char *dev, const char *link_type, 84 + int (*fill_rtattr)(struct nlmsghdr *nh), 85 + int (*fill_info_data)(struct nlmsghdr *nh)) 86 + { 87 + struct { 88 + struct nlmsghdr nh; 89 + struct ifinfomsg info; 90 + unsigned char data[MAX_RTNL_PAYLOAD]; 91 + } req; 92 + struct rtattr *link_info, *info_data; 93 + int ret, rtnl; 94 + 95 + rtnl = socket(AF_NETLINK, SOCK_DGRAM, NETLINK_ROUTE); 96 + if (rtnl < 0) { 97 + fprintf(stderr, "%s: socket %s\n", __func__, strerror(errno)); 98 + return 1; 99 + } 100 + 101 + memset(&req, 0, sizeof(req)); 102 + req.nh.nlmsg_len = NLMSG_LENGTH(sizeof(req.info)); 103 + req.nh.nlmsg_flags = NLM_F_REQUEST | NLM_F_CREATE; 104 + req.nh.nlmsg_type = RTM_NEWLINK; 105 + 106 + req.info.ifi_family = AF_UNSPEC; 107 + req.info.ifi_type = 1; 108 + req.info.ifi_index = 0; 109 + req.info.ifi_flags = IFF_BROADCAST | IFF_UP; 110 + req.info.ifi_change = 0xffffffff; 111 + 112 + rtattr_add_str(&req.nh, IFLA_IFNAME, dev); 113 + 114 + if (fill_rtattr) { 115 + ret = fill_rtattr(&req.nh); 116 + if (ret) 117 + return ret; 118 + } 119 + 120 + link_info = rtattr_begin(&req.nh, IFLA_LINKINFO); 121 + 122 + rtattr_add_strsz(&req.nh, IFLA_INFO_KIND, link_type); 123 + 124 + if (fill_info_data) { 125 + info_data = rtattr_begin(&req.nh, IFLA_INFO_DATA); 126 + ret = fill_info_data(&req.nh); 127 + if (ret) 128 + return ret; 129 + rtattr_end(&req.nh, info_data); 130 + } 131 + 132 + rtattr_end(&req.nh, link_info); 133 + 134 + ret = send(rtnl, &req, req.nh.nlmsg_len, 0); 135 + if (ret < 0) 136 + fprintf(stderr, "%s: send 
%s\n", __func__, strerror(errno));
+	ret = (unsigned int)ret != req.nh.nlmsg_len;
+
+	close(rtnl);
+	return ret;
+}
+
+static int dev_delete(const char *dev)
+{
+	struct {
+		struct nlmsghdr nh;
+		struct ifinfomsg info;
+		unsigned char data[MAX_RTNL_PAYLOAD];
+	} req;
+	int ret, rtnl;
+
+	rtnl = socket(AF_NETLINK, SOCK_DGRAM, NETLINK_ROUTE);
+	if (rtnl < 0) {
+		fprintf(stderr, "%s: socket %s\n", __func__, strerror(errno));
+		return 1;
+	}
+
+	memset(&req, 0, sizeof(req));
+	req.nh.nlmsg_len = NLMSG_LENGTH(sizeof(req.info));
+	req.nh.nlmsg_flags = NLM_F_REQUEST;
+	req.nh.nlmsg_type = RTM_DELLINK;
+
+	req.info.ifi_family = AF_UNSPEC;
+
+	rtattr_add_str(&req.nh, IFLA_IFNAME, dev);
+
+	ret = send(rtnl, &req, req.nh.nlmsg_len, 0);
+	if (ret < 0)
+		fprintf(stderr, "%s: send %s\n", __func__, strerror(errno));
+
+	ret = (unsigned int)ret != req.nh.nlmsg_len;
+
+	close(rtnl);
+	return ret;
+}
+
+static int macvtap_fill_rtattr(struct nlmsghdr *nh)
+{
+	int ifindex;
+
+	ifindex = if_nametoindex(param_dev_dummy_name);
+	if (ifindex == 0) {
+		fprintf(stderr, "%s: ifindex %s\n", __func__, strerror(errno));
+		return -errno;
+	}
+
+	rtattr_add_any(nh, IFLA_LINK, &ifindex, sizeof(ifindex));
+	rtattr_add_any(nh, IFLA_ADDRESS, param_hwaddr_src, ETH_ALEN);
+
+	return 0;
+}
+
+static int opentap(const char *devname)
+{
+	int ifindex;
+	char buf[256];
+	int fd;
+	struct ifreq ifr;
+
+	ifindex = if_nametoindex(devname);
+	if (ifindex == 0) {
+		fprintf(stderr, "%s: ifindex %s\n", __func__, strerror(errno));
+		return -errno;
+	}
+
+	sprintf(buf, "/dev/tap%d", ifindex);
+	fd = open(buf, O_RDWR | O_NONBLOCK);
+	if (fd < 0) {
+		fprintf(stderr, "%s: open %s\n", __func__, strerror(errno));
+		return -errno;
+	}
+
+	memset(&ifr, 0, sizeof(ifr));
+	strcpy(ifr.ifr_name, devname);
+	ifr.ifr_flags = IFF_TAP | IFF_NO_PI | IFF_VNET_HDR | IFF_MULTI_QUEUE;
+	if (ioctl(fd, TUNSETIFF, &ifr, sizeof(ifr)) < 0)
+		return -errno;
+	return fd;
+}
+
+size_t build_eth(uint8_t *buf, uint16_t proto)
+{
+	struct ethhdr *eth = (struct ethhdr *)buf;
+
+	eth->h_proto = htons(proto);
+	memcpy(eth->h_source, param_hwaddr_src, ETH_ALEN);
+	memcpy(eth->h_dest, param_hwaddr_dest, ETH_ALEN);
+
+	return ETH_HLEN;
+}
+
+static uint32_t add_csum(const uint8_t *buf, int len)
+{
+	uint32_t sum = 0;
+	uint16_t *sbuf = (uint16_t *)buf;
+
+	while (len > 1) {
+		sum += *sbuf++;
+		len -= 2;
+	}
+
+	if (len)
+		sum += *(uint8_t *)sbuf;
+
+	return sum;
+}
+
+static uint16_t finish_ip_csum(uint32_t sum)
+{
+	uint16_t lo = sum & 0xffff;
+	uint16_t hi = sum >> 16;
+
+	return ~(lo + hi);
+}
+
+static uint16_t build_ip_csum(const uint8_t *buf, int len,
+			      uint32_t sum)
+{
+	sum += add_csum(buf, len);
+	return finish_ip_csum(sum);
+}
+
+static int build_ipv4_header(uint8_t *buf, int payload_len)
+{
+	struct iphdr *iph = (struct iphdr *)buf;
+
+	iph->ihl = 5;
+	iph->version = 4;
+	iph->ttl = 8;
+	iph->tot_len =
+		htons(sizeof(*iph) + sizeof(struct udphdr) + payload_len);
+	iph->id = htons(1337);
+	iph->protocol = IPPROTO_UDP;
+	iph->saddr = htonl((172 << 24) | (17 << 16) | 2);
+	iph->daddr = htonl((172 << 24) | (17 << 16) | 1);
+	iph->check = build_ip_csum(buf, iph->ihl << 2, 0);
+
+	return iph->ihl << 2;
+}
+
+static int build_udp_packet(uint8_t *buf, int payload_len, bool csum_off)
+{
+	const int ip4alen = sizeof(uint32_t);
+	struct udphdr *udph = (struct udphdr *)buf;
+	int len = sizeof(*udph) + payload_len;
+	uint32_t sum = 0;
+
+	udph->source = htons(22);
+	udph->dest = htons(58822);
+	udph->len = htons(len);
+
+	memset(buf + sizeof(struct udphdr), PKT_DATA, payload_len);
+
+	sum = add_csum(buf - 2 * ip4alen, 2 * ip4alen);
+	sum += htons(IPPROTO_UDP) + udph->len;
+
+	if (!csum_off)
+		sum += add_csum(buf, len);
+
+	udph->check = finish_ip_csum(sum);
+
+	return sizeof(*udph) + payload_len;
+}
+
+size_t build_test_packet_valid_udp_gso(uint8_t *buf, size_t payload_len)
+{
+	uint8_t *cur = buf;
+	struct virtio_net_hdr *vh = (struct virtio_net_hdr *)buf;
+
+	vh->hdr_len = ETH_HLEN + sizeof(struct iphdr) + sizeof(struct udphdr);
+	vh->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
+	vh->csum_start = ETH_HLEN + sizeof(struct iphdr);
+	vh->csum_offset = __builtin_offsetof(struct udphdr, check);
+	vh->gso_type = VIRTIO_NET_HDR_GSO_UDP;
+	vh->gso_size = ETH_DATA_LEN - sizeof(struct iphdr);
+	cur += sizeof(*vh);
+
+	cur += build_eth(cur, ETH_P_IP);
+	cur += build_ipv4_header(cur, payload_len);
+	cur += build_udp_packet(cur, payload_len, true);
+
+	return cur - buf;
+}
+
+size_t build_test_packet_valid_udp_csum(uint8_t *buf, size_t payload_len)
+{
+	uint8_t *cur = buf;
+	struct virtio_net_hdr *vh = (struct virtio_net_hdr *)buf;
+
+	vh->flags = VIRTIO_NET_HDR_F_DATA_VALID;
+	vh->gso_type = VIRTIO_NET_HDR_GSO_NONE;
+	cur += sizeof(*vh);
+
+	cur += build_eth(cur, ETH_P_IP);
+	cur += build_ipv4_header(cur, payload_len);
+	cur += build_udp_packet(cur, payload_len, false);
+
+	return cur - buf;
+}
+
+size_t build_test_packet_crash_tap_invalid_eth_proto(uint8_t *buf,
+						     size_t payload_len)
+{
+	uint8_t *cur = buf;
+	struct virtio_net_hdr *vh = (struct virtio_net_hdr *)buf;
+
+	vh->hdr_len = ETH_HLEN + sizeof(struct iphdr) + sizeof(struct udphdr);
+	vh->flags = 0;
+	vh->gso_type = VIRTIO_NET_HDR_GSO_UDP;
+	vh->gso_size = ETH_DATA_LEN - sizeof(struct iphdr);
+	cur += sizeof(*vh);
+
+	cur += build_eth(cur, 0);
+	cur += sizeof(struct iphdr) + sizeof(struct udphdr);
+	cur += build_ipv4_header(cur, payload_len);
+	cur += build_udp_packet(cur, payload_len, true);
+	cur += payload_len;
+
+	return cur - buf;
+}
+
+FIXTURE(tap)
+{
+	int fd;
+};
+
+FIXTURE_SETUP(tap)
+{
+	int ret;
+
+	ret = dev_create(param_dev_dummy_name, "dummy", NULL, NULL);
+	EXPECT_EQ(ret, 0);
+
+	ret = dev_create(param_dev_tap_name, "macvtap", macvtap_fill_rtattr,
+			 NULL);
+	EXPECT_EQ(ret, 0);
+
+	self->fd = opentap(param_dev_tap_name);
+	ASSERT_GE(self->fd, 0);
+}
+
+FIXTURE_TEARDOWN(tap)
+{
+	int ret;
+
+	if (self->fd != -1)
+		close(self->fd);
+
+	ret = dev_delete(param_dev_tap_name);
+	EXPECT_EQ(ret, 0);
+
+	ret = dev_delete(param_dev_dummy_name);
+	EXPECT_EQ(ret, 0);
+}
+
+TEST_F(tap, test_packet_valid_udp_gso)
+{
+	uint8_t pkt[TEST_PACKET_SZ];
+	size_t off;
+	int ret;
+
+	memset(pkt, 0, sizeof(pkt));
+	off = build_test_packet_valid_udp_gso(pkt, 1021);
+	ret = write(self->fd, pkt, off);
+	ASSERT_EQ(ret, off);
+}
+
+TEST_F(tap, test_packet_valid_udp_csum)
+{
+	uint8_t pkt[TEST_PACKET_SZ];
+	size_t off;
+	int ret;
+
+	memset(pkt, 0, sizeof(pkt));
+	off = build_test_packet_valid_udp_csum(pkt, 1024);
+	ret = write(self->fd, pkt, off);
+	ASSERT_EQ(ret, off);
+}
+
+TEST_F(tap, test_packet_crash_tap_invalid_eth_proto)
+{
+	uint8_t pkt[TEST_PACKET_SZ];
+	size_t off;
+	int ret;
+
+	memset(pkt, 0, sizeof(pkt));
+	off = build_test_packet_crash_tap_invalid_eth_proto(pkt, 1024);
+	ret = write(self->fd, pkt, off);
+	ASSERT_EQ(ret, -1);
+	ASSERT_EQ(errno, EINVAL);
+}
+
+TEST_HARNESS_MAIN
+76 -5
tools/testing/selftests/netfilter/nft_trans_stress.sh
···
  # Kselftest framework requirement - SKIP code is 4.
  ksft_skip=4
 
- testns=testns1
+ testns=testns-$(mktemp -u "XXXXXXXX")
+
  tables="foo bar baz quux"
+ global_ret=0
+ eret=0
+ lret=0
+
+ check_result()
+ {
+ 	local r=$1
+ 	local OK="PASS"
+
+ 	if [ $r -ne 0 ] ;then
+ 		OK="FAIL"
+ 		global_ret=$r
+ 	fi
+
+ 	echo "$OK: nft $2 test returned $r"
+
+ 	eret=0
+ }
 
  nft --version > /dev/null 2>&1
  if [ $? -ne 0 ];then
···
 
  sleep 1
 
+ ip netns exec "$testns" nft -f "$tmp"
  for i in $(seq 1 10) ; do ip netns exec "$testns" nft -f "$tmp" & done
 
  for table in $tables;do
- 	randsleep=$((RANDOM%10))
+ 	randsleep=$((RANDOM%2))
  	sleep $randsleep
- 	ip netns exec "$testns" nft delete table inet $table 2>/dev/null
+ 	ip netns exec "$testns" nft delete table inet $table
+ 	lret=$?
+ 	if [ $lret -ne 0 ]; then
+ 		eret=$lret
+ 	fi
  done
 
- randsleep=$((RANDOM%10))
- sleep $randsleep
+ check_result $eret "add/delete"
+
+ for i in $(seq 1 10) ; do
+ 	(echo "flush ruleset"; cat "$tmp") | ip netns exec "$testns" nft -f /dev/stdin
+
+ 	lret=$?
+ 	if [ $lret -ne 0 ]; then
+ 		eret=$lret
+ 	fi
+ done
+
+ check_result $eret "reload"
+
+ for i in $(seq 1 10) ; do
+ 	(echo "flush ruleset"; cat "$tmp"
+ 	 echo "insert rule inet foo INPUT meta nftrace set 1"
+ 	 echo "insert rule inet foo OUTPUT meta nftrace set 1"
+ 	) | ip netns exec "$testns" nft -f /dev/stdin
+ 	lret=$?
+ 	if [ $lret -ne 0 ]; then
+ 		eret=$lret
+ 	fi
+
+ 	(echo "flush ruleset"; cat "$tmp"
+ 	) | ip netns exec "$testns" nft -f /dev/stdin
+
+ 	lret=$?
+ 	if [ $lret -ne 0 ]; then
+ 		eret=$lret
+ 	fi
+ done
+
+ check_result $eret "add/delete with nftrace enabled"
+
+ echo "insert rule inet foo INPUT meta nftrace set 1" >> $tmp
+ echo "insert rule inet foo OUTPUT meta nftrace set 1" >> $tmp
+
+ for i in $(seq 1 10) ; do
+ 	(echo "flush ruleset"; cat "$tmp") | ip netns exec "$testns" nft -f /dev/stdin
+
+ 	lret=$?
+ 	if [ $lret -ne 0 ]; then
+ 		eret=1
+ 	fi
+ done
+
+ check_result $lret "add/delete with nftrace enabled"
 
  pkill -9 ping
 
···
 
  rm -f "$tmp"
  ip netns del "$testns"
+
+ exit $global_ret