Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'net-5.16-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
"Networking fixes, including fixes from mac80211, wifi, bpf.

Relatively large batches of fixes from BPF and the WiFi stack, calm in
general networking.

Current release - regressions:

- dpaa2-eth: fix buffer overrun when reporting ethtool statistics

Current release - new code bugs:

- bpf: fix incorrect state pruning for <8B spill/fill

- iavf:
- add missing unlocks in iavf_watchdog_task()
- do not override the adapter state in the watchdog task (again)

- mlxsw: spectrum_router: consolidate MAC profiles when possible

Previous releases - regressions:

- mac80211 fixes:
- rate control, avoid driver crash for retransmitted frames
- regression in SSN handling of addba tx
- a memory leak where sta_info is not freed
- marking TX-during-stop for TX in in_reconfig, prevent stall

- cfg80211: acquire wiphy mutex on regulatory work

- wifi drivers: fix build regressions and LED config dependency

- virtio_net: fix rx_drops stat for small pkts

- dsa: mv88e6xxx: unforce speed & duplex in mac_link_down()

Previous releases - always broken:

- bpf fixes:
- kernel address leakage in atomic fetch
- kernel address leakage in atomic cmpxchg's r0 aux reg
- signed bounds propagation after mov32
- extable fixup offset
- extable address check

- mac80211:
- fix the size used for building probe request
- send ADDBA requests using the tid/queue of the aggregation
session
- agg-tx: don't schedule_and_wake_txq() under sta->lock, avoid
deadlocks
- validate extended element ID is present

- mptcp:
- never allow the PM to close a listener subflow (null-deref)
- clear 'kern' flag from fallback sockets, prevent crash
- fix deadlock in __mptcp_push_pending()

- inet_diag: fix kernel-infoleak for UDP sockets

- xsk: do not sleep in poll() when need_wakeup set

- smc: avoid very long waits in smc_release()

- sch_ets: don't remove idle classes from the round-robin list

- netdevsim:
- zero-initialize memory for bpf map's value, prevent info leak
- don't let user space overwrite read only (max) ethtool parms

- ixgbe: set X550 MDIO speed before talking to PHY

- stmmac:
- fix null-deref in flower deletion w/ VLAN prio Rx steering
- dwmac-rk: fix oob read in rk_gmac_setup

- ice: time stamping fixes

- systemport: add global locking for descriptor life cycle"

* tag 'net-5.16-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (89 commits)
bpf, selftests: Fix racing issue in btf_skc_cls_ingress test
selftest/bpf: Add a test that reads various addresses.
bpf: Fix extable address check.
bpf: Fix extable fixup offset.
bpf, selftests: Add test case trying to taint map value pointer
bpf: Make 32->64 bounds propagation slightly more robust
bpf: Fix signed bounds propagation after mov32
sit: do not call ipip6_dev_free() from sit_init_net()
net: systemport: Add global locking for descriptor lifecycle
net/smc: Prevent smc_release() from long blocking
net: Fix double 0x prefix print in SKB dump
virtio_net: fix rx_drops stat for small pkts
dsa: mv88e6xxx: fix debug print for SPEED_UNFORCED
sfc_ef100: potential dereference of null pointer
net: stmmac: dwmac-rk: fix oob read in rk_gmac_setup
net: usb: lan78xx: add Allied Telesis AT29M2-AF
net/packet: rx_owner_map depends on pg_vec
netdevsim: Zero-initialize memory for new map's value in function nsim_bpf_map_alloc
dpaa2-eth: fix ethtool statistics
ixgbe: set X550 MDIO speed before talking to PHY
...

+881 -198
+16
Documentation/networking/device_drivers/ethernet/intel/ixgbe.rst
···
440 440  a virtual function (VF), jumbo frames must first be enabled in the physical
441 441  function (PF). The VF MTU setting cannot be larger than the PF MTU.
442 442
443 +    NBASE-T Support
444 +    ---------------
445 +    The ixgbe driver supports NBASE-T on some devices. However, the advertisement
446 +    of NBASE-T speeds is suppressed by default, to accommodate broken network
447 +    switches which cannot cope with advertised NBASE-T speeds. Use the ethtool
448 +    command to enable advertising NBASE-T speeds on devices which support it::
449 +
450 +      ethtool -s eth? advertise 0x1800000001028
451 +
452 +    On Linux systems with INTERFACES(5), this can be specified as a pre-up command
453 +    in /etc/network/interfaces so that the interface is always brought up with
454 +    NBASE-T support, e.g.::
455 +
456 +      iface eth? inet dhcp
457 +           pre-up ethtool -s eth? advertise 0x1800000001028 || true
458 +
443 459  Generic Receive Offload, aka GRO
444 460  --------------------------------
445 461  The driver supports the in-kernel software implementation of GRO. GRO has
+6 -6
MAINTAINERS
···
3066 3066   F: drivers/phy/qualcomm/phy-ath79-usb.c
3067 3067
3068 3068   ATHEROS ATH GENERIC UTILITIES
3069 -       M: Kalle Valo <kvalo@codeaurora.org>
3069 +       M: Kalle Valo <kvalo@kernel.org>
3070 3070   L: linux-wireless@vger.kernel.org
3071 3071   S: Supported
3072 3072   F: drivers/net/wireless/ath/*
···
3081 3081   F: drivers/net/wireless/ath/ath5k/
3082 3082
3083 3083   ATHEROS ATH6KL WIRELESS DRIVER
3084 -       M: Kalle Valo <kvalo@codeaurora.org>
3084 +       M: Kalle Valo <kvalo@kernel.org>
3085 3085   L: linux-wireless@vger.kernel.org
3086 3086   S: Supported
3087 3087   W: https://wireless.wiki.kernel.org/en/users/Drivers/ath6kl
···
13248 13248   F: include/uapi/linux/netdevice.h
13249 13249
13250 13250   NETWORKING DRIVERS (WIRELESS)
13251 -         M: Kalle Valo <kvalo@codeaurora.org>
13251 +         M: Kalle Valo <kvalo@kernel.org>
13252 13252   L: linux-wireless@vger.kernel.org
13253 13253   S: Maintained
13254 13254   Q: http://patchwork.kernel.org/project/linux-wireless/list/
···
15704 15704   F: drivers/media/tuners/qt1010*
15705 15705
15706 15706   QUALCOMM ATHEROS ATH10K WIRELESS DRIVER
15707 -         M: Kalle Valo <kvalo@codeaurora.org>
15707 +         M: Kalle Valo <kvalo@kernel.org>
15708 15708   L: ath10k@lists.infradead.org
15709 15709   S: Supported
15710 15710   W: https://wireless.wiki.kernel.org/en/users/Drivers/ath10k
···
15712 15712   F: drivers/net/wireless/ath/ath10k/
15713 15713
15714 15714   QUALCOMM ATHEROS ATH11K WIRELESS DRIVER
15715 -         M: Kalle Valo <kvalo@codeaurora.org>
15715 +         M: Kalle Valo <kvalo@kernel.org>
15716 15716   L: ath11k@lists.infradead.org
15717 15717   S: Supported
15718 15718   T: git git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/ath.git
···
15885 15885   F: drivers/media/platform/qcom/venus/
15886 15886
15887 15887   QUALCOMM WCN36XX WIRELESS DRIVER
15888 -         M: Kalle Valo <kvalo@codeaurora.org>
15888 +         M: Kalle Valo <kvalo@kernel.org>
15889 15889   L: wcn36xx@lists.infradead.org
15890 15890   S: Supported
15891 15891   W: https://wireless.wiki.kernel.org/en/users/Drivers/wcn36xx
+43 -8
arch/x86/net/bpf_jit_comp.c
···
1252 1252      case BPF_LDX | BPF_MEM | BPF_DW:
1253 1253      case BPF_LDX | BPF_PROBE_MEM | BPF_DW:
1254 1254          if (BPF_MODE(insn->code) == BPF_PROBE_MEM) {
1255 -                /* test src_reg, src_reg */
1256 -                maybe_emit_mod(&prog, src_reg, src_reg, true); /* always 1 byte */
1257 -                EMIT2(0x85, add_2reg(0xC0, src_reg, src_reg));
1258 -                /* jne start_of_ldx */
1259 -                EMIT2(X86_JNE, 0);
1255 +                /* Though the verifier prevents negative insn->off in BPF_PROBE_MEM
1256 +                 * add abs(insn->off) to the limit to make sure that negative
1257 +                 * offset won't be an issue.
1258 +                 * insn->off is s16, so it won't affect valid pointers.
1259 +                 */
1260 +                u64 limit = TASK_SIZE_MAX + PAGE_SIZE + abs(insn->off);
1261 +                u8 *end_of_jmp1, *end_of_jmp2;
1262 +
1263 +                /* Conservatively check that src_reg + insn->off is a kernel address:
1264 +                 * 1. src_reg + insn->off >= limit
1265 +                 * 2. src_reg + insn->off doesn't become small positive.
1266 +                 * Cannot do src_reg + insn->off >= limit in one branch,
1267 +                 * since it needs two spare registers, but JIT has only one.
1268 +                 */
1269 +
1270 +                /* movabsq r11, limit */
1271 +                EMIT2(add_1mod(0x48, AUX_REG), add_1reg(0xB8, AUX_REG));
1272 +                EMIT((u32)limit, 4);
1273 +                EMIT(limit >> 32, 4);
1274 +                /* cmp src_reg, r11 */
1275 +                maybe_emit_mod(&prog, src_reg, AUX_REG, true);
1276 +                EMIT2(0x39, add_2reg(0xC0, src_reg, AUX_REG));
1277 +                /* if unsigned '<' goto end_of_jmp2 */
1278 +                EMIT2(X86_JB, 0);
1279 +                end_of_jmp1 = prog;
1280 +
1281 +                /* mov r11, src_reg */
1282 +                emit_mov_reg(&prog, true, AUX_REG, src_reg);
1283 +                /* add r11, insn->off */
1284 +                maybe_emit_1mod(&prog, AUX_REG, true);
1285 +                EMIT2_off32(0x81, add_1reg(0xC0, AUX_REG), insn->off);
1286 +                /* jmp if not carry to start_of_ldx
1287 +                 * Otherwise ERR_PTR(-EINVAL) + 128 will be the user addr
1288 +                 * that has to be rejected.
1289 +                 */
1290 +                EMIT2(0x73 /* JNC */, 0);
1291 +                end_of_jmp2 = prog;
1292 +
1260 1293          /* xor dst_reg, dst_reg */
1261 1294          emit_mov_imm32(&prog, false, dst_reg, 0);
1262 1295          /* jmp byte_after_ldx */
1263 1296          EMIT2(0xEB, 0);
1264 1297
1265 -        /* populate jmp_offset for JNE above */
1266 -        temp[4] = prog - temp - 5 /* sizeof(test + jne) */;
1298 +        /* populate jmp_offset for JB above to jump to xor dst_reg */
1299 +        end_of_jmp1[-1] = end_of_jmp2 - end_of_jmp1;
1300 +        /* populate jmp_offset for JNC above to jump to start_of_ldx */
1267 1301          start_of_ldx = prog;
1302 +        end_of_jmp2[-1] = start_of_ldx - end_of_jmp2;
1268 1303      }
1269 1304      emit_ldx(&prog, BPF_SIZE(insn->code), dst_reg, src_reg, insn->off);
1270 1305      if (BPF_MODE(insn->code) == BPF_PROBE_MEM) {
···
1340 1305       * End result: x86 insn "mov rbx, qword ptr [rax+0x14]"
1341 1306       * of 4 bytes will be ignored and rbx will be zero inited.
1342 1307       */
1343 -        ex->fixup = (prog - temp) | (reg2pt_regs[dst_reg] << 8);
1308 +        ex->fixup = (prog - start_of_ldx) | (reg2pt_regs[dst_reg] << 8);
1344 1309      }
1345 1310      break;
1346 1311
+4
drivers/net/dsa/mv88e6xxx/chip.c
···
768 768      if ((!mv88e6xxx_port_ppu_updates(chip, port) ||
769 769           mode == MLO_AN_FIXED) && ops->port_sync_link)
770 770          err = ops->port_sync_link(chip, port, mode, false);
771 +
772 +        if (!err && ops->port_set_speed_duplex)
773 +            err = ops->port_set_speed_duplex(chip, port, SPEED_UNFORCED,
774 +                                             DUPLEX_UNFORCED);
771 775      mv88e6xxx_reg_unlock(chip);
772 776
773 777      if (err)
+2 -2
drivers/net/dsa/mv88e6xxx/port.c
···
283 283      if (err)
284 284          return err;
285 285
286 -        if (speed)
286 +        if (speed != SPEED_UNFORCED)
287 287          dev_dbg(chip->dev, "p%d: Speed set to %d Mbps\n", port, speed);
288 288      else
289 289          dev_dbg(chip->dev, "p%d: Speed unforced\n", port);
···
516 516      if (err)
517 517          return err;
518 518
519 -        if (speed)
519 +        if (speed != SPEED_UNFORCED)
520 520          dev_dbg(chip->dev, "p%d: Speed set to %d Mbps\n", port, speed);
521 521      else
522 522          dev_dbg(chip->dev, "p%d: Speed unforced\n", port);
+4 -1
drivers/net/ethernet/broadcom/bcmsysport.c
···
1309 1309      struct bcm_sysport_priv *priv = netdev_priv(dev);
1310 1310      struct device *kdev = &priv->pdev->dev;
1311 1311      struct bcm_sysport_tx_ring *ring;
1312 +        unsigned long flags, desc_flags;
1312 1313      struct bcm_sysport_cb *cb;
1313 1314      struct netdev_queue *txq;
1314 1315      u32 len_status, addr_lo;
1315 1316      unsigned int skb_len;
1316 -        unsigned long flags;
1317 1317      dma_addr_t mapping;
1318 1318      u16 queue;
1319 1319      int ret;
···
1373 1373      ring->desc_count--;
1374 1374
1375 1375      /* Ports are latched, so write upper address first */
1376 +        spin_lock_irqsave(&priv->desc_lock, desc_flags);
1376 1377      tdma_writel(priv, len_status, TDMA_WRITE_PORT_HI(ring->index));
1377 1378      tdma_writel(priv, addr_lo, TDMA_WRITE_PORT_LO(ring->index));
1379 +        spin_unlock_irqrestore(&priv->desc_lock, desc_flags);
1378 1380
1379 1381      /* Check ring space and update SW control flow */
1380 1382      if (ring->desc_count == 0)
···
2015 2013  }
2016 2014
2017 2015      /* Initialize both hardware and software ring */
2016 +        spin_lock_init(&priv->desc_lock);
2018 2017      for (i = 0; i < dev->num_tx_queues; i++) {
2019 2018          ret = bcm_sysport_init_tx_ring(priv, i);
2020 2019          if (ret) {
+1
drivers/net/ethernet/broadcom/bcmsysport.h
···
711 711      int wol_irq;
712 712
713 713      /* Transmit rings */
714 +        spinlock_t desc_lock;
714 715      struct bcm_sysport_tx_ring *tx_rings;
715 716
716 717      /* Receive queue */
+2 -2
drivers/net/ethernet/broadcom/genet/bcmmii.c
···
589 589       * Internal or external PHY with MDIO access
590 590       */
591 591      phydev = phy_attach(priv->dev, phy_name, pd->phy_interface);
592 -        if (!phydev) {
592 +        if (IS_ERR(phydev)) {
593 593          dev_err(kdev, "failed to register PHY device\n");
594 -            return -ENODEV;
594 +            return PTR_ERR(phydev);
595 595      }
596 596  } else {
597 597      /*
+2
drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h
···
388 388      __u64 bytes_per_cdan;
389 389  };
390 390
391 +    #define DPAA2_ETH_CH_STATS	7
392 +
391 393  /* Maximum number of queues associated with a DPNI */
392 394  #define DPAA2_ETH_MAX_TCS		8
393 395  #define DPAA2_ETH_MAX_RX_QUEUES_PER_TC	16
+1 -1
drivers/net/ethernet/freescale/dpaa2/dpaa2-ethtool.c
···
278 278      /* Per-channel stats */
279 279      for (k = 0; k < priv->num_channels; k++) {
280 280          ch_stats = &priv->channel[k]->stats;
281 -            for (j = 0; j < sizeof(*ch_stats) / sizeof(__u64) - 1; j++)
281 +            for (j = 0; j < DPAA2_ETH_CH_STATS; j++)
282 282              *((__u64 *)data + i + j) += *((__u64 *)ch_stats + j);
283 283      }
284 284      i += j;
+2
drivers/net/ethernet/hisilicon/hns3/hnae3.h
···
839 839
840 840      u8 netdev_flags;
841 841      struct dentry *hnae3_dbgfs;
842 +        /* protects concurrent contention between debugfs commands */
843 +        struct mutex dbgfs_lock;
842 844
843 845      /* Network interface message level enabled bits */
844 846      u32 msg_enable;
+14 -6
drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c
···
1226 1226      if (ret)
1227 1227          return ret;
1228 1228
1229 +        mutex_lock(&handle->dbgfs_lock);
1229 1230      save_buf = &hns3_dbg_cmd[index].buf;
1230 1231
1231 1232      if (!test_bit(HNS3_NIC_STATE_INITED, &priv->state) ||
···
1239 1238          read_buf = *save_buf;
1240 1239      } else {
1241 1240          read_buf = kvzalloc(hns3_dbg_cmd[index].buf_len, GFP_KERNEL);
1242 -            if (!read_buf)
1243 -                return -ENOMEM;
1241 +            if (!read_buf) {
1242 +                ret = -ENOMEM;
1243 +                goto out;
1244 +            }
1244 1245
1245 1246          /* save the buffer addr until the last read operation */
1246 1247          *save_buf = read_buf;
1247 -    }
1248 1248
1249 -    /* get data ready for the first time to read */
1250 -    if (!*ppos) {
1249 +        /* get data ready for the first time to read */
1251 1250          ret = hns3_dbg_read_cmd(dbg_data, hns3_dbg_cmd[index].cmd,
1252 1251                                  read_buf, hns3_dbg_cmd[index].buf_len);
1253 1252          if (ret)
···
1256 1255
1257 1256      size = simple_read_from_buffer(buffer, count, ppos, read_buf,
1258 1257                                     strlen(read_buf));
1259 -        if (size > 0)
1258 +        if (size > 0) {
1259 +            mutex_unlock(&handle->dbgfs_lock);
1260 1260          return size;
1261 +        }
1261 1262
1262 1263  out:
1263 1264      /* free the buffer for the last read operation */
···
1268 1265          *save_buf = NULL;
1269 1266      }
1270 1267
1268 +        mutex_unlock(&handle->dbgfs_lock);
1271 1269      return ret;
1272 1270  }
1273 1271
···
1341 1337          debugfs_create_dir(hns3_dbg_dentry[i].name,
1342 1338                             handle->hnae3_dbgfs);
1343 1339
1340 +        mutex_init(&handle->dbgfs_lock);
1341 +
1344 1342      for (i = 0; i < ARRAY_SIZE(hns3_dbg_cmd); i++) {
1345 1343          if ((hns3_dbg_cmd[i].cmd == HNAE3_DBG_CMD_TM_NODES &&
1346 1344               ae_dev->dev_version <= HNAE3_DEVICE_VERSION_V2) ||
···
1369 1363      return 0;
1370 1364
1371 1365  out:
1366 +        mutex_destroy(&handle->dbgfs_lock);
1372 1367      debugfs_remove_recursive(handle->hnae3_dbgfs);
1373 1368      handle->hnae3_dbgfs = NULL;
1374 1369      return ret;
···
1385 1378          hns3_dbg_cmd[i].buf = NULL;
1386 1379      }
1387 1380
1381 +        mutex_destroy(&handle->dbgfs_lock);
1388 1382      debugfs_remove_recursive(handle->hnae3_dbgfs);
1389 1383      handle->hnae3_dbgfs = NULL;
1390 1384  }
+2 -1
drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c
···
114 114
115 115      memcpy(&req->msg, send_msg, sizeof(struct hclge_vf_to_pf_msg));
116 116
117 -        trace_hclge_vf_mbx_send(hdev, req);
117 +        if (test_bit(HCLGEVF_STATE_NIC_REGISTERED, &hdev->state))
118 +            trace_hclge_vf_mbx_send(hdev, req);
118 119
119 120      /* synchronous send */
120 121      if (need_resp) {
+2 -3
drivers/net/ethernet/intel/iavf/iavf_main.c
···
2046 2046          }
2047 2047          adapter->aq_required = 0;
2048 2048          adapter->current_op = VIRTCHNL_OP_UNKNOWN;
2049 +            mutex_unlock(&adapter->crit_lock);
2049 2050          queue_delayed_work(iavf_wq,
2050 2051                             &adapter->watchdog_task,
2051 2052                             msecs_to_jiffies(10));
···
2077 2076          iavf_detect_recover_hung(&adapter->vsi);
2078 2077          break;
2079 2078      case __IAVF_REMOVE:
2080 -            mutex_unlock(&adapter->crit_lock);
2081 -            return;
2082 2079      default:
2080 +            mutex_unlock(&adapter->crit_lock);
2083 2081          return;
2084 2082      }
2085 2083
2086 2084      /* check for hw reset */
2087 2085      reg_val = rd32(hw, IAVF_VF_ARQLEN1) & IAVF_VF_ARQLEN1_ARQENABLE_MASK;
2088 2086      if (!reg_val) {
2089 -            iavf_change_state(adapter, __IAVF_RESETTING);
2090 2087          adapter->flags |= IAVF_FLAG_RESET_PENDING;
2091 2088          adapter->aq_required = 0;
2092 2089          adapter->current_op = VIRTCHNL_OP_UNKNOWN;
+5 -8
drivers/net/ethernet/intel/ice/ice_ptp.c
···
705 705          scaled_ppm = -scaled_ppm;
706 706      }
707 707
708 -        while ((u64)scaled_ppm > div_u64(U64_MAX, incval)) {
708 +        while ((u64)scaled_ppm > div64_u64(U64_MAX, incval)) {
709 709          /* handle overflow by scaling down the scaled_ppm and
710 710           * the divisor, losing some precision
711 711           */
···
1540 1540          if (err)
1541 1541              continue;
1542 1542
1543 -            /* Check if the timestamp is valid */
1544 -            if (!(raw_tstamp & ICE_PTP_TS_VALID))
1543 +            /* Check if the timestamp is invalid or stale */
1544 +            if (!(raw_tstamp & ICE_PTP_TS_VALID) ||
1545 +                raw_tstamp == tx->tstamps[idx].cached_tstamp)
1545 1546              continue;
1546 -
1547 -            /* clear the timestamp register, so that it won't show valid
1548 -             * again when re-used.
1549 -             */
1550 -            ice_clear_phy_tstamp(hw, tx->quad, phy_idx);
1551 1547
1552 1548          /* The timestamp is valid, so we'll go ahead and clear this
1553 1549           * index and then send the timestamp up to the stack.
1554 1550           */
1555 1551          spin_lock(&tx->lock);
1552 +            tx->tstamps[idx].cached_tstamp = raw_tstamp;
1556 1553          clear_bit(idx, tx->in_use);
1557 1554          skb = tx->tstamps[idx].skb;
1558 1555          tx->tstamps[idx].skb = NULL;
+6
drivers/net/ethernet/intel/ice/ice_ptp.h
···
55 55   * struct ice_tx_tstamp - Tracking for a single Tx timestamp
56 56   * @skb: pointer to the SKB for this timestamp request
57 57   * @start: jiffies when the timestamp was first requested
58 +     * @cached_tstamp: last read timestamp
58 59   *
59 60   * This structure tracks a single timestamp request. The SKB pointer is
60 61   * provided when initiating a request. The start time is used to ensure that
61 62   * we discard old requests that were not fulfilled within a 2 second time
62 63   * window.
64 +     * Timestamp values in the PHY are read only and do not get cleared except at
65 +     * hardware reset or when a new timestamp value is captured. The cached_tstamp
66 +     * field is used to detect the case where a new timestamp has not yet been
67 +     * captured, ensuring that we avoid sending stale timestamp data to the stack.
63 68   */
64 69  struct ice_tx_tstamp {
65 70      struct sk_buff *skb;
66 71      unsigned long start;
72 +        u64 cached_tstamp;
67 73  };
68 74
69 75  /**
+14 -14
drivers/net/ethernet/intel/igb/igb_main.c
···
7648 7648      struct vf_mac_filter *entry = NULL;
7649 7649      int ret = 0;
7650 7650
7651 +        if ((vf_data->flags & IGB_VF_FLAG_PF_SET_MAC) &&
7652 +            !vf_data->trusted) {
7653 +            dev_warn(&pdev->dev,
7654 +                     "VF %d requested MAC filter but is administratively denied\n",
7655 +                     vf);
7656 +            return -EINVAL;
7657 +        }
7658 +        if (!is_valid_ether_addr(addr)) {
7659 +            dev_warn(&pdev->dev,
7660 +                     "VF %d attempted to set invalid MAC filter\n",
7661 +                     vf);
7662 +            return -EINVAL;
7663 +        }
7664 +
7651 7665      switch (info) {
7652 7666      case E1000_VF_MAC_FILTER_CLR:
7653 7667          /* remove all unicast MAC filters related to the current VF */
···
7675 7661          }
7676 7662          break;
7677 7663      case E1000_VF_MAC_FILTER_ADD:
7678 -            if ((vf_data->flags & IGB_VF_FLAG_PF_SET_MAC) &&
7679 -                !vf_data->trusted) {
7680 -                dev_warn(&pdev->dev,
7681 -                         "VF %d requested MAC filter but is administratively denied\n",
7682 -                         vf);
7683 -                return -EINVAL;
7684 -            }
7685 -            if (!is_valid_ether_addr(addr)) {
7686 -                dev_warn(&pdev->dev,
7687 -                         "VF %d attempted to set invalid MAC filter\n",
7688 -                         vf);
7689 -                return -EINVAL;
7690 -            }
7691 -
7692 7664          /* try to find empty slot in the list */
7693 7665          list_for_each(pos, &adapter->vf_macs.l) {
7694 7666              entry = list_entry(pos, struct vf_mac_filter, l);
+1
drivers/net/ethernet/intel/igbvf/netdev.c
···
2859 2859      return 0;
2860 2860
2861 2861  err_hw_init:
2862 +        netif_napi_del(&adapter->rx_ring->napi);
2862 2863      kfree(adapter->tx_ring);
2863 2864      kfree(adapter->rx_ring);
2864 2865  err_sw_init:
+1 -1
drivers/net/ethernet/intel/igc/igc_i225.c
···
636 636      ltrv = rd32(IGC_LTRMAXV);
637 637      if (ltr_max != (ltrv & IGC_LTRMAXV_LTRV_MASK)) {
638 638          ltrv = IGC_LTRMAXV_LSNP_REQ | ltr_max |
639 -                   (scale_min << IGC_LTRMAXV_SCALE_SHIFT);
639 +                   (scale_max << IGC_LTRMAXV_SCALE_SHIFT);
640 640          wr32(IGC_LTRMAXV, ltrv);
641 641      }
642 642  }
+4
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
···
5531 5531      if (!speed && hw->mac.ops.get_link_capabilities) {
5532 5532          ret = hw->mac.ops.get_link_capabilities(hw, &speed,
5533 5533                                                  &autoneg);
5534 +            /* remove NBASE-T speeds from default autonegotiation
5535 +             * to accommodate broken network switches in the field
5536 +             * which cannot cope with advertised NBASE-T speeds
5537 +             */
5534 5538          speed &= ~(IXGBE_LINK_SPEED_5GB_FULL |
5535 5539                     IXGBE_LINK_SPEED_2_5GB_FULL);
5536 5540      }
+3
drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c
···
3405 3405      /* flush pending Tx transactions */
3406 3406      ixgbe_clear_tx_pending(hw);
3407 3407
3408 +        /* set MDIO speed before talking to the PHY in case it's the 1st time */
3409 +        ixgbe_set_mdio_speed(hw);
3410 +
3408 3411      /* PHY ops must be identified and initialized prior to reset */
3409 3412      status = hw->phy.ops.init(hw);
3410 3413      if (status == IXGBE_ERR_SFP_NOT_SUPPORTED ||
+2 -1
drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
···
8494 8494      u8 mac_profile;
8495 8495      int err;
8496 8496
8497 -        if (!mlxsw_sp_rif_mac_profile_is_shared(rif))
8497 +        if (!mlxsw_sp_rif_mac_profile_is_shared(rif) &&
8498 +            !mlxsw_sp_rif_mac_profile_find(mlxsw_sp, new_mac))
8498 8499          return mlxsw_sp_rif_mac_profile_edit(rif, new_mac);
8499 8500
8500 8501      err = mlxsw_sp_rif_mac_profile_get(mlxsw_sp, new_mac,
+3
drivers/net/ethernet/sfc/ef100_nic.c
···
609 609      ef100_common_stat_mask(mask);
610 610      ef100_ethtool_stat_mask(mask);
611 611
612 +        if (!mc_stats)
613 +            return 0;
614 +
612 615      efx_nic_copy_stats(efx, mc_stats);
613 616      efx_nic_update_stats(ef100_stat_desc, EF100_STAT_COUNT, mask,
614 617                           stats, mc_stats, false);
+3 -1
drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c
···
33 33      void (*set_rgmii_speed)(struct rk_priv_data *bsp_priv, int speed);
34 34      void (*set_rmii_speed)(struct rk_priv_data *bsp_priv, int speed);
35 35      void (*integrated_phy_powerup)(struct rk_priv_data *bsp_priv);
36 +        bool regs_valid;
36 37      u32 regs[];
37 38  };
···
1093 1092      .set_to_rmii = rk3568_set_to_rmii,
1094 1093      .set_rgmii_speed = rk3568_set_gmac_speed,
1095 1094      .set_rmii_speed = rk3568_set_gmac_speed,
1095 +        .regs_valid = true,
1096 1096      .regs = {
1097 1097          0xfe2a0000, /* gmac0 */
1098 1098          0xfe010000, /* gmac1 */
···
1385 1383       * to be distinguished.
1386 1384       */
1387 1385      res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
1388 -        if (res) {
1386 +        if (res && ops->regs_valid) {
1389 1387          int i = 0;
1390 1388
1391 1389          while (ops->regs[i]) {
+17
drivers/net/ethernet/stmicro/stmmac/stmmac.h
···
172 172      int is_l4;
173 173  };
174 174
175 +    /* Rx Frame Steering */
176 +    enum stmmac_rfs_type {
177 +        STMMAC_RFS_T_VLAN,
178 +        STMMAC_RFS_T_MAX,
179 +    };
180 +
181 +    struct stmmac_rfs_entry {
182 +        unsigned long cookie;
183 +        int in_use;
184 +        int type;
185 +        int tc;
186 +    };
187 +
175 188  struct stmmac_priv {
176 189      /* Frequently used values are kept adjacent for cache effect */
177 190      u32 tx_coal_frames[MTL_MAX_TX_QUEUES];
···
302 289      struct stmmac_tc_entry *tc_entries;
303 290      unsigned int flow_entries_max;
304 291      struct stmmac_flow_entry *flow_entries;
292 +        unsigned int rfs_entries_max[STMMAC_RFS_T_MAX];
293 +        unsigned int rfs_entries_cnt[STMMAC_RFS_T_MAX];
294 +        unsigned int rfs_entries_total;
295 +        struct stmmac_rfs_entry *rfs_entries;
305 296
306 297      /* Pulse Per Second output */
307 298      struct stmmac_pps_cfg pps[STMMAC_PPS_MAX];
+12 -4
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
···
1461 1461  {
1462 1462      struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
1463 1463      struct stmmac_rx_buffer *buf = &rx_q->buf_pool[i];
1464 +        gfp_t gfp = (GFP_ATOMIC | __GFP_NOWARN);
1465 +
1466 +        if (priv->dma_cap.addr64 <= 32)
1467 +            gfp |= GFP_DMA32;
1464 1468
1465 1469      if (!buf->page) {
1466 -            buf->page = page_pool_dev_alloc_pages(rx_q->page_pool);
1470 +            buf->page = page_pool_alloc_pages(rx_q->page_pool, gfp);
1467 1471          if (!buf->page)
1468 1472              return -ENOMEM;
1469 1473          buf->page_offset = stmmac_rx_offset(priv);
1470 1474      }
1471 1475
1472 1476      if (priv->sph && !buf->sec_page) {
1473 -            buf->sec_page = page_pool_dev_alloc_pages(rx_q->page_pool);
1477 +            buf->sec_page = page_pool_alloc_pages(rx_q->page_pool, gfp);
1474 1478          if (!buf->sec_page)
1475 1479              return -ENOMEM;
1476 1480
···
4486 4482      struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
4487 4483      int dirty = stmmac_rx_dirty(priv, queue);
4488 4484      unsigned int entry = rx_q->dirty_rx;
4485 +        gfp_t gfp = (GFP_ATOMIC | __GFP_NOWARN);
4486 +
4487 +        if (priv->dma_cap.addr64 <= 32)
4488 +            gfp |= GFP_DMA32;
4489 4489
4490 4490      while (dirty-- > 0) {
4491 4491          struct stmmac_rx_buffer *buf = &rx_q->buf_pool[entry];
···
4502 4494          p = rx_q->dma_rx + entry;
4503 4495
4504 4496          if (!buf->page) {
4505 -                buf->page = page_pool_dev_alloc_pages(rx_q->page_pool);
4497 +                buf->page = page_pool_alloc_pages(rx_q->page_pool, gfp);
4506 4498              if (!buf->page)
4507 4499                  break;
4508 4500          }
4509 4501
4510 4502          if (priv->sph && !buf->sec_page) {
4511 -                buf->sec_page = page_pool_dev_alloc_pages(rx_q->page_pool);
4503 +                buf->sec_page = page_pool_alloc_pages(rx_q->page_pool, gfp);
4512 4504              if (!buf->sec_page)
4513 4505                  break;
4514 4506
+73 -13
drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
···
232 232      }
233 233  }
234 234
235 +    static int tc_rfs_init(struct stmmac_priv *priv)
236 +    {
237 +        int i;
238 +
239 +        priv->rfs_entries_max[STMMAC_RFS_T_VLAN] = 8;
240 +
241 +        for (i = 0; i < STMMAC_RFS_T_MAX; i++)
242 +            priv->rfs_entries_total += priv->rfs_entries_max[i];
243 +
244 +        priv->rfs_entries = devm_kcalloc(priv->device,
245 +                                         priv->rfs_entries_total,
246 +                                         sizeof(*priv->rfs_entries),
247 +                                         GFP_KERNEL);
248 +        if (!priv->rfs_entries)
249 +            return -ENOMEM;
250 +
251 +        dev_info(priv->device, "Enabled RFS Flow TC (entries=%d)\n",
252 +                 priv->rfs_entries_total);
253 +
254 +        return 0;
255 +    }
256 +
235 257  static int tc_init(struct stmmac_priv *priv)
236 258  {
237 259      struct dma_features *dma_cap = &priv->dma_cap;
238 260      unsigned int count;
239 -        int i;
261 +        int ret, i;
240 262
241 263      if (dma_cap->l3l4fnum) {
242 264          priv->flow_entries_max = dma_cap->l3l4fnum;
···
272 250          for (i = 0; i < priv->flow_entries_max; i++)
273 251              priv->flow_entries[i].idx = i;
274 252
275 -            dev_info(priv->device, "Enabled Flow TC (entries=%d)\n",
253 +            dev_info(priv->device, "Enabled L3L4 Flow TC (entries=%d)\n",
276 254                   priv->flow_entries_max);
277 255      }
256 +
257 +        ret = tc_rfs_init(priv);
258 +        if (ret)
259 +            return -ENOMEM;
278 260
279 261      if (!priv->plat->fpe_cfg) {
280 262          priv->plat->fpe_cfg = devm_kzalloc(priv->device,
···
633 607          return ret;
634 608  }
635 609
610 +    static struct stmmac_rfs_entry *tc_find_rfs(struct stmmac_priv *priv,
611 +                                                struct flow_cls_offload *cls,
612 +                                                bool get_free)
613 +    {
614 +        int i;
615 +
616 +        for (i = 0; i < priv->rfs_entries_total; i++) {
617 +            struct stmmac_rfs_entry *entry = &priv->rfs_entries[i];
618 +
619 +            if (entry->cookie == cls->cookie)
620 +                return entry;
621 +            if (get_free && entry->in_use == false)
622 +                return entry;
623 +        }
624 +
625 +        return NULL;
626 +    }
627 +
636 628  #define VLAN_PRIO_FULL_MASK (0x07)
637 629
638 630  static int tc_add_vlan_flow(struct stmmac_priv *priv,
639 631                              struct flow_cls_offload *cls)
640 632  {
633 +        struct stmmac_rfs_entry *entry = tc_find_rfs(priv, cls, false);
641 634      struct flow_rule *rule = flow_cls_offload_flow_rule(cls);
642 635      struct flow_dissector *dissector = rule->match.dissector;
643 636      int tc = tc_classid_to_hwtc(priv->dev, cls->classid);
644 637      struct flow_match_vlan match;
638 +
639 +        if (!entry) {
640 +            entry = tc_find_rfs(priv, cls, true);
641 +            if (!entry)
642 +                return -ENOENT;
643 +        }
644 +
645 +        if (priv->rfs_entries_cnt[STMMAC_RFS_T_VLAN] >=
646 +            priv->rfs_entries_max[STMMAC_RFS_T_VLAN])
647 +            return -ENOENT;
645 648
646 649      /* Nothing to do here */
647 650      if (!dissector_uses_key(dissector, FLOW_DISSECTOR_KEY_VLAN))
···
693 638
694 639          prio = BIT(match.key->vlan_priority);
695 640          stmmac_rx_queue_prio(priv, priv->hw, prio, tc);
641 +
642 +            entry->in_use = true;
643 +            entry->cookie = cls->cookie;
644 +            entry->tc = tc;
645 +            entry->type = STMMAC_RFS_T_VLAN;
646 +            priv->rfs_entries_cnt[STMMAC_RFS_T_VLAN]++;
696 647      }
697 648
698 649      return 0;
···
707 646  static int tc_del_vlan_flow(struct stmmac_priv *priv,
708 647                              struct flow_cls_offload *cls)
709 648  {
710 -        struct flow_rule *rule = flow_cls_offload_flow_rule(cls);
711 -        struct flow_dissector *dissector = rule->match.dissector;
712 -        int tc = tc_classid_to_hwtc(priv->dev, cls->classid);
649 +        struct stmmac_rfs_entry *entry = tc_find_rfs(priv, cls, false);
713 650
714 -        /* Nothing to do here */
715 -        if (!dissector_uses_key(dissector, FLOW_DISSECTOR_KEY_VLAN))
716 -            return -EINVAL;
651 +        if (!entry || !entry->in_use || entry->type != STMMAC_RFS_T_VLAN)
652 +            return -ENOENT;
717 653
718 -        if (tc < 0) {
719 -            netdev_err(priv->dev, "Invalid traffic class\n");
720 -            return -EINVAL;
721 -        }
654 +        stmmac_rx_queue_prio(priv, priv->hw, 0, entry->tc);
722 655
723 -        stmmac_rx_queue_prio(priv, priv->hw, 0, tc);
656 +        entry->in_use = false;
657 +        entry->cookie = 0;
658 +        entry->tc = 0;
659 +        entry->type = 0;
660 +
661 +        priv->rfs_entries_cnt[STMMAC_RFS_T_VLAN]--;
724 662
725 663      return 0;
726 664  }
+20 -9
drivers/net/ethernet/ti/am65-cpsw-nuss.c
··· 1844 1844 if (ret < 0) { 1845 1845 dev_err(dev, "%pOF error reading port_id %d\n", 1846 1846 port_np, ret); 1847 - return ret; 1847 + goto of_node_put; 1848 1848 } 1849 1849 1850 1850 if (!port_id || port_id > common->port_num) { 1851 1851 dev_err(dev, "%pOF has invalid port_id %u %s\n", 1852 1852 port_np, port_id, port_np->name); 1853 - return -EINVAL; 1853 + ret = -EINVAL; 1854 + goto of_node_put; 1854 1855 } 1855 1856 1856 1857 port = am65_common_get_port(common, port_id); ··· 1867 1866 (AM65_CPSW_NU_FRAM_PORT_OFFSET * (port_id - 1)); 1868 1867 1869 1868 port->slave.mac_sl = cpsw_sl_get("am65", dev, port->port_base); 1870 - if (IS_ERR(port->slave.mac_sl)) 1871 - return PTR_ERR(port->slave.mac_sl); 1869 + if (IS_ERR(port->slave.mac_sl)) { 1870 + ret = PTR_ERR(port->slave.mac_sl); 1871 + goto of_node_put; 1872 + } 1872 1873 1873 1874 port->disabled = !of_device_is_available(port_np); 1874 1875 if (port->disabled) { ··· 1883 1880 ret = PTR_ERR(port->slave.ifphy); 1884 1881 dev_err(dev, "%pOF error retrieving port phy: %d\n", 1885 1882 port_np, ret); 1886 - return ret; 1883 + goto of_node_put; 1887 1884 } 1888 1885 1889 1886 port->slave.mac_only = ··· 1892 1889 /* get phy/link info */ 1893 1890 if (of_phy_is_fixed_link(port_np)) { 1894 1891 ret = of_phy_register_fixed_link(port_np); 1895 - if (ret) 1896 - return dev_err_probe(dev, ret, 1892 + if (ret) { 1893 + ret = dev_err_probe(dev, ret, 1897 1894 "failed to register fixed-link phy %pOF\n", 1898 1895 port_np); 1896 + goto of_node_put; 1897 + } 1899 1898 port->slave.phy_node = of_node_get(port_np); 1900 1899 } else { 1901 1900 port->slave.phy_node = ··· 1907 1902 if (!port->slave.phy_node) { 1908 1903 dev_err(dev, 1909 1904 "slave[%d] no phy found\n", port_id); 1910 - return -ENODEV; 1905 + ret = -ENODEV; 1906 + goto of_node_put; 1911 1907 } 1912 1908 1913 1909 ret = of_get_phy_mode(port_np, &port->slave.phy_if); 1914 1910 if (ret) { 1915 1911 dev_err(dev, "%pOF read phy-mode err %d\n", 1916 1912 port_np, ret); 
1917 - return ret; 1913 + goto of_node_put; 1918 1914 } 1919 1915 1920 1916 ret = of_get_mac_address(port_np, port->slave.mac_addr); ··· 1938 1932 } 1939 1933 1940 1934 return 0; 1935 + 1936 + of_node_put: 1937 + of_node_put(port_np); 1938 + of_node_put(node); 1939 + return ret; 1941 1940 } 1942 1941 1943 1942 static void am65_cpsw_pcpu_stats_free(void *data)
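The am65-cpsw change above converts every early `return` inside the port loop into a `goto` that drops the held `of_node` references. A minimal user-space sketch of that pattern, with the OF refcount modelled as a plain counter (all names here are illustrative, not the driver's):

```c
#include <assert.h>

/* The device-tree refcount is modelled as a simple counter so the
 * leak/no-leak behaviour is observable. */
static int node_refcount;

static void node_get(void) { node_refcount++; }
static void node_put(void) { node_refcount--; }

static int probe_port(int port_id, int max_ports)
{
    int ret = 0;

    node_get();                 /* reference held while probing */
    if (!port_id || port_id > max_ports) {
        ret = -1;               /* was: a plain "return -EINVAL",
                                 * which leaked the reference */
        goto put;
    }
    /* ... rest of the probe ... */
put:
    node_put();                 /* single release point for all exits */
    return ret;
}
```

The single `put:` label is what makes it hard to add a new error path that forgets the reference drop.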
+1
drivers/net/netdevsim/bpf.c
··· 514 514 goto err_free; 515 515 key = nmap->entry[i].key; 516 516 *key = i; 517 + memset(nmap->entry[i].value, 0, offmap->map.value_size); 517 518 } 518 519 } 519 520
+4 -1
drivers/net/netdevsim/ethtool.c
··· 77 77 { 78 78 struct netdevsim *ns = netdev_priv(dev); 79 79 80 - memcpy(&ns->ethtool.ring, ring, sizeof(ns->ethtool.ring)); 80 + ns->ethtool.ring.rx_pending = ring->rx_pending; 81 + ns->ethtool.ring.rx_jumbo_pending = ring->rx_jumbo_pending; 82 + ns->ethtool.ring.rx_mini_pending = ring->rx_mini_pending; 83 + ns->ethtool.ring.tx_pending = ring->tx_pending; 81 84 return 0; 82 85 } 83 86
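The netdevsim ethtool fix replaces a whole-struct `memcpy()` with per-field copies because the ringparam struct mixes device-reported maxima (read-only) with user-settable counts, and a SET request must not clobber the former. A small sketch of the difference, using a simplified stand-in rather than the real `struct ethtool_ringparam`:

```c
#include <assert.h>

/* Simplified stand-in: one read-only field, two writable ones. */
struct ringparam {
    unsigned int rx_max_pending;    /* reported by the device, read-only */
    unsigned int rx_pending;        /* user-settable */
    unsigned int tx_pending;        /* user-settable */
};

/* Buggy shape: memcpy(dev, req, sizeof(*dev)) would also overwrite
 * rx_max_pending with whatever userspace sent. The fixed shape copies
 * only the fields a SET request may legitimately change: */
static void set_ring_fixed(struct ringparam *dev, const struct ringparam *req)
{
    dev->rx_pending = req->rx_pending;
    dev->tx_pending = req->tx_pending;
}
```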
+3
drivers/net/phy/mdio_bus.c
··· 460 460 461 461 if (addr == mdiodev->addr) { 462 462 device_set_node(dev, of_fwnode_handle(child)); 463 + /* The refcount on "child" is passed to the mdio 464 + * device. Do _not_ use of_node_put(child) here. 465 + */ 463 466 return; 464 467 } 465 468 }
+6
drivers/net/usb/lan78xx.c
··· 76 76 #define LAN7801_USB_PRODUCT_ID (0x7801) 77 77 #define LAN78XX_EEPROM_MAGIC (0x78A5) 78 78 #define LAN78XX_OTP_MAGIC (0x78F3) 79 + #define AT29M2AF_USB_VENDOR_ID (0x07C9) 80 + #define AT29M2AF_USB_PRODUCT_ID (0x0012) 79 81 80 82 #define MII_READ 1 81 83 #define MII_WRITE 0 ··· 4735 4733 { 4736 4734 /* LAN7801 USB Gigabit Ethernet Device */ 4737 4735 USB_DEVICE(LAN78XX_USB_VENDOR_ID, LAN7801_USB_PRODUCT_ID), 4736 + }, 4737 + { 4738 + /* ATM2-AF USB Gigabit Ethernet Device */ 4739 + USB_DEVICE(AT29M2AF_USB_VENDOR_ID, AT29M2AF_USB_PRODUCT_ID), 4738 4740 }, 4739 4741 {}, 4740 4742 };
+1
drivers/net/usb/qmi_wwan.c
··· 1358 1358 {QMI_QUIRK_SET_DTR(0x1bc7, 0x1040, 2)}, /* Telit LE922A */ 1359 1359 {QMI_QUIRK_SET_DTR(0x1bc7, 0x1050, 2)}, /* Telit FN980 */ 1360 1360 {QMI_QUIRK_SET_DTR(0x1bc7, 0x1060, 2)}, /* Telit LN920 */ 1361 + {QMI_QUIRK_SET_DTR(0x1bc7, 0x1070, 2)}, /* Telit FN990 */ 1361 1362 {QMI_FIXED_INTF(0x1bc7, 0x1100, 3)}, /* Telit ME910 */ 1362 1363 {QMI_FIXED_INTF(0x1bc7, 0x1101, 3)}, /* Telit ME910 dual modem */ 1363 1364 {QMI_FIXED_INTF(0x1bc7, 0x1200, 5)}, /* Telit LE920 */
+3 -6
drivers/net/virtio_net.c
··· 733 733 pr_debug("%s: rx error: len %u exceeds max size %d\n", 734 734 dev->name, len, GOOD_PACKET_LEN); 735 735 dev->stats.rx_length_errors++; 736 - goto err_len; 736 + goto err; 737 737 } 738 738 739 739 if (likely(!vi->xdp_enabled)) { ··· 825 825 826 826 skip_xdp: 827 827 skb = build_skb(buf, buflen); 828 - if (!skb) { 829 - put_page(page); 828 + if (!skb) 830 829 goto err; 831 - } 832 830 skb_reserve(skb, headroom - delta); 833 831 skb_put(skb, len); 834 832 if (!xdp_prog) { ··· 837 839 if (metasize) 838 840 skb_metadata_set(skb, metasize); 839 841 840 - err: 841 842 return skb; 842 843 843 844 err_xdp: 844 845 rcu_read_unlock(); 845 846 stats->xdp_drops++; 846 - err_len: 847 + err: 847 848 stats->drops++; 848 849 put_page(page); 849 850 xdp_xmit:
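The virtio_net change folds the separate `err_len` label into one `err` label: once every failure path needs the same stats bump and the same page release, a single exit removes the extra `put_page()` the `build_skb()` failure path used to do. A hedged user-space sketch of that consolidation (`free()` stands in for `put_page()`; names are illustrative):

```c
#include <stdlib.h>

struct rx_stats { int drops; };

static int receive_small(struct rx_stats *stats, size_t len, size_t max)
{
    void *page, *skb;

    page = malloc(64);
    if (!page)
        return -1;              /* nothing to clean up yet */

    if (len > max)
        goto err;               /* length error: shared cleanup below */

    skb = malloc(len);
    if (!skb)
        goto err;               /* build failure: same cleanup, and no
                                 * separate free before the jump */

    free(skb);
    free(page);
    return 0;

err:
    stats->drops++;
    free(page);                 /* one release point for every error path */
    return -1;
}
```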
+9 -5
drivers/net/wireless/broadcom/brcm80211/Kconfig
··· 7 7 depends on MAC80211 8 8 depends on BCMA_POSSIBLE 9 9 select BCMA 10 - select NEW_LEDS if BCMA_DRIVER_GPIO 11 - select LEDS_CLASS if BCMA_DRIVER_GPIO 12 10 select BRCMUTIL 13 11 select FW_LOADER 14 12 select CORDIC 15 13 help 16 14 This module adds support for PCIe wireless adapters based on Broadcom 17 - IEEE802.11n SoftMAC chipsets. It also has WLAN led support, which will 18 - be available if you select BCMA_DRIVER_GPIO. If you choose to build a 19 - module, the driver will be called brcmsmac.ko. 15 + IEEE802.11n SoftMAC chipsets. If you choose to build a module, the 16 + driver will be called brcmsmac.ko. 17 + 18 + config BRCMSMAC_LEDS 19 + def_bool BRCMSMAC && BCMA_DRIVER_GPIO && MAC80211_LEDS 20 + help 21 + The brcmsmac LED support depends on the presence of the 22 + BCMA_DRIVER_GPIO driver, and it only works if LED support 23 + is enabled and reachable from the driver module. 20 24 21 25 source "drivers/net/wireless/broadcom/brcm80211/brcmfmac/Kconfig" 22 26
+1 -1
drivers/net/wireless/broadcom/brcm80211/brcmsmac/Makefile
··· 42 42 brcms_trace_events.o \ 43 43 debug.o 44 44 45 - brcmsmac-$(CONFIG_BCMA_DRIVER_GPIO) += led.o 45 + brcmsmac-$(CONFIG_BRCMSMAC_LEDS) += led.o 46 46 47 47 obj-$(CONFIG_BRCMSMAC) += brcmsmac.o
+1 -1
drivers/net/wireless/broadcom/brcm80211/brcmsmac/led.h
··· 24 24 struct gpio_desc *gpiod; 25 25 }; 26 26 27 - #ifdef CONFIG_BCMA_DRIVER_GPIO 27 + #ifdef CONFIG_BRCMSMAC_LEDS 28 28 void brcms_led_unregister(struct brcms_info *wl); 29 29 int brcms_led_register(struct brcms_info *wl); 30 30 #else
+2 -2
drivers/net/wireless/intel/iwlegacy/Kconfig
··· 2 2 config IWLEGACY 3 3 tristate 4 4 select FW_LOADER 5 - select NEW_LEDS 6 - select LEDS_CLASS 7 5 select LEDS_TRIGGERS 8 6 select MAC80211_LEDS 9 7 10 8 config IWL4965 11 9 tristate "Intel Wireless WiFi 4965AGN (iwl4965)" 12 10 depends on PCI && MAC80211 11 + depends on LEDS_CLASS=y || LEDS_CLASS=MAC80211 13 12 select IWLEGACY 14 13 help 15 14 This option enables support for ··· 37 38 config IWL3945 38 39 tristate "Intel PRO/Wireless 3945ABG/BG Network Connection (iwl3945)" 39 40 depends on PCI && MAC80211 41 + depends on LEDS_CLASS=y || LEDS_CLASS=MAC80211 40 42 select IWLEGACY 41 43 help 42 44 Select to build the driver supporting the:
+1 -1
drivers/net/wireless/intel/iwlwifi/Kconfig
··· 47 47 48 48 config IWLWIFI_LEDS 49 49 bool 50 - depends on LEDS_CLASS=y || LEDS_CLASS=IWLWIFI 50 + depends on LEDS_CLASS=y || LEDS_CLASS=MAC80211 51 51 depends on IWLMVM || IWLDVM 52 52 select LEDS_TRIGGERS 53 53 select MAC80211_LEDS
+3 -2
drivers/net/wireless/intel/iwlwifi/mvm/tx.c
··· 269 269 u8 rate_plcp; 270 270 u32 rate_flags = 0; 271 271 bool is_cck; 272 - struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta); 273 272 274 273 /* info->control is only relevant for non HW rate control */ 275 274 if (!ieee80211_hw_check(mvm->hw, HAS_RATE_CONTROL)) { 275 + struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta); 276 + 276 277 /* HT rate doesn't make sense for a non data frame */ 277 278 WARN_ONCE(info->control.rates[0].flags & IEEE80211_TX_RC_MCS && 278 279 !ieee80211_is_data(fc), 279 280 "Got a HT rate (flags:0x%x/mcs:%d/fc:0x%x/state:%d) for a non data frame\n", 280 281 info->control.rates[0].flags, 281 282 info->control.rates[0].idx, 282 - le16_to_cpu(fc), mvmsta->sta_state); 283 + le16_to_cpu(fc), sta ? mvmsta->sta_state : -1); 283 284 284 285 rate_idx = info->control.rates[0].idx; 285 286 }
+1 -1
drivers/net/wireless/mediatek/mt76/Makefile
··· 34 34 obj-$(CONFIG_MT7603E) += mt7603/ 35 35 obj-$(CONFIG_MT7615_COMMON) += mt7615/ 36 36 obj-$(CONFIG_MT7915E) += mt7915/ 37 - obj-$(CONFIG_MT7921E) += mt7921/ 37 + obj-$(CONFIG_MT7921_COMMON) += mt7921/
+10 -8
include/uapi/linux/mptcp.h
··· 136 136 * MPTCP_EVENT_REMOVED: token, rem_id 137 137 * An address has been lost by the peer. 138 138 * 139 - * MPTCP_EVENT_SUB_ESTABLISHED: token, family, saddr4 | saddr6, 140 - * daddr4 | daddr6, sport, dport, backup, 141 - * if_idx [, error] 139 + * MPTCP_EVENT_SUB_ESTABLISHED: token, family, loc_id, rem_id, 140 + * saddr4 | saddr6, daddr4 | daddr6, sport, 141 + * dport, backup, if_idx [, error] 142 142 * A new subflow has been established. 'error' should not be set. 143 143 * 144 - * MPTCP_EVENT_SUB_CLOSED: token, family, saddr4 | saddr6, daddr4 | daddr6, 145 - * sport, dport, backup, if_idx [, error] 144 + * MPTCP_EVENT_SUB_CLOSED: token, family, loc_id, rem_id, saddr4 | saddr6, 145 + * daddr4 | daddr6, sport, dport, backup, if_idx 146 + * [, error] 146 147 * A subflow has been closed. An error (copy of sk_err) could be set if an 147 148 * error has been detected for this subflow. 148 149 * 149 - * MPTCP_EVENT_SUB_PRIORITY: token, family, saddr4 | saddr6, daddr4 | daddr6, 150 - * sport, dport, backup, if_idx [, error] 151 - * The priority of a subflow has changed. 'error' should not be set. 150 + * MPTCP_EVENT_SUB_PRIORITY: token, family, loc_id, rem_id, saddr4 | saddr6, 151 + * daddr4 | daddr6, sport, dport, backup, if_idx 152 + * [, error] 153 + * The priority of a subflow has changed. 'error' should not be set. 152 154 */ 153 155 enum mptcp_event_type { 154 156 MPTCP_EVENT_UNSPEC = 0,
+36 -17
kernel/bpf/verifier.c
··· 1366 1366 reg->var_off = tnum_or(tnum_clear_subreg(var64_off), var32_off); 1367 1367 } 1368 1368 1369 + static bool __reg32_bound_s64(s32 a) 1370 + { 1371 + return a >= 0 && a <= S32_MAX; 1372 + } 1373 + 1369 1374 static void __reg_assign_32_into_64(struct bpf_reg_state *reg) 1370 1375 { 1371 1376 reg->umin_value = reg->u32_min_value; 1372 1377 reg->umax_value = reg->u32_max_value; 1373 - /* Attempt to pull 32-bit signed bounds into 64-bit bounds 1374 - * but must be positive otherwise set to worse case bounds 1375 - * and refine later from tnum. 1378 + 1379 + /* Attempt to pull 32-bit signed bounds into 64-bit bounds but must 1380 + * be positive otherwise set to worse case bounds and refine later 1381 + * from tnum. 1376 1382 */ 1377 - if (reg->s32_min_value >= 0 && reg->s32_max_value >= 0) 1378 - reg->smax_value = reg->s32_max_value; 1379 - else 1380 - reg->smax_value = U32_MAX; 1381 - if (reg->s32_min_value >= 0) 1383 + if (__reg32_bound_s64(reg->s32_min_value) && 1384 + __reg32_bound_s64(reg->s32_max_value)) { 1382 1385 reg->smin_value = reg->s32_min_value; 1383 - else 1386 + reg->smax_value = reg->s32_max_value; 1387 + } else { 1384 1388 reg->smin_value = 0; 1389 + reg->smax_value = U32_MAX; 1390 + } 1385 1391 } 1386 1392 1387 1393 static void __reg_combine_32_into_64(struct bpf_reg_state *reg) ··· 2385 2379 */ 2386 2380 if (insn->src_reg != BPF_REG_FP) 2387 2381 return 0; 2388 - if (BPF_SIZE(insn->code) != BPF_DW) 2389 - return 0; 2390 2382 2391 2383 /* dreg = *(u64 *)[fp - off] was a fill from the stack. 
2392 2384 * that [fp - off] slot contains scalar that needs to be ··· 2406 2402 return -ENOTSUPP; 2407 2403 /* scalars can only be spilled into stack */ 2408 2404 if (insn->dst_reg != BPF_REG_FP) 2409 - return 0; 2410 - if (BPF_SIZE(insn->code) != BPF_DW) 2411 2405 return 0; 2412 2406 spi = (-insn->off - 1) / BPF_REG_SIZE; 2413 2407 if (spi >= 64) { ··· 4553 4551 4554 4552 if (insn->imm == BPF_CMPXCHG) { 4555 4553 /* Check comparison of R0 with memory location */ 4556 - err = check_reg_arg(env, BPF_REG_0, SRC_OP); 4554 + const u32 aux_reg = BPF_REG_0; 4555 + 4556 + err = check_reg_arg(env, aux_reg, SRC_OP); 4557 4557 if (err) 4558 4558 return err; 4559 + 4560 + if (is_pointer_value(env, aux_reg)) { 4561 + verbose(env, "R%d leaks addr into mem\n", aux_reg); 4562 + return -EACCES; 4563 + } 4559 4564 } 4560 4565 4561 4566 if (is_pointer_value(env, insn->src_reg)) { ··· 4597 4588 load_reg = -1; 4598 4589 } 4599 4590 4600 - /* check whether we can read the memory */ 4591 + /* Check whether we can read the memory, with second call for fetch 4592 + * case to simulate the register fill. 4593 + */ 4601 4594 err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off, 4602 - BPF_SIZE(insn->code), BPF_READ, load_reg, true); 4595 + BPF_SIZE(insn->code), BPF_READ, -1, true); 4596 + if (!err && load_reg >= 0) 4597 + err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off, 4598 + BPF_SIZE(insn->code), BPF_READ, load_reg, 4599 + true); 4603 4600 if (err) 4604 4601 return err; 4605 4602 4606 - /* check whether we can write into the same memory */ 4603 + /* Check whether we can write into the same memory. 
*/ 4607 4604 err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off, 4608 4605 BPF_SIZE(insn->code), BPF_WRITE, -1, true); 4609 4606 if (err) ··· 8323 8308 insn->dst_reg); 8324 8309 } 8325 8310 zext_32_to_64(dst_reg); 8311 + 8312 + __update_reg_bounds(dst_reg); 8313 + __reg_deduce_bounds(dst_reg); 8314 + __reg_bound_offset(dst_reg); 8326 8315 } 8327 8316 } else { 8328 8317 /* case: R = imm
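The `__reg32_bound_s64()` hunk above changes the 32-to-64-bit bound transfer so that the signed range is carried over only when *both* endpoints fit in `[0, S32_MAX]`; the old code tested the two endpoints independently, which let inconsistent combinations of `smin`/`smax` through. A small sketch of the fixed rule, with illustrative names rather than the kernel's `struct bpf_reg_state`:

```c
#include <assert.h>
#include <stdint.h>

static int bound32_fits(int32_t a)
{
    return a >= 0 && a <= INT32_MAX;
}

struct bounds { int64_t smin, smax; };

static struct bounds assign_32_into_64(int32_t s32_min, int32_t s32_max)
{
    struct bounds b;

    if (bound32_fits(s32_min) && bound32_fits(s32_max)) {
        /* both endpoints are valid as 64-bit signed bounds */
        b.smin = s32_min;
        b.smax = s32_max;
    } else {
        /* worst case; refined later from the tnum in the kernel */
        b.smin = 0;
        b.smax = UINT32_MAX;
    }
    return b;
}
```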
+1 -1
net/core/skbuff.c
··· 832 832 ntohs(skb->protocol), skb->pkt_type, skb->skb_iif); 833 833 834 834 if (dev) 835 - printk("%sdev name=%s feat=0x%pNF\n", 835 + printk("%sdev name=%s feat=%pNF\n", 836 836 level, dev->name, &dev->features); 837 837 if (sk) 838 838 printk("%ssk family=%hu type=%u proto=%u\n",
+1 -3
net/ipv4/inet_diag.c
··· 261 261 r->idiag_state = sk->sk_state; 262 262 r->idiag_timer = 0; 263 263 r->idiag_retrans = 0; 264 + r->idiag_expires = 0; 264 265 265 266 if (inet_diag_msg_attrs_fill(sk, skb, r, ext, 266 267 sk_user_ns(NETLINK_CB(cb->skb).sk), ··· 315 314 r->idiag_retrans = icsk->icsk_probes_out; 316 315 r->idiag_expires = 317 316 jiffies_delta_to_msecs(sk->sk_timer.expires - jiffies); 318 - } else { 319 - r->idiag_timer = 0; 320 - r->idiag_expires = 0; 321 317 } 322 318 323 319 if ((ext & (1 << (INET_DIAG_INFO - 1))) && handler->idiag_info_size) {
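The inet_diag hunk fixes a kernel-infoleak by initializing `idiag_expires` unconditionally up front instead of relying on each timer branch to write it: a struct copied out to userspace must have every field set on every path. A sketch of the pattern, with poisoned memory standing in for stale stack data (names are illustrative):

```c
#include <assert.h>
#include <string.h>

struct diag_msg { int state, timer, retrans, expires; };

static void fill_diag(struct diag_msg *r, int timer_armed, int remaining)
{
    r->state   = 1;
    r->timer   = 0;
    r->retrans = 0;
    r->expires = 0;             /* initialized on every path, as in
                                 * the patch above */

    if (timer_armed) {
        r->timer   = 2;
        r->expires = remaining;
    }
}
```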
-1
net/ipv6/sit.c
··· 1933 1933 return 0; 1934 1934 1935 1935 err_reg_dev: 1936 - ipip6_dev_free(sitn->fb_tunnel_dev); 1937 1936 free_netdev(sitn->fb_tunnel_dev); 1938 1937 err_alloc_dev: 1939 1938 return err;
+3 -2
net/mac80211/agg-rx.c
··· 9 9 * Copyright 2007, Michael Wu <flamingice@sourmilk.net> 10 10 * Copyright 2007-2010, Intel Corporation 11 11 * Copyright(c) 2015-2017 Intel Deutschland GmbH 12 - * Copyright (C) 2018-2020 Intel Corporation 12 + * Copyright (C) 2018-2021 Intel Corporation 13 13 */ 14 14 15 15 /** ··· 191 191 sband = ieee80211_get_sband(sdata); 192 192 if (!sband) 193 193 return; 194 - he_cap = ieee80211_get_he_iftype_cap(sband, sdata->vif.type); 194 + he_cap = ieee80211_get_he_iftype_cap(sband, 195 + ieee80211_vif_type_p2p(&sdata->vif)); 195 196 if (!he_cap) 196 197 return; 197 198
+11 -5
net/mac80211/agg-tx.c
··· 9 9 * Copyright 2007, Michael Wu <flamingice@sourmilk.net> 10 10 * Copyright 2007-2010, Intel Corporation 11 11 * Copyright(c) 2015-2017 Intel Deutschland GmbH 12 - * Copyright (C) 2018 - 2020 Intel Corporation 12 + * Copyright (C) 2018 - 2021 Intel Corporation 13 13 */ 14 14 15 15 #include <linux/ieee80211.h> ··· 106 106 mgmt->u.action.u.addba_req.start_seq_num = 107 107 cpu_to_le16(start_seq_num << 4); 108 108 109 - ieee80211_tx_skb(sdata, skb); 109 + ieee80211_tx_skb_tid(sdata, skb, tid); 110 110 } 111 111 112 112 void ieee80211_send_bar(struct ieee80211_vif *vif, u8 *ra, u16 tid, u16 ssn) ··· 213 213 struct ieee80211_txq *txq = sta->sta.txq[tid]; 214 214 struct txq_info *txqi; 215 215 216 + lockdep_assert_held(&sta->ampdu_mlme.mtx); 217 + 216 218 if (!txq) 217 219 return; 218 220 ··· 292 290 ieee80211_assign_tid_tx(sta, tid, NULL); 293 291 294 292 ieee80211_agg_splice_finish(sta->sdata, tid); 295 - ieee80211_agg_start_txq(sta, tid, false); 296 293 297 294 kfree_rcu(tid_tx, rcu_head); 298 295 } ··· 481 480 482 481 /* send AddBA request */ 483 482 ieee80211_send_addba_request(sdata, sta->sta.addr, tid, 484 - tid_tx->dialog_token, 485 - sta->tid_seq[tid] >> 4, 483 + tid_tx->dialog_token, tid_tx->ssn, 486 484 buf_size, tid_tx->timeout); 487 485 488 486 WARN_ON(test_and_set_bit(HT_AGG_STATE_SENT_ADDBA, &tid_tx->state)); ··· 523 523 524 524 params.ssn = sta->tid_seq[tid] >> 4; 525 525 ret = drv_ampdu_action(local, sdata, &params); 526 + tid_tx->ssn = params.ssn; 526 527 if (ret == IEEE80211_AMPDU_TX_START_DELAY_ADDBA) { 527 528 return; 528 529 } else if (ret == IEEE80211_AMPDU_TX_START_IMMEDIATE) { ··· 890 889 { 891 890 struct ieee80211_sub_if_data *sdata = sta->sdata; 892 891 bool send_delba = false; 892 + bool start_txq = false; 893 893 894 894 ht_dbg(sdata, "Stopping Tx BA session for %pM tid %d\n", 895 895 sta->sta.addr, tid); ··· 908 906 send_delba = true; 909 907 910 908 ieee80211_remove_tid_tx(sta, tid); 909 + start_txq = true; 911 910 912 911 unlock_sta: 
913 912 spin_unlock_bh(&sta->lock); 913 + 914 + if (start_txq) 915 + ieee80211_agg_start_txq(sta, tid, false); 914 916 915 917 if (send_delba) 916 918 ieee80211_send_delba(sdata, sta->sta.addr, tid,
+4 -1
net/mac80211/driver-ops.h
··· 1219 1219 { 1220 1220 struct ieee80211_sub_if_data *sdata = vif_to_sdata(txq->txq.vif); 1221 1221 1222 - if (local->in_reconfig) 1222 + /* In reconfig don't transmit now, but mark for waking later */ 1223 + if (local->in_reconfig) { 1224 + set_bit(IEEE80211_TXQ_STOP_NETIF_TX, &txq->flags); 1223 1225 return; 1226 + } 1224 1227 1225 1228 if (!check_sdata_in_driver(sdata)) 1226 1229 return;
+10 -3
net/mac80211/mlme.c
··· 2452 2452 u16 tx_time) 2453 2453 { 2454 2454 struct ieee80211_if_managed *ifmgd = &sdata->u.mgd; 2455 - u16 tid = ieee80211_get_tid(hdr); 2456 - int ac = ieee80211_ac_from_tid(tid); 2457 - struct ieee80211_sta_tx_tspec *tx_tspec = &ifmgd->tx_tspec[ac]; 2455 + u16 tid; 2456 + int ac; 2457 + struct ieee80211_sta_tx_tspec *tx_tspec; 2458 2458 unsigned long now = jiffies; 2459 + 2460 + if (!ieee80211_is_data_qos(hdr->frame_control)) 2461 + return; 2462 + 2463 + tid = ieee80211_get_tid(hdr); 2464 + ac = ieee80211_ac_from_tid(tid); 2465 + tx_tspec = &ifmgd->tx_tspec[ac]; 2459 2466 2460 2467 if (likely(!tx_tspec->admitted_time)) 2461 2468 return;
+1
net/mac80211/rx.c
··· 2944 2944 if (!fwd_skb) 2945 2945 goto out; 2946 2946 2947 + fwd_skb->dev = sdata->dev; 2947 2948 fwd_hdr = (struct ieee80211_hdr *) fwd_skb->data; 2948 2949 fwd_hdr->frame_control &= ~cpu_to_le16(IEEE80211_FCTL_RETRY); 2949 2950 info = IEEE80211_SKB_CB(fwd_skb);
+12 -9
net/mac80211/sta_info.c
··· 644 644 /* check if STA exists already */ 645 645 if (sta_info_get_bss(sdata, sta->sta.addr)) { 646 646 err = -EEXIST; 647 - goto out_err; 647 + goto out_cleanup; 648 648 } 649 649 650 650 sinfo = kzalloc(sizeof(struct station_info), GFP_KERNEL); 651 651 if (!sinfo) { 652 652 err = -ENOMEM; 653 - goto out_err; 653 + goto out_cleanup; 654 654 } 655 655 656 656 local->num_sta++; ··· 667 667 668 668 list_add_tail_rcu(&sta->list, &local->sta_list); 669 669 670 + /* update channel context before notifying the driver about state 671 + * change, this enables driver using the updated channel context right away. 672 + */ 673 + if (sta->sta_state >= IEEE80211_STA_ASSOC) { 674 + ieee80211_recalc_min_chandef(sta->sdata); 675 + if (!sta->sta.support_p2p_ps) 676 + ieee80211_recalc_p2p_go_ps_allowed(sta->sdata); 677 + } 678 + 670 679 /* notify driver */ 671 680 err = sta_info_insert_drv_state(local, sdata, sta); 672 681 if (err) 673 682 goto out_remove; 674 683 675 684 set_sta_flag(sta, WLAN_STA_INSERTED); 676 - 677 - if (sta->sta_state >= IEEE80211_STA_ASSOC) { 678 - ieee80211_recalc_min_chandef(sta->sdata); 679 - if (!sta->sta.support_p2p_ps) 680 - ieee80211_recalc_p2p_go_ps_allowed(sta->sdata); 681 - } 682 685 683 686 /* accept BA sessions now */ 684 687 clear_sta_flag(sta, WLAN_STA_BLOCK_BA); ··· 709 706 out_drop_sta: 710 707 local->num_sta--; 711 708 synchronize_net(); 709 + out_cleanup: 712 710 cleanup_single_sta(sta); 713 - out_err: 714 711 mutex_unlock(&local->sta_mtx); 715 712 kfree(sinfo); 716 713 rcu_read_lock();
+2
net/mac80211/sta_info.h
··· 176 176 * @failed_bar_ssn: ssn of the last failed BAR tx attempt 177 177 * @bar_pending: BAR needs to be re-sent 178 178 * @amsdu: support A-MSDU withing A-MDPU 179 + * @ssn: starting sequence number of the session 179 180 * 180 181 * This structure's lifetime is managed by RCU, assignments to 181 182 * the array holding it must hold the aggregation mutex. ··· 200 199 u8 stop_initiator; 201 200 bool tx_stop; 202 201 u16 buf_size; 202 + u16 ssn; 203 203 204 204 u16 failed_bar_ssn; 205 205 bool bar_pending;
+5 -5
net/mac80211/tx.c
··· 1822 1822 struct ieee80211_tx_info *info = IEEE80211_SKB_CB(tx->skb); 1823 1823 ieee80211_tx_result res = TX_CONTINUE; 1824 1824 1825 + if (!ieee80211_hw_check(&tx->local->hw, HAS_RATE_CONTROL)) 1826 + CALL_TXH(ieee80211_tx_h_rate_ctrl); 1827 + 1825 1828 if (unlikely(info->flags & IEEE80211_TX_INTFL_RETRANSMISSION)) { 1826 1829 __skb_queue_tail(&tx->skbs, tx->skb); 1827 1830 tx->skb = NULL; 1828 1831 goto txh_done; 1829 1832 } 1830 - 1831 - if (!ieee80211_hw_check(&tx->local->hw, HAS_RATE_CONTROL)) 1832 - CALL_TXH(ieee80211_tx_h_rate_ctrl); 1833 1833 1834 1834 CALL_TXH(ieee80211_tx_h_michael_mic_add); 1835 1835 CALL_TXH(ieee80211_tx_h_sequence); ··· 4191 4191 4192 4192 ieee80211_aggr_check(sdata, sta, skb); 4193 4193 4194 + sk_pacing_shift_update(skb->sk, sdata->local->hw.tx_sk_pacing_shift); 4195 + 4194 4196 if (sta) { 4195 4197 struct ieee80211_fast_tx *fast_tx; 4196 - 4197 - sk_pacing_shift_update(skb->sk, sdata->local->hw.tx_sk_pacing_shift); 4198 4198 4199 4199 fast_tx = rcu_dereference(sta->fast_tx); 4200 4200
+14 -9
net/mac80211/util.c
··· 943 943 struct ieee802_11_elems *elems) 944 944 { 945 945 const void *data = elem->data + 1; 946 - u8 len = elem->datalen - 1; 946 + u8 len; 947 + 948 + if (!elem->datalen) 949 + return; 950 + 951 + len = elem->datalen - 1; 947 952 948 953 switch (elem->data[0]) { 949 954 case WLAN_EID_EXT_HE_MU_EDCA: ··· 2068 2063 chandef.chan = chan; 2069 2064 2070 2065 skb = ieee80211_probereq_get(&local->hw, src, ssid, ssid_len, 2071 - 100 + ie_len); 2066 + local->scan_ies_len + ie_len); 2072 2067 if (!skb) 2073 2068 return NULL; 2074 2069 ··· 2651 2646 mutex_unlock(&local->sta_mtx); 2652 2647 } 2653 2648 2649 + /* 2650 + * If this is for hw restart things are still running. 2651 + * We may want to change that later, however. 2652 + */ 2653 + if (local->open_count && (!suspended || reconfig_due_to_wowlan)) 2654 + drv_reconfig_complete(local, IEEE80211_RECONFIG_TYPE_RESTART); 2655 + 2654 2656 if (local->in_reconfig) { 2655 2657 local->in_reconfig = false; 2656 2658 barrier(); ··· 2675 2663 ieee80211_wake_queues_by_reason(hw, IEEE80211_MAX_QUEUE_MAP, 2676 2664 IEEE80211_QUEUE_STOP_REASON_SUSPEND, 2677 2665 false); 2678 - 2679 - /* 2680 - * If this is for hw restart things are still running. 2681 - * We may want to change that later, however. 2682 - */ 2683 - if (local->open_count && (!suspended || reconfig_due_to_wowlan)) 2684 - drv_reconfig_complete(local, IEEE80211_RECONFIG_TYPE_RESTART); 2685 2666 2686 2667 if (!suspended) 2687 2668 return 0;
+3
net/mptcp/pm_netlink.c
··· 700 700 701 701 msk_owned_by_me(msk); 702 702 703 + if (sk->sk_state == TCP_LISTEN) 704 + return; 705 + 703 706 if (!rm_list->nr) 704 707 return; 705 708
+4 -2
net/mptcp/protocol.c
··· 1524 1524 int ret = 0; 1525 1525 1526 1526 prev_ssk = ssk; 1527 - mptcp_flush_join_list(msk); 1527 + __mptcp_flush_join_list(msk); 1528 1528 ssk = mptcp_subflow_get_send(msk); 1529 1529 1530 1530 /* First check. If the ssk has changed since ··· 2879 2879 */ 2880 2880 if (WARN_ON_ONCE(!new_mptcp_sock)) { 2881 2881 tcp_sk(newsk)->is_mptcp = 0; 2882 - return newsk; 2882 + goto out; 2883 2883 } 2884 2884 2885 2885 /* acquire the 2nd reference for the owning socket */ ··· 2891 2891 MPTCP_MIB_MPCAPABLEPASSIVEFALLBACK); 2892 2892 } 2893 2893 2894 + out: 2895 + newsk->sk_kern_sock = kern; 2894 2896 return newsk; 2895 2897 } 2896 2898
-1
net/mptcp/sockopt.c
··· 525 525 case TCP_NODELAY: 526 526 case TCP_THIN_LINEAR_TIMEOUTS: 527 527 case TCP_CONGESTION: 528 - case TCP_ULP: 529 528 case TCP_CORK: 530 529 case TCP_KEEPIDLE: 531 530 case TCP_KEEPINTVL:
+3 -2
net/packet/af_packet.c
··· 4492 4492 } 4493 4493 4494 4494 out_free_pg_vec: 4495 - bitmap_free(rx_owner_map); 4496 - if (pg_vec) 4495 + if (pg_vec) { 4496 + bitmap_free(rx_owner_map); 4497 4497 free_pg_vec(pg_vec, order, req->tp_block_nr); 4498 + } 4498 4499 out: 4499 4500 return err; 4500 4501 }
+1
net/phonet/pep.c
··· 868 868 869 869 err = pep_accept_conn(newsk, skb); 870 870 if (err) { 871 + __sock_put(sk); 871 872 sock_put(newsk); 872 873 newsk = NULL; 873 874 goto drop;
+1
net/rds/connection.c
··· 253 253 * should end up here, but if it 254 254 * does, reset/destroy the connection. 255 255 */ 256 + kfree(conn->c_path); 256 257 kmem_cache_free(rds_conn_slab, conn); 257 258 conn = ERR_PTR(-EOPNOTSUPP); 258 259 goto out;
+1
net/sched/cls_api.c
··· 3687 3687 entry->mpls_mangle.ttl = tcf_mpls_ttl(act); 3688 3688 break; 3689 3689 default: 3690 + err = -EOPNOTSUPP; 3690 3691 goto err_out_locked; 3691 3692 } 3692 3693 } else if (is_tcf_skbedit_ptype(act)) {
+1 -5
net/sched/sch_cake.c
··· 2736 2736 q->tins = kvcalloc(CAKE_MAX_TINS, sizeof(struct cake_tin_data), 2737 2737 GFP_KERNEL); 2738 2738 if (!q->tins) 2739 - goto nomem; 2739 + return -ENOMEM; 2740 2740 2741 2741 for (i = 0; i < CAKE_MAX_TINS; i++) { 2742 2742 struct cake_tin_data *b = q->tins + i; ··· 2766 2766 q->min_netlen = ~0; 2767 2767 q->min_adjlen = ~0; 2768 2768 return 0; 2769 - 2770 - nomem: 2771 - cake_destroy(sch); 2772 - return -ENOMEM; 2773 2769 } 2774 2770 2775 2771 static int cake_dump(struct Qdisc *sch, struct sk_buff *skb)
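The sch_cake hunk drops the `goto nomem` that called `cake_destroy()` from within `cake_init()`: when init fails it should only undo its own partial work and report the error, because the destroy path assumes state that init has not yet set up. A hedged sketch of that ownership split (names are illustrative):

```c
#include <assert.h>
#include <stdlib.h>

struct sched { int *tins; int initialized; };

static void sched_destroy(struct sched *q)
{
    /* assumes a fully built object: frees tins, stops timers, ... */
    free(q->tins);
    q->tins = NULL;
}

static int sched_init(struct sched *q, size_t ntins)
{
    q->tins = calloc(ntins, sizeof(*q->tins));
    if (!q->tins)
        return -1;      /* nothing else built yet: just return the
                         * error, do NOT call sched_destroy() here */
    q->initialized = 1;
    return 0;
}
```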
+2 -2
net/sched/sch_ets.c
··· 666 666 } 667 667 } 668 668 for (i = q->nbands; i < oldbands; i++) { 669 - qdisc_tree_flush_backlog(q->classes[i].qdisc); 670 - if (i >= q->nstrict) 669 + if (i >= q->nstrict && q->classes[i].qdisc->q.qlen) 671 670 list_del(&q->classes[i].alist); 671 + qdisc_tree_flush_backlog(q->classes[i].qdisc); 672 672 } 673 673 q->nstrict = nstrict; 674 674 memcpy(q->prio2band, priomap, sizeof(priomap));
+3 -1
net/smc/af_smc.c
··· 194 194 /* cleanup for a dangling non-blocking connect */ 195 195 if (smc->connect_nonblock && sk->sk_state == SMC_INIT) 196 196 tcp_abort(smc->clcsock->sk, ECONNABORTED); 197 - flush_work(&smc->connect_work); 197 + 198 + if (cancel_work_sync(&smc->connect_work)) 199 + sock_put(&smc->sk); /* sock_hold in smc_connect for passive closing */ 198 200 199 201 if (sk->sk_state == SMC_LISTEN) 200 202 /* smc_close_non_accepted() is called and acquires
+28 -2
net/wireless/reg.c
··· 133 133 134 134 static void restore_regulatory_settings(bool reset_user, bool cached); 135 135 static void print_regdomain(const struct ieee80211_regdomain *rd); 136 + static void reg_process_hint(struct regulatory_request *reg_request); 136 137 137 138 static const struct ieee80211_regdomain *get_cfg80211_regdom(void) 138 139 { ··· 1099 1098 const struct firmware *fw; 1100 1099 void *db; 1101 1100 int err; 1101 + const struct ieee80211_regdomain *current_regdomain; 1102 + struct regulatory_request *request; 1102 1103 1103 1104 err = request_firmware(&fw, "regulatory.db", &reg_pdev->dev); 1104 1105 if (err) ··· 1121 1118 if (!IS_ERR_OR_NULL(regdb)) 1122 1119 kfree(regdb); 1123 1120 regdb = db; 1124 - rtnl_unlock(); 1125 1121 1122 + /* reset regulatory domain */ 1123 + current_regdomain = get_cfg80211_regdom(); 1124 + 1125 + request = kzalloc(sizeof(*request), GFP_KERNEL); 1126 + if (!request) { 1127 + err = -ENOMEM; 1128 + goto out_unlock; 1129 + } 1130 + 1131 + request->wiphy_idx = WIPHY_IDX_INVALID; 1132 + request->alpha2[0] = current_regdomain->alpha2[0]; 1133 + request->alpha2[1] = current_regdomain->alpha2[1]; 1134 + request->initiator = NL80211_REGDOM_SET_BY_CORE; 1135 + request->user_reg_hint_type = NL80211_USER_REG_HINT_USER; 1136 + 1137 + reg_process_hint(request); 1138 + 1139 + out_unlock: 1140 + rtnl_unlock(); 1126 1141 out: 1127 1142 release_firmware(fw); 1128 1143 return err; ··· 2359 2338 struct cfg80211_chan_def chandef = {}; 2360 2339 struct cfg80211_registered_device *rdev = wiphy_to_rdev(wiphy); 2361 2340 enum nl80211_iftype iftype; 2341 + bool ret; 2362 2342 2363 2343 wdev_lock(wdev); 2364 2344 iftype = wdev->iftype; ··· 2409 2387 case NL80211_IFTYPE_AP: 2410 2388 case NL80211_IFTYPE_P2P_GO: 2411 2389 case NL80211_IFTYPE_ADHOC: 2412 - return cfg80211_reg_can_beacon_relax(wiphy, &chandef, iftype); 2390 + wiphy_lock(wiphy); 2391 + ret = cfg80211_reg_can_beacon_relax(wiphy, &chandef, iftype); 2392 + wiphy_unlock(wiphy); 2393 + 2394 + return ret; 
2413 2395 case NL80211_IFTYPE_STATION: 2414 2396 case NL80211_IFTYPE_P2P_CLIENT: 2415 2397 return cfg80211_chandef_usable(wiphy, &chandef,
+2 -2
net/xdp/xsk.c
··· 677 677 struct xdp_sock *xs = xdp_sk(sk); 678 678 struct xsk_buff_pool *pool; 679 679 680 - sock_poll_wait(file, sock, wait); 681 - 682 680 if (unlikely(!xsk_is_bound(xs))) 683 681 return mask; 684 682 ··· 688 690 else 689 691 /* Poll needs to drive Tx also in copy mode */ 690 692 __xsk_sendmsg(sk); 693 + } else { 694 + sock_poll_wait(file, sock, wait); 691 695 } 692 696 693 697 if (xs->rx && !xskq_prod_is_empty(xs->rx))
+20
tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
··· 33 33 return sum; 34 34 } 35 35 36 + __weak noinline struct file *bpf_testmod_return_ptr(int arg) 37 + { 38 + static struct file f = {}; 39 + 40 + switch (arg) { 41 + case 1: return (void *)EINVAL; /* user addr */ 42 + case 2: return (void *)0xcafe4a11; /* user addr */ 43 + case 3: return (void *)-EINVAL; /* canonical, but invalid */ 44 + case 4: return (void *)(1ull << 60); /* non-canonical and invalid */ 45 + case 5: return (void *)~(1ull << 30); /* trigger extable */ 46 + case 6: return &f; /* valid addr */ 47 + case 7: return (void *)((long)&f | 1); /* kernel tricks */ 48 + default: return NULL; 49 + } 50 + } 51 + 36 52 noinline ssize_t 37 53 bpf_testmod_test_read(struct file *file, struct kobject *kobj, 38 54 struct bin_attribute *bin_attr, ··· 59 43 .off = off, 60 44 .len = len, 61 45 }; 46 + int i = 1; 47 + 48 + while (bpf_testmod_return_ptr(i)) 49 + i++; 62 50 63 51 /* This is always true. Use the check to make sure the compiler 64 52 * doesn't remove bpf_testmod_loop_test.
+14 -2
tools/testing/selftests/bpf/prog_tests/btf_skc_cls_ingress.c
··· 90 90 91 91 static void test_conn(void) 92 92 { 93 - int listen_fd = -1, cli_fd = -1, err; 93 + int listen_fd = -1, cli_fd = -1, srv_fd = -1, err; 94 94 socklen_t addrlen = sizeof(srv_sa6); 95 95 int srv_port; 96 96 ··· 110 110 111 111 cli_fd = connect_to_fd(listen_fd, 0); 112 112 if (CHECK_FAIL(cli_fd == -1)) 113 + goto done; 114 + 115 + srv_fd = accept(listen_fd, NULL, NULL); 116 + if (CHECK_FAIL(srv_fd == -1)) 113 117 goto done; 114 118 115 119 if (CHECK(skel->bss->listen_tp_sport != srv_port || ··· 138 134 close(listen_fd); 139 135 if (cli_fd != -1) 140 136 close(cli_fd); 137 + if (srv_fd != -1) 138 + close(srv_fd); 141 139 } 142 140 143 141 static void test_syncookie(void) 144 142 { 145 - int listen_fd = -1, cli_fd = -1, err; 143 + int listen_fd = -1, cli_fd = -1, srv_fd = -1, err; 146 144 socklen_t addrlen = sizeof(srv_sa6); 147 145 int srv_port; 148 146 ··· 165 159 166 160 cli_fd = connect_to_fd(listen_fd, 0); 167 161 if (CHECK_FAIL(cli_fd == -1)) 162 + goto done; 163 + 164 + srv_fd = accept(listen_fd, NULL, NULL); 165 + if (CHECK_FAIL(srv_fd == -1)) 168 166 goto done; 169 167 170 168 if (CHECK(skel->bss->listen_tp_sport != srv_port, ··· 198 188 close(listen_fd); 199 189 if (cli_fd != -1) 200 190 close(cli_fd); 191 + if (srv_fd != -1) 192 + close(srv_fd); 201 193 } 202 194 203 195 struct test {
+12
tools/testing/selftests/bpf/progs/test_module_attach.c
··· 87 87 return 0; 88 88 } 89 89 90 + SEC("fexit/bpf_testmod_return_ptr") 91 + int BPF_PROG(handle_fexit_ret, int arg, struct file *ret) 92 + { 93 + long buf = 0; 94 + 95 + bpf_probe_read_kernel(&buf, 8, ret); 96 + bpf_probe_read_kernel(&buf, 8, (char *)ret + 256); 97 + *(volatile long long *)ret; 98 + *(volatile int *)&ret->f_mode; 99 + return 0; 100 + } 101 + 90 102 __u32 fmod_ret_read_sz = 0; 91 103 92 104 SEC("fmod_ret/bpf_testmod_test_read")
+1 -1
tools/testing/selftests/bpf/test_verifier.c
··· 54 54 #define MAX_INSNS BPF_MAXINSNS 55 55 #define MAX_TEST_INSNS 1000000 56 56 #define MAX_FIXUPS 8 57 - #define MAX_NR_MAPS 21 57 + #define MAX_NR_MAPS 22 58 58 #define MAX_TEST_RUNS 8 59 59 #define POINTER_VALUE 0xcafe4all 60 60 #define TEST_DATA_LEN 64
+86
tools/testing/selftests/bpf/verifier/atomic_cmpxchg.c
···
 		BPF_EXIT_INSN(),
 	},
 	.result = ACCEPT,
+	.result_unpriv = REJECT,
+	.errstr_unpriv = "R0 leaks addr into mem",
 },
 {
 	"Dest pointer in r0 - succeed",
···
 		BPF_EXIT_INSN(),
 	},
 	.result = ACCEPT,
+	.result_unpriv = REJECT,
+	.errstr_unpriv = "R0 leaks addr into mem",
+},
+{
+	"Dest pointer in r0 - succeed, check 2",
+	.insns = {
+		/* r0 = &val */
+		BPF_MOV64_REG(BPF_REG_0, BPF_REG_10),
+		/* val = r0; */
+		BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -8),
+		/* r5 = &val */
+		BPF_MOV64_REG(BPF_REG_5, BPF_REG_10),
+		/* r0 = atomic_cmpxchg(&val, r0, r5); */
+		BPF_ATOMIC_OP(BPF_DW, BPF_CMPXCHG, BPF_REG_10, BPF_REG_5, -8),
+		/* r1 = *r0 */
+		BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, -8),
+		/* exit(0); */
+		BPF_MOV64_IMM(BPF_REG_0, 0),
+		BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.result_unpriv = REJECT,
+	.errstr_unpriv = "R0 leaks addr into mem",
+},
+{
+	"Dest pointer in r0 - succeed, check 3",
+	.insns = {
+		/* r0 = &val */
+		BPF_MOV64_REG(BPF_REG_0, BPF_REG_10),
+		/* val = r0; */
+		BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -8),
+		/* r5 = &val */
+		BPF_MOV64_REG(BPF_REG_5, BPF_REG_10),
+		/* r0 = atomic_cmpxchg(&val, r0, r5); */
+		BPF_ATOMIC_OP(BPF_W, BPF_CMPXCHG, BPF_REG_10, BPF_REG_5, -8),
+		/* exit(0); */
+		BPF_MOV64_IMM(BPF_REG_0, 0),
+		BPF_EXIT_INSN(),
+	},
+	.result = REJECT,
+	.errstr = "invalid size of register fill",
+	.errstr_unpriv = "R0 leaks addr into mem",
+},
+{
+	"Dest pointer in r0 - succeed, check 4",
+	.insns = {
+		/* r0 = &val */
+		BPF_MOV32_REG(BPF_REG_0, BPF_REG_10),
+		/* val = r0; */
+		BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_0, -8),
+		/* r5 = &val */
+		BPF_MOV32_REG(BPF_REG_5, BPF_REG_10),
+		/* r0 = atomic_cmpxchg(&val, r0, r5); */
+		BPF_ATOMIC_OP(BPF_W, BPF_CMPXCHG, BPF_REG_10, BPF_REG_5, -8),
+		/* r1 = *r10 */
+		BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_10, -8),
+		/* exit(0); */
+		BPF_MOV64_IMM(BPF_REG_0, 0),
+		BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.result_unpriv = REJECT,
+	.errstr_unpriv = "R10 partial copy of pointer",
+},
+{
+	"Dest pointer in r0 - succeed, check 5",
+	.insns = {
+		/* r0 = &val */
+		BPF_MOV32_REG(BPF_REG_0, BPF_REG_10),
+		/* val = r0; */
+		BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_0, -8),
+		/* r5 = &val */
+		BPF_MOV32_REG(BPF_REG_5, BPF_REG_10),
+		/* r0 = atomic_cmpxchg(&val, r0, r5); */
+		BPF_ATOMIC_OP(BPF_W, BPF_CMPXCHG, BPF_REG_10, BPF_REG_5, -8),
+		/* r1 = *r0 */
+		BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, -8),
+		/* exit(0); */
+		BPF_MOV64_IMM(BPF_REG_0, 0),
+		BPF_EXIT_INSN(),
+	},
+	.result = REJECT,
+	.errstr = "R0 invalid mem access",
+	.errstr_unpriv = "R10 partial copy of pointer",
 },
tools/testing/selftests/bpf/verifier/atomic_fetch.c (+94)

···
+{
+	"atomic dw/fetch and address leakage of (map ptr & -1) via stack slot",
+	.insns = {
+		BPF_LD_IMM64(BPF_REG_1, -1),
+		BPF_LD_MAP_FD(BPF_REG_8, 0),
+		BPF_LD_MAP_FD(BPF_REG_9, 0),
+		BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+		BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+		BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_9, 0),
+		BPF_ATOMIC_OP(BPF_DW, BPF_AND | BPF_FETCH, BPF_REG_2, BPF_REG_1, 0),
+		BPF_LDX_MEM(BPF_DW, BPF_REG_9, BPF_REG_2, 0),
+		BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
+		BPF_MOV64_REG(BPF_REG_1, BPF_REG_8),
+		BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
+		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
+		BPF_STX_MEM(BPF_DW, BPF_REG_0, BPF_REG_9, 0),
+		BPF_MOV64_IMM(BPF_REG_0, 0),
+		BPF_EXIT_INSN(),
+	},
+	.fixup_map_array_48b = { 2, 4 },
+	.result = ACCEPT,
+	.result_unpriv = REJECT,
+	.errstr_unpriv = "leaking pointer from stack off -8",
+},
+{
+	"atomic dw/fetch and address leakage of (map ptr & -1) via returned value",
+	.insns = {
+		BPF_LD_IMM64(BPF_REG_1, -1),
+		BPF_LD_MAP_FD(BPF_REG_8, 0),
+		BPF_LD_MAP_FD(BPF_REG_9, 0),
+		BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+		BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+		BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_9, 0),
+		BPF_ATOMIC_OP(BPF_DW, BPF_AND | BPF_FETCH, BPF_REG_2, BPF_REG_1, 0),
+		BPF_MOV64_REG(BPF_REG_9, BPF_REG_1),
+		BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
+		BPF_MOV64_REG(BPF_REG_1, BPF_REG_8),
+		BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
+		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
+		BPF_STX_MEM(BPF_DW, BPF_REG_0, BPF_REG_9, 0),
+		BPF_MOV64_IMM(BPF_REG_0, 0),
+		BPF_EXIT_INSN(),
+	},
+	.fixup_map_array_48b = { 2, 4 },
+	.result = ACCEPT,
+	.result_unpriv = REJECT,
+	.errstr_unpriv = "leaking pointer from stack off -8",
+},
+{
+	"atomic w/fetch and address leakage of (map ptr & -1) via stack slot",
+	.insns = {
+		BPF_LD_IMM64(BPF_REG_1, -1),
+		BPF_LD_MAP_FD(BPF_REG_8, 0),
+		BPF_LD_MAP_FD(BPF_REG_9, 0),
+		BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+		BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+		BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_9, 0),
+		BPF_ATOMIC_OP(BPF_W, BPF_AND | BPF_FETCH, BPF_REG_2, BPF_REG_1, 0),
+		BPF_LDX_MEM(BPF_DW, BPF_REG_9, BPF_REG_2, 0),
+		BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
+		BPF_MOV64_REG(BPF_REG_1, BPF_REG_8),
+		BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
+		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
+		BPF_STX_MEM(BPF_DW, BPF_REG_0, BPF_REG_9, 0),
+		BPF_MOV64_IMM(BPF_REG_0, 0),
+		BPF_EXIT_INSN(),
+	},
+	.fixup_map_array_48b = { 2, 4 },
+	.result = REJECT,
+	.errstr = "invalid size of register fill",
+},
+{
+	"atomic w/fetch and address leakage of (map ptr & -1) via returned value",
+	.insns = {
+		BPF_LD_IMM64(BPF_REG_1, -1),
+		BPF_LD_MAP_FD(BPF_REG_8, 0),
+		BPF_LD_MAP_FD(BPF_REG_9, 0),
+		BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+		BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+		BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_9, 0),
+		BPF_ATOMIC_OP(BPF_W, BPF_AND | BPF_FETCH, BPF_REG_2, BPF_REG_1, 0),
+		BPF_MOV64_REG(BPF_REG_9, BPF_REG_1),
+		BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
+		BPF_MOV64_REG(BPF_REG_1, BPF_REG_8),
+		BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
+		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
+		BPF_STX_MEM(BPF_DW, BPF_REG_0, BPF_REG_9, 0),
+		BPF_MOV64_IMM(BPF_REG_0, 0),
+		BPF_EXIT_INSN(),
+	},
+	.fixup_map_array_48b = { 2, 4 },
+	.result = REJECT,
+	.errstr = "invalid size of register fill",
+},
 #define __ATOMIC_FETCH_OP_TEST(src_reg, dst_reg, operand1, op, operand2, expect) \
 { \
 	"atomic fetch " #op ", src=" #dst_reg " dst=" #dst_reg, \
tools/testing/selftests/bpf/verifier/search_pruning.c (+71)

···
 	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
 },
 {
+	"precision tracking for u32 spill/fill",
+	.insns = {
+		BPF_MOV64_REG(BPF_REG_7, BPF_REG_1),
+		BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32),
+		BPF_MOV32_IMM(BPF_REG_6, 32),
+		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
+		BPF_MOV32_IMM(BPF_REG_6, 4),
+		/* Additional insns to introduce a pruning point. */
+		BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32),
+		BPF_MOV64_IMM(BPF_REG_3, 0),
+		BPF_MOV64_IMM(BPF_REG_3, 0),
+		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
+		BPF_MOV64_IMM(BPF_REG_3, 0),
+		/* u32 spill/fill */
+		BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_6, -8),
+		BPF_LDX_MEM(BPF_W, BPF_REG_8, BPF_REG_10, -8),
+		/* out-of-bound map value access for r6=32 */
+		BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, 0),
+		BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+		BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -16),
+		BPF_LD_MAP_FD(BPF_REG_1, 0),
+		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
+		BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_8),
+		BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
+		BPF_MOV64_IMM(BPF_REG_0, 0),
+		BPF_EXIT_INSN(),
+	},
+	.fixup_map_hash_8b = { 15 },
+	.result = REJECT,
+	.errstr = "R0 min value is outside of the allowed memory range",
+	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
+},
+{
+	"precision tracking for u32 spills, u64 fill",
+	.insns = {
+		BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32),
+		BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
+		BPF_MOV32_IMM(BPF_REG_7, 0xffffffff),
+		/* Additional insns to introduce a pruning point. */
+		BPF_MOV64_IMM(BPF_REG_3, 1),
+		BPF_MOV64_IMM(BPF_REG_3, 1),
+		BPF_MOV64_IMM(BPF_REG_3, 1),
+		BPF_MOV64_IMM(BPF_REG_3, 1),
+		BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32),
+		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
+		BPF_MOV64_IMM(BPF_REG_3, 1),
+		BPF_ALU32_IMM(BPF_DIV, BPF_REG_3, 0),
+		/* u32 spills, u64 fill */
+		BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_6, -4),
+		BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_7, -8),
+		BPF_LDX_MEM(BPF_DW, BPF_REG_8, BPF_REG_10, -8),
+		/* if r8 != X goto pc+1  r8 known in fallthrough branch */
+		BPF_JMP_IMM(BPF_JNE, BPF_REG_8, 0xffffffff, 1),
+		BPF_MOV64_IMM(BPF_REG_3, 1),
+		/* if r8 == X goto pc+1  condition always true on first
+		 * traversal, so starts backtracking to mark r8 as requiring
+		 * precision. r7 marked as needing precision. r6 not marked
+		 * since it's not tracked.
+		 */
+		BPF_JMP_IMM(BPF_JEQ, BPF_REG_8, 0xffffffff, 1),
+		/* fails if r8 correctly marked unknown after fill. */
+		BPF_ALU32_IMM(BPF_DIV, BPF_REG_3, 0),
+		BPF_MOV64_IMM(BPF_REG_0, 0),
+		BPF_EXIT_INSN(),
+	},
+	.result = REJECT,
+	.errstr = "div by zero",
+	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
+},
+{
 	"allocated_stack",
 	.insns = {
 		BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_1),
tools/testing/selftests/bpf/verifier/spill_fill.c (+32)

···
 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
 },
 {
+	"Spill u32 const scalars. Refill as u64. Offset to skb->data",
+	.insns = {
+	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
+		    offsetof(struct __sk_buff, data)),
+	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
+		    offsetof(struct __sk_buff, data_end)),
+	/* r6 = 0 */
+	BPF_MOV32_IMM(BPF_REG_6, 0),
+	/* r7 = 20 */
+	BPF_MOV32_IMM(BPF_REG_7, 20),
+	/* *(u32 *)(r10 -4) = r6 */
+	BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_6, -4),
+	/* *(u32 *)(r10 -8) = r7 */
+	BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_7, -8),
+	/* r4 = *(u64 *)(r10 -8) */
+	BPF_LDX_MEM(BPF_H, BPF_REG_4, BPF_REG_10, -8),
+	/* r0 = r2 */
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
+	/* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=inv,umax=65535 */
+	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_4),
+	/* if (r0 > r3) R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=inv,umax=65535 */
+	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
+	/* r0 = *(u32 *)r2 R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=inv20 */
+	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_2, 0),
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_EXIT_INSN(),
+	},
+	.result = REJECT,
+	.errstr = "invalid access to packet",
+	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+},
+{
 	"Spill a u32 const scalar. Refill as u16 from fp-6. Offset to skb->data",
 	.insns = {
 	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
tools/testing/selftests/bpf/verifier/value_ptr_arith.c (+23)

···
 	.errstr_unpriv = "R0 pointer -= pointer prohibited",
 },
 {
+	"map access: trying to leak tained dst reg",
+	.insns = {
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+	BPF_LD_MAP_FD(BPF_REG_1, 0),
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_EXIT_INSN(),
+	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
+	BPF_MOV32_IMM(BPF_REG_1, 0xFFFFFFFF),
+	BPF_MOV32_REG(BPF_REG_1, BPF_REG_1),
+	BPF_ALU64_REG(BPF_SUB, BPF_REG_2, BPF_REG_1),
+	BPF_STX_MEM(BPF_DW, BPF_REG_0, BPF_REG_2, 0),
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_EXIT_INSN(),
+	},
+	.fixup_map_array_48b = { 4 },
+	.result = REJECT,
+	.errstr = "math between map_value pointer and 4294967295 is not allowed",
+},
+{
 	"32bit pkt_ptr -= scalar",
 	.insns = {
 		BPF_LDX_MEM(BPF_W, BPF_REG_8, BPF_REG_1,
tools/testing/selftests/drivers/net/mlxsw/rif_mac_profiles_occ.sh (+30)

···
 	ip link set $h1.10 address $h1_10_mac
 }
 
+rif_mac_profile_consolidation_test()
+{
+	local count=$1; shift
+	local h1_20_mac
+
+	RET=0
+
+	if [[ $count -eq 1 ]]; then
+		return
+	fi
+
+	h1_20_mac=$(mac_get $h1.20)
+
+	# Set the MAC of $h1.20 to that of $h1.10 and confirm that they are
+	# using the same MAC profile.
+	ip link set $h1.20 address 00:11:11:11:11:11
+	check_err $?
+
+	occ=$(devlink -j resource show $DEVLINK_DEV \
+	      | jq '.[][][] | select(.name=="rif_mac_profiles") |.["occ"]')
+
+	[[ $occ -eq $((count - 1)) ]]
+	check_err $? "MAC profile occupancy did not decrease"
+
+	log_test "RIF MAC profile consolidation"
+
+	ip link set $h1.20 address $h1_20_mac
+}
+
 rif_mac_profile_shared_replacement_test()
 {
 	local count=$1; shift
···
 	create_max_rif_mac_profiles $count
 
 	rif_mac_profile_replacement_test
+	rif_mac_profile_consolidation_test $count
 	rif_mac_profile_shared_replacement_test $count
 }
tools/testing/selftests/net/fcnal-test.sh (+34 -11)

···
 	ip netns del ${NSC} >/dev/null 2>&1
 }
 
+cleanup_vrf_dup()
+{
+	ip link del ${NSA_DEV2} >/dev/null 2>&1
+	ip netns pids ${NSC} | xargs kill 2>/dev/null
+	ip netns del ${NSC} >/dev/null 2>&1
+}
+
+setup_vrf_dup()
+{
+	# some VRF tests use ns-C which has the same config as
+	# ns-B but for a device NOT in the VRF
+	create_ns ${NSC} "-" "-"
+	connect_ns ${NSA} ${NSA_DEV2} ${NSA_IP}/24 ${NSA_IP6}/64 \
+		   ${NSC} ${NSC_DEV} ${NSB_IP}/24 ${NSB_IP6}/64
+}
+
 setup()
 {
 	local with_vrf=${1}
···
 
 		ip -netns ${NSB} ro add ${VRF_IP}/32 via ${NSA_IP} dev ${NSB_DEV}
 		ip -netns ${NSB} -6 ro add ${VRF_IP6}/128 via ${NSA_IP6} dev ${NSB_DEV}
-
-		# some VRF tests use ns-C which has the same config as
-		# ns-B but for a device NOT in the VRF
-		create_ns ${NSC} "-" "-"
-		connect_ns ${NSA} ${NSA_DEV2} ${NSA_IP}/24 ${NSA_IP6}/64 \
-			   ${NSC} ${NSC_DEV} ${NSB_IP}/24 ${NSB_IP6}/64
 	else
 		ip -netns ${NSA} ro add ${NSB_LO_IP}/32 via ${NSB_IP} dev ${NSA_DEV}
 		ip -netns ${NSA} ro add ${NSB_LO_IP6}/128 via ${NSB_IP6} dev ${NSA_DEV}
···
 	log_test_addr ${a} $? 1 "Global server, local connection"
 
 	# run MD5 tests
+	setup_vrf_dup
 	ipv4_tcp_md5
+	cleanup_vrf_dup
 
 	#
 	# enable VRF global server
···
 	for a in ${NSA_IP} ${VRF_IP}
 	do
 		log_start
+		show_hint "Socket not bound to VRF, but address is in VRF"
 		run_cmd nettest -s -R -P icmp -l ${a} -b
-		log_test_addr ${a} $? 0 "Raw socket bind to local address"
+		log_test_addr ${a} $? 1 "Raw socket bind to local address"
 
 		log_start
 		run_cmd nettest -s -R -P icmp -l ${a} -I ${NSA_DEV} -b
···
 	log_start
 	show_hint "Fails since VRF device does not support linklocal or multicast"
 	run_cmd ${ping6} -c1 -w1 ${a}
-	log_test_addr ${a} $? 2 "ping out, VRF bind"
+	log_test_addr ${a} $? 1 "ping out, VRF bind"
 	done
 
 	for a in ${NSB_IP6} ${NSB_LO_IP6} ${NSB_LINKIP6}%${NSA_DEV} ${MCAST}%${NSA_DEV}
···
 	log_test_addr ${a} $? 1 "Global server, local connection"
 
 	# run MD5 tests
+	setup_vrf_dup
 	ipv6_tcp_md5
+	cleanup_vrf_dup
 
 	#
 	# enable VRF global server
···
 	run_cmd nettest -6 -s -l ${a} -I ${NSA_DEV} -t1 -b
 	log_test_addr ${a} $? 0 "TCP socket bind to local address after device bind"
 
+	# Sadly, the kernel allows binding a socket to a device and then
+	# binding to an address not on the device. So this test passes
+	# when it really should not
 	a=${NSA_LO_IP6}
 	log_start
-	show_hint "Should fail with 'Cannot assign requested address'"
+	show_hint "Tecnically should fail since address is not on device but kernel allows"
 	run_cmd nettest -6 -s -l ${a} -I ${NSA_DEV} -t1 -b
-	log_test_addr ${a} $? 1 "TCP socket bind to out of scope local address"
+	log_test_addr ${a} $? 0 "TCP socket bind to out of scope local address"
 }
 
 ipv6_addr_bind_vrf()
···
 	run_cmd nettest -6 -s -l ${a} -I ${NSA_DEV} -t1 -b
 	log_test_addr ${a} $? 0 "TCP socket bind to local address with device bind"
 
+	# Sadly, the kernel allows binding a socket to a device and then
+	# binding to an address not on the device. The only restriction
+	# is that the address is valid in the L3 domain. So this test
+	# passes when it really should not
 	a=${VRF_IP6}
 	log_start
+	show_hint "Tecnically should fail since address is not on device but kernel allows"
 	run_cmd nettest -6 -s -l ${a} -I ${NSA_DEV} -t1 -b
-	log_test_addr ${a} $? 1 "TCP socket bind to VRF address with device bind"
+	log_test_addr ${a} $? 0 "TCP socket bind to VRF address with device bind"
 
 	a=${NSA_LO_IP6}
 	log_start
tools/testing/selftests/net/forwarding/forwarding.config.sample (+2)

···
 NETIFS[p6]=veth5
 NETIFS[p7]=veth6
 NETIFS[p8]=veth7
+NETIFS[p9]=veth8
+NETIFS[p10]=veth9
 
 # Port that does not have a cable connected.
 NETIF_NO_CABLE=eth8
tools/testing/selftests/net/icmp_redirect.sh (+1 -1)

···
 		ip -netns h1 ro get ${H1_VRF_ARG} ${H2_N2_IP} | \
 		grep -E -v 'mtu|redirected' | grep -q "cache"
 	fi
-	log_test $? 0 "IPv4: ${desc}"
+	log_test $? 0 "IPv4: ${desc}" 0
 
 	# No PMTU info for test "redirect" and "mtu exception plus redirect"
 	if [ "$with_redirect" = "yes" ] && [ "$desc" != "redirect exception plus mtu" ]; then
tools/testing/selftests/net/toeplitz.c (+1 -1)

···
 	bool have_toeplitz = false;
 	int index, c;
 
-	while ((c = getopt_long(argc, argv, "46C:d:i:k:r:stT:u:v", long_options, &index)) != -1) {
+	while ((c = getopt_long(argc, argv, "46C:d:i:k:r:stT:uv", long_options, &index)) != -1) {
 		switch (c) {
 		case '4':
 			cfg_family = AF_INET;