Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'net-6.1-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Paolo Abeni:
"Including fixes from netfilter.

Current release - regressions:

- revert "net: fix cpu_max_bits_warn() usage in
netif_attrmask_next{,_and}"

- revert "net: sched: fq_codel: remove redundant resource cleanup in
fq_codel_init()"

- dsa: uninitialized variable in dsa_slave_netdevice_event()

- eth: sunhme: uninitialized variable in happy_meal_init()

Current release - new code bugs:

- eth: octeontx2: fix resource not freed after malloc

Previous releases - regressions:

- sched: fix return value of qdisc ingress handling on success

- sched: fix race condition in qdisc_graft()

- udp: update reuse->has_conns under reuseport_lock.

- tls: strp: make sure the TCP skbs do not have overlapping data

- hsr: avoid possible NULL deref in skb_clone()

- tipc: fix an information leak in tipc_topsrv_kern_subscr

- phylink: add mac_managed_pm in phylink_config structure

- eth: i40e: fix DMA mappings leak

- eth: hyperv: fix a RX-path warning

- eth: mtk: fix memory leaks

Previous releases - always broken:

- sched: cake: fix null pointer access issue when cake_init() fails"

* tag 'net-6.1-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (43 commits)
net: phy: dp83822: disable MDI crossover status change interrupt
net: sched: fix race condition in qdisc_graft()
net: hns: fix possible memory leak in hnae_ae_register()
wwan_hwsim: fix possible memory leak in wwan_hwsim_dev_new()
sfc: include vport_id in filter spec hash and equal()
genetlink: fix kdoc warnings
selftests: add selftest for chaining of tc ingress handling to egress
net: Fix return value of qdisc ingress handling on success
net: sched: sfb: fix null pointer access issue when sfb_init() fails
Revert "net: sched: fq_codel: remove redundant resource cleanup in fq_codel_init()"
net: sched: cake: fix null pointer access issue when cake_init() fails
ethernet: marvell: octeontx2 Fix resource not freed after malloc
netfilter: nf_tables: relax NFTA_SET_ELEM_KEY_END set flags requirements
netfilter: rpfilter/fib: Set ->flowic_uid correctly for user namespaces.
ionic: catch NULL pointer issue on reconfig
net: hsr: avoid possible NULL deref in skb_clone()
bnxt_en: fix memory leak in bnxt_nvm_test()
ip6mr: fix UAF issue in ip6mr_sk_done() when addrconf_init_net() failed
udp: Update reuse->has_conns under reuseport_lock.
net: ethernet: mediatek: ppe: Remove the unused function mtk_foe_entry_usable()
...

+432 -185
-1
Documentation/devicetree/bindings/net/nfc/samsung,s3fwrn5.yaml
···
 
 maintainers:
   - Krzysztof Kozlowski <krzk@kernel.org>
-  - Krzysztof Opasiak <k.opasiak@samsung.com>
 
 properties:
   compatible:
+2 -3
MAINTAINERS
···
 
 SAMSUNG S3FWRN5 NFC DRIVER
 M:	Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
-M:	Krzysztof Opasiak <k.opasiak@samsung.com>
 L:	linux-nfc@lists.01.org (subscribers-only)
 S:	Maintained
 F:	Documentation/devicetree/bindings/net/nfc/samsung,s3fwrn5.yaml
···
 L:	netdev@vger.kernel.org
 S:	Maintained
 W:	https://github.com/petkan/pegasus
-T:	git git://github.com/petkan/pegasus.git
+T:	git https://github.com/petkan/pegasus.git
 F:	drivers/net/usb/pegasus.*
 
 USB PHY LAYER
···
 L:	netdev@vger.kernel.org
 S:	Maintained
 W:	https://github.com/petkan/rtl8150
-T:	git git://github.com/petkan/rtl8150.git
+T:	git https://github.com/petkan/rtl8150.git
 F:	drivers/net/usb/rtl8150.c
 
 USB SERIAL SUBSYSTEM
+55 -28
drivers/net/dsa/qca/qca8k-8xxx.c
···
 	struct qca8k_mgmt_eth_data *mgmt_eth_data;
 	struct qca8k_priv *priv = ds->priv;
 	struct qca_mgmt_ethhdr *mgmt_ethhdr;
+	u32 command;
 	u8 len, cmd;
+	int i;
 
 	mgmt_ethhdr = (struct qca_mgmt_ethhdr *)skb_mac_header(skb);
 	mgmt_eth_data = &priv->mgmt_eth_data;
 
-	cmd = FIELD_GET(QCA_HDR_MGMT_CMD, mgmt_ethhdr->command);
-	len = FIELD_GET(QCA_HDR_MGMT_LENGTH, mgmt_ethhdr->command);
+	command = get_unaligned_le32(&mgmt_ethhdr->command);
+	cmd = FIELD_GET(QCA_HDR_MGMT_CMD, command);
+	len = FIELD_GET(QCA_HDR_MGMT_LENGTH, command);
 
 	/* Make sure the seq match the requested packet */
-	if (mgmt_ethhdr->seq == mgmt_eth_data->seq)
+	if (get_unaligned_le32(&mgmt_ethhdr->seq) == mgmt_eth_data->seq)
 		mgmt_eth_data->ack = true;
 
 	if (cmd == MDIO_READ) {
-		mgmt_eth_data->data[0] = mgmt_ethhdr->mdio_data;
+		u32 *val = mgmt_eth_data->data;
+
+		*val = get_unaligned_le32(&mgmt_ethhdr->mdio_data);
 
 		/* Get the rest of the 12 byte of data.
 		 * The read/write function will extract the requested data.
 		 */
-		if (len > QCA_HDR_MGMT_DATA1_LEN)
-			memcpy(mgmt_eth_data->data + 1, skb->data,
-			       QCA_HDR_MGMT_DATA2_LEN);
+		if (len > QCA_HDR_MGMT_DATA1_LEN) {
+			__le32 *data2 = (__le32 *)skb->data;
+			int data_len = min_t(int, QCA_HDR_MGMT_DATA2_LEN,
+					     len - QCA_HDR_MGMT_DATA1_LEN);
+
+			val++;
+
+			for (i = sizeof(u32); i <= data_len; i += sizeof(u32)) {
+				*val = get_unaligned_le32(data2);
+				val++;
+				data2++;
+			}
+		}
 	}
 
 	complete(&mgmt_eth_data->rw_done);
···
 	struct qca_mgmt_ethhdr *mgmt_ethhdr;
 	unsigned int real_len;
 	struct sk_buff *skb;
-	u32 *data2;
+	__le32 *data2;
+	u32 command;
 	u16 hdr;
+	int i;
 
 	skb = dev_alloc_skb(QCA_HDR_MGMT_PKT_LEN);
 	if (!skb)
···
 	hdr |= FIELD_PREP(QCA_HDR_XMIT_DP_BIT, BIT(0));
 	hdr |= FIELD_PREP(QCA_HDR_XMIT_CONTROL, QCA_HDR_XMIT_TYPE_RW_REG);
 
-	mgmt_ethhdr->command = FIELD_PREP(QCA_HDR_MGMT_ADDR, reg);
-	mgmt_ethhdr->command |= FIELD_PREP(QCA_HDR_MGMT_LENGTH, real_len);
-	mgmt_ethhdr->command |= FIELD_PREP(QCA_HDR_MGMT_CMD, cmd);
-	mgmt_ethhdr->command |= FIELD_PREP(QCA_HDR_MGMT_CHECK_CODE,
+	command = FIELD_PREP(QCA_HDR_MGMT_ADDR, reg);
+	command |= FIELD_PREP(QCA_HDR_MGMT_LENGTH, real_len);
+	command |= FIELD_PREP(QCA_HDR_MGMT_CMD, cmd);
+	command |= FIELD_PREP(QCA_HDR_MGMT_CHECK_CODE,
 			      QCA_HDR_MGMT_CHECK_CODE_VAL);
 
+	put_unaligned_le32(command, &mgmt_ethhdr->command);
+
 	if (cmd == MDIO_WRITE)
-		mgmt_ethhdr->mdio_data = *val;
+		put_unaligned_le32(*val, &mgmt_ethhdr->mdio_data);
 
 	mgmt_ethhdr->hdr = htons(hdr);
 
 	data2 = skb_put_zero(skb, QCA_HDR_MGMT_DATA2_LEN + QCA_HDR_MGMT_PADDING_LEN);
-	if (cmd == MDIO_WRITE && len > QCA_HDR_MGMT_DATA1_LEN)
-		memcpy(data2, val + 1, len - QCA_HDR_MGMT_DATA1_LEN);
+	if (cmd == MDIO_WRITE && len > QCA_HDR_MGMT_DATA1_LEN) {
+		int data_len = min_t(int, QCA_HDR_MGMT_DATA2_LEN,
+				     len - QCA_HDR_MGMT_DATA1_LEN);
+
+		val++;
+
+		for (i = sizeof(u32); i <= data_len; i += sizeof(u32)) {
+			put_unaligned_le32(*val, data2);
+			data2++;
+			val++;
+		}
+	}
 
 	return skb;
 }
···
 static void qca8k_mdio_header_fill_seq_num(struct sk_buff *skb, u32 seq_num)
 {
 	struct qca_mgmt_ethhdr *mgmt_ethhdr;
+	u32 seq;
 
+	seq = FIELD_PREP(QCA_HDR_MGMT_SEQ_NUM, seq_num);
 	mgmt_ethhdr = (struct qca_mgmt_ethhdr *)skb->data;
-	mgmt_ethhdr->seq = FIELD_PREP(QCA_HDR_MGMT_SEQ_NUM, seq_num);
+	put_unaligned_le32(seq, &mgmt_ethhdr->seq);
 }
 
 static int qca8k_read_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len)
···
 	struct qca8k_priv *priv = ds->priv;
 	const struct qca8k_mib_desc *mib;
 	struct mib_ethhdr *mib_ethhdr;
-	int i, mib_len, offset = 0;
-	u64 *data;
+	__le32 *data2;
 	u8 port;
+	int i;
 
 	mib_ethhdr = (struct mib_ethhdr *)skb_mac_header(skb);
 	mib_eth_data = &priv->mib_eth_data;
···
 	if (port != mib_eth_data->req_port)
 		goto exit;
 
-	data = mib_eth_data->data;
+	data2 = (__le32 *)skb->data;
 
 	for (i = 0; i < priv->info->mib_count; i++) {
 		mib = &ar8327_mib[i];
 
 		/* First 3 mib are present in the skb head */
 		if (i < 3) {
-			data[i] = mib_ethhdr->data[i];
+			mib_eth_data->data[i] = get_unaligned_le32(mib_ethhdr->data + i);
 			continue;
 		}
 
-		mib_len = sizeof(uint32_t);
-
 		/* Some mib are 64 bit wide */
 		if (mib->size == 2)
-			mib_len = sizeof(uint64_t);
+			mib_eth_data->data[i] = get_unaligned_le64((__le64 *)data2);
+		else
+			mib_eth_data->data[i] = get_unaligned_le32(data2);
 
-		/* Copy the mib value from packet to the */
-		memcpy(data + i, skb->data + offset, mib_len);
-
-		/* Set the offset for the next mib */
-		offset += mib_len;
+		data2 += mib->size;
 	}
 
 exit:
+6 -5
drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c
···
 
 static bool bnxt_nvm_test(struct bnxt *bp, struct netlink_ext_ack *extack)
 {
+	bool rc = false;
 	u32 datalen;
 	u16 index;
 	u8 *buf;
···
 
 	if (bnxt_get_nvram_item(bp->dev, index, 0, datalen, buf)) {
 		NL_SET_ERR_MSG_MOD(extack, "nvm test vpd read error");
-		goto err;
+		goto done;
 	}
 
 	if (bnxt_flash_nvram(bp->dev, BNX_DIR_TYPE_VPD, BNX_DIR_ORDINAL_FIRST,
 			     BNX_DIR_EXT_NONE, 0, 0, buf, datalen)) {
 		NL_SET_ERR_MSG_MOD(extack, "nvm test vpd write error");
-		goto err;
+		goto done;
 	}
 
-	return true;
+	rc = true;
 
-err:
+done:
 	kfree(buf);
-	return false;
+	return rc;
 }
 
 static bool bnxt_dl_selftest_check(struct devlink *dl, unsigned int id,
+3 -1
drivers/net/ethernet/hisilicon/hns/hnae.c
···
 	hdev->cls_dev.release = hnae_release;
 	(void)dev_set_name(&hdev->cls_dev, "hnae%d", hdev->id);
 	ret = device_register(&hdev->cls_dev);
-	if (ret)
+	if (ret) {
+		put_device(&hdev->cls_dev);
 		return ret;
+	}
 
 	__module_get(THIS_MODULE);
 
-3
drivers/net/ethernet/intel/i40e/i40e_ethtool.c
···
 		err = i40e_setup_rx_descriptors(&rx_rings[i]);
 		if (err)
 			goto rx_unwind;
-		err = i40e_alloc_rx_bi(&rx_rings[i]);
-		if (err)
-			goto rx_unwind;
 
 		/* now allocate the Rx buffers to make sure the OS
 		 * has enough memory, any failure here means abort
+9 -7
drivers/net/ethernet/intel/i40e/i40e_main.c
···
 	if (ring->vsi->type == I40E_VSI_MAIN)
 		xdp_rxq_info_unreg_mem_model(&ring->xdp_rxq);
 
-	kfree(ring->rx_bi);
 	ring->xsk_pool = i40e_xsk_pool(ring);
 	if (ring->xsk_pool) {
-		ret = i40e_alloc_rx_bi_zc(ring);
-		if (ret)
-			return ret;
 		ring->rx_buf_len =
 			xsk_pool_get_rx_frame_size(ring->xsk_pool);
 		/* For AF_XDP ZC, we disallow packets to span on
···
 						 ring->queue_index);
 
 	} else {
-		ret = i40e_alloc_rx_bi(ring);
-		if (ret)
-			return ret;
 		ring->rx_buf_len = vsi->rx_buf_len;
 		if (ring->vsi->type == I40E_VSI_MAIN) {
 			ret = xdp_rxq_info_reg_mem_model(&ring->xdp_rxq,
···
 		i40e_reset_and_rebuild(pf, true, true);
 	}
 
+	if (!i40e_enabled_xdp_vsi(vsi) && prog) {
+		if (i40e_realloc_rx_bi_zc(vsi, true))
+			return -ENOMEM;
+	} else if (i40e_enabled_xdp_vsi(vsi) && !prog) {
+		if (i40e_realloc_rx_bi_zc(vsi, false))
+			return -ENOMEM;
+	}
+
 	for (i = 0; i < vsi->num_queue_pairs; i++)
 		WRITE_ONCE(vsi->rx_rings[i]->xdp_prog, vsi->xdp_prog);
 
···
 
 	i40e_queue_pair_disable_irq(vsi, queue_pair);
 	err = i40e_queue_pair_toggle_rings(vsi, queue_pair, false /* off */);
+	i40e_clean_rx_ring(vsi->rx_rings[queue_pair]);
 	i40e_queue_pair_toggle_napi(vsi, queue_pair, false /* off */);
 	i40e_queue_pair_clean_rings(vsi, queue_pair);
 	i40e_queue_pair_reset_stats(vsi, queue_pair);
+5 -8
drivers/net/ethernet/intel/i40e/i40e_txrx.c
···
 	return -ENOMEM;
 }
 
-int i40e_alloc_rx_bi(struct i40e_ring *rx_ring)
-{
-	unsigned long sz = sizeof(*rx_ring->rx_bi) * rx_ring->count;
-
-	rx_ring->rx_bi = kzalloc(sz, GFP_KERNEL);
-	return rx_ring->rx_bi ? 0 : -ENOMEM;
-}
-
 static void i40e_clear_rx_bi(struct i40e_ring *rx_ring)
 {
 	memset(rx_ring->rx_bi, 0, sizeof(*rx_ring->rx_bi) * rx_ring->count);
···
 	}
 
 	rx_ring->xdp_prog = rx_ring->vsi->xdp_prog;
+
+	rx_ring->rx_bi =
+		kcalloc(rx_ring->count, sizeof(*rx_ring->rx_bi), GFP_KERNEL);
+	if (!rx_ring->rx_bi)
+		return -ENOMEM;
 
 	return 0;
 }
-1
drivers/net/ethernet/intel/i40e/i40e_txrx.h
···
 bool __i40e_chk_linearize(struct sk_buff *skb);
 int i40e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
 		  u32 flags);
-int i40e_alloc_rx_bi(struct i40e_ring *rx_ring);
 
 /**
  * i40e_get_head - Retrieve head from head writeback
+59 -8
drivers/net/ethernet/intel/i40e/i40e_xsk.c
···
 #include "i40e_txrx_common.h"
 #include "i40e_xsk.h"
 
-int i40e_alloc_rx_bi_zc(struct i40e_ring *rx_ring)
-{
-	unsigned long sz = sizeof(*rx_ring->rx_bi_zc) * rx_ring->count;
-
-	rx_ring->rx_bi_zc = kzalloc(sz, GFP_KERNEL);
-	return rx_ring->rx_bi_zc ? 0 : -ENOMEM;
-}
-
 void i40e_clear_rx_bi_zc(struct i40e_ring *rx_ring)
 {
 	memset(rx_ring->rx_bi_zc, 0,
···
 static struct xdp_buff **i40e_rx_bi(struct i40e_ring *rx_ring, u32 idx)
 {
 	return &rx_ring->rx_bi_zc[idx];
+}
+
+/**
+ * i40e_realloc_rx_xdp_bi - reallocate SW ring for either XSK or normal buffer
+ * @rx_ring: Current rx ring
+ * @pool_present: is pool for XSK present
+ *
+ * Try allocating memory and return ENOMEM, if failed to allocate.
+ * If allocation was successful, substitute buffer with allocated one.
+ * Returns 0 on success, negative on failure
+ */
+static int i40e_realloc_rx_xdp_bi(struct i40e_ring *rx_ring, bool pool_present)
+{
+	size_t elem_size = pool_present ? sizeof(*rx_ring->rx_bi_zc) :
+					  sizeof(*rx_ring->rx_bi);
+	void *sw_ring = kcalloc(rx_ring->count, elem_size, GFP_KERNEL);
+
+	if (!sw_ring)
+		return -ENOMEM;
+
+	if (pool_present) {
+		kfree(rx_ring->rx_bi);
+		rx_ring->rx_bi = NULL;
+		rx_ring->rx_bi_zc = sw_ring;
+	} else {
+		kfree(rx_ring->rx_bi_zc);
+		rx_ring->rx_bi_zc = NULL;
+		rx_ring->rx_bi = sw_ring;
+	}
+	return 0;
+}
+
+/**
+ * i40e_realloc_rx_bi_zc - reallocate rx SW rings
+ * @vsi: Current VSI
+ * @zc: is zero copy set
+ *
+ * Reallocate buffer for rx_rings that might be used by XSK.
+ * XDP requires more memory, than rx_buf provides.
+ * Returns 0 on success, negative on failure
+ */
+int i40e_realloc_rx_bi_zc(struct i40e_vsi *vsi, bool zc)
+{
+	struct i40e_ring *rx_ring;
+	unsigned long q;
+
+	for_each_set_bit(q, vsi->af_xdp_zc_qps, vsi->alloc_queue_pairs) {
+		rx_ring = vsi->rx_rings[q];
+		if (i40e_realloc_rx_xdp_bi(rx_ring, zc))
+			return -ENOMEM;
+	}
+	return 0;
 }
 
 /**
···
 
 	if (if_running) {
 		err = i40e_queue_pair_disable(vsi, qid);
+		if (err)
+			return err;
+
+		err = i40e_realloc_rx_xdp_bi(vsi->rx_rings[qid], true);
 		if (err)
 			return err;
 
···
 	xsk_pool_dma_unmap(pool, I40E_RX_DMA_ATTR);
 
 	if (if_running) {
+		err = i40e_realloc_rx_xdp_bi(vsi->rx_rings[qid], false);
+		if (err)
+			return err;
 		err = i40e_queue_pair_enable(vsi, qid);
 		if (err)
 			return err;
+1 -1
drivers/net/ethernet/intel/i40e/i40e_xsk.h
···
 
 bool i40e_clean_xdp_tx_irq(struct i40e_vsi *vsi, struct i40e_ring *tx_ring);
 int i40e_xsk_wakeup(struct net_device *dev, u32 queue_id, u32 flags);
-int i40e_alloc_rx_bi_zc(struct i40e_ring *rx_ring);
+int i40e_realloc_rx_bi_zc(struct i40e_vsi *vsi, bool zc);
 void i40e_clear_rx_bi_zc(struct i40e_ring *rx_ring);
 
 #endif /* _I40E_XSK_H_ */
+2
drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c
···
 	cn10k_mcs_free_rsrc(pfvf, MCS_TX, MCS_RSRC_TYPE_FLOWID,
 			    txsc->hw_flow_id, false);
 fail:
+	kfree(txsc);
 	return ERR_PTR(ret);
 }
 
···
 	cn10k_mcs_free_rsrc(pfvf, MCS_RX, MCS_RSRC_TYPE_FLOWID,
 			    rxsc->hw_flow_id, false);
 fail:
+	kfree(rxsc);
 	return ERR_PTR(ret);
 }
 
+12 -5
drivers/net/ethernet/mediatek/mtk_eth_soc.c
···
 		eth->irq[i] = platform_get_irq(pdev, i);
 		if (eth->irq[i] < 0) {
 			dev_err(&pdev->dev, "no IRQ%d resource found\n", i);
-			return -ENXIO;
+			err = -ENXIO;
+			goto err_wed_exit;
 		}
 	}
 	for (i = 0; i < ARRAY_SIZE(eth->clks); i++) {
 		eth->clks[i] = devm_clk_get(eth->dev,
 					    mtk_clks_source_name[i]);
 		if (IS_ERR(eth->clks[i])) {
-			if (PTR_ERR(eth->clks[i]) == -EPROBE_DEFER)
-				return -EPROBE_DEFER;
+			if (PTR_ERR(eth->clks[i]) == -EPROBE_DEFER) {
+				err = -EPROBE_DEFER;
+				goto err_wed_exit;
+			}
 			if (eth->soc->required_clks & BIT(i)) {
 				dev_err(&pdev->dev, "clock %s not found\n",
 					mtk_clks_source_name[i]);
-				return -EINVAL;
+				err = -EINVAL;
+				goto err_wed_exit;
 			}
 			eth->clks[i] = NULL;
 		}
···
 
 	err = mtk_hw_init(eth);
 	if (err)
-		return err;
+		goto err_wed_exit;
 
 	eth->hwlro = MTK_HAS_CAPS(eth->soc->caps, MTK_HWLRO);
 
···
 	mtk_free_dev(eth);
 err_deinit_hw:
 	mtk_hw_deinit(eth);
+err_wed_exit:
+	mtk_wed_exit();
 
 	return err;
 }
···
 		phylink_disconnect_phy(mac->phylink);
 	}
 
+	mtk_wed_exit();
 	mtk_hw_deinit(eth);
 
 	netif_napi_del(&eth->tx_napi);
-6
drivers/net/ethernet/mediatek/mtk_ppe.c
···
 	return 0;
 }
 
-static inline bool mtk_foe_entry_usable(struct mtk_foe_entry *entry)
-{
-	return !(entry->ib1 & MTK_FOE_IB1_STATIC) &&
-	       FIELD_GET(MTK_FOE_IB1_STATE, entry->ib1) != MTK_FOE_STATE_BIND;
-}
-
 static bool
 mtk_flow_entry_match(struct mtk_eth *eth, struct mtk_flow_entry *entry,
 		     struct mtk_foe_entry *data)
+12 -3
drivers/net/ethernet/mediatek/mtk_wed.c
···
 
 	pdev = of_find_device_by_node(np);
 	if (!pdev)
-		return;
+		goto err_of_node_put;
 
 	get_device(&pdev->dev);
 	irq = platform_get_irq(pdev, 0);
 	if (irq < 0)
-		return;
+		goto err_put_device;
 
 	regs = syscon_regmap_lookup_by_phandle(np, NULL);
 	if (IS_ERR(regs))
-		return;
+		goto err_put_device;
 
 	rcu_assign_pointer(mtk_soc_wed_ops, &wed_ops);
 
···
 
 	hw_list[index] = hw;
 
+	mutex_unlock(&hw_lock);
+
+	return;
+
 unlock:
 	mutex_unlock(&hw_lock);
+err_put_device:
+	put_device(&pdev->dev);
+err_of_node_put:
+	of_node_put(np);
 }
 
 void mtk_wed_exit(void)
···
 		hw_list[i] = NULL;
 		debugfs_remove(hw->debugfs_dir);
 		put_device(hw->dev);
+		of_node_put(hw->node);
 		kfree(hw);
 	}
 }
+8 -4
drivers/net/ethernet/pensando/ionic/ionic_lif.c
···
 	 * than the full array, but leave the qcq shells in place
 	 */
 	for (i = lif->nxqs; i < lif->ionic->ntxqs_per_lif; i++) {
-		lif->txqcqs[i]->flags &= ~IONIC_QCQ_F_INTR;
-		ionic_qcq_free(lif, lif->txqcqs[i]);
+		if (lif->txqcqs && lif->txqcqs[i]) {
+			lif->txqcqs[i]->flags &= ~IONIC_QCQ_F_INTR;
+			ionic_qcq_free(lif, lif->txqcqs[i]);
+		}
 
-		lif->rxqcqs[i]->flags &= ~IONIC_QCQ_F_INTR;
-		ionic_qcq_free(lif, lif->rxqcqs[i]);
+		if (lif->rxqcqs && lif->rxqcqs[i]) {
+			lif->rxqcqs[i]->flags &= ~IONIC_QCQ_F_INTR;
+			ionic_qcq_free(lif, lif->rxqcqs[i]);
+		}
 	}
 
 	if (err)
+24 -34
drivers/net/ethernet/sfc/ef10.c
···
 	bool was_enabled = efx->port_enabled;
 	int rc;
 
+#ifdef CONFIG_SFC_SRIOV
+	/* If this function is a VF and we have access to the parent PF,
+	 * then use the PF control path to attempt to change the VF MAC address.
+	 */
+	if (efx->pci_dev->is_virtfn && efx->pci_dev->physfn) {
+		struct efx_nic *efx_pf = pci_get_drvdata(efx->pci_dev->physfn);
+		struct efx_ef10_nic_data *nic_data = efx->nic_data;
+		u8 mac[ETH_ALEN];
+
+		/* net_dev->dev_addr can be zeroed by efx_net_stop in
+		 * efx_ef10_sriov_set_vf_mac, so pass in a copy.
+		 */
+		ether_addr_copy(mac, efx->net_dev->dev_addr);
+
+		rc = efx_ef10_sriov_set_vf_mac(efx_pf, nic_data->vf_index, mac);
+		if (!rc)
+			return 0;
+
+		netif_dbg(efx, drv, efx->net_dev,
+			  "Updating VF mac via PF failed (%d), setting directly\n",
+			  rc);
+	}
+#endif
+
 	efx_device_detach_sync(efx);
 	efx_net_stop(efx->net_dev);
 
···
 	efx_net_open(efx->net_dev);
 	efx_device_attach_if_not_resetting(efx);
 
-#ifdef CONFIG_SFC_SRIOV
-	if (efx->pci_dev->is_virtfn && efx->pci_dev->physfn) {
-		struct efx_ef10_nic_data *nic_data = efx->nic_data;
-		struct pci_dev *pci_dev_pf = efx->pci_dev->physfn;
-
-		if (rc == -EPERM) {
-			struct efx_nic *efx_pf;
-
-			/* Switch to PF and change MAC address on vport */
-			efx_pf = pci_get_drvdata(pci_dev_pf);
-
-			rc = efx_ef10_sriov_set_vf_mac(efx_pf,
-						       nic_data->vf_index,
-						       efx->net_dev->dev_addr);
-		} else if (!rc) {
-			struct efx_nic *efx_pf = pci_get_drvdata(pci_dev_pf);
-			struct efx_ef10_nic_data *nic_data = efx_pf->nic_data;
-			unsigned int i;
-
-			/* MAC address successfully changed by VF (with MAC
-			 * spoofing) so update the parent PF if possible.
-			 */
-			for (i = 0; i < efx_pf->vf_count; ++i) {
-				struct ef10_vf *vf = nic_data->vf + i;
-
-				if (vf->efx == efx) {
-					ether_addr_copy(vf->mac,
-							efx->net_dev->dev_addr);
-					return 0;
-				}
-			}
-		}
-	} else
-#endif
 	if (rc == -EPERM) {
 		netif_err(efx, drv, efx->net_dev,
 			  "Cannot change MAC address; use sfboot to enable"
+2 -2
drivers/net/ethernet/sfc/filter.h
···
 	u32 priority:2;
 	u32 flags:6;
 	u32 dmaq_id:12;
-	u32 vport_id;
 	u32 rss_context;
-	__be16 outer_vid __aligned(4); /* allow jhash2() of match values */
+	u32 vport_id;
+	__be16 outer_vid;
 	__be16 inner_vid;
 	u8 loc_mac[ETH_ALEN];
 	u8 rem_mac[ETH_ALEN];
+5 -5
drivers/net/ethernet/sfc/rx_common.c
···
 		      (EFX_FILTER_FLAG_RX | EFX_FILTER_FLAG_TX)))
 		return false;
 
-	return memcmp(&left->outer_vid, &right->outer_vid,
+	return memcmp(&left->vport_id, &right->vport_id,
 		      sizeof(struct efx_filter_spec) -
-		      offsetof(struct efx_filter_spec, outer_vid)) == 0;
+		      offsetof(struct efx_filter_spec, vport_id)) == 0;
 }
 
 u32 efx_filter_spec_hash(const struct efx_filter_spec *spec)
 {
-	BUILD_BUG_ON(offsetof(struct efx_filter_spec, outer_vid) & 3);
-	return jhash2((const u32 *)&spec->outer_vid,
+	BUILD_BUG_ON(offsetof(struct efx_filter_spec, vport_id) & 3);
+	return jhash2((const u32 *)&spec->vport_id,
 		      (sizeof(struct efx_filter_spec) -
-		       offsetof(struct efx_filter_spec, outer_vid)) / 4,
+		       offsetof(struct efx_filter_spec, vport_id)) / 4,
 		      0);
 }
 
+1
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
···
 	if (priv->plat->tx_queues_to_use > 1)
 		priv->phylink_config.mac_capabilities &=
 			~(MAC_10HD | MAC_100HD | MAC_1000HD);
+	priv->phylink_config.mac_managed_pm = true;
 
 	phylink = phylink_create(&priv->phylink_config, fwnode,
 				 mode, &stmmac_phylink_mac_ops);
+1 -1
drivers/net/ethernet/sun/sunhme.c
···
 	void __iomem *erxregs = hp->erxregs;
 	void __iomem *bregs = hp->bigmacregs;
 	void __iomem *tregs = hp->tcvregs;
-	const char *bursts;
+	const char *bursts = "64";
 	u32 regtmp, rxcfg;
 
 	/* If auto-negotiation timer is running, kill it. */
+4 -2
drivers/net/hyperv/rndis_filter.c
···
 #include <linux/vmalloc.h>
 #include <linux/rtnetlink.h>
 #include <linux/ucs2_string.h>
+#include <linux/string.h>
 
 #include "hyperv_net.h"
 #include "netvsc_trace.h"
···
 	if (resp->msg_len <=
 	    sizeof(struct rndis_message) + RNDIS_EXT_LEN) {
 		memcpy(&request->response_msg, resp, RNDIS_HEADER_SIZE + sizeof(*req_id));
-		memcpy((void *)&request->response_msg + RNDIS_HEADER_SIZE + sizeof(*req_id),
+		unsafe_memcpy((void *)&request->response_msg + RNDIS_HEADER_SIZE + sizeof(*req_id),
 		       data + RNDIS_HEADER_SIZE + sizeof(*req_id),
-		       resp->msg_len - RNDIS_HEADER_SIZE - sizeof(*req_id));
+		       resp->msg_len - RNDIS_HEADER_SIZE - sizeof(*req_id),
+		       "request->response_msg is followed by a padding of RNDIS_EXT_LEN inside rndis_request");
 		if (request->request_msg.ndis_msg_type ==
 		    RNDIS_MSG_QUERY && request->request_msg.msg.
 		    query_req.oid == RNDIS_OID_GEN_MEDIA_CONNECT_STATUS)
+1 -1
drivers/net/macvlan.c
···
 	}
 	spin_unlock(&port->bc_queue.lock);
 
-	schedule_work(&port->bc_work);
+	queue_work(system_unbound_wq, &port->bc_work);
 
 	if (err)
 		goto free_nskb;
+1 -2
drivers/net/phy/dp83822.c
···
 			       DP83822_EEE_ERROR_CHANGE_INT_EN);
 
 		if (!dp83822->fx_enabled)
-			misr_status |= DP83822_MDI_XOVER_INT_EN |
-				       DP83822_ANEG_ERR_INT_EN |
+			misr_status |= DP83822_ANEG_ERR_INT_EN |
 				       DP83822_WOL_PKT_INT_EN;
 
 		err = phy_write(phydev, MII_DP83822_MISR2, misr_status);
+8
drivers/net/phy/dp83867.c
···
 		else
 			val &= ~DP83867_SGMII_TYPE;
 		phy_write_mmd(phydev, DP83867_DEVADDR, DP83867_SGMIICTL, val);
+
+		/* This is a SW workaround for link instability if RX_CTRL is
+		 * not strapped to mode 3 or 4 in HW. This is required for SGMII
+		 * in addition to clearing bit 7, handled above.
+		 */
+		if (dp83867->rxctrl_strap_quirk)
+			phy_set_bits_mmd(phydev, DP83867_DEVADDR, DP83867_CFG4,
+					 BIT(8));
 	}
 
 	val = phy_read(phydev, DP83867_CFG3);
+3
drivers/net/phy/phylink.c
···
 	if (phy_interrupt_is_valid(phy))
 		phy_request_interrupt(phy);
 
+	if (pl->config->mac_managed_pm)
+		phy->mac_managed_pm = true;
+
 	return 0;
 }
 
+1 -1
drivers/net/wwan/wwan_hwsim.c
···
 		return ERR_PTR(err);
 
 err_free_dev:
-	kfree(dev);
+	put_device(&dev->dev);
 
 	return ERR_PTR(err);
 }
+4 -4
include/linux/dsa/tag_qca.h
···
 
 /* Special struct emulating a Ethernet header */
 struct qca_mgmt_ethhdr {
-	u32 command;		/* command bit 31:0 */
-	u32 seq;		/* seq 63:32 */
-	u32 mdio_data;		/* first 4byte mdio */
+	__le32 command;		/* command bit 31:0 */
+	__le32 seq;		/* seq 63:32 */
+	__le32 mdio_data;	/* first 4byte mdio */
 	__be16 hdr;		/* qca hdr */
 } __packed;
 
···
 };
 
 struct mib_ethhdr {
-	u32 data[3];		/* first 3 mib counter */
+	__le32 data[3];		/* first 3 mib counter */
 	__be16 hdr;		/* qca hdr */
 } __packed;
 
+6 -4
include/linux/netdevice.h
···
 static inline unsigned int netif_attrmask_next(int n, const unsigned long *srcp,
 					       unsigned int nr_bits)
 {
-	/* n is a prior cpu */
-	cpu_max_bits_warn(n + 1, nr_bits);
+	/* -1 is a legal arg here. */
+	if (n != -1)
+		cpu_max_bits_warn(n, nr_bits);
 
 	if (srcp)
 		return find_next_bit(srcp, nr_bits, n + 1);
···
 						   const unsigned long *src2p,
 						   unsigned int nr_bits)
 {
-	/* n is a prior cpu */
-	cpu_max_bits_warn(n + 1, nr_bits);
+	/* -1 is a legal arg here. */
+	if (n != -1)
+		cpu_max_bits_warn(n, nr_bits);
 
 	if (src1p && src2p)
 		return find_next_and_bit(src1p, src2p, nr_bits, n + 1);
+2
include/linux/phylink.h
···
  *	(See commit 7cceb599d15d ("net: phylink: avoid mac_config calls")
  * @poll_fixed_state: if true, starts link_poll,
  *	if MAC link is at %MLO_AN_FIXED mode.
+ * @mac_managed_pm: if true, indicate the MAC driver is responsible for PHY PM.
  * @ovr_an_inband: if true, override PCS to MLO_AN_INBAND
  * @get_fixed_state: callback to execute to determine the fixed link state,
  *	if MAC link is at %MLO_AN_FIXED mode.
···
 	enum phylink_op_type type;
 	bool legacy_pre_march2020;
 	bool poll_fixed_state;
+	bool mac_managed_pm;
 	bool ovr_an_inband;
 	void (*get_fixed_state)(struct phylink_config *config,
 				struct phylink_link_state *state);
+5 -3
include/net/genetlink.h
···
  *	do additional, common, filtering and return an error
  * @post_doit: called after an operation's doit callback, it may
  *	undo operations done by pre_doit, for example release locks
+ * @module: pointer to the owning module (set to THIS_MODULE)
  * @mcgrps: multicast groups used by this family
  * @n_mcgrps: number of multicast groups
  * @resv_start_op: first operation for which reserved fields of the header
···
 };
 
 /**
- * struct genl_info - info that is available during dumpit op call
+ * struct genl_dumpit_info - info that is available during dumpit op call
  * @family: generic netlink family - for internal genl code usage
- * @ops: generic netlink ops - for internal genl code usage
+ * @op: generic netlink ops - for internal genl code usage
  * @attrs: netlink attributes
  */
 struct genl_dumpit_info {
···
 
 /**
  * genlmsg_unicast - unicast a netlink message
+ * @net: network namespace to look up @portid in
  * @skb: netlink message as socket buffer
  * @portid: netlink portid of the destination socket
  */
···
 }
 
 /**
- * gennlmsg_data - head of message payload
+ * genlmsg_data - head of message payload
  * @gnlh: genetlink message header
  */
 static inline void *genlmsg_data(const struct genlmsghdr *gnlh)
+5 -6
include/net/sock_reuseport.h
···
 extern int reuseport_attach_prog(struct sock *sk, struct bpf_prog *prog);
 extern int reuseport_detach_prog(struct sock *sk);
 
-static inline bool reuseport_has_conns(struct sock *sk, bool set)
+static inline bool reuseport_has_conns(struct sock *sk)
 {
 	struct sock_reuseport *reuse;
 	bool ret = false;
 
 	rcu_read_lock();
 	reuse = rcu_dereference(sk->sk_reuseport_cb);
-	if (reuse) {
-		if (set)
-			reuse->has_conns = 1;
-		ret = reuse->has_conns;
-	}
+	if (reuse && reuse->has_conns)
+		ret = true;
 	rcu_read_unlock();
 
 	return ret;
 }
+
+void reuseport_has_conns_set(struct sock *sk);
 
 #endif /* _SOCK_REUSEPORT_H */
net/atm/mpoa_proc.c (+2 -1)
···
         if (!page)
                 return -ENOMEM;
 
-        for (p = page, len = 0; len < nbytes; p++, len++) {
+        for (p = page, len = 0; len < nbytes; p++) {
                 if (get_user(*p, buff++)) {
                         free_page((unsigned long)page);
                         return -EFAULT;
                 }
+                len += 1;
                 if (*p == '\0' || *p == '\n')
                         break;
         }
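The fix moves the `len` increment out of the `for` header so a byte is counted only after it has actually been copied, and the terminating `'\0'`/`'\n'` byte is counted before the loop breaks. A userspace sketch of the corrected loop (hypothetical `copy_until_newline`, with a plain memory read standing in for `get_user()`):

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the corrected loop: count a byte only once it has been
 * copied successfully, and count the terminator before breaking. */
static size_t copy_until_newline(char *dst, const char *src, size_t nbytes)
{
    char *p;
    size_t len;

    for (p = dst, len = 0; len < nbytes; p++) {
        *p = *src++;    /* stands in for get_user() */
        len += 1;       /* byte consumed: count it before any break */
        if (*p == '\0' || *p == '\n')
            break;
    }
    return len;
}
```

With the increment in the `for` header, the terminator byte was never counted; here `"ab\n"` correctly reports three consumed bytes.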
net/core/dev.c (+4)
···
         case TC_ACT_SHOT:
                 mini_qdisc_qstats_cpu_drop(miniq);
                 kfree_skb_reason(skb, SKB_DROP_REASON_TC_INGRESS);
+                *ret = NET_RX_DROP;
                 return NULL;
         case TC_ACT_STOLEN:
         case TC_ACT_QUEUED:
         case TC_ACT_TRAP:
                 consume_skb(skb);
+                *ret = NET_RX_SUCCESS;
                 return NULL;
         case TC_ACT_REDIRECT:
                 /* skb_mac_header check was done by cls/act_bpf, so
···
                         *another = true;
                         break;
                 }
+                *ret = NET_RX_SUCCESS;
                 return NULL;
         case TC_ACT_CONSUMED:
+                *ret = NET_RX_SUCCESS;
                 return NULL;
         default:
                 break;
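The fix makes every verdict path that consumes the skb also report an explicit NET_RX code through `*ret`, instead of leaving the caller's return value untouched. A minimal userspace model of that mapping (`tc_consume` and both enums are hypothetical stand-ins, not kernel API):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the kernel's TC verdicts and RX codes. */
enum tc_act { TC_ACT_OK, TC_ACT_SHOT, TC_ACT_STOLEN, TC_ACT_QUEUED,
              TC_ACT_TRAP, TC_ACT_CONSUMED };
enum net_rx { NET_RX_SUCCESS, NET_RX_DROP };

/* Returns true when the skb was consumed; *ret is then always set,
 * mirroring the fix: a dropped packet reports NET_RX_DROP, every
 * other consuming verdict reports NET_RX_SUCCESS. */
static bool tc_consume(enum tc_act verdict, enum net_rx *ret)
{
    switch (verdict) {
    case TC_ACT_SHOT:
        *ret = NET_RX_DROP;
        return true;
    case TC_ACT_STOLEN:
    case TC_ACT_QUEUED:
    case TC_ACT_TRAP:
    case TC_ACT_CONSUMED:
        *ret = NET_RX_SUCCESS;
        return true;
    default:
        return false;   /* packet continues up the stack */
    }
}
```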
net/core/skmsg.c (+4 -4)
···
 }
 EXPORT_SYMBOL_GPL(sk_msg_is_readable);
 
-static struct sk_msg *alloc_sk_msg(void)
+static struct sk_msg *alloc_sk_msg(gfp_t gfp)
 {
         struct sk_msg *msg;
 
-        msg = kzalloc(sizeof(*msg), __GFP_NOWARN | GFP_KERNEL);
+        msg = kzalloc(sizeof(*msg), gfp | __GFP_NOWARN);
         if (unlikely(!msg))
                 return NULL;
         sg_init_marker(msg->sg.data, NR_MSG_FRAG_IDS);
···
         if (!sk_rmem_schedule(sk, skb, skb->truesize))
                 return NULL;
 
-        return alloc_sk_msg();
+        return alloc_sk_msg(GFP_KERNEL);
 }
 
 static int sk_psock_skb_ingress_enqueue(struct sk_buff *skb,
···
 static int sk_psock_skb_ingress_self(struct sk_psock *psock, struct sk_buff *skb,
                                      u32 off, u32 len)
 {
-        struct sk_msg *msg = alloc_sk_msg();
+        struct sk_msg *msg = alloc_sk_msg(GFP_ATOMIC);
         struct sock *sk = psock->sk;
         int err;
net/core/sock_reuseport.c (+16)
···
 static int reuseport_resurrect(struct sock *sk, struct sock_reuseport *old_reuse,
                                struct sock_reuseport *reuse, bool bind_inany);
 
+void reuseport_has_conns_set(struct sock *sk)
+{
+        struct sock_reuseport *reuse;
+
+        if (!rcu_access_pointer(sk->sk_reuseport_cb))
+                return;
+
+        spin_lock_bh(&reuseport_lock);
+        reuse = rcu_dereference_protected(sk->sk_reuseport_cb,
+                                          lockdep_is_held(&reuseport_lock));
+        if (likely(reuse))
+                reuse->has_conns = 1;
+        spin_unlock_bh(&reuseport_lock);
+}
+EXPORT_SYMBOL(reuseport_has_conns_set);
+
 static int reuseport_sock_index(struct sock *sk,
                                 const struct sock_reuseport *reuse,
                                 bool closed)
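Together with the header change above, this splits the API: lockless readers (`reuseport_has_conns()`) only test the flag under `rcu_read_lock()`, while the writer now takes `reuseport_lock` before setting it, so the update cannot race with a group resize that swaps `sk_reuseport_cb`. A userspace sketch of the same reader/writer split, with a pthread mutex standing in for `reuseport_lock` and RCU omitted (all names hypothetical):

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Hypothetical stand-in for struct sock_reuseport. */
struct group {
    pthread_mutex_t lock;   /* plays the role of reuseport_lock */
    int has_conns;
};

/* Writer: mutate only under the group lock, as reuseport_has_conns_set()
 * now does in the kernel. */
static void group_has_conns_set(struct group *g)
{
    pthread_mutex_lock(&g->lock);
    g->has_conns = 1;
    pthread_mutex_unlock(&g->lock);
}

/* Reader: lockless fast path (the kernel wraps this in rcu_read_lock()). */
static bool group_has_conns(const struct group *g)
{
    return g->has_conns != 0;
}
```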
net/dsa/slave.c (+1 -1)
···
         case NETDEV_CHANGELOWERSTATE: {
                 struct netdev_notifier_changelowerstate_info *info = ptr;
                 struct dsa_port *dp;
-                int err;
+                int err = 0;
 
                 if (dsa_slave_dev_check(dev)) {
                         dp = dsa_slave_to_port(dev);
net/hsr/hsr_forward.c (+6 -6)
···
                                                struct hsr_port *port)
 {
         if (!frame->skb_std) {
-                if (frame->skb_hsr) {
+                if (frame->skb_hsr)
                         frame->skb_std =
                                 create_stripped_skb_hsr(frame->skb_hsr, frame);
-                } else {
-                        /* Unexpected */
-                        WARN_ONCE(1, "%s:%d: Unexpected frame received (port_src %s)\n",
-                                  __FILE__, __LINE__, port->dev->name);
+                else
+                        netdev_warn_once(port->dev,
+                                         "Unexpected frame received in hsr_get_untagged_frame()\n");
+
+                if (!frame->skb_std)
                         return NULL;
-                }
         }
 
         return skb_clone(frame->skb_std, GFP_ATOMIC);
net/ipv4/datagram.c (+1 -1)
···
         }
         inet->inet_daddr = fl4->daddr;
         inet->inet_dport = usin->sin_port;
-        reuseport_has_conns(sk, true);
+        reuseport_has_conns_set(sk);
         sk->sk_state = TCP_ESTABLISHED;
         sk_set_txhash(sk);
         inet->inet_id = get_random_u16();
net/ipv4/netfilter/ipt_rpfilter.c (+1)
···
         flow.flowi4_tos = iph->tos & IPTOS_RT_MASK;
         flow.flowi4_scope = RT_SCOPE_UNIVERSE;
         flow.flowi4_l3mdev = l3mdev_master_ifindex_rcu(xt_in(par));
+        flow.flowi4_uid = sock_net_uid(xt_net(par), NULL);
 
         return rpfilter_lookup_reverse(xt_net(par), &flow, xt_in(par), info->flags) ^ invert;
 }
net/ipv4/netfilter/nft_fib_ipv4.c (+1)
···
         struct flowi4 fl4 = {
                 .flowi4_scope = RT_SCOPE_UNIVERSE,
                 .flowi4_iif = LOOPBACK_IFINDEX,
+                .flowi4_uid = sock_net_uid(nft_net(pkt), NULL),
         };
         const struct net_device *oif;
         const struct net_device *found;
net/ipv4/udp.c (+1 -1)
···
                         result = lookup_reuseport(net, sk, skb,
                                                   saddr, sport, daddr, hnum);
                         /* Fall back to scoring if group has connections */
-                        if (result && !reuseport_has_conns(sk, false))
+                        if (result && !reuseport_has_conns(sk))
                                 return result;
 
                         result = result ? : sk;
net/ipv6/addrconf.c (+2)
···
         __addrconf_sysctl_unregister(net, all, NETCONFA_IFINDEX_ALL);
 err_reg_all:
         kfree(dflt);
+        net->ipv6.devconf_dflt = NULL;
 #endif
 err_alloc_dflt:
         kfree(all);
+        net->ipv6.devconf_all = NULL;
 err_alloc_all:
         kfree(net->ipv6.inet6_addr_lst);
 err_alloc_addr:
net/ipv6/datagram.c (+1 -1)
···
                 goto out;
         }
 
-        reuseport_has_conns(sk, true);
+        reuseport_has_conns_set(sk);
         sk->sk_state = TCP_ESTABLISHED;
         sk_set_txhash(sk);
 out:
net/ipv6/netfilter/ip6t_rpfilter.c (+1)
···
                 .flowi6_l3mdev = l3mdev_master_ifindex_rcu(dev),
                 .flowlabel = (* (__be32 *) iph) & IPV6_FLOWINFO_MASK,
                 .flowi6_proto = iph->nexthdr,
+                .flowi6_uid = sock_net_uid(net, NULL),
                 .daddr = iph->saddr,
         };
         int lookup_flags;
net/ipv6/netfilter/nft_fib_ipv6.c (+2)
···
         struct flowi6 fl6 = {
                 .flowi6_iif = LOOPBACK_IFINDEX,
                 .flowi6_proto = pkt->tprot,
+                .flowi6_uid = sock_net_uid(nft_net(pkt), NULL),
         };
         u32 ret = 0;
 
···
         struct flowi6 fl6 = {
                 .flowi6_iif = LOOPBACK_IFINDEX,
                 .flowi6_proto = pkt->tprot,
+                .flowi6_uid = sock_net_uid(nft_net(pkt), NULL),
         };
         struct rt6_info *rt;
         int lookup_flags;
net/ipv6/udp.c (+1 -1)
···
                         result = lookup_reuseport(net, sk, skb,
                                                   saddr, sport, daddr, hnum);
                         /* Fall back to scoring if group has connections */
-                        if (result && !reuseport_has_conns(sk, false))
+                        if (result && !reuseport_has_conns(sk))
                                 return result;
 
                         result = result ? : sk;
net/netfilter/nf_tables_api.c (+3 -2)
···
             (NFT_SET_CONCAT | NFT_SET_INTERVAL)) {
                 if (flags & NFT_SET_ELEM_INTERVAL_END)
                         return false;
-                if (!nla[NFTA_SET_ELEM_KEY_END] &&
-                    !(flags & NFT_SET_ELEM_CATCHALL))
+
+                if (nla[NFTA_SET_ELEM_KEY_END] &&
+                    flags & NFT_SET_ELEM_CATCHALL)
                         return false;
         } else {
                 if (nla[NFTA_SET_ELEM_KEY_END])
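The fix inverts the condition: instead of rejecting any interval element that lacks `NFTA_SET_ELEM_KEY_END` and is not a catch-all, it now rejects only the genuinely invalid combination of a KEY_END attribute on a catch-all element. A flattened, testable sketch of the corrected predicate (the flag values and `elem_flags_valid` are hypothetical, not the kernel's):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical flag bits standing in for NFT_SET_ELEM_* flags. */
#define ELEM_INTERVAL_END  0x1u
#define ELEM_CATCHALL      0x2u

/* For a concat+interval set: INTERVAL_END elements are invalid, and a
 * catch-all element must not also carry a KEY_END attribute.  An element
 * that simply omits KEY_END is now accepted. */
static bool elem_flags_valid(bool has_key_end, unsigned int flags)
{
    if (flags & ELEM_INTERVAL_END)
        return false;
    if (has_key_end && (flags & ELEM_CATCHALL))
        return false;
    return true;
}
```

Note that `elem_flags_valid(false, 0)` is accepted here; under the old De Morgan-confused check that case was rejected.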
net/sched/sch_api.c (+3 -2)
···
 
 skip:
         if (!ingress) {
-                notify_and_destroy(net, skb, n, classid,
-                                   rtnl_dereference(dev->qdisc), new);
+                old = rtnl_dereference(dev->qdisc);
                 if (new && !new->ops->attach)
                         qdisc_refcount_inc(new);
                 rcu_assign_pointer(dev->qdisc, new ? : &noop_qdisc);
+
+                notify_and_destroy(net, skb, n, classid, old, new);
 
                 if (new && new->ops->attach)
                         new->ops->attach(new);
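The race fix reorders the steps: capture the old qdisc, publish the replacement, and only then notify and destroy the old one, so there is no window in which the published pointer still refers to a qdisc that is being torn down. A bare-bones sketch of that capture/publish/destroy ordering (`struct qd`, `attached`, and `graft` are illustrative names, not kernel API):

```c
#include <assert.h>
#include <stddef.h>

struct qd { int id; };

static struct qd *attached;   /* plays the role of dev->qdisc */

/* Capture the old object and publish the new one first; destruction of
 * the old object (notify_and_destroy in the kernel) happens only after
 * the swap, using the captured pointer. */
static struct qd *graft(struct qd *new)
{
    struct qd *old = attached;  /* capture before publishing */
    attached = new;             /* publish replacement */
    return old;                 /* caller may now safely destroy old */
}
```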
net/sched/sch_cake.c (+4)
···
 
 static void cake_reset(struct Qdisc *sch)
 {
+        struct cake_sched_data *q = qdisc_priv(sch);
         u32 c;
+
+        if (!q->tins)
+                return;
 
         for (c = 0; c < CAKE_MAX_TINS; c++)
                 cake_clear_tin(sch, c);
net/sched/sch_fq_codel.c (+17 -8)
···
         if (opt) {
                 err = fq_codel_change(sch, opt, extack);
                 if (err)
-                        return err;
+                        goto init_failure;
         }
 
         err = tcf_block_get(&q->block, &q->filter_list, sch, extack);
         if (err)
-                return err;
+                goto init_failure;
 
         if (!q->flows) {
                 q->flows = kvcalloc(q->flows_cnt,
                                     sizeof(struct fq_codel_flow),
                                     GFP_KERNEL);
-                if (!q->flows)
-                        return -ENOMEM;
-
+                if (!q->flows) {
+                        err = -ENOMEM;
+                        goto init_failure;
+                }
                 q->backlogs = kvcalloc(q->flows_cnt, sizeof(u32), GFP_KERNEL);
-                if (!q->backlogs)
-                        return -ENOMEM;
-
+                if (!q->backlogs) {
+                        err = -ENOMEM;
+                        goto alloc_failure;
+                }
                 for (i = 0; i < q->flows_cnt; i++) {
                         struct fq_codel_flow *flow = q->flows + i;
 
···
         else
                 sch->flags &= ~TCQ_F_CAN_BYPASS;
         return 0;
+
+alloc_failure:
+        kvfree(q->flows);
+        q->flows = NULL;
+init_failure:
+        q->flows_cnt = 0;
+        return err;
 }
 
 static int fq_codel_dump(struct Qdisc *sch, struct sk_buff *skb)
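The revert restores goto-based unwinding: each later failure jumps to a label that frees everything allocated so far and zeroes the fields, so the subsequent reset/destroy path sees a consistent, empty object instead of half-initialized state. A self-contained userspace sketch of the same pattern (`struct sketch_q` and `sketch_init` are illustrative, with `calloc` in place of `kvcalloc`):

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

struct sketch_q {
    unsigned int flows_cnt;
    int *flows;
    unsigned int *backlogs;
};

/* Each failure jumps to a label that unwinds exactly the allocations
 * made before it; on any error the object is left empty and consistent. */
static int sketch_init(struct sketch_q *q, unsigned int cnt)
{
    int err;

    q->flows_cnt = cnt;
    q->flows = calloc(cnt, sizeof(*q->flows));
    if (!q->flows) {
        err = -ENOMEM;
        goto init_failure;
    }
    q->backlogs = calloc(cnt, sizeof(*q->backlogs));
    if (!q->backlogs) {
        err = -ENOMEM;
        goto alloc_failure;
    }
    return 0;

alloc_failure:
    free(q->flows);
    q->flows = NULL;
init_failure:
    q->flows_cnt = 0;   /* teardown later sees an empty object */
    return err;
}
```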
net/sched/sch_sfb.c (+2 -1)
···
 {
         struct sfb_sched_data *q = qdisc_priv(sch);
 
-        qdisc_reset(q->qdisc);
+        if (likely(q->qdisc))
+                qdisc_reset(q->qdisc);
         q->slot = 0;
         q->double_buffering = false;
         sfb_zero_all_buckets(q);
net/smc/smc_core.c (+2 -1)
···
                 }
                 memcpy(lgr->pnet_id, ibdev->pnetid[ibport - 1],
                        SMC_MAX_PNETID_LEN);
-                if (smc_wr_alloc_lgr_mem(lgr))
+                rc = smc_wr_alloc_lgr_mem(lgr);
+                if (rc)
                         goto free_wq;
                 smc_llc_lgr_init(lgr, smc);
net/tipc/discover.c (+1 -1)
···
 {
         struct net *net = d->net;
         struct tipc_net *tn = tipc_net(net);
-        bool trial = time_before(jiffies, tn->addr_trial_end);
         u32 self = tipc_own_addr(net);
+        bool trial = time_before(jiffies, tn->addr_trial_end) && !self;
 
         if (mtyp == DSC_TRIAL_FAIL_MSG) {
                 if (!trial)
net/tipc/topsrv.c (+1 -1)
···
         sub.seq.upper = upper;
         sub.timeout = TIPC_WAIT_FOREVER;
         sub.filter = filter;
-        *(u32 *)&sub.usr_handle = port;
+        *(u64 *)&sub.usr_handle = (u64)port;
 
         con = tipc_conn_alloc(tipc_topsrv(net));
         if (IS_ERR(con))
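The information leak came from storing a 32-bit port into an 8-byte handle field: the remaining four bytes kept whatever stack garbage was there and were later sent out. Widening the store to the full field width overwrites every byte. A userspace sketch of the fixed pattern (`struct handle` and `handle_set_port` are illustrative; `memcpy` is used as the alignment-safe equivalent of the kernel's cast-and-store):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical 8-byte handle field, like tipc_subscr.usr_handle. */
struct handle { char usr_handle[8]; };

/* Store the 32-bit port through the full field width so no
 * uninitialized bytes survive in the structure. */
static void handle_set_port(struct handle *h, uint32_t port)
{
    uint64_t v = (uint64_t)port;   /* high 32 bits are zero */
    memcpy(h->usr_handle, &v, sizeof(v));   /* writes all 8 bytes */
}
```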
net/tls/tls_strp.c (+28 -4)
···
         return desc.error;
 }
 
-static int tls_strp_read_short(struct tls_strparser *strp)
+static int tls_strp_read_copy(struct tls_strparser *strp, bool qshort)
 {
         struct skb_shared_info *shinfo;
         struct page *page;
···
          * to read the data out. Otherwise the connection will stall.
          * Without pressure threshold of INT_MAX will never be ready.
          */
-        if (likely(!tcp_epollin_ready(strp->sk, INT_MAX)))
+        if (likely(qshort && !tcp_epollin_ready(strp->sk, INT_MAX)))
                 return 0;
 
         shinfo = skb_shinfo(strp->anchor);
···
         tls_strp_read_copyin(strp);
 
         return 0;
+}
+
+static bool tls_strp_check_no_dup(struct tls_strparser *strp)
+{
+        unsigned int len = strp->stm.offset + strp->stm.full_len;
+        struct sk_buff *skb;
+        u32 seq;
+
+        skb = skb_shinfo(strp->anchor)->frag_list;
+        seq = TCP_SKB_CB(skb)->seq;
+
+        while (skb->len < len) {
+                seq += skb->len;
+                len -= skb->len;
+                skb = skb->next;
+
+                if (TCP_SKB_CB(skb)->seq != seq)
+                        return false;
+        }
+
+        return true;
 }
 
 static void tls_strp_load_anchor_with_queue(struct tls_strparser *strp, int len)
···
                 return tls_strp_read_copyin(strp);
 
         if (inq < strp->stm.full_len)
-                return tls_strp_read_short(strp);
+                return tls_strp_read_copy(strp, true);
 
         if (!strp->stm.full_len) {
                 tls_strp_load_anchor_with_queue(strp, inq);
···
                 strp->stm.full_len = sz;
 
                 if (!strp->stm.full_len || inq < strp->stm.full_len)
-                        return tls_strp_read_short(strp);
+                        return tls_strp_read_copy(strp, true);
         }
+
+        if (!tls_strp_check_no_dup(strp))
+                return tls_strp_read_copy(strp, false);
 
         strp->msg_ready = 1;
         tls_rx_msg_ready(strp);
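`tls_strp_check_no_dup()` walks the anchor's fragment list and verifies that each skb's TCP sequence number continues exactly where the previous fragment ended; any gap or overlap (retransmitted/duplicate data) forces the slow copy path instead of zero-copy parsing. A userspace model of that continuity check over a plain array (`struct frag` and `frags_contiguous` are illustrative stand-ins for the skb list):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Each fragment carries the TCP sequence number of its first byte. */
struct frag { uint32_t seq; uint32_t len; };

/* Mirror of the kernel loop: as long as the current fragment cannot
 * cover the remaining 'need' bytes, advance to the next fragment and
 * require its seq to equal the end of the previous one. */
static bool frags_contiguous(const struct frag *f, size_t n, uint32_t need)
{
    uint32_t seq = f[0].seq;
    uint32_t len = need;
    size_t i = 0;

    while (f[i].len < len) {
        seq += f[i].len;
        len -= f[i].len;
        i++;
        if (i >= n || f[i].seq != seq)
            return false;   /* gap or overlapping/duplicate data */
    }
    return true;
}
```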
tools/testing/selftests/net/Makefile (+1)
···
 TEST_GEN_FILES += bind_bhash
 TEST_GEN_PROGS += sk_bind_sendto_listen
 TEST_GEN_PROGS += sk_connect_zero_addr
+TEST_PROGS += test_ingress_egress_chaining.sh
 
 TEST_FILES := settings
tools/testing/selftests/net/test_ingress_egress_chaining.sh (new file, +79)
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+
+# This test runs a simple ingress tc setup between two veth pairs,
+# and chains a single egress rule to test ingress chaining to egress.
+#
+# Kselftest framework requirement - SKIP code is 4.
+ksft_skip=4
+
+if [ "$(id -u)" -ne 0 ];then
+  echo "SKIP: Need root privileges"
+  exit $ksft_skip
+fi
+
+needed_mods="act_mirred cls_flower sch_ingress"
+for mod in $needed_mods; do
+  modinfo $mod &>/dev/null || { echo "SKIP: Need act_mirred module"; exit $ksft_skip; }
+done
+
+ns="ns$((RANDOM%899+100))"
+veth1="veth1$((RANDOM%899+100))"
+veth2="veth2$((RANDOM%899+100))"
+peer1="peer1$((RANDOM%899+100))"
+peer2="peer2$((RANDOM%899+100))"
+ip_peer1=198.51.100.5
+ip_peer2=198.51.100.6
+
+function fail() {
+  echo "FAIL: $@" >> /dev/stderr
+  exit 1
+}
+
+function cleanup() {
+  killall -q -9 udpgso_bench_rx
+  ip link del $veth1 &> /dev/null
+  ip link del $veth2 &> /dev/null
+  ip netns del $ns &> /dev/null
+}
+trap cleanup EXIT
+
+function config() {
+  echo "Setup veth pairs [$veth1, $peer1], and veth pair [$veth2, $peer2]"
+  ip link add $veth1 type veth peer name $peer1
+  ip link add $veth2 type veth peer name $peer2
+  ip addr add $ip_peer1/24 dev $peer1
+  ip link set $peer1 up
+  ip netns add $ns
+  ip link set dev $peer2 netns $ns
+  ip netns exec $ns ip addr add $ip_peer2/24 dev $peer2
+  ip netns exec $ns ip link set $peer2 up
+  ip link set $veth1 up
+  ip link set $veth2 up
+
+  echo "Add tc filter ingress->egress forwarding $veth1 <-> $veth2"
+  tc qdisc add dev $veth2 ingress
+  tc qdisc add dev $veth1 ingress
+  tc filter add dev $veth2 ingress prio 1 proto all flower \
+    action mirred egress redirect dev $veth1
+  tc filter add dev $veth1 ingress prio 1 proto all flower \
+    action mirred egress redirect dev $veth2
+
+  echo "Add tc filter egress->ingress forwarding $peer1 -> $veth1, bypassing the veth pipe"
+  tc qdisc add dev $peer1 clsact
+  tc filter add dev $peer1 egress prio 20 proto ip flower \
+    action mirred ingress redirect dev $veth1
+}
+
+function test_run() {
+  echo "Run tcp traffic"
+  ./udpgso_bench_rx -t &
+  sleep 1
+  ip netns exec $ns timeout -k 2 10 ./udpgso_bench_tx -t -l 2 -4 -D $ip_peer1 || fail "traffic failed"
+  echo "Test passed"
+}
+
+config
+test_run
+trap - EXIT
+cleanup