Merge tag 'net-6.13-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Paolo Abeni:
"Including fixes from bluetooth.

Current release - regressions:

- rtnetlink: fix rtnl_dump_ifinfo() error path

- bluetooth: remove the redundant sco_conn_put

Previous releases - regressions:

- netlink: fix false positive warning in extack during dumps

- sched: sch_fq: don't follow the fast path if Tx is behind now

- ipv6: delete temporary address if mngtmpaddr is removed or
unmanaged

- tcp: fix use-after-free of nreq in reqsk_timer_handler().

- bluetooth: fix slab-use-after-free Read in set_powered_sync

- l2tp: fix warning in l2tp_exit_net

- eth:
- bnxt_en: fix receive ring space parameters when XDP is active
- lan78xx: fix double free issue with interrupt buffer allocation
- tg3: set coherent DMA mask bits to 31 for BCM57766 chipsets

Previous releases - always broken:

- ipmr: fix tables suspicious RCU usage

- iucv: MSG_PEEK causes memory leak in iucv_sock_destruct()

- eth:
- octeontx2-af: fix low network performance
- stmmac: dwmac-socfpga: set RX watchdog interrupt as broken
- rtase: correct the speed for RTL907XD-V1

Misc:

- some documentation fixups"

* tag 'net-6.13-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (49 commits)
ipmr: fix build with clang and DEBUG_NET disabled.
Documentation: tls_offload: fix typos and grammar
Fix spelling mistake
ipmr: fix tables suspicious RCU usage
ip6mr: fix tables suspicious RCU usage
ipmr: add debug check for mr table cleanup
selftests: rds: move test.py to TEST_FILES
net_sched: sch_fq: don't follow the fast path if Tx is behind now
tcp: Fix use-after-free of nreq in reqsk_timer_handler().
net: phy: fix phy_ethtool_set_eee() incorrectly enabling LPI
net: Comment copy_from_sockptr() explaining its behaviour
rxrpc: Improve setsockopt() handling of malformed user input
llc: Improve setsockopt() handling of malformed user input
Bluetooth: SCO: remove the redundant sco_conn_put
Bluetooth: MGMT: Fix possible deadlocks
Bluetooth: MGMT: Fix slab-use-after-free Read in set_powered_sync
bnxt_en: Unregister PTP during PCI shutdown and suspend
bnxt_en: Refactor bnxt_ptp_init()
bnxt_en: Fix receive ring space parameters when XDP is active
bnxt_en: Fix queue start to update vnic RSS table
...

Changed files
+814 -229
+1 -1
Documentation/networking/cdc_mbim.rst
···
 - mbimcli (included with the libmbim [3] library), and
 - ModemManager [4]

-Establishing a MBIM IP session reequires at least these actions by the
+Establishing a MBIM IP session requires at least these actions by the
 management application:

 - open the control channel
+15 -14
Documentation/networking/tls-offload.rst
···
 RX
 --

-On the receive side if the device handled decryption and authentication
+On the receive side, if the device handled decryption and authentication
 successfully, the driver will set the decrypted bit in the associated
 :c:type:`struct sk_buff <sk_buff>`. The packets reach the TCP stack and
 are handled normally. ``ktls`` is informed when data is queued to the socket
···
 RX
 --

-In RX direction local networking stack has little control over the segmentation,
-so the initial records' TCP sequence number may be anywhere inside the segment.
+In the RX direction, the local networking stack has little control over
+segmentation, so the initial records' TCP sequence number may be anywhere
+inside the segment.

 Normal operation
 ================
···
 segments may start at any point of a record and contain any number of records.
 Assuming segments are received in order, the device should be able to perform
 crypto operations and authentication regardless of segmentation. For this
-to be possible device has to keep small amount of segment-to-segment state.
-This includes at least:
+to be possible, the device has to keep a small amount of segment-to-segment
+state. This includes at least:

 * partial headers (if a segment carried only a part of the TLS header)
 * partial data block
···
 checksum and performs a 5-tuple lookup to find any TLS connection the packet
 may belong to (technically a 4-tuple
 lookup is sufficient - IP addresses and TCP port numbers, as the protocol
-is always TCP). If connection is matched device confirms if the TCP sequence
-number is the expected one and proceeds to TLS handling (record delineation,
-decryption, authentication for each record in the packet). The device leaves
-the record framing unmodified, the stack takes care of record decapsulation.
-Device indicates successful handling of TLS offload in the per-packet context
-(descriptor) passed to the host.
+is always TCP). If the packet is matched to a connection, the device confirms
+if the TCP sequence number is the expected one and proceeds to TLS handling
+(record delineation, decryption, authentication for each record in the packet).
+The device leaves the record framing unmodified, the stack takes care of record
+decapsulation. Device indicates successful handling of TLS offload in the
+per-packet context (descriptor) passed to the host.

 Upon reception of a TLS offloaded packet, the driver sets
 the :c:member:`decrypted` mark in :c:type:`struct sk_buff <sk_buff>`
···
 * ``rx_tls_resync_req_end`` - number of times the TLS async resync request
   properly ended with providing the HW tracked tcp-seq.
 * ``rx_tls_resync_req_skip`` - number of times the TLS async resync request
-  procedure was started by not properly ended.
+  procedure was started but not properly ended.
 * ``rx_tls_resync_res_ok`` - number of times the TLS resync response call to
   the driver was successfully handled.
 * ``rx_tls_resync_res_skip`` - number of times the TLS resync response call to
···
 Transport layer transparency
 ----------------------------

-The device should not modify any packet headers for the purpose
-of the simplifying TLS offload.
+For the purpose of simplifying TLS offload, the device should not modify any
+packet headers.

 The device should not depend on any packet headers beyond what is strictly
 necessary for TLS offload.
+31 -6
drivers/net/ethernet/broadcom/bnxt/bnxt.c
···
     struct net_device *dev = bp->dev;

     if (page_mode) {
-        bp->flags &= ~BNXT_FLAG_AGG_RINGS;
+        bp->flags &= ~(BNXT_FLAG_AGG_RINGS | BNXT_FLAG_NO_AGG_RINGS);
         bp->flags |= BNXT_FLAG_RX_PAGE_MODE;

         if (bp->xdp_prog->aux->xdp_has_frags)
···
     struct hwrm_port_mac_ptp_qcfg_output *resp;
     struct hwrm_port_mac_ptp_qcfg_input *req;
     struct bnxt_ptp_cfg *ptp = bp->ptp_cfg;
-    bool phc_cfg;
     u8 flags;
     int rc;
···
         rc = -ENODEV;
         goto exit;
     }
-    phc_cfg = (flags & PORT_MAC_PTP_QCFG_RESP_FLAGS_RTC_CONFIGURED) != 0;
-    rc = bnxt_ptp_init(bp, phc_cfg);
+    ptp->rtc_configured =
+        (flags & PORT_MAC_PTP_QCFG_RESP_FLAGS_RTC_CONFIGURED) != 0;
+    rc = bnxt_ptp_init(bp);
     if (rc)
         netdev_warn(bp->dev, "PTP initialization failed.\n");
 exit:
···
         bnxt_close_nic(bp, true, false);

     WRITE_ONCE(dev->mtu, new_mtu);
+
+    /* MTU change may change the AGG ring settings if an XDP multi-buffer
+     * program is attached. We need to set the AGG rings settings and
+     * rx_skb_func accordingly.
+     */
+    if (READ_ONCE(bp->xdp_prog))
+        bnxt_set_rx_skb_mode(bp, true);
+
     bnxt_set_ring_params(bp);

     if (netif_running(dev))
···

     for (i = 0; i <= BNXT_VNIC_NTUPLE; i++) {
         vnic = &bp->vnic_info[i];
+
+        rc = bnxt_hwrm_vnic_set_rss_p5(bp, vnic, true);
+        if (rc) {
+            netdev_err(bp->dev, "hwrm vnic %d set rss failure rc: %d\n",
+                       vnic->vnic_id, rc);
+            return rc;
+        }
         vnic->mru = bp->dev->mtu + ETH_HLEN + VLAN_HLEN;
         bnxt_hwrm_vnic_update(bp, vnic,
                               VNIC_UPDATE_REQ_ENABLES_MRU_VALID);
···
     if (netif_running(dev))
         dev_close(dev);

+    bnxt_ptp_clear(bp);
     bnxt_clear_int_mode(bp);
     pci_disable_device(pdev);
···
         rc = bnxt_close(dev);
     }
     bnxt_hwrm_func_drv_unrgtr(bp);
+    bnxt_ptp_clear(bp);
     pci_disable_device(bp->pdev);
     bnxt_free_ctx_mem(bp, false);
     rtnl_unlock();
···
     if (bp->fw_crash_mem)
         bnxt_hwrm_crash_dump_mem_cfg(bp);

+    if (bnxt_ptp_init(bp)) {
+        kfree(bp->ptp_cfg);
+        bp->ptp_cfg = NULL;
+    }
     bnxt_get_wol_settings(bp);
     if (netif_running(dev)) {
         rc = bnxt_open(dev);
···
     rtnl_lock();

     err = bnxt_hwrm_func_qcaps(bp);
-    if (!err && netif_running(netdev))
-        err = bnxt_open(netdev);
+    if (!err) {
+        if (netif_running(netdev))
+            err = bnxt_open(netdev);
+        else
+            err = bnxt_reserve_rings(bp, true);
+    }

     if (!err)
         netif_device_attach(netdev);
+7 -2
drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
···
     }

     base->port = PORT_NONE;
-    if (link_info->media_type == PORT_PHY_QCFG_RESP_MEDIA_TYPE_TP) {
+    if (media == BNXT_MEDIA_TP) {
         base->port = PORT_TP;
         linkmode_set_bit(ETHTOOL_LINK_MODE_TP_BIT,
                          lk_ksettings->link_modes.supported);
         linkmode_set_bit(ETHTOOL_LINK_MODE_TP_BIT,
+                         lk_ksettings->link_modes.advertising);
+    } else if (media == BNXT_MEDIA_KR) {
+        linkmode_set_bit(ETHTOOL_LINK_MODE_Backplane_BIT,
+                         lk_ksettings->link_modes.supported);
+        linkmode_set_bit(ETHTOOL_LINK_MODE_Backplane_BIT,
                          lk_ksettings->link_modes.advertising);
     } else {
         linkmode_set_bit(ETHTOOL_LINK_MODE_FIBRE_BIT,
···
         linkmode_set_bit(ETHTOOL_LINK_MODE_FIBRE_BIT,
                          lk_ksettings->link_modes.advertising);

-        if (link_info->media_type == PORT_PHY_QCFG_RESP_MEDIA_TYPE_DAC)
+        if (media == BNXT_MEDIA_CR)
             base->port = PORT_DA;
         else
             base->port = PORT_FIBRE;
+2 -2
drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c
···
         }
     }
 }

-int bnxt_ptp_init(struct bnxt *bp, bool phc_cfg)
+int bnxt_ptp_init(struct bnxt *bp)
 {
     struct bnxt_ptp_cfg *ptp = bp->ptp_cfg;
     int rc;
···

     if (BNXT_PTP_USE_RTC(bp)) {
         bnxt_ptp_timecounter_init(bp, false);
-        rc = bnxt_ptp_init_rtc(bp, phc_cfg);
+        rc = bnxt_ptp_init_rtc(bp, ptp->rtc_configured);
         if (rc)
             goto out;
     } else {
+2 -1
drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.h
···
                  BNXT_PTP_MSG_PDELAY_REQ | \
                  BNXT_PTP_MSG_PDELAY_RESP)
     u8 tx_tstamp_en:1;
+    u8 rtc_configured:1;
     int rx_filter;
     u32 tstamp_filters;

···
             struct tx_ts_cmp *tscmp);
 void bnxt_ptp_rtc_timecounter_init(struct bnxt_ptp_cfg *ptp, u64 ns);
 int bnxt_ptp_init_rtc(struct bnxt *bp, bool phc_cfg);
-int bnxt_ptp_init(struct bnxt *bp, bool phc_cfg);
+int bnxt_ptp_init(struct bnxt *bp);
 void bnxt_ptp_clear(struct bnxt *bp);
 static inline u64 bnxt_timecounter_cyc2time(struct bnxt_ptp_cfg *ptp, u64 ts)
 {
+3
drivers/net/ethernet/broadcom/tg3.c
···
     } else
         persist_dma_mask = dma_mask = DMA_BIT_MASK(64);

+    if (tg3_asic_rev(tp) == ASIC_REV_57766)
+        persist_dma_mask = DMA_BIT_MASK(31);
+
     /* Configure DMA attributes. */
     if (dma_mask > DMA_BIT_MASK(32)) {
         err = dma_set_mask(&pdev->dev, dma_mask);
+68 -2
drivers/net/ethernet/marvell/octeontx2/af/cgx.c
···
     return ((struct cgx *)cgxd)->mac_ops;
 }

+u32 cgx_get_fifo_len(void *cgxd)
+{
+    return ((struct cgx *)cgxd)->fifo_len;
+}
+
 void cgx_write(struct cgx *cgx, u64 lmac, u64 offset, u64 val)
 {
     writeq(val, cgx->reg_base + (lmac << cgx->mac_ops->lmac_offset) +
···
     cfg = cgx_read(cgx_dev, lmac_id, CGXX_CMRX_CFG);

     return (cfg & CMR_P2X_SEL_MASK) >> CMR_P2X_SEL_SHIFT;
 }

+static u8 cgx_get_nix_resetbit(struct cgx *cgx)
+{
+    int first_lmac;
+    u8 p2x;
+
+    /* non 98XX silicons supports only NIX0 block */
+    if (cgx->pdev->subsystem_device != PCI_SUBSYS_DEVID_98XX)
+        return CGX_NIX0_RESET;
+
+    first_lmac = find_first_bit(&cgx->lmac_bmap, cgx->max_lmac_per_mac);
+    p2x = cgx_lmac_get_p2x(cgx->cgx_id, first_lmac);
+
+    if (p2x == CMR_P2X_SEL_NIX1)
+        return CGX_NIX1_RESET;
+    else
+        return CGX_NIX0_RESET;
+}
+
 /* Ensure the required lock for event queue(where asynchronous events are
···
     u8 num_lmacs;
     u32 fifo_len;

-    fifo_len = cgx->mac_ops->fifo_len;
+    fifo_len = cgx->fifo_len;
     num_lmacs = cgx->mac_ops->get_nr_lmacs(cgx);

     switch (num_lmacs) {
···
         lmac->lmac_type = cgx->mac_ops->get_lmac_type(cgx, lmac->lmac_id);
     }

+    /* Start X2P reset on given MAC block */
+    cgx->mac_ops->mac_x2p_reset(cgx, true);
     return cgx_lmac_verify_fwi_version(cgx);

 err_bitmap_free:
···
     u64 cfg;

     cfg = cgx_read(cgx, 0, CGX_CONST);
-    cgx->mac_ops->fifo_len = FIELD_GET(CGX_CONST_RXFIFO_SIZE, cfg);
+    cgx->fifo_len = FIELD_GET(CGX_CONST_RXFIFO_SIZE, cfg);
     cgx->max_lmac_per_mac = FIELD_GET(CGX_CONST_MAX_LMACS, cfg);

     if (is_dev_rpm(cgx))
···
         return 0x80;
     else
         return 0x60;
 }

+static void cgx_x2p_reset(void *cgxd, bool enable)
+{
+    struct cgx *cgx = cgxd;
+    int lmac_id;
+    u64 cfg;
+
+    if (enable) {
+        for_each_set_bit(lmac_id, &cgx->lmac_bmap, cgx->max_lmac_per_mac)
+            cgx->mac_ops->mac_enadis_rx(cgx, lmac_id, false);
+
+        usleep_range(1000, 2000);
+
+        cfg = cgx_read(cgx, 0, CGXX_CMR_GLOBAL_CONFIG);
+        cfg |= cgx_get_nix_resetbit(cgx) | CGX_NSCI_DROP;
+        cgx_write(cgx, 0, CGXX_CMR_GLOBAL_CONFIG, cfg);
+    } else {
+        cfg = cgx_read(cgx, 0, CGXX_CMR_GLOBAL_CONFIG);
+        cfg &= ~(cgx_get_nix_resetbit(cgx) | CGX_NSCI_DROP);
+        cgx_write(cgx, 0, CGXX_CMR_GLOBAL_CONFIG, cfg);
+    }
+}
+
+static int cgx_enadis_rx(void *cgxd, int lmac_id, bool enable)
+{
+    struct cgx *cgx = cgxd;
+    u64 cfg;
+
+    if (!is_lmac_valid(cgx, lmac_id))
+        return -ENODEV;
+
+    cfg = cgx_read(cgx, lmac_id, CGXX_CMRX_CFG);
+    if (enable)
+        cfg |= DATA_PKT_RX_EN;
+    else
+        cfg &= ~DATA_PKT_RX_EN;
+    cgx_write(cgx, lmac_id, CGXX_CMRX_CFG, cfg);
+    return 0;
+}
+
 static struct mac_ops cgx_mac_ops = {
···
     .mac_get_pfc_frm_cfg = cgx_lmac_get_pfc_frm_cfg,
     .mac_reset = cgx_lmac_reset,
     .mac_stats_reset = cgx_stats_reset,
+    .mac_x2p_reset = cgx_x2p_reset,
+    .mac_enadis_rx = cgx_enadis_rx,
 };

 static int cgx_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+5
drivers/net/ethernet/marvell/octeontx2/af/cgx.h
···
 #define CGX_LMAC_TYPE_MASK 0xF
 #define CGXX_CMRX_INT 0x040
 #define FW_CGX_INT BIT_ULL(1)
+#define CGXX_CMR_GLOBAL_CONFIG 0x08
+#define CGX_NIX0_RESET BIT_ULL(2)
+#define CGX_NIX1_RESET BIT_ULL(3)
+#define CGX_NSCI_DROP BIT_ULL(9)
 #define CGXX_CMRX_INT_ENA_W1S 0x058
 #define CGXX_CMRX_RX_ID_MAP 0x060
 #define CGXX_CMRX_RX_STAT0 0x070
···
 int verify_lmac_fc_cfg(void *cgxd, int lmac_id, u8 tx_pause, u8 rx_pause,
                        int pfvf_idx);
 int cgx_lmac_reset(void *cgxd, int lmac_id, u8 pf_req_flr);
+u32 cgx_get_fifo_len(void *cgxd);
 #endif /* CGX_H */
+6 -1
drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h
···
     u8 irq_offset;
     u8 int_ena_bit;
     u8 lmac_fwi;
-    u32 fifo_len;
     bool non_contiguous_serdes_lane;
     /* RPM & CGX differs in number of Receive/transmit stats */
     u8 rx_stats_cnt;
···
     int (*get_fec_stats)(void *cgxd, int lmac_id,
                          struct cgx_fec_stats_rsp *rsp);
     int (*mac_stats_reset)(void *cgxd, int lmac_id);
+    void (*mac_x2p_reset)(void *cgxd, bool enable);
+    int (*mac_enadis_rx)(void *cgxd, int lmac_id, bool enable);
 };

 struct cgx {
···
     u8 lmac_count;
     /* number of LMACs per MAC could be 4 or 8 */
     u8 max_lmac_per_mac;
+    /* length of fifo varies depending on the number
+     * of LMACS
+     */
+    u32 fifo_len;
 #define MAX_LMAC_COUNT 8
     struct lmac *lmac_idmap[MAX_LMAC_COUNT];
     struct work_struct cgx_cmd_work;
+67 -20
drivers/net/ethernet/marvell/octeontx2/af/rpm.c
···
     .mac_get_pfc_frm_cfg = rpm_lmac_get_pfc_frm_cfg,
     .mac_reset = rpm_lmac_reset,
     .mac_stats_reset = rpm_stats_reset,
+    .mac_x2p_reset = rpm_x2p_reset,
+    .mac_enadis_rx = rpm_enadis_rx,
 };

 static struct mac_ops rpm2_mac_ops = {
···
     .mac_get_pfc_frm_cfg = rpm_lmac_get_pfc_frm_cfg,
     .mac_reset = rpm_lmac_reset,
     .mac_stats_reset = rpm_stats_reset,
+    .mac_x2p_reset = rpm_x2p_reset,
+    .mac_enadis_rx = rpm_enadis_rx,
 };

 bool is_dev_rpm2(void *rpmd)
···
     int err;

     req = FIELD_SET(CMDREG_ID, CGX_CMD_GET_LINK_STS, req);
-    err = cgx_fwi_cmd_generic(req, &resp, rpm, 0);
+    err = cgx_fwi_cmd_generic(req, &resp, rpm, lmac_id);
     if (!err)
         return FIELD_GET(RESP_LINKSTAT_LMAC_TYPE, resp);
     return err;
···
     u8 num_lmacs;
     u32 fifo_len;

-    fifo_len = rpm->mac_ops->fifo_len;
+    fifo_len = rpm->fifo_len;
     num_lmacs = rpm->mac_ops->get_nr_lmacs(rpm);

     switch (num_lmacs) {
···
      */
     max_lmac = (rpm_read(rpm, 0, CGX_CONST) >> 24) & 0xFF;
     if (max_lmac > 4)
-        fifo_len = rpm->mac_ops->fifo_len / 2;
+        fifo_len = rpm->fifo_len / 2;
     else
-        fifo_len = rpm->mac_ops->fifo_len;
+        fifo_len = rpm->fifo_len;

     if (lmac_id < 4) {
         num_lmacs = hweight8(lmac_info & 0xF);
···
     if (rpm->lmac_idmap[lmac_id]->link_info.fec == OTX2_FEC_NONE)
         return 0;

+    /* latched registers FCFECX_CW_HI/RSFEC_STAT_FAST_DATA_HI_CDC are common
+     * for all counters. Acquire lock to ensure serialized reads
+     */
+    mutex_lock(&rpm->lock);
     if (rpm->lmac_idmap[lmac_id]->link_info.fec == OTX2_FEC_BASER) {
-        val_lo = rpm_read(rpm, lmac_id, RPMX_MTI_FCFECX_VL0_CCW_LO);
-        val_hi = rpm_read(rpm, lmac_id, RPMX_MTI_FCFECX_CW_HI);
+        val_lo = rpm_read(rpm, 0, RPMX_MTI_FCFECX_VL0_CCW_LO(lmac_id));
+        val_hi = rpm_read(rpm, 0, RPMX_MTI_FCFECX_CW_HI(lmac_id));
         rsp->fec_corr_blks = (val_hi << 16 | val_lo);

-        val_lo = rpm_read(rpm, lmac_id, RPMX_MTI_FCFECX_VL0_NCCW_LO);
-        val_hi = rpm_read(rpm, lmac_id, RPMX_MTI_FCFECX_CW_HI);
+        val_lo = rpm_read(rpm, 0, RPMX_MTI_FCFECX_VL0_NCCW_LO(lmac_id));
+        val_hi = rpm_read(rpm, 0, RPMX_MTI_FCFECX_CW_HI(lmac_id));
         rsp->fec_uncorr_blks = (val_hi << 16 | val_lo);

         /* 50G uses 2 Physical serdes lines */
         if (rpm->lmac_idmap[lmac_id]->link_info.lmac_type_id ==
             LMAC_MODE_50G_R) {
-            val_lo = rpm_read(rpm, lmac_id,
-                              RPMX_MTI_FCFECX_VL1_CCW_LO);
-            val_hi = rpm_read(rpm, lmac_id,
-                              RPMX_MTI_FCFECX_CW_HI);
+            val_lo = rpm_read(rpm, 0,
+                              RPMX_MTI_FCFECX_VL1_CCW_LO(lmac_id));
+            val_hi = rpm_read(rpm, 0,
+                              RPMX_MTI_FCFECX_CW_HI(lmac_id));
             rsp->fec_corr_blks += (val_hi << 16 | val_lo);

-            val_lo = rpm_read(rpm, lmac_id,
-                              RPMX_MTI_FCFECX_VL1_NCCW_LO);
-            val_hi = rpm_read(rpm, lmac_id,
-                              RPMX_MTI_FCFECX_CW_HI);
+            val_lo = rpm_read(rpm, 0,
+                              RPMX_MTI_FCFECX_VL1_NCCW_LO(lmac_id));
+            val_hi = rpm_read(rpm, 0,
+                              RPMX_MTI_FCFECX_CW_HI(lmac_id));
             rsp->fec_uncorr_blks += (val_hi << 16 | val_lo);
         }
     } else {
         /* enable RS-FEC capture */
-        cfg = rpm_read(rpm, 0, RPMX_MTI_STAT_STATN_CONTROL);
+        cfg = rpm_read(rpm, 0, RPMX_MTI_RSFEC_STAT_STATN_CONTROL);
         cfg |= RPMX_RSFEC_RX_CAPTURE | BIT(lmac_id);
-        rpm_write(rpm, 0, RPMX_MTI_STAT_STATN_CONTROL, cfg);
+        rpm_write(rpm, 0, RPMX_MTI_RSFEC_STAT_STATN_CONTROL, cfg);

         val_lo = rpm_read(rpm, 0,
                           RPMX_MTI_RSFEC_STAT_COUNTER_CAPTURE_2);
-        val_hi = rpm_read(rpm, 0, RPMX_MTI_STAT_DATA_HI_CDC);
+        val_hi = rpm_read(rpm, 0, RPMX_MTI_RSFEC_STAT_FAST_DATA_HI_CDC);
         rsp->fec_corr_blks = (val_hi << 32 | val_lo);

         val_lo = rpm_read(rpm, 0,
                           RPMX_MTI_RSFEC_STAT_COUNTER_CAPTURE_3);
-        val_hi = rpm_read(rpm, 0, RPMX_MTI_STAT_DATA_HI_CDC);
+        val_hi = rpm_read(rpm, 0, RPMX_MTI_RSFEC_STAT_FAST_DATA_HI_CDC);
         rsp->fec_uncorr_blks = (val_hi << 32 | val_lo);
     }
+    mutex_unlock(&rpm->lock);

     return 0;
 }
···
     if (pf_req_flr)
         rpm_lmac_internal_loopback(rpm, lmac_id, false);

     return 0;
 }
+
+void rpm_x2p_reset(void *rpmd, bool enable)
+{
+    rpm_t *rpm = rpmd;
+    int lmac_id;
+    u64 cfg;
+
+    if (enable) {
+        for_each_set_bit(lmac_id, &rpm->lmac_bmap, rpm->max_lmac_per_mac)
+            rpm->mac_ops->mac_enadis_rx(rpm, lmac_id, false);
+
+        usleep_range(1000, 2000);
+
+        cfg = rpm_read(rpm, 0, RPMX_CMR_GLOBAL_CFG);
+        rpm_write(rpm, 0, RPMX_CMR_GLOBAL_CFG, cfg | RPM_NIX0_RESET);
+    } else {
+        cfg = rpm_read(rpm, 0, RPMX_CMR_GLOBAL_CFG);
+        cfg &= ~RPM_NIX0_RESET;
+        rpm_write(rpm, 0, RPMX_CMR_GLOBAL_CFG, cfg);
+    }
+}
+
+int rpm_enadis_rx(void *rpmd, int lmac_id, bool enable)
+{
+    rpm_t *rpm = rpmd;
+    u64 cfg;
+
+    if (!is_lmac_valid(rpm, lmac_id))
+        return -ENODEV;
+
+    cfg = rpm_read(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG);
+    if (enable)
+        cfg |= RPM_RX_EN;
+    else
+        cfg &= ~RPM_RX_EN;
+    rpm_write(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG, cfg);
+    return 0;
+}
+12 -6
drivers/net/ethernet/marvell/octeontx2/af/rpm.h
···

 /* Registers */
 #define RPMX_CMRX_CFG 0x00
+#define RPMX_CMR_GLOBAL_CFG 0x08
+#define RPM_NIX0_RESET BIT_ULL(3)
 #define RPMX_RX_TS_PREPEND BIT_ULL(22)
 #define RPMX_TX_PTP_1S_SUPPORT BIT_ULL(17)
 #define RPMX_CMRX_RX_ID_MAP 0x80
···
 /* FEC stats */
 #define RPMX_MTI_STAT_STATN_CONTROL 0x10018
 #define RPMX_MTI_STAT_DATA_HI_CDC 0x10038
-#define RPMX_RSFEC_RX_CAPTURE BIT_ULL(27)
+#define RPMX_RSFEC_RX_CAPTURE BIT_ULL(28)
 #define RPMX_CMD_CLEAR_RX BIT_ULL(30)
 #define RPMX_CMD_CLEAR_TX BIT_ULL(31)
+#define RPMX_MTI_RSFEC_STAT_STATN_CONTROL 0x40018
+#define RPMX_MTI_RSFEC_STAT_FAST_DATA_HI_CDC 0x40000
 #define RPMX_MTI_RSFEC_STAT_COUNTER_CAPTURE_2 0x40050
 #define RPMX_MTI_RSFEC_STAT_COUNTER_CAPTURE_3 0x40058
-#define RPMX_MTI_FCFECX_VL0_CCW_LO 0x38618
-#define RPMX_MTI_FCFECX_VL0_NCCW_LO 0x38620
-#define RPMX_MTI_FCFECX_VL1_CCW_LO 0x38628
-#define RPMX_MTI_FCFECX_VL1_NCCW_LO 0x38630
-#define RPMX_MTI_FCFECX_CW_HI 0x38638
+#define RPMX_MTI_FCFECX_VL0_CCW_LO(a) (0x38618 + ((a) * 0x40))
+#define RPMX_MTI_FCFECX_VL0_NCCW_LO(a) (0x38620 + ((a) * 0x40))
+#define RPMX_MTI_FCFECX_VL1_CCW_LO(a) (0x38628 + ((a) * 0x40))
+#define RPMX_MTI_FCFECX_VL1_NCCW_LO(a) (0x38630 + ((a) * 0x40))
+#define RPMX_MTI_FCFECX_CW_HI(a) (0x38638 + ((a) * 0x40))

 /* CN10KB CSR Declaration */
 #define RPM2_CMRX_SW_INT 0x1b0
···
 int rpm_get_fec_stats(void *cgxd, int lmac_id, struct cgx_fec_stats_rsp *rsp);
 int rpm_lmac_reset(void *rpmd, int lmac_id, u8 pf_req_flr);
 int rpm_stats_reset(void *rpmd, int lmac_id);
+void rpm_x2p_reset(void *rpmd, bool enable);
+int rpm_enadis_rx(void *rpmd, int lmac_id, bool enable);
 #endif /* RPM_H */
+1
drivers/net/ethernet/marvell/octeontx2/af/rvu.c
···
     }

     rvu_program_channels(rvu);
+    cgx_start_linkup(rvu);

     err = rvu_mcs_init(rvu);
     if (err) {
+1
drivers/net/ethernet/marvell/octeontx2/af/rvu.h
···
 int rvu_cgx_cfg_pause_frm(struct rvu *rvu, u16 pcifunc, u8 tx_pause, u8 rx_pause);
 void rvu_mac_reset(struct rvu *rvu, u16 pcifunc);
 u32 rvu_cgx_get_lmac_fifolen(struct rvu *rvu, int cgx, int lmac);
+void cgx_start_linkup(struct rvu *rvu);
 int npc_get_nixlf_mcam_index(struct npc_mcam *mcam, u16 pcifunc, int nixlf,
                              int type);
 bool is_mcam_entry_enabled(struct rvu *rvu, struct npc_mcam *mcam, int blkaddr,
+34 -11
drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
···

 int rvu_cgx_init(struct rvu *rvu)
 {
+    struct mac_ops *mac_ops;
     int cgx, err;
     void *cgxd;

···
     if (err)
         return err;

+    /* Clear X2P reset on all MAC blocks */
+    for (cgx = 0; cgx < rvu->cgx_cnt_max; cgx++) {
+        cgxd = rvu_cgx_pdata(cgx, rvu);
+        if (!cgxd)
+            continue;
+        mac_ops = get_mac_ops(cgxd);
+        mac_ops->mac_x2p_reset(cgxd, false);
+    }
+
     /* Register for CGX events */
     err = cgx_lmac_event_handler_init(rvu);
     if (err)
···

     mutex_init(&rvu->cgx_cfg_lock);

-    /* Ensure event handler registration is completed, before
-     * we turn on the links
-     */
-    mb();
+    return 0;
+}
+
+void cgx_start_linkup(struct rvu *rvu)
+{
+    unsigned long lmac_bmap;
+    struct mac_ops *mac_ops;
+    int cgx, lmac, err;
+    void *cgxd;
+
+    /* Enable receive on all LMACS */
+    for (cgx = 0; cgx <= rvu->cgx_cnt_max; cgx++) {
+        cgxd = rvu_cgx_pdata(cgx, rvu);
+        if (!cgxd)
+            continue;
+        mac_ops = get_mac_ops(cgxd);
+        lmac_bmap = cgx_get_lmac_bmap(cgxd);
+        for_each_set_bit(lmac, &lmac_bmap, rvu->hw->lmac_per_cgx)
+            mac_ops->mac_enadis_rx(cgxd, lmac, true);
+    }

     /* Do link up for all CGX ports */
     for (cgx = 0; cgx <= rvu->cgx_cnt_max; cgx++) {
···
                 "Link up process failed to start on cgx %d\n",
                 cgx);
     }
-
-    return 0;
 }

 int rvu_cgx_exit(struct rvu *rvu)
···

 u32 rvu_cgx_get_fifolen(struct rvu *rvu)
 {
-    struct mac_ops *mac_ops;
-    u32 fifo_len;
+    void *cgxd = rvu_first_cgx_pdata(rvu);

-    mac_ops = get_mac_ops(rvu_first_cgx_pdata(rvu));
-    fifo_len = mac_ops ? mac_ops->fifo_len : 0;
+    if (!cgxd)
+        return 0;

-    return fifo_len;
+    return cgx_get_fifo_len(cgxd);
 }

 u32 rvu_cgx_get_lmac_fifolen(struct rvu *rvu, int cgx, int lmac)
+4 -10
drivers/net/ethernet/marvell/pxa168_eth.c
···

     printk(KERN_NOTICE "PXA168 10/100 Ethernet Driver\n");

-    clk = devm_clk_get(&pdev->dev, NULL);
+    clk = devm_clk_get_enabled(&pdev->dev, NULL);
     if (IS_ERR(clk)) {
-        dev_err(&pdev->dev, "Fast Ethernet failed to get clock\n");
+        dev_err(&pdev->dev, "Fast Ethernet failed to get and enable clock\n");
         return -ENODEV;
     }
-    clk_prepare_enable(clk);

     dev = alloc_etherdev(sizeof(struct pxa168_eth_private));
-    if (!dev) {
-        err = -ENOMEM;
-        goto err_clk;
-    }
+    if (!dev)
+        return -ENOMEM;

     platform_set_drvdata(pdev, dev);
     pep = netdev_priv(dev);
···
     mdiobus_free(pep->smi_bus);
 err_netdev:
     free_netdev(dev);
-err_clk:
-    clk_disable_unprepare(clk);
     return err;
 }

···
     if (dev->phydev)
         phy_disconnect(dev->phydev);

-    clk_disable_unprepare(pep->clk);
     mdiobus_unregister(pep->smi_bus);
     mdiobus_free(pep->smi_bus);
     unregister_netdev(dev);
+10 -7
drivers/net/ethernet/microchip/vcap/vcap_api_kunit.c
···
     struct vcap_typegroup typegroups[] = {
         { .offset = 0, .width = 2, .value = 2, },
         { .offset = 156, .width = 1, .value = 0, },
-        { .offset = 0, .width = 0, .value = 0, },
+        { }
     };
     struct vcap_typegroup typegroups2[] = {
         { .offset = 0, .width = 3, .value = 4, },
         { .offset = 49, .width = 2, .value = 0, },
         { .offset = 98, .width = 2, .value = 0, },
+        { }
     };

     vcap_iter_init(&iter, 52, typegroups, 86);
···
         { .offset = 147, .width = 3, .value = 0, },
         { .offset = 196, .width = 2, .value = 0, },
         { .offset = 245, .width = 1, .value = 0, },
+        { }
     };
     int idx;

···
         { .offset = 147, .width = 3, .value = 5, },
         { .offset = 196, .width = 2, .value = 2, },
         { .offset = 245, .width = 5, .value = 27, },
-        { .offset = 0, .width = 0, .value = 0, },
+        { }
     };

     vcap_encode_typegroups(stream, 49, typegroups, false);
···
         { .offset = 147, .width = 3, .value = 5, },
         { .offset = 196, .width = 2, .value = 2, },
         { .offset = 245, .width = 1, .value = 0, },
+        { }
     };

     vcap_iter_init(&iter, 49, typegroups, 44);
···
         { .offset = 147, .width = 3, .value = 5, },
         { .offset = 196, .width = 2, .value = 2, },
         { .offset = 245, .width = 5, .value = 27, },
-        { .offset = 0, .width = 0, .value = 0, },
+        { }
     };
     struct vcap_field rf = {
         .type = VCAP_FIELD_U32,
···
         { .offset = 0, .width = 3, .value = 7, },
         { .offset = 21, .width = 2, .value = 3, },
         { .offset = 42, .width = 1, .value = 1, },
-        { .offset = 0, .width = 0, .value = 0, },
+        { }
     };
     struct vcap_field rf = {
         .type = VCAP_FIELD_U32,
···
     struct vcap_typegroup tgt[] = {
         { .offset = 0, .width = 2, .value = 2, },
         { .offset = 156, .width = 1, .value = 1, },
-        { .offset = 0, .width = 0, .value = 0, },
+        { }
     };

     vcap_test_api_init(&admin);
···
     struct vcap_typegroup tgt[] = {
         { .offset = 0, .width = 2, .value = 2, },
         { .offset = 156, .width = 1, .value = 1, },
-        { .offset = 0, .width = 0, .value = 0, },
+        { }
     };
     u32 keyres[] = {
         0x928e8a84,
···
         { .offset = 0, .width = 2, .value = 2, },
         { .offset = 21, .width = 1, .value = 1, },
         { .offset = 42, .width = 1, .value = 0, },
-        { .offset = 0, .width = 0, .value = 0, },
+        { }
     };

     vcap_encode_actionfield(&rule, &caf, &rf, tgt);
+6 -1
drivers/net/ethernet/realtek/rtase/rtase.h
···
 #ifndef RTASE_H
 #define RTASE_H

-#define RTASE_HW_VER_MASK 0x7C800000
+#define RTASE_HW_VER_MASK 0x7C800000
+#define RTASE_HW_VER_906X_7XA 0x00800000
+#define RTASE_HW_VER_906X_7XC 0x04000000
+#define RTASE_HW_VER_907XD_V1 0x04800000

 #define RTASE_RX_DMA_BURST_256 4
 #define RTASE_TX_DMA_BURST_UNLIMITED 7
···
     u16 int_nums;
     u16 tx_int_mit;
     u16 rx_int_mit;
+
+    u32 hw_ver;
 };

 #define RTASE_LSO_64K 64000
+30 -13
drivers/net/ethernet/realtek/rtase/rtase_main.c
···
                                      struct ethtool_link_ksettings *cmd)
 {
     u32 supported = SUPPORTED_MII | SUPPORTED_Pause | SUPPORTED_Asym_Pause;
+    const struct rtase_private *tp = netdev_priv(dev);

     ethtool_convert_legacy_u32_to_link_mode(cmd->link_modes.supported,
                                             supported);
-    cmd->base.speed = SPEED_5000;
+
+    switch (tp->hw_ver) {
+    case RTASE_HW_VER_906X_7XA:
+    case RTASE_HW_VER_906X_7XC:
+        cmd->base.speed = SPEED_5000;
+        break;
+    case RTASE_HW_VER_907XD_V1:
+        cmd->base.speed = SPEED_10000;
+        break;
+    }
+
     cmd->base.duplex = DUPLEX_FULL;
     cmd->base.port = PORT_MII;
     cmd->base.autoneg = AUTONEG_DISABLE;
···
     tp->dev->max_mtu = RTASE_MAX_JUMBO_SIZE;
 }

-static bool rtase_check_mac_version_valid(struct rtase_private *tp)
+static int rtase_check_mac_version_valid(struct rtase_private *tp)
 {
-    u32 hw_ver = rtase_r32(tp, RTASE_TX_CONFIG_0) & RTASE_HW_VER_MASK;
-    bool known_ver = false;
+    int ret = -ENODEV;

-    switch (hw_ver) {
-    case 0x00800000:
-    case 0x04000000:
-    case 0x04800000:
-        known_ver = true;
+    tp->hw_ver = rtase_r32(tp, RTASE_TX_CONFIG_0) & RTASE_HW_VER_MASK;
+
+    switch (tp->hw_ver) {
+    case RTASE_HW_VER_906X_7XA:
+    case RTASE_HW_VER_906X_7XC:
+    case RTASE_HW_VER_907XD_V1:
+        ret = 0;
         break;
     }

-    return known_ver;
+    return ret;
 }

 static int rtase_init_board(struct pci_dev *pdev, struct net_device **dev_out,
···
     tp->pdev = pdev;

     /* identify chip attached to board */
-    if (!rtase_check_mac_version_valid(tp))
-        return dev_err_probe(&pdev->dev, -ENODEV,
-                             "unknown chip version, contact rtase maintainers (see MAINTAINERS file)\n");
+    ret = rtase_check_mac_version_valid(tp);
+    if (ret != 0) {
+        dev_err(&pdev->dev,
+                "unknown chip version: 0x%08x, contact rtase maintainers (see MAINTAINERS file)\n",
+                tp->hw_ver);
+        goto err_out_release_board;
+    }

     rtase_init_software_variable(pdev, tp);
     rtase_init_hardware(tp);
···
         netif_napi_del(&ivec->napi);
     }

+err_out_release_board:
     rtase_release_board(pdev, dev, ioaddr);

     return ret;
+2
drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
···
     plat_dat->select_pcs = socfpga_dwmac_select_pcs;
     plat_dat->has_gmac = true;

+    plat_dat->riwt_off = 1;
+
     ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
     if (ret)
         return ret;
+3
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
···
             return -ENODEV;
         }

+        if (priv->dma_cap.eee)
+            phy_support_eee(phydev);
+
         ret = phylink_connect_phy(priv->phylink, phydev);
     } else {
         fwnode_handle_put(phy_fwnode);
+4 -1
drivers/net/mdio/mdio-ipq4019.c
···
     /* The platform resource is provided on the chipset IPQ5018 */
     /* This resource is optional */
     res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
-    if (res)
+    if (res) {
         priv->eth_ldo_rdy = devm_ioremap_resource(&pdev->dev, res);
+        if (IS_ERR(priv->eth_ldo_rdy))
+            return PTR_ERR(priv->eth_ldo_rdy);
+    }

     bus->name = "ipq4019_mdio";
     bus->read = ipq4019_mdio_read_c22;
+1 -1
drivers/net/phy/phy-c45.c
···
         return ret;

     data->eee_enabled = is_enabled;
-    data->eee_active = ret;
+    data->eee_active = phydev->eee_active;
     linkmode_copy(data->supported, phydev->supported_eee);

     return 0;
+31 -21
drivers/net/phy/phy.c
···
             phydev->state = PHY_RUNNING;
             err = genphy_c45_eee_is_active(phydev,
                                            NULL, NULL, NULL);
-            if (err <= 0)
-                phydev->enable_tx_lpi = false;
-            else
-                phydev->enable_tx_lpi = phydev->eee_cfg.tx_lpi_enabled;
+            phydev->eee_active = err > 0;
+            phydev->enable_tx_lpi = phydev->eee_cfg.tx_lpi_enabled &&
+                                    phydev->eee_active;

             phy_link_up(phydev);
         } else if (!phydev->link && phydev->state != PHY_NOLINK) {
             phydev->state = PHY_NOLINK;
+            phydev->eee_active = false;
             phydev->enable_tx_lpi = false;
             phy_link_down(phydev);
         }
···
 * phy_ethtool_set_eee_noneg - Adjusts MAC LPI configuration without PHY
 *                             renegotiation
 * @phydev: pointer to the target PHY device structure
- * @data: pointer to the ethtool_keee structure containing the new EEE settings
+ * @old_cfg: pointer to the eee_config structure containing the old EEE settings
 *
 * This function updates the Energy Efficient Ethernet (EEE) configuration
 * for cases where only the MAC's Low Power Idle (LPI) configuration changes,
···
 * configuration.
 */
 static void phy_ethtool_set_eee_noneg(struct phy_device *phydev,
-                                      struct ethtool_keee *data)
+                                      const struct eee_config *old_cfg)
 {
-    if (phydev->eee_cfg.tx_lpi_enabled != data->tx_lpi_enabled ||
-        phydev->eee_cfg.tx_lpi_timer != data->tx_lpi_timer) {
-        eee_to_eeecfg(&phydev->eee_cfg, data);
-        phydev->enable_tx_lpi = eeecfg_mac_can_tx_lpi(&phydev->eee_cfg);
-        if (phydev->link) {
-            phydev->link = false;
-            phy_link_down(phydev);
-            phydev->link = true;
-            phy_link_up(phydev);
-        }
+    bool enable_tx_lpi;
+
+    if (!phydev->link)
+        return;
+
+    enable_tx_lpi = phydev->eee_cfg.tx_lpi_enabled && phydev->eee_active;
+
+    if (phydev->enable_tx_lpi != enable_tx_lpi ||
+        phydev->eee_cfg.tx_lpi_timer != old_cfg->tx_lpi_timer) {
+        phydev->enable_tx_lpi = false;
+        phydev->link = false;
+        phy_link_down(phydev);
+        phydev->enable_tx_lpi = enable_tx_lpi;
+        phydev->link = true;
+        phy_link_up(phydev);
     }
 }

···
 */
 int phy_ethtool_set_eee(struct phy_device *phydev, struct ethtool_keee *data)
 {
+    struct eee_config old_cfg;
     int ret;

     if (!phydev->drv)
         return -EIO;

     mutex_lock(&phydev->lock);
+
+    old_cfg = phydev->eee_cfg;
+    eee_to_eeecfg(&phydev->eee_cfg, data);
+
     ret = genphy_c45_ethtool_set_eee(phydev, data);
-    if (ret >= 0) {
-        if (ret == 0)
-            phy_ethtool_set_eee_noneg(phydev, data);
-        eee_to_eeecfg(&phydev->eee_cfg, data);
-    }
+    if (ret == 0)
+        phy_ethtool_set_eee_noneg(phydev, &old_cfg);
+    else if (ret < 0)
+        phydev->eee_cfg = old_cfg;
+
     mutex_unlock(&phydev->lock);

     return ret < 0 ? ret : 0;
+22 -20
drivers/net/usb/lan78xx.c
···
     struct lan78xx_priv *pdata = (struct lan78xx_priv *)(dev->data[0]);
     int ret;

+    if (wol->wolopts & ~WAKE_ALL)
+        return -EINVAL;
+
     ret = usb_autopm_get_interface(dev->intf);
     if (ret < 0)
         return ret;
-
-    if (wol->wolopts & ~WAKE_ALL)
-        return -EINVAL;

     pdata->wol = wol->wolopts;
···
     if (dev->chipid == ID_REV_CHIP_ID_7801_) {
         if (phy_is_pseudo_fixed_link(phydev)) {
             fixed_phy_unregister(phydev);
+            phy_device_free(phydev);
         } else {
             phy_unregister_fixup_for_uid(PHY_KSZ9031RNX,
                                          0xfffffff0);
···

     phy_disconnect(net->phydev);

-    if (phy_is_pseudo_fixed_link(phydev))
+    if (phy_is_pseudo_fixed_link(phydev)) {
         fixed_phy_unregister(phydev);
+        phy_device_free(phydev);
+    }

     usb_scuttle_anchored_urbs(&dev->deferred);

···

     period = ep_intr->desc.bInterval;
     maxp = usb_maxpacket(dev->udev, dev->pipe_intr);
-    buf = kmalloc(maxp, GFP_KERNEL);
-    if (!buf) {
-        ret = -ENOMEM;
-        goto out5;
-    }

     dev->urb_intr = usb_alloc_urb(0, GFP_KERNEL);
     if (!dev->urb_intr) {
         ret = -ENOMEM;
-        goto out6;
-    } else {
-        usb_fill_int_urb(dev->urb_intr, dev->udev,
-                         dev->pipe_intr, buf, maxp,
-                         intr_complete, dev, period);
-        dev->urb_intr->transfer_flags |= URB_FREE_BUFFER;
+        goto out5;
     }
+
+    buf = kmalloc(maxp, GFP_KERNEL);
+    if (!buf) {
+        ret = -ENOMEM;
+        goto free_urbs;
+    }
+
+    usb_fill_int_urb(dev->urb_intr, dev->udev,
+                     dev->pipe_intr, buf, maxp,
+                     intr_complete, dev, period);
+    dev->urb_intr->transfer_flags |= URB_FREE_BUFFER;

     dev->maxpacket = usb_maxpacket(dev->udev, dev->pipe_out);

     /* Reject broken descriptors. */
     if (dev->maxpacket == 0) {
         ret = -ENODEV;
-        goto out6;
+        goto free_urbs;
     }

     /* driver requires remote-wakeup capability during autosuspend. */
···
     ret = lan78xx_phy_init(dev);
     if (ret < 0)
-        goto out7;
+        goto free_urbs;

     ret = register_netdev(netdev);
     if (ret != 0) {
···

 out8:
     phy_disconnect(netdev->phydev);
-out7:
+free_urbs:
     usb_free_urb(dev->urb_intr);
-out6:
-    kfree(buf);
 out5:
     lan78xx_unbind(dev, intf);
 out4:
+2
include/linux/phy.h
···
 * @supported_eee: supported PHY EEE linkmodes
 * @advertising_eee: Currently advertised EEE linkmodes
 * @enable_tx_lpi: When True, MAC should transmit LPI to PHY
+ * @eee_active: phylib private state, indicating that EEE has been negotiated
 * @eee_cfg: User configuration of EEE
 * @lp_advertising: Current link partner advertised linkmodes
 * @host_interfaces: PHY interface modes supported by host
···
     /* Energy efficient ethernet modes which should be prohibited */
     __ETHTOOL_DECLARE_LINK_MODE_MASK(eee_broken_modes);
     bool enable_tx_lpi;
+    bool eee_active;
     struct eee_config eee_cfg;

     /* Host supported PHY interface types. Should be ignored if empty. */
+2
include/linux/sockptr.h
···
 /* Deprecated.
  * This is unsafe, unless caller checked user provided optlen.
  * Prefer copy_safe_from_sockptr() instead.
+ *
+ * Returns 0 for success, or number of bytes not copied on error.
  */
 static inline int copy_from_sockptr(void *dst, sockptr_t src, size_t size)
 {
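Note: as context for the llc change at the end of this series (and the analogous rxrpc one), here is a minimal sketch of the pattern being adopted. The foo option and handler are hypothetical, not part of this series; copy_safe_from_sockptr() validates the user-supplied optlen against the kernel object size, where a bare copy_from_sockptr() would blindly read sizeof() bytes:

    /* Hypothetical setsockopt handler illustrating the safer helper.
     * copy_safe_from_sockptr() returns -EINVAL when optlen is shorter
     * than the kernel object, instead of silently reading a fixed size
     * the way an unchecked copy_from_sockptr() call does.
     */
    static int foo_setsockopt(struct sock *sk, sockptr_t optval,
                              unsigned int optlen)
    {
        int opt, rc;

        rc = copy_safe_from_sockptr(&opt, sizeof(opt), optval, optlen);
        if (rc)
            return rc;  /* -EINVAL on short optlen, non-zero on fault */

        /* ... act on opt under the socket lock ... */
        return 0;
    }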
+27 -11
net/bluetooth/mgmt.c
···
     struct mgmt_mode *cp;

     /* Make sure cmd still outstanding. */
-    if (cmd != pending_find(MGMT_OP_SET_POWERED, hdev))
+    if (err == -ECANCELED ||
+        cmd != pending_find(MGMT_OP_SET_POWERED, hdev))
         return;

     cp = cmd->param;
···
 static int set_powered_sync(struct hci_dev *hdev, void *data)
 {
     struct mgmt_pending_cmd *cmd = data;
-    struct mgmt_mode *cp = cmd->param;
+    struct mgmt_mode *cp;
+
+    /* Make sure cmd still outstanding. */
+    if (cmd != pending_find(MGMT_OP_SET_POWERED, hdev))
+        return -ECANCELED;
+
+    cp = cmd->param;

     BT_DBG("%s", hdev->name);
···
     bt_dev_dbg(hdev, "err %d", err);

     /* Make sure cmd still outstanding. */
-    if (cmd != pending_find(MGMT_OP_SET_DISCOVERABLE, hdev))
+    if (err == -ECANCELED ||
+        cmd != pending_find(MGMT_OP_SET_DISCOVERABLE, hdev))
         return;

     hci_dev_lock(hdev);
···
     bt_dev_dbg(hdev, "err %d", err);

     /* Make sure cmd still outstanding. */
-    if (cmd != pending_find(MGMT_OP_SET_CONNECTABLE, hdev))
+    if (err == -ECANCELED ||
+        cmd != pending_find(MGMT_OP_SET_CONNECTABLE, hdev))
         return;

     hci_dev_lock(hdev);
···
     bool changed;

     /* Make sure cmd still outstanding. */
-    if (cmd != pending_find(MGMT_OP_SET_SSP, hdev))
+    if (err == -ECANCELED || cmd != pending_find(MGMT_OP_SET_SSP, hdev))
         return;

     if (err) {
···

     bt_dev_dbg(hdev, "err %d", err);

-    if (cmd != pending_find(MGMT_OP_SET_LOCAL_NAME, hdev))
+    if (err == -ECANCELED ||
+        cmd != pending_find(MGMT_OP_SET_LOCAL_NAME, hdev))
         return;

     if (status) {
···
     struct sk_buff *skb = cmd->skb;
     u8 status = mgmt_status(err);

-    if (cmd != pending_find(MGMT_OP_SET_PHY_CONFIGURATION, hdev))
+    if (err == -ECANCELED ||
+        cmd != pending_find(MGMT_OP_SET_PHY_CONFIGURATION, hdev))
         return;

     if (!status) {
···
 {
     struct mgmt_pending_cmd *cmd = data;

+    bt_dev_dbg(hdev, "err %d", err);
+
+    if (err == -ECANCELED)
+        return;
+
     if (cmd != pending_find(MGMT_OP_START_DISCOVERY, hdev) &&
         cmd != pending_find(MGMT_OP_START_LIMITED_DISCOVERY, hdev) &&
         cmd != pending_find(MGMT_OP_START_SERVICE_DISCOVERY, hdev))
         return;
-
-    bt_dev_dbg(hdev, "err %d", err);

     mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode, mgmt_status(err),
                       cmd->param, 1);
···
 {
     struct mgmt_pending_cmd *cmd = data;

-    if (cmd != pending_find(MGMT_OP_STOP_DISCOVERY, hdev))
+    if (err == -ECANCELED ||
+        cmd != pending_find(MGMT_OP_STOP_DISCOVERY, hdev))
         return;

     bt_dev_dbg(hdev, "err %d", err);
···
     u8 status = mgmt_status(err);
     u16 eir_len;

-    if (cmd != pending_find(MGMT_OP_READ_LOCAL_OOB_EXT_DATA, hdev))
+    if (err == -ECANCELED ||
+        cmd != pending_find(MGMT_OP_READ_LOCAL_OOB_EXT_DATA, hdev))
         return;

     if (!status) {
+1 -1
net/bluetooth/sco.c
···
     sco_conn_lock(conn);
     if (!conn->hcon) {
         sco_conn_unlock(conn);
+        sco_conn_put(conn);
         return;
     }
     sk = sco_sock_hold(conn);
···
         conn->hcon = hcon;
         sco_conn_unlock(conn);
     }
-    sco_conn_put(conn);
     return conn;
 }
+10 -4
net/core/rtnetlink.c
···
             tgt_net = rtnl_get_net_ns_capable(skb->sk, netnsid);
             if (IS_ERR(tgt_net)) {
                 NL_SET_ERR_MSG(extack, "Invalid target network namespace id");
-                return PTR_ERR(tgt_net);
+                err = PTR_ERR(tgt_net);
+                netnsid = -1;
+                goto out;
             }
             break;
         case IFLA_EXT_MASK:
···
         default:
             if (cb->strict_check) {
                 NL_SET_ERR_MSG(extack, "Unsupported attribute in link dump request");
-                return -EINVAL;
+                err = -EINVAL;
+                goto out;
             }
         }
     }
···
         break;
     }

-    if (kind_ops)
-        rtnl_link_ops_put(kind_ops, ops_srcu_index);

     cb->seq = tgt_net->dev_base_seq;
     nl_dump_check_consistent(cb, nlmsg_hdr(skb));
+
+out:
+    if (kind_ops)
+        rtnl_link_ops_put(kind_ops, ops_srcu_index);
     if (netnsid >= 0)
         put_net(tgt_net);
+2 -2
net/hsr/hsr_device.c
···
     skb->dev = master->dev;
     skb->priority = TC_PRIO_CONTROL;

+    skb_reset_network_header(skb);
+    skb_reset_transport_header(skb);
     if (dev_hard_header(skb, skb->dev, ETH_P_PRP,
                         hsr->sup_multicast_addr,
                         skb->dev->dev_addr, skb->len) <= 0)
···
     skb_reset_mac_header(skb);
     skb_reset_mac_len(skb);
-    skb_reset_network_header(skb);
-    skb_reset_transport_header(skb);

     return skb;
 out:
+1 -1
net/ipv4/inet_connection_sock.c
···

 drop:
     __inet_csk_reqsk_queue_drop(sk_listener, oreq, true);
-    reqsk_put(req);
+    reqsk_put(oreq);
 }

 static bool reqsk_queue_hash_req(struct request_sock *req,
+44 -14
net/ipv4/ipmr.c
···
                          lockdep_rtnl_is_held() || \
                          list_empty(&net->ipv4.mr_tables))

+static bool ipmr_can_free_table(struct net *net)
+{
+    return !check_net(net) || !net->ipv4.mr_rules_ops;
+}
+
 static struct mr_table *ipmr_mr_table_iter(struct net *net,
                                            struct mr_table *mrt)
 {
···
     return ret;
 }

-static struct mr_table *ipmr_get_table(struct net *net, u32 id)
+static struct mr_table *__ipmr_get_table(struct net *net, u32 id)
 {
     struct mr_table *mrt;
···
             return mrt;
     }
     return NULL;
+}
+
+static struct mr_table *ipmr_get_table(struct net *net, u32 id)
+{
+    struct mr_table *mrt;
+
+    rcu_read_lock();
+    mrt = __ipmr_get_table(net, id);
+    rcu_read_unlock();
+    return mrt;
 }

 static int ipmr_fib_lookup(struct net *net, struct flowi4 *flp4,
···

     arg->table = fib_rule_get_table(rule, arg);

-    mrt = ipmr_get_table(rule->fr_net, arg->table);
+    mrt = __ipmr_get_table(rule->fr_net, arg->table);
     if (!mrt)
         return -EAGAIN;
     res->mrt = mrt;
···
 #define ipmr_for_each_table(mrt, net) \
     for (mrt = net->ipv4.mrt; mrt; mrt = NULL)

+static bool ipmr_can_free_table(struct net *net)
+{
+    return !check_net(net);
+}
+
 static struct mr_table *ipmr_mr_table_iter(struct net *net,
                                            struct mr_table *mrt)
 {
···
 {
     return net->ipv4.mrt;
 }
+
+#define __ipmr_get_table ipmr_get_table

 static int ipmr_fib_lookup(struct net *net, struct flowi4 *flp4,
                            struct mr_table **mrt)
···
     if (id != RT_TABLE_DEFAULT && id >= 1000000000)
         return ERR_PTR(-EINVAL);

-    mrt = ipmr_get_table(net, id);
+    mrt = __ipmr_get_table(net, id);
     if (mrt)
         return mrt;
···
 static void ipmr_free_table(struct mr_table *mrt)
 {
+    struct net *net = read_pnet(&mrt->net);
+
+    WARN_ON_ONCE(!ipmr_can_free_table(net));
+
     timer_shutdown_sync(&mrt->ipmr_expire_timer);
     mroute_clean_tables(mrt, MRT_FLUSH_VIFS | MRT_FLUSH_VIFS_STATIC |
                              MRT_FLUSH_MFC | MRT_FLUSH_MFC_STATIC);
···
         goto out_unlock;
     }

-    mrt = ipmr_get_table(net, raw_sk(sk)->ipmr_table ? : RT_TABLE_DEFAULT);
+    mrt = __ipmr_get_table(net, raw_sk(sk)->ipmr_table ? : RT_TABLE_DEFAULT);
     if (!mrt) {
         ret = -ENOENT;
         goto out_unlock;
···
     struct mr_table *mrt;
     int err;

-    mrt = ipmr_get_table(net, RT_TABLE_DEFAULT);
-    if (!mrt)
-        return -ENOENT;
-
     rcu_read_lock();
+    mrt = __ipmr_get_table(net, RT_TABLE_DEFAULT);
+    if (!mrt) {
+        rcu_read_unlock();
+        return -ENOENT;
+    }
+
     cache = ipmr_cache_find(mrt, saddr, daddr);
     if (!cache && skb->dev) {
         int vif = ipmr_find_vif(mrt, skb->dev);
···
     grp = nla_get_in_addr_default(tb[RTA_DST], 0);
     tableid = nla_get_u32_default(tb[RTA_TABLE], 0);

-    mrt = ipmr_get_table(net, tableid ? tableid : RT_TABLE_DEFAULT);
+    mrt = __ipmr_get_table(net, tableid ? tableid : RT_TABLE_DEFAULT);
     if (!mrt) {
         err = -ENOENT;
         goto errout_free;
···
     if (filter.table_id) {
         struct mr_table *mrt;

-        mrt = ipmr_get_table(sock_net(skb->sk), filter.table_id);
+        mrt = __ipmr_get_table(sock_net(skb->sk), filter.table_id);
         if (!mrt) {
             if (rtnl_msg_family(cb->nlh) != RTNL_FAMILY_IPMR)
                 return skb->len;
···
             break;
         }
     }
-    mrt = ipmr_get_table(net, tblid);
+    mrt = __ipmr_get_table(net, tblid);
     if (!mrt) {
         ret = -ENOENT;
         goto out;
···
     struct net *net = seq_file_net(seq);
     struct mr_table *mrt;

-    mrt = ipmr_get_table(net, RT_TABLE_DEFAULT);
-    if (!mrt)
+    rcu_read_lock();
+    mrt = __ipmr_get_table(net, RT_TABLE_DEFAULT);
+    if (!mrt) {
+        rcu_read_unlock();
         return ERR_PTR(-ENOENT);
+    }

     iter->mrt = mrt;

-    rcu_read_lock();
     return mr_vif_seq_start(seq, pos);
 }
+29 -12
net/ipv6/addrconf.c
···
     return idev;
 }

+static void delete_tempaddrs(struct inet6_dev *idev,
+                             struct inet6_ifaddr *ifp)
+{
+    struct inet6_ifaddr *ift, *tmp;
+
+    write_lock_bh(&idev->lock);
+    list_for_each_entry_safe(ift, tmp, &idev->tempaddr_list, tmp_list) {
+        if (ift->ifpub != ifp)
+            continue;
+
+        in6_ifa_hold(ift);
+        write_unlock_bh(&idev->lock);
+        ipv6_del_addr(ift);
+        write_lock_bh(&idev->lock);
+    }
+    write_unlock_bh(&idev->lock);
+}
+
 static void manage_tempaddrs(struct inet6_dev *idev,
                              struct inet6_ifaddr *ifp,
                              __u32 valid_lft, __u32 prefered_lft,
···
             in6_ifa_hold(ifp);
             read_unlock_bh(&idev->lock);

-            if (!(ifp->flags & IFA_F_TEMPORARY) &&
-                (ifa_flags & IFA_F_MANAGETEMPADDR))
-                manage_tempaddrs(idev, ifp, 0, 0, false,
-                                 jiffies);
             ipv6_del_addr(ifp);
+
+            if (!(ifp->flags & IFA_F_TEMPORARY) &&
+                (ifp->flags & IFA_F_MANAGETEMPADDR))
+                delete_tempaddrs(idev, ifp);
+
             addrconf_verify_rtnl(net);
             if (ipv6_addr_is_multicast(pfx)) {
                 ipv6_mc_config(net->ipv6.mc_autojoin_sk,
···
     }

     if (was_managetempaddr || ifp->flags & IFA_F_MANAGETEMPADDR) {
-        if (was_managetempaddr &&
-            !(ifp->flags & IFA_F_MANAGETEMPADDR)) {
-            cfg->valid_lft = 0;
-            cfg->preferred_lft = 0;
-        }
-        manage_tempaddrs(ifp->idev, ifp, cfg->valid_lft,
-                         cfg->preferred_lft, !was_managetempaddr,
-                         jiffies);
+        if (was_managetempaddr && !(ifp->flags & IFA_F_MANAGETEMPADDR))
+            delete_tempaddrs(ifp->idev, ifp);
+        else
+            manage_tempaddrs(ifp->idev, ifp, cfg->valid_lft,
+                             cfg->preferred_lft, !was_managetempaddr,
+                             jiffies);
     }

     addrconf_verify_rtnl(net);
+42 -12
net/ipv6/ip6mr.c
··· 108 108 lockdep_rtnl_is_held() || \ 109 109 list_empty(&net->ipv6.mr6_tables)) 110 110 111 + static bool ip6mr_can_free_table(struct net *net) 112 + { 113 + return !check_net(net) || !net->ipv6.mr6_rules_ops; 114 + } 115 + 111 116 static struct mr_table *ip6mr_mr_table_iter(struct net *net, 112 117 struct mr_table *mrt) 113 118 { ··· 130 125 return ret; 131 126 } 132 127 133 - static struct mr_table *ip6mr_get_table(struct net *net, u32 id) 128 + static struct mr_table *__ip6mr_get_table(struct net *net, u32 id) 134 129 { 135 130 struct mr_table *mrt; 136 131 ··· 139 134 return mrt; 140 135 } 141 136 return NULL; 137 + } 138 + 139 + static struct mr_table *ip6mr_get_table(struct net *net, u32 id) 140 + { 141 + struct mr_table *mrt; 142 + 143 + rcu_read_lock(); 144 + mrt = __ip6mr_get_table(net, id); 145 + rcu_read_unlock(); 146 + return mrt; 142 147 } 143 148 144 149 static int ip6mr_fib_lookup(struct net *net, struct flowi6 *flp6, ··· 192 177 193 178 arg->table = fib_rule_get_table(rule, arg); 194 179 195 - mrt = ip6mr_get_table(rule->fr_net, arg->table); 180 + mrt = __ip6mr_get_table(rule->fr_net, arg->table); 196 181 if (!mrt) 197 182 return -EAGAIN; 198 183 res->mrt = mrt; ··· 306 291 #define ip6mr_for_each_table(mrt, net) \ 307 292 for (mrt = net->ipv6.mrt6; mrt; mrt = NULL) 308 293 294 + static bool ip6mr_can_free_table(struct net *net) 295 + { 296 + return !check_net(net); 297 + } 298 + 309 299 static struct mr_table *ip6mr_mr_table_iter(struct net *net, 310 300 struct mr_table *mrt) 311 301 { ··· 323 303 { 324 304 return net->ipv6.mrt6; 325 305 } 306 + 307 + #define __ip6mr_get_table ip6mr_get_table 326 308 327 309 static int ip6mr_fib_lookup(struct net *net, struct flowi6 *flp6, 328 310 struct mr_table **mrt) ··· 404 382 { 405 383 struct mr_table *mrt; 406 384 407 - mrt = ip6mr_get_table(net, id); 385 + mrt = __ip6mr_get_table(net, id); 408 386 if (mrt) 409 387 return mrt; 410 388 ··· 414 392 415 393 static void ip6mr_free_table(struct mr_table *mrt) 416 394 { 395 + struct net *net = read_pnet(&mrt->net); 396 + 397 + WARN_ON_ONCE(!ip6mr_can_free_table(net)); 398 + 417 399 timer_shutdown_sync(&mrt->ipmr_expire_timer); 418 400 mroute_clean_tables(mrt, MRT6_FLUSH_MIFS | MRT6_FLUSH_MIFS_STATIC | 419 401 MRT6_FLUSH_MFC | MRT6_FLUSH_MFC_STATIC); ··· 437 411 struct net *net = seq_file_net(seq); 438 412 struct mr_table *mrt; 439 413 440 - mrt = ip6mr_get_table(net, RT6_TABLE_DFLT); 441 - if (!mrt) 414 + rcu_read_lock(); 415 + mrt = __ip6mr_get_table(net, RT6_TABLE_DFLT); 416 + if (!mrt) { 417 + rcu_read_unlock(); 442 418 return ERR_PTR(-ENOENT); 419 + } 443 420 444 421 iter->mrt = mrt; 445 422 446 - rcu_read_lock(); 447 423 return mr_vif_seq_start(seq, pos); 448 424 } 449 425 ··· 2306 2278 struct mfc6_cache *cache; 2307 2279 struct rt6_info *rt = dst_rt6_info(skb_dst(skb)); 2308 2280 2309 - mrt = ip6mr_get_table(net, RT6_TABLE_DFLT); 2310 - if (!mrt) 2311 - return -ENOENT; 2312 - 2313 2281 rcu_read_lock(); 2282 + mrt = __ip6mr_get_table(net, RT6_TABLE_DFLT); 2283 + if (!mrt) { 2284 + rcu_read_unlock(); 2285 + return -ENOENT; 2286 + } 2287 + 2314 2288 cache = ip6mr_cache_find(mrt, &rt->rt6i_src.addr, &rt->rt6i_dst.addr); 2315 2289 if (!cache && skb->dev) { 2316 2290 int vif = ip6mr_find_vif(mrt, skb->dev); ··· 2592 2562 grp = nla_get_in6_addr(tb[RTA_DST]); 2593 2563 tableid = nla_get_u32_default(tb[RTA_TABLE], 0); 2594 2564 2595 - mrt = ip6mr_get_table(net, tableid ?: RT_TABLE_DEFAULT); 2565 + mrt = __ip6mr_get_table(net, tableid ?: RT_TABLE_DEFAULT); 2596 2566 if (!mrt) { 2597 2567 
NL_SET_ERR_MSG_MOD(extack, "MR table does not exist"); 2598 2568 return -ENOENT; ··· 2639 2609 if (filter.table_id) { 2640 2610 struct mr_table *mrt; 2641 2611 2642 - mrt = ip6mr_get_table(sock_net(skb->sk), filter.table_id); 2612 + mrt = __ip6mr_get_table(sock_net(skb->sk), filter.table_id); 2643 2613 if (!mrt) { 2644 2614 if (rtnl_msg_family(cb->nlh) != RTNL_FAMILY_IP6MR) 2645 2615 return skb->len;
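The ip6mr split above separates lookup from locking: __ip6mr_get_table() is the raw lookup for callers already inside an RCU read-side section (or holding RTNL, per the lockdep annotation), while ip6mr_get_table() wraps it in rcu_read_lock()/rcu_read_unlock() for everyone else. The suspicious-RCU-usage reports came from paths that did the lookup first and only entered RCU afterwards. A minimal sketch of the corrected caller pattern, condensed from the dump and getroute paths in the patch (the surrounding function is hypothetical):

    rcu_read_lock();
    mrt = __ip6mr_get_table(net, RT6_TABLE_DFLT);
    if (!mrt) {
            rcu_read_unlock();
            return -ENOENT;
    }
    /* ... every dereference of mrt happens under the same lock ... */
    rcu_read_unlock();

The new ip6mr_can_free_table() check acts as a backstop: ip6mr_free_table() now warns if a table is torn down while the netns is still live and its rules ops are still registered.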
+17 -9
net/iucv/af_iucv.c
··· 1236 1236 return -EOPNOTSUPP;
1237 1237
1238 1238 /* receive/dequeue next skb:
1239 - * the function understands MSG_PEEK and, thus, does not dequeue skb */
1239 + * the function understands MSG_PEEK and, thus, does not dequeue the skb;
1240 + * only the refcount is increased.
1241 + */
1240 1242 skb = skb_recv_datagram(sk, flags, &err);
1241 1243 if (!skb) {
1242 1244 if (sk->sk_shutdown & RCV_SHUTDOWN)
···
1254 1252
1255 1253 cskb = skb;
1256 1254 if (skb_copy_datagram_msg(cskb, offset, msg, copied)) {
1257 - if (!(flags & MSG_PEEK))
1258 - skb_queue_head(&sk->sk_receive_queue, skb);
1259 - return -EFAULT;
1255 + err = -EFAULT;
1256 + goto err_out;
1260 1257 }
1261 1258
1262 1259 /* SOCK_SEQPACKET: set MSG_TRUNC if recv buf size is too small */
···
1272 1271 err = put_cmsg(msg, SOL_IUCV, SCM_IUCV_TRGCLS,
1273 1272 sizeof(IUCV_SKB_CB(skb)->class),
1274 1273 (void *)&IUCV_SKB_CB(skb)->class);
1275 - if (err) {
1276 - if (!(flags & MSG_PEEK))
1277 - skb_queue_head(&sk->sk_receive_queue, skb);
1278 - return err;
1279 - }
1274 + if (err)
1275 + goto err_out;
1280 1276
1281 1277 /* Mark read part of skb as used */
1282 1278 if (!(flags & MSG_PEEK)) {
···
1329 1331 /* SOCK_SEQPACKET: return real length if MSG_TRUNC is set */
1330 1332 if (sk->sk_type == SOCK_SEQPACKET && (flags & MSG_TRUNC))
1331 1333 copied = rlen;
1334 + if (flags & MSG_PEEK)
1335 + skb_unref(skb);
1332 1336
1333 1337 return copied;
1338 +
1339 + err_out:
1340 + if (!(flags & MSG_PEEK))
1341 + skb_queue_head(&sk->sk_receive_queue, skb);
1342 + else
1343 + skb_unref(skb);
1344 +
1345 + return err;
1334 1346 }
1335 1347
1336 1348 static inline __poll_t iucv_accept_poll(struct sock *parent)
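The af_iucv fix hinges on the invariant restated in the updated comment: with MSG_PEEK, skb_recv_datagram() leaves the skb on the receive queue and only takes an extra reference. The old error paths requeued or returned without ever dropping that reference, which is what leaked memory in iucv_sock_destruct(). A condensed sketch of the balance every exit now keeps (names as in the patch; the consumed non-peek skb is freed elsewhere in the function):

    if (flags & MSG_PEEK)
            skb_unref(skb);         /* drop the reference taken by peeking */
    else if (err)
            skb_queue_head(&sk->sk_receive_queue, skb); /* undo the dequeue */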
+19 -3
net/l2tp/l2tp_core.c
··· 1870 1870 }
1871 1871 }
1872 1872
1873 + static int l2tp_idr_item_unexpected(int id, void *p, void *data)
1874 + {
1875 + const char *idr_name = data;
1876 +
1877 + pr_err("l2tp: %s IDR not empty at net %d exit\n", idr_name, id);
1878 + WARN_ON_ONCE(1);
1879 + return 1;
1880 + }
1881 +
1873 1882 static __net_exit void l2tp_exit_net(struct net *net)
1874 1883 {
1875 1884 struct l2tp_net *pn = l2tp_pernet(net);
1876 1885
1877 - WARN_ON_ONCE(!idr_is_empty(&pn->l2tp_v2_session_idr));
1886 + /* Our per-net IDRs should be empty. Check that this is so, to
1887 + * help catch cleanup races or refcnt leaks.
1888 + */
1889 + idr_for_each(&pn->l2tp_v2_session_idr, l2tp_idr_item_unexpected,
1890 + "v2_session");
1891 + idr_for_each(&pn->l2tp_v3_session_idr, l2tp_idr_item_unexpected,
1892 + "v3_session");
1893 + idr_for_each(&pn->l2tp_tunnel_idr, l2tp_idr_item_unexpected,
1894 + "tunnel");
1895 +
1878 1896 idr_destroy(&pn->l2tp_v2_session_idr);
1879 - WARN_ON_ONCE(!idr_is_empty(&pn->l2tp_v3_session_idr));
1880 1897 idr_destroy(&pn->l2tp_v3_session_idr);
1881 - WARN_ON_ONCE(!idr_is_empty(&pn->l2tp_tunnel_idr));
1882 1898 idr_destroy(&pn->l2tp_tunnel_idr);
1883 1899 }
1884 1900
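idr_is_empty() can only report that something was left behind; the l2tp change trades the bare WARN_ON_ONCE() for an idr_for_each() walk that names the offending IDR and id before warning. The callback contract does the work here: a nonzero return stops the iteration after the first hit. A generic sketch of the same pattern (the IDR field and label are hypothetical):

    static int report_leftover(int id, void *p, void *data)
    {
            pr_err("%s: unexpected entry, id %d\n", (const char *)data, id);
            WARN_ON_ONCE(1);
            return 1;       /* nonzero stops idr_for_each() */
    }

    /* at pernet exit, just before idr_destroy(): */
    idr_for_each(&pn->some_idr, report_leftover, "some_idr");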
+1 -1
net/llc/af_llc.c
··· 1098 1098 lock_sock(sk); 1099 1099 if (unlikely(level != SOL_LLC || optlen != sizeof(int))) 1100 1100 goto out; 1101 - rc = copy_from_sockptr(&opt, optval, sizeof(opt)); 1101 + rc = copy_safe_from_sockptr(&opt, sizeof(opt), optval, optlen); 1102 1102 if (rc) 1103 1103 goto out; 1104 1104 rc = -EINVAL;
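copy_from_sockptr() copies the kernel-side size unconditionally, so a setsockopt() caller passing a short optlen could have the kernel read past the end of the user buffer. copy_safe_from_sockptr() validates the length first. A sketch of its contract as defined in include/linux/sockptr.h:

    int opt, rc;

    /* 0 on success; -EINVAL if optlen < sizeof(opt);
     * -EFAULT if the user copy faults. Never positive.
     */
    rc = copy_safe_from_sockptr(&opt, sizeof(opt), optval, optlen);
    if (rc)
            return rc;

A malformed setsockopt() with a short buffer now fails cleanly with -EINVAL instead of copying stale bytes.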
+11 -10
net/netlink/af_netlink.c
··· 2181 2181 return tlvlen; 2182 2182 } 2183 2183 2184 + static bool nlmsg_check_in_payload(const struct nlmsghdr *nlh, const void *addr) 2185 + { 2186 + return !WARN_ON(addr < nlmsg_data(nlh) || 2187 + addr - (const void *) nlh >= nlh->nlmsg_len); 2188 + } 2189 + 2184 2190 static void 2185 - netlink_ack_tlv_fill(struct sk_buff *in_skb, struct sk_buff *skb, 2186 - const struct nlmsghdr *nlh, int err, 2191 + netlink_ack_tlv_fill(struct sk_buff *skb, const struct nlmsghdr *nlh, int err, 2187 2192 const struct netlink_ext_ack *extack) 2188 2193 { 2189 2194 if (extack->_msg) ··· 2200 2195 if (!err) 2201 2196 return; 2202 2197 2203 - if (extack->bad_attr && 2204 - !WARN_ON((u8 *)extack->bad_attr < in_skb->data || 2205 - (u8 *)extack->bad_attr >= in_skb->data + in_skb->len)) 2198 + if (extack->bad_attr && nlmsg_check_in_payload(nlh, extack->bad_attr)) 2206 2199 WARN_ON(nla_put_u32(skb, NLMSGERR_ATTR_OFFS, 2207 2200 (u8 *)extack->bad_attr - (const u8 *)nlh)); 2208 2201 if (extack->policy) ··· 2209 2206 if (extack->miss_type) 2210 2207 WARN_ON(nla_put_u32(skb, NLMSGERR_ATTR_MISS_TYPE, 2211 2208 extack->miss_type)); 2212 - if (extack->miss_nest && 2213 - !WARN_ON((u8 *)extack->miss_nest < in_skb->data || 2214 - (u8 *)extack->miss_nest > in_skb->data + in_skb->len)) 2209 + if (extack->miss_nest && nlmsg_check_in_payload(nlh, extack->miss_nest)) 2215 2210 WARN_ON(nla_put_u32(skb, NLMSGERR_ATTR_MISS_NEST, 2216 2211 (u8 *)extack->miss_nest - (const u8 *)nlh)); 2217 2212 } ··· 2238 2237 if (extack_len) { 2239 2238 nlh->nlmsg_flags |= NLM_F_ACK_TLVS; 2240 2239 if (skb_tailroom(skb) >= extack_len) { 2241 - netlink_ack_tlv_fill(cb->skb, skb, cb->nlh, 2240 + netlink_ack_tlv_fill(skb, cb->nlh, 2242 2241 nlk->dump_done_errno, extack); 2243 2242 nlmsg_end(skb, nlh); 2244 2243 } ··· 2497 2496 } 2498 2497 2499 2498 if (tlvlen) 2500 - netlink_ack_tlv_fill(in_skb, skb, nlh, err, extack); 2499 + netlink_ack_tlv_fill(skb, nlh, err, extack); 2501 2500 2502 2501 nlmsg_end(skb, rep); 2503 2502
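The false positive came from where the old code anchored its sanity check: extack->bad_attr and extack->miss_nest point into the request message, but they were bounds-checked against in_skb's live data pointer and length, which need no longer cover the request by the time a dump's ack is built. nlmsg_check_in_payload() anchors the check on the request nlmsghdr instead, the same base the emitted offset is computed from. The check in isolation, with the WARN inverted for readability (hypothetical helper name):

    /* addr is usable for an offset attribute only if it lies
     * inside the request message
     */
    static bool in_payload(const struct nlmsghdr *nlh, const void *addr)
    {
            return addr >= nlmsg_data(nlh) &&
                   addr - (const void *)nlh < nlh->nlmsg_len;
    }

The WARN_ON() in the real helper keeps the old behaviour of making bogus pointers loud rather than silently dropping the offset attribute.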
+4 -3
net/rxrpc/af_rxrpc.c
··· 707 707 ret = -EISCONN; 708 708 if (rx->sk.sk_state != RXRPC_UNBOUND) 709 709 goto error; 710 - ret = copy_from_sockptr(&min_sec_level, optval, 711 - sizeof(unsigned int)); 712 - if (ret < 0) 710 + ret = copy_safe_from_sockptr(&min_sec_level, 711 + sizeof(min_sec_level), 712 + optval, optlen); 713 + if (ret) 713 714 goto error; 714 715 ret = -EINVAL; 715 716 if (min_sec_level > RXRPC_SECURITY_MAX)
+6
net/sched/sch_fq.c
··· 332 332 */
333 333 if (q->internal.qlen >= 8)
334 334 return false;
335 +
336 + /* Ordering invariants fall apart if some delayed flows
337 + * are ready but we haven't serviced them yet.
338 + */
339 + if (q->time_next_delayed_flow <= now + q->offload_horizon)
340 + return false;
335 341
336 342 sk = skb->sk;
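fq's fast path may send a packet immediately only when nothing else is owed transmission first. The check added above extends that rule to throttled flows: if the earliest delayed flow comes due at or before now plus the offload horizon (the window within which pacing is delegated to the NIC), the fresh packet has to queue behind it or ordering breaks. With hypothetical values, all in nanoseconds as fq keeps them internally:

    /* now = 1000, q->time_next_delayed_flow = 1500, q->offload_horizon = 600:
     * 1500 <= 1000 + 600, so a delayed flow is due within the horizon and
     * the fresh packet must take the slow path behind it
     */
    if (q->time_next_delayed_flow <= now + q->offload_horizon)
            return false;   /* refuse the fast path */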
+1 -1
tools/testing/selftests/drivers/net/hw/lib/py/linkconfig.py
··· 218 218 json_data = process[0] 219 219 """Check if the field exist in the json data""" 220 220 if field not in json_data: 221 - raise KsftSkipEx(f"Field {field} does not exist in the output of interface {json_data["ifname"]}") 221 + raise KsftSkipEx(f'Field {field} does not exist in the output of interface {json_data["ifname"]}') 222 222 return json_data[field]
+1 -2
tools/testing/selftests/net/Makefile
··· 78 78 TEST_GEN_FILES += io_uring_zerocopy_tx 79 79 TEST_PROGS += io_uring_zerocopy_tx.sh 80 80 TEST_GEN_FILES += bind_bhash 81 - TEST_GEN_PROGS += netlink-dumps 82 81 TEST_GEN_PROGS += sk_bind_sendto_listen 83 82 TEST_GEN_PROGS += sk_connect_zero_addr 84 83 TEST_GEN_PROGS += sk_so_peek_off ··· 100 101 TEST_PROGS += busy_poll_test.sh 101 102 102 103 # YNL files, must be before "include ..lib.mk" 103 - YNL_GEN_FILES := busy_poller 104 + YNL_GEN_FILES := busy_poller netlink-dumps 104 105 TEST_GEN_FILES += $(YNL_GEN_FILES) 105 106 106 107 TEST_FILES := settings
+2 -3
tools/testing/selftests/net/rds/Makefile
··· 3 3 all: 4 4 @echo mk_build_dir="$(shell pwd)" > include.sh 5 5 6 - TEST_PROGS := run.sh \ 7 - test.py 6 + TEST_PROGS := run.sh 8 7 9 - TEST_FILES := include.sh 8 + TEST_FILES := include.sh test.py 10 9 11 10 EXTRA_CLEAN := /tmp/rds_logs include.sh 12 11
+95
tools/testing/selftests/net/rtnetlink.sh
··· 29 29 kci_test_bridge_parent_id
30 30 kci_test_address_proto
31 31 kci_test_enslave_bonding
32 + kci_test_mngtmpaddr
32 33 "
33 34
34 35 devdummy="test-dummy0"
···
45 44 if [ $ret -eq 0 ]; then
46 45 ret=$1
47 46 fi
47 + [ -n "$2" ] && echo "$2"
48 48 }
49 49
50 50 # same but inverted -- used when command must fail for test to pass
···
1239 1237
1240 1238 end_test "PASS: enslave interface in a bond"
1241 1239 ip netns del "$testns"
1240 + }
1241 +
1242 + # Called to validate the addresses on the device passed as $1:
1243 + #
1244 + # 1. Every `temporary` address must have a matching `mngtmpaddr`
1245 + # 2. Every `mngtmpaddr` address must have some un`deprecated` `temporary`
1246 + #
1247 + # If the mngtmpaddr or tempaddr check fails, return 0 to stop slowwait
1248 + validate_mngtmpaddr()
1249 + {
1250 + local dev=$1
1251 + local prefix=""
1252 + local addr_list=$(ip -j -n $testns addr show dev ${dev})
1253 + local temp_addrs=$(echo ${addr_list} | \
1254 + jq -r '.[].addr_info[] | select(.temporary == true) | .local')
1255 + local mng_prefixes=$(echo ${addr_list} | \
1256 + jq -r '.[].addr_info[] | select(.mngtmpaddr == true) | .local' | \
1257 + cut -d: -f1-4 | tr '\n' ' ')
1258 + local undep_prefixes=$(echo ${addr_list} | \
1259 + jq -r '.[].addr_info[] | select(.temporary == true and .deprecated != true) | .local' | \
1260 + cut -d: -f1-4 | tr '\n' ' ')
1261 +
1262 + # 1. All temporary addresses (temp and dep) must have a matching mngtmpaddr
1263 + for address in ${temp_addrs}; do
1264 + prefix=$(echo ${address} | cut -d: -f1-4)
1265 + if [[ ! " ${mng_prefixes} " =~ " $prefix " ]]; then
1266 + check_err 1 "FAIL: Temporary $address with no matching mngtmpaddr!";
1267 + return 0
1268 + fi
1269 + done
1270 +
1271 + # 2. All mngtmpaddr addresses must have a temporary address (not dep)
1272 + for prefix in ${mng_prefixes}; do
1273 + if [[ ! " ${undep_prefixes} " =~ " $prefix " ]]; then
1274 + check_err 1 "FAIL: No undeprecated temporary in $prefix!";
1275 + return 0
1276 + fi
1277 + done
1278 +
1279 + return 1
1280 + }
1281 +
1282 + kci_test_mngtmpaddr()
1283 + {
1284 + local ret=0
1285 +
1286 + setup_ns testns
1287 + if [ $? -ne 0 ]; then
1288 + end_test "SKIP mngtmpaddr tests: cannot add net namespace $testns"
1289 + return $ksft_skip
1290 + fi
1291 +
1292 + # 1. Create a dummy Ethernet interface
1293 + run_cmd ip -n $testns link add ${devdummy} type dummy
1294 + run_cmd ip -n $testns link set ${devdummy} up
1295 + run_cmd ip netns exec $testns sysctl -w net.ipv6.conf.${devdummy}.use_tempaddr=1
1296 + run_cmd ip netns exec $testns sysctl -w net.ipv6.conf.${devdummy}.temp_prefered_lft=10
1297 + run_cmd ip netns exec $testns sysctl -w net.ipv6.conf.${devdummy}.temp_valid_lft=25
1298 + run_cmd ip netns exec $testns sysctl -w net.ipv6.conf.${devdummy}.max_desync_factor=1
1299 +
1300 + # 2. Create several mngtmpaddr addresses on that interface,
1301 + # with temp_*_lft configured to be pretty short (10 and 25 seconds
1302 + # for preferred/valid respectively)
1303 + for i in $(seq 1 9); do
1304 + run_cmd ip -n $testns addr add 2001:db8:7e57:${i}::1/64 mngtmpaddr dev ${devdummy}
1305 + done
1306 +
1307 + # 3. Confirm that a preferred temporary address exists for each mngtmpaddr
1308 + # address at all times, polling once per second for 30 seconds.
1309 + slowwait 30 validate_mngtmpaddr ${devdummy}
1310 +
1311 + # 4. Delete each mngtmpaddr address, one at a time (alternating between
1312 + # deleting and merely un-mngtmpaddr-ing), and confirm that the other
1313 + # mngtmpaddr addresses still have preferred temporaries.
1314 + for i in $(seq 1 9); do
1315 + (( $i % 4 == 0 )) && mng_flag="mngtmpaddr" || mng_flag=""
1316 + if (( $i % 2 == 0 )); then
1317 + run_cmd ip -n $testns addr del 2001:db8:7e57:${i}::1/64 $mng_flag dev ${devdummy}
1318 + else
1319 + run_cmd ip -n $testns addr change 2001:db8:7e57:${i}::1/64 dev ${devdummy}
1320 + fi
1321 + # the temp addr should be deleted
1322 + validate_mngtmpaddr ${devdummy}
1323 + done
1324 +
1325 + if [ $ret -ne 0 ]; then
1326 + end_test "FAIL: mngtmpaddr add/remove handled incorrectly"
1327 + else
1328 + end_test "PASS: mngtmpaddr add/remove handled correctly"
1329 + fi
1330 +
1331 + ip netns del "$testns"
1332 + return $ret
1242 1333 }
1243 1334
1244 1335 kci_test_rtnl()