Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'net-6.16-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
"Including fixes from wireless.

The ath12k fix to avoid FW crashes requires adding support for a
number of new FW commands so it's quite large in terms of LoC. The
rest is relatively small.

Current release - fix to a fix:

- ptp: fix breakage after ptp_vclock_in_use() rework

Current release - regressions:

- openvswitch: allocate struct ovs_pcpu_storage dynamically, static
allocation may exhaust module loader limit on smaller systems

Previous releases - regressions:

- tcp: fix tcp_packet_delayed() for peers with no selective ACK
support

Previous releases - always broken:

- wifi: ath12k: don't activate more links than firmware supports

- tcp: make sure sockets open via passive TFO have valid NAPI ID

- eth: bnxt_en: update MRU and RSS table of RSS contexts on queue
reset, prevent Rx queues from silently hanging after queue reset

- NFC: uart: set tty->disc_data only in success path"

* tag 'net-6.16-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (59 commits)
net: airoha: Differentiate hwfd buffer size for QDMA0 and QDMA1
net: airoha: Compute number of descriptors according to reserved memory size
tools: ynl: fix mixing ops and notifications on one socket
net: atm: fix /proc/net/atm/lec handling
net: atm: add lec_mutex
mlxbf_gige: return EPROBE_DEFER if PHY IRQ is not available
net: airoha: Always check return value from airoha_ppe_foe_get_entry()
NFC: nci: uart: Set tty->disc_data only in success path
calipso: Fix null-ptr-deref in calipso_req_{set,del}attr().
MAINTAINERS: Remove Shannon Nelson from MAINTAINERS file
net: lan743x: fix potential out-of-bounds write in lan743x_ptp_io_event_clock_get()
eth: fbnic: avoid double free when failing to DMA-map FW msg
tcp: fix passive TFO socket having invalid NAPI ID
selftests: net: add test for passive TFO socket NAPI ID
selftests: net: add passive TFO test binary
selftests: netdevsim: improve lib.sh include in peer.sh
tipc: fix null-ptr-deref when acquiring remote ip of ethernet bearer
Octeontx2-pf: Fix Backpresure configuration
net: ftgmac100: select FIXED_PHY
net: ethtool: remove duplicate defines for family info
...

+2090 -287
+4 -3
.mailmap
···
 Serge Hallyn <sergeh@kernel.org> <serue@us.ibm.com>
 Seth Forshee <sforshee@kernel.org> <seth.forshee@canonical.com>
 Shakeel Butt <shakeel.butt@linux.dev> <shakeelb@google.com>
-Shannon Nelson <shannon.nelson@amd.com> <snelson@pensando.io>
-Shannon Nelson <shannon.nelson@amd.com> <shannon.nelson@intel.com>
-Shannon Nelson <shannon.nelson@amd.com> <shannon.nelson@oracle.com>
+Shannon Nelson <sln@onemain.com> <shannon.nelson@amd.com>
+Shannon Nelson <sln@onemain.com> <snelson@pensando.io>
+Shannon Nelson <sln@onemain.com> <shannon.nelson@intel.com>
+Shannon Nelson <sln@onemain.com> <shannon.nelson@oracle.com>
 Sharath Chandra Vurukala <quic_sharathv@quicinc.com> <sharathv@codeaurora.org>
 Shiraz Hashim <shiraz.linux.kernel@gmail.com> <shiraz.hashim@st.com>
 Shuah Khan <shuah@kernel.org> <shuahkhan@gmail.com>
+3
Documentation/netlink/specs/ethtool.yaml
···
 doc: Partial family for Ethtool Netlink.
 uapi-header: linux/ethtool_netlink_generated.h

+c-family-name: ethtool-genl-name
+c-version-name: ethtool-genl-version
+
 definitions:
   -
     name: udp-tunnel-type
+1 -4
MAINTAINERS
···
 F: arch/x86/kernel/amd_node.c

 AMD PDS CORE DRIVER
-M: Shannon Nelson <shannon.nelson@amd.com>
 M: Brett Creeley <brett.creeley@amd.com>
 L: netdev@vger.kernel.org
 S: Maintained
···
 FWCTL PDS DRIVER
 M: Brett Creeley <brett.creeley@amd.com>
-R: Shannon Nelson <shannon.nelson@amd.com>
 L: linux-kernel@vger.kernel.org
 S: Maintained
 F: drivers/fwctl/pds/
···
 F: include/crypto/pcrypt.h

 PDS DSC VIRTIO DATA PATH ACCELERATOR
-R: Shannon Nelson <shannon.nelson@amd.com>
+R: Brett Creeley <brett.creeley@amd.com>
 F: drivers/vdpa/pds/

 PECI HARDWARE MONITORING DRIVERS
···
 F: include/linux/peci.h

 PENSANDO ETHERNET DRIVERS
-M: Shannon Nelson <shannon.nelson@amd.com>
 M: Brett Creeley <brett.creeley@amd.com>
 L: netdev@vger.kernel.org
 S: Maintained
+3 -1
drivers/atm/atmtcp.c
···
 	struct sk_buff *new_skb;
 	int result = 0;

-	if (!skb->len) return 0;
+	if (skb->len < sizeof(struct atmtcp_hdr))
+		goto done;
+
 	dev = vcc->dev_data;
 	hdr = (struct atmtcp_hdr *) skb->data;
 	if (hdr->length == ATMTCP_HDR_MAGIC) {
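The point of the atmtcp change is that a non-zero-length check is not enough: the code immediately casts `skb->data` to `struct atmtcp_hdr *`, so the buffer must hold at least a full header. A minimal user-space sketch of the guard (the struct layout here is an assumed stand-in, only its size matters):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Assumed stand-in for the on-wire header; only sizeof() is relevant. */
struct atmtcp_hdr_sketch {
	uint16_t vpi;
	uint16_t vci;
	uint32_t length;
};

/* Return nonzero only when the buffer can hold a complete header,
 * so a following cast-and-read cannot run past the end. */
static int hdr_len_ok(size_t skb_len)
{
	return skb_len >= sizeof(struct atmtcp_hdr_sketch);
}
```

Short frames (including the previously-accepted 1..7 byte ones) now fail the check before any field of the header is dereferenced.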
+5 -4
drivers/net/can/m_can/tcan4x5x-core.c
···
 	priv = cdev_to_priv(mcan_class);

 	priv->power = devm_regulator_get_optional(&spi->dev, "vsup");
-	if (PTR_ERR(priv->power) == -EPROBE_DEFER) {
-		ret = -EPROBE_DEFER;
-		goto out_m_can_class_free_dev;
-	} else {
+	if (IS_ERR(priv->power)) {
+		if (PTR_ERR(priv->power) == -EPROBE_DEFER) {
+			ret = -EPROBE_DEFER;
+			goto out_m_can_class_free_dev;
+		}
 		priv->power = NULL;
 	}
+16 -11
drivers/net/ethernet/airoha/airoha_eth.c
···
 static int airoha_qdma_init_hfwd_queues(struct airoha_qdma *qdma)
 {
+	int size, index, num_desc = HW_DSCP_NUM;
 	struct airoha_eth *eth = qdma->eth;
 	int id = qdma - &eth->qdma[0];
+	u32 status, buf_size;
 	dma_addr_t dma_addr;
 	const char *name;
-	int size, index;
-	u32 status;
-
-	size = HW_DSCP_NUM * sizeof(struct airoha_qdma_fwd_desc);
-	if (!dmam_alloc_coherent(eth->dev, size, &dma_addr, GFP_KERNEL))
-		return -ENOMEM;
-
-	airoha_qdma_wr(qdma, REG_FWD_DSCP_BASE, dma_addr);

 	name = devm_kasprintf(eth->dev, GFP_KERNEL, "qdma%d-buf", id);
 	if (!name)
 		return -ENOMEM;

+	buf_size = id ? AIROHA_MAX_PACKET_SIZE / 2 : AIROHA_MAX_PACKET_SIZE;
 	index = of_property_match_string(eth->dev->of_node,
 					 "memory-region-names", name);
 	if (index >= 0) {
···
 		rmem = of_reserved_mem_lookup(np);
 		of_node_put(np);
 		dma_addr = rmem->base;
+		/* Compute the number of hw descriptors according to the
+		 * reserved memory size and the payload buffer size
+		 */
+		num_desc = div_u64(rmem->size, buf_size);
 	} else {
-		size = AIROHA_MAX_PACKET_SIZE * HW_DSCP_NUM;
+		size = buf_size * num_desc;
 		if (!dmam_alloc_coherent(eth->dev, size, &dma_addr,
 					 GFP_KERNEL))
 			return -ENOMEM;
···

 	airoha_qdma_wr(qdma, REG_FWD_BUF_BASE, dma_addr);

+	size = num_desc * sizeof(struct airoha_qdma_fwd_desc);
+	if (!dmam_alloc_coherent(eth->dev, size, &dma_addr, GFP_KERNEL))
+		return -ENOMEM;
+
+	airoha_qdma_wr(qdma, REG_FWD_DSCP_BASE, dma_addr);
+	/* QDMA0: 2KB. QDMA1: 1KB */
 	airoha_qdma_rmw(qdma, REG_HW_FWD_DSCP_CFG,
 			HW_FWD_DSCP_PAYLOAD_SIZE_MASK,
-			FIELD_PREP(HW_FWD_DSCP_PAYLOAD_SIZE_MASK, 0));
+			FIELD_PREP(HW_FWD_DSCP_PAYLOAD_SIZE_MASK, !!id));
 	airoha_qdma_rmw(qdma, REG_FWD_DSCP_LOW_THR, FWD_DSCP_LOW_THR_MASK,
 			FIELD_PREP(FWD_DSCP_LOW_THR_MASK, 128));
 	airoha_qdma_rmw(qdma, REG_LMGR_INIT_CFG,
 			LMGR_INIT_START | LMGR_SRAM_MODE_MASK |
 			HW_FWD_DESC_NUM_MASK,
-			FIELD_PREP(HW_FWD_DESC_NUM_MASK, HW_DSCP_NUM) |
+			FIELD_PREP(HW_FWD_DESC_NUM_MASK, num_desc) |
 			LMGR_INIT_START | LMGR_SRAM_MODE_MASK);

 	return read_poll_timeout(airoha_qdma_rr, status,
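The arithmetic behind the two airoha patches is simple to check in isolation: QDMA1 gets half-size payload buffers, and when a reserved-memory region backs the buffers, the descriptor count is the region size divided by the buffer size. A user-space sketch (the 2 KB value is an assumed stand-in for `AIROHA_MAX_PACKET_SIZE`, not taken from the driver headers):

```c
#include <assert.h>
#include <stdint.h>

#define SKETCH_MAX_PACKET_SIZE 2048u	/* assumed stand-in value */

/* QDMA1 (id != 0) uses half-size payload buffers, so for the same
 * reserved-memory region it gets twice as many forwarding descriptors. */
static uint32_t hfwd_num_desc(uint64_t rmem_size, int qdma_id)
{
	uint32_t buf_size = qdma_id ? SKETCH_MAX_PACKET_SIZE / 2
				    : SKETCH_MAX_PACKET_SIZE;

	return (uint32_t)(rmem_size / buf_size);
}
```

With a 4 MiB region, QDMA0 ends up with 2048 descriptors and QDMA1 with 4096, which is why the descriptor count can no longer be the fixed `HW_DSCP_NUM`.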
+3 -1
drivers/net/ethernet/airoha/airoha_ppe.c
···
 	int idle;

 	hwe = airoha_ppe_foe_get_entry(ppe, iter->hash);
-	ib1 = READ_ONCE(hwe->ib1);
+	if (!hwe)
+		continue;

+	ib1 = READ_ONCE(hwe->ib1);
 	state = FIELD_GET(AIROHA_FOE_IB1_BIND_STATE, ib1);
 	if (state != AIROHA_FOE_STATE_BIND) {
 		iter->hash = 0xffff;
+74 -13
drivers/net/ethernet/broadcom/bnxt/bnxt.c
···
 	bp->num_rss_ctx--;
 }

+static bool bnxt_vnic_has_rx_ring(struct bnxt *bp, struct bnxt_vnic_info *vnic,
+				  int rxr_id)
+{
+	u16 tbl_size = bnxt_get_rxfh_indir_size(bp->dev);
+	int i, vnic_rx;
+
+	/* Ntuple VNIC always has all the rx rings. Any change of ring id
+	 * must be updated because a future filter may use it.
+	 */
+	if (vnic->flags & BNXT_VNIC_NTUPLE_FLAG)
+		return true;
+
+	for (i = 0; i < tbl_size; i++) {
+		if (vnic->flags & BNXT_VNIC_RSSCTX_FLAG)
+			vnic_rx = ethtool_rxfh_context_indir(vnic->rss_ctx)[i];
+		else
+			vnic_rx = bp->rss_indir_tbl[i];
+
+		if (rxr_id == vnic_rx)
+			return true;
+	}
+
+	return false;
+}
+
+static int bnxt_set_vnic_mru_p5(struct bnxt *bp, struct bnxt_vnic_info *vnic,
+				u16 mru, int rxr_id)
+{
+	int rc;
+
+	if (!bnxt_vnic_has_rx_ring(bp, vnic, rxr_id))
+		return 0;
+
+	if (mru) {
+		rc = bnxt_hwrm_vnic_set_rss_p5(bp, vnic, true);
+		if (rc) {
+			netdev_err(bp->dev, "hwrm vnic %d set rss failure rc: %d\n",
+				   vnic->vnic_id, rc);
+			return rc;
+		}
+	}
+	vnic->mru = mru;
+	bnxt_hwrm_vnic_update(bp, vnic,
+			      VNIC_UPDATE_REQ_ENABLES_MRU_VALID);
+
+	return 0;
+}
+
+static int bnxt_set_rss_ctx_vnic_mru(struct bnxt *bp, u16 mru, int rxr_id)
+{
+	struct ethtool_rxfh_context *ctx;
+	unsigned long context;
+	int rc;
+
+	xa_for_each(&bp->dev->ethtool->rss_ctx, context, ctx) {
+		struct bnxt_rss_ctx *rss_ctx = ethtool_rxfh_context_priv(ctx);
+		struct bnxt_vnic_info *vnic = &rss_ctx->vnic;
+
+		rc = bnxt_set_vnic_mru_p5(bp, vnic, mru, rxr_id);
+		if (rc)
+			return rc;
+	}
+
+	return 0;
+}
+
 static void bnxt_hwrm_realloc_rss_ctx_vnic(struct bnxt *bp)
 {
 	bool set_tpa = !!(bp->flags & BNXT_FLAG_TPA);
···
 	struct bnxt_vnic_info *vnic;
 	struct bnxt_napi *bnapi;
 	int i, rc;
+	u16 mru;

 	rxr = &bp->rx_ring[idx];
 	clone = qmem;
···
 	napi_enable_locked(&bnapi->napi);
 	bnxt_db_nq_arm(bp, &cpr->cp_db, cpr->cp_raw_cons);

+	mru = bp->dev->mtu + ETH_HLEN + VLAN_HLEN;
 	for (i = 0; i < bp->nr_vnics; i++) {
 		vnic = &bp->vnic_info[i];

-		rc = bnxt_hwrm_vnic_set_rss_p5(bp, vnic, true);
-		if (rc) {
-			netdev_err(bp->dev, "hwrm vnic %d set rss failure rc: %d\n",
-				   vnic->vnic_id, rc);
+		rc = bnxt_set_vnic_mru_p5(bp, vnic, mru, idx);
+		if (rc)
 			return rc;
-		}
-		vnic->mru = bp->dev->mtu + ETH_HLEN + VLAN_HLEN;
-		bnxt_hwrm_vnic_update(bp, vnic,
-				      VNIC_UPDATE_REQ_ENABLES_MRU_VALID);
 	}
-
-	return 0;
+	return bnxt_set_rss_ctx_vnic_mru(bp, mru, idx);

 err_reset:
 	netdev_err(bp->dev, "Unexpected HWRM error during queue start rc: %d\n",
···
 	for (i = 0; i < bp->nr_vnics; i++) {
 		vnic = &bp->vnic_info[i];
-		vnic->mru = 0;
-		bnxt_hwrm_vnic_update(bp, vnic,
-				      VNIC_UPDATE_REQ_ENABLES_MRU_VALID);
+
+		bnxt_set_vnic_mru_p5(bp, vnic, 0, idx);
 	}
+	bnxt_set_rss_ctx_vnic_mru(bp, 0, idx);
 	/* Make sure NAPI sees that the VNIC is disabled */
 	synchronize_net();
 	rxr = &bp->rx_ring[idx];
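The core of the bnxt fix is a membership test: a VNIC only needs its MRU/RSS refreshed on queue restart if the restarted ring id actually appears in that VNIC's RSS indirection table. Stripped of the driver types, the scan looks like this (a user-space sketch, not the driver's actual helper):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Walk an RSS indirection table and report whether the given rx ring
 * id is referenced by it; only then does the VNIC need updating. */
static bool vnic_has_rx_ring(const int *indir_tbl, size_t tbl_size,
			     int rxr_id)
{
	for (size_t i = 0; i < tbl_size; i++)
		if (indir_tbl[i] == rxr_id)
			return true;

	return false;
}
```

In the driver the table is either the device-wide `rss_indir_tbl` or the per-context `ethtool_rxfh_context_indir()` table, and ntuple VNICs short-circuit to true since a future filter may target any ring.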
+10 -14
drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
···
 		return;

 	mutex_lock(&edev->en_dev_lock);
-	if (!bnxt_ulp_registered(edev)) {
-		mutex_unlock(&edev->en_dev_lock);
-		return;
-	}
+	if (!bnxt_ulp_registered(edev) ||
+	    (edev->flags & BNXT_EN_FLAG_ULP_STOPPED))
+		goto ulp_stop_exit;

 	edev->flags |= BNXT_EN_FLAG_ULP_STOPPED;
 	if (aux_priv) {
···
 			adrv->suspend(adev, pm);
 		}
 	}
+ulp_stop_exit:
 	mutex_unlock(&edev->en_dev_lock);
 }
···
 	struct bnxt_aux_priv *aux_priv = bp->aux_priv;
 	struct bnxt_en_dev *edev = bp->edev;

-	if (!edev)
-		return;
-
-	edev->flags &= ~BNXT_EN_FLAG_ULP_STOPPED;
-
-	if (err)
+	if (!edev || err)
 		return;

 	mutex_lock(&edev->en_dev_lock);
-	if (!bnxt_ulp_registered(edev)) {
-		mutex_unlock(&edev->en_dev_lock);
-		return;
-	}
+	if (!bnxt_ulp_registered(edev) ||
+	    !(edev->flags & BNXT_EN_FLAG_ULP_STOPPED))
+		goto ulp_start_exit;

 	if (edev->ulp_tbl->msix_requested)
 		bnxt_fill_msix_vecs(bp, edev->msix_entries);
···
 			adrv->resume(adev);
 		}
 	}
+ulp_start_exit:
+	edev->flags &= ~BNXT_EN_FLAG_ULP_STOPPED;
 	mutex_unlock(&edev->en_dev_lock);
 }
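What this bnxt_ulp change buys is idempotent stop/start pairing: checking and flipping `BNXT_EN_FLAG_ULP_STOPPED` under the lock means a repeated stop (or a start without a preceding stop) becomes a no-op instead of double-suspending or double-resuming the aux drivers. A simplified, lock-free model of that pairing (names and counters are illustrative only):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the stop/start flag discipline: stop is a no-op when
 * already stopped, start is a no-op unless a stop actually happened. */
struct edev_sketch {
	bool stopped;
	int suspends;	/* times the aux drivers were suspended */
	int resumes;	/* times the aux drivers were resumed */
};

static void ulp_stop(struct edev_sketch *e)
{
	if (e->stopped)
		return;		/* already stopped: do nothing */
	e->stopped = true;
	e->suspends++;
}

static void ulp_start(struct edev_sketch *e)
{
	if (!e->stopped)
		return;		/* no matching stop: do nothing */
	e->resumes++;
	e->stopped = false;
}
```

Back-to-back stops or starts leave the counters balanced, which is the invariant the real patch restores.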
+1
drivers/net/ethernet/faraday/Kconfig
···
 	depends on ARM || COMPILE_TEST
 	depends on !64BIT || BROKEN
 	select PHYLIB
+	select FIXED_PHY
 	select MDIO_ASPEED if MACH_ASPEED_G6
 	select CRC32
 	help
+11 -3
drivers/net/ethernet/intel/e1000e/netdev.c
···
 	case e1000_pch_cnp:
 	case e1000_pch_tgp:
 	case e1000_pch_adp:
-	case e1000_pch_mtp:
-	case e1000_pch_lnp:
-	case e1000_pch_ptp:
 	case e1000_pch_nvp:
 		if (er32(TSYNCRXCTL) & E1000_TSYNCRXCTL_SYSCFI) {
 			/* Stable 24MHz frequency */
···
 			shift = INCVALUE_SHIFT_38400KHZ;
 			adapter->cc.shift = shift;
 		}
+		break;
+	case e1000_pch_mtp:
+	case e1000_pch_lnp:
+	case e1000_pch_ptp:
+		/* System firmware can misreport this value, so set it to a
+		 * stable 38400KHz frequency.
+		 */
+		incperiod = INCPERIOD_38400KHZ;
+		incvalue = INCVALUE_38400KHZ;
+		shift = INCVALUE_SHIFT_38400KHZ;
+		adapter->cc.shift = shift;
 		break;
 	case e1000_82574:
 	case e1000_82583:
+5 -3
drivers/net/ethernet/intel/e1000e/ptp.c
···
 	case e1000_pch_cnp:
 	case e1000_pch_tgp:
 	case e1000_pch_adp:
-	case e1000_pch_mtp:
-	case e1000_pch_lnp:
-	case e1000_pch_ptp:
 	case e1000_pch_nvp:
 		if (er32(TSYNCRXCTL) & E1000_TSYNCRXCTL_SYSCFI)
 			adapter->ptp_clock_info.max_adj = MAX_PPB_24MHZ;
 		else
 			adapter->ptp_clock_info.max_adj = MAX_PPB_38400KHZ;
+		break;
+	case e1000_pch_mtp:
+	case e1000_pch_lnp:
+	case e1000_pch_ptp:
+		adapter->ptp_clock_info.max_adj = MAX_PPB_38400KHZ;
 		break;
 	case e1000_82574:
 	case e1000_82583:
+48
drivers/net/ethernet/intel/ice/ice_arfs.c
···
 }

 /**
+ * ice_arfs_cmp - Check if aRFS filter matches this flow.
+ * @fltr_info: filter info of the saved ARFS entry.
+ * @fk: flow dissector keys.
+ * @n_proto: One of htons(ETH_P_IP) or htons(ETH_P_IPV6).
+ * @ip_proto: One of IPPROTO_TCP or IPPROTO_UDP.
+ *
+ * Since this function assumes limited values for n_proto and ip_proto, it
+ * is meant to be called only from ice_rx_flow_steer().
+ *
+ * Return:
+ * * true - fltr_info refers to the same flow as fk.
+ * * false - fltr_info and fk refer to different flows.
+ */
+static bool
+ice_arfs_cmp(const struct ice_fdir_fltr *fltr_info, const struct flow_keys *fk,
+	     __be16 n_proto, u8 ip_proto)
+{
+	/* Determine if the filter is for IPv4 or IPv6 based on flow_type,
+	 * which is one of ICE_FLTR_PTYPE_NONF_IPV{4,6}_{TCP,UDP}.
+	 */
+	bool is_v4 = fltr_info->flow_type == ICE_FLTR_PTYPE_NONF_IPV4_TCP ||
+		     fltr_info->flow_type == ICE_FLTR_PTYPE_NONF_IPV4_UDP;
+
+	/* Following checks are arranged in the quickest and most discriminative
+	 * fields first for early failure.
+	 */
+	if (is_v4)
+		return n_proto == htons(ETH_P_IP) &&
+		       fltr_info->ip.v4.src_port == fk->ports.src &&
+		       fltr_info->ip.v4.dst_port == fk->ports.dst &&
+		       fltr_info->ip.v4.src_ip == fk->addrs.v4addrs.src &&
+		       fltr_info->ip.v4.dst_ip == fk->addrs.v4addrs.dst &&
+		       fltr_info->ip.v4.proto == ip_proto;
+
+	return fltr_info->ip.v6.src_port == fk->ports.src &&
+	       fltr_info->ip.v6.dst_port == fk->ports.dst &&
+	       fltr_info->ip.v6.proto == ip_proto &&
+	       !memcmp(&fltr_info->ip.v6.src_ip, &fk->addrs.v6addrs.src,
+		       sizeof(struct in6_addr)) &&
+	       !memcmp(&fltr_info->ip.v6.dst_ip, &fk->addrs.v6addrs.dst,
+		       sizeof(struct in6_addr));
+}
+
+/**
  * ice_rx_flow_steer - steer the Rx flow to where application is being run
  * @netdev: ptr to the netdev being adjusted
  * @skb: buffer with required header information
···
 			continue;

 		fltr_info = &arfs_entry->fltr_info;
+
+		if (!ice_arfs_cmp(fltr_info, &fk, n_proto, ip_proto))
+			continue;
+
 		ret = fltr_info->fltr_id;

 		if (fltr_info->q_index == rxq_idx ||
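The new `ice_arfs_cmp()` is a full 5-tuple comparison, ordered so that the cheapest, most discriminative fields (the ports) fail first. The same idea reduced to a standalone sketch (simplified IPv4-only types, not the driver's structures):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Simplified 5-tuple in the spirit of ice_arfs_cmp(): compare ports
 * before addresses so mismatching flows bail out early. */
struct tuple5 {
	uint16_t src_port, dst_port;
	uint32_t src_ip, dst_ip;
	uint8_t proto;
};

static bool tuple5_match(const struct tuple5 *a, const struct tuple5 *b)
{
	return a->src_port == b->src_port &&
	       a->dst_port == b->dst_port &&
	       a->src_ip == b->src_ip &&
	       a->dst_ip == b->dst_ip &&
	       a->proto == b->proto;
}
```

Without such a comparison, the lookup could accept a saved filter whose hash bucket matches but whose flow does not, which is exactly the class of bug this hunk closes.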
+5 -1
drivers/net/ethernet/intel/ice/ice_eswitch.c
···
  */
 int ice_eswitch_attach_vf(struct ice_pf *pf, struct ice_vf *vf)
 {
-	struct ice_repr *repr = ice_repr_create_vf(vf);
 	struct devlink *devlink = priv_to_devlink(pf);
+	struct ice_repr *repr;
 	int err;

+	if (!ice_is_eswitch_mode_switchdev(pf))
+		return 0;
+
+	repr = ice_repr_create_vf(vf);
 	if (IS_ERR(repr))
 		return PTR_ERR(repr);
+2 -2
drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
···
 		req->chan_cnt = IEEE_8021QAZ_MAX_TCS;
 		req->bpid_per_chan = 1;
 	} else {
-		req->chan_cnt = 1;
+		req->chan_cnt = pfvf->hw.rx_chan_cnt;
 		req->bpid_per_chan = 0;
 	}
···
 		req->chan_cnt = IEEE_8021QAZ_MAX_TCS;
 		req->bpid_per_chan = 1;
 	} else {
-		req->chan_cnt = 1;
+		req->chan_cnt = pfvf->hw.rx_chan_cnt;
 		req->bpid_per_chan = 0;
 	}
+4 -2
drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_main.c
···
 	priv->llu_plu_irq = platform_get_irq(pdev, MLXBF_GIGE_LLU_PLU_INTR_IDX);

 	phy_irq = acpi_dev_gpio_irq_get_by(ACPI_COMPANION(&pdev->dev), "phy", 0);
-	if (phy_irq < 0) {
-		dev_err(&pdev->dev, "Error getting PHY irq. Use polling instead");
+	if (phy_irq == -EPROBE_DEFER) {
+		err = -EPROBE_DEFER;
+		goto out;
+	} else if (phy_irq < 0) {
 		phy_irq = PHY_POLL;
 	}
+1 -4
drivers/net/ethernet/meta/fbnic/fbnic_fw.c
···
 		return -EBUSY;

 	addr = dma_map_single(fbd->dev, msg, PAGE_SIZE, direction);
-	if (dma_mapping_error(fbd->dev, addr)) {
-		free_page((unsigned long)msg);
-
+	if (dma_mapping_error(fbd->dev, addr))
 		return -ENOSPC;
-	}

 	mbx->buf_info[tail].msg = msg;
 	mbx->buf_info[tail].addr = addr;
+2 -2
drivers/net/ethernet/microchip/lan743x_ptp.h
···
  */
 #define LAN743X_PTP_N_EVENT_CHAN	2
 #define LAN743X_PTP_N_PEROUT		LAN743X_PTP_N_EVENT_CHAN
-#define LAN743X_PTP_N_EXTTS		4
-#define LAN743X_PTP_N_PPS		0
 #define PCI11X1X_PTP_IO_MAX_CHANNELS	8
+#define LAN743X_PTP_N_EXTTS		PCI11X1X_PTP_IO_MAX_CHANNELS
+#define LAN743X_PTP_N_PPS		0
 #define PTP_CMD_CTL_TIMEOUT_CNT		50

 struct lan743x_adapter;
+2 -1
drivers/net/ethernet/pensando/ionic/ionic_main.c
···
 	unsigned long start_time;
 	unsigned long max_wait;
 	unsigned long duration;
-	int done = 0;
 	bool fw_up;
 	int opcode;
+	bool done;
 	int err;

 	/* Wait for dev cmd to complete, retrying if we get EAGAIN,
···
 	max_wait = jiffies + (max_seconds * HZ);
 try_again:
+	done = false;
 	opcode = idev->opcode;
 	start_time = jiffies;
 	for (fw_up = ionic_is_fw_running(idev);
+2 -17
drivers/net/ethernet/ti/icssg/icssg_common.c
···
 {
 	struct cppi5_host_desc_t *first_desc, *next_desc;
 	dma_addr_t buf_dma, next_desc_dma;
-	struct prueth_swdata *swdata;
-	struct page *page;
 	u32 buf_dma_len;

 	first_desc = desc;
 	next_desc = first_desc;
-
-	swdata = cppi5_hdesc_get_swdata(desc);
-	if (swdata->type == PRUETH_SWDATA_PAGE) {
-		page = swdata->data.page;
-		page_pool_recycle_direct(page->pp, swdata->data.page);
-		goto free_desc;
-	}

 	cppi5_hdesc_get_obuf(first_desc, &buf_dma, &buf_dma_len);
 	k3_udma_glue_tx_cppi5_to_dma_addr(tx_chn->tx_chn, &buf_dma);
···
 		k3_cppi_desc_pool_free(tx_chn->desc_pool, next_desc);
 	}

-free_desc:
 	k3_cppi_desc_pool_free(tx_chn->desc_pool, first_desc);
 }
 EXPORT_SYMBOL_GPL(prueth_xmit_free);
···
 	k3_udma_glue_tx_dma_to_cppi5_addr(tx_chn->tx_chn, &buf_dma);
 	cppi5_hdesc_attach_buf(first_desc, buf_dma, xdpf->len, buf_dma, xdpf->len);
 	swdata = cppi5_hdesc_get_swdata(first_desc);
-	if (page) {
-		swdata->type = PRUETH_SWDATA_PAGE;
-		swdata->data.page = page;
-	} else {
-		swdata->type = PRUETH_SWDATA_XDPF;
-		swdata->data.xdpf = xdpf;
-	}
+	swdata->type = PRUETH_SWDATA_XDPF;
+	swdata->data.xdpf = xdpf;

 	/* Report BQL before sending the packet */
 	netif_txq = netdev_get_tx_queue(ndev, tx_chn->id);
+3 -1
drivers/net/wireless/ath/ath12k/core.c
···
 	INIT_LIST_HEAD(&ar->fw_stats.pdevs);
 	INIT_LIST_HEAD(&ar->fw_stats.bcn);
 	init_completion(&ar->fw_stats_complete);
+	init_completion(&ar->fw_stats_done);
 }

 void ath12k_fw_stats_free(struct ath12k_fw_stats *stats)
···
 void ath12k_fw_stats_reset(struct ath12k *ar)
 {
 	spin_lock_bh(&ar->data_lock);
-	ar->fw_stats.fw_stats_done = false;
 	ath12k_fw_stats_free(&ar->fw_stats);
+	ar->fw_stats.num_vdev_recvd = 0;
+	ar->fw_stats.num_bcn_recvd = 0;
 	spin_unlock_bh(&ar->data_lock);
 }
+9 -1
drivers/net/wireless/ath/ath12k/core.h
···
 #define ATH12K_NUM_CHANS 101
 #define ATH12K_MAX_5GHZ_CHAN 173

+static inline bool ath12k_is_2ghz_channel_freq(u32 freq)
+{
+	return freq >= ATH12K_MIN_2GHZ_FREQ &&
+	       freq <= ATH12K_MAX_2GHZ_FREQ;
+}
+
 enum ath12k_hw_state {
 	ATH12K_HW_STATE_OFF,
 	ATH12K_HW_STATE_ON,
···
 	struct list_head pdevs;
 	struct list_head vdevs;
 	struct list_head bcn;
-	bool fw_stats_done;
+	u32 num_vdev_recvd;
+	u32 num_bcn_recvd;
 };

 struct ath12k_dbg_htt_stats {
···
 	bool regdom_set_by_user;

 	struct completion fw_stats_complete;
+	struct completion fw_stats_done;

 	struct completion mlo_setup_done;
 	u32 mlo_setup_status;
-58
drivers/net/wireless/ath/ath12k/debugfs.c
···
 	 */
 }

-void
-ath12k_debugfs_fw_stats_process(struct ath12k *ar,
-				struct ath12k_fw_stats *stats)
-{
-	struct ath12k_base *ab = ar->ab;
-	struct ath12k_pdev *pdev;
-	bool is_end;
-	static unsigned int num_vdev, num_bcn;
-	size_t total_vdevs_started = 0;
-	int i;
-
-	if (stats->stats_id == WMI_REQUEST_VDEV_STAT) {
-		if (list_empty(&stats->vdevs)) {
-			ath12k_warn(ab, "empty vdev stats");
-			return;
-		}
-		/* FW sends all the active VDEV stats irrespective of PDEV,
-		 * hence limit until the count of all VDEVs started
-		 */
-		rcu_read_lock();
-		for (i = 0; i < ab->num_radios; i++) {
-			pdev = rcu_dereference(ab->pdevs_active[i]);
-			if (pdev && pdev->ar)
-				total_vdevs_started += pdev->ar->num_started_vdevs;
-		}
-		rcu_read_unlock();
-
-		is_end = ((++num_vdev) == total_vdevs_started);
-
-		list_splice_tail_init(&stats->vdevs,
-				      &ar->fw_stats.vdevs);
-
-		if (is_end) {
-			ar->fw_stats.fw_stats_done = true;
-			num_vdev = 0;
-		}
-		return;
-	}
-	if (stats->stats_id == WMI_REQUEST_BCN_STAT) {
-		if (list_empty(&stats->bcn)) {
-			ath12k_warn(ab, "empty beacon stats");
-			return;
-		}
-		/* Mark end until we reached the count of all started VDEVs
-		 * within the PDEV
-		 */
-		is_end = ((++num_bcn) == ar->num_started_vdevs);
-
-		list_splice_tail_init(&stats->bcn,
-				      &ar->fw_stats.bcn);
-
-		if (is_end) {
-			ar->fw_stats.fw_stats_done = true;
-			num_bcn = 0;
-		}
-	}
-}
-
 static int ath12k_open_vdev_stats(struct inode *inode, struct file *file)
 {
 	struct ath12k *ar = inode->i_private;
-7
drivers/net/wireless/ath/ath12k/debugfs.h
···
 void ath12k_debugfs_soc_destroy(struct ath12k_base *ab);
 void ath12k_debugfs_register(struct ath12k *ar);
 void ath12k_debugfs_unregister(struct ath12k *ar);
-void ath12k_debugfs_fw_stats_process(struct ath12k *ar,
-				     struct ath12k_fw_stats *stats);
 void ath12k_debugfs_op_vif_add(struct ieee80211_hw *hw,
 			       struct ieee80211_vif *vif);
 void ath12k_debugfs_pdev_create(struct ath12k_base *ab);
···
 }

 static inline void ath12k_debugfs_unregister(struct ath12k *ar)
-{
-}
-
-static inline void ath12k_debugfs_fw_stats_process(struct ath12k *ar,
-						   struct ath12k_fw_stats *stats)
 {
 }
+370 -24
drivers/net/wireless/ath/ath12k/mac.c
···
 {
 	struct ath12k_base *ab = ar->ab;
 	struct ath12k_hw *ah = ath12k_ar_to_ah(ar);
-	unsigned long timeout, time_left;
+	unsigned long time_left;
 	int ret;

 	guard(mutex)(&ah->hw_mutex);

 	if (ah->state != ATH12K_HW_STATE_ON)
 		return -ENETDOWN;

-	/* FW stats can get split when exceeding the stats data buffer limit.
-	 * In that case, since there is no end marking for the back-to-back
-	 * received 'update stats' event, we keep a 3 seconds timeout in case,
-	 * fw_stats_done is not marked yet
-	 */
-	timeout = jiffies + msecs_to_jiffies(3 * 1000);
 	ath12k_fw_stats_reset(ar);

 	reinit_completion(&ar->fw_stats_complete);
+	reinit_completion(&ar->fw_stats_done);

 	ret = ath12k_wmi_send_stats_request_cmd(ar, param->stats_id,
 						param->vdev_id, param->pdev_id);
-
 	if (ret) {
 		ath12k_warn(ab, "failed to request fw stats: %d\n", ret);
 		return ret;
···
 		   param->pdev_id, param->vdev_id, param->stats_id);

 	time_left = wait_for_completion_timeout(&ar->fw_stats_complete, 1 * HZ);
-
 	if (!time_left) {
 		ath12k_warn(ab, "time out while waiting for get fw stats\n");
 		return -ETIMEDOUT;
···
 	/* Firmware sends WMI_UPDATE_STATS_EVENTID back-to-back
 	 * when stats data buffer limit is reached. fw_stats_complete
 	 * is completed once host receives first event from firmware, but
-	 * still end might not be marked in the TLV.
-	 * Below loop is to confirm that firmware completed sending all the event
-	 * and fw_stats_done is marked true when end is marked in the TLV.
+	 * still there could be more events following. Below is to wait
+	 * until firmware completes sending all the events.
 	 */
-	for (;;) {
-		if (time_after(jiffies, timeout))
-			break;
-		spin_lock_bh(&ar->data_lock);
-		if (ar->fw_stats.fw_stats_done) {
-			spin_unlock_bh(&ar->data_lock);
-			break;
-		}
-		spin_unlock_bh(&ar->data_lock);
+	time_left = wait_for_completion_timeout(&ar->fw_stats_done, 3 * HZ);
+	if (!time_left) {
+		ath12k_warn(ab, "time out while waiting for fw stats done\n");
+		return -ETIMEDOUT;
 	}
+
 	return 0;
 }
···
 	return ret;
 }

+static bool ath12k_mac_is_freq_on_mac(struct ath12k_hw_mode_freq_range_arg *freq_range,
+				      u32 freq, u8 mac_id)
+{
+	return (freq >= freq_range[mac_id].low_2ghz_freq &&
+		freq <= freq_range[mac_id].high_2ghz_freq) ||
+	       (freq >= freq_range[mac_id].low_5ghz_freq &&
+		freq <= freq_range[mac_id].high_5ghz_freq);
+}
+
+static bool
+ath12k_mac_2_freq_same_mac_in_freq_range(struct ath12k_base *ab,
+					 struct ath12k_hw_mode_freq_range_arg *freq_range,
+					 u32 freq_link1, u32 freq_link2)
+{
+	u8 i;
+
+	for (i = 0; i < MAX_RADIOS; i++) {
+		if (ath12k_mac_is_freq_on_mac(freq_range, freq_link1, i) &&
+		    ath12k_mac_is_freq_on_mac(freq_range, freq_link2, i))
+			return true;
+	}
+
+	return false;
+}
+
+static bool ath12k_mac_is_hw_dbs_capable(struct ath12k_base *ab)
+{
+	return test_bit(WMI_TLV_SERVICE_DUAL_BAND_SIMULTANEOUS_SUPPORT,
+			ab->wmi_ab.svc_map) &&
+	       ab->wmi_ab.hw_mode_info.support_dbs;
+}
+
+static bool ath12k_mac_2_freq_same_mac_in_dbs(struct ath12k_base *ab,
+					      u32 freq_link1, u32 freq_link2)
+{
+	struct ath12k_hw_mode_freq_range_arg *freq_range;
+
+	if (!ath12k_mac_is_hw_dbs_capable(ab))
+		return true;
+
+	freq_range = ab->wmi_ab.hw_mode_info.freq_range_caps[ATH12K_HW_MODE_DBS];
+	return ath12k_mac_2_freq_same_mac_in_freq_range(ab, freq_range,
+							freq_link1, freq_link2);
+}
+
+static bool ath12k_mac_is_hw_sbs_capable(struct ath12k_base *ab)
+{
+	return test_bit(WMI_TLV_SERVICE_DUAL_BAND_SIMULTANEOUS_SUPPORT,
+			ab->wmi_ab.svc_map) &&
+	       ab->wmi_ab.hw_mode_info.support_sbs;
+}
+
+static bool ath12k_mac_2_freq_same_mac_in_sbs(struct ath12k_base *ab,
+					      u32 freq_link1, u32 freq_link2)
+{
+	struct ath12k_hw_mode_info *info = &ab->wmi_ab.hw_mode_info;
+	struct ath12k_hw_mode_freq_range_arg *sbs_uppr_share;
+	struct ath12k_hw_mode_freq_range_arg *sbs_low_share;
+	struct ath12k_hw_mode_freq_range_arg *sbs_range;
+
+	if (!ath12k_mac_is_hw_sbs_capable(ab))
+		return true;
+
+	if (ab->wmi_ab.sbs_lower_band_end_freq) {
+		sbs_uppr_share = info->freq_range_caps[ATH12K_HW_MODE_SBS_UPPER_SHARE];
+		sbs_low_share = info->freq_range_caps[ATH12K_HW_MODE_SBS_LOWER_SHARE];
+
+		return ath12k_mac_2_freq_same_mac_in_freq_range(ab, sbs_low_share,
+								freq_link1, freq_link2) ||
+		       ath12k_mac_2_freq_same_mac_in_freq_range(ab, sbs_uppr_share,
+								freq_link1, freq_link2);
+	}
+
+	sbs_range = info->freq_range_caps[ATH12K_HW_MODE_SBS];
+	return ath12k_mac_2_freq_same_mac_in_freq_range(ab, sbs_range,
+							freq_link1, freq_link2);
+}
+
+static bool ath12k_mac_freqs_on_same_mac(struct ath12k_base *ab,
+					 u32 freq_link1, u32 freq_link2)
+{
+	return ath12k_mac_2_freq_same_mac_in_dbs(ab, freq_link1, freq_link2) &&
+	       ath12k_mac_2_freq_same_mac_in_sbs(ab, freq_link1, freq_link2);
+}
+
+static int ath12k_mac_mlo_sta_set_link_active(struct ath12k_base *ab,
+					      enum wmi_mlo_link_force_reason reason,
+					      enum wmi_mlo_link_force_mode mode,
+					      u8 *mlo_vdev_id_lst,
+					      u8 num_mlo_vdev,
+					      u8 *mlo_inactive_vdev_lst,
+					      u8 num_mlo_inactive_vdev)
+{
+	struct wmi_mlo_link_set_active_arg param = {0};
+	u32 entry_idx, entry_offset, vdev_idx;
+	u8 vdev_id;
+
+	param.reason = reason;
+	param.force_mode = mode;
+
+	for (vdev_idx = 0; vdev_idx < num_mlo_vdev; vdev_idx++) {
+		vdev_id = mlo_vdev_id_lst[vdev_idx];
+		entry_idx = vdev_id / 32;
+		entry_offset = vdev_id % 32;
+		if (entry_idx >= WMI_MLO_LINK_NUM_SZ) {
+			ath12k_warn(ab, "Invalid entry_idx %d num_mlo_vdev %d vdev %d",
+				    entry_idx, num_mlo_vdev, vdev_id);
+			return -EINVAL;
+		}
+		param.vdev_bitmap[entry_idx] |= 1 << entry_offset;
+		/* update entry number if entry index changed */
+		if (param.num_vdev_bitmap < entry_idx + 1)
+			param.num_vdev_bitmap = entry_idx + 1;
+	}
+
+	ath12k_dbg(ab, ATH12K_DBG_MAC,
+		   "num_vdev_bitmap %d vdev_bitmap[0] = 0x%x, vdev_bitmap[1] = 0x%x",
+		   param.num_vdev_bitmap, param.vdev_bitmap[0], param.vdev_bitmap[1]);
+
+	if (mode == WMI_MLO_LINK_FORCE_MODE_ACTIVE_INACTIVE) {
+		for (vdev_idx = 0; vdev_idx < num_mlo_inactive_vdev; vdev_idx++) {
+			vdev_id = mlo_inactive_vdev_lst[vdev_idx];
+			entry_idx = vdev_id / 32;
+			entry_offset = vdev_id % 32;
+			if (entry_idx >= WMI_MLO_LINK_NUM_SZ) {
+				ath12k_warn(ab, "Invalid entry_idx %d num_mlo_vdev %d vdev %d",
+					    entry_idx, num_mlo_inactive_vdev, vdev_id);
+				return -EINVAL;
+			}
+			param.inactive_vdev_bitmap[entry_idx] |= 1 << entry_offset;
+			/* update entry number if entry index changed */
+			if (param.num_inactive_vdev_bitmap < entry_idx + 1)
+				param.num_inactive_vdev_bitmap = entry_idx + 1;
+		}
+
+		ath12k_dbg(ab, ATH12K_DBG_MAC,
+			   "num_vdev_bitmap %d inactive_vdev_bitmap[0] = 0x%x, inactive_vdev_bitmap[1] = 0x%x",
+			   param.num_inactive_vdev_bitmap,
+			   param.inactive_vdev_bitmap[0],
+			   param.inactive_vdev_bitmap[1]);
+	}
+
+	if (mode == WMI_MLO_LINK_FORCE_MODE_ACTIVE_LINK_NUM ||
+	    mode == WMI_MLO_LINK_FORCE_MODE_INACTIVE_LINK_NUM) {
+		param.num_link_entry = 1;
+		param.link_num[0].num_of_link = num_mlo_vdev - 1;
+	}
+
+	return ath12k_wmi_send_mlo_link_set_active_cmd(ab, &param);
+}
+
+static int ath12k_mac_mlo_sta_update_link_active(struct ath12k_base *ab,
+						 struct ieee80211_hw *hw,
+						 struct ath12k_vif *ahvif)
+{
+	u8 mlo_vdev_id_lst[IEEE80211_MLD_MAX_NUM_LINKS] = {0};
+	u32 mlo_freq_list[IEEE80211_MLD_MAX_NUM_LINKS] = {0};
+	unsigned long links = ahvif->links_map;
+	enum wmi_mlo_link_force_reason reason;
+	struct ieee80211_chanctx_conf *conf;
+	enum wmi_mlo_link_force_mode mode;
+	struct ieee80211_bss_conf *info;
+	struct ath12k_link_vif *arvif;
+	u8 num_mlo_vdev = 0;
+	u8 link_id;
+
+	for_each_set_bit(link_id, &links, IEEE80211_MLD_MAX_NUM_LINKS) {
+		arvif = wiphy_dereference(hw->wiphy, ahvif->link[link_id]);
+		/* make sure vdev is created on this device */
+		if (!arvif || !arvif->is_created || arvif->ar->ab != ab)
+			continue;
+
+		info = ath12k_mac_get_link_bss_conf(arvif);
+		conf = wiphy_dereference(hw->wiphy, info->chanctx_conf);
+		mlo_freq_list[num_mlo_vdev] = conf->def.chan->center_freq;
+
+		mlo_vdev_id_lst[num_mlo_vdev] = arvif->vdev_id;
+		num_mlo_vdev++;
+	}
+
+	/* It is not allowed to activate more links than a single device
+	 * supported. Something goes wrong if we reach here.
+	 */
+	if (num_mlo_vdev > ATH12K_NUM_MAX_ACTIVE_LINKS_PER_DEVICE) {
+		WARN_ON_ONCE(1);
+		return -EINVAL;
+	}
+
+	/* if 2 links are established and both link channels fall on the
+	 * same hardware MAC, send command to firmware to deactivate one
+	 * of them.
6084 + */ 6085 + if (num_mlo_vdev == 2 && 6086 + ath12k_mac_freqs_on_same_mac(ab, mlo_freq_list[0], 6087 + mlo_freq_list[1])) { 6088 + mode = WMI_MLO_LINK_FORCE_MODE_INACTIVE_LINK_NUM; 6089 + reason = WMI_MLO_LINK_FORCE_REASON_NEW_CONNECT; 6090 + return ath12k_mac_mlo_sta_set_link_active(ab, reason, mode, 6091 + mlo_vdev_id_lst, num_mlo_vdev, 6092 + NULL, 0); 6093 + } 6094 + 6095 + return 0; 6096 + } 6097 + 6098 + static bool ath12k_mac_are_sbs_chan(struct ath12k_base *ab, u32 freq_1, u32 freq_2) 6099 + { 6100 + if (!ath12k_mac_is_hw_sbs_capable(ab)) 6101 + return false; 6102 + 6103 + if (ath12k_is_2ghz_channel_freq(freq_1) || 6104 + ath12k_is_2ghz_channel_freq(freq_2)) 6105 + return false; 6106 + 6107 + return !ath12k_mac_2_freq_same_mac_in_sbs(ab, freq_1, freq_2); 6108 + } 6109 + 6110 + static bool ath12k_mac_are_dbs_chan(struct ath12k_base *ab, u32 freq_1, u32 freq_2) 6111 + { 6112 + if (!ath12k_mac_is_hw_dbs_capable(ab)) 6113 + return false; 6114 + 6115 + return !ath12k_mac_2_freq_same_mac_in_dbs(ab, freq_1, freq_2); 6116 + } 6117 + 6118 + static int ath12k_mac_select_links(struct ath12k_base *ab, 6119 + struct ieee80211_vif *vif, 6120 + struct ieee80211_hw *hw, 6121 + u16 *selected_links) 6122 + { 6123 + unsigned long useful_links = ieee80211_vif_usable_links(vif); 6124 + struct ath12k_vif *ahvif = ath12k_vif_to_ahvif(vif); 6125 + u8 num_useful_links = hweight_long(useful_links); 6126 + struct ieee80211_chanctx_conf *chanctx; 6127 + struct ath12k_link_vif *assoc_arvif; 6128 + u32 assoc_link_freq, partner_freq; 6129 + u16 sbs_links = 0, dbs_links = 0; 6130 + struct ieee80211_bss_conf *info; 6131 + struct ieee80211_channel *chan; 6132 + struct ieee80211_sta *sta; 6133 + struct ath12k_sta *ahsta; 6134 + u8 link_id; 6135 + 6136 + /* activate all useful links if less than max supported */ 6137 + if (num_useful_links <= ATH12K_NUM_MAX_ACTIVE_LINKS_PER_DEVICE) { 6138 + *selected_links = useful_links; 6139 + return 0; 6140 + } 6141 + 6142 + /* only in station mode we 
can get here, so it's safe 6143 + * to use ap_addr 6144 + */ 6145 + rcu_read_lock(); 6146 + sta = ieee80211_find_sta(vif, vif->cfg.ap_addr); 6147 + if (!sta) { 6148 + rcu_read_unlock(); 6149 + ath12k_warn(ab, "failed to find sta with addr %pM\n", vif->cfg.ap_addr); 6150 + return -EINVAL; 6151 + } 6152 + 6153 + ahsta = ath12k_sta_to_ahsta(sta); 6154 + assoc_arvif = wiphy_dereference(hw->wiphy, ahvif->link[ahsta->assoc_link_id]); 6155 + info = ath12k_mac_get_link_bss_conf(assoc_arvif); 6156 + chanctx = rcu_dereference(info->chanctx_conf); 6157 + assoc_link_freq = chanctx->def.chan->center_freq; 6158 + rcu_read_unlock(); 6159 + ath12k_dbg(ab, ATH12K_DBG_MAC, "assoc link %u freq %u\n", 6160 + assoc_arvif->link_id, assoc_link_freq); 6161 + 6162 + /* assoc link is already activated and has to be kept active, 6163 + * only need to select a partner link from others. 6164 + */ 6165 + useful_links &= ~BIT(assoc_arvif->link_id); 6166 + for_each_set_bit(link_id, &useful_links, IEEE80211_MLD_MAX_NUM_LINKS) { 6167 + info = wiphy_dereference(hw->wiphy, vif->link_conf[link_id]); 6168 + if (!info) { 6169 + ath12k_warn(ab, "failed to get link info for link: %u\n", 6170 + link_id); 6171 + return -ENOLINK; 6172 + } 6173 + 6174 + chan = info->chanreq.oper.chan; 6175 + if (!chan) { 6176 + ath12k_warn(ab, "failed to get chan for link: %u\n", link_id); 6177 + return -EINVAL; 6178 + } 6179 + 6180 + partner_freq = chan->center_freq; 6181 + if (ath12k_mac_are_sbs_chan(ab, assoc_link_freq, partner_freq)) { 6182 + sbs_links |= BIT(link_id); 6183 + ath12k_dbg(ab, ATH12K_DBG_MAC, "new SBS link %u freq %u\n", 6184 + link_id, partner_freq); 6185 + continue; 6186 + } 6187 + 6188 + if (ath12k_mac_are_dbs_chan(ab, assoc_link_freq, partner_freq)) { 6189 + dbs_links |= BIT(link_id); 6190 + ath12k_dbg(ab, ATH12K_DBG_MAC, "new DBS link %u freq %u\n", 6191 + link_id, partner_freq); 6192 + continue; 6193 + } 6194 + 6195 + ath12k_dbg(ab, ATH12K_DBG_MAC, "non DBS/SBS link %u freq %u\n", 6196 + link_id, 
partner_freq); 6197 + } 6198 + 6199 + /* choose the first candidate no matter how many is in the list */ 6200 + if (sbs_links) 6201 + link_id = __ffs(sbs_links); 6202 + else if (dbs_links) 6203 + link_id = __ffs(dbs_links); 6204 + else 6205 + link_id = ffs(useful_links) - 1; 6206 + 6207 + ath12k_dbg(ab, ATH12K_DBG_MAC, "select partner link %u\n", link_id); 6208 + 6209 + *selected_links = BIT(assoc_arvif->link_id) | BIT(link_id); 6210 + 6211 + return 0; 6212 + } 6213 + 5881 6214 static int ath12k_mac_op_sta_state(struct ieee80211_hw *hw, 5882 6215 struct ieee80211_vif *vif, 5883 6216 struct ieee80211_sta *sta, ··· 6208 5899 struct ath12k_vif *ahvif = ath12k_vif_to_ahvif(vif); 6209 5900 struct ath12k_sta *ahsta = ath12k_sta_to_ahsta(sta); 6210 5901 struct ath12k_hw *ah = ath12k_hw_to_ah(hw); 5902 + struct ath12k_base *prev_ab = NULL, *ab; 6211 5903 struct ath12k_link_vif *arvif; 6212 5904 struct ath12k_link_sta *arsta; 6213 5905 unsigned long valid_links; 6214 - u8 link_id = 0; 5906 + u16 selected_links = 0; 5907 + u8 link_id = 0, i; 5908 + struct ath12k *ar; 6215 5909 int ret; 6216 5910 6217 5911 lockdep_assert_wiphy(hw->wiphy); ··· 6284 5972 * about to move to the associated state. 6285 5973 */ 6286 5974 if (ieee80211_vif_is_mld(vif) && vif->type == NL80211_IFTYPE_STATION && 6287 - old_state == IEEE80211_STA_AUTH && new_state == IEEE80211_STA_ASSOC) 6288 - ieee80211_set_active_links(vif, ieee80211_vif_usable_links(vif)); 5975 + old_state == IEEE80211_STA_AUTH && new_state == IEEE80211_STA_ASSOC) { 5976 + /* TODO: for now only do link selection for single device 5977 + * MLO case. Other cases would be handled in the future. 
5978 + */ 5979 + ab = ah->radio[0].ab; 5980 + if (ab->ag->num_devices == 1) { 5981 + ret = ath12k_mac_select_links(ab, vif, hw, &selected_links); 5982 + if (ret) { 5983 + ath12k_warn(ab, 5984 + "failed to get selected links: %d\n", ret); 5985 + goto exit; 5986 + } 5987 + } else { 5988 + selected_links = ieee80211_vif_usable_links(vif); 5989 + } 5990 + 5991 + ieee80211_set_active_links(vif, selected_links); 5992 + } 6289 5993 6290 5994 /* Handle all the other state transitions in generic way */ 6291 5995 valid_links = ahsta->links_map; ··· 6325 5997 } 6326 5998 } 6327 5999 6000 + if (ieee80211_vif_is_mld(vif) && vif->type == NL80211_IFTYPE_STATION && 6001 + old_state == IEEE80211_STA_ASSOC && new_state == IEEE80211_STA_AUTHORIZED) { 6002 + for_each_ar(ah, ar, i) { 6003 + ab = ar->ab; 6004 + if (prev_ab == ab) 6005 + continue; 6006 + 6007 + ret = ath12k_mac_mlo_sta_update_link_active(ab, hw, ahvif); 6008 + if (ret) { 6009 + ath12k_warn(ab, 6010 + "failed to update link active state on connect %d\n", 6011 + ret); 6012 + goto exit; 6013 + } 6014 + 6015 + prev_ab = ab; 6016 + } 6017 + } 6328 6018 /* IEEE80211_STA_NONE -> IEEE80211_STA_NOTEXIST: 6329 6019 * Remove the station from driver (handle ML sta here since that 6330 6020 * needs special handling. Normal sta will be handled in generic
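The vdev-bitmap packing used by ath12k_mac_mlo_sta_set_link_active() in the hunk above (word index = vdev_id / 32, bit offset = vdev_id % 32, with num_vdev_bitmap tracking the highest word touched) can be sketched in isolation. Only WMI_MLO_LINK_NUM_SZ and the arithmetic come from the diff; the struct and helper names below are ours:

```c
#include <stdint.h>

#define WMI_MLO_LINK_NUM_SZ 2	/* two u32 bitmap words, as in the diff */

struct vdev_bitmap {
	uint32_t words[WMI_MLO_LINK_NUM_SZ];
	uint32_t num_words;	/* highest word index in use + 1 */
};

/* Set the bit for one vdev id; returns 0 on success, -1 when the id
 * would land beyond the fixed number of words (this mirrors the
 * -EINVAL path in the driver code above). */
static int vdev_bitmap_set(struct vdev_bitmap *bm, uint8_t vdev_id)
{
	uint32_t idx = vdev_id / 32;
	uint32_t off = vdev_id % 32;

	if (idx >= WMI_MLO_LINK_NUM_SZ)
		return -1;

	bm->words[idx] |= 1u << off;
	/* update entry count if a new, higher word was touched */
	if (bm->num_words < idx + 1)
		bm->num_words = idx + 1;
	return 0;
}
```

The same packing is applied twice in the driver, once for the active-vdev list and once for the inactive one.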
drivers/net/wireless/ath/ath12k/mac.h (+2)
···
 #define ATH12K_DEFAULT_SCAN_LINK	IEEE80211_MLD_MAX_NUM_LINKS
 #define ATH12K_NUM_MAX_LINKS		(IEEE80211_MLD_MAX_NUM_LINKS + 1)
 
+#define ATH12K_NUM_MAX_ACTIVE_LINKS_PER_DEVICE	2
+
 enum ath12k_supported_bw {
 	ATH12K_BW_20 = 0,
 	ATH12K_BW_40 = 1,
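The partner-link choice in ath12k_mac_select_links() from the mac.c hunk above prefers the first SBS-capable candidate, then the first DBS-capable one, and finally just the first remaining usable link. That choose-first-candidate step reduces to the sketch below; the masks as plain integers and the helper name are ours, with ffs() standing in for the kernel's __ffs()/ffs() pair (bit i of each mask represents link id i):

```c
#include <strings.h>	/* ffs() */
#include <stdint.h>

/* Return the selected partner link id, or -1 if no link is usable. */
static int pick_partner_link(uint32_t sbs_links, uint32_t dbs_links,
			     uint32_t useful_links)
{
	if (sbs_links)
		return ffs((int)sbs_links) - 1;	/* first SBS candidate */
	if (dbs_links)
		return ffs((int)dbs_links) - 1;	/* first DBS candidate */
	return ffs((int)useful_links) - 1;	/* any remaining link, or -1 */
}
```

As the driver comment notes, the first candidate is taken no matter how many are in each list.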
drivers/net/wireless/ath/ath12k/wmi.c (+819, -10)
··· 91 91 bool dma_ring_cap_done; 92 92 bool spectral_bin_scaling_done; 93 93 bool mac_phy_caps_ext_done; 94 + bool hal_reg_caps_ext2_done; 95 + bool scan_radio_caps_ext2_done; 96 + bool twt_caps_done; 97 + bool htt_msdu_idx_to_qtype_map_done; 98 + bool dbs_or_sbs_cap_ext_done; 94 99 }; 95 100 96 101 struct ath12k_wmi_rdy_parse { ··· 4400 4395 static int ath12k_wmi_hw_mode_caps(struct ath12k_base *soc, 4401 4396 u16 len, const void *ptr, void *data) 4402 4397 { 4398 + struct ath12k_svc_ext_info *svc_ext_info = &soc->wmi_ab.svc_ext_info; 4403 4399 struct ath12k_wmi_svc_rdy_ext_parse *svc_rdy_ext = data; 4404 4400 const struct ath12k_wmi_hw_mode_cap_params *hw_mode_caps; 4405 4401 enum wmi_host_hw_mode_config_type mode, pref; ··· 4433 4427 } 4434 4428 } 4435 4429 4436 - ath12k_dbg(soc, ATH12K_DBG_WMI, "preferred_hw_mode:%d\n", 4437 - soc->wmi_ab.preferred_hw_mode); 4430 + svc_ext_info->num_hw_modes = svc_rdy_ext->n_hw_mode_caps; 4431 + 4432 + ath12k_dbg(soc, ATH12K_DBG_WMI, "num hw modes %u preferred_hw_mode %d\n", 4433 + svc_ext_info->num_hw_modes, soc->wmi_ab.preferred_hw_mode); 4434 + 4438 4435 if (soc->wmi_ab.preferred_hw_mode == WMI_HOST_HW_MODE_MAX) 4439 4436 return -EINVAL; 4440 4437 ··· 4667 4658 return ret; 4668 4659 } 4669 4660 4661 + static void 4662 + ath12k_wmi_save_mac_phy_info(struct ath12k_base *ab, 4663 + const struct ath12k_wmi_mac_phy_caps_params *mac_phy_cap, 4664 + struct ath12k_svc_ext_mac_phy_info *mac_phy_info) 4665 + { 4666 + mac_phy_info->phy_id = __le32_to_cpu(mac_phy_cap->phy_id); 4667 + mac_phy_info->supported_bands = __le32_to_cpu(mac_phy_cap->supported_bands); 4668 + mac_phy_info->hw_freq_range.low_2ghz_freq = 4669 + __le32_to_cpu(mac_phy_cap->low_2ghz_chan_freq); 4670 + mac_phy_info->hw_freq_range.high_2ghz_freq = 4671 + __le32_to_cpu(mac_phy_cap->high_2ghz_chan_freq); 4672 + mac_phy_info->hw_freq_range.low_5ghz_freq = 4673 + __le32_to_cpu(mac_phy_cap->low_5ghz_chan_freq); 4674 + mac_phy_info->hw_freq_range.high_5ghz_freq = 4675 + 
__le32_to_cpu(mac_phy_cap->high_5ghz_chan_freq); 4676 + } 4677 + 4678 + static void 4679 + ath12k_wmi_save_all_mac_phy_info(struct ath12k_base *ab, 4680 + struct ath12k_wmi_svc_rdy_ext_parse *svc_rdy_ext) 4681 + { 4682 + struct ath12k_svc_ext_info *svc_ext_info = &ab->wmi_ab.svc_ext_info; 4683 + const struct ath12k_wmi_mac_phy_caps_params *mac_phy_cap; 4684 + const struct ath12k_wmi_hw_mode_cap_params *hw_mode_cap; 4685 + struct ath12k_svc_ext_mac_phy_info *mac_phy_info; 4686 + u32 hw_mode_id, phy_bit_map; 4687 + u8 hw_idx; 4688 + 4689 + mac_phy_info = &svc_ext_info->mac_phy_info[0]; 4690 + mac_phy_cap = svc_rdy_ext->mac_phy_caps; 4691 + 4692 + for (hw_idx = 0; hw_idx < svc_ext_info->num_hw_modes; hw_idx++) { 4693 + hw_mode_cap = &svc_rdy_ext->hw_mode_caps[hw_idx]; 4694 + hw_mode_id = __le32_to_cpu(hw_mode_cap->hw_mode_id); 4695 + phy_bit_map = __le32_to_cpu(hw_mode_cap->phy_id_map); 4696 + 4697 + while (phy_bit_map) { 4698 + ath12k_wmi_save_mac_phy_info(ab, mac_phy_cap, mac_phy_info); 4699 + mac_phy_info->hw_mode_config_type = 4700 + le32_get_bits(hw_mode_cap->hw_mode_config_type, 4701 + WMI_HW_MODE_CAP_CFG_TYPE); 4702 + ath12k_dbg(ab, ATH12K_DBG_WMI, 4703 + "hw_idx %u hw_mode_id %u hw_mode_config_type %u supported_bands %u phy_id %u 2 GHz [%u - %u] 5 GHz [%u - %u]\n", 4704 + hw_idx, hw_mode_id, 4705 + mac_phy_info->hw_mode_config_type, 4706 + mac_phy_info->supported_bands, mac_phy_info->phy_id, 4707 + mac_phy_info->hw_freq_range.low_2ghz_freq, 4708 + mac_phy_info->hw_freq_range.high_2ghz_freq, 4709 + mac_phy_info->hw_freq_range.low_5ghz_freq, 4710 + mac_phy_info->hw_freq_range.high_5ghz_freq); 4711 + 4712 + mac_phy_cap++; 4713 + mac_phy_info++; 4714 + 4715 + phy_bit_map >>= 1; 4716 + } 4717 + } 4718 + } 4719 + 4670 4720 static int ath12k_wmi_svc_rdy_ext_parse(struct ath12k_base *ab, 4671 4721 u16 tag, u16 len, 4672 4722 const void *ptr, void *data) ··· 4773 4705 ath12k_warn(ab, "failed to parse tlv %d\n", ret); 4774 4706 return ret; 4775 4707 } 4708 + 4709 + 
ath12k_wmi_save_all_mac_phy_info(ab, svc_rdy_ext); 4776 4710 4777 4711 svc_rdy_ext->mac_phy_done = true; 4778 4712 } else if (!svc_rdy_ext->ext_hal_reg_done) { ··· 4992 4922 return 0; 4993 4923 } 4994 4924 4925 + static void 4926 + ath12k_wmi_update_freq_info(struct ath12k_base *ab, 4927 + struct ath12k_svc_ext_mac_phy_info *mac_cap, 4928 + enum ath12k_hw_mode mode, 4929 + u32 phy_id) 4930 + { 4931 + struct ath12k_hw_mode_info *hw_mode_info = &ab->wmi_ab.hw_mode_info; 4932 + struct ath12k_hw_mode_freq_range_arg *mac_range; 4933 + 4934 + mac_range = &hw_mode_info->freq_range_caps[mode][phy_id]; 4935 + 4936 + if (mac_cap->supported_bands & WMI_HOST_WLAN_2GHZ_CAP) { 4937 + mac_range->low_2ghz_freq = max_t(u32, 4938 + mac_cap->hw_freq_range.low_2ghz_freq, 4939 + ATH12K_MIN_2GHZ_FREQ); 4940 + mac_range->high_2ghz_freq = mac_cap->hw_freq_range.high_2ghz_freq ? 4941 + min_t(u32, 4942 + mac_cap->hw_freq_range.high_2ghz_freq, 4943 + ATH12K_MAX_2GHZ_FREQ) : 4944 + ATH12K_MAX_2GHZ_FREQ; 4945 + } 4946 + 4947 + if (mac_cap->supported_bands & WMI_HOST_WLAN_5GHZ_CAP) { 4948 + mac_range->low_5ghz_freq = max_t(u32, 4949 + mac_cap->hw_freq_range.low_5ghz_freq, 4950 + ATH12K_MIN_5GHZ_FREQ); 4951 + mac_range->high_5ghz_freq = mac_cap->hw_freq_range.high_5ghz_freq ? 
4952 + min_t(u32, 4953 + mac_cap->hw_freq_range.high_5ghz_freq, 4954 + ATH12K_MAX_6GHZ_FREQ) : 4955 + ATH12K_MAX_6GHZ_FREQ; 4956 + } 4957 + } 4958 + 4959 + static bool 4960 + ath12k_wmi_all_phy_range_updated(struct ath12k_base *ab, 4961 + enum ath12k_hw_mode hwmode) 4962 + { 4963 + struct ath12k_hw_mode_info *hw_mode_info = &ab->wmi_ab.hw_mode_info; 4964 + struct ath12k_hw_mode_freq_range_arg *mac_range; 4965 + u8 phy_id; 4966 + 4967 + for (phy_id = 0; phy_id < MAX_RADIOS; phy_id++) { 4968 + mac_range = &hw_mode_info->freq_range_caps[hwmode][phy_id]; 4969 + /* modify SBS/DBS range only when both phy for DBS are filled */ 4970 + if (!mac_range->low_2ghz_freq && !mac_range->low_5ghz_freq) 4971 + return false; 4972 + } 4973 + 4974 + return true; 4975 + } 4976 + 4977 + static void ath12k_wmi_update_dbs_freq_info(struct ath12k_base *ab) 4978 + { 4979 + struct ath12k_hw_mode_info *hw_mode_info = &ab->wmi_ab.hw_mode_info; 4980 + struct ath12k_hw_mode_freq_range_arg *mac_range; 4981 + u8 phy_id; 4982 + 4983 + mac_range = hw_mode_info->freq_range_caps[ATH12K_HW_MODE_DBS]; 4984 + /* Reset 5 GHz range for shared mac for DBS */ 4985 + for (phy_id = 0; phy_id < MAX_RADIOS; phy_id++) { 4986 + if (mac_range[phy_id].low_2ghz_freq && 4987 + mac_range[phy_id].low_5ghz_freq) { 4988 + mac_range[phy_id].low_5ghz_freq = 0; 4989 + mac_range[phy_id].high_5ghz_freq = 0; 4990 + } 4991 + } 4992 + } 4993 + 4994 + static u32 4995 + ath12k_wmi_get_highest_5ghz_freq_from_range(struct ath12k_hw_mode_freq_range_arg *range) 4996 + { 4997 + u32 highest_freq = 0; 4998 + u8 phy_id; 4999 + 5000 + for (phy_id = 0; phy_id < MAX_RADIOS; phy_id++) { 5001 + if (range[phy_id].high_5ghz_freq > highest_freq) 5002 + highest_freq = range[phy_id].high_5ghz_freq; 5003 + } 5004 + 5005 + return highest_freq ? 
highest_freq : ATH12K_MAX_6GHZ_FREQ; 5006 + } 5007 + 5008 + static u32 5009 + ath12k_wmi_get_lowest_5ghz_freq_from_range(struct ath12k_hw_mode_freq_range_arg *range) 5010 + { 5011 + u32 lowest_freq = 0; 5012 + u8 phy_id; 5013 + 5014 + for (phy_id = 0; phy_id < MAX_RADIOS; phy_id++) { 5015 + if ((!lowest_freq && range[phy_id].low_5ghz_freq) || 5016 + range[phy_id].low_5ghz_freq < lowest_freq) 5017 + lowest_freq = range[phy_id].low_5ghz_freq; 5018 + } 5019 + 5020 + return lowest_freq ? lowest_freq : ATH12K_MIN_5GHZ_FREQ; 5021 + } 5022 + 5023 + static void 5024 + ath12k_wmi_fill_upper_share_sbs_freq(struct ath12k_base *ab, 5025 + u16 sbs_range_sep, 5026 + struct ath12k_hw_mode_freq_range_arg *ref_freq) 5027 + { 5028 + struct ath12k_hw_mode_info *hw_mode_info = &ab->wmi_ab.hw_mode_info; 5029 + struct ath12k_hw_mode_freq_range_arg *upper_sbs_freq_range; 5030 + u8 phy_id; 5031 + 5032 + upper_sbs_freq_range = 5033 + hw_mode_info->freq_range_caps[ATH12K_HW_MODE_SBS_UPPER_SHARE]; 5034 + 5035 + for (phy_id = 0; phy_id < MAX_RADIOS; phy_id++) { 5036 + upper_sbs_freq_range[phy_id].low_2ghz_freq = 5037 + ref_freq[phy_id].low_2ghz_freq; 5038 + upper_sbs_freq_range[phy_id].high_2ghz_freq = 5039 + ref_freq[phy_id].high_2ghz_freq; 5040 + 5041 + /* update for shared mac */ 5042 + if (upper_sbs_freq_range[phy_id].low_2ghz_freq) { 5043 + upper_sbs_freq_range[phy_id].low_5ghz_freq = sbs_range_sep + 10; 5044 + upper_sbs_freq_range[phy_id].high_5ghz_freq = 5045 + ath12k_wmi_get_highest_5ghz_freq_from_range(ref_freq); 5046 + } else { 5047 + upper_sbs_freq_range[phy_id].low_5ghz_freq = 5048 + ath12k_wmi_get_lowest_5ghz_freq_from_range(ref_freq); 5049 + upper_sbs_freq_range[phy_id].high_5ghz_freq = sbs_range_sep; 5050 + } 5051 + } 5052 + } 5053 + 5054 + static void 5055 + ath12k_wmi_fill_lower_share_sbs_freq(struct ath12k_base *ab, 5056 + u16 sbs_range_sep, 5057 + struct ath12k_hw_mode_freq_range_arg *ref_freq) 5058 + { 5059 + struct ath12k_hw_mode_info *hw_mode_info = 
&ab->wmi_ab.hw_mode_info; 5060 + struct ath12k_hw_mode_freq_range_arg *lower_sbs_freq_range; 5061 + u8 phy_id; 5062 + 5063 + lower_sbs_freq_range = 5064 + hw_mode_info->freq_range_caps[ATH12K_HW_MODE_SBS_LOWER_SHARE]; 5065 + 5066 + for (phy_id = 0; phy_id < MAX_RADIOS; phy_id++) { 5067 + lower_sbs_freq_range[phy_id].low_2ghz_freq = 5068 + ref_freq[phy_id].low_2ghz_freq; 5069 + lower_sbs_freq_range[phy_id].high_2ghz_freq = 5070 + ref_freq[phy_id].high_2ghz_freq; 5071 + 5072 + /* update for shared mac */ 5073 + if (lower_sbs_freq_range[phy_id].low_2ghz_freq) { 5074 + lower_sbs_freq_range[phy_id].low_5ghz_freq = 5075 + ath12k_wmi_get_lowest_5ghz_freq_from_range(ref_freq); 5076 + lower_sbs_freq_range[phy_id].high_5ghz_freq = sbs_range_sep; 5077 + } else { 5078 + lower_sbs_freq_range[phy_id].low_5ghz_freq = sbs_range_sep + 10; 5079 + lower_sbs_freq_range[phy_id].high_5ghz_freq = 5080 + ath12k_wmi_get_highest_5ghz_freq_from_range(ref_freq); 5081 + } 5082 + } 5083 + } 5084 + 5085 + static const char *ath12k_wmi_hw_mode_to_str(enum ath12k_hw_mode hw_mode) 5086 + { 5087 + static const char * const mode_str[] = { 5088 + [ATH12K_HW_MODE_SMM] = "SMM", 5089 + [ATH12K_HW_MODE_DBS] = "DBS", 5090 + [ATH12K_HW_MODE_SBS] = "SBS", 5091 + [ATH12K_HW_MODE_SBS_UPPER_SHARE] = "SBS_UPPER_SHARE", 5092 + [ATH12K_HW_MODE_SBS_LOWER_SHARE] = "SBS_LOWER_SHARE", 5093 + }; 5094 + 5095 + if (hw_mode >= ARRAY_SIZE(mode_str)) 5096 + return "Unknown"; 5097 + 5098 + return mode_str[hw_mode]; 5099 + } 5100 + 5101 + static void 5102 + ath12k_wmi_dump_freq_range_per_mac(struct ath12k_base *ab, 5103 + struct ath12k_hw_mode_freq_range_arg *freq_range, 5104 + enum ath12k_hw_mode hw_mode) 5105 + { 5106 + u8 i; 5107 + 5108 + for (i = 0; i < MAX_RADIOS; i++) 5109 + if (freq_range[i].low_2ghz_freq || freq_range[i].low_5ghz_freq) 5110 + ath12k_dbg(ab, ATH12K_DBG_WMI, 5111 + "frequency range: %s(%d) mac %d 2 GHz [%d - %d] 5 GHz [%d - %d]", 5112 + ath12k_wmi_hw_mode_to_str(hw_mode), 5113 + hw_mode, i, 5114 + 
freq_range[i].low_2ghz_freq, 5115 + freq_range[i].high_2ghz_freq, 5116 + freq_range[i].low_5ghz_freq, 5117 + freq_range[i].high_5ghz_freq); 5118 + } 5119 + 5120 + static void ath12k_wmi_dump_freq_range(struct ath12k_base *ab) 5121 + { 5122 + struct ath12k_hw_mode_freq_range_arg *freq_range; 5123 + u8 i; 5124 + 5125 + for (i = ATH12K_HW_MODE_SMM; i < ATH12K_HW_MODE_MAX; i++) { 5126 + freq_range = ab->wmi_ab.hw_mode_info.freq_range_caps[i]; 5127 + ath12k_wmi_dump_freq_range_per_mac(ab, freq_range, i); 5128 + } 5129 + } 5130 + 5131 + static int ath12k_wmi_modify_sbs_freq(struct ath12k_base *ab, u8 phy_id) 5132 + { 5133 + struct ath12k_hw_mode_info *hw_mode_info = &ab->wmi_ab.hw_mode_info; 5134 + struct ath12k_hw_mode_freq_range_arg *sbs_mac_range, *shared_mac_range; 5135 + struct ath12k_hw_mode_freq_range_arg *non_shared_range; 5136 + u8 shared_phy_id; 5137 + 5138 + sbs_mac_range = &hw_mode_info->freq_range_caps[ATH12K_HW_MODE_SBS][phy_id]; 5139 + 5140 + /* if SBS mac range has both 2.4 and 5 GHz ranges, i.e. shared phy_id 5141 + * keep the range as it is in SBS 5142 + */ 5143 + if (sbs_mac_range->low_2ghz_freq && sbs_mac_range->low_5ghz_freq) 5144 + return 0; 5145 + 5146 + if (sbs_mac_range->low_2ghz_freq && !sbs_mac_range->low_5ghz_freq) { 5147 + ath12k_err(ab, "Invalid DBS/SBS mode with only 2.4Ghz"); 5148 + ath12k_wmi_dump_freq_range_per_mac(ab, sbs_mac_range, ATH12K_HW_MODE_SBS); 5149 + return -EINVAL; 5150 + } 5151 + 5152 + non_shared_range = sbs_mac_range; 5153 + /* if SBS mac range has only 5 GHz then it's the non-shared phy, so 5154 + * modify the range as per the shared mac. 5155 + */ 5156 + shared_phy_id = phy_id ? 
0 : 1; 5157 + shared_mac_range = 5158 + &hw_mode_info->freq_range_caps[ATH12K_HW_MODE_SBS][shared_phy_id]; 5159 + 5160 + if (shared_mac_range->low_5ghz_freq > non_shared_range->low_5ghz_freq) { 5161 + ath12k_dbg(ab, ATH12K_DBG_WMI, "high 5 GHz shared"); 5162 + /* If the shared mac lower 5 GHz frequency is greater than 5163 + * non-shared mac lower 5 GHz frequency then the shared mac has 5164 + * high 5 GHz shared with 2.4 GHz. So non-shared mac's 5 GHz high 5165 + * freq should be less than the shared mac's low 5 GHz freq. 5166 + */ 5167 + if (non_shared_range->high_5ghz_freq >= 5168 + shared_mac_range->low_5ghz_freq) 5169 + non_shared_range->high_5ghz_freq = 5170 + max_t(u32, shared_mac_range->low_5ghz_freq - 10, 5171 + non_shared_range->low_5ghz_freq); 5172 + } else if (shared_mac_range->high_5ghz_freq < 5173 + non_shared_range->high_5ghz_freq) { 5174 + ath12k_dbg(ab, ATH12K_DBG_WMI, "low 5 GHz shared"); 5175 + /* If the shared mac high 5 GHz frequency is less than 5176 + * non-shared mac high 5 GHz frequency then the shared mac has 5177 + * low 5 GHz shared with 2.4 GHz. So non-shared mac's 5 GHz low 5178 + * freq should be greater than the shared mac's high 5 GHz freq. 
5179 + */ 5180 + if (shared_mac_range->high_5ghz_freq >= 5181 + non_shared_range->low_5ghz_freq) 5182 + non_shared_range->low_5ghz_freq = 5183 + min_t(u32, shared_mac_range->high_5ghz_freq + 10, 5184 + non_shared_range->high_5ghz_freq); 5185 + } else { 5186 + ath12k_warn(ab, "invalid SBS range with all 5 GHz shared"); 5187 + return -EINVAL; 5188 + } 5189 + 5190 + return 0; 5191 + } 5192 + 5193 + static void ath12k_wmi_update_sbs_freq_info(struct ath12k_base *ab) 5194 + { 5195 + struct ath12k_hw_mode_info *hw_mode_info = &ab->wmi_ab.hw_mode_info; 5196 + struct ath12k_hw_mode_freq_range_arg *mac_range; 5197 + u16 sbs_range_sep; 5198 + u8 phy_id; 5199 + int ret; 5200 + 5201 + mac_range = hw_mode_info->freq_range_caps[ATH12K_HW_MODE_SBS]; 5202 + 5203 + /* If sbs_lower_band_end_freq has a value, then the frequency range 5204 + * will be split using that value. 5205 + */ 5206 + sbs_range_sep = ab->wmi_ab.sbs_lower_band_end_freq; 5207 + if (sbs_range_sep) { 5208 + ath12k_wmi_fill_upper_share_sbs_freq(ab, sbs_range_sep, 5209 + mac_range); 5210 + ath12k_wmi_fill_lower_share_sbs_freq(ab, sbs_range_sep, 5211 + mac_range); 5212 + /* Hardware specifies the range boundary with sbs_range_sep, 5213 + * (i.e. the boundary between 5 GHz high and 5 GHz low), 5214 + * reset the original one to make sure it will not get used. 5215 + */ 5216 + memset(mac_range, 0, sizeof(*mac_range) * MAX_RADIOS); 5217 + return; 5218 + } 5219 + 5220 + /* If sbs_lower_band_end_freq is not set that means firmware will send one 5221 + * shared mac range and one non-shared mac range. so update that freq. 
5222 + */ 5223 + for (phy_id = 0; phy_id < MAX_RADIOS; phy_id++) { 5224 + ret = ath12k_wmi_modify_sbs_freq(ab, phy_id); 5225 + if (ret) { 5226 + memset(mac_range, 0, sizeof(*mac_range) * MAX_RADIOS); 5227 + break; 5228 + } 5229 + } 5230 + } 5231 + 5232 + static void 5233 + ath12k_wmi_update_mac_freq_info(struct ath12k_base *ab, 5234 + enum wmi_host_hw_mode_config_type hw_config_type, 5235 + u32 phy_id, 5236 + struct ath12k_svc_ext_mac_phy_info *mac_cap) 5237 + { 5238 + if (phy_id >= MAX_RADIOS) { 5239 + ath12k_err(ab, "mac more than two not supported: %d", phy_id); 5240 + return; 5241 + } 5242 + 5243 + ath12k_dbg(ab, ATH12K_DBG_WMI, 5244 + "hw_mode_cfg %d mac %d band 0x%x SBS cutoff freq %d 2 GHz [%d - %d] 5 GHz [%d - %d]", 5245 + hw_config_type, phy_id, mac_cap->supported_bands, 5246 + ab->wmi_ab.sbs_lower_band_end_freq, 5247 + mac_cap->hw_freq_range.low_2ghz_freq, 5248 + mac_cap->hw_freq_range.high_2ghz_freq, 5249 + mac_cap->hw_freq_range.low_5ghz_freq, 5250 + mac_cap->hw_freq_range.high_5ghz_freq); 5251 + 5252 + switch (hw_config_type) { 5253 + case WMI_HOST_HW_MODE_SINGLE: 5254 + if (phy_id) { 5255 + ath12k_dbg(ab, ATH12K_DBG_WMI, "mac phy 1 is not supported"); 5256 + break; 5257 + } 5258 + ath12k_wmi_update_freq_info(ab, mac_cap, ATH12K_HW_MODE_SMM, phy_id); 5259 + break; 5260 + 5261 + case WMI_HOST_HW_MODE_DBS: 5262 + if (!ath12k_wmi_all_phy_range_updated(ab, ATH12K_HW_MODE_DBS)) 5263 + ath12k_wmi_update_freq_info(ab, mac_cap, 5264 + ATH12K_HW_MODE_DBS, phy_id); 5265 + break; 5266 + case WMI_HOST_HW_MODE_DBS_SBS: 5267 + case WMI_HOST_HW_MODE_DBS_OR_SBS: 5268 + ath12k_wmi_update_freq_info(ab, mac_cap, ATH12K_HW_MODE_DBS, phy_id); 5269 + if (ab->wmi_ab.sbs_lower_band_end_freq || 5270 + mac_cap->hw_freq_range.low_5ghz_freq || 5271 + mac_cap->hw_freq_range.low_2ghz_freq) 5272 + ath12k_wmi_update_freq_info(ab, mac_cap, ATH12K_HW_MODE_SBS, 5273 + phy_id); 5274 + 5275 + if (ath12k_wmi_all_phy_range_updated(ab, ATH12K_HW_MODE_DBS)) 5276 + 
ath12k_wmi_update_dbs_freq_info(ab); 5277 + if (ath12k_wmi_all_phy_range_updated(ab, ATH12K_HW_MODE_SBS)) 5278 + ath12k_wmi_update_sbs_freq_info(ab); 5279 + break; 5280 + case WMI_HOST_HW_MODE_SBS: 5281 + case WMI_HOST_HW_MODE_SBS_PASSIVE: 5282 + ath12k_wmi_update_freq_info(ab, mac_cap, ATH12K_HW_MODE_SBS, phy_id); 5283 + if (ath12k_wmi_all_phy_range_updated(ab, ATH12K_HW_MODE_SBS)) 5284 + ath12k_wmi_update_sbs_freq_info(ab); 5285 + 5286 + break; 5287 + default: 5288 + break; 5289 + } 5290 + } 5291 + 5292 + static bool ath12k_wmi_sbs_range_present(struct ath12k_base *ab) 5293 + { 5294 + if (ath12k_wmi_all_phy_range_updated(ab, ATH12K_HW_MODE_SBS) || 5295 + (ab->wmi_ab.sbs_lower_band_end_freq && 5296 + ath12k_wmi_all_phy_range_updated(ab, ATH12K_HW_MODE_SBS_LOWER_SHARE) && 5297 + ath12k_wmi_all_phy_range_updated(ab, ATH12K_HW_MODE_SBS_UPPER_SHARE))) 5298 + return true; 5299 + 5300 + return false; 5301 + } 5302 + 5303 + static int ath12k_wmi_update_hw_mode_list(struct ath12k_base *ab) 5304 + { 5305 + struct ath12k_svc_ext_info *svc_ext_info = &ab->wmi_ab.svc_ext_info; 5306 + struct ath12k_hw_mode_info *info = &ab->wmi_ab.hw_mode_info; 5307 + enum wmi_host_hw_mode_config_type hw_config_type; 5308 + struct ath12k_svc_ext_mac_phy_info *tmp; 5309 + bool dbs_mode = false, sbs_mode = false; 5310 + u32 i, j = 0; 5311 + 5312 + if (!svc_ext_info->num_hw_modes) { 5313 + ath12k_err(ab, "invalid number of hw modes"); 5314 + return -EINVAL; 5315 + } 5316 + 5317 + ath12k_dbg(ab, ATH12K_DBG_WMI, "updated HW mode list: num modes %d", 5318 + svc_ext_info->num_hw_modes); 5319 + 5320 + memset(info->freq_range_caps, 0, sizeof(info->freq_range_caps)); 5321 + 5322 + for (i = 0; i < svc_ext_info->num_hw_modes; i++) { 5323 + if (j >= ATH12K_MAX_MAC_PHY_CAP) 5324 + return -EINVAL; 5325 + 5326 + /* Update for MAC0 */ 5327 + tmp = &svc_ext_info->mac_phy_info[j++]; 5328 + hw_config_type = tmp->hw_mode_config_type; 5329 + ath12k_wmi_update_mac_freq_info(ab, hw_config_type, tmp->phy_id, tmp); 
5330 + 5331 + /* SBS and DBS have dual MAC. Up to 2 MACs are considered. */ 5332 + if (hw_config_type == WMI_HOST_HW_MODE_DBS || 5333 + hw_config_type == WMI_HOST_HW_MODE_SBS_PASSIVE || 5334 + hw_config_type == WMI_HOST_HW_MODE_SBS || 5335 + hw_config_type == WMI_HOST_HW_MODE_DBS_OR_SBS) { 5336 + if (j >= ATH12K_MAX_MAC_PHY_CAP) 5337 + return -EINVAL; 5338 + /* Update for MAC1 */ 5339 + tmp = &svc_ext_info->mac_phy_info[j++]; 5340 + ath12k_wmi_update_mac_freq_info(ab, hw_config_type, 5341 + tmp->phy_id, tmp); 5342 + 5343 + if (hw_config_type == WMI_HOST_HW_MODE_DBS || 5344 + hw_config_type == WMI_HOST_HW_MODE_DBS_OR_SBS) 5345 + dbs_mode = true; 5346 + 5347 + if (ath12k_wmi_sbs_range_present(ab) && 5348 + (hw_config_type == WMI_HOST_HW_MODE_SBS_PASSIVE || 5349 + hw_config_type == WMI_HOST_HW_MODE_SBS || 5350 + hw_config_type == WMI_HOST_HW_MODE_DBS_OR_SBS)) 5351 + sbs_mode = true; 5352 + } 5353 + } 5354 + 5355 + info->support_dbs = dbs_mode; 5356 + info->support_sbs = sbs_mode; 5357 + 5358 + ath12k_wmi_dump_freq_range(ab); 5359 + 5360 + return 0; 5361 + } 5362 + 4995 5363 static int ath12k_wmi_svc_rdy_ext2_parse(struct ath12k_base *ab, 4996 5364 u16 tag, u16 len, 4997 5365 const void *ptr, void *data) 4998 5366 { 5367 + const struct ath12k_wmi_dbs_or_sbs_cap_params *dbs_or_sbs_caps; 4999 5368 struct ath12k_wmi_pdev *wmi_handle = &ab->wmi_ab.wmi[0]; 5000 5369 struct ath12k_wmi_svc_rdy_ext2_parse *parse = data; 5001 5370 int ret; ··· 5476 4967 } 5477 4968 5478 4969 parse->mac_phy_caps_ext_done = true; 4970 + } else if (!parse->hal_reg_caps_ext2_done) { 4971 + parse->hal_reg_caps_ext2_done = true; 4972 + } else if (!parse->scan_radio_caps_ext2_done) { 4973 + parse->scan_radio_caps_ext2_done = true; 4974 + } else if (!parse->twt_caps_done) { 4975 + parse->twt_caps_done = true; 4976 + } else if (!parse->htt_msdu_idx_to_qtype_map_done) { 4977 + parse->htt_msdu_idx_to_qtype_map_done = true; 4978 + } else if (!parse->dbs_or_sbs_cap_ext_done) { 4979 + dbs_or_sbs_caps = ptr; 
4980 + ab->wmi_ab.sbs_lower_band_end_freq = 4981 + __le32_to_cpu(dbs_or_sbs_caps->sbs_lower_band_end_freq); 4982 + 4983 + ath12k_dbg(ab, ATH12K_DBG_WMI, "sbs_lower_band_end_freq %u\n", 4984 + ab->wmi_ab.sbs_lower_band_end_freq); 4985 + 4986 + ret = ath12k_wmi_update_hw_mode_list(ab); 4987 + if (ret) { 4988 + ath12k_warn(ab, "failed to update hw mode list: %d\n", 4989 + ret); 4990 + return ret; 4991 + } 4992 + 4993 + parse->dbs_or_sbs_cap_ext_done = true; 5479 4994 } 4995 + 5480 4996 break; 5481 4997 default: 5482 4998 break; ··· 8160 7626 &parse); 8161 7627 } 8162 7628 7629 + static void ath12k_wmi_fw_stats_process(struct ath12k *ar, 7630 + struct ath12k_fw_stats *stats) 7631 + { 7632 + struct ath12k_base *ab = ar->ab; 7633 + struct ath12k_pdev *pdev; 7634 + bool is_end = true; 7635 + size_t total_vdevs_started = 0; 7636 + int i; 7637 + 7638 + if (stats->stats_id == WMI_REQUEST_VDEV_STAT) { 7639 + if (list_empty(&stats->vdevs)) { 7640 + ath12k_warn(ab, "empty vdev stats"); 7641 + return; 7642 + } 7643 + /* FW sends all the active VDEV stats irrespective of PDEV, 7644 + * hence limit until the count of all VDEVs started 7645 + */ 7646 + rcu_read_lock(); 7647 + for (i = 0; i < ab->num_radios; i++) { 7648 + pdev = rcu_dereference(ab->pdevs_active[i]); 7649 + if (pdev && pdev->ar) 7650 + total_vdevs_started += pdev->ar->num_started_vdevs; 7651 + } 7652 + rcu_read_unlock(); 7653 + 7654 + if (total_vdevs_started) 7655 + is_end = ((++ar->fw_stats.num_vdev_recvd) == 7656 + total_vdevs_started); 7657 + 7658 + list_splice_tail_init(&stats->vdevs, 7659 + &ar->fw_stats.vdevs); 7660 + 7661 + if (is_end) 7662 + complete(&ar->fw_stats_done); 7663 + 7664 + return; 7665 + } 7666 + 7667 + if (stats->stats_id == WMI_REQUEST_BCN_STAT) { 7668 + if (list_empty(&stats->bcn)) { 7669 + ath12k_warn(ab, "empty beacon stats"); 7670 + return; 7671 + } 7672 + /* Mark end until we reached the count of all started VDEVs 7673 + * within the PDEV 7674 + */ 7675 + if (ar->num_started_vdevs) 7676 + 
is_end = ((++ar->fw_stats.num_bcn_recvd) == 7677 + ar->num_started_vdevs); 7678 + 7679 + list_splice_tail_init(&stats->bcn, 7680 + &ar->fw_stats.bcn); 7681 + 7682 + if (is_end) 7683 + complete(&ar->fw_stats_done); 7684 + } 7685 + } 7686 + 8163 7687 static void ath12k_update_stats_event(struct ath12k_base *ab, struct sk_buff *skb) 8164 7688 { 8165 7689 struct ath12k_fw_stats stats = {}; ··· 8247 7655 8248 7656 spin_lock_bh(&ar->data_lock); 8249 7657 8250 - /* WMI_REQUEST_PDEV_STAT can be requested via .get_txpower mac ops or via 8251 - * debugfs fw stats. Therefore, processing it separately. 8252 - */ 7658 + /* Handle WMI_REQUEST_PDEV_STAT status update */ 8253 7659 if (stats.stats_id == WMI_REQUEST_PDEV_STAT) { 8254 7660 list_splice_tail_init(&stats.pdevs, &ar->fw_stats.pdevs); 8255 - ar->fw_stats.fw_stats_done = true; 7661 + complete(&ar->fw_stats_done); 8256 7662 goto complete; 8257 7663 } 8258 7664 8259 - /* WMI_REQUEST_VDEV_STAT and WMI_REQUEST_BCN_STAT are currently requested only 8260 - * via debugfs fw stats. Hence, processing these in debugfs context. 8261 - */ 8262 - ath12k_debugfs_fw_stats_process(ar, &stats); 7665 + /* Handle WMI_REQUEST_VDEV_STAT and WMI_REQUEST_BCN_STAT updates. 
*/ 7666 + ath12k_wmi_fw_stats_process(ar, &stats); 8263 7667 8264 7668 complete: 8265 7669 complete(&ar->fw_stats_complete); ··· 10498 9910 } 10499 9911 10500 9912 return 0; 9913 + } 9914 + 9915 + static int 9916 + ath12k_wmi_fill_disallowed_bmap(struct ath12k_base *ab, 9917 + struct wmi_disallowed_mlo_mode_bitmap_params *dislw_bmap, 9918 + struct wmi_mlo_link_set_active_arg *arg) 9919 + { 9920 + struct wmi_ml_disallow_mode_bmap_arg *dislw_bmap_arg; 9921 + u8 i; 9922 + 9923 + if (arg->num_disallow_mode_comb > 9924 + ARRAY_SIZE(arg->disallow_bmap)) { 9925 + ath12k_warn(ab, "invalid num_disallow_mode_comb: %d", 9926 + arg->num_disallow_mode_comb); 9927 + return -EINVAL; 9928 + } 9929 + 9930 + dislw_bmap_arg = &arg->disallow_bmap[0]; 9931 + for (i = 0; i < arg->num_disallow_mode_comb; i++) { 9932 + dislw_bmap->tlv_header = 9933 + ath12k_wmi_tlv_cmd_hdr(0, sizeof(*dislw_bmap)); 9934 + dislw_bmap->disallowed_mode_bitmap = 9935 + cpu_to_le32(dislw_bmap_arg->disallowed_mode); 9936 + dislw_bmap->ieee_link_id_comb = 9937 + le32_encode_bits(dislw_bmap_arg->ieee_link_id[0], 9938 + WMI_DISALW_MLO_MODE_BMAP_IEEE_LINK_ID_COMB_1) | 9939 + le32_encode_bits(dislw_bmap_arg->ieee_link_id[1], 9940 + WMI_DISALW_MLO_MODE_BMAP_IEEE_LINK_ID_COMB_2) | 9941 + le32_encode_bits(dislw_bmap_arg->ieee_link_id[2], 9942 + WMI_DISALW_MLO_MODE_BMAP_IEEE_LINK_ID_COMB_3) | 9943 + le32_encode_bits(dislw_bmap_arg->ieee_link_id[3], 9944 + WMI_DISALW_MLO_MODE_BMAP_IEEE_LINK_ID_COMB_4); 9945 + 9946 + ath12k_dbg(ab, ATH12K_DBG_WMI, 9947 + "entry %d disallowed_mode %d ieee_link_id_comb 0x%x", 9948 + i, dislw_bmap_arg->disallowed_mode, 9949 + dislw_bmap_arg->ieee_link_id_comb); 9950 + dislw_bmap++; 9951 + dislw_bmap_arg++; 9952 + } 9953 + 9954 + return 0; 9955 + } 9956 + 9957 + int ath12k_wmi_send_mlo_link_set_active_cmd(struct ath12k_base *ab, 9958 + struct wmi_mlo_link_set_active_arg *arg) 9959 + { 9960 + struct wmi_disallowed_mlo_mode_bitmap_params *disallowed_mode_bmap; 9961 + struct 
wmi_mlo_set_active_link_number_params *link_num_param; 9962 + u32 num_link_num_param = 0, num_vdev_bitmap = 0; 9963 + struct ath12k_wmi_base *wmi_ab = &ab->wmi_ab; 9964 + struct wmi_mlo_link_set_active_cmd *cmd; 9965 + u32 num_inactive_vdev_bitmap = 0; 9966 + u32 num_disallow_mode_comb = 0; 9967 + struct wmi_tlv *tlv; 9968 + struct sk_buff *skb; 9969 + __le32 *vdev_bitmap; 9970 + void *buf_ptr; 9971 + int i, ret; 9972 + u32 len; 9973 + 9974 + if (!arg->num_vdev_bitmap && !arg->num_link_entry) { 9975 + ath12k_warn(ab, "Invalid num_vdev_bitmap and num_link_entry"); 9976 + return -EINVAL; 9977 + } 9978 + 9979 + switch (arg->force_mode) { 9980 + case WMI_MLO_LINK_FORCE_MODE_ACTIVE_LINK_NUM: 9981 + case WMI_MLO_LINK_FORCE_MODE_INACTIVE_LINK_NUM: 9982 + num_link_num_param = arg->num_link_entry; 9983 + fallthrough; 9984 + case WMI_MLO_LINK_FORCE_MODE_ACTIVE: 9985 + case WMI_MLO_LINK_FORCE_MODE_INACTIVE: 9986 + case WMI_MLO_LINK_FORCE_MODE_NO_FORCE: 9987 + num_vdev_bitmap = arg->num_vdev_bitmap; 9988 + break; 9989 + case WMI_MLO_LINK_FORCE_MODE_ACTIVE_INACTIVE: 9990 + num_vdev_bitmap = arg->num_vdev_bitmap; 9991 + num_inactive_vdev_bitmap = arg->num_inactive_vdev_bitmap; 9992 + break; 9993 + default: 9994 + ath12k_warn(ab, "Invalid force mode: %u", arg->force_mode); 9995 + return -EINVAL; 9996 + } 9997 + 9998 + num_disallow_mode_comb = arg->num_disallow_mode_comb; 9999 + len = sizeof(*cmd) + 10000 + TLV_HDR_SIZE + sizeof(*link_num_param) * num_link_num_param + 10001 + TLV_HDR_SIZE + sizeof(*vdev_bitmap) * num_vdev_bitmap + 10002 + TLV_HDR_SIZE + TLV_HDR_SIZE + TLV_HDR_SIZE + 10003 + TLV_HDR_SIZE + sizeof(*disallowed_mode_bmap) * num_disallow_mode_comb; 10004 + if (arg->force_mode == WMI_MLO_LINK_FORCE_MODE_ACTIVE_INACTIVE) 10005 + len += sizeof(*vdev_bitmap) * num_inactive_vdev_bitmap; 10006 + 10007 + skb = ath12k_wmi_alloc_skb(wmi_ab, len); 10008 + if (!skb) 10009 + return -ENOMEM; 10010 + 10011 + cmd = (struct wmi_mlo_link_set_active_cmd *)skb->data; 10012 + 
cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_MLO_LINK_SET_ACTIVE_CMD, 10013 + sizeof(*cmd)); 10014 + cmd->force_mode = cpu_to_le32(arg->force_mode); 10015 + cmd->reason = cpu_to_le32(arg->reason); 10016 + ath12k_dbg(ab, ATH12K_DBG_WMI, 10017 + "mode %d reason %d num_link_num_param %d num_vdev_bitmap %d inactive %d num_disallow_mode_comb %d", 10018 + arg->force_mode, arg->reason, num_link_num_param, 10019 + num_vdev_bitmap, num_inactive_vdev_bitmap, 10020 + num_disallow_mode_comb); 10021 + 10022 + buf_ptr = skb->data + sizeof(*cmd); 10023 + tlv = buf_ptr; 10024 + tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_STRUCT, 10025 + sizeof(*link_num_param) * num_link_num_param); 10026 + buf_ptr += TLV_HDR_SIZE; 10027 + 10028 + if (num_link_num_param) { 10029 + cmd->ctrl_flags = 10030 + le32_encode_bits(arg->ctrl_flags.dync_force_link_num ? 1 : 0, 10031 + CRTL_F_DYNC_FORCE_LINK_NUM); 10032 + 10033 + link_num_param = buf_ptr; 10034 + for (i = 0; i < num_link_num_param; i++) { 10035 + link_num_param->tlv_header = 10036 + ath12k_wmi_tlv_cmd_hdr(0, sizeof(*link_num_param)); 10037 + link_num_param->num_of_link = 10038 + cpu_to_le32(arg->link_num[i].num_of_link); 10039 + link_num_param->vdev_type = 10040 + cpu_to_le32(arg->link_num[i].vdev_type); 10041 + link_num_param->vdev_subtype = 10042 + cpu_to_le32(arg->link_num[i].vdev_subtype); 10043 + link_num_param->home_freq = 10044 + cpu_to_le32(arg->link_num[i].home_freq); 10045 + ath12k_dbg(ab, ATH12K_DBG_WMI, 10046 + "entry %d num_of_link %d vdev type %d subtype %d freq %d control_flags %d", 10047 + i, arg->link_num[i].num_of_link, 10048 + arg->link_num[i].vdev_type, 10049 + arg->link_num[i].vdev_subtype, 10050 + arg->link_num[i].home_freq, 10051 + __le32_to_cpu(cmd->ctrl_flags)); 10052 + link_num_param++; 10053 + } 10054 + 10055 + buf_ptr += sizeof(*link_num_param) * num_link_num_param; 10056 + } 10057 + 10058 + tlv = buf_ptr; 10059 + tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_UINT32, 10060 + sizeof(*vdev_bitmap) * 
num_vdev_bitmap); 10061 + buf_ptr += TLV_HDR_SIZE; 10062 + 10063 + if (num_vdev_bitmap) { 10064 + vdev_bitmap = buf_ptr; 10065 + for (i = 0; i < num_vdev_bitmap; i++) { 10066 + vdev_bitmap[i] = cpu_to_le32(arg->vdev_bitmap[i]); 10067 + ath12k_dbg(ab, ATH12K_DBG_WMI, "entry %d vdev_id_bitmap 0x%x", 10068 + i, arg->vdev_bitmap[i]); 10069 + } 10070 + 10071 + buf_ptr += sizeof(*vdev_bitmap) * num_vdev_bitmap; 10072 + } 10073 + 10074 + if (arg->force_mode == WMI_MLO_LINK_FORCE_MODE_ACTIVE_INACTIVE) { 10075 + tlv = buf_ptr; 10076 + tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_UINT32, 10077 + sizeof(*vdev_bitmap) * 10078 + num_inactive_vdev_bitmap); 10079 + buf_ptr += TLV_HDR_SIZE; 10080 + 10081 + if (num_inactive_vdev_bitmap) { 10082 + vdev_bitmap = buf_ptr; 10083 + for (i = 0; i < num_inactive_vdev_bitmap; i++) { 10084 + vdev_bitmap[i] = 10085 + cpu_to_le32(arg->inactive_vdev_bitmap[i]); 10086 + ath12k_dbg(ab, ATH12K_DBG_WMI, 10087 + "entry %d inactive_vdev_id_bitmap 0x%x", 10088 + i, arg->inactive_vdev_bitmap[i]); 10089 + } 10090 + 10091 + buf_ptr += sizeof(*vdev_bitmap) * num_inactive_vdev_bitmap; 10092 + } 10093 + } else { 10094 + /* add empty vdev bitmap2 tlv */ 10095 + tlv = buf_ptr; 10096 + tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_UINT32, 0); 10097 + buf_ptr += TLV_HDR_SIZE; 10098 + } 10099 + 10100 + /* add empty ieee_link_id_bitmap tlv */ 10101 + tlv = buf_ptr; 10102 + tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_UINT32, 0); 10103 + buf_ptr += TLV_HDR_SIZE; 10104 + 10105 + /* add empty ieee_link_id_bitmap2 tlv */ 10106 + tlv = buf_ptr; 10107 + tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_UINT32, 0); 10108 + buf_ptr += TLV_HDR_SIZE; 10109 + 10110 + tlv = buf_ptr; 10111 + tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_STRUCT, 10112 + sizeof(*disallowed_mode_bmap) * 10113 + arg->num_disallow_mode_comb); 10114 + buf_ptr += TLV_HDR_SIZE; 10115 + 10116 + ret = ath12k_wmi_fill_disallowed_bmap(ab, buf_ptr, arg); 10117 + if (ret) 10118 + goto free_skb; 10119 + 
10120 + ret = ath12k_wmi_cmd_send(&wmi_ab->wmi[0], skb, WMI_MLO_LINK_SET_ACTIVE_CMDID); 10121 + if (ret) { 10122 + ath12k_warn(ab, 10123 + "failed to send WMI_MLO_LINK_SET_ACTIVE_CMDID: %d\n", ret); 10124 + goto free_skb; 10125 + } 10126 + 10127 + ath12k_dbg(ab, ATH12K_DBG_WMI, "WMI mlo link set active cmd"); 10128 + 10129 + return ret; 10130 + 10131 + free_skb: 10132 + dev_kfree_skb(skb); 10133 + return ret; 10501 10134 }
+179 -1
drivers/net/wireless/ath/ath12k/wmi.h
··· 1974 1974 WMI_TAG_TPC_STATS_CTL_PWR_TABLE_EVENT, 1975 1975 WMI_TAG_VDEV_SET_TPC_POWER_CMD = 0x3B5, 1976 1976 WMI_TAG_VDEV_CH_POWER_INFO, 1977 + WMI_TAG_MLO_LINK_SET_ACTIVE_CMD = 0x3BE, 1977 1978 WMI_TAG_EHT_RATE_SET = 0x3C4, 1978 1979 WMI_TAG_DCS_AWGN_INT_TYPE = 0x3C5, 1979 1980 WMI_TAG_MLO_TX_SEND_PARAMS, ··· 2618 2617 __le32 num_chainmask_tables; 2619 2618 } __packed; 2620 2619 2620 + #define WMI_HW_MODE_CAP_CFG_TYPE GENMASK(27, 0) 2621 + 2621 2622 struct ath12k_wmi_hw_mode_cap_params { 2622 2623 __le32 tlv_header; 2623 2624 __le32 hw_mode_id; ··· 2669 2666 __le32 he_cap_info_2g_ext; 2670 2667 __le32 he_cap_info_5g_ext; 2671 2668 __le32 he_cap_info_internal; 2669 + __le32 wireless_modes; 2670 + __le32 low_2ghz_chan_freq; 2671 + __le32 high_2ghz_chan_freq; 2672 + __le32 low_5ghz_chan_freq; 2673 + __le32 high_5ghz_chan_freq; 2674 + __le32 nss_ratio; 2672 2675 } __packed; 2673 2676 2674 2677 struct ath12k_wmi_hal_reg_caps_ext_params { ··· 2746 2737 __le32 max_num_linkview_peers; 2747 2738 __le32 max_num_msduq_supported_per_tid; 2748 2739 __le32 default_num_msduq_supported_per_tid; 2740 + } __packed; 2741 + 2742 + struct ath12k_wmi_dbs_or_sbs_cap_params { 2743 + __le32 hw_mode_id; 2744 + __le32 sbs_lower_band_end_freq; 2749 2745 } __packed; 2750 2746 2751 2747 struct ath12k_wmi_caps_ext_params { ··· 5063 5049 u32 rx_decap_mode; 5064 5050 }; 5065 5051 5052 + struct ath12k_hw_mode_freq_range_arg { 5053 + u32 low_2ghz_freq; 5054 + u32 high_2ghz_freq; 5055 + u32 low_5ghz_freq; 5056 + u32 high_5ghz_freq; 5057 + }; 5058 + 5059 + struct ath12k_svc_ext_mac_phy_info { 5060 + enum wmi_host_hw_mode_config_type hw_mode_config_type; 5061 + u32 phy_id; 5062 + u32 supported_bands; 5063 + struct ath12k_hw_mode_freq_range_arg hw_freq_range; 5064 + }; 5065 + 5066 + #define ATH12K_MAX_MAC_PHY_CAP 8 5067 + 5068 + struct ath12k_svc_ext_info { 5069 + u32 num_hw_modes; 5070 + struct ath12k_svc_ext_mac_phy_info mac_phy_info[ATH12K_MAX_MAC_PHY_CAP]; 5071 + }; 5072 + 5073 + /** 5074 + * 
enum ath12k_hw_mode - enum for host mode 5075 + * @ATH12K_HW_MODE_SMM: Single mac mode 5076 + * @ATH12K_HW_MODE_DBS: DBS mode 5077 + * @ATH12K_HW_MODE_SBS: SBS mode with either high share or low share 5078 + * @ATH12K_HW_MODE_SBS_UPPER_SHARE: Higher 5 GHz shared with 2.4 GHz 5079 + * @ATH12K_HW_MODE_SBS_LOWER_SHARE: Lower 5 GHz shared with 2.4 GHz 5080 + * @ATH12K_HW_MODE_MAX: Max, used to indicate invalid mode 5081 + */ 5082 + enum ath12k_hw_mode { 5083 + ATH12K_HW_MODE_SMM, 5084 + ATH12K_HW_MODE_DBS, 5085 + ATH12K_HW_MODE_SBS, 5086 + ATH12K_HW_MODE_SBS_UPPER_SHARE, 5087 + ATH12K_HW_MODE_SBS_LOWER_SHARE, 5088 + ATH12K_HW_MODE_MAX, 5089 + }; 5090 + 5091 + struct ath12k_hw_mode_info { 5092 + bool support_dbs:1; 5093 + bool support_sbs:1; 5094 + 5095 + struct ath12k_hw_mode_freq_range_arg freq_range_caps[ATH12K_HW_MODE_MAX] 5096 + [MAX_RADIOS]; 5097 + }; 5098 + 5066 5099 struct ath12k_wmi_base { 5067 5100 struct ath12k_base *ab; 5068 5101 struct ath12k_wmi_pdev wmi[MAX_RADIOS]; ··· 5127 5066 enum wmi_host_hw_mode_config_type preferred_hw_mode; 5128 5067 5129 5068 struct ath12k_wmi_target_cap_arg *targ_cap; 5069 + 5070 + struct ath12k_svc_ext_info svc_ext_info; 5071 + u32 sbs_lower_band_end_freq; 5072 + struct ath12k_hw_mode_info hw_mode_info; 5130 5073 }; 5131 5074 5132 5075 struct wmi_pdev_set_bios_interface_cmd { ··· 6062 5997 */ 6063 5998 } __packed; 6064 5999 6000 + #define CRTL_F_DYNC_FORCE_LINK_NUM GENMASK(3, 2) 6001 + 6002 + struct wmi_mlo_link_set_active_cmd { 6003 + __le32 tlv_header; 6004 + __le32 force_mode; 6005 + __le32 reason; 6006 + __le32 use_ieee_link_id_bitmap; 6007 + struct ath12k_wmi_mac_addr_params ap_mld_mac_addr; 6008 + __le32 ctrl_flags; 6009 + } __packed; 6010 + 6011 + struct wmi_mlo_set_active_link_number_params { 6012 + __le32 tlv_header; 6013 + __le32 num_of_link; 6014 + __le32 vdev_type; 6015 + __le32 vdev_subtype; 6016 + __le32 home_freq; 6017 + } __packed; 6018 + 6019 + #define WMI_DISALW_MLO_MODE_BMAP_IEEE_LINK_ID_COMB_1 GENMASK(7, 0) 
6020 + #define WMI_DISALW_MLO_MODE_BMAP_IEEE_LINK_ID_COMB_2 GENMASK(15, 8) 6021 + #define WMI_DISALW_MLO_MODE_BMAP_IEEE_LINK_ID_COMB_3 GENMASK(23, 16) 6022 + #define WMI_DISALW_MLO_MODE_BMAP_IEEE_LINK_ID_COMB_4 GENMASK(31, 24) 6023 + 6024 + struct wmi_disallowed_mlo_mode_bitmap_params { 6025 + __le32 tlv_header; 6026 + __le32 disallowed_mode_bitmap; 6027 + __le32 ieee_link_id_comb; 6028 + } __packed; 6029 + 6030 + enum wmi_mlo_link_force_mode { 6031 + WMI_MLO_LINK_FORCE_MODE_ACTIVE = 1, 6032 + WMI_MLO_LINK_FORCE_MODE_INACTIVE = 2, 6033 + WMI_MLO_LINK_FORCE_MODE_ACTIVE_LINK_NUM = 3, 6034 + WMI_MLO_LINK_FORCE_MODE_INACTIVE_LINK_NUM = 4, 6035 + WMI_MLO_LINK_FORCE_MODE_NO_FORCE = 5, 6036 + WMI_MLO_LINK_FORCE_MODE_ACTIVE_INACTIVE = 6, 6037 + WMI_MLO_LINK_FORCE_MODE_NON_FORCE_UPDATE = 7, 6038 + }; 6039 + 6040 + enum wmi_mlo_link_force_reason { 6041 + WMI_MLO_LINK_FORCE_REASON_NEW_CONNECT = 1, 6042 + WMI_MLO_LINK_FORCE_REASON_NEW_DISCONNECT = 2, 6043 + WMI_MLO_LINK_FORCE_REASON_LINK_REMOVAL = 3, 6044 + WMI_MLO_LINK_FORCE_REASON_TDLS = 4, 6045 + WMI_MLO_LINK_FORCE_REASON_REVERT_FAILURE = 5, 6046 + WMI_MLO_LINK_FORCE_REASON_LINK_DELETE = 6, 6047 + WMI_MLO_LINK_FORCE_REASON_SINGLE_LINK_EMLSR_OP = 7, 6048 + }; 6049 + 6050 + struct wmi_mlo_link_num_arg { 6051 + u32 num_of_link; 6052 + u32 vdev_type; 6053 + u32 vdev_subtype; 6054 + u32 home_freq; 6055 + }; 6056 + 6057 + struct wmi_mlo_control_flags_arg { 6058 + bool overwrite_force_active_bitmap; 6059 + bool overwrite_force_inactive_bitmap; 6060 + bool dync_force_link_num; 6061 + bool post_re_evaluate; 6062 + u8 post_re_evaluate_loops; 6063 + bool dont_reschedule_workqueue; 6064 + }; 6065 + 6066 + struct wmi_ml_link_force_cmd_arg { 6067 + u8 ap_mld_mac_addr[ETH_ALEN]; 6068 + u16 ieee_link_id_bitmap; 6069 + u16 ieee_link_id_bitmap2; 6070 + u8 link_num; 6071 + }; 6072 + 6073 + struct wmi_ml_disallow_mode_bmap_arg { 6074 + u32 disallowed_mode; 6075 + union { 6076 + u32 ieee_link_id_comb; 6077 + u8 ieee_link_id[4]; 6078 + }; 6079 + 
}; 6080 + 6081 + /* maximum size of link number param array 6082 + * for MLO link set active command 6083 + */ 6084 + #define WMI_MLO_LINK_NUM_SZ 2 6085 + 6086 + /* maximum size of vdev bitmap array for 6087 + * MLO link set active command 6088 + */ 6089 + #define WMI_MLO_VDEV_BITMAP_SZ 2 6090 + 6091 + /* Max number of disallowed bitmap combination 6092 + * sent to firmware 6093 + */ 6094 + #define WMI_ML_MAX_DISALLOW_BMAP_COMB 4 6095 + 6096 + struct wmi_mlo_link_set_active_arg { 6097 + enum wmi_mlo_link_force_mode force_mode; 6098 + enum wmi_mlo_link_force_reason reason; 6099 + u32 num_link_entry; 6100 + u32 num_vdev_bitmap; 6101 + u32 num_inactive_vdev_bitmap; 6102 + struct wmi_mlo_link_num_arg link_num[WMI_MLO_LINK_NUM_SZ]; 6103 + u32 vdev_bitmap[WMI_MLO_VDEV_BITMAP_SZ]; 6104 + u32 inactive_vdev_bitmap[WMI_MLO_VDEV_BITMAP_SZ]; 6105 + struct wmi_mlo_control_flags_arg ctrl_flags; 6106 + bool use_ieee_link_id; 6107 + struct wmi_ml_link_force_cmd_arg force_cmd; 6108 + u32 num_disallow_mode_comb; 6109 + struct wmi_ml_disallow_mode_bmap_arg disallow_bmap[WMI_ML_MAX_DISALLOW_BMAP_COMB]; 6110 + }; 6111 + 6065 6112 void ath12k_wmi_init_qcn9274(struct ath12k_base *ab, 6066 6113 struct ath12k_wmi_resource_config_arg *config); 6067 6114 void ath12k_wmi_init_wcn7850(struct ath12k_base *ab, ··· 6372 6195 int ath12k_wmi_send_vdev_set_tpc_power(struct ath12k *ar, 6373 6196 u32 vdev_id, 6374 6197 struct ath12k_reg_tpc_power_info *param); 6375 - 6198 + int ath12k_wmi_send_mlo_link_set_active_cmd(struct ath12k_base *ab, 6199 + struct wmi_mlo_link_set_active_arg *param); 6376 6200 #endif
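The four WMI_DISALW_MLO_MODE_BMAP_IEEE_LINK_ID_COMB_* GENMASK fields added in this hunk place one 8-bit IEEE link ID per byte of the 32-bit combination word. A minimal userspace sketch of that packing (`pack_link_id_comb` is a hypothetical name; the driver itself uses `le32_encode_bits()` with these masks and converts to little endian, which is omitted here):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical helper mirroring the WMI_DISALW_MLO_MODE_BMAP_* fields:
 * link ID i lands in byte i of the 32-bit combination word.
 */
static uint32_t pack_link_id_comb(const uint8_t id[4])
{
	return (uint32_t)id[0] |		/* COMB_1: bits 7:0 */
	       ((uint32_t)id[1] << 8) |		/* COMB_2: bits 15:8 */
	       ((uint32_t)id[2] << 16) |	/* COMB_3: bits 23:16 */
	       ((uint32_t)id[3] << 24);		/* COMB_4: bits 31:24 */
}
```

This is also why `struct wmi_ml_disallow_mode_bmap_arg` can expose the same word either as `u32 ieee_link_id_comb` or as `u8 ieee_link_id[4]` through a union.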
+3 -1
drivers/net/wireless/ath/ath6kl/bmi.c
··· 87 87 * We need to do some backwards compatibility to make this work. 88 88 */ 89 89 if (le32_to_cpu(targ_info->byte_count) != sizeof(*targ_info)) { 90 - WARN_ON(1); 90 + ath6kl_err("mismatched byte count %d vs. expected %zd\n", 91 + le32_to_cpu(targ_info->byte_count), 92 + sizeof(*targ_info)); 91 93 return -EINVAL; 92 94 } 93 95
+13 -6
drivers/net/wireless/ath/carl9170/usb.c
··· 438 438 439 439 if (atomic_read(&ar->rx_anch_urbs) == 0) { 440 440 /* 441 - * The system is too slow to cope with 442 - * the enormous workload. We have simply 443 - * run out of active rx urbs and this 444 - * unfortunately leads to an unpredictable 445 - * device. 441 + * At this point, either the system is too slow to 442 + * cope with the enormous workload (so we have simply 443 + * run out of active rx urbs and this unfortunately 444 + * leads to an unpredictable device), or the device 445 + * is not fully functional after unsuccessful 446 + * firmware loading attempts (so it doesn't pass 447 + * ieee80211_register_hw() and there is no internal 448 + * workqueue at all). 446 449 */ 447 450 448 - ieee80211_queue_work(ar->hw, &ar->ping_work); 451 + if (ar->registered) 452 + ieee80211_queue_work(ar->hw, &ar->ping_work); 453 + else 454 + pr_warn_once("device %s is not registered\n", 455 + dev_name(&ar->udev->dev)); 449 456 } 450 457 } else { 451 458 /*
+1
drivers/net/wireless/intel/iwlwifi/dvm/main.c
··· 1316 1316 sizeof(trans->conf.no_reclaim_cmds)); 1317 1317 memcpy(trans->conf.no_reclaim_cmds, no_reclaim_cmds, 1318 1318 sizeof(no_reclaim_cmds)); 1319 + trans->conf.n_no_reclaim_cmds = ARRAY_SIZE(no_reclaim_cmds); 1319 1320 1320 1321 switch (iwlwifi_mod_params.amsdu_size) { 1321 1322 case IWL_AMSDU_DEF:
+1
drivers/net/wireless/intel/iwlwifi/mld/mld.c
··· 77 77 78 78 /* Setup async RX handling */ 79 79 spin_lock_init(&mld->async_handlers_lock); 80 + INIT_LIST_HEAD(&mld->async_handlers_list); 80 81 wiphy_work_init(&mld->async_handlers_wk, 81 82 iwl_mld_async_handlers_wk); 82 83
+1 -1
drivers/net/wireless/intel/iwlwifi/mvm/mld-mac.c
··· 34 34 WIDE_ID(MAC_CONF_GROUP, 35 35 MAC_CONFIG_CMD), 0); 36 36 37 - if (WARN_ON(cmd_ver < 1 && cmd_ver > 3)) 37 + if (WARN_ON(cmd_ver < 1 || cmd_ver > 3)) 38 38 return; 39 39 40 40 cmd->id_and_color = cpu_to_le32(mvmvif->id);
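The one-character iwlwifi fix above matters because `cmd_ver < 1 && cmd_ver > 3` is unsatisfiable — no integer is both below 1 and above 3 — so the WARN_ON could never fire. Restated as a standalone predicate (the function name is illustrative, not the driver's):

```c
#include <assert.h>
#include <stdbool.h>

/* The corrected range check: a command version is invalid when it falls
 * outside [1, 3]. The pre-fix conjunction (< 1 && > 3) rejected nothing.
 */
static bool cmd_ver_invalid(int cmd_ver)
{
	return cmd_ver < 1 || cmd_ver > 3;
}
```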
+6 -5
drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info.c
··· 166 166 struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); 167 167 struct iwl_context_info *ctxt_info; 168 168 struct iwl_context_info_rbd_cfg *rx_cfg; 169 - u32 control_flags = 0, rb_size; 169 + u32 control_flags = 0, rb_size, cb_size; 170 170 dma_addr_t phys; 171 171 int ret; 172 172 ··· 202 202 rb_size = IWL_CTXT_INFO_RB_SIZE_4K; 203 203 } 204 204 205 - WARN_ON(RX_QUEUE_CB_SIZE(iwl_trans_get_num_rbds(trans)) > 12); 205 + cb_size = RX_QUEUE_CB_SIZE(iwl_trans_get_num_rbds(trans)); 206 + if (WARN_ON(cb_size > 12)) 207 + cb_size = 12; 208 + 206 209 control_flags = IWL_CTXT_INFO_TFD_FORMAT_LONG; 207 - control_flags |= 208 - u32_encode_bits(RX_QUEUE_CB_SIZE(iwl_trans_get_num_rbds(trans)), 209 - IWL_CTXT_INFO_RB_CB_SIZE); 210 + control_flags |= u32_encode_bits(cb_size, IWL_CTXT_INFO_RB_CB_SIZE); 210 211 control_flags |= u32_encode_bits(rb_size, IWL_CTXT_INFO_RB_SIZE); 211 212 ctxt_info->control.control_flags = cpu_to_le32(control_flags); 212 213
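The ctxt-info hunk above turns a warn-only check into warn-and-clamp: the receive-buffer CB size occupies a narrow bitfield, so an oversized value must not just trigger a warning but also be clamped before being encoded into the control-flags word. A standalone sketch of that pattern (the limit of 12 mirrors the hunk; `fprintf` stands in for WARN_ON, and the name is ours):

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Clamp-and-warn: report the out-of-range value once, but also cap it so
 * it cannot spill into adjacent bits of the control-flags word.
 */
static uint32_t clamp_cb_size(uint32_t cb_size)
{
	if (cb_size > 12) {
		fprintf(stderr, "cb_size %u too large, clamping to 12\n",
			cb_size);
		cb_size = 12;
	}
	return cb_size;
}
```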
+2 -1
drivers/ptp/ptp_clock.c
··· 121 121 struct ptp_clock_info *ops; 122 122 int err = -EOPNOTSUPP; 123 123 124 - if (ptp_clock_freerun(ptp)) { 124 + if (tx->modes & (ADJ_SETOFFSET | ADJ_FREQUENCY | ADJ_OFFSET) && 125 + ptp_clock_freerun(ptp)) { 125 126 pr_err("ptp: physical clock is free running\n"); 126 127 return -EBUSY; 127 128 }
+21 -1
drivers/ptp/ptp_private.h
··· 98 98 /* Check if ptp virtual clock is in use */ 99 99 static inline bool ptp_vclock_in_use(struct ptp_clock *ptp) 100 100 { 101 - return !ptp->is_virtual_clock; 101 + bool in_use = false; 102 + 103 + /* Virtual clocks can't be stacked on top of virtual clocks. 104 + * Avoid acquiring the n_vclocks_mux on virtual clocks, to allow this 105 + * function to be called from code paths where the n_vclocks_mux of the 106 + * parent physical clock is already held. Functionally that's not an 107 + * issue, but lockdep would complain, because they have the same lock 108 + * class. 109 + */ 110 + if (ptp->is_virtual_clock) 111 + return false; 112 + 113 + if (mutex_lock_interruptible(&ptp->n_vclocks_mux)) 114 + return true; 115 + 116 + if (ptp->n_vclocks) 117 + in_use = true; 118 + 119 + mutex_unlock(&ptp->n_vclocks_mux); 120 + 121 + return in_use; 102 122 } 103 123 104 124 /* Check if ptp clock shall be free running */
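The reworked `ptp_vclock_in_use()` above replaces a flag-based guess with an actual count of attached vclocks, read under its mutex, where a failed (interrupted) lock acquisition conservatively reports "in use". A userspace sketch of that shape using pthreads (struct and names are illustrative, not the kernel's):

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Illustrative model of the in-use check: count readers under the lock,
 * and treat "could not take the lock" as the safe answer (busy).
 */
struct clk {
	pthread_mutex_t lock;	/* stands in for n_vclocks_mux */
	int n_vclocks;
	bool is_virtual;
};

static bool clk_vclock_in_use(struct clk *c)
{
	bool in_use;

	/* virtual clocks cannot have vclocks stacked on them */
	if (c->is_virtual)
		return false;

	if (pthread_mutex_lock(&c->lock) != 0)
		return true;	/* could not check: assume busy */

	in_use = c->n_vclocks > 0;
	pthread_mutex_unlock(&c->lock);

	return in_use;
}
```

Note the early return for virtual clocks: as the kernel comment explains, it exists to avoid taking a lock of the same lockdep class that the caller may already hold on the parent physical clock.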
+6
include/linux/atmdev.h
··· 249 249 ATM_SKB(skb)->atm_options = vcc->atm_options; 250 250 } 251 251 252 + static inline void atm_return_tx(struct atm_vcc *vcc, struct sk_buff *skb) 253 + { 254 + WARN_ON_ONCE(refcount_sub_and_test(ATM_SKB(skb)->acct_truesize, 255 + &sk_atm(vcc)->sk_wmem_alloc)); 256 + } 257 + 252 258 static inline void atm_force_charge(struct atm_vcc *vcc,int truesize) 253 259 { 254 260 atomic_add(truesize, &sk_atm(vcc)->sk_rmem_alloc);
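The new `atm_return_tx()` helper centralizes the "uncharge" half of the skb write-memory accounting pair, so every path that charged a buffer — including error paths such as the `copy_from_iter_full()` failure fixed in net/atm/common.c later in this diff — returns the charge exactly once. A toy model of that invariant (names are illustrative; the kernel uses `refcount_sub_and_test()` on `sk_wmem_alloc`):

```c
#include <assert.h>

/* Toy charge/return pairing: the counter must come back to its starting
 * value on success and error paths alike.
 */
struct tx_acct {
	int wmem_alloc;
};

static void charge_tx(struct tx_acct *a, int truesize)
{
	a->wmem_alloc += truesize;
}

static void return_tx(struct tx_acct *a, int truesize)
{
	a->wmem_alloc -= truesize;
}

/* A send path that fails after charging must still return the charge
 * before freeing the buffer -- the bug class this series closes.
 */
static int send_failing(struct tx_acct *a, int truesize)
{
	charge_tx(a, truesize);
	/* ... copy from user fails here ... */
	return_tx(a, truesize);
	return -1;
}
```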
+9 -9
include/linux/ieee80211.h
··· 1278 1278 u8 sa[ETH_ALEN]; 1279 1279 __le32 timestamp; 1280 1280 u8 change_seq; 1281 - u8 variable[0]; 1281 + u8 variable[]; 1282 1282 } __packed s1g_beacon; 1283 1283 } u; 1284 1284 } __packed __aligned(2); ··· 1536 1536 u8 action_code; 1537 1537 u8 dialog_token; 1538 1538 __le16 capability; 1539 - u8 variable[0]; 1539 + u8 variable[]; 1540 1540 } __packed tdls_discover_resp; 1541 1541 struct { 1542 1542 u8 action_code; ··· 1721 1721 struct { 1722 1722 u8 dialog_token; 1723 1723 __le16 capability; 1724 - u8 variable[0]; 1724 + u8 variable[]; 1725 1725 } __packed setup_req; 1726 1726 struct { 1727 1727 __le16 status_code; 1728 1728 u8 dialog_token; 1729 1729 __le16 capability; 1730 - u8 variable[0]; 1730 + u8 variable[]; 1731 1731 } __packed setup_resp; 1732 1732 struct { 1733 1733 __le16 status_code; 1734 1734 u8 dialog_token; 1735 - u8 variable[0]; 1735 + u8 variable[]; 1736 1736 } __packed setup_cfm; 1737 1737 struct { 1738 1738 __le16 reason_code; 1739 - u8 variable[0]; 1739 + u8 variable[]; 1740 1740 } __packed teardown; 1741 1741 struct { 1742 1742 u8 dialog_token; 1743 - u8 variable[0]; 1743 + u8 variable[]; 1744 1744 } __packed discover_req; 1745 1745 struct { 1746 1746 u8 target_channel; 1747 1747 u8 oper_class; 1748 - u8 variable[0]; 1748 + u8 variable[]; 1749 1749 } __packed chan_switch_req; 1750 1750 struct { 1751 1751 __le16 status_code; 1752 - u8 variable[0]; 1752 + u8 variable[]; 1753 1753 } __packed chan_switch_resp; 1754 1754 } u; 1755 1755 } __packed;
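The `variable[0]` to `variable[]` conversions above replace the old GNU zero-length-array idiom with C99 flexible array members. A minimal illustration of how such a member is used (struct and helper names are ours, not mac80211's): the flexible member contributes nothing to `sizeof`, and the variable tail is allocated explicitly.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* C99 flexible array member: sizeof(struct frame) covers only the fixed
 * header; the trailing element bytes are added at allocation time.
 */
struct frame {
	uint8_t action_code;
	uint8_t dialog_token;
	uint8_t variable[];	/* flexible array member, was variable[0] */
};

static struct frame *frame_alloc(const uint8_t *ies, size_t ies_len)
{
	struct frame *f = malloc(sizeof(*f) + ies_len);

	if (!f)
		return NULL;
	f->action_code = 0;
	f->dialog_token = 0;
	memcpy(f->variable, ies, ies_len);
	return f;
}
```

One motivation for such conversions in the kernel is that the unambiguous `[]` form lets compilers and fortified string helpers reason about the array's bounds, which the legacy `[0]` spelling defeats.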
-4
include/uapi/linux/ethtool_netlink.h
··· 208 208 ETHTOOL_A_STATS_PHY_MAX = (__ETHTOOL_A_STATS_PHY_CNT - 1) 209 209 }; 210 210 211 - /* generic netlink info */ 212 - #define ETHTOOL_GENL_NAME "ethtool" 213 - #define ETHTOOL_GENL_VERSION 1 214 - 215 211 #define ETHTOOL_MCGRP_MONITOR_NAME "monitor" 216 212 217 213 #endif /* _UAPI_LINUX_ETHTOOL_NETLINK_H_ */
+1
lib/Kconfig
··· 716 716 717 717 config PLDMFW 718 718 bool 719 + select CRC32 719 720 default n 720 721 721 722 config ASN1_ENCODER
+1
net/atm/common.c
··· 635 635 636 636 skb->dev = NULL; /* for paths shared with net_device interfaces */ 637 637 if (!copy_from_iter_full(skb_put(skb, size), size, &m->msg_iter)) { 638 + atm_return_tx(vcc, skb); 638 639 kfree_skb(skb); 639 640 error = -EFAULT; 640 641 goto out;
+10 -2
net/atm/lec.c
··· 124 124 125 125 /* Device structures */ 126 126 static struct net_device *dev_lec[MAX_LEC_ITF]; 127 + static DEFINE_MUTEX(lec_mutex); 127 128 128 129 #if IS_ENABLED(CONFIG_BRIDGE) 129 130 static void lec_handle_bridge(struct sk_buff *skb, struct net_device *dev) ··· 686 685 int bytes_left; 687 686 struct atmlec_ioc ioc_data; 688 687 688 + lockdep_assert_held(&lec_mutex); 689 689 /* Lecd must be up in this case */ 690 690 bytes_left = copy_from_user(&ioc_data, arg, sizeof(struct atmlec_ioc)); 691 691 if (bytes_left != 0) ··· 712 710 713 711 static int lec_mcast_attach(struct atm_vcc *vcc, int arg) 714 712 { 713 + lockdep_assert_held(&lec_mutex); 715 714 if (arg < 0 || arg >= MAX_LEC_ITF) 716 715 return -EINVAL; 717 716 arg = array_index_nospec(arg, MAX_LEC_ITF); ··· 728 725 int i; 729 726 struct lec_priv *priv; 730 727 728 + lockdep_assert_held(&lec_mutex); 731 729 if (arg < 0) 732 730 arg = 0; 733 731 if (arg >= MAX_LEC_ITF) ··· 746 742 snprintf(dev_lec[i]->name, IFNAMSIZ, "lec%d", i); 747 743 if (register_netdev(dev_lec[i])) { 748 744 free_netdev(dev_lec[i]); 745 + dev_lec[i] = NULL; 749 746 return -EINVAL; 750 747 } 751 748 ··· 909 904 v = (dev && netdev_priv(dev)) ? 
910 905 lec_priv_walk(state, l, netdev_priv(dev)) : NULL; 911 906 if (!v && dev) { 912 - dev_put(dev); 913 907 /* Partial state reset for the next time we get called */ 914 908 dev = NULL; 915 909 } ··· 932 928 { 933 929 struct lec_state *state = seq->private; 934 930 931 + mutex_lock(&lec_mutex); 935 932 state->itf = 0; 936 933 state->dev = NULL; 937 934 state->locked = NULL; ··· 950 945 if (state->dev) { 951 946 spin_unlock_irqrestore(&state->locked->lec_arp_lock, 952 947 state->flags); 953 - dev_put(state->dev); 948 + state->dev = NULL; 954 949 } 950 + mutex_unlock(&lec_mutex); 955 951 } 956 952 957 953 static void *lec_seq_next(struct seq_file *seq, void *v, loff_t *pos) ··· 1009 1003 return -ENOIOCTLCMD; 1010 1004 } 1011 1005 1006 + mutex_lock(&lec_mutex); 1012 1007 switch (cmd) { 1013 1008 case ATMLEC_CTRL: 1014 1009 err = lecd_attach(vcc, (int)arg); ··· 1024 1017 break; 1025 1018 } 1026 1019 1020 + mutex_unlock(&lec_mutex); 1027 1021 return err; 1028 1022 } 1029 1023
+1 -1
net/atm/raw.c
··· 36 36 37 37 pr_debug("(%d) %d -= %d\n", 38 38 vcc->vci, sk_wmem_alloc_get(sk), ATM_SKB(skb)->acct_truesize); 39 - WARN_ON(refcount_sub_and_test(ATM_SKB(skb)->acct_truesize, &sk->sk_wmem_alloc)); 39 + atm_return_tx(vcc, skb); 40 40 dev_kfree_skb_any(skb); 41 41 sk->sk_write_space(sk); 42 42 }
-3
net/core/skbuff.c
··· 6261 6261 if (!pskb_may_pull(skb, write_len)) 6262 6262 return -ENOMEM; 6263 6263 6264 - if (!skb_frags_readable(skb)) 6265 - return -EFAULT; 6266 - 6267 6264 if (!skb_cloned(skb) || skb_clone_writable(skb, write_len)) 6268 6265 return 0; 6269 6266
+3
net/ipv4/tcp_fastopen.c
··· 3 3 #include <linux/tcp.h> 4 4 #include <linux/rcupdate.h> 5 5 #include <net/tcp.h> 6 + #include <net/busy_poll.h> 6 7 7 8 void tcp_fastopen_init_key_once(struct net *net) 8 9 { ··· 279 278 req->timeout, false); 280 279 281 280 refcount_set(&req->rsk_refcnt, 2); 281 + 282 + sk_mark_napi_id_set(child, skb); 282 283 283 284 /* Now finish processing the fastopen child socket. */ 284 285 tcp_init_transfer(child, BPF_SOCK_OPS_PASSIVE_ESTABLISHED_CB, skb);
+24 -11
net/ipv4/tcp_input.c
··· 2479 2479 { 2480 2480 const struct sock *sk = (const struct sock *)tp; 2481 2481 2482 - if (tp->retrans_stamp && 2483 - tcp_tsopt_ecr_before(tp, tp->retrans_stamp)) 2484 - return true; /* got echoed TS before first retransmission */ 2482 + /* Received an echoed timestamp before the first retransmission? */ 2483 + if (tp->retrans_stamp) 2484 + return tcp_tsopt_ecr_before(tp, tp->retrans_stamp); 2485 2485 2486 - /* Check if nothing was retransmitted (retrans_stamp==0), which may 2487 - * happen in fast recovery due to TSQ. But we ignore zero retrans_stamp 2488 - * in TCP_SYN_SENT, since when we set FLAG_SYN_ACKED we also clear 2489 - * retrans_stamp even if we had retransmitted the SYN. 2486 + /* We set tp->retrans_stamp upon the first retransmission of a loss 2487 + * recovery episode, so normally if tp->retrans_stamp is 0 then no 2488 + * retransmission has happened yet (likely due to TSQ, which can cause 2489 + * fast retransmits to be delayed). So if snd_una advanced while 2490 + * tp->retrans_stamp is 0 then apparently a packet was merely delayed, 2491 + * not lost. But there are exceptions where we retransmit but then 2492 + * clear tp->retrans_stamp, so we check for those exceptions. 2490 2493 */ 2491 - if (!tp->retrans_stamp && /* no record of a retransmit/SYN? */ 2492 - sk->sk_state != TCP_SYN_SENT) /* not the FLAG_SYN_ACKED case? */ 2493 - return true; /* nothing was retransmitted */ 2494 2494 2495 - return false; 2495 + /* (1) For non-SACK connections, tcp_is_non_sack_preventing_reopen() 2496 + * clears tp->retrans_stamp when snd_una == high_seq. 2497 + */ 2498 + if (!tcp_is_sack(tp) && !before(tp->snd_una, tp->high_seq)) 2499 + return false; 2500 + 2501 + /* (2) In TCP_SYN_SENT tcp_clean_rtx_queue() clears tp->retrans_stamp 2502 + * when FLAG_SYN_ACKED is set, even if the SYN was 2503 + * retransmitted. 
2504 + */ 2505 + if (sk->sk_state == TCP_SYN_SENT) 2506 + return false; 2507 + 2508 + return true; /* tp->retrans_stamp is zero; no retransmit yet */ 2496 2509 } 2497 2510 2498 2511 /* Undo procedures. */
+8
net/ipv6/calipso.c
@@ -1207,6 +1207,10 @@
 	struct ipv6_opt_hdr *old, *new;
 	struct sock *sk = sk_to_full_sk(req_to_sk(req));
 
+	/* sk is NULL for SYN+ACK w/ SYN Cookie */
+	if (!sk)
+		return -ENOMEM;
+
 	if (req_inet->ipv6_opt && req_inet->ipv6_opt->hopopt)
 		old = req_inet->ipv6_opt->hopopt;
 	else
@@ -1246,6 +1250,10 @@
 	struct ipv6_opt_hdr *new;
 	struct ipv6_txoptions *txopts;
 	struct sock *sk = sk_to_full_sk(req_to_sk(req));
+
+	/* sk is NULL for SYN+ACK w/ SYN Cookie */
+	if (!sk)
+		return;
 
 	if (!req_inet->ipv6_opt || !req_inet->ipv6_opt->hopopt)
 		return;
+4 -1
net/mac80211/debug.h
@@ -1,10 +1,11 @@
 /* SPDX-License-Identifier: GPL-2.0 */
 /*
  * Portions
- * Copyright (C) 2022 - 2024 Intel Corporation
+ * Copyright (C) 2022 - 2025 Intel Corporation
  */
 #ifndef __MAC80211_DEBUG_H
 #define __MAC80211_DEBUG_H
+#include <linux/once_lite.h>
 #include <net/cfg80211.h>
 
 #ifdef CONFIG_MAC80211_OCB_DEBUG
@@ -152,6 +153,8 @@
 	else								\
 		_sdata_err((link)->sdata, fmt, ##__VA_ARGS__);		\
 } while (0)
+#define link_err_once(link, fmt, ...)					\
+	DO_ONCE_LITE(link_err, link, fmt, ##__VA_ARGS__)
 #define link_id_info(sdata, link_id, fmt, ...)				\
 	do {								\
 		if (ieee80211_vif_is_mld(&sdata->vif))			\
+4
net/mac80211/rx.c
@@ -4432,6 +4432,10 @@
 	if (!multicast &&
 	    !ether_addr_equal(sdata->dev->dev_addr, hdr->addr1))
 		return false;
+	/* reject invalid/our STA address */
+	if (!is_valid_ether_addr(hdr->addr2) ||
+	    ether_addr_equal(sdata->dev->dev_addr, hdr->addr2))
+		return false;
 	if (!rx->sta) {
 		int rate_idx;
 		if (status->encoding != RX_ENC_LEGACY)
+21 -8
net/mac80211/tx.c
@@ -5,7 +5,7 @@
  * Copyright 2006-2007	Jiri Benc <jbenc@suse.cz>
  * Copyright 2007	Johannes Berg <johannes@sipsolutions.net>
  * Copyright 2013-2014  Intel Mobile Communications GmbH
- * Copyright (C) 2018-2024 Intel Corporation
+ * Copyright (C) 2018-2025 Intel Corporation
  *
  * Transmit and frame generation functions.
  */
@@ -5016,12 +5016,25 @@
 	}
 }
 
-static u8 __ieee80211_beacon_update_cntdwn(struct beacon_data *beacon)
+static u8 __ieee80211_beacon_update_cntdwn(struct ieee80211_link_data *link,
+					   struct beacon_data *beacon)
 {
-	beacon->cntdwn_current_counter--;
+	if (beacon->cntdwn_current_counter == 1) {
+		/*
+		 * Channel switch handling is done by a worker thread while
+		 * beacons get pulled from hardware timers. It's therefore
+		 * possible that software threads are slow enough to not be
+		 * able to complete CSA handling in a single beacon interval,
+		 * in which case we get here. There isn't much to do about
+		 * it, other than letting the user know that the AP isn't
+		 * behaving correctly.
+		 */
+		link_err_once(link,
+			      "beacon TX faster than countdown (channel/color switch) completion\n");
+		return 0;
+	}
 
-	/* the counter should never reach 0 */
-	WARN_ON_ONCE(!beacon->cntdwn_current_counter);
+	beacon->cntdwn_current_counter--;
 
 	return beacon->cntdwn_current_counter;
 }
@@ -5052,7 +5065,7 @@
 	if (!beacon)
 		goto unlock;
 
-	count = __ieee80211_beacon_update_cntdwn(beacon);
+	count = __ieee80211_beacon_update_cntdwn(link, beacon);
 
 unlock:
 	rcu_read_unlock();
@@ -5450,7 +5463,7 @@
 
 	if (beacon->cntdwn_counter_offsets[0]) {
 		if (!is_template)
-			__ieee80211_beacon_update_cntdwn(beacon);
+			__ieee80211_beacon_update_cntdwn(link, beacon);
 
 		ieee80211_set_beacon_cntdwn(sdata, beacon, link);
 	}
@@ -5482,7 +5495,7 @@
 	 * for now we leave it consistent with overall
 	 * mac80211's behavior.
 	 */
-	__ieee80211_beacon_update_cntdwn(beacon);
+	__ieee80211_beacon_update_cntdwn(link, beacon);
 
 	ieee80211_set_beacon_cntdwn(sdata, beacon, link);
 }
+2 -2
net/mpls/af_mpls.c
@@ -81,8 +81,8 @@
 
 	if (index < net->mpls.platform_labels) {
 		struct mpls_route __rcu **platform_label =
-			rcu_dereference(net->mpls.platform_label);
-		rt = rcu_dereference(platform_label[index]);
+			rcu_dereference_rtnl(net->mpls.platform_label);
+		rt = rcu_dereference_rtnl(platform_label[index]);
 	}
 	return rt;
 }
+4 -4
net/nfc/nci/uart.c
@@ -119,22 +119,22 @@
 
 	memcpy(nu, nci_uart_drivers[driver], sizeof(struct nci_uart));
 	nu->tty = tty;
-	tty->disc_data = nu;
 	skb_queue_head_init(&nu->tx_q);
 	INIT_WORK(&nu->write_work, nci_uart_write_work);
 	spin_lock_init(&nu->rx_lock);
 
 	ret = nu->ops.open(nu);
 	if (ret) {
-		tty->disc_data = NULL;
 		kfree(nu);
+		return ret;
 	} else if (!try_module_get(nu->owner)) {
 		nu->ops.close(nu);
-		tty->disc_data = NULL;
 		kfree(nu);
 		return -ENOENT;
 	}
-	return ret;
+	tty->disc_data = nu;
+
+	return 0;
 }
 
 /* ------ LDISC part ------ */
+10 -13
net/openvswitch/actions.c
@@ -39,16 +39,14 @@
 #include "flow_netlink.h"
 #include "openvswitch_trace.h"
 
-DEFINE_PER_CPU(struct ovs_pcpu_storage, ovs_pcpu_storage) = {
-	.bh_lock = INIT_LOCAL_LOCK(bh_lock),
-};
+struct ovs_pcpu_storage __percpu *ovs_pcpu_storage;
 
 /* Make a clone of the 'key', using the pre-allocated percpu 'flow_keys'
  * space. Return NULL if out of key spaces.
  */
 static struct sw_flow_key *clone_key(const struct sw_flow_key *key_)
 {
-	struct ovs_pcpu_storage *ovs_pcpu = this_cpu_ptr(&ovs_pcpu_storage);
+	struct ovs_pcpu_storage *ovs_pcpu = this_cpu_ptr(ovs_pcpu_storage);
 	struct action_flow_keys *keys = &ovs_pcpu->flow_keys;
 	int level = ovs_pcpu->exec_level;
 	struct sw_flow_key *key = NULL;
@@ -94,7 +92,7 @@
 			    const struct nlattr *actions,
 			    const int actions_len)
 {
-	struct action_fifo *fifo = this_cpu_ptr(&ovs_pcpu_storage.action_fifos);
+	struct action_fifo *fifo = this_cpu_ptr(&ovs_pcpu_storage->action_fifos);
 	struct deferred_action *da;
 
 	da = action_fifo_put(fifo);
@@ -755,7 +753,7 @@
 static int ovs_vport_output(struct net *net, struct sock *sk,
 			    struct sk_buff *skb)
 {
-	struct ovs_frag_data *data = this_cpu_ptr(&ovs_pcpu_storage.frag_data);
+	struct ovs_frag_data *data = this_cpu_ptr(&ovs_pcpu_storage->frag_data);
 	struct vport *vport = data->vport;
 
 	if (skb_cow_head(skb, data->l2_len) < 0) {
@@ -807,7 +805,7 @@
 	unsigned int hlen = skb_network_offset(skb);
 	struct ovs_frag_data *data;
 
-	data = this_cpu_ptr(&ovs_pcpu_storage.frag_data);
+	data = this_cpu_ptr(&ovs_pcpu_storage->frag_data);
 	data->dst = skb->_skb_refdst;
 	data->vport = vport;
 	data->cb = *OVS_CB(skb);
@@ -1566,16 +1564,15 @@
 	clone = clone_flow_key ? clone_key(key) : key;
 	if (clone) {
 		int err = 0;
-
 		if (actions) {	/* Sample action */
 			if (clone_flow_key)
-				__this_cpu_inc(ovs_pcpu_storage.exec_level);
+				__this_cpu_inc(ovs_pcpu_storage->exec_level);
 
 			err = do_execute_actions(dp, skb, clone,
 						 actions, len);
 
 			if (clone_flow_key)
-				__this_cpu_dec(ovs_pcpu_storage.exec_level);
+				__this_cpu_dec(ovs_pcpu_storage->exec_level);
 		} else {	/* Recirc action */
 			clone->recirc_id = recirc_id;
 			ovs_dp_process_packet(skb, clone);
@@ -1611,7 +1608,7 @@
 
 static void process_deferred_actions(struct datapath *dp)
 {
-	struct action_fifo *fifo = this_cpu_ptr(&ovs_pcpu_storage.action_fifos);
+	struct action_fifo *fifo = this_cpu_ptr(&ovs_pcpu_storage->action_fifos);
 
 	/* Do not touch the FIFO in case there is no deferred actions. */
 	if (action_fifo_is_empty(fifo))
@@ -1642,7 +1639,7 @@
 {
 	int err, level;
 
-	level = __this_cpu_inc_return(ovs_pcpu_storage.exec_level);
+	level = __this_cpu_inc_return(ovs_pcpu_storage->exec_level);
 	if (unlikely(level > OVS_RECURSION_LIMIT)) {
 		net_crit_ratelimited("ovs: recursion limit reached on datapath %s, probable configuration error\n",
 				     ovs_dp_name(dp));
@@ -1659,6 +1656,6 @@
 	process_deferred_actions(dp);
 
 out:
-	__this_cpu_dec(ovs_pcpu_storage.exec_level);
+	__this_cpu_dec(ovs_pcpu_storage->exec_level);
 	return err;
 }
+35 -7
net/openvswitch/datapath.c
@@ -244,7 +244,7 @@
 /* Must be called with rcu_read_lock. */
 void ovs_dp_process_packet(struct sk_buff *skb, struct sw_flow_key *key)
 {
-	struct ovs_pcpu_storage *ovs_pcpu = this_cpu_ptr(&ovs_pcpu_storage);
+	struct ovs_pcpu_storage *ovs_pcpu = this_cpu_ptr(ovs_pcpu_storage);
 	const struct vport *p = OVS_CB(skb)->input_vport;
 	struct datapath *dp = p->dp;
 	struct sw_flow *flow;
@@ -299,7 +299,7 @@
 	 * avoided.
 	 */
 	if (IS_ENABLED(CONFIG_PREEMPT_RT) && ovs_pcpu->owner != current) {
-		local_lock_nested_bh(&ovs_pcpu_storage.bh_lock);
+		local_lock_nested_bh(&ovs_pcpu_storage->bh_lock);
 		ovs_pcpu->owner = current;
 		ovs_pcpu_locked = true;
 	}
@@ -310,7 +310,7 @@
 			ovs_dp_name(dp), error);
 	if (ovs_pcpu_locked) {
 		ovs_pcpu->owner = NULL;
-		local_unlock_nested_bh(&ovs_pcpu_storage.bh_lock);
+		local_unlock_nested_bh(&ovs_pcpu_storage->bh_lock);
 	}
 
 	stats_counter = &stats->n_hit;
@@ -689,13 +689,13 @@
 	sf_acts = rcu_dereference(flow->sf_acts);
 
 	local_bh_disable();
-	local_lock_nested_bh(&ovs_pcpu_storage.bh_lock);
+	local_lock_nested_bh(&ovs_pcpu_storage->bh_lock);
 	if (IS_ENABLED(CONFIG_PREEMPT_RT))
-		this_cpu_write(ovs_pcpu_storage.owner, current);
+		this_cpu_write(ovs_pcpu_storage->owner, current);
 	err = ovs_execute_actions(dp, packet, sf_acts, &flow->key);
 	if (IS_ENABLED(CONFIG_PREEMPT_RT))
-		this_cpu_write(ovs_pcpu_storage.owner, NULL);
-	local_unlock_nested_bh(&ovs_pcpu_storage.bh_lock);
+		this_cpu_write(ovs_pcpu_storage->owner, NULL);
+	local_unlock_nested_bh(&ovs_pcpu_storage->bh_lock);
 	local_bh_enable();
 	rcu_read_unlock();
 
@@ -2744,6 +2744,28 @@
 	.n_reasons = ARRAY_SIZE(ovs_drop_reasons),
 };
 
+static int __init ovs_alloc_percpu_storage(void)
+{
+	unsigned int cpu;
+
+	ovs_pcpu_storage = alloc_percpu(*ovs_pcpu_storage);
+	if (!ovs_pcpu_storage)
+		return -ENOMEM;
+
+	for_each_possible_cpu(cpu) {
+		struct ovs_pcpu_storage *ovs_pcpu;
+
+		ovs_pcpu = per_cpu_ptr(ovs_pcpu_storage, cpu);
+		local_lock_init(&ovs_pcpu->bh_lock);
+	}
+	return 0;
+}
+
+static void ovs_free_percpu_storage(void)
+{
+	free_percpu(ovs_pcpu_storage);
+}
+
 static int __init dp_init(void)
 {
 	int err;
@@ -2752,6 +2774,10 @@
 		     sizeof_field(struct sk_buff, cb));
 
 	pr_info("Open vSwitch switching datapath\n");
+
+	err = ovs_alloc_percpu_storage();
+	if (err)
+		goto error;
 
 	err = ovs_internal_dev_rtnl_link_register();
 	if (err)
@@ -2799,5 +2825,6 @@
 error_unreg_rtnl_link:
 	ovs_internal_dev_rtnl_link_unregister();
 error:
+	ovs_free_percpu_storage();
 	return err;
 }
@@ -2813,6 +2840,7 @@
 	ovs_vport_exit();
 	ovs_flow_exit();
 	ovs_internal_dev_rtnl_link_unregister();
+	ovs_free_percpu_storage();
 }
 
 module_init(dp_init);
+2 -1
net/openvswitch/datapath.h
@@ -220,7 +220,8 @@
 	struct task_struct *owner;
 	local_lock_t bh_lock;
 };
-DECLARE_PER_CPU(struct ovs_pcpu_storage, ovs_pcpu_storage);
+
+extern struct ovs_pcpu_storage __percpu *ovs_pcpu_storage;
 
 /**
  * enum ovs_pkt_hash_types - hash info to include with a packet
+4 -2
net/sched/sch_taprio.c
@@ -1328,13 +1328,15 @@
 
 		stab = rtnl_dereference(q->root->stab);
 
-		oper = rtnl_dereference(q->oper_sched);
+		rcu_read_lock();
+		oper = rcu_dereference(q->oper_sched);
 		if (oper)
 			taprio_update_queue_max_sdu(q, oper, stab);
 
-		admin = rtnl_dereference(q->admin_sched);
+		admin = rcu_dereference(q->admin_sched);
 		if (admin)
 			taprio_update_queue_max_sdu(q, admin, stab);
+		rcu_read_unlock();
 
 		break;
 	}
+2 -2
net/tipc/udp_media.c
@@ -489,7 +489,7 @@
 
 	rtnl_lock();
 	b = tipc_bearer_find(net, bname);
-	if (!b) {
+	if (!b || b->bcast_addr.media_id != TIPC_MEDIA_TYPE_UDP) {
 		rtnl_unlock();
 		return -EINVAL;
 	}
@@ -500,7 +500,7 @@
 
 	rtnl_lock();
 	b = rtnl_dereference(tn->bearer_list[bid]);
-	if (!b) {
+	if (!b || b->bcast_addr.media_id != TIPC_MEDIA_TYPE_UDP) {
 		rtnl_unlock();
 		return -EINVAL;
 	}
+17 -11
tools/net/ynl/pyynl/lib/ynl.py
@@ -231,14 +231,7 @@
                 self.extack['unknown'].append(extack)
 
         if attr_space:
-            # We don't have the ability to parse nests yet, so only do global
-            if 'miss-type' in self.extack and 'miss-nest' not in self.extack:
-                miss_type = self.extack['miss-type']
-                if miss_type in attr_space.attrs_by_val:
-                    spec = attr_space.attrs_by_val[miss_type]
-                    self.extack['miss-type'] = spec['name']
-                    if 'doc' in spec:
-                        self.extack['miss-type-doc'] = spec['doc']
+            self.annotate_extack(attr_space)
 
     def _decode_policy(self, raw):
         policy = {}
@@ -264,6 +257,18 @@
                 policy['mask'] = attr.as_scalar('u64')
         return policy
 
+    def annotate_extack(self, attr_space):
+        """ Make extack more human friendly with attribute information """
+
+        # We don't have the ability to parse nests yet, so only do global
+        if 'miss-type' in self.extack and 'miss-nest' not in self.extack:
+            miss_type = self.extack['miss-type']
+            if miss_type in attr_space.attrs_by_val:
+                spec = attr_space.attrs_by_val[miss_type]
+                self.extack['miss-type'] = spec['name']
+                if 'doc' in spec:
+                    self.extack['miss-type-doc'] = spec['doc']
+
     def cmd(self):
         return self.nl_type
 
@@ -277,11 +282,11 @@
 
 
 class NlMsgs:
-    def __init__(self, data, attr_space=None):
+    def __init__(self, data):
         self.msgs = []
 
         offset = 0
         while offset < len(data):
-            msg = NlMsg(data, offset, attr_space=attr_space)
+            msg = NlMsg(data, offset)
             offset += msg.nl_len
             self.msgs.append(msg)
 
@@ -1034,12 +1039,13 @@
         op_rsp = []
         while not done:
             reply = self.sock.recv(self._recv_size)
-            nms = NlMsgs(reply, attr_space=op.attr_set)
+            nms = NlMsgs(reply)
            self._recv_dbg_print(reply, nms)
             for nl_msg in nms:
                 if nl_msg.nl_seq in reqs_by_seq:
                     (op, vals, req_msg, req_flags) = reqs_by_seq[nl_msg.nl_seq]
                     if nl_msg.extack:
+                        nl_msg.annotate_extack(op.attr_set)
                         self._decode_extack(req_msg, op, nl_msg.extack, vals)
                 else:
                     op = None
+2 -1
tools/testing/selftests/drivers/net/netdevsim/peer.sh
@@ -1,7 +1,8 @@
 #!/bin/bash
 # SPDX-License-Identifier: GPL-2.0-only
 
-source ../../../net/lib.sh
+lib_dir=$(dirname $0)/../../../net
+source $lib_dir/lib.sh
 
 NSIM_DEV_1_ID=$((256 + RANDOM % 256))
 NSIM_DEV_1_SYS=/sys/bus/netdevsim/devices/netdevsim$NSIM_DEV_1_ID
+1
tools/testing/selftests/net/.gitignore
@@ -50,6 +50,7 @@
 tcp_fastopen_backup_key
 tcp_inq
 tcp_mmap
+tfo
 timestamping
 tls
 toeplitz
+2
tools/testing/selftests/net/Makefile
@@ -110,6 +110,8 @@
 TEST_PROGS += lwt_dst_cache_ref_loop.sh
 TEST_PROGS += skf_net_off.sh
 TEST_GEN_FILES += skf_net_off
+TEST_GEN_FILES += tfo
+TEST_PROGS += tfo_passive.sh
 
 # YNL files, must be before "include ..lib.mk"
 YNL_GEN_FILES := busy_poller netlink-dumps
+171
tools/testing/selftests/net/tfo.c
@@ -0,0 +1,171 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <error.h>
+#include <fcntl.h>
+#include <limits.h>
+#include <stdbool.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <unistd.h>
+#include <arpa/inet.h>
+#include <sys/socket.h>
+#include <netinet/tcp.h>
+#include <errno.h>
+
+static int cfg_server;
+static int cfg_client;
+static int cfg_port = 8000;
+static struct sockaddr_in6 cfg_addr;
+static char *cfg_outfile;
+
+static int parse_address(const char *str, int port, struct sockaddr_in6 *sin6)
+{
+	int ret;
+
+	sin6->sin6_family = AF_INET6;
+	sin6->sin6_port = htons(port);
+
+	ret = inet_pton(sin6->sin6_family, str, &sin6->sin6_addr);
+	if (ret != 1) {
+		/* fallback to plain IPv4 */
+		ret = inet_pton(AF_INET, str, &sin6->sin6_addr.s6_addr32[3]);
+		if (ret != 1)
+			return -1;
+
+		/* add ::ffff prefix */
+		sin6->sin6_addr.s6_addr32[0] = 0;
+		sin6->sin6_addr.s6_addr32[1] = 0;
+		sin6->sin6_addr.s6_addr16[4] = 0;
+		sin6->sin6_addr.s6_addr16[5] = 0xffff;
+	}
+
+	return 0;
+}
+
+static void run_server(void)
+{
+	unsigned long qlen = 32;
+	int fd, opt, connfd;
+	socklen_t len;
+	char buf[64];
+	FILE *outfile;
+
+	outfile = fopen(cfg_outfile, "w");
+	if (!outfile)
+		error(1, errno, "fopen() outfile");
+
+	fd = socket(AF_INET6, SOCK_STREAM, 0);
+	if (fd == -1)
+		error(1, errno, "socket()");
+
+	opt = 1;
+	if (setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &opt, sizeof(opt)) < 0)
+		error(1, errno, "setsockopt(SO_REUSEADDR)");
+
+	if (setsockopt(fd, SOL_TCP, TCP_FASTOPEN, &qlen, sizeof(qlen)) < 0)
+		error(1, errno, "setsockopt(TCP_FASTOPEN)");
+
+	if (bind(fd, (struct sockaddr *)&cfg_addr, sizeof(cfg_addr)) < 0)
+		error(1, errno, "bind()");
+
+	if (listen(fd, 5) < 0)
+		error(1, errno, "listen()");
+
+	len = sizeof(cfg_addr);
+	connfd = accept(fd, (struct sockaddr *)&cfg_addr, &len);
+	if (connfd < 0)
+		error(1, errno, "accept()");
+
+	len = sizeof(opt);
+	if (getsockopt(connfd, SOL_SOCKET, SO_INCOMING_NAPI_ID, &opt, &len) < 0)
+		error(1, errno, "getsockopt(SO_INCOMING_NAPI_ID)");
+
+	read(connfd, buf, 64);
+	fprintf(outfile, "%d\n", opt);
+
+	fclose(outfile);
+	close(connfd);
+	close(fd);
+}
+
+static void run_client(void)
+{
+	int fd;
+	char *msg = "Hello, world!";
+
+	fd = socket(AF_INET6, SOCK_STREAM, 0);
+	if (fd == -1)
+		error(1, errno, "socket()");
+
+	sendto(fd, msg, strlen(msg), MSG_FASTOPEN, (struct sockaddr *)&cfg_addr, sizeof(cfg_addr));
+
+	close(fd);
+}
+
+static void usage(const char *filepath)
+{
+	error(1, 0, "Usage: %s (-s|-c) -h<server_ip> -p<port> -o<outfile> ", filepath);
+}
+
+static void parse_opts(int argc, char **argv)
+{
+	struct sockaddr_in6 *addr6 = (void *) &cfg_addr;
+	char *addr = NULL;
+	int ret;
+	int c;
+
+	if (argc <= 1)
+		usage(argv[0]);
+
+	while ((c = getopt(argc, argv, "sch:p:o:")) != -1) {
+		switch (c) {
+		case 's':
+			if (cfg_client)
+				error(1, 0, "Pass one of -s or -c");
+			cfg_server = 1;
+			break;
+		case 'c':
+			if (cfg_server)
+				error(1, 0, "Pass one of -s or -c");
+			cfg_client = 1;
+			break;
+		case 'h':
+			addr = optarg;
+			break;
+		case 'p':
+			cfg_port = strtoul(optarg, NULL, 0);
+			break;
+		case 'o':
+			cfg_outfile = strdup(optarg);
+			if (!cfg_outfile)
+				error(1, 0, "outfile invalid");
+			break;
+		}
+	}
+
+	if (cfg_server && addr)
+		error(1, 0, "Server cannot have -h specified");
+
+	memset(addr6, 0, sizeof(*addr6));
+	addr6->sin6_family = AF_INET6;
+	addr6->sin6_port = htons(cfg_port);
+	addr6->sin6_addr = in6addr_any;
+	if (addr) {
+		ret = parse_address(addr, cfg_port, addr6);
+		if (ret)
+			error(1, 0, "Client address parse error: %s", addr);
+	}
+}
+
+int main(int argc, char **argv)
+{
+	parse_opts(argc, argv);
+
+	if (cfg_server)
+		run_server();
+	else if (cfg_client)
+		run_client();
+
+	return 0;
+}
+112
tools/testing/selftests/net/tfo_passive.sh
@@ -0,0 +1,112 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+source lib.sh
+
+NSIM_SV_ID=$((256 + RANDOM % 256))
+NSIM_SV_SYS=/sys/bus/netdevsim/devices/netdevsim$NSIM_SV_ID
+NSIM_CL_ID=$((512 + RANDOM % 256))
+NSIM_CL_SYS=/sys/bus/netdevsim/devices/netdevsim$NSIM_CL_ID
+
+NSIM_DEV_SYS_NEW=/sys/bus/netdevsim/new_device
+NSIM_DEV_SYS_DEL=/sys/bus/netdevsim/del_device
+NSIM_DEV_SYS_LINK=/sys/bus/netdevsim/link_device
+NSIM_DEV_SYS_UNLINK=/sys/bus/netdevsim/unlink_device
+
+SERVER_IP=192.168.1.1
+CLIENT_IP=192.168.1.2
+SERVER_PORT=48675
+
+setup_ns()
+{
+	set -e
+	ip netns add nssv
+	ip netns add nscl
+
+	NSIM_SV_NAME=$(find $NSIM_SV_SYS/net -maxdepth 1 -type d ! \
+		-path $NSIM_SV_SYS/net -exec basename {} \;)
+	NSIM_CL_NAME=$(find $NSIM_CL_SYS/net -maxdepth 1 -type d ! \
+		-path $NSIM_CL_SYS/net -exec basename {} \;)
+
+	ip link set $NSIM_SV_NAME netns nssv
+	ip link set $NSIM_CL_NAME netns nscl
+
+	ip netns exec nssv ip addr add "${SERVER_IP}/24" dev $NSIM_SV_NAME
+	ip netns exec nscl ip addr add "${CLIENT_IP}/24" dev $NSIM_CL_NAME
+
+	ip netns exec nssv ip link set dev $NSIM_SV_NAME up
+	ip netns exec nscl ip link set dev $NSIM_CL_NAME up
+
+	# Enable passive TFO
+	ip netns exec nssv sysctl -w net.ipv4.tcp_fastopen=519 > /dev/null
+
+	set +e
+}
+
+cleanup_ns()
+{
+	ip netns del nscl
+	ip netns del nssv
+}
+
+###
+### Code start
+###
+
+modprobe netdevsim
+
+# linking
+
+echo $NSIM_SV_ID > $NSIM_DEV_SYS_NEW
+echo $NSIM_CL_ID > $NSIM_DEV_SYS_NEW
+udevadm settle
+
+setup_ns
+
+NSIM_SV_FD=$((256 + RANDOM % 256))
+exec {NSIM_SV_FD}</var/run/netns/nssv
+NSIM_SV_IFIDX=$(ip netns exec nssv cat /sys/class/net/$NSIM_SV_NAME/ifindex)
+
+NSIM_CL_FD=$((256 + RANDOM % 256))
+exec {NSIM_CL_FD}</var/run/netns/nscl
+NSIM_CL_IFIDX=$(ip netns exec nscl cat /sys/class/net/$NSIM_CL_NAME/ifindex)
+
+echo "$NSIM_SV_FD:$NSIM_SV_IFIDX $NSIM_CL_FD:$NSIM_CL_IFIDX" > \
+	$NSIM_DEV_SYS_LINK
+
+if [ $? -ne 0 ]; then
+	echo "linking netdevsim1 with netdevsim2 should succeed"
+	cleanup_ns
+	exit 1
+fi
+
+out_file=$(mktemp)
+
+timeout -k 1s 30s ip netns exec nssv ./tfo \
+	-s \
+	-p ${SERVER_PORT} \
+	-o ${out_file} &
+
+wait_local_port_listen nssv ${SERVER_PORT} tcp
+
+ip netns exec nscl ./tfo -c -h ${SERVER_IP} -p ${SERVER_PORT}
+
+wait
+
+res=$(cat $out_file)
+rm $out_file
+
+if [ $res -eq 0 ]; then
+	echo "got invalid NAPI ID from passive TFO socket"
+	cleanup_ns
+	exit 1
+fi
+
+echo "$NSIM_SV_FD:$NSIM_SV_IFIDX" > $NSIM_DEV_SYS_UNLINK
+
+echo $NSIM_CL_ID > $NSIM_DEV_SYS_DEL
+
+cleanup_ns
+
+modprobe -r netdevsim
+
+exit 0