Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

hinic3: module initialization and tx/rx logic

This is part [1/3] of the initial hinic3 Ethernet driver submission.
With this patch, hinic3 is a valid kernel module but not yet a
functional driver.

The driver parts contained in this patch:
Module initialization.
PCI driver registration, but with an empty id_table.
Auxiliary driver registration.
Net device_ops registration, but open/stop are empty stubs.
tx/rx logic.

All major data structures of the driver are fully introduced, together
with the code that uses them, but without their initialization code,
which requires a management interface with the HW.

Co-developed-by: Xin Guo <guoxin09@huawei.com>
Signed-off-by: Xin Guo <guoxin09@huawei.com>
Signed-off-by: Fan Gong <gongfan1@huawei.com>
Co-developed-by: Gur Stavi <gur.stavi@huawei.com>
Signed-off-by: Gur Stavi <gur.stavi@huawei.com>
Link: https://patch.msgid.link/76a137ffdfe115c737c2c224f0c93b60ba53cc16.1747736586.git.gur.stavi@huawei.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

Authored by Fan Gong, committed by Jakub Kicinski
17fcb3dc bd15b2b2

+3726
+137
Documentation/networking/device_drivers/ethernet/huawei/hinic3.rst
.. SPDX-License-Identifier: GPL-2.0

=====================================================================
Linux kernel driver for Huawei Ethernet Device Driver (hinic3) family
=====================================================================

Overview
========

The hinic3 is a network interface card (NIC) for Data Center. It supports
a range of link-speed devices (10GE, 25GE, 100GE, etc.). The hinic3
devices can have multiple physical forms: LOM (Lan on Motherboard) NIC,
PCIe standard NIC, OCP (Open Compute Project) NIC, etc.

The hinic3 driver supports the following features:
- IPv4/IPv6 TCP/UDP checksum offload
- TSO (TCP Segmentation Offload), LRO (Large Receive Offload)
- RSS (Receive Side Scaling)
- MSI-X interrupt aggregation configuration and interrupt adaptation.
- SR-IOV (Single Root I/O Virtualization).

Content
=======

- Supported PCI vendor ID/device IDs
- Source Code Structure of Hinic3 Driver
- Management Interface

Supported PCI vendor ID/device IDs
==================================

19e5:0222 - hinic3 PF/PPF
19e5:375F - hinic3 VF

Prime Physical Function (PPF) is responsible for the management of the
whole NIC card. For example, clock synchronization between the NIC and
the host. Any PF may serve as a PPF. The PPF is selected dynamically.

Source Code Structure of Hinic3 Driver
======================================

========================  ================================================
hinic3_pci_id_tbl.h       Supported device IDs
hinic3_hw_intf.h          Interface between HW and driver
hinic3_queue_common.[ch]  Common structures and methods for NIC queues
hinic3_common.[ch]        Encapsulation of memory operations in Linux
hinic3_csr.h              Register definitions in the BAR
hinic3_hwif.[ch]          Interface for BAR
hinic3_eqs.[ch]           Interface for AEQs and CEQs
hinic3_mbox.[ch]          Interface for mailbox
hinic3_mgmt.[ch]          Management interface based on mailbox and AEQ
hinic3_wq.[ch]            Work queue data structures and interface
hinic3_cmdq.[ch]          Command queue is used to post command to HW
hinic3_hwdev.[ch]         HW structures and methods abstractions
hinic3_lld.[ch]           Auxiliary driver adaptation layer
hinic3_hw_comm.[ch]       Interface for common HW operations
hinic3_mgmt_interface.h   Interface between firmware and driver
hinic3_hw_cfg.[ch]        Interface for HW configuration
hinic3_irq.c              Interrupt request
hinic3_netdev_ops.c       Operations registered to Linux kernel stack
hinic3_nic_dev.h          NIC structures and methods abstractions
hinic3_main.c             Main Linux kernel driver
hinic3_nic_cfg.[ch]       NIC service configuration
hinic3_nic_io.[ch]        Management plane interface for TX and RX
hinic3_rss.[ch]           Interface for Receive Side Scaling (RSS)
hinic3_rx.[ch]            Interface for receive
hinic3_tx.[ch]            Interface for transmit
hinic3_ethtool.c          Interface for ethtool operations (ops)
hinic3_filter.c           Interface for MAC address filtering
========================  ================================================

Management Interface
====================

Asynchronous Event Queue (AEQ)
------------------------------

AEQ receives high priority events from the HW over a descriptor queue.
Every descriptor is a fixed size of 64 bytes. AEQ can receive solicited or
unsolicited events. Every device, VF or PF, can have up to 4 AEQs.
Every AEQ is associated with a dedicated IRQ. AEQ can receive multiple
types of events, but in practice the hinic3 driver ignores all events
except for 2 mailbox related events.

Mailbox
-------

Mailbox is a communication mechanism between the hinic3 driver and the HW.
Each device has an independent mailbox. Driver can use the mailbox to send
requests to management. Driver receives mailbox messages, such as responses
to requests, over the AEQ (using event HINIC3_AEQ_FOR_MBOX). Due to the
limited size of the mailbox data register, mailbox messages are sent
segment-by-segment.

Every device can use its mailbox to post requests to firmware. The mailbox
can also be used to post requests and responses between the PF and its VFs.

Completion Event Queue (CEQ)
----------------------------

The implementation of CEQ is the same as AEQ. It receives completion events
from HW over a fixed size descriptor of 32 bits. Every device can have up
to 32 CEQs. Every CEQ has a dedicated IRQ. CEQ only receives solicited
events that are responses to requests from the driver. CEQ can receive
multiple types of events, but in practice the hinic3 driver ignores all
events except for HINIC3_CMDQ that represents completion of previously
posted commands on a cmdq.

Command Queue (cmdq)
--------------------

Every cmdq has a dedicated work queue on which commands are posted.
Commands on the work queue are fixed size descriptors of 64 bytes.
Completion of a command is indicated using ctrl bits in the descriptor
that carried the command. Notification of command completions is also
provided via an event on the CEQ. Every device has 4 command queues that
are initialized as a set (called cmdqs), each with its own type. The
hinic3 driver only uses type HINIC3_CMDQ_SYNC.

Work Queues (WQ)
----------------

Work queues are logical arrays of fixed size WQEs. The array may be spread
over multiple non-contiguous pages using an indirection table. Work queues
are used by I/O queues and command queues.

Global function ID
------------------

Every function, PF or VF, has a unique ordinal identification within the
device. Many management commands (mbox or cmdq) contain this ID so HW can
apply the command effect to the right function.

A PF is allowed to post management commands to a subordinate VF by
specifying the VF's ID. A VF must provide its own ID. Anti-spoofing in the
HW will cause a command from a VF to fail if it contains the wrong ID.
+1
Documentation/networking/device_drivers/ethernet/index.rst
  freescale/gianfar
  google/gve
  huawei/hinic
+ huawei/hinic3
  intel/e100
  intel/e1000
  intel/e1000e
+7
MAINTAINERS
  F: Documentation/networking/device_drivers/ethernet/huawei/hinic.rst
  F: drivers/net/ethernet/huawei/hinic/

+ HUAWEI 3RD GEN ETHERNET DRIVER
+ M: Fan Gong <gongfan1@huawei.com>
+ L: netdev@vger.kernel.org
+ S: Maintained
+ F: Documentation/networking/device_drivers/ethernet/huawei/hinic3.rst
+ F: drivers/net/ethernet/huawei/hinic3/
+
  HUAWEI MATEBOOK E GO EMBEDDED CONTROLLER DRIVER
  M: Pengyu Luo <mitltlatltl@gmail.com>
  S: Maintained
+1
drivers/net/ethernet/huawei/Kconfig
  if NET_VENDOR_HUAWEI

  source "drivers/net/ethernet/huawei/hinic/Kconfig"
+ source "drivers/net/ethernet/huawei/hinic3/Kconfig"

  endif # NET_VENDOR_HUAWEI
+1
drivers/net/ethernet/huawei/Makefile
  #

  obj-$(CONFIG_HINIC) += hinic/
+ obj-$(CONFIG_HINIC3) += hinic3/
+20
drivers/net/ethernet/huawei/hinic3/Kconfig
# SPDX-License-Identifier: GPL-2.0-only
#
# Huawei driver configuration
#

config HINIC3
	tristate "Huawei 3rd generation network adapters (HINIC3) support"
	# Fields of HW and management structures are little endian and are
	# currently not converted
	depends on !CPU_BIG_ENDIAN
	depends on X86 || ARM64 || COMPILE_TEST
	depends on PCI_MSI && 64BIT
	select AUXILIARY_BUS
	select PAGE_POOL
	help
	  This driver supports HiNIC 3rd gen Network Adapter (HINIC3).
	  The driver is supported on X86_64 and ARM64 little endian.

	  To compile this driver as a module, choose M here.
	  The module will be called hinic3.
+21
drivers/net/ethernet/huawei/hinic3/Makefile
# SPDX-License-Identifier: GPL-2.0
# Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved.

obj-$(CONFIG_HINIC3) += hinic3.o

hinic3-objs := hinic3_common.o \
	       hinic3_hw_cfg.o \
	       hinic3_hw_comm.o \
	       hinic3_hwdev.o \
	       hinic3_hwif.o \
	       hinic3_irq.o \
	       hinic3_lld.o \
	       hinic3_main.o \
	       hinic3_mbox.o \
	       hinic3_netdev_ops.o \
	       hinic3_nic_cfg.o \
	       hinic3_nic_io.o \
	       hinic3_queue_common.o \
	       hinic3_rx.o \
	       hinic3_tx.o \
	       hinic3_wq.o
+53
drivers/net/ethernet/huawei/hinic3/hinic3_common.c
// SPDX-License-Identifier: GPL-2.0
// Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved.

#include <linux/delay.h>
#include <linux/dma-mapping.h>

#include "hinic3_common.h"

int hinic3_dma_zalloc_coherent_align(struct device *dev, u32 size, u32 align,
				     gfp_t flag,
				     struct hinic3_dma_addr_align *mem_align)
{
	dma_addr_t paddr, align_paddr;
	void *vaddr, *align_vaddr;
	u32 real_size = size;

	vaddr = dma_alloc_coherent(dev, real_size, &paddr, flag);
	if (!vaddr)
		return -ENOMEM;

	align_paddr = ALIGN(paddr, align);
	if (align_paddr == paddr) {
		align_vaddr = vaddr;
		goto out;
	}

	dma_free_coherent(dev, real_size, vaddr, paddr);

	/* realloc memory for align */
	real_size = size + align;
	vaddr = dma_alloc_coherent(dev, real_size, &paddr, flag);
	if (!vaddr)
		return -ENOMEM;

	align_paddr = ALIGN(paddr, align);
	align_vaddr = vaddr + (align_paddr - paddr);

out:
	mem_align->real_size = real_size;
	mem_align->ori_vaddr = vaddr;
	mem_align->ori_paddr = paddr;
	mem_align->align_vaddr = align_vaddr;
	mem_align->align_paddr = align_paddr;

	return 0;
}

void hinic3_dma_free_coherent_align(struct device *dev,
				    struct hinic3_dma_addr_align *mem_align)
{
	dma_free_coherent(dev, mem_align->real_size,
			  mem_align->ori_vaddr, mem_align->ori_paddr);
}
+27
drivers/net/ethernet/huawei/hinic3/hinic3_common.h
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved. */

#ifndef _HINIC3_COMMON_H_
#define _HINIC3_COMMON_H_

#include <linux/device.h>

#define HINIC3_MIN_PAGE_SIZE 0x1000

struct hinic3_dma_addr_align {
	u32 real_size;

	void *ori_vaddr;
	dma_addr_t ori_paddr;

	void *align_vaddr;
	dma_addr_t align_paddr;
};

int hinic3_dma_zalloc_coherent_align(struct device *dev, u32 size, u32 align,
				     gfp_t flag,
				     struct hinic3_dma_addr_align *mem_align);
void hinic3_dma_free_coherent_align(struct device *dev,
				    struct hinic3_dma_addr_align *mem_align);

#endif
+25
drivers/net/ethernet/huawei/hinic3/hinic3_hw_cfg.c
// SPDX-License-Identifier: GPL-2.0
// Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved.

#include <linux/device.h>

#include "hinic3_hw_cfg.h"
#include "hinic3_hwdev.h"
#include "hinic3_hwif.h"
#include "hinic3_mbox.h"

bool hinic3_support_nic(struct hinic3_hwdev *hwdev)
{
	return hwdev->cfg_mgmt->cap.supp_svcs_bitmap &
	       BIT(HINIC3_SERVICE_T_NIC);
}

u16 hinic3_func_max_qnum(struct hinic3_hwdev *hwdev)
{
	return hwdev->cfg_mgmt->cap.nic_svc_cap.max_sqs;
}

u8 hinic3_physical_port_id(struct hinic3_hwdev *hwdev)
{
	return hwdev->cfg_mgmt->cap.port_id;
}
+53
drivers/net/ethernet/huawei/hinic3/hinic3_hw_cfg.h
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved. */

#ifndef _HINIC3_HW_CFG_H_
#define _HINIC3_HW_CFG_H_

#include <linux/mutex.h>
#include <linux/pci.h>

struct hinic3_hwdev;

struct hinic3_irq {
	u32 irq_id;
	u16 msix_entry_idx;
	bool allocated;
};

struct hinic3_irq_info {
	struct hinic3_irq *irq;
	u16 num_irq;
	/* device max irq number */
	u16 num_irq_hw;
	/* protect irq alloc and free */
	struct mutex irq_mutex;
};

struct hinic3_nic_service_cap {
	u16 max_sqs;
};

/* Device capabilities */
struct hinic3_dev_cap {
	/* Bitmasks of services supported by device */
	u16 supp_svcs_bitmap;
	/* Physical port */
	u8 port_id;
	struct hinic3_nic_service_cap nic_svc_cap;
};

struct hinic3_cfg_mgmt_info {
	struct hinic3_irq_info irq_info;
	struct hinic3_dev_cap cap;
};

int hinic3_alloc_irqs(struct hinic3_hwdev *hwdev, u16 num,
		      struct msix_entry *alloc_arr, u16 *act_num);
void hinic3_free_irq(struct hinic3_hwdev *hwdev, u32 irq_id);

bool hinic3_support_nic(struct hinic3_hwdev *hwdev);
u16 hinic3_func_max_qnum(struct hinic3_hwdev *hwdev);
u8 hinic3_physical_port_id(struct hinic3_hwdev *hwdev);

#endif
+32
drivers/net/ethernet/huawei/hinic3/hinic3_hw_comm.c
// SPDX-License-Identifier: GPL-2.0
// Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved.

#include <linux/delay.h>

#include "hinic3_hw_comm.h"
#include "hinic3_hwdev.h"
#include "hinic3_hwif.h"
#include "hinic3_mbox.h"

int hinic3_func_reset(struct hinic3_hwdev *hwdev, u16 func_id, u64 reset_flag)
{
	struct comm_cmd_func_reset func_reset = {};
	struct mgmt_msg_params msg_params = {};
	int err;

	func_reset.func_id = func_id;
	func_reset.reset_flag = reset_flag;

	mgmt_msg_params_init_default(&msg_params, &func_reset,
				     sizeof(func_reset));

	err = hinic3_send_mbox_to_mgmt(hwdev, MGMT_MOD_COMM,
				       COMM_CMD_FUNC_RESET, &msg_params);
	if (err || func_reset.head.status) {
		dev_err(hwdev->dev, "Failed to reset func resources, reset_flag 0x%llx, err: %d, status: 0x%x\n",
			reset_flag, err, func_reset.head.status);
		return -EIO;
	}

	return 0;
}
+13
drivers/net/ethernet/huawei/hinic3/hinic3_hw_comm.h
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved. */

#ifndef _HINIC3_HW_COMM_H_
#define _HINIC3_HW_COMM_H_

#include "hinic3_hw_intf.h"

struct hinic3_hwdev;

int hinic3_func_reset(struct hinic3_hwdev *hwdev, u16 func_id, u64 reset_flag);

#endif
+113
drivers/net/ethernet/huawei/hinic3/hinic3_hw_intf.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved. */ 3 + 4 + #ifndef _HINIC3_HW_INTF_H_ 5 + #define _HINIC3_HW_INTF_H_ 6 + 7 + #include <linux/bits.h> 8 + #include <linux/types.h> 9 + 10 + #define MGMT_MSG_CMD_OP_SET 1 11 + #define MGMT_MSG_CMD_OP_GET 0 12 + 13 + #define MGMT_STATUS_PF_SET_VF_ALREADY 0x4 14 + #define MGMT_STATUS_EXIST 0x6 15 + #define MGMT_STATUS_CMD_UNSUPPORTED 0xFF 16 + 17 + #define MGMT_MSG_POLLING_TIMEOUT 0 18 + 19 + struct mgmt_msg_head { 20 + u8 status; 21 + u8 version; 22 + u8 rsvd0[6]; 23 + }; 24 + 25 + struct mgmt_msg_params { 26 + const void *buf_in; 27 + u32 in_size; 28 + void *buf_out; 29 + u32 expected_out_size; 30 + u32 timeout_ms; 31 + }; 32 + 33 + /* CMDQ MODULE_TYPE */ 34 + enum mgmt_mod_type { 35 + /* HW communication module */ 36 + MGMT_MOD_COMM = 0, 37 + /* L2NIC module */ 38 + MGMT_MOD_L2NIC = 1, 39 + /* Configuration module */ 40 + MGMT_MOD_CFGM = 7, 41 + MGMT_MOD_HILINK = 14, 42 + }; 43 + 44 + static inline void mgmt_msg_params_init_default(struct mgmt_msg_params *msg_params, 45 + void *inout_buf, u32 buf_size) 46 + { 47 + msg_params->buf_in = inout_buf; 48 + msg_params->buf_out = inout_buf; 49 + msg_params->in_size = buf_size; 50 + msg_params->expected_out_size = buf_size; 51 + msg_params->timeout_ms = 0; 52 + } 53 + 54 + /* COMM Commands between Driver to fw */ 55 + enum comm_cmd { 56 + /* Commands for clearing FLR and resources */ 57 + COMM_CMD_FUNC_RESET = 0, 58 + COMM_CMD_FEATURE_NEGO = 1, 59 + COMM_CMD_FLUSH_DOORBELL = 2, 60 + COMM_CMD_START_FLUSH = 3, 61 + COMM_CMD_GET_GLOBAL_ATTR = 5, 62 + COMM_CMD_SET_FUNC_SVC_USED_STATE = 7, 63 + 64 + /* Driver Configuration Commands */ 65 + COMM_CMD_SET_CMDQ_CTXT = 20, 66 + COMM_CMD_SET_VAT = 21, 67 + COMM_CMD_CFG_PAGESIZE = 22, 68 + COMM_CMD_CFG_MSIX_CTRL_REG = 23, 69 + COMM_CMD_SET_CEQ_CTRL_REG = 24, 70 + COMM_CMD_SET_DMA_ATTR = 25, 71 + }; 72 + 73 + enum comm_func_reset_bits { 74 + 
COMM_FUNC_RESET_BIT_FLUSH = BIT(0), 75 + COMM_FUNC_RESET_BIT_MQM = BIT(1), 76 + COMM_FUNC_RESET_BIT_SMF = BIT(2), 77 + COMM_FUNC_RESET_BIT_PF_BW_CFG = BIT(3), 78 + 79 + COMM_FUNC_RESET_BIT_COMM = BIT(10), 80 + /* clear mbox and aeq, The COMM_FUNC_RESET_BIT_COMM bit must be set */ 81 + COMM_FUNC_RESET_BIT_COMM_MGMT_CH = BIT(11), 82 + /* clear cmdq and ceq, The COMM_FUNC_RESET_BIT_COMM bit must be set */ 83 + COMM_FUNC_RESET_BIT_COMM_CMD_CH = BIT(12), 84 + COMM_FUNC_RESET_BIT_NIC = BIT(13), 85 + }; 86 + 87 + struct comm_cmd_func_reset { 88 + struct mgmt_msg_head head; 89 + u16 func_id; 90 + u16 rsvd1[3]; 91 + u64 reset_flag; 92 + }; 93 + 94 + #define COMM_MAX_FEATURE_QWORD 4 95 + struct comm_cmd_feature_nego { 96 + struct mgmt_msg_head head; 97 + u16 func_id; 98 + u8 opcode; 99 + u8 rsvd; 100 + u64 s_feature[COMM_MAX_FEATURE_QWORD]; 101 + }; 102 + 103 + /* Services supported by HW. HW uses these values when delivering events. 104 + * HW supports multiple services that are not yet supported by driver 105 + * (e.g. RoCE). 106 + */ 107 + enum hinic3_service_type { 108 + HINIC3_SERVICE_T_NIC = 0, 109 + /* MAX is only used by SW for array sizes. */ 110 + HINIC3_SERVICE_T_MAX = 1, 111 + }; 112 + 113 + #endif
+24
drivers/net/ethernet/huawei/hinic3/hinic3_hwdev.c
// SPDX-License-Identifier: GPL-2.0
// Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved.

#include "hinic3_hw_comm.h"
#include "hinic3_hwdev.h"
#include "hinic3_hwif.h"
#include "hinic3_mbox.h"
#include "hinic3_mgmt.h"

int hinic3_init_hwdev(struct pci_dev *pdev)
{
	/* Completed by later submission due to LoC limit. */
	return -EFAULT;
}

void hinic3_free_hwdev(struct hinic3_hwdev *hwdev)
{
	/* Completed by later submission due to LoC limit. */
}

void hinic3_set_api_stop(struct hinic3_hwdev *hwdev)
{
	/* Completed by later submission due to LoC limit. */
}
+81
drivers/net/ethernet/huawei/hinic3/hinic3_hwdev.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved. */ 3 + 4 + #ifndef _HINIC3_HWDEV_H_ 5 + #define _HINIC3_HWDEV_H_ 6 + 7 + #include <linux/auxiliary_bus.h> 8 + #include <linux/pci.h> 9 + 10 + #include "hinic3_hw_intf.h" 11 + 12 + struct hinic3_cmdqs; 13 + struct hinic3_hwif; 14 + 15 + enum hinic3_event_service_type { 16 + HINIC3_EVENT_SRV_COMM = 0, 17 + HINIC3_EVENT_SRV_NIC = 1 18 + }; 19 + 20 + #define HINIC3_SRV_EVENT_TYPE(svc, type) (((svc) << 16) | (type)) 21 + 22 + /* driver-specific data of pci_dev */ 23 + struct hinic3_pcidev { 24 + struct pci_dev *pdev; 25 + struct hinic3_hwdev *hwdev; 26 + /* Auxiliary devices */ 27 + struct hinic3_adev *hadev[HINIC3_SERVICE_T_MAX]; 28 + 29 + void __iomem *cfg_reg_base; 30 + void __iomem *intr_reg_base; 31 + void __iomem *db_base; 32 + u64 db_dwqe_len; 33 + u64 db_base_phy; 34 + 35 + /* lock for attach/detach uld */ 36 + struct mutex pdev_mutex; 37 + unsigned long state; 38 + }; 39 + 40 + struct hinic3_hwdev { 41 + struct hinic3_pcidev *adapter; 42 + struct pci_dev *pdev; 43 + struct device *dev; 44 + int dev_id; 45 + struct hinic3_hwif *hwif; 46 + struct hinic3_cfg_mgmt_info *cfg_mgmt; 47 + struct hinic3_aeqs *aeqs; 48 + struct hinic3_ceqs *ceqs; 49 + struct hinic3_mbox *mbox; 50 + struct hinic3_cmdqs *cmdqs; 51 + struct workqueue_struct *workq; 52 + /* protect channel init and uninit */ 53 + spinlock_t channel_lock; 54 + u64 features[COMM_MAX_FEATURE_QWORD]; 55 + u32 wq_page_size; 56 + u8 max_cmdq; 57 + ulong func_state; 58 + }; 59 + 60 + struct hinic3_event_info { 61 + /* enum hinic3_event_service_type */ 62 + u16 service; 63 + u16 type; 64 + u8 event_data[104]; 65 + }; 66 + 67 + struct hinic3_adev { 68 + struct auxiliary_device adev; 69 + struct hinic3_hwdev *hwdev; 70 + enum hinic3_service_type svc_type; 71 + 72 + void (*event)(struct auxiliary_device *adev, 73 + struct hinic3_event_info *event); 74 + }; 75 + 76 + int hinic3_init_hwdev(struct 
pci_dev *pdev); 77 + void hinic3_free_hwdev(struct hinic3_hwdev *hwdev); 78 + 79 + void hinic3_set_api_stop(struct hinic3_hwdev *hwdev); 80 + 81 + #endif
+21
drivers/net/ethernet/huawei/hinic3/hinic3_hwif.c
// SPDX-License-Identifier: GPL-2.0
// Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved.

#include <linux/bitfield.h>
#include <linux/device.h>
#include <linux/io.h>

#include "hinic3_common.h"
#include "hinic3_hwdev.h"
#include "hinic3_hwif.h"

void hinic3_set_msix_state(struct hinic3_hwdev *hwdev, u16 msix_idx,
			   enum hinic3_msix_state flag)
{
	/* Completed by later submission due to LoC limit. */
}

u16 hinic3_global_func_id(struct hinic3_hwdev *hwdev)
{
	return hwdev->hwif->attr.func_global_idx;
}
+58
drivers/net/ethernet/huawei/hinic3/hinic3_hwif.h
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved. */

#ifndef _HINIC3_HWIF_H_
#define _HINIC3_HWIF_H_

#include <linux/build_bug.h>
#include <linux/spinlock_types.h>

struct hinic3_hwdev;

enum hinic3_func_type {
	HINIC3_FUNC_TYPE_VF = 1,
};

struct hinic3_db_area {
	unsigned long *db_bitmap_array;
	u32 db_max_areas;
	/* protect doorbell area alloc and free */
	spinlock_t idx_lock;
};

struct hinic3_func_attr {
	enum hinic3_func_type func_type;
	u16 func_global_idx;
	u16 global_vf_id_of_pf;
	u16 num_irqs;
	u16 num_sq;
	u8 port_to_port_idx;
	u8 pci_intf_idx;
	u8 ppf_idx;
	u8 num_aeqs;
	u8 num_ceqs;
	u8 msix_flex_en;
};

static_assert(sizeof(struct hinic3_func_attr) == 20);

struct hinic3_hwif {
	u8 __iomem *cfg_regs_base;
	u64 db_base_phy;
	u64 db_dwqe_len;
	u8 __iomem *db_base;
	struct hinic3_db_area db_area;
	struct hinic3_func_attr attr;
};

enum hinic3_msix_state {
	HINIC3_MSIX_ENABLE,
	HINIC3_MSIX_DISABLE,
};

void hinic3_set_msix_state(struct hinic3_hwdev *hwdev, u16 msix_idx,
			   enum hinic3_msix_state flag);

u16 hinic3_global_func_id(struct hinic3_hwdev *hwdev);

#endif
+62
drivers/net/ethernet/huawei/hinic3/hinic3_irq.c
// SPDX-License-Identifier: GPL-2.0
// Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved.

#include <linux/netdevice.h>

#include "hinic3_hw_comm.h"
#include "hinic3_hwdev.h"
#include "hinic3_hwif.h"
#include "hinic3_nic_dev.h"
#include "hinic3_rx.h"
#include "hinic3_tx.h"

static int hinic3_poll(struct napi_struct *napi, int budget)
{
	struct hinic3_irq_cfg *irq_cfg =
		container_of(napi, struct hinic3_irq_cfg, napi);
	struct hinic3_nic_dev *nic_dev;
	bool busy = false;
	int work_done;

	nic_dev = netdev_priv(irq_cfg->netdev);

	busy |= hinic3_tx_poll(irq_cfg->txq, budget);

	if (unlikely(!budget))
		return 0;

	work_done = hinic3_rx_poll(irq_cfg->rxq, budget);
	busy |= work_done >= budget;

	if (busy)
		return budget;

	if (likely(napi_complete_done(napi, work_done)))
		hinic3_set_msix_state(nic_dev->hwdev, irq_cfg->msix_entry_idx,
				      HINIC3_MSIX_ENABLE);

	return work_done;
}

void qp_add_napi(struct hinic3_irq_cfg *irq_cfg)
{
	struct hinic3_nic_dev *nic_dev = netdev_priv(irq_cfg->netdev);

	netif_queue_set_napi(irq_cfg->netdev, irq_cfg->irq_id,
			     NETDEV_QUEUE_TYPE_RX, &irq_cfg->napi);
	netif_queue_set_napi(irq_cfg->netdev, irq_cfg->irq_id,
			     NETDEV_QUEUE_TYPE_TX, &irq_cfg->napi);
	netif_napi_add(nic_dev->netdev, &irq_cfg->napi, hinic3_poll);
	napi_enable(&irq_cfg->napi);
}

void qp_del_napi(struct hinic3_irq_cfg *irq_cfg)
{
	napi_disable(&irq_cfg->napi);
	netif_queue_set_napi(irq_cfg->netdev, irq_cfg->irq_id,
			     NETDEV_QUEUE_TYPE_RX, NULL);
	netif_queue_set_napi(irq_cfg->netdev, irq_cfg->irq_id,
			     NETDEV_QUEUE_TYPE_TX, NULL);
	netif_stop_subqueue(irq_cfg->netdev, irq_cfg->irq_id);
	netif_napi_del(&irq_cfg->napi);
}
+414
drivers/net/ethernet/huawei/hinic3/hinic3_lld.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + // Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved. 3 + 4 + #include <linux/delay.h> 5 + #include <linux/iopoll.h> 6 + 7 + #include "hinic3_hw_cfg.h" 8 + #include "hinic3_hwdev.h" 9 + #include "hinic3_lld.h" 10 + #include "hinic3_mgmt.h" 11 + 12 + #define HINIC3_VF_PCI_CFG_REG_BAR 0 13 + #define HINIC3_PCI_INTR_REG_BAR 2 14 + #define HINIC3_PCI_DB_BAR 4 15 + 16 + #define HINIC3_EVENT_POLL_SLEEP_US 1000 17 + #define HINIC3_EVENT_POLL_TIMEOUT_US 10000000 18 + 19 + static struct hinic3_adev_device { 20 + const char *name; 21 + } hinic3_adev_devices[HINIC3_SERVICE_T_MAX] = { 22 + [HINIC3_SERVICE_T_NIC] = { 23 + .name = "nic", 24 + }, 25 + }; 26 + 27 + static bool hinic3_adev_svc_supported(struct hinic3_hwdev *hwdev, 28 + enum hinic3_service_type svc_type) 29 + { 30 + switch (svc_type) { 31 + case HINIC3_SERVICE_T_NIC: 32 + return hinic3_support_nic(hwdev); 33 + default: 34 + break; 35 + } 36 + 37 + return false; 38 + } 39 + 40 + static void hinic3_comm_adev_release(struct device *dev) 41 + { 42 + struct hinic3_adev *hadev = container_of(dev, struct hinic3_adev, 43 + adev.dev); 44 + 45 + kfree(hadev); 46 + } 47 + 48 + static struct hinic3_adev *hinic3_add_one_adev(struct hinic3_hwdev *hwdev, 49 + enum hinic3_service_type svc_type) 50 + { 51 + struct hinic3_adev *hadev; 52 + const char *svc_name; 53 + int ret; 54 + 55 + hadev = kzalloc(sizeof(*hadev), GFP_KERNEL); 56 + if (!hadev) 57 + return NULL; 58 + 59 + svc_name = hinic3_adev_devices[svc_type].name; 60 + hadev->adev.name = svc_name; 61 + hadev->adev.id = hwdev->dev_id; 62 + hadev->adev.dev.parent = hwdev->dev; 63 + hadev->adev.dev.release = hinic3_comm_adev_release; 64 + hadev->svc_type = svc_type; 65 + hadev->hwdev = hwdev; 66 + 67 + ret = auxiliary_device_init(&hadev->adev); 68 + if (ret) { 69 + dev_err(hwdev->dev, "failed init adev %s %u\n", 70 + svc_name, hwdev->dev_id); 71 + kfree(hadev); 72 + return NULL; 73 + } 74 + 75 + ret = 
auxiliary_device_add(&hadev->adev); 76 + if (ret) { 77 + dev_err(hwdev->dev, "failed to add adev %s %u\n", 78 + svc_name, hwdev->dev_id); 79 + auxiliary_device_uninit(&hadev->adev); 80 + return NULL; 81 + } 82 + 83 + return hadev; 84 + } 85 + 86 + static void hinic3_del_one_adev(struct hinic3_hwdev *hwdev, 87 + enum hinic3_service_type svc_type) 88 + { 89 + struct hinic3_pcidev *pci_adapter = hwdev->adapter; 90 + struct hinic3_adev *hadev; 91 + int timeout; 92 + bool state; 93 + 94 + timeout = read_poll_timeout(test_and_set_bit, state, !state, 95 + HINIC3_EVENT_POLL_SLEEP_US, 96 + HINIC3_EVENT_POLL_TIMEOUT_US, 97 + false, svc_type, &pci_adapter->state); 98 + 99 + hadev = pci_adapter->hadev[svc_type]; 100 + auxiliary_device_delete(&hadev->adev); 101 + auxiliary_device_uninit(&hadev->adev); 102 + pci_adapter->hadev[svc_type] = NULL; 103 + if (!timeout) 104 + clear_bit(svc_type, &pci_adapter->state); 105 + } 106 + 107 + static int hinic3_attach_aux_devices(struct hinic3_hwdev *hwdev) 108 + { 109 + struct hinic3_pcidev *pci_adapter = hwdev->adapter; 110 + enum hinic3_service_type svc_type; 111 + 112 + mutex_lock(&pci_adapter->pdev_mutex); 113 + 114 + for (svc_type = 0; svc_type < HINIC3_SERVICE_T_MAX; svc_type++) { 115 + if (!hinic3_adev_svc_supported(hwdev, svc_type)) 116 + continue; 117 + 118 + pci_adapter->hadev[svc_type] = hinic3_add_one_adev(hwdev, 119 + svc_type); 120 + if (!pci_adapter->hadev[svc_type]) 121 + goto err_del_adevs; 122 + } 123 + mutex_unlock(&pci_adapter->pdev_mutex); 124 + return 0; 125 + 126 + err_del_adevs: 127 + while (svc_type > 0) { 128 + svc_type--; 129 + if (pci_adapter->hadev[svc_type]) { 130 + hinic3_del_one_adev(hwdev, svc_type); 131 + pci_adapter->hadev[svc_type] = NULL; 132 + } 133 + } 134 + mutex_unlock(&pci_adapter->pdev_mutex); 135 + return -ENOMEM; 136 + } 137 + 138 + static void hinic3_detach_aux_devices(struct hinic3_hwdev *hwdev) 139 + { 140 + struct hinic3_pcidev *pci_adapter = hwdev->adapter; 141 + int i; 142 + 143 + 
mutex_lock(&pci_adapter->pdev_mutex); 144 + for (i = 0; i < ARRAY_SIZE(hinic3_adev_devices); i++) { 145 + if (pci_adapter->hadev[i]) 146 + hinic3_del_one_adev(hwdev, i); 147 + } 148 + mutex_unlock(&pci_adapter->pdev_mutex); 149 + } 150 + 151 + struct hinic3_hwdev *hinic3_adev_get_hwdev(struct auxiliary_device *adev) 152 + { 153 + struct hinic3_adev *hadev; 154 + 155 + hadev = container_of(adev, struct hinic3_adev, adev); 156 + return hadev->hwdev; 157 + } 158 + 159 + void hinic3_adev_event_register(struct auxiliary_device *adev, 160 + void (*event_handler)(struct auxiliary_device *adev, 161 + struct hinic3_event_info *event)) 162 + { 163 + struct hinic3_adev *hadev; 164 + 165 + hadev = container_of(adev, struct hinic3_adev, adev); 166 + hadev->event = event_handler; 167 + } 168 + 169 + void hinic3_adev_event_unregister(struct auxiliary_device *adev) 170 + { 171 + struct hinic3_adev *hadev; 172 + 173 + hadev = container_of(adev, struct hinic3_adev, adev); 174 + hadev->event = NULL; 175 + } 176 + 177 + static int hinic3_mapping_bar(struct pci_dev *pdev, 178 + struct hinic3_pcidev *pci_adapter) 179 + { 180 + pci_adapter->cfg_reg_base = pci_ioremap_bar(pdev, 181 + HINIC3_VF_PCI_CFG_REG_BAR); 182 + if (!pci_adapter->cfg_reg_base) { 183 + dev_err(&pdev->dev, "Failed to map configuration regs\n"); 184 + return -ENOMEM; 185 + } 186 + 187 + pci_adapter->intr_reg_base = pci_ioremap_bar(pdev, 188 + HINIC3_PCI_INTR_REG_BAR); 189 + if (!pci_adapter->intr_reg_base) { 190 + dev_err(&pdev->dev, "Failed to map interrupt regs\n"); 191 + goto err_unmap_cfg_reg_base; 192 + } 193 + 194 + pci_adapter->db_base_phy = pci_resource_start(pdev, HINIC3_PCI_DB_BAR); 195 + pci_adapter->db_dwqe_len = pci_resource_len(pdev, HINIC3_PCI_DB_BAR); 196 + pci_adapter->db_base = pci_ioremap_bar(pdev, HINIC3_PCI_DB_BAR); 197 + if (!pci_adapter->db_base) { 198 + dev_err(&pdev->dev, "Failed to map doorbell regs\n"); 199 + goto err_unmap_intr_reg_base; 200 + } 201 + 202 + return 0; 203 + 204 + 
err_unmap_intr_reg_base: 205 + iounmap(pci_adapter->intr_reg_base); 206 + 207 + err_unmap_cfg_reg_base: 208 + iounmap(pci_adapter->cfg_reg_base); 209 + 210 + return -ENOMEM; 211 + } 212 + 213 + static void hinic3_unmapping_bar(struct hinic3_pcidev *pci_adapter) 214 + { 215 + iounmap(pci_adapter->db_base); 216 + iounmap(pci_adapter->intr_reg_base); 217 + iounmap(pci_adapter->cfg_reg_base); 218 + } 219 + 220 + static int hinic3_pci_init(struct pci_dev *pdev) 221 + { 222 + struct hinic3_pcidev *pci_adapter; 223 + int err; 224 + 225 + pci_adapter = kzalloc(sizeof(*pci_adapter), GFP_KERNEL); 226 + if (!pci_adapter) 227 + return -ENOMEM; 228 + 229 + pci_adapter->pdev = pdev; 230 + mutex_init(&pci_adapter->pdev_mutex); 231 + 232 + pci_set_drvdata(pdev, pci_adapter); 233 + 234 + err = pci_enable_device(pdev); 235 + if (err) { 236 + dev_err(&pdev->dev, "Failed to enable PCI device\n"); 237 + goto err_free_pci_adapter; 238 + } 239 + 240 + err = pci_request_regions(pdev, HINIC3_NIC_DRV_NAME); 241 + if (err) { 242 + dev_err(&pdev->dev, "Failed to request regions\n"); 243 + goto err_disable_device; 244 + } 245 + 246 + pci_set_master(pdev); 247 + 248 + err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)); 249 + if (err) { 250 + dev_err(&pdev->dev, "Failed to set DMA mask\n"); 251 + goto err_release_regions; 252 + } 253 + 254 + return 0; 255 + 256 + err_release_regions: 257 + pci_clear_master(pdev); 258 + pci_release_regions(pdev); 259 + 260 + err_disable_device: 261 + pci_disable_device(pdev); 262 + 263 + err_free_pci_adapter: 264 + pci_set_drvdata(pdev, NULL); 265 + mutex_destroy(&pci_adapter->pdev_mutex); 266 + kfree(pci_adapter); 267 + 268 + return err; 269 + } 270 + 271 + static void hinic3_pci_uninit(struct pci_dev *pdev) 272 + { 273 + struct hinic3_pcidev *pci_adapter = pci_get_drvdata(pdev); 274 + 275 + pci_clear_master(pdev); 276 + pci_release_regions(pdev); 277 + pci_disable_device(pdev); 278 + pci_set_drvdata(pdev, NULL); 279 + 
mutex_destroy(&pci_adapter->pdev_mutex); 280 + kfree(pci_adapter); 281 + } 282 + 283 + static int hinic3_func_init(struct pci_dev *pdev, 284 + struct hinic3_pcidev *pci_adapter) 285 + { 286 + int err; 287 + 288 + err = hinic3_init_hwdev(pdev); 289 + if (err) { 290 + dev_err(&pdev->dev, "Failed to initialize hardware device\n"); 291 + return err; 292 + } 293 + 294 + err = hinic3_attach_aux_devices(pci_adapter->hwdev); 295 + if (err) 296 + goto err_free_hwdev; 297 + 298 + return 0; 299 + 300 + err_free_hwdev: 301 + hinic3_free_hwdev(pci_adapter->hwdev); 302 + 303 + return err; 304 + } 305 + 306 + static void hinic3_func_uninit(struct pci_dev *pdev) 307 + { 308 + struct hinic3_pcidev *pci_adapter = pci_get_drvdata(pdev); 309 + 310 + hinic3_detach_aux_devices(pci_adapter->hwdev); 311 + hinic3_free_hwdev(pci_adapter->hwdev); 312 + } 313 + 314 + static int hinic3_probe_func(struct hinic3_pcidev *pci_adapter) 315 + { 316 + struct pci_dev *pdev = pci_adapter->pdev; 317 + int err; 318 + 319 + err = hinic3_mapping_bar(pdev, pci_adapter); 320 + if (err) { 321 + dev_err(&pdev->dev, "Failed to map bar\n"); 322 + goto err_out; 323 + } 324 + 325 + err = hinic3_func_init(pdev, pci_adapter); 326 + if (err) 327 + goto err_unmap_bar; 328 + 329 + return 0; 330 + 331 + err_unmap_bar: 332 + hinic3_unmapping_bar(pci_adapter); 333 + 334 + err_out: 335 + dev_err(&pdev->dev, "PCIe device probe function failed\n"); 336 + return err; 337 + } 338 + 339 + static void hinic3_remove_func(struct hinic3_pcidev *pci_adapter) 340 + { 341 + struct pci_dev *pdev = pci_adapter->pdev; 342 + 343 + hinic3_func_uninit(pdev); 344 + hinic3_unmapping_bar(pci_adapter); 345 + } 346 + 347 + static int hinic3_probe(struct pci_dev *pdev, const struct pci_device_id *id) 348 + { 349 + struct hinic3_pcidev *pci_adapter; 350 + int err; 351 + 352 + err = hinic3_pci_init(pdev); 353 + if (err) 354 + goto err_out; 355 + 356 + pci_adapter = pci_get_drvdata(pdev); 357 + err = hinic3_probe_func(pci_adapter); 358 + if (err) 
359 + goto err_uninit_pci; 360 + 361 + return 0; 362 + 363 + err_uninit_pci: 364 + hinic3_pci_uninit(pdev); 365 + 366 + err_out: 367 + dev_err(&pdev->dev, "PCIe device probe failed\n"); 368 + return err; 369 + } 370 + 371 + static void hinic3_remove(struct pci_dev *pdev) 372 + { 373 + struct hinic3_pcidev *pci_adapter = pci_get_drvdata(pdev); 374 + 375 + hinic3_remove_func(pci_adapter); 376 + hinic3_pci_uninit(pdev); 377 + } 378 + 379 + static const struct pci_device_id hinic3_pci_table[] = { 380 + /* Completed by later submission due to LoC limit. */ 381 + {0, 0} 382 + 383 + }; 384 + 385 + MODULE_DEVICE_TABLE(pci, hinic3_pci_table); 386 + 387 + static void hinic3_shutdown(struct pci_dev *pdev) 388 + { 389 + struct hinic3_pcidev *pci_adapter = pci_get_drvdata(pdev); 390 + 391 + pci_disable_device(pdev); 392 + 393 + if (pci_adapter) 394 + hinic3_set_api_stop(pci_adapter->hwdev); 395 + } 396 + 397 + static struct pci_driver hinic3_driver = { 398 + .name = HINIC3_NIC_DRV_NAME, 399 + .id_table = hinic3_pci_table, 400 + .probe = hinic3_probe, 401 + .remove = hinic3_remove, 402 + .shutdown = hinic3_shutdown, 403 + .sriov_configure = pci_sriov_configure_simple 404 + }; 405 + 406 + int hinic3_lld_init(void) 407 + { 408 + return pci_register_driver(&hinic3_driver); 409 + } 410 + 411 + void hinic3_lld_exit(void) 412 + { 413 + pci_unregister_driver(&hinic3_driver); 414 + }
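[Editor's note] hinic3_probe() above uses the kernel's goto-based unwind idiom: each successfully acquired resource is released in reverse order when a later step fails. A minimal userspace sketch of that control flow; the step functions are hypothetical stand-ins, not driver functions:

```c
#include <assert.h>

/* Userspace sketch of the goto-unwind idiom used by hinic3_probe():
 * resources acquired earlier are released in reverse order when a
 * later step fails. step_a/step_b are hypothetical, not driver code. */
static int resources_held;

static int step_a(void)
{
	resources_held++;
	return 0;
}

static void undo_a(void)
{
	resources_held--;
}

static int step_b(int fail)
{
	if (fail)
		return -1;
	resources_held++;
	return 0;
}

static int probe_sketch(int b_fails)
{
	int err;

	err = step_a();
	if (err)
		goto err_out;

	err = step_b(b_fails);
	if (err)
		goto err_undo_a;

	return 0;

err_undo_a:
	undo_a();	/* unwind only what was acquired */
err_out:
	return err;
}
```

A failed probe leaves no resources held; a successful probe holds both.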
+21
drivers/net/ethernet/huawei/hinic3/hinic3_lld.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved. */ 3 + 4 + #ifndef _HINIC3_LLD_H_ 5 + #define _HINIC3_LLD_H_ 6 + 7 + #include <linux/auxiliary_bus.h> 8 + 9 + struct hinic3_event_info; 10 + 11 + #define HINIC3_NIC_DRV_NAME "hinic3" 12 + 13 + int hinic3_lld_init(void); 14 + void hinic3_lld_exit(void); 15 + void hinic3_adev_event_register(struct auxiliary_device *adev, 16 + void (*event_handler)(struct auxiliary_device *adev, 17 + struct hinic3_event_info *event)); 18 + void hinic3_adev_event_unregister(struct auxiliary_device *adev); 19 + struct hinic3_hwdev *hinic3_adev_get_hwdev(struct auxiliary_device *adev); 20 + 21 + #endif
+354
drivers/net/ethernet/huawei/hinic3/hinic3_main.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + // Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved. 3 + 4 + #include <linux/etherdevice.h> 5 + #include <linux/netdevice.h> 6 + 7 + #include "hinic3_common.h" 8 + #include "hinic3_hw_comm.h" 9 + #include "hinic3_hwdev.h" 10 + #include "hinic3_hwif.h" 11 + #include "hinic3_lld.h" 12 + #include "hinic3_nic_cfg.h" 13 + #include "hinic3_nic_dev.h" 14 + #include "hinic3_nic_io.h" 15 + #include "hinic3_rx.h" 16 + #include "hinic3_tx.h" 17 + 18 + #define HINIC3_NIC_DRV_DESC "Intelligent Network Interface Card Driver" 19 + 20 + #define HINIC3_RX_BUF_LEN 2048 21 + #define HINIC3_LRO_REPLENISH_THLD 256 22 + #define HINIC3_NIC_DEV_WQ_NAME "hinic3_nic_dev_wq" 23 + 24 + #define HINIC3_SQ_DEPTH 1024 25 + #define HINIC3_RQ_DEPTH 1024 26 + 27 + static int hinic3_alloc_txrxqs(struct net_device *netdev) 28 + { 29 + struct hinic3_nic_dev *nic_dev = netdev_priv(netdev); 30 + struct hinic3_hwdev *hwdev = nic_dev->hwdev; 31 + int err; 32 + 33 + err = hinic3_alloc_txqs(netdev); 34 + if (err) { 35 + dev_err(hwdev->dev, "Failed to alloc txqs\n"); 36 + return err; 37 + } 38 + 39 + err = hinic3_alloc_rxqs(netdev); 40 + if (err) { 41 + dev_err(hwdev->dev, "Failed to alloc rxqs\n"); 42 + goto err_free_txqs; 43 + } 44 + 45 + return 0; 46 + 47 + err_free_txqs: 48 + hinic3_free_txqs(netdev); 49 + 50 + return err; 51 + } 52 + 53 + static void hinic3_free_txrxqs(struct net_device *netdev) 54 + { 55 + hinic3_free_rxqs(netdev); 56 + hinic3_free_txqs(netdev); 57 + } 58 + 59 + static int hinic3_init_nic_dev(struct net_device *netdev, 60 + struct hinic3_hwdev *hwdev) 61 + { 62 + struct hinic3_nic_dev *nic_dev = netdev_priv(netdev); 63 + struct pci_dev *pdev = hwdev->pdev; 64 + 65 + nic_dev->netdev = netdev; 66 + SET_NETDEV_DEV(netdev, &pdev->dev); 67 + nic_dev->hwdev = hwdev; 68 + nic_dev->pdev = pdev; 69 + 70 + nic_dev->rx_buf_len = HINIC3_RX_BUF_LEN; 71 + nic_dev->lro_replenish_thld = HINIC3_LRO_REPLENISH_THLD; 72 + 
nic_dev->nic_svc_cap = hwdev->cfg_mgmt->cap.nic_svc_cap; 73 + 74 + return 0; 75 + } 76 + 77 + static int hinic3_sw_init(struct net_device *netdev) 78 + { 79 + struct hinic3_nic_dev *nic_dev = netdev_priv(netdev); 80 + struct hinic3_hwdev *hwdev = nic_dev->hwdev; 81 + int err; 82 + 83 + nic_dev->q_params.sq_depth = HINIC3_SQ_DEPTH; 84 + nic_dev->q_params.rq_depth = HINIC3_RQ_DEPTH; 85 + 86 + /* VF driver always uses a random MAC address. During VM migration to a 87 + * new device, the new device should learn the VM's old MAC rather than 88 + * provide its own MAC. The product design assumes that every VF is 89 + * susceptible to migration, so the device avoids offering a MAC address 90 + * to VFs. 91 + */ 92 + eth_hw_addr_random(netdev); 93 + err = hinic3_set_mac(hwdev, netdev->dev_addr, 0, 94 + hinic3_global_func_id(hwdev)); 95 + if (err) { 96 + dev_err(hwdev->dev, "Failed to set default MAC\n"); 97 + return err; 98 + } 99 + 100 + err = hinic3_alloc_txrxqs(netdev); 101 + if (err) { 102 + dev_err(hwdev->dev, "Failed to alloc qps\n"); 103 + goto err_del_mac; 104 + } 105 + 106 + return 0; 107 + 108 + err_del_mac: 109 + hinic3_del_mac(hwdev, netdev->dev_addr, 0, 110 + hinic3_global_func_id(hwdev)); 111 + 112 + return err; 113 + } 114 + 115 + static void hinic3_sw_uninit(struct net_device *netdev) 116 + { 117 + struct hinic3_nic_dev *nic_dev = netdev_priv(netdev); 118 + 119 + hinic3_free_txrxqs(netdev); 120 + hinic3_del_mac(nic_dev->hwdev, netdev->dev_addr, 0, 121 + hinic3_global_func_id(nic_dev->hwdev)); 122 + } 123 + 124 + static void hinic3_assign_netdev_ops(struct net_device *netdev) 125 + { 126 + hinic3_set_netdev_ops(netdev); 127 + } 128 + 129 + static void netdev_feature_init(struct net_device *netdev) 130 + { 131 + struct hinic3_nic_dev *nic_dev = netdev_priv(netdev); 132 + netdev_features_t cso_fts = 0; 133 + netdev_features_t tso_fts = 0; 134 + netdev_features_t dft_fts; 135 + 136 + dft_fts = NETIF_F_SG | NETIF_F_HIGHDMA; 137 + if (hinic3_test_support(nic_dev, 
HINIC3_NIC_F_CSUM)) 138 + cso_fts |= NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM | NETIF_F_RXCSUM; 139 + if (hinic3_test_support(nic_dev, HINIC3_NIC_F_SCTP_CRC)) 140 + cso_fts |= NETIF_F_SCTP_CRC; 141 + if (hinic3_test_support(nic_dev, HINIC3_NIC_F_TSO)) 142 + tso_fts |= NETIF_F_TSO | NETIF_F_TSO6; 143 + 144 + netdev->features |= dft_fts | cso_fts | tso_fts; 145 + } 146 + 147 + static int hinic3_set_default_hw_feature(struct net_device *netdev) 148 + { 149 + struct hinic3_nic_dev *nic_dev = netdev_priv(netdev); 150 + struct hinic3_hwdev *hwdev = nic_dev->hwdev; 151 + int err; 152 + 153 + err = hinic3_set_nic_feature_to_hw(nic_dev); 154 + if (err) { 155 + dev_err(hwdev->dev, "Failed to set nic features\n"); 156 + return err; 157 + } 158 + 159 + return 0; 160 + } 161 + 162 + static void hinic3_link_status_change(struct net_device *netdev, 163 + bool link_status_up) 164 + { 165 + struct hinic3_nic_dev *nic_dev = netdev_priv(netdev); 166 + 167 + if (link_status_up) { 168 + if (netif_carrier_ok(netdev)) 169 + return; 170 + 171 + nic_dev->link_status_up = true; 172 + netif_carrier_on(netdev); 173 + netdev_dbg(netdev, "Link is up\n"); 174 + } else { 175 + if (!netif_carrier_ok(netdev)) 176 + return; 177 + 178 + nic_dev->link_status_up = false; 179 + netif_carrier_off(netdev); 180 + netdev_dbg(netdev, "Link is down\n"); 181 + } 182 + } 183 + 184 + static void hinic3_nic_event(struct auxiliary_device *adev, 185 + struct hinic3_event_info *event) 186 + { 187 + struct hinic3_nic_dev *nic_dev = dev_get_drvdata(&adev->dev); 188 + struct net_device *netdev; 189 + 190 + netdev = nic_dev->netdev; 191 + 192 + switch (HINIC3_SRV_EVENT_TYPE(event->service, event->type)) { 193 + case HINIC3_SRV_EVENT_TYPE(HINIC3_EVENT_SRV_NIC, 194 + HINIC3_NIC_EVENT_LINK_UP): 195 + hinic3_link_status_change(netdev, true); 196 + break; 197 + case HINIC3_SRV_EVENT_TYPE(HINIC3_EVENT_SRV_NIC, 198 + HINIC3_NIC_EVENT_LINK_DOWN): 199 + hinic3_link_status_change(netdev, false); 200 + break; 201 + default: 202 + 
break; 203 + } 204 + } 205 + 206 + static int hinic3_nic_probe(struct auxiliary_device *adev, 207 + const struct auxiliary_device_id *id) 208 + { 209 + struct hinic3_hwdev *hwdev = hinic3_adev_get_hwdev(adev); 210 + struct pci_dev *pdev = hwdev->pdev; 211 + struct hinic3_nic_dev *nic_dev; 212 + struct net_device *netdev; 213 + u16 max_qps, glb_func_id; 214 + int err; 215 + 216 + if (!hinic3_support_nic(hwdev)) { 217 + dev_dbg(&adev->dev, "HW doesn't support nic\n"); 218 + return 0; 219 + } 220 + 221 + hinic3_adev_event_register(adev, hinic3_nic_event); 222 + 223 + glb_func_id = hinic3_global_func_id(hwdev); 224 + err = hinic3_func_reset(hwdev, glb_func_id, COMM_FUNC_RESET_BIT_NIC); 225 + if (err) { 226 + dev_err(&adev->dev, "Failed to reset function\n"); 227 + goto err_unregister_adev_event; 228 + } 229 + 230 + max_qps = hinic3_func_max_qnum(hwdev); 231 + netdev = alloc_etherdev_mq(sizeof(*nic_dev), max_qps); 232 + if (!netdev) { 233 + dev_err(&adev->dev, "Failed to allocate netdev\n"); 234 + err = -ENOMEM; 235 + goto err_unregister_adev_event; 236 + } 237 + 238 + nic_dev = netdev_priv(netdev); 239 + dev_set_drvdata(&adev->dev, nic_dev); 240 + err = hinic3_init_nic_dev(netdev, hwdev); 241 + if (err) 242 + goto err_free_netdev; 243 + 244 + err = hinic3_init_nic_io(nic_dev); 245 + if (err) 246 + goto err_free_netdev; 247 + 248 + err = hinic3_sw_init(netdev); 249 + if (err) 250 + goto err_free_nic_io; 251 + 252 + hinic3_assign_netdev_ops(netdev); 253 + 254 + netdev_feature_init(netdev); 255 + err = hinic3_set_default_hw_feature(netdev); 256 + if (err) 257 + goto err_uninit_sw; 258 + 259 + netif_carrier_off(netdev); 260 + 261 + err = register_netdev(netdev); 262 + if (err) 263 + goto err_uninit_nic_feature; 264 + 265 + return 0; 266 + 267 + err_uninit_nic_feature: 268 + hinic3_update_nic_feature(nic_dev, 0); 269 + hinic3_set_nic_feature_to_hw(nic_dev); 270 + 271 + err_uninit_sw: 272 + hinic3_sw_uninit(netdev); 273 + 274 + err_free_nic_io: 275 + 
hinic3_free_nic_io(nic_dev); 276 + 277 + err_free_netdev: 278 + free_netdev(netdev); 279 + 280 + err_unregister_adev_event: 281 + hinic3_adev_event_unregister(adev); 282 + dev_err(&pdev->dev, "NIC service probe failed\n"); 283 + 284 + return err; 285 + } 286 + 287 + static void hinic3_nic_remove(struct auxiliary_device *adev) 288 + { 289 + struct hinic3_nic_dev *nic_dev = dev_get_drvdata(&adev->dev); 290 + struct net_device *netdev; 291 + 292 + if (!hinic3_support_nic(nic_dev->hwdev)) 293 + return; 294 + 295 + netdev = nic_dev->netdev; 296 + unregister_netdev(netdev); 297 + 298 + hinic3_update_nic_feature(nic_dev, 0); 299 + hinic3_set_nic_feature_to_hw(nic_dev); 300 + hinic3_sw_uninit(netdev); 301 + 302 + hinic3_free_nic_io(nic_dev); 303 + 304 + free_netdev(netdev); 305 + } 306 + 307 + static const struct auxiliary_device_id hinic3_nic_id_table[] = { 308 + { 309 + .name = HINIC3_NIC_DRV_NAME ".nic", 310 + }, 311 + {} 312 + }; 313 + 314 + static struct auxiliary_driver hinic3_nic_driver = { 315 + .probe = hinic3_nic_probe, 316 + .remove = hinic3_nic_remove, 317 + .suspend = NULL, 318 + .resume = NULL, 319 + .name = "nic", 320 + .id_table = hinic3_nic_id_table, 321 + }; 322 + 323 + static __init int hinic3_nic_lld_init(void) 324 + { 325 + int err; 326 + 327 + pr_info("%s: %s\n", HINIC3_NIC_DRV_NAME, HINIC3_NIC_DRV_DESC); 328 + 329 + err = hinic3_lld_init(); 330 + if (err) 331 + return err; 332 + 333 + err = auxiliary_driver_register(&hinic3_nic_driver); 334 + if (err) { 335 + hinic3_lld_exit(); 336 + return err; 337 + } 338 + 339 + return 0; 340 + } 341 + 342 + static __exit void hinic3_nic_lld_exit(void) 343 + { 344 + auxiliary_driver_unregister(&hinic3_nic_driver); 345 + 346 + hinic3_lld_exit(); 347 + } 348 + 349 + module_init(hinic3_nic_lld_init); 350 + module_exit(hinic3_nic_lld_exit); 351 + 352 + MODULE_AUTHOR("Huawei Technologies CO., Ltd"); 353 + MODULE_DESCRIPTION(HINIC3_NIC_DRV_DESC); 354 + MODULE_LICENSE("GPL");
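[Editor's note] hinic3_nic_lld_init() registers the PCI driver first and the auxiliary driver second; if the second registration fails, the first is unwound before the error is returned. A userspace sketch of that ordering, with fake_* stand-ins for hinic3_lld_init/exit and auxiliary_driver_register (these are not kernel APIs):

```c
#include <assert.h>

/* Userspace sketch of the module-init ordering in hinic3_nic_lld_init().
 * fake_* functions model hinic3_lld_init/exit and
 * auxiliary_driver_register; they are illustrative stand-ins only. */
static int lld_registered;

static int fake_lld_init(void)
{
	lld_registered = 1;
	return 0;
}

static void fake_lld_exit(void)
{
	lld_registered = 0;
}

static int fake_aux_register(int fail)
{
	return fail ? -1 : 0;
}

static int nic_lld_init_sketch(int aux_fails)
{
	int err;

	err = fake_lld_init();
	if (err)
		return err;

	err = fake_aux_register(aux_fails);
	if (err) {
		fake_lld_exit();	/* unwind the PCI registration */
		return err;
	}

	return 0;
}
```

The exit path mirrors this in reverse: the auxiliary driver is unregistered before the PCI driver, as hinic3_nic_lld_exit() does.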
+16
drivers/net/ethernet/huawei/hinic3/hinic3_mbox.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + // Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved. 3 + 4 + #include <linux/dma-mapping.h> 5 + 6 + #include "hinic3_common.h" 7 + #include "hinic3_hwdev.h" 8 + #include "hinic3_hwif.h" 9 + #include "hinic3_mbox.h" 10 + 11 + int hinic3_send_mbox_to_mgmt(struct hinic3_hwdev *hwdev, u8 mod, u16 cmd, 12 + const struct mgmt_msg_params *msg_params) 13 + { 14 + /* Completed by later submission due to LoC limit. */ 15 + return -EFAULT; 16 + }
+15
drivers/net/ethernet/huawei/hinic3/hinic3_mbox.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved. */ 3 + 4 + #ifndef _HINIC3_MBOX_H_ 5 + #define _HINIC3_MBOX_H_ 6 + 7 + #include <linux/bitfield.h> 8 + #include <linux/mutex.h> 9 + 10 + struct hinic3_hwdev; 11 + 12 + int hinic3_send_mbox_to_mgmt(struct hinic3_hwdev *hwdev, u8 mod, u16 cmd, 13 + const struct mgmt_msg_params *msg_params); 14 + 15 + #endif
+13
drivers/net/ethernet/huawei/hinic3/hinic3_mgmt.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved. */ 3 + 4 + #ifndef _HINIC3_MGMT_H_ 5 + #define _HINIC3_MGMT_H_ 6 + 7 + #include <linux/types.h> 8 + 9 + struct hinic3_hwdev; 10 + 11 + void hinic3_flush_mgmt_workq(struct hinic3_hwdev *hwdev); 12 + 13 + #endif
+105
drivers/net/ethernet/huawei/hinic3/hinic3_mgmt_interface.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved. */ 3 + 4 + #ifndef _HINIC3_MGMT_INTERFACE_H_ 5 + #define _HINIC3_MGMT_INTERFACE_H_ 6 + 7 + #include <linux/bitfield.h> 8 + #include <linux/bits.h> 9 + #include <linux/if_ether.h> 10 + 11 + #include "hinic3_hw_intf.h" 12 + 13 + struct l2nic_cmd_feature_nego { 14 + struct mgmt_msg_head msg_head; 15 + u16 func_id; 16 + u8 opcode; 17 + u8 rsvd; 18 + u64 s_feature[4]; 19 + }; 20 + 21 + enum l2nic_func_tbl_cfg_bitmap { 22 + L2NIC_FUNC_TBL_CFG_INIT = 0, 23 + L2NIC_FUNC_TBL_CFG_RX_BUF_SIZE = 1, 24 + L2NIC_FUNC_TBL_CFG_MTU = 2, 25 + }; 26 + 27 + struct l2nic_func_tbl_cfg { 28 + u16 rx_wqe_buf_size; 29 + u16 mtu; 30 + u32 rsvd[9]; 31 + }; 32 + 33 + struct l2nic_cmd_set_func_tbl { 34 + struct mgmt_msg_head msg_head; 35 + u16 func_id; 36 + u16 rsvd; 37 + u32 cfg_bitmap; 38 + struct l2nic_func_tbl_cfg tbl_cfg; 39 + }; 40 + 41 + struct l2nic_cmd_set_mac { 42 + struct mgmt_msg_head msg_head; 43 + u16 func_id; 44 + u16 vlan_id; 45 + u16 rsvd1; 46 + u8 mac[ETH_ALEN]; 47 + }; 48 + 49 + struct l2nic_cmd_update_mac { 50 + struct mgmt_msg_head msg_head; 51 + u16 func_id; 52 + u16 vlan_id; 53 + u16 rsvd1; 54 + u8 old_mac[ETH_ALEN]; 55 + u16 rsvd2; 56 + u8 new_mac[ETH_ALEN]; 57 + }; 58 + 59 + struct l2nic_cmd_force_pkt_drop { 60 + struct mgmt_msg_head msg_head; 61 + u8 port; 62 + u8 rsvd1[3]; 63 + }; 64 + 65 + /* Commands between NIC to fw */ 66 + enum l2nic_cmd { 67 + /* FUNC CFG */ 68 + L2NIC_CMD_SET_FUNC_TBL = 5, 69 + L2NIC_CMD_SET_VPORT_ENABLE = 6, 70 + L2NIC_CMD_SET_SQ_CI_ATTR = 8, 71 + L2NIC_CMD_CLEAR_QP_RESOURCE = 11, 72 + L2NIC_CMD_FEATURE_NEGO = 15, 73 + L2NIC_CMD_SET_MAC = 21, 74 + L2NIC_CMD_DEL_MAC = 22, 75 + L2NIC_CMD_UPDATE_MAC = 23, 76 + L2NIC_CMD_CFG_RSS = 60, 77 + L2NIC_CMD_CFG_RSS_HASH_KEY = 63, 78 + L2NIC_CMD_CFG_RSS_HASH_ENGINE = 64, 79 + L2NIC_CMD_SET_RSS_CTX_TBL = 65, 80 + L2NIC_CMD_QOS_DCB_STATE = 110, 81 + L2NIC_CMD_FORCE_PKT_DROP = 113, 82 + 
L2NIC_CMD_MAX = 256, 83 + }; 84 + 85 + enum hinic3_nic_feature_cap { 86 + HINIC3_NIC_F_CSUM = BIT(0), 87 + HINIC3_NIC_F_SCTP_CRC = BIT(1), 88 + HINIC3_NIC_F_TSO = BIT(2), 89 + HINIC3_NIC_F_LRO = BIT(3), 90 + HINIC3_NIC_F_UFO = BIT(4), 91 + HINIC3_NIC_F_RSS = BIT(5), 92 + HINIC3_NIC_F_RX_VLAN_FILTER = BIT(6), 93 + HINIC3_NIC_F_RX_VLAN_STRIP = BIT(7), 94 + HINIC3_NIC_F_TX_VLAN_INSERT = BIT(8), 95 + HINIC3_NIC_F_VXLAN_OFFLOAD = BIT(9), 96 + HINIC3_NIC_F_FDIR = BIT(11), 97 + HINIC3_NIC_F_PROMISC = BIT(12), 98 + HINIC3_NIC_F_ALLMULTI = BIT(13), 99 + HINIC3_NIC_F_RATE_LIMIT = BIT(16), 100 + }; 101 + 102 + #define HINIC3_NIC_F_ALL_MASK 0x33bff 103 + #define HINIC3_NIC_DRV_DEFAULT_FEATURE 0x3f03f 104 + 105 + #endif
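[Editor's note] hinic3_test_support() (defined later in hinic3_nic_cfg.c) treats a feature combination as supported only when every requested bit is present in the negotiated capability word. A userspace sketch of that check, mirroring the first three hinic3_nic_feature_cap bits above:

```c
#include <assert.h>
#include <stdint.h>

/* Userspace sketch of the capability test in hinic3_test_support():
 * a feature set is "supported" only if every requested bit is set in
 * the negotiated capability word. The defines mirror the first bits
 * of hinic3_nic_feature_cap. */
#define NIC_F_CSUM	(1ULL << 0)
#define NIC_F_SCTP_CRC	(1ULL << 1)
#define NIC_F_TSO	(1ULL << 2)

static int test_support(uint64_t cap, uint64_t feature_bits)
{
	/* all requested bits must be present, not just some of them */
	return (cap & feature_bits) == feature_bits;
}
```

Note that a partial match fails: requesting CSUM together with SCTP_CRC against a capability word that only has CSUM is not supported.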
+78
drivers/net/ethernet/huawei/hinic3/hinic3_netdev_ops.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + // Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved. 3 + 4 + #include <linux/etherdevice.h> 5 + #include <linux/netdevice.h> 6 + 7 + #include "hinic3_hwif.h" 8 + #include "hinic3_nic_cfg.h" 9 + #include "hinic3_nic_dev.h" 10 + #include "hinic3_nic_io.h" 11 + #include "hinic3_rx.h" 12 + #include "hinic3_tx.h" 13 + 14 + static int hinic3_open(struct net_device *netdev) 15 + { 16 + /* Completed by later submission due to LoC limit. */ 17 + return -EFAULT; 18 + } 19 + 20 + static int hinic3_close(struct net_device *netdev) 21 + { 22 + /* Completed by later submission due to LoC limit. */ 23 + return -EFAULT; 24 + } 25 + 26 + static int hinic3_change_mtu(struct net_device *netdev, int new_mtu) 27 + { 28 + int err; 29 + 30 + err = hinic3_set_port_mtu(netdev, new_mtu); 31 + if (err) { 32 + netdev_err(netdev, "Failed to change port mtu to %d\n", 33 + new_mtu); 34 + return err; 35 + } 36 + 37 + netdev_dbg(netdev, "Change mtu from %u to %d\n", netdev->mtu, new_mtu); 38 + WRITE_ONCE(netdev->mtu, new_mtu); 39 + 40 + return 0; 41 + } 42 + 43 + static int hinic3_set_mac_addr(struct net_device *netdev, void *addr) 44 + { 45 + struct hinic3_nic_dev *nic_dev = netdev_priv(netdev); 46 + struct sockaddr *saddr = addr; 47 + int err; 48 + 49 + if (!is_valid_ether_addr(saddr->sa_data)) 50 + return -EADDRNOTAVAIL; 51 + 52 + if (ether_addr_equal(netdev->dev_addr, saddr->sa_data)) 53 + return 0; 54 + 55 + err = hinic3_update_mac(nic_dev->hwdev, netdev->dev_addr, 56 + saddr->sa_data, 0, 57 + hinic3_global_func_id(nic_dev->hwdev)); 58 + 59 + if (err) 60 + return err; 61 + 62 + eth_hw_addr_set(netdev, saddr->sa_data); 63 + 64 + return 0; 65 + } 66 + 67 + static const struct net_device_ops hinic3_netdev_ops = { 68 + .ndo_open = hinic3_open, 69 + .ndo_stop = hinic3_close, 70 + .ndo_change_mtu = hinic3_change_mtu, 71 + .ndo_set_mac_address = hinic3_set_mac_addr, 72 + .ndo_start_xmit = hinic3_xmit_frame, 73 + }; 74 + 75 + void 
hinic3_set_netdev_ops(struct net_device *netdev) 76 + { 77 + netdev->netdev_ops = &hinic3_netdev_ops; 78 + }
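[Editor's note] hinic3_set_mac_addr() above validates the new address, short-circuits a no-op change, and commits the address locally only after the hardware update succeeds. A userspace sketch of that decision flow; the helper and error values are illustrative, not the kernel's:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Userspace sketch of the hinic3_set_mac_addr() decision flow. */
static int is_valid_ether(const uint8_t *a)
{
	static const uint8_t zero[6];

	/* valid = unicast (multicast bit of first octet clear), non-zero */
	return !(a[0] & 1) && memcmp(a, zero, 6) != 0;
}

static int set_mac_sketch(uint8_t *dev_addr, const uint8_t *new_addr,
			  int hw_update_err)
{
	if (!is_valid_ether(new_addr))
		return -1;		/* -EADDRNOTAVAIL in the driver */
	if (memcmp(dev_addr, new_addr, 6) == 0)
		return 0;		/* address unchanged: nothing to do */
	if (hw_update_err)
		return hw_update_err;	/* hw rejected: keep old address */
	memcpy(dev_addr, new_addr, 6);	/* commit only after hw success */
	return 0;
}
```

The ordering matters: the local copy is never updated when the hardware call fails, so software and hardware state stay consistent.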
+233
drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + // Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved. 3 + 4 + #include <linux/if_vlan.h> 5 + 6 + #include "hinic3_hwdev.h" 7 + #include "hinic3_hwif.h" 8 + #include "hinic3_mbox.h" 9 + #include "hinic3_nic_cfg.h" 10 + #include "hinic3_nic_dev.h" 11 + #include "hinic3_nic_io.h" 12 + 13 + static int hinic3_feature_nego(struct hinic3_hwdev *hwdev, u8 opcode, 14 + u64 *s_feature, u16 size) 15 + { 16 + struct l2nic_cmd_feature_nego feature_nego = {}; 17 + struct mgmt_msg_params msg_params = {}; 18 + int err; 19 + 20 + feature_nego.func_id = hinic3_global_func_id(hwdev); 21 + feature_nego.opcode = opcode; 22 + if (opcode == MGMT_MSG_CMD_OP_SET) 23 + memcpy(feature_nego.s_feature, s_feature, size * sizeof(u64)); 24 + 25 + mgmt_msg_params_init_default(&msg_params, &feature_nego, 26 + sizeof(feature_nego)); 27 + 28 + err = hinic3_send_mbox_to_mgmt(hwdev, MGMT_MOD_L2NIC, 29 + L2NIC_CMD_FEATURE_NEGO, &msg_params); 30 + if (err || feature_nego.msg_head.status) { 31 + dev_err(hwdev->dev, "Failed to negotiate nic feature, err:%d, status: 0x%x\n", 32 + err, feature_nego.msg_head.status); 33 + return -EIO; 34 + } 35 + 36 + if (opcode == MGMT_MSG_CMD_OP_GET) 37 + memcpy(s_feature, feature_nego.s_feature, size * sizeof(u64)); 38 + 39 + return 0; 40 + } 41 + 42 + int hinic3_set_nic_feature_to_hw(struct hinic3_nic_dev *nic_dev) 43 + { 44 + return hinic3_feature_nego(nic_dev->hwdev, MGMT_MSG_CMD_OP_SET, 45 + &nic_dev->nic_io->feature_cap, 1); 46 + } 47 + 48 + bool hinic3_test_support(struct hinic3_nic_dev *nic_dev, 49 + enum hinic3_nic_feature_cap feature_bits) 50 + { 51 + return (nic_dev->nic_io->feature_cap & feature_bits) == feature_bits; 52 + } 53 + 54 + void hinic3_update_nic_feature(struct hinic3_nic_dev *nic_dev, u64 feature_cap) 55 + { 56 + nic_dev->nic_io->feature_cap = feature_cap; 57 + } 58 + 59 + static int hinic3_set_function_table(struct hinic3_hwdev *hwdev, u32 cfg_bitmap, 60 + const struct 
l2nic_func_tbl_cfg *cfg) 61 + { 62 + struct l2nic_cmd_set_func_tbl cmd_func_tbl = {}; 63 + struct mgmt_msg_params msg_params = {}; 64 + int err; 65 + 66 + cmd_func_tbl.func_id = hinic3_global_func_id(hwdev); 67 + cmd_func_tbl.cfg_bitmap = cfg_bitmap; 68 + cmd_func_tbl.tbl_cfg = *cfg; 69 + 70 + mgmt_msg_params_init_default(&msg_params, &cmd_func_tbl, 71 + sizeof(cmd_func_tbl)); 72 + 73 + err = hinic3_send_mbox_to_mgmt(hwdev, MGMT_MOD_L2NIC, 74 + L2NIC_CMD_SET_FUNC_TBL, &msg_params); 75 + if (err || cmd_func_tbl.msg_head.status) { 76 + dev_err(hwdev->dev, 77 + "Failed to set func table, bitmap: 0x%x, err: %d, status: 0x%x\n", 78 + cfg_bitmap, err, cmd_func_tbl.msg_head.status); 79 + return -EFAULT; 80 + } 81 + 82 + return 0; 83 + } 84 + 85 + int hinic3_set_port_mtu(struct net_device *netdev, u16 new_mtu) 86 + { 87 + struct hinic3_nic_dev *nic_dev = netdev_priv(netdev); 88 + struct l2nic_func_tbl_cfg func_tbl_cfg = {}; 89 + struct hinic3_hwdev *hwdev = nic_dev->hwdev; 90 + 91 + func_tbl_cfg.mtu = new_mtu; 92 + return hinic3_set_function_table(hwdev, BIT(L2NIC_FUNC_TBL_CFG_MTU), 93 + &func_tbl_cfg); 94 + } 95 + 96 + static int hinic3_check_mac_info(struct hinic3_hwdev *hwdev, u8 status, 97 + u16 vlan_id) 98 + { 99 + if ((status && status != MGMT_STATUS_EXIST) || 100 + ((vlan_id & BIT(15)) && status == MGMT_STATUS_EXIST)) { 101 + return -EINVAL; 102 + } 103 + 104 + return 0; 105 + } 106 + 107 + int hinic3_set_mac(struct hinic3_hwdev *hwdev, const u8 *mac_addr, u16 vlan_id, 108 + u16 func_id) 109 + { 110 + struct l2nic_cmd_set_mac mac_info = {}; 111 + struct mgmt_msg_params msg_params = {}; 112 + int err; 113 + 114 + if ((vlan_id & HINIC3_VLAN_ID_MASK) >= VLAN_N_VID) { 115 + dev_err(hwdev->dev, "Invalid VLAN number: %d\n", 116 + (vlan_id & HINIC3_VLAN_ID_MASK)); 117 + return -EINVAL; 118 + } 119 + 120 + mac_info.func_id = func_id; 121 + mac_info.vlan_id = vlan_id; 122 + ether_addr_copy(mac_info.mac, mac_addr); 123 + 124 + mgmt_msg_params_init_default(&msg_params, 
&mac_info, sizeof(mac_info)); 125 + 126 + err = hinic3_send_mbox_to_mgmt(hwdev, MGMT_MOD_L2NIC, 127 + L2NIC_CMD_SET_MAC, &msg_params); 128 + if (err || hinic3_check_mac_info(hwdev, mac_info.msg_head.status, 129 + mac_info.vlan_id)) { 130 + dev_err(hwdev->dev, 131 + "Failed to update MAC, err: %d, status: 0x%x\n", 132 + err, mac_info.msg_head.status); 133 + return -EIO; 134 + } 135 + 136 + if (mac_info.msg_head.status == MGMT_STATUS_PF_SET_VF_ALREADY) { 137 + dev_warn(hwdev->dev, "PF has already set VF mac, Ignore set operation\n"); 138 + return 0; 139 + } 140 + 141 + if (mac_info.msg_head.status == MGMT_STATUS_EXIST) { 142 + dev_warn(hwdev->dev, "MAC is repeated. Ignore update operation\n"); 143 + return 0; 144 + } 145 + 146 + return 0; 147 + } 148 + 149 + int hinic3_del_mac(struct hinic3_hwdev *hwdev, const u8 *mac_addr, u16 vlan_id, 150 + u16 func_id) 151 + { 152 + struct l2nic_cmd_set_mac mac_info = {}; 153 + struct mgmt_msg_params msg_params = {}; 154 + int err; 155 + 156 + if ((vlan_id & HINIC3_VLAN_ID_MASK) >= VLAN_N_VID) { 157 + dev_err(hwdev->dev, "Invalid VLAN number: %d\n", 158 + (vlan_id & HINIC3_VLAN_ID_MASK)); 159 + return -EINVAL; 160 + } 161 + 162 + mac_info.func_id = func_id; 163 + mac_info.vlan_id = vlan_id; 164 + ether_addr_copy(mac_info.mac, mac_addr); 165 + 166 + mgmt_msg_params_init_default(&msg_params, &mac_info, sizeof(mac_info)); 167 + 168 + err = hinic3_send_mbox_to_mgmt(hwdev, MGMT_MOD_L2NIC, 169 + L2NIC_CMD_DEL_MAC, &msg_params); 170 + if (err) { 171 + dev_err(hwdev->dev, 172 + "Failed to delete MAC, err: %d, status: 0x%x\n", 173 + err, mac_info.msg_head.status); 174 + return err; 175 + } 176 + 177 + return 0; 178 + } 179 + 180 + int hinic3_update_mac(struct hinic3_hwdev *hwdev, const u8 *old_mac, 181 + u8 *new_mac, u16 vlan_id, u16 func_id) 182 + { 183 + struct l2nic_cmd_update_mac mac_info = {}; 184 + struct mgmt_msg_params msg_params = {}; 185 + int err; 186 + 187 + if ((vlan_id & HINIC3_VLAN_ID_MASK) >= VLAN_N_VID) { 188 + 
dev_err(hwdev->dev, "Invalid VLAN number: %d\n", 189 + (vlan_id & HINIC3_VLAN_ID_MASK)); 190 + return -EINVAL; 191 + } 192 + 193 + mac_info.func_id = func_id; 194 + mac_info.vlan_id = vlan_id; 195 + ether_addr_copy(mac_info.old_mac, old_mac); 196 + ether_addr_copy(mac_info.new_mac, new_mac); 197 + 198 + mgmt_msg_params_init_default(&msg_params, &mac_info, sizeof(mac_info)); 199 + 200 + err = hinic3_send_mbox_to_mgmt(hwdev, MGMT_MOD_L2NIC, 201 + L2NIC_CMD_UPDATE_MAC, &msg_params); 202 + if (err || hinic3_check_mac_info(hwdev, mac_info.msg_head.status, 203 + mac_info.vlan_id)) { 204 + dev_err(hwdev->dev, 205 + "Failed to update MAC, err: %d, status: 0x%x\n", 206 + err, mac_info.msg_head.status); 207 + return -EIO; 208 + } 209 + return 0; 210 + } 211 + 212 + int hinic3_force_drop_tx_pkt(struct hinic3_hwdev *hwdev) 213 + { 214 + struct l2nic_cmd_force_pkt_drop pkt_drop = {}; 215 + struct mgmt_msg_params msg_params = {}; 216 + int err; 217 + 218 + pkt_drop.port = hinic3_physical_port_id(hwdev); 219 + 220 + mgmt_msg_params_init_default(&msg_params, &pkt_drop, sizeof(pkt_drop)); 221 + 222 + err = hinic3_send_mbox_to_mgmt(hwdev, MGMT_MOD_L2NIC, 223 + L2NIC_CMD_FORCE_PKT_DROP, &msg_params); 224 + if ((pkt_drop.msg_head.status != MGMT_STATUS_CMD_UNSUPPORTED && 225 + pkt_drop.msg_head.status) || err) { 226 + dev_err(hwdev->dev, 227 + "Failed to set force tx packets drop, err: %d, status: 0x%x\n", 228 + err, pkt_drop.msg_head.status); 229 + return -EFAULT; 230 + } 231 + 232 + return pkt_drop.msg_head.status; 233 + }
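[Editor's note] Every mailbox command in hinic3_nic_cfg.c follows the same failure check: the call fails if the transport returns an error, or if the firmware status is nonzero and not an allowed special case (e.g. MGMT_STATUS_EXIST for set-MAC, MGMT_STATUS_CMD_UNSUPPORTED for force-drop). A userspace sketch of that pattern; the numeric status values here are assumptions for illustration, except that 0 means success:

```c
#include <assert.h>

/* Illustrative stand-ins for firmware status codes; the numeric
 * values are assumptions for this sketch, except that 0 = success. */
#define MGMT_STATUS_OK			0
#define MGMT_STATUS_EXIST		0x6	/* assumed value */

/* The recurring check: a command fails when the mailbox transport
 * errs, or when firmware returns a nonzero status that is not the
 * one tolerated special case for this command. */
static int cmd_failed(int err, int status, int allowed_special)
{
	if (err)
		return 1;
	if (status && status != allowed_special)
		return 1;
	return 0;
}
```

This is why the driver checks `err || msg_head.status` after each hinic3_send_mbox_to_mgmt() call, then whitelists specific statuses before deciding the command actually failed.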
+41
drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved. */ 3 + 4 + #ifndef _HINIC3_NIC_CFG_H_ 5 + #define _HINIC3_NIC_CFG_H_ 6 + 7 + #include <linux/types.h> 8 + 9 + #include "hinic3_hw_intf.h" 10 + #include "hinic3_mgmt_interface.h" 11 + 12 + struct hinic3_hwdev; 13 + struct hinic3_nic_dev; 14 + 15 + #define HINIC3_MIN_MTU_SIZE 256 16 + #define HINIC3_MAX_JUMBO_FRAME_SIZE 9600 17 + 18 + #define HINIC3_VLAN_ID_MASK 0x7FFF 19 + 20 + enum hinic3_nic_event_type { 21 + HINIC3_NIC_EVENT_LINK_DOWN = 0, 22 + HINIC3_NIC_EVENT_LINK_UP = 1, 23 + }; 24 + 25 + int hinic3_set_nic_feature_to_hw(struct hinic3_nic_dev *nic_dev); 26 + bool hinic3_test_support(struct hinic3_nic_dev *nic_dev, 27 + enum hinic3_nic_feature_cap feature_bits); 28 + void hinic3_update_nic_feature(struct hinic3_nic_dev *nic_dev, u64 feature_cap); 29 + 30 + int hinic3_set_port_mtu(struct net_device *netdev, u16 new_mtu); 31 + 32 + int hinic3_set_mac(struct hinic3_hwdev *hwdev, const u8 *mac_addr, u16 vlan_id, 33 + u16 func_id); 34 + int hinic3_del_mac(struct hinic3_hwdev *hwdev, const u8 *mac_addr, u16 vlan_id, 35 + u16 func_id); 36 + int hinic3_update_mac(struct hinic3_hwdev *hwdev, const u8 *old_mac, 37 + u8 *new_mac, u16 vlan_id, u16 func_id); 38 + 39 + int hinic3_force_drop_tx_pkt(struct hinic3_hwdev *hwdev); 40 + 41 + #endif
+82
drivers/net/ethernet/huawei/hinic3/hinic3_nic_dev.h
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved. */

#ifndef _HINIC3_NIC_DEV_H_
#define _HINIC3_NIC_DEV_H_

#include <linux/netdevice.h>

#include "hinic3_hw_cfg.h"
#include "hinic3_mgmt_interface.h"

enum hinic3_flags {
        HINIC3_RSS_ENABLE,
};

enum hinic3_rss_hash_type {
        HINIC3_RSS_HASH_ENGINE_TYPE_XOR  = 0,
        HINIC3_RSS_HASH_ENGINE_TYPE_TOEP = 1,
};

struct hinic3_rss_type {
        u8 tcp_ipv6_ext;
        u8 ipv6_ext;
        u8 tcp_ipv6;
        u8 ipv6;
        u8 tcp_ipv4;
        u8 ipv4;
        u8 udp_ipv6;
        u8 udp_ipv4;
};

struct hinic3_irq_cfg {
        struct net_device *netdev;
        u16 msix_entry_idx;
        /* provided by OS */
        u32 irq_id;
        char irq_name[IFNAMSIZ + 16];
        struct napi_struct napi;
        cpumask_t affinity_mask;
        struct hinic3_txq *txq;
        struct hinic3_rxq *rxq;
};

struct hinic3_dyna_txrxq_params {
        u16 num_qps;
        u32 sq_depth;
        u32 rq_depth;

        struct hinic3_dyna_txq_res *txqs_res;
        struct hinic3_dyna_rxq_res *rxqs_res;
        struct hinic3_irq_cfg *irq_cfg;
};

struct hinic3_nic_dev {
        struct pci_dev *pdev;
        struct net_device *netdev;
        struct hinic3_hwdev *hwdev;
        struct hinic3_nic_io *nic_io;

        u16 max_qps;
        u16 rx_buf_len;
        u32 lro_replenish_thld;
        unsigned long flags;
        struct hinic3_nic_service_cap nic_svc_cap;

        struct hinic3_dyna_txrxq_params q_params;
        struct hinic3_txq *txqs;
        struct hinic3_rxq *rxqs;

        u16 num_qp_irq;
        struct msix_entry *qps_msix_entries;

        bool link_status_up;
};

void hinic3_set_netdev_ops(struct net_device *netdev);

/* Temporary prototypes. Functions become static in later submission. */
void qp_add_napi(struct hinic3_irq_cfg *irq_cfg);
void qp_del_napi(struct hinic3_irq_cfg *irq_cfg);

#endif
+21
drivers/net/ethernet/huawei/hinic3/hinic3_nic_io.c
// SPDX-License-Identifier: GPL-2.0
// Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved.

#include "hinic3_hw_comm.h"
#include "hinic3_hw_intf.h"
#include "hinic3_hwdev.h"
#include "hinic3_hwif.h"
#include "hinic3_nic_cfg.h"
#include "hinic3_nic_dev.h"
#include "hinic3_nic_io.h"

int hinic3_init_nic_io(struct hinic3_nic_dev *nic_dev)
{
        /* Completed by later submission due to LoC limit. */
        return -EFAULT;
}

void hinic3_free_nic_io(struct hinic3_nic_dev *nic_dev)
{
        /* Completed by later submission due to LoC limit. */
}
+120
drivers/net/ethernet/huawei/hinic3/hinic3_nic_io.h
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved. */

#ifndef _HINIC3_NIC_IO_H_
#define _HINIC3_NIC_IO_H_

#include <linux/bitfield.h>

#include "hinic3_wq.h"

struct hinic3_nic_dev;

#define HINIC3_SQ_WQEBB_SHIFT 4
#define HINIC3_RQ_WQEBB_SHIFT 3
#define HINIC3_SQ_WQEBB_SIZE  BIT(HINIC3_SQ_WQEBB_SHIFT)

/* ******************** RQ_CTRL ******************** */
enum hinic3_rq_wqe_type {
        HINIC3_NORMAL_RQ_WQE = 1,
};

/* ******************** SQ_CTRL ******************** */
#define HINIC3_TX_MSS_DEFAULT 0x3E00
#define HINIC3_TX_MSS_MIN     0x50
#define HINIC3_MAX_SQ_SGE     18

struct hinic3_io_queue {
        struct hinic3_wq wq;
        u8  owner;
        u16 q_id;
        u16 msix_entry_idx;
        u8 __iomem *db_addr;
        u16 *cons_idx_addr;
} ____cacheline_aligned;

static inline u16 hinic3_get_sq_local_ci(const struct hinic3_io_queue *sq)
{
        const struct hinic3_wq *wq = &sq->wq;

        return wq->cons_idx & wq->idx_mask;
}

static inline u16 hinic3_get_sq_local_pi(const struct hinic3_io_queue *sq)
{
        const struct hinic3_wq *wq = &sq->wq;

        return wq->prod_idx & wq->idx_mask;
}

static inline u16 hinic3_get_sq_hw_ci(const struct hinic3_io_queue *sq)
{
        const struct hinic3_wq *wq = &sq->wq;

        return READ_ONCE(*sq->cons_idx_addr) & wq->idx_mask;
}

/* ******************** DB INFO ******************** */
#define DB_INFO_QID_MASK   GENMASK(12, 0)
#define DB_INFO_CFLAG_MASK BIT(23)
#define DB_INFO_COS_MASK   GENMASK(26, 24)
#define DB_INFO_TYPE_MASK  GENMASK(31, 27)
#define DB_INFO_SET(val, member) \
        FIELD_PREP(DB_INFO_##member##_MASK, val)

#define DB_PI_LOW_MASK  0xFFU
#define DB_PI_HIGH_MASK 0xFFU
#define DB_PI_HI_SHIFT  8
#define DB_PI_LOW(pi)   ((pi) & DB_PI_LOW_MASK)
#define DB_PI_HIGH(pi)  (((pi) >> DB_PI_HI_SHIFT) & DB_PI_HIGH_MASK)
#define DB_ADDR(q, pi)  ((u64 __iomem *)((q)->db_addr) + DB_PI_LOW(pi))
#define DB_SRC_TYPE     1

/* CFLAG_DATA_PATH */
#define DB_CFLAG_DP_SQ 0
#define DB_CFLAG_DP_RQ 1

struct hinic3_nic_db {
        u32 db_info;
        u32 pi_hi;
};

static inline void hinic3_write_db(struct hinic3_io_queue *queue, int cos,
                                   u8 cflag, u16 pi)
{
        struct hinic3_nic_db db;

        db.db_info = DB_INFO_SET(DB_SRC_TYPE, TYPE) |
                     DB_INFO_SET(cflag, CFLAG) |
                     DB_INFO_SET(cos, COS) |
                     DB_INFO_SET(queue->q_id, QID);
        db.pi_hi = DB_PI_HIGH(pi);

        writeq(*((u64 *)&db), DB_ADDR(queue, pi));
}

struct hinic3_nic_io {
        struct hinic3_io_queue *sq;
        struct hinic3_io_queue *rq;

        u16 num_qps;
        u16 max_qps;

        /* Base address for consumer index of all tx queues. Each queue is
         * given a full cache line to hold its consumer index. HW updates
         * current consumer index as it consumes tx WQEs.
         */
        void *ci_vaddr_base;
        dma_addr_t ci_dma_base;

        u8 __iomem *sqs_db_addr;
        u8 __iomem *rqs_db_addr;

        u16 rx_buf_len;
        u64 feature_cap;
};

int hinic3_init_nic_io(struct hinic3_nic_dev *nic_dev);
void hinic3_free_nic_io(struct hinic3_nic_dev *nic_dev);

#endif
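The `DB_PI_LOW()`/`DB_PI_HIGH()` macros split the 16-bit producer index in two: the low byte selects the doorbell slot within the mapped doorbell page, while the high byte is carried in the `pi_hi` field of the doorbell record. A minimal userspace sketch of that split (helper names here are illustrative, not driver API):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-ins for the kernel's DB_PI_LOW()/DB_PI_HIGH():
 * the 16-bit producer index is split into an 8-bit low part (used to
 * pick the doorbell address) and an 8-bit high part (written into the
 * pi_hi field of the doorbell record). */
static uint32_t db_pi_low(uint16_t pi)
{
        return pi & 0xFFU;
}

static uint32_t db_pi_high(uint16_t pi)
{
        return (pi >> 8) & 0xFFU;
}
```

Together the two parts reconstruct the full index, so hardware sees the complete producer position even though only one byte is encoded in the address.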
+68
drivers/net/ethernet/huawei/hinic3/hinic3_queue_common.c
// SPDX-License-Identifier: GPL-2.0
// Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved.

#include <linux/device.h>

#include "hinic3_hwdev.h"
#include "hinic3_queue_common.h"

void hinic3_queue_pages_init(struct hinic3_queue_pages *qpages, u32 q_depth,
                             u32 page_size, u32 elem_size)
{
        u32 elem_per_page;

        elem_per_page = min(page_size / elem_size, q_depth);

        qpages->pages = NULL;
        qpages->page_size = page_size;
        qpages->num_pages = max(q_depth / elem_per_page, 1);
        qpages->elem_size_shift = ilog2(elem_size);
        qpages->elem_per_pg_shift = ilog2(elem_per_page);
}

static void __queue_pages_free(struct hinic3_hwdev *hwdev,
                               struct hinic3_queue_pages *qpages, u32 pg_cnt)
{
        while (pg_cnt > 0) {
                pg_cnt--;
                hinic3_dma_free_coherent_align(hwdev->dev,
                                               qpages->pages + pg_cnt);
        }
        kfree(qpages->pages);
        qpages->pages = NULL;
}

void hinic3_queue_pages_free(struct hinic3_hwdev *hwdev,
                             struct hinic3_queue_pages *qpages)
{
        __queue_pages_free(hwdev, qpages, qpages->num_pages);
}

int hinic3_queue_pages_alloc(struct hinic3_hwdev *hwdev,
                             struct hinic3_queue_pages *qpages, u32 align)
{
        u32 pg_idx;
        int err;

        qpages->pages = kcalloc(qpages->num_pages, sizeof(qpages->pages[0]),
                                GFP_KERNEL);
        if (!qpages->pages)
                return -ENOMEM;

        if (align == 0)
                align = qpages->page_size;

        for (pg_idx = 0; pg_idx < qpages->num_pages; pg_idx++) {
                err = hinic3_dma_zalloc_coherent_align(hwdev->dev,
                                                       qpages->page_size,
                                                       align,
                                                       GFP_KERNEL,
                                                       qpages->pages + pg_idx);
                if (err) {
                        __queue_pages_free(hwdev, qpages, pg_idx);
                        return err;
                }
        }

        return 0;
}
+54
drivers/net/ethernet/huawei/hinic3/hinic3_queue_common.h
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved. */

#ifndef _HINIC3_QUEUE_COMMON_H_
#define _HINIC3_QUEUE_COMMON_H_

#include <linux/types.h>

#include "hinic3_common.h"

struct hinic3_hwdev;

struct hinic3_queue_pages {
        /* Array of DMA-able pages that actually holds the queue entries. */
        struct hinic3_dma_addr_align *pages;
        /* Page size in bytes. */
        u32 page_size;
        /* Number of pages, must be power of 2. */
        u16 num_pages;
        u8 elem_size_shift;
        u8 elem_per_pg_shift;
};

void hinic3_queue_pages_init(struct hinic3_queue_pages *qpages, u32 q_depth,
                             u32 page_size, u32 elem_size);
int hinic3_queue_pages_alloc(struct hinic3_hwdev *hwdev,
                             struct hinic3_queue_pages *qpages, u32 align);
void hinic3_queue_pages_free(struct hinic3_hwdev *hwdev,
                             struct hinic3_queue_pages *qpages);

/* Get pointer to queue entry at the specified index. Index does not have to be
 * masked to queue depth, only least significant bits will be used. Also
 * provides remaining elements in same page (including the first one) in case
 * caller needs multiple entries.
 */
static inline void *get_q_element(const struct hinic3_queue_pages *qpages,
                                  u32 idx, u32 *remaining_in_page)
{
        const struct hinic3_dma_addr_align *page;
        u32 page_idx, elem_idx, elem_per_pg, ofs;
        u8 shift;

        shift = qpages->elem_per_pg_shift;
        page_idx = (idx >> shift) & (qpages->num_pages - 1);
        elem_per_pg = 1 << shift;
        elem_idx = idx & (elem_per_pg - 1);
        if (remaining_in_page)
                *remaining_in_page = elem_per_pg - elem_idx;
        ofs = elem_idx << qpages->elem_size_shift;
        page = qpages->pages + page_idx;
        return (char *)page->align_vaddr + ofs;
}

#endif
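Because both `num_pages` and the per-page element count are powers of two, `get_q_element()` can locate an entry with shifts and masks alone, and an unmasked index simply wraps. A standalone model of that index math (names here are illustrative, not driver API):

```c
#include <assert.h>
#include <stdint.h>

/* Userspace model of get_q_element()'s index arithmetic. Both the page
 * count and elements-per-page are powers of two, so the page index and
 * in-page offset fall out of shifts and masks; any extra high bits of
 * idx wrap naturally. */
struct qpages_geom {
        uint16_t num_pages;         /* power of 2 */
        uint8_t  elem_per_pg_shift; /* log2(elements per page) */
};

static void locate_elem(const struct qpages_geom *g, uint32_t idx,
                        uint32_t *page_idx, uint32_t *remaining)
{
        uint32_t elem_per_pg = 1U << g->elem_per_pg_shift;
        uint32_t elem_idx = idx & (elem_per_pg - 1);

        *page_idx = (idx >> g->elem_per_pg_shift) & (g->num_pages - 1);
        *remaining = elem_per_pg - elem_idx;
}
```

With 4 pages of 8 elements, index 10 lands in page 1 at element 2, leaving 6 entries (including that one) before the page boundary, which is what callers needing multi-WQEBB entries check.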
+341
drivers/net/ethernet/huawei/hinic3/hinic3_rx.c
// SPDX-License-Identifier: GPL-2.0
// Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved.

#include <linux/etherdevice.h>
#include <linux/if_vlan.h>
#include <linux/netdevice.h>
#include <net/gro.h>
#include <net/page_pool/helpers.h>

#include "hinic3_hwdev.h"
#include "hinic3_nic_dev.h"
#include "hinic3_nic_io.h"
#include "hinic3_rx.h"

#define HINIC3_RX_HDR_SIZE     256
#define HINIC3_RX_BUFFER_WRITE 16

#define HINIC3_RX_TCP_PKT  0x3
#define HINIC3_RX_UDP_PKT  0x4
#define HINIC3_RX_SCTP_PKT 0x7

#define HINIC3_RX_IPV4_PKT        0
#define HINIC3_RX_IPV6_PKT        1
#define HINIC3_RX_INVALID_IP_TYPE 2

#define HINIC3_RX_PKT_FORMAT_NON_TUNNEL 0
#define HINIC3_RX_PKT_FORMAT_VXLAN      1

#define HINIC3_LRO_PKT_HDR_LEN_IPV4 66
#define HINIC3_LRO_PKT_HDR_LEN_IPV6 86
#define HINIC3_LRO_PKT_HDR_LEN(cqe) \
        (RQ_CQE_OFFOLAD_TYPE_GET((cqe)->offload_type, IP_TYPE) == \
         HINIC3_RX_IPV6_PKT ? HINIC3_LRO_PKT_HDR_LEN_IPV6 : \
         HINIC3_LRO_PKT_HDR_LEN_IPV4)

int hinic3_alloc_rxqs(struct net_device *netdev)
{
        /* Completed by later submission due to LoC limit. */
        return -EFAULT;
}

void hinic3_free_rxqs(struct net_device *netdev)
{
        /* Completed by later submission due to LoC limit. */
}

static int rx_alloc_mapped_page(struct page_pool *page_pool,
                                struct hinic3_rx_info *rx_info, u16 buf_len)
{
        struct page *page;
        u32 page_offset;

        page = page_pool_dev_alloc_frag(page_pool, &page_offset, buf_len);
        if (unlikely(!page))
                return -ENOMEM;

        rx_info->page = page;
        rx_info->page_offset = page_offset;

        return 0;
}

static void rq_wqe_buf_set(struct hinic3_io_queue *rq, uint32_t wqe_idx,
                           dma_addr_t dma_addr, u16 len)
{
        struct hinic3_rq_wqe *rq_wqe;

        rq_wqe = get_q_element(&rq->wq.qpages, wqe_idx, NULL);
        rq_wqe->buf_hi_addr = upper_32_bits(dma_addr);
        rq_wqe->buf_lo_addr = lower_32_bits(dma_addr);
}

static u32 hinic3_rx_fill_buffers(struct hinic3_rxq *rxq)
{
        u32 i, free_wqebbs = rxq->delta - 1;
        struct hinic3_rx_info *rx_info;
        dma_addr_t dma_addr;
        int err;

        for (i = 0; i < free_wqebbs; i++) {
                rx_info = &rxq->rx_info[rxq->next_to_update];

                err = rx_alloc_mapped_page(rxq->page_pool, rx_info,
                                           rxq->buf_len);
                if (unlikely(err))
                        break;

                dma_addr = page_pool_get_dma_addr(rx_info->page) +
                           rx_info->page_offset;
                rq_wqe_buf_set(rxq->rq, rxq->next_to_update, dma_addr,
                               rxq->buf_len);
                rxq->next_to_update = (rxq->next_to_update + 1) & rxq->q_mask;
        }

        if (likely(i)) {
                hinic3_write_db(rxq->rq, rxq->q_id & 3, DB_CFLAG_DP_RQ,
                                rxq->next_to_update << HINIC3_NORMAL_RQ_WQE);
                rxq->delta -= i;
                rxq->next_to_alloc = rxq->next_to_update;
        }

        return i;
}

static void hinic3_add_rx_frag(struct hinic3_rxq *rxq,
                               struct hinic3_rx_info *rx_info,
                               struct sk_buff *skb, u32 size)
{
        struct page *page;
        u8 *va;

        page = rx_info->page;
        va = (u8 *)page_address(page) + rx_info->page_offset;
        net_prefetch(va);

        page_pool_dma_sync_for_cpu(rxq->page_pool, page, rx_info->page_offset,
                                   rxq->buf_len);

        if (size <= HINIC3_RX_HDR_SIZE && !skb_is_nonlinear(skb)) {
                memcpy(__skb_put(skb, size), va,
                       ALIGN(size, sizeof(long)));
                page_pool_put_full_page(rxq->page_pool, page, false);

                return;
        }

        skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page,
                        rx_info->page_offset, size, rxq->buf_len);
        skb_mark_for_recycle(skb);
}

static void packaging_skb(struct hinic3_rxq *rxq, struct sk_buff *skb,
                          u32 sge_num, u32 pkt_len)
{
        struct hinic3_rx_info *rx_info;
        u32 temp_pkt_len = pkt_len;
        u32 temp_sge_num = sge_num;
        u32 sw_ci;
        u32 size;

        sw_ci = rxq->cons_idx & rxq->q_mask;
        while (temp_sge_num) {
                rx_info = &rxq->rx_info[sw_ci];
                sw_ci = (sw_ci + 1) & rxq->q_mask;
                if (unlikely(temp_pkt_len > rxq->buf_len)) {
                        size = rxq->buf_len;
                        temp_pkt_len -= rxq->buf_len;
                } else {
                        size = temp_pkt_len;
                }

                hinic3_add_rx_frag(rxq, rx_info, skb, size);

                /* clear contents of buffer_info */
                rx_info->page = NULL;
                temp_sge_num--;
        }
}

static u32 hinic3_get_sge_num(struct hinic3_rxq *rxq, u32 pkt_len)
{
        u32 sge_num;

        sge_num = pkt_len >> rxq->buf_len_shift;
        sge_num += (pkt_len & (rxq->buf_len - 1)) ? 1 : 0;

        return sge_num;
}

static struct sk_buff *hinic3_fetch_rx_buffer(struct hinic3_rxq *rxq,
                                              u32 pkt_len)
{
        struct sk_buff *skb;
        u32 sge_num;

        skb = napi_alloc_skb(&rxq->irq_cfg->napi, HINIC3_RX_HDR_SIZE);
        if (unlikely(!skb))
                return NULL;

        sge_num = hinic3_get_sge_num(rxq, pkt_len);

        net_prefetchw(skb->data);
        packaging_skb(rxq, skb, sge_num, pkt_len);

        rxq->cons_idx += sge_num;
        rxq->delta += sge_num;

        return skb;
}

static void hinic3_pull_tail(struct sk_buff *skb)
{
        skb_frag_t *frag = &skb_shinfo(skb)->frags[0];
        unsigned int pull_len;
        unsigned char *va;

        va = skb_frag_address(frag);

        /* we need the header to contain the greater of either ETH_HLEN or
         * 60 bytes if the skb->len is less than 60 for skb_pad.
         */
        pull_len = eth_get_headlen(skb->dev, va, HINIC3_RX_HDR_SIZE);

        /* align pull length to size of long to optimize memcpy performance */
        skb_copy_to_linear_data(skb, va, ALIGN(pull_len, sizeof(long)));

        /* update all of the pointers */
        skb_frag_size_sub(frag, pull_len);
        skb_frag_off_add(frag, pull_len);

        skb->data_len -= pull_len;
        skb->tail += pull_len;
}

static void hinic3_rx_csum(struct hinic3_rxq *rxq, u32 offload_type,
                           u32 status, struct sk_buff *skb)
{
        u32 pkt_fmt = RQ_CQE_OFFOLAD_TYPE_GET(offload_type, TUNNEL_PKT_FORMAT);
        u32 pkt_type = RQ_CQE_OFFOLAD_TYPE_GET(offload_type, PKT_TYPE);
        u32 ip_type = RQ_CQE_OFFOLAD_TYPE_GET(offload_type, IP_TYPE);
        u32 csum_err = RQ_CQE_STATUS_GET(status, CSUM_ERR);
        struct net_device *netdev = rxq->netdev;

        if (!(netdev->features & NETIF_F_RXCSUM))
                return;

        if (unlikely(csum_err)) {
                /* pkt type is recognized by HW, and csum is wrong */
                skb->ip_summed = CHECKSUM_NONE;
                return;
        }

        if (ip_type == HINIC3_RX_INVALID_IP_TYPE ||
            !(pkt_fmt == HINIC3_RX_PKT_FORMAT_NON_TUNNEL ||
              pkt_fmt == HINIC3_RX_PKT_FORMAT_VXLAN)) {
                skb->ip_summed = CHECKSUM_NONE;
                return;
        }

        switch (pkt_type) {
        case HINIC3_RX_TCP_PKT:
        case HINIC3_RX_UDP_PKT:
        case HINIC3_RX_SCTP_PKT:
                skb->ip_summed = CHECKSUM_UNNECESSARY;
                break;
        default:
                skb->ip_summed = CHECKSUM_NONE;
                break;
        }
}

static void hinic3_lro_set_gso_params(struct sk_buff *skb, u16 num_lro)
{
        struct ethhdr *eth = (struct ethhdr *)(skb->data);
        __be16 proto;

        proto = __vlan_get_protocol(skb, eth->h_proto, NULL);

        skb_shinfo(skb)->gso_size = DIV_ROUND_UP(skb->len - skb_headlen(skb),
                                                 num_lro);
        skb_shinfo(skb)->gso_type = proto == htons(ETH_P_IP) ?
                                    SKB_GSO_TCPV4 : SKB_GSO_TCPV6;
        skb_shinfo(skb)->gso_segs = num_lro;
}

static int recv_one_pkt(struct hinic3_rxq *rxq, struct hinic3_rq_cqe *rx_cqe,
                        u32 pkt_len, u32 vlan_len, u32 status)
{
        struct net_device *netdev = rxq->netdev;
        struct sk_buff *skb;
        u32 offload_type;
        u16 num_lro;

        skb = hinic3_fetch_rx_buffer(rxq, pkt_len);
        if (unlikely(!skb))
                return -ENOMEM;

        /* place header in linear portion of buffer */
        if (skb_is_nonlinear(skb))
                hinic3_pull_tail(skb);

        offload_type = rx_cqe->offload_type;
        hinic3_rx_csum(rxq, offload_type, status, skb);

        num_lro = RQ_CQE_STATUS_GET(status, NUM_LRO);
        if (num_lro)
                hinic3_lro_set_gso_params(skb, num_lro);

        skb_record_rx_queue(skb, rxq->q_id);
        skb->protocol = eth_type_trans(skb, netdev);

        if (skb_has_frag_list(skb)) {
                napi_gro_flush(&rxq->irq_cfg->napi, false);
                netif_receive_skb(skb);
        } else {
                napi_gro_receive(&rxq->irq_cfg->napi, skb);
        }

        return 0;
}

int hinic3_rx_poll(struct hinic3_rxq *rxq, int budget)
{
        struct hinic3_nic_dev *nic_dev = netdev_priv(rxq->netdev);
        u32 sw_ci, status, pkt_len, vlan_len;
        struct hinic3_rq_cqe *rx_cqe;
        u32 num_wqe = 0;
        int nr_pkts = 0;
        u16 num_lro;

        while (likely(nr_pkts < budget)) {
                sw_ci = rxq->cons_idx & rxq->q_mask;
                rx_cqe = rxq->cqe_arr + sw_ci;
                status = rx_cqe->status;
                if (!RQ_CQE_STATUS_GET(status, RXDONE))
                        break;

                /* make sure we read rx_done before packet length */
                rmb();

                vlan_len = rx_cqe->vlan_len;
                pkt_len = RQ_CQE_SGE_GET(vlan_len, LEN);
                if (recv_one_pkt(rxq, rx_cqe, pkt_len, vlan_len, status))
                        break;

                nr_pkts++;
                num_lro = RQ_CQE_STATUS_GET(status, NUM_LRO);
                if (num_lro)
                        num_wqe += hinic3_get_sge_num(rxq, pkt_len);

                rx_cqe->status = 0;

                if (num_wqe >= nic_dev->lro_replenish_thld)
                        break;
        }

        if (rxq->delta >= HINIC3_RX_BUFFER_WRITE)
                hinic3_rx_fill_buffers(rxq);

        return nr_pkts;
}
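`hinic3_get_sge_num()` is a ceiling division of the packet length by the RX buffer size, done with a shift plus a remainder test since `buf_len` is a power of two. A standalone sketch of the same arithmetic (function name is illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Model of hinic3_get_sge_num(): number of rx buffers (SGEs) a packet
 * of pkt_len bytes spans, where the buffer size is 1 << buf_len_shift.
 * Equivalent to ceil(pkt_len / buf_len) without a division. */
static uint32_t sge_num_for(uint32_t pkt_len, uint32_t buf_len_shift)
{
        uint32_t buf_len = 1U << buf_len_shift;
        uint32_t n = pkt_len >> buf_len_shift;

        return n + ((pkt_len & (buf_len - 1)) ? 1 : 0);
}
```

For a 9000-byte jumbo frame and 2 KB buffers this yields 5 SGEs, which is how many rx_info entries `packaging_skb()` consumes for that packet.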
+90
drivers/net/ethernet/huawei/hinic3/hinic3_rx.h
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved. */

#ifndef _HINIC3_RX_H_
#define _HINIC3_RX_H_

#include <linux/bitfield.h>
#include <linux/netdevice.h>

#define RQ_CQE_OFFOLAD_TYPE_PKT_TYPE_MASK          GENMASK(4, 0)
#define RQ_CQE_OFFOLAD_TYPE_IP_TYPE_MASK           GENMASK(6, 5)
#define RQ_CQE_OFFOLAD_TYPE_TUNNEL_PKT_FORMAT_MASK GENMASK(11, 8)
#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_MASK           BIT(21)
#define RQ_CQE_OFFOLAD_TYPE_GET(val, member) \
        FIELD_GET(RQ_CQE_OFFOLAD_TYPE_##member##_MASK, val)

#define RQ_CQE_SGE_VLAN_MASK GENMASK(15, 0)
#define RQ_CQE_SGE_LEN_MASK  GENMASK(31, 16)
#define RQ_CQE_SGE_GET(val, member) \
        FIELD_GET(RQ_CQE_SGE_##member##_MASK, val)

#define RQ_CQE_STATUS_CSUM_ERR_MASK GENMASK(15, 0)
#define RQ_CQE_STATUS_NUM_LRO_MASK  GENMASK(23, 16)
#define RQ_CQE_STATUS_RXDONE_MASK   BIT(31)
#define RQ_CQE_STATUS_GET(val, member) \
        FIELD_GET(RQ_CQE_STATUS_##member##_MASK, val)

/* RX Completion information that is provided by HW for a specific RX WQE */
struct hinic3_rq_cqe {
        u32 status;
        u32 vlan_len;
        u32 offload_type;
        u32 rsvd3;
        u32 rsvd4;
        u32 rsvd5;
        u32 rsvd6;
        u32 pkt_info;
};

struct hinic3_rq_wqe {
        u32 buf_hi_addr;
        u32 buf_lo_addr;
        u32 cqe_hi_addr;
        u32 cqe_lo_addr;
};

struct hinic3_rx_info {
        struct page *page;
        u32 page_offset;
};

struct hinic3_rxq {
        struct net_device *netdev;

        u16 q_id;
        u32 q_depth;
        u32 q_mask;

        u16 buf_len;
        u32 buf_len_shift;

        u32 cons_idx;
        u32 delta;

        u32 irq_id;
        u16 msix_entry_idx;

        /* cqe_arr and rx_info are arrays of rq_depth elements. Each element is
         * statically associated (by index) to a specific rq_wqe.
         */
        struct hinic3_rq_cqe *cqe_arr;
        struct hinic3_rx_info *rx_info;
        struct page_pool *page_pool;

        struct hinic3_io_queue *rq;

        struct hinic3_irq_cfg *irq_cfg;
        u16 next_to_alloc;
        u16 next_to_update;
        struct device *dev; /* device for DMA mapping */

        dma_addr_t cqe_start_paddr;
} ____cacheline_aligned;

int hinic3_alloc_rxqs(struct net_device *netdev);
void hinic3_free_rxqs(struct net_device *netdev);

int hinic3_rx_poll(struct hinic3_rxq *rxq, int budget);

#endif
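The `RQ_CQE_STATUS_*` masks pack three facts into one 32-bit CQE status word: the checksum error flags in bits 15:0, the LRO segment count in bits 23:16, and the RXDONE flag in bit 31. A userspace model of the extraction, using plain shifts and masks in place of the kernel's `FIELD_GET()` (macro names here are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative equivalents of the RQ_CQE_STATUS_GET() accessors:
 * CSUM_ERR occupies bits 15:0, NUM_LRO bits 23:16, RXDONE bit 31
 * of the CQE status word. */
#define CQE_STATUS_CSUM_ERR(s) ((s) & 0xFFFFU)
#define CQE_STATUS_NUM_LRO(s)  (((s) >> 16) & 0xFFU)
#define CQE_STATUS_RXDONE(s)   (((s) >> 31) & 0x1U)
```

A status word of `0x80050003` thus decodes as: descriptor done, 5 LRO-coalesced segments, and checksum error flags `0x3`, matching how `hinic3_rx_poll()` and `hinic3_rx_csum()` consume the field.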
+670
drivers/net/ethernet/huawei/hinic3/hinic3_tx.c
// SPDX-License-Identifier: GPL-2.0
// Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved.

#include <linux/if_vlan.h>
#include <linux/iopoll.h>
#include <net/ip6_checksum.h>
#include <net/ipv6.h>
#include <net/netdev_queues.h>

#include "hinic3_hwdev.h"
#include "hinic3_nic_cfg.h"
#include "hinic3_nic_dev.h"
#include "hinic3_nic_io.h"
#include "hinic3_tx.h"
#include "hinic3_wq.h"

#define MIN_SKB_LEN 32

int hinic3_alloc_txqs(struct net_device *netdev)
{
        struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
        struct hinic3_hwdev *hwdev = nic_dev->hwdev;
        u16 q_id, num_txqs = nic_dev->max_qps;
        struct pci_dev *pdev = nic_dev->pdev;
        struct hinic3_txq *txq;

        if (!num_txqs) {
                dev_err(hwdev->dev, "Cannot allocate zero size txqs\n");
                return -EINVAL;
        }

        nic_dev->txqs = kcalloc(num_txqs, sizeof(*nic_dev->txqs), GFP_KERNEL);
        if (!nic_dev->txqs)
                return -ENOMEM;

        for (q_id = 0; q_id < num_txqs; q_id++) {
                txq = &nic_dev->txqs[q_id];
                txq->netdev = netdev;
                txq->q_id = q_id;
                txq->q_depth = nic_dev->q_params.sq_depth;
                txq->q_mask = nic_dev->q_params.sq_depth - 1;
                txq->dev = &pdev->dev;
        }

        return 0;
}

void hinic3_free_txqs(struct net_device *netdev)
{
        struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);

        kfree(nic_dev->txqs);
}

static void hinic3_set_buf_desc(struct hinic3_sq_bufdesc *buf_descs,
                                dma_addr_t addr, u32 len)
{
        buf_descs->hi_addr = upper_32_bits(addr);
        buf_descs->lo_addr = lower_32_bits(addr);
        buf_descs->len = len;
}

static int hinic3_tx_map_skb(struct net_device *netdev, struct sk_buff *skb,
                             struct hinic3_txq *txq,
                             struct hinic3_tx_info *tx_info,
                             struct hinic3_sq_wqe_combo *wqe_combo)
{
        struct hinic3_sq_wqe_desc *wqe_desc = wqe_combo->ctrl_bd0;
        struct hinic3_sq_bufdesc *buf_desc = wqe_combo->bds_head;
        struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
        struct hinic3_dma_info *dma_info = tx_info->dma_info;
        struct pci_dev *pdev = nic_dev->pdev;
        skb_frag_t *frag;
        u32 i, idx;
        int err;

        dma_info[0].dma = dma_map_single(&pdev->dev, skb->data,
                                         skb_headlen(skb), DMA_TO_DEVICE);
        if (dma_mapping_error(&pdev->dev, dma_info[0].dma))
                return -EFAULT;

        dma_info[0].len = skb_headlen(skb);

        wqe_desc->hi_addr = upper_32_bits(dma_info[0].dma);
        wqe_desc->lo_addr = lower_32_bits(dma_info[0].dma);

        wqe_desc->ctrl_len = dma_info[0].len;

        for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
                frag = &(skb_shinfo(skb)->frags[i]);
                if (unlikely(i == wqe_combo->first_bds_num))
                        buf_desc = wqe_combo->bds_sec2;

                idx = i + 1;
                dma_info[idx].dma = skb_frag_dma_map(&pdev->dev, frag, 0,
                                                     skb_frag_size(frag),
                                                     DMA_TO_DEVICE);
                if (dma_mapping_error(&pdev->dev, dma_info[idx].dma)) {
                        err = -EFAULT;
                        goto err_unmap_page;
                }
                dma_info[idx].len = skb_frag_size(frag);

                hinic3_set_buf_desc(buf_desc, dma_info[idx].dma,
                                    dma_info[idx].len);
                buf_desc++;
        }

        return 0;

err_unmap_page:
        while (idx > 1) {
                idx--;
                dma_unmap_page(&pdev->dev, dma_info[idx].dma,
                               dma_info[idx].len, DMA_TO_DEVICE);
        }
        dma_unmap_single(&pdev->dev, dma_info[0].dma, dma_info[0].len,
                         DMA_TO_DEVICE);
        return err;
}

static void hinic3_tx_unmap_skb(struct net_device *netdev,
                                struct sk_buff *skb,
                                struct hinic3_dma_info *dma_info)
{
        struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
        struct pci_dev *pdev = nic_dev->pdev;
        int i;

        for (i = 0; i < skb_shinfo(skb)->nr_frags;) {
                i++;
                dma_unmap_page(&pdev->dev,
                               dma_info[i].dma,
                               dma_info[i].len, DMA_TO_DEVICE);
        }

        dma_unmap_single(&pdev->dev, dma_info[0].dma,
                         dma_info[0].len, DMA_TO_DEVICE);
}

union hinic3_ip {
        struct iphdr *v4;
        struct ipv6hdr *v6;
        unsigned char *hdr;
};

union hinic3_l4 {
        struct tcphdr *tcp;
        struct udphdr *udp;
        unsigned char *hdr;
};

enum hinic3_l3_type {
        HINIC3_L3_UNKNOWN         = 0,
        HINIC3_L3_IP6_PKT         = 1,
        HINIC3_L3_IP4_PKT_NO_CSUM = 2,
        HINIC3_L3_IP4_PKT_CSUM    = 3,
};

enum hinic3_l4_offload_type {
        HINIC3_L4_OFFLOAD_DISABLE = 0,
        HINIC3_L4_OFFLOAD_TCP     = 1,
        HINIC3_L4_OFFLOAD_STCP    = 2,
        HINIC3_L4_OFFLOAD_UDP     = 3,
};

/* initialize l4 offset and offload */
static void get_inner_l4_info(struct sk_buff *skb, union hinic3_l4 *l4,
                              u8 l4_proto, u32 *offset,
                              enum hinic3_l4_offload_type *l4_offload)
{
        switch (l4_proto) {
        case IPPROTO_TCP:
                *l4_offload = HINIC3_L4_OFFLOAD_TCP;
                /* To be same with TSO, payload offset begins from payload */
                *offset = (l4->tcp->doff << TCP_HDR_DATA_OFF_UNIT_SHIFT) +
                          TRANSPORT_OFFSET(l4->hdr, skb);
                break;

        case IPPROTO_UDP:
                *l4_offload = HINIC3_L4_OFFLOAD_UDP;
                *offset = TRANSPORT_OFFSET(l4->hdr, skb);
                break;
        default:
                *l4_offload = HINIC3_L4_OFFLOAD_DISABLE;
                *offset = 0;
        }
}

static int hinic3_tx_csum(struct hinic3_txq *txq, struct hinic3_sq_task *task,
                          struct sk_buff *skb)
{
        if (skb->ip_summed != CHECKSUM_PARTIAL)
                return 0;

        if (skb->encapsulation) {
                union hinic3_ip ip;
                u8 l4_proto;

                task->pkt_info0 |= SQ_TASK_INFO0_SET(1, TUNNEL_FLAG);

                ip.hdr = skb_network_header(skb);
                if (ip.v4->version == 4) {
                        l4_proto = ip.v4->protocol;
                } else if (ip.v4->version == 6) {
                        union hinic3_l4 l4;
                        unsigned char *exthdr;
                        __be16 frag_off;

                        exthdr = ip.hdr + sizeof(*ip.v6);
                        l4_proto = ip.v6->nexthdr;
                        l4.hdr = skb_transport_header(skb);
                        if (l4.hdr != exthdr)
                                ipv6_skip_exthdr(skb, exthdr - skb->data,
                                                 &l4_proto, &frag_off);
                } else {
                        l4_proto = IPPROTO_RAW;
                }

                if (l4_proto != IPPROTO_UDP ||
                    ((struct udphdr *)skb_transport_header(skb))->dest !=
                    VXLAN_OFFLOAD_PORT_LE) {
                        /* Unsupported tunnel packet, disable csum offload */
                        skb_checksum_help(skb);
                        return 0;
                }
        }

        task->pkt_info0 |= SQ_TASK_INFO0_SET(1, INNER_L4_EN);

        return 1;
}

static void get_inner_l3_l4_type(struct sk_buff *skb, union hinic3_ip *ip,
                                 union hinic3_l4 *l4,
                                 enum hinic3_l3_type *l3_type, u8 *l4_proto)
{
        unsigned char *exthdr;
        __be16 frag_off;

        if (ip->v4->version == 4) {
                *l3_type = HINIC3_L3_IP4_PKT_CSUM;
                *l4_proto = ip->v4->protocol;
        } else if (ip->v4->version == 6) {
                *l3_type = HINIC3_L3_IP6_PKT;
                exthdr = ip->hdr + sizeof(*ip->v6);
                *l4_proto = ip->v6->nexthdr;
                if (exthdr != l4->hdr) {
                        ipv6_skip_exthdr(skb, exthdr - skb->data,
                                         l4_proto, &frag_off);
                }
        } else {
                *l3_type = HINIC3_L3_UNKNOWN;
                *l4_proto = 0;
        }
}

static void hinic3_set_tso_info(struct hinic3_sq_task *task, u32 *queue_info,
                                enum hinic3_l4_offload_type l4_offload,
                                u32 offset, u32 mss)
{
        if (l4_offload == HINIC3_L4_OFFLOAD_TCP) {
                *queue_info |= SQ_CTRL_QUEUE_INFO_SET(1, TSO);
                task->pkt_info0 |= SQ_TASK_INFO0_SET(1, INNER_L4_EN);
        } else if (l4_offload == HINIC3_L4_OFFLOAD_UDP) {
                *queue_info |= SQ_CTRL_QUEUE_INFO_SET(1, UFO);
                task->pkt_info0 |= SQ_TASK_INFO0_SET(1, INNER_L4_EN);
        }

        /* enable L3 calculation */
        task->pkt_info0 |= SQ_TASK_INFO0_SET(1, INNER_L3_EN);

        *queue_info |= SQ_CTRL_QUEUE_INFO_SET(offset >> 1, PLDOFF);

        /* set MSS value */
        *queue_info &= ~SQ_CTRL_QUEUE_INFO_MSS_MASK;
        *queue_info |= SQ_CTRL_QUEUE_INFO_SET(mss, MSS);
}

static __sum16 csum_magic(union hinic3_ip *ip, unsigned short proto)
{
        return (ip->v4->version == 4) ?
               csum_tcpudp_magic(ip->v4->saddr, ip->v4->daddr, 0, proto, 0) :
               csum_ipv6_magic(&ip->v6->saddr, &ip->v6->daddr, 0, proto, 0);
}

static int hinic3_tso(struct hinic3_sq_task *task, u32 *queue_info,
                      struct sk_buff *skb)
{
        enum hinic3_l4_offload_type l4_offload;
        enum hinic3_l3_type l3_type;
        union hinic3_ip ip;
        union hinic3_l4 l4;
        u8 l4_proto;
        u32 offset;
        int err;

        if (!skb_is_gso(skb))
                return 0;

        err = skb_cow_head(skb, 0);
        if (err < 0)
                return err;

        if (skb->encapsulation) {
                u32 gso_type = skb_shinfo(skb)->gso_type;
                /* L3 checksum is always enabled */
                task->pkt_info0 |= SQ_TASK_INFO0_SET(1, OUT_L3_EN);
                task->pkt_info0 |= SQ_TASK_INFO0_SET(1, TUNNEL_FLAG);

                l4.hdr = skb_transport_header(skb);
                ip.hdr = skb_network_header(skb);

                if (gso_type & SKB_GSO_UDP_TUNNEL_CSUM) {
                        l4.udp->check = ~csum_magic(&ip, IPPROTO_UDP);
                        task->pkt_info0 |= SQ_TASK_INFO0_SET(1, OUT_L4_EN);
                }

                ip.hdr = skb_inner_network_header(skb);
                l4.hdr = skb_inner_transport_header(skb);
        } else {
                ip.hdr = skb_network_header(skb);
                l4.hdr = skb_transport_header(skb);
        }

        get_inner_l3_l4_type(skb, &ip, &l4, &l3_type, &l4_proto);

        if (l4_proto == IPPROTO_TCP)
                l4.tcp->check = ~csum_magic(&ip, IPPROTO_TCP);

        get_inner_l4_info(skb, &l4, l4_proto, &offset, &l4_offload);

        hinic3_set_tso_info(task, queue_info, l4_offload, offset,
                            skb_shinfo(skb)->gso_size);

        return 1;
}

static void hinic3_set_vlan_tx_offload(struct hinic3_sq_task *task,
                                       u16 vlan_tag, u8 vlan_tpid)
{
        /* vlan_tpid: 0=select TPID0 in IPSU, 1=select TPID1 in IPSU
         * 2=select TPID2 in IPSU, 3=select TPID3 in IPSU,
         * 4=select TPID4 in IPSU
         */
        task->vlan_offload = SQ_TASK_INFO3_SET(vlan_tag, VLAN_TAG) |
                             SQ_TASK_INFO3_SET(vlan_tpid, VLAN_TPID) |
                             SQ_TASK_INFO3_SET(1, VLAN_TAG_VALID);
}

static u32 hinic3_tx_offload(struct sk_buff *skb, struct hinic3_sq_task *task,
                             u32 *queue_info, struct hinic3_txq *txq)
{
        u32 offload = 0;
        int tso_cs_en;

        task->pkt_info0 = 0;
        task->ip_identify = 0;
        task->rsvd = 0;
        task->vlan_offload = 0;

        tso_cs_en = hinic3_tso(task, queue_info, skb);
        if (tso_cs_en < 0) {
                offload = HINIC3_TX_OFFLOAD_INVALID;
                return offload;
        } else if (tso_cs_en) {
                offload |= HINIC3_TX_OFFLOAD_TSO;
        } else {
                tso_cs_en = hinic3_tx_csum(txq, task, skb);
                if (tso_cs_en)
                        offload |= HINIC3_TX_OFFLOAD_CSUM;
        }

#define VLAN_INSERT_MODE_MAX 5
        if (unlikely(skb_vlan_tag_present(skb))) {
                /* select vlan insert mode by qid, default 802.1Q Tag type */
                hinic3_set_vlan_tx_offload(task, skb_vlan_tag_get(skb),
                                           txq->q_id % VLAN_INSERT_MODE_MAX);
                offload |= HINIC3_TX_OFFLOAD_VLAN;
        }

        if (unlikely(SQ_CTRL_QUEUE_INFO_GET(*queue_info, PLDOFF) >
                     SQ_CTRL_MAX_PLDOFF)) {
                offload = HINIC3_TX_OFFLOAD_INVALID;
                return offload;
        }

        return offload;
}

static u16 hinic3_get_and_update_sq_owner(struct hinic3_io_queue *sq,
                                          u16 curr_pi, u16 wqebb_cnt)
{
        u16 owner = sq->owner;

        if (unlikely(curr_pi + wqebb_cnt >= sq->wq.q_depth))
                sq->owner = !sq->owner;

        return owner;
}

static u16 hinic3_set_wqe_combo(struct hinic3_txq *txq,
                                struct hinic3_sq_wqe_combo *wqe_combo,
                                u32 offload, u16 num_sge, u16 *curr_pi)
{
        struct hinic3_sq_bufdesc *first_part_wqebbs, *second_part_wqebbs;
        u16 first_part_wqebbs_num, tmp_pi;

        wqe_combo->ctrl_bd0 = hinic3_wq_get_one_wqebb(&txq->sq->wq, curr_pi);
        if (!offload && num_sge == 1) {
                wqe_combo->wqe_type = SQ_WQE_COMPACT_TYPE;
                return hinic3_get_and_update_sq_owner(txq->sq, *curr_pi, 1);
        }

        wqe_combo->wqe_type = SQ_WQE_EXTENDED_TYPE;

        if (offload) {
                wqe_combo->task = hinic3_wq_get_one_wqebb(&txq->sq->wq,
                                                          &tmp_pi);
                wqe_combo->task_type = SQ_WQE_TASKSECT_16BYTES;
        } else {
                wqe_combo->task_type = SQ_WQE_TASKSECT_46BITS;
        }

        if (num_sge > 1) {
                /* first wqebb contain bd0, and bd size is equal to sq wqebb
                 * size, so we use (num_sge - 1) as wanted weqbb_cnt
                 */
                hinic3_wq_get_multi_wqebbs(&txq->sq->wq, num_sge - 1, &tmp_pi,
                                           &first_part_wqebbs,
                                           &second_part_wqebbs,
                                           &first_part_wqebbs_num);
                wqe_combo->bds_head = first_part_wqebbs;
                wqe_combo->bds_sec2 = second_part_wqebbs;
                wqe_combo->first_bds_num = first_part_wqebbs_num;
        }

        return hinic3_get_and_update_sq_owner(txq->sq, *curr_pi,
                                              num_sge + !!offload);
}

static void hinic3_prepare_sq_ctrl(struct hinic3_sq_wqe_combo *wqe_combo,
                                   u32 queue_info, int nr_descs, u16 owner)
{
        struct hinic3_sq_wqe_desc *wqe_desc = wqe_combo->ctrl_bd0;

        if (wqe_combo->wqe_type == SQ_WQE_COMPACT_TYPE) {
                wqe_desc->ctrl_len |=
                    SQ_CTRL_SET(SQ_NORMAL_WQE, DATA_FORMAT) |
                    SQ_CTRL_SET(wqe_combo->wqe_type, EXTENDED) |
                    SQ_CTRL_SET(owner, OWNER);

                /* compact wqe queue_info will transfer to chip */
                wqe_desc->queue_info = 0;
                return;
        }

        wqe_desc->ctrl_len |= SQ_CTRL_SET(nr_descs, BUFDESC_NUM) |
                              SQ_CTRL_SET(wqe_combo->task_type, TASKSECT_LEN) |
                              SQ_CTRL_SET(SQ_NORMAL_WQE, DATA_FORMAT) |
                              SQ_CTRL_SET(wqe_combo->wqe_type, EXTENDED) |
                              SQ_CTRL_SET(owner, OWNER);

        wqe_desc->queue_info = queue_info;
        wqe_desc->queue_info |= SQ_CTRL_QUEUE_INFO_SET(1, UC);

        if (!SQ_CTRL_QUEUE_INFO_GET(wqe_desc->queue_info, MSS)) {
                wqe_desc->queue_info |=
                    SQ_CTRL_QUEUE_INFO_SET(HINIC3_TX_MSS_DEFAULT, MSS);
        } else if (SQ_CTRL_QUEUE_INFO_GET(wqe_desc->queue_info, MSS) <
                   HINIC3_TX_MSS_MIN) {
                /* mss should not be less than 80 */
                wqe_desc->queue_info &= ~SQ_CTRL_QUEUE_INFO_MSS_MASK;
                wqe_desc->queue_info |=
                    SQ_CTRL_QUEUE_INFO_SET(HINIC3_TX_MSS_MIN, MSS);
        }
}

static netdev_tx_t hinic3_send_one_skb(struct sk_buff *skb,
                                       struct net_device *netdev,
                                       struct hinic3_txq *txq)
{
        struct hinic3_sq_wqe_combo wqe_combo = {};
        struct hinic3_tx_info *tx_info;
        struct hinic3_txq *tx_q = txq;
        u32 offload, queue_info = 0;
        struct hinic3_sq_task task;
        u16 wqebb_cnt, num_sge;
        u16 saved_wq_prod_idx;
        u16 owner, pi = 0;
        u8 saved_sq_owner;
        int err;

        if (unlikely(skb->len < MIN_SKB_LEN)) {
                if (skb_pad(skb, MIN_SKB_LEN - skb->len))
                        goto err_out;

                skb->len = MIN_SKB_LEN;
        }

        num_sge = skb_shinfo(skb)->nr_frags + 1;
        /* assume normal wqe format + 1 wqebb for task info */
        wqebb_cnt = num_sge + 1;

        if (unlikely(hinic3_wq_free_wqebbs(&txq->sq->wq) < wqebb_cnt)) {
                if (likely(wqebb_cnt > txq->tx_stop_thrs))
                        txq->tx_stop_thrs = min(wqebb_cnt, txq->tx_start_thrs);

                netif_subqueue_try_stop(netdev, tx_q->sq->q_id,
                                        hinic3_wq_free_wqebbs(&tx_q->sq->wq),
                                        tx_q->tx_start_thrs);

                return NETDEV_TX_BUSY;
        }

        offload = hinic3_tx_offload(skb, &task, &queue_info, txq);
        if (unlikely(offload == HINIC3_TX_OFFLOAD_INVALID)) {
                goto err_drop_pkt;
        } else if
(!offload) { 520 + wqebb_cnt -= 1; 521 + if (unlikely(num_sge == 1 && 522 + skb->len > HINIC3_COMPACT_WQEE_SKB_MAX_LEN)) 523 + goto err_drop_pkt; 524 + } 525 + 526 + saved_wq_prod_idx = txq->sq->wq.prod_idx; 527 + saved_sq_owner = txq->sq->owner; 528 + 529 + owner = hinic3_set_wqe_combo(txq, &wqe_combo, offload, num_sge, &pi); 530 + if (offload) 531 + *wqe_combo.task = task; 532 + 533 + tx_info = &txq->tx_info[pi]; 534 + tx_info->skb = skb; 535 + tx_info->wqebb_cnt = wqebb_cnt; 536 + 537 + err = hinic3_tx_map_skb(netdev, skb, txq, tx_info, &wqe_combo); 538 + if (err) { 539 + /* Rollback work queue to reclaim the wqebb we did not use */ 540 + txq->sq->wq.prod_idx = saved_wq_prod_idx; 541 + txq->sq->owner = saved_sq_owner; 542 + goto err_drop_pkt; 543 + } 544 + 545 + netdev_tx_sent_queue(netdev_get_tx_queue(netdev, txq->sq->q_id), 546 + skb->len); 547 + netif_subqueue_maybe_stop(netdev, tx_q->sq->q_id, 548 + hinic3_wq_free_wqebbs(&tx_q->sq->wq), 549 + tx_q->tx_stop_thrs, 550 + tx_q->tx_start_thrs); 551 + 552 + hinic3_prepare_sq_ctrl(&wqe_combo, queue_info, num_sge, owner); 553 + hinic3_write_db(txq->sq, 0, DB_CFLAG_DP_SQ, 554 + hinic3_get_sq_local_pi(txq->sq)); 555 + 556 + return NETDEV_TX_OK; 557 + 558 + err_drop_pkt: 559 + dev_kfree_skb_any(skb); 560 + 561 + err_out: 562 + return NETDEV_TX_OK; 563 + } 564 + 565 + netdev_tx_t hinic3_xmit_frame(struct sk_buff *skb, struct net_device *netdev) 566 + { 567 + struct hinic3_nic_dev *nic_dev = netdev_priv(netdev); 568 + u16 q_id = skb_get_queue_mapping(skb); 569 + 570 + if (unlikely(!netif_carrier_ok(netdev))) 571 + goto err_drop_pkt; 572 + 573 + if (unlikely(q_id >= nic_dev->q_params.num_qps)) 574 + goto err_drop_pkt; 575 + 576 + return hinic3_send_one_skb(skb, netdev, &nic_dev->txqs[q_id]); 577 + 578 + err_drop_pkt: 579 + dev_kfree_skb_any(skb); 580 + return NETDEV_TX_OK; 581 + } 582 + 583 + static bool is_hw_complete_sq_process(struct hinic3_io_queue *sq) 584 + { 585 + u16 sw_pi, hw_ci; 586 + 587 + sw_pi = 
hinic3_get_sq_local_pi(sq); 588 + hw_ci = hinic3_get_sq_hw_ci(sq); 589 + 590 + return sw_pi == hw_ci; 591 + } 592 + 593 + #define HINIC3_FLUSH_QUEUE_POLL_SLEEP_US 10000 594 + #define HINIC3_FLUSH_QUEUE_POLL_TIMEOUT_US 10000000 595 + static int hinic3_stop_sq(struct hinic3_txq *txq) 596 + { 597 + struct hinic3_nic_dev *nic_dev = netdev_priv(txq->netdev); 598 + int err, rc; 599 + 600 + err = read_poll_timeout(hinic3_force_drop_tx_pkt, rc, 601 + is_hw_complete_sq_process(txq->sq) || rc, 602 + HINIC3_FLUSH_QUEUE_POLL_SLEEP_US, 603 + HINIC3_FLUSH_QUEUE_POLL_TIMEOUT_US, 604 + true, nic_dev->hwdev); 605 + if (rc) 606 + return rc; 607 + else 608 + return err; 609 + } 610 + 611 + /* packet transmission should be stopped before calling this function */ 612 + void hinic3_flush_txqs(struct net_device *netdev) 613 + { 614 + struct hinic3_nic_dev *nic_dev = netdev_priv(netdev); 615 + u16 qid; 616 + int err; 617 + 618 + for (qid = 0; qid < nic_dev->q_params.num_qps; qid++) { 619 + err = hinic3_stop_sq(&nic_dev->txqs[qid]); 620 + netdev_tx_reset_subqueue(netdev, qid); 621 + if (err) 622 + netdev_err(netdev, "Failed to stop sq%u\n", qid); 623 + } 624 + } 625 + 626 + #define HINIC3_BDS_PER_SQ_WQEBB \ 627 + (HINIC3_SQ_WQEBB_SIZE / sizeof(struct hinic3_sq_bufdesc)) 628 + 629 + bool hinic3_tx_poll(struct hinic3_txq *txq, int budget) 630 + { 631 + struct net_device *netdev = txq->netdev; 632 + u16 hw_ci, sw_ci, q_id = txq->sq->q_id; 633 + struct hinic3_tx_info *tx_info; 634 + struct hinic3_txq *tx_q = txq; 635 + unsigned int bytes_compl = 0; 636 + unsigned int pkts = 0; 637 + u16 wqebb_cnt = 0; 638 + 639 + hw_ci = hinic3_get_sq_hw_ci(txq->sq); 640 + dma_rmb(); 641 + sw_ci = hinic3_get_sq_local_ci(txq->sq); 642 + 643 + do { 644 + tx_info = &txq->tx_info[sw_ci]; 645 + 646 + /* Did all wqebb of this wqe complete? 
*/ 647 + if (hw_ci == sw_ci || 648 + ((hw_ci - sw_ci) & txq->q_mask) < tx_info->wqebb_cnt) 649 + break; 650 + 651 + sw_ci = (sw_ci + tx_info->wqebb_cnt) & txq->q_mask; 652 + net_prefetch(&txq->tx_info[sw_ci]); 653 + 654 + wqebb_cnt += tx_info->wqebb_cnt; 655 + bytes_compl += tx_info->skb->len; 656 + pkts++; 657 + 658 + hinic3_tx_unmap_skb(netdev, tx_info->skb, tx_info->dma_info); 659 + napi_consume_skb(tx_info->skb, budget); 660 + tx_info->skb = NULL; 661 + } while (likely(pkts < HINIC3_TX_POLL_WEIGHT)); 662 + 663 + hinic3_wq_put_wqebbs(&txq->sq->wq, wqebb_cnt); 664 + 665 + netif_subqueue_completed_wake(netdev, q_id, pkts, bytes_compl, 666 + hinic3_wq_free_wqebbs(&tx_q->sq->wq), 667 + tx_q->tx_start_thrs); 668 + 669 + return pkts == HINIC3_TX_POLL_WEIGHT; 670 + }
+135
drivers/net/ethernet/huawei/hinic3/hinic3_tx.h
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved. */

#ifndef _HINIC3_TX_H_
#define _HINIC3_TX_H_

#include <linux/bitops.h>
#include <linux/ip.h>
#include <linux/ipv6.h>
#include <linux/netdevice.h>
#include <net/checksum.h>

#define VXLAN_OFFLOAD_PORT_LE        cpu_to_be16(4789)
#define TCP_HDR_DATA_OFF_UNIT_SHIFT  2
#define TRANSPORT_OFFSET(l4_hdr, skb) ((l4_hdr) - (skb)->data)

#define HINIC3_COMPACT_WQEE_SKB_MAX_LEN  16383
#define HINIC3_TX_POLL_WEIGHT            64
#define HINIC3_DEFAULT_STOP_THRS         6
#define HINIC3_DEFAULT_START_THRS        24

enum sq_wqe_data_format {
	SQ_NORMAL_WQE = 0,
};

enum sq_wqe_ec_type {
	SQ_WQE_COMPACT_TYPE  = 0,
	SQ_WQE_EXTENDED_TYPE = 1,
};

enum sq_wqe_tasksect_len_type {
	SQ_WQE_TASKSECT_46BITS  = 0,
	SQ_WQE_TASKSECT_16BYTES = 1,
};

enum hinic3_tx_offload_type {
	HINIC3_TX_OFFLOAD_TSO     = BIT(0),
	HINIC3_TX_OFFLOAD_CSUM    = BIT(1),
	HINIC3_TX_OFFLOAD_VLAN    = BIT(2),
	HINIC3_TX_OFFLOAD_INVALID = BIT(3),
	HINIC3_TX_OFFLOAD_ESP     = BIT(4),
};

#define SQ_CTRL_BUFDESC_NUM_MASK   GENMASK(26, 19)
#define SQ_CTRL_TASKSECT_LEN_MASK  BIT(27)
#define SQ_CTRL_DATA_FORMAT_MASK   BIT(28)
#define SQ_CTRL_EXTENDED_MASK      BIT(30)
#define SQ_CTRL_OWNER_MASK         BIT(31)
#define SQ_CTRL_SET(val, member) \
	FIELD_PREP(SQ_CTRL_##member##_MASK, val)

#define SQ_CTRL_QUEUE_INFO_PLDOFF_MASK  GENMASK(9, 2)
#define SQ_CTRL_QUEUE_INFO_UFO_MASK     BIT(10)
#define SQ_CTRL_QUEUE_INFO_TSO_MASK     BIT(11)
#define SQ_CTRL_QUEUE_INFO_MSS_MASK     GENMASK(26, 13)
#define SQ_CTRL_QUEUE_INFO_UC_MASK      BIT(28)

#define SQ_CTRL_QUEUE_INFO_SET(val, member) \
	FIELD_PREP(SQ_CTRL_QUEUE_INFO_##member##_MASK, val)
#define SQ_CTRL_QUEUE_INFO_GET(val, member) \
	FIELD_GET(SQ_CTRL_QUEUE_INFO_##member##_MASK, val)

#define SQ_CTRL_MAX_PLDOFF  221

#define SQ_TASK_INFO0_TUNNEL_FLAG_MASK  BIT(19)
#define SQ_TASK_INFO0_INNER_L4_EN_MASK  BIT(24)
#define SQ_TASK_INFO0_INNER_L3_EN_MASK  BIT(25)
#define SQ_TASK_INFO0_OUT_L4_EN_MASK    BIT(27)
#define SQ_TASK_INFO0_OUT_L3_EN_MASK    BIT(28)
#define SQ_TASK_INFO0_SET(val, member) \
	FIELD_PREP(SQ_TASK_INFO0_##member##_MASK, val)

#define SQ_TASK_INFO3_VLAN_TAG_MASK        GENMASK(15, 0)
#define SQ_TASK_INFO3_VLAN_TPID_MASK       GENMASK(18, 16)
#define SQ_TASK_INFO3_VLAN_TAG_VALID_MASK  BIT(19)
#define SQ_TASK_INFO3_SET(val, member) \
	FIELD_PREP(SQ_TASK_INFO3_##member##_MASK, val)

struct hinic3_sq_wqe_desc {
	u32 ctrl_len;
	u32 queue_info;
	u32 hi_addr;
	u32 lo_addr;
};

struct hinic3_sq_task {
	u32 pkt_info0;
	u32 ip_identify;
	u32 rsvd;
	u32 vlan_offload;
};

struct hinic3_sq_wqe_combo {
	struct hinic3_sq_wqe_desc *ctrl_bd0;
	struct hinic3_sq_task *task;
	struct hinic3_sq_bufdesc *bds_head;
	struct hinic3_sq_bufdesc *bds_sec2;
	u16 first_bds_num;
	u32 wqe_type;
	u32 task_type;
};

struct hinic3_dma_info {
	dma_addr_t dma;
	u32 len;
};

struct hinic3_tx_info {
	struct sk_buff *skb;
	u16 wqebb_cnt;
	struct hinic3_dma_info *dma_info;
};

struct hinic3_txq {
	struct net_device *netdev;
	struct device *dev;

	u16 q_id;
	u16 tx_stop_thrs;
	u16 tx_start_thrs;
	u32 q_mask;
	u32 q_depth;

	struct hinic3_tx_info *tx_info;
	struct hinic3_io_queue *sq;
} ____cacheline_aligned;

int hinic3_alloc_txqs(struct net_device *netdev);
void hinic3_free_txqs(struct net_device *netdev);

netdev_tx_t hinic3_xmit_frame(struct sk_buff *skb, struct net_device *netdev);
bool hinic3_tx_poll(struct hinic3_txq *txq, int budget);
void hinic3_flush_txqs(struct net_device *netdev);

#endif
+29
drivers/net/ethernet/huawei/hinic3/hinic3_wq.c
// SPDX-License-Identifier: GPL-2.0
// Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved.

#include <linux/dma-mapping.h>

#include "hinic3_hwdev.h"
#include "hinic3_wq.h"

void hinic3_wq_get_multi_wqebbs(struct hinic3_wq *wq,
				u16 num_wqebbs, u16 *prod_idx,
				struct hinic3_sq_bufdesc **first_part_wqebbs,
				struct hinic3_sq_bufdesc **second_part_wqebbs,
				u16 *first_part_wqebbs_num)
{
	u32 idx, remaining;

	idx = wq->prod_idx & wq->idx_mask;
	wq->prod_idx += num_wqebbs;
	*prod_idx = idx;
	*first_part_wqebbs = get_q_element(&wq->qpages, idx, &remaining);
	if (likely(remaining >= num_wqebbs)) {
		*first_part_wqebbs_num = num_wqebbs;
		*second_part_wqebbs = NULL;
	} else {
		*first_part_wqebbs_num = remaining;
		idx += remaining;
		*second_part_wqebbs = get_q_element(&wq->qpages, idx, NULL);
	}
}
+76
drivers/net/ethernet/huawei/hinic3/hinic3_wq.h
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved. */

#ifndef _HINIC3_WQ_H_
#define _HINIC3_WQ_H_

#include <linux/io.h>

#include "hinic3_queue_common.h"

struct hinic3_sq_bufdesc {
	/* 31-bit length, L2NIC only uses length[17:0] */
	u32 len;
	u32 rsvd;
	u32 hi_addr;
	u32 lo_addr;
};

/* Work queue is used to submit elements (tx, rx, cmd) to hw.
 * Driver is the producer that advances prod_idx. cons_idx is advanced when
 * HW reports completions of previously submitted elements.
 */
struct hinic3_wq {
	struct hinic3_queue_pages qpages;
	/* Unmasked producer/consumer indices that are advanced to natural
	 * integer overflow regardless of queue depth.
	 */
	u16 cons_idx;
	u16 prod_idx;

	u32 q_depth;
	u16 idx_mask;

	/* Work Queue (logical WQEBB array) is mapped to hw via Chip Logical
	 * Address (CLA) using 1 of 2 levels:
	 * level 0 - direct mapping of single wq page
	 * level 1 - indirect mapping of multiple pages via additional page
	 *           table.
	 * When wq uses level 1, wq_block will hold the allocated indirection
	 * table.
	 */
	dma_addr_t wq_block_paddr;
	__be64 *wq_block_vaddr;
} ____cacheline_aligned;

/* Get number of elements in work queue that are in-use. */
static inline u16 hinic3_wq_get_used(const struct hinic3_wq *wq)
{
	return READ_ONCE(wq->prod_idx) - READ_ONCE(wq->cons_idx);
}

static inline u16 hinic3_wq_free_wqebbs(struct hinic3_wq *wq)
{
	/* Don't allow queue to become completely full, report (free - 1). */
	return wq->q_depth - hinic3_wq_get_used(wq) - 1;
}

static inline void *hinic3_wq_get_one_wqebb(struct hinic3_wq *wq, u16 *pi)
{
	*pi = wq->prod_idx & wq->idx_mask;
	wq->prod_idx++;
	return get_q_element(&wq->qpages, *pi, NULL);
}

static inline void hinic3_wq_put_wqebbs(struct hinic3_wq *wq, u16 num_wqebbs)
{
	wq->cons_idx += num_wqebbs;
}

void hinic3_wq_get_multi_wqebbs(struct hinic3_wq *wq,
				u16 num_wqebbs, u16 *prod_idx,
				struct hinic3_sq_bufdesc **first_part_wqebbs,
				struct hinic3_sq_bufdesc **second_part_wqebbs,
				u16 *first_part_wqebbs_num);

#endif