Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'hns3-vf'

Salil Mehta says:

====================
Hisilicon Network Subsystem 3 VF Ethernet Driver

This patch set adds support for the HNS3 (Hisilicon Network Subsystem 3)
Virtual Function Ethernet driver for the hip08 family of SoCs. The Physical
Function driver is already part of the Linux mainline.

The VF driver has its own Hardware Compatibility Layer and shares common,
unified ENET layer/client/ethtool code with the PF driver. It also supports a
mailbox for communicating with the HNS3 PF driver. The basic architecture of
the VF driver is derived from the PF driver. Like the PF driver, this driver
is also PCI Express based.

This driver is under ongoing development; the HNS3 VF Ethernet driver will be
incrementally enhanced with new features.

High Level Architecture:

[ Ethtool ]
|
[ Ethernet Client ] ... [ RoCE Client ]
| |
[ HNAE Device ] |________
| | |
--------------------------------------------- |
|
[ HNAE3 Framework (Register/unregister) ] |
|
--------------------------------------------- |
| |
[ VF HCLGE Layer ] |
| | |
| | |
| | |
| [ VF Mailbox (To PF via IMP) ] |
| | |
[ IMP command Interface ] [ IMP command Interface ]
| |
| |
(A B O V E R U N S O N G U E S T S Y S T E M)
-------------------------------------------------------------
Q E M U / V F I O / K V M (on Host System)
-------------------------------------------------------------
HIP08 H A R D W A R E (limited to VF by SMMU)

[ IMP/Mgmt Processor (hardware common to system/cmd based) ]

Fig 1. HNS3 Virtual Function Driver

[ dcbnl ] [ Ethtool ]
| |
[ Ethernet Client ] [ ODP/UIO Client ] . . .[ RoCE Client ]
|_____________________| |
| _________|
[ HNAE Device ] | |
| | |
--------------------------------------------- |
|
[ HNAE3 Framework (Register/unregister) ] |
|
--------------------------------------------- |
| |
[ HCLGE Layer ] |
________________|_________________ |
| | | |
[ DCB ] | | |
| | | |
[ Scheduler/Shaper ] [ MDIO ] [ PF Mailbox ] |
| | | |
|________________|_________________| |
| |
[ IMP command Interface ] [ IMP command Interface ]
----------------------------------------------------------------
HIP08 H A R D W A R E

[ IMP/Mgmt Processor (hardware common to system/cmd based) ]

Fig 2. Existing HNS3 PF Driver (added with mailbox)

Change Log Summary:
Patch V4: Addressed an SPDX-related comment from Philippe Ombredanne
Patch V3: Addressed the SPDX change requested by Philippe Ombredanne
Patch V2: 1. Addressed some comments from David Miller.
          2. Addressed some internal comments on various patches.
Patch V1: Initial submission
====================

Acked-by: Philippe Ombredanne <pombredanne@nexb.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

+3132 -122
+19 -9
drivers/net/ethernet/hisilicon/Kconfig
··· 94 94 compatibility layer. The engine would be used in Hisilicon hip08 family of 95 95 SoCs and further upcoming SoCs. 96 96 97 - config HNS3_ENET 98 - tristate "Hisilicon HNS3 Ethernet Device Support" 99 - depends on 64BIT && PCI 100 - depends on HNS3 && HNS3_HCLGE 101 - ---help--- 102 - This selects the Ethernet Driver for Hisilicon Network Subsystem 3 for hip08 103 - family of SoCs. This module depends upon HNAE3 driver to access the HNAE3 104 - devices and their associated operations. 105 - 106 97 config HNS3_DCB 107 98 bool "Hisilicon HNS3 Data Center Bridge Support" 108 99 default n ··· 102 111 Say Y here if you want to use Data Center Bridging (DCB) in the HNS3 driver. 103 112 104 113 If unsure, say N. 114 + 115 + config HNS3_HCLGEVF 116 + tristate "Hisilicon HNS3VF Acceleration Engine & Compatibility Layer Support" 117 + depends on PCI_MSI 118 + depends on HNS3 119 + depends on HNS3_HCLGE 120 + ---help--- 121 + This selects the HNS3 VF drivers network acceleration engine & its hardware 122 + compatibility layer. The engine would be used in Hisilicon hip08 family of 123 + SoCs and further upcoming SoCs. 124 + 125 + config HNS3_ENET 126 + tristate "Hisilicon HNS3 Ethernet Device Support" 127 + depends on 64BIT && PCI 128 + depends on HNS3 129 + ---help--- 130 + This selects the Ethernet Driver for Hisilicon Network Subsystem 3 for hip08 131 + family of SoCs. This module depends upon HNAE3 driver to access the HNAE3 132 + devices and their associated operations. 105 133 106 134 endif # NET_VENDOR_HISILICON
+7
drivers/net/ethernet/hisilicon/hns3/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0+ 1 2 # 2 3 # Makefile for the HISILICON network device drivers. 3 4 # 4 5 5 6 obj-$(CONFIG_HNS3) += hns3pf/ 7 + obj-$(CONFIG_HNS3) += hns3vf/ 6 8 7 9 obj-$(CONFIG_HNS3) += hnae3.o 10 + 11 + obj-$(CONFIG_HNS3_ENET) += hns3.o 12 + hns3-objs = hns3_enet.o hns3_ethtool.o 13 + 14 + hns3-$(CONFIG_HNS3_DCB) += hns3_dcbnl.o
+88
drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0+ */ 2 + /* Copyright (c) 2016-2017 Hisilicon Limited. */ 3 + 4 + #ifndef __HCLGE_MBX_H 5 + #define __HCLGE_MBX_H 6 + #include <linux/init.h> 7 + #include <linux/mutex.h> 8 + #include <linux/types.h> 9 + 10 + #define HCLGE_MBX_VF_MSG_DATA_NUM 16 11 + 12 + enum HCLGE_MBX_OPCODE { 13 + HCLGE_MBX_RESET = 0x01, /* (VF -> PF) assert reset */ 14 + HCLGE_MBX_SET_UNICAST, /* (VF -> PF) set UC addr */ 15 + HCLGE_MBX_SET_MULTICAST, /* (VF -> PF) set MC addr */ 16 + HCLGE_MBX_SET_VLAN, /* (VF -> PF) set VLAN */ 17 + HCLGE_MBX_MAP_RING_TO_VECTOR, /* (VF -> PF) map ring-to-vector */ 18 + HCLGE_MBX_UNMAP_RING_TO_VECTOR, /* (VF -> PF) unamp ring-to-vector */ 19 + HCLGE_MBX_SET_PROMISC_MODE, /* (VF -> PF) set promiscuous mode */ 20 + HCLGE_MBX_SET_MACVLAN, /* (VF -> PF) set unicast filter */ 21 + HCLGE_MBX_API_NEGOTIATE, /* (VF -> PF) negotiate API version */ 22 + HCLGE_MBX_GET_QINFO, /* (VF -> PF) get queue config */ 23 + HCLGE_MBX_GET_TCINFO, /* (VF -> PF) get TC config */ 24 + HCLGE_MBX_GET_RETA, /* (VF -> PF) get RETA */ 25 + HCLGE_MBX_GET_RSS_KEY, /* (VF -> PF) get RSS key */ 26 + HCLGE_MBX_GET_MAC_ADDR, /* (VF -> PF) get MAC addr */ 27 + HCLGE_MBX_PF_VF_RESP, /* (PF -> VF) generate respone to VF */ 28 + HCLGE_MBX_GET_BDNUM, /* (VF -> PF) get BD num */ 29 + HCLGE_MBX_GET_BUFSIZE, /* (VF -> PF) get buffer size */ 30 + HCLGE_MBX_GET_STREAMID, /* (VF -> PF) get stream id */ 31 + HCLGE_MBX_SET_AESTART, /* (VF -> PF) start ae */ 32 + HCLGE_MBX_SET_TSOSTATS, /* (VF -> PF) get tso stats */ 33 + HCLGE_MBX_LINK_STAT_CHANGE, /* (PF -> VF) link status has changed */ 34 + HCLGE_MBX_GET_BASE_CONFIG, /* (VF -> PF) get config */ 35 + HCLGE_MBX_BIND_FUNC_QUEUE, /* (VF -> PF) bind function and queue */ 36 + HCLGE_MBX_GET_LINK_STATUS, /* (VF -> PF) get link status */ 37 + HCLGE_MBX_QUEUE_RESET, /* (VF -> PF) reset queue */ 38 + }; 39 + 40 + /* below are per-VF mac-vlan subcodes */ 41 + enum hclge_mbx_mac_vlan_subcode { 42 + 
HCLGE_MBX_MAC_VLAN_UC_MODIFY = 0, /* modify UC mac addr */ 43 + HCLGE_MBX_MAC_VLAN_UC_ADD, /* add a new UC mac addr */ 44 + HCLGE_MBX_MAC_VLAN_UC_REMOVE, /* remove a new UC mac addr */ 45 + HCLGE_MBX_MAC_VLAN_MC_MODIFY, /* modify MC mac addr */ 46 + HCLGE_MBX_MAC_VLAN_MC_ADD, /* add new MC mac addr */ 47 + HCLGE_MBX_MAC_VLAN_MC_REMOVE, /* remove MC mac addr */ 48 + HCLGE_MBX_MAC_VLAN_MC_FUNC_MTA_ENABLE, /* config func MTA enable */ 49 + }; 50 + 51 + /* below are per-VF vlan cfg subcodes */ 52 + enum hclge_mbx_vlan_cfg_subcode { 53 + HCLGE_MBX_VLAN_FILTER = 0, /* set vlan filter */ 54 + HCLGE_MBX_VLAN_TX_OFF_CFG, /* set tx side vlan offload */ 55 + HCLGE_MBX_VLAN_RX_OFF_CFG, /* set rx side vlan offload */ 56 + }; 57 + 58 + #define HCLGE_MBX_MAX_MSG_SIZE 16 59 + #define HCLGE_MBX_MAX_RESP_DATA_SIZE 8 60 + 61 + struct hclgevf_mbx_resp_status { 62 + struct mutex mbx_mutex; /* protects against contending sync cmd resp */ 63 + u32 origin_mbx_msg; 64 + bool received_resp; 65 + int resp_status; 66 + u8 additional_info[HCLGE_MBX_MAX_RESP_DATA_SIZE]; 67 + }; 68 + 69 + struct hclge_mbx_vf_to_pf_cmd { 70 + u8 rsv; 71 + u8 mbx_src_vfid; /* Auto filled by IMP */ 72 + u8 rsv1[2]; 73 + u8 msg_len; 74 + u8 rsv2[3]; 75 + u8 msg[HCLGE_MBX_MAX_MSG_SIZE]; 76 + }; 77 + 78 + struct hclge_mbx_pf_to_vf_cmd { 79 + u8 dest_vfid; 80 + u8 rsv[3]; 81 + u8 msg_len; 82 + u8 rsv1[3]; 83 + u16 msg[8]; 84 + }; 85 + 86 + #define hclge_mbx_ring_ptr_move_crq(crq) \ 87 + (crq->next_to_use = (crq->next_to_use + 1) % crq->desc_num) 88 + #endif
+12 -2
drivers/net/ethernet/hisilicon/hns3/hnae3.c
··· 196 196 const struct pci_device_id *id; 197 197 struct hnae3_ae_algo *ae_algo; 198 198 struct hnae3_client *client; 199 - int ret = 0; 199 + int ret = 0, lock_acquired; 200 200 201 - mutex_lock(&hnae3_common_lock); 201 + /* we can get deadlocked if SRIOV is being enabled in context to probe 202 + * and probe gets called again in same context. This can happen when 203 + * pci_enable_sriov() is called to create VFs from PF probes context. 204 + * Therefore, for simplicity uniformly defering further probing in all 205 + * cases where we detect contention. 206 + */ 207 + lock_acquired = mutex_trylock(&hnae3_common_lock); 208 + if (!lock_acquired) 209 + return -EPROBE_DEFER; 210 + 202 211 list_add_tail(&ae_dev->node, &hnae3_ae_dev_list); 203 212 204 213 /* Check if there are matched ae_algo */ ··· 220 211 221 212 if (!ae_dev->ops) { 222 213 dev_err(&ae_dev->pdev->dev, "ae_dev ops are null\n"); 214 + ret = -EOPNOTSUPP; 223 215 goto out_err; 224 216 } 225 217
+4 -3
drivers/net/ethernet/hisilicon/hns3/hnae3.h
··· 452 452 struct hnae3_queue **tqp; /* array base of all TQPs of this instance */ 453 453 }; 454 454 455 - #define HNAE3_SUPPORT_MAC_LOOPBACK 1 456 - #define HNAE3_SUPPORT_PHY_LOOPBACK 2 457 - #define HNAE3_SUPPORT_SERDES_LOOPBACK 4 455 + #define HNAE3_SUPPORT_MAC_LOOPBACK BIT(0) 456 + #define HNAE3_SUPPORT_PHY_LOOPBACK BIT(1) 457 + #define HNAE3_SUPPORT_SERDES_LOOPBACK BIT(2) 458 + #define HNAE3_SUPPORT_VF BIT(3) 458 459 459 460 struct hnae3_handle { 460 461 struct hnae3_client *client;
+2 -6
drivers/net/ethernet/hisilicon/hns3/hns3pf/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0+ 1 2 # 2 3 # Makefile for the HISILICON network device drivers. 3 4 # ··· 6 5 ccflags-y := -Idrivers/net/ethernet/hisilicon/hns3 7 6 8 7 obj-$(CONFIG_HNS3_HCLGE) += hclge.o 9 - hclge-objs = hclge_main.o hclge_cmd.o hclge_mdio.o hclge_tm.o 8 + hclge-objs = hclge_main.o hclge_cmd.o hclge_mdio.o hclge_tm.o hclge_mbx.o 10 9 11 10 hclge-$(CONFIG_HNS3_DCB) += hclge_dcb.o 12 - 13 - obj-$(CONFIG_HNS3_ENET) += hns3.o 14 - hns3-objs = hns3_enet.o hns3_ethtool.o 15 - 16 - hns3-$(CONFIG_HNS3_DCB) += hns3_dcbnl.o
+110 -97
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
··· 21 21 #include "hclge_cmd.h" 22 22 #include "hclge_dcb.h" 23 23 #include "hclge_main.h" 24 + #include "hclge_mbx.h" 24 25 #include "hclge_mdio.h" 25 26 #include "hclge_tm.h" 26 27 #include "hnae3.h" ··· 2227 2226 return hclge_cfg_func_mta_filter(hdev, 0, hdev->accept_mta_mc); 2228 2227 } 2229 2228 2229 + static void hclge_mbx_task_schedule(struct hclge_dev *hdev) 2230 + { 2231 + if (!test_and_set_bit(HCLGE_STATE_MBX_SERVICE_SCHED, &hdev->state)) 2232 + schedule_work(&hdev->mbx_service_task); 2233 + } 2234 + 2230 2235 static void hclge_reset_task_schedule(struct hclge_dev *hdev) 2231 2236 { 2232 2237 if (!test_and_set_bit(HCLGE_STATE_RST_SERVICE_SCHED, &hdev->state)) ··· 2378 2371 static u32 hclge_check_event_cause(struct hclge_dev *hdev, u32 *clearval) 2379 2372 { 2380 2373 u32 rst_src_reg; 2374 + u32 cmdq_src_reg; 2381 2375 2382 2376 /* fetch the events from their corresponding regs */ 2383 2377 rst_src_reg = hclge_read_dev(&hdev->hw, HCLGE_MISC_RESET_STS_REG); 2378 + cmdq_src_reg = hclge_read_dev(&hdev->hw, HCLGE_VECTOR0_CMDQ_SRC_REG); 2379 + 2380 + /* Assumption: If by any chance reset and mailbox events are reported 2381 + * together then we will only process reset event in this go and will 2382 + * defer the processing of the mailbox events. Since, we would have not 2383 + * cleared RX CMDQ event this time we would receive again another 2384 + * interrupt from H/W just for the mailbox. 
2385 + */ 2384 2386 2385 2387 /* check for vector0 reset event sources */ 2386 2388 if (BIT(HCLGE_VECTOR0_GLOBALRESET_INT_B) & rst_src_reg) { ··· 2410 2394 return HCLGE_VECTOR0_EVENT_RST; 2411 2395 } 2412 2396 2413 - /* mailbox event sharing vector 0 interrupt would be placed here */ 2397 + /* check for vector0 mailbox(=CMDQ RX) event source */ 2398 + if (BIT(HCLGE_VECTOR0_RX_CMDQ_INT_B) & cmdq_src_reg) { 2399 + cmdq_src_reg &= ~BIT(HCLGE_VECTOR0_RX_CMDQ_INT_B); 2400 + *clearval = cmdq_src_reg; 2401 + return HCLGE_VECTOR0_EVENT_MBX; 2402 + } 2414 2403 2415 2404 return HCLGE_VECTOR0_EVENT_OTHER; 2416 2405 } ··· 2423 2402 static void hclge_clear_event_cause(struct hclge_dev *hdev, u32 event_type, 2424 2403 u32 regclr) 2425 2404 { 2426 - if (event_type == HCLGE_VECTOR0_EVENT_RST) 2405 + switch (event_type) { 2406 + case HCLGE_VECTOR0_EVENT_RST: 2427 2407 hclge_write_dev(&hdev->hw, HCLGE_MISC_RESET_STS_REG, regclr); 2428 - 2429 - /* mailbox event sharing vector 0 interrupt would be placed here */ 2408 + break; 2409 + case HCLGE_VECTOR0_EVENT_MBX: 2410 + hclge_write_dev(&hdev->hw, HCLGE_VECTOR0_CMDQ_SRC_REG, regclr); 2411 + break; 2412 + } 2430 2413 } 2431 2414 2432 2415 static void hclge_enable_vector(struct hclge_misc_vector *vector, bool enable) ··· 2447 2422 hclge_enable_vector(&hdev->misc_vector, false); 2448 2423 event_cause = hclge_check_event_cause(hdev, &clearval); 2449 2424 2450 - /* vector 0 interrupt is shared with reset and mailbox source events. 2451 - * For now, we are not handling mailbox events. 2452 - */ 2425 + /* vector 0 interrupt is shared with reset and mailbox source events.*/ 2453 2426 switch (event_cause) { 2454 2427 case HCLGE_VECTOR0_EVENT_RST: 2455 2428 hclge_reset_task_schedule(hdev); 2456 2429 break; 2430 + case HCLGE_VECTOR0_EVENT_MBX: 2431 + /* If we are here then, 2432 + * 1. Either we are not handling any mbx task and we are not 2433 + * scheduled as well 2434 + * OR 2435 + * 2. 
We could be handling a mbx task but nothing more is 2436 + * scheduled. 2437 + * In both cases, we should schedule mbx task as there are more 2438 + * mbx messages reported by this interrupt. 2439 + */ 2440 + hclge_mbx_task_schedule(hdev); 2441 + 2457 2442 default: 2458 2443 dev_dbg(&hdev->pdev->dev, 2459 2444 "received unknown or unhandled event of vector0\n"); ··· 2740 2705 hclge_reset_subtask(hdev); 2741 2706 2742 2707 clear_bit(HCLGE_STATE_RST_HANDLING, &hdev->state); 2708 + } 2709 + 2710 + static void hclge_mailbox_service_task(struct work_struct *work) 2711 + { 2712 + struct hclge_dev *hdev = 2713 + container_of(work, struct hclge_dev, mbx_service_task); 2714 + 2715 + if (test_and_set_bit(HCLGE_STATE_MBX_HANDLING, &hdev->state)) 2716 + return; 2717 + 2718 + clear_bit(HCLGE_STATE_MBX_SERVICE_SCHED, &hdev->state); 2719 + 2720 + hclge_mbx_handler(hdev); 2721 + 2722 + clear_bit(HCLGE_STATE_MBX_HANDLING, &hdev->state); 2743 2723 } 2744 2724 2745 2725 static void hclge_service_task(struct work_struct *work) ··· 3305 3255 return ret; 3306 3256 } 3307 3257 3308 - int hclge_map_vport_ring_to_vector(struct hclge_vport *vport, int vector_id, 3309 - struct hnae3_ring_chain_node *ring_chain) 3258 + int hclge_bind_ring_with_vector(struct hclge_vport *vport, 3259 + int vector_id, bool en, 3260 + struct hnae3_ring_chain_node *ring_chain) 3310 3261 { 3311 3262 struct hclge_dev *hdev = vport->back; 3312 - struct hclge_ctrl_vector_chain_cmd *req; 3313 3263 struct hnae3_ring_chain_node *node; 3314 3264 struct hclge_desc desc; 3315 - int ret; 3265 + struct hclge_ctrl_vector_chain_cmd *req 3266 + = (struct hclge_ctrl_vector_chain_cmd *)desc.data; 3267 + enum hclge_cmd_status status; 3268 + enum hclge_opcode_type op; 3269 + u16 tqp_type_and_id; 3316 3270 int i; 3317 3271 3318 - hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_ADD_RING_TO_VECTOR, false); 3319 - 3320 - req = (struct hclge_ctrl_vector_chain_cmd *)desc.data; 3272 + op = en ? 
HCLGE_OPC_ADD_RING_TO_VECTOR : HCLGE_OPC_DEL_RING_TO_VECTOR; 3273 + hclge_cmd_setup_basic_desc(&desc, op, false); 3321 3274 req->int_vector_id = vector_id; 3322 3275 3323 3276 i = 0; 3324 3277 for (node = ring_chain; node; node = node->next) { 3325 - u16 type_and_id = 0; 3326 - 3327 - hnae_set_field(type_and_id, HCLGE_INT_TYPE_M, HCLGE_INT_TYPE_S, 3278 + tqp_type_and_id = le16_to_cpu(req->tqp_type_and_id[i]); 3279 + hnae_set_field(tqp_type_and_id, HCLGE_INT_TYPE_M, 3280 + HCLGE_INT_TYPE_S, 3328 3281 hnae_get_bit(node->flag, HNAE3_RING_TYPE_B)); 3329 - hnae_set_field(type_and_id, HCLGE_TQP_ID_M, HCLGE_TQP_ID_S, 3330 - node->tqp_index); 3331 - hnae_set_field(type_and_id, HCLGE_INT_GL_IDX_M, 3332 - HCLGE_INT_GL_IDX_S, 3333 - hnae_get_bit(node->flag, HNAE3_RING_TYPE_B)); 3334 - req->tqp_type_and_id[i] = cpu_to_le16(type_and_id); 3335 - req->vfid = vport->vport_id; 3336 - 3282 + hnae_set_field(tqp_type_and_id, HCLGE_TQP_ID_M, 3283 + HCLGE_TQP_ID_S, node->tqp_index); 3284 + req->tqp_type_and_id[i] = cpu_to_le16(tqp_type_and_id); 3337 3285 if (++i >= HCLGE_VECTOR_ELEMENTS_PER_CMD) { 3338 3286 req->int_cause_num = HCLGE_VECTOR_ELEMENTS_PER_CMD; 3287 + req->vfid = vport->vport_id; 3339 3288 3340 - ret = hclge_cmd_send(&hdev->hw, &desc, 1); 3341 - if (ret) { 3289 + status = hclge_cmd_send(&hdev->hw, &desc, 1); 3290 + if (status) { 3342 3291 dev_err(&hdev->pdev->dev, 3343 3292 "Map TQP fail, status is %d.\n", 3344 - ret); 3345 - return ret; 3293 + status); 3294 + return -EIO; 3346 3295 } 3347 3296 i = 0; 3348 3297 3349 3298 hclge_cmd_setup_basic_desc(&desc, 3350 - HCLGE_OPC_ADD_RING_TO_VECTOR, 3299 + op, 3351 3300 false); 3352 3301 req->int_vector_id = vector_id; 3353 3302 } ··· 3354 3305 3355 3306 if (i > 0) { 3356 3307 req->int_cause_num = i; 3357 - 3358 - ret = hclge_cmd_send(&hdev->hw, &desc, 1); 3359 - if (ret) { 3308 + req->vfid = vport->vport_id; 3309 + status = hclge_cmd_send(&hdev->hw, &desc, 1); 3310 + if (status) { 3360 3311 dev_err(&hdev->pdev->dev, 3361 - "Map 
TQP fail, status is %d.\n", ret); 3362 - return ret; 3312 + "Map TQP fail, status is %d.\n", status); 3313 + return -EIO; 3363 3314 } 3364 3315 } 3365 3316 3366 3317 return 0; 3367 3318 } 3368 3319 3369 - static int hclge_map_handle_ring_to_vector( 3370 - struct hnae3_handle *handle, int vector, 3371 - struct hnae3_ring_chain_node *ring_chain) 3320 + static int hclge_map_ring_to_vector(struct hnae3_handle *handle, 3321 + int vector, 3322 + struct hnae3_ring_chain_node *ring_chain) 3372 3323 { 3373 3324 struct hclge_vport *vport = hclge_get_vport(handle); 3374 3325 struct hclge_dev *hdev = vport->back; ··· 3377 3328 vector_id = hclge_get_vector_index(hdev, vector); 3378 3329 if (vector_id < 0) { 3379 3330 dev_err(&hdev->pdev->dev, 3380 - "Get vector index fail. ret =%d\n", vector_id); 3331 + "Get vector index fail. vector_id =%d\n", vector_id); 3381 3332 return vector_id; 3382 3333 } 3383 3334 3384 - return hclge_map_vport_ring_to_vector(vport, vector_id, ring_chain); 3335 + return hclge_bind_ring_with_vector(vport, vector_id, true, ring_chain); 3385 3336 } 3386 3337 3387 - static int hclge_unmap_ring_from_vector( 3388 - struct hnae3_handle *handle, int vector, 3389 - struct hnae3_ring_chain_node *ring_chain) 3338 + static int hclge_unmap_ring_frm_vector(struct hnae3_handle *handle, 3339 + int vector, 3340 + struct hnae3_ring_chain_node *ring_chain) 3390 3341 { 3391 3342 struct hclge_vport *vport = hclge_get_vport(handle); 3392 3343 struct hclge_dev *hdev = vport->back; 3393 - struct hclge_ctrl_vector_chain_cmd *req; 3394 - struct hnae3_ring_chain_node *node; 3395 - struct hclge_desc desc; 3396 - int i, vector_id; 3397 - int ret; 3344 + int vector_id, ret; 3398 3345 3399 3346 vector_id = hclge_get_vector_index(hdev, vector); 3400 3347 if (vector_id < 0) { ··· 3399 3354 return vector_id; 3400 3355 } 3401 3356 3402 - hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_DEL_RING_TO_VECTOR, false); 3403 - 3404 - req = (struct hclge_ctrl_vector_chain_cmd *)desc.data; 3405 - 
req->int_vector_id = vector_id; 3406 - 3407 - i = 0; 3408 - for (node = ring_chain; node; node = node->next) { 3409 - u16 type_and_id = 0; 3410 - 3411 - hnae_set_field(type_and_id, HCLGE_INT_TYPE_M, HCLGE_INT_TYPE_S, 3412 - hnae_get_bit(node->flag, HNAE3_RING_TYPE_B)); 3413 - hnae_set_field(type_and_id, HCLGE_TQP_ID_M, HCLGE_TQP_ID_S, 3414 - node->tqp_index); 3415 - hnae_set_field(type_and_id, HCLGE_INT_GL_IDX_M, 3416 - HCLGE_INT_GL_IDX_S, 3417 - hnae_get_bit(node->flag, HNAE3_RING_TYPE_B)); 3418 - 3419 - req->tqp_type_and_id[i] = cpu_to_le16(type_and_id); 3420 - req->vfid = vport->vport_id; 3421 - 3422 - if (++i >= HCLGE_VECTOR_ELEMENTS_PER_CMD) { 3423 - req->int_cause_num = HCLGE_VECTOR_ELEMENTS_PER_CMD; 3424 - 3425 - ret = hclge_cmd_send(&hdev->hw, &desc, 1); 3426 - if (ret) { 3427 - dev_err(&hdev->pdev->dev, 3428 - "Unmap TQP fail, status is %d.\n", 3429 - ret); 3430 - return ret; 3431 - } 3432 - i = 0; 3433 - hclge_cmd_setup_basic_desc(&desc, 3434 - HCLGE_OPC_DEL_RING_TO_VECTOR, 3435 - false); 3436 - req->int_vector_id = vector_id; 3437 - } 3357 + ret = hclge_bind_ring_with_vector(vport, vector_id, false, ring_chain); 3358 + if (ret) { 3359 + dev_err(&handle->pdev->dev, 3360 + "Unmap ring from vector fail. 
vectorid=%d, ret =%d\n", 3361 + vector_id, 3362 + ret); 3363 + return ret; 3438 3364 } 3439 3365 3440 - if (i > 0) { 3441 - req->int_cause_num = i; 3442 - 3443 - ret = hclge_cmd_send(&hdev->hw, &desc, 1); 3444 - if (ret) { 3445 - dev_err(&hdev->pdev->dev, 3446 - "Unmap TQP fail, status is %d.\n", ret); 3447 - return ret; 3448 - } 3449 - } 3366 + /* Free this MSIX or MSI vector */ 3367 + hclge_free_vector(hdev, vector_id); 3450 3368 3451 3369 return 0; 3452 3370 } ··· 4430 4422 return hnae_get_bit(req->ready_to_reset, HCLGE_TQP_RESET_B); 4431 4423 } 4432 4424 4433 - static void hclge_reset_tqp(struct hnae3_handle *handle, u16 queue_id) 4425 + void hclge_reset_tqp(struct hnae3_handle *handle, u16 queue_id) 4434 4426 { 4435 4427 struct hclge_vport *vport = hclge_get_vport(handle); 4436 4428 struct hclge_dev *hdev = vport->back; ··· 4864 4856 timer_setup(&hdev->service_timer, hclge_service_timer, 0); 4865 4857 INIT_WORK(&hdev->service_task, hclge_service_task); 4866 4858 INIT_WORK(&hdev->rst_service_task, hclge_reset_service_task); 4859 + INIT_WORK(&hdev->mbx_service_task, hclge_mailbox_service_task); 4867 4860 4868 4861 /* Enable MISC vector(vector0) */ 4869 4862 hclge_enable_vector(&hdev->misc_vector, true); ··· 4873 4864 set_bit(HCLGE_STATE_DOWN, &hdev->state); 4874 4865 clear_bit(HCLGE_STATE_RST_SERVICE_SCHED, &hdev->state); 4875 4866 clear_bit(HCLGE_STATE_RST_HANDLING, &hdev->state); 4867 + clear_bit(HCLGE_STATE_MBX_SERVICE_SCHED, &hdev->state); 4868 + clear_bit(HCLGE_STATE_MBX_HANDLING, &hdev->state); 4876 4869 4877 4870 pr_info("%s driver initialization finished.\n", HCLGE_DRIVER_NAME); 4878 4871 return 0; ··· 4988 4977 cancel_work_sync(&hdev->service_task); 4989 4978 if (hdev->rst_service_task.func) 4990 4979 cancel_work_sync(&hdev->rst_service_task); 4980 + if (hdev->mbx_service_task.func) 4981 + cancel_work_sync(&hdev->mbx_service_task); 4991 4982 4992 4983 if (mac->phydev) 4993 4984 mdiobus_unregister(mac->mdio_bus); ··· 5007 4994 .uninit_ae_dev = 
hclge_uninit_ae_dev, 5008 4995 .init_client_instance = hclge_init_client_instance, 5009 4996 .uninit_client_instance = hclge_uninit_client_instance, 5010 - .map_ring_to_vector = hclge_map_handle_ring_to_vector, 5011 - .unmap_ring_from_vector = hclge_unmap_ring_from_vector, 4997 + .map_ring_to_vector = hclge_map_ring_to_vector, 4998 + .unmap_ring_from_vector = hclge_unmap_ring_frm_vector, 5012 4999 .get_vector = hclge_get_vector, 5013 5000 .set_promisc_mode = hclge_set_promisc_mode, 5014 5001 .set_loopback = hclge_set_loopback,
+14 -3
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
··· 92 92 #define HCLGE_VECTOR0_CORERESET_INT_B 6 93 93 #define HCLGE_VECTOR0_IMPRESET_INT_B 7 94 94 95 + /* Vector0 interrupt CMDQ event source register(RW) */ 96 + #define HCLGE_VECTOR0_CMDQ_SRC_REG 0x27100 97 + /* CMDQ register bits for RX event(=MBX event) */ 98 + #define HCLGE_VECTOR0_RX_CMDQ_INT_B 1 99 + 95 100 enum HCLGE_DEV_STATE { 96 101 HCLGE_STATE_REINITING, 97 102 HCLGE_STATE_DOWN, ··· 106 101 HCLGE_STATE_SERVICE_SCHED, 107 102 HCLGE_STATE_RST_SERVICE_SCHED, 108 103 HCLGE_STATE_RST_HANDLING, 104 + HCLGE_STATE_MBX_SERVICE_SCHED, 109 105 HCLGE_STATE_MBX_HANDLING, 110 - HCLGE_STATE_MBX_IRQ, 111 106 HCLGE_STATE_MAX 112 107 }; 113 108 ··· 484 479 struct timer_list service_timer; 485 480 struct work_struct service_task; 486 481 struct work_struct rst_service_task; 482 + struct work_struct mbx_service_task; 487 483 488 484 bool cur_promisc; 489 485 int num_alloc_vfs; /* Actual number of VFs allocated */ ··· 545 539 u8 func_id, 546 540 bool enable); 547 541 struct hclge_vport *hclge_get_vport(struct hnae3_handle *handle); 548 - int hclge_map_vport_ring_to_vector(struct hclge_vport *vport, int vector, 549 - struct hnae3_ring_chain_node *ring_chain); 542 + int hclge_bind_ring_with_vector(struct hclge_vport *vport, 543 + int vector_id, bool en, 544 + struct hnae3_ring_chain_node *ring_chain); 545 + 550 546 static inline int hclge_get_queue_id(struct hnae3_queue *queue) 551 547 { 552 548 struct hclge_tqp *tqp = container_of(queue, struct hclge_tqp, q); ··· 562 554 563 555 int hclge_buffer_alloc(struct hclge_dev *hdev); 564 556 int hclge_rss_init_hw(struct hclge_dev *hdev); 557 + 558 + void hclge_mbx_handler(struct hclge_dev *hdev); 559 + void hclge_reset_tqp(struct hnae3_handle *handle, u16 queue_id); 565 560 #endif
+410
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
··· 1 + // SPDX-License-Identifier: GPL-2.0+ 2 + // Copyright (c) 2016-2017 Hisilicon Limited. 3 + 4 + #include "hclge_main.h" 5 + #include "hclge_mbx.h" 6 + #include "hnae3.h" 7 + 8 + /* hclge_gen_resp_to_vf: used to generate a synchronous response to VF when PF 9 + * receives a mailbox message from VF. 10 + * @vport: pointer to struct hclge_vport 11 + * @vf_to_pf_req: pointer to hclge_mbx_vf_to_pf_cmd of the original mailbox 12 + * message 13 + * @resp_status: indicate to VF whether its request success(0) or failed. 14 + */ 15 + static int hclge_gen_resp_to_vf(struct hclge_vport *vport, 16 + struct hclge_mbx_vf_to_pf_cmd *vf_to_pf_req, 17 + int resp_status, 18 + u8 *resp_data, u16 resp_data_len) 19 + { 20 + struct hclge_mbx_pf_to_vf_cmd *resp_pf_to_vf; 21 + struct hclge_dev *hdev = vport->back; 22 + enum hclge_cmd_status status; 23 + struct hclge_desc desc; 24 + 25 + resp_pf_to_vf = (struct hclge_mbx_pf_to_vf_cmd *)desc.data; 26 + 27 + if (resp_data_len > HCLGE_MBX_MAX_RESP_DATA_SIZE) { 28 + dev_err(&hdev->pdev->dev, 29 + "PF fail to gen resp to VF len %d exceeds max len %d\n", 30 + resp_data_len, 31 + HCLGE_MBX_MAX_RESP_DATA_SIZE); 32 + } 33 + 34 + hclge_cmd_setup_basic_desc(&desc, HCLGEVF_OPC_MBX_PF_TO_VF, false); 35 + 36 + resp_pf_to_vf->dest_vfid = vf_to_pf_req->mbx_src_vfid; 37 + resp_pf_to_vf->msg_len = vf_to_pf_req->msg_len; 38 + 39 + resp_pf_to_vf->msg[0] = HCLGE_MBX_PF_VF_RESP; 40 + resp_pf_to_vf->msg[1] = vf_to_pf_req->msg[0]; 41 + resp_pf_to_vf->msg[2] = vf_to_pf_req->msg[1]; 42 + resp_pf_to_vf->msg[3] = (resp_status == 0) ? 
0 : 1; 43 + 44 + if (resp_data && resp_data_len > 0) 45 + memcpy(&resp_pf_to_vf->msg[4], resp_data, resp_data_len); 46 + 47 + status = hclge_cmd_send(&hdev->hw, &desc, 1); 48 + if (status) 49 + dev_err(&hdev->pdev->dev, 50 + "PF failed(=%d) to send response to VF\n", status); 51 + 52 + return status; 53 + } 54 + 55 + static int hclge_send_mbx_msg(struct hclge_vport *vport, u8 *msg, u16 msg_len, 56 + u16 mbx_opcode, u8 dest_vfid) 57 + { 58 + struct hclge_mbx_pf_to_vf_cmd *resp_pf_to_vf; 59 + struct hclge_dev *hdev = vport->back; 60 + enum hclge_cmd_status status; 61 + struct hclge_desc desc; 62 + 63 + resp_pf_to_vf = (struct hclge_mbx_pf_to_vf_cmd *)desc.data; 64 + 65 + hclge_cmd_setup_basic_desc(&desc, HCLGEVF_OPC_MBX_PF_TO_VF, false); 66 + 67 + resp_pf_to_vf->dest_vfid = dest_vfid; 68 + resp_pf_to_vf->msg_len = msg_len; 69 + resp_pf_to_vf->msg[0] = mbx_opcode; 70 + 71 + memcpy(&resp_pf_to_vf->msg[1], msg, msg_len); 72 + 73 + status = hclge_cmd_send(&hdev->hw, &desc, 1); 74 + if (status) 75 + dev_err(&hdev->pdev->dev, 76 + "PF failed(=%d) to send mailbox message to VF\n", 77 + status); 78 + 79 + return status; 80 + } 81 + 82 + static void hclge_free_vector_ring_chain(struct hnae3_ring_chain_node *head) 83 + { 84 + struct hnae3_ring_chain_node *chain_tmp, *chain; 85 + 86 + chain = head->next; 87 + 88 + while (chain) { 89 + chain_tmp = chain->next; 90 + kzfree(chain); 91 + chain = chain_tmp; 92 + } 93 + } 94 + 95 + /* hclge_get_ring_chain_from_mbx: get ring type & tqpid from mailbox message 96 + * msg[0]: opcode 97 + * msg[1]: <not relevant to this function> 98 + * msg[2]: ring_num 99 + * msg[3]: first ring type (TX|RX) 100 + * msg[4]: first tqp id 101 + * msg[5] ~ msg[14]: other ring type and tqp id 102 + */ 103 + static int hclge_get_ring_chain_from_mbx( 104 + struct hclge_mbx_vf_to_pf_cmd *req, 105 + struct hnae3_ring_chain_node *ring_chain, 106 + struct hclge_vport *vport) 107 + { 108 + #define HCLGE_RING_NODE_VARIABLE_NUM 3 109 + #define 
HCLGE_RING_MAP_MBX_BASIC_MSG_NUM	3

+	struct hnae3_ring_chain_node *cur_chain, *new_chain;
+	int ring_num;
+	int i;
+
+	ring_num = req->msg[2];
+
+	hnae_set_bit(ring_chain->flag, HNAE3_RING_TYPE_B, req->msg[3]);
+	ring_chain->tqp_index =
+			hclge_get_queue_id(vport->nic.kinfo.tqp[req->msg[4]]);
+
+	cur_chain = ring_chain;
+
+	for (i = 1; i < ring_num; i++) {
+		new_chain = kzalloc(sizeof(*new_chain), GFP_KERNEL);
+		if (!new_chain)
+			goto err;
+
+		hnae_set_bit(new_chain->flag, HNAE3_RING_TYPE_B,
+			     req->msg[HCLGE_RING_NODE_VARIABLE_NUM * i +
+				      HCLGE_RING_MAP_MBX_BASIC_MSG_NUM]);
+
+		new_chain->tqp_index =
+		hclge_get_queue_id(vport->nic.kinfo.tqp
+			[req->msg[HCLGE_RING_NODE_VARIABLE_NUM * i +
+				  HCLGE_RING_MAP_MBX_BASIC_MSG_NUM + 1]]);
+
+		cur_chain->next = new_chain;
+		cur_chain = new_chain;
+	}
+
+	return 0;
+err:
+	hclge_free_vector_ring_chain(ring_chain);
+	return -ENOMEM;
+}
+
+static int hclge_map_unmap_ring_to_vf_vector(struct hclge_vport *vport, bool en,
+					     struct hclge_mbx_vf_to_pf_cmd *req)
+{
+	struct hnae3_ring_chain_node ring_chain;
+	int vector_id = req->msg[1];
+	int ret;
+
+	memset(&ring_chain, 0, sizeof(ring_chain));
+	ret = hclge_get_ring_chain_from_mbx(req, &ring_chain, vport);
+	if (ret)
+		return ret;
+
+	ret = hclge_bind_ring_with_vector(vport, vector_id, en, &ring_chain);
+	if (ret)
+		return ret;
+
+	hclge_free_vector_ring_chain(&ring_chain);
+
+	return 0;
+}
+
+static int hclge_set_vf_promisc_mode(struct hclge_vport *vport,
+				     struct hclge_mbx_vf_to_pf_cmd *req)
+{
+	bool en = req->msg[1] ? true : false;
+	struct hclge_promisc_param param;
+
+	/* always enable broadcast promisc bit */
+	hclge_promisc_param_init(&param, en, en, true, vport->vport_id);
+	return hclge_cmd_set_promisc_mode(vport->back, &param);
+}
+
+static int hclge_set_vf_uc_mac_addr(struct hclge_vport *vport,
+				    struct hclge_mbx_vf_to_pf_cmd *mbx_req,
+				    bool gen_resp)
+{
+	const u8 *mac_addr = (const u8 *)(&mbx_req->msg[2]);
+	struct hclge_dev *hdev = vport->back;
+	int status;
+
+	if (mbx_req->msg[1] == HCLGE_MBX_MAC_VLAN_UC_MODIFY) {
+		const u8 *old_addr = (const u8 *)(&mbx_req->msg[8]);
+
+		hclge_rm_uc_addr_common(vport, old_addr);
+		status = hclge_add_uc_addr_common(vport, mac_addr);
+	} else if (mbx_req->msg[1] == HCLGE_MBX_MAC_VLAN_UC_ADD) {
+		status = hclge_add_uc_addr_common(vport, mac_addr);
+	} else if (mbx_req->msg[1] == HCLGE_MBX_MAC_VLAN_UC_REMOVE) {
+		status = hclge_rm_uc_addr_common(vport, mac_addr);
+	} else {
+		dev_err(&hdev->pdev->dev,
+			"failed to set unicast mac addr, unknown subcode %d\n",
+			mbx_req->msg[1]);
+		return -EIO;
+	}
+
+	if (gen_resp)
+		hclge_gen_resp_to_vf(vport, mbx_req, status, NULL, 0);
+
+	return 0;
+}
+
+static int hclge_set_vf_mc_mac_addr(struct hclge_vport *vport,
+				    struct hclge_mbx_vf_to_pf_cmd *mbx_req,
+				    bool gen_resp)
+{
+	const u8 *mac_addr = (const u8 *)(&mbx_req->msg[2]);
+	struct hclge_dev *hdev = vport->back;
+	int status;
+
+	if (mbx_req->msg[1] == HCLGE_MBX_MAC_VLAN_MC_ADD) {
+		status = hclge_add_mc_addr_common(vport, mac_addr);
+	} else if (mbx_req->msg[1] == HCLGE_MBX_MAC_VLAN_MC_REMOVE) {
+		status = hclge_rm_mc_addr_common(vport, mac_addr);
+	} else if (mbx_req->msg[1] == HCLGE_MBX_MAC_VLAN_MC_FUNC_MTA_ENABLE) {
+		u8 func_id = vport->vport_id;
+		bool enable = mbx_req->msg[2];
+
+		status = hclge_cfg_func_mta_filter(hdev, func_id, enable);
+	} else {
+		dev_err(&hdev->pdev->dev,
+			"failed to set mcast mac addr, unknown subcode %d\n",
+			mbx_req->msg[1]);
+		return -EIO;
+	}
+
+	if (gen_resp)
+		hclge_gen_resp_to_vf(vport, mbx_req, status, NULL, 0);
+
+	return 0;
+}
+
+static int hclge_set_vf_vlan_cfg(struct hclge_vport *vport,
+				 struct hclge_mbx_vf_to_pf_cmd *mbx_req,
+				 bool gen_resp)
+{
+	struct hclge_dev *hdev = vport->back;
+	int status = 0;
+
+	if (mbx_req->msg[1] == HCLGE_MBX_VLAN_FILTER) {
+		u16 vlan, proto;
+		bool is_kill;
+
+		is_kill = !!mbx_req->msg[2];
+		memcpy(&vlan, &mbx_req->msg[3], sizeof(vlan));
+		memcpy(&proto, &mbx_req->msg[5], sizeof(proto));
+		status = hclge_set_vf_vlan_common(hdev, vport->vport_id,
+						  is_kill, vlan, 0,
+						  cpu_to_be16(proto));
+	}
+
+	if (gen_resp)
+		status = hclge_gen_resp_to_vf(vport, mbx_req, status, NULL, 0);
+
+	return status;
+}
+
+static int hclge_get_vf_tcinfo(struct hclge_vport *vport,
+			       struct hclge_mbx_vf_to_pf_cmd *mbx_req,
+			       bool gen_resp)
+{
+	struct hclge_dev *hdev = vport->back;
+	int ret;
+
+	ret = hclge_gen_resp_to_vf(vport, mbx_req, 0, &hdev->hw_tc_map,
+				   sizeof(u8));
+
+	return ret;
+}
+
+static int hclge_get_vf_queue_info(struct hclge_vport *vport,
+				   struct hclge_mbx_vf_to_pf_cmd *mbx_req,
+				   bool gen_resp)
+{
+#define HCLGE_TQPS_RSS_INFO_LEN		8
+	u8 resp_data[HCLGE_TQPS_RSS_INFO_LEN];
+	struct hclge_dev *hdev = vport->back;
+
+	/* get the queue related info */
+	memcpy(&resp_data[0], &vport->alloc_tqps, sizeof(u16));
+	memcpy(&resp_data[2], &hdev->rss_size_max, sizeof(u16));
+	memcpy(&resp_data[4], &hdev->num_desc, sizeof(u16));
+	memcpy(&resp_data[6], &hdev->rx_buf_len, sizeof(u16));
+
+	return hclge_gen_resp_to_vf(vport, mbx_req, 0, resp_data,
+				    HCLGE_TQPS_RSS_INFO_LEN);
+}
+
+static int hclge_get_link_info(struct hclge_vport *vport,
+			       struct hclge_mbx_vf_to_pf_cmd *mbx_req)
+{
+	struct hclge_dev *hdev = vport->back;
+	u16 link_status;
+	u8 msg_data[2];
+	u8 dest_vfid;
+
+	/* mac.link can only be 0 or 1 */
+	link_status = (u16)hdev->hw.mac.link;
+	memcpy(&msg_data[0], &link_status, sizeof(u16));
+	dest_vfid = mbx_req->mbx_src_vfid;
+
+	/* send this requested info to VF */
+	return hclge_send_mbx_msg(vport, msg_data, sizeof(u8),
+				  HCLGE_MBX_LINK_STAT_CHANGE, dest_vfid);
+}
+
+static void hclge_reset_vf_queue(struct hclge_vport *vport,
+				 struct hclge_mbx_vf_to_pf_cmd *mbx_req)
+{
+	u16 queue_id;
+
+	memcpy(&queue_id, &mbx_req->msg[2], sizeof(queue_id));
+
+	hclge_reset_tqp(&vport->nic, queue_id);
+}
+
+void hclge_mbx_handler(struct hclge_dev *hdev)
+{
+	struct hclge_cmq_ring *crq = &hdev->hw.cmq.crq;
+	struct hclge_mbx_vf_to_pf_cmd *req;
+	struct hclge_vport *vport;
+	struct hclge_desc *desc;
+	int ret;
+
+	/* handle all the mailbox requests in the queue */
+	while (hnae_get_bit(crq->desc[crq->next_to_use].flag,
+			    HCLGE_CMDQ_RX_OUTVLD_B)) {
+		desc = &crq->desc[crq->next_to_use];
+		req = (struct hclge_mbx_vf_to_pf_cmd *)desc->data;
+
+		vport = &hdev->vport[req->mbx_src_vfid];
+
+		switch (req->msg[0]) {
+		case HCLGE_MBX_MAP_RING_TO_VECTOR:
+			ret = hclge_map_unmap_ring_to_vf_vector(vport, true,
+								req);
+			break;
+		case HCLGE_MBX_UNMAP_RING_TO_VECTOR:
+			ret = hclge_map_unmap_ring_to_vf_vector(vport, false,
+								req);
+			break;
+		case HCLGE_MBX_SET_PROMISC_MODE:
+			ret = hclge_set_vf_promisc_mode(vport, req);
+			if (ret)
+				dev_err(&hdev->pdev->dev,
+					"PF fail(%d) to set VF promisc mode\n",
+					ret);
+			break;
+		case HCLGE_MBX_SET_UNICAST:
+			ret = hclge_set_vf_uc_mac_addr(vport, req, false);
+			if (ret)
+				dev_err(&hdev->pdev->dev,
+					"PF fail(%d) to set VF UC MAC Addr\n",
+					ret);
+			break;
+		case HCLGE_MBX_SET_MULTICAST:
+			ret = hclge_set_vf_mc_mac_addr(vport, req, false);
+			if (ret)
+				dev_err(&hdev->pdev->dev,
+					"PF fail(%d) to set VF MC MAC Addr\n",
+					ret);
+			break;
+		case HCLGE_MBX_SET_VLAN:
+			ret = hclge_set_vf_vlan_cfg(vport, req, false);
+			if (ret)
+				dev_err(&hdev->pdev->dev,
+					"PF failed(%d) to config VF's VLAN\n",
+					ret);
+			break;
+		case HCLGE_MBX_GET_QINFO:
+			ret = hclge_get_vf_queue_info(vport, req, true);
+			if (ret)
+				dev_err(&hdev->pdev->dev,
+					"PF failed(%d) to get Q info for VF\n",
+					ret);
+			break;
+		case HCLGE_MBX_GET_TCINFO:
+			ret = hclge_get_vf_tcinfo(vport, req, true);
+			if (ret)
+				dev_err(&hdev->pdev->dev,
+					"PF failed(%d) to get TC info for VF\n",
+					ret);
+			break;
+		case HCLGE_MBX_GET_LINK_STATUS:
+			ret = hclge_get_link_info(vport, req);
+			if (ret)
+				dev_err(&hdev->pdev->dev,
+					"PF fail(%d) to get link stat for VF\n",
+					ret);
+			break;
+		case HCLGE_MBX_QUEUE_RESET:
+			hclge_reset_vf_queue(vport, req);
+			break;
+		default:
+			dev_err(&hdev->pdev->dev,
+				"un-supported mailbox message, code = %d\n",
+				req->msg[0]);
+			break;
+		}
+		hclge_mbx_ring_ptr_move_crq(crq);
+	}
+
+	/* Write back CMDQ_RQ header pointer, M7 needs this pointer */
+	hclge_write_dev(&hdev->hw, HCLGE_NIC_CRQ_HEAD_REG, crq->next_to_use);
+}
+1 -1
drivers/net/ethernet/hisilicon/hns3/hns3pf/hns3_dcbnl.c drivers/net/ethernet/hisilicon/hns3/hns3_dcbnl.c
···
 {
 	struct net_device *dev = handle->kinfo.netdev;
 
-	if (!handle->kinfo.dcb_ops)
+	if ((!handle->kinfo.dcb_ops) || (handle->flags & HNAE3_SUPPORT_VF))
 		return;
 
 	dev->dcbnl_ops = &hns3_dcbnl_ops;
+2
drivers/net/ethernet/hisilicon/hns3/hns3pf/hns3_enet.c drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
···
 		HNAE3_DEV_SUPPORT_ROCE_DCB_BITS},
 	{PCI_VDEVICE(HUAWEI, HNAE3_DEV_ID_100G_RDMA_MACSEC),
 		HNAE3_DEV_SUPPORT_ROCE_DCB_BITS},
+	{PCI_VDEVICE(HUAWEI, HNAE3_DEV_ID_100G_VF), 0},
+	{PCI_VDEVICE(HUAWEI, HNAE3_DEV_ID_100G_RDMA_DCB_PFC_VF), 0},
 	/* required last entry */
 	{0, }
 };
drivers/net/ethernet/hisilicon/hns3/hns3pf/hns3_enet.h drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
+21 -1
drivers/net/ethernet/hisilicon/hns3/hns3pf/hns3_ethtool.c drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
···
 	return genphy_restart_aneg(phy);
 }
 
+static const struct ethtool_ops hns3vf_ethtool_ops = {
+	.get_drvinfo = hns3_get_drvinfo,
+	.get_ringparam = hns3_get_ringparam,
+	.set_ringparam = hns3_set_ringparam,
+	.get_strings = hns3_get_strings,
+	.get_ethtool_stats = hns3_get_stats,
+	.get_sset_count = hns3_get_sset_count,
+	.get_rxnfc = hns3_get_rxnfc,
+	.get_rxfh_key_size = hns3_get_rss_key_size,
+	.get_rxfh_indir_size = hns3_get_rss_indir_size,
+	.get_rxfh = hns3_get_rss,
+	.set_rxfh = hns3_set_rss,
+	.get_link_ksettings = hns3_get_link_ksettings,
+};
+
 static const struct ethtool_ops hns3_ethtool_ops = {
 	.self_test = hns3_self_test,
 	.get_drvinfo = hns3_get_drvinfo,
···
 
 void hns3_ethtool_set_ops(struct net_device *netdev)
 {
-	netdev->ethtool_ops = &hns3_ethtool_ops;
+	struct hnae3_handle *h = hns3_get_handle(netdev);
+
+	if (h->flags & HNAE3_SUPPORT_VF)
+		netdev->ethtool_ops = &hns3vf_ethtool_ops;
+	else
+		netdev->ethtool_ops = &hns3_ethtool_ops;
 }
+9
drivers/net/ethernet/hisilicon/hns3/hns3vf/Makefile
···
+# SPDX-License-Identifier: GPL-2.0+
+#
+# Makefile for the HISILICON network device drivers.
+#
+
+ccflags-y := -Idrivers/net/ethernet/hisilicon/hns3
+
+obj-$(CONFIG_HNS3_HCLGEVF) += hclgevf.o
+hclgevf-objs = hclgevf_main.o hclgevf_cmd.o hclgevf_mbx.o
+342
drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c
···
+// SPDX-License-Identifier: GPL-2.0+
+// Copyright (c) 2016-2017 Hisilicon Limited.
+
+#include <linux/device.h>
+#include <linux/dma-direction.h>
+#include <linux/dma-mapping.h>
+#include <linux/err.h>
+#include <linux/pci.h>
+#include <linux/slab.h>
+#include "hclgevf_cmd.h"
+#include "hclgevf_main.h"
+#include "hnae3.h"
+
+#define hclgevf_is_csq(ring) ((ring)->flag & HCLGEVF_TYPE_CSQ)
+#define hclgevf_ring_to_dma_dir(ring) (hclgevf_is_csq(ring) ? \
+					DMA_TO_DEVICE : DMA_FROM_DEVICE)
+#define cmq_ring_to_dev(ring)	(&(ring)->dev->pdev->dev)
+
+static int hclgevf_ring_space(struct hclgevf_cmq_ring *ring)
+{
+	int ntc = ring->next_to_clean;
+	int ntu = ring->next_to_use;
+	int used;
+
+	used = (ntu - ntc + ring->desc_num) % ring->desc_num;
+
+	return ring->desc_num - used - 1;
+}
+
+static int hclgevf_cmd_csq_clean(struct hclgevf_hw *hw)
+{
+	struct hclgevf_cmq_ring *csq = &hw->cmq.csq;
+	u16 ntc = csq->next_to_clean;
+	struct hclgevf_desc *desc;
+	int clean = 0;
+	u32 head;
+
+	desc = &csq->desc[ntc];
+	head = hclgevf_read_dev(hw, HCLGEVF_NIC_CSQ_HEAD_REG);
+	while (head != ntc) {
+		memset(desc, 0, sizeof(*desc));
+		ntc++;
+		if (ntc == csq->desc_num)
+			ntc = 0;
+		desc = &csq->desc[ntc];
+		clean++;
+	}
+	csq->next_to_clean = ntc;
+
+	return clean;
+}
+
+static bool hclgevf_cmd_csq_done(struct hclgevf_hw *hw)
+{
+	u32 head;
+
+	head = hclgevf_read_dev(hw, HCLGEVF_NIC_CSQ_HEAD_REG);
+
+	return head == hw->cmq.csq.next_to_use;
+}
+
+static bool hclgevf_is_special_opcode(u16 opcode)
+{
+	u16 spec_opcode[] = {0x30, 0x31, 0x32};
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(spec_opcode); i++) {
+		if (spec_opcode[i] == opcode)
+			return true;
+	}
+
+	return false;
+}
+
+static int hclgevf_alloc_cmd_desc(struct hclgevf_cmq_ring *ring)
+{
+	int size = ring->desc_num * sizeof(struct hclgevf_desc);
+
+	ring->desc = kzalloc(size, GFP_KERNEL);
+	if (!ring->desc)
+		return -ENOMEM;
+
+	ring->desc_dma_addr = dma_map_single(cmq_ring_to_dev(ring), ring->desc,
+					     size, DMA_BIDIRECTIONAL);
+
+	if (dma_mapping_error(cmq_ring_to_dev(ring), ring->desc_dma_addr)) {
+		ring->desc_dma_addr = 0;
+		kfree(ring->desc);
+		ring->desc = NULL;
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+static void hclgevf_free_cmd_desc(struct hclgevf_cmq_ring *ring)
+{
+	dma_unmap_single(cmq_ring_to_dev(ring), ring->desc_dma_addr,
+			 ring->desc_num * sizeof(ring->desc[0]),
+			 hclgevf_ring_to_dma_dir(ring));
+
+	ring->desc_dma_addr = 0;
+	kfree(ring->desc);
+	ring->desc = NULL;
+}
+
+static int hclgevf_init_cmd_queue(struct hclgevf_dev *hdev,
+				  struct hclgevf_cmq_ring *ring)
+{
+	struct hclgevf_hw *hw = &hdev->hw;
+	int ring_type = ring->flag;
+	u32 reg_val;
+	int ret;
+
+	ring->desc_num = HCLGEVF_NIC_CMQ_DESC_NUM;
+	spin_lock_init(&ring->lock);
+	ring->next_to_clean = 0;
+	ring->next_to_use = 0;
+	ring->dev = hdev;
+
+	/* allocate CSQ/CRQ descriptor */
+	ret = hclgevf_alloc_cmd_desc(ring);
+	if (ret) {
+		dev_err(&hdev->pdev->dev, "failed(%d) to alloc %s desc\n", ret,
+			(ring_type == HCLGEVF_TYPE_CSQ) ? "CSQ" : "CRQ");
+		return ret;
+	}
+
+	/* initialize the hardware registers with csq/crq dma-address,
+	 * descriptor number, head & tail pointers
+	 */
+	switch (ring_type) {
+	case HCLGEVF_TYPE_CSQ:
+		reg_val = (u32)ring->desc_dma_addr;
+		hclgevf_write_dev(hw, HCLGEVF_NIC_CSQ_BASEADDR_L_REG, reg_val);
+		reg_val = (u32)((ring->desc_dma_addr >> 31) >> 1);
+		hclgevf_write_dev(hw, HCLGEVF_NIC_CSQ_BASEADDR_H_REG, reg_val);
+
+		reg_val = (ring->desc_num >> HCLGEVF_NIC_CMQ_DESC_NUM_S);
+		reg_val |= HCLGEVF_NIC_CMQ_ENABLE;
+		hclgevf_write_dev(hw, HCLGEVF_NIC_CSQ_DEPTH_REG, reg_val);
+
+		hclgevf_write_dev(hw, HCLGEVF_NIC_CSQ_TAIL_REG, 0);
+		hclgevf_write_dev(hw, HCLGEVF_NIC_CSQ_HEAD_REG, 0);
+		break;
+	case HCLGEVF_TYPE_CRQ:
+		reg_val = (u32)ring->desc_dma_addr;
+		hclgevf_write_dev(hw, HCLGEVF_NIC_CRQ_BASEADDR_L_REG, reg_val);
+		reg_val = (u32)((ring->desc_dma_addr >> 31) >> 1);
+		hclgevf_write_dev(hw, HCLGEVF_NIC_CRQ_BASEADDR_H_REG, reg_val);
+
+		reg_val = (ring->desc_num >> HCLGEVF_NIC_CMQ_DESC_NUM_S);
+		reg_val |= HCLGEVF_NIC_CMQ_ENABLE;
+		hclgevf_write_dev(hw, HCLGEVF_NIC_CRQ_DEPTH_REG, reg_val);
+
+		hclgevf_write_dev(hw, HCLGEVF_NIC_CRQ_TAIL_REG, 0);
+		hclgevf_write_dev(hw, HCLGEVF_NIC_CRQ_HEAD_REG, 0);
+		break;
+	}
+
+	return 0;
+}
+
+void hclgevf_cmd_setup_basic_desc(struct hclgevf_desc *desc,
+				  enum hclgevf_opcode_type opcode, bool is_read)
+{
+	memset(desc, 0, sizeof(struct hclgevf_desc));
+	desc->opcode = cpu_to_le16(opcode);
+	desc->flag = cpu_to_le16(HCLGEVF_CMD_FLAG_NO_INTR |
+				 HCLGEVF_CMD_FLAG_IN);
+	if (is_read)
+		desc->flag |= cpu_to_le16(HCLGEVF_CMD_FLAG_WR);
+	else
+		desc->flag &= cpu_to_le16(~HCLGEVF_CMD_FLAG_WR);
+}
+
+/* hclgevf_cmd_send - send command to command queue
+ * @hw: pointer to the hw struct
+ * @desc: prefilled descriptor for describing the command
+ * @num : the number of descriptors to be sent
+ *
+ * This is the main send command for command queue, it
+ * sends the queue, cleans the queue, etc
+ */
+int hclgevf_cmd_send(struct hclgevf_hw *hw, struct hclgevf_desc *desc, int num)
+{
+	struct hclgevf_dev *hdev = (struct hclgevf_dev *)hw->hdev;
+	struct hclgevf_desc *desc_to_use;
+	bool complete = false;
+	u32 timeout = 0;
+	int handle = 0;
+	int status = 0;
+	u16 retval;
+	u16 opcode;
+	int ntc;
+
+	spin_lock_bh(&hw->cmq.csq.lock);
+
+	if (num > hclgevf_ring_space(&hw->cmq.csq)) {
+		spin_unlock_bh(&hw->cmq.csq.lock);
+		return -EBUSY;
+	}
+
+	/* Record the location of desc in the ring for this time
+	 * which will be used for hardware to write back
+	 */
+	ntc = hw->cmq.csq.next_to_use;
+	opcode = le16_to_cpu(desc[0].opcode);
+	while (handle < num) {
+		desc_to_use = &hw->cmq.csq.desc[hw->cmq.csq.next_to_use];
+		*desc_to_use = desc[handle];
+		(hw->cmq.csq.next_to_use)++;
+		if (hw->cmq.csq.next_to_use == hw->cmq.csq.desc_num)
+			hw->cmq.csq.next_to_use = 0;
+		handle++;
+	}
+
+	/* Write to hardware */
+	hclgevf_write_dev(hw, HCLGEVF_NIC_CSQ_TAIL_REG,
+			  hw->cmq.csq.next_to_use);
+
+	/* If the command is sync, wait for the firmware to write back,
+	 * if multi descriptors to be sent, use the first one to check
+	 */
+	if (HCLGEVF_SEND_SYNC(le16_to_cpu(desc->flag))) {
+		do {
+			if (hclgevf_cmd_csq_done(hw))
+				break;
+			udelay(1);
+			timeout++;
+		} while (timeout < hw->cmq.tx_timeout);
+	}
+
+	if (hclgevf_cmd_csq_done(hw)) {
+		complete = true;
+		handle = 0;
+
+		while (handle < num) {
+			/* Get the result of hardware write back */
+			desc_to_use = &hw->cmq.csq.desc[ntc];
+			desc[handle] = *desc_to_use;
+
+			if (likely(!hclgevf_is_special_opcode(opcode)))
+				retval = le16_to_cpu(desc[handle].retval);
+			else
+				retval = le16_to_cpu(desc[0].retval);
+
+			if ((enum hclgevf_cmd_return_status)retval ==
+			    HCLGEVF_CMD_EXEC_SUCCESS)
+				status = 0;
+			else
+				status = -EIO;
+			hw->cmq.last_status = (enum hclgevf_cmd_status)retval;
+			ntc++;
+			handle++;
+			if (ntc == hw->cmq.csq.desc_num)
+				ntc = 0;
+		}
+	}
+
+	if (!complete)
+		status = -EAGAIN;
+
+	/* Clean the command send queue */
+	handle = hclgevf_cmd_csq_clean(hw);
+	if (handle != num) {
+		dev_warn(&hdev->pdev->dev,
+			 "cleaned %d, need to clean %d\n", handle, num);
+	}
+
+	spin_unlock_bh(&hw->cmq.csq.lock);
+
+	return status;
+}
+
+static int hclgevf_cmd_query_firmware_version(struct hclgevf_hw *hw,
+					      u32 *version)
+{
+	struct hclgevf_query_version_cmd *resp;
+	struct hclgevf_desc desc;
+	int status;
+
+	resp = (struct hclgevf_query_version_cmd *)desc.data;
+
+	hclgevf_cmd_setup_basic_desc(&desc, HCLGEVF_OPC_QUERY_FW_VER, 1);
+	status = hclgevf_cmd_send(hw, &desc, 1);
+	if (!status)
+		*version = le32_to_cpu(resp->firmware);
+
+	return status;
+}
+
+int hclgevf_cmd_init(struct hclgevf_dev *hdev)
+{
+	u32 version;
+	int ret;
+
+	/* setup Tx write back timeout */
+	hdev->hw.cmq.tx_timeout = HCLGEVF_CMDQ_TX_TIMEOUT;
+
+	/* setup queue CSQ/CRQ rings */
+	hdev->hw.cmq.csq.flag = HCLGEVF_TYPE_CSQ;
+	ret = hclgevf_init_cmd_queue(hdev, &hdev->hw.cmq.csq);
+	if (ret) {
+		dev_err(&hdev->pdev->dev,
+			"failed(%d) to initialize CSQ ring\n", ret);
+		return ret;
+	}
+
+	hdev->hw.cmq.crq.flag = HCLGEVF_TYPE_CRQ;
+	ret = hclgevf_init_cmd_queue(hdev, &hdev->hw.cmq.crq);
+	if (ret) {
+		dev_err(&hdev->pdev->dev,
+			"failed(%d) to initialize CRQ ring\n", ret);
+		goto err_csq;
+	}
+
+	/* get firmware version */
+	ret = hclgevf_cmd_query_firmware_version(&hdev->hw, &version);
+	if (ret) {
+		dev_err(&hdev->pdev->dev,
+			"failed(%d) to query firmware version\n", ret);
+		goto err_crq;
+	}
+	hdev->fw_version = version;
+
+	dev_info(&hdev->pdev->dev, "The firmware version is %08x\n", version);
+
+	return 0;
+err_crq:
+	hclgevf_free_cmd_desc(&hdev->hw.cmq.crq);
+err_csq:
+	hclgevf_free_cmd_desc(&hdev->hw.cmq.csq);
+
+	return ret;
+}
+
+void hclgevf_cmd_uninit(struct hclgevf_dev *hdev)
+{
+	hclgevf_free_cmd_desc(&hdev->hw.cmq.csq);
+	hclgevf_free_cmd_desc(&hdev->hw.cmq.crq);
+}
+256
drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.h
···
+/* SPDX-License-Identifier: GPL-2.0+ */
+/* Copyright (c) 2016-2017 Hisilicon Limited. */
+
+#ifndef __HCLGEVF_CMD_H
+#define __HCLGEVF_CMD_H
+#include <linux/io.h>
+#include <linux/types.h>
+#include "hnae3.h"
+
+#define HCLGEVF_CMDQ_TX_TIMEOUT		200
+#define HCLGEVF_CMDQ_RX_INVLD_B		0
+#define HCLGEVF_CMDQ_RX_OUTVLD_B	1
+
+struct hclgevf_hw;
+struct hclgevf_dev;
+
+struct hclgevf_desc {
+	__le16 opcode;
+	__le16 flag;
+	__le16 retval;
+	__le16 rsv;
+	__le32 data[6];
+};
+
+struct hclgevf_desc_cb {
+	dma_addr_t dma;
+	void *va;
+	u32 length;
+};
+
+struct hclgevf_cmq_ring {
+	dma_addr_t desc_dma_addr;
+	struct hclgevf_desc *desc;
+	struct hclgevf_desc_cb *desc_cb;
+	struct hclgevf_dev *dev;
+	u32 head;
+	u32 tail;
+
+	u16 buf_size;
+	u16 desc_num;
+	int next_to_use;
+	int next_to_clean;
+	u8 flag;
+	spinlock_t lock; /* Command queue lock */
+};
+
+enum hclgevf_cmd_return_status {
+	HCLGEVF_CMD_EXEC_SUCCESS	= 0,
+	HCLGEVF_CMD_NO_AUTH		= 1,
+	HCLGEVF_CMD_NOT_EXEC		= 2,
+	HCLGEVF_CMD_QUEUE_FULL		= 3,
+};
+
+enum hclgevf_cmd_status {
+	HCLGEVF_STATUS_SUCCESS		= 0,
+	HCLGEVF_ERR_CSQ_FULL		= -1,
+	HCLGEVF_ERR_CSQ_TIMEOUT		= -2,
+	HCLGEVF_ERR_CSQ_ERROR		= -3
+};
+
+struct hclgevf_cmq {
+	struct hclgevf_cmq_ring csq;
+	struct hclgevf_cmq_ring crq;
+	u16 tx_timeout; /* Tx timeout */
+	enum hclgevf_cmd_status last_status;
+};
+
+#define HCLGEVF_CMD_FLAG_IN_VALID_SHIFT		0
+#define HCLGEVF_CMD_FLAG_OUT_VALID_SHIFT	1
+#define HCLGEVF_CMD_FLAG_NEXT_SHIFT		2
+#define HCLGEVF_CMD_FLAG_WR_OR_RD_SHIFT		3
+#define HCLGEVF_CMD_FLAG_NO_INTR_SHIFT		4
+#define HCLGEVF_CMD_FLAG_ERR_INTR_SHIFT		5
+
+#define HCLGEVF_CMD_FLAG_IN		BIT(HCLGEVF_CMD_FLAG_IN_VALID_SHIFT)
+#define HCLGEVF_CMD_FLAG_OUT		BIT(HCLGEVF_CMD_FLAG_OUT_VALID_SHIFT)
+#define HCLGEVF_CMD_FLAG_NEXT		BIT(HCLGEVF_CMD_FLAG_NEXT_SHIFT)
+#define HCLGEVF_CMD_FLAG_WR		BIT(HCLGEVF_CMD_FLAG_WR_OR_RD_SHIFT)
+#define HCLGEVF_CMD_FLAG_NO_INTR	BIT(HCLGEVF_CMD_FLAG_NO_INTR_SHIFT)
+#define HCLGEVF_CMD_FLAG_ERR_INTR	BIT(HCLGEVF_CMD_FLAG_ERR_INTR_SHIFT)
+
+enum hclgevf_opcode_type {
+	/* Generic command */
+	HCLGEVF_OPC_QUERY_FW_VER	= 0x0001,
+	/* TQP command */
+	HCLGEVF_OPC_QUERY_TX_STATUS	= 0x0B03,
+	HCLGEVF_OPC_QUERY_RX_STATUS	= 0x0B13,
+	HCLGEVF_OPC_CFG_COM_TQP_QUEUE	= 0x0B20,
+	/* TSO cmd */
+	HCLGEVF_OPC_TSO_GENERIC_CONFIG	= 0x0C01,
+	/* RSS cmd */
+	HCLGEVF_OPC_RSS_GENERIC_CONFIG	= 0x0D01,
+	HCLGEVF_OPC_RSS_INDIR_TABLE	= 0x0D07,
+	HCLGEVF_OPC_RSS_TC_MODE		= 0x0D08,
+	/* Mailbox cmd */
+	HCLGEVF_OPC_MBX_VF_TO_PF	= 0x2001,
+};
+
+#define HCLGEVF_TQP_REG_OFFSET		0x80000
+#define HCLGEVF_TQP_REG_SIZE		0x200
+
+struct hclgevf_tqp_map {
+	__le16 tqp_id;	/* Absolute tqp id in this pf */
+	u8 tqp_vf;	/* VF id */
+#define HCLGEVF_TQP_MAP_TYPE_PF		0
+#define HCLGEVF_TQP_MAP_TYPE_VF		1
+#define HCLGEVF_TQP_MAP_TYPE_B		0
+#define HCLGEVF_TQP_MAP_EN_B		1
+	u8 tqp_flag;	/* Indicates whether it is a pf or vf tqp */
+	__le16 tqp_vid; /* Virtual id in this pf/vf */
+	u8 rsv[18];
+};
+
+#define HCLGEVF_VECTOR_ELEMENTS_PER_CMD	10
+
+enum hclgevf_int_type {
+	HCLGEVF_INT_TX = 0,
+	HCLGEVF_INT_RX,
+	HCLGEVF_INT_EVENT,
+};
+
+struct hclgevf_ctrl_vector_chain {
+	u8 int_vector_id;
+	u8 int_cause_num;
+#define HCLGEVF_INT_TYPE_S	0
+#define HCLGEVF_INT_TYPE_M	0x3
+#define HCLGEVF_TQP_ID_S	2
+#define HCLGEVF_TQP_ID_M	(0x3fff << HCLGEVF_TQP_ID_S)
+	__le16 tqp_type_and_id[HCLGEVF_VECTOR_ELEMENTS_PER_CMD];
+	u8 vfid;
+	u8 resv;
+};
+
+struct hclgevf_query_version_cmd {
+	__le32 firmware;
+	__le32 firmware_rsv[5];
+};
+
+#define HCLGEVF_RSS_HASH_KEY_OFFSET	4
+#define HCLGEVF_RSS_HASH_KEY_NUM	16
+struct hclgevf_rss_config_cmd {
+	u8 hash_config;
+	u8 rsv[7];
+	u8 hash_key[HCLGEVF_RSS_HASH_KEY_NUM];
+};
+
+struct hclgevf_rss_input_tuple_cmd {
+	u8 ipv4_tcp_en;
+	u8 ipv4_udp_en;
+	u8 ipv4_stcp_en;
+	u8 ipv4_fragment_en;
+	u8 ipv6_tcp_en;
+	u8 ipv6_udp_en;
+	u8 ipv6_stcp_en;
+	u8 ipv6_fragment_en;
+	u8 rsv[16];
+};
+
+#define HCLGEVF_RSS_CFG_TBL_SIZE	16
+
+struct hclgevf_rss_indirection_table_cmd {
+	u16 start_table_index;
+	u16 rss_set_bitmap;
+	u8 rsv[4];
+	u8 rss_result[HCLGEVF_RSS_CFG_TBL_SIZE];
+};
+
+#define HCLGEVF_RSS_TC_OFFSET_S		0
+#define HCLGEVF_RSS_TC_OFFSET_M		(0x3ff << HCLGEVF_RSS_TC_OFFSET_S)
+#define HCLGEVF_RSS_TC_SIZE_S		12
+#define HCLGEVF_RSS_TC_SIZE_M		(0x7 << HCLGEVF_RSS_TC_SIZE_S)
+#define HCLGEVF_RSS_TC_VALID_B		15
+#define HCLGEVF_MAX_TC_NUM		8
+struct hclgevf_rss_tc_mode_cmd {
+	u16 rss_tc_mode[HCLGEVF_MAX_TC_NUM];
+	u8 rsv[8];
+};
+
+#define HCLGEVF_LINK_STS_B	0
+#define HCLGEVF_LINK_STATUS	BIT(HCLGEVF_LINK_STS_B)
+struct hclgevf_link_status_cmd {
+	u8 status;
+	u8 rsv[23];
+};
+
+#define HCLGEVF_RING_ID_MASK	0x3ff
+#define HCLGEVF_TQP_ENABLE_B	0
+
+struct hclgevf_cfg_com_tqp_queue_cmd {
+	__le16 tqp_id;
+	__le16 stream_id;
+	u8 enable;
+	u8 rsv[19];
+};
+
+struct hclgevf_cfg_tx_queue_pointer_cmd {
+	__le16 tqp_id;
+	__le16 tx_tail;
+	__le16 tx_head;
+	__le16 fbd_num;
+	__le16 ring_offset;
+	u8 rsv[14];
+};
+
+#define HCLGEVF_TSO_ENABLE_B	0
+struct hclgevf_cfg_tso_status_cmd {
+	u8 tso_enable;
+	u8 rsv[23];
+};
+
+#define HCLGEVF_TYPE_CRQ		0
+#define HCLGEVF_TYPE_CSQ		1
+#define HCLGEVF_NIC_CSQ_BASEADDR_L_REG	0x27000
+#define HCLGEVF_NIC_CSQ_BASEADDR_H_REG	0x27004
+#define HCLGEVF_NIC_CSQ_DEPTH_REG	0x27008
+#define HCLGEVF_NIC_CSQ_TAIL_REG	0x27010
+#define HCLGEVF_NIC_CSQ_HEAD_REG	0x27014
+#define HCLGEVF_NIC_CRQ_BASEADDR_L_REG	0x27018
+#define HCLGEVF_NIC_CRQ_BASEADDR_H_REG	0x2701c
+#define HCLGEVF_NIC_CRQ_DEPTH_REG	0x27020
+#define HCLGEVF_NIC_CRQ_TAIL_REG	0x27024
+#define HCLGEVF_NIC_CRQ_HEAD_REG	0x27028
+#define HCLGEVF_NIC_CMQ_EN_B		16
+#define HCLGEVF_NIC_CMQ_ENABLE		BIT(HCLGEVF_NIC_CMQ_EN_B)
+#define HCLGEVF_NIC_CMQ_DESC_NUM	1024
+#define HCLGEVF_NIC_CMQ_DESC_NUM_S	3
+#define HCLGEVF_NIC_CMDQ_INT_SRC_REG	0x27100
+
+static inline void hclgevf_write_reg(void __iomem *base, u32 reg, u32 value)
+{
+	writel(value, base + reg);
+}
+
+static inline u32 hclgevf_read_reg(u8 __iomem *base, u32 reg)
+{
+	u8 __iomem *reg_addr = READ_ONCE(base);
+
+	return readl(reg_addr + reg);
+}
+
+#define hclgevf_write_dev(a, reg, value) \
+	hclgevf_write_reg((a)->io_base, (reg), (value))
+#define hclgevf_read_dev(a, reg) \
+	hclgevf_read_reg((a)->io_base, (reg))
+
+#define HCLGEVF_SEND_SYNC(flag) \
+	((flag) & HCLGEVF_CMD_FLAG_NO_INTR)
+
+int hclgevf_cmd_init(struct hclgevf_dev *hdev);
+void hclgevf_cmd_uninit(struct hclgevf_dev *hdev);
+
+int hclgevf_cmd_send(struct hclgevf_hw *hw, struct hclgevf_desc *desc, int num);
+void hclgevf_cmd_setup_basic_desc(struct hclgevf_desc *desc,
+				  enum hclgevf_opcode_type opcode,
+				  bool is_read);
+#endif
+1490
drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
···
+// SPDX-License-Identifier: GPL-2.0+
+// Copyright (c) 2016-2017 Hisilicon Limited.
+
+#include <linux/etherdevice.h>
+#include "hclgevf_cmd.h"
+#include "hclgevf_main.h"
+#include "hclge_mbx.h"
+#include "hnae3.h"
+
+#define HCLGEVF_NAME	"hclgevf"
+
+static struct hnae3_ae_algo ae_algovf;
+
+static const struct pci_device_id ae_algovf_pci_tbl[] = {
+	{PCI_VDEVICE(HUAWEI, HNAE3_DEV_ID_100G_VF), 0},
+	{PCI_VDEVICE(HUAWEI, HNAE3_DEV_ID_100G_RDMA_DCB_PFC_VF), 0},
+	/* required last entry */
+	{0, }
+};
+
+static inline struct hclgevf_dev *hclgevf_ae_get_hdev(
+	struct hnae3_handle *handle)
+{
+	return container_of(handle, struct hclgevf_dev, nic);
+}
+
+static int hclgevf_tqps_update_stats(struct hnae3_handle *handle)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+	struct hnae3_queue *queue;
+	struct hclgevf_desc desc;
+	struct hclgevf_tqp *tqp;
+	int status;
+	int i;
+
+	for (i = 0; i < hdev->num_tqps; i++) {
+		queue = handle->kinfo.tqp[i];
+		tqp = container_of(queue, struct hclgevf_tqp, q);
+		hclgevf_cmd_setup_basic_desc(&desc,
+					     HCLGEVF_OPC_QUERY_RX_STATUS,
+					     true);
+
+		desc.data[0] = cpu_to_le32(tqp->index & 0x1ff);
+		status = hclgevf_cmd_send(&hdev->hw, &desc, 1);
+		if (status) {
+			dev_err(&hdev->pdev->dev,
+				"Query tqp stat fail, status = %d,queue = %d\n",
+				status,	i);
+			return status;
+		}
+		tqp->tqp_stats.rcb_rx_ring_pktnum_rcd +=
+			le32_to_cpu(desc.data[4]);
+
+		hclgevf_cmd_setup_basic_desc(&desc, HCLGEVF_OPC_QUERY_TX_STATUS,
+					     true);
+
+		desc.data[0] = cpu_to_le32(tqp->index & 0x1ff);
+		status = hclgevf_cmd_send(&hdev->hw, &desc, 1);
+		if (status) {
+			dev_err(&hdev->pdev->dev,
+				"Query tqp stat fail, status = %d,queue = %d\n",
+				status, i);
+			return status;
+		}
+		tqp->tqp_stats.rcb_tx_ring_pktnum_rcd +=
+			le32_to_cpu(desc.data[4]);
+	}
+
+	return 0;
+}
+
+static u64 *hclgevf_tqps_get_stats(struct hnae3_handle *handle, u64 *data)
+{
+	struct hnae3_knic_private_info *kinfo = &handle->kinfo;
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+	struct hclgevf_tqp *tqp;
+	u64 *buff = data;
+	int i;
+
+	for (i = 0; i < hdev->num_tqps; i++) {
+		tqp = container_of(handle->kinfo.tqp[i], struct hclgevf_tqp, q);
+		*buff++ = tqp->tqp_stats.rcb_tx_ring_pktnum_rcd;
+	}
+	for (i = 0; i < kinfo->num_tqps; i++) {
+		tqp = container_of(handle->kinfo.tqp[i], struct hclgevf_tqp, q);
+		*buff++ = tqp->tqp_stats.rcb_rx_ring_pktnum_rcd;
+	}
+
+	return buff;
+}
+
+static int hclgevf_tqps_get_sset_count(struct hnae3_handle *handle, int strset)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+
+	return hdev->num_tqps * 2;
+}
+
+static u8 *hclgevf_tqps_get_strings(struct hnae3_handle *handle, u8 *data)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+	u8 *buff = data;
+	int i = 0;
+
+	for (i = 0; i < hdev->num_tqps; i++) {
+		struct hclgevf_tqp *tqp = container_of(handle->kinfo.tqp[i],
+						       struct hclgevf_tqp, q);
+		snprintf(buff, ETH_GSTRING_LEN, "rcb_q%d_tx_pktnum_rcd",
+			 tqp->index);
+		buff += ETH_GSTRING_LEN;
+	}
+
+	for (i = 0; i < hdev->num_tqps; i++) {
+		struct hclgevf_tqp *tqp = container_of(handle->kinfo.tqp[i],
+						       struct hclgevf_tqp, q);
+		snprintf(buff, ETH_GSTRING_LEN, "rcb_q%d_rx_pktnum_rcd",
+			 tqp->index);
+		buff += ETH_GSTRING_LEN;
+	}
+
+	return buff;
+}
+
+static void hclgevf_update_stats(struct hnae3_handle *handle,
+				 struct net_device_stats *net_stats)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+	int status;
+
+	status = hclgevf_tqps_update_stats(handle);
+	if (status)
+		dev_err(&hdev->pdev->dev,
+			"VF update of TQPS stats fail, status = %d.\n",
+			status);
+}
+
+static int hclgevf_get_sset_count(struct hnae3_handle *handle, int strset)
+{
+	if (strset == ETH_SS_TEST)
+		return -EOPNOTSUPP;
+	else if (strset == ETH_SS_STATS)
+		return hclgevf_tqps_get_sset_count(handle, strset);
+
+	return 0;
+}
+
+static void hclgevf_get_strings(struct hnae3_handle *handle, u32 strset,
+				u8 *data)
+{
+	u8 *p = (char *)data;
+
+	if (strset == ETH_SS_STATS)
+		p = hclgevf_tqps_get_strings(handle, p);
+}
+
+static void hclgevf_get_stats(struct hnae3_handle *handle, u64 *data)
+{
+	hclgevf_tqps_get_stats(handle, data);
+}
+
+static int hclgevf_get_tc_info(struct hclgevf_dev *hdev)
+{
+	u8 resp_msg;
+	int status;
+
+	status = hclgevf_send_mbx_msg(hdev, HCLGE_MBX_GET_TCINFO, 0, NULL, 0,
+				      true, &resp_msg, sizeof(u8));
+	if (status) {
+		dev_err(&hdev->pdev->dev,
+			"VF request to get TC info from PF failed %d",
+			status);
+		return status;
+	}
+
+	hdev->hw_tc_map = resp_msg;
+
+	return 0;
+}
+
+static int hclge_get_queue_info(struct hclgevf_dev *hdev)
+{
+#define HCLGEVF_TQPS_RSS_INFO_LEN	8
+	u8 resp_msg[HCLGEVF_TQPS_RSS_INFO_LEN];
+	int status;
+
+	status = hclgevf_send_mbx_msg(hdev, HCLGE_MBX_GET_QINFO, 0, NULL, 0,
+				      true, resp_msg,
+				      HCLGEVF_TQPS_RSS_INFO_LEN);
+	if (status) {
+		dev_err(&hdev->pdev->dev,
+			"VF request to get tqp info from PF failed %d",
+			status);
+		return status;
+	}
+
+	memcpy(&hdev->num_tqps, &resp_msg[0], sizeof(u16));
+	memcpy(&hdev->rss_size_max, &resp_msg[2], sizeof(u16));
+	memcpy(&hdev->num_desc, &resp_msg[4], sizeof(u16));
+	memcpy(&hdev->rx_buf_len, &resp_msg[6], sizeof(u16));
+
+	return 0;
+}
+
+static int hclgevf_enable_tso(struct hclgevf_dev *hdev, int enable)
+{
+	struct hclgevf_cfg_tso_status_cmd *req;
+	struct hclgevf_desc desc;
+
+	req = (struct hclgevf_cfg_tso_status_cmd *)desc.data;
+
+	hclgevf_cmd_setup_basic_desc(&desc, HCLGEVF_OPC_TSO_GENERIC_CONFIG,
+				     false);
+	hnae_set_bit(req->tso_enable, HCLGEVF_TSO_ENABLE_B, enable);
+
+	return hclgevf_cmd_send(&hdev->hw, &desc, 1);
+}
+
+static int hclgevf_alloc_tqps(struct hclgevf_dev *hdev)
+{
+	struct hclgevf_tqp *tqp;
+	int i;
+
+	hdev->htqp = devm_kcalloc(&hdev->pdev->dev, hdev->num_tqps,
+				  sizeof(struct hclgevf_tqp), GFP_KERNEL);
+	if (!hdev->htqp)
+		return -ENOMEM;
+
+	tqp = hdev->htqp;
+
+	for (i = 0; i < hdev->num_tqps; i++) {
+		tqp->dev = &hdev->pdev->dev;
+		tqp->index = i;
+
+		tqp->q.ae_algo = &ae_algovf;
+		tqp->q.buf_size = hdev->rx_buf_len;
+		tqp->q.desc_num = hdev->num_desc;
+		tqp->q.io_base = hdev->hw.io_base + HCLGEVF_TQP_REG_OFFSET +
+			i * HCLGEVF_TQP_REG_SIZE;
+
+		tqp++;
+	}
+
+	return 0;
+}
+
+static int hclgevf_knic_setup(struct hclgevf_dev *hdev)
+{
+	struct hnae3_handle *nic = &hdev->nic;
+	struct hnae3_knic_private_info *kinfo;
+	u16 new_tqps = hdev->num_tqps;
+	int i;
+
+	kinfo = &nic->kinfo;
+	kinfo->num_tc = 0;
+	kinfo->num_desc = hdev->num_desc;
+	kinfo->rx_buf_len = hdev->rx_buf_len;
+	for (i = 0; i < HCLGEVF_MAX_TC_NUM; i++)
+		if (hdev->hw_tc_map & BIT(i))
+			kinfo->num_tc++;
+
+	kinfo->rss_size
+		= min_t(u16, hdev->rss_size_max, new_tqps / kinfo->num_tc);
+	new_tqps = kinfo->rss_size * kinfo->num_tc;
+	kinfo->num_tqps = min(new_tqps, hdev->num_tqps);
+
+	kinfo->tqp = devm_kcalloc(&hdev->pdev->dev, kinfo->num_tqps,
+				  sizeof(struct hnae3_queue *), GFP_KERNEL);
+	if (!kinfo->tqp)
+		return -ENOMEM;
+
+	for (i = 0; i < kinfo->num_tqps; i++) {
+		hdev->htqp[i].q.handle = &hdev->nic;
+		hdev->htqp[i].q.tqp_index = i;
+		kinfo->tqp[i] = &hdev->htqp[i].q;
+	}
+
+	return 0;
+}
+
+static void hclgevf_request_link_info(struct hclgevf_dev *hdev)
+{
+	int status;
+	u8 resp_msg;
+
+	status = hclgevf_send_mbx_msg(hdev, HCLGE_MBX_GET_LINK_STATUS, 0, NULL,
+				      0, false, &resp_msg, sizeof(u8));
+	if (status)
+		dev_err(&hdev->pdev->dev,
+			"VF failed to fetch link status(%d) from PF", status);
+}
+
+void hclgevf_update_link_status(struct hclgevf_dev *hdev, int link_state)
+{
+	struct hnae3_handle *handle = &hdev->nic;
+	struct hnae3_client *client;
+
+	client = handle->client;
+
+	if (link_state != hdev->hw.mac.link) {
+		client->ops->link_status_change(handle, !!link_state);
+		hdev->hw.mac.link = link_state;
+	}
+}
+
+static int hclgevf_set_handle_info(struct hclgevf_dev *hdev)
+{
+	struct hnae3_handle *nic = &hdev->nic;
+	int ret;
+
+	nic->ae_algo = &ae_algovf;
+	nic->pdev = hdev->pdev;
+	nic->numa_node_mask = hdev->numa_node_mask;
+	nic->flags |= HNAE3_SUPPORT_VF;
+
+	if (hdev->ae_dev->dev_type != HNAE3_DEV_KNIC) {
+		dev_err(&hdev->pdev->dev, "unsupported device type %d\n",
+			hdev->ae_dev->dev_type);
+		return -EINVAL;
+	}
+
+	ret = hclgevf_knic_setup(hdev);
+	if (ret)
+		dev_err(&hdev->pdev->dev, "VF knic setup failed %d\n",
+			ret);
+	return ret;
+}
+
+static void hclgevf_free_vector(struct hclgevf_dev *hdev, int vector_id)
+{
+	hdev->vector_status[vector_id] = HCLGEVF_INVALID_VPORT;
+	hdev->num_msi_left += 1;
+	hdev->num_msi_used -= 1;
+}
+
+static int hclgevf_get_vector(struct hnae3_handle *handle, u16 vector_num,
+			      struct hnae3_vector_info *vector_info)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+	struct hnae3_vector_info *vector = vector_info;
+	int alloc = 0;
+ int i, j; 342 + 343 + vector_num = min(hdev->num_msi_left, vector_num); 344 + 345 + for (j = 0; j < vector_num; j++) { 346 + for (i = HCLGEVF_MISC_VECTOR_NUM + 1; i < hdev->num_msi; i++) { 347 + if (hdev->vector_status[i] == HCLGEVF_INVALID_VPORT) { 348 + vector->vector = pci_irq_vector(hdev->pdev, i); 349 + vector->io_addr = hdev->hw.io_base + 350 + HCLGEVF_VECTOR_REG_BASE + 351 + (i - 1) * HCLGEVF_VECTOR_REG_OFFSET; 352 + hdev->vector_status[i] = 0; 353 + hdev->vector_irq[i] = vector->vector; 354 + 355 + vector++; 356 + alloc++; 357 + 358 + break; 359 + } 360 + } 361 + } 362 + hdev->num_msi_left -= alloc; 363 + hdev->num_msi_used += alloc; 364 + 365 + return alloc; 366 + } 367 + 368 + static int hclgevf_get_vector_index(struct hclgevf_dev *hdev, int vector) 369 + { 370 + int i; 371 + 372 + for (i = 0; i < hdev->num_msi; i++) 373 + if (vector == hdev->vector_irq[i]) 374 + return i; 375 + 376 + return -EINVAL; 377 + } 378 + 379 + static u32 hclgevf_get_rss_key_size(struct hnae3_handle *handle) 380 + { 381 + return HCLGEVF_RSS_KEY_SIZE; 382 + } 383 + 384 + static u32 hclgevf_get_rss_indir_size(struct hnae3_handle *handle) 385 + { 386 + return HCLGEVF_RSS_IND_TBL_SIZE; 387 + } 388 + 389 + static int hclgevf_set_rss_indir_table(struct hclgevf_dev *hdev) 390 + { 391 + const u8 *indir = hdev->rss_cfg.rss_indirection_tbl; 392 + struct hclgevf_rss_indirection_table_cmd *req; 393 + struct hclgevf_desc desc; 394 + int status; 395 + int i, j; 396 + 397 + req = (struct hclgevf_rss_indirection_table_cmd *)desc.data; 398 + 399 + for (i = 0; i < HCLGEVF_RSS_CFG_TBL_NUM; i++) { 400 + hclgevf_cmd_setup_basic_desc(&desc, HCLGEVF_OPC_RSS_INDIR_TABLE, 401 + false); 402 + req->start_table_index = i * HCLGEVF_RSS_CFG_TBL_SIZE; 403 + req->rss_set_bitmap = HCLGEVF_RSS_SET_BITMAP_MSK; 404 + for (j = 0; j < HCLGEVF_RSS_CFG_TBL_SIZE; j++) 405 + req->rss_result[j] = 406 + indir[i * HCLGEVF_RSS_CFG_TBL_SIZE + j]; 407 + 408 + status = hclgevf_cmd_send(&hdev->hw, &desc, 1); 409 + if (status) 
{ 410 + dev_err(&hdev->pdev->dev, 411 + "VF failed(=%d) to set RSS indirection table\n", 412 + status); 413 + return status; 414 + } 415 + } 416 + 417 + return 0; 418 + } 419 + 420 + static int hclgevf_set_rss_tc_mode(struct hclgevf_dev *hdev, u16 rss_size) 421 + { 422 + struct hclgevf_rss_tc_mode_cmd *req; 423 + u16 tc_offset[HCLGEVF_MAX_TC_NUM]; 424 + u16 tc_valid[HCLGEVF_MAX_TC_NUM]; 425 + u16 tc_size[HCLGEVF_MAX_TC_NUM]; 426 + struct hclgevf_desc desc; 427 + u16 roundup_size; 428 + int status; 429 + int i; 430 + 431 + req = (struct hclgevf_rss_tc_mode_cmd *)desc.data; 432 + 433 + roundup_size = roundup_pow_of_two(rss_size); 434 + roundup_size = ilog2(roundup_size); 435 + 436 + for (i = 0; i < HCLGEVF_MAX_TC_NUM; i++) { 437 + tc_valid[i] = !!(hdev->hw_tc_map & BIT(i)); 438 + tc_size[i] = roundup_size; 439 + tc_offset[i] = rss_size * i; 440 + } 441 + 442 + hclgevf_cmd_setup_basic_desc(&desc, HCLGEVF_OPC_RSS_TC_MODE, false); 443 + for (i = 0; i < HCLGEVF_MAX_TC_NUM; i++) { 444 + hnae_set_bit(req->rss_tc_mode[i], HCLGEVF_RSS_TC_VALID_B, 445 + (tc_valid[i] & 0x1)); 446 + hnae_set_field(req->rss_tc_mode[i], HCLGEVF_RSS_TC_SIZE_M, 447 + HCLGEVF_RSS_TC_SIZE_S, tc_size[i]); 448 + hnae_set_field(req->rss_tc_mode[i], HCLGEVF_RSS_TC_OFFSET_M, 449 + HCLGEVF_RSS_TC_OFFSET_S, tc_offset[i]); 450 + } 451 + status = hclgevf_cmd_send(&hdev->hw, &desc, 1); 452 + if (status) 453 + dev_err(&hdev->pdev->dev, 454 + "VF failed(=%d) to set rss tc mode\n", status); 455 + 456 + return status; 457 + } 458 + 459 + static int hclgevf_get_rss_hw_cfg(struct hnae3_handle *handle, u8 *hash, 460 + u8 *key) 461 + { 462 + struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle); 463 + struct hclgevf_rss_config_cmd *req; 464 + int lkup_times = key ? 3 : 1; 465 + struct hclgevf_desc desc; 466 + int key_offset; 467 + int key_size; 468 + int status; 469 + 470 + req = (struct hclgevf_rss_config_cmd *)desc.data; 471 + lkup_times = (lkup_times == 3) ? 3 : ((hash) ? 
1 : 0); 472 + 473 + for (key_offset = 0; key_offset < lkup_times; key_offset++) { 474 + hclgevf_cmd_setup_basic_desc(&desc, 475 + HCLGEVF_OPC_RSS_GENERIC_CONFIG, 476 + true); 477 + req->hash_config |= (key_offset << HCLGEVF_RSS_HASH_KEY_OFFSET); 478 + 479 + status = hclgevf_cmd_send(&hdev->hw, &desc, 1); 480 + if (status) { 481 + dev_err(&hdev->pdev->dev, 482 + "failed to get hardware RSS cfg, status = %d\n", 483 + status); 484 + return status; 485 + } 486 + 487 + if (key_offset == 2) 488 + key_size = 489 + HCLGEVF_RSS_KEY_SIZE - HCLGEVF_RSS_HASH_KEY_NUM * 2; 490 + else 491 + key_size = HCLGEVF_RSS_HASH_KEY_NUM; 492 + 493 + if (key) 494 + memcpy(key + key_offset * HCLGEVF_RSS_HASH_KEY_NUM, 495 + req->hash_key, 496 + key_size); 497 + } 498 + 499 + if (hash) { 500 + if ((req->hash_config & 0xf) == HCLGEVF_RSS_HASH_ALGO_TOEPLITZ) 501 + *hash = ETH_RSS_HASH_TOP; 502 + else 503 + *hash = ETH_RSS_HASH_UNKNOWN; 504 + } 505 + 506 + return 0; 507 + } 508 + 509 + static int hclgevf_get_rss(struct hnae3_handle *handle, u32 *indir, u8 *key, 510 + u8 *hfunc) 511 + { 512 + struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle); 513 + struct hclgevf_rss_cfg *rss_cfg = &hdev->rss_cfg; 514 + int i; 515 + 516 + if (indir) 517 + for (i = 0; i < HCLGEVF_RSS_IND_TBL_SIZE; i++) 518 + indir[i] = rss_cfg->rss_indirection_tbl[i]; 519 + 520 + return hclgevf_get_rss_hw_cfg(handle, hfunc, key); 521 + } 522 + 523 + static int hclgevf_set_rss(struct hnae3_handle *handle, const u32 *indir, 524 + const u8 *key, const u8 hfunc) 525 + { 526 + struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle); 527 + struct hclgevf_rss_cfg *rss_cfg = &hdev->rss_cfg; 528 + int i; 529 + 530 + /* update the shadow RSS table with user specified qids */ 531 + for (i = 0; i < HCLGEVF_RSS_IND_TBL_SIZE; i++) 532 + rss_cfg->rss_indirection_tbl[i] = indir[i]; 533 + 534 + /* update the hardware */ 535 + return hclgevf_set_rss_indir_table(hdev); 536 + } 537 + 538 + static int hclgevf_get_tc_size(struct hnae3_handle *handle) 
539 + { 540 + struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle); 541 + struct hclgevf_rss_cfg *rss_cfg = &hdev->rss_cfg; 542 + 543 + return rss_cfg->rss_size; 544 + } 545 + 546 + static int hclgevf_bind_ring_to_vector(struct hnae3_handle *handle, bool en, 547 + int vector, 548 + struct hnae3_ring_chain_node *ring_chain) 549 + { 550 + #define HCLGEVF_RING_NODE_VARIABLE_NUM 3 551 + #define HCLGEVF_RING_MAP_MBX_BASIC_MSG_NUM 3 552 + struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle); 553 + struct hnae3_ring_chain_node *node; 554 + struct hclge_mbx_vf_to_pf_cmd *req; 555 + struct hclgevf_desc desc; 556 + int i, vector_id; 557 + int status; 558 + u8 type; 559 + 560 + req = (struct hclge_mbx_vf_to_pf_cmd *)desc.data; 561 + vector_id = hclgevf_get_vector_index(hdev, vector); 562 + if (vector_id < 0) { 563 + dev_err(&handle->pdev->dev, 564 + "Get vector index fail. ret =%d\n", vector_id); 565 + return vector_id; 566 + } 567 + 568 + hclgevf_cmd_setup_basic_desc(&desc, HCLGEVF_OPC_MBX_VF_TO_PF, false); 569 + type = en ? 
570 + HCLGE_MBX_MAP_RING_TO_VECTOR : HCLGE_MBX_UNMAP_RING_TO_VECTOR; 571 + req->msg[0] = type; 572 + req->msg[1] = vector_id; /* vector_id should be id in VF */ 573 + 574 + i = 0; 575 + for (node = ring_chain; node; node = node->next) { 576 + i++; 577 + /* msg[2] is cause num */ 578 + req->msg[HCLGEVF_RING_NODE_VARIABLE_NUM * i] = 579 + hnae_get_bit(node->flag, HNAE3_RING_TYPE_B); 580 + req->msg[HCLGEVF_RING_NODE_VARIABLE_NUM * i + 1] = 581 + node->tqp_index; 582 + if (i == (HCLGE_MBX_VF_MSG_DATA_NUM - 583 + HCLGEVF_RING_MAP_MBX_BASIC_MSG_NUM) / 584 + HCLGEVF_RING_NODE_VARIABLE_NUM) { 585 + req->msg[2] = i; 586 + 587 + status = hclgevf_cmd_send(&hdev->hw, &desc, 1); 588 + if (status) { 589 + dev_err(&hdev->pdev->dev, 590 + "Map TQP fail, status is %d.\n", 591 + status); 592 + return status; 593 + } 594 + i = 0; 595 + hclgevf_cmd_setup_basic_desc(&desc, 596 + HCLGEVF_OPC_MBX_VF_TO_PF, 597 + false); 598 + req->msg[0] = type; 599 + req->msg[1] = vector_id; 600 + } 601 + } 602 + 603 + if (i > 0) { 604 + req->msg[2] = i; 605 + 606 + status = hclgevf_cmd_send(&hdev->hw, &desc, 1); 607 + if (status) { 608 + dev_err(&hdev->pdev->dev, 609 + "Map TQP fail, status is %d.\n", status); 610 + return status; 611 + } 612 + } 613 + 614 + return 0; 615 + } 616 + 617 + static int hclgevf_map_ring_to_vector(struct hnae3_handle *handle, int vector, 618 + struct hnae3_ring_chain_node *ring_chain) 619 + { 620 + return hclgevf_bind_ring_to_vector(handle, true, vector, ring_chain); 621 + } 622 + 623 + static int hclgevf_unmap_ring_from_vector( 624 + struct hnae3_handle *handle, 625 + int vector, 626 + struct hnae3_ring_chain_node *ring_chain) 627 + { 628 + struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle); 629 + int ret, vector_id; 630 + 631 + vector_id = hclgevf_get_vector_index(hdev, vector); 632 + if (vector_id < 0) { 633 + dev_err(&handle->pdev->dev, 634 + "Get vector index fail. 
ret =%d\n", vector_id); 635 + return vector_id; 636 + } 637 + 638 + ret = hclgevf_bind_ring_to_vector(handle, false, vector, ring_chain); 639 + if (ret) { 640 + dev_err(&handle->pdev->dev, 641 + "Unmap ring from vector fail. vector=%d, ret =%d\n", 642 + vector_id, 643 + ret); 644 + return ret; 645 + } 646 + 647 + hclgevf_free_vector(hdev, vector); 648 + 649 + return 0; 650 + } 651 + 652 + static int hclgevf_cmd_set_promisc_mode(struct hclgevf_dev *hdev, u32 en) 653 + { 654 + struct hclge_mbx_vf_to_pf_cmd *req; 655 + struct hclgevf_desc desc; 656 + int status; 657 + 658 + req = (struct hclge_mbx_vf_to_pf_cmd *)desc.data; 659 + 660 + hclgevf_cmd_setup_basic_desc(&desc, HCLGEVF_OPC_MBX_VF_TO_PF, false); 661 + req->msg[0] = HCLGE_MBX_SET_PROMISC_MODE; 662 + req->msg[1] = en; 663 + 664 + status = hclgevf_cmd_send(&hdev->hw, &desc, 1); 665 + if (status) 666 + dev_err(&hdev->pdev->dev, 667 + "Set promisc mode fail, status is %d.\n", status); 668 + 669 + return status; 670 + } 671 + 672 + static void hclgevf_set_promisc_mode(struct hnae3_handle *handle, u32 en) 673 + { 674 + struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle); 675 + 676 + hclgevf_cmd_set_promisc_mode(hdev, en); 677 + } 678 + 679 + static int hclgevf_tqp_enable(struct hclgevf_dev *hdev, int tqp_id, 680 + int stream_id, bool enable) 681 + { 682 + struct hclgevf_cfg_com_tqp_queue_cmd *req; 683 + struct hclgevf_desc desc; 684 + int status; 685 + 686 + req = (struct hclgevf_cfg_com_tqp_queue_cmd *)desc.data; 687 + 688 + hclgevf_cmd_setup_basic_desc(&desc, HCLGEVF_OPC_CFG_COM_TQP_QUEUE, 689 + false); 690 + req->tqp_id = cpu_to_le16(tqp_id & HCLGEVF_RING_ID_MASK); 691 + req->stream_id = cpu_to_le16(stream_id); 692 + req->enable |= enable << HCLGEVF_TQP_ENABLE_B; 693 + 694 + status = hclgevf_cmd_send(&hdev->hw, &desc, 1); 695 + if (status) 696 + dev_err(&hdev->pdev->dev, 697 + "TQP enable fail, status =%d.\n", status); 698 + 699 + return status; 700 + } 701 + 702 + static int hclgevf_get_queue_id(struct 
hnae3_queue *queue) 703 + { 704 + struct hclgevf_tqp *tqp = container_of(queue, struct hclgevf_tqp, q); 705 + 706 + return tqp->index; 707 + } 708 + 709 + static void hclgevf_reset_tqp_stats(struct hnae3_handle *handle) 710 + { 711 + struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle); 712 + struct hnae3_queue *queue; 713 + struct hclgevf_tqp *tqp; 714 + int i; 715 + 716 + for (i = 0; i < hdev->num_tqps; i++) { 717 + queue = handle->kinfo.tqp[i]; 718 + tqp = container_of(queue, struct hclgevf_tqp, q); 719 + memset(&tqp->tqp_stats, 0, sizeof(tqp->tqp_stats)); 720 + } 721 + } 722 + 723 + static int hclgevf_cfg_func_mta_filter(struct hnae3_handle *handle, bool en) 724 + { 725 + struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle); 726 + u8 msg[2] = {0}; 727 + 728 + msg[0] = en; 729 + return hclgevf_send_mbx_msg(hdev, HCLGE_MBX_SET_MULTICAST, 730 + HCLGE_MBX_MAC_VLAN_MC_FUNC_MTA_ENABLE, 731 + msg, 1, false, NULL, 0); 732 + } 733 + 734 + static void hclgevf_get_mac_addr(struct hnae3_handle *handle, u8 *p) 735 + { 736 + struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle); 737 + 738 + ether_addr_copy(p, hdev->hw.mac.mac_addr); 739 + } 740 + 741 + static int hclgevf_set_mac_addr(struct hnae3_handle *handle, void *p) 742 + { 743 + struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle); 744 + u8 *old_mac_addr = (u8 *)hdev->hw.mac.mac_addr; 745 + u8 *new_mac_addr = (u8 *)p; 746 + u8 msg_data[ETH_ALEN * 2]; 747 + int status; 748 + 749 + ether_addr_copy(msg_data, new_mac_addr); 750 + ether_addr_copy(&msg_data[ETH_ALEN], old_mac_addr); 751 + 752 + status = hclgevf_send_mbx_msg(hdev, HCLGE_MBX_SET_UNICAST, 753 + HCLGE_MBX_MAC_VLAN_UC_MODIFY, 754 + msg_data, ETH_ALEN * 2, 755 + false, NULL, 0); 756 + if (!status) 757 + ether_addr_copy(hdev->hw.mac.mac_addr, new_mac_addr); 758 + 759 + return status; 760 + } 761 + 762 + static int hclgevf_add_uc_addr(struct hnae3_handle *handle, 763 + const unsigned char *addr) 764 + { 765 + struct hclgevf_dev *hdev = 
hclgevf_ae_get_hdev(handle); 766 + 767 + return hclgevf_send_mbx_msg(hdev, HCLGE_MBX_SET_UNICAST, 768 + HCLGE_MBX_MAC_VLAN_UC_ADD, 769 + addr, ETH_ALEN, false, NULL, 0); 770 + } 771 + 772 + static int hclgevf_rm_uc_addr(struct hnae3_handle *handle, 773 + const unsigned char *addr) 774 + { 775 + struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle); 776 + 777 + return hclgevf_send_mbx_msg(hdev, HCLGE_MBX_SET_UNICAST, 778 + HCLGE_MBX_MAC_VLAN_UC_REMOVE, 779 + addr, ETH_ALEN, false, NULL, 0); 780 + } 781 + 782 + static int hclgevf_add_mc_addr(struct hnae3_handle *handle, 783 + const unsigned char *addr) 784 + { 785 + struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle); 786 + 787 + return hclgevf_send_mbx_msg(hdev, HCLGE_MBX_SET_MULTICAST, 788 + HCLGE_MBX_MAC_VLAN_MC_ADD, 789 + addr, ETH_ALEN, false, NULL, 0); 790 + } 791 + 792 + static int hclgevf_rm_mc_addr(struct hnae3_handle *handle, 793 + const unsigned char *addr) 794 + { 795 + struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle); 796 + 797 + return hclgevf_send_mbx_msg(hdev, HCLGE_MBX_SET_MULTICAST, 798 + HCLGE_MBX_MAC_VLAN_MC_REMOVE, 799 + addr, ETH_ALEN, false, NULL, 0); 800 + } 801 + 802 + static int hclgevf_set_vlan_filter(struct hnae3_handle *handle, 803 + __be16 proto, u16 vlan_id, 804 + bool is_kill) 805 + { 806 + #define HCLGEVF_VLAN_MBX_MSG_LEN 5 807 + struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle); 808 + u8 msg_data[HCLGEVF_VLAN_MBX_MSG_LEN]; 809 + 810 + if (vlan_id > 4095) 811 + return -EINVAL; 812 + 813 + if (proto != htons(ETH_P_8021Q)) 814 + return -EPROTONOSUPPORT; 815 + 816 + msg_data[0] = is_kill; 817 + memcpy(&msg_data[1], &vlan_id, sizeof(vlan_id)); 818 + memcpy(&msg_data[3], &proto, sizeof(proto)); 819 + return hclgevf_send_mbx_msg(hdev, HCLGE_MBX_SET_VLAN, 820 + HCLGE_MBX_VLAN_FILTER, msg_data, 821 + HCLGEVF_VLAN_MBX_MSG_LEN, false, NULL, 0); 822 + } 823 + 824 + static void hclgevf_reset_tqp(struct hnae3_handle *handle, u16 queue_id) 825 + { 826 + struct hclgevf_dev *hdev = 
hclgevf_ae_get_hdev(handle); 827 + u8 msg_data[2]; 828 + 829 + memcpy(&msg_data[0], &queue_id, sizeof(queue_id)); 830 + 831 + hclgevf_send_mbx_msg(hdev, HCLGE_MBX_QUEUE_RESET, 0, msg_data, 2, false, 832 + NULL, 0); 833 + } 834 + 835 + static u32 hclgevf_get_fw_version(struct hnae3_handle *handle) 836 + { 837 + struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle); 838 + 839 + return hdev->fw_version; 840 + } 841 + 842 + static void hclgevf_get_misc_vector(struct hclgevf_dev *hdev) 843 + { 844 + struct hclgevf_misc_vector *vector = &hdev->misc_vector; 845 + 846 + vector->vector_irq = pci_irq_vector(hdev->pdev, 847 + HCLGEVF_MISC_VECTOR_NUM); 848 + vector->addr = hdev->hw.io_base + HCLGEVF_MISC_VECTOR_REG_BASE; 849 + /* vector status always valid for Vector 0 */ 850 + hdev->vector_status[HCLGEVF_MISC_VECTOR_NUM] = 0; 851 + hdev->vector_irq[HCLGEVF_MISC_VECTOR_NUM] = vector->vector_irq; 852 + 853 + hdev->num_msi_left -= 1; 854 + hdev->num_msi_used += 1; 855 + } 856 + 857 + static void hclgevf_mbx_task_schedule(struct hclgevf_dev *hdev) 858 + { 859 + if (!test_and_set_bit(HCLGEVF_STATE_MBX_SERVICE_SCHED, &hdev->state)) 860 + schedule_work(&hdev->mbx_service_task); 861 + } 862 + 863 + static void hclgevf_task_schedule(struct hclgevf_dev *hdev) 864 + { 865 + if (!test_bit(HCLGEVF_STATE_DOWN, &hdev->state) && 866 + !test_and_set_bit(HCLGEVF_STATE_SERVICE_SCHED, &hdev->state)) 867 + schedule_work(&hdev->service_task); 868 + } 869 + 870 + static void hclgevf_service_timer(struct timer_list *t) 871 + { 872 + struct hclgevf_dev *hdev = from_timer(hdev, t, service_timer); 873 + 874 + mod_timer(&hdev->service_timer, jiffies + 5 * HZ); 875 + 876 + hclgevf_task_schedule(hdev); 877 + } 878 + 879 + static void hclgevf_mailbox_service_task(struct work_struct *work) 880 + { 881 + struct hclgevf_dev *hdev; 882 + 883 + hdev = container_of(work, struct hclgevf_dev, mbx_service_task); 884 + 885 + if (test_and_set_bit(HCLGEVF_STATE_MBX_HANDLING, &hdev->state)) 886 + return; 887 + 888 + 
clear_bit(HCLGEVF_STATE_MBX_SERVICE_SCHED, &hdev->state); 889 + 890 + hclgevf_mbx_handler(hdev); 891 + 892 + clear_bit(HCLGEVF_STATE_MBX_HANDLING, &hdev->state); 893 + } 894 + 895 + static void hclgevf_service_task(struct work_struct *work) 896 + { 897 + struct hclgevf_dev *hdev; 898 + 899 + hdev = container_of(work, struct hclgevf_dev, service_task); 900 + 901 + /* request the link status from the PF. PF would be able to tell VF 902 + * about such updates in future so we might remove this later 903 + */ 904 + hclgevf_request_link_info(hdev); 905 + 906 + clear_bit(HCLGEVF_STATE_SERVICE_SCHED, &hdev->state); 907 + } 908 + 909 + static void hclgevf_clear_event_cause(struct hclgevf_dev *hdev, u32 regclr) 910 + { 911 + hclgevf_write_dev(&hdev->hw, HCLGEVF_VECTOR0_CMDQ_SRC_REG, regclr); 912 + } 913 + 914 + static bool hclgevf_check_event_cause(struct hclgevf_dev *hdev, u32 *clearval) 915 + { 916 + u32 cmdq_src_reg; 917 + 918 + /* fetch the events from their corresponding regs */ 919 + cmdq_src_reg = hclgevf_read_dev(&hdev->hw, 920 + HCLGEVF_VECTOR0_CMDQ_SRC_REG); 921 + 922 + /* check for vector0 mailbox(=CMDQ RX) event source */ 923 + if (BIT(HCLGEVF_VECTOR0_RX_CMDQ_INT_B) & cmdq_src_reg) { 924 + cmdq_src_reg &= ~BIT(HCLGEVF_VECTOR0_RX_CMDQ_INT_B); 925 + *clearval = cmdq_src_reg; 926 + return true; 927 + } 928 + 929 + dev_dbg(&hdev->pdev->dev, "vector 0 interrupt from unknown source\n"); 930 + 931 + return false; 932 + } 933 + 934 + static void hclgevf_enable_vector(struct hclgevf_misc_vector *vector, bool en) 935 + { 936 + writel(en ? 
1 : 0, vector->addr); 937 + } 938 + 939 + static irqreturn_t hclgevf_misc_irq_handle(int irq, void *data) 940 + { 941 + struct hclgevf_dev *hdev = data; 942 + u32 clearval; 943 + 944 + hclgevf_enable_vector(&hdev->misc_vector, false); 945 + if (!hclgevf_check_event_cause(hdev, &clearval)) 946 + goto skip_sched; 947 + 948 + /* schedule the VF mailbox service task, if not already scheduled */ 949 + hclgevf_mbx_task_schedule(hdev); 950 + 951 + hclgevf_clear_event_cause(hdev, clearval); 952 + 953 + skip_sched: 954 + hclgevf_enable_vector(&hdev->misc_vector, true); 955 + 956 + return IRQ_HANDLED; 957 + } 958 + 959 + static int hclgevf_configure(struct hclgevf_dev *hdev) 960 + { 961 + int ret; 962 + 963 + /* get queue configuration from PF */ 964 + ret = hclge_get_queue_info(hdev); 965 + if (ret) 966 + return ret; 967 + /* get tc configuration from PF */ 968 + return hclgevf_get_tc_info(hdev); 969 + } 970 + 971 + static int hclgevf_init_roce_base_info(struct hclgevf_dev *hdev) 972 + { 973 + struct hnae3_handle *roce = &hdev->roce; 974 + struct hnae3_handle *nic = &hdev->nic; 975 + 976 + roce->rinfo.num_vectors = HCLGEVF_ROCEE_VECTOR_NUM; 977 + 978 + if (hdev->num_msi_left < roce->rinfo.num_vectors || 979 + hdev->num_msi_left == 0) 980 + return -EINVAL; 981 + 982 + roce->rinfo.base_vector = 983 + hdev->vector_status[hdev->num_msi_used]; 984 + 985 + roce->rinfo.netdev = nic->kinfo.netdev; 986 + roce->rinfo.roce_io_base = hdev->hw.io_base; 987 + 988 + roce->pdev = nic->pdev; 989 + roce->ae_algo = nic->ae_algo; 990 + roce->numa_node_mask = nic->numa_node_mask; 991 + 992 + return 0; 993 + } 994 + 995 + static int hclgevf_rss_init_hw(struct hclgevf_dev *hdev) 996 + { 997 + struct hclgevf_rss_cfg *rss_cfg = &hdev->rss_cfg; 998 + int i, ret; 999 + 1000 + rss_cfg->rss_size = hdev->rss_size_max; 1001 + 1002 + /* Initialize RSS indirect table for each vport */ 1003 + for (i = 0; i < HCLGEVF_RSS_IND_TBL_SIZE; i++) 1004 + rss_cfg->rss_indirection_tbl[i] = i % hdev->rss_size_max; 1005 
+ 1006 + ret = hclgevf_set_rss_indir_table(hdev); 1007 + if (ret) 1008 + return ret; 1009 + 1010 + return hclgevf_set_rss_tc_mode(hdev, hdev->rss_size_max); 1011 + } 1012 + 1013 + static int hclgevf_init_vlan_config(struct hclgevf_dev *hdev) 1014 + { 1015 + /* other vlan config(like, VLAN TX/RX offload) would also be added 1016 + * here later 1017 + */ 1018 + return hclgevf_set_vlan_filter(&hdev->nic, htons(ETH_P_8021Q), 0, 1019 + false); 1020 + } 1021 + 1022 + static int hclgevf_ae_start(struct hnae3_handle *handle) 1023 + { 1024 + struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle); 1025 + int i, queue_id; 1026 + 1027 + for (i = 0; i < handle->kinfo.num_tqps; i++) { 1028 + /* ring enable */ 1029 + queue_id = hclgevf_get_queue_id(handle->kinfo.tqp[i]); 1030 + if (queue_id < 0) { 1031 + dev_warn(&hdev->pdev->dev, 1032 + "Get invalid queue id, ignore it\n"); 1033 + continue; 1034 + } 1035 + 1036 + hclgevf_tqp_enable(hdev, queue_id, 0, true); 1037 + } 1038 + 1039 + /* reset tqp stats */ 1040 + hclgevf_reset_tqp_stats(handle); 1041 + 1042 + hclgevf_request_link_info(hdev); 1043 + 1044 + clear_bit(HCLGEVF_STATE_DOWN, &hdev->state); 1045 + mod_timer(&hdev->service_timer, jiffies + HZ); 1046 + 1047 + return 0; 1048 + } 1049 + 1050 + static void hclgevf_ae_stop(struct hnae3_handle *handle) 1051 + { 1052 + struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle); 1053 + int i, queue_id; 1054 + 1055 + for (i = 0; i < hdev->num_tqps; i++) { 1056 + /* Ring disable */ 1057 + queue_id = hclgevf_get_queue_id(handle->kinfo.tqp[i]); 1058 + if (queue_id < 0) { 1059 + dev_warn(&hdev->pdev->dev, 1060 + "Get invalid queue id, ignore it\n"); 1061 + continue; 1062 + } 1063 + 1064 + hclgevf_tqp_enable(hdev, queue_id, 0, false); 1065 + } 1066 + 1067 + /* reset tqp stats */ 1068 + hclgevf_reset_tqp_stats(handle); 1069 + } 1070 + 1071 + static void hclgevf_state_init(struct hclgevf_dev *hdev) 1072 + { 1073 + /* setup tasks for the MBX */ 1074 + INIT_WORK(&hdev->mbx_service_task, 
hclgevf_mailbox_service_task); 1075 + clear_bit(HCLGEVF_STATE_MBX_SERVICE_SCHED, &hdev->state); 1076 + clear_bit(HCLGEVF_STATE_MBX_HANDLING, &hdev->state); 1077 + 1078 + /* setup tasks for service timer */ 1079 + timer_setup(&hdev->service_timer, hclgevf_service_timer, 0); 1080 + 1081 + INIT_WORK(&hdev->service_task, hclgevf_service_task); 1082 + clear_bit(HCLGEVF_STATE_SERVICE_SCHED, &hdev->state); 1083 + 1084 + mutex_init(&hdev->mbx_resp.mbx_mutex); 1085 + 1086 + /* bring the device down */ 1087 + set_bit(HCLGEVF_STATE_DOWN, &hdev->state); 1088 + } 1089 + 1090 + static void hclgevf_state_uninit(struct hclgevf_dev *hdev) 1091 + { 1092 + set_bit(HCLGEVF_STATE_DOWN, &hdev->state); 1093 + 1094 + if (hdev->service_timer.function) 1095 + del_timer_sync(&hdev->service_timer); 1096 + if (hdev->service_task.func) 1097 + cancel_work_sync(&hdev->service_task); 1098 + if (hdev->mbx_service_task.func) 1099 + cancel_work_sync(&hdev->mbx_service_task); 1100 + 1101 + mutex_destroy(&hdev->mbx_resp.mbx_mutex); 1102 + } 1103 + 1104 + static int hclgevf_init_msi(struct hclgevf_dev *hdev) 1105 + { 1106 + struct pci_dev *pdev = hdev->pdev; 1107 + int vectors; 1108 + int i; 1109 + 1110 + hdev->num_msi = HCLGEVF_MAX_VF_VECTOR_NUM; 1111 + 1112 + vectors = pci_alloc_irq_vectors(pdev, 1, hdev->num_msi, 1113 + PCI_IRQ_MSI | PCI_IRQ_MSIX); 1114 + if (vectors < 0) { 1115 + dev_err(&pdev->dev, 1116 + "failed(%d) to allocate MSI/MSI-X vectors\n", 1117 + vectors); 1118 + return vectors; 1119 + } 1120 + if (vectors < hdev->num_msi) 1121 + dev_warn(&hdev->pdev->dev, 1122 + "requested %d MSI/MSI-X, but allocated %d MSI/MSI-X\n", 1123 + hdev->num_msi, vectors); 1124 + 1125 + hdev->num_msi = vectors; 1126 + hdev->num_msi_left = vectors; 1127 + hdev->base_msi_vector = pdev->irq; 1128 + 1129 + hdev->vector_status = devm_kcalloc(&pdev->dev, hdev->num_msi, 1130 + sizeof(u16), GFP_KERNEL); 1131 + if (!hdev->vector_status) { 1132 + pci_free_irq_vectors(pdev); 1133 + return -ENOMEM; 1134 + } 1135 + 1136 + 
for (i = 0; i < hdev->num_msi; i++) 1137 + hdev->vector_status[i] = HCLGEVF_INVALID_VPORT; 1138 + 1139 + hdev->vector_irq = devm_kcalloc(&pdev->dev, hdev->num_msi, 1140 + sizeof(int), GFP_KERNEL); 1141 + if (!hdev->vector_irq) { 1142 + pci_free_irq_vectors(pdev); 1143 + return -ENOMEM; 1144 + } 1145 + 1146 + return 0; 1147 + } 1148 + 1149 + static void hclgevf_uninit_msi(struct hclgevf_dev *hdev) 1150 + { 1151 + struct pci_dev *pdev = hdev->pdev; 1152 + 1153 + pci_free_irq_vectors(pdev); 1154 + } 1155 + 1156 + static int hclgevf_misc_irq_init(struct hclgevf_dev *hdev) 1157 + { 1158 + int ret = 0; 1159 + 1160 + hclgevf_get_misc_vector(hdev); 1161 + 1162 + ret = request_irq(hdev->misc_vector.vector_irq, hclgevf_misc_irq_handle, 1163 + 0, "hclgevf_cmd", hdev); 1164 + if (ret) { 1165 + dev_err(&hdev->pdev->dev, "VF failed to request misc irq(%d)\n", 1166 + hdev->misc_vector.vector_irq); 1167 + return ret; 1168 + } 1169 + 1170 + /* enable misc. vector(vector 0) */ 1171 + hclgevf_enable_vector(&hdev->misc_vector, true); 1172 + 1173 + return ret; 1174 + } 1175 + 1176 + static void hclgevf_misc_irq_uninit(struct hclgevf_dev *hdev) 1177 + { 1178 + /* disable misc vector(vector 0) */ 1179 + hclgevf_enable_vector(&hdev->misc_vector, false); 1180 + free_irq(hdev->misc_vector.vector_irq, hdev); 1181 + hclgevf_free_vector(hdev, 0); 1182 + } 1183 + 1184 + static int hclgevf_init_instance(struct hclgevf_dev *hdev, 1185 + struct hnae3_client *client) 1186 + { 1187 + int ret; 1188 + 1189 + switch (client->type) { 1190 + case HNAE3_CLIENT_KNIC: 1191 + hdev->nic_client = client; 1192 + hdev->nic.client = client; 1193 + 1194 + ret = client->ops->init_instance(&hdev->nic); 1195 + if (ret) 1196 + return ret; 1197 + 1198 + if (hdev->roce_client && hnae3_dev_roce_supported(hdev)) { 1199 + struct hnae3_client *rc = hdev->roce_client; 1200 + 1201 + ret = hclgevf_init_roce_base_info(hdev); 1202 + if (ret) 1203 + return ret; 1204 + ret = rc->ops->init_instance(&hdev->roce); 1205 + if (ret) 
1206 + return ret; 1207 + } 1208 + break; 1209 + case HNAE3_CLIENT_UNIC: 1210 + hdev->nic_client = client; 1211 + hdev->nic.client = client; 1212 + 1213 + ret = client->ops->init_instance(&hdev->nic); 1214 + if (ret) 1215 + return ret; 1216 + break; 1217 + case HNAE3_CLIENT_ROCE: 1218 + hdev->roce_client = client; 1219 + hdev->roce.client = client; 1220 + 1221 + if (hdev->roce_client && hnae3_dev_roce_supported(hdev)) { 1222 + ret = hclgevf_init_roce_base_info(hdev); 1223 + if (ret) 1224 + return ret; 1225 + 1226 + ret = client->ops->init_instance(&hdev->roce); 1227 + if (ret) 1228 + return ret; 1229 + } 1230 + } 1231 + 1232 + return 0; 1233 + } 1234 + 1235 + static void hclgevf_uninit_instance(struct hclgevf_dev *hdev, 1236 + struct hnae3_client *client) 1237 + { 1238 + /* un-init roce, if it exists */ 1239 + if (hdev->roce_client) 1240 + hdev->roce_client->ops->uninit_instance(&hdev->roce, 0); 1241 + 1242 + /* un-init nic/unic, if this was not called by roce client */ 1243 + if ((client->ops->uninit_instance) && 1244 + (client->type != HNAE3_CLIENT_ROCE)) 1245 + client->ops->uninit_instance(&hdev->nic, 0); 1246 + } 1247 + 1248 + static int hclgevf_register_client(struct hnae3_client *client, 1249 + struct hnae3_ae_dev *ae_dev) 1250 + { 1251 + struct hclgevf_dev *hdev = ae_dev->priv; 1252 + 1253 + return hclgevf_init_instance(hdev, client); 1254 + } 1255 + 1256 + static void hclgevf_unregister_client(struct hnae3_client *client, 1257 + struct hnae3_ae_dev *ae_dev) 1258 + { 1259 + struct hclgevf_dev *hdev = ae_dev->priv; 1260 + 1261 + hclgevf_uninit_instance(hdev, client); 1262 + } 1263 + 1264 + static int hclgevf_pci_init(struct hclgevf_dev *hdev) 1265 + { 1266 + struct pci_dev *pdev = hdev->pdev; 1267 + struct hclgevf_hw *hw; 1268 + int ret; 1269 + 1270 + ret = pci_enable_device(pdev); 1271 + if (ret) { 1272 + dev_err(&pdev->dev, "failed to enable PCI device\n"); 1273 + goto err_no_drvdata; 1274 + } 1275 + 1276 + ret = dma_set_mask_and_coherent(&pdev->dev, 
DMA_BIT_MASK(64)); 1277 + if (ret) { 1278 + dev_err(&pdev->dev, "can't set consistent PCI DMA, exiting\n"); 1279 + goto err_disable_device; 1280 + } 1281 + 1282 + ret = pci_request_regions(pdev, HCLGEVF_DRIVER_NAME); 1283 + if (ret) { 1284 + dev_err(&pdev->dev, "PCI request regions failed %d\n", ret); 1285 + goto err_disable_device; 1286 + } 1287 + 1288 + pci_set_master(pdev); 1289 + hw = &hdev->hw; 1290 + hw->hdev = hdev; 1291 + hw->io_base = pci_iomap(pdev, 2, 0); 1292 + if (!hw->io_base) { 1293 + dev_err(&pdev->dev, "can't map configuration register space\n"); 1294 + ret = -ENOMEM; 1295 + goto err_clr_master; 1296 + } 1297 + 1298 + return 0; 1299 + 1300 + err_clr_master: 1301 + pci_clear_master(pdev); 1302 + pci_release_regions(pdev); 1303 + err_disable_device: 1304 + pci_disable_device(pdev); 1305 + err_no_drvdata: 1306 + pci_set_drvdata(pdev, NULL); 1307 + return ret; 1308 + } 1309 + 1310 + static void hclgevf_pci_uninit(struct hclgevf_dev *hdev) 1311 + { 1312 + struct pci_dev *pdev = hdev->pdev; 1313 + 1314 + pci_iounmap(pdev, hdev->hw.io_base); 1315 + pci_clear_master(pdev); 1316 + pci_release_regions(pdev); 1317 + pci_disable_device(pdev); 1318 + pci_set_drvdata(pdev, NULL); 1319 + } 1320 + 1321 + static int hclgevf_init_ae_dev(struct hnae3_ae_dev *ae_dev) 1322 + { 1323 + struct pci_dev *pdev = ae_dev->pdev; 1324 + struct hclgevf_dev *hdev; 1325 + int ret; 1326 + 1327 + hdev = devm_kzalloc(&pdev->dev, sizeof(*hdev), GFP_KERNEL); 1328 + if (!hdev) 1329 + return -ENOMEM; 1330 + 1331 + hdev->pdev = pdev; 1332 + hdev->ae_dev = ae_dev; 1333 + ae_dev->priv = hdev; 1334 + 1335 + ret = hclgevf_pci_init(hdev); 1336 + if (ret) { 1337 + dev_err(&pdev->dev, "PCI initialization failed\n"); 1338 + return ret; 1339 + } 1340 + 1341 + ret = hclgevf_init_msi(hdev); 1342 + if (ret) { 1343 + dev_err(&pdev->dev, "failed(%d) to init MSI/MSI-X\n", ret); 1344 + goto err_irq_init; 1345 + } 1346 + 1347 + hclgevf_state_init(hdev); 1348 + 1349 + ret = hclgevf_misc_irq_init(hdev); 1350 
+ if (ret) { 1351 + dev_err(&pdev->dev, "failed(%d) to init Misc IRQ(vector0)\n", 1352 + ret); 1353 + goto err_misc_irq_init; 1354 + } 1355 + 1356 + ret = hclgevf_cmd_init(hdev); 1357 + if (ret) 1358 + goto err_cmd_init; 1359 + 1360 + ret = hclgevf_configure(hdev); 1361 + if (ret) { 1362 + dev_err(&pdev->dev, "failed(%d) to fetch configuration\n", ret); 1363 + goto err_config; 1364 + } 1365 + 1366 + ret = hclgevf_alloc_tqps(hdev); 1367 + if (ret) { 1368 + dev_err(&pdev->dev, "failed(%d) to allocate TQPs\n", ret); 1369 + goto err_config; 1370 + } 1371 + 1372 + ret = hclgevf_set_handle_info(hdev); 1373 + if (ret) { 1374 + dev_err(&pdev->dev, "failed(%d) to set handle info\n", ret); 1375 + goto err_config; 1376 + } 1377 + 1378 + ret = hclgevf_enable_tso(hdev, true); 1379 + if (ret) { 1380 + dev_err(&pdev->dev, "failed(%d) to enable tso\n", ret); 1381 + goto err_config; 1382 + } 1383 + 1384 + /* Initialize VF's MTA */ 1385 + hdev->accept_mta_mc = true; 1386 + ret = hclgevf_cfg_func_mta_filter(&hdev->nic, hdev->accept_mta_mc); 1387 + if (ret) { 1388 + dev_err(&hdev->pdev->dev, 1389 + "failed(%d) to set mta filter mode\n", ret); 1390 + goto err_config; 1391 + } 1392 + 1393 + /* Initialize RSS for this VF */ 1394 + ret = hclgevf_rss_init_hw(hdev); 1395 + if (ret) { 1396 + dev_err(&hdev->pdev->dev, 1397 + "failed(%d) to initialize RSS\n", ret); 1398 + goto err_config; 1399 + } 1400 + 1401 + ret = hclgevf_init_vlan_config(hdev); 1402 + if (ret) { 1403 + dev_err(&hdev->pdev->dev, 1404 + "failed(%d) to initialize VLAN config\n", ret); 1405 + goto err_config; 1406 + } 1407 + 1408 + pr_info("finished initializing %s driver\n", HCLGEVF_DRIVER_NAME); 1409 + 1410 + return 0; 1411 + 1412 + err_config: 1413 + hclgevf_cmd_uninit(hdev); 1414 + err_cmd_init: 1415 + hclgevf_misc_irq_uninit(hdev); 1416 + err_misc_irq_init: 1417 + hclgevf_state_uninit(hdev); 1418 + hclgevf_uninit_msi(hdev); 1419 + err_irq_init: 1420 + hclgevf_pci_uninit(hdev); 1421 + return ret; 1422 + } 1423 + 1424 + 
static void hclgevf_uninit_ae_dev(struct hnae3_ae_dev *ae_dev) 1425 + { 1426 + struct hclgevf_dev *hdev = ae_dev->priv; 1427 + 1428 + hclgevf_cmd_uninit(hdev); 1429 + hclgevf_misc_irq_uninit(hdev); 1430 + hclgevf_state_uninit(hdev); 1431 + hclgevf_uninit_msi(hdev); 1432 + hclgevf_pci_uninit(hdev); 1433 + ae_dev->priv = NULL; 1434 + } 1435 + 1436 + static const struct hnae3_ae_ops hclgevf_ops = { 1437 + .init_ae_dev = hclgevf_init_ae_dev, 1438 + .uninit_ae_dev = hclgevf_uninit_ae_dev, 1439 + .init_client_instance = hclgevf_register_client, 1440 + .uninit_client_instance = hclgevf_unregister_client, 1441 + .start = hclgevf_ae_start, 1442 + .stop = hclgevf_ae_stop, 1443 + .map_ring_to_vector = hclgevf_map_ring_to_vector, 1444 + .unmap_ring_from_vector = hclgevf_unmap_ring_from_vector, 1445 + .get_vector = hclgevf_get_vector, 1446 + .reset_queue = hclgevf_reset_tqp, 1447 + .set_promisc_mode = hclgevf_set_promisc_mode, 1448 + .get_mac_addr = hclgevf_get_mac_addr, 1449 + .set_mac_addr = hclgevf_set_mac_addr, 1450 + .add_uc_addr = hclgevf_add_uc_addr, 1451 + .rm_uc_addr = hclgevf_rm_uc_addr, 1452 + .add_mc_addr = hclgevf_add_mc_addr, 1453 + .rm_mc_addr = hclgevf_rm_mc_addr, 1454 + .get_stats = hclgevf_get_stats, 1455 + .update_stats = hclgevf_update_stats, 1456 + .get_strings = hclgevf_get_strings, 1457 + .get_sset_count = hclgevf_get_sset_count, 1458 + .get_rss_key_size = hclgevf_get_rss_key_size, 1459 + .get_rss_indir_size = hclgevf_get_rss_indir_size, 1460 + .get_rss = hclgevf_get_rss, 1461 + .set_rss = hclgevf_set_rss, 1462 + .get_tc_size = hclgevf_get_tc_size, 1463 + .get_fw_version = hclgevf_get_fw_version, 1464 + .set_vlan_filter = hclgevf_set_vlan_filter, 1465 + }; 1466 + 1467 + static struct hnae3_ae_algo ae_algovf = { 1468 + .ops = &hclgevf_ops, 1469 + .name = HCLGEVF_NAME, 1470 + .pdev_id_table = ae_algovf_pci_tbl, 1471 + }; 1472 + 1473 + static int hclgevf_init(void) 1474 + { 1475 + pr_info("%s is initializing\n", HCLGEVF_NAME); 1476 + 1477 + return 
hnae3_register_ae_algo(&ae_algovf); 1478 + } 1479 + 1480 + static void hclgevf_exit(void) 1481 + { 1482 + hnae3_unregister_ae_algo(&ae_algovf); 1483 + } 1484 + module_init(hclgevf_init); 1485 + module_exit(hclgevf_exit); 1486 + 1487 + MODULE_LICENSE("GPL"); 1488 + MODULE_AUTHOR("Huawei Tech. Co., Ltd."); 1489 + MODULE_DESCRIPTION("HCLGEVF Driver"); 1490 + MODULE_VERSION(HCLGEVF_MOD_VERSION);
drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h (164 lines added)
/* SPDX-License-Identifier: GPL-2.0+ */
/* Copyright (c) 2016-2017 Hisilicon Limited. */

#ifndef __HCLGEVF_MAIN_H
#define __HCLGEVF_MAIN_H
#include <linux/fs.h>
#include <linux/types.h>
#include "hclge_mbx.h"
#include "hclgevf_cmd.h"
#include "hnae3.h"

#define HCLGEVF_MOD_VERSION "v1.0"
#define HCLGEVF_DRIVER_NAME "hclgevf"

#define HCLGEVF_ROCEE_VECTOR_NUM	0
#define HCLGEVF_MISC_VECTOR_NUM		0

#define HCLGEVF_INVALID_VPORT		0xffff

/* The actual number depends upon the total number of VFs created by the
 * physical function. But the maximum number of possible vectors-per-VF
 * is {VFn(1-32), VECTn(32 + 1)}.
 */
#define HCLGEVF_MAX_VF_VECTOR_NUM	(32 + 1)

#define HCLGEVF_VECTOR_REG_BASE		0x20000
#define HCLGEVF_MISC_VECTOR_REG_BASE	0x20400
#define HCLGEVF_VECTOR_REG_OFFSET	0x4
#define HCLGEVF_VECTOR_VF_OFFSET	0x100000

/* Vector0 interrupt CMDQ event source register(RW) */
#define HCLGEVF_VECTOR0_CMDQ_SRC_REG	0x27100
/* CMDQ register bits for RX event(=MBX event) */
#define HCLGEVF_VECTOR0_RX_CMDQ_INT_B	1

#define HCLGEVF_TQP_RESET_TRY_TIMES	10

#define HCLGEVF_RSS_IND_TBL_SIZE	512
#define HCLGEVF_RSS_SET_BITMAP_MSK	0xffff
#define HCLGEVF_RSS_KEY_SIZE		40
#define HCLGEVF_RSS_HASH_ALGO_TOEPLITZ	0
#define HCLGEVF_RSS_HASH_ALGO_SIMPLE	1
#define HCLGEVF_RSS_HASH_ALGO_SYMMETRIC	2
#define HCLGEVF_RSS_HASH_ALGO_MASK	0xf
#define HCLGEVF_RSS_CFG_TBL_NUM \
	(HCLGEVF_RSS_IND_TBL_SIZE / HCLGEVF_RSS_CFG_TBL_SIZE)

/* states of hclgevf device & tasks */
enum hclgevf_states {
	/* device states */
	HCLGEVF_STATE_DOWN,
	HCLGEVF_STATE_DISABLED,
	/* task states */
	HCLGEVF_STATE_SERVICE_SCHED,
	HCLGEVF_STATE_MBX_SERVICE_SCHED,
	HCLGEVF_STATE_MBX_HANDLING,
};

#define HCLGEVF_MPF_ENBALE	1

struct hclgevf_mac {
	u8 mac_addr[ETH_ALEN];
	int link;
};

struct hclgevf_hw {
	void __iomem *io_base;
	int num_vec;
	struct hclgevf_cmq cmq;
	struct hclgevf_mac mac;
	void *hdev; /* hclgevf device it is part of */
};

/* TQP stats */
struct hlcgevf_tqp_stats {
	/* query_tqp_tx_queue_statistics, opcode id: 0x0B03 */
	u64 rcb_tx_ring_pktnum_rcd; /* 32bit */
	/* query_tqp_rx_queue_statistics, opcode id: 0x0B13 */
	u64 rcb_rx_ring_pktnum_rcd; /* 32bit */
};

struct hclgevf_tqp {
	struct device *dev;	/* device for DMA mapping */
	struct hnae3_queue q;
	struct hlcgevf_tqp_stats tqp_stats;
	u16 index;		/* global index in a NIC controller */

	bool alloced;
};

struct hclgevf_cfg {
	u8 vmdq_vport_num;
	u8 tc_num;
	u16 tqp_desc_num;
	u16 rx_buf_len;
	u8 phy_addr;
	u8 media_type;
	u8 mac_addr[ETH_ALEN];
	u32 numa_node_map;
};

struct hclgevf_rss_cfg {
	u8 rss_hash_key[HCLGEVF_RSS_KEY_SIZE]; /* user configured hash keys */
	u32 hash_algo;
	u32 rss_size;
	u8 hw_tc_map;
	u8 rss_indirection_tbl[HCLGEVF_RSS_IND_TBL_SIZE]; /* shadow table */
};

struct hclgevf_misc_vector {
	u8 __iomem *addr;
	int vector_irq;
};

struct hclgevf_dev {
	struct pci_dev *pdev;
	struct hnae3_ae_dev *ae_dev;
	struct hclgevf_hw hw;
	struct hclgevf_misc_vector misc_vector;
	struct hclgevf_rss_cfg rss_cfg;
	unsigned long state;

	u32 fw_version;
	u16 num_tqps;		/* num task queue pairs of this PF */

	u16 alloc_rss_size;	/* allocated RSS task queue */
	u16 rss_size_max;	/* HW defined max RSS task queue */

	u16 num_alloc_vport;	/* num vports this driver supports */
	u32 numa_node_mask;
	u16 rx_buf_len;
	u16 num_desc;
	u8 hw_tc_map;

	u16 num_msi;
	u16 num_msi_left;
	u16 num_msi_used;
	u32 base_msi_vector;
	u16 *vector_status;
	int *vector_irq;

	bool accept_mta_mc;	/* whether to accept mta filter multicast */
	struct hclgevf_mbx_resp_status mbx_resp; /* mailbox response */

	struct timer_list service_timer;
	struct work_struct service_task;
	struct work_struct mbx_service_task;

	struct hclgevf_tqp *htqp;

	struct hnae3_handle nic;
	struct hnae3_handle roce;

	struct hnae3_client *nic_client;
	struct hnae3_client *roce_client;
	u32 flag;
};

int hclgevf_send_mbx_msg(struct hclgevf_dev *hdev, u16 code, u16 subcode,
			 const u8 *msg_data, u8 msg_len, bool need_resp,
			 u8 *resp_data, u16 resp_len);
void hclgevf_mbx_handler(struct hclgevf_dev *hdev);
void hclgevf_update_link_status(struct hclgevf_dev *hdev, int link_state);
#endif
drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c (181 lines added)
// SPDX-License-Identifier: GPL-2.0+
// Copyright (c) 2016-2017 Hisilicon Limited.

#include "hclge_mbx.h"
#include "hclgevf_main.h"
#include "hnae3.h"

static void hclgevf_reset_mbx_resp_status(struct hclgevf_dev *hdev)
{
	/* this function should be called with mbx_resp.mbx_mutex held
	 * to protect the received_response from race condition
	 */
	hdev->mbx_resp.received_resp = false;
	hdev->mbx_resp.origin_mbx_msg = 0;
	hdev->mbx_resp.resp_status = 0;
	memset(hdev->mbx_resp.additional_info, 0, HCLGE_MBX_MAX_RESP_DATA_SIZE);
}

/* hclgevf_get_mbx_resp: used to get a response from PF after VF sends a
 * mailbox message to PF.
 * @hdev: pointer to struct hclgevf_dev
 * @code0: the code of the message the VF sent, to match against
 * @code1: the subcode of the message the VF sent, to match against
 * @resp_data: pointer to store the response data from PF, may be NULL
 * @resp_len: the resp_data array length
 */
static int hclgevf_get_mbx_resp(struct hclgevf_dev *hdev, u16 code0, u16 code1,
				u8 *resp_data, u16 resp_len)
{
#define HCLGEVF_MAX_TRY_TIMES	500
#define HCLGEVF_SLEEP_USCOEND	1000
	struct hclgevf_mbx_resp_status *mbx_resp;
	u16 r_code0, r_code1;
	int resp_status;
	int i = 0;

	if (resp_len > HCLGE_MBX_MAX_RESP_DATA_SIZE) {
		dev_err(&hdev->pdev->dev,
			"VF mbx response len(=%d) exceeds maximum(=%d)\n",
			resp_len,
			HCLGE_MBX_MAX_RESP_DATA_SIZE);
		return -EINVAL;
	}

	while ((!hdev->mbx_resp.received_resp) && (i < HCLGEVF_MAX_TRY_TIMES)) {
		udelay(HCLGEVF_SLEEP_USCOEND);
		i++;
	}

	if (i >= HCLGEVF_MAX_TRY_TIMES) {
		dev_err(&hdev->pdev->dev,
			"VF could not get mbx resp(=%d) from PF in %d tries\n",
			hdev->mbx_resp.received_resp, i);
		return -EIO;
	}

	/* snapshot the response before the reset below clears it */
	mbx_resp = &hdev->mbx_resp;
	r_code0 = (u16)(mbx_resp->origin_mbx_msg >> 16);
	r_code1 = (u16)(mbx_resp->origin_mbx_msg & 0xff);
	resp_status = mbx_resp->resp_status;
	if (resp_data)
		memcpy(resp_data, &mbx_resp->additional_info[0], resp_len);

	hclgevf_reset_mbx_resp_status(hdev);

	if (!(r_code0 == code0 && r_code1 == code1 && !resp_status)) {
		dev_err(&hdev->pdev->dev,
			"VF could not match resp code(code0=%d,code1=%d), %d",
			code0, code1, resp_status);
		return -EIO;
	}

	return 0;
}

int hclgevf_send_mbx_msg(struct hclgevf_dev *hdev, u16 code, u16 subcode,
			 const u8 *msg_data, u8 msg_len, bool need_resp,
			 u8 *resp_data, u16 resp_len)
{
	struct hclge_mbx_vf_to_pf_cmd *req;
	struct hclgevf_desc desc;
	int status;

	req = (struct hclge_mbx_vf_to_pf_cmd *)desc.data;

	/* first two bytes are reserved for code & subcode */
	if (msg_len > (HCLGE_MBX_MAX_MSG_SIZE - 2)) {
		dev_err(&hdev->pdev->dev,
			"VF send mbx msg fail, msg len %d exceeds max len %d\n",
			msg_len, HCLGE_MBX_MAX_MSG_SIZE);
		return -EINVAL;
	}

	hclgevf_cmd_setup_basic_desc(&desc, HCLGEVF_OPC_MBX_VF_TO_PF, false);
	req->msg[0] = code;
	req->msg[1] = subcode;
	memcpy(&req->msg[2], msg_data, msg_len);

	/* synchronous send */
	if (need_resp) {
		mutex_lock(&hdev->mbx_resp.mbx_mutex);
		hclgevf_reset_mbx_resp_status(hdev);
		status = hclgevf_cmd_send(&hdev->hw, &desc, 1);
		if (status) {
			dev_err(&hdev->pdev->dev,
				"VF failed(=%d) to send mbx message to PF\n",
				status);
			mutex_unlock(&hdev->mbx_resp.mbx_mutex);
			return status;
		}

		status = hclgevf_get_mbx_resp(hdev, code, subcode, resp_data,
					      resp_len);
		mutex_unlock(&hdev->mbx_resp.mbx_mutex);
	} else {
		/* asynchronous send */
		status = hclgevf_cmd_send(&hdev->hw, &desc, 1);
		if (status) {
			dev_err(&hdev->pdev->dev,
				"VF failed(=%d) to send mbx message to PF\n",
				status);
			return status;
		}
	}

	return status;
}

void hclgevf_mbx_handler(struct hclgevf_dev *hdev)
{
	struct hclgevf_mbx_resp_status *resp;
	struct hclge_mbx_pf_to_vf_cmd *req;
	struct hclgevf_cmq_ring *crq;
	struct hclgevf_desc *desc;
	u16 link_status, flag;
	u8 *temp;
	int i;

	resp = &hdev->mbx_resp;
	crq = &hdev->hw.cmq.crq;

	flag = le16_to_cpu(crq->desc[crq->next_to_use].flag);
	while (hnae_get_bit(flag, HCLGEVF_CMDQ_RX_OUTVLD_B)) {
		desc = &crq->desc[crq->next_to_use];
		req = (struct hclge_mbx_pf_to_vf_cmd *)desc->data;

		switch (req->msg[0]) {
		case HCLGE_MBX_PF_VF_RESP:
			if (resp->received_resp)
				dev_warn(&hdev->pdev->dev,
					 "VF mbx resp flag not clear(%d)\n",
					 req->msg[1]);
			resp->received_resp = true;

			resp->origin_mbx_msg = (req->msg[1] << 16);
			resp->origin_mbx_msg |= req->msg[2];
			resp->resp_status = req->msg[3];

			temp = (u8 *)&req->msg[4];
			for (i = 0; i < HCLGE_MBX_MAX_RESP_DATA_SIZE; i++) {
				resp->additional_info[i] = *temp;
				temp++;
			}
			break;
		case HCLGE_MBX_LINK_STAT_CHANGE:
			link_status = le16_to_cpu(req->msg[1]);

			/* update upper layer with new link status */
			hclgevf_update_link_status(hdev, link_status);

			break;
		default:
			dev_err(&hdev->pdev->dev,
				"VF received unsupported(%d) mbx msg from PF\n",
				req->msg[0]);
			break;
		}
		hclge_mbx_ring_ptr_move_crq(crq);
		flag = le16_to_cpu(crq->desc[crq->next_to_use].flag);
	}

	/* Write back CMDQ_RQ header pointer, M7 need this pointer */
	hclgevf_write_dev(&hdev->hw, HCLGEVF_NIC_CRQ_HEAD_REG,
			  crq->next_to_use);
}