Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

gve: Add basic driver framework for Compute Engine Virtual NIC

Add a driver framework for the Compute Engine Virtual NIC that will be
available in the future.

At this point the only functionality is loading the driver.

Signed-off-by: Catherine Sullivan <csully@google.com>
Signed-off-by: Sagi Shahar <sagis@google.com>
Signed-off-by: Jon Olson <jonolson@google.com>
Acked-by: Willem de Bruijn <willemb@google.com>
Reviewed-by: Luigi Rizzo <lrizzo@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

Authored by Catherine Sullivan, committed by David S. Miller
commit 893ce44d (parent 2a8d8e0f)

+1121
+82
Documentation/networking/device_drivers/google/gve.rst
.. SPDX-License-Identifier: GPL-2.0+

==============================================================
Linux kernel driver for Compute Engine Virtual Ethernet (gve):
==============================================================

Supported Hardware
==================
The GVE driver binds to a single PCI device id used by the virtual
Ethernet device found in some Compute Engine VMs.

+--------------+----------+---------+
|Field         | Value    | Comments|
+==============+==========+=========+
|Vendor ID     | `0x1AE0` | Google  |
+--------------+----------+---------+
|Device ID     | `0x0042` |         |
+--------------+----------+---------+
|Sub-vendor ID | `0x1AE0` | Google  |
+--------------+----------+---------+
|Sub-device ID | `0x0058` |         |
+--------------+----------+---------+
|Revision ID   | `0x0`    |         |
+--------------+----------+---------+
|Device Class  | `0x200`  | Ethernet|
+--------------+----------+---------+

PCI Bars
========
The gVNIC PCI device exposes three 32-bit memory BARs:
- Bar0 - Device configuration and status registers.
- Bar1 - MSI-X vector table
- Bar2 - IRQ, RX and TX doorbells

Device Interactions
===================
The driver interacts with the device in the following ways:
- Registers
    - A block of MMIO registers
    - See gve_register.h for more detail
- Admin Queue
    - See description below
- Interrupts
    - See supported interrupts below

Registers
---------
All registers are MMIO and big endian.

The registers are used for initializing and configuring the device as well as
querying device status in response to management interrupts.

Admin Queue (AQ)
----------------
The Admin Queue is a PAGE_SIZE memory block, treated as an array of AQ
commands, used by the driver to issue commands to the device and set up
resources. The driver and the device maintain a count of how many commands
have been submitted and executed. To issue AQ commands, the driver must do
the following (with proper locking):

1)  Copy new commands into next available slots in the AQ array
2)  Increment its counter by the number of new commands
3)  Write the counter into the GVE_ADMIN_QUEUE_DOORBELL register
4)  Poll the ADMIN_QUEUE_EVENT_COUNTER register until it equals
    the value written to the doorbell, or until a timeout.

The device will update the status field in each AQ command reported as
executed through the ADMIN_QUEUE_EVENT_COUNTER register.

Interrupts
----------
The following interrupts are supported by the driver:

Management Interrupt
~~~~~~~~~~~~~~~~~~~~
The management interrupt is used by the device to tell the driver to
look at the GVE_DEVICE_STATUS register.

Notification Block Interrupts
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The notification block interrupts are used to tell the driver to poll
the queues associated with that interrupt.
+1
Documentation/networking/device_drivers/index.rst
   intel/i40e
   intel/iavf
   intel/ice
+  google/gve
   mellanox/mlx5

.. only:: subproject
+9
MAINTAINERS
 S:	Maintained
 F:	drivers/input/touchscreen/goodix.c

+GOOGLE ETHERNET DRIVERS
+M:	Catherine Sullivan <csully@google.com>
+R:	Sagi Shahar <sagis@google.com>
+R:	Jon Olson <jonolson@google.com>
+L:	netdev@vger.kernel.org
+S:	Supported
+F:	Documentation/networking/device_drivers/google/gve.rst
+F:	drivers/net/ethernet/google
+
 GPD POCKET FAN DRIVER
 M:	Hans de Goede <hdegoede@redhat.com>
 L:	platform-driver-x86@vger.kernel.org
+1
drivers/net/ethernet/Kconfig
 source "drivers/net/ethernet/faraday/Kconfig"
 source "drivers/net/ethernet/freescale/Kconfig"
 source "drivers/net/ethernet/fujitsu/Kconfig"
+source "drivers/net/ethernet/google/Kconfig"
 source "drivers/net/ethernet/hisilicon/Kconfig"
 source "drivers/net/ethernet/hp/Kconfig"
 source "drivers/net/ethernet/huawei/Kconfig"
+1
drivers/net/ethernet/Makefile
 obj-$(CONFIG_NET_VENDOR_FARADAY) += faraday/
 obj-$(CONFIG_NET_VENDOR_FREESCALE) += freescale/
 obj-$(CONFIG_NET_VENDOR_FUJITSU) += fujitsu/
+obj-$(CONFIG_NET_VENDOR_GOOGLE) += google/
 obj-$(CONFIG_NET_VENDOR_HISILICON) += hisilicon/
 obj-$(CONFIG_NET_VENDOR_HP) += hp/
 obj-$(CONFIG_NET_VENDOR_HUAWEI) += huawei/
+27
drivers/net/ethernet/google/Kconfig
#
# Google network device configuration
#

config NET_VENDOR_GOOGLE
	bool "Google Devices"
	default y
	help
	  If you have a network (Ethernet) device belonging to this class, say Y.

	  Note that the answer to this question doesn't directly affect the
	  kernel: saying N will just cause the configurator to skip all
	  the questions about Google devices. If you say Y, you will be asked
	  for your specific device in the following questions.

if NET_VENDOR_GOOGLE

config GVE
	tristate "Google Virtual NIC (gVNIC) support"
	depends on PCI_MSI
	help
	  This driver supports the Google Virtual NIC (gVNIC).

	  To compile this driver as a module, choose M here.
	  The module will be called gve.

endif # NET_VENDOR_GOOGLE
+5
drivers/net/ethernet/google/Makefile
#
# Makefile for the Google network device drivers.
#

obj-$(CONFIG_GVE) += gve/
+4
drivers/net/ethernet/google/gve/Makefile
# Makefile for the Google virtual Ethernet (gve) driver

obj-$(CONFIG_GVE) += gve.o
gve-objs := gve_main.o gve_adminq.o
+135
drivers/net/ethernet/google/gve/gve.h
/* SPDX-License-Identifier: (GPL-2.0 OR MIT)
 * Google virtual Ethernet (gve) driver
 *
 * Copyright (C) 2015-2019 Google, Inc.
 */

#ifndef _GVE_H_
#define _GVE_H_

#include <linux/dma-mapping.h>
#include <linux/netdevice.h>
#include <linux/pci.h>

#ifndef PCI_VENDOR_ID_GOOGLE
#define PCI_VENDOR_ID_GOOGLE	0x1ae0
#endif

#define PCI_DEV_ID_GVNIC	0x0042

#define GVE_REGISTER_BAR	0
#define GVE_DOORBELL_BAR	2

/* 1 for management */
#define GVE_MIN_MSIX 3

struct gve_notify_block {
	__be32 irq_db_index; /* idx into Bar2 - set by device, must be 1st */
	char name[IFNAMSIZ + 16]; /* name registered with the kernel */
	struct napi_struct napi; /* kernel napi struct for this block */
	struct gve_priv *priv;
} ____cacheline_aligned;

struct gve_priv {
	struct net_device *dev;
	struct gve_notify_block *ntfy_blocks; /* array of num_ntfy_blks */
	dma_addr_t ntfy_block_bus;
	struct msix_entry *msix_vectors; /* array of num_ntfy_blks + 1 */
	char mgmt_msix_name[IFNAMSIZ + 16];
	u32 mgmt_msix_idx;
	__be32 *counter_array; /* array of num_event_counters */
	dma_addr_t counter_array_bus;

	u16 num_event_counters;

	u32 num_ntfy_blks; /* split between TX and RX so must be even */

	struct gve_registers __iomem *reg_bar0; /* see gve_register.h */
	__be32 __iomem *db_bar2; /* "array" of doorbells */
	u32 msg_enable;	/* level for netif* netdev print macros */
	struct pci_dev *pdev;

	/* Admin queue - see gve_adminq.h */
	union gve_adminq_command *adminq;
	dma_addr_t adminq_bus_addr;
	u32 adminq_mask; /* masks prod_cnt to adminq size */
	u32 adminq_prod_cnt; /* free-running count of AQ cmds executed */

	unsigned long state_flags;
};

enum gve_state_flags {
	GVE_PRIV_FLAGS_ADMIN_QUEUE_OK		= BIT(1),
	GVE_PRIV_FLAGS_DEVICE_RESOURCES_OK	= BIT(2),
	GVE_PRIV_FLAGS_DEVICE_RINGS_OK		= BIT(3),
	GVE_PRIV_FLAGS_NAPI_ENABLED		= BIT(4),
};

static inline bool gve_get_admin_queue_ok(struct gve_priv *priv)
{
	return test_bit(GVE_PRIV_FLAGS_ADMIN_QUEUE_OK, &priv->state_flags);
}

static inline void gve_set_admin_queue_ok(struct gve_priv *priv)
{
	set_bit(GVE_PRIV_FLAGS_ADMIN_QUEUE_OK, &priv->state_flags);
}

static inline void gve_clear_admin_queue_ok(struct gve_priv *priv)
{
	clear_bit(GVE_PRIV_FLAGS_ADMIN_QUEUE_OK, &priv->state_flags);
}

static inline bool gve_get_device_resources_ok(struct gve_priv *priv)
{
	return test_bit(GVE_PRIV_FLAGS_DEVICE_RESOURCES_OK, &priv->state_flags);
}

static inline void gve_set_device_resources_ok(struct gve_priv *priv)
{
	set_bit(GVE_PRIV_FLAGS_DEVICE_RESOURCES_OK, &priv->state_flags);
}

static inline void gve_clear_device_resources_ok(struct gve_priv *priv)
{
	clear_bit(GVE_PRIV_FLAGS_DEVICE_RESOURCES_OK, &priv->state_flags);
}

static inline bool gve_get_device_rings_ok(struct gve_priv *priv)
{
	return test_bit(GVE_PRIV_FLAGS_DEVICE_RINGS_OK, &priv->state_flags);
}

static inline void gve_set_device_rings_ok(struct gve_priv *priv)
{
	set_bit(GVE_PRIV_FLAGS_DEVICE_RINGS_OK, &priv->state_flags);
}

static inline void gve_clear_device_rings_ok(struct gve_priv *priv)
{
	clear_bit(GVE_PRIV_FLAGS_DEVICE_RINGS_OK, &priv->state_flags);
}

static inline bool gve_get_napi_enabled(struct gve_priv *priv)
{
	return test_bit(GVE_PRIV_FLAGS_NAPI_ENABLED, &priv->state_flags);
}

static inline void gve_set_napi_enabled(struct gve_priv *priv)
{
	set_bit(GVE_PRIV_FLAGS_NAPI_ENABLED, &priv->state_flags);
}

static inline void gve_clear_napi_enabled(struct gve_priv *priv)
{
	clear_bit(GVE_PRIV_FLAGS_NAPI_ENABLED, &priv->state_flags);
}

/* Returns the address of the ntfy_blocks irq doorbell
 */
static inline __be32 __iomem *gve_irq_doorbell(struct gve_priv *priv,
					       struct gve_notify_block *block)
{
	return &priv->db_bar2[be32_to_cpu(block->irq_db_index)];
}
#endif /* _GVE_H_ */
+249
drivers/net/ethernet/google/gve/gve_adminq.c
// SPDX-License-Identifier: (GPL-2.0 OR MIT)
/* Google virtual Ethernet (gve) driver
 *
 * Copyright (C) 2015-2019 Google, Inc.
 */

#include <linux/etherdevice.h>
#include <linux/pci.h>
#include "gve.h"
#include "gve_adminq.h"
#include "gve_register.h"

#define GVE_MAX_ADMINQ_RELEASE_CHECK	500
#define GVE_ADMINQ_SLEEP_LEN		20
#define GVE_MAX_ADMINQ_EVENT_COUNTER_CHECK	100

int gve_adminq_alloc(struct device *dev, struct gve_priv *priv)
{
	priv->adminq = dma_alloc_coherent(dev, PAGE_SIZE,
					  &priv->adminq_bus_addr, GFP_KERNEL);
	if (unlikely(!priv->adminq))
		return -ENOMEM;

	priv->adminq_mask = (PAGE_SIZE / sizeof(union gve_adminq_command)) - 1;
	priv->adminq_prod_cnt = 0;

	/* Setup Admin queue with the device */
	iowrite32be(priv->adminq_bus_addr / PAGE_SIZE,
		    &priv->reg_bar0->adminq_pfn);

	gve_set_admin_queue_ok(priv);
	return 0;
}

void gve_adminq_release(struct gve_priv *priv)
{
	int i = 0;

	/* Tell the device the adminq is leaving */
	iowrite32be(0x0, &priv->reg_bar0->adminq_pfn);
	while (ioread32be(&priv->reg_bar0->adminq_pfn)) {
		/* If this is reached the device is unrecoverable and still
		 * holding memory. Continue looping to avoid memory corruption,
		 * but WARN so it is visible what is going on.
		 */
		if (i == GVE_MAX_ADMINQ_RELEASE_CHECK)
			WARN(1, "Unrecoverable platform error!");
		i++;
		msleep(GVE_ADMINQ_SLEEP_LEN);
	}
	gve_clear_device_rings_ok(priv);
	gve_clear_device_resources_ok(priv);
	gve_clear_admin_queue_ok(priv);
}

void gve_adminq_free(struct device *dev, struct gve_priv *priv)
{
	if (!gve_get_admin_queue_ok(priv))
		return;
	gve_adminq_release(priv);
	dma_free_coherent(dev, PAGE_SIZE, priv->adminq, priv->adminq_bus_addr);
	gve_clear_admin_queue_ok(priv);
}

static void gve_adminq_kick_cmd(struct gve_priv *priv, u32 prod_cnt)
{
	iowrite32be(prod_cnt, &priv->reg_bar0->adminq_doorbell);
}

static bool gve_adminq_wait_for_cmd(struct gve_priv *priv, u32 prod_cnt)
{
	int i;

	for (i = 0; i < GVE_MAX_ADMINQ_EVENT_COUNTER_CHECK; i++) {
		if (ioread32be(&priv->reg_bar0->adminq_event_counter)
		    == prod_cnt)
			return true;
		msleep(GVE_ADMINQ_SLEEP_LEN);
	}

	return false;
}

static int gve_adminq_parse_err(struct device *dev, u32 status)
{
	if (status != GVE_ADMINQ_COMMAND_PASSED &&
	    status != GVE_ADMINQ_COMMAND_UNSET)
		dev_err(dev, "AQ command failed with status %d\n", status);

	switch (status) {
	case GVE_ADMINQ_COMMAND_PASSED:
		return 0;
	case GVE_ADMINQ_COMMAND_UNSET:
		dev_err(dev, "parse_aq_err: err and status both unset, this should not be possible.\n");
		return -EINVAL;
	case GVE_ADMINQ_COMMAND_ERROR_ABORTED:
	case GVE_ADMINQ_COMMAND_ERROR_CANCELLED:
	case GVE_ADMINQ_COMMAND_ERROR_DATALOSS:
	case GVE_ADMINQ_COMMAND_ERROR_FAILED_PRECONDITION:
	case GVE_ADMINQ_COMMAND_ERROR_UNAVAILABLE:
		return -EAGAIN;
	case GVE_ADMINQ_COMMAND_ERROR_ALREADY_EXISTS:
	case GVE_ADMINQ_COMMAND_ERROR_INTERNAL_ERROR:
	case GVE_ADMINQ_COMMAND_ERROR_INVALID_ARGUMENT:
	case GVE_ADMINQ_COMMAND_ERROR_NOT_FOUND:
	case GVE_ADMINQ_COMMAND_ERROR_OUT_OF_RANGE:
	case GVE_ADMINQ_COMMAND_ERROR_UNKNOWN_ERROR:
		return -EINVAL;
	case GVE_ADMINQ_COMMAND_ERROR_DEADLINE_EXCEEDED:
		return -ETIME;
	case GVE_ADMINQ_COMMAND_ERROR_PERMISSION_DENIED:
	case GVE_ADMINQ_COMMAND_ERROR_UNAUTHENTICATED:
		return -EACCES;
	case GVE_ADMINQ_COMMAND_ERROR_RESOURCE_EXHAUSTED:
		return -ENOMEM;
	case GVE_ADMINQ_COMMAND_ERROR_UNIMPLEMENTED:
		return -ENOTSUPP;
	default:
		dev_err(dev, "parse_aq_err: unknown status code %d\n", status);
		return -EINVAL;
	}
}

/* This function is not threadsafe - the caller is responsible for any
 * necessary locks.
 */
int gve_adminq_execute_cmd(struct gve_priv *priv,
			   union gve_adminq_command *cmd_orig)
{
	union gve_adminq_command *cmd;
	u32 status = 0;
	u32 prod_cnt;

	cmd = &priv->adminq[priv->adminq_prod_cnt & priv->adminq_mask];
	priv->adminq_prod_cnt++;
	prod_cnt = priv->adminq_prod_cnt;

	memcpy(cmd, cmd_orig, sizeof(*cmd_orig));

	gve_adminq_kick_cmd(priv, prod_cnt);
	if (!gve_adminq_wait_for_cmd(priv, prod_cnt)) {
		dev_err(&priv->pdev->dev, "AQ command timed out, need to reset AQ\n");
		return -ENOTRECOVERABLE;
	}

	memcpy(cmd_orig, cmd, sizeof(*cmd));
	status = be32_to_cpu(READ_ONCE(cmd->status));
	return gve_adminq_parse_err(&priv->pdev->dev, status);
}

/* The device specifies that the management vector can either be the first irq
 * or the last irq. ntfy_blk_msix_base_idx indicates the first irq assigned to
 * the ntfy blks. If it is 0 then the management vector is last, if it is 1 then
 * the management vector is first.
 *
 * gve arranges the msix vectors so that the management vector is last.
 */
#define GVE_NTFY_BLK_BASE_MSIX_IDX	0
int gve_adminq_configure_device_resources(struct gve_priv *priv,
					  dma_addr_t counter_array_bus_addr,
					  u32 num_counters,
					  dma_addr_t db_array_bus_addr,
					  u32 num_ntfy_blks)
{
	union gve_adminq_command cmd;

	memset(&cmd, 0, sizeof(cmd));
	cmd.opcode = cpu_to_be32(GVE_ADMINQ_CONFIGURE_DEVICE_RESOURCES);
	cmd.configure_device_resources =
		(struct gve_adminq_configure_device_resources) {
		.counter_array = cpu_to_be64(counter_array_bus_addr),
		.num_counters = cpu_to_be32(num_counters),
		.irq_db_addr = cpu_to_be64(db_array_bus_addr),
		.num_irq_dbs = cpu_to_be32(num_ntfy_blks),
		.irq_db_stride = cpu_to_be32(sizeof(priv->ntfy_blocks[0])),
		.ntfy_blk_msix_base_idx =
					cpu_to_be32(GVE_NTFY_BLK_BASE_MSIX_IDX),
	};

	return gve_adminq_execute_cmd(priv, &cmd);
}

int gve_adminq_deconfigure_device_resources(struct gve_priv *priv)
{
	union gve_adminq_command cmd;

	memset(&cmd, 0, sizeof(cmd));
	cmd.opcode = cpu_to_be32(GVE_ADMINQ_DECONFIGURE_DEVICE_RESOURCES);

	return gve_adminq_execute_cmd(priv, &cmd);
}

int gve_adminq_describe_device(struct gve_priv *priv)
{
	struct gve_device_descriptor *descriptor;
	union gve_adminq_command cmd;
	dma_addr_t descriptor_bus;
	int err = 0;
	u8 *mac;
	u16 mtu;

	memset(&cmd, 0, sizeof(cmd));
	descriptor = dma_alloc_coherent(&priv->pdev->dev, PAGE_SIZE,
					&descriptor_bus, GFP_KERNEL);
	if (!descriptor)
		return -ENOMEM;
	cmd.opcode = cpu_to_be32(GVE_ADMINQ_DESCRIBE_DEVICE);
	cmd.describe_device.device_descriptor_addr =
						cpu_to_be64(descriptor_bus);
	cmd.describe_device.device_descriptor_version =
			cpu_to_be32(GVE_ADMINQ_DEVICE_DESCRIPTOR_VERSION);
	cmd.describe_device.available_length = cpu_to_be32(PAGE_SIZE);

	err = gve_adminq_execute_cmd(priv, &cmd);
	if (err)
		goto free_device_descriptor;

	mtu = be16_to_cpu(descriptor->mtu);
	if (mtu < ETH_MIN_MTU) {
		netif_err(priv, drv, priv->dev, "MTU %d below minimum MTU\n",
			  mtu);
		err = -EINVAL;
		goto free_device_descriptor;
	}
	priv->dev->max_mtu = mtu;
	priv->num_event_counters = be16_to_cpu(descriptor->counters);
	ether_addr_copy(priv->dev->dev_addr, descriptor->mac);
	mac = descriptor->mac;
	netif_info(priv, drv, priv->dev, "MAC addr: %pM\n", mac);

free_device_descriptor:
	/* free the full PAGE_SIZE allocation made above */
	dma_free_coherent(&priv->pdev->dev, PAGE_SIZE, descriptor,
			  descriptor_bus);
	return err;
}

int gve_adminq_set_mtu(struct gve_priv *priv, u64 mtu)
{
	union gve_adminq_command cmd;

	memset(&cmd, 0, sizeof(cmd));
	cmd.opcode = cpu_to_be32(GVE_ADMINQ_SET_DRIVER_PARAMETER);
	cmd.set_driver_param = (struct gve_adminq_set_driver_parameter) {
		.parameter_type = cpu_to_be32(GVE_SET_PARAM_MTU),
		.parameter_value = cpu_to_be64(mtu),
	};

	return gve_adminq_execute_cmd(priv, &cmd);
}
+134
drivers/net/ethernet/google/gve/gve_adminq.h
/* SPDX-License-Identifier: (GPL-2.0 OR MIT)
 * Google virtual Ethernet (gve) driver
 *
 * Copyright (C) 2015-2019 Google, Inc.
 */

#ifndef _GVE_ADMINQ_H
#define _GVE_ADMINQ_H

#include <linux/build_bug.h>

/* Admin queue opcodes */
enum gve_adminq_opcodes {
	GVE_ADMINQ_DESCRIBE_DEVICE		= 0x1,
	GVE_ADMINQ_CONFIGURE_DEVICE_RESOURCES	= 0x2,
	GVE_ADMINQ_DECONFIGURE_DEVICE_RESOURCES	= 0x9,
	GVE_ADMINQ_SET_DRIVER_PARAMETER		= 0xB,
};

/* Admin queue status codes */
enum gve_adminq_statuses {
	GVE_ADMINQ_COMMAND_UNSET			= 0x0,
	GVE_ADMINQ_COMMAND_PASSED			= 0x1,
	GVE_ADMINQ_COMMAND_ERROR_ABORTED		= 0xFFFFFFF0,
	GVE_ADMINQ_COMMAND_ERROR_ALREADY_EXISTS		= 0xFFFFFFF1,
	GVE_ADMINQ_COMMAND_ERROR_CANCELLED		= 0xFFFFFFF2,
	GVE_ADMINQ_COMMAND_ERROR_DATALOSS		= 0xFFFFFFF3,
	GVE_ADMINQ_COMMAND_ERROR_DEADLINE_EXCEEDED	= 0xFFFFFFF4,
	GVE_ADMINQ_COMMAND_ERROR_FAILED_PRECONDITION	= 0xFFFFFFF5,
	GVE_ADMINQ_COMMAND_ERROR_INTERNAL_ERROR		= 0xFFFFFFF6,
	GVE_ADMINQ_COMMAND_ERROR_INVALID_ARGUMENT	= 0xFFFFFFF7,
	GVE_ADMINQ_COMMAND_ERROR_NOT_FOUND		= 0xFFFFFFF8,
	GVE_ADMINQ_COMMAND_ERROR_OUT_OF_RANGE		= 0xFFFFFFF9,
	GVE_ADMINQ_COMMAND_ERROR_PERMISSION_DENIED	= 0xFFFFFFFA,
	GVE_ADMINQ_COMMAND_ERROR_UNAUTHENTICATED	= 0xFFFFFFFB,
	GVE_ADMINQ_COMMAND_ERROR_RESOURCE_EXHAUSTED	= 0xFFFFFFFC,
	GVE_ADMINQ_COMMAND_ERROR_UNAVAILABLE		= 0xFFFFFFFD,
	GVE_ADMINQ_COMMAND_ERROR_UNIMPLEMENTED		= 0xFFFFFFFE,
	GVE_ADMINQ_COMMAND_ERROR_UNKNOWN_ERROR		= 0xFFFFFFFF,
};

#define GVE_ADMINQ_DEVICE_DESCRIPTOR_VERSION 1

/* All AdminQ command structs should be naturally packed. The static_assert
 * calls make sure this is the case at compile time.
 */

struct gve_adminq_describe_device {
	__be64 device_descriptor_addr;
	__be32 device_descriptor_version;
	__be32 available_length;
};

static_assert(sizeof(struct gve_adminq_describe_device) == 16);

struct gve_device_descriptor {
	__be64 max_registered_pages;
	__be16 reserved1;
	__be16 tx_queue_entries;
	__be16 rx_queue_entries;
	__be16 default_num_queues;
	__be16 mtu;
	__be16 counters;
	__be16 tx_pages_per_qpl;
	__be16 rx_pages_per_qpl;
	u8  mac[ETH_ALEN];
	__be16 num_device_options;
	__be16 total_length;
	u8  reserved2[6];
};

static_assert(sizeof(struct gve_device_descriptor) == 40);

struct device_option {
	__be32 option_id;
	__be32 option_length;
};

static_assert(sizeof(struct device_option) == 8);

struct gve_adminq_configure_device_resources {
	__be64 counter_array;
	__be64 irq_db_addr;
	__be32 num_counters;
	__be32 num_irq_dbs;
	__be32 irq_db_stride;
	__be32 ntfy_blk_msix_base_idx;
};

static_assert(sizeof(struct gve_adminq_configure_device_resources) == 32);

/* GVE Set Driver Parameter Types */
enum gve_set_driver_param_types {
	GVE_SET_PARAM_MTU	= 0x1,
};

struct gve_adminq_set_driver_parameter {
	__be32 parameter_type;
	u8 reserved[4];
	__be64 parameter_value;
};

static_assert(sizeof(struct gve_adminq_set_driver_parameter) == 16);

union gve_adminq_command {
	struct {
		__be32 opcode;
		__be32 status;
		union {
			struct gve_adminq_configure_device_resources
						configure_device_resources;
			struct gve_adminq_describe_device describe_device;
			struct gve_adminq_set_driver_parameter set_driver_param;
		};
	};
	u8 reserved[64];
};

static_assert(sizeof(union gve_adminq_command) == 64);

int gve_adminq_alloc(struct device *dev, struct gve_priv *priv);
void gve_adminq_free(struct device *dev, struct gve_priv *priv);
void gve_adminq_release(struct gve_priv *priv);
int gve_adminq_execute_cmd(struct gve_priv *priv,
			   union gve_adminq_command *cmd_orig);
int gve_adminq_describe_device(struct gve_priv *priv);
int gve_adminq_configure_device_resources(struct gve_priv *priv,
					  dma_addr_t counter_array_bus_addr,
					  u32 num_counters,
					  dma_addr_t db_array_bus_addr,
					  u32 num_ntfy_blks);
int gve_adminq_deconfigure_device_resources(struct gve_priv *priv);
int gve_adminq_set_mtu(struct gve_priv *priv, u64 mtu);
#endif /* _GVE_ADMINQ_H */
+446
drivers/net/ethernet/google/gve/gve_main.c
// SPDX-License-Identifier: (GPL-2.0 OR MIT)
/* Google virtual Ethernet (gve) driver
 *
 * Copyright (C) 2015-2019 Google, Inc.
 */

#include <linux/cpumask.h>
#include <linux/etherdevice.h>
#include <linux/interrupt.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/sched.h>
#include <linux/timer.h>
#include <net/sch_generic.h>
#include "gve.h"
#include "gve_adminq.h"
#include "gve_register.h"

#define DEFAULT_MSG_LEVEL	(NETIF_MSG_DRV | NETIF_MSG_LINK)
#define GVE_VERSION		"1.0.0"
#define GVE_VERSION_PREFIX	"GVE-"

static const char gve_version_str[] = GVE_VERSION;
static const char gve_version_prefix[] = GVE_VERSION_PREFIX;

static int gve_alloc_counter_array(struct gve_priv *priv)
{
	priv->counter_array =
		dma_alloc_coherent(&priv->pdev->dev,
				   priv->num_event_counters *
				   sizeof(*priv->counter_array),
				   &priv->counter_array_bus, GFP_KERNEL);
	if (!priv->counter_array)
		return -ENOMEM;

	return 0;
}

static void gve_free_counter_array(struct gve_priv *priv)
{
	dma_free_coherent(&priv->pdev->dev,
			  priv->num_event_counters *
			  sizeof(*priv->counter_array),
			  priv->counter_array, priv->counter_array_bus);
	priv->counter_array = NULL;
}

static irqreturn_t gve_mgmnt_intr(int irq, void *arg)
{
	return IRQ_HANDLED;
}

static irqreturn_t gve_intr(int irq, void *arg)
{
	return IRQ_HANDLED;
}

static int gve_alloc_notify_blocks(struct gve_priv *priv)
{
	int num_vecs_requested = priv->num_ntfy_blks + 1;
	char *name = priv->dev->name;
	unsigned int active_cpus;
	int vecs_enabled;
	int i, j;
	int err;

	priv->msix_vectors = kvzalloc(num_vecs_requested *
				      sizeof(*priv->msix_vectors), GFP_KERNEL);
	if (!priv->msix_vectors)
		return -ENOMEM;
	for (i = 0; i < num_vecs_requested; i++)
		priv->msix_vectors[i].entry = i;
	vecs_enabled = pci_enable_msix_range(priv->pdev, priv->msix_vectors,
					     GVE_MIN_MSIX, num_vecs_requested);
	if (vecs_enabled < 0) {
		dev_err(&priv->pdev->dev, "Could not enable min msix %d/%d\n",
			GVE_MIN_MSIX, vecs_enabled);
		err = vecs_enabled;
		goto abort_with_msix_vectors;
	}
	if (vecs_enabled != num_vecs_requested) {
		priv->num_ntfy_blks = (vecs_enabled - 1) & ~0x1;
		dev_err(&priv->pdev->dev,
			"Only received %d msix. Lowering number of notification blocks to %d\n",
			vecs_enabled, priv->num_ntfy_blks);
	}
	/* Half the notification blocks go to TX and half to RX */
	active_cpus = min_t(int, priv->num_ntfy_blks / 2, num_online_cpus());

	/* Setup Management Vector - the last vector */
	snprintf(priv->mgmt_msix_name, sizeof(priv->mgmt_msix_name), "%s-mgmnt",
		 name);
	err = request_irq(priv->msix_vectors[priv->mgmt_msix_idx].vector,
			  gve_mgmnt_intr, 0, priv->mgmt_msix_name, priv);
	if (err) {
		dev_err(&priv->pdev->dev, "Did not receive management vector.\n");
		goto abort_with_msix_enabled;
	}
	priv->ntfy_blocks =
		dma_alloc_coherent(&priv->pdev->dev,
				   priv->num_ntfy_blks *
				   sizeof(*priv->ntfy_blocks),
				   &priv->ntfy_block_bus, GFP_KERNEL);
	if (!priv->ntfy_blocks) {
		err = -ENOMEM;
		goto abort_with_mgmt_vector;
	}
	/* Setup the other blocks - the first n-1 vectors */
	for (i = 0; i < priv->num_ntfy_blks; i++) {
		struct gve_notify_block *block = &priv->ntfy_blocks[i];
		int msix_idx = i;

		snprintf(block->name, sizeof(block->name), "%s-ntfy-block.%d",
			 name, i);
		block->priv = priv;
		err = request_irq(priv->msix_vectors[msix_idx].vector,
				  gve_intr, 0, block->name, block);
		if (err) {
			dev_err(&priv->pdev->dev,
				"Failed to receive msix vector %d\n", i);
			goto abort_with_some_ntfy_blocks;
		}
		irq_set_affinity_hint(priv->msix_vectors[msix_idx].vector,
				      get_cpu_mask(i % active_cpus));
	}
	return 0;
abort_with_some_ntfy_blocks:
	for (j = 0; j < i; j++) {
		struct gve_notify_block *block = &priv->ntfy_blocks[j];
		int msix_idx = j;

		irq_set_affinity_hint(priv->msix_vectors[msix_idx].vector,
				      NULL);
		free_irq(priv->msix_vectors[msix_idx].vector, block);
	}
	dma_free_coherent(&priv->pdev->dev, priv->num_ntfy_blks *
			  sizeof(*priv->ntfy_blocks),
			  priv->ntfy_blocks, priv->ntfy_block_bus);
	priv->ntfy_blocks = NULL;
abort_with_mgmt_vector:
	free_irq(priv->msix_vectors[priv->mgmt_msix_idx].vector, priv);
abort_with_msix_enabled:
	pci_disable_msix(priv->pdev);
abort_with_msix_vectors:
	kvfree(priv->msix_vectors);	/* kvzalloc'd, so kvfree */
	priv->msix_vectors = NULL;
	return err;
}

static void gve_free_notify_blocks(struct gve_priv *priv)
{
	int i;

	/* Free the irqs */
	for (i = 0; i < priv->num_ntfy_blks; i++) {
		struct gve_notify_block *block = &priv->ntfy_blocks[i];
		int msix_idx = i;

		irq_set_affinity_hint(priv->msix_vectors[msix_idx].vector,
				      NULL);
		free_irq(priv->msix_vectors[msix_idx].vector, block);
	}
	dma_free_coherent(&priv->pdev->dev,
			  priv->num_ntfy_blks * sizeof(*priv->ntfy_blocks),
			  priv->ntfy_blocks, priv->ntfy_block_bus);
	priv->ntfy_blocks = NULL;
	free_irq(priv->msix_vectors[priv->mgmt_msix_idx].vector, priv);
	pci_disable_msix(priv->pdev);
	kvfree(priv->msix_vectors);	/* kvzalloc'd, so kvfree */
	priv->msix_vectors = NULL;
}

static int gve_setup_device_resources(struct gve_priv *priv)
{
	int err;

	err = gve_alloc_counter_array(priv);
	if (err)
		return err;
	err = gve_alloc_notify_blocks(priv);
	if (err)
		goto abort_with_counter;
	err = gve_adminq_configure_device_resources(priv,
						    priv->counter_array_bus,
						    priv->num_event_counters,
						    priv->ntfy_block_bus,
						    priv->num_ntfy_blks);
	if (unlikely(err)) {
		dev_err(&priv->pdev->dev,
			"could not setup device_resources: err=%d\n", err);
		err = -ENXIO;
		goto abort_with_ntfy_blocks;
	}
	gve_set_device_resources_ok(priv);
	return 0;
abort_with_ntfy_blocks:
	gve_free_notify_blocks(priv);
abort_with_counter:
	gve_free_counter_array(priv);
	return err;
}

static void gve_teardown_device_resources(struct gve_priv *priv)
{
	int err;

	/* Tell device its resources are being freed */
	if (gve_get_device_resources_ok(priv)) {
		err = gve_adminq_deconfigure_device_resources(priv);
		if (err) {
			dev_err(&priv->pdev->dev,
				"Could not deconfigure device resources: err=%d\n",
				err);
			return;
		}
	}
	gve_free_counter_array(priv);
	gve_free_notify_blocks(priv);
	gve_clear_device_resources_ok(priv);
}

static int gve_init_priv(struct gve_priv *priv, bool skip_describe_device)
{
	int num_ntfy;
	int err;

	/* Set up the adminq */
	err = gve_adminq_alloc(&priv->pdev->dev, priv);
	if (err) {
		dev_err(&priv->pdev->dev,
			"Failed to alloc admin queue: err=%d\n", err);
		return err;
	}

	if (skip_describe_device)
		goto setup_device;

	/* Get the initial information we need from the device */
	err = gve_adminq_describe_device(priv);
	if (err) {
		dev_err(&priv->pdev->dev,
			"Could not get device information: err=%d\n", err);
		goto err;
	}
	if (priv->dev->max_mtu > PAGE_SIZE) {
		priv->dev->max_mtu = PAGE_SIZE;
		err = gve_adminq_set_mtu(priv, priv->dev->mtu);
		if (err) {
			netif_err(priv, drv, priv->dev, "Could not set mtu");
			goto err;
		}
	}
	priv->dev->mtu = priv->dev->max_mtu;
	num_ntfy = pci_msix_vec_count(priv->pdev);
	if (num_ntfy <= 0) {
		dev_err(&priv->pdev->dev,
			"could not count MSI-x vectors: err=%d\n", num_ntfy);
		err = num_ntfy;
		goto err;
	} else if (num_ntfy < GVE_MIN_MSIX) {
		dev_err(&priv->pdev->dev, "gve needs at least %d MSI-x vectors, but only has %d\n",
			GVE_MIN_MSIX, num_ntfy);
		err = -EINVAL;
		goto err;
	}

	/* gvnic has one Notification Block per MSI-x vector, except for the
	 * management vector
	 */
	priv->num_ntfy_blks = (num_ntfy - 1) & ~0x1;
	priv->mgmt_msix_idx = priv->num_ntfy_blks;

setup_device:
	err = gve_setup_device_resources(priv);
	if (!err)
		return 0;
err:
	gve_adminq_free(&priv->pdev->dev, priv);
	return err;
}

static void gve_teardown_priv_resources(struct gve_priv *priv)
{
	gve_teardown_device_resources(priv);
	gve_adminq_free(&priv->pdev->dev, priv);
}

static void gve_write_version(u8 __iomem *driver_version_register)
{
	const char *c = gve_version_prefix;

	while (*c) {
		writeb(*c, driver_version_register);
		c++;
	}

	c = gve_version_str;
	while (*c) {
		writeb(*c, driver_version_register);
		c++;
	}
	writeb('\n', driver_version_register);
}

static int gve_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
{
	int max_tx_queues, max_rx_queues;
	struct net_device *dev;
	__be32 __iomem *db_bar;
	struct gve_registers __iomem *reg_bar;
	struct gve_priv *priv;
	int err;

	err = pci_enable_device(pdev);
	if (err)
		return -ENXIO;

	err = pci_request_regions(pdev, "gvnic-cfg");
	if (err)
		goto abort_with_enabled;

	pci_set_master(pdev);

	err = pci_set_dma_mask(pdev, DMA_BIT_MASK(64));
	if (err) {
		dev_err(&pdev->dev, "Failed to set dma mask: err=%d\n", err);
		goto abort_with_pci_region;
	}

	err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64));
	if (err) {
		dev_err(&pdev->dev,
			"Failed to set consistent dma mask: err=%d\n", err);
		goto abort_with_pci_region;
	}

	reg_bar = pci_iomap(pdev, GVE_REGISTER_BAR, 0);
	if (!reg_bar) {
		err = -ENOMEM;
		goto abort_with_pci_region;
	}

	db_bar = pci_iomap(pdev, GVE_DOORBELL_BAR, 0);
	if (!db_bar) {
		dev_err(&pdev->dev, "Failed to map doorbell bar!\n");
		err = -ENOMEM;
		goto abort_with_reg_bar;
	}

	gve_write_version(&reg_bar->driver_version);
	/* Get max queues to alloc etherdev */
	max_tx_queues = ioread32be(&reg_bar->max_tx_queues);
	max_rx_queues = ioread32be(&reg_bar->max_rx_queues);
	/* Alloc and setup the netdev and priv */
	dev = alloc_etherdev_mqs(sizeof(*priv), max_tx_queues, max_rx_queues);
	if (!dev) {
		dev_err(&pdev->dev, "could not allocate netdev\n");
		goto abort_with_db_bar;
	}
	SET_NETDEV_DEV(dev, &pdev->dev);
	pci_set_drvdata(pdev, dev);
	/* advertise features */
	dev->hw_features = NETIF_F_HIGHDMA;
	dev->hw_features |= NETIF_F_SG;
	dev->hw_features |= NETIF_F_HW_CSUM;
	dev->hw_features |= NETIF_F_TSO;
	dev->hw_features |= NETIF_F_TSO6;
	dev->hw_features |= NETIF_F_TSO_ECN;
	dev->hw_features |= NETIF_F_RXCSUM;
	dev->hw_features |= NETIF_F_RXHASH;
	dev->features = dev->hw_features;
	dev->min_mtu = ETH_MIN_MTU;
	netif_carrier_off(dev);

	priv = netdev_priv(dev);
	priv->dev = dev;
	priv->pdev = pdev;
	priv->msg_enable = DEFAULT_MSG_LEVEL;
	priv->reg_bar0 = reg_bar;
	priv->db_bar2 = db_bar;
	priv->state_flags = 0x0;

	err = gve_init_priv(priv, false);
	if (err)
		goto abort_with_netdev;

	err = register_netdev(dev);
	if (err)
goto abort_with_netdev; 390 + 391 + dev_info(&pdev->dev, "GVE version %s\n", gve_version_str); 392 + return 0; 393 + 394 + abort_with_netdev: 395 + free_netdev(dev); 396 + 397 + abort_with_db_bar: 398 + pci_iounmap(pdev, db_bar); 399 + 400 + abort_with_reg_bar: 401 + pci_iounmap(pdev, reg_bar); 402 + 403 + abort_with_pci_region: 404 + pci_release_regions(pdev); 405 + 406 + abort_with_enabled: 407 + pci_disable_device(pdev); 408 + return -ENXIO; 409 + } 410 + EXPORT_SYMBOL(gve_probe); 411 + 412 + static void gve_remove(struct pci_dev *pdev) 413 + { 414 + struct net_device *netdev = pci_get_drvdata(pdev); 415 + struct gve_priv *priv = netdev_priv(netdev); 416 + __be32 __iomem *db_bar = priv->db_bar2; 417 + void __iomem *reg_bar = priv->reg_bar0; 418 + 419 + unregister_netdev(netdev); 420 + gve_teardown_priv_resources(priv); 421 + free_netdev(netdev); 422 + pci_iounmap(pdev, db_bar); 423 + pci_iounmap(pdev, reg_bar); 424 + pci_release_regions(pdev); 425 + pci_disable_device(pdev); 426 + } 427 + 428 + static const struct pci_device_id gve_id_table[] = { 429 + { PCI_DEVICE(PCI_VENDOR_ID_GOOGLE, PCI_DEV_ID_GVNIC) }, 430 + { } 431 + }; 432 + 433 + static struct pci_driver gvnic_driver = { 434 + .name = "gvnic", 435 + .id_table = gve_id_table, 436 + .probe = gve_probe, 437 + .remove = gve_remove, 438 + }; 439 + 440 + module_pci_driver(gvnic_driver); 441 + 442 + MODULE_DEVICE_TABLE(pci, gve_id_table); 443 + MODULE_AUTHOR("Google, Inc."); 444 + MODULE_DESCRIPTION("gVNIC Driver"); 445 + MODULE_LICENSE("Dual MIT/GPL"); 446 + MODULE_VERSION(GVE_VERSION);
+27
drivers/net/ethernet/google/gve/gve_register.h
/* SPDX-License-Identifier: (GPL-2.0 OR MIT)
 * Google virtual Ethernet (gve) driver
 *
 * Copyright (C) 2015-2019 Google, Inc.
 */

#ifndef _GVE_REGISTER_H_
#define _GVE_REGISTER_H_

/* Fixed Configuration Registers */
struct gve_registers {
	__be32	device_status;
	__be32	driver_status;
	__be32	max_tx_queues;
	__be32	max_rx_queues;
	__be32	adminq_pfn;
	__be32	adminq_doorbell;
	__be32	adminq_event_counter;
	u8	reserved[3];
	u8	driver_version;
};

enum gve_device_status_flags {
	GVE_DEVICE_STATUS_RESET_MASK		= BIT(1),
	GVE_DEVICE_STATUS_LINK_STATUS_MASK	= BIT(2),
};
#endif /* _GVE_REGISTER_H_ */