
Documentation: hyperv: Add overview of VMbus

Add documentation topic for using VMbus when running as a guest
on Hyper-V.

Signed-off-by: Michael Kelley <mikelley@microsoft.com>
Link: https://lore.kernel.org/r/1657561704-12631-3-git-send-email-mikelley@microsoft.com
Signed-off-by: Jonathan Corbet <corbet@lwn.net>

Authored by Michael Kelley, committed by Jonathan Corbet (ac1129e7 ec7c5681)

2 files changed, 304 insertions(+)

Documentation/virt/hyperv/index.rst (+1):

@@ -8,3 +8,4 @@
    :maxdepth: 1
 
    overview
+   vmbus

Documentation/virt/hyperv/vmbus.rst (new file, +303):
.. SPDX-License-Identifier: GPL-2.0

VMbus
=====
VMbus is a software construct provided by Hyper-V to guest VMs. It
consists of a control path and common facilities used by synthetic
devices that Hyper-V presents to guest VMs. The control path is
used to offer synthetic devices to the guest VM and, in some cases,
to rescind those devices. The common facilities include software
channels for communicating between the device driver in the guest VM
and the synthetic device implementation that is part of Hyper-V, and
signaling primitives to allow Hyper-V and the guest to interrupt
each other.

VMbus is modeled in Linux as a bus, with the expected /sys/bus/vmbus
entry in a running Linux guest. The VMbus driver (drivers/hv/vmbus_drv.c)
establishes the VMbus control path with the Hyper-V host, then
registers itself as a Linux bus driver. It implements the standard
bus functions for adding and removing devices to/from the bus.

Most synthetic devices offered by Hyper-V have a corresponding Linux
device driver. These devices include:

* SCSI controller
* NIC
* Graphics frame buffer
* Keyboard
* Mouse
* PCI device pass-thru
* Heartbeat
* Time Sync
* Shutdown
* Memory balloon
* Key/Value Pair (KVP) exchange with Hyper-V
* Hyper-V online backup (a.k.a. VSS)

Guest VMs may have multiple instances of the synthetic SCSI
controller, synthetic NIC, and PCI pass-thru devices. Other
synthetic devices are limited to a single instance per VM. Not
listed above are a small number of synthetic devices offered by
Hyper-V that are used only by Windows guests and for which Linux
does not have a driver.

Hyper-V uses the terms "VSP" and "VSC" in describing synthetic
devices. "VSP" refers to the Hyper-V code that implements a
particular synthetic device, while "VSC" refers to the driver for
the device in the guest VM. For example, the Linux driver for the
synthetic NIC is referred to as "netvsc" and the Linux driver for
the synthetic SCSI controller is "storvsc". These drivers contain
functions with names like "storvsc_connect_to_vsp".

VMbus channels
--------------
An instance of a synthetic device uses VMbus channels to communicate
between the VSP and the VSC. Channels are bi-directional and used
for passing messages. Most synthetic devices use a single channel,
but the synthetic SCSI controller and synthetic NIC may use multiple
channels to achieve higher performance and greater parallelism.

Each channel consists of two ring buffers. These are classic ring
buffers from a university data structures textbook. If the read
and write pointers are equal, the ring buffer is considered to be
empty, so a full ring buffer always has at least one byte unused.
The "in" ring buffer is for messages from the Hyper-V host to the
guest, and the "out" ring buffer is for messages from the guest to
the Hyper-V host. In Linux, the "in" and "out" designations are as
viewed by the guest side. The ring buffers are memory that is
shared between the guest and the host, and they follow the standard
paradigm where the memory is allocated by the guest, with the list
of GPAs that make up the ring buffer communicated to the host. Each
ring buffer consists of a header page (4 Kbytes) with the read and
write indices and some control flags, followed by the memory for the
actual ring. The size of the ring is determined by the VSC in the
guest and is specific to each synthetic device. The list of GPAs
making up the ring is communicated to the Hyper-V host over the
VMbus control path as a GPA Descriptor List (GPADL). See function
vmbus_establish_gpadl().

Each ring buffer is mapped into contiguous Linux kernel virtual
space in three parts: 1) the 4 Kbyte header page, 2) the memory
that makes up the ring itself, and 3) a second mapping of the memory
that makes up the ring itself. Because (2) and (3) are contiguous
in kernel virtual space, the code that copies data to and from the
ring buffer need not be concerned with ring buffer wrap-around.
Once a copy operation has completed, the read or write index may
need to be reset to point back into the first mapping, but the
actual data copy does not need to be broken into two parts. This
approach also allows complex data structures to be easily accessed
directly in the ring without handling wrap-around.
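
As a rough sketch of the header page and the wrap-around-free copy
described above, modeled loosely on struct hv_ring_buffer and the code
in drivers/hv/ring_buffer.c -- the names below are illustrative, not
the actual kernel definitions::

  /* Illustrative sketch only; see drivers/hv/ring_buffer.c for the
   * real implementation. */
  #include <linux/string.h>
  #include <linux/types.h>

  struct ring_header {
          u32 write_index;        /* next offset to write, in bytes */
          u32 read_index;         /* next offset to read, in bytes */
          u32 interrupt_mask;     /* one of the control flags */
          /* ... remainder of the 4 Kbyte header page ... */
  };

  /* Equal indices mean "empty", so a full ring always leaves at
   * least one byte unused. */
  static bool ring_empty(const struct ring_header *hdr)
  {
          return hdr->read_index == hdr->write_index;
  }

  /* Copy out of the ring with no wrap-around handling: ring_va points
   * to the first of the two contiguous mappings of the ring memory,
   * so the memcpy() may run past ring_size into the second mapping. */
  static void ring_copy_out(void *dst, const u8 *ring_va, u32 ring_size,
                            u32 *read_index, u32 len)
  {
          memcpy(dst, ring_va + *read_index, len);
          *read_index += len;
          if (*read_index >= ring_size)
                  *read_index -= ring_size; /* fold back into the first mapping */
  }
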
On arm64 with page sizes > 4 Kbytes, the header page must still be
passed to Hyper-V as a 4 Kbyte area. But the memory for the actual
ring must be aligned to PAGE_SIZE and have a size that is a multiple
of PAGE_SIZE so that the duplicate mapping trick can be done. Hence
a portion of the header page is unused and not communicated to
Hyper-V. This case is handled by vmbus_establish_gpadl().

Hyper-V enforces a limit on the aggregate amount of guest memory
that can be shared with the host via GPADLs. This limit ensures
that a rogue guest can't force the consumption of excessive host
resources. For Windows Server 2019 and later, this limit is
approximately 1280 Mbytes. For versions prior to Windows Server
2019, the limit is approximately 384 Mbytes.

VMbus messages
--------------
All VMbus messages have a standard header that includes the message
length, the offset of the message payload, some flags, and a
transactionID. The portion of the message after the header is
unique to each VSP/VSC pair.

Messages follow one of two patterns:

* Unidirectional: Either side sends a message and does not
  expect a response message
* Request/response: One side (usually the guest) sends a message
  and expects a response

The transactionID (a.k.a. "requestID") is for matching requests and
responses. Some synthetic devices allow multiple requests to be
in-flight simultaneously, so the guest specifies a transactionID when
sending a request. Hyper-V sends back the same transactionID in the
matching response.
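
For reference, this standard header is declared as struct
vmpacket_descriptor in include/linux/hyperv.h; it looks roughly like
the following, with comments added here to map the fields onto the
description above::

  struct vmpacket_descriptor {
          u16 type;       /* payload format, e.g. in-band vs. GPA list */
          u16 offset8;    /* offset of the payload, in 8-byte units */
          u16 len8;       /* total message length, in 8-byte units */
          u16 flags;      /* e.g. "completion requested" */
          u64 trans_id;   /* the transactionID echoed in the response */
  } __packed;
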
Messages passed between the VSP and VSC are control messages. For
example, a message sent from the storvsc driver might be "execute
this SCSI command". If a message also implies some data transfer
between the guest and the Hyper-V host, the actual data to be
transferred may be embedded with the control message, or it may be
specified as a separate data buffer that the Hyper-V host will
access as a DMA operation. The former case is used when the size of
the data is small and the cost of copying the data to and from the
ring buffer is minimal. For example, time sync messages from the
Hyper-V host to the guest contain the actual time value. When the
data is larger, a separate data buffer is used. In this case, the
control message contains a list of GPAs that describe the data
buffer. For example, the storvsc driver uses this approach to
specify the data buffers to/from which disk I/O is done.

Three functions exist to send VMbus messages (a usage sketch of the
first follows the list):

1. vmbus_sendpacket(): Control-only messages and messages with
   embedded data -- no GPAs
2. vmbus_sendpacket_pagebuffer(): Message with list of GPAs
   identifying data to transfer. An offset and length are
   associated with each GPA so that multiple discontinuous areas
   of guest memory can be targeted.
3. vmbus_sendpacket_mpb_desc(): Message with list of GPAs
   identifying data to transfer. A single offset and length is
   associated with a list of GPAs. The GPAs must describe a
   single logical area of guest memory to be targeted.
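
As a sketch of the first case, a hypothetical driver might send a
control-only message with a small embedded payload as follows. The
payload struct, function names, and requestID handling are invented
for illustration; see storvsc or netvsc for real callers of
vmbus_sendpacket()::

  #include <linux/hyperv.h>

  /* Hypothetical device-specific request; each VSP/VSC pair defines
   * its own payload formats. */
  struct my_dev_request {
          u32 opcode;
          u32 arg;
  };

  static int my_dev_send_request(struct vmbus_channel *chan,
                                 u32 opcode, u32 arg, u64 reqid)
  {
          struct my_dev_request req = {
                  .opcode = opcode,
                  .arg = arg,
          };

          /* VM_PKT_DATA_INBAND embeds the payload in the ring buffer
           * with the message; the flag asks Hyper-V to send back a
           * completion carrying the same requestID. */
          return vmbus_sendpacket(chan, &req, sizeof(req), reqid,
                                  VM_PKT_DATA_INBAND,
                                  VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);
  }
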
Historically, Linux guests have trusted Hyper-V to send well-formed
and valid messages, and Linux drivers for synthetic devices did not
fully validate messages. With the introduction of processor
technologies that fully encrypt guest memory and that allow the
guest to not trust the hypervisor (AMD SEV-SNP, Intel TDX), trusting
the Hyper-V host is no longer a valid assumption. The drivers for
VMbus synthetic devices are being updated to fully validate any
values read from memory that is shared with Hyper-V, which includes
messages from VMbus devices. To facilitate such validation,
messages read by the guest from the "in" ring buffer are copied to a
temporary buffer that is not shared with Hyper-V. Validation is
performed in this temporary buffer without the risk of Hyper-V
maliciously modifying the message after it is validated but before
it is used.

VMbus interrupts
----------------
VMbus provides a mechanism for the guest to interrupt the host when
the guest has queued new messages in a ring buffer. The host
expects that the guest will send an interrupt only when an "out"
ring buffer transitions from empty to non-empty. If the guest sends
interrupts at other times, the host deems such interrupts to be
unnecessary. If a guest sends an excessive number of unnecessary
interrupts, the host may throttle that guest by suspending its
execution for a few seconds to prevent a denial-of-service attack.

Similarly, the host will interrupt the guest when it sends a new
message on the VMbus control path, or when a VMbus channel "in" ring
buffer transitions from empty to non-empty. Each CPU in the guest
may receive VMbus interrupts, so they are best modeled as per-CPU
interrupts in Linux. This model works well on arm64 where a single
per-CPU IRQ is allocated for VMbus. Since x86/x64 lacks support for
per-CPU IRQs, an x86 interrupt vector is statically allocated (see
HYPERVISOR_CALLBACK_VECTOR) across all CPUs and explicitly coded to
call the VMbus interrupt service routine. These interrupts are
visible in /proc/interrupts on the "HYP" line.

The guest CPU that a VMbus channel will interrupt is selected by the
guest when the channel is created, and the host is informed of that
selection. VMbus devices are broadly grouped into two categories:

1. "Slow" devices that need only one VMbus channel. These devices
   (such as keyboard, mouse, heartbeat, and timesync) generate
   relatively few interrupts. Their VMbus channels are all
   assigned to interrupt the VMBUS_CONNECT_CPU, which is always
   CPU 0.

2. "High speed" devices that may use multiple VMbus channels for
   higher parallelism and performance. These devices include the
   synthetic SCSI controller and synthetic NIC. Their VMbus
   channel interrupts are assigned to CPUs that are spread out
   among the available CPUs in the VM so that interrupts on
   multiple channels can be processed in parallel.

The assignment of VMbus channel interrupts to CPUs is done in the
function init_vp_index(). This assignment is done outside of the
normal Linux interrupt affinity mechanism, so the interrupts are
neither "unmanaged" nor "managed" interrupts.

The CPU that a VMbus channel will interrupt can be seen in
/sys/bus/vmbus/devices/<deviceGUID>/channels/<channelRelID>/cpu.
When running on later versions of Hyper-V, the CPU can be changed
by writing a new value to this sysfs entry. Because the interrupt
assignment is done outside of the normal Linux affinity mechanism,
there are no entries in /proc/irq corresponding to individual
VMbus channel interrupts.

An online CPU in a Linux guest may not be taken offline if it has
VMbus channel interrupts assigned to it. Any such channel
interrupts must first be manually reassigned to another CPU as
described above. When no channel interrupts are assigned to the
CPU, it can be taken offline.

When a guest CPU receives a VMbus interrupt from the host, the
function vmbus_isr() handles the interrupt. It first checks for
channel interrupts by calling vmbus_chan_sched(), which looks at a
bitmap set up by the host to determine which channels have pending
interrupts on this CPU. If multiple channels have pending
interrupts for this CPU, they are processed sequentially. When all
channel interrupts have been processed, vmbus_isr() checks for and
processes any message received on the VMbus control path.

The VMbus channel interrupt handling code is designed to work
correctly even if an interrupt is received on a CPU other than the
CPU assigned to the channel. Specifically, the code does not use
CPU-based exclusion for correctness. In normal operation, Hyper-V
will interrupt the assigned CPU. But when the CPU assigned to a
channel is being changed via sysfs, the guest doesn't know exactly
when Hyper-V will make the transition. The code must work correctly
even if there is a time lag before Hyper-V starts interrupting the
new CPU. See comments in target_cpu_store().
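
In outline, the interrupt flow described above looks roughly like
the following sketch. The real logic is in vmbus_isr() and
vmbus_chan_sched() in drivers/hv/vmbus_drv.c; the bitmap parameter
below merely stands in for the host-maintained record of pending
channel interrupts::

  /* Illustrative sketch only, not the actual implementation. */
  static void vmbus_isr_sketch(unsigned long *pending, unsigned int nbits)
  {
          unsigned int relid;

          /* 1. Channel interrupts: run the callback of each channel
           *    with a pending bit for this CPU, sequentially. */
          for_each_set_bit(relid, pending, nbits) {
                  struct vmbus_channel *chan = relid2channel(relid);

                  if (chan)
                          chan->onchannel_callback(chan->channel_callback_context);
          }

          /* 2. Only then check for and process any message received
           *    on the VMbus control path (offers, rescinds, etc.). */
  }
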
VMbus device creation/deletion
------------------------------
Hyper-V and the Linux guest have a separate message-passing path
that is used for synthetic device creation and deletion. This
path does not use a VMbus channel. See vmbus_post_msg() and
vmbus_on_msg_dpc().

The first step is for the guest to connect to the generic
Hyper-V VMbus mechanism. As part of establishing this connection,
the guest and Hyper-V agree on a VMbus protocol version they will
use. This negotiation allows newer Linux kernels to run on older
Hyper-V versions, and vice versa.

The guest then tells Hyper-V to "send offers". Hyper-V sends an
offer message to the guest for each synthetic device that the VM
is configured to have. Each VMbus device type has a fixed GUID
known as the "class ID", and each VMbus device instance is also
identified by a GUID. The offer message from Hyper-V contains
both GUIDs to uniquely (within the VM) identify the device.
There is one offer message for each device instance, so a VM with
two synthetic NICs will get two offer messages with the NIC
class ID. The ordering of offer messages can vary from boot to boot
and must not be assumed to be consistent in Linux code. Offer
messages may also arrive long after Linux has initially booted
because Hyper-V supports adding devices, such as synthetic NICs,
to running VMs. A new offer message is processed by
vmbus_process_offer(), which indirectly invokes
vmbus_add_channel_work().

Upon receipt of an offer message, the guest identifies the device
type based on the class ID, and invokes the correct driver to set up
the device. Driver/device matching is performed using the standard
Linux mechanism.

The device driver probe function opens the primary VMbus channel to
the corresponding VSP. It allocates guest memory for the channel
ring buffers and shares the ring buffer with the Hyper-V host by
giving the host a list of GPAs for the ring buffer memory. See
vmbus_establish_gpadl().

Once the ring buffer is set up, the device driver and VSP exchange
setup messages via the primary channel. These messages may include
negotiating the device protocol version to be used between the Linux
VSC and the VSP on the Hyper-V host. The setup messages may also
include creating additional VMbus channels, which are somewhat
misnamed as "sub-channels" since they are functionally
equivalent to the primary channel once they are created.

Finally, the device driver may create entries in /dev as with
any device driver.

The Hyper-V host can send a "rescind" message to the guest to
remove a device that was previously offered. Linux drivers must
handle such a rescind message at any time. Rescinding a device
invokes the device driver "remove" function to cleanly shut
down the device and remove it. Once a synthetic device is
rescinded, neither Hyper-V nor Linux retains any state about
its previous existence. Such a device might be re-added later,
in which case it is treated as an entirely new device. See
vmbus_onoffer_rescind().
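
Putting the pieces of this lifecycle together, a minimal VSC skeleton
might look like the following. The GUID, the ring sizes, and all of
the "my_" names are placeholders; complete examples can be found
under drivers/hv/ and drivers/net/hyperv/::

  #include <linux/hyperv.h>
  #include <linux/module.h>

  static const struct hv_vmbus_device_id my_id_table[] = {
          /* Placeholder class ID; a real driver uses the fixed GUID
           * for its device type, e.g. HV_NIC_GUID for netvsc. */
          { .guid = GUID_INIT(0x00000000, 0x0000, 0x0000, 0x00, 0x00,
                              0x00, 0x00, 0x00, 0x00, 0x00, 0x00) },
          { },
  };

  static void my_chan_callback(void *context)
  {
          /* Read and process messages from the "in" ring buffer. */
  }

  static int my_probe(struct hv_device *dev,
                      const struct hv_vmbus_device_id *id)
  {
          /* Open the primary channel: this allocates the two ring
           * buffers and shares them with the host via a GPADL.
           * The 16 Kbyte ring sizes are placeholders. */
          return vmbus_open(dev->channel, 16 * 1024, 16 * 1024,
                            NULL, 0, my_chan_callback, dev);
  }

  static int my_remove(struct hv_device *dev)
  {
          /* Called on driver unbind and on a rescind: shut the
           * device down and release the channel. */
          vmbus_close(dev->channel);
          return 0;
  }

  static struct hv_driver my_drv = {
          .name = "my_vsc",
          .id_table = my_id_table,
          .probe = my_probe,
          .remove = my_remove,
  };

  static int __init my_init(void)
  {
          return vmbus_driver_register(&my_drv);
  }

  static void __exit my_exit(void)
  {
          vmbus_driver_unregister(&my_drv);
  }

  module_init(my_init);
  module_exit(my_exit);
  MODULE_DEVICE_TABLE(vmbus, my_id_table);
  MODULE_LICENSE("GPL");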