Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

docs: nvme: fix grammar in nvme-pci-endpoint-target.rst

Notable changes:

- Use "an NVMe" instead of "a NVMe" throughout the document
- Fix incorrect phrasing such as "will is discoverable" -> "is
discoverable"
- Ensure consistent and proper article usage for clarity.

Signed-off-by: Alok Tiwari <alok.a.tiwari@oracle.com>
Reviewed-by: Randy Dunlap <rdunlap@infradead.org>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>

Authored by Alok Tiwari, committed by Christoph Hellwig
3be8ad8c b5cd5f1e

+11 -11
Documentation/nvme/nvme-pci-endpoint-target.rst
···
 
 :Author: Damien Le Moal <dlemoal@kernel.org>
 
-The NVMe PCI endpoint function target driver implements a NVMe PCIe controller
-using a NVMe fabrics target controller configured with the PCI transport type.
+The NVMe PCI endpoint function target driver implements an NVMe PCIe controller
+using an NVMe fabrics target controller configured with the PCI transport type.
 
 Overview
 ========
 
-The NVMe PCI endpoint function target driver allows exposing a NVMe target
+The NVMe PCI endpoint function target driver allows exposing an NVMe target
 controller over a PCIe link, thus implementing an NVMe PCIe device similar to a
 regular M.2 SSD. The target controller is created in the same manner as when
 using NVMe over fabrics: the controller represents the interface to an NVMe
 subsystem using a port. The port transfer type must be configured to be
 "pci". The subsystem can be configured to have namespaces backed by regular
 files or block devices, or can use NVMe passthrough to expose to the PCI host an
-existing physical NVMe device or a NVMe fabrics host controller (e.g. a NVMe TCP
-host controller).
+existing physical NVMe device or an NVMe fabrics host controller (e.g. a NVMe
+TCP host controller).
 
 The NVMe PCI endpoint function target driver relies as much as possible on the
 NVMe target core code to parse and execute NVMe commands submitted by the PCIe
···
 subsystem and port must be defined. Second, the NVMe PCI endpoint device must
 be setup and bound to the subsystem and port created.
 
-Creating a NVMe Subsystem and Port
-----------------------------------
+Creating an NVMe Subsystem and Port
+-----------------------------------
 
-Details about how to configure a NVMe target subsystem and port are outside the
+Details about how to configure an NVMe target subsystem and port are outside the
 scope of this document. The following only provides a simple example of a port
 and subsystem with a single namespace backed by a null_blk device.
···
     # ln -s /sys/kernel/config/nvmet/subsystems/nvmepf.0.nqn \
          /sys/kernel/config/nvmet/ports/1/subsystems/nvmepf.0.nqn
 
-Creating a NVMe PCI Endpoint Device
------------------------------------
+Creating an NVMe PCI Endpoint Device
+------------------------------------
 
 With the NVMe target subsystem and port ready for use, the NVMe PCI endpoint
 device can now be created and enabled. The NVMe PCI endpoint target driver
···
 
     nvmet_pci_epf nvmet_pci_epf.0: Enabling controller
 
-On the host side, the NVMe PCI endpoint function target device will is
+On the host side, the NVMe PCI endpoint function target device is
 discoverable as a PCI device, with the vendor ID and device ID as configured::
 
     # lspci -n
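For context, the "simple example" the patched document refers to (elided from the diff above) configures a null_blk-backed namespace through the nvmet configfs tree. Below is a hypothetical sketch of those steps, not the document's exact listing: the subsystem name `nvmepf.0.nqn` and port `1` come from the diff context, while the attribute names (`attr_allow_any_host`, `device_path`, `enable`, `addr_trtype`) assume the standard nvmet configfs layout.

```shell
# Sketch only: assumes the standard nvmet configfs layout and the
# nvmepf.0.nqn subsystem / port 1 names shown in the diff context.
modprobe null_blk nr_devices=1
modprobe nvmet

# Create the subsystem and allow any host to connect to it
mkdir /sys/kernel/config/nvmet/subsystems/nvmepf.0.nqn
echo 1 > /sys/kernel/config/nvmet/subsystems/nvmepf.0.nqn/attr_allow_any_host

# Add namespace 1, backed by the null_blk device
mkdir /sys/kernel/config/nvmet/subsystems/nvmepf.0.nqn/namespaces/1
echo /dev/nullb0 > /sys/kernel/config/nvmet/subsystems/nvmepf.0.nqn/namespaces/1/device_path
echo 1 > /sys/kernel/config/nvmet/subsystems/nvmepf.0.nqn/namespaces/1/enable

# Create port 1 with the "pci" transport type
mkdir /sys/kernel/config/nvmet/ports/1
echo pci > /sys/kernel/config/nvmet/ports/1/addr_trtype

# Bind the subsystem to the port (this is the ln -s shown in the diff)
ln -s /sys/kernel/config/nvmet/subsystems/nvmepf.0.nqn \
      /sys/kernel/config/nvmet/ports/1/subsystems/nvmepf.0.nqn
```

These commands require root and a kernel built with nvmet and null_blk support; they mutate configfs state rather than producing output, so they are shown as a configuration fragment.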