Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

drm/xe: Fix spelling and typos across Xe driver files

Corrected various spelling mistakes and typos in multiple
files under the Xe directory. These fixes improve clarity
and maintain consistency in documentation.

v2
- Replaced all instances of "XE" with "Xe" where it referred
to the driver name
- of -> for
- Typical -> Typically

v3
- Revert "Xe" to "XE" for macro prefix reference

Signed-off-by: Sanjay Yadav <sanjay.kumar.yadav@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Stuart Summers <stuart.summers@intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patch.msgid.link/20251023121453.1182035-2-sanjay.kumar.yadav@intel.com
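For context, a hypothetical workflow for finding and fixing typos like the ones in this patch — the commit itself does not say how they were located, so the tool choice (grep/codespell) and the sample line below are illustrative only:

```shell
# Hypothetical: locate known misspellings across the Xe driver sources.
# (codespell drivers/gpu/drm/xe/ would find most of these automatically.)
#   grep -rn 'succees\|follwed\|perferred' drivers/gpu/drm/xe/

# A fix of the kind this patch makes, applied to one line from xe_migrate.c:
echo ' * Return: dma fence for migrate to signal completion on succees, ERR_PTR on' |
    sed 's/succees/success/'
```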

authored by Sanjay Yadav and committed by Matthew Auld
dd5d11b6 fab36494

+63 -63
+2 -2
drivers/gpu/drm/xe/xe_bo.c
···
2121 2121 * if the function should allocate a new one.
2122 2122 * @tile: The tile to select for migration of this bo, and the tile used for
2123 2123 * GGTT binding if any. Only to be non-NULL for ttm_bo_type_kernel bos.
2124 - * @resv: Pointer to a locked shared reservation object to use fo this bo,
2124 + * @resv: Pointer to a locked shared reservation object to use for this bo,
2125 2125 * or NULL for the xe_bo to use its own.
2126 2126 * @bulk: The bulk move to use for LRU bumping, or NULL for external bos.
2127 2127 * @size: The storage size to use for the bo.
···
2651 2651 * @size: The storage size to use for the bo.
2652 2652 * @type: The TTM buffer object type.
2653 2653 * @flags: XE_BO_FLAG_ flags.
2654 - * @intr: Whether to execut any waits for backing store interruptible.
2654 + * @intr: Whether to execute any waits for backing store interruptible.
2655 2655 *
2656 2656 * Create a pinned and mapped bo. The bo will be external and not associated
2657 2657 * with a VM.
+4 -4
drivers/gpu/drm/xe/xe_bo_doc.h
···
12 12 * BO management
13 13 * =============
14 14 *
15 - * TTM manages (placement, eviction, etc...) all BOs in XE.
15 + * TTM manages (placement, eviction, etc...) all BOs in Xe.
16 16 *
17 17 * BO creation
18 18 * ===========
···
29 29 * a kernel BO (e.g. engine state, memory for page tables, etc...). These BOs
30 30 * are typically mapped in the GGTT (any kernel BOs aside memory for page tables
31 31 * are in the GGTT), are pinned (can't move or be evicted at runtime), have a
32 - * vmap (XE can access the memory via xe_map layer) and have contiguous physical
32 + * vmap (Xe can access the memory via xe_map layer) and have contiguous physical
33 33 * memory.
34 34 *
35 35 * More details of why kernel BOs are pinned and contiguous below.
···
40 40 * A user BO is created via the DRM_IOCTL_XE_GEM_CREATE IOCTL. Once it is
41 41 * created the BO can be mmap'd (via DRM_IOCTL_XE_GEM_MMAP_OFFSET) for user
42 42 * access and it can be bound for GPU access (via DRM_IOCTL_XE_VM_BIND). All
43 - * user BOs are evictable and user BOs are never pinned by XE. The allocation of
43 + * user BOs are evictable and user BOs are never pinned by Xe. The allocation of
44 44 * the backing store can be deferred from creation time until first use which is
45 45 * either mmap, bind, or pagefault.
46 46 *
···
84 84 * ====================
85 85 *
86 86 * All eviction (or in other words, moving a BO from one memory location to
87 - * another) is routed through TTM with a callback into XE.
87 + * another) is routed through TTM with a callback into Xe.
88 88 *
89 89 * Runtime eviction
90 90 * ----------------
+1 -1
drivers/gpu/drm/xe/xe_configfs.c
···
27 27 * Overview
28 28 * ========
29 29 *
30 - * Configfs is a filesystem-based manager of kernel objects. XE KMD registers a
30 + * Configfs is a filesystem-based manager of kernel objects. Xe KMD registers a
31 31 * configfs subsystem called ``xe`` that creates a directory in the mounted
32 32 * configfs directory. The user can create devices under this directory and
33 33 * configure them as necessary. See Documentation/filesystems/configfs.rst for
+1 -1
drivers/gpu/drm/xe/xe_device.c
···
1217 1217 *
1218 1218 * /sys/bus/pci/devices/<device>/survivability_mode
1219 1219 *
1220 - * - Admin/userpsace consumer can use firmware flashing tools like fwupd to flash
1220 + * - Admin/userspace consumer can use firmware flashing tools like fwupd to flash
1221 1221 * firmware and restore device to normal operation.
1222 1222 */
1223 1223
+4 -4
drivers/gpu/drm/xe/xe_device_types.h
···
222 222 };
223 223
224 224 /**
225 - * struct xe_device - Top level struct of XE device
225 + * struct xe_device - Top level struct of Xe device
226 226 */
227 227 struct xe_device {
228 228 /** @drm: drm device */
···
245 245 u32 media_verx100;
246 246 /** @info.mem_region_mask: mask of valid memory regions */
247 247 u32 mem_region_mask;
248 - /** @info.platform: XE platform enum */
248 + /** @info.platform: Xe platform enum */
249 249 enum xe_platform platform;
250 - /** @info.subplatform: XE subplatform enum */
250 + /** @info.subplatform: Xe subplatform enum */
251 251 enum xe_subplatform subplatform;
252 252 /** @info.devid: device ID */
253 253 u16 devid;
···
661 661 };
662 662
663 663 /**
664 - * struct xe_file - file handle for XE driver
664 + * struct xe_file - file handle for Xe driver
665 665 */
666 666 struct xe_file {
667 667 /** @xe: xe DEVICE **/
+1 -1
drivers/gpu/drm/xe/xe_exec.c
···
33 33 * - Binding at exec time
34 34 * - Flow controlling the ring at exec time
35 35 *
36 - * In XE we avoid all of this complication by not allowing a BO list to be
36 + * In Xe we avoid all of this complication by not allowing a BO list to be
37 37 * passed into an exec, using the dma-buf implicit sync uAPI, have binds as
38 38 * separate operations, and using the DRM scheduler to flow control the ring.
39 39 * Let's deep dive on each of these.
+2 -2
drivers/gpu/drm/xe/xe_force_wake_types.h
···
52 52 };
53 53
54 54 /**
55 - * struct xe_force_wake_domain - XE force wake domains
55 + * struct xe_force_wake_domain - Xe force wake domains
56 56 */
57 57 struct xe_force_wake_domain {
58 58 /** @id: domain force wake id */
···
70 70 };
71 71
72 72 /**
73 - * struct xe_force_wake - XE force wake
73 + * struct xe_force_wake - Xe force wake
74 74 */
75 75 struct xe_force_wake {
76 76 /** @gt: back pointers to GT */
+1 -1
drivers/gpu/drm/xe/xe_gt_freq.c
···
36 36 * - act_freq: The actual resolved frequency decided by PCODE.
37 37 * - cur_freq: The current one requested by GuC PC to the PCODE.
38 38 * - rpn_freq: The Render Performance (RP) N level, which is the minimal one.
39 - * - rpa_freq: The Render Performance (RP) A level, which is the achiveable one.
39 + * - rpa_freq: The Render Performance (RP) A level, which is the achievable one.
40 40 * Calculated by PCODE at runtime based on multiple running conditions
41 41 * - rpe_freq: The Render Performance (RP) E level, which is the efficient one.
42 42 * Calculated by PCODE at runtime based on multiple running conditions
+1 -1
drivers/gpu/drm/xe/xe_gt_sriov_vf.c
···
738 738 gt->sriov.vf.migration.recovery_queued = true;
739 739 WRITE_ONCE(gt->sriov.vf.migration.recovery_inprogress, true);
740 740 WRITE_ONCE(gt->sriov.vf.migration.ggtt_need_fixes, true);
741 - smp_wmb(); /* Ensure above writes visable before wake */
741 + smp_wmb(); /* Ensure above writes visible before wake */
742 742
743 743 xe_guc_ct_wake_waiters(&gt->uc.guc.ct);
744 744
+1 -1
drivers/gpu/drm/xe/xe_guc_ads_types.h
···
14 14 * struct xe_guc_ads - GuC additional data structures (ADS)
15 15 */
16 16 struct xe_guc_ads {
17 - /** @bo: XE BO for GuC ads blob */
17 + /** @bo: Xe BO for GuC ads blob */
18 18 struct xe_bo *bo;
19 19 /** @golden_lrc_size: golden LRC size */
20 20 size_t golden_lrc_size;
+1 -1
drivers/gpu/drm/xe/xe_guc_ct_types.h
···
126 126 * for the H2G and G2H requests sent and received through the buffers.
127 127 */
128 128 struct xe_guc_ct {
129 - /** @bo: XE BO for CT */
129 + /** @bo: Xe BO for CT */
130 130 struct xe_bo *bo;
131 131 /** @lock: protects everything in CT layer */
132 132 struct mutex lock;
+1 -1
drivers/gpu/drm/xe/xe_guc_log_types.h
···
44 44 struct xe_guc_log {
45 45 /** @level: GuC log level */
46 46 u32 level;
47 - /** @bo: XE BO for GuC log */
47 + /** @bo: Xe BO for GuC log */
48 48 struct xe_bo *bo;
49 49 /** @stats: logging related stats */
50 50 struct {
+1 -1
drivers/gpu/drm/xe/xe_guc_submit.c
···
1920 1920 }
1921 1921
1922 1922 /*
1923 - * All of these functions are an abstraction layer which other parts of XE can
1923 + * All of these functions are an abstraction layer which other parts of Xe can
1924 1924 * use to trap into the GuC backend. All of these functions, aside from init,
1925 1925 * really shouldn't do much other than trap into the DRM scheduler which
1926 1926 * synchronizes these operations.
+1 -1
drivers/gpu/drm/xe/xe_guc_tlb_inval.c
···
207 207 * @guc: GuC object
208 208 * @tlb_inval: TLB invalidation client
209 209 *
210 - * Inititialize GuC TLB invalidation by setting back pointer in TLB invalidation
210 + * Initialize GuC TLB invalidation by setting back pointer in TLB invalidation
211 211 * client to the GuC and setting GuC backend ops.
212 212 */
213 213 void xe_guc_tlb_inval_init_early(struct xe_guc *guc,
+2 -2
drivers/gpu/drm/xe/xe_map.h
···
14 14 * DOC: Map layer
15 15 *
16 16 * All access to any memory shared with a device (both sysmem and vram) in the
17 - * XE driver should go through this layer (xe_map). This layer is built on top
17 + * Xe driver should go through this layer (xe_map). This layer is built on top
18 18 * of :ref:`driver-api/device-io:Generalizing Access to System and I/O Memory`
19 - * and with extra hooks into the XE driver that allows adding asserts to memory
19 + * and with extra hooks into the Xe driver that allows adding asserts to memory
20 20 * accesses (e.g. for blocking runtime_pm D3Cold on Discrete Graphics).
21 21 */
22 22
+2 -2
drivers/gpu/drm/xe/xe_migrate.c
···
2060 2060 *
2061 2061 * Copy from an array dma addresses to a VRAM device physical address
2062 2062 *
2063 - * Return: dma fence for migrate to signal completion on succees, ERR_PTR on
2063 + * Return: dma fence for migrate to signal completion on success, ERR_PTR on
2064 2064 * failure
2065 2065 */
2066 2066 struct dma_fence *xe_migrate_to_vram(struct xe_migrate *m,
···
2081 2081 *
2082 2082 * Copy from a VRAM device physical address to an array dma addresses
2083 2083 *
2084 - * Return: dma fence for migrate to signal completion on succees, ERR_PTR on
2084 + * Return: dma fence for migrate to signal completion on success, ERR_PTR on
2085 2085 * failure
2086 2086 */
2087 2087 struct dma_fence *xe_migrate_from_vram(struct xe_migrate *m,
+1 -1
drivers/gpu/drm/xe/xe_migrate_doc.h
···
9 9 /**
10 10 * DOC: Migrate Layer
11 11 *
12 - * The XE migrate layer is used generate jobs which can copy memory (eviction),
12 + * The Xe migrate layer is used generate jobs which can copy memory (eviction),
13 13 * clear memory, or program tables (binds). This layer exists in every GT, has
14 14 * a migrate engine, and uses a special VM for all generated jobs.
15 15 *
+1 -1
drivers/gpu/drm/xe/xe_pm.c
···
102 102 /**
103 103 * xe_pm_might_block_on_suspend() - Annotate that the code might block on suspend
104 104 *
105 - * Annotation to use where the code might block or sieze to make
105 + * Annotation to use where the code might block or seize to make
106 106 * progress pending resume completion.
107 107 */
108 108 void xe_pm_might_block_on_suspend(void)
+1 -1
drivers/gpu/drm/xe/xe_preempt_fence_types.h
···
12 12 struct xe_exec_queue;
13 13
14 14 /**
15 - * struct xe_preempt_fence - XE preempt fence
15 + * struct xe_preempt_fence - Xe preempt fence
16 16 *
17 17 * hardware and triggers a callback once the xe_engine is complete.
18 18 */
+2 -2
drivers/gpu/drm/xe/xe_range_fence.h
···
13 13 struct xe_range_fence_tree;
14 14 struct xe_range_fence;
15 15
16 - /** struct xe_range_fence_ops - XE range fence ops */
16 + /** struct xe_range_fence_ops - Xe range fence ops */
17 17 struct xe_range_fence_ops {
18 18 /** @free: free range fence op */
19 19 void (*free)(struct xe_range_fence *rfence);
20 20 };
21 21
22 - /** struct xe_range_fence - XE range fence (address conflict tracking) */
22 + /** struct xe_range_fence - Xe range fence (address conflict tracking) */
23 23 struct xe_range_fence {
24 24 /** @rb: RB tree node inserted into interval tree */
25 25 struct rb_node rb;
+3 -3
drivers/gpu/drm/xe/xe_sched_job.c
···
160 160 }
161 161
162 162 /**
163 - * xe_sched_job_destroy - Destroy XE schedule job
164 - * @ref: reference to XE schedule job
163 + * xe_sched_job_destroy - Destroy Xe schedule job
164 + * @ref: reference to Xe schedule job
165 165 *
166 166 * Called when ref == 0, drop a reference to job's xe_engine + fence, cleanup
167 - * base DRM schedule job, and free memory for XE schedule job.
167 + * base DRM schedule job, and free memory for Xe schedule job.
168 168 */
169 169 void xe_sched_job_destroy(struct kref *ref)
170 170 {
+6 -6
drivers/gpu/drm/xe/xe_sched_job.h
···
23 23 void xe_sched_job_destroy(struct kref *ref);
24 24
25 25 /**
26 - * xe_sched_job_get - get reference to XE schedule job
27 - * @job: XE schedule job object
26 + * xe_sched_job_get - get reference to Xe schedule job
27 + * @job: Xe schedule job object
28 28 *
29 - * Increment XE schedule job's reference count
29 + * Increment Xe schedule job's reference count
30 30 */
31 31 static inline struct xe_sched_job *xe_sched_job_get(struct xe_sched_job *job)
32 32 {
···
35 35 }
36 36
37 37 /**
38 - * xe_sched_job_put - put reference to XE schedule job
39 - * @job: XE schedule job object
38 + * xe_sched_job_put - put reference to Xe schedule job
39 + * @job: Xe schedule job object
40 40 *
41 - * Decrement XE schedule job's reference count, call xe_sched_job_destroy when
41 + * Decrement Xe schedule job's reference count, call xe_sched_job_destroy when
42 42 * reference count == 0.
43 43 */
44 44 static inline void xe_sched_job_put(struct xe_sched_job *job)
+1 -1
drivers/gpu/drm/xe/xe_sched_job_types.h
···
32 32 };
33 33
34 34 /**
35 - * struct xe_sched_job - XE schedule job (batch buffer tracking)
35 + * struct xe_sched_job - Xe schedule job (batch buffer tracking)
36 36 */
37 37 struct xe_sched_job {
38 38 /** @drm: base DRM scheduler job */
+1 -1
drivers/gpu/drm/xe/xe_svm.c
···
633 633
634 634 /*
635 635 * XXX: We can't derive the GT here (or anywhere in this functions, but
636 - * compute always uses the primary GT so accumlate stats on the likely
636 + * compute always uses the primary GT so accumulate stats on the likely
637 637 * GT of the fault.
638 638 */
639 639 if (gt)
+1 -1
drivers/gpu/drm/xe/xe_tlb_inval.h
···
33 33 * xe_tlb_inval_fence_wait() - TLB invalidiation fence wait
34 34 * @fence: TLB invalidation fence to wait on
35 35 *
36 - * Wait on a TLB invalidiation fence until it signals, non interruptable
36 + * Wait on a TLB invalidiation fence until it signals, non interruptible
37 37 */
38 38 static inline void
39 39 xe_tlb_inval_fence_wait(struct xe_tlb_inval_fence *fence)
+2 -2
drivers/gpu/drm/xe/xe_ttm_vram_mgr_types.h
···
10 10 #include <drm/ttm/ttm_device.h>
11 11
12 12 /**
13 - * struct xe_ttm_vram_mgr - XE TTM VRAM manager
13 + * struct xe_ttm_vram_mgr - Xe TTM VRAM manager
14 14 *
15 15 * Manages placement of TTM resource in VRAM.
16 16 */
···
32 32 };
33 33
34 34 /**
35 - * struct xe_ttm_vram_mgr_resource - XE TTM VRAM resource
35 + * struct xe_ttm_vram_mgr_resource - Xe TTM VRAM resource
36 36 */
37 37 struct xe_ttm_vram_mgr_resource {
38 38 /** @base: Base TTM resource */
+3 -3
drivers/gpu/drm/xe/xe_uc_fw_types.h
···
62 62 };
63 63
64 64 /**
65 - * struct xe_uc_fw_version - Version for XE micro controller firmware
65 + * struct xe_uc_fw_version - Version for Xe micro controller firmware
66 66 */
67 67 struct xe_uc_fw_version {
68 68 /** @branch: branch version of the FW (not always available) */
···
84 84 };
85 85
86 86 /**
87 - * struct xe_uc_fw - XE micro controller firmware
87 + * struct xe_uc_fw - Xe micro controller firmware
88 88 */
89 89 struct xe_uc_fw {
90 90 /** @type: type uC firmware */
···
112 112 /** @size: size of uC firmware including css header */
113 113 size_t size;
114 114
115 - /** @bo: XE BO for uC firmware */
115 + /** @bo: Xe BO for uC firmware */
116 116 struct xe_bo *bo;
117 117
118 118 /** @has_gsc_headers: whether the FW image starts with GSC headers */
+1 -1
drivers/gpu/drm/xe/xe_uc_types.h
···
12 12 #include "xe_wopcm_types.h"
13 13
14 14 /**
15 - * struct xe_uc - XE micro controllers
15 + * struct xe_uc - Xe micro controllers
16 16 */
17 17 struct xe_uc {
18 18 /** @guc: Graphics micro controller */
+3 -3
drivers/gpu/drm/xe/xe_validation.h
···
108 108 * @request_exclusive: Whether to lock exclusively (write mode) the next time
109 109 * the domain lock is locked.
110 110 * @exec_flags: The drm_exec flags used for drm_exec (re-)initialization.
111 - * @nr: The drm_exec nr parameter used for drm_exec (re-)initializaiton.
111 + * @nr: The drm_exec nr parameter used for drm_exec (re-)initialization.
112 112 */
113 113 struct xe_validation_ctx {
114 114 struct drm_exec *exec;
···
137 137 * @_ret: The current error value possibly holding -ENOMEM
138 138 *
139 139 * Use this in way similar to drm_exec_retry_on_contention().
140 - * If @_ret contains -ENOMEM the tranaction is restarted once in a way that
140 + * If @_ret contains -ENOMEM the transaction is restarted once in a way that
141 141 * blocks other transactions and allows exhastive eviction. If the transaction
142 142 * was already restarted once, Just return the -ENOMEM. May also set
143 143 * _ret to -EINTR if not retrying and waits are interruptible.
···
180 180 * @_val: The xe_validation_device.
181 181 * @_exec: The struct drm_exec object
182 182 * @_flags: Flags for the xe_validation_ctx initialization.
183 - * @_ret: Return in / out parameter. May be set by this macro. Typicall 0 when called.
183 + * @_ret: Return in / out parameter. May be set by this macro. Typically 0 when called.
184 184 *
185 185 * This macro is will initiate a drm_exec transaction with additional support for
186 186 * exhaustive eviction.
+5 -5
drivers/gpu/drm/xe/xe_vm.c
···
824 824 *
825 825 * (re)bind SVM range setting up GPU page tables for the range.
826 826 *
827 - * Return: dma fence for rebind to signal completion on succees, ERR_PTR on
827 + * Return: dma fence for rebind to signal completion on success, ERR_PTR on
828 828 * failure
829 829 */
830 830 struct dma_fence *xe_vm_range_rebind(struct xe_vm *vm,
···
907 907 *
908 908 * Unbind SVM range removing the GPU page tables for the range.
909 909 *
910 - * Return: dma fence for unbind to signal completion on succees, ERR_PTR on
910 + * Return: dma fence for unbind to signal completion on success, ERR_PTR on
911 911 * failure
912 912 */
913 913 struct dma_fence *xe_vm_range_unbind(struct xe_vm *vm,
···
1291 1291 * selection of options. The user PAT index is only for encoding leaf
1292 1292 * nodes, where we have use of more bits to do the encoding. The
1293 1293 * non-leaf nodes are instead under driver control so the chosen index
1294 - * here should be distict from the user PAT index. Also the
1294 + * here should be distinct from the user PAT index. Also the
1295 1295 * corresponding coherency of the PAT index should be tied to the
1296 1296 * allocation type of the page table (or at least we should pick
1297 1297 * something which is always safe).
···
4172 4172
4173 4173 /**
4174 4174 * xe_vma_need_vram_for_atomic - Check if VMA needs VRAM migration for atomic operations
4175 - * @xe: Pointer to the XE device structure
4175 + * @xe: Pointer to the Xe device structure
4176 4176 * @vma: Pointer to the virtual memory area (VMA) structure
4177 4177 * @is_atomic: In pagefault path and atomic operation
4178 4178 *
···
4319 4319 xe_vma_destroy(gpuva_to_vma(op->base.remap.unmap->va), NULL);
4320 4320 } else if (__op->op == DRM_GPUVA_OP_MAP) {
4321 4321 vma = op->map.vma;
4322 - /* In case of madvise call, MAP will always be follwed by REMAP.
4322 + /* In case of madvise call, MAP will always be followed by REMAP.
4323 4323 * Therefore temp_attr will always have sane values, making it safe to
4324 4324 * copy them to new vma.
4325 4325 */
+4 -4
drivers/gpu/drm/xe/xe_vm_doc.h
···
7 7 #define _XE_VM_DOC_H_
8 8
9 9 /**
10 - * DOC: XE VM (user address space)
10 + * DOC: Xe VM (user address space)
11 11 *
12 12 * VM creation
13 13 * ===========
···
202 202 * User pointers are user allocated memory (malloc'd, mmap'd, etc..) for which the
203 203 * user wants to create a GPU mapping. Typically in other DRM drivers a dummy BO
204 204 * was created and then a binding was created. We bypass creating a dummy BO in
205 - * XE and simply create a binding directly from the userptr.
205 + * Xe and simply create a binding directly from the userptr.
206 206 *
207 207 * Invalidation
208 208 * ------------
209 209 *
210 210 * Since this a core kernel managed memory the kernel can move this memory
211 - * whenever it wants. We register an invalidation MMU notifier to alert XE when
211 + * whenever it wants. We register an invalidation MMU notifier to alert Xe when
212 212 * a user pointer is about to move. The invalidation notifier needs to block
213 213 * until all pending users (jobs or compute mode engines) of the userptr are
214 214 * idle to ensure no faults. This done by waiting on all of VM's dma-resv slots.
···
419 419 * =======
420 420 *
421 421 * VM locking protects all of the core data paths (bind operations, execs,
422 - * evictions, and compute mode rebind worker) in XE.
422 + * evictions, and compute mode rebind worker) in Xe.
423 423 *
424 424 * Locks
425 425 * -----
+2 -2
drivers/gpu/drm/xe/xe_vm_types.h
···
52 52 * struct xe_vma_mem_attr - memory attributes associated with vma
53 53 */
54 54 struct xe_vma_mem_attr {
55 - /** @preferred_loc: perferred memory_location */
55 + /** @preferred_loc: preferred memory_location */
56 56 struct {
57 57 /** @preferred_loc.migration_policy: Pages migration policy */
58 58 u32 migration_policy;
···
338 338 u64 tlb_flush_seqno;
339 339 /** @batch_invalidate_tlb: Always invalidate TLB before batch start */
340 340 bool batch_invalidate_tlb;
341 - /** @xef: XE file handle for tracking this VM's drm client */
341 + /** @xef: Xe file handle for tracking this VM's drm client */
342 342 struct xe_file *xef;
343 343 };
344 344