Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'drm-intel-gt-next-2021-05-28' of git://anongit.freedesktop.org/drm/drm-intel into drm-next

UAPI Changes:
- Add reworked uAPI for DG1 behind CONFIG_BROKEN (Matt A, Abdiel)

Driver Changes:

- Fix for Gitlab issues #3293 and #3450:
Avoid kernel crash on older L-shape memory machines

- Add Wa_14010733141 (VDBox SFC reset) for Gen11+ (Aditya)
- Fix crash in auto_retire active retire callback due to
misalignment (Stephane)
- Fix overlay active retire callback alignment (Tvrtko)
- Eliminate need to align active retire callbacks (Matt A, Ville,
Daniel)
- Program FF_MODE2 tuning value for all Gen12 platforms (Caz)
- Add Wa_14011060649 for TGL,RKL,DG1 and ADLS (Swathi)
- Create stolen memory region from local memory on DG1 (CQ)
- Place PD in LMEM on dGFX (Matt A)
- Use WC when default state object is allocated in LMEM (Venkata)
- Determine the coherent map type based on object location (Venkata)
- Use lmem physical addresses for fb_mmap() on discrete (Mohammed)
- Bypass aperture on fbdev when LMEM is available (Anusha)
- Return error value when displayable BO not in LMEM for dGFX (Mohammed)
- Do release kernel context if breadcrumb measure fails (Janusz)
- Hide modparams for compiled-out features (Tvrtko)
- Apply Wa_22010271021 for all Gen11 platforms (Caz)
- Fix unlikely ref count race in arming the watchdog timer (Tvrtko)
- Check actual RC6 enable status in PMU (Tvrtko)
- Fix a double free in gen8_preallocate_top_level_pdp (Lv)
- Use trylock in shrinker for GGTT on BSW VT-d and BXT (Maarten)
- Remove erroneous i915_is_ggtt check for
I915_GEM_OBJECT_UNBIND_VM_TRYLOCK (Maarten)

- Convert uAPI headers to real kerneldoc (Matt A)
- Clean up kerneldoc warnings in headers (Matt A, Maarten)
- Fail driver if LMEM training failed (Matt R)
- Avoid div-by-zero on Gen2 (Ville)
- Read C0DRB3/C1DRB3 as 16 bits again and add _BW suffix (Ville)
- Remove reference to struct drm_device.pdev (Thomas)
- Increase separation between GuC and execlists code (Chris, Matt B)

- Use might_alloc() (Bernard)
- Split DGFX_FEATURES from GEN12_FEATURES (Lucas)
- Deduplicate Wa_22010271021 programming (Jose)
- Drop duplicate WaDisable4x2SubspanOptimization:hsw (Tvrtko)
- Selftest improvements (Chris, Hsin-Yi, Tvrtko)
- Shuffle around init_memory_region for stolen (Matt)
- Typo fixes (wengjianfeng)

[airlied: fix conflict with fixes in i915_active.c]
Signed-off-by: Dave Airlie <airlied@redhat.com>

From: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/YLCbBR22BsQ/dpJB@jlahtine-mobl.ger.corp.intel.com

+2010 -571
+8
Documentation/gpu/driver-uapi.rst
···
+ ===============
+ DRM Driver uAPI
+ ===============
+
+ drm/i915 uAPI
+ =============
+
+ .. kernel-doc:: include/uapi/drm/i915_drm.h
+1
Documentation/gpu/index.rst
···
  drm-kms
  drm-kms-helpers
  drm-uapi
+ driver-uapi
  drm-client
  drivers
  backlight
+131
Documentation/gpu/rfc/i915_gem_lmem.rst
···
+ =========================
+ I915 DG1/LMEM RFC Section
+ =========================
+
+ Upstream plan
+ =============
+ For upstream the overall plan for landing all the DG1 stuff and turning it for
+ real, with all the uAPI bits is:
+
+ * Merge basic HW enabling of DG1(still without pciid)
+ * Merge the uAPI bits behind special CONFIG_BROKEN(or so) flag
+ * At this point we can still make changes, but importantly this lets us
+   start running IGTs which can utilize local-memory in CI
+ * Convert over to TTM, make sure it all keeps working. Some of the work items:
+     * TTM shrinker for discrete
+     * dma_resv_lockitem for full dma_resv_lock, i.e not just trylock
+     * Use TTM CPU pagefault handler
+     * Route shmem backend over to TTM SYSTEM for discrete
+     * TTM purgeable object support
+     * Move i915 buddy allocator over to TTM
+     * MMAP ioctl mode(see `I915 MMAP`_)
+     * SET/GET ioctl caching(see `I915 SET/GET CACHING`_)
+ * Send RFC(with mesa-dev on cc) for final sign off on the uAPI
+ * Add pciid for DG1 and turn on uAPI for real
+
+ New object placement and region query uAPI
+ ==========================================
+ Starting from DG1 we need to give userspace the ability to allocate buffers from
+ device local-memory. Currently the driver supports gem_create, which can place
+ buffers in system memory via shmem, and the usual assortment of other
+ interfaces, like dumb buffers and userptr.
+
+ To support this new capability, while also providing a uAPI which will work
+ beyond just DG1, we propose to offer three new bits of uAPI:
+
+ DRM_I915_QUERY_MEMORY_REGIONS
+ -----------------------------
+ New query ID which allows userspace to discover the list of supported memory
+ regions(like system-memory and local-memory) for a given device. We identify
+ each region with a class and instance pair, which should be unique. The class
+ here would be DEVICE or SYSTEM, and the instance would be zero, on platforms
+ like DG1.
+
+ Side note: The class/instance design is borrowed from our existing engine uAPI,
+ where we describe every physical engine in terms of its class, and the
+ particular instance, since we can have more than one per class.
+
+ In the future we also want to expose more information which can further
+ describe the capabilities of a region.
+
+ .. kernel-doc:: include/uapi/drm/i915_drm.h
+    :functions: drm_i915_gem_memory_class drm_i915_gem_memory_class_instance drm_i915_memory_region_info drm_i915_query_memory_regions
+
+ GEM_CREATE_EXT
+ --------------
+ New ioctl which is basically just gem_create but now allows userspace to provide
+ a chain of possible extensions. Note that if we don't provide any extensions and
+ set flags=0 then we get the exact same behaviour as gem_create.
+
+ Side note: We also need to support PXP[1] in the near future, which is also
+ applicable to integrated platforms, and adds its own gem_create_ext extension,
+ which basically lets userspace mark a buffer as "protected".
+
+ .. kernel-doc:: include/uapi/drm/i915_drm.h
+    :functions: drm_i915_gem_create_ext
+
+ I915_GEM_CREATE_EXT_MEMORY_REGIONS
+ ----------------------------------
+ Implemented as an extension for gem_create_ext, we would now allow userspace to
+ optionally provide an immutable list of preferred placements at creation time,
+ in priority order, for a given buffer object. For the placements we expect
+ them each to use the class/instance encoding, as per the output of the regions
+ query. Having the list in priority order will be useful in the future when
+ placing an object, say during eviction.
+
+ .. kernel-doc:: include/uapi/drm/i915_drm.h
+    :functions: drm_i915_gem_create_ext_memory_regions
+
+ One fair criticism here is that this seems a little over-engineered[2]. If we
+ just consider DG1 then yes, a simple gem_create.flags or something is totally
+ all that's needed to tell the kernel to allocate the buffer in local-memory or
+ whatever. However looking to the future we need uAPI which can also support
+ upcoming Xe HP multi-tile architecture in a sane way, where there can be
+ multiple local-memory instances for a given device, and so using both class and
+ instance in our uAPI to describe regions is desirable, although specifically
+ for DG1 it's uninteresting, since we only have a single local-memory instance.
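Purely as an illustration of the proposed region query above (this is not part of the patch): a hypothetical userspace sketch of the usual two-pass DRM_IOCTL_I915_QUERY flow, assuming the query id and blob layout land as named in the kernel-doc references. Error handling and the customary drmIoctl() retry loop are omitted for brevity.

/*
 * Hypothetical userspace sketch, not part of the patch: enumerate the
 * memory regions exposed via DRM_I915_QUERY_MEMORY_REGIONS, assuming the
 * struct names/fields land as documented above.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <drm/i915_drm.h>    /* libdrm-installed uAPI header */

static struct drm_i915_query_memory_regions *query_regions(int fd)
{
    struct drm_i915_query_item item = {
        .query_id = DRM_I915_QUERY_MEMORY_REGIONS,
    };
    struct drm_i915_query query = {
        .num_items = 1,
        .items_ptr = (uintptr_t)&item,
    };
    struct drm_i915_query_memory_regions *regions;
    uint32_t i;

    /* Pass 1: length == 0 asks the kernel how big the result blob is. */
    if (ioctl(fd, DRM_IOCTL_I915_QUERY, &query) || item.length <= 0)
        return NULL;

    regions = calloc(1, item.length);
    if (!regions)
        return NULL;

    /* Pass 2: the kernel fills in the region list. */
    item.data_ptr = (uintptr_t)regions;
    if (ioctl(fd, DRM_IOCTL_I915_QUERY, &query)) {
        free(regions);
        return NULL;
    }

    for (i = 0; i < regions->num_regions; i++)
        printf("region { class: %u, inst: %u }, probed size %llu\n",
               (unsigned int)regions->regions[i].region.memory_class,
               (unsigned int)regions->regions[i].region.memory_instance,
               (unsigned long long)regions->regions[i].probed_size);

    return regions;
}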
+
+ Existing uAPI issues
+ ====================
+ Some potential issues we still need to resolve.
+
+ I915 MMAP
+ ---------
+ In i915 there are multiple ways to MMAP GEM object, including mapping the same
+ object using different mapping types(WC vs WB), i.e multiple active mmaps per
+ object. TTM expects one MMAP at most for the lifetime of the object. If it
+ turns out that we have to backpedal here, there might be some potential
+ userspace fallout.
+
+ I915 SET/GET CACHING
+ --------------------
+ In i915 we have set/get_caching ioctl. TTM doesn't let us to change this, but
+ DG1 doesn't support non-snooped pcie transactions, so we can just always
+ allocate as WB for smem-only buffers. If/when our hw gains support for
+ non-snooped pcie transactions then we must fix this mode at allocation time as
+ a new GEM extension.
+
+ This is related to the mmap problem, because in general (meaning, when we're
+ not running on intel cpus) the cpu mmap must not, ever, be inconsistent with
+ allocation mode.
+
+ Possible idea is to let the kernel picks the mmap mode for userspace from the
+ following table:
+
+ smem-only: WB. Userspace does not need to call clflush.
+
+ smem+lmem: We only ever allow a single mode, so simply allocate this as uncached
+ memory, and always give userspace a WC mapping. GPU still does snooped access
+ here(assuming we can't turn it off like on DG1), which is a bit inefficient.
+
+ lmem only: always WC
+
+ This means on discrete you only get a single mmap mode, all others must be
+ rejected. That's probably going to be a new default mode or something like
+ that.
+
+ Links
+ =====
+ [1] https://patchwork.freedesktop.org/series/86798/
+
+ [2] https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/5599#note_553791
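A minimal restatement of the mmap-mode table from the "I915 SET/GET CACHING" section above, purely illustrative; the helper name and the placement flags are invented for the example, only the WB/WC decision table comes from the RFC text.

/*
 * Illustrative only: the mmap-mode table above as a tiny decision helper.
 * Names are invented for the example; only the WB/WC choices come from
 * the RFC text.
 */
#include <stdbool.h>

enum example_mmap_mode { EXAMPLE_MMAP_WB, EXAMPLE_MMAP_WC };

static enum example_mmap_mode pick_mmap_mode(bool has_smem, bool has_lmem)
{
    /* smem-only: WB, userspace never needs to call clflush */
    if (has_smem && !has_lmem)
        return EXAMPLE_MMAP_WB;

    /*
     * smem+lmem: allocate uncached and always hand out a WC mapping;
     * lmem-only: always WC.
     */
    return EXAMPLE_MMAP_WC;
}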
+4
Documentation/gpu/rfc/index.rst
···
 
  * Once the code has landed move all the documentation to the right places in
    the main core, helper or driver sections.
+
+ .. toctree::
+
+     i915_gem_lmem.rst
+9
drivers/gpu/drm/i915/display/intel_display.c
··· 11660 11660 struct drm_framebuffer *fb; 11661 11661 struct drm_i915_gem_object *obj; 11662 11662 struct drm_mode_fb_cmd2 mode_cmd = *user_mode_cmd; 11663 + struct drm_i915_private *i915; 11663 11664 11664 11665 obj = i915_gem_object_lookup(filp, mode_cmd.handles[0]); 11665 11666 if (!obj) 11666 11667 return ERR_PTR(-ENOENT); 11668 + 11669 + /* object is backed with LMEM for discrete */ 11670 + i915 = to_i915(obj->base.dev); 11671 + if (HAS_LMEM(i915) && !i915_gem_object_is_lmem(obj)) { 11672 + /* object is "remote", not in local memory */ 11673 + i915_gem_object_put(obj); 11674 + return ERR_PTR(-EREMOTE); 11675 + } 11667 11676 11668 11677 fb = intel_framebuffer_create(obj, &mode_cmd); 11669 11678 i915_gem_object_put(obj);
+38 -13
drivers/gpu/drm/i915/display/intel_fbdev.c
··· 41 41 #include <drm/drm_fb_helper.h> 42 42 #include <drm/drm_fourcc.h> 43 43 44 + #include "gem/i915_gem_lmem.h" 45 + 44 46 #include "i915_drv.h" 45 47 #include "intel_display_types.h" 46 48 #include "intel_fbdev.h" ··· 139 137 size = mode_cmd.pitches[0] * mode_cmd.height; 140 138 size = PAGE_ALIGN(size); 141 139 142 - /* If the FB is too big, just don't use it since fbdev is not very 143 - * important and we should probably use that space with FBC or other 144 - * features. */ 145 140 obj = ERR_PTR(-ENODEV); 146 - if (size * 2 < dev_priv->stolen_usable_size) 147 - obj = i915_gem_object_create_stolen(dev_priv, size); 148 - if (IS_ERR(obj)) 149 - obj = i915_gem_object_create_shmem(dev_priv, size); 141 + if (HAS_LMEM(dev_priv)) { 142 + obj = i915_gem_object_create_lmem(dev_priv, size, 143 + I915_BO_ALLOC_CONTIGUOUS); 144 + } else { 145 + /* 146 + * If the FB is too big, just don't use it since fbdev is not very 147 + * important and we should probably use that space with FBC or other 148 + * features. 149 + */ 150 + if (size * 2 < dev_priv->stolen_usable_size) 151 + obj = i915_gem_object_create_stolen(dev_priv, size); 152 + if (IS_ERR(obj)) 153 + obj = i915_gem_object_create_shmem(dev_priv, size); 154 + } 155 + 150 156 if (IS_ERR(obj)) { 151 157 drm_err(&dev_priv->drm, "failed to allocate framebuffer\n"); 152 158 return PTR_ERR(obj); ··· 188 178 unsigned long flags = 0; 189 179 bool prealloc = false; 190 180 void __iomem *vaddr; 181 + struct drm_i915_gem_object *obj; 191 182 int ret; 192 183 193 184 if (intel_fb && ··· 243 232 info->fbops = &intelfb_ops; 244 233 245 234 /* setup aperture base/size for vesafb takeover */ 246 - info->apertures->ranges[0].base = ggtt->gmadr.start; 247 - info->apertures->ranges[0].size = ggtt->mappable_end; 235 + obj = intel_fb_obj(&intel_fb->base); 236 + if (i915_gem_object_is_lmem(obj)) { 237 + struct intel_memory_region *mem = obj->mm.region; 248 238 249 - /* Our framebuffer is the entirety of fbdev's system memory */ 250 - info->fix.smem_start = 251 - (unsigned long)(ggtt->gmadr.start + vma->node.start); 252 - info->fix.smem_len = vma->node.size; 239 + info->apertures->ranges[0].base = mem->io_start; 240 + info->apertures->ranges[0].size = mem->total; 241 + 242 + /* Use fbdev's framebuffer from lmem for discrete */ 243 + info->fix.smem_start = 244 + (unsigned long)(mem->io_start + 245 + i915_gem_object_get_dma_address(obj, 0)); 246 + info->fix.smem_len = obj->base.size; 247 + } else { 248 + info->apertures->ranges[0].base = ggtt->gmadr.start; 249 + info->apertures->ranges[0].size = ggtt->mappable_end; 250 + 251 + /* Our framebuffer is the entirety of fbdev's system memory */ 252 + info->fix.smem_start = 253 + (unsigned long)(ggtt->gmadr.start + vma->node.start); 254 + info->fix.smem_len = vma->node.size; 255 + } 253 256 254 257 vaddr = i915_vma_pin_iomap(vma); 255 258 if (IS_ERR(vaddr)) {
+2 -2
drivers/gpu/drm/i915/display/intel_frontbuffer.c
··· 211 211 return 0; 212 212 } 213 213 214 - __i915_active_call 215 214 static void frontbuffer_retire(struct i915_active *ref) 216 215 { 217 216 struct intel_frontbuffer *front = ··· 265 266 atomic_set(&front->bits, 0); 266 267 i915_active_init(&front->write, 267 268 frontbuffer_active, 268 - i915_active_may_sleep(frontbuffer_retire)); 269 + frontbuffer_retire, 270 + I915_ACTIVE_RETIRE_SLEEPS); 269 271 270 272 spin_lock(&i915->fb_tracking.lock); 271 273 if (rcu_access_pointer(obj->frontbuffer)) {
+2 -3
drivers/gpu/drm/i915/display/intel_overlay.c
··· 384 384 i830_overlay_clock_gating(dev_priv, true); 385 385 } 386 386 387 - __i915_active_call static void 388 - intel_overlay_last_flip_retire(struct i915_active *active) 387 + static void intel_overlay_last_flip_retire(struct i915_active *active) 389 388 { 390 389 struct intel_overlay *overlay = 391 390 container_of(active, typeof(*overlay), last_flip); ··· 1401 1402 overlay->saturation = 146; 1402 1403 1403 1404 i915_active_init(&overlay->last_flip, 1404 - NULL, intel_overlay_last_flip_retire); 1405 + NULL, intel_overlay_last_flip_retire, 0); 1405 1406 1406 1407 ret = get_registers(overlay, OVERLAY_NEEDS_PHYSICAL(dev_priv)); 1407 1408 if (ret)
+1 -2
drivers/gpu/drm/i915/gem/i915_gem_context.c
··· 1046 1046 void *data; 1047 1047 }; 1048 1048 1049 - __i915_active_call 1050 1049 static void cb_retire(struct i915_active *base) 1051 1050 { 1052 1051 struct context_barrier_task *cb = container_of(base, typeof(*cb), base); ··· 1079 1080 if (!cb) 1080 1081 return -ENOMEM; 1081 1082 1082 - i915_active_init(&cb->base, NULL, cb_retire); 1083 + i915_active_init(&cb->base, NULL, cb_retire, 0); 1083 1084 err = i915_active_acquire(&cb->base); 1084 1085 if (err) { 1085 1086 kfree(cb);
+319 -28
drivers/gpu/drm/i915/gem/i915_gem_create.c
··· 4 4 */ 5 5 6 6 #include "gem/i915_gem_ioctls.h" 7 + #include "gem/i915_gem_lmem.h" 7 8 #include "gem/i915_gem_region.h" 8 9 9 10 #include "i915_drv.h" 11 + #include "i915_trace.h" 12 + #include "i915_user_extensions.h" 10 13 11 - static int 12 - i915_gem_create(struct drm_file *file, 13 - struct intel_memory_region *mr, 14 - u64 *size_p, 15 - u32 *handle_p) 14 + static u32 object_max_page_size(struct drm_i915_gem_object *obj) 16 15 { 17 - struct drm_i915_gem_object *obj; 18 - u32 handle; 19 - u64 size; 16 + u32 max_page_size = 0; 17 + int i; 18 + 19 + for (i = 0; i < obj->mm.n_placements; i++) { 20 + struct intel_memory_region *mr = obj->mm.placements[i]; 21 + 22 + GEM_BUG_ON(!is_power_of_2(mr->min_page_size)); 23 + max_page_size = max_t(u32, max_page_size, mr->min_page_size); 24 + } 25 + 26 + GEM_BUG_ON(!max_page_size); 27 + return max_page_size; 28 + } 29 + 30 + static void object_set_placements(struct drm_i915_gem_object *obj, 31 + struct intel_memory_region **placements, 32 + unsigned int n_placements) 33 + { 34 + GEM_BUG_ON(!n_placements); 35 + 36 + /* 37 + * For the common case of one memory region, skip storing an 38 + * allocated array and just point at the region directly. 39 + */ 40 + if (n_placements == 1) { 41 + struct intel_memory_region *mr = placements[0]; 42 + struct drm_i915_private *i915 = mr->i915; 43 + 44 + obj->mm.placements = &i915->mm.regions[mr->id]; 45 + obj->mm.n_placements = 1; 46 + } else { 47 + obj->mm.placements = placements; 48 + obj->mm.n_placements = n_placements; 49 + } 50 + } 51 + 52 + static int i915_gem_publish(struct drm_i915_gem_object *obj, 53 + struct drm_file *file, 54 + u64 *size_p, 55 + u32 *handle_p) 56 + { 57 + u64 size = obj->base.size; 20 58 int ret; 21 59 22 - GEM_BUG_ON(!is_power_of_2(mr->min_page_size)); 23 - size = round_up(*size_p, mr->min_page_size); 60 + ret = drm_gem_handle_create(file, &obj->base, handle_p); 61 + /* drop reference from allocate - handle holds it now */ 62 + i915_gem_object_put(obj); 63 + if (ret) 64 + return ret; 65 + 66 + *size_p = size; 67 + return 0; 68 + } 69 + 70 + static int 71 + i915_gem_setup(struct drm_i915_gem_object *obj, u64 size) 72 + { 73 + struct intel_memory_region *mr = obj->mm.placements[0]; 74 + unsigned int flags; 75 + int ret; 76 + 77 + size = round_up(size, object_max_page_size(obj)); 24 78 if (size == 0) 25 79 return -EINVAL; 26 80 27 81 /* For most of the ABI (e.g. mmap) we think in system pages */ 28 82 GEM_BUG_ON(!IS_ALIGNED(size, PAGE_SIZE)); 29 83 30 - /* Allocate the new object */ 31 - obj = i915_gem_object_create_region(mr, size, 0); 32 - if (IS_ERR(obj)) 33 - return PTR_ERR(obj); 84 + if (i915_gem_object_size_2big(size)) 85 + return -E2BIG; 34 86 35 - GEM_BUG_ON(size != obj->base.size); 87 + /* 88 + * For now resort to CPU based clearing for device local-memory, in the 89 + * near future this will use the blitter engine for accelerated, GPU 90 + * based clearing. 
91 + */ 92 + flags = 0; 93 + if (mr->type == INTEL_MEMORY_LOCAL) 94 + flags = I915_BO_ALLOC_CPU_CLEAR; 36 95 37 - ret = drm_gem_handle_create(file, &obj->base, &handle); 38 - /* drop reference from allocate - handle holds it now */ 39 - i915_gem_object_put(obj); 96 + ret = mr->ops->init_object(mr, obj, size, flags); 40 97 if (ret) 41 98 return ret; 42 99 43 - *handle_p = handle; 44 - *size_p = size; 100 + GEM_BUG_ON(size != obj->base.size); 101 + 102 + trace_i915_gem_object_create(obj); 45 103 return 0; 46 104 } 47 105 ··· 108 50 struct drm_device *dev, 109 51 struct drm_mode_create_dumb *args) 110 52 { 53 + struct drm_i915_gem_object *obj; 54 + struct intel_memory_region *mr; 111 55 enum intel_memory_type mem_type; 112 56 int cpp = DIV_ROUND_UP(args->bpp, 8); 113 57 u32 format; 58 + int ret; 114 59 115 60 switch (cpp) { 116 61 case 1: ··· 146 85 if (HAS_LMEM(to_i915(dev))) 147 86 mem_type = INTEL_MEMORY_LOCAL; 148 87 149 - return i915_gem_create(file, 150 - intel_memory_region_by_type(to_i915(dev), 151 - mem_type), 152 - &args->size, &args->handle); 88 + obj = i915_gem_object_alloc(); 89 + if (!obj) 90 + return -ENOMEM; 91 + 92 + mr = intel_memory_region_by_type(to_i915(dev), mem_type); 93 + object_set_placements(obj, &mr, 1); 94 + 95 + ret = i915_gem_setup(obj, args->size); 96 + if (ret) 97 + goto object_free; 98 + 99 + return i915_gem_publish(obj, file, &args->size, &args->handle); 100 + 101 + object_free: 102 + i915_gem_object_free(obj); 103 + return ret; 153 104 } 154 105 155 106 /** ··· 176 103 { 177 104 struct drm_i915_private *i915 = to_i915(dev); 178 105 struct drm_i915_gem_create *args = data; 106 + struct drm_i915_gem_object *obj; 107 + struct intel_memory_region *mr; 108 + int ret; 179 109 180 110 i915_gem_flush_free_objects(i915); 181 111 182 - return i915_gem_create(file, 183 - intel_memory_region_by_type(i915, 184 - INTEL_MEMORY_SYSTEM), 185 - &args->size, &args->handle); 112 + obj = i915_gem_object_alloc(); 113 + if (!obj) 114 + return -ENOMEM; 115 + 116 + mr = intel_memory_region_by_type(i915, INTEL_MEMORY_SYSTEM); 117 + object_set_placements(obj, &mr, 1); 118 + 119 + ret = i915_gem_setup(obj, args->size); 120 + if (ret) 121 + goto object_free; 122 + 123 + return i915_gem_publish(obj, file, &args->size, &args->handle); 124 + 125 + object_free: 126 + i915_gem_object_free(obj); 127 + return ret; 128 + } 129 + 130 + struct create_ext { 131 + struct drm_i915_private *i915; 132 + struct drm_i915_gem_object *vanilla_object; 133 + }; 134 + 135 + static void repr_placements(char *buf, size_t size, 136 + struct intel_memory_region **placements, 137 + int n_placements) 138 + { 139 + int i; 140 + 141 + buf[0] = '\0'; 142 + 143 + for (i = 0; i < n_placements; i++) { 144 + struct intel_memory_region *mr = placements[i]; 145 + int r; 146 + 147 + r = snprintf(buf, size, "\n %s -> { class: %d, inst: %d }", 148 + mr->name, mr->type, mr->instance); 149 + if (r >= size) 150 + return; 151 + 152 + buf += r; 153 + size -= r; 154 + } 155 + } 156 + 157 + static int set_placements(struct drm_i915_gem_create_ext_memory_regions *args, 158 + struct create_ext *ext_data) 159 + { 160 + struct drm_i915_private *i915 = ext_data->i915; 161 + struct drm_i915_gem_memory_class_instance __user *uregions = 162 + u64_to_user_ptr(args->regions); 163 + struct drm_i915_gem_object *obj = ext_data->vanilla_object; 164 + struct intel_memory_region **placements; 165 + u32 mask; 166 + int i, ret = 0; 167 + 168 + if (args->pad) { 169 + drm_dbg(&i915->drm, "pad should be zero\n"); 170 + ret = -EINVAL; 171 + } 172 + 173 + 
if (!args->num_regions) { 174 + drm_dbg(&i915->drm, "num_regions is zero\n"); 175 + ret = -EINVAL; 176 + } 177 + 178 + if (args->num_regions > ARRAY_SIZE(i915->mm.regions)) { 179 + drm_dbg(&i915->drm, "num_regions is too large\n"); 180 + ret = -EINVAL; 181 + } 182 + 183 + if (ret) 184 + return ret; 185 + 186 + placements = kmalloc_array(args->num_regions, 187 + sizeof(struct intel_memory_region *), 188 + GFP_KERNEL); 189 + if (!placements) 190 + return -ENOMEM; 191 + 192 + mask = 0; 193 + for (i = 0; i < args->num_regions; i++) { 194 + struct drm_i915_gem_memory_class_instance region; 195 + struct intel_memory_region *mr; 196 + 197 + if (copy_from_user(&region, uregions, sizeof(region))) { 198 + ret = -EFAULT; 199 + goto out_free; 200 + } 201 + 202 + mr = intel_memory_region_lookup(i915, 203 + region.memory_class, 204 + region.memory_instance); 205 + if (!mr || mr->private) { 206 + drm_dbg(&i915->drm, "Device is missing region { class: %d, inst: %d } at index = %d\n", 207 + region.memory_class, region.memory_instance, i); 208 + ret = -EINVAL; 209 + goto out_dump; 210 + } 211 + 212 + if (mask & BIT(mr->id)) { 213 + drm_dbg(&i915->drm, "Found duplicate placement %s -> { class: %d, inst: %d } at index = %d\n", 214 + mr->name, region.memory_class, 215 + region.memory_instance, i); 216 + ret = -EINVAL; 217 + goto out_dump; 218 + } 219 + 220 + placements[i] = mr; 221 + mask |= BIT(mr->id); 222 + 223 + ++uregions; 224 + } 225 + 226 + if (obj->mm.placements) { 227 + ret = -EINVAL; 228 + goto out_dump; 229 + } 230 + 231 + object_set_placements(obj, placements, args->num_regions); 232 + if (args->num_regions == 1) 233 + kfree(placements); 234 + 235 + return 0; 236 + 237 + out_dump: 238 + if (1) { 239 + char buf[256]; 240 + 241 + if (obj->mm.placements) { 242 + repr_placements(buf, 243 + sizeof(buf), 244 + obj->mm.placements, 245 + obj->mm.n_placements); 246 + drm_dbg(&i915->drm, 247 + "Placements were already set in previous EXT. Existing placements: %s\n", 248 + buf); 249 + } 250 + 251 + repr_placements(buf, sizeof(buf), placements, i); 252 + drm_dbg(&i915->drm, "New placements(so far validated): %s\n", buf); 253 + } 254 + 255 + out_free: 256 + kfree(placements); 257 + return ret; 258 + } 259 + 260 + static int ext_set_placements(struct i915_user_extension __user *base, 261 + void *data) 262 + { 263 + struct drm_i915_gem_create_ext_memory_regions ext; 264 + 265 + if (!IS_ENABLED(CONFIG_DRM_I915_UNSTABLE_FAKE_LMEM)) 266 + return -ENODEV; 267 + 268 + if (copy_from_user(&ext, base, sizeof(ext))) 269 + return -EFAULT; 270 + 271 + return set_placements(&ext, data); 272 + } 273 + 274 + static const i915_user_extension_fn create_extensions[] = { 275 + [I915_GEM_CREATE_EXT_MEMORY_REGIONS] = ext_set_placements, 276 + }; 277 + 278 + /** 279 + * Creates a new mm object and returns a handle to it. 
280 + * @dev: drm device pointer 281 + * @data: ioctl data blob 282 + * @file: drm file pointer 283 + */ 284 + int 285 + i915_gem_create_ext_ioctl(struct drm_device *dev, void *data, 286 + struct drm_file *file) 287 + { 288 + struct drm_i915_private *i915 = to_i915(dev); 289 + struct drm_i915_gem_create_ext *args = data; 290 + struct create_ext ext_data = { .i915 = i915 }; 291 + struct intel_memory_region **placements_ext; 292 + struct drm_i915_gem_object *obj; 293 + int ret; 294 + 295 + if (args->flags) 296 + return -EINVAL; 297 + 298 + i915_gem_flush_free_objects(i915); 299 + 300 + obj = i915_gem_object_alloc(); 301 + if (!obj) 302 + return -ENOMEM; 303 + 304 + ext_data.vanilla_object = obj; 305 + ret = i915_user_extensions(u64_to_user_ptr(args->extensions), 306 + create_extensions, 307 + ARRAY_SIZE(create_extensions), 308 + &ext_data); 309 + placements_ext = obj->mm.placements; 310 + if (ret) 311 + goto object_free; 312 + 313 + if (!placements_ext) { 314 + struct intel_memory_region *mr = 315 + intel_memory_region_by_type(i915, INTEL_MEMORY_SYSTEM); 316 + 317 + object_set_placements(obj, &mr, 1); 318 + } 319 + 320 + ret = i915_gem_setup(obj, args->size); 321 + if (ret) 322 + goto object_free; 323 + 324 + return i915_gem_publish(obj, file, &args->size, &args->handle); 325 + 326 + object_free: 327 + if (obj->mm.n_placements > 1) 328 + kfree(placements_ext); 329 + i915_gem_object_free(obj); 330 + return ret; 186 331 }
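For context (again, not part of the diff): the kernel-side create_ext path above would typically be driven from userspace roughly as below. This is a hypothetical sketch that assumes the ioctl is eventually exposed as DRM_IOCTL_I915_GEM_CREATE_EXT once the uAPI is turned on for real, and that the class/instance values come from the memory-regions query rather than being hard-coded as they are here.

/*
 * Hypothetical userspace sketch, not part of the patch: create a buffer
 * with a local-memory-first, system-memory-fallback placement list via
 * the I915_GEM_CREATE_EXT_MEMORY_REGIONS extension. Assumes the uAPI is
 * enabled and the ioctl/define names land as in the RFC above.
 */
#include <stdint.h>
#include <sys/ioctl.h>
#include <drm/i915_drm.h>

static int create_lmem_with_smem_fallback(int fd, uint64_t size, uint32_t *handle)
{
    struct drm_i915_gem_memory_class_instance placements[] = {
        { .memory_class = I915_MEMORY_CLASS_DEVICE, .memory_instance = 0 },
        { .memory_class = I915_MEMORY_CLASS_SYSTEM, .memory_instance = 0 },
    };
    struct drm_i915_gem_create_ext_memory_regions regions = {
        .base.name = I915_GEM_CREATE_EXT_MEMORY_REGIONS,
        .num_regions = 2,
        .regions = (uintptr_t)placements,
    };
    struct drm_i915_gem_create_ext create = {
        .size = size,
        .extensions = (uintptr_t)&regions,
    };

    if (ioctl(fd, DRM_IOCTL_I915_GEM_CREATE_EXT, &create))
        return -1;

    *handle = create.handle;
    return 0;
}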
+2
drivers/gpu/drm/i915/gem/i915_gem_ioctls.h
··· 14 14 struct drm_file *file); 15 15 int i915_gem_create_ioctl(struct drm_device *dev, void *data, 16 16 struct drm_file *file); 17 + int i915_gem_create_ext_ioctl(struct drm_device *dev, void *data, 18 + struct drm_file *file); 17 19 int i915_gem_execbuffer2_ioctl(struct drm_device *dev, void *data, 18 20 struct drm_file *file); 19 21 int i915_gem_get_aperture_ioctl(struct drm_device *dev, void *data,
+19 -1
drivers/gpu/drm/i915/gem/i915_gem_lmem.c
··· 17 17 .release = i915_gem_object_release_memory_region, 18 18 }; 19 19 20 + void __iomem * 21 + i915_gem_object_lmem_io_map(struct drm_i915_gem_object *obj, 22 + unsigned long n, 23 + unsigned long size) 24 + { 25 + resource_size_t offset; 26 + 27 + GEM_BUG_ON(!i915_gem_object_is_contiguous(obj)); 28 + 29 + offset = i915_gem_object_get_dma_address(obj, n); 30 + offset -= obj->mm.region->region.start; 31 + 32 + return io_mapping_map_wc(&obj->mm.region->iomap, offset, size); 33 + } 34 + 20 35 bool i915_gem_object_is_lmem(struct drm_i915_gem_object *obj) 21 36 { 22 - return obj->ops == &i915_gem_lmem_obj_ops; 37 + struct intel_memory_region *mr = obj->mm.region; 38 + 39 + return mr && (mr->type == INTEL_MEMORY_LOCAL || 40 + mr->type == INTEL_MEMORY_STOLEN_LOCAL); 23 41 } 24 42 25 43 struct drm_i915_gem_object *
+5
drivers/gpu/drm/i915/gem/i915_gem_lmem.h
··· 14 14 15 15 extern const struct drm_i915_gem_object_ops i915_gem_lmem_obj_ops; 16 16 17 + void __iomem * 18 + i915_gem_object_lmem_io_map(struct drm_i915_gem_object *obj, 19 + unsigned long n, 20 + unsigned long size); 21 + 17 22 bool i915_gem_object_is_lmem(struct drm_i915_gem_object *obj); 18 23 19 24 struct drm_i915_gem_object *
+3
drivers/gpu/drm/i915/gem/i915_gem_object.c
··· 249 249 if (obj->ops->release) 250 250 obj->ops->release(obj); 251 251 252 + if (obj->mm.n_placements > 1) 253 + kfree(obj->mm.placements); 254 + 252 255 /* But keep the pointer alive for RCU-protected lookups */ 253 256 call_rcu(&obj->rcu, __i915_gem_free_object_rcu); 254 257 cond_resched();
+11 -3
drivers/gpu/drm/i915/gem/i915_gem_object_types.h
··· 172 172 #define I915_BO_ALLOC_CONTIGUOUS BIT(0) 173 173 #define I915_BO_ALLOC_VOLATILE BIT(1) 174 174 #define I915_BO_ALLOC_STRUCT_PAGE BIT(2) 175 + #define I915_BO_ALLOC_CPU_CLEAR BIT(3) 175 176 #define I915_BO_ALLOC_FLAGS (I915_BO_ALLOC_CONTIGUOUS | \ 176 177 I915_BO_ALLOC_VOLATILE | \ 177 - I915_BO_ALLOC_STRUCT_PAGE) 178 - #define I915_BO_READONLY BIT(3) 179 - #define I915_TILING_QUIRK_BIT 4 /* unknown swizzling; do not release! */ 178 + I915_BO_ALLOC_STRUCT_PAGE | \ 179 + I915_BO_ALLOC_CPU_CLEAR) 180 + #define I915_BO_READONLY BIT(4) 181 + #define I915_TILING_QUIRK_BIT 5 /* unknown swizzling; do not release! */ 180 182 181 183 /* 182 184 * Is the object to be mapped as read-only to the GPU ··· 220 218 */ 221 219 atomic_t pages_pin_count; 222 220 atomic_t shrink_pin; 221 + 222 + /** 223 + * Priority list of potential placements for this object. 224 + */ 225 + struct intel_memory_region **placements; 226 + int n_placements; 223 227 224 228 /** 225 229 * Memory region for this object.
+22
drivers/gpu/drm/i915/gem/i915_gem_region.c
··· 95 95 sg_mark_end(sg); 96 96 i915_sg_trim(st); 97 97 98 + /* Intended for kernel internal use only */ 99 + if (obj->flags & I915_BO_ALLOC_CPU_CLEAR) { 100 + struct scatterlist *sg; 101 + unsigned long i; 102 + 103 + for_each_sg(st->sgl, sg, st->nents, i) { 104 + unsigned int length; 105 + void __iomem *vaddr; 106 + dma_addr_t daddr; 107 + 108 + daddr = sg_dma_address(sg); 109 + daddr -= mem->region.start; 110 + length = sg_dma_len(sg); 111 + 112 + vaddr = io_mapping_map_wc(&mem->iomap, daddr, length); 113 + memset64((void __force *)vaddr, 0, length / sizeof(u64)); 114 + io_mapping_unmap(vaddr); 115 + } 116 + 117 + wmb(); 118 + } 119 + 98 120 __i915_gem_object_set_pages(obj, st, sg_page_sizes); 99 121 100 122 return 0;
+9 -4
drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
··· 38 38 } 39 39 40 40 static bool unsafe_drop_pages(struct drm_i915_gem_object *obj, 41 - unsigned long shrink) 41 + unsigned long shrink, bool trylock_vm) 42 42 { 43 43 unsigned long flags; 44 44 45 45 flags = 0; 46 46 if (shrink & I915_SHRINK_ACTIVE) 47 - flags = I915_GEM_OBJECT_UNBIND_ACTIVE; 47 + flags |= I915_GEM_OBJECT_UNBIND_ACTIVE; 48 48 if (!(shrink & I915_SHRINK_BOUND)) 49 - flags = I915_GEM_OBJECT_UNBIND_TEST; 49 + flags |= I915_GEM_OBJECT_UNBIND_TEST; 50 + if (trylock_vm) 51 + flags |= I915_GEM_OBJECT_UNBIND_VM_TRYLOCK; 50 52 51 53 if (i915_gem_object_unbind(obj, flags) == 0) 52 54 return true; ··· 118 116 unsigned long count = 0; 119 117 unsigned long scanned = 0; 120 118 int err; 119 + 120 + /* CHV + VTD workaround use stop_machine(); need to trylock vm->mutex */ 121 + bool trylock_vm = !ww && intel_vm_no_concurrent_access_wa(i915); 121 122 122 123 trace_i915_gem_shrink(i915, target, shrink); 123 124 ··· 209 204 spin_unlock_irqrestore(&i915->mm.obj_lock, flags); 210 205 211 206 err = 0; 212 - if (unsafe_drop_pages(obj, shrink)) { 207 + if (unsafe_drop_pages(obj, shrink, trylock_vm)) { 213 208 /* May arrive from get_pages on another bo */ 214 209 if (!ww) { 215 210 if (!i915_gem_object_trylock(obj))
+133 -26
drivers/gpu/drm/i915/gem/i915_gem_stolen.c
··· 10 10 #include <drm/drm_mm.h> 11 11 #include <drm/i915_drm.h> 12 12 13 + #include "gem/i915_gem_lmem.h" 13 14 #include "gem/i915_gem_region.h" 14 15 #include "i915_drv.h" 15 16 #include "i915_gem_stolen.h" ··· 121 120 dsm); 122 121 } 123 122 } 123 + 124 + /* 125 + * With stolen lmem, we don't need to check if the address range 126 + * overlaps with the non-stolen system memory range, since lmem is local 127 + * to the gpu. 128 + */ 129 + if (HAS_LMEM(i915)) 130 + return 0; 124 131 125 132 /* 126 133 * Verify that nothing else uses this physical address. Stolen ··· 383 374 } 384 375 } 385 376 386 - static int i915_gem_init_stolen(struct drm_i915_private *i915) 377 + static int i915_gem_init_stolen(struct intel_memory_region *mem) 387 378 { 379 + struct drm_i915_private *i915 = mem->i915; 388 380 struct intel_uncore *uncore = &i915->uncore; 389 381 resource_size_t reserved_base, stolen_top; 390 382 resource_size_t reserved_total, reserved_size; ··· 406 396 return 0; 407 397 } 408 398 409 - if (resource_size(&intel_graphics_stolen_res) == 0) 399 + if (resource_size(&mem->region) == 0) 410 400 return 0; 411 401 412 - i915->dsm = intel_graphics_stolen_res; 402 + i915->dsm = mem->region; 413 403 414 404 if (i915_adjust_stolen(i915, &i915->dsm)) 415 405 return 0; ··· 637 627 { 638 628 static struct lock_class_key lock_class; 639 629 unsigned int cache_level; 630 + unsigned int flags; 640 631 int err; 641 632 633 + /* 634 + * Stolen objects are always physically contiguous since we just 635 + * allocate one big block underneath using the drm_mm range allocator. 636 + */ 637 + flags = I915_BO_ALLOC_CONTIGUOUS; 638 + 642 639 drm_gem_private_object_init(&mem->i915->drm, &obj->base, stolen->size); 643 - i915_gem_object_init(obj, &i915_gem_object_stolen_ops, &lock_class, 0); 640 + i915_gem_object_init(obj, &i915_gem_object_stolen_ops, &lock_class, flags); 644 641 645 642 obj->stolen = stolen; 646 643 obj->read_domains = I915_GEM_DOMAIN_CPU | I915_GEM_DOMAIN_GTT; ··· 657 640 if (WARN_ON(!i915_gem_object_trylock(obj))) 658 641 return -EBUSY; 659 642 643 + i915_gem_object_init_memory_region(obj, mem); 644 + 660 645 err = i915_gem_object_pin_pages(obj); 661 - if (!err) 662 - i915_gem_object_init_memory_region(obj, mem); 646 + if (err) 647 + i915_gem_object_release_memory_region(obj); 663 648 i915_gem_object_unlock(obj); 664 649 665 650 return err; ··· 686 667 if (!stolen) 687 668 return -ENOMEM; 688 669 689 - ret = i915_gem_stolen_insert_node(i915, stolen, size, 4096); 670 + ret = i915_gem_stolen_insert_node(i915, stolen, size, 671 + mem->min_page_size); 690 672 if (ret) 691 673 goto err_free; 692 674 ··· 708 688 i915_gem_object_create_stolen(struct drm_i915_private *i915, 709 689 resource_size_t size) 710 690 { 711 - return i915_gem_object_create_region(i915->mm.regions[INTEL_REGION_STOLEN_SMEM], 712 - size, I915_BO_ALLOC_CONTIGUOUS); 691 + return i915_gem_object_create_region(i915->mm.stolen_region, size, 0); 713 692 } 714 693 715 - static int init_stolen(struct intel_memory_region *mem) 694 + static int init_stolen_smem(struct intel_memory_region *mem) 716 695 { 717 - intel_memory_region_set_name(mem, "stolen"); 718 - 719 696 /* 720 697 * Initialise stolen early so that we may reserve preallocated 721 698 * objects for the BIOS to KMS transition. 
722 699 */ 723 - return i915_gem_init_stolen(mem->i915); 700 + return i915_gem_init_stolen(mem); 724 701 } 725 702 726 - static void release_stolen(struct intel_memory_region *mem) 703 + static void release_stolen_smem(struct intel_memory_region *mem) 727 704 { 728 705 i915_gem_cleanup_stolen(mem->i915); 729 706 } 730 707 731 - static const struct intel_memory_region_ops i915_region_stolen_ops = { 732 - .init = init_stolen, 733 - .release = release_stolen, 708 + static const struct intel_memory_region_ops i915_region_stolen_smem_ops = { 709 + .init = init_stolen_smem, 710 + .release = release_stolen_smem, 734 711 .init_object = _i915_gem_object_stolen_init, 735 712 }; 736 713 737 - struct intel_memory_region *i915_gem_stolen_setup(struct drm_i915_private *i915) 714 + static int init_stolen_lmem(struct intel_memory_region *mem) 738 715 { 739 - return intel_memory_region_create(i915, 740 - intel_graphics_stolen_res.start, 741 - resource_size(&intel_graphics_stolen_res), 742 - PAGE_SIZE, 0, 743 - &i915_region_stolen_ops); 716 + int err; 717 + 718 + if (GEM_WARN_ON(resource_size(&mem->region) == 0)) 719 + return -ENODEV; 720 + 721 + if (!io_mapping_init_wc(&mem->iomap, 722 + mem->io_start, 723 + resource_size(&mem->region))) 724 + return -EIO; 725 + 726 + /* 727 + * TODO: For stolen lmem we mostly just care about populating the dsm 728 + * related bits and setting up the drm_mm allocator for the range. 729 + * Perhaps split up i915_gem_init_stolen() for this. 730 + */ 731 + err = i915_gem_init_stolen(mem); 732 + if (err) 733 + goto err_fini; 734 + 735 + return 0; 736 + 737 + err_fini: 738 + io_mapping_fini(&mem->iomap); 739 + return err; 740 + } 741 + 742 + static void release_stolen_lmem(struct intel_memory_region *mem) 743 + { 744 + io_mapping_fini(&mem->iomap); 745 + i915_gem_cleanup_stolen(mem->i915); 746 + } 747 + 748 + static const struct intel_memory_region_ops i915_region_stolen_lmem_ops = { 749 + .init = init_stolen_lmem, 750 + .release = release_stolen_lmem, 751 + .init_object = _i915_gem_object_stolen_init, 752 + }; 753 + 754 + struct intel_memory_region * 755 + i915_gem_stolen_lmem_setup(struct drm_i915_private *i915) 756 + { 757 + struct intel_uncore *uncore = &i915->uncore; 758 + struct pci_dev *pdev = to_pci_dev(i915->drm.dev); 759 + struct intel_memory_region *mem; 760 + resource_size_t io_start; 761 + resource_size_t lmem_size; 762 + u64 lmem_base; 763 + 764 + lmem_base = intel_uncore_read64(uncore, GEN12_DSMBASE); 765 + if (GEM_WARN_ON(lmem_base >= pci_resource_len(pdev, 2))) 766 + return ERR_PTR(-ENODEV); 767 + 768 + lmem_size = pci_resource_len(pdev, 2) - lmem_base; 769 + io_start = pci_resource_start(pdev, 2) + lmem_base; 770 + 771 + mem = intel_memory_region_create(i915, lmem_base, lmem_size, 772 + I915_GTT_PAGE_SIZE_4K, io_start, 773 + &i915_region_stolen_lmem_ops); 774 + if (IS_ERR(mem)) 775 + return mem; 776 + 777 + /* 778 + * TODO: consider creating common helper to just print all the 779 + * interesting stuff from intel_memory_region, which we can use for all 780 + * our probed regions. 
781 + */ 782 + 783 + drm_dbg(&i915->drm, "Stolen Local memory IO start: %pa\n", 784 + &mem->io_start); 785 + 786 + intel_memory_region_set_name(mem, "stolen-local"); 787 + 788 + mem->private = true; 789 + 790 + return mem; 791 + } 792 + 793 + struct intel_memory_region* 794 + i915_gem_stolen_smem_setup(struct drm_i915_private *i915) 795 + { 796 + struct intel_memory_region *mem; 797 + 798 + mem = intel_memory_region_create(i915, 799 + intel_graphics_stolen_res.start, 800 + resource_size(&intel_graphics_stolen_res), 801 + PAGE_SIZE, 0, 802 + &i915_region_stolen_smem_ops); 803 + if (IS_ERR(mem)) 804 + return mem; 805 + 806 + intel_memory_region_set_name(mem, "stolen-system"); 807 + 808 + mem->private = true; 809 + 810 + return mem; 744 811 } 745 812 746 813 struct drm_i915_gem_object * ··· 835 728 resource_size_t stolen_offset, 836 729 resource_size_t size) 837 730 { 838 - struct intel_memory_region *mem = i915->mm.regions[INTEL_REGION_STOLEN_SMEM]; 731 + struct intel_memory_region *mem = i915->mm.stolen_region; 839 732 struct drm_i915_gem_object *obj; 840 733 struct drm_mm_node *stolen; 841 734 int ret; ··· 849 742 850 743 /* KISS and expect everything to be page-aligned */ 851 744 if (GEM_WARN_ON(size == 0) || 852 - GEM_WARN_ON(!IS_ALIGNED(size, I915_GTT_PAGE_SIZE)) || 853 - GEM_WARN_ON(!IS_ALIGNED(stolen_offset, I915_GTT_MIN_ALIGNMENT))) 745 + GEM_WARN_ON(!IS_ALIGNED(size, mem->min_page_size)) || 746 + GEM_WARN_ON(!IS_ALIGNED(stolen_offset, mem->min_page_size))) 854 747 return ERR_PTR(-EINVAL); 855 748 856 749 stolen = kzalloc(sizeof(*stolen), GFP_KERNEL);
+2 -1
drivers/gpu/drm/i915/gem/i915_gem_stolen.h
··· 21 21 u64 end); 22 22 void i915_gem_stolen_remove_node(struct drm_i915_private *dev_priv, 23 23 struct drm_mm_node *node); 24 - struct intel_memory_region *i915_gem_stolen_setup(struct drm_i915_private *i915); 24 + struct intel_memory_region *i915_gem_stolen_smem_setup(struct drm_i915_private *i915); 25 + struct intel_memory_region *i915_gem_stolen_lmem_setup(struct drm_i915_private *i915); 25 26 struct drm_i915_gem_object * 26 27 i915_gem_object_create_stolen(struct drm_i915_private *dev_priv, 27 28 resource_size_t size);
+2 -9
drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
··· 1740 1740 static int check_scratch_page(struct i915_gem_context *ctx, u32 *out) 1741 1741 { 1742 1742 struct i915_address_space *vm; 1743 - struct page *page; 1744 1743 u32 *vaddr; 1745 1744 int err = 0; 1746 1745 ··· 1747 1748 if (!vm) 1748 1749 return -ENODEV; 1749 1750 1750 - page = __px_page(vm->scratch[0]); 1751 - if (!page) { 1751 + if (!vm->scratch[0]) { 1752 1752 pr_err("No scratch page!\n"); 1753 1753 return -EINVAL; 1754 1754 } 1755 1755 1756 - vaddr = kmap(page); 1757 - if (!vaddr) { 1758 - pr_err("No (mappable) scratch page!\n"); 1759 - return -EINVAL; 1760 - } 1756 + vaddr = __px_vaddr(vm->scratch[0]); 1761 1757 1762 1758 memcpy(out, vaddr, sizeof(*out)); 1763 1759 if (memchr_inv(vaddr, *out, PAGE_SIZE)) { 1764 1760 pr_err("Inconsistent initial state of scratch page!\n"); 1765 1761 err = -EINVAL; 1766 1762 } 1767 - kunmap(page); 1768 1763 1769 1764 return err; 1770 1765 }
+26
drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
··· 842 842 return true; 843 843 } 844 844 845 + static void object_set_placements(struct drm_i915_gem_object *obj, 846 + struct intel_memory_region **placements, 847 + unsigned int n_placements) 848 + { 849 + GEM_BUG_ON(!n_placements); 850 + 851 + if (n_placements == 1) { 852 + struct drm_i915_private *i915 = to_i915(obj->base.dev); 853 + struct intel_memory_region *mr = placements[0]; 854 + 855 + obj->mm.placements = &i915->mm.regions[mr->id]; 856 + obj->mm.n_placements = 1; 857 + } else { 858 + obj->mm.placements = placements; 859 + obj->mm.n_placements = n_placements; 860 + } 861 + } 862 + 845 863 #define expand32(x) (((x) << 0) | ((x) << 8) | ((x) << 16) | ((x) << 24)) 846 864 static int __igt_mmap(struct drm_i915_private *i915, 847 865 struct drm_i915_gem_object *obj, ··· 967 949 968 950 if (IS_ERR(obj)) 969 951 return PTR_ERR(obj); 952 + 953 + object_set_placements(obj, &mr, 1); 970 954 971 955 err = __igt_mmap(i915, obj, I915_MMAP_TYPE_GTT); 972 956 if (err == 0) ··· 1087 1067 1088 1068 if (IS_ERR(obj)) 1089 1069 return PTR_ERR(obj); 1070 + 1071 + object_set_placements(obj, &mr, 1); 1090 1072 1091 1073 err = __igt_mmap_access(i915, obj, I915_MMAP_TYPE_GTT); 1092 1074 if (err == 0) ··· 1233 1211 if (IS_ERR(obj)) 1234 1212 return PTR_ERR(obj); 1235 1213 1214 + object_set_placements(obj, &mr, 1); 1215 + 1236 1216 err = __igt_mmap_gpu(i915, obj, I915_MMAP_TYPE_GTT); 1237 1217 if (err == 0) 1238 1218 err = __igt_mmap_gpu(i915, obj, I915_MMAP_TYPE_WC); ··· 1377 1353 1378 1354 if (IS_ERR(obj)) 1379 1355 return PTR_ERR(obj); 1356 + 1357 + object_set_placements(obj, &mr, 1); 1380 1358 1381 1359 err = __igt_mmap_revoke(i915, obj, I915_MMAP_TYPE_GTT); 1382 1360 if (err == 0)
+5 -8
drivers/gpu/drm/i915/gt/gen6_ppgtt.c
··· 96 96 * entries back to scratch. 97 97 */ 98 98 99 - vaddr = kmap_atomic_px(pt); 99 + vaddr = px_vaddr(pt); 100 100 memset32(vaddr + pte, scratch_pte, count); 101 - kunmap_atomic(vaddr); 102 101 103 102 pte = 0; 104 103 } ··· 119 120 120 121 GEM_BUG_ON(!pd->entry[act_pt]); 121 122 122 - vaddr = kmap_atomic_px(i915_pt_entry(pd, act_pt)); 123 + vaddr = px_vaddr(i915_pt_entry(pd, act_pt)); 123 124 do { 124 125 GEM_BUG_ON(sg_dma_len(iter.sg) < I915_GTT_PAGE_SIZE); 125 126 vaddr[act_pte] = pte_encode | GEN6_PTE_ADDR_ENCODE(iter.dma); ··· 135 136 } 136 137 137 138 if (++act_pte == GEN6_PTES) { 138 - kunmap_atomic(vaddr); 139 - vaddr = kmap_atomic_px(i915_pt_entry(pd, ++act_pt)); 139 + vaddr = px_vaddr(i915_pt_entry(pd, ++act_pt)); 140 140 act_pte = 0; 141 141 } 142 142 } while (1); 143 - kunmap_atomic(vaddr); 144 143 145 144 vma->page_sizes.gtt = I915_GTT_PAGE_SIZE; 146 145 } ··· 232 235 goto err_scratch0; 233 236 } 234 237 235 - ret = pin_pt_dma(vm, vm->scratch[1]); 238 + ret = map_pt_dma(vm, vm->scratch[1]); 236 239 if (ret) 237 240 goto err_scratch1; 238 241 ··· 343 346 if (!vma) 344 347 return ERR_PTR(-ENOMEM); 345 348 346 - i915_active_init(&vma->active, NULL, NULL); 349 + i915_active_init(&vma->active, NULL, NULL, 0); 347 350 348 351 kref_init(&vma->ref); 349 352 mutex_init(&vma->pages_mutex);
+14 -17
drivers/gpu/drm/i915/gt/gen8_ppgtt.c
··· 242 242 atomic_read(&pt->used)); 243 243 GEM_BUG_ON(!count || count >= atomic_read(&pt->used)); 244 244 245 - vaddr = kmap_atomic_px(pt); 245 + vaddr = px_vaddr(pt); 246 246 memset64(vaddr + gen8_pd_index(start, 0), 247 247 vm->scratch[0]->encode, 248 248 count); 249 - kunmap_atomic(vaddr); 250 249 251 250 atomic_sub(count, &pt->used); 252 251 start += count; ··· 374 375 gen8_pte_t *vaddr; 375 376 376 377 pd = i915_pd_entry(pdp, gen8_pd_index(idx, 2)); 377 - vaddr = kmap_atomic_px(i915_pt_entry(pd, gen8_pd_index(idx, 1))); 378 + vaddr = px_vaddr(i915_pt_entry(pd, gen8_pd_index(idx, 1))); 378 379 do { 379 380 GEM_BUG_ON(sg_dma_len(iter->sg) < I915_GTT_PAGE_SIZE); 380 381 vaddr[gen8_pd_index(idx, 0)] = pte_encode | iter->dma; ··· 401 402 } 402 403 403 404 clflush_cache_range(vaddr, PAGE_SIZE); 404 - kunmap_atomic(vaddr); 405 - vaddr = kmap_atomic_px(i915_pt_entry(pd, gen8_pd_index(idx, 1))); 405 + vaddr = px_vaddr(i915_pt_entry(pd, gen8_pd_index(idx, 1))); 406 406 } 407 407 } while (1); 408 408 clflush_cache_range(vaddr, PAGE_SIZE); 409 - kunmap_atomic(vaddr); 410 409 411 410 return idx; 412 411 } ··· 439 442 encode |= GEN8_PDE_PS_2M; 440 443 page_size = I915_GTT_PAGE_SIZE_2M; 441 444 442 - vaddr = kmap_atomic_px(pd); 445 + vaddr = px_vaddr(pd); 443 446 } else { 444 447 struct i915_page_table *pt = 445 448 i915_pt_entry(pd, __gen8_pte_index(start, 1)); ··· 454 457 rem >= (I915_PDES - index) * I915_GTT_PAGE_SIZE)) 455 458 maybe_64K = __gen8_pte_index(start, 1); 456 459 457 - vaddr = kmap_atomic_px(pt); 460 + vaddr = px_vaddr(pt); 458 461 } 459 462 460 463 do { ··· 488 491 } while (rem >= page_size && index < I915_PDES); 489 492 490 493 clflush_cache_range(vaddr, PAGE_SIZE); 491 - kunmap_atomic(vaddr); 492 494 493 495 /* 494 496 * Is it safe to mark the 2M block as 64K? -- Either we have ··· 501 505 !iter->sg && IS_ALIGNED(vma->node.start + 502 506 vma->node.size, 503 507 I915_GTT_PAGE_SIZE_2M)))) { 504 - vaddr = kmap_atomic_px(pd); 508 + vaddr = px_vaddr(pd); 505 509 vaddr[maybe_64K] |= GEN8_PDE_IPS_64K; 506 - kunmap_atomic(vaddr); 507 510 page_size = I915_GTT_PAGE_SIZE_64K; 508 511 509 512 /* ··· 518 523 u16 i; 519 524 520 525 encode = vma->vm->scratch[0]->encode; 521 - vaddr = kmap_atomic_px(i915_pt_entry(pd, maybe_64K)); 526 + vaddr = px_vaddr(i915_pt_entry(pd, maybe_64K)); 522 527 523 528 for (i = 1; i < index; i += 16) 524 529 memset64(vaddr + i, encode, 15); 525 530 526 - kunmap_atomic(vaddr); 527 531 } 528 532 } 529 533 ··· 596 602 if (IS_ERR(obj)) 597 603 goto free_scratch; 598 604 599 - ret = pin_pt_dma(vm, obj); 605 + ret = map_pt_dma(vm, obj); 600 606 if (ret) { 601 607 i915_gem_object_put(obj); 602 608 goto free_scratch; ··· 633 639 if (IS_ERR(pde)) 634 640 return PTR_ERR(pde); 635 641 636 - err = pin_pt_dma(vm, pde->pt.base); 642 + err = map_pt_dma(vm, pde->pt.base); 637 643 if (err) { 638 644 free_pd(vm, pde); 639 645 return err; ··· 668 674 goto err_pd; 669 675 } 670 676 671 - err = pin_pt_dma(vm, pd->pt.base); 677 + err = map_pt_dma(vm, pd->pt.base); 672 678 if (err) 673 679 goto err_pd; 674 680 ··· 711 717 */ 712 718 ppgtt->vm.has_read_only = !IS_GEN_RANGE(gt->i915, 11, 12); 713 719 714 - ppgtt->vm.alloc_pt_dma = alloc_pt_dma; 720 + if (HAS_LMEM(gt->i915)) 721 + ppgtt->vm.alloc_pt_dma = alloc_pt_lmem; 722 + else 723 + ppgtt->vm.alloc_pt_dma = alloc_pt_dma; 715 724 716 725 err = gen8_init_scratch(&ppgtt->vm); 717 726 if (err)
+1 -2
drivers/gpu/drm/i915/gt/intel_context.c
··· 326 326 intel_context_put(ce); 327 327 } 328 328 329 - __i915_active_call 330 329 static void __intel_context_retire(struct i915_active *active) 331 330 { 332 331 struct intel_context *ce = container_of(active, typeof(*ce), active); ··· 384 385 mutex_init(&ce->pin_mutex); 385 386 386 387 i915_active_init(&ce->active, 387 - __intel_context_active, __intel_context_retire); 388 + __intel_context_active, __intel_context_retire, 0); 388 389 } 389 390 390 391 void intel_context_fini(struct intel_context *ce)
+7 -1
drivers/gpu/drm/i915/gt/intel_engine.h
··· 13 13 #include "i915_reg.h" 14 14 #include "i915_request.h" 15 15 #include "i915_selftest.h" 16 - #include "gt/intel_timeline.h" 17 16 #include "intel_engine_types.h" 17 + #include "intel_gt_types.h" 18 + #include "intel_timeline.h" 18 19 #include "intel_workarounds.h" 19 20 20 21 struct drm_printer; ··· 262 261 #define ENGINE_PHYSICAL 0 263 262 #define ENGINE_MOCK 1 264 263 #define ENGINE_VIRTUAL 2 264 + 265 + static inline bool intel_engine_uses_guc(const struct intel_engine_cs *engine) 266 + { 267 + return engine->gt->submission_method >= INTEL_SUBMISSION_GUC; 268 + } 265 269 266 270 static inline bool 267 271 intel_engine_has_preempt_reset(const struct intel_engine_cs *engine)
+16 -5
drivers/gpu/drm/i915/gt/intel_engine_cs.c
··· 255 255 intel_engine_set_hwsp_writemask(engine, ~0u); 256 256 } 257 257 258 + static void nop_irq_handler(struct intel_engine_cs *engine, u16 iir) 259 + { 260 + GEM_DEBUG_WARN_ON(iir); 261 + } 262 + 258 263 static int intel_engine_setup(struct intel_gt *gt, enum intel_engine_id id) 259 264 { 260 265 const struct engine_info *info = &intel_engines[id]; ··· 296 291 engine->mmio_base = __engine_mmio_base(i915, info->mmio_bases); 297 292 engine->hw_id = info->hw_id; 298 293 engine->guc_id = MAKE_GUC_ID(info->class, info->instance); 294 + 295 + engine->irq_handler = nop_irq_handler; 299 296 300 297 engine->class = info->class; 301 298 engine->instance = info->instance; ··· 905 898 return 0; 906 899 907 900 err_context: 908 - intel_context_put(ce); 901 + destroy_pinned_context(ce); 909 902 return ret; 910 903 } 911 904 ··· 916 909 enum intel_engine_id id; 917 910 int err; 918 911 919 - if (intel_uc_uses_guc_submission(&gt->uc)) 912 + if (intel_uc_uses_guc_submission(&gt->uc)) { 913 + gt->submission_method = INTEL_SUBMISSION_GUC; 920 914 setup = intel_guc_submission_setup; 921 - else if (HAS_EXECLISTS(gt->i915)) 915 + } else if (HAS_EXECLISTS(gt->i915)) { 916 + gt->submission_method = INTEL_SUBMISSION_ELSP; 922 917 setup = intel_execlists_submission_setup; 923 - else 918 + } else { 919 + gt->submission_method = INTEL_SUBMISSION_RING; 924 920 setup = intel_ring_submission_setup; 921 + } 925 922 926 923 for_each_engine(engine, gt, id) { 927 924 err = engine_setup_common(engine); ··· 1490 1479 drm_printf(m, "\tIPEHR: 0x%08x\n", ENGINE_READ(engine, IPEHR)); 1491 1480 } 1492 1481 1493 - if (intel_engine_in_guc_submission_mode(engine)) { 1482 + if (intel_engine_uses_guc(engine)) { 1494 1483 /* nothing to print yet */ 1495 1484 } else if (HAS_EXECLISTS(dev_priv)) { 1496 1485 struct i915_request * const *port, *rq;
+1 -1
drivers/gpu/drm/i915/gt/intel_engine_pm.c
··· 23 23 24 24 if (ce->state) { 25 25 struct drm_i915_gem_object *obj = ce->state->obj; 26 - int type = i915_coherent_map_type(ce->engine->i915); 26 + int type = i915_coherent_map_type(ce->engine->i915, obj, true); 27 27 void *map; 28 28 29 29 if (!i915_gem_object_trylock(obj))
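The hunk above only shows the updated call site; for reference, the reworked helper ("Determine the coherent map type based on object location" in the change list) looks roughly like the sketch below. This is reconstructed from the i915 tree of that era and should be read as illustrative, not as a verbatim quote of the patch.

/*
 * Rough sketch of the reworked helper (lives in i915_drv.h); illustrative,
 * not a verbatim quote. Objects backed by device local memory must be
 * mapped write-combined; everything else keeps the old LLC-based logic.
 */
static inline enum i915_map_type
i915_coherent_map_type(struct drm_i915_private *i915,
                       struct drm_i915_gem_object *obj, bool always_coherent)
{
    if (i915_gem_object_is_lmem(obj))
        return I915_MAP_WC;
    if (HAS_LLC(i915) || always_coherent)
        return I915_MAP_WB;
    else
        return I915_MAP_WC;
}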
+4 -10
drivers/gpu/drm/i915/gt/intel_engine_types.h
··· 402 402 u32 irq_enable_mask; /* bitmask to enable ring interrupt */ 403 403 void (*irq_enable)(struct intel_engine_cs *engine); 404 404 void (*irq_disable)(struct intel_engine_cs *engine); 405 + void (*irq_handler)(struct intel_engine_cs *engine, u16 iir); 405 406 406 407 void (*sanitize)(struct intel_engine_cs *engine); 407 408 int (*resume)(struct intel_engine_cs *engine); ··· 482 481 #define I915_ENGINE_HAS_PREEMPTION BIT(2) 483 482 #define I915_ENGINE_HAS_SEMAPHORES BIT(3) 484 483 #define I915_ENGINE_HAS_TIMESLICES BIT(4) 485 - #define I915_ENGINE_NEEDS_BREADCRUMB_TASKLET BIT(5) 486 - #define I915_ENGINE_IS_VIRTUAL BIT(6) 487 - #define I915_ENGINE_HAS_RELATIVE_MMIO BIT(7) 488 - #define I915_ENGINE_REQUIRES_CMD_PARSER BIT(8) 484 + #define I915_ENGINE_IS_VIRTUAL BIT(5) 485 + #define I915_ENGINE_HAS_RELATIVE_MMIO BIT(6) 486 + #define I915_ENGINE_REQUIRES_CMD_PARSER BIT(7) 489 487 unsigned int flags; 490 488 491 489 /* ··· 591 591 return false; 592 592 593 593 return engine->flags & I915_ENGINE_HAS_TIMESLICES; 594 - } 595 - 596 - static inline bool 597 - intel_engine_needs_breadcrumb_tasklet(const struct intel_engine_cs *engine) 598 - { 599 - return engine->flags & I915_ENGINE_NEEDS_BREADCRUMB_TASKLET; 600 594 } 601 595 602 596 static inline bool
+64 -31
drivers/gpu/drm/i915/gt/intel_execlists_submission.c
··· 118 118 #include "intel_engine_stats.h" 119 119 #include "intel_execlists_submission.h" 120 120 #include "intel_gt.h" 121 + #include "intel_gt_irq.h" 121 122 #include "intel_gt_pm.h" 122 123 #include "intel_gt_requests.h" 123 124 #include "intel_lrc.h" ··· 1769 1768 */ 1770 1769 GEM_BUG_ON(!tasklet_is_locked(&execlists->tasklet) && 1771 1770 !reset_in_progress(execlists)); 1772 - GEM_BUG_ON(!intel_engine_in_execlists_submission_mode(engine)); 1773 1771 1774 1772 /* 1775 1773 * Note that csb_write, csb_status may be either in HWSP or mmio. ··· 2383 2383 2384 2384 post_process_csb(post, inactive); 2385 2385 rcu_read_unlock(); 2386 + } 2387 + 2388 + static void execlists_irq_handler(struct intel_engine_cs *engine, u16 iir) 2389 + { 2390 + bool tasklet = false; 2391 + 2392 + if (unlikely(iir & GT_CS_MASTER_ERROR_INTERRUPT)) { 2393 + u32 eir; 2394 + 2395 + /* Upper 16b are the enabling mask, rsvd for internal errors */ 2396 + eir = ENGINE_READ(engine, RING_EIR) & GENMASK(15, 0); 2397 + ENGINE_TRACE(engine, "CS error: %x\n", eir); 2398 + 2399 + /* Disable the error interrupt until after the reset */ 2400 + if (likely(eir)) { 2401 + ENGINE_WRITE(engine, RING_EMR, ~0u); 2402 + ENGINE_WRITE(engine, RING_EIR, eir); 2403 + WRITE_ONCE(engine->execlists.error_interrupt, eir); 2404 + tasklet = true; 2405 + } 2406 + } 2407 + 2408 + if (iir & GT_WAIT_SEMAPHORE_INTERRUPT) { 2409 + WRITE_ONCE(engine->execlists.yield, 2410 + ENGINE_READ_FW(engine, RING_EXECLIST_STATUS_HI)); 2411 + ENGINE_TRACE(engine, "semaphore yield: %08x\n", 2412 + engine->execlists.yield); 2413 + if (del_timer(&engine->execlists.timer)) 2414 + tasklet = true; 2415 + } 2416 + 2417 + if (iir & GT_CONTEXT_SWITCH_INTERRUPT) 2418 + tasklet = true; 2419 + 2420 + if (iir & GT_RENDER_USER_INTERRUPT) 2421 + intel_engine_signal_breadcrumbs(engine); 2422 + 2423 + if (tasklet) 2424 + tasklet_hi_schedule(&engine->execlists.tasklet); 2386 2425 } 2387 2426 2388 2427 static void __execlists_kick(struct intel_engine_execlists *execlists) ··· 3115 3076 engine->submit_request = execlists_submit_request; 3116 3077 engine->schedule = i915_schedule; 3117 3078 engine->execlists.tasklet.callback = execlists_submission_tasklet; 3118 - 3119 - engine->reset.prepare = execlists_reset_prepare; 3120 - engine->reset.rewind = execlists_reset_rewind; 3121 - engine->reset.cancel = execlists_reset_cancel; 3122 - engine->reset.finish = execlists_reset_finish; 3123 - 3124 - engine->park = execlists_park; 3125 - engine->unpark = NULL; 3126 - 3127 - engine->flags |= I915_ENGINE_SUPPORTS_STATS; 3128 - if (!intel_vgpu_active(engine->i915)) { 3129 - engine->flags |= I915_ENGINE_HAS_SEMAPHORES; 3130 - if (can_preempt(engine)) { 3131 - engine->flags |= I915_ENGINE_HAS_PREEMPTION; 3132 - if (IS_ACTIVE(CONFIG_DRM_I915_TIMESLICE_DURATION)) 3133 - engine->flags |= I915_ENGINE_HAS_TIMESLICES; 3134 - } 3135 - } 3136 - 3137 - if (intel_engine_has_preemption(engine)) 3138 - engine->emit_bb_start = gen8_emit_bb_start; 3139 - else 3140 - engine->emit_bb_start = gen8_emit_bb_start_noarb; 3141 3079 } 3142 3080 3143 3081 static void execlists_shutdown(struct intel_engine_cs *engine) ··· 3145 3129 engine->cops = &execlists_context_ops; 3146 3130 engine->request_alloc = execlists_request_alloc; 3147 3131 3132 + engine->reset.prepare = execlists_reset_prepare; 3133 + engine->reset.rewind = execlists_reset_rewind; 3134 + engine->reset.cancel = execlists_reset_cancel; 3135 + engine->reset.finish = execlists_reset_finish; 3136 + 3137 + engine->park = execlists_park; 3138 + engine->unpark = NULL; 
3139 + 3148 3140 engine->emit_flush = gen8_emit_flush_xcs; 3149 3141 engine->emit_init_breadcrumb = gen8_emit_init_breadcrumb; 3150 3142 engine->emit_fini_breadcrumb = gen8_emit_fini_breadcrumb_xcs; ··· 3173 3149 * until a more refined solution exists. 3174 3150 */ 3175 3151 } 3152 + intel_engine_set_irq_handler(engine, execlists_irq_handler); 3153 + 3154 + engine->flags |= I915_ENGINE_SUPPORTS_STATS; 3155 + if (!intel_vgpu_active(engine->i915)) { 3156 + engine->flags |= I915_ENGINE_HAS_SEMAPHORES; 3157 + if (can_preempt(engine)) { 3158 + engine->flags |= I915_ENGINE_HAS_PREEMPTION; 3159 + if (IS_ACTIVE(CONFIG_DRM_I915_TIMESLICE_DURATION)) 3160 + engine->flags |= I915_ENGINE_HAS_TIMESLICES; 3161 + } 3162 + } 3163 + 3164 + if (intel_engine_has_preemption(engine)) 3165 + engine->emit_bb_start = gen8_emit_bb_start; 3166 + else 3167 + engine->emit_bb_start = gen8_emit_bb_start_noarb; 3176 3168 } 3177 3169 3178 3170 static void logical_ring_default_irqs(struct intel_engine_cs *engine) ··· 3922 3882 } 3923 3883 3924 3884 spin_unlock_irqrestore(&engine->active.lock, flags); 3925 - } 3926 - 3927 - bool 3928 - intel_engine_in_execlists_submission_mode(const struct intel_engine_cs *engine) 3929 - { 3930 - return engine->set_default_submission == 3931 - execlists_set_default_submission; 3932 3885 } 3933 3886 3934 3887 #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
-3
drivers/gpu/drm/i915/gt/intel_execlists_submission.h
··· 43 43 const struct intel_engine_cs *master, 44 44 const struct intel_engine_cs *sibling); 45 45 46 - bool 47 - intel_engine_in_execlists_submission_mode(const struct intel_engine_cs *engine); 48 - 49 46 #endif /* __INTEL_EXECLISTS_SUBMISSION_H__ */
+6 -4
drivers/gpu/drm/i915/gt/intel_ggtt.c
··· 658 658 goto err_ppgtt; 659 659 660 660 i915_gem_object_lock(ppgtt->vm.scratch[0], NULL); 661 - err = i915_vm_pin_pt_stash(&ppgtt->vm, &stash); 661 + err = i915_vm_map_pt_stash(&ppgtt->vm, &stash); 662 662 i915_gem_object_unlock(ppgtt->vm.scratch[0]); 663 663 if (err) 664 664 goto err_stash; ··· 907 907 908 908 ggtt->vm.insert_entries = gen8_ggtt_insert_entries; 909 909 910 - /* Serialize GTT updates with aperture access on BXT if VT-d is on. */ 911 - if (intel_ggtt_update_needs_vtd_wa(i915) || 912 - IS_CHERRYVIEW(i915) /* fails with concurrent use/update */) { 910 + /* 911 + * Serialize GTT updates with aperture access on BXT if VT-d is on, 912 + * and always on CHV. 913 + */ 914 + if (intel_vm_no_concurrent_access_wa(i915)) { 913 915 ggtt->vm.insert_entries = bxt_vtd_ggtt_insert_entries__BKL; 914 916 ggtt->vm.insert_page = bxt_vtd_ggtt_insert_page__BKL; 915 917 ggtt->vm.bind_async_flags =
+3 -3
drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c
··· 653 653 * banks of memory are paired and unswizzled on the 654 654 * uneven portion, so leave that as unknown. 655 655 */ 656 - if (intel_uncore_read16(uncore, C0DRB3) == 657 - intel_uncore_read16(uncore, C1DRB3)) { 656 + if (intel_uncore_read16(uncore, C0DRB3_BW) == 657 + intel_uncore_read16(uncore, C1DRB3_BW)) { 658 658 swizzle_x = I915_BIT_6_SWIZZLE_9_10; 659 659 swizzle_y = I915_BIT_6_SWIZZLE_9; 660 660 } ··· 867 867 for (i = 0; i < num_fences; i++) { 868 868 struct i915_fence_reg *fence = &ggtt->fence_regs[i]; 869 869 870 - i915_active_init(&fence->active, NULL, NULL); 870 + i915_active_init(&fence->active, NULL, NULL, 0); 871 871 fence->ggtt = ggtt; 872 872 fence->id = i; 873 873 list_add_tail(&fence->link, &ggtt->fence_list);
+1 -2
drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.c
··· 98 98 round_jiffies_up_relative(HZ)); 99 99 } 100 100 101 - __i915_active_call 102 101 static void pool_retire(struct i915_active *ref) 103 102 { 104 103 struct intel_gt_buffer_pool_node *node = ··· 153 154 node->age = 0; 154 155 node->pool = pool; 155 156 node->pinned = false; 156 - i915_active_init(&node->active, NULL, pool_retire); 157 + i915_active_init(&node->active, NULL, pool_retire, 0); 157 158 158 159 obj = i915_gem_object_create_internal(gt->i915, sz); 159 160 if (IS_ERR(obj)) {
+24 -58
drivers/gpu/drm/i915/gt/intel_gt_irq.c
··· 20 20 intel_guc_to_host_event_handler(guc); 21 21 } 22 22 23 - static void 24 - cs_irq_handler(struct intel_engine_cs *engine, u32 iir) 25 - { 26 - bool tasklet = false; 27 - 28 - if (unlikely(iir & GT_CS_MASTER_ERROR_INTERRUPT)) { 29 - u32 eir; 30 - 31 - /* Upper 16b are the enabling mask, rsvd for internal errors */ 32 - eir = ENGINE_READ(engine, RING_EIR) & GENMASK(15, 0); 33 - ENGINE_TRACE(engine, "CS error: %x\n", eir); 34 - 35 - /* Disable the error interrupt until after the reset */ 36 - if (likely(eir)) { 37 - ENGINE_WRITE(engine, RING_EMR, ~0u); 38 - ENGINE_WRITE(engine, RING_EIR, eir); 39 - WRITE_ONCE(engine->execlists.error_interrupt, eir); 40 - tasklet = true; 41 - } 42 - } 43 - 44 - if (iir & GT_WAIT_SEMAPHORE_INTERRUPT) { 45 - WRITE_ONCE(engine->execlists.yield, 46 - ENGINE_READ_FW(engine, RING_EXECLIST_STATUS_HI)); 47 - ENGINE_TRACE(engine, "semaphore yield: %08x\n", 48 - engine->execlists.yield); 49 - if (del_timer(&engine->execlists.timer)) 50 - tasklet = true; 51 - } 52 - 53 - if (iir & GT_CONTEXT_SWITCH_INTERRUPT) 54 - tasklet = true; 55 - 56 - if (iir & GT_RENDER_USER_INTERRUPT) { 57 - intel_engine_signal_breadcrumbs(engine); 58 - tasklet |= intel_engine_needs_breadcrumb_tasklet(engine); 59 - } 60 - 61 - if (tasklet) 62 - tasklet_hi_schedule(&engine->execlists.tasklet); 63 - } 64 - 65 23 static u32 66 24 gen11_gt_engine_identity(struct intel_gt *gt, 67 25 const unsigned int bank, const unsigned int bit) ··· 80 122 engine = NULL; 81 123 82 124 if (likely(engine)) 83 - return cs_irq_handler(engine, iir); 125 + return intel_engine_cs_irq(engine, iir); 84 126 85 127 WARN_ONCE(1, "unhandled engine interrupt class=0x%x, instance=0x%x\n", 86 128 class, instance); ··· 233 275 void gen5_gt_irq_handler(struct intel_gt *gt, u32 gt_iir) 234 276 { 235 277 if (gt_iir & GT_RENDER_USER_INTERRUPT) 236 - intel_engine_signal_breadcrumbs(gt->engine_class[RENDER_CLASS][0]); 278 + intel_engine_cs_irq(gt->engine_class[RENDER_CLASS][0], 279 + gt_iir); 280 + 237 281 if (gt_iir & ILK_BSD_USER_INTERRUPT) 238 - intel_engine_signal_breadcrumbs(gt->engine_class[VIDEO_DECODE_CLASS][0]); 282 + intel_engine_cs_irq(gt->engine_class[VIDEO_DECODE_CLASS][0], 283 + gt_iir); 239 284 } 240 285 241 286 static void gen7_parity_error_irq_handler(struct intel_gt *gt, u32 iir) ··· 262 301 void gen6_gt_irq_handler(struct intel_gt *gt, u32 gt_iir) 263 302 { 264 303 if (gt_iir & GT_RENDER_USER_INTERRUPT) 265 - intel_engine_signal_breadcrumbs(gt->engine_class[RENDER_CLASS][0]); 304 + intel_engine_cs_irq(gt->engine_class[RENDER_CLASS][0], 305 + gt_iir); 306 + 266 307 if (gt_iir & GT_BSD_USER_INTERRUPT) 267 - intel_engine_signal_breadcrumbs(gt->engine_class[VIDEO_DECODE_CLASS][0]); 308 + intel_engine_cs_irq(gt->engine_class[VIDEO_DECODE_CLASS][0], 309 + gt_iir >> 12); 310 + 268 311 if (gt_iir & GT_BLT_USER_INTERRUPT) 269 - intel_engine_signal_breadcrumbs(gt->engine_class[COPY_ENGINE_CLASS][0]); 312 + intel_engine_cs_irq(gt->engine_class[COPY_ENGINE_CLASS][0], 313 + gt_iir >> 22); 270 314 271 315 if (gt_iir & (GT_BLT_CS_ERROR_INTERRUPT | 272 316 GT_BSD_CS_ERROR_INTERRUPT | ··· 290 324 if (master_ctl & (GEN8_GT_RCS_IRQ | GEN8_GT_BCS_IRQ)) { 291 325 iir = raw_reg_read(regs, GEN8_GT_IIR(0)); 292 326 if (likely(iir)) { 293 - cs_irq_handler(gt->engine_class[RENDER_CLASS][0], 294 - iir >> GEN8_RCS_IRQ_SHIFT); 295 - cs_irq_handler(gt->engine_class[COPY_ENGINE_CLASS][0], 296 - iir >> GEN8_BCS_IRQ_SHIFT); 327 + intel_engine_cs_irq(gt->engine_class[RENDER_CLASS][0], 328 + iir >> GEN8_RCS_IRQ_SHIFT); 329 + 
intel_engine_cs_irq(gt->engine_class[COPY_ENGINE_CLASS][0], 330 + iir >> GEN8_BCS_IRQ_SHIFT); 297 331 raw_reg_write(regs, GEN8_GT_IIR(0), iir); 298 332 } 299 333 } ··· 301 335 if (master_ctl & (GEN8_GT_VCS0_IRQ | GEN8_GT_VCS1_IRQ)) { 302 336 iir = raw_reg_read(regs, GEN8_GT_IIR(1)); 303 337 if (likely(iir)) { 304 - cs_irq_handler(gt->engine_class[VIDEO_DECODE_CLASS][0], 305 - iir >> GEN8_VCS0_IRQ_SHIFT); 306 - cs_irq_handler(gt->engine_class[VIDEO_DECODE_CLASS][1], 307 - iir >> GEN8_VCS1_IRQ_SHIFT); 338 + intel_engine_cs_irq(gt->engine_class[VIDEO_DECODE_CLASS][0], 339 + iir >> GEN8_VCS0_IRQ_SHIFT); 340 + intel_engine_cs_irq(gt->engine_class[VIDEO_DECODE_CLASS][1], 341 + iir >> GEN8_VCS1_IRQ_SHIFT); 308 342 raw_reg_write(regs, GEN8_GT_IIR(1), iir); 309 343 } 310 344 } ··· 312 346 if (master_ctl & GEN8_GT_VECS_IRQ) { 313 347 iir = raw_reg_read(regs, GEN8_GT_IIR(3)); 314 348 if (likely(iir)) { 315 - cs_irq_handler(gt->engine_class[VIDEO_ENHANCEMENT_CLASS][0], 316 - iir >> GEN8_VECS_IRQ_SHIFT); 349 + intel_engine_cs_irq(gt->engine_class[VIDEO_ENHANCEMENT_CLASS][0], 350 + iir >> GEN8_VECS_IRQ_SHIFT); 317 351 raw_reg_write(regs, GEN8_GT_IIR(3), iir); 318 352 } 319 353 }
+23
drivers/gpu/drm/i915/gt/intel_gt_irq.h
··· 8 8
9 9 #include <linux/types.h>
10 10
11 + #include "intel_engine_types.h"
12 +
11 13 struct intel_gt;
12 14
13 15 #define GEN8_GT_IRQS (GEN8_GT_RCS_IRQ | \
··· 40 38 void gen8_gt_irq_handler(struct intel_gt *gt, u32 master_ctl);
41 39 void gen8_gt_irq_reset(struct intel_gt *gt);
42 40 void gen8_gt_irq_postinstall(struct intel_gt *gt);
41 +
42 + static inline void intel_engine_cs_irq(struct intel_engine_cs *engine, u16 iir)
43 + {
44 + if (iir)
45 + engine->irq_handler(engine, iir);
46 + }
47 +
48 + static inline void
49 + intel_engine_set_irq_handler(struct intel_engine_cs *engine,
50 + void (*fn)(struct intel_engine_cs *engine,
51 + u16 iir))
52 + {
53 + /*
54 + * As the interrupt is live as we allocate and set up the engines,
55 + * err on the side of caution and apply barriers to updating
56 + * the irq handler callback. This assures that when we do use
57 + * the engine, we will receive interrupts only to ourselves,
58 + * and not lose any.
59 + */
60 + smp_store_mb(engine->irq_handler, fn);
61 + }
43 62
44 63 #endif /* INTEL_GT_IRQ_H */
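For orientation, here is a minimal sketch (not part of the patch) of how the new per-engine irq handler indirection above is meant to be used: a submission backend installs its handler during setup, and the top-level GT interrupt code only forwards the engine's IIR bits. The backend name and handler body below are placeholders, modelled on the execlists and ring handlers elsewhere in this series.

static void my_backend_irq_handler(struct intel_engine_cs *engine, u16 iir)
{
	/* react only to the bits this backend cares about */
	if (iir & GT_RENDER_USER_INTERRUPT)
		intel_engine_signal_breadcrumbs(engine);
}

static void my_backend_submission_setup(struct intel_engine_cs *engine)
{
	/* smp_store_mb() inside publishes the pointer before it is used */
	intel_engine_set_irq_handler(engine, my_backend_irq_handler);
}

static void my_gt_irq_dispatch(struct intel_engine_cs *engine, u16 iir)
{
	/* intel_engine_cs_irq() is a no-op when iir == 0 */
	intel_engine_cs_irq(engine, iir);
}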
+7
drivers/gpu/drm/i915/gt/intel_gt_types.h
··· 31 31 struct intel_engine_cs; 32 32 struct intel_uncore; 33 33 34 + enum intel_submission_method { 35 + INTEL_SUBMISSION_RING, 36 + INTEL_SUBMISSION_ELSP, 37 + INTEL_SUBMISSION_GUC, 38 + }; 39 + 34 40 struct intel_gt { 35 41 struct drm_i915_private *i915; 36 42 struct intel_uncore *uncore; ··· 124 118 struct intel_engine_cs *engine[I915_NUM_ENGINES]; 125 119 struct intel_engine_cs *engine_class[MAX_ENGINE_CLASS + 1] 126 120 [MAX_ENGINE_INSTANCE + 1]; 121 + enum intel_submission_method submission_method; 127 122 128 123 /* 129 124 * Default address space (either GGTT or ppGTT depending on arch).
+63 -28
drivers/gpu/drm/i915/gt/intel_gtt.c
··· 7 7 8 8 #include <linux/fault-inject.h> 9 9 10 + #include "gem/i915_gem_lmem.h" 10 11 #include "i915_trace.h" 11 12 #include "intel_gt.h" 12 13 #include "intel_gtt.h" 14 + 15 + struct drm_i915_gem_object *alloc_pt_lmem(struct i915_address_space *vm, int sz) 16 + { 17 + struct drm_i915_gem_object *obj; 18 + 19 + obj = i915_gem_object_create_lmem(vm->i915, sz, 0); 20 + /* 21 + * Ensure all paging structures for this vm share the same dma-resv 22 + * object underneath, with the idea that one object_lock() will lock 23 + * them all at once. 24 + */ 25 + if (!IS_ERR(obj)) 26 + obj->base.resv = &vm->resv; 27 + return obj; 28 + } 13 29 14 30 struct drm_i915_gem_object *alloc_pt_dma(struct i915_address_space *vm, int sz) 15 31 { ··· 35 19 i915_gem_shrink_all(vm->i915); 36 20 37 21 obj = i915_gem_object_create_internal(vm->i915, sz); 38 - /* ensure all dma objects have the same reservation class */ 22 + /* 23 + * Ensure all paging structures for this vm share the same dma-resv 24 + * object underneath, with the idea that one object_lock() will lock 25 + * them all at once. 26 + */ 39 27 if (!IS_ERR(obj)) 40 28 obj->base.resv = &vm->resv; 41 29 return obj; 42 30 } 43 31 44 - int pin_pt_dma(struct i915_address_space *vm, struct drm_i915_gem_object *obj) 32 + int map_pt_dma(struct i915_address_space *vm, struct drm_i915_gem_object *obj) 45 33 { 46 - int err; 34 + enum i915_map_type type; 35 + void *vaddr; 47 36 48 - i915_gem_object_lock(obj, NULL); 49 - err = i915_gem_object_pin_pages(obj); 50 - i915_gem_object_unlock(obj); 51 - if (err) 52 - return err; 37 + type = i915_coherent_map_type(vm->i915, obj, true); 38 + vaddr = i915_gem_object_pin_map_unlocked(obj, type); 39 + if (IS_ERR(vaddr)) 40 + return PTR_ERR(vaddr); 53 41 54 42 i915_gem_object_make_unshrinkable(obj); 55 43 return 0; 56 44 } 57 45 58 - int pin_pt_dma_locked(struct i915_address_space *vm, struct drm_i915_gem_object *obj) 46 + int map_pt_dma_locked(struct i915_address_space *vm, struct drm_i915_gem_object *obj) 59 47 { 60 - int err; 48 + enum i915_map_type type; 49 + void *vaddr; 61 50 62 - err = i915_gem_object_pin_pages(obj); 63 - if (err) 64 - return err; 51 + type = i915_coherent_map_type(vm->i915, obj, true); 52 + vaddr = i915_gem_object_pin_map(obj, type); 53 + if (IS_ERR(vaddr)) 54 + return PTR_ERR(vaddr); 65 55 66 56 i915_gem_object_make_unshrinkable(obj); 67 57 return 0; ··· 154 132 */ 155 133 mutex_init(&vm->mutex); 156 134 lockdep_set_subclass(&vm->mutex, subclass); 157 - i915_gem_shrinker_taints_mutex(vm->i915, &vm->mutex); 135 + 136 + if (!intel_vm_no_concurrent_access_wa(vm->i915)) { 137 + i915_gem_shrinker_taints_mutex(vm->i915, &vm->mutex); 138 + } else { 139 + /* 140 + * CHV + BXT VTD workaround use stop_machine(), 141 + * which is allowed to allocate memory. This means &vm->mutex 142 + * is the outer lock, and in theory we can allocate memory inside 143 + * it through stop_machine(). 144 + * 145 + * Add the annotation for this, we use trylock in shrinker. 
146 + */ 147 + mutex_acquire(&vm->mutex.dep_map, 0, 0, _THIS_IP_); 148 + might_alloc(GFP_KERNEL); 149 + mutex_release(&vm->mutex.dep_map, _THIS_IP_); 150 + } 158 151 dma_resv_init(&vm->resv); 159 152 160 153 GEM_BUG_ON(!vm->total); ··· 192 155 memset(&vma->page_sizes, 0, sizeof(vma->page_sizes)); 193 156 } 194 157 158 + void *__px_vaddr(struct drm_i915_gem_object *p) 159 + { 160 + enum i915_map_type type; 161 + 162 + GEM_BUG_ON(!i915_gem_object_has_pages(p)); 163 + return page_unpack_bits(p->mm.mapping, &type); 164 + } 165 + 195 166 dma_addr_t __px_dma(struct drm_i915_gem_object *p) 196 167 { 197 168 GEM_BUG_ON(!i915_gem_object_has_pages(p)); ··· 215 170 void 216 171 fill_page_dma(struct drm_i915_gem_object *p, const u64 val, unsigned int count) 217 172 { 218 - struct page *page = __px_page(p); 219 - void *vaddr; 173 + void *vaddr = __px_vaddr(p); 220 174 221 - vaddr = kmap(page); 222 175 memset64(vaddr, val, count); 223 176 clflush_cache_range(vaddr, PAGE_SIZE); 224 - kunmap(page); 225 177 } 226 178 227 179 static void poison_scratch_page(struct drm_i915_gem_object *scratch) 228 180 { 229 - struct sgt_iter sgt; 230 - struct page *page; 181 + void *vaddr = __px_vaddr(scratch); 231 182 u8 val; 232 183 233 184 val = 0; 234 185 if (IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM)) 235 186 val = POISON_FREE; 236 187 237 - for_each_sgt_page(page, sgt, scratch->mm.pages) { 238 - void *vaddr; 239 - 240 - vaddr = kmap(page); 241 - memset(vaddr, val, PAGE_SIZE); 242 - kunmap(page); 243 - } 188 + memset(vaddr, val, scratch->base.size); 244 189 } 245 190 246 191 int setup_scratch_page(struct i915_address_space *vm) ··· 260 225 if (IS_ERR(obj)) 261 226 goto skip; 262 227 263 - if (pin_pt_dma(vm, obj)) 228 + if (map_pt_dma(vm, obj)) 264 229 goto skip_obj; 265 230 266 231 /* We need a single contiguous page for our scratch */
+7 -5
drivers/gpu/drm/i915/gt/intel_gtt.h
··· 180 180 dma_addr_t __px_dma(struct drm_i915_gem_object *p); 181 181 #define px_dma(px) (__px_dma(px_base(px))) 182 182 183 + void *__px_vaddr(struct drm_i915_gem_object *p); 184 + #define px_vaddr(px) (__px_vaddr(px_base(px))) 185 + 183 186 #define px_pt(px) \ 184 187 __px_choose_expr(px, struct i915_page_table *, __x, \ 185 188 __px_choose_expr(px, struct i915_page_directory *, &__x->pt, \ ··· 519 516 void i915_ggtt_suspend(struct i915_ggtt *gtt); 520 517 void i915_ggtt_resume(struct i915_ggtt *ggtt); 521 518 522 - #define kmap_atomic_px(px) kmap_atomic(__px_page(px_base(px))) 523 - 524 519 void 525 520 fill_page_dma(struct drm_i915_gem_object *p, const u64 val, unsigned int count); 526 521 ··· 532 531 void free_scratch(struct i915_address_space *vm); 533 532 534 533 struct drm_i915_gem_object *alloc_pt_dma(struct i915_address_space *vm, int sz); 534 + struct drm_i915_gem_object *alloc_pt_lmem(struct i915_address_space *vm, int sz); 535 535 struct i915_page_table *alloc_pt(struct i915_address_space *vm); 536 536 struct i915_page_directory *alloc_pd(struct i915_address_space *vm); 537 537 struct i915_page_directory *__alloc_pd(int npde); 538 538 539 - int pin_pt_dma(struct i915_address_space *vm, struct drm_i915_gem_object *obj); 540 - int pin_pt_dma_locked(struct i915_address_space *vm, struct drm_i915_gem_object *obj); 539 + int map_pt_dma(struct i915_address_space *vm, struct drm_i915_gem_object *obj); 540 + int map_pt_dma_locked(struct i915_address_space *vm, struct drm_i915_gem_object *obj); 541 541 542 542 void free_px(struct i915_address_space *vm, 543 543 struct i915_page_table *pt, int lvl); ··· 585 583 int i915_vm_alloc_pt_stash(struct i915_address_space *vm, 586 584 struct i915_vm_pt_stash *stash, 587 585 u64 size); 588 - int i915_vm_pin_pt_stash(struct i915_address_space *vm, 586 + int i915_vm_map_pt_stash(struct i915_address_space *vm, 589 587 struct i915_vm_pt_stash *stash); 590 588 void i915_vm_free_pt_stash(struct i915_address_space *vm, 591 589 struct i915_vm_pt_stash *stash);
+3 -1
drivers/gpu/drm/i915/gt/intel_lrc.c
··· 903 903 GEM_BUG_ON(!i915_vma_is_pinned(ce->state)); 904 904 905 905 *vaddr = i915_gem_object_pin_map(ce->state->obj, 906 - i915_coherent_map_type(ce->engine->i915) | 906 + i915_coherent_map_type(ce->engine->i915, 907 + ce->state->obj, 908 + false) | 907 909 I915_MAP_OVERRIDE); 908 910 909 911 return PTR_ERR_OR_ZERO(*vaddr);
+3 -4
drivers/gpu/drm/i915/gt/intel_ppgtt.c
··· 87 87 const unsigned short idx, 88 88 const u64 encoded_entry) 89 89 { 90 - u64 * const vaddr = kmap_atomic(__px_page(pdma)); 90 + u64 * const vaddr = __px_vaddr(pdma); 91 91 92 92 vaddr[idx] = encoded_entry; 93 93 clflush_cache_range(&vaddr[idx], sizeof(u64)); 94 - kunmap_atomic(vaddr); 95 94 } 96 95 97 96 void ··· 257 258 return 0; 258 259 } 259 260 260 - int i915_vm_pin_pt_stash(struct i915_address_space *vm, 261 + int i915_vm_map_pt_stash(struct i915_address_space *vm, 261 262 struct i915_vm_pt_stash *stash) 262 263 { 263 264 struct i915_page_table *pt; ··· 265 266 266 267 for (n = 0; n < ARRAY_SIZE(stash->pt); n++) { 267 268 for (pt = stash->pt[n]; pt; pt = pt->stash) { 268 - err = pin_pt_dma_locked(vm, pt->base); 269 + err = map_pt_dma_locked(vm, pt->base); 269 270 if (err) 270 271 return err; 271 272 }
+136 -69
drivers/gpu/drm/i915/gt/intel_reset.c
··· 338 338 return gen6_hw_domain_reset(gt, hw_mask); 339 339 } 340 340 341 - static int gen11_lock_sfc(struct intel_engine_cs *engine, u32 *hw_mask) 341 + static struct intel_engine_cs *find_sfc_paired_vecs_engine(struct intel_engine_cs *engine) 342 + { 343 + int vecs_id; 344 + 345 + GEM_BUG_ON(engine->class != VIDEO_DECODE_CLASS); 346 + 347 + vecs_id = _VECS((engine->instance) / 2); 348 + 349 + return engine->gt->engine[vecs_id]; 350 + } 351 + 352 + struct sfc_lock_data { 353 + i915_reg_t lock_reg; 354 + i915_reg_t ack_reg; 355 + i915_reg_t usage_reg; 356 + u32 lock_bit; 357 + u32 ack_bit; 358 + u32 usage_bit; 359 + u32 reset_bit; 360 + }; 361 + 362 + static void get_sfc_forced_lock_data(struct intel_engine_cs *engine, 363 + struct sfc_lock_data *sfc_lock) 364 + { 365 + switch (engine->class) { 366 + default: 367 + MISSING_CASE(engine->class); 368 + fallthrough; 369 + case VIDEO_DECODE_CLASS: 370 + sfc_lock->lock_reg = GEN11_VCS_SFC_FORCED_LOCK(engine); 371 + sfc_lock->lock_bit = GEN11_VCS_SFC_FORCED_LOCK_BIT; 372 + 373 + sfc_lock->ack_reg = GEN11_VCS_SFC_LOCK_STATUS(engine); 374 + sfc_lock->ack_bit = GEN11_VCS_SFC_LOCK_ACK_BIT; 375 + 376 + sfc_lock->usage_reg = GEN11_VCS_SFC_LOCK_STATUS(engine); 377 + sfc_lock->usage_bit = GEN11_VCS_SFC_USAGE_BIT; 378 + sfc_lock->reset_bit = GEN11_VCS_SFC_RESET_BIT(engine->instance); 379 + 380 + break; 381 + case VIDEO_ENHANCEMENT_CLASS: 382 + sfc_lock->lock_reg = GEN11_VECS_SFC_FORCED_LOCK(engine); 383 + sfc_lock->lock_bit = GEN11_VECS_SFC_FORCED_LOCK_BIT; 384 + 385 + sfc_lock->ack_reg = GEN11_VECS_SFC_LOCK_ACK(engine); 386 + sfc_lock->ack_bit = GEN11_VECS_SFC_LOCK_ACK_BIT; 387 + 388 + sfc_lock->usage_reg = GEN11_VECS_SFC_USAGE(engine); 389 + sfc_lock->usage_bit = GEN11_VECS_SFC_USAGE_BIT; 390 + sfc_lock->reset_bit = GEN11_VECS_SFC_RESET_BIT(engine->instance); 391 + 392 + break; 393 + } 394 + } 395 + 396 + static int gen11_lock_sfc(struct intel_engine_cs *engine, 397 + u32 *reset_mask, 398 + u32 *unlock_mask) 342 399 { 343 400 struct intel_uncore *uncore = engine->uncore; 344 401 u8 vdbox_sfc_access = engine->gt->info.vdbox_sfc_access; 345 - i915_reg_t sfc_forced_lock, sfc_forced_lock_ack; 346 - u32 sfc_forced_lock_bit, sfc_forced_lock_ack_bit; 347 - i915_reg_t sfc_usage; 348 - u32 sfc_usage_bit; 349 - u32 sfc_reset_bit; 402 + struct sfc_lock_data sfc_lock; 403 + bool lock_obtained, lock_to_other = false; 350 404 int ret; 351 405 352 406 switch (engine->class) { ··· 408 354 if ((BIT(engine->instance) & vdbox_sfc_access) == 0) 409 355 return 0; 410 356 411 - sfc_forced_lock = GEN11_VCS_SFC_FORCED_LOCK(engine); 412 - sfc_forced_lock_bit = GEN11_VCS_SFC_FORCED_LOCK_BIT; 413 - 414 - sfc_forced_lock_ack = GEN11_VCS_SFC_LOCK_STATUS(engine); 415 - sfc_forced_lock_ack_bit = GEN11_VCS_SFC_LOCK_ACK_BIT; 416 - 417 - sfc_usage = GEN11_VCS_SFC_LOCK_STATUS(engine); 418 - sfc_usage_bit = GEN11_VCS_SFC_USAGE_BIT; 419 - sfc_reset_bit = GEN11_VCS_SFC_RESET_BIT(engine->instance); 420 - break; 421 - 357 + fallthrough; 422 358 case VIDEO_ENHANCEMENT_CLASS: 423 - sfc_forced_lock = GEN11_VECS_SFC_FORCED_LOCK(engine); 424 - sfc_forced_lock_bit = GEN11_VECS_SFC_FORCED_LOCK_BIT; 359 + get_sfc_forced_lock_data(engine, &sfc_lock); 425 360 426 - sfc_forced_lock_ack = GEN11_VECS_SFC_LOCK_ACK(engine); 427 - sfc_forced_lock_ack_bit = GEN11_VECS_SFC_LOCK_ACK_BIT; 428 - 429 - sfc_usage = GEN11_VECS_SFC_USAGE(engine); 430 - sfc_usage_bit = GEN11_VECS_SFC_USAGE_BIT; 431 - sfc_reset_bit = GEN11_VECS_SFC_RESET_BIT(engine->instance); 432 361 break; 433 - 434 362 default: 435 363 return 0; 
436 364 } 437 365 366 + if (!(intel_uncore_read_fw(uncore, sfc_lock.usage_reg) & sfc_lock.usage_bit)) { 367 + struct intel_engine_cs *paired_vecs; 368 + 369 + if (engine->class != VIDEO_DECODE_CLASS || 370 + !IS_GEN(engine->i915, 12)) 371 + return 0; 372 + 373 + /* 374 + * Wa_14010733141 375 + * 376 + * If the VCS-MFX isn't using the SFC, we also need to check 377 + * whether VCS-HCP is using it. If so, we need to issue a *VE* 378 + * forced lock on the VE engine that shares the same SFC. 379 + */ 380 + if (!(intel_uncore_read_fw(uncore, 381 + GEN12_HCP_SFC_LOCK_STATUS(engine)) & 382 + GEN12_HCP_SFC_USAGE_BIT)) 383 + return 0; 384 + 385 + paired_vecs = find_sfc_paired_vecs_engine(engine); 386 + get_sfc_forced_lock_data(paired_vecs, &sfc_lock); 387 + lock_to_other = true; 388 + *unlock_mask |= paired_vecs->mask; 389 + } else { 390 + *unlock_mask |= engine->mask; 391 + } 392 + 438 393 /* 439 - * If the engine is using a SFC, tell the engine that a software reset 394 + * If the engine is using an SFC, tell the engine that a software reset 440 395 * is going to happen. The engine will then try to force lock the SFC. 441 396 * If SFC ends up being locked to the engine we want to reset, we have 442 397 * to reset it as well (we will unlock it once the reset sequence is 443 398 * completed). 444 399 */ 445 - if (!(intel_uncore_read_fw(uncore, sfc_usage) & sfc_usage_bit)) 446 - return 0; 447 - 448 - rmw_set_fw(uncore, sfc_forced_lock, sfc_forced_lock_bit); 400 + rmw_set_fw(uncore, sfc_lock.lock_reg, sfc_lock.lock_bit); 449 401 450 402 ret = __intel_wait_for_register_fw(uncore, 451 - sfc_forced_lock_ack, 452 - sfc_forced_lock_ack_bit, 453 - sfc_forced_lock_ack_bit, 403 + sfc_lock.ack_reg, 404 + sfc_lock.ack_bit, 405 + sfc_lock.ack_bit, 454 406 1000, 0, NULL); 455 407 456 - /* Was the SFC released while we were trying to lock it? */ 457 - if (!(intel_uncore_read_fw(uncore, sfc_usage) & sfc_usage_bit)) 408 + /* 409 + * Was the SFC released while we were trying to lock it? 410 + * 411 + * We should reset both the engine and the SFC if: 412 + * - We were locking the SFC to this engine and the lock succeeded 413 + * OR 414 + * - We were locking the SFC to a different engine (Wa_14010733141) 415 + * but the SFC was released before the lock was obtained. 416 + * 417 + * Otherwise we need only reset the engine by itself and we can 418 + * leave the SFC alone. 
419 + */ 420 + lock_obtained = (intel_uncore_read_fw(uncore, sfc_lock.usage_reg) & 421 + sfc_lock.usage_bit) != 0; 422 + if (lock_obtained == lock_to_other) 458 423 return 0; 459 424 460 425 if (ret) { ··· 481 408 return ret; 482 409 } 483 410 484 - *hw_mask |= sfc_reset_bit; 411 + *reset_mask |= sfc_lock.reset_bit; 485 412 return 0; 486 413 } 487 414 ··· 489 416 { 490 417 struct intel_uncore *uncore = engine->uncore; 491 418 u8 vdbox_sfc_access = engine->gt->info.vdbox_sfc_access; 492 - i915_reg_t sfc_forced_lock; 493 - u32 sfc_forced_lock_bit; 419 + struct sfc_lock_data sfc_lock = {}; 494 420 495 - switch (engine->class) { 496 - case VIDEO_DECODE_CLASS: 497 - if ((BIT(engine->instance) & vdbox_sfc_access) == 0) 498 - return; 499 - 500 - sfc_forced_lock = GEN11_VCS_SFC_FORCED_LOCK(engine); 501 - sfc_forced_lock_bit = GEN11_VCS_SFC_FORCED_LOCK_BIT; 502 - break; 503 - 504 - case VIDEO_ENHANCEMENT_CLASS: 505 - sfc_forced_lock = GEN11_VECS_SFC_FORCED_LOCK(engine); 506 - sfc_forced_lock_bit = GEN11_VECS_SFC_FORCED_LOCK_BIT; 507 - break; 508 - 509 - default: 421 + if (engine->class != VIDEO_DECODE_CLASS && 422 + engine->class != VIDEO_ENHANCEMENT_CLASS) 510 423 return; 511 - } 512 424 513 - rmw_clear_fw(uncore, sfc_forced_lock, sfc_forced_lock_bit); 425 + if (engine->class == VIDEO_DECODE_CLASS && 426 + (BIT(engine->instance) & vdbox_sfc_access) == 0) 427 + return; 428 + 429 + get_sfc_forced_lock_data(engine, &sfc_lock); 430 + 431 + rmw_clear_fw(uncore, sfc_lock.lock_reg, sfc_lock.lock_bit); 514 432 } 515 433 516 434 static int gen11_reset_engines(struct intel_gt *gt, ··· 520 456 }; 521 457 struct intel_engine_cs *engine; 522 458 intel_engine_mask_t tmp; 523 - u32 hw_mask; 459 + u32 reset_mask, unlock_mask = 0; 524 460 int ret; 525 461 526 462 if (engine_mask == ALL_ENGINES) { 527 - hw_mask = GEN11_GRDOM_FULL; 463 + reset_mask = GEN11_GRDOM_FULL; 528 464 } else { 529 - hw_mask = 0; 465 + reset_mask = 0; 530 466 for_each_engine_masked(engine, gt, engine_mask, tmp) { 531 467 GEM_BUG_ON(engine->id >= ARRAY_SIZE(hw_engine_mask)); 532 - hw_mask |= hw_engine_mask[engine->id]; 533 - ret = gen11_lock_sfc(engine, &hw_mask); 468 + reset_mask |= hw_engine_mask[engine->id]; 469 + ret = gen11_lock_sfc(engine, &reset_mask, &unlock_mask); 534 470 if (ret) 535 471 goto sfc_unlock; 536 472 } 537 473 } 538 474 539 - ret = gen6_hw_domain_reset(gt, hw_mask); 475 + ret = gen6_hw_domain_reset(gt, reset_mask); 540 476 541 477 sfc_unlock: 542 478 /* ··· 544 480 * gen11_lock_sfc to make sure that we clean properly if something 545 481 * wrong happened during the lock (e.g. lock acquired after timeout 546 482 * expiration). 483 + * 484 + * Due to Wa_14010733141, we may have locked an SFC to an engine that 485 + * wasn't being reset. So instead of calling gen11_unlock_sfc() 486 + * on engine_mask, we instead call it on the mask of engines that our 487 + * gen11_lock_sfc() calls told us actually had locks attempted. 
547 488 */ 548 - if (engine_mask != ALL_ENGINES) 549 - for_each_engine_masked(engine, gt, engine_mask, tmp) 550 - gen11_unlock_sfc(engine); 489 + for_each_engine_masked(engine, gt, unlock_mask, tmp) 490 + gen11_unlock_sfc(engine); 551 491 552 492 return ret; 553 493 } ··· 1186 1118 int __intel_engine_reset_bh(struct intel_engine_cs *engine, const char *msg) 1187 1119 { 1188 1120 struct intel_gt *gt = engine->gt; 1189 - bool uses_guc = intel_engine_in_guc_submission_mode(engine); 1190 1121 int ret; 1191 1122 1192 1123 ENGINE_TRACE(engine, "flags=%lx\n", gt->reset.flags); ··· 1201 1134 "Resetting %s for %s\n", engine->name, msg); 1202 1135 atomic_inc(&engine->i915->gpu_error.reset_engine_count[engine->uabi_class]); 1203 1136 1204 - if (!uses_guc) 1205 - ret = intel_gt_reset_engine(engine); 1206 - else 1137 + if (intel_engine_uses_guc(engine)) 1207 1138 ret = intel_guc_reset_engine(&engine->gt->uc.guc, engine); 1139 + else 1140 + ret = intel_gt_reset_engine(engine); 1208 1141 if (ret) { 1209 1142 /* If we fail here, we expect to fallback to a global reset */ 1210 1143 ENGINE_TRACE(engine, "Failed to reset, err: %d\n", ret);
+7 -4
drivers/gpu/drm/i915/gt/intel_ring.c
··· 51 51 if (unlikely(ret)) 52 52 goto err_unpin; 53 53 54 - if (i915_vma_is_map_and_fenceable(vma)) 54 + if (i915_vma_is_map_and_fenceable(vma)) { 55 55 addr = (void __force *)i915_vma_pin_iomap(vma); 56 - else 57 - addr = i915_gem_object_pin_map(vma->obj, 58 - i915_coherent_map_type(vma->vm->i915)); 56 + } else { 57 + int type = i915_coherent_map_type(vma->vm->i915, vma->obj, false); 58 + 59 + addr = i915_gem_object_pin_map(vma->obj, type); 60 + } 61 + 59 62 if (IS_ERR(addr)) { 60 63 ret = PTR_ERR(addr); 61 64 goto err_ring;
+8 -4
drivers/gpu/drm/i915/gt/intel_ring_submission.c
··· 12 12 #include "intel_breadcrumbs.h" 13 13 #include "intel_context.h" 14 14 #include "intel_gt.h" 15 + #include "intel_gt_irq.h" 15 16 #include "intel_reset.h" 16 17 #include "intel_ring.h" 17 18 #include "shmem_utils.h" ··· 990 989 static void i9xx_set_default_submission(struct intel_engine_cs *engine) 991 990 { 992 991 engine->submit_request = i9xx_submit_request; 993 - 994 - engine->park = NULL; 995 - engine->unpark = NULL; 996 992 } 997 993 998 994 static void gen6_bsd_set_default_submission(struct intel_engine_cs *engine) 999 995 { 1000 - i9xx_set_default_submission(engine); 1001 996 engine->submit_request = gen6_bsd_submit_request; 1002 997 } 1003 998 ··· 1018 1021 intel_timeline_put(engine->legacy.timeline); 1019 1022 } 1020 1023 1024 + static void irq_handler(struct intel_engine_cs *engine, u16 iir) 1025 + { 1026 + intel_engine_signal_breadcrumbs(engine); 1027 + } 1028 + 1021 1029 static void setup_irq(struct intel_engine_cs *engine) 1022 1030 { 1023 1031 struct drm_i915_private *i915 = engine->i915; 1032 + 1033 + intel_engine_set_irq_handler(engine, irq_handler); 1024 1034 1025 1035 if (INTEL_GEN(i915) >= 6) { 1026 1036 engine->irq_enable = gen6_irq_enable;
+1 -1
drivers/gpu/drm/i915/gt/intel_rps.c
··· 1774 1774 return; 1775 1775 1776 1776 if (pm_iir & PM_VEBOX_USER_INTERRUPT) 1777 - intel_engine_signal_breadcrumbs(gt->engine[VECS0]); 1777 + intel_engine_cs_irq(gt->engine[VECS0], pm_iir >> 10); 1778 1778 1779 1779 if (pm_iir & PM_VEBOX_CS_ERROR_INTERRUPT) 1780 1780 DRM_DEBUG("Command parser error, pm_iir 0x%08x\n", pm_iir);
+2 -2
drivers/gpu/drm/i915/gt/intel_timeline.c
··· 32 32 return vma; 33 33 } 34 34 35 - __i915_active_call 36 35 static void __timeline_retire(struct i915_active *active) 37 36 { 38 37 struct intel_timeline *tl = ··· 103 104 INIT_LIST_HEAD(&timeline->requests); 104 105 105 106 i915_syncmap_init(&timeline->sync); 106 - i915_active_init(&timeline->active, __timeline_active, __timeline_retire); 107 + i915_active_init(&timeline->active, __timeline_active, 108 + __timeline_retire, 0); 107 109 108 110 return 0; 109 111 }
+64 -36
drivers/gpu/drm/i915/gt/intel_workarounds.c
··· 607 607 wa_masked_en(wal, GEN9_ROW_CHICKEN4, GEN11_DIS_PICK_2ND_EU); 608 608 } 609 609 610 + /* 611 + * These settings aren't actually workarounds, but general tuning settings that 612 + * need to be programmed on several platforms. 613 + */ 614 + static void gen12_ctx_gt_tuning_init(struct intel_engine_cs *engine, 615 + struct i915_wa_list *wal) 616 + { 617 + /* 618 + * Although some platforms refer to it as Wa_1604555607, we need to 619 + * program it even on those that don't explicitly list that 620 + * workaround. 621 + * 622 + * Note that the programming of this register is further modified 623 + * according to the FF_MODE2 guidance given by Wa_1608008084:gen12. 624 + * Wa_1608008084 tells us the FF_MODE2 register will return the wrong 625 + * value when read. The default value for this register is zero for all 626 + * fields and there are no bit masks. So instead of doing a RMW we 627 + * should just write TDS timer value. For the same reason read 628 + * verification is ignored. 629 + */ 630 + wa_add(wal, 631 + FF_MODE2, 632 + FF_MODE2_TDS_TIMER_MASK, 633 + FF_MODE2_TDS_TIMER_128, 634 + 0); 635 + } 636 + 610 637 static void gen12_ctx_workarounds_init(struct intel_engine_cs *engine, 611 638 struct i915_wa_list *wal) 612 639 { 640 + gen12_ctx_gt_tuning_init(engine, wal); 641 + 613 642 /* 614 643 * Wa_1409142259:tgl 615 644 * Wa_1409347922:tgl ··· 657 628 wa_masked_field_set(wal, GEN8_CS_CHICKEN1, 658 629 GEN9_PREEMPT_GPGPU_LEVEL_MASK, 659 630 GEN9_PREEMPT_GPGPU_THREAD_GROUP_LEVEL); 660 - } 661 - 662 - static void tgl_ctx_workarounds_init(struct intel_engine_cs *engine, 663 - struct i915_wa_list *wal) 664 - { 665 - gen12_ctx_workarounds_init(engine, wal); 666 631 667 632 /* 668 - * Wa_1604555607:tgl,rkl 633 + * Wa_16011163337 669 634 * 670 - * Note that the implementation of this workaround is further modified 671 - * according to the FF_MODE2 guidance given by Wa_1608008084:gen12. 672 - * FF_MODE2 register will return the wrong value when read. The default 673 - * value for this register is zero for all fields and there are no bit 674 - * masks. So instead of doing a RMW we should just write the GS Timer 675 - * and TDS timer values for Wa_1604555607 and Wa_16011163337. 635 + * Like in gen12_ctx_gt_tuning_init(), read verification is ignored due 636 + * to Wa_1608008084. 676 637 */ 677 638 wa_add(wal, 678 639 FF_MODE2, 679 - FF_MODE2_GS_TIMER_MASK | FF_MODE2_TDS_TIMER_MASK, 680 - FF_MODE2_GS_TIMER_224 | FF_MODE2_TDS_TIMER_128, 640 + FF_MODE2_GS_TIMER_MASK, 641 + FF_MODE2_GS_TIMER_224, 681 642 0); 682 643 } 683 644 ··· 683 664 /* Wa_22010493298 */ 684 665 wa_masked_en(wal, HIZ_CHICKEN, 685 666 DG1_HZ_READ_SUPPRESSION_OPTIMIZATION_DISABLE); 686 - 687 - /* 688 - * Wa_16011163337 689 - * 690 - * Like in tgl_ctx_workarounds_init(), read verification is ignored due 691 - * to Wa_1608008084. 
692 - */ 693 - wa_add(wal, 694 - FF_MODE2, 695 - FF_MODE2_GS_TIMER_MASK, FF_MODE2_GS_TIMER_224, 0); 696 667 } 697 668 698 669 static void ··· 699 690 700 691 if (IS_DG1(i915)) 701 692 dg1_ctx_workarounds_init(engine, wal); 702 - else if (IS_ALDERLAKE_S(i915) || IS_ROCKETLAKE(i915) || 703 - IS_TIGERLAKE(i915)) 704 - tgl_ctx_workarounds_init(engine, wal); 705 693 else if (IS_GEN(i915, 12)) 706 694 gen12_ctx_workarounds_init(engine, wal); 707 695 else if (IS_GEN(i915, 11)) ··· 1084 1078 L3_CLKGATE_DIS | L3_CR2X_CLKGATE_DIS); 1085 1079 } 1086 1080 1081 + /* 1082 + * Though there are per-engine instances of these registers, 1083 + * they retain their value through engine resets and should 1084 + * only be provided on the GT workaround list rather than 1085 + * the engine-specific workaround list. 1086 + */ 1087 + static void 1088 + wa_14011060649(struct drm_i915_private *i915, struct i915_wa_list *wal) 1089 + { 1090 + struct intel_engine_cs *engine; 1091 + struct intel_gt *gt = &i915->gt; 1092 + int id; 1093 + 1094 + for_each_engine(engine, gt, id) { 1095 + if (engine->class != VIDEO_DECODE_CLASS || 1096 + (engine->instance % 2)) 1097 + continue; 1098 + 1099 + wa_write_or(wal, VDBOX_CGCTL3F10(engine->mmio_base), 1100 + IECPUNIT_CLKGATE_DIS); 1101 + } 1102 + } 1103 + 1087 1104 static void 1088 1105 gen12_gt_workarounds_init(struct drm_i915_private *i915, 1089 1106 struct i915_wa_list *wal) 1090 1107 { 1091 1108 wa_init_mcr(i915, wal); 1109 + 1110 + /* Wa_14011060649:tgl,rkl,dg1,adls */ 1111 + wa_14011060649(i915, wal); 1092 1112 } 1093 1113 1094 1114 static void ··· 1787 1755 GEN7_FF_THREAD_MODE, 1788 1756 GEN12_FF_TESSELATION_DOP_GATE_DISABLE); 1789 1757 1790 - /* Wa_22010271021:ehl */ 1791 - if (IS_JSL_EHL(i915)) 1792 - wa_masked_en(wal, 1793 - GEN9_CS_DEBUG_MODE1, 1794 - FF_DOP_CLOCK_GATE_DISABLE); 1758 + /* Wa_22010271021 */ 1759 + wa_masked_en(wal, 1760 + GEN9_CS_DEBUG_MODE1, 1761 + FF_DOP_CLOCK_GATE_DISABLE); 1795 1762 } 1796 1763 1797 1764 if (IS_GEN_RANGE(i915, 9, 12)) { ··· 1859 1828 CACHE_MODE_0_GEN7, 1860 1829 /* enable HiZ Raw Stall Optimization */ 1861 1830 HIZ_RAW_STALL_OPT_DISABLE); 1862 - 1863 - /* WaDisable4x2SubspanOptimization:hsw */ 1864 - wa_masked_en(wal, CACHE_MODE_1, PIXEL_SUBSPAN_COLLECT_OPT_DISABLE); 1865 1831 } 1866 1832 1867 1833 if (IS_VALLEYVIEW(i915)) {
+1 -1
drivers/gpu/drm/i915/gt/mock_engine.c
··· 55 55 kfree(ring); 56 56 return NULL; 57 57 } 58 - i915_active_init(&ring->vma->active, NULL, NULL); 58 + i915_active_init(&ring->vma->active, NULL, NULL, 0); 59 59 __set_bit(I915_VMA_GGTT_BIT, __i915_vma_flags(ring->vma)); 60 60 __set_bit(DRM_MM_NODE_ALLOCATED_BIT, &ring->vma->node.flags); 61 61 ring->vma->node.size = sz;
+2 -1
drivers/gpu/drm/i915/gt/selftest_context.c
··· 88 88 goto err; 89 89 90 90 vaddr = i915_gem_object_pin_map_unlocked(ce->state->obj, 91 - i915_coherent_map_type(engine->i915)); 91 + i915_coherent_map_type(engine->i915, 92 + ce->state->obj, false)); 92 93 if (IS_ERR(vaddr)) { 93 94 err = PTR_ERR(vaddr); 94 95 intel_context_unpin(ce);
+1 -1
drivers/gpu/drm/i915/gt/selftest_engine_heartbeat.c
··· 77 77 return p; 78 78 79 79 kref_init(&p->kref); 80 - i915_active_init(&p->active, pulse_active, pulse_retire); 80 + i915_active_init(&p->active, pulse_active, pulse_retire, 0); 81 81 82 82 return p; 83 83 }
+1 -1
drivers/gpu/drm/i915/gt/selftest_execlists.c
··· 4716 4716 SUBTEST(live_virtual_reset), 4717 4717 }; 4718 4718 4719 - if (!HAS_EXECLISTS(i915)) 4719 + if (i915->gt.submission_method != INTEL_SUBMISSION_ELSP) 4720 4720 return 0; 4721 4721 4722 4722 if (intel_gt_is_wedged(&i915->gt))
+2 -2
drivers/gpu/drm/i915/gt/selftest_hangcheck.c
··· 69 69 h->seqno = memset(vaddr, 0xff, PAGE_SIZE); 70 70 71 71 vaddr = i915_gem_object_pin_map_unlocked(h->obj, 72 - i915_coherent_map_type(gt->i915)); 72 + i915_coherent_map_type(gt->i915, h->obj, false)); 73 73 if (IS_ERR(vaddr)) { 74 74 err = PTR_ERR(vaddr); 75 75 goto err_unpin_hws; ··· 130 130 return ERR_CAST(obj); 131 131 } 132 132 133 - vaddr = i915_gem_object_pin_map_unlocked(obj, i915_coherent_map_type(gt->i915)); 133 + vaddr = i915_gem_object_pin_map_unlocked(obj, i915_coherent_map_type(gt->i915, obj, false)); 134 134 if (IS_ERR(vaddr)) { 135 135 i915_gem_object_put(obj); 136 136 i915_vm_put(vm);
+3 -1
drivers/gpu/drm/i915/gt/selftest_lrc.c
··· 1221 1221 } 1222 1222 1223 1223 lrc = i915_gem_object_pin_map_unlocked(ce->state->obj, 1224 - i915_coherent_map_type(engine->i915)); 1224 + i915_coherent_map_type(engine->i915, 1225 + ce->state->obj, 1226 + false)); 1225 1227 if (IS_ERR(lrc)) { 1226 1228 err = PTR_ERR(lrc); 1227 1229 goto err_B1;
+20 -12
drivers/gpu/drm/i915/gt/selftest_rc6.c
··· 34 34 struct intel_rc6 *rc6 = &gt->rc6; 35 35 u64 rc0_power, rc6_power; 36 36 intel_wakeref_t wakeref; 37 + bool has_power; 37 38 ktime_t dt; 38 39 u64 res[2]; 39 40 int err = 0; ··· 51 50 if (IS_VALLEYVIEW(gt->i915) || IS_CHERRYVIEW(gt->i915)) 52 51 return 0; 53 52 53 + has_power = librapl_supported(gt->i915); 54 54 wakeref = intel_runtime_pm_get(gt->uncore->rpm); 55 55 56 56 /* Force RC6 off for starters */ ··· 73 71 goto out_unlock; 74 72 } 75 73 76 - rc0_power = div64_u64(NSEC_PER_SEC * rc0_power, ktime_to_ns(dt)); 77 - if (!rc0_power) { 78 - pr_err("No power measured while in RC0\n"); 79 - err = -EINVAL; 80 - goto out_unlock; 74 + if (has_power) { 75 + rc0_power = div64_u64(NSEC_PER_SEC * rc0_power, 76 + ktime_to_ns(dt)); 77 + if (!rc0_power) { 78 + pr_err("No power measured while in RC0\n"); 79 + err = -EINVAL; 80 + goto out_unlock; 81 + } 81 82 } 82 83 83 84 /* Manually enter RC6 */ ··· 102 97 err = -EINVAL; 103 98 } 104 99 105 - rc6_power = div64_u64(NSEC_PER_SEC * rc6_power, ktime_to_ns(dt)); 106 - pr_info("GPU consumed %llduW in RC0 and %llduW in RC6\n", 107 - rc0_power, rc6_power); 108 - if (2 * rc6_power > rc0_power) { 109 - pr_err("GPU leaked energy while in RC6!\n"); 110 - err = -EINVAL; 111 - goto out_unlock; 100 + if (has_power) { 101 + rc6_power = div64_u64(NSEC_PER_SEC * rc6_power, 102 + ktime_to_ns(dt)); 103 + pr_info("GPU consumed %llduW in RC0 and %llduW in RC6\n", 104 + rc0_power, rc6_power); 105 + if (2 * rc6_power > rc0_power) { 106 + pr_err("GPU leaked energy while in RC6!\n"); 107 + err = -EINVAL; 108 + goto out_unlock; 109 + } 112 110 } 113 111 114 112 /* Restore what should have been the original state! */
+1 -1
drivers/gpu/drm/i915/gt/selftest_ring_submission.c
··· 291 291 SUBTEST(live_ctx_switch_wa), 292 292 }; 293 293 294 - if (HAS_EXECLISTS(i915)) 294 + if (i915->gt.submission_method > INTEL_SUBMISSION_RING) 295 295 return 0; 296 296 297 297 return intel_gt_live_subtests(tests, &i915->gt);
+3 -3
drivers/gpu/drm/i915/gt/selftest_rps.c
··· 606 606 int err = 0; 607 607 608 608 /* 609 - * The premise is that the GPU does change freqency at our behest. 609 + * The premise is that the GPU does change frequency at our behest. 610 610 * Let's check there is a correspondence between the requested 611 611 * frequency, the actual frequency, and the observed clock rate. 612 612 */ ··· 747 747 int err = 0; 748 748 749 749 /* 750 - * The premise is that the GPU does change freqency at our behest. 750 + * The premise is that the GPU does change frequency at our behest. 751 751 * Let's check there is a correspondence between the requested 752 752 * frequency, the actual frequency, and the observed clock rate. 753 753 */ ··· 1139 1139 if (!intel_rps_is_enabled(rps) || INTEL_GEN(gt->i915) < 6) 1140 1140 return 0; 1141 1141 1142 - if (!librapl_energy_uJ()) 1142 + if (!librapl_supported(gt->i915)) 1143 1143 return 0; 1144 1144 1145 1145 if (igt_spinner_init(&spin, gt))
+3 -1
drivers/gpu/drm/i915/gt/shmem_utils.c
··· 8 8 #include <linux/shmem_fs.h> 9 9 10 10 #include "gem/i915_gem_object.h" 11 + #include "gem/i915_gem_lmem.h" 11 12 #include "shmem_utils.h" 12 13 13 14 struct file *shmem_create_from_data(const char *name, void *data, size_t len) ··· 40 39 return file; 41 40 } 42 41 43 - ptr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB); 42 + ptr = i915_gem_object_pin_map_unlocked(obj, i915_gem_object_is_lmem(obj) ? 43 + I915_MAP_WC : I915_MAP_WB); 44 44 if (IS_ERR(ptr)) 45 45 return ERR_CAST(ptr); 46 46
+3 -1
drivers/gpu/drm/i915/gt/uc/intel_guc.c
··· 682 682 if (IS_ERR(vma)) 683 683 return PTR_ERR(vma); 684 684 685 - vaddr = i915_gem_object_pin_map_unlocked(vma->obj, I915_MAP_WB); 685 + vaddr = i915_gem_object_pin_map_unlocked(vma->obj, 686 + i915_coherent_map_type(guc_to_gt(guc)->i915, 687 + vma->obj, true)); 686 688 if (IS_ERR(vaddr)) { 687 689 i915_vma_unpin_and_release(&vma, 0); 688 690 return PTR_ERR(vaddr);
+30 -34
drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
··· 11 11 #include "gt/intel_context.h" 12 12 #include "gt/intel_engine_pm.h" 13 13 #include "gt/intel_gt.h" 14 + #include "gt/intel_gt_irq.h" 14 15 #include "gt/intel_gt_pm.h" 15 16 #include "gt/intel_lrc.h" 16 17 #include "gt/intel_mocs.h" ··· 263 262 __guc_dequeue(engine); 264 263 265 264 spin_unlock_irqrestore(&engine->active.lock, flags); 265 + } 266 + 267 + static void cs_irq_handler(struct intel_engine_cs *engine, u16 iir) 268 + { 269 + if (iir & GT_RENDER_USER_INTERRUPT) { 270 + intel_engine_signal_breadcrumbs(engine); 271 + tasklet_hi_schedule(&engine->execlists.tasklet); 272 + } 266 273 } 267 274 268 275 static void guc_reset_prepare(struct intel_engine_cs *engine) ··· 617 608 static void guc_set_default_submission(struct intel_engine_cs *engine) 618 609 { 619 610 engine->submit_request = guc_submit_request; 620 - engine->schedule = i915_schedule; 621 - engine->execlists.tasklet.callback = guc_submission_tasklet; 622 - 623 - engine->reset.prepare = guc_reset_prepare; 624 - engine->reset.rewind = guc_reset_rewind; 625 - engine->reset.cancel = guc_reset_cancel; 626 - engine->reset.finish = guc_reset_finish; 627 - 628 - engine->flags |= I915_ENGINE_NEEDS_BREADCRUMB_TASKLET; 629 - engine->flags |= I915_ENGINE_HAS_PREEMPTION; 630 - 631 - /* 632 - * TODO: GuC supports timeslicing and semaphores as well, but they're 633 - * handled by the firmware so some minor tweaks are required before 634 - * enabling. 635 - * 636 - * engine->flags |= I915_ENGINE_HAS_TIMESLICES; 637 - * engine->flags |= I915_ENGINE_HAS_SEMAPHORES; 638 - */ 639 - 640 - engine->emit_bb_start = gen8_emit_bb_start; 641 - 642 - /* 643 - * For the breadcrumb irq to work we need the interrupts to stay 644 - * enabled. However, on all platforms on which we'll have support for 645 - * GuC submission we don't allow disabling the interrupts at runtime, so 646 - * we're always safe with the current flow. 647 - */ 648 - GEM_BUG_ON(engine->irq_enable || engine->irq_disable); 649 611 } 650 612 651 613 static void guc_release(struct intel_engine_cs *engine) ··· 638 658 engine->cops = &guc_context_ops; 639 659 engine->request_alloc = guc_request_alloc; 640 660 661 + engine->schedule = i915_schedule; 662 + 663 + engine->reset.prepare = guc_reset_prepare; 664 + engine->reset.rewind = guc_reset_rewind; 665 + engine->reset.cancel = guc_reset_cancel; 666 + engine->reset.finish = guc_reset_finish; 667 + 641 668 engine->emit_flush = gen8_emit_flush_xcs; 642 669 engine->emit_init_breadcrumb = gen8_emit_init_breadcrumb; 643 670 engine->emit_fini_breadcrumb = gen8_emit_fini_breadcrumb_xcs; ··· 653 666 engine->emit_flush = gen12_emit_flush_xcs; 654 667 } 655 668 engine->set_default_submission = guc_set_default_submission; 669 + 670 + engine->flags |= I915_ENGINE_HAS_PREEMPTION; 671 + 672 + /* 673 + * TODO: GuC supports timeslicing and semaphores as well, but they're 674 + * handled by the firmware so some minor tweaks are required before 675 + * enabling. 
676 + * 677 + * engine->flags |= I915_ENGINE_HAS_TIMESLICES; 678 + * engine->flags |= I915_ENGINE_HAS_SEMAPHORES; 679 + */ 680 + 681 + engine->emit_bb_start = gen8_emit_bb_start; 656 682 } 657 683 658 684 static void rcs_submission_override(struct intel_engine_cs *engine) ··· 689 689 static inline void guc_default_irqs(struct intel_engine_cs *engine) 690 690 { 691 691 engine->irq_keep_mask = GT_RENDER_USER_INTERRUPT; 692 + intel_engine_set_irq_handler(engine, cs_irq_handler); 692 693 } 693 694 694 695 int intel_guc_submission_setup(struct intel_engine_cs *engine) ··· 753 752 void intel_guc_submission_init_early(struct intel_guc *guc) 754 753 { 755 754 guc->submission_selected = __guc_submission_selected(guc); 756 - } 757 - 758 - bool intel_engine_in_guc_submission_mode(const struct intel_engine_cs *engine) 759 - { 760 - return engine->set_default_submission == guc_set_default_submission; 761 755 }
-1
drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
··· 20 20 int intel_guc_preempt_work_create(struct intel_guc *guc); 21 21 void intel_guc_preempt_work_destroy(struct intel_guc *guc); 22 22 int intel_guc_submission_setup(struct intel_engine_cs *engine); 23 - bool intel_engine_in_guc_submission_mode(const struct intel_engine_cs *engine); 24 23 25 24 static inline bool intel_guc_submission_is_supported(struct intel_guc *guc) 26 25 {
+3 -1
drivers/gpu/drm/i915/gt/uc/intel_huc.c
··· 82 82 if (IS_ERR(vma)) 83 83 return PTR_ERR(vma); 84 84 85 - vaddr = i915_gem_object_pin_map_unlocked(vma->obj, I915_MAP_WB); 85 + vaddr = i915_gem_object_pin_map_unlocked(vma->obj, 86 + i915_coherent_map_type(gt->i915, 87 + vma->obj, true)); 86 88 if (IS_ERR(vaddr)) { 87 89 i915_vma_unpin_and_release(&vma, 0); 88 90 return PTR_ERR(vaddr);
+5 -9
drivers/gpu/drm/i915/i915_active.c
··· 343 343 void __i915_active_init(struct i915_active *ref, 344 344 int (*active)(struct i915_active *ref), 345 345 void (*retire)(struct i915_active *ref), 346 + unsigned long flags, 346 347 struct lock_class_key *mkey, 347 348 struct lock_class_key *wkey) 348 349 { 349 - unsigned long bits; 350 - 351 350 debug_active_init(ref); 352 351 353 - ref->flags = 0; 352 + ref->flags = flags; 354 353 ref->active = active; 355 - ref->retire = ptr_unpack_bits(retire, &bits, 2); 356 - if (bits & I915_ACTIVE_MAY_SLEEP) 357 - ref->flags |= I915_ACTIVE_RETIRE_SLEEPS; 354 + ref->retire = retire; 358 355 359 356 spin_lock_init(&ref->tree_lock); 360 357 ref->tree = RB_ROOT; ··· 1153 1156 return 0; 1154 1157 } 1155 1158 1156 - __i915_active_call static void 1157 - auto_retire(struct i915_active *ref) 1159 + static void auto_retire(struct i915_active *ref) 1158 1160 { 1159 1161 i915_active_put(ref); 1160 1162 } ··· 1167 1171 return NULL; 1168 1172 1169 1173 kref_init(&aa->ref); 1170 - i915_active_init(&aa->base, auto_active, auto_retire); 1174 + i915_active_init(&aa->base, auto_active, auto_retire, 0); 1171 1175 1172 1176 return &aa->base; 1173 1177 }
+6 -5
drivers/gpu/drm/i915/i915_active.h
··· 152 152 void __i915_active_init(struct i915_active *ref, 153 153 int (*active)(struct i915_active *ref), 154 154 void (*retire)(struct i915_active *ref), 155 + unsigned long flags, 155 156 struct lock_class_key *mkey, 156 157 struct lock_class_key *wkey); 157 158 158 159 /* Specialise each class of i915_active to avoid impossible lockdep cycles. */ 159 - #define i915_active_init(ref, active, retire) do { \ 160 - static struct lock_class_key __mkey; \ 161 - static struct lock_class_key __wkey; \ 162 - \ 163 - __i915_active_init(ref, active, retire, &__mkey, &__wkey); \ 160 + #define i915_active_init(ref, active, retire, flags) do { \ 161 + static struct lock_class_key __mkey; \ 162 + static struct lock_class_key __wkey; \ 163 + \ 164 + __i915_active_init(ref, active, retire, flags, &__mkey, &__wkey); \ 164 165 } while (0) 165 166 166 167 struct dma_fence *
-5
drivers/gpu/drm/i915/i915_active_types.h
··· 24 24 25 25 struct active_node; 26 26 27 - #define I915_ACTIVE_MAY_SLEEP BIT(0) 28 - 29 - #define __i915_active_call __aligned(4) 30 - #define i915_active_may_sleep(fn) ptr_pack_bits(&(fn), I915_ACTIVE_MAY_SLEEP, 2) 31 - 32 27 struct i915_active { 33 28 atomic_t count; 34 29 struct mutex mutex;
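Taken together, the i915_active changes above drop the __i915_active_call alignment trick in favour of an explicit flags argument. A before/after sketch of the calling convention follows; node, node_active and node_retire are made-up names, and the assumption is that I915_ACTIVE_RETIRE_SLEEPS (still defined in i915_active_types.h) is what callers now pass when their retire callback may sleep.

/* before: the retire callback had to be tagged and 4-byte aligned */
__i915_active_call
static void node_retire(struct i915_active *ref);

i915_active_init(&node->active, node_active, node_retire);

/* after: a plain function pointer plus explicit flags */
static void node_retire(struct i915_active *ref);

i915_active_init(&node->active, node_active, node_retire, 0);

/* assumed usage when the retire callback may sleep */
i915_active_init(&node->active, node_active, node_retire,
		 I915_ACTIVE_RETIRE_SLEEPS);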
+17 -1
drivers/gpu/drm/i915/i915_cmd_parser.c
··· 1369 1369 return 0;
1370 1370 }
1371 1371
1372 + /**
1373 + * intel_engine_cmd_parser_alloc_jump_whitelist() - preallocate jump whitelist for intel_engine_cmd_parser()
1374 + * @batch_length: length of the commands in batch_obj
1375 + * @trampoline: whether jump trampolines are used
1376 + *
1377 + * Preallocates a jump whitelist for parsing the cmd buffer in intel_engine_cmd_parser().
1378 + * This has to be preallocated, because the command parser runs in signaling context,
1379 + * and may not allocate any memory.
1380 + *
1381 + * Return: NULL or pointer to a jump whitelist, or ERR_PTR() on failure. Use
1382 + * IS_ERR() to check for errors. Must be freed with kfree().
1383 + *
1384 + * NULL is a valid value, meaning no allocation was required.
1385 + */
1372 1386 unsigned long *intel_engine_cmd_parser_alloc_jump_whitelist(u32 batch_length,
1373 1387 bool trampoline)
1374 1388 {
··· 1415 1401 * @batch_offset: byte offset in the batch at which execution starts
1416 1402 * @batch_length: length of the commands in batch_obj
1417 1403 * @shadow: validated copy of the batch buffer in question
1418 - * @trampoline: whether to emit a conditional trampoline at the end of the batch
1404 + * @jump_whitelist: buffer preallocated with intel_engine_cmd_parser_alloc_jump_whitelist()
1405 + * @shadow_map: mapping to @shadow vma
1406 + * @batch_map: mapping to @batch vma
1419 1407 *
1420 1408 * Parses the specified batch buffer looking for privilege violations as
1421 1409 * described in the overview.
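A short sketch of the contract spelled out in the new kerneldoc above: allocate the whitelist up front, check it with IS_ERR(), and kfree() it afterwards, since the parser itself runs in signaling context and may not allocate. The function and variable names below are illustrative, not taken from the patch.

static int example_prepare_cmd_parser(u32 batch_length, bool trampoline)
{
	unsigned long *jump_whitelist;

	jump_whitelist = intel_engine_cmd_parser_alloc_jump_whitelist(batch_length,
								       trampoline);
	if (IS_ERR(jump_whitelist))
		return PTR_ERR(jump_whitelist);

	/*
	 * ... later, from the signaling context, hand jump_whitelist together
	 * with the shadow/batch mappings to intel_engine_cmd_parser(); no
	 * further allocations are allowed there ...
	 */

	kfree(jump_whitelist);	/* NULL is a valid, allocation-free result */
	return 0;
}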
+2 -2
drivers/gpu/drm/i915/i915_debugfs.c
··· 622 622 seq_printf(m, "DDC2 = 0x%08x\n", 623 623 intel_uncore_read(uncore, DCC2)); 624 624 seq_printf(m, "C0DRB3 = 0x%04x\n", 625 - intel_uncore_read16(uncore, C0DRB3)); 625 + intel_uncore_read16(uncore, C0DRB3_BW)); 626 626 seq_printf(m, "C1DRB3 = 0x%04x\n", 627 - intel_uncore_read16(uncore, C1DRB3)); 627 + intel_uncore_read16(uncore, C1DRB3_BW)); 628 628 } else if (INTEL_GEN(dev_priv) >= 6) { 629 629 seq_printf(m, "MAD_DIMM_C0 = 0x%08x\n", 630 630 intel_uncore_read(uncore, MAD_DIMM_C0));
+1
drivers/gpu/drm/i915/i915_drv.c
··· 1727 1727 DRM_IOCTL_DEF_DRV(I915_GEM_ENTERVT, drm_noop, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY), 1728 1728 DRM_IOCTL_DEF_DRV(I915_GEM_LEAVEVT, drm_noop, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY), 1729 1729 DRM_IOCTL_DEF_DRV(I915_GEM_CREATE, i915_gem_create_ioctl, DRM_RENDER_ALLOW), 1730 + DRM_IOCTL_DEF_DRV(I915_GEM_CREATE_EXT, i915_gem_create_ext_ioctl, DRM_RENDER_ALLOW), 1730 1731 DRM_IOCTL_DEF_DRV(I915_GEM_PREAD, i915_gem_pread_ioctl, DRM_RENDER_ALLOW), 1731 1732 DRM_IOCTL_DEF_DRV(I915_GEM_PWRITE, i915_gem_pwrite_ioctl, DRM_RENDER_ALLOW), 1732 1733 DRM_IOCTL_DEF_DRV(I915_GEM_MMAP, i915_gem_mmap_ioctl, DRM_RENDER_ALLOW),
+25 -4
drivers/gpu/drm/i915/i915_drv.h
··· 77 77 #include "gem/i915_gem_context_types.h" 78 78 #include "gem/i915_gem_shrinker.h" 79 79 #include "gem/i915_gem_stolen.h" 80 + #include "gem/i915_gem_lmem.h" 80 81 81 82 #include "gt/intel_engine.h" 82 83 #include "gt/intel_gt_types.h" ··· 514 513 }; 515 514 516 515 struct i915_gem_mm { 516 + /* 517 + * Shortcut for the stolen region. This points to either 518 + * INTEL_REGION_STOLEN_SMEM for integrated platforms, or 519 + * INTEL_REGION_STOLEN_LMEM for discrete, or NULL if the device doesn't 520 + * support stolen. 521 + */ 522 + struct intel_memory_region *stolen_region; 517 523 /** Memory allocator for GTT stolen memory */ 518 524 struct drm_mm stolen; 519 525 /** Protects the usage of the GTT stolen memory allocator. This is ··· 1728 1720 } 1729 1721 1730 1722 static inline bool 1731 - intel_ggtt_update_needs_vtd_wa(struct drm_i915_private *dev_priv) 1723 + intel_ggtt_update_needs_vtd_wa(struct drm_i915_private *i915) 1732 1724 { 1733 - return IS_BROXTON(dev_priv) && intel_vtd_active(); 1725 + return IS_BROXTON(i915) && intel_vtd_active(); 1726 + } 1727 + 1728 + static inline bool 1729 + intel_vm_no_concurrent_access_wa(struct drm_i915_private *i915) 1730 + { 1731 + return IS_CHERRYVIEW(i915) || intel_ggtt_update_needs_vtd_wa(i915); 1734 1732 } 1735 1733 1736 1734 /* i915_drv.c */ ··· 1816 1802 #define I915_GEM_OBJECT_UNBIND_ACTIVE BIT(0) 1817 1803 #define I915_GEM_OBJECT_UNBIND_BARRIER BIT(1) 1818 1804 #define I915_GEM_OBJECT_UNBIND_TEST BIT(2) 1805 + #define I915_GEM_OBJECT_UNBIND_VM_TRYLOCK BIT(3) 1819 1806 1820 1807 void i915_gem_runtime_suspend(struct drm_i915_private *dev_priv); 1821 1808 ··· 1949 1934 } 1950 1935 1951 1936 static inline enum i915_map_type 1952 - i915_coherent_map_type(struct drm_i915_private *i915) 1937 + i915_coherent_map_type(struct drm_i915_private *i915, 1938 + struct drm_i915_gem_object *obj, bool always_coherent) 1953 1939 { 1954 - return HAS_LLC(i915) ? I915_MAP_WB : I915_MAP_WC; 1940 + if (i915_gem_object_is_lmem(obj)) 1941 + return I915_MAP_WC; 1942 + if (HAS_LLC(i915) || always_coherent) 1943 + return I915_MAP_WB; 1944 + else 1945 + return I915_MAP_WC; 1955 1946 } 1956 1947 1957 1948 #endif
+12 -2
drivers/gpu/drm/i915/i915_gem.c
··· 157 157 if (vma) { 158 158 ret = -EBUSY; 159 159 if (flags & I915_GEM_OBJECT_UNBIND_ACTIVE || 160 - !i915_vma_is_active(vma)) 161 - ret = i915_vma_unbind(vma); 160 + !i915_vma_is_active(vma)) { 161 + if (flags & I915_GEM_OBJECT_UNBIND_VM_TRYLOCK) { 162 + if (mutex_trylock(&vma->vm->mutex)) { 163 + ret = __i915_vma_unbind(vma); 164 + mutex_unlock(&vma->vm->mutex); 165 + } else { 166 + ret = -EBUSY; 167 + } 168 + } else { 169 + ret = i915_vma_unbind(vma); 170 + } 171 + } 162 172 163 173 __i915_vma_put(vma); 164 174 }
+6 -4
drivers/gpu/drm/i915/i915_irq.c
··· 4024 4024 intel_uncore_write16(&dev_priv->uncore, GEN2_IIR, iir); 4025 4025 4026 4026 if (iir & I915_USER_INTERRUPT) 4027 - intel_engine_signal_breadcrumbs(dev_priv->gt.engine[RCS0]); 4027 + intel_engine_cs_irq(dev_priv->gt.engine[RCS0], iir); 4028 4028 4029 4029 if (iir & I915_MASTER_ERROR_INTERRUPT) 4030 4030 i8xx_error_irq_handler(dev_priv, eir, eir_stuck); ··· 4132 4132 intel_uncore_write(&dev_priv->uncore, GEN2_IIR, iir); 4133 4133 4134 4134 if (iir & I915_USER_INTERRUPT) 4135 - intel_engine_signal_breadcrumbs(dev_priv->gt.engine[RCS0]); 4135 + intel_engine_cs_irq(dev_priv->gt.engine[RCS0], iir); 4136 4136 4137 4137 if (iir & I915_MASTER_ERROR_INTERRUPT) 4138 4138 i9xx_error_irq_handler(dev_priv, eir, eir_stuck); ··· 4277 4277 intel_uncore_write(&dev_priv->uncore, GEN2_IIR, iir); 4278 4278 4279 4279 if (iir & I915_USER_INTERRUPT) 4280 - intel_engine_signal_breadcrumbs(dev_priv->gt.engine[RCS0]); 4280 + intel_engine_cs_irq(dev_priv->gt.engine[RCS0], 4281 + iir); 4281 4282 4282 4283 if (iir & I915_BSD_USER_INTERRUPT) 4283 - intel_engine_signal_breadcrumbs(dev_priv->gt.engine[VCS0]); 4284 + intel_engine_cs_irq(dev_priv->gt.engine[VCS0], 4285 + iir >> 25); 4284 4286 4285 4287 if (iir & I915_MASTER_ERROR_INTERRUPT) 4286 4288 i9xx_error_irq_handler(dev_priv, eir, eir_stuck);
+4 -4
drivers/gpu/drm/i915/i915_params.h
··· 71 71 param(int, fastboot, -1, 0600) \ 72 72 param(int, enable_dpcd_backlight, -1, 0600) \ 73 73 param(char *, force_probe, CONFIG_DRM_I915_FORCE_PROBE, 0400) \ 74 - param(unsigned long, fake_lmem_start, 0, 0400) \ 75 - param(unsigned int, request_timeout_ms, CONFIG_DRM_I915_REQUEST_TIMEOUT, 0600) \ 74 + param(unsigned long, fake_lmem_start, 0, IS_ENABLED(CONFIG_DRM_I915_UNSTABLE_FAKE_LMEM) ? 0400 : 0) \ 75 + param(unsigned int, request_timeout_ms, CONFIG_DRM_I915_REQUEST_TIMEOUT, CONFIG_DRM_I915_REQUEST_TIMEOUT ? 0600 : 0) \ 76 76 /* leave bools at the end to not create holes */ \ 77 77 param(bool, enable_hangcheck, true, 0600) \ 78 78 param(bool, load_detect_test, false, 0600) \ 79 79 param(bool, force_reset_modeset_test, false, 0600) \ 80 - param(bool, error_capture, true, 0600) \ 80 + param(bool, error_capture, true, IS_ENABLED(CONFIG_DRM_I915_CAPTURE_ERROR) ? 0600 : 0) \ 81 81 param(bool, disable_display, false, 0400) \ 82 82 param(bool, verbose_state_checks, true, 0) \ 83 83 param(bool, nuclear_pageflip, false, 0400) \ 84 84 param(bool, enable_dp_mst, true, 0600) \ 85 - param(bool, enable_gvt, false, 0400) 85 + param(bool, enable_gvt, false, IS_ENABLED(CONFIG_DRM_I915_GVT) ? 0400 : 0) 86 86 87 87 #define MEMBER(T, member, ...) T member; 88 88 struct i915_params {
+1 -1
drivers/gpu/drm/i915/i915_pci.c
··· 908 908 }; 909 909 910 910 #define DGFX_FEATURES \ 911 - .memory_regions = REGION_SMEM | REGION_LMEM, \ 911 + .memory_regions = REGION_SMEM | REGION_LMEM | REGION_STOLEN_LMEM, \ 912 912 .has_master_unit_irq = 1, \ 913 913 .has_llc = 0, \ 914 914 .has_snoop = 1, \
+5 -5
drivers/gpu/drm/i915/i915_perf.c
··· 1257 1257 case 8: 1258 1258 case 9: 1259 1259 case 10: 1260 - if (intel_engine_in_execlists_submission_mode(ce->engine)) { 1261 - stream->specific_ctx_id_mask = 1262 - (1U << GEN8_CTX_ID_WIDTH) - 1; 1263 - stream->specific_ctx_id = stream->specific_ctx_id_mask; 1264 - } else { 1260 + if (intel_engine_uses_guc(ce->engine)) { 1265 1261 /* 1266 1262 * When using GuC, the context descriptor we write in 1267 1263 * i915 is read by GuC and rewritten before it's ··· 1276 1280 */ 1277 1281 stream->specific_ctx_id_mask = 1278 1282 (1U << (GEN8_CTX_ID_WIDTH - 1)) - 1; 1283 + } else { 1284 + stream->specific_ctx_id_mask = 1285 + (1U << GEN8_CTX_ID_WIDTH) - 1; 1286 + stream->specific_ctx_id = stream->specific_ctx_id_mask; 1279 1287 } 1280 1288 break; 1281 1289
+3 -1
drivers/gpu/drm/i915/i915_pmu.c
··· 476 476 static int 477 477 config_status(struct drm_i915_private *i915, u64 config) 478 478 { 479 + struct intel_gt *gt = &i915->gt; 480 + 479 481 switch (config) { 480 482 case I915_PMU_ACTUAL_FREQUENCY: 481 483 if (IS_VALLEYVIEW(i915) || IS_CHERRYVIEW(i915)) ··· 491 489 case I915_PMU_INTERRUPTS: 492 490 break; 493 491 case I915_PMU_RC6_RESIDENCY: 494 - if (!HAS_RC6(i915)) 492 + if (!gt->rc6.supported) 495 493 return -ENODEV; 496 494 break; 497 495 case I915_PMU_SOFTWARE_GT_AWAKE_TIME:
+62
drivers/gpu/drm/i915/i915_query.c
··· 419 419 } 420 420 } 421 421 422 + static int query_memregion_info(struct drm_i915_private *i915, 423 + struct drm_i915_query_item *query_item) 424 + { 425 + struct drm_i915_query_memory_regions __user *query_ptr = 426 + u64_to_user_ptr(query_item->data_ptr); 427 + struct drm_i915_memory_region_info __user *info_ptr = 428 + &query_ptr->regions[0]; 429 + struct drm_i915_memory_region_info info = { }; 430 + struct drm_i915_query_memory_regions query; 431 + struct intel_memory_region *mr; 432 + u32 total_length; 433 + int ret, id, i; 434 + 435 + if (!IS_ENABLED(CONFIG_DRM_I915_UNSTABLE_FAKE_LMEM)) 436 + return -ENODEV; 437 + 438 + if (query_item->flags != 0) 439 + return -EINVAL; 440 + 441 + total_length = sizeof(query); 442 + for_each_memory_region(mr, i915, id) { 443 + if (mr->private) 444 + continue; 445 + 446 + total_length += sizeof(info); 447 + } 448 + 449 + ret = copy_query_item(&query, sizeof(query), total_length, query_item); 450 + if (ret != 0) 451 + return ret; 452 + 453 + if (query.num_regions) 454 + return -EINVAL; 455 + 456 + for (i = 0; i < ARRAY_SIZE(query.rsvd); i++) { 457 + if (query.rsvd[i]) 458 + return -EINVAL; 459 + } 460 + 461 + for_each_memory_region(mr, i915, id) { 462 + if (mr->private) 463 + continue; 464 + 465 + info.region.memory_class = mr->type; 466 + info.region.memory_instance = mr->instance; 467 + info.probed_size = mr->total; 468 + info.unallocated_size = mr->avail; 469 + 470 + if (__copy_to_user(info_ptr, &info, sizeof(info))) 471 + return -EFAULT; 472 + 473 + query.num_regions++; 474 + info_ptr++; 475 + } 476 + 477 + if (__copy_to_user(query_ptr, &query, sizeof(query))) 478 + return -EFAULT; 479 + 480 + return total_length; 481 + } 482 + 422 483 static int (* const i915_query_funcs[])(struct drm_i915_private *dev_priv, 423 484 struct drm_i915_query_item *query_item) = { 424 485 query_topology_info, 425 486 query_engine_info, 426 487 query_perf_config, 488 + query_memregion_info, 427 489 }; 428 490 429 491 int i915_query_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
+15 -2
drivers/gpu/drm/i915/i915_reg.h
··· 416 416 #define GEN11_VECS_SFC_USAGE(engine) _MMIO((engine)->mmio_base + 0x2014) 417 417 #define GEN11_VECS_SFC_USAGE_BIT (1 << 0) 418 418 419 + #define GEN12_HCP_SFC_FORCED_LOCK(engine) _MMIO((engine)->mmio_base + 0x2910) 420 + #define GEN12_HCP_SFC_FORCED_LOCK_BIT REG_BIT(0) 421 + #define GEN12_HCP_SFC_LOCK_STATUS(engine) _MMIO((engine)->mmio_base + 0x2914) 422 + #define GEN12_HCP_SFC_LOCK_ACK_BIT REG_BIT(1) 423 + #define GEN12_HCP_SFC_USAGE_BIT REG_BIT(0) 424 + 419 425 #define GEN12_SFC_DONE(n) _MMIO(0x1cc00 + (n) * 0x100) 420 426 #define GEN12_SFC_DONE_MAX 4 421 427 ··· 492 486 493 487 #define GAB_CTL _MMIO(0x24000) 494 488 #define GAB_CTL_CONT_AFTER_PAGEFAULT (1 << 8) 489 + 490 + #define GU_CNTL _MMIO(0x101010) 491 + #define LMEM_INIT REG_BIT(7) 495 492 496 493 #define GEN6_STOLEN_RESERVED _MMIO(0x1082C0) 497 494 #define GEN6_STOLEN_RESERVED_ADDR_MASK (0xFFF << 20) ··· 2724 2715 #define RING_INDIRECT_CTX_OFFSET(base) _MMIO((base) + 0x1c8) /* gen8+ */ 2725 2716 #define RING_CTX_TIMESTAMP(base) _MMIO((base) + 0x3a8) /* gen8+ */ 2726 2717 2718 + #define VDBOX_CGCTL3F10(base) _MMIO((base) + 0x3f10) 2719 + #define IECPUNIT_CLKGATE_DIS REG_BIT(22) 2720 + 2727 2721 #define ERROR_GEN6 _MMIO(0x40a0) 2728 2722 #define GEN7_ERR_INT _MMIO(0x44040) 2729 2723 #define ERR_INT_POISON (1 << 31) ··· 3793 3781 #define CSHRDDR3CTL_DDR3 (1 << 2) 3794 3782 3795 3783 /* 965 MCH register controlling DRAM channel configuration */ 3796 - #define C0DRB3 _MMIO(MCHBAR_MIRROR_BASE + 0x206) 3797 - #define C1DRB3 _MMIO(MCHBAR_MIRROR_BASE + 0x606) 3784 + #define C0DRB3_BW _MMIO(MCHBAR_MIRROR_BASE + 0x206) 3785 + #define C1DRB3_BW _MMIO(MCHBAR_MIRROR_BASE + 0x606) 3798 3786 3799 3787 /* snb MCH registers for reading the DRAM channel configuration */ 3800 3788 #define MAD_DIMM_C0 _MMIO(MCHBAR_MIRROR_BASE_SNB + 0x5004) ··· 12220 12208 #define GEN12_GLOBAL_MOCS(i) _MMIO(0x4000 + (i) * 4) /* Global MOCS regs */ 12221 12209 12222 12210 #define GEN12_GSMBASE _MMIO(0x108100) 12211 + #define GEN12_DSMBASE _MMIO(0x1080C0) 12223 12212 12224 12213 /* gamt regs */ 12225 12214 #define GEN8_L3_LRA_1_GPGPU _MMIO(0x4dd4)
+1 -1
drivers/gpu/drm/i915/i915_request.c
··· 929 929 u32 seqno; 930 930 int ret; 931 931 932 - might_sleep_if(gfpflags_allow_blocking(gfp)); 932 + might_alloc(gfp); 933 933 934 934 /* Check that the caller provided an already pinned context */ 935 935 __intel_context_pin(ce);
+21 -10
drivers/gpu/drm/i915/i915_vma.c
··· 27 27 28 28 #include "display/intel_frontbuffer.h" 29 29 30 + #include "gem/i915_gem_lmem.h" 30 31 #include "gt/intel_engine.h" 31 32 #include "gt/intel_engine_heartbeat.h" 32 33 #include "gt/intel_gt.h" ··· 94 93 return i915_vma_tryget(active_to_vma(ref)) ? 0 : -ENOENT; 95 94 } 96 95 97 - __i915_active_call 98 96 static void __i915_vma_retire(struct i915_active *ref) 99 97 { 100 98 i915_vma_put(active_to_vma(ref)); ··· 124 124 vma->size = obj->base.size; 125 125 vma->display_alignment = I915_GTT_MIN_ALIGNMENT; 126 126 127 - i915_active_init(&vma->active, __i915_vma_active, __i915_vma_retire); 127 + i915_active_init(&vma->active, __i915_vma_active, __i915_vma_retire, 0); 128 128 129 129 /* Declare ourselves safe for use inside shrinkers */ 130 130 if (IS_ENABLED(CONFIG_LOCKDEP)) { ··· 448 448 void __iomem *ptr; 449 449 int err; 450 450 451 - if (GEM_WARN_ON(!i915_vma_is_map_and_fenceable(vma))) { 452 - err = -ENODEV; 453 - goto err; 451 + if (!i915_gem_object_is_lmem(vma->obj)) { 452 + if (GEM_WARN_ON(!i915_vma_is_map_and_fenceable(vma))) { 453 + err = -ENODEV; 454 + goto err; 455 + } 454 456 } 455 457 456 458 GEM_BUG_ON(!i915_vma_is_ggtt(vma)); ··· 460 458 461 459 ptr = READ_ONCE(vma->iomap); 462 460 if (ptr == NULL) { 463 - ptr = io_mapping_map_wc(&i915_vm_to_ggtt(vma->vm)->iomap, 464 - vma->node.start, 465 - vma->node.size); 461 + /* 462 + * TODO: consider just using i915_gem_object_pin_map() for lmem 463 + * instead, which already supports mapping non-contiguous chunks 464 + * of pages, that way we can also drop the 465 + * I915_BO_ALLOC_CONTIGUOUS when allocating the object. 466 + */ 467 + if (i915_gem_object_is_lmem(vma->obj)) 468 + ptr = i915_gem_object_lmem_io_map(vma->obj, 0, 469 + vma->obj->base.size); 470 + else 471 + ptr = io_mapping_map_wc(&i915_vm_to_ggtt(vma->vm)->iomap, 472 + vma->node.start, 473 + vma->node.size); 466 474 if (ptr == NULL) { 467 475 err = -ENOMEM; 468 476 goto err; ··· 917 905 if (err) 918 906 goto err_fence; 919 907 920 - err = i915_vm_pin_pt_stash(vma->vm, 921 - &work->stash); 908 + err = i915_vm_map_pt_stash(vma->vm, &work->stash); 922 909 if (err) 923 910 goto err_fence; 924 911 }
+28 -1
drivers/gpu/drm/i915/intel_memory_region.c
··· 22 22 .class = INTEL_MEMORY_STOLEN_SYSTEM, 23 23 .instance = 0, 24 24 }, 25 + [INTEL_REGION_STOLEN_LMEM] = { 26 + .class = INTEL_MEMORY_STOLEN_LOCAL, 27 + .instance = 0, 28 + }, 25 29 }; 30 + 31 + struct intel_memory_region * 32 + intel_memory_region_lookup(struct drm_i915_private *i915, 33 + u16 class, u16 instance) 34 + { 35 + struct intel_memory_region *mr; 36 + int id; 37 + 38 + /* XXX: consider maybe converting to an rb tree at some point */ 39 + for_each_memory_region(mr, i915, id) { 40 + if (mr->type == class && mr->instance == instance) 41 + return mr; 42 + } 43 + 44 + return NULL; 45 + } 26 46 27 47 struct intel_memory_region * 28 48 intel_memory_region_by_type(struct drm_i915_private *i915, ··· 298 278 case INTEL_MEMORY_SYSTEM: 299 279 mem = i915_gem_shmem_setup(i915); 300 280 break; 281 + case INTEL_MEMORY_STOLEN_LOCAL: 282 + mem = i915_gem_stolen_lmem_setup(i915); 283 + if (!IS_ERR(mem)) 284 + i915->mm.stolen_region = mem; 285 + break; 301 286 case INTEL_MEMORY_STOLEN_SYSTEM: 302 - mem = i915_gem_stolen_setup(i915); 287 + mem = i915_gem_stolen_smem_setup(i915); 288 + if (!IS_ERR(mem)) 289 + i915->mm.stolen_region = mem; 303 290 break; 304 291 default: 305 292 continue;
+12 -6
drivers/gpu/drm/i915/intel_memory_region.h
··· 11 11 #include <linux/mutex.h> 12 12 #include <linux/io-mapping.h> 13 13 #include <drm/drm_mm.h> 14 + #include <drm/i915_drm.h> 14 15 15 16 #include "i915_buddy.h" 16 17 ··· 20 19 struct intel_memory_region; 21 20 struct sg_table; 22 21 23 - /** 24 - * Base memory type 25 - */ 26 22 enum intel_memory_type { 27 - INTEL_MEMORY_SYSTEM = 0, 28 - INTEL_MEMORY_LOCAL, 23 + INTEL_MEMORY_SYSTEM = I915_MEMORY_CLASS_SYSTEM, 24 + INTEL_MEMORY_LOCAL = I915_MEMORY_CLASS_DEVICE, 29 25 INTEL_MEMORY_STOLEN_SYSTEM, 26 + INTEL_MEMORY_STOLEN_LOCAL, 30 27 }; 31 28 32 29 enum intel_region_id { 33 30 INTEL_REGION_SMEM = 0, 34 31 INTEL_REGION_LMEM, 35 32 INTEL_REGION_STOLEN_SMEM, 33 + INTEL_REGION_STOLEN_LMEM, 36 34 INTEL_REGION_UNKNOWN, /* Should be last */ 37 35 }; 38 36 39 37 #define REGION_SMEM BIT(INTEL_REGION_SMEM) 40 38 #define REGION_LMEM BIT(INTEL_REGION_LMEM) 41 39 #define REGION_STOLEN_SMEM BIT(INTEL_REGION_STOLEN_SMEM) 40 + #define REGION_STOLEN_LMEM BIT(INTEL_REGION_STOLEN_LMEM) 42 41 43 42 #define I915_ALLOC_MIN_PAGE_SIZE BIT(0) 44 43 #define I915_ALLOC_CONTIGUOUS BIT(1) ··· 83 82 u16 type; 84 83 u16 instance; 85 84 enum intel_region_id id; 86 - char name[8]; 85 + char name[16]; 86 + bool private; /* not for userspace */ 87 87 88 88 struct list_head reserved; 89 89 ··· 96 94 struct list_head purgeable; 97 95 } objects; 98 96 }; 97 + 98 + struct intel_memory_region * 99 + intel_memory_region_lookup(struct drm_i915_private *i915, 100 + u16 class, u16 instance); 99 101 100 102 int intel_memory_region_init_buddy(struct intel_memory_region *mem); 101 103 void intel_memory_region_release_buddy(struct intel_memory_region *mem);
+12
drivers/gpu/drm/i915/intel_uncore.c
··· 1917 1917 if (ret) 1918 1918 return ret; 1919 1919 1920 + /* 1921 + * The boot firmware initializes local memory and assesses its health. 1922 + * If memory training fails, the punit will have been instructed to 1923 + * keep the GT powered down; we won't be able to communicate with it 1924 + * and we should not continue with driver initialization. 1925 + */ 1926 + if (IS_DGFX(i915) && 1927 + !(__raw_uncore_read32(uncore, GU_CNTL) & LMEM_INIT)) { 1928 + drm_err(&i915->drm, "LMEM not initialized by firmware\n"); 1929 + return -ENODEV; 1930 + } 1931 + 1920 1932 if (INTEL_GEN(i915) > 5 && !intel_vgpu_active(i915)) 1921 1933 uncore->flags |= UNCORE_HAS_FORCEWAKE; 1922 1934
+1 -1
drivers/gpu/drm/i915/selftests/i915_active.c
··· 68 68 return NULL; 69 69 70 70 kref_init(&active->ref); 71 - i915_active_init(&active->base, __live_active, __live_retire); 71 + i915_active_init(&active->base, __live_active, __live_retire, 0); 72 72 73 73 return active; 74 74 }
+10 -10
drivers/gpu/drm/i915/selftests/i915_gem.c
··· 87 87 intel_runtime_pm_put(&i915->runtime_pm, wakeref); 88 88 } 89 89 90 - static int pm_prepare(struct drm_i915_private *i915) 90 + static int igt_pm_prepare(struct drm_i915_private *i915) 91 91 { 92 92 i915_gem_suspend(i915); 93 93 94 94 return 0; 95 95 } 96 96 97 - static void pm_suspend(struct drm_i915_private *i915) 97 + static void igt_pm_suspend(struct drm_i915_private *i915) 98 98 { 99 99 intel_wakeref_t wakeref; 100 100 ··· 104 104 } 105 105 } 106 106 107 - static void pm_hibernate(struct drm_i915_private *i915) 107 + static void igt_pm_hibernate(struct drm_i915_private *i915) 108 108 { 109 109 intel_wakeref_t wakeref; 110 110 ··· 116 116 } 117 117 } 118 118 119 - static void pm_resume(struct drm_i915_private *i915) 119 + static void igt_pm_resume(struct drm_i915_private *i915) 120 120 { 121 121 intel_wakeref_t wakeref; 122 122 ··· 148 148 if (err) 149 149 goto out; 150 150 151 - err = pm_prepare(i915); 151 + err = igt_pm_prepare(i915); 152 152 if (err) 153 153 goto out; 154 154 155 - pm_suspend(i915); 155 + igt_pm_suspend(i915); 156 156 157 157 /* Here be dragons! Note that with S3RST any S3 may become S4! */ 158 158 simulate_hibernate(i915); 159 159 160 - pm_resume(i915); 160 + igt_pm_resume(i915); 161 161 162 162 err = switch_to_context(ctx); 163 163 out: ··· 183 183 if (err) 184 184 goto out; 185 185 186 - err = pm_prepare(i915); 186 + err = igt_pm_prepare(i915); 187 187 if (err) 188 188 goto out; 189 189 190 - pm_hibernate(i915); 190 + igt_pm_hibernate(i915); 191 191 192 192 /* Here be dragons! */ 193 193 simulate_hibernate(i915); 194 194 195 - pm_resume(i915); 195 + igt_pm_resume(i915); 196 196 197 197 err = switch_to_context(ctx); 198 198 out:
+4 -6
drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
··· 186 186 if (err) 187 187 goto err_ppgtt_cleanup; 188 188 189 - err = i915_vm_pin_pt_stash(&ppgtt->vm, &stash); 189 + err = i915_vm_map_pt_stash(&ppgtt->vm, &stash); 190 190 if (err) { 191 191 i915_vm_free_pt_stash(&ppgtt->vm, &stash); 192 192 goto err_ppgtt_cleanup; ··· 208 208 if (err) 209 209 goto err_ppgtt_cleanup; 210 210 211 - err = i915_vm_pin_pt_stash(&ppgtt->vm, &stash); 211 + err = i915_vm_map_pt_stash(&ppgtt->vm, &stash); 212 212 if (err) { 213 213 i915_vm_free_pt_stash(&ppgtt->vm, &stash); 214 214 goto err_ppgtt_cleanup; ··· 325 325 BIT_ULL(size))) 326 326 goto alloc_vm_end; 327 327 328 - err = i915_vm_pin_pt_stash(vm, &stash); 328 + err = i915_vm_map_pt_stash(vm, &stash); 329 329 if (!err) 330 330 vm->allocate_va_range(vm, &stash, 331 331 addr, BIT_ULL(size)); 332 - 333 332 i915_vm_free_pt_stash(vm, &stash); 334 333 alloc_vm_end: 335 334 if (err == -EDEADLK) { ··· 1967 1968 if (err) 1968 1969 goto end_ww; 1969 1970 1970 - err = i915_vm_pin_pt_stash(vm, &stash); 1971 + err = i915_vm_map_pt_stash(vm, &stash); 1971 1972 if (!err) 1972 1973 vm->allocate_va_range(vm, &stash, offset, chunk_size); 1973 - 1974 1974 i915_vm_free_pt_stash(vm, &stash); 1975 1975 end_ww: 1976 1976 if (err == -EDEADLK) {
+1 -2
drivers/gpu/drm/i915/selftests/i915_perf.c
··· 307 307 } 308 308 309 309 /* Poison the ce->vm so we detect writes not to the GGTT gt->scratch */ 310 - scratch = kmap(__px_page(ce->vm->scratch[0])); 310 + scratch = __px_vaddr(ce->vm->scratch[0]); 311 311 memset(scratch, POISON_FREE, PAGE_SIZE); 312 312 313 313 rq = intel_context_create_request(ce); ··· 405 405 out_rq: 406 406 i915_request_put(rq); 407 407 out_ce: 408 - kunmap(__px_page(ce->vm->scratch[0])); 409 408 intel_context_put(ce); 410 409 out: 411 410 stream_destroy(stream);
+3
drivers/gpu/drm/i915/selftests/i915_vma.c
··· 967 967 intel_wakeref_t wakeref; 968 968 int err = 0; 969 969 970 + if (!i915_ggtt_has_aperture(&i915->ggtt)) 971 + return 0; 972 + 970 973 obj = i915_gem_object_create_internal(i915, 10 * 10 * PAGE_SIZE); 971 974 if (IS_ERR(obj)) 972 975 return PTR_ERR(obj);
+2 -2
drivers/gpu/drm/i915/selftests/igt_spinner.c
··· 94 94 } 95 95 96 96 if (!spin->batch) { 97 - unsigned int mode = 98 - i915_coherent_map_type(spin->gt->i915); 97 + unsigned int mode; 99 98 99 + mode = i915_coherent_map_type(spin->gt->i915, spin->obj, false); 100 100 vaddr = igt_spinner_pin_obj(ce, ww, spin->obj, mode, &spin->batch_vma); 101 101 if (IS_ERR(vaddr)) 102 102 return PTR_ERR(vaddr);
+86 -1
drivers/gpu/drm/i915/selftests/intel_memory_region.c
··· 513 513 if (err) 514 514 return err; 515 515 516 - ptr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC); 516 + ptr = i915_gem_object_pin_map(obj, I915_MAP_WC); 517 517 if (IS_ERR(ptr)) 518 518 return PTR_ERR(ptr); 519 519 ··· 593 593 if (err) 594 594 break; 595 595 596 + i915_gem_object_lock(obj, NULL); 596 597 err = igt_cpu_check(obj, dword, rng); 598 + i915_gem_object_unlock(obj); 597 599 if (err) 598 600 break; 599 601 } while (!__igt_timeout(end_time, NULL)); ··· 627 625 i915_gem_object_unpin_pages(obj); 628 626 out_put: 629 627 i915_gem_object_put(obj); 628 + 629 + return err; 630 + } 631 + 632 + static int igt_lmem_create_cleared_cpu(void *arg) 633 + { 634 + struct drm_i915_private *i915 = arg; 635 + I915_RND_STATE(prng); 636 + IGT_TIMEOUT(end_time); 637 + u32 size, i; 638 + int err; 639 + 640 + i915_gem_drain_freed_objects(i915); 641 + 642 + size = max_t(u32, PAGE_SIZE, i915_prandom_u32_max_state(SZ_32M, &prng)); 643 + size = round_up(size, PAGE_SIZE); 644 + i = 0; 645 + 646 + do { 647 + struct drm_i915_gem_object *obj; 648 + unsigned int flags; 649 + u32 dword, val; 650 + void *vaddr; 651 + 652 + /* 653 + * Alternate between cleared and uncleared allocations, while 654 + * also dirtying the pages each time to check that the pages are 655 + * always cleared if requested, since we should get some overlap 656 + * of the underlying pages, if not all, since we are the only 657 + * user. 658 + */ 659 + 660 + flags = I915_BO_ALLOC_CPU_CLEAR; 661 + if (i & 1) 662 + flags = 0; 663 + 664 + obj = i915_gem_object_create_lmem(i915, size, flags); 665 + if (IS_ERR(obj)) 666 + return PTR_ERR(obj); 667 + 668 + i915_gem_object_lock(obj, NULL); 669 + err = i915_gem_object_pin_pages(obj); 670 + if (err) 671 + goto out_put; 672 + 673 + dword = i915_prandom_u32_max_state(PAGE_SIZE / sizeof(u32), 674 + &prng); 675 + 676 + if (flags & I915_BO_ALLOC_CPU_CLEAR) { 677 + err = igt_cpu_check(obj, dword, 0); 678 + if (err) { 679 + pr_err("%s failed with size=%u, flags=%u\n", 680 + __func__, size, flags); 681 + goto out_unpin; 682 + } 683 + } 684 + 685 + vaddr = i915_gem_object_pin_map(obj, I915_MAP_WC); 686 + if (IS_ERR(vaddr)) { 687 + err = PTR_ERR(vaddr); 688 + goto out_unpin; 689 + } 690 + 691 + val = prandom_u32_state(&prng); 692 + 693 + memset32(vaddr, val, obj->base.size / sizeof(u32)); 694 + 695 + i915_gem_object_flush_map(obj); 696 + i915_gem_object_unpin_map(obj); 697 + out_unpin: 698 + i915_gem_object_unpin_pages(obj); 699 + __i915_gem_object_put_pages(obj); 700 + out_put: 701 + i915_gem_object_unlock(obj); 702 + i915_gem_object_put(obj); 703 + 704 + if (err) 705 + break; 706 + ++i; 707 + } while (!__igt_timeout(end_time, NULL)); 708 + 709 + pr_info("%s completed (%u) iterations\n", __func__, i); 630 710 631 711 return err; 632 712 } ··· 1127 1043 { 1128 1044 static const struct i915_subtest tests[] = { 1129 1045 SUBTEST(igt_lmem_create), 1046 + SUBTEST(igt_lmem_create_cleared_cpu), 1130 1047 SUBTEST(igt_lmem_write_cpu), 1131 1048 SUBTEST(igt_lmem_write_gpu), 1132 1049 };
+10
drivers/gpu/drm/i915/selftests/librapl.c
··· 5 5 6 6 #include <asm/msr.h> 7 7 8 + #include "i915_drv.h" 8 9 #include "librapl.h" 10 + 11 + bool librapl_supported(const struct drm_i915_private *i915) 12 + { 13 + /* Discrete cards require hwmon integration */ 14 + if (IS_DGFX(i915)) 15 + return false; 16 + 17 + return librapl_energy_uJ(); 18 + } 9 19 10 20 u64 librapl_energy_uJ(void) 11 21 {
+4
drivers/gpu/drm/i915/selftests/librapl.h
··· 8 8 9 9 #include <linux/types.h> 10 10 11 + struct drm_i915_private; 12 + 13 + bool librapl_supported(const struct drm_i915_private *i915); 14 + 11 15 u64 librapl_energy_uJ(void); 12 16 13 17 #endif /* SELFTEST_LIBRAPL_H */
+359 -34
include/uapi/drm/i915_drm.h
··· 62 62 #define I915_ERROR_UEVENT "ERROR" 63 63 #define I915_RESET_UEVENT "RESET" 64 64 65 - /* 66 - * i915_user_extension: Base class for defining a chain of extensions 65 + /** 66 + * struct i915_user_extension - Base class for defining a chain of extensions 67 67 * 68 68 * Many interfaces need to grow over time. In most cases we can simply 69 69 * extend the struct and have userspace pass in more data. Another option, ··· 76 76 * increasing complexity, and for large parts of that interface to be 77 77 * entirely optional. The downside is more pointer chasing; chasing across 78 78 * the __user boundary with pointers encapsulated inside u64. 79 + * 80 + * Example chaining: 81 + * 82 + * .. code-block:: C 83 + * 84 + * struct i915_user_extension ext3 { 85 + * .next_extension = 0, // end 86 + * .name = ..., 87 + * }; 88 + * struct i915_user_extension ext2 { 89 + * .next_extension = (uintptr_t)&ext3, 90 + * .name = ..., 91 + * }; 92 + * struct i915_user_extension ext1 { 93 + * .next_extension = (uintptr_t)&ext2, 94 + * .name = ..., 95 + * }; 96 + * 97 + * Typically the struct i915_user_extension would be embedded in some uAPI 98 + * struct, and in this case we would feed it the head of the chain(i.e ext1), 99 + * which would then apply all of the above extensions. 100 + * 79 101 */ 80 102 struct i915_user_extension { 103 + /** 104 + * @next_extension: 105 + * 106 + * Pointer to the next struct i915_user_extension, or zero if the end. 107 + */ 81 108 __u64 next_extension; 109 + /** 110 + * @name: Name of the extension. 111 + * 112 + * Note that the name here is just some integer. 113 + * 114 + * Also note that the name space for this is not global for the whole 115 + * driver, but rather its scope/meaning is limited to the specific piece 116 + * of uAPI which has embedded the struct i915_user_extension. 117 + */ 82 118 __u32 name; 83 - __u32 flags; /* All undefined bits must be zero. */ 84 - __u32 rsvd[4]; /* Reserved for future use; must be zero. */ 119 + /** 120 + * @flags: MBZ 121 + * 122 + * All undefined bits must be zero. 123 + */ 124 + __u32 flags; 125 + /** 126 + * @rsvd: MBZ 127 + * 128 + * Reserved for future use; must be zero. 129 + */ 130 + __u32 rsvd[4]; 85 131 }; 86 132 87 133 /* ··· 406 360 #define DRM_I915_QUERY 0x39 407 361 #define DRM_I915_GEM_VM_CREATE 0x3a 408 362 #define DRM_I915_GEM_VM_DESTROY 0x3b 363 + #define DRM_I915_GEM_CREATE_EXT 0x3c 409 364 /* Must be kept compact -- no holes */ 410 365 411 366 #define DRM_IOCTL_I915_INIT DRM_IOW( DRM_COMMAND_BASE + DRM_I915_INIT, drm_i915_init_t) ··· 439 392 #define DRM_IOCTL_I915_GEM_ENTERVT DRM_IO(DRM_COMMAND_BASE + DRM_I915_GEM_ENTERVT) 440 393 #define DRM_IOCTL_I915_GEM_LEAVEVT DRM_IO(DRM_COMMAND_BASE + DRM_I915_GEM_LEAVEVT) 441 394 #define DRM_IOCTL_I915_GEM_CREATE DRM_IOWR(DRM_COMMAND_BASE + DRM_I915_GEM_CREATE, struct drm_i915_gem_create) 395 + #define DRM_IOCTL_I915_GEM_CREATE_EXT DRM_IOWR(DRM_COMMAND_BASE + DRM_I915_GEM_CREATE_EXT, struct drm_i915_gem_create_ext) 442 396 #define DRM_IOCTL_I915_GEM_PREAD DRM_IOW (DRM_COMMAND_BASE + DRM_I915_GEM_PREAD, struct drm_i915_gem_pread) 443 397 #define DRM_IOCTL_I915_GEM_PWRITE DRM_IOW (DRM_COMMAND_BASE + DRM_I915_GEM_PWRITE, struct drm_i915_gem_pwrite) 444 398 #define DRM_IOCTL_I915_GEM_MMAP DRM_IOWR(DRM_COMMAND_BASE + DRM_I915_GEM_MMAP, struct drm_i915_gem_mmap) ··· 1102 1054 __u32 flags; 1103 1055 }; 1104 1056 1105 - /** 1057 + /* 1106 1058 * See drm_i915_gem_execbuffer_ext_timeline_fences. 
1107 1059 */ 1108 1060 #define DRM_I915_GEM_EXECBUFFER_EXT_TIMELINE_FENCES 0 1109 1061 1110 - /** 1062 + /* 1111 1063 * This structure describes an array of drm_syncobj and associated points for 1112 1064 * timeline variants of drm_syncobj. It is invalid to append this structure to 1113 1065 * the execbuf if I915_EXEC_FENCE_ARRAY is set. ··· 1748 1700 __u64 value; 1749 1701 }; 1750 1702 1751 - /** 1703 + /* 1752 1704 * Context SSEU programming 1753 1705 * 1754 1706 * It may be necessary for either functional or performance reason to configure ··· 2115 2067 __u64 properties_ptr; 2116 2068 }; 2117 2069 2118 - /** 2070 + /* 2119 2071 * Enable data capture for a stream that was either opened in a disabled state 2120 2072 * via I915_PERF_FLAG_DISABLED or was later disabled via 2121 2073 * I915_PERF_IOCTL_DISABLE. ··· 2129 2081 */ 2130 2082 #define I915_PERF_IOCTL_ENABLE _IO('i', 0x0) 2131 2083 2132 - /** 2084 + /* 2133 2085 * Disable data capture for a stream. 2134 2086 * 2135 2087 * It is an error to try and read a stream that is disabled. ··· 2138 2090 */ 2139 2091 #define I915_PERF_IOCTL_DISABLE _IO('i', 0x1) 2140 2092 2141 - /** 2093 + /* 2142 2094 * Change metrics_set captured by a stream. 2143 2095 * 2144 2096 * If the stream is bound to a specific context, the configuration change ··· 2151 2103 */ 2152 2104 #define I915_PERF_IOCTL_CONFIG _IO('i', 0x2) 2153 2105 2154 - /** 2106 + /* 2155 2107 * Common to all i915 perf records 2156 2108 */ 2157 2109 struct drm_i915_perf_record_header { ··· 2199 2151 DRM_I915_PERF_RECORD_MAX /* non-ABI */ 2200 2152 }; 2201 2153 2202 - /** 2154 + /* 2203 2155 * Structure to upload perf dynamic configuration into the kernel. 2204 2156 */ 2205 2157 struct drm_i915_perf_oa_config { ··· 2220 2172 __u64 flex_regs_ptr; 2221 2173 }; 2222 2174 2175 + /** 2176 + * struct drm_i915_query_item - An individual query for the kernel to process. 2177 + * 2178 + * The behaviour is determined by the @query_id. Note that exactly what 2179 + * @data_ptr is also depends on the specific @query_id. 2180 + */ 2223 2181 struct drm_i915_query_item { 2182 + /** @query_id: The id for this query */ 2224 2183 __u64 query_id; 2225 2184 #define DRM_I915_QUERY_TOPOLOGY_INFO 1 2226 2185 #define DRM_I915_QUERY_ENGINE_INFO 2 2227 2186 #define DRM_I915_QUERY_PERF_CONFIG 3 2187 + #define DRM_I915_QUERY_MEMORY_REGIONS 4 2228 2188 /* Must be kept compact -- no holes and well documented */ 2229 2189 2230 - /* 2190 + /** 2191 + * @length: 2192 + * 2231 2193 * When set to zero by userspace, this is filled with the size of the 2232 - * data to be written at the data_ptr pointer. The kernel sets this 2194 + * data to be written at the @data_ptr pointer. The kernel sets this 2233 2195 * value to a negative value to signal an error on a particular query 2234 2196 * item. 2235 2197 */ 2236 2198 __s32 length; 2237 2199 2238 - /* 2200 + /** 2201 + * @flags: 2202 + * 2239 2203 * When query_id == DRM_I915_QUERY_TOPOLOGY_INFO, must be 0. 
2240 2204 * 2241 2205 * When query_id == DRM_I915_QUERY_PERF_CONFIG, must be one of the 2242 - * following : 2243 - * - DRM_I915_QUERY_PERF_CONFIG_LIST 2244 - * - DRM_I915_QUERY_PERF_CONFIG_DATA_FOR_UUID 2245 - * - DRM_I915_QUERY_PERF_CONFIG_FOR_UUID 2206 + * following: 2207 + * 2208 + * - DRM_I915_QUERY_PERF_CONFIG_LIST 2209 + * - DRM_I915_QUERY_PERF_CONFIG_DATA_FOR_UUID 2210 + * - DRM_I915_QUERY_PERF_CONFIG_FOR_UUID 2246 2211 */ 2247 2212 __u32 flags; 2248 2213 #define DRM_I915_QUERY_PERF_CONFIG_LIST 1 2249 2214 #define DRM_I915_QUERY_PERF_CONFIG_DATA_FOR_UUID 2 2250 2215 #define DRM_I915_QUERY_PERF_CONFIG_DATA_FOR_ID 3 2251 2216 2252 - /* 2253 - * Data will be written at the location pointed by data_ptr when the 2254 - * value of length matches the length of the data to be written by the 2217 + /** 2218 + * @data_ptr: 2219 + * 2220 + * Data will be written at the location pointed by @data_ptr when the 2221 + * value of @length matches the length of the data to be written by the 2255 2222 * kernel. 2256 2223 */ 2257 2224 __u64 data_ptr; 2258 2225 }; 2259 2226 2227 + /** 2228 + * struct drm_i915_query - Supply an array of struct drm_i915_query_item for the 2229 + * kernel to fill out. 2230 + * 2231 + * Note that this is generally a two step process for each struct 2232 + * drm_i915_query_item in the array: 2233 + * 2234 + * 1. Call the DRM_IOCTL_I915_QUERY, giving it our array of struct 2235 + * drm_i915_query_item, with &drm_i915_query_item.length set to zero. The 2236 + * kernel will then fill in the size, in bytes, which tells userspace how 2237 + * memory it needs to allocate for the blob(say for an array of properties). 2238 + * 2239 + * 2. Next we call DRM_IOCTL_I915_QUERY again, this time with the 2240 + * &drm_i915_query_item.data_ptr equal to our newly allocated blob. Note that 2241 + * the &drm_i915_query_item.length should still be the same as what the 2242 + * kernel previously set. At this point the kernel can fill in the blob. 2243 + * 2244 + * Note that for some query items it can make sense for userspace to just pass 2245 + * in a buffer/blob equal to or larger than the required size. In this case only 2246 + * a single ioctl call is needed. For some smaller query items this can work 2247 + * quite well. 2248 + * 2249 + */ 2260 2250 struct drm_i915_query { 2251 + /** @num_items: The number of elements in the @items_ptr array */ 2261 2252 __u32 num_items; 2262 2253 2263 - /* 2264 - * Unused for now. Must be cleared to zero. 2254 + /** 2255 + * @flags: Unused for now. Must be cleared to zero. 2265 2256 */ 2266 2257 __u32 flags; 2267 2258 2268 - /* 2269 - * This points to an array of num_items drm_i915_query_item structures. 2259 + /** 2260 + * @items_ptr: 2261 + * 2262 + * Pointer to an array of struct drm_i915_query_item. The number of 2263 + * array elements is @num_items. 2270 2264 */ 2271 2265 __u64 items_ptr; 2272 2266 }; ··· 2382 2292 * Describes one engine and it's capabilities as known to the driver. 2383 2293 */ 2384 2294 struct drm_i915_engine_info { 2385 - /** Engine class and instance. */ 2295 + /** @engine: Engine class and instance. */ 2386 2296 struct i915_engine_class_instance engine; 2387 2297 2388 - /** Reserved field. */ 2298 + /** @rsvd0: Reserved field. */ 2389 2299 __u32 rsvd0; 2390 2300 2391 - /** Engine flags. */ 2301 + /** @flags: Engine flags. */ 2392 2302 __u64 flags; 2393 2303 2394 - /** Capabilities of this engine. */ 2304 + /** @capabilities: Capabilities of this engine. 
*/ 2395 2305 __u64 capabilities; 2396 2306 #define I915_VIDEO_CLASS_CAPABILITY_HEVC (1 << 0) 2397 2307 #define I915_VIDEO_AND_ENHANCE_CLASS_CAPABILITY_SFC (1 << 1) 2398 2308 2399 - /** Reserved fields. */ 2309 + /** @rsvd1: Reserved fields. */ 2400 2310 __u64 rsvd1[4]; 2401 2311 }; 2402 2312 ··· 2407 2317 * an array of struct drm_i915_engine_info structures. 2408 2318 */ 2409 2319 struct drm_i915_query_engine_info { 2410 - /** Number of struct drm_i915_engine_info structs following. */ 2320 + /** @num_engines: Number of struct drm_i915_engine_info structs following. */ 2411 2321 __u32 num_engines; 2412 2322 2413 - /** MBZ */ 2323 + /** @rsvd: MBZ */ 2414 2324 __u32 rsvd[3]; 2415 2325 2416 - /** Marker for drm_i915_engine_info structures. */ 2326 + /** @engines: Marker for drm_i915_engine_info structures. */ 2417 2327 struct drm_i915_engine_info engines[]; 2418 2328 }; 2419 2329 ··· 2465 2375 * - n_flex_regs 2466 2376 */ 2467 2377 __u8 data[]; 2378 + }; 2379 + 2380 + /** 2381 + * enum drm_i915_gem_memory_class - Supported memory classes 2382 + */ 2383 + enum drm_i915_gem_memory_class { 2384 + /** @I915_MEMORY_CLASS_SYSTEM: System memory */ 2385 + I915_MEMORY_CLASS_SYSTEM = 0, 2386 + /** @I915_MEMORY_CLASS_DEVICE: Device local-memory */ 2387 + I915_MEMORY_CLASS_DEVICE, 2388 + }; 2389 + 2390 + /** 2391 + * struct drm_i915_gem_memory_class_instance - Identify particular memory region 2392 + */ 2393 + struct drm_i915_gem_memory_class_instance { 2394 + /** @memory_class: See enum drm_i915_gem_memory_class */ 2395 + __u16 memory_class; 2396 + 2397 + /** @memory_instance: Which instance */ 2398 + __u16 memory_instance; 2399 + }; 2400 + 2401 + /** 2402 + * struct drm_i915_memory_region_info - Describes one region as known to the 2403 + * driver. 2404 + * 2405 + * Note that we reserve some stuff here for potential future work. As an example 2406 + * we might want expose the capabilities for a given region, which could include 2407 + * things like if the region is CPU mappable/accessible, what are the supported 2408 + * mapping types etc. 2409 + * 2410 + * Note that to extend struct drm_i915_memory_region_info and struct 2411 + * drm_i915_query_memory_regions in the future the plan is to do the following: 2412 + * 2413 + * .. code-block:: C 2414 + * 2415 + * struct drm_i915_memory_region_info { 2416 + * struct drm_i915_gem_memory_class_instance region; 2417 + * union { 2418 + * __u32 rsvd0; 2419 + * __u32 new_thing1; 2420 + * }; 2421 + * ... 2422 + * union { 2423 + * __u64 rsvd1[8]; 2424 + * struct { 2425 + * __u64 new_thing2; 2426 + * __u64 new_thing3; 2427 + * ... 2428 + * }; 2429 + * }; 2430 + * }; 2431 + * 2432 + * With this things should remain source compatible between versions for 2433 + * userspace, even as we add new fields. 2434 + * 2435 + * Note this is using both struct drm_i915_query_item and struct drm_i915_query. 2436 + * For this new query we are adding the new query id DRM_I915_QUERY_MEMORY_REGIONS 2437 + * at &drm_i915_query_item.query_id. 
2438 + */ 2439 + struct drm_i915_memory_region_info { 2440 + /** @region: The class:instance pair encoding */ 2441 + struct drm_i915_gem_memory_class_instance region; 2442 + 2443 + /** @rsvd0: MBZ */ 2444 + __u32 rsvd0; 2445 + 2446 + /** @probed_size: Memory probed by the driver (-1 = unknown) */ 2447 + __u64 probed_size; 2448 + 2449 + /** @unallocated_size: Estimate of memory remaining (-1 = unknown) */ 2450 + __u64 unallocated_size; 2451 + 2452 + /** @rsvd1: MBZ */ 2453 + __u64 rsvd1[8]; 2454 + }; 2455 + 2456 + /** 2457 + * struct drm_i915_query_memory_regions 2458 + * 2459 + * The region info query enumerates all regions known to the driver by filling 2460 + * in an array of struct drm_i915_memory_region_info structures. 2461 + * 2462 + * Example for getting the list of supported regions: 2463 + * 2464 + * .. code-block:: C 2465 + * 2466 + * struct drm_i915_query_memory_regions *info; 2467 + * struct drm_i915_query_item item = { 2468 + * .query_id = DRM_I915_QUERY_MEMORY_REGIONS; 2469 + * }; 2470 + * struct drm_i915_query query = { 2471 + * .num_items = 1, 2472 + * .items_ptr = (uintptr_t)&item, 2473 + * }; 2474 + * int err, i; 2475 + * 2476 + * // First query the size of the blob we need, this needs to be large 2477 + * // enough to hold our array of regions. The kernel will fill out the 2478 + * // item.length for us, which is the number of bytes we need. 2479 + * err = ioctl(fd, DRM_IOCTL_I915_QUERY, &query); 2480 + * if (err) ... 2481 + * 2482 + * info = calloc(1, item.length); 2483 + * // Now that we allocated the required number of bytes, we call the ioctl 2484 + * // again, this time with the data_ptr pointing to our newly allocated 2485 + * // blob, which the kernel can then populate with the all the region info. 2486 + * item.data_ptr = (uintptr_t)&info, 2487 + * 2488 + * err = ioctl(fd, DRM_IOCTL_I915_QUERY, &query); 2489 + * if (err) ... 2490 + * 2491 + * // We can now access each region in the array 2492 + * for (i = 0; i < info->num_regions; i++) { 2493 + * struct drm_i915_memory_region_info mr = info->regions[i]; 2494 + * u16 class = mr.region.class; 2495 + * u16 instance = mr.region.instance; 2496 + * 2497 + * .... 2498 + * } 2499 + * 2500 + * free(info); 2501 + */ 2502 + struct drm_i915_query_memory_regions { 2503 + /** @num_regions: Number of supported regions */ 2504 + __u32 num_regions; 2505 + 2506 + /** @rsvd: MBZ */ 2507 + __u32 rsvd[3]; 2508 + 2509 + /** @regions: Info about each supported region */ 2510 + struct drm_i915_memory_region_info regions[]; 2511 + }; 2512 + 2513 + /** 2514 + * struct drm_i915_gem_create_ext - Existing gem_create behaviour, with added 2515 + * extension support using struct i915_user_extension. 2516 + * 2517 + * Note that in the future we want to have our buffer flags here, at least for 2518 + * the stuff that is immutable. Previously we would have two ioctls, one to 2519 + * create the object with gem_create, and another to apply various parameters, 2520 + * however this creates some ambiguity for the params which are considered 2521 + * immutable. Also in general we're phasing out the various SET/GET ioctls. 2522 + */ 2523 + struct drm_i915_gem_create_ext { 2524 + /** 2525 + * @size: Requested size for the object. 2526 + * 2527 + * The (page-aligned) allocated size for the object will be returned. 2528 + * 2529 + * Note that for some devices we have might have further minimum 2530 + * page-size restrictions(larger than 4K), like for device local-memory. 
2531 + * However in general the final size here should always reflect any 2532 + * rounding up, if for example using the I915_GEM_CREATE_EXT_MEMORY_REGIONS 2533 + * extension to place the object in device local-memory. 2534 + */ 2535 + __u64 size; 2536 + /** 2537 + * @handle: Returned handle for the object. 2538 + * 2539 + * Object handles are nonzero. 2540 + */ 2541 + __u32 handle; 2542 + /** @flags: MBZ */ 2543 + __u32 flags; 2544 + /** 2545 + * @extensions: The chain of extensions to apply to this object. 2546 + * 2547 + * This will be useful in the future when we need to support several 2548 + * different extensions, and we need to apply more than one when 2549 + * creating the object. See struct i915_user_extension. 2550 + * 2551 + * If we don't supply any extensions then we get the same old gem_create 2552 + * behaviour. 2553 + * 2554 + * For I915_GEM_CREATE_EXT_MEMORY_REGIONS usage see 2555 + * struct drm_i915_gem_create_ext_memory_regions. 2556 + */ 2557 + #define I915_GEM_CREATE_EXT_MEMORY_REGIONS 0 2558 + __u64 extensions; 2559 + }; 2560 + 2561 + /** 2562 + * struct drm_i915_gem_create_ext_memory_regions - The 2563 + * I915_GEM_CREATE_EXT_MEMORY_REGIONS extension. 2564 + * 2565 + * Set the object with the desired set of placements/regions in priority 2566 + * order. Each entry must be unique and supported by the device. 2567 + * 2568 + * This is provided as an array of struct drm_i915_gem_memory_class_instance, or 2569 + * an equivalent layout of class:instance pair encodings. See struct 2570 + * drm_i915_query_memory_regions and DRM_I915_QUERY_MEMORY_REGIONS for how to 2571 + * query the supported regions for a device. 2572 + * 2573 + * As an example, on discrete devices, if we wish to set the placement as 2574 + * device local-memory we can do something like: 2575 + * 2576 + * .. code-block:: C 2577 + * 2578 + * struct drm_i915_gem_memory_class_instance region_lmem = { 2579 + * .memory_class = I915_MEMORY_CLASS_DEVICE, 2580 + * .memory_instance = 0, 2581 + * }; 2582 + * struct drm_i915_gem_create_ext_memory_regions regions = { 2583 + * .base = { .name = I915_GEM_CREATE_EXT_MEMORY_REGIONS }, 2584 + * .regions = (uintptr_t)&region_lmem, 2585 + * .num_regions = 1, 2586 + * }; 2587 + * struct drm_i915_gem_create_ext create_ext = { 2588 + * .size = 16 * PAGE_SIZE, 2589 + * .extensions = (uintptr_t)&regions, 2590 + * }; 2591 + * 2592 + * int err = ioctl(fd, DRM_IOCTL_I915_GEM_CREATE_EXT, &create_ext); 2593 + * if (err) ... 2594 + * 2595 + * At which point we get the object handle in &drm_i915_gem_create_ext.handle, 2596 + * along with the final object size in &drm_i915_gem_create_ext.size, which 2597 + * should account for any rounding up, if required. 2598 + */ 2599 + struct drm_i915_gem_create_ext_memory_regions { 2600 + /** @base: Extension link. See struct i915_user_extension. */ 2601 + struct i915_user_extension base; 2602 + 2603 + /** @pad: MBZ */ 2604 + __u32 pad; 2605 + /** @num_regions: Number of elements in the @regions array. */ 2606 + __u32 num_regions; 2607 + /** 2608 + * @regions: The regions/placements array. 2609 + * 2610 + * An array of struct drm_i915_gem_memory_class_instance. 2611 + */ 2612 + __u64 regions; 2468 2613 }; 2469 2614 2470 2615 #if defined(__cplusplus)
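Taken together, the two kerneldoc examples above cover the whole flow for placing a buffer in device local-memory. A condensed userspace sketch of the same steps (assumes an already-open render-node fd, a kernel with the new uAPI enabled, and omits error handling for brevity):

    #include <stdint.h>
    #include <stdlib.h>
    #include <sys/ioctl.h>
    #include <drm/i915_drm.h>

    static uint32_t create_lmem_bo(int fd, uint64_t size)
    {
            struct drm_i915_query_item item = {
                    .query_id = DRM_I915_QUERY_MEMORY_REGIONS,
            };
            struct drm_i915_query query = {
                    .num_items = 1,
                    .items_ptr = (uintptr_t)&item,
            };
            struct drm_i915_query_memory_regions *info;
            struct drm_i915_gem_memory_class_instance lmem = {};
            uint32_t i;

            /* Pass 1: the kernel fills item.length with the blob size. */
            ioctl(fd, DRM_IOCTL_I915_QUERY, &query);

            info = calloc(1, item.length);
            item.data_ptr = (uintptr_t)info;

            /* Pass 2: the kernel fills in the region array. */
            ioctl(fd, DRM_IOCTL_I915_QUERY, &query);

            for (i = 0; i < info->num_regions; i++) {
                    if (info->regions[i].region.memory_class ==
                        I915_MEMORY_CLASS_DEVICE) {
                            lmem = info->regions[i].region;
                            break;
                    }
            }
            free(info);

            /* Ask for placement in device local-memory only. */
            struct drm_i915_gem_create_ext_memory_regions regions = {
                    .base = { .name = I915_GEM_CREATE_EXT_MEMORY_REGIONS },
                    .num_regions = 1,
                    .regions = (uintptr_t)&lmem,
            };
            struct drm_i915_gem_create_ext create = {
                    .size = size,
                    .extensions = (uintptr_t)&regions,
            };

            ioctl(fd, DRM_IOCTL_I915_GEM_CREATE_EXT, &create);

            return create.handle;
    }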