
drm/i915: Flush to GTT domain all GGTT bound objects after hibernation

Recently I have been applying an optimisation to avoid stalling and
clflushing GGTT objects based on their current binding. That is, we only
set-to-gtt-domain upon first bind. However, on hibernation the objects
remain bound, but they are in the CPU domain. Currently (since commit
975f7ff42edf ("drm/i915: Lazily migrate the objects after hibernation"))
we only flush scanout objects as all other objects are expected to be
flushed prior to use. That breaks down in the face of the runtime
optimisation above - and we need to flush all GGTT pinned objects
(essentially ringbuffers).

To reduce the burden of extra clflushes, we only flush those objects we
cannot discard from the GGTT. Everything pinned to the scanout, or
current contexts or ringbuffers will be flushed and rebound. Other
objects, such as inactive contexts, will be left unbound and in the CPU
domain until first use after resuming.

Fixes: 7abc98fadfdd ("drm/i915: Only change the context object's domain...")
Fixes: 57e885318119 ("drm/i915: Use VMA for ringbuffer tracking")
References: https://bugs.freedesktop.org/show_bug.cgi?id=94722
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Cc: David Weinehall <david.weinehall@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20160909201957.2499-1-chris@chris-wilson.co.uk

+16 -5
drivers/gpu/drm/i915/i915_gem_gtt.c
···
 {
 	struct drm_i915_private *dev_priv = to_i915(dev);
 	struct i915_ggtt *ggtt = &dev_priv->ggtt;
-	struct drm_i915_gem_object *obj;
-	struct i915_vma *vma;
+	struct drm_i915_gem_object *obj, *on;
 
 	i915_check_and_clear_faults(dev_priv);
···
 	ggtt->base.clear_range(&ggtt->base, ggtt->base.start, ggtt->base.total,
 			       true);
 
-	/* Cache flush objects bound into GGTT and rebind them. */
-	list_for_each_entry(obj, &dev_priv->mm.bound_list, global_list) {
+	ggtt->base.closed = true; /* skip rewriting PTE on VMA unbind */
+
+	/* clflush objects bound into the GGTT and rebind them. */
+	list_for_each_entry_safe(obj, on,
+				 &dev_priv->mm.bound_list, global_list) {
+		bool ggtt_bound = false;
+		struct i915_vma *vma;
+
 		list_for_each_entry(vma, &obj->vma_list, obj_link) {
 			if (vma->vm != &ggtt->base)
 				continue;
 
+			if (!i915_vma_unbind(vma))
+				continue;
+
 			WARN_ON(i915_vma_bind(vma, obj->cache_level,
 					      PIN_UPDATE));
+			ggtt_bound = true;
 		}
 
-		if (obj->pin_display)
+		if (ggtt_bound)
 			WARN_ON(i915_gem_object_set_to_gtt_domain(obj, false));
 	}
+
+	ggtt->base.closed = false;
 
 	if (INTEL_INFO(dev)->gen >= 8) {
 		if (IS_CHERRYVIEW(dev) || IS_BROXTON(dev))