
Merge tag 'drm-intel-next-2018-09-06-2' of git://anongit.freedesktop.org/drm/drm-intel into drm-next

Merge tag 'gvt-next-2018-09-04'
drm-intel-next-2018-09-06-1:
UAPI Changes:
- GGTT coherency GETPARAM: GGTT has turned out to be non-coherent for some
  platforms, which we had failed to communicate to userspace so far. SNA was
  modified to do extra flushing on non-coherent GGTT access, while Mesa will
  mitigate by always requiring WC mapping (which is non-coherent anyway).
  (A userspace query sketch follows this list.)
- Neuter Resource Streamer uAPI: There never really were any users of the
  feature, so neuter it while keeping the interface bits for compatibility.
  This is a long-overdue item.
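
Both items above change what GETPARAM reports. A minimal userspace probe,
assuming libdrm headers and a render node; the node path and the fallback
define are illustrative, not part of this series:

  #include <fcntl.h>
  #include <stdio.h>
  #include <xf86drm.h>
  #include <i915_drm.h>

  #ifndef I915_PARAM_MMAP_GTT_COHERENT
  #define I915_PARAM_MMAP_GTT_COHERENT 52 /* added by this series */
  #endif

  static int i915_getparam(int fd, int param, int *value)
  {
          struct drm_i915_getparam gp = { .param = param, .value = value };

          return drmIoctl(fd, DRM_IOCTL_I915_GETPARAM, &gp);
  }

  int main(void)
  {
          int fd = open("/dev/dri/renderD128", O_RDWR); /* assumed node */
          int coherent = 0, rs = 0;

          if (fd < 0)
                  return 1;
          if (i915_getparam(fd, I915_PARAM_MMAP_GTT_COHERENT, &coherent) == 0)
                  printf("GGTT mmaps coherent: %d\n", coherent);
          if (i915_getparam(fd, I915_PARAM_HAS_RESOURCE_STREAMER, &rs) == 0)
                  printf("resource streamer: %d (always 0 now)\n", rs);
          return 0;
  }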

Cross-subsystem Changes:
- Backmerge of branch drm-next-4.19 for DP_DPCD_REV_14 changes

Core Changes:
- None

Driver Changes:

- A load of Icelake (ICL) enabling patches (Paulo, Manasi)
- Enabled full PPGTT for IVB, VLV and HSW (Chris)
- Bugzilla #107113: Distribute DDB based on display resolutions (Mahesh)
- Bugzillas #100023,#107476,#94921: Support limited range DP displays (Jani)
- Bugzilla #107503: Increase LSPCON timeout (Fredrik)
- Avoid boosting GPU due to an occasional stall in interactive workloads (Chris)
- Apply GGTT coherency W/A only for affected systems instead of all (Chris)
- Fix for infinite link training loop for faulty USB-C MST hubs (Nathan)
- Keep KMS functional on Gen4 and earlier when GPU is wedged (Chris)
- Stop holding ppGTT reference from closed VMAs (Chris)
- Clear error registers after error capture (Lionel)
- Various Icelake fixes (Anusha, Jyoti, Ville, Tvrtko)
- Add missing Coffeelake (CFL) PCI IDs (Rodrigo)
- Flush execlists tasklet directly from reset-finish (Chris)
- Fix LPE audio runtime PM (Chris)
- Fix detection of out of range surface positions (GLK/CNL) (Ville)
- Remove wait-for-idle for PSR2 (Dhinakaran)
- Power down existing display hardware resources when display is disabled (Chris)
- Don't allow runtime power management if RC6 doesn't exist (Chris)
- Add debugging checks for runtime power management paths (Imre)
- Increase symmetry in display power init/fini paths (Imre)
- Isolate GVT specific macros from i915_reg.h (Lucas)
- Increase symmetry in power management enable/disable paths (Chris)
- Increase IP disable timeout to 100 ms to avoid DRM_ERROR (Imre)
- Fix memory leak from HDMI HDCP write function (Brian, Rodrigo)
- Reject Y/Yf tiling on interlaced modes (Ville)
- Use a cached mapping for the physical HWS on older gens (Chris)
- Force slow path of writing relocations to buffer if unable to write to userspace (Chris)
- Do a full device reset after being wedged (Chris)
- Keep forcewake counts over reset (in case of debugfs user) (Imre, Chris)
- Avoid false-positive errors from power wells during init (Imre)
- Forcibly reset engines instead of declaring the whole device wedged (Mika)
- Reduce context HW ID lifetime in preparation for Icelake (Chris)
- Attempt to recover from module load failures (Chris)
- Keep select interrupts over a reset to avoid missing/losing them (Chris)
- GuC submission backend improvements (Jakub)
- Terminate context images with BB_END (Chris, Lionel)
- Make GCC evaluate GGTT view struct size assertions again (Ville)
- Add selftest to exercise suspend/hibernate code-paths for GEM (Chris)
- Use a full emulation of a user ppgtt context in selftests (Chris)
- Exercise resetting in the middle of a wait-on-fence in selftests (Chris)
- Fix coherency issues on selftests for Baytrail (Chris)
- Various other GEM fixes / self-test updates (Chris, Matt)
- GuC doorbell self-tests (Daniele)
- PSR mode control through debugfs for IGTs (Maarten); see the sketch after
  this list
- Degrade expected WM latency errors to DRM_DEBUG_KMS (Chris)
- Cope with errors better in MST link training (Dhinakaran)
- Fix WARN on KBL external displays (Azhar)
- Power well code cleanups (Imre)
- Fixes to PSR debugging (Dhinakaran)
- Make forcewake errors louder for easier catching in CI (WARNs) (Chris)
- Fortify tiling code against programmer errors (Chris)
- Bunch of fixes for CI exposed corner cases (multiple authors, mostly Chris)
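
For the PSR debugfs control mentioned above, a rough sketch of how an
IGT-style helper might force PSR1 on a PSR2-capable panel. The mode value
mirrors the I915_PSR_DEBUG_FORCE_PSR1 define added to i915_drv.h in this
series; the debugfs path (minor 0) is an assumption:

  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  #define I915_PSR_DEBUG_FORCE_PSR1 0x03 /* from the i915_drv.h hunk below */

  int main(void)
  {
          int fd = open("/sys/kernel/debug/dri/0/i915_edp_psr_debug", O_WRONLY);

          if (fd < 0)
                  return 1;
          /* Force PSR1 even on a PSR2 panel, e.g. while debugging */
          dprintf(fd, "0x%x\n", I915_PSR_DEBUG_FORCE_PSR1);
          close(fd);
          return 0;
  }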

Signed-off-by: Dave Airlie <airlied@redhat.com>

From: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180907105446.GA22860@jlahtine-desk.ger.corp.intel.com

+3859 -1808
+12
drivers/gpu/drm/i915/Kconfig.debug
···
 	select SW_SYNC # signaling validation framework (igt/syncobj*)
 	select DRM_I915_SW_FENCE_DEBUG_OBJECTS
 	select DRM_I915_SELFTEST
+	select DRM_I915_DEBUG_RUNTIME_PM
 	default n
 	help
 	  Choose this option to turn on extra driver debugging that may affect
···
 	  the vblank.

 	  If in doubt, say "N".
+
+config DRM_I915_DEBUG_RUNTIME_PM
+	bool "Enable extra state checking for runtime PM"
+	depends on DRM_I915
+	default n
+	help
+	  Choose this option to turn on extra state checking for the
+	  runtime PM functionality. This may introduce overhead during
+	  driver loading, suspend and resume operations.
+
+	  If in doubt, say "N"
+12
drivers/gpu/drm/i915/gvt/cfg_space.c
···
 /**
  * vgpu_pci_cfg_mem_write - write virtual cfg space memory
+ * @vgpu: target vgpu
+ * @off: offset
+ * @src: src ptr to write
+ * @bytes: number of bytes
  *
  * Use this function to write virtual cfg space memory.
  * For standard cfg space, only RW bits can be changed,
···
 /**
  * intel_vgpu_emulate_cfg_read - emulate vGPU configuration space read
+ * @vgpu: target vgpu
+ * @offset: offset
+ * @p_data: return data ptr
+ * @bytes: number of bytes to read
  *
  * Returns:
  * Zero on success, negative error code if failed.
···
 /**
  * intel_vgpu_emulate_cfg_write - emulate vGPU configuration space write
+ * @vgpu: target vgpu
+ * @offset: offset
+ * @p_data: write data ptr
+ * @bytes: number of bytes to write
  *
  * Returns:
  * Zero on success, negative error code if failed.
+10 -1
drivers/gpu/drm/i915/gvt/cmd_parser.c
···
 	return ret;
 }

+static int mi_noop_index;
+
 static struct cmd_info cmd_info[] = {
 	{"MI_NOOP", OP_MI_NOOP, F_LEN_CONST, R_ALL, D_ALL, 0, 1, NULL},
···
 	cmd = cmd_val(s, 0);

-	info = get_cmd_info(s->vgpu->gvt, cmd, s->ring_id);
+	/* fastpath for MI_NOOP */
+	if (cmd == MI_NOOP)
+		info = &cmd_info[mi_noop_index];
+	else
+		info = get_cmd_info(s->vgpu->gvt, cmd, s->ring_id);
+
 	if (info == NULL) {
 		gvt_vgpu_err("unknown cmd 0x%x, opcode=0x%x, addr_type=%s, ring %d, workload=%p\n",
 				cmd, get_opcode(cmd, s->ring_id),
···
 		kfree(e);
 		return -EEXIST;
 	}
+	if (cmd_info[i].opcode == OP_MI_NOOP)
+		mi_noop_index = i;

 	INIT_HLIST_NODE(&e->hlist);
 	add_cmd_entry(gvt, e);
+1
drivers/gpu/drm/i915/gvt/display.c
···
 /**
  * intel_vgpu_init_display - initialize vGPU virtual display emulation
  * @vgpu: a vGPU
+ * @resolution: resolution index for intel_vgpu_edid
  *
  * This function is used to initialize vGPU virtual display emulation stuff
  *
+9
drivers/gpu/drm/i915/gvt/edid.c
···
 /**
  * intel_gvt_i2c_handle_gmbus_read - emulate gmbus register mmio read
  * @vgpu: a vGPU
+ * @offset: reg offset
+ * @p_data: data return buffer
+ * @bytes: access data length
  *
  * This function is used to emulate gmbus register mmio read
  *
···
 /**
  * intel_gvt_i2c_handle_gmbus_write - emulate gmbus register mmio write
  * @vgpu: a vGPU
+ * @offset: reg offset
+ * @p_data: data return buffer
+ * @bytes: access data length
  *
  * This function is used to emulate gmbus register mmio write
  *
···
 /**
  * intel_gvt_i2c_handle_aux_ch_write - emulate AUX channel register write
  * @vgpu: a vGPU
+ * @port_idx: port index
+ * @offset: reg offset
+ * @p_data: write ptr
  *
  * This function is used to emulate AUX channel register write
  *
+6 -3
drivers/gpu/drm/i915/gvt/gtt.c
···
 }

 /**
+ * Check if 2MB huge GTT page shadowing can be used for the entry
+ * @vgpu: target vgpu
+ * @entry: target pfn's gtt entry
+ *
  * Return 1 if 2MB huge gtt shadowing is possible, 0 if it is not,
  * negative if an error is found.
  */
···
 /**
  * intel_vgpu_pin_mm - increase the pin count of a vGPU mm object
- * @vgpu: a vGPU
+ * @mm: target vgpu mm
  *
  * This function is called when user wants to use a vGPU mm object. If this
  * mm object hasn't been shadowed yet, the shadow will be populated at this
···
 /**
  * intel_vgpu_find_ppgtt_mm - find a PPGTT mm object
  * @vgpu: a vGPU
- * @page_table_level: PPGTT page table level
- * @root_entry: PPGTT page table root pointers
+ * @pdps: pdp root array
  *
  * This function is used to find a PPGTT mm object from mm object pool
  *
+1 -2
drivers/gpu/drm/i915/gvt/gvt.c
···
 /**
  * intel_gvt_init_host - Load MPT modules and detect if we're running in host
- * @gvt: intel gvt device
  *
  * This function is called at the driver loading stage. If failed to find a
  * loadable MPT module or detect currently we're running in a VM, then GVT-g
···
 /**
  * intel_gvt_clean_device - clean a GVT device
- * @gvt: intel gvt device
+ * @dev_priv: i915 private
  *
  * This function is called at the driver unloading stage, to free the
  * resources owned by a GVT device.
+12 -22
drivers/gpu/drm/i915/gvt/handlers.c
···
 {
 	write_vreg(vgpu, offset, p_data, bytes);

-	if (vgpu_vreg(vgpu, offset) & HSW_PWR_WELL_CTL_REQ(HSW_DISP_PW_GLOBAL))
+	if (vgpu_vreg(vgpu, offset) &
+	    HSW_PWR_WELL_CTL_REQ(HSW_PW_CTL_IDX_GLOBAL))
 		vgpu_vreg(vgpu, offset) |=
-			HSW_PWR_WELL_CTL_STATE(HSW_DISP_PW_GLOBAL);
+			HSW_PWR_WELL_CTL_STATE(HSW_PW_CTL_IDX_GLOBAL);
 	else
 		vgpu_vreg(vgpu, offset) &=
-			~HSW_PWR_WELL_CTL_STATE(HSW_DISP_PW_GLOBAL);
+			~HSW_PWR_WELL_CTL_STATE(HSW_PW_CTL_IDX_GLOBAL);
 	return 0;
 }
···
 	MMIO_F(PCH_GMBUS0, 4 * 4, 0, 0, 0, D_ALL, gmbus_mmio_read,
 		gmbus_mmio_write);
-	MMIO_F(PCH_GPIOA, 6 * 4, F_UNALIGN, 0, 0, D_ALL, NULL, NULL);
+	MMIO_F(PCH_GPIO_BASE, 6 * 4, F_UNALIGN, 0, 0, D_ALL, NULL, NULL);
 	MMIO_F(_MMIO(0xe4f00), 0x28, 0, 0, 0, D_ALL, NULL, NULL);

 	MMIO_F(_MMIO(_PCH_DPB_AUX_CH_CTL), 6 * 4, 0, 0, 0, D_PRE_SKL, NULL,
···
 	MMIO_D(GEN6_RC6p_THRESHOLD, D_ALL);
 	MMIO_D(GEN6_RC6pp_THRESHOLD, D_ALL);
 	MMIO_D(GEN6_PMINTRMSK, D_ALL);
-	/*
-	 * Use an arbitrary power well controlled by the PWR_WELL_CTL
-	 * register.
-	 */
-	MMIO_DH(HSW_PWR_WELL_CTL_BIOS(HSW_DISP_PW_GLOBAL), D_BDW, NULL,
-		power_well_ctl_mmio_write);
-	MMIO_DH(HSW_PWR_WELL_CTL_DRIVER(HSW_DISP_PW_GLOBAL), D_BDW, NULL,
-		power_well_ctl_mmio_write);
-	MMIO_DH(HSW_PWR_WELL_CTL_KVMR, D_BDW, NULL, power_well_ctl_mmio_write);
-	MMIO_DH(HSW_PWR_WELL_CTL_DEBUG(HSW_DISP_PW_GLOBAL), D_BDW, NULL,
-		power_well_ctl_mmio_write);
+	MMIO_DH(HSW_PWR_WELL_CTL1, D_BDW, NULL, power_well_ctl_mmio_write);
+	MMIO_DH(HSW_PWR_WELL_CTL2, D_BDW, NULL, power_well_ctl_mmio_write);
+	MMIO_DH(HSW_PWR_WELL_CTL3, D_BDW, NULL, power_well_ctl_mmio_write);
+	MMIO_DH(HSW_PWR_WELL_CTL4, D_BDW, NULL, power_well_ctl_mmio_write);
 	MMIO_DH(HSW_PWR_WELL_CTL5, D_BDW, NULL, power_well_ctl_mmio_write);
 	MMIO_DH(HSW_PWR_WELL_CTL6, D_BDW, NULL, power_well_ctl_mmio_write);
···
 	MMIO_F(_MMIO(_DPD_AUX_CH_CTL), 6 * 4, 0, 0, 0, D_SKL_PLUS, NULL,
 		dp_aux_ch_ctl_mmio_write);

-	/*
-	 * Use an arbitrary power well controlled by the PWR_WELL_CTL
-	 * register.
-	 */
-	MMIO_D(HSW_PWR_WELL_CTL_BIOS(SKL_DISP_PW_MISC_IO), D_SKL_PLUS);
-	MMIO_DH(HSW_PWR_WELL_CTL_DRIVER(SKL_DISP_PW_MISC_IO), D_SKL_PLUS, NULL,
-		skl_power_well_ctl_write);
+	MMIO_D(HSW_PWR_WELL_CTL1, D_SKL_PLUS);
+	MMIO_DH(HSW_PWR_WELL_CTL2, D_SKL_PLUS, NULL, skl_power_well_ctl_write);

 	MMIO_D(_MMIO(0xa210), D_SKL_PLUS);
 	MMIO_D(GEN9_MEDIA_PG_IDLE_HYSTERESIS, D_SKL_PLUS);
···
  * @offset: register offset
  * @pdata: data buffer
  * @bytes: data length
+ * @is_read: read or write
  *
  * Returns:
  * Zero on success, negative error code if failed.
+2 -2
drivers/gpu/drm/i915/gvt/kvmgt.c
···
 	return pfn;
 }

-int kvmgt_dma_map_guest_page(unsigned long handle, unsigned long gfn,
+static int kvmgt_dma_map_guest_page(unsigned long handle, unsigned long gfn,
 		unsigned long size, dma_addr_t *dma_addr)
 {
 	struct kvmgt_guest_info *info;
···
 	__gvt_cache_remove_entry(entry->vgpu, entry);
 }

-void kvmgt_dma_unmap_guest_page(unsigned long handle, dma_addr_t dma_addr)
+static void kvmgt_dma_unmap_guest_page(unsigned long handle, dma_addr_t dma_addr)
 {
 	struct kvmgt_guest_info *info;
 	struct gvt_dma *entry;
+2 -1
drivers/gpu/drm/i915/gvt/mmio.c
···
 /**
  * intel_vgpu_gpa_to_mmio_offset - translate a GPA to MMIO offset
  * @vgpu: a vGPU
+ * @gpa: guest physical address
  *
  * Returns:
  * Zero on success, negative error code if failed
···
 /**
  * intel_vgpu_reset_mmio - reset virtual MMIO space
  * @vgpu: a vGPU
- *
+ * @dmlr: whether this is device model level reset
  */
 void intel_vgpu_reset_mmio(struct intel_vgpu *vgpu, bool dmlr)
 {
-13
drivers/gpu/drm/i915/gvt/mmio_context.c
···
 #include "gvt.h"
 #include "trace.h"

-/**
- * Defined in Intel Open Source PRM.
- * Ref: https://01.org/linuxgraphics/documentation/hardware-specification-prms
- */
-#define TRVATTL3PTRDW(i)	_MMIO(0x4de0 + (i)*4)
-#define TRNULLDETCT	_MMIO(0x4de8)
-#define TRINVTILEDETCT	_MMIO(0x4dec)
-#define TRVADR	_MMIO(0x4df0)
-#define TRTTE	_MMIO(0x4df4)
-#define RING_EXCC(base)	_MMIO((base) + 0x28)
-#define RING_GFX_MODE(base)	_MMIO((base) + 0x29c)
-#define VF_GUARDBAND	_MMIO(0x83a4)
-
 #define GEN9_MOCS_SIZE	64

 /* Raw offset is appended to each line for convenience. */
+3
drivers/gpu/drm/i915/gvt/mmio_context.h
···
 int intel_vgpu_restore_inhibit_context(struct intel_vgpu *vgpu,
 				       struct i915_request *req);
+#define IS_RESTORE_INHIBIT(a)	\
+	(_MASKED_BIT_ENABLE(CTX_CTRL_ENGINE_CTX_RESTORE_INHIBIT) == \
+	((a) & _MASKED_BIT_ENABLE(CTX_CTRL_ENGINE_CTX_RESTORE_INHIBIT)))

 #endif
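
IS_RESTORE_INHIBIT above leans on the i915 masked-register convention: the
upper 16 bits of a write select which bits are touched, the lower 16 carry
the values. A minimal standalone sketch, using a simplified
_MASKED_BIT_ENABLE and an assumed bit position for illustration (the real
definitions live in i915_reg.h and intel_lrc.h):

  /* simplified; the driver's version is typeof-safe */
  #define _MASKED_BIT_ENABLE(a)	(((a) << 16) | (a))

  /* assumed bit 0 purely for illustration */
  #define CTX_CTRL_ENGINE_CTX_RESTORE_INHIBIT	(1 << 0)

  #define IS_RESTORE_INHIBIT(a) \
  	(_MASKED_BIT_ENABLE(CTX_CTRL_ENGINE_CTX_RESTORE_INHIBIT) == \
  	((a) & _MASKED_BIT_ENABLE(CTX_CTRL_ENGINE_CTX_RESTORE_INHIBIT)))

  /* mask bit and value bit both set: the bit was really written as 1 */
  _Static_assert(IS_RESTORE_INHIBIT(0x00010001u), "inhibited");
  /* value bit without its mask bit was never actually written */
  _Static_assert(!IS_RESTORE_INHIBIT(0x00000001u), "not inhibited");
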
-1
drivers/gpu/drm/i915/gvt/opregion.c
···
 /**
  * intel_vgpu_init_opregion - initialize the stuff used to emulate opregion
  * @vgpu: a vGPU
- * @gpa: guest physical address of opregion
  *
  * Returns:
  * Zero on success, negative error code if failed.
+2
drivers/gpu/drm/i915/gvt/page_track.c
···
  * intel_vgpu_register_page_track - register a guest page to be tracked
  * @vgpu: a vGPU
  * @gfn: the gfn of guest page
+ * @handler: page track handler
+ * @priv: tracker private
  *
  * Returns:
  * zero on success, negative error code if failed.
+18
drivers/gpu/drm/i915/gvt/reg.h
···
 #define _RING_CTL_BUF_SIZE(ctl) (((ctl) & RB_TAIL_SIZE_MASK) + \
 		I915_GTT_PAGE_SIZE)

+#define PCH_GPIO_BASE	_MMIO(0xc5010)
+
+#define PCH_GMBUS0	_MMIO(0xc5100)
+#define PCH_GMBUS1	_MMIO(0xc5104)
+#define PCH_GMBUS2	_MMIO(0xc5108)
+#define PCH_GMBUS3	_MMIO(0xc510c)
+#define PCH_GMBUS4	_MMIO(0xc5110)
+#define PCH_GMBUS5	_MMIO(0xc5120)
+
+#define TRVATTL3PTRDW(i)	_MMIO(0x4de0 + (i) * 4)
+#define TRNULLDETCT	_MMIO(0x4de8)
+#define TRINVTILEDETCT	_MMIO(0x4dec)
+#define TRVADR	_MMIO(0x4df0)
+#define TRTTE	_MMIO(0x4df4)
+#define RING_EXCC(base)	_MMIO((base) + 0x28)
+#define RING_GFX_MODE(base)	_MMIO((base) + 0x29c)
+#define VF_GUARDBAND	_MMIO(0x83a4)
+
 #endif
+34 -30
drivers/gpu/drm/i915/gvt/scheduler.c
···
 	unsigned long context_gpa, context_page_num;
 	int i;

-	gvt_dbg_sched("ring id %d workload lrca %x", ring_id,
-			workload->ctx_desc.lrca);
-
-	context_page_num = gvt->dev_priv->engine[ring_id]->context_size;
-
-	context_page_num = context_page_num >> PAGE_SHIFT;
-
-	if (IS_BROADWELL(gvt->dev_priv) && ring_id == RCS)
-		context_page_num = 19;
-
-	i = 2;
-
-	while (i < context_page_num) {
-		context_gpa = intel_vgpu_gma_to_gpa(vgpu->gtt.ggtt_mm,
-				(u32)((workload->ctx_desc.lrca + i) <<
-				I915_GTT_PAGE_SHIFT));
-		if (context_gpa == INTEL_GVT_INVALID_ADDR) {
-			gvt_vgpu_err("Invalid guest context descriptor\n");
-			return -EFAULT;
-		}
-
-		page = i915_gem_object_get_page(ctx_obj, LRC_HEADER_PAGES + i);
-		dst = kmap(page);
-		intel_gvt_hypervisor_read_gpa(vgpu, context_gpa, dst,
-				I915_GTT_PAGE_SIZE);
-		kunmap(page);
-		i++;
-	}
-
 	page = i915_gem_object_get_page(ctx_obj, LRC_STATE_PN);
 	shadow_ring_context = kmap(page);
···

 	sr_oa_regs(workload, (u32 *)shadow_ring_context, false);
 	kunmap(page);
+
+	if (IS_RESTORE_INHIBIT(shadow_ring_context->ctx_ctrl.val))
+		return 0;
+
+	gvt_dbg_sched("ring id %d workload lrca %x", ring_id,
+			workload->ctx_desc.lrca);
+
+	context_page_num = gvt->dev_priv->engine[ring_id]->context_size;
+
+	context_page_num = context_page_num >> PAGE_SHIFT;
+
+	if (IS_BROADWELL(gvt->dev_priv) && ring_id == RCS)
+		context_page_num = 19;
+
+	i = 2;
+	while (i < context_page_num) {
+		context_gpa = intel_vgpu_gma_to_gpa(vgpu->gtt.ggtt_mm,
+				(u32)((workload->ctx_desc.lrca + i) <<
+				I915_GTT_PAGE_SHIFT));
+		if (context_gpa == INTEL_GVT_INVALID_ADDR) {
+			gvt_vgpu_err("Invalid guest context descriptor\n");
+			return -EFAULT;
+		}
+
+		page = i915_gem_object_get_page(ctx_obj, LRC_HEADER_PAGES + i);
+		dst = kmap(page);
+		intel_gvt_hypervisor_read_gpa(vgpu, context_gpa, dst,
+				I915_GTT_PAGE_SIZE);
+		kunmap(page);
+		i++;
+	}
 	return 0;
 }
···
 /**
  * intel_vgpu_select_submission_ops - select virtual submission interface
  * @vgpu: a vGPU
+ * @engine_mask: either ALL_ENGINES or target engine mask
  * @interface: expected vGPU virtual submission interface
  *
  * This function is called when guest configures submission interface.
···
 /**
  * intel_vgpu_destroy_workload - destroy a vGPU workload
- * @vgpu: a vGPU
+ * @workload: workload to destroy
  *
  * This function is called when destroy a vGPU workload.
  *
···
 /**
  * intel_vgpu_create_workload - create a vGPU workload
  * @vgpu: a vGPU
+ * @ring_id: ring index
  * @desc: a guest context descriptor
  *
  * This function is called when creating a vGPU workload.
+65 -25
drivers/gpu/drm/i915/i915_debugfs.c
···
 		return ret;

 	list_for_each_entry(ctx, &dev_priv->contexts.list, link) {
-		seq_printf(m, "HW context %u ", ctx->hw_id);
+		seq_puts(m, "HW context ");
+		if (!list_empty(&ctx->hw_id_link))
+			seq_printf(m, "%x [pin %u]", ctx->hw_id,
+				   atomic_read(&ctx->hw_id_pin_count));
 		if (ctx->pid) {
 			struct task_struct *task;
···
 	intel_runtime_pm_get(dev_priv);

 	mutex_lock(&dev_priv->psr.lock);
-	seq_printf(m, "Enabled: %s\n", yesno((bool)dev_priv->psr.enabled));
+	seq_printf(m, "PSR mode: %s\n",
+		   dev_priv->psr.psr2_enabled ? "PSR2" : "PSR1");
+	seq_printf(m, "Enabled: %s\n", yesno(dev_priv->psr.enabled));
 	seq_printf(m, "Busy frontbuffer bits: 0x%03x\n",
 		   dev_priv->psr.busy_frontbuffer_bits);
···
 	psr_source_status(dev_priv, m);
 	mutex_unlock(&dev_priv->psr.lock);

-	if (READ_ONCE(dev_priv->psr.debug)) {
+	if (READ_ONCE(dev_priv->psr.debug) & I915_PSR_DEBUG_IRQ) {
 		seq_printf(m, "Last attempted entry at: %lld\n",
 			   dev_priv->psr.last_entry_attempt);
 		seq_printf(m, "Last exit at: %lld\n",
···
 i915_edp_psr_debug_set(void *data, u64 val)
 {
 	struct drm_i915_private *dev_priv = data;
+	struct drm_modeset_acquire_ctx ctx;
+	int ret;

 	if (!CAN_PSR(dev_priv))
 		return -ENODEV;

-	DRM_DEBUG_KMS("PSR debug %s\n", enableddisabled(val));
+	DRM_DEBUG_KMS("Setting PSR debug to %llx\n", val);

 	intel_runtime_pm_get(dev_priv);
-	intel_psr_irq_control(dev_priv, !!val);
+
+	drm_modeset_acquire_init(&ctx, DRM_MODESET_ACQUIRE_INTERRUPTIBLE);
+
+retry:
+	ret = intel_psr_set_debugfs_mode(dev_priv, &ctx, val);
+	if (ret == -EDEADLK) {
+		ret = drm_modeset_backoff(&ctx);
+		if (!ret)
+			goto retry;
+	}
+
+	drm_modeset_drop_locks(&ctx);
+	drm_modeset_acquire_fini(&ctx);
+
 	intel_runtime_pm_put(dev_priv);

-	return 0;
+	return ret;
 }

 static int
···
 		enum intel_display_power_domain power_domain;

 		power_well = &power_domains->power_wells[i];
-		seq_printf(m, "%-25s %d\n", power_well->name,
+		seq_printf(m, "%-25s %d\n", power_well->desc->name,
 			   power_well->count);

-		for_each_power_domain(power_domain, power_well->domains)
+		for_each_power_domain(power_domain, power_well->desc->domains)
 			seq_printf(m, "  %-23s %d\n",
 				   intel_display_power_domain_str(power_domain),
 				   power_domains->domain_use_count[power_domain]);
···
 #define DROP_FREED	BIT(4)
 #define DROP_SHRINK_ALL	BIT(5)
 #define DROP_IDLE	BIT(6)
+#define DROP_RESET_ACTIVE	BIT(7)
+#define DROP_RESET_SEQNO	BIT(8)
 #define DROP_ALL (DROP_UNBOUND	| \
 		  DROP_BOUND	| \
 		  DROP_RETIRE	| \
 		  DROP_ACTIVE	| \
 		  DROP_FREED	| \
 		  DROP_SHRINK_ALL |\
-		  DROP_IDLE)
+		  DROP_IDLE	| \
+		  DROP_RESET_ACTIVE | \
+		  DROP_RESET_SEQNO)
 static int
 i915_drop_caches_get(void *data, u64 *val)
 {
···
 static int
 i915_drop_caches_set(void *data, u64 val)
 {
-	struct drm_i915_private *dev_priv = data;
-	struct drm_device *dev = &dev_priv->drm;
+	struct drm_i915_private *i915 = data;
 	int ret = 0;

 	DRM_DEBUG("Dropping caches: 0x%08llx [0x%08llx]\n",
 		  val, val & DROP_ALL);

+	if (val & DROP_RESET_ACTIVE && !intel_engines_are_idle(i915))
+		i915_gem_set_wedged(i915);
+
 	/* No need to check and wait for gpu resets, only libdrm auto-restarts
 	 * on ioctls on -EAGAIN. */
-	if (val & (DROP_ACTIVE | DROP_RETIRE)) {
-		ret = mutex_lock_interruptible(&dev->struct_mutex);
+	if (val & (DROP_ACTIVE | DROP_RETIRE | DROP_RESET_SEQNO)) {
+		ret = mutex_lock_interruptible(&i915->drm.struct_mutex);
 		if (ret)
 			return ret;

 		if (val & DROP_ACTIVE)
-			ret = i915_gem_wait_for_idle(dev_priv,
+			ret = i915_gem_wait_for_idle(i915,
 						     I915_WAIT_INTERRUPTIBLE |
 						     I915_WAIT_LOCKED,
 						     MAX_SCHEDULE_TIMEOUT);

-		if (val & DROP_RETIRE)
-			i915_retire_requests(dev_priv);
+		if (val & DROP_RESET_SEQNO) {
+			intel_runtime_pm_get(i915);
+			ret = i915_gem_set_global_seqno(&i915->drm, 1);
+			intel_runtime_pm_put(i915);
+		}

-		mutex_unlock(&dev->struct_mutex);
+		if (val & DROP_RETIRE)
+			i915_retire_requests(i915);
+
+		mutex_unlock(&i915->drm.struct_mutex);
+	}
+
+	if (val & DROP_RESET_ACTIVE &&
+	    i915_terminally_wedged(&i915->gpu_error)) {
+		i915_handle_error(i915, ALL_ENGINES, 0, NULL);
+		wait_on_bit(&i915->gpu_error.flags,
+			    I915_RESET_HANDOFF,
+			    TASK_UNINTERRUPTIBLE);
 	}

 	fs_reclaim_acquire(GFP_KERNEL);
 	if (val & DROP_BOUND)
-		i915_gem_shrink(dev_priv, LONG_MAX, NULL, I915_SHRINK_BOUND);
+		i915_gem_shrink(i915, LONG_MAX, NULL, I915_SHRINK_BOUND);

 	if (val & DROP_UNBOUND)
-		i915_gem_shrink(dev_priv, LONG_MAX, NULL, I915_SHRINK_UNBOUND);
+		i915_gem_shrink(i915, LONG_MAX, NULL, I915_SHRINK_UNBOUND);

 	if (val & DROP_SHRINK_ALL)
-		i915_gem_shrink_all(dev_priv);
+		i915_gem_shrink_all(i915);
 	fs_reclaim_release(GFP_KERNEL);

 	if (val & DROP_IDLE) {
 		do {
-			if (READ_ONCE(dev_priv->gt.active_requests))
-				flush_delayed_work(&dev_priv->gt.retire_work);
-			drain_delayed_work(&dev_priv->gt.idle_work);
-		} while (READ_ONCE(dev_priv->gt.awake));
+			if (READ_ONCE(i915->gt.active_requests))
+				flush_delayed_work(&i915->gt.retire_work);
+			drain_delayed_work(&i915->gt.idle_work);
+		} while (READ_ONCE(i915->gt.awake));
 	}

 	if (val & DROP_FREED)
-		i915_gem_drain_freed_objects(dev_priv);
+		i915_gem_drain_freed_objects(i915);

 	return ret;
 }
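
A sketch of how a CI helper might poke the new DROP_RESET_* bits from
userspace; the bit values mirror the defines in the hunk above, while the
debugfs path (minor 0) is an assumption:

  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  #define DROP_RESET_ACTIVE	(1u << 7)	/* BIT(7) above */
  #define DROP_RESET_SEQNO	(1u << 8)	/* BIT(8) above */

  int main(void)
  {
          int fd = open("/sys/kernel/debug/dri/0/i915_drop_caches", O_WRONLY);

          if (fd < 0)
                  return 1;
          /* Wedge a busy GPU, reset it, and rewind the global seqno to 1 */
          dprintf(fd, "0x%x", DROP_RESET_ACTIVE | DROP_RESET_SEQNO);
          close(fd);
          return 0;
  }
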
+104 -90
drivers/gpu/drm/i915/i915_drv.c
···
 		value = 2;
 		break;
 	case I915_PARAM_HAS_RESOURCE_STREAMER:
-		value = HAS_RESOURCE_STREAMER(dev_priv);
+		value = 0;
 		break;
 	case I915_PARAM_HAS_POOLED_EU:
 		value = HAS_POOLED_EU(dev_priv);
···
 		break;
 	case I915_PARAM_CS_TIMESTAMP_FREQUENCY:
 		value = 1000 * INTEL_INFO(dev_priv)->cs_timestamp_frequency_khz;
+		break;
+	case I915_PARAM_MMAP_GTT_COHERENT:
+		value = INTEL_INFO(dev_priv)->has_coherent_ggtt;
 		break;
 	default:
 		DRM_DEBUG("Unknown parameter %d\n", param->param);
···
 	intel_teardown_gmbus(dev_priv);
 cleanup_csr:
 	intel_csr_ucode_fini(dev_priv);
-	intel_power_domains_fini(dev_priv);
+	intel_power_domains_fini_hw(dev_priv);
 	vga_switcheroo_unregister_client(pdev);
 cleanup_vga_client:
 	vga_client_register(pdev, NULL, NULL, NULL);
···
 /**
  * i915_driver_init_early - setup state not requiring device access
  * @dev_priv: device private
- * @ent: the matching pci_device_id
  *
  * Initialize everything that is a "SW-only" state, that is state not
  * requiring accessing the device or exposing the driver via kernel internal
···
  * system memory allocation, setting up device specific attributes and
  * function hooks not requiring accessing the device.
  */
-static int i915_driver_init_early(struct drm_i915_private *dev_priv,
-				  const struct pci_device_id *ent)
+static int i915_driver_init_early(struct drm_i915_private *dev_priv)
 {
-	const struct intel_device_info *match_info =
-		(struct intel_device_info *)ent->driver_data;
-	struct intel_device_info *device_info;
 	int ret = 0;

 	if (i915_inject_load_failure())
 		return -ENODEV;

-	/* Setup the write-once "constant" device info */
-	device_info = mkwrite_device_info(dev_priv);
-	memcpy(device_info, match_info, sizeof(*device_info));
-	device_info->device_id = dev_priv->drm.pdev->device;
-
-	BUILD_BUG_ON(INTEL_MAX_PLATFORMS >
-		     sizeof(device_info->platform_mask) * BITS_PER_BYTE);
-	BUG_ON(device_info->gen > sizeof(device_info->gen_mask) * BITS_PER_BYTE);
 	spin_lock_init(&dev_priv->irq_lock);
 	spin_lock_init(&dev_priv->gpu_error.lock);
 	mutex_init(&dev_priv->backlight_lock);
···
 	intel_uc_init_early(dev_priv);
 	intel_pm_setup(dev_priv);
 	intel_init_dpio(dev_priv);
-	intel_power_domains_init(dev_priv);
+	ret = intel_power_domains_init(dev_priv);
+	if (ret < 0)
+		goto err_uc;
 	intel_irq_init(dev_priv);
 	intel_hangcheck_init(dev_priv);
 	intel_init_display_hooks(dev_priv);
···
 	return 0;

+err_uc:
+	intel_uc_cleanup_early(dev_priv);
+	i915_gem_cleanup_early(dev_priv);
 err_workqueues:
 	i915_workqueues_cleanup(dev_priv);
 err_engines:
···
 static void i915_driver_cleanup_early(struct drm_i915_private *dev_priv)
 {
 	intel_irq_fini(dev_priv);
+	intel_power_domains_cleanup(dev_priv);
 	intel_uc_cleanup_early(dev_priv);
 	i915_gem_cleanup_early(dev_priv);
 	i915_workqueues_cleanup(dev_priv);
···
 	 */
 	if (INTEL_INFO(dev_priv)->num_pipes)
 		drm_kms_helper_poll_init(dev);
+
+	intel_power_domains_enable(dev_priv);
+	intel_runtime_pm_enable(dev_priv);
 }

 /**
···
  */
 static void i915_driver_unregister(struct drm_i915_private *dev_priv)
 {
+	intel_runtime_pm_disable(dev_priv);
+	intel_power_domains_disable(dev_priv);
+
 	intel_fbdev_unregister(dev_priv);
 	intel_audio_deinit(dev_priv);
···
 		DRM_INFO("DRM_I915_DEBUG enabled\n");
 	if (IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM))
 		DRM_INFO("DRM_I915_DEBUG_GEM enabled\n");
+	if (IS_ENABLED(CONFIG_DRM_I915_DEBUG_RUNTIME_PM))
+		DRM_INFO("DRM_I915_DEBUG_RUNTIME_PM enabled\n");
+}
+
+static struct drm_i915_private *
+i915_driver_create(struct pci_dev *pdev, const struct pci_device_id *ent)
+{
+	const struct intel_device_info *match_info =
+		(struct intel_device_info *)ent->driver_data;
+	struct intel_device_info *device_info;
+	struct drm_i915_private *i915;
+
+	i915 = kzalloc(sizeof(*i915), GFP_KERNEL);
+	if (!i915)
+		return NULL;
+
+	if (drm_dev_init(&i915->drm, &driver, &pdev->dev)) {
+		kfree(i915);
+		return NULL;
+	}
+
+	i915->drm.pdev = pdev;
+	i915->drm.dev_private = i915;
+	pci_set_drvdata(pdev, &i915->drm);
+
+	/* Setup the write-once "constant" device info */
+	device_info = mkwrite_device_info(i915);
+	memcpy(device_info, match_info, sizeof(*device_info));
+	device_info->device_id = pdev->device;
+
+	BUILD_BUG_ON(INTEL_MAX_PLATFORMS >
+		     sizeof(device_info->platform_mask) * BITS_PER_BYTE);
+	BUG_ON(device_info->gen > sizeof(device_info->gen_mask) * BITS_PER_BYTE);
+
+	return i915;
+}
+
+static void i915_driver_destroy(struct drm_i915_private *i915)
+{
+	struct pci_dev *pdev = i915->drm.pdev;
+
+	drm_dev_fini(&i915->drm);
+	kfree(i915);
+
+	/* And make sure we never chase our dangling pointer from pci_dev */
+	pci_set_drvdata(pdev, NULL);
 }

 /**
···
 	if (!i915_modparams.nuclear_pageflip && match_info->gen < 5)
 		driver.driver_features &= ~DRIVER_ATOMIC;

-	ret = -ENOMEM;
-	dev_priv = kzalloc(sizeof(*dev_priv), GFP_KERNEL);
-	if (dev_priv)
-		ret = drm_dev_init(&dev_priv->drm, &driver, &pdev->dev);
-	if (ret) {
-		DRM_DEV_ERROR(&pdev->dev, "allocation failed\n");
-		goto out_free;
-	}
-
-	dev_priv->drm.pdev = pdev;
-	dev_priv->drm.dev_private = dev_priv;
+	dev_priv = i915_driver_create(pdev, ent);
+	if (!dev_priv)
+		return -ENOMEM;

 	ret = pci_enable_device(pdev);
 	if (ret)
 		goto out_fini;

-	pci_set_drvdata(pdev, &dev_priv->drm);
-	/*
-	 * Disable the system suspend direct complete optimization, which can
-	 * leave the device suspended skipping the driver's suspend handlers
-	 * if the device was already runtime suspended. This is needed due to
-	 * the difference in our runtime and system suspend sequence and
-	 * becaue the HDA driver may require us to enable the audio power
-	 * domain during system suspend.
-	 */
-	dev_pm_set_driver_flags(&pdev->dev, DPM_FLAG_NEVER_SKIP);
-
-	ret = i915_driver_init_early(dev_priv, ent);
+	ret = i915_driver_init_early(dev_priv);
 	if (ret < 0)
 		goto out_pci_disable;

-	intel_runtime_pm_get(dev_priv);
+	disable_rpm_wakeref_asserts(dev_priv);

 	ret = i915_driver_init_mmio(dev_priv);
 	if (ret < 0)
···
 	i915_driver_register(dev_priv);

-	intel_runtime_pm_enable(dev_priv);
-
 	intel_init_ipc(dev_priv);

-	intel_runtime_pm_put(dev_priv);
+	enable_rpm_wakeref_asserts(dev_priv);

 	i915_welcome_messages(dev_priv);
···
 out_cleanup_mmio:
 	i915_driver_cleanup_mmio(dev_priv);
 out_runtime_pm_put:
-	intel_runtime_pm_put(dev_priv);
+	enable_rpm_wakeref_asserts(dev_priv);
 	i915_driver_cleanup_early(dev_priv);
 out_pci_disable:
 	pci_disable_device(pdev);
 out_fini:
 	i915_load_error(dev_priv, "Device initialization failed (%d)\n", ret);
-	drm_dev_fini(&dev_priv->drm);
-out_free:
-	kfree(dev_priv);
-	pci_set_drvdata(pdev, NULL);
+	i915_driver_destroy(dev_priv);
 	return ret;
 }
···
 	struct drm_i915_private *dev_priv = to_i915(dev);
 	struct pci_dev *pdev = dev_priv->drm.pdev;

+	disable_rpm_wakeref_asserts(dev_priv);
+
 	i915_driver_unregister(dev_priv);

 	if (i915_gem_suspend(dev_priv))
 		DRM_ERROR("failed to idle hardware; continuing to unload!\n");
-
-	intel_display_power_get(dev_priv, POWER_DOMAIN_INIT);

 	drm_atomic_helper_shutdown(dev);
···
 	i915_gem_fini(dev_priv);
 	intel_fbc_cleanup_cfb(dev_priv);

-	intel_power_domains_fini(dev_priv);
+	intel_power_domains_fini_hw(dev_priv);

 	i915_driver_cleanup_hw(dev_priv);
 	i915_driver_cleanup_mmio(dev_priv);

-	intel_display_power_put(dev_priv, POWER_DOMAIN_INIT);
+	enable_rpm_wakeref_asserts(dev_priv);
+
+	WARN_ON(atomic_read(&dev_priv->runtime_pm.wakeref_count));
 }

 static void i915_driver_release(struct drm_device *dev)
···
 	struct drm_i915_private *dev_priv = to_i915(dev);

 	i915_driver_cleanup_early(dev_priv);
-	drm_dev_fini(&dev_priv->drm);
-
-	kfree(dev_priv);
+	i915_driver_destroy(dev_priv);
 }

 static int i915_driver_open(struct drm_device *dev, struct drm_file *file)
···
 	/* We do a lot of poking in a lot of registers, make sure they work
 	 * properly. */
-	intel_display_set_init_power(dev_priv, true);
+	intel_power_domains_disable(dev_priv);

 	drm_kms_helper_poll_disable(dev);
···
 	return 0;
 }

+static enum i915_drm_suspend_mode
+get_suspend_mode(struct drm_i915_private *dev_priv, bool hibernate)
+{
+	if (hibernate)
+		return I915_DRM_SUSPEND_HIBERNATE;
+
+	if (suspend_to_idle(dev_priv))
+		return I915_DRM_SUSPEND_IDLE;
+
+	return I915_DRM_SUSPEND_MEM;
+}
+
 static int i915_drm_suspend_late(struct drm_device *dev, bool hibernation)
 {
 	struct drm_i915_private *dev_priv = to_i915(dev);
···
 	i915_gem_suspend_late(dev_priv);

-	intel_display_set_init_power(dev_priv, false);
 	intel_uncore_suspend(dev_priv);

-	/*
-	 * In case of firmware assisted context save/restore don't manually
-	 * deinit the power domains. This also means the CSR/DMC firmware will
-	 * stay active, it will power down any HW resources as required and
-	 * also enable deeper system power states that would be blocked if the
-	 * firmware was inactive.
-	 */
-	if (IS_GEN9_LP(dev_priv) || hibernation || !suspend_to_idle(dev_priv) ||
-	    dev_priv->csr.dmc_payload == NULL) {
-		intel_power_domains_suspend(dev_priv);
-		dev_priv->power_domains_suspended = true;
-	}
+	intel_power_domains_suspend(dev_priv,
+				    get_suspend_mode(dev_priv, hibernation));

 	ret = 0;
 	if (IS_GEN9_LP(dev_priv))
···
 	if (ret) {
 		DRM_ERROR("Suspend complete failed: %d\n", ret);
-		if (dev_priv->power_domains_suspended) {
-			intel_power_domains_init_hw(dev_priv, true);
-			dev_priv->power_domains_suspended = false;
-		}
+		intel_power_domains_resume(dev_priv);

 		goto out;
 	}
···
 	/*
 	 * ... but also need to make sure that hotplug processing
 	 * doesn't cause havoc. Like in the driver load code we don't
-	 * bother with the tiny race here where we might loose hotplug
+	 * bother with the tiny race here where we might lose hotplug
 	 * notifications.
 	 * */
 	intel_hpd_init(dev_priv);
···
 	intel_fbdev_set_suspend(dev, FBINFO_STATE_RUNNING, false);

 	intel_opregion_notify_adapter(dev_priv, PCI_D0);
+
+	intel_power_domains_enable(dev_priv);

 	enable_rpm_wakeref_asserts(dev_priv);
···
 	ret = pci_set_power_state(pdev, PCI_D0);
 	if (ret) {
 		DRM_ERROR("failed to set PCI D0 power state (%d)\n", ret);
-		goto out;
+		return ret;
 	}

 	/*
···
 	 * depend on the device enable refcount we can't anyway depend on them
 	 * disabling/enabling the device.
 	 */
-	if (pci_enable_device(pdev)) {
-		ret = -EIO;
-		goto out;
-	}
+	if (pci_enable_device(pdev))
+		return -EIO;

 	pci_set_master(pdev);
···
 	intel_uncore_sanitize(dev_priv);

-	if (dev_priv->power_domains_suspended)
-		intel_power_domains_init_hw(dev_priv, true);
-	else
-		intel_display_set_init_power(dev_priv, true);
+	intel_power_domains_resume(dev_priv);

 	intel_engines_sanitize(dev_priv);

 	enable_rpm_wakeref_asserts(dev_priv);
-
-out:
-	dev_priv->power_domains_suspended = false;

 	return ret;
 }
···
 	dev_notice(i915->drm.dev, "Resetting chip for %s\n", reason);
 	error->reset_count++;

-	disable_irq(i915->drm.irq);
 	ret = i915_gem_reset_prepare(i915);
 	if (ret) {
 		dev_err(i915->drm.dev, "GPU recovery failed\n");
···
 finish:
 	i915_gem_reset_finish(i915);
-	enable_irq(i915->drm.irq);
-
 wakeup:
 	clear_bit(I915_RESET_HANDOFF, &error->flags);
 	wake_up_bit(&error->flags, I915_RESET_HANDOFF);
···
 		goto out;

 out:
+	intel_engine_cancel_stop_cs(engine);
 	i915_gem_reset_finish_engine(engine);
 	return ret;
 }
+49 -14
drivers/gpu/drm/i915/i915_drv.h
···
 #define DRIVER_NAME		"i915"
 #define DRIVER_DESC		"Intel Graphics"
-#define DRIVER_DATE		"20180719"
-#define DRIVER_TIMESTAMP	1532015279
+#define DRIVER_DATE		"20180906"
+#define DRIVER_TIMESTAMP	1536242083
···
 struct i915_psr {
 	struct mutex lock;
+
+#define I915_PSR_DEBUG_MODE_MASK	0x0f
+#define I915_PSR_DEBUG_DEFAULT		0x00
+#define I915_PSR_DEBUG_DISABLE		0x01
+#define I915_PSR_DEBUG_ENABLE		0x02
+#define I915_PSR_DEBUG_FORCE_PSR1	0x03
+#define I915_PSR_DEBUG_IRQ		0x10
+
+	u32 debug;
 	bool sink_support;
-	struct intel_dp *enabled;
+	bool prepared, enabled;
+	struct intel_dp *dp;
 	bool active;
 	struct work_struct work;
 	unsigned busy_frontbuffer_bits;
···
 	bool alpm;
 	bool psr2_enabled;
 	u8 sink_sync_latency;
-	bool debug;
 	ktime_t last_entry_attempt;
 	ktime_t last_exit;
 };
···
 			   struct i915_power_well *power_well);
 };

+struct i915_power_well_regs {
+	i915_reg_t bios;
+	i915_reg_t driver;
+	i915_reg_t kvmr;
+	i915_reg_t debug;
+};
+
 /* Power well structure for haswell */
-struct i915_power_well {
+struct i915_power_well_desc {
 	const char *name;
 	bool always_on;
-	/* power well enable/disable usage count */
-	int count;
-	/* cached hw enabled state */
-	bool hw_enabled;
 	u64 domains;
 	/* unique identifier for this power well */
 	enum i915_power_well_id id;
···
 	 */
 	union {
 		struct {
+			/*
+			 * request/status flag index in the PUNIT power well
+			 * control/status registers.
+			 */
+			u8 idx;
+		} vlv;
+		struct {
 			enum dpio_phy phy;
 		} bxt;
 		struct {
+			const struct i915_power_well_regs *regs;
+			/*
+			 * request/status flag index in the power well
+			 * control/status registers.
+			 */
+			u8 idx;
 			/* Mask of pipes whose IRQ logic is backed by the pw */
 			u8 irq_pipe_mask;
 			/* The pw is backing the VGA functionality */
···
 	const struct i915_power_well_ops *ops;
 };

+struct i915_power_well {
+	const struct i915_power_well_desc *desc;
+	/* power well enable/disable usage count */
+	int count;
+	/* cached hw enabled state */
+	bool hw_enabled;
+};
+
 struct i915_power_domains {
 	/*
 	 * Power wells needed for initialization at driver init and suspend
 	 * time are on. They are kept on until after the first modeset.
 	 */
-	bool init_power_on;
 	bool initializing;
+	bool display_core_suspended;
 	int power_well_count;

 	struct mutex lock;
···
 	struct mutex gmbus_mutex;

 	/**
-	 * Base address of the gmbus and gpio block.
+	 * Base address of where the gmbus and gpio blocks are located (either
+	 * on PCH or on SoC for platforms without PCH).
 	 */
 	uint32_t gpio_mmio_base;
···
 	struct intel_engine_cs *engine_class[MAX_ENGINE_CLASS + 1]
 					    [MAX_ENGINE_INSTANCE + 1];

-	struct drm_dma_handle *status_page_dmah;
 	struct resource mch_res;

 	/* protects the irq masks */
···
 	struct mutex av_mutex;

 	struct {
+		struct mutex mutex;
 		struct list_head list;
 		struct llist_head free_list;
 		struct work_struct free_work;
···
 #define MAX_CONTEXT_HW_ID (1<<21) /* exclusive */
 #define MAX_GUC_CONTEXT_HW_ID (1 << 20) /* exclusive */
 #define GEN11_MAX_CONTEXT_HW_ID (1<<11) /* exclusive */
+		struct list_head hw_id_list;
 	} contexts;

 	u32 fdi_rx_config;
···
 #define USES_GUC_SUBMISSION(dev_priv)	intel_uc_is_using_guc_submission()
 #define USES_HUC(dev_priv)		intel_uc_is_using_huc()

-#define HAS_RESOURCE_STREAMER(dev_priv) ((dev_priv)->info.has_resource_streamer)
-
 #define HAS_POOLED_EU(dev_priv)	((dev_priv)->info.has_pooled_eu)

 #define INTEL_PCH_DEVICE_ID_MASK		0xff80
···
 extern void intel_irq_fini(struct drm_i915_private *dev_priv);
 int intel_irq_install(struct drm_i915_private *dev_priv);
 void intel_irq_uninstall(struct drm_i915_private *dev_priv);
+
+void i915_clear_error_registers(struct drm_i915_private *dev_priv);

 static inline bool intel_gvt_active(struct drm_i915_private *dev_priv)
 {
+52 -13
drivers/gpu/drm/i915/i915_gem.c
···
 	 * that was!).
 	 */

+	wmb();
+
+	if (INTEL_INFO(dev_priv)->has_coherent_ggtt)
+		return;
+
 	i915_gem_chipset_flush(dev_priv);

 	intel_runtime_pm_get(dev_priv);
···
 	return 0;
 }

-static unsigned int tile_row_pages(struct drm_i915_gem_object *obj)
+static unsigned int tile_row_pages(const struct drm_i915_gem_object *obj)
 {
 	return i915_gem_object_get_tile_row_size(obj) >> PAGE_SHIFT;
 }
···
 static inline struct i915_ggtt_view
-compute_partial_view(struct drm_i915_gem_object *obj,
+compute_partial_view(const struct drm_i915_gem_object *obj,
 		     pgoff_t page_offset,
 		     unsigned int chunk)
 {
···
 	struct drm_device *dev = obj->base.dev;
 	struct drm_i915_private *dev_priv = to_i915(dev);
 	struct i915_ggtt *ggtt = &dev_priv->ggtt;
-	bool write = !!(vmf->flags & FAULT_FLAG_WRITE);
+	bool write = area->vm_flags & VM_WRITE;
 	struct i915_vma *vma;
 	pgoff_t page_offset;
 	int ret;
···
 	gfp_t noreclaim;
 	int ret;

-	/* Assert that the object is not currently in any GPU domain. As it
+	/*
+	 * Assert that the object is not currently in any GPU domain. As it
 	 * wasn't in the GTT, there shouldn't be any way it could have been in
 	 * a GPU cache
 	 */
 	GEM_BUG_ON(obj->read_domains & I915_GEM_GPU_DOMAINS);
 	GEM_BUG_ON(obj->write_domain & I915_GEM_GPU_DOMAINS);
+
+	/*
+	 * If there's no chance of allocating enough pages for the whole
+	 * object, bail early.
+	 */
+	if (page_count > totalram_pages)
+		return -ENOMEM;

 	st = kmalloc(sizeof(*st), GFP_KERNEL);
 	if (st == NULL)
···
 		return -ENOMEM;
 	}

-	/* Get the list of pages out of our struct file. They'll be pinned
+	/*
+	 * Get the list of pages out of our struct file. They'll be pinned
 	 * at this point until we release them.
 	 *
 	 * Fail silently without starting the shrinker
···
 			i915_gem_shrink(dev_priv, 2 * page_count, NULL, *s++);
 			cond_resched();

-			/* We've tried hard to allocate the memory by reaping
+			/*
+			 * We've tried hard to allocate the memory by reaping
 			 * our own buffer, now let the real VM do its job and
 			 * go down in flames if truly OOM.
 			 *
···
 				/* reclaim and warn, but no oom */
 				gfp = mapping_gfp_mask(mapping);

-				/* Our bo are always dirty and so we require
+				/*
+				 * Our bo are always dirty and so we require
 				 * kswapd to reclaim our pages (direct reclaim
 				 * does not effectively begin pageout of our
 				 * buffers on its own). However, direct reclaim
···
 	ret = i915_gem_gtt_prepare_pages(obj, st);
 	if (ret) {
-		/* DMA remapping failed? One possible cause is that
+		/*
+		 * DMA remapping failed? One possible cause is that
 		 * it could not reserve enough large entries, asking
 		 * for PAGE_SIZE chunks instead may be helpful.
 		 */
···
 	sg_free_table(st);
 	kfree(st);

-	/* shmemfs first checks if there is enough memory to allocate the page
+	/*
+	 * shmemfs first checks if there is enough memory to allocate the page
 	 * and reports ENOSPC should there be insufficient, along with the usual
 	 * ENOMEM for a genuine allocation failure.
 	 *
···
 		intel_engine_dump(engine, &p, "%s\n", engine->name);
 	}

-	set_bit(I915_WEDGED, &i915->gpu_error.flags);
-	smp_mb__after_atomic();
+	if (test_and_set_bit(I915_WEDGED, &i915->gpu_error.flags))
+		goto out;

 	/*
 	 * First, stop submission to hw, but do not yet complete requests by
···
 	i915->caps.scheduler = 0;

 	/* Even if the GPU reset fails, it should still stop the engines */
-	intel_gpu_reset(i915, ALL_ENGINES);
+	if (INTEL_GEN(i915) >= 5)
+		intel_gpu_reset(i915, ALL_ENGINES);

 	/*
 	 * Make sure no one is running the old callback before we proceed with
···
 		i915_gem_reset_finish_engine(engine);
 	}

+out:
 	GEM_TRACE("end\n");

 	wake_up_all(&i915->gpu_error.reset_queue);
···
 		timeout = wait_for_timeline(tl, flags, timeout);
 		if (timeout < 0)
 			return timeout;
 	}
+	if (GEM_SHOW_DEBUG() && !timeout) {
+		/* Presume that timeout was non-zero to begin with! */
+		dev_warn(&i915->drm.pdev->dev,
+			 "Missed idle-completion interrupt!\n");
+		GEM_TRACE_DUMP();
+	}

 	err = wait_for_engines(i915);
···
 	i915_gem_cleanup_userptr(dev_priv);

 	if (ret == -EIO) {
+		mutex_lock(&dev_priv->drm.struct_mutex);
+
 		/*
 		 * Allow engine initialisation to fail by marking the GPU as
 		 * wedged. But we only want to do this where the GPU is angry,
···
 				"Failed to initialize GPU, declaring it wedged!\n");
 			i915_gem_set_wedged(dev_priv);
 		}
-		ret = 0;
+
+		/* Minimal basic recovery for KMS */
+		ret = i915_ggtt_enable_hw(dev_priv);
+		i915_gem_restore_gtt_mappings(dev_priv);
+		i915_gem_restore_fences(dev_priv);
+		intel_init_clock_gating(dev_priv);
+
+		mutex_unlock(&dev_priv->drm.struct_mutex);
 	}

 	i915_gem_drain_freed_objects(dev_priv);
···
 void i915_gem_fini(struct drm_i915_private *dev_priv)
 {
 	i915_gem_suspend_late(dev_priv);
+	intel_disable_gt_powersave(dev_priv);

 	/* Flush any outstanding unpin_work. */
 	i915_gem_drain_workqueue(dev_priv);
···
 	i915_gem_cleanup_engines(dev_priv);
 	i915_gem_contexts_fini(dev_priv);
 	mutex_unlock(&dev_priv->drm.struct_mutex);
+
+	intel_cleanup_gt_powersave(dev_priv);

 	intel_uc_fini_misc(dev_priv);
 	i915_gem_cleanup_userptr(dev_priv);
···
 #include "selftests/huge_pages.c"
 #include "selftests/i915_gem_object.c"
 #include "selftests/i915_gem_coherency.c"
+#include "selftests/i915_gem.c"
 #endif
-6
drivers/gpu/drm/i915/i915_gem.h
···
 	tasklet_unlock_wait(t);
 }

-static inline void __tasklet_enable_sync_once(struct tasklet_struct *t)
-{
-	if (atomic_dec_return(&t->count) == 0)
-		tasklet_kill(t);
-}
-
 static inline bool __tasklet_is_enabled(const struct tasklet_struct *t)
 {
 	return !atomic_read(&t->count);
+171 -81
drivers/gpu/drm/i915/i915_gem_context.c
··· 115 115 rcu_read_unlock(); 116 116 } 117 117 118 + static inline int new_hw_id(struct drm_i915_private *i915, gfp_t gfp) 119 + { 120 + unsigned int max; 121 + 122 + lockdep_assert_held(&i915->contexts.mutex); 123 + 124 + if (INTEL_GEN(i915) >= 11) 125 + max = GEN11_MAX_CONTEXT_HW_ID; 126 + else if (USES_GUC_SUBMISSION(i915)) 127 + /* 128 + * When using GuC in proxy submission, GuC consumes the 129 + * highest bit in the context id to indicate proxy submission. 130 + */ 131 + max = MAX_GUC_CONTEXT_HW_ID; 132 + else 133 + max = MAX_CONTEXT_HW_ID; 134 + 135 + return ida_simple_get(&i915->contexts.hw_ida, 0, max, gfp); 136 + } 137 + 138 + static int steal_hw_id(struct drm_i915_private *i915) 139 + { 140 + struct i915_gem_context *ctx, *cn; 141 + LIST_HEAD(pinned); 142 + int id = -ENOSPC; 143 + 144 + lockdep_assert_held(&i915->contexts.mutex); 145 + 146 + list_for_each_entry_safe(ctx, cn, 147 + &i915->contexts.hw_id_list, hw_id_link) { 148 + if (atomic_read(&ctx->hw_id_pin_count)) { 149 + list_move_tail(&ctx->hw_id_link, &pinned); 150 + continue; 151 + } 152 + 153 + GEM_BUG_ON(!ctx->hw_id); /* perma-pinned kernel context */ 154 + list_del_init(&ctx->hw_id_link); 155 + id = ctx->hw_id; 156 + break; 157 + } 158 + 159 + /* 160 + * Remember how far we got up on the last repossesion scan, so the 161 + * list is kept in a "least recently scanned" order. 162 + */ 163 + list_splice_tail(&pinned, &i915->contexts.hw_id_list); 164 + return id; 165 + } 166 + 167 + static int assign_hw_id(struct drm_i915_private *i915, unsigned int *out) 168 + { 169 + int ret; 170 + 171 + lockdep_assert_held(&i915->contexts.mutex); 172 + 173 + /* 174 + * We prefer to steal/stall ourselves and our users over that of the 175 + * entire system. That may be a little unfair to our users, and 176 + * even hurt high priority clients. The choice is whether to oomkill 177 + * something else, or steal a context id. 178 + */ 179 + ret = new_hw_id(i915, GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_NOWARN); 180 + if (unlikely(ret < 0)) { 181 + ret = steal_hw_id(i915); 182 + if (ret < 0) /* once again for the correct errno code */ 183 + ret = new_hw_id(i915, GFP_KERNEL); 184 + if (ret < 0) 185 + return ret; 186 + } 187 + 188 + *out = ret; 189 + return 0; 190 + } 191 + 192 + static void release_hw_id(struct i915_gem_context *ctx) 193 + { 194 + struct drm_i915_private *i915 = ctx->i915; 195 + 196 + if (list_empty(&ctx->hw_id_link)) 197 + return; 198 + 199 + mutex_lock(&i915->contexts.mutex); 200 + if (!list_empty(&ctx->hw_id_link)) { 201 + ida_simple_remove(&i915->contexts.hw_ida, ctx->hw_id); 202 + list_del_init(&ctx->hw_id_link); 203 + } 204 + mutex_unlock(&i915->contexts.mutex); 205 + } 206 + 118 207 static void i915_gem_context_free(struct i915_gem_context *ctx) 119 208 { 120 209 unsigned int n; ··· 211 122 lockdep_assert_held(&ctx->i915->drm.struct_mutex); 212 123 GEM_BUG_ON(!i915_gem_context_is_closed(ctx)); 213 124 125 + release_hw_id(ctx); 214 126 i915_ppgtt_put(ctx->ppgtt); 215 127 216 128 for (n = 0; n < ARRAY_SIZE(ctx->__engine); n++) { ··· 226 136 227 137 list_del(&ctx->link); 228 138 229 - ida_simple_remove(&ctx->i915->contexts.hw_ida, ctx->hw_id); 230 139 kfree_rcu(ctx, rcu); 231 140 } 232 141 ··· 280 191 i915_gem_context_set_closed(ctx); 281 192 282 193 /* 194 + * This context will never again be assinged to HW, so we can 195 + * reuse its ID for the next context. 
196 + */ 197 + release_hw_id(ctx); 198 + 199 + /* 283 200 * The LUT uses the VMA as a backpointer to unref the object, 284 201 * so we need to clear the LUT before we close all the VMA (inside 285 202 * the ppgtt). ··· 296 201 297 202 ctx->file_priv = ERR_PTR(-EBADF); 298 203 i915_gem_context_put(ctx); 299 - } 300 - 301 - static int assign_hw_id(struct drm_i915_private *dev_priv, unsigned *out) 302 - { 303 - int ret; 304 - unsigned int max; 305 - 306 - if (INTEL_GEN(dev_priv) >= 11) { 307 - max = GEN11_MAX_CONTEXT_HW_ID; 308 - } else { 309 - /* 310 - * When using GuC in proxy submission, GuC consumes the 311 - * highest bit in the context id to indicate proxy submission. 312 - */ 313 - if (USES_GUC_SUBMISSION(dev_priv)) 314 - max = MAX_GUC_CONTEXT_HW_ID; 315 - else 316 - max = MAX_CONTEXT_HW_ID; 317 - } 318 - 319 - 320 - ret = ida_simple_get(&dev_priv->contexts.hw_ida, 321 - 0, max, GFP_KERNEL); 322 - if (ret < 0) { 323 - /* Contexts are only released when no longer active. 324 - * Flush any pending retires to hopefully release some 325 - * stale contexts and try again. 326 - */ 327 - i915_retire_requests(dev_priv); 328 - ret = ida_simple_get(&dev_priv->contexts.hw_ida, 329 - 0, max, GFP_KERNEL); 330 - if (ret < 0) 331 - return ret; 332 - } 333 - 334 - *out = ret; 335 - return 0; 336 204 } 337 205 338 206 static u32 default_desc_template(const struct drm_i915_private *i915, ··· 334 276 if (ctx == NULL) 335 277 return ERR_PTR(-ENOMEM); 336 278 337 - ret = assign_hw_id(dev_priv, &ctx->hw_id); 338 - if (ret) { 339 - kfree(ctx); 340 - return ERR_PTR(ret); 341 - } 342 - 343 279 kref_init(&ctx->ref); 344 280 list_add_tail(&ctx->link, &dev_priv->contexts.list); 345 281 ctx->i915 = dev_priv; ··· 347 295 348 296 INIT_RADIX_TREE(&ctx->handles_vma, GFP_KERNEL); 349 297 INIT_LIST_HEAD(&ctx->handles_list); 298 + INIT_LIST_HEAD(&ctx->hw_id_link); 350 299 351 300 /* Default context will never have a file_priv */ 352 301 ret = DEFAULT_CONTEXT_HANDLE; ··· 381 328 ctx->ring_size = 4 * PAGE_SIZE; 382 329 ctx->desc_template = 383 330 default_desc_template(dev_priv, dev_priv->mm.aliasing_ppgtt); 384 - 385 - /* 386 - * GuC requires the ring to be placed in Non-WOPCM memory. If GuC is not 387 - * present or not in use we still need a small bias as ring wraparound 388 - * at offset 0 sometimes hangs. No idea why. 
*/ 390 - if (USES_GUC(dev_priv)) 391 - ctx->ggtt_offset_bias = dev_priv->guc.ggtt_pin_bias; 392 - else 393 - ctx->ggtt_offset_bias = I915_GTT_PAGE_SIZE; 394 331 395 332 return ctx; 396 333 ··· 474 431 return ctx; 475 432 } 476 433 477 - struct i915_gem_context * 478 - i915_gem_context_create_kernel(struct drm_i915_private *i915, int prio) 479 - { 480 - struct i915_gem_context *ctx; 481 - 482 - ctx = i915_gem_create_context(i915, NULL); 483 - if (IS_ERR(ctx)) 484 - return ctx; 485 - 486 - i915_gem_context_clear_bannable(ctx); 487 - ctx->sched.priority = prio; 488 - ctx->ring_size = PAGE_SIZE; 489 - 490 - GEM_BUG_ON(!i915_gem_context_is_kernel(ctx)); 491 - 492 - return ctx; 493 - } 494 - 495 434 static void 496 435 destroy_kernel_context(struct i915_gem_context **ctxp) 497 436 { ··· 485 460 486 461 context_close(ctx); 487 462 i915_gem_context_free(ctx); 463 + } 464 + 465 + struct i915_gem_context * 466 + i915_gem_context_create_kernel(struct drm_i915_private *i915, int prio) 467 + { 468 + struct i915_gem_context *ctx; 469 + int err; 470 + 471 + ctx = i915_gem_create_context(i915, NULL); 472 + if (IS_ERR(ctx)) 473 + return ctx; 474 + 475 + err = i915_gem_context_pin_hw_id(ctx); 476 + if (err) { 477 + destroy_kernel_context(&ctx); 478 + return ERR_PTR(err); 479 + } 480 + 481 + i915_gem_context_clear_bannable(ctx); 482 + ctx->sched.priority = prio; 483 + ctx->ring_size = PAGE_SIZE; 484 + 485 + GEM_BUG_ON(!i915_gem_context_is_kernel(ctx)); 486 + 487 + return ctx; 488 + } 489 + 490 + static void init_contexts(struct drm_i915_private *i915) 491 + { 492 + mutex_init(&i915->contexts.mutex); 493 + INIT_LIST_HEAD(&i915->contexts.list); 494 + 495 + /* Using the simple ida interface, the max is limited by sizeof(int) */ 496 + BUILD_BUG_ON(MAX_CONTEXT_HW_ID > INT_MAX); 497 + BUILD_BUG_ON(GEN11_MAX_CONTEXT_HW_ID > INT_MAX); 498 + ida_init(&i915->contexts.hw_ida); 499 + INIT_LIST_HEAD(&i915->contexts.hw_id_list); 500 + 501 + INIT_WORK(&i915->contexts.free_work, contexts_free_worker); 502 + init_llist_head(&i915->contexts.free_list); 488 503 } 489 504 490 505 static bool needs_preempt_context(struct drm_i915_private *i915) ··· 545 480 if (ret) 546 481 return ret; 547 482 548 - INIT_LIST_HEAD(&dev_priv->contexts.list); 549 - INIT_WORK(&dev_priv->contexts.free_work, contexts_free_worker); 550 - init_llist_head(&dev_priv->contexts.free_list); 551 - 552 - /* Using the simple ida interface, the max is limited by sizeof(int) */ 553 - BUILD_BUG_ON(MAX_CONTEXT_HW_ID > INT_MAX); 554 - BUILD_BUG_ON(GEN11_MAX_CONTEXT_HW_ID > INT_MAX); 555 - ida_init(&dev_priv->contexts.hw_ida); 483 + init_contexts(dev_priv); 556 484 557 485 /* lowest priority; idle task */ 558 486 ctx = i915_gem_context_create_kernel(dev_priv, I915_PRIORITY_MIN); ··· 555 497 } 556 498 /* 557 499 * For easy recognisability, we want the kernel context to be 0 and then 558 - * all user contexts will have non-zero hw_id. 500 + * all user contexts will have non-zero hw_id. Kernel contexts are 501 + * permanently pinned, so that we never suffer a stall and can 502 + * use them from any allocation context (e.g. for evicting other 503 + * contexts and from inside the shrinker). 
559 504 */ 560 505 GEM_BUG_ON(ctx->hw_id); 506 + GEM_BUG_ON(!atomic_read(&ctx->hw_id_pin_count)); 561 507 dev_priv->kernel_context = ctx; 562 508 563 509 /* highest priority; preempting task */ ··· 599 537 destroy_kernel_context(&i915->kernel_context); 600 538 601 539 /* Must free all deferred contexts (via flush_workqueue) first */ 540 + GEM_BUG_ON(!list_empty(&i915->contexts.hw_id_list)); 602 541 ida_destroy(&i915->contexts.hw_ida); 603 542 } 604 543 ··· 1003 940 out: 1004 941 rcu_read_unlock(); 1005 942 return ret; 943 + } 944 + 945 + int __i915_gem_context_pin_hw_id(struct i915_gem_context *ctx) 946 + { 947 + struct drm_i915_private *i915 = ctx->i915; 948 + int err = 0; 949 + 950 + mutex_lock(&i915->contexts.mutex); 951 + 952 + GEM_BUG_ON(i915_gem_context_is_closed(ctx)); 953 + 954 + if (list_empty(&ctx->hw_id_link)) { 955 + GEM_BUG_ON(atomic_read(&ctx->hw_id_pin_count)); 956 + 957 + err = assign_hw_id(i915, &ctx->hw_id); 958 + if (err) 959 + goto out_unlock; 960 + 961 + list_add_tail(&ctx->hw_id_link, &i915->contexts.hw_id_list); 962 + } 963 + 964 + GEM_BUG_ON(atomic_read(&ctx->hw_id_pin_count) == ~0u); 965 + atomic_inc(&ctx->hw_id_pin_count); 966 + 967 + out_unlock: 968 + mutex_unlock(&i915->contexts.mutex); 969 + return err; 1006 970 } 1007 971 1008 972 #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
+23 -3
drivers/gpu/drm/i915/i915_gem_context.h
··· 134 134 * functions like fault reporting, PASID, scheduling. The 135 135 * &drm_i915_private.context_hw_ida is used to assign a unique 136 136 * id for the lifetime of the context. 137 + * 138 + * @hw_id_pin_count: - number of times this context has been pinned 139 + * for use (should be, at most, once per engine). 140 + * 141 + * @hw_id_link: - all contexts with an assigned id are tracked 142 + * for possible repossession. 137 143 */ 138 144 unsigned int hw_id; 145 + atomic_t hw_id_pin_count; 146 + struct list_head hw_id_link; 139 147 140 148 /** 141 149 * @user_handle: userspace identifier ··· 154 146 u32 user_handle; 155 147 156 148 struct i915_sched_attr sched; 157 - 158 - /** ggtt_offset_bias: placement restriction for context objects */ 159 - u32 ggtt_offset_bias; 160 149 161 150 /** engine: per-engine logical HW state */ 162 151 struct intel_context { ··· 260 255 static inline void i915_gem_context_set_force_single_submission(struct i915_gem_context *ctx) 261 256 { 262 257 __set_bit(CONTEXT_FORCE_SINGLE_SUBMISSION, &ctx->flags); 258 + } 259 + 260 + int __i915_gem_context_pin_hw_id(struct i915_gem_context *ctx); 261 + static inline int i915_gem_context_pin_hw_id(struct i915_gem_context *ctx) 262 + { 263 + if (atomic_inc_not_zero(&ctx->hw_id_pin_count)) 264 + return 0; 265 + 266 + return __i915_gem_context_pin_hw_id(ctx); 267 + } 268 + 269 + static inline void i915_gem_context_unpin_hw_id(struct i915_gem_context *ctx) 270 + { 271 + GEM_BUG_ON(atomic_read(&ctx->hw_id_pin_count) == 0u); 272 + atomic_dec(&ctx->hw_id_pin_count); 263 273 } 264 274 265 275 static inline bool i915_gem_context_is_default(const struct i915_gem_context *c)
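The pin/unpin pair above is the heart of the hw_id lifetime change: the id is no longer held for a context's whole life, only while pinned. A minimal sketch of how a caller might use it (illustrative only; the example_* functions are hypothetical and assume the declarations shown in this hunk):

	static int example_pin_context(struct i915_gem_context *ctx)
	{
		int err;

		/* Fast path is a lock-free atomic_inc_not_zero while pinned. */
		err = i915_gem_context_pin_hw_id(ctx);
		if (err)
			return err; /* -ENOSPC only after stealing also failed */

		/* ... pin ring and context state, then submit ... */
		return 0;
	}

	static void example_unpin_context(struct i915_gem_context *ctx)
	{
		/* Once idle, drop the pin so the id becomes stealable again. */
		i915_gem_context_unpin_hw_id(ctx);
	}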
+21 -22
drivers/gpu/drm/i915/i915_gem_execbuffer.c
··· 64 64 #define BATCH_OFFSET_BIAS (256*1024) 65 65 66 66 #define __I915_EXEC_ILLEGAL_FLAGS \ 67 - (__I915_EXEC_UNKNOWN_FLAGS | I915_EXEC_CONSTANTS_MASK) 67 + (__I915_EXEC_UNKNOWN_FLAGS | \ 68 + I915_EXEC_CONSTANTS_MASK | \ 69 + I915_EXEC_RESOURCE_STREAMER) 68 70 69 71 /* Catch emission of unexpected errors for CI! */ 70 72 #if IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM) ··· 735 733 return -ENOENT; 736 734 737 735 eb->ctx = ctx; 738 - eb->vm = ctx->ppgtt ? &ctx->ppgtt->vm : &eb->i915->ggtt.vm; 736 + if (ctx->ppgtt) { 737 + eb->vm = &ctx->ppgtt->vm; 738 + eb->invalid_flags |= EXEC_OBJECT_NEEDS_GTT; 739 + } else { 740 + eb->vm = &eb->i915->ggtt.vm; 741 + } 739 742 740 743 eb->context_flags = 0; 741 744 if (ctx->flags & CONTEXT_NO_ZEROMAP) ··· 1127 1120 u32 *cmd; 1128 1121 int err; 1129 1122 1123 + if (DBG_FORCE_RELOC == FORCE_GPU_RELOC) { 1124 + obj = vma->obj; 1125 + if (obj->cache_dirty & ~obj->cache_coherent) 1126 + i915_gem_clflush_object(obj, 0); 1127 + obj->write_domain = 0; 1128 + } 1129 + 1130 1130 GEM_BUG_ON(vma->obj->write_domain & I915_GEM_DOMAIN_CPU); 1131 1131 1132 1132 obj = i915_gem_batch_pool_get(&eb->engine->batch_pool, PAGE_SIZE); ··· 1498 1484 * can read from this userspace address. 1499 1485 */ 1500 1486 offset = gen8_canonical_addr(offset & ~UPDATE); 1501 - __put_user(offset, 1502 - &urelocs[r-stack].presumed_offset); 1487 + if (unlikely(__put_user(offset, &urelocs[r-stack].presumed_offset))) { 1488 + remain = -EFAULT; 1489 + goto out; 1490 + } 1503 1491 } 1504 1492 } while (r++, --count); 1505 1493 urelocs += ARRAY_SIZE(stack); ··· 1586 1570 1587 1571 relocs = kvmalloc_array(size, 1, GFP_KERNEL); 1588 1572 if (!relocs) { 1589 - kvfree(relocs); 1590 1573 err = -ENOMEM; 1591 1574 goto err; 1592 1575 } ··· 1599 1584 if (__copy_from_user((char *)relocs + copied, 1600 1585 (char __user *)urelocs + copied, 1601 1586 len)) { 1587 + end_user: 1602 1588 kvfree(relocs); 1603 1589 err = -EFAULT; 1604 1590 goto err; ··· 1623 1607 unsafe_put_user(-1, 1624 1608 &urelocs[copied].presumed_offset, 1625 1609 end_user); 1626 - end_user: 1627 1610 user_access_end(); 1628 1611 1629 1612 eb->exec[i].relocs_ptr = (uintptr_t)relocs; ··· 2214 2199 eb.flags = (unsigned int *)(eb.vma + args->buffer_count + 1); 2215 2200 2216 2201 eb.invalid_flags = __EXEC_OBJECT_UNKNOWN_FLAGS; 2217 - if (USES_FULL_PPGTT(eb.i915)) 2218 - eb.invalid_flags |= EXEC_OBJECT_NEEDS_GTT; 2219 2202 reloc_cache_init(&eb.reloc_cache, eb.i915); 2220 2203 2221 2204 eb.buffer_count = args->buffer_count; ··· 2233 2220 eb.engine = eb_select_engine(eb.i915, file, args); 2234 2221 if (!eb.engine) 2235 2222 return -EINVAL; 2236 - 2237 - if (args->flags & I915_EXEC_RESOURCE_STREAMER) { 2238 - if (!HAS_RESOURCE_STREAMER(eb.i915)) { 2239 - DRM_DEBUG("RS is only allowed for Haswell, Gen8 and above\n"); 2240 - return -EINVAL; 2241 - } 2242 - if (eb.engine->id != RCS) { 2243 - DRM_DEBUG("RS is not available on %s\n", 2244 - eb.engine->name); 2245 - return -EINVAL; 2246 - } 2247 - 2248 - eb.batch_flags |= I915_DISPATCH_RS; 2249 - } 2250 2223 2251 2224 if (args->flags & I915_EXEC_FENCE_IN) { 2252 2225 in_fence = sync_file_get_fence(lower_32_bits(args->rsvd2));
+25 -27
drivers/gpu/drm/i915/i915_gem_gtt.c
··· 173 173 return 0; 174 174 } 175 175 176 - /* Early VLV doesn't have this */ 177 - if (IS_VALLEYVIEW(dev_priv) && dev_priv->drm.pdev->revision < 0xb) { 178 - DRM_DEBUG_DRIVER("disabling PPGTT on pre-B3 step VLV\n"); 179 - return 0; 180 - } 176 + if (has_full_48bit_ppgtt) 177 + return 3; 181 178 182 - if (HAS_LOGICAL_RING_CONTEXTS(dev_priv)) { 183 - if (has_full_48bit_ppgtt) 184 - return 3; 185 - 186 - if (has_full_ppgtt) 187 - return 2; 188 - } 179 + if (has_full_ppgtt) 180 + return 2; 189 181 190 182 return 1; 191 183 } ··· 1250 1258 struct i915_page_directory *pd) 1251 1259 { 1252 1260 int i; 1253 - 1254 - if (!px_page(pd)) 1255 - return; 1256 1261 1257 1262 for (i = 0; i < I915_PDES; i++) { 1258 1263 if (pd->page_table[i] != vm->scratch_pt) ··· 2337 2348 return IS_GEN5(dev_priv) && IS_MOBILE(dev_priv) && intel_vtd_active(); 2338 2349 } 2339 2350 2340 - static void gen6_check_and_clear_faults(struct drm_i915_private *dev_priv) 2351 + static void gen6_check_faults(struct drm_i915_private *dev_priv) 2341 2352 { 2342 2353 struct intel_engine_cs *engine; 2343 2354 enum intel_engine_id id; ··· 2355 2366 fault & RING_FAULT_GTTSEL_MASK ? "GGTT" : "PPGTT", 2356 2367 RING_FAULT_SRCID(fault), 2357 2368 RING_FAULT_FAULT_TYPE(fault)); 2358 - I915_WRITE(RING_FAULT_REG(engine), 2359 - fault & ~RING_FAULT_VALID); 2360 2369 } 2361 2370 } 2362 - 2363 - POSTING_READ(RING_FAULT_REG(dev_priv->engine[RCS])); 2364 2371 } 2365 2372 2366 - static void gen8_check_and_clear_faults(struct drm_i915_private *dev_priv) 2373 + static void gen8_check_faults(struct drm_i915_private *dev_priv) 2367 2374 { 2368 2375 u32 fault = I915_READ(GEN8_RING_FAULT_REG); 2369 2376 ··· 2384 2399 GEN8_RING_FAULT_ENGINE_ID(fault), 2385 2400 RING_FAULT_SRCID(fault), 2386 2401 RING_FAULT_FAULT_TYPE(fault)); 2387 - I915_WRITE(GEN8_RING_FAULT_REG, 2388 - fault & ~RING_FAULT_VALID); 2389 2402 } 2390 - 2391 - POSTING_READ(GEN8_RING_FAULT_REG); 2392 2403 } 2393 2404 2394 2405 void i915_check_and_clear_faults(struct drm_i915_private *dev_priv) 2395 2406 { 2396 2407 /* From GEN8 onwards we only have one 'All Engine Fault Register' */ 2397 2408 if (INTEL_GEN(dev_priv) >= 8) 2398 - gen8_check_and_clear_faults(dev_priv); 2409 + gen8_check_faults(dev_priv); 2399 2410 else if (INTEL_GEN(dev_priv) >= 6) 2400 - gen6_check_and_clear_faults(dev_priv); 2411 + gen6_check_faults(dev_priv); 2401 2412 else 2402 2413 return; 2414 + 2415 + i915_clear_error_registers(dev_priv); 2403 2416 } 2404 2417 2405 2418 void i915_gem_suspend_gtt_mappings(struct drm_i915_private *dev_priv) ··· 2919 2936 unsigned long hole_start, hole_end; 2920 2937 struct drm_mm_node *entry; 2921 2938 int ret; 2939 + 2940 + /* 2941 + * GuC requires all resources that we're sharing with it to be placed in 2942 + * non-WOPCM memory. If GuC is not present or not in use we still need a 2943 + * small bias as ring wraparound at offset 0 sometimes hangs. No idea 2944 + * why. 
2945 + */ 2946 + ggtt->pin_bias = max_t(u32, I915_GTT_PAGE_SIZE, 2947 + intel_guc_reserved_gtt_size(&dev_priv->guc)); 2922 2948 2923 2949 ret = intel_vgt_balloon(dev_priv); 2924 2950 if (ret) ··· 3604 3612 mutex_lock(&dev_priv->drm.struct_mutex); 3605 3613 i915_address_space_init(&ggtt->vm, dev_priv); 3606 3614 3615 + ggtt->vm.is_ggtt = true; 3616 + 3607 3617 /* Only VLV supports read-only GGTT mappings */ 3608 3618 ggtt->vm.has_read_only = IS_VALLEYVIEW(dev_priv); 3609 3619 ··· 3656 3662 3657 3663 void i915_ggtt_disable_guc(struct drm_i915_private *i915) 3658 3664 { 3665 + /* XXX Temporary pardon for error unload */ 3666 + if (i915->ggtt.invalidate == gen6_ggtt_invalidate) 3667 + return; 3668 + 3659 3669 /* We should only be called after i915_ggtt_enable_guc() */ 3660 3670 GEM_BUG_ON(i915->ggtt.invalidate != guc_ggtt_invalidate); 3661 3671
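With the bias now computed once and stored in the GGTT instead of per context, a consumer pinning a ring or other GuC-shared object would apply it through PIN_OFFSET_BIAS. A hedged sketch, not taken from this series (vma, i915 and err are assumed to be in scope):

	/* Illustrative only: keep the object above the reserved range. */
	err = i915_vma_pin(vma, 0, 0,
			   PIN_GLOBAL | PIN_OFFSET_BIAS | i915->ggtt.pin_bias);
	if (err)
		return err;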
+10 -13
drivers/gpu/drm/i915/i915_gem_gtt.h
··· 167 167 } plane[2]; 168 168 } __packed; 169 169 170 - static inline void assert_intel_rotation_info_is_packed(void) 171 - { 172 - BUILD_BUG_ON(sizeof(struct intel_rotation_info) != 8*sizeof(unsigned int)); 173 - } 174 - 175 170 struct intel_partial_info { 176 171 u64 offset; 177 172 unsigned int size; 178 173 } __packed; 179 - 180 - static inline void assert_intel_partial_info_is_packed(void) 181 - { 182 - BUILD_BUG_ON(sizeof(struct intel_partial_info) != sizeof(u64) + sizeof(unsigned int)); 183 - } 184 174 185 175 enum i915_ggtt_view_type { 186 176 I915_GGTT_VIEW_NORMAL = 0, ··· 178 188 I915_GGTT_VIEW_PARTIAL = sizeof(struct intel_partial_info), 179 189 }; 180 190 181 - static inline void assert_i915_ggtt_view_type_is_unique(void) 191 + static inline void assert_i915_gem_gtt_types(void) 182 192 { 193 + BUILD_BUG_ON(sizeof(struct intel_rotation_info) != 8*sizeof(unsigned int)); 194 + BUILD_BUG_ON(sizeof(struct intel_partial_info) != sizeof(u64) + sizeof(unsigned int)); 195 + 183 196 /* As we encode the size of each branch inside the union into its type, 184 197 * we have to be careful that each branch has a unique size. 185 198 */ ··· 222 229 }; 223 230 224 231 #define px_base(px) (&(px)->base) 225 - #define px_page(px) (px_base(px)->page) 226 232 #define px_dma(px) (px_base(px)->daddr) 227 233 228 234 struct i915_page_table { ··· 324 332 325 333 struct pagestash free_pages; 326 334 335 + /* Global GTT */ 336 + bool is_ggtt:1; 337 + 327 338 /* Some systems require uncached updates of the page directories */ 328 339 bool pt_kmap_wc:1; 329 340 ··· 360 365 I915_SELFTEST_DECLARE(bool scrub_64K); 361 366 }; 362 367 363 - #define i915_is_ggtt(V) (!(V)->file) 368 + #define i915_is_ggtt(vm) ((vm)->is_ggtt) 364 369 365 370 static inline bool 366 371 i915_vm_is_48bit(const struct i915_address_space *vm) ··· 395 400 bool do_idle_maps; 396 401 397 402 int mtrr; 403 + 404 + u32 pin_bias; 398 405 399 406 struct drm_mm_node error_capture; 400 407 };
+5 -5
drivers/gpu/drm/i915/i915_gem_object.h
··· 421 421 } 422 422 423 423 static inline unsigned int 424 - i915_gem_object_get_tiling(struct drm_i915_gem_object *obj) 424 + i915_gem_object_get_tiling(const struct drm_i915_gem_object *obj) 425 425 { 426 426 return obj->tiling_and_stride & TILING_MASK; 427 427 } 428 428 429 429 static inline bool 430 - i915_gem_object_is_tiled(struct drm_i915_gem_object *obj) 430 + i915_gem_object_is_tiled(const struct drm_i915_gem_object *obj) 431 431 { 432 432 return i915_gem_object_get_tiling(obj) != I915_TILING_NONE; 433 433 } 434 434 435 435 static inline unsigned int 436 - i915_gem_object_get_stride(struct drm_i915_gem_object *obj) 436 + i915_gem_object_get_stride(const struct drm_i915_gem_object *obj) 437 437 { 438 438 return obj->tiling_and_stride & STRIDE_MASK; 439 439 } ··· 446 446 } 447 447 448 448 static inline unsigned int 449 - i915_gem_object_get_tile_height(struct drm_i915_gem_object *obj) 449 + i915_gem_object_get_tile_height(const struct drm_i915_gem_object *obj) 450 450 { 451 451 return i915_gem_tile_height(i915_gem_object_get_tiling(obj)); 452 452 } 453 453 454 454 static inline unsigned int 455 - i915_gem_object_get_tile_row_size(struct drm_i915_gem_object *obj) 455 + i915_gem_object_get_tile_row_size(const struct drm_i915_gem_object *obj) 456 456 { 457 457 return (i915_gem_object_get_stride(obj) * 458 458 i915_gem_object_get_tile_height(obj));
+26 -10
drivers/gpu/drm/i915/i915_irq.c
··· 478 478 void gen6_reset_rps_interrupts(struct drm_i915_private *dev_priv) 479 479 { 480 480 spin_lock_irq(&dev_priv->irq_lock); 481 - gen6_reset_pm_iir(dev_priv, dev_priv->pm_rps_events); 481 + gen6_reset_pm_iir(dev_priv, GEN6_PM_RPS_EVENTS); 482 482 dev_priv->gt_pm.rps.pm_iir = 0; 483 483 spin_unlock_irq(&dev_priv->irq_lock); 484 484 } ··· 516 516 517 517 I915_WRITE(GEN6_PMINTRMSK, gen6_sanitize_rps_pm_mask(dev_priv, ~0u)); 518 518 519 - gen6_disable_pm_irq(dev_priv, dev_priv->pm_rps_events); 519 + gen6_disable_pm_irq(dev_priv, GEN6_PM_RPS_EVENTS); 520 520 521 521 spin_unlock_irq(&dev_priv->irq_lock); 522 522 synchronize_irq(dev_priv->drm.irq); ··· 1534 1534 1535 1535 if (master_ctl & (GEN8_GT_PM_IRQ | GEN8_GT_GUC_IRQ)) { 1536 1536 gt_iir[2] = raw_reg_read(regs, GEN8_GT_IIR(2)); 1537 - if (likely(gt_iir[2] & (i915->pm_rps_events | 1538 - i915->pm_guc_events))) 1539 - raw_reg_write(regs, GEN8_GT_IIR(2), 1540 - gt_iir[2] & (i915->pm_rps_events | 1541 - i915->pm_guc_events)); 1537 + if (likely(gt_iir[2])) 1538 + raw_reg_write(regs, GEN8_GT_IIR(2), gt_iir[2]); 1542 1539 } 1543 1540 1544 1541 if (master_ctl & GEN8_GT_VECS_IRQ) { ··· 3215 3218 kobject_uevent_env(kobj, KOBJ_CHANGE, reset_done_event); 3216 3219 } 3217 3220 3218 - static void i915_clear_error_registers(struct drm_i915_private *dev_priv) 3221 + void i915_clear_error_registers(struct drm_i915_private *dev_priv) 3219 3222 { 3220 3223 u32 eir; 3221 3224 ··· 3237 3240 DRM_DEBUG_DRIVER("EIR stuck: 0x%08x, masking\n", eir); 3238 3241 I915_WRITE(EMR, I915_READ(EMR) | eir); 3239 3242 I915_WRITE(IIR, I915_MASTER_ERROR_INTERRUPT); 3243 + } 3244 + 3245 + if (INTEL_GEN(dev_priv) >= 8) { 3246 + I915_WRITE(GEN8_RING_FAULT_REG, 3247 + I915_READ(GEN8_RING_FAULT_REG) & ~RING_FAULT_VALID); 3248 + POSTING_READ(GEN8_RING_FAULT_REG); 3249 + } else if (INTEL_GEN(dev_priv) >= 6) { 3250 + struct intel_engine_cs *engine; 3251 + enum intel_engine_id id; 3252 + 3253 + for_each_engine(engine, dev_priv, id) { 3254 + I915_WRITE(RING_FAULT_REG(engine), 3255 + I915_READ(RING_FAULT_REG(engine)) & 3256 + ~RING_FAULT_VALID); 3257 + } 3258 + POSTING_READ(RING_FAULT_REG(dev_priv->engine[RCS])); 3240 3259 } 3241 3260 } 3242 3261 ··· 3309 3296 * Try engine reset when available. We fall back to full reset if 3310 3297 * single reset fails. 3311 3298 */ 3312 - if (intel_has_reset_engine(dev_priv)) { 3299 + if (intel_has_reset_engine(dev_priv) && 3300 + !i915_terminally_wedged(&dev_priv->gpu_error)) { 3313 3301 for_each_engine_masked(engine, dev_priv, engine_mask, tmp) { 3314 3302 BUILD_BUG_ON(I915_RESET_MODESET >= I915_RESET_ENGINE); 3315 3303 if (test_and_set_bit(I915_RESET_ENGINE + engine->id, ··· 4795 4781 /* WaGsvRC0ResidencyMethod:vlv */ 4796 4782 dev_priv->pm_rps_events = GEN6_PM_RP_UP_EI_EXPIRED; 4797 4783 else 4798 - dev_priv->pm_rps_events = GEN6_PM_RPS_EVENTS; 4784 + dev_priv->pm_rps_events = (GEN6_PM_RP_UP_THRESHOLD | 4785 + GEN6_PM_RP_DOWN_THRESHOLD | 4786 + GEN6_PM_RP_DOWN_TIMEOUT); 4799 4787 4800 4788 rps->pm_intrmsk_mbz = 0; 4801 4789
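Splitting fault clearing out of the check helpers and exporting i915_clear_error_registers() lets callers read the fault registers before wiping them. The intended ordering is roughly the following (sketch only; the capture call is assumed from elsewhere in the driver, not from this hunk):

	/* Record the faults first, then clear, so the capture sees them. */
	i915_capture_error_state(dev_priv, engine_mask, "GPU fault");
	i915_clear_error_registers(dev_priv);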
+11 -5
drivers/gpu/drm/i915/i915_pci.c
··· 74 74 .unfenced_needs_alignment = 1, \ 75 75 .ring_mask = RENDER_RING, \ 76 76 .has_snoop = true, \ 77 + .has_coherent_ggtt = false, \ 77 78 GEN_DEFAULT_PIPEOFFSETS, \ 78 79 GEN_DEFAULT_PAGE_SIZES, \ 79 80 CURSOR_OFFSETS ··· 111 110 .has_gmch_display = 1, \ 112 111 .ring_mask = RENDER_RING, \ 113 112 .has_snoop = true, \ 113 + .has_coherent_ggtt = true, \ 114 114 GEN_DEFAULT_PIPEOFFSETS, \ 115 115 GEN_DEFAULT_PAGE_SIZES, \ 116 116 CURSOR_OFFSETS ··· 119 117 static const struct intel_device_info intel_i915g_info = { 120 118 GEN3_FEATURES, 121 119 PLATFORM(INTEL_I915G), 120 + .has_coherent_ggtt = false, 122 121 .cursor_needs_physical = 1, 123 122 .has_overlay = 1, .overlay_needs_physical = 1, 124 123 .hws_needs_physical = 1, ··· 181 178 .has_gmch_display = 1, \ 182 179 .ring_mask = RENDER_RING, \ 183 180 .has_snoop = true, \ 181 + .has_coherent_ggtt = true, \ 184 182 GEN_DEFAULT_PIPEOFFSETS, \ 185 183 GEN_DEFAULT_PAGE_SIZES, \ 186 184 CURSOR_OFFSETS ··· 224 220 .has_hotplug = 1, \ 225 221 .ring_mask = RENDER_RING | BSD_RING, \ 226 222 .has_snoop = true, \ 223 + .has_coherent_ggtt = true, \ 227 224 /* ilk does support rc6, but we do not implement [power] contexts */ \ 228 225 .has_rc6 = 0, \ 229 226 GEN_DEFAULT_PIPEOFFSETS, \ ··· 248 243 .has_hotplug = 1, \ 249 244 .has_fbc = 1, \ 250 245 .ring_mask = RENDER_RING | BSD_RING | BLT_RING, \ 246 + .has_coherent_ggtt = true, \ 251 247 .has_llc = 1, \ 252 248 .has_rc6 = 1, \ 253 249 .has_rc6p = 1, \ ··· 293 287 .has_hotplug = 1, \ 294 288 .has_fbc = 1, \ 295 289 .ring_mask = RENDER_RING | BSD_RING | BLT_RING, \ 290 + .has_coherent_ggtt = true, \ 296 291 .has_llc = 1, \ 297 292 .has_rc6 = 1, \ 298 293 .has_rc6p = 1, \ ··· 354 347 .has_aliasing_ppgtt = 1, 355 348 .has_full_ppgtt = 1, 356 349 .has_snoop = true, 350 + .has_coherent_ggtt = false, 357 351 .ring_mask = RENDER_RING | BSD_RING | BLT_RING, 358 352 .display_mmio_offset = VLV_DISPLAY_BASE, 359 353 GEN_DEFAULT_PAGE_SIZES, ··· 368 360 .has_ddi = 1, \ 369 361 .has_fpga_dbg = 1, \ 370 362 .has_psr = 1, \ 371 - .has_resource_streamer = 1, \ 372 363 .has_dp_mst = 1, \ 373 364 .has_rc6p = 0 /* RC6p removed-by HSW */, \ 374 365 .has_runtime_pm = 1 ··· 440 433 .ring_mask = RENDER_RING | BSD_RING | BLT_RING | VEBOX_RING, 441 434 .has_64bit_reloc = 1, 442 435 .has_runtime_pm = 1, 443 - .has_resource_streamer = 1, 444 436 .has_rc6 = 1, 445 437 .has_logical_ring_contexts = 1, 446 438 .has_gmch_display = 1, ··· 447 441 .has_full_ppgtt = 1, 448 442 .has_reset_engine = 1, 449 443 .has_snoop = true, 444 + .has_coherent_ggtt = false, 450 445 .display_mmio_offset = VLV_DISPLAY_BASE, 451 446 GEN_DEFAULT_PAGE_SIZES, 452 447 GEN_CHV_PIPEOFFSETS, ··· 513 506 .has_runtime_pm = 1, \ 514 507 .has_pooled_eu = 0, \ 515 508 .has_csr = 1, \ 516 - .has_resource_streamer = 1, \ 517 509 .has_rc6 = 1, \ 518 510 .has_dp_mst = 1, \ 519 511 .has_logical_ring_contexts = 1, \ ··· 523 517 .has_full_48bit_ppgtt = 1, \ 524 518 .has_reset_engine = 1, \ 525 519 .has_snoop = true, \ 520 + .has_coherent_ggtt = false, \ 526 521 .has_ipc = 1, \ 527 522 GEN9_DEFAULT_PAGE_SIZES, \ 528 523 GEN_DEFAULT_PIPEOFFSETS, \ ··· 587 580 GEN9_FEATURES, \ 588 581 GEN(10), \ 589 582 .ddb_size = 1024, \ 583 + .has_coherent_ggtt = false, \ 590 584 GLK_COLORS 591 585 592 586 static const struct intel_device_info intel_cannonlake_info = { ··· 600 592 GEN10_FEATURES, \ 601 593 GEN(11), \ 602 594 .ddb_size = 2048, \ 603 - .has_csr = 0, \ 604 595 .has_logical_ring_elsq = 1 605 596 606 597 static const struct intel_device_info intel_icelake_11_info = { 607 
598 GEN11_FEATURES, 608 599 PLATFORM(INTEL_ICELAKE), 609 600 .is_alpha_support = 1, 610 - .has_resource_streamer = 0, 611 601 .ring_mask = RENDER_RING | BLT_RING | VEBOX_RING | BSD_RING | BSD3_RING, 612 602 }; 613 603
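The new per-platform has_coherent_ggtt flag is presumably consumed through a HAS_COHERENT_GGTT() wrapper defined alongside the other feature macros in i915_drv.h (not shown in these hunks), so GGTT write paths can flush only on the affected systems. A sketch under that assumption:

	/* Illustrative only: flush after GGTT writes on non-coherent parts. */
	if (!HAS_COHERENT_GGTT(dev_priv))
		i915_gem_chipset_flush(dev_priv); /* assumed flush helper */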
+24 -31
drivers/gpu/drm/i915/i915_perf.c
··· 210 210 #include "i915_oa_cflgt3.h" 211 211 #include "i915_oa_cnl.h" 212 212 #include "i915_oa_icl.h" 213 + #include "intel_lrc_reg.h" 213 214 214 215 /* HW requires this to be a power of two, between 128k and 16M, though driver 215 216 * is currently generally designed assuming the largest 16M size is used such ··· 1339 1338 { 1340 1339 mutex_lock(&i915->drm.struct_mutex); 1341 1340 1342 - i915_gem_object_unpin_map(i915->perf.oa.oa_buffer.vma->obj); 1343 - i915_vma_unpin(i915->perf.oa.oa_buffer.vma); 1344 - i915_gem_object_put(i915->perf.oa.oa_buffer.vma->obj); 1345 - 1346 - i915->perf.oa.oa_buffer.vma = NULL; 1347 - i915->perf.oa.oa_buffer.vaddr = NULL; 1341 + i915_vma_unpin_and_release(&i915->perf.oa.oa_buffer.vma, 1342 + I915_VMA_RELEASE_MAP); 1348 1343 1349 1344 mutex_unlock(&i915->drm.struct_mutex); 1345 + 1346 + i915->perf.oa.oa_buffer.vaddr = NULL; 1350 1347 } 1351 1348 1352 1349 static void i915_oa_stream_destroy(struct i915_perf_stream *stream) ··· 1637 1638 u32 ctx_oactxctrl = dev_priv->perf.oa.ctx_oactxctrl_offset; 1638 1639 u32 ctx_flexeu0 = dev_priv->perf.oa.ctx_flexeu0_offset; 1639 1640 /* The MMIO offsets for Flex EU registers aren't contiguous */ 1640 - u32 flex_mmio[] = { 1641 - i915_mmio_reg_offset(EU_PERF_CNTL0), 1642 - i915_mmio_reg_offset(EU_PERF_CNTL1), 1643 - i915_mmio_reg_offset(EU_PERF_CNTL2), 1644 - i915_mmio_reg_offset(EU_PERF_CNTL3), 1645 - i915_mmio_reg_offset(EU_PERF_CNTL4), 1646 - i915_mmio_reg_offset(EU_PERF_CNTL5), 1647 - i915_mmio_reg_offset(EU_PERF_CNTL6), 1641 + i915_reg_t flex_regs[] = { 1642 + EU_PERF_CNTL0, 1643 + EU_PERF_CNTL1, 1644 + EU_PERF_CNTL2, 1645 + EU_PERF_CNTL3, 1646 + EU_PERF_CNTL4, 1647 + EU_PERF_CNTL5, 1648 + EU_PERF_CNTL6, 1648 1649 }; 1649 1650 int i; 1650 1651 1651 - reg_state[ctx_oactxctrl] = i915_mmio_reg_offset(GEN8_OACTXCONTROL); 1652 - reg_state[ctx_oactxctrl+1] = (dev_priv->perf.oa.period_exponent << 1653 - GEN8_OA_TIMER_PERIOD_SHIFT) | 1654 - (dev_priv->perf.oa.periodic ? 1655 - GEN8_OA_TIMER_ENABLE : 0) | 1656 - GEN8_OA_COUNTER_RESUME; 1652 + CTX_REG(reg_state, ctx_oactxctrl, GEN8_OACTXCONTROL, 1653 + (dev_priv->perf.oa.period_exponent << GEN8_OA_TIMER_PERIOD_SHIFT) | 1654 + (dev_priv->perf.oa.periodic ? GEN8_OA_TIMER_ENABLE : 0) | 1655 + GEN8_OA_COUNTER_RESUME); 1657 1656 1658 - for (i = 0; i < ARRAY_SIZE(flex_mmio); i++) { 1657 + for (i = 0; i < ARRAY_SIZE(flex_regs); i++) { 1659 1658 u32 state_offset = ctx_flexeu0 + i * 2; 1660 - u32 mmio = flex_mmio[i]; 1659 + u32 mmio = i915_mmio_reg_offset(flex_regs[i]); 1661 1660 1662 1661 /* 1663 1662 * This arbitrary default will select the 'EU FPU0 Pipeline ··· 1675 1678 } 1676 1679 } 1677 1680 1678 - reg_state[state_offset] = mmio; 1679 - reg_state[state_offset+1] = value; 1681 + CTX_REG(reg_state, state_offset, flex_regs[i], value); 1680 1682 } 1681 1683 } 1682 1684 ··· 1817 1821 /* Switch away from any user context. */ 1818 1822 ret = gen8_switch_to_updated_kernel_context(dev_priv, oa_config); 1819 1823 if (ret) 1820 - goto out; 1824 + return ret; 1821 1825 1822 1826 /* 1823 1827 * The OA register config is setup through the context image. This image ··· 1836 1840 wait_flags, 1837 1841 MAX_SCHEDULE_TIMEOUT); 1838 1842 if (ret) 1839 - goto out; 1843 + return ret; 1840 1844 1841 1845 /* Update all contexts now that we've stalled the submission. 
*/ 1842 1846 list_for_each_entry(ctx, &dev_priv->contexts.list, link) { ··· 1848 1852 continue; 1849 1853 1850 1854 regs = i915_gem_object_pin_map(ce->state->obj, I915_MAP_WB); 1851 - if (IS_ERR(regs)) { 1852 - ret = PTR_ERR(regs); 1853 - goto out; 1854 - } 1855 + if (IS_ERR(regs)) 1856 + return PTR_ERR(regs); 1855 1857 1856 1858 ce->state->obj->mm.dirty = true; 1857 1859 regs += LRC_STATE_PN * PAGE_SIZE / sizeof(*regs); ··· 1859 1865 i915_gem_object_unpin_map(ce->state->obj); 1860 1866 } 1861 1867 1862 - out: 1863 1868 return ret; 1864 1869 } 1865 1870
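The conversion above leans on the CTX_REG() helper pulled in via the new intel_lrc_reg.h include, which writes an (offset, value) pair into the context image; it is presumably along these lines:

	#define CTX_REG(reg_state, pos, reg, val) do { \
		u32 *reg_state__ = (reg_state); \
		const u32 pos__ = (pos); \
		(reg_state__)[pos__ + 0] = i915_mmio_reg_offset(reg); \
		(reg_state__)[pos__ + 1] = (val); \
	} while (0)

Compared with the open-coded stores it replaces, this keeps the offset and value writes paired and lets registers be passed around as i915_reg_t instead of raw u32 offsets.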
+378 -331
drivers/gpu/drm/i915/i915_reg.h
··· 344 344 #define GEN8_RPCS_S_CNT_ENABLE (1 << 18) 345 345 #define GEN8_RPCS_S_CNT_SHIFT 15 346 346 #define GEN8_RPCS_S_CNT_MASK (0x7 << GEN8_RPCS_S_CNT_SHIFT) 347 + #define GEN11_RPCS_S_CNT_SHIFT 12 348 + #define GEN11_RPCS_S_CNT_MASK (0x3f << GEN11_RPCS_S_CNT_SHIFT) 347 349 #define GEN8_RPCS_SS_CNT_ENABLE (1 << 11) 348 350 #define GEN8_RPCS_SS_CNT_SHIFT 8 349 351 #define GEN8_RPCS_SS_CNT_MASK (0x7 << GEN8_RPCS_SS_CNT_SHIFT) ··· 1031 1029 /* 1032 1030 * i915_power_well_id: 1033 1031 * 1034 - * Platform specific IDs used to look up power wells and - except for custom 1035 - * power wells - to define request/status register flag bit positions. As such 1036 - * the set of IDs on a given platform must be unique and except for custom 1037 - * power wells their value must stay fixed. 1032 + * IDs used to look up power wells. Power wells accessed directly bypassing 1033 + * the power domains framework must be assigned a unique ID. The rest of power 1034 + * wells must be assigned DISP_PW_ID_NONE. 1038 1035 */ 1039 1036 enum i915_power_well_id { 1040 - /* 1041 - * I830 1042 - * - custom power well 1043 - */ 1044 - I830_DISP_PW_PIPES = 0, 1037 + DISP_PW_ID_NONE, 1045 1038 1046 - /* 1047 - * VLV/CHV 1048 - * - PUNIT_REG_PWRGT_CTRL (bit: id*2), 1049 - * PUNIT_REG_PWRGT_STATUS (bit: id*2) (PUNIT HAS v0.8) 1050 - */ 1051 - PUNIT_POWER_WELL_RENDER = 0, 1052 - PUNIT_POWER_WELL_MEDIA = 1, 1053 - PUNIT_POWER_WELL_DISP2D = 3, 1054 - PUNIT_POWER_WELL_DPIO_CMN_BC = 5, 1055 - PUNIT_POWER_WELL_DPIO_TX_B_LANES_01 = 6, 1056 - PUNIT_POWER_WELL_DPIO_TX_B_LANES_23 = 7, 1057 - PUNIT_POWER_WELL_DPIO_TX_C_LANES_01 = 8, 1058 - PUNIT_POWER_WELL_DPIO_TX_C_LANES_23 = 9, 1059 - PUNIT_POWER_WELL_DPIO_RX0 = 10, 1060 - PUNIT_POWER_WELL_DPIO_RX1 = 11, 1061 - PUNIT_POWER_WELL_DPIO_CMN_D = 12, 1062 - /* - custom power well */ 1063 - CHV_DISP_PW_PIPE_A, /* 13 */ 1064 - 1065 - /* 1066 - * HSW/BDW 1067 - * - _HSW_PWR_WELL_CTL1-4 (status bit: id*2, req bit: id*2+1) 1068 - */ 1069 - HSW_DISP_PW_GLOBAL = 15, 1070 - 1071 - /* 1072 - * GEN9+ 1073 - * - _HSW_PWR_WELL_CTL1-4 (status bit: id*2, req bit: id*2+1) 1074 - */ 1075 - SKL_DISP_PW_MISC_IO = 0, 1076 - SKL_DISP_PW_DDI_A_E, 1077 - GLK_DISP_PW_DDI_A = SKL_DISP_PW_DDI_A_E, 1078 - CNL_DISP_PW_DDI_A = SKL_DISP_PW_DDI_A_E, 1079 - SKL_DISP_PW_DDI_B, 1080 - SKL_DISP_PW_DDI_C, 1081 - SKL_DISP_PW_DDI_D, 1082 - CNL_DISP_PW_DDI_F = 6, 1083 - 1084 - GLK_DISP_PW_AUX_A = 8, 1085 - GLK_DISP_PW_AUX_B, 1086 - GLK_DISP_PW_AUX_C, 1087 - CNL_DISP_PW_AUX_A = GLK_DISP_PW_AUX_A, 1088 - CNL_DISP_PW_AUX_B = GLK_DISP_PW_AUX_B, 1089 - CNL_DISP_PW_AUX_C = GLK_DISP_PW_AUX_C, 1090 - CNL_DISP_PW_AUX_D, 1091 - CNL_DISP_PW_AUX_F, 1092 - 1093 - SKL_DISP_PW_1 = 14, 1039 + VLV_DISP_PW_DISP2D, 1040 + BXT_DISP_PW_DPIO_CMN_A, 1041 + VLV_DISP_PW_DPIO_CMN_BC, 1042 + GLK_DISP_PW_DPIO_CMN_C, 1043 + CHV_DISP_PW_DPIO_CMN_D, 1044 + HSW_DISP_PW_GLOBAL, 1045 + SKL_DISP_PW_MISC_IO, 1046 + SKL_DISP_PW_1, 1094 1047 SKL_DISP_PW_2, 1095 - 1096 - /* - custom power wells */ 1097 - BXT_DPIO_CMN_A, 1098 - BXT_DPIO_CMN_BC, 1099 - GLK_DPIO_CMN_C, /* 18 */ 1100 - 1101 - /* 1102 - * GEN11+ 1103 - * - _HSW_PWR_WELL_CTL1-4 1104 - * (status bit: (id&15)*2, req bit:(id&15)*2+1) 1105 - */ 1106 - ICL_DISP_PW_1 = 0, 1107 - ICL_DISP_PW_2, 1108 - ICL_DISP_PW_3, 1109 - ICL_DISP_PW_4, 1110 - 1111 - /* 1112 - * - _HSW_PWR_WELL_CTL_AUX1/2/4 1113 - * (status bit: (id&15)*2, req bit:(id&15)*2+1) 1114 - */ 1115 - ICL_DISP_PW_AUX_A = 16, 1116 - ICL_DISP_PW_AUX_B, 1117 - ICL_DISP_PW_AUX_C, 1118 - ICL_DISP_PW_AUX_D, 1119 - ICL_DISP_PW_AUX_E, 1120 - 
ICL_DISP_PW_AUX_F, 1121 - 1122 - ICL_DISP_PW_AUX_TBT1 = 24, 1123 - ICL_DISP_PW_AUX_TBT2, 1124 - ICL_DISP_PW_AUX_TBT3, 1125 - ICL_DISP_PW_AUX_TBT4, 1126 - 1127 - /* 1128 - * - _HSW_PWR_WELL_CTL_DDI1/2/4 1129 - * (status bit: (id&15)*2, req bit:(id&15)*2+1) 1130 - */ 1131 - ICL_DISP_PW_DDI_A = 32, 1132 - ICL_DISP_PW_DDI_B, 1133 - ICL_DISP_PW_DDI_C, 1134 - ICL_DISP_PW_DDI_D, 1135 - ICL_DISP_PW_DDI_E, 1136 - ICL_DISP_PW_DDI_F, /* 37 */ 1137 - 1138 - /* 1139 - * Multiple platforms. 1140 - * Must start following the highest ID of any platform. 1141 - * - custom power wells 1142 - */ 1143 - SKL_DISP_PW_DC_OFF = 38, 1144 - I915_DISP_PW_ALWAYS_ON, 1145 1048 }; 1146 1049 1147 1050 #define PUNIT_REG_PWRGT_CTRL 0x60 1148 1051 #define PUNIT_REG_PWRGT_STATUS 0x61 1149 - #define PUNIT_PWRGT_MASK(power_well) (3 << ((power_well) * 2)) 1150 - #define PUNIT_PWRGT_PWR_ON(power_well) (0 << ((power_well) * 2)) 1151 - #define PUNIT_PWRGT_CLK_GATE(power_well) (1 << ((power_well) * 2)) 1152 - #define PUNIT_PWRGT_RESET(power_well) (2 << ((power_well) * 2)) 1153 - #define PUNIT_PWRGT_PWR_GATE(power_well) (3 << ((power_well) * 2)) 1052 + #define PUNIT_PWRGT_MASK(pw_idx) (3 << ((pw_idx) * 2)) 1053 + #define PUNIT_PWRGT_PWR_ON(pw_idx) (0 << ((pw_idx) * 2)) 1054 + #define PUNIT_PWRGT_CLK_GATE(pw_idx) (1 << ((pw_idx) * 2)) 1055 + #define PUNIT_PWRGT_RESET(pw_idx) (2 << ((pw_idx) * 2)) 1056 + #define PUNIT_PWRGT_PWR_GATE(pw_idx) (3 << ((pw_idx) * 2)) 1057 + 1058 + #define PUNIT_PWGT_IDX_RENDER 0 1059 + #define PUNIT_PWGT_IDX_MEDIA 1 1060 + #define PUNIT_PWGT_IDX_DISP2D 3 1061 + #define PUNIT_PWGT_IDX_DPIO_CMN_BC 5 1062 + #define PUNIT_PWGT_IDX_DPIO_TX_B_LANES_01 6 1063 + #define PUNIT_PWGT_IDX_DPIO_TX_B_LANES_23 7 1064 + #define PUNIT_PWGT_IDX_DPIO_TX_C_LANES_01 8 1065 + #define PUNIT_PWGT_IDX_DPIO_TX_C_LANES_23 9 1066 + #define PUNIT_PWGT_IDX_DPIO_RX0 10 1067 + #define PUNIT_PWGT_IDX_DPIO_RX1 11 1068 + #define PUNIT_PWGT_IDX_DPIO_CMN_D 12 1154 1069 1155 1070 #define PUNIT_REG_GPU_LFM 0xd3 1156 1071 #define PUNIT_REG_GPU_FREQ_REQ 0xd4 ··· 1851 1932 #define N_SCALAR(x) ((x) << 24) 1852 1933 #define N_SCALAR_MASK (0x7F << 24) 1853 1934 1854 - #define _ICL_MG_PHY_PORT_LN(port, ln, ln0p1, ln0p2, ln1p1) \ 1935 + #define MG_PHY_PORT_LN(port, ln, ln0p1, ln0p2, ln1p1) \ 1855 1936 _MMIO(_PORT((port) - PORT_C, ln0p1, ln0p2) + (ln) * ((ln1p1) - (ln0p1))) 1856 1937 1857 - #define _ICL_MG_TX_LINK_PARAMS_TX1LN0_PORT1 0x16812C 1858 - #define _ICL_MG_TX_LINK_PARAMS_TX1LN1_PORT1 0x16852C 1859 - #define _ICL_MG_TX_LINK_PARAMS_TX1LN0_PORT2 0x16912C 1860 - #define _ICL_MG_TX_LINK_PARAMS_TX1LN1_PORT2 0x16952C 1861 - #define _ICL_MG_TX_LINK_PARAMS_TX1LN0_PORT3 0x16A12C 1862 - #define _ICL_MG_TX_LINK_PARAMS_TX1LN1_PORT3 0x16A52C 1863 - #define _ICL_MG_TX_LINK_PARAMS_TX1LN0_PORT4 0x16B12C 1864 - #define _ICL_MG_TX_LINK_PARAMS_TX1LN1_PORT4 0x16B52C 1865 - #define ICL_PORT_MG_TX1_LINK_PARAMS(port, ln) \ 1866 - _ICL_MG_PHY_PORT_LN(port, ln, _ICL_MG_TX_LINK_PARAMS_TX1LN0_PORT1, \ 1867 - _ICL_MG_TX_LINK_PARAMS_TX1LN0_PORT2, \ 1868 - _ICL_MG_TX_LINK_PARAMS_TX1LN1_PORT1) 1938 + #define MG_TX_LINK_PARAMS_TX1LN0_PORT1 0x16812C 1939 + #define MG_TX_LINK_PARAMS_TX1LN1_PORT1 0x16852C 1940 + #define MG_TX_LINK_PARAMS_TX1LN0_PORT2 0x16912C 1941 + #define MG_TX_LINK_PARAMS_TX1LN1_PORT2 0x16952C 1942 + #define MG_TX_LINK_PARAMS_TX1LN0_PORT3 0x16A12C 1943 + #define MG_TX_LINK_PARAMS_TX1LN1_PORT3 0x16A52C 1944 + #define MG_TX_LINK_PARAMS_TX1LN0_PORT4 0x16B12C 1945 + #define MG_TX_LINK_PARAMS_TX1LN1_PORT4 0x16B52C 1946 + #define MG_TX1_LINK_PARAMS(port, ln) \ 1947 
+ MG_PHY_PORT_LN(port, ln, MG_TX_LINK_PARAMS_TX1LN0_PORT1, \ 1948 + MG_TX_LINK_PARAMS_TX1LN0_PORT2, \ 1949 + MG_TX_LINK_PARAMS_TX1LN1_PORT1) 1869 1950 1870 - #define _ICL_MG_TX_LINK_PARAMS_TX2LN0_PORT1 0x1680AC 1871 - #define _ICL_MG_TX_LINK_PARAMS_TX2LN1_PORT1 0x1684AC 1872 - #define _ICL_MG_TX_LINK_PARAMS_TX2LN0_PORT2 0x1690AC 1873 - #define _ICL_MG_TX_LINK_PARAMS_TX2LN1_PORT2 0x1694AC 1874 - #define _ICL_MG_TX_LINK_PARAMS_TX2LN0_PORT3 0x16A0AC 1875 - #define _ICL_MG_TX_LINK_PARAMS_TX2LN1_PORT3 0x16A4AC 1876 - #define _ICL_MG_TX_LINK_PARAMS_TX2LN0_PORT4 0x16B0AC 1877 - #define _ICL_MG_TX_LINK_PARAMS_TX2LN1_PORT4 0x16B4AC 1878 - #define ICL_PORT_MG_TX2_LINK_PARAMS(port, ln) \ 1879 - _ICL_MG_PHY_PORT_LN(port, ln, _ICL_MG_TX_LINK_PARAMS_TX2LN0_PORT1, \ 1880 - _ICL_MG_TX_LINK_PARAMS_TX2LN0_PORT2, \ 1881 - _ICL_MG_TX_LINK_PARAMS_TX2LN1_PORT1) 1882 - #define CRI_USE_FS32 (1 << 5) 1951 + #define MG_TX_LINK_PARAMS_TX2LN0_PORT1 0x1680AC 1952 + #define MG_TX_LINK_PARAMS_TX2LN1_PORT1 0x1684AC 1953 + #define MG_TX_LINK_PARAMS_TX2LN0_PORT2 0x1690AC 1954 + #define MG_TX_LINK_PARAMS_TX2LN1_PORT2 0x1694AC 1955 + #define MG_TX_LINK_PARAMS_TX2LN0_PORT3 0x16A0AC 1956 + #define MG_TX_LINK_PARAMS_TX2LN1_PORT3 0x16A4AC 1957 + #define MG_TX_LINK_PARAMS_TX2LN0_PORT4 0x16B0AC 1958 + #define MG_TX_LINK_PARAMS_TX2LN1_PORT4 0x16B4AC 1959 + #define MG_TX2_LINK_PARAMS(port, ln) \ 1960 + MG_PHY_PORT_LN(port, ln, MG_TX_LINK_PARAMS_TX2LN0_PORT1, \ 1961 + MG_TX_LINK_PARAMS_TX2LN0_PORT2, \ 1962 + MG_TX_LINK_PARAMS_TX2LN1_PORT1) 1963 + #define CRI_USE_FS32 (1 << 5) 1883 1964 1884 - #define _ICL_MG_TX_PISO_READLOAD_TX1LN0_PORT1 0x16814C 1885 - #define _ICL_MG_TX_PISO_READLOAD_TX1LN1_PORT1 0x16854C 1886 - #define _ICL_MG_TX_PISO_READLOAD_TX1LN0_PORT2 0x16914C 1887 - #define _ICL_MG_TX_PISO_READLOAD_TX1LN1_PORT2 0x16954C 1888 - #define _ICL_MG_TX_PISO_READLOAD_TX1LN0_PORT3 0x16A14C 1889 - #define _ICL_MG_TX_PISO_READLOAD_TX1LN1_PORT3 0x16A54C 1890 - #define _ICL_MG_TX_PISO_READLOAD_TX1LN0_PORT4 0x16B14C 1891 - #define _ICL_MG_TX_PISO_READLOAD_TX1LN1_PORT4 0x16B54C 1892 - #define ICL_PORT_MG_TX1_PISO_READLOAD(port, ln) \ 1893 - _ICL_MG_PHY_PORT_LN(port, ln, _ICL_MG_TX_PISO_READLOAD_TX1LN0_PORT1, \ 1894 - _ICL_MG_TX_PISO_READLOAD_TX1LN0_PORT2, \ 1895 - _ICL_MG_TX_PISO_READLOAD_TX1LN1_PORT1) 1965 + #define MG_TX_PISO_READLOAD_TX1LN0_PORT1 0x16814C 1966 + #define MG_TX_PISO_READLOAD_TX1LN1_PORT1 0x16854C 1967 + #define MG_TX_PISO_READLOAD_TX1LN0_PORT2 0x16914C 1968 + #define MG_TX_PISO_READLOAD_TX1LN1_PORT2 0x16954C 1969 + #define MG_TX_PISO_READLOAD_TX1LN0_PORT3 0x16A14C 1970 + #define MG_TX_PISO_READLOAD_TX1LN1_PORT3 0x16A54C 1971 + #define MG_TX_PISO_READLOAD_TX1LN0_PORT4 0x16B14C 1972 + #define MG_TX_PISO_READLOAD_TX1LN1_PORT4 0x16B54C 1973 + #define MG_TX1_PISO_READLOAD(port, ln) \ 1974 + MG_PHY_PORT_LN(port, ln, MG_TX_PISO_READLOAD_TX1LN0_PORT1, \ 1975 + MG_TX_PISO_READLOAD_TX1LN0_PORT2, \ 1976 + MG_TX_PISO_READLOAD_TX1LN1_PORT1) 1896 1977 1897 - #define _ICL_MG_TX_PISO_READLOAD_TX2LN0_PORT1 0x1680CC 1898 - #define _ICL_MG_TX_PISO_READLOAD_TX2LN1_PORT1 0x1684CC 1899 - #define _ICL_MG_TX_PISO_READLOAD_TX2LN0_PORT2 0x1690CC 1900 - #define _ICL_MG_TX_PISO_READLOAD_TX2LN1_PORT2 0x1694CC 1901 - #define _ICL_MG_TX_PISO_READLOAD_TX2LN0_PORT3 0x16A0CC 1902 - #define _ICL_MG_TX_PISO_READLOAD_TX2LN1_PORT3 0x16A4CC 1903 - #define _ICL_MG_TX_PISO_READLOAD_TX2LN0_PORT4 0x16B0CC 1904 - #define _ICL_MG_TX_PISO_READLOAD_TX2LN1_PORT4 0x16B4CC 1905 - #define ICL_PORT_MG_TX2_PISO_READLOAD(port, ln) \ 1906 - _ICL_MG_PHY_PORT_LN(port, ln, 
_ICL_MG_TX_PISO_READLOAD_TX2LN0_PORT1, \ 1907 - _ICL_MG_TX_PISO_READLOAD_TX2LN0_PORT2, \ 1908 - _ICL_MG_TX_PISO_READLOAD_TX2LN1_PORT1) 1909 - #define CRI_CALCINIT (1 << 1) 1978 + #define MG_TX_PISO_READLOAD_TX2LN0_PORT1 0x1680CC 1979 + #define MG_TX_PISO_READLOAD_TX2LN1_PORT1 0x1684CC 1980 + #define MG_TX_PISO_READLOAD_TX2LN0_PORT2 0x1690CC 1981 + #define MG_TX_PISO_READLOAD_TX2LN1_PORT2 0x1694CC 1982 + #define MG_TX_PISO_READLOAD_TX2LN0_PORT3 0x16A0CC 1983 + #define MG_TX_PISO_READLOAD_TX2LN1_PORT3 0x16A4CC 1984 + #define MG_TX_PISO_READLOAD_TX2LN0_PORT4 0x16B0CC 1985 + #define MG_TX_PISO_READLOAD_TX2LN1_PORT4 0x16B4CC 1986 + #define MG_TX2_PISO_READLOAD(port, ln) \ 1987 + MG_PHY_PORT_LN(port, ln, MG_TX_PISO_READLOAD_TX2LN0_PORT1, \ 1988 + MG_TX_PISO_READLOAD_TX2LN0_PORT2, \ 1989 + MG_TX_PISO_READLOAD_TX2LN1_PORT1) 1990 + #define CRI_CALCINIT (1 << 1) 1910 1991 1911 - #define _ICL_MG_TX_SWINGCTRL_TX1LN0_PORT1 0x168148 1912 - #define _ICL_MG_TX_SWINGCTRL_TX1LN1_PORT1 0x168548 1913 - #define _ICL_MG_TX_SWINGCTRL_TX1LN0_PORT2 0x169148 1914 - #define _ICL_MG_TX_SWINGCTRL_TX1LN1_PORT2 0x169548 1915 - #define _ICL_MG_TX_SWINGCTRL_TX1LN0_PORT3 0x16A148 1916 - #define _ICL_MG_TX_SWINGCTRL_TX1LN1_PORT3 0x16A548 1917 - #define _ICL_MG_TX_SWINGCTRL_TX1LN0_PORT4 0x16B148 1918 - #define _ICL_MG_TX_SWINGCTRL_TX1LN1_PORT4 0x16B548 1919 - #define ICL_PORT_MG_TX1_SWINGCTRL(port, ln) \ 1920 - _ICL_MG_PHY_PORT_LN(port, ln, _ICL_MG_TX_SWINGCTRL_TX1LN0_PORT1, \ 1921 - _ICL_MG_TX_SWINGCTRL_TX1LN0_PORT2, \ 1922 - _ICL_MG_TX_SWINGCTRL_TX1LN1_PORT1) 1992 + #define MG_TX_SWINGCTRL_TX1LN0_PORT1 0x168148 1993 + #define MG_TX_SWINGCTRL_TX1LN1_PORT1 0x168548 1994 + #define MG_TX_SWINGCTRL_TX1LN0_PORT2 0x169148 1995 + #define MG_TX_SWINGCTRL_TX1LN1_PORT2 0x169548 1996 + #define MG_TX_SWINGCTRL_TX1LN0_PORT3 0x16A148 1997 + #define MG_TX_SWINGCTRL_TX1LN1_PORT3 0x16A548 1998 + #define MG_TX_SWINGCTRL_TX1LN0_PORT4 0x16B148 1999 + #define MG_TX_SWINGCTRL_TX1LN1_PORT4 0x16B548 2000 + #define MG_TX1_SWINGCTRL(port, ln) \ 2001 + MG_PHY_PORT_LN(port, ln, MG_TX_SWINGCTRL_TX1LN0_PORT1, \ 2002 + MG_TX_SWINGCTRL_TX1LN0_PORT2, \ 2003 + MG_TX_SWINGCTRL_TX1LN1_PORT1) 1923 2004 1924 - #define _ICL_MG_TX_SWINGCTRL_TX2LN0_PORT1 0x1680C8 1925 - #define _ICL_MG_TX_SWINGCTRL_TX2LN1_PORT1 0x1684C8 1926 - #define _ICL_MG_TX_SWINGCTRL_TX2LN0_PORT2 0x1690C8 1927 - #define _ICL_MG_TX_SWINGCTRL_TX2LN1_PORT2 0x1694C8 1928 - #define _ICL_MG_TX_SWINGCTRL_TX2LN0_PORT3 0x16A0C8 1929 - #define _ICL_MG_TX_SWINGCTRL_TX2LN1_PORT3 0x16A4C8 1930 - #define _ICL_MG_TX_SWINGCTRL_TX2LN0_PORT4 0x16B0C8 1931 - #define _ICL_MG_TX_SWINGCTRL_TX2LN1_PORT4 0x16B4C8 1932 - #define ICL_PORT_MG_TX2_SWINGCTRL(port, ln) \ 1933 - _ICL_MG_PHY_PORT_LN(port, ln, _ICL_MG_TX_SWINGCTRL_TX2LN0_PORT1, \ 1934 - _ICL_MG_TX_SWINGCTRL_TX2LN0_PORT2, \ 1935 - _ICL_MG_TX_SWINGCTRL_TX2LN1_PORT1) 1936 - #define CRI_TXDEEMPH_OVERRIDE_17_12(x) ((x) << 0) 1937 - #define CRI_TXDEEMPH_OVERRIDE_17_12_MASK (0x3F << 0) 2005 + #define MG_TX_SWINGCTRL_TX2LN0_PORT1 0x1680C8 2006 + #define MG_TX_SWINGCTRL_TX2LN1_PORT1 0x1684C8 2007 + #define MG_TX_SWINGCTRL_TX2LN0_PORT2 0x1690C8 2008 + #define MG_TX_SWINGCTRL_TX2LN1_PORT2 0x1694C8 2009 + #define MG_TX_SWINGCTRL_TX2LN0_PORT3 0x16A0C8 2010 + #define MG_TX_SWINGCTRL_TX2LN1_PORT3 0x16A4C8 2011 + #define MG_TX_SWINGCTRL_TX2LN0_PORT4 0x16B0C8 2012 + #define MG_TX_SWINGCTRL_TX2LN1_PORT4 0x16B4C8 2013 + #define MG_TX2_SWINGCTRL(port, ln) \ 2014 + MG_PHY_PORT_LN(port, ln, MG_TX_SWINGCTRL_TX2LN0_PORT1, \ 2015 + MG_TX_SWINGCTRL_TX2LN0_PORT2, \ 2016 + 
MG_TX_SWINGCTRL_TX2LN1_PORT1) 2017 + #define CRI_TXDEEMPH_OVERRIDE_17_12(x) ((x) << 0) 2018 + #define CRI_TXDEEMPH_OVERRIDE_17_12_MASK (0x3F << 0) 1938 2019 1939 - #define _ICL_MG_TX_DRVCTRL_TX1LN0_PORT1 0x168144 1940 - #define _ICL_MG_TX_DRVCTRL_TX1LN1_PORT1 0x168544 1941 - #define _ICL_MG_TX_DRVCTRL_TX1LN0_PORT2 0x169144 1942 - #define _ICL_MG_TX_DRVCTRL_TX1LN1_PORT2 0x169544 1943 - #define _ICL_MG_TX_DRVCTRL_TX1LN0_PORT3 0x16A144 1944 - #define _ICL_MG_TX_DRVCTRL_TX1LN1_PORT3 0x16A544 1945 - #define _ICL_MG_TX_DRVCTRL_TX1LN0_PORT4 0x16B144 1946 - #define _ICL_MG_TX_DRVCTRL_TX1LN1_PORT4 0x16B544 1947 - #define ICL_PORT_MG_TX1_DRVCTRL(port, ln) \ 1948 - _ICL_MG_PHY_PORT_LN(port, ln, _ICL_MG_TX_DRVCTRL_TX1LN0_PORT1, \ 1949 - _ICL_MG_TX_DRVCTRL_TX1LN0_PORT2, \ 1950 - _ICL_MG_TX_DRVCTRL_TX1LN1_PORT1) 2020 + #define MG_TX_DRVCTRL_TX1LN0_TXPORT1 0x168144 2021 + #define MG_TX_DRVCTRL_TX1LN1_TXPORT1 0x168544 2022 + #define MG_TX_DRVCTRL_TX1LN0_TXPORT2 0x169144 2023 + #define MG_TX_DRVCTRL_TX1LN1_TXPORT2 0x169544 2024 + #define MG_TX_DRVCTRL_TX1LN0_TXPORT3 0x16A144 2025 + #define MG_TX_DRVCTRL_TX1LN1_TXPORT3 0x16A544 2026 + #define MG_TX_DRVCTRL_TX1LN0_TXPORT4 0x16B144 2027 + #define MG_TX_DRVCTRL_TX1LN1_TXPORT4 0x16B544 2028 + #define MG_TX1_DRVCTRL(port, ln) \ 2029 + MG_PHY_PORT_LN(port, ln, MG_TX_DRVCTRL_TX1LN0_TXPORT1, \ 2030 + MG_TX_DRVCTRL_TX1LN0_TXPORT2, \ 2031 + MG_TX_DRVCTRL_TX1LN1_TXPORT1) 1951 2032 1952 - #define _ICL_MG_TX_DRVCTRL_TX2LN0_PORT1 0x1680C4 1953 - #define _ICL_MG_TX_DRVCTRL_TX2LN1_PORT1 0x1684C4 1954 - #define _ICL_MG_TX_DRVCTRL_TX2LN0_PORT2 0x1690C4 1955 - #define _ICL_MG_TX_DRVCTRL_TX2LN1_PORT2 0x1694C4 1956 - #define _ICL_MG_TX_DRVCTRL_TX2LN0_PORT3 0x16A0C4 1957 - #define _ICL_MG_TX_DRVCTRL_TX2LN1_PORT3 0x16A4C4 1958 - #define _ICL_MG_TX_DRVCTRL_TX2LN0_PORT4 0x16B0C4 1959 - #define _ICL_MG_TX_DRVCTRL_TX2LN1_PORT4 0x16B4C4 1960 - #define ICL_PORT_MG_TX2_DRVCTRL(port, ln) \ 1961 - _ICL_MG_PHY_PORT_LN(port, ln, _ICL_MG_TX_DRVCTRL_TX2LN0_PORT1, \ 1962 - _ICL_MG_TX_DRVCTRL_TX2LN0_PORT2, \ 1963 - _ICL_MG_TX_DRVCTRL_TX2LN1_PORT1) 1964 - #define CRI_TXDEEMPH_OVERRIDE_11_6(x) ((x) << 24) 1965 - #define CRI_TXDEEMPH_OVERRIDE_11_6_MASK (0x3F << 24) 1966 - #define CRI_TXDEEMPH_OVERRIDE_EN (1 << 22) 1967 - #define CRI_TXDEEMPH_OVERRIDE_5_0(x) ((x) << 16) 1968 - #define CRI_TXDEEMPH_OVERRIDE_5_0_MASK (0x3F << 16) 2033 + #define MG_TX_DRVCTRL_TX2LN0_PORT1 0x1680C4 2034 + #define MG_TX_DRVCTRL_TX2LN1_PORT1 0x1684C4 2035 + #define MG_TX_DRVCTRL_TX2LN0_PORT2 0x1690C4 2036 + #define MG_TX_DRVCTRL_TX2LN1_PORT2 0x1694C4 2037 + #define MG_TX_DRVCTRL_TX2LN0_PORT3 0x16A0C4 2038 + #define MG_TX_DRVCTRL_TX2LN1_PORT3 0x16A4C4 2039 + #define MG_TX_DRVCTRL_TX2LN0_PORT4 0x16B0C4 2040 + #define MG_TX_DRVCTRL_TX2LN1_PORT4 0x16B4C4 2041 + #define MG_TX2_DRVCTRL(port, ln) \ 2042 + MG_PHY_PORT_LN(port, ln, MG_TX_DRVCTRL_TX2LN0_PORT1, \ 2043 + MG_TX_DRVCTRL_TX2LN0_PORT2, \ 2044 + MG_TX_DRVCTRL_TX2LN1_PORT1) 2045 + #define CRI_TXDEEMPH_OVERRIDE_11_6(x) ((x) << 24) 2046 + #define CRI_TXDEEMPH_OVERRIDE_11_6_MASK (0x3F << 24) 2047 + #define CRI_TXDEEMPH_OVERRIDE_EN (1 << 22) 2048 + #define CRI_TXDEEMPH_OVERRIDE_5_0(x) ((x) << 16) 2049 + #define CRI_TXDEEMPH_OVERRIDE_5_0_MASK (0x3F << 16) 2050 + #define CRI_LOADGEN_SEL(x) ((x) << 12) 2051 + #define CRI_LOADGEN_SEL_MASK (0x3 << 12) 2052 + 2053 + #define MG_CLKHUB_LN0_PORT1 0x16839C 2054 + #define MG_CLKHUB_LN1_PORT1 0x16879C 2055 + #define MG_CLKHUB_LN0_PORT2 0x16939C 2056 + #define MG_CLKHUB_LN1_PORT2 0x16979C 2057 + #define MG_CLKHUB_LN0_PORT3 0x16A39C 2058 + 
#define MG_CLKHUB_LN1_PORT3 0x16A79C 2059 + #define MG_CLKHUB_LN0_PORT4 0x16B39C 2060 + #define MG_CLKHUB_LN1_PORT4 0x16B79C 2061 + #define MG_CLKHUB(port, ln) \ 2062 + MG_PHY_PORT_LN(port, ln, MG_CLKHUB_LN0_PORT1, \ 2063 + MG_CLKHUB_LN0_PORT2, \ 2064 + MG_CLKHUB_LN1_PORT1) 2065 + #define CFG_LOW_RATE_LKREN_EN (1 << 11) 2066 + 2067 + #define MG_TX_DCC_TX1LN0_PORT1 0x168110 2068 + #define MG_TX_DCC_TX1LN1_PORT1 0x168510 2069 + #define MG_TX_DCC_TX1LN0_PORT2 0x169110 2070 + #define MG_TX_DCC_TX1LN1_PORT2 0x169510 2071 + #define MG_TX_DCC_TX1LN0_PORT3 0x16A110 2072 + #define MG_TX_DCC_TX1LN1_PORT3 0x16A510 2073 + #define MG_TX_DCC_TX1LN0_PORT4 0x16B110 2074 + #define MG_TX_DCC_TX1LN1_PORT4 0x16B510 2075 + #define MG_TX1_DCC(port, ln) \ 2076 + MG_PHY_PORT_LN(port, ln, MG_TX_DCC_TX1LN0_PORT1, \ 2077 + MG_TX_DCC_TX1LN0_PORT2, \ 2078 + MG_TX_DCC_TX1LN1_PORT1) 2079 + #define MG_TX_DCC_TX2LN0_PORT1 0x168090 2080 + #define MG_TX_DCC_TX2LN1_PORT1 0x168490 2081 + #define MG_TX_DCC_TX2LN0_PORT2 0x169090 2082 + #define MG_TX_DCC_TX2LN1_PORT2 0x169490 2083 + #define MG_TX_DCC_TX2LN0_PORT3 0x16A090 2084 + #define MG_TX_DCC_TX2LN1_PORT3 0x16A490 2085 + #define MG_TX_DCC_TX2LN0_PORT4 0x16B090 2086 + #define MG_TX_DCC_TX2LN1_PORT4 0x16B490 2087 + #define MG_TX2_DCC(port, ln) \ 2088 + MG_PHY_PORT_LN(port, ln, MG_TX_DCC_TX2LN0_PORT1, \ 2089 + MG_TX_DCC_TX2LN0_PORT2, \ 2090 + MG_TX_DCC_TX2LN1_PORT1) 2091 + #define CFG_AMI_CK_DIV_OVERRIDE_VAL(x) ((x) << 25) 2092 + #define CFG_AMI_CK_DIV_OVERRIDE_VAL_MASK (0x3 << 25) 2093 + #define CFG_AMI_CK_DIV_OVERRIDE_EN (1 << 24) 2094 + 2095 + #define MG_DP_MODE_LN0_ACU_PORT1 0x1683A0 2096 + #define MG_DP_MODE_LN1_ACU_PORT1 0x1687A0 2097 + #define MG_DP_MODE_LN0_ACU_PORT2 0x1693A0 2098 + #define MG_DP_MODE_LN1_ACU_PORT2 0x1697A0 2099 + #define MG_DP_MODE_LN0_ACU_PORT3 0x16A3A0 2100 + #define MG_DP_MODE_LN1_ACU_PORT3 0x16A7A0 2101 + #define MG_DP_MODE_LN0_ACU_PORT4 0x16B3A0 2102 + #define MG_DP_MODE_LN1_ACU_PORT4 0x16B7A0 2103 + #define MG_DP_MODE(port, ln) \ 2104 + MG_PHY_PORT_LN(port, ln, MG_DP_MODE_LN0_ACU_PORT1, \ 2105 + MG_DP_MODE_LN0_ACU_PORT2, \ 2106 + MG_DP_MODE_LN1_ACU_PORT1) 2107 + #define MG_DP_MODE_CFG_DP_X2_MODE (1 << 7) 2108 + #define MG_DP_MODE_CFG_DP_X1_MODE (1 << 6) 2109 + #define MG_DP_MODE_CFG_TR2PWR_GATING (1 << 5) 2110 + #define MG_DP_MODE_CFG_TRPWR_GATING (1 << 4) 2111 + #define MG_DP_MODE_CFG_CLNPWR_GATING (1 << 3) 2112 + #define MG_DP_MODE_CFG_DIGPWR_GATING (1 << 2) 2113 + #define MG_DP_MODE_CFG_GAONPWR_GATING (1 << 1) 2114 + 2115 + #define MG_MISC_SUS0_PORT1 0x168814 2116 + #define MG_MISC_SUS0_PORT2 0x169814 2117 + #define MG_MISC_SUS0_PORT3 0x16A814 2118 + #define MG_MISC_SUS0_PORT4 0x16B814 2119 + #define MG_MISC_SUS0(tc_port) \ 2120 + _MMIO(_PORT(tc_port, MG_MISC_SUS0_PORT1, MG_MISC_SUS0_PORT2)) 2121 + #define MG_MISC_SUS0_SUSCLK_DYNCLKGATE_MODE_MASK (3 << 14) 2122 + #define MG_MISC_SUS0_SUSCLK_DYNCLKGATE_MODE(x) ((x) << 14) 2123 + #define MG_MISC_SUS0_CFG_TR2PWR_GATING (1 << 12) 2124 + #define MG_MISC_SUS0_CFG_CL2PWR_GATING (1 << 11) 2125 + #define MG_MISC_SUS0_CFG_GAONPWR_GATING (1 << 10) 2126 + #define MG_MISC_SUS0_CFG_TRPWR_GATING (1 << 7) 2127 + #define MG_MISC_SUS0_CFG_CL1PWR_GATING (1 << 6) 2128 + #define MG_MISC_SUS0_CFG_DGPWR_GATING (1 << 5) 1969 2129 1970 2130 /* The spec defines this only for BXT PHY0, but lets assume that this 1971 2131 * would exist for PHY1 too if it had a second channel. 
··· 3084 3086 /* 3085 3087 * GPIO regs 3086 3088 */ 3087 - #define GPIOA _MMIO(0x5010) 3088 - #define GPIOB _MMIO(0x5014) 3089 - #define GPIOC _MMIO(0x5018) 3090 - #define GPIOD _MMIO(0x501c) 3091 - #define GPIOE _MMIO(0x5020) 3092 - #define GPIOF _MMIO(0x5024) 3093 - #define GPIOG _MMIO(0x5028) 3094 - #define GPIOH _MMIO(0x502c) 3095 - #define GPIOJ _MMIO(0x5034) 3096 - #define GPIOK _MMIO(0x5038) 3097 - #define GPIOL _MMIO(0x503C) 3098 - #define GPIOM _MMIO(0x5040) 3089 + #define GPIO(gpio) _MMIO(dev_priv->gpio_mmio_base + 0x5010 + \ 3090 + 4 * (gpio)) 3091 + 3099 3092 # define GPIO_CLOCK_DIR_MASK (1 << 0) 3100 3093 # define GPIO_CLOCK_DIR_IN (0 << 1) 3101 3094 # define GPIO_CLOCK_DIR_OUT (1 << 1) ··· 5465 5476 #define DP_AUX_CH_CTL_PSR_DATA_AUX_REG_SKL (1 << 14) 5466 5477 #define DP_AUX_CH_CTL_FS_DATA_AUX_REG_SKL (1 << 13) 5467 5478 #define DP_AUX_CH_CTL_GTC_DATA_AUX_REG_SKL (1 << 12) 5479 + #define DP_AUX_CH_CTL_TBT_IO (1 << 11) 5468 5480 #define DP_AUX_CH_CTL_FW_SYNC_PULSE_SKL_MASK (0x1f << 5) 5469 5481 #define DP_AUX_CH_CTL_FW_SYNC_PULSE_SKL(c) (((c) - 1) << 5) 5470 5482 #define DP_AUX_CH_CTL_SYNC_PULSE_SKL(c) ((c) - 1) ··· 6517 6527 #define PLANE_CTL_YUV422_UYVY (1 << 16) 6518 6528 #define PLANE_CTL_YUV422_YVYU (2 << 16) 6519 6529 #define PLANE_CTL_YUV422_VYUY (3 << 16) 6520 - #define PLANE_CTL_DECOMPRESSION_ENABLE (1 << 15) 6530 + #define PLANE_CTL_RENDER_DECOMPRESSION_ENABLE (1 << 15) 6521 6531 #define PLANE_CTL_TRICKLE_FEED_DISABLE (1 << 14) 6522 6532 #define PLANE_CTL_PLANE_GAMMA_DISABLE (1 << 13) /* Pre-GLK */ 6523 6533 #define PLANE_CTL_TILED_MASK (0x7 << 10) ··· 7197 7207 #define GEN11_TC3_HOTPLUG (1 << 18) 7198 7208 #define GEN11_TC2_HOTPLUG (1 << 17) 7199 7209 #define GEN11_TC1_HOTPLUG (1 << 16) 7210 + #define GEN11_TC_HOTPLUG(tc_port) (1 << ((tc_port) + 16)) 7200 7211 #define GEN11_DE_TC_HOTPLUG_MASK (GEN11_TC4_HOTPLUG | \ 7201 7212 GEN11_TC3_HOTPLUG | \ 7202 7213 GEN11_TC2_HOTPLUG | \ ··· 7206 7215 #define GEN11_TBT3_HOTPLUG (1 << 2) 7207 7216 #define GEN11_TBT2_HOTPLUG (1 << 1) 7208 7217 #define GEN11_TBT1_HOTPLUG (1 << 0) 7218 + #define GEN11_TBT_HOTPLUG(tc_port) (1 << (tc_port)) 7209 7219 #define GEN11_DE_TBT_HOTPLUG_MASK (GEN11_TBT4_HOTPLUG | \ 7210 7220 GEN11_TBT3_HOTPLUG | \ 7211 7221 GEN11_TBT2_HOTPLUG | \ ··· 7482 7490 7483 7491 /* PCH */ 7484 7492 7493 + #define PCH_DISPLAY_BASE 0xc0000u 7494 + 7485 7495 /* south display engine interrupt: IBX */ 7486 7496 #define SDE_AUDIO_POWER_D (1 << 27) 7487 7497 #define SDE_AUDIO_POWER_C (1 << 26) ··· 7581 7587 #define SDE_GMBUS_ICP (1 << 23) 7582 7588 #define SDE_DDIB_HOTPLUG_ICP (1 << 17) 7583 7589 #define SDE_DDIA_HOTPLUG_ICP (1 << 16) 7590 + #define SDE_TC_HOTPLUG_ICP(tc_port) (1 << ((tc_port) + 24)) 7591 + #define SDE_DDI_HOTPLUG_ICP(port) (1 << ((port) + 16)) 7584 7592 #define SDE_DDI_MASK_ICP (SDE_DDIB_HOTPLUG_ICP | \ 7585 7593 SDE_DDIA_HOTPLUG_ICP) 7586 7594 #define SDE_TC_MASK_ICP (SDE_TC4_HOTPLUG_ICP | \ ··· 7777 7781 7778 7782 #define ICP_TC_HPD_LONG_DETECT(tc_port) (2 << (tc_port) * 4) 7779 7783 #define ICP_TC_HPD_SHORT_DETECT(tc_port) (1 << (tc_port) * 4) 7780 - 7781 - #define PCH_GPIOA _MMIO(0xc5010) 7782 - #define PCH_GPIOB _MMIO(0xc5014) 7783 - #define PCH_GPIOC _MMIO(0xc5018) 7784 - #define PCH_GPIOD _MMIO(0xc501c) 7785 - #define PCH_GPIOE _MMIO(0xc5020) 7786 - #define PCH_GPIOF _MMIO(0xc5024) 7787 - 7788 - #define PCH_GMBUS0 _MMIO(0xc5100) 7789 - #define PCH_GMBUS1 _MMIO(0xc5104) 7790 - #define PCH_GMBUS2 _MMIO(0xc5108) 7791 - #define PCH_GMBUS3 _MMIO(0xc510c) 7792 - #define PCH_GMBUS4 _MMIO(0xc5110) 7793 - 
#define PCH_GMBUS5 _MMIO(0xc5120) 7794 7784 7795 7785 #define _PCH_DPLL_A 0xc6014 7796 7786 #define _PCH_DPLL_B 0xc6018 ··· 8480 8498 #define GEN6_PM_RP_DOWN_THRESHOLD (1 << 4) 8481 8499 #define GEN6_PM_RP_UP_EI_EXPIRED (1 << 2) 8482 8500 #define GEN6_PM_RP_DOWN_EI_EXPIRED (1 << 1) 8483 - #define GEN6_PM_RPS_EVENTS (GEN6_PM_RP_UP_THRESHOLD | \ 8484 - GEN6_PM_RP_DOWN_THRESHOLD | \ 8501 + #define GEN6_PM_RPS_EVENTS (GEN6_PM_RP_UP_EI_EXPIRED | \ 8502 + GEN6_PM_RP_UP_THRESHOLD | \ 8503 + GEN6_PM_RP_DOWN_EI_EXPIRED | \ 8504 + GEN6_PM_RP_DOWN_THRESHOLD | \ 8485 8505 GEN6_PM_RP_DOWN_TIMEOUT) 8486 8506 8487 8507 #define GEN7_GT_SCRATCH(i) _MMIO(0x4F100 + (i) * 4) ··· 8811 8827 #define HSW_AUD_CHICKENBIT _MMIO(0x65f10) 8812 8828 #define SKL_AUD_CODEC_WAKE_SIGNAL (1 << 15) 8813 8829 8814 - /* HSW Power Wells */ 8815 - #define _HSW_PWR_WELL_CTL1 0x45400 8816 - #define _HSW_PWR_WELL_CTL2 0x45404 8817 - #define _HSW_PWR_WELL_CTL3 0x45408 8818 - #define _HSW_PWR_WELL_CTL4 0x4540C 8819 - 8820 - #define _ICL_PWR_WELL_CTL_AUX1 0x45440 8821 - #define _ICL_PWR_WELL_CTL_AUX2 0x45444 8822 - #define _ICL_PWR_WELL_CTL_AUX4 0x4544C 8823 - 8824 - #define _ICL_PWR_WELL_CTL_DDI1 0x45450 8825 - #define _ICL_PWR_WELL_CTL_DDI2 0x45454 8826 - #define _ICL_PWR_WELL_CTL_DDI4 0x4545C 8827 - 8828 8830 /* 8829 - * Each power well control register contains up to 16 (request, status) HW 8830 - * flag tuples. The register index and HW flag shift is determined by the 8831 - * power well ID (see i915_power_well_id). There are 4 possible sources of 8832 - * power well requests each source having its own set of control registers: 8833 - * BIOS, DRIVER, KVMR, DEBUG. 8831 + * HSW - ICL power wells 8832 + * 8833 + * Platforms have up to 3 power well control register sets, each set 8834 + * controlling up to 16 power wells via a request/status HW flag tuple: 8835 + * - main (HSW_PWR_WELL_CTL[1-4]) 8836 + * - AUX (ICL_PWR_WELL_CTL_AUX[1-4]) 8837 + * - DDI (ICL_PWR_WELL_CTL_DDI[1-4]) 8838 + * Each control register set consists of up to 4 registers used by different 8839 + * sources that can request a power well to be enabled: 8840 + * - BIOS (HSW_PWR_WELL_CTL1/ICL_PWR_WELL_CTL_AUX1/ICL_PWR_WELL_CTL_DDI1) 8841 + * - DRIVER (HSW_PWR_WELL_CTL2/ICL_PWR_WELL_CTL_AUX2/ICL_PWR_WELL_CTL_DDI2) 8842 + * - KVMR (HSW_PWR_WELL_CTL3) (only in the main register set) 8843 + * - DEBUG (HSW_PWR_WELL_CTL4/ICL_PWR_WELL_CTL_AUX4/ICL_PWR_WELL_CTL_DDI4) 8834 8844 */ 8835 - #define _HSW_PW_REG_IDX(pw) ((pw) >> 4) 8836 - #define _HSW_PW_SHIFT(pw) (((pw) & 0xf) * 2) 8837 - #define HSW_PWR_WELL_CTL_BIOS(pw) _MMIO(_PICK(_HSW_PW_REG_IDX(pw), \ 8838 - _HSW_PWR_WELL_CTL1, \ 8839 - _ICL_PWR_WELL_CTL_AUX1, \ 8840 - _ICL_PWR_WELL_CTL_DDI1)) 8841 - #define HSW_PWR_WELL_CTL_DRIVER(pw) _MMIO(_PICK(_HSW_PW_REG_IDX(pw), \ 8842 - _HSW_PWR_WELL_CTL2, \ 8843 - _ICL_PWR_WELL_CTL_AUX2, \ 8844 - _ICL_PWR_WELL_CTL_DDI2)) 8845 - /* KVMR doesn't have a reg for AUX or DDI power well control */ 8846 - #define HSW_PWR_WELL_CTL_KVMR _MMIO(_HSW_PWR_WELL_CTL3) 8847 - #define HSW_PWR_WELL_CTL_DEBUG(pw) _MMIO(_PICK(_HSW_PW_REG_IDX(pw), \ 8848 - _HSW_PWR_WELL_CTL4, \ 8849 - _ICL_PWR_WELL_CTL_AUX4, \ 8850 - _ICL_PWR_WELL_CTL_DDI4)) 8845 + #define HSW_PWR_WELL_CTL1 _MMIO(0x45400) 8846 + #define HSW_PWR_WELL_CTL2 _MMIO(0x45404) 8847 + #define HSW_PWR_WELL_CTL3 _MMIO(0x45408) 8848 + #define HSW_PWR_WELL_CTL4 _MMIO(0x4540C) 8849 + #define HSW_PWR_WELL_CTL_REQ(pw_idx) (0x2 << ((pw_idx) * 2)) 8850 + #define HSW_PWR_WELL_CTL_STATE(pw_idx) (0x1 << ((pw_idx) * 2)) 8851 8851 8852 - #define 
HSW_PWR_WELL_CTL_REQ(pw) (1 << (_HSW_PW_SHIFT(pw) + 1)) 8853 - #define HSW_PWR_WELL_CTL_STATE(pw) (1 << _HSW_PW_SHIFT(pw)) 8852 + /* HSW/BDW power well */ 8853 + #define HSW_PW_CTL_IDX_GLOBAL 15 8854 + 8855 + /* SKL/BXT/GLK/CNL power wells */ 8856 + #define SKL_PW_CTL_IDX_PW_2 15 8857 + #define SKL_PW_CTL_IDX_PW_1 14 8858 + #define CNL_PW_CTL_IDX_AUX_F 12 8859 + #define CNL_PW_CTL_IDX_AUX_D 11 8860 + #define GLK_PW_CTL_IDX_AUX_C 10 8861 + #define GLK_PW_CTL_IDX_AUX_B 9 8862 + #define GLK_PW_CTL_IDX_AUX_A 8 8863 + #define CNL_PW_CTL_IDX_DDI_F 6 8864 + #define SKL_PW_CTL_IDX_DDI_D 4 8865 + #define SKL_PW_CTL_IDX_DDI_C 3 8866 + #define SKL_PW_CTL_IDX_DDI_B 2 8867 + #define SKL_PW_CTL_IDX_DDI_A_E 1 8868 + #define GLK_PW_CTL_IDX_DDI_A 1 8869 + #define SKL_PW_CTL_IDX_MISC_IO 0 8870 + 8871 + /* ICL - power wells */ 8872 + #define ICL_PW_CTL_IDX_PW_4 3 8873 + #define ICL_PW_CTL_IDX_PW_3 2 8874 + #define ICL_PW_CTL_IDX_PW_2 1 8875 + #define ICL_PW_CTL_IDX_PW_1 0 8876 + 8877 + #define ICL_PWR_WELL_CTL_AUX1 _MMIO(0x45440) 8878 + #define ICL_PWR_WELL_CTL_AUX2 _MMIO(0x45444) 8879 + #define ICL_PWR_WELL_CTL_AUX4 _MMIO(0x4544C) 8880 + #define ICL_PW_CTL_IDX_AUX_TBT4 11 8881 + #define ICL_PW_CTL_IDX_AUX_TBT3 10 8882 + #define ICL_PW_CTL_IDX_AUX_TBT2 9 8883 + #define ICL_PW_CTL_IDX_AUX_TBT1 8 8884 + #define ICL_PW_CTL_IDX_AUX_F 5 8885 + #define ICL_PW_CTL_IDX_AUX_E 4 8886 + #define ICL_PW_CTL_IDX_AUX_D 3 8887 + #define ICL_PW_CTL_IDX_AUX_C 2 8888 + #define ICL_PW_CTL_IDX_AUX_B 1 8889 + #define ICL_PW_CTL_IDX_AUX_A 0 8890 + 8891 + #define ICL_PWR_WELL_CTL_DDI1 _MMIO(0x45450) 8892 + #define ICL_PWR_WELL_CTL_DDI2 _MMIO(0x45454) 8893 + #define ICL_PWR_WELL_CTL_DDI4 _MMIO(0x4545C) 8894 + #define ICL_PW_CTL_IDX_DDI_F 5 8895 + #define ICL_PW_CTL_IDX_DDI_E 4 8896 + #define ICL_PW_CTL_IDX_DDI_D 3 8897 + #define ICL_PW_CTL_IDX_DDI_C 2 8898 + #define ICL_PW_CTL_IDX_DDI_B 1 8899 + #define ICL_PW_CTL_IDX_DDI_A 0 8900 + 8901 + /* HSW - power well misc debug registers */ 8854 8902 #define HSW_PWR_WELL_CTL5 _MMIO(0x45410) 8855 8903 #define HSW_PWR_WELL_ENABLE_SINGLE_STEP (1 << 31) 8856 8904 #define HSW_PWR_WELL_PWR_GATE_OVERRIDE (1 << 20) ··· 8894 8878 SKL_PG0, 8895 8879 SKL_PG1, 8896 8880 SKL_PG2, 8881 + ICL_PG3, 8882 + ICL_PG4, 8897 8883 }; 8898 8884 8899 8885 #define SKL_FUSE_STATUS _MMIO(0x42000) 8900 8886 #define SKL_FUSE_DOWNLOAD_STATUS (1 << 31) 8901 - /* PG0 (HW control->no power well ID), PG1..PG2 (SKL_DISP_PW1..SKL_DISP_PW2) */ 8902 - #define SKL_PW_TO_PG(pw) ((pw) - SKL_DISP_PW_1 + SKL_PG1) 8903 - /* PG0 (HW control->no power well ID), PG1..PG4 (ICL_DISP_PW1..ICL_DISP_PW4) */ 8904 - #define ICL_PW_TO_PG(pw) ((pw) - ICL_DISP_PW_1 + SKL_PG1) 8887 + /* 8888 + * PG0 is HW controlled, so doesn't have a corresponding power well control knob 8889 + * SKL_DISP_PW1_IDX..SKL_DISP_PW2_IDX -> PG1..PG2 8890 + */ 8891 + #define SKL_PW_CTL_IDX_TO_PG(pw_idx) \ 8892 + ((pw_idx) - SKL_PW_CTL_IDX_PW_1 + SKL_PG1) 8893 + /* 8894 + * PG0 is HW controlled, so doesn't have a corresponding power well control knob 8895 + * ICL_DISP_PW1_IDX..ICL_DISP_PW4_IDX -> PG1..PG4 8896 + */ 8897 + #define ICL_PW_CTL_IDX_TO_PG(pw_idx) \ 8898 + ((pw_idx) - ICL_PW_CTL_IDX_PW_1 + SKL_PG1) 8905 8899 #define SKL_FUSE_PG_DIST_STATUS(pg) (1 << (27 - (pg))) 8906 8900 8907 - #define _CNL_AUX_REG_IDX(pw) ((pw) - 9) 8901 + #define _CNL_AUX_REG_IDX(pw_idx) ((pw_idx) - GLK_PW_CTL_IDX_AUX_B) 8908 8902 #define _CNL_AUX_ANAOVRD1_B 0x162250 8909 8903 #define _CNL_AUX_ANAOVRD1_C 0x162210 8910 8904 #define _CNL_AUX_ANAOVRD1_D 0x1622D0 8911 8905 #define 
_CNL_AUX_ANAOVRD1_F 0x162A90 8912 - #define CNL_AUX_ANAOVRD1(pw) _MMIO(_PICK(_CNL_AUX_REG_IDX(pw), \ 8906 + #define CNL_AUX_ANAOVRD1(pw_idx) _MMIO(_PICK(_CNL_AUX_REG_IDX(pw_idx), \ 8913 8907 _CNL_AUX_ANAOVRD1_B, \ 8914 8908 _CNL_AUX_ANAOVRD1_C, \ 8915 8909 _CNL_AUX_ANAOVRD1_D, \ ··· 9393 9367 #define MG_CLKTOP2_HSCLKCTL_CORE_INPUTSEL_MASK (0x1 << 16) 9394 9368 #define MG_CLKTOP2_HSCLKCTL_TLINEDRV_CLKSEL(x) ((x) << 14) 9395 9369 #define MG_CLKTOP2_HSCLKCTL_TLINEDRV_CLKSEL_MASK (0x3 << 14) 9396 - #define MG_CLKTOP2_HSCLKCTL_HSDIV_RATIO(x) ((x) << 12) 9397 9370 #define MG_CLKTOP2_HSCLKCTL_HSDIV_RATIO_MASK (0x3 << 12) 9371 + #define MG_CLKTOP2_HSCLKCTL_HSDIV_RATIO_2 (0 << 12) 9372 + #define MG_CLKTOP2_HSCLKCTL_HSDIV_RATIO_3 (1 << 12) 9373 + #define MG_CLKTOP2_HSCLKCTL_HSDIV_RATIO_5 (2 << 12) 9374 + #define MG_CLKTOP2_HSCLKCTL_HSDIV_RATIO_7 (3 << 12) 9398 9375 #define MG_CLKTOP2_HSCLKCTL_DSDIV_RATIO(x) ((x) << 8) 9376 + #define MG_CLKTOP2_HSCLKCTL_DSDIV_RATIO_SHIFT 8 9399 9377 #define MG_CLKTOP2_HSCLKCTL_DSDIV_RATIO_MASK (0xf << 8) 9400 9378 #define MG_CLKTOP2_HSCLKCTL(port) _MMIO_PORT((port) - PORT_C, \ 9401 9379 _MG_CLKTOP2_HSCLKCTL_PORT1, \ ··· 9410 9380 #define _MG_PLL_DIV0_PORT3 0x16AA00 9411 9381 #define _MG_PLL_DIV0_PORT4 0x16BA00 9412 9382 #define MG_PLL_DIV0_FRACNEN_H (1 << 30) 9383 + #define MG_PLL_DIV0_FBDIV_FRAC_MASK (0x3fffff << 8) 9384 + #define MG_PLL_DIV0_FBDIV_FRAC_SHIFT 8 9413 9385 #define MG_PLL_DIV0_FBDIV_FRAC(x) ((x) << 8) 9386 + #define MG_PLL_DIV0_FBDIV_INT_MASK (0xff << 0) 9414 9387 #define MG_PLL_DIV0_FBDIV_INT(x) ((x) << 0) 9415 9388 #define MG_PLL_DIV0(port) _MMIO_PORT((port) - PORT_C, _MG_PLL_DIV0_PORT1, \ 9416 9389 _MG_PLL_DIV0_PORT2) ··· 9428 9395 #define MG_PLL_DIV1_DITHER_DIV_4 (2 << 12) 9429 9396 #define MG_PLL_DIV1_DITHER_DIV_8 (3 << 12) 9430 9397 #define MG_PLL_DIV1_NDIVRATIO(x) ((x) << 4) 9398 + #define MG_PLL_DIV1_FBPREDIV_MASK (0xf << 0) 9431 9399 #define MG_PLL_DIV1_FBPREDIV(x) ((x) << 0) 9432 9400 #define MG_PLL_DIV1(port) _MMIO_PORT((port) - PORT_C, _MG_PLL_DIV1_PORT1, \ 9433 9401 _MG_PLL_DIV1_PORT2) ··· 10381 10347 #define ICL_PHY_MISC_DE_IO_COMP_PWR_DOWN (1 << 23) 10382 10348 10383 10349 /* Icelake Display Stream Compression Registers */ 10384 - #define DSCA_PICTURE_PARAMETER_SET_0 0x6B200 10385 - #define DSCC_PICTURE_PARAMETER_SET_0 0x6BA00 10350 + #define DSCA_PICTURE_PARAMETER_SET_0 _MMIO(0x6B200) 10351 + #define DSCC_PICTURE_PARAMETER_SET_0 _MMIO(0x6BA00) 10386 10352 #define _ICL_DSC0_PICTURE_PARAMETER_SET_0_PB 0x78270 10387 10353 #define _ICL_DSC1_PICTURE_PARAMETER_SET_0_PB 0x78370 10388 10354 #define _ICL_DSC0_PICTURE_PARAMETER_SET_0_PC 0x78470 ··· 10402 10368 #define DSC_VER_MIN_SHIFT 4 10403 10369 #define DSC_VER_MAJ (0x1 << 0) 10404 10370 10405 - #define DSCA_PICTURE_PARAMETER_SET_1 0x6B204 10406 - #define DSCC_PICTURE_PARAMETER_SET_1 0x6BA04 10371 + #define DSCA_PICTURE_PARAMETER_SET_1 _MMIO(0x6B204) 10372 + #define DSCC_PICTURE_PARAMETER_SET_1 _MMIO(0x6BA04) 10407 10373 #define _ICL_DSC0_PICTURE_PARAMETER_SET_1_PB 0x78274 10408 10374 #define _ICL_DSC1_PICTURE_PARAMETER_SET_1_PB 0x78374 10409 10375 #define _ICL_DSC0_PICTURE_PARAMETER_SET_1_PC 0x78474 ··· 10416 10382 _ICL_DSC1_PICTURE_PARAMETER_SET_1_PC) 10417 10383 #define DSC_BPP(bpp) ((bpp) << 0) 10418 10384 10419 - #define DSCA_PICTURE_PARAMETER_SET_2 0x6B208 10420 - #define DSCC_PICTURE_PARAMETER_SET_2 0x6BA08 10385 + #define DSCA_PICTURE_PARAMETER_SET_2 _MMIO(0x6B208) 10386 + #define DSCC_PICTURE_PARAMETER_SET_2 _MMIO(0x6BA08) 10421 10387 #define _ICL_DSC0_PICTURE_PARAMETER_SET_2_PB 
0x78278 10422 10388 #define _ICL_DSC1_PICTURE_PARAMETER_SET_2_PB 0x78378 10423 10389 #define _ICL_DSC0_PICTURE_PARAMETER_SET_2_PC 0x78478 ··· 10431 10397 #define DSC_PIC_WIDTH(pic_width) ((pic_width) << 16) 10432 10398 #define DSC_PIC_HEIGHT(pic_height) ((pic_height) << 0) 10433 10399 10434 - #define DSCA_PICTURE_PARAMETER_SET_3 0x6B20C 10435 - #define DSCC_PICTURE_PARAMETER_SET_3 0x6BA0C 10400 + #define DSCA_PICTURE_PARAMETER_SET_3 _MMIO(0x6B20C) 10401 + #define DSCC_PICTURE_PARAMETER_SET_3 _MMIO(0x6BA0C) 10436 10402 #define _ICL_DSC0_PICTURE_PARAMETER_SET_3_PB 0x7827C 10437 10403 #define _ICL_DSC1_PICTURE_PARAMETER_SET_3_PB 0x7837C 10438 10404 #define _ICL_DSC0_PICTURE_PARAMETER_SET_3_PC 0x7847C ··· 10446 10412 #define DSC_SLICE_WIDTH(slice_width) ((slice_width) << 16) 10447 10413 #define DSC_SLICE_HEIGHT(slice_height) ((slice_height) << 0) 10448 10414 10449 - #define DSCA_PICTURE_PARAMETER_SET_4 0x6B210 10450 - #define DSCC_PICTURE_PARAMETER_SET_4 0x6BA10 10415 + #define DSCA_PICTURE_PARAMETER_SET_4 _MMIO(0x6B210) 10416 + #define DSCC_PICTURE_PARAMETER_SET_4 _MMIO(0x6BA10) 10451 10417 #define _ICL_DSC0_PICTURE_PARAMETER_SET_4_PB 0x78280 10452 10418 #define _ICL_DSC1_PICTURE_PARAMETER_SET_4_PB 0x78380 10453 10419 #define _ICL_DSC0_PICTURE_PARAMETER_SET_4_PC 0x78480 ··· 10456 10422 _ICL_DSC0_PICTURE_PARAMETER_SET_4_PB, \ 10457 10423 _ICL_DSC0_PICTURE_PARAMETER_SET_4_PC) 10458 10424 #define ICL_DSC1_PICTURE_PARAMETER_SET_4(pipe) _MMIO_PIPE((pipe) - PIPE_B, \ 10459 - _ICL_DSC0_PICTURE_PARAMETER_SET_4_PB, \ 10425 + _ICL_DSC1_PICTURE_PARAMETER_SET_4_PB, \ 10460 10426 _ICL_DSC1_PICTURE_PARAMETER_SET_4_PC) 10461 10427 #define DSC_INITIAL_DEC_DELAY(dec_delay) ((dec_delay) << 16) 10462 10428 #define DSC_INITIAL_XMIT_DELAY(xmit_delay) ((xmit_delay) << 0) 10463 10429 10464 - #define DSCA_PICTURE_PARAMETER_SET_5 0x6B214 10465 - #define DSCC_PICTURE_PARAMETER_SET_5 0x6BA14 10430 + #define DSCA_PICTURE_PARAMETER_SET_5 _MMIO(0x6B214) 10431 + #define DSCC_PICTURE_PARAMETER_SET_5 _MMIO(0x6BA14) 10466 10432 #define _ICL_DSC0_PICTURE_PARAMETER_SET_5_PB 0x78284 10467 10433 #define _ICL_DSC1_PICTURE_PARAMETER_SET_5_PB 0x78384 10468 10434 #define _ICL_DSC0_PICTURE_PARAMETER_SET_5_PC 0x78484 ··· 10471 10437 _ICL_DSC0_PICTURE_PARAMETER_SET_5_PB, \ 10472 10438 _ICL_DSC0_PICTURE_PARAMETER_SET_5_PC) 10473 10439 #define ICL_DSC1_PICTURE_PARAMETER_SET_5(pipe) _MMIO_PIPE((pipe) - PIPE_B, \ 10474 - _ICL_DSC1_PICTURE_PARAMETER_SET_5_PC, \ 10440 + _ICL_DSC1_PICTURE_PARAMETER_SET_5_PB, \ 10475 10441 _ICL_DSC1_PICTURE_PARAMETER_SET_5_PC) 10476 - #define DSC_SCALE_DEC_INTINT(scale_dec) ((scale_dec) << 16) 10442 + #define DSC_SCALE_DEC_INT(scale_dec) ((scale_dec) << 16) 10477 10443 #define DSC_SCALE_INC_INT(scale_inc) ((scale_inc) << 0) 10478 10444 10479 - #define DSCA_PICTURE_PARAMETER_SET_6 0x6B218 10480 - #define DSCC_PICTURE_PARAMETER_SET_6 0x6BA18 10445 + #define DSCA_PICTURE_PARAMETER_SET_6 _MMIO(0x6B218) 10446 + #define DSCC_PICTURE_PARAMETER_SET_6 _MMIO(0x6BA18) 10481 10447 #define _ICL_DSC0_PICTURE_PARAMETER_SET_6_PB 0x78288 10482 10448 #define _ICL_DSC1_PICTURE_PARAMETER_SET_6_PB 0x78388 10483 10449 #define _ICL_DSC0_PICTURE_PARAMETER_SET_6_PC 0x78488 ··· 10488 10454 #define ICL_DSC1_PICTURE_PARAMETER_SET_6(pipe) _MMIO_PIPE((pipe) - PIPE_B, \ 10489 10455 _ICL_DSC1_PICTURE_PARAMETER_SET_6_PB, \ 10490 10456 _ICL_DSC1_PICTURE_PARAMETER_SET_6_PC) 10491 - #define DSC_FLATNESS_MAX_QP(max_qp) (qp << 24) 10492 - #define DSC_FLATNESS_MIN_QP(min_qp) (qp << 16) 10457 + #define DSC_FLATNESS_MAX_QP(max_qp) ((max_qp) << 24) 
10458 + #define DSC_FLATNESS_MIN_QP(min_qp) ((min_qp) << 16) 10493 10459 #define DSC_FIRST_LINE_BPG_OFFSET(offset) ((offset) << 8) 10494 10460 #define DSC_INITIAL_SCALE_VALUE(value) ((value) << 0) 10495 10461 10496 - #define DSCA_PICTURE_PARAMETER_SET_7 0x6B21C 10497 - #define DSCC_PICTURE_PARAMETER_SET_7 0x6BA1C 10462 + #define DSCA_PICTURE_PARAMETER_SET_7 _MMIO(0x6B21C) 10463 + #define DSCC_PICTURE_PARAMETER_SET_7 _MMIO(0x6BA1C) 10498 10464 #define _ICL_DSC0_PICTURE_PARAMETER_SET_7_PB 0x7828C 10499 10465 #define _ICL_DSC1_PICTURE_PARAMETER_SET_7_PB 0x7838C 10500 10466 #define _ICL_DSC0_PICTURE_PARAMETER_SET_7_PC 0x7848C ··· 10508 10474 #define DSC_NFL_BPG_OFFSET(bpg_offset) ((bpg_offset) << 16) 10509 10475 #define DSC_SLICE_BPG_OFFSET(bpg_offset) ((bpg_offset) << 0) 10510 10476 10511 - #define DSCA_PICTURE_PARAMETER_SET_8 0x6B220 10512 - #define DSCC_PICTURE_PARAMETER_SET_8 0x6BA20 10477 + #define DSCA_PICTURE_PARAMETER_SET_8 _MMIO(0x6B220) 10478 + #define DSCC_PICTURE_PARAMETER_SET_8 _MMIO(0x6BA20) 10513 10479 #define _ICL_DSC0_PICTURE_PARAMETER_SET_8_PB 0x78290 10514 10480 #define _ICL_DSC1_PICTURE_PARAMETER_SET_8_PB 0x78390 10515 10481 #define _ICL_DSC0_PICTURE_PARAMETER_SET_8_PC 0x78490 ··· 10523 10489 #define DSC_INITIAL_OFFSET(initial_offset) ((initial_offset) << 16) 10524 10490 #define DSC_FINAL_OFFSET(final_offset) ((final_offset) << 0) 10525 10491 10526 - #define DSCA_PICTURE_PARAMETER_SET_9 0x6B224 10527 - #define DSCC_PICTURE_PARAMETER_SET_9 0x6BA24 10492 + #define DSCA_PICTURE_PARAMETER_SET_9 _MMIO(0x6B224) 10493 + #define DSCC_PICTURE_PARAMETER_SET_9 _MMIO(0x6BA24) 10528 10494 #define _ICL_DSC0_PICTURE_PARAMETER_SET_9_PB 0x78294 10529 10495 #define _ICL_DSC1_PICTURE_PARAMETER_SET_9_PB 0x78394 10530 10496 #define _ICL_DSC0_PICTURE_PARAMETER_SET_9_PC 0x78494 ··· 10538 10504 #define DSC_RC_EDGE_FACTOR(rc_edge_fact) ((rc_edge_fact) << 16) 10539 10505 #define DSC_RC_MODEL_SIZE(rc_model_size) ((rc_model_size) << 0) 10540 10506 10541 - #define DSCA_PICTURE_PARAMETER_SET_10 0x6B228 10542 - #define DSCC_PICTURE_PARAMETER_SET_10 0x6BA28 10507 + #define DSCA_PICTURE_PARAMETER_SET_10 _MMIO(0x6B228) 10508 + #define DSCC_PICTURE_PARAMETER_SET_10 _MMIO(0x6BA28) 10543 10509 #define _ICL_DSC0_PICTURE_PARAMETER_SET_10_PB 0x78298 10544 10510 #define _ICL_DSC1_PICTURE_PARAMETER_SET_10_PB 0x78398 10545 10511 #define _ICL_DSC0_PICTURE_PARAMETER_SET_10_PC 0x78498 ··· 10555 10521 #define DSC_RC_QUANT_INC_LIMIT1(lim) ((lim) << 8) 10556 10522 #define DSC_RC_QUANT_INC_LIMIT0(lim) ((lim) << 0) 10557 10523 10558 - #define DSCA_PICTURE_PARAMETER_SET_11 0x6B22C 10559 - #define DSCC_PICTURE_PARAMETER_SET_11 0x6BA2C 10524 + #define DSCA_PICTURE_PARAMETER_SET_11 _MMIO(0x6B22C) 10525 + #define DSCC_PICTURE_PARAMETER_SET_11 _MMIO(0x6BA2C) 10560 10526 #define _ICL_DSC0_PICTURE_PARAMETER_SET_11_PB 0x7829C 10561 10527 #define _ICL_DSC1_PICTURE_PARAMETER_SET_11_PB 0x7839C 10562 10528 #define _ICL_DSC0_PICTURE_PARAMETER_SET_11_PC 0x7849C ··· 10568 10534 _ICL_DSC1_PICTURE_PARAMETER_SET_11_PB, \ 10569 10535 _ICL_DSC1_PICTURE_PARAMETER_SET_11_PC) 10570 10536 10571 - #define DSCA_PICTURE_PARAMETER_SET_12 0x6B260 10572 - #define DSCC_PICTURE_PARAMETER_SET_12 0x6BA60 10537 + #define DSCA_PICTURE_PARAMETER_SET_12 _MMIO(0x6B260) 10538 + #define DSCC_PICTURE_PARAMETER_SET_12 _MMIO(0x6BA60) 10573 10539 #define _ICL_DSC0_PICTURE_PARAMETER_SET_12_PB 0x782A0 10574 10540 #define _ICL_DSC1_PICTURE_PARAMETER_SET_12_PB 0x783A0 10575 10541 #define _ICL_DSC0_PICTURE_PARAMETER_SET_12_PC 0x784A0 ··· 10581 10547 
_ICL_DSC1_PICTURE_PARAMETER_SET_12_PB, \ 10582 10548 _ICL_DSC1_PICTURE_PARAMETER_SET_12_PC) 10583 10549 10584 - #define DSCA_PICTURE_PARAMETER_SET_13 0x6B264 10585 - #define DSCC_PICTURE_PARAMETER_SET_13 0x6BA64 10550 + #define DSCA_PICTURE_PARAMETER_SET_13 _MMIO(0x6B264) 10551 + #define DSCC_PICTURE_PARAMETER_SET_13 _MMIO(0x6BA64) 10586 10552 #define _ICL_DSC0_PICTURE_PARAMETER_SET_13_PB 0x782A4 10587 10553 #define _ICL_DSC1_PICTURE_PARAMETER_SET_13_PB 0x783A4 10588 10554 #define _ICL_DSC0_PICTURE_PARAMETER_SET_13_PC 0x784A4 ··· 10594 10560 _ICL_DSC1_PICTURE_PARAMETER_SET_13_PB, \ 10595 10561 _ICL_DSC1_PICTURE_PARAMETER_SET_13_PC) 10596 10562 10597 - #define DSCA_PICTURE_PARAMETER_SET_14 0x6B268 10598 - #define DSCC_PICTURE_PARAMETER_SET_14 0x6BA68 10563 + #define DSCA_PICTURE_PARAMETER_SET_14 _MMIO(0x6B268) 10564 + #define DSCC_PICTURE_PARAMETER_SET_14 _MMIO(0x6BA68) 10599 10565 #define _ICL_DSC0_PICTURE_PARAMETER_SET_14_PB 0x782A8 10600 10566 #define _ICL_DSC1_PICTURE_PARAMETER_SET_14_PB 0x783A8 10601 10567 #define _ICL_DSC0_PICTURE_PARAMETER_SET_14_PC 0x784A8 ··· 10607 10573 _ICL_DSC1_PICTURE_PARAMETER_SET_14_PB, \ 10608 10574 _ICL_DSC1_PICTURE_PARAMETER_SET_14_PC) 10609 10575 10610 - #define DSCA_PICTURE_PARAMETER_SET_15 0x6B26C 10611 - #define DSCC_PICTURE_PARAMETER_SET_15 0x6BA6C 10576 + #define DSCA_PICTURE_PARAMETER_SET_15 _MMIO(0x6B26C) 10577 + #define DSCC_PICTURE_PARAMETER_SET_15 _MMIO(0x6BA6C) 10612 10578 #define _ICL_DSC0_PICTURE_PARAMETER_SET_15_PB 0x782AC 10613 10579 #define _ICL_DSC1_PICTURE_PARAMETER_SET_15_PB 0x783AC 10614 10580 #define _ICL_DSC0_PICTURE_PARAMETER_SET_15_PC 0x784AC ··· 10620 10586 _ICL_DSC1_PICTURE_PARAMETER_SET_15_PB, \ 10621 10587 _ICL_DSC1_PICTURE_PARAMETER_SET_15_PC) 10622 10588 10623 - #define DSCA_PICTURE_PARAMETER_SET_16 0x6B270 10624 - #define DSCC_PICTURE_PARAMETER_SET_16 0x6BA70 10589 + #define DSCA_PICTURE_PARAMETER_SET_16 _MMIO(0x6B270) 10590 + #define DSCC_PICTURE_PARAMETER_SET_16 _MMIO(0x6BA70) 10625 10591 #define _ICL_DSC0_PICTURE_PARAMETER_SET_16_PB 0x782B0 10626 10592 #define _ICL_DSC1_PICTURE_PARAMETER_SET_16_PB 0x783B0 10627 10593 #define _ICL_DSC0_PICTURE_PARAMETER_SET_16_PC 0x784B0 ··· 10633 10599 _ICL_DSC1_PICTURE_PARAMETER_SET_16_PB, \ 10634 10600 _ICL_DSC1_PICTURE_PARAMETER_SET_16_PC) 10635 10601 #define DSC_SLICE_PER_LINE(slice_per_line) ((slice_per_line) << 16) 10636 - #define DSC_SLICE_CHUNK_SIZE(slice_chunk_aize) (slice_chunk_size << 0) 10602 + #define DSC_SLICE_CHUNK_SIZE(slice_chunk_size) ((slice_chunk_size) << 0) 10637 10603 10638 10604 /* Icelake Rate Control Buffer Threshold Registers */ 10639 10605 #define DSCA_RC_BUF_THRESH_0 _MMIO(0x6B230) ··· 10685 10651 #define ICL_DSC1_RC_BUF_THRESH_1_UDW(pipe) _MMIO_PIPE((pipe) - PIPE_B, \ 10686 10652 _ICL_DSC1_RC_BUF_THRESH_1_UDW_PB, \ 10687 10653 _ICL_DSC1_RC_BUF_THRESH_1_UDW_PC) 10654 + 10655 + #define PORT_TX_DFLEXDPSP _MMIO(0x1638A0) 10656 + #define TC_LIVE_STATE_TBT(tc_port) (1 << ((tc_port) * 8 + 6)) 10657 + #define TC_LIVE_STATE_TC(tc_port) (1 << ((tc_port) * 8 + 5)) 10658 + #define DP_LANE_ASSIGNMENT_SHIFT(tc_port) ((tc_port) * 8) 10659 + #define DP_LANE_ASSIGNMENT_MASK(tc_port) (0xf << ((tc_port) * 8)) 10660 + #define DP_LANE_ASSIGNMENT(tc_port, x) ((x) << ((tc_port) * 8)) 10661 + 10662 + #define PORT_TX_DFLEXDPPMS _MMIO(0x163890) 10663 + #define DP_PHY_MODE_STATUS_COMPLETED(tc_port) (1 << (tc_port)) 10664 + 10665 + #define PORT_TX_DFLEXDPCSSS _MMIO(0x163894) 10666 + #define DP_PHY_MODE_STATUS_NOT_SAFE(tc_port) (1 << (tc_port)) 10688 10667 10689 10668 #endif /* 
_I915_REG_H_ */
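The reworked power well comment above describes a fixed two-bit (request, status) tuple per control-register index. A minimal standalone sketch of that bit layout, reusing the shift arithmetic of the new HSW_PWR_WELL_CTL_REQ()/HSW_PWR_WELL_CTL_STATE() macros; the index value and the "hardware" step below are illustrative only, not a register dump:

#include <stdio.h>
#include <stdint.h>

/* Same layout as the reworked macros: each power well owns two bits,
 * bit (2*idx + 1) is the request flag, bit (2*idx) the status flag. */
#define PWR_WELL_CTL_REQ(pw_idx)   (0x2u << ((pw_idx) * 2))
#define PWR_WELL_CTL_STATE(pw_idx) (0x1u << ((pw_idx) * 2))

int main(void)
{
	uint32_t ctl = 0;
	unsigned int pw_idx = 14; /* e.g. SKL_PW_CTL_IDX_PW_1 */

	/* requester side: raise the request bit for this well */
	ctl |= PWR_WELL_CTL_REQ(pw_idx);
	printf("ctl after request: 0x%08x\n", ctl);

	/* hardware side: acknowledge by setting the status bit */
	ctl |= PWR_WELL_CTL_STATE(pw_idx);
	printf("well enabled: %s\n",
	       (ctl & PWR_WELL_CTL_STATE(pw_idx)) ? "yes" : "no");
	return 0;
}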
+4 -5
drivers/gpu/drm/i915/i915_request.c
···
527 527
528 528 	seqno = timeline_get_seqno(&engine->timeline);
529 529 	GEM_BUG_ON(!seqno);
530 - 	GEM_BUG_ON(i915_seqno_passed(intel_engine_get_seqno(engine), seqno));
530 + 	GEM_BUG_ON(intel_engine_signaled(engine, seqno));
531 531
532 532 	/* We may be recursing from the signal callback of another i915 fence */
533 533 	spin_lock_nested(&request->lock, SINGLE_DEPTH_NESTING);
···
579 579 	 */
580 580 	GEM_BUG_ON(!request->global_seqno);
581 581 	GEM_BUG_ON(request->global_seqno != engine->timeline.seqno);
582 - 	GEM_BUG_ON(i915_seqno_passed(intel_engine_get_seqno(engine),
583 - 				     request->global_seqno));
582 + 	GEM_BUG_ON(intel_engine_has_completed(engine, request->global_seqno));
584 583 	engine->timeline.seqno--;
585 584
586 585 	/* We may be recursing from the signal callback of another i915 fence */
···
1204 1205 	 * it is a fair assumption that it will not complete within our
1205 1206 	 * relatively short timeout.
1206 1207 	 */
1207 - 	if (!i915_seqno_passed(intel_engine_get_seqno(engine), seqno - 1))
1208 + 	if (!intel_engine_has_started(engine, seqno))
1208 1209 		return false;
1209 1210
1210 1211 	/*
···
1221 1222 	irq = READ_ONCE(engine->breadcrumbs.irq_count);
1222 1223 	timeout_us += local_clock_us(&cpu);
1223 1224 	do {
1224 - 		if (i915_seqno_passed(intel_engine_get_seqno(engine), seqno))
1225 + 		if (intel_engine_has_completed(engine, seqno))
1225 1226 			return seqno == i915_request_global_seqno(rq);
1226 1227
1227 1228 		/*
+25 -14
drivers/gpu/drm/i915/i915_request.h
···
272 272 #define I915_WAIT_ALL BIT(2) /* used by i915_gem_object_wait() */
273 273 #define I915_WAIT_FOR_IDLE_BOOST BIT(3)
274 274
275 - static inline u32 intel_engine_get_seqno(struct intel_engine_cs *engine);
275 + static inline bool intel_engine_has_started(struct intel_engine_cs *engine,
276 + 					    u32 seqno);
277 + static inline bool intel_engine_has_completed(struct intel_engine_cs *engine,
278 + 					      u32 seqno);
276 279
277 280 /**
278 281  * Returns true if seq1 is later than seq2.
···
285 282 	return (s32)(seq1 - seq2) >= 0;
286 283 }
287 284
285 + /**
286 +  * i915_request_started - check if the request has begun being executed
287 +  * @rq: the request
288 +  *
289 +  * Returns true if the request has been submitted to hardware, and the hardware
290 +  * has advanced past the end of the previous request and so should be either
291 +  * currently processing the request (though it may be preempted and so
292 +  * not necessarily the next request to complete) or have completed the request.
293 +  */
294 + static inline bool i915_request_started(const struct i915_request *rq)
295 + {
296 + 	u32 seqno;
297 +
298 + 	seqno = i915_request_global_seqno(rq);
299 + 	if (!seqno) /* not yet submitted to HW */
300 + 		return false;
301 +
302 + 	return intel_engine_has_started(rq->engine, seqno);
303 + }
304 +
288 305 static inline bool
289 306 __i915_request_completed(const struct i915_request *rq, u32 seqno)
290 307 {
291 308 	GEM_BUG_ON(!seqno);
292 - 	return i915_seqno_passed(intel_engine_get_seqno(rq->engine), seqno) &&
309 + 	return intel_engine_has_completed(rq->engine, seqno) &&
293 310 	       seqno == i915_request_global_seqno(rq);
294 311 }
295 312
···
322 299 		return false;
323 300
324 301 	return __i915_request_completed(rq, seqno);
325 - }
326 -
327 - static inline bool i915_request_started(const struct i915_request *rq)
328 - {
329 - 	u32 seqno;
330 -
331 - 	seqno = i915_request_global_seqno(rq);
332 - 	if (!seqno)
333 - 		return false;
334 -
335 - 	return i915_seqno_passed(intel_engine_get_seqno(rq->engine),
336 - 				 seqno - 1);
337 302 }
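The intel_engine_has_started()/intel_engine_has_completed() helpers introduced above ultimately reduce to the wrap-safe comparison documented at i915_seqno_passed() ("Returns true if seq1 is later than seq2"). A minimal sketch of why reinterpreting the unsigned difference as signed survives 32-bit seqno wraparound; the sample values are arbitrary:

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* Same idiom as i915_seqno_passed(): subtract in unsigned arithmetic,
 * then reinterpret as signed, so any two seqnos less than 2^31 apart
 * compare correctly even across the 32-bit wrap. */
static bool seqno_passed(uint32_t seq1, uint32_t seq2)
{
	return (int32_t)(seq1 - seq2) >= 0;
}

int main(void)
{
	/* plain ordering */
	printf("%d\n", seqno_passed(100, 50));        /* 1 */
	/* across the wrap: 5 is "later" than 0xfffffff0 */
	printf("%d\n", seqno_passed(5, 0xfffffff0u)); /* 1 */
	printf("%d\n", seqno_passed(0xfffffff0u, 5)); /* 0 */
	return 0;
}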
+4 -1
drivers/gpu/drm/i915/i915_vma.c
···
405 405 	i915_vma_unpin(vma);
406 406 }
407 407
408 - void i915_vma_unpin_and_release(struct i915_vma **p_vma)
408 + void i915_vma_unpin_and_release(struct i915_vma **p_vma, unsigned int flags)
409 409 {
410 410 	struct i915_vma *vma;
411 411 	struct drm_i915_gem_object *obj;
···
419 419
420 420 	i915_vma_unpin(vma);
421 421 	i915_vma_close(vma);
422 +
423 + 	if (flags & I915_VMA_RELEASE_MAP)
424 + 		i915_gem_object_unpin_map(obj);
422 425
423 426 	__i915_gem_object_release_unless_active(obj);
424 427 }
+9 -1
drivers/gpu/drm/i915/i915_vma.h
···
138 138 				   struct i915_address_space *vm,
139 139 				   const struct i915_ggtt_view *view);
140 140
141 - void i915_vma_unpin_and_release(struct i915_vma **p_vma);
141 + void i915_vma_unpin_and_release(struct i915_vma **p_vma, unsigned int flags);
142 + #define I915_VMA_RELEASE_MAP BIT(0)
142 143
143 144 static inline bool i915_vma_is_active(struct i915_vma *vma)
144 145 {
···
208 207 	return lower_32_bits(vma->node.start);
209 208 }
210 209
211 + static inline u32 i915_ggtt_pin_bias(struct i915_vma *vma)
212 + {
213 + 	return i915_vm_to_ggtt(vma->vm)->pin_bias;
214 + }
215 +
211 215 static inline struct i915_vma *i915_vma_get(struct i915_vma *vma)
212 216 {
213 217 	i915_gem_object_get(vma->obj);
···
250 244 	cmp -= view->type;
251 245 	if (cmp)
252 246 		return cmp;
247 +
248 + 	assert_i915_gem_gtt_types();
253 249
254 250 	/* ggtt_view.type also encodes its size so that we both distinguish
255 251 	 * different views using it as a "type" and also use a compact (no
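i915_vma_unpin_and_release() now takes a flags word so a caller that also holds a kmap pin can drop it via I915_VMA_RELEASE_MAP in the same call. A hypothetical sketch of the general pattern, one teardown entry point with optional extra work selected by bit flags; the struct and names below are illustrative, not the driver's:

#include <stdio.h>

#define BIT(n) (1u << (n))
#define RELEASE_MAP BIT(0) /* caller also holds a mapping pin */

struct object {
	int pin_count;
	int map_count;
};

/* Single teardown path; flags select optional extra work so call
 * sites don't have to remember a second unpin call. */
static void unpin_and_release(struct object *obj, unsigned int flags)
{
	obj->pin_count--;
	if (flags & RELEASE_MAP)
		obj->map_count--;
}

int main(void)
{
	struct object obj = { .pin_count = 1, .map_count = 1 };

	unpin_and_release(&obj, RELEASE_MAP);
	printf("pins=%d maps=%d\n", obj.pin_count, obj.map_count);
	return 0;
}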
+4 -2
drivers/gpu/drm/i915/intel_atomic_plane.c
···
159 159 	}
160 160
161 161 	intel_state->base.visible = false;
162 - 	ret = intel_plane->check_plane(intel_plane, crtc_state, intel_state);
162 + 	ret = intel_plane->check_plane(crtc_state, intel_state);
163 163 	if (ret)
164 164 		return ret;
165 165
···
170 170 	if (state->fb && INTEL_GEN(dev_priv) >= 9 && crtc_state->base.enable &&
171 171 	    adjusted_mode->flags & DRM_MODE_FLAG_INTERLACE) {
172 172 		if (state->fb->modifier == I915_FORMAT_MOD_Y_TILED ||
173 - 		    state->fb->modifier == I915_FORMAT_MOD_Yf_TILED) {
173 + 		    state->fb->modifier == I915_FORMAT_MOD_Yf_TILED ||
174 + 		    state->fb->modifier == I915_FORMAT_MOD_Y_TILED_CCS ||
175 + 		    state->fb->modifier == I915_FORMAT_MOD_Yf_TILED_CCS) {
174 176 			DRM_DEBUG_KMS("Y/Yf tiling not supported in IF-ID mode\n");
175 177 			return -EINVAL;
176 178 		}
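The hunk above widens the Gen9+ interlaced-mode (IF-ID) restriction from plain Y/Yf tiling to the CCS-compressed variants as well. A standalone version of the same predicate; the modifier codes here are placeholders rather than the real DRM fourcc modifier values:

#include <stdio.h>
#include <stdbool.h>

/* Placeholder modifier codes standing in for the DRM fourcc values. */
enum fb_modifier {
	MOD_LINEAR,
	MOD_X_TILED,
	MOD_Y_TILED,
	MOD_Yf_TILED,
	MOD_Y_TILED_CCS,
	MOD_Yf_TILED_CCS,
};

/* Y-major tiling, with or without compression, cannot be scanned out
 * in interlace-field-indication mode, so reject it up front. */
static bool modifier_ok_for_interlace(enum fb_modifier mod)
{
	switch (mod) {
	case MOD_Y_TILED:
	case MOD_Yf_TILED:
	case MOD_Y_TILED_CCS:
	case MOD_Yf_TILED_CCS:
		return false;
	default:
		return true;
	}
}

int main(void)
{
	printf("%d %d\n",
	       modifier_ok_for_interlace(MOD_X_TILED),      /* 1 */
	       modifier_ok_for_interlace(MOD_Y_TILED_CCS)); /* 0 */
	return 0;
}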
+2 -4
drivers/gpu/drm/i915/intel_breadcrumbs.c
···
256 256 	spin_unlock(&b->irq_lock);
257 257
258 258 	rbtree_postorder_for_each_entry_safe(wait, n, &b->waiters, node) {
259 - 		GEM_BUG_ON(!i915_seqno_passed(intel_engine_get_seqno(engine),
260 - 					      wait->seqno));
259 + 		GEM_BUG_ON(!intel_engine_signaled(engine, wait->seqno));
261 260 		RB_CLEAR_NODE(&wait->node);
262 261 		wake_up_process(wait->tsk);
263 262 	}
···
507 508 		return armed;
508 509
509 510 	/* Make the caller recheck if its request has already started. */
510 - 	return i915_seqno_passed(intel_engine_get_seqno(engine),
511 - 				 wait->seqno - 1);
511 + 	return intel_engine_has_started(engine, wait->seqno);
512 512 }
513 513
514 514 static inline bool chain_wakeup(struct rb_node *rb, int priority)
+24 -9
drivers/gpu/drm/i915/intel_csr.c
··· 55 55 #define BXT_CSR_VERSION_REQUIRED CSR_VERSION(1, 7) 56 56 57 57 58 - #define CSR_MAX_FW_SIZE 0x2FFF 58 + #define BXT_CSR_MAX_FW_SIZE 0x3000 59 + #define GLK_CSR_MAX_FW_SIZE 0x4000 60 + #define ICL_CSR_MAX_FW_SIZE 0x6000 59 61 #define CSR_DEFAULT_FW_OFFSET 0xFFFFFFFF 60 62 61 63 struct intel_css_header { ··· 281 279 struct intel_csr *csr = &dev_priv->csr; 282 280 const struct stepping_info *si = intel_get_stepping_info(dev_priv); 283 281 uint32_t dmc_offset = CSR_DEFAULT_FW_OFFSET, readcount = 0, nbytes; 282 + uint32_t max_fw_size = 0; 284 283 uint32_t i; 285 284 uint32_t *dmc_payload; 286 285 uint32_t required_version; ··· 362 359 si->stepping); 363 360 return NULL; 364 361 } 362 + /* Convert dmc_offset into number of bytes. By default it is in dwords*/ 363 + dmc_offset *= 4; 365 364 readcount += dmc_offset; 366 365 367 366 /* Extract dmc_header information. */ ··· 396 391 397 392 /* fw_size is in dwords, so multiplied by 4 to convert into bytes. */ 398 393 nbytes = dmc_header->fw_size * 4; 399 - if (nbytes > CSR_MAX_FW_SIZE) { 400 - DRM_ERROR("DMC firmware too big (%u bytes)\n", nbytes); 394 + if (INTEL_GEN(dev_priv) >= 11) 395 + max_fw_size = ICL_CSR_MAX_FW_SIZE; 396 + else if (IS_CANNONLAKE(dev_priv) || IS_GEMINILAKE(dev_priv)) 397 + max_fw_size = GLK_CSR_MAX_FW_SIZE; 398 + else if (IS_GEN9(dev_priv)) 399 + max_fw_size = BXT_CSR_MAX_FW_SIZE; 400 + else 401 + MISSING_CASE(INTEL_REVID(dev_priv)); 402 + if (nbytes > max_fw_size) { 403 + DRM_ERROR("DMC FW too big (%u bytes)\n", nbytes); 401 404 return NULL; 402 405 } 403 406 csr->dmc_fw_size = dmc_header->fw_size; ··· 481 468 csr->fw_path = I915_CSR_SKL; 482 469 else if (IS_BROXTON(dev_priv)) 483 470 csr->fw_path = I915_CSR_BXT; 484 - else { 485 - DRM_ERROR("Unexpected: no known CSR firmware for platform\n"); 486 - return; 487 - } 488 - 489 - DRM_DEBUG_KMS("Loading %s\n", csr->fw_path); 490 471 491 472 /* 492 473 * Obtain a runtime pm reference, until CSR is loaded, ··· 488 481 */ 489 482 intel_display_power_get(dev_priv, POWER_DOMAIN_INIT); 490 483 484 + if (csr->fw_path == NULL) { 485 + DRM_DEBUG_KMS("No known CSR firmware for platform, disabling runtime PM\n"); 486 + WARN_ON(!IS_ALPHA_SUPPORT(INTEL_INFO(dev_priv))); 487 + 488 + return; 489 + } 490 + 491 + DRM_DEBUG_KMS("Loading %s\n", csr->fw_path); 491 492 schedule_work(&dev_priv->csr.work); 492 493 } 493 494
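Two corrections above: the firmware header's dmc_offset and fw_size fields are dword counts that must be scaled to bytes, and the maximum firmware size is now chosen per platform instead of one global cap. A simplified sketch of that flow, with the driver's platform checks collapsed into an illustrative enum:

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define BXT_CSR_MAX_FW_SIZE 0x3000
#define GLK_CSR_MAX_FW_SIZE 0x4000
#define ICL_CSR_MAX_FW_SIZE 0x6000

/* simplified buckets; the driver checks gen and platform macros */
enum platform { PLAT_GEN9, PLAT_GLK_CNL, PLAT_GEN11 };

static uint32_t max_fw_size(enum platform p)
{
	switch (p) {
	case PLAT_GEN11:   return ICL_CSR_MAX_FW_SIZE;
	case PLAT_GLK_CNL: return GLK_CSR_MAX_FW_SIZE;
	default:           return BXT_CSR_MAX_FW_SIZE;
	}
}

/* The CSR header stores sizes in dwords; scale to bytes before
 * comparing against the per-platform cap. */
static bool fw_size_ok(uint32_t fw_size_dw, enum platform p)
{
	return fw_size_dw * 4 <= max_fw_size(p);
}

int main(void)
{
	printf("%d\n", fw_size_ok(0x1000, PLAT_GEN9));  /* 0x4000 > 0x3000: 0 */
	printf("%d\n", fw_size_ok(0x1000, PLAT_GEN11)); /* 0x4000 <= 0x6000: 1 */
	return 0;
}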
+223 -17
drivers/gpu/drm/i915/intel_ddi.c
··· 1414 1414 break; 1415 1415 } 1416 1416 1417 - ref_clock = dev_priv->cdclk.hw.ref; 1417 + ref_clock = cnl_hdmi_pll_ref_clock(dev_priv); 1418 1418 1419 1419 dco_freq = (cfgcr0 & DPLL_CFGCR0_DCO_INTEGER_MASK) * ref_clock; 1420 1420 ··· 1425 1425 return 0; 1426 1426 1427 1427 return dco_freq / (p0 * p1 * p2 * 5); 1428 + } 1429 + 1430 + static int icl_calc_tbt_pll_link(struct drm_i915_private *dev_priv, 1431 + enum port port) 1432 + { 1433 + u32 val = I915_READ(DDI_CLK_SEL(port)) & DDI_CLK_SEL_MASK; 1434 + 1435 + switch (val) { 1436 + case DDI_CLK_SEL_NONE: 1437 + return 0; 1438 + case DDI_CLK_SEL_TBT_162: 1439 + return 162000; 1440 + case DDI_CLK_SEL_TBT_270: 1441 + return 270000; 1442 + case DDI_CLK_SEL_TBT_540: 1443 + return 540000; 1444 + case DDI_CLK_SEL_TBT_810: 1445 + return 810000; 1446 + default: 1447 + MISSING_CASE(val); 1448 + return 0; 1449 + } 1450 + } 1451 + 1452 + static int icl_calc_mg_pll_link(struct drm_i915_private *dev_priv, 1453 + enum port port) 1454 + { 1455 + u32 mg_pll_div0, mg_clktop_hsclkctl; 1456 + u32 m1, m2_int, m2_frac, div1, div2, refclk; 1457 + u64 tmp; 1458 + 1459 + refclk = dev_priv->cdclk.hw.ref; 1460 + 1461 + mg_pll_div0 = I915_READ(MG_PLL_DIV0(port)); 1462 + mg_clktop_hsclkctl = I915_READ(MG_CLKTOP2_HSCLKCTL(port)); 1463 + 1464 + m1 = I915_READ(MG_PLL_DIV1(port)) & MG_PLL_DIV1_FBPREDIV_MASK; 1465 + m2_int = mg_pll_div0 & MG_PLL_DIV0_FBDIV_INT_MASK; 1466 + m2_frac = (mg_pll_div0 & MG_PLL_DIV0_FRACNEN_H) ? 1467 + (mg_pll_div0 & MG_PLL_DIV0_FBDIV_FRAC_MASK) >> 1468 + MG_PLL_DIV0_FBDIV_FRAC_SHIFT : 0; 1469 + 1470 + switch (mg_clktop_hsclkctl & MG_CLKTOP2_HSCLKCTL_HSDIV_RATIO_MASK) { 1471 + case MG_CLKTOP2_HSCLKCTL_HSDIV_RATIO_2: 1472 + div1 = 2; 1473 + break; 1474 + case MG_CLKTOP2_HSCLKCTL_HSDIV_RATIO_3: 1475 + div1 = 3; 1476 + break; 1477 + case MG_CLKTOP2_HSCLKCTL_HSDIV_RATIO_5: 1478 + div1 = 5; 1479 + break; 1480 + case MG_CLKTOP2_HSCLKCTL_HSDIV_RATIO_7: 1481 + div1 = 7; 1482 + break; 1483 + default: 1484 + MISSING_CASE(mg_clktop_hsclkctl); 1485 + return 0; 1486 + } 1487 + 1488 + div2 = (mg_clktop_hsclkctl & MG_CLKTOP2_HSCLKCTL_DSDIV_RATIO_MASK) >> 1489 + MG_CLKTOP2_HSCLKCTL_DSDIV_RATIO_SHIFT; 1490 + /* div2 value of 0 is same as 1 means no div */ 1491 + if (div2 == 0) 1492 + div2 = 1; 1493 + 1494 + /* 1495 + * Adjust the original formula to delay the division by 2^22 in order to 1496 + * minimize possible rounding errors. 
1497 + */ 1498 + tmp = (u64)m1 * m2_int * refclk + 1499 + (((u64)m1 * m2_frac * refclk) >> 22); 1500 + tmp = div_u64(tmp, 5 * div1 * div2); 1501 + 1502 + return tmp; 1428 1503 } 1429 1504 1430 1505 static void ddi_dotclock_get(struct intel_crtc_state *pipe_config) ··· 1542 1467 link_clock = icl_calc_dp_combo_pll_link(dev_priv, 1543 1468 pll_id); 1544 1469 } else { 1545 - /* FIXME - Add for MG PLL */ 1546 - WARN(1, "MG PLL clock_get code not implemented yet\n"); 1470 + if (pll_id == DPLL_ID_ICL_TBTPLL) 1471 + link_clock = icl_calc_tbt_pll_link(dev_priv, port); 1472 + else 1473 + link_clock = icl_calc_mg_pll_link(dev_priv, port); 1547 1474 } 1548 1475 1549 1476 pipe_config->port_clock = link_clock; ··· 2545 2468 I915_WRITE(ICL_PORT_TX_DW5_GRP(port), val); 2546 2469 } 2547 2470 2548 - static void icl_ddi_vswing_sequence(struct intel_encoder *encoder, u32 level, 2471 + static void icl_mg_phy_ddi_vswing_sequence(struct intel_encoder *encoder, 2472 + int link_clock, 2473 + u32 level) 2474 + { 2475 + struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); 2476 + enum port port = encoder->port; 2477 + const struct icl_mg_phy_ddi_buf_trans *ddi_translations; 2478 + u32 n_entries, val; 2479 + int ln; 2480 + 2481 + n_entries = ARRAY_SIZE(icl_mg_phy_ddi_translations); 2482 + ddi_translations = icl_mg_phy_ddi_translations; 2483 + /* The table does not have values for level 3 and level 9. */ 2484 + if (level >= n_entries || level == 3 || level == 9) { 2485 + DRM_DEBUG_KMS("DDI translation not found for level %d. Using %d instead.", 2486 + level, n_entries - 2); 2487 + level = n_entries - 2; 2488 + } 2489 + 2490 + /* Set MG_TX_LINK_PARAMS cri_use_fs32 to 0. */ 2491 + for (ln = 0; ln < 2; ln++) { 2492 + val = I915_READ(MG_TX1_LINK_PARAMS(port, ln)); 2493 + val &= ~CRI_USE_FS32; 2494 + I915_WRITE(MG_TX1_LINK_PARAMS(port, ln), val); 2495 + 2496 + val = I915_READ(MG_TX2_LINK_PARAMS(port, ln)); 2497 + val &= ~CRI_USE_FS32; 2498 + I915_WRITE(MG_TX2_LINK_PARAMS(port, ln), val); 2499 + } 2500 + 2501 + /* Program MG_TX_SWINGCTRL with values from vswing table */ 2502 + for (ln = 0; ln < 2; ln++) { 2503 + val = I915_READ(MG_TX1_SWINGCTRL(port, ln)); 2504 + val &= ~CRI_TXDEEMPH_OVERRIDE_17_12_MASK; 2505 + val |= CRI_TXDEEMPH_OVERRIDE_17_12( 2506 + ddi_translations[level].cri_txdeemph_override_17_12); 2507 + I915_WRITE(MG_TX1_SWINGCTRL(port, ln), val); 2508 + 2509 + val = I915_READ(MG_TX2_SWINGCTRL(port, ln)); 2510 + val &= ~CRI_TXDEEMPH_OVERRIDE_17_12_MASK; 2511 + val |= CRI_TXDEEMPH_OVERRIDE_17_12( 2512 + ddi_translations[level].cri_txdeemph_override_17_12); 2513 + I915_WRITE(MG_TX2_SWINGCTRL(port, ln), val); 2514 + } 2515 + 2516 + /* Program MG_TX_DRVCTRL with values from vswing table */ 2517 + for (ln = 0; ln < 2; ln++) { 2518 + val = I915_READ(MG_TX1_DRVCTRL(port, ln)); 2519 + val &= ~(CRI_TXDEEMPH_OVERRIDE_11_6_MASK | 2520 + CRI_TXDEEMPH_OVERRIDE_5_0_MASK); 2521 + val |= CRI_TXDEEMPH_OVERRIDE_5_0( 2522 + ddi_translations[level].cri_txdeemph_override_5_0) | 2523 + CRI_TXDEEMPH_OVERRIDE_11_6( 2524 + ddi_translations[level].cri_txdeemph_override_11_6) | 2525 + CRI_TXDEEMPH_OVERRIDE_EN; 2526 + I915_WRITE(MG_TX1_DRVCTRL(port, ln), val); 2527 + 2528 + val = I915_READ(MG_TX2_DRVCTRL(port, ln)); 2529 + val &= ~(CRI_TXDEEMPH_OVERRIDE_11_6_MASK | 2530 + CRI_TXDEEMPH_OVERRIDE_5_0_MASK); 2531 + val |= CRI_TXDEEMPH_OVERRIDE_5_0( 2532 + ddi_translations[level].cri_txdeemph_override_5_0) | 2533 + CRI_TXDEEMPH_OVERRIDE_11_6( 2534 + ddi_translations[level].cri_txdeemph_override_11_6) | 2535 + CRI_TXDEEMPH_OVERRIDE_EN; 
2536 + I915_WRITE(MG_TX2_DRVCTRL(port, ln), val); 2537 + 2538 + /* FIXME: Program CRI_LOADGEN_SEL after the spec is updated */ 2539 + } 2540 + 2541 + /* 2542 + * Program MG_CLKHUB<LN, port being used> with value from frequency table 2543 + * In case of Legacy mode on MG PHY, both TX1 and TX2 enabled so use the 2544 + * values from table for which TX1 and TX2 enabled. 2545 + */ 2546 + for (ln = 0; ln < 2; ln++) { 2547 + val = I915_READ(MG_CLKHUB(port, ln)); 2548 + if (link_clock < 300000) 2549 + val |= CFG_LOW_RATE_LKREN_EN; 2550 + else 2551 + val &= ~CFG_LOW_RATE_LKREN_EN; 2552 + I915_WRITE(MG_CLKHUB(port, ln), val); 2553 + } 2554 + 2555 + /* Program the MG_TX_DCC<LN, port being used> based on the link frequency */ 2556 + for (ln = 0; ln < 2; ln++) { 2557 + val = I915_READ(MG_TX1_DCC(port, ln)); 2558 + val &= ~CFG_AMI_CK_DIV_OVERRIDE_VAL_MASK; 2559 + if (link_clock <= 500000) { 2560 + val &= ~CFG_AMI_CK_DIV_OVERRIDE_EN; 2561 + } else { 2562 + val |= CFG_AMI_CK_DIV_OVERRIDE_EN | 2563 + CFG_AMI_CK_DIV_OVERRIDE_VAL(1); 2564 + } 2565 + I915_WRITE(MG_TX1_DCC(port, ln), val); 2566 + 2567 + val = I915_READ(MG_TX2_DCC(port, ln)); 2568 + val &= ~CFG_AMI_CK_DIV_OVERRIDE_VAL_MASK; 2569 + if (link_clock <= 500000) { 2570 + val &= ~CFG_AMI_CK_DIV_OVERRIDE_EN; 2571 + } else { 2572 + val |= CFG_AMI_CK_DIV_OVERRIDE_EN | 2573 + CFG_AMI_CK_DIV_OVERRIDE_VAL(1); 2574 + } 2575 + I915_WRITE(MG_TX2_DCC(port, ln), val); 2576 + } 2577 + 2578 + /* Program MG_TX_PISO_READLOAD with values from vswing table */ 2579 + for (ln = 0; ln < 2; ln++) { 2580 + val = I915_READ(MG_TX1_PISO_READLOAD(port, ln)); 2581 + val |= CRI_CALCINIT; 2582 + I915_WRITE(MG_TX1_PISO_READLOAD(port, ln), val); 2583 + 2584 + val = I915_READ(MG_TX2_PISO_READLOAD(port, ln)); 2585 + val |= CRI_CALCINIT; 2586 + I915_WRITE(MG_TX2_PISO_READLOAD(port, ln), val); 2587 + } 2588 + } 2589 + 2590 + static void icl_ddi_vswing_sequence(struct intel_encoder *encoder, 2591 + int link_clock, 2592 + u32 level, 2549 2593 enum intel_output_type type) 2550 2594 { 2551 2595 enum port port = encoder->port; ··· 2674 2476 if (port == PORT_A || port == PORT_B) 2675 2477 icl_combo_phy_ddi_vswing_sequence(encoder, level, type); 2676 2478 else 2677 - /* Not Implemented Yet */ 2678 - WARN_ON(1); 2479 + icl_mg_phy_ddi_vswing_sequence(encoder, link_clock, level); 2679 2480 } 2680 2481 2681 2482 static uint32_t translate_signal_level(int signal_levels) ··· 2709 2512 int level = intel_ddi_dp_level(intel_dp); 2710 2513 2711 2514 if (IS_ICELAKE(dev_priv)) 2712 - icl_ddi_vswing_sequence(encoder, level, encoder->type); 2515 + icl_ddi_vswing_sequence(encoder, intel_dp->link_rate, 2516 + level, encoder->type); 2713 2517 else if (IS_CANNONLAKE(dev_priv)) 2714 2518 cnl_ddi_vswing_sequence(encoder, level, encoder->type); 2715 2519 else ··· 2890 2692 2891 2693 intel_display_power_get(dev_priv, dig_port->ddi_io_power_domain); 2892 2694 2695 + icl_program_mg_dp_mode(intel_dp); 2696 + icl_disable_phy_clock_gating(dig_port); 2697 + 2893 2698 if (IS_ICELAKE(dev_priv)) 2894 - icl_ddi_vswing_sequence(encoder, level, encoder->type); 2699 + icl_ddi_vswing_sequence(encoder, crtc_state->port_clock, 2700 + level, encoder->type); 2895 2701 else if (IS_CANNONLAKE(dev_priv)) 2896 2702 cnl_ddi_vswing_sequence(encoder, level, encoder->type); 2897 2703 else if (IS_GEN9_LP(dev_priv)) ··· 2910 2708 if (port != PORT_A || INTEL_GEN(dev_priv) >= 9) 2911 2709 intel_dp_stop_link_train(intel_dp); 2912 2710 2913 - intel_ddi_enable_pipe_clock(crtc_state); 2711 + icl_enable_phy_clock_gating(dig_port); 2712 + 2713 + if 
(!is_mst) 2714 + intel_ddi_enable_pipe_clock(crtc_state); 2914 2715 } 2915 2716 2916 2717 static void intel_ddi_pre_enable_hdmi(struct intel_encoder *encoder, ··· 2933 2728 intel_display_power_get(dev_priv, dig_port->ddi_io_power_domain); 2934 2729 2935 2730 if (IS_ICELAKE(dev_priv)) 2936 - icl_ddi_vswing_sequence(encoder, level, INTEL_OUTPUT_HDMI); 2731 + icl_ddi_vswing_sequence(encoder, crtc_state->port_clock, 2732 + level, INTEL_OUTPUT_HDMI); 2937 2733 else if (IS_CANNONLAKE(dev_priv)) 2938 2734 cnl_ddi_vswing_sequence(encoder, level, INTEL_OUTPUT_HDMI); 2939 2735 else if (IS_GEN9_LP(dev_priv)) ··· 3016 2810 bool is_mst = intel_crtc_has_type(old_crtc_state, 3017 2811 INTEL_OUTPUT_DP_MST); 3018 2812 3019 - intel_ddi_disable_pipe_clock(old_crtc_state); 3020 - 3021 - /* 3022 - * Power down sink before disabling the port, otherwise we end 3023 - * up getting interrupts from the sink on detecting link loss. 3024 - */ 3025 - if (!is_mst) 2813 + if (!is_mst) { 2814 + intel_ddi_disable_pipe_clock(old_crtc_state); 2815 + /* 2816 + * Power down sink before disabling the port, otherwise we end 2817 + * up getting interrupts from the sink on detecting link loss. 2818 + */ 3026 2819 intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_OFF); 2820 + } 3027 2821 3028 2822 intel_disable_ddi_buf(encoder); 3029 2823
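icl_calc_mg_pll_link() above reverses the MG PLL configuration into a link rate; as its comment notes, the fractional feedback term is scaled by 2^22, so the division is folded in late to limit rounding error. A standalone version of the same arithmetic, link = m1 * (m2_int + m2_frac / 2^22) * refclk / (5 * div1 * div2); the divider values fed in below are made up for illustration, not dumped from hardware:

#include <stdio.h>
#include <stdint.h>

/* Same formula as the readout path:
 *   dco  = m1 * (m2_int + m2_frac / 2^22) * refclk
 *   link = dco / (5 * div1 * div2)
 * computed as integer math with the 2^22 division delayed. */
static uint32_t mg_pll_link_clock(uint32_t m1, uint32_t m2_int,
				  uint32_t m2_frac, uint32_t refclk,
				  uint32_t div1, uint32_t div2)
{
	uint64_t tmp;

	if (div2 == 0) /* a divider field of 0 means "no division" */
		div2 = 1;

	tmp = (uint64_t)m1 * m2_int * refclk +
	      (((uint64_t)m1 * m2_frac * refclk) >> 22);
	return (uint32_t)(tmp / (5 * div1 * div2));
}

int main(void)
{
	/* made-up divider values; refclk in kHz as in the driver */
	printf("%u kHz\n",
	       (unsigned)mg_pll_link_clock(2, 100, 0, 38400, 2, 1));
	return 0;
}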
+1 -1
drivers/gpu/drm/i915/intel_device_info.h
···
103 103 	func(has_psr); \
104 104 	func(has_rc6); \
105 105 	func(has_rc6p); \
106 - 	func(has_resource_streamer); \
107 106 	func(has_runtime_pm); \
108 107 	func(has_snoop); \
108 + 	func(has_coherent_ggtt); \
109 109 	func(unfenced_needs_alignment); \
110 110 	func(cursor_needs_physical); \
111 111 	func(hws_needs_physical); \
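The flag list above is an X-macro: DEV_INFO_FOR_EACH_FLAG expands func(...) once per feature, so dropping has_resource_streamer and adding has_coherent_ggtt in this one list updates every expansion site at once. A minimal sketch of the pattern with hypothetical flag names:

#include <stdio.h>

/* One central list; each user supplies its own 'func'. */
#define FOR_EACH_FLAG(func) \
	func(has_foo) \
	func(has_bar) \
	func(has_baz)

/* Expansion 1: generate struct members from the list. */
#define DEFINE_FLAG(name) unsigned int name : 1;
struct device_info {
	FOR_EACH_FLAG(DEFINE_FLAG)
};
#undef DEFINE_FLAG

/* Expansion 2: generate a debug printer from the same list. */
#define PRINT_FLAG(name) printf(#name ": %u\n", info->name);
static void print_flags(const struct device_info *info)
{
	FOR_EACH_FLAG(PRINT_FLAG)
}
#undef PRINT_FLAG

int main(void)
{
	struct device_info info = { .has_bar = 1 };

	print_flags(&info);
	return 0;
}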
+115 -57
drivers/gpu/drm/i915/intel_display.c
··· 2474 2474 } 2475 2475 } 2476 2476 2477 + bool is_ccs_modifier(u64 modifier) 2478 + { 2479 + return modifier == I915_FORMAT_MOD_Y_TILED_CCS || 2480 + modifier == I915_FORMAT_MOD_Yf_TILED_CCS; 2481 + } 2482 + 2477 2483 static int 2478 2484 intel_fill_fb_info(struct drm_i915_private *dev_priv, 2479 2485 struct drm_framebuffer *fb) ··· 2510 2504 return ret; 2511 2505 } 2512 2506 2513 - if ((fb->modifier == I915_FORMAT_MOD_Y_TILED_CCS || 2514 - fb->modifier == I915_FORMAT_MOD_Yf_TILED_CCS) && i == 1) { 2507 + if (is_ccs_modifier(fb->modifier) && i == 1) { 2515 2508 int hsub = fb->format->hsub; 2516 2509 int vsub = fb->format->vsub; 2517 2510 int tile_width, tile_height; ··· 3060 3055 * CCS AUX surface doesn't have its own x/y offsets, we must make sure 3061 3056 * they match with the main surface x/y offsets. 3062 3057 */ 3063 - if (fb->modifier == I915_FORMAT_MOD_Y_TILED_CCS || 3064 - fb->modifier == I915_FORMAT_MOD_Yf_TILED_CCS) { 3058 + if (is_ccs_modifier(fb->modifier)) { 3065 3059 while (!skl_check_main_ccs_coordinates(plane_state, x, y, offset)) { 3066 3060 if (offset == 0) 3067 3061 break; ··· 3194 3190 ret = skl_check_nv12_aux_surface(plane_state); 3195 3191 if (ret) 3196 3192 return ret; 3197 - } else if (fb->modifier == I915_FORMAT_MOD_Y_TILED_CCS || 3198 - fb->modifier == I915_FORMAT_MOD_Yf_TILED_CCS) { 3193 + } else if (is_ccs_modifier(fb->modifier)) { 3199 3194 ret = skl_check_ccs_aux_surface(plane_state); 3200 3195 if (ret) 3201 3196 return ret; ··· 3555 3552 case I915_FORMAT_MOD_Y_TILED: 3556 3553 return PLANE_CTL_TILED_Y; 3557 3554 case I915_FORMAT_MOD_Y_TILED_CCS: 3558 - return PLANE_CTL_TILED_Y | PLANE_CTL_DECOMPRESSION_ENABLE; 3555 + return PLANE_CTL_TILED_Y | PLANE_CTL_RENDER_DECOMPRESSION_ENABLE; 3559 3556 case I915_FORMAT_MOD_Yf_TILED: 3560 3557 return PLANE_CTL_TILED_YF; 3561 3558 case I915_FORMAT_MOD_Yf_TILED_CCS: 3562 - return PLANE_CTL_TILED_YF | PLANE_CTL_DECOMPRESSION_ENABLE; 3559 + return PLANE_CTL_TILED_YF | PLANE_CTL_RENDER_DECOMPRESSION_ENABLE; 3563 3560 default: 3564 3561 MISSING_CASE(fb_modifier); 3565 3562 } ··· 5082 5079 mutex_lock(&dev_priv->pcu_lock); 5083 5080 WARN_ON(sandybridge_pcode_write(dev_priv, DISPLAY_IPS_CONTROL, 0)); 5084 5081 mutex_unlock(&dev_priv->pcu_lock); 5085 - /* wait for pcode to finish disabling IPS, which may take up to 42ms */ 5082 + /* 5083 + * Wait for PCODE to finish disabling IPS. The BSpec specified 5084 + * 42ms timeout value leads to occasional timeouts so use 100ms 5085 + * instead. 
5086 + */ 5086 5087 if (intel_wait_for_register(dev_priv, 5087 5088 IPS_CTL, IPS_ENABLE, 0, 5088 - 42)) 5089 + 100)) 5089 5090 DRM_ERROR("Timed out waiting for IPS disable\n"); 5090 5091 } else { 5091 5092 I915_WRITE(IPS_CTL, 0); ··· 8806 8799 fb->modifier = I915_FORMAT_MOD_X_TILED; 8807 8800 break; 8808 8801 case PLANE_CTL_TILED_Y: 8809 - if (val & PLANE_CTL_DECOMPRESSION_ENABLE) 8802 + if (val & PLANE_CTL_RENDER_DECOMPRESSION_ENABLE) 8810 8803 fb->modifier = I915_FORMAT_MOD_Y_TILED_CCS; 8811 8804 else 8812 8805 fb->modifier = I915_FORMAT_MOD_Y_TILED; 8813 8806 break; 8814 8807 case PLANE_CTL_TILED_YF: 8815 - if (val & PLANE_CTL_DECOMPRESSION_ENABLE) 8808 + if (val & PLANE_CTL_RENDER_DECOMPRESSION_ENABLE) 8816 8809 fb->modifier = I915_FORMAT_MOD_Yf_TILED_CCS; 8817 8810 else 8818 8811 fb->modifier = I915_FORMAT_MOD_Yf_TILED; ··· 8981 8974 I915_STATE_WARN(crtc->active, "CRTC for pipe %c enabled\n", 8982 8975 pipe_name(crtc->pipe)); 8983 8976 8984 - I915_STATE_WARN(I915_READ(HSW_PWR_WELL_CTL_DRIVER(HSW_DISP_PW_GLOBAL)), 8977 + I915_STATE_WARN(I915_READ(HSW_PWR_WELL_CTL2), 8985 8978 "Display power well on\n"); 8986 8979 I915_STATE_WARN(I915_READ(SPLL_CTL) & SPLL_PLL_ENABLE, "SPLL enabled\n"); 8987 8980 I915_STATE_WARN(I915_READ(WRPLL_CTL(0)) & WRPLL_PLL_ENABLE, "WRPLL1 enabled\n"); ··· 9698 9691 return intel_cursor_size_ok(plane_state) && IS_ALIGNED(width, 64); 9699 9692 } 9700 9693 9701 - static int i845_check_cursor(struct intel_plane *plane, 9702 - struct intel_crtc_state *crtc_state, 9694 + static int i845_check_cursor(struct intel_crtc_state *crtc_state, 9703 9695 struct intel_plane_state *plane_state) 9704 9696 { 9705 9697 const struct drm_framebuffer *fb = plane_state->base.fb; ··· 9888 9882 return true; 9889 9883 } 9890 9884 9891 - static int i9xx_check_cursor(struct intel_plane *plane, 9892 - struct intel_crtc_state *crtc_state, 9885 + static int i9xx_check_cursor(struct intel_crtc_state *crtc_state, 9893 9886 struct intel_plane_state *plane_state) 9894 9887 { 9888 + struct intel_plane *plane = to_intel_plane(plane_state->base.plane); 9895 9889 struct drm_i915_private *dev_priv = to_i915(plane->base.dev); 9896 9890 const struct drm_framebuffer *fb = plane_state->base.fb; 9897 9891 enum pipe pipe = plane->pipe; ··· 12745 12739 * down. 
12746 12740 */ 12747 12741 INIT_WORK(&state->commit_work, intel_atomic_cleanup_work); 12748 - schedule_work(&state->commit_work); 12742 + queue_work(system_highpri_wq, &state->commit_work); 12749 12743 } 12750 12744 12751 12745 static void intel_atomic_commit_work(struct work_struct *work) ··· 12975 12969 INTEL_INFO(dev_priv)->cursor_needs_physical) { 12976 12970 struct drm_i915_gem_object *obj = intel_fb_obj(fb); 12977 12971 const int align = intel_cursor_alignment(dev_priv); 12972 + int err; 12978 12973 12979 - return i915_gem_object_attach_phys(obj, align); 12974 + err = i915_gem_object_attach_phys(obj, align); 12975 + if (err) 12976 + return err; 12980 12977 } 12981 12978 12982 12979 vma = intel_pin_and_fence_fb_obj(fb, ··· 13198 13189 } 13199 13190 13200 13191 static int 13201 - intel_check_primary_plane(struct intel_plane *plane, 13202 - struct intel_crtc_state *crtc_state, 13192 + intel_check_primary_plane(struct intel_crtc_state *crtc_state, 13203 13193 struct intel_plane_state *state) 13204 13194 { 13195 + struct intel_plane *plane = to_intel_plane(state->base.plane); 13205 13196 struct drm_i915_private *dev_priv = to_i915(plane->base.dev); 13206 13197 struct drm_crtc *crtc = state->base.crtc; 13207 13198 int min_scale = DRM_PLANE_HELPER_NO_SCALING; ··· 13409 13400 case DRM_FORMAT_XBGR8888: 13410 13401 case DRM_FORMAT_ARGB8888: 13411 13402 case DRM_FORMAT_ABGR8888: 13412 - if (modifier == I915_FORMAT_MOD_Yf_TILED_CCS || 13413 - modifier == I915_FORMAT_MOD_Y_TILED_CCS) 13403 + if (is_ccs_modifier(modifier)) 13414 13404 return true; 13415 13405 /* fall through */ 13416 13406 case DRM_FORMAT_RGB565: ··· 13628 13620 bool skl_plane_has_planar(struct drm_i915_private *dev_priv, 13629 13621 enum pipe pipe, enum plane_id plane_id) 13630 13622 { 13631 - if (plane_id == PLANE_PRIMARY) { 13632 - if (IS_SKYLAKE(dev_priv) || IS_BROXTON(dev_priv)) 13633 - return false; 13634 - else if ((INTEL_GEN(dev_priv) == 9 && pipe == PIPE_C) && 13635 - !IS_GEMINILAKE(dev_priv)) 13636 - return false; 13637 - } else if (plane_id >= PLANE_SPRITE0) { 13638 - if (plane_id == PLANE_CURSOR) 13639 - return false; 13640 - if (IS_GEMINILAKE(dev_priv) || INTEL_GEN(dev_priv) == 10) { 13641 - if (plane_id != PLANE_SPRITE0) 13642 - return false; 13643 - } else { 13644 - if (plane_id != PLANE_SPRITE0 || pipe == PIPE_C || 13645 - IS_SKYLAKE(dev_priv) || IS_BROXTON(dev_priv)) 13646 - return false; 13647 - } 13648 - } 13623 + /* 13624 + * FIXME: ICL requires two hardware planes for scanning out NV12 13625 + * framebuffers. Do not advertize support until this is implemented. 
13626 + */ 13627 + if (INTEL_GEN(dev_priv) >= 11) 13628 + return false; 13629 + 13630 + if (IS_SKYLAKE(dev_priv) || IS_BROXTON(dev_priv)) 13631 + return false; 13632 + 13633 + if (INTEL_GEN(dev_priv) == 9 && !IS_GEMINILAKE(dev_priv) && pipe == PIPE_C) 13634 + return false; 13635 + 13636 + if (plane_id != PLANE_PRIMARY && plane_id != PLANE_SPRITE0) 13637 + return false; 13638 + 13649 13639 return true; 13650 13640 } 13651 13641 ··· 14137 14131 14138 14132 intel_pps_init(dev_priv); 14139 14133 14134 + if (INTEL_INFO(dev_priv)->num_pipes == 0) 14135 + return; 14136 + 14140 14137 /* 14141 14138 * intel_edp_init_connector() depends on this completing first, to 14142 14139 * prevent the registeration of both eDP and LVDS and the incorrect ··· 14556 14547 break; 14557 14548 case DRM_FORMAT_NV12: 14558 14549 if (INTEL_GEN(dev_priv) < 9 || IS_SKYLAKE(dev_priv) || 14559 - IS_BROXTON(dev_priv)) { 14550 + IS_BROXTON(dev_priv) || INTEL_GEN(dev_priv) >= 11) { 14560 14551 DRM_DEBUG_KMS("unsupported pixel format: %s\n", 14561 14552 drm_get_format_name(mode_cmd->pixel_format, 14562 14553 &format_name)); ··· 14603 14594 * potential runtime errors at plane configuration time. 14604 14595 */ 14605 14596 if (IS_GEN9(dev_priv) && i == 0 && fb->width > 3840 && 14606 - (fb->modifier == I915_FORMAT_MOD_Y_TILED_CCS || 14607 - fb->modifier == I915_FORMAT_MOD_Yf_TILED_CCS)) 14597 + is_ccs_modifier(fb->modifier)) 14608 14598 stride_alignment *= 4; 14609 14599 14610 14600 if (fb->pitches[i] & (stride_alignment - 1)) { ··· 15139 15131 DRM_DEBUG_DRIVER("FDI PLL freq=%d\n", dev_priv->fdi_pll_freq); 15140 15132 } 15141 15133 15134 + static int intel_initial_commit(struct drm_device *dev) 15135 + { 15136 + struct drm_atomic_state *state = NULL; 15137 + struct drm_modeset_acquire_ctx ctx; 15138 + struct drm_crtc *crtc; 15139 + struct drm_crtc_state *crtc_state; 15140 + int ret = 0; 15141 + 15142 + state = drm_atomic_state_alloc(dev); 15143 + if (!state) 15144 + return -ENOMEM; 15145 + 15146 + drm_modeset_acquire_init(&ctx, 0); 15147 + 15148 + retry: 15149 + state->acquire_ctx = &ctx; 15150 + 15151 + drm_for_each_crtc(crtc, dev) { 15152 + crtc_state = drm_atomic_get_crtc_state(state, crtc); 15153 + if (IS_ERR(crtc_state)) { 15154 + ret = PTR_ERR(crtc_state); 15155 + goto out; 15156 + } 15157 + 15158 + if (crtc_state->active) { 15159 + ret = drm_atomic_add_affected_planes(state, crtc); 15160 + if (ret) 15161 + goto out; 15162 + } 15163 + } 15164 + 15165 + ret = drm_atomic_commit(state); 15166 + 15167 + out: 15168 + if (ret == -EDEADLK) { 15169 + drm_atomic_state_clear(state); 15170 + drm_modeset_backoff(&ctx); 15171 + goto retry; 15172 + } 15173 + 15174 + drm_atomic_state_put(state); 15175 + 15176 + drm_modeset_drop_locks(&ctx); 15177 + drm_modeset_acquire_fini(&ctx); 15178 + 15179 + return ret; 15180 + } 15181 + 15142 15182 int intel_modeset_init(struct drm_device *dev) 15143 15183 { 15144 15184 struct drm_i915_private *dev_priv = to_i915(dev); 15145 15185 struct i915_ggtt *ggtt = &dev_priv->ggtt; 15146 15186 enum pipe pipe; 15147 15187 struct intel_crtc *crtc; 15188 + int ret; 15148 15189 15149 15190 dev_priv->modeset_wq = alloc_ordered_workqueue("i915_modeset", 0); 15150 15191 ··· 15216 15159 intel_init_quirks(dev); 15217 15160 15218 15161 intel_init_pm(dev_priv); 15219 - 15220 - if (INTEL_INFO(dev_priv)->num_pipes == 0) 15221 - return 0; 15222 15162 15223 15163 /* 15224 15164 * There may be no VBT; and if the BIOS enabled SSC we can ··· 15265 15211 INTEL_INFO(dev_priv)->num_pipes > 1 ? 
"s" : ""); 15266 15212 15267 15213 for_each_pipe(dev_priv, pipe) { 15268 - int ret; 15269 - 15270 15214 ret = intel_crtc_init(dev_priv, pipe); 15271 15215 if (ret) { 15272 15216 drm_mode_config_cleanup(dev); ··· 15319 15267 */ 15320 15268 if (!HAS_GMCH_DISPLAY(dev_priv)) 15321 15269 sanitize_watermarks(dev); 15270 + 15271 + /* 15272 + * Force all active planes to recompute their states. So that on 15273 + * mode_setcrtc after probe, all the intel_plane_state variables 15274 + * are already calculated and there is no assert_plane warnings 15275 + * during bootup. 15276 + */ 15277 + ret = intel_initial_commit(dev); 15278 + if (ret) 15279 + DRM_DEBUG_KMS("Initial commit in probe failed.\n"); 15322 15280 15323 15281 return 0; 15324 15282 } ··· 15854 15792 struct intel_encoder *encoder; 15855 15793 int i; 15856 15794 15795 + intel_display_power_get(dev_priv, POWER_DOMAIN_INIT); 15796 + 15857 15797 intel_early_display_was(dev_priv); 15858 15798 intel_modeset_readout_hw_state(dev); 15859 15799 ··· 15910 15846 if (WARN_ON(put_domains)) 15911 15847 modeset_put_power_domains(dev_priv, put_domains); 15912 15848 } 15913 - intel_display_set_init_power(dev_priv, false); 15914 15849 15915 - intel_power_domains_verify_state(dev_priv); 15850 + intel_display_power_put(dev_priv, POWER_DOMAIN_INIT); 15916 15851 15917 15852 intel_fbc_init_pipe_state(dev_priv); 15918 15853 } ··· 16000 15937 flush_work(&dev_priv->atomic_helper.free_work); 16001 15938 WARN_ON(!llist_empty(&dev_priv->atomic_helper.free_list)); 16002 15939 16003 - intel_disable_gt_powersave(dev_priv); 16004 - 16005 15940 /* 16006 15941 * Interrupts and polling as the first thing to avoid creating havoc. 16007 15942 * Too much stuff here (turning of connectors, ...) would ··· 16026 15965 drm_mode_config_cleanup(dev); 16027 15966 16028 15967 intel_cleanup_overlay(dev_priv); 16029 - 16030 - intel_cleanup_gt_powersave(dev_priv); 16031 15968 16032 15969 intel_teardown_gmbus(dev_priv); 16033 15970 ··· 16134 16075 return NULL; 16135 16076 16136 16077 if (IS_HASWELL(dev_priv) || IS_BROADWELL(dev_priv)) 16137 - error->power_well_driver = 16138 - I915_READ(HSW_PWR_WELL_CTL_DRIVER(HSW_DISP_PW_GLOBAL)); 16078 + error->power_well_driver = I915_READ(HSW_PWR_WELL_CTL2); 16139 16079 16140 16080 for_each_pipe(dev_priv, i) { 16141 16081 error->pipe[i].power_domain_on =
+26 -2
drivers/gpu/drm/i915/intel_display.h
··· 25 25 #ifndef _INTEL_DISPLAY_H_ 26 26 #define _INTEL_DISPLAY_H_ 27 27 28 + enum i915_gpio { 29 + GPIOA, 30 + GPIOB, 31 + GPIOC, 32 + GPIOD, 33 + GPIOE, 34 + GPIOF, 35 + GPIOG, 36 + GPIOH, 37 + __GPIOI_UNUSED, 38 + GPIOJ, 39 + GPIOK, 40 + GPIOL, 41 + GPIOM, 42 + }; 43 + 28 44 enum pipe { 29 45 INVALID_PIPE = -1, 30 46 ··· 175 159 PORT_TC4, 176 160 177 161 I915_MAX_TC_PORTS 162 + }; 163 + 164 + enum tc_port_type { 165 + TC_PORT_UNKNOWN = 0, 166 + TC_PORT_TYPEC, 167 + TC_PORT_TBT, 168 + TC_PORT_LEGACY, 178 169 }; 179 170 180 171 enum dpio_channel { ··· 369 346 370 347 #define for_each_power_domain_well(__dev_priv, __power_well, __domain_mask) \ 371 348 for_each_power_well(__dev_priv, __power_well) \ 372 - for_each_if((__power_well)->domains & (__domain_mask)) 349 + for_each_if((__power_well)->desc->domains & (__domain_mask)) 373 350 374 351 #define for_each_power_domain_well_rev(__dev_priv, __power_well, __domain_mask) \ 375 352 for_each_power_well_rev(__dev_priv, __power_well) \ 376 - for_each_if((__power_well)->domains & (__domain_mask)) 353 + for_each_if((__power_well)->desc->domains & (__domain_mask)) 377 354 378 355 #define for_each_new_intel_plane_in_state(__state, plane, new_plane_state, __i) \ 379 356 for ((__i) = 0; \ ··· 405 382 struct intel_link_m_n *m_n, 406 383 bool reduce_m_n); 407 384 385 + bool is_ccs_modifier(u64 modifier); 408 386 #endif
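The iterator macros above now reach domains through power_well->desc, reflecting a split of each power well into an immutable, shareable descriptor and a small mutable runtime part. A hypothetical sketch of that const-descriptor pattern; the struct layout is illustrative, not the driver's:

#include <stdio.h>
#include <stdbool.h>

/* Immutable, shared between instances, can live in rodata. */
struct power_well_desc {
	const char *name;
	unsigned long domains;
};

/* Per-device runtime state points at its descriptor. */
struct power_well {
	const struct power_well_desc *desc;
	int count; /* reference count, mutable */
};

static const struct power_well_desc pw1_desc = {
	.name = "PW_1",
	.domains = 0x3,
};

static bool well_serves_domain(const struct power_well *pw,
			       unsigned long domain_mask)
{
	return pw->desc->domains & domain_mask;
}

int main(void)
{
	struct power_well pw = { .desc = &pw1_desc };

	printf("%s serves: %d\n", pw.desc->name,
	       well_serves_domain(&pw, 0x1));
	return 0;
}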
+451 -83
drivers/gpu/drm/i915/intel_dp.c
··· 107 107 return intel_dig_port->base.type == INTEL_OUTPUT_EDP; 108 108 } 109 109 110 - static struct drm_device *intel_dp_to_dev(struct intel_dp *intel_dp) 111 - { 112 - struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp); 113 - 114 - return intel_dig_port->base.base.dev; 115 - } 116 - 117 110 static struct intel_dp *intel_attached_dp(struct drm_connector *connector) 118 111 { 119 112 return enc_to_intel_dp(&intel_attached_encoder(connector)->base); ··· 169 176 return intel_dp->common_rates[intel_dp->num_common_rates - 1]; 170 177 } 171 178 179 + static int intel_dp_get_fia_supported_lane_count(struct intel_dp *intel_dp) 180 + { 181 + struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); 182 + struct drm_i915_private *dev_priv = to_i915(dig_port->base.base.dev); 183 + enum tc_port tc_port = intel_port_to_tc(dev_priv, dig_port->base.port); 184 + u32 lane_info; 185 + 186 + if (tc_port == PORT_TC_NONE || dig_port->tc_type != TC_PORT_TYPEC) 187 + return 4; 188 + 189 + lane_info = (I915_READ(PORT_TX_DFLEXDPSP) & 190 + DP_LANE_ASSIGNMENT_MASK(tc_port)) >> 191 + DP_LANE_ASSIGNMENT_SHIFT(tc_port); 192 + 193 + switch (lane_info) { 194 + default: 195 + MISSING_CASE(lane_info); 196 + case 1: 197 + case 2: 198 + case 4: 199 + case 8: 200 + return 1; 201 + case 3: 202 + case 12: 203 + return 2; 204 + case 15: 205 + return 4; 206 + } 207 + } 208 + 172 209 /* Theoretical max between source and sink */ 173 210 static int intel_dp_max_common_lane_count(struct intel_dp *intel_dp) 174 211 { 175 212 struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp); 176 213 int source_max = intel_dig_port->max_lanes; 177 214 int sink_max = drm_dp_max_lane_count(intel_dp->dpcd); 215 + int fia_max = intel_dp_get_fia_supported_lane_count(intel_dp); 178 216 179 - return min(source_max, sink_max); 217 + return min3(source_max, sink_max, fia_max); 180 218 } 181 219 182 220 int intel_dp_max_lane_count(struct intel_dp *intel_dp) ··· 220 196 { 221 197 /* pixel_clock is in kHz, divide bpp by 8 for bit to Byte conversion */ 222 198 return DIV_ROUND_UP(pixel_clock * bpp, 8); 199 + } 200 + 201 + void icl_program_mg_dp_mode(struct intel_dp *intel_dp) 202 + { 203 + struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp); 204 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 205 + enum port port = intel_dig_port->base.port; 206 + enum tc_port tc_port = intel_port_to_tc(dev_priv, port); 207 + u32 ln0, ln1, lane_info; 208 + 209 + if (tc_port == PORT_TC_NONE || intel_dig_port->tc_type == TC_PORT_TBT) 210 + return; 211 + 212 + ln0 = I915_READ(MG_DP_MODE(port, 0)); 213 + ln1 = I915_READ(MG_DP_MODE(port, 1)); 214 + 215 + switch (intel_dig_port->tc_type) { 216 + case TC_PORT_TYPEC: 217 + ln0 &= ~(MG_DP_MODE_CFG_DP_X1_MODE | MG_DP_MODE_CFG_DP_X2_MODE); 218 + ln1 &= ~(MG_DP_MODE_CFG_DP_X1_MODE | MG_DP_MODE_CFG_DP_X2_MODE); 219 + 220 + lane_info = (I915_READ(PORT_TX_DFLEXDPSP) & 221 + DP_LANE_ASSIGNMENT_MASK(tc_port)) >> 222 + DP_LANE_ASSIGNMENT_SHIFT(tc_port); 223 + 224 + switch (lane_info) { 225 + case 0x1: 226 + case 0x4: 227 + break; 228 + case 0x2: 229 + ln0 |= MG_DP_MODE_CFG_DP_X1_MODE; 230 + break; 231 + case 0x3: 232 + ln0 |= MG_DP_MODE_CFG_DP_X1_MODE | 233 + MG_DP_MODE_CFG_DP_X2_MODE; 234 + break; 235 + case 0x8: 236 + ln1 |= MG_DP_MODE_CFG_DP_X1_MODE; 237 + break; 238 + case 0xC: 239 + ln1 |= MG_DP_MODE_CFG_DP_X1_MODE | 240 + MG_DP_MODE_CFG_DP_X2_MODE; 241 + break; 242 + case 0xF: 243 + ln0 |= MG_DP_MODE_CFG_DP_X1_MODE | 244 + MG_DP_MODE_CFG_DP_X2_MODE; 245 + ln1 |= 
MG_DP_MODE_CFG_DP_X1_MODE | 246 + MG_DP_MODE_CFG_DP_X2_MODE; 247 + break; 248 + default: 249 + MISSING_CASE(lane_info); 250 + } 251 + break; 252 + 253 + case TC_PORT_LEGACY: 254 + ln0 |= MG_DP_MODE_CFG_DP_X1_MODE | MG_DP_MODE_CFG_DP_X2_MODE; 255 + ln1 |= MG_DP_MODE_CFG_DP_X1_MODE | MG_DP_MODE_CFG_DP_X2_MODE; 256 + break; 257 + 258 + default: 259 + MISSING_CASE(intel_dig_port->tc_type); 260 + return; 261 + } 262 + 263 + I915_WRITE(MG_DP_MODE(port, 0), ln0); 264 + I915_WRITE(MG_DP_MODE(port, 1), ln1); 265 + } 266 + 267 + void icl_enable_phy_clock_gating(struct intel_digital_port *dig_port) 268 + { 269 + struct drm_i915_private *dev_priv = to_i915(dig_port->base.base.dev); 270 + enum port port = dig_port->base.port; 271 + enum tc_port tc_port = intel_port_to_tc(dev_priv, port); 272 + i915_reg_t mg_regs[2] = { MG_DP_MODE(port, 0), MG_DP_MODE(port, 1) }; 273 + u32 val; 274 + int i; 275 + 276 + if (tc_port == PORT_TC_NONE) 277 + return; 278 + 279 + for (i = 0; i < ARRAY_SIZE(mg_regs); i++) { 280 + val = I915_READ(mg_regs[i]); 281 + val |= MG_DP_MODE_CFG_TR2PWR_GATING | 282 + MG_DP_MODE_CFG_TRPWR_GATING | 283 + MG_DP_MODE_CFG_CLNPWR_GATING | 284 + MG_DP_MODE_CFG_DIGPWR_GATING | 285 + MG_DP_MODE_CFG_GAONPWR_GATING; 286 + I915_WRITE(mg_regs[i], val); 287 + } 288 + 289 + val = I915_READ(MG_MISC_SUS0(tc_port)); 290 + val |= MG_MISC_SUS0_SUSCLK_DYNCLKGATE_MODE(3) | 291 + MG_MISC_SUS0_CFG_TR2PWR_GATING | 292 + MG_MISC_SUS0_CFG_CL2PWR_GATING | 293 + MG_MISC_SUS0_CFG_GAONPWR_GATING | 294 + MG_MISC_SUS0_CFG_TRPWR_GATING | 295 + MG_MISC_SUS0_CFG_CL1PWR_GATING | 296 + MG_MISC_SUS0_CFG_DGPWR_GATING; 297 + I915_WRITE(MG_MISC_SUS0(tc_port), val); 298 + } 299 + 300 + void icl_disable_phy_clock_gating(struct intel_digital_port *dig_port) 301 + { 302 + struct drm_i915_private *dev_priv = to_i915(dig_port->base.base.dev); 303 + enum port port = dig_port->base.port; 304 + enum tc_port tc_port = intel_port_to_tc(dev_priv, port); 305 + i915_reg_t mg_regs[2] = { MG_DP_MODE(port, 0), MG_DP_MODE(port, 1) }; 306 + u32 val; 307 + int i; 308 + 309 + if (tc_port == PORT_TC_NONE) 310 + return; 311 + 312 + for (i = 0; i < ARRAY_SIZE(mg_regs); i++) { 313 + val = I915_READ(mg_regs[i]); 314 + val &= ~(MG_DP_MODE_CFG_TR2PWR_GATING | 315 + MG_DP_MODE_CFG_TRPWR_GATING | 316 + MG_DP_MODE_CFG_CLNPWR_GATING | 317 + MG_DP_MODE_CFG_DIGPWR_GATING | 318 + MG_DP_MODE_CFG_GAONPWR_GATING); 319 + I915_WRITE(mg_regs[i], val); 320 + } 321 + 322 + val = I915_READ(MG_MISC_SUS0(tc_port)); 323 + val &= ~(MG_MISC_SUS0_SUSCLK_DYNCLKGATE_MODE_MASK | 324 + MG_MISC_SUS0_CFG_TR2PWR_GATING | 325 + MG_MISC_SUS0_CFG_CL2PWR_GATING | 326 + MG_MISC_SUS0_CFG_GAONPWR_GATING | 327 + MG_MISC_SUS0_CFG_TRPWR_GATING | 328 + MG_MISC_SUS0_CFG_CL1PWR_GATING | 329 + MG_MISC_SUS0_CFG_DGPWR_GATING); 330 + I915_WRITE(MG_MISC_SUS0(tc_port), val); 223 331 } 224 332 225 333 int ··· 654 498 655 499 static void pps_lock(struct intel_dp *intel_dp) 656 500 { 657 - struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp)); 501 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 658 502 659 503 /* 660 504 * See intel_power_sequencer_reset() why we need ··· 667 511 668 512 static void pps_unlock(struct intel_dp *intel_dp) 669 513 { 670 - struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp)); 514 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 671 515 672 516 mutex_unlock(&dev_priv->pps_mutex); 673 517 ··· 677 521 static void 678 522 vlv_power_sequencer_kick(struct intel_dp *intel_dp) 679 523 { 680 - struct drm_i915_private *dev_priv = 
to_i915(intel_dp_to_dev(intel_dp)); 524 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 681 525 struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp); 682 526 enum pipe pipe = intel_dp->pps_pipe; 683 527 bool pll_enabled, release_cl_override = false; ··· 782 626 static enum pipe 783 627 vlv_power_sequencer_pipe(struct intel_dp *intel_dp) 784 628 { 785 - struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp)); 629 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 786 630 struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp); 787 631 enum pipe pipe; 788 632 ··· 829 673 static int 830 674 bxt_power_sequencer_idx(struct intel_dp *intel_dp) 831 675 { 832 - struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp)); 676 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 833 677 int backlight_controller = dev_priv->vbt.backlight.controller; 834 678 835 679 lockdep_assert_held(&dev_priv->pps_mutex); ··· 898 742 static void 899 743 vlv_initial_power_sequencer_setup(struct intel_dp *intel_dp) 900 744 { 901 - struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp)); 745 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 902 746 struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp); 903 747 enum port port = intel_dig_port->base.port; 904 748 ··· 975 819 static void intel_pps_get_registers(struct intel_dp *intel_dp, 976 820 struct pps_registers *regs) 977 821 { 978 - struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp)); 822 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 979 823 int pps_idx = 0; 980 824 981 825 memset(regs, 0, sizeof(*regs)); ··· 1021 865 { 1022 866 struct intel_dp *intel_dp = container_of(this, typeof(* intel_dp), 1023 867 edp_notifier); 1024 - struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp)); 868 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 1025 869 1026 870 if (!intel_dp_is_edp(intel_dp) || code != SYS_RESTART) 1027 871 return 0; ··· 1051 895 1052 896 static bool edp_have_panel_power(struct intel_dp *intel_dp) 1053 897 { 1054 - struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp)); 898 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 1055 899 1056 900 lockdep_assert_held(&dev_priv->pps_mutex); 1057 901 ··· 1064 908 1065 909 static bool edp_have_panel_vdd(struct intel_dp *intel_dp) 1066 910 { 1067 - struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp)); 911 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 1068 912 1069 913 lockdep_assert_held(&dev_priv->pps_mutex); 1070 914 ··· 1078 922 static void 1079 923 intel_dp_check_edp(struct intel_dp *intel_dp) 1080 924 { 1081 - struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp)); 925 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 1082 926 1083 927 if (!intel_dp_is_edp(intel_dp)) 1084 928 return; ··· 1094 938 static uint32_t 1095 939 intel_dp_aux_wait_done(struct intel_dp *intel_dp) 1096 940 { 1097 - struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp)); 941 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 1098 942 i915_reg_t ch_ctl = intel_dp->aux_ch_ctl_reg(intel_dp); 1099 943 uint32_t status; 1100 944 bool done; ··· 1111 955 1112 956 static uint32_t g4x_get_aux_clock_divider(struct intel_dp *intel_dp, int index) 1113 957 { 1114 - struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp)); 958 + struct drm_i915_private 
*dev_priv = dp_to_i915(intel_dp); 1115 959 1116 960 if (index) 1117 961 return 0; ··· 1125 969 1126 970 static uint32_t ilk_get_aux_clock_divider(struct intel_dp *intel_dp, int index) 1127 971 { 1128 - struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp)); 972 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 1129 973 1130 974 if (index) 1131 975 return 0; ··· 1143 987 1144 988 static uint32_t hsw_get_aux_clock_divider(struct intel_dp *intel_dp, int index) 1145 989 { 1146 - struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp)); 990 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 1147 991 1148 992 if (intel_dp->aux_ch != AUX_CH_A && HAS_PCH_LPT_H(dev_priv)) { 1149 993 /* Workaround for non-ULT HSW */ ··· 1201 1045 int send_bytes, 1202 1046 uint32_t unused) 1203 1047 { 1204 - return DP_AUX_CH_CTL_SEND_BUSY | 1205 - DP_AUX_CH_CTL_DONE | 1206 - DP_AUX_CH_CTL_INTERRUPT | 1207 - DP_AUX_CH_CTL_TIME_OUT_ERROR | 1208 - DP_AUX_CH_CTL_TIME_OUT_MAX | 1209 - DP_AUX_CH_CTL_RECEIVE_ERROR | 1210 - (send_bytes << DP_AUX_CH_CTL_MESSAGE_SIZE_SHIFT) | 1211 - DP_AUX_CH_CTL_FW_SYNC_PULSE_SKL(32) | 1212 - DP_AUX_CH_CTL_SYNC_PULSE_SKL(32); 1048 + struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp); 1049 + uint32_t ret; 1050 + 1051 + ret = DP_AUX_CH_CTL_SEND_BUSY | 1052 + DP_AUX_CH_CTL_DONE | 1053 + DP_AUX_CH_CTL_INTERRUPT | 1054 + DP_AUX_CH_CTL_TIME_OUT_ERROR | 1055 + DP_AUX_CH_CTL_TIME_OUT_MAX | 1056 + DP_AUX_CH_CTL_RECEIVE_ERROR | 1057 + (send_bytes << DP_AUX_CH_CTL_MESSAGE_SIZE_SHIFT) | 1058 + DP_AUX_CH_CTL_FW_SYNC_PULSE_SKL(32) | 1059 + DP_AUX_CH_CTL_SYNC_PULSE_SKL(32); 1060 + 1061 + if (intel_dig_port->tc_type == TC_PORT_TBT) 1062 + ret |= DP_AUX_CH_CTL_TBT_IO; 1063 + 1064 + return ret; 1213 1065 } 1214 1066 1215 1067 static int ··· 1545 1381 1546 1382 static i915_reg_t g4x_aux_ctl_reg(struct intel_dp *intel_dp) 1547 1383 { 1548 - struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp)); 1384 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 1549 1385 enum aux_ch aux_ch = intel_dp->aux_ch; 1550 1386 1551 1387 switch (aux_ch) { ··· 1561 1397 1562 1398 static i915_reg_t g4x_aux_data_reg(struct intel_dp *intel_dp, int index) 1563 1399 { 1564 - struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp)); 1400 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 1565 1401 enum aux_ch aux_ch = intel_dp->aux_ch; 1566 1402 1567 1403 switch (aux_ch) { ··· 1577 1413 1578 1414 static i915_reg_t ilk_aux_ctl_reg(struct intel_dp *intel_dp) 1579 1415 { 1580 - struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp)); 1416 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 1581 1417 enum aux_ch aux_ch = intel_dp->aux_ch; 1582 1418 1583 1419 switch (aux_ch) { ··· 1595 1431 1596 1432 static i915_reg_t ilk_aux_data_reg(struct intel_dp *intel_dp, int index) 1597 1433 { 1598 - struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp)); 1434 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 1599 1435 enum aux_ch aux_ch = intel_dp->aux_ch; 1600 1436 1601 1437 switch (aux_ch) { ··· 1613 1449 1614 1450 static i915_reg_t skl_aux_ctl_reg(struct intel_dp *intel_dp) 1615 1451 { 1616 - struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp)); 1452 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 1617 1453 enum aux_ch aux_ch = intel_dp->aux_ch; 1618 1454 1619 1455 switch (aux_ch) { ··· 1632 1468 1633 1469 static i915_reg_t skl_aux_data_reg(struct intel_dp 
*intel_dp, int index) 1634 1470 { 1635 - struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp)); 1471 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 1636 1472 enum aux_ch aux_ch = intel_dp->aux_ch; 1637 1473 1638 1474 switch (aux_ch) { ··· 1658 1494 static void 1659 1495 intel_dp_aux_init(struct intel_dp *intel_dp) 1660 1496 { 1661 - struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp)); 1497 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 1662 1498 struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base; 1663 1499 1664 1500 intel_dp->aux_ch = intel_aux_ch(intel_dp); ··· 1826 1662 static int intel_dp_compute_bpp(struct intel_dp *intel_dp, 1827 1663 struct intel_crtc_state *pipe_config) 1828 1664 { 1829 - struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp)); 1665 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 1830 1666 struct intel_connector *intel_connector = intel_dp->attached_connector; 1831 1667 int bpp, bpc; 1832 1668 ··· 2194 2030 u32 mask, 2195 2031 u32 value) 2196 2032 { 2197 - struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp)); 2033 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 2198 2034 i915_reg_t pp_stat_reg, pp_ctrl_reg; 2199 2035 2200 2036 lockdep_assert_held(&dev_priv->pps_mutex); ··· 2270 2106 2271 2107 static u32 ironlake_get_pp_control(struct intel_dp *intel_dp) 2272 2108 { 2273 - struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp)); 2109 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 2274 2110 u32 control; 2275 2111 2276 2112 lockdep_assert_held(&dev_priv->pps_mutex); ··· 2291 2127 */ 2292 2128 static bool edp_panel_vdd_on(struct intel_dp *intel_dp) 2293 2129 { 2294 - struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp)); 2130 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 2295 2131 struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp); 2296 2132 u32 pp; 2297 2133 i915_reg_t pp_stat_reg, pp_ctrl_reg; ··· 2362 2198 2363 2199 static void edp_panel_vdd_off_sync(struct intel_dp *intel_dp) 2364 2200 { 2365 - struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp)); 2201 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 2366 2202 struct intel_digital_port *intel_dig_port = 2367 2203 dp_to_dig_port(intel_dp); 2368 2204 u32 pp; ··· 2428 2264 */ 2429 2265 static void edp_panel_vdd_off(struct intel_dp *intel_dp, bool sync) 2430 2266 { 2431 - struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp)); 2267 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 2432 2268 2433 2269 lockdep_assert_held(&dev_priv->pps_mutex); 2434 2270 ··· 2448 2284 2449 2285 static void edp_panel_on(struct intel_dp *intel_dp) 2450 2286 { 2451 - struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp)); 2287 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 2452 2288 u32 pp; 2453 2289 i915_reg_t pp_ctrl_reg; 2454 2290 ··· 2506 2342 2507 2343 static void edp_panel_off(struct intel_dp *intel_dp) 2508 2344 { 2509 - struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp)); 2345 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 2510 2346 u32 pp; 2511 2347 i915_reg_t pp_ctrl_reg; 2512 2348 ··· 2554 2390 /* Enable backlight in the panel power control. 
*/ 2555 2391 static void _intel_edp_backlight_on(struct intel_dp *intel_dp) 2556 2392 { 2557 - struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp)); 2393 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 2558 2394 u32 pp; 2559 2395 i915_reg_t pp_ctrl_reg; 2560 2396 ··· 2597 2433 /* Disable backlight in the panel power control. */ 2598 2434 static void _intel_edp_backlight_off(struct intel_dp *intel_dp) 2599 2435 { 2600 - struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp)); 2436 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 2601 2437 u32 pp; 2602 2438 i915_reg_t pp_ctrl_reg; 2603 2439 ··· 3028 2864 uint32_t *DP, 3029 2865 uint8_t dp_train_pat) 3030 2866 { 3031 - struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp)); 2867 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 3032 2868 struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp); 3033 2869 enum port port = intel_dig_port->base.port; 3034 2870 uint8_t train_pat_mask = drm_dp_training_pattern_mask(intel_dp->dpcd); ··· 3110 2946 static void intel_dp_enable_port(struct intel_dp *intel_dp, 3111 2947 const struct intel_crtc_state *old_crtc_state) 3112 2948 { 3113 - struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp)); 2949 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 3114 2950 3115 2951 /* enable with pattern 1 (as per spec) */ 3116 2952 ··· 3367 3203 uint8_t 3368 3204 intel_dp_voltage_max(struct intel_dp *intel_dp) 3369 3205 { 3370 - struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp)); 3206 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 3371 3207 struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base; 3372 3208 enum port port = encoder->port; 3373 3209 ··· 3386 3222 uint8_t 3387 3223 intel_dp_pre_emphasis_max(struct intel_dp *intel_dp, uint8_t voltage_swing) 3388 3224 { 3389 - struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp)); 3225 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 3390 3226 struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base; 3391 3227 enum port port = encoder->port; 3392 3228 ··· 3698 3534 void 3699 3535 intel_dp_set_signal_levels(struct intel_dp *intel_dp) 3700 3536 { 3701 - struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp)); 3537 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 3702 3538 struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp); 3703 3539 enum port port = intel_dig_port->base.port; 3704 3540 uint32_t signal_levels, mask = 0; ··· 3755 3591 3756 3592 void intel_dp_set_idle_link_train(struct intel_dp *intel_dp) 3757 3593 { 3758 - struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp)); 3594 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 3759 3595 struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp); 3760 3596 enum port port = intel_dig_port->base.port; 3761 3597 uint32_t val; ··· 4254 4090 int ret = 0; 4255 4091 int retry; 4256 4092 bool handled; 4093 + 4094 + WARN_ON_ONCE(intel_dp->active_mst_links < 0); 4257 4095 bret = intel_dp_get_sink_irq_esi(intel_dp, esi); 4258 4096 go_again: 4259 4097 if (bret == true) { 4260 4098 4261 4099 /* check link status - esi[10] = 0x200c */ 4262 - if (intel_dp->active_mst_links && 4100 + if (intel_dp->active_mst_links > 0 && 4263 4101 !drm_dp_channel_eq_ok(&esi[10], intel_dp->lane_count)) { 4264 4102 DRM_DEBUG_KMS("channel EQ not ok, retraining\n"); 4265 4103 
intel_dp_start_link_train(intel_dp); ··· 4326 4160 return !drm_dp_channel_eq_ok(link_status, intel_dp->lane_count); 4327 4161 } 4328 4162 4329 - /* 4330 - * If display is now connected check links status, 4331 - * there has been known issues of link loss triggering 4332 - * long pulse. 4333 - * 4334 - * Some sinks (eg. ASUS PB287Q) seem to perform some 4335 - * weird HPD ping pong during modesets. So we can apparently 4336 - * end up with HPD going low during a modeset, and then 4337 - * going back up soon after. And once that happens we must 4338 - * retrain the link to get a picture. That's in case no 4339 - * userspace component reacted to intermittent HPD dip. 4340 - */ 4341 4163 int intel_dp_retrain_link(struct intel_encoder *encoder, 4342 4164 struct drm_modeset_acquire_ctx *ctx) 4343 4165 { ··· 4448 4294 static bool 4449 4295 intel_dp_short_pulse(struct intel_dp *intel_dp) 4450 4296 { 4451 - struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp)); 4297 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 4452 4298 u8 sink_irq_vector = 0; 4453 4299 u8 old_sink_count = intel_dp->sink_count; 4454 4300 bool ret; ··· 4740 4586 return I915_READ(GEN8_DE_PORT_ISR) & bit; 4741 4587 } 4742 4588 4589 + static bool icl_combo_port_connected(struct drm_i915_private *dev_priv, 4590 + struct intel_digital_port *intel_dig_port) 4591 + { 4592 + enum port port = intel_dig_port->base.port; 4593 + 4594 + return I915_READ(SDEISR) & SDE_DDI_HOTPLUG_ICP(port); 4595 + } 4596 + 4597 + static void icl_update_tc_port_type(struct drm_i915_private *dev_priv, 4598 + struct intel_digital_port *intel_dig_port, 4599 + bool is_legacy, bool is_typec, bool is_tbt) 4600 + { 4601 + enum port port = intel_dig_port->base.port; 4602 + enum tc_port_type old_type = intel_dig_port->tc_type; 4603 + const char *type_str; 4604 + 4605 + WARN_ON(is_legacy + is_typec + is_tbt != 1); 4606 + 4607 + if (is_legacy) { 4608 + intel_dig_port->tc_type = TC_PORT_LEGACY; 4609 + type_str = "legacy"; 4610 + } else if (is_typec) { 4611 + intel_dig_port->tc_type = TC_PORT_TYPEC; 4612 + type_str = "typec"; 4613 + } else if (is_tbt) { 4614 + intel_dig_port->tc_type = TC_PORT_TBT; 4615 + type_str = "tbt"; 4616 + } else { 4617 + return; 4618 + } 4619 + 4620 + /* Types are not supposed to be changed at runtime. */ 4621 + WARN_ON(old_type != TC_PORT_UNKNOWN && 4622 + old_type != intel_dig_port->tc_type); 4623 + 4624 + if (old_type != intel_dig_port->tc_type) 4625 + DRM_DEBUG_KMS("Port %c has TC type %s\n", port_name(port), 4626 + type_str); 4627 + } 4628 + 4629 + /* 4630 + * This function implements the first part of the Connect Flow described by our 4631 + * specification, Gen11 TypeC Programming chapter. The rest of the flow (reading 4632 + * lanes, EDID, etc) is done as needed in the typical places. 4633 + * 4634 + * Unlike the other ports, type-C ports are not available to use as soon as we 4635 + * get a hotplug. The type-C PHYs can be shared between multiple controllers: 4636 + * display, USB, etc. As a result, handshaking through FIA is required around 4637 + * connect and disconnect to cleanly transfer ownership with the controller and 4638 + * set the type-C power state. 4639 + * 4640 + * We could opt to only do the connect flow when we actually try to use the AUX 4641 + * channels or do a modeset, then immediately run the disconnect flow after 4642 + * usage, but there are some implications on this for a dynamic environment: 4643 + * things may go away or change behind our backs. 
So for now our driver is 4644 + * always trying to acquire ownership of the controller as soon as it gets an 4645 + * interrupt (or polls state and sees a port is connected) and only gives it 4646 + * back when it sees a disconnect. Implementation of a more fine-grained model 4647 + * will require a lot of coordination with user space and thorough testing for 4648 + * the extra possible cases. 4649 + */ 4650 + static bool icl_tc_phy_connect(struct drm_i915_private *dev_priv, 4651 + struct intel_digital_port *dig_port) 4652 + { 4653 + enum tc_port tc_port = intel_port_to_tc(dev_priv, dig_port->base.port); 4654 + u32 val; 4655 + 4656 + if (dig_port->tc_type != TC_PORT_LEGACY && 4657 + dig_port->tc_type != TC_PORT_TYPEC) 4658 + return true; 4659 + 4660 + val = I915_READ(PORT_TX_DFLEXDPPMS); 4661 + if (!(val & DP_PHY_MODE_STATUS_COMPLETED(tc_port))) { 4662 + DRM_DEBUG_KMS("DP PHY for TC port %d not ready\n", tc_port); 4663 + return false; 4664 + } 4665 + 4666 + /* 4667 + * This function may be called many times in a row without an HPD event 4668 + * in between, so try to avoid the write when we can. 4669 + */ 4670 + val = I915_READ(PORT_TX_DFLEXDPCSSS); 4671 + if (!(val & DP_PHY_MODE_STATUS_NOT_SAFE(tc_port))) { 4672 + val |= DP_PHY_MODE_STATUS_NOT_SAFE(tc_port); 4673 + I915_WRITE(PORT_TX_DFLEXDPCSSS, val); 4674 + } 4675 + 4676 + /* 4677 + * Now we have to re-check the live state, in case the port recently 4678 + * became disconnected. Not necessary for legacy mode. 4679 + */ 4680 + if (dig_port->tc_type == TC_PORT_TYPEC && 4681 + !(I915_READ(PORT_TX_DFLEXDPSP) & TC_LIVE_STATE_TC(tc_port))) { 4682 + DRM_DEBUG_KMS("TC PHY %d sudden disconnect.\n", tc_port); 4683 + val = I915_READ(PORT_TX_DFLEXDPCSSS); 4684 + val &= ~DP_PHY_MODE_STATUS_NOT_SAFE(tc_port); 4685 + I915_WRITE(PORT_TX_DFLEXDPCSSS, val); 4686 + return false; 4687 + } 4688 + 4689 + return true; 4690 + } 4691 + 4692 + /* 4693 + * See the comment at the connect function. This implements the Disconnect 4694 + * Flow. 4695 + */ 4696 + static void icl_tc_phy_disconnect(struct drm_i915_private *dev_priv, 4697 + struct intel_digital_port *dig_port) 4698 + { 4699 + enum tc_port tc_port = intel_port_to_tc(dev_priv, dig_port->base.port); 4700 + u32 val; 4701 + 4702 + if (dig_port->tc_type != TC_PORT_LEGACY && 4703 + dig_port->tc_type != TC_PORT_TYPEC) 4704 + return; 4705 + 4706 + /* 4707 + * This function may be called many times in a row without an HPD event 4708 + * in between, so try to avoid the write when we can. 4709 + */ 4710 + val = I915_READ(PORT_TX_DFLEXDPCSSS); 4711 + if (val & DP_PHY_MODE_STATUS_NOT_SAFE(tc_port)) { 4712 + val &= ~DP_PHY_MODE_STATUS_NOT_SAFE(tc_port); 4713 + I915_WRITE(PORT_TX_DFLEXDPCSSS, val); 4714 + } 4715 + } 4716 + 4717 + /* 4718 + * The type-C ports are different because even when they are connected, they may 4719 + * not be available/usable by the graphics driver: see the comment on 4720 + * icl_tc_phy_connect(). So in our driver instead of adding the additional 4721 + * concept of "usable" and make everything check for "connected and usable" we 4722 + * define a port as "connected" when it is not only connected, but also when it 4723 + * is usable by the rest of the driver. That maintains the old assumption that 4724 + * connected ports are usable, and avoids exposing to the users objects they 4725 + * can't really use. 
4726 + */ 4727 + static bool icl_tc_port_connected(struct drm_i915_private *dev_priv, 4728 + struct intel_digital_port *intel_dig_port) 4729 + { 4730 + enum port port = intel_dig_port->base.port; 4731 + enum tc_port tc_port = intel_port_to_tc(dev_priv, port); 4732 + bool is_legacy, is_typec, is_tbt; 4733 + u32 dpsp; 4734 + 4735 + is_legacy = I915_READ(SDEISR) & SDE_TC_HOTPLUG_ICP(tc_port); 4736 + 4737 + /* 4738 + * The spec says we shouldn't be using the ISR bits for detecting 4739 + * between TC and TBT. We should use DFLEXDPSP. 4740 + */ 4741 + dpsp = I915_READ(PORT_TX_DFLEXDPSP); 4742 + is_typec = dpsp & TC_LIVE_STATE_TC(tc_port); 4743 + is_tbt = dpsp & TC_LIVE_STATE_TBT(tc_port); 4744 + 4745 + if (!is_legacy && !is_typec && !is_tbt) { 4746 + icl_tc_phy_disconnect(dev_priv, intel_dig_port); 4747 + return false; 4748 + } 4749 + 4750 + icl_update_tc_port_type(dev_priv, intel_dig_port, is_legacy, is_typec, 4751 + is_tbt); 4752 + 4753 + if (!icl_tc_phy_connect(dev_priv, intel_dig_port)) 4754 + return false; 4755 + 4756 + return true; 4757 + } 4758 + 4759 + static bool icl_digital_port_connected(struct intel_encoder *encoder) 4760 + { 4761 + struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); 4762 + struct intel_digital_port *dig_port = enc_to_dig_port(&encoder->base); 4763 + 4764 + switch (encoder->hpd_pin) { 4765 + case HPD_PORT_A: 4766 + case HPD_PORT_B: 4767 + return icl_combo_port_connected(dev_priv, dig_port); 4768 + case HPD_PORT_C: 4769 + case HPD_PORT_D: 4770 + case HPD_PORT_E: 4771 + case HPD_PORT_F: 4772 + return icl_tc_port_connected(dev_priv, dig_port); 4773 + default: 4774 + MISSING_CASE(encoder->hpd_pin); 4775 + return false; 4776 + } 4777 + } 4778 + 4743 4779 /* 4744 4780 * intel_digital_port_connected - is the specified port connected? 4745 4781 * @encoder: intel_encoder 4782 + * 4783 + * In cases where there's a connector physically connected but it can't be used 4784 + * by our hardware we also return false, since the rest of the driver should 4785 + * pretty much treat the port as disconnected. This is relevant for type-C 4786 + * (starting on ICL) where there's ownership involved. 4746 4787 * 4747 4788 * Return %true if port is connected, %false otherwise. 4748 4789 */ ··· 4962 4613 return bdw_digital_port_connected(encoder); 4963 4614 else if (IS_GEN9_LP(dev_priv)) 4964 4615 return bxt_digital_port_connected(encoder); 4965 - else 4616 + else if (IS_GEN9_BC(dev_priv) || IS_GEN10(dev_priv)) 4966 4617 return spt_digital_port_connected(encoder); 4618 + else 4619 + return icl_digital_port_connected(encoder); 4967 4620 } 4968 4621 4969 4622 static struct edid * ··· 5012 4661 } 5013 4662 5014 4663 static int 5015 - intel_dp_long_pulse(struct intel_connector *connector) 4664 + intel_dp_long_pulse(struct intel_connector *connector, 4665 + struct drm_modeset_acquire_ctx *ctx) 5016 4666 { 5017 4667 struct drm_i915_private *dev_priv = to_i915(connector->base.dev); 5018 4668 struct intel_dp *intel_dp = intel_attached_dp(&connector->base); ··· 5072 4720 */ 5073 4721 status = connector_status_disconnected; 5074 4722 goto out; 4723 + } else { 4724 + /* 4725 + * If display is now connected check links status, 4726 + * there has been known issues of link loss triggering 4727 + * long pulse. 4728 + * 4729 + * Some sinks (eg. ASUS PB287Q) seem to perform some 4730 + * weird HPD ping pong during modesets. So we can apparently 4731 + * end up with HPD going low during a modeset, and then 4732 + * going back up soon after. 
And once that happens we must 4733 + * retrain the link to get a picture. That's in case no 4734 + * userspace component reacted to intermittent HPD dip. 4735 + */ 4736 + struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base; 4737 + 4738 + intel_dp_retrain_link(encoder, ctx); 5075 4739 } 5076 4740 5077 4741 /* ··· 5149 4781 return ret; 5150 4782 } 5151 4783 5152 - status = intel_dp_long_pulse(intel_dp->attached_connector); 4784 + status = intel_dp_long_pulse(intel_dp->attached_connector, ctx); 5153 4785 } 5154 4786 5155 4787 intel_dp->detect_done = false; ··· 5540 5172 5541 5173 static void intel_edp_panel_vdd_sanitize(struct intel_dp *intel_dp) 5542 5174 { 5543 - struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp)); 5175 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 5544 5176 5545 5177 lockdep_assert_held(&dev_priv->pps_mutex); 5546 5178 ··· 5561 5193 5562 5194 static enum pipe vlv_active_pipe(struct intel_dp *intel_dp) 5563 5195 { 5564 - struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp)); 5196 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 5565 5197 struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base; 5566 5198 enum pipe pipe; 5567 5199 ··· 5628 5260 intel_dp_hpd_pulse(struct intel_digital_port *intel_dig_port, bool long_hpd) 5629 5261 { 5630 5262 struct intel_dp *intel_dp = &intel_dig_port->dp; 5631 - struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp)); 5263 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 5632 5264 enum irqreturn ret = IRQ_NONE; 5633 5265 5634 5266 if (long_hpd && intel_dig_port->base.type == INTEL_OUTPUT_EDP) { ··· 5744 5376 static void 5745 5377 intel_pps_readout_hw_state(struct intel_dp *intel_dp, struct edp_power_seq *seq) 5746 5378 { 5747 - struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp)); 5379 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 5748 5380 u32 pp_on, pp_off, pp_div = 0, pp_ctl = 0; 5749 5381 struct pps_registers regs; 5750 5382 ··· 5812 5444 static void 5813 5445 intel_dp_init_panel_power_sequencer(struct intel_dp *intel_dp) 5814 5446 { 5815 - struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp)); 5447 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 5816 5448 struct edp_power_seq cur, vbt, spec, 5817 5449 *final = &intel_dp->pps_delays; 5818 5450 ··· 5905 5537 intel_dp_init_panel_power_sequencer_registers(struct intel_dp *intel_dp, 5906 5538 bool force_disable_vdd) 5907 5539 { 5908 - struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp)); 5540 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 5909 5541 u32 pp_on, pp_off, pp_div, port_sel = 0; 5910 5542 int div = dev_priv->rawclk_freq / 1000; 5911 5543 struct pps_registers regs; ··· 6001 5633 6002 5634 static void intel_dp_pps_init(struct intel_dp *intel_dp) 6003 5635 { 6004 - struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp)); 5636 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 6005 5637 6006 5638 if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) { 6007 5639 vlv_initial_power_sequencer_setup(intel_dp); ··· 6118 5750 void intel_edp_drrs_enable(struct intel_dp *intel_dp, 6119 5751 const struct intel_crtc_state *crtc_state) 6120 5752 { 6121 - struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp)); 5753 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 6122 5754 6123 5755 if (!crtc_state->has_drrs) { 6124 5756 DRM_DEBUG_KMS("Panel doesn't 
support DRRS\n"); ··· 6153 5785 void intel_edp_drrs_disable(struct intel_dp *intel_dp, 6154 5786 const struct intel_crtc_state *old_crtc_state) 6155 5787 { 6156 - struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp)); 5788 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 6157 5789 6158 5790 if (!old_crtc_state->has_drrs) 6159 5791 return; ··· 6385 6017 static bool intel_edp_init_connector(struct intel_dp *intel_dp, 6386 6018 struct intel_connector *intel_connector) 6387 6019 { 6388 - struct drm_device *dev = intel_dp_to_dev(intel_dp); 6389 - struct drm_i915_private *dev_priv = to_i915(dev); 6020 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 6021 + struct drm_device *dev = &dev_priv->drm; 6390 6022 struct drm_connector *connector = &intel_connector->base; 6391 6023 struct drm_display_mode *fixed_mode = NULL; 6392 6024 struct drm_display_mode *downclock_mode = NULL;
+7 -7
drivers/gpu/drm/i915/intel_dp_mst.c
··· 166 166 struct intel_connector *connector = 167 167 to_intel_connector(old_conn_state->connector); 168 168 169 + intel_ddi_disable_pipe_clock(old_crtc_state); 170 + 169 171 /* this can fail */ 170 172 drm_dp_check_act_status(&intel_dp->mst_mgr); 171 173 /* and this can also fail */ ··· 243 241 connector->port, 244 242 pipe_config->pbn, 245 243 pipe_config->dp_m_n.tu); 246 - if (ret == false) { 244 + if (!ret) 247 245 DRM_ERROR("failed to allocate vcpi\n"); 248 - return; 249 - } 250 - 251 246 252 247 intel_dp->active_mst_links++; 253 248 temp = I915_READ(DP_TP_STATUS(port)); 254 249 I915_WRITE(DP_TP_STATUS(port), temp); 255 250 256 251 ret = drm_dp_update_payload_part1(&intel_dp->mst_mgr); 252 + 253 + intel_ddi_enable_pipe_clock(pipe_config); 257 254 } 258 255 259 256 static void intel_mst_enable_dp(struct intel_encoder *encoder, ··· 264 263 struct intel_dp *intel_dp = &intel_dig_port->dp; 265 264 struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); 266 265 enum port port = intel_dig_port->base.port; 267 - int ret; 268 266 269 267 DRM_DEBUG_KMS("active links %d\n", intel_dp->active_mst_links); 270 268 ··· 274 274 1)) 275 275 DRM_ERROR("Timed out waiting for ACT sent\n"); 276 276 277 - ret = drm_dp_check_act_status(&intel_dp->mst_mgr); 277 + drm_dp_check_act_status(&intel_dp->mst_mgr); 278 278 279 - ret = drm_dp_update_payload_part2(&intel_dp->mst_mgr); 279 + drm_dp_update_payload_part2(&intel_dp->mst_mgr); 280 280 if (pipe_config->has_audio) 281 281 intel_audio_codec_enable(encoder, pipe_config, conn_state); 282 282 }
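The MST change above re-homes the DDI pipe-clock toggles so they bracket the payload programming: the clock comes up only after the VCPI allocation and first payload write in pre-enable, and goes down before the ACT check and payload teardown in post-disable. A failed VCPI allocation also no longer returns early; it is logged and the link-count bookkeeping below it still runs. A sketch of the resulting ordering, with the surrounding code elided:

	/* pre-enable: payload first, then pipe clock */
	drm_dp_mst_allocate_vcpi(&intel_dp->mst_mgr, connector->port,
				 pipe_config->pbn, pipe_config->dp_m_n.tu);
	drm_dp_update_payload_part1(&intel_dp->mst_mgr);
	intel_ddi_enable_pipe_clock(pipe_config);

	/* post-disable: pipe clock off first, then tear the payload down */
	intel_ddi_disable_pipe_clock(old_crtc_state);
	drm_dp_check_act_status(&intel_dp->mst_mgr);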
+46 -16
drivers/gpu/drm/i915/intel_dpll_mgr.c
··· 2212 2212 params->dco_fraction = dco & 0x7fff; 2213 2213 } 2214 2214 2215 + int cnl_hdmi_pll_ref_clock(struct drm_i915_private *dev_priv) 2216 + { 2217 + int ref_clock = dev_priv->cdclk.hw.ref; 2218 + 2219 + /* 2220 + * For ICL+, the spec states: if reference frequency is 38.4, 2221 + * use 19.2 because the DPLL automatically divides that by 2. 2222 + */ 2223 + if (INTEL_GEN(dev_priv) >= 11 && ref_clock == 38400) 2224 + ref_clock = 19200; 2225 + 2226 + return ref_clock; 2227 + } 2228 + 2215 2229 static bool 2216 2230 cnl_ddi_calculate_wrpll(int clock, 2217 2231 struct drm_i915_private *dev_priv, ··· 2265 2251 2266 2252 cnl_wrpll_get_multipliers(best_div, &pdiv, &qdiv, &kdiv); 2267 2253 2268 - ref_clock = dev_priv->cdclk.hw.ref; 2269 - 2270 - /* 2271 - * For ICL, the spec states: if reference frequency is 38.4, use 19.2 2272 - * because the DPLL automatically divides that by 2. 2273 - */ 2274 - if (IS_ICELAKE(dev_priv) && ref_clock == 38400) 2275 - ref_clock = 19200; 2254 + ref_clock = cnl_hdmi_pll_ref_clock(dev_priv); 2276 2255 2277 2256 cnl_wrpll_params_populate(wrpll_params, best_dco, ref_clock, pdiv, qdiv, 2278 2257 kdiv); ··· 2459 2452 .pdiv = 0x1 /* 2 */, .kdiv = 1, .qdiv_mode = 0, .qdiv_ratio = 0}, 2460 2453 }; 2461 2454 2455 + static const struct skl_wrpll_params icl_tbt_pll_24MHz_values = { 2456 + .dco_integer = 0x151, .dco_fraction = 0x4000, 2457 + .pdiv = 0x4 /* 5 */, .kdiv = 1, .qdiv_mode = 0, .qdiv_ratio = 0, 2458 + }; 2459 + 2460 + static const struct skl_wrpll_params icl_tbt_pll_19_2MHz_values = { 2461 + .dco_integer = 0x1A5, .dco_fraction = 0x7000, 2462 + .pdiv = 0x4 /* 5 */, .kdiv = 1, .qdiv_mode = 0, .qdiv_ratio = 0, 2463 + }; 2464 + 2462 2465 static bool icl_calc_dp_combo_pll(struct drm_i915_private *dev_priv, int clock, 2463 2466 struct skl_wrpll_params *pll_params) 2464 2467 { ··· 2511 2494 return true; 2512 2495 } 2513 2496 2497 + static bool icl_calc_tbt_pll(struct drm_i915_private *dev_priv, int clock, 2498 + struct skl_wrpll_params *pll_params) 2499 + { 2500 + *pll_params = dev_priv->cdclk.hw.ref == 24000 ? 
2501 + icl_tbt_pll_24MHz_values : icl_tbt_pll_19_2MHz_values; 2502 + return true; 2503 + } 2504 + 2514 2505 static bool icl_calc_dpll_state(struct intel_crtc_state *crtc_state, 2515 2506 struct intel_encoder *encoder, int clock, 2516 2507 struct intel_dpll_hw_state *pll_state) ··· 2528 2503 struct skl_wrpll_params pll_params = { 0 }; 2529 2504 bool ret; 2530 2505 2531 - if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_HDMI)) 2506 + if (intel_port_is_tc(dev_priv, encoder->port)) 2507 + ret = icl_calc_tbt_pll(dev_priv, clock, &pll_params); 2508 + else if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_HDMI)) 2532 2509 ret = cnl_ddi_calculate_wrpll(clock, dev_priv, &pll_params); 2533 2510 else 2534 2511 ret = icl_calc_dp_combo_pll(dev_priv, clock, &pll_params); ··· 2650 2623 2651 2624 for (div2 = 10; div2 > 0; div2--) { 2652 2625 int dco = div1 * div2 * clock_khz * 5; 2653 - int a_divratio, tlinedrv, inputsel, hsdiv; 2626 + int a_divratio, tlinedrv, inputsel; 2627 + u32 hsdiv; 2654 2628 2655 2629 if (dco < dco_min_freq || dco > dco_max_freq) 2656 2630 continue; ··· 2670 2642 MISSING_CASE(div1); 2671 2643 /* fall through */ 2672 2644 case 2: 2673 - hsdiv = 0; 2645 + hsdiv = MG_CLKTOP2_HSCLKCTL_HSDIV_RATIO_2; 2674 2646 break; 2675 2647 case 3: 2676 - hsdiv = 1; 2648 + hsdiv = MG_CLKTOP2_HSCLKCTL_HSDIV_RATIO_3; 2677 2649 break; 2678 2650 case 5: 2679 - hsdiv = 2; 2651 + hsdiv = MG_CLKTOP2_HSCLKCTL_HSDIV_RATIO_5; 2680 2652 break; 2681 2653 case 7: 2682 - hsdiv = 3; 2654 + hsdiv = MG_CLKTOP2_HSCLKCTL_HSDIV_RATIO_7; 2683 2655 break; 2684 2656 } 2685 2657 ··· 2693 2665 state->mg_clktop2_hsclkctl = 2694 2666 MG_CLKTOP2_HSCLKCTL_TLINEDRV_CLKSEL(tlinedrv) | 2695 2667 MG_CLKTOP2_HSCLKCTL_CORE_INPUTSEL(inputsel) | 2696 - MG_CLKTOP2_HSCLKCTL_HSDIV_RATIO(hsdiv) | 2668 + hsdiv | 2697 2669 MG_CLKTOP2_HSCLKCTL_DSDIV_RATIO(div2); 2698 2670 2699 2671 return true; ··· 2874 2846 icl_get_dpll(struct intel_crtc *crtc, struct intel_crtc_state *crtc_state, 2875 2847 struct intel_encoder *encoder) 2876 2848 { 2849 + struct intel_digital_port *intel_dig_port = 2850 + enc_to_dig_port(&encoder->base); 2877 2851 struct intel_shared_dpll *pll; 2878 2852 struct intel_dpll_hw_state pll_state = {}; 2879 2853 enum port port = encoder->port; ··· 2895 2865 case PORT_D: 2896 2866 case PORT_E: 2897 2867 case PORT_F: 2898 - if (0 /* TODO: TBT PLLs */) { 2868 + if (intel_dig_port->tc_type == TC_PORT_TBT) { 2899 2869 min = DPLL_ID_ICL_TBTPLL; 2900 2870 max = min; 2901 2871 ret = icl_calc_dpll_state(crtc_state, encoder, clock,
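Two points in the intel_dpll_mgr.c hunk. The 38.4 MHz quirk is factored out into cnl_hdmi_pll_ref_clock(), now also exported via the header: on gen11+ a 38.4 MHz reference is programmed as 19.2 MHz because the DPLL divides it by two internally, so ref_clock == 38400 simply yields 19200. And icl_calc_dpll_state() grows a three-way dispatch, with Type-C ports taking the new canned TBT PLL tables:

	/* Condensed from the hunk above: TC ports use the fixed TBT PLL
	 * parameters (24 vs 19.2 MHz reference), HDMI computes a WRPLL,
	 * everything else uses the DP combo-PLL tables. */
	if (intel_port_is_tc(dev_priv, encoder->port))
		ret = icl_calc_tbt_pll(dev_priv, clock, &pll_params);
	else if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_HDMI))
		ret = cnl_ddi_calculate_wrpll(clock, dev_priv, &pll_params);
	else
		ret = icl_calc_dp_combo_pll(dev_priv, clock, &pll_params);

icl_get_dpll() matches this by pinning TC_PORT_TBT ports to DPLL_ID_ICL_TBTPLL, replacing the old "if (0 /* TODO: TBT PLLs */)" placeholder.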
+1
drivers/gpu/drm/i915/intel_dpll_mgr.h
··· 344 344 struct intel_dpll_hw_state *hw_state); 345 345 int icl_calc_dp_combo_pll_link(struct drm_i915_private *dev_priv, 346 346 uint32_t pll_id); 347 + int cnl_hdmi_pll_ref_clock(struct drm_i915_private *dev_priv); 347 348 348 349 #endif /* _INTEL_DPLL_MGR_H_ */
+33 -10
drivers/gpu/drm/i915/intel_drv.h
··· 972 972 void (*disable_plane)(struct intel_plane *plane, 973 973 struct intel_crtc *crtc); 974 974 bool (*get_hw_state)(struct intel_plane *plane, enum pipe *pipe); 975 - int (*check_plane)(struct intel_plane *plane, 976 - struct intel_crtc_state *crtc_state, 977 - struct intel_plane_state *state); 975 + int (*check_plane)(struct intel_crtc_state *crtc_state, 976 + struct intel_plane_state *plane_state); 978 977 }; 979 978 980 979 struct intel_watermark_params { ··· 1167 1168 bool release_cl2_override; 1168 1169 uint8_t max_lanes; 1169 1170 enum intel_display_power_domain ddi_io_power_domain; 1171 + enum tc_port_type tc_type; 1170 1172 1171 1173 void (*write_infoframe)(struct drm_encoder *encoder, 1172 1174 const struct intel_crtc_state *crtc_state, ··· 1312 1312 dp_to_lspcon(struct intel_dp *intel_dp) 1313 1313 { 1314 1314 return &dp_to_dig_port(intel_dp)->lspcon; 1315 + } 1316 + 1317 + static inline struct drm_i915_private * 1318 + dp_to_i915(struct intel_dp *intel_dp) 1319 + { 1320 + return to_i915(dp_to_dig_port(intel_dp)->base.base.dev); 1315 1321 } 1316 1322 1317 1323 static inline struct intel_digital_port * ··· 1723 1717 unsigned int frontbuffer_bits); 1724 1718 void intel_edp_drrs_flush(struct drm_i915_private *dev_priv, 1725 1719 unsigned int frontbuffer_bits); 1720 + void icl_program_mg_dp_mode(struct intel_dp *intel_dp); 1721 + void icl_enable_phy_clock_gating(struct intel_digital_port *dig_port); 1722 + void icl_disable_phy_clock_gating(struct intel_digital_port *dig_port); 1726 1723 1727 1724 void 1728 1725 intel_dp_program_link_training_pattern(struct intel_dp *intel_dp, ··· 1939 1930 const struct intel_crtc_state *crtc_state); 1940 1931 void intel_psr_disable(struct intel_dp *intel_dp, 1941 1932 const struct intel_crtc_state *old_crtc_state); 1933 + int intel_psr_set_debugfs_mode(struct drm_i915_private *dev_priv, 1934 + struct drm_modeset_acquire_ctx *ctx, 1935 + u64 value); 1942 1936 void intel_psr_invalidate(struct drm_i915_private *dev_priv, 1943 1937 unsigned frontbuffer_bits, 1944 1938 enum fb_op_origin origin); ··· 1951 1939 void intel_psr_init(struct drm_i915_private *dev_priv); 1952 1940 void intel_psr_compute_config(struct intel_dp *intel_dp, 1953 1941 struct intel_crtc_state *crtc_state); 1954 - void intel_psr_irq_control(struct drm_i915_private *dev_priv, bool debug); 1942 + void intel_psr_irq_control(struct drm_i915_private *dev_priv, u32 debug); 1955 1943 void intel_psr_irq_handler(struct drm_i915_private *dev_priv, u32 psr_iir); 1956 1944 void intel_psr_short_pulse(struct intel_dp *intel_dp); 1957 - int intel_psr_wait_for_idle(const struct intel_crtc_state *new_crtc_state); 1945 + int intel_psr_wait_for_idle(const struct intel_crtc_state *new_crtc_state, 1946 + u32 *out_value); 1958 1947 1959 1948 /* intel_runtime_pm.c */ 1960 1949 int intel_power_domains_init(struct drm_i915_private *); 1961 - void intel_power_domains_fini(struct drm_i915_private *); 1950 + void intel_power_domains_cleanup(struct drm_i915_private *dev_priv); 1962 1951 void intel_power_domains_init_hw(struct drm_i915_private *dev_priv, bool resume); 1963 - void intel_power_domains_suspend(struct drm_i915_private *dev_priv); 1964 - void intel_power_domains_verify_state(struct drm_i915_private *dev_priv); 1952 + void intel_power_domains_fini_hw(struct drm_i915_private *dev_priv); 1953 + void intel_power_domains_enable(struct drm_i915_private *dev_priv); 1954 + void intel_power_domains_disable(struct drm_i915_private *dev_priv); 1955 + 1956 + enum i915_drm_suspend_mode { 1957 + 
I915_DRM_SUSPEND_IDLE, 1958 + I915_DRM_SUSPEND_MEM, 1959 + I915_DRM_SUSPEND_HIBERNATE, 1960 + }; 1961 + 1962 + void intel_power_domains_suspend(struct drm_i915_private *dev_priv, 1963 + enum i915_drm_suspend_mode); 1964 + void intel_power_domains_resume(struct drm_i915_private *dev_priv); 1965 1965 void bxt_display_core_init(struct drm_i915_private *dev_priv, bool resume); 1966 1966 void bxt_display_core_uninit(struct drm_i915_private *dev_priv); 1967 1967 void intel_runtime_pm_enable(struct drm_i915_private *dev_priv); 1968 + void intel_runtime_pm_disable(struct drm_i915_private *dev_priv); 1968 1969 const char * 1969 1970 intel_display_power_domain_str(enum intel_display_power_domain domain); 1970 1971 ··· 2054 2029 bool intel_runtime_pm_get_if_in_use(struct drm_i915_private *dev_priv); 2055 2030 void intel_runtime_pm_get_noresume(struct drm_i915_private *dev_priv); 2056 2031 void intel_runtime_pm_put(struct drm_i915_private *dev_priv); 2057 - 2058 - void intel_display_set_init_power(struct drm_i915_private *dev, bool enable); 2059 2032 2060 2033 void chv_phy_powergate_lanes(struct intel_encoder *encoder, 2061 2034 bool override, unsigned int mask);
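The intel_drv.h hunk reworks the display power-domain lifecycle into finer-grained steps (init/cleanup, init_hw/fini_hw, enable/disable, suspend/resume), with suspend now taking a target state through the new i915_drm_suspend_mode enum. The call sites are not in the hunks shown here, so the following is only an assumption about how a suspend path would pick a mode:

	/* Hypothetical caller (sketch): an S3 path would ask for MEM and a
	 * hibernation path for HIBERNATE; the real call sites live outside
	 * these hunks. */
	intel_power_domains_suspend(dev_priv,
				    hibernation ? I915_DRM_SUSPEND_HIBERNATE
						: I915_DRM_SUSPEND_MEM);
	/* ... system sleep ... */
	intel_power_domains_resume(dev_priv);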
+41 -52
drivers/gpu/drm/i915/intel_engine_cs.c
··· 513 513 goto err_unref; 514 514 } 515 515 516 - ret = i915_vma_pin(vma, 0, 4096, PIN_GLOBAL | PIN_HIGH); 516 + ret = i915_vma_pin(vma, 0, 0, PIN_GLOBAL | PIN_HIGH); 517 517 if (ret) 518 518 goto err_unref; 519 519 ··· 527 527 528 528 void intel_engine_cleanup_scratch(struct intel_engine_cs *engine) 529 529 { 530 - i915_vma_unpin_and_release(&engine->scratch); 531 - } 532 - 533 - static void cleanup_phys_status_page(struct intel_engine_cs *engine) 534 - { 535 - struct drm_i915_private *dev_priv = engine->i915; 536 - 537 - if (!dev_priv->status_page_dmah) 538 - return; 539 - 540 - drm_pci_free(&dev_priv->drm, dev_priv->status_page_dmah); 541 - engine->status_page.page_addr = NULL; 530 + i915_vma_unpin_and_release(&engine->scratch, 0); 542 531 } 543 532 544 533 static void cleanup_status_page(struct intel_engine_cs *engine) 545 534 { 546 - struct i915_vma *vma; 547 - struct drm_i915_gem_object *obj; 535 + if (HWS_NEEDS_PHYSICAL(engine->i915)) { 536 + void *addr = fetch_and_zero(&engine->status_page.page_addr); 548 537 549 - vma = fetch_and_zero(&engine->status_page.vma); 550 - if (!vma) 551 - return; 538 + __free_page(virt_to_page(addr)); 539 + } 552 540 553 - obj = vma->obj; 554 - 555 - i915_vma_unpin(vma); 556 - i915_vma_close(vma); 557 - 558 - i915_gem_object_unpin_map(obj); 559 - __i915_gem_object_release_unless_active(obj); 541 + i915_vma_unpin_and_release(&engine->status_page.vma, 542 + I915_VMA_RELEASE_MAP); 560 543 } 561 544 562 545 static int init_status_page(struct intel_engine_cs *engine) ··· 581 598 flags |= PIN_MAPPABLE; 582 599 else 583 600 flags |= PIN_HIGH; 584 - ret = i915_vma_pin(vma, 0, 4096, flags); 601 + ret = i915_vma_pin(vma, 0, 0, flags); 585 602 if (ret) 586 603 goto err; 587 604 ··· 605 622 606 623 static int init_phys_status_page(struct intel_engine_cs *engine) 607 624 { 608 - struct drm_i915_private *dev_priv = engine->i915; 625 + struct page *page; 609 626 610 - GEM_BUG_ON(engine->id != RCS); 611 - 612 - dev_priv->status_page_dmah = 613 - drm_pci_alloc(&dev_priv->drm, PAGE_SIZE, PAGE_SIZE); 614 - if (!dev_priv->status_page_dmah) 627 + /* 628 + * Though the HWS register does support 36bit addresses, historically 629 + * we have had hangs and corruption reported due to wild writes if 630 + * the HWS is placed above 4G. 631 + */ 632 + page = alloc_page(GFP_KERNEL | __GFP_DMA32 | __GFP_ZERO); 633 + if (!page) 615 634 return -ENOMEM; 616 635 617 - engine->status_page.page_addr = dev_priv->status_page_dmah->vaddr; 618 - memset(engine->status_page.page_addr, 0, PAGE_SIZE); 636 + engine->status_page.page_addr = page_address(page); 619 637 620 638 return 0; 621 639 } ··· 706 722 707 723 intel_engine_cleanup_scratch(engine); 708 724 709 - if (HWS_NEEDS_PHYSICAL(engine->i915)) 710 - cleanup_phys_status_page(engine); 711 - else 712 - cleanup_status_page(engine); 725 + cleanup_status_page(engine); 713 726 714 727 intel_engine_fini_breadcrumbs(engine); 715 728 intel_engine_cleanup_cmd_parser(engine); ··· 779 798 POSTING_READ_FW(mode); 780 799 781 800 return err; 801 + } 802 + 803 + void intel_engine_cancel_stop_cs(struct intel_engine_cs *engine) 804 + { 805 + struct drm_i915_private *dev_priv = engine->i915; 806 + 807 + GEM_TRACE("%s\n", engine->name); 808 + 809 + I915_WRITE_FW(RING_MI_MODE(engine->mmio_base), 810 + _MASKED_BIT_DISABLE(STOP_RING)); 782 811 } 783 812 784 813 const char *i915_cache_level_str(struct drm_i915_private *i915, int type) ··· 971 980 return true; 972 981 973 982 /* Any inflight/incomplete requests? 
*/ 974 - if (!i915_seqno_passed(intel_engine_get_seqno(engine), 975 - intel_engine_last_submit(engine))) 983 + if (!intel_engine_signaled(engine, intel_engine_last_submit(engine))) 976 984 return false; 977 985 978 986 if (I915_SELFTEST_ONLY(engine->breadcrumbs.mock)) ··· 1338 1348 1339 1349 if (HAS_EXECLISTS(dev_priv)) { 1340 1350 const u32 *hws = &engine->status_page.page_addr[I915_HWS_CSB_BUF0_INDEX]; 1341 - u32 ptr, read, write; 1342 1351 unsigned int idx; 1352 + u8 read, write; 1343 1353 1344 1354 drm_printf(m, "\tExeclist status: 0x%08x %08x\n", 1345 1355 I915_READ(RING_EXECLIST_STATUS_LO(engine)), 1346 1356 I915_READ(RING_EXECLIST_STATUS_HI(engine))); 1347 1357 1348 - ptr = I915_READ(RING_CONTEXT_STATUS_PTR(engine)); 1349 - read = GEN8_CSB_READ_PTR(ptr); 1350 - write = GEN8_CSB_WRITE_PTR(ptr); 1351 - drm_printf(m, "\tExeclist CSB read %d [%d cached], write %d [%d from hws], tasklet queued? %s (%s)\n", 1352 - read, execlists->csb_head, 1353 - write, 1354 - intel_read_status_page(engine, intel_hws_csb_write_index(engine->i915)), 1358 + read = execlists->csb_head; 1359 + write = READ_ONCE(*execlists->csb_write); 1360 + 1361 + drm_printf(m, "\tExeclist CSB read %d, write %d [mmio:%d], tasklet queued? %s (%s)\n", 1362 + read, write, 1363 + GEN8_CSB_WRITE_PTR(I915_READ(RING_CONTEXT_STATUS_PTR(engine))), 1355 1364 yesno(test_bit(TASKLET_STATE_SCHED, 1356 1365 &engine->execlists.tasklet.state)), 1357 1366 enableddisabled(!atomic_read(&engine->execlists.tasklet.count))); ··· 1362 1373 write += GEN8_CSB_ENTRIES; 1363 1374 while (read < write) { 1364 1375 idx = ++read % GEN8_CSB_ENTRIES; 1365 - drm_printf(m, "\tExeclist CSB[%d]: 0x%08x [0x%08x in hwsp], context: %d [%d in hwsp]\n", 1376 + drm_printf(m, "\tExeclist CSB[%d]: 0x%08x [mmio:0x%08x], context: %d [mmio:%d]\n", 1366 1377 idx, 1367 - I915_READ(RING_CONTEXT_STATUS_BUF_LO(engine, idx)), 1368 1378 hws[idx * 2], 1369 - I915_READ(RING_CONTEXT_STATUS_BUF_HI(engine, idx)), 1370 - hws[idx * 2 + 1]); 1379 + I915_READ(RING_CONTEXT_STATUS_BUF_LO(engine, idx)), 1380 + hws[idx * 2 + 1], 1381 + I915_READ(RING_CONTEXT_STATUS_BUF_HI(engine, idx))); 1371 1382 } 1372 1383 1373 1384 rcu_read_lock();
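Two recurring patterns in the intel_engine_cs.c hunk. The physical status page drops the drm_pci_alloc() DMA handle for a plain alloc_page(GFP_KERNEL | __GFP_DMA32 | __GFP_ZERO), encoding the keep-the-HWS-below-4G constraint directly in the allocation. And open-coded seqno checks are replaced by intel_engine_signaled(); its definition is not in these hunks, but the replaced call sites here and in intel_hangcheck.c imply it wraps the old comparison, presumably something like:

	/* Assumed shape of the new helper -- inferred from its call sites;
	 * the actual definition is not part of this diff. */
	static inline bool
	intel_engine_signaled(struct intel_engine_cs *engine, u32 seqno)
	{
		return i915_seqno_passed(intel_engine_get_seqno(engine), seqno);
	}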
+46 -56
drivers/gpu/drm/i915/intel_guc.c
··· 27 27 #include "intel_guc_submission.h" 28 28 #include "i915_drv.h" 29 29 30 - static void guc_init_ggtt_pin_bias(struct intel_guc *guc); 31 - 32 30 static void gen8_guc_raise_irq(struct intel_guc *guc) 33 31 { 34 32 struct drm_i915_private *dev_priv = guc_to_i915(guc); ··· 126 128 127 129 static void guc_fini_wq(struct intel_guc *guc) 128 130 { 129 - struct drm_i915_private *dev_priv = guc_to_i915(guc); 131 + struct workqueue_struct *wq; 130 132 131 - if (HAS_LOGICAL_RING_PREEMPTION(dev_priv) && 132 - USES_GUC_SUBMISSION(dev_priv)) 133 - destroy_workqueue(guc->preempt_wq); 133 + wq = fetch_and_zero(&guc->preempt_wq); 134 + if (wq) 135 + destroy_workqueue(wq); 134 136 135 - destroy_workqueue(guc->log.relay.flush_wq); 137 + wq = fetch_and_zero(&guc->log.relay.flush_wq); 138 + if (wq) 139 + destroy_workqueue(wq); 136 140 } 137 141 138 142 int intel_guc_init_misc(struct intel_guc *guc) 139 143 { 140 144 struct drm_i915_private *i915 = guc_to_i915(guc); 141 145 int ret; 142 - 143 - guc_init_ggtt_pin_bias(guc); 144 146 145 147 ret = guc_init_wq(guc); 146 148 if (ret) ··· 168 170 169 171 vaddr = i915_gem_object_pin_map(vma->obj, I915_MAP_WB); 170 172 if (IS_ERR(vaddr)) { 171 - i915_vma_unpin_and_release(&vma); 173 + i915_vma_unpin_and_release(&vma, 0); 172 174 return PTR_ERR(vaddr); 173 175 } 174 176 ··· 180 182 181 183 static void guc_shared_data_destroy(struct intel_guc *guc) 182 184 { 183 - i915_gem_object_unpin_map(guc->shared_data->obj); 184 - i915_vma_unpin_and_release(&guc->shared_data); 185 + i915_vma_unpin_and_release(&guc->shared_data, I915_VMA_RELEASE_MAP); 185 186 } 186 187 187 188 int intel_guc_init(struct intel_guc *guc) ··· 581 584 * 582 585 * :: 583 586 * 584 - * +==============> +====================+ <== GUC_GGTT_TOP 585 - * ^ | | 586 - * | | | 587 - * | | DRAM | 588 - * | | Memory | 589 - * | | | 590 - * GuC | | 591 - * Address +========> +====================+ <== WOPCM Top 592 - * Space ^ | HW contexts RSVD | 593 - * | | | WOPCM | 594 - * | | +==> +--------------------+ <== GuC WOPCM Top 595 - * | GuC ^ | | 596 - * | GGTT | | | 597 - * | Pin GuC | GuC | 598 - * | Bias WOPCM | WOPCM | 599 - * | | Size | | 600 - * | | | | | 601 - * v v v | | 602 - * +=====+=====+==> +====================+ <== GuC WOPCM Base 603 - * | Non-GuC WOPCM | 604 - * | (HuC/Reserved) | 605 - * +====================+ <== WOPCM Base 587 + * +===========> +====================+ <== FFFF_FFFF 588 + * ^ | Reserved | 589 + * | +====================+ <== GUC_GGTT_TOP 590 + * | | | 591 + * | | DRAM | 592 + * GuC | | 593 + * Address +===> +====================+ <== GuC ggtt_pin_bias 594 + * Space ^ | | 595 + * | | | | 596 + * | GuC | GuC | 597 + * | WOPCM | WOPCM | 598 + * | Size | | 599 + * | | | | 600 + * v v | | 601 + * +=======+===> +====================+ <== 0000_0000 606 602 * 607 - * The lower part of GuC Address Space [0, ggtt_pin_bias) is mapped to WOPCM 603 + * The lower part of GuC Address Space [0, ggtt_pin_bias) is mapped to GuC WOPCM 608 604 * while upper part of GuC Address Space [ggtt_pin_bias, GUC_GGTT_TOP) is mapped 609 - * to DRAM. The value of the GuC ggtt_pin_bias is determined by WOPCM size and 610 - * actual GuC WOPCM size. 605 + * to DRAM. The value of the GuC ggtt_pin_bias is the GuC WOPCM size. 611 606 */ 612 - 613 - /** 614 - * guc_init_ggtt_pin_bias() - Initialize the GuC ggtt_pin_bias value. 615 - * @guc: intel_guc structure. 616 - * 617 - * This function will calculate and initialize the ggtt_pin_bias value based on 618 - * overall WOPCM size and GuC WOPCM size. 
619 - */ 620 - static void guc_init_ggtt_pin_bias(struct intel_guc *guc) 621 - { 622 - struct drm_i915_private *i915 = guc_to_i915(guc); 623 - 624 - GEM_BUG_ON(!i915->wopcm.size); 625 - GEM_BUG_ON(i915->wopcm.size < i915->wopcm.guc.base); 626 - 627 - guc->ggtt_pin_bias = i915->wopcm.size - i915->wopcm.guc.base; 628 - } 629 607 630 608 /** 631 609 * intel_guc_allocate_vma() - Allocate a GGTT VMA for GuC usage ··· 620 648 struct drm_i915_private *dev_priv = guc_to_i915(guc); 621 649 struct drm_i915_gem_object *obj; 622 650 struct i915_vma *vma; 651 + u64 flags; 623 652 int ret; 624 653 625 654 obj = i915_gem_object_create(dev_priv, size); ··· 631 658 if (IS_ERR(vma)) 632 659 goto err; 633 660 634 - ret = i915_vma_pin(vma, 0, PAGE_SIZE, 635 - PIN_GLOBAL | PIN_OFFSET_BIAS | guc->ggtt_pin_bias); 661 + flags = PIN_GLOBAL | PIN_OFFSET_BIAS | i915_ggtt_pin_bias(vma); 662 + ret = i915_vma_pin(vma, 0, 0, flags); 636 663 if (ret) { 637 664 vma = ERR_PTR(ret); 638 665 goto err; ··· 643 670 err: 644 671 i915_gem_object_put(obj); 645 672 return vma; 673 + } 674 + 675 + /** 676 + * intel_guc_reserved_gtt_size() 677 + * @guc: intel_guc structure 678 + * 679 + * The GuC WOPCM mapping shadows the lower part of the GGTT, so if we are using 680 + * GuC we can't have any objects pinned in that region. This function returns 681 + * the size of the shadowed region. 682 + * 683 + * Returns: 684 + * 0 if GuC is not present or not in use. 685 + * Otherwise, the GuC WOPCM size. 686 + */ 687 + u32 intel_guc_reserved_gtt_size(struct intel_guc *guc) 688 + { 689 + return guc_to_i915(guc)->wopcm.guc.size; 646 690 }
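The net effect of the intel_guc.c hunk is that the GuC no longer caches its own ggtt_pin_bias: per the updated diagram, the bias is just the GuC WOPCM size, and pinning sites derive it from the VMA via i915_ggtt_pin_bias(). The pinning pattern, condensed from intel_guc_allocate_vma() above:

	/* Keep GuC-visible objects above the WOPCM-shadowed low GGTT range
	 * by biasing the pin offset (condensed from the hunk above). */
	u64 flags = PIN_GLOBAL | PIN_OFFSET_BIAS | i915_ggtt_pin_bias(vma);
	ret = i915_vma_pin(vma, 0, 0, flags);

The new intel_guc_reserved_gtt_size() exposes the size of that shadowed region (0 when GuC is not in use), presumably so the GGTT setup code can carve it out.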
+5 -7
drivers/gpu/drm/i915/intel_guc.h
··· 49 49 struct intel_guc_log log; 50 50 struct intel_guc_ct ct; 51 51 52 - /* Offset where Non-WOPCM memory starts. */ 53 - u32 ggtt_pin_bias; 54 - 55 52 /* Log snapshot if GuC errors during load */ 56 53 struct drm_i915_gem_object *load_err_log; 57 54 ··· 127 130 * @vma: i915 graphics virtual memory area. 128 131 * 129 132 * GuC does not allow any gfx GGTT address that falls into range 130 - * [0, GuC ggtt_pin_bias), which is reserved for Boot ROM, SRAM and WOPCM. 131 - * Currently, in order to exclude [0, GuC ggtt_pin_bias) address space from 133 + * [0, ggtt.pin_bias), which is reserved for Boot ROM, SRAM and WOPCM. 134 + * Currently, in order to exclude [0, ggtt.pin_bias) address space from 132 135 * GGTT, all gfx objects used by GuC are allocated with intel_guc_allocate_vma() 133 - * and pinned with PIN_OFFSET_BIAS along with the value of GuC ggtt_pin_bias. 136 + * and pinned with PIN_OFFSET_BIAS along with the value of ggtt.pin_bias. 134 137 * 135 138 * Return: GGTT offset of the @vma. 136 139 */ ··· 139 142 { 140 143 u32 offset = i915_ggtt_offset(vma); 141 144 142 - GEM_BUG_ON(offset < guc->ggtt_pin_bias); 145 + GEM_BUG_ON(offset < i915_ggtt_pin_bias(vma)); 143 146 GEM_BUG_ON(range_overflows_t(u64, offset, vma->size, GUC_GGTT_TOP)); 144 147 145 148 return offset; ··· 165 168 int intel_guc_suspend(struct intel_guc *guc); 166 169 int intel_guc_resume(struct intel_guc *guc); 167 170 struct i915_vma *intel_guc_allocate_vma(struct intel_guc *guc, u32 size); 171 + u32 intel_guc_reserved_gtt_size(struct intel_guc *guc); 168 172 169 173 static inline int intel_guc_sanitize(struct intel_guc *guc) 170 174 {
+1 -1
drivers/gpu/drm/i915/intel_guc_ads.c
··· 148 148 149 149 void intel_guc_ads_destroy(struct intel_guc *guc) 150 150 { 151 - i915_vma_unpin_and_release(&guc->ads_vma); 151 + i915_vma_unpin_and_release(&guc->ads_vma, 0); 152 152 }
+2 -5
drivers/gpu/drm/i915/intel_guc_ct.c
··· 204 204 return 0; 205 205 206 206 err_vma: 207 - i915_vma_unpin_and_release(&ctch->vma); 207 + i915_vma_unpin_and_release(&ctch->vma, 0); 208 208 err_out: 209 209 CT_DEBUG_DRIVER("CT: channel %d initialization failed; err=%d\n", 210 210 ctch->owner, err); ··· 214 214 static void ctch_fini(struct intel_guc *guc, 215 215 struct intel_guc_ct_channel *ctch) 216 216 { 217 - GEM_BUG_ON(!ctch->vma); 218 - 219 - i915_gem_object_unpin_map(ctch->vma->obj); 220 - i915_vma_unpin_and_release(&ctch->vma); 217 + i915_vma_unpin_and_release(&ctch->vma, I915_VMA_RELEASE_MAP); 221 218 } 222 219 223 220 static int ctch_open(struct intel_guc *guc,
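A signature change threads through all the GuC teardown paths above: i915_vma_unpin_and_release() now takes a flags argument, and the repeated unpin-the-map-then-release pairs collapse into one call. The before/after shape, as seen in guc_shared_data_destroy() and ctch_fini():

	/* Before: two steps, easy to get out of order or to leak. */
	i915_gem_object_unpin_map(vma->obj);
	i915_vma_unpin_and_release(&vma);

	/* After: one call; I915_VMA_RELEASE_MAP tells the release path to
	 * drop the kmap too. Callers that never mapped the object pass 0. */
	i915_vma_unpin_and_release(&vma, I915_VMA_RELEASE_MAP);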
+1
drivers/gpu/drm/i915/intel_guc_fwif.h
··· 49 49 #define WQ_TYPE_BATCH_BUF (0x1 << WQ_TYPE_SHIFT) 50 50 #define WQ_TYPE_PSEUDO (0x2 << WQ_TYPE_SHIFT) 51 51 #define WQ_TYPE_INORDER (0x3 << WQ_TYPE_SHIFT) 52 + #define WQ_TYPE_NOOP (0x4 << WQ_TYPE_SHIFT) 52 53 #define WQ_TARGET_SHIFT 10 53 54 #define WQ_LEN_SHIFT 16 54 55 #define WQ_NO_WCFLUSH_WAIT (1 << 27)
+1 -1
drivers/gpu/drm/i915/intel_guc_log.c
··· 498 498 499 499 void intel_guc_log_destroy(struct intel_guc_log *log) 500 500 { 501 - i915_vma_unpin_and_release(&log->vma); 501 + i915_vma_unpin_and_release(&log->vma, 0); 502 502 } 503 503 504 504 int intel_guc_log_set_level(struct intel_guc_log *log, u32 level)
+20 -15
drivers/gpu/drm/i915/intel_guc_submission.c
··· 317 317 318 318 vaddr = i915_gem_object_pin_map(vma->obj, I915_MAP_WB); 319 319 if (IS_ERR(vaddr)) { 320 - i915_vma_unpin_and_release(&vma); 320 + i915_vma_unpin_and_release(&vma, 0); 321 321 return PTR_ERR(vaddr); 322 322 } 323 323 ··· 331 331 static void guc_stage_desc_pool_destroy(struct intel_guc *guc) 332 332 { 333 333 ida_destroy(&guc->stage_ids); 334 - i915_gem_object_unpin_map(guc->stage_desc_pool->obj); 335 - i915_vma_unpin_and_release(&guc->stage_desc_pool); 334 + i915_vma_unpin_and_release(&guc->stage_desc_pool, I915_VMA_RELEASE_MAP); 336 335 } 337 336 338 337 /* ··· 456 457 */ 457 458 BUILD_BUG_ON(wqi_size != 16); 458 459 460 + /* We expect the WQ to be active if we're appending items to it */ 461 + GEM_BUG_ON(desc->wq_status != WQ_STATUS_ACTIVE); 462 + 459 463 /* Free space is guaranteed. */ 460 464 wq_off = READ_ONCE(desc->tail); 461 465 GEM_BUG_ON(CIRC_SPACE(wq_off, READ_ONCE(desc->head), ··· 468 466 /* WQ starts from the page after doorbell / process_desc */ 469 467 wqi = client->vaddr + wq_off + GUC_DB_SIZE; 470 468 471 - /* Now fill in the 4-word work queue item */ 472 - wqi->header = WQ_TYPE_INORDER | 473 - (wqi_len << WQ_LEN_SHIFT) | 474 - (target_engine << WQ_TARGET_SHIFT) | 475 - WQ_NO_WCFLUSH_WAIT; 476 - wqi->context_desc = context_desc; 477 - wqi->submit_element_info = ring_tail << WQ_RING_TAIL_SHIFT; 478 - GEM_BUG_ON(ring_tail > WQ_RING_TAIL_MAX); 479 - wqi->fence_id = fence_id; 469 + if (I915_SELFTEST_ONLY(client->use_nop_wqi)) { 470 + wqi->header = WQ_TYPE_NOOP | (wqi_len << WQ_LEN_SHIFT); 471 + } else { 472 + /* Now fill in the 4-word work queue item */ 473 + wqi->header = WQ_TYPE_INORDER | 474 + (wqi_len << WQ_LEN_SHIFT) | 475 + (target_engine << WQ_TARGET_SHIFT) | 476 + WQ_NO_WCFLUSH_WAIT; 477 + wqi->context_desc = context_desc; 478 + wqi->submit_element_info = ring_tail << WQ_RING_TAIL_SHIFT; 479 + GEM_BUG_ON(ring_tail > WQ_RING_TAIL_MAX); 480 + wqi->fence_id = fence_id; 481 + } 480 482 481 483 /* Make the update visible to GuC */ 482 484 WRITE_ONCE(desc->tail, (wq_off + wqi_size) & (GUC_WQ_SIZE - 1)); ··· 1014 1008 err_vaddr: 1015 1009 i915_gem_object_unpin_map(client->vma->obj); 1016 1010 err_vma: 1017 - i915_vma_unpin_and_release(&client->vma); 1011 + i915_vma_unpin_and_release(&client->vma, 0); 1018 1012 err_id: 1019 1013 ida_simple_remove(&guc->stage_ids, client->stage_id); 1020 1014 err_client: ··· 1026 1020 { 1027 1021 unreserve_doorbell(client); 1028 1022 guc_stage_desc_fini(client->guc, client); 1029 - i915_gem_object_unpin_map(client->vma->obj); 1030 - i915_vma_unpin_and_release(&client->vma); 1023 + i915_vma_unpin_and_release(&client->vma, I915_VMA_RELEASE_MAP); 1031 1024 ida_simple_remove(&client->guc->stage_ids, client->stage_id); 1032 1025 kfree(client); 1033 1026 }
+4
drivers/gpu/drm/i915/intel_guc_submission.h
··· 28 28 #include <linux/spinlock.h> 29 29 30 30 #include "i915_gem.h" 31 + #include "i915_selftest.h" 31 32 32 33 struct drm_i915_private; 33 34 ··· 72 71 spinlock_t wq_lock; 73 72 /* Per-engine counts of GuC submissions */ 74 73 u64 submissions[I915_NUM_ENGINES]; 74 + 75 + /* For testing purposes, use nop WQ items instead of real ones */ 76 + I915_SELFTEST_DECLARE(bool use_nop_wqi); 75 77 }; 76 78 77 79 int intel_guc_submission_init(struct intel_guc *guc);
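Together, the last two hunks add a selftest-only escape hatch to the work-queue path: guc_wq_item_append() emits a WQ_TYPE_NOOP header (which the GuC skips) instead of a real WQ_TYPE_INORDER item whenever client->use_nop_wqi is set, and the flag itself only exists under I915_SELFTEST. The selftest that consumes it is not in these hunks, so this usage is an assumption:

	/* Hypothetical selftest usage (sketch): fill the WQ with items the
	 * GuC will ignore, then verify the head/tail accounting. */
	client->use_nop_wqi = true;
	/* ... submit requests, ring the doorbell, check desc->head/tail ... */
	client->use_nop_wqi = false;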
+1 -1
drivers/gpu/drm/i915/intel_hangcheck.c
··· 142 142 if (signaller->hangcheck.deadlock >= I915_NUM_ENGINES) 143 143 return -1; 144 144 145 - if (i915_seqno_passed(intel_engine_get_seqno(signaller), seqno)) 145 + if (intel_engine_signaled(signaller, seqno)) 146 146 return 1; 147 147 148 148 /* cursory check for an unkickable deadlock */
+3 -3
drivers/gpu/drm/i915/intel_hdcp.c
··· 57 57 58 58 /* PG1 (power well #1) needs to be enabled */ 59 59 for_each_power_well(dev_priv, power_well) { 60 - if (power_well->id == id) { 61 - enabled = power_well->ops->is_enabled(dev_priv, 62 - power_well); 60 + if (power_well->desc->id == id) { 61 + enabled = power_well->desc->ops->is_enabled(dev_priv, 62 + power_well); 63 63 break; 64 64 } 65 65 }
+7 -3
drivers/gpu/drm/i915/intel_hdmi.c
··· 1911 1911 static enum drm_connector_status 1912 1912 intel_hdmi_detect(struct drm_connector *connector, bool force) 1913 1913 { 1914 - enum drm_connector_status status; 1914 + enum drm_connector_status status = connector_status_disconnected; 1915 1915 struct drm_i915_private *dev_priv = to_i915(connector->dev); 1916 1916 struct intel_hdmi *intel_hdmi = intel_attached_hdmi(connector); 1917 + struct intel_encoder *encoder = &hdmi_to_dig_port(intel_hdmi)->base; 1917 1918 1918 1919 DRM_DEBUG_KMS("[CONNECTOR:%d:%s]\n", 1919 1920 connector->base.id, connector->name); 1920 1921 1921 1922 intel_display_power_get(dev_priv, POWER_DOMAIN_GMBUS); 1922 1923 1924 + if (IS_ICELAKE(dev_priv) && 1925 + !intel_digital_port_connected(encoder)) 1926 + goto out; 1927 + 1923 1928 intel_hdmi_unset_edid(connector); 1924 1929 1925 1930 if (intel_hdmi_set_edid(connector)) 1926 1931 status = connector_status_connected; 1927 - else 1928 - status = connector_status_disconnected; 1929 1932 1933 + out: 1930 1934 intel_display_power_put(dev_priv, POWER_DOMAIN_GMBUS); 1931 1935 1932 1936 if (status != connector_status_connected)
+1 -1
drivers/gpu/drm/i915/intel_huc.c
··· 63 63 return -ENOEXEC; 64 64 65 65 vma = i915_gem_object_ggtt_pin(huc->fw.obj, NULL, 0, 0, 66 - PIN_OFFSET_BIAS | guc->ggtt_pin_bias); 66 + PIN_OFFSET_BIAS | i915->ggtt.pin_bias); 67 67 if (IS_ERR(vma)) { 68 68 ret = PTR_ERR(vma); 69 69 DRM_ERROR("HuC: Failed to pin huc fw object %d\n", ret);
+8 -8
drivers/gpu/drm/i915/intel_i2c.c
··· 37 37 38 38 struct gmbus_pin { 39 39 const char *name; 40 - i915_reg_t reg; 40 + enum i915_gpio gpio; 41 41 }; 42 42 43 43 /* Map gmbus pin pairs to names and registers. */ ··· 121 121 else 122 122 size = ARRAY_SIZE(gmbus_pins); 123 123 124 - return pin < size && 125 - i915_mmio_reg_valid(get_gmbus_pin(dev_priv, pin)->reg); 124 + return pin < size && get_gmbus_pin(dev_priv, pin)->name; 126 125 } 127 126 128 127 /* Intel GPIO access functions */ ··· 291 292 292 293 algo = &bus->bit_algo; 293 294 294 - bus->gpio_reg = _MMIO(dev_priv->gpio_mmio_base + 295 - i915_mmio_reg_offset(get_gmbus_pin(dev_priv, pin)->reg)); 295 + bus->gpio_reg = GPIO(get_gmbus_pin(dev_priv, pin)->gpio); 296 296 bus->adapter.algo_data = algo; 297 297 algo->setsda = set_data; 298 298 algo->setscl = set_clock; ··· 823 825 if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) 824 826 dev_priv->gpio_mmio_base = VLV_DISPLAY_BASE; 825 827 else if (!HAS_GMCH_DISPLAY(dev_priv)) 826 - dev_priv->gpio_mmio_base = 827 - i915_mmio_reg_offset(PCH_GPIOA) - 828 - i915_mmio_reg_offset(GPIOA); 828 + /* 829 + * Broxton uses the same PCH offsets for South Display Engine, 830 + * even though it doesn't have a PCH. 831 + */ 832 + dev_priv->gpio_mmio_base = PCH_DISPLAY_BASE; 829 833 830 834 mutex_init(&dev_priv->gmbus_mutex); 831 835 init_waitqueue_head(&dev_priv->gmbus_wait_queue);
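The intel_i2c.c change stops storing a full register in the gmbus pin table: each entry now holds an enum i915_gpio, and the register is rebuilt with the GPIO() macro, which presumably folds in dev_priv->gpio_mmio_base (set to PCH_DISPLAY_BASE on non-GMCH parts, since Broxton reuses the PCH South Display Engine offsets despite having no PCH). Pin validity likewise becomes a NULL-name check instead of a register-validity check. The lookup, condensed from the hunk:

	/* One table of GPIO enums plus one per-platform base replaces the
	 * old per-pin absolute registers (condensed from the hunk above). */
	bus->gpio_reg = GPIO(get_gmbus_pin(dev_priv, pin)->gpio);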
+110 -49
drivers/gpu/drm/i915/intel_lrc.c
··· 541 541 542 542 GEM_BUG_ON(execlists->preempt_complete_status != 543 543 upper_32_bits(ce->lrc_desc)); 544 - GEM_BUG_ON((ce->lrc_reg_state[CTX_CONTEXT_CONTROL + 1] & 545 - _MASKED_BIT_ENABLE(CTX_CTRL_ENGINE_CTX_RESTORE_INHIBIT | 546 - CTX_CTRL_ENGINE_CTX_SAVE_INHIBIT)) != 547 - _MASKED_BIT_ENABLE(CTX_CTRL_ENGINE_CTX_RESTORE_INHIBIT | 548 - CTX_CTRL_ENGINE_CTX_SAVE_INHIBIT)); 549 544 550 545 /* 551 546 * Switch to our empty preempt context so ··· 1272 1277 1273 1278 static void execlists_context_unpin(struct intel_context *ce) 1274 1279 { 1280 + i915_gem_context_unpin_hw_id(ce->gem_context); 1281 + 1275 1282 intel_ring_unpin(ce->ring); 1276 1283 1277 1284 ce->state->obj->pin_global--; ··· 1300 1303 } 1301 1304 1302 1305 flags = PIN_GLOBAL | PIN_HIGH; 1303 - if (ctx->ggtt_offset_bias) 1304 - flags |= PIN_OFFSET_BIAS | ctx->ggtt_offset_bias; 1306 + flags |= PIN_OFFSET_BIAS | i915_ggtt_pin_bias(vma); 1305 1307 1306 - return i915_vma_pin(vma, 0, GEN8_LR_CONTEXT_ALIGN, flags); 1308 + return i915_vma_pin(vma, 0, 0, flags); 1307 1309 } 1308 1310 1309 1311 static struct intel_context * ··· 1328 1332 goto unpin_vma; 1329 1333 } 1330 1334 1331 - ret = intel_ring_pin(ce->ring, ctx->i915, ctx->ggtt_offset_bias); 1335 + ret = intel_ring_pin(ce->ring); 1332 1336 if (ret) 1333 1337 goto unpin_map; 1338 + 1339 + ret = i915_gem_context_pin_hw_id(ctx); 1340 + if (ret) 1341 + goto unpin_ring; 1334 1342 1335 1343 intel_lr_context_descriptor_update(ctx, engine, ce); 1336 1344 ··· 1348 1348 i915_gem_context_get(ctx); 1349 1349 return ce; 1350 1350 1351 + unpin_ring: 1352 + intel_ring_unpin(ce->ring); 1351 1353 unpin_map: 1352 1354 i915_gem_object_unpin_map(ce->state->obj); 1353 1355 unpin_vma: ··· 1645 1643 goto err; 1646 1644 } 1647 1645 1648 - err = i915_vma_pin(vma, 0, PAGE_SIZE, PIN_GLOBAL | PIN_HIGH); 1646 + err = i915_vma_pin(vma, 0, 0, PIN_GLOBAL | PIN_HIGH); 1649 1647 if (err) 1650 1648 goto err; 1651 1649 ··· 1659 1657 1660 1658 static void lrc_destroy_wa_ctx(struct intel_engine_cs *engine) 1661 1659 { 1662 - i915_vma_unpin_and_release(&engine->wa_ctx.vma); 1660 + i915_vma_unpin_and_release(&engine->wa_ctx.vma, 0); 1663 1661 } 1664 1662 1665 1663 typedef u32 *(*wa_bb_func_t)(struct intel_engine_cs *engine, u32 *batch); ··· 1777 1775 1778 1776 static int gen8_init_common_ring(struct intel_engine_cs *engine) 1779 1777 { 1780 - int ret; 1781 - 1782 - ret = intel_mocs_init_engine(engine); 1783 - if (ret) 1784 - return ret; 1778 + intel_mocs_init_engine(engine); 1785 1779 1786 1780 intel_engine_reset_breadcrumbs(engine); 1787 1781 ··· 1836 1838 struct i915_request *request, *active; 1837 1839 unsigned long flags; 1838 1840 1839 - GEM_TRACE("%s\n", engine->name); 1841 + GEM_TRACE("%s: depth<-%d\n", engine->name, 1842 + atomic_read(&execlists->tasklet.count)); 1840 1843 1841 1844 /* 1842 1845 * Prevent request submission to the hardware until we have ··· 1970 1971 { 1971 1972 struct intel_engine_execlists * const execlists = &engine->execlists; 1972 1973 1973 - /* After a GPU reset, we may have requests to replay */ 1974 - if (!RB_EMPTY_ROOT(&execlists->queue.rb_root)) 1975 - tasklet_schedule(&execlists->tasklet); 1976 - 1977 1974 /* 1978 - * Flush the tasklet while we still have the forcewake to be sure 1979 - * that it is not allowed to sleep before we restart and reload a 1980 - * context. 1975 + * After a GPU reset, we may have requests to replay. Do so now while 1976 + * we still have the forcewake to be sure that the GPU is not allowed 1977 + * to sleep before we restart and reload a context. 
1981 1978 * 1982 - * As before (with execlists_reset_prepare) we rely on the caller 1983 - * serialising multiple attempts to reset so that we know that we 1984 - * are the only one manipulating tasklet state. 1985 1979 */ 1986 - __tasklet_enable_sync_once(&execlists->tasklet); 1980 + if (!RB_EMPTY_ROOT(&execlists->queue.rb_root)) 1981 + execlists->tasklet.func(execlists->tasklet.data); 1987 1982 1988 - GEM_TRACE("%s\n", engine->name); 1983 + tasklet_enable(&execlists->tasklet); 1984 + GEM_TRACE("%s: depth->%d\n", engine->name, 1985 + atomic_read(&execlists->tasklet.count)); 1989 1986 } 1990 1987 1991 1988 static int intel_logical_ring_emit_pdps(struct i915_request *rq) ··· 2061 2066 2062 2067 /* FIXME(BDW): Address space and security selectors. */ 2063 2068 *cs++ = MI_BATCH_BUFFER_START_GEN8 | 2064 - (flags & I915_DISPATCH_SECURE ? 0 : BIT(8)) | 2065 - (flags & I915_DISPATCH_RS ? MI_BATCH_RESOURCE_STREAMER : 0); 2069 + (flags & I915_DISPATCH_SECURE ? 0 : BIT(8)); 2066 2070 *cs++ = lower_32_bits(offset); 2067 2071 *cs++ = upper_32_bits(offset); 2068 2072 ··· 2488 2494 static u32 2489 2495 make_rpcs(struct drm_i915_private *dev_priv) 2490 2496 { 2497 + bool subslice_pg = INTEL_INFO(dev_priv)->sseu.has_subslice_pg; 2498 + u8 slices = hweight8(INTEL_INFO(dev_priv)->sseu.slice_mask); 2499 + u8 subslices = hweight8(INTEL_INFO(dev_priv)->sseu.subslice_mask[0]); 2491 2500 u32 rpcs = 0; 2492 2501 2493 2502 /* ··· 2501 2504 return 0; 2502 2505 2503 2506 /* 2507 + * Since the SScount bitfield in GEN8_R_PWR_CLK_STATE is only three bits 2508 + * wide and Icelake has up to eight subslices, special programming is 2509 + * needed in order to correctly enable all subslices. 2510 + * 2511 + * According to documentation software must consider the configuration 2512 + * as 2x4x8 and hardware will translate this to 1x8x8. 2513 + * 2514 + * Furthermore, even though SScount is three bits, maximum documented 2515 + * value for it is four. From this some rules/restrictions follow: 2516 + * 2517 + * 1. 2518 + * If enabled subslice count is greater than four, two whole slices must 2519 + * be enabled instead. 2520 + * 2521 + * 2. 2522 + * When more than one slice is enabled, hardware ignores the subslice 2523 + * count altogether. 2524 + * 2525 + * From these restrictions it follows that it is not possible to enable 2526 + * a count of subslices between the SScount maximum of four restriction, 2527 + * and the maximum available number on a particular SKU. Either all 2528 + * subslices are enabled, or a count between one and four on the first 2529 + * slice. 2530 + */ 2531 + if (IS_GEN11(dev_priv) && slices == 1 && subslices >= 4) { 2532 + GEM_BUG_ON(subslices & 1); 2533 + 2534 + subslice_pg = false; 2535 + slices *= 2; 2536 + } 2537 + 2538 + /* 2504 2539 * Starting in Gen9, render power gating can leave 2505 2540 * slice/subslice/EU in a partially enabled state. We 2506 2541 * must make an explicit request through RPCS for full 2507 2542 * enablement.
2508 2543 */ 2509 2544 if (INTEL_INFO(dev_priv)->sseu.has_slice_pg) { 2510 - rpcs |= GEN8_RPCS_S_CNT_ENABLE; 2511 - rpcs |= hweight8(INTEL_INFO(dev_priv)->sseu.slice_mask) << 2512 - GEN8_RPCS_S_CNT_SHIFT; 2513 - rpcs |= GEN8_RPCS_ENABLE; 2545 + u32 mask, val = slices; 2546 + 2547 + if (INTEL_GEN(dev_priv) >= 11) { 2548 + mask = GEN11_RPCS_S_CNT_MASK; 2549 + val <<= GEN11_RPCS_S_CNT_SHIFT; 2550 + } else { 2551 + mask = GEN8_RPCS_S_CNT_MASK; 2552 + val <<= GEN8_RPCS_S_CNT_SHIFT; 2553 + } 2554 + 2555 + GEM_BUG_ON(val & ~mask); 2556 + val &= mask; 2557 + 2558 + rpcs |= GEN8_RPCS_ENABLE | GEN8_RPCS_S_CNT_ENABLE | val; 2514 2559 } 2515 2560 2516 - if (INTEL_INFO(dev_priv)->sseu.has_subslice_pg) { 2517 - rpcs |= GEN8_RPCS_SS_CNT_ENABLE; 2518 - rpcs |= hweight8(INTEL_INFO(dev_priv)->sseu.subslice_mask[0]) << 2519 - GEN8_RPCS_SS_CNT_SHIFT; 2520 - rpcs |= GEN8_RPCS_ENABLE; 2561 + if (subslice_pg) { 2562 + u32 val = subslices; 2563 + 2564 + val <<= GEN8_RPCS_SS_CNT_SHIFT; 2565 + 2566 + GEM_BUG_ON(val & ~GEN8_RPCS_SS_CNT_MASK); 2567 + val &= GEN8_RPCS_SS_CNT_MASK; 2568 + 2569 + rpcs |= GEN8_RPCS_ENABLE | GEN8_RPCS_SS_CNT_ENABLE | val; 2521 2570 } 2522 2571 2523 2572 if (INTEL_INFO(dev_priv)->sseu.has_eu_pg) { 2524 - rpcs |= INTEL_INFO(dev_priv)->sseu.eu_per_subslice << 2525 - GEN8_RPCS_EU_MIN_SHIFT; 2526 - rpcs |= INTEL_INFO(dev_priv)->sseu.eu_per_subslice << 2527 - GEN8_RPCS_EU_MAX_SHIFT; 2573 + u32 val; 2574 + 2575 + val = INTEL_INFO(dev_priv)->sseu.eu_per_subslice << 2576 + GEN8_RPCS_EU_MIN_SHIFT; 2577 + GEM_BUG_ON(val & ~GEN8_RPCS_EU_MIN_MASK); 2578 + val &= GEN8_RPCS_EU_MIN_MASK; 2579 + 2580 + rpcs |= val; 2581 + 2582 + val = INTEL_INFO(dev_priv)->sseu.eu_per_subslice << 2583 + GEN8_RPCS_EU_MAX_SHIFT; 2584 + GEM_BUG_ON(val & ~GEN8_RPCS_EU_MAX_MASK); 2585 + val &= GEN8_RPCS_EU_MAX_MASK; 2586 + 2587 + rpcs |= val; 2588 + 2528 2589 rpcs |= GEN8_RPCS_ENABLE; 2529 2590 } 2530 2591 ··· 2639 2584 MI_LRI_FORCE_POSTED; 2640 2585 2641 2586 CTX_REG(regs, CTX_CONTEXT_CONTROL, RING_CONTEXT_CONTROL(engine), 2642 - _MASKED_BIT_DISABLE(CTX_CTRL_ENGINE_CTX_RESTORE_INHIBIT | 2643 - CTX_CTRL_ENGINE_CTX_SAVE_INHIBIT) | 2644 - _MASKED_BIT_ENABLE(CTX_CTRL_INHIBIT_SYN_CTX_SWITCH | 2645 - (HAS_RESOURCE_STREAMER(dev_priv) ? 2646 - CTX_CTRL_RS_CTX_ENABLE : 0))); 2587 + _MASKED_BIT_DISABLE(CTX_CTRL_ENGINE_CTX_RESTORE_INHIBIT) | 2588 + _MASKED_BIT_ENABLE(CTX_CTRL_INHIBIT_SYN_CTX_SWITCH)); 2589 + if (INTEL_GEN(dev_priv) < 11) { 2590 + regs[CTX_CONTEXT_CONTROL + 1] |= 2591 + _MASKED_BIT_DISABLE(CTX_CTRL_ENGINE_CTX_SAVE_INHIBIT | 2592 + CTX_CTRL_RS_CTX_ENABLE); 2593 + } 2647 2594 CTX_REG(regs, CTX_RING_HEAD, RING_HEAD(base), 0); 2648 2595 CTX_REG(regs, CTX_RING_TAIL, RING_TAIL(base), 0); 2649 2596 CTX_REG(regs, CTX_RING_BUFFER_START, RING_START(base), 0); ··· 2711 2654 2712 2655 i915_oa_init_reg_state(engine, ctx, regs); 2713 2656 } 2657 + 2658 + regs[CTX_END] = MI_BATCH_BUFFER_END; 2659 + if (INTEL_GEN(dev_priv) >= 10) 2660 + regs[CTX_END] |= BIT(0); 2714 2661 } 2715 2662 2716 2663 static int
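The SScount restriction documented in make_rpcs() above is easiest to see with numbers: on an Icelake part exposing 1 slice with 8 subslices, SScount (maximum 4) cannot express the request, so the driver asks for 2 slices with subslice power gating off, and the hardware reads that 2x4x8 request back as 1x8x8. A standalone sketch of just that fixup (mirrors the gen11 branch in the hunk; not the kernel code itself):

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* If one slice exposes more than four subslices, request two slices and
 * drop subslice power gating so the hardware re-interprets the request
 * (2x4x8 -> 1x8x8). */
static void gen11_rpcs_fixup(unsigned int *slices, unsigned int *subslices,
			     bool *subslice_pg)
{
	if (*slices == 1 && *subslices >= 4) {
		assert(!(*subslices & 1)); /* counterpart of GEM_BUG_ON(subslices & 1) */
		*subslice_pg = false;
		*slices *= 2;
	}
}

int main(void)
{
	unsigned int slices = 1, subslices = 8;
	bool subslice_pg = true;

	gen11_rpcs_fixup(&slices, &subslices, &subslice_pg);
	printf("S=%u SS=%u SS-PG=%d\n", slices, subslices, subslice_pg);
	/* prints: S=2 SS=8 SS-PG=0 */
	return 0;
}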
-2
drivers/gpu/drm/i915/intel_lrc.h
··· 27 27 #include "intel_ringbuffer.h" 28 28 #include "i915_gem_context.h" 29 29 30 - #define GEN8_LR_CONTEXT_ALIGN I915_GTT_MIN_ALIGNMENT 31 - 32 30 /* Execlists regs */ 33 31 #define RING_ELSP(engine) _MMIO((engine)->mmio_base + 0x230) 34 32 #define RING_EXECLIST_STATUS_LO(engine) _MMIO((engine)->mmio_base + 0x234)
+1 -1
drivers/gpu/drm/i915/intel_lrc_reg.h
··· 37 37 #define CTX_PDP0_LDW 0x32 38 38 #define CTX_LRI_HEADER_2 0x41 39 39 #define CTX_R_PWR_CLK_STATE 0x42 40 - #define CTX_GPGPU_CSR_BASE_ADDRESS 0x44 40 + #define CTX_END 0x44 41 41 42 42 #define CTX_REG(reg_state, pos, reg, val) do { \ 43 43 u32 *reg_state__ = (reg_state); \
+3 -8
drivers/gpu/drm/i915/intel_mocs.c
··· 232 232 * 233 233 * This function simply emits a MI_LOAD_REGISTER_IMM command for the 234 234 * given table starting at the given address. 235 - * 236 - * Return: 0 on success, otherwise the error status. 237 235 */ 238 - int intel_mocs_init_engine(struct intel_engine_cs *engine) 236 + void intel_mocs_init_engine(struct intel_engine_cs *engine) 239 237 { 240 238 struct drm_i915_private *dev_priv = engine->i915; 241 239 struct drm_i915_mocs_table table; 242 240 unsigned int index; 243 241 244 242 if (!get_mocs_settings(dev_priv, &table)) 245 - return 0; 243 + return; 246 244 247 - if (WARN_ON(table.size > GEN9_NUM_MOCS_ENTRIES)) 248 - return -ENODEV; 245 + GEM_BUG_ON(table.size > GEN9_NUM_MOCS_ENTRIES); 249 246 250 247 for (index = 0; index < table.size; index++) 251 248 I915_WRITE(mocs_register(engine->id, index), ··· 259 262 for (; index < GEN9_NUM_MOCS_ENTRIES; index++) 260 263 I915_WRITE(mocs_register(engine->id, index), 261 264 table.table[0].control_value); 262 - 263 - return 0; 264 265 } 265 266 266 267 /**
+1 -1
drivers/gpu/drm/i915/intel_mocs.h
··· 54 54 55 55 int intel_rcs_context_init_mocs(struct i915_request *rq); 56 56 void intel_mocs_init_l3cc_table(struct drm_i915_private *dev_priv); 57 - int intel_mocs_init_engine(struct intel_engine_cs *engine); 57 + void intel_mocs_init_engine(struct intel_engine_cs *engine); 58 58 59 59 #endif
+62 -39
drivers/gpu/drm/i915/intel_pm.c
··· 26 26 */ 27 27 28 28 #include <linux/cpufreq.h> 29 + #include <linux/pm_runtime.h> 29 30 #include <drm/drm_plane_helper.h> 30 31 #include "i915_drv.h" 31 32 #include "intel_drv.h" ··· 2943 2942 unsigned int latency = wm[level]; 2944 2943 2945 2944 if (latency == 0) { 2946 - DRM_ERROR("%s WM%d latency not provided\n", 2947 - name, level); 2945 + DRM_DEBUG_KMS("%s WM%d latency not provided\n", 2946 + name, level); 2948 2947 continue; 2949 2948 } 2950 2949 ··· 3772 3771 return true; 3773 3772 } 3774 3773 3775 - static unsigned int intel_get_ddb_size(struct drm_i915_private *dev_priv, 3776 - const struct intel_crtc_state *cstate, 3777 - const unsigned int total_data_rate, 3778 - const int num_active, 3779 - struct skl_ddb_allocation *ddb) 3774 + static u16 intel_get_ddb_size(struct drm_i915_private *dev_priv, 3775 + const struct intel_crtc_state *cstate, 3776 + const unsigned int total_data_rate, 3777 + const int num_active, 3778 + struct skl_ddb_allocation *ddb) 3780 3779 { 3781 3780 const struct drm_display_mode *adjusted_mode; 3782 3781 u64 total_data_bw; ··· 3815 3814 struct intel_atomic_state *intel_state = to_intel_atomic_state(state); 3816 3815 struct drm_i915_private *dev_priv = to_i915(dev); 3817 3816 struct drm_crtc *for_crtc = cstate->base.crtc; 3818 - unsigned int pipe_size, ddb_size; 3819 - int nth_active_pipe; 3817 + const struct drm_crtc_state *crtc_state; 3818 + const struct drm_crtc *crtc; 3819 + u32 pipe_width = 0, total_width = 0, width_before_pipe = 0; 3820 + enum pipe for_pipe = to_intel_crtc(for_crtc)->pipe; 3821 + u16 ddb_size; 3822 + u32 i; 3820 3823 3821 3824 if (WARN_ON(!state) || !cstate->base.active) { 3822 3825 alloc->start = 0; ··· 3838 3833 *num_active, ddb); 3839 3834 3840 3835 /* 3841 - * If the state doesn't change the active CRTC's, then there's 3842 - * no need to recalculate; the existing pipe allocation limits 3843 - * should remain unchanged. Note that we're safe from racing 3844 - * commits since any racing commit that changes the active CRTC 3845 - * list would need to grab _all_ crtc locks, including the one 3846 - * we currently hold. 3836 + * If the state doesn't change the active CRTC's or there is no 3837 + * modeset request, then there's no need to recalculate; 3838 + * the existing pipe allocation limits should remain unchanged. 3839 + * Note that we're safe from racing commits since any racing commit 3840 + * that changes the active CRTC list or does a modeset would need to 3841 + * grab _all_ crtc locks, including the one we currently hold. 3847 3842 */ 3848 3843 if (!intel_state->active_pipe_changes && !intel_state->modeset) { 3849 3844 /* 3850 3845 * alloc may be cleared by clear_intel_crtc_state, 3851 3846 * copy from old state to be sure ··· 3854 3849 return; 3855 3850 } 3856 3851 3857 - nth_active_pipe = hweight32(intel_state->active_crtcs & 3858 - (drm_crtc_mask(for_crtc) - 1)); 3859 - pipe_size = ddb_size / hweight32(intel_state->active_crtcs); 3860 - alloc->start = nth_active_pipe * ddb_size / *num_active; 3861 - alloc->end = alloc->start + pipe_size; 3852 + /* 3853 + * Watermark/ddb requirement highly depends upon width of the 3854 + * framebuffer, so instead of allocating DDB equally among pipes 3855 + * distribute DDB based on resolution/width of the display.
3856 + */ 3857 + for_each_new_crtc_in_state(state, crtc, crtc_state, i) { 3858 + const struct drm_display_mode *adjusted_mode; 3859 + int hdisplay, vdisplay; 3860 + enum pipe pipe; 3861 + 3862 + if (!crtc_state->enable) 3863 + continue; 3864 + 3865 + pipe = to_intel_crtc(crtc)->pipe; 3866 + adjusted_mode = &crtc_state->adjusted_mode; 3867 + drm_mode_get_hv_timing(adjusted_mode, &hdisplay, &vdisplay); 3868 + total_width += hdisplay; 3869 + 3870 + if (pipe < for_pipe) 3871 + width_before_pipe += hdisplay; 3872 + else if (pipe == for_pipe) 3873 + pipe_width = hdisplay; 3874 + } 3875 + 3876 + alloc->start = ddb_size * width_before_pipe / total_width; 3877 + alloc->end = ddb_size * (width_before_pipe + pipe_width) / total_width; 3862 3878 } 3863 3879 3864 3880 static unsigned int skl_cursor_allocation(int num_active) ··· 3935 3909 val & PLANE_CTL_ALPHA_MASK); 3936 3910 3937 3911 val = I915_READ(PLANE_BUF_CFG(pipe, plane_id)); 3938 - val2 = I915_READ(PLANE_NV12_BUF_CFG(pipe, plane_id)); 3912 + /* 3913 + * FIXME: add proper NV12 support for ICL. Avoid reading unclaimed 3914 + * registers for now. 3915 + */ 3916 + if (INTEL_GEN(dev_priv) < 11) 3917 + val2 = I915_READ(PLANE_NV12_BUF_CFG(pipe, plane_id)); 3939 3918 3940 3919 if (fourcc == DRM_FORMAT_NV12) { 3941 3920 skl_ddb_entry_init_from_hw(dev_priv, ··· 5008 4977 5009 4978 skl_ddb_entry_write(dev_priv, PLANE_BUF_CFG(pipe, plane_id), 5010 4979 &ddb->plane[pipe][plane_id]); 4980 + /* FIXME: add proper NV12 support for ICL. */ 5011 4981 if (INTEL_GEN(dev_priv) >= 11) 5012 4982 return skl_ddb_entry_write(dev_priv, 5013 4983 PLANE_BUF_CFG(pipe, plane_id), ··· 5174 5142 } 5175 5143 5176 5144 static void 5177 - skl_copy_ddb_for_pipe(struct skl_ddb_values *dst, 5178 - struct skl_ddb_values *src, 5179 - enum pipe pipe) 5180 - { 5181 - memcpy(dst->ddb.uv_plane[pipe], src->ddb.uv_plane[pipe], 5182 - sizeof(dst->ddb.uv_plane[pipe])); 5183 - memcpy(dst->ddb.plane[pipe], src->ddb.plane[pipe], 5184 - sizeof(dst->ddb.plane[pipe])); 5185 - } 5186 - 5187 - static void 5188 5145 skl_print_wm_changes(const struct drm_atomic_state *state) 5189 5146 { 5190 5147 const struct drm_device *dev = state->dev; ··· 5280 5259 * any other display updates race with this transaction, so we need 5281 5260 * to grab the lock on *all* CRTC's. 
5282 5261 */ 5283 - if (intel_state->active_pipe_changes) { 5262 + if (intel_state->active_pipe_changes || intel_state->modeset) { 5284 5263 realloc_pipes = ~0; 5285 5264 intel_state->wm_results.dirty_pipes = ~0; 5286 5265 } ··· 5402 5381 if (cstate->base.active_changed) 5403 5382 skl_atomic_update_crtc_wm(state, cstate); 5404 5383 5405 - skl_copy_ddb_for_pipe(hw_vals, results, pipe); 5384 + memcpy(hw_vals->ddb.uv_plane[pipe], results->ddb.uv_plane[pipe], 5385 + sizeof(hw_vals->ddb.uv_plane[pipe])); 5386 + memcpy(hw_vals->ddb.plane[pipe], results->ddb.plane[pipe], 5387 + sizeof(hw_vals->ddb.plane[pipe])); 5406 5388 5407 5389 mutex_unlock(&dev_priv->wm.wm_mutex); 5408 5390 } ··· 6403 6379 new_power = HIGH_POWER; 6404 6380 rps_set_power(dev_priv, new_power); 6405 6381 mutex_unlock(&rps->power.mutex); 6406 - rps->last_adj = 0; 6407 6382 } 6408 6383 6409 6384 void intel_rps_mark_interactive(struct drm_i915_private *i915, bool interactive) ··· 8182 8159 */ 8183 8160 if (!sanitize_rc6(dev_priv)) { 8184 8161 DRM_INFO("RC6 disabled, disabling runtime PM support\n"); 8185 - intel_runtime_pm_get(dev_priv); 8162 + pm_runtime_get(&dev_priv->drm.pdev->dev); 8186 8163 } 8187 8164 8188 8165 mutex_lock(&dev_priv->pcu_lock); ··· 8234 8211 valleyview_cleanup_gt_powersave(dev_priv); 8235 8212 8236 8213 if (!HAS_RC6(dev_priv)) 8237 - intel_runtime_pm_put(dev_priv); 8214 + pm_runtime_put(&dev_priv->drm.pdev->dev); 8238 8215 } 8239 8216 8240 8217 /** ··· 8261 8238 8262 8239 if (INTEL_GEN(dev_priv) >= 11) 8263 8240 gen11_reset_rps_interrupts(dev_priv); 8264 - else 8241 + else if (INTEL_GEN(dev_priv) >= 6) 8265 8242 gen6_reset_rps_interrupts(dev_priv); 8266 8243 } 8267 8244
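The effect of the reworked pipe allocation above is that each pipe's DDB share now scales with its horizontal resolution instead of being an equal split across active pipes. With an assumed 896-block DDB and two active pipes of width 1920 and 3840 (numbers invented for illustration), the split works out as follows:

#include <stdio.h>

/* Width-proportional DDB split, mirroring the start/end computation in the
 * hunk above. All sizes and widths here are made-up example numbers. */
static void ddb_alloc(unsigned int ddb_size,
		      unsigned int width_before_pipe,
		      unsigned int pipe_width,
		      unsigned int total_width,
		      unsigned int *start, unsigned int *end)
{
	*start = ddb_size * width_before_pipe / total_width;
	*end = ddb_size * (width_before_pipe + pipe_width) / total_width;
}

int main(void)
{
	unsigned int start, end;

	ddb_alloc(896, 0, 1920, 5760, &start, &end);
	printf("pipe A: [%u, %u)\n", start, end);   /* [0, 298) */
	ddb_alloc(896, 1920, 3840, 5760, &start, &end);
	printf("pipe B: [%u, %u)\n", start, end);   /* [298, 896) */
	return 0;
}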
+192 -85
drivers/gpu/drm/i915/intel_psr.c
··· 56 56 #include "intel_drv.h" 57 57 #include "i915_drv.h" 58 58 59 - void intel_psr_irq_control(struct drm_i915_private *dev_priv, bool debug) 59 + static bool psr_global_enabled(u32 debug) 60 + { 61 + switch (debug & I915_PSR_DEBUG_MODE_MASK) { 62 + case I915_PSR_DEBUG_DEFAULT: 63 + return i915_modparams.enable_psr; 64 + case I915_PSR_DEBUG_DISABLE: 65 + return false; 66 + default: 67 + return true; 68 + } 69 + } 70 + 71 + static bool intel_psr2_enabled(struct drm_i915_private *dev_priv, 72 + const struct intel_crtc_state *crtc_state) 73 + { 74 + switch (dev_priv->psr.debug & I915_PSR_DEBUG_MODE_MASK) { 75 + case I915_PSR_DEBUG_FORCE_PSR1: 76 + return false; 77 + default: 78 + return crtc_state->has_psr2; 79 + } 80 + } 81 + 82 + void intel_psr_irq_control(struct drm_i915_private *dev_priv, u32 debug) 60 83 { 61 84 u32 debug_mask, mask; 62 85 ··· 100 77 EDP_PSR_PRE_ENTRY(TRANSCODER_C); 101 78 } 102 79 103 - if (debug) 80 + if (debug & I915_PSR_DEBUG_IRQ) 104 81 mask |= debug_mask; 105 82 106 - WRITE_ONCE(dev_priv->psr.debug, debug); 107 83 I915_WRITE(EDP_PSR_IMR, ~mask); 108 84 } 109 85 ··· 235 213 dev_priv->psr.sink_sync_latency = 236 214 intel_dp_get_sink_sync_latency(intel_dp); 237 215 216 + WARN_ON(dev_priv->psr.dp); 217 + dev_priv->psr.dp = intel_dp; 218 + 238 219 if (INTEL_GEN(dev_priv) >= 9 && 239 220 (intel_dp->psr_dpcd[0] == DP_PSR2_WITH_Y_COORD_IS_SUPPORTED)) { 240 221 bool y_req = intel_dp->psr_dpcd[1] & ··· 270 245 const struct intel_crtc_state *crtc_state) 271 246 { 272 247 struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp); 273 - struct drm_i915_private *dev_priv = to_i915(intel_dig_port->base.base.dev); 248 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 274 249 struct edp_vsc_psr psr_vsc; 275 250 276 251 if (dev_priv->psr.psr2_enabled) { ··· 300 275 301 276 static void hsw_psr_setup_aux(struct intel_dp *intel_dp) 302 277 { 303 - struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); 304 - struct drm_i915_private *dev_priv = to_i915(dig_port->base.base.dev); 278 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 305 279 u32 aux_clock_divider, aux_ctl; 306 280 int i; 307 281 static const uint8_t aux_msg[] = { ··· 333 309 334 310 static void intel_psr_enable_sink(struct intel_dp *intel_dp) 335 311 { 336 - struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); 337 - struct drm_device *dev = dig_port->base.base.dev; 338 - struct drm_i915_private *dev_priv = to_i915(dev); 312 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 339 313 u8 dpcd_val = DP_PSR_ENABLE; 340 314 341 315 /* Enable ALPM at sink for psr2 */ ··· 354 332 355 333 static void hsw_activate_psr1(struct intel_dp *intel_dp) 356 334 { 357 - struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); 358 - struct drm_device *dev = dig_port->base.base.dev; 359 - struct drm_i915_private *dev_priv = to_i915(dev); 335 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 360 336 u32 max_sleep_time = 0x1f; 361 337 u32 val = EDP_PSR_ENABLE; 362 338 ··· 409 389 410 390 static void hsw_activate_psr2(struct intel_dp *intel_dp) 411 391 { 412 - struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); 413 - struct drm_device *dev = dig_port->base.base.dev; 414 - struct drm_i915_private *dev_priv = to_i915(dev); 392 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 415 393 u32 val; 416 394 417 395 /* Let's use 6 as the minimum to cover all known cases including the ··· 445 427 static bool intel_psr2_config_valid(struct intel_dp 
*intel_dp, 446 428 struct intel_crtc_state *crtc_state) 447 429 { 448 - struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); 449 - struct drm_i915_private *dev_priv = to_i915(dig_port->base.base.dev); 430 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 450 431 int crtc_hdisplay = crtc_state->base.adjusted_mode.crtc_hdisplay; 451 432 int crtc_vdisplay = crtc_state->base.adjusted_mode.crtc_vdisplay; 452 433 int psr_max_h = 0, psr_max_v = 0; ··· 480 463 struct intel_crtc_state *crtc_state) 481 464 { 482 465 struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); 483 - struct drm_i915_private *dev_priv = to_i915(dig_port->base.base.dev); 466 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 484 467 const struct drm_display_mode *adjusted_mode = 485 468 &crtc_state->base.adjusted_mode; 486 469 int psr_setup_time; ··· 488 471 if (!CAN_PSR(dev_priv)) 489 472 return; 490 473 491 - if (!i915_modparams.enable_psr) { 492 - DRM_DEBUG_KMS("PSR disable by flag\n"); 474 + if (intel_dp != dev_priv->psr.dp) 493 475 return; 494 - } 495 476 496 477 /* 497 478 * HSW spec explicitly says PSR is tied to port A. ··· 532 517 533 518 crtc_state->has_psr = true; 534 519 crtc_state->has_psr2 = intel_psr2_config_valid(intel_dp, crtc_state); 535 - DRM_DEBUG_KMS("Enabling PSR%s\n", crtc_state->has_psr2 ? "2" : ""); 536 520 } 537 521 538 522 static void intel_psr_activate(struct intel_dp *intel_dp) 539 523 { 540 - struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp); 541 - struct drm_device *dev = intel_dig_port->base.base.dev; 542 - struct drm_i915_private *dev_priv = to_i915(dev); 524 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 543 525 544 526 if (INTEL_GEN(dev_priv) >= 9) 545 527 WARN_ON(I915_READ(EDP_PSR2_CTL) & EDP_PSR2_ENABLE); ··· 556 544 static void intel_psr_enable_source(struct intel_dp *intel_dp, 557 545 const struct intel_crtc_state *crtc_state) 558 546 { 559 - struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); 560 - struct drm_device *dev = dig_port->base.base.dev; 561 - struct drm_i915_private *dev_priv = to_i915(dev); 547 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 562 548 enum transcoder cpu_transcoder = crtc_state->cpu_transcoder; 563 549 564 550 /* Only HSW and BDW have PSR AUX registers that need to be setup. SKL+ ··· 599 589 } 600 590 } 601 591 592 + static void intel_psr_enable_locked(struct drm_i915_private *dev_priv, 593 + const struct intel_crtc_state *crtc_state) 594 + { 595 + struct intel_dp *intel_dp = dev_priv->psr.dp; 596 + 597 + if (dev_priv->psr.enabled) 598 + return; 599 + 600 + DRM_DEBUG_KMS("Enabling PSR%s\n", 601 + dev_priv->psr.psr2_enabled ? 
"2" : "1"); 602 + intel_psr_setup_vsc(intel_dp, crtc_state); 603 + intel_psr_enable_sink(intel_dp); 604 + intel_psr_enable_source(intel_dp, crtc_state); 605 + dev_priv->psr.enabled = true; 606 + 607 + intel_psr_activate(intel_dp); 608 + } 609 + 602 610 /** 603 611 * intel_psr_enable - Enable PSR 604 612 * @intel_dp: Intel DP ··· 627 599 void intel_psr_enable(struct intel_dp *intel_dp, 628 600 const struct intel_crtc_state *crtc_state) 629 601 { 630 - struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp); 631 - struct drm_device *dev = intel_dig_port->base.base.dev; 632 - struct drm_i915_private *dev_priv = to_i915(dev); 602 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 633 603 634 604 if (!crtc_state->has_psr) 635 605 return; ··· 636 610 return; 637 611 638 612 WARN_ON(dev_priv->drrs.dp); 613 + 639 614 mutex_lock(&dev_priv->psr.lock); 640 - if (dev_priv->psr.enabled) { 615 + if (dev_priv->psr.prepared) { 641 616 DRM_DEBUG_KMS("PSR already in use\n"); 642 617 goto unlock; 643 618 } 644 619 645 - dev_priv->psr.psr2_enabled = crtc_state->has_psr2; 620 + dev_priv->psr.psr2_enabled = intel_psr2_enabled(dev_priv, crtc_state); 646 621 dev_priv->psr.busy_frontbuffer_bits = 0; 622 + dev_priv->psr.prepared = true; 647 623 648 - intel_psr_setup_vsc(intel_dp, crtc_state); 649 - intel_psr_enable_sink(intel_dp); 650 - intel_psr_enable_source(intel_dp, crtc_state); 651 - dev_priv->psr.enabled = intel_dp; 652 - 653 - intel_psr_activate(intel_dp); 624 + if (psr_global_enabled(dev_priv->psr.debug)) 625 + intel_psr_enable_locked(dev_priv, crtc_state); 626 + else 627 + DRM_DEBUG_KMS("PSR disabled by flag\n"); 654 628 655 629 unlock: 656 630 mutex_unlock(&dev_priv->psr.lock); ··· 659 633 static void 660 634 intel_psr_disable_source(struct intel_dp *intel_dp) 661 635 { 662 - struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp); 663 - struct drm_device *dev = intel_dig_port->base.base.dev; 664 - struct drm_i915_private *dev_priv = to_i915(dev); 636 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 665 637 666 638 if (dev_priv->psr.active) { 667 639 i915_reg_t psr_status; ··· 698 674 699 675 static void intel_psr_disable_locked(struct intel_dp *intel_dp) 700 676 { 701 - struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp); 702 - struct drm_device *dev = intel_dig_port->base.base.dev; 703 - struct drm_i915_private *dev_priv = to_i915(dev); 677 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 704 678 705 679 lockdep_assert_held(&dev_priv->psr.lock); 706 680 707 681 if (!dev_priv->psr.enabled) 708 682 return; 709 683 684 + DRM_DEBUG_KMS("Disabling PSR%s\n", 685 + dev_priv->psr.psr2_enabled ? 
"2" : "1"); 710 686 intel_psr_disable_source(intel_dp); 711 687 712 688 /* Disable PSR on Sink */ 713 689 drm_dp_dpcd_writeb(&intel_dp->aux, DP_PSR_EN_CFG, 0); 714 690 715 - dev_priv->psr.enabled = NULL; 691 + dev_priv->psr.enabled = false; 716 692 } 717 693 718 694 /** ··· 725 701 void intel_psr_disable(struct intel_dp *intel_dp, 726 702 const struct intel_crtc_state *old_crtc_state) 727 703 { 728 - struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp); 729 - struct drm_device *dev = intel_dig_port->base.base.dev; 730 - struct drm_i915_private *dev_priv = to_i915(dev); 704 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 731 705 732 706 if (!old_crtc_state->has_psr) 733 707 return; ··· 734 712 return; 735 713 736 714 mutex_lock(&dev_priv->psr.lock); 715 + if (!dev_priv->psr.prepared) { 716 + mutex_unlock(&dev_priv->psr.lock); 717 + return; 718 + } 719 + 737 720 intel_psr_disable_locked(intel_dp); 721 + 722 + dev_priv->psr.prepared = false; 738 723 mutex_unlock(&dev_priv->psr.lock); 739 724 cancel_work_sync(&dev_priv->psr.work); 740 725 } 741 726 742 - int intel_psr_wait_for_idle(const struct intel_crtc_state *new_crtc_state) 727 + /** 728 + * intel_psr_wait_for_idle - wait for PSR1 to idle 729 + * @new_crtc_state: new CRTC state 730 + * @out_value: PSR status in case of failure 731 + * 732 + * This function is expected to be called from pipe_update_start() where it is 733 + * not expected to race with PSR enable or disable. 734 + * 735 + * Returns: 0 on success or -ETIMEOUT if PSR status does not idle. 736 + */ 737 + int intel_psr_wait_for_idle(const struct intel_crtc_state *new_crtc_state, 738 + u32 *out_value) 743 739 { 744 740 struct intel_crtc *crtc = to_intel_crtc(new_crtc_state->base.crtc); 745 741 struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); 746 - i915_reg_t reg; 747 - u32 mask; 748 742 749 - if (!new_crtc_state->has_psr) 743 + if (!dev_priv->psr.enabled || !new_crtc_state->has_psr) 744 + return 0; 745 + 746 + /* FIXME: Update this for PSR2 if we need to wait for idle */ 747 + if (READ_ONCE(dev_priv->psr.psr2_enabled)) 750 748 return 0; 751 749 752 750 /* 753 - * The sole user right now is intel_pipe_update_start(), 754 - * which won't race with psr_enable/disable, which is 755 - * where psr2_enabled is written to. So, we don't need 756 - * to acquire the psr.lock. More importantly, we want the 757 - * latency inside intel_pipe_update_start() to be as low 758 - * as possible, so no need to acquire psr.lock when it is 759 - * not needed and will induce latencies in the atomic 760 - * update path. 751 + * From bspec: Panel Self Refresh (BDW+) 752 + * Max. time for PSR to idle = Inverse of the refresh rate + 6 ms of 753 + * exit training time + 1.5 ms of aux channel handshake. 50 ms is 754 + * defensive enough to cover everything. 761 755 */ 762 - if (dev_priv->psr.psr2_enabled) { 763 - reg = EDP_PSR2_STATUS; 764 - mask = EDP_PSR2_STATUS_STATE_MASK; 765 - } else { 766 - reg = EDP_PSR_STATUS; 767 - mask = EDP_PSR_STATUS_STATE_MASK; 768 - } 769 756 770 - /* 771 - * Max time for PSR to idle = Inverse of the refresh rate + 772 - * 6 ms of exit training time + 1.5 ms of aux channel 773 - * handshake. 50 msec is defesive enough to cover everything. 
774 - */ 775 - return intel_wait_for_register(dev_priv, reg, mask, 776 - EDP_PSR_STATUS_STATE_IDLE, 50); 757 + return __intel_wait_for_register(dev_priv, EDP_PSR_STATUS, 758 + EDP_PSR_STATUS_STATE_MASK, 759 + EDP_PSR_STATUS_STATE_IDLE, 2, 50, 760 + out_value); 777 761 } 778 762 779 763 static bool __psr_wait_for_idle_locked(struct drm_i915_private *dev_priv) 780 764 { 781 - struct intel_dp *intel_dp; 782 765 i915_reg_t reg; 783 766 u32 mask; 784 767 int err; 785 768 786 - intel_dp = dev_priv->psr.enabled; 787 - if (!intel_dp) 769 + if (!dev_priv->psr.enabled) 788 770 return false; 789 771 790 772 if (dev_priv->psr.psr2_enabled) { ··· 808 782 /* After the unlocked wait, verify that PSR is still wanted! */ 809 783 mutex_lock(&dev_priv->psr.lock); 810 784 return err == 0 && dev_priv->psr.enabled; 785 + } 786 + 787 + static bool switching_psr(struct drm_i915_private *dev_priv, 788 + struct intel_crtc_state *crtc_state, 789 + u32 mode) 790 + { 791 + /* Can't switch psr state anyway if PSR2 is not supported. */ 792 + if (!crtc_state || !crtc_state->has_psr2) 793 + return false; 794 + 795 + if (dev_priv->psr.psr2_enabled && mode == I915_PSR_DEBUG_FORCE_PSR1) 796 + return true; 797 + 798 + if (!dev_priv->psr.psr2_enabled && mode != I915_PSR_DEBUG_FORCE_PSR1) 799 + return true; 800 + 801 + return false; 802 + } 803 + 804 + int intel_psr_set_debugfs_mode(struct drm_i915_private *dev_priv, 805 + struct drm_modeset_acquire_ctx *ctx, 806 + u64 val) 807 + { 808 + struct drm_device *dev = &dev_priv->drm; 809 + struct drm_connector_state *conn_state; 810 + struct intel_crtc_state *crtc_state = NULL; 811 + struct drm_crtc_commit *commit; 812 + struct drm_crtc *crtc; 813 + struct intel_dp *dp; 814 + int ret; 815 + bool enable; 816 + u32 mode = val & I915_PSR_DEBUG_MODE_MASK; 817 + 818 + if (val & ~(I915_PSR_DEBUG_IRQ | I915_PSR_DEBUG_MODE_MASK) || 819 + mode > I915_PSR_DEBUG_FORCE_PSR1) { 820 + DRM_DEBUG_KMS("Invalid debug mask %llx\n", val); 821 + return -EINVAL; 822 + } 823 + 824 + ret = drm_modeset_lock(&dev->mode_config.connection_mutex, ctx); 825 + if (ret) 826 + return ret; 827 + 828 + /* dev_priv->psr.dp should be set once and then never touched again. 
*/ 829 + dp = READ_ONCE(dev_priv->psr.dp); 830 + conn_state = dp->attached_connector->base.state; 831 + crtc = conn_state->crtc; 832 + if (crtc) { 833 + ret = drm_modeset_lock(&crtc->mutex, ctx); 834 + if (ret) 835 + return ret; 836 + 837 + crtc_state = to_intel_crtc_state(crtc->state); 838 + commit = crtc_state->base.commit; 839 + } else { 840 + commit = conn_state->commit; 841 + } 842 + if (commit) { 843 + ret = wait_for_completion_interruptible(&commit->hw_done); 844 + if (ret) 845 + return ret; 846 + } 847 + 848 + ret = mutex_lock_interruptible(&dev_priv->psr.lock); 849 + if (ret) 850 + return ret; 851 + 852 + enable = psr_global_enabled(val); 853 + 854 + if (!enable || switching_psr(dev_priv, crtc_state, mode)) 855 + intel_psr_disable_locked(dev_priv->psr.dp); 856 + 857 + dev_priv->psr.debug = val; 858 + if (crtc) 859 + dev_priv->psr.psr2_enabled = intel_psr2_enabled(dev_priv, crtc_state); 860 + 861 + intel_psr_irq_control(dev_priv, dev_priv->psr.debug); 862 + 863 + if (dev_priv->psr.prepared && enable) 864 + intel_psr_enable_locked(dev_priv, crtc_state); 865 + 866 + mutex_unlock(&dev_priv->psr.lock); 867 + return ret; 811 868 } 812 869 813 870 static void intel_psr_work(struct work_struct *work) ··· 920 811 if (dev_priv->psr.busy_frontbuffer_bits || dev_priv->psr.active) 921 812 goto unlock; 922 813 923 - intel_psr_activate(dev_priv->psr.enabled); 814 + intel_psr_activate(dev_priv->psr.dp); 924 815 unlock: 925 816 mutex_unlock(&dev_priv->psr.lock); 926 817 } ··· 975 866 return; 976 867 } 977 868 978 - crtc = dp_to_dig_port(dev_priv->psr.enabled)->base.base.crtc; 869 + crtc = dp_to_dig_port(dev_priv->psr.dp)->base.base.crtc; 979 870 pipe = to_intel_crtc(crtc)->pipe; 980 871 981 872 frontbuffer_bits &= INTEL_FRONTBUFFER_ALL_MASK(pipe); ··· 1018 909 return; 1019 910 } 1020 911 1021 - crtc = dp_to_dig_port(dev_priv->psr.enabled)->base.base.crtc; 912 + crtc = dp_to_dig_port(dev_priv->psr.dp)->base.base.crtc; 1022 913 pipe = to_intel_crtc(crtc)->pipe; 1023 914 1024 915 frontbuffer_bits &= INTEL_FRONTBUFFER_ALL_MASK(pipe); ··· 1086 977 1087 978 void intel_psr_short_pulse(struct intel_dp *intel_dp) 1088 979 { 1089 - struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp); 1090 - struct drm_device *dev = intel_dig_port->base.base.dev; 1091 - struct drm_i915_private *dev_priv = to_i915(dev); 980 + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 1092 981 struct i915_psr *psr = &dev_priv->psr; 1093 982 u8 val; 1094 983 const u8 errors = DP_PSR_RFB_STORAGE_ERROR | ··· 1098 991 1099 992 mutex_lock(&psr->lock); 1100 993 1101 - if (psr->enabled != intel_dp) 994 + if (!psr->enabled || psr->dp != intel_dp) 1102 995 goto exit; 1103 996 1104 997 if (drm_dp_dpcd_readb(&intel_dp->aux, DP_PSR_STATUS, &val) != 1) {
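The debugfs value handled by intel_psr_set_debugfs_mode() carries a mode field plus an IRQ-debug bit, decoded by the two switch statements at the top of the file. The flag values below are assumed from the i915_drv.h additions in this series; treat them as assumptions. A standalone sketch of the decoding:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Assumed flag values (see i915_drv.h in the same series). */
#define I915_PSR_DEBUG_DEFAULT    0x0
#define I915_PSR_DEBUG_DISABLE    0x1
#define I915_PSR_DEBUG_ENABLE     0x2
#define I915_PSR_DEBUG_FORCE_PSR1 0x3
#define I915_PSR_DEBUG_MODE_MASK  0xf
#define I915_PSR_DEBUG_IRQ        0x10

/* Mirrors psr_global_enabled(): the mode field decides, with the modparam
 * supplying the answer only in the default case. */
static bool global_enabled(uint32_t debug, bool modparam_enable_psr)
{
	switch (debug & I915_PSR_DEBUG_MODE_MASK) {
	case I915_PSR_DEBUG_DEFAULT:
		return modparam_enable_psr;
	case I915_PSR_DEBUG_DISABLE:
		return false;
	default:
		return true;
	}
}

int main(void)
{
	/* Force PSR1 with IRQ debugging: PSR stays on, PSR2 is vetoed. */
	uint32_t val = I915_PSR_DEBUG_FORCE_PSR1 | I915_PSR_DEBUG_IRQ;

	printf("enabled=%d irq=%d\n", global_enabled(val, false),
	       !!(val & I915_PSR_DEBUG_IRQ));      /* enabled=1 irq=1 */
	return 0;
}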
+56 -54
drivers/gpu/drm/i915/intel_ringbuffer.c
··· 344 344 static void ring_setup_phys_status_page(struct intel_engine_cs *engine) 345 345 { 346 346 struct drm_i915_private *dev_priv = engine->i915; 347 + struct page *page = virt_to_page(engine->status_page.page_addr); 348 + phys_addr_t phys = PFN_PHYS(page_to_pfn(page)); 347 349 u32 addr; 348 350 349 - addr = dev_priv->status_page_dmah->busaddr; 351 + addr = lower_32_bits(phys); 350 352 if (INTEL_GEN(dev_priv) >= 4) 351 - addr |= (dev_priv->status_page_dmah->busaddr >> 28) & 0xf0; 353 + addr |= (phys >> 28) & 0xf0; 354 + 352 355 I915_WRITE(HWS_PGA, addr); 353 356 } 354 357 ··· 540 537 if (INTEL_GEN(dev_priv) > 2) 541 538 I915_WRITE_MODE(engine, _MASKED_BIT_DISABLE(STOP_RING)); 542 539 540 + /* Papering over lost _interrupts_ immediately following the restart */ 541 + intel_engine_wakeup(engine); 543 542 out: 544 543 intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL); 545 544 ··· 1018 1013 return 0; 1019 1014 } 1020 1015 1021 - 1022 - 1023 - int intel_ring_pin(struct intel_ring *ring, 1024 - struct drm_i915_private *i915, 1025 - unsigned int offset_bias) 1016 + int intel_ring_pin(struct intel_ring *ring) 1026 1017 { 1027 - enum i915_map_type map = HAS_LLC(i915) ? I915_MAP_WB : I915_MAP_WC; 1028 1018 struct i915_vma *vma = ring->vma; 1019 + enum i915_map_type map = 1020 + HAS_LLC(vma->vm->i915) ? I915_MAP_WB : I915_MAP_WC; 1029 1021 unsigned int flags; 1030 1022 void *addr; 1031 1023 int ret; 1032 1024 1033 1025 GEM_BUG_ON(ring->vaddr); 1034 1026 1035 - 1036 1027 flags = PIN_GLOBAL; 1037 - if (offset_bias) 1038 - flags |= PIN_OFFSET_BIAS | offset_bias; 1028 + 1029 + /* Ring wraparound at offset 0 sometimes hangs. No idea why. */ 1030 + flags |= PIN_OFFSET_BIAS | i915_ggtt_pin_bias(vma); 1031 + 1039 1032 if (vma->obj->stolen) 1040 1033 flags |= PIN_MAPPABLE; 1041 1034 else ··· 1048 1045 return ret; 1049 1046 } 1050 1047 1051 - ret = i915_vma_pin(vma, 0, PAGE_SIZE, flags); 1048 + ret = i915_vma_pin(vma, 0, 0, flags); 1052 1049 if (unlikely(ret)) 1053 1050 return ret; 1054 1051 ··· 1233 1230 return err; 1234 1231 } 1235 1232 1236 - err = i915_vma_pin(vma, 0, I915_GTT_MIN_ALIGNMENT, 1237 - PIN_GLOBAL | PIN_HIGH); 1233 + err = i915_vma_pin(vma, 0, 0, PIN_GLOBAL | PIN_HIGH); 1238 1234 if (err) 1239 1235 return err; 1240 1236 ··· 1421 1419 goto err; 1422 1420 } 1423 1421 1424 - /* Ring wraparound at offset 0 sometimes hangs. No idea why. */ 1425 - err = intel_ring_pin(ring, engine->i915, I915_GTT_PAGE_SIZE); 1422 + err = intel_ring_pin(ring); 1426 1423 if (err) 1427 1424 goto err_ring; 1428 1425 ··· 1707 1706 } 1708 1707 1709 1708 if (ppgtt) { 1709 + ret = engine->emit_flush(rq, EMIT_INVALIDATE); 1710 + if (ret) 1711 + goto err_mm; 1712 + 1710 1713 ret = flush_pd_dir(rq); 1714 + if (ret) 1715 + goto err_mm; 1716 + 1717 + /* 1718 + * Not only do we need a full barrier (post-sync write) after 1719 + * invalidating the TLBs, but we need to wait a little bit 1720 + * longer. Whether this is merely delaying us, or the 1721 + * subsequent flush is a key part of serialising with the 1722 + * post-sync op, this extra pass appears vital before a 1723 + * mm switch! 
1724 + */ 1725 + ret = engine->emit_flush(rq, EMIT_INVALIDATE); 1726 + if (ret) 1727 + goto err_mm; 1728 + 1729 + ret = engine->emit_flush(rq, EMIT_FLUSH); 1711 1730 if (ret) 1712 1731 goto err_mm; 1713 1732 } ··· 1967 1946 intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL); 1968 1947 } 1969 1948 1970 - static int gen6_bsd_ring_flush(struct i915_request *rq, u32 mode) 1949 + static int mi_flush_dw(struct i915_request *rq, u32 flags) 1971 1950 { 1972 1951 u32 cmd, *cs; 1973 1952 ··· 1977 1956 1978 1957 cmd = MI_FLUSH_DW; 1979 1958 1980 - /* We always require a command barrier so that subsequent 1959 + /* 1960 + * We always require a command barrier so that subsequent 1981 1961 * commands, such as breadcrumb interrupts, are strictly ordered 1982 1962 * wrt the contents of the write cache being flushed to memory 1983 1963 * (and thus being coherent from the CPU). ··· 1986 1964 cmd |= MI_FLUSH_DW_STORE_INDEX | MI_FLUSH_DW_OP_STOREDW; 1987 1965 1988 1966 /* 1989 - * Bspec vol 1c.5 - video engine command streamer: 1967 + * Bspec vol 1c.3 - blitter engine command streamer: 1990 1968 * "If ENABLED, all TLBs will be invalidated once the flush 1991 1969 * operation is complete. This bit is only valid when the 1992 1970 * Post-Sync Operation field is a value of 1h or 3h." 1993 1971 */ 1994 - if (mode & EMIT_INVALIDATE) 1995 - cmd |= MI_INVALIDATE_TLB | MI_INVALIDATE_BSD; 1972 + cmd |= flags; 1996 1973 1997 1974 *cs++ = cmd; 1998 1975 *cs++ = I915_GEM_HWS_SCRATCH_ADDR | MI_FLUSH_DW_USE_GTT; 1999 1976 *cs++ = 0; 2000 1977 *cs++ = MI_NOOP; 1978 + 2001 1979 intel_ring_advance(rq, cs); 1980 + 2002 1981 return 0; 1982 + } 1983 + 1984 + static int gen6_flush_dw(struct i915_request *rq, u32 mode, u32 invflags) 1985 + { 1986 + return mi_flush_dw(rq, mode & EMIT_INVALIDATE ? invflags : 0); 1987 + } 1988 + 1989 + static int gen6_bsd_ring_flush(struct i915_request *rq, u32 mode) 1990 + { 1991 + return gen6_flush_dw(rq, mode, MI_INVALIDATE_TLB | MI_INVALIDATE_BSD); 2003 1992 } 2004 1993 2005 1994 static int ··· 2025 1992 return PTR_ERR(cs); 2026 1993 2027 1994 *cs++ = MI_BATCH_BUFFER_START | (dispatch_flags & I915_DISPATCH_SECURE ? 2028 - 0 : MI_BATCH_PPGTT_HSW | MI_BATCH_NON_SECURE_HSW) | 2029 - (dispatch_flags & I915_DISPATCH_RS ? 2030 - MI_BATCH_RESOURCE_STREAMER : 0); 1995 + 0 : MI_BATCH_PPGTT_HSW | MI_BATCH_NON_SECURE_HSW); 2031 1996 /* bit0-7 is the length on GEN6+ */ 2032 1997 *cs++ = offset; 2033 1998 intel_ring_advance(rq, cs); ··· 2057 2026 2058 2027 static int gen6_ring_flush(struct i915_request *rq, u32 mode) 2059 2028 { 2060 - u32 cmd, *cs; 2061 - 2062 - cs = intel_ring_begin(rq, 4); 2063 - if (IS_ERR(cs)) 2064 - return PTR_ERR(cs); 2065 - 2066 - cmd = MI_FLUSH_DW; 2067 - 2068 - /* We always require a command barrier so that subsequent 2069 - * commands, such as breadcrumb interrupts, are strictly ordered 2070 - * wrt the contents of the write cache being flushed to memory 2071 - * (and thus being coherent from the CPU). 2072 - */ 2073 - cmd |= MI_FLUSH_DW_STORE_INDEX | MI_FLUSH_DW_OP_STOREDW; 2074 - 2075 - /* 2076 - * Bspec vol 1c.3 - blitter engine command streamer: 2077 - * "If ENABLED, all TLBs will be invalidated once the flush 2078 - * operation is complete. This bit is only valid when the 2079 - * Post-Sync Operation field is a value of 1h or 3h." 
2080 - */ 2081 - if (mode & EMIT_INVALIDATE) 2082 - cmd |= MI_INVALIDATE_TLB; 2083 - *cs++ = cmd; 2084 - *cs++ = I915_GEM_HWS_SCRATCH_ADDR | MI_FLUSH_DW_USE_GTT; 2085 - *cs++ = 0; 2086 - *cs++ = MI_NOOP; 2087 - intel_ring_advance(rq, cs); 2088 - 2089 - return 0; 2029 + return gen6_flush_dw(rq, mode, MI_INVALIDATE_TLB); 2090 2030 } 2091 2031 2092 2032 static void intel_ring_init_semaphores(struct drm_i915_private *dev_priv,
+29 -10
drivers/gpu/drm/i915/intel_ringbuffer.h
··· 474 474 unsigned int dispatch_flags); 475 475 #define I915_DISPATCH_SECURE BIT(0) 476 476 #define I915_DISPATCH_PINNED BIT(1) 477 - #define I915_DISPATCH_RS BIT(2) 478 477 void (*emit_breadcrumb)(struct i915_request *rq, u32 *cs); 479 478 int emit_breadcrumb_sz; 480 479 ··· 796 797 intel_engine_create_ring(struct intel_engine_cs *engine, 797 798 struct i915_timeline *timeline, 798 799 int size); 799 - int intel_ring_pin(struct intel_ring *ring, 800 - struct drm_i915_private *i915, 801 - unsigned int offset_bias); 800 + int intel_ring_pin(struct intel_ring *ring); 802 801 void intel_ring_reset(struct intel_ring *ring, u32 tail); 803 802 unsigned int intel_ring_update_space(struct intel_ring *ring); 804 803 void intel_ring_unpin(struct intel_ring *ring); ··· 906 909 int intel_init_vebox_ring_buffer(struct intel_engine_cs *engine); 907 910 908 911 int intel_engine_stop_cs(struct intel_engine_cs *engine); 912 + void intel_engine_cancel_stop_cs(struct intel_engine_cs *engine); 909 913 910 914 u64 intel_engine_get_active_head(const struct intel_engine_cs *engine); 911 915 u64 intel_engine_get_last_batch_head(const struct intel_engine_cs *engine); 912 916 913 - static inline u32 intel_engine_get_seqno(struct intel_engine_cs *engine) 914 - { 915 - return intel_read_status_page(engine, I915_GEM_HWS_INDEX); 916 - } 917 - 918 917 static inline u32 intel_engine_last_submit(struct intel_engine_cs *engine) 919 918 { 920 - /* We are only peeking at the tail of the submit queue (and not the 919 + /* 920 + * We are only peeking at the tail of the submit queue (and not the 921 921 * queue itself) in order to gain a hint as to the current active 922 922 * state of the engine. Callers are not expected to be taking 923 923 * engine->timeline->lock, nor are they expected to be concerned ··· 922 928 * a hint and nothing more. 923 929 */ 924 930 return READ_ONCE(engine->timeline.seqno); 931 + } 932 + 933 + static inline u32 intel_engine_get_seqno(struct intel_engine_cs *engine) 934 + { 935 + return intel_read_status_page(engine, I915_GEM_HWS_INDEX); 936 + } 937 + 938 + static inline bool intel_engine_signaled(struct intel_engine_cs *engine, 939 + u32 seqno) 940 + { 941 + return i915_seqno_passed(intel_engine_get_seqno(engine), seqno); 942 + } 943 + 944 + static inline bool intel_engine_has_completed(struct intel_engine_cs *engine, 945 + u32 seqno) 946 + { 947 + GEM_BUG_ON(!seqno); 948 + return intel_engine_signaled(engine, seqno); 949 + } 950 + 951 + static inline bool intel_engine_has_started(struct intel_engine_cs *engine, 952 + u32 seqno) 953 + { 954 + GEM_BUG_ON(!seqno); 955 + return intel_engine_signaled(engine, seqno - 1); 925 956 } 926 957 927 958 void intel_engine_get_instdone(struct intel_engine_cs *engine,
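The two new request-oriented predicates above build on the same wrap-safe comparison: a request whose breadcrumb is seqno has completed once the engine reports at least seqno, and it has started once the engine has signaled seqno - 1, i.e. everything queued before it has retired past. A compact sketch (illustration only; the helper names here are mine, and the engine seqno stands in for intel_engine_get_seqno() reading the HWSP):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool seqno_passed(uint32_t seq1, uint32_t seq2)
{
	return (int32_t)(seq1 - seq2) >= 0;
}

static bool has_completed(uint32_t engine_seqno, uint32_t seqno)
{
	return seqno_passed(engine_seqno, seqno);
}

static bool has_started(uint32_t engine_seqno, uint32_t seqno)
{
	/* Started once everything before this request has signaled. */
	return seqno_passed(engine_seqno, seqno - 1);
}

int main(void)
{
	/* Engine HWSP shows 9: request 10 has started but not completed. */
	printf("started=%d completed=%d\n",
	       has_started(9, 10), has_completed(9, 10));
	/* prints: started=1 completed=0 */
	return 0;
}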
+674 -329
drivers/gpu/drm/i915/intel_runtime_pm.c
··· 52 52 bool intel_display_power_well_is_enabled(struct drm_i915_private *dev_priv, 53 53 enum i915_power_well_id power_well_id); 54 54 55 - static struct i915_power_well * 56 - lookup_power_well(struct drm_i915_private *dev_priv, 57 - enum i915_power_well_id power_well_id); 58 - 59 55 const char * 60 56 intel_display_power_domain_str(enum intel_display_power_domain domain) 61 57 { ··· 155 159 static void intel_power_well_enable(struct drm_i915_private *dev_priv, 156 160 struct i915_power_well *power_well) 157 161 { 158 - DRM_DEBUG_KMS("enabling %s\n", power_well->name); 159 - power_well->ops->enable(dev_priv, power_well); 162 + DRM_DEBUG_KMS("enabling %s\n", power_well->desc->name); 163 + power_well->desc->ops->enable(dev_priv, power_well); 160 164 power_well->hw_enabled = true; 161 165 } 162 166 163 167 static void intel_power_well_disable(struct drm_i915_private *dev_priv, 164 168 struct i915_power_well *power_well) 165 169 { 166 - DRM_DEBUG_KMS("disabling %s\n", power_well->name); 170 + DRM_DEBUG_KMS("disabling %s\n", power_well->desc->name); 167 171 power_well->hw_enabled = false; 168 - power_well->ops->disable(dev_priv, power_well); 172 + power_well->desc->ops->disable(dev_priv, power_well); 169 173 } 170 174 171 175 static void intel_power_well_get(struct drm_i915_private *dev_priv, ··· 179 183 struct i915_power_well *power_well) 180 184 { 181 185 WARN(!power_well->count, "Use count on power well %s is already zero", 182 - power_well->name); 186 + power_well->desc->name); 183 187 184 188 if (!--power_well->count) 185 189 intel_power_well_disable(dev_priv, power_well); ··· 209 213 is_enabled = true; 210 214 211 215 for_each_power_domain_well_rev(dev_priv, power_well, BIT_ULL(domain)) { 212 - if (power_well->always_on) 216 + if (power_well->desc->always_on) 213 217 continue; 214 218 215 219 if (!power_well->hw_enabled) { ··· 251 255 mutex_unlock(&power_domains->lock); 252 256 253 257 return ret; 254 - } 255 - 256 - /** 257 - * intel_display_set_init_power - set the initial power domain state 258 - * @dev_priv: i915 device instance 259 - * @enable: whether to enable or disable the initial power domain state 260 - * 261 - * For simplicity our driver load/unload and system suspend/resume code assumes 262 - * that all power domains are always enabled. This functions controls the state 263 - * of this little hack. While the initial power domain state is enabled runtime 264 - * pm is effectively disabled. 265 - */ 266 - void intel_display_set_init_power(struct drm_i915_private *dev_priv, 267 - bool enable) 268 - { 269 - if (dev_priv->power_domains.init_power_on == enable) 270 - return; 271 - 272 - if (enable) 273 - intel_display_power_get(dev_priv, POWER_DOMAIN_INIT); 274 - else 275 - intel_display_power_put(dev_priv, POWER_DOMAIN_INIT); 276 - 277 - dev_priv->power_domains.init_power_on = enable; 278 258 } 279 259 280 260 /* ··· 295 323 static void hsw_wait_for_power_well_enable(struct drm_i915_private *dev_priv, 296 324 struct i915_power_well *power_well) 297 325 { 298 - enum i915_power_well_id id = power_well->id; 326 + const struct i915_power_well_regs *regs = power_well->desc->hsw.regs; 327 + int pw_idx = power_well->desc->hsw.idx; 299 328 300 329 /* Timeout for PW1:10 us, AUX:not specified, other PWs:20 us. 
*/ 301 330 WARN_ON(intel_wait_for_register(dev_priv, 302 - HSW_PWR_WELL_CTL_DRIVER(id), 303 - HSW_PWR_WELL_CTL_STATE(id), 304 - HSW_PWR_WELL_CTL_STATE(id), 331 + regs->driver, 332 + HSW_PWR_WELL_CTL_STATE(pw_idx), 333 + HSW_PWR_WELL_CTL_STATE(pw_idx), 305 334 1)); 306 335 } 307 336 308 337 static u32 hsw_power_well_requesters(struct drm_i915_private *dev_priv, 309 - enum i915_power_well_id id) 338 + const struct i915_power_well_regs *regs, 339 + int pw_idx) 310 340 { 311 - u32 req_mask = HSW_PWR_WELL_CTL_REQ(id); 341 + u32 req_mask = HSW_PWR_WELL_CTL_REQ(pw_idx); 312 342 u32 ret; 313 343 314 - ret = I915_READ(HSW_PWR_WELL_CTL_BIOS(id)) & req_mask ? 1 : 0; 315 - ret |= I915_READ(HSW_PWR_WELL_CTL_DRIVER(id)) & req_mask ? 2 : 0; 316 - ret |= I915_READ(HSW_PWR_WELL_CTL_KVMR) & req_mask ? 4 : 0; 317 - ret |= I915_READ(HSW_PWR_WELL_CTL_DEBUG(id)) & req_mask ? 8 : 0; 344 + ret = I915_READ(regs->bios) & req_mask ? 1 : 0; 345 + ret |= I915_READ(regs->driver) & req_mask ? 2 : 0; 346 + if (regs->kvmr.reg) 347 + ret |= I915_READ(regs->kvmr) & req_mask ? 4 : 0; 348 + ret |= I915_READ(regs->debug) & req_mask ? 8 : 0; 318 349 319 350 return ret; 320 351 } ··· 325 350 static void hsw_wait_for_power_well_disable(struct drm_i915_private *dev_priv, 326 351 struct i915_power_well *power_well) 327 352 { 328 - enum i915_power_well_id id = power_well->id; 353 + const struct i915_power_well_regs *regs = power_well->desc->hsw.regs; 354 + int pw_idx = power_well->desc->hsw.idx; 329 355 bool disabled; 330 356 u32 reqs; 331 357 ··· 339 363 * Skip the wait in case any of the request bits are set and print a 340 364 * diagnostic message. 341 365 */ 342 - wait_for((disabled = !(I915_READ(HSW_PWR_WELL_CTL_DRIVER(id)) & 343 - HSW_PWR_WELL_CTL_STATE(id))) || 344 - (reqs = hsw_power_well_requesters(dev_priv, id)), 1); 366 + wait_for((disabled = !(I915_READ(regs->driver) & 367 + HSW_PWR_WELL_CTL_STATE(pw_idx))) || 368 + (reqs = hsw_power_well_requesters(dev_priv, regs, pw_idx)), 1); 345 369 if (disabled) 346 370 return; 347 371 348 372 DRM_DEBUG_KMS("%s forced on (bios:%d driver:%d kvmr:%d debug:%d)\n", 349 - power_well->name, 373 + power_well->desc->name, 350 374 !!(reqs & 1), !!(reqs & 2), !!(reqs & 4), !!(reqs & 8)); 351 375 } 352 376 ··· 362 386 static void hsw_power_well_enable(struct drm_i915_private *dev_priv, 363 387 struct i915_power_well *power_well) 364 388 { 365 - enum i915_power_well_id id = power_well->id; 366 - bool wait_fuses = power_well->hsw.has_fuses; 389 + const struct i915_power_well_regs *regs = power_well->desc->hsw.regs; 390 + int pw_idx = power_well->desc->hsw.idx; 391 + bool wait_fuses = power_well->desc->hsw.has_fuses; 367 392 enum skl_power_gate uninitialized_var(pg); 368 393 u32 val; 369 394 370 395 if (wait_fuses) { 371 - pg = INTEL_GEN(dev_priv) >= 11 ? ICL_PW_TO_PG(id) : 372 - SKL_PW_TO_PG(id); 396 + pg = INTEL_GEN(dev_priv) >= 11 ? 
ICL_PW_CTL_IDX_TO_PG(pw_idx) : 397 + SKL_PW_CTL_IDX_TO_PG(pw_idx); 373 398 /* 374 399 * For PW1 we have to wait both for the PW0/PG0 fuse state 375 400 * before enabling the power well and PW1/PG1's own fuse ··· 382 405 gen9_wait_for_power_well_fuses(dev_priv, SKL_PG0); 383 406 } 384 407 385 - val = I915_READ(HSW_PWR_WELL_CTL_DRIVER(id)); 386 - I915_WRITE(HSW_PWR_WELL_CTL_DRIVER(id), val | HSW_PWR_WELL_CTL_REQ(id)); 408 + val = I915_READ(regs->driver); 409 + I915_WRITE(regs->driver, val | HSW_PWR_WELL_CTL_REQ(pw_idx)); 387 410 hsw_wait_for_power_well_enable(dev_priv, power_well); 388 411 389 412 /* Display WA #1178: cnl */ 390 413 if (IS_CANNONLAKE(dev_priv) && 391 - (id == CNL_DISP_PW_AUX_B || id == CNL_DISP_PW_AUX_C || 392 - id == CNL_DISP_PW_AUX_D || id == CNL_DISP_PW_AUX_F)) { 393 - val = I915_READ(CNL_AUX_ANAOVRD1(id)); 414 + pw_idx >= GLK_PW_CTL_IDX_AUX_B && 415 + pw_idx <= CNL_PW_CTL_IDX_AUX_F) { 416 + val = I915_READ(CNL_AUX_ANAOVRD1(pw_idx)); 394 417 val |= CNL_AUX_ANAOVRD1_ENABLE | CNL_AUX_ANAOVRD1_LDO_BYPASS; 395 - I915_WRITE(CNL_AUX_ANAOVRD1(id), val); 418 + I915_WRITE(CNL_AUX_ANAOVRD1(pw_idx), val); 396 419 } 397 420 398 421 if (wait_fuses) 399 422 gen9_wait_for_power_well_fuses(dev_priv, pg); 400 423 401 - hsw_power_well_post_enable(dev_priv, power_well->hsw.irq_pipe_mask, 402 - power_well->hsw.has_vga); 424 + hsw_power_well_post_enable(dev_priv, 425 + power_well->desc->hsw.irq_pipe_mask, 426 + power_well->desc->hsw.has_vga); 403 427 } 404 428 405 429 static void hsw_power_well_disable(struct drm_i915_private *dev_priv, 406 430 struct i915_power_well *power_well) 407 431 { 408 - enum i915_power_well_id id = power_well->id; 432 + const struct i915_power_well_regs *regs = power_well->desc->hsw.regs; 433 + int pw_idx = power_well->desc->hsw.idx; 409 434 u32 val; 410 435 411 - hsw_power_well_pre_disable(dev_priv, power_well->hsw.irq_pipe_mask); 436 + hsw_power_well_pre_disable(dev_priv, 437 + power_well->desc->hsw.irq_pipe_mask); 412 438 413 - val = I915_READ(HSW_PWR_WELL_CTL_DRIVER(id)); 414 - I915_WRITE(HSW_PWR_WELL_CTL_DRIVER(id), 415 - val & ~HSW_PWR_WELL_CTL_REQ(id)); 439 + val = I915_READ(regs->driver); 440 + I915_WRITE(regs->driver, val & ~HSW_PWR_WELL_CTL_REQ(pw_idx)); 416 441 hsw_wait_for_power_well_disable(dev_priv, power_well); 417 442 } 418 443 419 - #define ICL_AUX_PW_TO_PORT(pw) ((pw) - ICL_DISP_PW_AUX_A) 444 + #define ICL_AUX_PW_TO_PORT(pw_idx) ((pw_idx) - ICL_PW_CTL_IDX_AUX_A) 420 445 421 446 static void 422 447 icl_combo_phy_aux_power_well_enable(struct drm_i915_private *dev_priv, 423 448 struct i915_power_well *power_well) 424 449 { 425 - enum i915_power_well_id id = power_well->id; 426 - enum port port = ICL_AUX_PW_TO_PORT(id); 450 + const struct i915_power_well_regs *regs = power_well->desc->hsw.regs; 451 + int pw_idx = power_well->desc->hsw.idx; 452 + enum port port = ICL_AUX_PW_TO_PORT(pw_idx); 427 453 u32 val; 428 454 429 - val = I915_READ(HSW_PWR_WELL_CTL_DRIVER(id)); 430 - I915_WRITE(HSW_PWR_WELL_CTL_DRIVER(id), val | HSW_PWR_WELL_CTL_REQ(id)); 455 + val = I915_READ(regs->driver); 456 + I915_WRITE(regs->driver, val | HSW_PWR_WELL_CTL_REQ(pw_idx)); 431 457 432 458 val = I915_READ(ICL_PORT_CL_DW12(port)); 433 459 I915_WRITE(ICL_PORT_CL_DW12(port), val | ICL_LANE_ENABLE_AUX); ··· 442 462 icl_combo_phy_aux_power_well_disable(struct drm_i915_private *dev_priv, 443 463 struct i915_power_well *power_well) 444 464 { 445 - enum i915_power_well_id id = power_well->id; 446 - enum port port = ICL_AUX_PW_TO_PORT(id); 465 + const struct i915_power_well_regs *regs = 
power_well->desc->hsw.regs; 466 + int pw_idx = power_well->desc->hsw.idx; 467 + enum port port = ICL_AUX_PW_TO_PORT(pw_idx); 447 468 u32 val; 448 469 449 470 val = I915_READ(ICL_PORT_CL_DW12(port)); 450 471 I915_WRITE(ICL_PORT_CL_DW12(port), val & ~ICL_LANE_ENABLE_AUX); 451 472 452 - val = I915_READ(HSW_PWR_WELL_CTL_DRIVER(id)); 453 - I915_WRITE(HSW_PWR_WELL_CTL_DRIVER(id), 454 - val & ~HSW_PWR_WELL_CTL_REQ(id)); 473 + val = I915_READ(regs->driver); 474 + I915_WRITE(regs->driver, val & ~HSW_PWR_WELL_CTL_REQ(pw_idx)); 455 475 456 476 hsw_wait_for_power_well_disable(dev_priv, power_well); 457 477 } ··· 464 484 static bool hsw_power_well_enabled(struct drm_i915_private *dev_priv, 465 485 struct i915_power_well *power_well) 466 486 { 467 - enum i915_power_well_id id = power_well->id; 468 - u32 mask = HSW_PWR_WELL_CTL_REQ(id) | HSW_PWR_WELL_CTL_STATE(id); 487 + const struct i915_power_well_regs *regs = power_well->desc->hsw.regs; 488 + int pw_idx = power_well->desc->hsw.idx; 489 + u32 mask = HSW_PWR_WELL_CTL_REQ(pw_idx) | 490 + HSW_PWR_WELL_CTL_STATE(pw_idx); 469 491 470 - return (I915_READ(HSW_PWR_WELL_CTL_DRIVER(id)) & mask) == mask; 492 + return (I915_READ(regs->driver) & mask) == mask; 471 493 } 472 494 473 495 static void assert_can_enable_dc9(struct drm_i915_private *dev_priv) 474 496 { 475 - enum i915_power_well_id id = SKL_DISP_PW_2; 476 - 477 497 WARN_ONCE((I915_READ(DC_STATE_EN) & DC_STATE_EN_DC9), 478 498 "DC9 already programmed to be enabled.\n"); 479 499 WARN_ONCE(I915_READ(DC_STATE_EN) & DC_STATE_EN_UPTO_DC5, 480 500 "DC5 still not disabled to enable DC9.\n"); 481 - WARN_ONCE(I915_READ(HSW_PWR_WELL_CTL_DRIVER(id)) & 482 - HSW_PWR_WELL_CTL_REQ(id), 501 + WARN_ONCE(I915_READ(HSW_PWR_WELL_CTL2) & 502 + HSW_PWR_WELL_CTL_REQ(SKL_PW_CTL_IDX_PW_2), 483 503 "Power well 2 on.\n"); 484 504 WARN_ONCE(intel_irqs_enabled(dev_priv), 485 505 "Interrupts not disabled yet.\n"); ··· 648 668 WARN_ONCE(!I915_READ(CSR_HTP_SKL), "CSR HTP Not fine\n"); 649 669 } 650 670 671 + static struct i915_power_well * 672 + lookup_power_well(struct drm_i915_private *dev_priv, 673 + enum i915_power_well_id power_well_id) 674 + { 675 + struct i915_power_well *power_well; 676 + 677 + for_each_power_well(dev_priv, power_well) 678 + if (power_well->desc->id == power_well_id) 679 + return power_well; 680 + 681 + /* 682 + * It's not feasible to add error checking code to the callers since 683 + * this condition really shouldn't happen and it doesn't even make sense 684 + * to abort things like display initialization sequences. Just return 685 + * the first power well and hope the WARN gets reported so we can fix 686 + * our driver. 687 + */ 688 + WARN(1, "Power well %d not defined for this platform\n", power_well_id); 689 + return &dev_priv->power_domains.power_wells[0]; 690 + } 691 + 651 692 static void assert_can_enable_dc5(struct drm_i915_private *dev_priv) 652 693 { 653 694 bool pg2_enabled = intel_display_power_well_is_enabled(dev_priv, ··· 724 723 static void hsw_power_well_sync_hw(struct drm_i915_private *dev_priv, 725 724 struct i915_power_well *power_well) 726 725 { 727 - enum i915_power_well_id id = power_well->id; 728 - u32 mask = HSW_PWR_WELL_CTL_REQ(id); 729 - u32 bios_req = I915_READ(HSW_PWR_WELL_CTL_BIOS(id)); 726 + const struct i915_power_well_regs *regs = power_well->desc->hsw.regs; 727 + int pw_idx = power_well->desc->hsw.idx; 728 + u32 mask = HSW_PWR_WELL_CTL_REQ(pw_idx); 729 + u32 bios_req = I915_READ(regs->bios); 730 730 731 731 /* Take over the request bit if set by BIOS. 
*/ 732 732 if (bios_req & mask) { 733 - u32 drv_req = I915_READ(HSW_PWR_WELL_CTL_DRIVER(id)); 733 + u32 drv_req = I915_READ(regs->driver); 734 734 735 735 if (!(drv_req & mask)) 736 - I915_WRITE(HSW_PWR_WELL_CTL_DRIVER(id), drv_req | mask); 737 - I915_WRITE(HSW_PWR_WELL_CTL_BIOS(id), bios_req & ~mask); 736 + I915_WRITE(regs->driver, drv_req | mask); 737 + I915_WRITE(regs->bios, bios_req & ~mask); 738 738 } 739 739 } 740 740 741 741 static void bxt_dpio_cmn_power_well_enable(struct drm_i915_private *dev_priv, 742 742 struct i915_power_well *power_well) 743 743 { 744 - bxt_ddi_phy_init(dev_priv, power_well->bxt.phy); 744 + bxt_ddi_phy_init(dev_priv, power_well->desc->bxt.phy); 745 745 } 746 746 747 747 static void bxt_dpio_cmn_power_well_disable(struct drm_i915_private *dev_priv, 748 748 struct i915_power_well *power_well) 749 749 { 750 - bxt_ddi_phy_uninit(dev_priv, power_well->bxt.phy); 750 + bxt_ddi_phy_uninit(dev_priv, power_well->desc->bxt.phy); 751 751 } 752 752 753 753 static bool bxt_dpio_cmn_power_well_enabled(struct drm_i915_private *dev_priv, 754 754 struct i915_power_well *power_well) 755 755 { 756 - return bxt_ddi_phy_is_enabled(dev_priv, power_well->bxt.phy); 756 + return bxt_ddi_phy_is_enabled(dev_priv, power_well->desc->bxt.phy); 757 757 } 758 758 759 759 static void bxt_verify_ddi_phy_power_wells(struct drm_i915_private *dev_priv) 760 760 { 761 761 struct i915_power_well *power_well; 762 762 763 - power_well = lookup_power_well(dev_priv, BXT_DPIO_CMN_A); 763 + power_well = lookup_power_well(dev_priv, BXT_DISP_PW_DPIO_CMN_A); 764 764 if (power_well->count > 0) 765 - bxt_ddi_phy_verify_state(dev_priv, power_well->bxt.phy); 765 + bxt_ddi_phy_verify_state(dev_priv, power_well->desc->bxt.phy); 766 766 767 - power_well = lookup_power_well(dev_priv, BXT_DPIO_CMN_BC); 767 + power_well = lookup_power_well(dev_priv, VLV_DISP_PW_DPIO_CMN_BC); 768 768 if (power_well->count > 0) 769 - bxt_ddi_phy_verify_state(dev_priv, power_well->bxt.phy); 769 + bxt_ddi_phy_verify_state(dev_priv, power_well->desc->bxt.phy); 770 770 771 771 if (IS_GEMINILAKE(dev_priv)) { 772 - power_well = lookup_power_well(dev_priv, GLK_DPIO_CMN_C); 772 + power_well = lookup_power_well(dev_priv, 773 + GLK_DISP_PW_DPIO_CMN_C); 773 774 if (power_well->count > 0) 774 - bxt_ddi_phy_verify_state(dev_priv, power_well->bxt.phy); 775 + bxt_ddi_phy_verify_state(dev_priv, 776 + power_well->desc->bxt.phy); 775 777 } 776 778 } 777 779 ··· 873 869 static void vlv_set_power_well(struct drm_i915_private *dev_priv, 874 870 struct i915_power_well *power_well, bool enable) 875 871 { 876 - enum i915_power_well_id power_well_id = power_well->id; 872 + int pw_idx = power_well->desc->vlv.idx; 877 873 u32 mask; 878 874 u32 state; 879 875 u32 ctrl; 880 876 881 - mask = PUNIT_PWRGT_MASK(power_well_id); 882 - state = enable ? PUNIT_PWRGT_PWR_ON(power_well_id) : 883 - PUNIT_PWRGT_PWR_GATE(power_well_id); 877 + mask = PUNIT_PWRGT_MASK(pw_idx); 878 + state = enable ? 
PUNIT_PWRGT_PWR_ON(pw_idx) : 879 + PUNIT_PWRGT_PWR_GATE(pw_idx); 884 880 885 881 mutex_lock(&dev_priv->pcu_lock); 886 882 ··· 921 917 static bool vlv_power_well_enabled(struct drm_i915_private *dev_priv, 922 918 struct i915_power_well *power_well) 923 919 { 924 - enum i915_power_well_id power_well_id = power_well->id; 920 + int pw_idx = power_well->desc->vlv.idx; 925 921 bool enabled = false; 926 922 u32 mask; 927 923 u32 state; 928 924 u32 ctrl; 929 925 930 - mask = PUNIT_PWRGT_MASK(power_well_id); 931 - ctrl = PUNIT_PWRGT_PWR_ON(power_well_id); 926 + mask = PUNIT_PWRGT_MASK(pw_idx); 927 + ctrl = PUNIT_PWRGT_PWR_ON(pw_idx); 932 928 933 929 mutex_lock(&dev_priv->pcu_lock); 934 930 ··· 937 933 * We only ever set the power-on and power-gate states, anything 938 934 * else is unexpected. 939 935 */ 940 - WARN_ON(state != PUNIT_PWRGT_PWR_ON(power_well_id) && 941 - state != PUNIT_PWRGT_PWR_GATE(power_well_id)); 936 + WARN_ON(state != PUNIT_PWRGT_PWR_ON(pw_idx) && 937 + state != PUNIT_PWRGT_PWR_GATE(pw_idx)); 942 938 if (state == ctrl) 943 939 enabled = true; 944 940 ··· 1049 1045 static void vlv_display_power_well_enable(struct drm_i915_private *dev_priv, 1050 1046 struct i915_power_well *power_well) 1051 1047 { 1052 - WARN_ON_ONCE(power_well->id != PUNIT_POWER_WELL_DISP2D); 1053 - 1054 1048 vlv_set_power_well(dev_priv, power_well, true); 1055 1049 1056 1050 vlv_display_power_well_init(dev_priv); ··· 1057 1055 static void vlv_display_power_well_disable(struct drm_i915_private *dev_priv, 1058 1056 struct i915_power_well *power_well) 1059 1057 { 1060 - WARN_ON_ONCE(power_well->id != PUNIT_POWER_WELL_DISP2D); 1061 - 1062 1058 vlv_display_power_well_deinit(dev_priv); 1063 1059 1064 1060 vlv_set_power_well(dev_priv, power_well, false); ··· 1065 1065 static void vlv_dpio_cmn_power_well_enable(struct drm_i915_private *dev_priv, 1066 1066 struct i915_power_well *power_well) 1067 1067 { 1068 - WARN_ON_ONCE(power_well->id != PUNIT_POWER_WELL_DPIO_CMN_BC); 1069 - 1070 1068 /* since ref/cri clock was enabled */ 1071 1069 udelay(1); /* >10ns for cmnreset, >0ns for sidereset */ 1072 1070 ··· 1089 1091 { 1090 1092 enum pipe pipe; 1091 1093 1092 - WARN_ON_ONCE(power_well->id != PUNIT_POWER_WELL_DPIO_CMN_BC); 1093 - 1094 1094 for_each_pipe(dev_priv, pipe) 1095 1095 assert_pll_disabled(dev_priv, pipe); 1096 1096 ··· 1100 1104 1101 1105 #define POWER_DOMAIN_MASK (GENMASK_ULL(POWER_DOMAIN_NUM - 1, 0)) 1102 1106 1103 - static struct i915_power_well * 1104 - lookup_power_well(struct drm_i915_private *dev_priv, 1105 - enum i915_power_well_id power_well_id) 1106 - { 1107 - struct i915_power_domains *power_domains = &dev_priv->power_domains; 1108 - int i; 1109 - 1110 - for (i = 0; i < power_domains->power_well_count; i++) { 1111 - struct i915_power_well *power_well; 1112 - 1113 - power_well = &power_domains->power_wells[i]; 1114 - if (power_well->id == power_well_id) 1115 - return power_well; 1116 - } 1117 - 1118 - return NULL; 1119 - } 1120 - 1121 1107 #define BITS_SET(val, bits) (((val) & (bits)) == (bits)) 1122 1108 1123 1109 static void assert_chv_phy_status(struct drm_i915_private *dev_priv) 1124 1110 { 1125 1111 struct i915_power_well *cmn_bc = 1126 - lookup_power_well(dev_priv, PUNIT_POWER_WELL_DPIO_CMN_BC); 1112 + lookup_power_well(dev_priv, VLV_DISP_PW_DPIO_CMN_BC); 1127 1113 struct i915_power_well *cmn_d = 1128 - lookup_power_well(dev_priv, PUNIT_POWER_WELL_DPIO_CMN_D); 1114 + lookup_power_well(dev_priv, CHV_DISP_PW_DPIO_CMN_D); 1129 1115 u32 phy_control = dev_priv->chv_phy_control; 1130 1116 u32 phy_status = 
0; 1131 1117 u32 phy_status_mask = 0xffffffff; ··· 1132 1154 PHY_STATUS_SPLINE_LDO(DPIO_PHY1, DPIO_CH0, 0) | 1133 1155 PHY_STATUS_SPLINE_LDO(DPIO_PHY1, DPIO_CH0, 1)); 1134 1156 1135 - if (cmn_bc->ops->is_enabled(dev_priv, cmn_bc)) { 1157 + if (cmn_bc->desc->ops->is_enabled(dev_priv, cmn_bc)) { 1136 1158 phy_status |= PHY_POWERGOOD(DPIO_PHY0); 1137 1159 1138 1160 /* this assumes override is only used to enable lanes */ ··· 1173 1195 phy_status |= PHY_STATUS_SPLINE_LDO(DPIO_PHY0, DPIO_CH1, 1); 1174 1196 } 1175 1197 1176 - if (cmn_d->ops->is_enabled(dev_priv, cmn_d)) { 1198 + if (cmn_d->desc->ops->is_enabled(dev_priv, cmn_d)) { 1177 1199 phy_status |= PHY_POWERGOOD(DPIO_PHY1); 1178 1200 1179 1201 /* this assumes override is only used to enable lanes */ ··· 1217 1239 enum pipe pipe; 1218 1240 uint32_t tmp; 1219 1241 1220 - WARN_ON_ONCE(power_well->id != PUNIT_POWER_WELL_DPIO_CMN_BC && 1221 - power_well->id != PUNIT_POWER_WELL_DPIO_CMN_D); 1242 + WARN_ON_ONCE(power_well->desc->id != VLV_DISP_PW_DPIO_CMN_BC && 1243 + power_well->desc->id != CHV_DISP_PW_DPIO_CMN_D); 1222 1244 1223 - if (power_well->id == PUNIT_POWER_WELL_DPIO_CMN_BC) { 1245 + if (power_well->desc->id == VLV_DISP_PW_DPIO_CMN_BC) { 1224 1246 pipe = PIPE_A; 1225 1247 phy = DPIO_PHY0; 1226 1248 } else { ··· 1248 1270 DPIO_SUS_CLK_CONFIG_GATE_CLKREQ; 1249 1271 vlv_dpio_write(dev_priv, pipe, CHV_CMN_DW28, tmp); 1250 1272 1251 - if (power_well->id == PUNIT_POWER_WELL_DPIO_CMN_BC) { 1273 + if (power_well->desc->id == VLV_DISP_PW_DPIO_CMN_BC) { 1252 1274 tmp = vlv_dpio_read(dev_priv, pipe, _CHV_CMN_DW6_CH1); 1253 1275 tmp |= DPIO_DYNPWRDOWNEN_CH1; 1254 1276 vlv_dpio_write(dev_priv, pipe, _CHV_CMN_DW6_CH1, tmp); ··· 1279 1301 { 1280 1302 enum dpio_phy phy; 1281 1303 1282 - WARN_ON_ONCE(power_well->id != PUNIT_POWER_WELL_DPIO_CMN_BC && 1283 - power_well->id != PUNIT_POWER_WELL_DPIO_CMN_D); 1304 + WARN_ON_ONCE(power_well->desc->id != VLV_DISP_PW_DPIO_CMN_BC && 1305 + power_well->desc->id != CHV_DISP_PW_DPIO_CMN_D); 1284 1306 1285 - if (power_well->id == PUNIT_POWER_WELL_DPIO_CMN_BC) { 1307 + if (power_well->desc->id == VLV_DISP_PW_DPIO_CMN_BC) { 1286 1308 phy = DPIO_PHY0; 1287 1309 assert_pll_disabled(dev_priv, PIPE_A); 1288 1310 assert_pll_disabled(dev_priv, PIPE_B); ··· 1494 1516 static void chv_pipe_power_well_enable(struct drm_i915_private *dev_priv, 1495 1517 struct i915_power_well *power_well) 1496 1518 { 1497 - WARN_ON_ONCE(power_well->id != CHV_DISP_PW_PIPE_A); 1498 - 1499 1519 chv_set_pipe_power_well(dev_priv, power_well, true); 1500 1520 1501 1521 vlv_display_power_well_init(dev_priv); ··· 1502 1526 static void chv_pipe_power_well_disable(struct drm_i915_private *dev_priv, 1503 1527 struct i915_power_well *power_well) 1504 1528 { 1505 - WARN_ON_ONCE(power_well->id != CHV_DISP_PW_PIPE_A); 1506 - 1507 1529 vlv_display_power_well_deinit(dev_priv); 1508 1530 1509 1531 chv_set_pipe_power_well(dev_priv, power_well, false); ··· 2037 2063 .is_enabled = vlv_power_well_enabled, 2038 2064 }; 2039 2065 2040 - static struct i915_power_well i9xx_always_on_power_well[] = { 2066 + static const struct i915_power_well_desc i9xx_always_on_power_well[] = { 2041 2067 { 2042 2068 .name = "always-on", 2043 2069 .always_on = 1, 2044 2070 .domains = POWER_DOMAIN_MASK, 2045 2071 .ops = &i9xx_always_on_power_well_ops, 2046 - .id = I915_DISP_PW_ALWAYS_ON, 2072 + .id = DISP_PW_ID_NONE, 2047 2073 }, 2048 2074 }; 2049 2075 ··· 2054 2080 .is_enabled = i830_pipes_power_well_enabled, 2055 2081 }; 2056 2082 2057 - static struct i915_power_well i830_power_wells[] = { 
2083 + static const struct i915_power_well_desc i830_power_wells[] = { 2058 2084 { 2059 2085 .name = "always-on", 2060 2086 .always_on = 1, 2061 2087 .domains = POWER_DOMAIN_MASK, 2062 2088 .ops = &i9xx_always_on_power_well_ops, 2063 - .id = I915_DISP_PW_ALWAYS_ON, 2089 + .id = DISP_PW_ID_NONE, 2064 2090 }, 2065 2091 { 2066 2092 .name = "pipes", 2067 2093 .domains = I830_PIPES_POWER_DOMAINS, 2068 2094 .ops = &i830_pipes_power_well_ops, 2069 - .id = I830_DISP_PW_PIPES, 2095 + .id = DISP_PW_ID_NONE, 2070 2096 }, 2071 2097 }; 2072 2098 ··· 2091 2117 .is_enabled = bxt_dpio_cmn_power_well_enabled, 2092 2118 }; 2093 2119 2094 - static struct i915_power_well hsw_power_wells[] = { 2120 + static const struct i915_power_well_regs hsw_power_well_regs = { 2121 + .bios = HSW_PWR_WELL_CTL1, 2122 + .driver = HSW_PWR_WELL_CTL2, 2123 + .kvmr = HSW_PWR_WELL_CTL3, 2124 + .debug = HSW_PWR_WELL_CTL4, 2125 + }; 2126 + 2127 + static const struct i915_power_well_desc hsw_power_wells[] = { 2095 2128 { 2096 2129 .name = "always-on", 2097 2130 .always_on = 1, 2098 2131 .domains = POWER_DOMAIN_MASK, 2099 2132 .ops = &i9xx_always_on_power_well_ops, 2100 - .id = I915_DISP_PW_ALWAYS_ON, 2133 + .id = DISP_PW_ID_NONE, 2101 2134 }, 2102 2135 { 2103 2136 .name = "display", ··· 2112 2131 .ops = &hsw_power_well_ops, 2113 2132 .id = HSW_DISP_PW_GLOBAL, 2114 2133 { 2134 + .hsw.regs = &hsw_power_well_regs, 2135 + .hsw.idx = HSW_PW_CTL_IDX_GLOBAL, 2115 2136 .hsw.has_vga = true, 2116 2137 }, 2117 2138 }, 2118 2139 }; 2119 2140 2120 - static struct i915_power_well bdw_power_wells[] = { 2141 + static const struct i915_power_well_desc bdw_power_wells[] = { 2121 2142 { 2122 2143 .name = "always-on", 2123 2144 .always_on = 1, 2124 2145 .domains = POWER_DOMAIN_MASK, 2125 2146 .ops = &i9xx_always_on_power_well_ops, 2126 - .id = I915_DISP_PW_ALWAYS_ON, 2147 + .id = DISP_PW_ID_NONE, 2127 2148 }, 2128 2149 { 2129 2150 .name = "display", ··· 2133 2150 .ops = &hsw_power_well_ops, 2134 2151 .id = HSW_DISP_PW_GLOBAL, 2135 2152 { 2153 + .hsw.regs = &hsw_power_well_regs, 2154 + .hsw.idx = HSW_PW_CTL_IDX_GLOBAL, 2136 2155 .hsw.irq_pipe_mask = BIT(PIPE_B) | BIT(PIPE_C), 2137 2156 .hsw.has_vga = true, 2138 2157 }, ··· 2162 2177 .is_enabled = vlv_power_well_enabled, 2163 2178 }; 2164 2179 2165 - static struct i915_power_well vlv_power_wells[] = { 2180 + static const struct i915_power_well_desc vlv_power_wells[] = { 2166 2181 { 2167 2182 .name = "always-on", 2168 2183 .always_on = 1, 2169 2184 .domains = POWER_DOMAIN_MASK, 2170 2185 .ops = &i9xx_always_on_power_well_ops, 2171 - .id = I915_DISP_PW_ALWAYS_ON, 2186 + .id = DISP_PW_ID_NONE, 2172 2187 }, 2173 2188 { 2174 2189 .name = "display", 2175 2190 .domains = VLV_DISPLAY_POWER_DOMAINS, 2176 - .id = PUNIT_POWER_WELL_DISP2D, 2177 2191 .ops = &vlv_display_power_well_ops, 2192 + .id = VLV_DISP_PW_DISP2D, 2193 + { 2194 + .vlv.idx = PUNIT_PWGT_IDX_DISP2D, 2195 + }, 2178 2196 }, 2179 2197 { 2180 2198 .name = "dpio-tx-b-01", ··· 2186 2198 VLV_DPIO_TX_C_LANES_01_POWER_DOMAINS | 2187 2199 VLV_DPIO_TX_C_LANES_23_POWER_DOMAINS, 2188 2200 .ops = &vlv_dpio_power_well_ops, 2189 - .id = PUNIT_POWER_WELL_DPIO_TX_B_LANES_01, 2201 + .id = DISP_PW_ID_NONE, 2202 + { 2203 + .vlv.idx = PUNIT_PWGT_IDX_DPIO_TX_B_LANES_01, 2204 + }, 2190 2205 }, 2191 2206 { 2192 2207 .name = "dpio-tx-b-23", ··· 2198 2207 VLV_DPIO_TX_C_LANES_01_POWER_DOMAINS | 2199 2208 VLV_DPIO_TX_C_LANES_23_POWER_DOMAINS, 2200 2209 .ops = &vlv_dpio_power_well_ops, 2201 - .id = PUNIT_POWER_WELL_DPIO_TX_B_LANES_23, 2210 + .id = DISP_PW_ID_NONE, 2211 + { 2212 + 
.vlv.idx = PUNIT_PWGT_IDX_DPIO_TX_B_LANES_23, 2213 + }, 2202 2214 }, 2203 2215 { 2204 2216 .name = "dpio-tx-c-01", ··· 2210 2216 VLV_DPIO_TX_C_LANES_01_POWER_DOMAINS | 2211 2217 VLV_DPIO_TX_C_LANES_23_POWER_DOMAINS, 2212 2218 .ops = &vlv_dpio_power_well_ops, 2213 - .id = PUNIT_POWER_WELL_DPIO_TX_C_LANES_01, 2219 + .id = DISP_PW_ID_NONE, 2220 + { 2221 + .vlv.idx = PUNIT_PWGT_IDX_DPIO_TX_C_LANES_01, 2222 + }, 2214 2223 }, 2215 2224 { 2216 2225 .name = "dpio-tx-c-23", ··· 2222 2225 VLV_DPIO_TX_C_LANES_01_POWER_DOMAINS | 2223 2226 VLV_DPIO_TX_C_LANES_23_POWER_DOMAINS, 2224 2227 .ops = &vlv_dpio_power_well_ops, 2225 - .id = PUNIT_POWER_WELL_DPIO_TX_C_LANES_23, 2228 + .id = DISP_PW_ID_NONE, 2229 + { 2230 + .vlv.idx = PUNIT_PWGT_IDX_DPIO_TX_C_LANES_23, 2231 + }, 2226 2232 }, 2227 2233 { 2228 2234 .name = "dpio-common", 2229 2235 .domains = VLV_DPIO_CMN_BC_POWER_DOMAINS, 2230 - .id = PUNIT_POWER_WELL_DPIO_CMN_BC, 2231 2236 .ops = &vlv_dpio_cmn_power_well_ops, 2237 + .id = VLV_DISP_PW_DPIO_CMN_BC, 2238 + { 2239 + .vlv.idx = PUNIT_PWGT_IDX_DPIO_CMN_BC, 2240 + }, 2232 2241 }, 2233 2242 }; 2234 2243 2235 - static struct i915_power_well chv_power_wells[] = { 2244 + static const struct i915_power_well_desc chv_power_wells[] = { 2236 2245 { 2237 2246 .name = "always-on", 2238 2247 .always_on = 1, 2239 2248 .domains = POWER_DOMAIN_MASK, 2240 2249 .ops = &i9xx_always_on_power_well_ops, 2241 - .id = I915_DISP_PW_ALWAYS_ON, 2250 + .id = DISP_PW_ID_NONE, 2242 2251 }, 2243 2252 { 2244 2253 .name = "display", ··· 2254 2251 * required for any pipe to work. 2255 2252 */ 2256 2253 .domains = CHV_DISPLAY_POWER_DOMAINS, 2257 - .id = CHV_DISP_PW_PIPE_A, 2258 2254 .ops = &chv_pipe_power_well_ops, 2255 + .id = DISP_PW_ID_NONE, 2259 2256 }, 2260 2257 { 2261 2258 .name = "dpio-common-bc", 2262 2259 .domains = CHV_DPIO_CMN_BC_POWER_DOMAINS, 2263 - .id = PUNIT_POWER_WELL_DPIO_CMN_BC, 2264 2260 .ops = &chv_dpio_cmn_power_well_ops, 2261 + .id = VLV_DISP_PW_DPIO_CMN_BC, 2262 + { 2263 + .vlv.idx = PUNIT_PWGT_IDX_DPIO_CMN_BC, 2264 + }, 2265 2265 }, 2266 2266 { 2267 2267 .name = "dpio-common-d", 2268 2268 .domains = CHV_DPIO_CMN_D_POWER_DOMAINS, 2269 - .id = PUNIT_POWER_WELL_DPIO_CMN_D, 2270 2269 .ops = &chv_dpio_cmn_power_well_ops, 2270 + .id = CHV_DISP_PW_DPIO_CMN_D, 2271 + { 2272 + .vlv.idx = PUNIT_PWGT_IDX_DPIO_CMN_D, 2273 + }, 2271 2274 }, 2272 2275 }; 2273 2276 ··· 2284 2275 bool ret; 2285 2276 2286 2277 power_well = lookup_power_well(dev_priv, power_well_id); 2287 - ret = power_well->ops->is_enabled(dev_priv, power_well); 2278 + ret = power_well->desc->ops->is_enabled(dev_priv, power_well); 2288 2279 2289 2280 return ret; 2290 2281 } 2291 2282 2292 - static struct i915_power_well skl_power_wells[] = { 2283 + static const struct i915_power_well_desc skl_power_wells[] = { 2293 2284 { 2294 2285 .name = "always-on", 2295 2286 .always_on = 1, 2296 2287 .domains = POWER_DOMAIN_MASK, 2297 2288 .ops = &i9xx_always_on_power_well_ops, 2298 - .id = I915_DISP_PW_ALWAYS_ON, 2289 + .id = DISP_PW_ID_NONE, 2299 2290 }, 2300 2291 { 2301 2292 .name = "power well 1", ··· 2304 2295 .ops = &hsw_power_well_ops, 2305 2296 .id = SKL_DISP_PW_1, 2306 2297 { 2298 + .hsw.regs = &hsw_power_well_regs, 2299 + .hsw.idx = SKL_PW_CTL_IDX_PW_1, 2307 2300 .hsw.has_fuses = true, 2308 2301 }, 2309 2302 }, ··· 2315 2304 .domains = 0, 2316 2305 .ops = &hsw_power_well_ops, 2317 2306 .id = SKL_DISP_PW_MISC_IO, 2307 + { 2308 + .hsw.regs = &hsw_power_well_regs, 2309 + .hsw.idx = SKL_PW_CTL_IDX_MISC_IO, 2310 + }, 2318 2311 }, 2319 2312 { 2320 2313 .name = "DC off", 
2321 2314 .domains = SKL_DISPLAY_DC_OFF_POWER_DOMAINS, 2322 2315 .ops = &gen9_dc_off_power_well_ops, 2323 - .id = SKL_DISP_PW_DC_OFF, 2316 + .id = DISP_PW_ID_NONE, 2324 2317 }, 2325 2318 { 2326 2319 .name = "power well 2", ··· 2332 2317 .ops = &hsw_power_well_ops, 2333 2318 .id = SKL_DISP_PW_2, 2334 2319 { 2320 + .hsw.regs = &hsw_power_well_regs, 2321 + .hsw.idx = SKL_PW_CTL_IDX_PW_2, 2335 2322 .hsw.irq_pipe_mask = BIT(PIPE_B) | BIT(PIPE_C), 2336 2323 .hsw.has_vga = true, 2337 2324 .hsw.has_fuses = true, ··· 2343 2326 .name = "DDI A/E IO power well", 2344 2327 .domains = SKL_DISPLAY_DDI_IO_A_E_POWER_DOMAINS, 2345 2328 .ops = &hsw_power_well_ops, 2346 - .id = SKL_DISP_PW_DDI_A_E, 2329 + .id = DISP_PW_ID_NONE, 2330 + { 2331 + .hsw.regs = &hsw_power_well_regs, 2332 + .hsw.idx = SKL_PW_CTL_IDX_DDI_A_E, 2333 + }, 2347 2334 }, 2348 2335 { 2349 2336 .name = "DDI B IO power well", 2350 2337 .domains = SKL_DISPLAY_DDI_IO_B_POWER_DOMAINS, 2351 2338 .ops = &hsw_power_well_ops, 2352 - .id = SKL_DISP_PW_DDI_B, 2339 + .id = DISP_PW_ID_NONE, 2340 + { 2341 + .hsw.regs = &hsw_power_well_regs, 2342 + .hsw.idx = SKL_PW_CTL_IDX_DDI_B, 2343 + }, 2353 2344 }, 2354 2345 { 2355 2346 .name = "DDI C IO power well", 2356 2347 .domains = SKL_DISPLAY_DDI_IO_C_POWER_DOMAINS, 2357 2348 .ops = &hsw_power_well_ops, 2358 - .id = SKL_DISP_PW_DDI_C, 2349 + .id = DISP_PW_ID_NONE, 2350 + { 2351 + .hsw.regs = &hsw_power_well_regs, 2352 + .hsw.idx = SKL_PW_CTL_IDX_DDI_C, 2353 + }, 2359 2354 }, 2360 2355 { 2361 2356 .name = "DDI D IO power well", 2362 2357 .domains = SKL_DISPLAY_DDI_IO_D_POWER_DOMAINS, 2363 2358 .ops = &hsw_power_well_ops, 2364 - .id = SKL_DISP_PW_DDI_D, 2359 + .id = DISP_PW_ID_NONE, 2360 + { 2361 + .hsw.regs = &hsw_power_well_regs, 2362 + .hsw.idx = SKL_PW_CTL_IDX_DDI_D, 2363 + }, 2365 2364 }, 2366 2365 }; 2367 2366 2368 - static struct i915_power_well bxt_power_wells[] = { 2367 + static const struct i915_power_well_desc bxt_power_wells[] = { 2369 2368 { 2370 2369 .name = "always-on", 2371 2370 .always_on = 1, 2372 2371 .domains = POWER_DOMAIN_MASK, 2373 2372 .ops = &i9xx_always_on_power_well_ops, 2374 - .id = I915_DISP_PW_ALWAYS_ON, 2373 + .id = DISP_PW_ID_NONE, 2375 2374 }, 2376 2375 { 2377 2376 .name = "power well 1", ··· 2395 2362 .ops = &hsw_power_well_ops, 2396 2363 .id = SKL_DISP_PW_1, 2397 2364 { 2365 + .hsw.regs = &hsw_power_well_regs, 2366 + .hsw.idx = SKL_PW_CTL_IDX_PW_1, 2398 2367 .hsw.has_fuses = true, 2399 2368 }, 2400 2369 }, ··· 2404 2369 .name = "DC off", 2405 2370 .domains = BXT_DISPLAY_DC_OFF_POWER_DOMAINS, 2406 2371 .ops = &gen9_dc_off_power_well_ops, 2407 - .id = SKL_DISP_PW_DC_OFF, 2372 + .id = DISP_PW_ID_NONE, 2408 2373 }, 2409 2374 { 2410 2375 .name = "power well 2", ··· 2412 2377 .ops = &hsw_power_well_ops, 2413 2378 .id = SKL_DISP_PW_2, 2414 2379 { 2380 + .hsw.regs = &hsw_power_well_regs, 2381 + .hsw.idx = SKL_PW_CTL_IDX_PW_2, 2415 2382 .hsw.irq_pipe_mask = BIT(PIPE_B) | BIT(PIPE_C), 2416 2383 .hsw.has_vga = true, 2417 2384 .hsw.has_fuses = true, ··· 2423 2386 .name = "dpio-common-a", 2424 2387 .domains = BXT_DPIO_CMN_A_POWER_DOMAINS, 2425 2388 .ops = &bxt_dpio_cmn_power_well_ops, 2426 - .id = BXT_DPIO_CMN_A, 2389 + .id = BXT_DISP_PW_DPIO_CMN_A, 2427 2390 { 2428 2391 .bxt.phy = DPIO_PHY1, 2429 2392 }, ··· 2432 2395 .name = "dpio-common-bc", 2433 2396 .domains = BXT_DPIO_CMN_BC_POWER_DOMAINS, 2434 2397 .ops = &bxt_dpio_cmn_power_well_ops, 2435 - .id = BXT_DPIO_CMN_BC, 2398 + .id = VLV_DISP_PW_DPIO_CMN_BC, 2436 2399 { 2437 2400 .bxt.phy = DPIO_PHY0, 2438 2401 }, 2439 2402 }, 2440 2403 }; 
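The skl/bxt tables above show the heart of this series: the per-platform arrays become const struct i915_power_well_desc tables (shareable, read-only), the mutable per-device state (refcount, hw_enabled) moves into a separately allocated struct i915_power_well that points at its descriptor, and wells that nothing needs to find by ID carry DISP_PW_ID_NONE so the uniqueness check only covers real IDs. A minimal userspace sketch of the same split — every identifier below is invented for illustration, none of it is i915 code:

#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>

enum well_id { WELL_ID_NONE = -1, WELL_ID_PW_1 = 0, WELL_ID_PW_2 = 1 };

struct well_desc {			/* const: shared, ROM-able */
	const char *name;
	enum well_id id;		/* WELL_ID_NONE unless looked up by ID */
};

struct well_state {			/* mutable: allocated per device */
	const struct well_desc *desc;
	int count;			/* reference count */
};

static const struct well_desc descs[] = {
	{ .name = "always-on",    .id = WELL_ID_NONE },
	{ .name = "power well 1", .id = WELL_ID_PW_1 },
	{ .name = "power well 2", .id = WELL_ID_PW_2 },
};
#define N_WELLS (sizeof(descs) / sizeof(descs[0]))

/* Same policy as lookup_power_well(): warn and fall back to well 0. */
static struct well_state *lookup_well(struct well_state *wells, enum well_id id)
{
	for (size_t i = 0; i < N_WELLS; i++)
		if (wells[i].desc->id == id)
			return &wells[i];

	fprintf(stderr, "WARN: power well %d not defined\n", (int)id);
	return &wells[0];
}

int main(void)
{
	/* Mirrors __set_power_wells(): allocate state, bind descriptors. */
	struct well_state *wells = calloc(N_WELLS, sizeof(*wells));
	size_t i;

	if (!wells)
		return 1;
	for (i = 0; i < N_WELLS; i++)
		wells[i].desc = &descs[i];

	lookup_well(wells, WELL_ID_PW_2)->count++;
	printf("%s refcount %d\n", wells[2].desc->name, wells[2].count);

	free(wells);		/* intel_power_domains_cleanup() analogue */
	return 0;
}

Keeping the descriptors const lets them live in .rodata and be shared across devices, while the separately allocated state array is what the CNL branch can legally trim at runtime (power_well_count -= 2).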
2441 2404 2442 - static struct i915_power_well glk_power_wells[] = { 2405 + static const struct i915_power_well_desc glk_power_wells[] = { 2443 2406 { 2444 2407 .name = "always-on", 2445 2408 .always_on = 1, 2446 2409 .domains = POWER_DOMAIN_MASK, 2447 2410 .ops = &i9xx_always_on_power_well_ops, 2448 - .id = I915_DISP_PW_ALWAYS_ON, 2411 + .id = DISP_PW_ID_NONE, 2449 2412 }, 2450 2413 { 2451 2414 .name = "power well 1", ··· 2454 2417 .ops = &hsw_power_well_ops, 2455 2418 .id = SKL_DISP_PW_1, 2456 2419 { 2420 + .hsw.regs = &hsw_power_well_regs, 2421 + .hsw.idx = SKL_PW_CTL_IDX_PW_1, 2457 2422 .hsw.has_fuses = true, 2458 2423 }, 2459 2424 }, ··· 2463 2424 .name = "DC off", 2464 2425 .domains = GLK_DISPLAY_DC_OFF_POWER_DOMAINS, 2465 2426 .ops = &gen9_dc_off_power_well_ops, 2466 - .id = SKL_DISP_PW_DC_OFF, 2427 + .id = DISP_PW_ID_NONE, 2467 2428 }, 2468 2429 { 2469 2430 .name = "power well 2", ··· 2471 2432 .ops = &hsw_power_well_ops, 2472 2433 .id = SKL_DISP_PW_2, 2473 2434 { 2435 + .hsw.regs = &hsw_power_well_regs, 2436 + .hsw.idx = SKL_PW_CTL_IDX_PW_2, 2474 2437 .hsw.irq_pipe_mask = BIT(PIPE_B) | BIT(PIPE_C), 2475 2438 .hsw.has_vga = true, 2476 2439 .hsw.has_fuses = true, ··· 2482 2441 .name = "dpio-common-a", 2483 2442 .domains = GLK_DPIO_CMN_A_POWER_DOMAINS, 2484 2443 .ops = &bxt_dpio_cmn_power_well_ops, 2485 - .id = BXT_DPIO_CMN_A, 2444 + .id = BXT_DISP_PW_DPIO_CMN_A, 2486 2445 { 2487 2446 .bxt.phy = DPIO_PHY1, 2488 2447 }, ··· 2491 2450 .name = "dpio-common-b", 2492 2451 .domains = GLK_DPIO_CMN_B_POWER_DOMAINS, 2493 2452 .ops = &bxt_dpio_cmn_power_well_ops, 2494 - .id = BXT_DPIO_CMN_BC, 2453 + .id = VLV_DISP_PW_DPIO_CMN_BC, 2495 2454 { 2496 2455 .bxt.phy = DPIO_PHY0, 2497 2456 }, ··· 2500 2459 .name = "dpio-common-c", 2501 2460 .domains = GLK_DPIO_CMN_C_POWER_DOMAINS, 2502 2461 .ops = &bxt_dpio_cmn_power_well_ops, 2503 - .id = GLK_DPIO_CMN_C, 2462 + .id = GLK_DISP_PW_DPIO_CMN_C, 2504 2463 { 2505 2464 .bxt.phy = DPIO_PHY2, 2506 2465 }, ··· 2509 2468 .name = "AUX A", 2510 2469 .domains = GLK_DISPLAY_AUX_A_POWER_DOMAINS, 2511 2470 .ops = &hsw_power_well_ops, 2512 - .id = GLK_DISP_PW_AUX_A, 2471 + .id = DISP_PW_ID_NONE, 2472 + { 2473 + .hsw.regs = &hsw_power_well_regs, 2474 + .hsw.idx = GLK_PW_CTL_IDX_AUX_A, 2475 + }, 2513 2476 }, 2514 2477 { 2515 2478 .name = "AUX B", 2516 2479 .domains = GLK_DISPLAY_AUX_B_POWER_DOMAINS, 2517 2480 .ops = &hsw_power_well_ops, 2518 - .id = GLK_DISP_PW_AUX_B, 2481 + .id = DISP_PW_ID_NONE, 2482 + { 2483 + .hsw.regs = &hsw_power_well_regs, 2484 + .hsw.idx = GLK_PW_CTL_IDX_AUX_B, 2485 + }, 2519 2486 }, 2520 2487 { 2521 2488 .name = "AUX C", 2522 2489 .domains = GLK_DISPLAY_AUX_C_POWER_DOMAINS, 2523 2490 .ops = &hsw_power_well_ops, 2524 - .id = GLK_DISP_PW_AUX_C, 2491 + .id = DISP_PW_ID_NONE, 2492 + { 2493 + .hsw.regs = &hsw_power_well_regs, 2494 + .hsw.idx = GLK_PW_CTL_IDX_AUX_C, 2495 + }, 2525 2496 }, 2526 2497 { 2527 2498 .name = "DDI A IO power well", 2528 2499 .domains = GLK_DISPLAY_DDI_IO_A_POWER_DOMAINS, 2529 2500 .ops = &hsw_power_well_ops, 2530 - .id = GLK_DISP_PW_DDI_A, 2501 + .id = DISP_PW_ID_NONE, 2502 + { 2503 + .hsw.regs = &hsw_power_well_regs, 2504 + .hsw.idx = GLK_PW_CTL_IDX_DDI_A, 2505 + }, 2531 2506 }, 2532 2507 { 2533 2508 .name = "DDI B IO power well", 2534 2509 .domains = GLK_DISPLAY_DDI_IO_B_POWER_DOMAINS, 2535 2510 .ops = &hsw_power_well_ops, 2536 - .id = SKL_DISP_PW_DDI_B, 2511 + .id = DISP_PW_ID_NONE, 2512 + { 2513 + .hsw.regs = &hsw_power_well_regs, 2514 + .hsw.idx = SKL_PW_CTL_IDX_DDI_B, 2515 + }, 2537 2516 }, 2538 2517 { 2539 2518 .name 
= "DDI C IO power well", 2540 2519 .domains = GLK_DISPLAY_DDI_IO_C_POWER_DOMAINS, 2541 2520 .ops = &hsw_power_well_ops, 2542 - .id = SKL_DISP_PW_DDI_C, 2521 + .id = DISP_PW_ID_NONE, 2522 + { 2523 + .hsw.regs = &hsw_power_well_regs, 2524 + .hsw.idx = SKL_PW_CTL_IDX_DDI_C, 2525 + }, 2543 2526 }, 2544 2527 }; 2545 2528 2546 - static struct i915_power_well cnl_power_wells[] = { 2529 + static const struct i915_power_well_desc cnl_power_wells[] = { 2547 2530 { 2548 2531 .name = "always-on", 2549 2532 .always_on = 1, 2550 2533 .domains = POWER_DOMAIN_MASK, 2551 2534 .ops = &i9xx_always_on_power_well_ops, 2552 - .id = I915_DISP_PW_ALWAYS_ON, 2535 + .id = DISP_PW_ID_NONE, 2553 2536 }, 2554 2537 { 2555 2538 .name = "power well 1", ··· 2582 2517 .ops = &hsw_power_well_ops, 2583 2518 .id = SKL_DISP_PW_1, 2584 2519 { 2520 + .hsw.regs = &hsw_power_well_regs, 2521 + .hsw.idx = SKL_PW_CTL_IDX_PW_1, 2585 2522 .hsw.has_fuses = true, 2586 2523 }, 2587 2524 }, ··· 2591 2524 .name = "AUX A", 2592 2525 .domains = CNL_DISPLAY_AUX_A_POWER_DOMAINS, 2593 2526 .ops = &hsw_power_well_ops, 2594 - .id = CNL_DISP_PW_AUX_A, 2527 + .id = DISP_PW_ID_NONE, 2528 + { 2529 + .hsw.regs = &hsw_power_well_regs, 2530 + .hsw.idx = GLK_PW_CTL_IDX_AUX_A, 2531 + }, 2595 2532 }, 2596 2533 { 2597 2534 .name = "AUX B", 2598 2535 .domains = CNL_DISPLAY_AUX_B_POWER_DOMAINS, 2599 2536 .ops = &hsw_power_well_ops, 2600 - .id = CNL_DISP_PW_AUX_B, 2537 + .id = DISP_PW_ID_NONE, 2538 + { 2539 + .hsw.regs = &hsw_power_well_regs, 2540 + .hsw.idx = GLK_PW_CTL_IDX_AUX_B, 2541 + }, 2601 2542 }, 2602 2543 { 2603 2544 .name = "AUX C", 2604 2545 .domains = CNL_DISPLAY_AUX_C_POWER_DOMAINS, 2605 2546 .ops = &hsw_power_well_ops, 2606 - .id = CNL_DISP_PW_AUX_C, 2547 + .id = DISP_PW_ID_NONE, 2548 + { 2549 + .hsw.regs = &hsw_power_well_regs, 2550 + .hsw.idx = GLK_PW_CTL_IDX_AUX_C, 2551 + }, 2607 2552 }, 2608 2553 { 2609 2554 .name = "AUX D", 2610 2555 .domains = CNL_DISPLAY_AUX_D_POWER_DOMAINS, 2611 2556 .ops = &hsw_power_well_ops, 2612 - .id = CNL_DISP_PW_AUX_D, 2557 + .id = DISP_PW_ID_NONE, 2558 + { 2559 + .hsw.regs = &hsw_power_well_regs, 2560 + .hsw.idx = CNL_PW_CTL_IDX_AUX_D, 2561 + }, 2613 2562 }, 2614 2563 { 2615 2564 .name = "DC off", 2616 2565 .domains = CNL_DISPLAY_DC_OFF_POWER_DOMAINS, 2617 2566 .ops = &gen9_dc_off_power_well_ops, 2618 - .id = SKL_DISP_PW_DC_OFF, 2567 + .id = DISP_PW_ID_NONE, 2619 2568 }, 2620 2569 { 2621 2570 .name = "power well 2", ··· 2639 2556 .ops = &hsw_power_well_ops, 2640 2557 .id = SKL_DISP_PW_2, 2641 2558 { 2559 + .hsw.regs = &hsw_power_well_regs, 2560 + .hsw.idx = SKL_PW_CTL_IDX_PW_2, 2642 2561 .hsw.irq_pipe_mask = BIT(PIPE_B) | BIT(PIPE_C), 2643 2562 .hsw.has_vga = true, 2644 2563 .hsw.has_fuses = true, ··· 2650 2565 .name = "DDI A IO power well", 2651 2566 .domains = CNL_DISPLAY_DDI_A_IO_POWER_DOMAINS, 2652 2567 .ops = &hsw_power_well_ops, 2653 - .id = CNL_DISP_PW_DDI_A, 2568 + .id = DISP_PW_ID_NONE, 2569 + { 2570 + .hsw.regs = &hsw_power_well_regs, 2571 + .hsw.idx = GLK_PW_CTL_IDX_DDI_A, 2572 + }, 2654 2573 }, 2655 2574 { 2656 2575 .name = "DDI B IO power well", 2657 2576 .domains = CNL_DISPLAY_DDI_B_IO_POWER_DOMAINS, 2658 2577 .ops = &hsw_power_well_ops, 2659 - .id = SKL_DISP_PW_DDI_B, 2578 + .id = DISP_PW_ID_NONE, 2579 + { 2580 + .hsw.regs = &hsw_power_well_regs, 2581 + .hsw.idx = SKL_PW_CTL_IDX_DDI_B, 2582 + }, 2660 2583 }, 2661 2584 { 2662 2585 .name = "DDI C IO power well", 2663 2586 .domains = CNL_DISPLAY_DDI_C_IO_POWER_DOMAINS, 2664 2587 .ops = &hsw_power_well_ops, 2665 - .id = SKL_DISP_PW_DDI_C, 2588 + .id = 
DISP_PW_ID_NONE, 2589 + { 2590 + .hsw.regs = &hsw_power_well_regs, 2591 + .hsw.idx = SKL_PW_CTL_IDX_DDI_C, 2592 + }, 2666 2593 }, 2667 2594 { 2668 2595 .name = "DDI D IO power well", 2669 2596 .domains = CNL_DISPLAY_DDI_D_IO_POWER_DOMAINS, 2670 2597 .ops = &hsw_power_well_ops, 2671 - .id = SKL_DISP_PW_DDI_D, 2598 + .id = DISP_PW_ID_NONE, 2599 + { 2600 + .hsw.regs = &hsw_power_well_regs, 2601 + .hsw.idx = SKL_PW_CTL_IDX_DDI_D, 2602 + }, 2672 2603 }, 2673 2604 { 2674 2605 .name = "DDI F IO power well", 2675 2606 .domains = CNL_DISPLAY_DDI_F_IO_POWER_DOMAINS, 2676 2607 .ops = &hsw_power_well_ops, 2677 - .id = CNL_DISP_PW_DDI_F, 2608 + .id = DISP_PW_ID_NONE, 2609 + { 2610 + .hsw.regs = &hsw_power_well_regs, 2611 + .hsw.idx = CNL_PW_CTL_IDX_DDI_F, 2612 + }, 2678 2613 }, 2679 2614 { 2680 2615 .name = "AUX F", 2681 2616 .domains = CNL_DISPLAY_AUX_F_POWER_DOMAINS, 2682 2617 .ops = &hsw_power_well_ops, 2683 - .id = CNL_DISP_PW_AUX_F, 2618 + .id = DISP_PW_ID_NONE, 2619 + { 2620 + .hsw.regs = &hsw_power_well_regs, 2621 + .hsw.idx = CNL_PW_CTL_IDX_AUX_F, 2622 + }, 2684 2623 }, 2685 2624 }; 2686 2625 ··· 2715 2606 .is_enabled = hsw_power_well_enabled, 2716 2607 }; 2717 2608 2718 - static struct i915_power_well icl_power_wells[] = { 2609 + static const struct i915_power_well_regs icl_aux_power_well_regs = { 2610 + .bios = ICL_PWR_WELL_CTL_AUX1, 2611 + .driver = ICL_PWR_WELL_CTL_AUX2, 2612 + .debug = ICL_PWR_WELL_CTL_AUX4, 2613 + }; 2614 + 2615 + static const struct i915_power_well_regs icl_ddi_power_well_regs = { 2616 + .bios = ICL_PWR_WELL_CTL_DDI1, 2617 + .driver = ICL_PWR_WELL_CTL_DDI2, 2618 + .debug = ICL_PWR_WELL_CTL_DDI4, 2619 + }; 2620 + 2621 + static const struct i915_power_well_desc icl_power_wells[] = { 2719 2622 { 2720 2623 .name = "always-on", 2721 2624 .always_on = 1, 2722 2625 .domains = POWER_DOMAIN_MASK, 2723 2626 .ops = &i9xx_always_on_power_well_ops, 2724 - .id = I915_DISP_PW_ALWAYS_ON, 2627 + .id = DISP_PW_ID_NONE, 2725 2628 }, 2726 2629 { 2727 2630 .name = "power well 1", 2728 2631 /* Handled by the DMC firmware */ 2729 2632 .domains = 0, 2730 2633 .ops = &hsw_power_well_ops, 2731 - .id = ICL_DISP_PW_1, 2732 - .hsw.has_fuses = true, 2634 + .id = SKL_DISP_PW_1, 2635 + { 2636 + .hsw.regs = &hsw_power_well_regs, 2637 + .hsw.idx = ICL_PW_CTL_IDX_PW_1, 2638 + .hsw.has_fuses = true, 2639 + }, 2733 2640 }, 2734 2641 { 2735 2642 .name = "power well 2", 2736 2643 .domains = ICL_PW_2_POWER_DOMAINS, 2737 2644 .ops = &hsw_power_well_ops, 2738 - .id = ICL_DISP_PW_2, 2739 - .hsw.has_fuses = true, 2645 + .id = SKL_DISP_PW_2, 2646 + { 2647 + .hsw.regs = &hsw_power_well_regs, 2648 + .hsw.idx = ICL_PW_CTL_IDX_PW_2, 2649 + .hsw.has_fuses = true, 2650 + }, 2740 2651 }, 2741 2652 { 2742 2653 .name = "DC off", 2743 2654 .domains = ICL_DISPLAY_DC_OFF_POWER_DOMAINS, 2744 2655 .ops = &gen9_dc_off_power_well_ops, 2745 - .id = SKL_DISP_PW_DC_OFF, 2656 + .id = DISP_PW_ID_NONE, 2746 2657 }, 2747 2658 { 2748 2659 .name = "power well 3", 2749 2660 .domains = ICL_PW_3_POWER_DOMAINS, 2750 2661 .ops = &hsw_power_well_ops, 2751 - .id = ICL_DISP_PW_3, 2752 - .hsw.irq_pipe_mask = BIT(PIPE_B), 2753 - .hsw.has_vga = true, 2754 - .hsw.has_fuses = true, 2662 + .id = DISP_PW_ID_NONE, 2663 + { 2664 + .hsw.regs = &hsw_power_well_regs, 2665 + .hsw.idx = ICL_PW_CTL_IDX_PW_3, 2666 + .hsw.irq_pipe_mask = BIT(PIPE_B), 2667 + .hsw.has_vga = true, 2668 + .hsw.has_fuses = true, 2669 + }, 2755 2670 }, 2756 2671 { 2757 2672 .name = "DDI A IO", 2758 2673 .domains = ICL_DDI_IO_A_POWER_DOMAINS, 2759 2674 .ops = &hsw_power_well_ops, 2760 - 
.id = ICL_DISP_PW_DDI_A, 2675 + .id = DISP_PW_ID_NONE, 2676 + { 2677 + .hsw.regs = &icl_ddi_power_well_regs, 2678 + .hsw.idx = ICL_PW_CTL_IDX_DDI_A, 2679 + }, 2761 2680 }, 2762 2681 { 2763 2682 .name = "DDI B IO", 2764 2683 .domains = ICL_DDI_IO_B_POWER_DOMAINS, 2765 2684 .ops = &hsw_power_well_ops, 2766 - .id = ICL_DISP_PW_DDI_B, 2685 + .id = DISP_PW_ID_NONE, 2686 + { 2687 + .hsw.regs = &icl_ddi_power_well_regs, 2688 + .hsw.idx = ICL_PW_CTL_IDX_DDI_B, 2689 + }, 2767 2690 }, 2768 2691 { 2769 2692 .name = "DDI C IO", 2770 2693 .domains = ICL_DDI_IO_C_POWER_DOMAINS, 2771 2694 .ops = &hsw_power_well_ops, 2772 - .id = ICL_DISP_PW_DDI_C, 2695 + .id = DISP_PW_ID_NONE, 2696 + { 2697 + .hsw.regs = &icl_ddi_power_well_regs, 2698 + .hsw.idx = ICL_PW_CTL_IDX_DDI_C, 2699 + }, 2773 2700 }, 2774 2701 { 2775 2702 .name = "DDI D IO", 2776 2703 .domains = ICL_DDI_IO_D_POWER_DOMAINS, 2777 2704 .ops = &hsw_power_well_ops, 2778 - .id = ICL_DISP_PW_DDI_D, 2705 + .id = DISP_PW_ID_NONE, 2706 + { 2707 + .hsw.regs = &icl_ddi_power_well_regs, 2708 + .hsw.idx = ICL_PW_CTL_IDX_DDI_D, 2709 + }, 2779 2710 }, 2780 2711 { 2781 2712 .name = "DDI E IO", 2782 2713 .domains = ICL_DDI_IO_E_POWER_DOMAINS, 2783 2714 .ops = &hsw_power_well_ops, 2784 - .id = ICL_DISP_PW_DDI_E, 2715 + .id = DISP_PW_ID_NONE, 2716 + { 2717 + .hsw.regs = &icl_ddi_power_well_regs, 2718 + .hsw.idx = ICL_PW_CTL_IDX_DDI_E, 2719 + }, 2785 2720 }, 2786 2721 { 2787 2722 .name = "DDI F IO", 2788 2723 .domains = ICL_DDI_IO_F_POWER_DOMAINS, 2789 2724 .ops = &hsw_power_well_ops, 2790 - .id = ICL_DISP_PW_DDI_F, 2725 + .id = DISP_PW_ID_NONE, 2726 + { 2727 + .hsw.regs = &icl_ddi_power_well_regs, 2728 + .hsw.idx = ICL_PW_CTL_IDX_DDI_F, 2729 + }, 2791 2730 }, 2792 2731 { 2793 2732 .name = "AUX A", 2794 2733 .domains = ICL_AUX_A_IO_POWER_DOMAINS, 2795 2734 .ops = &icl_combo_phy_aux_power_well_ops, 2796 - .id = ICL_DISP_PW_AUX_A, 2735 + .id = DISP_PW_ID_NONE, 2736 + { 2737 + .hsw.regs = &icl_aux_power_well_regs, 2738 + .hsw.idx = ICL_PW_CTL_IDX_AUX_A, 2739 + }, 2797 2740 }, 2798 2741 { 2799 2742 .name = "AUX B", 2800 2743 .domains = ICL_AUX_B_IO_POWER_DOMAINS, 2801 2744 .ops = &icl_combo_phy_aux_power_well_ops, 2802 - .id = ICL_DISP_PW_AUX_B, 2745 + .id = DISP_PW_ID_NONE, 2746 + { 2747 + .hsw.regs = &icl_aux_power_well_regs, 2748 + .hsw.idx = ICL_PW_CTL_IDX_AUX_B, 2749 + }, 2803 2750 }, 2804 2751 { 2805 2752 .name = "AUX C", 2806 2753 .domains = ICL_AUX_C_IO_POWER_DOMAINS, 2807 2754 .ops = &hsw_power_well_ops, 2808 - .id = ICL_DISP_PW_AUX_C, 2755 + .id = DISP_PW_ID_NONE, 2756 + { 2757 + .hsw.regs = &icl_aux_power_well_regs, 2758 + .hsw.idx = ICL_PW_CTL_IDX_AUX_C, 2759 + }, 2809 2760 }, 2810 2761 { 2811 2762 .name = "AUX D", 2812 2763 .domains = ICL_AUX_D_IO_POWER_DOMAINS, 2813 2764 .ops = &hsw_power_well_ops, 2814 - .id = ICL_DISP_PW_AUX_D, 2765 + .id = DISP_PW_ID_NONE, 2766 + { 2767 + .hsw.regs = &icl_aux_power_well_regs, 2768 + .hsw.idx = ICL_PW_CTL_IDX_AUX_D, 2769 + }, 2815 2770 }, 2816 2771 { 2817 2772 .name = "AUX E", 2818 2773 .domains = ICL_AUX_E_IO_POWER_DOMAINS, 2819 2774 .ops = &hsw_power_well_ops, 2820 - .id = ICL_DISP_PW_AUX_E, 2775 + .id = DISP_PW_ID_NONE, 2776 + { 2777 + .hsw.regs = &icl_aux_power_well_regs, 2778 + .hsw.idx = ICL_PW_CTL_IDX_AUX_E, 2779 + }, 2821 2780 }, 2822 2781 { 2823 2782 .name = "AUX F", 2824 2783 .domains = ICL_AUX_F_IO_POWER_DOMAINS, 2825 2784 .ops = &hsw_power_well_ops, 2826 - .id = ICL_DISP_PW_AUX_F, 2785 + .id = DISP_PW_ID_NONE, 2786 + { 2787 + .hsw.regs = &icl_aux_power_well_regs, 2788 + .hsw.idx = ICL_PW_CTL_IDX_AUX_F, 2789 + 
}, 2827 2790 }, 2828 2791 { 2829 2792 .name = "AUX TBT1", 2830 2793 .domains = ICL_AUX_TBT1_IO_POWER_DOMAINS, 2831 2794 .ops = &hsw_power_well_ops, 2832 - .id = ICL_DISP_PW_AUX_TBT1, 2795 + .id = DISP_PW_ID_NONE, 2796 + { 2797 + .hsw.regs = &icl_aux_power_well_regs, 2798 + .hsw.idx = ICL_PW_CTL_IDX_AUX_TBT1, 2799 + }, 2833 2800 }, 2834 2801 { 2835 2802 .name = "AUX TBT2", 2836 2803 .domains = ICL_AUX_TBT2_IO_POWER_DOMAINS, 2837 2804 .ops = &hsw_power_well_ops, 2838 - .id = ICL_DISP_PW_AUX_TBT2, 2805 + .id = DISP_PW_ID_NONE, 2806 + { 2807 + .hsw.regs = &icl_aux_power_well_regs, 2808 + .hsw.idx = ICL_PW_CTL_IDX_AUX_TBT2, 2809 + }, 2839 2810 }, 2840 2811 { 2841 2812 .name = "AUX TBT3", 2842 2813 .domains = ICL_AUX_TBT3_IO_POWER_DOMAINS, 2843 2814 .ops = &hsw_power_well_ops, 2844 - .id = ICL_DISP_PW_AUX_TBT3, 2815 + .id = DISP_PW_ID_NONE, 2816 + { 2817 + .hsw.regs = &icl_aux_power_well_regs, 2818 + .hsw.idx = ICL_PW_CTL_IDX_AUX_TBT3, 2819 + }, 2845 2820 }, 2846 2821 { 2847 2822 .name = "AUX TBT4", 2848 2823 .domains = ICL_AUX_TBT4_IO_POWER_DOMAINS, 2849 2824 .ops = &hsw_power_well_ops, 2850 - .id = ICL_DISP_PW_AUX_TBT4, 2825 + .id = DISP_PW_ID_NONE, 2826 + { 2827 + .hsw.regs = &icl_aux_power_well_regs, 2828 + .hsw.idx = ICL_PW_CTL_IDX_AUX_TBT4, 2829 + }, 2851 2830 }, 2852 2831 { 2853 2832 .name = "power well 4", 2854 2833 .domains = ICL_PW_4_POWER_DOMAINS, 2855 2834 .ops = &hsw_power_well_ops, 2856 - .id = ICL_DISP_PW_4, 2857 - .hsw.has_fuses = true, 2858 - .hsw.irq_pipe_mask = BIT(PIPE_C), 2835 + .id = DISP_PW_ID_NONE, 2836 + { 2837 + .hsw.regs = &hsw_power_well_regs, 2838 + .hsw.idx = ICL_PW_CTL_IDX_PW_4, 2839 + .hsw.has_fuses = true, 2840 + .hsw.irq_pipe_mask = BIT(PIPE_C), 2841 + }, 2859 2842 }, 2860 2843 }; 2861 2844 ··· 3010 2809 return mask; 3011 2810 } 3012 2811 3013 - static void assert_power_well_ids_unique(struct drm_i915_private *dev_priv) 2812 + static int 2813 + __set_power_wells(struct i915_power_domains *power_domains, 2814 + const struct i915_power_well_desc *power_well_descs, 2815 + int power_well_count) 3014 2816 { 3015 - struct i915_power_domains *power_domains = &dev_priv->power_domains; 3016 - u64 power_well_ids; 2817 + u64 power_well_ids = 0; 3017 2818 int i; 3018 2819 3019 - power_well_ids = 0; 3020 - for (i = 0; i < power_domains->power_well_count; i++) { 3021 - enum i915_power_well_id id = power_domains->power_wells[i].id; 2820 + power_domains->power_well_count = power_well_count; 2821 + power_domains->power_wells = 2822 + kcalloc(power_well_count, 2823 + sizeof(*power_domains->power_wells), 2824 + GFP_KERNEL); 2825 + if (!power_domains->power_wells) 2826 + return -ENOMEM; 2827 + 2828 + for (i = 0; i < power_well_count; i++) { 2829 + enum i915_power_well_id id = power_well_descs[i].id; 2830 + 2831 + power_domains->power_wells[i].desc = &power_well_descs[i]; 2832 + 2833 + if (id == DISP_PW_ID_NONE) 2834 + continue; 3022 2835 3023 2836 WARN_ON(id >= sizeof(power_well_ids) * 8); 3024 2837 WARN_ON(power_well_ids & BIT_ULL(id)); 3025 2838 power_well_ids |= BIT_ULL(id); 3026 2839 } 2840 + 2841 + return 0; 3027 2842 } 3028 2843 3029 - #define set_power_wells(power_domains, __power_wells) ({ \ 3030 - (power_domains)->power_wells = (__power_wells); \ 3031 - (power_domains)->power_well_count = ARRAY_SIZE(__power_wells); \ 3032 - }) 2844 + #define set_power_wells(power_domains, __power_well_descs) \ 2845 + __set_power_wells(power_domains, __power_well_descs, \ 2846 + ARRAY_SIZE(__power_well_descs)) 3033 2847 3034 2848 /** 3035 2849 * intel_power_domains_init - initializes the 
power domain structures ··· 3056 2840 int intel_power_domains_init(struct drm_i915_private *dev_priv) 3057 2841 { 3058 2842 struct i915_power_domains *power_domains = &dev_priv->power_domains; 2843 + int err; 3059 2844 3060 2845 i915_modparams.disable_power_well = 3061 2846 sanitize_disable_power_well_option(dev_priv, ··· 3073 2856 * the disabling order is reversed. 3074 2857 */ 3075 2858 if (IS_ICELAKE(dev_priv)) { 3076 - set_power_wells(power_domains, icl_power_wells); 2859 + err = set_power_wells(power_domains, icl_power_wells); 3077 2860 } else if (IS_HASWELL(dev_priv)) { 3078 - set_power_wells(power_domains, hsw_power_wells); 2861 + err = set_power_wells(power_domains, hsw_power_wells); 3079 2862 } else if (IS_BROADWELL(dev_priv)) { 3080 - set_power_wells(power_domains, bdw_power_wells); 2863 + err = set_power_wells(power_domains, bdw_power_wells); 3081 2864 } else if (IS_GEN9_BC(dev_priv)) { 3082 - set_power_wells(power_domains, skl_power_wells); 2865 + err = set_power_wells(power_domains, skl_power_wells); 3083 2866 } else if (IS_CANNONLAKE(dev_priv)) { 3084 - set_power_wells(power_domains, cnl_power_wells); 2867 + err = set_power_wells(power_domains, cnl_power_wells); 3085 2868 3086 2869 /* 3087 2870 * DDI and Aux IO are getting enabled for all ports ··· 3093 2876 power_domains->power_well_count -= 2; 3094 2877 3095 2878 } else if (IS_BROXTON(dev_priv)) { 3096 - set_power_wells(power_domains, bxt_power_wells); 2879 + err = set_power_wells(power_domains, bxt_power_wells); 3097 2880 } else if (IS_GEMINILAKE(dev_priv)) { 3098 - set_power_wells(power_domains, glk_power_wells); 2881 + err = set_power_wells(power_domains, glk_power_wells); 3099 2882 } else if (IS_CHERRYVIEW(dev_priv)) { 3100 - set_power_wells(power_domains, chv_power_wells); 2883 + err = set_power_wells(power_domains, chv_power_wells); 3101 2884 } else if (IS_VALLEYVIEW(dev_priv)) { 3102 - set_power_wells(power_domains, vlv_power_wells); 2885 + err = set_power_wells(power_domains, vlv_power_wells); 3103 2886 } else if (IS_I830(dev_priv)) { 3104 - set_power_wells(power_domains, i830_power_wells); 2887 + err = set_power_wells(power_domains, i830_power_wells); 3105 2888 } else { 3106 - set_power_wells(power_domains, i9xx_always_on_power_well); 2889 + err = set_power_wells(power_domains, i9xx_always_on_power_well); 3107 2890 } 3108 2891 3109 - assert_power_well_ids_unique(dev_priv); 3110 - 3111 - return 0; 2892 + return err; 3112 2893 } 3113 2894 3114 2895 /** 3115 - * intel_power_domains_fini - finalizes the power domain structures 2896 + * intel_power_domains_cleanup - clean up power domains resources 3116 2897 * @dev_priv: i915 device instance 3117 2898 * 3118 - * Finalizes the power domain structures for @dev_priv depending upon the 3119 - * supported platform. This function also disables runtime pm and ensures that 3120 - * the device stays powered up so that the driver can be reloaded. 2899 + * Release any resources acquired by intel_power_domains_init() 3121 2900 */ 3122 - void intel_power_domains_fini(struct drm_i915_private *dev_priv) 2901 + void intel_power_domains_cleanup(struct drm_i915_private *dev_priv) 3123 2902 { 3124 - struct device *kdev = &dev_priv->drm.pdev->dev; 3125 - 3126 - /* 3127 - * The i915.ko module is still not prepared to be loaded when 3128 - * the power well is not enabled, so just enable it in case 3129 - * we're going to unload/reload. 
3130 - * The following also reacquires the RPM reference the core passed 3131 - * to the driver during loading, which is dropped in 3132 - * intel_runtime_pm_enable(). We have to hand back the control of the 3133 - * device to the core with this reference held. 3134 - */ 3135 - intel_display_set_init_power(dev_priv, true); 3136 - 3137 - /* Remove the refcount we took to keep power well support disabled. */ 3138 - if (!i915_modparams.disable_power_well) 3139 - intel_display_power_put(dev_priv, POWER_DOMAIN_INIT); 3140 - 3141 - /* 3142 - * Remove the refcount we took in intel_runtime_pm_enable() in case 3143 - * the platform doesn't support runtime PM. 3144 - */ 3145 - if (!HAS_RUNTIME_PM(dev_priv)) 3146 - pm_runtime_put(kdev); 2903 + kfree(dev_priv->power_domains.power_wells); 3147 2904 } 3148 2905 3149 2906 static void intel_power_domains_sync_hw(struct drm_i915_private *dev_priv) ··· 3127 2936 3128 2937 mutex_lock(&power_domains->lock); 3129 2938 for_each_power_well(dev_priv, power_well) { 3130 - power_well->ops->sync_hw(dev_priv, power_well); 3131 - power_well->hw_enabled = power_well->ops->is_enabled(dev_priv, 3132 - power_well); 2939 + power_well->desc->ops->sync_hw(dev_priv, power_well); 2940 + power_well->hw_enabled = 2941 + power_well->desc->ops->is_enabled(dev_priv, power_well); 3133 2942 } 3134 2943 mutex_unlock(&power_domains->lock); 3135 2944 } ··· 3551 3360 * The AUX IO power wells will be enabled on demand. 3552 3361 */ 3553 3362 mutex_lock(&power_domains->lock); 3554 - well = lookup_power_well(dev_priv, ICL_DISP_PW_1); 3363 + well = lookup_power_well(dev_priv, SKL_DISP_PW_1); 3555 3364 intel_power_well_enable(dev_priv, well); 3556 3365 mutex_unlock(&power_domains->lock); 3557 3366 ··· 3563 3372 3564 3373 /* 7. Setup MBUS. */ 3565 3374 icl_mbus_init(dev_priv); 3566 - 3567 - /* 8. CHICKEN_DCPR_1 */ 3568 - I915_WRITE(GEN8_CHICKEN_DCPR_1, I915_READ(GEN8_CHICKEN_DCPR_1) | 3569 - CNL_DDI_CLOCK_REG_ACCESS_ON); 3570 3375 } 3571 3376 3572 3377 static void icl_display_core_uninit(struct drm_i915_private *dev_priv) ··· 3588 3401 * disabled at this point. 3589 3402 */ 3590 3403 mutex_lock(&power_domains->lock); 3591 - well = lookup_power_well(dev_priv, ICL_DISP_PW_1); 3404 + well = lookup_power_well(dev_priv, SKL_DISP_PW_1); 3592 3405 intel_power_well_disable(dev_priv, well); 3593 3406 mutex_unlock(&power_domains->lock); 3594 3407 ··· 3603 3416 static void chv_phy_control_init(struct drm_i915_private *dev_priv) 3604 3417 { 3605 3418 struct i915_power_well *cmn_bc = 3606 - lookup_power_well(dev_priv, PUNIT_POWER_WELL_DPIO_CMN_BC); 3419 + lookup_power_well(dev_priv, VLV_DISP_PW_DPIO_CMN_BC); 3607 3420 struct i915_power_well *cmn_d = 3608 - lookup_power_well(dev_priv, PUNIT_POWER_WELL_DPIO_CMN_D); 3421 + lookup_power_well(dev_priv, CHV_DISP_PW_DPIO_CMN_D); 3609 3422 3610 3423 /* 3611 3424 * DISPLAY_PHY_CONTROL can get corrupted if read. As a ··· 3628 3441 * override and set the lane powerdown bits accding to the 3629 3442 * current lane status. 
3630 3443 */ 3631 - if (cmn_bc->ops->is_enabled(dev_priv, cmn_bc)) { 3444 + if (cmn_bc->desc->ops->is_enabled(dev_priv, cmn_bc)) { 3632 3445 uint32_t status = I915_READ(DPLL(PIPE_A)); 3633 3446 unsigned int mask; 3634 3447 ··· 3659 3472 dev_priv->chv_phy_assert[DPIO_PHY0] = true; 3660 3473 } 3661 3474 3662 - if (cmn_d->ops->is_enabled(dev_priv, cmn_d)) { 3475 + if (cmn_d->desc->ops->is_enabled(dev_priv, cmn_d)) { 3663 3476 uint32_t status = I915_READ(DPIO_PHY_STATUS); 3664 3477 unsigned int mask; 3665 3478 ··· 3690 3503 static void vlv_cmnlane_wa(struct drm_i915_private *dev_priv) 3691 3504 { 3692 3505 struct i915_power_well *cmn = 3693 - lookup_power_well(dev_priv, PUNIT_POWER_WELL_DPIO_CMN_BC); 3506 + lookup_power_well(dev_priv, VLV_DISP_PW_DPIO_CMN_BC); 3694 3507 struct i915_power_well *disp2d = 3695 - lookup_power_well(dev_priv, PUNIT_POWER_WELL_DISP2D); 3508 + lookup_power_well(dev_priv, VLV_DISP_PW_DISP2D); 3696 3509 3697 3510 /* If the display might be already active skip this */ 3698 - if (cmn->ops->is_enabled(dev_priv, cmn) && 3699 - disp2d->ops->is_enabled(dev_priv, disp2d) && 3511 + if (cmn->desc->ops->is_enabled(dev_priv, cmn) && 3512 + disp2d->desc->ops->is_enabled(dev_priv, disp2d) && 3700 3513 I915_READ(DPIO_CTL) & DPIO_CMNRST) 3701 3514 return; 3702 3515 3703 3516 DRM_DEBUG_KMS("toggling display PHY side reset\n"); 3704 3517 3705 3518 /* cmnlane needs DPLL registers */ 3706 - disp2d->ops->enable(dev_priv, disp2d); 3519 + disp2d->desc->ops->enable(dev_priv, disp2d); 3707 3520 3708 3521 /* 3709 3522 * From VLV2A0_DP_eDP_HDMI_DPIO_driver_vbios_notes_11.docx: ··· 3712 3525 * Simply ungating isn't enough to reset the PHY enough to get 3713 3526 * ports and lanes running. 3714 3527 */ 3715 - cmn->ops->disable(dev_priv, cmn); 3528 + cmn->desc->ops->disable(dev_priv, cmn); 3716 3529 } 3530 + 3531 + static void intel_power_domains_verify_state(struct drm_i915_private *dev_priv); 3717 3532 3718 3533 /** 3719 3534 * intel_power_domains_init_hw - initialize hardware power domain state ··· 3724 3535 * 3725 3536 * This function initializes the hardware power domain state and enables all 3726 3537 * power wells belonging to the INIT power domain. Power wells in other 3727 - * domains (and not in the INIT domain) are referenced or disabled during the 3728 - * modeset state HW readout. After that the reference count of each power well 3729 - * must match its HW enabled state, see intel_power_domains_verify_state(). 3538 + * domains (and not in the INIT domain) are referenced or disabled by 3539 + * intel_modeset_readout_hw_state(). After that the reference count of each 3540 + * power well must match its HW enabled state, see 3541 + * intel_power_domains_verify_state(). 3542 + * 3543 + * It will return with power domains disabled (to be enabled later by 3544 + * intel_power_domains_enable()) and must be paired with 3545 + * intel_power_domains_fini_hw(). 3730 3546 */ 3731 3547 void intel_power_domains_init_hw(struct drm_i915_private *dev_priv, bool resume) 3732 3548 { ··· 3757 3563 mutex_unlock(&power_domains->lock); 3758 3564 } 3759 3565 3760 - /* For now, we need the power well to be always enabled. */ 3761 - intel_display_set_init_power(dev_priv, true); 3566 + /* 3567 + * Keep all power wells enabled for any dependent HW access during 3568 + * initialization and to make sure we keep BIOS enabled display HW 3569 + * resources powered until display HW readout is complete. We drop 3570 + * this reference in intel_power_domains_enable(). 
3571 + */ 3572 + intel_display_power_get(dev_priv, POWER_DOMAIN_INIT); 3762 3573 /* Disable power support if the user asked so. */ 3763 3574 if (!i915_modparams.disable_power_well) 3764 3575 intel_display_power_get(dev_priv, POWER_DOMAIN_INIT); 3765 3576 intel_power_domains_sync_hw(dev_priv); 3577 + 3766 3578 power_domains->initializing = false; 3579 + } 3580 + 3581 + /** 3582 + * intel_power_domains_fini_hw - deinitialize hw power domain state 3583 + * @dev_priv: i915 device instance 3584 + * 3585 + * De-initializes the display power domain HW state. It also ensures that the 3586 + * device stays powered up so that the driver can be reloaded. 3587 + * 3588 + * It must be called with power domains already disabled (after a call to 3589 + * intel_power_domains_disable()) and must be paired with 3590 + * intel_power_domains_init_hw(). 3591 + */ 3592 + void intel_power_domains_fini_hw(struct drm_i915_private *dev_priv) 3593 + { 3594 + /* Keep the power well enabled, but cancel its rpm wakeref. */ 3595 + intel_runtime_pm_put(dev_priv); 3596 + 3597 + /* Remove the refcount we took to keep power well support disabled. */ 3598 + if (!i915_modparams.disable_power_well) 3599 + intel_display_power_put(dev_priv, POWER_DOMAIN_INIT); 3600 + 3601 + intel_power_domains_verify_state(dev_priv); 3602 + } 3603 + 3604 + /** 3605 + * intel_power_domains_enable - enable toggling of display power wells 3606 + * @dev_priv: i915 device instance 3607 + * 3608 + * Enable the ondemand enabling/disabling of the display power wells. Note that 3609 + * power wells not belonging to POWER_DOMAIN_INIT are allowed to be toggled 3610 + * only at specific points of the display modeset sequence, thus they are not 3611 + * affected by the intel_power_domains_enable()/disable() calls. The purpose 3612 + * of these function is to keep the rest of power wells enabled until the end 3613 + * of display HW readout (which will acquire the power references reflecting 3614 + * the current HW state). 3615 + */ 3616 + void intel_power_domains_enable(struct drm_i915_private *dev_priv) 3617 + { 3618 + intel_display_power_put(dev_priv, POWER_DOMAIN_INIT); 3619 + 3620 + intel_power_domains_verify_state(dev_priv); 3621 + } 3622 + 3623 + /** 3624 + * intel_power_domains_disable - disable toggling of display power wells 3625 + * @dev_priv: i915 device instance 3626 + * 3627 + * Disable the ondemand enabling/disabling of the display power wells. See 3628 + * intel_power_domains_enable() for which power wells this call controls. 3629 + */ 3630 + void intel_power_domains_disable(struct drm_i915_private *dev_priv) 3631 + { 3632 + intel_display_power_get(dev_priv, POWER_DOMAIN_INIT); 3633 + 3634 + intel_power_domains_verify_state(dev_priv); 3767 3635 } 3768 3636 3769 3637 /** 3770 3638 * intel_power_domains_suspend - suspend power domain state 3771 3639 * @dev_priv: i915 device instance 3640 + * @suspend_mode: specifies the target suspend state (idle, mem, hibernation) 3772 3641 * 3773 3642 * This function prepares the hardware power domain state before entering 3774 - * system suspend. It must be paired with intel_power_domains_init_hw(). 3643 + * system suspend. 3644 + * 3645 + * It must be called with power domains already disabled (after a call to 3646 + * intel_power_domains_disable()) and paired with intel_power_domains_resume(). 
3775 3647 */ 3776 - void intel_power_domains_suspend(struct drm_i915_private *dev_priv) 3648 + void intel_power_domains_suspend(struct drm_i915_private *dev_priv, 3649 + enum i915_drm_suspend_mode suspend_mode) 3777 3650 { 3651 + struct i915_power_domains *power_domains = &dev_priv->power_domains; 3652 + 3653 + intel_display_power_put(dev_priv, POWER_DOMAIN_INIT); 3654 + 3655 + /* 3656 + * In case of suspend-to-idle (aka S0ix) on a DMC platform without DC9 3657 + * support don't manually deinit the power domains. This also means the 3658 + * CSR/DMC firmware will stay active, it will power down any HW 3659 + * resources as required and also enable deeper system power states 3660 + * that would be blocked if the firmware was inactive. 3661 + */ 3662 + if (!(dev_priv->csr.allowed_dc_mask & DC_STATE_EN_DC9) && 3663 + suspend_mode == I915_DRM_SUSPEND_IDLE && 3664 + dev_priv->csr.dmc_payload != NULL) { 3665 + intel_power_domains_verify_state(dev_priv); 3666 + return; 3667 + } 3668 + 3778 3669 /* 3779 3670 * Even if power well support was disabled we still want to disable 3780 - * power wells while we are system suspended. 3671 + * power wells if power domains must be deinitialized for suspend. 3781 3672 */ 3782 - if (!i915_modparams.disable_power_well) 3673 + if (!i915_modparams.disable_power_well) { 3783 3674 intel_display_power_put(dev_priv, POWER_DOMAIN_INIT); 3675 + intel_power_domains_verify_state(dev_priv); 3676 + } 3784 3677 3785 3678 if (IS_ICELAKE(dev_priv)) 3786 3679 icl_display_core_uninit(dev_priv); ··· 3877 3596 skl_display_core_uninit(dev_priv); 3878 3597 else if (IS_GEN9_LP(dev_priv)) 3879 3598 bxt_display_core_uninit(dev_priv); 3599 + 3600 + power_domains->display_core_suspended = true; 3880 3601 } 3602 + 3603 + /** 3604 + * intel_power_domains_resume - resume power domain state 3605 + * @dev_priv: i915 device instance 3606 + * 3607 + * This function resume the hardware power domain state during system resume. 3608 + * 3609 + * It will return with power domain support disabled (to be enabled later by 3610 + * intel_power_domains_enable()) and must be paired with 3611 + * intel_power_domains_suspend(). 3612 + */ 3613 + void intel_power_domains_resume(struct drm_i915_private *dev_priv) 3614 + { 3615 + struct i915_power_domains *power_domains = &dev_priv->power_domains; 3616 + 3617 + if (power_domains->display_core_suspended) { 3618 + intel_power_domains_init_hw(dev_priv, true); 3619 + power_domains->display_core_suspended = false; 3620 + } else { 3621 + intel_display_power_get(dev_priv, POWER_DOMAIN_INIT); 3622 + } 3623 + 3624 + intel_power_domains_verify_state(dev_priv); 3625 + } 3626 + 3627 + #if IS_ENABLED(CONFIG_DRM_I915_DEBUG_RUNTIME_PM) 3881 3628 3882 3629 static void intel_power_domains_dump_info(struct drm_i915_private *dev_priv) 3883 3630 { ··· 3916 3607 enum intel_display_power_domain domain; 3917 3608 3918 3609 DRM_DEBUG_DRIVER("%-25s %d\n", 3919 - power_well->name, power_well->count); 3610 + power_well->desc->name, power_well->count); 3920 3611 3921 - for_each_power_domain(domain, power_well->domains) 3612 + for_each_power_domain(domain, power_well->desc->domains) 3922 3613 DRM_DEBUG_DRIVER(" %-23s %d\n", 3923 3614 intel_display_power_domain_str(domain), 3924 3615 power_domains->domain_use_count[domain]); ··· 3935 3626 * acquiring reference counts for any power wells in use and disabling the 3936 3627 * ones left on by BIOS but not required by any active output. 
3937 3628 */ 3938 - void intel_power_domains_verify_state(struct drm_i915_private *dev_priv) 3629 + static void intel_power_domains_verify_state(struct drm_i915_private *dev_priv) 3939 3630 { 3940 3631 struct i915_power_domains *power_domains = &dev_priv->power_domains; 3941 3632 struct i915_power_well *power_well; ··· 3954 3645 * and PW1 power wells) are under FW control, so ignore them, 3955 3646 * since their state can change asynchronously. 3956 3647 */ 3957 - if (!power_well->domains) 3648 + if (!power_well->desc->domains) 3958 3649 continue; 3959 3650 3960 - enabled = power_well->ops->is_enabled(dev_priv, power_well); 3961 - if ((power_well->count || power_well->always_on) != enabled) 3651 + enabled = power_well->desc->ops->is_enabled(dev_priv, 3652 + power_well); 3653 + if ((power_well->count || power_well->desc->always_on) != 3654 + enabled) 3962 3655 DRM_ERROR("power well %s state mismatch (refcount %d/enabled %d)", 3963 - power_well->name, power_well->count, enabled); 3656 + power_well->desc->name, 3657 + power_well->count, enabled); 3964 3658 3965 3659 domains_count = 0; 3966 - for_each_power_domain(domain, power_well->domains) 3660 + for_each_power_domain(domain, power_well->desc->domains) 3967 3661 domains_count += power_domains->domain_use_count[domain]; 3968 3662 3969 3663 if (power_well->count != domains_count) { 3970 3664 DRM_ERROR("power well %s refcount/domain refcount mismatch " 3971 3665 "(refcount %d/domains refcount %d)\n", 3972 - power_well->name, power_well->count, 3666 + power_well->desc->name, power_well->count, 3973 3667 domains_count); 3974 3668 dump_domain_info = true; 3975 3669 } ··· 3989 3677 3990 3678 mutex_unlock(&power_domains->lock); 3991 3679 } 3680 + 3681 + #else 3682 + 3683 + static void intel_power_domains_verify_state(struct drm_i915_private *dev_priv) 3684 + { 3685 + } 3686 + 3687 + #endif 3992 3688 3993 3689 /** 3994 3690 * intel_runtime_pm_get - grab a runtime pm reference ··· 4111 3791 * This function enables runtime pm at the end of the driver load sequence. 4112 3792 * 4113 3793 * Note that this function does currently not enable runtime pm for the 4114 - * subordinate display power domains. That is only done on the first modeset 4115 - * using intel_display_set_init_power(). 3794 + * subordinate display power domains. That is done by 3795 + * intel_power_domains_enable(). 4116 3796 */ 4117 3797 void intel_runtime_pm_enable(struct drm_i915_private *dev_priv) 4118 3798 { 4119 3799 struct pci_dev *pdev = dev_priv->drm.pdev; 4120 3800 struct device *kdev = &pdev->dev; 3801 + 3802 + /* 3803 + * Disable the system suspend direct complete optimization, which can 3804 + * leave the device suspended skipping the driver's suspend handlers 3805 + * if the device was already runtime suspended. This is needed due to 3806 + * the difference in our runtime and system suspend sequence and 3807 + * becaue the HDA driver may require us to enable the audio power 3808 + * domain during system suspend. 3809 + */ 3810 + dev_pm_set_driver_flags(kdev, DPM_FLAG_NEVER_SKIP); 4121 3811 4122 3812 pm_runtime_set_autosuspend_delay(kdev, 10000); /* 10s */ 4123 3813 pm_runtime_mark_last_busy(kdev); ··· 4154 3824 * intel_power_domains_fini(). 
4155 3825 */ 4156 3826 pm_runtime_put_autosuspend(kdev); 3827 + } 3828 + 3829 + void intel_runtime_pm_disable(struct drm_i915_private *dev_priv) 3830 + { 3831 + struct pci_dev *pdev = dev_priv->drm.pdev; 3832 + struct device *kdev = &pdev->dev; 3833 + 3834 + /* Transfer rpm ownership back to core */ 3835 + WARN(pm_runtime_get_sync(&dev_priv->drm.pdev->dev) < 0, 3836 + "Failed to pass rpm ownership back to core\n"); 3837 + 3838 + pm_runtime_dont_use_autosuspend(kdev); 3839 + 3840 + if (!HAS_RUNTIME_PM(dev_priv)) 3841 + pm_runtime_put(kdev); 4157 3842 }
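The split between intel_power_domains_suspend() and intel_power_domains_resume() is easiest to see from the caller's side. Below is a hedged sketch of how a system sleep path might pair them; this is an illustration only, not the actual i915_drv.c call sites, and I915_DRM_SUSPEND_MEM is assumed to be the enum value for suspend-to-mem alongside the I915_DRM_SUSPEND_IDLE seen in the hunk above.

        /* Sketch only: pairing the new entry points across a system sleep. */
        static void sketch_suspend(struct drm_i915_private *i915, bool s2idle)
        {
                /*
                 * On a DMC platform without DC9, suspend-to-idle leaves the
                 * power domains initialized so the firmware keeps managing
                 * them; otherwise the display core is torn down here.
                 */
                intel_power_domains_suspend(i915,
                                            s2idle ? I915_DRM_SUSPEND_IDLE :
                                                     I915_DRM_SUSPEND_MEM);
        }

        static void sketch_resume(struct drm_i915_private *i915)
        {
                /* Re-inits the display core only if it was suspended above. */
                intel_power_domains_resume(i915);

                /* Pairs with the disable done ahead of the suspend. */
                intel_power_domains_enable(i915);
        }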
+7 -6
drivers/gpu/drm/i915/intel_sprite.c
··· 83 83 bool need_vlv_dsi_wa = (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) && 84 84 intel_crtc_has_type(new_crtc_state, INTEL_OUTPUT_DSI); 85 85 DEFINE_WAIT(wait); 86 + u32 psr_status; 86 87 87 88 vblank_start = adjusted_mode->crtc_vblank_start; 88 89 if (adjusted_mode->flags & DRM_MODE_FLAG_INTERLACE) ··· 105 104 * VBL interrupts will start the PSR exit and prevent a PSR 106 105 * re-entry as well. 107 106 */ 108 - if (intel_psr_wait_for_idle(new_crtc_state)) 109 - DRM_ERROR("PSR idle timed out, atomic update may fail\n"); 107 + if (intel_psr_wait_for_idle(new_crtc_state, &psr_status)) 108 + DRM_ERROR("PSR idle timed out 0x%x, atomic update may fail\n", 109 + psr_status); 110 110 111 111 local_irq_disable(); 112 112 ··· 959 957 } 960 958 961 959 static int 962 - intel_check_sprite_plane(struct intel_plane *plane, 963 - struct intel_crtc_state *crtc_state, 960 + intel_check_sprite_plane(struct intel_crtc_state *crtc_state, 964 961 struct intel_plane_state *state) 965 962 { 963 + struct intel_plane *plane = to_intel_plane(state->base.plane); 966 964 struct drm_i915_private *dev_priv = to_i915(plane->base.dev); 967 965 struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc); 968 966 struct drm_framebuffer *fb = state->base.fb; ··· 1409 1407 case DRM_FORMAT_XBGR8888: 1410 1408 case DRM_FORMAT_ARGB8888: 1411 1409 case DRM_FORMAT_ABGR8888: 1412 - if (modifier == I915_FORMAT_MOD_Yf_TILED_CCS || 1413 - modifier == I915_FORMAT_MOD_Y_TILED_CCS) 1410 + if (is_ccs_modifier(modifier)) 1414 1411 return true; 1415 1412 /* fall through */ 1416 1413 case DRM_FORMAT_RGB565:
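The is_ccs_modifier() call above folds the two open-coded CCS modifier comparisons into one helper. The helper itself is added elsewhere in the series (likely intel_drv.h; the exact location is an assumption), and its shape follows directly from the condition it replaces:

        /* Assumed shape of the helper, equivalent to the replaced check. */
        static inline bool is_ccs_modifier(u64 modifier)
        {
                return modifier == I915_FORMAT_MOD_Y_TILED_CCS ||
                       modifier == I915_FORMAT_MOD_Yf_TILED_CCS;
        }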
+1 -1
drivers/gpu/drm/i915/intel_uc_fw.c
··· 222 222 goto fail; 223 223 } 224 224 225 - ggtt_pin_bias = to_i915(uc_fw->obj->base.dev)->guc.ggtt_pin_bias; 225 + ggtt_pin_bias = to_i915(uc_fw->obj->base.dev)->ggtt.pin_bias; 226 226 vma = i915_gem_object_ggtt_pin(uc_fw->obj, NULL, 0, 0, 227 227 PIN_OFFSET_BIAS | ggtt_pin_bias); 228 228 if (IS_ERR(vma)) {
+92 -38
drivers/gpu/drm/i915/intel_uncore.c
··· 283 283 fw_domain_reset(i915, d); 284 284 } 285 285 286 + static inline u32 gt_thread_status(struct drm_i915_private *dev_priv) 287 + { 288 + u32 val; 289 + 290 + val = __raw_i915_read32(dev_priv, GEN6_GT_THREAD_STATUS_REG); 291 + val &= GEN6_GT_THREAD_STATUS_CORE_MASK; 292 + 293 + return val; 294 + } 295 + 286 296 static void __gen6_gt_wait_for_thread_c0(struct drm_i915_private *dev_priv) 287 297 { 288 - /* w/a for a sporadic read returning 0 by waiting for the GT 298 + /* 299 + * w/a for a sporadic read returning 0 by waiting for the GT 289 300 * thread to wake up. 290 301 */ 291 - if (wait_for_atomic_us((__raw_i915_read32(dev_priv, GEN6_GT_THREAD_STATUS_REG) & 292 - GEN6_GT_THREAD_STATUS_CORE_MASK) == 0, 500)) 293 - DRM_ERROR("GT thread status wait timed out\n"); 302 + WARN_ONCE(wait_for_atomic_us(gt_thread_status(dev_priv) == 0, 5000), 303 + "GT thread status wait timed out\n"); 294 304 } 295 305 296 306 static void fw_domains_get_with_thread_status(struct drm_i915_private *dev_priv, ··· 1739 1729 } 1740 1730 1741 1731 static void i915_stop_engines(struct drm_i915_private *dev_priv, 1742 - unsigned engine_mask) 1732 + unsigned int engine_mask) 1743 1733 { 1744 1734 struct intel_engine_cs *engine; 1745 1735 enum intel_engine_id id; ··· 1759 1749 return gdrst & GRDOM_RESET_STATUS; 1760 1750 } 1761 1751 1762 - static int i915_do_reset(struct drm_i915_private *dev_priv, unsigned engine_mask) 1752 + static int i915_do_reset(struct drm_i915_private *dev_priv, 1753 + unsigned int engine_mask, 1754 + unsigned int retry) 1763 1755 { 1764 1756 struct pci_dev *pdev = dev_priv->drm.pdev; 1765 1757 int err; ··· 1788 1776 return (gdrst & GRDOM_RESET_ENABLE) == 0; 1789 1777 } 1790 1778 1791 - static int g33_do_reset(struct drm_i915_private *dev_priv, unsigned engine_mask) 1779 + static int g33_do_reset(struct drm_i915_private *dev_priv, 1780 + unsigned int engine_mask, 1781 + unsigned int retry) 1792 1782 { 1793 1783 struct pci_dev *pdev = dev_priv->drm.pdev; 1794 1784 ··· 1798 1784 return wait_for(g4x_reset_complete(pdev), 500); 1799 1785 } 1800 1786 1801 - static int g4x_do_reset(struct drm_i915_private *dev_priv, unsigned engine_mask) 1787 + static int g4x_do_reset(struct drm_i915_private *dev_priv, 1788 + unsigned int engine_mask, 1789 + unsigned int retry) 1802 1790 { 1803 1791 struct pci_dev *pdev = dev_priv->drm.pdev; 1804 1792 int ret; ··· 1837 1821 } 1838 1822 1839 1823 static int ironlake_do_reset(struct drm_i915_private *dev_priv, 1840 - unsigned engine_mask) 1824 + unsigned int engine_mask, 1825 + unsigned int retry) 1841 1826 { 1842 1827 int ret; 1843 1828 ··· 1894 1877 * gen6_reset_engines - reset individual engines 1895 1878 * @dev_priv: i915 device 1896 1879 * @engine_mask: mask of intel_ring_flag() engines or ALL_ENGINES for full reset 1880 + * @retry: the count of previous attempts to reset. 1897 1881 * 1898 1882 * This function will reset the individual engines that are set in engine_mask. 1899 1883 * If you provide ALL_ENGINES as mask, full global domain reset will be issued. ··· 1905 1887 * Returns 0 on success, nonzero on error. 1906 1888 */ 1907 1889 static int gen6_reset_engines(struct drm_i915_private *dev_priv, 1908 - unsigned engine_mask) 1890 + unsigned int engine_mask, 1891 + unsigned int retry) 1909 1892 { 1910 1893 struct intel_engine_cs *engine; 1911 1894 const u32 hw_engine_mask[I915_NUM_ENGINES] = { ··· 1945 1926 * Returns 0 on success, nonzero on error.
1946 1927 */ 1947 1928 static int gen11_reset_engines(struct drm_i915_private *dev_priv, 1948 - unsigned engine_mask) 1929 + unsigned int engine_mask) 1949 1930 { 1950 1931 struct intel_engine_cs *engine; 1951 1932 const u32 hw_engine_mask[I915_NUM_ENGINES] = { ··· 2085 2066 return ret; 2086 2067 } 2087 2068 2088 - static int gen8_reset_engine_start(struct intel_engine_cs *engine) 2069 + static int gen8_engine_reset_prepare(struct intel_engine_cs *engine) 2089 2070 { 2090 2071 struct drm_i915_private *dev_priv = engine->i915; 2091 2072 int ret; ··· 2105 2086 return ret; 2106 2087 } 2107 2088 2108 - static void gen8_reset_engine_cancel(struct intel_engine_cs *engine) 2089 + static void gen8_engine_reset_cancel(struct intel_engine_cs *engine) 2109 2090 { 2110 2091 struct drm_i915_private *dev_priv = engine->i915; 2111 2092 ··· 2113 2094 _MASKED_BIT_DISABLE(RESET_CTL_REQUEST_RESET)); 2114 2095 } 2115 2096 2097 + static int reset_engines(struct drm_i915_private *i915, 2098 + unsigned int engine_mask, 2099 + unsigned int retry) 2100 + { 2101 + if (INTEL_GEN(i915) >= 11) 2102 + return gen11_reset_engines(i915, engine_mask); 2103 + else 2104 + return gen6_reset_engines(i915, engine_mask, retry); 2105 + } 2106 + 2116 2107 static int gen8_reset_engines(struct drm_i915_private *dev_priv, 2117 - unsigned engine_mask) 2108 + unsigned int engine_mask, 2109 + unsigned int retry) 2118 2110 { 2119 2111 struct intel_engine_cs *engine; 2112 + const bool reset_non_ready = retry >= 1; 2120 2113 unsigned int tmp; 2121 2114 int ret; 2122 2115 2123 2116 for_each_engine_masked(engine, dev_priv, engine_mask, tmp) { 2124 - if (gen8_reset_engine_start(engine)) { 2125 - ret = -EIO; 2126 - goto not_ready; 2127 - } 2117 + ret = gen8_engine_reset_prepare(engine); 2118 + if (ret && !reset_non_ready) 2119 + goto skip_reset; 2120 + 2121 + /* 2122 + * If this is not the first failed attempt to prepare, 2123 + * we decide to proceed anyway. 2124 + * 2125 + * By doing so we risk context corruption and, with 2126 + * some gens (kbl), a possible system hang if the reset 2127 + * happens during active bb execution. 2128 + * 2129 + * We would rather take context corruption than a 2130 + * failed reset with a wedged driver/GPU. The 2131 + * active bb execution case should be covered by the 2132 + * i915_stop_engines() call made before the reset.
2133 + */ 2128 2134 } 2129 2135 2130 - if (INTEL_GEN(dev_priv) >= 11) 2131 - ret = gen11_reset_engines(dev_priv, engine_mask); 2132 - else 2133 - ret = gen6_reset_engines(dev_priv, engine_mask); 2136 + ret = reset_engines(dev_priv, engine_mask, retry); 2134 2137 2135 - not_ready: 2138 + skip_reset: 2136 2139 for_each_engine_masked(engine, dev_priv, engine_mask, tmp) 2137 - gen8_reset_engine_cancel(engine); 2140 + gen8_engine_reset_cancel(engine); 2138 2141 2139 2142 return ret; 2140 2143 } 2141 2144 2142 - typedef int (*reset_func)(struct drm_i915_private *, unsigned engine_mask); 2145 + typedef int (*reset_func)(struct drm_i915_private *, 2146 + unsigned int engine_mask, unsigned int retry); 2143 2147 2144 2148 static reset_func intel_get_gpu_reset(struct drm_i915_private *dev_priv) 2145 2149 { ··· 2185 2143 return NULL; 2186 2144 } 2187 2145 2188 - int intel_gpu_reset(struct drm_i915_private *dev_priv, unsigned engine_mask) 2146 + int intel_gpu_reset(struct drm_i915_private *dev_priv, 2147 + const unsigned int engine_mask) 2189 2148 { 2190 2149 reset_func reset = intel_get_gpu_reset(dev_priv); 2191 - int retry; 2150 + unsigned int retry; 2192 2151 int ret; 2152 + 2153 + GEM_BUG_ON(!engine_mask); 2193 2154 2194 2155 /* 2195 2156 * We want to perform per-engine reset from atomic context (e.g. ··· 2235 2190 2236 2191 ret = -ENODEV; 2237 2192 if (reset) { 2238 - GEM_TRACE("engine_mask=%x\n", engine_mask); 2239 - ret = reset(dev_priv, engine_mask); 2193 + ret = reset(dev_priv, engine_mask, retry); 2194 + GEM_TRACE("engine_mask=%x, ret=%d, retry=%d\n", 2195 + engine_mask, ret, retry); 2240 2196 } 2241 2197 if (ret != -ETIMEDOUT || engine_mask != ALL_ENGINES) 2242 2198 break; ··· 2283 2237 bool 2284 2238 intel_uncore_arm_unclaimed_mmio_detection(struct drm_i915_private *dev_priv) 2285 2239 { 2286 - if (unlikely(i915_modparams.mmio_debug || 2287 - dev_priv->uncore.unclaimed_mmio_check <= 0)) 2288 - return false; 2240 + bool ret = false; 2241 + 2242 + spin_lock_irq(&dev_priv->uncore.lock); 2243 + 2244 + if (unlikely(dev_priv->uncore.unclaimed_mmio_check <= 0)) 2245 + goto out; 2289 2246 2290 2247 if (unlikely(intel_uncore_unclaimed_mmio(dev_priv))) { 2291 - DRM_DEBUG("Unclaimed register detected, " 2292 - "enabling oneshot unclaimed register reporting. " 2293 - "Please use i915.mmio_debug=N for more information.\n"); 2294 - i915_modparams.mmio_debug++; 2248 + if (!i915_modparams.mmio_debug) { 2249 + DRM_DEBUG("Unclaimed register detected, " 2250 + "enabling oneshot unclaimed register reporting. " 2251 + "Please use i915.mmio_debug=N for more information.\n"); 2252 + i915_modparams.mmio_debug++; 2253 + } 2295 2254 dev_priv->uncore.unclaimed_mmio_check--; 2296 - return true; 2255 + ret = true; 2297 2256 } 2298 2257 2299 - return false; 2258 + out: 2259 + spin_unlock_irq(&dev_priv->uncore.lock); 2260 + 2261 + return ret; 2300 2262 } 2301 2263 2302 2264 static enum forcewake_domains
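Threading the new retry argument from intel_gpu_reset() down to the per-gen backends is what lets gen8_reset_engines() relax its ready-for-reset requirement on later attempts. The hunks above imply a caller of roughly this shape (a simplification; the actual retry bound is elided in the diff and named here only for illustration):

        /* Simplified sketch of the retry loop in intel_gpu_reset(). */
        unsigned int retry;
        int ret = -ENODEV;

        for (retry = 0; retry < RESET_MAX_RETRIES; retry++) { /* bound assumed */
                if (reset)
                        ret = reset(dev_priv, engine_mask, retry);

                /* Only a timed-out full reset is retried; the backend grows
                 * more forceful (reset_non_ready) as retry increases.
                 */
                if (ret != -ETIMEDOUT || engine_mask != ALL_ENGINES)
                        break;
        }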
+6
drivers/gpu/drm/i915/intel_wopcm.c
··· 163 163 u32 guc_wopcm_rsvd; 164 164 int err; 165 165 166 + if (!USES_GUC(dev_priv)) 167 + return 0; 168 + 166 169 GEM_BUG_ON(!wopcm->size); 170 + 171 + if (i915_inject_load_failure()) 172 + return -E2BIG; 167 173 168 174 if (guc_fw_size >= wopcm->size) { 169 175 DRM_ERROR("GuC FW (%uKiB) is too big to fit in WOPCM.",
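The i915_inject_load_failure() checkpoint added above follows the driver's load-failure injection pattern: instrumented call sites count up, and the one selected by the i915.inject_load_failure module parameter reports a failure, so each point in the load sequence can be exercised in turn. A sketch of that pattern, under the assumption that the helper simply counts call sites (the real implementation lives elsewhere in the driver):

        /* Sketch of the assumed fault-injection pattern; illustrative only. */
        static unsigned int load_fail_count;

        static bool sketch_inject_load_failure(void)
        {
                if (!i915_modparams.inject_load_failure)
                        return false; /* injection disabled */

                /* Fail exactly at the Nth instrumented checkpoint. */
                return ++load_fail_count == i915_modparams.inject_load_failure;
        }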
+6 -5
drivers/gpu/drm/i915/selftests/huge_pages.c
··· 906 906 if (IS_ERR(obj)) 907 907 return ERR_CAST(obj); 908 908 909 - cmd = i915_gem_object_pin_map(obj, I915_MAP_WB); 909 + err = i915_gem_object_set_to_wc_domain(obj, true); 910 + if (err) 911 + goto err; 912 + 913 + cmd = i915_gem_object_pin_map(obj, I915_MAP_WC); 910 914 if (IS_ERR(cmd)) { 911 915 err = PTR_ERR(cmd); 912 916 goto err; ··· 940 936 } 941 937 942 938 *cmd = MI_BATCH_BUFFER_END; 939 + i915_gem_chipset_flush(i915); 943 940 944 941 i915_gem_object_unpin_map(obj); 945 - 946 - err = i915_gem_object_set_to_gtt_domain(obj, false); 947 - if (err) 948 - goto err; 949 942 950 943 batch = i915_vma_instance(obj, vma->vm, NULL); 951 944 if (IS_ERR(batch)) {
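Replacing the WB mapping plus set_to_gtt_domain() with a WC mapping plus an explicit chipset flush avoids depending on GGTT coherency, the same property the series stops assuming elsewhere (see the I915_PARAM_MMAP_GTT_COHERENT note at the end of this patch). The resulting idiom for a CPU-built batch, mirrored from the hunk above with error handling trimmed:

        u32 *cs;

        cs = i915_gem_object_pin_map(obj, I915_MAP_WC);
        if (IS_ERR(cs))
                return PTR_ERR(cs);

        *cs = MI_BATCH_BUFFER_END;
        i915_gem_chipset_flush(i915); /* drain CPU write-combining buffers */
        i915_gem_object_unpin_map(obj);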
+221
drivers/gpu/drm/i915/selftests/i915_gem.c
··· 1 + /* 2 + * SPDX-License-Identifier: MIT 3 + * 4 + * Copyright © 2018 Intel Corporation 5 + */ 6 + 7 + #include <linux/random.h> 8 + 9 + #include "../i915_selftest.h" 10 + 11 + #include "mock_context.h" 12 + #include "igt_flush_test.h" 13 + 14 + static int switch_to_context(struct drm_i915_private *i915, 15 + struct i915_gem_context *ctx) 16 + { 17 + struct intel_engine_cs *engine; 18 + enum intel_engine_id id; 19 + int err = 0; 20 + 21 + intel_runtime_pm_get(i915); 22 + 23 + for_each_engine(engine, i915, id) { 24 + struct i915_request *rq; 25 + 26 + rq = i915_request_alloc(engine, ctx); 27 + if (IS_ERR(rq)) { 28 + err = PTR_ERR(rq); 29 + break; 30 + } 31 + 32 + i915_request_add(rq); 33 + } 34 + 35 + intel_runtime_pm_put(i915); 36 + 37 + return err; 38 + } 39 + 40 + static void trash_stolen(struct drm_i915_private *i915) 41 + { 42 + struct i915_ggtt *ggtt = &i915->ggtt; 43 + const u64 slot = ggtt->error_capture.start; 44 + const resource_size_t size = resource_size(&i915->dsm); 45 + unsigned long page; 46 + u32 prng = 0x12345678; 47 + 48 + for (page = 0; page < size; page += PAGE_SIZE) { 49 + const dma_addr_t dma = i915->dsm.start + page; 50 + u32 __iomem *s; 51 + int x; 52 + 53 + ggtt->vm.insert_page(&ggtt->vm, dma, slot, I915_CACHE_NONE, 0); 54 + 55 + s = io_mapping_map_atomic_wc(&ggtt->iomap, slot); 56 + for (x = 0; x < PAGE_SIZE / sizeof(u32); x++) { 57 + prng = next_pseudo_random32(prng); 58 + iowrite32(prng, &s[x]); 59 + } 60 + io_mapping_unmap_atomic(s); 61 + } 62 + 63 + ggtt->vm.clear_range(&ggtt->vm, slot, PAGE_SIZE); 64 + } 65 + 66 + static void simulate_hibernate(struct drm_i915_private *i915) 67 + { 68 + intel_runtime_pm_get(i915); 69 + 70 + /* 71 + * As a final sting in the tail, invalidate stolen. Under a real S4, 72 + * stolen is lost and needs to be refilled on resume. However, under 73 + * CI we merely do S4-device testing (as full S4 is too unreliable 74 + * for automated testing across a cluster), so to simulate the effect 75 + * of stolen being trashed across S4, we trash it ourselves. 76 + */ 77 + trash_stolen(i915); 78 + 79 + intel_runtime_pm_put(i915); 80 + } 81 + 82 + static int pm_prepare(struct drm_i915_private *i915) 83 + { 84 + int err = 0; 85 + 86 + if (i915_gem_suspend(i915)) { 87 + pr_err("i915_gem_suspend failed\n"); 88 + err = -EINVAL; 89 + } 90 + 91 + return err; 92 + } 93 + 94 + static void pm_suspend(struct drm_i915_private *i915) 95 + { 96 + intel_runtime_pm_get(i915); 97 + 98 + i915_gem_suspend_gtt_mappings(i915); 99 + i915_gem_suspend_late(i915); 100 + 101 + intel_runtime_pm_put(i915); 102 + } 103 + 104 + static void pm_hibernate(struct drm_i915_private *i915) 105 + { 106 + intel_runtime_pm_get(i915); 107 + 108 + i915_gem_suspend_gtt_mappings(i915); 109 + 110 + i915_gem_freeze(i915); 111 + i915_gem_freeze_late(i915); 112 + 113 + intel_runtime_pm_put(i915); 114 + } 115 + 116 + static void pm_resume(struct drm_i915_private *i915) 117 + { 118 + /* 119 + * Both suspend and hibernate follow the same wakeup path and assume 120 + * that runtime-pm just works. 
121 + */ 122 + intel_runtime_pm_get(i915); 123 + 124 + intel_engines_sanitize(i915); 125 + i915_gem_sanitize(i915); 126 + i915_gem_resume(i915); 127 + 128 + intel_runtime_pm_put(i915); 129 + } 130 + 131 + static int igt_gem_suspend(void *arg) 132 + { 133 + struct drm_i915_private *i915 = arg; 134 + struct i915_gem_context *ctx; 135 + struct drm_file *file; 136 + int err; 137 + 138 + file = mock_file(i915); 139 + if (IS_ERR(file)) 140 + return PTR_ERR(file); 141 + 142 + err = -ENOMEM; 143 + mutex_lock(&i915->drm.struct_mutex); 144 + ctx = live_context(i915, file); 145 + if (!IS_ERR(ctx)) 146 + err = switch_to_context(i915, ctx); 147 + mutex_unlock(&i915->drm.struct_mutex); 148 + if (err) 149 + goto out; 150 + 151 + err = pm_prepare(i915); 152 + if (err) 153 + goto out; 154 + 155 + pm_suspend(i915); 156 + 157 + /* Here be dragons! Note that with S3RST any S3 may become S4! */ 158 + simulate_hibernate(i915); 159 + 160 + pm_resume(i915); 161 + 162 + mutex_lock(&i915->drm.struct_mutex); 163 + err = switch_to_context(i915, ctx); 164 + if (igt_flush_test(i915, I915_WAIT_LOCKED)) 165 + err = -EIO; 166 + mutex_unlock(&i915->drm.struct_mutex); 167 + out: 168 + mock_file_free(i915, file); 169 + return err; 170 + } 171 + 172 + static int igt_gem_hibernate(void *arg) 173 + { 174 + struct drm_i915_private *i915 = arg; 175 + struct i915_gem_context *ctx; 176 + struct drm_file *file; 177 + int err; 178 + 179 + file = mock_file(i915); 180 + if (IS_ERR(file)) 181 + return PTR_ERR(file); 182 + 183 + err = -ENOMEM; 184 + mutex_lock(&i915->drm.struct_mutex); 185 + ctx = live_context(i915, file); 186 + if (!IS_ERR(ctx)) 187 + err = switch_to_context(i915, ctx); 188 + mutex_unlock(&i915->drm.struct_mutex); 189 + if (err) 190 + goto out; 191 + 192 + err = pm_prepare(i915); 193 + if (err) 194 + goto out; 195 + 196 + pm_hibernate(i915); 197 + 198 + /* Here be dragons! */ 199 + simulate_hibernate(i915); 200 + 201 + pm_resume(i915); 202 + 203 + mutex_lock(&i915->drm.struct_mutex); 204 + err = switch_to_context(i915, ctx); 205 + if (igt_flush_test(i915, I915_WAIT_LOCKED)) 206 + err = -EIO; 207 + mutex_unlock(&i915->drm.struct_mutex); 208 + out: 209 + mock_file_free(i915, file); 210 + return err; 211 + } 212 + 213 + int i915_gem_live_selftests(struct drm_i915_private *i915) 214 + { 215 + static const struct i915_subtest tests[] = { 216 + SUBTEST(igt_gem_suspend), 217 + SUBTEST(igt_gem_hibernate), 218 + }; 219 + 220 + return i915_subtests(tests, i915); 221 + }
+17 -21
drivers/gpu/drm/i915/selftests/i915_gem_coherency.c
··· 33 33 { 34 34 unsigned int needs_clflush; 35 35 struct page *page; 36 - u32 *map; 36 + void *map; 37 + u32 *cpu; 37 38 int err; 38 39 39 40 err = i915_gem_obj_prepare_shmem_write(obj, &needs_clflush); ··· 43 42 44 43 page = i915_gem_object_get_page(obj, offset >> PAGE_SHIFT); 45 44 map = kmap_atomic(page); 45 + cpu = map + offset_in_page(offset); 46 46 47 - if (needs_clflush & CLFLUSH_BEFORE) { 48 - mb(); 49 - clflush(map+offset_in_page(offset) / sizeof(*map)); 50 - mb(); 51 - } 47 + if (needs_clflush & CLFLUSH_BEFORE) 48 + drm_clflush_virt_range(cpu, sizeof(*cpu)); 52 49 53 - map[offset_in_page(offset) / sizeof(*map)] = v; 50 + *cpu = v; 54 51 55 - if (needs_clflush & CLFLUSH_AFTER) { 56 - mb(); 57 - clflush(map+offset_in_page(offset) / sizeof(*map)); 58 - mb(); 59 - } 52 + if (needs_clflush & CLFLUSH_AFTER) 53 + drm_clflush_virt_range(cpu, sizeof(*cpu)); 60 54 61 55 kunmap_atomic(map); 62 - 63 56 i915_gem_obj_finish_shmem_access(obj); 57 + 64 58 return 0; 65 59 } 66 60 ··· 65 69 { 66 70 unsigned int needs_clflush; 67 71 struct page *page; 68 - u32 *map; 72 + void *map; 73 + u32 *cpu; 69 74 int err; 70 75 71 76 err = i915_gem_obj_prepare_shmem_read(obj, &needs_clflush); ··· 75 78 76 79 page = i915_gem_object_get_page(obj, offset >> PAGE_SHIFT); 77 80 map = kmap_atomic(page); 81 + cpu = map + offset_in_page(offset); 78 82 79 - if (needs_clflush & CLFLUSH_BEFORE) { 80 - mb(); 81 - clflush(map+offset_in_page(offset) / sizeof(*map)); 82 - mb(); 83 - } 83 + if (needs_clflush & CLFLUSH_BEFORE) 84 + drm_clflush_virt_range(cpu, sizeof(*cpu)); 84 85 85 - *v = map[offset_in_page(offset) / sizeof(*map)]; 86 + *v = *cpu; 87 + 86 88 kunmap_atomic(map); 87 - 88 89 i915_gem_obj_finish_shmem_access(obj); 90 + 89 91 return 0; 90 92 } 91 93
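The conversion above leans on the core DRM cache helper instead of open-coded mb()/clflush()/mb() sequences. For reference, the helper's contract is roughly:

        /*
         * Core DRM helper assumed by the hunks above: flushes the CPU cache
         * lines covering [addr, addr + length), including the barriers the
         * open-coded version supplied by hand; where CLFLUSH is unavailable
         * it falls back to a heavier full-cache flush.
         */
        void drm_clflush_virt_range(void *addr, unsigned long length);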
+1 -1
drivers/gpu/drm/i915/selftests/i915_gem_object.c
··· 282 282 view.partial.offset, 283 283 view.partial.size, 284 284 vma->size >> PAGE_SHIFT, 285 - tile_row_pages(obj), 285 + tile->tiling ? tile_row_pages(obj) : 0, 286 286 vma->fence ? vma->fence->id : -1, tile->tiling, tile->stride, 287 287 offset >> PAGE_SHIFT, 288 288 (unsigned int)offset_in_page(offset),
+1
drivers/gpu/drm/i915/selftests/i915_live_selftests.h
··· 17 17 selftest(dmabuf, i915_gem_dmabuf_live_selftests) 18 18 selftest(coherency, i915_gem_coherency_live_selftests) 19 19 selftest(gtt, i915_gem_gtt_live_selftests) 20 + selftest(gem, i915_gem_live_selftests) 20 21 selftest(evict, i915_gem_evict_live_selftests) 21 22 selftest(hugepages, i915_gem_huge_page_live_selftests) 22 23 selftest(contexts, i915_gem_context_live_selftests)
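With the gem entry registered here, the new suspend/hibernate selftests run alongside the other live selftests. On a kernel built with CONFIG_DRM_I915_SELFTEST they are typically triggered at module load, e.g. modprobe i915 live_selftests=-1; the parameter name is taken from i915_selftest.c and is worth verifying against the running kernel.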
+38
drivers/gpu/drm/i915/selftests/intel_guc.c
··· 65 65 return 0; 66 66 } 67 67 68 + static int ring_doorbell_nop(struct intel_guc_client *client) 69 + { 70 + struct guc_process_desc *desc = __get_process_desc(client); 71 + int err; 72 + 73 + client->use_nop_wqi = true; 74 + 75 + spin_lock_irq(&client->wq_lock); 76 + 77 + guc_wq_item_append(client, 0, 0, 0, 0); 78 + guc_ring_doorbell(client); 79 + 80 + spin_unlock_irq(&client->wq_lock); 81 + 82 + client->use_nop_wqi = false; 83 + 84 + /* If there are no issues, GuC will update the WQ head and keep the 85 + * WQ in active status. 86 + */ 87 + err = wait_for(READ_ONCE(desc->head) == READ_ONCE(desc->tail), 10); 88 + if (err) { 89 + pr_err("doorbell %u ring failed!\n", client->doorbell_id); 90 + return -EIO; 91 + } 92 + 93 + if (desc->wq_status != WQ_STATUS_ACTIVE) { 94 + pr_err("doorbell %u ring put WQ in bad state (%u)!\n", 95 + client->doorbell_id, desc->wq_status); 96 + return -EIO; 97 + } 98 + 99 + return 0; 100 + } 101 + 68 102 /* 69 103 * Basic client sanity check, handy to validate create_clients. 70 104 */ ··· 364 330 } 365 331 366 332 err = check_all_doorbells(guc); 333 + if (err) 334 + goto out; 335 + 336 + err = ring_doorbell_nop(clients[i]); 367 337 if (err) 368 338 goto out; 369 339 }
+89 -12
drivers/gpu/drm/i915/selftests/intel_hangcheck.c
··· 1018 1018 return err; 1019 1019 } 1020 1020 1021 + static int evict_fence(void *data) 1022 + { 1023 + struct evict_vma *arg = data; 1024 + struct drm_i915_private *i915 = arg->vma->vm->i915; 1025 + int err; 1026 + 1027 + complete(&arg->completion); 1028 + 1029 + mutex_lock(&i915->drm.struct_mutex); 1030 + 1031 + /* Mark the fence register as dirty to force the mmio update. */ 1032 + err = i915_gem_object_set_tiling(arg->vma->obj, I915_TILING_Y, 512); 1033 + if (err) { 1034 + pr_err("Invalid Y-tiling settings; err:%d\n", err); 1035 + goto out_unlock; 1036 + } 1037 + 1038 + err = i915_vma_pin_fence(arg->vma); 1039 + if (err) { 1040 + pr_err("Unable to pin Y-tiled fence; err:%d\n", err); 1041 + goto out_unlock; 1042 + } 1043 + 1044 + i915_vma_unpin_fence(arg->vma); 1045 + 1046 + out_unlock: 1047 + mutex_unlock(&i915->drm.struct_mutex); 1048 + 1049 + return err; 1050 + } 1051 + 1021 1052 static int __igt_reset_evict_vma(struct drm_i915_private *i915, 1022 - struct i915_address_space *vm) 1053 + struct i915_address_space *vm, 1054 + int (*fn)(void *), 1055 + unsigned int flags) 1023 1056 { 1024 1057 struct drm_i915_gem_object *obj; 1025 1058 struct task_struct *tsk = NULL; ··· 1073 1040 if (err) 1074 1041 goto unlock; 1075 1042 1076 - obj = i915_gem_object_create_internal(i915, PAGE_SIZE); 1043 + obj = i915_gem_object_create_internal(i915, SZ_1M); 1077 1044 if (IS_ERR(obj)) { 1078 1045 err = PTR_ERR(obj); 1079 1046 goto fini; 1047 + } 1048 + 1049 + if (flags & EXEC_OBJECT_NEEDS_FENCE) { 1050 + err = i915_gem_object_set_tiling(obj, I915_TILING_X, 512); 1051 + if (err) { 1052 + pr_err("Invalid X-tiling settings; err:%d\n", err); 1053 + goto out_obj; 1054 + } 1080 1055 } 1081 1056 1082 1057 arg.vma = i915_vma_instance(obj, vm, NULL); ··· 1100 1059 } 1101 1060 1102 1061 err = i915_vma_pin(arg.vma, 0, 0, 1103 - i915_vma_is_ggtt(arg.vma) ? PIN_GLOBAL : PIN_USER); 1104 - if (err) 1062 + i915_vma_is_ggtt(arg.vma) ? 
1063 + PIN_GLOBAL | PIN_MAPPABLE : 1064 + PIN_USER); 1065 + if (err) { 1066 + i915_request_add(rq); 1105 1067 goto out_obj; 1068 + } 1106 1069 1107 - err = i915_vma_move_to_active(arg.vma, rq, EXEC_OBJECT_WRITE); 1070 + if (flags & EXEC_OBJECT_NEEDS_FENCE) { 1071 + err = i915_vma_pin_fence(arg.vma); 1072 + if (err) { 1073 + pr_err("Unable to pin X-tiled fence; err:%d\n", err); 1074 + i915_vma_unpin(arg.vma); 1075 + i915_request_add(rq); 1076 + goto out_obj; 1077 + } 1078 + } 1079 + 1080 + err = i915_vma_move_to_active(arg.vma, rq, flags); 1081 + 1082 + if (flags & EXEC_OBJECT_NEEDS_FENCE) 1083 + i915_vma_unpin_fence(arg.vma); 1108 1084 i915_vma_unpin(arg.vma); 1109 1085 1110 1086 i915_request_get(rq); ··· 1144 1086 1145 1087 init_completion(&arg.completion); 1146 1088 1147 - tsk = kthread_run(evict_vma, &arg, "igt/evict_vma"); 1089 + tsk = kthread_run(fn, &arg, "igt/evict_vma"); 1148 1090 if (IS_ERR(tsk)) { 1149 1091 err = PTR_ERR(tsk); 1150 1092 tsk = NULL; ··· 1195 1137 { 1196 1138 struct drm_i915_private *i915 = arg; 1197 1139 1198 - return __igt_reset_evict_vma(i915, &i915->ggtt.vm); 1140 + return __igt_reset_evict_vma(i915, &i915->ggtt.vm, 1141 + evict_vma, EXEC_OBJECT_WRITE); 1199 1142 } 1200 1143 1201 1144 static int igt_reset_evict_ppgtt(void *arg) 1202 1145 { 1203 1146 struct drm_i915_private *i915 = arg; 1204 1147 struct i915_gem_context *ctx; 1148 + struct drm_file *file; 1205 1149 int err; 1206 1150 1151 + file = mock_file(i915); 1152 + if (IS_ERR(file)) 1153 + return PTR_ERR(file); 1154 + 1207 1155 mutex_lock(&i915->drm.struct_mutex); 1208 - ctx = kernel_context(i915); 1156 + ctx = live_context(i915, file); 1209 1157 mutex_unlock(&i915->drm.struct_mutex); 1210 - if (IS_ERR(ctx)) 1211 - return PTR_ERR(ctx); 1158 + if (IS_ERR(ctx)) { 1159 + err = PTR_ERR(ctx); 1160 + goto out; 1161 + } 1212 1162 1213 1163 err = 0; 1214 1164 if (ctx->ppgtt) /* aliasing == global gtt locking, covered above */ 1215 - err = __igt_reset_evict_vma(i915, &ctx->ppgtt->vm); 1165 + err = __igt_reset_evict_vma(i915, &ctx->ppgtt->vm, 1166 + evict_vma, EXEC_OBJECT_WRITE); 1216 1167 1217 - kernel_context_close(ctx); 1168 + out: 1169 + mock_file_free(i915, file); 1218 1170 return err; 1171 + } 1172 + 1173 + static int igt_reset_evict_fence(void *arg) 1174 + { 1175 + struct drm_i915_private *i915 = arg; 1176 + 1177 + return __igt_reset_evict_vma(i915, &i915->ggtt.vm, 1178 + evict_fence, EXEC_OBJECT_NEEDS_FENCE); 1219 1179 } 1220 1180 1221 1181 static int wait_for_others(struct drm_i915_private *i915, ··· 1485 1409 SUBTEST(igt_reset_wait), 1486 1410 SUBTEST(igt_reset_evict_ggtt), 1487 1411 SUBTEST(igt_reset_evict_ppgtt), 1412 + SUBTEST(igt_reset_evict_fence), 1488 1413 SUBTEST(igt_handle_error), 1489 1414 }; 1490 1415 bool saved_hangcheck;
+3 -8
drivers/gpu/drm/i915/selftests/mock_context.c
··· 43 43 44 44 INIT_RADIX_TREE(&ctx->handles_vma, GFP_KERNEL); 45 45 INIT_LIST_HEAD(&ctx->handles_list); 46 + INIT_LIST_HEAD(&ctx->hw_id_link); 46 47 47 48 for (n = 0; n < ARRAY_SIZE(ctx->__engine); n++) { 48 49 struct intel_context *ce = &ctx->__engine[n]; ··· 51 50 ce->gem_context = ctx; 52 51 } 53 52 54 - ret = ida_simple_get(&i915->contexts.hw_ida, 55 - 0, MAX_CONTEXT_HW_ID, GFP_KERNEL); 53 + ret = i915_gem_context_pin_hw_id(ctx); 56 54 if (ret < 0) 57 55 goto err_handles; 58 - ctx->hw_id = ret; 59 56 60 57 if (name) { 61 58 ctx->name = kstrdup(name, GFP_KERNEL); ··· 84 85 85 86 void mock_init_contexts(struct drm_i915_private *i915) 86 87 { 87 - INIT_LIST_HEAD(&i915->contexts.list); 88 - ida_init(&i915->contexts.hw_ida); 89 - 90 - INIT_WORK(&i915->contexts.free_work, contexts_free_worker); 91 - init_llist_head(&i915->contexts.free_list); 88 + init_contexts(i915); 92 89 } 93 90 94 91 struct i915_gem_context *
+2
drivers/gpu/drm/i915/selftests/mock_gtt.c
··· 118 118 ggtt->vm.vma_ops.clear_pages = clear_pages; 119 119 120 120 i915_address_space_init(&ggtt->vm, i915); 121 + 122 + ggtt->vm.is_ggtt = true; 121 123 } 122 124 123 125 void mock_fini_ggtt(struct drm_i915_private *i915)
+1
include/drm/i915_pciids.h
··· 386 386 INTEL_VGA_DEVICE(0x3E91, info), /* SRV GT2 */ \ 387 387 INTEL_VGA_DEVICE(0x3E92, info), /* SRV GT2 */ \ 388 388 INTEL_VGA_DEVICE(0x3E96, info), /* SRV GT2 */ \ 389 + INTEL_VGA_DEVICE(0x3E98, info), /* SRV GT2 */ \ 389 390 INTEL_VGA_DEVICE(0x3E9A, info) /* SRV GT2 */ 390 391 391 392 /* CFL H */
+22
include/uapi/drm/i915_drm.h
··· 529 529 */ 530 530 #define I915_PARAM_CS_TIMESTAMP_FREQUENCY 51 531 531 532 + /* 533 + * Once upon a time we supposed that writes through the GGTT would be 534 + * immediately in physical memory (once flushed out of the CPU path). However, 535 + * on a few different processors and chipsets, this is not necessarily the case 536 + * as the writes appear to be buffered internally. Thus a read of the backing 537 + * storage (physical memory) via a different path (with different physical tags 538 + * to the indirect write via the GGTT) will see stale values from before 539 + * the GGTT write. Inside the kernel, we can for the most part keep track of 540 + * the different read/write domains in use (e.g. set-domain), but the assumption 541 + * of coherency is baked into the ABI, hence reporting its true state in this 542 + * parameter. 543 + * 544 + * Reports true when writes via mmap_gtt are immediately visible following an 545 + * lfence to flush the WCB. 546 + * 547 + * Reports false when writes via mmap_gtt are indeterminately delayed in an 548 + * internal buffer and are _not_ immediately visible to third parties accessing 549 + * directly via mmap_cpu/mmap_wc. Using mmap_gtt as part of an IPC 550 + * communications channel when this reports false is strongly discouraged. 551 + */ 552 + #define I915_PARAM_MMAP_GTT_COHERENT 52 553 + 532 554 typedef struct drm_i915_getparam { 533 555 __s32 param; 534 556 /*
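From userspace the new parameter is an ordinary GETPARAM query. A hypothetical probe, assuming the usual libdrm drmIoctl() wrapper (error handling trimmed; on older kernels the parameter is absent, which is safest to treat as non-coherent):

        #include <stdbool.h>
        #include <xf86drm.h>
        #include <drm/i915_drm.h>

        static bool ggtt_mmap_is_coherent(int fd)
        {
                int value = 0;
                struct drm_i915_getparam gp = {
                        .param = I915_PARAM_MMAP_GTT_COHERENT,
                        .value = &value,
                };

                /* An error (e.g. EINVAL) means an older kernel: assume
                 * non-coherent and flush or WC-map accordingly.
                 */
                if (drmIoctl(fd, DRM_IOCTL_I915_GETPARAM, &gp))
                        return false;

                return value != 0;
        }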