Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'drm-intel-next-2023-09-29' of git://anongit.freedesktop.org/drm/drm-intel into drm-next

drm/i915 feature pull for v6.7:

Features and functionality:
- Early Xe2 LPD / Lunarlake (LNL) display enabling (Lucas, Matt, Gustavo,
Stanislav, Luca, Clint, Juha-Pekka, Balasubramani, Ravi)
- Plenty of various DSC improvements and fixes (Ankit)
- Add DSC PPS state readout and verification (Suraj)
- Improve fastsets for VRR, LRR and M/N updates (Ville)
- Use connector->ddc to create (non-DP MST) connector sysfs ddc symlinks (Ville)
- Various DSB improvements, load LUTs using DSB (Ville)
- Improve shared link bandwidth management, starting with FDI (Imre)
- Optimize get param ioctl for PXP status (Alan)
- Remove DG2 pre-production hardware workarounds (Matt)
- Add more RPL P/U PCI IDs (Dnyaneshwar)
- Add new DG2-G12 stepping (Swati)
- Add PSR sink error status to debugfs (Jouni)
- Add DP enhanced framing to crtc state checker (Ville)

Refactoring and cleanups:
- Simplify TileY/Tile4 tiling selftest enumeration (Matt)
- Remove some unused power domain code (Gustavo)
- Check stepping of display IP version rather than MTL platform (Matt)
- DP audio compute config cleanups (Vinod)
- SDVO cleanups and refactoring, more robust failure handling (Ville)
- Color register definition and readout cleanups (Jani)
- Reduce header interdependencies for frontbuffer tracking (Jani)
- Continue replacing struct edid with struct drm_edid (Jani)
- Use source physical address instead of EDID for CEC (Jani)
- Clean up Type-C port lane count functions (Luca)
- Clean up DSC PPS register definitions and readout (Jani)
- Stop using GEM_BUG_ON()/GEM_WARN_ON() in display code (Jani)
- Move more of the display probe to display code (Jani)
- Remove redundant runtime suspended state flag (Jouni)
- Move display info printing to display code (Balasubramani)
- Frontbuffer tracking improvements (Jouni)
- Add trailing newlines to debug logging (Jim Cromie)
- Separate display workarounds from clock gating init (Matt)
- Reduce dmesg log spamming for combo PHY, PLL state, FEC, DP MST (Ville, Imre)

Fixes:
- Fix hotplug poll detect loops via suspend/resume (Imre)
- Fix hotplug detect for forced connectors (Imre)
- Fix DSC first_line_bpg_offset calculation (Suraj)
- Fix debug prints for SDP CRC16 (Arun)
- Fix PXP runtime resume (Alan)
- Fix cx0 PHY lane handling (Gustavo)
- Fix frontbuffer tracking locking in debugfs (Juha-Pekka)
- Fix SDVO detect on some models (Ville)
- Fix SDP split configuration for DP MST (Vinod)
- Fix AUX usage and reads for HDCP on DP MST (Suraj)
- Fix PSR workaround (Jouni)
- Fix redundant AUX power get/put in DP force (Imre)
- Fix ICL DSI TCLK POST by letting hardware handle it (William)
- Fix IRQ reset for XE LP+ (Gustavo)
- Fix h/vsync_end instead of h/vtotal in VBT (Ville)
- Fix C20 PHY msgbus timeout issues (Gustavo)
- Fix pre-TGL FEC pipe A vs. DDI A mixup (Ville)
- Fix FEC state readout for DP MST (Ville)

DRM subsystem core changes:
- Assume sink supports 8 bpc when DSC is supported (Ankit)
- Add drm_edid_is_digital() helper (Jani)
- Parse source physical address from EDID (Jani)
- Add function to attach CEC without EDID (Jani)
- Reorder connector sysfs/debugfs remove (Ville)
- Register connector sysfs ddc symlink later (Ville)

Media subsystem changes:
- Add comments about CEC source physical address usage (Jani)

Merges:
- Backmerge drm-next to get v6.6-rc1 (Jani)

Signed-off-by: Dave Airlie <airlied@redhat.com>

# Conflicts:
# drivers/gpu/drm/i915/i915_drv.h
From: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/87r0mhi7a6.fsf@intel.com

+4545 -2459
+20 -3
drivers/gpu/drm/display/drm_dp_cec.c
@@
 #include <drm/display/drm_dp_helper.h>
 #include <drm/drm_connector.h>
 #include <drm/drm_device.h>
+#include <drm/drm_edid.h>
 
 /*
  * Unfortunately it turns out that we have a chicken-and-egg situation
@@
  * were unchanged and just update the CEC physical address. Otherwise
  * unregister the old CEC adapter and create a new one.
  */
-void drm_dp_cec_set_edid(struct drm_dp_aux *aux, const struct edid *edid)
+void drm_dp_cec_attach(struct drm_dp_aux *aux, u16 source_physical_address)
 {
 	struct drm_connector *connector = aux->cec.connector;
 	u32 cec_caps = CEC_CAP_DEFAULTS | CEC_CAP_NEEDS_HPD |
@@
 	if (aux->cec.adap->capabilities == cec_caps &&
 	    aux->cec.adap->available_log_addrs == num_las) {
 		/* Unchanged, so just set the phys addr */
-		cec_s_phys_addr_from_edid(aux->cec.adap, edid);
+		cec_s_phys_addr(aux->cec.adap, source_physical_address, false);
 		goto unlock;
 	}
 	/*
@@
 	 * from drm_dp_cec_register_connector() edid == NULL, so in
 	 * that case the phys addr is just invalidated.
 	 */
-	cec_s_phys_addr_from_edid(aux->cec.adap, edid);
+	cec_s_phys_addr(aux->cec.adap, source_physical_address, false);
 	}
 unlock:
 	mutex_unlock(&aux->cec.lock);
+}
+EXPORT_SYMBOL(drm_dp_cec_attach);
+
+/*
+ * Note: Prefer calling drm_dp_cec_attach() with
+ * connector->display_info.source_physical_address if possible.
+ */
+void drm_dp_cec_set_edid(struct drm_dp_aux *aux, const struct edid *edid)
+{
+	u16 pa = CEC_PHYS_ADDR_INVALID;
+
+	if (edid && edid->extensions)
+		pa = cec_get_edid_phys_addr((const u8 *)edid,
+					    EDID_LENGTH * (edid->extensions + 1), NULL);
+
+	drm_dp_cec_attach(aux, pa);
 }
 EXPORT_SYMBOL(drm_dp_cec_set_edid);
+6 -2
drivers/gpu/drm/display/drm_dp_helper.c
@@
 	int num_bpc = 0;
 	u8 color_depth = dsc_dpcd[DP_DSC_DEC_COLOR_DEPTH_CAP - DP_DSC_SUPPORT];
 
+	if (!drm_dp_sink_supports_dsc(dsc_dpcd))
+		return 0;
+
 	if (color_depth & DP_DSC_12_BPC)
 		dsc_bpc[num_bpc++] = 12;
 	if (color_depth & DP_DSC_10_BPC)
 		dsc_bpc[num_bpc++] = 10;
-	if (color_depth & DP_DSC_8_BPC)
-		dsc_bpc[num_bpc++] = 8;
+
+	/* A DP DSC Sink device shall support 8 bpc. */
+	dsc_bpc[num_bpc++] = 8;
 
 	return num_bpc;
 }
+10 -1
drivers/gpu/drm/drm_connector.c
@@
 		goto err_debugfs;
 	}
 
+	ret = drm_sysfs_connector_add_late(connector);
+	if (ret)
+		goto err_late_register;
+
 	drm_mode_object_register(connector->dev, &connector->base);
 
 	connector->registration_state = DRM_CONNECTOR_REGISTERED;
@@
 	mutex_unlock(&connector_list_lock);
 	goto unlock;
 
+err_late_register:
+	if (connector->funcs->early_unregister)
+		connector->funcs->early_unregister(connector);
 err_debugfs:
 	drm_debugfs_connector_remove(connector);
 	drm_sysfs_connector_remove(connector);
@@
 					 connector->privacy_screen,
 					 &connector->privacy_screen_notifier);
 
+	drm_sysfs_connector_remove_early(connector);
+
 	if (connector->funcs->early_unregister)
 		connector->funcs->early_unregister(connector);
 
-	drm_sysfs_connector_remove(connector);
 	drm_debugfs_connector_remove(connector);
+	drm_sysfs_connector_remove(connector);
 
 	connector->registration_state = DRM_CONNECTOR_UNREGISTERED;
 	mutex_unlock(&connector->mutex);
+20 -2
drivers/gpu/drm/drm_edid.c
@@
  */
 
 #include <linux/bitfield.h>
+#include <linux/cec.h>
 #include <linux/hdmi.h>
 #include <linux/i2c.h>
 #include <linux/kernel.h>
@@
 		return ret;
 	}
 
-	return ((drm_edid->edid->input & DRM_EDID_INPUT_DIGITAL) != 0);
+	return drm_edid_is_digital(drm_edid);
 }
 
 static void
@@
 	info->is_hdmi = true;
 
+	info->source_physical_address = (db[4] << 8) | db[5];
+
 	if (len >= 6)
 		info->dvi_dual = db[6] & 1;
 	if (len >= 7)
@@
 	info->vics_len = 0;
 
 	info->quirks = 0;
+
+	info->source_physical_address = CEC_PHYS_ADDR_INVALID;
 }
 
 static void update_displayid_info(struct drm_connector *connector,
@@
 	if (edid->revision < 3)
 		goto out;
 
-	if (!(edid->input & DRM_EDID_INPUT_DIGITAL))
+	if (!drm_edid_is_digital(drm_edid))
 		goto out;
 
 	info->color_formats |= DRM_COLOR_FORMAT_RGB444;
@@
 		connector->tile_group = NULL;
 	}
 }
+
+/**
+ * drm_edid_is_digital - is digital?
+ * @drm_edid: The EDID
+ *
+ * Return true if input is digital.
+ */
+bool drm_edid_is_digital(const struct drm_edid *drm_edid)
+{
+	return drm_edid && drm_edid->edid &&
+		drm_edid->edid->input & DRM_EDID_INPUT_DIGITAL;
+}
+EXPORT_SYMBOL(drm_edid_is_digital);
+2
drivers/gpu/drm/drm_internal.h
@@
 void drm_sysfs_destroy(void);
 struct device *drm_sysfs_minor_alloc(struct drm_minor *minor);
 int drm_sysfs_connector_add(struct drm_connector *connector);
+int drm_sysfs_connector_add_late(struct drm_connector *connector);
+void drm_sysfs_connector_remove_early(struct drm_connector *connector);
 void drm_sysfs_connector_remove(struct drm_connector *connector);
 
 void drm_sysfs_lease_event(struct drm_device *dev);
+15 -7
drivers/gpu/drm/drm_sysfs.c
@@
 		drm_err(dev, "failed to add component to create link to typec connector\n");
 	}
 
-	if (connector->ddc)
-		return sysfs_create_link(&connector->kdev->kobj,
-					 &connector->ddc->dev.kobj, "ddc");
-
 	return 0;
 
 err_free:
@@
 	return r;
 }
 
+int drm_sysfs_connector_add_late(struct drm_connector *connector)
+{
+	if (connector->ddc)
+		return sysfs_create_link(&connector->kdev->kobj,
+					 &connector->ddc->dev.kobj, "ddc");
+
+	return 0;
+}
+
+void drm_sysfs_connector_remove_early(struct drm_connector *connector)
+{
+	if (connector->ddc)
+		sysfs_remove_link(&connector->kdev->kobj, "ddc");
+}
+
 void drm_sysfs_connector_remove(struct drm_connector *connector)
 {
 	if (!connector->kdev)
 		return;
-
-	if (connector->ddc)
-		sysfs_remove_link(&connector->kdev->kobj, "ddc");
 
 	if (dev_fwnode(connector->kdev))
 		component_del(connector->kdev, &typec_connector_ops);
+2
drivers/gpu/drm/i915/Makefile
@@
 	display/intel_display_power_well.o \
 	display/intel_display_reset.o \
 	display/intel_display_rps.o \
+	display/intel_display_wa.o \
 	display/intel_dmc.o \
 	display/intel_dpio_phy.o \
 	display/intel_dpll.o \
@@
 	display/intel_hotplug.o \
 	display/intel_hotplug_irq.o \
 	display/intel_hti.o \
+	display/intel_link_bw.o \
 	display/intel_load_detect.o \
 	display/intel_lpe_audio.o \
 	display/intel_modeset_lock.o \
+8 -2
drivers/gpu/drm/i915/display/g4x_dp.c
@@
 
 		intel_de_rmw(dev_priv, TRANS_DP_CTL(crtc->pipe),
 			     TRANS_DP_ENH_FRAMING,
-			     drm_dp_enhanced_frame_cap(intel_dp->dpcd) ?
+			     pipe_config->enhanced_framing ?
 			     TRANS_DP_ENH_FRAMING : 0);
 	} else {
 		if (IS_G4X(dev_priv) && pipe_config->limited_color_range)
@@
 			intel_dp->DP |= DP_SYNC_VS_HIGH;
 		intel_dp->DP |= DP_LINK_TRAIN_OFF;
 
-		if (drm_dp_enhanced_frame_cap(intel_dp->dpcd))
+		if (pipe_config->enhanced_framing)
 			intel_dp->DP |= DP_ENHANCED_FRAMING;
 
 		if (IS_CHERRYVIEW(dev_priv))
@@
 		u32 trans_dp = intel_de_read(dev_priv,
 					     TRANS_DP_CTL(crtc->pipe));
 
+		if (trans_dp & TRANS_DP_ENH_FRAMING)
+			pipe_config->enhanced_framing = true;
+
 		if (trans_dp & TRANS_DP_HSYNC_ACTIVE_HIGH)
 			flags |= DRM_MODE_FLAG_PHSYNC;
 		else
@@
 		else
 			flags |= DRM_MODE_FLAG_NVSYNC;
 	} else {
+		if (tmp & DP_ENHANCED_FRAMING)
+			pipe_config->enhanced_framing = true;
+
 		if (tmp & DP_SYNC_HS_HIGH)
 			flags |= DRM_MODE_FLAG_PHSYNC;
 		else
+5 -1
drivers/gpu/drm/i915/display/g4x_hdmi.c
@@
 #include "intel_display_types.h"
 #include "intel_dp_aux.h"
 #include "intel_dpio_phy.h"
+#include "intel_fdi.h"
 #include "intel_fifo_underrun.h"
 #include "intel_hdmi.h"
 #include "intel_hotplug.h"
@@
 	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
 	struct drm_i915_private *i915 = to_i915(encoder->base.dev);
 
-	if (HAS_PCH_SPLIT(i915))
+	if (HAS_PCH_SPLIT(i915)) {
 		crtc_state->has_pch_encoder = true;
+		if (!intel_fdi_compute_pipe_bpp(crtc_state))
+			return -EINVAL;
+	}
 
 	if (IS_G4X(i915))
 		crtc_state->has_hdmi_sink = g4x_compute_has_hdmi_sink(state, crtc);
+1
drivers/gpu/drm/i915/display/hsw_ips.c
@@
 #include "hsw_ips.h"
 #include "i915_drv.h"
 #include "i915_reg.h"
+#include "intel_color_regs.h"
 #include "intel_de.h"
 #include "intel_display_types.h"
 #include "intel_pcode.h"
+1
drivers/gpu/drm/i915/display/i9xx_plane.c
@@
 #include "intel_display_types.h"
 #include "intel_fb.h"
 #include "intel_fbc.h"
+#include "intel_frontbuffer.h"
 #include "intel_sprite.h"
 
 /* Primary plane formats for gen <= 3 */
+1 -12
drivers/gpu/drm/i915/display/icl_dsi.c
@@
 	u32 prepare_cnt, exit_zero_cnt, clk_zero_cnt, trail_cnt;
 	u32 ths_prepare_ns, tclk_trail_ns;
 	u32 hs_zero_cnt;
-	u32 tclk_pre_cnt, tclk_post_cnt;
+	u32 tclk_pre_cnt;
 
 	tlpx_ns = intel_dsi_tlpx_ns(intel_dsi);
@@
 		tclk_pre_cnt = ICL_TCLK_PRE_CNT_MAX;
 	}
 
-	/* tclk post count in escape clocks */
-	tclk_post_cnt = DIV_ROUND_UP(mipi_config->tclk_post, tlpx_ns);
-	if (tclk_post_cnt > ICL_TCLK_POST_CNT_MAX) {
-		drm_dbg_kms(&dev_priv->drm,
-			    "tclk_post_cnt out of range (%d)\n",
-			    tclk_post_cnt);
-		tclk_post_cnt = ICL_TCLK_POST_CNT_MAX;
-	}
-
 	/* hs zero cnt in escape clocks */
 	hs_zero_cnt = DIV_ROUND_UP(mipi_config->ths_prepare_hszero -
 				   ths_prepare_ns, tlpx_ns);
@@
 			       CLK_ZERO(clk_zero_cnt) |
 			       CLK_PRE_OVERRIDE |
 			       CLK_PRE(tclk_pre_cnt) |
-			       CLK_POST_OVERRIDE |
-			       CLK_POST(tclk_post_cnt) |
 			       CLK_TRAIL_OVERRIDE |
 			       CLK_TRAIL(trail_cnt));
+2
drivers/gpu/drm/i915/display/intel_atomic.c
@@
 		drm_property_blob_get(crtc_state->post_csc_lut);
 
 	crtc_state->update_pipe = false;
+	crtc_state->update_m_n = false;
+	crtc_state->update_lrr = false;
 	crtc_state->disable_lp_wm = false;
 	crtc_state->disable_cxsr = false;
 	crtc_state->update_wm_pre = false;
+11 -3
drivers/gpu/drm/i915/display/intel_atomic_plane.c
··· 214 214 int width, height; 215 215 unsigned int rel_data_rate; 216 216 217 - if (plane->id == PLANE_CURSOR) 218 - return 0; 219 - 220 217 if (!plane_state->uapi.visible) 221 218 return 0; 222 219 ··· 240 243 } 241 244 242 245 rel_data_rate = width * height * fb->format->cpp[color_plane]; 246 + 247 + if (plane->id == PLANE_CURSOR) 248 + return rel_data_rate; 243 249 244 250 return intel_adjusted_rate(&plane_state->uapi.src, 245 251 &plane_state->uapi.dst, ··· 981 981 if (fb->format->format == DRM_FORMAT_RGB565 && rotated) { 982 982 hsub = 2; 983 983 vsub = 2; 984 + } else if (DISPLAY_VER(i915) >= 20 && 985 + intel_format_info_is_yuv_semiplanar(fb->format, fb->modifier)) { 986 + /* 987 + * This allows NV12 and P0xx formats to have odd size and/or odd 988 + * source coordinates on DISPLAY_VER(i915) >= 20 989 + */ 990 + hsub = 1; 991 + vsub = 1; 984 992 } else { 985 993 hsub = fb->format->hsub; 986 994 vsub = fb->format->vsub;
+3 -3
drivers/gpu/drm/i915/display/intel_audio.c
@@
 	mutex_unlock(&i915->display.audio.mutex);
 }
 
-void intel_audio_sdp_split_update(struct intel_encoder *encoder,
-				  const struct intel_crtc_state *crtc_state)
+void intel_audio_sdp_split_update(const struct intel_crtc_state *crtc_state)
 {
-	struct drm_i915_private *i915 = to_i915(encoder->base.dev);
+	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
+	struct drm_i915_private *i915 = to_i915(crtc->base.dev);
 	enum transcoder trans = crtc_state->cpu_transcoder;
 
 	if (HAS_DP20(i915))
+1 -2
drivers/gpu/drm/i915/display/intel_audio.h
@@
 void intel_audio_cdclk_change_post(struct drm_i915_private *dev_priv);
 void intel_audio_init(struct drm_i915_private *dev_priv);
 void intel_audio_deinit(struct drm_i915_private *dev_priv);
-void intel_audio_sdp_split_update(struct intel_encoder *encoder,
-				  const struct intel_crtc_state *crtc_state);
+void intel_audio_sdp_split_update(const struct intel_crtc_state *crtc_state);
 
 #endif /* __INTEL_AUDIO_H__ */
+17 -9
drivers/gpu/drm/i915/display/intel_bios.c
@@
 }
 
 static void
-fill_detail_timing_data(struct drm_display_mode *panel_fixed_mode,
+fill_detail_timing_data(struct drm_i915_private *i915,
+			struct drm_display_mode *panel_fixed_mode,
 			const struct lvds_dvo_timing *dvo_timing)
 {
 	panel_fixed_mode->hdisplay = (dvo_timing->hactive_hi << 8) |
@@
 	panel_fixed_mode->height_mm = (dvo_timing->vimage_hi << 8) |
 		dvo_timing->vimage_lo;
 
-	/* Some VBTs have bogus h/vtotal values */
-	if (panel_fixed_mode->hsync_end > panel_fixed_mode->htotal)
-		panel_fixed_mode->htotal = panel_fixed_mode->hsync_end + 1;
-	if (panel_fixed_mode->vsync_end > panel_fixed_mode->vtotal)
-		panel_fixed_mode->vtotal = panel_fixed_mode->vsync_end + 1;
+	/* Some VBTs have bogus h/vsync_end values */
+	if (panel_fixed_mode->hsync_end > panel_fixed_mode->htotal) {
+		drm_dbg_kms(&i915->drm, "reducing hsync_end %d->%d\n",
+			    panel_fixed_mode->hsync_end, panel_fixed_mode->htotal);
+		panel_fixed_mode->hsync_end = panel_fixed_mode->htotal;
+	}
+	if (panel_fixed_mode->vsync_end > panel_fixed_mode->vtotal) {
+		drm_dbg_kms(&i915->drm, "reducing vsync_end %d->%d\n",
+			    panel_fixed_mode->vsync_end, panel_fixed_mode->vtotal);
+		panel_fixed_mode->vsync_end = panel_fixed_mode->vtotal;
+	}
 
 	drm_mode_set_name(panel_fixed_mode);
 }
@@
 	if (!panel_fixed_mode)
 		return;
 
-	fill_detail_timing_data(panel_fixed_mode, panel_dvo_timing);
+	fill_detail_timing_data(i915, panel_fixed_mode, panel_dvo_timing);
 
 	panel->vbt.lfp_lvds_vbt_mode = panel_fixed_mode;
 
@@
 	if (!panel_fixed_mode)
 		return;
 
-	fill_detail_timing_data(panel_fixed_mode, &dtds->dtds[index]);
+	fill_detail_timing_data(i915, panel_fixed_mode, &dtds->dtds[index]);
 
 	panel->vbt.sdvo_lvds_vbt_mode = panel_fixed_mode;
 
@@
 	const u8 *ddc_pin_map;
 	int i, n_entries;
 
-	if (HAS_PCH_MTP(i915) || IS_ALDERLAKE_P(i915)) {
+	if (INTEL_PCH_TYPE(i915) >= PCH_LNL || HAS_PCH_MTP(i915) ||
+	    IS_ALDERLAKE_P(i915)) {
 		ddc_pin_map = adlp_ddc_pin_map;
 		n_entries = ARRAY_SIZE(adlp_ddc_pin_map);
 	} else if (IS_ALDERLAKE_S(i915)) {
+86 -21
drivers/gpu/drm/i915/display/intel_cdclk.c
@@
 #include "intel_cdclk.h"
 #include "intel_crtc.h"
 #include "intel_de.h"
+#include "intel_dp.h"
 #include "intel_display_types.h"
 #include "intel_mchbar_regs.h"
 #include "intel_pci_config.h"
@@
 	{}
 };
 
+static const struct intel_cdclk_vals lnl_cdclk_table[] = {
+	{ .refclk = 38400, .cdclk = 153600, .divider = 2, .ratio = 16, .waveform = 0xaaaa },
+	{ .refclk = 38400, .cdclk = 172800, .divider = 2, .ratio = 16, .waveform = 0xad5a },
+	{ .refclk = 38400, .cdclk = 192000, .divider = 2, .ratio = 16, .waveform = 0xb6b6 },
+	{ .refclk = 38400, .cdclk = 211200, .divider = 2, .ratio = 16, .waveform = 0xdbb6 },
+	{ .refclk = 38400, .cdclk = 230400, .divider = 2, .ratio = 16, .waveform = 0xeeee },
+	{ .refclk = 38400, .cdclk = 249600, .divider = 2, .ratio = 16, .waveform = 0xf7de },
+	{ .refclk = 38400, .cdclk = 268800, .divider = 2, .ratio = 16, .waveform = 0xfefe },
+	{ .refclk = 38400, .cdclk = 288000, .divider = 2, .ratio = 16, .waveform = 0xfffe },
+	{ .refclk = 38400, .cdclk = 307200, .divider = 2, .ratio = 16, .waveform = 0xffff },
+	{ .refclk = 38400, .cdclk = 330000, .divider = 2, .ratio = 25, .waveform = 0xdbb6 },
+	{ .refclk = 38400, .cdclk = 360000, .divider = 2, .ratio = 25, .waveform = 0xeeee },
+	{ .refclk = 38400, .cdclk = 390000, .divider = 2, .ratio = 25, .waveform = 0xf7de },
+	{ .refclk = 38400, .cdclk = 420000, .divider = 2, .ratio = 25, .waveform = 0xfefe },
+	{ .refclk = 38400, .cdclk = 450000, .divider = 2, .ratio = 25, .waveform = 0xfffe },
+	{ .refclk = 38400, .cdclk = 480000, .divider = 2, .ratio = 25, .waveform = 0xffff },
+	{ .refclk = 38400, .cdclk = 487200, .divider = 2, .ratio = 29, .waveform = 0xfefe },
+	{ .refclk = 38400, .cdclk = 522000, .divider = 2, .ratio = 29, .waveform = 0xfffe },
+	{ .refclk = 38400, .cdclk = 556800, .divider = 2, .ratio = 29, .waveform = 0xffff },
+	{ .refclk = 38400, .cdclk = 571200, .divider = 2, .ratio = 34, .waveform = 0xfefe },
+	{ .refclk = 38400, .cdclk = 612000, .divider = 2, .ratio = 34, .waveform = 0xfffe },
+	{ .refclk = 38400, .cdclk = 652800, .divider = 2, .ratio = 34, .waveform = 0xffff },
+	{}
+};
+
 static int bxt_calc_cdclk(struct drm_i915_private *dev_priv, int min_cdclk)
 {
 	const struct intel_cdclk_vals *table = dev_priv->display.cdclk.table;
@@
 
 static bool pll_enable_wa_needed(struct drm_i915_private *dev_priv)
 {
-	return ((IS_DG2(dev_priv) || IS_METEORLAKE(dev_priv)) &&
-		dev_priv->display.cdclk.hw.vco > 0 &&
-		HAS_CDCLK_SQUASH(dev_priv));
+	return (DISPLAY_VER_FULL(dev_priv) == IP_VER(20, 0) ||
+		DISPLAY_VER_FULL(dev_priv) == IP_VER(14, 0) ||
+		IS_DG2(dev_priv)) &&
+		dev_priv->display.cdclk.hw.vco > 0;
 }
 
 static void _bxt_set_cdclk(struct drm_i915_private *dev_priv,
@@
 		dg2_cdclk_squash_program(dev_priv, waveform);
 
 	val = bxt_cdclk_cd2x_div_sel(dev_priv, clock, vco) |
-		bxt_cdclk_cd2x_pipe(dev_priv, pipe) |
-		skl_cdclk_decimal(cdclk);
+		bxt_cdclk_cd2x_pipe(dev_priv, pipe);
 
 	/*
 	 * Disable SSA Precharge when CD clock frequency < 500 MHz,
@@
 	if ((IS_GEMINILAKE(dev_priv) || IS_BROXTON(dev_priv)) &&
 	    cdclk >= 500000)
 		val |= BXT_CDCLK_SSA_PRECHARGE_ENABLE;
+
+	if (DISPLAY_VER(dev_priv) >= 20)
+		val |= MDCLK_SOURCE_SEL_CDCLK_PLL;
+	else
+		val |= skl_cdclk_decimal(cdclk);
+
 	intel_de_write(dev_priv, CDCLK_CTL, val);
 
 	if (pipe != INVALID_PIPE)
@@
 	return min_cdclk;
 }
 
+static int intel_vdsc_min_cdclk(const struct intel_crtc_state *crtc_state)
+{
+	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
+	struct drm_i915_private *i915 = to_i915(crtc->base.dev);
+	int num_vdsc_instances = intel_dsc_get_num_vdsc_instances(crtc_state);
+	int min_cdclk = 0;
+
+	/*
+	 * When we decide to use only one VDSC engine, since
+	 * each VDSC operates with 1 ppc throughput, pixel clock
+	 * cannot be higher than the VDSC clock (cdclk)
+	 * If there 2 VDSC engines, then pixel clock can't be higher than
+	 * VDSC clock(cdclk) * 2 and so on.
+	 */
+	min_cdclk = max_t(int, min_cdclk,
+			  DIV_ROUND_UP(crtc_state->pixel_rate, num_vdsc_instances));
+
+	if (crtc_state->bigjoiner_pipes) {
+		int pixel_clock = intel_dp_mode_to_fec_clock(crtc_state->hw.adjusted_mode.clock);
+
+		/*
+		 * According to Bigjoiner bw check:
+		 * compressed_bpp <= PPC * CDCLK * Big joiner Interface bits / Pixel clock
+		 *
+		 * We have already computed compressed_bpp, so now compute the min CDCLK that
+		 * is required to support this compressed_bpp.
+		 *
+		 * => CDCLK >= compressed_bpp * Pixel clock / (PPC * Bigjoiner Interface bits)
+		 *
+		 * Since PPC = 2 with bigjoiner
+		 * => CDCLK >= compressed_bpp * Pixel clock / 2 * Bigjoiner Interface bits
+		 */
+		int bigjoiner_interface_bits = DISPLAY_VER(i915) > 13 ? 36 : 24;
+		int min_cdclk_bj = (crtc_state->dsc.compressed_bpp * pixel_clock) /
+				   (2 * bigjoiner_interface_bits);
+
+		min_cdclk = max(min_cdclk, min_cdclk_bj);
+	}
+
+	return min_cdclk;
+}
+
 int intel_crtc_compute_min_cdclk(const struct intel_crtc_state *crtc_state)
 {
 	struct drm_i915_private *dev_priv =
@@
 	/* Account for additional needs from the planes */
 	min_cdclk = max(intel_planes_min_cdclk(crtc_state), min_cdclk);
 
-	/*
-	 * When we decide to use only one VDSC engine, since
-	 * each VDSC operates with 1 ppc throughput, pixel clock
-	 * cannot be higher than the VDSC clock (cdclk)
-	 * If there 2 VDSC engines, then pixel clock can't be higher than
-	 * VDSC clock(cdclk) * 2 and so on.
-	 */
-	if (crtc_state->dsc.compression_enable) {
-		int num_vdsc_instances = intel_dsc_get_num_vdsc_instances(crtc_state);
-
-		min_cdclk = max_t(int, min_cdclk,
-				  DIV_ROUND_UP(crtc_state->pixel_rate,
-					       num_vdsc_instances));
-	}
+	if (crtc_state->dsc.compression_enable)
+		min_cdclk = max(min_cdclk, intel_vdsc_min_cdclk(crtc_state));
 
 	/*
 	 * HACK. Currently for TGL/DG2 platforms we calculate
@@
 	} else if (intel_cdclk_needs_modeset(&old_cdclk_state->actual,
 					     &new_cdclk_state->actual)) {
 		/* All pipes must be switched off while we change the cdclk. */
-		ret = intel_modeset_all_pipes(state, "CDCLK change");
+		ret = intel_modeset_all_pipes_late(state, "CDCLK change");
 		if (ret)
 			return ret;
 
@@
  */
 void intel_init_cdclk_hooks(struct drm_i915_private *dev_priv)
 {
-	if (IS_METEORLAKE(dev_priv)) {
+	if (DISPLAY_VER(dev_priv) >= 20) {
+		dev_priv->display.funcs.cdclk = &mtl_cdclk_funcs;
+		dev_priv->display.cdclk.table = lnl_cdclk_table;
+	} else if (DISPLAY_VER(dev_priv) >= 14) {
 		dev_priv->display.funcs.cdclk = &mtl_cdclk_funcs;
 		dev_priv->display.cdclk.table = mtl_cdclk_table;
 	} else if (IS_DG2(dev_priv)) {
+142 -12
drivers/gpu/drm/i915/display/intel_color.c
@@
 
 #include "i915_reg.h"
 #include "intel_color.h"
+#include "intel_color_regs.h"
 #include "intel_de.h"
 #include "intel_display_types.h"
 #include "intel_dsb.h"
@@
 	 * software state. Used by eg. the hardware state checker.
 	 */
 	void (*read_csc)(struct intel_crtc_state *crtc_state);
+	/*
+	 * Read config other than LUTs and CSCs, before them. Optional.
+	 */
+	void (*get_config)(struct intel_crtc_state *crtc_state);
 };
 
 #define CTM_COEFF_SIGN (1ULL << 63)
@@
 		    crtc_state->csc_mode);
 }
 
+static u32 hsw_read_gamma_mode(struct intel_crtc *crtc)
+{
+	struct drm_i915_private *i915 = to_i915(crtc->base.dev);
+
+	return intel_de_read(i915, GAMMA_MODE(crtc->pipe));
+}
+
+static u32 ilk_read_csc_mode(struct intel_crtc *crtc)
+{
+	struct drm_i915_private *i915 = to_i915(crtc->base.dev);
+
+	return intel_de_read(i915, PIPE_CSC_MODE(crtc->pipe));
+}
+
+static void i9xx_get_config(struct intel_crtc_state *crtc_state)
+{
+	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
+	struct intel_plane *plane = to_intel_plane(crtc->base.primary);
+	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
+	enum i9xx_plane_id i9xx_plane = plane->i9xx_plane;
+	u32 tmp;
+
+	tmp = intel_de_read(dev_priv, DSPCNTR(i9xx_plane));
+
+	if (tmp & DISP_PIPE_GAMMA_ENABLE)
+		crtc_state->gamma_enable = true;
+
+	if (!HAS_GMCH(dev_priv) && tmp & DISP_PIPE_CSC_ENABLE)
+		crtc_state->csc_enable = true;
+}
+
+static void hsw_get_config(struct intel_crtc_state *crtc_state)
+{
+	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
+
+	crtc_state->gamma_mode = hsw_read_gamma_mode(crtc);
+	crtc_state->csc_mode = ilk_read_csc_mode(crtc);
+
+	i9xx_get_config(crtc_state);
+}
+
+static void skl_get_config(struct intel_crtc_state *crtc_state)
+{
+	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
+	struct drm_i915_private *i915 = to_i915(crtc->base.dev);
+	u32 tmp;
+
+	crtc_state->gamma_mode = hsw_read_gamma_mode(crtc);
+	crtc_state->csc_mode = ilk_read_csc_mode(crtc);
+
+	tmp = intel_de_read(i915, SKL_BOTTOM_COLOR(crtc->pipe));
+
+	if (tmp & SKL_BOTTOM_COLOR_GAMMA_ENABLE)
+		crtc_state->gamma_enable = true;
+
+	if (tmp & SKL_BOTTOM_COLOR_CSC_ENABLE)
+		crtc_state->csc_enable = true;
+}
+
 static void skl_color_commit_arm(const struct intel_crtc_state *crtc_state)
 {
 	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
@@
 
 	lut = blob->data;
 
+	/*
+	 * DSB fails to correctly load the legacy LUT
+	 * unless we either write each entry twice,
+	 * or use non-posted writes
+	 */
+	if (crtc_state->dsb)
+		intel_dsb_nonpost_start(crtc_state->dsb);
+
 	for (i = 0; i < 256; i++)
 		ilk_lut_write(crtc_state, LGC_PALETTE(pipe, i),
 			      i9xx_lut_8(&lut[i]));
+
+	if (crtc_state->dsb)
+		intel_dsb_nonpost_end(crtc_state->dsb);
 }
 
 static void ilk_load_lut_10(const struct intel_crtc_state *crtc_state,
@@
 		MISSING_CASE(crtc_state->gamma_mode);
 		break;
 	}
-
-	if (crtc_state->dsb) {
-		intel_dsb_finish(crtc_state->dsb);
-		intel_dsb_commit(crtc_state->dsb, false);
-		intel_dsb_wait(crtc_state->dsb);
-	}
 }
 
 static void vlv_load_luts(const struct intel_crtc_state *crtc_state)
@@
 {
 	struct drm_i915_private *i915 = to_i915(crtc_state->uapi.crtc->dev);
 
+	if (crtc_state->dsb)
+		return;
+
 	i915->display.funcs.color->load_luts(crtc_state);
 }
@@
 	struct drm_i915_private *i915 = to_i915(crtc_state->uapi.crtc->dev);
 
 	i915->display.funcs.color->color_commit_arm(crtc_state);
+
+	if (crtc_state->dsb)
+		intel_dsb_commit(crtc_state->dsb, true);
 }
 
 void intel_color_post_update(const struct intel_crtc_state *crtc_state)
@@
 void intel_color_prepare_commit(struct intel_crtc_state *crtc_state)
 {
 	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
+	struct drm_i915_private *i915 = to_i915(crtc->base.dev);
 
 	/* FIXME DSB has issues loading LUTs, disable it for now */
 	return;
 
+	if (!crtc_state->hw.active ||
+	    intel_crtc_needs_modeset(crtc_state))
+		return;
+
 	if (!crtc_state->pre_csc_lut && !crtc_state->post_csc_lut)
 		return;
 
-	crtc_state->dsb = intel_dsb_prepare(crtc, 1024);
+	crtc_state->dsb = intel_dsb_prepare(crtc_state, 1024);
+	if (!crtc_state->dsb)
+		return;
+
+	i915->display.funcs.color->load_luts(crtc_state);
+
+	intel_dsb_finish(crtc_state->dsb);
 }
 
 void intel_color_cleanup_commit(struct intel_crtc_state *crtc_state)
@@
 
 	intel_dsb_cleanup(crtc_state->dsb);
 	crtc_state->dsb = NULL;
+}
+
+void intel_color_wait_commit(const struct intel_crtc_state *crtc_state)
+{
+	if (crtc_state->dsb)
+		intel_dsb_wait(crtc_state->dsb);
+}
+
+bool intel_color_uses_dsb(const struct intel_crtc_state *crtc_state)
+{
+	return crtc_state->dsb;
 }
 
 static bool intel_can_preload_luts(const struct intel_crtc_state *new_crtc_state)
@@
 void intel_color_get_config(struct intel_crtc_state *crtc_state)
 {
 	struct drm_i915_private *i915 = to_i915(crtc_state->uapi.crtc->dev);
+
+	if (i915->display.funcs.color->get_config)
+		i915->display.funcs.color->get_config(crtc_state);
 
 	i915->display.funcs.color->read_luts(crtc_state);
 
@@
 	return 16;
 }
 
-static bool err_check(struct drm_color_lut *lut1,
-		      struct drm_color_lut *lut2, u32 err)
+static bool err_check(const struct drm_color_lut *lut1,
+		      const struct drm_color_lut *lut2, u32 err)
 {
 	return ((abs((long)lut2->red - lut1->red)) <= err) &&
 		((abs((long)lut2->blue - lut1->blue)) <= err) &&
 		((abs((long)lut2->green - lut1->green)) <= err);
 }
 
-static bool intel_lut_entries_equal(struct drm_color_lut *lut1,
-				    struct drm_color_lut *lut2,
+static bool intel_lut_entries_equal(const struct drm_color_lut *lut1,
+				    const struct drm_color_lut *lut2,
 				    int lut_size, u32 err)
 {
 	int i;
@@
 				  const struct drm_property_blob *blob2,
 				  int check_size, int precision)
 {
-	struct drm_color_lut *lut1, *lut2;
+	const struct drm_color_lut *lut1, *lut2;
 	int lut_size1, lut_size2;
 	u32 err;
@@
 	return blob;
 }
 
+static void chv_get_config(struct intel_crtc_state *crtc_state)
+{
+	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
+	struct drm_i915_private *i915 = to_i915(crtc->base.dev);
+
+	crtc_state->cgm_mode = intel_de_read(i915, CGM_PIPE_MODE(crtc->pipe));
+
+	i9xx_get_config(crtc_state);
+}
+
 static void chv_read_luts(struct intel_crtc_state *crtc_state)
 {
 	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
@@
 	}
 
 	return blob;
 }
+
+static void ilk_get_config(struct intel_crtc_state *crtc_state)
+{
+	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
+
crtc_state->csc_mode = ilk_read_csc_mode(crtc); 3275 + 3276 + i9xx_get_config(crtc_state); 3378 3277 } 3379 3278 3380 3279 static void ilk_read_luts(struct intel_crtc_state *crtc_state) ··· 3692 3573 .read_luts = chv_read_luts, 3693 3574 .lut_equal = chv_lut_equal, 3694 3575 .read_csc = chv_read_csc, 3576 + .get_config = chv_get_config, 3695 3577 }; 3696 3578 3697 3579 static const struct intel_color_funcs vlv_color_funcs = { ··· 3710 3590 .load_luts = i965_load_luts, 3711 3591 .read_luts = i965_read_luts, 3712 3592 .lut_equal = i965_lut_equal, 3593 + .get_config = i9xx_get_config, 3713 3594 }; 3714 3595 3715 3596 static const struct intel_color_funcs i9xx_color_funcs = { ··· 3719 3598 .load_luts = i9xx_load_luts, 3720 3599 .read_luts = i9xx_read_luts, 3721 3600 .lut_equal = i9xx_lut_equal, 3601 + .get_config = i9xx_get_config, 3722 3602 }; 3723 3603 3724 3604 static const struct intel_color_funcs tgl_color_funcs = { ··· 3730 3608 .read_luts = icl_read_luts, 3731 3609 .lut_equal = icl_lut_equal, 3732 3610 .read_csc = icl_read_csc, 3611 + .get_config = skl_get_config, 3733 3612 }; 3734 3613 3735 3614 static const struct intel_color_funcs icl_color_funcs = { ··· 3742 3619 .read_luts = icl_read_luts, 3743 3620 .lut_equal = icl_lut_equal, 3744 3621 .read_csc = icl_read_csc, 3622 + .get_config = skl_get_config, 3745 3623 }; 3746 3624 3747 3625 static const struct intel_color_funcs glk_color_funcs = { ··· 3753 3629 .read_luts = glk_read_luts, 3754 3630 .lut_equal = glk_lut_equal, 3755 3631 .read_csc = skl_read_csc, 3632 + .get_config = skl_get_config, 3756 3633 }; 3757 3634 3758 3635 static const struct intel_color_funcs skl_color_funcs = { ··· 3764 3639 .read_luts = bdw_read_luts, 3765 3640 .lut_equal = ivb_lut_equal, 3766 3641 .read_csc = skl_read_csc, 3642 + .get_config = skl_get_config, 3767 3643 }; 3768 3644 3769 3645 static const struct intel_color_funcs bdw_color_funcs = { ··· 3775 3649 .read_luts = bdw_read_luts, 3776 3650 .lut_equal = ivb_lut_equal, 3777 3651 
.read_csc = ilk_read_csc, 3652 + .get_config = hsw_get_config, 3778 3653 }; 3779 3654 3780 3655 static const struct intel_color_funcs hsw_color_funcs = { ··· 3786 3659 .read_luts = ivb_read_luts, 3787 3660 .lut_equal = ivb_lut_equal, 3788 3661 .read_csc = ilk_read_csc, 3662 + .get_config = hsw_get_config, 3789 3663 }; 3790 3664 3791 3665 static const struct intel_color_funcs ivb_color_funcs = { ··· 3797 3669 .read_luts = ivb_read_luts, 3798 3670 .lut_equal = ivb_lut_equal, 3799 3671 .read_csc = ilk_read_csc, 3672 + .get_config = ilk_get_config, 3800 3673 }; 3801 3674 3802 3675 static const struct intel_color_funcs ilk_color_funcs = { ··· 3808 3679 .read_luts = ilk_read_luts, 3809 3680 .lut_equal = ilk_lut_equal, 3810 3681 .read_csc = ilk_read_csc, 3682 + .get_config = ilk_get_config, 3811 3683 }; 3812 3684 3813 3685 void intel_color_crtc_init(struct intel_crtc *crtc)
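The constification of `err_check()` above is worth a closer look: it compares two LUT entries channel-by-channel against an error budget derived from the readout precision. A standalone sketch of the same tolerance test, with a simplified `color_lut` struct standing in for `struct drm_color_lut` and a hypothetical `err_for_precision()` helper (the kernel derives the budget inline in `intel_lut_equal()`):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

/* simplified stand-in for struct drm_color_lut (u16 per channel) */
struct color_lut {
	uint16_t red, green, blue;
};

/* true if every channel of lut2 is within +/-err of lut1,
 * mirroring err_check() in intel_color.c */
static bool err_check(const struct color_lut *lut1,
		      const struct color_lut *lut2, uint32_t err)
{
	return (uint32_t)labs((long)lut2->red - lut1->red) <= err &&
	       (uint32_t)labs((long)lut2->green - lut1->green) <= err &&
	       (uint32_t)labs((long)lut2->blue - lut1->blue) <= err;
}

/* hypothetical helper: with only 'precision' significant bits kept by
 * the hardware, the low (16 - precision) bits may differ after a
 * write/readout round trip */
static uint32_t err_for_precision(int precision)
{
	return 0xffff >> precision; /* e.g. 10-bit -> 0x3f */
}
```

With a 10-bit precision budget (63), entries differing by less than one hardware step compare equal while larger deviations are flagged.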
+2
drivers/gpu/drm/i915/display/intel_color.h
··· 19 19 int intel_color_check(struct intel_crtc_state *crtc_state); 20 20 void intel_color_prepare_commit(struct intel_crtc_state *crtc_state); 21 21 void intel_color_cleanup_commit(struct intel_crtc_state *crtc_state); 22 + bool intel_color_uses_dsb(const struct intel_crtc_state *crtc_state); 23 + void intel_color_wait_commit(const struct intel_crtc_state *crtc_state); 22 24 void intel_color_commit_noarm(const struct intel_crtc_state *crtc_state); 23 25 void intel_color_commit_arm(const struct intel_crtc_state *crtc_state); 24 26 void intel_color_post_update(const struct intel_crtc_state *crtc_state);
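The new header entry points split the commit into a prepared phase (`intel_color_prepare_commit()` records the LUT writes into a DSB batch, or leaves `crtc_state->dsb` NULL on failure) and an armed phase (`commit_arm` kicks the batch, `wait_commit` waits, `cleanup_commit` frees it), with `intel_color_uses_dsb()` gating the CPU/MMIO fallback. A toy model of that nullable-handle fallback, using hypothetical names and an allocation flag in place of the real `intel_dsb_prepare()`:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

/* toy stand-ins: 'dsb' is the pre-recorded hardware batch, or NULL */
struct dsb { int committed; };

struct crtc_state {
	struct dsb *dsb;
	int luts_loaded_by_cpu; /* counts MMIO fallback loads */
};

/* prepare: try to allocate the batch; on failure leave dsb NULL so
 * every later step silently falls back to the CPU path */
static void color_prepare_commit(struct crtc_state *s, bool alloc_ok)
{
	s->dsb = alloc_ok ? calloc(1, sizeof(*s->dsb)) : NULL;
	/* with a DSB, the LUTs would be recorded here, not at commit */
}

static bool color_uses_dsb(const struct crtc_state *s)
{
	return s->dsb != NULL;
}

/* load_luts: skipped when a DSB already carries the LUT writes */
static void color_load_luts(struct crtc_state *s)
{
	if (s->dsb)
		return;
	s->luts_loaded_by_cpu++;
}

/* commit_arm: kick the pre-recorded batch, if any */
static void color_commit_arm(struct crtc_state *s)
{
	if (s->dsb)
		s->dsb->committed = 1;
}

static void color_cleanup_commit(struct crtc_state *s)
{
	free(s->dsb);
	s->dsb = NULL;
}
```

The point of the pattern is that callers never branch on DSB support themselves; a NULL handle degrades every step to a no-op or to the MMIO path.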
+286
drivers/gpu/drm/i915/display/intel_color_regs.h
··· 1 + /* SPDX-License-Identifier: MIT */ 2 + /* 3 + * Copyright © 2023 Intel Corporation 4 + */ 5 + 6 + #ifndef __INTEL_COLOR_REGS_H__ 7 + #define __INTEL_COLOR_REGS_H__ 8 + 9 + #include "intel_display_reg_defs.h" 10 + 11 + /* legacy palette */ 12 + #define _LGC_PALETTE_A 0x4a000 13 + #define _LGC_PALETTE_B 0x4a800 14 + /* see PALETTE_* for the bits */ 15 + #define LGC_PALETTE(pipe, i) _MMIO(_PIPE(pipe, _LGC_PALETTE_A, _LGC_PALETTE_B) + (i) * 4) 16 + 17 + /* ilk/snb precision palette */ 18 + #define _PREC_PALETTE_A 0x4b000 19 + #define _PREC_PALETTE_B 0x4c000 20 + /* 10bit mode */ 21 + #define PREC_PALETTE_10_RED_MASK REG_GENMASK(29, 20) 22 + #define PREC_PALETTE_10_GREEN_MASK REG_GENMASK(19, 10) 23 + #define PREC_PALETTE_10_BLUE_MASK REG_GENMASK(9, 0) 24 + /* 12.4 interpolated mode ldw */ 25 + #define PREC_PALETTE_12P4_RED_LDW_MASK REG_GENMASK(29, 24) 26 + #define PREC_PALETTE_12P4_GREEN_LDW_MASK REG_GENMASK(19, 14) 27 + #define PREC_PALETTE_12P4_BLUE_LDW_MASK REG_GENMASK(9, 4) 28 + /* 12.4 interpolated mode udw */ 29 + #define PREC_PALETTE_12P4_RED_UDW_MASK REG_GENMASK(29, 20) 30 + #define PREC_PALETTE_12P4_GREEN_UDW_MASK REG_GENMASK(19, 10) 31 + #define PREC_PALETTE_12P4_BLUE_UDW_MASK REG_GENMASK(9, 0) 32 + #define PREC_PALETTE(pipe, i) _MMIO(_PIPE(pipe, _PREC_PALETTE_A, _PREC_PALETTE_B) + (i) * 4) 33 + 34 + #define _PREC_PIPEAGCMAX 0x4d000 35 + #define _PREC_PIPEBGCMAX 0x4d010 36 + #define PREC_PIPEGCMAX(pipe, i) _MMIO(_PIPE(pipe, _PIPEAGCMAX, _PIPEBGCMAX) + (i) * 4) /* u1.16 */ 37 + 38 + #define _GAMMA_MODE_A 0x4a480 39 + #define _GAMMA_MODE_B 0x4ac80 40 + #define GAMMA_MODE(pipe) _MMIO_PIPE(pipe, _GAMMA_MODE_A, _GAMMA_MODE_B) 41 + #define PRE_CSC_GAMMA_ENABLE REG_BIT(31) /* icl+ */ 42 + #define POST_CSC_GAMMA_ENABLE REG_BIT(30) /* icl+ */ 43 + #define PALETTE_ANTICOL_DISABLE REG_BIT(15) /* skl+ */ 44 + #define GAMMA_MODE_MODE_MASK REG_GENMASK(1, 0) 45 + #define GAMMA_MODE_MODE_8BIT REG_FIELD_PREP(GAMMA_MODE_MODE_MASK, 0) 46 + #define GAMMA_MODE_MODE_10BIT 
REG_FIELD_PREP(GAMMA_MODE_MODE_MASK, 1) 47 + #define GAMMA_MODE_MODE_12BIT REG_FIELD_PREP(GAMMA_MODE_MODE_MASK, 2) 48 + #define GAMMA_MODE_MODE_SPLIT REG_FIELD_PREP(GAMMA_MODE_MODE_MASK, 3) /* ivb-bdw */ 49 + #define GAMMA_MODE_MODE_12BIT_MULTI_SEG REG_FIELD_PREP(GAMMA_MODE_MODE_MASK, 3) /* icl-tgl */ 50 + 51 + /* pipe CSC */ 52 + #define _PIPE_A_CSC_COEFF_RY_GY 0x49010 53 + #define _PIPE_A_CSC_COEFF_BY 0x49014 54 + #define _PIPE_A_CSC_COEFF_RU_GU 0x49018 55 + #define _PIPE_A_CSC_COEFF_BU 0x4901c 56 + #define _PIPE_A_CSC_COEFF_RV_GV 0x49020 57 + #define _PIPE_A_CSC_COEFF_BV 0x49024 58 + 59 + #define _PIPE_A_CSC_MODE 0x49028 60 + #define ICL_CSC_ENABLE (1 << 31) /* icl+ */ 61 + #define ICL_OUTPUT_CSC_ENABLE (1 << 30) /* icl+ */ 62 + #define CSC_BLACK_SCREEN_OFFSET (1 << 2) /* ilk/snb */ 63 + #define CSC_POSITION_BEFORE_GAMMA (1 << 1) /* pre-glk */ 64 + #define CSC_MODE_YUV_TO_RGB (1 << 0) /* ilk/snb */ 65 + 66 + #define _PIPE_A_CSC_PREOFF_HI 0x49030 67 + #define _PIPE_A_CSC_PREOFF_ME 0x49034 68 + #define _PIPE_A_CSC_PREOFF_LO 0x49038 69 + #define _PIPE_A_CSC_POSTOFF_HI 0x49040 70 + #define _PIPE_A_CSC_POSTOFF_ME 0x49044 71 + #define _PIPE_A_CSC_POSTOFF_LO 0x49048 72 + 73 + #define _PIPE_B_CSC_COEFF_RY_GY 0x49110 74 + #define _PIPE_B_CSC_COEFF_BY 0x49114 75 + #define _PIPE_B_CSC_COEFF_RU_GU 0x49118 76 + #define _PIPE_B_CSC_COEFF_BU 0x4911c 77 + #define _PIPE_B_CSC_COEFF_RV_GV 0x49120 78 + #define _PIPE_B_CSC_COEFF_BV 0x49124 79 + #define _PIPE_B_CSC_MODE 0x49128 80 + #define _PIPE_B_CSC_PREOFF_HI 0x49130 81 + #define _PIPE_B_CSC_PREOFF_ME 0x49134 82 + #define _PIPE_B_CSC_PREOFF_LO 0x49138 83 + #define _PIPE_B_CSC_POSTOFF_HI 0x49140 84 + #define _PIPE_B_CSC_POSTOFF_ME 0x49144 85 + #define _PIPE_B_CSC_POSTOFF_LO 0x49148 86 + 87 + #define PIPE_CSC_COEFF_RY_GY(pipe) _MMIO_PIPE(pipe, _PIPE_A_CSC_COEFF_RY_GY, _PIPE_B_CSC_COEFF_RY_GY) 88 + #define PIPE_CSC_COEFF_BY(pipe) _MMIO_PIPE(pipe, _PIPE_A_CSC_COEFF_BY, _PIPE_B_CSC_COEFF_BY) 89 + #define PIPE_CSC_COEFF_RU_GU(pipe) 
_MMIO_PIPE(pipe, _PIPE_A_CSC_COEFF_RU_GU, _PIPE_B_CSC_COEFF_RU_GU) 90 + #define PIPE_CSC_COEFF_BU(pipe) _MMIO_PIPE(pipe, _PIPE_A_CSC_COEFF_BU, _PIPE_B_CSC_COEFF_BU) 91 + #define PIPE_CSC_COEFF_RV_GV(pipe) _MMIO_PIPE(pipe, _PIPE_A_CSC_COEFF_RV_GV, _PIPE_B_CSC_COEFF_RV_GV) 92 + #define PIPE_CSC_COEFF_BV(pipe) _MMIO_PIPE(pipe, _PIPE_A_CSC_COEFF_BV, _PIPE_B_CSC_COEFF_BV) 93 + #define PIPE_CSC_MODE(pipe) _MMIO_PIPE(pipe, _PIPE_A_CSC_MODE, _PIPE_B_CSC_MODE) 94 + #define PIPE_CSC_PREOFF_HI(pipe) _MMIO_PIPE(pipe, _PIPE_A_CSC_PREOFF_HI, _PIPE_B_CSC_PREOFF_HI) 95 + #define PIPE_CSC_PREOFF_ME(pipe) _MMIO_PIPE(pipe, _PIPE_A_CSC_PREOFF_ME, _PIPE_B_CSC_PREOFF_ME) 96 + #define PIPE_CSC_PREOFF_LO(pipe) _MMIO_PIPE(pipe, _PIPE_A_CSC_PREOFF_LO, _PIPE_B_CSC_PREOFF_LO) 97 + #define PIPE_CSC_POSTOFF_HI(pipe) _MMIO_PIPE(pipe, _PIPE_A_CSC_POSTOFF_HI, _PIPE_B_CSC_POSTOFF_HI) 98 + #define PIPE_CSC_POSTOFF_ME(pipe) _MMIO_PIPE(pipe, _PIPE_A_CSC_POSTOFF_ME, _PIPE_B_CSC_POSTOFF_ME) 99 + #define PIPE_CSC_POSTOFF_LO(pipe) _MMIO_PIPE(pipe, _PIPE_A_CSC_POSTOFF_LO, _PIPE_B_CSC_POSTOFF_LO) 100 + 101 + /* Pipe Output CSC */ 102 + #define _PIPE_A_OUTPUT_CSC_COEFF_RY_GY 0x49050 103 + #define _PIPE_A_OUTPUT_CSC_COEFF_BY 0x49054 104 + #define _PIPE_A_OUTPUT_CSC_COEFF_RU_GU 0x49058 105 + #define _PIPE_A_OUTPUT_CSC_COEFF_BU 0x4905c 106 + #define _PIPE_A_OUTPUT_CSC_COEFF_RV_GV 0x49060 107 + #define _PIPE_A_OUTPUT_CSC_COEFF_BV 0x49064 108 + #define _PIPE_A_OUTPUT_CSC_PREOFF_HI 0x49068 109 + #define _PIPE_A_OUTPUT_CSC_PREOFF_ME 0x4906c 110 + #define _PIPE_A_OUTPUT_CSC_PREOFF_LO 0x49070 111 + #define _PIPE_A_OUTPUT_CSC_POSTOFF_HI 0x49074 112 + #define _PIPE_A_OUTPUT_CSC_POSTOFF_ME 0x49078 113 + #define _PIPE_A_OUTPUT_CSC_POSTOFF_LO 0x4907c 114 + 115 + #define _PIPE_B_OUTPUT_CSC_COEFF_RY_GY 0x49150 116 + #define _PIPE_B_OUTPUT_CSC_COEFF_BY 0x49154 117 + #define _PIPE_B_OUTPUT_CSC_COEFF_RU_GU 0x49158 118 + #define _PIPE_B_OUTPUT_CSC_COEFF_BU 0x4915c 119 + #define _PIPE_B_OUTPUT_CSC_COEFF_RV_GV 0x49160 120 + 
#define _PIPE_B_OUTPUT_CSC_COEFF_BV 0x49164 121 + #define _PIPE_B_OUTPUT_CSC_PREOFF_HI 0x49168 122 + #define _PIPE_B_OUTPUT_CSC_PREOFF_ME 0x4916c 123 + #define _PIPE_B_OUTPUT_CSC_PREOFF_LO 0x49170 124 + #define _PIPE_B_OUTPUT_CSC_POSTOFF_HI 0x49174 125 + #define _PIPE_B_OUTPUT_CSC_POSTOFF_ME 0x49178 126 + #define _PIPE_B_OUTPUT_CSC_POSTOFF_LO 0x4917c 127 + 128 + #define PIPE_CSC_OUTPUT_COEFF_RY_GY(pipe) _MMIO_PIPE(pipe,\ 129 + _PIPE_A_OUTPUT_CSC_COEFF_RY_GY,\ 130 + _PIPE_B_OUTPUT_CSC_COEFF_RY_GY) 131 + #define PIPE_CSC_OUTPUT_COEFF_BY(pipe) _MMIO_PIPE(pipe, \ 132 + _PIPE_A_OUTPUT_CSC_COEFF_BY, \ 133 + _PIPE_B_OUTPUT_CSC_COEFF_BY) 134 + #define PIPE_CSC_OUTPUT_COEFF_RU_GU(pipe) _MMIO_PIPE(pipe, \ 135 + _PIPE_A_OUTPUT_CSC_COEFF_RU_GU, \ 136 + _PIPE_B_OUTPUT_CSC_COEFF_RU_GU) 137 + #define PIPE_CSC_OUTPUT_COEFF_BU(pipe) _MMIO_PIPE(pipe, \ 138 + _PIPE_A_OUTPUT_CSC_COEFF_BU, \ 139 + _PIPE_B_OUTPUT_CSC_COEFF_BU) 140 + #define PIPE_CSC_OUTPUT_COEFF_RV_GV(pipe) _MMIO_PIPE(pipe, \ 141 + _PIPE_A_OUTPUT_CSC_COEFF_RV_GV, \ 142 + _PIPE_B_OUTPUT_CSC_COEFF_RV_GV) 143 + #define PIPE_CSC_OUTPUT_COEFF_BV(pipe) _MMIO_PIPE(pipe, \ 144 + _PIPE_A_OUTPUT_CSC_COEFF_BV, \ 145 + _PIPE_B_OUTPUT_CSC_COEFF_BV) 146 + #define PIPE_CSC_OUTPUT_PREOFF_HI(pipe) _MMIO_PIPE(pipe, \ 147 + _PIPE_A_OUTPUT_CSC_PREOFF_HI, \ 148 + _PIPE_B_OUTPUT_CSC_PREOFF_HI) 149 + #define PIPE_CSC_OUTPUT_PREOFF_ME(pipe) _MMIO_PIPE(pipe, \ 150 + _PIPE_A_OUTPUT_CSC_PREOFF_ME, \ 151 + _PIPE_B_OUTPUT_CSC_PREOFF_ME) 152 + #define PIPE_CSC_OUTPUT_PREOFF_LO(pipe) _MMIO_PIPE(pipe, \ 153 + _PIPE_A_OUTPUT_CSC_PREOFF_LO, \ 154 + _PIPE_B_OUTPUT_CSC_PREOFF_LO) 155 + #define PIPE_CSC_OUTPUT_POSTOFF_HI(pipe) _MMIO_PIPE(pipe, \ 156 + _PIPE_A_OUTPUT_CSC_POSTOFF_HI, \ 157 + _PIPE_B_OUTPUT_CSC_POSTOFF_HI) 158 + #define PIPE_CSC_OUTPUT_POSTOFF_ME(pipe) _MMIO_PIPE(pipe, \ 159 + _PIPE_A_OUTPUT_CSC_POSTOFF_ME, \ 160 + _PIPE_B_OUTPUT_CSC_POSTOFF_ME) 161 + #define PIPE_CSC_OUTPUT_POSTOFF_LO(pipe) _MMIO_PIPE(pipe, \ 162 + 
_PIPE_A_OUTPUT_CSC_POSTOFF_LO, \ 163 + _PIPE_B_OUTPUT_CSC_POSTOFF_LO) 164 + 165 + /* pipe degamma/gamma LUTs on IVB+ */ 166 + #define _PAL_PREC_INDEX_A 0x4A400 167 + #define _PAL_PREC_INDEX_B 0x4AC00 168 + #define _PAL_PREC_INDEX_C 0x4B400 169 + #define PAL_PREC_SPLIT_MODE REG_BIT(31) 170 + #define PAL_PREC_AUTO_INCREMENT REG_BIT(15) 171 + #define PAL_PREC_INDEX_VALUE_MASK REG_GENMASK(9, 0) 172 + #define PAL_PREC_INDEX_VALUE(x) REG_FIELD_PREP(PAL_PREC_INDEX_VALUE_MASK, (x)) 173 + #define _PAL_PREC_DATA_A 0x4A404 174 + #define _PAL_PREC_DATA_B 0x4AC04 175 + #define _PAL_PREC_DATA_C 0x4B404 176 + /* see PREC_PALETTE_* for the bits */ 177 + #define _PAL_PREC_GC_MAX_A 0x4A410 178 + #define _PAL_PREC_GC_MAX_B 0x4AC10 179 + #define _PAL_PREC_GC_MAX_C 0x4B410 180 + #define _PAL_PREC_EXT_GC_MAX_A 0x4A420 181 + #define _PAL_PREC_EXT_GC_MAX_B 0x4AC20 182 + #define _PAL_PREC_EXT_GC_MAX_C 0x4B420 183 + #define _PAL_PREC_EXT2_GC_MAX_A 0x4A430 184 + #define _PAL_PREC_EXT2_GC_MAX_B 0x4AC30 185 + #define _PAL_PREC_EXT2_GC_MAX_C 0x4B430 186 + 187 + #define PREC_PAL_INDEX(pipe) _MMIO_PIPE(pipe, _PAL_PREC_INDEX_A, _PAL_PREC_INDEX_B) 188 + #define PREC_PAL_DATA(pipe) _MMIO_PIPE(pipe, _PAL_PREC_DATA_A, _PAL_PREC_DATA_B) 189 + #define PREC_PAL_GC_MAX(pipe, i) _MMIO(_PIPE(pipe, _PAL_PREC_GC_MAX_A, _PAL_PREC_GC_MAX_B) + (i) * 4) /* u1.16 */ 190 + #define PREC_PAL_EXT_GC_MAX(pipe, i) _MMIO(_PIPE(pipe, _PAL_PREC_EXT_GC_MAX_A, _PAL_PREC_EXT_GC_MAX_B) + (i) * 4) /* u3.16 */ 191 + #define PREC_PAL_EXT2_GC_MAX(pipe, i) _MMIO(_PIPE(pipe, _PAL_PREC_EXT2_GC_MAX_A, _PAL_PREC_EXT2_GC_MAX_B) + (i) * 4) /* glk+, u3.16 */ 192 + 193 + #define _PRE_CSC_GAMC_INDEX_A 0x4A484 194 + #define _PRE_CSC_GAMC_INDEX_B 0x4AC84 195 + #define _PRE_CSC_GAMC_INDEX_C 0x4B484 196 + #define PRE_CSC_GAMC_AUTO_INCREMENT REG_BIT(10) 197 + #define PRE_CSC_GAMC_INDEX_VALUE_MASK REG_GENMASK(7, 0) 198 + #define PRE_CSC_GAMC_INDEX_VALUE(x) REG_FIELD_PREP(PRE_CSC_GAMC_INDEX_VALUE_MASK, (x)) 199 + #define _PRE_CSC_GAMC_DATA_A 
0x4A488 200 + #define _PRE_CSC_GAMC_DATA_B 0x4AC88 201 + #define _PRE_CSC_GAMC_DATA_C 0x4B488 202 + 203 + #define PRE_CSC_GAMC_INDEX(pipe) _MMIO_PIPE(pipe, _PRE_CSC_GAMC_INDEX_A, _PRE_CSC_GAMC_INDEX_B) 204 + #define PRE_CSC_GAMC_DATA(pipe) _MMIO_PIPE(pipe, _PRE_CSC_GAMC_DATA_A, _PRE_CSC_GAMC_DATA_B) 205 + 206 + /* ICL Multi segmented gamma */ 207 + #define _PAL_PREC_MULTI_SEG_INDEX_A 0x4A408 208 + #define _PAL_PREC_MULTI_SEG_INDEX_B 0x4AC08 209 + #define PAL_PREC_MULTI_SEG_AUTO_INCREMENT REG_BIT(15) 210 + #define PAL_PREC_MULTI_SEG_INDEX_VALUE_MASK REG_GENMASK(4, 0) 211 + #define PAL_PREC_MULTI_SEG_INDEX_VALUE(x) REG_FIELD_PREP(PAL_PREC_MULTI_SEG_INDEX_VALUE_MASK, (x)) 212 + 213 + #define _PAL_PREC_MULTI_SEG_DATA_A 0x4A40C 214 + #define _PAL_PREC_MULTI_SEG_DATA_B 0x4AC0C 215 + /* see PREC_PALETTE_12P4_* for the bits */ 216 + 217 + #define PREC_PAL_MULTI_SEG_INDEX(pipe) _MMIO_PIPE(pipe, \ 218 + _PAL_PREC_MULTI_SEG_INDEX_A, \ 219 + _PAL_PREC_MULTI_SEG_INDEX_B) 220 + #define PREC_PAL_MULTI_SEG_DATA(pipe) _MMIO_PIPE(pipe, \ 221 + _PAL_PREC_MULTI_SEG_DATA_A, \ 222 + _PAL_PREC_MULTI_SEG_DATA_B) 223 + 224 + #define _PIPE_A_WGC_C01_C00 0x600B0 /* s2.10 */ 225 + #define _PIPE_A_WGC_C02 0x600B4 /* s2.10 */ 226 + #define _PIPE_A_WGC_C11_C10 0x600B8 /* s2.10 */ 227 + #define _PIPE_A_WGC_C12 0x600BC /* s2.10 */ 228 + #define _PIPE_A_WGC_C21_C20 0x600C0 /* s2.10 */ 229 + #define _PIPE_A_WGC_C22 0x600C4 /* s2.10 */ 230 + 231 + #define PIPE_WGC_C01_C00(pipe) _MMIO_TRANS2(pipe, _PIPE_A_WGC_C01_C00) 232 + #define PIPE_WGC_C02(pipe) _MMIO_TRANS2(pipe, _PIPE_A_WGC_C02) 233 + #define PIPE_WGC_C11_C10(pipe) _MMIO_TRANS2(pipe, _PIPE_A_WGC_C11_C10) 234 + #define PIPE_WGC_C12(pipe) _MMIO_TRANS2(pipe, _PIPE_A_WGC_C12) 235 + #define PIPE_WGC_C21_C20(pipe) _MMIO_TRANS2(pipe, _PIPE_A_WGC_C21_C20) 236 + #define PIPE_WGC_C22(pipe) _MMIO_TRANS2(pipe, _PIPE_A_WGC_C22) 237 + 238 + /* pipe CSC & degamma/gamma LUTs on CHV */ 239 + #define _CGM_PIPE_A_CSC_COEFF01 (VLV_DISPLAY_BASE + 0x67900) 240 + 
#define _CGM_PIPE_A_CSC_COEFF23 (VLV_DISPLAY_BASE + 0x67904) 241 + #define _CGM_PIPE_A_CSC_COEFF45 (VLV_DISPLAY_BASE + 0x67908) 242 + #define _CGM_PIPE_A_CSC_COEFF67 (VLV_DISPLAY_BASE + 0x6790C) 243 + #define _CGM_PIPE_A_CSC_COEFF8 (VLV_DISPLAY_BASE + 0x67910) 244 + #define _CGM_PIPE_A_DEGAMMA (VLV_DISPLAY_BASE + 0x66000) 245 + /* cgm degamma ldw */ 246 + #define CGM_PIPE_DEGAMMA_GREEN_LDW_MASK REG_GENMASK(29, 16) 247 + #define CGM_PIPE_DEGAMMA_BLUE_LDW_MASK REG_GENMASK(13, 0) 248 + /* cgm degamma udw */ 249 + #define CGM_PIPE_DEGAMMA_RED_UDW_MASK REG_GENMASK(13, 0) 250 + #define _CGM_PIPE_A_GAMMA (VLV_DISPLAY_BASE + 0x67000) 251 + /* cgm gamma ldw */ 252 + #define CGM_PIPE_GAMMA_GREEN_LDW_MASK REG_GENMASK(25, 16) 253 + #define CGM_PIPE_GAMMA_BLUE_LDW_MASK REG_GENMASK(9, 0) 254 + /* cgm gamma udw */ 255 + #define CGM_PIPE_GAMMA_RED_UDW_MASK REG_GENMASK(9, 0) 256 + #define _CGM_PIPE_A_MODE (VLV_DISPLAY_BASE + 0x67A00) 257 + #define CGM_PIPE_MODE_GAMMA (1 << 2) 258 + #define CGM_PIPE_MODE_CSC (1 << 1) 259 + #define CGM_PIPE_MODE_DEGAMMA (1 << 0) 260 + 261 + #define _CGM_PIPE_B_CSC_COEFF01 (VLV_DISPLAY_BASE + 0x69900) 262 + #define _CGM_PIPE_B_CSC_COEFF23 (VLV_DISPLAY_BASE + 0x69904) 263 + #define _CGM_PIPE_B_CSC_COEFF45 (VLV_DISPLAY_BASE + 0x69908) 264 + #define _CGM_PIPE_B_CSC_COEFF67 (VLV_DISPLAY_BASE + 0x6990C) 265 + #define _CGM_PIPE_B_CSC_COEFF8 (VLV_DISPLAY_BASE + 0x69910) 266 + #define _CGM_PIPE_B_DEGAMMA (VLV_DISPLAY_BASE + 0x68000) 267 + #define _CGM_PIPE_B_GAMMA (VLV_DISPLAY_BASE + 0x69000) 268 + #define _CGM_PIPE_B_MODE (VLV_DISPLAY_BASE + 0x69A00) 269 + 270 + #define CGM_PIPE_CSC_COEFF01(pipe) _MMIO_PIPE(pipe, _CGM_PIPE_A_CSC_COEFF01, _CGM_PIPE_B_CSC_COEFF01) 271 + #define CGM_PIPE_CSC_COEFF23(pipe) _MMIO_PIPE(pipe, _CGM_PIPE_A_CSC_COEFF23, _CGM_PIPE_B_CSC_COEFF23) 272 + #define CGM_PIPE_CSC_COEFF45(pipe) _MMIO_PIPE(pipe, _CGM_PIPE_A_CSC_COEFF45, _CGM_PIPE_B_CSC_COEFF45) 273 + #define CGM_PIPE_CSC_COEFF67(pipe) _MMIO_PIPE(pipe, _CGM_PIPE_A_CSC_COEFF67, 
_CGM_PIPE_B_CSC_COEFF67) 274 + #define CGM_PIPE_CSC_COEFF8(pipe) _MMIO_PIPE(pipe, _CGM_PIPE_A_CSC_COEFF8, _CGM_PIPE_B_CSC_COEFF8) 275 + #define CGM_PIPE_DEGAMMA(pipe, i, w) _MMIO(_PIPE(pipe, _CGM_PIPE_A_DEGAMMA, _CGM_PIPE_B_DEGAMMA) + (i) * 8 + (w) * 4) 276 + #define CGM_PIPE_GAMMA(pipe, i, w) _MMIO(_PIPE(pipe, _CGM_PIPE_A_GAMMA, _CGM_PIPE_B_GAMMA) + (i) * 8 + (w) * 4) 277 + #define CGM_PIPE_MODE(pipe) _MMIO_PIPE(pipe, _CGM_PIPE_A_MODE, _CGM_PIPE_B_MODE) 278 + 279 + /* Skylake+ pipe bottom (background) color */ 280 + #define _SKL_BOTTOM_COLOR_A 0x70034 281 + #define _SKL_BOTTOM_COLOR_B 0x71034 282 + #define SKL_BOTTOM_COLOR_GAMMA_ENABLE REG_BIT(31) 283 + #define SKL_BOTTOM_COLOR_CSC_ENABLE REG_BIT(30) 284 + #define SKL_BOTTOM_COLOR(pipe) _MMIO_PIPE(pipe, _SKL_BOTTOM_COLOR_A, _SKL_BOTTOM_COLOR_B) 285 + 286 + #endif /* __INTEL_COLOR_REGS_H__ */
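The register definitions above are built from the kernel's `REG_GENMASK()`/`REG_FIELD_PREP()` helpers. A simplified userspace re-implementation (not the kernel's actual macros, which additionally enforce constant expressions and mask sanity at compile time; `__builtin_ffs` is a gcc/clang builtin) shows how a field like `GAMMA_MODE_MODE_12BIT` packs into `GAMMA_MODE_MODE_MASK`:

```c
#include <assert.h>
#include <stdint.h>

/* simplified stand-ins for the kernel's REG_GENMASK()/REG_FIELD_PREP() */
#define REG_GENMASK(h, l) \
	(((~0u) >> (31 - (h))) & ((~0u) << (l)))
#define __bf_shf(x) (__builtin_ffs(x) - 1)
#define REG_FIELD_PREP(mask, val) \
	(((uint32_t)(val) << __bf_shf(mask)) & (mask))
#define REG_FIELD_GET(mask, reg) \
	(((reg) & (mask)) >> __bf_shf(mask))

/* a few of the GAMMA_MODE fields from intel_color_regs.h */
#define GAMMA_MODE_MODE_MASK	REG_GENMASK(1, 0)
#define GAMMA_MODE_MODE_12BIT	REG_FIELD_PREP(GAMMA_MODE_MODE_MASK, 2)
#define PRE_CSC_GAMMA_ENABLE	(1u << 31) /* REG_BIT(31) */
```

`REG_GENMASK(h, l)` sets bits `h` down to `l`, and `REG_FIELD_PREP()` shifts a value into that field, so e.g. `PREC_PALETTE_10_RED_MASK` (`REG_GENMASK(29, 20)`) carves out bits 29:20 of the palette register.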
+8 -9
drivers/gpu/drm/i915/display/intel_combo_phy.c
··· 114 114 115 115 procmon = icl_get_procmon_ref_values(dev_priv, phy); 116 116 117 - drm_dbg_kms(&dev_priv->drm, 118 - "Combo PHY %c Voltage/Process Info : %s\n", 119 - phy_name(phy), procmon->name); 120 - 121 117 ret = check_phy_reg(dev_priv, phy, ICL_PORT_COMP_DW1(phy), 122 118 (0xff << 16) | 0xff, procmon->dw1); 123 119 ret &= check_phy_reg(dev_priv, phy, ICL_PORT_COMP_DW9(phy), ··· 308 312 enum phy phy; 309 313 310 314 for_each_combo_phy(dev_priv, phy) { 315 + const struct icl_procmon *procmon; 311 316 u32 val; 312 317 313 - if (icl_combo_phy_verify_state(dev_priv, phy)) { 314 - drm_dbg(&dev_priv->drm, 315 - "Combo PHY %c already enabled, won't reprogram it.\n", 316 - phy_name(phy)); 318 + if (icl_combo_phy_verify_state(dev_priv, phy)) 317 319 continue; 318 - } 320 + 321 + procmon = icl_get_procmon_ref_values(dev_priv, phy); 322 + 323 + drm_dbg(&dev_priv->drm, 324 + "Initializing combo PHY %c (Voltage/Process Info : %s)\n", 325 + phy_name(phy), procmon->name); 319 326 320 327 if (!has_phy_misc(dev_priv, phy)) 321 328 goto skip_phy_misc;
+3 -3
drivers/gpu/drm/i915/display/intel_connector.c
··· 192 192 /** 193 193 * intel_ddc_get_modes - get modelist from monitor 194 194 * @connector: DRM connector device to use 195 - * @adapter: i2c adapter 195 + * @ddc: DDC bus i2c adapter 196 196 * 197 197 * Fetch the EDID information from @connector using the DDC bus. 198 198 */ 199 199 int intel_ddc_get_modes(struct drm_connector *connector, 200 - struct i2c_adapter *adapter) 200 + struct i2c_adapter *ddc) 201 201 { 202 202 const struct drm_edid *drm_edid; 203 203 int ret; 204 204 205 - drm_edid = drm_edid_read_ddc(connector, adapter); 205 + drm_edid = drm_edid_read_ddc(connector, ddc); 206 206 if (!drm_edid) 207 207 return 0; 208 208
+1 -1
drivers/gpu/drm/i915/display/intel_connector.h
··· 26 26 enum pipe intel_connector_get_pipe(struct intel_connector *connector); 27 27 int intel_connector_update_modes(struct drm_connector *connector, 28 28 const struct drm_edid *drm_edid); 29 - int intel_ddc_get_modes(struct drm_connector *c, struct i2c_adapter *adapter); 29 + int intel_ddc_get_modes(struct drm_connector *c, struct i2c_adapter *ddc); 30 30 void intel_attach_force_audio_property(struct drm_connector *connector); 31 31 void intel_attach_broadcast_rgb_property(struct drm_connector *connector); 32 32 void intel_attach_aspect_ratio_property(struct drm_connector *connector);
+33 -31
drivers/gpu/drm/i915/display/intel_crt.c
··· 413 413 return -EINVAL; 414 414 415 415 pipe_config->has_pch_encoder = true; 416 + if (!intel_fdi_compute_pipe_bpp(pipe_config)) 417 + return -EINVAL; 418 + 416 419 pipe_config->output_format = INTEL_OUTPUT_FORMAT_RGB; 417 420 418 421 return 0; ··· 438 435 return -EINVAL; 439 436 440 437 pipe_config->has_pch_encoder = true; 438 + if (!intel_fdi_compute_pipe_bpp(pipe_config)) 439 + return -EINVAL; 440 + 441 441 pipe_config->output_format = INTEL_OUTPUT_FORMAT_RGB; 442 442 443 443 /* LPT FDI RX only supports 8bpc. */ 444 444 if (HAS_PCH_LPT(dev_priv)) { 445 + /* TODO: Check crtc_state->max_link_bpp_x16 instead of bw_constrained */ 445 446 if (pipe_config->bw_constrained && pipe_config->pipe_bpp < 24) { 446 447 drm_dbg_kms(&dev_priv->drm, 447 448 "LPT only supports 24bpp\n"); ··· 457 450 458 451 /* FDI must always be 2.7 GHz */ 459 452 pipe_config->port_clock = 135000 * 2; 453 + 454 + pipe_config->enhanced_framing = true; 460 455 461 456 adjusted_mode->crtc_clock = lpt_iclkip(pipe_config); 462 457 ··· 619 610 } 620 611 621 612 static const struct drm_edid *intel_crt_get_edid(struct drm_connector *connector, 622 - struct i2c_adapter *i2c) 613 + struct i2c_adapter *ddc) 623 614 { 624 615 const struct drm_edid *drm_edid; 625 616 626 - drm_edid = drm_edid_read_ddc(connector, i2c); 617 + drm_edid = drm_edid_read_ddc(connector, ddc); 627 618 628 - if (!drm_edid && !intel_gmbus_is_forced_bit(i2c)) { 619 + if (!drm_edid && !intel_gmbus_is_forced_bit(ddc)) { 629 620 drm_dbg_kms(connector->dev, 630 621 "CRT GMBUS EDID read failed, retry using GPIO bit-banging\n"); 631 - intel_gmbus_force_bit(i2c, true); 632 - drm_edid = drm_edid_read_ddc(connector, i2c); 633 - intel_gmbus_force_bit(i2c, false); 622 + intel_gmbus_force_bit(ddc, true); 623 + drm_edid = drm_edid_read_ddc(connector, ddc); 624 + intel_gmbus_force_bit(ddc, false); 634 625 } 635 626 636 627 return drm_edid; ··· 638 629 639 630 /* local version of intel_ddc_get_modes() to use intel_crt_get_edid() */ 640 631 static 
int intel_crt_ddc_get_modes(struct drm_connector *connector, 641 - struct i2c_adapter *adapter) 632 + struct i2c_adapter *ddc) 642 633 { 643 634 const struct drm_edid *drm_edid; 644 635 int ret; 645 636 646 - drm_edid = intel_crt_get_edid(connector, adapter); 637 + drm_edid = intel_crt_get_edid(connector, ddc); 647 638 if (!drm_edid) 648 639 return 0; 649 640 ··· 659 650 struct intel_crt *crt = intel_attached_crt(to_intel_connector(connector)); 660 651 struct drm_i915_private *dev_priv = to_i915(crt->base.base.dev); 661 652 const struct drm_edid *drm_edid; 662 - struct i2c_adapter *i2c; 663 653 bool ret = false; 664 654 665 - i2c = intel_gmbus_get_adapter(dev_priv, dev_priv->display.vbt.crt_ddc_pin); 666 - drm_edid = intel_crt_get_edid(connector, i2c); 655 + drm_edid = intel_crt_get_edid(connector, connector->ddc); 667 656 668 657 if (drm_edid) { 669 - const struct edid *edid = drm_edid_raw(drm_edid); 670 - bool is_digital = edid->input & DRM_EDID_INPUT_DIGITAL; 671 - 672 658 /* 673 659 * This may be a DVI-I connector with a shared DDC 674 660 * link between analog and digital outputs, so we 675 661 * have to check the EDID input spec of the attached device. 676 662 */ 677 - if (!is_digital) { 663 + if (drm_edid_is_digital(drm_edid)) { 664 + drm_dbg_kms(&dev_priv->drm, 665 + "CRT not detected via DDC:0x50 [EDID reports a digital panel]\n"); 666 + } else { 678 667 drm_dbg_kms(&dev_priv->drm, 679 668 "CRT detected via DDC:0x50 [EDID]\n"); 680 669 ret = true; 681 - } else { 682 - drm_dbg_kms(&dev_priv->drm, 683 - "CRT not detected via DDC:0x50 [EDID reports a digital panel]\n"); 684 670 } 685 671 } else { 686 672 drm_dbg_kms(&dev_priv->drm, ··· 911 907 out: 912 908 intel_display_power_put(dev_priv, intel_encoder->power_domain, wakeref); 913 909 914 - /* 915 - * Make sure the refs for power wells enabled during detect are 916 - * dropped to avoid a new detect cycle triggered by HPD polling. 
917 - */ 918 - intel_display_power_flush_work(dev_priv); 919 - 920 910 return status; 921 911 } 922 912 ··· 921 923 struct intel_crt *crt = intel_attached_crt(to_intel_connector(connector)); 922 924 struct intel_encoder *intel_encoder = &crt->base; 923 925 intel_wakeref_t wakeref; 924 - struct i2c_adapter *i2c; 926 + struct i2c_adapter *ddc; 925 927 int ret; 926 928 927 929 wakeref = intel_display_power_get(dev_priv, 928 930 intel_encoder->power_domain); 929 931 930 - i2c = intel_gmbus_get_adapter(dev_priv, dev_priv->display.vbt.crt_ddc_pin); 931 - ret = intel_crt_ddc_get_modes(connector, i2c); 932 + ret = intel_crt_ddc_get_modes(connector, connector->ddc); 932 933 if (ret || !IS_G4X(dev_priv)) 933 934 goto out; 934 935 935 936 /* Try to probe digital port for output in DVI-I -> VGA mode. */ 936 - i2c = intel_gmbus_get_adapter(dev_priv, GMBUS_PIN_DPB); 937 - ret = intel_crt_ddc_get_modes(connector, i2c); 937 + ddc = intel_gmbus_get_adapter(dev_priv, GMBUS_PIN_DPB); 938 + ret = intel_crt_ddc_get_modes(connector, ddc); 938 939 939 940 out: 940 941 intel_display_power_put(dev_priv, intel_encoder->power_domain, wakeref); ··· 991 994 struct intel_crt *crt; 992 995 struct intel_connector *intel_connector; 993 996 i915_reg_t adpa_reg; 997 + u8 ddc_pin; 994 998 u32 adpa; 995 999 996 1000 if (HAS_PCH_SPLIT(dev_priv)) ··· 1028 1030 return; 1029 1031 } 1030 1032 1033 + ddc_pin = dev_priv->display.vbt.crt_ddc_pin; 1034 + 1031 1035 connector = &intel_connector->base; 1032 1036 crt->connector = intel_connector; 1033 - drm_connector_init(&dev_priv->drm, &intel_connector->base, 1034 - &intel_crt_connector_funcs, DRM_MODE_CONNECTOR_VGA); 1037 + drm_connector_init_with_ddc(&dev_priv->drm, connector, 1038 + &intel_crt_connector_funcs, 1039 + DRM_MODE_CONNECTOR_VGA, 1040 + intel_gmbus_get_adapter(dev_priv, ddc_pin)); 1035 1041 1036 1042 drm_encoder_init(&dev_priv->drm, &crt->base.base, &intel_crt_enc_funcs, 1037 1043 DRM_MODE_ENCODER_DAC, "CRT");
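The DVI-I detection rework above replaces open-coded peeking at `edid->input & DRM_EDID_INPUT_DIGITAL` with `drm_edid_is_digital()`. The underlying EDID fact is that byte 20 of the base block (Video Input Definition) carries a digital/analog flag in bit 7. A minimal stand-in for the helper, operating on a raw EDID buffer (the real DRM function takes a `struct drm_edid`):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* EDID 1.x base block: byte 20 is "Video Input Definition";
 * bit 7 set means a digital input, clear means analog (VGA) */
#define EDID_INPUT_BYTE		20
#define EDID_INPUT_DIGITAL	0x80

/* sketch of drm_edid_is_digital(): true if the sink reports a
 * digital input, i.e. it is not a CRT on the shared DDC link */
static bool edid_is_digital(const uint8_t *edid, size_t len)
{
	return len > EDID_INPUT_BYTE &&
	       (edid[EDID_INPUT_BYTE] & EDID_INPUT_DIGITAL);
}
```

This is exactly the test the CRT code needs on a DVI-I connector: an EDID that reports a digital panel means the analog encoder should report "not detected" even though the shared DDC read succeeded.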
+76 -42
drivers/gpu/drm/i915/display/intel_crtc.c
··· 24 24 #include "intel_display_trace.h" 25 25 #include "intel_display_types.h" 26 26 #include "intel_drrs.h" 27 + #include "intel_dsb.h" 27 28 #include "intel_dsi.h" 28 29 #include "intel_fifo_underrun.h" 29 30 #include "intel_pipe_crc.h" ··· 176 175 crtc_state->hsw_workaround_pipe = INVALID_PIPE; 177 176 crtc_state->scaler_state.scaler_id = -1; 178 177 crtc_state->mst_master_transcoder = INVALID_TRANSCODER; 178 + crtc_state->max_link_bpp_x16 = INT_MAX; 179 179 } 180 180 181 181 static struct intel_crtc *intel_crtc_alloc(void) ··· 396 394 return crtc_state->hw.active && 397 395 !intel_crtc_needs_modeset(crtc_state) && 398 396 !crtc_state->preload_luts && 399 - intel_crtc_needs_color_update(crtc_state); 397 + intel_crtc_needs_color_update(crtc_state) && 398 + !intel_color_uses_dsb(crtc_state); 400 399 } 401 400 402 401 static void intel_crtc_vblank_work(struct kthread_work *base) ··· 471 468 return vblank_start; 472 469 } 473 470 471 + static void intel_crtc_vblank_evade_scanlines(struct intel_atomic_state *state, 472 + struct intel_crtc *crtc, 473 + int *min, int *max, int *vblank_start) 474 + { 475 + const struct intel_crtc_state *old_crtc_state = 476 + intel_atomic_get_old_crtc_state(state, crtc); 477 + const struct intel_crtc_state *new_crtc_state = 478 + intel_atomic_get_new_crtc_state(state, crtc); 479 + const struct intel_crtc_state *crtc_state; 480 + const struct drm_display_mode *adjusted_mode; 481 + 482 + /* 483 + * During fastsets/etc. the transcoder is still 484 + * running with the old timings at this point. 485 + * 486 + * TODO: maybe just use the active timings here? 
487 + */ 488 + if (intel_crtc_needs_modeset(new_crtc_state)) 489 + crtc_state = new_crtc_state; 490 + else 491 + crtc_state = old_crtc_state; 492 + 493 + adjusted_mode = &crtc_state->hw.adjusted_mode; 494 + 495 + if (crtc->mode_flags & I915_MODE_FLAG_VRR) { 496 + /* timing changes should happen with VRR disabled */ 497 + drm_WARN_ON(state->base.dev, intel_crtc_needs_modeset(new_crtc_state) || 498 + new_crtc_state->update_m_n || new_crtc_state->update_lrr); 499 + 500 + if (intel_vrr_is_push_sent(crtc_state)) 501 + *vblank_start = intel_vrr_vmin_vblank_start(crtc_state); 502 + else 503 + *vblank_start = intel_vrr_vmax_vblank_start(crtc_state); 504 + } else { 505 + *vblank_start = intel_mode_vblank_start(adjusted_mode); 506 + } 507 + 508 + /* FIXME needs to be calibrated sensibly */ 509 + *min = *vblank_start - intel_usecs_to_scanlines(adjusted_mode, 510 + VBLANK_EVASION_TIME_US); 511 + *max = *vblank_start - 1; 512 + 513 + /* 514 + * M/N and TRANS_VTOTAL are double buffered on the transcoder's 515 + * undelayed vblank, so with seamless M/N and LRR we must evade 516 + * both vblanks. 517 + * 518 + * DSB execution waits for the transcoder's undelayed vblank, 519 + * hence we must kick off the commit before that. 520 + */ 521 + if (new_crtc_state->dsb || new_crtc_state->update_m_n || new_crtc_state->update_lrr) 522 + *min -= adjusted_mode->crtc_vblank_start - adjusted_mode->crtc_vdisplay; 523 + } 524 + 474 525 /** 475 526 * intel_pipe_update_start() - start update of a set of display registers 476 - * @new_crtc_state: the new crtc state 527 + * @state: the atomic state 528 + * @crtc: the crtc 477 529 * 478 530 * Mark the start of an update to pipe registers that should be updated 479 531 * atomically regarding vblank. If the next vblank will happens within ··· 538 480 * until a subsequent call to intel_pipe_update_end(). That is done to 539 481 * avoid random delays. 
540 482 */ 541 - void intel_pipe_update_start(struct intel_crtc_state *new_crtc_state) 483 + void intel_pipe_update_start(struct intel_atomic_state *state, 484 + struct intel_crtc *crtc) 542 485 { 543 - struct intel_crtc *crtc = to_intel_crtc(new_crtc_state->uapi.crtc); 544 486 struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); 545 - const struct drm_display_mode *adjusted_mode = &new_crtc_state->hw.adjusted_mode; 487 + struct intel_crtc_state *new_crtc_state = 488 + intel_atomic_get_new_crtc_state(state, crtc); 546 489 long timeout = msecs_to_jiffies_timeout(1); 547 490 int scanline, min, max, vblank_start; 548 491 wait_queue_head_t *wq = drm_crtc_vblank_waitqueue(&crtc->base); ··· 559 500 if (intel_crtc_needs_vblank_work(new_crtc_state)) 560 501 intel_crtc_vblank_work_init(new_crtc_state); 561 502 562 - if (new_crtc_state->vrr.enable) { 563 - if (intel_vrr_is_push_sent(new_crtc_state)) 564 - vblank_start = intel_vrr_vmin_vblank_start(new_crtc_state); 565 - else 566 - vblank_start = intel_vrr_vmax_vblank_start(new_crtc_state); 567 - } else { 568 - vblank_start = intel_mode_vblank_start(adjusted_mode); 569 - } 570 - 571 - /* FIXME needs to be calibrated sensibly */ 572 - min = vblank_start - intel_usecs_to_scanlines(adjusted_mode, 573 - VBLANK_EVASION_TIME_US); 574 - max = vblank_start - 1; 575 - 576 - /* 577 - * M/N is double buffered on the transcoder's undelayed vblank, 578 - * so with seamless M/N we must evade both vblanks. 
579 - */ 580 - if (new_crtc_state->seamless_m_n && intel_crtc_needs_fastset(new_crtc_state)) 581 - min -= adjusted_mode->crtc_vblank_start - adjusted_mode->crtc_vdisplay; 582 - 503 + intel_crtc_vblank_evade_scanlines(state, crtc, &min, &max, &vblank_start); 583 504 if (min <= 0 || max <= 0) 584 505 goto irq_disable; 585 506 ··· 670 631 671 632 /** 672 633 * intel_pipe_update_end() - end update of a set of display registers 673 - * @new_crtc_state: the new crtc state 634 + * @state: the atomic state 635 + * @crtc: the crtc 674 636 * 675 637 * Mark the end of an update started with intel_pipe_update_start(). This 676 638 * re-enables interrupts and verifies the update was actually completed 677 639 * before a vblank. 678 640 */ 679 - void intel_pipe_update_end(struct intel_crtc_state *new_crtc_state) 641 + void intel_pipe_update_end(struct intel_atomic_state *state, 642 + struct intel_crtc *crtc) 680 643 { 681 - struct intel_crtc *crtc = to_intel_crtc(new_crtc_state->uapi.crtc); 644 + struct intel_crtc_state *new_crtc_state = 645 + intel_atomic_get_new_crtc_state(state, crtc); 682 646 enum pipe pipe = crtc->pipe; 683 647 int scanline_end = intel_get_crtc_scanline(crtc); 684 648 u32 end_vbl_count = intel_crtc_get_vblank_counter(crtc); 685 649 ktime_t end_vbl_time = ktime_get(); 686 650 struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); 687 651 688 - intel_psr_unlock(new_crtc_state); 689 - 690 652 if (new_crtc_state->do_async_flip) 691 - return; 653 + goto out; 692 654 693 655 trace_intel_pipe_update_end(crtc, end_vbl_count, scanline_end); 694 656 ··· 737 697 */ 738 698 intel_vrr_send_push(new_crtc_state); 739 699 740 - /* 741 - * Seamless M/N update may need to update frame timings. 742 - * 743 - * FIXME Should be synchronized with the start of vblank somehow... 
744 - */ 745 - if (new_crtc_state->seamless_m_n && intel_crtc_needs_fastset(new_crtc_state)) 746 - intel_crtc_update_active_timings(new_crtc_state, 747 - new_crtc_state->vrr.enable); 748 - 749 700 local_irq_enable(); 750 701 751 702 if (intel_vgpu_active(dev_priv)) 752 - return; 703 + goto out; 753 704 754 705 if (crtc->debug.start_vbl_count && 755 706 crtc->debug.start_vbl_count != end_vbl_count) { ··· 755 724 } 756 725 757 726 dbg_vblank_evade(crtc, end_vbl_time); 727 + 728 + out: 729 + intel_psr_unlock(new_crtc_state); 758 730 }
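The intel_crtc.c hunks above move the evasion-window math out of intel_pipe_update_start() into intel_crtc_vblank_evade_scanlines(). As an illustration only (not the driver code), the window computation can be sketched in Python; the VBLANK_EVASION_TIME_US value of 100 and the widen-to-undelayed-vblank rule come from the diff, while the round-up form of intel_usecs_to_scanlines() and the example mode numbers are assumptions.

```python
# Hypothetical sketch of the vblank-evasion window picked by
# intel_crtc_vblank_evade_scanlines() in this series (not the kernel code).

VBLANK_EVASION_TIME_US = 100  # evasion budget used by the driver

def usecs_to_scanlines(usecs, crtc_clock_khz, htotal):
    # assumed to mirror intel_usecs_to_scanlines(): round up
    return -(-usecs * crtc_clock_khz // (1000 * htotal))

def evade_scanlines(vblank_start, vdisplay, crtc_clock_khz, htotal,
                    needs_undelayed_vblank):
    """Return (min, max) scanlines of the critical window to evade."""
    lo = vblank_start - usecs_to_scanlines(VBLANK_EVASION_TIME_US,
                                           crtc_clock_khz, htotal)
    hi = vblank_start - 1
    # M/N, TRANS_VTOTAL and DSB execution latch on the *undelayed*
    # vblank, so the window is widened down to vdisplay in those cases.
    if needs_undelayed_vblank:
        lo -= vblank_start - vdisplay
    return lo, hi

# Example: 1920x1080@60 (htotal 2200, pixel clock 148500 kHz),
# delayed vblank starting at scanline 1084:
print(evade_scanlines(1084, 1080, 148500, 2200, False))  # (1077, 1083)
print(evade_scanlines(1084, 1080, 148500, 2200, True))   # (1073, 1083)
```

The DSB case is why the commit kicks off the commit before the undelayed vblank: DSB execution waits for it, so starting later would slip a full frame.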
+4 -2
drivers/gpu/drm/i915/display/intel_crtc.h
··· 36 36 u32 intel_crtc_get_vblank_counter(struct intel_crtc *crtc); 37 37 void intel_crtc_vblank_on(const struct intel_crtc_state *crtc_state); 38 38 void intel_crtc_vblank_off(const struct intel_crtc_state *crtc_state); 39 - void intel_pipe_update_start(struct intel_crtc_state *new_crtc_state); 40 - void intel_pipe_update_end(struct intel_crtc_state *new_crtc_state); 39 + void intel_pipe_update_start(struct intel_atomic_state *state, 40 + struct intel_crtc *crtc); 41 + void intel_pipe_update_end(struct intel_atomic_state *state, 42 + struct intel_crtc *crtc); 41 43 void intel_wait_for_vblank_workers(struct intel_atomic_state *state); 42 44 struct intel_crtc *intel_first_crtc(struct drm_i915_private *i915); 43 45 struct intel_crtc *intel_crtc_for_pipe(struct drm_i915_private *i915,
+3
drivers/gpu/drm/i915/display/intel_crtc_state_dump.c
··· 258 258 intel_dump_m_n_config(pipe_config, "dp m2_n2", 259 259 pipe_config->lane_count, 260 260 &pipe_config->dp_m2_n2); 261 + drm_dbg_kms(&i915->drm, "fec: %s, enhanced framing: %s\n", 262 + str_enabled_disabled(pipe_config->fec_enable), 263 + str_enabled_disabled(pipe_config->enhanced_framing)); 261 264 } 262 265 263 266 drm_dbg_kms(&i915->drm, "framestart delay: %d, MSA timing delay: %d\n",
+106 -97
drivers/gpu/drm/i915/display/intel_cx0_phy.c
··· 31 31 32 32 bool intel_is_c10phy(struct drm_i915_private *i915, enum phy phy) 33 33 { 34 - if (IS_METEORLAKE(i915) && (phy < PHY_C)) 34 + if (DISPLAY_VER_FULL(i915) == IP_VER(14, 0) && phy < PHY_C) 35 35 return true; 36 36 37 37 return false; ··· 46 46 return ilog2(lane_mask); 47 47 } 48 48 49 + static u8 intel_cx0_get_owned_lane_mask(struct drm_i915_private *i915, 50 + struct intel_encoder *encoder) 51 + { 52 + struct intel_digital_port *dig_port = enc_to_dig_port(encoder); 53 + 54 + if (!intel_tc_port_in_dp_alt_mode(dig_port)) 55 + return INTEL_CX0_BOTH_LANES; 56 + 57 + /* 58 + * In DP-alt with pin assignment D, only PHY lane 0 is owned 59 + * by display and lane 1 is owned by USB. 60 + */ 61 + return intel_tc_port_max_lane_count(dig_port) > 2 62 + ? INTEL_CX0_BOTH_LANES : INTEL_CX0_LANE0; 63 + } 64 + 49 65 static void 50 66 assert_dc_off(struct drm_i915_private *i915) 51 67 { ··· 71 55 drm_WARN_ON(&i915->drm, !enabled); 72 56 } 73 57 58 + static void intel_cx0_program_msgbus_timer(struct intel_encoder *encoder) 59 + { 60 + int lane; 61 + struct drm_i915_private *i915 = to_i915(encoder->base.dev); 62 + 63 + for_each_cx0_lane_in_mask(INTEL_CX0_BOTH_LANES, lane) 64 + intel_de_rmw(i915, 65 + XELPDP_PORT_MSGBUS_TIMER(encoder->port, lane), 66 + XELPDP_PORT_MSGBUS_TIMER_VAL_MASK, 67 + XELPDP_PORT_MSGBUS_TIMER_VAL); 68 + } 69 + 74 70 /* 75 71 * Prepare HW for CX0 phy transactions. 76 72 * 77 73 * It is required that PSR and DC5/6 are disabled before any CX0 message 78 74 * bus transaction is executed. 75 + * 76 + * We also do the msgbus timer programming here to ensure that the timer 77 + * is already programmed before any access to the msgbus. 
79 78 */ 80 79 static intel_wakeref_t intel_cx0_phy_transaction_begin(struct intel_encoder *encoder) 81 80 { 81 + intel_wakeref_t wakeref; 82 82 struct drm_i915_private *i915 = to_i915(encoder->base.dev); 83 83 struct intel_dp *intel_dp = enc_to_intel_dp(encoder); 84 84 85 85 intel_psr_pause(intel_dp); 86 - return intel_display_power_get(i915, POWER_DOMAIN_DC_OFF); 86 + wakeref = intel_display_power_get(i915, POWER_DOMAIN_DC_OFF); 87 + intel_cx0_program_msgbus_timer(encoder); 88 + 89 + return wakeref; 87 90 } 88 91 89 92 static void intel_cx0_phy_transaction_end(struct intel_encoder *encoder, intel_wakeref_t wakeref) ··· 151 116 XELPDP_MSGBUS_TIMEOUT_SLOW, val)) { 152 117 drm_dbg_kms(&i915->drm, "PHY %c Timeout waiting for message ACK. Status: 0x%x\n", 153 118 phy_name(phy), *val); 119 + 120 + if (!(intel_de_read(i915, XELPDP_PORT_MSGBUS_TIMER(port, lane)) & 121 + XELPDP_PORT_MSGBUS_TIMER_TIMED_OUT)) 122 + drm_dbg_kms(&i915->drm, 123 + "PHY %c Hardware did not detect a timeout\n", 124 + phy_name(phy)); 125 + 154 126 intel_cx0_bus_reset(i915, port, lane); 155 127 return -ETIMEDOUT; 156 128 } ··· 401 359 struct drm_i915_private *i915 = to_i915(encoder->base.dev); 402 360 const struct intel_ddi_buf_trans *trans; 403 361 enum phy phy = intel_port_to_phy(i915, encoder->port); 362 + u8 owned_lane_mask = intel_cx0_get_owned_lane_mask(i915, encoder); 404 363 intel_wakeref_t wakeref; 405 364 int n_entries, ln; 406 365 ··· 414 371 } 415 372 416 373 if (intel_is_c10phy(i915, phy)) { 417 - intel_cx0_rmw(i915, encoder->port, INTEL_CX0_BOTH_LANES, PHY_C10_VDR_CONTROL(1), 374 + intel_cx0_rmw(i915, encoder->port, owned_lane_mask, PHY_C10_VDR_CONTROL(1), 418 375 0, C10_VDR_CTRL_MSGBUS_ACCESS, MB_WRITE_COMMITTED); 419 - intel_cx0_rmw(i915, encoder->port, INTEL_CX0_BOTH_LANES, PHY_C10_VDR_CMN(3), 376 + intel_cx0_rmw(i915, encoder->port, owned_lane_mask, PHY_C10_VDR_CMN(3), 420 377 C10_CMN3_TXVBOOST_MASK, 421 378 C10_CMN3_TXVBOOST(intel_c10_get_tx_vboost_lvl(crtc_state)), 422 379 
MB_WRITE_UNCOMMITTED); 423 - intel_cx0_rmw(i915, encoder->port, INTEL_CX0_BOTH_LANES, PHY_C10_VDR_TX(1), 380 + intel_cx0_rmw(i915, encoder->port, owned_lane_mask, PHY_C10_VDR_TX(1), 424 381 C10_TX1_TERMCTL_MASK, 425 382 C10_TX1_TERMCTL(intel_c10_get_tx_term_ctl(crtc_state)), 426 383 MB_WRITE_COMMITTED); ··· 428 385 429 386 for (ln = 0; ln < crtc_state->lane_count; ln++) { 430 387 int level = intel_ddi_level(encoder, crtc_state, ln); 431 - int lane, tx; 388 + int lane = ln / 2; 389 + int tx = ln % 2; 390 + u8 lane_mask = lane == 0 ? INTEL_CX0_LANE0 : INTEL_CX0_LANE1; 432 391 433 - lane = ln / 2; 434 - tx = ln % 2; 392 + if (!(lane_mask & owned_lane_mask)) 393 + continue; 435 394 436 - intel_cx0_rmw(i915, encoder->port, BIT(lane), PHY_CX0_VDROVRD_CTL(lane, tx, 0), 395 + intel_cx0_rmw(i915, encoder->port, lane_mask, PHY_CX0_VDROVRD_CTL(lane, tx, 0), 437 396 C10_PHY_OVRD_LEVEL_MASK, 438 397 C10_PHY_OVRD_LEVEL(trans->entries[level].snps.pre_cursor), 439 398 MB_WRITE_COMMITTED); 440 - intel_cx0_rmw(i915, encoder->port, BIT(lane), PHY_CX0_VDROVRD_CTL(lane, tx, 1), 399 + intel_cx0_rmw(i915, encoder->port, lane_mask, PHY_CX0_VDROVRD_CTL(lane, tx, 1), 441 400 C10_PHY_OVRD_LEVEL_MASK, 442 401 C10_PHY_OVRD_LEVEL(trans->entries[level].snps.vswing), 443 402 MB_WRITE_COMMITTED); 444 - intel_cx0_rmw(i915, encoder->port, BIT(lane), PHY_CX0_VDROVRD_CTL(lane, tx, 2), 403 + intel_cx0_rmw(i915, encoder->port, lane_mask, PHY_CX0_VDROVRD_CTL(lane, tx, 2), 445 404 C10_PHY_OVRD_LEVEL_MASK, 446 405 C10_PHY_OVRD_LEVEL(trans->entries[level].snps.post_cursor), 447 406 MB_WRITE_COMMITTED); 448 407 } 449 408 450 409 /* Write Override enables in 0xD71 */ 451 - intel_cx0_rmw(i915, encoder->port, INTEL_CX0_BOTH_LANES, PHY_C10_VDR_OVRD, 410 + intel_cx0_rmw(i915, encoder->port, owned_lane_mask, PHY_C10_VDR_OVRD, 452 411 0, PHY_C10_VDR_OVRD_TX1 | PHY_C10_VDR_OVRD_TX2, 453 412 MB_WRITE_COMMITTED); 454 413 455 414 if (intel_is_c10phy(i915, phy)) 456 - intel_cx0_rmw(i915, encoder->port, 
INTEL_CX0_BOTH_LANES, PHY_C10_VDR_CONTROL(1), 415 + intel_cx0_rmw(i915, encoder->port, owned_lane_mask, PHY_C10_VDR_CONTROL(1), 457 416 0, C10_VDR_CTRL_UPDATE_CFG, MB_WRITE_COMMITTED); 458 417 459 418 intel_cx0_phy_transaction_end(encoder, wakeref); ··· 2579 2534 { 2580 2535 enum port port = encoder->port; 2581 2536 enum phy phy = intel_port_to_phy(i915, port); 2582 - bool both_lanes = intel_tc_port_fia_max_lane_count(enc_to_dig_port(encoder)) > 2; 2583 - u8 lane_mask = lane_reversal ? INTEL_CX0_LANE1 : 2584 - INTEL_CX0_LANE0; 2585 - u32 lane_pipe_reset = both_lanes ? 2586 - XELPDP_LANE_PIPE_RESET(0) | 2587 - XELPDP_LANE_PIPE_RESET(1) : 2588 - XELPDP_LANE_PIPE_RESET(0); 2589 - u32 lane_phy_current_status = both_lanes ? 2590 - XELPDP_LANE_PHY_CURRENT_STATUS(0) | 2591 - XELPDP_LANE_PHY_CURRENT_STATUS(1) : 2592 - XELPDP_LANE_PHY_CURRENT_STATUS(0); 2537 + u8 owned_lane_mask = intel_cx0_get_owned_lane_mask(i915, encoder); 2538 + u8 lane_mask = lane_reversal ? INTEL_CX0_LANE1 : INTEL_CX0_LANE0; 2539 + u32 lane_pipe_reset = owned_lane_mask == INTEL_CX0_BOTH_LANES 2540 + ? XELPDP_LANE_PIPE_RESET(0) | XELPDP_LANE_PIPE_RESET(1) 2541 + : XELPDP_LANE_PIPE_RESET(0); 2542 + u32 lane_phy_current_status = owned_lane_mask == INTEL_CX0_BOTH_LANES 2543 + ? (XELPDP_LANE_PHY_CURRENT_STATUS(0) | 2544 + XELPDP_LANE_PHY_CURRENT_STATUS(1)) 2545 + : XELPDP_LANE_PHY_CURRENT_STATUS(0); 2593 2546 2594 2547 if (__intel_de_wait_for_register(i915, XELPDP_PORT_BUF_CTL1(port), 2595 2548 XELPDP_PORT_BUF_SOC_PHY_READY, ··· 2607 2564 phy_name(phy), XELPDP_PORT_RESET_START_TIMEOUT_US); 2608 2565 2609 2566 intel_de_rmw(i915, XELPDP_PORT_CLOCK_CTL(port), 2610 - intel_cx0_get_pclk_refclk_request(both_lanes ? 
2611 - INTEL_CX0_BOTH_LANES : 2612 - INTEL_CX0_LANE0), 2567 + intel_cx0_get_pclk_refclk_request(owned_lane_mask), 2613 2568 intel_cx0_get_pclk_refclk_request(lane_mask)); 2614 2569 2615 2570 if (__intel_de_wait_for_register(i915, XELPDP_PORT_CLOCK_CTL(port), 2616 - intel_cx0_get_pclk_refclk_ack(both_lanes ? 2617 - INTEL_CX0_BOTH_LANES : 2618 - INTEL_CX0_LANE0), 2571 + intel_cx0_get_pclk_refclk_ack(owned_lane_mask), 2619 2572 intel_cx0_get_pclk_refclk_ack(lane_mask), 2620 2573 XELPDP_REFCLK_ENABLE_TIMEOUT_US, 0, NULL)) 2621 2574 drm_warn(&i915->drm, "PHY %c failed to request refclk after %dus.\n", ··· 2633 2594 struct intel_encoder *encoder, int lane_count, 2634 2595 bool lane_reversal) 2635 2596 { 2636 - u8 l0t1, l0t2, l1t1, l1t2; 2597 + int i; 2598 + u8 disables; 2637 2599 bool dp_alt_mode = intel_tc_port_in_dp_alt_mode(enc_to_dig_port(encoder)); 2600 + u8 owned_lane_mask = intel_cx0_get_owned_lane_mask(i915, encoder); 2638 2601 enum port port = encoder->port; 2639 2602 2640 2603 if (intel_is_c10phy(i915, intel_port_to_phy(i915, port))) 2641 - intel_cx0_rmw(i915, port, INTEL_CX0_BOTH_LANES, 2604 + intel_cx0_rmw(i915, port, owned_lane_mask, 2642 2605 PHY_C10_VDR_CONTROL(1), 0, 2643 2606 C10_VDR_CTRL_MSGBUS_ACCESS, 2644 2607 MB_WRITE_COMMITTED); 2645 2608 2646 - /* TODO: DP-alt MFD case where only one PHY lane should be programmed. 
*/ 2647 - l0t1 = intel_cx0_read(i915, port, INTEL_CX0_LANE0, PHY_CX0_TX_CONTROL(1, 2)); 2648 - l0t2 = intel_cx0_read(i915, port, INTEL_CX0_LANE0, PHY_CX0_TX_CONTROL(2, 2)); 2649 - l1t1 = intel_cx0_read(i915, port, INTEL_CX0_LANE1, PHY_CX0_TX_CONTROL(1, 2)); 2650 - l1t2 = intel_cx0_read(i915, port, INTEL_CX0_LANE1, PHY_CX0_TX_CONTROL(2, 2)); 2609 + if (lane_reversal) 2610 + disables = REG_GENMASK8(3, 0) >> lane_count; 2611 + else 2612 + disables = REG_GENMASK8(3, 0) << lane_count; 2651 2613 2652 - l0t1 |= CONTROL2_DISABLE_SINGLE_TX; 2653 - l0t2 |= CONTROL2_DISABLE_SINGLE_TX; 2654 - l1t1 |= CONTROL2_DISABLE_SINGLE_TX; 2655 - l1t2 |= CONTROL2_DISABLE_SINGLE_TX; 2656 - 2657 - if (lane_reversal) { 2658 - switch (lane_count) { 2659 - case 4: 2660 - l0t1 &= ~CONTROL2_DISABLE_SINGLE_TX; 2661 - fallthrough; 2662 - case 3: 2663 - l0t2 &= ~CONTROL2_DISABLE_SINGLE_TX; 2664 - fallthrough; 2665 - case 2: 2666 - l1t1 &= ~CONTROL2_DISABLE_SINGLE_TX; 2667 - fallthrough; 2668 - case 1: 2669 - l1t2 &= ~CONTROL2_DISABLE_SINGLE_TX; 2670 - break; 2671 - default: 2672 - MISSING_CASE(lane_count); 2673 - } 2674 - } else { 2675 - switch (lane_count) { 2676 - case 4: 2677 - l1t2 &= ~CONTROL2_DISABLE_SINGLE_TX; 2678 - fallthrough; 2679 - case 3: 2680 - l1t1 &= ~CONTROL2_DISABLE_SINGLE_TX; 2681 - fallthrough; 2682 - case 2: 2683 - l0t2 &= ~CONTROL2_DISABLE_SINGLE_TX; 2684 - l0t1 &= ~CONTROL2_DISABLE_SINGLE_TX; 2685 - break; 2686 - case 1: 2687 - if (dp_alt_mode) 2688 - l0t2 &= ~CONTROL2_DISABLE_SINGLE_TX; 2689 - else 2690 - l0t1 &= ~CONTROL2_DISABLE_SINGLE_TX; 2691 - break; 2692 - default: 2693 - MISSING_CASE(lane_count); 2694 - } 2614 + if (dp_alt_mode && lane_count == 1) { 2615 + disables &= ~REG_GENMASK8(1, 0); 2616 + disables |= REG_FIELD_PREP8(REG_GENMASK8(1, 0), 0x1); 2695 2617 } 2696 2618 2697 - /* disable MLs */ 2698 - intel_cx0_write(i915, port, INTEL_CX0_LANE0, PHY_CX0_TX_CONTROL(1, 2), 2699 - l0t1, MB_WRITE_COMMITTED); 2700 - intel_cx0_write(i915, port, INTEL_CX0_LANE0, 
PHY_CX0_TX_CONTROL(2, 2), 2701 - l0t2, MB_WRITE_COMMITTED); 2702 - intel_cx0_write(i915, port, INTEL_CX0_LANE1, PHY_CX0_TX_CONTROL(1, 2), 2703 - l1t1, MB_WRITE_COMMITTED); 2704 - intel_cx0_write(i915, port, INTEL_CX0_LANE1, PHY_CX0_TX_CONTROL(2, 2), 2705 - l1t2, MB_WRITE_COMMITTED); 2619 + for (i = 0; i < 4; i++) { 2620 + int tx = i % 2 + 1; 2621 + u8 lane_mask = i < 2 ? INTEL_CX0_LANE0 : INTEL_CX0_LANE1; 2622 + 2623 + if (!(owned_lane_mask & lane_mask)) 2624 + continue; 2625 + 2626 + intel_cx0_rmw(i915, port, lane_mask, PHY_CX0_TX_CONTROL(tx, 2), 2627 + CONTROL2_DISABLE_SINGLE_TX, 2628 + disables & BIT(i) ? CONTROL2_DISABLE_SINGLE_TX : 0, 2629 + MB_WRITE_COMMITTED); 2630 + } 2706 2631 2707 2632 if (intel_is_c10phy(i915, intel_port_to_phy(i915, port))) 2708 - intel_cx0_rmw(i915, port, INTEL_CX0_BOTH_LANES, 2633 + intel_cx0_rmw(i915, port, owned_lane_mask, 2709 2634 PHY_C10_VDR_CONTROL(1), 0, 2710 2635 C10_VDR_CTRL_UPDATE_CFG, 2711 2636 MB_WRITE_COMMITTED); ··· 2724 2721 intel_cx0_powerdown_change_sequence(i915, encoder->port, INTEL_CX0_BOTH_LANES, 2725 2722 CX0_P2_STATE_READY); 2726 2723 2727 - /* 4. Program PHY internal PLL internal registers. */ 2724 + /* 2725 + * 4. Program PORT_MSGBUS_TIMER register's Message Bus Timer field to 0xA000. 2726 + * (This is done inside intel_cx0_phy_transaction_begin(), since we would need 2727 + * the right timer thresholds for readouts too.) 2728 + */ 2729 + 2730 + /* 5. Program PHY internal PLL internal registers. */ 2728 2731 if (intel_is_c10phy(i915, phy)) 2729 2732 intel_c10_pll_program(i915, crtc_state, encoder); 2730 2733 else 2731 2734 intel_c20_pll_program(i915, crtc_state, encoder); 2732 2735 2733 2736 /* 2734 - * 5. Program the enabled and disabled owned PHY lane 2737 + * 6. Program the enabled and disabled owned PHY lane 2735 2738 * transmitters over message bus 2736 2739 */ 2737 2740 intel_cx0_program_phy_lane(i915, encoder, crtc_state->lane_count, lane_reversal); 2738 2741 2739 2742 /* 2740 - * 6. 
Follow the Display Voltage Frequency Switching - Sequence 2743 + * 7. Follow the Display Voltage Frequency Switching - Sequence 2741 2744 * Before Frequency Change. We handle this step in bxt_set_cdclk(). 2742 2745 */ 2743 2746 2744 2747 /* 2745 - * 7. Program DDI_CLK_VALFREQ to match intended DDI 2748 + * 8. Program DDI_CLK_VALFREQ to match intended DDI 2746 2749 * clock frequency. 2747 2750 */ 2748 2751 intel_de_write(i915, DDI_CLK_VALFREQ(encoder->port), 2749 2752 crtc_state->port_clock); 2750 2753 2751 2754 /* 2752 - * 8. Set PORT_CLOCK_CTL register PCLK PLL Request 2755 + * 9. Set PORT_CLOCK_CTL register PCLK PLL Request 2753 2756 * LN<Lane for maxPCLK> to "1" to enable PLL. 2754 2757 */ 2755 2758 intel_de_rmw(i915, XELPDP_PORT_CLOCK_CTL(encoder->port), 2756 2759 intel_cx0_get_pclk_pll_request(INTEL_CX0_BOTH_LANES), 2757 2760 intel_cx0_get_pclk_pll_request(maxpclk_lane)); 2758 2761 2759 - /* 9. Poll on PORT_CLOCK_CTL PCLK PLL Ack LN<Lane for maxPCLK> == "1". */ 2762 + /* 10. Poll on PORT_CLOCK_CTL PCLK PLL Ack LN<Lane for maxPCLK> == "1". */ 2760 2763 if (__intel_de_wait_for_register(i915, XELPDP_PORT_CLOCK_CTL(encoder->port), 2761 2764 intel_cx0_get_pclk_pll_ack(INTEL_CX0_BOTH_LANES), 2762 2765 intel_cx0_get_pclk_pll_ack(maxpclk_lane), ··· 2771 2762 phy_name(phy), XELPDP_PCLK_PLL_ENABLE_TIMEOUT_US); 2772 2763 2773 2764 /* 2774 - * 10. Follow the Display Voltage Frequency Switching Sequence After 2765 + * 11. Follow the Display Voltage Frequency Switching Sequence After 2775 2766 * Frequency Change. We handle this step in bxt_set_cdclk(). 2776 2767 */ 2777 2768
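The intel_cx0_program_phy_lane() rewrite above collapses the old fallthrough switch ladder into a shifted 4-bit disable mask. A standalone sketch of that mask logic, as read from the hunk (bit i set means TX i gets CONTROL2_DISABLE_SINGLE_TX; this is an illustration, not the driver code):

```python
# Sketch of the per-TX disable mask in the rewritten
# intel_cx0_program_phy_lane(). Four TX total, two per PHY lane;
# masked to 4 bits here since only bits 0..3 are ever consulted.

def tx_disable_mask(lane_count, lane_reversal, dp_alt_mode):
    mask = 0xf  # REG_GENMASK8(3, 0)
    if lane_reversal:
        disables = mask >> lane_count
    else:
        disables = (mask << lane_count) & mask
    # DP-alt x1 drives lane 0 TX2 (bit 1) instead of TX1 (bit 0)
    if dp_alt_mode and lane_count == 1:
        disables = (disables & ~0b11) | 0b01
    return disables

assert tx_disable_mask(4, False, False) == 0b0000  # all four TX active
assert tx_disable_mask(2, False, False) == 0b1100  # lane 0 only
assert tx_disable_mask(2, True,  False) == 0b0011  # reversed: lane 1 only
assert tx_disable_mask(1, False, True)  == 0b1101  # DP-alt x1: lane 0 TX2
```

These outcomes match the removed switch statements case by case, which is what makes the shift form a safe simplification, and the new `owned_lane_mask` check then skips programming any TX on a lane owned by USB.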
+8 -6
drivers/gpu/drm/i915/display/intel_cx0_phy.h
··· 10 10 #include <linux/bitfield.h> 11 11 #include <linux/bits.h> 12 12 13 - #include "i915_drv.h" 14 - #include "intel_display_types.h" 15 - 16 - struct drm_i915_private; 17 - struct intel_encoder; 18 - struct intel_crtc_state; 19 13 enum icl_port_dpll_id; 20 14 enum phy; 15 + struct drm_i915_private; 16 + struct intel_atomic_state; 17 + struct intel_c10pll_state; 18 + struct intel_c20pll_state; 19 + struct intel_crtc_state; 20 + struct intel_encoder; 21 + struct intel_hdmi; 21 22 22 23 bool intel_is_c10phy(struct drm_i915_private *dev_priv, enum phy phy); 23 24 void intel_mtl_pll_enable(struct intel_encoder *encoder, ··· 45 44 const struct intel_crtc_state *crtc_state); 46 45 int intel_cx0_phy_check_hdmi_link_rate(struct intel_hdmi *hdmi, int clock); 47 46 int intel_mtl_tbt_calc_port_clock(struct intel_encoder *encoder); 47 + 48 48 #endif /* __INTEL_CX0_PHY_H__ */
+13
drivers/gpu/drm/i915/display/intel_cx0_phy_regs.h
··· 110 110 #define CX0_P4PG_STATE_DISABLE 0xC 111 111 #define CX0_P2_STATE_RESET 0x2 112 112 113 + #define _XELPDP_PORT_MSGBUS_TIMER_LN0_A 0x640d8 114 + #define _XELPDP_PORT_MSGBUS_TIMER_LN0_B 0x641d8 115 + #define _XELPDP_PORT_MSGBUS_TIMER_LN0_USBC1 0x16f258 116 + #define _XELPDP_PORT_MSGBUS_TIMER_LN0_USBC2 0x16f458 117 + #define XELPDP_PORT_MSGBUS_TIMER(port, lane) _MMIO(_PICK_EVEN_2RANGES(port, PORT_TC1, \ 118 + _XELPDP_PORT_MSGBUS_TIMER_LN0_A, \ 119 + _XELPDP_PORT_MSGBUS_TIMER_LN0_B, \ 120 + _XELPDP_PORT_MSGBUS_TIMER_LN0_USBC1, \ 121 + _XELPDP_PORT_MSGBUS_TIMER_LN0_USBC2) + (lane) * 4) 122 + #define XELPDP_PORT_MSGBUS_TIMER_TIMED_OUT REG_BIT(31) 123 + #define XELPDP_PORT_MSGBUS_TIMER_VAL_MASK REG_GENMASK(23, 0) 124 + #define XELPDP_PORT_MSGBUS_TIMER_VAL REG_FIELD_PREP(XELPDP_PORT_MSGBUS_TIMER_VAL_MASK, 0xa000) 125 + 113 126 #define _XELPDP_PORT_CLOCK_CTL_A 0x640E0 114 127 #define _XELPDP_PORT_CLOCK_CTL_B 0x641E0 115 128 #define _XELPDP_PORT_CLOCK_CTL_USBC1 0x16F260
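The new XELPDP_PORT_MSGBUS_TIMER() macro above resolves one register per (port, lane) out of two evenly spaced MMIO ranges via _PICK_EVEN_2RANGES(). A rough sketch of that address arithmetic, using the offsets from the hunk; the PORT_TC1 enum value of 4 is an assumed placeholder for illustration only:

```python
# Sketch of XELPDP_PORT_MSGBUS_TIMER(port, lane) address selection.
# _PICK_EVEN_2RANGES() picks from one of two evenly spaced ranges;
# offsets are from the hunk, PORT_TC1 = 4 is assumed.

PORT_A, PORT_B, PORT_TC1, PORT_TC2 = 0, 1, 4, 5

def msgbus_timer_offset(port, lane):
    if port < PORT_TC1:
        # DDI A/B range, stride derived from the A/B offsets
        base = 0x640d8 + port * (0x641d8 - 0x640d8)
    else:
        # Type-C range, stride derived from the USBC1/USBC2 offsets
        base = 0x16f258 + (port - PORT_TC1) * (0x16f458 - 0x16f258)
    return base + lane * 4  # per-lane registers are 4 bytes apart

assert hex(msgbus_timer_offset(PORT_A, 0)) == '0x640d8'
assert hex(msgbus_timer_offset(PORT_B, 1)) == '0x641dc'
assert hex(msgbus_timer_offset(PORT_TC2, 1)) == '0x16f45c'
```

The timer field itself is then programmed to 0xa000 via XELPDP_PORT_MSGBUS_TIMER_VAL before any message-bus access, matching the new step 4 in the PLL enable sequence.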
+20 -18
drivers/gpu/drm/i915/display/intel_ddi.c
··· 3248 3248 intel_ddi_enable_transcoder_func(encoder, crtc_state); 3249 3249 3250 3250 /* Enable/Disable DP2.0 SDP split config before transcoder */ 3251 - intel_audio_sdp_split_update(encoder, crtc_state); 3251 + intel_audio_sdp_split_update(crtc_state); 3252 3252 3253 3253 intel_enable_transcoder(crtc_state); 3254 3254 ··· 3432 3432 dp_tp_ctl |= DP_TP_CTL_MODE_MST; 3433 3433 } else { 3434 3434 dp_tp_ctl |= DP_TP_CTL_MODE_SST; 3435 - if (drm_dp_enhanced_frame_cap(intel_dp->dpcd)) 3435 + if (crtc_state->enhanced_framing) 3436 3436 dp_tp_ctl |= DP_TP_CTL_ENHANCED_FRAME_ENABLE; 3437 3437 } 3438 3438 intel_de_write(dev_priv, dp_tp_ctl_reg(encoder, crtc_state), dp_tp_ctl); ··· 3489 3489 dp_tp_ctl |= DP_TP_CTL_MODE_MST; 3490 3490 } else { 3491 3491 dp_tp_ctl |= DP_TP_CTL_MODE_SST; 3492 - if (drm_dp_enhanced_frame_cap(intel_dp->dpcd)) 3492 + if (crtc_state->enhanced_framing) 3493 3493 dp_tp_ctl |= DP_TP_CTL_ENHANCED_FRAME_ENABLE; 3494 3494 } 3495 3495 intel_de_write(dev_priv, dp_tp_ctl_reg(encoder, crtc_state), dp_tp_ctl); ··· 3724 3724 intel_cpu_transcoder_get_m2_n2(crtc, cpu_transcoder, 3725 3725 &pipe_config->dp_m2_n2); 3726 3726 3727 - if (DISPLAY_VER(dev_priv) >= 11) { 3728 - i915_reg_t dp_tp_ctl = dp_tp_ctl_reg(encoder, pipe_config); 3727 + pipe_config->enhanced_framing = 3728 + intel_de_read(dev_priv, dp_tp_ctl_reg(encoder, pipe_config)) & 3729 + DP_TP_CTL_ENHANCED_FRAME_ENABLE; 3729 3730 3731 + if (DISPLAY_VER(dev_priv) >= 11) 3730 3732 pipe_config->fec_enable = 3731 - intel_de_read(dev_priv, dp_tp_ctl) & DP_TP_CTL_FEC_ENABLE; 3732 - 3733 - drm_dbg_kms(&dev_priv->drm, 3734 - "[ENCODER:%d:%s] Fec status: %u\n", 3735 - encoder->base.base.id, encoder->base.name, 3736 - pipe_config->fec_enable); 3737 - } 3733 + intel_de_read(dev_priv, 3734 + dp_tp_ctl_reg(encoder, pipe_config)) & DP_TP_CTL_FEC_ENABLE; 3738 3735 3739 3736 if (dig_port->lspcon.active && intel_dp_has_hdmi_sink(&dig_port->dp)) 3740 3737 pipe_config->infoframes.enable |= ··· 3744 3747 if 
(!HAS_DP20(dev_priv)) { 3745 3748 /* FDI */ 3746 3749 pipe_config->output_types |= BIT(INTEL_OUTPUT_ANALOG); 3750 + pipe_config->enhanced_framing = 3751 + intel_de_read(dev_priv, dp_tp_ctl_reg(encoder, pipe_config)) & 3752 + DP_TP_CTL_ENHANCED_FRAME_ENABLE; 3747 3753 break; 3748 3754 } 3749 3755 fallthrough; /* 128b/132b */ ··· 3761 3761 3762 3762 intel_cpu_transcoder_get_m1_n1(crtc, cpu_transcoder, 3763 3763 &pipe_config->dp_m_n); 3764 + 3765 + if (DISPLAY_VER(dev_priv) >= 11) 3766 + pipe_config->fec_enable = 3767 + intel_de_read(dev_priv, 3768 + dp_tp_ctl_reg(encoder, pipe_config)) & DP_TP_CTL_FEC_ENABLE; 3764 3769 3765 3770 pipe_config->infoframes.enable |= 3766 3771 intel_hdmi_infoframes_enabled(encoder, pipe_config); ··· 3862 3857 crtc_state->port_clock = intel_mtl_tbt_calc_port_clock(encoder); 3863 3858 } else if (intel_is_c10phy(i915, phy)) { 3864 3859 intel_c10pll_readout_hw_state(encoder, &crtc_state->cx0pll_state.c10); 3865 - intel_c10pll_dump_hw_state(i915, &crtc_state->cx0pll_state.c10); 3866 3860 crtc_state->port_clock = intel_c10pll_calc_port_clock(encoder, &crtc_state->cx0pll_state.c10); 3867 3861 } else { 3868 3862 intel_c20pll_readout_hw_state(encoder, &crtc_state->cx0pll_state.c20); 3869 - intel_c20pll_dump_hw_state(i915, &crtc_state->cx0pll_state.c20); 3870 3863 crtc_state->port_clock = intel_c20pll_calc_port_clock(encoder, &crtc_state->cx0pll_state.c20); 3871 3864 } 3872 3865 ··· 4176 4173 struct drm_connector *connector = conn_state->connector; 4177 4174 u8 port_sync_transcoders = 0; 4178 4175 4179 - drm_dbg_kms(&i915->drm, "[ENCODER:%d:%s] [CRTC:%d:%s]", 4176 + drm_dbg_kms(&i915->drm, "[ENCODER:%d:%s] [CRTC:%d:%s]\n", 4180 4177 encoder->base.base.id, encoder->base.name, 4181 4178 crtc_state->uapi.crtc->base.id, crtc_state->uapi.crtc->name); 4182 4179 ··· 4326 4323 struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); 4327 4324 struct intel_hdmi *hdmi = enc_to_intel_hdmi(encoder); 4328 4325 struct intel_connector *connector = 
hdmi->attached_connector; 4329 - struct i2c_adapter *adapter = 4330 - intel_gmbus_get_adapter(dev_priv, hdmi->ddc_bus); 4326 + struct i2c_adapter *ddc = connector->base.ddc; 4331 4327 struct drm_connector_state *conn_state; 4332 4328 struct intel_crtc_state *crtc_state; 4333 4329 struct intel_crtc *crtc; ··· 4367 4365 !try_wait_for_completion(&conn_state->commit->hw_done)) 4368 4366 return 0; 4369 4367 4370 - ret = drm_scdc_readb(adapter, SCDC_TMDS_CONFIG, &config); 4368 + ret = drm_scdc_readb(ddc, SCDC_TMDS_CONFIG, &config); 4371 4369 if (ret < 0) { 4372 4370 drm_err(&dev_priv->drm, "[CONNECTOR:%d:%s] Failed to read TMDS config: %d\n", 4373 4371 connector->base.base.id, connector->base.name, ret);
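The intel_ddi.c hunks switch enhanced framing from a sink-capability check (drm_dp_enhanced_frame_cap()) at enable time to a tracked crtc_state field that is programmed into DP_TP_CTL and read back for the state checker. The round-trip can be illustrated with a minimal sketch; the bit position 18 is an assumption for this sketch, the real driver uses the DP_TP_CTL_ENHANCED_FRAME_ENABLE definition:

```python
# Illustration of the enhanced-framing readout flow added in this hunk
# (not the driver code). Bit position is assumed.

DP_TP_CTL_ENHANCED_FRAME_ENABLE = 1 << 18  # assumed bit position

def program_dp_tp_ctl(dp_tp_ctl, enhanced_framing):
    """SST enable path: commit crtc_state->enhanced_framing to hardware."""
    if enhanced_framing:
        dp_tp_ctl |= DP_TP_CTL_ENHANCED_FRAME_ENABLE
    return dp_tp_ctl

def readout_enhanced_framing(dp_tp_ctl):
    """get_config path: recover the bool for the crtc state checker."""
    return bool(dp_tp_ctl & DP_TP_CTL_ENHANCED_FRAME_ENABLE)

# What was programmed is exactly what the checker reads back,
# which is what lets PIPE_CONF_CHECK_BOOL(enhanced_framing) pass.
assert readout_enhanced_framing(program_dp_tp_ctl(0, True)) is True
assert readout_enhanced_framing(program_dp_tp_ctl(0, False)) is False
```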
+339 -141
drivers/gpu/drm/i915/display/intel_display.c
··· 77 77 #include "intel_dpll_mgr.h" 78 78 #include "intel_dpt.h" 79 79 #include "intel_drrs.h" 80 + #include "intel_dsb.h" 80 81 #include "intel_dsi.h" 81 82 #include "intel_dvo.h" 82 83 #include "intel_fb.h" ··· 88 87 #include "intel_frontbuffer.h" 89 88 #include "intel_hdmi.h" 90 89 #include "intel_hotplug.h" 90 + #include "intel_link_bw.h" 91 91 #include "intel_lvds.h" 92 92 #include "intel_lvds_regs.h" 93 93 #include "intel_modeset_setup.h" ··· 728 726 tmp |= UNDERRUN_RECOVERY_DISABLE_ADLP; 729 727 730 728 /* Wa_14010547955:dg2 */ 731 - if (IS_DG2_DISPLAY_STEP(dev_priv, STEP_B0, STEP_FOREVER)) 729 + if (IS_DG2(dev_priv)) 732 730 tmp |= DG2_RENDER_CCSTAG_4_3_EN; 733 731 734 732 intel_de_write(dev_priv, PIPE_CHICKEN(pipe), tmp); ··· 915 913 return is_disabling(active_planes, old_crtc_state, new_crtc_state); 916 914 } 917 915 916 + static bool vrr_params_changed(const struct intel_crtc_state *old_crtc_state, 917 + const struct intel_crtc_state *new_crtc_state) 918 + { 919 + return old_crtc_state->vrr.flipline != new_crtc_state->vrr.flipline || 920 + old_crtc_state->vrr.vmin != new_crtc_state->vrr.vmin || 921 + old_crtc_state->vrr.vmax != new_crtc_state->vrr.vmax || 922 + old_crtc_state->vrr.guardband != new_crtc_state->vrr.guardband || 923 + old_crtc_state->vrr.pipeline_full != new_crtc_state->vrr.pipeline_full; 924 + } 925 + 918 926 static bool vrr_enabling(const struct intel_crtc_state *old_crtc_state, 919 927 const struct intel_crtc_state *new_crtc_state) 920 928 { 921 - return is_enabling(vrr.enable, old_crtc_state, new_crtc_state); 929 + return is_enabling(vrr.enable, old_crtc_state, new_crtc_state) || 930 + (new_crtc_state->vrr.enable && 931 + (new_crtc_state->update_m_n || new_crtc_state->update_lrr || 932 + vrr_params_changed(old_crtc_state, new_crtc_state))); 922 933 } 923 934 924 935 static bool vrr_disabling(const struct intel_crtc_state *old_crtc_state, 925 936 const struct intel_crtc_state *new_crtc_state) 926 937 { 927 - return 
is_disabling(vrr.enable, old_crtc_state, new_crtc_state); 938 + return is_disabling(vrr.enable, old_crtc_state, new_crtc_state) || 939 + (old_crtc_state->vrr.enable && 940 + (new_crtc_state->update_m_n || new_crtc_state->update_lrr || 941 + vrr_params_changed(old_crtc_state, new_crtc_state))); 928 942 } 929 943 930 944 #undef is_disabling ··· 1785 1767 if (IS_DG2(dev_priv)) 1786 1768 /* DG2's "TC1" output uses a SNPS PHY */ 1787 1769 return false; 1788 - else if (IS_ALDERLAKE_P(dev_priv) || IS_METEORLAKE(dev_priv)) 1770 + else if (IS_ALDERLAKE_P(dev_priv) || DISPLAY_VER_FULL(dev_priv) == IP_VER(14, 0)) 1789 1771 return phy >= PHY_F && phy <= PHY_I; 1790 1772 else if (IS_TIGERLAKE(dev_priv)) 1791 1773 return phy >= PHY_D && phy <= PHY_I; ··· 2588 2570 VTOTAL(crtc_vtotal - 1)); 2589 2571 } 2590 2572 2573 + static void intel_set_transcoder_timings_lrr(const struct intel_crtc_state *crtc_state) 2574 + { 2575 + struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc); 2576 + struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); 2577 + enum transcoder cpu_transcoder = crtc_state->cpu_transcoder; 2578 + const struct drm_display_mode *adjusted_mode = &crtc_state->hw.adjusted_mode; 2579 + u32 crtc_vdisplay, crtc_vtotal, crtc_vblank_start, crtc_vblank_end; 2580 + 2581 + crtc_vdisplay = adjusted_mode->crtc_vdisplay; 2582 + crtc_vtotal = adjusted_mode->crtc_vtotal; 2583 + crtc_vblank_start = adjusted_mode->crtc_vblank_start; 2584 + crtc_vblank_end = adjusted_mode->crtc_vblank_end; 2585 + 2586 + drm_WARN_ON(&dev_priv->drm, adjusted_mode->flags & DRM_MODE_FLAG_INTERLACE); 2587 + 2588 + /* 2589 + * The hardware actually ignores TRANS_VBLANK.VBLANK_END in DP mode. 2590 + * But let's write it anyway to keep the state checker happy. 
2591 + */ 2592 + intel_de_write(dev_priv, TRANS_VBLANK(cpu_transcoder), 2593 + VBLANK_START(crtc_vblank_start - 1) | 2594 + VBLANK_END(crtc_vblank_end - 1)); 2595 + /* 2596 + * The double buffer latch point for TRANS_VTOTAL 2597 + * is the transcoder's undelayed vblank. 2598 + */ 2599 + intel_de_write(dev_priv, TRANS_VTOTAL(cpu_transcoder), 2600 + VACTIVE(crtc_vdisplay - 1) | 2601 + VTOTAL(crtc_vtotal - 1)); 2602 + } 2603 + 2591 2604 static void intel_set_pipe_src_size(const struct intel_crtc_state *crtc_state) 2592 2605 { 2593 2606 struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc); ··· 2918 2869 } 2919 2870 } 2920 2871 2921 - static void i9xx_get_pipe_color_config(struct intel_crtc_state *crtc_state) 2922 - { 2923 - struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc); 2924 - struct intel_plane *plane = to_intel_plane(crtc->base.primary); 2925 - struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); 2926 - enum i9xx_plane_id i9xx_plane = plane->i9xx_plane; 2927 - u32 tmp; 2928 - 2929 - tmp = intel_de_read(dev_priv, DSPCNTR(i9xx_plane)); 2930 - 2931 - if (tmp & DISP_PIPE_GAMMA_ENABLE) 2932 - crtc_state->gamma_enable = true; 2933 - 2934 - if (!HAS_GMCH(dev_priv) && 2935 - tmp & DISP_PIPE_CSC_ENABLE) 2936 - crtc_state->csc_enable = true; 2937 - } 2938 - 2939 2872 static bool i9xx_get_pipe_config(struct intel_crtc *crtc, 2940 2873 struct intel_crtc_state *pipe_config) 2941 2874 { ··· 2973 2942 (tmp & TRANSCONF_WGC_ENABLE)) 2974 2943 pipe_config->wgc_enable = true; 2975 2944 2976 - if (IS_CHERRYVIEW(dev_priv)) 2977 - pipe_config->cgm_mode = intel_de_read(dev_priv, 2978 - CGM_PIPE_MODE(crtc->pipe)); 2979 - 2980 - i9xx_get_pipe_color_config(pipe_config); 2981 2945 intel_color_get_config(pipe_config); 2982 2946 2983 2947 if (DISPLAY_VER(dev_priv) < 4) ··· 3370 3344 3371 3345 pipe_config->msa_timing_delay = REG_FIELD_GET(TRANSCONF_MSA_TIMING_DELAY_MASK, tmp); 3372 3346 3373 - pipe_config->csc_mode = intel_de_read(dev_priv, 3374 - 
PIPE_CSC_MODE(crtc->pipe)); 3375 - 3376 - i9xx_get_pipe_color_config(pipe_config); 3377 3347 intel_color_get_config(pipe_config); 3378 3348 3379 3349 pipe_config->pixel_multiplier = 1; ··· 3759 3737 } 3760 3738 3761 3739 pipe_config->sink_format = pipe_config->output_format; 3762 - 3763 - pipe_config->gamma_mode = intel_de_read(dev_priv, 3764 - GAMMA_MODE(crtc->pipe)); 3765 - 3766 - pipe_config->csc_mode = intel_de_read(dev_priv, 3767 - PIPE_CSC_MODE(crtc->pipe)); 3768 - 3769 - if (DISPLAY_VER(dev_priv) >= 9) { 3770 - tmp = intel_de_read(dev_priv, SKL_BOTTOM_COLOR(crtc->pipe)); 3771 - 3772 - if (tmp & SKL_BOTTOM_COLOR_GAMMA_ENABLE) 3773 - pipe_config->gamma_enable = true; 3774 - 3775 - if (tmp & SKL_BOTTOM_COLOR_CSC_ENABLE) 3776 - pipe_config->csc_enable = true; 3777 - } else { 3778 - i9xx_get_pipe_color_config(pipe_config); 3779 - } 3780 3740 3781 3741 intel_color_get_config(pipe_config); 3782 3742 ··· 4645 4641 4646 4642 static int 4647 4643 intel_modeset_pipe_config(struct intel_atomic_state *state, 4648 - struct intel_crtc *crtc) 4644 + struct intel_crtc *crtc, 4645 + const struct intel_link_bw_limits *limits) 4649 4646 { 4650 4647 struct drm_i915_private *i915 = to_i915(crtc->base.dev); 4651 4648 struct intel_crtc_state *crtc_state = ··· 4655 4650 struct drm_connector_state *connector_state; 4656 4651 int pipe_src_w, pipe_src_h; 4657 4652 int base_bpp, ret, i; 4658 - bool retry = true; 4659 4653 4660 4654 crtc_state->cpu_transcoder = (enum transcoder) crtc->pipe; 4661 4655 ··· 4676 4672 ret = compute_baseline_pipe_bpp(state, crtc); 4677 4673 if (ret) 4678 4674 return ret; 4675 + 4676 + crtc_state->max_link_bpp_x16 = limits->max_bpp_x16[crtc->pipe]; 4677 + 4678 + if (crtc_state->pipe_bpp > to_bpp_int(crtc_state->max_link_bpp_x16)) { 4679 + drm_dbg_kms(&i915->drm, 4680 + "[CRTC:%d:%s] Link bpp limited to " BPP_X16_FMT "\n", 4681 + crtc->base.base.id, crtc->base.name, 4682 + BPP_X16_ARGS(crtc_state->max_link_bpp_x16)); 4683 + crtc_state->bw_constrained = true; 
4684 + } 4679 4685 4680 4686 base_bpp = crtc_state->pipe_bpp; 4681 4687 ··· 4728 4714 crtc_state->output_types |= BIT(encoder->type); 4729 4715 } 4730 4716 4731 - encoder_retry: 4732 4717 /* Ensure the port clock defaults are reset when retrying. */ 4733 4718 crtc_state->port_clock = 0; 4734 4719 crtc_state->pixel_multiplier = 1; ··· 4767 4754 ret = intel_crtc_compute_config(state, crtc); 4768 4755 if (ret == -EDEADLK) 4769 4756 return ret; 4770 - if (ret == -EAGAIN) { 4771 - if (drm_WARN(&i915->drm, !retry, 4772 - "[CRTC:%d:%s] loop in pipe configuration computation\n", 4773 - crtc->base.base.id, crtc->base.name)) 4774 - return -EINVAL; 4775 - 4776 - drm_dbg_kms(&i915->drm, "[CRTC:%d:%s] bw constrained, retrying\n", 4777 - crtc->base.base.id, crtc->base.name); 4778 - retry = false; 4779 - goto encoder_retry; 4780 - } 4781 4757 if (ret < 0) { 4782 4758 drm_dbg_kms(&i915->drm, "[CRTC:%d:%s] config failure: %d\n", 4783 4759 crtc->base.base.id, crtc->base.name, ret); ··· 5113 5111 PIPE_CONF_CHECK_I(name.crtc_hsync_start); \ 5114 5112 PIPE_CONF_CHECK_I(name.crtc_hsync_end); \ 5115 5113 PIPE_CONF_CHECK_I(name.crtc_vdisplay); \ 5116 - PIPE_CONF_CHECK_I(name.crtc_vtotal); \ 5117 5114 PIPE_CONF_CHECK_I(name.crtc_vblank_start); \ 5118 - PIPE_CONF_CHECK_I(name.crtc_vblank_end); \ 5119 5115 PIPE_CONF_CHECK_I(name.crtc_vsync_start); \ 5120 5116 PIPE_CONF_CHECK_I(name.crtc_vsync_end); \ 5117 + if (!fastset || !pipe_config->update_lrr) { \ 5118 + PIPE_CONF_CHECK_I(name.crtc_vtotal); \ 5119 + PIPE_CONF_CHECK_I(name.crtc_vblank_end); \ 5120 + } \ 5121 5121 } while (0) 5122 5122 5123 5123 #define PIPE_CONF_CHECK_RECT(name) do { \ ··· 5219 5215 PIPE_CONF_CHECK_X(lane_lat_optim_mask); 5220 5216 5221 5217 if (HAS_DOUBLE_BUFFERED_M_N(dev_priv)) { 5222 - if (!fastset || !pipe_config->seamless_m_n) 5218 + if (!fastset || !pipe_config->update_m_n) 5223 5219 PIPE_CONF_CHECK_M_N(dp_m_n); 5224 5220 } else { 5225 5221 PIPE_CONF_CHECK_M_N(dp_m_n); ··· 5259 5255 
PIPE_CONF_CHECK_BOOL(hdmi_scrambling); 5260 5256 PIPE_CONF_CHECK_BOOL(hdmi_high_tmds_clock_ratio); 5261 5257 PIPE_CONF_CHECK_BOOL(has_infoframe); 5258 + PIPE_CONF_CHECK_BOOL(enhanced_framing); 5262 5259 PIPE_CONF_CHECK_BOOL(fec_enable); 5263 5260 5264 5261 PIPE_CONF_CHECK_BOOL_INCOMPLETE(has_audio); ··· 5357 5352 if (IS_G4X(dev_priv) || DISPLAY_VER(dev_priv) >= 5) 5358 5353 PIPE_CONF_CHECK_I(pipe_bpp); 5359 5354 5360 - if (!fastset || !pipe_config->seamless_m_n) { 5355 + if (!fastset || !pipe_config->update_m_n) { 5361 5356 PIPE_CONF_CHECK_I(hw.pipe_mode.crtc_clock); 5362 5357 PIPE_CONF_CHECK_I(hw.adjusted_mode.crtc_clock); 5363 5358 } ··· 5382 5377 PIPE_CONF_CHECK_I(master_transcoder); 5383 5378 PIPE_CONF_CHECK_X(bigjoiner_pipes); 5384 5379 5380 + PIPE_CONF_CHECK_BOOL(dsc.config.block_pred_enable); 5381 + PIPE_CONF_CHECK_BOOL(dsc.config.convert_rgb); 5382 + PIPE_CONF_CHECK_BOOL(dsc.config.simple_422); 5383 + PIPE_CONF_CHECK_BOOL(dsc.config.native_422); 5384 + PIPE_CONF_CHECK_BOOL(dsc.config.native_420); 5385 + PIPE_CONF_CHECK_BOOL(dsc.config.vbr_enable); 5386 + PIPE_CONF_CHECK_I(dsc.config.line_buf_depth); 5387 + PIPE_CONF_CHECK_I(dsc.config.bits_per_component); 5388 + PIPE_CONF_CHECK_I(dsc.config.pic_width); 5389 + PIPE_CONF_CHECK_I(dsc.config.pic_height); 5390 + PIPE_CONF_CHECK_I(dsc.config.slice_width); 5391 + PIPE_CONF_CHECK_I(dsc.config.slice_height); 5392 + PIPE_CONF_CHECK_I(dsc.config.initial_dec_delay); 5393 + PIPE_CONF_CHECK_I(dsc.config.initial_xmit_delay); 5394 + PIPE_CONF_CHECK_I(dsc.config.scale_decrement_interval); 5395 + PIPE_CONF_CHECK_I(dsc.config.scale_increment_interval); 5396 + PIPE_CONF_CHECK_I(dsc.config.initial_scale_value); 5397 + PIPE_CONF_CHECK_I(dsc.config.first_line_bpg_offset); 5398 + PIPE_CONF_CHECK_I(dsc.config.flatness_min_qp); 5399 + PIPE_CONF_CHECK_I(dsc.config.flatness_max_qp); 5400 + PIPE_CONF_CHECK_I(dsc.config.slice_bpg_offset); 5401 + PIPE_CONF_CHECK_I(dsc.config.nfl_bpg_offset); 5402 + 
PIPE_CONF_CHECK_I(dsc.config.initial_offset); 5403 + PIPE_CONF_CHECK_I(dsc.config.final_offset); 5404 + PIPE_CONF_CHECK_I(dsc.config.rc_model_size); 5405 + PIPE_CONF_CHECK_I(dsc.config.rc_quant_incr_limit0); 5406 + PIPE_CONF_CHECK_I(dsc.config.rc_quant_incr_limit1); 5407 + PIPE_CONF_CHECK_I(dsc.config.slice_chunk_size); 5408 + PIPE_CONF_CHECK_I(dsc.config.second_line_bpg_offset); 5409 + PIPE_CONF_CHECK_I(dsc.config.nsl_bpg_offset); 5410 + 5385 5411 PIPE_CONF_CHECK_I(dsc.compression_enable); 5386 5412 PIPE_CONF_CHECK_I(dsc.dsc_split); 5387 5413 PIPE_CONF_CHECK_I(dsc.compressed_bpp); ··· 5421 5385 PIPE_CONF_CHECK_I(splitter.link_count); 5422 5386 PIPE_CONF_CHECK_I(splitter.pixel_overlap); 5423 5387 5424 - if (!fastset) 5388 + if (!fastset) { 5425 5389 PIPE_CONF_CHECK_BOOL(vrr.enable); 5426 - PIPE_CONF_CHECK_I(vrr.vmin); 5427 - PIPE_CONF_CHECK_I(vrr.vmax); 5428 - PIPE_CONF_CHECK_I(vrr.flipline); 5429 - PIPE_CONF_CHECK_I(vrr.pipeline_full); 5430 - PIPE_CONF_CHECK_I(vrr.guardband); 5390 + PIPE_CONF_CHECK_I(vrr.vmin); 5391 + PIPE_CONF_CHECK_I(vrr.vmax); 5392 + PIPE_CONF_CHECK_I(vrr.flipline); 5393 + PIPE_CONF_CHECK_I(vrr.pipeline_full); 5394 + PIPE_CONF_CHECK_I(vrr.guardband); 5395 + } 5431 5396 5432 5397 #undef PIPE_CONF_CHECK_X 5433 5398 #undef PIPE_CONF_CHECK_I ··· 5457 5420 plane_state->uapi.visible); 5458 5421 } 5459 5422 5460 - int intel_modeset_all_pipes(struct intel_atomic_state *state, 5461 - const char *reason) 5423 + static int intel_modeset_pipe(struct intel_atomic_state *state, 5424 + struct intel_crtc_state *crtc_state, 5425 + const char *reason) 5426 + { 5427 + struct drm_i915_private *i915 = to_i915(state->base.dev); 5428 + struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc); 5429 + int ret; 5430 + 5431 + drm_dbg_kms(&i915->drm, "[CRTC:%d:%s] Full modeset due to %s\n", 5432 + crtc->base.base.id, crtc->base.name, reason); 5433 + 5434 + ret = drm_atomic_add_affected_connectors(&state->base, 5435 + &crtc->base); 5436 + if (ret) 5437 + return ret; 
5438 + 5439 + ret = intel_dp_mst_add_topology_state_for_crtc(state, crtc); 5440 + if (ret) 5441 + return ret; 5442 + 5443 + ret = intel_atomic_add_affected_planes(state, crtc); 5444 + if (ret) 5445 + return ret; 5446 + 5447 + crtc_state->uapi.mode_changed = true; 5448 + 5449 + return 0; 5450 + } 5451 + 5452 + /** 5453 + * intel_modeset_pipes_in_mask_early - force a full modeset on a set of pipes 5454 + * @state: intel atomic state 5455 + * @reason: the reason for the full modeset 5456 + * @mask: mask of pipes to modeset 5457 + * 5458 + * Add pipes in @mask to @state and force a full modeset on the enabled ones 5459 + * due to the description in @reason. 5460 + * This function can be called only before new plane states are computed. 5461 + * 5462 + * Returns 0 in case of success, negative error code otherwise. 5463 + */ 5464 + int intel_modeset_pipes_in_mask_early(struct intel_atomic_state *state, 5465 + const char *reason, u8 mask) 5466 + { 5467 + struct drm_i915_private *i915 = to_i915(state->base.dev); 5468 + struct intel_crtc *crtc; 5469 + 5470 + for_each_intel_crtc_in_pipe_mask(&i915->drm, crtc, mask) { 5471 + struct intel_crtc_state *crtc_state; 5472 + int ret; 5473 + 5474 + crtc_state = intel_atomic_get_crtc_state(&state->base, crtc); 5475 + if (IS_ERR(crtc_state)) 5476 + return PTR_ERR(crtc_state); 5477 + 5478 + if (!crtc_state->hw.enable || 5479 + intel_crtc_needs_modeset(crtc_state)) 5480 + continue; 5481 + 5482 + ret = intel_modeset_pipe(state, crtc_state, reason); 5483 + if (ret) 5484 + return ret; 5485 + } 5486 + 5487 + return 0; 5488 + } 5489 + 5490 + /** 5491 + * intel_modeset_all_pipes_late - force a full modeset on all pipes 5492 + * @state: intel atomic state 5493 + * @reason: the reason for the full modeset 5494 + * 5495 + * Add all pipes to @state and force a full modeset on the active ones due to 5496 + * the description in @reason. 5497 + * This function can be called only after new plane states are computed already. 
5498 + * 5499 + * Returns 0 in case of success, negative error code otherwise. 5500 + */ 5501 + int intel_modeset_all_pipes_late(struct intel_atomic_state *state, 5502 + const char *reason) 5462 5503 { 5463 5504 struct drm_i915_private *dev_priv = to_i915(state->base.dev); 5464 5505 struct intel_crtc *crtc; 5465 5506 5466 - /* 5467 - * Add all pipes to the state, and force 5468 - * a modeset on all the active ones. 5469 - */ 5470 5507 for_each_intel_crtc(&dev_priv->drm, crtc) { 5471 5508 struct intel_crtc_state *crtc_state; 5472 5509 int ret; ··· 5553 5442 intel_crtc_needs_modeset(crtc_state)) 5554 5443 continue; 5555 5444 5556 - drm_dbg_kms(&dev_priv->drm, "[CRTC:%d:%s] Full modeset due to %s\n", 5557 - crtc->base.base.id, crtc->base.name, reason); 5445 + ret = intel_modeset_pipe(state, crtc_state, reason); 5446 + if (ret) 5447 + return ret; 5558 5448 5559 - crtc_state->uapi.mode_changed = true; 5560 5449 crtc_state->update_pipe = false; 5561 - 5562 - ret = drm_atomic_add_affected_connectors(&state->base, 5563 - &crtc->base); 5564 - if (ret) 5565 - return ret; 5566 - 5567 - ret = intel_dp_mst_add_topology_state_for_crtc(state, crtc); 5568 - if (ret) 5569 - return ret; 5570 - 5571 - ret = intel_atomic_add_affected_planes(state, crtc); 5572 - if (ret) 5573 - return ret; 5574 - 5450 + crtc_state->update_m_n = false; 5451 + crtc_state->update_lrr = false; 5575 5452 crtc_state->update_planes |= crtc_state->active_planes; 5576 5453 crtc_state->async_flip_planes = 0; 5577 5454 crtc_state->do_async_flip = false; ··· 5663 5564 { 5664 5565 struct drm_i915_private *i915 = to_i915(old_crtc_state->uapi.crtc->dev); 5665 5566 5666 - if (!intel_pipe_config_compare(old_crtc_state, new_crtc_state, true)) { 5567 + /* only allow LRR when the timings stay within the VRR range */ 5568 + if (old_crtc_state->vrr.in_range != new_crtc_state->vrr.in_range) 5569 + new_crtc_state->update_lrr = false; 5570 + 5571 + if (!intel_pipe_config_compare(old_crtc_state, new_crtc_state, true)) 5667 5572 
drm_dbg_kms(&i915->drm, "fastset requirement not met, forcing full modeset\n"); 5573 + else 5574 + new_crtc_state->uapi.mode_changed = false; 5668 5575 5669 - return; 5670 - } 5576 + if (intel_crtc_needs_modeset(new_crtc_state) || 5577 + intel_compare_link_m_n(&old_crtc_state->dp_m_n, 5578 + &new_crtc_state->dp_m_n)) 5579 + new_crtc_state->update_m_n = false; 5671 5580 5672 - new_crtc_state->uapi.mode_changed = false; 5581 + if (intel_crtc_needs_modeset(new_crtc_state) || 5582 + (old_crtc_state->hw.adjusted_mode.crtc_vtotal == new_crtc_state->hw.adjusted_mode.crtc_vtotal && 5583 + old_crtc_state->hw.adjusted_mode.crtc_vblank_end == new_crtc_state->hw.adjusted_mode.crtc_vblank_end)) 5584 + new_crtc_state->update_lrr = false; 5585 + 5673 5586 if (!intel_crtc_needs_modeset(new_crtc_state)) 5674 5587 new_crtc_state->update_pipe = true; 5675 5588 } ··· 6282 6171 return 0; 6283 6172 } 6284 6173 6174 + static int intel_atomic_check_config(struct intel_atomic_state *state, 6175 + struct intel_link_bw_limits *limits, 6176 + enum pipe *failed_pipe) 6177 + { 6178 + struct drm_i915_private *i915 = to_i915(state->base.dev); 6179 + struct intel_crtc_state *new_crtc_state; 6180 + struct intel_crtc *crtc; 6181 + int ret; 6182 + int i; 6183 + 6184 + *failed_pipe = INVALID_PIPE; 6185 + 6186 + ret = intel_bigjoiner_add_affected_crtcs(state); 6187 + if (ret) 6188 + return ret; 6189 + 6190 + ret = intel_fdi_add_affected_crtcs(state); 6191 + if (ret) 6192 + return ret; 6193 + 6194 + for_each_new_intel_crtc_in_state(state, crtc, new_crtc_state, i) { 6195 + if (!intel_crtc_needs_modeset(new_crtc_state)) { 6196 + if (intel_crtc_is_bigjoiner_slave(new_crtc_state)) 6197 + copy_bigjoiner_crtc_state_nomodeset(state, crtc); 6198 + else 6199 + intel_crtc_copy_uapi_to_hw_state_nomodeset(state, crtc); 6200 + continue; 6201 + } 6202 + 6203 + if (intel_crtc_is_bigjoiner_slave(new_crtc_state)) { 6204 + drm_WARN_ON(&i915->drm, new_crtc_state->uapi.enable); 6205 + continue; 6206 + } 6207 + 6208 + ret = 
intel_crtc_prepare_cleared_state(state, crtc); 6209 + if (ret) 6210 + break; 6211 + 6212 + if (!new_crtc_state->hw.enable) 6213 + continue; 6214 + 6215 + ret = intel_modeset_pipe_config(state, crtc, limits); 6216 + if (ret) 6217 + break; 6218 + 6219 + ret = intel_atomic_check_bigjoiner(state, crtc); 6220 + if (ret) 6221 + break; 6222 + } 6223 + 6224 + if (ret) 6225 + *failed_pipe = crtc->pipe; 6226 + 6227 + return ret; 6228 + } 6229 + 6230 + static int intel_atomic_check_config_and_link(struct intel_atomic_state *state) 6231 + { 6232 + struct drm_i915_private *i915 = to_i915(state->base.dev); 6233 + struct intel_link_bw_limits new_limits; 6234 + struct intel_link_bw_limits old_limits; 6235 + int ret; 6236 + 6237 + intel_link_bw_init_limits(i915, &new_limits); 6238 + old_limits = new_limits; 6239 + 6240 + while (true) { 6241 + enum pipe failed_pipe; 6242 + 6243 + ret = intel_atomic_check_config(state, &new_limits, 6244 + &failed_pipe); 6245 + if (ret) { 6246 + /* 6247 + * The bpp limit for a pipe is below the minimum it supports, set the 6248 + * limit to the minimum and recalculate the config. 
6249 + */ 6250 + if (ret == -EINVAL && 6251 + intel_link_bw_set_bpp_limit_for_pipe(state, 6252 + &old_limits, 6253 + &new_limits, 6254 + failed_pipe)) 6255 + continue; 6256 + 6257 + break; 6258 + } 6259 + 6260 + old_limits = new_limits; 6261 + 6262 + ret = intel_link_bw_atomic_check(state, &new_limits); 6263 + if (ret != -EAGAIN) 6264 + break; 6265 + } 6266 + 6267 + return ret; 6268 + } 6285 6269 /** 6286 6270 * intel_atomic_check - validate state object 6287 6271 * @dev: drm device ··· 6421 6215 return ret; 6422 6216 } 6423 6217 6424 - ret = intel_bigjoiner_add_affected_crtcs(state); 6218 + ret = intel_atomic_check_config_and_link(state); 6425 6219 if (ret) 6426 6220 goto fail; 6427 - 6428 - for_each_oldnew_intel_crtc_in_state(state, crtc, old_crtc_state, 6429 - new_crtc_state, i) { 6430 - if (!intel_crtc_needs_modeset(new_crtc_state)) { 6431 - if (intel_crtc_is_bigjoiner_slave(new_crtc_state)) 6432 - copy_bigjoiner_crtc_state_nomodeset(state, crtc); 6433 - else 6434 - intel_crtc_copy_uapi_to_hw_state_nomodeset(state, crtc); 6435 - continue; 6436 - } 6437 - 6438 - if (intel_crtc_is_bigjoiner_slave(new_crtc_state)) { 6439 - drm_WARN_ON(&dev_priv->drm, new_crtc_state->uapi.enable); 6440 - continue; 6441 - } 6442 - 6443 - ret = intel_crtc_prepare_cleared_state(state, crtc); 6444 - if (ret) 6445 - goto fail; 6446 - 6447 - if (!new_crtc_state->hw.enable) 6448 - continue; 6449 - 6450 - ret = intel_modeset_pipe_config(state, crtc); 6451 - if (ret) 6452 - goto fail; 6453 - 6454 - ret = intel_atomic_check_bigjoiner(state, crtc); 6455 - if (ret) 6456 - goto fail; 6457 - } 6458 6221 6459 6222 for_each_oldnew_intel_crtc_in_state(state, crtc, old_crtc_state, 6460 6223 new_crtc_state, i) { ··· 6460 6285 if (intel_cpu_transcoders_need_modeset(state, BIT(master))) { 6461 6286 new_crtc_state->uapi.mode_changed = true; 6462 6287 new_crtc_state->update_pipe = false; 6288 + new_crtc_state->update_m_n = false; 6289 + new_crtc_state->update_lrr = false; 6463 6290 } 6464 6291 } 6465 
6292 ··· 6474 6297 if (intel_cpu_transcoders_need_modeset(state, trans)) { 6475 6298 new_crtc_state->uapi.mode_changed = true; 6476 6299 new_crtc_state->update_pipe = false; 6300 + new_crtc_state->update_m_n = false; 6301 + new_crtc_state->update_lrr = false; 6477 6302 } 6478 6303 } 6479 6304 ··· 6483 6304 if (intel_pipes_need_modeset(state, new_crtc_state->bigjoiner_pipes)) { 6484 6305 new_crtc_state->uapi.mode_changed = true; 6485 6306 new_crtc_state->update_pipe = false; 6307 + new_crtc_state->update_m_n = false; 6308 + new_crtc_state->update_lrr = false; 6486 6309 } 6487 6310 } 6488 6311 } ··· 6663 6482 IS_BROADWELL(dev_priv) || IS_HASWELL(dev_priv)) 6664 6483 hsw_set_linetime_wm(new_crtc_state); 6665 6484 6666 - if (new_crtc_state->seamless_m_n) 6485 + if (new_crtc_state->update_m_n) 6667 6486 intel_cpu_transcoder_set_m1_n1(crtc, new_crtc_state->cpu_transcoder, 6668 6487 &new_crtc_state->dp_m_n); 6488 + 6489 + if (new_crtc_state->update_lrr) 6490 + intel_set_transcoder_timings_lrr(new_crtc_state); 6669 6491 } 6670 6492 6671 6493 static void commit_pipe_pre_planes(struct intel_atomic_state *state, ··· 6705 6521 struct intel_crtc *crtc) 6706 6522 { 6707 6523 struct drm_i915_private *dev_priv = to_i915(state->base.dev); 6524 + const struct intel_crtc_state *old_crtc_state = 6525 + intel_atomic_get_old_crtc_state(state, crtc); 6708 6526 const struct intel_crtc_state *new_crtc_state = 6709 6527 intel_atomic_get_new_crtc_state(state, crtc); 6710 6528 ··· 6718 6532 if (DISPLAY_VER(dev_priv) >= 9 && 6719 6533 !intel_crtc_needs_modeset(new_crtc_state)) 6720 6534 skl_detach_scalers(new_crtc_state); 6535 + 6536 + if (vrr_enabling(old_crtc_state, new_crtc_state)) 6537 + intel_vrr_enable(new_crtc_state); 6721 6538 } 6722 6539 6723 6540 static void intel_enable_crtc(struct intel_atomic_state *state, ··· 6761 6572 intel_dpt_configure(crtc); 6762 6573 } 6763 6574 6764 - if (vrr_enabling(old_crtc_state, new_crtc_state)) { 6765 - intel_vrr_enable(new_crtc_state); 6766 - 
intel_crtc_update_active_timings(new_crtc_state, 6767 - new_crtc_state->vrr.enable); 6768 - } 6769 - 6770 6575 if (!modeset) { 6771 6576 if (new_crtc_state->preload_luts && 6772 6577 intel_crtc_needs_color_update(new_crtc_state)) ··· 6774 6591 if (DISPLAY_VER(i915) >= 11 && 6775 6592 intel_crtc_needs_fastset(new_crtc_state)) 6776 6593 icl_set_pipe_chicken(new_crtc_state); 6594 + 6595 + if (vrr_params_changed(old_crtc_state, new_crtc_state)) 6596 + intel_vrr_set_transcoder_timings(new_crtc_state); 6777 6597 } 6778 6598 6779 6599 intel_fbc_update(state, crtc); ··· 6790 6604 intel_crtc_planes_update_noarm(state, crtc); 6791 6605 6792 6606 /* Perform vblank evasion around commit operation */ 6793 - intel_pipe_update_start(new_crtc_state); 6607 + intel_pipe_update_start(state, crtc); 6794 6608 6795 6609 commit_pipe_pre_planes(state, crtc); 6796 6610 ··· 6798 6612 6799 6613 commit_pipe_post_planes(state, crtc); 6800 6614 6801 - intel_pipe_update_end(new_crtc_state); 6615 + intel_pipe_update_end(state, crtc); 6616 + 6617 + /* 6618 + * VRR/Seamless M/N update may need to update frame timings. 6619 + * 6620 + * FIXME Should be synchronized with the start of vblank somehow... 6621 + */ 6622 + if (vrr_enabling(old_crtc_state, new_crtc_state) || 6623 + new_crtc_state->update_m_n || new_crtc_state->update_lrr) 6624 + intel_crtc_update_active_timings(new_crtc_state, 6625 + new_crtc_state->vrr.enable); 6802 6626 6803 6627 /* 6804 6628 * We usually enable FIFO underrun interrupts as part of the ··· 7268 7072 for_each_new_intel_crtc_in_state(state, crtc, new_crtc_state, i) { 7269 7073 if (new_crtc_state->do_async_flip) 7270 7074 intel_crtc_disable_flip_done(state, crtc); 7075 + 7076 + intel_color_wait_commit(new_crtc_state); 7271 7077 } 7272 7078 7273 7079 /*
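The new intel_atomic_check_config_and_link() loop above replaces the old single-retry `encoder_retry:` path: it recomputes the whole configuration, raising a failing pipe's link bpp limit to that pipe's minimum on -EINVAL and tightening limits again on -EAGAIN from the link bandwidth check, until everything fits or no further progress is possible. A standalone sketch of that converge-or-fail shape (struct and function names are hypothetical stand-ins, not the i915 API; the "link budget" is a simplified model of intel_link_bw_atomic_check()):

```c
#include <stdbool.h>

/* Hypothetical stand-ins for the i915 state: each pipe needs at least
 * min_bpp[] to produce a valid config, so computing a config fails
 * (like -EINVAL) when a pipe's link bpp limit is below its minimum. */
struct bw_limits { int max_bpp[4]; };

static bool compute_config(const struct bw_limits *lim, const int *min_bpp,
                           int npipes, int *failed_pipe)
{
    for (int i = 0; i < npipes; i++) {
        if (lim->max_bpp[i] < min_bpp[i]) {
            *failed_pipe = i;
            return false;
        }
    }
    return true;
}

/* Converge-or-fail loop: on a config failure raise the failed pipe's
 * limit to its minimum and recompute; on a link budget overrun lower
 * the largest limit and recompute; give up when no progress is left. */
static bool check_config_and_link(struct bw_limits *lim, const int *min_bpp,
                                  int npipes, int link_budget)
{
    while (true) {
        int failed, total = 0;

        if (!compute_config(lim, min_bpp, npipes, &failed)) {
            if (lim->max_bpp[failed] >= min_bpp[failed])
                return false;               /* no progress possible */
            lim->max_bpp[failed] = min_bpp[failed];
            continue;
        }

        for (int i = 0; i < npipes; i++)
            total += lim->max_bpp[i];
        if (total <= link_budget)
            return true;                    /* everything fits */

        /* over budget: tighten the largest limit and retry (-EAGAIN) */
        int biggest = 0;
        for (int i = 1; i < npipes; i++)
            if (lim->max_bpp[i] > lim->max_bpp[biggest])
                biggest = i;
        if (lim->max_bpp[biggest] <= min_bpp[biggest])
            return false;
        lim->max_bpp[biggest]--;
    }
}
```

The termination argument is the same as in the real loop: each retry either raises a limit to a hard minimum or strictly lowers one, so the state space shrinks monotonically and the loop cannot cycle.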
+4 -4
drivers/gpu/drm/i915/display/intel_display.h
··· 190 190 AUX_CH_E_XELPD, 191 191 }; 192 192 193 - #define aux_ch_name(a) ((a) + 'A') 194 - 195 193 enum phy { 196 194 PHY_NONE = -1, 197 195 ··· 511 513 void intel_update_watermarks(struct drm_i915_private *i915); 512 514 513 515 /* modesetting */ 514 - int intel_modeset_all_pipes(struct intel_atomic_state *state, 515 - const char *reason); 516 + int intel_modeset_pipes_in_mask_early(struct intel_atomic_state *state, 517 + const char *reason, u8 pipe_mask); 518 + int intel_modeset_all_pipes_late(struct intel_atomic_state *state, 519 + const char *reason); 516 520 void intel_modeset_get_crtc_power_domains(struct intel_crtc_state *crtc_state, 517 521 struct intel_power_domain_mask *old_domains); 518 522 void intel_modeset_put_crtc_power_domains(struct intel_crtc *crtc,
+4
drivers/gpu/drm/i915/display/intel_display_debugfs.c
··· 43 43 { 44 44 struct drm_i915_private *dev_priv = node_to_i915(m->private); 45 45 46 + spin_lock(&dev_priv->display.fb_tracking.lock); 47 + 46 48 seq_printf(m, "FB tracking busy bits: 0x%08x\n", 47 49 dev_priv->display.fb_tracking.busy_bits); 48 50 49 51 seq_printf(m, "FB tracking flip bits: 0x%08x\n", 50 52 dev_priv->display.fb_tracking.flip_bits); 53 + 54 + spin_unlock(&dev_priv->display.fb_tracking.lock); 51 55 52 56 return 0; 53 57 }
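The intel_display_debugfs.c hunk above wraps both seq_printf() reads in display.fb_tracking.lock so the two bitmasks are printed as one consistent snapshot rather than two independent racy loads. The same snapshot-under-lock pattern in miniature, with a userspace spinlock standing in for the kernel's spin_lock()/spin_unlock() (field names mirror fb_tracking, the rest is illustrative):

```c
#include <stdatomic.h>
#include <stdint.h>

/* Minimal spinlock modeled on the kernel's spin_lock()/spin_unlock(). */
struct spinlock { atomic_flag locked; };

static void spin_lock(struct spinlock *l)
{
    while (atomic_flag_test_and_set_explicit(&l->locked,
                                             memory_order_acquire))
        ;   /* spin until the holder releases the lock */
}

static void spin_unlock(struct spinlock *l)
{
    atomic_flag_clear_explicit(&l->locked, memory_order_release);
}

/* Both bitmasks are read under the lock so the caller gets one
 * coherent pair even if a concurrent writer updates them together. */
struct fb_tracking {
    struct spinlock lock;
    uint32_t busy_bits;
    uint32_t flip_bits;
};

static void fb_tracking_snapshot(struct fb_tracking *t,
                                 uint32_t *busy, uint32_t *flip)
{
    spin_lock(&t->lock);
    *busy = t->busy_bits;
    *flip = t->flip_bits;
    spin_unlock(&t->lock);
}
```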
+83 -17
drivers/gpu/drm/i915/display/intel_display_device.c
··· 710 710 BIT(PORT_TC1), 711 711 }; 712 712 713 - static const struct intel_display_device_info xe_lpdp_display = { 714 - XE_LPD_FEATURES, 715 - .has_cdclk_crawl = 1, 716 - .has_cdclk_squash = 1, 713 + #define XE_LPDP_FEATURES \ 714 + .abox_mask = GENMASK(1, 0), \ 715 + .color = { \ 716 + .degamma_lut_size = 129, .gamma_lut_size = 1024, \ 717 + .degamma_lut_tests = DRM_COLOR_LUT_NON_DECREASING | \ 718 + DRM_COLOR_LUT_EQUAL_CHANNELS, \ 719 + }, \ 720 + .dbuf.size = 4096, \ 721 + .dbuf.slice_mask = BIT(DBUF_S1) | BIT(DBUF_S2) | BIT(DBUF_S3) | \ 722 + BIT(DBUF_S4), \ 723 + .has_cdclk_crawl = 1, \ 724 + .has_cdclk_squash = 1, \ 725 + .has_ddi = 1, \ 726 + .has_dp_mst = 1, \ 727 + .has_dsb = 1, \ 728 + .has_fpga_dbg = 1, \ 729 + .has_hotplug = 1, \ 730 + .has_ipc = 1, \ 731 + .has_psr = 1, \ 732 + .pipe_offsets = { \ 733 + [TRANSCODER_A] = PIPE_A_OFFSET, \ 734 + [TRANSCODER_B] = PIPE_B_OFFSET, \ 735 + [TRANSCODER_C] = PIPE_C_OFFSET, \ 736 + [TRANSCODER_D] = PIPE_D_OFFSET, \ 737 + }, \ 738 + .trans_offsets = { \ 739 + [TRANSCODER_A] = TRANSCODER_A_OFFSET, \ 740 + [TRANSCODER_B] = TRANSCODER_B_OFFSET, \ 741 + [TRANSCODER_C] = TRANSCODER_C_OFFSET, \ 742 + [TRANSCODER_D] = TRANSCODER_D_OFFSET, \ 743 + }, \ 744 + TGL_CURSOR_OFFSETS, \ 745 + \ 746 + .__runtime_defaults.cpu_transcoder_mask = \ 747 + BIT(TRANSCODER_A) | BIT(TRANSCODER_B) | \ 748 + BIT(TRANSCODER_C) | BIT(TRANSCODER_D), \ 749 + .__runtime_defaults.fbc_mask = BIT(INTEL_FBC_A) | BIT(INTEL_FBC_B), \ 750 + .__runtime_defaults.has_dmc = 1, \ 751 + .__runtime_defaults.has_dsc = 1, \ 752 + .__runtime_defaults.has_hdcp = 1, \ 753 + .__runtime_defaults.pipe_mask = \ 754 + BIT(PIPE_A) | BIT(PIPE_B) | BIT(PIPE_C) | BIT(PIPE_D), \ 755 + .__runtime_defaults.port_mask = BIT(PORT_A) | BIT(PORT_B) | \ 756 + BIT(PORT_TC1) | BIT(PORT_TC2) | BIT(PORT_TC3) | BIT(PORT_TC4) 717 757 718 - .__runtime_defaults.ip.ver = 14, 719 - .__runtime_defaults.fbc_mask = BIT(INTEL_FBC_A) | BIT(INTEL_FBC_B), 720 - 
.__runtime_defaults.cpu_transcoder_mask = 721 - BIT(TRANSCODER_A) | BIT(TRANSCODER_B) | 722 - BIT(TRANSCODER_C) | BIT(TRANSCODER_D), 723 - .__runtime_defaults.port_mask = BIT(PORT_A) | BIT(PORT_B) | 724 - BIT(PORT_TC1) | BIT(PORT_TC2) | BIT(PORT_TC3) | BIT(PORT_TC4), 758 + static const struct intel_display_device_info xe_lpdp_display = { 759 + XE_LPDP_FEATURES, 760 + }; 761 + 762 + static const struct intel_display_device_info xe2_lpd_display = { 763 + XE_LPDP_FEATURES, 764 + 765 + .__runtime_defaults.fbc_mask = 766 + BIT(INTEL_FBC_A) | BIT(INTEL_FBC_B) | 767 + BIT(INTEL_FBC_C) | BIT(INTEL_FBC_D), 725 768 }; 726 769 727 770 /* ··· 846 803 const struct intel_display_device_info *display; 847 804 } gmdid_display_map[] = { 848 805 { 14, 0, &xe_lpdp_display }, 806 + { 20, 0, &xe2_lpd_display }, 849 807 }; 850 808 851 809 static const struct intel_display_device_info * ··· 894 850 return &no_display; 895 851 } 896 852 897 - const struct intel_display_device_info * 898 - intel_display_device_probe(struct drm_i915_private *i915, bool has_gmdid, 899 - u16 *gmdid_ver, u16 *gmdid_rel, u16 *gmdid_step) 853 + static const struct intel_display_device_info * 854 + probe_display(struct drm_i915_private *i915) 900 855 { 901 856 struct pci_dev *pdev = to_pci_dev(i915->drm.dev); 902 857 int i; 903 - 904 - if (has_gmdid) 905 - return probe_gmdid_display(i915, gmdid_ver, gmdid_rel, gmdid_step); 906 858 907 859 if (has_no_display(pdev)) { 908 860 drm_dbg_kms(&i915->drm, "Device doesn't have display\n"); ··· 914 874 pdev->device); 915 875 916 876 return &no_display; 877 + } 878 + 879 + void intel_display_device_probe(struct drm_i915_private *i915) 880 + { 881 + const struct intel_display_device_info *info; 882 + u16 ver, rel, step; 883 + 884 + if (HAS_GMD_ID(i915)) 885 + info = probe_gmdid_display(i915, &ver, &rel, &step); 886 + else 887 + info = probe_display(i915); 888 + 889 + i915->display.info.__device_info = info; 890 + 891 + memcpy(DISPLAY_RUNTIME_INFO(i915), 892 + 
&DISPLAY_INFO(i915)->__runtime_defaults, 893 + sizeof(*DISPLAY_RUNTIME_INFO(i915))); 894 + 895 + if (HAS_GMD_ID(i915)) { 896 + DISPLAY_RUNTIME_INFO(i915)->ip.ver = ver; 897 + DISPLAY_RUNTIME_INFO(i915)->ip.rel = rel; 898 + DISPLAY_RUNTIME_INFO(i915)->ip.step = step; 899 + } 917 900 } 918 901 919 902 void intel_display_device_info_runtime_init(struct drm_i915_private *i915) ··· 1033 970 if (dfsm & SKL_DFSM_PIPE_B_DISABLE) { 1034 971 display_runtime->pipe_mask &= ~BIT(PIPE_B); 1035 972 display_runtime->cpu_transcoder_mask &= ~BIT(TRANSCODER_B); 973 + display_runtime->fbc_mask &= ~BIT(INTEL_FBC_B); 1036 974 } 1037 975 if (dfsm & SKL_DFSM_PIPE_C_DISABLE) { 1038 976 display_runtime->pipe_mask &= ~BIT(PIPE_C); 1039 977 display_runtime->cpu_transcoder_mask &= ~BIT(TRANSCODER_C); 978 + display_runtime->fbc_mask &= ~BIT(INTEL_FBC_C); 1040 979 } 1041 980 1042 981 if (DISPLAY_VER(i915) >= 12 && 1043 982 (dfsm & TGL_DFSM_PIPE_D_DISABLE)) { 1044 983 display_runtime->pipe_mask &= ~BIT(PIPE_D); 1045 984 display_runtime->cpu_transcoder_mask &= ~BIT(TRANSCODER_D); 985 + display_runtime->fbc_mask &= ~BIT(INTEL_FBC_D); 1046 986 } 1047 987 1048 988 if (!display_runtime->pipe_mask)
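intel_display_device_probe() now seeds the runtime info by copying the device table's __runtime_defaults and, on GMD ID platforms, overriding the IP version fields with the values read back from hardware. That defaults-then-override pattern in miniature (the flattened struct layout here is hypothetical, not the real intel_display_runtime_info):

```c
#include <string.h>

/* Hypothetical, flattened versions of the i915 info structs. */
struct ip_ver { int ver, rel, step; };

struct runtime_info {
    struct ip_ver ip;
    unsigned int pipe_mask;
};

struct device_info {
    struct runtime_info runtime_defaults;
};

/* Seed the runtime info from the static per-device defaults, then let
 * the GMD ID readout (when the platform has it) override the IP
 * version fields, leaving everything else from the defaults intact. */
static void display_device_probe(struct runtime_info *rt,
                                 const struct device_info *info,
                                 const struct ip_ver *gmdid)
{
    memcpy(rt, &info->runtime_defaults, sizeof(*rt));
    if (gmdid)      /* NULL when the platform predates GMD ID */
        rt->ip = *gmdid;
}
```

Keeping the copy unconditional means pre-GMD-ID platforms and GMD ID platforms share one code path; only the override step differs.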
+28 -3
drivers/gpu/drm/i915/display/intel_display_device.h
··· 32 32 func(overlay_needs_physical); \ 33 33 func(supports_tv); 34 34 35 + #define HAS_4TILE(i915) (IS_DG2(i915) || DISPLAY_VER(i915) >= 14) 35 36 #define HAS_ASYNC_FLIPS(i915) (DISPLAY_VER(i915) >= 5) 36 37 #define HAS_CDCLK_CRAWL(i915) (DISPLAY_INFO(i915)->has_cdclk_crawl) 37 38 #define HAS_CDCLK_SQUASH(i915) (DISPLAY_INFO(i915)->has_cdclk_squash) ··· 56 55 #define HAS_HW_SAGV_WM(i915) (DISPLAY_VER(i915) >= 13 && !IS_DGFX(i915)) 57 56 #define HAS_IPC(i915) (DISPLAY_INFO(i915)->has_ipc) 58 57 #define HAS_IPS(i915) (IS_HASWELL_ULT(i915) || IS_BROADWELL(i915)) 58 + #define HAS_LRR(i915) (DISPLAY_VER(i915) >= 12) 59 59 #define HAS_LSPCON(i915) (IS_DISPLAY_VER(i915, 9, 10)) 60 60 #define HAS_MBUS_JOINING(i915) (IS_ALDERLAKE_P(i915) || DISPLAY_VER(i915) >= 14) 61 61 #define HAS_MSO(i915) (DISPLAY_VER(i915) >= 12) ··· 72 70 #define I915_HAS_HOTPLUG(i915) (DISPLAY_INFO(i915)->has_hotplug) 73 71 #define OVERLAY_NEEDS_PHYSICAL(i915) (DISPLAY_INFO(i915)->overlay_needs_physical) 74 72 #define SUPPORTS_TV(i915) (DISPLAY_INFO(i915)->supports_tv) 73 + 74 + /* Check that device has a display IP version within the specific range. */ 75 + #define IS_DISPLAY_IP_RANGE(__i915, from, until) ( \ 76 + BUILD_BUG_ON_ZERO((from) < IP_VER(2, 0)) + \ 77 + (DISPLAY_VER_FULL(__i915) >= (from) && \ 78 + DISPLAY_VER_FULL(__i915) <= (until))) 79 + 80 + /* 81 + * Check if a device has a specific IP version as well as a stepping within the 82 + * specified range [from, until). The lower bound is inclusive, the upper 83 + * bound is exclusive. The most common use-case of this macro is for checking 84 + * bounds for workarounds, which usually have a stepping ("from") at which the 85 + * hardware issue is first present and another stepping ("until") at which a 86 + * hardware fix is present and the software workaround is no longer necessary. 
87 + * E.g., 88 + * 89 + * IS_DISPLAY_IP_STEP(i915, IP_VER(14, 0), STEP_A0, STEP_B2) 90 + * IS_DISPLAY_IP_STEP(i915, IP_VER(14, 0), STEP_C0, STEP_FOREVER) 91 + * 92 + * "STEP_FOREVER" can be passed as "until" for workarounds that have no upper 93 + * stepping bound for the specified IP version. 94 + */ 95 + #define IS_DISPLAY_IP_STEP(__i915, ipver, from, until) \ 96 + (IS_DISPLAY_IP_RANGE((__i915), (ipver), (ipver)) && \ 97 + IS_DISPLAY_STEP((__i915), (from), (until))) 75 98 76 99 struct intel_display_runtime_info { 77 100 struct { ··· 150 123 } color; 151 124 }; 152 125 153 - const struct intel_display_device_info * 154 - intel_display_device_probe(struct drm_i915_private *i915, bool has_gmdid, 155 - u16 *ver, u16 *rel, u16 *step); 126 + void intel_display_device_probe(struct drm_i915_private *i915); 156 127 void intel_display_device_info_runtime_init(struct drm_i915_private *i915); 157 128 158 129 void intel_display_device_info_print(const struct intel_display_device_info *info,
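The IS_DISPLAY_IP_STEP() comment above defines a half-open stepping range [from, until): the first stepping exhibiting the hardware issue is matched, and the first stepping carrying the fix is not. A standalone model of that bound check (plain integer steppings stand in for the driver's enum intel_step values; the numbers are illustrative):

```c
#include <stdbool.h>
#include <limits.h>

/* Stand-in stepping values; the real driver uses enum intel_step. */
enum { STEP_A0 = 0, STEP_B0 = 4, STEP_B2 = 6, STEP_C0 = 8,
       STEP_FOREVER = INT_MAX };

/* Half-open workaround bound: "from" is the first affected stepping
 * (inclusive), "until" is the first stepping with the hardware fix
 * (exclusive), mirroring IS_DISPLAY_IP_STEP()'s [from, until) range. */
static bool step_in_range(int step, int from, int until)
{
    return step >= from && step < until;
}
```

The exclusive upper bound is what lets STEP_FOREVER express "no fix known yet": any real stepping compares below it, so the workaround stays applied.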
+8
drivers/gpu/drm/i915/display/intel_display_driver.c
··· 31 31 #include "intel_display_irq.h" 32 32 #include "intel_display_power.h" 33 33 #include "intel_display_types.h" 34 + #include "intel_display_wa.h" 34 35 #include "intel_dkl_phy.h" 35 36 #include "intel_dmc.h" 36 37 #include "intel_dp.h" ··· 89 88 intel_update_cdclk(i915); 90 89 intel_cdclk_dump_config(i915, &i915->display.cdclk.hw, "Current CDCLK"); 91 90 cdclk_state->logical = cdclk_state->actual = i915->display.cdclk.hw; 91 + 92 + intel_display_wa_apply(i915); 92 93 } 93 94 94 95 static const struct drm_mode_config_funcs intel_mode_funcs = { ··· 380 377 381 378 void intel_display_driver_register(struct drm_i915_private *i915) 382 379 { 380 + struct drm_printer p = drm_debug_printer("i915 display info:"); 381 + 383 382 if (!HAS_DISPLAY(i915)) 384 383 return; 385 384 ··· 409 404 * fbdev->async_cookie. 410 405 */ 411 406 drm_kms_helper_poll_init(&i915->drm); 407 + 408 + intel_display_device_info_print(DISPLAY_INFO(i915), 409 + DISPLAY_RUNTIME_INFO(i915), &p); 412 410 } 413 411 414 412 /* part #1: call before irq uninstall */
+3 -1
drivers/gpu/drm/i915/display/intel_display_irq.c
··· 792 792 { 793 793 u32 mask; 794 794 795 - if (DISPLAY_VER(dev_priv) >= 14) 795 + if (DISPLAY_VER(dev_priv) >= 20) 796 + return 0; 797 + else if (DISPLAY_VER(dev_priv) >= 14) 796 798 return TGL_DE_PORT_AUX_DDIA | 797 799 TGL_DE_PORT_AUX_DDIB; 798 800 else if (DISPLAY_VER(dev_priv) >= 13)
+4 -6
drivers/gpu/drm/i915/display/intel_display_power.c
··· 186 186 return "GMBUS"; 187 187 case POWER_DOMAIN_INIT: 188 188 return "INIT"; 189 - case POWER_DOMAIN_MODESET: 190 - return "MODESET"; 191 189 case POWER_DOMAIN_GT_IRQ: 192 190 return "GT_IRQ"; 193 191 case POWER_DOMAIN_DC_OFF: ··· 216 218 struct i915_power_well *power_well; 217 219 bool is_enabled; 218 220 219 - if (dev_priv->runtime_pm.suspended) 221 + if (pm_runtime_suspended(dev_priv->drm.dev)) 220 222 return false; 221 223 222 224 is_enabled = true; ··· 335 337 unlock: 336 338 mutex_unlock(&power_domains->lock); 337 339 } 338 - 339 - #define POWER_DOMAIN_MASK (GENMASK_ULL(POWER_DOMAIN_NUM - 1, 0)) 340 340 341 341 static void __async_put_domains_mask(struct i915_power_domains *power_domains, 342 342 struct intel_power_domain_mask *mask) ··· 943 947 if (!HAS_DISPLAY(dev_priv)) 944 948 return 0; 945 949 946 - if (IS_DG2(dev_priv)) 950 + if (DISPLAY_VER(dev_priv) >= 20) 951 + max_dc = 2; 952 + else if (IS_DG2(dev_priv)) 947 953 max_dc = 1; 948 954 else if (IS_DG1(dev_priv)) 949 955 max_dc = 3;
-1
drivers/gpu/drm/i915/display/intel_display_power.h
··· 108 108 POWER_DOMAIN_AUX_TBT6, 109 109 110 110 POWER_DOMAIN_GMBUS, 111 - POWER_DOMAIN_MODESET, 112 111 POWER_DOMAIN_GT_IRQ, 113 112 POWER_DOMAIN_DC_OFF, 114 113 POWER_DOMAIN_TC_COLD_OFF,
+53 -10
drivers/gpu/drm/i915/display/intel_display_power_map.c
··· 332 332 I915_DECL_PW_DOMAINS(skl_pwdoms_dc_off, 333 333 SKL_PW_2_POWER_DOMAINS, 334 334 POWER_DOMAIN_AUX_A, 335 - POWER_DOMAIN_MODESET, 336 335 POWER_DOMAIN_GT_IRQ, 337 336 POWER_DOMAIN_DC_OFF, 338 337 POWER_DOMAIN_INIT); ··· 436 437 BXT_PW_2_POWER_DOMAINS, 437 438 POWER_DOMAIN_AUX_A, 438 439 POWER_DOMAIN_GMBUS, 439 - POWER_DOMAIN_MODESET, 440 440 POWER_DOMAIN_GT_IRQ, 441 441 POWER_DOMAIN_DC_OFF, 442 442 POWER_DOMAIN_INIT); ··· 517 519 GLK_PW_2_POWER_DOMAINS, 518 520 POWER_DOMAIN_AUX_A, 519 521 POWER_DOMAIN_GMBUS, 520 - POWER_DOMAIN_MODESET, 521 522 POWER_DOMAIN_GT_IRQ, 522 523 POWER_DOMAIN_DC_OFF, 523 524 POWER_DOMAIN_INIT); ··· 682 685 I915_DECL_PW_DOMAINS(icl_pwdoms_dc_off, 683 686 ICL_PW_2_POWER_DOMAINS, 684 687 POWER_DOMAIN_AUX_A, 685 - POWER_DOMAIN_MODESET, 686 688 POWER_DOMAIN_DC_OFF, 687 689 POWER_DOMAIN_INIT); 688 690 ··· 857 861 POWER_DOMAIN_AUX_A, 858 862 POWER_DOMAIN_AUX_B, 859 863 POWER_DOMAIN_AUX_C, 860 - POWER_DOMAIN_MODESET, 861 864 POWER_DOMAIN_DC_OFF, 862 865 POWER_DOMAIN_INIT); 863 866 ··· 1053 1058 RKL_PW_3_POWER_DOMAINS, 1054 1059 POWER_DOMAIN_AUX_A, 1055 1060 POWER_DOMAIN_AUX_B, 1056 - POWER_DOMAIN_MODESET, 1057 1061 POWER_DOMAIN_DC_OFF, 1058 1062 POWER_DOMAIN_INIT); 1059 1063 ··· 1135 1141 POWER_DOMAIN_AUDIO_MMIO, 1136 1142 POWER_DOMAIN_AUX_A, 1137 1143 POWER_DOMAIN_AUX_B, 1138 - POWER_DOMAIN_MODESET, 1139 1144 POWER_DOMAIN_DC_OFF, 1140 1145 POWER_DOMAIN_INIT); 1141 1146 ··· 1304 1311 POWER_DOMAIN_AUDIO_MMIO, 1305 1312 POWER_DOMAIN_AUX_A, 1306 1313 POWER_DOMAIN_AUX_B, 1307 - POWER_DOMAIN_MODESET, 1308 1314 POWER_DOMAIN_DC_OFF, 1309 1315 POWER_DOMAIN_INIT); 1310 1316 ··· 1418 1426 POWER_DOMAIN_AUDIO_MMIO, 1419 1427 POWER_DOMAIN_AUX_A, 1420 1428 POWER_DOMAIN_AUX_B, 1421 - POWER_DOMAIN_MODESET, 1422 1429 POWER_DOMAIN_DC_OFF, 1423 1430 POWER_DOMAIN_INIT); 1424 1431 ··· 1536 1545 I915_PW_DESCRIPTORS(xelpdp_power_wells_main), 1537 1546 }; 1538 1547 1548 + I915_DECL_PW_DOMAINS(xe2lpd_pwdoms_pica_tc, 1549 + POWER_DOMAIN_PORT_DDI_LANES_TC1, 1550 + 
POWER_DOMAIN_PORT_DDI_LANES_TC2, 1551 + POWER_DOMAIN_PORT_DDI_LANES_TC3, 1552 + POWER_DOMAIN_PORT_DDI_LANES_TC4, 1553 + POWER_DOMAIN_AUX_USBC1, 1554 + POWER_DOMAIN_AUX_USBC2, 1555 + POWER_DOMAIN_AUX_USBC3, 1556 + POWER_DOMAIN_AUX_USBC4, 1557 + POWER_DOMAIN_AUX_TBT1, 1558 + POWER_DOMAIN_AUX_TBT2, 1559 + POWER_DOMAIN_AUX_TBT3, 1560 + POWER_DOMAIN_AUX_TBT4, 1561 + POWER_DOMAIN_INIT); 1562 + 1563 + static const struct i915_power_well_desc xe2lpd_power_wells_pica[] = { 1564 + { 1565 + .instances = &I915_PW_INSTANCES(I915_PW("PICA_TC", 1566 + &xe2lpd_pwdoms_pica_tc, 1567 + .id = DISP_PW_ID_NONE), 1568 + ), 1569 + .ops = &xe2lpd_pica_power_well_ops, 1570 + }, 1571 + }; 1572 + 1573 + I915_DECL_PW_DOMAINS(xe2lpd_pwdoms_dc_off, 1574 + POWER_DOMAIN_DC_OFF, 1575 + XELPD_PW_C_POWER_DOMAINS, 1576 + XELPD_PW_D_POWER_DOMAINS, 1577 + POWER_DOMAIN_AUDIO_MMIO, 1578 + POWER_DOMAIN_INIT); 1579 + 1580 + static const struct i915_power_well_desc xe2lpd_power_wells_dcoff[] = { 1581 + { 1582 + .instances = &I915_PW_INSTANCES( 1583 + I915_PW("DC_off", &xe2lpd_pwdoms_dc_off, 1584 + .id = SKL_DISP_DC_OFF), 1585 + ), 1586 + .ops = &gen9_dc_off_power_well_ops, 1587 + }, 1588 + }; 1589 + 1590 + static const struct i915_power_well_desc_list xe2lpd_power_wells[] = { 1591 + I915_PW_DESCRIPTORS(i9xx_power_wells_always_on), 1592 + I915_PW_DESCRIPTORS(icl_power_wells_pw_1), 1593 + I915_PW_DESCRIPTORS(xe2lpd_power_wells_dcoff), 1594 + I915_PW_DESCRIPTORS(xelpdp_power_wells_main), 1595 + I915_PW_DESCRIPTORS(xe2lpd_power_wells_pica), 1596 + }; 1597 + 1539 1598 static void init_power_well_domains(const struct i915_power_well_instance *inst, 1540 1599 struct i915_power_well *power_well) 1541 1600 { ··· 1693 1652 return 0; 1694 1653 } 1695 1654 1696 - if (DISPLAY_VER(i915) >= 14) 1655 + if (DISPLAY_VER(i915) >= 20) 1656 + return set_power_wells(power_domains, xe2lpd_power_wells); 1657 + else if (DISPLAY_VER(i915) >= 14) 1697 1658 return set_power_wells(power_domains, xelpdp_power_wells); 1698 1659 else if 
(IS_DG2(i915)) 1699 1660 return set_power_wells(power_domains, xehpd_power_wells);
+49 -3
drivers/gpu/drm/i915/display/intel_display_power_well.c
··· 1794 1794 struct i915_power_well *power_well) 1795 1795 { 1796 1796 enum aux_ch aux_ch = i915_power_well_instance(power_well)->xelpdp.aux_ch; 1797 + enum phy phy = icl_aux_pw_to_phy(dev_priv, power_well); 1797 1798 1798 - intel_de_rmw(dev_priv, XELPDP_DP_AUX_CH_CTL(aux_ch), 1799 + if (intel_phy_is_tc(dev_priv, phy)) 1800 + icl_tc_port_assert_ref_held(dev_priv, power_well, 1801 + aux_ch_to_digital_port(dev_priv, aux_ch)); 1802 + 1803 + intel_de_rmw(dev_priv, XELPDP_DP_AUX_CH_CTL(dev_priv, aux_ch), 1799 1804 XELPDP_DP_AUX_CH_CTL_POWER_REQUEST, 1800 1805 XELPDP_DP_AUX_CH_CTL_POWER_REQUEST); 1801 1806 ··· 1818 1813 { 1819 1814 enum aux_ch aux_ch = i915_power_well_instance(power_well)->xelpdp.aux_ch; 1820 1815 1821 - intel_de_rmw(dev_priv, XELPDP_DP_AUX_CH_CTL(aux_ch), 1816 + intel_de_rmw(dev_priv, XELPDP_DP_AUX_CH_CTL(dev_priv, aux_ch), 1822 1817 XELPDP_DP_AUX_CH_CTL_POWER_REQUEST, 1823 1818 0); 1824 1819 usleep_range(10, 30); ··· 1829 1824 { 1830 1825 enum aux_ch aux_ch = i915_power_well_instance(power_well)->xelpdp.aux_ch; 1831 1826 1832 - return intel_de_read(dev_priv, XELPDP_DP_AUX_CH_CTL(aux_ch)) & 1827 + return intel_de_read(dev_priv, XELPDP_DP_AUX_CH_CTL(dev_priv, aux_ch)) & 1833 1828 XELPDP_DP_AUX_CH_CTL_POWER_STATUS; 1829 + } 1830 + 1831 + static void xe2lpd_pica_power_well_enable(struct drm_i915_private *dev_priv, 1832 + struct i915_power_well *power_well) 1833 + { 1834 + intel_de_write(dev_priv, XE2LPD_PICA_PW_CTL, 1835 + XE2LPD_PICA_CTL_POWER_REQUEST); 1836 + 1837 + if (intel_de_wait_for_set(dev_priv, XE2LPD_PICA_PW_CTL, 1838 + XE2LPD_PICA_CTL_POWER_STATUS, 1)) { 1839 + drm_dbg_kms(&dev_priv->drm, "pica power well enable timeout\n"); 1840 + 1841 + drm_WARN(&dev_priv->drm, 1, "Power well PICA timeout when enabled"); 1842 + } 1843 + } 1844 + 1845 + static void xe2lpd_pica_power_well_disable(struct drm_i915_private *dev_priv, 1846 + struct i915_power_well *power_well) 1847 + { 1848 + intel_de_write(dev_priv, XE2LPD_PICA_PW_CTL, 0); 1849 + 1850 + if 
(intel_de_wait_for_clear(dev_priv, XE2LPD_PICA_PW_CTL, 1851 + XE2LPD_PICA_CTL_POWER_STATUS, 1)) { 1852 + drm_dbg_kms(&dev_priv->drm, "pica power well disable timeout\n"); 1853 + 1854 + drm_WARN(&dev_priv->drm, 1, "Power well PICA timeout when disabled"); 1855 + } 1856 + } 1857 + 1858 + static bool xe2lpd_pica_power_well_enabled(struct drm_i915_private *dev_priv, 1859 + struct i915_power_well *power_well) 1860 + { 1861 + return intel_de_read(dev_priv, XE2LPD_PICA_PW_CTL) & 1862 + XE2LPD_PICA_CTL_POWER_STATUS; 1834 1863 } 1835 1864 1836 1865 const struct i915_power_well_ops i9xx_always_on_power_well_ops = { ··· 1985 1946 .enable = xelpdp_aux_power_well_enable, 1986 1947 .disable = xelpdp_aux_power_well_disable, 1987 1948 .is_enabled = xelpdp_aux_power_well_enabled, 1949 + }; 1950 + 1951 + const struct i915_power_well_ops xe2lpd_pica_power_well_ops = { 1952 + .sync_hw = i9xx_power_well_sync_hw_noop, 1953 + .enable = xe2lpd_pica_power_well_enable, 1954 + .disable = xe2lpd_pica_power_well_disable, 1955 + .is_enabled = xe2lpd_pica_power_well_enabled, 1988 1956 };
+1
drivers/gpu/drm/i915/display/intel_display_power_well.h
··· 176 176 extern const struct i915_power_well_ops icl_ddi_power_well_ops; 177 177 extern const struct i915_power_well_ops tgl_tc_cold_off_ops; 178 178 extern const struct i915_power_well_ops xelpdp_aux_power_well_ops; 179 + extern const struct i915_power_well_ops xe2lpd_pica_power_well_ops; 179 180 180 181 #endif
+40 -9
drivers/gpu/drm/i915/display/intel_display_types.h
··· 500 500 enum hdcp_wired_protocol protocol; 501 501 502 502 /* Detects whether sink is HDCP2.2 capable */ 503 - int (*hdcp_2_2_capable)(struct intel_digital_port *dig_port, 503 + int (*hdcp_2_2_capable)(struct intel_connector *connector, 504 504 bool *capable); 505 505 506 506 /* Write HDCP2.2 messages */ 507 - int (*write_2_2_msg)(struct intel_digital_port *dig_port, 507 + int (*write_2_2_msg)(struct intel_connector *connector, 508 508 void *buf, size_t size); 509 509 510 510 /* Read HDCP2.2 messages */ 511 - int (*read_2_2_msg)(struct intel_digital_port *dig_port, 511 + int (*read_2_2_msg)(struct intel_connector *connector, 512 512 u8 msg_id, void *buf, size_t size); 513 513 514 514 /* ··· 516 516 * type to Receivers. In DP HDCP2.2 Stream type is one of the input to 517 517 * the HDCP2.2 Cipher for En/De-Cryption. Not applicable for HDMI. 518 518 */ 519 - int (*config_stream_type)(struct intel_digital_port *dig_port, 519 + int (*config_stream_type)(struct intel_connector *connector, 520 520 bool is_repeater, u8 type); 521 521 522 522 /* Enable/Disable HDCP 2.2 stream encryption on DP MST Transport Link */ ··· 1083 1083 1084 1084 unsigned fb_bits; /* framebuffers to flip */ 1085 1085 bool update_pipe; /* can a fast modeset be performed? */ 1086 + bool update_m_n; /* update M/N seamlessly during fastset? */ 1087 + bool update_lrr; /* update TRANS_VTOTAL/etc. during fastset? 
*/ 1086 1088 bool disable_cxsr; 1087 1089 bool update_wm_pre, update_wm_post; /* watermarks are updated */ 1088 1090 bool fifo_changed; /* FIFO split is changed */ ··· 1191 1189 u32 ctrl, div; 1192 1190 } dsi_pll; 1193 1191 1194 - int pipe_bpp; 1192 + int max_link_bpp_x16; /* in 1/16 bpp units */ 1193 + int pipe_bpp; /* in 1 bpp units */ 1195 1194 struct intel_link_m_n dp_m_n; 1196 1195 1197 1196 /* m2_n2 for eDP downclock */ 1198 1197 struct intel_link_m_n dp_m2_n2; 1199 1198 bool has_drrs; 1200 - bool seamless_m_n; 1201 1199 1202 1200 /* PSR is supported but might not be enabled due the lack of enabled planes */ 1203 1201 bool has_psr; ··· 1364 1362 u16 linetime; 1365 1363 u16 ips_linetime; 1366 1364 1367 - /* Forward Error correction State */ 1365 + bool enhanced_framing; 1366 + 1367 + /* 1368 + * Forward Error Correction. 1369 + * 1370 + * Note: This will be false for 128b/132b, which will always have FEC 1371 + * enabled automatically. 1372 + */ 1368 1373 bool fec_enable; 1369 1374 1370 1375 bool sdp_split_enable; ··· 1392 1383 1393 1384 /* Variable Refresh Rate state */ 1394 1385 struct { 1395 - bool enable; 1386 + bool enable, in_range; 1396 1387 u8 pipeline_full; 1397 1388 u16 flipline, vmin, vmax, guardband; 1398 1389 } vrr; ··· 1590 1581 1591 1582 struct intel_hdmi { 1592 1583 i915_reg_t hdmi_reg; 1593 - int ddc_bus; 1594 1584 struct { 1595 1585 enum drm_dp_dual_mode_type type; 1596 1586 int max_tmds_clock; ··· 2114 2106 to_intel_frontbuffer(struct drm_framebuffer *fb) 2115 2107 { 2116 2108 return fb ? 
to_intel_framebuffer(fb)->frontbuffer : NULL; 2109 + } 2110 + 2111 + static inline int to_bpp_int(int bpp_x16) 2112 + { 2113 + return bpp_x16 >> 4; 2114 + } 2115 + 2116 + static inline int to_bpp_frac(int bpp_x16) 2117 + { 2118 + return bpp_x16 & 0xf; 2119 + } 2120 + 2121 + #define BPP_X16_FMT "%d.%04d" 2122 + #define BPP_X16_ARGS(bpp_x16) to_bpp_int(bpp_x16), (to_bpp_frac(bpp_x16) * 625) 2123 + 2124 + static inline int to_bpp_int_roundup(int bpp_x16) 2125 + { 2126 + return (bpp_x16 + 0xf) >> 4; 2127 + } 2128 + 2129 + static inline int to_bpp_x16(int bpp) 2130 + { 2131 + return bpp << 4; 2117 2132 } 2118 2133 2119 2134 #endif /* __INTEL_DISPLAY_TYPES_H__ */
+48
drivers/gpu/drm/i915/display/intel_display_wa.c
··· 1 + // SPDX-License-Identifier: MIT 2 + /* 3 + * Copyright © 2023 Intel Corporation 4 + */ 5 + 6 + #include "i915_drv.h" 7 + #include "i915_reg.h" 8 + #include "intel_de.h" 9 + #include "intel_display_wa.h" 10 + 11 + static void gen11_display_wa_apply(struct drm_i915_private *i915) 12 + { 13 + /* Wa_1409120013 */ 14 + intel_de_write(i915, ILK_DPFC_CHICKEN(INTEL_FBC_A), 15 + DPFC_CHICKEN_COMP_DUMMY_PIXEL); 16 + 17 + /* Wa_14010594013 */ 18 + intel_de_rmw(i915, GEN8_CHICKEN_DCPR_1, 0, ICL_DELAY_PMRSP); 19 + } 20 + 21 + static void xe_d_display_wa_apply(struct drm_i915_private *i915) 22 + { 23 + /* Wa_1409120013 */ 24 + intel_de_write(i915, ILK_DPFC_CHICKEN(INTEL_FBC_A), 25 + DPFC_CHICKEN_COMP_DUMMY_PIXEL); 26 + 27 + /* Wa_14013723622 */ 28 + intel_de_rmw(i915, CLKREQ_POLICY, CLKREQ_POLICY_MEM_UP_OVRD, 0); 29 + } 30 + 31 + static void adlp_display_wa_apply(struct drm_i915_private *i915) 32 + { 33 + /* Wa_22011091694:adlp */ 34 + intel_de_rmw(i915, GEN9_CLKGATE_DIS_5, 0, DPCE_GATING_DIS); 35 + 36 + /* Bspec/49189 Initialize Sequence */ 37 + intel_de_rmw(i915, GEN8_CHICKEN_DCPR_1, DDI_CLOCK_REG_ACCESS, 0); 38 + } 39 + 40 + void intel_display_wa_apply(struct drm_i915_private *i915) 41 + { 42 + if (IS_ALDERLAKE_P(i915)) 43 + adlp_display_wa_apply(i915); 44 + else if (DISPLAY_VER(i915) == 12) 45 + xe_d_display_wa_apply(i915); 46 + else if (DISPLAY_VER(i915) == 11) 47 + gen11_display_wa_apply(i915); 48 + }
+13
drivers/gpu/drm/i915/display/intel_display_wa.h
··· 1 + /* SPDX-License-Identifier: MIT */ 2 + /* 3 + * Copyright © 2023 Intel Corporation 4 + */ 5 + 6 + #ifndef __INTEL_DISPLAY_WA_H__ 7 + #define __INTEL_DISPLAY_WA_H__ 8 + 9 + struct drm_i915_private; 10 + 11 + void intel_display_wa_apply(struct drm_i915_private *i915); 12 + 13 + #endif
+1 -1
drivers/gpu/drm/i915/display/intel_dmc.c
··· 998 998 999 999 INIT_WORK(&dmc->work, dmc_load_work_fn); 1000 1000 1001 - if (IS_METEORLAKE(i915)) { 1001 + if (DISPLAY_VER_FULL(i915) == IP_VER(14, 0)) { 1002 1002 dmc->fw_path = MTL_DMC_PATH; 1003 1003 dmc->max_fw_size = XELPDP_DMC_MAX_FW_SIZE; 1004 1004 } else if (IS_DG2(i915)) {
+681 -206
drivers/gpu/drm/i915/display/intel_dp.c
··· 306 306 struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); 307 307 int source_max = intel_dp_max_source_lane_count(dig_port); 308 308 int sink_max = intel_dp->max_sink_lane_count; 309 - int fia_max = intel_tc_port_fia_max_lane_count(dig_port); 309 + int lane_max = intel_tc_port_max_lane_count(dig_port); 310 310 int lttpr_max = drm_dp_lttpr_max_lane_count(intel_dp->lttpr_common_caps); 311 311 312 312 if (lttpr_max) 313 313 sink_max = min(sink_max, lttpr_max); 314 314 315 - return min3(source_max, sink_max, fia_max); 315 + return min3(source_max, sink_max, lane_max); 316 316 } 317 317 318 318 int intel_dp_max_lane_count(struct intel_dp *intel_dp) ··· 740 740 return bits_per_pixel; 741 741 } 742 742 743 - u16 intel_dp_dsc_get_output_bpp(struct drm_i915_private *i915, 744 - u32 link_clock, u32 lane_count, 745 - u32 mode_clock, u32 mode_hdisplay, 746 - bool bigjoiner, 747 - u32 pipe_bpp, 748 - u32 timeslots) 743 + static 744 + u32 get_max_compressed_bpp_with_joiner(struct drm_i915_private *i915, 745 + u32 mode_clock, u32 mode_hdisplay, 746 + bool bigjoiner) 749 747 { 750 - u32 bits_per_pixel, max_bpp_small_joiner_ram; 748 + u32 max_bpp_small_joiner_ram; 749 + 750 + /* Small Joiner Check: output bpp <= joiner RAM (bits) / Horiz. width */ 751 + max_bpp_small_joiner_ram = small_joiner_ram_size_bits(i915) / mode_hdisplay; 752 + 753 + if (bigjoiner) { 754 + int bigjoiner_interface_bits = DISPLAY_VER(i915) >= 14 ? 
36 : 24; 755 + /* With bigjoiner multiple dsc engines are used in parallel so PPC is 2 */ 756 + int ppc = 2; 757 + u32 max_bpp_bigjoiner = 758 + i915->display.cdclk.max_cdclk_freq * ppc * bigjoiner_interface_bits / 759 + intel_dp_mode_to_fec_clock(mode_clock); 760 + 761 + max_bpp_small_joiner_ram *= 2; 762 + 763 + return min(max_bpp_small_joiner_ram, max_bpp_bigjoiner); 764 + } 765 + 766 + return max_bpp_small_joiner_ram; 767 + } 768 + 769 + u16 intel_dp_dsc_get_max_compressed_bpp(struct drm_i915_private *i915, 770 + u32 link_clock, u32 lane_count, 771 + u32 mode_clock, u32 mode_hdisplay, 772 + bool bigjoiner, 773 + enum intel_output_format output_format, 774 + u32 pipe_bpp, 775 + u32 timeslots) 776 + { 777 + u32 bits_per_pixel, joiner_max_bpp; 751 778 752 779 /* 753 780 * Available Link Bandwidth(Kbits/sec) = (NumberOfLanes)* ··· 795 768 bits_per_pixel = ((link_clock * lane_count) * timeslots) / 796 769 (intel_dp_mode_to_fec_clock(mode_clock) * 8); 797 770 771 + /* Bandwidth required for 420 is half, that of 444 format */ 772 + if (output_format == INTEL_OUTPUT_FORMAT_YCBCR420) 773 + bits_per_pixel *= 2; 774 + 775 + /* 776 + * According to DSC 1.2a Section 4.1.1 Table 4.1 the maximum 777 + * supported PPS value can be 63.9375 and with the further 778 + * mention that for 420, 422 formats, bpp should be programmed double 779 + * the target bpp restricting our target bpp to be 31.9375 at max. 780 + */ 781 + if (output_format == INTEL_OUTPUT_FORMAT_YCBCR420) 782 + bits_per_pixel = min_t(u32, bits_per_pixel, 31); 783 + 798 784 drm_dbg_kms(&i915->drm, "Max link bpp is %u for %u timeslots " 799 785 "total bw %u pixel clock %u\n", 800 786 bits_per_pixel, timeslots, 801 787 (link_clock * lane_count * 8), 802 788 intel_dp_mode_to_fec_clock(mode_clock)); 803 789 804 - /* Small Joiner Check: output bpp <= joiner RAM (bits) / Horiz. 
width */ 805 - max_bpp_small_joiner_ram = small_joiner_ram_size_bits(i915) / 806 - mode_hdisplay; 807 - 808 - if (bigjoiner) 809 - max_bpp_small_joiner_ram *= 2; 810 - 811 - /* 812 - * Greatest allowed DSC BPP = MIN (output BPP from available Link BW 813 - * check, output bpp from small joiner RAM check) 814 - */ 815 - bits_per_pixel = min(bits_per_pixel, max_bpp_small_joiner_ram); 816 - 817 - if (bigjoiner) { 818 - u32 max_bpp_bigjoiner = 819 - i915->display.cdclk.max_cdclk_freq * 48 / 820 - intel_dp_mode_to_fec_clock(mode_clock); 821 - 822 - bits_per_pixel = min(bits_per_pixel, max_bpp_bigjoiner); 823 - } 790 + joiner_max_bpp = get_max_compressed_bpp_with_joiner(i915, mode_clock, 791 + mode_hdisplay, bigjoiner); 792 + bits_per_pixel = min(bits_per_pixel, joiner_max_bpp); 824 793 825 794 bits_per_pixel = intel_dp_dsc_nearest_valid_bpp(i915, bits_per_pixel, pipe_bpp); 826 795 827 - /* 828 - * Compressed BPP in U6.4 format so multiply by 16, for Gen 11, 829 - * fractional part is 0 830 - */ 831 - return bits_per_pixel << 4; 796 + return bits_per_pixel; 832 797 } 833 798 834 799 u8 intel_dp_dsc_get_slice_count(struct intel_dp *intel_dp, ··· 935 916 return false; 936 917 } 937 918 919 + static bool 920 + dfp_can_convert(struct intel_dp *intel_dp, 921 + enum intel_output_format output_format, 922 + enum intel_output_format sink_format) 923 + { 924 + switch (output_format) { 925 + case INTEL_OUTPUT_FORMAT_RGB: 926 + return dfp_can_convert_from_rgb(intel_dp, sink_format); 927 + case INTEL_OUTPUT_FORMAT_YCBCR444: 928 + return dfp_can_convert_from_ycbcr444(intel_dp, sink_format); 929 + default: 930 + MISSING_CASE(output_format); 931 + return false; 932 + } 933 + 934 + return false; 935 + } 936 + 938 937 static enum intel_output_format 939 938 intel_dp_output_format(struct intel_connector *connector, 940 939 enum intel_output_format sink_format) 941 940 { 942 941 struct intel_dp *intel_dp = intel_attached_dp(connector); 943 942 struct drm_i915_private *i915 = 
dp_to_i915(intel_dp); 943 + enum intel_output_format force_dsc_output_format = 944 + intel_dp->force_dsc_output_format; 944 945 enum intel_output_format output_format; 946 + if (force_dsc_output_format) { 947 + if (source_can_output(intel_dp, force_dsc_output_format) && 948 + (!drm_dp_is_branch(intel_dp->dpcd) || 949 + sink_format != force_dsc_output_format || 950 + dfp_can_convert(intel_dp, force_dsc_output_format, sink_format))) 951 + return force_dsc_output_format; 945 952 946 - if (intel_dp->force_dsc_output_format) 947 - return intel_dp->force_dsc_output_format; 953 + drm_dbg_kms(&i915->drm, "Cannot force DSC output format\n"); 954 + } 948 955 949 956 if (sink_format == INTEL_OUTPUT_FORMAT_RGB || 950 957 dfp_can_convert_from_rgb(intel_dp, sink_format)) ··· 996 951 return 8 * 3; 997 952 } 998 953 999 - static int intel_dp_output_bpp(enum intel_output_format output_format, int bpp) 954 + int intel_dp_output_bpp(enum intel_output_format output_format, int bpp) 1000 955 { 1001 956 /* 1002 957 * bpp value was assumed to RGB format. 
And YCbCr 4:2:0 output ··· 1167 1122 int target_clock = mode->clock; 1168 1123 int max_rate, mode_rate, max_lanes, max_link_clock; 1169 1124 int max_dotclk = dev_priv->max_dotclk_freq; 1170 - u16 dsc_max_output_bpp = 0; 1125 + u16 dsc_max_compressed_bpp = 0; 1171 1126 u8 dsc_slice_count = 0; 1172 1127 enum drm_mode_status status; 1173 1128 bool dsc = false, bigjoiner = false; ··· 1206 1161 1207 1162 if (HAS_DSC(dev_priv) && 1208 1163 drm_dp_sink_supports_dsc(intel_dp->dsc_dpcd)) { 1164 + enum intel_output_format sink_format, output_format; 1165 + int pipe_bpp; 1166 + 1167 + sink_format = intel_dp_sink_format(connector, mode); 1168 + output_format = intel_dp_output_format(connector, sink_format); 1209 1169 /* 1210 1170 * TBD pass the connector BPC, 1211 1171 * for now U8_MAX so that max BPC on that platform would be picked 1212 1172 */ 1213 - int pipe_bpp = intel_dp_dsc_compute_bpp(intel_dp, U8_MAX); 1173 + pipe_bpp = intel_dp_dsc_compute_max_bpp(intel_dp, U8_MAX); 1214 1174 1215 1175 /* 1216 1176 * Output bpp is stored in 6.4 format so right shift by 4 to get the 1217 1177 * integer value since we support only integer values of bpp. 
1218 1178 */ 1219 1179 if (intel_dp_is_edp(intel_dp)) { 1220 - dsc_max_output_bpp = 1180 + dsc_max_compressed_bpp = 1221 1181 drm_edp_dsc_sink_output_bpp(intel_dp->dsc_dpcd) >> 4; 1222 1182 dsc_slice_count = 1223 1183 drm_dp_dsc_sink_max_slice_count(intel_dp->dsc_dpcd, 1224 1184 true); 1225 1185 } else if (drm_dp_sink_supports_fec(intel_dp->fec_capable)) { 1226 - dsc_max_output_bpp = 1227 - intel_dp_dsc_get_output_bpp(dev_priv, 1228 - max_link_clock, 1229 - max_lanes, 1230 - target_clock, 1231 - mode->hdisplay, 1232 - bigjoiner, 1233 - pipe_bpp, 64) >> 4; 1186 + dsc_max_compressed_bpp = 1187 + intel_dp_dsc_get_max_compressed_bpp(dev_priv, 1188 + max_link_clock, 1189 + max_lanes, 1190 + target_clock, 1191 + mode->hdisplay, 1192 + bigjoiner, 1193 + output_format, 1194 + pipe_bpp, 64); 1234 1195 dsc_slice_count = 1235 1196 intel_dp_dsc_get_slice_count(intel_dp, 1236 1197 target_clock, ··· 1244 1193 bigjoiner); 1245 1194 } 1246 1195 1247 - dsc = dsc_max_output_bpp && dsc_slice_count; 1196 + dsc = dsc_max_compressed_bpp && dsc_slice_count; 1248 1197 } 1249 1198 1250 1199 /* ··· 1357 1306 static bool intel_dp_source_supports_fec(struct intel_dp *intel_dp, 1358 1307 const struct intel_crtc_state *pipe_config) 1359 1308 { 1309 + struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base; 1360 1310 struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 1361 1311 1362 - /* On TGL, FEC is supported on all Pipes */ 1363 1312 if (DISPLAY_VER(dev_priv) >= 12) 1364 1313 return true; 1365 1314 1366 - if (DISPLAY_VER(dev_priv) == 11 && pipe_config->cpu_transcoder != TRANSCODER_A) 1315 + if (DISPLAY_VER(dev_priv) == 11 && encoder->port != PORT_A) 1367 1316 return true; 1368 1317 1369 1318 return false; ··· 1470 1419 if (intel_dp->compliance.test_data.bpc != 0) { 1471 1420 int bpp = 3 * intel_dp->compliance.test_data.bpc; 1472 1421 1473 - limits->min_bpp = limits->max_bpp = bpp; 1422 + limits->pipe.min_bpp = limits->pipe.max_bpp = bpp; 1474 1423 
pipe_config->dither_force_disable = bpp == 6 * 3; 1475 1424 1476 1425 drm_dbg_kms(&i915->drm, "Setting pipe_bpp to %d\n", bpp); ··· 1532 1481 int bpp, i, lane_count, clock = intel_dp_mode_clock(pipe_config, conn_state); 1533 1482 int mode_rate, link_rate, link_avail; 1534 1483 1535 - for (bpp = limits->max_bpp; bpp >= limits->min_bpp; bpp -= 2 * 3) { 1536 - int output_bpp = intel_dp_output_bpp(pipe_config->output_format, bpp); 1484 + for (bpp = to_bpp_int(limits->link.max_bpp_x16); 1485 + bpp >= to_bpp_int(limits->link.min_bpp_x16); 1486 + bpp -= 2 * 3) { 1487 + int link_bpp = intel_dp_output_bpp(pipe_config->output_format, bpp); 1537 1488 1538 - mode_rate = intel_dp_link_required(clock, output_bpp); 1489 + mode_rate = intel_dp_link_required(clock, link_bpp); 1539 1490 1540 1491 for (i = 0; i < intel_dp->num_common_rates; i++) { 1541 1492 link_rate = intel_dp_common_rate(intel_dp, i); ··· 1565 1512 return -EINVAL; 1566 1513 } 1567 1514 1568 - int intel_dp_dsc_compute_bpp(struct intel_dp *intel_dp, u8 max_req_bpc) 1515 + static 1516 + u8 intel_dp_dsc_max_src_input_bpc(struct drm_i915_private *i915) 1517 + { 1518 + /* Max DSC Input BPC for ICL is 10 and for TGL+ is 12 */ 1519 + if (DISPLAY_VER(i915) >= 12) 1520 + return 12; 1521 + if (DISPLAY_VER(i915) == 11) 1522 + return 10; 1523 + 1524 + return 0; 1525 + } 1526 + 1527 + int intel_dp_dsc_compute_max_bpp(struct intel_dp *intel_dp, u8 max_req_bpc) 1569 1528 { 1570 1529 struct drm_i915_private *i915 = dp_to_i915(intel_dp); 1571 1530 int i, num_bpc; 1572 1531 u8 dsc_bpc[3] = {0}; 1573 1532 u8 dsc_max_bpc; 1574 1533 1575 - /* Max DSC Input BPC for ICL is 10 and for TGL+ is 12 */ 1576 - if (DISPLAY_VER(i915) >= 12) 1577 - dsc_max_bpc = min_t(u8, 12, max_req_bpc); 1578 - else 1579 - dsc_max_bpc = min_t(u8, 10, max_req_bpc); 1534 + dsc_max_bpc = intel_dp_dsc_max_src_input_bpc(i915); 1535 + 1536 + if (!dsc_max_bpc) 1537 + return dsc_max_bpc; 1538 + 1539 + dsc_max_bpc = min_t(u8, dsc_max_bpc, max_req_bpc); 1580 1540 1581 
1541 num_bpc = drm_dp_dsc_sink_supported_input_bpcs(intel_dp->dsc_dpcd, 1582 1542 dsc_bpc); ··· 1717 1651 return drm_dp_dsc_sink_supports_format(intel_dp->dsc_dpcd, sink_dsc_format); 1718 1652 } 1719 1653 1654 + static bool is_bw_sufficient_for_dsc_config(u16 compressed_bpp, u32 link_clock, 1655 + u32 lane_count, u32 mode_clock, 1656 + enum intel_output_format output_format, 1657 + int timeslots) 1658 + { 1659 + u32 available_bw, required_bw; 1660 + 1661 + available_bw = (link_clock * lane_count * timeslots) / 8; 1662 + required_bw = compressed_bpp * (intel_dp_mode_to_fec_clock(mode_clock)); 1663 + 1664 + return available_bw > required_bw; 1665 + } 1666 + 1667 + static int dsc_compute_link_config(struct intel_dp *intel_dp, 1668 + struct intel_crtc_state *pipe_config, 1669 + struct link_config_limits *limits, 1670 + u16 compressed_bpp, 1671 + int timeslots) 1672 + { 1673 + const struct drm_display_mode *adjusted_mode = &pipe_config->hw.adjusted_mode; 1674 + int link_rate, lane_count; 1675 + int i; 1676 + 1677 + for (i = 0; i < intel_dp->num_common_rates; i++) { 1678 + link_rate = intel_dp_common_rate(intel_dp, i); 1679 + if (link_rate < limits->min_rate || link_rate > limits->max_rate) 1680 + continue; 1681 + 1682 + for (lane_count = limits->min_lane_count; 1683 + lane_count <= limits->max_lane_count; 1684 + lane_count <<= 1) { 1685 + if (!is_bw_sufficient_for_dsc_config(compressed_bpp, link_rate, lane_count, 1686 + adjusted_mode->clock, 1687 + pipe_config->output_format, 1688 + timeslots)) 1689 + continue; 1690 + 1691 + pipe_config->lane_count = lane_count; 1692 + pipe_config->port_clock = link_rate; 1693 + 1694 + return 0; 1695 + } 1696 + } 1697 + 1698 + return -EINVAL; 1699 + } 1700 + 1701 + static 1702 + u16 intel_dp_dsc_max_sink_compressed_bppx16(struct intel_dp *intel_dp, 1703 + struct intel_crtc_state *pipe_config, 1704 + int bpc) 1705 + { 1706 + u16 max_bppx16 = drm_edp_dsc_sink_output_bpp(intel_dp->dsc_dpcd); 1707 + 1708 + if (max_bppx16) 1709 + return 
max_bppx16; 1710 + /* 1711 + * If support is not given in DPCD 67h, 68h use the Maximum Allowed bit rate 1712 + * values as given in spec Table 2-157 DP v2.0 1713 + */ 1714 + switch (pipe_config->output_format) { 1715 + case INTEL_OUTPUT_FORMAT_RGB: 1716 + case INTEL_OUTPUT_FORMAT_YCBCR444: 1717 + return (3 * bpc) << 4; 1718 + case INTEL_OUTPUT_FORMAT_YCBCR420: 1719 + return (3 * (bpc / 2)) << 4; 1720 + default: 1721 + MISSING_CASE(pipe_config->output_format); 1722 + break; 1723 + } 1724 + 1725 + return 0; 1726 + } 1727 + 1728 + static int dsc_sink_min_compressed_bpp(struct intel_crtc_state *pipe_config) 1729 + { 1730 + /* From Mandatory bit rate range Support Table 2-157 (DP v2.0) */ 1731 + switch (pipe_config->output_format) { 1732 + case INTEL_OUTPUT_FORMAT_RGB: 1733 + case INTEL_OUTPUT_FORMAT_YCBCR444: 1734 + return 8; 1735 + case INTEL_OUTPUT_FORMAT_YCBCR420: 1736 + return 6; 1737 + default: 1738 + MISSING_CASE(pipe_config->output_format); 1739 + break; 1740 + } 1741 + 1742 + return 0; 1743 + } 1744 + 1745 + static int dsc_sink_max_compressed_bpp(struct intel_dp *intel_dp, 1746 + struct intel_crtc_state *pipe_config, 1747 + int bpc) 1748 + { 1749 + return intel_dp_dsc_max_sink_compressed_bppx16(intel_dp, 1750 + pipe_config, bpc) >> 4; 1751 + } 1752 + 1753 + static int dsc_src_min_compressed_bpp(void) 1754 + { 1755 + /* Min Compressed bpp supported by source is 8 */ 1756 + return 8; 1757 + } 1758 + 1759 + static int dsc_src_max_compressed_bpp(struct intel_dp *intel_dp) 1760 + { 1761 + struct drm_i915_private *i915 = dp_to_i915(intel_dp); 1762 + 1763 + /* 1764 + * Max Compressed bpp for Gen 13+ is 27bpp. 1765 + * For earlier platforms it is 23bpp. (Bspec:49259). 1766 + */ 1767 + if (DISPLAY_VER(i915) <= 12) 1768 + return 23; 1769 + else 1770 + return 27; 1771 + } 1772 + 1773 + /* 1774 + * From a list of valid compressed bpps try different compressed bpp and find a 1775 + * suitable link configuration that can support it. 
1776 + */ 1777 + static int 1778 + icl_dsc_compute_link_config(struct intel_dp *intel_dp, 1779 + struct intel_crtc_state *pipe_config, 1780 + struct link_config_limits *limits, 1781 + int dsc_max_bpp, 1782 + int dsc_min_bpp, 1783 + int pipe_bpp, 1784 + int timeslots) 1785 + { 1786 + int i, ret; 1787 + 1788 + /* Compressed BPP should be less than the Input DSC bpp */ 1789 + dsc_max_bpp = min(dsc_max_bpp, pipe_bpp - 1); 1790 + 1791 + for (i = 0; i < ARRAY_SIZE(valid_dsc_bpp); i++) { 1792 + if (valid_dsc_bpp[i] < dsc_min_bpp || 1793 + valid_dsc_bpp[i] > dsc_max_bpp) 1794 + break; 1795 + 1796 + ret = dsc_compute_link_config(intel_dp, 1797 + pipe_config, 1798 + limits, 1799 + valid_dsc_bpp[i], 1800 + timeslots); 1801 + if (ret == 0) { 1802 + pipe_config->dsc.compressed_bpp = valid_dsc_bpp[i]; 1803 + return 0; 1804 + } 1805 + } 1806 + 1807 + return -EINVAL; 1808 + } 1809 + 1810 + /* 1811 + * From XE_LPD onwards we support compression bpps in steps of 1 up to 1812 + * uncompressed bpp-1. So we start from max compressed bpp and see if any 1813 + * link configuration is able to support that compressed bpp, if not we 1814 + * step down and check for lower compressed bpp. 
1815 + */ 1816 + static int 1817 + xelpd_dsc_compute_link_config(struct intel_dp *intel_dp, 1818 + struct intel_crtc_state *pipe_config, 1819 + struct link_config_limits *limits, 1820 + int dsc_max_bpp, 1821 + int dsc_min_bpp, 1822 + int pipe_bpp, 1823 + int timeslots) 1824 + { 1825 + u16 compressed_bpp; 1826 + int ret; 1827 + 1828 + /* Compressed BPP should be less than the Input DSC bpp */ 1829 + dsc_max_bpp = min(dsc_max_bpp, pipe_bpp - 1); 1830 + 1831 + for (compressed_bpp = dsc_max_bpp; 1832 + compressed_bpp >= dsc_min_bpp; 1833 + compressed_bpp--) { 1834 + ret = dsc_compute_link_config(intel_dp, 1835 + pipe_config, 1836 + limits, 1837 + compressed_bpp, 1838 + timeslots); 1839 + if (ret == 0) { 1840 + pipe_config->dsc.compressed_bpp = compressed_bpp; 1841 + return 0; 1842 + } 1843 + } 1844 + return -EINVAL; 1845 + } 1846 + 1847 + static int dsc_compute_compressed_bpp(struct intel_dp *intel_dp, 1848 + struct intel_crtc_state *pipe_config, 1849 + struct link_config_limits *limits, 1850 + int pipe_bpp, 1851 + int timeslots) 1852 + { 1853 + const struct drm_display_mode *adjusted_mode = &pipe_config->hw.adjusted_mode; 1854 + struct drm_i915_private *i915 = dp_to_i915(intel_dp); 1855 + int dsc_src_min_bpp, dsc_sink_min_bpp, dsc_min_bpp; 1856 + int dsc_src_max_bpp, dsc_sink_max_bpp, dsc_max_bpp; 1857 + int dsc_joiner_max_bpp; 1858 + 1859 + dsc_src_min_bpp = dsc_src_min_compressed_bpp(); 1860 + dsc_sink_min_bpp = dsc_sink_min_compressed_bpp(pipe_config); 1861 + dsc_min_bpp = max(dsc_src_min_bpp, dsc_sink_min_bpp); 1862 + dsc_min_bpp = max(dsc_min_bpp, to_bpp_int_roundup(limits->link.min_bpp_x16)); 1863 + 1864 + dsc_src_max_bpp = dsc_src_max_compressed_bpp(intel_dp); 1865 + dsc_sink_max_bpp = dsc_sink_max_compressed_bpp(intel_dp, pipe_config, pipe_bpp / 3); 1866 + dsc_max_bpp = dsc_sink_max_bpp ? 
min(dsc_sink_max_bpp, dsc_src_max_bpp) : dsc_src_max_bpp; 1867 + 1868 + dsc_joiner_max_bpp = get_max_compressed_bpp_with_joiner(i915, adjusted_mode->clock, 1869 + adjusted_mode->hdisplay, 1870 + pipe_config->bigjoiner_pipes); 1871 + dsc_max_bpp = min(dsc_max_bpp, dsc_joiner_max_bpp); 1872 + dsc_max_bpp = min(dsc_max_bpp, to_bpp_int(limits->link.max_bpp_x16)); 1873 + 1874 + if (DISPLAY_VER(i915) >= 13) 1875 + return xelpd_dsc_compute_link_config(intel_dp, pipe_config, limits, 1876 + dsc_max_bpp, dsc_min_bpp, pipe_bpp, timeslots); 1877 + return icl_dsc_compute_link_config(intel_dp, pipe_config, limits, 1878 + dsc_max_bpp, dsc_min_bpp, pipe_bpp, timeslots); 1879 + } 1880 + 1881 + static 1882 + u8 intel_dp_dsc_min_src_input_bpc(struct drm_i915_private *i915) 1883 + { 1884 + /* Min DSC Input BPC for ICL+ is 8 */ 1885 + return HAS_DSC(i915) ? 8 : 0; 1886 + } 1887 + 1888 + static 1889 + bool is_dsc_pipe_bpp_sufficient(struct drm_i915_private *i915, 1890 + struct drm_connector_state *conn_state, 1891 + struct link_config_limits *limits, 1892 + int pipe_bpp) 1893 + { 1894 + u8 dsc_max_bpc, dsc_min_bpc, dsc_max_pipe_bpp, dsc_min_pipe_bpp; 1895 + 1896 + dsc_max_bpc = min(intel_dp_dsc_max_src_input_bpc(i915), conn_state->max_requested_bpc); 1897 + dsc_min_bpc = intel_dp_dsc_min_src_input_bpc(i915); 1898 + 1899 + dsc_max_pipe_bpp = min(dsc_max_bpc * 3, limits->pipe.max_bpp); 1900 + dsc_min_pipe_bpp = max(dsc_min_bpc * 3, limits->pipe.min_bpp); 1901 + 1902 + return pipe_bpp >= dsc_min_pipe_bpp && 1903 + pipe_bpp <= dsc_max_pipe_bpp; 1904 + } 1905 + 1906 + static 1907 + int intel_dp_force_dsc_pipe_bpp(struct intel_dp *intel_dp, 1908 + struct drm_connector_state *conn_state, 1909 + struct link_config_limits *limits) 1910 + { 1911 + struct drm_i915_private *i915 = dp_to_i915(intel_dp); 1912 + int forced_bpp; 1913 + 1914 + if (!intel_dp->force_dsc_bpc) 1915 + return 0; 1916 + 1917 + forced_bpp = intel_dp->force_dsc_bpc * 3; 1918 + 1919 + if (is_dsc_pipe_bpp_sufficient(i915, 
conn_state, limits, forced_bpp)) { 1920 + drm_dbg_kms(&i915->drm, "Input DSC BPC forced to %d\n", intel_dp->force_dsc_bpc); 1921 + return forced_bpp; 1922 + } 1923 + 1924 + drm_dbg_kms(&i915->drm, "Cannot force DSC BPC:%d, due to DSC BPC limits\n", 1925 + intel_dp->force_dsc_bpc); 1926 + 1927 + return 0; 1928 + } 1929 + 1930 + static int intel_dp_dsc_compute_pipe_bpp(struct intel_dp *intel_dp, 1931 + struct intel_crtc_state *pipe_config, 1932 + struct drm_connector_state *conn_state, 1933 + struct link_config_limits *limits, 1934 + int timeslots) 1935 + { 1936 + struct drm_i915_private *i915 = dp_to_i915(intel_dp); 1937 + u8 max_req_bpc = conn_state->max_requested_bpc; 1938 + u8 dsc_max_bpc, dsc_max_bpp; 1939 + u8 dsc_min_bpc, dsc_min_bpp; 1940 + u8 dsc_bpc[3] = {0}; 1941 + int forced_bpp, pipe_bpp; 1942 + int num_bpc, i, ret; 1943 + 1944 + forced_bpp = intel_dp_force_dsc_pipe_bpp(intel_dp, conn_state, limits); 1945 + 1946 + if (forced_bpp) { 1947 + ret = dsc_compute_compressed_bpp(intel_dp, pipe_config, 1948 + limits, forced_bpp, timeslots); 1949 + if (ret == 0) { 1950 + pipe_config->pipe_bpp = forced_bpp; 1951 + return 0; 1952 + } 1953 + } 1954 + 1955 + dsc_max_bpc = intel_dp_dsc_max_src_input_bpc(i915); 1956 + if (!dsc_max_bpc) 1957 + return -EINVAL; 1958 + 1959 + dsc_max_bpc = min_t(u8, dsc_max_bpc, max_req_bpc); 1960 + dsc_max_bpp = min(dsc_max_bpc * 3, limits->pipe.max_bpp); 1961 + 1962 + dsc_min_bpc = intel_dp_dsc_min_src_input_bpc(i915); 1963 + dsc_min_bpp = max(dsc_min_bpc * 3, limits->pipe.min_bpp); 1964 + 1965 + /* 1966 + * Get the maximum DSC bpc that will be supported by any valid 1967 + * link configuration and compressed bpp. 
1968 + */ 1969 + num_bpc = drm_dp_dsc_sink_supported_input_bpcs(intel_dp->dsc_dpcd, dsc_bpc); 1970 + for (i = 0; i < num_bpc; i++) { 1971 + pipe_bpp = dsc_bpc[i] * 3; 1972 + if (pipe_bpp < dsc_min_bpp) 1973 + break; 1974 + if (pipe_bpp > dsc_max_bpp) 1975 + continue; 1976 + ret = dsc_compute_compressed_bpp(intel_dp, pipe_config, 1977 + limits, pipe_bpp, timeslots); 1978 + if (ret == 0) { 1979 + pipe_config->pipe_bpp = pipe_bpp; 1980 + return 0; 1981 + } 1982 + } 1983 + 1984 + return -EINVAL; 1985 + } 1986 + 1987 + static int intel_edp_dsc_compute_pipe_bpp(struct intel_dp *intel_dp, 1988 + struct intel_crtc_state *pipe_config, 1989 + struct drm_connector_state *conn_state, 1990 + struct link_config_limits *limits) 1991 + { 1992 + struct drm_i915_private *i915 = dp_to_i915(intel_dp); 1993 + int pipe_bpp, forced_bpp; 1994 + int dsc_src_min_bpp, dsc_sink_min_bpp, dsc_min_bpp; 1995 + int dsc_src_max_bpp, dsc_sink_max_bpp, dsc_max_bpp; 1996 + 1997 + forced_bpp = intel_dp_force_dsc_pipe_bpp(intel_dp, conn_state, limits); 1998 + 1999 + if (forced_bpp) { 2000 + pipe_bpp = forced_bpp; 2001 + } else { 2002 + int max_bpc = min(limits->pipe.max_bpp / 3, (int)conn_state->max_requested_bpc); 2003 + 2004 + /* For eDP use max bpp that can be supported with DSC. 
*/ 2005 + pipe_bpp = intel_dp_dsc_compute_max_bpp(intel_dp, max_bpc); 2006 + if (!is_dsc_pipe_bpp_sufficient(i915, conn_state, limits, pipe_bpp)) { 2007 + drm_dbg_kms(&i915->drm, 2008 + "Computed BPC is not in DSC BPC limits\n"); 2009 + return -EINVAL; 2010 + } 2011 + } 2012 + pipe_config->port_clock = limits->max_rate; 2013 + pipe_config->lane_count = limits->max_lane_count; 2014 + 2015 + dsc_src_min_bpp = dsc_src_min_compressed_bpp(); 2016 + dsc_sink_min_bpp = dsc_sink_min_compressed_bpp(pipe_config); 2017 + dsc_min_bpp = max(dsc_src_min_bpp, dsc_sink_min_bpp); 2018 + dsc_min_bpp = max(dsc_min_bpp, to_bpp_int_roundup(limits->link.min_bpp_x16)); 2019 + 2020 + dsc_src_max_bpp = dsc_src_max_compressed_bpp(intel_dp); 2021 + dsc_sink_max_bpp = dsc_sink_max_compressed_bpp(intel_dp, pipe_config, pipe_bpp / 3); 2022 + dsc_max_bpp = dsc_sink_max_bpp ? min(dsc_sink_max_bpp, dsc_src_max_bpp) : dsc_src_max_bpp; 2023 + dsc_max_bpp = min(dsc_max_bpp, to_bpp_int(limits->link.max_bpp_x16)); 2024 + 2025 + /* Compressed BPP should be less than the Input DSC bpp */ 2026 + dsc_max_bpp = min(dsc_max_bpp, pipe_bpp - 1); 2027 + 2028 + pipe_config->dsc.compressed_bpp = max(dsc_min_bpp, dsc_max_bpp); 2029 + 2030 + pipe_config->pipe_bpp = pipe_bpp; 2031 + 2032 + return 0; 2033 + } 2034 + 1720 2035 int intel_dp_dsc_compute_config(struct intel_dp *intel_dp, 1721 2036 struct intel_crtc_state *pipe_config, 1722 2037 struct drm_connector_state *conn_state, ··· 2109 1662 struct drm_i915_private *dev_priv = to_i915(dig_port->base.base.dev); 2110 1663 const struct drm_display_mode *adjusted_mode = 2111 1664 &pipe_config->hw.adjusted_mode; 2112 - int pipe_bpp; 2113 1665 int ret; 2114 1666 2115 1667 pipe_config->fec_enable = !intel_dp_is_edp(intel_dp) && ··· 2120 1674 if (!intel_dp_dsc_supports_format(intel_dp, pipe_config->output_format)) 2121 1675 return -EINVAL; 2122 1676 2123 - if (compute_pipe_bpp) 2124 - pipe_bpp = intel_dp_dsc_compute_bpp(intel_dp, conn_state->max_requested_bpc); 2125 - else 
2126 - pipe_bpp = pipe_config->pipe_bpp; 2127 - 2128 - if (intel_dp->force_dsc_bpc) { 2129 - pipe_bpp = intel_dp->force_dsc_bpc * 3; 2130 - drm_dbg_kms(&dev_priv->drm, "Input DSC BPP forced to %d", pipe_bpp); 2131 - } 2132 - 2133 - /* Min Input BPC for ICL+ is 8 */ 2134 - if (pipe_bpp < 8 * 3) { 2135 - drm_dbg_kms(&dev_priv->drm, 2136 - "No DSC support for less than 8bpc\n"); 2137 - return -EINVAL; 2138 - } 2139 - 2140 1677 /* 2141 - * For now enable DSC for max bpp, max link rate, max lane count. 2142 - * Optimize this later for the minimum possible link rate/lane count 2143 - * with DSC enabled for the requested mode. 1678 + * compute pipe bpp is set to false for DP MST DSC case 1679 + * and compressed_bpp is calculated same time once 1680 + * vpci timeslots are allocated, because overall bpp 1681 + * calculation procedure is bit different for MST case. 2144 1682 */ 2145 - pipe_config->pipe_bpp = pipe_bpp; 2146 - pipe_config->port_clock = limits->max_rate; 2147 - pipe_config->lane_count = limits->max_lane_count; 1683 + if (compute_pipe_bpp) { 1684 + if (intel_dp_is_edp(intel_dp)) 1685 + ret = intel_edp_dsc_compute_pipe_bpp(intel_dp, pipe_config, 1686 + conn_state, limits); 1687 + else 1688 + ret = intel_dp_dsc_compute_pipe_bpp(intel_dp, pipe_config, 1689 + conn_state, limits, timeslots); 1690 + if (ret) { 1691 + drm_dbg_kms(&dev_priv->drm, 1692 + "No Valid pipe bpp for given mode ret = %d\n", ret); 1693 + return ret; 1694 + } 1695 + } 2148 1696 1697 + /* Calculate Slice count */ 2149 1698 if (intel_dp_is_edp(intel_dp)) { 2150 - pipe_config->dsc.compressed_bpp = 2151 - min_t(u16, drm_edp_dsc_sink_output_bpp(intel_dp->dsc_dpcd) >> 4, 2152 - pipe_config->pipe_bpp); 2153 1699 pipe_config->dsc.slice_count = 2154 1700 drm_dp_dsc_sink_max_slice_count(intel_dp->dsc_dpcd, 2155 1701 true); ··· 2151 1713 return -EINVAL; 2152 1714 } 2153 1715 } else { 2154 - u16 dsc_max_output_bpp = 0; 2155 1716 u8 dsc_dp_slice_count; 2156 1717 2157 - if (compute_pipe_bpp) { 2158 - 
dsc_max_output_bpp = 2159 - intel_dp_dsc_get_output_bpp(dev_priv, 2160 - pipe_config->port_clock, 2161 - pipe_config->lane_count, 2162 - adjusted_mode->crtc_clock, 2163 - adjusted_mode->crtc_hdisplay, 2164 - pipe_config->bigjoiner_pipes, 2165 - pipe_bpp, 2166 - timeslots); 2167 - /* 2168 - * According to DSC 1.2a Section 4.1.1 Table 4.1 the maximum 2169 - * supported PPS value can be 63.9375 and with the further 2170 - * mention that bpp should be programmed double the target bpp 2171 - * restricting our target bpp to be 31.9375 at max 2172 - */ 2173 - if (pipe_config->output_format == INTEL_OUTPUT_FORMAT_YCBCR420) 2174 - dsc_max_output_bpp = min_t(u16, dsc_max_output_bpp, 31 << 4); 2175 - 2176 - if (!dsc_max_output_bpp) { 2177 - drm_dbg_kms(&dev_priv->drm, 2178 - "Compressed BPP not supported\n"); 2179 - return -EINVAL; 2180 - } 2181 - } 2182 1718 dsc_dp_slice_count = 2183 1719 intel_dp_dsc_get_slice_count(intel_dp, 2184 1720 adjusted_mode->crtc_clock, ··· 2164 1752 return -EINVAL; 2165 1753 } 2166 1754 2167 - /* 2168 - * compute pipe bpp is set to false for DP MST DSC case 2169 - * and compressed_bpp is calculated same time once 2170 - * vpci timeslots are allocated, because overall bpp 2171 - * calculation procedure is bit different for MST case. 
2172 - */ 2173 - if (compute_pipe_bpp) { 2174 - pipe_config->dsc.compressed_bpp = min_t(u16, 2175 - dsc_max_output_bpp >> 4, 2176 - pipe_config->pipe_bpp); 2177 - } 2178 1755 pipe_config->dsc.slice_count = dsc_dp_slice_count; 2179 - drm_dbg_kms(&dev_priv->drm, "DSC: compressed bpp %d slice count %d\n", 2180 - pipe_config->dsc.compressed_bpp, 2181 - pipe_config->dsc.slice_count); 2182 1756 } 2183 1757 /* 2184 1758 * VDSC engine operates at 1 Pixel per clock, so if peak pixel rate ··· 2194 1796 return 0; 2195 1797 } 2196 1798 1799 + /** 1800 + * intel_dp_compute_config_link_bpp_limits - compute output link bpp limits 1801 + * @intel_dp: intel DP 1802 + * @crtc_state: crtc state 1803 + * @dsc: DSC compression mode 1804 + * @limits: link configuration limits 1805 + * 1806 + * Calculates the output link min, max bpp values in @limits based on the 1807 + * pipe bpp range, @crtc_state and @dsc mode. 1808 + * 1809 + * Returns %true in case of success. 1810 + */ 1811 + bool 1812 + intel_dp_compute_config_link_bpp_limits(struct intel_dp *intel_dp, 1813 + const struct intel_crtc_state *crtc_state, 1814 + bool dsc, 1815 + struct link_config_limits *limits) 1816 + { 1817 + struct drm_i915_private *i915 = to_i915(crtc_state->uapi.crtc->dev); 1818 + const struct drm_display_mode *adjusted_mode = 1819 + &crtc_state->hw.adjusted_mode; 1820 + const struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc); 1821 + const struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base; 1822 + int max_link_bpp_x16; 1823 + 1824 + max_link_bpp_x16 = min(crtc_state->max_link_bpp_x16, 1825 + to_bpp_x16(limits->pipe.max_bpp)); 1826 + 1827 + if (!dsc) { 1828 + max_link_bpp_x16 = rounddown(max_link_bpp_x16, to_bpp_x16(2 * 3)); 1829 + 1830 + if (max_link_bpp_x16 < to_bpp_x16(limits->pipe.min_bpp)) 1831 + return false; 1832 + 1833 + limits->link.min_bpp_x16 = to_bpp_x16(limits->pipe.min_bpp); 1834 + } else { 1835 + /* 1836 + * TODO: set the DSC link limits already here, atm these are 1837 + 
* initialized only later in intel_edp_dsc_compute_pipe_bpp() / 1838 + * intel_dp_dsc_compute_pipe_bpp() 1839 + */ 1840 + limits->link.min_bpp_x16 = 0; 1841 + } 1842 + 1843 + limits->link.max_bpp_x16 = max_link_bpp_x16; 1844 + 1845 + drm_dbg_kms(&i915->drm, 1846 + "[ENCODER:%d:%s][CRTC:%d:%s] DP link limits: pixel clock %d kHz DSC %s max lanes %d max rate %d max pipe_bpp %d max link_bpp " BPP_X16_FMT "\n", 1847 + encoder->base.base.id, encoder->base.name, 1848 + crtc->base.base.id, crtc->base.name, 1849 + adjusted_mode->crtc_clock, 1850 + dsc ? "on" : "off", 1851 + limits->max_lane_count, 1852 + limits->max_rate, 1853 + limits->pipe.max_bpp, 1854 + BPP_X16_ARGS(limits->link.max_bpp_x16)); 1855 + 1856 + return true; 1857 + } 1858 + 1859 + static bool 1860 + intel_dp_compute_config_limits(struct intel_dp *intel_dp, 1861 + struct intel_crtc_state *crtc_state, 1862 + bool respect_downstream_limits, 1863 + bool dsc, 1864 + struct link_config_limits *limits) 1865 + { 1866 + limits->min_rate = intel_dp_common_rate(intel_dp, 0); 1867 + limits->max_rate = intel_dp_max_link_rate(intel_dp); 1868 + 1869 + limits->min_lane_count = 1; 1870 + limits->max_lane_count = intel_dp_max_lane_count(intel_dp); 1871 + 1872 + limits->pipe.min_bpp = intel_dp_min_bpp(crtc_state->output_format); 1873 + limits->pipe.max_bpp = intel_dp_max_bpp(intel_dp, crtc_state, 1874 + respect_downstream_limits); 1875 + 1876 + if (intel_dp->use_max_params) { 1877 + /* 1878 + * Use the maximum clock and number of lanes the eDP panel 1879 + * advertizes being capable of in case the initial fast 1880 + * optimal params failed us. The panels are generally 1881 + * designed to support only a single clock and lane 1882 + * configuration, and typically on older panels these 1883 + * values correspond to the native resolution of the panel. 
1884 + */ 1885 + limits->min_lane_count = limits->max_lane_count; 1886 + limits->min_rate = limits->max_rate; 1887 + } 1888 + 1889 + intel_dp_adjust_compliance_config(intel_dp, crtc_state, limits); 1890 + 1891 + return intel_dp_compute_config_link_bpp_limits(intel_dp, 1892 + crtc_state, 1893 + dsc, 1894 + limits); 1895 + } 1896 + 2197 1897 static int 2198 1898 intel_dp_compute_link_config(struct intel_encoder *encoder, 2199 1899 struct intel_crtc_state *pipe_config, ··· 2305 1809 struct intel_dp *intel_dp = enc_to_intel_dp(encoder); 2306 1810 struct link_config_limits limits; 2307 1811 bool joiner_needs_dsc = false; 2308 - int ret; 2309 - 2310 - limits.min_rate = intel_dp_common_rate(intel_dp, 0); 2311 - limits.max_rate = intel_dp_max_link_rate(intel_dp); 2312 - 2313 - limits.min_lane_count = 1; 2314 - limits.max_lane_count = intel_dp_max_lane_count(intel_dp); 2315 - 2316 - limits.min_bpp = intel_dp_min_bpp(pipe_config->output_format); 2317 - limits.max_bpp = intel_dp_max_bpp(intel_dp, pipe_config, respect_downstream_limits); 2318 - 2319 - if (intel_dp->use_max_params) { 2320 - /* 2321 - * Use the maximum clock and number of lanes the eDP panel 2322 - * advertizes being capable of in case the initial fast 2323 - * optimal params failed us. The panels are generally 2324 - * designed to support only a single clock and lane 2325 - * configuration, and typically on older panels these 2326 - * values correspond to the native resolution of the panel. 
2327 - */ 2328 - limits.min_lane_count = limits.max_lane_count; 2329 - limits.min_rate = limits.max_rate; 2330 - } 2331 - 2332 - intel_dp_adjust_compliance_config(intel_dp, pipe_config, &limits); 2333 - 2334 - drm_dbg_kms(&i915->drm, "DP link computation with max lane count %i " 2335 - "max rate %d max bpp %d pixel clock %iKHz\n", 2336 - limits.max_lane_count, limits.max_rate, 2337 - limits.max_bpp, adjusted_mode->crtc_clock); 1812 + bool dsc_needed; 1813 + int ret = 0; 2338 1814 2339 1815 if (intel_dp_need_bigjoiner(intel_dp, adjusted_mode->crtc_hdisplay, 2340 1816 adjusted_mode->crtc_clock)) ··· 2319 1851 */ 2320 1852 joiner_needs_dsc = DISPLAY_VER(i915) < 13 && pipe_config->bigjoiner_pipes; 2321 1853 2322 - /* 2323 - * Optimize for slow and wide for everything, because there are some 2324 - * eDP 1.3 and 1.4 panels don't work well with fast and narrow. 2325 - */ 2326 - ret = intel_dp_compute_link_config_wide(intel_dp, pipe_config, conn_state, &limits); 1854 + dsc_needed = joiner_needs_dsc || intel_dp->force_dsc_en || 1855 + !intel_dp_compute_config_limits(intel_dp, pipe_config, 1856 + respect_downstream_limits, 1857 + false, 1858 + &limits); 2327 1859 2328 - if (ret || joiner_needs_dsc || intel_dp->force_dsc_en) { 1860 + if (!dsc_needed) { 1861 + /* 1862 + * Optimize for slow and wide for everything, because there are some 1863 + * eDP 1.3 and 1.4 panels don't work well with fast and narrow. 
1864 + */ 1865 + ret = intel_dp_compute_link_config_wide(intel_dp, pipe_config, 1866 + conn_state, &limits); 1867 + if (ret) 1868 + dsc_needed = true; 1869 + } 1870 + 1871 + if (dsc_needed) { 2329 1872 drm_dbg_kms(&i915->drm, "Try DSC (fallback=%s, joiner=%s, force=%s)\n", 2330 1873 str_yes_no(ret), str_yes_no(joiner_needs_dsc), 2331 1874 str_yes_no(intel_dp->force_dsc_en)); 1875 + 1876 + if (!intel_dp_compute_config_limits(intel_dp, pipe_config, 1877 + respect_downstream_limits, 1878 + true, 1879 + &limits)) 1880 + return -EINVAL; 1881 + 2332 1882 ret = intel_dp_dsc_compute_config(intel_dp, pipe_config, 2333 1883 conn_state, &limits, 64, true); 2334 1884 if (ret < 0) ··· 2622 2136 static void 2623 2137 intel_dp_drrs_compute_config(struct intel_connector *connector, 2624 2138 struct intel_crtc_state *pipe_config, 2625 - int output_bpp) 2139 + int link_bpp) 2626 2140 { 2627 2141 struct drm_i915_private *i915 = to_i915(connector->base.dev); 2628 2142 const struct drm_display_mode *downclock_mode = ··· 2630 2144 int pixel_clock; 2631 2145 2632 2146 if (has_seamless_m_n(connector)) 2633 - pipe_config->seamless_m_n = true; 2147 + pipe_config->update_m_n = true; 2634 2148 2635 2149 if (!can_enable_drrs(connector, pipe_config, downclock_mode)) { 2636 2150 if (intel_cpu_transcoder_has_m2_n2(i915, pipe_config->cpu_transcoder)) ··· 2647 2161 if (pipe_config->splitter.enable) 2648 2162 pixel_clock /= pipe_config->splitter.link_count; 2649 2163 2650 - intel_link_compute_m_n(output_bpp, pipe_config->lane_count, pixel_clock, 2164 + intel_link_compute_m_n(link_bpp, pipe_config->lane_count, pixel_clock, 2651 2165 pipe_config->port_clock, &pipe_config->dp_m2_n2, 2652 2166 pipe_config->fec_enable); 2653 2167 ··· 2657 2171 } 2658 2172 2659 2173 static bool intel_dp_has_audio(struct intel_encoder *encoder, 2174 + struct intel_crtc_state *crtc_state, 2660 2175 const struct drm_connector_state *conn_state) 2661 2176 { 2662 2177 struct drm_i915_private *i915 = to_i915(encoder->base.dev); 
2663 - struct intel_dp *intel_dp = enc_to_intel_dp(encoder); 2664 - struct intel_connector *connector = intel_dp->attached_connector; 2665 2178 const struct intel_digital_connector_state *intel_conn_state = 2666 2179 to_intel_digital_connector_state(conn_state); 2180 + struct intel_connector *connector = 2181 + to_intel_connector(conn_state->connector); 2667 2182 2668 - if (!intel_dp_port_has_audio(i915, encoder->port)) 2183 + if (!intel_crtc_has_type(crtc_state, INTEL_OUTPUT_DP_MST) && 2184 + !intel_dp_port_has_audio(i915, encoder->port)) 2669 2185 return false; 2670 2186 2671 2187 if (intel_conn_state->force_audio == HDMI_AUDIO_AUTO) ··· 2720 2232 return ret; 2721 2233 } 2722 2234 2723 - static void 2235 + void 2724 2236 intel_dp_audio_compute_config(struct intel_encoder *encoder, 2725 2237 struct intel_crtc_state *pipe_config, 2726 2238 struct drm_connector_state *conn_state) ··· 2728 2240 struct drm_i915_private *i915 = to_i915(encoder->base.dev); 2729 2241 struct drm_connector *connector = conn_state->connector; 2730 2242 2731 - pipe_config->sdp_split_enable = 2732 - intel_dp_has_audio(encoder, conn_state) && 2733 - intel_dp_is_uhbr(pipe_config); 2243 + pipe_config->has_audio = 2244 + intel_dp_has_audio(encoder, pipe_config, conn_state) && 2245 + intel_audio_compute_config(encoder, pipe_config, conn_state); 2246 + 2247 + pipe_config->sdp_split_enable = pipe_config->has_audio && 2248 + intel_dp_is_uhbr(pipe_config); 2734 2249 2735 2250 drm_dbg_kms(&i915->drm, "[CONNECTOR:%d:%s] SDP split enable: %s\n", 2736 2251 connector->base.id, connector->name, ··· 2750 2259 struct intel_dp *intel_dp = enc_to_intel_dp(encoder); 2751 2260 const struct drm_display_mode *fixed_mode; 2752 2261 struct intel_connector *connector = intel_dp->attached_connector; 2753 - int ret = 0, output_bpp; 2262 + int ret = 0, link_bpp; 2754 2263 2755 2264 if (HAS_PCH_SPLIT(dev_priv) && !HAS_DDI(dev_priv) && encoder->port != PORT_A) 2756 2265 pipe_config->has_pch_encoder = true; 2757 - 2758 - 
pipe_config->has_audio = 2759 - intel_dp_has_audio(encoder, conn_state) && 2760 - intel_audio_compute_config(encoder, pipe_config, conn_state); 2761 2266 2762 2267 fixed_mode = intel_panel_fixed_mode(connector, adjusted_mode); 2763 2268 if (intel_dp_is_edp(intel_dp) && fixed_mode) { ··· 2795 2308 pipe_config->limited_color_range = 2796 2309 intel_dp_limited_color_range(pipe_config, conn_state); 2797 2310 2311 + pipe_config->enhanced_framing = 2312 + drm_dp_enhanced_frame_cap(intel_dp->dpcd); 2313 + 2798 2314 if (pipe_config->dsc.compression_enable) 2799 - output_bpp = pipe_config->dsc.compressed_bpp; 2315 + link_bpp = pipe_config->dsc.compressed_bpp; 2800 2316 else 2801 - output_bpp = intel_dp_output_bpp(pipe_config->output_format, 2802 - pipe_config->pipe_bpp); 2317 + link_bpp = intel_dp_output_bpp(pipe_config->output_format, 2318 + pipe_config->pipe_bpp); 2803 2319 2804 2320 if (intel_dp->mso_link_count) { 2805 2321 int n = intel_dp->mso_link_count; ··· 2826 2336 2827 2337 intel_dp_audio_compute_config(encoder, pipe_config, conn_state); 2828 2338 2829 - intel_link_compute_m_n(output_bpp, 2339 + intel_link_compute_m_n(link_bpp, 2830 2340 pipe_config->lane_count, 2831 2341 adjusted_mode->crtc_clock, 2832 2342 pipe_config->port_clock, ··· 2842 2352 2843 2353 intel_vrr_compute_config(pipe_config, conn_state); 2844 2354 intel_psr_compute_config(intel_dp, pipe_config, conn_state); 2845 - intel_dp_drrs_compute_config(connector, pipe_config, output_bpp); 2355 + intel_dp_drrs_compute_config(connector, pipe_config, link_bpp); 2846 2356 intel_dp_compute_vsc_sdp(intel_dp, pipe_config, conn_state); 2847 2357 intel_dp_compute_hdr_metadata_infoframe_sdp(intel_dp, pipe_config, conn_state); 2848 2358 ··· 5298 4808 struct drm_i915_private *i915 = dp_to_i915(intel_dp); 5299 4809 struct intel_connector *connector = intel_dp->attached_connector; 5300 4810 const struct drm_edid *drm_edid; 5301 - const struct edid *edid; 5302 4811 bool vrr_capable; 5303 4812 5304 4813 
intel_dp_unset_edid(intel_dp); ··· 5315 4826 intel_dp_update_dfp(intel_dp, drm_edid); 5316 4827 intel_dp_update_420(intel_dp); 5317 4828 5318 - /* FIXME: Get rid of drm_edid_raw() */ 5319 - edid = drm_edid_raw(drm_edid); 5320 - 5321 - drm_dp_cec_set_edid(&intel_dp->aux, edid); 4829 + drm_dp_cec_attach(&intel_dp->aux, 4830 + connector->base.display_info.source_physical_address); 5322 4831 } 5323 4832 5324 4833 static void ··· 5444 4957 if (status != connector_status_connected && !intel_dp->is_mst) 5445 4958 intel_dp_unset_edid(intel_dp); 5446 4959 5447 - /* 5448 - * Make sure the refs for power wells enabled during detect are 5449 - * dropped to avoid a new detect cycle triggered by HPD polling. 5450 - */ 5451 - intel_display_power_flush_work(dev_priv); 5452 - 5453 4960 if (!intel_dp_is_edp(intel_dp)) 5454 4961 drm_dp_set_subconnector_property(connector, 5455 4962 status, ··· 5459 4978 struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); 5460 4979 struct intel_encoder *intel_encoder = &dig_port->base; 5461 4980 struct drm_i915_private *dev_priv = to_i915(intel_encoder->base.dev); 5462 - enum intel_display_power_domain aux_domain = 5463 - intel_aux_power_domain(dig_port); 5464 - intel_wakeref_t wakeref; 5465 4981 5466 4982 drm_dbg_kms(&dev_priv->drm, "[CONNECTOR:%d:%s]\n", 5467 4983 connector->base.id, connector->name); ··· 5467 4989 if (connector->status != connector_status_connected) 5468 4990 return; 5469 4991 5470 - wakeref = intel_display_power_get(dev_priv, aux_domain); 5471 - 5472 4992 intel_dp_set_edid(intel_dp); 5473 - 5474 - intel_display_power_put(dev_priv, aux_domain, wakeref); 5475 4993 } 5476 4994 5477 4995 static int intel_dp_get_modes(struct drm_connector *connector) ··· 6007 5533 } 6008 5534 6009 5535 mutex_lock(&dev_priv->drm.mode_config.mutex); 6010 - drm_edid = drm_edid_read_ddc(connector, &intel_dp->aux.ddc); 5536 + drm_edid = drm_edid_read_ddc(connector, connector->ddc); 6011 5537 if (!drm_edid) { 6012 5538 /* Fallback to EDID from 
ACPI OpRegion, if any */ 6013 5539 drm_edid = intel_opregion_get_edid(intel_connector); ··· 6146 5672 if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) 6147 5673 intel_dp->pps.active_pipe = vlv_active_pipe(intel_dp); 6148 5674 5675 + intel_dp_aux_init(intel_dp); 5676 + 6149 5677 drm_dbg_kms(&dev_priv->drm, 6150 5678 "Adding %s connector on [ENCODER:%d:%s]\n", 6151 5679 type == DRM_MODE_CONNECTOR_eDP ? "eDP" : "DP", 6152 5680 intel_encoder->base.base.id, intel_encoder->base.name); 6153 5681 6154 - drm_connector_init(dev, connector, &intel_dp_connector_funcs, type); 5682 + drm_connector_init_with_ddc(dev, connector, &intel_dp_connector_funcs, 5683 + type, &intel_dp->aux.ddc); 6155 5684 drm_connector_helper_add(connector, &intel_dp_connector_helper_funcs); 6156 5685 6157 5686 if (!HAS_GMCH(dev_priv) && DISPLAY_VER(dev_priv) < 12) 6158 5687 connector->interlace_allowed = true; 6159 5688 6160 5689 intel_connector->polled = DRM_CONNECTOR_POLL_HPD; 6161 - 6162 - intel_dp_aux_init(intel_dp); 6163 5690 6164 5691 intel_connector_attach_encoder(intel_connector, intel_encoder); 6165 5692
+26 -8
drivers/gpu/drm/i915/display/intel_dp.h
··· 26 26 struct link_config_limits { 27 27 int min_rate, max_rate; 28 28 int min_lane_count, max_lane_count; 29 - int min_bpp, max_bpp; 29 + struct { 30 + /* Uncompressed DSC input or link output bpp in 1 bpp units */ 31 + int min_bpp, max_bpp; 32 + } pipe; 33 + struct { 34 + /* Compressed or uncompressed link output bpp in 1/16 bpp units */ 35 + int min_bpp_x16, max_bpp_x16; 36 + } link; 30 37 }; 31 38 32 39 void intel_edp_fixup_vbt_bpp(struct intel_encoder *encoder, int pipe_bpp); ··· 72 65 struct link_config_limits *limits, 73 66 int timeslots, 74 67 bool recompute_pipe_bpp); 68 + void intel_dp_audio_compute_config(struct intel_encoder *encoder, 69 + struct intel_crtc_state *pipe_config, 70 + struct drm_connector_state *conn_state); 75 71 bool intel_dp_has_hdmi_sink(struct intel_dp *intel_dp); 76 72 bool intel_dp_is_edp(struct intel_dp *intel_dp); 77 73 bool intel_dp_is_uhbr(const struct intel_crtc_state *crtc_state); ··· 116 106 struct intel_crtc_state *crtc_state, 117 107 unsigned int type); 118 108 bool intel_digital_port_connected(struct intel_encoder *encoder); 119 - int intel_dp_dsc_compute_bpp(struct intel_dp *intel_dp, u8 dsc_max_bpc); 120 - u16 intel_dp_dsc_get_output_bpp(struct drm_i915_private *i915, 121 - u32 link_clock, u32 lane_count, 122 - u32 mode_clock, u32 mode_hdisplay, 123 - bool bigjoiner, 124 - u32 pipe_bpp, 125 - u32 timeslots); 109 + int intel_dp_dsc_compute_max_bpp(struct intel_dp *intel_dp, u8 dsc_max_bpc); 110 + u16 intel_dp_dsc_get_max_compressed_bpp(struct drm_i915_private *i915, 111 + u32 link_clock, u32 lane_count, 112 + u32 mode_clock, u32 mode_hdisplay, 113 + bool bigjoiner, 114 + enum intel_output_format output_format, 115 + u32 pipe_bpp, 116 + u32 timeslots); 126 117 u8 intel_dp_dsc_get_slice_count(struct intel_dp *intel_dp, 127 118 int mode_clock, int mode_hdisplay, 128 119 bool bigjoiner); ··· 154 143 void intel_dp_phy_test(struct intel_encoder *encoder); 155 144 156 145 void intel_dp_wait_source_oui(struct intel_dp 
*intel_dp); 146 + int intel_dp_output_bpp(enum intel_output_format output_format, int bpp); 147 + 148 + bool 149 + intel_dp_compute_config_link_bpp_limits(struct intel_dp *intel_dp, 150 + const struct intel_crtc_state *crtc_state, 151 + bool dsc, 152 + struct link_config_limits *limits); 157 153 158 154 #endif /* __INTEL_DP_H__ */
+29 -20
drivers/gpu/drm/i915/display/intel_dp_aux.c
··· 14 14 #include "intel_pps.h" 15 15 #include "intel_tc.h" 16 16 17 + #define AUX_CH_NAME_BUFSIZE 6 18 + 19 + static const char *aux_ch_name(struct drm_i915_private *i915, 20 + char *buf, int size, enum aux_ch aux_ch) 21 + { 22 + if (DISPLAY_VER(i915) >= 13 && aux_ch >= AUX_CH_D_XELPD) 23 + snprintf(buf, size, "%c", 'A' + aux_ch - AUX_CH_D_XELPD + AUX_CH_D); 24 + else if (DISPLAY_VER(i915) >= 12 && aux_ch >= AUX_CH_USBC1) 25 + snprintf(buf, size, "USBC%c", '1' + aux_ch - AUX_CH_USBC1); 26 + else 27 + snprintf(buf, size, "%c", 'A' + aux_ch); 28 + 29 + return buf; 30 + } 31 + 17 32 u32 intel_dp_aux_pack(const u8 *src, int src_bytes) 18 33 { 19 34 int i; ··· 702 687 case AUX_CH_USBC2: 703 688 case AUX_CH_USBC3: 704 689 case AUX_CH_USBC4: 705 - return XELPDP_DP_AUX_CH_CTL(aux_ch); 690 + return XELPDP_DP_AUX_CH_CTL(dev_priv, aux_ch); 706 691 default: 707 692 MISSING_CASE(aux_ch); 708 - return XELPDP_DP_AUX_CH_CTL(AUX_CH_A); 693 + return XELPDP_DP_AUX_CH_CTL(dev_priv, AUX_CH_A); 709 694 } 710 695 } 711 696 ··· 722 707 case AUX_CH_USBC2: 723 708 case AUX_CH_USBC3: 724 709 case AUX_CH_USBC4: 725 - return XELPDP_DP_AUX_CH_DATA(aux_ch, index); 710 + return XELPDP_DP_AUX_CH_DATA(dev_priv, aux_ch, index); 726 711 default: 727 712 MISSING_CASE(aux_ch); 728 - return XELPDP_DP_AUX_CH_DATA(AUX_CH_A, index); 713 + return XELPDP_DP_AUX_CH_DATA(dev_priv, AUX_CH_A, index); 729 714 } 730 715 } 731 716 ··· 743 728 struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); 744 729 struct intel_encoder *encoder = &dig_port->base; 745 730 enum aux_ch aux_ch = dig_port->aux_ch; 731 + char buf[AUX_CH_NAME_BUFSIZE]; 746 732 747 733 if (DISPLAY_VER(dev_priv) >= 14) { 748 734 intel_dp->aux_ch_ctl_reg = xelpdp_aux_ctl_reg; ··· 780 764 drm_dp_aux_init(&intel_dp->aux); 781 765 782 766 /* Failure to allocate our preferred name is not critical */ 783 - if (DISPLAY_VER(dev_priv) >= 13 && aux_ch >= AUX_CH_D_XELPD) 784 - intel_dp->aux.name = kasprintf(GFP_KERNEL, "AUX %c/%s", 785 - 
aux_ch_name(aux_ch - AUX_CH_D_XELPD + AUX_CH_D), 786 - encoder->base.name); 787 - else if (DISPLAY_VER(dev_priv) >= 12 && aux_ch >= AUX_CH_USBC1) 788 - intel_dp->aux.name = kasprintf(GFP_KERNEL, "AUX USBC%c/%s", 789 - aux_ch - AUX_CH_USBC1 + '1', 790 - encoder->base.name); 791 - else 792 - intel_dp->aux.name = kasprintf(GFP_KERNEL, "AUX %c/%s", 793 - aux_ch_name(aux_ch), 794 - encoder->base.name); 767 + intel_dp->aux.name = kasprintf(GFP_KERNEL, "AUX %s/%s", 768 + aux_ch_name(dev_priv, buf, sizeof(buf), aux_ch), 769 + encoder->base.name); 795 770 796 771 intel_dp->aux.transfer = intel_dp_aux_transfer; 797 772 cpu_latency_qos_add_request(&intel_dp->pm_qos, PM_QOS_DEFAULT_VALUE); ··· 826 819 struct intel_encoder *other; 827 820 const char *source; 828 821 enum aux_ch aux_ch; 822 + char buf[AUX_CH_NAME_BUFSIZE]; 829 823 830 824 aux_ch = intel_bios_dp_aux_ch(encoder->devdata); 831 825 source = "VBT"; ··· 844 836 other = get_encoder_by_aux_ch(encoder, aux_ch); 845 837 if (other) { 846 838 drm_dbg_kms(&i915->drm, 847 - "[ENCODER:%d:%s] AUX CH %c already claimed by [ENCODER:%d:%s]\n", 848 - encoder->base.base.id, encoder->base.name, aux_ch_name(aux_ch), 839 + "[ENCODER:%d:%s] AUX CH %s already claimed by [ENCODER:%d:%s]\n", 840 + encoder->base.base.id, encoder->base.name, 841 + aux_ch_name(i915, buf, sizeof(buf), aux_ch), 849 842 other->base.base.id, other->base.name); 850 843 return AUX_CH_NONE; 851 844 } 852 845 853 846 drm_dbg_kms(&i915->drm, 854 - "[ENCODER:%d:%s] Using AUX CH %c (%s)\n", 847 + "[ENCODER:%d:%s] Using AUX CH %s (%s)\n", 855 848 encoder->base.base.id, encoder->base.name, 856 - aux_ch_name(aux_ch), source); 849 + aux_ch_name(i915, buf, sizeof(buf), aux_ch), source); 857 850 858 851 return aux_ch; 859 852 }
+44 -36
drivers/gpu/drm/i915/display/intel_dp_aux_regs.h
··· 13 13 * packet size supported is 20 bytes in each direction, hence the 5 fixed data 14 14 * registers 15 15 */ 16 - #define _DPA_AUX_CH_CTL (DISPLAY_MMIO_BASE(dev_priv) + 0x64010) 17 - #define _DPA_AUX_CH_DATA1 (DISPLAY_MMIO_BASE(dev_priv) + 0x64014) 18 16 19 - #define _DPB_AUX_CH_CTL (DISPLAY_MMIO_BASE(dev_priv) + 0x64110) 20 - #define _DPB_AUX_CH_DATA1 (DISPLAY_MMIO_BASE(dev_priv) + 0x64114) 17 + /* 18 + * Wrapper macro to convert from aux_ch to the index used in some of the 19 + * registers. 20 + */ 21 + #define __xe2lpd_aux_ch_idx(aux_ch) \ 22 + (aux_ch >= AUX_CH_USBC1 ? aux_ch : AUX_CH_USBC4 + 1 + (aux_ch) - AUX_CH_A) 21 23 22 - #define DP_AUX_CH_CTL(aux_ch) _MMIO_PORT(aux_ch, _DPA_AUX_CH_CTL, _DPB_AUX_CH_CTL) 23 - #define DP_AUX_CH_DATA(aux_ch, i) _MMIO(_PORT(aux_ch, _DPA_AUX_CH_DATA1, _DPB_AUX_CH_DATA1) + (i) * 4) /* 5 registers */ 24 - 25 - #define _XELPDP_USBC1_AUX_CH_CTL 0x16F210 26 - #define _XELPDP_USBC2_AUX_CH_CTL 0x16F410 27 - #define _XELPDP_USBC3_AUX_CH_CTL 0x16F610 28 - #define _XELPDP_USBC4_AUX_CH_CTL 0x16F810 29 - 30 - #define XELPDP_DP_AUX_CH_CTL(aux_ch) _MMIO(_PICK(aux_ch, \ 31 - _DPA_AUX_CH_CTL, \ 32 - _DPB_AUX_CH_CTL, \ 33 - 0, /* port/aux_ch C is non-existent */ \ 34 - _XELPDP_USBC1_AUX_CH_CTL, \ 35 - _XELPDP_USBC2_AUX_CH_CTL, \ 36 - _XELPDP_USBC3_AUX_CH_CTL, \ 37 - _XELPDP_USBC4_AUX_CH_CTL)) 38 - 39 - #define _XELPDP_USBC1_AUX_CH_DATA1 0x16F214 40 - #define _XELPDP_USBC2_AUX_CH_DATA1 0x16F414 41 - #define _XELPDP_USBC3_AUX_CH_DATA1 0x16F614 42 - #define _XELPDP_USBC4_AUX_CH_DATA1 0x16F814 43 - 44 - #define XELPDP_DP_AUX_CH_DATA(aux_ch, i) _MMIO(_PICK(aux_ch, \ 45 - _DPA_AUX_CH_DATA1, \ 46 - _DPB_AUX_CH_DATA1, \ 47 - 0, /* port/aux_ch C is non-existent */ \ 48 - _XELPDP_USBC1_AUX_CH_DATA1, \ 49 - _XELPDP_USBC2_AUX_CH_DATA1, \ 50 - _XELPDP_USBC3_AUX_CH_DATA1, \ 51 - _XELPDP_USBC4_AUX_CH_DATA1) + (i) * 4) 52 - 24 + /* TODO: Remove implicit dev_priv */ 25 + #define _DPA_AUX_CH_CTL (DISPLAY_MMIO_BASE(dev_priv) + 0x64010) 26 + #define 
_DPB_AUX_CH_CTL (DISPLAY_MMIO_BASE(dev_priv) + 0x64110) 27 + #define _XELPDP_USBC1_AUX_CH_CTL 0x16f210 28 + #define _XELPDP_USBC2_AUX_CH_CTL 0x16f410 29 + #define DP_AUX_CH_CTL(aux_ch) _MMIO_PORT(aux_ch, _DPA_AUX_CH_CTL, \ 30 + _DPB_AUX_CH_CTL) 31 + #define _XELPDP_DP_AUX_CH_CTL(aux_ch) \ 32 + _MMIO(_PICK_EVEN_2RANGES(aux_ch, AUX_CH_USBC1, \ 33 + _DPA_AUX_CH_CTL, _DPB_AUX_CH_CTL, \ 34 + _XELPDP_USBC1_AUX_CH_CTL, \ 35 + _XELPDP_USBC2_AUX_CH_CTL)) 36 + #define XELPDP_DP_AUX_CH_CTL(i915__, aux_ch) \ 37 + (DISPLAY_VER(i915__) >= 20 ? \ 38 + _XELPDP_DP_AUX_CH_CTL(__xe2lpd_aux_ch_idx(aux_ch)) : \ 39 + _XELPDP_DP_AUX_CH_CTL(aux_ch)) 53 40 #define DP_AUX_CH_CTL_SEND_BUSY REG_BIT(31) 54 41 #define DP_AUX_CH_CTL_DONE REG_BIT(30) 55 42 #define DP_AUX_CH_CTL_INTERRUPT REG_BIT(29) 56 43 #define DP_AUX_CH_CTL_TIME_OUT_ERROR REG_BIT(28) 57 - 58 44 #define DP_AUX_CH_CTL_TIME_OUT_MASK REG_GENMASK(27, 26) 59 45 #define DP_AUX_CH_CTL_TIME_OUT_400us REG_FIELD_PREP(DP_AUX_CH_CTL_TIME_OUT_MASK, 0) 60 46 #define DP_AUX_CH_CTL_TIME_OUT_600us REG_FIELD_PREP(DP_AUX_CH_CTL_TIME_OUT_MASK, 1) ··· 68 82 #define DP_AUX_CH_CTL_FW_SYNC_PULSE_SKL(c) REG_FIELD_PREP(DP_AUX_CH_CTL_FW_SYNC_PULSE_SKL_MASK, (c) - 1) 69 83 #define DP_AUX_CH_CTL_SYNC_PULSE_SKL_MASK REG_GENMASK(4, 0) /* skl+ */ 70 84 #define DP_AUX_CH_CTL_SYNC_PULSE_SKL(c) REG_FIELD_PREP(DP_AUX_CH_CTL_SYNC_PULSE_SKL_MASK, (c) - 1) 85 + 86 + /* TODO: Remove implicit dev_priv */ 87 + #define _DPA_AUX_CH_DATA1 (DISPLAY_MMIO_BASE(dev_priv) + 0x64014) 88 + #define _DPB_AUX_CH_DATA1 (DISPLAY_MMIO_BASE(dev_priv) + 0x64114) 89 + #define _XELPDP_USBC1_AUX_CH_DATA1 0x16f214 90 + #define _XELPDP_USBC2_AUX_CH_DATA1 0x16f414 91 + #define DP_AUX_CH_DATA(aux_ch, i) _MMIO(_PORT(aux_ch, _DPA_AUX_CH_DATA1, \ 92 + _DPB_AUX_CH_DATA1) + (i) * 4) /* 5 registers */ 93 + #define _XELPDP_DP_AUX_CH_DATA(aux_ch, i) \ 94 + _MMIO(_PICK_EVEN_2RANGES(aux_ch, AUX_CH_USBC1, \ 95 + _DPA_AUX_CH_DATA1, _DPB_AUX_CH_DATA1, \ 96 + _XELPDP_USBC1_AUX_CH_DATA1, \ 97 + 
_XELPDP_USBC2_AUX_CH_DATA1) + (i) * 4) /* 5 registers */ 98 + #define XELPDP_DP_AUX_CH_DATA(i915__, aux_ch, i) \ 99 + (DISPLAY_VER(i915__) >= 20 ? \ 100 + _XELPDP_DP_AUX_CH_DATA(__xe2lpd_aux_ch_idx(aux_ch), i) : \ 101 + _XELPDP_DP_AUX_CH_DATA(aux_ch, i)) 102 + 103 + /* PICA Power Well Control */ 104 + #define XE2LPD_PICA_PW_CTL _MMIO(0x16fe04) 105 + #define XE2LPD_PICA_CTL_POWER_REQUEST REG_BIT(31) 106 + #define XE2LPD_PICA_CTL_POWER_STATUS REG_BIT(30) 71 107 72 108 #endif /* __INTEL_DP_AUX_REGS_H__ */
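The `__xe2lpd_aux_ch_idx()` wrapper introduced above remaps the aux channel enum so that the USBC channels keep their index while channels A/B land after USBC4, matching the Xe2 register ordering used by the `_PICK_EVEN_2RANGES()` lookups. A standalone sketch of the remap; the enum values are an assumption mirroring i915's `enum aux_ch` (A/B first, with `AUX_CH_USBC1` aliased onto `AUX_CH_D`):

```c
#include <assert.h>

/*
 * Assumed enum layout (mirrors i915's enum aux_ch):
 * A/B first, AUX_CH_USBC1 aliased to AUX_CH_D.
 */
enum aux_ch {
	AUX_CH_A,
	AUX_CH_B,
	AUX_CH_C,
	AUX_CH_D,
	AUX_CH_E,
	AUX_CH_F,
	AUX_CH_G,
	AUX_CH_USBC1 = AUX_CH_D,	/* 3 */
	AUX_CH_USBC2,			/* 4 */
	AUX_CH_USBC3,			/* 5 */
	AUX_CH_USBC4,			/* 6 */
};

/* USBC channels keep their index; A/B are pushed past USBC4 */
static int xe2lpd_aux_ch_idx(enum aux_ch aux_ch)
{
	return aux_ch >= AUX_CH_USBC1 ? aux_ch
				      : AUX_CH_USBC4 + 1 + aux_ch - AUX_CH_A;
}
```

With those values, USBC1..USBC4 stay at indices 3..6 while A and B map to 7 and 8, which is what lets `XELPDP_DP_AUX_CH_CTL()` reuse one register-picking macro for both orderings.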
+58 -29
drivers/gpu/drm/i915/display/intel_dp_hdcp.c
··· 330 330 0, 0 }, 331 331 }; 332 332 333 + static struct drm_dp_aux * 334 + intel_dp_hdcp_get_aux(struct intel_connector *connector) 335 + { 336 + struct intel_digital_port *dig_port = intel_attached_dig_port(connector); 337 + 338 + if (intel_encoder_is_mst(connector->encoder)) 339 + return &connector->port->aux; 340 + else 341 + return &dig_port->dp.aux; 342 + } 343 + 333 344 static int 334 - intel_dp_hdcp2_read_rx_status(struct intel_digital_port *dig_port, 345 + intel_dp_hdcp2_read_rx_status(struct intel_connector *connector, 335 346 u8 *rx_status) 336 347 { 337 - struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev); 348 + struct drm_i915_private *i915 = to_i915(connector->base.dev); 349 + struct drm_dp_aux *aux = intel_dp_hdcp_get_aux(connector); 338 350 ssize_t ret; 339 351 340 - ret = drm_dp_dpcd_read(&dig_port->dp.aux, 352 + ret = drm_dp_dpcd_read(aux, 341 353 DP_HDCP_2_2_REG_RXSTATUS_OFFSET, rx_status, 342 354 HDCP_2_2_DP_RXSTATUS_LEN); 343 355 if (ret != HDCP_2_2_DP_RXSTATUS_LEN) { ··· 362 350 } 363 351 364 352 static 365 - int hdcp2_detect_msg_availability(struct intel_digital_port *dig_port, 353 + int hdcp2_detect_msg_availability(struct intel_connector *connector, 366 354 u8 msg_id, bool *msg_ready) 367 355 { 368 356 u8 rx_status; 369 357 int ret; 370 358 371 359 *msg_ready = false; 372 - ret = intel_dp_hdcp2_read_rx_status(dig_port, &rx_status); 360 + ret = intel_dp_hdcp2_read_rx_status(connector, &rx_status); 373 361 if (ret < 0) 374 362 return ret; 375 363 ··· 395 383 } 396 384 397 385 static ssize_t 398 - intel_dp_hdcp2_wait_for_msg(struct intel_digital_port *dig_port, 386 + intel_dp_hdcp2_wait_for_msg(struct intel_connector *connector, 399 387 const struct hdcp2_dp_msg_data *hdcp2_msg_data) 400 388 { 401 - struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev); 402 - struct intel_dp *dp = &dig_port->dp; 403 - struct intel_hdcp *hdcp = &dp->attached_connector->hdcp; 389 + struct drm_i915_private *i915 = 
to_i915(connector->base.dev); 390 + struct intel_hdcp *hdcp = &connector->hdcp; 404 391 u8 msg_id = hdcp2_msg_data->msg_id; 405 392 int ret, timeout; 406 393 bool msg_ready = false; ··· 422 411 * the timeout at wait for CP_IRQ. 423 412 */ 424 413 intel_dp_hdcp_wait_for_cp_irq(hdcp, timeout); 425 - ret = hdcp2_detect_msg_availability(dig_port, 426 - msg_id, &msg_ready); 414 + ret = hdcp2_detect_msg_availability(connector, msg_id, 415 + &msg_ready); 427 416 if (!msg_ready) 428 417 ret = -ETIMEDOUT; 429 418 } ··· 448 437 } 449 438 450 439 static 451 - int intel_dp_hdcp2_write_msg(struct intel_digital_port *dig_port, 440 + int intel_dp_hdcp2_write_msg(struct intel_connector *connector, 452 441 void *buf, size_t size) 453 442 { 454 443 unsigned int offset; 455 444 u8 *byte = buf; 456 445 ssize_t ret, bytes_to_write, len; 457 446 const struct hdcp2_dp_msg_data *hdcp2_msg_data; 447 + struct drm_dp_aux *aux; 458 448 459 449 hdcp2_msg_data = get_hdcp2_dp_msg_data(*byte); 460 450 if (!hdcp2_msg_data) 461 451 return -EINVAL; 462 452 463 453 offset = hdcp2_msg_data->offset; 454 + 455 + aux = intel_dp_hdcp_get_aux(connector); 464 456 465 457 /* No msg_id in DP HDCP2.2 msgs */ 466 458 bytes_to_write = size - 1; ··· 473 459 len = bytes_to_write > DP_AUX_MAX_PAYLOAD_BYTES ? 
474 460 DP_AUX_MAX_PAYLOAD_BYTES : bytes_to_write; 475 461 476 - ret = drm_dp_dpcd_write(&dig_port->dp.aux, 462 + ret = drm_dp_dpcd_write(aux, 477 463 offset, (void *)byte, len); 478 464 if (ret < 0) 479 465 return ret; ··· 487 473 } 488 474 489 475 static 490 - ssize_t get_receiver_id_list_rx_info(struct intel_digital_port *dig_port, u32 *dev_cnt, u8 *byte) 476 + ssize_t get_receiver_id_list_rx_info(struct intel_connector *connector, 477 + u32 *dev_cnt, u8 *byte) 491 478 { 479 + struct drm_dp_aux *aux = intel_dp_hdcp_get_aux(connector); 492 480 ssize_t ret; 493 481 u8 *rx_info = byte; 494 482 495 - ret = drm_dp_dpcd_read(&dig_port->dp.aux, 483 + ret = drm_dp_dpcd_read(aux, 496 484 DP_HDCP_2_2_REG_RXINFO_OFFSET, 497 485 (void *)rx_info, HDCP_2_2_RXINFO_LEN); 498 486 if (ret != HDCP_2_2_RXINFO_LEN) ··· 510 494 } 511 495 512 496 static 513 - int intel_dp_hdcp2_read_msg(struct intel_digital_port *dig_port, 497 + int intel_dp_hdcp2_read_msg(struct intel_connector *connector, 514 498 u8 msg_id, void *buf, size_t size) 515 499 { 500 + struct intel_digital_port *dig_port = intel_attached_dig_port(connector); 516 501 struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev); 517 - struct intel_dp *dp = &dig_port->dp; 518 - struct intel_hdcp *hdcp = &dp->attached_connector->hdcp; 502 + struct intel_hdcp *hdcp = &connector->hdcp; 503 + struct drm_dp_aux *aux; 519 504 unsigned int offset; 520 505 u8 *byte = buf; 521 506 ssize_t ret, bytes_to_recv, len; ··· 530 513 return -EINVAL; 531 514 offset = hdcp2_msg_data->offset; 532 515 533 - ret = intel_dp_hdcp2_wait_for_msg(dig_port, hdcp2_msg_data); 516 + aux = intel_dp_hdcp_get_aux(connector); 517 + 518 + ret = intel_dp_hdcp2_wait_for_msg(connector, hdcp2_msg_data); 534 519 if (ret < 0) 535 520 return ret; 536 521 ··· 542 523 byte++; 543 524 544 525 if (msg_id == HDCP_2_2_REP_SEND_RECVID_LIST) { 545 - ret = get_receiver_id_list_rx_info(dig_port, &dev_cnt, byte); 526 + ret = get_receiver_id_list_rx_info(connector, &dev_cnt, 
byte); 546 527 if (ret < 0) 547 528 return ret; 548 529 ··· 560 541 DP_AUX_MAX_PAYLOAD_BYTES : bytes_to_recv; 561 542 562 543 /* Entire msg read timeout since initiate of msg read */ 563 - if (bytes_to_recv == size - 1 && hdcp2_msg_data->msg_read_timeout > 0) 564 - msg_end = ktime_add_ms(ktime_get_raw(), 565 - hdcp2_msg_data->msg_read_timeout); 544 + if (bytes_to_recv == size - 1 && hdcp2_msg_data->msg_read_timeout > 0) { 545 + if (intel_encoder_is_mst(connector->encoder)) 546 + msg_end = ktime_add_ms(ktime_get_raw(), 547 + hdcp2_msg_data->msg_read_timeout * 548 + connector->port->parent->num_ports); 549 + else 550 + msg_end = ktime_add_ms(ktime_get_raw(), 551 + hdcp2_msg_data->msg_read_timeout); 552 + } 566 553 567 - ret = drm_dp_dpcd_read(&dig_port->dp.aux, offset, 554 + ret = drm_dp_dpcd_read(aux, offset, 568 555 (void *)byte, len); 569 556 if (ret < 0) { 570 557 drm_dbg_kms(&i915->drm, "msg_id %d, ret %zd\n", ··· 599 574 } 600 575 601 576 static 602 - int intel_dp_hdcp2_config_stream_type(struct intel_digital_port *dig_port, 577 + int intel_dp_hdcp2_config_stream_type(struct intel_connector *connector, 603 578 bool is_repeater, u8 content_type) 604 579 { 605 580 int ret; ··· 618 593 stream_type_msg.msg_id = HDCP_2_2_ERRATA_DP_STREAM_TYPE; 619 594 stream_type_msg.stream_type = content_type; 620 595 621 - ret = intel_dp_hdcp2_write_msg(dig_port, &stream_type_msg, 596 + ret = intel_dp_hdcp2_write_msg(connector, &stream_type_msg, 622 597 sizeof(stream_type_msg)); 623 598 624 599 return ret < 0 ? 
ret : 0; ··· 632 607 u8 rx_status; 633 608 int ret; 634 609 635 - ret = intel_dp_hdcp2_read_rx_status(dig_port, &rx_status); 610 + ret = intel_dp_hdcp2_read_rx_status(connector, 611 + &rx_status); 636 612 if (ret) 637 613 return ret; 638 614 ··· 648 622 } 649 623 650 624 static 651 - int intel_dp_hdcp2_capable(struct intel_digital_port *dig_port, 625 + int intel_dp_hdcp2_capable(struct intel_connector *connector, 652 626 bool *capable) 653 627 { 628 + struct drm_dp_aux *aux; 654 629 u8 rx_caps[3]; 655 630 int ret; 656 631 632 + aux = intel_dp_hdcp_get_aux(connector); 633 + 657 634 *capable = false; 658 - ret = drm_dp_dpcd_read(&dig_port->dp.aux, 635 + ret = drm_dp_dpcd_read(aux, 659 636 DP_HDCP_2_2_REG_RX_CAPS_OFFSET, 660 637 rx_caps, HDCP_2_2_RXCAPS_LEN); 661 638 if (ret != HDCP_2_2_RXCAPS_LEN)
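Two related changes run through the HDCP hunks above: `intel_dp_hdcp_get_aux()` picks the remote MST port's aux channel instead of the digital port's, and the per-message read timeout is scaled by the branch device's port count, since the remote path serves multiple streams. A minimal sketch of that timeout scaling, using plain-int stand-ins rather than the kernel structs:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Sketch of the MST-aware HDCP 2.2 message-read timeout from the diff:
 * on MST the timeout is multiplied by the number of ports on the
 * branch device (connector->port->parent->num_ports in the kernel).
 * Arguments here are illustrative stand-ins, not the kernel types.
 */
static int hdcp2_msg_read_timeout_ms(int msg_read_timeout,
				     bool is_mst, int branch_num_ports)
{
	if (is_mst)
		return msg_read_timeout * branch_num_ports;
	return msg_read_timeout;
}
```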
+97 -59
drivers/gpu/drm/i915/display/intel_dp_mst.c
··· 155 155 const struct drm_display_mode *adjusted_mode = 156 156 &crtc_state->hw.adjusted_mode; 157 157 int slots = -EINVAL; 158 + int link_bpp; 158 159 159 - slots = intel_dp_mst_find_vcpi_slots_for_bpp(encoder, crtc_state, limits->max_bpp, 160 - limits->min_bpp, limits, 160 + /* 161 + * FIXME: allocate the BW according to link_bpp, which in the case of 162 + * YUV420 is only half of the pipe bpp value. 163 + */ 164 + slots = intel_dp_mst_find_vcpi_slots_for_bpp(encoder, crtc_state, 165 + to_bpp_int(limits->link.max_bpp_x16), 166 + to_bpp_int(limits->link.min_bpp_x16), 167 + limits, 161 168 conn_state, 2 * 3, false); 162 169 163 170 if (slots < 0) 164 171 return slots; 165 172 166 - intel_link_compute_m_n(crtc_state->pipe_bpp, 173 + link_bpp = intel_dp_output_bpp(crtc_state->output_format, crtc_state->pipe_bpp); 174 + 175 + intel_link_compute_m_n(link_bpp, 167 176 crtc_state->lane_count, 168 177 adjusted_mode->crtc_clock, 169 178 crtc_state->port_clock, ··· 209 200 else 210 201 dsc_max_bpc = min_t(u8, 10, conn_state->max_requested_bpc); 211 202 212 - max_bpp = min_t(u8, dsc_max_bpc * 3, limits->max_bpp); 213 - min_bpp = limits->min_bpp; 203 + max_bpp = min_t(u8, dsc_max_bpc * 3, limits->pipe.max_bpp); 204 + min_bpp = limits->pipe.min_bpp; 214 205 215 206 num_bpc = drm_dp_dsc_sink_supported_input_bpcs(intel_dp->dsc_dpcd, 216 207 dsc_bpc); ··· 236 227 237 228 if (max_bpp > sink_max_bpp) 238 229 max_bpp = sink_max_bpp; 230 + 231 + min_bpp = max(min_bpp, to_bpp_int_roundup(limits->link.min_bpp_x16)); 232 + max_bpp = min(max_bpp, to_bpp_int(limits->link.max_bpp_x16)); 239 233 240 234 slots = intel_dp_mst_find_vcpi_slots_for_bpp(encoder, crtc_state, max_bpp, 241 235 min_bpp, limits, ··· 302 290 return 0; 303 291 } 304 292 305 - static bool intel_dp_mst_has_audio(const struct drm_connector_state *conn_state) 293 + static bool 294 + intel_dp_mst_compute_config_limits(struct intel_dp *intel_dp, 295 + struct intel_crtc_state *crtc_state, 296 + bool dsc, 297 + struct 
link_config_limits *limits) 306 298 { 307 - const struct intel_digital_connector_state *intel_conn_state = 308 - to_intel_digital_connector_state(conn_state); 309 - struct intel_connector *connector = 310 - to_intel_connector(conn_state->connector); 299 + /* 300 + * for MST we always configure max link bw - the spec doesn't 301 + * seem to suggest we should do otherwise. 302 + */ 303 + limits->min_rate = limits->max_rate = 304 + intel_dp_max_link_rate(intel_dp); 311 305 312 - if (intel_conn_state->force_audio == HDMI_AUDIO_AUTO) 313 - return connector->base.display_info.has_audio; 314 - else 315 - return intel_conn_state->force_audio == HDMI_AUDIO_ON; 306 + limits->min_lane_count = limits->max_lane_count = 307 + intel_dp_max_lane_count(intel_dp); 308 + 309 + limits->pipe.min_bpp = intel_dp_min_bpp(crtc_state->output_format); 310 + /* 311 + * FIXME: If all the streams can't fit into the link with 312 + * their current pipe_bpp we should reduce pipe_bpp across 313 + * the board until things start to fit. Until then we 314 + * limit to <= 8bpc since that's what was hardcoded for all 315 + * MST streams previously. This hack should be removed once 316 + * we have the proper retry logic in place. 
317 + */ 318 + limits->pipe.max_bpp = min(crtc_state->pipe_bpp, 24); 319 + 320 + intel_dp_adjust_compliance_config(intel_dp, crtc_state, limits); 321 + 322 + return intel_dp_compute_config_link_bpp_limits(intel_dp, 323 + crtc_state, 324 + dsc, 325 + limits); 316 326 } 317 327 318 328 static int intel_dp_mst_compute_config(struct intel_encoder *encoder, ··· 347 313 const struct drm_display_mode *adjusted_mode = 348 314 &pipe_config->hw.adjusted_mode; 349 315 struct link_config_limits limits; 350 - int ret; 316 + bool dsc_needed; 317 + int ret = 0; 351 318 352 319 if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN) 353 320 return -EINVAL; ··· 357 322 pipe_config->output_format = INTEL_OUTPUT_FORMAT_RGB; 358 323 pipe_config->has_pch_encoder = false; 359 324 360 - pipe_config->has_audio = 361 - intel_dp_mst_has_audio(conn_state) && 362 - intel_audio_compute_config(encoder, pipe_config, conn_state); 325 + dsc_needed = intel_dp->force_dsc_en || 326 + !intel_dp_mst_compute_config_limits(intel_dp, 327 + pipe_config, 328 + false, 329 + &limits); 363 330 364 - /* 365 - * for MST we always configure max link bw - the spec doesn't 366 - * seem to suggest we should do otherwise. 367 - */ 368 - limits.min_rate = 369 - limits.max_rate = intel_dp_max_link_rate(intel_dp); 331 + if (!dsc_needed) { 332 + ret = intel_dp_mst_compute_link_config(encoder, pipe_config, 333 + conn_state, &limits); 370 334 371 - limits.min_lane_count = 372 - limits.max_lane_count = intel_dp_max_lane_count(intel_dp); 335 + if (ret == -EDEADLK) 336 + return ret; 373 337 374 - limits.min_bpp = intel_dp_min_bpp(pipe_config->output_format); 375 - /* 376 - * FIXME: If all the streams can't fit into the link with 377 - * their current pipe_bpp we should reduce pipe_bpp across 378 - * the board until things start to fit. Until then we 379 - * limit to <= 8bpc since that's what was hardcoded for all 380 - * MST streams previously. This hack should be removed once 381 - * we have the proper retry logic in place. 
382 - */ 383 - limits.max_bpp = min(pipe_config->pipe_bpp, 24); 384 - 385 - intel_dp_adjust_compliance_config(intel_dp, pipe_config, &limits); 386 - 387 - ret = intel_dp_mst_compute_link_config(encoder, pipe_config, 388 - conn_state, &limits); 389 - 390 - if (ret == -EDEADLK) 391 - return ret; 338 + if (ret) 339 + dsc_needed = true; 340 + } 392 341 393 342 /* enable compression if the mode doesn't fit available BW */ 394 - drm_dbg_kms(&dev_priv->drm, "Force DSC en = %d\n", intel_dp->force_dsc_en); 395 - if (ret || intel_dp->force_dsc_en) { 343 + if (dsc_needed) { 344 + drm_dbg_kms(&dev_priv->drm, "Try DSC (fallback=%s, force=%s)\n", 345 + str_yes_no(ret), 346 + str_yes_no(intel_dp->force_dsc_en)); 347 + 348 + if (!intel_dp_mst_compute_config_limits(intel_dp, 349 + pipe_config, 350 + true, 351 + &limits)) 352 + return -EINVAL; 353 + 354 + /* 355 + * FIXME: As bpc is hardcoded to 8, as mentioned above, 356 + * WARN and ignore the debug flag force_dsc_bpc for now. 357 + */ 358 + drm_WARN(&dev_priv->drm, intel_dp->force_dsc_bpc, "Cannot Force BPC for MST\n"); 396 359 /* 397 360 * Try to get at least some timeslots and then see, if 398 361 * we can fit there with DSC. 
··· 420 387 if (IS_GEMINILAKE(dev_priv) || IS_BROXTON(dev_priv)) 421 388 pipe_config->lane_lat_optim_mask = 422 389 bxt_ddi_phy_calc_lane_lat_optim_mask(pipe_config->lane_count); 390 + 391 + intel_dp_audio_compute_config(encoder, pipe_config, conn_state); 423 392 424 393 intel_ddi_compute_min_voltage_level(dev_priv, pipe_config); 425 394 ··· 768 733 ret = drm_dp_add_payload_part1(&intel_dp->mst_mgr, mst_state, 769 734 drm_atomic_get_mst_payload_state(mst_state, connector->port)); 770 735 if (ret < 0) 771 - drm_err(&dev_priv->drm, "Failed to create MST payload for %s: %d\n", 772 - connector->base.name, ret); 736 + drm_dbg_kms(&dev_priv->drm, "Failed to create MST payload for %s: %d\n", 737 + connector->base.name, ret); 773 738 774 739 /* 775 740 * Before Gen 12 this is not done as part of ··· 832 797 else if (DISPLAY_VER(dev_priv) >= 12 && pipe_config->fec_enable) 833 798 intel_de_rmw(dev_priv, CHICKEN_TRANS(trans), 0, 834 799 FECSTALL_DIS_DPTSTREAM_DPTTG); 800 + 801 + intel_audio_sdp_split_update(pipe_config); 835 802 836 803 intel_enable_transcoder(pipe_config); 837 804 ··· 955 918 int max_rate, mode_rate, max_lanes, max_link_clock; 956 919 int ret; 957 920 bool dsc = false, bigjoiner = false; 958 - u16 dsc_max_output_bpp = 0; 921 + u16 dsc_max_compressed_bpp = 0; 959 922 u8 dsc_slice_count = 0; 960 923 int target_clock = mode->clock; 961 924 ··· 1006 969 * TBD pass the connector BPC, 1007 970 * for now U8_MAX so that max BPC on that platform would be picked 1008 971 */ 1009 - int pipe_bpp = intel_dp_dsc_compute_bpp(intel_dp, U8_MAX); 972 + int pipe_bpp = intel_dp_dsc_compute_max_bpp(intel_dp, U8_MAX); 1010 973 1011 974 if (drm_dp_sink_supports_fec(intel_dp->fec_capable)) { 1012 - dsc_max_output_bpp = 1013 - intel_dp_dsc_get_output_bpp(dev_priv, 1014 - max_link_clock, 1015 - max_lanes, 1016 - target_clock, 1017 - mode->hdisplay, 1018 - bigjoiner, 1019 - pipe_bpp, 64) >> 4; 975 + dsc_max_compressed_bpp = 976 + intel_dp_dsc_get_max_compressed_bpp(dev_priv, 977 + 
max_link_clock, 978 + max_lanes, 979 + target_clock, 980 + mode->hdisplay, 981 + bigjoiner, 982 + INTEL_OUTPUT_FORMAT_RGB, 983 + pipe_bpp, 64); 1020 984 dsc_slice_count = 1021 985 intel_dp_dsc_get_slice_count(intel_dp, 1022 986 target_clock, ··· 1025 987 bigjoiner); 1026 988 } 1027 989 1028 - dsc = dsc_max_output_bpp && dsc_slice_count; 990 + dsc = dsc_max_compressed_bpp && dsc_slice_count; 1029 991 } 1030 992 1031 993 /*
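The MST hunks above switch the link-config limits to `.4` fixed-point bpp fields (`link.min_bpp_x16` / `link.max_bpp_x16`, in 1/16-bpp units so DSC's fractional bpp can be represented) and convert at the boundaries with `to_bpp_int()` / `to_bpp_int_roundup()`. A sketch of those conversion helpers; the names come from the diff but the exact definitions are assumed:

```c
#include <assert.h>

/*
 * Assumed .4 fixed-point bpp helpers (1/16 bpp granularity), as used
 * by the limits->link.*_bpp_x16 fields in the diff.
 */
static int to_bpp_int(int bpp_x16)         { return bpp_x16 >> 4; }
static int to_bpp_int_roundup(int bpp_x16) { return (bpp_x16 + 0xf) >> 4; }
static int to_bpp_x16(int bpp)             { return bpp << 4; }
```

Clamping `min_bpp` with the round-up variant and `max_bpp` with the truncating one, as the DSC path above does, keeps the integer range strictly inside the fractional link limits.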
+35 -19
drivers/gpu/drm/i915/display/intel_dpll.c
··· 314 314 { 315 315 clock->m = clock->m2 + 2; 316 316 clock->p = clock->p1 * clock->p2; 317 - if (WARN_ON(clock->n == 0 || clock->p == 0)) 318 - return 0; 319 - clock->vco = DIV_ROUND_CLOSEST(refclk * clock->m, clock->n); 320 - clock->dot = DIV_ROUND_CLOSEST(clock->vco, clock->p); 317 + 318 + clock->vco = clock->n == 0 ? 0 : 319 + DIV_ROUND_CLOSEST(refclk * clock->m, clock->n); 320 + clock->dot = clock->p == 0 ? 0 : 321 + DIV_ROUND_CLOSEST(clock->vco, clock->p); 321 322 322 323 return clock->dot; 323 324 } ··· 332 331 { 333 332 clock->m = i9xx_dpll_compute_m(clock); 334 333 clock->p = clock->p1 * clock->p2; 335 - if (WARN_ON(clock->n + 2 == 0 || clock->p == 0)) 336 - return 0; 337 - clock->vco = DIV_ROUND_CLOSEST(refclk * clock->m, clock->n + 2); 338 - clock->dot = DIV_ROUND_CLOSEST(clock->vco, clock->p); 334 + 335 + clock->vco = clock->n + 2 == 0 ? 0 : 336 + DIV_ROUND_CLOSEST(refclk * clock->m, clock->n + 2); 337 + clock->dot = clock->p == 0 ? 0 : 338 + DIV_ROUND_CLOSEST(clock->vco, clock->p); 339 339 340 340 return clock->dot; 341 341 } ··· 345 343 { 346 344 clock->m = clock->m1 * clock->m2; 347 345 clock->p = clock->p1 * clock->p2 * 5; 348 - if (WARN_ON(clock->n == 0 || clock->p == 0)) 349 - return 0; 350 - clock->vco = DIV_ROUND_CLOSEST(refclk * clock->m, clock->n); 351 - clock->dot = DIV_ROUND_CLOSEST(clock->vco, clock->p); 346 + 347 + clock->vco = clock->n == 0 ? 0 : 348 + DIV_ROUND_CLOSEST(refclk * clock->m, clock->n); 349 + clock->dot = clock->p == 0 ? 0 : 350 + DIV_ROUND_CLOSEST(clock->vco, clock->p); 352 351 353 352 return clock->dot; 354 353 } ··· 358 355 { 359 356 clock->m = clock->m1 * clock->m2; 360 357 clock->p = clock->p1 * clock->p2 * 5; 361 - if (WARN_ON(clock->n == 0 || clock->p == 0)) 362 - return 0; 363 - clock->vco = DIV_ROUND_CLOSEST_ULL(mul_u32_u32(refclk, clock->m), 364 - clock->n << 22); 365 - clock->dot = DIV_ROUND_CLOSEST(clock->vco, clock->p); 358 + 359 + clock->vco = clock->n == 0 ? 
0 : 360 + DIV_ROUND_CLOSEST_ULL(mul_u32_u32(refclk, clock->m), clock->n << 22); 361 + clock->dot = clock->p == 0 ? 0 : 362 + DIV_ROUND_CLOSEST(clock->vco, clock->p); 366 363 367 364 return clock->dot; 368 365 } ··· 1182 1179 refclk, NULL, &crtc_state->dpll)) 1183 1180 return -EINVAL; 1184 1181 1182 + i9xx_calc_dpll_params(refclk, &crtc_state->dpll); 1183 + 1185 1184 ilk_compute_dpll(crtc_state, &crtc_state->dpll, 1186 1185 &crtc_state->dpll); 1187 1186 ··· 1258 1253 refclk, NULL, &crtc_state->dpll)) 1259 1254 return -EINVAL; 1260 1255 1256 + chv_calc_dpll_params(refclk, &crtc_state->dpll); 1257 + 1261 1258 chv_compute_dpll(crtc_state); 1262 1259 1263 1260 /* FIXME this is a mess */ ··· 1282 1275 1283 1276 if (!crtc_state->clock_set && 1284 1277 !vlv_find_best_dpll(limit, crtc_state, crtc_state->port_clock, 1285 - refclk, NULL, &crtc_state->dpll)) { 1278 + refclk, NULL, &crtc_state->dpll)) 1286 1279 return -EINVAL; 1287 - } 1280 + 1281 + vlv_calc_dpll_params(refclk, &crtc_state->dpll); 1288 1282 1289 1283 vlv_compute_dpll(crtc_state); 1290 1284 ··· 1335 1327 refclk, NULL, &crtc_state->dpll)) 1336 1328 return -EINVAL; 1337 1329 1330 + i9xx_calc_dpll_params(refclk, &crtc_state->dpll); 1331 + 1338 1332 i9xx_compute_dpll(crtc_state, &crtc_state->dpll, 1339 1333 &crtc_state->dpll); 1340 1334 ··· 1375 1365 refclk, NULL, &crtc_state->dpll)) 1376 1366 return -EINVAL; 1377 1367 1368 + pnv_calc_dpll_params(refclk, &crtc_state->dpll); 1369 + 1378 1370 i9xx_compute_dpll(crtc_state, &crtc_state->dpll, 1379 1371 &crtc_state->dpll); 1380 1372 ··· 1412 1400 !i9xx_find_best_dpll(limit, crtc_state, crtc_state->port_clock, 1413 1401 refclk, NULL, &crtc_state->dpll)) 1414 1402 return -EINVAL; 1403 + 1404 + i9xx_calc_dpll_params(refclk, &crtc_state->dpll); 1415 1405 1416 1406 i9xx_compute_dpll(crtc_state, &crtc_state->dpll, 1417 1407 &crtc_state->dpll); ··· 1454 1440 !i9xx_find_best_dpll(limit, crtc_state, crtc_state->port_clock, 1455 1441 refclk, NULL, &crtc_state->dpll)) 1456 1442 
return -EINVAL; 1443 + 1444 + i9xx_calc_dpll_params(refclk, &crtc_state->dpll); 1457 1445 1458 1446 i8xx_compute_dpll(crtc_state, &crtc_state->dpll, 1459 1447 &crtc_state->dpll);
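The intel_dpll.c hunks above replace the `WARN_ON(... == 0)`-and-bail pattern with guarded divisions, so that zero dividers (which state readout of a disabled PLL can legitimately produce) yield a clock of 0 instead of a warning. The shared pattern, as a standalone sketch:

```c
#include <assert.h>

/* DIV_ROUND_CLOSEST as in linux/math.h, for positive operands */
#define DIV_ROUND_CLOSEST(x, d) (((x) + ((d) / 2)) / (d))

struct dpll { int m, n, p, vco, dot; };

/*
 * Guarded-division form from the diff: n == 0 or p == 0 no longer
 * warns, it just produces a 0 clock.
 */
static int calc_dpll_params(int refclk, struct dpll *clock)
{
	clock->vco = clock->n == 0 ? 0 :
		DIV_ROUND_CLOSEST(refclk * clock->m, clock->n);
	clock->dot = clock->p == 0 ? 0 :
		DIV_ROUND_CLOSEST(clock->vco, clock->p);

	return clock->dot;
}

/* convenience wrapper for exercising the calculation */
static int dot_khz(int refclk, int m, int n, int p)
{
	struct dpll c = { .m = m, .n = n, .p = p };

	return calc_dpll_params(refclk, &c);
}
```

The companion change — calling `*_calc_dpll_params()` right after `*_find_best_dpll()` in each `*_crtc_compute_clock()` — fills in `vco`/`dot` on the compute path so the state checker can compare them against readout.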
+1 -1
drivers/gpu/drm/i915/display/intel_dpt.c
··· 29 29 i915_vm_to_dpt(struct i915_address_space *vm) 30 30 { 31 31 BUILD_BUG_ON(offsetof(struct i915_dpt, vm)); 32 - GEM_BUG_ON(!i915_is_dpt(vm)); 32 + drm_WARN_ON(&vm->i915->drm, !i915_is_dpt(vm)); 33 33 return container_of(vm, struct i915_dpt, vm); 34 34 } 35 35
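`i915_vm_to_dpt()` above is the classic `container_of()` upcast: `struct i915_dpt` embeds the `vm` as its first member (the `BUILD_BUG_ON(offsetof(...))` enforces offset zero), so a pointer to the member can be converted back to the containing object. A userspace sketch of the pattern with hypothetical struct names:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified container_of(); the kernel version adds type checking. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Hypothetical stand-ins for i915_address_space / i915_dpt */
struct vm  { int dummy; };
struct dpt { struct vm vm; int extra; };

static struct dpt *vm_to_dpt(struct vm *vm)
{
	return container_of(vm, struct dpt, vm);
}
```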
+1
drivers/gpu/drm/i915/display/intel_drrs.c
··· 9 9 #include "intel_de.h" 10 10 #include "intel_display_types.h" 11 11 #include "intel_drrs.h" 12 + #include "intel_frontbuffer.h" 12 13 #include "intel_panel.h" 13 14 14 15 /**
+188 -29
drivers/gpu/drm/i915/display/intel_dsb.c
··· 7 7 #include "gem/i915_gem_internal.h" 8 8 9 9 #include "i915_drv.h" 10 + #include "i915_irq.h" 10 11 #include "i915_reg.h" 12 + #include "intel_crtc.h" 11 13 #include "intel_de.h" 12 14 #include "intel_display_types.h" 13 15 #include "intel_dsb.h" 14 16 #include "intel_dsb_regs.h" 17 + #include "intel_vblank.h" 18 + #include "intel_vrr.h" 19 + #include "skl_watermark.h" 15 20 16 21 struct i915_vma; 17 22 ··· 52 47 * register. 53 48 */ 54 49 unsigned int ins_start_offset; 50 + 51 + int dewake_scanline; 55 52 }; 56 53 57 54 /** ··· 77 70 #define DSB_OPCODE_SHIFT 24 78 71 #define DSB_OPCODE_NOOP 0x0 79 72 #define DSB_OPCODE_MMIO_WRITE 0x1 73 + #define DSB_BYTE_EN 0xf 74 + #define DSB_BYTE_EN_SHIFT 20 75 + #define DSB_REG_VALUE_MASK 0xfffff 80 76 #define DSB_OPCODE_WAIT_USEC 0x2 81 - #define DSB_OPCODE_WAIT_LINES 0x3 77 + #define DSB_OPCODE_WAIT_SCANLINE 0x3 82 78 #define DSB_OPCODE_WAIT_VBLANKS 0x4 83 79 #define DSB_OPCODE_WAIT_DSL_IN 0x5 84 80 #define DSB_OPCODE_WAIT_DSL_OUT 0x6 81 + #define DSB_SCANLINE_UPPER_SHIFT 20 82 + #define DSB_SCANLINE_LOWER_SHIFT 0 85 83 #define DSB_OPCODE_INTERRUPT 0x7 86 84 #define DSB_OPCODE_INDEXED_WRITE 0x9 85 + /* see DSB_REG_VALUE_MASK */ 87 86 #define DSB_OPCODE_POLL 0xA 88 - #define DSB_BYTE_EN 0xF 89 - #define DSB_BYTE_EN_SHIFT 20 90 - #define DSB_REG_VALUE_MASK 0xfffff 87 + /* see DSB_REG_VALUE_MASK */ 91 88 92 89 static bool assert_dsb_has_room(struct intel_dsb *dsb) 93 90 { ··· 104 93 crtc->base.base.id, crtc->base.name, dsb->id); 105 94 } 106 95 96 + static void intel_dsb_dump(struct intel_dsb *dsb) 97 + { 98 + struct intel_crtc *crtc = dsb->crtc; 99 + struct drm_i915_private *i915 = to_i915(crtc->base.dev); 100 + const u32 *buf = dsb->cmd_buf; 101 + int i; 102 + 103 + drm_dbg_kms(&i915->drm, "[CRTC:%d:%s] DSB %d commands {\n", 104 + crtc->base.base.id, crtc->base.name, dsb->id); 105 + for (i = 0; i < ALIGN(dsb->free_pos, 64 / 4); i += 4) 106 + drm_dbg_kms(&i915->drm, 107 + " 0x%08x: 0x%08x 0x%08x 0x%08x 0x%08x\n", 108 + 
i * 4, buf[i], buf[i+1], buf[i+2], buf[i+3]); 109 + drm_dbg_kms(&i915->drm, "}\n"); 110 + } 111 + 107 112 static bool is_dsb_busy(struct drm_i915_private *i915, enum pipe pipe, 108 113 enum dsb_id id) 109 114 { 110 - return intel_de_read(i915, DSB_CTRL(pipe, id)) & DSB_STATUS_BUSY; 115 + return intel_de_read_fw(i915, DSB_CTRL(pipe, id)) & DSB_STATUS_BUSY; 111 116 } 112 117 113 118 static void intel_dsb_emit(struct intel_dsb *dsb, u32 ldw, u32 udw) ··· 148 121 const u32 *buf = dsb->cmd_buf; 149 122 u32 prev_opcode, prev_reg; 150 123 151 - prev_opcode = buf[dsb->ins_start_offset + 1] >> DSB_OPCODE_SHIFT; 124 + /* 125 + * Nothing emitted yet? Must check before looking 126 + * at the actual data since i915_gem_object_create_internal() 127 + * does *not* give you zeroed memory! 128 + */ 129 + if (dsb->free_pos == 0) 130 + return false; 131 + 132 + prev_opcode = buf[dsb->ins_start_offset + 1] & ~DSB_REG_VALUE_MASK; 152 133 prev_reg = buf[dsb->ins_start_offset + 1] & DSB_REG_VALUE_MASK; 153 134 154 135 return prev_opcode == opcode && prev_reg == i915_mmio_reg_offset(reg); ··· 164 129 165 130 static bool intel_dsb_prev_ins_is_mmio_write(struct intel_dsb *dsb, i915_reg_t reg) 166 131 { 167 - return intel_dsb_prev_ins_is_write(dsb, DSB_OPCODE_MMIO_WRITE, reg); 132 + /* only full byte-enables can be converted to indexed writes */ 133 + return intel_dsb_prev_ins_is_write(dsb, 134 + DSB_OPCODE_MMIO_WRITE << DSB_OPCODE_SHIFT | 135 + DSB_BYTE_EN << DSB_BYTE_EN_SHIFT, 136 + reg); 168 137 } 169 138 170 139 static bool intel_dsb_prev_ins_is_indexed_write(struct intel_dsb *dsb, i915_reg_t reg) 171 140 { 172 - return intel_dsb_prev_ins_is_write(dsb, DSB_OPCODE_INDEXED_WRITE, reg); 141 + return intel_dsb_prev_ins_is_write(dsb, 142 + DSB_OPCODE_INDEXED_WRITE << DSB_OPCODE_SHIFT, 143 + reg); 173 144 } 174 145 175 146 /** ··· 241 200 } 242 201 } 243 202 203 + static u32 intel_dsb_mask_to_byte_en(u32 mask) 204 + { 205 + return (!!(mask & 0xff000000) << 3 | 206 + !!(mask & 0x00ff0000) << 2 
| 207 + !!(mask & 0x0000ff00) << 1 | 208 + !!(mask & 0x000000ff) << 0); 209 + } 210 + 211 + /* Note: mask implemented via byte enables! */ 212 + void intel_dsb_reg_write_masked(struct intel_dsb *dsb, 213 + i915_reg_t reg, u32 mask, u32 val) 214 + { 215 + intel_dsb_emit(dsb, val, 216 + (DSB_OPCODE_MMIO_WRITE << DSB_OPCODE_SHIFT) | 217 + (intel_dsb_mask_to_byte_en(mask) << DSB_BYTE_EN_SHIFT) | 218 + i915_mmio_reg_offset(reg)); 219 + } 220 + 221 + void intel_dsb_noop(struct intel_dsb *dsb, int count) 222 + { 223 + int i; 224 + 225 + for (i = 0; i < count; i++) 226 + intel_dsb_emit(dsb, 0, 227 + DSB_OPCODE_NOOP << DSB_OPCODE_SHIFT); 228 + } 229 + 230 + void intel_dsb_nonpost_start(struct intel_dsb *dsb) 231 + { 232 + struct intel_crtc *crtc = dsb->crtc; 233 + enum pipe pipe = crtc->pipe; 234 + 235 + intel_dsb_reg_write_masked(dsb, DSB_CTRL(pipe, dsb->id), 236 + DSB_NON_POSTED, DSB_NON_POSTED); 237 + intel_dsb_noop(dsb, 4); 238 + } 239 + 240 + void intel_dsb_nonpost_end(struct intel_dsb *dsb) 241 + { 242 + struct intel_crtc *crtc = dsb->crtc; 243 + enum pipe pipe = crtc->pipe; 244 + 245 + intel_dsb_reg_write_masked(dsb, DSB_CTRL(pipe, dsb->id), 246 + DSB_NON_POSTED, 0); 247 + intel_dsb_noop(dsb, 4); 248 + } 249 + 244 250 static void intel_dsb_align_tail(struct intel_dsb *dsb) 245 251 { 246 252 u32 aligned_tail, tail; ··· 304 216 305 217 void intel_dsb_finish(struct intel_dsb *dsb) 306 218 { 219 + struct intel_crtc *crtc = dsb->crtc; 220 + 221 + /* 222 + * DSB_FORCE_DEWAKE remains active even after DSB is 223 + * disabled, so make sure to clear it (if set during 224 + * intel_dsb_commit()). 225 + */ 226 + intel_dsb_reg_write_masked(dsb, DSB_PMCTRL_2(crtc->pipe, dsb->id), 227 + DSB_FORCE_DEWAKE, 0); 228 + 307 229 intel_dsb_align_tail(dsb); 308 230 } 309 231 310 - /** 311 - * intel_dsb_commit() - Trigger workload execution of DSB. 
312 - * @dsb: DSB context 313 - * @wait_for_vblank: wait for vblank before executing 314 - * 315 - * This function is used to do actual write to hardware using DSB. 316 - */ 317 - void intel_dsb_commit(struct intel_dsb *dsb, bool wait_for_vblank) 232 + static int intel_dsb_dewake_scanline(const struct intel_crtc_state *crtc_state) 233 + { 234 + struct drm_i915_private *i915 = to_i915(crtc_state->uapi.crtc->dev); 235 + const struct drm_display_mode *adjusted_mode = &crtc_state->hw.adjusted_mode; 236 + unsigned int latency = skl_watermark_max_latency(i915); 237 + int vblank_start; 238 + 239 + if (crtc_state->vrr.enable) { 240 + vblank_start = intel_vrr_vmin_vblank_start(crtc_state); 241 + } else { 242 + vblank_start = adjusted_mode->crtc_vblank_start; 243 + 244 + if (adjusted_mode->flags & DRM_MODE_FLAG_INTERLACE) 245 + vblank_start = DIV_ROUND_UP(vblank_start, 2); 246 + } 247 + 248 + return max(0, vblank_start - intel_usecs_to_scanlines(adjusted_mode, latency)); 249 + } 250 + 251 + static void _intel_dsb_commit(struct intel_dsb *dsb, u32 ctrl, 252 + unsigned int dewake_scanline) 318 253 { 319 254 struct intel_crtc *crtc = dsb->crtc; 320 255 struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); ··· 354 243 return; 355 244 } 356 245 357 - intel_de_write(dev_priv, DSB_CTRL(pipe, dsb->id), 358 - (wait_for_vblank ? 
DSB_WAIT_FOR_VBLANK : 0) | 359 - DSB_ENABLE); 360 - intel_de_write(dev_priv, DSB_HEAD(pipe, dsb->id), 361 - i915_ggtt_offset(dsb->vma)); 362 - intel_de_write(dev_priv, DSB_TAIL(pipe, dsb->id), 363 - i915_ggtt_offset(dsb->vma) + tail); 246 + intel_de_write_fw(dev_priv, DSB_CTRL(pipe, dsb->id), 247 + ctrl | DSB_ENABLE); 248 + 249 + intel_de_write_fw(dev_priv, DSB_HEAD(pipe, dsb->id), 250 + i915_ggtt_offset(dsb->vma)); 251 + 252 + if (dewake_scanline >= 0) { 253 + int diff, hw_dewake_scanline; 254 + 255 + hw_dewake_scanline = intel_crtc_scanline_to_hw(crtc, dewake_scanline); 256 + 257 + intel_de_write_fw(dev_priv, DSB_PMCTRL(pipe, dsb->id), 258 + DSB_ENABLE_DEWAKE | 259 + DSB_SCANLINE_FOR_DEWAKE(hw_dewake_scanline)); 260 + 261 + /* 262 + * Force DEwake immediately if we're already past 263 + * or close to racing past the target scanline. 264 + */ 265 + diff = dewake_scanline - intel_get_crtc_scanline(crtc); 266 + intel_de_write_fw(dev_priv, DSB_PMCTRL_2(pipe, dsb->id), 267 + (diff >= 0 && diff < 5 ? DSB_FORCE_DEWAKE : 0) | 268 + DSB_BLOCK_DEWAKE_EXTENSION); 269 + } 270 + 271 + intel_de_write_fw(dev_priv, DSB_TAIL(pipe, dsb->id), 272 + i915_ggtt_offset(dsb->vma) + tail); 273 + } 274 + 275 + /** 276 + * intel_dsb_commit() - Trigger workload execution of DSB. 277 + * @dsb: DSB context 278 + * @wait_for_vblank: wait for vblank before executing 279 + * 280 + * This function is used to do actual write to hardware using DSB. 281 + */ 282 + void intel_dsb_commit(struct intel_dsb *dsb, 283 + bool wait_for_vblank) 284 + { 285 + _intel_dsb_commit(dsb, 286 + wait_for_vblank ? DSB_WAIT_FOR_VBLANK : 0, 287 + wait_for_vblank ? 
dsb->dewake_scanline : -1); 364 288 } 365 289 366 290 void intel_dsb_wait(struct intel_dsb *dsb) ··· 404 258 struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); 405 259 enum pipe pipe = crtc->pipe; 406 260 407 - if (wait_for(!is_dsb_busy(dev_priv, pipe, dsb->id), 1)) 261 + if (wait_for(!is_dsb_busy(dev_priv, pipe, dsb->id), 1)) { 262 + u32 offset = i915_ggtt_offset(dsb->vma); 263 + 264 + intel_de_write_fw(dev_priv, DSB_CTRL(pipe, dsb->id), 265 + DSB_ENABLE | DSB_HALT); 266 + 408 267 drm_err(&dev_priv->drm, 409 - "[CRTC:%d:%s] DSB %d timed out waiting for idle\n", 410 - crtc->base.base.id, crtc->base.name, dsb->id); 268 + "[CRTC:%d:%s] DSB %d timed out waiting for idle (current head=0x%x, head=0x%x, tail=0x%x)\n", 269 + crtc->base.base.id, crtc->base.name, dsb->id, 270 + intel_de_read_fw(dev_priv, DSB_CURRENT_HEAD(pipe, dsb->id)) - offset, 271 + intel_de_read_fw(dev_priv, DSB_HEAD(pipe, dsb->id)) - offset, 272 + intel_de_read_fw(dev_priv, DSB_TAIL(pipe, dsb->id)) - offset); 273 + 274 + intel_dsb_dump(dsb); 275 + } 411 276 412 277 /* Attempt to reset it */ 413 278 dsb->free_pos = 0; 414 279 dsb->ins_start_offset = 0; 415 - intel_de_write(dev_priv, DSB_CTRL(pipe, dsb->id), 0); 280 + intel_de_write_fw(dev_priv, DSB_CTRL(pipe, dsb->id), 0); 416 281 } 417 282 418 283 /** 419 284 * intel_dsb_prepare() - Allocate, pin and map the DSB command buffer. 
420 - * @crtc: the CRTC 285 + * @crtc_state: the CRTC state 421 286 * @max_cmds: number of commands we need to fit into command buffer 422 287 * 423 288 * This function prepare the command buffer which is used to store dsb ··· 437 280 * Returns: 438 281 * DSB context, NULL on failure 439 282 */ 440 - struct intel_dsb *intel_dsb_prepare(struct intel_crtc *crtc, 283 + struct intel_dsb *intel_dsb_prepare(const struct intel_crtc_state *crtc_state, 441 284 unsigned int max_cmds) 442 285 { 286 + struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc); 443 287 struct drm_i915_private *i915 = to_i915(crtc->base.dev); 444 288 struct drm_i915_gem_object *obj; 445 289 intel_wakeref_t wakeref; ··· 486 328 dsb->size = size / 4; /* in dwords */ 487 329 dsb->free_pos = 0; 488 330 dsb->ins_start_offset = 0; 331 + dsb->dewake_scanline = intel_dsb_dewake_scanline(crtc_state); 489 332 490 333 return dsb; 491 334
+8 -1
drivers/gpu/drm/i915/display/intel_dsb.h
···
 #include "i915_reg_defs.h"
 
 struct intel_crtc;
+struct intel_crtc_state;
 struct intel_dsb;
 
-struct intel_dsb *intel_dsb_prepare(struct intel_crtc *crtc,
+struct intel_dsb *intel_dsb_prepare(const struct intel_crtc_state *crtc_state,
 				    unsigned int max_cmds);
 void intel_dsb_finish(struct intel_dsb *dsb);
 void intel_dsb_cleanup(struct intel_dsb *dsb);
 void intel_dsb_reg_write(struct intel_dsb *dsb,
 			 i915_reg_t reg, u32 val);
+void intel_dsb_reg_write_masked(struct intel_dsb *dsb,
+				i915_reg_t reg, u32 mask, u32 val);
+void intel_dsb_noop(struct intel_dsb *dsb, int count);
+void intel_dsb_nonpost_start(struct intel_dsb *dsb);
+void intel_dsb_nonpost_end(struct intel_dsb *dsb);
+
 void intel_dsb_commit(struct intel_dsb *dsb,
 		      bool wait_for_vblank);
 void intel_dsb_wait(struct intel_dsb *dsb);
+31
drivers/gpu/drm/i915/display/intel_dsb_regs.h
···
 #define DSB_DEBUG(pipe, id)		_MMIO(DSBSL_INSTANCE(pipe, id) + 0x14)
 #define DSB_POLLMASK(pipe, id)		_MMIO(DSBSL_INSTANCE(pipe, id) + 0x1c)
 #define DSB_STATUS(pipe, id)		_MMIO(DSBSL_INSTANCE(pipe, id) + 0x24)
+#define   DSB_HP_IDLE_STATUS		REG_BIT(31)
+#define   DSB_DEWAKE_STATUS		REG_BIT(30)
+#define   DSB_REQARB_SM_STATE_MASK	REG_GENMASK(29, 27)
+#define   DSB_SAFE_WINDOW_LIVE		REG_BIT(26)
+#define   DSB_VTDFAULT_ARB_SM_STATE_MASK	REG_GENMASK(25, 23)
+#define   DSB_TLBTRANS_SM_STATE_MASK	REG_GENMASK(21, 20)
+#define   DSB_SAFE_WINDOW		REG_BIT(19)
+#define   DSB_POINTERS_SM_STATE_MASK	REG_GENMASK(18, 17)
+#define   DSB_BUSY_ON_DELAYED_VBLANK	REG_BIT(16)
+#define   DSB_MMIO_ARB_SM_STATE_MASK	REG_GENMASK(15, 13)
+#define   DSB_MMIO_INST_SM_STATE_MASK	REG_GENMASK(11, 7)
+#define   DSB_RESET_SM_STATE_MASK	REG_GENMASK(5, 4)
+#define   DSB_RUN_SM_STATE_MASK		REG_GENMASK(2, 0)
 #define DSB_INTERRUPT(pipe, id)		_MMIO(DSBSL_INSTANCE(pipe, id) + 0x28)
 #define   DSB_ATS_FAULT_INT_EN		REG_BIT(20)
 #define   DSB_GTT_FAULT_INT_EN		REG_BIT(19)
···
 #define   DSB_RM_READY_TIMEOUT_VALUE(x)	REG_FIELD_PREP(DSB_RM_READY_TIMEOUT_VALUE, (x)) /* usec */
 #define DSB_RMTIMEOUTREG_CAPTURE(pipe, id)	_MMIO(DSBSL_INSTANCE(pipe, id) + 0x34)
 #define DSB_PMCTRL(pipe, id)		_MMIO(DSBSL_INSTANCE(pipe, id) + 0x38)
+#define   DSB_ENABLE_DEWAKE		REG_BIT(31)
+#define   DSB_SCANLINE_FOR_DEWAKE_MASK	REG_GENMASK(30, 0)
+#define   DSB_SCANLINE_FOR_DEWAKE(x)	REG_FIELD_PREP(DSB_SCANLINE_FOR_DEWAKE_MASK, (x))
 #define DSB_PMCTRL_2(pipe, id)		_MMIO(DSBSL_INSTANCE(pipe, id) + 0x3c)
+#define   DSB_MMIOGEN_DEWAKE_DIS	REG_BIT(31)
+#define   DSB_FORCE_DEWAKE		REG_BIT(23)
+#define   DSB_BLOCK_DEWAKE_EXTENSION	REG_BIT(15)
+#define   DSB_OVERRIDE_DC5_DC6_OK	REG_BIT(7)
 #define DSB_PF_LN_LOWER(pipe, id)	_MMIO(DSBSL_INSTANCE(pipe, id) + 0x40)
 #define DSB_PF_LN_UPPER(pipe, id)	_MMIO(DSBSL_INSTANCE(pipe, id) + 0x44)
 #define DSB_BUFRPT_CNT(pipe, id)	_MMIO(DSBSL_INSTANCE(pipe, id) + 0x48)
 #define DSB_CHICKEN(pipe, id)		_MMIO(DSBSL_INSTANCE(pipe, id) + 0xf0)
+#define   DSB_FORCE_DMA_SYNC_RESET	REG_BIT(31)
+#define   DSB_FORCE_VTD_ENGIE_RESET	REG_BIT(30)
+#define   DSB_DISABLE_IPC_DEMOTE	REG_BIT(29)
+#define   DSB_SKIP_WAITS_EN		REG_BIT(23)
+#define   DSB_EXTEND_HP_IDLE		REG_BIT(16)
+#define   DSB_CTRL_WAIT_SAFE_WINDOW	REG_BIT(15)
+#define   DSB_CTRL_NO_WAIT_VBLANK	REG_BIT(14)
+#define   DSB_INST_WAIT_SAFE_WINDOW	REG_BIT(7)
+#define   DSB_INST_NO_WAIT_VBLANK	REG_BIT(6)
+#define   DSB_MMIOGEN_DEWAKE_DIS_CHICKEN	REG_BIT(2)
+#define   DSB_DISABLE_MMIO_COUNT_FOR_INDEXED	REG_BIT(0)
 
 #endif /* __INTEL_DSB_REGS_H__ */
+5 -6
drivers/gpu/drm/i915/display/intel_dvo.c
···
 static int intel_dvo_get_modes(struct drm_connector *_connector)
 {
 	struct intel_connector *connector = to_intel_connector(_connector);
-	struct drm_i915_private *i915 = to_i915(connector->base.dev);
 	int num_modes;
 
 	/*
···
 	 * (TV-out, for example), but for now with just TMDS and LVDS,
 	 * that's not the case.
 	 */
-	num_modes = intel_ddc_get_modes(&connector->base,
-					intel_gmbus_get_adapter(i915, GMBUS_PIN_DPC));
+	num_modes = intel_ddc_get_modes(&connector->base, connector->base.ddc);
 	if (num_modes)
 		return num_modes;
 
···
 	connector->polled = DRM_CONNECTOR_POLL_CONNECT |
 		DRM_CONNECTOR_POLL_DISCONNECT;
 
-	drm_connector_init(&i915->drm, &connector->base,
-			   &intel_dvo_connector_funcs,
-			   intel_dvo_connector_type(&intel_dvo->dev));
+	drm_connector_init_with_ddc(&i915->drm, &connector->base,
+				    &intel_dvo_connector_funcs,
+				    intel_dvo_connector_type(&intel_dvo->dev),
+				    intel_gmbus_get_adapter(i915, GMBUS_PIN_DPC));
 
 	drm_connector_helper_add(&connector->base,
 				 &intel_dvo_connector_helper_funcs);
+58 -3
drivers/gpu/drm/i915/display/intel_fb.c
···
 #include <drm/drm_framebuffer.h>
 #include <drm/drm_modeset_helper.h>
 
+#include <linux/dma-fence.h>
+#include <linux/dma-resv.h>
+
 #include "i915_drv.h"
 #include "intel_display.h"
 #include "intel_display_types.h"
 #include "intel_dpt.h"
 #include "intel_fb.h"
+#include "intel_frontbuffer.h"
 
 #define check_array_bounds(i915, a, i) drm_WARN_ON(&(i915)->drm, (i) >= ARRAY_SIZE(a))
 
···
 	return drm_gem_handle_create(file, &obj->base, handle);
 }
 
+struct frontbuffer_fence_cb {
+	struct dma_fence_cb base;
+	struct intel_frontbuffer *front;
+};
+
+static void intel_user_framebuffer_fence_wake(struct dma_fence *dma,
+					      struct dma_fence_cb *data)
+{
+	struct frontbuffer_fence_cb *cb = container_of(data, typeof(*cb), base);
+
+	intel_frontbuffer_queue_flush(cb->front);
+	kfree(cb);
+	dma_fence_put(dma);
+}
+
 static int intel_user_framebuffer_dirty(struct drm_framebuffer *fb,
 					struct drm_file *file,
 					unsigned int flags, unsigned int color,
···
 					unsigned int num_clips)
 {
 	struct drm_i915_gem_object *obj = intel_fb_obj(fb);
+	struct intel_frontbuffer *front = to_intel_frontbuffer(fb);
+	struct dma_fence *fence;
+	struct frontbuffer_fence_cb *cb;
+	int ret = 0;
 
+	if (!atomic_read(&front->bits))
+		return 0;
+
+	if (dma_resv_test_signaled(obj->base.resv, dma_resv_usage_rw(false)))
+		goto flush;
+
+	ret = dma_resv_get_singleton(obj->base.resv, dma_resv_usage_rw(false),
+				     &fence);
+	if (ret || !fence)
+		goto flush;
+
+	cb = kmalloc(sizeof(*cb), GFP_KERNEL);
+	if (!cb) {
+		dma_fence_put(fence);
+		ret = -ENOMEM;
+		goto flush;
+	}
+
+	cb->front = front;
+
+	intel_frontbuffer_invalidate(front, ORIGIN_DIRTYFB);
+
+	ret = dma_fence_add_callback(fence, &cb->base,
+				     intel_user_framebuffer_fence_wake);
+	if (ret) {
+		intel_user_framebuffer_fence_wake(fence, &cb->base);
+		if (ret == -ENOENT)
+			ret = 0;
+	}
+
+	return ret;
+
+flush:
 	i915_gem_object_flush_if_display(obj);
-	intel_frontbuffer_flush(to_intel_frontbuffer(fb), ORIGIN_DIRTYFB);
-
-	return 0;
+	intel_frontbuffer_flush(front, ORIGIN_DIRTYFB);
+	return ret;
 }
 
 static const struct drm_framebuffer_funcs intel_fb_funcs = {
+2 -1
drivers/gpu/drm/i915/display/intel_fb_pin.c
···
 	 * We are not syncing against the binding (and potential migrations)
 	 * below, so this vm must never be async.
 	 */
-	GEM_WARN_ON(vm->bind_async_flags);
+	if (drm_WARN_ON(&dev_priv->drm, vm->bind_async_flags))
+		return ERR_PTR(-EINVAL);
 
 	if (WARN_ON(!i915_gem_object_is_framebuffer(obj)))
 		return ERR_PTR(-EINVAL);
+12 -11
drivers/gpu/drm/i915/display/intel_fbc.c
···
 #include "i915_vma.h"
 #include "intel_cdclk.h"
 #include "intel_de.h"
+#include "intel_display_device.h"
 #include "intel_display_trace.h"
 #include "intel_display_types.h"
 #include "intel_fbc.h"
···
 {
 	struct drm_i915_private *i915 = fbc->i915;
 
-	GEM_BUG_ON(range_overflows_end_t(u64, i915_gem_stolen_area_address(i915),
-					 i915_gem_stolen_node_offset(&fbc->compressed_fb),
-					 U32_MAX));
-	GEM_BUG_ON(range_overflows_end_t(u64, i915_gem_stolen_area_address(i915),
-					 i915_gem_stolen_node_offset(&fbc->compressed_llb),
-					 U32_MAX));
+	drm_WARN_ON(&i915->drm,
+		    range_overflows_end_t(u64, i915_gem_stolen_area_address(i915),
+					  i915_gem_stolen_node_offset(&fbc->compressed_fb),
+					  U32_MAX));
+	drm_WARN_ON(&i915->drm,
+		    range_overflows_end_t(u64, i915_gem_stolen_area_address(i915),
+					  i915_gem_stolen_node_offset(&fbc->compressed_llb),
+					  U32_MAX));
 	intel_de_write(i915, FBC_CFB_BASE,
 		       i915_gem_stolen_node_address(i915, &fbc->compressed_fb));
 	intel_de_write(i915, FBC_LL_BASE,
···
 
 	/* Wa_14016291713 */
 	if ((IS_DISPLAY_VER(i915, 12, 13) ||
-	     IS_MTL_DISPLAY_STEP(i915, STEP_A0, STEP_C0)) &&
+	     IS_DISPLAY_IP_STEP(i915, IP_VER(14, 0), STEP_A0, STEP_C0)) &&
 	    crtc_state->has_psr) {
 		plane_state->no_fbc_reason = "PSR1 enabled (Wa_14016291713)";
 		return 0;
···
 	lockdep_assert_held(&fbc->lock);
 
 	fbc->flip_pending = false;
+	fbc->busy_bits = 0;
 
-	if (!fbc->busy_bits)
-		intel_fbc_activate(fbc);
-	else
-		intel_fbc_deactivate(fbc, "frontbuffer write");
+	intel_fbc_activate(fbc);
 }
 
 void intel_fbc_post_update(struct intel_atomic_state *state,
+2
drivers/gpu/drm/i915/display/intel_fbc.h
···
 enum intel_fbc_id {
 	INTEL_FBC_A,
 	INTEL_FBC_B,
+	INTEL_FBC_C,
+	INTEL_FBC_D,
 
 	I915_MAX_FBCS,
 };
+149 -20
drivers/gpu/drm/i915/display/intel_fdi.c
···
 #include "intel_display_types.h"
 #include "intel_fdi.h"
 #include "intel_fdi_regs.h"
+#include "intel_link_bw.h"
 
 struct intel_fdi_funcs {
 	void (*fdi_link_train)(struct intel_crtc *crtc,
···
 	dev_priv->display.funcs.fdi->fdi_link_train(crtc, crtc_state);
 }
 
+/**
+ * intel_fdi_add_affected_crtcs - add CRTCs on FDI affected by other modeset CRTCs
+ * @state: intel atomic state
+ *
+ * Add a CRTC using FDI to @state if changing another CRTC's FDI BW usage is
+ * known to affect the available FDI BW for the former CRTC. In practice this
+ * means adding CRTC B on IVYBRIDGE if its use of FDI lanes is limited (by
+ * CRTC C) and CRTC C is getting disabled.
+ *
+ * Returns 0 in case of success, or a negative error code otherwise.
+ */
+int intel_fdi_add_affected_crtcs(struct intel_atomic_state *state)
+{
+	struct drm_i915_private *i915 = to_i915(state->base.dev);
+	const struct intel_crtc_state *old_crtc_state;
+	const struct intel_crtc_state *new_crtc_state;
+	struct intel_crtc *crtc;
+
+	if (!IS_IVYBRIDGE(i915) || INTEL_NUM_PIPES(i915) != 3)
+		return 0;
+
+	crtc = intel_crtc_for_pipe(i915, PIPE_C);
+	new_crtc_state = intel_atomic_get_new_crtc_state(state, crtc);
+	if (!new_crtc_state)
+		return 0;
+
+	if (!intel_crtc_needs_modeset(new_crtc_state))
+		return 0;
+
+	old_crtc_state = intel_atomic_get_old_crtc_state(state, crtc);
+	if (!old_crtc_state->fdi_lanes)
+		return 0;
+
+	crtc = intel_crtc_for_pipe(i915, PIPE_B);
+	new_crtc_state = intel_atomic_get_crtc_state(&state->base, crtc);
+	if (IS_ERR(new_crtc_state))
+		return PTR_ERR(new_crtc_state);
+
+	old_crtc_state = intel_atomic_get_old_crtc_state(state, crtc);
+	if (!old_crtc_state->fdi_lanes)
+		return 0;
+
+	return intel_modeset_pipes_in_mask_early(state,
+						 "FDI link BW decrease on pipe C",
+						 BIT(PIPE_B));
+}
+
 /* units of 100MHz */
 static int pipe_required_fdi_lanes(struct intel_crtc_state *crtc_state)
 {
···
 }
 
 static int ilk_check_fdi_lanes(struct drm_device *dev, enum pipe pipe,
-			       struct intel_crtc_state *pipe_config)
+			       struct intel_crtc_state *pipe_config,
+			       enum pipe *pipe_to_reduce)
 {
 	struct drm_i915_private *dev_priv = to_i915(dev);
 	struct drm_atomic_state *state = pipe_config->uapi.state;
 	struct intel_crtc *other_crtc;
 	struct intel_crtc_state *other_crtc_state;
+
+	*pipe_to_reduce = pipe;
 
 	drm_dbg_kms(&dev_priv->drm,
 		    "checking fdi config on pipe %c, lanes %i\n",
···
 		if (pipe_required_fdi_lanes(other_crtc_state) > 2) {
 			drm_dbg_kms(&dev_priv->drm,
 				    "fdi link B uses too many lanes to enable link C\n");
+
+			*pipe_to_reduce = PIPE_B;
+
 			return -EINVAL;
 		}
 		return 0;
···
 	return i915->display.fdi.pll_freq;
 }
 
+/**
+ * intel_fdi_compute_pipe_bpp - compute pipe bpp limited by max link bpp
+ * @crtc_state: the crtc state
+ *
+ * Compute the pipe bpp limited by the CRTC's maximum link bpp. Encoders can
+ * call this function during state computation in the simple case where the
+ * link bpp will always match the pipe bpp. This is the case for all non-DP
+ * encoders, while DP encoders will use a link bpp lower than pipe bpp in case
+ * of DSC compression.
+ *
+ * Returns %true in case of success, %false if pipe bpp would need to be
+ * reduced below its valid range.
+ */
+bool intel_fdi_compute_pipe_bpp(struct intel_crtc_state *crtc_state)
+{
+	int pipe_bpp = min(crtc_state->pipe_bpp,
+			   to_bpp_int(crtc_state->max_link_bpp_x16));
+
+	pipe_bpp = rounddown(pipe_bpp, 2 * 3);
+
+	if (pipe_bpp < 6 * 3)
+		return false;
+
+	crtc_state->pipe_bpp = pipe_bpp;
+
+	return true;
+}
+
 int ilk_fdi_compute_config(struct intel_crtc *crtc,
 			   struct intel_crtc_state *pipe_config)
 {
 	struct drm_device *dev = crtc->base.dev;
 	struct drm_i915_private *i915 = to_i915(dev);
 	const struct drm_display_mode *adjusted_mode = &pipe_config->hw.adjusted_mode;
-	int lane, link_bw, fdi_dotclock, ret;
-	bool needs_recompute = false;
+	int lane, link_bw, fdi_dotclock;
 
-retry:
 	/* FDI is a binary signal running at ~2.7GHz, encoding
 	 * each output octet as 10 bits. The actual frequency
 	 * is stored as a divider into a 100MHz clock, and the
···
 	intel_link_compute_m_n(pipe_config->pipe_bpp, lane, fdi_dotclock,
 			       link_bw, &pipe_config->fdi_m_n, false);
 
-	ret = ilk_check_fdi_lanes(dev, crtc->pipe, pipe_config);
-	if (ret == -EDEADLK)
+	return 0;
+}
+
+static int intel_fdi_atomic_check_bw(struct intel_atomic_state *state,
+				     struct intel_crtc *crtc,
+				     struct intel_crtc_state *pipe_config,
+				     struct intel_link_bw_limits *limits)
+{
+	struct drm_i915_private *i915 = to_i915(crtc->base.dev);
+	enum pipe pipe_to_reduce;
+	int ret;
+
+	ret = ilk_check_fdi_lanes(&i915->drm, crtc->pipe, pipe_config,
+				  &pipe_to_reduce);
+	if (ret != -EINVAL)
 		return ret;
 
-	if (ret == -EINVAL && pipe_config->pipe_bpp > 6*3) {
-		pipe_config->pipe_bpp -= 2*3;
-		drm_dbg_kms(&i915->drm,
-			    "fdi link bw constraint, reducing pipe bpp to %i\n",
-			    pipe_config->pipe_bpp);
-		needs_recompute = true;
-		pipe_config->bw_constrained = true;
+	ret = intel_link_bw_reduce_bpp(state, limits,
+				       BIT(pipe_to_reduce),
+				       "FDI link BW");
 
-		goto retry;
+	return ret ? : -EAGAIN;
+}
+
+/**
+ * intel_fdi_atomic_check_link - check all modeset FDI link configuration
+ * @state: intel atomic state
+ * @limits: link BW limits
+ *
+ * Check the link configuration for all modeset FDI outputs. If the
+ * configuration is invalid @limits will be updated if possible to
+ * reduce the total BW, after which the configuration for all CRTCs in
+ * @state must be recomputed with the updated @limits.
+ *
+ * Returns:
+ * - 0 if the configuration is valid
+ * - %-EAGAIN, if the configuration is invalid and @limits got updated
+ *   with fallback values with which the configuration of all CRTCs
+ *   in @state must be recomputed
+ * - Other negative error, if the configuration is invalid without a
+ *   fallback possibility, or the check failed for another reason
+ */
+int intel_fdi_atomic_check_link(struct intel_atomic_state *state,
+				struct intel_link_bw_limits *limits)
+{
+	struct intel_crtc *crtc;
+	struct intel_crtc_state *crtc_state;
+	int i;
+
+	for_each_new_intel_crtc_in_state(state, crtc, crtc_state, i) {
+		int ret;
+
+		if (!crtc_state->has_pch_encoder ||
+		    !intel_crtc_needs_modeset(crtc_state) ||
+		    !crtc_state->hw.enable)
+			continue;
+
+		ret = intel_fdi_atomic_check_bw(state, crtc, crtc_state, limits);
+		if (ret)
+			return ret;
 	}
 
-	if (needs_recompute)
-		return -EAGAIN;
-
-	return ret;
+	return 0;
 }
 
 static void cpt_set_fdi_bc_bifurcation(struct drm_i915_private *dev_priv, bool enable)
···
 	 * WaFDIAutoLinkSetTimingOverrride:hsw
 	 */
 	intel_de_write(dev_priv, FDI_RX_MISC(PIPE_A),
-		       FDI_RX_PWRDN_LANE1_VAL(2) | FDI_RX_PWRDN_LANE0_VAL(2) | FDI_RX_TP1_TO_TP2_48 | FDI_RX_FDI_DELAY_90);
+		       FDI_RX_PWRDN_LANE1_VAL(2) |
+		       FDI_RX_PWRDN_LANE0_VAL(2) |
+		       FDI_RX_TP1_TO_TP2_48 |
+		       FDI_RX_FDI_DELAY_90);
 
 	/* Enable the PCH Receiver FDI PLL */
 	rx_ctl_val = dev_priv->display.fdi.rx_config | FDI_RX_ENHANCE_FRAME_ENABLE |
···
 	 * achieved on the PCH side in FDI_RX_CTL, so no need to set the
 	 * port reversal bit */
 	intel_de_write(dev_priv, DDI_BUF_CTL(PORT_E),
-		       DDI_BUF_CTL_ENABLE | ((crtc_state->fdi_lanes - 1) << 1) | DDI_BUF_TRANS_SELECT(i / 2));
+		       DDI_BUF_CTL_ENABLE |
+		       ((crtc_state->fdi_lanes - 1) << 1) |
+		       DDI_BUF_TRANS_SELECT(i / 2));
 	intel_de_posting_read(dev_priv, DDI_BUF_CTL(PORT_E));
 
 	udelay(600);
+8
drivers/gpu/drm/i915/display/intel_fdi.h
···
 #ifndef _INTEL_FDI_H_
 #define _INTEL_FDI_H_
 
+#include <linux/types.h>
+
 enum pipe;
 struct drm_i915_private;
+struct intel_atomic_state;
 struct intel_crtc;
 struct intel_crtc_state;
 struct intel_encoder;
+struct intel_link_bw_limits;
 
+int intel_fdi_add_affected_crtcs(struct intel_atomic_state *state);
 int intel_fdi_link_freq(struct drm_i915_private *i915,
 			const struct intel_crtc_state *pipe_config);
+bool intel_fdi_compute_pipe_bpp(struct intel_crtc_state *crtc_state);
 int ilk_fdi_compute_config(struct intel_crtc *intel_crtc,
 			   struct intel_crtc_state *pipe_config);
+int intel_fdi_atomic_check_link(struct intel_atomic_state *state,
+				struct intel_link_bw_limits *limits);
 void intel_fdi_normal_train(struct intel_crtc *crtc);
 void ilk_fdi_disable(struct intel_crtc *crtc);
 void ilk_fdi_pll_disable(struct intel_crtc *intel_crtc);
+32 -2
drivers/gpu/drm/i915/display/intel_frontbuffer.c
···
  * cancelled as soon as busyness is detected.
  */
 
+#include "gem/i915_gem_object_frontbuffer.h"
 #include "i915_drv.h"
 #include "intel_display_trace.h"
 #include "intel_display_types.h"
···
 	frontbuffer_flush(i915, frontbuffer_bits, origin);
 }
 
+static void intel_frontbuffer_flush_work(struct work_struct *work)
+{
+	struct intel_frontbuffer *front =
+		container_of(work, struct intel_frontbuffer, flush_work);
+
+	i915_gem_object_flush_if_display(front->obj);
+	intel_frontbuffer_flush(front, ORIGIN_DIRTYFB);
+	intel_frontbuffer_put(front);
+}
+
+/**
+ * intel_frontbuffer_queue_flush - queue flushing frontbuffer object
+ * @front: GEM object to flush
+ *
+ * This function is targeted for our dirty callback for queueing flush when
+ * dma fence is signaled
+ */
+void intel_frontbuffer_queue_flush(struct intel_frontbuffer *front)
+{
+	if (!front)
+		return;
+
+	kref_get(&front->ref);
+	if (!schedule_work(&front->flush_work))
+		intel_frontbuffer_put(front);
+}
+
 static int frontbuffer_active(struct i915_active *ref)
 {
 	struct intel_frontbuffer *front =
···
 static void frontbuffer_release(struct kref *ref)
 	__releases(&intel_bo_to_i915(front->obj)->display.fb_tracking.lock)
 {
-	struct intel_frontbuffer *front =
+	struct intel_frontbuffer *ret, *front =
 		container_of(ref, typeof(*front), ref);
 	struct drm_i915_gem_object *obj = front->obj;
 
···
 	i915_ggtt_clear_scanout(obj);
 
-	i915_gem_object_set_frontbuffer(obj, NULL);
+	ret = i915_gem_object_set_frontbuffer(obj, NULL);
+	drm_WARN_ON(&intel_bo_to_i915(obj)->drm, ret);
 	spin_unlock(&intel_bo_to_i915(obj)->display.fb_tracking.lock);
 
 	i915_active_fini(&front->write);
···
 			 frontbuffer_active,
 			 frontbuffer_retire,
 			 I915_ACTIVE_RETIRE_SLEEPS);
+	INIT_WORK(&front->flush_work, intel_frontbuffer_flush_work);
 
 	spin_lock(&i915->display.fb_tracking.lock);
 	cur = i915_gem_object_set_frontbuffer(obj, front);
+4
drivers/gpu/drm/i915/display/intel_frontbuffer.h
···
 	struct i915_active write;
 	struct drm_i915_gem_object *obj;
 	struct rcu_head rcu;
+
+	struct work_struct flush_work;
 };
 
 /*
···
 
 	__intel_fb_flush(front, origin, frontbuffer_bits);
 }
+
+void intel_frontbuffer_queue_flush(struct intel_frontbuffer *front);
 
 void intel_frontbuffer_track(struct intel_frontbuffer *old,
 			     struct intel_frontbuffer *new,
+4 -1
drivers/gpu/drm/i915/display/intel_gmbus.c
···
 	const struct gmbus_pin *pins;
 	size_t size;
 
-	if (INTEL_PCH_TYPE(i915) >= PCH_DG2) {
+	if (INTEL_PCH_TYPE(i915) >= PCH_LNL) {
+		pins = gmbus_pins_mtp;
+		size = ARRAY_SIZE(gmbus_pins_mtp);
+	} else if (INTEL_PCH_TYPE(i915) >= PCH_DG2) {
 		pins = gmbus_pins_dg2;
 		size = ARRAY_SIZE(gmbus_pins_dg2);
 	} else if (INTEL_PCH_TYPE(i915) >= PCH_DG1) {
+14 -19
drivers/gpu/drm/i915/display/intel_hdcp.c
···
 /* Is HDCP2.2 capable on Platform and Sink */
 bool intel_hdcp2_capable(struct intel_connector *connector)
 {
-	struct intel_digital_port *dig_port = intel_attached_dig_port(connector);
 	struct drm_i915_private *i915 = to_i915(connector->base.dev);
 	struct intel_hdcp *hdcp = &connector->hdcp;
 	bool capable = false;
···
 	mutex_unlock(&i915->display.hdcp.hdcp_mutex);
 
 	/* Sink's capability for HDCP2.2 */
-	hdcp->shim->hdcp_2_2_capable(dig_port, &capable);
+	hdcp->shim->hdcp_2_2_capable(connector, &capable);
 
 	return capable;
 }
···
 /* Authentication flow starts from here */
 static int hdcp2_authentication_key_exchange(struct intel_connector *connector)
 {
-	struct intel_digital_port *dig_port = intel_attached_dig_port(connector);
 	struct drm_i915_private *i915 = to_i915(connector->base.dev);
 	struct intel_hdcp *hdcp = &connector->hdcp;
 	union {
···
 	if (ret < 0)
 		return ret;
 
-	ret = shim->write_2_2_msg(dig_port, &msgs.ake_init,
+	ret = shim->write_2_2_msg(connector, &msgs.ake_init,
 				  sizeof(msgs.ake_init));
 	if (ret < 0)
 		return ret;
 
-	ret = shim->read_2_2_msg(dig_port, HDCP_2_2_AKE_SEND_CERT,
+	ret = shim->read_2_2_msg(connector, HDCP_2_2_AKE_SEND_CERT,
 				 &msgs.send_cert, sizeof(msgs.send_cert));
 	if (ret < 0)
 		return ret;
···
 	if (ret < 0)
 		return ret;
 
-	ret = shim->write_2_2_msg(dig_port, &msgs.no_stored_km, size);
+	ret = shim->write_2_2_msg(connector, &msgs.no_stored_km, size);
 	if (ret < 0)
 		return ret;
 
-	ret = shim->read_2_2_msg(dig_port, HDCP_2_2_AKE_SEND_HPRIME,
+	ret = shim->read_2_2_msg(connector, HDCP_2_2_AKE_SEND_HPRIME,
 				 &msgs.send_hprime, sizeof(msgs.send_hprime));
 	if (ret < 0)
 		return ret;
···
 
 	if (!hdcp->is_paired) {
 		/* Pairing is required */
-		ret = shim->read_2_2_msg(dig_port,
+		ret = shim->read_2_2_msg(connector,
 					 HDCP_2_2_AKE_SEND_PAIRING_INFO,
 					 &msgs.pairing_info,
 					 sizeof(msgs.pairing_info));
···
 
 static int hdcp2_locality_check(struct intel_connector *connector)
 {
-	struct intel_digital_port *dig_port = intel_attached_dig_port(connector);
 	struct intel_hdcp *hdcp = &connector->hdcp;
 	union {
 		struct hdcp2_lc_init lc_init;
···
 		if (ret < 0)
 			continue;
 
-		ret = shim->write_2_2_msg(dig_port, &msgs.lc_init,
+		ret = shim->write_2_2_msg(connector, &msgs.lc_init,
 					  sizeof(msgs.lc_init));
 		if (ret < 0)
 			continue;
 
-		ret = shim->read_2_2_msg(dig_port,
+		ret = shim->read_2_2_msg(connector,
 					 HDCP_2_2_LC_SEND_LPRIME,
 					 &msgs.send_lprime,
 					 sizeof(msgs.send_lprime));
···
 
 static int hdcp2_session_key_exchange(struct intel_connector *connector)
 {
-	struct intel_digital_port *dig_port = intel_attached_dig_port(connector);
 	struct intel_hdcp *hdcp = &connector->hdcp;
 	struct hdcp2_ske_send_eks send_eks;
 	int ret;
···
 	if (ret < 0)
 		return ret;
 
-	ret = hdcp->shim->write_2_2_msg(dig_port, &send_eks,
+	ret = hdcp->shim->write_2_2_msg(connector, &send_eks,
 					sizeof(send_eks));
 	if (ret < 0)
 		return ret;
···
 	streams_size_delta = (HDCP_2_2_MAX_CONTENT_STREAMS_CNT - data->k) *
 			     sizeof(struct hdcp2_streamid_type);
 	/* Send it to Repeater */
-	ret = shim->write_2_2_msg(dig_port, &msgs.stream_manage,
+	ret = shim->write_2_2_msg(connector, &msgs.stream_manage,
 				  sizeof(msgs.stream_manage) - streams_size_delta);
 	if (ret < 0)
 		goto out;
 
-	ret = shim->read_2_2_msg(dig_port, HDCP_2_2_REP_STREAM_READY,
+	ret = shim->read_2_2_msg(connector, HDCP_2_2_REP_STREAM_READY,
 				 &msgs.stream_ready, sizeof(msgs.stream_ready));
 	if (ret < 0)
 		goto out;
···
 	u8 *rx_info;
 	int ret;
 
-	ret = shim->read_2_2_msg(dig_port, HDCP_2_2_REP_SEND_RECVID_LIST,
+	ret = shim->read_2_2_msg(connector, HDCP_2_2_REP_SEND_RECVID_LIST,
 				 &msgs.recvid_list, sizeof(msgs.recvid_list));
 	if (ret < 0)
 		return ret;
···
 		return ret;
 
 	hdcp->seq_num_v = seq_num_v;
-	ret = shim->write_2_2_msg(dig_port, &msgs.rep_ack,
+	ret = shim->write_2_2_msg(connector, &msgs.rep_ack,
 				  sizeof(msgs.rep_ack));
 	if (ret < 0)
 		return ret;
···
 
 static int hdcp2_authenticate_sink(struct intel_connector *connector)
 {
-	struct intel_digital_port *dig_port = intel_attached_dig_port(connector);
 	struct drm_i915_private *i915 = to_i915(connector->base.dev);
 	struct intel_hdcp *hdcp = &connector->hdcp;
 	const struct intel_hdcp_shim *shim = hdcp->shim;
···
 	}
 
 	if (shim->config_stream_type) {
-		ret = shim->config_stream_type(dig_port,
+		ret = shim->config_stream_type(connector,
 					       hdcp->is_repeater,
 					       hdcp->content_type);
 		if (ret < 0)
+39 -82
drivers/gpu/drm/i915/display/intel_hdmi.c
···
 void intel_dp_dual_mode_set_tmds_output(struct intel_hdmi *hdmi, bool enable)
 {
 	struct drm_i915_private *dev_priv = intel_hdmi_to_i915(hdmi);
-	struct i2c_adapter *adapter;
+	struct i2c_adapter *ddc = hdmi->attached_connector->base.ddc;
 
 	if (hdmi->dp_dual_mode.type < DRM_DP_DUAL_MODE_TYPE2_DVI)
 		return;
 
-	adapter = intel_gmbus_get_adapter(dev_priv, hdmi->ddc_bus);
-
 	drm_dbg_kms(&dev_priv->drm, "%s DP dual mode adaptor TMDS output\n",
 		    enable ? "Enabling" : "Disabling");
 
-	drm_dp_dual_mode_set_tmds_output(&dev_priv->drm, hdmi->dp_dual_mode.type, adapter, enable);
+	drm_dp_dual_mode_set_tmds_output(&dev_priv->drm,
+					 hdmi->dp_dual_mode.type, ddc, enable);
 }
 
 static int intel_hdmi_hdcp_read(struct intel_digital_port *dig_port,
 				unsigned int offset, void *buffer, size_t size)
 {
-	struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev);
 	struct intel_hdmi *hdmi = &dig_port->hdmi;
-	struct i2c_adapter *adapter = intel_gmbus_get_adapter(i915,
-							      hdmi->ddc_bus);
+	struct i2c_adapter *ddc = hdmi->attached_connector->base.ddc;
 	int ret;
 	u8 start = offset & 0xff;
 	struct i2c_msg msgs[] = {
···
 			.buf = buffer
 		}
 	};
-	ret = i2c_transfer(adapter, msgs, ARRAY_SIZE(msgs));
+	ret = i2c_transfer(ddc, msgs, ARRAY_SIZE(msgs));
 	if (ret == ARRAY_SIZE(msgs))
 		return 0;
 	return ret >= 0 ? -EIO : ret;
···
 static int intel_hdmi_hdcp_write(struct intel_digital_port *dig_port,
 				 unsigned int offset, void *buffer, size_t size)
 {
-	struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev);
 	struct intel_hdmi *hdmi = &dig_port->hdmi;
-	struct i2c_adapter *adapter = intel_gmbus_get_adapter(i915,
-							      hdmi->ddc_bus);
+	struct i2c_adapter *ddc = hdmi->attached_connector->base.ddc;
 	int ret;
 	u8 *write_buf;
 	struct i2c_msg msg;
···
 	msg.len = size + 1,
 	msg.buf = write_buf;
 
-	ret = i2c_transfer(adapter, &msg, 1);
+	ret = i2c_transfer(ddc, &msg, 1);
 	if (ret == 1)
 		ret = 0;
 	else if (ret >= 0)
···
 {
 	struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev);
 	struct intel_hdmi *hdmi = &dig_port->hdmi;
-	struct i2c_adapter *adapter = intel_gmbus_get_adapter(i915,
-							      hdmi->ddc_bus);
+	struct i2c_adapter *ddc = hdmi->attached_connector->base.ddc;
 	int ret;
 
 	ret = intel_hdmi_hdcp_write(dig_port, DRM_HDCP_DDC_AN, an,
···
 		return ret;
 	}
 
-	ret = intel_gmbus_output_aksv(adapter);
+	ret = intel_gmbus_output_aksv(ddc);
 	if (ret < 0) {
 		drm_dbg_kms(&i915->drm, "Failed to output aksv (%d)\n", ret);
 		return ret;
···
 }
 
 static
-int intel_hdmi_hdcp2_write_msg(struct intel_digital_port *dig_port,
+int intel_hdmi_hdcp2_write_msg(struct intel_connector *connector,
 			       void *buf, size_t size)
 {
+	struct intel_digital_port *dig_port = intel_attached_dig_port(connector);
 	unsigned int offset;
 
 	offset = HDCP_2_2_HDMI_REG_WR_MSG_OFFSET;
···
 }
 
 static
-int intel_hdmi_hdcp2_read_msg(struct intel_digital_port *dig_port,
+int intel_hdmi_hdcp2_read_msg(struct intel_connector *connector,
 			      u8 msg_id, void *buf, size_t size)
 {
+	struct intel_digital_port *dig_port = intel_attached_dig_port(connector);
 	struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev);
 	struct intel_hdmi *hdmi = &dig_port->hdmi;
 	struct intel_hdcp *hdcp = &hdmi->attached_connector->hdcp;
···
 }
 
 static
-int intel_hdmi_hdcp2_capable(struct intel_digital_port *dig_port,
+int intel_hdmi_hdcp2_capable(struct intel_connector *connector,
 			     bool *capable)
 {
+	struct intel_digital_port *dig_port = intel_attached_dig_port(connector);
 	u8 hdcp2_version;
 	int ret;
···
 	struct drm_i915_private *dev_priv = to_i915(connector->dev);
 	struct intel_hdmi *hdmi = intel_attached_hdmi(to_intel_connector(connector));
 	struct intel_encoder *encoder = &hdmi_to_dig_port(hdmi)->base;
-	struct i2c_adapter *adapter =
-		intel_gmbus_get_adapter(dev_priv, hdmi->ddc_bus);
-	enum drm_dp_dual_mode_type type = drm_dp_dual_mode_detect(&dev_priv->drm, adapter);
+	struct i2c_adapter *ddc = connector->ddc;
+	enum drm_dp_dual_mode_type type;
+
+	type = drm_dp_dual_mode_detect(&dev_priv->drm, ddc);
 
 	/*
 	 * Type 1 DVI adaptors are not required to implement any
···
 
 	hdmi->dp_dual_mode.type = type;
 	hdmi->dp_dual_mode.max_tmds_clock =
-		drm_dp_dual_mode_max_tmds_clock(&dev_priv->drm, type, adapter);
+		drm_dp_dual_mode_max_tmds_clock(&dev_priv->drm, type, ddc);
 
 	drm_dbg_kms(&dev_priv->drm,
 		    "DP dual mode adaptor (%s) detected (max TMDS clock: %d kHz)\n",
···
 {
 	struct drm_i915_private *dev_priv = to_i915(connector->dev);
 	struct intel_hdmi *intel_hdmi = intel_attached_hdmi(to_intel_connector(connector));
+	struct i2c_adapter *ddc = connector->ddc;
 	intel_wakeref_t wakeref;
 	const struct drm_edid *drm_edid;
-	const struct edid *edid;
 	bool connected = false;
-	struct i2c_adapter *i2c;
 
 	wakeref = intel_display_power_get(dev_priv, POWER_DOMAIN_GMBUS);
 
-	i2c = intel_gmbus_get_adapter(dev_priv, intel_hdmi->ddc_bus);
+	drm_edid = drm_edid_read_ddc(connector, ddc);
 
-	drm_edid = drm_edid_read_ddc(connector, i2c);
-
-	if (!drm_edid && !intel_gmbus_is_forced_bit(i2c)) {
+	if (!drm_edid && !intel_gmbus_is_forced_bit(ddc)) {
 		drm_dbg_kms(&dev_priv->drm,
 			    "HDMI GMBUS EDID read failed, retry using GPIO bit-banging\n");
-		intel_gmbus_force_bit(i2c, true);
-		drm_edid = drm_edid_read_ddc(connector, i2c);
-		intel_gmbus_force_bit(i2c, false);
+		intel_gmbus_force_bit(ddc, true);
+		drm_edid = drm_edid_read_ddc(connector, ddc);
+		intel_gmbus_force_bit(ddc, false);
 	}
 
 	/* Below we depend on display info having been updated */
···
 
 	to_intel_connector(connector)->detect_edid = drm_edid;
 
-	/* FIXME: Get rid of drm_edid_raw() */
-	edid = drm_edid_raw(drm_edid);
-	if (edid && edid->input & DRM_EDID_INPUT_DIGITAL) {
+	if (drm_edid_is_digital(drm_edid)) {
 		intel_hdmi_dp_dual_mode_detect(connector);
 
 		connected = true;
···
 
 	intel_display_power_put(dev_priv, POWER_DOMAIN_GMBUS, wakeref);
 
-	cec_notifier_set_phys_addr_from_edid(intel_hdmi->cec_notifier, edid);
+	cec_notifier_set_phys_addr(intel_hdmi->cec_notifier,
+				   connector->display_info.source_physical_address);
 
 	return connected;
 }
···
 	if (status != connector_status_connected)
 		cec_notifier_phys_addr_invalidate(intel_hdmi->cec_notifier);
 
 	/*
 	 * Make sure the refs for power wells enabled during detect are
 	 * dropped to avoid a
new detect cycle triggered by HPD polling. 2522 - */ 2523 - intel_display_power_flush_work(dev_priv); 2524 - 2525 2525 return status; 2526 2526 } 2527 2527 ··· 2541 2553 return drm_edid_connector_add_modes(connector); 2542 2554 } 2543 2555 2544 - static struct i2c_adapter * 2545 - intel_hdmi_get_i2c_adapter(struct drm_connector *connector) 2546 - { 2547 - struct drm_i915_private *dev_priv = to_i915(connector->dev); 2548 - struct intel_hdmi *intel_hdmi = intel_attached_hdmi(to_intel_connector(connector)); 2549 - 2550 - return intel_gmbus_get_adapter(dev_priv, intel_hdmi->ddc_bus); 2551 - } 2552 - 2553 - static void intel_hdmi_create_i2c_symlink(struct drm_connector *connector) 2554 - { 2555 - struct drm_i915_private *i915 = to_i915(connector->dev); 2556 - struct i2c_adapter *adapter = intel_hdmi_get_i2c_adapter(connector); 2557 - struct kobject *i2c_kobj = &adapter->dev.kobj; 2558 - struct kobject *connector_kobj = &connector->kdev->kobj; 2559 - int ret; 2560 - 2561 - ret = sysfs_create_link(connector_kobj, i2c_kobj, i2c_kobj->name); 2562 - if (ret) 2563 - drm_err(&i915->drm, "Failed to create i2c symlink (%d)\n", ret); 2564 - } 2565 - 2566 - static void intel_hdmi_remove_i2c_symlink(struct drm_connector *connector) 2567 - { 2568 - struct i2c_adapter *adapter = intel_hdmi_get_i2c_adapter(connector); 2569 - struct kobject *i2c_kobj = &adapter->dev.kobj; 2570 - struct kobject *connector_kobj = &connector->kdev->kobj; 2571 - 2572 - sysfs_remove_link(connector_kobj, i2c_kobj->name); 2573 - } 2574 - 2575 2556 static int 2576 2557 intel_hdmi_connector_register(struct drm_connector *connector) 2577 2558 { ··· 2549 2592 ret = intel_connector_register(connector); 2550 2593 if (ret) 2551 2594 return ret; 2552 - 2553 - intel_hdmi_create_i2c_symlink(connector); 2554 2595 2555 2596 return ret; 2556 2597 } ··· 2559 2604 2560 2605 cec_notifier_conn_unregister(n); 2561 2606 2562 - intel_hdmi_remove_i2c_symlink(connector); 2563 2607 intel_connector_unregister(connector); 2564 2608 } 
2565 2609 ··· 2872 2918 struct intel_encoder *other; 2873 2919 2874 2920 for_each_intel_encoder(&i915->drm, other) { 2921 + struct intel_connector *connector; 2922 + 2875 2923 if (other == encoder) 2876 2924 continue; 2877 2925 2878 2926 if (!intel_encoder_is_dig_port(other)) 2879 2927 continue; 2880 2928 2881 - if (enc_to_dig_port(other)->hdmi.ddc_bus == ddc_pin) 2929 + connector = enc_to_dig_port(other)->hdmi.attached_connector; 2930 + 2931 + if (connector && connector->base.ddc == intel_gmbus_get_adapter(i915, ddc_pin)) 2882 2932 return other; 2883 2933 } 2884 2934 ··· 2974 3016 struct intel_encoder *intel_encoder = &dig_port->base; 2975 3017 struct drm_device *dev = intel_encoder->base.dev; 2976 3018 struct drm_i915_private *dev_priv = to_i915(dev); 2977 - struct i2c_adapter *ddc; 2978 3019 enum port port = intel_encoder->port; 2979 3020 struct cec_connector_info conn_info; 3021 + u8 ddc_pin; 2980 3022 2981 3023 drm_dbg_kms(&dev_priv->drm, 2982 3024 "Adding HDMI connector on [ENCODER:%d:%s]\n", ··· 2991 3033 intel_encoder->base.name)) 2992 3034 return; 2993 3035 2994 - intel_hdmi->ddc_bus = intel_hdmi_ddc_pin(intel_encoder); 2995 - if (!intel_hdmi->ddc_bus) 3036 + ddc_pin = intel_hdmi_ddc_pin(intel_encoder); 3037 + if (!ddc_pin) 2996 3038 return; 2997 - 2998 - ddc = intel_gmbus_get_adapter(dev_priv, intel_hdmi->ddc_bus); 2999 3039 3000 3040 drm_connector_init_with_ddc(dev, connector, 3001 3041 &intel_hdmi_connector_funcs, 3002 3042 DRM_MODE_CONNECTOR_HDMIA, 3003 - ddc); 3043 + intel_gmbus_get_adapter(dev_priv, ddc_pin)); 3044 + 3004 3045 drm_connector_helper_add(connector, &intel_hdmi_connector_helper_funcs); 3005 3046 3006 3047 if (DISPLAY_VER(dev_priv) < 12)
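The HDCP helper above packs the DDC register offset and the payload into a single I2C write: the first byte of the transfer buffer is the offset, followed by `size` payload bytes, sent as one `i2c_msg` of length `size + 1`. A minimal userspace sketch of that packing, using mock types standing in for the kernel's `struct i2c_msg` (all `mock_` names here are illustrative, not from the i915 code):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Mock of the kernel's struct i2c_msg, for illustration only. */
struct mock_i2c_msg {
	uint16_t addr;
	uint16_t len;
	uint8_t *buf;
};

#define MOCK_HDCP_ADDR 0x3a /* stands in for DRM_HDCP_DDC_ADDR */

/*
 * Pack "offset + payload" the way intel_hdmi_hdcp_write() does:
 * one I2C write transaction whose first byte is the DDC offset.
 * Returns the heap buffer the caller must free, or NULL on OOM.
 */
static uint8_t *mock_hdcp_pack(struct mock_i2c_msg *msg, uint8_t offset,
			       const void *buffer, size_t size)
{
	uint8_t *write_buf = calloc(size + 1, 1);

	if (!write_buf)
		return NULL;

	write_buf[0] = offset;
	memcpy(&write_buf[1], buffer, size);

	msg->addr = MOCK_HDCP_ADDR;
	msg->len = size + 1;
	msg->buf = write_buf;

	return write_buf;
}
```

In the kernel the buffer is then handed to `i2c_transfer()` on the connector's DDC adapter, which returns the number of messages transferred (1 on success here).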
+78 -7
drivers/gpu/drm/i915/display/intel_hotplug.c
···

 #include "i915_drv.h"
 #include "i915_irq.h"
+#include "intel_display_power.h"
 #include "intel_display_types.h"
 #include "intel_hotplug.h"
 #include "intel_hotplug_irq.h"
···
 	intel_runtime_pm_put(&dev_priv->runtime_pm, wakeref);
 }

-enum intel_hotplug_state
-intel_encoder_hotplug(struct intel_encoder *encoder,
-		      struct intel_connector *connector)
+static enum intel_hotplug_state
+intel_hotplug_detect_connector(struct intel_connector *connector)
 {
 	struct drm_device *dev = connector->base.dev;
 	enum drm_connector_status old_status;
 	u64 old_epoch_counter;
+	int status;
 	bool ret = false;

 	drm_WARN_ON(dev, !mutex_is_locked(&dev->mode_config.mutex));
 	old_status = connector->base.status;
 	old_epoch_counter = connector->base.epoch_counter;

-	connector->base.status =
-		drm_helper_probe_detect(&connector->base, NULL, false);
+	status = drm_helper_probe_detect(&connector->base, NULL, false);
+	if (!connector->base.force)
+		connector->base.status = status;

 	if (old_epoch_counter != connector->base.epoch_counter)
 		ret = true;
···
 		return INTEL_HOTPLUG_CHANGED;
 	}
 	return INTEL_HOTPLUG_UNCHANGED;
+}
+
+enum intel_hotplug_state
+intel_encoder_hotplug(struct intel_encoder *encoder,
+		      struct intel_connector *connector)
+{
+	return intel_hotplug_detect_connector(connector);
 }

 static bool intel_encoder_has_hpd_pulse(struct intel_encoder *encoder)
···
 	spin_unlock_irq(&dev_priv->irq_lock);
 }

+static void i915_hpd_poll_detect_connectors(struct drm_i915_private *i915)
+{
+	struct drm_connector_list_iter conn_iter;
+	struct intel_connector *connector;
+	struct intel_connector *first_changed_connector = NULL;
+	int changed = 0;
+
+	mutex_lock(&i915->drm.mode_config.mutex);
+
+	if (!i915->drm.mode_config.poll_enabled)
+		goto out;
+
+	drm_connector_list_iter_begin(&i915->drm, &conn_iter);
+	for_each_intel_connector_iter(connector, &conn_iter) {
+		if (!(connector->base.polled & DRM_CONNECTOR_POLL_HPD))
+			continue;
+
+		if (intel_hotplug_detect_connector(connector) != INTEL_HOTPLUG_CHANGED)
+			continue;
+
+		changed++;
+
+		if (changed == 1) {
+			drm_connector_get(&connector->base);
+			first_changed_connector = connector;
+		}
+	}
+	drm_connector_list_iter_end(&conn_iter);
+
+out:
+	mutex_unlock(&i915->drm.mode_config.mutex);
+
+	if (!changed)
+		return;
+
+	if (changed == 1)
+		drm_kms_helper_connector_hotplug_event(&first_changed_connector->base);
+	else
+		drm_kms_helper_hotplug_event(&i915->drm);
+
+	drm_connector_put(&first_changed_connector->base);
+}
+
 static void i915_hpd_poll_init_work(struct work_struct *work)
 {
 	struct drm_i915_private *dev_priv =
···
			     display.hotplug.poll_init_work);
 	struct drm_connector_list_iter conn_iter;
 	struct intel_connector *connector;
+	intel_wakeref_t wakeref;
 	bool enabled;

 	mutex_lock(&dev_priv->drm.mode_config.mutex);

 	enabled = READ_ONCE(dev_priv->display.hotplug.poll_enabled);
+	/*
+	 * Prevent taking a power reference from this sequence of
+	 * i915_hpd_poll_init_work() -> drm_helper_hpd_irq_event() ->
+	 * connector detect which would requeue i915_hpd_poll_init_work()
+	 * and so risk an endless loop of this same sequence.
+	 */
+	if (!enabled) {
+		wakeref = intel_display_power_get(dev_priv,
+						  POWER_DOMAIN_DISPLAY_CORE);
+		drm_WARN_ON(&dev_priv->drm,
+			    READ_ONCE(dev_priv->display.hotplug.poll_enabled));
+		cancel_work(&dev_priv->display.hotplug.poll_init_work);
+	}

 	drm_connector_list_iter_begin(&dev_priv->drm, &conn_iter);
 	for_each_intel_connector_iter(connector, &conn_iter) {
···
	 * We might have missed any hotplugs that happened while we were
	 * in the middle of disabling polling
	 */
-	if (!enabled)
-		drm_helper_hpd_irq_event(&dev_priv->drm);
+	if (!enabled) {
+		i915_hpd_poll_detect_connectors(dev_priv);
+
+		intel_display_power_put(dev_priv,
+					POWER_DOMAIN_DISPLAY_CORE,
+					wakeref);
+	}
 }

 /**
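The new poll-detect path above sends a fine-grained per-connector hotplug event when exactly one connector changed, and falls back to the device-wide event otherwise, remembering only the first changed connector. The decision logic can be sketched in isolation with mock types (everything prefixed `mock_` is illustrative, not the DRM API):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-in for a DRM connector's detect result. */
struct mock_connector {
	int changed; /* did the last detect cycle change its status? */
};

enum mock_event { MOCK_EVENT_NONE, MOCK_EVENT_CONNECTOR, MOCK_EVENT_GLOBAL };

/*
 * Mirror the decision in i915_hpd_poll_detect_connectors(): remember the
 * first connector whose status changed; if it remains the only one, a
 * per-connector hotplug event suffices, otherwise use the global event.
 */
static enum mock_event pick_hotplug_event(struct mock_connector *connectors,
					  size_t n,
					  struct mock_connector **first_changed)
{
	size_t i;
	int changed = 0;

	*first_changed = NULL;

	for (i = 0; i < n; i++) {
		if (!connectors[i].changed)
			continue;

		changed++;
		if (changed == 1)
			*first_changed = &connectors[i];
	}

	if (!changed)
		return MOCK_EVENT_NONE;

	return changed == 1 ? MOCK_EVENT_CONNECTOR : MOCK_EVENT_GLOBAL;
}
```

In the kernel version the first changed connector additionally has to be reference-counted (`drm_connector_get()`/`drm_connector_put()`) because the event is sent after the connector list iteration ends.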
+22 -2
drivers/gpu/drm/i915/display/intel_hotplug_irq.c
···
 	    (!HAS_PCH_SPLIT(dev_priv) || HAS_PCH_NOP(dev_priv)))
 		return;

-	if (INTEL_PCH_TYPE(dev_priv) >= PCH_DG1)
+	if (INTEL_PCH_TYPE(dev_priv) >= PCH_LNL)
+		hpd->pch_hpd = hpd_mtp;
+	else if (INTEL_PCH_TYPE(dev_priv) >= PCH_DG1)
 		hpd->pch_hpd = hpd_sde_dg1;
 	else if (INTEL_PCH_TYPE(dev_priv) >= PCH_MTP)
 		hpd->pch_hpd = hpd_mtp;
···
 	u32 hotplug_trigger = iir & (XELPDP_DP_ALT_HOTPLUG_MASK | XELPDP_TBT_HOTPLUG_MASK);
 	u32 trigger_aux = iir & XELPDP_AUX_TC_MASK;
 	u32 pin_mask = 0, long_mask = 0;
+
+	if (DISPLAY_VER(i915) >= 20)
+		trigger_aux |= iir & XE2LPD_AUX_DDI_MASK;

 	for (pin = HPD_PORT_TC1; pin <= HPD_PORT_TC4; pin++) {
 		u32 val;
···
 	mtp_tc_hpd_detection_setup(i915);
 }

+static void xe2lpd_sde_hpd_irq_setup(struct drm_i915_private *i915)
+{
+	u32 hotplug_irqs, enabled_irqs;
+
+	enabled_irqs = intel_hpd_enabled_irqs(i915, i915->display.hotplug.pch_hpd);
+	hotplug_irqs = intel_hpd_hotplug_irqs(i915, i915->display.hotplug.pch_hpd);
+
+	ibx_display_interrupt_update(i915, hotplug_irqs, enabled_irqs);
+
+	mtp_ddi_hpd_detection_setup(i915);
+	mtp_tc_hpd_detection_setup(i915);
+}
+
 static bool is_xelpdp_pica_hpd_pin(enum hpd_pin hpd_pin)
 {
 	return hpd_pin >= HPD_PORT_TC1 && hpd_pin <= HPD_PORT_TC4;
···

 	xelpdp_pica_hpd_detection_setup(i915);

-	if (INTEL_PCH_TYPE(i915) >= PCH_MTP)
+	if (INTEL_PCH_TYPE(i915) >= PCH_LNL)
+		xe2lpd_sde_hpd_irq_setup(i915);
+	else if (INTEL_PCH_TYPE(i915) >= PCH_MTP)
 		mtp_hpd_irq_setup(i915);
 }

+7 -7
drivers/gpu/drm/i915/display/intel_lspcon.c
···
 	struct intel_dp *intel_dp = lspcon_to_intel_dp(lspcon);
 	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
 	enum drm_lspcon_mode current_mode;
-	struct i2c_adapter *adapter = &intel_dp->aux.ddc;
+	struct i2c_adapter *ddc = &intel_dp->aux.ddc;

-	if (drm_lspcon_get_mode(intel_dp->aux.drm_dev, adapter, &current_mode)) {
+	if (drm_lspcon_get_mode(intel_dp->aux.drm_dev, ddc, &current_mode)) {
 		drm_dbg_kms(&i915->drm, "Error reading LSPCON mode\n");
 		return DRM_LSPCON_MODE_INVALID;
 	}
···
 	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
 	int err;
 	enum drm_lspcon_mode current_mode;
-	struct i2c_adapter *adapter = &intel_dp->aux.ddc;
+	struct i2c_adapter *ddc = &intel_dp->aux.ddc;

-	err = drm_lspcon_get_mode(intel_dp->aux.drm_dev, adapter, &current_mode);
+	err = drm_lspcon_get_mode(intel_dp->aux.drm_dev, ddc, &current_mode);
 	if (err) {
 		drm_err(&i915->drm, "Error reading LSPCON mode\n");
 		return err;
···
 		return 0;
 	}

-	err = drm_lspcon_set_mode(intel_dp->aux.drm_dev, adapter, mode);
+	err = drm_lspcon_set_mode(intel_dp->aux.drm_dev, ddc, mode);
 	if (err < 0) {
 		drm_err(&i915->drm, "LSPCON mode change failed\n");
 		return err;
···
 	enum drm_dp_dual_mode_type adaptor_type;
 	struct intel_dp *intel_dp = lspcon_to_intel_dp(lspcon);
 	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
-	struct i2c_adapter *adapter = &intel_dp->aux.ddc;
+	struct i2c_adapter *ddc = &intel_dp->aux.ddc;
 	enum drm_lspcon_mode expected_mode;

 	expected_mode = lspcon_wake_native_aux_ch(lspcon) ?
···
 		if (retry)
 			usleep_range(500, 1000);

-		adaptor_type = drm_dp_dual_mode_detect(intel_dp->aux.drm_dev, adapter);
+		adaptor_type = drm_dp_dual_mode_detect(intel_dp->aux.drm_dev, ddc);
 		if (adaptor_type == DRM_DP_DUAL_MODE_LSPCON)
 			break;
 	}
+18 -15
drivers/gpu/drm/i915/display/intel_lvds.c
···
 		return -EINVAL;
 	}

+	if (HAS_PCH_SPLIT(i915)) {
+		crtc_state->has_pch_encoder = true;
+		if (!intel_fdi_compute_pipe_bpp(crtc_state))
+			return -EINVAL;
+	}
+
 	if (lvds_encoder->a3_power == LVDS_A3_POWER_UP)
 		lvds_bpp = 8*3;
 	else
 		lvds_bpp = 6*3;

+	/* TODO: Check crtc_state->max_link_bpp_x16 instead of bw_constrained */
 	if (lvds_bpp != crtc_state->pipe_bpp && !crtc_state->bw_constrained) {
 		drm_dbg_kms(&i915->drm,
			    "forcing display bpp (was %d) to LVDS (%d)\n",
···

 	if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN)
 		return -EINVAL;
-
-	if (HAS_PCH_SPLIT(i915))
-		crtc_state->has_pch_encoder = true;

 	ret = intel_panel_fitting(crtc_state, conn_state);
 	if (ret)
···
 	struct intel_encoder *encoder;
 	i915_reg_t lvds_reg;
 	u32 lvds;
-	u8 pin;
+	u8 ddc_pin;

 	/* Skip init on machines we know falsely report LVDS */
 	if (dmi_check_system(intel_no_lvds)) {
···
 		return;
 	}

-	pin = GMBUS_PIN_PANEL;
-	if (!intel_bios_is_lvds_present(i915, &pin)) {
+	ddc_pin = GMBUS_PIN_PANEL;
+	if (!intel_bios_is_lvds_present(i915, &ddc_pin)) {
 		if ((lvds & LVDS_PORT_EN) == 0) {
 			drm_dbg_kms(&i915->drm,
				    "LVDS is not present in VBT\n");
···
 	lvds_encoder->attached_connector = connector;
 	encoder = &lvds_encoder->base;

-	drm_connector_init(&i915->drm, &connector->base, &intel_lvds_connector_funcs,
-			   DRM_MODE_CONNECTOR_LVDS);
+	drm_connector_init_with_ddc(&i915->drm, &connector->base,
+				    &intel_lvds_connector_funcs,
+				    DRM_MODE_CONNECTOR_LVDS,
+				    intel_gmbus_get_adapter(i915, ddc_pin));

 	drm_encoder_init(&i915->drm, &encoder->base, &intel_lvds_enc_funcs,
			 DRM_MODE_ENCODER_LVDS, "LVDS");
···
	 * preferred mode is the right one.
	 */
 	mutex_lock(&i915->drm.mode_config.mutex);
-	if (vga_switcheroo_handler_flags() & VGA_SWITCHEROO_CAN_SWITCH_DDC) {
-		drm_edid = drm_edid_read_switcheroo(&connector->base,
-						    intel_gmbus_get_adapter(i915, pin));
-	} else {
-		drm_edid = drm_edid_read_ddc(&connector->base,
-					     intel_gmbus_get_adapter(i915, pin));
-	}
+	if (vga_switcheroo_handler_flags() & VGA_SWITCHEROO_CAN_SWITCH_DDC)
+		drm_edid = drm_edid_read_switcheroo(&connector->base, connector->base.ddc);
+	else
+		drm_edid = drm_edid_read_ddc(&connector->base, connector->base.ddc);
 	if (drm_edid) {
 		if (drm_edid_connector_update(&connector->base, drm_edid) ||
		    !drm_edid_connector_add_modes(&connector->base)) {
+2
drivers/gpu/drm/i915/display/intel_overlay.c
···
 #include <drm/drm_fourcc.h>

 #include "gem/i915_gem_internal.h"
+#include "gem/i915_gem_object_frontbuffer.h"
 #include "gem/i915_gem_pm.h"
 #include "gt/intel_gpu_commands.h"
 #include "gt/intel_ring.h"

 #include "i915_drv.h"
 #include "i915_reg.h"
+#include "intel_color_regs.h"
 #include "intel_de.h"
 #include "intel_display_types.h"
 #include "intel_frontbuffer.h"
+4 -13
drivers/gpu/drm/i915/display/intel_panel.c
···
			    struct drm_display_mode, head);
 }

-static bool is_in_vrr_range(struct intel_connector *connector, int vrefresh)
-{
-	const struct drm_display_info *info = &connector->base.display_info;
-
-	return intel_vrr_is_capable(connector) &&
-		vrefresh >= info->monitor_range.min_vfreq &&
-		vrefresh <= info->monitor_range.max_vfreq;
-}
-
 static bool is_best_fixed_mode(struct intel_connector *connector,
			       int vrefresh, int fixed_mode_vrefresh,
			       const struct drm_display_mode *best_mode)
···
	 * vrefresh, which we can then reduce to match the requested
	 * vrefresh by extending the vblank length.
	 */
-	if (is_in_vrr_range(connector, vrefresh) &&
-	    is_in_vrr_range(connector, fixed_mode_vrefresh) &&
+	if (intel_vrr_is_in_range(connector, vrefresh) &&
+	    intel_vrr_is_in_range(connector, fixed_mode_vrefresh) &&
	    fixed_mode_vrefresh < vrefresh)
		return false;
···
	 * Assume that we shouldn't muck about with the
	 * timings if they don't land in the VRR range.
	 */
-	is_vrr = is_in_vrr_range(connector, vrefresh) &&
-		is_in_vrr_range(connector, fixed_mode_vrefresh);
+	is_vrr = intel_vrr_is_in_range(connector, vrefresh) &&
+		intel_vrr_is_in_range(connector, fixed_mode_vrefresh);

	if (!is_vrr) {
		/*
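The helper that moved out of intel_panel.c checks whether a refresh rate is reachable through VRR: the panel must be VRR capable and the rate must land inside the EDID-advertised monitor range. A self-contained sketch of that predicate, with a mock range struct in place of `drm_display_info` (names are illustrative):

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-in for drm_display_info's monitor range. */
struct mock_monitor_range {
	int min_vfreq;
	int max_vfreq;
};

/*
 * The check behind intel_vrr_is_in_range(): a vrefresh can only be
 * synthesized via VRR if the sink is VRR capable and the rate lies
 * within [min_vfreq, max_vfreq] from the EDID monitor range block.
 */
static bool mock_vrr_is_in_range(bool vrr_capable,
				 const struct mock_monitor_range *range,
				 int vrefresh)
{
	return vrr_capable &&
		vrefresh >= range->min_vfreq &&
		vrefresh <= range->max_vfreq;
}
```

The fixed-mode selection above calls this twice (for the requested and the fixed-mode vrefresh) because a lower fixed-mode rate can only be stretched up to the requested rate when both endpoints are inside the VRR window.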
+1
drivers/gpu/drm/i915/display/intel_plane_initial.c
···
 #include "intel_display.h"
 #include "intel_display_types.h"
 #include "intel_fb.h"
+#include "intel_frontbuffer.h"
 #include "intel_plane_initial.h"

 static bool
+1 -1
drivers/gpu/drm/i915/display/intel_pmdemand.c
···
					    &pmdemand_state->base,
					    &intel_pmdemand_funcs);

-	if (IS_MTL_DISPLAY_STEP(i915, STEP_A0, STEP_C0))
+	if (IS_DISPLAY_IP_STEP(i915, IP_VER(14, 0), STEP_A0, STEP_C0))
		/* Wa_14016740474 */
		intel_de_rmw(i915, XELPD_CHICKEN_DCPR_3, 0, DMD_RSP_TIMEOUT_DISABLE);

+37 -15
drivers/gpu/drm/i915/display/intel_psr.c
···

 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_damage_helper.h>
+#include <drm/drm_debugfs.h>

 #include "i915_drv.h"
 #include "i915_reg.h"
···
 #include "intel_display_types.h"
 #include "intel_dp.h"
 #include "intel_dp_aux.h"
+#include "intel_frontbuffer.h"
 #include "intel_hdmi.h"
 #include "intel_psr.h"
 #include "intel_psr_regs.h"
···
 	bool set_wa_bit = false;

 	/* Wa_14015648006 */
-	if (IS_MTL_DISPLAY_STEP(dev_priv, STEP_A0, STEP_B0) ||
-	    IS_DISPLAY_VER(dev_priv, 11, 13))
+	if (IS_DISPLAY_VER(dev_priv, 11, 14))
 		set_wa_bit |= crtc_state->wm_level_disabled;

 	/* Wa_16013835468 */
···
	 * All supported adlp panels have 1-based X granularity, this may
	 * cause issues if non-supported panels are used.
	 */
-	if (IS_MTL_DISPLAY_STEP(dev_priv, STEP_A0, STEP_B0))
+	if (IS_DISPLAY_IP_STEP(dev_priv, IP_VER(14, 0), STEP_A0, STEP_B0))
 		intel_de_rmw(dev_priv, MTL_CHICKEN_TRANS(cpu_transcoder), 0,
			     ADLP_1_BASED_X_GRANULARITY);
 	else if (IS_ALDERLAKE_P(dev_priv))
···
			     ADLP_1_BASED_X_GRANULARITY);

 	/* Wa_16012604467:adlp,mtl[a0,b0] */
-	if (IS_MTL_DISPLAY_STEP(dev_priv, STEP_A0, STEP_B0))
+	if (IS_DISPLAY_IP_STEP(dev_priv, IP_VER(14, 0), STEP_A0, STEP_B0))
 		intel_de_rmw(dev_priv,
			     MTL_CLKGATE_DIS_TRANS(cpu_transcoder), 0,
			     MTL_CLKGATE_DIS_TRANS_DMASC_GATING_DIS);
···

 	if (intel_dp->psr.psr2_enabled) {
 		/* Wa_16012604467:adlp,mtl[a0,b0] */
-		if (IS_MTL_DISPLAY_STEP(dev_priv, STEP_A0, STEP_B0))
+		if (IS_DISPLAY_IP_STEP(dev_priv, IP_VER(14, 0), STEP_A0, STEP_B0))
 			intel_de_rmw(dev_priv,
				     MTL_CLKGATE_DIS_TRANS(cpu_transcoder),
				     MTL_CLKGATE_DIS_TRANS_DMASC_GATING_DIS, 0);
···
 		goto skip_sel_fetch_set_loop;

 	/* Wa_14014971492 */
-	if ((IS_MTL_DISPLAY_STEP(dev_priv, STEP_A0, STEP_B0) ||
+	if ((IS_DISPLAY_IP_STEP(dev_priv, IP_VER(14, 0), STEP_A0, STEP_B0) ||
	     IS_ALDERLAKE_P(dev_priv) || IS_TIGERLAKE(dev_priv)) &&
	    crtc_state->splitter.enable)
 		pipe_clip.y1 = 0;
···
 	/* Force a PSR exit when enabling CRC to avoid CRC timeouts */
 	if (crtc_state->crc_enabled && psr->enabled)
 		psr_force_hw_tracking_exit(intel_dp);
+
+	/*
+	 * Clear possible busy bits in case we have
+	 * invalidate -> flip -> flush sequence.
+	 */
+	intel_dp->psr.busy_frontbuffer_bits = 0;

 	mutex_unlock(&psr->lock);
 }
···
 	};
 	const char *str;
 	int ret;
-	u8 val;
+	u8 status, error_status;

 	if (!CAN_PSR(intel_dp)) {
 		seq_puts(m, "PSR Unsupported\n");
···
 	if (connector->base.status != connector_status_connected)
 		return -ENODEV;

-	ret = drm_dp_dpcd_readb(&intel_dp->aux, DP_PSR_STATUS, &val);
-	if (ret != 1)
-		return ret < 0 ? ret : -EIO;
+	ret = psr_get_status_and_error_status(intel_dp, &status, &error_status);
+	if (ret)
+		return ret;

-	val &= DP_PSR_SINK_STATE_MASK;
-	if (val < ARRAY_SIZE(sink_status))
-		str = sink_status[val];
+	status &= DP_PSR_SINK_STATE_MASK;
+	if (status < ARRAY_SIZE(sink_status))
+		str = sink_status[status];
 	else
 		str = "unknown";

-	seq_printf(m, "Sink PSR status: 0x%x [%s]\n", val, str);
+	seq_printf(m, "Sink PSR status: 0x%x [%s]\n", status, str);

-	return 0;
+	seq_printf(m, "Sink PSR error status: 0x%x", error_status);
+
+	if (error_status & (DP_PSR_RFB_STORAGE_ERROR |
+			    DP_PSR_VSC_SDP_UNCORRECTABLE_ERROR |
+			    DP_PSR_LINK_CRC_ERROR))
+		seq_puts(m, ":\n");
+	else
+		seq_puts(m, "\n");
+	if (error_status & DP_PSR_RFB_STORAGE_ERROR)
+		seq_puts(m, "\tPSR RFB storage error\n");
+	if (error_status & DP_PSR_VSC_SDP_UNCORRECTABLE_ERROR)
+		seq_puts(m, "\tPSR VSC SDP uncorrectable error\n");
+	if (error_status & DP_PSR_LINK_CRC_ERROR)
+		seq_puts(m, "\tPSR Link CRC error\n");
+
+	return ret;
 }
 DEFINE_SHOW_ATTRIBUTE(i915_psr_sink_status);

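The extended debugfs dump above decodes individual DPCD PSR error bits into human-readable lines. A standalone sketch of that decoding, counting how many error lines a given status byte would produce; the bit values mirror the DPCD PSR error status register (DPCD 0x2006) as defined in the DRM DP headers, but are redefined here with `MOCK_` names so the snippet is self-contained:

```c
#include <assert.h>
#include <stdint.h>

/* DPCD PSR error status bits (register 0x2006); mock copies for illustration. */
#define MOCK_DP_PSR_LINK_CRC_ERROR			(1 << 0)
#define MOCK_DP_PSR_RFB_STORAGE_ERROR			(1 << 1)
#define MOCK_DP_PSR_VSC_SDP_UNCORRECTABLE_ERROR		(1 << 2)

/*
 * Count the per-error detail lines the debugfs dump would print for a
 * given sink error status byte; zero means only the summary line appears.
 */
static int mock_psr_error_lines(uint8_t error_status)
{
	int lines = 0;

	if (error_status & MOCK_DP_PSR_RFB_STORAGE_ERROR)
		lines++;
	if (error_status & MOCK_DP_PSR_VSC_SDP_UNCORRECTABLE_ERROR)
		lines++;
	if (error_status & MOCK_DP_PSR_LINK_CRC_ERROR)
		lines++;

	return lines;
}
```

This matches the structure of the debugfs code: a summary line first, followed by a colon and detail lines only when at least one known error bit is set.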
+197 -186
drivers/gpu/drm/i915/display/intel_sdvo.c
···
 #include "intel_crtc.h"
 #include "intel_de.h"
 #include "intel_display_types.h"
+#include "intel_fdi.h"
 #include "intel_fifo_underrun.h"
 #include "intel_gmbus.h"
 #include "intel_hdmi.h"
···
 #define SDVO_LVDS_MASK (SDVO_OUTPUT_LVDS0 | SDVO_OUTPUT_LVDS1)
 #define SDVO_TV_MASK (SDVO_OUTPUT_CVBS0 | SDVO_OUTPUT_SVID0 | SDVO_OUTPUT_YPRPB0)

-#define SDVO_OUTPUT_MASK (SDVO_TMDS_MASK | SDVO_RGB_MASK | SDVO_LVDS_MASK |\
-			  SDVO_TV_MASK)
+#define SDVO_OUTPUT_MASK (SDVO_TMDS_MASK | SDVO_RGB_MASK | SDVO_LVDS_MASK | SDVO_TV_MASK)

-#define IS_TV(c) (c->output_flag & SDVO_TV_MASK)
-#define IS_TMDS(c) (c->output_flag & SDVO_TMDS_MASK)
-#define IS_LVDS(c) (c->output_flag & SDVO_LVDS_MASK)
-#define IS_TV_OR_LVDS(c) (c->output_flag & (SDVO_TV_MASK | SDVO_LVDS_MASK))
-#define IS_DIGITAL(c) (c->output_flag & (SDVO_TMDS_MASK | SDVO_LVDS_MASK))
+#define IS_TV(c) ((c)->output_flag & SDVO_TV_MASK)
+#define IS_TMDS(c) ((c)->output_flag & SDVO_TMDS_MASK)
+#define IS_LVDS(c) ((c)->output_flag & SDVO_LVDS_MASK)
+#define IS_TV_OR_LVDS(c) ((c)->output_flag & (SDVO_TV_MASK | SDVO_LVDS_MASK))
+#define IS_DIGITAL(c) ((c)->output_flag & (SDVO_TMDS_MASK | SDVO_LVDS_MASK))

+#define HAS_DDC(c) ((c)->output_flag & (SDVO_RGB_MASK | SDVO_TMDS_MASK | \
+					SDVO_LVDS_MASK))

 static const char * const tv_format_names[] = {
	"NTSC_M"   , "NTSC_J"  , "NTSC_443",
···

 #define TV_FORMAT_NUM ARRAY_SIZE(tv_format_names)

+struct intel_sdvo;
+
+struct intel_sdvo_ddc {
+	struct i2c_adapter ddc;
+	struct intel_sdvo *sdvo;
+	u8 ddc_bus;
+};
+
 struct intel_sdvo {
 	struct intel_encoder base;

 	struct i2c_adapter *i2c;
 	u8 slave_addr;

-	struct i2c_adapter ddc;
+	struct intel_sdvo_ddc ddc[3];

 	/* Register for the SDVO device: SDVOB or SDVOC */
 	i915_reg_t sdvo_reg;
-
-	/* Active outputs controlled by this SDVO output */
-	u16 controlled_output;

 	/*
	 * Capabilities of the SDVO device returned by
···
 	int pixel_clock_min, pixel_clock_max;

 	/*
-	 * For multiple function SDVO device,
-	 * this is for current attached outputs.
-	 */
-	u16 attached_output;
-
-	/*
	 * Hotplug activation bits for this device
	 */
 	u16 hotplug_active;
-
-	enum port port;
-
-	/* DDC bus used by this SDVO encoder */
-	u8 ddc_bus;

 	/*
	 * the sdvo flag gets lost in round trip: dtd->adjusted_mode->dtd
···
 		return;
 	}

-	if (intel_sdvo->port == PORT_B)
+	if (intel_sdvo->base.port == PORT_B)
 		cval = intel_de_read(dev_priv, GEN3_SDVOC);
 	else
 		bval = intel_de_read(dev_priv, GEN3_SDVOB);
···
 	return NULL;
 }

-#define SDVO_NAME(svdo) ((svdo)->port == PORT_B ? "SDVOB" : "SDVOC")
+#define SDVO_NAME(svdo) ((svdo)->base.port == PORT_B ? "SDVOB" : "SDVOC")

 static void intel_sdvo_debug_write(struct intel_sdvo *intel_sdvo, u8 cmd,
				   const void *args, int args_len)
···

 static bool
 intel_sdvo_set_output_timings_from_mode(struct intel_sdvo *intel_sdvo,
+					struct intel_sdvo_connector *intel_sdvo_connector,
					const struct drm_display_mode *mode)
 {
 	struct intel_sdvo_dtd output_dtd;

 	if (!intel_sdvo_set_target_output(intel_sdvo,
-					  intel_sdvo->attached_output))
+					  intel_sdvo_connector->output_flag))
 		return false;

 	intel_sdvo_get_dtd_from_mode(&output_dtd, mode);
···
 	return true;
 }

-static void i9xx_adjust_sdvo_tv_clock(struct intel_crtc_state *pipe_config)
+static int i9xx_adjust_sdvo_tv_clock(struct intel_crtc_state *pipe_config)
 {
 	struct drm_i915_private *dev_priv = to_i915(pipe_config->uapi.crtc->dev);
-	unsigned dotclock = pipe_config->port_clock;
+	unsigned int dotclock = pipe_config->hw.adjusted_mode.crtc_clock;
 	struct dpll *clock = &pipe_config->dpll;

 	/*
···
 		clock->m1 = 12;
 		clock->m2 = 8;
 	} else {
-		drm_WARN(&dev_priv->drm, 1,
-			 "SDVO TV clock out of range: %i\n", dotclock);
+		drm_dbg_kms(&dev_priv->drm,
+			    "SDVO TV clock out of range: %i\n", dotclock);
+		return -EINVAL;
 	}

 	pipe_config->clock_set = true;
+
+	return 0;
 }

 static bool intel_has_hdmi_sink(struct intel_sdvo_connector *intel_sdvo_connector,
···
 	struct drm_display_mode *adjusted_mode = &pipe_config->hw.adjusted_mode;
 	struct drm_display_mode *mode = &pipe_config->hw.mode;

+	if (HAS_PCH_SPLIT(to_i915(encoder->base.dev))) {
+		pipe_config->has_pch_encoder = true;
+		if (!intel_fdi_compute_pipe_bpp(pipe_config))
+			return -EINVAL;
+	}
+
 	DRM_DEBUG_KMS("forcing bpc to 8 for SDVO\n");
+	/* FIXME: Don't increase pipe_bpp */
 	pipe_config->pipe_bpp = 8*3;
 	pipe_config->sink_format = INTEL_OUTPUT_FORMAT_RGB;
 	pipe_config->output_format = INTEL_OUTPUT_FORMAT_RGB;
-
-	if (HAS_PCH_SPLIT(to_i915(encoder->base.dev)))
-		pipe_config->has_pch_encoder = true;

 	/*
	 * We need to construct preferred input timings based on our
···
	 * the sequence to do it. Oh well.
	 */
 	if (IS_TV(intel_sdvo_connector)) {
-		if (!intel_sdvo_set_output_timings_from_mode(intel_sdvo, mode))
+		if (!intel_sdvo_set_output_timings_from_mode(intel_sdvo,
+							     intel_sdvo_connector,
+							     mode))
 			return -EINVAL;

 		(void) intel_sdvo_get_preferred_input_mode(intel_sdvo,
···
 		if (ret)
 			return ret;

-		if (!intel_sdvo_set_output_timings_from_mode(intel_sdvo, fixed_mode))
+		if (!intel_sdvo_set_output_timings_from_mode(intel_sdvo,
+							     intel_sdvo_connector,
+							     fixed_mode))
 			return -EINVAL;

 		(void) intel_sdvo_get_preferred_input_mode(intel_sdvo,
···
					     conn_state);

 	/* Clock computation needs to happen after pixel multiplier. */
-	if (IS_TV(intel_sdvo_connector))
-		i9xx_adjust_sdvo_tv_clock(pipe_config);
+	if (IS_TV(intel_sdvo_connector)) {
+		int ret;
+
+		ret = i9xx_adjust_sdvo_tv_clock(pipe_config);
+		if (ret)
+			return ret;
+	}

 	if (conn_state->picture_aspect_ratio)
 		adjusted_mode->picture_aspect_ratio =
···
	 * channel on the motherboard. In a two-input device, the first input
	 * will be SDVOB and the second SDVOC.
	 */
-	in_out.in0 = intel_sdvo->attached_output;
+	in_out.in0 = intel_sdvo_connector->output_flag;
 	in_out.in1 = 0;

 	intel_sdvo_set_value(intel_sdvo,
···

 	/* Set the output timings to the screen */
 	if (!intel_sdvo_set_target_output(intel_sdvo,
-					  intel_sdvo->attached_output))
+					  intel_sdvo_connector->output_flag))
 		return;

 	/* lvds has a special fixed output timing. */
···
 		sdvox |= SDVO_BORDER_ENABLE;
 	} else {
 		sdvox = intel_de_read(dev_priv, intel_sdvo->sdvo_reg);
-		if (intel_sdvo->port == PORT_B)
+		if (intel_sdvo->base.port == PORT_B)
 			sdvox &= SDVOB_PRESERVE_MASK;
 		else
 			sdvox &= SDVOC_PRESERVE_MASK;
···
 	struct drm_device *dev = encoder->base.dev;
 	struct drm_i915_private *dev_priv = to_i915(dev);
 	struct intel_sdvo *intel_sdvo = to_sdvo(encoder);
+	struct intel_sdvo_connector *intel_sdvo_connector =
+		to_intel_sdvo_connector(conn_state->connector);
 	struct intel_crtc *crtc = to_intel_crtc(pipe_config->uapi.crtc);
 	u32 temp;
 	bool input1, input2;
···
 	if (0)
 		intel_sdvo_set_encoder_power_state(intel_sdvo,
						   DRM_MODE_DPMS_ON);
-	intel_sdvo_set_active_outputs(intel_sdvo, intel_sdvo->attached_output);
+	intel_sdvo_set_active_outputs(intel_sdvo, intel_sdvo_connector->output_flag);

 	if (pipe_config->has_audio)
 		intel_sdvo_enable_audio(intel_sdvo, pipe_config, conn_state);
···
		    " device_rev_id: %d\n"
		    " sdvo_version_major: %d\n"
		    " sdvo_version_minor: %d\n"
-		    " sdvo_inputs_mask: %d\n"
+		    " sdvo_num_inputs: %d\n"
		    " smooth_scaling: %d\n"
		    " sharp_scaling: %d\n"
		    " up_scaling: %d\n"
···
		    caps->device_rev_id,
		    caps->sdvo_version_major,
		    caps->sdvo_version_minor,
-		    caps->sdvo_inputs_mask,
+		    caps->sdvo_num_inputs,
		    caps->smooth_scaling,
		    caps->sharp_scaling,
		    caps->up_scaling,
···
 	return intel_encoder_hotplug(encoder, connector);
 }

-static bool
-intel_sdvo_multifunc_encoder(struct intel_sdvo *intel_sdvo)
-{
-	/* Is there more than one type of output? */
-	return hweight16(intel_sdvo->caps.output_flags) > 1;
-}
-
 static const struct drm_edid *
 intel_sdvo_get_edid(struct drm_connector *connector)
 {
-	struct intel_sdvo *sdvo = intel_attached_sdvo(to_intel_connector(connector));
-	return drm_edid_read_ddc(connector, &sdvo->ddc);
+	struct i2c_adapter *ddc = connector->ddc;
+
+	if (!ddc)
+		return NULL;
+
+	return drm_edid_read_ddc(connector, ddc);
 }

 /* Mac mini hack -- use the same DDC as the analog connector */
···
 intel_sdvo_get_analog_edid(struct drm_connector *connector)
 {
 	struct drm_i915_private *i915 = to_i915(connector->dev);
-	struct i2c_adapter *i2c;
+	struct i2c_adapter *ddc;

-	i2c = intel_gmbus_get_adapter(i915, i915->display.vbt.crt_ddc_pin);
+	ddc = intel_gmbus_get_adapter(i915, i915->display.vbt.crt_ddc_pin);
+	if (!ddc)
+		return NULL;

-	return drm_edid_read_ddc(connector, i2c);
+	return drm_edid_read_ddc(connector, ddc);
 }

 static enum drm_connector_status
 intel_sdvo_tmds_sink_detect(struct drm_connector *connector)
 {
-	struct intel_sdvo *intel_sdvo = intel_attached_sdvo(to_intel_connector(connector));
 	enum drm_connector_status status;
 	const struct drm_edid *drm_edid;

 	drm_edid = intel_sdvo_get_edid(connector);
-
-	if (!drm_edid && intel_sdvo_multifunc_encoder(intel_sdvo)) {
-		u8 ddc, saved_ddc = intel_sdvo->ddc_bus;
-
-		/*
-		 * Don't use the 1 as the argument of DDC bus switch to get
-		 * the EDID. It is used for SDVO SPD ROM.
-		 */
-		for (ddc = intel_sdvo->ddc_bus >> 1; ddc > 1; ddc >>= 1) {
-			intel_sdvo->ddc_bus = ddc;
-			drm_edid = intel_sdvo_get_edid(connector);
-			if (drm_edid)
-				break;
-		}
-		/*
-		 * If we found the EDID on the other bus,
-		 * assume that is the correct DDC bus.
-		 */
-		if (!drm_edid)
-			intel_sdvo->ddc_bus = saved_ddc;
-	}

 	/*
	 * When there is no edid and no monitor is connected with VGA
···

 	status = connector_status_unknown;
 	if (drm_edid) {
-		const struct edid *edid = drm_edid_raw(drm_edid);
-
 		/* DDC bus is shared, match EDID to connector type */
-		if (edid && edid->input & DRM_EDID_INPUT_DIGITAL)
+		if (drm_edid_is_digital(drm_edid))
 			status = connector_status_connected;
 		else
 			status = connector_status_disconnected;
···
 intel_sdvo_connector_matches_edid(struct intel_sdvo_connector *sdvo,
				  const struct drm_edid *drm_edid)
 {
-	const struct edid *edid = drm_edid_raw(drm_edid);
-	bool monitor_is_digital = !!(edid->input & DRM_EDID_INPUT_DIGITAL);
+	bool monitor_is_digital = drm_edid_is_digital(drm_edid);
 	bool connector_is_digital = !!IS_DIGITAL(sdvo);

 	DRM_DEBUG_KMS("connector_is_digital? %d, monitor_is_digital?
%d\n", ··· 2124 2135 if (!INTEL_DISPLAY_ENABLED(i915)) 2125 2136 return connector_status_disconnected; 2126 2137 2138 + if (!intel_sdvo_set_target_output(intel_sdvo, 2139 + intel_sdvo_connector->output_flag)) 2140 + return connector_status_unknown; 2141 + 2127 2142 if (!intel_sdvo_get_value(intel_sdvo, 2128 2143 SDVO_CMD_GET_ATTACHED_DISPLAYS, 2129 2144 &response, 2)) ··· 2139 2146 2140 2147 if (response == 0) 2141 2148 return connector_status_disconnected; 2142 - 2143 - intel_sdvo->attached_output = response; 2144 2149 2145 2150 if ((intel_sdvo_connector->output_flag & response) == 0) 2146 2151 ret = connector_status_disconnected; ··· 2267 2276 static int intel_sdvo_get_tv_modes(struct drm_connector *connector) 2268 2277 { 2269 2278 struct intel_sdvo *intel_sdvo = intel_attached_sdvo(to_intel_connector(connector)); 2279 + struct intel_sdvo_connector *intel_sdvo_connector = 2280 + to_intel_sdvo_connector(connector); 2270 2281 const struct drm_connector_state *conn_state = connector->state; 2271 2282 struct intel_sdvo_sdtv_resolution_request tv_res; 2272 2283 u32 reply = 0, format_map = 0; ··· 2286 2293 memcpy(&tv_res, &format_map, 2287 2294 min(sizeof(format_map), sizeof(struct intel_sdvo_sdtv_resolution_request))); 2288 2295 2289 - if (!intel_sdvo_set_target_output(intel_sdvo, intel_sdvo->attached_output)) 2296 + if (!intel_sdvo_set_target_output(intel_sdvo, intel_sdvo_connector->output_flag)) 2290 2297 return 0; 2291 2298 2292 2299 BUILD_BUG_ON(sizeof(tv_res) != 3); ··· 2451 2458 return 0; 2452 2459 } 2453 2460 2454 - static int 2455 - intel_sdvo_connector_register(struct drm_connector *connector) 2456 - { 2457 - struct intel_sdvo *sdvo = intel_attached_sdvo(to_intel_connector(connector)); 2458 - int ret; 2459 - 2460 - ret = intel_connector_register(connector); 2461 - if (ret) 2462 - return ret; 2463 - 2464 - return sysfs_create_link(&connector->kdev->kobj, 2465 - &sdvo->ddc.dev.kobj, 2466 - sdvo->ddc.dev.kobj.name); 2467 - } 2468 - 2469 - static void 2470 - 
intel_sdvo_connector_unregister(struct drm_connector *connector) 2471 - { 2472 - struct intel_sdvo *sdvo = intel_attached_sdvo(to_intel_connector(connector)); 2473 - 2474 - sysfs_remove_link(&connector->kdev->kobj, 2475 - sdvo->ddc.dev.kobj.name); 2476 - intel_connector_unregister(connector); 2477 - } 2478 - 2479 2461 static struct drm_connector_state * 2480 2462 intel_sdvo_connector_duplicate_state(struct drm_connector *connector) 2481 2463 { ··· 2469 2501 .fill_modes = drm_helper_probe_single_connector_modes, 2470 2502 .atomic_get_property = intel_sdvo_connector_atomic_get_property, 2471 2503 .atomic_set_property = intel_sdvo_connector_atomic_set_property, 2472 - .late_register = intel_sdvo_connector_register, 2473 - .early_unregister = intel_sdvo_connector_unregister, 2504 + .late_register = intel_connector_register, 2505 + .early_unregister = intel_connector_unregister, 2474 2506 .destroy = intel_connector_destroy, 2475 2507 .atomic_destroy_state = drm_atomic_helper_connector_destroy_state, 2476 2508 .atomic_duplicate_state = intel_sdvo_connector_duplicate_state, ··· 2507 2539 .atomic_check = intel_sdvo_atomic_check, 2508 2540 }; 2509 2541 2510 - static void intel_sdvo_enc_destroy(struct drm_encoder *encoder) 2542 + static void intel_sdvo_encoder_destroy(struct drm_encoder *_encoder) 2511 2543 { 2512 - struct intel_sdvo *intel_sdvo = to_sdvo(to_intel_encoder(encoder)); 2544 + struct intel_encoder *encoder = to_intel_encoder(_encoder); 2545 + struct intel_sdvo *sdvo = to_sdvo(encoder); 2546 + int i; 2513 2547 2514 - i2c_del_adapter(&intel_sdvo->ddc); 2515 - intel_encoder_destroy(encoder); 2516 - } 2548 + for (i = 0; i < ARRAY_SIZE(sdvo->ddc); i++) { 2549 + if (sdvo->ddc[i].ddc_bus) 2550 + i2c_del_adapter(&sdvo->ddc[i].ddc); 2551 + } 2517 2552 2518 - static const struct drm_encoder_funcs intel_sdvo_enc_funcs = { 2519 - .destroy = intel_sdvo_enc_destroy, 2553 + drm_encoder_cleanup(&encoder->base); 2554 + kfree(sdvo); 2520 2555 }; 2521 2556 2522 - static void 2523 
- intel_sdvo_guess_ddc_bus(struct intel_sdvo *sdvo) 2557 + static const struct drm_encoder_funcs intel_sdvo_enc_funcs = { 2558 + .destroy = intel_sdvo_encoder_destroy, 2559 + }; 2560 + 2561 + static int 2562 + intel_sdvo_guess_ddc_bus(struct intel_sdvo *sdvo, 2563 + struct intel_sdvo_connector *connector) 2524 2564 { 2525 2565 u16 mask = 0; 2526 - unsigned int num_bits; 2566 + int num_bits; 2527 2567 2528 2568 /* 2529 2569 * Make a mask of outputs less than or equal to our own priority in the 2530 2570 * list. 2531 2571 */ 2532 - switch (sdvo->controlled_output) { 2572 + switch (connector->output_flag) { 2533 2573 case SDVO_OUTPUT_LVDS1: 2534 2574 mask |= SDVO_OUTPUT_LVDS1; 2535 2575 fallthrough; ··· 2566 2590 num_bits = 3; 2567 2591 2568 2592 /* Corresponds to SDVO_CONTROL_BUS_DDCx */ 2569 - sdvo->ddc_bus = 1 << num_bits; 2593 + return num_bits; 2570 2594 } 2571 2595 2572 2596 /* ··· 2576 2600 * DDC bus number assignment is in a priority order of RGB outputs, then TMDS 2577 2601 * outputs, then LVDS outputs. 
2578 2602 */ 2579 - static void 2580 - intel_sdvo_select_ddc_bus(struct drm_i915_private *dev_priv, 2581 - struct intel_sdvo *sdvo) 2603 + static struct intel_sdvo_ddc * 2604 + intel_sdvo_select_ddc_bus(struct intel_sdvo *sdvo, 2605 + struct intel_sdvo_connector *connector) 2582 2606 { 2583 - struct sdvo_device_mapping *mapping; 2607 + struct drm_i915_private *dev_priv = to_i915(sdvo->base.base.dev); 2608 + const struct sdvo_device_mapping *mapping; 2609 + int ddc_bus; 2584 2610 2585 - if (sdvo->port == PORT_B) 2611 + if (sdvo->base.port == PORT_B) 2586 2612 mapping = &dev_priv->display.vbt.sdvo_mappings[0]; 2587 2613 else 2588 2614 mapping = &dev_priv->display.vbt.sdvo_mappings[1]; 2589 2615 2590 2616 if (mapping->initialized) 2591 - sdvo->ddc_bus = 1 << ((mapping->ddc_pin & 0xf0) >> 4); 2617 + ddc_bus = (mapping->ddc_pin & 0xf0) >> 4; 2592 2618 else 2593 - intel_sdvo_guess_ddc_bus(sdvo); 2619 + ddc_bus = intel_sdvo_guess_ddc_bus(sdvo, connector); 2620 + 2621 + if (ddc_bus < 1 || ddc_bus > 3) 2622 + return NULL; 2623 + 2624 + return &sdvo->ddc[ddc_bus - 1]; 2594 2625 } 2595 2626 2596 2627 static void 2597 - intel_sdvo_select_i2c_bus(struct drm_i915_private *dev_priv, 2598 - struct intel_sdvo *sdvo) 2628 + intel_sdvo_select_i2c_bus(struct intel_sdvo *sdvo) 2599 2629 { 2600 - struct sdvo_device_mapping *mapping; 2630 + struct drm_i915_private *dev_priv = to_i915(sdvo->base.base.dev); 2631 + const struct sdvo_device_mapping *mapping; 2601 2632 u8 pin; 2602 2633 2603 - if (sdvo->port == PORT_B) 2634 + if (sdvo->base.port == PORT_B) 2604 2635 mapping = &dev_priv->display.vbt.sdvo_mappings[0]; 2605 2636 else 2606 2637 mapping = &dev_priv->display.vbt.sdvo_mappings[1]; ··· 2617 2634 pin = mapping->i2c_pin; 2618 2635 else 2619 2636 pin = GMBUS_PIN_DPB; 2637 + 2638 + drm_dbg_kms(&dev_priv->drm, "[ENCODER:%d:%s] I2C pin %d, slave addr 0x%x\n", 2639 + sdvo->base.base.base.id, sdvo->base.base.name, 2640 + pin, sdvo->slave_addr); 2620 2641 2621 2642 sdvo->i2c = 
intel_gmbus_get_adapter(dev_priv, pin); 2622 2643 ··· 2646 2659 } 2647 2660 2648 2661 static u8 2649 - intel_sdvo_get_slave_addr(struct drm_i915_private *dev_priv, 2650 - struct intel_sdvo *sdvo) 2662 + intel_sdvo_get_slave_addr(struct intel_sdvo *sdvo) 2651 2663 { 2652 - struct sdvo_device_mapping *my_mapping, *other_mapping; 2664 + struct drm_i915_private *dev_priv = to_i915(sdvo->base.base.dev); 2665 + const struct sdvo_device_mapping *my_mapping, *other_mapping; 2653 2666 2654 - if (sdvo->port == PORT_B) { 2667 + if (sdvo->base.port == PORT_B) { 2655 2668 my_mapping = &dev_priv->display.vbt.sdvo_mappings[0]; 2656 2669 other_mapping = &dev_priv->display.vbt.sdvo_mappings[1]; 2657 2670 } else { ··· 2678 2691 * No SDVO device info is found for another DVO port, 2679 2692 * so use mapping assumption we had before BIOS parsing. 2680 2693 */ 2681 - if (sdvo->port == PORT_B) 2694 + if (sdvo->base.port == PORT_B) 2682 2695 return 0x70; 2683 2696 else 2684 2697 return 0x72; 2685 2698 } 2686 2699 2687 2700 static int 2701 + intel_sdvo_init_ddc_proxy(struct intel_sdvo_ddc *ddc, 2702 + struct intel_sdvo *sdvo, int bit); 2703 + 2704 + static int 2688 2705 intel_sdvo_connector_init(struct intel_sdvo_connector *connector, 2689 2706 struct intel_sdvo *encoder) 2690 2707 { 2691 - struct drm_connector *drm_connector; 2708 + struct drm_i915_private *i915 = to_i915(encoder->base.base.dev); 2709 + struct intel_sdvo_ddc *ddc = NULL; 2692 2710 int ret; 2693 2711 2694 - drm_connector = &connector->base.base; 2695 - ret = drm_connector_init(encoder->base.base.dev, 2696 - drm_connector, 2697 - &intel_sdvo_connector_funcs, 2698 - connector->base.base.connector_type); 2712 + if (HAS_DDC(connector)) 2713 + ddc = intel_sdvo_select_ddc_bus(encoder, connector); 2714 + 2715 + ret = drm_connector_init_with_ddc(encoder->base.base.dev, 2716 + &connector->base.base, 2717 + &intel_sdvo_connector_funcs, 2718 + connector->base.base.connector_type, 2719 + ddc ? 
&ddc->ddc : NULL); 2699 2720 if (ret < 0) 2700 2721 return ret; 2701 2722 2702 - drm_connector_helper_add(drm_connector, 2723 + drm_connector_helper_add(&connector->base.base, 2703 2724 &intel_sdvo_connector_helper_funcs); 2704 2725 2705 2726 connector->base.base.display_info.subpixel_order = SubPixelHorizontalRGB; ··· 2715 2720 connector->base.get_hw_state = intel_sdvo_connector_get_hw_state; 2716 2721 2717 2722 intel_connector_attach_encoder(&connector->base, &encoder->base); 2723 + 2724 + if (ddc) 2725 + drm_dbg_kms(&i915->drm, "[CONNECTOR:%d:%s] using %s\n", 2726 + connector->base.base.base.id, connector->base.base.name, 2727 + ddc->ddc.name); 2718 2728 2719 2729 return 0; 2720 2730 } ··· 2918 2918 if (!intel_panel_preferred_fixed_mode(intel_connector)) { 2919 2919 mutex_lock(&i915->drm.mode_config.mutex); 2920 2920 2921 - intel_ddc_get_modes(connector, &intel_sdvo->ddc); 2921 + intel_ddc_get_modes(connector, connector->ddc); 2922 2922 intel_panel_add_edid_fixed_modes(intel_connector, false); 2923 2923 2924 2924 mutex_unlock(&i915->drm.mode_config.mutex); ··· 2982 2982 SDVO_OUTPUT_LVDS0, 2983 2983 SDVO_OUTPUT_LVDS1, 2984 2984 }; 2985 - struct drm_i915_private *i915 = to_i915(intel_sdvo->base.base.dev); 2986 2985 u16 flags; 2987 2986 int i; 2988 2987 ··· 2992 2993 SDVO_NAME(intel_sdvo), intel_sdvo->caps.output_flags); 2993 2994 return false; 2994 2995 } 2995 - 2996 - intel_sdvo->controlled_output = flags; 2997 - 2998 - intel_sdvo_select_ddc_bus(i915, intel_sdvo); 2999 2996 3000 2997 for (i = 0; i < ARRAY_SIZE(probe_order); i++) { 3001 2998 u16 type = flags & probe_order[i]; ··· 3245 3250 struct i2c_msg *msgs, 3246 3251 int num) 3247 3252 { 3248 - struct intel_sdvo *sdvo = adapter->algo_data; 3253 + struct intel_sdvo_ddc *ddc = adapter->algo_data; 3254 + struct intel_sdvo *sdvo = ddc->sdvo; 3249 3255 3250 - if (!__intel_sdvo_set_control_bus_switch(sdvo, sdvo->ddc_bus)) 3256 + if (!__intel_sdvo_set_control_bus_switch(sdvo, 1 << ddc->ddc_bus)) 3251 3257 return 
-EIO; 3252 3258 3253 3259 return sdvo->i2c->algo->master_xfer(sdvo->i2c, msgs, num); ··· 3256 3260 3257 3261 static u32 intel_sdvo_ddc_proxy_func(struct i2c_adapter *adapter) 3258 3262 { 3259 - struct intel_sdvo *sdvo = adapter->algo_data; 3263 + struct intel_sdvo_ddc *ddc = adapter->algo_data; 3264 + struct intel_sdvo *sdvo = ddc->sdvo; 3265 + 3260 3266 return sdvo->i2c->algo->functionality(sdvo->i2c); 3261 3267 } 3262 3268 ··· 3270 3272 static void proxy_lock_bus(struct i2c_adapter *adapter, 3271 3273 unsigned int flags) 3272 3274 { 3273 - struct intel_sdvo *sdvo = adapter->algo_data; 3275 + struct intel_sdvo_ddc *ddc = adapter->algo_data; 3276 + struct intel_sdvo *sdvo = ddc->sdvo; 3277 + 3274 3278 sdvo->i2c->lock_ops->lock_bus(sdvo->i2c, flags); 3275 3279 } 3276 3280 3277 3281 static int proxy_trylock_bus(struct i2c_adapter *adapter, 3278 3282 unsigned int flags) 3279 3283 { 3280 - struct intel_sdvo *sdvo = adapter->algo_data; 3284 + struct intel_sdvo_ddc *ddc = adapter->algo_data; 3285 + struct intel_sdvo *sdvo = ddc->sdvo; 3286 + 3281 3287 return sdvo->i2c->lock_ops->trylock_bus(sdvo->i2c, flags); 3282 3288 } 3283 3289 3284 3290 static void proxy_unlock_bus(struct i2c_adapter *adapter, 3285 3291 unsigned int flags) 3286 3292 { 3287 - struct intel_sdvo *sdvo = adapter->algo_data; 3293 + struct intel_sdvo_ddc *ddc = adapter->algo_data; 3294 + struct intel_sdvo *sdvo = ddc->sdvo; 3295 + 3288 3296 sdvo->i2c->lock_ops->unlock_bus(sdvo->i2c, flags); 3289 3297 } 3290 3298 ··· 3300 3296 .unlock_bus = proxy_unlock_bus, 3301 3297 }; 3302 3298 3303 - static bool 3304 - intel_sdvo_init_ddc_proxy(struct intel_sdvo *sdvo, 3305 - struct drm_i915_private *dev_priv) 3299 + static int 3300 + intel_sdvo_init_ddc_proxy(struct intel_sdvo_ddc *ddc, 3301 + struct intel_sdvo *sdvo, int ddc_bus) 3306 3302 { 3303 + struct drm_i915_private *dev_priv = to_i915(sdvo->base.base.dev); 3307 3304 struct pci_dev *pdev = to_pci_dev(dev_priv->drm.dev); 3308 3305 3309 - sdvo->ddc.owner = 
THIS_MODULE; 3310 - sdvo->ddc.class = I2C_CLASS_DDC; 3311 - snprintf(sdvo->ddc.name, I2C_NAME_SIZE, "SDVO DDC proxy"); 3312 - sdvo->ddc.dev.parent = &pdev->dev; 3313 - sdvo->ddc.algo_data = sdvo; 3314 - sdvo->ddc.algo = &intel_sdvo_ddc_proxy; 3315 - sdvo->ddc.lock_ops = &proxy_lock_ops; 3306 + ddc->sdvo = sdvo; 3307 + ddc->ddc_bus = ddc_bus; 3316 3308 3317 - return i2c_add_adapter(&sdvo->ddc) == 0; 3309 + ddc->ddc.owner = THIS_MODULE; 3310 + ddc->ddc.class = I2C_CLASS_DDC; 3311 + snprintf(ddc->ddc.name, I2C_NAME_SIZE, "SDVO %c DDC%d", 3312 + port_name(sdvo->base.port), ddc_bus); 3313 + ddc->ddc.dev.parent = &pdev->dev; 3314 + ddc->ddc.algo_data = ddc; 3315 + ddc->ddc.algo = &intel_sdvo_ddc_proxy; 3316 + ddc->ddc.lock_ops = &proxy_lock_ops; 3317 + 3318 + return i2c_add_adapter(&ddc->ddc); 3318 3319 } 3319 3320 3320 3321 static bool is_sdvo_port_valid(struct drm_i915_private *dev_priv, enum port port) ··· 3354 3345 if (!intel_sdvo) 3355 3346 return false; 3356 3347 3357 - intel_sdvo->sdvo_reg = sdvo_reg; 3358 - intel_sdvo->port = port; 3359 - intel_sdvo->slave_addr = 3360 - intel_sdvo_get_slave_addr(dev_priv, intel_sdvo) >> 1; 3361 - intel_sdvo_select_i2c_bus(dev_priv, intel_sdvo); 3362 - if (!intel_sdvo_init_ddc_proxy(intel_sdvo, dev_priv)) 3363 - goto err_i2c_bus; 3364 - 3365 3348 /* encoder type will be decided later */ 3366 3349 intel_encoder = &intel_sdvo->base; 3367 3350 intel_encoder->type = INTEL_OUTPUT_SDVO; 3368 3351 intel_encoder->power_domain = POWER_DOMAIN_PORT_OTHER; 3369 3352 intel_encoder->port = port; 3353 + 3370 3354 drm_encoder_init(&dev_priv->drm, &intel_encoder->base, 3371 3355 &intel_sdvo_enc_funcs, 0, 3372 3356 "SDVO %c", port_name(port)); 3357 + 3358 + intel_sdvo->sdvo_reg = sdvo_reg; 3359 + intel_sdvo->slave_addr = intel_sdvo_get_slave_addr(intel_sdvo) >> 1; 3360 + 3361 + intel_sdvo_select_i2c_bus(intel_sdvo); 3373 3362 3374 3363 /* Read the regs to test if we can talk to the device */ 3375 3364 for (i = 0; i < 0x40; i++) { ··· 3400 3393 
intel_sdvo->colorimetry_cap = 3401 3394 intel_sdvo_get_colorimetry_cap(intel_sdvo); 3402 3395 3396 + for (i = 0; i < ARRAY_SIZE(intel_sdvo->ddc); i++) { 3397 + int ret; 3398 + 3399 + ret = intel_sdvo_init_ddc_proxy(&intel_sdvo->ddc[i], 3400 + intel_sdvo, i + 1); 3401 + if (ret) 3402 + goto err; 3403 + } 3404 + 3403 3405 if (!intel_sdvo_output_setup(intel_sdvo)) { 3404 3406 drm_dbg_kms(&dev_priv->drm, 3405 3407 "SDVO output failed to setup on %s\n", ··· 3422 3406 * hotplug lines. 3423 3407 */ 3424 3408 if (intel_sdvo->hotplug_active) { 3425 - if (intel_sdvo->port == PORT_B) 3409 + if (intel_sdvo->base.port == PORT_B) 3426 3410 intel_encoder->hpd_pin = HPD_SDVO_B; 3427 3411 else 3428 3412 intel_encoder->hpd_pin = HPD_SDVO_C; ··· 3449 3433 3450 3434 drm_dbg_kms(&dev_priv->drm, "%s device VID/DID: %02X:%02X.%02X, " 3451 3435 "clock range %dMHz - %dMHz, " 3452 - "input 1: %c, input 2: %c, " 3436 + "num inputs: %d, " 3453 3437 "output 1: %c, output 2: %c\n", 3454 3438 SDVO_NAME(intel_sdvo), 3455 3439 intel_sdvo->caps.vendor_id, intel_sdvo->caps.device_id, 3456 3440 intel_sdvo->caps.device_rev_id, 3457 3441 intel_sdvo->pixel_clock_min / 1000, 3458 3442 intel_sdvo->pixel_clock_max / 1000, 3459 - (intel_sdvo->caps.sdvo_inputs_mask & 0x1) ? 'Y' : 'N', 3460 - (intel_sdvo->caps.sdvo_inputs_mask & 0x2) ? 'Y' : 'N', 3443 + intel_sdvo->caps.sdvo_num_inputs, 3461 3444 /* check currently supported outputs */ 3462 3445 intel_sdvo->caps.output_flags & 3463 3446 (SDVO_OUTPUT_TMDS0 | SDVO_OUTPUT_RGB0 | ··· 3469 3454 3470 3455 err_output: 3471 3456 intel_sdvo_output_cleanup(intel_sdvo); 3472 - 3473 3457 err: 3474 - drm_encoder_cleanup(&intel_encoder->base); 3475 - i2c_del_adapter(&intel_sdvo->ddc); 3476 - err_i2c_bus: 3477 3458 intel_sdvo_unselect_i2c_bus(intel_sdvo); 3478 - kfree(intel_sdvo); 3459 + intel_sdvo_encoder_destroy(&intel_encoder->base); 3479 3460 3480 3461 return false; 3481 3462 }
+1 -1
drivers/gpu/drm/i915/display/intel_sdvo_regs.h
···
 	u8 device_rev_id;
 	u8 sdvo_version_major;
 	u8 sdvo_version_minor;
-	unsigned int sdvo_inputs_mask:2;
+	unsigned int sdvo_num_inputs:2;
 	unsigned int smooth_scaling:1;
 	unsigned int sharp_scaling:1;
 	unsigned int up_scaling:1;
+1
drivers/gpu/drm/i915/display/intel_sprite.c
···
 #include "intel_de.h"
 #include "intel_display_types.h"
 #include "intel_fb.h"
+#include "intel_frontbuffer.h"
 #include "intel_sprite.h"
 
 static void i9xx_plane_linear_gamma(u16 gamma[8])
+50 -16
drivers/gpu/drm/i915/display/intel_tc.c
··· 260 260 !intel_display_power_is_enabled(i915, tc_port_power_domain(tc))); 261 261 } 262 262 263 - u32 intel_tc_port_get_lane_mask(struct intel_digital_port *dig_port) 263 + static u32 intel_tc_port_get_lane_mask(struct intel_digital_port *dig_port) 264 264 { 265 265 struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev); 266 266 struct intel_tc_port *tc = to_tc_port(dig_port); ··· 290 290 DP_PIN_ASSIGNMENT_SHIFT(tc->phy_fia_idx); 291 291 } 292 292 293 - static int mtl_tc_port_get_pin_assignment_mask(struct intel_digital_port *dig_port) 293 + static int lnl_tc_port_get_max_lane_count(struct intel_digital_port *dig_port) 294 + { 295 + struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev); 296 + enum tc_port tc_port = intel_port_to_tc(i915, dig_port->base.port); 297 + intel_wakeref_t wakeref; 298 + u32 val, pin_assignment; 299 + 300 + with_intel_display_power(i915, POWER_DOMAIN_DISPLAY_CORE, wakeref) 301 + val = intel_de_read(i915, TCSS_DDI_STATUS(tc_port)); 302 + 303 + pin_assignment = 304 + REG_FIELD_GET(TCSS_DDI_STATUS_PIN_ASSIGNMENT_MASK, val); 305 + 306 + switch (pin_assignment) { 307 + default: 308 + MISSING_CASE(pin_assignment); 309 + fallthrough; 310 + case DP_PIN_ASSIGNMENT_D: 311 + return 2; 312 + case DP_PIN_ASSIGNMENT_C: 313 + case DP_PIN_ASSIGNMENT_E: 314 + return 4; 315 + } 316 + } 317 + 318 + static int mtl_tc_port_get_max_lane_count(struct intel_digital_port *dig_port) 294 319 { 295 320 struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev); 296 321 intel_wakeref_t wakeref; ··· 336 311 } 337 312 } 338 313 339 - int intel_tc_port_fia_max_lane_count(struct intel_digital_port *dig_port) 314 + static int intel_tc_port_get_max_lane_count(struct intel_digital_port *dig_port) 340 315 { 341 316 struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev); 342 - struct intel_tc_port *tc = to_tc_port(dig_port); 343 - enum phy phy = intel_port_to_phy(i915, dig_port->base.port); 344 317 intel_wakeref_t wakeref; 345 - u32 
lane_mask; 318 + u32 lane_mask = 0; 346 319 347 - if (!intel_phy_is_tc(i915, phy) || tc->mode != TC_PORT_DP_ALT) 348 - return 4; 349 - 350 - assert_tc_cold_blocked(tc); 351 - 352 - if (DISPLAY_VER(i915) >= 14) 353 - return mtl_tc_port_get_pin_assignment_mask(dig_port); 354 - 355 - lane_mask = 0; 356 320 with_intel_display_power(i915, POWER_DOMAIN_DISPLAY_CORE, wakeref) 357 321 lane_mask = intel_tc_port_get_lane_mask(dig_port); 358 322 ··· 360 346 case 0xf: 361 347 return 4; 362 348 } 349 + } 350 + 351 + int intel_tc_port_max_lane_count(struct intel_digital_port *dig_port) 352 + { 353 + struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev); 354 + struct intel_tc_port *tc = to_tc_port(dig_port); 355 + enum phy phy = intel_port_to_phy(i915, dig_port->base.port); 356 + 357 + if (!intel_phy_is_tc(i915, phy) || tc->mode != TC_PORT_DP_ALT) 358 + return 4; 359 + 360 + assert_tc_cold_blocked(tc); 361 + 362 + if (DISPLAY_VER(i915) >= 20) 363 + return lnl_tc_port_get_max_lane_count(dig_port); 364 + 365 + if (DISPLAY_VER(i915) >= 14) 366 + return mtl_tc_port_get_max_lane_count(dig_port); 367 + 368 + return intel_tc_port_get_max_lane_count(dig_port); 363 369 } 364 370 365 371 void intel_tc_port_set_fia_lane_count(struct intel_digital_port *dig_port, ··· 617 583 struct intel_digital_port *dig_port = tc->dig_port; 618 584 int max_lanes; 619 585 620 - max_lanes = intel_tc_port_fia_max_lane_count(dig_port); 586 + max_lanes = intel_tc_port_max_lane_count(dig_port); 621 587 if (tc->mode == TC_PORT_LEGACY) { 622 588 drm_WARN_ON(&i915->drm, max_lanes != 4); 623 589 return true;
+1 -2
drivers/gpu/drm/i915/display/intel_tc.h
···
 bool intel_tc_port_connected(struct intel_encoder *encoder);
 bool intel_tc_port_connected_locked(struct intel_encoder *encoder);
 
-u32 intel_tc_port_get_lane_mask(struct intel_digital_port *dig_port);
 u32 intel_tc_port_get_pin_assignment_mask(struct intel_digital_port *dig_port);
-int intel_tc_port_fia_max_lane_count(struct intel_digital_port *dig_port);
+int intel_tc_port_max_lane_count(struct intel_digital_port *dig_port);
 void intel_tc_port_set_fia_lane_count(struct intel_digital_port *dig_port,
 				      int required_lanes);
 
+14
drivers/gpu/drm/i915/display/intel_vblank.c
···
 	return (position + crtc->scanline_offset) % vtotal;
 }
 
+int intel_crtc_scanline_to_hw(struct intel_crtc *crtc, int scanline)
+{
+	const struct drm_vblank_crtc *vblank =
+		&crtc->base.dev->vblank[drm_crtc_index(&crtc->base)];
+	const struct drm_display_mode *mode = &vblank->hwmode;
+	int vtotal;
+
+	vtotal = mode->crtc_vtotal;
+	if (mode->flags & DRM_MODE_FLAG_INTERLACE)
+		vtotal /= 2;
+
+	return (scanline + vtotal - crtc->scanline_offset) % vtotal;
+}
+
 static bool i915_get_crtc_scanoutpos(struct drm_crtc *_crtc,
 				     bool in_vblank_irq,
 				     int *vpos, int *hpos,
+1
drivers/gpu/drm/i915/display/intel_vblank.h
···
 void intel_wait_for_pipe_scanline_moving(struct intel_crtc *crtc);
 void intel_crtc_update_active_timings(const struct intel_crtc_state *crtc_state,
 				      bool vrr_enable);
+int intel_crtc_scanline_to_hw(struct intel_crtc *crtc, int scanline);
 
 #endif /* __INTEL_VBLANK_H__ */
+286 -342
drivers/gpu/drm/i915/display/intel_vdsc.c
··· 80 80 int bpc = vdsc_cfg->bits_per_component; 81 81 int bpp = vdsc_cfg->bits_per_pixel >> 4; 82 82 int qp_bpc_modifier = (bpc - 8) * 2; 83 + int uncompressed_bpg_rate; 84 + int first_line_bpg_offset; 83 85 u32 res, buf_i, bpp_i; 84 86 85 87 if (vdsc_cfg->slice_height >= 8) 86 - vdsc_cfg->first_line_bpg_offset = 87 - 12 + DIV_ROUND_UP((9 * min(34, vdsc_cfg->slice_height - 8)), 100); 88 + first_line_bpg_offset = 89 + 12 + (9 * min(34, vdsc_cfg->slice_height - 8)) / 100; 88 90 else 89 - vdsc_cfg->first_line_bpg_offset = 2 * (vdsc_cfg->slice_height - 1); 91 + first_line_bpg_offset = 2 * (vdsc_cfg->slice_height - 1); 92 + 93 + uncompressed_bpg_rate = (3 * bpc + (vdsc_cfg->convert_rgb ? 0 : 2)) * 3; 94 + vdsc_cfg->first_line_bpg_offset = clamp(first_line_bpg_offset, 0, 95 + uncompressed_bpg_rate - 3 * bpp); 90 96 91 97 /* 92 98 * According to DSC 1.2 spec in Section 4.1 if native_420 is set: ··· 356 350 return POWER_DOMAIN_TRANSCODER_VDSC_PW2; 357 351 } 358 352 353 + static int intel_dsc_get_vdsc_per_pipe(const struct intel_crtc_state *crtc_state) 354 + { 355 + return crtc_state->dsc.dsc_split ? 2 : 1; 356 + } 357 + 359 358 int intel_dsc_get_num_vdsc_instances(const struct intel_crtc_state *crtc_state) 360 359 { 361 - int num_vdsc_instances = (crtc_state->dsc.dsc_split) ? 
2 : 1; 360 + int num_vdsc_instances = intel_dsc_get_vdsc_per_pipe(crtc_state); 362 361 363 362 if (crtc_state->bigjoiner_pipes) 364 363 num_vdsc_instances *= 2; 365 364 366 365 return num_vdsc_instances; 366 + } 367 + 368 + static void intel_dsc_get_pps_reg(const struct intel_crtc_state *crtc_state, int pps, 369 + i915_reg_t *dsc_reg, int dsc_reg_num) 370 + { 371 + struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc); 372 + enum transcoder cpu_transcoder = crtc_state->cpu_transcoder; 373 + enum pipe pipe = crtc->pipe; 374 + bool pipe_dsc; 375 + 376 + pipe_dsc = is_pipe_dsc(crtc, cpu_transcoder); 377 + 378 + if (dsc_reg_num >= 3) 379 + MISSING_CASE(dsc_reg_num); 380 + if (dsc_reg_num >= 2) 381 + dsc_reg[1] = pipe_dsc ? ICL_DSC1_PPS(pipe, pps) : DSCC_PPS(pps); 382 + if (dsc_reg_num >= 1) 383 + dsc_reg[0] = pipe_dsc ? ICL_DSC0_PPS(pipe, pps) : DSCA_PPS(pps); 384 + } 385 + 386 + static void intel_dsc_pps_write(const struct intel_crtc_state *crtc_state, 387 + int pps, u32 pps_val) 388 + { 389 + struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc); 390 + struct drm_i915_private *i915 = to_i915(crtc->base.dev); 391 + i915_reg_t dsc_reg[2]; 392 + int i, vdsc_per_pipe, dsc_reg_num; 393 + 394 + vdsc_per_pipe = intel_dsc_get_vdsc_per_pipe(crtc_state); 395 + dsc_reg_num = min_t(int, ARRAY_SIZE(dsc_reg), vdsc_per_pipe); 396 + 397 + drm_WARN_ON_ONCE(&i915->drm, dsc_reg_num < vdsc_per_pipe); 398 + 399 + intel_dsc_get_pps_reg(crtc_state, pps, dsc_reg, dsc_reg_num); 400 + 401 + for (i = 0; i < dsc_reg_num; i++) 402 + intel_de_write(i915, dsc_reg[i], pps_val); 367 403 } 368 404 369 405 static void intel_dsc_pps_configure(const struct intel_crtc_state *crtc_state) ··· 415 367 const struct drm_dsc_config *vdsc_cfg = &crtc_state->dsc.config; 416 368 enum transcoder cpu_transcoder = crtc_state->cpu_transcoder; 417 369 enum pipe pipe = crtc->pipe; 418 - u32 pps_val = 0; 370 + u32 pps_val; 419 371 u32 rc_buf_thresh_dword[4]; 420 372 u32 rc_range_params_dword[8]; 421 
 	int i = 0;
 	int num_vdsc_instances = intel_dsc_get_num_vdsc_instances(crtc_state);
+	int vdsc_instances_per_pipe = intel_dsc_get_vdsc_per_pipe(crtc_state);
 
-	/* Populate PICTURE_PARAMETER_SET_0 registers */
-	pps_val = DSC_VER_MAJ | vdsc_cfg->dsc_version_minor <<
-		DSC_VER_MIN_SHIFT |
-		vdsc_cfg->bits_per_component << DSC_BPC_SHIFT |
-		vdsc_cfg->line_buf_depth << DSC_LINE_BUF_DEPTH_SHIFT;
+	/* PPS 0 */
+	pps_val = DSC_PPS0_VER_MAJOR(1) |
+		DSC_PPS0_VER_MINOR(vdsc_cfg->dsc_version_minor) |
+		DSC_PPS0_BPC(vdsc_cfg->bits_per_component) |
+		DSC_PPS0_LINE_BUF_DEPTH(vdsc_cfg->line_buf_depth);
 	if (vdsc_cfg->dsc_version_minor == 2) {
-		pps_val |= DSC_ALT_ICH_SEL;
+		pps_val |= DSC_PPS0_ALT_ICH_SEL;
 		if (vdsc_cfg->native_420)
-			pps_val |= DSC_NATIVE_420_ENABLE;
+			pps_val |= DSC_PPS0_NATIVE_420_ENABLE;
 		if (vdsc_cfg->native_422)
-			pps_val |= DSC_NATIVE_422_ENABLE;
+			pps_val |= DSC_PPS0_NATIVE_422_ENABLE;
 	}
 	if (vdsc_cfg->block_pred_enable)
-		pps_val |= DSC_BLOCK_PREDICTION;
+		pps_val |= DSC_PPS0_BLOCK_PREDICTION;
 	if (vdsc_cfg->convert_rgb)
-		pps_val |= DSC_COLOR_SPACE_CONVERSION;
+		pps_val |= DSC_PPS0_COLOR_SPACE_CONVERSION;
 	if (vdsc_cfg->simple_422)
-		pps_val |= DSC_422_ENABLE;
+		pps_val |= DSC_PPS0_422_ENABLE;
 	if (vdsc_cfg->vbr_enable)
-		pps_val |= DSC_VBR_ENABLE;
+		pps_val |= DSC_PPS0_VBR_ENABLE;
 	drm_dbg_kms(&dev_priv->drm, "PPS0 = 0x%08x\n", pps_val);
-	if (!is_pipe_dsc(crtc, cpu_transcoder)) {
-		intel_de_write(dev_priv, DSCA_PICTURE_PARAMETER_SET_0,
-			       pps_val);
-		/*
-		 * If 2 VDSC instances are needed, configure PPS for second
-		 * VDSC
-		 */
-		if (crtc_state->dsc.dsc_split)
-			intel_de_write(dev_priv, DSCC_PICTURE_PARAMETER_SET_0,
-				       pps_val);
-	} else {
-		intel_de_write(dev_priv,
-			       ICL_DSC0_PICTURE_PARAMETER_SET_0(pipe),
-			       pps_val);
-		if (crtc_state->dsc.dsc_split)
-			intel_de_write(dev_priv,
-				       ICL_DSC1_PICTURE_PARAMETER_SET_0(pipe),
-				       pps_val);
-	}
+	intel_dsc_pps_write(crtc_state, 0, pps_val);
 
-	/* Populate PICTURE_PARAMETER_SET_1 registers */
-	pps_val = 0;
-	pps_val |= DSC_BPP(vdsc_cfg->bits_per_pixel);
+	/* PPS 1 */
+	pps_val = DSC_PPS1_BPP(vdsc_cfg->bits_per_pixel);
 	drm_dbg_kms(&dev_priv->drm, "PPS1 = 0x%08x\n", pps_val);
-	if (!is_pipe_dsc(crtc, cpu_transcoder)) {
-		intel_de_write(dev_priv, DSCA_PICTURE_PARAMETER_SET_1,
-			       pps_val);
-		/*
-		 * If 2 VDSC instances are needed, configure PPS for second
-		 * VDSC
-		 */
-		if (crtc_state->dsc.dsc_split)
-			intel_de_write(dev_priv, DSCC_PICTURE_PARAMETER_SET_1,
-				       pps_val);
-	} else {
-		intel_de_write(dev_priv,
-			       ICL_DSC0_PICTURE_PARAMETER_SET_1(pipe),
-			       pps_val);
-		if (crtc_state->dsc.dsc_split)
-			intel_de_write(dev_priv,
-				       ICL_DSC1_PICTURE_PARAMETER_SET_1(pipe),
-				       pps_val);
-	}
+	intel_dsc_pps_write(crtc_state, 1, pps_val);
 
-	/* Populate PICTURE_PARAMETER_SET_2 registers */
-	pps_val = 0;
-	pps_val |= DSC_PIC_HEIGHT(vdsc_cfg->pic_height) |
-		DSC_PIC_WIDTH(vdsc_cfg->pic_width / num_vdsc_instances);
+	/* PPS 2 */
+	pps_val = DSC_PPS2_PIC_HEIGHT(vdsc_cfg->pic_height) |
+		DSC_PPS2_PIC_WIDTH(vdsc_cfg->pic_width / num_vdsc_instances);
 	drm_dbg_kms(&dev_priv->drm, "PPS2 = 0x%08x\n", pps_val);
-	if (!is_pipe_dsc(crtc, cpu_transcoder)) {
-		intel_de_write(dev_priv, DSCA_PICTURE_PARAMETER_SET_2,
-			       pps_val);
-		/*
-		 * If 2 VDSC instances are needed, configure PPS for second
-		 * VDSC
-		 */
-		if (crtc_state->dsc.dsc_split)
-			intel_de_write(dev_priv, DSCC_PICTURE_PARAMETER_SET_2,
-				       pps_val);
-	} else {
-		intel_de_write(dev_priv,
-			       ICL_DSC0_PICTURE_PARAMETER_SET_2(pipe),
-			       pps_val);
-		if (crtc_state->dsc.dsc_split)
-			intel_de_write(dev_priv,
-				       ICL_DSC1_PICTURE_PARAMETER_SET_2(pipe),
-				       pps_val);
-	}
+	intel_dsc_pps_write(crtc_state, 2, pps_val);
 
-	/* Populate PICTURE_PARAMETER_SET_3 registers */
-	pps_val = 0;
-	pps_val |= DSC_SLICE_HEIGHT(vdsc_cfg->slice_height) |
-		DSC_SLICE_WIDTH(vdsc_cfg->slice_width);
+	/* PPS 3 */
+	pps_val = DSC_PPS3_SLICE_HEIGHT(vdsc_cfg->slice_height) |
+		DSC_PPS3_SLICE_WIDTH(vdsc_cfg->slice_width);
 	drm_dbg_kms(&dev_priv->drm, "PPS3 = 0x%08x\n", pps_val);
-	if (!is_pipe_dsc(crtc, cpu_transcoder)) {
-		intel_de_write(dev_priv, DSCA_PICTURE_PARAMETER_SET_3,
-			       pps_val);
-		/*
-		 * If 2 VDSC instances are needed, configure PPS for second
-		 * VDSC
-		 */
-		if (crtc_state->dsc.dsc_split)
-			intel_de_write(dev_priv, DSCC_PICTURE_PARAMETER_SET_3,
-				       pps_val);
-	} else {
-		intel_de_write(dev_priv,
-			       ICL_DSC0_PICTURE_PARAMETER_SET_3(pipe),
-			       pps_val);
-		if (crtc_state->dsc.dsc_split)
-			intel_de_write(dev_priv,
-				       ICL_DSC1_PICTURE_PARAMETER_SET_3(pipe),
-				       pps_val);
-	}
+	intel_dsc_pps_write(crtc_state, 3, pps_val);
 
-	/* Populate PICTURE_PARAMETER_SET_4 registers */
-	pps_val = 0;
-	pps_val |= DSC_INITIAL_XMIT_DELAY(vdsc_cfg->initial_xmit_delay) |
-		DSC_INITIAL_DEC_DELAY(vdsc_cfg->initial_dec_delay);
+	/* PPS 4 */
+	pps_val = DSC_PPS4_INITIAL_XMIT_DELAY(vdsc_cfg->initial_xmit_delay) |
+		DSC_PPS4_INITIAL_DEC_DELAY(vdsc_cfg->initial_dec_delay);
 	drm_dbg_kms(&dev_priv->drm, "PPS4 = 0x%08x\n", pps_val);
-	if (!is_pipe_dsc(crtc, cpu_transcoder)) {
-		intel_de_write(dev_priv, DSCA_PICTURE_PARAMETER_SET_4,
-			       pps_val);
-		/*
-		 * If 2 VDSC instances are needed, configure PPS for second
-		 * VDSC
-		 */
-		if (crtc_state->dsc.dsc_split)
-			intel_de_write(dev_priv, DSCC_PICTURE_PARAMETER_SET_4,
-				       pps_val);
-	} else {
-		intel_de_write(dev_priv,
-			       ICL_DSC0_PICTURE_PARAMETER_SET_4(pipe),
-			       pps_val);
-		if (crtc_state->dsc.dsc_split)
-			intel_de_write(dev_priv,
-				       ICL_DSC1_PICTURE_PARAMETER_SET_4(pipe),
-				       pps_val);
-	}
+	intel_dsc_pps_write(crtc_state, 4, pps_val);
 
-	/* Populate PICTURE_PARAMETER_SET_5 registers */
-	pps_val = 0;
-	pps_val |= DSC_SCALE_INC_INT(vdsc_cfg->scale_increment_interval) |
-		DSC_SCALE_DEC_INT(vdsc_cfg->scale_decrement_interval);
+	/* PPS 5 */
+	pps_val = DSC_PPS5_SCALE_INC_INT(vdsc_cfg->scale_increment_interval) |
+		DSC_PPS5_SCALE_DEC_INT(vdsc_cfg->scale_decrement_interval);
 	drm_dbg_kms(&dev_priv->drm, "PPS5 = 0x%08x\n", pps_val);
-	if (!is_pipe_dsc(crtc, cpu_transcoder)) {
-		intel_de_write(dev_priv, DSCA_PICTURE_PARAMETER_SET_5,
-			       pps_val);
-		/*
-		 * If 2 VDSC instances are needed, configure PPS for second
-		 * VDSC
-		 */
-		if (crtc_state->dsc.dsc_split)
-			intel_de_write(dev_priv, DSCC_PICTURE_PARAMETER_SET_5,
-				       pps_val);
-	} else {
-		intel_de_write(dev_priv,
-			       ICL_DSC0_PICTURE_PARAMETER_SET_5(pipe),
-			       pps_val);
-		if (crtc_state->dsc.dsc_split)
-			intel_de_write(dev_priv,
-				       ICL_DSC1_PICTURE_PARAMETER_SET_5(pipe),
-				       pps_val);
-	}
+	intel_dsc_pps_write(crtc_state, 5, pps_val);
 
-	/* Populate PICTURE_PARAMETER_SET_6 registers */
-	pps_val = 0;
-	pps_val |= DSC_INITIAL_SCALE_VALUE(vdsc_cfg->initial_scale_value) |
-		DSC_FIRST_LINE_BPG_OFFSET(vdsc_cfg->first_line_bpg_offset) |
-		DSC_FLATNESS_MIN_QP(vdsc_cfg->flatness_min_qp) |
-		DSC_FLATNESS_MAX_QP(vdsc_cfg->flatness_max_qp);
+	/* PPS 6 */
+	pps_val = DSC_PPS6_INITIAL_SCALE_VALUE(vdsc_cfg->initial_scale_value) |
+		DSC_PPS6_FIRST_LINE_BPG_OFFSET(vdsc_cfg->first_line_bpg_offset) |
+		DSC_PPS6_FLATNESS_MIN_QP(vdsc_cfg->flatness_min_qp) |
+		DSC_PPS6_FLATNESS_MAX_QP(vdsc_cfg->flatness_max_qp);
 	drm_dbg_kms(&dev_priv->drm, "PPS6 = 0x%08x\n", pps_val);
-	if (!is_pipe_dsc(crtc, cpu_transcoder)) {
-		intel_de_write(dev_priv, DSCA_PICTURE_PARAMETER_SET_6,
-			       pps_val);
-		/*
-		 * If 2 VDSC instances are needed, configure PPS for second
-		 * VDSC
-		 */
-		if (crtc_state->dsc.dsc_split)
-			intel_de_write(dev_priv, DSCC_PICTURE_PARAMETER_SET_6,
-				       pps_val);
-	} else {
-		intel_de_write(dev_priv,
-			       ICL_DSC0_PICTURE_PARAMETER_SET_6(pipe),
-			       pps_val);
-		if (crtc_state->dsc.dsc_split)
-			intel_de_write(dev_priv,
-				       ICL_DSC1_PICTURE_PARAMETER_SET_6(pipe),
-				       pps_val);
-	}
+	intel_dsc_pps_write(crtc_state, 6, pps_val);
 
-	/* Populate PICTURE_PARAMETER_SET_7 registers */
-	pps_val = 0;
-	pps_val |= DSC_SLICE_BPG_OFFSET(vdsc_cfg->slice_bpg_offset) |
-		DSC_NFL_BPG_OFFSET(vdsc_cfg->nfl_bpg_offset);
+	/* PPS 7 */
+	pps_val = DSC_PPS7_SLICE_BPG_OFFSET(vdsc_cfg->slice_bpg_offset) |
+		DSC_PPS7_NFL_BPG_OFFSET(vdsc_cfg->nfl_bpg_offset);
 	drm_dbg_kms(&dev_priv->drm, "PPS7 = 0x%08x\n", pps_val);
-	if (!is_pipe_dsc(crtc, cpu_transcoder)) {
-		intel_de_write(dev_priv, DSCA_PICTURE_PARAMETER_SET_7,
-			       pps_val);
-		/*
-		 * If 2 VDSC instances are needed, configure PPS for second
-		 * VDSC
-		 */
-		if (crtc_state->dsc.dsc_split)
-			intel_de_write(dev_priv, DSCC_PICTURE_PARAMETER_SET_7,
-				       pps_val);
-	} else {
-		intel_de_write(dev_priv,
-			       ICL_DSC0_PICTURE_PARAMETER_SET_7(pipe),
-			       pps_val);
-		if (crtc_state->dsc.dsc_split)
-			intel_de_write(dev_priv,
-				       ICL_DSC1_PICTURE_PARAMETER_SET_7(pipe),
-				       pps_val);
-	}
+	intel_dsc_pps_write(crtc_state, 7, pps_val);
 
-	/* Populate PICTURE_PARAMETER_SET_8 registers */
-	pps_val = 0;
-	pps_val |= DSC_FINAL_OFFSET(vdsc_cfg->final_offset) |
-		DSC_INITIAL_OFFSET(vdsc_cfg->initial_offset);
+	/* PPS 8 */
+	pps_val = DSC_PPS8_FINAL_OFFSET(vdsc_cfg->final_offset) |
+		DSC_PPS8_INITIAL_OFFSET(vdsc_cfg->initial_offset);
 	drm_dbg_kms(&dev_priv->drm, "PPS8 = 0x%08x\n", pps_val);
-	if (!is_pipe_dsc(crtc, cpu_transcoder)) {
-		intel_de_write(dev_priv, DSCA_PICTURE_PARAMETER_SET_8,
-			       pps_val);
-		/*
-		 * If 2 VDSC instances are needed, configure PPS for second
-		 * VDSC
-		 */
-		if (crtc_state->dsc.dsc_split)
-			intel_de_write(dev_priv, DSCC_PICTURE_PARAMETER_SET_8,
-				       pps_val);
-	} else {
-		intel_de_write(dev_priv,
-			       ICL_DSC0_PICTURE_PARAMETER_SET_8(pipe),
-			       pps_val);
-		if (crtc_state->dsc.dsc_split)
-			intel_de_write(dev_priv,
-				       ICL_DSC1_PICTURE_PARAMETER_SET_8(pipe),
-				       pps_val);
-	}
+	intel_dsc_pps_write(crtc_state, 8, pps_val);
 
-	/* Populate PICTURE_PARAMETER_SET_9 registers */
-	pps_val = 0;
-	pps_val |= DSC_RC_MODEL_SIZE(vdsc_cfg->rc_model_size) |
-		DSC_RC_EDGE_FACTOR(DSC_RC_EDGE_FACTOR_CONST);
+	/* PPS 9 */
+	pps_val = DSC_PPS9_RC_MODEL_SIZE(vdsc_cfg->rc_model_size) |
+		DSC_PPS9_RC_EDGE_FACTOR(DSC_RC_EDGE_FACTOR_CONST);
 	drm_dbg_kms(&dev_priv->drm, "PPS9 = 0x%08x\n", pps_val);
-	if (!is_pipe_dsc(crtc, cpu_transcoder)) {
-		intel_de_write(dev_priv, DSCA_PICTURE_PARAMETER_SET_9,
-			       pps_val);
-		/*
-		 * If 2 VDSC instances are needed, configure PPS for second
-		 * VDSC
-		 */
-		if (crtc_state->dsc.dsc_split)
-			intel_de_write(dev_priv, DSCC_PICTURE_PARAMETER_SET_9,
-				       pps_val);
-	} else {
-		intel_de_write(dev_priv,
-			       ICL_DSC0_PICTURE_PARAMETER_SET_9(pipe),
-			       pps_val);
-		if (crtc_state->dsc.dsc_split)
-			intel_de_write(dev_priv,
-				       ICL_DSC1_PICTURE_PARAMETER_SET_9(pipe),
-				       pps_val);
-	}
+	intel_dsc_pps_write(crtc_state, 9, pps_val);
 
-	/* Populate PICTURE_PARAMETER_SET_10 registers */
-	pps_val = 0;
-	pps_val |= DSC_RC_QUANT_INC_LIMIT0(vdsc_cfg->rc_quant_incr_limit0) |
-		DSC_RC_QUANT_INC_LIMIT1(vdsc_cfg->rc_quant_incr_limit1) |
-		DSC_RC_TARGET_OFF_HIGH(DSC_RC_TGT_OFFSET_HI_CONST) |
-		DSC_RC_TARGET_OFF_LOW(DSC_RC_TGT_OFFSET_LO_CONST);
+	/* PPS 10 */
+	pps_val = DSC_PPS10_RC_QUANT_INC_LIMIT0(vdsc_cfg->rc_quant_incr_limit0) |
+		DSC_PPS10_RC_QUANT_INC_LIMIT1(vdsc_cfg->rc_quant_incr_limit1) |
+		DSC_PPS10_RC_TARGET_OFF_HIGH(DSC_RC_TGT_OFFSET_HI_CONST) |
+		DSC_PPS10_RC_TARGET_OFF_LOW(DSC_RC_TGT_OFFSET_LO_CONST);
 	drm_dbg_kms(&dev_priv->drm, "PPS10 = 0x%08x\n", pps_val);
-	if (!is_pipe_dsc(crtc, cpu_transcoder)) {
-		intel_de_write(dev_priv, DSCA_PICTURE_PARAMETER_SET_10,
-			       pps_val);
-		/*
-		 * If 2 VDSC instances are needed, configure PPS for second
-		 * VDSC
-		 */
-		if (crtc_state->dsc.dsc_split)
-			intel_de_write(dev_priv,
-				       DSCC_PICTURE_PARAMETER_SET_10, pps_val);
-	} else {
-		intel_de_write(dev_priv,
-			       ICL_DSC0_PICTURE_PARAMETER_SET_10(pipe),
-			       pps_val);
-		if (crtc_state->dsc.dsc_split)
-			intel_de_write(dev_priv,
-				       ICL_DSC1_PICTURE_PARAMETER_SET_10(pipe),
-				       pps_val);
-	}
+	intel_dsc_pps_write(crtc_state, 10, pps_val);
 
-	/* Populate Picture parameter set 16 */
-	pps_val = 0;
-	pps_val |= DSC_SLICE_CHUNK_SIZE(vdsc_cfg->slice_chunk_size) |
-		DSC_SLICE_PER_LINE((vdsc_cfg->pic_width / num_vdsc_instances) /
-				   vdsc_cfg->slice_width) |
-		DSC_SLICE_ROW_PER_FRAME(vdsc_cfg->pic_height /
-					vdsc_cfg->slice_height);
+	/* PPS 16 */
+	pps_val = DSC_PPS16_SLICE_CHUNK_SIZE(vdsc_cfg->slice_chunk_size) |
+		DSC_PPS16_SLICE_PER_LINE((vdsc_cfg->pic_width / num_vdsc_instances) /
+					 vdsc_cfg->slice_width) |
+		DSC_PPS16_SLICE_ROW_PER_FRAME(vdsc_cfg->pic_height /
+					      vdsc_cfg->slice_height);
 	drm_dbg_kms(&dev_priv->drm, "PPS16 = 0x%08x\n", pps_val);
-	if (!is_pipe_dsc(crtc, cpu_transcoder)) {
-		intel_de_write(dev_priv, DSCA_PICTURE_PARAMETER_SET_16,
-			       pps_val);
-		/*
-		 * If 2 VDSC instances are needed, configure PPS for second
-		 * VDSC
-		 */
-		if (crtc_state->dsc.dsc_split)
-			intel_de_write(dev_priv,
-				       DSCC_PICTURE_PARAMETER_SET_16, pps_val);
-	} else {
-		intel_de_write(dev_priv,
-			       ICL_DSC0_PICTURE_PARAMETER_SET_16(pipe),
-			       pps_val);
-		if (crtc_state->dsc.dsc_split)
-			intel_de_write(dev_priv,
-				       ICL_DSC1_PICTURE_PARAMETER_SET_16(pipe),
-				       pps_val);
-	}
+	intel_dsc_pps_write(crtc_state, 16, pps_val);
 
 	if (DISPLAY_VER(dev_priv) >= 14) {
-		/* Populate PICTURE_PARAMETER_SET_17 registers */
-		pps_val = 0;
-		pps_val |= DSC_SL_BPG_OFFSET(vdsc_cfg->second_line_bpg_offset);
+		/* PPS 17 */
+		pps_val = DSC_PPS17_SL_BPG_OFFSET(vdsc_cfg->second_line_bpg_offset);
 		drm_dbg_kms(&dev_priv->drm, "PPS17 = 0x%08x\n", pps_val);
-		intel_de_write(dev_priv,
-			       MTL_DSC0_PICTURE_PARAMETER_SET_17(pipe),
-			       pps_val);
-		if (crtc_state->dsc.dsc_split)
-			intel_de_write(dev_priv,
-				       MTL_DSC1_PICTURE_PARAMETER_SET_17(pipe),
-				       pps_val);
+		intel_dsc_pps_write(crtc_state, 17, pps_val);
 
-		/* Populate PICTURE_PARAMETER_SET_18 registers */
-		pps_val = 0;
-		pps_val |= DSC_NSL_BPG_OFFSET(vdsc_cfg->nsl_bpg_offset) |
-			DSC_SL_OFFSET_ADJ(vdsc_cfg->second_line_offset_adj);
+		/* PPS 18 */
+		pps_val = DSC_PPS18_NSL_BPG_OFFSET(vdsc_cfg->nsl_bpg_offset) |
+			DSC_PPS18_SL_OFFSET_ADJ(vdsc_cfg->second_line_offset_adj);
 		drm_dbg_kms(&dev_priv->drm, "PPS18 = 0x%08x\n", pps_val);
-		intel_de_write(dev_priv,
-			       MTL_DSC0_PICTURE_PARAMETER_SET_18(pipe),
-			       pps_val);
-		if (crtc_state->dsc.dsc_split)
-			intel_de_write(dev_priv,
-				       MTL_DSC1_PICTURE_PARAMETER_SET_18(pipe),
-				       pps_val);
+		intel_dsc_pps_write(crtc_state, 18, pps_val);
 	}
 
 	/* Populate the RC_BUF_THRESH registers */
···
 		       rc_buf_thresh_dword[2]);
 	intel_de_write(dev_priv, DSCA_RC_BUF_THRESH_1_UDW,
 		       rc_buf_thresh_dword[3]);
-	if (crtc_state->dsc.dsc_split) {
+	if (vdsc_instances_per_pipe > 1) {
 		intel_de_write(dev_priv, DSCC_RC_BUF_THRESH_0,
 			       rc_buf_thresh_dword[0]);
 		intel_de_write(dev_priv, DSCC_RC_BUF_THRESH_0_UDW,
···
 			       rc_buf_thresh_dword[2]);
 		intel_de_write(dev_priv, ICL_DSC0_RC_BUF_THRESH_1_UDW(pipe),
 			       rc_buf_thresh_dword[3]);
-		if (crtc_state->dsc.dsc_split) {
+		if (vdsc_instances_per_pipe > 1) {
 			intel_de_write(dev_priv,
 				       ICL_DSC1_RC_BUF_THRESH_0(pipe),
 				       rc_buf_thresh_dword[0]);
···
 		       rc_range_params_dword[6]);
 	intel_de_write(dev_priv, DSCA_RC_RANGE_PARAMETERS_3_UDW,
 		       rc_range_params_dword[7]);
-	if (crtc_state->dsc.dsc_split) {
+	if (vdsc_instances_per_pipe > 1) {
 		intel_de_write(dev_priv, DSCC_RC_RANGE_PARAMETERS_0,
 			       rc_range_params_dword[0]);
 		intel_de_write(dev_priv,
···
 		intel_de_write(dev_priv,
 			       ICL_DSC0_RC_RANGE_PARAMETERS_3_UDW(pipe),
 			       rc_range_params_dword[7]);
-		if (crtc_state->dsc.dsc_split) {
+		if (vdsc_instances_per_pipe > 1) {
 			intel_de_write(dev_priv,
 				       ICL_DSC1_RC_RANGE_PARAMETERS_0(pipe),
 				       rc_range_params_dword[0]);
···
 	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
 	u32 dss_ctl1_val = 0;
 	u32 dss_ctl2_val = 0;
+	int vdsc_instances_per_pipe = intel_dsc_get_vdsc_per_pipe(crtc_state);
 
 	if (!crtc_state->dsc.compression_enable)
 		return;
···
 	intel_dsc_pps_configure(crtc_state);
 
 	dss_ctl2_val |= LEFT_BRANCH_VDSC_ENABLE;
-	if (crtc_state->dsc.dsc_split) {
+	if (vdsc_instances_per_pipe > 1) {
 		dss_ctl2_val |= RIGHT_BRANCH_VDSC_ENABLE;
 		dss_ctl1_val |= JOINER_ENABLE;
 	}
···
 	}
 }
 
+static u32 intel_dsc_pps_read(struct intel_crtc_state *crtc_state, int pps,
+			      bool *check_equal)
+{
+	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
+	struct drm_i915_private *i915 = to_i915(crtc->base.dev);
+	i915_reg_t dsc_reg[2];
+	int i, vdsc_per_pipe, dsc_reg_num;
+	u32 val = 0;
+
+	vdsc_per_pipe = intel_dsc_get_vdsc_per_pipe(crtc_state);
+	dsc_reg_num = min_t(int, ARRAY_SIZE(dsc_reg), vdsc_per_pipe);
+
+	drm_WARN_ON_ONCE(&i915->drm, dsc_reg_num < vdsc_per_pipe);
+
+	intel_dsc_get_pps_reg(crtc_state, pps, dsc_reg, dsc_reg_num);
+
+	if (check_equal)
+		*check_equal = true;
+
+	for (i = 0; i < dsc_reg_num; i++) {
+		u32 tmp;
+
+		tmp = intel_de_read(i915, dsc_reg[i]);
+
+		if (i == 0) {
+			val = tmp;
+		} else if (check_equal && tmp != val) {
+			*check_equal = false;
+			break;
+		} else if (!check_equal) {
+			break;
+		}
+	}
+
+	return val;
+}
+
+static u32 intel_dsc_pps_read_and_verify(struct intel_crtc_state *crtc_state, int pps)
+{
+	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
+	struct drm_i915_private *i915 = to_i915(crtc->base.dev);
+	u32 val;
+	bool all_equal;
+
+	val = intel_dsc_pps_read(crtc_state, pps, &all_equal);
+	drm_WARN_ON(&i915->drm, !all_equal);
+
+	return val;
+}
+
+static void intel_dsc_get_pps_config(struct intel_crtc_state *crtc_state)
+{
+	struct drm_dsc_config *vdsc_cfg = &crtc_state->dsc.config;
+	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
+	struct drm_i915_private *i915 = to_i915(crtc->base.dev);
+	int num_vdsc_instances = intel_dsc_get_num_vdsc_instances(crtc_state);
+	u32 pps_temp;
+
+	/* PPS 0 */
+	pps_temp = intel_dsc_pps_read_and_verify(crtc_state, 0);
+
+	vdsc_cfg->bits_per_component = REG_FIELD_GET(DSC_PPS0_BPC_MASK, pps_temp);
+	vdsc_cfg->line_buf_depth = REG_FIELD_GET(DSC_PPS0_LINE_BUF_DEPTH_MASK, pps_temp);
+	vdsc_cfg->block_pred_enable = pps_temp & DSC_PPS0_BLOCK_PREDICTION;
+	vdsc_cfg->convert_rgb = pps_temp & DSC_PPS0_COLOR_SPACE_CONVERSION;
+	vdsc_cfg->simple_422 = pps_temp & DSC_PPS0_422_ENABLE;
+	vdsc_cfg->native_422 = pps_temp & DSC_PPS0_NATIVE_422_ENABLE;
+	vdsc_cfg->native_420 = pps_temp & DSC_PPS0_NATIVE_420_ENABLE;
+	vdsc_cfg->vbr_enable = pps_temp & DSC_PPS0_VBR_ENABLE;
+
+	/* PPS 1 */
+	pps_temp = intel_dsc_pps_read_and_verify(crtc_state, 1);
+
+	vdsc_cfg->bits_per_pixel = REG_FIELD_GET(DSC_PPS1_BPP_MASK, pps_temp);
+
+	if (vdsc_cfg->native_420)
+		vdsc_cfg->bits_per_pixel >>= 1;
+
+	crtc_state->dsc.compressed_bpp = vdsc_cfg->bits_per_pixel >> 4;
+
+	/* PPS 2 */
+	pps_temp = intel_dsc_pps_read_and_verify(crtc_state, 2);
+
+	vdsc_cfg->pic_width = REG_FIELD_GET(DSC_PPS2_PIC_WIDTH_MASK, pps_temp) * num_vdsc_instances;
+	vdsc_cfg->pic_height = REG_FIELD_GET(DSC_PPS2_PIC_HEIGHT_MASK, pps_temp);
+
+	/* PPS 3 */
+	pps_temp = intel_dsc_pps_read_and_verify(crtc_state, 3);
+
+	vdsc_cfg->slice_width = REG_FIELD_GET(DSC_PPS3_SLICE_WIDTH_MASK, pps_temp);
+	vdsc_cfg->slice_height = REG_FIELD_GET(DSC_PPS3_SLICE_HEIGHT_MASK, pps_temp);
+
+	/* PPS 4 */
+	pps_temp = intel_dsc_pps_read_and_verify(crtc_state, 4);
+
+	vdsc_cfg->initial_dec_delay = REG_FIELD_GET(DSC_PPS4_INITIAL_DEC_DELAY_MASK, pps_temp);
+	vdsc_cfg->initial_xmit_delay = REG_FIELD_GET(DSC_PPS4_INITIAL_XMIT_DELAY_MASK, pps_temp);
+
+	/* PPS 5 */
+	pps_temp = intel_dsc_pps_read_and_verify(crtc_state, 5);
+
+	vdsc_cfg->scale_decrement_interval = REG_FIELD_GET(DSC_PPS5_SCALE_DEC_INT_MASK, pps_temp);
+	vdsc_cfg->scale_increment_interval = REG_FIELD_GET(DSC_PPS5_SCALE_INC_INT_MASK, pps_temp);
+
+	/* PPS 6 */
+	pps_temp = intel_dsc_pps_read_and_verify(crtc_state, 6);
+
+	vdsc_cfg->initial_scale_value = REG_FIELD_GET(DSC_PPS6_INITIAL_SCALE_VALUE_MASK, pps_temp);
+	vdsc_cfg->first_line_bpg_offset = REG_FIELD_GET(DSC_PPS6_FIRST_LINE_BPG_OFFSET_MASK, pps_temp);
+	vdsc_cfg->flatness_min_qp = REG_FIELD_GET(DSC_PPS6_FLATNESS_MIN_QP_MASK, pps_temp);
+	vdsc_cfg->flatness_max_qp = REG_FIELD_GET(DSC_PPS6_FLATNESS_MAX_QP_MASK, pps_temp);
+
+	/* PPS 7 */
+	pps_temp = intel_dsc_pps_read_and_verify(crtc_state, 7);
+
+	vdsc_cfg->nfl_bpg_offset = REG_FIELD_GET(DSC_PPS7_NFL_BPG_OFFSET_MASK, pps_temp);
+	vdsc_cfg->slice_bpg_offset = REG_FIELD_GET(DSC_PPS7_SLICE_BPG_OFFSET_MASK, pps_temp);
+
+	/* PPS 8 */
+	pps_temp = intel_dsc_pps_read_and_verify(crtc_state, 8);
+
+	vdsc_cfg->initial_offset = REG_FIELD_GET(DSC_PPS8_INITIAL_OFFSET_MASK, pps_temp);
+	vdsc_cfg->final_offset = REG_FIELD_GET(DSC_PPS8_FINAL_OFFSET_MASK, pps_temp);
+
+	/* PPS 9 */
+	pps_temp = intel_dsc_pps_read_and_verify(crtc_state, 9);
+
+	vdsc_cfg->rc_model_size = REG_FIELD_GET(DSC_PPS9_RC_MODEL_SIZE_MASK, pps_temp);
+
+	/* PPS 10 */
+	pps_temp = intel_dsc_pps_read_and_verify(crtc_state, 10);
+
+	vdsc_cfg->rc_quant_incr_limit0 = REG_FIELD_GET(DSC_PPS10_RC_QUANT_INC_LIMIT0_MASK, pps_temp);
+	vdsc_cfg->rc_quant_incr_limit1 = REG_FIELD_GET(DSC_PPS10_RC_QUANT_INC_LIMIT1_MASK, pps_temp);
+
+	/* PPS 16 */
+	pps_temp = intel_dsc_pps_read_and_verify(crtc_state, 16);
+
+	vdsc_cfg->slice_chunk_size = REG_FIELD_GET(DSC_PPS16_SLICE_CHUNK_SIZE_MASK, pps_temp);
+
+	if (DISPLAY_VER(i915) >= 14) {
+		/* PPS 17 */
+		pps_temp = intel_dsc_pps_read_and_verify(crtc_state, 17);
+
+		vdsc_cfg->second_line_bpg_offset = REG_FIELD_GET(DSC_PPS17_SL_BPG_OFFSET_MASK, pps_temp);
+
+		/* PPS 18 */
+		pps_temp = intel_dsc_pps_read_and_verify(crtc_state, 18);
+
+		vdsc_cfg->nsl_bpg_offset = REG_FIELD_GET(DSC_PPS18_NSL_BPG_OFFSET_MASK, pps_temp);
+		vdsc_cfg->second_line_offset_adj = REG_FIELD_GET(DSC_PPS18_SL_OFFSET_ADJ_MASK, pps_temp);
+	}
+}
+
 void intel_dsc_get_config(struct intel_crtc_state *crtc_state)
 {
 	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
 	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
-	struct drm_dsc_config *vdsc_cfg = &crtc_state->dsc.config;
 	enum transcoder cpu_transcoder = crtc_state->cpu_transcoder;
-	enum pipe pipe = crtc->pipe;
 	enum intel_display_power_domain power_domain;
 	intel_wakeref_t wakeref;
-	u32 dss_ctl1, dss_ctl2, pps0 = 0, pps1 = 0;
+	u32 dss_ctl1, dss_ctl2;
 
 	if (!intel_dsc_source_support(crtc_state))
 		return;
···
 	crtc_state->dsc.dsc_split = (dss_ctl2 & RIGHT_BRANCH_VDSC_ENABLE) &&
 				    (dss_ctl1 & JOINER_ENABLE);
 
-	/* FIXME: add more state readout as needed */
-
-	/* PPS0 & PPS1 */
-	if (!is_pipe_dsc(crtc, cpu_transcoder)) {
-		pps1 = intel_de_read(dev_priv, DSCA_PICTURE_PARAMETER_SET_1);
-	} else {
-		pps0 = intel_de_read(dev_priv,
-				     ICL_DSC0_PICTURE_PARAMETER_SET_0(pipe));
-		pps1 = intel_de_read(dev_priv,
-				     ICL_DSC0_PICTURE_PARAMETER_SET_1(pipe));
-	}
-
-	vdsc_cfg->bits_per_pixel = pps1;
-
-	if (pps0 & DSC_NATIVE_420_ENABLE)
-		vdsc_cfg->bits_per_pixel >>= 1;
-
-	crtc_state->dsc.compressed_bpp = vdsc_cfg->bits_per_pixel >> 4;
+	intel_dsc_get_pps_config(crtc_state);
 out:
 	intel_display_power_put(dev_priv, power_domain, wakeref);
 }
+113 -258
drivers/gpu/drm/i915/display/intel_vdsc_regs.h
···
 						   _ICL_PIPE_DSS_CTL2_PB, \
 						   _ICL_PIPE_DSS_CTL2_PC)
 
-/* MTL Display Stream Compression registers */
-#define _MTL_DSC0_PICTURE_PARAMETER_SET_17_PB	0x782B4
-#define _MTL_DSC1_PICTURE_PARAMETER_SET_17_PB	0x783B4
-#define _MTL_DSC0_PICTURE_PARAMETER_SET_17_PC	0x784B4
-#define _MTL_DSC1_PICTURE_PARAMETER_SET_17_PC	0x785B4
-#define MTL_DSC0_PICTURE_PARAMETER_SET_17(pipe)	_MMIO_PIPE((pipe) - PIPE_B, \
-							   _MTL_DSC0_PICTURE_PARAMETER_SET_17_PB, \
-							   _MTL_DSC0_PICTURE_PARAMETER_SET_17_PC)
-#define MTL_DSC1_PICTURE_PARAMETER_SET_17(pipe)	_MMIO_PIPE((pipe) - PIPE_B, \
-							   _MTL_DSC1_PICTURE_PARAMETER_SET_17_PB, \
-							   _MTL_DSC1_PICTURE_PARAMETER_SET_17_PC)
-#define DSC_SL_BPG_OFFSET(offset)		((offset) << 27)
-
-#define _MTL_DSC0_PICTURE_PARAMETER_SET_18_PB	0x782B8
-#define _MTL_DSC1_PICTURE_PARAMETER_SET_18_PB	0x783B8
-#define _MTL_DSC0_PICTURE_PARAMETER_SET_18_PC	0x784B8
-#define _MTL_DSC1_PICTURE_PARAMETER_SET_18_PC	0x785B8
-#define MTL_DSC0_PICTURE_PARAMETER_SET_18(pipe)	_MMIO_PIPE((pipe) - PIPE_B, \
-							   _MTL_DSC0_PICTURE_PARAMETER_SET_18_PB, \
-							   _MTL_DSC0_PICTURE_PARAMETER_SET_18_PC)
-#define MTL_DSC1_PICTURE_PARAMETER_SET_18(pipe)	_MMIO_PIPE((pipe) - PIPE_B, \
-							   _MTL_DSC1_PICTURE_PARAMETER_SET_18_PB, \
-							   _MTL_DSC1_PICTURE_PARAMETER_SET_18_PC)
-#define DSC_NSL_BPG_OFFSET(offset)		((offset) << 16)
-#define DSC_SL_OFFSET_ADJ(offset)		((offset) << 0)
-
 /* Icelake Display Stream Compression Registers */
 #define DSCA_PICTURE_PARAMETER_SET_0		_MMIO(0x6B200)
 #define DSCC_PICTURE_PARAMETER_SET_0		_MMIO(0x6BA00)
+#define _DSCA_PPS_0				0x6B200
+#define _DSCC_PPS_0				0x6BA00
+#define DSCA_PPS(pps)				_MMIO(_DSCA_PPS_0 + (pps) * 4)
+#define DSCC_PPS(pps)				_MMIO(_DSCC_PPS_0 + (pps) * 4)
 #define _ICL_DSC0_PICTURE_PARAMETER_SET_0_PB	0x78270
 #define _ICL_DSC1_PICTURE_PARAMETER_SET_0_PB	0x78370
 #define _ICL_DSC0_PICTURE_PARAMETER_SET_0_PC	0x78470
···
 #define ICL_DSC1_PICTURE_PARAMETER_SET_0(pipe)	_MMIO_PIPE((pipe) - PIPE_B, \
 						   _ICL_DSC1_PICTURE_PARAMETER_SET_0_PB, \
 						   _ICL_DSC1_PICTURE_PARAMETER_SET_0_PC)
-#define  DSC_NATIVE_422_ENABLE		BIT(23)
-#define  DSC_NATIVE_420_ENABLE		BIT(22)
-#define  DSC_ALT_ICH_SEL		(1 << 20)
-#define  DSC_VBR_ENABLE			(1 << 19)
-#define  DSC_422_ENABLE			(1 << 18)
-#define  DSC_COLOR_SPACE_CONVERSION	(1 << 17)
-#define  DSC_BLOCK_PREDICTION		(1 << 16)
-#define  DSC_LINE_BUF_DEPTH_SHIFT	12
-#define  DSC_BPC_SHIFT			8
-#define  DSC_VER_MIN_SHIFT		4
-#define  DSC_VER_MAJ			(0x1 << 0)
+#define _ICL_DSC0_PPS_0(pipe)			_PICK_EVEN((pipe) - PIPE_B, \
+							   _ICL_DSC0_PICTURE_PARAMETER_SET_0_PB, \
+							   _ICL_DSC0_PICTURE_PARAMETER_SET_0_PC)
+#define _ICL_DSC1_PPS_0(pipe)			_PICK_EVEN((pipe) - PIPE_B, \
+							   _ICL_DSC1_PICTURE_PARAMETER_SET_0_PB, \
+							   _ICL_DSC1_PICTURE_PARAMETER_SET_0_PC)
+#define ICL_DSC0_PPS(pipe, pps)			_MMIO(_ICL_DSC0_PPS_0(pipe) + ((pps) * 4))
+#define ICL_DSC1_PPS(pipe, pps)			_MMIO(_ICL_DSC1_PPS_0(pipe) + ((pps) * 4))
 
-#define DSCA_PICTURE_PARAMETER_SET_1		_MMIO(0x6B204)
-#define DSCC_PICTURE_PARAMETER_SET_1		_MMIO(0x6BA04)
-#define _ICL_DSC0_PICTURE_PARAMETER_SET_1_PB	0x78274
-#define _ICL_DSC1_PICTURE_PARAMETER_SET_1_PB	0x78374
-#define _ICL_DSC0_PICTURE_PARAMETER_SET_1_PC	0x78474
-#define _ICL_DSC1_PICTURE_PARAMETER_SET_1_PC	0x78574
-#define ICL_DSC0_PICTURE_PARAMETER_SET_1(pipe)	_MMIO_PIPE((pipe) - PIPE_B, \
-						   _ICL_DSC0_PICTURE_PARAMETER_SET_1_PB, \
-						   _ICL_DSC0_PICTURE_PARAMETER_SET_1_PC)
-#define ICL_DSC1_PICTURE_PARAMETER_SET_1(pipe)	_MMIO_PIPE((pipe) - PIPE_B, \
-						   _ICL_DSC1_PICTURE_PARAMETER_SET_1_PB, \
-						   _ICL_DSC1_PICTURE_PARAMETER_SET_1_PC)
-#define  DSC_BPP(bpp)				((bpp) << 0)
+/* PPS 0 */
+#define   DSC_PPS0_NATIVE_422_ENABLE		REG_BIT(23)
+#define   DSC_PPS0_NATIVE_420_ENABLE		REG_BIT(22)
+#define   DSC_PPS0_ALT_ICH_SEL			REG_BIT(20)
+#define   DSC_PPS0_VBR_ENABLE			REG_BIT(19)
+#define   DSC_PPS0_422_ENABLE			REG_BIT(18)
+#define   DSC_PPS0_COLOR_SPACE_CONVERSION	REG_BIT(17)
+#define   DSC_PPS0_BLOCK_PREDICTION		REG_BIT(16)
+#define   DSC_PPS0_LINE_BUF_DEPTH_MASK		REG_GENMASK(15, 12)
+#define   DSC_PPS0_LINE_BUF_DEPTH(depth)	REG_FIELD_PREP(DSC_PPS0_LINE_BUF_DEPTH_MASK, depth)
+#define   DSC_PPS0_BPC_MASK			REG_GENMASK(11, 8)
+#define   DSC_PPS0_BPC(bpc)			REG_FIELD_PREP(DSC_PPS0_BPC_MASK, bpc)
+#define   DSC_PPS0_VER_MINOR_MASK		REG_GENMASK(7, 4)
+#define   DSC_PPS0_VER_MINOR(minor)		REG_FIELD_PREP(DSC_PPS0_VER_MINOR_MASK, minor)
+#define   DSC_PPS0_VER_MAJOR_MASK		REG_GENMASK(3, 0)
+#define   DSC_PPS0_VER_MAJOR(major)		REG_FIELD_PREP(DSC_PPS0_VER_MAJOR_MASK, major)
 
-#define DSCA_PICTURE_PARAMETER_SET_2		_MMIO(0x6B208)
-#define DSCC_PICTURE_PARAMETER_SET_2		_MMIO(0x6BA08)
-#define _ICL_DSC0_PICTURE_PARAMETER_SET_2_PB	0x78278
-#define _ICL_DSC1_PICTURE_PARAMETER_SET_2_PB	0x78378
-#define _ICL_DSC0_PICTURE_PARAMETER_SET_2_PC	0x78478
-#define _ICL_DSC1_PICTURE_PARAMETER_SET_2_PC	0x78578
-#define ICL_DSC0_PICTURE_PARAMETER_SET_2(pipe)	_MMIO_PIPE((pipe) - PIPE_B, \
-						   _ICL_DSC0_PICTURE_PARAMETER_SET_2_PB, \
-						   _ICL_DSC0_PICTURE_PARAMETER_SET_2_PC)
-#define ICL_DSC1_PICTURE_PARAMETER_SET_2(pipe)	_MMIO_PIPE((pipe) - PIPE_B, \
-						   _ICL_DSC1_PICTURE_PARAMETER_SET_2_PB, \
-						   _ICL_DSC1_PICTURE_PARAMETER_SET_2_PC)
-#define  DSC_PIC_WIDTH(pic_width)	((pic_width) << 16)
-#define  DSC_PIC_HEIGHT(pic_height)	((pic_height) << 0)
+/* PPS 1 */
+#define   DSC_PPS1_BPP_MASK			REG_GENMASK(9, 0)
+#define   DSC_PPS1_BPP(bpp)			REG_FIELD_PREP(DSC_PPS1_BPP_MASK, bpp)
 
-#define DSCA_PICTURE_PARAMETER_SET_3		_MMIO(0x6B20C)
-#define DSCC_PICTURE_PARAMETER_SET_3		_MMIO(0x6BA0C)
-#define _ICL_DSC0_PICTURE_PARAMETER_SET_3_PB	0x7827C
-#define _ICL_DSC1_PICTURE_PARAMETER_SET_3_PB	0x7837C
-#define _ICL_DSC0_PICTURE_PARAMETER_SET_3_PC	0x7847C
-#define _ICL_DSC1_PICTURE_PARAMETER_SET_3_PC	0x7857C
-#define ICL_DSC0_PICTURE_PARAMETER_SET_3(pipe)	_MMIO_PIPE((pipe) - PIPE_B, \
-						   _ICL_DSC0_PICTURE_PARAMETER_SET_3_PB, \
-						   _ICL_DSC0_PICTURE_PARAMETER_SET_3_PC)
-#define ICL_DSC1_PICTURE_PARAMETER_SET_3(pipe)	_MMIO_PIPE((pipe) - PIPE_B, \
-						   _ICL_DSC1_PICTURE_PARAMETER_SET_3_PB, \
-						   _ICL_DSC1_PICTURE_PARAMETER_SET_3_PC)
-#define  DSC_SLICE_WIDTH(slice_width)   ((slice_width) << 16)
-#define  DSC_SLICE_HEIGHT(slice_height) ((slice_height) << 0)
+/* PPS 2 */
+#define   DSC_PPS2_PIC_WIDTH_MASK		REG_GENMASK(31, 16)
+#define   DSC_PPS2_PIC_HEIGHT_MASK		REG_GENMASK(15, 0)
+#define   DSC_PPS2_PIC_WIDTH(pic_width)		REG_FIELD_PREP(DSC_PPS2_PIC_WIDTH_MASK, pic_width)
+#define   DSC_PPS2_PIC_HEIGHT(pic_height)	REG_FIELD_PREP(DSC_PPS2_PIC_HEIGHT_MASK, pic_height)
 
-#define DSCA_PICTURE_PARAMETER_SET_4		_MMIO(0x6B210)
-#define DSCC_PICTURE_PARAMETER_SET_4		_MMIO(0x6BA10)
-#define _ICL_DSC0_PICTURE_PARAMETER_SET_4_PB	0x78280
-#define _ICL_DSC1_PICTURE_PARAMETER_SET_4_PB	0x78380
-#define _ICL_DSC0_PICTURE_PARAMETER_SET_4_PC	0x78480
-#define _ICL_DSC1_PICTURE_PARAMETER_SET_4_PC	0x78580
-#define ICL_DSC0_PICTURE_PARAMETER_SET_4(pipe)	_MMIO_PIPE((pipe) - PIPE_B, \
-						   _ICL_DSC0_PICTURE_PARAMETER_SET_4_PB, \
-						   _ICL_DSC0_PICTURE_PARAMETER_SET_4_PC)
-#define ICL_DSC1_PICTURE_PARAMETER_SET_4(pipe)	_MMIO_PIPE((pipe) - PIPE_B, \
-						   _ICL_DSC1_PICTURE_PARAMETER_SET_4_PB, \
-						   _ICL_DSC1_PICTURE_PARAMETER_SET_4_PC)
-#define  DSC_INITIAL_DEC_DELAY(dec_delay)  ((dec_delay) << 16)
-#define  DSC_INITIAL_XMIT_DELAY(xmit_delay) ((xmit_delay) << 0)
+/* PPS 3 */
+#define   DSC_PPS3_SLICE_WIDTH_MASK		REG_GENMASK(31, 16)
+#define   DSC_PPS3_SLICE_HEIGHT_MASK		REG_GENMASK(15, 0)
+#define   DSC_PPS3_SLICE_WIDTH(slice_width)	REG_FIELD_PREP(DSC_PPS3_SLICE_WIDTH_MASK, slice_width)
+#define   DSC_PPS3_SLICE_HEIGHT(slice_height)	REG_FIELD_PREP(DSC_PPS3_SLICE_HEIGHT_MASK, slice_height)
 
-#define DSCA_PICTURE_PARAMETER_SET_5		_MMIO(0x6B214)
-#define DSCC_PICTURE_PARAMETER_SET_5		_MMIO(0x6BA14)
-#define _ICL_DSC0_PICTURE_PARAMETER_SET_5_PB	0x78284
-#define _ICL_DSC1_PICTURE_PARAMETER_SET_5_PB	0x78384
-#define _ICL_DSC0_PICTURE_PARAMETER_SET_5_PC	0x78484
-#define _ICL_DSC1_PICTURE_PARAMETER_SET_5_PC	0x78584
-#define ICL_DSC0_PICTURE_PARAMETER_SET_5(pipe)	_MMIO_PIPE((pipe) - PIPE_B, \
-						   _ICL_DSC0_PICTURE_PARAMETER_SET_5_PB, \
-						   _ICL_DSC0_PICTURE_PARAMETER_SET_5_PC)
-#define ICL_DSC1_PICTURE_PARAMETER_SET_5(pipe)	_MMIO_PIPE((pipe) - PIPE_B, \
-						   _ICL_DSC1_PICTURE_PARAMETER_SET_5_PB, \
-						   _ICL_DSC1_PICTURE_PARAMETER_SET_5_PC)
-#define  DSC_SCALE_DEC_INT(scale_dec)	((scale_dec) << 16)
-#define  DSC_SCALE_INC_INT(scale_inc)		((scale_inc) << 0)
+/* PPS 4 */
+#define   DSC_PPS4_INITIAL_DEC_DELAY_MASK	REG_GENMASK(31, 16)
+#define   DSC_PPS4_INITIAL_XMIT_DELAY_MASK	REG_GENMASK(9, 0)
+#define   DSC_PPS4_INITIAL_DEC_DELAY(dec_delay)	REG_FIELD_PREP(DSC_PPS4_INITIAL_DEC_DELAY_MASK, \
+							       dec_delay)
+#define   DSC_PPS4_INITIAL_XMIT_DELAY(xmit_delay) REG_FIELD_PREP(DSC_PPS4_INITIAL_XMIT_DELAY_MASK, \
+								 xmit_delay)
 
-#define DSCA_PICTURE_PARAMETER_SET_6		_MMIO(0x6B218)
-#define DSCC_PICTURE_PARAMETER_SET_6		_MMIO(0x6BA18)
-#define _ICL_DSC0_PICTURE_PARAMETER_SET_6_PB	0x78288
-#define _ICL_DSC1_PICTURE_PARAMETER_SET_6_PB	0x78388
-#define _ICL_DSC0_PICTURE_PARAMETER_SET_6_PC	0x78488
-#define _ICL_DSC1_PICTURE_PARAMETER_SET_6_PC	0x78588
-#define ICL_DSC0_PICTURE_PARAMETER_SET_6(pipe)	_MMIO_PIPE((pipe) - PIPE_B, \
-						   _ICL_DSC0_PICTURE_PARAMETER_SET_6_PB, \
-						   _ICL_DSC0_PICTURE_PARAMETER_SET_6_PC)
-#define ICL_DSC1_PICTURE_PARAMETER_SET_6(pipe)	_MMIO_PIPE((pipe) - PIPE_B, \
-						   _ICL_DSC1_PICTURE_PARAMETER_SET_6_PB, \
-						   _ICL_DSC1_PICTURE_PARAMETER_SET_6_PC)
-#define  DSC_FLATNESS_MAX_QP(max_qp)		((max_qp) << 24)
-#define  DSC_FLATNESS_MIN_QP(min_qp)		((min_qp) << 16)
-#define  DSC_FIRST_LINE_BPG_OFFSET(offset)	((offset) << 8)
-#define  DSC_INITIAL_SCALE_VALUE(value)		((value) << 0)
+/* PPS 5 */
+#define   DSC_PPS5_SCALE_DEC_INT_MASK		REG_GENMASK(27, 16)
+#define   DSC_PPS5_SCALE_INC_INT_MASK		REG_GENMASK(15, 0)
+#define   DSC_PPS5_SCALE_DEC_INT(scale_dec)	REG_FIELD_PREP(DSC_PPS5_SCALE_DEC_INT_MASK, scale_dec)
+#define   DSC_PPS5_SCALE_INC_INT(scale_inc)	REG_FIELD_PREP(DSC_PPS5_SCALE_INC_INT_MASK, scale_inc)
 
-#define DSCA_PICTURE_PARAMETER_SET_7		_MMIO(0x6B21C)
-#define DSCC_PICTURE_PARAMETER_SET_7		_MMIO(0x6BA1C)
-#define _ICL_DSC0_PICTURE_PARAMETER_SET_7_PB	0x7828C
-#define _ICL_DSC1_PICTURE_PARAMETER_SET_7_PB	0x7838C
-#define _ICL_DSC0_PICTURE_PARAMETER_SET_7_PC	0x7848C
-#define _ICL_DSC1_PICTURE_PARAMETER_SET_7_PC	0x7858C
-#define ICL_DSC0_PICTURE_PARAMETER_SET_7(pipe)	_MMIO_PIPE((pipe) - PIPE_B, \
-						   _ICL_DSC0_PICTURE_PARAMETER_SET_7_PB, \
-						   _ICL_DSC0_PICTURE_PARAMETER_SET_7_PC)
-#define ICL_DSC1_PICTURE_PARAMETER_SET_7(pipe)	_MMIO_PIPE((pipe) - PIPE_B, \
-						   _ICL_DSC1_PICTURE_PARAMETER_SET_7_PB, \
-						   _ICL_DSC1_PICTURE_PARAMETER_SET_7_PC)
-#define  DSC_NFL_BPG_OFFSET(bpg_offset)		((bpg_offset) << 16)
-#define  DSC_SLICE_BPG_OFFSET(bpg_offset)	((bpg_offset) << 0)
+/* PPS 6 */
+#define   DSC_PPS6_FLATNESS_MAX_QP_MASK		REG_GENMASK(28, 24)
+#define   DSC_PPS6_FLATNESS_MIN_QP_MASK		REG_GENMASK(20, 16)
+#define   DSC_PPS6_FIRST_LINE_BPG_OFFSET_MASK	REG_GENMASK(12, 8)
+#define   DSC_PPS6_INITIAL_SCALE_VALUE_MASK	REG_GENMASK(5, 0)
+#define   DSC_PPS6_FLATNESS_MAX_QP(max_qp)	REG_FIELD_PREP(DSC_PPS6_FLATNESS_MAX_QP_MASK, max_qp)
+#define   DSC_PPS6_FLATNESS_MIN_QP(min_qp)	REG_FIELD_PREP(DSC_PPS6_FLATNESS_MIN_QP_MASK, min_qp)
+#define   DSC_PPS6_FIRST_LINE_BPG_OFFSET(offset) REG_FIELD_PREP(DSC_PPS6_FIRST_LINE_BPG_OFFSET_MASK, \
+								offset)
+#define   DSC_PPS6_INITIAL_SCALE_VALUE(value)	REG_FIELD_PREP(DSC_PPS6_INITIAL_SCALE_VALUE_MASK, \
+							       value)
 
-#define DSCA_PICTURE_PARAMETER_SET_8		_MMIO(0x6B220)
-#define DSCC_PICTURE_PARAMETER_SET_8		_MMIO(0x6BA20)
-#define _ICL_DSC0_PICTURE_PARAMETER_SET_8_PB	0x78290
-#define _ICL_DSC1_PICTURE_PARAMETER_SET_8_PB	0x78390
-#define _ICL_DSC0_PICTURE_PARAMETER_SET_8_PC	0x78490
-#define _ICL_DSC1_PICTURE_PARAMETER_SET_8_PC	0x78590
-#define ICL_DSC0_PICTURE_PARAMETER_SET_8(pipe)	_MMIO_PIPE((pipe) - PIPE_B, \
-						   _ICL_DSC0_PICTURE_PARAMETER_SET_8_PB, \
-						   _ICL_DSC0_PICTURE_PARAMETER_SET_8_PC)
-#define ICL_DSC1_PICTURE_PARAMETER_SET_8(pipe)	_MMIO_PIPE((pipe) - PIPE_B, \
-						   _ICL_DSC1_PICTURE_PARAMETER_SET_8_PB, \
-						   _ICL_DSC1_PICTURE_PARAMETER_SET_8_PC)
-#define  DSC_INITIAL_OFFSET(initial_offset)	((initial_offset) << 16)
-#define  DSC_FINAL_OFFSET(final_offset)		((final_offset) << 0)
+/* PPS 7 */
+#define   DSC_PPS7_NFL_BPG_OFFSET_MASK		REG_GENMASK(31, 16)
+#define   DSC_PPS7_SLICE_BPG_OFFSET_MASK	REG_GENMASK(15, 0)
+#define   DSC_PPS7_NFL_BPG_OFFSET(bpg_offset)	REG_FIELD_PREP(DSC_PPS7_NFL_BPG_OFFSET_MASK, bpg_offset)
+#define   DSC_PPS7_SLICE_BPG_OFFSET(bpg_offset)	REG_FIELD_PREP(DSC_PPS7_SLICE_BPG_OFFSET_MASK, \
+							       bpg_offset)
+/* PPS 8 */
+#define   DSC_PPS8_INITIAL_OFFSET_MASK		REG_GENMASK(31, 16)
+#define   DSC_PPS8_FINAL_OFFSET_MASK		REG_GENMASK(15, 0)
+#define   DSC_PPS8_INITIAL_OFFSET(initial_offset) REG_FIELD_PREP(DSC_PPS8_INITIAL_OFFSET_MASK, \
+								 initial_offset)
+#define   DSC_PPS8_FINAL_OFFSET(final_offset)	REG_FIELD_PREP(DSC_PPS8_FINAL_OFFSET_MASK, \
+							       final_offset)
 
-#define DSCA_PICTURE_PARAMETER_SET_9		_MMIO(0x6B224)
-#define DSCC_PICTURE_PARAMETER_SET_9		_MMIO(0x6BA24)
-#define _ICL_DSC0_PICTURE_PARAMETER_SET_9_PB	0x78294
-#define _ICL_DSC1_PICTURE_PARAMETER_SET_9_PB	0x78394
-#define _ICL_DSC0_PICTURE_PARAMETER_SET_9_PC	0x78494
-#define _ICL_DSC1_PICTURE_PARAMETER_SET_9_PC	0x78594
-#define ICL_DSC0_PICTURE_PARAMETER_SET_9(pipe)	_MMIO_PIPE((pipe) - PIPE_B, \
-						   _ICL_DSC0_PICTURE_PARAMETER_SET_9_PB, \
-						   _ICL_DSC0_PICTURE_PARAMETER_SET_9_PC)
-#define ICL_DSC1_PICTURE_PARAMETER_SET_9(pipe)	_MMIO_PIPE((pipe) - PIPE_B, \
-						   _ICL_DSC1_PICTURE_PARAMETER_SET_9_PB, \
-						   _ICL_DSC1_PICTURE_PARAMETER_SET_9_PC)
-#define  DSC_RC_EDGE_FACTOR(rc_edge_fact)	((rc_edge_fact) << 16)
-#define  DSC_RC_MODEL_SIZE(rc_model_size)	((rc_model_size) << 0)
+/* PPS 9 */
+#define   DSC_PPS9_RC_EDGE_FACTOR_MASK		REG_GENMASK(19, 16)
+#define   DSC_PPS9_RC_MODEL_SIZE_MASK		REG_GENMASK(15, 0)
+#define   DSC_PPS9_RC_EDGE_FACTOR(rc_edge_fact)	REG_FIELD_PREP(DSC_PPS9_RC_EDGE_FACTOR_MASK, \
+							       rc_edge_fact)
+#define   DSC_PPS9_RC_MODEL_SIZE(rc_model_size)	REG_FIELD_PREP(DSC_PPS9_RC_MODEL_SIZE_MASK, \
+							       rc_model_size)
 
-#define DSCA_PICTURE_PARAMETER_SET_10		_MMIO(0x6B228)
-#define DSCC_PICTURE_PARAMETER_SET_10		_MMIO(0x6BA28)
-#define _ICL_DSC0_PICTURE_PARAMETER_SET_10_PB	0x78298
-#define _ICL_DSC1_PICTURE_PARAMETER_SET_10_PB	0x78398
-#define _ICL_DSC0_PICTURE_PARAMETER_SET_10_PC	0x78498
-#define _ICL_DSC1_PICTURE_PARAMETER_SET_10_PC	0x78598
-#define ICL_DSC0_PICTURE_PARAMETER_SET_10(pipe)	_MMIO_PIPE((pipe) - PIPE_B, \
-						   _ICL_DSC0_PICTURE_PARAMETER_SET_10_PB, \
-						   _ICL_DSC0_PICTURE_PARAMETER_SET_10_PC)
-#define ICL_DSC1_PICTURE_PARAMETER_SET_10(pipe)	_MMIO_PIPE((pipe) - PIPE_B, \
-						   _ICL_DSC1_PICTURE_PARAMETER_SET_10_PB, \
-						   _ICL_DSC1_PICTURE_PARAMETER_SET_10_PC)
-#define  DSC_RC_TARGET_OFF_LOW(rc_tgt_off_low)		((rc_tgt_off_low) << 20)
-#define  DSC_RC_TARGET_OFF_HIGH(rc_tgt_off_high)	((rc_tgt_off_high) << 16)
-#define  DSC_RC_QUANT_INC_LIMIT1(lim)			((lim) << 8)
-#define
DSC_RC_QUANT_INC_LIMIT0(lim) ((lim) << 0) 178 + /* PPS 10 */ 179 + #define DSC_PPS10_RC_TGT_OFF_LOW_MASK REG_GENMASK(23, 20) 180 + #define DSC_PPS10_RC_TGT_OFF_HIGH_MASK REG_GENMASK(19, 16) 181 + #define DSC_PPS10_RC_QUANT_INC_LIMIT1_MASK REG_GENMASK(12, 8) 182 + #define DSC_PPS10_RC_QUANT_INC_LIMIT0_MASK REG_GENMASK(4, 0) 183 + #define DSC_PPS10_RC_TARGET_OFF_LOW(rc_tgt_off_low) REG_FIELD_PREP(DSC_PPS10_RC_TGT_OFF_LOW_MASK, \ 184 + rc_tgt_off_low) 185 + #define DSC_PPS10_RC_TARGET_OFF_HIGH(rc_tgt_off_high) REG_FIELD_PREP(DSC_PPS10_RC_TGT_OFF_HIGH_MASK, \ 186 + rc_tgt_off_high) 187 + #define DSC_PPS10_RC_QUANT_INC_LIMIT1(lim) REG_FIELD_PREP(DSC_PPS10_RC_QUANT_INC_LIMIT1_MASK, lim) 188 + #define DSC_PPS10_RC_QUANT_INC_LIMIT0(lim) REG_FIELD_PREP(DSC_PPS10_RC_QUANT_INC_LIMIT0_MASK, lim) 230 189 231 - #define DSCA_PICTURE_PARAMETER_SET_11 _MMIO(0x6B22C) 232 - #define DSCC_PICTURE_PARAMETER_SET_11 _MMIO(0x6BA2C) 233 - #define _ICL_DSC0_PICTURE_PARAMETER_SET_11_PB 0x7829C 234 - #define _ICL_DSC1_PICTURE_PARAMETER_SET_11_PB 0x7839C 235 - #define _ICL_DSC0_PICTURE_PARAMETER_SET_11_PC 0x7849C 236 - #define _ICL_DSC1_PICTURE_PARAMETER_SET_11_PC 0x7859C 237 - #define ICL_DSC0_PICTURE_PARAMETER_SET_11(pipe) _MMIO_PIPE((pipe) - PIPE_B, \ 238 - _ICL_DSC0_PICTURE_PARAMETER_SET_11_PB, \ 239 - _ICL_DSC0_PICTURE_PARAMETER_SET_11_PC) 240 - #define ICL_DSC1_PICTURE_PARAMETER_SET_11(pipe) _MMIO_PIPE((pipe) - PIPE_B, \ 241 - _ICL_DSC1_PICTURE_PARAMETER_SET_11_PB, \ 242 - _ICL_DSC1_PICTURE_PARAMETER_SET_11_PC) 190 + /* PPS 16 */ 191 + #define DSC_PPS16_SLICE_ROW_PR_FRME_MASK REG_GENMASK(31, 20) 192 + #define DSC_PPS16_SLICE_PER_LINE_MASK REG_GENMASK(18, 16) 193 + #define DSC_PPS16_SLICE_CHUNK_SIZE_MASK REG_GENMASK(15, 0) 194 + #define DSC_PPS16_SLICE_ROW_PER_FRAME(slice_row_per_frame) REG_FIELD_PREP(DSC_PPS16_SLICE_ROW_PR_FRME_MASK, \ 195 + slice_row_per_frame) 196 + #define DSC_PPS16_SLICE_PER_LINE(slice_per_line) REG_FIELD_PREP(DSC_PPS16_SLICE_PER_LINE_MASK, \ 197 + slice_per_line) 198 
+ #define DSC_PPS16_SLICE_CHUNK_SIZE(slice_chunk_size) REG_FIELD_PREP(DSC_PPS16_SLICE_CHUNK_SIZE_MASK, \ 199 + slice_chunk_size) 243 200 244 - #define DSCA_PICTURE_PARAMETER_SET_12 _MMIO(0x6B260) 245 - #define DSCC_PICTURE_PARAMETER_SET_12 _MMIO(0x6BA60) 246 - #define _ICL_DSC0_PICTURE_PARAMETER_SET_12_PB 0x782A0 247 - #define _ICL_DSC1_PICTURE_PARAMETER_SET_12_PB 0x783A0 248 - #define _ICL_DSC0_PICTURE_PARAMETER_SET_12_PC 0x784A0 249 - #define _ICL_DSC1_PICTURE_PARAMETER_SET_12_PC 0x785A0 250 - #define ICL_DSC0_PICTURE_PARAMETER_SET_12(pipe) _MMIO_PIPE((pipe) - PIPE_B, \ 251 - _ICL_DSC0_PICTURE_PARAMETER_SET_12_PB, \ 252 - _ICL_DSC0_PICTURE_PARAMETER_SET_12_PC) 253 - #define ICL_DSC1_PICTURE_PARAMETER_SET_12(pipe) _MMIO_PIPE((pipe) - PIPE_B, \ 254 - _ICL_DSC1_PICTURE_PARAMETER_SET_12_PB, \ 255 - _ICL_DSC1_PICTURE_PARAMETER_SET_12_PC) 201 + /* PPS 17 (MTL+) */ 202 + #define DSC_PPS17_SL_BPG_OFFSET_MASK REG_GENMASK(31, 27) 203 + #define DSC_PPS17_SL_BPG_OFFSET(offset) REG_FIELD_PREP(DSC_PPS17_SL_BPG_OFFSET_MASK, offset) 256 204 257 - #define DSCA_PICTURE_PARAMETER_SET_13 _MMIO(0x6B264) 258 - #define DSCC_PICTURE_PARAMETER_SET_13 _MMIO(0x6BA64) 259 - #define _ICL_DSC0_PICTURE_PARAMETER_SET_13_PB 0x782A4 260 - #define _ICL_DSC1_PICTURE_PARAMETER_SET_13_PB 0x783A4 261 - #define _ICL_DSC0_PICTURE_PARAMETER_SET_13_PC 0x784A4 262 - #define _ICL_DSC1_PICTURE_PARAMETER_SET_13_PC 0x785A4 263 - #define ICL_DSC0_PICTURE_PARAMETER_SET_13(pipe) _MMIO_PIPE((pipe) - PIPE_B, \ 264 - _ICL_DSC0_PICTURE_PARAMETER_SET_13_PB, \ 265 - _ICL_DSC0_PICTURE_PARAMETER_SET_13_PC) 266 - #define ICL_DSC1_PICTURE_PARAMETER_SET_13(pipe) _MMIO_PIPE((pipe) - PIPE_B, \ 267 - _ICL_DSC1_PICTURE_PARAMETER_SET_13_PB, \ 268 - _ICL_DSC1_PICTURE_PARAMETER_SET_13_PC) 269 - 270 - #define DSCA_PICTURE_PARAMETER_SET_14 _MMIO(0x6B268) 271 - #define DSCC_PICTURE_PARAMETER_SET_14 _MMIO(0x6BA68) 272 - #define _ICL_DSC0_PICTURE_PARAMETER_SET_14_PB 0x782A8 273 - #define _ICL_DSC1_PICTURE_PARAMETER_SET_14_PB 0x783A8 
274 - #define _ICL_DSC0_PICTURE_PARAMETER_SET_14_PC 0x784A8 275 - #define _ICL_DSC1_PICTURE_PARAMETER_SET_14_PC 0x785A8 276 - #define ICL_DSC0_PICTURE_PARAMETER_SET_14(pipe) _MMIO_PIPE((pipe) - PIPE_B, \ 277 - _ICL_DSC0_PICTURE_PARAMETER_SET_14_PB, \ 278 - _ICL_DSC0_PICTURE_PARAMETER_SET_14_PC) 279 - #define ICL_DSC1_PICTURE_PARAMETER_SET_14(pipe) _MMIO_PIPE((pipe) - PIPE_B, \ 280 - _ICL_DSC1_PICTURE_PARAMETER_SET_14_PB, \ 281 - _ICL_DSC1_PICTURE_PARAMETER_SET_14_PC) 282 - 283 - #define DSCA_PICTURE_PARAMETER_SET_15 _MMIO(0x6B26C) 284 - #define DSCC_PICTURE_PARAMETER_SET_15 _MMIO(0x6BA6C) 285 - #define _ICL_DSC0_PICTURE_PARAMETER_SET_15_PB 0x782AC 286 - #define _ICL_DSC1_PICTURE_PARAMETER_SET_15_PB 0x783AC 287 - #define _ICL_DSC0_PICTURE_PARAMETER_SET_15_PC 0x784AC 288 - #define _ICL_DSC1_PICTURE_PARAMETER_SET_15_PC 0x785AC 289 - #define ICL_DSC0_PICTURE_PARAMETER_SET_15(pipe) _MMIO_PIPE((pipe) - PIPE_B, \ 290 - _ICL_DSC0_PICTURE_PARAMETER_SET_15_PB, \ 291 - _ICL_DSC0_PICTURE_PARAMETER_SET_15_PC) 292 - #define ICL_DSC1_PICTURE_PARAMETER_SET_15(pipe) _MMIO_PIPE((pipe) - PIPE_B, \ 293 - _ICL_DSC1_PICTURE_PARAMETER_SET_15_PB, \ 294 - _ICL_DSC1_PICTURE_PARAMETER_SET_15_PC) 295 - 296 - #define DSCA_PICTURE_PARAMETER_SET_16 _MMIO(0x6B270) 297 - #define DSCC_PICTURE_PARAMETER_SET_16 _MMIO(0x6BA70) 298 - #define _ICL_DSC0_PICTURE_PARAMETER_SET_16_PB 0x782B0 299 - #define _ICL_DSC1_PICTURE_PARAMETER_SET_16_PB 0x783B0 300 - #define _ICL_DSC0_PICTURE_PARAMETER_SET_16_PC 0x784B0 301 - #define _ICL_DSC1_PICTURE_PARAMETER_SET_16_PC 0x785B0 302 - #define ICL_DSC0_PICTURE_PARAMETER_SET_16(pipe) _MMIO_PIPE((pipe) - PIPE_B, \ 303 - _ICL_DSC0_PICTURE_PARAMETER_SET_16_PB, \ 304 - _ICL_DSC0_PICTURE_PARAMETER_SET_16_PC) 305 - #define ICL_DSC1_PICTURE_PARAMETER_SET_16(pipe) _MMIO_PIPE((pipe) - PIPE_B, \ 306 - _ICL_DSC1_PICTURE_PARAMETER_SET_16_PB, \ 307 - _ICL_DSC1_PICTURE_PARAMETER_SET_16_PC) 308 - #define DSC_SLICE_ROW_PER_FRAME(slice_row_per_frame) ((slice_row_per_frame) << 20) 309 - 
#define DSC_SLICE_PER_LINE(slice_per_line) ((slice_per_line) << 16) 310 - #define DSC_SLICE_CHUNK_SIZE(slice_chunk_size) ((slice_chunk_size) << 0) 205 + /* PPS 18 (MTL+) */ 206 + #define DSC_PPS18_NSL_BPG_OFFSET_MASK REG_GENMASK(31, 16) 207 + #define DSC_PPS18_SL_OFFSET_ADJ_MASK REG_GENMASK(15, 0) 208 + #define DSC_PPS18_NSL_BPG_OFFSET(offset) REG_FIELD_PREP(DSC_PPS18_NSL_BPG_OFFSET_MASK, offset) 209 + #define DSC_PPS18_SL_OFFSET_ADJ(offset) REG_FIELD_PREP(DSC_PPS18_SL_OFFSET_ADJ_MASK, offset) 311 210 312 211 /* Icelake Rate Control Buffer Threshold Registers */ 313 212 #define DSCA_RC_BUF_THRESH_0 _MMIO(0x6B230)
+17 -3
drivers/gpu/drm/i915/display/intel_vrr.c
···
 		info->monitor_range.max_vfreq - info->monitor_range.min_vfreq > 10;
 }
 
+bool intel_vrr_is_in_range(struct intel_connector *connector, int vrefresh)
+{
+	const struct drm_display_info *info = &connector->base.display_info;
+
+	return intel_vrr_is_capable(connector) &&
+		vrefresh >= info->monitor_range.min_vfreq &&
+		vrefresh <= info->monitor_range.max_vfreq;
+}
+
 void
 intel_vrr_check_modeset(struct intel_atomic_state *state)
 {
···
 	const struct drm_display_info *info = &connector->base.display_info;
 	int vmin, vmax;
 
-	if (!intel_vrr_is_capable(connector))
-		return;
-
 	if (adjusted_mode->flags & DRM_MODE_FLAG_INTERLACE)
 		return;
+
+	crtc_state->vrr.in_range =
+		intel_vrr_is_in_range(connector, drm_mode_vrefresh(adjusted_mode));
+	if (!crtc_state->vrr.in_range)
+		return;
+
+	if (HAS_LRR(i915))
+		crtc_state->update_lrr = true;
 
 	vmin = DIV_ROUND_UP(adjusted_mode->crtc_clock * 1000,
 			    adjusted_mode->crtc_htotal * info->monitor_range.max_vfreq);
+1
drivers/gpu/drm/i915/display/intel_vrr.h
···
 struct intel_crtc_state;
 
 bool intel_vrr_is_capable(struct intel_connector *connector);
+bool intel_vrr_is_in_range(struct intel_connector *connector, int vrefresh);
 void intel_vrr_check_modeset(struct intel_atomic_state *state);
 void intel_vrr_compute_config(struct intel_crtc_state *crtc_state,
 			      struct drm_connector_state *conn_state);
+2 -5
drivers/gpu/drm/i915/display/skl_universal_plane.c
···
 #include "intel_display_types.h"
 #include "intel_fb.h"
 #include "intel_fbc.h"
+#include "intel_frontbuffer.h"
 #include "intel_psr.h"
 #include "skl_scaler.h"
 #include "skl_universal_plane.h"
···
 	}
 
 	/* FLAT CCS doesn't need to program AUX_DIST */
-	if (!HAS_FLAT_CCS(dev_priv))
+	if (!HAS_FLAT_CCS(dev_priv) && DISPLAY_VER(dev_priv) < 20)
 		intel_de_write_fw(dev_priv, PLANE_AUX_DIST(pipe, plane_id),
 				  skl_plane_aux_dist(plane_state, color_plane));
 
···
 
 	/* Wa_22011186057 */
 	if (IS_ALDERLAKE_P(i915) && IS_DISPLAY_STEP(i915, STEP_A0, STEP_B0))
-		return false;
-
-	/* Wa_14013215631 */
-	if (IS_DG2_DISPLAY_STEP(i915, STEP_A0, STEP_C0))
 		return false;
 
 	return plane_id < PLANE_SPRITE4;
+24 -8
drivers/gpu/drm/i915/display/skl_watermark.c
···
 	u64 data_rate = 0;
 
 	for_each_plane_id_on_crtc(crtc, plane_id) {
-		if (plane_id == PLANE_CURSOR)
+		if (plane_id == PLANE_CURSOR && DISPLAY_VER(i915) < 20)
 			continue;
 
 		data_rate += crtc_state->rel_data_rate[plane_id];
···
 		return 0;
 
 	/* Allocate fixed number of blocks for cursor. */
-	cursor_size = skl_cursor_allocation(crtc_state, num_active);
-	iter.size -= cursor_size;
-	skl_ddb_entry_init(&crtc_state->wm.skl.plane_ddb[PLANE_CURSOR],
-			   alloc->end - cursor_size, alloc->end);
+	if (DISPLAY_VER(i915) < 20) {
+		cursor_size = skl_cursor_allocation(crtc_state, num_active);
+		iter.size -= cursor_size;
+		skl_ddb_entry_init(&crtc_state->wm.skl.plane_ddb[PLANE_CURSOR],
+				   alloc->end - cursor_size, alloc->end);
+	}
 
 	iter.data_rate = skl_total_relative_data_rate(crtc_state);
 
···
 		const struct skl_plane_wm *wm =
 			&crtc_state->wm.skl.optimal.planes[plane_id];
 
-		if (plane_id == PLANE_CURSOR) {
+		if (plane_id == PLANE_CURSOR && DISPLAY_VER(i915) < 20) {
 			const struct skl_ddb_entry *ddb =
 				&crtc_state->wm.skl.plane_ddb[plane_id];
 
···
 		const struct skl_plane_wm *wm =
 			&crtc_state->wm.skl.optimal.planes[plane_id];
 
-		if (plane_id == PLANE_CURSOR)
+		if (plane_id == PLANE_CURSOR && DISPLAY_VER(i915) < 20)
 			continue;
 
 		if (DISPLAY_VER(i915) < 11 &&
···
 
 	if (old_dbuf_state->joined_mbus != new_dbuf_state->joined_mbus) {
 		/* TODO: Implement vblank synchronized MBUS joining changes */
-		ret = intel_modeset_all_pipes(state, "MBUS joining change");
+		ret = intel_modeset_all_pipes_late(state, "MBUS joining change");
 		if (ret)
 			return ret;
 	}
···
 	if (HAS_SAGV(i915))
 		debugfs_create_file("i915_sagv_status", 0444, minor->debugfs_root, i915,
 				    &intel_sagv_status_fops);
+}
+
+unsigned int skl_watermark_max_latency(struct drm_i915_private *i915)
+{
+	int level;
+
+	for (level = i915->display.wm.num_levels - 1; level >= 0; level--) {
+		unsigned int latency = skl_wm_latency(i915, level, NULL);
+
+		if (latency)
+			return latency;
+	}
+
+	return 0;
 }
+2
drivers/gpu/drm/i915/display/skl_watermark.h
···
 bool skl_watermark_ipc_enabled(struct drm_i915_private *i915);
 void skl_watermark_debugfs_register(struct drm_i915_private *i915);
 
+unsigned int skl_watermark_max_latency(struct drm_i915_private *i915);
+
 void skl_wm_init(struct drm_i915_private *i915);
 
 struct intel_dbuf_state {
+1 -2
drivers/gpu/drm/i915/gem/i915_gem_clflush.c
···
 
 #include <drm/drm_cache.h>
 
-#include "display/intel_frontbuffer.h"
-
 #include "i915_config.h"
 #include "i915_drv.h"
 #include "i915_gem_clflush.h"
+#include "i915_gem_object_frontbuffer.h"
 #include "i915_sw_fence_work.h"
 #include "i915_trace.h"
 
+1 -1
drivers/gpu/drm/i915/gem/i915_gem_domain.c
···
 */
 
 #include "display/intel_display.h"
-#include "display/intel_frontbuffer.h"
 #include "gt/intel_gt.h"
 
 #include "i915_drv.h"
···
 #include "i915_gem_lmem.h"
 #include "i915_gem_mman.h"
 #include "i915_gem_object.h"
+#include "i915_gem_object_frontbuffer.h"
 #include "i915_vma.h"
 
 #define VTD_GUARD	(168u * I915_GTT_PAGE_SIZE) /* 168 or tile-row PTE padding */
+2 -2
drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
···
 	if (unlikely(reloc->write_domain & (reloc->write_domain - 1))) {
 		drm_dbg(&i915->drm, "reloc with multiple write domains: "
 			"target %d offset %d "
-			"read %08x write %08x",
+			"read %08x write %08x\n",
 			reloc->target_handle,
 			(int) reloc->offset,
 			reloc->read_domains,
···
 		     & ~I915_GEM_GPU_DOMAINS)) {
 		drm_dbg(&i915->drm, "reloc with read/write non-GPU domains: "
 			"target %d offset %d "
-			"read %08x write %08x",
+			"read %08x write %08x\n",
 			reloc->target_handle,
 			(int) reloc->offset,
 			reloc->read_domains,
+1
drivers/gpu/drm/i915/gem/i915_gem_object.c
···
 #include "i915_gem_dmabuf.h"
 #include "i915_gem_mman.h"
 #include "i915_gem_object.h"
+#include "i915_gem_object_frontbuffer.h"
 #include "i915_gem_ttm.h"
 #include "i915_memcpy.h"
 #include "i915_trace.h"
-89
drivers/gpu/drm/i915/gem/i915_gem_object.h
···
 #include <drm/drm_file.h>
 #include <drm/drm_device.h>
 
-#include "display/intel_frontbuffer.h"
 #include "intel_memory_region.h"
 #include "i915_gem_object_types.h"
 #include "i915_gem_gtt.h"
···
 			      unsigned int flags,
 			      const struct i915_sched_attr *attr);
 
-void __i915_gem_object_flush_frontbuffer(struct drm_i915_gem_object *obj,
-					 enum fb_op_origin origin);
-void __i915_gem_object_invalidate_frontbuffer(struct drm_i915_gem_object *obj,
-					      enum fb_op_origin origin);
-
-static inline void
-i915_gem_object_flush_frontbuffer(struct drm_i915_gem_object *obj,
-				  enum fb_op_origin origin)
-{
-	if (unlikely(rcu_access_pointer(obj->frontbuffer)))
-		__i915_gem_object_flush_frontbuffer(obj, origin);
-}
-
-static inline void
-i915_gem_object_invalidate_frontbuffer(struct drm_i915_gem_object *obj,
-				       enum fb_op_origin origin)
-{
-	if (unlikely(rcu_access_pointer(obj->frontbuffer)))
-		__i915_gem_object_invalidate_frontbuffer(obj, origin);
-}
-
 int i915_gem_object_read_from_page(struct drm_i915_gem_object *obj, u64 offset, void *dst, int size);
 
 bool i915_gem_object_is_shmem(const struct drm_i915_gem_object *obj);
···
 static inline int i915_gem_object_userptr_validate(struct drm_i915_gem_object *obj) { GEM_BUG_ON(1); return -ENODEV; }
 
 #endif
-
-/**
- * i915_gem_object_get_frontbuffer - Get the object's frontbuffer
- * @obj: The object whose frontbuffer to get.
- *
- * Get pointer to object's frontbuffer if such exists. Please note that RCU
- * mechanism is used to handle e.g. ongoing removal of frontbuffer pointer.
- *
- * Return: pointer to object's frontbuffer is such exists or NULL
- */
-static inline struct intel_frontbuffer *
-i915_gem_object_get_frontbuffer(const struct drm_i915_gem_object *obj)
-{
-	struct intel_frontbuffer *front;
-
-	if (likely(!rcu_access_pointer(obj->frontbuffer)))
-		return NULL;
-
-	rcu_read_lock();
-	do {
-		front = rcu_dereference(obj->frontbuffer);
-		if (!front)
-			break;
-
-		if (unlikely(!kref_get_unless_zero(&front->ref)))
-			continue;
-
-		if (likely(front == rcu_access_pointer(obj->frontbuffer)))
-			break;
-
-		intel_frontbuffer_put(front);
-	} while (1);
-	rcu_read_unlock();
-
-	return front;
-}
-
-/**
- * i915_gem_object_set_frontbuffer - Set the object's frontbuffer
- * @obj: The object whose frontbuffer to set.
- * @front: The frontbuffer to set
- *
- * Set object's frontbuffer pointer. If frontbuffer is already set for the
- * object keep it and return it's pointer to the caller. Please note that RCU
- * mechanism is used to handle e.g. ongoing removal of frontbuffer pointer. This
- * function is protected by i915->display.fb_tracking.lock
- *
- * Return: pointer to frontbuffer which was set.
- */
-static inline struct intel_frontbuffer *
-i915_gem_object_set_frontbuffer(struct drm_i915_gem_object *obj,
-				struct intel_frontbuffer *front)
-{
-	struct intel_frontbuffer *cur = front;
-
-	if (!front) {
-		RCU_INIT_POINTER(obj->frontbuffer, NULL);
-	} else if (rcu_access_pointer(obj->frontbuffer)) {
-		cur = rcu_dereference_protected(obj->frontbuffer, true);
-		kref_get(&cur->ref);
-	} else {
-		drm_gem_object_get(intel_bo_to_drm_bo(obj));
-		rcu_assign_pointer(obj->frontbuffer, front);
-	}
-
-	return cur;
-}
 
 #endif
+103
drivers/gpu/drm/i915/gem/i915_gem_object_frontbuffer.h
···
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2023 Intel Corporation
+ */
+
+#ifndef __I915_GEM_OBJECT_FRONTBUFFER_H__
+#define __I915_GEM_OBJECT_FRONTBUFFER_H__
+
+#include <linux/kref.h>
+#include <linux/rcupdate.h>
+
+#include "display/intel_frontbuffer.h"
+#include "i915_gem_object_types.h"
+
+void __i915_gem_object_flush_frontbuffer(struct drm_i915_gem_object *obj,
+					 enum fb_op_origin origin);
+void __i915_gem_object_invalidate_frontbuffer(struct drm_i915_gem_object *obj,
+					      enum fb_op_origin origin);
+
+static inline void
+i915_gem_object_flush_frontbuffer(struct drm_i915_gem_object *obj,
+				  enum fb_op_origin origin)
+{
+	if (unlikely(rcu_access_pointer(obj->frontbuffer)))
+		__i915_gem_object_flush_frontbuffer(obj, origin);
+}
+
+static inline void
+i915_gem_object_invalidate_frontbuffer(struct drm_i915_gem_object *obj,
+				       enum fb_op_origin origin)
+{
+	if (unlikely(rcu_access_pointer(obj->frontbuffer)))
+		__i915_gem_object_invalidate_frontbuffer(obj, origin);
+}
+
+/**
+ * i915_gem_object_get_frontbuffer - Get the object's frontbuffer
+ * @obj: The object whose frontbuffer to get.
+ *
+ * Get pointer to object's frontbuffer if such exists. Please note that RCU
+ * mechanism is used to handle e.g. ongoing removal of frontbuffer pointer.
+ *
+ * Return: pointer to object's frontbuffer is such exists or NULL
+ */
+static inline struct intel_frontbuffer *
+i915_gem_object_get_frontbuffer(const struct drm_i915_gem_object *obj)
+{
+	struct intel_frontbuffer *front;
+
+	if (likely(!rcu_access_pointer(obj->frontbuffer)))
+		return NULL;
+
+	rcu_read_lock();
+	do {
+		front = rcu_dereference(obj->frontbuffer);
+		if (!front)
+			break;
+
+		if (unlikely(!kref_get_unless_zero(&front->ref)))
+			continue;
+
+		if (likely(front == rcu_access_pointer(obj->frontbuffer)))
+			break;
+
+		intel_frontbuffer_put(front);
+	} while (1);
+	rcu_read_unlock();
+
+	return front;
+}
+
+/**
+ * i915_gem_object_set_frontbuffer - Set the object's frontbuffer
+ * @obj: The object whose frontbuffer to set.
+ * @front: The frontbuffer to set
+ *
+ * Set object's frontbuffer pointer. If frontbuffer is already set for the
+ * object keep it and return it's pointer to the caller. Please note that RCU
+ * mechanism is used to handle e.g. ongoing removal of frontbuffer pointer. This
+ * function is protected by i915->display.fb_tracking.lock
+ *
+ * Return: pointer to frontbuffer which was set.
+ */
+static inline struct intel_frontbuffer *
+i915_gem_object_set_frontbuffer(struct drm_i915_gem_object *obj,
+				struct intel_frontbuffer *front)
+{
+	struct intel_frontbuffer *cur = front;
+
+	if (!front) {
+		RCU_INIT_POINTER(obj->frontbuffer, NULL);
+	} else if (rcu_access_pointer(obj->frontbuffer)) {
+		cur = rcu_dereference_protected(obj->frontbuffer, true);
+		kref_get(&cur->ref);
+	} else {
+		drm_gem_object_get(intel_bo_to_drm_bo(obj));
+		rcu_assign_pointer(obj->frontbuffer, front);
+	}
+
+	return cur;
+}
+
+#endif
+1
drivers/gpu/drm/i915/gem/i915_gem_phys.c
···
 #include "gt/intel_gt.h"
 #include "i915_drv.h"
 #include "i915_gem_object.h"
+#include "i915_gem_object_frontbuffer.h"
 #include "i915_gem_region.h"
 #include "i915_gem_tiling.h"
 #include "i915_scatterlist.h"
+15 -24
drivers/gpu/drm/i915/gem/selftests/i915_gem_client_blt.c
···
 enum client_tiling {
 	CLIENT_TILING_LINEAR,
 	CLIENT_TILING_X,
-	CLIENT_TILING_Y,
-	CLIENT_TILING_4,
+	CLIENT_TILING_Y,  /* Y-major, either Tile4 (Xe_HP and beyond) or legacy TileY */
 	CLIENT_NUM_TILING_TYPES
 };
 
···
 		  BLIT_CCTL_DST_MOCS(gt->mocs.uc_index));
 
 	src_pitch = t->width; /* in dwords */
-	if (src->tiling == CLIENT_TILING_4) {
+	if (src->tiling == CLIENT_TILING_Y) {
 		src_tiles = XY_FAST_COPY_BLT_D0_SRC_TILE_MODE(YMAJOR);
-		src_4t = XY_FAST_COPY_BLT_D1_SRC_TILE4;
-	} else if (src->tiling == CLIENT_TILING_Y) {
-		src_tiles = XY_FAST_COPY_BLT_D0_SRC_TILE_MODE(YMAJOR);
+		if (GRAPHICS_VER_FULL(to_i915(batch->base.dev)) >= IP_VER(12, 50))
+			src_4t = XY_FAST_COPY_BLT_D1_SRC_TILE4;
 	} else if (src->tiling == CLIENT_TILING_X) {
 		src_tiles = XY_FAST_COPY_BLT_D0_SRC_TILE_MODE(TILE_X);
 	} else {
···
 	}
 
 	dst_pitch = t->width; /* in dwords */
-	if (dst->tiling == CLIENT_TILING_4) {
+	if (dst->tiling == CLIENT_TILING_Y) {
 		dst_tiles = XY_FAST_COPY_BLT_D0_DST_TILE_MODE(YMAJOR);
-		dst_4t = XY_FAST_COPY_BLT_D1_DST_TILE4;
-	} else if (dst->tiling == CLIENT_TILING_Y) {
-		dst_tiles = XY_FAST_COPY_BLT_D0_DST_TILE_MODE(YMAJOR);
+		if (GRAPHICS_VER_FULL(to_i915(batch->base.dev)) >= IP_VER(12, 50))
+			dst_4t = XY_FAST_COPY_BLT_D1_DST_TILE4;
 	} else if (dst->tiling == CLIENT_TILING_X) {
 		dst_tiles = XY_FAST_COPY_BLT_D0_DST_TILE_MODE(TILE_X);
 	} else {
···
 		t->buffers[i].vma = vma;
 		t->buffers[i].tiling =
 			i915_prandom_u32_max_state(CLIENT_NUM_TILING_TYPES, prng);
-
-		/* Platforms support either TileY or Tile4, not both */
-		if (HAS_4TILE(i915) && t->buffers[i].tiling == CLIENT_TILING_Y)
-			t->buffers[i].tiling = CLIENT_TILING_4;
-		else if (!HAS_4TILE(i915) && t->buffers[i].tiling == CLIENT_TILING_4)
-			t->buffers[i].tiling = CLIENT_TILING_Y;
 	}
 
 	return 0;
···
 
 	y = div64_u64_rem(v, stride, &x);
 
-	if (tiling == CLIENT_TILING_4) {
-		v = linear_x_y_to_ftiled_pos(x_pos, y_pos, stride, 32);
-
-		/* no swizzling for f-tiling */
-		swizzle = I915_BIT_6_SWIZZLE_NONE;
-	} else if (tiling == CLIENT_TILING_X) {
+	if (tiling == CLIENT_TILING_X) {
 		v = div64_u64_rem(y, 8, &y) * stride * 8;
 		v += y * 512;
 		v += div64_u64_rem(x, 512, &x) << 12;
 		v += x;
 
 		swizzle = gt->ggtt->bit_6_swizzle_x;
+	} else if (GRAPHICS_VER_FULL(gt->i915) >= IP_VER(12, 50)) {
+		/* Y-major tiling layout is Tile4 for Xe_HP and beyond */
+		v = linear_x_y_to_ftiled_pos(x_pos, y_pos, stride, 32);
+
+		/* no swizzling for f-tiling */
+		swizzle = I915_BIT_6_SWIZZLE_NONE;
 	} else {
 		const unsigned int ytile_span = 16;
 		const unsigned int ytile_height = 512;
···
 	switch (tiling) {
 	case CLIENT_TILING_LINEAR: return "linear";
 	case CLIENT_TILING_X: return "X";
-	case CLIENT_TILING_Y: return "Y";
-	case CLIENT_TILING_4: return "F";
+	case CLIENT_TILING_Y: return "Y / 4";
 	default: return "unknown";
 	}
 }
+18 -18
drivers/gpu/drm/i915/gt/gen8_ppgtt.c
···
 	GEM_BUG_ON(end > vm->total >> GEN8_PTE_SHIFT);
 
 	len = gen8_pd_range(start, end, lvl--, &idx);
-	DBG("%s(%p):{ lvl:%d, start:%llx, end:%llx, idx:%d, len:%d, used:%d }\n",
-	    __func__, vm, lvl + 1, start, end,
-	    idx, len, atomic_read(px_used(pd)));
+	GTT_TRACE("%s(%p):{ lvl:%d, start:%llx, end:%llx, idx:%d, len:%d, used:%d }\n",
+		  __func__, vm, lvl + 1, start, end,
+		  idx, len, atomic_read(px_used(pd)));
 	GEM_BUG_ON(!len || len >= atomic_read(px_used(pd)));
 
 	do {
···
 
 		if (atomic_fetch_inc(&pt->used) >> gen8_pd_shift(1) &&
 		    gen8_pd_contains(start, end, lvl)) {
-			DBG("%s(%p):{ lvl:%d, idx:%d, start:%llx, end:%llx } removing pd\n",
-			    __func__, vm, lvl + 1, idx, start, end);
+			GTT_TRACE("%s(%p):{ lvl:%d, idx:%d, start:%llx, end:%llx } removing pd\n",
+				  __func__, vm, lvl + 1, idx, start, end);
 			clear_pd_entry(pd, idx, scratch);
 			__gen8_ppgtt_cleanup(vm, as_pd(pt), I915_PDES, lvl);
 			start += (u64)I915_PDES << gen8_pd_shift(lvl);
···
 			u64 *vaddr;
 
 			count = gen8_pt_count(start, end);
-			DBG("%s(%p):{ lvl:%d, start:%llx, end:%llx, idx:%d, len:%d, used:%d } removing pte\n",
-			    __func__, vm, lvl, start, end,
-			    gen8_pd_index(start, 0), count,
-			    atomic_read(&pt->used));
+			GTT_TRACE("%s(%p):{ lvl:%d, start:%llx, end:%llx, idx:%d, len:%d, used:%d } removing pte\n",
+				  __func__, vm, lvl, start, end,
+				  gen8_pd_index(start, 0), count,
+				  atomic_read(&pt->used));
 			GEM_BUG_ON(!count || count >= atomic_read(&pt->used));
 
 			num_ptes = count;
···
 	GEM_BUG_ON(end > vm->total >> GEN8_PTE_SHIFT);
 
 	len = gen8_pd_range(*start, end, lvl--, &idx);
-	DBG("%s(%p):{ lvl:%d, start:%llx, end:%llx, idx:%d, len:%d, used:%d }\n",
-	    __func__, vm, lvl + 1, *start, end,
-	    idx, len, atomic_read(px_used(pd)));
+	GTT_TRACE("%s(%p):{ lvl:%d, start:%llx, end:%llx, idx:%d, len:%d, used:%d }\n",
+		  __func__, vm, lvl + 1, *start, end,
+		  idx, len, atomic_read(px_used(pd)));
 	GEM_BUG_ON(!len || (idx + len - 1) >> gen8_pd_shift(1));
 
 	spin_lock(&pd->lock);
···
 		if (!pt) {
 			spin_unlock(&pd->lock);
 
-			DBG("%s(%p):{ lvl:%d, idx:%d } allocating new tree\n",
-			    __func__, vm, lvl + 1, idx);
+			GTT_TRACE("%s(%p):{ lvl:%d, idx:%d } allocating new tree\n",
+				  __func__, vm, lvl + 1, idx);
 
 			pt = stash->pt[!!lvl];
 			__i915_gem_object_pin_pages(pt->base);
···
 		} else {
 			unsigned int count = gen8_pt_count(*start, end);
 
-			DBG("%s(%p):{ lvl:%d, start:%llx, end:%llx, idx:%d, len:%d, used:%d } inserting pte\n",
-			    __func__, vm, lvl, *start, end,
-			    gen8_pd_index(*start, 0), count,
-			    atomic_read(&pt->used));
+			GTT_TRACE("%s(%p):{ lvl:%d, start:%llx, end:%llx, idx:%d, len:%d, used:%d } inserting pte\n",
+				  __func__, vm, lvl, *start, end,
+				  gen8_pd_index(*start, 0), count,
+				  atomic_read(&pt->used));
 
 			atomic_add(count, &pt->used);
 			/* All other pdes may be simultaneously removed */
+2 -2
drivers/gpu/drm/i915/gt/intel_gtt.h
···
 #define I915_GFP_ALLOW_FAIL (GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_NOWARN)
 
 #if IS_ENABLED(CONFIG_DRM_I915_TRACE_GTT)
-#define DBG(...) trace_printk(__VA_ARGS__)
+#define GTT_TRACE(...) trace_printk(__VA_ARGS__)
 #else
-#define DBG(...)
+#define GTT_TRACE(...)
 #endif
 
 #define NALLOC 3 /* 1 normal, 1 for concurrent threads, 1 for preallocation */
+5 -5
drivers/gpu/drm/i915/i915_driver.c
··· 183 183 pre |= IS_ICELAKE(dev_priv) && INTEL_REVID(dev_priv) < 0x7; 184 184 pre |= IS_TIGERLAKE(dev_priv) && INTEL_REVID(dev_priv) < 0x1; 185 185 pre |= IS_DG1(dev_priv) && INTEL_REVID(dev_priv) < 0x1; 186 + pre |= IS_DG2_G10(dev_priv) && INTEL_REVID(dev_priv) < 0x8; 187 + pre |= IS_DG2_G11(dev_priv) && INTEL_REVID(dev_priv) < 0x5; 188 + pre |= IS_DG2_G12(dev_priv) && INTEL_REVID(dev_priv) < 0x1; 186 189 187 190 if (pre) { 188 191 drm_err(&dev_priv->drm, "This is a pre-production stepping. " ··· 699 696 700 697 intel_device_info_print(INTEL_INFO(dev_priv), 701 698 RUNTIME_INFO(dev_priv), &p); 702 - intel_display_device_info_print(DISPLAY_INFO(dev_priv), 703 - DISPLAY_RUNTIME_INFO(dev_priv), &p); 704 699 i915_print_iommu_status(dev_priv, &p); 705 700 for_each_gt(gt, dev_priv, i) 706 701 intel_gt_info_print(&gt->info, &p); ··· 732 731 733 732 /* Set up device info and initial runtime info. */ 734 733 intel_device_info_driver_create(i915, pdev->device, match_info); 734 + 735 + intel_display_device_probe(i915); 735 736 736 737 return i915; 737 738 } ··· 1569 1566 if (root_pdev) 1570 1567 pci_d3cold_disable(root_pdev); 1571 1568 1572 - rpm->suspended = true; 1573 - 1574 1569 /* 1575 1570 * FIXME: We really should find a document that references the arguments 1576 1571 * used below! ··· 1619 1618 disable_rpm_wakeref_asserts(rpm); 1620 1619 1621 1620 intel_opregion_notify_adapter(dev_priv, PCI_D0); 1622 - rpm->suspended = false; 1623 1621 1624 1622 root_pdev = pcie_find_root_port(pdev); 1625 1623 if (root_pdev)
+2 -2
drivers/gpu/drm/i915/i915_driver.h
··· 15 15 16 16 #define DRIVER_NAME "i915" 17 17 #define DRIVER_DESC "Intel Graphics" 18 - #define DRIVER_DATE "20201103" 19 - #define DRIVER_TIMESTAMP 1604406085 18 + #define DRIVER_DATE "20230929" 19 + #define DRIVER_TIMESTAMP 1695980603 20 20 21 21 extern const struct dev_pm_ops i915_pm_ops; 22 22
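The two macros above are kept in sync: DRIVER_TIMESTAMP is the Unix epoch second corresponding to DRIVER_DATE. A quick user-space check of that relationship (the helper name is illustrative):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

#define DRIVER_DATE "20230929"
#define DRIVER_TIMESTAMP 1695980603

/* Render a Unix epoch timestamp as YYYYMMDD in UTC. */
static void timestamp_to_date(time_t ts, char *buf, size_t len)
{
	struct tm tm;

	gmtime_r(&ts, &tm);
	strftime(buf, len, "%Y%m%d", &tm);
}
```

1695980603 falls on 2023-09-29 UTC, matching both DRIVER_DATE and the date in the pull request tag.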
+2 -16
drivers/gpu/drm/i915/i915_drv.h
··· 437 437 (MEDIA_VER(i915) >= (from) && MEDIA_VER(i915) <= (until)) 438 438 439 439 #define DISPLAY_VER(i915) (DISPLAY_RUNTIME_INFO(i915)->ip.ver) 440 + #define DISPLAY_VER_FULL(i915) IP_VER(DISPLAY_RUNTIME_INFO(i915)->ip.ver, \ 441 + DISPLAY_RUNTIME_INFO(i915)->ip.rel) 440 442 #define IS_DISPLAY_VER(i915, from, until) \ 441 443 (DISPLAY_VER(i915) >= (from) && DISPLAY_VER(i915) <= (until)) 442 444 ··· 646 644 #define IS_TIGERLAKE_UY(i915) \ 647 645 IS_SUBPLATFORM(i915, INTEL_TIGERLAKE, INTEL_SUBPLATFORM_UY) 648 646 649 - 650 - 651 - 652 - 653 - 654 - 655 - 656 647 #define IS_XEHPSDV_GRAPHICS_STEP(__i915, since, until) \ 657 648 (IS_XEHPSDV(__i915) && IS_GRAPHICS_STEP(__i915, since, until)) 658 - 659 - #define IS_MTL_DISPLAY_STEP(__i915, since, until) \ 660 - (IS_METEORLAKE(__i915) && \ 661 - IS_DISPLAY_STEP(__i915, since, until)) 662 - 663 - #define IS_DG2_DISPLAY_STEP(__i915, since, until) \ 664 - (IS_DG2(__i915) && \ 665 - IS_DISPLAY_STEP(__i915, since, until)) 666 649 667 650 #define IS_PVC_BD_STEP(__i915, since, until) \ 668 651 (IS_PONTEVECCHIO(__i915) && \ ··· 693 706 #define CMDPARSER_USES_GGTT(i915) (GRAPHICS_VER(i915) == 7) 694 707 695 708 #define HAS_LLC(i915) (INTEL_INFO(i915)->has_llc) 696 - #define HAS_4TILE(i915) (INTEL_INFO(i915)->has_4tile) 697 709 #define HAS_SNOOP(i915) (INTEL_INFO(i915)->has_snoop) 698 710 #define HAS_EDRAM(i915) ((i915)->edram_size_mb) 699 711 #define HAS_SECURE_BATCHES(i915) (GRAPHICS_VER(i915) < 6)
+1 -1
drivers/gpu/drm/i915/i915_gem.c
··· 40 40 #include <drm/drm_vma_manager.h> 41 41 42 42 #include "display/intel_display.h" 43 - #include "display/intel_frontbuffer.h" 44 43 45 44 #include "gem/i915_gem_clflush.h" 46 45 #include "gem/i915_gem_context.h" 47 46 #include "gem/i915_gem_ioctls.h" 48 47 #include "gem/i915_gem_mman.h" 48 + #include "gem/i915_gem_object_frontbuffer.h" 49 49 #include "gem/i915_gem_pm.h" 50 50 #include "gem/i915_gem_region.h" 51 51 #include "gem/i915_gem_userptr.h"
+1 -1
drivers/gpu/drm/i915/i915_getparam.c
··· 109 109 return value; 110 110 break; 111 111 case I915_PARAM_PXP_STATUS: 112 - value = intel_pxp_get_readiness_status(i915->pxp); 112 + value = intel_pxp_get_readiness_status(i915->pxp, 0); 113 113 if (value < 0) 114 114 return value; 115 115 break;
+2 -2
drivers/gpu/drm/i915/i915_gpu_error.c
··· 1757 1757 struct intel_uncore *uncore = gt->_gt->uncore; 1758 1758 struct drm_i915_private *i915 = uncore->i915; 1759 1759 1760 - if (GRAPHICS_VER(i915) >= 6) 1760 + if (DISPLAY_VER(i915) >= 6 && DISPLAY_VER(i915) < 20) 1761 1761 gt->derrmr = intel_uncore_read(uncore, DERRMR); 1762 1762 1763 1763 if (GRAPHICS_VER(i915) >= 8) ··· 1972 1972 struct drm_i915_private *i915 = error->i915; 1973 1973 1974 1974 error->wakelock = atomic_read(&i915->runtime_pm.wakeref_count); 1975 - error->suspended = i915->runtime_pm.suspended; 1975 + error->suspended = pm_runtime_suspended(i915->drm.dev); 1976 1976 1977 1977 error->iommu = i915_vtd_active(i915); 1978 1978 error->reset_count = i915_reset_count(&i915->gpu_error);
+2
drivers/gpu/drm/i915/i915_irq.c
··· 751 751 752 752 GEN3_IRQ_RESET(uncore, GEN11_GU_MISC_); 753 753 GEN3_IRQ_RESET(uncore, GEN8_PCU_); 754 + 755 + intel_uncore_write(uncore, GEN11_GFX_MSTR_IRQ, ~0); 754 756 } 755 757 756 758 static void cherryview_irq_reset(struct drm_i915_private *dev_priv)
-1
drivers/gpu/drm/i915/i915_pci.c
··· 713 713 .has_3d_pipeline = 1, \ 714 714 .has_64bit_reloc = 1, \ 715 715 .has_flat_ccs = 1, \ 716 - .has_4tile = 1, \ 717 716 .has_global_mocs = 1, \ 718 717 .has_gt_uc = 1, \ 719 718 .has_llc = 1, \
+4 -277
drivers/gpu/drm/i915/i915_reg.h
··· 2693 2693 #define PIPE_MISC2_FLIP_INFO_PLANE_SEL(plane_id) REG_FIELD_PREP(PIPE_MISC2_FLIP_INFO_PLANE_SEL_MASK, (plane_id)) 2694 2694 #define PIPE_MISC2(pipe) _MMIO_PIPE(pipe, _PIPE_MISC2_A, _PIPE_MISC2_B) 2695 2695 2696 - /* Skylake+ pipe bottom (background) color */ 2697 - #define _SKL_BOTTOM_COLOR_A 0x70034 2698 - #define _SKL_BOTTOM_COLOR_B 0x71034 2699 - #define SKL_BOTTOM_COLOR_GAMMA_ENABLE REG_BIT(31) 2700 - #define SKL_BOTTOM_COLOR_CSC_ENABLE REG_BIT(30) 2701 - #define SKL_BOTTOM_COLOR(pipe) _MMIO_PIPE(pipe, _SKL_BOTTOM_COLOR_A, _SKL_BOTTOM_COLOR_B) 2702 - 2703 2696 #define _ICL_PIPE_A_STATUS 0x70058 2704 2697 #define ICL_PIPESTATUS(pipe) _MMIO_PIPE2(pipe, _ICL_PIPE_A_STATUS) 2705 2698 #define PIPE_STATUS_UNDERRUN REG_BIT(31)
··· 4206 4213 #define GLK_PS_COEF_DATA_SET(pipe, id, set) _MMIO_PIPE(pipe, \ 4207 4214 _ID(id, _PS_COEF_SET0_DATA_1A, _PS_COEF_SET0_DATA_2A) + (set) * 8, \ 4208 4215 _ID(id, _PS_COEF_SET0_DATA_1B, _PS_COEF_SET0_DATA_2B) + (set) * 8) 4209 - /* legacy palette */ 4210 - #define _LGC_PALETTE_A 0x4a000 4211 - #define _LGC_PALETTE_B 0x4a800 4212 - /* see PALETTE_* for the bits */ 4213 - #define LGC_PALETTE(pipe, i) _MMIO(_PIPE(pipe, _LGC_PALETTE_A, _LGC_PALETTE_B) + (i) * 4) 4214 - 4215 - /* ilk/snb precision palette */ 4216 - #define _PREC_PALETTE_A 0x4b000 4217 - #define _PREC_PALETTE_B 0x4c000 4218 - /* 10bit mode */ 4219 - #define PREC_PALETTE_10_RED_MASK REG_GENMASK(29, 20) 4220 - #define PREC_PALETTE_10_GREEN_MASK REG_GENMASK(19, 10) 4221 - #define PREC_PALETTE_10_BLUE_MASK REG_GENMASK(9, 0) 4222 - /* 12.4 interpolated mode ldw */ 4223 - #define PREC_PALETTE_12P4_RED_LDW_MASK REG_GENMASK(29, 24) 4224 - #define PREC_PALETTE_12P4_GREEN_LDW_MASK REG_GENMASK(19, 14) 4225 - #define PREC_PALETTE_12P4_BLUE_LDW_MASK REG_GENMASK(9, 4) 4226 - /* 12.4 interpolated mode udw */ 4227 - #define PREC_PALETTE_12P4_RED_UDW_MASK REG_GENMASK(29, 20) 4228 - #define PREC_PALETTE_12P4_GREEN_UDW_MASK REG_GENMASK(19, 10) 4229 - #define PREC_PALETTE_12P4_BLUE_UDW_MASK REG_GENMASK(9, 0) 4230 - #define PREC_PALETTE(pipe, i) _MMIO(_PIPE(pipe, _PREC_PALETTE_A, _PREC_PALETTE_B) + (i) * 4) 4231 - 4232 - #define _PREC_PIPEAGCMAX 0x4d000 4233 - #define _PREC_PIPEBGCMAX 0x4d010 4234 - #define PREC_PIPEGCMAX(pipe, i) _MMIO(_PIPE(pipe, _PIPEAGCMAX, _PIPEBGCMAX) + (i) * 4) /* u1.16 */ 4235 - 4236 - #define _GAMMA_MODE_A 0x4a480 4237 - #define _GAMMA_MODE_B 0x4ac80 4238 - #define GAMMA_MODE(pipe) _MMIO_PIPE(pipe, _GAMMA_MODE_A, _GAMMA_MODE_B) 4239 - #define PRE_CSC_GAMMA_ENABLE REG_BIT(31) /* icl+ */ 4240 - #define POST_CSC_GAMMA_ENABLE REG_BIT(30) /* icl+ */ 4241 - #define PALETTE_ANTICOL_DISABLE REG_BIT(15) /* skl+ */ 4242 - #define GAMMA_MODE_MODE_MASK REG_GENMASK(1, 0) 4243 - #define GAMMA_MODE_MODE_8BIT REG_FIELD_PREP(GAMMA_MODE_MODE_MASK, 0) 4244 - #define GAMMA_MODE_MODE_10BIT REG_FIELD_PREP(GAMMA_MODE_MODE_MASK, 1) 4245 - #define GAMMA_MODE_MODE_12BIT REG_FIELD_PREP(GAMMA_MODE_MODE_MASK, 2) 4246 - #define GAMMA_MODE_MODE_SPLIT REG_FIELD_PREP(GAMMA_MODE_MODE_MASK, 3) /* ivb-bdw */ 4247 - #define GAMMA_MODE_MODE_12BIT_MULTI_SEG REG_FIELD_PREP(GAMMA_MODE_MODE_MASK, 3) /* icl-tgl */ 4248 4216 4249 4217 /* Display Internal Timeout Register */ 4250 4218 #define RM_TIMEOUT _MMIO(0x42060)
··· 4467 4513 #define PICAINTERRUPT_IMR _MMIO(0x16FE54) 4468 4514 #define PICAINTERRUPT_IIR _MMIO(0x16FE58) 4469 4515 #define PICAINTERRUPT_IER _MMIO(0x16FE5C) 4470 - 4471 4516 #define XELPDP_DP_ALT_HOTPLUG(hpd_pin) REG_BIT(16 + _HPD_PIN_TC(hpd_pin)) 4472 4517 #define XELPDP_DP_ALT_HOTPLUG_MASK REG_GENMASK(19, 16) 4473 - 4474 4518 #define XELPDP_AUX_TC(hpd_pin) REG_BIT(8 + _HPD_PIN_TC(hpd_pin)) 4475 4519 #define XELPDP_AUX_TC_MASK REG_GENMASK(11, 8) 4476 - 4520 + #define XE2LPD_AUX_DDI(hpd_pin) REG_BIT(6 + _HPD_PIN_DDI(hpd_pin)) 4521 + #define XE2LPD_AUX_DDI_MASK REG_GENMASK(7, 6) 4477 4522 #define XELPDP_TBT_HOTPLUG(hpd_pin) REG_BIT(_HPD_PIN_TC(hpd_pin)) 4478 4523 #define XELPDP_TBT_HOTPLUG_MASK REG_GENMASK(3, 0) 4479 4524 
··· 5882 5929 #define CDCLK_FREQ_540 REG_FIELD_PREP(CDCLK_FREQ_SEL_MASK, 1) 5883 5930 #define CDCLK_FREQ_337_308 REG_FIELD_PREP(CDCLK_FREQ_SEL_MASK, 2) 5884 5931 #define CDCLK_FREQ_675_617 REG_FIELD_PREP(CDCLK_FREQ_SEL_MASK, 3) 5932 + #define MDCLK_SOURCE_SEL_CDCLK_PLL REG_BIT(25) 5885 5933 #define BXT_CDCLK_CD2X_DIV_SEL_MASK REG_GENMASK(23, 22) 5886 5934 #define BXT_CDCLK_CD2X_DIV_SEL_1 REG_FIELD_PREP(BXT_CDCLK_CD2X_DIV_SEL_MASK, 0) 5887 5935 #define BXT_CDCLK_CD2X_DIV_SEL_1_5 REG_FIELD_PREP(BXT_CDCLK_CD2X_DIV_SEL_MASK, 1)
··· 6223 6269 #define WM_DBG_DISALLOW_MAXFIFO (1 << 1) 6224 6270 #define WM_DBG_DISALLOW_SPRITE (1 << 2) 6225 6271 6226 - /* pipe CSC */ 6227 - #define _PIPE_A_CSC_COEFF_RY_GY 0x49010 6228 - #define _PIPE_A_CSC_COEFF_BY 0x49014 6229 - #define _PIPE_A_CSC_COEFF_RU_GU 0x49018 6230 - #define _PIPE_A_CSC_COEFF_BU 0x4901c 6231 - #define _PIPE_A_CSC_COEFF_RV_GV 0x49020 6232 - #define _PIPE_A_CSC_COEFF_BV 0x49024 6233 - 6234 - #define _PIPE_A_CSC_MODE 0x49028 6235 - #define ICL_CSC_ENABLE (1 << 31) /* icl+ */ 6236 - #define ICL_OUTPUT_CSC_ENABLE (1 << 30) /* icl+ */ 6237 - #define CSC_BLACK_SCREEN_OFFSET (1 << 2) /* ilk/snb */ 6238 - #define CSC_POSITION_BEFORE_GAMMA (1 << 1) /* pre-glk */ 6239 - #define CSC_MODE_YUV_TO_RGB (1 << 0) /* ilk/snb */ 6240 - 6241 - #define _PIPE_A_CSC_PREOFF_HI 0x49030 6242 - #define _PIPE_A_CSC_PREOFF_ME 0x49034 6243 - #define _PIPE_A_CSC_PREOFF_LO 0x49038 6244 - #define _PIPE_A_CSC_POSTOFF_HI 0x49040 6245 - #define _PIPE_A_CSC_POSTOFF_ME 0x49044 6246 - #define _PIPE_A_CSC_POSTOFF_LO 0x49048 6247 - 6248 - #define _PIPE_B_CSC_COEFF_RY_GY 0x49110 6249 - #define _PIPE_B_CSC_COEFF_BY 0x49114 6250 - #define _PIPE_B_CSC_COEFF_RU_GU 0x49118 6251 - #define _PIPE_B_CSC_COEFF_BU 0x4911c 6252 - #define _PIPE_B_CSC_COEFF_RV_GV 0x49120 6253 - #define _PIPE_B_CSC_COEFF_BV 0x49124 6254 - #define _PIPE_B_CSC_MODE 0x49128 6255 - #define _PIPE_B_CSC_PREOFF_HI 0x49130 6256 - #define _PIPE_B_CSC_PREOFF_ME 0x49134 6257 - #define _PIPE_B_CSC_PREOFF_LO 0x49138 6258 - #define _PIPE_B_CSC_POSTOFF_HI 0x49140 6259 - #define _PIPE_B_CSC_POSTOFF_ME 0x49144 6260 - #define _PIPE_B_CSC_POSTOFF_LO 0x49148 6261 - 6262 - #define PIPE_CSC_COEFF_RY_GY(pipe) _MMIO_PIPE(pipe, _PIPE_A_CSC_COEFF_RY_GY, _PIPE_B_CSC_COEFF_RY_GY) 6263 - #define PIPE_CSC_COEFF_BY(pipe) _MMIO_PIPE(pipe, _PIPE_A_CSC_COEFF_BY, _PIPE_B_CSC_COEFF_BY) 6264 - #define PIPE_CSC_COEFF_RU_GU(pipe) _MMIO_PIPE(pipe, _PIPE_A_CSC_COEFF_RU_GU, _PIPE_B_CSC_COEFF_RU_GU) 6265 - #define PIPE_CSC_COEFF_BU(pipe) _MMIO_PIPE(pipe, _PIPE_A_CSC_COEFF_BU, _PIPE_B_CSC_COEFF_BU) 6266 - #define PIPE_CSC_COEFF_RV_GV(pipe) _MMIO_PIPE(pipe, _PIPE_A_CSC_COEFF_RV_GV, _PIPE_B_CSC_COEFF_RV_GV) 6267 - #define PIPE_CSC_COEFF_BV(pipe) _MMIO_PIPE(pipe, _PIPE_A_CSC_COEFF_BV, _PIPE_B_CSC_COEFF_BV) 6268 - #define PIPE_CSC_MODE(pipe) _MMIO_PIPE(pipe, _PIPE_A_CSC_MODE, _PIPE_B_CSC_MODE) 6269 - #define PIPE_CSC_PREOFF_HI(pipe) _MMIO_PIPE(pipe, _PIPE_A_CSC_PREOFF_HI, _PIPE_B_CSC_PREOFF_HI) 6270 - #define PIPE_CSC_PREOFF_ME(pipe) _MMIO_PIPE(pipe, _PIPE_A_CSC_PREOFF_ME, _PIPE_B_CSC_PREOFF_ME) 6271 - #define PIPE_CSC_PREOFF_LO(pipe) _MMIO_PIPE(pipe, _PIPE_A_CSC_PREOFF_LO, _PIPE_B_CSC_PREOFF_LO) 6272 - #define PIPE_CSC_POSTOFF_HI(pipe) _MMIO_PIPE(pipe, _PIPE_A_CSC_POSTOFF_HI, _PIPE_B_CSC_POSTOFF_HI) 6273 - #define PIPE_CSC_POSTOFF_ME(pipe) _MMIO_PIPE(pipe, _PIPE_A_CSC_POSTOFF_ME, _PIPE_B_CSC_POSTOFF_ME) 6274 - #define PIPE_CSC_POSTOFF_LO(pipe) _MMIO_PIPE(pipe, _PIPE_A_CSC_POSTOFF_LO, _PIPE_B_CSC_POSTOFF_LO) 6275 - 6276 - /* Pipe Output CSC */ 6277 - #define _PIPE_A_OUTPUT_CSC_COEFF_RY_GY 0x49050 6278 - #define _PIPE_A_OUTPUT_CSC_COEFF_BY 0x49054 6279 - #define _PIPE_A_OUTPUT_CSC_COEFF_RU_GU 0x49058 6280 - #define _PIPE_A_OUTPUT_CSC_COEFF_BU 0x4905c 6281 - #define _PIPE_A_OUTPUT_CSC_COEFF_RV_GV 0x49060 6282 - #define _PIPE_A_OUTPUT_CSC_COEFF_BV 0x49064 6283 - #define _PIPE_A_OUTPUT_CSC_PREOFF_HI 0x49068 6284 - #define _PIPE_A_OUTPUT_CSC_PREOFF_ME 0x4906c 6285 - #define _PIPE_A_OUTPUT_CSC_PREOFF_LO 0x49070 6286 - #define _PIPE_A_OUTPUT_CSC_POSTOFF_HI 0x49074 6287 - #define _PIPE_A_OUTPUT_CSC_POSTOFF_ME 0x49078 6288 - #define _PIPE_A_OUTPUT_CSC_POSTOFF_LO 0x4907c 6289 - 6290 - #define _PIPE_B_OUTPUT_CSC_COEFF_RY_GY 0x49150 6291 - #define _PIPE_B_OUTPUT_CSC_COEFF_BY 0x49154 6292 - #define _PIPE_B_OUTPUT_CSC_COEFF_RU_GU 0x49158 6293 - #define _PIPE_B_OUTPUT_CSC_COEFF_BU 0x4915c 6294 - #define _PIPE_B_OUTPUT_CSC_COEFF_RV_GV 0x49160 6295 - #define _PIPE_B_OUTPUT_CSC_COEFF_BV 0x49164 6296 - #define _PIPE_B_OUTPUT_CSC_PREOFF_HI 0x49168 6297 - #define _PIPE_B_OUTPUT_CSC_PREOFF_ME 0x4916c 6298 - #define _PIPE_B_OUTPUT_CSC_PREOFF_LO 0x49170 6299 - #define _PIPE_B_OUTPUT_CSC_POSTOFF_HI 0x49174 6300 - #define _PIPE_B_OUTPUT_CSC_POSTOFF_ME 0x49178 6301 - #define _PIPE_B_OUTPUT_CSC_POSTOFF_LO 0x4917c 6302 - 6303 - #define PIPE_CSC_OUTPUT_COEFF_RY_GY(pipe) _MMIO_PIPE(pipe,\ 6304 - _PIPE_A_OUTPUT_CSC_COEFF_RY_GY,\ 6305 - _PIPE_B_OUTPUT_CSC_COEFF_RY_GY) 6306 - #define PIPE_CSC_OUTPUT_COEFF_BY(pipe) _MMIO_PIPE(pipe, \ 6307 - _PIPE_A_OUTPUT_CSC_COEFF_BY, \ 6308 - _PIPE_B_OUTPUT_CSC_COEFF_BY) 6309 - #define PIPE_CSC_OUTPUT_COEFF_RU_GU(pipe) _MMIO_PIPE(pipe, \ 6310 - _PIPE_A_OUTPUT_CSC_COEFF_RU_GU, \ 6311 - _PIPE_B_OUTPUT_CSC_COEFF_RU_GU) 6312 - #define PIPE_CSC_OUTPUT_COEFF_BU(pipe) _MMIO_PIPE(pipe, \ 6313 - _PIPE_A_OUTPUT_CSC_COEFF_BU, \ 6314 - _PIPE_B_OUTPUT_CSC_COEFF_BU) 6315 - #define PIPE_CSC_OUTPUT_COEFF_RV_GV(pipe) _MMIO_PIPE(pipe, \ 6316 - _PIPE_A_OUTPUT_CSC_COEFF_RV_GV, \ 6317 - _PIPE_B_OUTPUT_CSC_COEFF_RV_GV) 6318 - #define PIPE_CSC_OUTPUT_COEFF_BV(pipe) _MMIO_PIPE(pipe, \ 6319 - _PIPE_A_OUTPUT_CSC_COEFF_BV, \ 6320 - _PIPE_B_OUTPUT_CSC_COEFF_BV) 6321 - #define PIPE_CSC_OUTPUT_PREOFF_HI(pipe) _MMIO_PIPE(pipe, \ 6322 - _PIPE_A_OUTPUT_CSC_PREOFF_HI, \ 6323 - _PIPE_B_OUTPUT_CSC_PREOFF_HI) 6324 - #define PIPE_CSC_OUTPUT_PREOFF_ME(pipe) _MMIO_PIPE(pipe, \ 6325 - _PIPE_A_OUTPUT_CSC_PREOFF_ME, \ 6326 - _PIPE_B_OUTPUT_CSC_PREOFF_ME) 6327 - #define PIPE_CSC_OUTPUT_PREOFF_LO(pipe) _MMIO_PIPE(pipe, \ 6328 - _PIPE_A_OUTPUT_CSC_PREOFF_LO, \ 6329 - _PIPE_B_OUTPUT_CSC_PREOFF_LO) 6330 - #define PIPE_CSC_OUTPUT_POSTOFF_HI(pipe) _MMIO_PIPE(pipe, \ 6331 - _PIPE_A_OUTPUT_CSC_POSTOFF_HI, \ 6332 - _PIPE_B_OUTPUT_CSC_POSTOFF_HI) 6333 - #define PIPE_CSC_OUTPUT_POSTOFF_ME(pipe) _MMIO_PIPE(pipe, \ 6334 - _PIPE_A_OUTPUT_CSC_POSTOFF_ME, \ 6335 - _PIPE_B_OUTPUT_CSC_POSTOFF_ME) 6336 - #define PIPE_CSC_OUTPUT_POSTOFF_LO(pipe) _MMIO_PIPE(pipe, \ 6337 - _PIPE_A_OUTPUT_CSC_POSTOFF_LO, \ 6338 - _PIPE_B_OUTPUT_CSC_POSTOFF_LO) 6339 - 6340 - /* pipe degamma/gamma LUTs on IVB+ */ 6341 - #define _PAL_PREC_INDEX_A 0x4A400 6342 - #define _PAL_PREC_INDEX_B 0x4AC00 6343 - #define _PAL_PREC_INDEX_C 0x4B400 6344 - #define PAL_PREC_SPLIT_MODE REG_BIT(31) 6345 - #define PAL_PREC_AUTO_INCREMENT REG_BIT(15) 6346 - #define PAL_PREC_INDEX_VALUE_MASK REG_GENMASK(9, 0) 6347 - #define PAL_PREC_INDEX_VALUE(x) REG_FIELD_PREP(PAL_PREC_INDEX_VALUE_MASK, (x)) 6348 - #define _PAL_PREC_DATA_A 0x4A404 6349 - #define _PAL_PREC_DATA_B 0x4AC04 6350 - #define _PAL_PREC_DATA_C 0x4B404 6351 - /* see PREC_PALETTE_* for the bits */ 6352 - #define _PAL_PREC_GC_MAX_A 0x4A410 6353 - #define _PAL_PREC_GC_MAX_B 0x4AC10 6354 - #define _PAL_PREC_GC_MAX_C 0x4B410 6355 - #define _PAL_PREC_EXT_GC_MAX_A 0x4A420 6356 - #define _PAL_PREC_EXT_GC_MAX_B 0x4AC20 6357 - #define _PAL_PREC_EXT_GC_MAX_C 0x4B420 6358 - #define _PAL_PREC_EXT2_GC_MAX_A 0x4A430 6359 - #define _PAL_PREC_EXT2_GC_MAX_B 0x4AC30 6360 - #define _PAL_PREC_EXT2_GC_MAX_C 0x4B430 6361 - 6362 - #define PREC_PAL_INDEX(pipe) _MMIO_PIPE(pipe, _PAL_PREC_INDEX_A, _PAL_PREC_INDEX_B) 6363 - #define PREC_PAL_DATA(pipe) _MMIO_PIPE(pipe, _PAL_PREC_DATA_A, _PAL_PREC_DATA_B) 6364 - #define PREC_PAL_GC_MAX(pipe, i) _MMIO(_PIPE(pipe, _PAL_PREC_GC_MAX_A, _PAL_PREC_GC_MAX_B) + (i) * 4) /* u1.16 */ 6365 - #define PREC_PAL_EXT_GC_MAX(pipe, i) _MMIO(_PIPE(pipe, _PAL_PREC_EXT_GC_MAX_A, _PAL_PREC_EXT_GC_MAX_B) + (i) * 4) /* u3.16 */ 6366 - #define PREC_PAL_EXT2_GC_MAX(pipe, i) _MMIO(_PIPE(pipe, _PAL_PREC_EXT2_GC_MAX_A, _PAL_PREC_EXT2_GC_MAX_B) + (i) * 4) /* glk+, u3.16 */ 6367 - 6368 - #define _PRE_CSC_GAMC_INDEX_A 0x4A484 6369 - #define _PRE_CSC_GAMC_INDEX_B 0x4AC84 6370 - #define _PRE_CSC_GAMC_INDEX_C 0x4B484 6371 - #define PRE_CSC_GAMC_AUTO_INCREMENT REG_BIT(10) 6372 - #define PRE_CSC_GAMC_INDEX_VALUE_MASK REG_GENMASK(7, 0) 6373 - #define PRE_CSC_GAMC_INDEX_VALUE(x) REG_FIELD_PREP(PRE_CSC_GAMC_INDEX_VALUE_MASK, (x)) 6374 - #define _PRE_CSC_GAMC_DATA_A 0x4A488 6375 - #define _PRE_CSC_GAMC_DATA_B 0x4AC88 6376 - #define _PRE_CSC_GAMC_DATA_C 0x4B488 6377 - 6378 - #define PRE_CSC_GAMC_INDEX(pipe) _MMIO_PIPE(pipe, _PRE_CSC_GAMC_INDEX_A, _PRE_CSC_GAMC_INDEX_B) 6379 - #define PRE_CSC_GAMC_DATA(pipe) _MMIO_PIPE(pipe, _PRE_CSC_GAMC_DATA_A, _PRE_CSC_GAMC_DATA_B) 6380 - 6381 - /* ICL Multi segmented gamma */ 6382 - #define _PAL_PREC_MULTI_SEG_INDEX_A 0x4A408 6383 - #define _PAL_PREC_MULTI_SEG_INDEX_B 0x4AC08 6384 - #define PAL_PREC_MULTI_SEG_AUTO_INCREMENT REG_BIT(15) 6385 - #define PAL_PREC_MULTI_SEG_INDEX_VALUE_MASK REG_GENMASK(4, 0) 6386 - #define PAL_PREC_MULTI_SEG_INDEX_VALUE(x) REG_FIELD_PREP(PAL_PREC_MULTI_SEG_INDEX_VALUE_MASK, (x)) 6387 - 6388 - #define _PAL_PREC_MULTI_SEG_DATA_A 0x4A40C 6389 - #define _PAL_PREC_MULTI_SEG_DATA_B 0x4AC0C 6390 - /* see PREC_PALETTE_12P4_* for the bits */ 6391 - 6392 - #define PREC_PAL_MULTI_SEG_INDEX(pipe) _MMIO_PIPE(pipe, \ 6393 - _PAL_PREC_MULTI_SEG_INDEX_A, \ 6394 - _PAL_PREC_MULTI_SEG_INDEX_B) 6395 - #define PREC_PAL_MULTI_SEG_DATA(pipe) _MMIO_PIPE(pipe, \ 6396 - _PAL_PREC_MULTI_SEG_DATA_A, \ 6397 - _PAL_PREC_MULTI_SEG_DATA_B) 6398 - 6399 6272 #define _MMIO_PLANE_GAMC(plane, i, a, b) _MMIO(_PIPE(plane, a, b) + (i) * 4) 6400 6273 6401 6274 /* Plane CSC Registers */
··· 6267 6486 #define PLANE_CSC_POSTOFF(pipe, plane, index) _MMIO_PLANE(plane, _PLANE_CSC_POSTOFF_HI_1(pipe) + \ 6268 6487 (index) * 4, _PLANE_CSC_POSTOFF_HI_2(pipe) + \ 6269 6488 (index) * 4) 6270 - 6271 - #define _PIPE_A_WGC_C01_C00 0x600B0 /* s2.10 */ 6272 - #define _PIPE_A_WGC_C02 0x600B4 /* s2.10 */ 6273 - #define _PIPE_A_WGC_C11_C10 0x600B8 /* s2.10 */ 6274 - #define _PIPE_A_WGC_C12 0x600BC /* s2.10 */ 6275 - #define _PIPE_A_WGC_C21_C20 0x600C0 /* s2.10 */ 6276 - #define _PIPE_A_WGC_C22 0x600C4 /* s2.10 */ 6277 - 6278 - #define PIPE_WGC_C01_C00(pipe) _MMIO_TRANS2(pipe, _PIPE_A_WGC_C01_C00) 6279 - #define PIPE_WGC_C02(pipe) _MMIO_TRANS2(pipe, _PIPE_A_WGC_C02) 6280 - #define PIPE_WGC_C11_C10(pipe) _MMIO_TRANS2(pipe, _PIPE_A_WGC_C11_C10) 6281 - #define PIPE_WGC_C12(pipe) _MMIO_TRANS2(pipe, _PIPE_A_WGC_C12) 6282 - #define PIPE_WGC_C21_C20(pipe) _MMIO_TRANS2(pipe, _PIPE_A_WGC_C21_C20) 6283 - #define PIPE_WGC_C22(pipe) _MMIO_TRANS2(pipe, _PIPE_A_WGC_C22) 6284 - 6285 - /* pipe CSC & degamma/gamma LUTs on CHV */ 6286 - #define _CGM_PIPE_A_CSC_COEFF01 (VLV_DISPLAY_BASE + 0x67900) 6287 - #define _CGM_PIPE_A_CSC_COEFF23 (VLV_DISPLAY_BASE + 0x67904) 6288 - #define _CGM_PIPE_A_CSC_COEFF45 (VLV_DISPLAY_BASE + 0x67908) 6289 - #define _CGM_PIPE_A_CSC_COEFF67 (VLV_DISPLAY_BASE + 0x6790C) 6290 - #define _CGM_PIPE_A_CSC_COEFF8 (VLV_DISPLAY_BASE + 0x67910) 6291 - #define _CGM_PIPE_A_DEGAMMA (VLV_DISPLAY_BASE + 0x66000) 6292 - /* cgm degamma ldw */ 6293 - #define CGM_PIPE_DEGAMMA_GREEN_LDW_MASK REG_GENMASK(29, 16) 6294 - #define CGM_PIPE_DEGAMMA_BLUE_LDW_MASK REG_GENMASK(13, 0) 6295 - /* cgm degamma udw */ 6296 - #define CGM_PIPE_DEGAMMA_RED_UDW_MASK REG_GENMASK(13, 0) 6297 - #define _CGM_PIPE_A_GAMMA (VLV_DISPLAY_BASE + 0x67000) 6298 - /* cgm gamma ldw */ 6299 - #define CGM_PIPE_GAMMA_GREEN_LDW_MASK REG_GENMASK(25, 16) 6300 - #define CGM_PIPE_GAMMA_BLUE_LDW_MASK REG_GENMASK(9, 0) 6301 - /* cgm gamma udw */ 6302 - #define CGM_PIPE_GAMMA_RED_UDW_MASK REG_GENMASK(9, 0) 6303 - #define _CGM_PIPE_A_MODE (VLV_DISPLAY_BASE + 0x67A00) 6304 - #define CGM_PIPE_MODE_GAMMA (1 << 2) 6305 - #define CGM_PIPE_MODE_CSC (1 << 1) 6306 - #define CGM_PIPE_MODE_DEGAMMA (1 << 0) 6307 - 6308 - #define _CGM_PIPE_B_CSC_COEFF01 (VLV_DISPLAY_BASE + 0x69900) 6309 - #define _CGM_PIPE_B_CSC_COEFF23 (VLV_DISPLAY_BASE + 0x69904) 6310 - #define _CGM_PIPE_B_CSC_COEFF45 (VLV_DISPLAY_BASE + 0x69908) 6311 - #define _CGM_PIPE_B_CSC_COEFF67 (VLV_DISPLAY_BASE + 0x6990C) 6312 - #define _CGM_PIPE_B_CSC_COEFF8 (VLV_DISPLAY_BASE + 0x69910) 6313 - #define _CGM_PIPE_B_DEGAMMA (VLV_DISPLAY_BASE + 0x68000) 6314 - #define _CGM_PIPE_B_GAMMA (VLV_DISPLAY_BASE + 0x69000) 6315 - #define _CGM_PIPE_B_MODE (VLV_DISPLAY_BASE + 0x69A00) 6316 - 6317 - #define CGM_PIPE_CSC_COEFF01(pipe) _MMIO_PIPE(pipe, _CGM_PIPE_A_CSC_COEFF01, _CGM_PIPE_B_CSC_COEFF01) 6318 - #define CGM_PIPE_CSC_COEFF23(pipe) _MMIO_PIPE(pipe, _CGM_PIPE_A_CSC_COEFF23, _CGM_PIPE_B_CSC_COEFF23) 6319 - #define CGM_PIPE_CSC_COEFF45(pipe) _MMIO_PIPE(pipe, _CGM_PIPE_A_CSC_COEFF45, _CGM_PIPE_B_CSC_COEFF45) 6320 - #define CGM_PIPE_CSC_COEFF67(pipe) _MMIO_PIPE(pipe, _CGM_PIPE_A_CSC_COEFF67, _CGM_PIPE_B_CSC_COEFF67) 6321 - #define CGM_PIPE_CSC_COEFF8(pipe) _MMIO_PIPE(pipe, _CGM_PIPE_A_CSC_COEFF8, _CGM_PIPE_B_CSC_COEFF8) 6322 - #define CGM_PIPE_DEGAMMA(pipe, i, w) _MMIO(_PIPE(pipe, _CGM_PIPE_A_DEGAMMA, _CGM_PIPE_B_DEGAMMA) + (i) * 8 + (w) * 4) 6323 - #define CGM_PIPE_GAMMA(pipe, i, w) _MMIO(_PIPE(pipe, _CGM_PIPE_A_GAMMA, _CGM_PIPE_B_GAMMA) + (i) * 8 + (w) * 4) 6324 - #define CGM_PIPE_MODE(pipe) _MMIO_PIPE(pipe, _CGM_PIPE_A_MODE, _CGM_PIPE_B_MODE) 6325 6489 6326 6490 /* Gen4+ Timestamp and Pipe Frame time stamp registers */ 6327 6491 #define GEN4_TIMESTAMP _MMIO(0x2358)
··· 6350 6624 #define TCSS_DDI_STATUS(tc) _MMIO(_PICK_EVEN(tc, \ 6351 6625 _TCSS_DDI_STATUS_1, \ 6352 6626 _TCSS_DDI_STATUS_2)) 6627 + #define TCSS_DDI_STATUS_PIN_ASSIGNMENT_MASK REG_GENMASK(28, 25) 6353 6628 #define TCSS_DDI_STATUS_READY REG_BIT(2) 6354 6629 #define TCSS_DDI_STATUS_HPD_LIVE_STATUS_TBT REG_BIT(1) 6355 6630 #define TCSS_DDI_STATUS_HPD_LIVE_STATUS_ALT REG_BIT(0)
+1
drivers/gpu/drm/i915/i915_vma.c
··· 29 29 #include "display/intel_display.h" 30 30 #include "display/intel_frontbuffer.h" 31 31 #include "gem/i915_gem_lmem.h" 32 + #include "gem/i915_gem_object_frontbuffer.h" 32 33 #include "gem/i915_gem_tiling.h" 33 34 #include "gt/intel_engine.h" 34 35 #include "gt/intel_engine_heartbeat.h"
+1 -1
drivers/gpu/drm/i915/i915_vma_resource.c
··· 94 94 call_rcu(&fence->rcu, unbind_fence_free_rcu); 95 95 } 96 96 97 - static struct dma_fence_ops unbind_fence_ops = { 97 + static const struct dma_fence_ops unbind_fence_ops = { 98 98 .get_driver_name = get_driver_name, 99 99 .get_timeline_name = get_timeline_name, 100 100 .release = unbind_fence_release,
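The one-word change above adds `const` to the fence ops table, letting the compiler place it in read-only data and reject accidental writes. A user-space sketch of the same convention with a cut-down stand-in for the ops struct (names illustrative, not the dma_fence API):

```c
#include <assert.h>
#include <stddef.h>

/* Cut-down stand-in for a kernel ops table: a struct of callbacks. */
struct fence_ops {
	const char *(*get_driver_name)(void);
	const char *(*get_timeline_name)(void);
};

static const char *get_driver_name(void)   { return "demo-driver"; }
static const char *get_timeline_name(void) { return "unbind"; }

/* const places the table in .rodata; an accidental assignment such as
 * unbind_fence_ops.get_driver_name = NULL; now fails to compile. */
static const struct fence_ops unbind_fence_ops = {
	.get_driver_name = get_driver_name,
	.get_timeline_name = get_timeline_name,
};

static const char *fence_driver_name(const struct fence_ops *ops)
{
	return ops->get_driver_name ? ops->get_driver_name() : NULL;
}
```

Since the table is shared and only ever read after initialization, `static const` is the idiomatic form throughout the kernel for vtables like `dma_fence_ops`.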
+2 -50
drivers/gpu/drm/i915/intel_clock_gating.c
··· 349 349 intel_uncore_write(&i915->uncore, GEN7_MISCCPCTL, misccpctl); 350 350 } 351 351 352 - static void icl_init_clock_gating(struct drm_i915_private *i915) 353 - { 354 - /* Wa_1409120013:icl,ehl */ 355 - intel_uncore_write(&i915->uncore, ILK_DPFC_CHICKEN(INTEL_FBC_A), 356 - DPFC_CHICKEN_COMP_DUMMY_PIXEL); 357 - 358 - /*Wa_14010594013:icl, ehl */ 359 - intel_uncore_rmw(&i915->uncore, GEN8_CHICKEN_DCPR_1, 360 - 0, ICL_DELAY_PMRSP); 361 - } 362 - 363 - static void gen12lp_init_clock_gating(struct drm_i915_private *i915) 364 - { 365 - /* Wa_1409120013 */ 366 - if (DISPLAY_VER(i915) == 12) 367 - intel_uncore_write(&i915->uncore, ILK_DPFC_CHICKEN(INTEL_FBC_A), 368 - DPFC_CHICKEN_COMP_DUMMY_PIXEL); 369 - 370 - /* Wa_14013723622:tgl,rkl,dg1,adl-s */ 371 - if (DISPLAY_VER(i915) == 12) 372 - intel_uncore_rmw(&i915->uncore, CLKREQ_POLICY, 373 - CLKREQ_POLICY_MEM_UP_OVRD, 0); 374 - } 375 - 376 - static void adlp_init_clock_gating(struct drm_i915_private *i915) 377 - { 378 - gen12lp_init_clock_gating(i915); 379 - 380 - /* Wa_22011091694:adlp */ 381 - intel_de_rmw(i915, GEN9_CLKGATE_DIS_5, 0, DPCE_GATING_DIS); 382 - 383 - /* Bspec/49189 Initialize Sequence */ 384 - intel_de_rmw(i915, GEN8_CHICKEN_DCPR_1, DDI_CLOCK_REG_ACCESS, 0); 385 - } 386 - 387 352 static void xehpsdv_init_clock_gating(struct drm_i915_private *i915) 388 353 { 389 354 /* Wa_22010146351:xehpsdv */
··· 765 800 CG_FUNCS(pvc); 766 801 CG_FUNCS(dg2); 767 802 CG_FUNCS(xehpsdv); 768 - CG_FUNCS(adlp); 769 - CG_FUNCS(gen12lp); 770 - CG_FUNCS(icl); 771 803 CG_FUNCS(cfl); 772 804 CG_FUNCS(skl); 773 805 CG_FUNCS(kbl);
··· 797 835 */ 798 836 void intel_clock_gating_hooks_init(struct drm_i915_private *i915) 799 837 { 800 - if (IS_METEORLAKE(i915)) 801 - i915->clock_gating_funcs = &nop_clock_gating_funcs; 802 - else if (IS_PONTEVECCHIO(i915)) 838 + if (IS_PONTEVECCHIO(i915)) 803 839 i915->clock_gating_funcs = &pvc_clock_gating_funcs; 804 840 else if (IS_DG2(i915)) 805 841 i915->clock_gating_funcs = &dg2_clock_gating_funcs; 806 842 else if (IS_XEHPSDV(i915)) 807 843 i915->clock_gating_funcs = &xehpsdv_clock_gating_funcs; 808 - else if (IS_ALDERLAKE_P(i915)) 809 - i915->clock_gating_funcs = &adlp_clock_gating_funcs; 810 - else if (GRAPHICS_VER(i915) == 12) 811 - i915->clock_gating_funcs = &gen12lp_clock_gating_funcs; 812 - else if (GRAPHICS_VER(i915) == 11) 813 - i915->clock_gating_funcs = &icl_clock_gating_funcs; 814 844 else if (IS_COFFEELAKE(i915) || IS_COMETLAKE(i915)) 815 845 i915->clock_gating_funcs = &cfl_clock_gating_funcs; 816 846 else if (IS_SKYLAKE(i915)) ··· 839 885 i915->clock_gating_funcs = &i85x_clock_gating_funcs; 840 886 else if (GRAPHICS_VER(i915) == 2) 841 887 i915->clock_gating_funcs = &i830_clock_gating_funcs; 842 - else { 843 - MISSING_CASE(INTEL_DEVID(i915)); 888 + else 844 889 i915->clock_gating_funcs = &nop_clock_gating_funcs; 845 - } 846 890 }
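The hunk above also changes the fallback: unknown platforms now get the no-op function table instead of a MISSING_CASE warning. The dispatch pattern it uses, an if/else ladder selecting a `const` ops table with a safe no-op default, can be sketched in user space like this (enum values and tags are illustrative):

```c
#include <assert.h>

enum platform { PLAT_PVC, PLAT_DG2, PLAT_UNKNOWN };

struct clock_gating_funcs {
	int (*init_clock_gating)(void); /* returns a tag so the caller can tell hooks apart */
};

static int pvc_init(void) { return 1; }
static int dg2_init(void) { return 2; }
static int nop_init(void) { return 0; } /* explicit no-op hook: always safe to call */

static const struct clock_gating_funcs pvc_funcs = { .init_clock_gating = pvc_init };
static const struct clock_gating_funcs dg2_funcs = { .init_clock_gating = dg2_init };
static const struct clock_gating_funcs nop_funcs = { .init_clock_gating = nop_init };

/* Known platforms pick their table; anything else falls through to the
 * no-op table, so callers never need a NULL check before invoking a hook. */
static const struct clock_gating_funcs *clock_gating_hooks(enum platform p)
{
	if (p == PLAT_PVC)
		return &pvc_funcs;
	if (p == PLAT_DG2)
		return &dg2_funcs;
	return &nop_funcs;
}
```

Returning a real no-op table rather than NULL keeps every call site unconditional, which is why new platforms with no clock-gating workarounds no longer need an explicit entry in the ladder.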
-14
drivers/gpu/drm/i915/intel_device_info.c
··· 410 410 const struct intel_device_info *match_info) 411 411 { 412 412 struct intel_runtime_info *runtime; 413 - u16 ver, rel, step; 414 413 415 414 /* Setup INTEL_INFO() */ 416 415 i915->__info = match_info; ··· 417 418 /* Initialize initial runtime info from static const data and pdev. */ 418 419 runtime = RUNTIME_INFO(i915); 419 420 memcpy(runtime, &INTEL_INFO(i915)->__runtime, sizeof(*runtime)); 420 - 421 - /* Probe display support */ 422 - i915->display.info.__device_info = intel_display_device_probe(i915, HAS_GMD_ID(i915), 423 - &ver, &rel, &step); 424 - memcpy(DISPLAY_RUNTIME_INFO(i915), 425 - &DISPLAY_INFO(i915)->__runtime_defaults, 426 - sizeof(*DISPLAY_RUNTIME_INFO(i915))); 427 - 428 - if (HAS_GMD_ID(i915)) { 429 - DISPLAY_RUNTIME_INFO(i915)->ip.ver = ver; 430 - DISPLAY_RUNTIME_INFO(i915)->ip.rel = rel; 431 - DISPLAY_RUNTIME_INFO(i915)->ip.step = step; 432 - } 433 421 434 422 runtime->device_id = device_id; 435 423 }
-1
drivers/gpu/drm/i915/intel_device_info.h
··· 146 146 func(gpu_reset_clobbers_display); \ 147 147 func(has_reset_engine); \ 148 148 func(has_3d_pipeline); \ 149 - func(has_4tile); \ 150 149 func(has_flat_ccs); \ 151 150 func(has_global_mocs); \ 152 151 func(has_gmd_id); \
+1
drivers/gpu/drm/i915/intel_gvt_mmio_table.c
··· 5 5 6 6 #include "display/intel_audio_regs.h" 7 7 #include "display/intel_backlight_regs.h" 8 + #include "display/intel_color_regs.h" 8 9 #include "display/intel_display_types.h" 9 10 #include "display/intel_dmc_regs.h" 10 11 #include "display/intel_dp_aux_regs.h"
-1
drivers/gpu/drm/i915/intel_runtime_pm.c
··· 652 652 653 653 rpm->kdev = kdev; 654 654 rpm->available = HAS_RUNTIME_PM(i915); 655 - rpm->suspended = false; 656 655 atomic_set(&rpm->wakeref_count, 0); 657 656 658 657 init_intel_runtime_pm_wakeref(rpm);
+2 -2
drivers/gpu/drm/i915/intel_runtime_pm.h
··· 6 6 #ifndef __INTEL_RUNTIME_PM_H__ 7 7 #define __INTEL_RUNTIME_PM_H__ 8 8 9 + #include <linux/pm_runtime.h> 9 10 #include <linux/types.h> 10 11 11 12 #include "intel_wakeref.h" ··· 44 43 atomic_t wakeref_count; 45 44 struct device *kdev; /* points to i915->drm.dev */ 46 45 bool available; 47 - bool suspended; 48 46 bool irqs_enabled; 49 47 bool no_wakeref_tracking; 50 48 ··· 110 110 static inline void 111 111 assert_rpm_device_not_suspended(struct intel_runtime_pm *rpm) 112 112 { 113 - WARN_ONCE(rpm->suspended, 113 + WARN_ONCE(pm_runtime_suspended(rpm->kdev), 114 114 "Device suspended during HW access\n"); 115 115 } 116 116
+1
drivers/gpu/drm/i915/intel_step.c
··· 124 124 125 125 static const struct intel_step_info dg2_g12_revid_step_tbl[] = { 126 126 [0x0] = { COMMON_GT_MEDIA_STEP(A0), .display_step = STEP_C0 }, 127 + [0x1] = { COMMON_GT_MEDIA_STEP(A1), .display_step = STEP_C0 }, 127 128 }; 128 129 129 130 static const struct intel_step_info adls_rpls_revids[] = {
+33 -7
drivers/gpu/drm/i915/pxp/intel_pxp.c
··· 359 359 intel_runtime_pm_put(&i915->runtime_pm, wakeref); 360 360 } 361 361 362 + static bool pxp_required_fw_failed(struct intel_pxp *pxp) 363 + { 364 + if (__intel_uc_fw_status(&pxp->ctrl_gt->uc.huc.fw) == INTEL_UC_FIRMWARE_LOAD_FAIL) 365 + return true; 366 + if (HAS_ENGINE(pxp->ctrl_gt, GSC0) && 367 + __intel_uc_fw_status(&pxp->ctrl_gt->uc.gsc.fw) == INTEL_UC_FIRMWARE_LOAD_FAIL) 368 + return true; 369 + 370 + return false; 371 + } 372 + 373 + static bool pxp_fw_dependencies_completed(struct intel_pxp *pxp) 374 + { 375 + if (HAS_ENGINE(pxp->ctrl_gt, GSC0)) 376 + return intel_pxp_gsccs_is_ready_for_sessions(pxp); 377 + 378 + return pxp_component_bound(pxp); 379 + } 380 + 362 381 /* 363 382 * this helper is used by both intel_pxp_start and by 364 383 * the GET_PARAM IOCTL that user space calls. Thus, the 365 384 * return values here should match the UAPI spec. 366 385 */ 367 - int intel_pxp_get_readiness_status(struct intel_pxp *pxp) 386 + int intel_pxp_get_readiness_status(struct intel_pxp *pxp, int timeout_ms) 368 387 { 369 388 if (!intel_pxp_is_enabled(pxp)) 370 389 return -ENODEV; 371 390 372 - if (HAS_ENGINE(pxp->ctrl_gt, GSC0)) { 373 - if (wait_for(intel_pxp_gsccs_is_ready_for_sessions(pxp), 250)) 391 + if (pxp_required_fw_failed(pxp)) 392 + return -ENODEV; 393 + 394 + if (pxp->platform_cfg_is_bad) 395 + return -ENODEV; 396 + 397 + if (timeout_ms) { 398 + if (wait_for(pxp_fw_dependencies_completed(pxp), timeout_ms)) 374 399 return 2; 375 - } else { 376 - if (wait_for(pxp_component_bound(pxp), 250)) 377 - return 2; 400 + } else if (!pxp_fw_dependencies_completed(pxp)) { 401 + return 2; 378 402 } 379 403 return 1; 380 404 }
··· 407 383 * the arb session is restarted from the irq work when we receive the 408 384 * termination completion interrupt 409 385 */ 386 + #define PXP_READINESS_TIMEOUT 250 387 + 410 388 int intel_pxp_start(struct intel_pxp *pxp) 411 389 { 412 390 int ret = 0; 413 391 414 - ret = intel_pxp_get_readiness_status(pxp); 392 + ret = intel_pxp_get_readiness_status(pxp, PXP_READINESS_TIMEOUT); 415 393 if (ret < 0) 416 394 return ret; 417 395 else if (ret > 1)
+1 -1
drivers/gpu/drm/i915/pxp/intel_pxp.h
···
 void intel_pxp_mark_termination_in_progress(struct intel_pxp *pxp);
 void intel_pxp_tee_end_arb_fw_session(struct intel_pxp *pxp, u32 arb_session_id);
 
-int intel_pxp_get_readiness_status(struct intel_pxp *pxp);
+int intel_pxp_get_readiness_status(struct intel_pxp *pxp, int timeout_ms);
 int intel_pxp_get_backend_timeout_ms(struct intel_pxp *pxp);
 int intel_pxp_start(struct intel_pxp *pxp);
 void intel_pxp_end(struct intel_pxp *pxp);
+4 -3
drivers/gpu/drm/i915/pxp/intel_pxp_gsccs.c
···
 #include "intel_pxp_types.h"
 
 static bool
-is_fw_err_platform_config(u32 type)
+is_fw_err_platform_config(struct intel_pxp *pxp, u32 type)
 {
 	switch (type) {
 	case PXP_STATUS_ERROR_API_VERSION:
 	case PXP_STATUS_PLATFCONFIG_KF1_NOVERIF:
 	case PXP_STATUS_PLATFCONFIG_KF1_BAD:
+		pxp->platform_cfg_is_bad = true;
 		return true;
 	default:
 		break;
···
 	if (ret) {
 		drm_err(&i915->drm, "Failed to init session %d, ret=[%d]\n", arb_session_id, ret);
 	} else if (msg_out.header.status != 0) {
-		if (is_fw_err_platform_config(msg_out.header.status)) {
+		if (is_fw_err_platform_config(pxp, msg_out.header.status)) {
 			drm_info_once(&i915->drm,
 				      "PXP init-session-%d failed due to BIOS/SOC:0x%08x:%s\n",
 				      arb_session_id, msg_out.header.status,
···
 		drm_err(&i915->drm, "Failed to inv-stream-key-%u, ret=[%d]\n",
 			session_id, ret);
 	} else if (msg_out.header.status != 0) {
-		if (is_fw_err_platform_config(msg_out.header.status)) {
+		if (is_fw_err_platform_config(pxp, msg_out.header.status)) {
 			drm_info_once(&i915->drm,
 				      "PXP inv-stream-key-%u failed due to BIOS/SOC :0x%08x:%s\n",
 				      session_id, msg_out.header.status,
+17 -1
drivers/gpu/drm/i915/pxp/intel_pxp_pm.c
···
 	}
 }
 
-void intel_pxp_resume_complete(struct intel_pxp *pxp)
+static void _pxp_resume(struct intel_pxp *pxp, bool take_wakeref)
 {
+	intel_wakeref_t wakeref;
+
 	if (!intel_pxp_is_enabled(pxp))
 		return;
 
···
 	if (!HAS_ENGINE(pxp->ctrl_gt, GSC0) && !pxp->pxp_component)
 		return;
 
+	if (take_wakeref)
+		wakeref = intel_runtime_pm_get(&pxp->ctrl_gt->i915->runtime_pm);
 	intel_pxp_init_hw(pxp);
+	if (take_wakeref)
+		intel_runtime_pm_put(&pxp->ctrl_gt->i915->runtime_pm, wakeref);
+}
+
+void intel_pxp_resume_complete(struct intel_pxp *pxp)
+{
+	_pxp_resume(pxp, true);
+}
+
+void intel_pxp_runtime_resume(struct intel_pxp *pxp)
+{
+	_pxp_resume(pxp, false);
 }
 
 void intel_pxp_runtime_suspend(struct intel_pxp *pxp)
+3 -2
drivers/gpu/drm/i915/pxp/intel_pxp_pm.h
···
 void intel_pxp_suspend(struct intel_pxp *pxp);
 void intel_pxp_resume_complete(struct intel_pxp *pxp);
 void intel_pxp_runtime_suspend(struct intel_pxp *pxp);
+void intel_pxp_runtime_resume(struct intel_pxp *pxp);
 #else
 static inline void intel_pxp_suspend_prepare(struct intel_pxp *pxp)
 {
···
 static inline void intel_pxp_runtime_suspend(struct intel_pxp *pxp)
 {
 }
-#endif
+
 static inline void intel_pxp_runtime_resume(struct intel_pxp *pxp)
 {
-	intel_pxp_resume_complete(pxp);
 }
+#endif
 #endif /* __INTEL_PXP_PM_H__ */
+4 -3
drivers/gpu/drm/i915/pxp/intel_pxp_tee.c
···
 #include "intel_pxp_types.h"
 
 static bool
-is_fw_err_platform_config(u32 type)
+is_fw_err_platform_config(struct intel_pxp *pxp, u32 type)
 {
 	switch (type) {
 	case PXP_STATUS_ERROR_API_VERSION:
 	case PXP_STATUS_PLATFCONFIG_KF1_NOVERIF:
 	case PXP_STATUS_PLATFCONFIG_KF1_BAD:
+		pxp->platform_cfg_is_bad = true;
 		return true;
 	default:
 		break;
···
 	if (ret) {
 		drm_err(&i915->drm, "Failed to send tee msg init arb session, ret=[%d]\n", ret);
 	} else if (msg_out.header.status != 0) {
-		if (is_fw_err_platform_config(msg_out.header.status)) {
+		if (is_fw_err_platform_config(pxp, msg_out.header.status)) {
 			drm_info_once(&i915->drm,
 				      "PXP init-arb-session-%d failed due to BIOS/SOC:0x%08x:%s\n",
 				      arb_session_id, msg_out.header.status,
···
 		drm_err(&i915->drm, "Failed to send tee msg for inv-stream-key-%u, ret=[%d]\n",
 			session_id, ret);
 	} else if (msg_out.header.status != 0) {
-		if (is_fw_err_platform_config(msg_out.header.status)) {
+		if (is_fw_err_platform_config(pxp, msg_out.header.status)) {
 			drm_info_once(&i915->drm,
 				      "PXP inv-stream-key-%u failed due to BIOS/SOC :0x%08x:%s\n",
 				      session_id, msg_out.header.status,
+9
drivers/gpu/drm/i915/pxp/intel_pxp_types.h
···
 	struct intel_gt *ctrl_gt;
 
 	/**
+	 * @platform_cfg_is_bad: used to track if any prior arb session creation resulted
+	 * in a failure that was caused by a platform configuration issue, meaning that
+	 * failure will not get resolved without a change to the platform (not kernel)
+	 * such as BIOS configuration, firmware update, etc. This bool gets reflected when
+	 * GET_PARAM:I915_PARAM_PXP_STATUS is called.
+	 */
+	bool platform_cfg_is_bad;
+
+	/**
 	 * @kcr_base: base mmio offset for the KCR engine which is different on legacy platforms
 	 * vs newer platforms where the KCR is inside the media-tile.
 	 */
+2
drivers/gpu/drm/i915/selftests/mock_gem_device.c
···
 	/* Set up device info and initial runtime info. */
 	intel_device_info_driver_create(i915, pdev->device, &mock_info);
 
+	intel_display_device_probe(i915);
+
 	dev_pm_domain_set(&pdev->dev, &pm_domain);
 	pm_runtime_enable(&pdev->dev);
 	pm_runtime_dont_use_autosuspend(&pdev->dev);
+9 -3
drivers/gpu/drm/i915/soc/intel_pch.c
···
 	unsigned short id;
 	enum intel_pch pch_type;
 
-	/* DG1 has south engine display on the same PCI device */
-	if (IS_DG1(dev_priv)) {
-		dev_priv->pch_type = PCH_DG1;
+	/*
+	 * South display engine on the same PCI device: just assign the fake
+	 * PCH.
+	 */
+	if (DISPLAY_VER(dev_priv) >= 20) {
+		dev_priv->pch_type = PCH_LNL;
 		return;
 	} else if (IS_DG2(dev_priv)) {
 		dev_priv->pch_type = PCH_DG2;
+		return;
+	} else if (IS_DG1(dev_priv)) {
+		dev_priv->pch_type = PCH_DG1;
 		return;
 	}
+2
drivers/gpu/drm/i915/soc/intel_pch.h
···
 	/* Fake PCHs, functionality handled on the same PCI dev */
 	PCH_DG1 = 1024,
 	PCH_DG2,
+	PCH_LNL,
 };
 
 #define INTEL_PCH_DEVICE_ID_MASK		0xff80
···
 
 #define INTEL_PCH_TYPE(dev_priv)		((dev_priv)->pch_type)
 #define INTEL_PCH_ID(dev_priv)			((dev_priv)->pch_id)
+#define HAS_PCH_LNL(dev_priv)			(INTEL_PCH_TYPE(dev_priv) == PCH_LNL)
 #define HAS_PCH_MTP(dev_priv)			(INTEL_PCH_TYPE(dev_priv) == PCH_MTP)
 #define HAS_PCH_DG2(dev_priv)			(INTEL_PCH_TYPE(dev_priv) == PCH_DG2)
 #define HAS_PCH_ADP(dev_priv)			(INTEL_PCH_TYPE(dev_priv) == PCH_ADP)
+5
drivers/media/cec/core/cec-adap.c
···
 }
 EXPORT_SYMBOL_GPL(cec_s_phys_addr);
 
+/*
+ * Note: In the drm subsystem, prefer calling (if possible):
+ *
+ * cec_s_phys_addr(adap, connector->display_info.source_physical_address, false);
+ */
 void cec_s_phys_addr_from_edid(struct cec_adapter *adap,
 			       const struct edid *edid)
 {
+5
drivers/media/cec/core/cec-notifier.c
···
 }
 EXPORT_SYMBOL_GPL(cec_notifier_set_phys_addr);
 
+/*
+ * Note: In the drm subsystem, prefer calling (if possible):
+ *
+ * cec_notifier_set_phys_addr(n, connector->display_info.source_physical_address);
+ */
 void cec_notifier_set_phys_addr_from_edid(struct cec_notifier *n,
 					  const struct edid *edid)
 {
+6
include/drm/display/drm_dp_helper.h
···
 void drm_dp_cec_register_connector(struct drm_dp_aux *aux,
 				   struct drm_connector *connector);
 void drm_dp_cec_unregister_connector(struct drm_dp_aux *aux);
+void drm_dp_cec_attach(struct drm_dp_aux *aux, u16 source_physical_address);
 void drm_dp_cec_set_edid(struct drm_dp_aux *aux, const struct edid *edid);
 void drm_dp_cec_unset_edid(struct drm_dp_aux *aux);
 #else
···
 }
 
 static inline void drm_dp_cec_unregister_connector(struct drm_dp_aux *aux)
+{
+}
+
+static inline void drm_dp_cec_attach(struct drm_dp_aux *aux,
+				     u16 source_physical_address)
 {
 }
 
+8
include/drm/drm_connector.h
···
 	 * @quirks: EDID based quirks. Internal to EDID parsing.
 	 */
 	u32 quirks;
+
+	/**
+	 * @source_physical_address: Source Physical Address from HDMI
+	 * Vendor-Specific Data Block, for CEC usage.
+	 *
+	 * Defaults to CEC_PHYS_ADDR_INVALID (0xffff).
+	 */
+	u16 source_physical_address;
 };
 
 int drm_display_info_set_bus_formats(struct drm_display_info *info,
+1
include/drm/drm_edid.h
···
 int drm_edid_connector_update(struct drm_connector *connector,
 			      const struct drm_edid *edid);
 int drm_edid_connector_add_modes(struct drm_connector *connector);
+bool drm_edid_is_digital(const struct drm_edid *drm_edid);
 
 const u8 *drm_find_edid_extension(const struct drm_edid *drm_edid,
 				  int ext_id, int *ext_index);
+6 -2
include/drm/i915_pciids.h
···
 #define INTEL_RPLU_IDS(info) \
 	INTEL_VGA_DEVICE(0xA721, info), \
 	INTEL_VGA_DEVICE(0xA7A1, info), \
-	INTEL_VGA_DEVICE(0xA7A9, info)
+	INTEL_VGA_DEVICE(0xA7A9, info), \
+	INTEL_VGA_DEVICE(0xA7AC, info), \
+	INTEL_VGA_DEVICE(0xA7AD, info)
 
 /* RPL-P */
 #define INTEL_RPLP_IDS(info) \
 	INTEL_RPLU_IDS(info), \
 	INTEL_VGA_DEVICE(0xA720, info), \
 	INTEL_VGA_DEVICE(0xA7A0, info), \
-	INTEL_VGA_DEVICE(0xA7A8, info)
+	INTEL_VGA_DEVICE(0xA7A8, info), \
+	INTEL_VGA_DEVICE(0xA7AA, info), \
+	INTEL_VGA_DEVICE(0xA7AB, info)
 
 /* DG2 */
 #define INTEL_DG2_G10_IDS(info) \