
Merge tag 'drm-intel-next-2024-04-17-1' of https://anongit.freedesktop.org/git/drm/drm-intel into drm-next

Core Changes (DRM):

- Fix documentation of DP tunnel functions (Imre)
- DP MST read sideband messaging cap (Jani)
- Preparation patches for Adaptive Sync SDP Support for DP (Mitul)

Driver Changes:

i915 core (non-display):
- Documentation improvements (Nirmoy)
- Add includes for BUG_ON/BUILD_BUG_ON in i915_memcpy.c (Joonas)
- Do not print 'pxp init failed with 0' when it succeeds (Jose)
- Clean-up, including removal of dead code for unsupported platforms (Lucas)
- Adding new DG2 PCI ID (Ravi)

{i915,xe} display:
- Spelling fix (Colin Ian)
- Document CDCLK components (Gustavo)
- Lunar Lake display enabling, including cdclk and other refactors (Gustavo, Bala)
- BIOS/VBT/opregion related refactor (Jani, Ville, RK)
- Save a few bytes of memory using {kstrdup,kfree}_const variant (Christophe)
- Digital port related refactor/clean-up (Ville)
- Fix 2s boot time regression on DP panel replay init (Animesh)
- Remove redundant drm_rect_visible() overlay use (Arthur)
- DSC HW state readout fixes (Imre)
- Remove duplication on audio enable/disable on SDVO and g4x+ DP (Ville)
- Disable AuxCCS framebuffers if built for Xe (Juha-Pekka)
- Fix DSI init order (Ville)
- DRRS related refactor and fixes (Bhanuprakash)
- Fix DSB vblank waits with VRR (Ville)
- General improvements to register names and use of REG_BIT (Ville)
- Some display power well related improvements (Ville)
- FBC changes for better w/a handling (Ville)
- Make crtc disable more atomic (Ville)
- Fix hwmon locking inversion in sysfs getter (Janusz)
- Increase DP idle pattern wait timeout to 2ms (Shekhar)
- PSR related fixes and improvements (Jouni)
- Start using container_of_const() for some extra const safety (Ville)
- Use drm_printer more on display code (Ville)
- Fix Jasper Lake boot freeze (Jonathon)
- Update Pipe src size check in skl_update_scaler (Ankit)
- Enable MST mode for 128b/132b single-stream sideband (Jani)
- Pass encoder around more for port/phy checks (Jani)
- Some initial work to make display code more independent from i915 (Jani)
- Pre-populate the cursor physical dma address (Ville)
- Do not bump min backlight brightness to max on enable (Gareth)
- Fix MTL supported DP rates - removal of UHBR13.5 (Arun)
- Fix the computation for compressed_bpp for DISPLAY < 13 (Ankit)
- Bigjoiner modeset sequence redesign and MST support (Ville)
- Enable Adaptive Sync SDP Support for DP (Mitul)
- Implement vblank synchronized mbus joining changes (Ville, Stanislav)
- HDCP related fixes (Suraj)
- Fix i915_display_info debugfs when connectors are not active (Ville)
- Clean up on Xe compat layer (Jani)
- Add jitter WAs for MST/FEC/DSC links (Imre)
- DMC wakelock implementation (Luca)

Signed-off-by: Dave Airlie <airlied@redhat.com>

# -----BEGIN PGP SIGNATURE-----
#
# iQEzBAABCAAdFiEEbSBwaO7dZQkcLOKj+mJfZA7rE8oFAmYfzQEACgkQ+mJfZA7r
# E8qYvAf/T8KrEewHOWz7NOaKcFRCNYaF4QTdVOfgHUYBX5NPDF/xzwFdHCL8QWQu
# bwKwE2b94VEyruG3DYwTMd8GNcDxrsOrmU0IZe3PVkm+BvHLTmrOqL6BlCd85zXF
# 02IuE+LCaWREmmpLMcsDMxsaaq8yp+cw9/F0jJDrH6LiyfxFriefxyZYpGYjRCuv
# 8GP1fHXLFV2yys4rveR/+y9xIhgy82mVcg3/Kfk0+er7gALkY6Vc0N38wedET9MZ
# ZPfVidBeaTkIKcCDFKnFzGjG+9rNQ7NFrXyS7Hl97VolGt2l03qGGPNW1PouDiUx
# 7Y8CJOc+1k9wyBMKl0a/NQBRAqSZBQ==
# =JvZN
# -----END PGP SIGNATURE-----
# gpg: Signature made Wed 17 Apr 2024 23:22:09 AEST
# gpg: using RSA key 6D207068EEDD65091C2CE2A3FA625F640EEB13CA
# gpg: Good signature from "Rodrigo Vivi <rodrigo.vivi@intel.com>" [unknown]
# gpg: aka "Rodrigo Vivi <rodrigo.vivi@gmail.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 6D20 7068 EEDD 6509 1C2C E2A3 FA62 5F64 0EEB 13CA
From: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/Zh_Q72gYKMMbge9A@intel.com

+3627 -2994
+9
Documentation/gpu/i915.rst
··· 204 204 .. kernel-doc:: drivers/gpu/drm/i915/display/intel_dmc.c 205 205 :internal: 206 206 207 + DMC wakelock support 208 + -------------------- 209 + 210 + .. kernel-doc:: drivers/gpu/drm/i915/display/intel_dmc_wl.c 211 + :doc: DMC wakelock support 212 + 213 + .. kernel-doc:: drivers/gpu/drm/i915/display/intel_dmc_wl.c 214 + :internal: 215 + 207 216 Video BIOS Table (VBT) 208 217 ---------------------- 209 218
+5 -6
Documentation/gpu/rfc/i915_vm_bind.h
··· 93 93 * Multiple VA mappings can be created to the same section of the object 94 94 * (aliasing). 95 95 * 96 - * The @start, @offset and @length must be 4K page aligned. However the DG2 97 - * and XEHPSDV has 64K page size for device local memory and has compact page 98 - * table. On those platforms, for binding device local-memory objects, the 99 - * @start, @offset and @length must be 64K aligned. Also, UMDs should not mix 100 - * the local memory 64K page and the system memory 4K page bindings in the same 101 - * 2M range. 96 + * The @start, @offset and @length must be 4K page aligned. However the DG2 has 97 + * 64K page size for device local memory and has compact page table. On that 98 + * platform, for binding device local-memory objects, the @start, @offset and 99 + * @length must be 64K aligned. Also, UMDs should not mix the local memory 64K 100 + * page and the system memory 4K page bindings in the same 2M range. 102 101 * 103 102 * Error code -EINVAL will be returned if @start, @offset and @length are not 104 103 * properly aligned. In version 1 (See I915_PARAM_VM_BIND_VERSION), error code
+37
drivers/gpu/drm/display/drm_dp_helper.c
··· 2948 2948 } 2949 2949 EXPORT_SYMBOL(drm_dp_vsc_sdp_log); 2950 2950 2951 + void drm_dp_as_sdp_log(struct drm_printer *p, const struct drm_dp_as_sdp *as_sdp) 2952 + { 2953 + drm_printf(p, "DP SDP: AS_SDP, revision %u, length %u\n", 2954 + as_sdp->revision, as_sdp->length); 2955 + drm_printf(p, " vtotal: %d\n", as_sdp->vtotal); 2956 + drm_printf(p, " target_rr: %d\n", as_sdp->target_rr); 2957 + drm_printf(p, " duration_incr_ms: %d\n", as_sdp->duration_incr_ms); 2958 + drm_printf(p, " duration_decr_ms: %d\n", as_sdp->duration_decr_ms); 2959 + drm_printf(p, " operation_mode: %d\n", as_sdp->mode); 2960 + } 2961 + EXPORT_SYMBOL(drm_dp_as_sdp_log); 2962 + 2963 + /** 2964 + * drm_dp_as_sdp_supported() - check if adaptive sync sdp is supported 2965 + * @aux: DisplayPort AUX channel 2966 + * @dpcd: DisplayPort configuration data 2967 + * 2968 + * Returns true if adaptive sync sdp is supported, else returns false 2969 + */ 2970 + bool drm_dp_as_sdp_supported(struct drm_dp_aux *aux, const u8 dpcd[DP_RECEIVER_CAP_SIZE]) 2971 + { 2972 + u8 rx_feature; 2973 + 2974 + if (dpcd[DP_DPCD_REV] < DP_DPCD_REV_13) 2975 + return false; 2976 + 2977 + if (drm_dp_dpcd_readb(aux, DP_DPRX_FEATURE_ENUMERATION_LIST_CONT_1, 2978 + &rx_feature) != 1) { 2979 + drm_dbg_dp(aux->drm_dev, 2980 + "Failed to read DP_DPRX_FEATURE_ENUMERATION_LIST_CONT_1\n"); 2981 + return false; 2982 + } 2983 + 2984 + return (rx_feature & DP_ADAPTIVE_SYNC_SDP_SUPPORTED); 2985 + } 2986 + EXPORT_SYMBOL(drm_dp_as_sdp_supported); 2987 + 2951 2988 /** 2952 2989 * drm_dp_vsc_sdp_supported() - check if vsc sdp is supported 2953 2990 * @aux: DisplayPort AUX channel
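
Note: as a rough usage sketch only (the surrounding driver code and field names below are placeholders, not lifted from the i915/xe patches), a source driver would typically gate its Adaptive Sync SDP programming on the new helper:

        /* Skip AS SDP programming when the sink does not advertise support. */
        if (!drm_dp_as_sdp_supported(&intel_dp->aux, intel_dp->dpcd))
                return;

        /* ... fill a struct drm_dp_as_sdp and write it out as an SDP here ... */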
+13 -7
drivers/gpu/drm/display/drm_dp_mst_topology.c
··· 3608 3608 EXPORT_SYMBOL(drm_dp_get_vc_payload_bw); 3609 3609 3610 3610 /** 3611 - * drm_dp_read_mst_cap() - check whether or not a sink supports MST 3611 + * drm_dp_read_mst_cap() - Read the sink's MST mode capability 3612 3612 * @aux: The DP AUX channel to use 3613 3613 * @dpcd: A cached copy of the DPCD capabilities for this sink 3614 3614 * 3615 - * Returns: %True if the sink supports MST, %false otherwise 3615 + * Returns: enum drm_dp_mst_mode to indicate MST mode capability 3616 3616 */ 3617 - bool drm_dp_read_mst_cap(struct drm_dp_aux *aux, 3618 - const u8 dpcd[DP_RECEIVER_CAP_SIZE]) 3617 + enum drm_dp_mst_mode drm_dp_read_mst_cap(struct drm_dp_aux *aux, 3618 + const u8 dpcd[DP_RECEIVER_CAP_SIZE]) 3619 3619 { 3620 3620 u8 mstm_cap; 3621 3621 3622 3622 if (dpcd[DP_DPCD_REV] < DP_DPCD_REV_12) 3623 - return false; 3623 + return DRM_DP_SST; 3624 3624 3625 3625 if (drm_dp_dpcd_readb(aux, DP_MSTM_CAP, &mstm_cap) != 1) 3626 - return false; 3626 + return DRM_DP_SST; 3627 3627 3628 - return mstm_cap & DP_MST_CAP; 3628 + if (mstm_cap & DP_MST_CAP) 3629 + return DRM_DP_MST; 3630 + 3631 + if (mstm_cap & DP_SINGLE_STREAM_SIDEBAND_MSG) 3632 + return DRM_DP_SST_SIDEBAND_MSG; 3633 + 3634 + return DRM_DP_SST; 3629 3635 } 3630 3636 EXPORT_SYMBOL(drm_dp_read_mst_cap); 3631 3637
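
Note: drm_dp_read_mst_cap() changes from a bool to a three-state return, so callers now branch on the enum rather than truth-testing the result. A minimal sketch, assuming an AUX channel and cached DPCD are already at hand (the comments describe the intended handling, not specific driver code):

        switch (drm_dp_read_mst_cap(aux, dpcd)) {
        case DRM_DP_MST:
                /* Sink supports full MST branching. */
                break;
        case DRM_DP_SST_SIDEBAND_MSG:
                /* Single stream only, but 128b/132b sideband messaging is available. */
                break;
        case DRM_DP_SST:
        default:
                /* Plain SST sink. */
                break;
        }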
+4 -3
drivers/gpu/drm/display/drm_dp_tunnel.c
··· 436 436 437 437 /** 438 438 * drm_dp_tunnel_put - Put a reference for a DP tunnel 439 - * @tunnel - Tunnel object 440 - * @tracker - Debug tracker for the reference 439 + * @tunnel: Tunnel object 440 + * @tracker: Debug tracker for the reference 441 441 * 442 442 * Put a reference for @tunnel along with its debug *@tracker, which 443 443 * was obtained with drm_dp_tunnel_get(). ··· 1170 1170 EXPORT_SYMBOL(drm_dp_tunnel_alloc_bw); 1171 1171 1172 1172 /** 1173 - * drm_dp_tunnel_atomic_get_allocated_bw - Get the BW allocated for a DP tunnel 1173 + * drm_dp_tunnel_get_allocated_bw - Get the BW allocated for a DP tunnel 1174 1174 * @tunnel: Tunnel object 1175 1175 * 1176 1176 * Get the current BW allocated for @tunnel. After the tunnel is created / ··· 1892 1892 /** 1893 1893 * drm_dp_tunnel_mgr_create - Create a DP tunnel manager 1894 1894 * @dev: DRM device object 1895 + * @max_group_count: Maximum number of tunnel groups 1895 1896 * 1896 1897 * Creates a DP tunnel manager for @dev. 1897 1898 *
+1 -5
drivers/gpu/drm/i915/Makefile
··· 32 32 # Enable -Werror in CI and development 33 33 subdir-ccflags-$(CONFIG_DRM_I915_WERROR) += -Werror 34 34 35 - # Fine grained warnings disable 36 - CFLAGS_i915_pci.o = -Wno-override-init 37 - CFLAGS_display/intel_display_device.o = -Wno-override-init 38 - CFLAGS_display/intel_fbdev.o = -Wno-override-init 39 - 40 35 # Support compiling the display code separately for both i915 and xe 41 36 # drivers. Define I915 when building i915. 42 37 subdir-ccflags-y += -DI915 ··· 265 270 display/intel_display_rps.o \ 266 271 display/intel_display_wa.o \ 267 272 display/intel_dmc.o \ 273 + display/intel_dmc_wl.o \ 268 274 display/intel_dpio_phy.o \ 269 275 display/intel_dpll.o \ 270 276 display/intel_dpll_mgr.o \
+1 -2
drivers/gpu/drm/i915/display/icl_dsi.c
··· 1616 1616 struct drm_connector_state *conn_state) 1617 1617 { 1618 1618 struct drm_i915_private *i915 = to_i915(encoder->base.dev); 1619 - struct intel_dsi *intel_dsi = container_of(encoder, struct intel_dsi, 1620 - base); 1619 + struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder); 1621 1620 struct intel_connector *intel_connector = intel_dsi->attached_connector; 1622 1621 struct drm_display_mode *adjusted_mode = 1623 1622 &pipe_config->hw.adjusted_mode;
+1 -1
drivers/gpu/drm/i915/display/intel_atomic.c
··· 62 62 { 63 63 struct drm_device *dev = connector->dev; 64 64 struct drm_i915_private *dev_priv = to_i915(dev); 65 - struct intel_digital_connector_state *intel_conn_state = 65 + const struct intel_digital_connector_state *intel_conn_state = 66 66 to_intel_digital_connector_state(state); 67 67 68 68 if (property == dev_priv->display.properties.force_audio)
+5 -5
drivers/gpu/drm/i915/display/intel_backlight.c
··· 761 761 762 762 WARN_ON(panel->backlight.max == 0); 763 763 764 - if (panel->backlight.level <= panel->backlight.min) { 765 - panel->backlight.level = panel->backlight.max; 764 + if (panel->backlight.level < panel->backlight.min) { 765 + panel->backlight.level = panel->backlight.min; 766 766 if (panel->backlight.device) 767 767 panel->backlight.device->props.brightness = 768 768 scale_hw_to_user(connector, ··· 949 949 else 950 950 props.power = FB_BLANK_POWERDOWN; 951 951 952 - name = kstrdup("intel_backlight", GFP_KERNEL); 952 + name = kstrdup_const("intel_backlight", GFP_KERNEL); 953 953 if (!name) 954 954 return -ENOMEM; 955 955 ··· 963 963 * compatibility. Use unique names for subsequent backlight devices as a 964 964 * fallback when the default name already exists. 965 965 */ 966 - kfree(name); 966 + kfree_const(name); 967 967 name = kasprintf(GFP_KERNEL, "card%d-%s-backlight", 968 968 i915->drm.primary->index, connector->base.name); 969 969 if (!name) ··· 987 987 connector->base.base.id, connector->base.name, name); 988 988 989 989 out: 990 - kfree(name); 990 + kfree_const(name); 991 991 992 992 return ret; 993 993 }
+136 -62
drivers/gpu/drm/i915/display/intel_bios.c
··· 25 25 * 26 26 */ 27 27 28 + #include <linux/firmware.h> 29 + 28 30 #include <drm/display/drm_dp_helper.h> 29 31 #include <drm/display/drm_dsc_helper.h> 30 32 #include <drm/drm_edid.h> ··· 2732 2730 print_ddi_port(devdata); 2733 2731 } 2734 2732 2733 + static int child_device_expected_size(u16 version) 2734 + { 2735 + BUILD_BUG_ON(sizeof(struct child_device_config) < 40); 2736 + 2737 + if (version > 256) 2738 + return -ENOENT; 2739 + else if (version >= 256) 2740 + return 40; 2741 + else if (version >= 216) 2742 + return 39; 2743 + else if (version >= 196) 2744 + return 38; 2745 + else if (version >= 195) 2746 + return 37; 2747 + else if (version >= 111) 2748 + return LEGACY_CHILD_DEVICE_CONFIG_SIZE; 2749 + else if (version >= 106) 2750 + return 27; 2751 + else 2752 + return 22; 2753 + } 2754 + 2755 + static bool child_device_size_valid(struct drm_i915_private *i915, int size) 2756 + { 2757 + int expected_size; 2758 + 2759 + expected_size = child_device_expected_size(i915->display.vbt.version); 2760 + if (expected_size < 0) { 2761 + expected_size = sizeof(struct child_device_config); 2762 + drm_dbg(&i915->drm, 2763 + "Expected child device config size for VBT version %u not known; assuming %d\n", 2764 + i915->display.vbt.version, expected_size); 2765 + } 2766 + 2767 + /* Flag an error for unexpected size, but continue anyway. */ 2768 + if (size != expected_size) 2769 + drm_err(&i915->drm, 2770 + "Unexpected child device config size %d (expected %d for VBT version %u)\n", 2771 + size, expected_size, i915->display.vbt.version); 2772 + 2773 + /* The legacy sized child device config is the minimum we need. */ 2774 + if (size < LEGACY_CHILD_DEVICE_CONFIG_SIZE) { 2775 + drm_dbg_kms(&i915->drm, 2776 + "Child device config size %d is too small.\n", 2777 + size); 2778 + return false; 2779 + } 2780 + 2781 + return true; 2782 + } 2783 + 2735 2784 static void 2736 2785 parse_general_definitions(struct drm_i915_private *i915) 2737 2786 { ··· 2790 2737 struct intel_bios_encoder_data *devdata; 2791 2738 const struct child_device_config *child; 2792 2739 int i, child_device_num; 2793 - u8 expected_size; 2794 2740 u16 block_size; 2795 2741 int bus_pin; 2796 2742 ··· 2813 2761 if (intel_gmbus_is_valid_pin(i915, bus_pin)) 2814 2762 i915->display.vbt.crt_ddc_pin = bus_pin; 2815 2763 2816 - if (i915->display.vbt.version < 106) { 2817 - expected_size = 22; 2818 - } else if (i915->display.vbt.version < 111) { 2819 - expected_size = 27; 2820 - } else if (i915->display.vbt.version < 195) { 2821 - expected_size = LEGACY_CHILD_DEVICE_CONFIG_SIZE; 2822 - } else if (i915->display.vbt.version == 195) { 2823 - expected_size = 37; 2824 - } else if (i915->display.vbt.version <= 215) { 2825 - expected_size = 38; 2826 - } else if (i915->display.vbt.version <= 250) { 2827 - expected_size = 39; 2828 - } else { 2829 - expected_size = sizeof(*child); 2830 - BUILD_BUG_ON(sizeof(*child) < 39); 2831 - drm_dbg(&i915->drm, 2832 - "Expected child device config size for VBT version %u not known; assuming %u\n", 2833 - i915->display.vbt.version, expected_size); 2834 - } 2835 - 2836 - /* Flag an error for unexpected size, but continue anyway. */ 2837 - if (defs->child_dev_size != expected_size) 2838 - drm_err(&i915->drm, 2839 - "Unexpected child device config size %u (expected %u for VBT version %u)\n", 2840 - defs->child_dev_size, expected_size, i915->display.vbt.version); 2841 - 2842 - /* The legacy sized child device config is the minimum we need. 
*/ 2843 - if (defs->child_dev_size < LEGACY_CHILD_DEVICE_CONFIG_SIZE) { 2844 - drm_dbg_kms(&i915->drm, 2845 - "Child device config size %u is too small.\n", 2846 - defs->child_dev_size); 2764 + if (!child_device_size_valid(i915, defs->child_dev_size)) 2847 2765 return; 2848 - } 2849 2766 2850 2767 /* get the number of child device */ 2851 2768 child_device_num = (block_size - sizeof(*defs)) / defs->child_dev_size; ··· 2890 2869 static void 2891 2870 init_vbt_missing_defaults(struct drm_i915_private *i915) 2892 2871 { 2872 + unsigned int ports = DISPLAY_RUNTIME_INFO(i915)->port_mask; 2893 2873 enum port port; 2894 - int ports = BIT(PORT_A) | BIT(PORT_B) | BIT(PORT_C) | 2895 - BIT(PORT_D) | BIT(PORT_E) | BIT(PORT_F); 2896 2874 2897 2875 if (!HAS_DDI(i915) && !IS_CHERRYVIEW(i915)) 2898 2876 return; ··· 3001 2981 return vbt; 3002 2982 } 3003 2983 2984 + static struct vbt_header *firmware_get_vbt(struct drm_i915_private *i915, 2985 + size_t *size) 2986 + { 2987 + struct vbt_header *vbt = NULL; 2988 + const struct firmware *fw = NULL; 2989 + const char *name = i915->display.params.vbt_firmware; 2990 + int ret; 2991 + 2992 + if (!name || !*name) 2993 + return NULL; 2994 + 2995 + ret = request_firmware(&fw, name, i915->drm.dev); 2996 + if (ret) { 2997 + drm_err(&i915->drm, 2998 + "Requesting VBT firmware \"%s\" failed (%d)\n", 2999 + name, ret); 3000 + return NULL; 3001 + } 3002 + 3003 + if (intel_bios_is_valid_vbt(i915, fw->data, fw->size)) { 3004 + vbt = kmemdup(fw->data, fw->size, GFP_KERNEL); 3005 + if (vbt) { 3006 + drm_dbg_kms(&i915->drm, 3007 + "Found valid VBT firmware \"%s\"\n", name); 3008 + if (size) 3009 + *size = fw->size; 3010 + } 3011 + } else { 3012 + drm_dbg_kms(&i915->drm, "Invalid VBT firmware \"%s\"\n", 3013 + name); 3014 + } 3015 + 3016 + release_firmware(fw); 3017 + 3018 + return vbt; 3019 + } 3020 + 3004 3021 static u32 intel_spi_read(struct intel_uncore *uncore, u32 offset) 3005 3022 { 3006 3023 intel_uncore_write(uncore, PRIMARY_SPI_ADDRESS, offset); ··· 3045 2988 return intel_uncore_read(uncore, PRIMARY_SPI_TRIGGER); 3046 2989 } 3047 2990 3048 - static struct vbt_header *spi_oprom_get_vbt(struct drm_i915_private *i915) 2991 + static struct vbt_header *spi_oprom_get_vbt(struct drm_i915_private *i915, 2992 + size_t *size) 3049 2993 { 3050 2994 u32 count, data, found, store = 0; 3051 2995 u32 static_region, oprom_offset; ··· 3089 3031 3090 3032 drm_dbg_kms(&i915->drm, "Found valid VBT in SPI flash\n"); 3091 3033 3034 + if (size) 3035 + *size = vbt_size; 3036 + 3092 3037 return (struct vbt_header *)vbt; 3093 3038 3094 3039 err_free_vbt: ··· 3100 3039 return NULL; 3101 3040 } 3102 3041 3103 - static struct vbt_header *oprom_get_vbt(struct drm_i915_private *i915) 3042 + static struct vbt_header *oprom_get_vbt(struct drm_i915_private *i915, 3043 + size_t *sizep) 3104 3044 { 3105 3045 struct pci_dev *pdev = to_pci_dev(i915->drm.dev); 3106 3046 void __iomem *p = NULL, *oprom; ··· 3150 3088 3151 3089 pci_unmap_rom(pdev, oprom); 3152 3090 3091 + if (sizep) 3092 + *sizep = vbt_size; 3093 + 3153 3094 drm_dbg_kms(&i915->drm, "Found valid VBT in PCI ROM\n"); 3154 3095 3155 3096 return vbt; ··· 3163 3098 pci_unmap_rom(pdev, oprom); 3164 3099 3165 3100 return NULL; 3101 + } 3102 + 3103 + static const struct vbt_header *intel_bios_get_vbt(struct drm_i915_private *i915, 3104 + size_t *sizep) 3105 + { 3106 + const struct vbt_header *vbt = NULL; 3107 + intel_wakeref_t wakeref; 3108 + 3109 + vbt = firmware_get_vbt(i915, sizep); 3110 + 3111 + if (!vbt) 3112 + vbt = intel_opregion_get_vbt(i915, 
sizep); 3113 + 3114 + /* 3115 + * If the OpRegion does not have VBT, look in SPI flash 3116 + * through MMIO or PCI mapping 3117 + */ 3118 + if (!vbt && IS_DGFX(i915)) 3119 + with_intel_runtime_pm(&i915->runtime_pm, wakeref) 3120 + vbt = spi_oprom_get_vbt(i915, sizep); 3121 + 3122 + if (!vbt) 3123 + with_intel_runtime_pm(&i915->runtime_pm, wakeref) 3124 + vbt = oprom_get_vbt(i915, sizep); 3125 + 3126 + return vbt; 3166 3127 } 3167 3128 3168 3129 /** ··· 3202 3111 void intel_bios_init(struct drm_i915_private *i915) 3203 3112 { 3204 3113 const struct vbt_header *vbt; 3205 - struct vbt_header *oprom_vbt = NULL; 3206 3114 const struct bdb_header *bdb; 3207 3115 3208 3116 INIT_LIST_HEAD(&i915->display.vbt.display_devices); ··· 3215 3125 3216 3126 init_vbt_defaults(i915); 3217 3127 3218 - vbt = intel_opregion_get_vbt(i915, NULL); 3219 - 3220 - /* 3221 - * If the OpRegion does not have VBT, look in SPI flash through MMIO or 3222 - * PCI mapping 3223 - */ 3224 - if (!vbt && IS_DGFX(i915)) { 3225 - oprom_vbt = spi_oprom_get_vbt(i915); 3226 - vbt = oprom_vbt; 3227 - } 3228 - 3229 - if (!vbt) { 3230 - oprom_vbt = oprom_get_vbt(i915); 3231 - vbt = oprom_vbt; 3232 - } 3128 + vbt = intel_bios_get_vbt(i915, NULL); 3233 3129 3234 3130 if (!vbt) 3235 3131 goto out; ··· 3248 3172 parse_sdvo_device_mapping(i915); 3249 3173 parse_ddi_ports(i915); 3250 3174 3251 - kfree(oprom_vbt); 3175 + kfree(vbt); 3252 3176 } 3253 3177 3254 3178 static void intel_bios_init_panel(struct drm_i915_private *i915, ··· 3420 3344 * additional data. Trust that if the VBT was written into 3421 3345 * the OpRegion then they have validated the LVDS's existence. 3422 3346 */ 3423 - if (intel_opregion_get_vbt(i915, NULL)) 3424 - return true; 3347 + return intel_opregion_vbt_present(i915); 3425 3348 } 3426 3349 3427 3350 return false; ··· 3781 3706 const void *vbt; 3782 3707 size_t vbt_size; 3783 3708 3784 - /* 3785 - * FIXME: VBT might originate from other places than opregion, and then 3786 - * this would be incorrect. 3787 - */ 3788 - vbt = intel_opregion_get_vbt(i915, &vbt_size); 3789 - if (vbt) 3709 + vbt = intel_bios_get_vbt(i915, &vbt_size); 3710 + 3711 + if (vbt) { 3790 3712 seq_write(m, vbt, vbt_size); 3713 + kfree(vbt); 3714 + } 3791 3715 3792 3716 return 0; 3793 3717 }
+2 -1
drivers/gpu/drm/i915/display/intel_bw.h
··· 52 52 u8 num_active_planes[I915_MAX_PIPES]; 53 53 }; 54 54 55 - #define to_intel_bw_state(x) container_of((x), struct intel_bw_state, base) 55 + #define to_intel_bw_state(global_state) \ 56 + container_of_const((global_state), struct intel_bw_state, base) 56 57 57 58 struct intel_bw_state * 58 59 intel_atomic_get_old_bw_state(struct intel_atomic_state *state);
+167 -75
drivers/gpu/drm/i915/display/intel_cdclk.c
··· 39 39 #include "intel_pcode.h" 40 40 #include "intel_psr.h" 41 41 #include "intel_vdsc.h" 42 + #include "skl_watermark.h" 43 + #include "skl_watermark_regs.h" 42 44 #include "vlv_sideband.h" 43 45 44 46 /** ··· 64 62 * On SKL+ the DMC will toggle the CDCLK off/on during DC5/6 entry/exit. 65 63 * DMC will not change the active CDCLK frequency however, so that part 66 64 * will still be performed by the driver directly. 65 + * 66 + * There are multiple components involved in the generation of the CDCLK 67 + * frequency: 68 + * 69 + * - We have the CDCLK PLL, which generates an output clock based on a 70 + * reference clock and a ratio parameter. 71 + * - The CD2X Divider, which divides the output of the PLL based on a 72 + * divisor selected from a set of pre-defined choices. 73 + * - The CD2X Squasher, which further divides the output based on a 74 + * waveform represented as a sequence of bits where each zero 75 + * "squashes out" a clock cycle. 76 + * - And, finally, a fixed divider that divides the output frequency by 2. 77 + * 78 + * As such, the resulting CDCLK frequency can be calculated with the 79 + * following formula: 80 + * 81 + * cdclk = vco / cd2x_div / (sq_len / sq_div) / 2 82 + * 83 + * , where vco is the frequency generated by the PLL; cd2x_div 84 + * represents the CD2X Divider; sq_len and sq_div are the bit length 85 + * and the number of high bits for the CD2X Squasher waveform, respectively; 86 + * and 2 represents the fixed divider. 87 + * 88 + * Note that some older platforms do not contain the CD2X Divider 89 + * and/or CD2X Squasher, in which case we can ignore their respective 90 + * factors in the formula above. 67 91 * 68 92 * Several methods exist to change the CDCLK frequency, which ones are 69 93 * supported depends on the platform: ··· 1021 993 return DIV_ROUND_CLOSEST(cdclk - 1000, 500); 1022 994 } 1023 995 1024 - static void skl_set_preferred_cdclk_vco(struct drm_i915_private *dev_priv, 1025 - int vco) 996 + static void skl_set_preferred_cdclk_vco(struct drm_i915_private *i915, int vco) 1026 997 { 1027 - bool changed = dev_priv->skl_preferred_vco_freq != vco; 998 + bool changed = i915->display.cdclk.skl_preferred_vco_freq != vco; 1028 999 1029 - dev_priv->skl_preferred_vco_freq = vco; 1000 + i915->display.cdclk.skl_preferred_vco_freq = vco; 1030 1001 1031 1002 if (changed) 1032 - intel_update_max_cdclk(dev_priv); 1003 + intel_update_max_cdclk(i915); 1033 1004 } 1034 1005 1035 1006 static u32 skl_dpll0_link_rate(struct drm_i915_private *dev_priv, int vco) ··· 1232 1205 * Use the current vco as our initial 1233 1206 * guess as to what the preferred vco is. 
1234 1207 */ 1235 - if (dev_priv->skl_preferred_vco_freq == 0) 1208 + if (dev_priv->display.cdclk.skl_preferred_vco_freq == 0) 1236 1209 skl_set_preferred_cdclk_vco(dev_priv, 1237 1210 dev_priv->display.cdclk.hw.vco); 1238 1211 return; ··· 1240 1213 1241 1214 cdclk_config = dev_priv->display.cdclk.hw; 1242 1215 1243 - cdclk_config.vco = dev_priv->skl_preferred_vco_freq; 1216 + cdclk_config.vco = dev_priv->display.cdclk.skl_preferred_vco_freq; 1244 1217 if (cdclk_config.vco == 0) 1245 1218 cdclk_config.vco = 8100000; 1246 1219 cdclk_config.cdclk = skl_calc_cdclk(0, cdclk_config.vco); ··· 1418 1391 {} 1419 1392 }; 1420 1393 1421 - static const struct intel_cdclk_vals lnl_cdclk_table[] = { 1394 + static const struct intel_cdclk_vals xe2lpd_cdclk_table[] = { 1422 1395 { .refclk = 38400, .cdclk = 153600, .ratio = 16, .waveform = 0xaaaa }, 1423 1396 { .refclk = 38400, .cdclk = 172800, .ratio = 16, .waveform = 0xad5a }, 1424 1397 { .refclk = 38400, .cdclk = 192000, .ratio = 16, .waveform = 0xb6b6 }, ··· 1683 1656 } 1684 1657 1685 1658 out: 1659 + if (DISPLAY_VER(dev_priv) >= 20) 1660 + cdclk_config->joined_mbus = intel_de_read(dev_priv, MBUS_CTL) & MBUS_JOIN; 1686 1661 /* 1687 1662 * Can't read this out :( Let's assume it's 1688 1663 * at least what the CDCLK frequency requires. ··· 1879 1850 return vco == ~0; 1880 1851 } 1881 1852 1853 + static bool mdclk_source_is_cdclk_pll(struct drm_i915_private *i915) 1854 + { 1855 + return DISPLAY_VER(i915) >= 20; 1856 + } 1857 + 1858 + static u32 xe2lpd_mdclk_source_sel(struct drm_i915_private *i915) 1859 + { 1860 + if (mdclk_source_is_cdclk_pll(i915)) 1861 + return MDCLK_SOURCE_SEL_CDCLK_PLL; 1862 + 1863 + return MDCLK_SOURCE_SEL_CD2XCLK; 1864 + } 1865 + 1866 + int intel_mdclk_cdclk_ratio(struct drm_i915_private *i915, 1867 + const struct intel_cdclk_config *cdclk_config) 1868 + { 1869 + if (mdclk_source_is_cdclk_pll(i915)) 1870 + return DIV_ROUND_UP(cdclk_config->vco, cdclk_config->cdclk); 1871 + 1872 + /* Otherwise, source for MDCLK is CD2XCLK. 
*/ 1873 + return 2; 1874 + } 1875 + 1876 + static void xe2lpd_mdclk_cdclk_ratio_program(struct drm_i915_private *i915, 1877 + const struct intel_cdclk_config *cdclk_config) 1878 + { 1879 + intel_dbuf_mdclk_cdclk_ratio_update(i915, 1880 + intel_mdclk_cdclk_ratio(i915, cdclk_config), 1881 + cdclk_config->joined_mbus); 1882 + } 1883 + 1882 1884 static bool cdclk_compute_crawl_and_squash_midpoint(struct drm_i915_private *i915, 1883 1885 const struct intel_cdclk_config *old_cdclk_config, 1884 1886 const struct intel_cdclk_config *new_cdclk_config, ··· 2014 1954 val |= BXT_CDCLK_SSA_PRECHARGE_ENABLE; 2015 1955 2016 1956 if (DISPLAY_VER(i915) >= 20) 2017 - val |= MDCLK_SOURCE_SEL_CDCLK_PLL; 1957 + val |= xe2lpd_mdclk_source_sel(i915); 2018 1958 else 2019 1959 val |= skl_cdclk_decimal(cdclk); 2020 1960 ··· 2027 1967 { 2028 1968 int cdclk = cdclk_config->cdclk; 2029 1969 int vco = cdclk_config->vco; 2030 - u16 waveform; 2031 1970 2032 1971 if (HAS_CDCLK_CRAWL(dev_priv) && dev_priv->display.cdclk.hw.vco > 0 && vco > 0 && 2033 1972 !cdclk_pll_is_unknown(dev_priv->display.cdclk.hw.vco)) { ··· 2041 1982 } else 2042 1983 bxt_cdclk_pll_update(dev_priv, vco); 2043 1984 2044 - waveform = cdclk_squash_waveform(dev_priv, cdclk); 1985 + if (HAS_CDCLK_SQUASH(dev_priv)) { 1986 + u16 waveform = cdclk_squash_waveform(dev_priv, cdclk); 2045 1987 2046 - if (HAS_CDCLK_SQUASH(dev_priv)) 2047 1988 dg2_cdclk_squash_program(dev_priv, waveform); 1989 + } 2048 1990 2049 1991 intel_de_write(dev_priv, CDCLK_CTL, bxt_cdclk_ctl(dev_priv, cdclk_config, pipe)); 2050 1992 ··· 2090 2030 return; 2091 2031 } 2092 2032 2033 + if (DISPLAY_VER(dev_priv) >= 20 && cdclk < dev_priv->display.cdclk.hw.cdclk) 2034 + xe2lpd_mdclk_cdclk_ratio_program(dev_priv, cdclk_config); 2035 + 2093 2036 if (cdclk_compute_crawl_and_squash_midpoint(dev_priv, &dev_priv->display.cdclk.hw, 2094 2037 cdclk_config, &mid_cdclk_config)) { 2095 2038 _bxt_set_cdclk(dev_priv, &mid_cdclk_config, pipe); ··· 2100 2037 } else { 2101 2038 _bxt_set_cdclk(dev_priv, cdclk_config, pipe); 2102 2039 } 2040 + 2041 + if (DISPLAY_VER(dev_priv) >= 20 && cdclk > dev_priv->display.cdclk.hw.cdclk) 2042 + xe2lpd_mdclk_cdclk_ratio_program(dev_priv, cdclk_config); 2103 2043 2104 2044 if (DISPLAY_VER(dev_priv) >= 14) 2105 2045 /* ··· 2326 2260 } 2327 2261 2328 2262 /** 2329 - * intel_cdclk_needs_modeset - Determine if changong between the CDCLK 2330 - * configurations requires a modeset on all pipes 2263 + * intel_cdclk_clock_changed - Check whether the clock changed 2331 2264 * @a: first CDCLK configuration 2332 2265 * @b: second CDCLK configuration 2333 2266 * 2334 2267 * Returns: 2335 - * True if changing between the two CDCLK configurations 2336 - * requires all pipes to be off, false if not. 2268 + * True if CDCLK changed in a way that requires re-programming and 2269 + * False otherwise. 
2337 2270 */ 2338 - bool intel_cdclk_needs_modeset(const struct intel_cdclk_config *a, 2271 + bool intel_cdclk_clock_changed(const struct intel_cdclk_config *a, 2339 2272 const struct intel_cdclk_config *b) 2340 2273 { 2341 2274 return a->cdclk != b->cdclk || ··· 2387 2322 static bool intel_cdclk_changed(const struct intel_cdclk_config *a, 2388 2323 const struct intel_cdclk_config *b) 2389 2324 { 2390 - return intel_cdclk_needs_modeset(a, b) || 2325 + return intel_cdclk_clock_changed(a, b) || 2391 2326 a->voltage_level != b->voltage_level; 2392 2327 } 2393 2328 ··· 2433 2368 ret); 2434 2369 } 2435 2370 2436 - /** 2437 - * intel_set_cdclk - Push the CDCLK configuration to the hardware 2438 - * @dev_priv: i915 device 2439 - * @cdclk_config: new CDCLK configuration 2440 - * @pipe: pipe with which to synchronize the update 2441 - * 2442 - * Program the hardware based on the passed in CDCLK state, 2443 - * if necessary. 2444 - */ 2445 2371 static void intel_set_cdclk(struct drm_i915_private *dev_priv, 2446 2372 const struct intel_cdclk_config *cdclk_config, 2447 - enum pipe pipe) 2373 + enum pipe pipe, const char *context) 2448 2374 { 2449 2375 struct intel_encoder *encoder; 2450 2376 ··· 2445 2389 if (drm_WARN_ON_ONCE(&dev_priv->drm, !dev_priv->display.funcs.cdclk->set_cdclk)) 2446 2390 return; 2447 2391 2448 - intel_cdclk_dump_config(dev_priv, cdclk_config, "Changing CDCLK to"); 2392 + intel_cdclk_dump_config(dev_priv, cdclk_config, context); 2449 2393 2450 2394 for_each_intel_encoder_with_psr(&dev_priv->drm, encoder) { 2451 2395 struct intel_dp *intel_dp = enc_to_intel_dp(encoder); ··· 2575 2519 update_cdclk, update_pipe_count); 2576 2520 } 2577 2521 2522 + bool intel_cdclk_is_decreasing_later(struct intel_atomic_state *state) 2523 + { 2524 + const struct intel_cdclk_state *old_cdclk_state = 2525 + intel_atomic_get_old_cdclk_state(state); 2526 + const struct intel_cdclk_state *new_cdclk_state = 2527 + intel_atomic_get_new_cdclk_state(state); 2528 + 2529 + return new_cdclk_state && !new_cdclk_state->disable_pipes && 2530 + new_cdclk_state->actual.cdclk < old_cdclk_state->actual.cdclk; 2531 + } 2532 + 2578 2533 /** 2579 2534 * intel_set_cdclk_pre_plane_update - Push the CDCLK state to the hardware 2580 2535 * @state: intel atomic state ··· 2601 2534 intel_atomic_get_old_cdclk_state(state); 2602 2535 const struct intel_cdclk_state *new_cdclk_state = 2603 2536 intel_atomic_get_new_cdclk_state(state); 2604 - enum pipe pipe = new_cdclk_state->pipe; 2537 + struct intel_cdclk_config cdclk_config; 2538 + enum pipe pipe; 2605 2539 2606 2540 if (!intel_cdclk_changed(&old_cdclk_state->actual, 2607 2541 &new_cdclk_state->actual)) ··· 2611 2543 if (IS_DG2(i915)) 2612 2544 intel_cdclk_pcode_pre_notify(state); 2613 2545 2614 - if (pipe == INVALID_PIPE || 2615 - old_cdclk_state->actual.cdclk <= new_cdclk_state->actual.cdclk) { 2616 - drm_WARN_ON(&i915->drm, !new_cdclk_state->base.changed); 2546 + if (new_cdclk_state->disable_pipes) { 2547 + cdclk_config = new_cdclk_state->actual; 2548 + pipe = INVALID_PIPE; 2549 + } else { 2550 + if (new_cdclk_state->actual.cdclk >= old_cdclk_state->actual.cdclk) { 2551 + cdclk_config = new_cdclk_state->actual; 2552 + pipe = new_cdclk_state->pipe; 2553 + } else { 2554 + cdclk_config = old_cdclk_state->actual; 2555 + pipe = INVALID_PIPE; 2556 + } 2617 2557 2618 - intel_set_cdclk(i915, &new_cdclk_state->actual, pipe); 2558 + cdclk_config.voltage_level = max(new_cdclk_state->actual.voltage_level, 2559 + old_cdclk_state->actual.voltage_level); 2619 2560 } 2561 + 2562 + /* 2563 + 
* mbus joining will be changed later by 2564 + * intel_dbuf_mbus_{pre,post}_ddb_update() 2565 + */ 2566 + cdclk_config.joined_mbus = old_cdclk_state->actual.joined_mbus; 2567 + 2568 + drm_WARN_ON(&i915->drm, !new_cdclk_state->base.changed); 2569 + 2570 + intel_set_cdclk(i915, &cdclk_config, pipe, 2571 + "Pre changing CDCLK to"); 2620 2572 } 2621 2573 2622 2574 /** ··· 2654 2566 intel_atomic_get_old_cdclk_state(state); 2655 2567 const struct intel_cdclk_state *new_cdclk_state = 2656 2568 intel_atomic_get_new_cdclk_state(state); 2657 - enum pipe pipe = new_cdclk_state->pipe; 2569 + enum pipe pipe; 2658 2570 2659 2571 if (!intel_cdclk_changed(&old_cdclk_state->actual, 2660 2572 &new_cdclk_state->actual)) ··· 2663 2575 if (IS_DG2(i915)) 2664 2576 intel_cdclk_pcode_post_notify(state); 2665 2577 2666 - if (pipe != INVALID_PIPE && 2667 - old_cdclk_state->actual.cdclk > new_cdclk_state->actual.cdclk) { 2668 - drm_WARN_ON(&i915->drm, !new_cdclk_state->base.changed); 2578 + if (!new_cdclk_state->disable_pipes && 2579 + new_cdclk_state->actual.cdclk < old_cdclk_state->actual.cdclk) 2580 + pipe = new_cdclk_state->pipe; 2581 + else 2582 + pipe = INVALID_PIPE; 2669 2583 2670 - intel_set_cdclk(i915, &new_cdclk_state->actual, pipe); 2671 - } 2584 + drm_WARN_ON(&i915->drm, !new_cdclk_state->base.changed); 2585 + 2586 + intel_set_cdclk(i915, &new_cdclk_state->actual, pipe, 2587 + "Post changing CDCLK to"); 2672 2588 } 2673 2589 2674 2590 static int intel_pixel_rate_to_cdclk(const struct intel_crtc_state *crtc_state) ··· 2822 2730 2823 2731 if (crtc_state->dsc.compression_enable) 2824 2732 min_cdclk = max(min_cdclk, intel_vdsc_min_cdclk(crtc_state)); 2825 - 2826 - /* 2827 - * HACK. Currently for TGL/DG2 platforms we calculate 2828 - * min_cdclk initially based on pixel_rate divided 2829 - * by 2, accounting for also plane requirements, 2830 - * however in some cases the lowest possible CDCLK 2831 - * doesn't work and causing the underruns. 2832 - * Explicitly stating here that this seems to be currently 2833 - * rather a Hack, than final solution. 2834 - */ 2835 - if (IS_TIGERLAKE(dev_priv) || IS_DG2(dev_priv)) { 2836 - /* 2837 - * Clamp to max_cdclk_freq in case pixel rate is higher, 2838 - * in order not to break an 8K, but still leave W/A at place. 
2839 - */ 2840 - min_cdclk = max_t(int, min_cdclk, 2841 - min_t(int, crtc_state->pixel_rate, 2842 - dev_priv->display.cdclk.max_cdclk_freq)); 2843 - } 2844 2733 2845 2734 return min_cdclk; 2846 2735 } ··· 3010 2937 3011 2938 vco = cdclk_state->logical.vco; 3012 2939 if (!vco) 3013 - vco = dev_priv->skl_preferred_vco_freq; 2940 + vco = dev_priv->display.cdclk.skl_preferred_vco_freq; 3014 2941 3015 2942 for_each_new_intel_crtc_in_state(state, crtc, crtc_state, i) { 3016 2943 if (!crtc_state->hw.enable) ··· 3131 3058 return NULL; 3132 3059 3133 3060 cdclk_state->pipe = INVALID_PIPE; 3061 + cdclk_state->disable_pipes = false; 3134 3062 3135 3063 return &cdclk_state->base; 3136 3064 } ··· 3193 3119 *need_cdclk_calc = true; 3194 3120 3195 3121 return 0; 3122 + } 3123 + 3124 + int intel_cdclk_state_set_joined_mbus(struct intel_atomic_state *state, bool joined_mbus) 3125 + { 3126 + struct intel_cdclk_state *cdclk_state; 3127 + 3128 + cdclk_state = intel_atomic_get_cdclk_state(state); 3129 + if (IS_ERR(cdclk_state)) 3130 + return PTR_ERR(cdclk_state); 3131 + 3132 + cdclk_state->actual.joined_mbus = joined_mbus; 3133 + cdclk_state->logical.joined_mbus = joined_mbus; 3134 + 3135 + return intel_atomic_lock_global_state(&cdclk_state->base); 3196 3136 } 3197 3137 3198 3138 int intel_cdclk_init(struct drm_i915_private *dev_priv) ··· 3317 3229 drm_dbg_kms(&dev_priv->drm, 3318 3230 "Can change cdclk cd2x divider with pipe %c active\n", 3319 3231 pipe_name(pipe)); 3320 - } else if (intel_cdclk_needs_modeset(&old_cdclk_state->actual, 3232 + } else if (intel_cdclk_clock_changed(&old_cdclk_state->actual, 3321 3233 &new_cdclk_state->actual)) { 3322 3234 /* All pipes must be switched off while we change the cdclk. */ 3323 3235 ret = intel_modeset_all_pipes_late(state, "CDCLK change"); 3324 3236 if (ret) 3325 3237 return ret; 3326 3238 3239 + new_cdclk_state->disable_pipes = true; 3240 + 3327 3241 drm_dbg_kms(&dev_priv->drm, 3328 3242 "Modeset required for cdclk change\n"); 3243 + } 3244 + 3245 + if (intel_mdclk_cdclk_ratio(dev_priv, &old_cdclk_state->actual) != 3246 + intel_mdclk_cdclk_ratio(dev_priv, &new_cdclk_state->actual)) { 3247 + int ratio = intel_mdclk_cdclk_ratio(dev_priv, &new_cdclk_state->actual); 3248 + 3249 + ret = intel_dbuf_state_set_mdclk_cdclk_ratio(state, ratio); 3250 + if (ret) 3251 + return ret; 3329 3252 } 3330 3253 3331 3254 drm_dbg_kms(&dev_priv->drm, ··· 3396 3297 u32 limit = intel_de_read(dev_priv, SKL_DFSM) & SKL_DFSM_CDCLK_LIMIT_MASK; 3397 3298 int max_cdclk, vco; 3398 3299 3399 - vco = dev_priv->skl_preferred_vco_freq; 3300 + vco = dev_priv->display.cdclk.skl_preferred_vco_freq; 3400 3301 drm_WARN_ON(&dev_priv->drm, vco != 8100000 && vco != 8640000); 3401 3302 3402 3303 /* ··· 3438 3339 dev_priv->display.cdclk.max_cdclk_freq = dev_priv->display.cdclk.hw.cdclk; 3439 3340 } 3440 3341 3441 - dev_priv->max_dotclk_freq = intel_compute_max_dotclk(dev_priv); 3342 + dev_priv->display.cdclk.max_dotclk_freq = intel_compute_max_dotclk(dev_priv); 3442 3343 3443 3344 drm_dbg(&dev_priv->drm, "Max CD clock rate: %d kHz\n", 3444 3345 dev_priv->display.cdclk.max_cdclk_freq); 3445 3346 3446 3347 drm_dbg(&dev_priv->drm, "Max dotclock rate: %d kHz\n", 3447 - dev_priv->max_dotclk_freq); 3348 + dev_priv->display.cdclk.max_dotclk_freq); 3448 3349 } 3449 3350 3450 3351 /** ··· 3618 3519 3619 3520 seq_printf(m, "Current CD clock frequency: %d kHz\n", i915->display.cdclk.hw.cdclk); 3620 3521 seq_printf(m, "Max CD clock frequency: %d kHz\n", i915->display.cdclk.max_cdclk_freq); 3621 - seq_printf(m, "Max pixel 
clock frequency: %d kHz\n", i915->max_dotclk_freq); 3522 + seq_printf(m, "Max pixel clock frequency: %d kHz\n", i915->display.cdclk.max_dotclk_freq); 3622 3523 3623 3524 return 0; 3624 3525 } ··· 3632 3533 debugfs_create_file("i915_cdclk_info", 0444, minor->debugfs_root, 3633 3534 i915, &i915_cdclk_info_fops); 3634 3535 } 3635 - 3636 - static const struct intel_cdclk_funcs mtl_cdclk_funcs = { 3637 - .get_cdclk = bxt_get_cdclk, 3638 - .set_cdclk = bxt_set_cdclk, 3639 - .modeset_calc_cdclk = bxt_modeset_calc_cdclk, 3640 - .calc_voltage_level = rplu_calc_voltage_level, 3641 - }; 3642 3536 3643 3537 static const struct intel_cdclk_funcs rplu_cdclk_funcs = { 3644 3538 .get_cdclk = bxt_get_cdclk, ··· 3776 3684 void intel_init_cdclk_hooks(struct drm_i915_private *dev_priv) 3777 3685 { 3778 3686 if (DISPLAY_VER(dev_priv) >= 20) { 3779 - dev_priv->display.funcs.cdclk = &mtl_cdclk_funcs; 3780 - dev_priv->display.cdclk.table = lnl_cdclk_table; 3687 + dev_priv->display.funcs.cdclk = &rplu_cdclk_funcs; 3688 + dev_priv->display.cdclk.table = xe2lpd_cdclk_table; 3781 3689 } else if (DISPLAY_VER(dev_priv) >= 14) { 3782 - dev_priv->display.funcs.cdclk = &mtl_cdclk_funcs; 3690 + dev_priv->display.funcs.cdclk = &rplu_cdclk_funcs; 3783 3691 dev_priv->display.cdclk.table = mtl_cdclk_table; 3784 3692 } else if (IS_DG2(dev_priv)) { 3785 3693 dev_priv->display.funcs.cdclk = &tgl_cdclk_funcs;
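
Note: as a quick sanity check of the CDCLK formula documented in the new comment block, take the first xe2lpd table entry above (refclk 38400 kHz, ratio 16, waveform 0xaaaa) and assume a CD2X divider of 1 (the divider value is an assumption, it is not part of the table):

        vco    = 38400 kHz * 16              = 614400 kHz
        sq_len = 16, sq_div = 8                (0xaaaa has 8 high bits out of 16)
        cdclk  = 614400 / 1 / (16 / 8) / 2   = 153600 kHz

which matches the .cdclk value of that entry.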
+13 -2
drivers/gpu/drm/i915/display/intel_cdclk.h
··· 18 18 struct intel_cdclk_config { 19 19 unsigned int cdclk, vco, ref, bypass; 20 20 u8 voltage_level; 21 + /* This field is only valid for Xe2LPD and above. */ 22 + bool joined_mbus; 21 23 }; 22 24 23 25 struct intel_cdclk_state { ··· 53 51 54 52 /* bitmask of active pipes */ 55 53 u8 active_pipes; 54 + 55 + /* update cdclk with pipes disabled */ 56 + bool disable_pipes; 56 57 }; 57 58 58 59 int intel_crtc_compute_min_cdclk(const struct intel_crtc_state *crtc_state); ··· 65 60 void intel_update_max_cdclk(struct drm_i915_private *dev_priv); 66 61 void intel_update_cdclk(struct drm_i915_private *dev_priv); 67 62 u32 intel_read_rawclk(struct drm_i915_private *dev_priv); 68 - bool intel_cdclk_needs_modeset(const struct intel_cdclk_config *a, 63 + bool intel_cdclk_clock_changed(const struct intel_cdclk_config *a, 69 64 const struct intel_cdclk_config *b); 65 + int intel_mdclk_cdclk_ratio(struct drm_i915_private *i915, 66 + const struct intel_cdclk_config *cdclk_config); 67 + bool intel_cdclk_is_decreasing_later(struct intel_atomic_state *state); 70 68 void intel_set_cdclk_pre_plane_update(struct intel_atomic_state *state); 71 69 void intel_set_cdclk_post_plane_update(struct intel_atomic_state *state); 72 70 void intel_cdclk_dump_config(struct drm_i915_private *i915, ··· 80 72 struct intel_cdclk_config *cdclk_config); 81 73 int intel_cdclk_atomic_check(struct intel_atomic_state *state, 82 74 bool *need_cdclk_calc); 75 + int intel_cdclk_state_set_joined_mbus(struct intel_atomic_state *state, bool joined_mbus); 83 76 struct intel_cdclk_state * 84 77 intel_atomic_get_cdclk_state(struct intel_atomic_state *state); 85 78 86 - #define to_intel_cdclk_state(x) container_of((x), struct intel_cdclk_state, base) 79 + #define to_intel_cdclk_state(global_state) \ 80 + container_of_const((global_state), struct intel_cdclk_state, base) 81 + 87 82 #define intel_atomic_get_old_cdclk_state(state) \ 88 83 to_intel_cdclk_state(intel_atomic_get_old_global_obj_state(state, &to_i915(state->base.dev)->display.cdclk.obj)) 89 84 #define intel_atomic_get_new_cdclk_state(state) \
+60 -57
drivers/gpu/drm/i915/display/intel_combo_phy_regs.h
··· 25 25 4 * (dw)) 26 26 27 27 #define ICL_PORT_CL_DW5(phy) _MMIO(_ICL_PORT_CL_DW(5, phy)) 28 - #define CL_POWER_DOWN_ENABLE (1 << 4) 29 - #define SUS_CLOCK_CONFIG (3 << 0) 28 + #define CL_POWER_DOWN_ENABLE REG_BIT(4) 29 + #define SUS_CLOCK_CONFIG REG_GENMASK(1, 0) 30 30 31 31 #define ICL_PORT_CL_DW10(phy) _MMIO(_ICL_PORT_CL_DW(10, phy)) 32 - #define PG_SEQ_DELAY_OVERRIDE_MASK (3 << 25) 33 - #define PG_SEQ_DELAY_OVERRIDE_SHIFT 25 34 - #define PG_SEQ_DELAY_OVERRIDE_ENABLE (1 << 24) 35 - #define PWR_UP_ALL_LANES (0x0 << 4) 36 - #define PWR_DOWN_LN_3_2_1 (0xe << 4) 37 - #define PWR_DOWN_LN_3_2 (0xc << 4) 38 - #define PWR_DOWN_LN_3 (0x8 << 4) 39 - #define PWR_DOWN_LN_2_1_0 (0x7 << 4) 40 - #define PWR_DOWN_LN_1_0 (0x3 << 4) 41 - #define PWR_DOWN_LN_3_1 (0xa << 4) 42 - #define PWR_DOWN_LN_3_1_0 (0xb << 4) 43 - #define PWR_DOWN_LN_MASK (0xf << 4) 44 - #define PWR_DOWN_LN_SHIFT 4 45 - #define EDP4K2K_MODE_OVRD_EN (1 << 3) 46 - #define EDP4K2K_MODE_OVRD_OPTIMIZED (1 << 2) 32 + #define PG_SEQ_DELAY_OVERRIDE_MASK REG_GENMASK(26, 25) 33 + #define PG_SEQ_DELAY_OVERRIDE_ENABLE REG_BIT(24) 34 + #define PWR_DOWN_LN_MASK REG_GENMASK(7, 4) 35 + #define PWR_UP_ALL_LANES REG_FIELD_PREP(PWR_DOWN_LN_MASK, 0x0) 36 + #define PWR_DOWN_LN_3_2_1 REG_FIELD_PREP(PWR_DOWN_LN_MASK, 0xe) 37 + #define PWR_DOWN_LN_3_2 REG_FIELD_PREP(PWR_DOWN_LN_MASK, 0xc) 38 + #define PWR_DOWN_LN_3 REG_FIELD_PREP(PWR_DOWN_LN_MASK, 0x8) 39 + #define PWR_DOWN_LN_2_1_0 REG_FIELD_PREP(PWR_DOWN_LN_MASK, 0x7) 40 + #define PWR_DOWN_LN_1_0 REG_FIELD_PREP(PWR_DOWN_LN_MASK, 0x3) 41 + #define PWR_DOWN_LN_3_1 REG_FIELD_PREP(PWR_DOWN_LN_MASK, 0xa) 42 + #define PWR_DOWN_LN_3_1_0 REG_FIELD_PREP(PWR_DOWN_LN_MASK, 0xb) 43 + #define EDP4K2K_MODE_OVRD_EN REG_BIT(3) 44 + #define EDP4K2K_MODE_OVRD_OPTIMIZED REG_BIT(2) 47 45 48 46 #define ICL_PORT_CL_DW12(phy) _MMIO(_ICL_PORT_CL_DW(12, phy)) 49 - #define ICL_LANE_ENABLE_AUX (1 << 0) 47 + #define ICL_LANE_ENABLE_AUX REG_BIT(0) 50 48 51 49 /* ICL Port COMP_DW registers */ 52 50 #define _ICL_PORT_COMP 0x100 ··· 52 54 _ICL_PORT_COMP + 4 * (dw)) 53 55 54 56 #define ICL_PORT_COMP_DW0(phy) _MMIO(_ICL_PORT_COMP_DW(0, phy)) 55 - #define COMP_INIT (1 << 31) 57 + #define COMP_INIT REG_BIT(31) 56 58 57 59 #define ICL_PORT_COMP_DW1(phy) _MMIO(_ICL_PORT_COMP_DW(1, phy)) 58 60 59 61 #define ICL_PORT_COMP_DW3(phy) _MMIO(_ICL_PORT_COMP_DW(3, phy)) 60 - #define PROCESS_INFO_DOT_0 (0 << 26) 61 - #define PROCESS_INFO_DOT_1 (1 << 26) 62 - #define PROCESS_INFO_DOT_4 (2 << 26) 63 - #define PROCESS_INFO_MASK (7 << 26) 64 - #define PROCESS_INFO_SHIFT 26 65 - #define VOLTAGE_INFO_0_85V (0 << 24) 66 - #define VOLTAGE_INFO_0_95V (1 << 24) 67 - #define VOLTAGE_INFO_1_05V (2 << 24) 68 - #define VOLTAGE_INFO_MASK (3 << 24) 69 - #define VOLTAGE_INFO_SHIFT 24 62 + #define PROCESS_INFO_MASK REG_GENMASK(28, 26) 63 + #define PROCESS_INFO_DOT_0 REG_FIELD_PREP(PROCESS_INFO_MASK, 0) 64 + #define PROCESS_INFO_DOT_1 REG_FIELD_PREP(PROCESS_INFO_MASK, 1) 65 + #define PROCESS_INFO_DOT_4 REG_FIELD_PREP(PROCESS_INFO_MASK, 2) 66 + #define VOLTAGE_INFO_MASK REG_GENMASK(25, 24) 67 + #define VOLTAGE_INFO_0_85V REG_FIELD_PREP(VOLTAGE_INFO_MASK, 0) 68 + #define VOLTAGE_INFO_0_95V REG_FIELD_PREP(VOLTAGE_INFO_MASK, 1) 69 + #define VOLTAGE_INFO_1_05V REG_FIELD_PREP(VOLTAGE_INFO_MASK, 2) 70 70 71 71 #define ICL_PORT_COMP_DW8(phy) _MMIO(_ICL_PORT_COMP_DW(8, phy)) 72 - #define IREFGEN (1 << 24) 72 + #define IREFGEN REG_BIT(24) 73 73 74 74 #define ICL_PORT_COMP_DW9(phy) _MMIO(_ICL_PORT_COMP_DW(9, phy)) 75 75 ··· 88 92 #define ICL_PORT_PCS_DW1_LN(ln, phy) 
_MMIO(_ICL_PORT_PCS_DW_LN(1, ln, phy)) 89 93 #define DCC_MODE_SELECT_MASK REG_GENMASK(21, 20) 90 94 #define RUN_DCC_ONCE REG_FIELD_PREP(DCC_MODE_SELECT_MASK, 0) 91 - #define COMMON_KEEPER_EN (1 << 26) 92 - #define LATENCY_OPTIM_MASK (0x3 << 2) 93 - #define LATENCY_OPTIM_VAL(x) ((x) << 2) 95 + #define COMMON_KEEPER_EN REG_BIT(26) 96 + #define LATENCY_OPTIM_MASK REG_GENMASK(3, 2) 97 + #define LATENCY_OPTIM_VAL(x) REG_FIELD_PREP(LATENCY_OPTIM_MASK, (x)) 94 98 95 99 /* ICL Port TX registers */ 96 100 #define _ICL_PORT_TX_AUX 0x380 ··· 107 111 #define ICL_PORT_TX_DW2_AUX(phy) _MMIO(_ICL_PORT_TX_DW_AUX(2, phy)) 108 112 #define ICL_PORT_TX_DW2_GRP(phy) _MMIO(_ICL_PORT_TX_DW_GRP(2, phy)) 109 113 #define ICL_PORT_TX_DW2_LN(ln, phy) _MMIO(_ICL_PORT_TX_DW_LN(2, ln, phy)) 110 - #define SWING_SEL_UPPER(x) (((x) >> 3) << 15) 111 - #define SWING_SEL_UPPER_MASK (1 << 15) 112 - #define SWING_SEL_LOWER(x) (((x) & 0x7) << 11) 113 - #define SWING_SEL_LOWER_MASK (0x7 << 11) 114 - #define FRC_LATENCY_OPTIM_MASK (0x7 << 8) 115 - #define FRC_LATENCY_OPTIM_VAL(x) ((x) << 8) 116 - #define RCOMP_SCALAR(x) ((x) << 0) 117 - #define RCOMP_SCALAR_MASK (0xFF << 0) 114 + #define SWING_SEL_UPPER_MASK REG_BIT(15) 115 + #define SWING_SEL_UPPER(x) REG_FIELD_PREP(SWING_SEL_UPPER_MASK, (x) >> 3) 116 + #define SWING_SEL_LOWER_MASK REG_GENMASK(13, 11) 117 + #define SWING_SEL_LOWER(x) REG_FIELD_PREP(SWING_SEL_LOWER_MASK, (x) & 0x7) 118 + #define FRC_LATENCY_OPTIM_MASK REG_GENMASK(10, 8) 119 + #define FRC_LATENCY_OPTIM_VAL(x) REG_FIELD_PREP(FRC_LATENCY_OPTIM_MASK, (x)) 120 + #define RCOMP_SCALAR_MASK REG_GENMASK(7, 0) 121 + #define RCOMP_SCALAR(x) REG_FIELD_PREP(RCOMP_SCALAR_MASK, (x)) 118 122 119 123 #define ICL_PORT_TX_DW4_AUX(phy) _MMIO(_ICL_PORT_TX_DW_AUX(4, phy)) 120 124 #define ICL_PORT_TX_DW4_GRP(phy) _MMIO(_ICL_PORT_TX_DW_GRP(4, phy)) 121 125 #define ICL_PORT_TX_DW4_LN(ln, phy) _MMIO(_ICL_PORT_TX_DW_LN(4, ln, phy)) 122 - #define LOADGEN_SELECT (1 << 31) 123 - #define POST_CURSOR_1(x) ((x) << 12) 124 - #define POST_CURSOR_1_MASK (0x3F << 12) 125 - #define POST_CURSOR_2(x) ((x) << 6) 126 - #define POST_CURSOR_2_MASK (0x3F << 6) 127 - #define CURSOR_COEFF(x) ((x) << 0) 128 - #define CURSOR_COEFF_MASK (0x3F << 0) 126 + #define LOADGEN_SELECT REG_BIT(31) 127 + #define POST_CURSOR_1_MASK REG_GENMASK(17, 12) 128 + #define POST_CURSOR_1(x) REG_FIELD_PREP(POST_CURSOR_1_MASK, (x)) 129 + #define POST_CURSOR_2_MASK REG_GENMASK(11, 6) 130 + #define POST_CURSOR_2(x) REG_FIELD_PREP(POST_CURSOR_2_MASK, (x)) 131 + #define CURSOR_COEFF_MASK REG_GENMASK(5, 0) 132 + #define CURSOR_COEFF(x) REG_FIELD_PREP(CURSOR_COEFF_MASK, (x)) 129 133 130 134 #define ICL_PORT_TX_DW5_AUX(phy) _MMIO(_ICL_PORT_TX_DW_AUX(5, phy)) 131 135 #define ICL_PORT_TX_DW5_GRP(phy) _MMIO(_ICL_PORT_TX_DW_GRP(5, phy)) 132 136 #define ICL_PORT_TX_DW5_LN(ln, phy) _MMIO(_ICL_PORT_TX_DW_LN(5, ln, phy)) 133 - #define TX_TRAINING_EN (1 << 31) 134 - #define TAP2_DISABLE (1 << 30) 135 - #define TAP3_DISABLE (1 << 29) 136 - #define SCALING_MODE_SEL(x) ((x) << 18) 137 - #define SCALING_MODE_SEL_MASK (0x7 << 18) 138 - #define RTERM_SELECT(x) ((x) << 3) 139 - #define RTERM_SELECT_MASK (0x7 << 3) 137 + #define TX_TRAINING_EN REG_BIT(31) 138 + #define TAP2_DISABLE REG_BIT(30) 139 + #define TAP3_DISABLE REG_BIT(29) 140 + #define SCALING_MODE_SEL_MASK REG_GENMASK(20, 18) 141 + #define SCALING_MODE_SEL(x) REG_FIELD_PREP(SCALING_MODE_SEL_MASK, (x)) 142 + #define RTERM_SELECT_MASK REG_GENMASK(5, 3) 143 + #define RTERM_SELECT(x) REG_FIELD_PREP(RTERM_SELECT_MASK, (x)) 144 + 145 + #define 
ICL_PORT_TX_DW6_AUX(phy) _MMIO(_ICL_PORT_TX_DW_AUX(6, phy)) 146 + #define ICL_PORT_TX_DW6_GRP(phy) _MMIO(_ICL_PORT_TX_DW_GRP(6, phy)) 147 + #define ICL_PORT_TX_DW6_LN(ln, phy) _MMIO(_ICL_PORT_TX_DW_LN(6, ln, phy)) 148 + #define O_FUNC_OVRD_EN REG_BIT(7) 149 + #define O_LDO_REF_SEL_CRI REG_GENMASK(6, 1) 150 + #define O_LDO_BYPASS_CRI REG_BIT(0) 140 151 141 152 #define ICL_PORT_TX_DW7_AUX(phy) _MMIO(_ICL_PORT_TX_DW_AUX(7, phy)) 142 153 #define ICL_PORT_TX_DW7_GRP(phy) _MMIO(_ICL_PORT_TX_DW_GRP(7, phy)) 143 154 #define ICL_PORT_TX_DW7_LN(ln, phy) _MMIO(_ICL_PORT_TX_DW_LN(7, ln, phy)) 144 - #define N_SCALAR(x) ((x) << 24) 145 - #define N_SCALAR_MASK (0x7F << 24) 155 + #define N_SCALAR_MASK REG_GENMASK(30, 24) 156 + #define N_SCALAR(x) REG_FIELD_PREP(N_SCALAR_MASK, (x)) 146 157 147 158 #define ICL_PORT_TX_DW8_AUX(phy) _MMIO(_ICL_PORT_TX_DW_AUX(8, phy)) 148 159 #define ICL_PORT_TX_DW8_GRP(phy) _MMIO(_ICL_PORT_TX_DW_GRP(8, phy))
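
Note: the REG_BIT/REG_GENMASK/REG_FIELD_PREP conversion above does not change any register values, it only makes the field widths explicit. A quick equivalence check with one of the converted definitions (the macro expansions are paraphrased from the i915 register helpers, not copied from this patch):

        old: PWR_DOWN_LN_3_2_1 = (0xe << 4)                             = 0xe0
        new: PWR_DOWN_LN_MASK  = REG_GENMASK(7, 4)                      = 0xf0
             PWR_DOWN_LN_3_2_1 = REG_FIELD_PREP(PWR_DOWN_LN_MASK, 0xe)  = 0xe << 4 = 0xe0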
+1 -4
drivers/gpu/drm/i915/display/intel_crt.c
··· 348 348 { 349 349 struct drm_device *dev = connector->dev; 350 350 struct drm_i915_private *dev_priv = to_i915(dev); 351 - int max_dotclk = dev_priv->max_dotclk_freq; 351 + int max_dotclk = dev_priv->display.cdclk.max_dotclk_freq; 352 352 enum drm_mode_status status; 353 353 int max_clock; 354 354 355 355 status = intel_cpu_transcoder_mode_valid(dev_priv, mode); 356 356 if (status != MODE_OK) 357 357 return status; 358 - 359 - if (mode->flags & DRM_MODE_FLAG_DBLSCAN) 360 - return MODE_NO_DBLESCAN; 361 358 362 359 if (mode->clock < 25000) 363 360 return MODE_CLOCK_LOW;
+157 -176
drivers/gpu/drm/i915/display/intel_crtc_state_dump.c
··· 12 12 #include "intel_hdmi.h" 13 13 #include "intel_vrr.h" 14 14 15 - static void intel_dump_crtc_timings(struct drm_i915_private *i915, 15 + static void intel_dump_crtc_timings(struct drm_printer *p, 16 16 const struct drm_display_mode *mode) 17 17 { 18 - drm_dbg_kms(&i915->drm, "crtc timings: clock=%d, " 19 - "hd=%d hb=%d-%d hs=%d-%d ht=%d, " 20 - "vd=%d vb=%d-%d vs=%d-%d vt=%d, " 21 - "flags=0x%x\n", 22 - mode->crtc_clock, 23 - mode->crtc_hdisplay, mode->crtc_hblank_start, mode->crtc_hblank_end, 24 - mode->crtc_hsync_start, mode->crtc_hsync_end, mode->crtc_htotal, 25 - mode->crtc_vdisplay, mode->crtc_vblank_start, mode->crtc_vblank_end, 26 - mode->crtc_vsync_start, mode->crtc_vsync_end, mode->crtc_vtotal, 27 - mode->flags); 18 + drm_printf(p, "crtc timings: clock=%d, " 19 + "hd=%d hb=%d-%d hs=%d-%d ht=%d, " 20 + "vd=%d vb=%d-%d vs=%d-%d vt=%d, " 21 + "flags=0x%x\n", 22 + mode->crtc_clock, 23 + mode->crtc_hdisplay, mode->crtc_hblank_start, mode->crtc_hblank_end, 24 + mode->crtc_hsync_start, mode->crtc_hsync_end, mode->crtc_htotal, 25 + mode->crtc_vdisplay, mode->crtc_vblank_start, mode->crtc_vblank_end, 26 + mode->crtc_vsync_start, mode->crtc_vsync_end, mode->crtc_vtotal, 27 + mode->flags); 28 28 } 29 29 30 30 static void 31 - intel_dump_m_n_config(const struct intel_crtc_state *pipe_config, 31 + intel_dump_m_n_config(struct drm_printer *p, 32 + const struct intel_crtc_state *pipe_config, 32 33 const char *id, unsigned int lane_count, 33 34 const struct intel_link_m_n *m_n) 34 35 { 35 - struct drm_i915_private *i915 = to_i915(pipe_config->uapi.crtc->dev); 36 - 37 - drm_dbg_kms(&i915->drm, 38 - "%s: lanes: %i; data_m: %u, data_n: %u, link_m: %u, link_n: %u, tu: %u\n", 39 - id, lane_count, 40 - m_n->data_m, m_n->data_n, 41 - m_n->link_m, m_n->link_n, m_n->tu); 36 + drm_printf(p, "%s: lanes: %i; data_m: %u, data_n: %u, link_m: %u, link_n: %u, tu: %u\n", 37 + id, lane_count, 38 + m_n->data_m, m_n->data_n, 39 + m_n->link_m, m_n->link_n, m_n->tu); 42 40 } 43 41 44 42 static void ··· 50 52 } 51 53 52 54 static void 53 - intel_dump_dp_vsc_sdp(struct drm_i915_private *i915, 54 - const struct drm_dp_vsc_sdp *vsc) 55 - { 56 - struct drm_printer p = drm_dbg_printer(&i915->drm, DRM_UT_KMS, NULL); 57 - 58 - drm_dp_vsc_sdp_log(&p, vsc); 59 - } 60 - 61 - static void 62 - intel_dump_buffer(struct drm_i915_private *i915, 63 - const char *prefix, const u8 *buf, size_t len) 55 + intel_dump_buffer(const char *prefix, const u8 *buf, size_t len) 64 56 { 65 57 if (!drm_debug_enabled(DRM_UT_KMS)) 66 58 return; ··· 118 130 return output_format_str[format]; 119 131 } 120 132 121 - static void intel_dump_plane_state(const struct intel_plane_state *plane_state) 133 + static void intel_dump_plane_state(struct drm_printer *p, 134 + const struct intel_plane_state *plane_state) 122 135 { 123 136 struct intel_plane *plane = to_intel_plane(plane_state->uapi.plane); 124 - struct drm_i915_private *i915 = to_i915(plane->base.dev); 125 137 const struct drm_framebuffer *fb = plane_state->hw.fb; 126 138 127 139 if (!fb) { 128 - drm_dbg_kms(&i915->drm, 129 - "[PLANE:%d:%s] fb: [NOFB], visible: %s\n", 130 - plane->base.base.id, plane->base.name, 131 - str_yes_no(plane_state->uapi.visible)); 140 + drm_printf(p, "[PLANE:%d:%s] fb: [NOFB], visible: %s\n", 141 + plane->base.base.id, plane->base.name, 142 + str_yes_no(plane_state->uapi.visible)); 132 143 return; 133 144 } 134 145 135 - drm_dbg_kms(&i915->drm, 136 - "[PLANE:%d:%s] fb: [FB:%d] %ux%u format = %p4cc modifier = 0x%llx, visible: %s\n", 137 - plane->base.base.id, 
plane->base.name, 138 - fb->base.id, fb->width, fb->height, &fb->format->format, 139 - fb->modifier, str_yes_no(plane_state->uapi.visible)); 140 - drm_dbg_kms(&i915->drm, "\trotation: 0x%x, scaler: %d, scaling_filter: %d\n", 141 - plane_state->hw.rotation, plane_state->scaler_id, plane_state->hw.scaling_filter); 146 + drm_printf(p, "[PLANE:%d:%s] fb: [FB:%d] %ux%u format = %p4cc modifier = 0x%llx, visible: %s\n", 147 + plane->base.base.id, plane->base.name, 148 + fb->base.id, fb->width, fb->height, &fb->format->format, 149 + fb->modifier, str_yes_no(plane_state->uapi.visible)); 150 + drm_printf(p, "\trotation: 0x%x, scaler: %d, scaling_filter: %d\n", 151 + plane_state->hw.rotation, plane_state->scaler_id, plane_state->hw.scaling_filter); 142 152 if (plane_state->uapi.visible) 143 - drm_dbg_kms(&i915->drm, 144 - "\tsrc: " DRM_RECT_FP_FMT " dst: " DRM_RECT_FMT "\n", 145 - DRM_RECT_FP_ARG(&plane_state->uapi.src), 146 - DRM_RECT_ARG(&plane_state->uapi.dst)); 153 + drm_printf(p, "\tsrc: " DRM_RECT_FP_FMT " dst: " DRM_RECT_FMT "\n", 154 + DRM_RECT_FP_ARG(&plane_state->uapi.src), 155 + DRM_RECT_ARG(&plane_state->uapi.dst)); 147 156 } 148 157 149 158 static void 150 - ilk_dump_csc(struct drm_i915_private *i915, const char *name, 159 + ilk_dump_csc(struct drm_i915_private *i915, 160 + struct drm_printer *p, 161 + const char *name, 151 162 const struct intel_csc_matrix *csc) 152 163 { 153 164 int i; 154 165 155 - drm_dbg_kms(&i915->drm, 156 - "%s: pre offsets: 0x%04x 0x%04x 0x%04x\n", name, 157 - csc->preoff[0], csc->preoff[1], csc->preoff[2]); 166 + drm_printf(p, "%s: pre offsets: 0x%04x 0x%04x 0x%04x\n", name, 167 + csc->preoff[0], csc->preoff[1], csc->preoff[2]); 158 168 159 169 for (i = 0; i < 3; i++) 160 - drm_dbg_kms(&i915->drm, 161 - "%s: coefficients: 0x%04x 0x%04x 0x%04x\n", name, 162 - csc->coeff[3 * i + 0], 163 - csc->coeff[3 * i + 1], 164 - csc->coeff[3 * i + 2]); 170 + drm_printf(p, "%s: coefficients: 0x%04x 0x%04x 0x%04x\n", name, 171 + csc->coeff[3 * i + 0], 172 + csc->coeff[3 * i + 1], 173 + csc->coeff[3 * i + 2]); 165 174 166 175 if (DISPLAY_VER(i915) < 7) 167 176 return; 168 177 169 - drm_dbg_kms(&i915->drm, 170 - "%s: post offsets: 0x%04x 0x%04x 0x%04x\n", name, 171 - csc->postoff[0], csc->postoff[1], csc->postoff[2]); 178 + drm_printf(p, "%s: post offsets: 0x%04x 0x%04x 0x%04x\n", name, 179 + csc->postoff[0], csc->postoff[1], csc->postoff[2]); 172 180 } 173 181 174 182 static void 175 - vlv_dump_csc(struct drm_i915_private *i915, const char *name, 183 + vlv_dump_csc(struct drm_printer *p, const char *name, 176 184 const struct intel_csc_matrix *csc) 177 185 { 178 186 int i; 179 187 180 188 for (i = 0; i < 3; i++) 181 - drm_dbg_kms(&i915->drm, 182 - "%s: coefficients: 0x%04x 0x%04x 0x%04x\n", name, 183 - csc->coeff[3 * i + 0], 184 - csc->coeff[3 * i + 1], 185 - csc->coeff[3 * i + 2]); 189 + drm_printf(p, "%s: coefficients: 0x%04x 0x%04x 0x%04x\n", name, 190 + csc->coeff[3 * i + 0], 191 + csc->coeff[3 * i + 1], 192 + csc->coeff[3 * i + 2]); 186 193 } 187 194 188 195 void intel_crtc_state_dump(const struct intel_crtc_state *pipe_config, ··· 188 205 struct drm_i915_private *i915 = to_i915(crtc->base.dev); 189 206 const struct intel_plane_state *plane_state; 190 207 struct intel_plane *plane; 208 + struct drm_printer p; 191 209 char buf[64]; 192 210 int i; 193 211 194 - drm_dbg_kms(&i915->drm, "[CRTC:%d:%s] enable: %s [%s]\n", 195 - crtc->base.base.id, crtc->base.name, 196 - str_yes_no(pipe_config->hw.enable), context); 212 + if (!drm_debug_enabled(DRM_UT_KMS)) 213 + return; 214 + 215 
+ p = drm_dbg_printer(&i915->drm, DRM_UT_KMS, NULL); 216 + 217 + drm_printf(&p, "[CRTC:%d:%s] enable: %s [%s]\n", 218 + crtc->base.base.id, crtc->base.name, 219 + str_yes_no(pipe_config->hw.enable), context); 197 220 198 221 if (!pipe_config->hw.enable) 199 222 goto dump_planes; 200 223 201 224 snprintf_output_types(buf, sizeof(buf), pipe_config->output_types); 202 - drm_dbg_kms(&i915->drm, 203 - "active: %s, output_types: %s (0x%x), output format: %s, sink format: %s\n", 204 - str_yes_no(pipe_config->hw.active), 205 - buf, pipe_config->output_types, 206 - intel_output_format_name(pipe_config->output_format), 207 - intel_output_format_name(pipe_config->sink_format)); 225 + drm_printf(&p, "active: %s, output_types: %s (0x%x), output format: %s, sink format: %s\n", 226 + str_yes_no(pipe_config->hw.active), 227 + buf, pipe_config->output_types, 228 + intel_output_format_name(pipe_config->output_format), 229 + intel_output_format_name(pipe_config->sink_format)); 208 230 209 - drm_dbg_kms(&i915->drm, 210 - "cpu_transcoder: %s, pipe bpp: %i, dithering: %i\n", 211 - transcoder_name(pipe_config->cpu_transcoder), 212 - pipe_config->pipe_bpp, pipe_config->dither); 231 + drm_printf(&p, "cpu_transcoder: %s, pipe bpp: %i, dithering: %i\n", 232 + transcoder_name(pipe_config->cpu_transcoder), 233 + pipe_config->pipe_bpp, pipe_config->dither); 213 234 214 - drm_dbg_kms(&i915->drm, "MST master transcoder: %s\n", 215 - transcoder_name(pipe_config->mst_master_transcoder)); 235 + drm_printf(&p, "MST master transcoder: %s\n", 236 + transcoder_name(pipe_config->mst_master_transcoder)); 216 237 217 - drm_dbg_kms(&i915->drm, 218 - "port sync: master transcoder: %s, slave transcoder bitmask = 0x%x\n", 219 - transcoder_name(pipe_config->master_transcoder), 220 - pipe_config->sync_mode_slaves_mask); 238 + drm_printf(&p, "port sync: master transcoder: %s, slave transcoder bitmask = 0x%x\n", 239 + transcoder_name(pipe_config->master_transcoder), 240 + pipe_config->sync_mode_slaves_mask); 221 241 222 - drm_dbg_kms(&i915->drm, "bigjoiner: %s, pipes: 0x%x\n", 223 - intel_crtc_is_bigjoiner_slave(pipe_config) ? "slave" : 224 - intel_crtc_is_bigjoiner_master(pipe_config) ? "master" : "no", 225 - pipe_config->bigjoiner_pipes); 242 + drm_printf(&p, "bigjoiner: %s, pipes: 0x%x\n", 243 + intel_crtc_is_bigjoiner_slave(pipe_config) ? "slave" : 244 + intel_crtc_is_bigjoiner_master(pipe_config) ? 
"master" : "no", 245 + pipe_config->bigjoiner_pipes); 226 246 227 - drm_dbg_kms(&i915->drm, "splitter: %s, link count %d, overlap %d\n", 228 - str_enabled_disabled(pipe_config->splitter.enable), 229 - pipe_config->splitter.link_count, 230 - pipe_config->splitter.pixel_overlap); 247 + drm_printf(&p, "splitter: %s, link count %d, overlap %d\n", 248 + str_enabled_disabled(pipe_config->splitter.enable), 249 + pipe_config->splitter.link_count, 250 + pipe_config->splitter.pixel_overlap); 231 251 232 252 if (pipe_config->has_pch_encoder) 233 - intel_dump_m_n_config(pipe_config, "fdi", 253 + intel_dump_m_n_config(&p, pipe_config, "fdi", 234 254 pipe_config->fdi_lanes, 235 255 &pipe_config->fdi_m_n); 236 256 237 257 if (intel_crtc_has_dp_encoder(pipe_config)) { 238 - intel_dump_m_n_config(pipe_config, "dp m_n", 258 + intel_dump_m_n_config(&p, pipe_config, "dp m_n", 239 259 pipe_config->lane_count, 240 260 &pipe_config->dp_m_n); 241 - intel_dump_m_n_config(pipe_config, "dp m2_n2", 261 + intel_dump_m_n_config(&p, pipe_config, "dp m2_n2", 242 262 pipe_config->lane_count, 243 263 &pipe_config->dp_m2_n2); 244 - drm_dbg_kms(&i915->drm, "fec: %s, enhanced framing: %s\n", 245 - str_enabled_disabled(pipe_config->fec_enable), 246 - str_enabled_disabled(pipe_config->enhanced_framing)); 264 + drm_printf(&p, "fec: %s, enhanced framing: %s\n", 265 + str_enabled_disabled(pipe_config->fec_enable), 266 + str_enabled_disabled(pipe_config->enhanced_framing)); 247 267 248 - drm_dbg_kms(&i915->drm, "sdp split: %s\n", 249 - str_enabled_disabled(pipe_config->sdp_split_enable)); 268 + drm_printf(&p, "sdp split: %s\n", 269 + str_enabled_disabled(pipe_config->sdp_split_enable)); 250 270 251 - drm_dbg_kms(&i915->drm, "psr: %s, psr2: %s, panel replay: %s, selective fetch: %s\n", 252 - str_enabled_disabled(pipe_config->has_psr), 253 - str_enabled_disabled(pipe_config->has_psr2), 254 - str_enabled_disabled(pipe_config->has_panel_replay), 255 - str_enabled_disabled(pipe_config->enable_psr2_sel_fetch)); 271 + drm_printf(&p, "psr: %s, psr2: %s, panel replay: %s, selective fetch: %s\n", 272 + str_enabled_disabled(pipe_config->has_psr), 273 + str_enabled_disabled(pipe_config->has_psr2), 274 + str_enabled_disabled(pipe_config->has_panel_replay), 275 + str_enabled_disabled(pipe_config->enable_psr2_sel_fetch)); 256 276 } 257 277 258 - drm_dbg_kms(&i915->drm, "framestart delay: %d, MSA timing delay: %d\n", 259 - pipe_config->framestart_delay, pipe_config->msa_timing_delay); 278 + drm_printf(&p, "framestart delay: %d, MSA timing delay: %d\n", 279 + pipe_config->framestart_delay, pipe_config->msa_timing_delay); 260 280 261 - drm_dbg_kms(&i915->drm, 262 - "audio: %i, infoframes: %i, infoframes enabled: 0x%x\n", 263 - pipe_config->has_audio, pipe_config->has_infoframe, 264 - pipe_config->infoframes.enable); 281 + drm_printf(&p, "audio: %i, infoframes: %i, infoframes enabled: 0x%x\n", 282 + pipe_config->has_audio, pipe_config->has_infoframe, 283 + pipe_config->infoframes.enable); 265 284 266 285 if (pipe_config->infoframes.enable & 267 286 intel_hdmi_infoframe_enable(HDMI_PACKET_TYPE_GENERAL_CONTROL)) 268 - drm_dbg_kms(&i915->drm, "GCP: 0x%x\n", 269 - pipe_config->infoframes.gcp); 287 + drm_printf(&p, "GCP: 0x%x\n", pipe_config->infoframes.gcp); 270 288 if (pipe_config->infoframes.enable & 271 289 intel_hdmi_infoframe_enable(HDMI_INFOFRAME_TYPE_AVI)) 272 290 intel_dump_infoframe(i915, &pipe_config->infoframes.avi); ··· 285 301 intel_dump_infoframe(i915, &pipe_config->infoframes.drm); 286 302 if (pipe_config->infoframes.enable & 287 303 
intel_hdmi_infoframe_enable(DP_SDP_VSC)) 288 - intel_dump_dp_vsc_sdp(i915, &pipe_config->infoframes.vsc); 304 + drm_dp_vsc_sdp_log(&p, &pipe_config->infoframes.vsc); 305 + if (pipe_config->infoframes.enable & 306 + intel_hdmi_infoframe_enable(DP_SDP_ADAPTIVE_SYNC)) 307 + drm_dp_as_sdp_log(&p, &pipe_config->infoframes.as_sdp); 289 308 290 309 if (pipe_config->has_audio) 291 - intel_dump_buffer(i915, "ELD: ", pipe_config->eld, 310 + intel_dump_buffer("ELD: ", pipe_config->eld, 292 311 drm_eld_size(pipe_config->eld)); 293 312 294 - drm_dbg_kms(&i915->drm, "vrr: %s, vmin: %d, vmax: %d, pipeline full: %d, guardband: %d flipline: %d, vmin vblank: %d, vmax vblank: %d\n", 295 - str_yes_no(pipe_config->vrr.enable), 296 - pipe_config->vrr.vmin, pipe_config->vrr.vmax, 297 - pipe_config->vrr.pipeline_full, pipe_config->vrr.guardband, 298 - pipe_config->vrr.flipline, 299 - intel_vrr_vmin_vblank_start(pipe_config), 300 - intel_vrr_vmax_vblank_start(pipe_config)); 313 + drm_printf(&p, "vrr: %s, vmin: %d, vmax: %d, pipeline full: %d, guardband: %d flipline: %d, vmin vblank: %d, vmax vblank: %d\n", 314 + str_yes_no(pipe_config->vrr.enable), 315 + pipe_config->vrr.vmin, pipe_config->vrr.vmax, 316 + pipe_config->vrr.pipeline_full, pipe_config->vrr.guardband, 317 + pipe_config->vrr.flipline, 318 + intel_vrr_vmin_vblank_start(pipe_config), 319 + intel_vrr_vmax_vblank_start(pipe_config)); 301 320 302 - drm_dbg_kms(&i915->drm, "requested mode: " DRM_MODE_FMT "\n", 303 - DRM_MODE_ARG(&pipe_config->hw.mode)); 304 - drm_dbg_kms(&i915->drm, "adjusted mode: " DRM_MODE_FMT "\n", 305 - DRM_MODE_ARG(&pipe_config->hw.adjusted_mode)); 306 - intel_dump_crtc_timings(i915, &pipe_config->hw.adjusted_mode); 307 - drm_dbg_kms(&i915->drm, "pipe mode: " DRM_MODE_FMT "\n", 308 - DRM_MODE_ARG(&pipe_config->hw.pipe_mode)); 309 - intel_dump_crtc_timings(i915, &pipe_config->hw.pipe_mode); 310 - drm_dbg_kms(&i915->drm, 311 - "port clock: %d, pipe src: " DRM_RECT_FMT ", pixel rate %d\n", 312 - pipe_config->port_clock, DRM_RECT_ARG(&pipe_config->pipe_src), 313 - pipe_config->pixel_rate); 321 + drm_printf(&p, "requested mode: " DRM_MODE_FMT "\n", 322 + DRM_MODE_ARG(&pipe_config->hw.mode)); 323 + drm_printf(&p, "adjusted mode: " DRM_MODE_FMT "\n", 324 + DRM_MODE_ARG(&pipe_config->hw.adjusted_mode)); 325 + intel_dump_crtc_timings(&p, &pipe_config->hw.adjusted_mode); 326 + drm_printf(&p, "pipe mode: " DRM_MODE_FMT "\n", 327 + DRM_MODE_ARG(&pipe_config->hw.pipe_mode)); 328 + intel_dump_crtc_timings(&p, &pipe_config->hw.pipe_mode); 329 + drm_printf(&p, "port clock: %d, pipe src: " DRM_RECT_FMT ", pixel rate %d\n", 330 + pipe_config->port_clock, DRM_RECT_ARG(&pipe_config->pipe_src), 331 + pipe_config->pixel_rate); 314 332 315 - drm_dbg_kms(&i915->drm, "linetime: %d, ips linetime: %d\n", 316 - pipe_config->linetime, pipe_config->ips_linetime); 333 + drm_printf(&p, "linetime: %d, ips linetime: %d\n", 334 + pipe_config->linetime, pipe_config->ips_linetime); 317 335 318 336 if (DISPLAY_VER(i915) >= 9) 319 - drm_dbg_kms(&i915->drm, 320 - "num_scalers: %d, scaler_users: 0x%x, scaler_id: %d, scaling_filter: %d\n", 321 - crtc->num_scalers, 322 - pipe_config->scaler_state.scaler_users, 323 - pipe_config->scaler_state.scaler_id, 324 - pipe_config->hw.scaling_filter); 337 + drm_printf(&p, "num_scalers: %d, scaler_users: 0x%x, scaler_id: %d, scaling_filter: %d\n", 338 + crtc->num_scalers, 339 + pipe_config->scaler_state.scaler_users, 340 + pipe_config->scaler_state.scaler_id, 341 + pipe_config->hw.scaling_filter); 325 342 326 343 if (HAS_GMCH(i915)) 327 - 
drm_dbg_kms(&i915->drm, 328 - "gmch pfit: control: 0x%08x, ratios: 0x%08x, lvds border: 0x%08x\n", 329 - pipe_config->gmch_pfit.control, 330 - pipe_config->gmch_pfit.pgm_ratios, 331 - pipe_config->gmch_pfit.lvds_border_bits); 344 + drm_printf(&p, "gmch pfit: control: 0x%08x, ratios: 0x%08x, lvds border: 0x%08x\n", 345 + pipe_config->gmch_pfit.control, 346 + pipe_config->gmch_pfit.pgm_ratios, 347 + pipe_config->gmch_pfit.lvds_border_bits); 332 348 else 333 - drm_dbg_kms(&i915->drm, 334 - "pch pfit: " DRM_RECT_FMT ", %s, force thru: %s\n", 335 - DRM_RECT_ARG(&pipe_config->pch_pfit.dst), 336 - str_enabled_disabled(pipe_config->pch_pfit.enabled), 337 - str_yes_no(pipe_config->pch_pfit.force_thru)); 349 + drm_printf(&p, "pch pfit: " DRM_RECT_FMT ", %s, force thru: %s\n", 350 + DRM_RECT_ARG(&pipe_config->pch_pfit.dst), 351 + str_enabled_disabled(pipe_config->pch_pfit.enabled), 352 + str_yes_no(pipe_config->pch_pfit.force_thru)); 338 353 339 - drm_dbg_kms(&i915->drm, "ips: %i, double wide: %i, drrs: %i\n", 340 - pipe_config->ips_enabled, pipe_config->double_wide, 341 - pipe_config->has_drrs); 354 + drm_printf(&p, "ips: %i, double wide: %i, drrs: %i\n", 355 + pipe_config->ips_enabled, pipe_config->double_wide, 356 + pipe_config->has_drrs); 342 357 343 - intel_dpll_dump_hw_state(i915, &pipe_config->dpll_hw_state); 358 + intel_dpll_dump_hw_state(i915, &p, &pipe_config->dpll_hw_state); 344 359 345 360 if (IS_CHERRYVIEW(i915)) 346 - drm_dbg_kms(&i915->drm, 347 - "cgm_mode: 0x%x gamma_mode: 0x%x gamma_enable: %d csc_enable: %d\n", 348 - pipe_config->cgm_mode, pipe_config->gamma_mode, 349 - pipe_config->gamma_enable, pipe_config->csc_enable); 361 + drm_printf(&p, "cgm_mode: 0x%x gamma_mode: 0x%x gamma_enable: %d csc_enable: %d\n", 362 + pipe_config->cgm_mode, pipe_config->gamma_mode, 363 + pipe_config->gamma_enable, pipe_config->csc_enable); 350 364 else 351 - drm_dbg_kms(&i915->drm, 352 - "csc_mode: 0x%x gamma_mode: 0x%x gamma_enable: %d csc_enable: %d\n", 353 - pipe_config->csc_mode, pipe_config->gamma_mode, 354 - pipe_config->gamma_enable, pipe_config->csc_enable); 365 + drm_printf(&p, "csc_mode: 0x%x gamma_mode: 0x%x gamma_enable: %d csc_enable: %d\n", 366 + pipe_config->csc_mode, pipe_config->gamma_mode, 367 + pipe_config->gamma_enable, pipe_config->csc_enable); 355 368 356 - drm_dbg_kms(&i915->drm, "pre csc lut: %s%d entries, post csc lut: %d entries\n", 357 - pipe_config->pre_csc_lut && pipe_config->pre_csc_lut == 358 - i915->display.color.glk_linear_degamma_lut ? "(linear) " : "", 359 - pipe_config->pre_csc_lut ? 360 - drm_color_lut_size(pipe_config->pre_csc_lut) : 0, 361 - pipe_config->post_csc_lut ? 362 - drm_color_lut_size(pipe_config->post_csc_lut) : 0); 369 + drm_printf(&p, "pre csc lut: %s%d entries, post csc lut: %d entries\n", 370 + pipe_config->pre_csc_lut && pipe_config->pre_csc_lut == 371 + i915->display.color.glk_linear_degamma_lut ? "(linear) " : "", 372 + pipe_config->pre_csc_lut ? 373 + drm_color_lut_size(pipe_config->pre_csc_lut) : 0, 374 + pipe_config->post_csc_lut ? 
375 + drm_color_lut_size(pipe_config->post_csc_lut) : 0); 363 376 364 377 if (DISPLAY_VER(i915) >= 11) 365 - ilk_dump_csc(i915, "output csc", &pipe_config->output_csc); 378 + ilk_dump_csc(i915, &p, "output csc", &pipe_config->output_csc); 366 379 367 380 if (!HAS_GMCH(i915)) 368 - ilk_dump_csc(i915, "pipe csc", &pipe_config->csc); 381 + ilk_dump_csc(i915, &p, "pipe csc", &pipe_config->csc); 369 382 else if (IS_CHERRYVIEW(i915)) 370 - vlv_dump_csc(i915, "cgm csc", &pipe_config->csc); 383 + vlv_dump_csc(&p, "cgm csc", &pipe_config->csc); 371 384 else if (IS_VALLEYVIEW(i915)) 372 - vlv_dump_csc(i915, "wgc csc", &pipe_config->csc); 385 + vlv_dump_csc(&p, "wgc csc", &pipe_config->csc); 373 386 374 387 dump_planes: 375 388 if (!state) ··· 374 393 375 394 for_each_new_intel_plane_in_state(state, plane, plane_state, i) { 376 395 if (plane->pipe == crtc->pipe) 377 - intel_dump_plane_state(plane_state); 396 + intel_dump_plane_state(&p, plane_state); 378 397 } 379 398 }
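The intel_crtc_state_dump.c changes above are a mechanical conversion from per-line drm_dbg_kms() calls to a single struct drm_printer obtained via drm_dbg_printer(), with an early bail-out when KMS debug output is disabled. A minimal sketch of that pattern follows; the dump function and its arguments are made up for illustration, while drm_debug_enabled(), drm_dbg_printer() and drm_printf() are the helpers the hunks above switch to:

static void my_dump_state(struct drm_i915_private *i915, int foo, int bar)
{
	struct drm_printer p;

	/* Skip all the string formatting when nobody is listening. */
	if (!drm_debug_enabled(DRM_UT_KMS))
		return;

	p = drm_dbg_printer(&i915->drm, DRM_UT_KMS, NULL);

	/*
	 * Each former drm_dbg_kms(&i915->drm, ...) becomes drm_printf(&p, ...),
	 * so the same dump code no longer cares where its output goes.
	 */
	drm_printf(&p, "foo: %d, bar: %d\n", foo, bar);
}

This is also why helpers such as ilk_dump_csc() gain a struct drm_printer * parameter and vlv_dump_csc() can drop the i915 argument entirely: they print through whatever printer the caller hands them.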
+23 -1
drivers/gpu/drm/i915/display/intel_cursor.c
···
 	intel_de_write_fw(dev_priv, PLANE_SEL_FETCH_CTL(pipe, plane->id), 0);
 }
 
+static void wa_16021440873(struct intel_plane *plane,
+			   const struct intel_crtc_state *crtc_state,
+			   const struct intel_plane_state *plane_state)
+{
+	struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
+	u32 ctl = plane_state->ctl;
+	int et_y_position = drm_rect_height(&crtc_state->pipe_src) + 1;
+	enum pipe pipe = plane->pipe;
+
+	ctl &= ~MCURSOR_MODE_MASK;
+	ctl |= MCURSOR_MODE_64_2B;
+
+	intel_de_write_fw(dev_priv, PLANE_SEL_FETCH_CTL(pipe, plane->id), ctl);
+
+	intel_de_write(dev_priv, PIPE_SRCSZ_ERLY_TPT(pipe),
+		       PIPESRC_HEIGHT(et_y_position));
+}
+
 static void i9xx_cursor_update_sel_fetch_arm(struct intel_plane *plane,
 					     const struct intel_crtc_state *crtc_state,
 					     const struct intel_plane_state *plane_state)
···
 		intel_de_write_fw(dev_priv, PLANE_SEL_FETCH_CTL(pipe, plane->id),
 				  plane_state->ctl);
 	} else {
-		i9xx_cursor_disable_sel_fetch_arm(plane, crtc_state);
+		/* Wa_16021440873 */
+		if (crtc_state->enable_psr2_su_region_et)
+			wa_16021440873(plane, crtc_state, plane_state);
+		else
+			i9xx_cursor_disable_sel_fetch_arm(plane, crtc_state);
 	}
 }
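The intel_cursor.c hunk wires the workaround into the cursor selective-fetch arm path: when the cursor's selective-fetch plane would otherwise be disabled while PSR2 early transport is in use (crtc_state->enable_psr2_su_region_et), Wa_16021440873 keeps the control register armed with the MCURSOR_MODE_64_2B cursor mode and programs PIPE_SRCSZ_ERLY_TPT with the pipe-source height plus one. A condensed restatement of that decision flow, with explanatory comments added (no new behaviour beyond the hunk above):

	} else {
		/*
		 * Wa_16021440873: with PSR2 early transport enabled the
		 * selective fetch control must stay armed, so substitute a
		 * dummy MCURSOR_MODE_64_2B cursor and program the
		 * early-transport pipe source height instead of disabling
		 * selective fetch for the cursor plane.
		 */
		if (crtc_state->enable_psr2_su_region_et)
			wa_16021440873(plane, crtc_state, plane_state);
		else
			i9xx_cursor_disable_sel_fetch_arm(plane, crtc_state);
	}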
+185 -182
drivers/gpu/drm/i915/display/intel_cx0_phy.c
··· 29 29 #define INTEL_CX0_LANE1 BIT(1) 30 30 #define INTEL_CX0_BOTH_LANES (INTEL_CX0_LANE1 | INTEL_CX0_LANE0) 31 31 32 - bool intel_is_c10phy(struct drm_i915_private *i915, enum phy phy) 32 + bool intel_encoder_is_c10phy(struct intel_encoder *encoder) 33 33 { 34 + struct drm_i915_private *i915 = to_i915(encoder->base.dev); 35 + enum phy phy = intel_encoder_to_phy(encoder); 36 + 34 37 if ((IS_LUNARLAKE(i915) || IS_METEORLAKE(i915)) && phy < PHY_C) 35 38 return true; 36 39 ··· 49 46 return ilog2(lane_mask); 50 47 } 51 48 52 - static u8 intel_cx0_get_owned_lane_mask(struct drm_i915_private *i915, 53 - struct intel_encoder *encoder) 49 + static u8 intel_cx0_get_owned_lane_mask(struct intel_encoder *encoder) 54 50 { 55 51 struct intel_digital_port *dig_port = enc_to_dig_port(encoder); 56 52 ··· 116 114 intel_display_power_put(i915, POWER_DOMAIN_DC_OFF, wakeref); 117 115 } 118 116 119 - static void intel_clear_response_ready_flag(struct drm_i915_private *i915, 120 - enum port port, int lane) 117 + static void intel_clear_response_ready_flag(struct intel_encoder *encoder, 118 + int lane) 121 119 { 122 - intel_de_rmw(i915, XELPDP_PORT_P2M_MSGBUS_STATUS(i915, port, lane), 120 + struct drm_i915_private *i915 = to_i915(encoder->base.dev); 121 + 122 + intel_de_rmw(i915, XELPDP_PORT_P2M_MSGBUS_STATUS(i915, encoder->port, lane), 123 123 0, XELPDP_PORT_P2M_RESPONSE_READY | XELPDP_PORT_P2M_ERROR_SET); 124 124 } 125 125 126 - static void intel_cx0_bus_reset(struct drm_i915_private *i915, enum port port, int lane) 126 + static void intel_cx0_bus_reset(struct intel_encoder *encoder, int lane) 127 127 { 128 - enum phy phy = intel_port_to_phy(i915, port); 128 + struct drm_i915_private *i915 = to_i915(encoder->base.dev); 129 + enum port port = encoder->port; 130 + enum phy phy = intel_encoder_to_phy(encoder); 129 131 130 132 intel_de_write(i915, XELPDP_PORT_M2P_MSGBUS_CTL(i915, port, lane), 131 133 XELPDP_PORT_M2P_TRANSACTION_RESET); ··· 141 135 return; 142 136 } 143 137 144 - intel_clear_response_ready_flag(i915, port, lane); 138 + intel_clear_response_ready_flag(encoder, lane); 145 139 } 146 140 147 - static int intel_cx0_wait_for_ack(struct drm_i915_private *i915, enum port port, 141 + static int intel_cx0_wait_for_ack(struct intel_encoder *encoder, 148 142 int command, int lane, u32 *val) 149 143 { 150 - enum phy phy = intel_port_to_phy(i915, port); 144 + struct drm_i915_private *i915 = to_i915(encoder->base.dev); 145 + enum port port = encoder->port; 146 + enum phy phy = intel_encoder_to_phy(encoder); 151 147 152 - if (__intel_de_wait_for_register(i915, 153 - XELPDP_PORT_P2M_MSGBUS_STATUS(i915, port, lane), 154 - XELPDP_PORT_P2M_RESPONSE_READY, 155 - XELPDP_PORT_P2M_RESPONSE_READY, 156 - XELPDP_MSGBUS_TIMEOUT_FAST_US, 157 - XELPDP_MSGBUS_TIMEOUT_SLOW, val)) { 148 + if (intel_de_wait_custom(i915, 149 + XELPDP_PORT_P2M_MSGBUS_STATUS(i915, port, lane), 150 + XELPDP_PORT_P2M_RESPONSE_READY, 151 + XELPDP_PORT_P2M_RESPONSE_READY, 152 + XELPDP_MSGBUS_TIMEOUT_FAST_US, 153 + XELPDP_MSGBUS_TIMEOUT_SLOW, val)) { 158 154 drm_dbg_kms(&i915->drm, "PHY %c Timeout waiting for message ACK. Status: 0x%x\n", 159 155 phy_name(phy), *val); 160 156 ··· 166 158 "PHY %c Hardware did not detect a timeout\n", 167 159 phy_name(phy)); 168 160 169 - intel_cx0_bus_reset(i915, port, lane); 161 + intel_cx0_bus_reset(encoder, lane); 170 162 return -ETIMEDOUT; 171 163 } 172 164 173 165 if (*val & XELPDP_PORT_P2M_ERROR_SET) { 174 166 drm_dbg_kms(&i915->drm, "PHY %c Error occurred during %s command. 
Status: 0x%x\n", phy_name(phy), 175 167 command == XELPDP_PORT_P2M_COMMAND_READ_ACK ? "read" : "write", *val); 176 - intel_cx0_bus_reset(i915, port, lane); 168 + intel_cx0_bus_reset(encoder, lane); 177 169 return -EINVAL; 178 170 } 179 171 180 172 if (REG_FIELD_GET(XELPDP_PORT_P2M_COMMAND_TYPE_MASK, *val) != command) { 181 173 drm_dbg_kms(&i915->drm, "PHY %c Not a %s response. MSGBUS Status: 0x%x.\n", phy_name(phy), 182 174 command == XELPDP_PORT_P2M_COMMAND_READ_ACK ? "read" : "write", *val); 183 - intel_cx0_bus_reset(i915, port, lane); 175 + intel_cx0_bus_reset(encoder, lane); 184 176 return -EINVAL; 185 177 } 186 178 187 179 return 0; 188 180 } 189 181 190 - static int __intel_cx0_read_once(struct drm_i915_private *i915, enum port port, 182 + static int __intel_cx0_read_once(struct intel_encoder *encoder, 191 183 int lane, u16 addr) 192 184 { 193 - enum phy phy = intel_port_to_phy(i915, port); 185 + struct drm_i915_private *i915 = to_i915(encoder->base.dev); 186 + enum port port = encoder->port; 187 + enum phy phy = intel_encoder_to_phy(encoder); 194 188 int ack; 195 189 u32 val; 196 190 ··· 201 191 XELPDP_MSGBUS_TIMEOUT_SLOW)) { 202 192 drm_dbg_kms(&i915->drm, 203 193 "PHY %c Timeout waiting for previous transaction to complete. Reset the bus and retry.\n", phy_name(phy)); 204 - intel_cx0_bus_reset(i915, port, lane); 194 + intel_cx0_bus_reset(encoder, lane); 205 195 return -ETIMEDOUT; 206 196 } 207 197 ··· 210 200 XELPDP_PORT_M2P_COMMAND_READ | 211 201 XELPDP_PORT_M2P_ADDRESS(addr)); 212 202 213 - ack = intel_cx0_wait_for_ack(i915, port, XELPDP_PORT_P2M_COMMAND_READ_ACK, lane, &val); 203 + ack = intel_cx0_wait_for_ack(encoder, XELPDP_PORT_P2M_COMMAND_READ_ACK, lane, &val); 214 204 if (ack < 0) 215 205 return ack; 216 206 217 - intel_clear_response_ready_flag(i915, port, lane); 207 + intel_clear_response_ready_flag(encoder, lane); 218 208 219 209 /* 220 210 * FIXME: Workaround to let HW to settle 221 211 * down and let the message bus to end up 222 212 * in a known state 223 213 */ 224 - intel_cx0_bus_reset(i915, port, lane); 214 + intel_cx0_bus_reset(encoder, lane); 225 215 226 216 return REG_FIELD_GET(XELPDP_PORT_P2M_DATA_MASK, val); 227 217 } 228 218 229 - static u8 __intel_cx0_read(struct drm_i915_private *i915, enum port port, 219 + static u8 __intel_cx0_read(struct intel_encoder *encoder, 230 220 int lane, u16 addr) 231 221 { 232 - enum phy phy = intel_port_to_phy(i915, port); 222 + struct drm_i915_private *i915 = to_i915(encoder->base.dev); 223 + enum phy phy = intel_encoder_to_phy(encoder); 233 224 int i, status; 234 225 235 226 assert_dc_off(i915); 236 227 237 228 /* 3 tries is assumed to be enough to read successfully */ 238 229 for (i = 0; i < 3; i++) { 239 - status = __intel_cx0_read_once(i915, port, lane, addr); 230 + status = __intel_cx0_read_once(encoder, lane, addr); 240 231 241 232 if (status >= 0) 242 233 return status; ··· 249 238 return 0; 250 239 } 251 240 252 - static u8 intel_cx0_read(struct drm_i915_private *i915, enum port port, 241 + static u8 intel_cx0_read(struct intel_encoder *encoder, 253 242 u8 lane_mask, u16 addr) 254 243 { 255 244 int lane = lane_mask_to_lane(lane_mask); 256 245 257 - return __intel_cx0_read(i915, port, lane, addr); 246 + return __intel_cx0_read(encoder, lane, addr); 258 247 } 259 248 260 - static int __intel_cx0_write_once(struct drm_i915_private *i915, enum port port, 249 + static int __intel_cx0_write_once(struct intel_encoder *encoder, 261 250 int lane, u16 addr, u8 data, bool committed) 262 251 { 263 - enum phy phy = 
intel_port_to_phy(i915, port); 252 + struct drm_i915_private *i915 = to_i915(encoder->base.dev); 253 + enum port port = encoder->port; 254 + enum phy phy = intel_encoder_to_phy(encoder); 264 255 int ack; 265 256 u32 val; 266 257 ··· 271 258 XELPDP_MSGBUS_TIMEOUT_SLOW)) { 272 259 drm_dbg_kms(&i915->drm, 273 260 "PHY %c Timeout waiting for previous transaction to complete. Resetting the bus.\n", phy_name(phy)); 274 - intel_cx0_bus_reset(i915, port, lane); 261 + intel_cx0_bus_reset(encoder, lane); 275 262 return -ETIMEDOUT; 276 263 } 277 264 ··· 287 274 XELPDP_MSGBUS_TIMEOUT_SLOW)) { 288 275 drm_dbg_kms(&i915->drm, 289 276 "PHY %c Timeout waiting for write to complete. Resetting the bus.\n", phy_name(phy)); 290 - intel_cx0_bus_reset(i915, port, lane); 277 + intel_cx0_bus_reset(encoder, lane); 291 278 return -ETIMEDOUT; 292 279 } 293 280 294 281 if (committed) { 295 - ack = intel_cx0_wait_for_ack(i915, port, XELPDP_PORT_P2M_COMMAND_WRITE_ACK, lane, &val); 282 + ack = intel_cx0_wait_for_ack(encoder, XELPDP_PORT_P2M_COMMAND_WRITE_ACK, lane, &val); 296 283 if (ack < 0) 297 284 return ack; 298 285 } else if ((intel_de_read(i915, XELPDP_PORT_P2M_MSGBUS_STATUS(i915, port, lane)) & 299 286 XELPDP_PORT_P2M_ERROR_SET)) { 300 287 drm_dbg_kms(&i915->drm, 301 288 "PHY %c Error occurred during write command.\n", phy_name(phy)); 302 - intel_cx0_bus_reset(i915, port, lane); 289 + intel_cx0_bus_reset(encoder, lane); 303 290 return -EINVAL; 304 291 } 305 292 306 - intel_clear_response_ready_flag(i915, port, lane); 293 + intel_clear_response_ready_flag(encoder, lane); 307 294 308 295 /* 309 296 * FIXME: Workaround to let HW to settle 310 297 * down and let the message bus to end up 311 298 * in a known state 312 299 */ 313 - intel_cx0_bus_reset(i915, port, lane); 300 + intel_cx0_bus_reset(encoder, lane); 314 301 315 302 return 0; 316 303 } 317 304 318 - static void __intel_cx0_write(struct drm_i915_private *i915, enum port port, 305 + static void __intel_cx0_write(struct intel_encoder *encoder, 319 306 int lane, u16 addr, u8 data, bool committed) 320 307 { 321 - enum phy phy = intel_port_to_phy(i915, port); 308 + struct drm_i915_private *i915 = to_i915(encoder->base.dev); 309 + enum phy phy = intel_encoder_to_phy(encoder); 322 310 int i, status; 323 311 324 312 assert_dc_off(i915); 325 313 326 314 /* 3 tries is assumed to be enough to write successfully */ 327 315 for (i = 0; i < 3; i++) { 328 - status = __intel_cx0_write_once(i915, port, lane, addr, data, committed); 316 + status = __intel_cx0_write_once(encoder, lane, addr, data, committed); 329 317 330 318 if (status == 0) 331 319 return; ··· 336 322 "PHY %c Write %04x failed after %d retries.\n", phy_name(phy), addr, i); 337 323 } 338 324 339 - static void intel_cx0_write(struct drm_i915_private *i915, enum port port, 325 + static void intel_cx0_write(struct intel_encoder *encoder, 340 326 u8 lane_mask, u16 addr, u8 data, bool committed) 341 327 { 342 328 int lane; 343 329 344 330 for_each_cx0_lane_in_mask(lane_mask, lane) 345 - __intel_cx0_write(i915, port, lane, addr, data, committed); 331 + __intel_cx0_write(encoder, lane, addr, data, committed); 346 332 } 347 333 348 - static void intel_c20_sram_write(struct drm_i915_private *i915, enum port port, 334 + static void intel_c20_sram_write(struct intel_encoder *encoder, 349 335 int lane, u16 addr, u16 data) 350 336 { 337 + struct drm_i915_private *i915 = to_i915(encoder->base.dev); 338 + 351 339 assert_dc_off(i915); 352 340 353 - intel_cx0_write(i915, port, lane, PHY_C20_WR_ADDRESS_H, addr >> 8, 0); 354 - 
intel_cx0_write(i915, port, lane, PHY_C20_WR_ADDRESS_L, addr & 0xff, 0); 341 + intel_cx0_write(encoder, lane, PHY_C20_WR_ADDRESS_H, addr >> 8, 0); 342 + intel_cx0_write(encoder, lane, PHY_C20_WR_ADDRESS_L, addr & 0xff, 0); 355 343 356 - intel_cx0_write(i915, port, lane, PHY_C20_WR_DATA_H, data >> 8, 0); 357 - intel_cx0_write(i915, port, lane, PHY_C20_WR_DATA_L, data & 0xff, 1); 344 + intel_cx0_write(encoder, lane, PHY_C20_WR_DATA_H, data >> 8, 0); 345 + intel_cx0_write(encoder, lane, PHY_C20_WR_DATA_L, data & 0xff, 1); 358 346 } 359 347 360 - static u16 intel_c20_sram_read(struct drm_i915_private *i915, enum port port, 348 + static u16 intel_c20_sram_read(struct intel_encoder *encoder, 361 349 int lane, u16 addr) 362 350 { 351 + struct drm_i915_private *i915 = to_i915(encoder->base.dev); 363 352 u16 val; 364 353 365 354 assert_dc_off(i915); 366 355 367 - intel_cx0_write(i915, port, lane, PHY_C20_RD_ADDRESS_H, addr >> 8, 0); 368 - intel_cx0_write(i915, port, lane, PHY_C20_RD_ADDRESS_L, addr & 0xff, 1); 356 + intel_cx0_write(encoder, lane, PHY_C20_RD_ADDRESS_H, addr >> 8, 0); 357 + intel_cx0_write(encoder, lane, PHY_C20_RD_ADDRESS_L, addr & 0xff, 1); 369 358 370 - val = intel_cx0_read(i915, port, lane, PHY_C20_RD_DATA_H); 359 + val = intel_cx0_read(encoder, lane, PHY_C20_RD_DATA_H); 371 360 val <<= 8; 372 - val |= intel_cx0_read(i915, port, lane, PHY_C20_RD_DATA_L); 361 + val |= intel_cx0_read(encoder, lane, PHY_C20_RD_DATA_L); 373 362 374 363 return val; 375 364 } 376 365 377 - static void __intel_cx0_rmw(struct drm_i915_private *i915, enum port port, 366 + static void __intel_cx0_rmw(struct intel_encoder *encoder, 378 367 int lane, u16 addr, u8 clear, u8 set, bool committed) 379 368 { 380 369 u8 old, val; 381 370 382 - old = __intel_cx0_read(i915, port, lane, addr); 371 + old = __intel_cx0_read(encoder, lane, addr); 383 372 val = (old & ~clear) | set; 384 373 385 374 if (val != old) 386 - __intel_cx0_write(i915, port, lane, addr, val, committed); 375 + __intel_cx0_write(encoder, lane, addr, val, committed); 387 376 } 388 377 389 - static void intel_cx0_rmw(struct drm_i915_private *i915, enum port port, 378 + static void intel_cx0_rmw(struct intel_encoder *encoder, 390 379 u8 lane_mask, u16 addr, u8 clear, u8 set, bool committed) 391 380 { 392 381 u8 lane; 393 382 394 383 for_each_cx0_lane_in_mask(lane_mask, lane) 395 - __intel_cx0_rmw(i915, port, lane, addr, clear, set, committed); 384 + __intel_cx0_rmw(encoder, lane, addr, clear, set, committed); 396 385 } 397 386 398 387 static u8 intel_c10_get_tx_vboost_lvl(const struct intel_crtc_state *crtc_state) ··· 431 414 { 432 415 struct drm_i915_private *i915 = to_i915(encoder->base.dev); 433 416 const struct intel_ddi_buf_trans *trans; 434 - enum phy phy = intel_port_to_phy(i915, encoder->port); 435 417 u8 owned_lane_mask; 436 418 intel_wakeref_t wakeref; 437 419 int n_entries, ln; ··· 439 423 if (intel_tc_port_in_tbt_alt_mode(dig_port)) 440 424 return; 441 425 442 - owned_lane_mask = intel_cx0_get_owned_lane_mask(i915, encoder); 426 + owned_lane_mask = intel_cx0_get_owned_lane_mask(encoder); 443 427 444 428 wakeref = intel_cx0_phy_transaction_begin(encoder); 445 429 ··· 449 433 return; 450 434 } 451 435 452 - if (intel_is_c10phy(i915, phy)) { 453 - intel_cx0_rmw(i915, encoder->port, owned_lane_mask, PHY_C10_VDR_CONTROL(1), 436 + if (intel_encoder_is_c10phy(encoder)) { 437 + intel_cx0_rmw(encoder, owned_lane_mask, PHY_C10_VDR_CONTROL(1), 454 438 0, C10_VDR_CTRL_MSGBUS_ACCESS, MB_WRITE_COMMITTED); 455 - intel_cx0_rmw(i915, encoder->port, 
owned_lane_mask, PHY_C10_VDR_CMN(3), 439 + intel_cx0_rmw(encoder, owned_lane_mask, PHY_C10_VDR_CMN(3), 456 440 C10_CMN3_TXVBOOST_MASK, 457 441 C10_CMN3_TXVBOOST(intel_c10_get_tx_vboost_lvl(crtc_state)), 458 442 MB_WRITE_UNCOMMITTED); 459 - intel_cx0_rmw(i915, encoder->port, owned_lane_mask, PHY_C10_VDR_TX(1), 443 + intel_cx0_rmw(encoder, owned_lane_mask, PHY_C10_VDR_TX(1), 460 444 C10_TX1_TERMCTL_MASK, 461 445 C10_TX1_TERMCTL(intel_c10_get_tx_term_ctl(crtc_state)), 462 446 MB_WRITE_COMMITTED); ··· 471 455 if (!(lane_mask & owned_lane_mask)) 472 456 continue; 473 457 474 - intel_cx0_rmw(i915, encoder->port, lane_mask, PHY_CX0_VDROVRD_CTL(lane, tx, 0), 458 + intel_cx0_rmw(encoder, lane_mask, PHY_CX0_VDROVRD_CTL(lane, tx, 0), 475 459 C10_PHY_OVRD_LEVEL_MASK, 476 460 C10_PHY_OVRD_LEVEL(trans->entries[level].snps.pre_cursor), 477 461 MB_WRITE_COMMITTED); 478 - intel_cx0_rmw(i915, encoder->port, lane_mask, PHY_CX0_VDROVRD_CTL(lane, tx, 1), 462 + intel_cx0_rmw(encoder, lane_mask, PHY_CX0_VDROVRD_CTL(lane, tx, 1), 479 463 C10_PHY_OVRD_LEVEL_MASK, 480 464 C10_PHY_OVRD_LEVEL(trans->entries[level].snps.vswing), 481 465 MB_WRITE_COMMITTED); 482 - intel_cx0_rmw(i915, encoder->port, lane_mask, PHY_CX0_VDROVRD_CTL(lane, tx, 2), 466 + intel_cx0_rmw(encoder, lane_mask, PHY_CX0_VDROVRD_CTL(lane, tx, 2), 483 467 C10_PHY_OVRD_LEVEL_MASK, 484 468 C10_PHY_OVRD_LEVEL(trans->entries[level].snps.post_cursor), 485 469 MB_WRITE_COMMITTED); 486 470 } 487 471 488 472 /* Write Override enables in 0xD71 */ 489 - intel_cx0_rmw(i915, encoder->port, owned_lane_mask, PHY_C10_VDR_OVRD, 473 + intel_cx0_rmw(encoder, owned_lane_mask, PHY_C10_VDR_OVRD, 490 474 0, PHY_C10_VDR_OVRD_TX1 | PHY_C10_VDR_OVRD_TX2, 491 475 MB_WRITE_COMMITTED); 492 476 493 - if (intel_is_c10phy(i915, phy)) 494 - intel_cx0_rmw(i915, encoder->port, owned_lane_mask, PHY_C10_VDR_CONTROL(1), 477 + if (intel_encoder_is_c10phy(encoder)) 478 + intel_cx0_rmw(encoder, owned_lane_mask, PHY_C10_VDR_CONTROL(1), 495 479 0, C10_VDR_CTRL_UPDATE_CFG, MB_WRITE_COMMITTED); 496 480 497 481 intel_cx0_phy_transaction_end(encoder, wakeref); ··· 1872 1856 static void intel_c10pll_readout_hw_state(struct intel_encoder *encoder, 1873 1857 struct intel_c10pll_state *pll_state) 1874 1858 { 1875 - struct drm_i915_private *i915 = to_i915(encoder->base.dev); 1876 1859 u8 lane = INTEL_CX0_LANE0; 1877 1860 intel_wakeref_t wakeref; 1878 1861 int i; ··· 1882 1867 * According to C10 VDR Register programming Sequence we need 1883 1868 * to do this to read PHY internal registers from MsgBus. 
1884 1869 */ 1885 - intel_cx0_rmw(i915, encoder->port, lane, PHY_C10_VDR_CONTROL(1), 1870 + intel_cx0_rmw(encoder, lane, PHY_C10_VDR_CONTROL(1), 1886 1871 0, C10_VDR_CTRL_MSGBUS_ACCESS, 1887 1872 MB_WRITE_COMMITTED); 1888 1873 1889 1874 for (i = 0; i < ARRAY_SIZE(pll_state->pll); i++) 1890 - pll_state->pll[i] = intel_cx0_read(i915, encoder->port, lane, 1891 - PHY_C10_VDR_PLL(i)); 1875 + pll_state->pll[i] = intel_cx0_read(encoder, lane, PHY_C10_VDR_PLL(i)); 1892 1876 1893 - pll_state->cmn = intel_cx0_read(i915, encoder->port, lane, PHY_C10_VDR_CMN(0)); 1894 - pll_state->tx = intel_cx0_read(i915, encoder->port, lane, PHY_C10_VDR_TX(0)); 1877 + pll_state->cmn = intel_cx0_read(encoder, lane, PHY_C10_VDR_CMN(0)); 1878 + pll_state->tx = intel_cx0_read(encoder, lane, PHY_C10_VDR_TX(0)); 1895 1879 1896 1880 intel_cx0_phy_transaction_end(encoder, wakeref); 1897 1881 } ··· 1902 1888 const struct intel_c10pll_state *pll_state = &crtc_state->cx0pll_state.c10; 1903 1889 int i; 1904 1890 1905 - intel_cx0_rmw(i915, encoder->port, INTEL_CX0_BOTH_LANES, PHY_C10_VDR_CONTROL(1), 1891 + intel_cx0_rmw(encoder, INTEL_CX0_BOTH_LANES, PHY_C10_VDR_CONTROL(1), 1906 1892 0, C10_VDR_CTRL_MSGBUS_ACCESS, 1907 1893 MB_WRITE_COMMITTED); 1908 1894 1909 1895 /* Custom width needs to be programmed to 0 for both the phy lanes */ 1910 - intel_cx0_rmw(i915, encoder->port, INTEL_CX0_BOTH_LANES, PHY_C10_VDR_CUSTOM_WIDTH, 1896 + intel_cx0_rmw(encoder, INTEL_CX0_BOTH_LANES, PHY_C10_VDR_CUSTOM_WIDTH, 1911 1897 C10_VDR_CUSTOM_WIDTH_MASK, C10_VDR_CUSTOM_WIDTH_8_10, 1912 1898 MB_WRITE_COMMITTED); 1913 - intel_cx0_rmw(i915, encoder->port, INTEL_CX0_BOTH_LANES, PHY_C10_VDR_CONTROL(1), 1899 + intel_cx0_rmw(encoder, INTEL_CX0_BOTH_LANES, PHY_C10_VDR_CONTROL(1), 1914 1900 0, C10_VDR_CTRL_UPDATE_CFG, 1915 1901 MB_WRITE_COMMITTED); 1916 1902 1917 1903 /* Program the pll values only for the master lane */ 1918 1904 for (i = 0; i < ARRAY_SIZE(pll_state->pll); i++) 1919 - intel_cx0_write(i915, encoder->port, INTEL_CX0_LANE0, PHY_C10_VDR_PLL(i), 1905 + intel_cx0_write(encoder, INTEL_CX0_LANE0, PHY_C10_VDR_PLL(i), 1920 1906 pll_state->pll[i], 1921 1907 (i % 4) ? 
MB_WRITE_UNCOMMITTED : MB_WRITE_COMMITTED); 1922 1908 1923 - intel_cx0_write(i915, encoder->port, INTEL_CX0_LANE0, PHY_C10_VDR_CMN(0), pll_state->cmn, MB_WRITE_COMMITTED); 1924 - intel_cx0_write(i915, encoder->port, INTEL_CX0_LANE0, PHY_C10_VDR_TX(0), pll_state->tx, MB_WRITE_COMMITTED); 1909 + intel_cx0_write(encoder, INTEL_CX0_LANE0, PHY_C10_VDR_CMN(0), pll_state->cmn, MB_WRITE_COMMITTED); 1910 + intel_cx0_write(encoder, INTEL_CX0_LANE0, PHY_C10_VDR_TX(0), pll_state->tx, MB_WRITE_COMMITTED); 1925 1911 1926 - intel_cx0_rmw(i915, encoder->port, INTEL_CX0_LANE0, PHY_C10_VDR_CONTROL(1), 1912 + intel_cx0_rmw(encoder, INTEL_CX0_LANE0, PHY_C10_VDR_CONTROL(1), 1927 1913 0, C10_VDR_CTRL_MASTER_LANE | C10_VDR_CTRL_UPDATE_CFG, 1928 1914 MB_WRITE_COMMITTED); 1929 1915 } ··· 2051 2037 int intel_cx0_phy_check_hdmi_link_rate(struct intel_hdmi *hdmi, int clock) 2052 2038 { 2053 2039 struct intel_digital_port *dig_port = hdmi_to_dig_port(hdmi); 2054 - struct drm_i915_private *i915 = intel_hdmi_to_i915(hdmi); 2055 - enum phy phy = intel_port_to_phy(i915, dig_port->base.port); 2056 2040 2057 - if (intel_is_c10phy(i915, phy)) 2041 + if (intel_encoder_is_c10phy(&dig_port->base)) 2058 2042 return intel_c10_phy_check_hdmi_link_rate(clock); 2059 2043 return intel_c20_phy_check_hdmi_link_rate(clock); 2060 2044 } ··· 2100 2088 int intel_cx0pll_calc_state(struct intel_crtc_state *crtc_state, 2101 2089 struct intel_encoder *encoder) 2102 2090 { 2103 - struct drm_i915_private *i915 = to_i915(encoder->base.dev); 2104 - enum phy phy = intel_port_to_phy(i915, encoder->port); 2105 - 2106 - if (intel_is_c10phy(i915, phy)) 2091 + if (intel_encoder_is_c10phy(encoder)) 2107 2092 return intel_c10pll_calc_state(crtc_state, encoder); 2108 2093 return intel_c20pll_calc_state(crtc_state, encoder); 2109 2094 } ··· 2158 2149 static void intel_c20pll_readout_hw_state(struct intel_encoder *encoder, 2159 2150 struct intel_c20pll_state *pll_state) 2160 2151 { 2161 - struct drm_i915_private *i915 = to_i915(encoder->base.dev); 2162 2152 bool cntx; 2163 2153 intel_wakeref_t wakeref; 2164 2154 int i; ··· 2165 2157 wakeref = intel_cx0_phy_transaction_begin(encoder); 2166 2158 2167 2159 /* 1. 
Read current context selection */ 2168 - cntx = intel_cx0_read(i915, encoder->port, INTEL_CX0_LANE0, PHY_C20_VDR_CUSTOM_SERDES_RATE) & PHY_C20_CONTEXT_TOGGLE; 2160 + cntx = intel_cx0_read(encoder, INTEL_CX0_LANE0, PHY_C20_VDR_CUSTOM_SERDES_RATE) & PHY_C20_CONTEXT_TOGGLE; 2169 2161 2170 2162 /* Read Tx configuration */ 2171 2163 for (i = 0; i < ARRAY_SIZE(pll_state->tx); i++) { 2172 2164 if (cntx) 2173 - pll_state->tx[i] = intel_c20_sram_read(i915, encoder->port, INTEL_CX0_LANE0, 2165 + pll_state->tx[i] = intel_c20_sram_read(encoder, INTEL_CX0_LANE0, 2174 2166 PHY_C20_B_TX_CNTX_CFG(i)); 2175 2167 else 2176 - pll_state->tx[i] = intel_c20_sram_read(i915, encoder->port, INTEL_CX0_LANE0, 2168 + pll_state->tx[i] = intel_c20_sram_read(encoder, INTEL_CX0_LANE0, 2177 2169 PHY_C20_A_TX_CNTX_CFG(i)); 2178 2170 } 2179 2171 2180 2172 /* Read common configuration */ 2181 2173 for (i = 0; i < ARRAY_SIZE(pll_state->cmn); i++) { 2182 2174 if (cntx) 2183 - pll_state->cmn[i] = intel_c20_sram_read(i915, encoder->port, INTEL_CX0_LANE0, 2175 + pll_state->cmn[i] = intel_c20_sram_read(encoder, INTEL_CX0_LANE0, 2184 2176 PHY_C20_B_CMN_CNTX_CFG(i)); 2185 2177 else 2186 - pll_state->cmn[i] = intel_c20_sram_read(i915, encoder->port, INTEL_CX0_LANE0, 2178 + pll_state->cmn[i] = intel_c20_sram_read(encoder, INTEL_CX0_LANE0, 2187 2179 PHY_C20_A_CMN_CNTX_CFG(i)); 2188 2180 } 2189 2181 ··· 2191 2183 /* MPLLB configuration */ 2192 2184 for (i = 0; i < ARRAY_SIZE(pll_state->mpllb); i++) { 2193 2185 if (cntx) 2194 - pll_state->mpllb[i] = intel_c20_sram_read(i915, encoder->port, INTEL_CX0_LANE0, 2186 + pll_state->mpllb[i] = intel_c20_sram_read(encoder, INTEL_CX0_LANE0, 2195 2187 PHY_C20_B_MPLLB_CNTX_CFG(i)); 2196 2188 else 2197 - pll_state->mpllb[i] = intel_c20_sram_read(i915, encoder->port, INTEL_CX0_LANE0, 2189 + pll_state->mpllb[i] = intel_c20_sram_read(encoder, INTEL_CX0_LANE0, 2198 2190 PHY_C20_A_MPLLB_CNTX_CFG(i)); 2199 2191 } 2200 2192 } else { 2201 2193 /* MPLLA configuration */ 2202 2194 for (i = 0; i < ARRAY_SIZE(pll_state->mplla); i++) { 2203 2195 if (cntx) 2204 - pll_state->mplla[i] = intel_c20_sram_read(i915, encoder->port, INTEL_CX0_LANE0, 2196 + pll_state->mplla[i] = intel_c20_sram_read(encoder, INTEL_CX0_LANE0, 2205 2197 PHY_C20_B_MPLLA_CNTX_CFG(i)); 2206 2198 else 2207 - pll_state->mplla[i] = intel_c20_sram_read(i915, encoder->port, INTEL_CX0_LANE0, 2199 + pll_state->mplla[i] = intel_c20_sram_read(encoder, INTEL_CX0_LANE0, 2208 2200 PHY_C20_A_MPLLA_CNTX_CFG(i)); 2209 2201 } 2210 2202 } ··· 2346 2338 dp = true; 2347 2339 2348 2340 /* 1. Read current context selection */ 2349 - cntx = intel_cx0_read(i915, encoder->port, INTEL_CX0_LANE0, PHY_C20_VDR_CUSTOM_SERDES_RATE) & BIT(0); 2341 + cntx = intel_cx0_read(encoder, INTEL_CX0_LANE0, PHY_C20_VDR_CUSTOM_SERDES_RATE) & BIT(0); 2350 2342 2351 2343 /* 2352 2344 * 2. 
If there is a protocol switch from HDMI to DP or vice versa, clear ··· 2355 2347 */ 2356 2348 if (intel_c20_protocol_switch_valid(encoder)) { 2357 2349 for (i = 0; i < 4; i++) 2358 - intel_c20_sram_write(i915, encoder->port, INTEL_CX0_LANE0, RAWLANEAONX_DIG_TX_MPLLB_CAL_DONE_BANK(i), 0); 2350 + intel_c20_sram_write(encoder, INTEL_CX0_LANE0, RAWLANEAONX_DIG_TX_MPLLB_CAL_DONE_BANK(i), 0); 2359 2351 usleep_range(4000, 4100); 2360 2352 } 2361 2353 ··· 2363 2355 /* 3.1 Tx configuration */ 2364 2356 for (i = 0; i < ARRAY_SIZE(pll_state->tx); i++) { 2365 2357 if (cntx) 2366 - intel_c20_sram_write(i915, encoder->port, INTEL_CX0_LANE0, PHY_C20_A_TX_CNTX_CFG(i), pll_state->tx[i]); 2358 + intel_c20_sram_write(encoder, INTEL_CX0_LANE0, PHY_C20_A_TX_CNTX_CFG(i), pll_state->tx[i]); 2367 2359 else 2368 - intel_c20_sram_write(i915, encoder->port, INTEL_CX0_LANE0, PHY_C20_B_TX_CNTX_CFG(i), pll_state->tx[i]); 2360 + intel_c20_sram_write(encoder, INTEL_CX0_LANE0, PHY_C20_B_TX_CNTX_CFG(i), pll_state->tx[i]); 2369 2361 } 2370 2362 2371 2363 /* 3.2 common configuration */ 2372 2364 for (i = 0; i < ARRAY_SIZE(pll_state->cmn); i++) { 2373 2365 if (cntx) 2374 - intel_c20_sram_write(i915, encoder->port, INTEL_CX0_LANE0, PHY_C20_A_CMN_CNTX_CFG(i), pll_state->cmn[i]); 2366 + intel_c20_sram_write(encoder, INTEL_CX0_LANE0, PHY_C20_A_CMN_CNTX_CFG(i), pll_state->cmn[i]); 2375 2367 else 2376 - intel_c20_sram_write(i915, encoder->port, INTEL_CX0_LANE0, PHY_C20_B_CMN_CNTX_CFG(i), pll_state->cmn[i]); 2368 + intel_c20_sram_write(encoder, INTEL_CX0_LANE0, PHY_C20_B_CMN_CNTX_CFG(i), pll_state->cmn[i]); 2377 2369 } 2378 2370 2379 2371 /* 3.3 mpllb or mplla configuration */ 2380 2372 if (intel_c20phy_use_mpllb(pll_state)) { 2381 2373 for (i = 0; i < ARRAY_SIZE(pll_state->mpllb); i++) { 2382 2374 if (cntx) 2383 - intel_c20_sram_write(i915, encoder->port, INTEL_CX0_LANE0, 2375 + intel_c20_sram_write(encoder, INTEL_CX0_LANE0, 2384 2376 PHY_C20_A_MPLLB_CNTX_CFG(i), 2385 2377 pll_state->mpllb[i]); 2386 2378 else 2387 - intel_c20_sram_write(i915, encoder->port, INTEL_CX0_LANE0, 2379 + intel_c20_sram_write(encoder, INTEL_CX0_LANE0, 2388 2380 PHY_C20_B_MPLLB_CNTX_CFG(i), 2389 2381 pll_state->mpllb[i]); 2390 2382 } 2391 2383 } else { 2392 2384 for (i = 0; i < ARRAY_SIZE(pll_state->mplla); i++) { 2393 2385 if (cntx) 2394 - intel_c20_sram_write(i915, encoder->port, INTEL_CX0_LANE0, 2386 + intel_c20_sram_write(encoder, INTEL_CX0_LANE0, 2395 2387 PHY_C20_A_MPLLA_CNTX_CFG(i), 2396 2388 pll_state->mplla[i]); 2397 2389 else 2398 - intel_c20_sram_write(i915, encoder->port, INTEL_CX0_LANE0, 2390 + intel_c20_sram_write(encoder, INTEL_CX0_LANE0, 2399 2391 PHY_C20_B_MPLLA_CNTX_CFG(i), 2400 2392 pll_state->mplla[i]); 2401 2393 } 2402 2394 } 2403 2395 2404 2396 /* 4. Program custom width to match the link protocol */ 2405 - intel_cx0_rmw(i915, encoder->port, lane, PHY_C20_VDR_CUSTOM_WIDTH, 2397 + intel_cx0_rmw(encoder, lane, PHY_C20_VDR_CUSTOM_WIDTH, 2406 2398 PHY_C20_CUSTOM_WIDTH_MASK, 2407 2399 PHY_C20_CUSTOM_WIDTH(intel_get_c20_custom_width(clock, dp)), 2408 2400 MB_WRITE_COMMITTED); 2409 2401 2410 2402 /* 5. For DP or 6. 
For HDMI */ 2411 2403 if (dp) { 2412 - intel_cx0_rmw(i915, encoder->port, lane, PHY_C20_VDR_CUSTOM_SERDES_RATE, 2404 + intel_cx0_rmw(encoder, lane, PHY_C20_VDR_CUSTOM_SERDES_RATE, 2413 2405 BIT(6) | PHY_C20_CUSTOM_SERDES_MASK, 2414 2406 BIT(6) | PHY_C20_CUSTOM_SERDES(intel_c20_get_dp_rate(clock)), 2415 2407 MB_WRITE_COMMITTED); 2416 2408 } else { 2417 - intel_cx0_rmw(i915, encoder->port, lane, PHY_C20_VDR_CUSTOM_SERDES_RATE, 2409 + intel_cx0_rmw(encoder, lane, PHY_C20_VDR_CUSTOM_SERDES_RATE, 2418 2410 BIT(7) | PHY_C20_CUSTOM_SERDES_MASK, 2419 2411 is_hdmi_frl(clock) ? BIT(7) : 0, 2420 2412 MB_WRITE_COMMITTED); 2421 2413 2422 - intel_cx0_write(i915, encoder->port, INTEL_CX0_BOTH_LANES, PHY_C20_VDR_HDMI_RATE, 2414 + intel_cx0_write(encoder, INTEL_CX0_BOTH_LANES, PHY_C20_VDR_HDMI_RATE, 2423 2415 intel_c20_get_hdmi_rate(clock), 2424 2416 MB_WRITE_COMMITTED); 2425 2417 } ··· 2428 2420 * 7. Write Vendor specific registers to toggle context setting to load 2429 2421 * the updated programming toggle context bit 2430 2422 */ 2431 - intel_cx0_rmw(i915, encoder->port, lane, PHY_C20_VDR_CUSTOM_SERDES_RATE, 2423 + intel_cx0_rmw(encoder, lane, PHY_C20_VDR_CUSTOM_SERDES_RATE, 2432 2424 BIT(0), cntx ? 0 : 1, MB_WRITE_COMMITTED); 2433 2425 } 2434 2426 ··· 2516 2508 return val; 2517 2509 } 2518 2510 2519 - static void intel_cx0_powerdown_change_sequence(struct drm_i915_private *i915, 2520 - enum port port, 2511 + static void intel_cx0_powerdown_change_sequence(struct intel_encoder *encoder, 2521 2512 u8 lane_mask, u8 state) 2522 2513 { 2523 - enum phy phy = intel_port_to_phy(i915, port); 2514 + struct drm_i915_private *i915 = to_i915(encoder->base.dev); 2515 + enum port port = encoder->port; 2516 + enum phy phy = intel_encoder_to_phy(encoder); 2524 2517 i915_reg_t buf_ctl2_reg = XELPDP_PORT_BUF_CTL2(i915, port); 2525 2518 int lane; 2526 2519 ··· 2537 2528 drm_dbg_kms(&i915->drm, 2538 2529 "PHY %c Timeout waiting for previous transaction to complete. 
Reset the bus.\n", 2539 2530 phy_name(phy)); 2540 - intel_cx0_bus_reset(i915, port, lane); 2531 + intel_cx0_bus_reset(encoder, lane); 2541 2532 } 2542 2533 2543 2534 intel_de_rmw(i915, buf_ctl2_reg, ··· 2545 2536 intel_cx0_get_powerdown_update(lane_mask)); 2546 2537 2547 2538 /* Update Timeout Value */ 2548 - if (__intel_de_wait_for_register(i915, buf_ctl2_reg, 2549 - intel_cx0_get_powerdown_update(lane_mask), 0, 2550 - XELPDP_PORT_POWERDOWN_UPDATE_TIMEOUT_US, 0, NULL)) 2539 + if (intel_de_wait_custom(i915, buf_ctl2_reg, 2540 + intel_cx0_get_powerdown_update(lane_mask), 0, 2541 + XELPDP_PORT_POWERDOWN_UPDATE_TIMEOUT_US, 0, NULL)) 2551 2542 drm_warn(&i915->drm, "PHY %c failed to bring out of Lane reset after %dus.\n", 2552 2543 phy_name(phy), XELPDP_PORT_RESET_START_TIMEOUT_US); 2553 2544 } 2554 2545 2555 - static void intel_cx0_setup_powerdown(struct drm_i915_private *i915, enum port port) 2546 + static void intel_cx0_setup_powerdown(struct intel_encoder *encoder) 2556 2547 { 2548 + struct drm_i915_private *i915 = to_i915(encoder->base.dev); 2549 + enum port port = encoder->port; 2550 + 2557 2551 intel_de_rmw(i915, XELPDP_PORT_BUF_CTL2(i915, port), 2558 2552 XELPDP_POWER_STATE_READY_MASK, 2559 2553 XELPDP_POWER_STATE_READY(CX0_P2_STATE_READY)); ··· 2589 2577 return val; 2590 2578 } 2591 2579 2592 - static void intel_cx0_phy_lane_reset(struct drm_i915_private *i915, 2593 - struct intel_encoder *encoder, 2580 + static void intel_cx0_phy_lane_reset(struct intel_encoder *encoder, 2594 2581 bool lane_reversal) 2595 2582 { 2583 + struct drm_i915_private *i915 = to_i915(encoder->base.dev); 2596 2584 enum port port = encoder->port; 2597 - enum phy phy = intel_port_to_phy(i915, port); 2598 - u8 owned_lane_mask = intel_cx0_get_owned_lane_mask(i915, encoder); 2585 + enum phy phy = intel_encoder_to_phy(encoder); 2586 + u8 owned_lane_mask = intel_cx0_get_owned_lane_mask(encoder); 2599 2587 u8 lane_mask = lane_reversal ? INTEL_CX0_LANE1 : INTEL_CX0_LANE0; 2600 2588 u32 lane_pipe_reset = owned_lane_mask == INTEL_CX0_BOTH_LANES 2601 2589 ? 
XELPDP_LANE_PIPE_RESET(0) | XELPDP_LANE_PIPE_RESET(1) ··· 2605 2593 XELPDP_LANE_PHY_CURRENT_STATUS(1)) 2606 2594 : XELPDP_LANE_PHY_CURRENT_STATUS(0); 2607 2595 2608 - if (__intel_de_wait_for_register(i915, XELPDP_PORT_BUF_CTL1(i915, port), 2609 - XELPDP_PORT_BUF_SOC_PHY_READY, 2610 - XELPDP_PORT_BUF_SOC_PHY_READY, 2611 - XELPDP_PORT_BUF_SOC_READY_TIMEOUT_US, 0, NULL)) 2596 + if (intel_de_wait_custom(i915, XELPDP_PORT_BUF_CTL1(i915, port), 2597 + XELPDP_PORT_BUF_SOC_PHY_READY, 2598 + XELPDP_PORT_BUF_SOC_PHY_READY, 2599 + XELPDP_PORT_BUF_SOC_READY_TIMEOUT_US, 0, NULL)) 2612 2600 drm_warn(&i915->drm, "PHY %c failed to bring out of SOC reset after %dus.\n", 2613 2601 phy_name(phy), XELPDP_PORT_BUF_SOC_READY_TIMEOUT_US); 2614 2602 2615 2603 intel_de_rmw(i915, XELPDP_PORT_BUF_CTL2(i915, port), lane_pipe_reset, 2616 2604 lane_pipe_reset); 2617 2605 2618 - if (__intel_de_wait_for_register(i915, XELPDP_PORT_BUF_CTL2(i915, port), 2619 - lane_phy_current_status, lane_phy_current_status, 2620 - XELPDP_PORT_RESET_START_TIMEOUT_US, 0, NULL)) 2606 + if (intel_de_wait_custom(i915, XELPDP_PORT_BUF_CTL2(i915, port), 2607 + lane_phy_current_status, lane_phy_current_status, 2608 + XELPDP_PORT_RESET_START_TIMEOUT_US, 0, NULL)) 2621 2609 drm_warn(&i915->drm, "PHY %c failed to bring out of Lane reset after %dus.\n", 2622 2610 phy_name(phy), XELPDP_PORT_RESET_START_TIMEOUT_US); 2623 2611 ··· 2625 2613 intel_cx0_get_pclk_refclk_request(owned_lane_mask), 2626 2614 intel_cx0_get_pclk_refclk_request(lane_mask)); 2627 2615 2628 - if (__intel_de_wait_for_register(i915, XELPDP_PORT_CLOCK_CTL(i915, port), 2629 - intel_cx0_get_pclk_refclk_ack(owned_lane_mask), 2630 - intel_cx0_get_pclk_refclk_ack(lane_mask), 2631 - XELPDP_REFCLK_ENABLE_TIMEOUT_US, 0, NULL)) 2616 + if (intel_de_wait_custom(i915, XELPDP_PORT_CLOCK_CTL(i915, port), 2617 + intel_cx0_get_pclk_refclk_ack(owned_lane_mask), 2618 + intel_cx0_get_pclk_refclk_ack(lane_mask), 2619 + XELPDP_REFCLK_ENABLE_TIMEOUT_US, 0, NULL)) 2632 2620 drm_warn(&i915->drm, "PHY %c failed to request refclk after %dus.\n", 2633 2621 phy_name(phy), XELPDP_REFCLK_ENABLE_TIMEOUT_US); 2634 2622 2635 - intel_cx0_powerdown_change_sequence(i915, port, INTEL_CX0_BOTH_LANES, 2623 + intel_cx0_powerdown_change_sequence(encoder, INTEL_CX0_BOTH_LANES, 2636 2624 CX0_P2_STATE_RESET); 2637 - intel_cx0_setup_powerdown(i915, port); 2625 + intel_cx0_setup_powerdown(encoder); 2638 2626 2639 2627 intel_de_rmw(i915, XELPDP_PORT_BUF_CTL2(i915, port), lane_pipe_reset, 0); 2640 2628 ··· 2652 2640 int i; 2653 2641 u8 disables; 2654 2642 bool dp_alt_mode = intel_tc_port_in_dp_alt_mode(enc_to_dig_port(encoder)); 2655 - u8 owned_lane_mask = intel_cx0_get_owned_lane_mask(i915, encoder); 2656 - enum port port = encoder->port; 2643 + u8 owned_lane_mask = intel_cx0_get_owned_lane_mask(encoder); 2657 2644 2658 - if (intel_is_c10phy(i915, intel_port_to_phy(i915, port))) 2659 - intel_cx0_rmw(i915, port, owned_lane_mask, 2645 + if (intel_encoder_is_c10phy(encoder)) 2646 + intel_cx0_rmw(encoder, owned_lane_mask, 2660 2647 PHY_C10_VDR_CONTROL(1), 0, 2661 2648 C10_VDR_CTRL_MSGBUS_ACCESS, 2662 2649 MB_WRITE_COMMITTED); ··· 2677 2666 if (!(owned_lane_mask & lane_mask)) 2678 2667 continue; 2679 2668 2680 - intel_cx0_rmw(i915, port, lane_mask, PHY_CX0_TX_CONTROL(tx, 2), 2669 + intel_cx0_rmw(encoder, lane_mask, PHY_CX0_TX_CONTROL(tx, 2), 2681 2670 CONTROL2_DISABLE_SINGLE_TX, 2682 2671 disables & BIT(i) ? 
CONTROL2_DISABLE_SINGLE_TX : 0, 2683 2672 MB_WRITE_COMMITTED); 2684 2673 } 2685 2674 2686 - if (intel_is_c10phy(i915, intel_port_to_phy(i915, port))) 2687 - intel_cx0_rmw(i915, port, owned_lane_mask, 2675 + if (intel_encoder_is_c10phy(encoder)) 2676 + intel_cx0_rmw(encoder, owned_lane_mask, 2688 2677 PHY_C10_VDR_CONTROL(1), 0, 2689 2678 C10_VDR_CTRL_UPDATE_CFG, 2690 2679 MB_WRITE_COMMITTED); ··· 2716 2705 const struct intel_crtc_state *crtc_state) 2717 2706 { 2718 2707 struct drm_i915_private *i915 = to_i915(encoder->base.dev); 2719 - enum phy phy = intel_port_to_phy(i915, encoder->port); 2708 + enum phy phy = intel_encoder_to_phy(encoder); 2720 2709 struct intel_digital_port *dig_port = enc_to_dig_port(encoder); 2721 2710 bool lane_reversal = dig_port->saved_port_bits & DDI_BUF_PORT_REVERSAL; 2722 2711 u8 maxpclk_lane = lane_reversal ? INTEL_CX0_LANE1 : ··· 2730 2719 intel_program_port_clock_ctl(encoder, crtc_state, lane_reversal); 2731 2720 2732 2721 /* 2. Bring PHY out of reset. */ 2733 - intel_cx0_phy_lane_reset(i915, encoder, lane_reversal); 2722 + intel_cx0_phy_lane_reset(encoder, lane_reversal); 2734 2723 2735 2724 /* 2736 2725 * 3. Change Phy power state to Ready. 2737 2726 * TODO: For DP alt mode use only one lane. 2738 2727 */ 2739 - intel_cx0_powerdown_change_sequence(i915, encoder->port, INTEL_CX0_BOTH_LANES, 2728 + intel_cx0_powerdown_change_sequence(encoder, INTEL_CX0_BOTH_LANES, 2740 2729 CX0_P2_STATE_READY); 2741 2730 2742 2731 /* ··· 2746 2735 */ 2747 2736 2748 2737 /* 5. Program PHY internal PLL internal registers. */ 2749 - if (intel_is_c10phy(i915, phy)) 2738 + if (intel_encoder_is_c10phy(encoder)) 2750 2739 intel_c10_pll_program(i915, crtc_state, encoder); 2751 2740 else 2752 2741 intel_c20_pll_program(i915, crtc_state, encoder); ··· 2778 2767 intel_cx0_get_pclk_pll_request(maxpclk_lane)); 2779 2768 2780 2769 /* 10. Poll on PORT_CLOCK_CTL PCLK PLL Ack LN<Lane for maxPCLK> == "1". */ 2781 - if (__intel_de_wait_for_register(i915, XELPDP_PORT_CLOCK_CTL(i915, encoder->port), 2782 - intel_cx0_get_pclk_pll_ack(INTEL_CX0_BOTH_LANES), 2783 - intel_cx0_get_pclk_pll_ack(maxpclk_lane), 2784 - XELPDP_PCLK_PLL_ENABLE_TIMEOUT_US, 0, NULL)) 2770 + if (intel_de_wait_custom(i915, XELPDP_PORT_CLOCK_CTL(i915, encoder->port), 2771 + intel_cx0_get_pclk_pll_ack(INTEL_CX0_BOTH_LANES), 2772 + intel_cx0_get_pclk_pll_ack(maxpclk_lane), 2773 + XELPDP_PCLK_PLL_ENABLE_TIMEOUT_US, 0, NULL)) 2785 2774 drm_warn(&i915->drm, "Port %c PLL not locked after %dus.\n", 2786 2775 phy_name(phy), XELPDP_PCLK_PLL_ENABLE_TIMEOUT_US); 2787 2776 ··· 2842 2831 const struct intel_crtc_state *crtc_state) 2843 2832 { 2844 2833 struct drm_i915_private *i915 = to_i915(encoder->base.dev); 2845 - enum phy phy = intel_port_to_phy(i915, encoder->port); 2834 + enum phy phy = intel_encoder_to_phy(encoder); 2846 2835 u32 val = 0; 2847 2836 2848 2837 /* ··· 2869 2858 intel_de_write(i915, XELPDP_PORT_CLOCK_CTL(i915, encoder->port), val); 2870 2859 2871 2860 /* 5. Poll on PORT_CLOCK_CTL TBT CLOCK Ack == "1". 
*/ 2872 - if (__intel_de_wait_for_register(i915, XELPDP_PORT_CLOCK_CTL(i915, encoder->port), 2873 - XELPDP_TBT_CLOCK_ACK, 2874 - XELPDP_TBT_CLOCK_ACK, 2875 - 100, 0, NULL)) 2861 + if (intel_de_wait_custom(i915, XELPDP_PORT_CLOCK_CTL(i915, encoder->port), 2862 + XELPDP_TBT_CLOCK_ACK, 2863 + XELPDP_TBT_CLOCK_ACK, 2864 + 100, 0, NULL)) 2876 2865 drm_warn(&i915->drm, "[ENCODER:%d:%s][%c] PHY PLL not locked after 100us.\n", 2877 2866 encoder->base.base.id, encoder->base.name, phy_name(phy)); 2878 2867 ··· 2903 2892 static void intel_cx0pll_disable(struct intel_encoder *encoder) 2904 2893 { 2905 2894 struct drm_i915_private *i915 = to_i915(encoder->base.dev); 2906 - enum phy phy = intel_port_to_phy(i915, encoder->port); 2907 - bool is_c10 = intel_is_c10phy(i915, phy); 2895 + enum phy phy = intel_encoder_to_phy(encoder); 2896 + bool is_c10 = intel_encoder_is_c10phy(encoder); 2908 2897 intel_wakeref_t wakeref = intel_cx0_phy_transaction_begin(encoder); 2909 2898 2910 2899 /* 1. Change owned PHY lane power to Disable state. */ 2911 - intel_cx0_powerdown_change_sequence(i915, encoder->port, INTEL_CX0_BOTH_LANES, 2900 + intel_cx0_powerdown_change_sequence(encoder, INTEL_CX0_BOTH_LANES, 2912 2901 is_c10 ? CX0_P2PG_STATE_DISABLE : 2913 2902 CX0_P4PG_STATE_DISABLE); 2914 2903 ··· 2931 2920 /* 2932 2921 * 5. Poll on PORT_CLOCK_CTL PCLK PLL Ack LN<Lane for maxPCLK**> == "0". 2933 2922 */ 2934 - if (__intel_de_wait_for_register(i915, XELPDP_PORT_CLOCK_CTL(i915, encoder->port), 2935 - intel_cx0_get_pclk_pll_ack(INTEL_CX0_BOTH_LANES) | 2936 - intel_cx0_get_pclk_refclk_ack(INTEL_CX0_BOTH_LANES), 0, 2937 - XELPDP_PCLK_PLL_DISABLE_TIMEOUT_US, 0, NULL)) 2923 + if (intel_de_wait_custom(i915, XELPDP_PORT_CLOCK_CTL(i915, encoder->port), 2924 + intel_cx0_get_pclk_pll_ack(INTEL_CX0_BOTH_LANES) | 2925 + intel_cx0_get_pclk_refclk_ack(INTEL_CX0_BOTH_LANES), 0, 2926 + XELPDP_PCLK_PLL_DISABLE_TIMEOUT_US, 0, NULL)) 2938 2927 drm_warn(&i915->drm, "Port %c PLL not unlocked after %dus.\n", 2939 2928 phy_name(phy), XELPDP_PCLK_PLL_DISABLE_TIMEOUT_US); 2940 2929 ··· 2955 2944 static void intel_mtl_tbt_pll_disable(struct intel_encoder *encoder) 2956 2945 { 2957 2946 struct drm_i915_private *i915 = to_i915(encoder->base.dev); 2958 - enum phy phy = intel_port_to_phy(i915, encoder->port); 2947 + enum phy phy = intel_encoder_to_phy(encoder); 2959 2948 2960 2949 /* 2961 2950 * 1. Follow the Display Voltage Frequency Switching Sequence Before ··· 2969 2958 XELPDP_TBT_CLOCK_REQUEST, 0); 2970 2959 2971 2960 /* 3. Poll on PORT_CLOCK_CTL TBT CLOCK Ack == "0". 
*/ 2972 - if (__intel_de_wait_for_register(i915, XELPDP_PORT_CLOCK_CTL(i915, encoder->port), 2973 - XELPDP_TBT_CLOCK_ACK, 0, 10, 0, NULL)) 2961 + if (intel_de_wait_custom(i915, XELPDP_PORT_CLOCK_CTL(i915, encoder->port), 2962 + XELPDP_TBT_CLOCK_ACK, 0, 10, 0, NULL)) 2974 2963 drm_warn(&i915->drm, "[ENCODER:%d:%s][%c] PHY PLL not unlocked after 10us.\n", 2975 2964 encoder->base.base.id, encoder->base.name, phy_name(phy)); 2976 2965 ··· 3054 3043 void intel_cx0pll_readout_hw_state(struct intel_encoder *encoder, 3055 3044 struct intel_cx0pll_state *pll_state) 3056 3045 { 3057 - struct drm_i915_private *i915 = to_i915(encoder->base.dev); 3058 - enum phy phy = intel_port_to_phy(i915, encoder->port); 3059 - 3060 - if (intel_is_c10phy(i915, phy)) 3046 + if (intel_encoder_is_c10phy(encoder)) 3061 3047 intel_c10pll_readout_hw_state(encoder, &pll_state->c10); 3062 3048 else 3063 3049 intel_c20pll_readout_hw_state(encoder, &pll_state->c20); ··· 3063 3055 int intel_cx0pll_calc_port_clock(struct intel_encoder *encoder, 3064 3056 const struct intel_cx0pll_state *pll_state) 3065 3057 { 3066 - struct drm_i915_private *i915 = to_i915(encoder->base.dev); 3067 - enum phy phy = intel_port_to_phy(i915, encoder->port); 3068 - 3069 - if (intel_is_c10phy(i915, phy)) 3058 + if (intel_encoder_is_c10phy(encoder)) 3070 3059 return intel_c10pll_calc_port_clock(encoder, &pll_state->c10); 3071 3060 3072 3061 return intel_c20pll_calc_port_clock(encoder, &pll_state->c20); ··· 3129 3124 intel_atomic_get_new_crtc_state(state, crtc); 3130 3125 struct intel_encoder *encoder; 3131 3126 struct intel_cx0pll_state mpll_hw_state = {}; 3132 - enum phy phy; 3133 3127 3134 3128 if (DISPLAY_VER(i915) < 14) 3135 3129 return; ··· 3142 3138 return; 3143 3139 3144 3140 encoder = intel_get_crtc_new_encoder(state, new_crtc_state); 3145 - phy = intel_port_to_phy(i915, encoder->port); 3146 3141 3147 3142 if (intel_tc_port_in_tbt_alt_mode(enc_to_dig_port(encoder))) 3148 3143 return; 3149 3144 3150 3145 intel_cx0pll_readout_hw_state(encoder, &mpll_hw_state); 3151 3146 3152 - if (intel_is_c10phy(i915, phy)) 3147 + if (intel_encoder_is_c10phy(encoder)) 3153 3148 intel_c10pll_state_verify(new_crtc_state, crtc, encoder, &mpll_hw_state.c10); 3154 3149 else 3155 3150 intel_c20pll_state_verify(new_crtc_state, crtc, encoder, &mpll_hw_state.c20);
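Most of the intel_cx0_phy.c churn is one mechanical refactor: helpers that used to take a (i915, port) or (i915, phy) pair now take the struct intel_encoder and derive the device, port and phy themselves (intel_is_c10phy() becoming intel_encoder_is_c10phy() is the same change at the predicate level), while open-coded __intel_de_wait_for_register() calls move to the intel_de_wait_custom() name. A sketch of the before/after helper shape; the helper name is made up, but to_i915(), encoder->port and intel_encoder_to_phy() are the lookups the converted functions use:

/*
 * old shape (for comparison): every caller had to dig out i915, port
 * and phy before calling in:
 *
 *   static void my_cx0_helper(struct drm_i915_private *i915,
 *                             enum port port, int lane);
 */

/* new shape: pass the encoder, look the rest up where it is needed */
static void my_cx0_helper(struct intel_encoder *encoder, int lane)
{
	struct drm_i915_private *i915 = to_i915(encoder->base.dev);
	enum port port = encoder->port;
	enum phy phy = intel_encoder_to_phy(encoder);

	drm_dbg_kms(&i915->drm, "PHY %c port %c lane %d\n",
		    phy_name(phy), port_name(port), lane);
}

Besides shrinking the call sites, this keeps the port-to-phy/TC mapping in one place (intel_encoder_to_phy()/intel_encoder_to_tc()), which the intel_ddi.c portion of the diff relies on as well.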
+1 -2
drivers/gpu/drm/i915/display/intel_cx0_phy.h
··· 11 11 #include <linux/bits.h> 12 12 13 13 enum icl_port_dpll_id; 14 - enum phy; 15 14 struct drm_i915_private; 16 15 struct intel_atomic_state; 17 16 struct intel_c10pll_state; ··· 21 22 struct intel_encoder; 22 23 struct intel_hdmi; 23 24 24 - bool intel_is_c10phy(struct drm_i915_private *dev_priv, enum phy phy); 25 + bool intel_encoder_is_c10phy(struct intel_encoder *encoder); 25 26 void intel_mtl_pll_enable(struct intel_encoder *encoder, 26 27 const struct intel_crtc_state *crtc_state); 27 28 void intel_mtl_pll_disable(struct intel_encoder *encoder);
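
The header change mirrors the new calling convention: with intel_encoder_is_c10phy() taking only the encoder, the enum phy forward declaration is no longer needed in intel_cx0_phy.h. A minimal sketch of the helper layering follows; it is user-space C with mock types, and the toy port-to-phy mapping and the "C10" rule are placeholders, not the kernel's platform-dependent logic.

	/* Mock sketch of the encoder-first helper layering. */
	#include <stdbool.h>
	#include <stdio.h>

	enum port { PORT_A, PORT_B, PORT_TC1 };
	enum phy  { PHY_A, PHY_B, PHY_C };

	struct intel_encoder { enum port port; };

	/* low-level, port-based primitive (kept) */
	static enum phy port_to_phy(enum port port)
	{
		return (enum phy)(PHY_A + (port - PORT_A)); /* toy 1:1 mapping */
	}

	/* new-style wrapper: callers pass only the encoder */
	static enum phy encoder_to_phy(const struct intel_encoder *encoder)
	{
		return port_to_phy(encoder->port);
	}

	static bool encoder_is_c10phy(const struct intel_encoder *encoder)
	{
		/* hypothetical rule for the sketch: PHY_A/PHY_B are C10 */
		return encoder_to_phy(encoder) <= PHY_B;
	}

	int main(void)
	{
		struct intel_encoder enc = { .port = PORT_B };

		printf("c10phy: %d\n", encoder_is_c10phy(&enc));
		return 0;
	}
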
+117 -119
drivers/gpu/drm/i915/display/intel_ddi.c
··· 200 200 port_name(port)); 201 201 } 202 202 203 - static void intel_wait_ddi_buf_active(struct drm_i915_private *dev_priv, 204 - enum port port) 203 + static void intel_wait_ddi_buf_active(struct intel_encoder *encoder) 205 204 { 206 - enum phy phy = intel_port_to_phy(dev_priv, port); 205 + struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); 206 + enum port port = encoder->port; 207 207 int timeout_us; 208 208 int ret; 209 209 ··· 218 218 } else if (IS_DG2(dev_priv)) { 219 219 timeout_us = 1200; 220 220 } else if (DISPLAY_VER(dev_priv) >= 12) { 221 - if (intel_phy_is_tc(dev_priv, phy)) 221 + if (intel_encoder_is_tc(encoder)) 222 222 timeout_us = 3000; 223 223 else 224 224 timeout_us = 1000; ··· 331 331 struct drm_i915_private *i915 = to_i915(encoder->base.dev); 332 332 struct intel_dp *intel_dp = enc_to_intel_dp(encoder); 333 333 struct intel_digital_port *dig_port = enc_to_dig_port(encoder); 334 - enum phy phy = intel_port_to_phy(i915, encoder->port); 335 334 336 335 /* DDI_BUF_CTL_ENABLE will be set by intel_ddi_prepare_link_retrain() later */ 337 336 intel_dp->DP = dig_port->saved_port_bits | ··· 344 345 intel_dp->DP |= DDI_BUF_PORT_DATA_10BIT; 345 346 } 346 347 347 - if (IS_ALDERLAKE_P(i915) && intel_phy_is_tc(i915, phy)) { 348 + if (IS_ALDERLAKE_P(i915) && intel_encoder_is_tc(encoder)) { 348 349 intel_dp->DP |= ddi_buf_phy_link_rate(crtc_state->port_clock); 349 350 if (!intel_tc_port_in_tbt_alt_mode(dig_port)) 350 351 intel_dp->DP |= DDI_BUF_CTL_TC_PHY_OWNERSHIP; ··· 894 895 const struct intel_crtc_state *crtc_state) 895 896 { 896 897 struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev); 897 - enum phy phy = intel_port_to_phy(i915, dig_port->base.port); 898 898 899 899 /* 900 900 * ICL+ HW requires corresponding AUX IOs to be powered up for PSR with ··· 912 914 return intel_display_power_aux_io_domain(i915, dig_port->aux_ch); 913 915 else if (DISPLAY_VER(i915) < 14 && 914 916 (intel_crtc_has_dp_encoder(crtc_state) || 915 - intel_phy_is_tc(i915, phy))) 917 + intel_encoder_is_tc(&dig_port->base))) 916 918 return intel_aux_power_domain(dig_port); 917 919 else 918 920 return POWER_DOMAIN_INVALID; ··· 982 984 struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc); 983 985 struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); 984 986 enum transcoder cpu_transcoder = crtc_state->cpu_transcoder; 985 - enum phy phy = intel_port_to_phy(dev_priv, encoder->port); 987 + enum phy phy = intel_encoder_to_phy(encoder); 986 988 u32 val; 987 989 988 990 if (cpu_transcoder == TRANSCODER_EDP) ··· 1111 1113 { 1112 1114 struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); 1113 1115 const struct intel_ddi_buf_trans *trans; 1114 - enum phy phy = intel_port_to_phy(dev_priv, encoder->port); 1116 + enum phy phy = intel_encoder_to_phy(encoder); 1115 1117 int n_entries, ln; 1116 1118 u32 val; 1117 1119 ··· 1174 1176 const struct intel_crtc_state *crtc_state) 1175 1177 { 1176 1178 struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); 1177 - enum phy phy = intel_port_to_phy(dev_priv, encoder->port); 1179 + enum phy phy = intel_encoder_to_phy(encoder); 1178 1180 u32 val; 1179 1181 int ln; 1180 1182 ··· 1225 1227 const struct intel_crtc_state *crtc_state) 1226 1228 { 1227 1229 struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); 1228 - enum tc_port tc_port = intel_port_to_tc(dev_priv, encoder->port); 1230 + enum tc_port tc_port = intel_encoder_to_tc(encoder); 1229 1231 const struct intel_ddi_buf_trans *trans; 1230 1232 int n_entries, ln; 1231 
1233 ··· 1326 1328 const struct intel_crtc_state *crtc_state) 1327 1329 { 1328 1330 struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); 1329 - enum tc_port tc_port = intel_port_to_tc(dev_priv, encoder->port); 1331 + enum tc_port tc_port = intel_encoder_to_tc(encoder); 1330 1332 const struct intel_ddi_buf_trans *trans; 1331 1333 int n_entries, ln; 1332 1334 ··· 1524 1526 { 1525 1527 struct drm_i915_private *i915 = to_i915(encoder->base.dev); 1526 1528 const struct intel_shared_dpll *pll = crtc_state->shared_dpll; 1527 - enum phy phy = intel_port_to_phy(i915, encoder->port); 1529 + enum phy phy = intel_encoder_to_phy(encoder); 1528 1530 1529 1531 if (drm_WARN_ON(&i915->drm, !pll)) 1530 1532 return; ··· 1538 1540 static void adls_ddi_disable_clock(struct intel_encoder *encoder) 1539 1541 { 1540 1542 struct drm_i915_private *i915 = to_i915(encoder->base.dev); 1541 - enum phy phy = intel_port_to_phy(i915, encoder->port); 1543 + enum phy phy = intel_encoder_to_phy(encoder); 1542 1544 1543 1545 _icl_ddi_disable_clock(i915, ADLS_DPCLKA_CFGCR(phy), 1544 1546 ICL_DPCLKA_CFGCR0_DDI_CLK_OFF(phy)); ··· 1547 1549 static bool adls_ddi_is_clock_enabled(struct intel_encoder *encoder) 1548 1550 { 1549 1551 struct drm_i915_private *i915 = to_i915(encoder->base.dev); 1550 - enum phy phy = intel_port_to_phy(i915, encoder->port); 1552 + enum phy phy = intel_encoder_to_phy(encoder); 1551 1553 1552 1554 return _icl_ddi_is_clock_enabled(i915, ADLS_DPCLKA_CFGCR(phy), 1553 1555 ICL_DPCLKA_CFGCR0_DDI_CLK_OFF(phy)); ··· 1556 1558 static struct intel_shared_dpll *adls_ddi_get_pll(struct intel_encoder *encoder) 1557 1559 { 1558 1560 struct drm_i915_private *i915 = to_i915(encoder->base.dev); 1559 - enum phy phy = intel_port_to_phy(i915, encoder->port); 1561 + enum phy phy = intel_encoder_to_phy(encoder); 1560 1562 1561 1563 return _icl_ddi_get_pll(i915, ADLS_DPCLKA_CFGCR(phy), 1562 1564 ADLS_DPCLKA_CFGCR_DDI_CLK_SEL_MASK(phy), ··· 1568 1570 { 1569 1571 struct drm_i915_private *i915 = to_i915(encoder->base.dev); 1570 1572 const struct intel_shared_dpll *pll = crtc_state->shared_dpll; 1571 - enum phy phy = intel_port_to_phy(i915, encoder->port); 1573 + enum phy phy = intel_encoder_to_phy(encoder); 1572 1574 1573 1575 if (drm_WARN_ON(&i915->drm, !pll)) 1574 1576 return; ··· 1582 1584 static void rkl_ddi_disable_clock(struct intel_encoder *encoder) 1583 1585 { 1584 1586 struct drm_i915_private *i915 = to_i915(encoder->base.dev); 1585 - enum phy phy = intel_port_to_phy(i915, encoder->port); 1587 + enum phy phy = intel_encoder_to_phy(encoder); 1586 1588 1587 1589 _icl_ddi_disable_clock(i915, ICL_DPCLKA_CFGCR0, 1588 1590 RKL_DPCLKA_CFGCR0_DDI_CLK_OFF(phy)); ··· 1591 1593 static bool rkl_ddi_is_clock_enabled(struct intel_encoder *encoder) 1592 1594 { 1593 1595 struct drm_i915_private *i915 = to_i915(encoder->base.dev); 1594 - enum phy phy = intel_port_to_phy(i915, encoder->port); 1596 + enum phy phy = intel_encoder_to_phy(encoder); 1595 1597 1596 1598 return _icl_ddi_is_clock_enabled(i915, ICL_DPCLKA_CFGCR0, 1597 1599 RKL_DPCLKA_CFGCR0_DDI_CLK_OFF(phy)); ··· 1600 1602 static struct intel_shared_dpll *rkl_ddi_get_pll(struct intel_encoder *encoder) 1601 1603 { 1602 1604 struct drm_i915_private *i915 = to_i915(encoder->base.dev); 1603 - enum phy phy = intel_port_to_phy(i915, encoder->port); 1605 + enum phy phy = intel_encoder_to_phy(encoder); 1604 1606 1605 1607 return _icl_ddi_get_pll(i915, ICL_DPCLKA_CFGCR0, 1606 1608 RKL_DPCLKA_CFGCR0_DDI_CLK_SEL_MASK(phy), ··· 1612 1614 { 1613 1615 struct drm_i915_private *i915 = 
to_i915(encoder->base.dev); 1614 1616 const struct intel_shared_dpll *pll = crtc_state->shared_dpll; 1615 - enum phy phy = intel_port_to_phy(i915, encoder->port); 1617 + enum phy phy = intel_encoder_to_phy(encoder); 1616 1618 1617 1619 if (drm_WARN_ON(&i915->drm, !pll)) 1618 1620 return; ··· 1635 1637 static void dg1_ddi_disable_clock(struct intel_encoder *encoder) 1636 1638 { 1637 1639 struct drm_i915_private *i915 = to_i915(encoder->base.dev); 1638 - enum phy phy = intel_port_to_phy(i915, encoder->port); 1640 + enum phy phy = intel_encoder_to_phy(encoder); 1639 1641 1640 1642 _icl_ddi_disable_clock(i915, DG1_DPCLKA_CFGCR0(phy), 1641 1643 DG1_DPCLKA_CFGCR0_DDI_CLK_OFF(phy)); ··· 1644 1646 static bool dg1_ddi_is_clock_enabled(struct intel_encoder *encoder) 1645 1647 { 1646 1648 struct drm_i915_private *i915 = to_i915(encoder->base.dev); 1647 - enum phy phy = intel_port_to_phy(i915, encoder->port); 1649 + enum phy phy = intel_encoder_to_phy(encoder); 1648 1650 1649 1651 return _icl_ddi_is_clock_enabled(i915, DG1_DPCLKA_CFGCR0(phy), 1650 1652 DG1_DPCLKA_CFGCR0_DDI_CLK_OFF(phy)); ··· 1653 1655 static struct intel_shared_dpll *dg1_ddi_get_pll(struct intel_encoder *encoder) 1654 1656 { 1655 1657 struct drm_i915_private *i915 = to_i915(encoder->base.dev); 1656 - enum phy phy = intel_port_to_phy(i915, encoder->port); 1658 + enum phy phy = intel_encoder_to_phy(encoder); 1657 1659 enum intel_dpll_id id; 1658 1660 u32 val; 1659 1661 ··· 1678 1680 { 1679 1681 struct drm_i915_private *i915 = to_i915(encoder->base.dev); 1680 1682 const struct intel_shared_dpll *pll = crtc_state->shared_dpll; 1681 - enum phy phy = intel_port_to_phy(i915, encoder->port); 1683 + enum phy phy = intel_encoder_to_phy(encoder); 1682 1684 1683 1685 if (drm_WARN_ON(&i915->drm, !pll)) 1684 1686 return; ··· 1692 1694 static void icl_ddi_combo_disable_clock(struct intel_encoder *encoder) 1693 1695 { 1694 1696 struct drm_i915_private *i915 = to_i915(encoder->base.dev); 1695 - enum phy phy = intel_port_to_phy(i915, encoder->port); 1697 + enum phy phy = intel_encoder_to_phy(encoder); 1696 1698 1697 1699 _icl_ddi_disable_clock(i915, ICL_DPCLKA_CFGCR0, 1698 1700 ICL_DPCLKA_CFGCR0_DDI_CLK_OFF(phy)); ··· 1701 1703 static bool icl_ddi_combo_is_clock_enabled(struct intel_encoder *encoder) 1702 1704 { 1703 1705 struct drm_i915_private *i915 = to_i915(encoder->base.dev); 1704 - enum phy phy = intel_port_to_phy(i915, encoder->port); 1706 + enum phy phy = intel_encoder_to_phy(encoder); 1705 1707 1706 1708 return _icl_ddi_is_clock_enabled(i915, ICL_DPCLKA_CFGCR0, 1707 1709 ICL_DPCLKA_CFGCR0_DDI_CLK_OFF(phy)); ··· 1710 1712 struct intel_shared_dpll *icl_ddi_combo_get_pll(struct intel_encoder *encoder) 1711 1713 { 1712 1714 struct drm_i915_private *i915 = to_i915(encoder->base.dev); 1713 - enum phy phy = intel_port_to_phy(i915, encoder->port); 1715 + enum phy phy = intel_encoder_to_phy(encoder); 1714 1716 1715 1717 return _icl_ddi_get_pll(i915, ICL_DPCLKA_CFGCR0, 1716 1718 ICL_DPCLKA_CFGCR0_DDI_CLK_SEL_MASK(phy), ··· 1765 1767 { 1766 1768 struct drm_i915_private *i915 = to_i915(encoder->base.dev); 1767 1769 const struct intel_shared_dpll *pll = crtc_state->shared_dpll; 1768 - enum tc_port tc_port = intel_port_to_tc(i915, encoder->port); 1770 + enum tc_port tc_port = intel_encoder_to_tc(encoder); 1769 1771 enum port port = encoder->port; 1770 1772 1771 1773 if (drm_WARN_ON(&i915->drm, !pll)) ··· 1785 1787 static void icl_ddi_tc_disable_clock(struct intel_encoder *encoder) 1786 1788 { 1787 1789 struct drm_i915_private *i915 = 
to_i915(encoder->base.dev); 1788 - enum tc_port tc_port = intel_port_to_tc(i915, encoder->port); 1790 + enum tc_port tc_port = intel_encoder_to_tc(encoder); 1789 1791 enum port port = encoder->port; 1790 1792 1791 1793 mutex_lock(&i915->display.dpll.lock); ··· 1801 1803 static bool icl_ddi_tc_is_clock_enabled(struct intel_encoder *encoder) 1802 1804 { 1803 1805 struct drm_i915_private *i915 = to_i915(encoder->base.dev); 1804 - enum tc_port tc_port = intel_port_to_tc(i915, encoder->port); 1806 + enum tc_port tc_port = intel_encoder_to_tc(encoder); 1805 1807 enum port port = encoder->port; 1806 1808 u32 tmp; 1807 1809 ··· 1818 1820 static struct intel_shared_dpll *icl_ddi_tc_get_pll(struct intel_encoder *encoder) 1819 1821 { 1820 1822 struct drm_i915_private *i915 = to_i915(encoder->base.dev); 1821 - enum tc_port tc_port = intel_port_to_tc(i915, encoder->port); 1823 + enum tc_port tc_port = intel_encoder_to_tc(encoder); 1822 1824 enum port port = encoder->port; 1823 1825 enum intel_dpll_id id; 1824 1826 u32 tmp; ··· 2084 2086 const struct intel_crtc_state *crtc_state) 2085 2087 { 2086 2088 struct drm_i915_private *dev_priv = to_i915(dig_port->base.base.dev); 2087 - enum tc_port tc_port = intel_port_to_tc(dev_priv, dig_port->base.port); 2088 - enum phy phy = intel_port_to_phy(dev_priv, dig_port->base.port); 2089 + enum tc_port tc_port = intel_encoder_to_tc(&dig_port->base); 2089 2090 u32 ln0, ln1, pin_assignment; 2090 2091 u8 width; 2091 2092 2092 - if (!intel_phy_is_tc(dev_priv, phy) || 2093 + if (!intel_encoder_is_tc(&dig_port->base) || 2093 2094 intel_tc_port_in_tbt_alt_mode(dig_port)) 2094 2095 return; 2095 2096 ··· 2324 2327 { 2325 2328 struct drm_i915_private *i915 = to_i915(encoder->base.dev); 2326 2329 struct intel_digital_port *dig_port = enc_to_dig_port(encoder); 2327 - enum phy phy = intel_port_to_phy(i915, encoder->port); 2328 2330 2329 - if (intel_phy_is_combo(i915, phy)) { 2331 + if (intel_encoder_is_combo(encoder)) { 2332 + enum phy phy = intel_encoder_to_phy(encoder); 2330 2333 bool lane_reversal = 2331 2334 dig_port->saved_port_bits & DDI_BUF_PORT_REVERSAL; 2332 2335 ··· 2809 2812 const struct drm_connector_state *conn_state) 2810 2813 { 2811 2814 struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); 2812 - struct intel_dp *intel_dp = enc_to_intel_dp(encoder); 2813 2815 2814 - if (HAS_DP20(dev_priv)) { 2816 + if (HAS_DP20(dev_priv)) 2815 2817 intel_dp_128b132b_sdp_crc16(enc_to_intel_dp(encoder), 2816 2818 crtc_state); 2817 - if (crtc_state->has_panel_replay) 2818 - drm_dp_dpcd_writeb(&intel_dp->aux, PANEL_REPLAY_CONFIG, 2819 - DP_PANEL_REPLAY_ENABLE); 2820 - } 2819 + 2820 + /* Panel replay has to be enabled in sink dpcd before link training. 
*/ 2821 + if (crtc_state->has_panel_replay) 2822 + intel_psr_enable_sink(enc_to_intel_dp(encoder), crtc_state); 2821 2823 2822 2824 if (DISPLAY_VER(dev_priv) >= 14) 2823 2825 mtl_ddi_pre_enable_dp(state, encoder, crtc_state, conn_state); ··· 3091 3095 intel_dp_dual_mode_set_tmds_output(intel_hdmi, false); 3092 3096 } 3093 3097 3098 + static void intel_ddi_post_disable_hdmi_or_sst(struct intel_atomic_state *state, 3099 + struct intel_encoder *encoder, 3100 + const struct intel_crtc_state *old_crtc_state, 3101 + const struct drm_connector_state *old_conn_state) 3102 + { 3103 + struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); 3104 + struct intel_crtc *pipe_crtc; 3105 + 3106 + for_each_intel_crtc_in_pipe_mask(&dev_priv->drm, pipe_crtc, 3107 + intel_crtc_joined_pipe_mask(old_crtc_state)) { 3108 + const struct intel_crtc_state *old_pipe_crtc_state = 3109 + intel_atomic_get_old_crtc_state(state, pipe_crtc); 3110 + 3111 + intel_crtc_vblank_off(old_pipe_crtc_state); 3112 + } 3113 + 3114 + intel_disable_transcoder(old_crtc_state); 3115 + 3116 + intel_ddi_disable_transcoder_func(old_crtc_state); 3117 + 3118 + for_each_intel_crtc_in_pipe_mask(&dev_priv->drm, pipe_crtc, 3119 + intel_crtc_joined_pipe_mask(old_crtc_state)) { 3120 + const struct intel_crtc_state *old_pipe_crtc_state = 3121 + intel_atomic_get_old_crtc_state(state, pipe_crtc); 3122 + 3123 + intel_dsc_disable(old_pipe_crtc_state); 3124 + 3125 + if (DISPLAY_VER(dev_priv) >= 9) 3126 + skl_scaler_disable(old_pipe_crtc_state); 3127 + else 3128 + ilk_pfit_disable(old_pipe_crtc_state); 3129 + } 3130 + } 3131 + 3094 3132 static void intel_ddi_post_disable(struct intel_atomic_state *state, 3095 3133 struct intel_encoder *encoder, 3096 3134 const struct intel_crtc_state *old_crtc_state, 3097 3135 const struct drm_connector_state *old_conn_state) 3098 3136 { 3099 - struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); 3100 - struct intel_crtc *slave_crtc; 3101 - 3102 - if (!intel_crtc_has_type(old_crtc_state, INTEL_OUTPUT_DP_MST)) { 3103 - intel_crtc_vblank_off(old_crtc_state); 3104 - 3105 - intel_disable_transcoder(old_crtc_state); 3106 - 3107 - intel_ddi_disable_transcoder_func(old_crtc_state); 3108 - 3109 - intel_dsc_disable(old_crtc_state); 3110 - 3111 - if (DISPLAY_VER(dev_priv) >= 9) 3112 - skl_scaler_disable(old_crtc_state); 3113 - else 3114 - ilk_pfit_disable(old_crtc_state); 3115 - } 3116 - 3117 - for_each_intel_crtc_in_pipe_mask(&dev_priv->drm, slave_crtc, 3118 - intel_crtc_bigjoiner_slave_pipes(old_crtc_state)) { 3119 - const struct intel_crtc_state *old_slave_crtc_state = 3120 - intel_atomic_get_old_crtc_state(state, slave_crtc); 3121 - 3122 - intel_crtc_vblank_off(old_slave_crtc_state); 3123 - 3124 - intel_dsc_disable(old_slave_crtc_state); 3125 - skl_scaler_disable(old_slave_crtc_state); 3126 - } 3137 + if (!intel_crtc_has_type(old_crtc_state, INTEL_OUTPUT_DP_MST)) 3138 + intel_ddi_post_disable_hdmi_or_sst(state, encoder, old_crtc_state, 3139 + old_conn_state); 3127 3140 3128 3141 /* 3129 3142 * When called from DP MST code: ··· 3160 3155 const struct intel_crtc_state *old_crtc_state, 3161 3156 const struct drm_connector_state *old_conn_state) 3162 3157 { 3163 - struct drm_i915_private *i915 = to_i915(encoder->base.dev); 3164 3158 struct intel_digital_port *dig_port = enc_to_dig_port(encoder); 3165 - enum phy phy = intel_port_to_phy(i915, encoder->port); 3166 - bool is_tc_port = intel_phy_is_tc(i915, phy); 3167 3159 3168 3160 main_link_aux_power_domain_put(dig_port, old_crtc_state); 3169 3161 3170 - if 
(is_tc_port) 3162 + if (intel_encoder_is_tc(encoder)) 3171 3163 intel_tc_port_put_link(dig_port); 3172 3164 } 3173 3165 ··· 3265 3263 struct intel_digital_port *dig_port = enc_to_dig_port(encoder); 3266 3264 struct drm_connector *connector = conn_state->connector; 3267 3265 enum port port = encoder->port; 3268 - enum phy phy = intel_port_to_phy(dev_priv, port); 3269 3266 u32 buf_ctl; 3270 3267 3271 3268 if (!intel_hdmi_handle_sink_scrambling(encoder, connector, ··· 3348 3347 3349 3348 if (DISPLAY_VER(dev_priv) >= 20) 3350 3349 buf_ctl |= XE2LPD_DDI_BUF_D2D_LINK_ENABLE; 3351 - } else if (IS_ALDERLAKE_P(dev_priv) && intel_phy_is_tc(dev_priv, phy)) { 3350 + } else if (IS_ALDERLAKE_P(dev_priv) && intel_encoder_is_tc(encoder)) { 3352 3351 drm_WARN_ON(&dev_priv->drm, !intel_tc_port_in_legacy_mode(dig_port)); 3353 3352 buf_ctl |= DDI_BUF_CTL_TC_PHY_OWNERSHIP; 3354 3353 } 3355 3354 3356 3355 intel_de_write(dev_priv, DDI_BUF_CTL(port), buf_ctl); 3357 3356 3358 - intel_wait_ddi_buf_active(dev_priv, port); 3357 + intel_wait_ddi_buf_active(encoder); 3359 3358 } 3360 3359 3361 3360 static void intel_enable_ddi(struct intel_atomic_state *state, ··· 3363 3362 const struct intel_crtc_state *crtc_state, 3364 3363 const struct drm_connector_state *conn_state) 3365 3364 { 3366 - drm_WARN_ON(state->base.dev, crtc_state->has_pch_encoder); 3365 + struct drm_i915_private *i915 = to_i915(encoder->base.dev); 3366 + struct intel_crtc *pipe_crtc; 3367 3367 3368 - if (!intel_crtc_is_bigjoiner_slave(crtc_state)) 3369 - intel_ddi_enable_transcoder_func(encoder, crtc_state); 3368 + intel_ddi_enable_transcoder_func(encoder, crtc_state); 3370 3369 3371 3370 /* Enable/Disable DP2.0 SDP split config before transcoder */ 3372 3371 intel_audio_sdp_split_update(crtc_state); ··· 3375 3374 3376 3375 intel_ddi_wait_for_fec_status(encoder, crtc_state, true); 3377 3376 3378 - intel_crtc_vblank_on(crtc_state); 3377 + for_each_intel_crtc_in_pipe_mask_reverse(&i915->drm, pipe_crtc, 3378 + intel_crtc_joined_pipe_mask(crtc_state)) { 3379 + const struct intel_crtc_state *pipe_crtc_state = 3380 + intel_atomic_get_new_crtc_state(state, pipe_crtc); 3381 + 3382 + intel_crtc_vblank_on(pipe_crtc_state); 3383 + } 3379 3384 3380 3385 if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_HDMI)) 3381 3386 intel_enable_ddi_hdmi(state, encoder, crtc_state, conn_state); ··· 3477 3470 struct intel_crtc *crtc) 3478 3471 { 3479 3472 struct drm_i915_private *i915 = to_i915(encoder->base.dev); 3480 - struct intel_crtc_state *crtc_state = 3473 + const struct intel_crtc_state *crtc_state = 3481 3474 intel_atomic_get_new_crtc_state(state, crtc); 3482 - struct intel_crtc *slave_crtc; 3483 - enum phy phy = intel_port_to_phy(i915, encoder->port); 3475 + struct intel_crtc *pipe_crtc; 3484 3476 3485 3477 /* FIXME: Add MTL pll_mgr */ 3486 - if (DISPLAY_VER(i915) >= 14 || !intel_phy_is_tc(i915, phy)) 3478 + if (DISPLAY_VER(i915) >= 14 || !intel_encoder_is_tc(encoder)) 3487 3479 return; 3488 3480 3489 - intel_update_active_dpll(state, crtc, encoder); 3490 - for_each_intel_crtc_in_pipe_mask(&i915->drm, slave_crtc, 3491 - intel_crtc_bigjoiner_slave_pipes(crtc_state)) 3492 - intel_update_active_dpll(state, slave_crtc, encoder); 3481 + for_each_intel_crtc_in_pipe_mask(&i915->drm, pipe_crtc, 3482 + intel_crtc_joined_pipe_mask(crtc_state)) 3483 + intel_update_active_dpll(state, pipe_crtc, encoder); 3493 3484 } 3494 3485 3495 3486 static void ··· 3498 3493 { 3499 3494 struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); 3500 3495 struct intel_digital_port *dig_port = 
enc_to_dig_port(encoder); 3501 - enum phy phy = intel_port_to_phy(dev_priv, encoder->port); 3502 - bool is_tc_port = intel_phy_is_tc(dev_priv, phy); 3496 + bool is_tc_port = intel_encoder_is_tc(encoder); 3503 3497 3504 3498 if (is_tc_port) { 3505 3499 struct intel_crtc *master_crtc = ··· 3524 3520 static void adlp_tbt_to_dp_alt_switch_wa(struct intel_encoder *encoder) 3525 3521 { 3526 3522 struct drm_i915_private *i915 = to_i915(encoder->base.dev); 3527 - enum tc_port tc_port = intel_port_to_tc(i915, encoder->port); 3523 + enum tc_port tc_port = intel_encoder_to_tc(encoder); 3528 3524 int ln; 3529 3525 3530 3526 for (ln = 0; ln < 2; ln++) ··· 3578 3574 intel_de_posting_read(dev_priv, DDI_BUF_CTL(port)); 3579 3575 3580 3576 /* 6.j Poll for PORT_BUF_CTL Idle Status == 0, timeout after 100 us */ 3581 - intel_wait_ddi_buf_active(dev_priv, port); 3577 + intel_wait_ddi_buf_active(encoder); 3582 3578 } 3583 3579 3584 3580 static void intel_ddi_prepare_link_retrain(struct intel_dp *intel_dp, ··· 3628 3624 intel_de_write(dev_priv, DDI_BUF_CTL(port), intel_dp->DP); 3629 3625 intel_de_posting_read(dev_priv, DDI_BUF_CTL(port)); 3630 3626 3631 - intel_wait_ddi_buf_active(dev_priv, port); 3627 + intel_wait_ddi_buf_active(encoder); 3632 3628 } 3633 3629 3634 3630 static void intel_ddi_set_link_train(struct intel_dp *intel_dp, ··· 3685 3681 3686 3682 if (intel_de_wait_for_set(dev_priv, 3687 3683 dp_tp_status_reg(encoder, crtc_state), 3688 - DP_TP_STATUS_IDLE_DONE, 1)) 3684 + DP_TP_STATUS_IDLE_DONE, 2)) 3689 3685 drm_err(&dev_priv->drm, 3690 3686 "Timed out waiting for DP idle patterns\n"); 3691 3687 } ··· 3976 3972 3977 3973 intel_read_dp_sdp(encoder, pipe_config, HDMI_PACKET_TYPE_GAMUT_METADATA); 3978 3974 intel_read_dp_sdp(encoder, pipe_config, DP_SDP_VSC); 3975 + intel_read_dp_sdp(encoder, pipe_config, DP_SDP_ADAPTIVE_SYNC); 3979 3976 3980 3977 intel_audio_codec_get_config(encoder, pipe_config); 3981 3978 } ··· 4149 4144 static void intel_ddi_sync_state(struct intel_encoder *encoder, 4150 4145 const struct intel_crtc_state *crtc_state) 4151 4146 { 4152 - struct drm_i915_private *i915 = to_i915(encoder->base.dev); 4153 - enum phy phy = intel_port_to_phy(i915, encoder->port); 4154 - 4155 - if (intel_phy_is_tc(i915, phy)) 4147 + if (intel_encoder_is_tc(encoder)) 4156 4148 intel_tc_port_sanitize_mode(enc_to_dig_port(encoder), 4157 4149 crtc_state); 4158 4150 ··· 4161 4159 struct intel_crtc_state *crtc_state) 4162 4160 { 4163 4161 struct drm_i915_private *i915 = to_i915(encoder->base.dev); 4164 - enum phy phy = intel_port_to_phy(i915, encoder->port); 4165 4162 bool fastset = true; 4166 4163 4167 - if (intel_phy_is_tc(i915, phy)) { 4164 + if (intel_encoder_is_tc(encoder)) { 4168 4165 drm_dbg_kms(&i915->drm, "[ENCODER:%d:%s] Forcing full modeset to compute TC port DPLLs\n", 4169 4166 encoder->base.base.id, encoder->base.name); 4170 4167 crtc_state->uapi.mode_changed = true; ··· 4257 4256 static bool crtcs_port_sync_compatible(const struct intel_crtc_state *crtc_state1, 4258 4257 const struct intel_crtc_state *crtc_state2) 4259 4258 { 4259 + /* 4260 + * FIXME the modeset sequence is currently wrong and 4261 + * can't deal with bigjoiner + port sync at the same time. 
4262 + */ 4260 4263 return crtc_state1->hw.active && crtc_state2->hw.active && 4264 + !crtc_state1->bigjoiner_pipes && !crtc_state2->bigjoiner_pipes && 4261 4265 crtc_state1->output_types == crtc_state2->output_types && 4262 4266 crtc_state1->output_format == crtc_state2->output_format && 4263 4267 crtc_state1->lane_count == crtc_state2->lane_count && ··· 4354 4348 { 4355 4349 struct drm_i915_private *i915 = to_i915(encoder->dev); 4356 4350 struct intel_digital_port *dig_port = enc_to_dig_port(to_intel_encoder(encoder)); 4357 - enum phy phy = intel_port_to_phy(i915, dig_port->base.port); 4358 4351 4359 4352 intel_dp_encoder_flush_work(encoder); 4360 - if (intel_phy_is_tc(i915, phy)) 4353 + if (intel_encoder_is_tc(&dig_port->base)) 4361 4354 intel_tc_port_cleanup(dig_port); 4362 4355 intel_display_power_flush_work(i915); 4363 4356 ··· 4367 4362 4368 4363 static void intel_ddi_encoder_reset(struct drm_encoder *encoder) 4369 4364 { 4370 - struct drm_i915_private *i915 = to_i915(encoder->dev); 4371 4365 struct intel_dp *intel_dp = enc_to_intel_dp(to_intel_encoder(encoder)); 4372 4366 struct intel_digital_port *dig_port = enc_to_dig_port(to_intel_encoder(encoder)); 4373 - enum phy phy = intel_port_to_phy(i915, dig_port->base.port); 4374 4367 4375 4368 intel_dp->reset_link_params = true; 4376 4369 4377 4370 intel_pps_encoder_reset(intel_dp); 4378 4371 4379 - if (intel_phy_is_tc(i915, phy)) 4372 + if (intel_encoder_is_tc(&dig_port->base)) 4380 4373 intel_tc_port_init_mode(dig_port); 4381 4374 } 4382 4375 ··· 4541 4538 intel_ddi_hotplug(struct intel_encoder *encoder, 4542 4539 struct intel_connector *connector) 4543 4540 { 4544 - struct drm_i915_private *i915 = to_i915(encoder->base.dev); 4545 4541 struct intel_digital_port *dig_port = enc_to_dig_port(encoder); 4546 4542 struct intel_dp *intel_dp = &dig_port->dp; 4547 - enum phy phy = intel_port_to_phy(i915, encoder->port); 4548 - bool is_tc = intel_phy_is_tc(i915, phy); 4543 + bool is_tc = intel_encoder_is_tc(encoder); 4549 4544 struct drm_modeset_acquire_ctx ctx; 4550 4545 enum intel_hotplug_state state; 4551 4546 int ret; ··· 4825 4824 4826 4825 static bool need_aux_ch(struct intel_encoder *encoder, bool init_dp) 4827 4826 { 4828 - struct drm_i915_private *i915 = to_i915(encoder->base.dev); 4829 - enum phy phy = intel_port_to_phy(i915, encoder->port); 4830 - 4831 - return init_dp || intel_phy_is_tc(i915, phy); 4827 + return init_dp || intel_encoder_is_tc(encoder); 4832 4828 } 4833 4829 4834 4830 static bool assert_has_icl_dsi(struct drm_i915_private *i915) ··· 5069 5071 } else if (IS_DG2(dev_priv)) { 5070 5072 encoder->set_signal_levels = intel_snps_phy_set_signal_levels; 5071 5073 } else if (DISPLAY_VER(dev_priv) >= 12) { 5072 - if (intel_phy_is_combo(dev_priv, phy)) 5074 + if (intel_encoder_is_combo(encoder)) 5073 5075 encoder->set_signal_levels = icl_combo_phy_set_signal_levels; 5074 5076 else 5075 5077 encoder->set_signal_levels = tgl_dkl_phy_set_signal_levels; 5076 5078 } else if (DISPLAY_VER(dev_priv) >= 11) { 5077 - if (intel_phy_is_combo(dev_priv, phy)) 5079 + if (intel_encoder_is_combo(encoder)) 5078 5080 encoder->set_signal_levels = icl_combo_phy_set_signal_levels; 5079 5081 else 5080 5082 encoder->set_signal_levels = icl_mg_phy_set_signal_levels; ··· 5124 5126 goto err; 5125 5127 } 5126 5128 5127 - if (intel_phy_is_tc(dev_priv, phy)) { 5129 + if (intel_encoder_is_tc(encoder)) { 5128 5130 bool is_legacy = 5129 5131 !intel_bios_encoder_supports_typec_usb(devdata) && 5130 5132 !intel_bios_encoder_supports_tbt(devdata); ··· 5153 5155 
dig_port->ddi_io_power_domain = intel_display_power_ddi_io_domain(dev_priv, port); 5154 5156 5155 5157 if (DISPLAY_VER(dev_priv) >= 11) { 5156 - if (intel_phy_is_tc(dev_priv, phy)) 5158 + if (intel_encoder_is_tc(encoder)) 5157 5159 dig_port->connected = intel_tc_port_connected; 5158 5160 else 5159 5161 dig_port->connected = lpt_digital_port_connected;
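
Besides the encoder-based intel_encoder_is_tc()/intel_encoder_is_combo() conversions, the intel_ddi.c hunks rework the enable/disable paths to walk every pipe in the joined set via intel_crtc_joined_pipe_mask() instead of special-casing bigjoiner master vs. slave, raise the DP idle-pattern wait timeout from 1 to 2, and explicitly reject port sync combined with bigjoiner in crtcs_port_sync_compatible(). The joined-pipe walk is plain bitmask iteration; below is a small user-space C sketch of the idea, where the BIT() helper and pipe enum mimic the kernel's and everything else is illustrative.

	/* Toy model of iterating all pipes joined to a CRTC: the mask is the
	 * CRTC's own pipe plus its bigjoiner pipes, and the per-pipe part of
	 * the enable/disable sequence runs once per pipe in that mask. */
	#include <stdio.h>

	#define BIT(n) (1u << (n))

	enum pipe { PIPE_A, PIPE_B, PIPE_C, PIPE_D, I915_MAX_PIPES };

	static unsigned int joined_pipe_mask(enum pipe own_pipe, unsigned int bigjoiner_pipes)
	{
		return BIT(own_pipe) | bigjoiner_pipes;
	}

	int main(void)
	{
		/* e.g. a bigjoiner config where pipe A is joined with pipe B */
		unsigned int mask = joined_pipe_mask(PIPE_A, BIT(PIPE_B));

		for (int p = PIPE_A; p < I915_MAX_PIPES; p++)
			if (mask & BIT(p))
				printf("run per-pipe sequence on pipe %c\n", 'A' + p);

		return 0;
	}

The kernel iterates the same mask with for_each_intel_crtc_in_pipe_mask() (and a _reverse variant on the enable side), as the hunks above and the intel_display.c changes further down show.
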
+7 -11
drivers/gpu/drm/i915/display/intel_ddi_buf_trans.c
··· 1691 1691 const struct intel_crtc_state *crtc_state, 1692 1692 int *n_entries) 1693 1693 { 1694 - struct drm_i915_private *i915 = to_i915(encoder->base.dev); 1695 - enum phy phy = intel_port_to_phy(i915, encoder->port); 1696 - 1697 1694 if (intel_crtc_has_dp_encoder(crtc_state) && crtc_state->port_clock >= 1000000) 1698 1695 return intel_get_buf_trans(&mtl_c20_trans_uhbr, n_entries); 1699 - else if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_HDMI) && !(intel_is_c10phy(i915, phy))) 1696 + else if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_HDMI) && !(intel_encoder_is_c10phy(encoder))) 1700 1697 return intel_get_buf_trans(&mtl_c20_trans_hdmi, n_entries); 1701 - else if (!intel_is_c10phy(i915, phy)) 1698 + else if (!intel_encoder_is_c10phy(encoder)) 1702 1699 return intel_get_buf_trans(&mtl_c20_trans_dp14, n_entries); 1703 1700 else 1704 1701 return intel_get_buf_trans(&mtl_c10_trans_dp14, n_entries); ··· 1704 1707 void intel_ddi_buf_trans_init(struct intel_encoder *encoder) 1705 1708 { 1706 1709 struct drm_i915_private *i915 = to_i915(encoder->base.dev); 1707 - enum phy phy = intel_port_to_phy(i915, encoder->port); 1708 1710 1709 1711 if (DISPLAY_VER(i915) >= 14) { 1710 1712 encoder->get_buf_trans = mtl_get_cx0_buf_trans; 1711 1713 } else if (IS_DG2(i915)) { 1712 1714 encoder->get_buf_trans = dg2_get_snps_buf_trans; 1713 1715 } else if (IS_ALDERLAKE_P(i915)) { 1714 - if (intel_phy_is_combo(i915, phy)) 1716 + if (intel_encoder_is_combo(encoder)) 1715 1717 encoder->get_buf_trans = adlp_get_combo_buf_trans; 1716 1718 else 1717 1719 encoder->get_buf_trans = adlp_get_dkl_buf_trans; ··· 1721 1725 } else if (IS_DG1(i915)) { 1722 1726 encoder->get_buf_trans = dg1_get_combo_buf_trans; 1723 1727 } else if (DISPLAY_VER(i915) >= 12) { 1724 - if (intel_phy_is_combo(i915, phy)) 1728 + if (intel_encoder_is_combo(encoder)) 1725 1729 encoder->get_buf_trans = tgl_get_combo_buf_trans; 1726 1730 else 1727 1731 encoder->get_buf_trans = tgl_get_dkl_buf_trans; 1728 1732 } else if (DISPLAY_VER(i915) == 11) { 1729 - if (IS_PLATFORM(i915, INTEL_JASPERLAKE)) 1733 + if (IS_JASPERLAKE(i915)) 1730 1734 encoder->get_buf_trans = jsl_get_combo_buf_trans; 1731 - else if (IS_PLATFORM(i915, INTEL_ELKHARTLAKE)) 1735 + else if (IS_ELKHARTLAKE(i915)) 1732 1736 encoder->get_buf_trans = ehl_get_combo_buf_trans; 1733 - else if (intel_phy_is_combo(i915, phy)) 1737 + else if (intel_encoder_is_combo(encoder)) 1734 1738 encoder->get_buf_trans = icl_get_combo_buf_trans; 1735 1739 else 1736 1740 encoder->get_buf_trans = icl_get_mg_buf_trans;
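
Buffer-translation selection follows the same conversion: the (i915, phy) checks become intel_encoder_is_combo()/intel_encoder_is_c10phy(), and the Jasper Lake / Elkhart Lake checks move from IS_PLATFORM(i915, INTEL_...) to the IS_JASPERLAKE()/IS_ELKHARTLAKE() wrappers. The function itself is a "pick a callback once at init" dispatch; a condensed sketch of that pattern follows, in user-space C with placeholder predicates and tables rather than the kernel's.

	/* Sketch of choosing a buf-trans lookup callback once at encoder init. */
	#include <stdio.h>
	#include <stdbool.h>

	struct buf_trans_table { const char *name; int n_entries; };

	static const struct buf_trans_table combo_table = { "combo", 10 };
	static const struct buf_trans_table dkl_table   = { "dkl",    8 };

	struct encoder {
		bool is_combo; /* stand-in for intel_encoder_is_combo() */
		const struct buf_trans_table *(*get_buf_trans)(const struct encoder *enc, int *n);
	};

	static const struct buf_trans_table *get_combo(const struct encoder *enc, int *n)
	{
		*n = combo_table.n_entries;
		return &combo_table;
	}

	static const struct buf_trans_table *get_dkl(const struct encoder *enc, int *n)
	{
		*n = dkl_table.n_entries;
		return &dkl_table;
	}

	static void buf_trans_init(struct encoder *enc)
	{
		/* pick the callback once, based on the PHY type of the encoder */
		enc->get_buf_trans = enc->is_combo ? get_combo : get_dkl;
	}

	int main(void)
	{
		struct encoder enc = { .is_combo = true };
		int n;

		buf_trans_init(&enc);
		printf("%s, %d entries\n", enc.get_buf_trans(&enc, &n)->name, n);
		return 0;
	}
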
+99 -18
drivers/gpu/drm/i915/display/intel_de.h
··· 13 13 static inline u32 14 14 intel_de_read(struct drm_i915_private *i915, i915_reg_t reg) 15 15 { 16 - return intel_uncore_read(&i915->uncore, reg); 16 + u32 val; 17 + 18 + intel_dmc_wl_get(i915, reg); 19 + 20 + val = intel_uncore_read(&i915->uncore, reg); 21 + 22 + intel_dmc_wl_put(i915, reg); 23 + 24 + return val; 17 25 } 18 26 19 27 static inline u8 20 28 intel_de_read8(struct drm_i915_private *i915, i915_reg_t reg) 21 29 { 22 - return intel_uncore_read8(&i915->uncore, reg); 30 + u8 val; 31 + 32 + intel_dmc_wl_get(i915, reg); 33 + 34 + val = intel_uncore_read8(&i915->uncore, reg); 35 + 36 + intel_dmc_wl_put(i915, reg); 37 + 38 + return val; 23 39 } 24 40 25 41 static inline u64 26 42 intel_de_read64_2x32(struct drm_i915_private *i915, 27 43 i915_reg_t lower_reg, i915_reg_t upper_reg) 28 44 { 29 - return intel_uncore_read64_2x32(&i915->uncore, lower_reg, upper_reg); 45 + u64 val; 46 + 47 + intel_dmc_wl_get(i915, lower_reg); 48 + intel_dmc_wl_get(i915, upper_reg); 49 + 50 + val = intel_uncore_read64_2x32(&i915->uncore, lower_reg, upper_reg); 51 + 52 + intel_dmc_wl_put(i915, upper_reg); 53 + intel_dmc_wl_put(i915, lower_reg); 54 + 55 + return val; 30 56 } 31 57 32 58 static inline void 33 59 intel_de_posting_read(struct drm_i915_private *i915, i915_reg_t reg) 34 60 { 61 + intel_dmc_wl_get(i915, reg); 62 + 35 63 intel_uncore_posting_read(&i915->uncore, reg); 64 + 65 + intel_dmc_wl_put(i915, reg); 36 66 } 37 67 38 68 static inline void 39 69 intel_de_write(struct drm_i915_private *i915, i915_reg_t reg, u32 val) 40 70 { 71 + intel_dmc_wl_get(i915, reg); 72 + 41 73 intel_uncore_write(&i915->uncore, reg, val); 74 + 75 + intel_dmc_wl_put(i915, reg); 76 + } 77 + 78 + static inline u32 79 + __intel_de_rmw_nowl(struct drm_i915_private *i915, i915_reg_t reg, 80 + u32 clear, u32 set) 81 + { 82 + return intel_uncore_rmw(&i915->uncore, reg, clear, set); 42 83 } 43 84 44 85 static inline u32 45 86 intel_de_rmw(struct drm_i915_private *i915, i915_reg_t reg, u32 clear, u32 set) 46 87 { 47 - return intel_uncore_rmw(&i915->uncore, reg, clear, set); 88 + u32 val; 89 + 90 + intel_dmc_wl_get(i915, reg); 91 + 92 + val = __intel_de_rmw_nowl(i915, reg, clear, set); 93 + 94 + intel_dmc_wl_put(i915, reg); 95 + 96 + return val; 48 97 } 49 98 50 99 static inline int 51 - intel_de_wait_for_register(struct drm_i915_private *i915, i915_reg_t reg, 52 - u32 mask, u32 value, unsigned int timeout) 100 + __intel_wait_for_register_nowl(struct drm_i915_private *i915, i915_reg_t reg, 101 + u32 mask, u32 value, unsigned int timeout) 53 102 { 54 - return intel_wait_for_register(&i915->uncore, reg, mask, value, timeout); 103 + return intel_wait_for_register(&i915->uncore, reg, mask, 104 + value, timeout); 55 105 } 56 106 57 107 static inline int 58 - intel_de_wait_for_register_fw(struct drm_i915_private *i915, i915_reg_t reg, 59 - u32 mask, u32 value, unsigned int timeout) 108 + intel_de_wait(struct drm_i915_private *i915, i915_reg_t reg, 109 + u32 mask, u32 value, unsigned int timeout) 60 110 { 61 - return intel_wait_for_register_fw(&i915->uncore, reg, mask, value, timeout); 111 + int ret; 112 + 113 + intel_dmc_wl_get(i915, reg); 114 + 115 + ret = __intel_wait_for_register_nowl(i915, reg, mask, value, timeout); 116 + 117 + intel_dmc_wl_put(i915, reg); 118 + 119 + return ret; 62 120 } 63 121 64 122 static inline int 65 - __intel_de_wait_for_register(struct drm_i915_private *i915, i915_reg_t reg, 66 - u32 mask, u32 value, 67 - unsigned int fast_timeout_us, 68 - unsigned int slow_timeout_ms, u32 *out_value) 123 + 
intel_de_wait_fw(struct drm_i915_private *i915, i915_reg_t reg, 124 + u32 mask, u32 value, unsigned int timeout) 69 125 { 70 - return __intel_wait_for_register(&i915->uncore, reg, mask, value, 71 - fast_timeout_us, slow_timeout_ms, out_value); 126 + int ret; 127 + 128 + intel_dmc_wl_get(i915, reg); 129 + 130 + ret = intel_wait_for_register_fw(&i915->uncore, reg, mask, value, timeout); 131 + 132 + intel_dmc_wl_put(i915, reg); 133 + 134 + return ret; 135 + } 136 + 137 + static inline int 138 + intel_de_wait_custom(struct drm_i915_private *i915, i915_reg_t reg, 139 + u32 mask, u32 value, 140 + unsigned int fast_timeout_us, 141 + unsigned int slow_timeout_ms, u32 *out_value) 142 + { 143 + int ret; 144 + 145 + intel_dmc_wl_get(i915, reg); 146 + 147 + ret = __intel_wait_for_register(&i915->uncore, reg, mask, value, 148 + fast_timeout_us, slow_timeout_ms, out_value); 149 + 150 + intel_dmc_wl_put(i915, reg); 151 + 152 + return ret; 72 153 } 73 154 74 155 static inline int 75 156 intel_de_wait_for_set(struct drm_i915_private *i915, i915_reg_t reg, 76 157 u32 mask, unsigned int timeout) 77 158 { 78 - return intel_de_wait_for_register(i915, reg, mask, mask, timeout); 159 + return intel_de_wait(i915, reg, mask, mask, timeout); 79 160 } 80 161 81 162 static inline int 82 163 intel_de_wait_for_clear(struct drm_i915_private *i915, i915_reg_t reg, 83 164 u32 mask, unsigned int timeout) 84 165 { 85 - return intel_de_wait_for_register(i915, reg, mask, 0, timeout); 166 + return intel_de_wait(i915, reg, mask, 0, timeout); 86 167 } 87 168 88 169 /*
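
These intel_de.h hunks are the DMC wakelock plumbing: every display register accessor now brackets the underlying uncore access with intel_dmc_wl_get()/intel_dmc_wl_put(), raw _nowl variants are added alongside, and the wait helpers are renamed (intel_de_wait_for_register() -> intel_de_wait(), __intel_de_wait_for_register() -> intel_de_wait_custom(), intel_de_wait_for_register_fw() -> intel_de_wait_fw()), so intel_de_wait_for_set()/intel_de_wait_for_clear() pick the wakelock handling up for free. The bracketing itself is a simple acquire/access/release pattern; the self-contained user-space sketch below uses a mock refcount in place of the DMC wakelock and is not the kernel implementation.

	/* Mock of the get/access/put bracketing added around each accessor.
	 * The "wakelock" is just a refcount here; the real code decides per
	 * register whether the DMC firmware must release it first. */
	#include <stdio.h>
	#include <stdint.h>

	static int wl_refcount;
	static uint32_t fake_reg;

	static void wl_get(uint32_t reg) { (void)reg; wl_refcount++; }
	static void wl_put(uint32_t reg) { (void)reg; wl_refcount--; }

	static uint32_t read_raw(uint32_t reg)              { (void)reg; return fake_reg; }
	static void     write_raw(uint32_t reg, uint32_t v) { (void)reg; fake_reg = v; }

	static uint32_t de_read(uint32_t reg)
	{
		uint32_t val;

		wl_get(reg);            /* take the wakelock for this register */
		val = read_raw(reg);
		wl_put(reg);            /* always balanced, even on the read path */

		return val;
	}

	static void de_write(uint32_t reg, uint32_t val)
	{
		wl_get(reg);
		write_raw(reg, val);
		wl_put(reg);
	}

	int main(void)
	{
		de_write(0x64000, 0x1234);
		printf("read back 0x%x, wakelock balance %d\n", de_read(0x64000), wl_refcount);
		return 0;
	}
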
+422 -259
drivers/gpu/drm/i915/display/intel_display.c
··· 275 275 return hweight8(crtc_state->bigjoiner_pipes); 276 276 } 277 277 278 + u8 intel_crtc_joined_pipe_mask(const struct intel_crtc_state *crtc_state) 279 + { 280 + struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc); 281 + 282 + return BIT(crtc->pipe) | crtc_state->bigjoiner_pipes; 283 + } 284 + 278 285 struct intel_crtc *intel_master_crtc(const struct intel_crtc_state *crtc_state) 279 286 { 280 287 struct drm_i915_private *i915 = to_i915(crtc_state->uapi.crtc->dev); ··· 390 383 break; 391 384 } 392 385 393 - if (intel_de_wait_for_register(dev_priv, dpll_reg, 394 - port_mask, expected_mask, 1000)) 386 + if (intel_de_wait(dev_priv, dpll_reg, port_mask, expected_mask, 1000)) 395 387 drm_WARN(&dev_priv->drm, 1, 396 388 "timed out waiting for [ENCODER:%d:%s] port ready: got 0x%x, expected 0x%x\n", 397 389 dig_port->base.base.base.id, dig_port->base.base.name, ··· 436 430 intel_de_rmw(dev_priv, PIPE_ARB_CTL(pipe), 437 431 0, PIPE_ARB_USE_PROG_SLOTS); 438 432 433 + if (DISPLAY_VER(dev_priv) >= 14) { 434 + u32 clear = DP_DSC_INSERT_SF_AT_EOL_WA; 435 + u32 set = 0; 436 + 437 + if (DISPLAY_VER(dev_priv) == 14) 438 + set |= DP_FEC_BS_JITTER_WA; 439 + 440 + intel_de_rmw(dev_priv, 441 + hsw_chicken_trans_reg(dev_priv, cpu_transcoder), 442 + clear, set); 443 + } 444 + 439 445 val = intel_de_read(dev_priv, TRANSCONF(cpu_transcoder)); 440 446 if (val & TRANSCONF_ENABLE) { 441 447 /* we keep both pipes enabled on 830 */ 442 448 drm_WARN_ON(&dev_priv->drm, !IS_I830(dev_priv)); 443 449 return; 450 + } 451 + 452 + /* Wa_1409098942:adlp+ */ 453 + if (DISPLAY_VER(dev_priv) >= 13 && 454 + new_crtc_state->dsc.compression_enable) { 455 + val &= ~TRANSCONF_PIXEL_COUNT_SCALING_MASK; 456 + val |= REG_FIELD_PREP(TRANSCONF_PIXEL_COUNT_SCALING_MASK, 457 + TRANSCONF_PIXEL_COUNT_SCALING_X4); 444 458 } 445 459 446 460 intel_de_write(dev_priv, TRANSCONF(cpu_transcoder), ··· 508 482 /* Don't disable pipe or pipe PLLs if needed */ 509 483 if (!IS_I830(dev_priv)) 510 484 val &= ~TRANSCONF_ENABLE; 485 + 486 + /* Wa_1409098942:adlp+ */ 487 + if (DISPLAY_VER(dev_priv) >= 13 && 488 + old_crtc_state->dsc.compression_enable) 489 + val &= ~TRANSCONF_PIXEL_COUNT_SCALING_MASK; 511 490 512 491 intel_de_write(dev_priv, TRANSCONF(cpu_transcoder), val); 513 492 ··· 566 535 struct drm_i915_private *dev_priv = to_i915(plane->base.dev); 567 536 568 537 return DISPLAY_VER(dev_priv) < 4 || 569 - (plane->fbc && 538 + (plane->fbc && !plane_state->no_fbc_reason && 570 539 plane_state->view.gtt.type == I915_GTT_VIEW_NORMAL); 571 540 } 572 541 ··· 1583 1552 intel_set_pch_fifo_underrun_reporting(dev_priv, pipe, true); 1584 1553 } 1585 1554 1586 - static void glk_pipe_scaler_clock_gating_wa(struct drm_i915_private *dev_priv, 1587 - enum pipe pipe, bool apply) 1555 + /* Display WA #1180: WaDisableScalarClockGating: glk */ 1556 + static bool glk_need_scaler_clock_gating_wa(const struct intel_crtc_state *crtc_state) 1588 1557 { 1589 - u32 val = intel_de_read(dev_priv, CLKGATE_DIS_PSL(pipe)); 1558 + struct drm_i915_private *i915 = to_i915(crtc_state->uapi.crtc->dev); 1559 + 1560 + return DISPLAY_VER(i915) == 10 && crtc_state->pch_pfit.enabled; 1561 + } 1562 + 1563 + static void glk_pipe_scaler_clock_gating_wa(struct intel_crtc *crtc, bool enable) 1564 + { 1565 + struct drm_i915_private *i915 = to_i915(crtc->base.dev); 1590 1566 u32 mask = DPF_GATING_DIS | DPF_RAM_GATING_DIS | DPFR_GATING_DIS; 1591 1567 1592 - if (apply) 1593 - val |= mask; 1594 - else 1595 - val &= ~mask; 1596 - 1597 - intel_de_write(dev_priv, CLKGATE_DIS_PSL(pipe), 
val); 1568 + intel_de_rmw(i915, CLKGATE_DIS_PSL(crtc->pipe), 1569 + mask, enable ? mask : 0); 1598 1570 } 1599 1571 1600 1572 static void hsw_set_linetime_wm(const struct intel_crtc_state *crtc_state) ··· 1618 1584 intel_de_rmw(i915, hsw_chicken_trans_reg(i915, crtc_state->cpu_transcoder), 1619 1585 HSW_FRAME_START_DELAY_MASK, 1620 1586 HSW_FRAME_START_DELAY(crtc_state->framestart_delay - 1)); 1621 - } 1622 - 1623 - static void icl_ddi_bigjoiner_pre_enable(struct intel_atomic_state *state, 1624 - const struct intel_crtc_state *crtc_state) 1625 - { 1626 - struct intel_crtc *master_crtc = intel_master_crtc(crtc_state); 1627 - 1628 - /* 1629 - * Enable sequence steps 1-7 on bigjoiner master 1630 - */ 1631 - if (intel_crtc_is_bigjoiner_slave(crtc_state)) 1632 - intel_encoders_pre_pll_enable(state, master_crtc); 1633 - 1634 - if (crtc_state->shared_dpll) 1635 - intel_enable_shared_dpll(crtc_state); 1636 - 1637 - if (intel_crtc_is_bigjoiner_slave(crtc_state)) 1638 - intel_encoders_pre_enable(state, master_crtc); 1639 1587 } 1640 1588 1641 1589 static void hsw_configure_cpu_transcoder(const struct intel_crtc_state *crtc_state) ··· 1655 1639 const struct intel_crtc_state *new_crtc_state = 1656 1640 intel_atomic_get_new_crtc_state(state, crtc); 1657 1641 struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); 1658 - enum pipe pipe = crtc->pipe, hsw_workaround_pipe; 1659 1642 enum transcoder cpu_transcoder = new_crtc_state->cpu_transcoder; 1660 - bool psl_clkgate_wa; 1643 + struct intel_crtc *pipe_crtc; 1661 1644 1662 1645 if (drm_WARN_ON(&dev_priv->drm, crtc->active)) 1663 1646 return; 1664 1647 1665 - intel_dmc_enable_pipe(dev_priv, crtc->pipe); 1648 + for_each_intel_crtc_in_pipe_mask_reverse(&dev_priv->drm, pipe_crtc, 1649 + intel_crtc_joined_pipe_mask(new_crtc_state)) 1650 + intel_dmc_enable_pipe(dev_priv, pipe_crtc->pipe); 1666 1651 1667 - if (!new_crtc_state->bigjoiner_pipes) { 1668 - intel_encoders_pre_pll_enable(state, crtc); 1652 + intel_encoders_pre_pll_enable(state, crtc); 1669 1653 1670 - if (new_crtc_state->shared_dpll) 1671 - intel_enable_shared_dpll(new_crtc_state); 1654 + for_each_intel_crtc_in_pipe_mask_reverse(&dev_priv->drm, pipe_crtc, 1655 + intel_crtc_joined_pipe_mask(new_crtc_state)) { 1656 + const struct intel_crtc_state *pipe_crtc_state = 1657 + intel_atomic_get_new_crtc_state(state, pipe_crtc); 1672 1658 1673 - intel_encoders_pre_enable(state, crtc); 1674 - } else { 1675 - icl_ddi_bigjoiner_pre_enable(state, new_crtc_state); 1659 + if (pipe_crtc_state->shared_dpll) 1660 + intel_enable_shared_dpll(pipe_crtc_state); 1676 1661 } 1677 1662 1678 - intel_dsc_enable(new_crtc_state); 1663 + intel_encoders_pre_enable(state, crtc); 1679 1664 1680 - if (DISPLAY_VER(dev_priv) >= 13) 1681 - intel_uncompressed_joiner_enable(new_crtc_state); 1665 + for_each_intel_crtc_in_pipe_mask_reverse(&dev_priv->drm, pipe_crtc, 1666 + intel_crtc_joined_pipe_mask(new_crtc_state)) { 1667 + const struct intel_crtc_state *pipe_crtc_state = 1668 + intel_atomic_get_new_crtc_state(state, pipe_crtc); 1682 1669 1683 - intel_set_pipe_src_size(new_crtc_state); 1684 - if (DISPLAY_VER(dev_priv) >= 9 || IS_BROADWELL(dev_priv)) 1685 - bdw_set_pipe_misc(new_crtc_state); 1670 + intel_dsc_enable(pipe_crtc_state); 1686 1671 1687 - if (!intel_crtc_is_bigjoiner_slave(new_crtc_state) && 1688 - !transcoder_is_dsi(cpu_transcoder)) 1672 + if (DISPLAY_VER(dev_priv) >= 13) 1673 + intel_uncompressed_joiner_enable(pipe_crtc_state); 1674 + 1675 + intel_set_pipe_src_size(pipe_crtc_state); 1676 + 1677 + if (DISPLAY_VER(dev_priv) 
>= 9 || IS_BROADWELL(dev_priv)) 1678 + bdw_set_pipe_misc(pipe_crtc_state); 1679 + } 1680 + 1681 + if (!transcoder_is_dsi(cpu_transcoder)) 1689 1682 hsw_configure_cpu_transcoder(new_crtc_state); 1690 1683 1691 - crtc->active = true; 1684 + for_each_intel_crtc_in_pipe_mask_reverse(&dev_priv->drm, pipe_crtc, 1685 + intel_crtc_joined_pipe_mask(new_crtc_state)) { 1686 + const struct intel_crtc_state *pipe_crtc_state = 1687 + intel_atomic_get_new_crtc_state(state, pipe_crtc); 1692 1688 1693 - /* Display WA #1180: WaDisableScalarClockGating: glk */ 1694 - psl_clkgate_wa = DISPLAY_VER(dev_priv) == 10 && 1695 - new_crtc_state->pch_pfit.enabled; 1696 - if (psl_clkgate_wa) 1697 - glk_pipe_scaler_clock_gating_wa(dev_priv, pipe, true); 1689 + pipe_crtc->active = true; 1698 1690 1699 - if (DISPLAY_VER(dev_priv) >= 9) 1700 - skl_pfit_enable(new_crtc_state); 1701 - else 1702 - ilk_pfit_enable(new_crtc_state); 1691 + if (glk_need_scaler_clock_gating_wa(pipe_crtc_state)) 1692 + glk_pipe_scaler_clock_gating_wa(pipe_crtc, true); 1703 1693 1704 - /* 1705 - * On ILK+ LUT must be loaded before the pipe is running but with 1706 - * clocks enabled 1707 - */ 1708 - intel_color_load_luts(new_crtc_state); 1709 - intel_color_commit_noarm(new_crtc_state); 1710 - intel_color_commit_arm(new_crtc_state); 1711 - /* update DSPCNTR to configure gamma/csc for pipe bottom color */ 1712 - if (DISPLAY_VER(dev_priv) < 9) 1713 - intel_disable_primary_plane(new_crtc_state); 1694 + if (DISPLAY_VER(dev_priv) >= 9) 1695 + skl_pfit_enable(pipe_crtc_state); 1696 + else 1697 + ilk_pfit_enable(pipe_crtc_state); 1714 1698 1715 - hsw_set_linetime_wm(new_crtc_state); 1699 + /* 1700 + * On ILK+ LUT must be loaded before the pipe is running but with 1701 + * clocks enabled 1702 + */ 1703 + intel_color_load_luts(pipe_crtc_state); 1704 + intel_color_commit_noarm(pipe_crtc_state); 1705 + intel_color_commit_arm(pipe_crtc_state); 1706 + /* update DSPCNTR to configure gamma/csc for pipe bottom color */ 1707 + if (DISPLAY_VER(dev_priv) < 9) 1708 + intel_disable_primary_plane(pipe_crtc_state); 1716 1709 1717 - if (DISPLAY_VER(dev_priv) >= 11) 1718 - icl_set_pipe_chicken(new_crtc_state); 1710 + hsw_set_linetime_wm(pipe_crtc_state); 1719 1711 1720 - intel_initial_watermarks(state, crtc); 1712 + if (DISPLAY_VER(dev_priv) >= 11) 1713 + icl_set_pipe_chicken(pipe_crtc_state); 1721 1714 1722 - if (intel_crtc_is_bigjoiner_slave(new_crtc_state)) 1723 - intel_crtc_vblank_on(new_crtc_state); 1715 + intel_initial_watermarks(state, pipe_crtc); 1716 + } 1724 1717 1725 1718 intel_encoders_enable(state, crtc); 1726 1719 1727 - if (psl_clkgate_wa) { 1728 - intel_crtc_wait_for_next_vblank(crtc); 1729 - glk_pipe_scaler_clock_gating_wa(dev_priv, pipe, false); 1730 - } 1720 + for_each_intel_crtc_in_pipe_mask_reverse(&dev_priv->drm, pipe_crtc, 1721 + intel_crtc_joined_pipe_mask(new_crtc_state)) { 1722 + const struct intel_crtc_state *pipe_crtc_state = 1723 + intel_atomic_get_new_crtc_state(state, pipe_crtc); 1724 + enum pipe hsw_workaround_pipe; 1731 1725 1732 - /* If we change the relative order between pipe/planes enabling, we need 1733 - * to change the workaround. 
*/ 1734 - hsw_workaround_pipe = new_crtc_state->hsw_workaround_pipe; 1735 - if (IS_HASWELL(dev_priv) && hsw_workaround_pipe != INVALID_PIPE) { 1736 - struct intel_crtc *wa_crtc; 1726 + if (glk_need_scaler_clock_gating_wa(pipe_crtc_state)) { 1727 + intel_crtc_wait_for_next_vblank(pipe_crtc); 1728 + glk_pipe_scaler_clock_gating_wa(pipe_crtc, false); 1729 + } 1737 1730 1738 - wa_crtc = intel_crtc_for_pipe(dev_priv, hsw_workaround_pipe); 1731 + /* 1732 + * If we change the relative order between pipe/planes 1733 + * enabling, we need to change the workaround. 1734 + */ 1735 + hsw_workaround_pipe = pipe_crtc_state->hsw_workaround_pipe; 1736 + if (IS_HASWELL(dev_priv) && hsw_workaround_pipe != INVALID_PIPE) { 1737 + struct intel_crtc *wa_crtc = 1738 + intel_crtc_for_pipe(dev_priv, hsw_workaround_pipe); 1739 1739 1740 - intel_crtc_wait_for_next_vblank(wa_crtc); 1741 - intel_crtc_wait_for_next_vblank(wa_crtc); 1740 + intel_crtc_wait_for_next_vblank(wa_crtc); 1741 + intel_crtc_wait_for_next_vblank(wa_crtc); 1742 + } 1742 1743 } 1743 1744 } 1744 1745 ··· 1819 1786 const struct intel_crtc_state *old_crtc_state = 1820 1787 intel_atomic_get_old_crtc_state(state, crtc); 1821 1788 struct drm_i915_private *i915 = to_i915(crtc->base.dev); 1789 + struct intel_crtc *pipe_crtc; 1822 1790 1823 1791 /* 1824 1792 * FIXME collapse everything to one hook. 1825 1793 * Need care with mst->ddi interactions. 1826 1794 */ 1827 - if (!intel_crtc_is_bigjoiner_slave(old_crtc_state)) { 1828 - intel_encoders_disable(state, crtc); 1829 - intel_encoders_post_disable(state, crtc); 1795 + intel_encoders_disable(state, crtc); 1796 + intel_encoders_post_disable(state, crtc); 1797 + 1798 + for_each_intel_crtc_in_pipe_mask(&i915->drm, pipe_crtc, 1799 + intel_crtc_joined_pipe_mask(old_crtc_state)) { 1800 + const struct intel_crtc_state *old_pipe_crtc_state = 1801 + intel_atomic_get_old_crtc_state(state, pipe_crtc); 1802 + 1803 + intel_disable_shared_dpll(old_pipe_crtc_state); 1830 1804 } 1831 1805 1832 - intel_disable_shared_dpll(old_crtc_state); 1806 + intel_encoders_post_pll_disable(state, crtc); 1833 1807 1834 - if (!intel_crtc_is_bigjoiner_slave(old_crtc_state)) { 1835 - struct intel_crtc *slave_crtc; 1836 - 1837 - intel_encoders_post_pll_disable(state, crtc); 1838 - 1839 - intel_dmc_disable_pipe(i915, crtc->pipe); 1840 - 1841 - for_each_intel_crtc_in_pipe_mask(&i915->drm, slave_crtc, 1842 - intel_crtc_bigjoiner_slave_pipes(old_crtc_state)) 1843 - intel_dmc_disable_pipe(i915, slave_crtc->pipe); 1844 - } 1808 + for_each_intel_crtc_in_pipe_mask(&i915->drm, pipe_crtc, 1809 + intel_crtc_joined_pipe_mask(old_crtc_state)) 1810 + intel_dmc_disable_pipe(i915, pipe_crtc->pipe); 1845 1811 } 1846 1812 1847 1813 static void i9xx_pfit_enable(const struct intel_crtc_state *crtc_state) ··· 1868 1836 intel_de_write(dev_priv, BCLRPAT(crtc->pipe), 0); 1869 1837 } 1870 1838 1839 + /* Prefer intel_encoder_is_combo() */ 1871 1840 bool intel_phy_is_combo(struct drm_i915_private *dev_priv, enum phy phy) 1872 1841 { 1873 1842 if (phy == PHY_NONE) ··· 1890 1857 return false; 1891 1858 } 1892 1859 1860 + /* Prefer intel_encoder_is_tc() */ 1893 1861 bool intel_phy_is_tc(struct drm_i915_private *dev_priv, enum phy phy) 1894 1862 { 1895 1863 /* ··· 1911 1877 return false; 1912 1878 } 1913 1879 1880 + /* Prefer intel_encoder_is_snps() */ 1914 1881 bool intel_phy_is_snps(struct drm_i915_private *dev_priv, enum phy phy) 1915 1882 { 1916 1883 /* ··· 1921 1886 return IS_DG2(dev_priv) && phy > PHY_NONE && phy <= PHY_E; 1922 1887 } 1923 1888 1889 + /* Prefer 
intel_encoder_to_phy() */ 1924 1890 enum phy intel_port_to_phy(struct drm_i915_private *i915, enum port port) 1925 1891 { 1926 1892 if (DISPLAY_VER(i915) >= 13 && port >= PORT_D_XELPD) ··· 1939 1903 return PHY_A + port - PORT_A; 1940 1904 } 1941 1905 1906 + /* Prefer intel_encoder_to_tc() */ 1942 1907 enum tc_port intel_port_to_tc(struct drm_i915_private *dev_priv, enum port port) 1943 1908 { 1944 1909 if (!intel_phy_is_tc(dev_priv, intel_port_to_phy(dev_priv, port))) ··· 1949 1912 return TC_PORT_1 + port - PORT_TC1; 1950 1913 else 1951 1914 return TC_PORT_1 + port - PORT_C; 1915 + } 1916 + 1917 + enum phy intel_encoder_to_phy(struct intel_encoder *encoder) 1918 + { 1919 + struct drm_i915_private *i915 = to_i915(encoder->base.dev); 1920 + 1921 + return intel_port_to_phy(i915, encoder->port); 1922 + } 1923 + 1924 + bool intel_encoder_is_combo(struct intel_encoder *encoder) 1925 + { 1926 + struct drm_i915_private *i915 = to_i915(encoder->base.dev); 1927 + 1928 + return intel_phy_is_combo(i915, intel_encoder_to_phy(encoder)); 1929 + } 1930 + 1931 + bool intel_encoder_is_snps(struct intel_encoder *encoder) 1932 + { 1933 + struct drm_i915_private *i915 = to_i915(encoder->base.dev); 1934 + 1935 + return intel_phy_is_snps(i915, intel_encoder_to_phy(encoder)); 1936 + } 1937 + 1938 + bool intel_encoder_is_tc(struct intel_encoder *encoder) 1939 + { 1940 + struct drm_i915_private *i915 = to_i915(encoder->base.dev); 1941 + 1942 + return intel_phy_is_tc(i915, intel_encoder_to_phy(encoder)); 1943 + } 1944 + 1945 + enum tc_port intel_encoder_to_tc(struct intel_encoder *encoder) 1946 + { 1947 + struct drm_i915_private *i915 = to_i915(encoder->base.dev); 1948 + 1949 + return intel_port_to_tc(i915, encoder->port); 1952 1950 } 1953 1951 1954 1952 enum intel_display_power_domain ··· 2453 2381 struct drm_i915_private *i915 = to_i915(crtc->base.dev); 2454 2382 struct drm_display_mode *adjusted_mode = &crtc_state->hw.adjusted_mode; 2455 2383 struct drm_display_mode *pipe_mode = &crtc_state->hw.pipe_mode; 2456 - int clock_limit = i915->max_dotclk_freq; 2384 + int clock_limit = i915->display.cdclk.max_dotclk_freq; 2457 2385 2458 2386 /* 2459 2387 * Start with the adjusted_mode crtc timings, which ··· 2477 2405 */ 2478 2406 if (intel_crtc_supports_double_wide(crtc) && 2479 2407 pipe_mode->crtc_clock > clock_limit) { 2480 - clock_limit = i915->max_dotclk_freq; 2408 + clock_limit = i915->display.cdclk.max_dotclk_freq; 2481 2409 crtc_state->double_wide = true; 2482 2410 } 2483 2411 } ··· 2780 2708 * always be the user's requested size. 
2781 2709 */ 2782 2710 intel_de_write(dev_priv, PIPESRC(pipe), 2783 - PIPESRC_WIDTH(width - 1) | PIPESRC_HEIGHT(height - 1)); 2784 - 2785 - if (!crtc_state->enable_psr2_su_region_et) 2786 - return; 2787 - 2788 - width = drm_rect_width(&crtc_state->psr2_su_area); 2789 - height = drm_rect_height(&crtc_state->psr2_su_area); 2790 - 2791 - intel_de_write(dev_priv, PIPE_SRCSZ_ERLY_TPT(pipe), 2792 2711 PIPESRC_WIDTH(width - 1) | PIPESRC_HEIGHT(height - 1)); 2793 2712 } 2794 2713 ··· 4786 4723 struct drm_connector *connector; 4787 4724 int i; 4788 4725 4789 - intel_bigjoiner_adjust_pipe_src(crtc_state); 4790 - 4791 4726 for_each_new_connector_in_state(&state->base, connector, 4792 4727 conn_state, i) { 4793 4728 struct intel_encoder *encoder = ··· 4853 4792 } 4854 4793 4855 4794 static bool 4795 + intel_compare_dp_as_sdp(const struct drm_dp_as_sdp *a, 4796 + const struct drm_dp_as_sdp *b) 4797 + { 4798 + return a->vtotal == b->vtotal && 4799 + a->target_rr == b->target_rr && 4800 + a->duration_incr_ms == b->duration_incr_ms && 4801 + a->duration_decr_ms == b->duration_decr_ms && 4802 + a->mode == b->mode; 4803 + } 4804 + 4805 + static bool 4856 4806 intel_compare_buffer(const u8 *a, const u8 *b, size_t len) 4857 4807 { 4858 4808 return memcmp(a, b, len) == 0; 4859 4809 } 4860 4810 4811 + static void __printf(5, 6) 4812 + pipe_config_mismatch(struct drm_printer *p, bool fastset, 4813 + const struct intel_crtc *crtc, 4814 + const char *name, const char *format, ...) 4815 + { 4816 + struct va_format vaf; 4817 + va_list args; 4818 + 4819 + va_start(args, format); 4820 + vaf.fmt = format; 4821 + vaf.va = &args; 4822 + 4823 + if (fastset) 4824 + drm_printf(p, "[CRTC:%d:%s] fastset requirement not met in %s %pV\n", 4825 + crtc->base.base.id, crtc->base.name, name, &vaf); 4826 + else 4827 + drm_printf(p, "[CRTC:%d:%s] mismatch in %s %pV\n", 4828 + crtc->base.base.id, crtc->base.name, name, &vaf); 4829 + 4830 + va_end(args); 4831 + } 4832 + 4861 4833 static void 4862 - pipe_config_infoframe_mismatch(struct drm_i915_private *dev_priv, 4863 - bool fastset, const char *name, 4834 + pipe_config_infoframe_mismatch(struct drm_printer *p, bool fastset, 4835 + const struct intel_crtc *crtc, 4836 + const char *name, 4864 4837 const union hdmi_infoframe *a, 4865 4838 const union hdmi_infoframe *b) 4866 4839 { 4840 + struct drm_i915_private *i915 = to_i915(crtc->base.dev); 4841 + const char *loglevel; 4842 + 4867 4843 if (fastset) { 4868 4844 if (!drm_debug_enabled(DRM_UT_KMS)) 4869 4845 return; 4870 4846 4871 - drm_dbg_kms(&dev_priv->drm, 4872 - "fastset requirement not met in %s infoframe\n", name); 4873 - drm_dbg_kms(&dev_priv->drm, "expected:\n"); 4874 - hdmi_infoframe_log(KERN_DEBUG, dev_priv->drm.dev, a); 4875 - drm_dbg_kms(&dev_priv->drm, "found:\n"); 4876 - hdmi_infoframe_log(KERN_DEBUG, dev_priv->drm.dev, b); 4847 + loglevel = KERN_DEBUG; 4877 4848 } else { 4878 - drm_err(&dev_priv->drm, "mismatch in %s infoframe\n", name); 4879 - drm_err(&dev_priv->drm, "expected:\n"); 4880 - hdmi_infoframe_log(KERN_ERR, dev_priv->drm.dev, a); 4881 - drm_err(&dev_priv->drm, "found:\n"); 4882 - hdmi_infoframe_log(KERN_ERR, dev_priv->drm.dev, b); 4849 + loglevel = KERN_ERR; 4883 4850 } 4851 + 4852 + pipe_config_mismatch(p, fastset, crtc, name, "infoframe"); 4853 + 4854 + drm_printf(p, "expected:\n"); 4855 + hdmi_infoframe_log(loglevel, i915->drm.dev, a); 4856 + drm_printf(p, "found:\n"); 4857 + hdmi_infoframe_log(loglevel, i915->drm.dev, b); 4884 4858 } 4885 4859 4886 4860 static void 4887 - 
pipe_config_dp_vsc_sdp_mismatch(struct drm_i915_private *i915, 4888 - bool fastset, const char *name, 4861 + pipe_config_dp_vsc_sdp_mismatch(struct drm_printer *p, bool fastset, 4862 + const struct intel_crtc *crtc, 4863 + const char *name, 4889 4864 const struct drm_dp_vsc_sdp *a, 4890 4865 const struct drm_dp_vsc_sdp *b) 4866 + { 4867 + pipe_config_mismatch(p, fastset, crtc, name, "dp sdp"); 4868 + 4869 + drm_printf(p, "expected:\n"); 4870 + drm_dp_vsc_sdp_log(p, a); 4871 + drm_printf(p, "found:\n"); 4872 + drm_dp_vsc_sdp_log(p, b); 4873 + } 4874 + 4875 + static void 4876 + pipe_config_dp_as_sdp_mismatch(struct drm_i915_private *i915, 4877 + bool fastset, const char *name, 4878 + const struct drm_dp_as_sdp *a, 4879 + const struct drm_dp_as_sdp *b) 4891 4880 { 4892 4881 struct drm_printer p; 4893 4882 ··· 4952 4841 } 4953 4842 4954 4843 drm_printf(&p, "expected:\n"); 4955 - drm_dp_vsc_sdp_log(&p, a); 4844 + drm_dp_as_sdp_log(&p, a); 4956 4845 drm_printf(&p, "found:\n"); 4957 - drm_dp_vsc_sdp_log(&p, b); 4846 + drm_dp_as_sdp_log(&p, b); 4958 4847 } 4959 4848 4960 4849 /* Returns the length up to and including the last differing byte */ ··· 4972 4861 } 4973 4862 4974 4863 static void 4975 - pipe_config_buffer_mismatch(bool fastset, const struct intel_crtc *crtc, 4864 + pipe_config_buffer_mismatch(struct drm_printer *p, bool fastset, 4865 + const struct intel_crtc *crtc, 4976 4866 const char *name, 4977 4867 const u8 *a, const u8 *b, size_t len) 4978 4868 { 4979 - struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); 4869 + const char *loglevel; 4980 4870 4981 4871 if (fastset) { 4982 4872 if (!drm_debug_enabled(DRM_UT_KMS)) 4983 4873 return; 4984 4874 4985 - /* only dump up to the last difference */ 4986 - len = memcmp_diff_len(a, b, len); 4987 - 4988 - drm_dbg_kms(&dev_priv->drm, 4989 - "[CRTC:%d:%s] fastset requirement not met in %s buffer\n", 4990 - crtc->base.base.id, crtc->base.name, name); 4991 - print_hex_dump(KERN_DEBUG, "expected: ", DUMP_PREFIX_NONE, 4992 - 16, 0, a, len, false); 4993 - print_hex_dump(KERN_DEBUG, "found: ", DUMP_PREFIX_NONE, 4994 - 16, 0, b, len, false); 4875 + loglevel = KERN_DEBUG; 4995 4876 } else { 4996 - /* only dump up to the last difference */ 4997 - len = memcmp_diff_len(a, b, len); 4998 - 4999 - drm_err(&dev_priv->drm, "[CRTC:%d:%s] mismatch in %s buffer\n", 5000 - crtc->base.base.id, crtc->base.name, name); 5001 - print_hex_dump(KERN_ERR, "expected: ", DUMP_PREFIX_NONE, 5002 - 16, 0, a, len, false); 5003 - print_hex_dump(KERN_ERR, "found: ", DUMP_PREFIX_NONE, 5004 - 16, 0, b, len, false); 4877 + loglevel = KERN_ERR; 5005 4878 } 5006 - } 5007 4879 5008 - static void __printf(4, 5) 5009 - pipe_config_mismatch(bool fastset, const struct intel_crtc *crtc, 5010 - const char *name, const char *format, ...) 
5011 - { 5012 - struct drm_i915_private *i915 = to_i915(crtc->base.dev); 5013 - struct va_format vaf; 5014 - va_list args; 4880 + pipe_config_mismatch(p, fastset, crtc, name, "buffer"); 5015 4881 5016 - va_start(args, format); 5017 - vaf.fmt = format; 5018 - vaf.va = &args; 4882 + /* only dump up to the last difference */ 4883 + len = memcmp_diff_len(a, b, len); 5019 4884 5020 - if (fastset) 5021 - drm_dbg_kms(&i915->drm, 5022 - "[CRTC:%d:%s] fastset requirement not met in %s %pV\n", 5023 - crtc->base.base.id, crtc->base.name, name, &vaf); 5024 - else 5025 - drm_err(&i915->drm, "[CRTC:%d:%s] mismatch in %s %pV\n", 5026 - crtc->base.base.id, crtc->base.name, name, &vaf); 5027 - 5028 - va_end(args); 4885 + print_hex_dump(loglevel, "expected: ", DUMP_PREFIX_NONE, 4886 + 16, 0, a, len, false); 4887 + print_hex_dump(loglevel, "found: ", DUMP_PREFIX_NONE, 4888 + 16, 0, b, len, false); 5029 4889 } 5030 4890 5031 4891 static void 5032 - pipe_config_pll_mismatch(bool fastset, 4892 + pipe_config_pll_mismatch(struct drm_printer *p, bool fastset, 5033 4893 const struct intel_crtc *crtc, 5034 4894 const char *name, 5035 4895 const struct intel_dpll_hw_state *a, ··· 5008 4926 { 5009 4927 struct drm_i915_private *i915 = to_i915(crtc->base.dev); 5010 4928 5011 - if (fastset) { 5012 - if (!drm_debug_enabled(DRM_UT_KMS)) 5013 - return; 4929 + pipe_config_mismatch(p, fastset, crtc, name, " "); /* stupid -Werror=format-zero-length */ 5014 4930 5015 - drm_dbg_kms(&i915->drm, 5016 - "[CRTC:%d:%s] fastset requirement not met in %s\n", 5017 - crtc->base.base.id, crtc->base.name, name); 5018 - drm_dbg_kms(&i915->drm, "expected:\n"); 5019 - intel_dpll_dump_hw_state(i915, a); 5020 - drm_dbg_kms(&i915->drm, "found:\n"); 5021 - intel_dpll_dump_hw_state(i915, b); 5022 - } else { 5023 - drm_err(&i915->drm, "[CRTC:%d:%s] mismatch in %s buffer\n", 5024 - crtc->base.base.id, crtc->base.name, name); 5025 - drm_err(&i915->drm, "expected:\n"); 5026 - intel_dpll_dump_hw_state(i915, a); 5027 - drm_err(&i915->drm, "found:\n"); 5028 - intel_dpll_dump_hw_state(i915, b); 5029 - } 4931 + drm_printf(p, "expected:\n"); 4932 + intel_dpll_dump_hw_state(i915, p, a); 4933 + drm_printf(p, "found:\n"); 4934 + intel_dpll_dump_hw_state(i915, p, b); 5030 4935 } 5031 4936 5032 4937 bool ··· 5023 4954 { 5024 4955 struct drm_i915_private *dev_priv = to_i915(current_config->uapi.crtc->dev); 5025 4956 struct intel_crtc *crtc = to_intel_crtc(pipe_config->uapi.crtc); 4957 + struct drm_printer p; 5026 4958 bool ret = true; 4959 + 4960 + if (fastset) 4961 + p = drm_dbg_printer(&dev_priv->drm, DRM_UT_KMS, NULL); 4962 + else 4963 + p = drm_err_printer(&dev_priv->drm, NULL); 5027 4964 5028 4965 #define PIPE_CONF_CHECK_X(name) do { \ 5029 4966 if (current_config->name != pipe_config->name) { \ 5030 4967 BUILD_BUG_ON_MSG(__same_type(current_config->name, bool), \ 5031 4968 __stringify(name) " is bool"); \ 5032 - pipe_config_mismatch(fastset, crtc, __stringify(name), \ 4969 + pipe_config_mismatch(&p, fastset, crtc, __stringify(name), \ 5033 4970 "(expected 0x%08x, found 0x%08x)", \ 5034 4971 current_config->name, \ 5035 4972 pipe_config->name); \ ··· 5047 4972 if ((current_config->name & (mask)) != (pipe_config->name & (mask))) { \ 5048 4973 BUILD_BUG_ON_MSG(__same_type(current_config->name, bool), \ 5049 4974 __stringify(name) " is bool"); \ 5050 - pipe_config_mismatch(fastset, crtc, __stringify(name), \ 4975 + pipe_config_mismatch(&p, fastset, crtc, __stringify(name), \ 5051 4976 "(expected 0x%08x, found 0x%08x)", \ 5052 4977 current_config->name & (mask), 
\ 5053 4978 pipe_config->name & (mask)); \ ··· 5059 4984 if (current_config->name != pipe_config->name) { \ 5060 4985 BUILD_BUG_ON_MSG(__same_type(current_config->name, bool), \ 5061 4986 __stringify(name) " is bool"); \ 5062 - pipe_config_mismatch(fastset, crtc, __stringify(name), \ 4987 + pipe_config_mismatch(&p, fastset, crtc, __stringify(name), \ 5063 4988 "(expected %i, found %i)", \ 5064 4989 current_config->name, \ 5065 4990 pipe_config->name); \ ··· 5071 4996 if (current_config->name != pipe_config->name) { \ 5072 4997 BUILD_BUG_ON_MSG(!__same_type(current_config->name, bool), \ 5073 4998 __stringify(name) " is not bool"); \ 5074 - pipe_config_mismatch(fastset, crtc, __stringify(name), \ 4999 + pipe_config_mismatch(&p, fastset, crtc, __stringify(name), \ 5075 5000 "(expected %s, found %s)", \ 5076 5001 str_yes_no(current_config->name), \ 5077 5002 str_yes_no(pipe_config->name)); \ ··· 5081 5006 5082 5007 #define PIPE_CONF_CHECK_P(name) do { \ 5083 5008 if (current_config->name != pipe_config->name) { \ 5084 - pipe_config_mismatch(fastset, crtc, __stringify(name), \ 5009 + pipe_config_mismatch(&p, fastset, crtc, __stringify(name), \ 5085 5010 "(expected %p, found %p)", \ 5086 5011 current_config->name, \ 5087 5012 pipe_config->name); \ ··· 5092 5017 #define PIPE_CONF_CHECK_M_N(name) do { \ 5093 5018 if (!intel_compare_link_m_n(&current_config->name, \ 5094 5019 &pipe_config->name)) { \ 5095 - pipe_config_mismatch(fastset, crtc, __stringify(name), \ 5020 + pipe_config_mismatch(&p, fastset, crtc, __stringify(name), \ 5096 5021 "(expected tu %i data %i/%i link %i/%i, " \ 5097 5022 "found tu %i, data %i/%i link %i/%i)", \ 5098 5023 current_config->name.tu, \ ··· 5112 5037 #define PIPE_CONF_CHECK_PLL(name) do { \ 5113 5038 if (!intel_dpll_compare_hw_state(dev_priv, &current_config->name, \ 5114 5039 &pipe_config->name)) { \ 5115 - pipe_config_pll_mismatch(fastset, crtc, __stringify(name), \ 5040 + pipe_config_pll_mismatch(&p, fastset, crtc, __stringify(name), \ 5116 5041 &current_config->name, \ 5117 5042 &pipe_config->name); \ 5118 5043 ret = false; \ ··· 5145 5070 5146 5071 #define PIPE_CONF_CHECK_FLAGS(name, mask) do { \ 5147 5072 if ((current_config->name ^ pipe_config->name) & (mask)) { \ 5148 - pipe_config_mismatch(fastset, crtc, __stringify(name), \ 5073 + pipe_config_mismatch(&p, fastset, crtc, __stringify(name), \ 5149 5074 "(%x) (expected %i, found %i)", \ 5150 5075 (mask), \ 5151 5076 current_config->name & (mask), \ ··· 5157 5082 #define PIPE_CONF_CHECK_INFOFRAME(name) do { \ 5158 5083 if (!intel_compare_infoframe(&current_config->infoframes.name, \ 5159 5084 &pipe_config->infoframes.name)) { \ 5160 - pipe_config_infoframe_mismatch(dev_priv, fastset, __stringify(name), \ 5085 + pipe_config_infoframe_mismatch(&p, fastset, crtc, __stringify(name), \ 5161 5086 &current_config->infoframes.name, \ 5162 5087 &pipe_config->infoframes.name); \ 5163 5088 ret = false; \ ··· 5167 5092 #define PIPE_CONF_CHECK_DP_VSC_SDP(name) do { \ 5168 5093 if (!intel_compare_dp_vsc_sdp(&current_config->infoframes.name, \ 5169 5094 &pipe_config->infoframes.name)) { \ 5170 - pipe_config_dp_vsc_sdp_mismatch(dev_priv, fastset, __stringify(name), \ 5095 + pipe_config_dp_vsc_sdp_mismatch(&p, fastset, crtc, __stringify(name), \ 5096 + &current_config->infoframes.name, \ 5097 + &pipe_config->infoframes.name); \ 5098 + ret = false; \ 5099 + } \ 5100 + } while (0) 5101 + 5102 + #define PIPE_CONF_CHECK_DP_AS_SDP(name) do { \ 5103 + if (!intel_compare_dp_as_sdp(&current_config->infoframes.name, \ 5104 + 
&pipe_config->infoframes.name)) { \ 5105 + pipe_config_dp_as_sdp_mismatch(dev_priv, fastset, __stringify(name), \ 5171 5106 &current_config->infoframes.name, \ 5172 5107 &pipe_config->infoframes.name); \ 5173 5108 ret = false; \ ··· 5188 5103 BUILD_BUG_ON(sizeof(current_config->name) != (len)); \ 5189 5104 BUILD_BUG_ON(sizeof(pipe_config->name) != (len)); \ 5190 5105 if (!intel_compare_buffer(current_config->name, pipe_config->name, (len))) { \ 5191 - pipe_config_buffer_mismatch(fastset, crtc, __stringify(name), \ 5106 + pipe_config_buffer_mismatch(&p, fastset, crtc, __stringify(name), \ 5192 5107 current_config->name, \ 5193 5108 pipe_config->name, \ 5194 5109 (len)); \ ··· 5201 5116 !intel_color_lut_equal(current_config, \ 5202 5117 current_config->lut, pipe_config->lut, \ 5203 5118 is_pre_csc_lut)) { \ 5204 - pipe_config_mismatch(fastset, crtc, __stringify(lut), \ 5119 + pipe_config_mismatch(&p, fastset, crtc, __stringify(lut), \ 5205 5120 "hw_state doesn't match sw_state"); \ 5206 5121 ret = false; \ 5207 5122 } \ ··· 5330 5245 PIPE_CONF_CHECK_CSC(output_csc); 5331 5246 } 5332 5247 5248 + /* 5249 + * Panel replay has to be enabled before link training. PSR doesn't have 5250 + * this requirement -> check these only if using panel replay 5251 + */ 5252 + if (current_config->has_panel_replay || pipe_config->has_panel_replay) { 5253 + PIPE_CONF_CHECK_BOOL(has_psr); 5254 + PIPE_CONF_CHECK_BOOL(has_psr2); 5255 + PIPE_CONF_CHECK_BOOL(enable_psr2_sel_fetch); 5256 + PIPE_CONF_CHECK_BOOL(enable_psr2_su_region_et); 5257 + PIPE_CONF_CHECK_BOOL(has_panel_replay); 5258 + } 5259 + 5333 5260 PIPE_CONF_CHECK_BOOL(double_wide); 5334 5261 5335 5262 if (dev_priv->display.dpll.mgr) ··· 5377 5280 PIPE_CONF_CHECK_INFOFRAME(hdmi); 5378 5281 PIPE_CONF_CHECK_INFOFRAME(drm); 5379 5282 PIPE_CONF_CHECK_DP_VSC_SDP(vsc); 5283 + PIPE_CONF_CHECK_DP_AS_SDP(as_sdp); 5380 5284 5381 5285 PIPE_CONF_CHECK_X(sync_mode_slaves_mask); 5382 5286 PIPE_CONF_CHECK_I(master_transcoder); ··· 5429 5331 PIPE_CONF_CHECK_I(vrr.flipline); 5430 5332 PIPE_CONF_CHECK_I(vrr.pipeline_full); 5431 5333 PIPE_CONF_CHECK_I(vrr.guardband); 5334 + PIPE_CONF_CHECK_I(vrr.vsync_start); 5335 + PIPE_CONF_CHECK_I(vrr.vsync_end); 5432 5336 } 5433 5337 5434 5338 #undef PIPE_CONF_CHECK_X ··· 5676 5576 static void intel_crtc_check_fastset(const struct intel_crtc_state *old_crtc_state, 5677 5577 struct intel_crtc_state *new_crtc_state) 5678 5578 { 5679 - struct drm_i915_private *i915 = to_i915(old_crtc_state->uapi.crtc->dev); 5579 + struct intel_crtc *crtc = to_intel_crtc(new_crtc_state->uapi.crtc); 5580 + struct drm_i915_private *i915 = to_i915(crtc->base.dev); 5680 5581 5681 5582 /* only allow LRR when the timings stay within the VRR range */ 5682 5583 if (old_crtc_state->vrr.in_range != new_crtc_state->vrr.in_range) 5683 5584 new_crtc_state->update_lrr = false; 5684 5585 5685 5586 if (!intel_pipe_config_compare(old_crtc_state, new_crtc_state, true)) 5686 - drm_dbg_kms(&i915->drm, "fastset requirement not met, forcing full modeset\n"); 5587 + drm_dbg_kms(&i915->drm, "[CRTC:%d:%s] fastset requirement not met, forcing full modeset\n", 5588 + crtc->base.base.id, crtc->base.name); 5687 5589 else 5688 5590 new_crtc_state->uapi.mode_changed = false; 5689 5591 ··· 6339 6237 continue; 6340 6238 } 6341 6239 6342 - if (intel_crtc_is_bigjoiner_slave(new_crtc_state)) { 6343 - drm_WARN_ON(&i915->drm, new_crtc_state->uapi.enable); 6240 + if (drm_WARN_ON(&i915->drm, intel_crtc_is_bigjoiner_slave(new_crtc_state))) 6344 6241 continue; 6345 - } 6346 6242 6347 6243 ret = 
intel_crtc_prepare_cleared_state(state, crtc); 6348 6244 if (ret) 6349 - break; 6245 + goto fail; 6350 6246 6351 6247 if (!new_crtc_state->hw.enable) 6352 6248 continue; 6353 6249 6354 6250 ret = intel_modeset_pipe_config(state, crtc, limits); 6355 6251 if (ret) 6356 - break; 6357 - 6358 - ret = intel_atomic_check_bigjoiner(state, crtc); 6359 - if (ret) 6360 - break; 6252 + goto fail; 6361 6253 } 6362 6254 6255 + for_each_new_intel_crtc_in_state(state, crtc, new_crtc_state, i) { 6256 + if (!intel_crtc_needs_modeset(new_crtc_state)) 6257 + continue; 6258 + 6259 + if (drm_WARN_ON(&i915->drm, intel_crtc_is_bigjoiner_slave(new_crtc_state))) 6260 + continue; 6261 + 6262 + if (!new_crtc_state->hw.enable) 6263 + continue; 6264 + 6265 + ret = intel_modeset_pipe_config_late(state, crtc); 6266 + if (ret) 6267 + goto fail; 6268 + } 6269 + 6270 + fail: 6363 6271 if (ret) 6364 6272 *failed_pipe = crtc->pipe; 6365 6273 ··· 6465 6353 if (ret) 6466 6354 goto fail; 6467 6355 6356 + for_each_new_intel_crtc_in_state(state, crtc, new_crtc_state, i) { 6357 + if (!intel_crtc_needs_modeset(new_crtc_state)) 6358 + continue; 6359 + 6360 + if (intel_crtc_is_bigjoiner_slave(new_crtc_state)) { 6361 + drm_WARN_ON(&dev_priv->drm, new_crtc_state->uapi.enable); 6362 + continue; 6363 + } 6364 + 6365 + ret = intel_atomic_check_bigjoiner(state, crtc); 6366 + if (ret) 6367 + goto fail; 6368 + } 6369 + 6468 6370 for_each_oldnew_intel_crtc_in_state(state, crtc, old_crtc_state, 6469 6371 new_crtc_state, i) { 6470 6372 if (!intel_crtc_needs_modeset(new_crtc_state)) 6471 6373 continue; 6472 6374 6473 - if (new_crtc_state->hw.enable) { 6474 - ret = intel_modeset_pipe_config_late(state, crtc); 6475 - if (ret) 6476 - goto fail; 6477 - } 6375 + intel_bigjoiner_adjust_pipe_src(new_crtc_state); 6478 6376 6479 6377 intel_crtc_check_fastset(old_crtc_state, new_crtc_state); 6480 6378 } ··· 6766 6644 struct drm_i915_private *dev_priv = to_i915(state->base.dev); 6767 6645 const struct intel_crtc_state *new_crtc_state = 6768 6646 intel_atomic_get_new_crtc_state(state, crtc); 6647 + struct intel_crtc *pipe_crtc; 6769 6648 6770 6649 if (!intel_crtc_needs_modeset(new_crtc_state)) 6771 6650 return; 6772 6651 6773 - /* VRR will be enable later, if required */ 6774 - intel_crtc_update_active_timings(new_crtc_state, false); 6652 + for_each_intel_crtc_in_pipe_mask_reverse(&dev_priv->drm, pipe_crtc, 6653 + intel_crtc_joined_pipe_mask(new_crtc_state)) { 6654 + const struct intel_crtc_state *pipe_crtc_state = 6655 + intel_atomic_get_new_crtc_state(state, pipe_crtc); 6656 + 6657 + /* VRR will be enable later, if required */ 6658 + intel_crtc_update_active_timings(pipe_crtc_state, false); 6659 + } 6775 6660 6776 6661 dev_priv->display.funcs.display->crtc_enable(state, crtc); 6777 - 6778 - if (intel_crtc_is_bigjoiner_slave(new_crtc_state)) 6779 - return; 6780 6662 6781 6663 /* vblanks work again, re-enable pipe CRC. */ 6782 6664 intel_crtc_enable_pipe_crc(crtc); ··· 6872 6746 } 6873 6747 6874 6748 static void intel_old_crtc_state_disables(struct intel_atomic_state *state, 6875 - struct intel_crtc_state *old_crtc_state, 6876 - struct intel_crtc_state *new_crtc_state, 6877 6749 struct intel_crtc *crtc) 6878 6750 { 6879 6751 struct drm_i915_private *dev_priv = to_i915(state->base.dev); 6752 + const struct intel_crtc_state *old_crtc_state = 6753 + intel_atomic_get_old_crtc_state(state, crtc); 6754 + struct intel_crtc *pipe_crtc; 6880 6755 6881 6756 /* 6882 6757 * We need to disable pipe CRC before disabling the pipe, 6883 6758 * or we race against vblank off. 
6884 6759 */ 6885 - intel_crtc_disable_pipe_crc(crtc); 6760 + for_each_intel_crtc_in_pipe_mask(&dev_priv->drm, pipe_crtc, 6761 + intel_crtc_joined_pipe_mask(old_crtc_state)) 6762 + intel_crtc_disable_pipe_crc(pipe_crtc); 6886 6763 6887 6764 dev_priv->display.funcs.display->crtc_disable(state, crtc); 6888 - crtc->active = false; 6889 - intel_fbc_disable(crtc); 6890 6765 6891 - if (!new_crtc_state->hw.active) 6892 - intel_initial_watermarks(state, crtc); 6766 + for_each_intel_crtc_in_pipe_mask(&dev_priv->drm, pipe_crtc, 6767 + intel_crtc_joined_pipe_mask(old_crtc_state)) { 6768 + const struct intel_crtc_state *new_pipe_crtc_state = 6769 + intel_atomic_get_new_crtc_state(state, pipe_crtc); 6770 + 6771 + pipe_crtc->active = false; 6772 + intel_fbc_disable(pipe_crtc); 6773 + 6774 + if (!new_pipe_crtc_state->hw.active) 6775 + intel_initial_watermarks(state, pipe_crtc); 6776 + } 6893 6777 } 6894 6778 6895 6779 static void intel_commit_modeset_disables(struct intel_atomic_state *state) 6896 6780 { 6897 - struct intel_crtc_state *new_crtc_state, *old_crtc_state; 6781 + struct drm_i915_private *i915 = to_i915(state->base.dev); 6782 + const struct intel_crtc_state *new_crtc_state, *old_crtc_state; 6898 6783 struct intel_crtc *crtc; 6899 - u32 handled = 0; 6784 + u8 disable_pipes = 0; 6900 6785 int i; 6901 6786 6902 6787 for_each_oldnew_intel_crtc_in_state(state, crtc, old_crtc_state, ··· 6915 6778 if (!intel_crtc_needs_modeset(new_crtc_state)) 6916 6779 continue; 6917 6780 6781 + /* 6782 + * Needs to be done even for pipes 6783 + * that weren't enabled previously. 6784 + */ 6918 6785 intel_pre_plane_update(state, crtc); 6919 6786 6920 6787 if (!old_crtc_state->hw.active) 6788 + continue; 6789 + 6790 + disable_pipes |= BIT(crtc->pipe); 6791 + } 6792 + 6793 + for_each_old_intel_crtc_in_state(state, crtc, old_crtc_state, i) { 6794 + if ((disable_pipes & BIT(crtc->pipe)) == 0) 6921 6795 continue; 6922 6796 6923 6797 intel_crtc_disable_planes(state, crtc); 6924 6798 } 6925 6799 6926 6800 /* Only disable port sync and MST slaves */ 6927 - for_each_oldnew_intel_crtc_in_state(state, crtc, old_crtc_state, 6928 - new_crtc_state, i) { 6929 - if (!intel_crtc_needs_modeset(new_crtc_state)) 6801 + for_each_old_intel_crtc_in_state(state, crtc, old_crtc_state, i) { 6802 + if ((disable_pipes & BIT(crtc->pipe)) == 0) 6930 6803 continue; 6931 6804 6932 - if (!old_crtc_state->hw.active) 6805 + if (intel_crtc_is_bigjoiner_slave(old_crtc_state)) 6933 6806 continue; 6934 6807 6935 6808 /* In case of Transcoder port Sync master slave CRTCs can be ··· 6948 6801 * Slave vblanks are masked till Master Vblanks. 
6949 6802 */ 6950 6803 if (!is_trans_port_sync_slave(old_crtc_state) && 6951 - !intel_dp_mst_is_slave_trans(old_crtc_state) && 6952 - !intel_crtc_is_bigjoiner_slave(old_crtc_state)) 6804 + !intel_dp_mst_is_slave_trans(old_crtc_state)) 6953 6805 continue; 6954 6806 6955 - intel_old_crtc_state_disables(state, old_crtc_state, 6956 - new_crtc_state, crtc); 6957 - handled |= BIT(crtc->pipe); 6807 + intel_old_crtc_state_disables(state, crtc); 6808 + 6809 + disable_pipes &= ~intel_crtc_joined_pipe_mask(old_crtc_state); 6958 6810 } 6959 6811 6960 6812 /* Disable everything else left on */ 6961 - for_each_oldnew_intel_crtc_in_state(state, crtc, old_crtc_state, 6962 - new_crtc_state, i) { 6963 - if (!intel_crtc_needs_modeset(new_crtc_state) || 6964 - (handled & BIT(crtc->pipe))) 6813 + for_each_old_intel_crtc_in_state(state, crtc, old_crtc_state, i) { 6814 + if ((disable_pipes & BIT(crtc->pipe)) == 0) 6965 6815 continue; 6966 6816 6967 - if (!old_crtc_state->hw.active) 6817 + if (intel_crtc_is_bigjoiner_slave(old_crtc_state)) 6968 6818 continue; 6969 6819 6970 - intel_old_crtc_state_disables(state, old_crtc_state, 6971 - new_crtc_state, crtc); 6820 + intel_old_crtc_state_disables(state, crtc); 6821 + 6822 + disable_pipes &= ~intel_crtc_joined_pipe_mask(old_crtc_state); 6972 6823 } 6824 + 6825 + drm_WARN_ON(&i915->drm, disable_pipes); 6973 6826 } 6974 6827 6975 6828 static void intel_commit_modeset_enables(struct intel_atomic_state *state) ··· 7036 6889 intel_pre_update_crtc(state, crtc); 7037 6890 } 7038 6891 6892 + intel_dbuf_mbus_pre_ddb_update(state); 6893 + 7039 6894 while (update_pipes) { 7040 - for_each_oldnew_intel_crtc_in_state(state, crtc, old_crtc_state, 7041 - new_crtc_state, i) { 6895 + /* 6896 + * Commit in reverse order to make bigjoiner master 6897 + * send the uapi events after slaves are done. 6898 + */ 6899 + for_each_oldnew_intel_crtc_in_state_reverse(state, crtc, old_crtc_state, 6900 + new_crtc_state, i) { 7042 6901 enum pipe pipe = crtc->pipe; 7043 6902 7044 6903 if ((update_pipes & BIT(pipe)) == 0) ··· 7072 6919 } 7073 6920 } 7074 6921 6922 + intel_dbuf_mbus_post_ddb_update(state); 6923 + 7075 6924 update_pipes = modeset_pipes; 7076 6925 7077 6926 /* ··· 7086 6931 if ((modeset_pipes & BIT(pipe)) == 0) 7087 6932 continue; 7088 6933 7089 - if (intel_dp_mst_is_slave_trans(new_crtc_state) || 7090 - is_trans_port_sync_master(new_crtc_state) || 7091 - intel_crtc_is_bigjoiner_master(new_crtc_state)) 6934 + if (intel_crtc_is_bigjoiner_slave(new_crtc_state)) 7092 6935 continue; 7093 6936 7094 - modeset_pipes &= ~BIT(pipe); 6937 + if (intel_dp_mst_is_slave_trans(new_crtc_state) || 6938 + is_trans_port_sync_master(new_crtc_state)) 6939 + continue; 6940 + 6941 + modeset_pipes &= ~intel_crtc_joined_pipe_mask(new_crtc_state); 7095 6942 7096 6943 intel_enable_crtc(state, crtc); 7097 6944 } ··· 7108 6951 if ((modeset_pipes & BIT(pipe)) == 0) 7109 6952 continue; 7110 6953 7111 - modeset_pipes &= ~BIT(pipe); 6954 + if (intel_crtc_is_bigjoiner_slave(new_crtc_state)) 6955 + continue; 6956 + 6957 + modeset_pipes &= ~intel_crtc_joined_pipe_mask(new_crtc_state); 7112 6958 7113 6959 intel_enable_crtc(state, crtc); 7114 6960 } ··· 7128 6968 intel_pre_update_crtc(state, crtc); 7129 6969 } 7130 6970 7131 - for_each_new_intel_crtc_in_state(state, crtc, new_crtc_state, i) { 6971 + /* 6972 + * Commit in reverse order to make bigjoiner master 6973 + * send the uapi events after slaves are done. 
6974 + */ 6975 + for_each_new_intel_crtc_in_state_reverse(state, crtc, new_crtc_state, i) { 7132 6976 enum pipe pipe = crtc->pipe; 7133 6977 7134 6978 if ((update_pipes & BIT(pipe)) == 0) ··· 7329 7165 intel_encoders_update_prepare(state); 7330 7166 7331 7167 intel_dbuf_pre_plane_update(state); 7332 - intel_mbus_dbox_update(state); 7333 7168 7334 7169 for_each_new_intel_crtc_in_state(state, crtc, new_crtc_state, i) { 7335 7170 if (new_crtc_state->do_async_flip) ··· 7853 7690 7854 7691 static int max_dotclock(struct drm_i915_private *i915) 7855 7692 { 7856 - int max_dotclock = i915->max_dotclk_freq; 7693 + int max_dotclock = i915->display.cdclk.max_dotclk_freq; 7857 7694 7858 7695 /* icl+ might use bigjoiner */ 7859 7696 if (DISPLAY_VER(i915) >= 11)
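The hunks above replace the per-callsite drm_dbg_kms()/drm_err() pairs in the pipe config checker with a single struct drm_printer that intel_pipe_config_compare() picks once (a debug printer for fastset checks, an error printer for full state verification). A minimal sketch of that pattern, outside the patch and with hypothetical names, assuming only the drm_printer helpers visible in the diff:

/* Illustrative sketch only; report_mismatch() and its caller are hypothetical. */
static void report_mismatch(struct drm_printer *p, const struct intel_crtc *crtc,
			    const char *name, u32 expected, u32 found)
{
	drm_printf(p, "[CRTC:%d:%s] mismatch in %s (expected 0x%08x, found 0x%08x)\n",
		   crtc->base.base.id, crtc->base.name, name, expected, found);
}

static bool check_state(struct drm_i915_private *i915, struct intel_crtc *crtc, bool fastset)
{
	/* Severity is decided once; every helper below just calls drm_printf(). */
	struct drm_printer p = fastset ?
		drm_dbg_printer(&i915->drm, DRM_UT_KMS, NULL) :
		drm_err_printer(&i915->drm, NULL);

	report_mismatch(&p, crtc, "hw.pipe_src_w", 1920, 2048);

	return false;
}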
+22
drivers/gpu/drm/i915/display/intel_display.h
··· 280 280 base.head) \ 281 281 for_each_if((pipe_mask) & BIT(intel_crtc->pipe)) 282 282 283 + #define for_each_intel_crtc_in_pipe_mask_reverse(dev, intel_crtc, pipe_mask) \ 284 + list_for_each_entry_reverse((intel_crtc), \ 285 + &(dev)->mode_config.crtc_list, \ 286 + base.head) \ 287 + for_each_if((pipe_mask) & BIT((intel_crtc)->pipe)) 288 + 283 289 #define for_each_intel_encoder(dev, intel_encoder) \ 284 290 list_for_each_entry(intel_encoder, \ 285 291 &(dev)->mode_config.encoder_list, \ ··· 348 342 ((crtc) = to_intel_crtc((__state)->base.crtcs[__i].ptr), \ 349 343 (new_crtc_state) = to_intel_crtc_state((__state)->base.crtcs[__i].new_state), 1); \ 350 344 (__i)++) \ 345 + for_each_if(crtc) 346 + 347 + #define for_each_new_intel_crtc_in_state_reverse(__state, crtc, new_crtc_state, __i) \ 348 + for ((__i) = (__state)->base.dev->mode_config.num_crtc - 1; \ 349 + (__i) >= 0 && \ 350 + ((crtc) = to_intel_crtc((__state)->base.crtcs[__i].ptr), \ 351 + (new_crtc_state) = to_intel_crtc_state((__state)->base.crtcs[__i].new_state), 1); \ 352 + (__i)--) \ 351 353 for_each_if(crtc) 352 354 353 355 #define for_each_oldnew_intel_plane_in_state(__state, plane, old_plane_state, new_plane_state, __i) \ ··· 422 408 enum phy intel_port_to_phy(struct drm_i915_private *i915, enum port port); 423 409 bool is_trans_port_sync_mode(const struct intel_crtc_state *state); 424 410 bool is_trans_port_sync_master(const struct intel_crtc_state *state); 411 + u8 intel_crtc_joined_pipe_mask(const struct intel_crtc_state *crtc_state); 425 412 bool intel_crtc_is_bigjoiner_slave(const struct intel_crtc_state *crtc_state); 426 413 bool intel_crtc_is_bigjoiner_master(const struct intel_crtc_state *crtc_state); 427 414 u8 intel_crtc_bigjoiner_slave_pipes(const struct intel_crtc_state *crtc_state); ··· 463 448 bool intel_phy_is_snps(struct drm_i915_private *dev_priv, enum phy phy); 464 449 enum tc_port intel_port_to_tc(struct drm_i915_private *dev_priv, 465 450 enum port port); 451 + 452 + enum phy intel_encoder_to_phy(struct intel_encoder *encoder); 453 + bool intel_encoder_is_combo(struct intel_encoder *encoder); 454 + bool intel_encoder_is_snps(struct intel_encoder *encoder); 455 + bool intel_encoder_is_tc(struct intel_encoder *encoder); 456 + enum tc_port intel_encoder_to_tc(struct intel_encoder *encoder); 457 + 466 458 int intel_get_pipe_from_crtc_id_ioctl(struct drm_device *dev, void *data, 467 459 struct drm_file *file_priv); 468 460
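The new reverse iterators and intel_crtc_joined_pipe_mask() declared above back the bigjoiner commit-ordering changes in intel_display.c. A hedged sketch of how they compose; the loop body is illustrative and not taken from the patch:

	struct intel_crtc *tmp;
	/* Presumably the master pipe plus any bigjoiner slave pipes. */
	u8 pipes = intel_crtc_joined_pipe_mask(crtc_state);

	/*
	 * Iterating in reverse lets the bigjoiner master be handled last,
	 * e.g. so it sends its uapi events only after the slaves are done.
	 */
	for_each_intel_crtc_in_pipe_mask_reverse(&i915->drm, tmp, pipes)
		drm_dbg_kms(&i915->drm, "[CRTC:%d:%s] member of the joined pipe set\n",
			    tmp->base.base.id, tmp->base.name);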
+14
drivers/gpu/drm/i915/display/intel_display_core.h
··· 26 26 #include "intel_global_state.h" 27 27 #include "intel_gmbus.h" 28 28 #include "intel_opregion.h" 29 + #include "intel_dmc_wl.h" 29 30 #include "intel_wm_types.h" 30 31 31 32 struct task_struct; ··· 346 345 struct intel_global_obj obj; 347 346 348 347 unsigned int max_cdclk_freq; 348 + unsigned int max_dotclk_freq; 349 + unsigned int skl_preferred_vco_freq; 349 350 } cdclk; 350 351 351 352 struct { ··· 449 446 } ips; 450 447 451 448 struct { 449 + bool display_irqs_enabled; 450 + 451 + /* For i915gm/i945gm vblank irq workaround */ 452 + u8 vblank_enabled; 453 + 454 + u32 de_irq_mask[I915_MAX_PIPES]; 455 + u32 pipestat_irq_mask[I915_MAX_PIPES]; 456 + } irq; 457 + 458 + struct { 452 459 wait_queue_head_t waitqueue; 453 460 454 461 /* mutex to protect pmdemand programming sequence */ ··· 547 534 struct intel_overlay *overlay; 548 535 struct intel_display_params params; 549 536 struct intel_vbt_data vbt; 537 + struct intel_dmc_wl wl; 550 538 struct intel_wm wm; 551 539 }; 552 540
+8 -79
drivers/gpu/drm/i915/display/intel_display_debugfs.c
··· 31 31 #include "intel_hdmi.h" 32 32 #include "intel_hotplug.h" 33 33 #include "intel_panel.h" 34 + #include "intel_pps.h" 34 35 #include "intel_psr.h" 35 36 #include "intel_psr_regs.h" 36 37 #include "intel_wm.h" ··· 192 191 struct intel_connector *intel_connector, 193 192 bool remote_req) 194 193 { 195 - bool hdcp_cap, hdcp2_cap; 194 + bool hdcp_cap = false, hdcp2_cap = false; 196 195 197 196 if (!intel_connector->hdcp.shim) { 198 197 seq_puts(m, "No Connector Support"); ··· 253 252 struct drm_connector *connector) 254 253 { 255 254 struct intel_connector *intel_connector = to_intel_connector(connector); 256 - const struct drm_connector_state *conn_state = connector->state; 257 - struct intel_encoder *encoder = 258 - to_intel_encoder(conn_state->best_encoder); 259 255 const struct drm_display_mode *mode; 260 256 261 257 seq_printf(m, "[CONNECTOR:%d:%s]: status: %s\n", ··· 269 271 drm_get_subpixel_order_name(connector->display_info.subpixel_order)); 270 272 seq_printf(m, "\tCEA rev: %d\n", connector->display_info.cea_rev); 271 273 272 - if (!encoder) 273 - return; 274 - 275 274 switch (connector->connector_type) { 276 275 case DRM_MODE_CONNECTOR_DisplayPort: 277 276 case DRM_MODE_CONNECTOR_eDP: 278 - if (encoder->type == INTEL_OUTPUT_DP_MST) 277 + if (intel_connector->mst_port) 279 278 intel_dp_mst_info(m, intel_connector); 280 279 else 281 280 intel_dp_info(m, intel_connector); 282 281 break; 283 282 case DRM_MODE_CONNECTOR_HDMIA: 284 - if (encoder->type == INTEL_OUTPUT_HDMI || 285 - encoder->type == INTEL_OUTPUT_DDI) 286 - intel_hdmi_info(m, intel_connector); 283 + intel_hdmi_info(m, intel_connector); 287 284 break; 288 285 default: 289 286 break; 290 287 } 291 288 292 289 seq_puts(m, "\tHDCP version: "); 293 - if (intel_encoder_is_mst(encoder)) { 290 + if (intel_connector->mst_port) { 294 291 intel_hdcp_info(m, intel_connector, true); 295 292 seq_puts(m, "\tMST Hub HDCP version: "); 296 293 } ··· 1096 1103 intel_display_debugfs_params(i915); 1097 1104 } 1098 1105 1099 - static int i915_panel_show(struct seq_file *m, void *data) 1100 - { 1101 - struct intel_connector *connector = m->private; 1102 - struct intel_dp *intel_dp = intel_attached_dp(connector); 1103 - 1104 - if (connector->base.status != connector_status_connected) 1105 - return -ENODEV; 1106 - 1107 - seq_printf(m, "Panel power up delay: %d\n", 1108 - intel_dp->pps.panel_power_up_delay); 1109 - seq_printf(m, "Panel power down delay: %d\n", 1110 - intel_dp->pps.panel_power_down_delay); 1111 - seq_printf(m, "Backlight on delay: %d\n", 1112 - intel_dp->pps.backlight_on_delay); 1113 - seq_printf(m, "Backlight off delay: %d\n", 1114 - intel_dp->pps.backlight_off_delay); 1115 - 1116 - return 0; 1117 - } 1118 - DEFINE_SHOW_ATTRIBUTE(i915_panel); 1119 - 1120 1106 static int i915_hdcp_sink_capability_show(struct seq_file *m, void *data) 1121 1107 { 1122 1108 struct intel_connector *connector = m->private; ··· 1374 1402 return ret; 1375 1403 } 1376 1404 1377 - static int i915_bigjoiner_enable_show(struct seq_file *m, void *data) 1378 - { 1379 - struct intel_connector *connector = m->private; 1380 - struct drm_crtc *crtc; 1381 - 1382 - crtc = connector->base.state->crtc; 1383 - if (connector->base.status != connector_status_connected || !crtc) 1384 - return -ENODEV; 1385 - 1386 - seq_printf(m, "Bigjoiner enable: %d\n", connector->force_bigjoiner_enable); 1387 - 1388 - return 0; 1389 - } 1390 - 1391 1405 static ssize_t i915_dsc_output_format_write(struct file *file, 1392 1406 const char __user *ubuf, 1393 1407 size_t len, loff_t 
*offp) ··· 1390 1432 return ret; 1391 1433 1392 1434 intel_dp->force_dsc_output_format = dsc_output_format; 1393 - *offp += len; 1394 - 1395 - return len; 1396 - } 1397 - 1398 - static ssize_t i915_bigjoiner_enable_write(struct file *file, 1399 - const char __user *ubuf, 1400 - size_t len, loff_t *offp) 1401 - { 1402 - struct seq_file *m = file->private_data; 1403 - struct intel_connector *connector = m->private; 1404 - struct drm_crtc *crtc; 1405 - bool bigjoiner_en = 0; 1406 - int ret; 1407 - 1408 - crtc = connector->base.state->crtc; 1409 - if (connector->base.status != connector_status_connected || !crtc) 1410 - return -ENODEV; 1411 - 1412 - ret = kstrtobool_from_user(ubuf, len, &bigjoiner_en); 1413 - if (ret < 0) 1414 - return ret; 1415 - 1416 - connector->force_bigjoiner_enable = bigjoiner_en; 1417 1435 *offp += len; 1418 1436 1419 1437 return len; ··· 1488 1554 .write = i915_dsc_fractional_bpp_write 1489 1555 }; 1490 1556 1491 - DEFINE_SHOW_STORE_ATTRIBUTE(i915_bigjoiner_enable); 1492 - 1493 1557 /* 1494 1558 * Returns the Current CRTC's bpc. 1495 1559 * Example usage: cat /sys/kernel/debug/dri/0/crtc-0/i915_current_bpc ··· 1540 1608 return; 1541 1609 1542 1610 intel_drrs_connector_debugfs_add(connector); 1611 + intel_pps_connector_debugfs_add(connector); 1543 1612 intel_psr_connector_debugfs_add(connector); 1544 - 1545 - if (connector_type == DRM_MODE_CONNECTOR_eDP) 1546 - debugfs_create_file("i915_panel_timings", 0444, root, 1547 - connector, &i915_panel_fops); 1548 1613 1549 1614 if (connector_type == DRM_MODE_CONNECTOR_DisplayPort || 1550 1615 connector_type == DRM_MODE_CONNECTOR_HDMIA || ··· 1569 1640 if (DISPLAY_VER(i915) >= 11 && 1570 1641 (connector_type == DRM_MODE_CONNECTOR_DisplayPort || 1571 1642 connector_type == DRM_MODE_CONNECTOR_eDP)) { 1572 - debugfs_create_file("i915_bigjoiner_force_enable", 0644, root, 1573 - connector, &i915_bigjoiner_enable_fops); 1643 + debugfs_create_bool("i915_bigjoiner_force_enable", 0644, root, 1644 + &connector->force_bigjoiner_enable); 1574 1645 } 1575 1646 1576 1647 if (connector_type == DRM_MODE_CONNECTOR_DSI ||
+5
drivers/gpu/drm/i915/display/intel_display_device.c
··· 17 17 #include "intel_display_reg_defs.h" 18 18 #include "intel_fbc.h" 19 19 20 + __diag_push(); 21 + __diag_ignore_all("-Woverride-init", "Allow field initialization overrides for display info"); 22 + 20 23 static const struct intel_display_device_info no_display = {}; 21 24 22 25 #define PIPE_A_OFFSET 0x70000 ··· 770 767 BIT(INTEL_FBC_A) | BIT(INTEL_FBC_B) | 771 768 BIT(INTEL_FBC_C) | BIT(INTEL_FBC_D), 772 769 }; 770 + 771 + __diag_pop(); 773 772 774 773 /* 775 774 * Separate detection for no display cases to keep the display id array simple.
+2
drivers/gpu/drm/i915/display/intel_display_device.h
··· 47 47 #define HAS_DPT(i915) (DISPLAY_VER(i915) >= 13) 48 48 #define HAS_DSB(i915) (DISPLAY_INFO(i915)->has_dsb) 49 49 #define HAS_DSC(__i915) (DISPLAY_RUNTIME_INFO(__i915)->has_dsc) 50 + #define HAS_DSC_MST(__i915) (DISPLAY_VER(__i915) >= 12 && HAS_DSC(__i915)) 50 51 #define HAS_FBC(i915) (DISPLAY_RUNTIME_INFO(i915)->fbc_mask != 0) 51 52 #define HAS_FPGA_DBG_UNCLAIMED(i915) (DISPLAY_INFO(i915)->has_fpga_dbg) 52 53 #define HAS_FW_BLC(i915) (DISPLAY_VER(i915) >= 3) ··· 69 68 #define HAS_TRANSCODER(i915, trans) ((DISPLAY_RUNTIME_INFO(i915)->cpu_transcoder_mask & \ 70 69 BIT(trans)) != 0) 71 70 #define HAS_VRR(i915) (DISPLAY_VER(i915) >= 11) 71 + #define HAS_AS_SDP(i915) (DISPLAY_VER(i915) >= 13) 72 72 #define INTEL_NUM_PIPES(i915) (hweight8(DISPLAY_RUNTIME_INFO(i915)->pipe_mask)) 73 73 #define I915_HAS_HOTPLUG(i915) (DISPLAY_INFO(i915)->has_hotplug) 74 74 #define OVERLAY_NEEDS_PHYSICAL(i915) (DISPLAY_INFO(i915)->overlay_needs_physical)
+1
drivers/gpu/drm/i915/display/intel_display_driver.c
··· 198 198 intel_dpll_init_clock_hook(i915); 199 199 intel_init_display_hooks(i915); 200 200 intel_fdi_init_hook(i915); 201 + intel_dmc_wl_init(i915); 201 202 } 202 203 203 204 /* part #1: call before irq install */
+29 -28
drivers/gpu/drm/i915/display/intel_display_irq.c
··· 117 117 if (drm_WARN_ON(&dev_priv->drm, !intel_irqs_enabled(dev_priv))) 118 118 return; 119 119 120 - new_val = dev_priv->de_irq_mask[pipe]; 120 + new_val = dev_priv->display.irq.de_irq_mask[pipe]; 121 121 new_val &= ~interrupt_mask; 122 122 new_val |= (~enabled_irq_mask & interrupt_mask); 123 123 124 - if (new_val != dev_priv->de_irq_mask[pipe]) { 125 - dev_priv->de_irq_mask[pipe] = new_val; 126 - intel_uncore_write(&dev_priv->uncore, GEN8_DE_PIPE_IMR(pipe), dev_priv->de_irq_mask[pipe]); 124 + if (new_val != dev_priv->display.irq.de_irq_mask[pipe]) { 125 + dev_priv->display.irq.de_irq_mask[pipe] = new_val; 126 + intel_uncore_write(&dev_priv->uncore, GEN8_DE_PIPE_IMR(pipe), 127 + dev_priv->display.irq.de_irq_mask[pipe]); 127 128 intel_uncore_posting_read(&dev_priv->uncore, GEN8_DE_PIPE_IMR(pipe)); 128 129 } 129 130 } ··· 180 179 u32 i915_pipestat_enable_mask(struct drm_i915_private *dev_priv, 181 180 enum pipe pipe) 182 181 { 183 - u32 status_mask = dev_priv->pipestat_irq_mask[pipe]; 182 + u32 status_mask = dev_priv->display.irq.pipestat_irq_mask[pipe]; 184 183 u32 enable_mask = status_mask << 16; 185 184 186 185 lockdep_assert_held(&dev_priv->irq_lock); ··· 234 233 lockdep_assert_held(&dev_priv->irq_lock); 235 234 drm_WARN_ON(&dev_priv->drm, !intel_irqs_enabled(dev_priv)); 236 235 237 - if ((dev_priv->pipestat_irq_mask[pipe] & status_mask) == status_mask) 236 + if ((dev_priv->display.irq.pipestat_irq_mask[pipe] & status_mask) == status_mask) 238 237 return; 239 238 240 - dev_priv->pipestat_irq_mask[pipe] |= status_mask; 239 + dev_priv->display.irq.pipestat_irq_mask[pipe] |= status_mask; 241 240 enable_mask = i915_pipestat_enable_mask(dev_priv, pipe); 242 241 243 242 intel_uncore_write(&dev_priv->uncore, reg, enable_mask | status_mask); ··· 257 256 lockdep_assert_held(&dev_priv->irq_lock); 258 257 drm_WARN_ON(&dev_priv->drm, !intel_irqs_enabled(dev_priv)); 259 258 260 - if ((dev_priv->pipestat_irq_mask[pipe] & status_mask) == 0) 259 + if ((dev_priv->display.irq.pipestat_irq_mask[pipe] & status_mask) == 0) 261 260 return; 262 261 263 - dev_priv->pipestat_irq_mask[pipe] &= ~status_mask; 262 + dev_priv->display.irq.pipestat_irq_mask[pipe] &= ~status_mask; 264 263 enable_mask = i915_pipestat_enable_mask(dev_priv, pipe); 265 264 266 265 intel_uncore_write(&dev_priv->uncore, reg, enable_mask | status_mask); ··· 402 401 PIPESTAT_INT_STATUS_MASK | 403 402 PIPE_FIFO_UNDERRUN_STATUS); 404 403 405 - dev_priv->pipestat_irq_mask[pipe] = 0; 404 + dev_priv->display.irq.pipestat_irq_mask[pipe] = 0; 406 405 } 407 406 } 408 407 ··· 413 412 414 413 spin_lock(&dev_priv->irq_lock); 415 414 416 - if (!dev_priv->display_irqs_enabled) { 415 + if (!dev_priv->display.irq.display_irqs_enabled) { 417 416 spin_unlock(&dev_priv->irq_lock); 418 417 return; 419 418 } ··· 446 445 break; 447 446 } 448 447 if (iir & iir_bit) 449 - status_mask |= dev_priv->pipestat_irq_mask[pipe]; 448 + status_mask |= dev_priv->display.irq.pipestat_irq_mask[pipe]; 450 449 451 450 if (!status_mask) 452 451 continue; ··· 1204 1203 1205 1204 int i915gm_enable_vblank(struct drm_crtc *crtc) 1206 1205 { 1207 - struct drm_i915_private *dev_priv = to_i915(crtc->dev); 1206 + struct drm_i915_private *i915 = to_i915(crtc->dev); 1208 1207 1209 1208 /* 1210 1209 * Vblank interrupts fail to wake the device up from C2+. ··· 1212 1211 * the problem. There is a small power cost so we do this 1213 1212 * only when vblank interrupts are actually enabled. 
1214 1213 */ 1215 - if (dev_priv->vblank_enabled++ == 0) 1216 - intel_uncore_write(&dev_priv->uncore, SCPD0, _MASKED_BIT_ENABLE(CSTATE_RENDER_CLOCK_GATE_DISABLE)); 1214 + if (i915->display.irq.vblank_enabled++ == 0) 1215 + intel_uncore_write(&i915->uncore, SCPD0, _MASKED_BIT_ENABLE(CSTATE_RENDER_CLOCK_GATE_DISABLE)); 1217 1216 1218 1217 return i8xx_enable_vblank(crtc); 1219 1218 } ··· 1316 1315 1317 1316 void i915gm_disable_vblank(struct drm_crtc *crtc) 1318 1317 { 1319 - struct drm_i915_private *dev_priv = to_i915(crtc->dev); 1318 + struct drm_i915_private *i915 = to_i915(crtc->dev); 1320 1319 1321 1320 i8xx_disable_vblank(crtc); 1322 1321 1323 - if (--dev_priv->vblank_enabled == 0) 1324 - intel_uncore_write(&dev_priv->uncore, SCPD0, _MASKED_BIT_DISABLE(CSTATE_RENDER_CLOCK_GATE_DISABLE)); 1322 + if (--i915->display.irq.vblank_enabled == 0) 1323 + intel_uncore_write(&i915->uncore, SCPD0, _MASKED_BIT_DISABLE(CSTATE_RENDER_CLOCK_GATE_DISABLE)); 1325 1324 } 1326 1325 1327 1326 void i965_disable_vblank(struct drm_crtc *crtc) ··· 1498 1497 1499 1498 for_each_pipe_masked(dev_priv, pipe, pipe_mask) 1500 1499 GEN8_IRQ_INIT_NDX(uncore, DE_PIPE, pipe, 1501 - dev_priv->de_irq_mask[pipe], 1502 - ~dev_priv->de_irq_mask[pipe] | extra_ier); 1500 + dev_priv->display.irq.de_irq_mask[pipe], 1501 + ~dev_priv->display.irq.de_irq_mask[pipe] | extra_ier); 1503 1502 1504 1503 spin_unlock_irq(&dev_priv->irq_lock); 1505 1504 } ··· 1559 1558 { 1560 1559 lockdep_assert_held(&dev_priv->irq_lock); 1561 1560 1562 - if (dev_priv->display_irqs_enabled) 1561 + if (dev_priv->display.irq.display_irqs_enabled) 1563 1562 return; 1564 1563 1565 - dev_priv->display_irqs_enabled = true; 1564 + dev_priv->display.irq.display_irqs_enabled = true; 1566 1565 1567 1566 if (intel_irqs_enabled(dev_priv)) { 1568 1567 vlv_display_irq_reset(dev_priv); ··· 1574 1573 { 1575 1574 lockdep_assert_held(&dev_priv->irq_lock); 1576 1575 1577 - if (!dev_priv->display_irqs_enabled) 1576 + if (!dev_priv->display.irq.display_irqs_enabled) 1578 1577 return; 1579 1578 1580 - dev_priv->display_irqs_enabled = false; 1579 + dev_priv->display.irq.display_irqs_enabled = false; 1581 1580 1582 1581 if (intel_irqs_enabled(dev_priv)) 1583 1582 vlv_display_irq_reset(dev_priv); ··· 1695 1694 } 1696 1695 1697 1696 for_each_pipe(dev_priv, pipe) { 1698 - dev_priv->de_irq_mask[pipe] = ~de_pipe_masked; 1697 + dev_priv->display.irq.de_irq_mask[pipe] = ~de_pipe_masked; 1699 1698 1700 1699 if (intel_display_power_is_enabled(dev_priv, 1701 1700 POWER_DOMAIN_PIPE(pipe))) 1702 1701 GEN8_IRQ_INIT_NDX(uncore, DE_PIPE, pipe, 1703 - dev_priv->de_irq_mask[pipe], 1702 + dev_priv->display.irq.de_irq_mask[pipe], 1704 1703 de_pipe_enables); 1705 1704 } 1706 1705 ··· 1771 1770 * domain. We defer setting up the display irqs in this case to the 1772 1771 * runtime pm. 1773 1772 */ 1774 - i915->display_irqs_enabled = true; 1773 + i915->display.irq.display_irqs_enabled = true; 1775 1774 if (IS_VALLEYVIEW(i915) || IS_CHERRYVIEW(i915)) 1776 - i915->display_irqs_enabled = false; 1775 + i915->display.irq.display_irqs_enabled = false; 1777 1776 1778 1777 intel_hotplug_irq_init(i915); 1779 1778 }
+5
drivers/gpu/drm/i915/display/intel_display_params.c
··· 116 116 "(0=disabled, 1=enabled) " 117 117 "Default: 1"); 118 118 119 + intel_display_param_named_unsafe(enable_dmc_wl, bool, 0400, 120 + "Enable DMC wakelock " 121 + "(0=disabled, 1=enabled) " 122 + "Default: 0"); 123 + 119 124 __maybe_unused 120 125 static void _param_print_bool(struct drm_printer *p, const char *driver_name, 121 126 const char *name, bool val)
+1
drivers/gpu/drm/i915/display/intel_display_params.h
··· 46 46 param(int, enable_psr, -1, 0600) \ 47 47 param(bool, psr_safest_params, false, 0400) \ 48 48 param(bool, enable_psr2_sel_fetch, true, 0400) \ 49 + param(bool, enable_dmc_wl, false, 0400) \ 49 50 50 51 #define MEMBER(T, member, ...) T member; 51 52 struct intel_display_params {
+37 -36
drivers/gpu/drm/i915/display/intel_display_power_well.c
··· 17 17 #include "intel_dkl_phy.h" 18 18 #include "intel_dkl_phy_regs.h" 19 19 #include "intel_dmc.h" 20 + #include "intel_dmc_wl.h" 20 21 #include "intel_dp_aux_regs.h" 21 22 #include "intel_dpio_phy.h" 22 23 #include "intel_dpll.h" ··· 200 199 gen8_irq_power_well_pre_disable(dev_priv, irq_pipe_mask); 201 200 } 202 201 202 + #define ICL_AUX_PW_TO_PHY(pw_idx) \ 203 + ((pw_idx) - ICL_PW_CTL_IDX_AUX_A + PHY_A) 204 + 203 205 #define ICL_AUX_PW_TO_CH(pw_idx) \ 204 206 ((pw_idx) - ICL_PW_CTL_IDX_AUX_A + AUX_CH_A) 205 207 ··· 221 217 aux_ch_to_digital_port(struct drm_i915_private *dev_priv, 222 218 enum aux_ch aux_ch) 223 219 { 224 - struct intel_digital_port *dig_port = NULL; 225 220 struct intel_encoder *encoder; 226 221 227 222 for_each_intel_encoder(&dev_priv->drm, encoder) { 223 + struct intel_digital_port *dig_port; 224 + 228 225 /* We'll check the MST primary port */ 229 226 if (encoder->type == INTEL_OUTPUT_DP_MST) 230 227 continue; 231 228 232 229 dig_port = enc_to_dig_port(encoder); 233 - if (!dig_port) 234 - continue; 235 230 236 - if (dig_port->aux_ch != aux_ch) { 237 - dig_port = NULL; 238 - continue; 239 - } 240 - 241 - break; 231 + if (dig_port && dig_port->aux_ch == aux_ch) 232 + return dig_port; 242 233 } 243 234 244 - return dig_port; 235 + return NULL; 245 236 } 246 237 247 238 static enum phy icl_aux_pw_to_phy(struct drm_i915_private *i915, ··· 252 253 * as HDMI-only and routed to a combo PHY, the encoder either won't be 253 254 * present at all or it will not have an aux_ch assigned. 254 255 */ 255 - return dig_port ? intel_port_to_phy(i915, dig_port->base.port) : PHY_NONE; 256 + return dig_port ? intel_encoder_to_phy(&dig_port->base) : PHY_NONE; 256 257 } 257 258 258 259 static void hsw_wait_for_power_well_enable(struct drm_i915_private *dev_priv, ··· 395 396 hsw_wait_for_power_well_disable(dev_priv, power_well); 396 397 } 397 398 398 - static bool intel_port_is_edp(struct drm_i915_private *i915, enum port port) 399 + static bool intel_aux_ch_is_edp(struct drm_i915_private *i915, enum aux_ch aux_ch) 399 400 { 400 - struct intel_encoder *encoder; 401 + struct intel_digital_port *dig_port = aux_ch_to_digital_port(i915, aux_ch); 401 402 402 - for_each_intel_encoder(&i915->drm, encoder) { 403 - if (encoder->type == INTEL_OUTPUT_EDP && 404 - encoder->port == port) 405 - return true; 406 - } 407 - 408 - return false; 403 + return dig_port && dig_port->base.type == INTEL_OUTPUT_EDP; 409 404 } 410 405 411 406 static void ··· 408 415 { 409 416 const struct i915_power_well_regs *regs = power_well->desc->ops->regs; 410 417 int pw_idx = i915_power_well_instance(power_well)->hsw.idx; 411 - enum phy phy = icl_aux_pw_to_phy(dev_priv, power_well); 412 418 413 419 drm_WARN_ON(&dev_priv->drm, !IS_ICELAKE(dev_priv)); 414 420 415 421 intel_de_rmw(dev_priv, regs->driver, 0, HSW_PWR_WELL_CTL_REQ(pw_idx)); 416 422 417 - /* FIXME this is a mess */ 418 - if (phy != PHY_NONE) 419 - intel_de_rmw(dev_priv, ICL_PORT_CL_DW12(phy), 420 - 0, ICL_LANE_ENABLE_AUX); 423 + /* 424 + * FIXME not sure if we should derive the PHY from the pw_idx, or 425 + * from the VBT defined AUX_CH->DDI->PHY mapping. 
426 + */ 427 + intel_de_rmw(dev_priv, ICL_PORT_CL_DW12(ICL_AUX_PW_TO_PHY(pw_idx)), 428 + 0, ICL_LANE_ENABLE_AUX); 421 429 422 430 hsw_wait_for_power_well_enable(dev_priv, power_well, false); 423 431 424 432 /* Display WA #1178: icl */ 425 433 if (pw_idx >= ICL_PW_CTL_IDX_AUX_A && pw_idx <= ICL_PW_CTL_IDX_AUX_B && 426 - !intel_port_is_edp(dev_priv, (enum port)phy)) 427 - intel_de_rmw(dev_priv, ICL_AUX_ANAOVRD1(pw_idx), 428 - 0, ICL_AUX_ANAOVRD1_ENABLE | ICL_AUX_ANAOVRD1_LDO_BYPASS); 434 + !intel_aux_ch_is_edp(dev_priv, ICL_AUX_PW_TO_CH(pw_idx))) 435 + intel_de_rmw(dev_priv, ICL_PORT_TX_DW6_AUX(ICL_AUX_PW_TO_PHY(pw_idx)), 436 + 0, O_FUNC_OVRD_EN | O_LDO_BYPASS_CRI); 429 437 } 430 438 431 439 static void ··· 435 441 { 436 442 const struct i915_power_well_regs *regs = power_well->desc->ops->regs; 437 443 int pw_idx = i915_power_well_instance(power_well)->hsw.idx; 438 - enum phy phy = icl_aux_pw_to_phy(dev_priv, power_well); 439 444 440 445 drm_WARN_ON(&dev_priv->drm, !IS_ICELAKE(dev_priv)); 441 446 442 - /* FIXME this is a mess */ 443 - if (phy != PHY_NONE) 444 - intel_de_rmw(dev_priv, ICL_PORT_CL_DW12(phy), 445 - ICL_LANE_ENABLE_AUX, 0); 447 + /* 448 + * FIXME not sure if we should derive the PHY from the pw_idx, or 449 + * from the VBT defined AUX_CH->DDI->PHY mapping. 450 + */ 451 + intel_de_rmw(dev_priv, ICL_PORT_CL_DW12(ICL_AUX_PW_TO_PHY(pw_idx)), 452 + ICL_LANE_ENABLE_AUX, 0); 446 453 447 454 intel_de_rmw(dev_priv, regs->driver, HSW_PWR_WELL_CTL_REQ(pw_idx), 0); 448 455 ··· 822 827 intel_de_rmw(dev_priv, GEN8_CHICKEN_DCPR_1, 823 828 0, SKL_SELECT_ALTERNATE_DC_EXIT); 824 829 830 + intel_dmc_wl_enable(dev_priv); 831 + 825 832 gen9_set_dc_state(dev_priv, DC_STATE_EN_UPTO_DC5); 826 833 } 827 834 ··· 852 855 if (DISPLAY_VER(dev_priv) == 9 && !IS_BROXTON(dev_priv)) 853 856 intel_de_rmw(dev_priv, GEN8_CHICKEN_DCPR_1, 854 857 0, SKL_SELECT_ALTERNATE_DC_EXIT); 858 + 859 + intel_dmc_wl_enable(dev_priv); 855 860 856 861 gen9_set_dc_state(dev_priv, DC_STATE_EN_UPTO_DC6); 857 862 } ··· 975 976 if (!HAS_DISPLAY(dev_priv)) 976 977 return; 977 978 979 + intel_dmc_wl_disable(dev_priv); 980 + 978 981 intel_cdclk_get_cdclk(dev_priv, &cdclk_config); 979 982 /* Can't read out voltage_level so can't use intel_cdclk_changed() */ 980 983 drm_WARN_ON(&dev_priv->drm, 981 - intel_cdclk_needs_modeset(&dev_priv->display.cdclk.hw, 984 + intel_cdclk_clock_changed(&dev_priv->display.cdclk.hw, 982 985 &cdclk_config)); 983 986 984 987 gen9_assert_dbuf_enabled(dev_priv); ··· 1397 1396 * The PHY may be busy with some initial calibration and whatnot, 1398 1397 * so the power state can take a while to actually change. 1399 1398 */ 1400 - if (intel_de_wait_for_register(dev_priv, DISPLAY_PHY_STATUS, 1401 - phy_status_mask, phy_status, 10)) 1399 + if (intel_de_wait(dev_priv, DISPLAY_PHY_STATUS, 1400 + phy_status_mask, phy_status, 10)) 1402 1401 drm_err(&dev_priv->drm, 1403 1402 "Unexpected PHY_STATUS 0x%08x, expected 0x%08x (PHY_CONTROL=0x%08x)\n", 1404 1403 intel_de_read(dev_priv, DISPLAY_PHY_STATUS) & phy_status_mask,
+17 -4
drivers/gpu/drm/i915/display/intel_display_types.h
··· 661 661 int broadcast_rgb; 662 662 }; 663 663 664 - #define to_intel_digital_connector_state(x) container_of(x, struct intel_digital_connector_state, base) 664 + #define to_intel_digital_connector_state(conn_state) \ 665 + container_of_const((conn_state), struct intel_digital_connector_state, base) 665 666 666 667 struct dpll { 667 668 /* given values */ ··· 1347 1346 union hdmi_infoframe hdmi; 1348 1347 union hdmi_infoframe drm; 1349 1348 struct drm_dp_vsc_sdp vsc; 1349 + struct drm_dp_as_sdp as_sdp; 1350 1350 } infoframes; 1351 1351 1352 1352 u8 eld[MAX_ELD_BYTES]; ··· 1425 1423 1426 1424 u32 psr2_man_track_ctl; 1427 1425 1426 + u32 pipe_srcsz_early_tpt; 1427 + 1428 1428 struct drm_rect psr2_su_area; 1429 1429 1430 1430 /* Variable Refresh Rate state */ ··· 1434 1430 bool enable, in_range; 1435 1431 u8 pipeline_full; 1436 1432 u16 flipline, vmin, vmax, guardband; 1433 + u32 vsync_end, vsync_start; 1437 1434 } vrr; 1438 1435 1439 1436 /* Stream Splitter for eDP MSO */ ··· 1623 1618 1624 1619 #define to_intel_atomic_state(x) container_of(x, struct intel_atomic_state, base) 1625 1620 #define to_intel_crtc(x) container_of(x, struct intel_crtc, base) 1626 - #define to_intel_crtc_state(x) container_of(x, struct intel_crtc_state, uapi) 1627 1621 #define to_intel_connector(x) container_of(x, struct intel_connector, base) 1628 1622 #define to_intel_encoder(x) container_of(x, struct intel_encoder, base) 1629 - #define to_intel_framebuffer(x) container_of(x, struct intel_framebuffer, base) 1630 1623 #define to_intel_plane(x) container_of(x, struct intel_plane, base) 1631 - #define to_intel_plane_state(x) container_of(x, struct intel_plane_state, uapi) 1624 + 1625 + #define to_intel_crtc_state(crtc_state) \ 1626 + container_of_const((crtc_state), struct intel_crtc_state, uapi) 1627 + #define to_intel_plane_state(plane_state) \ 1628 + container_of_const((plane_state), struct intel_plane_state, uapi) 1629 + #define to_intel_framebuffer(fb) \ 1630 + container_of_const((fb), struct intel_framebuffer, base) 1631 + 1632 1632 #define intel_fb_obj(x) ((x) ? to_intel_bo((x)->obj[0]) : NULL) 1633 1633 1634 1634 struct intel_hdmi { ··· 1748 1738 1749 1739 /* LNL and beyond */ 1750 1740 u8 check_entry_lines; 1741 + u8 silence_period_sym_clocks; 1742 + u8 lfps_half_cycle_num_of_syms; 1751 1743 } alpm_parameters; 1752 1744 1753 1745 ktime_t last_entry_attempt; ··· 1811 1799 1812 1800 bool is_mst; 1813 1801 int active_mst_links; 1802 + enum drm_dp_mst_mode mst_detect; 1814 1803 1815 1804 /* connector directly attached - won't be use for modeset in mst world */ 1816 1805 struct intel_connector *attached_connector;
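The to_intel_crtc_state(), to_intel_plane_state() and to_intel_framebuffer() changes above move from container_of() to container_of_const(), so a const argument now yields a const result instead of having its qualifier silently cast away. A small illustration, with hypothetical variable names and assuming the usual atomic helpers:

static void const_safety_example(struct intel_atomic_state *state, struct intel_crtc *crtc)
{
	/* const in, const out: the compiler now catches accidental writes. */
	const struct drm_crtc_state *old_base =
		drm_atomic_get_old_crtc_state(&state->base, &crtc->base);
	const struct intel_crtc_state *old_crtc_state = to_intel_crtc_state(old_base);

	/* non-const in, non-const out: mutable users keep working unchanged. */
	struct drm_crtc_state *new_base =
		drm_atomic_get_new_crtc_state(&state->base, &crtc->base);
	struct intel_crtc_state *new_crtc_state = to_intel_crtc_state(new_base);

	(void)old_crtc_state;
	(void)new_crtc_state;
}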
-8
drivers/gpu/drm/i915/display/intel_display_wa.c
··· 10 10 11 11 static void gen11_display_wa_apply(struct drm_i915_private *i915) 12 12 { 13 - /* Wa_1409120013 */ 14 - intel_de_write(i915, ILK_DPFC_CHICKEN(INTEL_FBC_A), 15 - DPFC_CHICKEN_COMP_DUMMY_PIXEL); 16 - 17 13 /* Wa_14010594013 */ 18 14 intel_de_rmw(i915, GEN8_CHICKEN_DCPR_1, 0, ICL_DELAY_PMRSP); 19 15 } 20 16 21 17 static void xe_d_display_wa_apply(struct drm_i915_private *i915) 22 18 { 23 - /* Wa_1409120013 */ 24 - intel_de_write(i915, ILK_DPFC_CHICKEN(INTEL_FBC_A), 25 - DPFC_CHICKEN_COMP_DUMMY_PIXEL); 26 - 27 19 /* Wa_14013723622 */ 28 20 intel_de_rmw(i915, CLKREQ_POLICY, CLKREQ_POLICY_MEM_UP_OVRD, 0); 29 21 }
+15 -2
drivers/gpu/drm/i915/display/intel_dmc.c
··· 38 38 * low-power state and comes back to normal. 39 39 */ 40 40 41 + #define INTEL_DMC_FIRMWARE_URL "https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git" 42 + 41 43 enum intel_dmc_id { 42 44 DMC_FW_MAIN = 0, 43 45 DMC_FW_PIPEA, ··· 91 89 __stringify(major) "_" \ 92 90 __stringify(minor) ".bin" 93 91 92 + #define XE2LPD_DMC_MAX_FW_SIZE 0x8000 94 93 #define XELPDP_DMC_MAX_FW_SIZE 0x7000 95 94 #define DISPLAY_VER13_DMC_MAX_FW_SIZE 0x20000 96 95 #define DISPLAY_VER12_DMC_MAX_FW_SIZE ICL_DMC_MAX_FW_SIZE 96 + 97 + #define XE2LPD_DMC_PATH DMC_PATH(xe2lpd) 98 + MODULE_FIRMWARE(XE2LPD_DMC_PATH); 97 99 98 100 #define MTL_DMC_PATH DMC_PATH(mtl) 99 101 MODULE_FIRMWARE(MTL_DMC_PATH); ··· 552 546 pipedmc_clock_gating_wa(i915, true); 553 547 disable_all_event_handlers(i915); 554 548 pipedmc_clock_gating_wa(i915, false); 549 + 550 + intel_dmc_wl_disable(i915); 555 551 } 556 552 557 553 void assert_dmc_loaded(struct drm_i915_private *i915) ··· 957 949 " Disabling runtime power management.\n", 958 950 dmc->fw_path); 959 951 drm_notice(&i915->drm, "DMC firmware homepage: %s", 960 - INTEL_UC_FIRMWARE_URL); 952 + INTEL_DMC_FIRMWARE_URL); 961 953 } 962 954 963 955 release_firmware(fw); ··· 995 987 996 988 INIT_WORK(&dmc->work, dmc_load_work_fn); 997 989 998 - if (DISPLAY_VER_FULL(i915) == IP_VER(14, 0)) { 990 + if (DISPLAY_VER_FULL(i915) == IP_VER(20, 0)) { 991 + dmc->fw_path = XE2LPD_DMC_PATH; 992 + dmc->max_fw_size = XE2LPD_DMC_MAX_FW_SIZE; 993 + } else if (DISPLAY_VER_FULL(i915) == IP_VER(14, 0)) { 999 994 dmc->fw_path = MTL_DMC_PATH; 1000 995 dmc->max_fw_size = XELPDP_DMC_MAX_FW_SIZE; 1001 996 } else if (IS_DG2(i915)) { ··· 1082 1071 1083 1072 if (dmc) 1084 1073 flush_work(&dmc->work); 1074 + 1075 + intel_dmc_wl_disable(i915); 1085 1076 1086 1077 /* Drop the reference held in case DMC isn't loaded. */ 1087 1078 if (!intel_dmc_has_payload(i915))
+6
drivers/gpu/drm/i915/display/intel_dmc_regs.h
··· 97 97 #define TGL_DMC_DEBUG3 _MMIO(0x101090) 98 98 #define DG1_DMC_DEBUG3 _MMIO(0x13415c) 99 99 100 + #define DMC_WAKELOCK_CFG _MMIO(0x8F1B0) 101 + #define DMC_WAKELOCK_CFG_ENABLE REG_BIT(31) 102 + #define DMC_WAKELOCK1_CTL _MMIO(0x8F140) 103 + #define DMC_WAKELOCK_CTL_REQ REG_BIT(31) 104 + #define DMC_WAKELOCK_CTL_ACK REG_BIT(15) 105 + 100 106 #endif /* __INTEL_DMC_REGS_H__ */
+262
drivers/gpu/drm/i915/display/intel_dmc_wl.c
··· 1 + // SPDX-License-Identifier: MIT 2 + /* 3 + * Copyright (C) 2024 Intel Corporation 4 + */ 5 + 6 + #include <linux/kernel.h> 7 + 8 + #include "intel_de.h" 9 + #include "intel_dmc.h" 10 + #include "intel_dmc_regs.h" 11 + #include "intel_dmc_wl.h" 12 + 13 + /** 14 + * DOC: DMC wakelock support 15 + * 16 + * Wake lock is the mechanism to cause display engine to exit DC 17 + * states to allow programming to registers that are powered down in 18 + * those states. Previous projects exited DC states automatically when 19 + * detecting programming. Now software controls the exit by 20 + * programming the wake lock. This improves system performance and 21 + * system interactions and better fits the flip queue style of 22 + * programming. Wake lock is only required when DC5, DC6, or DC6v have 23 + * been enabled in DC_STATE_EN and the wake lock mode of operation has 24 + * been enabled. 25 + * 26 + * The wakelock mechanism in DMC allows the display engine to exit DC 27 + * states explicitly before programming registers that may be powered 28 + * down. In earlier hardware, this was done automatically and 29 + * implicitly when the display engine accessed a register. With the 30 + * wakelock implementation, the driver asserts a wakelock in DMC, 31 + * which forces it to exit the DC state until the wakelock is 32 + * deasserted. 33 + * 34 + * The mechanism can be enabled and disabled by writing to the 35 + * DMC_WAKELOCK_CFG register. There are also 13 control registers 36 + * that can be used to hold and release different wakelocks. In the 37 + * current implementation, we only need one wakelock, so only 38 + * DMC_WAKELOCK1_CTL is used. The other definitions are here for 39 + * potential future use. 40 + */ 41 + 42 + #define DMC_WAKELOCK_CTL_TIMEOUT 5 43 + #define DMC_WAKELOCK_HOLD_TIME 50 44 + 45 + struct intel_dmc_wl_range { 46 + u32 start; 47 + u32 end; 48 + }; 49 + 50 + static struct intel_dmc_wl_range lnl_wl_range[] = { 51 + { .start = 0x60000, .end = 0x7ffff }, 52 + }; 53 + 54 + static void __intel_dmc_wl_release(struct drm_i915_private *i915) 55 + { 56 + struct intel_dmc_wl *wl = &i915->display.wl; 57 + 58 + WARN_ON(refcount_read(&wl->refcount)); 59 + 60 + queue_delayed_work(i915->unordered_wq, &wl->work, 61 + msecs_to_jiffies(DMC_WAKELOCK_HOLD_TIME)); 62 + } 63 + 64 + static void intel_dmc_wl_work(struct work_struct *work) 65 + { 66 + struct intel_dmc_wl *wl = 67 + container_of(work, struct intel_dmc_wl, work.work); 68 + struct drm_i915_private *i915 = 69 + container_of(wl, struct drm_i915_private, display.wl); 70 + unsigned long flags; 71 + 72 + spin_lock_irqsave(&wl->lock, flags); 73 + 74 + /* Bail out if refcount reached zero while waiting for the spinlock */ 75 + if (!refcount_read(&wl->refcount)) 76 + goto out_unlock; 77 + 78 + __intel_de_rmw_nowl(i915, DMC_WAKELOCK1_CTL, DMC_WAKELOCK_CTL_REQ, 0); 79 + 80 + if (__intel_wait_for_register_nowl(i915, DMC_WAKELOCK1_CTL, 81 + DMC_WAKELOCK_CTL_ACK, 0, 82 + DMC_WAKELOCK_CTL_TIMEOUT)) { 83 + WARN_RATELIMIT(1, "DMC wakelock release timed out"); 84 + goto out_unlock; 85 + } 86 + 87 + wl->taken = false; 88 + 89 + out_unlock: 90 + spin_unlock_irqrestore(&wl->lock, flags); 91 + } 92 + 93 + static bool intel_dmc_wl_check_range(u32 address) 94 + { 95 + int i; 96 + bool wl_needed = false; 97 + 98 + for (i = 0; i < ARRAY_SIZE(lnl_wl_range); i++) { 99 + if (address >= lnl_wl_range[i].start && 100 + address <= lnl_wl_range[i].end) { 101 + wl_needed = true; 102 + break; 103 + } 104 + } 105 + 106 + return wl_needed; 107 + } 108 + 109 + static bool 
__intel_dmc_wl_supported(struct drm_i915_private *i915) 110 + { 111 + if (DISPLAY_VER(i915) < 20 || 112 + !intel_dmc_has_payload(i915) || 113 + !i915->display.params.enable_dmc_wl) 114 + return false; 115 + 116 + return true; 117 + } 118 + 119 + void intel_dmc_wl_init(struct drm_i915_private *i915) 120 + { 121 + struct intel_dmc_wl *wl = &i915->display.wl; 122 + 123 + /* don't call __intel_dmc_wl_supported(), DMC is not loaded yet */ 124 + if (DISPLAY_VER(i915) < 20 || 125 + !i915->display.params.enable_dmc_wl) 126 + return; 127 + 128 + INIT_DELAYED_WORK(&wl->work, intel_dmc_wl_work); 129 + spin_lock_init(&wl->lock); 130 + refcount_set(&wl->refcount, 0); 131 + } 132 + 133 + void intel_dmc_wl_enable(struct drm_i915_private *i915) 134 + { 135 + struct intel_dmc_wl *wl = &i915->display.wl; 136 + unsigned long flags; 137 + 138 + if (!__intel_dmc_wl_supported(i915)) 139 + return; 140 + 141 + spin_lock_irqsave(&wl->lock, flags); 142 + 143 + if (wl->enabled) 144 + goto out_unlock; 145 + 146 + /* 147 + * Enable wakelock in DMC. We shouldn't try to take the 148 + * wakelock, because we're just enabling it, so call the 149 + * non-locking version directly here. 150 + */ 151 + __intel_de_rmw_nowl(i915, DMC_WAKELOCK_CFG, 0, DMC_WAKELOCK_CFG_ENABLE); 152 + 153 + wl->enabled = true; 154 + wl->taken = false; 155 + 156 + out_unlock: 157 + spin_unlock_irqrestore(&wl->lock, flags); 158 + } 159 + 160 + void intel_dmc_wl_disable(struct drm_i915_private *i915) 161 + { 162 + struct intel_dmc_wl *wl = &i915->display.wl; 163 + unsigned long flags; 164 + 165 + if (!__intel_dmc_wl_supported(i915)) 166 + return; 167 + 168 + flush_delayed_work(&wl->work); 169 + 170 + spin_lock_irqsave(&wl->lock, flags); 171 + 172 + if (!wl->enabled) 173 + goto out_unlock; 174 + 175 + /* Disable wakelock in DMC */ 176 + __intel_de_rmw_nowl(i915, DMC_WAKELOCK_CFG, DMC_WAKELOCK_CFG_ENABLE, 0); 177 + 178 + refcount_set(&wl->refcount, 0); 179 + wl->enabled = false; 180 + wl->taken = false; 181 + 182 + out_unlock: 183 + spin_unlock_irqrestore(&wl->lock, flags); 184 + } 185 + 186 + void intel_dmc_wl_get(struct drm_i915_private *i915, i915_reg_t reg) 187 + { 188 + struct intel_dmc_wl *wl = &i915->display.wl; 189 + unsigned long flags; 190 + 191 + if (!__intel_dmc_wl_supported(i915)) 192 + return; 193 + 194 + if (!intel_dmc_wl_check_range(reg.reg)) 195 + return; 196 + 197 + spin_lock_irqsave(&wl->lock, flags); 198 + 199 + if (!wl->enabled) 200 + goto out_unlock; 201 + 202 + cancel_delayed_work(&wl->work); 203 + 204 + if (refcount_inc_not_zero(&wl->refcount)) 205 + goto out_unlock; 206 + 207 + refcount_set(&wl->refcount, 1); 208 + 209 + /* 210 + * Only try to take the wakelock if it's not marked as taken 211 + * yet. It may be already taken at this point if we have 212 + * already released the last reference, but the work has not 213 + * run yet. 
214 + */ 215 + if (!wl->taken) { 216 + __intel_de_rmw_nowl(i915, DMC_WAKELOCK1_CTL, 0, 217 + DMC_WAKELOCK_CTL_REQ); 218 + 219 + if (__intel_wait_for_register_nowl(i915, DMC_WAKELOCK1_CTL, 220 + DMC_WAKELOCK_CTL_ACK, 221 + DMC_WAKELOCK_CTL_ACK, 222 + DMC_WAKELOCK_CTL_TIMEOUT)) { 223 + WARN_RATELIMIT(1, "DMC wakelock ack timed out"); 224 + goto out_unlock; 225 + } 226 + 227 + wl->taken = true; 228 + } 229 + 230 + out_unlock: 231 + spin_unlock_irqrestore(&wl->lock, flags); 232 + } 233 + 234 + void intel_dmc_wl_put(struct drm_i915_private *i915, i915_reg_t reg) 235 + { 236 + struct intel_dmc_wl *wl = &i915->display.wl; 237 + unsigned long flags; 238 + 239 + if (!__intel_dmc_wl_supported(i915)) 240 + return; 241 + 242 + if (!intel_dmc_wl_check_range(reg.reg)) 243 + return; 244 + 245 + spin_lock_irqsave(&wl->lock, flags); 246 + 247 + if (!wl->enabled) 248 + goto out_unlock; 249 + 250 + if (WARN_RATELIMIT(!refcount_read(&wl->refcount), 251 + "Tried to put wakelock with refcount zero\n")) 252 + goto out_unlock; 253 + 254 + if (refcount_dec_and_test(&wl->refcount)) { 255 + __intel_dmc_wl_release(i915); 256 + 257 + goto out_unlock; 258 + } 259 + 260 + out_unlock: 261 + spin_unlock_irqrestore(&wl->lock, flags); 262 + }
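The get/put paths above keep a refcount and defer the hardware release: the last put only queues delayed work, so the wakelock request bit is not toggled on every burst of register accesses. A rough user-space model of that flow — all names and the simulated "hardware" bit are illustrative stand-ins, with no locking and no MMIO — is:

/*
 * Toy model of the DMC wakelock get/put flow above: refcounted use in
 * software, with the hardware release deferred for a hold period.
 */
#include <stdbool.h>
#include <stdio.h>

struct wl_model {
	int refcount;          /* software users of the wakelock */
	bool hw_asserted;      /* models DMC_WAKELOCK_CTL_REQ being set */
	bool release_pending;  /* models the queued delayed work */
};

static void wl_get(struct wl_model *wl)
{
	wl->release_pending = false;  /* cancel_delayed_work() equivalent */
	wl->refcount++;
	if (!wl->hw_asserted) {
		wl->hw_asserted = true;  /* set REQ, driver then waits for ACK */
		printf("assert wakelock in HW\n");
	}
}

static void wl_put(struct wl_model *wl)
{
	if (wl->refcount > 0 && --wl->refcount == 0)
		wl->release_pending = true;  /* queue_delayed_work() equivalent */
}

/* Called once the hold time has elapsed after the last put. */
static void wl_hold_time_elapsed(struct wl_model *wl)
{
	if (wl->release_pending && wl->refcount == 0) {
		wl->hw_asserted = false;  /* clear REQ, driver waits for ACK to drop */
		wl->release_pending = false;
		printf("release wakelock in HW\n");
	}
}

int main(void)
{
	struct wl_model wl = { 0 };

	wl_get(&wl);   /* first user asserts the lock in hardware */
	wl_get(&wl);   /* nested user only bumps the refcount */
	wl_put(&wl);
	wl_put(&wl);   /* last put merely schedules the release */
	wl_hold_time_elapsed(&wl);  /* hold time passes, hardware lock dropped */
	return 0;
}

The purpose of the DMC_WAKELOCK_HOLD_TIME delay is visible in the last two calls: the hardware lock is only dropped once the hold period elapses with the refcount still at zero.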
+31
drivers/gpu/drm/i915/display/intel_dmc_wl.h
··· 1 + /* SPDX-License-Identifier: MIT */ 2 + /* 3 + * Copyright (C) 2024 Intel Corporation 4 + */ 5 + 6 + #ifndef __INTEL_WAKELOCK_H__ 7 + #define __INTEL_WAKELOCK_H__ 8 + 9 + #include <linux/types.h> 10 + #include <linux/workqueue.h> 11 + #include <linux/refcount.h> 12 + 13 + #include "i915_reg_defs.h" 14 + 15 + struct drm_i915_private; 16 + 17 + struct intel_dmc_wl { 18 + spinlock_t lock; /* protects enabled, taken and refcount */ 19 + bool enabled; 20 + bool taken; 21 + refcount_t refcount; 22 + struct delayed_work work; 23 + }; 24 + 25 + void intel_dmc_wl_init(struct drm_i915_private *i915); 26 + void intel_dmc_wl_enable(struct drm_i915_private *i915); 27 + void intel_dmc_wl_disable(struct drm_i915_private *i915); 28 + void intel_dmc_wl_get(struct drm_i915_private *i915, i915_reg_t reg); 29 + void intel_dmc_wl_put(struct drm_i915_private *i915, i915_reg_t reg); 30 + 31 + #endif /* __INTEL_WAKELOCK_H__ */
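The header exposes a symmetric get/put API keyed on the register offset. Purely as an illustration of how a caller could bracket an access to the 0x60000-0x7ffff range — the wrapper name below is made up, and in practice the driver is expected to hook these calls into its display MMIO helpers rather than open-coding them per call site:

/* Illustrative only; not part of the patch. */
static u32 example_read_protected_reg(struct drm_i915_private *i915,
				      i915_reg_t reg)
{
	u32 val;

	intel_dmc_wl_get(i915, reg);   /* forces DMC out of DC5/DC6 if needed */
	val = intel_de_read(i915, reg);
	intel_dmc_wl_put(i915, reg);   /* last put schedules the deferred release */

	return val;
}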
+248 -72
drivers/gpu/drm/i915/display/intel_dp.c
··· 123 123 return dig_port->base.type == INTEL_OUTPUT_EDP; 124 124 } 125 125 126 + bool intel_dp_as_sdp_supported(struct intel_dp *intel_dp) 127 + { 128 + struct drm_i915_private *i915 = dp_to_i915(intel_dp); 129 + 130 + return HAS_AS_SDP(i915) && 131 + drm_dp_as_sdp_supported(&intel_dp->aux, intel_dp->dpcd); 132 + } 133 + 126 134 static void intel_dp_unset_edid(struct intel_dp *intel_dp); 127 135 128 136 /* Is link rate UHBR and thus 128b/132b? */ ··· 433 425 return max_rate; 434 426 } 435 427 436 - bool intel_dp_can_bigjoiner(struct intel_dp *intel_dp) 428 + bool intel_dp_has_bigjoiner(struct intel_dp *intel_dp) 437 429 { 438 430 struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp); 439 431 struct intel_encoder *encoder = &intel_dig_port->base; ··· 451 443 452 444 static int icl_max_source_rate(struct intel_dp *intel_dp) 453 445 { 454 - struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); 455 - struct drm_i915_private *dev_priv = to_i915(dig_port->base.base.dev); 456 - enum phy phy = intel_port_to_phy(dev_priv, dig_port->base.port); 446 + struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base; 457 447 458 - if (intel_phy_is_combo(dev_priv, phy) && !intel_dp_is_edp(intel_dp)) 448 + if (intel_encoder_is_combo(encoder) && !intel_dp_is_edp(intel_dp)) 459 449 return 540000; 460 450 461 451 return 810000; ··· 469 463 470 464 static int mtl_max_source_rate(struct intel_dp *intel_dp) 471 465 { 472 - struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); 473 - struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev); 474 - enum phy phy = intel_port_to_phy(i915, dig_port->base.port); 466 + struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base; 475 467 476 - if (intel_is_c10phy(i915, phy)) 468 + if (intel_encoder_is_c10phy(encoder)) 477 469 return 810000; 478 470 479 471 return 2000000; ··· 503 499 /* The values must be in increasing order */ 504 500 static const int mtl_rates[] = { 505 501 162000, 216000, 243000, 270000, 324000, 432000, 540000, 675000, 506 - 810000, 1000000, 1350000, 2000000, 502 + 810000, 1000000, 2000000, 507 503 }; 508 504 static const int icl_rates[] = { 509 505 162000, 216000, 270000, 324000, 432000, 540000, 648000, 810000, ··· 1202 1198 } 1203 1199 1204 1200 bool intel_dp_need_bigjoiner(struct intel_dp *intel_dp, 1201 + struct intel_connector *connector, 1205 1202 int hdisplay, int clock) 1206 1203 { 1207 1204 struct drm_i915_private *i915 = dp_to_i915(intel_dp); 1208 - struct intel_connector *connector = intel_dp->attached_connector; 1209 1205 1210 - if (!intel_dp_can_bigjoiner(intel_dp)) 1206 + if (!intel_dp_has_bigjoiner(intel_dp)) 1211 1207 return false; 1212 1208 1213 - return clock > i915->max_dotclk_freq || hdisplay > 5120 || 1209 + return clock > i915->display.cdclk.max_dotclk_freq || hdisplay > 5120 || 1214 1210 connector->force_bigjoiner_enable; 1215 1211 } 1216 1212 ··· 1224 1220 const struct drm_display_mode *fixed_mode; 1225 1221 int target_clock = mode->clock; 1226 1222 int max_rate, mode_rate, max_lanes, max_link_clock; 1227 - int max_dotclk = dev_priv->max_dotclk_freq; 1223 + int max_dotclk = dev_priv->display.cdclk.max_dotclk_freq; 1228 1224 u16 dsc_max_compressed_bpp = 0; 1229 1225 u8 dsc_slice_count = 0; 1230 1226 enum drm_mode_status status; ··· 1237 1233 if (mode->flags & DRM_MODE_FLAG_DBLCLK) 1238 1234 return MODE_H_ILLEGAL; 1239 1235 1236 + if (mode->clock < 10000) 1237 + return MODE_CLOCK_LOW; 1238 + 1240 1239 fixed_mode = intel_panel_fixed_mode(connector, mode); 1241 1240 if 
(intel_dp_is_edp(intel_dp) && fixed_mode) { 1242 1241 status = intel_panel_mode_valid(connector, mode); ··· 1249 1242 target_clock = fixed_mode->clock; 1250 1243 } 1251 1244 1252 - if (mode->clock < 10000) 1253 - return MODE_CLOCK_LOW; 1254 - 1255 - if (intel_dp_need_bigjoiner(intel_dp, mode->hdisplay, target_clock)) { 1245 + if (intel_dp_need_bigjoiner(intel_dp, connector, 1246 + mode->hdisplay, target_clock)) { 1256 1247 bigjoiner = true; 1257 1248 max_dotclk *= 2; 1258 1249 } ··· 1311 1306 dsc = dsc_max_compressed_bpp && dsc_slice_count; 1312 1307 } 1313 1308 1314 - /* 1315 - * Big joiner configuration needs DSC for TGL which is not true for 1316 - * XE_LPD where uncompressed joiner is supported. 1317 - */ 1318 - if (DISPLAY_VER(dev_priv) < 13 && bigjoiner && !dsc) 1309 + if (intel_dp_joiner_needs_dsc(dev_priv, bigjoiner) && !dsc) 1319 1310 return MODE_CLOCK_HIGH; 1320 1311 1321 1312 if (mode_rate > max_rate && !dsc) ··· 1423 1422 if (DISPLAY_VER(dev_priv) >= 12) 1424 1423 return true; 1425 1424 1426 - if (DISPLAY_VER(dev_priv) == 11 && encoder->port != PORT_A) 1425 + if (DISPLAY_VER(dev_priv) == 11 && encoder->port != PORT_A && 1426 + !intel_crtc_has_type(pipe_config, INTEL_OUTPUT_DP_MST)) 1427 1427 return true; 1428 1428 1429 1429 return false; ··· 1919 1917 dsc_max_bpp = min(dsc_max_bpp, pipe_bpp - 1); 1920 1918 1921 1919 for (i = 0; i < ARRAY_SIZE(valid_dsc_bpp); i++) { 1922 - if (valid_dsc_bpp[i] < dsc_min_bpp || 1923 - valid_dsc_bpp[i] > dsc_max_bpp) 1920 + if (valid_dsc_bpp[i] < dsc_min_bpp) 1921 + continue; 1922 + if (valid_dsc_bpp[i] > dsc_max_bpp) 1924 1923 break; 1925 1924 1926 1925 ret = dsc_compute_link_config(intel_dp, ··· 2402 2399 return intel_dp_link_required(adjusted_mode->crtc_clock, bpp); 2403 2400 } 2404 2401 2402 + bool intel_dp_joiner_needs_dsc(struct drm_i915_private *i915, bool use_joiner) 2403 + { 2404 + /* 2405 + * Pipe joiner needs compression up to display 12 due to bandwidth 2406 + * limitation. DG2 onwards pipe joiner can be enabled without 2407 + * compression. 2408 + */ 2409 + return DISPLAY_VER(i915) < 13 && use_joiner; 2410 + } 2411 + 2405 2412 static int 2406 2413 intel_dp_compute_link_config(struct intel_encoder *encoder, 2407 2414 struct intel_crtc_state *pipe_config, ··· 2420 2407 { 2421 2408 struct drm_i915_private *i915 = to_i915(encoder->base.dev); 2422 2409 struct intel_crtc *crtc = to_intel_crtc(pipe_config->uapi.crtc); 2423 - const struct intel_connector *connector = 2410 + struct intel_connector *connector = 2424 2411 to_intel_connector(conn_state->connector); 2425 2412 const struct drm_display_mode *adjusted_mode = 2426 2413 &pipe_config->hw.adjusted_mode; 2427 2414 struct intel_dp *intel_dp = enc_to_intel_dp(encoder); 2428 2415 struct link_config_limits limits; 2429 - bool joiner_needs_dsc = false; 2430 - bool dsc_needed; 2416 + bool dsc_needed, joiner_needs_dsc; 2431 2417 int ret = 0; 2432 2418 2433 2419 if (pipe_config->fec_enable && 2434 2420 !intel_dp_supports_fec(intel_dp, connector, pipe_config)) 2435 2421 return -EINVAL; 2436 2422 2437 - if (intel_dp_need_bigjoiner(intel_dp, adjusted_mode->crtc_hdisplay, 2423 + if (intel_dp_need_bigjoiner(intel_dp, connector, 2424 + adjusted_mode->crtc_hdisplay, 2438 2425 adjusted_mode->crtc_clock)) 2439 2426 pipe_config->bigjoiner_pipes = GENMASK(crtc->pipe + 1, crtc->pipe); 2440 2427 2441 - /* 2442 - * Pipe joiner needs compression up to display 12 due to bandwidth 2443 - * limitation. DG2 onwards pipe joiner can be enabled without 2444 - * compression. 
2445 - */ 2446 - joiner_needs_dsc = DISPLAY_VER(i915) < 13 && pipe_config->bigjoiner_pipes; 2428 + joiner_needs_dsc = intel_dp_joiner_needs_dsc(i915, pipe_config->bigjoiner_pipes); 2447 2429 2448 2430 dsc_needed = joiner_needs_dsc || intel_dp->force_dsc_en || 2449 2431 !intel_dp_compute_config_limits(intel_dp, pipe_config, ··· 2621 2613 vsc->content_type = DP_CONTENT_TYPE_NOT_DEFINED; 2622 2614 } 2623 2615 2616 + static void intel_dp_compute_as_sdp(struct intel_dp *intel_dp, 2617 + struct intel_crtc_state *crtc_state) 2618 + { 2619 + struct drm_dp_as_sdp *as_sdp = &crtc_state->infoframes.as_sdp; 2620 + const struct drm_display_mode *adjusted_mode = 2621 + &crtc_state->hw.adjusted_mode; 2622 + 2623 + if (!crtc_state->vrr.enable || 2624 + !intel_dp_as_sdp_supported(intel_dp)) 2625 + return; 2626 + 2627 + crtc_state->infoframes.enable |= intel_hdmi_infoframe_enable(DP_SDP_ADAPTIVE_SYNC); 2628 + 2629 + /* Currently only DP_AS_SDP_AVT_FIXED_VTOTAL mode supported */ 2630 + as_sdp->sdp_type = DP_SDP_ADAPTIVE_SYNC; 2631 + as_sdp->length = 0x9; 2632 + as_sdp->mode = DP_AS_SDP_AVT_FIXED_VTOTAL; 2633 + as_sdp->vtotal = adjusted_mode->vtotal; 2634 + as_sdp->target_rr = 0; 2635 + as_sdp->duration_incr_ms = 0; 2636 + as_sdp->duration_incr_ms = 0; 2637 + } 2638 + 2624 2639 static void intel_dp_compute_vsc_sdp(struct intel_dp *intel_dp, 2625 2640 struct intel_crtc_state *crtc_state, 2626 2641 const struct drm_connector_state *conn_state) ··· 2754 2723 intel_panel_downclock_mode(connector, &pipe_config->hw.adjusted_mode); 2755 2724 int pixel_clock; 2756 2725 2757 - if (has_seamless_m_n(connector)) 2726 + /* 2727 + * FIXME all joined pipes share the same transcoder. 2728 + * Need to account for that when updating M/N live. 2729 + */ 2730 + if (has_seamless_m_n(connector) && !pipe_config->bigjoiner_pipes) 2758 2731 pipe_config->update_m_n = true; 2759 2732 2760 2733 if (!can_enable_drrs(connector, pipe_config, downclock_mode)) { ··· 2999 2964 g4x_dp_set_clock(encoder, pipe_config); 3000 2965 3001 2966 intel_vrr_compute_config(pipe_config, conn_state); 2967 + intel_dp_compute_as_sdp(intel_dp, pipe_config); 3002 2968 intel_psr_compute_config(intel_dp, pipe_config, conn_state); 3003 2969 intel_dp_drrs_compute_config(connector, pipe_config, link_bpp_x16); 3004 2970 intel_dp_compute_vsc_sdp(intel_dp, pipe_config, conn_state); ··· 3387 3351 */ 3388 3352 if (crtc_state->dsc.compression_enable) { 3389 3353 drm_dbg_kms(&i915->drm, "[ENCODER:%d:%s] Forcing full modeset due to DSC being enabled\n", 3354 + encoder->base.base.id, encoder->base.name); 3355 + crtc_state->uapi.mode_changed = true; 3356 + fastset = false; 3357 + } 3358 + 3359 + if (CAN_PANEL_REPLAY(intel_dp)) { 3360 + drm_dbg_kms(&i915->drm, 3361 + "[ENCODER:%d:%s] Forcing full modeset to compute panel replay state\n", 3390 3362 encoder->base.base.id, encoder->base.name); 3391 3363 crtc_state->uapi.mode_changed = true; 3392 3364 fastset = false; ··· 4083 4039 intel_dp->downstream_ports) == 0; 4084 4040 } 4085 4041 4086 - static bool 4087 - intel_dp_can_mst(struct intel_dp *intel_dp) 4042 + static const char *intel_dp_mst_mode_str(enum drm_dp_mst_mode mst_mode) 4043 + { 4044 + if (mst_mode == DRM_DP_MST) 4045 + return "MST"; 4046 + else if (mst_mode == DRM_DP_SST_SIDEBAND_MSG) 4047 + return "SST w/ sideband messaging"; 4048 + else 4049 + return "SST"; 4050 + } 4051 + 4052 + static enum drm_dp_mst_mode 4053 + intel_dp_mst_mode_choose(struct intel_dp *intel_dp, 4054 + enum drm_dp_mst_mode sink_mst_mode) 4088 4055 { 4089 4056 struct drm_i915_private *i915 = 
dp_to_i915(intel_dp); 4090 4057 4091 - return i915->display.params.enable_dp_mst && 4092 - intel_dp_mst_source_support(intel_dp) && 4093 - drm_dp_read_mst_cap(&intel_dp->aux, intel_dp->dpcd); 4058 + if (!i915->display.params.enable_dp_mst) 4059 + return DRM_DP_SST; 4060 + 4061 + if (!intel_dp_mst_source_support(intel_dp)) 4062 + return DRM_DP_SST; 4063 + 4064 + if (sink_mst_mode == DRM_DP_SST_SIDEBAND_MSG && 4065 + !(intel_dp->dpcd[DP_MAIN_LINK_CHANNEL_CODING] & DP_CAP_ANSI_128B132B)) 4066 + return DRM_DP_SST; 4067 + 4068 + return sink_mst_mode; 4069 + } 4070 + 4071 + static enum drm_dp_mst_mode 4072 + intel_dp_mst_detect(struct intel_dp *intel_dp) 4073 + { 4074 + struct drm_i915_private *i915 = dp_to_i915(intel_dp); 4075 + struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base; 4076 + enum drm_dp_mst_mode sink_mst_mode; 4077 + enum drm_dp_mst_mode mst_detect; 4078 + 4079 + sink_mst_mode = drm_dp_read_mst_cap(&intel_dp->aux, intel_dp->dpcd); 4080 + 4081 + mst_detect = intel_dp_mst_mode_choose(intel_dp, sink_mst_mode); 4082 + 4083 + drm_dbg_kms(&i915->drm, 4084 + "[ENCODER:%d:%s] MST support: port: %s, sink: %s, modparam: %s -> enable: %s\n", 4085 + encoder->base.base.id, encoder->base.name, 4086 + str_yes_no(intel_dp_mst_source_support(intel_dp)), 4087 + intel_dp_mst_mode_str(sink_mst_mode), 4088 + str_yes_no(i915->display.params.enable_dp_mst), 4089 + intel_dp_mst_mode_str(mst_detect)); 4090 + 4091 + return mst_detect; 4094 4092 } 4095 4093 4096 4094 static void 4097 - intel_dp_configure_mst(struct intel_dp *intel_dp) 4095 + intel_dp_mst_configure(struct intel_dp *intel_dp) 4098 4096 { 4099 - struct drm_i915_private *i915 = dp_to_i915(intel_dp); 4100 - struct intel_encoder *encoder = 4101 - &dp_to_dig_port(intel_dp)->base; 4102 - bool sink_can_mst = drm_dp_read_mst_cap(&intel_dp->aux, intel_dp->dpcd); 4103 - 4104 - drm_dbg_kms(&i915->drm, 4105 - "[ENCODER:%d:%s] MST support: port: %s, sink: %s, modparam: %s\n", 4106 - encoder->base.base.id, encoder->base.name, 4107 - str_yes_no(intel_dp_mst_source_support(intel_dp)), 4108 - str_yes_no(sink_can_mst), 4109 - str_yes_no(i915->display.params.enable_dp_mst)); 4110 - 4111 4097 if (!intel_dp_mst_source_support(intel_dp)) 4112 4098 return; 4113 4099 4114 - intel_dp->is_mst = sink_can_mst && 4115 - i915->display.params.enable_dp_mst; 4100 + intel_dp->is_mst = intel_dp->mst_detect != DRM_DP_SST; 4116 4101 4117 - drm_dp_mst_topology_mgr_set_mst(&intel_dp->mst_mgr, 4118 - intel_dp->is_mst); 4102 + drm_dp_mst_topology_mgr_set_mst(&intel_dp->mst_mgr, intel_dp->is_mst); 4103 + 4104 + /* Avoid stale info on the next detect cycle. 
*/ 4105 + intel_dp->mst_detect = DRM_DP_SST; 4106 + } 4107 + 4108 + static void 4109 + intel_dp_mst_disconnect(struct intel_dp *intel_dp) 4110 + { 4111 + struct drm_i915_private *i915 = dp_to_i915(intel_dp); 4112 + 4113 + if (!intel_dp->is_mst) 4114 + return; 4115 + 4116 + drm_dbg_kms(&i915->drm, "MST device may have disappeared %d vs %d\n", 4117 + intel_dp->is_mst, intel_dp->mst_mgr.mst_state); 4118 + intel_dp->is_mst = false; 4119 + drm_dp_mst_topology_mgr_set_mst(&intel_dp->mst_mgr, intel_dp->is_mst); 4119 4120 } 4120 4121 4121 4122 static bool ··· 4206 4117 } 4207 4118 4208 4119 return false; 4120 + } 4121 + 4122 + static ssize_t intel_dp_as_sdp_pack(const struct drm_dp_as_sdp *as_sdp, 4123 + struct dp_sdp *sdp, size_t size) 4124 + { 4125 + size_t length = sizeof(struct dp_sdp); 4126 + 4127 + if (size < length) 4128 + return -ENOSPC; 4129 + 4130 + memset(sdp, 0, size); 4131 + 4132 + /* Prepare AS (Adaptive Sync) SDP Header */ 4133 + sdp->sdp_header.HB0 = 0; 4134 + sdp->sdp_header.HB1 = as_sdp->sdp_type; 4135 + sdp->sdp_header.HB2 = 0x02; 4136 + sdp->sdp_header.HB3 = as_sdp->length; 4137 + 4138 + /* Fill AS (Adaptive Sync) SDP Payload */ 4139 + sdp->db[0] = as_sdp->mode; 4140 + sdp->db[1] = as_sdp->vtotal & 0xFF; 4141 + sdp->db[2] = (as_sdp->vtotal >> 8) & 0xFF; 4142 + sdp->db[3] = as_sdp->target_rr & 0xFF; 4143 + sdp->db[4] = (as_sdp->target_rr >> 8) & 0x3; 4144 + 4145 + return length; 4209 4146 } 4210 4147 4211 4148 static ssize_t ··· 4333 4218 &crtc_state->infoframes.drm.drm, 4334 4219 &sdp, sizeof(sdp)); 4335 4220 break; 4221 + case DP_SDP_ADAPTIVE_SYNC: 4222 + len = intel_dp_as_sdp_pack(&crtc_state->infoframes.as_sdp, &sdp, 4223 + sizeof(sdp)); 4224 + break; 4336 4225 default: 4337 4226 MISSING_CASE(type); 4338 4227 return; ··· 4358 4239 u32 dip_enable = VIDEO_DIP_ENABLE_AVI_HSW | VIDEO_DIP_ENABLE_GCP_HSW | 4359 4240 VIDEO_DIP_ENABLE_VS_HSW | VIDEO_DIP_ENABLE_GMP_HSW | 4360 4241 VIDEO_DIP_ENABLE_SPD_HSW | VIDEO_DIP_ENABLE_DRM_GLK; 4242 + 4243 + if (HAS_AS_SDP(dev_priv)) 4244 + dip_enable |= VIDEO_DIP_ENABLE_AS_ADL; 4245 + 4361 4246 u32 val = intel_de_read(dev_priv, reg) & ~dip_enable; 4362 4247 4363 4248 /* TODO: Sanitize DSC enabling wrt. intel_dsc_dp_pps_write(). 
*/ ··· 4379 4256 return; 4380 4257 4381 4258 intel_write_dp_sdp(encoder, crtc_state, DP_SDP_VSC); 4259 + intel_write_dp_sdp(encoder, crtc_state, DP_SDP_ADAPTIVE_SYNC); 4382 4260 4383 4261 intel_write_dp_sdp(encoder, crtc_state, HDMI_PACKET_TYPE_GAMUT_METADATA); 4262 + } 4263 + 4264 + static 4265 + int intel_dp_as_sdp_unpack(struct drm_dp_as_sdp *as_sdp, 4266 + const void *buffer, size_t size) 4267 + { 4268 + const struct dp_sdp *sdp = buffer; 4269 + 4270 + if (size < sizeof(struct dp_sdp)) 4271 + return -EINVAL; 4272 + 4273 + memset(as_sdp, 0, sizeof(*as_sdp)); 4274 + 4275 + if (sdp->sdp_header.HB0 != 0) 4276 + return -EINVAL; 4277 + 4278 + if (sdp->sdp_header.HB1 != DP_SDP_ADAPTIVE_SYNC) 4279 + return -EINVAL; 4280 + 4281 + if (sdp->sdp_header.HB2 != 0x02) 4282 + return -EINVAL; 4283 + 4284 + if ((sdp->sdp_header.HB3 & 0x3F) != 9) 4285 + return -EINVAL; 4286 + 4287 + as_sdp->length = sdp->sdp_header.HB3 & DP_ADAPTIVE_SYNC_SDP_LENGTH; 4288 + as_sdp->mode = sdp->db[0] & DP_ADAPTIVE_SYNC_SDP_OPERATION_MODE; 4289 + as_sdp->vtotal = (sdp->db[2] << 8) | sdp->db[1]; 4290 + as_sdp->target_rr = (u64)sdp->db[3] | ((u64)sdp->db[4] & 0x3); 4291 + 4292 + return 0; 4384 4293 } 4385 4294 4386 4295 static int intel_dp_vsc_sdp_unpack(struct drm_dp_vsc_sdp *vsc, ··· 4483 4328 } 4484 4329 4485 4330 return 0; 4331 + } 4332 + 4333 + static void 4334 + intel_read_dp_as_sdp(struct intel_encoder *encoder, 4335 + struct intel_crtc_state *crtc_state, 4336 + struct drm_dp_as_sdp *as_sdp) 4337 + { 4338 + struct intel_digital_port *dig_port = enc_to_dig_port(encoder); 4339 + struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); 4340 + unsigned int type = DP_SDP_ADAPTIVE_SYNC; 4341 + struct dp_sdp sdp = {}; 4342 + int ret; 4343 + 4344 + if ((crtc_state->infoframes.enable & 4345 + intel_hdmi_infoframe_enable(type)) == 0) 4346 + return; 4347 + 4348 + dig_port->read_infoframe(encoder, crtc_state, type, &sdp, 4349 + sizeof(sdp)); 4350 + 4351 + ret = intel_dp_as_sdp_unpack(as_sdp, &sdp, sizeof(sdp)); 4352 + if (ret) 4353 + drm_dbg_kms(&dev_priv->drm, "Failed to unpack DP AS SDP\n"); 4486 4354 } 4487 4355 4488 4356 static int ··· 4613 4435 case HDMI_PACKET_TYPE_GAMUT_METADATA: 4614 4436 intel_read_dp_hdr_metadata_infoframe_sdp(encoder, crtc_state, 4615 4437 &crtc_state->infoframes.drm.drm); 4438 + break; 4439 + case DP_SDP_ADAPTIVE_SYNC: 4440 + intel_read_dp_as_sdp(encoder, crtc_state, 4441 + &crtc_state->infoframes.as_sdp); 4616 4442 break; 4617 4443 default: 4618 4444 MISSING_CASE(type); ··· 5545 5363 if (!intel_dp_get_dpcd(intel_dp)) 5546 5364 return connector_status_disconnected; 5547 5365 5366 + intel_dp->mst_detect = intel_dp_mst_detect(intel_dp); 5367 + 5548 5368 /* if there's no downstream port, we're done */ 5549 5369 if (!drm_dp_is_branch(dpcd)) 5550 5370 return connector_status_connected; ··· 5558 5374 connector_status_connected : connector_status_disconnected; 5559 5375 } 5560 5376 5561 - if (intel_dp_can_mst(intel_dp)) 5377 + if (intel_dp->mst_detect == DRM_DP_MST) 5562 5378 return connector_status_connected; 5563 5379 5564 5380 /* If no HPD, poke DDC gently */ ··· 5863 5679 memset(intel_connector->dp.dsc_dpcd, 0, sizeof(intel_connector->dp.dsc_dpcd)); 5864 5680 intel_dp->psr.sink_panel_replay_support = false; 5865 5681 5866 - if (intel_dp->is_mst) { 5867 - drm_dbg_kms(&dev_priv->drm, 5868 - "MST device may have disappeared %d vs %d\n", 5869 - intel_dp->is_mst, 5870 - intel_dp->mst_mgr.mst_state); 5871 - intel_dp->is_mst = false; 5872 - drm_dp_mst_topology_mgr_set_mst(&intel_dp->mst_mgr, 5873 - 
intel_dp->is_mst); 5874 - } 5682 + intel_dp_mst_disconnect(intel_dp); 5875 5683 5876 5684 intel_dp_tunnel_disconnect(intel_dp); 5877 5685 ··· 5882 5706 5883 5707 intel_dp_detect_dsc_caps(intel_dp, intel_connector); 5884 5708 5885 - intel_dp_configure_mst(intel_dp); 5709 + intel_dp_mst_configure(intel_dp); 5886 5710 5887 5711 /* 5888 5712 * TODO: Reset link params when switching to MST mode, until MST ··· 6665 6489 struct drm_device *dev = intel_encoder->base.dev; 6666 6490 struct drm_i915_private *dev_priv = to_i915(dev); 6667 6491 enum port port = intel_encoder->port; 6668 - enum phy phy = intel_port_to_phy(dev_priv, port); 6669 6492 int type; 6670 6493 6671 6494 /* Initialize the work for modeset in case of link train failure */ ··· 6689 6514 * Currently we don't support eDP on TypeC ports, although in 6690 6515 * theory it could work on TypeC legacy ports. 6691 6516 */ 6692 - drm_WARN_ON(dev, intel_phy_is_tc(dev_priv, phy)); 6517 + drm_WARN_ON(dev, intel_encoder_is_tc(intel_encoder)); 6693 6518 type = DRM_MODE_CONNECTOR_eDP; 6694 6519 intel_encoder->type = INTEL_OUTPUT_EDP; 6695 6520 ··· 6732 6557 intel_connector->get_hw_state = intel_ddi_connector_get_hw_state; 6733 6558 else 6734 6559 intel_connector->get_hw_state = intel_connector_get_hw_state; 6560 + intel_connector->sync_state = intel_dp_connector_sync_state; 6735 6561 6736 6562 if (!intel_edp_init_connector(intel_dp, intel_connector)) { 6737 6563 intel_dp_aux_fini(intel_dp);
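The Adaptive Sync SDP added here is a small fixed-layout packet: header bytes 0x00 / SDP type / 0x02 / length 9, then the operation mode, a 16-bit vtotal in little-endian order and a 10-bit target refresh rate split across db[3]/db[4]. A self-contained pack/unpack round trip of that layout — stand-in types and locally defined constants that mirror intel_dp_as_sdp_pack() rather than copying the driver code — looks like:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define SDP_TYPE_ADAPTIVE_SYNC	0x22	/* stand-in for DP_SDP_ADAPTIVE_SYNC */
#define AS_SDP_REVISION		0x02
#define AS_SDP_LENGTH		0x9

struct as_sdp_fields {
	uint8_t mode;
	uint16_t vtotal;
	uint16_t target_rr;	/* 10 bits */
};

struct dp_sdp_buf {
	uint8_t hb[4];		/* header bytes HB0..HB3 */
	uint8_t db[32];		/* data bytes */
};

static void as_sdp_pack(const struct as_sdp_fields *as, struct dp_sdp_buf *sdp)
{
	memset(sdp, 0, sizeof(*sdp));
	sdp->hb[0] = 0;
	sdp->hb[1] = SDP_TYPE_ADAPTIVE_SYNC;
	sdp->hb[2] = AS_SDP_REVISION;
	sdp->hb[3] = AS_SDP_LENGTH;
	sdp->db[0] = as->mode;
	sdp->db[1] = as->vtotal & 0xff;
	sdp->db[2] = (as->vtotal >> 8) & 0xff;
	sdp->db[3] = as->target_rr & 0xff;
	sdp->db[4] = (as->target_rr >> 8) & 0x3;
}

static int as_sdp_unpack(struct as_sdp_fields *as, const struct dp_sdp_buf *sdp)
{
	if (sdp->hb[0] != 0 || sdp->hb[1] != SDP_TYPE_ADAPTIVE_SYNC ||
	    sdp->hb[2] != AS_SDP_REVISION || (sdp->hb[3] & 0x3f) != AS_SDP_LENGTH)
		return -1;

	as->mode = sdp->db[0];
	as->vtotal = sdp->db[1] | (sdp->db[2] << 8);
	as->target_rr = sdp->db[3] | ((sdp->db[4] & 0x3) << 8);
	return 0;
}

int main(void)
{
	struct as_sdp_fields in = { .mode = 1, .vtotal = 2222, .target_rr = 0 };
	struct as_sdp_fields out;
	struct dp_sdp_buf sdp;

	as_sdp_pack(&in, &sdp);
	assert(as_sdp_unpack(&out, &sdp) == 0);
	assert(out.mode == in.mode && out.vtotal == in.vtotal);
	printf("vtotal %u survives the round trip\n", out.vtotal);
	return 0;
}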
+4 -1
drivers/gpu/drm/i915/display/intel_dp.h
··· 88 88 struct drm_connector_state *conn_state); 89 89 bool intel_dp_has_hdmi_sink(struct intel_dp *intel_dp); 90 90 bool intel_dp_is_edp(struct intel_dp *intel_dp); 91 + bool intel_dp_as_sdp_supported(struct intel_dp *intel_dp); 91 92 bool intel_dp_is_uhbr(const struct intel_crtc_state *crtc_state); 92 93 int intel_dp_link_symbol_size(int rate); 93 94 int intel_dp_link_symbol_clock(int rate); ··· 120 119 int bw_overhead); 121 120 int intel_dp_max_link_data_rate(struct intel_dp *intel_dp, 122 121 int max_dprx_rate, int max_dprx_lanes); 123 - bool intel_dp_can_bigjoiner(struct intel_dp *intel_dp); 122 + bool intel_dp_joiner_needs_dsc(struct drm_i915_private *i915, bool use_joiner); 123 + bool intel_dp_has_bigjoiner(struct intel_dp *intel_dp); 124 124 bool intel_dp_needs_vsc_sdp(const struct intel_crtc_state *crtc_state, 125 125 const struct drm_connector_state *conn_state); 126 126 void intel_dp_set_infoframes(struct intel_encoder *encoder, bool enable, ··· 151 149 int mode_clock, int mode_hdisplay, 152 150 bool bigjoiner); 153 151 bool intel_dp_need_bigjoiner(struct intel_dp *intel_dp, 152 + struct intel_connector *connector, 154 153 int hdisplay, int clock); 155 154 156 155 static inline unsigned int intel_dp_unused_lane_mask(int lane_count)
+10 -5
drivers/gpu/drm/i915/display/intel_dp_aux.c
··· 61 61 u32 status; 62 62 int ret; 63 63 64 - ret = __intel_de_wait_for_register(i915, ch_ctl, 65 - DP_AUX_CH_CTL_SEND_BUSY, 0, 66 - 2, timeout_ms, &status); 64 + ret = intel_de_wait_custom(i915, ch_ctl, DP_AUX_CH_CTL_SEND_BUSY, 0, 65 + 2, timeout_ms, &status); 67 66 68 67 if (ret == -ETIMEDOUT) 69 68 drm_err(&i915->drm, ··· 142 143 return precharge + preamble; 143 144 } 144 145 145 - static int intel_dp_aux_fw_sync_len(void) 146 + int intel_dp_aux_fw_sync_len(void) 146 147 { 147 - int precharge = 10; /* 10-16 */ 148 + /* 149 + * We faced some glitches on a Dell Precision 5490 MTL laptop with panel: 150 + * "Manufacturer: AUO, Model: 63898" when using the HW default of 18. Using 20 151 + * fixes these problems with the panel, and is still within the range 152 + * mentioned in the eDP specification. 153 + */ 154 + int precharge = 12; /* 10-16 */ 148 155 int preamble = 8; 149 156 150 157 return precharge + preamble;
+1
drivers/gpu/drm/i915/display/intel_dp_aux.h
··· 20 20 21 21 void intel_dp_aux_irq_handler(struct drm_i915_private *i915); 22 22 u32 intel_dp_aux_pack(const u8 *src, int src_bytes); 23 + int intel_dp_aux_fw_sync_len(void); 23 24 24 25 #endif /* __INTEL_DP_AUX_H__ */
+11 -11
drivers/gpu/drm/i915/display/intel_dp_hdcp.c
··· 691 691 u8 bcaps; 692 692 int ret; 693 693 694 + *hdcp_capable = false; 695 + *hdcp2_capable = false; 694 696 if (!intel_encoder_is_mst(connector->encoder)) 695 697 return -EINVAL; 696 698 697 699 ret = _intel_dp_hdcp2_get_capability(aux, hdcp2_capable); 698 700 if (ret) 699 - return ret; 701 + drm_dbg_kms(&i915->drm, 702 + "HDCP2 DPCD capability read failed err: %d\n", ret); 700 703 701 704 ret = intel_dp_hdcp_read_bcaps(aux, i915, &bcaps); 702 705 if (ret) ··· 769 766 return -EINVAL; 770 767 771 768 /* Wait for encryption confirmation */ 772 - if (intel_de_wait_for_register(i915, 773 - HDCP_STATUS(i915, cpu_transcoder, port), 774 - stream_enc_status, 775 - enable ? stream_enc_status : 0, 776 - HDCP_ENCRYPT_STATUS_CHANGE_TIMEOUT_MS)) { 769 + if (intel_de_wait(i915, HDCP_STATUS(i915, cpu_transcoder, port), 770 + stream_enc_status, enable ? stream_enc_status : 0, 771 + HDCP_ENCRYPT_STATUS_CHANGE_TIMEOUT_MS)) { 777 772 drm_err(&i915->drm, "Timed out waiting for transcoder: %s stream encryption %s\n", 778 773 transcoder_name(cpu_transcoder), enable ? "enabled" : "disabled"); 779 774 return -ETIMEDOUT; ··· 802 801 return ret; 803 802 804 803 /* Wait for encryption confirmation */ 805 - if (intel_de_wait_for_register(i915, 806 - HDCP2_STREAM_STATUS(i915, cpu_transcoder, pipe), 807 - STREAM_ENCRYPTION_STATUS, 808 - enable ? STREAM_ENCRYPTION_STATUS : 0, 809 - HDCP_ENCRYPT_STATUS_CHANGE_TIMEOUT_MS)) { 804 + if (intel_de_wait(i915, HDCP2_STREAM_STATUS(i915, cpu_transcoder, pipe), 805 + STREAM_ENCRYPTION_STATUS, 806 + enable ? STREAM_ENCRYPTION_STATUS : 0, 807 + HDCP_ENCRYPT_STATUS_CHANGE_TIMEOUT_MS)) { 810 808 drm_err(&i915->drm, "Timed out waiting for transcoder: %s stream encryption %s\n", 811 809 transcoder_name(cpu_transcoder), enable ? "enabled" : "disabled"); 812 810 return -ETIMEDOUT;
+97 -47
drivers/gpu/drm/i915/display/intel_dp_mst.c
··· 88 88 89 89 if (dsc) { 90 90 flags |= DRM_DP_BW_OVERHEAD_DSC; 91 - /* TODO: add support for bigjoiner */ 92 91 dsc_slice_count = intel_dp_dsc_get_slice_count(connector, 93 92 adjusted_mode->clock, 94 93 adjusted_mode->hdisplay, 95 - false); 94 + crtc_state->bigjoiner_pipes); 96 95 } 97 96 98 97 overhead = drm_dp_bw_overhead(crtc_state->lane_count, ··· 524 525 { 525 526 struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); 526 527 struct intel_atomic_state *state = to_intel_atomic_state(conn_state->state); 528 + struct intel_crtc *crtc = to_intel_crtc(pipe_config->uapi.crtc); 527 529 struct intel_dp_mst_encoder *intel_mst = enc_to_mst(encoder); 528 530 struct intel_dp *intel_dp = &intel_mst->primary->dp; 529 - const struct intel_connector *connector = 531 + struct intel_connector *connector = 530 532 to_intel_connector(conn_state->connector); 531 533 const struct drm_display_mode *adjusted_mode = 532 534 &pipe_config->hw.adjusted_mode; 533 535 struct link_config_limits limits; 534 - bool dsc_needed; 536 + bool dsc_needed, joiner_needs_dsc; 535 537 int ret = 0; 536 538 537 539 if (pipe_config->fec_enable && ··· 542 542 if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN) 543 543 return -EINVAL; 544 544 545 + if (intel_dp_need_bigjoiner(intel_dp, connector, 546 + adjusted_mode->crtc_hdisplay, 547 + adjusted_mode->crtc_clock)) 548 + pipe_config->bigjoiner_pipes = GENMASK(crtc->pipe + 1, crtc->pipe); 549 + 545 550 pipe_config->sink_format = INTEL_OUTPUT_FORMAT_RGB; 546 551 pipe_config->output_format = INTEL_OUTPUT_FORMAT_RGB; 547 552 pipe_config->has_pch_encoder = false; 548 553 549 - dsc_needed = intel_dp->force_dsc_en || 554 + joiner_needs_dsc = intel_dp_joiner_needs_dsc(dev_priv, pipe_config->bigjoiner_pipes); 555 + 556 + dsc_needed = joiner_needs_dsc || intel_dp->force_dsc_en || 550 557 !intel_dp_mst_compute_config_limits(intel_dp, 551 558 connector, 552 559 pipe_config, ··· 573 566 574 567 /* enable compression if the mode doesn't fit available BW */ 575 568 if (dsc_needed) { 576 - drm_dbg_kms(&dev_priv->drm, "Try DSC (fallback=%s, force=%s)\n", 577 - str_yes_no(ret), 569 + drm_dbg_kms(&dev_priv->drm, "Try DSC (fallback=%s, joiner=%s, force=%s)\n", 570 + str_yes_no(ret), str_yes_no(joiner_needs_dsc), 578 571 str_yes_no(intel_dp->force_dsc_en)); 579 572 580 573 if (!intel_dp_mst_dsc_source_support(pipe_config)) ··· 961 954 struct drm_dp_mst_atomic_payload *new_payload = 962 955 drm_atomic_get_mst_payload_state(new_mst_state, connector->port); 963 956 struct drm_i915_private *dev_priv = to_i915(connector->base.dev); 957 + struct intel_crtc *pipe_crtc; 964 958 bool last_mst_stream; 965 959 966 960 intel_dp->active_mst_links--; ··· 970 962 DISPLAY_VER(dev_priv) >= 12 && last_mst_stream && 971 963 !intel_dp_mst_is_master_trans(old_crtc_state)); 972 964 973 - intel_crtc_vblank_off(old_crtc_state); 965 + for_each_intel_crtc_in_pipe_mask(&dev_priv->drm, pipe_crtc, 966 + intel_crtc_joined_pipe_mask(old_crtc_state)) { 967 + const struct intel_crtc_state *old_pipe_crtc_state = 968 + intel_atomic_get_old_crtc_state(state, pipe_crtc); 969 + 970 + intel_crtc_vblank_off(old_pipe_crtc_state); 971 + } 974 972 975 973 intel_disable_transcoder(old_crtc_state); 976 974 ··· 994 980 995 981 intel_ddi_disable_transcoder_func(old_crtc_state); 996 982 997 - intel_dsc_disable(old_crtc_state); 983 + for_each_intel_crtc_in_pipe_mask(&dev_priv->drm, pipe_crtc, 984 + intel_crtc_joined_pipe_mask(old_crtc_state)) { 985 + const struct intel_crtc_state *old_pipe_crtc_state = 986 + 
intel_atomic_get_old_crtc_state(state, pipe_crtc); 998 987 999 - if (DISPLAY_VER(dev_priv) >= 9) 1000 - skl_scaler_disable(old_crtc_state); 1001 - else 1002 - ilk_pfit_disable(old_crtc_state); 988 + intel_dsc_disable(old_pipe_crtc_state); 989 + 990 + if (DISPLAY_VER(dev_priv) >= 9) 991 + skl_scaler_disable(old_pipe_crtc_state); 992 + else 993 + ilk_pfit_disable(old_pipe_crtc_state); 994 + } 1003 995 1004 996 /* 1005 997 * Power down mst path before disabling the port, otherwise we end ··· 1137 1117 intel_ddi_set_dp_msa(pipe_config, conn_state); 1138 1118 } 1139 1119 1120 + static void enable_bs_jitter_was(const struct intel_crtc_state *crtc_state) 1121 + { 1122 + struct drm_i915_private *i915 = to_i915(crtc_state->uapi.crtc->dev); 1123 + u32 clear = 0; 1124 + u32 set = 0; 1125 + 1126 + if (!IS_ALDERLAKE_P(i915)) 1127 + return; 1128 + 1129 + if (!IS_DISPLAY_STEP(i915, STEP_D0, STEP_FOREVER)) 1130 + return; 1131 + 1132 + /* Wa_14013163432:adlp */ 1133 + if (crtc_state->fec_enable || intel_dp_is_uhbr(crtc_state)) 1134 + set |= DP_MST_FEC_BS_JITTER_WA(crtc_state->cpu_transcoder); 1135 + 1136 + /* Wa_14014143976:adlp */ 1137 + if (IS_DISPLAY_STEP(i915, STEP_E0, STEP_FOREVER)) { 1138 + if (intel_dp_is_uhbr(crtc_state)) 1139 + set |= DP_MST_SHORT_HBLANK_WA(crtc_state->cpu_transcoder); 1140 + else if (crtc_state->fec_enable) 1141 + clear |= DP_MST_SHORT_HBLANK_WA(crtc_state->cpu_transcoder); 1142 + 1143 + if (crtc_state->fec_enable || intel_dp_is_uhbr(crtc_state)) 1144 + set |= DP_MST_DPT_DPTP_ALIGN_WA(crtc_state->cpu_transcoder); 1145 + } 1146 + 1147 + if (!clear && !set) 1148 + return; 1149 + 1150 + intel_de_rmw(i915, CHICKEN_MISC_3, clear, set); 1151 + } 1152 + 1140 1153 static void intel_mst_enable_dp(struct intel_atomic_state *state, 1141 1154 struct intel_encoder *encoder, 1142 1155 const struct intel_crtc_state *pipe_config, ··· 1184 1131 drm_atomic_get_new_mst_topology_state(&state->base, &intel_dp->mst_mgr); 1185 1132 enum transcoder trans = pipe_config->cpu_transcoder; 1186 1133 bool first_mst_stream = intel_dp->active_mst_links == 1; 1134 + struct intel_crtc *pipe_crtc; 1187 1135 1188 1136 drm_WARN_ON(&dev_priv->drm, pipe_config->has_pch_encoder); 1189 1137 ··· 1198 1144 intel_de_write(dev_priv, TRANS_DP2_VFREQLOW(pipe_config->cpu_transcoder), 1199 1145 TRANS_DP2_VFREQ_PIXEL_CLOCK(crtc_clock_hz & 0xffffff)); 1200 1146 } 1147 + 1148 + enable_bs_jitter_was(pipe_config); 1201 1149 1202 1150 intel_ddi_enable_transcoder_func(encoder, pipe_config); 1203 1151 ··· 1228 1172 1229 1173 intel_enable_transcoder(pipe_config); 1230 1174 1231 - intel_crtc_vblank_on(pipe_config); 1175 + for_each_intel_crtc_in_pipe_mask_reverse(&dev_priv->drm, pipe_crtc, 1176 + intel_crtc_joined_pipe_mask(pipe_config)) { 1177 + const struct intel_crtc_state *pipe_crtc_state = 1178 + intel_atomic_get_new_crtc_state(state, pipe_crtc); 1179 + 1180 + intel_crtc_vblank_on(pipe_crtc_state); 1181 + } 1232 1182 1233 1183 intel_hdcp_enable(state, encoder, pipe_config, conn_state); 1234 1184 } ··· 1347 1285 struct drm_dp_mst_topology_mgr *mgr = &intel_dp->mst_mgr; 1348 1286 struct drm_dp_mst_port *port = intel_connector->port; 1349 1287 const int min_bpp = 18; 1350 - int max_dotclk = to_i915(connector->dev)->max_dotclk_freq; 1288 + int max_dotclk = to_i915(connector->dev)->display.cdclk.max_dotclk_freq; 1351 1289 int max_rate, mode_rate, max_lanes, max_link_clock; 1352 1290 int ret; 1353 1291 bool dsc = false, bigjoiner = false; ··· 1364 1302 if (*status != MODE_OK) 1365 1303 return 0; 1366 1304 1367 - if (mode->flags & 
DRM_MODE_FLAG_DBLSCAN) { 1368 - *status = MODE_NO_DBLESCAN; 1305 + if (mode->flags & DRM_MODE_FLAG_DBLCLK) { 1306 + *status = MODE_H_ILLEGAL; 1307 + return 0; 1308 + } 1309 + 1310 + if (mode->clock < 10000) { 1311 + *status = MODE_CLOCK_LOW; 1369 1312 return 0; 1370 1313 } 1371 1314 ··· 1380 1313 max_rate = intel_dp_max_link_data_rate(intel_dp, 1381 1314 max_link_clock, max_lanes); 1382 1315 mode_rate = intel_dp_link_required(mode->clock, min_bpp); 1383 - 1384 - ret = drm_modeset_lock(&mgr->base.lock, ctx); 1385 - if (ret) 1386 - return ret; 1387 1316 1388 1317 /* 1389 1318 * TODO: ··· 1393 1330 * corresponding link capabilities of the sink) in case the 1394 1331 * stream is uncompressed for it by the last branch device. 1395 1332 */ 1333 + if (intel_dp_need_bigjoiner(intel_dp, intel_connector, 1334 + mode->hdisplay, target_clock)) { 1335 + bigjoiner = true; 1336 + max_dotclk *= 2; 1337 + } 1338 + 1339 + ret = drm_modeset_lock(&mgr->base.lock, ctx); 1340 + if (ret) 1341 + return ret; 1342 + 1396 1343 if (mode_rate > max_rate || mode->clock > max_dotclk || 1397 1344 drm_dp_calc_pbn_mode(mode->clock, min_bpp << 4) > port->full_pbn) { 1398 1345 *status = MODE_CLOCK_HIGH; 1399 1346 return 0; 1400 1347 } 1401 1348 1402 - if (mode->clock < 10000) { 1403 - *status = MODE_CLOCK_LOW; 1404 - return 0; 1405 - } 1406 - 1407 - if (mode->flags & DRM_MODE_FLAG_DBLCLK) { 1408 - *status = MODE_H_ILLEGAL; 1409 - return 0; 1410 - } 1411 - 1412 - if (intel_dp_need_bigjoiner(intel_dp, mode->hdisplay, target_clock)) { 1413 - bigjoiner = true; 1414 - max_dotclk *= 2; 1415 - 1416 - /* TODO: add support for bigjoiner */ 1417 - *status = MODE_CLOCK_HIGH; 1418 - return 0; 1419 - } 1420 - 1421 - if (DISPLAY_VER(dev_priv) >= 10 && 1349 + if (HAS_DSC_MST(dev_priv) && 1422 1350 drm_dp_sink_supports_dsc(intel_connector->dp.dsc_dpcd)) { 1423 1351 /* 1424 1352 * TBD pass the connector BPC, ··· 1437 1383 dsc = dsc_max_compressed_bpp && dsc_slice_count; 1438 1384 } 1439 1385 1440 - /* 1441 - * Big joiner configuration needs DSC for TGL which is not true for 1442 - * XE_LPD where uncompressed joiner is supported. 1443 - */ 1444 - if (DISPLAY_VER(dev_priv) < 13 && bigjoiner && !dsc) { 1386 + if (intel_dp_joiner_needs_dsc(dev_priv, bigjoiner) && !dsc) { 1445 1387 *status = MODE_CLOCK_HIGH; 1446 1388 return 0; 1447 1389 } ··· 1447 1397 return 0; 1448 1398 } 1449 1399 1450 - *status = intel_mode_valid_max_plane_size(dev_priv, mode, false); 1400 + *status = intel_mode_valid_max_plane_size(dev_priv, mode, bigjoiner); 1451 1401 return 0; 1452 1402 } 1453 1403
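Bigjoiner support in the MST path now reuses the SST conventions: the master pipe and the following pipe are collected into a bitmask via GENMASK(pipe + 1, pipe), and the enable/disable sequences walk every CRTC in that mask. A tiny standalone illustration of the mask arithmetic — GENMASK() and the pipe walk are re-implemented locally just for the demo, the driver uses the kernel versions:

#include <stdio.h>

#define GENMASK(h, l) (((~0u) << (l)) & (~0u >> (31 - (h))))

enum pipe { PIPE_A, PIPE_B, PIPE_C, PIPE_D, NUM_PIPES };

int main(void)
{
	enum pipe master = PIPE_C;
	/* master pipe plus the consecutive slave pipe, as in the diff */
	unsigned int joined = GENMASK(master + 1, master);

	for (int p = 0; p < NUM_PIPES; p++)
		if (joined & (1u << p))
			printf("pipe %c participates in the joined mode\n",
			       'A' + p);
	return 0;
}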
+1 -1
drivers/gpu/drm/i915/display/intel_dp_tunnel.c
··· 348 348 349 349 out_err: 350 350 drm_dbg_kms(&i915->drm, 351 - "[DPTUN %s][CONNECTOR:%d:%s][ENCODER:%d:%s] Tunnel can't be resumed, will drop and redect it (err %pe)\n", 351 + "[DPTUN %s][CONNECTOR:%d:%s][ENCODER:%d:%s] Tunnel can't be resumed, will drop and reject it (err %pe)\n", 352 352 drm_dp_tunnel_name(intel_dp->tunnel), 353 353 connector->base.base.id, connector->base.name, 354 354 encoder->base.base.id, encoder->base.name,
+2 -5
drivers/gpu/drm/i915/display/intel_dpio_phy.c
··· 399 399 * The flag should get set in 100us according to the HW team, but 400 400 * use 1ms due to occasional timeouts observed with that. 401 401 */ 402 - if (intel_wait_for_register_fw(&dev_priv->uncore, 403 - BXT_PORT_CL1CM_DW0(phy), 404 - PHY_RESERVED | PHY_POWER_GOOD, 405 - PHY_POWER_GOOD, 406 - 1)) 402 + if (intel_de_wait_fw(dev_priv, BXT_PORT_CL1CM_DW0(phy), 403 + PHY_RESERVED | PHY_POWER_GOOD, PHY_POWER_GOOD, 1)) 407 404 drm_err(&dev_priv->drm, "timeout during PHY%d power on\n", 408 405 phy); 409 406
+54 -73
drivers/gpu/drm/i915/display/intel_dpll_mgr.c
··· 107 107 struct intel_crtc *crtc, 108 108 struct intel_encoder *encoder); 109 109 void (*update_ref_clks)(struct drm_i915_private *i915); 110 - void (*dump_hw_state)(struct drm_i915_private *i915, 110 + void (*dump_hw_state)(struct drm_printer *p, 111 111 const struct intel_dpll_hw_state *hw_state); 112 112 bool (*compare_hw_state)(const struct intel_dpll_hw_state *a, 113 113 const struct intel_dpll_hw_state *b); ··· 634 634 return 0; 635 635 } 636 636 637 - static void ibx_dump_hw_state(struct drm_i915_private *i915, 637 + static void ibx_dump_hw_state(struct drm_printer *p, 638 638 const struct intel_dpll_hw_state *hw_state) 639 639 { 640 - drm_dbg_kms(&i915->drm, 641 - "dpll_hw_state: dpll: 0x%x, dpll_md: 0x%x, " 642 - "fp0: 0x%x, fp1: 0x%x\n", 643 - hw_state->dpll, 644 - hw_state->dpll_md, 645 - hw_state->fp0, 646 - hw_state->fp1); 640 + drm_printf(p, "dpll_hw_state: dpll: 0x%x, dpll_md: 0x%x, " 641 + "fp0: 0x%x, fp1: 0x%x\n", 642 + hw_state->dpll, 643 + hw_state->dpll_md, 644 + hw_state->fp0, 645 + hw_state->fp1); 647 646 } 648 647 649 648 static bool ibx_compare_hw_state(const struct intel_dpll_hw_state *a, ··· 1224 1225 i915->display.dpll.ref_clks.nssc = 135000; 1225 1226 } 1226 1227 1227 - static void hsw_dump_hw_state(struct drm_i915_private *i915, 1228 + static void hsw_dump_hw_state(struct drm_printer *p, 1228 1229 const struct intel_dpll_hw_state *hw_state) 1229 1230 { 1230 - drm_dbg_kms(&i915->drm, "dpll_hw_state: wrpll: 0x%x spll: 0x%x\n", 1231 - hw_state->wrpll, hw_state->spll); 1231 + drm_printf(p, "dpll_hw_state: wrpll: 0x%x spll: 0x%x\n", 1232 + hw_state->wrpll, hw_state->spll); 1232 1233 } 1233 1234 1234 1235 static bool hsw_compare_hw_state(const struct intel_dpll_hw_state *a, ··· 1938 1939 i915->display.dpll.ref_clks.nssc = i915->display.cdclk.hw.ref; 1939 1940 } 1940 1941 1941 - static void skl_dump_hw_state(struct drm_i915_private *i915, 1942 + static void skl_dump_hw_state(struct drm_printer *p, 1942 1943 const struct intel_dpll_hw_state *hw_state) 1943 1944 { 1944 - drm_dbg_kms(&i915->drm, "dpll_hw_state: " 1945 - "ctrl1: 0x%x, cfgcr1: 0x%x, cfgcr2: 0x%x\n", 1946 - hw_state->ctrl1, 1947 - hw_state->cfgcr1, 1948 - hw_state->cfgcr2); 1945 + drm_printf(p, "dpll_hw_state: ctrl1: 0x%x, cfgcr1: 0x%x, cfgcr2: 0x%x\n", 1946 + hw_state->ctrl1, hw_state->cfgcr1, hw_state->cfgcr2); 1949 1947 } 1950 1948 1951 1949 static bool skl_compare_hw_state(const struct intel_dpll_hw_state *a, ··· 2398 2402 /* DSI non-SSC ref 19.2MHz */ 2399 2403 } 2400 2404 2401 - static void bxt_dump_hw_state(struct drm_i915_private *i915, 2405 + static void bxt_dump_hw_state(struct drm_printer *p, 2402 2406 const struct intel_dpll_hw_state *hw_state) 2403 2407 { 2404 - drm_dbg_kms(&i915->drm, "dpll_hw_state: ebb0: 0x%x, ebb4: 0x%x," 2405 - "pll0: 0x%x, pll1: 0x%x, pll2: 0x%x, pll3: 0x%x, " 2406 - "pll6: 0x%x, pll8: 0x%x, pll9: 0x%x, pll10: 0x%x, pcsdw12: 0x%x\n", 2407 - hw_state->ebb0, 2408 - hw_state->ebb4, 2409 - hw_state->pll0, 2410 - hw_state->pll1, 2411 - hw_state->pll2, 2412 - hw_state->pll3, 2413 - hw_state->pll6, 2414 - hw_state->pll8, 2415 - hw_state->pll9, 2416 - hw_state->pll10, 2417 - hw_state->pcsdw12); 2408 + drm_printf(p, "dpll_hw_state: ebb0: 0x%x, ebb4: 0x%x," 2409 + "pll0: 0x%x, pll1: 0x%x, pll2: 0x%x, pll3: 0x%x, " 2410 + "pll6: 0x%x, pll8: 0x%x, pll9: 0x%x, pll10: 0x%x, pcsdw12: 0x%x\n", 2411 + hw_state->ebb0, hw_state->ebb4, 2412 + hw_state->pll0, hw_state->pll1, hw_state->pll2, hw_state->pll3, 2413 + hw_state->pll6, hw_state->pll8, hw_state->pll9, hw_state->pll10, 2414 + 
hw_state->pcsdw12); 2418 2415 } 2419 2416 2420 2417 static bool bxt_compare_hw_state(const struct intel_dpll_hw_state *a, ··· 3378 3389 struct intel_crtc *crtc, 3379 3390 struct intel_encoder *encoder) 3380 3391 { 3381 - struct drm_i915_private *i915 = to_i915(state->base.dev); 3382 3392 struct intel_crtc_state *crtc_state = 3383 3393 intel_atomic_get_new_crtc_state(state, crtc); 3384 3394 struct icl_port_dpll *port_dpll = ··· 3396 3408 3397 3409 3398 3410 port_dpll = &crtc_state->icl_port_dplls[ICL_PORT_DPLL_MG_PHY]; 3399 - dpll_id = icl_tc_port_to_pll_id(intel_port_to_tc(i915, 3400 - encoder->port)); 3411 + dpll_id = icl_tc_port_to_pll_id(intel_encoder_to_tc(encoder)); 3401 3412 port_dpll->pll = intel_find_shared_dpll(state, crtc, 3402 3413 &port_dpll->hw_state, 3403 3414 BIT(dpll_id)); ··· 3422 3435 struct intel_crtc *crtc, 3423 3436 struct intel_encoder *encoder) 3424 3437 { 3425 - struct drm_i915_private *i915 = to_i915(state->base.dev); 3426 - enum phy phy = intel_port_to_phy(i915, encoder->port); 3427 - 3428 - if (intel_phy_is_combo(i915, phy)) 3438 + if (intel_encoder_is_combo(encoder)) 3429 3439 return icl_compute_combo_phy_dpll(state, crtc); 3430 - else if (intel_phy_is_tc(i915, phy)) 3440 + else if (intel_encoder_is_tc(encoder)) 3431 3441 return icl_compute_tc_phy_dplls(state, crtc); 3432 3442 3433 - MISSING_CASE(phy); 3443 + MISSING_CASE(encoder->port); 3434 3444 3435 3445 return 0; 3436 3446 } ··· 3436 3452 struct intel_crtc *crtc, 3437 3453 struct intel_encoder *encoder) 3438 3454 { 3439 - struct drm_i915_private *i915 = to_i915(state->base.dev); 3440 - enum phy phy = intel_port_to_phy(i915, encoder->port); 3441 - 3442 - if (intel_phy_is_combo(i915, phy)) 3455 + if (intel_encoder_is_combo(encoder)) 3443 3456 return icl_get_combo_phy_dpll(state, crtc, encoder); 3444 - else if (intel_phy_is_tc(i915, phy)) 3457 + else if (intel_encoder_is_tc(encoder)) 3445 3458 return icl_get_tc_phy_dplls(state, crtc, encoder); 3446 3459 3447 - MISSING_CASE(phy); 3460 + MISSING_CASE(encoder->port); 3448 3461 3449 3462 return -EINVAL; 3450 3463 } ··· 4007 4026 i915->display.dpll.ref_clks.nssc = i915->display.cdclk.hw.ref; 4008 4027 } 4009 4028 4010 - static void icl_dump_hw_state(struct drm_i915_private *i915, 4029 + static void icl_dump_hw_state(struct drm_printer *p, 4011 4030 const struct intel_dpll_hw_state *hw_state) 4012 4031 { 4013 - drm_dbg_kms(&i915->drm, 4014 - "dpll_hw_state: cfgcr0: 0x%x, cfgcr1: 0x%x, div0: 0x%x, " 4015 - "mg_refclkin_ctl: 0x%x, hg_clktop2_coreclkctl1: 0x%x, " 4016 - "mg_clktop2_hsclkctl: 0x%x, mg_pll_div0: 0x%x, " 4017 - "mg_pll_div2: 0x%x, mg_pll_lf: 0x%x, " 4018 - "mg_pll_frac_lock: 0x%x, mg_pll_ssc: 0x%x, " 4019 - "mg_pll_bias: 0x%x, mg_pll_tdc_coldst_bias: 0x%x\n", 4020 - hw_state->cfgcr0, hw_state->cfgcr1, 4021 - hw_state->div0, 4022 - hw_state->mg_refclkin_ctl, 4023 - hw_state->mg_clktop2_coreclkctl1, 4024 - hw_state->mg_clktop2_hsclkctl, 4025 - hw_state->mg_pll_div0, 4026 - hw_state->mg_pll_div1, 4027 - hw_state->mg_pll_lf, 4028 - hw_state->mg_pll_frac_lock, 4029 - hw_state->mg_pll_ssc, 4030 - hw_state->mg_pll_bias, 4031 - hw_state->mg_pll_tdc_coldst_bias); 4032 + drm_printf(p, "dpll_hw_state: cfgcr0: 0x%x, cfgcr1: 0x%x, div0: 0x%x, " 4033 + "mg_refclkin_ctl: 0x%x, hg_clktop2_coreclkctl1: 0x%x, " 4034 + "mg_clktop2_hsclkctl: 0x%x, mg_pll_div0: 0x%x, " 4035 + "mg_pll_div2: 0x%x, mg_pll_lf: 0x%x, " 4036 + "mg_pll_frac_lock: 0x%x, mg_pll_ssc: 0x%x, " 4037 + "mg_pll_bias: 0x%x, mg_pll_tdc_coldst_bias: 0x%x\n", 4038 + hw_state->cfgcr0, hw_state->cfgcr1, 
hw_state->div0, 4039 + hw_state->mg_refclkin_ctl, 4040 + hw_state->mg_clktop2_coreclkctl1, 4041 + hw_state->mg_clktop2_hsclkctl, 4042 + hw_state->mg_pll_div0, 4043 + hw_state->mg_pll_div1, 4044 + hw_state->mg_pll_lf, 4045 + hw_state->mg_pll_frac_lock, 4046 + hw_state->mg_pll_ssc, 4047 + hw_state->mg_pll_bias, 4048 + hw_state->mg_pll_tdc_coldst_bias); 4032 4049 } 4033 4050 4034 4051 static bool icl_compare_hw_state(const struct intel_dpll_hw_state *a, ··· 4493 4514 } 4494 4515 4495 4516 /** 4496 - * intel_dpll_dump_hw_state - write hw_state to dmesg 4517 + * intel_dpll_dump_hw_state - dump hw_state 4497 4518 * @i915: i915 drm device 4498 - * @hw_state: hw state to be written to the log 4519 + * @p: where to print the state to 4520 + * @hw_state: hw state to be dumped 4499 4521 * 4500 - * Write the relevant values in @hw_state to dmesg using drm_dbg_kms. 4522 + * Dumo out the relevant values in @hw_state. 4501 4523 */ 4502 4524 void intel_dpll_dump_hw_state(struct drm_i915_private *i915, 4525 + struct drm_printer *p, 4503 4526 const struct intel_dpll_hw_state *hw_state) 4504 4527 { 4505 4528 if (i915->display.dpll.mgr) { 4506 - i915->display.dpll.mgr->dump_hw_state(i915, hw_state); 4529 + i915->display.dpll.mgr->dump_hw_state(p, hw_state); 4507 4530 } else { 4508 4531 /* fallback for platforms that don't use the shared dpll 4509 4532 * infrastructure 4510 4533 */ 4511 - ibx_dump_hw_state(i915, hw_state); 4534 + ibx_dump_hw_state(p, hw_state); 4512 4535 } 4513 4536 } 4514 4537
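The dump_hw_state() vfuncs now take a struct drm_printer instead of logging directly, so one dump implementation can serve dmesg, debugfs and the state checker alike. The underlying pattern is just a function pointer plus sink-specific state; here is a standalone sketch of that indirection (it models the idea only and is not the drm_print API):

#include <stdarg.h>
#include <stdio.h>

struct printer {
	void (*emit)(struct printer *p, const char *fmt, va_list ap);
	FILE *sink;		/* sink-specific state */
};

static void printer_printf(struct printer *p, const char *fmt, ...)
{
	va_list ap;

	va_start(ap, fmt);
	p->emit(p, fmt, ap);
	va_end(ap);
}

static void emit_to_stream(struct printer *p, const char *fmt, va_list ap)
{
	vfprintf(p->sink, fmt, ap);
}

/* The dump routine only knows about the printer, never about the sink. */
static void dump_hw_state(struct printer *p, unsigned int wrpll, unsigned int spll)
{
	printer_printf(p, "dpll_hw_state: wrpll: 0x%x spll: 0x%x\n", wrpll, spll);
}

int main(void)
{
	struct printer to_stdout = { .emit = emit_to_stream, .sink = stdout };
	struct printer to_stderr = { .emit = emit_to_stream, .sink = stderr };

	dump_hw_state(&to_stdout, 0x1234, 0);	/* e.g. a debugfs dump */
	dump_hw_state(&to_stderr, 0x1234, 0);	/* e.g. an error path */
	return 0;
}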
+2
drivers/gpu/drm/i915/display/intel_dpll_mgr.h
··· 36 36 37 37 enum tc_port; 38 38 struct drm_i915_private; 39 + struct drm_printer; 39 40 struct intel_atomic_state; 40 41 struct intel_crtc; 41 42 struct intel_crtc_state; ··· 378 377 void intel_dpll_sanitize_state(struct drm_i915_private *i915); 379 378 380 379 void intel_dpll_dump_hw_state(struct drm_i915_private *i915, 380 + struct drm_printer *p, 381 381 const struct intel_dpll_hw_state *hw_state); 382 382 bool intel_dpll_compare_hw_state(struct drm_i915_private *i915, 383 383 const struct intel_dpll_hw_state *a,
+3 -2
drivers/gpu/drm/i915/display/intel_dsb.c
··· 343 343 static u32 dsb_chicken(struct intel_crtc *crtc) 344 344 { 345 345 if (crtc->mode_flags & I915_MODE_FLAG_VRR) 346 - return DSB_CTRL_WAIT_SAFE_WINDOW | 346 + return DSB_SKIP_WAITS_EN | 347 + DSB_CTRL_WAIT_SAFE_WINDOW | 347 348 DSB_CTRL_NO_WAIT_VBLANK | 348 349 DSB_INST_WAIT_SAFE_WINDOW | 349 350 DSB_INST_NO_WAIT_VBLANK; 350 351 else 351 - return 0; 352 + return DSB_SKIP_WAITS_EN; 352 353 } 353 354 354 355 static void _intel_dsb_commit(struct intel_dsb *dsb, u32 ctrl,
+1 -4
drivers/gpu/drm/i915/display/intel_dsi.c
··· 64 64 struct intel_connector *intel_connector = to_intel_connector(connector); 65 65 const struct drm_display_mode *fixed_mode = 66 66 intel_panel_fixed_mode(intel_connector, mode); 67 - int max_dotclk = to_i915(connector->dev)->max_dotclk_freq; 67 + int max_dotclk = to_i915(connector->dev)->display.cdclk.max_dotclk_freq; 68 68 enum drm_mode_status status; 69 69 70 70 drm_dbg_kms(&dev_priv->drm, "\n"); 71 - 72 - if (mode->flags & DRM_MODE_FLAG_DBLSCAN) 73 - return MODE_NO_DBLESCAN; 74 71 75 72 status = intel_panel_mode_valid(intel_connector, mode); 76 73 if (status != MODE_OK)
+1 -4
drivers/gpu/drm/i915/display/intel_dvo.c
··· 223 223 struct intel_dvo *intel_dvo = intel_attached_dvo(connector); 224 224 const struct drm_display_mode *fixed_mode = 225 225 intel_panel_fixed_mode(connector, mode); 226 - int max_dotclk = to_i915(connector->base.dev)->max_dotclk_freq; 226 + int max_dotclk = to_i915(connector->base.dev)->display.cdclk.max_dotclk_freq; 227 227 int target_clock = mode->clock; 228 228 enum drm_mode_status status; 229 229 230 230 status = intel_cpu_transcoder_mode_valid(i915, mode); 231 231 if (status != MODE_OK) 232 232 return status; 233 - 234 - if (mode->flags & DRM_MODE_FLAG_DBLSCAN) 235 - return MODE_NO_DBLESCAN; 236 233 237 234 /* XXX: Validate clock range */ 238 235
+3 -3
drivers/gpu/drm/i915/display/intel_fb.c
··· 1106 1106 { 1107 1107 struct drm_i915_private *i915 = to_i915(fb->dev); 1108 1108 unsigned int height; 1109 - u32 alignment; 1109 + u32 alignment, unused; 1110 1110 1111 1111 if (DISPLAY_VER(i915) >= 12 && 1112 1112 !intel_fb_needs_pot_stride_remap(to_intel_framebuffer(fb)) && ··· 1128 1128 height = ALIGN(height, intel_tile_height(fb, color_plane)); 1129 1129 1130 1130 /* Catch potential overflows early */ 1131 - if (add_overflows_t(u32, mul_u32_u32(height, fb->pitches[color_plane]), 1132 - fb->offsets[color_plane])) { 1131 + if (check_add_overflow(mul_u32_u32(height, fb->pitches[color_plane]), 1132 + fb->offsets[color_plane], &unused)) { 1133 1133 drm_dbg_kms(&i915->drm, 1134 1134 "Bad offset 0x%08x or pitch %d for color plane %d\n", 1135 1135 fb->offsets[color_plane], fb->pitches[color_plane],
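The intel_fb.c hunk swaps the driver-local add_overflows_t() for the generic check_add_overflow(), which returns true on overflow and still stores the wrapped sum in its third argument. The same semantics can be tried outside the kernel with the compiler builtin that check_add_overflow() wraps:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t size = 0xffffff00u, offset = 0x200u, total;

	/* Returns true on overflow; 'total' still receives the wrapped sum. */
	if (__builtin_add_overflow(size, offset, &total))
		printf("bad size 0x%08x or offset 0x%08x\n", size, offset);
	else
		printf("plane ends at 0x%08x\n", total);

	return 0;
}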
+29 -3
drivers/gpu/drm/i915/display/intel_fbc.c
··· 826 826 827 827 static void intel_fbc_program_workarounds(struct intel_fbc *fbc) 828 828 { 829 + struct drm_i915_private *i915 = fbc->i915; 830 + 831 + if (IS_SKYLAKE(i915) || IS_BROXTON(i915)) { 832 + /* 833 + * WaFbcHighMemBwCorruptionAvoidance:skl,bxt 834 + * Display WA #0883: skl,bxt 835 + */ 836 + intel_de_rmw(i915, ILK_DPFC_CHICKEN(fbc->id), 837 + 0, DPFC_DISABLE_DUMMY0); 838 + } 839 + 840 + if (IS_SKYLAKE(i915) || IS_KABYLAKE(i915) || 841 + IS_COFFEELAKE(i915) || IS_COMETLAKE(i915)) { 842 + /* 843 + * WaFbcNukeOnHostModify:skl,kbl,cfl 844 + * Display WA #0873: skl,kbl,cfl 845 + */ 846 + intel_de_rmw(i915, ILK_DPFC_CHICKEN(fbc->id), 847 + 0, DPFC_NUKE_ON_ANY_MODIFICATION); 848 + } 849 + 850 + /* Wa_1409120013:icl,jsl,tgl,dg1 */ 851 + if (IS_DISPLAY_VER(i915, 11, 12)) 852 + intel_de_rmw(i915, ILK_DPFC_CHICKEN(fbc->id), 853 + 0, DPFC_CHICKEN_COMP_DUMMY_PIXEL); 854 + 829 855 /* Wa_22014263786:icl,jsl,tgl,dg1,rkl,adls,adlp,mtl */ 830 - if (DISPLAY_VER(fbc->i915) >= 11 && !IS_DG2(fbc->i915)) 831 - intel_de_rmw(fbc->i915, ILK_DPFC_CHICKEN(fbc->id), 0, 832 - DPFC_CHICKEN_FORCE_SLB_INVALIDATION); 856 + if (DISPLAY_VER(i915) >= 11 && !IS_DG2(i915)) 857 + intel_de_rmw(i915, ILK_DPFC_CHICKEN(fbc->id), 858 + 0, DPFC_CHICKEN_FORCE_SLB_INVALIDATION); 833 859 } 834 860 835 861 static void __intel_fbc_cleanup_cfb(struct intel_fbc *fbc)
+5
drivers/gpu/drm/i915/display/intel_fbdev.c
··· 135 135 return i915_gem_fb_mmap(obj, vma); 136 136 } 137 137 138 + __diag_push(); 139 + __diag_ignore_all("-Woverride-init", "Allow field initialization overrides for fb ops"); 140 + 138 141 static const struct fb_ops intelfb_ops = { 139 142 .owner = THIS_MODULE, 140 143 __FB_DEFAULT_DEFERRED_OPS_RDWR(intel_fbdev), ··· 148 145 __FB_DEFAULT_DEFERRED_OPS_DRAW(intel_fbdev), 149 146 .fb_mmap = intel_fbdev_mmap, 150 147 }; 148 + 149 + __diag_pop(); 151 150 152 151 static int intelfb_create(struct drm_fb_helper *helper, 153 152 struct drm_fb_helper_surface_size *sizes)
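The __diag_push()/__diag_ignore_all()/__diag_pop() annotations around intelfb_ops limit a -Woverride-init suppression to that one initializer, since the __FB_DEFAULT_DEFERRED_OPS_* macros deliberately re-assign some fields. On GCC and Clang these kernel helpers expand to diagnostic pragmas, roughly equivalent to this standalone example (struct and macro are made up for the demo):

#include <stdio.h>

struct ops {
	int a;
	int b;
};

#define DEFAULT_OPS .a = 1, .b = 2

#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Woverride-init"
static const struct ops my_ops = {
	DEFAULT_OPS,
	.b = 3,		/* deliberate override, would otherwise trigger the warning */
};
#pragma GCC diagnostic pop

int main(void)
{
	printf("a=%d b=%d\n", my_ops.a, my_ops.b);
	return 0;
}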
+1 -1
drivers/gpu/drm/i915/display/intel_gmbus.c
··· 411 411 add_wait_queue(&i915->display.gmbus.wait_queue, &wait); 412 412 intel_de_write_fw(i915, GMBUS4(i915), irq_enable); 413 413 414 - ret = intel_de_wait_for_register_fw(i915, GMBUS2(i915), GMBUS_ACTIVE, 0, 10); 414 + ret = intel_de_wait_fw(i915, GMBUS2(i915), GMBUS_ACTIVE, 0, 10); 415 415 416 416 intel_de_write_fw(i915, GMBUS4(i915), 0); 417 417 remove_wait_queue(&i915->display.gmbus.wait_queue, &wait);
+3 -3
drivers/gpu/drm/i915/display/intel_hdcp.c
··· 369 369 } 370 370 371 371 /* Wait for the keys to load (500us) */ 372 - ret = __intel_wait_for_register(&i915->uncore, HDCP_KEY_STATUS, 373 - HDCP_KEY_LOAD_DONE, HDCP_KEY_LOAD_DONE, 374 - 10, 1, &val); 372 + ret = intel_de_wait_custom(i915, HDCP_KEY_STATUS, 373 + HDCP_KEY_LOAD_DONE, HDCP_KEY_LOAD_DONE, 374 + 10, 1, &val); 375 375 if (ret) 376 376 return ret; 377 377 else if (!(val & HDCP_KEY_LOAD_STATUS))
+56 -40
drivers/gpu/drm/i915/display/intel_hdmi.c
··· 114 114 return VIDEO_DIP_ENABLE_GAMUT; 115 115 case DP_SDP_VSC: 116 116 return 0; 117 + case DP_SDP_ADAPTIVE_SYNC: 118 + return 0; 117 119 case HDMI_INFOFRAME_TYPE_AVI: 118 120 return VIDEO_DIP_ENABLE_AVI; 119 121 case HDMI_INFOFRAME_TYPE_SPD: ··· 139 137 return VIDEO_DIP_ENABLE_GMP_HSW; 140 138 case DP_SDP_VSC: 141 139 return VIDEO_DIP_ENABLE_VSC_HSW; 140 + case DP_SDP_ADAPTIVE_SYNC: 141 + return VIDEO_DIP_ENABLE_AS_ADL; 142 142 case DP_SDP_PPS: 143 143 return VDIP_ENABLE_PPS; 144 144 case HDMI_INFOFRAME_TYPE_AVI: ··· 168 164 return HSW_TVIDEO_DIP_GMP_DATA(cpu_transcoder, i); 169 165 case DP_SDP_VSC: 170 166 return HSW_TVIDEO_DIP_VSC_DATA(cpu_transcoder, i); 167 + case DP_SDP_ADAPTIVE_SYNC: 168 + return ADL_TVIDEO_DIP_AS_SDP_DATA(cpu_transcoder, i); 171 169 case DP_SDP_PPS: 172 170 return ICL_VIDEO_DIP_PPS_DATA(cpu_transcoder, i); 173 171 case HDMI_INFOFRAME_TYPE_AVI: ··· 192 186 switch (type) { 193 187 case DP_SDP_VSC: 194 188 return VIDEO_DIP_VSC_DATA_SIZE; 189 + case DP_SDP_ADAPTIVE_SYNC: 190 + return VIDEO_DIP_ASYNC_DATA_SIZE; 195 191 case DP_SDP_PPS: 196 192 return VIDEO_DIP_PPS_DATA_SIZE; 197 193 case HDMI_PACKET_TYPE_GAMUT_METADATA: ··· 571 563 if (DISPLAY_VER(dev_priv) >= 10) 572 564 mask |= VIDEO_DIP_ENABLE_DRM_GLK; 573 565 566 + if (HAS_AS_SDP(dev_priv)) 567 + mask |= VIDEO_DIP_ENABLE_AS_ADL; 568 + 574 569 return val & mask; 575 570 } 576 571 ··· 581 570 HDMI_PACKET_TYPE_GENERAL_CONTROL, 582 571 HDMI_PACKET_TYPE_GAMUT_METADATA, 583 572 DP_SDP_VSC, 573 + DP_SDP_ADAPTIVE_SYNC, 584 574 HDMI_INFOFRAME_TYPE_AVI, 585 575 HDMI_INFOFRAME_TYPE_SPD, 586 576 HDMI_INFOFRAME_TYPE_VENDOR, ··· 1224 1212 val &= ~(VIDEO_DIP_ENABLE_VSC_HSW | VIDEO_DIP_ENABLE_AVI_HSW | 1225 1213 VIDEO_DIP_ENABLE_GCP_HSW | VIDEO_DIP_ENABLE_VS_HSW | 1226 1214 VIDEO_DIP_ENABLE_GMP_HSW | VIDEO_DIP_ENABLE_SPD_HSW | 1227 - VIDEO_DIP_ENABLE_DRM_GLK); 1215 + VIDEO_DIP_ENABLE_DRM_GLK | VIDEO_DIP_ENABLE_AS_ADL); 1228 1216 1229 1217 if (!enable) { 1230 1218 intel_de_write(dev_priv, reg, val); ··· 1844 1832 bool has_hdmi_sink) 1845 1833 { 1846 1834 struct drm_i915_private *dev_priv = intel_hdmi_to_i915(hdmi); 1847 - enum phy phy = intel_port_to_phy(dev_priv, hdmi_to_dig_port(hdmi)->base.port); 1835 + struct intel_encoder *encoder = &hdmi_to_dig_port(hdmi)->base; 1848 1836 1849 1837 if (clock < 25000) 1850 1838 return MODE_CLOCK_LOW; ··· 1866 1854 return MODE_CLOCK_RANGE; 1867 1855 1868 1856 /* ICL+ combo PHY PLL can't generate 500-533.2 MHz */ 1869 - if (intel_phy_is_combo(dev_priv, phy) && clock > 500000 && clock < 533200) 1857 + if (intel_encoder_is_combo(encoder) && clock > 500000 && clock < 533200) 1870 1858 return MODE_CLOCK_RANGE; 1871 1859 1872 1860 /* ICL+ TC PHY PLL can't generate 500-532.8 MHz */ 1873 - if (intel_phy_is_tc(dev_priv, phy) && clock > 500000 && clock < 532800) 1861 + if (intel_encoder_is_tc(encoder) && clock > 500000 && clock < 532800) 1874 1862 return MODE_CLOCK_RANGE; 1875 1863 1876 1864 /* ··· 1993 1981 struct drm_i915_private *dev_priv = intel_hdmi_to_i915(hdmi); 1994 1982 enum drm_mode_status status; 1995 1983 int clock = mode->clock; 1996 - int max_dotclk = to_i915(connector->dev)->max_dotclk_freq; 1984 + int max_dotclk = to_i915(connector->dev)->display.cdclk.max_dotclk_freq; 1997 1985 bool has_hdmi_sink = intel_has_hdmi_sink(hdmi, connector->state); 1998 1986 bool ycbcr_420_only; 1999 1987 enum intel_output_format sink_format; ··· 2676 2664 drm_scdc_set_scrambling(connector, scrambling); 2677 2665 } 2678 2666 2679 - static u8 chv_port_to_ddc_pin(struct drm_i915_private *dev_priv, enum port 
port) 2667 + static u8 chv_encoder_to_ddc_pin(struct intel_encoder *encoder) 2680 2668 { 2669 + enum port port = encoder->port; 2681 2670 u8 ddc_pin; 2682 2671 2683 2672 switch (port) { ··· 2699 2686 return ddc_pin; 2700 2687 } 2701 2688 2702 - static u8 bxt_port_to_ddc_pin(struct drm_i915_private *dev_priv, enum port port) 2689 + static u8 bxt_encoder_to_ddc_pin(struct intel_encoder *encoder) 2703 2690 { 2691 + enum port port = encoder->port; 2704 2692 u8 ddc_pin; 2705 2693 2706 2694 switch (port) { ··· 2719 2705 return ddc_pin; 2720 2706 } 2721 2707 2722 - static u8 cnp_port_to_ddc_pin(struct drm_i915_private *dev_priv, 2723 - enum port port) 2708 + static u8 cnp_encoder_to_ddc_pin(struct intel_encoder *encoder) 2724 2709 { 2710 + enum port port = encoder->port; 2725 2711 u8 ddc_pin; 2726 2712 2727 2713 switch (port) { ··· 2745 2731 return ddc_pin; 2746 2732 } 2747 2733 2748 - static u8 icl_port_to_ddc_pin(struct drm_i915_private *dev_priv, enum port port) 2734 + static u8 icl_encoder_to_ddc_pin(struct intel_encoder *encoder) 2749 2735 { 2750 - enum phy phy = intel_port_to_phy(dev_priv, port); 2736 + struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); 2737 + enum port port = encoder->port; 2751 2738 2752 - if (intel_phy_is_combo(dev_priv, phy)) 2739 + if (intel_encoder_is_combo(encoder)) 2753 2740 return GMBUS_PIN_1_BXT + port; 2754 - else if (intel_phy_is_tc(dev_priv, phy)) 2755 - return GMBUS_PIN_9_TC1_ICP + intel_port_to_tc(dev_priv, port); 2741 + else if (intel_encoder_is_tc(encoder)) 2742 + return GMBUS_PIN_9_TC1_ICP + intel_encoder_to_tc(encoder); 2756 2743 2757 2744 drm_WARN(&dev_priv->drm, 1, "Unknown port:%c\n", port_name(port)); 2758 2745 return GMBUS_PIN_2_BXT; 2759 2746 } 2760 2747 2761 - static u8 mcc_port_to_ddc_pin(struct drm_i915_private *dev_priv, enum port port) 2748 + static u8 mcc_encoder_to_ddc_pin(struct intel_encoder *encoder) 2762 2749 { 2763 - enum phy phy = intel_port_to_phy(dev_priv, port); 2750 + enum phy phy = intel_encoder_to_phy(encoder); 2764 2751 u8 ddc_pin; 2765 2752 2766 2753 switch (phy) { ··· 2782 2767 return ddc_pin; 2783 2768 } 2784 2769 2785 - static u8 rkl_port_to_ddc_pin(struct drm_i915_private *dev_priv, enum port port) 2770 + static u8 rkl_encoder_to_ddc_pin(struct intel_encoder *encoder) 2786 2771 { 2787 - enum phy phy = intel_port_to_phy(dev_priv, port); 2772 + struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); 2773 + enum phy phy = intel_encoder_to_phy(encoder); 2788 2774 2789 - WARN_ON(port == PORT_C); 2775 + WARN_ON(encoder->port == PORT_C); 2790 2776 2791 2777 /* 2792 2778 * Pin mapping for RKL depends on which PCH is present. With TGP, the ··· 2801 2785 return GMBUS_PIN_1_BXT + phy; 2802 2786 } 2803 2787 2804 - static u8 gen9bc_tgp_port_to_ddc_pin(struct drm_i915_private *i915, enum port port) 2788 + static u8 gen9bc_tgp_encoder_to_ddc_pin(struct intel_encoder *encoder) 2805 2789 { 2806 - enum phy phy = intel_port_to_phy(i915, port); 2790 + struct drm_i915_private *i915 = to_i915(encoder->base.dev); 2791 + enum phy phy = intel_encoder_to_phy(encoder); 2807 2792 2808 - drm_WARN_ON(&i915->drm, port == PORT_A); 2793 + drm_WARN_ON(&i915->drm, encoder->port == PORT_A); 2809 2794 2810 2795 /* 2811 2796 * Pin mapping for GEN9 BC depends on which PCH is present. 
With TGP, ··· 2820 2803 return GMBUS_PIN_1_BXT + phy; 2821 2804 } 2822 2805 2823 - static u8 dg1_port_to_ddc_pin(struct drm_i915_private *dev_priv, enum port port) 2806 + static u8 dg1_encoder_to_ddc_pin(struct intel_encoder *encoder) 2824 2807 { 2825 - return intel_port_to_phy(dev_priv, port) + 1; 2808 + return intel_encoder_to_phy(encoder) + 1; 2826 2809 } 2827 2810 2828 - static u8 adls_port_to_ddc_pin(struct drm_i915_private *dev_priv, enum port port) 2811 + static u8 adls_encoder_to_ddc_pin(struct intel_encoder *encoder) 2829 2812 { 2830 - enum phy phy = intel_port_to_phy(dev_priv, port); 2813 + enum phy phy = intel_encoder_to_phy(encoder); 2831 2814 2832 - WARN_ON(port == PORT_B || port == PORT_C); 2815 + WARN_ON(encoder->port == PORT_B || encoder->port == PORT_C); 2833 2816 2834 2817 /* 2835 2818 * Pin mapping for ADL-S requires TC pins for all combo phy outputs ··· 2841 2824 return GMBUS_PIN_9_TC1_ICP + phy - PHY_B; 2842 2825 } 2843 2826 2844 - static u8 g4x_port_to_ddc_pin(struct drm_i915_private *dev_priv, 2845 - enum port port) 2827 + static u8 g4x_encoder_to_ddc_pin(struct intel_encoder *encoder) 2846 2828 { 2829 + enum port port = encoder->port; 2847 2830 u8 ddc_pin; 2848 2831 2849 2832 switch (port) { ··· 2867 2850 static u8 intel_hdmi_default_ddc_pin(struct intel_encoder *encoder) 2868 2851 { 2869 2852 struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); 2870 - enum port port = encoder->port; 2871 2853 u8 ddc_pin; 2872 2854 2873 2855 if (IS_ALDERLAKE_S(dev_priv)) 2874 - ddc_pin = adls_port_to_ddc_pin(dev_priv, port); 2856 + ddc_pin = adls_encoder_to_ddc_pin(encoder); 2875 2857 else if (INTEL_PCH_TYPE(dev_priv) >= PCH_DG1) 2876 - ddc_pin = dg1_port_to_ddc_pin(dev_priv, port); 2858 + ddc_pin = dg1_encoder_to_ddc_pin(encoder); 2877 2859 else if (IS_ROCKETLAKE(dev_priv)) 2878 - ddc_pin = rkl_port_to_ddc_pin(dev_priv, port); 2860 + ddc_pin = rkl_encoder_to_ddc_pin(encoder); 2879 2861 else if (DISPLAY_VER(dev_priv) == 9 && HAS_PCH_TGP(dev_priv)) 2880 - ddc_pin = gen9bc_tgp_port_to_ddc_pin(dev_priv, port); 2862 + ddc_pin = gen9bc_tgp_encoder_to_ddc_pin(encoder); 2881 2863 else if ((IS_JASPERLAKE(dev_priv) || IS_ELKHARTLAKE(dev_priv)) && 2882 2864 HAS_PCH_TGP(dev_priv)) 2883 - ddc_pin = mcc_port_to_ddc_pin(dev_priv, port); 2865 + ddc_pin = mcc_encoder_to_ddc_pin(encoder); 2884 2866 else if (INTEL_PCH_TYPE(dev_priv) >= PCH_ICP) 2885 - ddc_pin = icl_port_to_ddc_pin(dev_priv, port); 2867 + ddc_pin = icl_encoder_to_ddc_pin(encoder); 2886 2868 else if (HAS_PCH_CNP(dev_priv)) 2887 - ddc_pin = cnp_port_to_ddc_pin(dev_priv, port); 2869 + ddc_pin = cnp_encoder_to_ddc_pin(encoder); 2888 2870 else if (IS_GEMINILAKE(dev_priv) || IS_BROXTON(dev_priv)) 2889 - ddc_pin = bxt_port_to_ddc_pin(dev_priv, port); 2871 + ddc_pin = bxt_encoder_to_ddc_pin(encoder); 2890 2872 else if (IS_CHERRYVIEW(dev_priv)) 2891 - ddc_pin = chv_port_to_ddc_pin(dev_priv, port); 2873 + ddc_pin = chv_encoder_to_ddc_pin(encoder); 2892 2874 else 2893 - ddc_pin = g4x_port_to_ddc_pin(dev_priv, port); 2875 + ddc_pin = g4x_encoder_to_ddc_pin(encoder); 2894 2876 2895 2877 return ddc_pin; 2896 2878 }
+1 -1
drivers/gpu/drm/i915/display/intel_hotplug_irq.c
···
1444 1444 
1445 1445 void intel_hpd_irq_setup(struct drm_i915_private *i915)
1446 1446 {
1447 - if (i915->display_irqs_enabled && i915->display.funcs.hotplug)
1447 + if (i915->display.irq.display_irqs_enabled && i915->display.funcs.hotplug)
1448 1448 i915->display.funcs.hotplug->hpd_irq_setup(i915);
1449 1449 }
1450 1450 
+1 -4
drivers/gpu/drm/i915/display/intel_lvds.c
···
392 392 struct drm_i915_private *i915 = to_i915(connector->base.dev);
393 393 const struct drm_display_mode *fixed_mode =
394 394 intel_panel_fixed_mode(connector, mode);
395 - int max_pixclk = to_i915(connector->base.dev)->max_dotclk_freq;
395 + int max_pixclk = to_i915(connector->base.dev)->display.cdclk.max_dotclk_freq;
396 396 enum drm_mode_status status;
397 397 
398 398 status = intel_cpu_transcoder_mode_valid(i915, mode);
399 399 if (status != MODE_OK)
400 400 return status;
401 - 
402 - if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
403 - return MODE_NO_DBLESCAN;
404 401 
405 402 status = intel_panel_mode_valid(connector, mode);
406 403 if (status != MODE_OK)
+11 -47
drivers/gpu/drm/i915/display/intel_opregion.c
··· 27 27 28 28 #include <linux/acpi.h> 29 29 #include <linux/dmi.h> 30 - #include <linux/firmware.h> 31 30 #include <acpi/video.h> 32 31 33 32 #include <drm/drm_edid.h> ··· 262 263 struct opregion_asle *asle; 263 264 struct opregion_asle_ext *asle_ext; 264 265 void *rvda; 265 - void *vbt_firmware; 266 266 const void *vbt; 267 267 u32 vbt_size; 268 268 struct work_struct asle_work; ··· 867 869 { } 868 870 }; 869 871 870 - static int intel_load_vbt_firmware(struct drm_i915_private *dev_priv) 871 - { 872 - struct intel_opregion *opregion = dev_priv->display.opregion; 873 - const struct firmware *fw = NULL; 874 - const char *name = dev_priv->display.params.vbt_firmware; 875 - int ret; 876 - 877 - if (!name || !*name) 878 - return -ENOENT; 879 - 880 - ret = request_firmware(&fw, name, dev_priv->drm.dev); 881 - if (ret) { 882 - drm_err(&dev_priv->drm, 883 - "Requesting VBT firmware \"%s\" failed (%d)\n", 884 - name, ret); 885 - return ret; 886 - } 887 - 888 - if (intel_bios_is_valid_vbt(dev_priv, fw->data, fw->size)) { 889 - opregion->vbt_firmware = kmemdup(fw->data, fw->size, GFP_KERNEL); 890 - if (opregion->vbt_firmware) { 891 - drm_dbg_kms(&dev_priv->drm, 892 - "Found valid VBT firmware \"%s\"\n", name); 893 - opregion->vbt = opregion->vbt_firmware; 894 - opregion->vbt_size = fw->size; 895 - ret = 0; 896 - } else { 897 - ret = -ENOMEM; 898 - } 899 - } else { 900 - drm_dbg_kms(&dev_priv->drm, "Invalid VBT firmware \"%s\"\n", 901 - name); 902 - ret = -EINVAL; 903 - } 904 - 905 - release_firmware(fw); 906 - 907 - return ret; 908 - } 909 - 910 872 int intel_opregion_setup(struct drm_i915_private *dev_priv) 911 873 { 912 874 struct intel_opregion *opregion; ··· 963 1005 if (mboxes & MBOX_BACKLIGHT) { 964 1006 drm_dbg(&dev_priv->drm, "Mailbox #2 for backlight present\n"); 965 1007 } 966 - 967 - if (intel_load_vbt_firmware(dev_priv) == 0) 968 - goto out; 969 1008 970 1009 if (dmi_check_system(intel_no_opregion_vbt)) 971 1010 goto out; ··· 1131 1176 return drm_edid; 1132 1177 } 1133 1178 1179 + bool intel_opregion_vbt_present(struct drm_i915_private *i915) 1180 + { 1181 + struct intel_opregion *opregion = i915->display.opregion; 1182 + 1183 + if (!opregion || !opregion->vbt) 1184 + return false; 1185 + 1186 + return true; 1187 + } 1188 + 1134 1189 const void *intel_opregion_get_vbt(struct drm_i915_private *i915, size_t *size) 1135 1190 { 1136 1191 struct intel_opregion *opregion = i915->display.opregion; ··· 1151 1186 if (size) 1152 1187 *size = opregion->vbt_size; 1153 1188 1154 - return opregion->vbt; 1189 + return kmemdup(opregion->vbt, opregion->vbt_size, GFP_KERNEL); 1155 1190 } 1156 1191 1157 1192 bool intel_opregion_headless_sku(struct drm_i915_private *i915) ··· 1277 1312 memunmap(opregion->header); 1278 1313 if (opregion->rvda) 1279 1314 memunmap(opregion->rvda); 1280 - kfree(opregion->vbt_firmware); 1281 1315 kfree(opregion); 1282 1316 i915->display.opregion = NULL; 1283 1317 }
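One behavioural note on the hunk above: intel_opregion_get_vbt() now returns a kmemdup() copy instead of a pointer into the opregion mapping, so the buffer is owned by the caller. A minimal sketch of the resulting calling convention (the surrounding caller and its i915 pointer are hypothetical, not part of the patch):

	size_t vbt_size;
	const void *vbt = intel_opregion_get_vbt(i915, &vbt_size);

	if (vbt) {
		/* parse the private VBT copy ... */
		kfree(vbt);	/* caller owns and must free the copy */
	}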
+6
drivers/gpu/drm/i915/display/intel_opregion.h
··· 53 53 int intel_opregion_get_panel_type(struct drm_i915_private *dev_priv); 54 54 const struct drm_edid *intel_opregion_get_edid(struct intel_connector *connector); 55 55 56 + bool intel_opregion_vbt_present(struct drm_i915_private *i915); 56 57 const void *intel_opregion_get_vbt(struct drm_i915_private *i915, size_t *size); 57 58 58 59 bool intel_opregion_headless_sku(struct drm_i915_private *i915); ··· 118 117 intel_opregion_get_edid(struct intel_connector *connector) 119 118 { 120 119 return NULL; 120 + } 121 + 122 + static inline bool intel_opregion_vbt_present(struct drm_i915_private *i915) 123 + { 124 + return false; 121 125 } 122 126 123 127 static inline const void *
+4 -3
drivers/gpu/drm/i915/display/intel_overlay.c
···
972 972 rec->dst_width, rec->dst_height);
973 973 
974 974 clipped = req;
975 - drm_rect_intersect(&clipped, &crtc_state->pipe_src);
976 975 
977 - if (!drm_rect_visible(&clipped) ||
978 - !drm_rect_equals(&clipped, &req))
976 + if (!drm_rect_intersect(&clipped, &crtc_state->pipe_src))
977 + return -EINVAL;
978 + 
979 + if (!drm_rect_equals(&clipped, &req))
979 980 return -EINVAL;
980 981 
981 982 return 0;
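For context on the hunk above: drm_rect_intersect() clips its first argument in place and returns whether the clipped rectangle still has area, which is why the separate drm_rect_visible() call became redundant. A small illustrative sketch with made-up rectangles (not part of the patch):

	struct drm_rect req = DRM_RECT_INIT(0, 0, 640, 480);
	struct drm_rect src = DRM_RECT_INIT(1920, 0, 640, 480);

	if (!drm_rect_intersect(&req, &src))
		/* no overlap: 'req' was clipped to an empty rect, reject */
		return -EINVAL;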
+4 -10
drivers/gpu/drm/i915/display/intel_pmdemand.c
··· 119 119 if (!encoder) 120 120 return; 121 121 122 - phy = intel_port_to_phy(i915, encoder->port); 123 - if (intel_phy_is_tc(i915, phy)) 122 + if (intel_encoder_is_tc(encoder)) 124 123 return; 124 + 125 + phy = intel_encoder_to_phy(encoder); 125 126 126 127 if (set_bit) 127 128 pmdemand_state->active_combo_phys_mask |= BIT(phy); ··· 223 222 intel_pmdemand_encoder_has_tc_phy(struct drm_i915_private *i915, 224 223 struct intel_encoder *encoder) 225 224 { 226 - enum phy phy; 227 - 228 - if (!encoder) 229 - return false; 230 - 231 - phy = intel_port_to_phy(i915, encoder->port); 232 - 233 - return intel_phy_is_tc(i915, phy); 225 + return encoder && intel_encoder_is_tc(encoder); 234 226 } 235 227 236 228 static bool
+2 -3
drivers/gpu/drm/i915/display/intel_pmdemand.h
···
43 43 struct pmdemand_params params;
44 44 };
45 45 
46 - #define to_intel_pmdemand_state(x) container_of((x), \
47 - struct intel_pmdemand_state, \
48 - base)
46 + #define to_intel_pmdemand_state(global_state) \
47 + container_of_const((global_state), struct intel_pmdemand_state, base)
49 48 
50 49 void intel_pmdemand_init_early(struct drm_i915_private *i915);
51 50 int intel_pmdemand_init(struct drm_i915_private *i915);
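The container_of_const() form used above (and in the intel_sdvo/intel_tv hunks further down) propagates the const qualifier from the member pointer to the containing struct, which is what lets the old (void *) cast in intel_sdvo be dropped. A rough sketch of the idea with hypothetical types (not from the patch):

	struct inner { int dummy; };
	struct outer {
		int first;
		struct inner base;
	};

	static const struct outer *to_outer(const struct inner *member)
	{
		/* const pointer in, const pointer out; a non-const member
		 * pointer would give back a non-const struct outer pointer */
		return container_of_const(member, struct outer, base);
	}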
+32 -2
drivers/gpu/drm/i915/display/intel_pps.c
··· 605 605 intel_de_read(dev_priv, pp_stat_reg), 606 606 intel_de_read(dev_priv, pp_ctrl_reg)); 607 607 608 - if (intel_de_wait_for_register(dev_priv, pp_stat_reg, 609 - mask, value, 5000)) 608 + if (intel_de_wait(dev_priv, pp_stat_reg, mask, value, 5000)) 610 609 drm_err(&dev_priv->drm, 611 610 "[ENCODER:%d:%s] %s panel status timeout: PP_STATUS: 0x%08x PP_CONTROL: 0x%08x\n", 612 611 dig_port->base.base.base.id, dig_port->base.base.name, ··· 1668 1669 i915->display.pps.mmio_base = VLV_PPS_BASE; 1669 1670 else 1670 1671 i915->display.pps.mmio_base = PPS_BASE; 1672 + } 1673 + 1674 + static int intel_pps_show(struct seq_file *m, void *data) 1675 + { 1676 + struct intel_connector *connector = m->private; 1677 + struct intel_dp *intel_dp = intel_attached_dp(connector); 1678 + 1679 + if (connector->base.status != connector_status_connected) 1680 + return -ENODEV; 1681 + 1682 + seq_printf(m, "Panel power up delay: %d\n", 1683 + intel_dp->pps.panel_power_up_delay); 1684 + seq_printf(m, "Panel power down delay: %d\n", 1685 + intel_dp->pps.panel_power_down_delay); 1686 + seq_printf(m, "Backlight on delay: %d\n", 1687 + intel_dp->pps.backlight_on_delay); 1688 + seq_printf(m, "Backlight off delay: %d\n", 1689 + intel_dp->pps.backlight_off_delay); 1690 + 1691 + return 0; 1692 + } 1693 + DEFINE_SHOW_ATTRIBUTE(intel_pps); 1694 + 1695 + void intel_pps_connector_debugfs_add(struct intel_connector *connector) 1696 + { 1697 + struct dentry *root = connector->base.debugfs_entry; 1698 + int connector_type = connector->base.connector_type; 1699 + 1700 + if (connector_type == DRM_MODE_CONNECTOR_eDP) 1701 + debugfs_create_file("i915_panel_timings", 0444, root, 1702 + connector, &intel_pps_fops); 1671 1703 } 1672 1704 1673 1705 void assert_pps_unlocked(struct drm_i915_private *dev_priv, enum pipe pipe)
+2
drivers/gpu/drm/i915/display/intel_pps.h
···
51 51 void intel_pps_unlock_regs_wa(struct drm_i915_private *i915);
52 52 void intel_pps_setup(struct drm_i915_private *i915);
53 53 
54 + void intel_pps_connector_debugfs_add(struct intel_connector *connector);
55 + 
54 56 void assert_pps_unlocked(struct drm_i915_private *i915, enum pipe pipe);
55 57 
56 58 #endif /* __INTEL_PPS_H__ */
+430 -103
drivers/gpu/drm/i915/display/intel_psr.c
··· 171 171 * 172 172 * The rest of the bits are more self-explanatory and/or 173 173 * irrelevant for normal operation. 174 + * 175 + * Description of intel_crtc_state variables. has_psr, has_panel_replay and 176 + * has_sel_update: 177 + * 178 + * has_psr (alone): PSR1 179 + * has_psr + has_sel_update: PSR2 180 + * has_psr + has_panel_replay: Panel Replay 181 + * has_psr + has_panel_replay + has_sel_update: Panel Replay Selective Update 182 + * 183 + * Description of some intel_psr varibles. enabled, panel_replay_enabled, 184 + * sel_update_enabled 185 + * 186 + * enabled (alone): PSR1 187 + * enabled + sel_update_enabled: PSR2 188 + * enabled + panel_replay_enabled: Panel Replay 189 + * enabled + panel_replay_enabled + sel_update_enabled: Panel Replay SU 174 190 */ 175 191 176 192 #define CAN_PSR(intel_dp) ((intel_dp)->psr.sink_support && \ 177 193 (intel_dp)->psr.source_support) 178 - 179 - #define CAN_PANEL_REPLAY(intel_dp) ((intel_dp)->psr.sink_panel_replay_support && \ 180 - (intel_dp)->psr.source_panel_replay_support) 181 194 182 195 bool intel_encoder_can_psr(struct intel_encoder *encoder) 183 196 { ··· 342 329 struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 343 330 enum transcoder cpu_transcoder = intel_dp->psr.transcoder; 344 331 u32 mask; 332 + 333 + if (intel_dp->psr.panel_replay_enabled) 334 + return; 345 335 346 336 mask = psr_irq_psr_error_bit_get(intel_dp); 347 337 if (intel_dp->psr.debug & I915_PSR_DEBUG_IRQ) ··· 635 619 return false; 636 620 } 637 621 638 - static void intel_psr_enable_sink(struct intel_dp *intel_dp) 622 + static unsigned int intel_psr_get_enable_sink_offset(struct intel_dp *intel_dp) 623 + { 624 + return intel_dp->psr.panel_replay_enabled ? 625 + PANEL_REPLAY_CONFIG : DP_PSR_EN_CFG; 626 + } 627 + 628 + /* 629 + * Note: Most of the bits are same in PANEL_REPLAY_CONFIG and DP_PSR_EN_CFG. We 630 + * are relying on PSR definitions on these "common" bits. 
631 + */ 632 + void intel_psr_enable_sink(struct intel_dp *intel_dp, 633 + const struct intel_crtc_state *crtc_state) 639 634 { 640 635 struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 641 636 u8 dpcd_val = DP_PSR_ENABLE; 642 637 643 - if (intel_dp->psr.panel_replay_enabled) 644 - return; 645 - 646 - if (intel_dp->psr.psr2_enabled) { 638 + if (crtc_state->has_psr2) { 647 639 /* Enable ALPM at sink for psr2 */ 648 - drm_dp_dpcd_writeb(&intel_dp->aux, DP_RECEIVER_ALPM_CONFIG, 649 - DP_ALPM_ENABLE | 650 - DP_ALPM_LOCK_ERROR_IRQ_HPD_ENABLE); 640 + if (!crtc_state->has_panel_replay) { 641 + drm_dp_dpcd_writeb(&intel_dp->aux, 642 + DP_RECEIVER_ALPM_CONFIG, 643 + DP_ALPM_ENABLE | 644 + DP_ALPM_LOCK_ERROR_IRQ_HPD_ENABLE); 645 + 646 + if (psr2_su_region_et_valid(intel_dp)) 647 + dpcd_val |= DP_PSR_ENABLE_SU_REGION_ET; 648 + } 651 649 652 650 dpcd_val |= DP_PSR_ENABLE_PSR2 | DP_PSR_IRQ_HPD_WITH_CRC_ERRORS; 653 - if (psr2_su_region_et_valid(intel_dp)) 654 - dpcd_val |= DP_PSR_ENABLE_SU_REGION_ET; 655 651 } else { 656 652 if (intel_dp->psr.link_standby) 657 653 dpcd_val |= DP_PSR_MAIN_LINK_ACTIVE; 658 654 659 - if (DISPLAY_VER(dev_priv) >= 8) 655 + if (!crtc_state->has_panel_replay && DISPLAY_VER(dev_priv) >= 8) 660 656 dpcd_val |= DP_PSR_CRC_VERIFICATION; 661 657 } 662 658 663 - if (intel_dp->psr.req_psr2_sdp_prior_scanline) 659 + if (crtc_state->has_panel_replay) 660 + dpcd_val |= DP_PANEL_REPLAY_UNRECOVERABLE_ERROR_EN | 661 + DP_PANEL_REPLAY_RFB_STORAGE_ERROR_EN; 662 + 663 + if (crtc_state->req_psr2_sdp_prior_scanline) 664 664 dpcd_val |= DP_PSR_SU_REGION_SCANLINE_CAPTURE; 665 665 666 666 if (intel_dp->psr.entry_setup_frames > 0) 667 667 dpcd_val |= DP_PSR_FRAME_CAPTURE; 668 668 669 - drm_dp_dpcd_writeb(&intel_dp->aux, DP_PSR_EN_CFG, dpcd_val); 669 + drm_dp_dpcd_writeb(&intel_dp->aux, 670 + intel_psr_get_enable_sink_offset(intel_dp), 671 + dpcd_val); 670 672 671 - drm_dp_dpcd_writeb(&intel_dp->aux, DP_SET_POWER, DP_SET_POWER_D0); 673 + if (intel_dp_is_edp(intel_dp)) 674 + drm_dp_dpcd_writeb(&intel_dp->aux, DP_SET_POWER, DP_SET_POWER_D0); 672 675 } 673 676 674 677 static u32 intel_psr1_get_tp_time(struct intel_dp *intel_dp) ··· 1161 1126 return true; 1162 1127 } 1163 1128 1129 + /* 1130 + * See Bspec: 71632 for the table 1131 + * 1132 + * Silence_period = tSilence,Min + ((tSilence,Max - tSilence,Min) / 2) 1133 + * 1134 + * Half cycle duration: 1135 + * 1136 + * Link rates 1.62 - 4.32 and tLFPS_Cycle = 70 ns 1137 + * FLOOR( (Link Rate * tLFPS_Cycle) / (2 * 10) ) 1138 + * 1139 + * Link rates 5.4 - 8.1 1140 + * PORT_ALPM_LFPS_CTL[ LFPS Cycle Count ] = 10 1141 + * LFPS Period chosen is the mid-point of the min:max values from the table 1142 + * FLOOR( LFPS Period in Symbol clocks / 1143 + * (2 * PORT_ALPM_LFPS_CTL[ LFPS Cycle Count ]) ) 1144 + */ 1145 + static bool _lnl_get_silence_period_and_lfps_half_cycle(int link_rate, 1146 + int *silence_period, 1147 + int *lfps_half_cycle) 1148 + { 1149 + switch (link_rate) { 1150 + case 162000: 1151 + *silence_period = 20; 1152 + *lfps_half_cycle = 5; 1153 + break; 1154 + case 216000: 1155 + *silence_period = 27; 1156 + *lfps_half_cycle = 7; 1157 + break; 1158 + case 243000: 1159 + *silence_period = 31; 1160 + *lfps_half_cycle = 8; 1161 + break; 1162 + case 270000: 1163 + *silence_period = 34; 1164 + *lfps_half_cycle = 9; 1165 + break; 1166 + case 324000: 1167 + *silence_period = 41; 1168 + *lfps_half_cycle = 11; 1169 + break; 1170 + case 432000: 1171 + *silence_period = 56; 1172 + *lfps_half_cycle = 15; 1173 + break; 1174 + case 540000: 1175 + 
*silence_period = 69; 1176 + *lfps_half_cycle = 12; 1177 + break; 1178 + case 648000: 1179 + *silence_period = 84; 1180 + *lfps_half_cycle = 15; 1181 + break; 1182 + case 675000: 1183 + *silence_period = 87; 1184 + *lfps_half_cycle = 15; 1185 + break; 1186 + case 810000: 1187 + *silence_period = 104; 1188 + *lfps_half_cycle = 19; 1189 + break; 1190 + default: 1191 + *silence_period = *lfps_half_cycle = -1; 1192 + return false; 1193 + } 1194 + return true; 1195 + } 1196 + 1197 + /* 1198 + * AUX-Less Wake Time = CEILING( ((PHY P2 to P0) + tLFPS_Period, Max+ 1199 + * tSilence, Max+ tPHY Establishment + tCDS) / tline) 1200 + * For the "PHY P2 to P0" latency see the PHY Power Control page 1201 + * (PHY P2 to P0) : https://gfxspecs.intel.com/Predator/Home/Index/68965 1202 + * : 12 us 1203 + * The tLFPS_Period, Max term is 800ns 1204 + * The tSilence, Max term is 180ns 1205 + * The tPHY Establishment (a.k.a. t1) term is 50us 1206 + * The tCDS term is 1 or 2 times t2 1207 + * t2 = Number ML_PHY_LOCK * tML_PHY_LOCK 1208 + * Number ML_PHY_LOCK = ( 7 + CEILING( 6.5us / tML_PHY_LOCK ) + 1) 1209 + * Rounding up the 6.5us padding to the next ML_PHY_LOCK boundary and 1210 + * adding the "+ 1" term ensures all ML_PHY_LOCK sequences that start 1211 + * within the CDS period complete within the CDS period regardless of 1212 + * entry into the period 1213 + * tML_PHY_LOCK = TPS4 Length * ( 10 / (Link Rate in MHz) ) 1214 + * TPS4 Length = 252 Symbols 1215 + */ 1216 + static int _lnl_compute_aux_less_wake_time(int port_clock) 1217 + { 1218 + int tphy2_p2_to_p0 = 12 * 1000; 1219 + int tlfps_period_max = 800; 1220 + int tsilence_max = 180; 1221 + int t1 = 50 * 1000; 1222 + int tps4 = 252; 1223 + int tml_phy_lock = 1000 * 1000 * tps4 * 10 / port_clock; 1224 + int num_ml_phy_lock = 7 + DIV_ROUND_UP(6500, tml_phy_lock) + 1; 1225 + int t2 = num_ml_phy_lock * tml_phy_lock; 1226 + int tcds = 1 * t2; 1227 + 1228 + return DIV_ROUND_UP(tphy2_p2_to_p0 + tlfps_period_max + tsilence_max + 1229 + t1 + tcds, 1000); 1230 + } 1231 + 1232 + static int _lnl_compute_aux_less_alpm_params(struct intel_dp *intel_dp, 1233 + struct intel_crtc_state *crtc_state) 1234 + { 1235 + struct drm_i915_private *i915 = dp_to_i915(intel_dp); 1236 + int aux_less_wake_time, aux_less_wake_lines, silence_period, 1237 + lfps_half_cycle; 1238 + 1239 + aux_less_wake_time = 1240 + _lnl_compute_aux_less_wake_time(crtc_state->port_clock); 1241 + aux_less_wake_lines = intel_usecs_to_scanlines(&crtc_state->hw.adjusted_mode, 1242 + aux_less_wake_time); 1243 + 1244 + if (!_lnl_get_silence_period_and_lfps_half_cycle(crtc_state->port_clock, 1245 + &silence_period, 1246 + &lfps_half_cycle)) 1247 + return false; 1248 + 1249 + if (aux_less_wake_lines > ALPM_CTL_AUX_LESS_WAKE_TIME_MASK || 1250 + silence_period > PORT_ALPM_CTL_SILENCE_PERIOD_MASK || 1251 + lfps_half_cycle > PORT_ALPM_LFPS_CTL_LAST_LFPS_HALF_CYCLE_DURATION_MASK) 1252 + return false; 1253 + 1254 + if (i915->display.params.psr_safest_params) 1255 + aux_less_wake_lines = ALPM_CTL_AUX_LESS_WAKE_TIME_MASK; 1256 + 1257 + intel_dp->psr.alpm_parameters.fast_wake_lines = aux_less_wake_lines; 1258 + intel_dp->psr.alpm_parameters.silence_period_sym_clocks = silence_period; 1259 + intel_dp->psr.alpm_parameters.lfps_half_cycle_num_of_syms = lfps_half_cycle; 1260 + 1261 + return true; 1262 + } 1263 + 1164 1264 static bool _lnl_compute_alpm_params(struct intel_dp *intel_dp, 1165 1265 struct intel_crtc_state *crtc_state) 1166 1266 { ··· 1312 1142 if (check_entry_lines > 15) 1313 1143 return false; 1314 1144 1145 + if 
(!_lnl_compute_aux_less_alpm_params(intel_dp, crtc_state)) 1146 + return false; 1147 + 1315 1148 if (i915->display.params.psr_safest_params) 1316 1149 check_entry_lines = 15; 1317 1150 ··· 1323 1150 return true; 1324 1151 } 1325 1152 1153 + /* 1154 + * IO wake time for DISPLAY_VER < 12 is not directly mentioned in Bspec. There 1155 + * are 50 us io wake time and 32 us fast wake time. Clearly preharge pulses are 1156 + * not (improperly) included in 32 us fast wake time. 50 us - 32 us = 18 us. 1157 + */ 1158 + static int skl_io_buffer_wake_time(void) 1159 + { 1160 + return 18; 1161 + } 1162 + 1163 + static int tgl_io_buffer_wake_time(void) 1164 + { 1165 + return 10; 1166 + } 1167 + 1168 + static int io_buffer_wake_time(const struct intel_crtc_state *crtc_state) 1169 + { 1170 + struct drm_i915_private *i915 = to_i915(crtc_state->uapi.crtc->dev); 1171 + 1172 + if (DISPLAY_VER(i915) >= 12) 1173 + return tgl_io_buffer_wake_time(); 1174 + else 1175 + return skl_io_buffer_wake_time(); 1176 + } 1177 + 1326 1178 static bool _compute_alpm_params(struct intel_dp *intel_dp, 1327 1179 struct intel_crtc_state *crtc_state) 1328 1180 { 1329 1181 struct drm_i915_private *i915 = dp_to_i915(intel_dp); 1330 1182 int io_wake_lines, io_wake_time, fast_wake_lines, fast_wake_time; 1183 + int tfw_exit_latency = 20; /* eDP spec */ 1184 + int phy_wake = 4; /* eDP spec */ 1185 + int preamble = 8; /* eDP spec */ 1186 + int precharge = intel_dp_aux_fw_sync_len() - preamble; 1331 1187 u8 max_wake_lines; 1332 1188 1333 - if (DISPLAY_VER(i915) >= 12) { 1334 - io_wake_time = 42; 1335 - /* 1336 - * According to Bspec it's 42us, but based on testing 1337 - * it is not enough -> use 45 us. 1338 - */ 1339 - fast_wake_time = 45; 1189 + io_wake_time = max(precharge, io_buffer_wake_time(crtc_state)) + 1190 + preamble + phy_wake + tfw_exit_latency; 1191 + fast_wake_time = precharge + preamble + phy_wake + 1192 + tfw_exit_latency; 1340 1193 1194 + if (DISPLAY_VER(i915) >= 12) 1341 1195 /* TODO: Check how we can use ALPM_CTL fast wake extended field */ 1342 1196 max_wake_lines = 12; 1343 - } else { 1344 - io_wake_time = 50; 1345 - fast_wake_time = 32; 1197 + else 1346 1198 max_wake_lines = 8; 1347 - } 1348 1199 1349 1200 io_wake_lines = intel_usecs_to_scanlines( 1350 1201 &crtc_state->hw.adjusted_mode, io_wake_time); ··· 1619 1422 return; 1620 1423 } 1621 1424 1425 + /* 1426 + * FIXME figure out what is wrong with PSR+bigjoiner and 1427 + * fix it. Presumably something related to the fact that 1428 + * PSR is a transcoder level feature. 1429 + */ 1430 + if (crtc_state->bigjoiner_pipes) { 1431 + drm_dbg_kms(&dev_priv->drm, 1432 + "PSR disabled due to bigjoiner\n"); 1433 + return; 1434 + } 1435 + 1622 1436 if (CAN_PANEL_REPLAY(intel_dp)) 1623 1437 crtc_state->has_panel_replay = true; 1624 - else 1625 - crtc_state->has_psr = _psr_compute_config(intel_dp, crtc_state); 1626 1438 1627 - if (!(crtc_state->has_panel_replay || crtc_state->has_psr)) 1439 + crtc_state->has_psr = crtc_state->has_panel_replay ? 
true : 1440 + _psr_compute_config(intel_dp, crtc_state); 1441 + 1442 + if (!crtc_state->has_psr) 1628 1443 return; 1629 1444 1630 1445 crtc_state->has_psr2 = intel_psr2_config_valid(intel_dp, crtc_state); ··· 1663 1454 goto unlock; 1664 1455 1665 1456 if (intel_dp->psr.panel_replay_enabled) { 1666 - pipe_config->has_panel_replay = true; 1457 + pipe_config->has_psr = pipe_config->has_panel_replay = true; 1667 1458 } else { 1668 1459 /* 1669 1460 * Not possible to read EDP_PSR/PSR2_CTL registers as it is ··· 1768 1559 struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 1769 1560 enum transcoder cpu_transcoder = intel_dp->psr.transcoder; 1770 1561 struct intel_psr *psr = &intel_dp->psr; 1562 + u32 alpm_ctl; 1771 1563 1772 - if (DISPLAY_VER(dev_priv) < 20) 1564 + if (DISPLAY_VER(dev_priv) < 20 || (!intel_dp->psr.psr2_enabled && 1565 + !intel_dp_is_edp(intel_dp))) 1773 1566 return; 1774 1567 1775 - intel_de_write(dev_priv, ALPM_CTL(cpu_transcoder), 1776 - ALPM_CTL_EXTENDED_FAST_WAKE_ENABLE | 1777 - ALPM_CTL_ALPM_ENTRY_CHECK(psr->alpm_parameters.check_entry_lines) | 1778 - ALPM_CTL_EXTENDED_FAST_WAKE_TIME(psr->alpm_parameters.fast_wake_lines)); 1568 + /* 1569 + * Panel Replay on eDP is always using ALPM aux less. I.e. no need to 1570 + * check panel support at this point. 1571 + */ 1572 + if (intel_dp->psr.panel_replay_enabled && intel_dp_is_edp(intel_dp)) { 1573 + alpm_ctl = ALPM_CTL_ALPM_ENABLE | 1574 + ALPM_CTL_ALPM_AUX_LESS_ENABLE | 1575 + ALPM_CTL_AUX_LESS_SLEEP_HOLD_TIME_50_SYMBOLS; 1576 + 1577 + intel_de_write(dev_priv, PORT_ALPM_CTL(cpu_transcoder), 1578 + PORT_ALPM_CTL_ALPM_AUX_LESS_ENABLE | 1579 + PORT_ALPM_CTL_MAX_PHY_SWING_SETUP(15) | 1580 + PORT_ALPM_CTL_MAX_PHY_SWING_HOLD(0) | 1581 + PORT_ALPM_CTL_SILENCE_PERIOD( 1582 + psr->alpm_parameters.silence_period_sym_clocks)); 1583 + 1584 + intel_de_write(dev_priv, PORT_ALPM_LFPS_CTL(cpu_transcoder), 1585 + PORT_ALPM_LFPS_CTL_LFPS_CYCLE_COUNT(10) | 1586 + PORT_ALPM_LFPS_CTL_LFPS_HALF_CYCLE_DURATION( 1587 + psr->alpm_parameters.lfps_half_cycle_num_of_syms) | 1588 + PORT_ALPM_LFPS_CTL_FIRST_LFPS_HALF_CYCLE_DURATION( 1589 + psr->alpm_parameters.lfps_half_cycle_num_of_syms) | 1590 + PORT_ALPM_LFPS_CTL_LAST_LFPS_HALF_CYCLE_DURATION( 1591 + psr->alpm_parameters.lfps_half_cycle_num_of_syms)); 1592 + } else { 1593 + alpm_ctl = ALPM_CTL_EXTENDED_FAST_WAKE_ENABLE | 1594 + ALPM_CTL_EXTENDED_FAST_WAKE_TIME(psr->alpm_parameters.fast_wake_lines); 1595 + } 1596 + 1597 + alpm_ctl |= ALPM_CTL_ALPM_ENTRY_CHECK(psr->alpm_parameters.check_entry_lines); 1598 + 1599 + intel_de_write(dev_priv, ALPM_CTL(cpu_transcoder), alpm_ctl); 1779 1600 } 1780 1601 1781 1602 static void intel_psr_enable_source(struct intel_dp *intel_dp, ··· 1813 1574 { 1814 1575 struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 1815 1576 enum transcoder cpu_transcoder = intel_dp->psr.transcoder; 1816 - u32 mask; 1577 + u32 mask = 0; 1817 1578 1818 1579 /* 1819 1580 * Only HSW and BDW have PSR AUX registers that need to be setup. ··· 1827 1588 * mask LPSP to avoid dependency on other drivers that might block 1828 1589 * runtime_pm besides preventing other hw tracking issues now we 1829 1590 * can rely on frontbuffer tracking. 1591 + * 1592 + * From bspec prior LunarLake: 1593 + * Only PSR_MASK[Mask FBC modify] and PSR_MASK[Mask Hotplug] are used in 1594 + * panel replay mode. 
1595 + * 1596 + * From bspec beyod LunarLake: 1597 + * Panel Replay on DP: No bits are applicable 1598 + * Panel Replay on eDP: All bits are applicable 1830 1599 */ 1831 - mask = EDP_PSR_DEBUG_MASK_MEMUP | 1832 - EDP_PSR_DEBUG_MASK_HPD; 1600 + if (DISPLAY_VER(dev_priv) < 20 || intel_dp_is_edp(intel_dp)) 1601 + mask = EDP_PSR_DEBUG_MASK_HPD; 1833 1602 1834 - /* 1835 - * For some unknown reason on HSW non-ULT (or at least on 1836 - * Dell Latitude E6540) external displays start to flicker 1837 - * when PSR is enabled on the eDP. SR/PC6 residency is much 1838 - * higher than should be possible with an external display. 1839 - * As a workaround leave LPSP unmasked to prevent PSR entry 1840 - * when external displays are active. 1841 - */ 1842 - if (DISPLAY_VER(dev_priv) >= 8 || IS_HASWELL_ULT(dev_priv)) 1843 - mask |= EDP_PSR_DEBUG_MASK_LPSP; 1603 + if (intel_dp_is_edp(intel_dp)) { 1604 + mask |= EDP_PSR_DEBUG_MASK_MEMUP; 1844 1605 1845 - if (DISPLAY_VER(dev_priv) < 20) 1846 - mask |= EDP_PSR_DEBUG_MASK_MAX_SLEEP; 1606 + /* 1607 + * For some unknown reason on HSW non-ULT (or at least on 1608 + * Dell Latitude E6540) external displays start to flicker 1609 + * when PSR is enabled on the eDP. SR/PC6 residency is much 1610 + * higher than should be possible with an external display. 1611 + * As a workaround leave LPSP unmasked to prevent PSR entry 1612 + * when external displays are active. 1613 + */ 1614 + if (DISPLAY_VER(dev_priv) >= 8 || IS_HASWELL_ULT(dev_priv)) 1615 + mask |= EDP_PSR_DEBUG_MASK_LPSP; 1847 1616 1848 - /* 1849 - * No separate pipe reg write mask on hsw/bdw, so have to unmask all 1850 - * registers in order to keep the CURSURFLIVE tricks working :( 1851 - */ 1852 - if (IS_DISPLAY_VER(dev_priv, 9, 10)) 1853 - mask |= EDP_PSR_DEBUG_MASK_DISP_REG_WRITE; 1617 + if (DISPLAY_VER(dev_priv) < 20) 1618 + mask |= EDP_PSR_DEBUG_MASK_MAX_SLEEP; 1854 1619 1855 - /* allow PSR with sprite enabled */ 1856 - if (IS_HASWELL(dev_priv)) 1857 - mask |= EDP_PSR_DEBUG_MASK_SPRITE_ENABLE; 1620 + /* 1621 + * No separate pipe reg write mask on hsw/bdw, so have to unmask all 1622 + * registers in order to keep the CURSURFLIVE tricks working :( 1623 + */ 1624 + if (IS_DISPLAY_VER(dev_priv, 9, 10)) 1625 + mask |= EDP_PSR_DEBUG_MASK_DISP_REG_WRITE; 1626 + 1627 + /* allow PSR with sprite enabled */ 1628 + if (IS_HASWELL(dev_priv)) 1629 + mask |= EDP_PSR_DEBUG_MASK_SPRITE_ENABLE; 1630 + } 1858 1631 1859 1632 intel_de_write(dev_priv, psr_debug_reg(dev_priv, cpu_transcoder), mask); 1860 1633 ··· 1885 1634 intel_dp->psr.psr2_sel_fetch_enabled ? 
1886 1635 IGNORE_PSR2_HW_TRACKING : 0); 1887 1636 1888 - lnl_alpm_configure(intel_dp); 1637 + if (intel_dp_is_edp(intel_dp)) 1638 + lnl_alpm_configure(intel_dp); 1889 1639 1890 1640 /* 1891 1641 * Wa_16013835468 ··· 1927 1675 enum transcoder cpu_transcoder = intel_dp->psr.transcoder; 1928 1676 u32 val; 1929 1677 1678 + if (intel_dp->psr.panel_replay_enabled) 1679 + goto no_err; 1680 + 1930 1681 /* 1931 1682 * If a PSR error happened and the driver is reloaded, the EDP_PSR_IIR 1932 1683 * will still keep the error set even after the reset done in the ··· 1947 1692 return false; 1948 1693 } 1949 1694 1695 + no_err: 1950 1696 return true; 1951 1697 } 1952 1698 ··· 1956 1700 { 1957 1701 struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); 1958 1702 struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 1959 - enum phy phy = intel_port_to_phy(dev_priv, dig_port->base.port); 1960 1703 u32 val; 1961 1704 1962 1705 drm_WARN_ON(&dev_priv->drm, intel_dp->psr.enabled); ··· 1977 1722 if (!psr_interrupt_error_check(intel_dp)) 1978 1723 return; 1979 1724 1980 - if (intel_dp->psr.panel_replay_enabled) 1725 + if (intel_dp->psr.panel_replay_enabled) { 1981 1726 drm_dbg_kms(&dev_priv->drm, "Enabling Panel Replay\n"); 1982 - else 1727 + } else { 1983 1728 drm_dbg_kms(&dev_priv->drm, "Enabling PSR%s\n", 1984 1729 intel_dp->psr.psr2_enabled ? "2" : "1"); 1985 1730 1986 - intel_snps_phy_update_psr_power_state(dev_priv, phy, true); 1987 - intel_psr_enable_sink(intel_dp); 1731 + /* 1732 + * Panel replay has to be enabled before link training: doing it 1733 + * only for PSR here. 1734 + */ 1735 + intel_psr_enable_sink(intel_dp, crtc_state); 1736 + } 1737 + 1738 + if (intel_dp_is_edp(intel_dp)) 1739 + intel_snps_phy_update_psr_power_state(&dig_port->base, true); 1740 + 1988 1741 intel_psr_enable_source(intel_dp, crtc_state); 1989 1742 intel_dp->psr.enabled = true; 1990 1743 intel_dp->psr.paused = false; ··· 2062 1799 { 2063 1800 struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 2064 1801 enum transcoder cpu_transcoder = intel_dp->psr.transcoder; 2065 - enum phy phy = intel_port_to_phy(dev_priv, 2066 - dp_to_dig_port(intel_dp)->base.port); 2067 1802 2068 1803 lockdep_assert_held(&intel_dp->psr.lock); 2069 1804 ··· 2096 1835 CLKGATE_DIS_MISC_DMASC_GATING_DIS, 0); 2097 1836 } 2098 1837 2099 - intel_snps_phy_update_psr_power_state(dev_priv, phy, false); 1838 + if (intel_dp_is_edp(intel_dp)) 1839 + intel_snps_phy_update_psr_power_state(&dp_to_dig_port(intel_dp)->base, false); 1840 + 1841 + /* Panel Replay on eDP is always using ALPM aux less. 
*/ 1842 + if (intel_dp->psr.panel_replay_enabled && intel_dp_is_edp(intel_dp)) { 1843 + intel_de_rmw(dev_priv, ALPM_CTL(cpu_transcoder), 1844 + ALPM_CTL_ALPM_ENABLE | 1845 + ALPM_CTL_ALPM_AUX_LESS_ENABLE, 0); 1846 + 1847 + intel_de_rmw(dev_priv, PORT_ALPM_CTL(cpu_transcoder), 1848 + PORT_ALPM_CTL_ALPM_AUX_LESS_ENABLE, 0); 1849 + } 2100 1850 2101 1851 /* Disable PSR on Sink */ 2102 - drm_dp_dpcd_writeb(&intel_dp->aux, DP_PSR_EN_CFG, 0); 1852 + drm_dp_dpcd_writeb(&intel_dp->aux, 1853 + intel_psr_get_enable_sink_offset(intel_dp), 0); 2103 1854 2104 - if (intel_dp->psr.psr2_enabled) 1855 + if (!intel_dp->psr.panel_replay_enabled && 1856 + intel_dp->psr.psr2_enabled) 2105 1857 drm_dp_dpcd_writeb(&intel_dp->aux, DP_RECEIVER_ALPM_CONFIG, 0); 2106 1858 2107 1859 intel_dp->psr.enabled = false; ··· 2162 1888 struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); 2163 1889 struct intel_psr *psr = &intel_dp->psr; 2164 1890 2165 - if (!CAN_PSR(intel_dp)) 1891 + if (!CAN_PSR(intel_dp) && !CAN_PANEL_REPLAY(intel_dp)) 2166 1892 return; 2167 1893 2168 1894 mutex_lock(&psr->lock); ··· 2195 1921 { 2196 1922 struct intel_psr *psr = &intel_dp->psr; 2197 1923 2198 - if (!CAN_PSR(intel_dp)) 1924 + if (!CAN_PSR(intel_dp) && !CAN_PANEL_REPLAY(intel_dp)) 2199 1925 return; 2200 1926 2201 1927 mutex_lock(&psr->lock); ··· 2268 1994 2269 1995 void intel_psr2_program_trans_man_trk_ctl(const struct intel_crtc_state *crtc_state) 2270 1996 { 1997 + struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc); 2271 1998 struct drm_i915_private *dev_priv = to_i915(crtc_state->uapi.crtc->dev); 2272 1999 enum transcoder cpu_transcoder = crtc_state->cpu_transcoder; 2273 2000 struct intel_encoder *encoder; ··· 2288 2013 2289 2014 intel_de_write(dev_priv, PSR2_MAN_TRK_CTL(cpu_transcoder), 2290 2015 crtc_state->psr2_man_track_ctl); 2016 + 2017 + if (!crtc_state->enable_psr2_su_region_et) 2018 + return; 2019 + 2020 + intel_de_write(dev_priv, PIPE_SRCSZ_ERLY_TPT(crtc->pipe), 2021 + crtc_state->pipe_srcsz_early_tpt); 2291 2022 } 2292 2023 2293 2024 static void psr2_man_trk_ctl_calc(struct intel_crtc_state *crtc_state, ··· 2330 2049 } 2331 2050 exit: 2332 2051 crtc_state->psr2_man_track_ctl = val; 2052 + } 2053 + 2054 + static u32 2055 + psr2_pipe_srcsz_early_tpt_calc(struct intel_crtc_state *crtc_state, 2056 + bool full_update, bool cursor_in_su_area) 2057 + { 2058 + int width, height; 2059 + 2060 + if (!crtc_state->enable_psr2_su_region_et || full_update) 2061 + return 0; 2062 + 2063 + if (!cursor_in_su_area) 2064 + return PIPESRC_WIDTH(0) | 2065 + PIPESRC_HEIGHT(drm_rect_height(&crtc_state->pipe_src)); 2066 + 2067 + width = drm_rect_width(&crtc_state->psr2_su_area); 2068 + height = drm_rect_height(&crtc_state->psr2_su_area); 2069 + 2070 + return PIPESRC_WIDTH(width - 1) | PIPESRC_HEIGHT(height - 1); 2333 2071 } 2334 2072 2335 2073 static void clip_area_update(struct drm_rect *overlap_damage_area, ··· 2395 2095 * cursor fully when cursor is in SU area. 
2396 2096 */ 2397 2097 static void 2398 - intel_psr2_sel_fetch_et_alignment(struct intel_crtc_state *crtc_state, 2399 - struct intel_plane_state *cursor_state) 2098 + intel_psr2_sel_fetch_et_alignment(struct intel_atomic_state *state, 2099 + struct intel_crtc *crtc, 2100 + bool *cursor_in_su_area) 2400 2101 { 2401 - struct drm_rect inter; 2102 + struct intel_crtc_state *crtc_state = intel_atomic_get_new_crtc_state(state, crtc); 2103 + struct intel_plane_state *new_plane_state; 2104 + struct intel_plane *plane; 2105 + int i; 2402 2106 2403 - if (!crtc_state->enable_psr2_su_region_et || 2404 - !cursor_state->uapi.visible) 2107 + if (!crtc_state->enable_psr2_su_region_et) 2405 2108 return; 2406 2109 2407 - inter = crtc_state->psr2_su_area; 2408 - if (!drm_rect_intersect(&inter, &cursor_state->uapi.dst)) 2409 - return; 2110 + for_each_new_intel_plane_in_state(state, plane, new_plane_state, i) { 2111 + struct drm_rect inter; 2410 2112 2411 - clip_area_update(&crtc_state->psr2_su_area, &cursor_state->uapi.dst, 2412 - &crtc_state->pipe_src); 2113 + if (new_plane_state->uapi.crtc != crtc_state->uapi.crtc) 2114 + continue; 2115 + 2116 + if (plane->id != PLANE_CURSOR) 2117 + continue; 2118 + 2119 + if (!new_plane_state->uapi.visible) 2120 + continue; 2121 + 2122 + inter = crtc_state->psr2_su_area; 2123 + if (!drm_rect_intersect(&inter, &new_plane_state->uapi.dst)) 2124 + continue; 2125 + 2126 + clip_area_update(&crtc_state->psr2_su_area, &new_plane_state->uapi.dst, 2127 + &crtc_state->pipe_src); 2128 + *cursor_in_su_area = true; 2129 + } 2413 2130 } 2414 2131 2415 2132 /* ··· 2469 2152 { 2470 2153 struct drm_i915_private *dev_priv = to_i915(state->base.dev); 2471 2154 struct intel_crtc_state *crtc_state = intel_atomic_get_new_crtc_state(state, crtc); 2472 - struct intel_plane_state *new_plane_state, *old_plane_state, 2473 - *cursor_plane_state = NULL; 2155 + struct intel_plane_state *new_plane_state, *old_plane_state; 2474 2156 struct intel_plane *plane; 2475 - bool full_update = false; 2157 + bool full_update = false, cursor_in_su_area = false; 2476 2158 int i, ret; 2477 2159 2478 2160 if (!crtc_state->enable_psr2_sel_fetch) ··· 2554 2238 damaged_area.x2 += new_plane_state->uapi.dst.x1 - src.x1; 2555 2239 2556 2240 clip_area_update(&crtc_state->psr2_su_area, &damaged_area, &crtc_state->pipe_src); 2557 - 2558 - /* 2559 - * Cursor plane new state is stored to adjust su area to cover 2560 - * cursor are fully. 2561 - */ 2562 - if (plane->id == PLANE_CURSOR) 2563 - cursor_plane_state = new_plane_state; 2564 2241 } 2565 2242 2566 2243 /* ··· 2582 2273 if (ret) 2583 2274 return ret; 2584 2275 2585 - /* Adjust su area to cover cursor fully as necessary */ 2586 - if (cursor_plane_state) 2587 - intel_psr2_sel_fetch_et_alignment(crtc_state, cursor_plane_state); 2276 + /* 2277 + * Adjust su area to cover cursor fully as necessary (early 2278 + * transport). This needs to be done after 2279 + * drm_atomic_add_affected_planes to ensure visible cursor is added into 2280 + * affected planes even when cursor is not updated by itself. 
2281 + */ 2282 + intel_psr2_sel_fetch_et_alignment(state, crtc, &cursor_in_su_area); 2588 2283 2589 2284 intel_psr2_sel_fetch_pipe_alignment(crtc_state); 2590 2285 ··· 2651 2338 2652 2339 skip_sel_fetch_set_loop: 2653 2340 psr2_man_trk_ctl_calc(crtc_state, full_update); 2341 + crtc_state->pipe_srcsz_early_tpt = 2342 + psr2_pipe_srcsz_early_tpt_calc(crtc_state, full_update, 2343 + cursor_in_su_area); 2654 2344 return 0; 2655 2345 } 2656 2346 ··· 2710 2394 intel_atomic_get_new_crtc_state(state, crtc); 2711 2395 struct intel_encoder *encoder; 2712 2396 2713 - if (!(crtc_state->has_psr || crtc_state->has_panel_replay)) 2397 + if (!crtc_state->has_psr) 2714 2398 return; 2715 2399 2716 2400 for_each_intel_encoder_mask_with_psr(state->base.dev, encoder, ··· 3310 2994 } 3311 2995 } 3312 2996 2997 + /* 2998 + * On common bits: 2999 + * DP_PSR_RFB_STORAGE_ERROR == DP_PANEL_REPLAY_RFB_STORAGE_ERROR 3000 + * DP_PSR_VSC_SDP_UNCORRECTABLE_ERROR == DP_PANEL_REPLAY_VSC_SDP_UNCORRECTABLE_ERROR 3001 + * DP_PSR_LINK_CRC_ERROR == DP_PANEL_REPLAY_LINK_CRC_ERROR 3002 + * this function is relying on PSR definitions 3003 + */ 3313 3004 void intel_psr_short_pulse(struct intel_dp *intel_dp) 3314 3005 { 3315 3006 struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); ··· 3326 3003 DP_PSR_VSC_SDP_UNCORRECTABLE_ERROR | 3327 3004 DP_PSR_LINK_CRC_ERROR; 3328 3005 3329 - if (!CAN_PSR(intel_dp)) 3006 + if (!CAN_PSR(intel_dp) && !CAN_PANEL_REPLAY(intel_dp)) 3330 3007 return; 3331 3008 3332 3009 mutex_lock(&psr->lock); ··· 3340 3017 goto exit; 3341 3018 } 3342 3019 3343 - if (status == DP_PSR_SINK_INTERNAL_ERROR || (error_status & errors)) { 3020 + if ((!psr->panel_replay_enabled && status == DP_PSR_SINK_INTERNAL_ERROR) || 3021 + (error_status & errors)) { 3344 3022 intel_psr_disable_locked(intel_dp); 3345 3023 psr->sink_not_reliable = true; 3346 3024 } 3347 3025 3348 - if (status == DP_PSR_SINK_INTERNAL_ERROR && !error_status) 3026 + if (!psr->panel_replay_enabled && status == DP_PSR_SINK_INTERNAL_ERROR && 3027 + !error_status) 3349 3028 drm_dbg_kms(&dev_priv->drm, 3350 3029 "PSR sink internal error, disabling PSR\n"); 3351 3030 if (error_status & DP_PSR_RFB_STORAGE_ERROR) ··· 3367 3042 /* clear status register */ 3368 3043 drm_dp_dpcd_writeb(&intel_dp->aux, DP_PSR_ERROR_STATUS, error_status); 3369 3044 3370 - psr_alpm_check(intel_dp); 3371 - psr_capability_changed_check(intel_dp); 3045 + if (!psr->panel_replay_enabled) { 3046 + psr_alpm_check(intel_dp); 3047 + psr_capability_changed_check(intel_dp); 3048 + } 3372 3049 3373 3050 exit: 3374 3051 mutex_unlock(&psr->lock);
+5
drivers/gpu/drm/i915/display/intel_psr.h
··· 21 21 struct intel_plane; 22 22 struct intel_plane_state; 23 23 24 + #define CAN_PANEL_REPLAY(intel_dp) ((intel_dp)->psr.sink_panel_replay_support && \ 25 + (intel_dp)->psr.source_panel_replay_support) 26 + 24 27 bool intel_encoder_can_psr(struct intel_encoder *encoder); 25 28 void intel_psr_init_dpcd(struct intel_dp *intel_dp); 29 + void intel_psr_enable_sink(struct intel_dp *intel_dp, 30 + const struct intel_crtc_state *crtc_state); 26 31 void intel_psr_pre_plane_update(struct intel_atomic_state *state, 27 32 struct intel_crtc *crtc); 28 33 void intel_psr_post_plane_update(struct intel_atomic_state *state,
+8 -4
drivers/gpu/drm/i915/display/intel_psr_regs.h
··· 348 348 #define PORT_ALPM_LFPS_CTL(tran) _MMIO_TRANS2(tran, _PORT_ALPM_LFPS_CTL_A) 349 349 #define PORT_ALPM_LFPS_CTL_LFPS_START_POLARITY REG_BIT(31) 350 350 #define PORT_ALPM_LFPS_CTL_LFPS_CYCLE_COUNT_MASK REG_GENMASK(27, 24) 351 - #define ALPM_CTL_EXTENDED_FAST_WAKE_MIN_LINES 5 352 - #define ALPM_CTL_EXTENDED_FAST_WAKE_TIME(lines) REG_FIELD_PREP(ALPM_CTL_EXTENDED_FAST_WAKE_TIME_MASK, (lines) - ALPM_CTL_EXTENDED_FAST_WAKE_MIN_LINES) 353 - #define ALPM_CTL_AUX_LESS_WAKE_TIME_MASK REG_GENMASK(5, 0) 354 - #define ALPM_CTL_AUX_LESS_WAKE_TIME(val) REG_FIELD_PREP(ALPM_CTL_AUX_LESS_WAKE_TIME_MASK, val) 351 + #define PORT_ALPM_LFPS_CTL_LFPS_CYCLE_COUNT_MIN 7 352 + #define PORT_ALPM_LFPS_CTL_LFPS_CYCLE_COUNT(val) REG_FIELD_PREP(PORT_ALPM_LFPS_CTL_LFPS_CYCLE_COUNT_MASK, (val) - PORT_ALPM_LFPS_CTL_LFPS_CYCLE_COUNT_MIN) 353 + #define PORT_ALPM_LFPS_CTL_LFPS_HALF_CYCLE_DURATION_MASK REG_GENMASK(20, 16) 354 + #define PORT_ALPM_LFPS_CTL_LFPS_HALF_CYCLE_DURATION(val) REG_FIELD_PREP(PORT_ALPM_LFPS_CTL_LFPS_HALF_CYCLE_DURATION_MASK, val) 355 + #define PORT_ALPM_LFPS_CTL_FIRST_LFPS_HALF_CYCLE_DURATION_MASK REG_GENMASK(12, 8) 356 + #define PORT_ALPM_LFPS_CTL_FIRST_LFPS_HALF_CYCLE_DURATION(val) REG_FIELD_PREP(PORT_ALPM_LFPS_CTL_LFPS_HALF_CYCLE_DURATION_MASK, val) 357 + #define PORT_ALPM_LFPS_CTL_LAST_LFPS_HALF_CYCLE_DURATION_MASK REG_GENMASK(4, 0) 358 + #define PORT_ALPM_LFPS_CTL_LAST_LFPS_HALF_CYCLE_DURATION(val) REG_FIELD_PREP(PORT_ALPM_LFPS_CTL_LFPS_HALF_CYCLE_DURATION_MASK, val) 355 359 356 360 #endif /* __INTEL_PSR_REGS_H__ */
+3 -6
drivers/gpu/drm/i915/display/intel_sdvo.c
··· 193 193 } 194 194 195 195 #define to_intel_sdvo_connector_state(conn_state) \ 196 - container_of((conn_state), struct intel_sdvo_connector_state, base.base) 196 + container_of_const((conn_state), struct intel_sdvo_connector_state, base.base) 197 197 198 198 static bool 199 199 intel_sdvo_output_setup(struct intel_sdvo *intel_sdvo); ··· 1944 1944 struct intel_sdvo_connector *intel_sdvo_connector = 1945 1945 to_intel_sdvo_connector(connector); 1946 1946 bool has_hdmi_sink = intel_has_hdmi_sink(intel_sdvo_connector, connector->state); 1947 - int max_dotclk = i915->max_dotclk_freq; 1947 + int max_dotclk = i915->display.cdclk.max_dotclk_freq; 1948 1948 enum drm_mode_status status; 1949 1949 int clock = mode->clock; 1950 1950 1951 1951 status = intel_cpu_transcoder_mode_valid(i915, mode); 1952 1952 if (status != MODE_OK) 1953 1953 return status; 1954 - 1955 - if (mode->flags & DRM_MODE_FLAG_DBLSCAN) 1956 - return MODE_NO_DBLESCAN; 1957 1954 1958 1955 if (clock > max_dotclk) 1959 1956 return MODE_CLOCK_HIGH; ··· 2375 2378 u64 *val) 2376 2379 { 2377 2380 struct intel_sdvo_connector *intel_sdvo_connector = to_intel_sdvo_connector(connector); 2378 - const struct intel_sdvo_connector_state *sdvo_state = to_intel_sdvo_connector_state((void *)state); 2381 + const struct intel_sdvo_connector_state *sdvo_state = to_intel_sdvo_connector_state(state); 2379 2382 2380 2383 if (property == intel_sdvo_connector->tv_format) { 2381 2384 int i;
+9 -7
drivers/gpu/drm/i915/display/intel_snps_phy.c
··· 44 44 } 45 45 } 46 46 47 - void intel_snps_phy_update_psr_power_state(struct drm_i915_private *i915, 48 - enum phy phy, bool enable) 47 + void intel_snps_phy_update_psr_power_state(struct intel_encoder *encoder, 48 + bool enable) 49 49 { 50 + struct drm_i915_private *i915 = to_i915(encoder->base.dev); 51 + enum phy phy = intel_encoder_to_phy(encoder); 50 52 u32 val; 51 53 52 - if (!intel_phy_is_snps(i915, phy)) 54 + if (!intel_encoder_is_snps(encoder)) 53 55 return; 54 56 55 57 val = REG_FIELD_PREP(SNPS_PHY_TX_REQ_LN_DIS_PWR_STATE_PSR, ··· 65 63 { 66 64 struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); 67 65 const struct intel_ddi_buf_trans *trans; 68 - enum phy phy = intel_port_to_phy(dev_priv, encoder->port); 66 + enum phy phy = intel_encoder_to_phy(encoder); 69 67 int n_entries, ln; 70 68 71 69 trans = encoder->get_buf_trans(encoder, crtc_state, &n_entries); ··· 1824 1822 { 1825 1823 struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); 1826 1824 const struct intel_mpllb_state *pll_state = &crtc_state->mpllb_state; 1827 - enum phy phy = intel_port_to_phy(dev_priv, encoder->port); 1825 + enum phy phy = intel_encoder_to_phy(encoder); 1828 1826 i915_reg_t enable_reg = (phy <= PHY_D ? 1829 1827 DG2_PLL_ENABLE(phy) : MG_PLL_ENABLE(0)); 1830 1828 ··· 1881 1879 void intel_mpllb_disable(struct intel_encoder *encoder) 1882 1880 { 1883 1881 struct drm_i915_private *i915 = to_i915(encoder->base.dev); 1884 - enum phy phy = intel_port_to_phy(i915, encoder->port); 1882 + enum phy phy = intel_encoder_to_phy(encoder); 1885 1883 i915_reg_t enable_reg = (phy <= PHY_D ? 1886 1884 DG2_PLL_ENABLE(phy) : MG_PLL_ENABLE(0)); 1887 1885 ··· 1953 1951 struct intel_mpllb_state *pll_state) 1954 1952 { 1955 1953 struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); 1956 - enum phy phy = intel_port_to_phy(dev_priv, encoder->port); 1954 + enum phy phy = intel_encoder_to_phy(encoder); 1957 1955 1958 1956 pll_state->mpllb_cp = intel_de_read(dev_priv, SNPS_PHY_MPLLB_CP(phy)); 1959 1957 pll_state->mpllb_div = intel_de_read(dev_priv, SNPS_PHY_MPLLB_DIV(phy));
+2 -2
drivers/gpu/drm/i915/display/intel_snps_phy.h
···
17 17 enum phy;
18 18 
19 19 void intel_snps_phy_wait_for_calibration(struct drm_i915_private *dev_priv);
20 - void intel_snps_phy_update_psr_power_state(struct drm_i915_private *dev_priv,
21 - enum phy phy, bool enable);
20 + void intel_snps_phy_update_psr_power_state(struct intel_encoder *encoder,
21 + bool enable);
22 22 
23 23 int intel_mpllb_calc_state(struct intel_crtc_state *crtc_state,
24 24 struct intel_encoder *encoder);
+10 -23
drivers/gpu/drm/i915/display/intel_tc.c
··· 100 100 static bool intel_tc_port_in_mode(struct intel_digital_port *dig_port, 101 101 enum tc_port_mode mode) 102 102 { 103 - struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev); 104 - enum phy phy = intel_port_to_phy(i915, dig_port->base.port); 105 103 struct intel_tc_port *tc = to_tc_port(dig_port); 106 104 107 - return intel_phy_is_tc(i915, phy) && tc->mode == mode; 105 + return intel_encoder_is_tc(&dig_port->base) && tc->mode == mode; 108 106 } 109 107 110 108 bool intel_tc_port_in_tbt_alt_mode(struct intel_digital_port *dig_port) ··· 122 124 123 125 bool intel_tc_port_handles_hpd_glitches(struct intel_digital_port *dig_port) 124 126 { 125 - struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev); 126 - enum phy phy = intel_port_to_phy(i915, dig_port->base.port); 127 127 struct intel_tc_port *tc = to_tc_port(dig_port); 128 128 129 - return intel_phy_is_tc(i915, phy) && !tc->legacy_port; 129 + return intel_encoder_is_tc(&dig_port->base) && !tc->legacy_port; 130 130 } 131 131 132 132 /* ··· 250 254 static enum intel_display_power_domain 251 255 tc_port_power_domain(struct intel_tc_port *tc) 252 256 { 253 - struct drm_i915_private *i915 = tc_to_i915(tc); 254 - enum tc_port tc_port = intel_port_to_tc(i915, tc->dig_port->base.port); 257 + enum tc_port tc_port = intel_encoder_to_tc(&tc->dig_port->base); 255 258 256 259 return POWER_DOMAIN_PORT_DDI_LANES_TC1 + tc_port - TC_PORT_1; 257 260 } ··· 297 302 static int lnl_tc_port_get_max_lane_count(struct intel_digital_port *dig_port) 298 303 { 299 304 struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev); 300 - enum tc_port tc_port = intel_port_to_tc(i915, dig_port->base.port); 305 + enum tc_port tc_port = intel_encoder_to_tc(&dig_port->base); 301 306 intel_wakeref_t wakeref; 302 307 u32 val, pin_assignment; 303 308 ··· 370 375 { 371 376 struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev); 372 377 struct intel_tc_port *tc = to_tc_port(dig_port); 373 - enum phy phy = intel_port_to_phy(i915, dig_port->base.port); 374 378 375 - if (!intel_phy_is_tc(i915, phy) || tc->mode != TC_PORT_DP_ALT) 379 + if (!intel_encoder_is_tc(&dig_port->base) || tc->mode != TC_PORT_DP_ALT) 376 380 return 4; 377 381 378 382 assert_tc_cold_blocked(tc); ··· 452 458 453 459 static void tc_phy_load_fia_params(struct intel_tc_port *tc, bool modular_fia) 454 460 { 455 - struct drm_i915_private *i915 = tc_to_i915(tc); 456 - enum port port = tc->dig_port->base.port; 457 - enum tc_port tc_port = intel_port_to_tc(i915, port); 461 + enum tc_port tc_port = intel_encoder_to_tc(&tc->dig_port->base); 458 462 459 463 /* 460 464 * Each Modular FIA instance houses 2 TC ports. 
In SOC that has more ··· 804 812 static bool adlp_tc_phy_is_ready(struct intel_tc_port *tc) 805 813 { 806 814 struct drm_i915_private *i915 = tc_to_i915(tc); 807 - enum tc_port tc_port = intel_port_to_tc(i915, tc->dig_port->base.port); 815 + enum tc_port tc_port = intel_encoder_to_tc(&tc->dig_port->base); 808 816 u32 val; 809 817 810 818 assert_display_core_power_enabled(tc); ··· 1627 1635 1628 1636 bool intel_tc_port_link_needs_reset(struct intel_digital_port *dig_port) 1629 1637 { 1630 - struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev); 1631 - enum phy phy = intel_port_to_phy(i915, dig_port->base.port); 1632 - 1633 - if (!intel_phy_is_tc(i915, phy)) 1638 + if (!intel_encoder_is_tc(&dig_port->base)) 1634 1639 return false; 1635 1640 1636 1641 return __intel_tc_port_link_needs_reset(to_tc_port(dig_port)); ··· 1729 1740 1730 1741 void intel_tc_port_link_cancel_reset_work(struct intel_digital_port *dig_port) 1731 1742 { 1732 - struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev); 1733 - enum phy phy = intel_port_to_phy(i915, dig_port->base.port); 1734 1743 struct intel_tc_port *tc = to_tc_port(dig_port); 1735 1744 1736 - if (!intel_phy_is_tc(i915, phy)) 1745 + if (!intel_encoder_is_tc(&dig_port->base)) 1737 1746 return; 1738 1747 1739 1748 cancel_delayed_work(&tc->link_reset_work); ··· 1848 1861 struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev); 1849 1862 struct intel_tc_port *tc; 1850 1863 enum port port = dig_port->base.port; 1851 - enum tc_port tc_port = intel_port_to_tc(i915, port); 1864 + enum tc_port tc_port = intel_encoder_to_tc(&dig_port->base); 1852 1865 1853 1866 if (drm_WARN_ON(&i915->drm, tc_port == TC_PORT_NONE)) 1854 1867 return -EINVAL;
+3 -5
drivers/gpu/drm/i915/display/intel_tv.c
··· 885 885 bool bypass_vfilter; 886 886 }; 887 887 888 - #define to_intel_tv_connector_state(x) container_of(x, struct intel_tv_connector_state, base) 888 + #define to_intel_tv_connector_state(conn_state) \ 889 + container_of_const((conn_state), struct intel_tv_connector_state, base) 889 890 890 891 static struct drm_connector_state * 891 892 intel_tv_connector_duplicate_state(struct drm_connector *connector) ··· 962 961 { 963 962 struct drm_i915_private *i915 = to_i915(connector->dev); 964 963 const struct tv_mode *tv_mode = intel_tv_mode_find(connector->state); 965 - int max_dotclk = i915->max_dotclk_freq; 964 + int max_dotclk = i915->display.cdclk.max_dotclk_freq; 966 965 enum drm_mode_status status; 967 966 968 967 status = intel_cpu_transcoder_mode_valid(i915, mode); 969 968 if (status != MODE_OK) 970 969 return status; 971 - 972 - if (mode->flags & DRM_MODE_FLAG_DBLSCAN) 973 - return MODE_NO_DBLESCAN; 974 970 975 971 if (mode->clock > max_dotclk) 976 972 return MODE_CLOCK_HIGH;
+19 -17
drivers/gpu/drm/i915/display/intel_vbt_defs.h
··· 485 485 u8 hdmi_iboost_level:4; /* 196+ */ 486 486 u8 dp_max_link_rate:3; /* 216+ */ 487 487 u8 dp_max_link_rate_reserved:5; /* 216+ */ 488 + u8 efp_index; /* 256+ */ 488 489 } __packed; 489 490 490 491 struct bdb_general_definitions { ··· 603 602 u8 custom_vbt_version; /* 155+ */ 604 603 605 604 /* Driver Feature Flags */ 606 - u16 rmpm_enabled:1; /* 165+ */ 607 - u16 s2ddt_enabled:1; /* 165+ */ 608 - u16 dpst_enabled:1; /* 165-227 */ 609 - u16 bltclt_enabled:1; /* 165+ */ 610 - u16 adb_enabled:1; /* 165-227 */ 611 - u16 drrs_enabled:1; /* 165-227 */ 612 - u16 grs_enabled:1; /* 165+ */ 613 - u16 gpmt_enabled:1; /* 165+ */ 614 - u16 tbt_enabled:1; /* 165+ */ 605 + u16 rmpm_enabled:1; /* 159+ */ 606 + u16 s2ddt_enabled:1; /* 159+ */ 607 + u16 dpst_enabled:1; /* 159-227 */ 608 + u16 bltclt_enabled:1; /* 159+ */ 609 + u16 adb_enabled:1; /* 159-227 */ 610 + u16 drrs_enabled:1; /* 159-227 */ 611 + u16 grs_enabled:1; /* 159+ */ 612 + u16 gpmt_enabled:1; /* 159+ */ 613 + u16 tbt_enabled:1; /* 159+ */ 615 614 u16 psr_enabled:1; /* 165-227 */ 616 615 u16 ips_enabled:1; /* 165+ */ 617 - u16 dpfs_enabled:1; /* 165+ */ 616 + u16 dfps_enabled:1; /* 165+ */ 618 617 u16 dmrrs_enabled:1; /* 174-227 */ 619 618 u16 adt_enabled:1; /* ???-228 */ 620 619 u16 hpd_wake:1; /* 201-240 */ 621 - u16 pc_feature_valid:1; 620 + u16 pc_feature_valid:1; /* 159+ */ 622 621 } __packed; 623 622 624 623 /* ··· 881 880 struct lfp_backlight_data_entry { 882 881 u8 type:2; 883 882 u8 active_low_pwm:1; 884 - u8 obsolete1:5; 883 + u8 i2c_pin:3; /* obsolete since ? */ 884 + u8 i2c_speed:2; /* obsolete since ? */ 885 885 u16 pwm_freq_hz; 886 886 u8 min_brightness; /* ???-233 */ 887 - u8 obsolete2; 888 - u8 obsolete3; 887 + u8 i2c_address; /* obsolete since ? */ 888 + u8 i2c_command; /* obsolete since ? */ 889 889 } __packed; 890 890 891 891 struct lfp_backlight_control_method { ··· 907 905 struct bdb_lfp_backlight_data { 908 906 u8 entry_size; 909 907 struct lfp_backlight_data_entry data[16]; 910 - u8 level[16]; /* ???-233 */ 911 - struct lfp_backlight_control_method backlight_control[16]; 908 + u8 level[16]; /* 162-233 */ 909 + struct lfp_backlight_control_method backlight_control[16]; /* 191+ */ 912 910 struct lfp_brightness_level brightness_level[16]; /* 234+ */ 913 911 struct lfp_brightness_level brightness_min_level[16]; /* 234+ */ 914 912 u8 brightness_precision_bits[16]; /* 236+ */ ··· 919 917 * Block 44 - LFP Power Conservation Features Block 920 918 */ 921 919 struct lfp_power_features { 922 - u8 reserved1:1; 920 + u8 dpst_support:1; /* ???-159 */ 923 921 u8 power_conservation_pref:3; 924 922 u8 reserved2:1; 925 923 u8 lace_enabled_status:1; /* 210+ */
+38 -2
drivers/gpu/drm/i915/display/intel_vrr.c
··· 9 9 #include "intel_de.h" 10 10 #include "intel_display_types.h" 11 11 #include "intel_vrr.h" 12 + #include "intel_dp.h" 12 13 13 14 bool intel_vrr_is_capable(struct intel_connector *connector) 14 15 { ··· 114 113 struct drm_i915_private *i915 = to_i915(crtc->base.dev); 115 114 struct intel_connector *connector = 116 115 to_intel_connector(conn_state->connector); 116 + struct intel_dp *intel_dp = intel_attached_dp(connector); 117 117 struct drm_display_mode *adjusted_mode = &crtc_state->hw.adjusted_mode; 118 118 const struct drm_display_info *info = &connector->base.display_info; 119 119 int vmin, vmax; 120 + 121 + /* 122 + * FIXME all joined pipes share the same transcoder. 123 + * Need to account for that during VRR toggle/push/etc. 124 + */ 125 + if (crtc_state->bigjoiner_pipes) 126 + return; 120 127 121 128 if (adjusted_mode->flags & DRM_MODE_FLAG_INTERLACE) 122 129 return; ··· 174 165 if (crtc_state->uapi.vrr_enabled) { 175 166 crtc_state->vrr.enable = true; 176 167 crtc_state->mode_flags |= I915_MODE_FLAG_VRR; 168 + if (intel_dp_as_sdp_supported(intel_dp)) { 169 + crtc_state->vrr.vsync_start = 170 + (crtc_state->hw.adjusted_mode.crtc_vtotal - 171 + crtc_state->hw.adjusted_mode.vsync_start); 172 + crtc_state->vrr.vsync_end = 173 + (crtc_state->hw.adjusted_mode.crtc_vtotal - 174 + crtc_state->hw.adjusted_mode.vsync_end); 175 + } 177 176 } 178 177 } 179 178 ··· 257 240 return; 258 241 259 242 intel_de_write(dev_priv, TRANS_PUSH(cpu_transcoder), TRANS_PUSH_EN); 243 + 244 + if (HAS_AS_SDP(dev_priv)) 245 + intel_de_write(dev_priv, TRANS_VRR_VSYNC(cpu_transcoder), 246 + VRR_VSYNC_END(crtc_state->vrr.vsync_end) | 247 + VRR_VSYNC_START(crtc_state->vrr.vsync_start)); 248 + 260 249 intel_de_write(dev_priv, TRANS_VRR_CTL(cpu_transcoder), 261 250 VRR_CTL_VRR_ENABLE | trans_vrr_ctl(crtc_state)); 262 251 } ··· 281 258 intel_de_wait_for_clear(dev_priv, TRANS_VRR_STATUS(cpu_transcoder), 282 259 VRR_STATUS_VRR_EN_LIVE, 1000); 283 260 intel_de_write(dev_priv, TRANS_PUSH(cpu_transcoder), 0); 261 + 262 + if (HAS_AS_SDP(dev_priv)) 263 + intel_de_write(dev_priv, TRANS_VRR_VSYNC(cpu_transcoder), 0); 284 264 } 285 265 286 266 void intel_vrr_get_config(struct intel_crtc_state *crtc_state) 287 267 { 288 268 struct drm_i915_private *dev_priv = to_i915(crtc_state->uapi.crtc->dev); 289 269 enum transcoder cpu_transcoder = crtc_state->cpu_transcoder; 290 - u32 trans_vrr_ctl; 270 + u32 trans_vrr_ctl, trans_vrr_vsync; 291 271 292 272 trans_vrr_ctl = intel_de_read(dev_priv, TRANS_VRR_CTL(cpu_transcoder)); 293 273 ··· 310 284 crtc_state->vrr.vmin = intel_de_read(dev_priv, TRANS_VRR_VMIN(cpu_transcoder)) + 1; 311 285 } 312 286 313 - if (crtc_state->vrr.enable) 287 + if (crtc_state->vrr.enable) { 314 288 crtc_state->mode_flags |= I915_MODE_FLAG_VRR; 289 + 290 + if (HAS_AS_SDP(dev_priv)) { 291 + trans_vrr_vsync = 292 + intel_de_read(dev_priv, TRANS_VRR_VSYNC(cpu_transcoder)); 293 + crtc_state->vrr.vsync_start = 294 + REG_FIELD_GET(VRR_VSYNC_START_MASK, trans_vrr_vsync); 295 + crtc_state->vrr.vsync_end = 296 + REG_FIELD_GET(VRR_VSYNC_END_MASK, trans_vrr_vsync); 297 + } 298 + } 315 299 }
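For the Adaptive Sync SDP path above, vrr.vsync_start/vsync_end are programmed as distances back from the end of the frame rather than as raw mode timings. With a hypothetical 1920x1080@60 mode (crtc_vtotal 1125, vsync_start 1084, vsync_end 1089), the values latched via VRR_VSYNC_START()/VRR_VSYNC_END() would be:

	vrr.vsync_start = crtc_vtotal - vsync_start = 1125 - 1084 = 41
	vrr.vsync_end   = crtc_vtotal - vsync_end   = 1125 - 1089 = 36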
+4 -3
drivers/gpu/drm/i915/display/skl_scaler.c
··· 213 213 * The pipe scaler does not use all the bits of PIPESRC, at least 214 214 * on the earlier platforms. So even when we're scaling a plane 215 215 * the *pipe* source size must not be too large. For simplicity 216 - * we assume the limits match the scaler source size limits. Might 217 - * not be 100% accurate on all platforms, but good enough for now. 216 + * we assume the limits match the scaler destination size limits. 217 + * Might not be 100% accurate on all platforms, but good enough for 218 + * now. 218 219 */ 219 - if (pipe_src_w > max_src_w || pipe_src_h > max_src_h) { 220 + if (pipe_src_w > max_dst_w || pipe_src_h > max_dst_h) { 220 221 drm_dbg_kms(&dev_priv->drm, 221 222 "scaler_user index %u.%u: pipe src size %ux%u " 222 223 "is out of scaler range\n",
+225 -95
drivers/gpu/drm/i915/display/skl_watermark.c
··· 6 6 #include <drm/drm_blend.h> 7 7 8 8 #include "i915_drv.h" 9 - #include "i915_fixed.h" 10 9 #include "i915_reg.h" 11 10 #include "i9xx_wm.h" 12 11 #include "intel_atomic.h" 13 12 #include "intel_atomic_plane.h" 14 13 #include "intel_bw.h" 14 + #include "intel_cdclk.h" 15 15 #include "intel_crtc.h" 16 16 #include "intel_de.h" 17 17 #include "intel_display.h" 18 18 #include "intel_display_power.h" 19 19 #include "intel_display_types.h" 20 20 #include "intel_fb.h" 21 + #include "intel_fixed.h" 21 22 #include "intel_pcode.h" 22 23 #include "intel_wm.h" 23 24 #include "skl_watermark.h" ··· 2602 2601 return ret; 2603 2602 } 2604 2603 2605 - if (HAS_MBUS_JOINING(i915)) 2604 + if (HAS_MBUS_JOINING(i915)) { 2606 2605 new_dbuf_state->joined_mbus = 2607 2606 adlp_check_mbus_joined(new_dbuf_state->active_pipes); 2607 + 2608 + if (old_dbuf_state->joined_mbus != new_dbuf_state->joined_mbus) { 2609 + ret = intel_cdclk_state_set_joined_mbus(state, new_dbuf_state->joined_mbus); 2610 + if (ret) 2611 + return ret; 2612 + } 2613 + } 2608 2614 2609 2615 for_each_intel_crtc(&i915->drm, crtc) { 2610 2616 enum pipe pipe = crtc->pipe; ··· 2635 2627 ret = intel_atomic_serialize_global_state(&new_dbuf_state->base); 2636 2628 if (ret) 2637 2629 return ret; 2638 - 2639 - if (old_dbuf_state->joined_mbus != new_dbuf_state->joined_mbus) { 2640 - /* TODO: Implement vblank synchronized MBUS joining changes */ 2641 - ret = intel_modeset_all_pipes_late(state, "MBUS joining change"); 2642 - if (ret) 2643 - return ret; 2644 - } 2645 2630 2646 2631 drm_dbg_kms(&i915->drm, 2647 2632 "Enabled dbuf slices 0x%x -> 0x%x (total dbuf slices 0x%x), mbus joined? %s->%s\n", ··· 3057 3056 3058 3057 if (HAS_MBUS_JOINING(i915)) 3059 3058 dbuf_state->joined_mbus = intel_de_read(i915, MBUS_CTL) & MBUS_JOIN; 3059 + 3060 + dbuf_state->mdclk_cdclk_ratio = intel_mdclk_cdclk_ratio(i915, &i915->display.cdclk.hw); 3060 3061 3061 3062 for_each_intel_crtc(&i915->drm, crtc) { 3062 3063 struct intel_crtc_state *crtc_state = ··· 3533 3530 return 0; 3534 3531 } 3535 3532 3536 - /* 3537 - * Configure MBUS_CTL and all DBUF_CTL_S of each slice to join_mbus state before 3538 - * update the request state of all DBUS slices. 3539 - */ 3540 - static void update_mbus_pre_enable(struct intel_atomic_state *state) 3541 - { 3542 - struct drm_i915_private *i915 = to_i915(state->base.dev); 3543 - u32 mbus_ctl, dbuf_min_tracker_val; 3544 - enum dbuf_slice slice; 3545 - const struct intel_dbuf_state *dbuf_state = 3546 - intel_atomic_get_new_dbuf_state(state); 3547 - 3548 - if (!HAS_MBUS_JOINING(i915)) 3549 - return; 3550 - 3551 - /* 3552 - * TODO: Implement vblank synchronized MBUS joining changes. 3553 - * Must be properly coordinated with dbuf reprogramming. 
3554 - */ 3555 - if (dbuf_state->joined_mbus) { 3556 - mbus_ctl = MBUS_HASHING_MODE_1x4 | MBUS_JOIN | 3557 - MBUS_JOIN_PIPE_SELECT_NONE; 3558 - dbuf_min_tracker_val = DBUF_MIN_TRACKER_STATE_SERVICE(3); 3559 - } else { 3560 - mbus_ctl = MBUS_HASHING_MODE_2x2 | 3561 - MBUS_JOIN_PIPE_SELECT_NONE; 3562 - dbuf_min_tracker_val = DBUF_MIN_TRACKER_STATE_SERVICE(1); 3563 - } 3564 - 3565 - intel_de_rmw(i915, MBUS_CTL, 3566 - MBUS_HASHING_MODE_MASK | MBUS_JOIN | 3567 - MBUS_JOIN_PIPE_SELECT_MASK, mbus_ctl); 3568 - 3569 - for_each_dbuf_slice(i915, slice) 3570 - intel_de_rmw(i915, DBUF_CTL_S(slice), 3571 - DBUF_MIN_TRACKER_STATE_SERVICE_MASK, 3572 - dbuf_min_tracker_val); 3573 - } 3574 - 3575 - void intel_dbuf_pre_plane_update(struct intel_atomic_state *state) 3576 - { 3577 - struct drm_i915_private *i915 = to_i915(state->base.dev); 3578 - const struct intel_dbuf_state *new_dbuf_state = 3579 - intel_atomic_get_new_dbuf_state(state); 3580 - const struct intel_dbuf_state *old_dbuf_state = 3581 - intel_atomic_get_old_dbuf_state(state); 3582 - 3583 - if (!new_dbuf_state || 3584 - (new_dbuf_state->enabled_slices == old_dbuf_state->enabled_slices && 3585 - new_dbuf_state->joined_mbus == old_dbuf_state->joined_mbus)) 3586 - return; 3587 - 3588 - WARN_ON(!new_dbuf_state->base.changed); 3589 - 3590 - update_mbus_pre_enable(state); 3591 - gen9_dbuf_slices_update(i915, 3592 - old_dbuf_state->enabled_slices | 3593 - new_dbuf_state->enabled_slices); 3594 - } 3595 - 3596 - void intel_dbuf_post_plane_update(struct intel_atomic_state *state) 3597 - { 3598 - struct drm_i915_private *i915 = to_i915(state->base.dev); 3599 - const struct intel_dbuf_state *new_dbuf_state = 3600 - intel_atomic_get_new_dbuf_state(state); 3601 - const struct intel_dbuf_state *old_dbuf_state = 3602 - intel_atomic_get_old_dbuf_state(state); 3603 - 3604 - if (!new_dbuf_state || 3605 - (new_dbuf_state->enabled_slices == old_dbuf_state->enabled_slices && 3606 - new_dbuf_state->joined_mbus == old_dbuf_state->joined_mbus)) 3607 - return; 3608 - 3609 - WARN_ON(!new_dbuf_state->base.changed); 3610 - 3611 - gen9_dbuf_slices_update(i915, 3612 - new_dbuf_state->enabled_slices); 3613 - } 3614 - 3615 3533 static bool xelpdp_is_only_pipe_per_dbuf_bank(enum pipe pipe, u8 active_pipes) 3616 3534 { 3617 3535 switch (pipe) { ··· 3552 3628 return false; 3553 3629 } 3554 3630 3555 - void intel_mbus_dbox_update(struct intel_atomic_state *state) 3631 + static void intel_mbus_dbox_update(struct intel_atomic_state *state) 3556 3632 { 3557 3633 struct drm_i915_private *i915 = to_i915(state->base.dev); 3558 3634 const struct intel_dbuf_state *new_dbuf_state, *old_dbuf_state; 3559 - const struct intel_crtc_state *new_crtc_state; 3560 3635 const struct intel_crtc *crtc; 3561 3636 u32 val = 0; 3562 - int i; 3563 3637 3564 3638 if (DISPLAY_VER(i915) < 11) 3565 3639 return; ··· 3601 3679 val |= MBUS_DBOX_B_CREDIT(8); 3602 3680 } 3603 3681 3604 - for_each_new_intel_crtc_in_state(state, crtc, new_crtc_state, i) { 3682 + for_each_intel_crtc_in_pipe_mask(&i915->drm, crtc, new_dbuf_state->active_pipes) { 3605 3683 u32 pipe_val = val; 3606 - 3607 - if (!new_crtc_state->hw.active) 3608 - continue; 3609 3684 3610 3685 if (DISPLAY_VER(i915) >= 14) { 3611 3686 if (xelpdp_is_only_pipe_per_dbuf_bank(crtc->pipe, ··· 3614 3695 3615 3696 intel_de_write(i915, PIPE_MBUS_DBOX_CTL(crtc->pipe), pipe_val); 3616 3697 } 3698 + } 3699 + 3700 + int intel_dbuf_state_set_mdclk_cdclk_ratio(struct intel_atomic_state *state, 3701 + int ratio) 3702 + { 3703 + struct intel_dbuf_state *dbuf_state; 3704 + 
3705 + dbuf_state = intel_atomic_get_dbuf_state(state); 3706 + if (IS_ERR(dbuf_state)) 3707 + return PTR_ERR(dbuf_state); 3708 + 3709 + dbuf_state->mdclk_cdclk_ratio = ratio; 3710 + 3711 + return intel_atomic_lock_global_state(&dbuf_state->base); 3712 + } 3713 + 3714 + void intel_dbuf_mdclk_cdclk_ratio_update(struct drm_i915_private *i915, 3715 + int ratio, bool joined_mbus) 3716 + { 3717 + enum dbuf_slice slice; 3718 + 3719 + if (!HAS_MBUS_JOINING(i915)) 3720 + return; 3721 + 3722 + if (DISPLAY_VER(i915) >= 20) 3723 + intel_de_rmw(i915, MBUS_CTL, MBUS_TRANSLATION_THROTTLE_MIN_MASK, 3724 + MBUS_TRANSLATION_THROTTLE_MIN(ratio - 1)); 3725 + 3726 + if (joined_mbus) 3727 + ratio *= 2; 3728 + 3729 + drm_dbg_kms(&i915->drm, "Updating dbuf ratio to %d (mbus joined: %s)\n", 3730 + ratio, str_yes_no(joined_mbus)); 3731 + 3732 + for_each_dbuf_slice(i915, slice) 3733 + intel_de_rmw(i915, DBUF_CTL_S(slice), 3734 + DBUF_MIN_TRACKER_STATE_SERVICE_MASK, 3735 + DBUF_MIN_TRACKER_STATE_SERVICE(ratio - 1)); 3736 + } 3737 + 3738 + static void intel_dbuf_mdclk_min_tracker_update(struct intel_atomic_state *state) 3739 + { 3740 + struct drm_i915_private *i915 = to_i915(state->base.dev); 3741 + const struct intel_dbuf_state *old_dbuf_state = 3742 + intel_atomic_get_old_dbuf_state(state); 3743 + const struct intel_dbuf_state *new_dbuf_state = 3744 + intel_atomic_get_new_dbuf_state(state); 3745 + int mdclk_cdclk_ratio; 3746 + 3747 + if (intel_cdclk_is_decreasing_later(state)) { 3748 + /* cdclk/mdclk will be changed later by intel_set_cdclk_post_plane_update() */ 3749 + mdclk_cdclk_ratio = old_dbuf_state->mdclk_cdclk_ratio; 3750 + } else { 3751 + /* cdclk/mdclk already changed by intel_set_cdclk_pre_plane_update() */ 3752 + mdclk_cdclk_ratio = new_dbuf_state->mdclk_cdclk_ratio; 3753 + } 3754 + 3755 + intel_dbuf_mdclk_cdclk_ratio_update(i915, mdclk_cdclk_ratio, 3756 + new_dbuf_state->joined_mbus); 3757 + } 3758 + 3759 + static enum pipe intel_mbus_joined_pipe(struct intel_atomic_state *state, 3760 + const struct intel_dbuf_state *dbuf_state) 3761 + { 3762 + struct drm_i915_private *i915 = to_i915(state->base.dev); 3763 + enum pipe pipe = ffs(dbuf_state->active_pipes) - 1; 3764 + const struct intel_crtc_state *new_crtc_state; 3765 + struct intel_crtc *crtc; 3766 + 3767 + drm_WARN_ON(&i915->drm, !dbuf_state->joined_mbus); 3768 + drm_WARN_ON(&i915->drm, !is_power_of_2(dbuf_state->active_pipes)); 3769 + 3770 + crtc = intel_crtc_for_pipe(i915, pipe); 3771 + new_crtc_state = intel_atomic_get_new_crtc_state(state, crtc); 3772 + 3773 + if (new_crtc_state && !intel_crtc_needs_modeset(new_crtc_state)) 3774 + return pipe; 3775 + else 3776 + return INVALID_PIPE; 3777 + } 3778 + 3779 + static void intel_dbuf_mbus_join_update(struct intel_atomic_state *state, 3780 + enum pipe pipe) 3781 + { 3782 + struct drm_i915_private *i915 = to_i915(state->base.dev); 3783 + const struct intel_dbuf_state *old_dbuf_state = 3784 + intel_atomic_get_old_dbuf_state(state); 3785 + const struct intel_dbuf_state *new_dbuf_state = 3786 + intel_atomic_get_new_dbuf_state(state); 3787 + u32 mbus_ctl; 3788 + 3789 + drm_dbg_kms(&i915->drm, "Changing mbus joined: %s -> %s (pipe: %c)\n", 3790 + str_yes_no(old_dbuf_state->joined_mbus), 3791 + str_yes_no(new_dbuf_state->joined_mbus), 3792 + pipe != INVALID_PIPE ? 
pipe_name(pipe) : '*'); 3793 + 3794 + if (new_dbuf_state->joined_mbus) 3795 + mbus_ctl = MBUS_HASHING_MODE_1x4 | MBUS_JOIN; 3796 + else 3797 + mbus_ctl = MBUS_HASHING_MODE_2x2; 3798 + 3799 + if (pipe != INVALID_PIPE) 3800 + mbus_ctl |= MBUS_JOIN_PIPE_SELECT(pipe); 3801 + else 3802 + mbus_ctl |= MBUS_JOIN_PIPE_SELECT_NONE; 3803 + 3804 + intel_de_rmw(i915, MBUS_CTL, 3805 + MBUS_HASHING_MODE_MASK | MBUS_JOIN | 3806 + MBUS_JOIN_PIPE_SELECT_MASK, mbus_ctl); 3807 + } 3808 + 3809 + void intel_dbuf_mbus_pre_ddb_update(struct intel_atomic_state *state) 3810 + { 3811 + const struct intel_dbuf_state *new_dbuf_state = 3812 + intel_atomic_get_new_dbuf_state(state); 3813 + const struct intel_dbuf_state *old_dbuf_state = 3814 + intel_atomic_get_old_dbuf_state(state); 3815 + 3816 + if (!new_dbuf_state) 3817 + return; 3818 + 3819 + if (!old_dbuf_state->joined_mbus && new_dbuf_state->joined_mbus) { 3820 + enum pipe pipe = intel_mbus_joined_pipe(state, new_dbuf_state); 3821 + 3822 + WARN_ON(!new_dbuf_state->base.changed); 3823 + 3824 + intel_dbuf_mbus_join_update(state, pipe); 3825 + intel_mbus_dbox_update(state); 3826 + intel_dbuf_mdclk_min_tracker_update(state); 3827 + } 3828 + } 3829 + 3830 + void intel_dbuf_mbus_post_ddb_update(struct intel_atomic_state *state) 3831 + { 3832 + struct drm_i915_private *i915 = to_i915(state->base.dev); 3833 + const struct intel_dbuf_state *new_dbuf_state = 3834 + intel_atomic_get_new_dbuf_state(state); 3835 + const struct intel_dbuf_state *old_dbuf_state = 3836 + intel_atomic_get_old_dbuf_state(state); 3837 + 3838 + if (!new_dbuf_state) 3839 + return; 3840 + 3841 + if (old_dbuf_state->joined_mbus && !new_dbuf_state->joined_mbus) { 3842 + enum pipe pipe = intel_mbus_joined_pipe(state, old_dbuf_state); 3843 + 3844 + WARN_ON(!new_dbuf_state->base.changed); 3845 + 3846 + intel_dbuf_mdclk_min_tracker_update(state); 3847 + intel_mbus_dbox_update(state); 3848 + intel_dbuf_mbus_join_update(state, pipe); 3849 + 3850 + if (pipe != INVALID_PIPE) { 3851 + struct intel_crtc *crtc = intel_crtc_for_pipe(i915, pipe); 3852 + 3853 + intel_crtc_wait_for_next_vblank(crtc); 3854 + } 3855 + } else if (old_dbuf_state->joined_mbus == new_dbuf_state->joined_mbus && 3856 + old_dbuf_state->active_pipes != new_dbuf_state->active_pipes) { 3857 + WARN_ON(!new_dbuf_state->base.changed); 3858 + 3859 + intel_dbuf_mdclk_min_tracker_update(state); 3860 + intel_mbus_dbox_update(state); 3861 + } 3862 + 3863 + } 3864 + 3865 + void intel_dbuf_pre_plane_update(struct intel_atomic_state *state) 3866 + { 3867 + struct drm_i915_private *i915 = to_i915(state->base.dev); 3868 + const struct intel_dbuf_state *new_dbuf_state = 3869 + intel_atomic_get_new_dbuf_state(state); 3870 + const struct intel_dbuf_state *old_dbuf_state = 3871 + intel_atomic_get_old_dbuf_state(state); 3872 + u8 old_slices, new_slices; 3873 + 3874 + if (!new_dbuf_state) 3875 + return; 3876 + 3877 + old_slices = old_dbuf_state->enabled_slices; 3878 + new_slices = old_dbuf_state->enabled_slices | new_dbuf_state->enabled_slices; 3879 + 3880 + if (old_slices == new_slices) 3881 + return; 3882 + 3883 + WARN_ON(!new_dbuf_state->base.changed); 3884 + 3885 + gen9_dbuf_slices_update(i915, new_slices); 3886 + } 3887 + 3888 + void intel_dbuf_post_plane_update(struct intel_atomic_state *state) 3889 + { 3890 + struct drm_i915_private *i915 = to_i915(state->base.dev); 3891 + const struct intel_dbuf_state *new_dbuf_state = 3892 + intel_atomic_get_new_dbuf_state(state); 3893 + const struct intel_dbuf_state *old_dbuf_state = 3894 + 
intel_atomic_get_old_dbuf_state(state); 3895 + u8 old_slices, new_slices; 3896 + 3897 + if (!new_dbuf_state) 3898 + return; 3899 + 3900 + old_slices = old_dbuf_state->enabled_slices | new_dbuf_state->enabled_slices; 3901 + new_slices = new_dbuf_state->enabled_slices; 3902 + 3903 + if (old_slices == new_slices) 3904 + return; 3905 + 3906 + WARN_ON(!new_dbuf_state->base.changed); 3907 + 3908 + gen9_dbuf_slices_update(i915, new_slices); 3617 3909 } 3618 3910 3619 3911 static int skl_watermark_ipc_status_show(struct seq_file *m, void *data)
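Editor's note: intel_dbuf_mdclk_cdclk_ratio_update() above doubles the ratio while MBUS is joined and programs the per-slice minimum tracker field as ratio - 1 (and, on display version 20+, the translation throttle minimum likewise as ratio - 1). A standalone sketch of that arithmetic with a hypothetical ratio; plain C, not the kernel helpers.

    #include <stdbool.h>
    #include <stdio.h>

    static int min_tracker_field(int mdclk_cdclk_ratio, bool joined_mbus)
    {
            int ratio = mdclk_cdclk_ratio;

            if (joined_mbus)
                    ratio *= 2;

            return ratio - 1;       /* value for DBUF_MIN_TRACKER_STATE_SERVICE */
    }

    int main(void)
    {
            /* hypothetical mdclk:cdclk ratio of 2 */
            printf("mbus not joined: %d\n", min_tracker_field(2, false));   /* 1 */
            printf("mbus joined:     %d\n", min_tracker_field(2, true));    /* 3 */
            return 0;
    }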
+11 -2
drivers/gpu/drm/i915/display/skl_watermark.h
··· 58 58 u8 slices[I915_MAX_PIPES]; 59 59 u8 enabled_slices; 60 60 u8 active_pipes; 61 + u8 mdclk_cdclk_ratio; 61 62 bool joined_mbus; 62 63 }; 63 64 64 65 struct intel_dbuf_state * 65 66 intel_atomic_get_dbuf_state(struct intel_atomic_state *state); 66 67 67 - #define to_intel_dbuf_state(x) container_of((x), struct intel_dbuf_state, base) 68 + #define to_intel_dbuf_state(global_state) \ 69 + container_of_const((global_state), struct intel_dbuf_state, base) 70 + 68 71 #define intel_atomic_get_old_dbuf_state(state) \ 69 72 to_intel_dbuf_state(intel_atomic_get_old_global_obj_state(state, &to_i915(state->base.dev)->display.dbuf.obj)) 70 73 #define intel_atomic_get_new_dbuf_state(state) \ 71 74 to_intel_dbuf_state(intel_atomic_get_new_global_obj_state(state, &to_i915(state->base.dev)->display.dbuf.obj)) 72 75 73 76 int intel_dbuf_init(struct drm_i915_private *i915); 77 + int intel_dbuf_state_set_mdclk_cdclk_ratio(struct intel_atomic_state *state, 78 + int ratio); 79 + 74 80 void intel_dbuf_pre_plane_update(struct intel_atomic_state *state); 75 81 void intel_dbuf_post_plane_update(struct intel_atomic_state *state); 76 - void intel_mbus_dbox_update(struct intel_atomic_state *state); 82 + void intel_dbuf_mdclk_cdclk_ratio_update(struct drm_i915_private *i915, 83 + int ratio, bool joined_mbus); 84 + void intel_dbuf_mbus_pre_ddb_update(struct intel_atomic_state *state); 85 + void intel_dbuf_mbus_post_ddb_update(struct intel_atomic_state *state); 77 86 78 87 #endif /* __SKL_WATERMARK_H__ */ 79 88
+10 -8
drivers/gpu/drm/i915/display/skl_watermark_regs.h
··· 32 32 #define MBUS_BBOX_CTL_S1 _MMIO(0x45040) 33 33 #define MBUS_BBOX_CTL_S2 _MMIO(0x45044) 34 34 35 - #define MBUS_CTL _MMIO(0x4438C) 36 - #define MBUS_JOIN REG_BIT(31) 37 - #define MBUS_HASHING_MODE_MASK REG_BIT(30) 38 - #define MBUS_HASHING_MODE_2x2 REG_FIELD_PREP(MBUS_HASHING_MODE_MASK, 0) 39 - #define MBUS_HASHING_MODE_1x4 REG_FIELD_PREP(MBUS_HASHING_MODE_MASK, 1) 40 - #define MBUS_JOIN_PIPE_SELECT_MASK REG_GENMASK(28, 26) 41 - #define MBUS_JOIN_PIPE_SELECT(pipe) REG_FIELD_PREP(MBUS_JOIN_PIPE_SELECT_MASK, pipe) 42 - #define MBUS_JOIN_PIPE_SELECT_NONE MBUS_JOIN_PIPE_SELECT(7) 35 + #define MBUS_CTL _MMIO(0x4438C) 36 + #define MBUS_JOIN REG_BIT(31) 37 + #define MBUS_HASHING_MODE_MASK REG_BIT(30) 38 + #define MBUS_HASHING_MODE_2x2 REG_FIELD_PREP(MBUS_HASHING_MODE_MASK, 0) 39 + #define MBUS_HASHING_MODE_1x4 REG_FIELD_PREP(MBUS_HASHING_MODE_MASK, 1) 40 + #define MBUS_JOIN_PIPE_SELECT_MASK REG_GENMASK(28, 26) 41 + #define MBUS_JOIN_PIPE_SELECT(pipe) REG_FIELD_PREP(MBUS_JOIN_PIPE_SELECT_MASK, pipe) 42 + #define MBUS_JOIN_PIPE_SELECT_NONE MBUS_JOIN_PIPE_SELECT(7) 43 + #define MBUS_TRANSLATION_THROTTLE_MIN_MASK REG_GENMASK(15, 13) 44 + #define MBUS_TRANSLATION_THROTTLE_MIN(val) REG_FIELD_PREP(MBUS_TRANSLATION_THROTTLE_MIN_MASK, val) 43 45 44 46 /* Watermark register definitions for SKL */ 45 47 #define _CUR_WM_A_0 0x70140
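Editor's note: the new MBUS_TRANSLATION_THROTTLE_MIN field occupies bits 15:13, so the REG_FIELD_PREP() reduces to a shift-and-mask. A standalone illustration using plain-C stand-ins for the kernel's REG_GENMASK()/REG_FIELD_PREP() macros.

    #include <stdint.h>
    #include <stdio.h>

    #define THROTTLE_MIN_MASK       (0x7u << 13)    /* bits 15:13 */
    #define THROTTLE_MIN(val)       (((uint32_t)(val) << 13) & THROTTLE_MIN_MASK)

    int main(void)
    {
            /* ratio - 1 == 2 for a hypothetical mdclk:cdclk ratio of 3 */
            printf("0x%08x\n", THROTTLE_MIN(2));    /* prints 0x00004000 */
            return 0;
    }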
+1 -2
drivers/gpu/drm/i915/display/vlv_dsi.c
··· 273 273 struct drm_connector_state *conn_state) 274 274 { 275 275 struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); 276 - struct intel_dsi *intel_dsi = container_of(encoder, struct intel_dsi, 277 - base); 276 + struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder); 278 277 struct intel_connector *intel_connector = intel_dsi->attached_connector; 279 278 struct drm_display_mode *adjusted_mode = &pipe_config->hw.adjusted_mode; 280 279 int ret;
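Editor's note: the vlv_dsi.c change swaps an open-coded container_of() for the existing enc_to_intel_dsi() helper; both recover the wrapping struct from a pointer to its embedded member. A self-contained sketch of that pattern with simplified stand-in types, not the real drm/i915 structs.

    #include <stddef.h>
    #include <stdio.h>

    struct fake_encoder { int id; };
    struct fake_dsi { int lanes; struct fake_encoder base; };

    #define container_of_sketch(ptr, type, member) \
            ((type *)((char *)(ptr) - offsetof(type, member)))

    int main(void)
    {
            struct fake_dsi dsi = { .lanes = 4 };
            struct fake_encoder *enc = &dsi.base;
            struct fake_dsi *back =
                    container_of_sketch(enc, struct fake_dsi, base);

            printf("lanes=%d\n", back->lanes);      /* prints 4 */
            return 0;
    }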
+1 -1
drivers/gpu/drm/i915/gem/i915_gem_object_types.h
··· 386 386 * and kernel mode driver for caching policy control after GEN12. 387 387 * In the meantime platform specific tables are created to translate 388 388 * i915_cache_level into pat index, for more details check the macros 389 - * defined i915/i915_pci.c, e.g. PVC_CACHELEVEL. 389 + * defined i915/i915_pci.c, e.g. TGL_CACHELEVEL. 390 390 * For backward compatibility, this field contains values exactly match 391 391 * the entries of enum i915_cache_level for pre-GEN12 platforms (See 392 392 * LEGACY_CACHELEVEL), so that the PTE encode functions for these
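Editor's note: the corrected comment refers to per-platform tables that translate i915_cache_level into a PAT index. The sketch below only shows the shape of such a table; the values are placeholders, not the real TGL_CACHELEVEL entries.

    #include <stdio.h>

    enum fake_cache_level { FAKE_NONE, FAKE_LLC, FAKE_L3_LLC, FAKE_WT, FAKE_NUM };

    /* placeholder PAT indices, for illustration only */
    static const unsigned int fake_cachelevel_to_pat[FAKE_NUM] = {
            [FAKE_NONE]   = 3,
            [FAKE_LLC]    = 0,
            [FAKE_L3_LLC] = 0,
            [FAKE_WT]     = 2,
    };

    int main(void)
    {
            printf("PAT index for write-through: %u\n",
                   fake_cachelevel_to_pat[FAKE_WT]);
            return 0;
    }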
+2 -2
drivers/gpu/drm/i915/gem/selftests/huge_pages.c
··· 713 713 { 714 714 struct drm_i915_private *i915 = arg; 715 715 unsigned int supported = RUNTIME_INFO(i915)->page_sizes; 716 - bool has_pte64 = GRAPHICS_VER_FULL(i915) >= IP_VER(12, 50); 716 + bool has_pte64 = GRAPHICS_VER_FULL(i915) >= IP_VER(12, 55); 717 717 struct i915_address_space *vm; 718 718 struct i915_gem_context *ctx; 719 719 unsigned long max_pages; ··· 857 857 static int igt_ppgtt_64K(void *arg) 858 858 { 859 859 struct drm_i915_private *i915 = arg; 860 - bool has_pte64 = GRAPHICS_VER_FULL(i915) >= IP_VER(12, 50); 860 + bool has_pte64 = GRAPHICS_VER_FULL(i915) >= IP_VER(12, 55); 861 861 struct drm_i915_gem_object *obj; 862 862 struct i915_address_space *vm; 863 863 struct i915_gem_context *ctx;
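Editor's note: the repeated IP_VER(12, 50) to IP_VER(12, 55) bumps across these files follow from dropping XeHP SDV (graphics IP 12.50), which leaves DG2's 12.55 as the lowest Xe_HP version the checks need to match. IP_VER() packs major and release into one integer (major in the high byte), so the comparisons stay plain integer compares; FAKE_IP_VER below is a stand-in, not the kernel macro.

    #include <stdio.h>

    #define FAKE_IP_VER(ver, rel)   ((ver) << 8 | (rel))

    int main(void)
    {
            printf("12.50 -> 0x%04x\n", FAKE_IP_VER(12, 50));       /* 0x0c32 */
            printf("12.55 -> 0x%04x\n", FAKE_IP_VER(12, 55));       /* 0x0c37 */
            return 0;
    }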
+4 -4
drivers/gpu/drm/i915/gem/selftests/i915_gem_client_blt.c
··· 117 117 if (gen < 12) 118 118 return true; 119 119 120 - if (GRAPHICS_VER_FULL(i915) < IP_VER(12, 50)) 120 + if (GRAPHICS_VER_FULL(i915) < IP_VER(12, 55)) 121 121 return false; 122 122 123 123 return HAS_DISPLAY(i915); ··· 166 166 src_pitch = t->width; /* in dwords */ 167 167 if (src->tiling == CLIENT_TILING_Y) { 168 168 src_tiles = XY_FAST_COPY_BLT_D0_SRC_TILE_MODE(YMAJOR); 169 - if (GRAPHICS_VER_FULL(to_i915(batch->base.dev)) >= IP_VER(12, 50)) 169 + if (GRAPHICS_VER_FULL(to_i915(batch->base.dev)) >= IP_VER(12, 55)) 170 170 src_4t = XY_FAST_COPY_BLT_D1_SRC_TILE4; 171 171 } else if (src->tiling == CLIENT_TILING_X) { 172 172 src_tiles = XY_FAST_COPY_BLT_D0_SRC_TILE_MODE(TILE_X); ··· 177 177 dst_pitch = t->width; /* in dwords */ 178 178 if (dst->tiling == CLIENT_TILING_Y) { 179 179 dst_tiles = XY_FAST_COPY_BLT_D0_DST_TILE_MODE(YMAJOR); 180 - if (GRAPHICS_VER_FULL(to_i915(batch->base.dev)) >= IP_VER(12, 50)) 180 + if (GRAPHICS_VER_FULL(to_i915(batch->base.dev)) >= IP_VER(12, 55)) 181 181 dst_4t = XY_FAST_COPY_BLT_D1_DST_TILE4; 182 182 } else if (dst->tiling == CLIENT_TILING_X) { 183 183 dst_tiles = XY_FAST_COPY_BLT_D0_DST_TILE_MODE(TILE_X); ··· 365 365 v += x; 366 366 367 367 swizzle = gt->ggtt->bit_6_swizzle_x; 368 - } else if (GRAPHICS_VER_FULL(gt->i915) >= IP_VER(12, 50)) { 368 + } else if (GRAPHICS_VER_FULL(gt->i915) >= IP_VER(12, 55)) { 369 369 /* Y-major tiling layout is Tile4 for Xe_HP and beyond */ 370 370 v = linear_x_y_to_ftiled_pos(x_pos, y_pos, stride, 32); 371 371
+1 -4
drivers/gpu/drm/i915/gt/gen8_engine_cs.c
··· 189 189 { 190 190 i915_reg_t reg = gen12_get_aux_inv_reg(engine); 191 191 192 - if (IS_PONTEVECCHIO(engine->i915)) 193 - return false; 194 - 195 192 /* 196 193 * So far platforms supported by i915 having flat ccs do not require 197 194 * AUX invalidation. Check also whether the engine requires it. ··· 824 827 cs = gen12_emit_pipe_control(cs, 0, 825 828 PIPE_CONTROL_DEPTH_CACHE_FLUSH, 0); 826 829 827 - if (GRAPHICS_VER(i915) == 12 && GRAPHICS_VER_FULL(i915) < IP_VER(12, 50)) 830 + if (GRAPHICS_VER(i915) == 12 && GRAPHICS_VER_FULL(i915) < IP_VER(12, 55)) 828 831 /* Wa_1409600907 */ 829 832 flags |= PIPE_CONTROL_DEPTH_STALL; 830 833
+20 -20
drivers/gpu/drm/i915/gt/gen8_ppgtt.c
··· 500 500 } 501 501 502 502 static void 503 - xehpsdv_ppgtt_insert_huge(struct i915_address_space *vm, 504 - struct i915_vma_resource *vma_res, 505 - struct sgt_dma *iter, 506 - unsigned int pat_index, 507 - u32 flags) 503 + xehp_ppgtt_insert_huge(struct i915_address_space *vm, 504 + struct i915_vma_resource *vma_res, 505 + struct sgt_dma *iter, 506 + unsigned int pat_index, 507 + u32 flags) 508 508 { 509 509 const gen8_pte_t pte_encode = vm->pte_encode(0, pat_index, flags); 510 510 unsigned int rem = sg_dma_len(iter->sg); ··· 741 741 struct sgt_dma iter = sgt_dma(vma_res); 742 742 743 743 if (vma_res->bi.page_sizes.sg > I915_GTT_PAGE_SIZE) { 744 - if (GRAPHICS_VER_FULL(vm->i915) >= IP_VER(12, 50)) 745 - xehpsdv_ppgtt_insert_huge(vm, vma_res, &iter, pat_index, flags); 744 + if (GRAPHICS_VER_FULL(vm->i915) >= IP_VER(12, 55)) 745 + xehp_ppgtt_insert_huge(vm, vma_res, &iter, pat_index, flags); 746 746 else 747 747 gen8_ppgtt_insert_huge(vm, vma_res, &iter, pat_index, flags); 748 748 } else { ··· 781 781 drm_clflush_virt_range(&vaddr[gen8_pd_index(idx, 0)], sizeof(*vaddr)); 782 782 } 783 783 784 - static void __xehpsdv_ppgtt_insert_entry_lm(struct i915_address_space *vm, 785 - dma_addr_t addr, 786 - u64 offset, 787 - unsigned int pat_index, 788 - u32 flags) 784 + static void xehp_ppgtt_insert_entry_lm(struct i915_address_space *vm, 785 + dma_addr_t addr, 786 + u64 offset, 787 + unsigned int pat_index, 788 + u32 flags) 789 789 { 790 790 u64 idx = offset >> GEN8_PTE_SHIFT; 791 791 struct i915_page_directory * const pdp = ··· 810 810 vaddr[gen8_pd_index(idx, 0) / 16] = vm->pte_encode(addr, pat_index, flags); 811 811 } 812 812 813 - static void xehpsdv_ppgtt_insert_entry(struct i915_address_space *vm, 814 - dma_addr_t addr, 815 - u64 offset, 816 - unsigned int pat_index, 817 - u32 flags) 813 + static void xehp_ppgtt_insert_entry(struct i915_address_space *vm, 814 + dma_addr_t addr, 815 + u64 offset, 816 + unsigned int pat_index, 817 + u32 flags) 818 818 { 819 819 if (flags & PTE_LM) 820 - return __xehpsdv_ppgtt_insert_entry_lm(vm, addr, offset, 821 - pat_index, flags); 820 + return xehp_ppgtt_insert_entry_lm(vm, addr, offset, 821 + pat_index, flags); 822 822 823 823 return gen8_ppgtt_insert_entry(vm, addr, offset, pat_index, flags); 824 824 } ··· 1042 1042 ppgtt->vm.bind_async_flags = I915_VMA_LOCAL_BIND; 1043 1043 ppgtt->vm.insert_entries = gen8_ppgtt_insert; 1044 1044 if (HAS_64K_PAGES(gt->i915)) 1045 - ppgtt->vm.insert_page = xehpsdv_ppgtt_insert_entry; 1045 + ppgtt->vm.insert_page = xehp_ppgtt_insert_entry; 1046 1046 else 1047 1047 ppgtt->vm.insert_page = gen8_ppgtt_insert_entry; 1048 1048 ppgtt->vm.allocate_va_range = gen8_ppgtt_alloc;
+4 -39
drivers/gpu/drm/i915/gt/intel_engine_cs.c
··· 497 497 engine->logical_mask = BIT(logical_instance); 498 498 __sprint_engine_name(engine); 499 499 500 - if ((engine->class == COMPUTE_CLASS && !RCS_MASK(engine->gt) && 501 - __ffs(CCS_MASK(engine->gt)) == engine->instance) || 502 - engine->class == RENDER_CLASS) 500 + if ((engine->class == COMPUTE_CLASS || engine->class == RENDER_CLASS) && 501 + __ffs(CCS_MASK(engine->gt) | RCS_MASK(engine->gt)) == engine->instance) 503 502 engine->flags |= I915_ENGINE_FIRST_RENDER_COMPUTE; 504 503 505 504 /* features common between engines sharing EUs */ ··· 764 765 * and bits have disable semantices. 765 766 */ 766 767 media_fuse = intel_uncore_read(gt->uncore, GEN11_GT_VEBOX_VDBOX_DISABLE); 767 - if (MEDIA_VER_FULL(i915) < IP_VER(12, 50)) 768 + if (MEDIA_VER_FULL(i915) < IP_VER(12, 55)) 768 769 media_fuse = ~media_fuse; 769 770 770 771 vdbox_mask = media_fuse & GEN11_GT_VDBOX_DISABLE_MASK; 771 772 vebox_mask = (media_fuse & GEN11_GT_VEBOX_DISABLE_MASK) >> 772 773 GEN11_GT_VEBOX_DISABLE_SHIFT; 773 774 774 - if (MEDIA_VER_FULL(i915) >= IP_VER(12, 50)) { 775 + if (MEDIA_VER_FULL(i915) >= IP_VER(12, 55)) { 775 776 fuse1 = intel_uncore_read(gt->uncore, HSW_PAVP_FUSE1); 776 777 gt->info.sfc_mask = REG_FIELD_GET(XEHP_SFC_ENABLE_MASK, fuse1); 777 778 } else { ··· 838 839 } 839 840 } 840 841 841 - static void engine_mask_apply_copy_fuses(struct intel_gt *gt) 842 - { 843 - struct drm_i915_private *i915 = gt->i915; 844 - struct intel_gt_info *info = &gt->info; 845 - unsigned long meml3_mask; 846 - unsigned long quad; 847 - 848 - if (!(GRAPHICS_VER_FULL(i915) >= IP_VER(12, 60) && 849 - GRAPHICS_VER_FULL(i915) < IP_VER(12, 70))) 850 - return; 851 - 852 - meml3_mask = intel_uncore_read(gt->uncore, GEN10_MIRROR_FUSE3); 853 - meml3_mask = REG_FIELD_GET(GEN12_MEML3_EN_MASK, meml3_mask); 854 - 855 - /* 856 - * Link Copy engines may be fused off according to meml3_mask. Each 857 - * bit is a quad that houses 2 Link Copy and two Sub Copy engines. 858 - */ 859 - for_each_clear_bit(quad, &meml3_mask, GEN12_MAX_MSLICES) { 860 - unsigned int instance = quad * 2 + 1; 861 - intel_engine_mask_t mask = GENMASK(_BCS(instance + 1), 862 - _BCS(instance)); 863 - 864 - if (mask & info->engine_mask) { 865 - gt_dbg(gt, "bcs%u fused off\n", instance); 866 - gt_dbg(gt, "bcs%u fused off\n", instance + 1); 867 - 868 - info->engine_mask &= ~mask; 869 - } 870 - } 871 - } 872 - 873 842 /* 874 843 * Determine which engines are fused off in our particular hardware. 875 844 * Note that we have a catch-22 situation where we need to be able to access ··· 856 889 857 890 engine_mask_apply_media_fuses(gt); 858 891 engine_mask_apply_compute_fuses(gt); 859 - engine_mask_apply_copy_fuses(gt); 860 892 861 893 /* 862 894 * The only use of the GSC CS is to load and communicate with the GSC ··· 1159 1193 if (GRAPHICS_VER_FULL(i915) == IP_VER(12, 74) || 1160 1194 GRAPHICS_VER_FULL(i915) == IP_VER(12, 71) || 1161 1195 GRAPHICS_VER_FULL(i915) == IP_VER(12, 70) || 1162 - GRAPHICS_VER_FULL(i915) == IP_VER(12, 50) || 1163 1196 GRAPHICS_VER_FULL(i915) == IP_VER(12, 55)) { 1164 1197 regs = xehp_regs; 1165 1198 num = ARRAY_SIZE(xehp_regs);
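Editor's note: the reworked check above marks the lowest-numbered present render/compute engine as I915_ENGINE_FIRST_RENDER_COMPUTE by taking the first set bit of the combined RCS|CCS mask. Below is a standalone sketch of that lowest-set-bit selection with hypothetical masks; plain C, not the kernel's __ffs().

    #include <stdio.h>

    /* assumes mask != 0 */
    static unsigned int first_set_bit(unsigned long mask)
    {
            unsigned int bit = 0;

            while (!(mask & 1ul)) {
                    mask >>= 1;
                    bit++;
            }
            return bit;
    }

    int main(void)
    {
            unsigned long rcs_mask = 0x0;   /* hypothetical: render fused off */
            unsigned long ccs_mask = 0x2;   /* hypothetical: only CCS1 present */

            printf("first render/compute instance: %u\n",
                   first_set_bit(rcs_mask | ccs_mask));     /* prints 1 */
            return 0;
    }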
+5 -5
drivers/gpu/drm/i915/gt/intel_execlists_submission.c
··· 493 493 /* Use a fixed tag for OA and friends */ 494 494 GEM_BUG_ON(ce->tag <= BITS_PER_LONG); 495 495 ce->lrc.ccid = ce->tag; 496 - } else if (GRAPHICS_VER_FULL(engine->i915) >= IP_VER(12, 50)) { 496 + } else if (GRAPHICS_VER_FULL(engine->i915) >= IP_VER(12, 55)) { 497 497 /* We don't need a strict matching tag, just different values */ 498 498 unsigned int tag = ffs(READ_ONCE(engine->context_tag)); 499 499 ··· 613 613 intel_engine_add_retire(engine, ce->timeline); 614 614 615 615 ccid = ce->lrc.ccid; 616 - if (GRAPHICS_VER_FULL(engine->i915) >= IP_VER(12, 50)) { 616 + if (GRAPHICS_VER_FULL(engine->i915) >= IP_VER(12, 55)) { 617 617 ccid >>= XEHP_SW_CTX_ID_SHIFT - 32; 618 618 ccid &= XEHP_MAX_CONTEXT_HW_ID; 619 619 } else { ··· 1907 1907 ENGINE_TRACE(engine, "csb[%d]: status=0x%08x:0x%08x\n", 1908 1908 head, upper_32_bits(csb), lower_32_bits(csb)); 1909 1909 1910 - if (GRAPHICS_VER_FULL(engine->i915) >= IP_VER(12, 50)) 1910 + if (GRAPHICS_VER_FULL(engine->i915) >= IP_VER(12, 55)) 1911 1911 promote = xehp_csb_parse(csb); 1912 1912 else if (GRAPHICS_VER(engine->i915) >= 12) 1913 1913 promote = gen12_csb_parse(csb); ··· 3482 3482 } 3483 3483 } 3484 3484 3485 - if (GRAPHICS_VER_FULL(engine->i915) >= IP_VER(12, 50)) { 3485 + if (GRAPHICS_VER_FULL(engine->i915) >= IP_VER(12, 55)) { 3486 3486 if (intel_engine_has_preemption(engine)) 3487 3487 engine->emit_bb_start = xehp_emit_bb_start; 3488 3488 else ··· 3585 3585 3586 3586 engine->context_tag = GENMASK(BITS_PER_LONG - 2, 0); 3587 3587 if (GRAPHICS_VER(engine->i915) >= 11 && 3588 - GRAPHICS_VER_FULL(engine->i915) < IP_VER(12, 50)) { 3588 + GRAPHICS_VER_FULL(engine->i915) < IP_VER(12, 55)) { 3589 3589 execlists->ccid |= engine->instance << (GEN11_ENGINE_INSTANCE_SHIFT - 32); 3590 3590 execlists->ccid |= engine->class << (GEN11_ENGINE_CLASS_SHIFT - 32); 3591 3591 }
-15
drivers/gpu/drm/i915/gt/intel_gsc.c
··· 103 103 } 104 104 }; 105 105 106 - static const struct gsc_def gsc_def_xehpsdv[] = { 107 - { 108 - /* HECI1 not enabled on the device. */ 109 - }, 110 - { 111 - .name = "mei-gscfi", 112 - .bar = DG1_GSC_HECI2_BASE, 113 - .bar_size = GSC_BAR_LENGTH, 114 - .use_polling = true, 115 - .slow_firmware = true, 116 - } 117 - }; 118 - 119 106 static const struct gsc_def gsc_def_dg2[] = { 120 107 { 121 108 .name = "mei-gsc", ··· 175 188 176 189 if (IS_DG1(i915)) { 177 190 def = &gsc_def_dg1[intf_id]; 178 - } else if (IS_XEHPSDV(i915)) { 179 - def = &gsc_def_xehpsdv[intf_id]; 180 191 } else if (IS_DG2(i915)) { 181 192 def = &gsc_def_dg2[intf_id]; 182 193 } else {
+2 -2
drivers/gpu/drm/i915/gt/intel_gt.c
··· 278 278 intel_uncore_posting_read(uncore, 279 279 XELPMP_RING_FAULT_REG); 280 280 281 - } else if (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 50)) { 281 + } else if (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 55)) { 282 282 intel_gt_mcr_multicast_rmw(gt, XEHP_RING_FAULT_REG, 283 283 RING_FAULT_VALID, 0); 284 284 intel_gt_mcr_read_any(gt, XEHP_RING_FAULT_REG); ··· 403 403 struct drm_i915_private *i915 = gt->i915; 404 404 405 405 /* From GEN8 onwards we only have one 'All Engine Fault Register' */ 406 - if (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 50)) 406 + if (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 55)) 407 407 xehp_check_faults(gt); 408 408 else if (GRAPHICS_VER(i915) >= 8) 409 409 gen8_check_faults(gt);
+4 -48
drivers/gpu/drm/i915/gt/intel_gt_mcr.c
··· 57 57 * are of a "GAM" subclass that has special rules. Thus we use a separate 58 58 * GAM table farther down for those. 59 59 */ 60 - static const struct intel_mmio_range xehpsdv_mslice_steering_table[] = { 60 + static const struct intel_mmio_range dg2_mslice_steering_table[] = { 61 61 { 0x00DD00, 0x00DDFF }, 62 62 { 0x00E900, 0x00FFFF }, /* 0xEA00 - OxEFFF is unused */ 63 - {}, 64 - }; 65 - 66 - static const struct intel_mmio_range xehpsdv_gam_steering_table[] = { 67 - { 0x004000, 0x004AFF }, 68 - { 0x00C800, 0x00CFFF }, 69 - {}, 70 - }; 71 - 72 - static const struct intel_mmio_range xehpsdv_lncf_steering_table[] = { 73 - { 0x00B000, 0x00B0FF }, 74 - { 0x00D800, 0x00D8FF }, 75 63 {}, 76 64 }; 77 65 78 66 static const struct intel_mmio_range dg2_lncf_steering_table[] = { 79 67 { 0x00B000, 0x00B0FF }, 80 68 { 0x00D880, 0x00D8FF }, 81 - {}, 82 - }; 83 - 84 - /* 85 - * We have several types of MCR registers on PVC where steering to (0,0) 86 - * will always provide us with a non-terminated value. We'll stick them 87 - * all in the same table for simplicity. 88 - */ 89 - static const struct intel_mmio_range pvc_instance0_steering_table[] = { 90 - { 0x004000, 0x004AFF }, /* HALF-BSLICE */ 91 - { 0x008800, 0x00887F }, /* CC */ 92 - { 0x008A80, 0x008AFF }, /* TILEPSMI */ 93 - { 0x00B000, 0x00B0FF }, /* HALF-BSLICE */ 94 - { 0x00B100, 0x00B3FF }, /* L3BANK */ 95 - { 0x00C800, 0x00CFFF }, /* HALF-BSLICE */ 96 - { 0x00D800, 0x00D8FF }, /* HALF-BSLICE */ 97 - { 0x00DD00, 0x00DDFF }, /* BSLICE */ 98 - { 0x00E900, 0x00E9FF }, /* HALF-BSLICE */ 99 - { 0x00EC00, 0x00EEFF }, /* HALF-BSLICE */ 100 - { 0x00F000, 0x00FFFF }, /* HALF-BSLICE */ 101 - { 0x024180, 0x0241FF }, /* HALF-BSLICE */ 102 69 {}, 103 70 }; 104 71 ··· 152 185 gt->steering_table[INSTANCE0] = xelpg_instance0_steering_table; 153 186 gt->steering_table[L3BANK] = xelpg_l3bank_steering_table; 154 187 gt->steering_table[DSS] = xelpg_dss_steering_table; 155 - } else if (IS_PONTEVECCHIO(i915)) { 156 - gt->steering_table[INSTANCE0] = pvc_instance0_steering_table; 157 188 } else if (IS_DG2(i915)) { 158 - gt->steering_table[MSLICE] = xehpsdv_mslice_steering_table; 189 + gt->steering_table[MSLICE] = dg2_mslice_steering_table; 159 190 gt->steering_table[LNCF] = dg2_lncf_steering_table; 160 191 /* 161 192 * No need to hook up the GAM table since it has a dedicated 162 193 * steering control register on DG2 and can use implicit 163 194 * steering. 
164 195 */ 165 - } else if (IS_XEHPSDV(i915)) { 166 - gt->steering_table[MSLICE] = xehpsdv_mslice_steering_table; 167 - gt->steering_table[LNCF] = xehpsdv_lncf_steering_table; 168 - gt->steering_table[GAM] = xehpsdv_gam_steering_table; 169 196 } else if (GRAPHICS_VER(i915) >= 11 && 170 - GRAPHICS_VER_FULL(i915) < IP_VER(12, 50)) { 197 + GRAPHICS_VER_FULL(i915) < IP_VER(12, 55)) { 171 198 gt->steering_table[L3BANK] = icl_l3bank_steering_table; 172 199 gt->info.l3bank_mask = 173 200 ~intel_uncore_read(gt->uncore, GEN10_MIRROR_FUSE3) & ··· 782 821 for (int i = 0; i < NUM_STEERING_TYPES; i++) 783 822 if (gt->steering_table[i]) 784 823 report_steering_type(p, gt, i, dump_table); 785 - } else if (IS_PONTEVECCHIO(gt->i915)) { 786 - report_steering_type(p, gt, INSTANCE0, dump_table); 787 824 } else if (HAS_MSLICE_STEERING(gt->i915)) { 788 825 report_steering_type(p, gt, MSLICE, dump_table); 789 826 report_steering_type(p, gt, LNCF, dump_table); ··· 801 842 void intel_gt_mcr_get_ss_steering(struct intel_gt *gt, unsigned int dss, 802 843 unsigned int *group, unsigned int *instance) 803 844 { 804 - if (IS_PONTEVECCHIO(gt->i915)) { 805 - *group = dss / GEN_DSS_PER_CSLICE; 806 - *instance = dss % GEN_DSS_PER_CSLICE; 807 - } else if (GRAPHICS_VER_FULL(gt->i915) >= IP_VER(12, 50)) { 845 + if (GRAPHICS_VER_FULL(gt->i915) >= IP_VER(12, 55)) { 808 846 *group = dss / GEN_DSS_PER_GSLICE; 809 847 *instance = dss % GEN_DSS_PER_GSLICE; 810 848 } else {
+1 -1
drivers/gpu/drm/i915/gt/intel_gt_mcr.h
··· 54 54 * the topology, so we lookup the DSS ID directly in "slice 0." 55 55 */ 56 56 #define _HAS_SS(ss_, gt_, group_, instance_) ( \ 57 - GRAPHICS_VER_FULL(gt_->i915) >= IP_VER(12, 50) ? \ 57 + GRAPHICS_VER_FULL(gt_->i915) >= IP_VER(12, 55) ? \ 58 58 intel_sseu_has_subslice(&(gt_)->info.sseu, 0, ss_) : \ 59 59 intel_sseu_has_subslice(&(gt_)->info.sseu, group_, instance_)) 60 60
-4
drivers/gpu/drm/i915/gt/intel_gt_pm_debugfs.c
··· 392 392 drm_puts(p, "no P-state info available\n"); 393 393 } 394 394 395 - drm_printf(p, "Current CD clock frequency: %d kHz\n", i915->display.cdclk.hw.cdclk); 396 - drm_printf(p, "Max CD clock frequency: %d kHz\n", i915->display.cdclk.max_cdclk_freq); 397 - drm_printf(p, "Max pixel clock frequency: %d kHz\n", i915->max_dotclk_freq); 398 - 399 395 intel_runtime_pm_put(uncore->rpm, wakeref); 400 396 } 401 397
-59
drivers/gpu/drm/i915/gt/intel_gt_regs.h
··· 718 718 719 719 #define UNSLICE_UNIT_LEVEL_CLKGATE _MMIO(0x9434) 720 720 #define VFUNIT_CLKGATE_DIS REG_BIT(20) 721 - #define TSGUNIT_CLKGATE_DIS REG_BIT(17) /* XEHPSDV */ 722 721 #define CG3DDISCFEG_CLKGATE_DIS REG_BIT(17) /* DG2 */ 723 722 #define GAMEDIA_CLKGATE_DIS REG_BIT(11) 724 723 #define HSUNIT_CLKGATE_DIS REG_BIT(8) 725 724 #define VSUNIT_CLKGATE_DIS REG_BIT(3) 726 - 727 - #define UNSLCGCTL9440 _MMIO(0x9440) 728 - #define GAMTLBOACS_CLKGATE_DIS REG_BIT(28) 729 - #define GAMTLBVDBOX5_CLKGATE_DIS REG_BIT(27) 730 - #define GAMTLBVDBOX6_CLKGATE_DIS REG_BIT(26) 731 - #define GAMTLBVDBOX3_CLKGATE_DIS REG_BIT(24) 732 - #define GAMTLBVDBOX4_CLKGATE_DIS REG_BIT(23) 733 - #define GAMTLBVDBOX7_CLKGATE_DIS REG_BIT(22) 734 - #define GAMTLBVDBOX2_CLKGATE_DIS REG_BIT(21) 735 - #define GAMTLBVDBOX0_CLKGATE_DIS REG_BIT(17) 736 - #define GAMTLBKCR_CLKGATE_DIS REG_BIT(16) 737 - #define GAMTLBGUC_CLKGATE_DIS REG_BIT(15) 738 - #define GAMTLBBLT_CLKGATE_DIS REG_BIT(14) 739 - #define GAMTLBVDBOX1_CLKGATE_DIS REG_BIT(6) 740 - 741 - #define UNSLCGCTL9444 _MMIO(0x9444) 742 - #define GAMTLBGFXA0_CLKGATE_DIS REG_BIT(30) 743 - #define GAMTLBGFXA1_CLKGATE_DIS REG_BIT(29) 744 - #define GAMTLBCOMPA0_CLKGATE_DIS REG_BIT(28) 745 - #define GAMTLBCOMPA1_CLKGATE_DIS REG_BIT(27) 746 - #define GAMTLBCOMPB0_CLKGATE_DIS REG_BIT(26) 747 - #define GAMTLBCOMPB1_CLKGATE_DIS REG_BIT(25) 748 - #define GAMTLBCOMPC0_CLKGATE_DIS REG_BIT(24) 749 - #define GAMTLBCOMPC1_CLKGATE_DIS REG_BIT(23) 750 - #define GAMTLBCOMPD0_CLKGATE_DIS REG_BIT(22) 751 - #define GAMTLBCOMPD1_CLKGATE_DIS REG_BIT(21) 752 - #define GAMTLBMERT_CLKGATE_DIS REG_BIT(20) 753 - #define GAMTLBVEBOX3_CLKGATE_DIS REG_BIT(19) 754 - #define GAMTLBVEBOX2_CLKGATE_DIS REG_BIT(18) 755 - #define GAMTLBVEBOX1_CLKGATE_DIS REG_BIT(17) 756 - #define GAMTLBVEBOX0_CLKGATE_DIS REG_BIT(16) 757 - #define LTCDD_CLKGATE_DIS REG_BIT(10) 758 725 759 726 #define GEN11_SLICE_UNIT_LEVEL_CLKGATE _MMIO(0x94d4) 760 727 #define XEHP_SLICE_UNIT_LEVEL_CLKGATE MCR_REG(0x94d4) ··· 731 764 #define NODEDSS_CLKGATE_DIS REG_BIT(12) 732 765 #define L3_CLKGATE_DIS REG_BIT(16) 733 766 #define L3_CR2X_CLKGATE_DIS REG_BIT(17) 734 - 735 - #define SCCGCTL94DC MCR_REG(0x94dc) 736 - #define CG3DDISURB REG_BIT(14) 737 767 738 768 #define UNSLICE_UNIT_LEVEL_CLKGATE2 _MMIO(0x94e4) 739 769 #define VSUNIT_CLKGATE_DIS_TGL REG_BIT(19) ··· 953 989 #define GEN7_WA_FOR_GEN7_L3_CONTROL 0x3C47FF8C 954 990 #define GEN7_L3AGDIS (1 << 19) 955 991 956 - #define XEHPC_LNCFMISCCFGREG0 MCR_REG(0xb01c) 957 - #define XEHPC_HOSTCACHEEN REG_BIT(1) 958 - #define XEHPC_OVRLSCCC REG_BIT(0) 959 - 960 992 #define GEN7_L3CNTLREG2 _MMIO(0xb020) 961 993 962 994 /* MOCS (Memory Object Control State) registers */ ··· 1006 1046 #define XEHP_L3SQCREG5 MCR_REG(0xb158) 1007 1047 #define L3_PWM_TIMER_INIT_VAL_MASK REG_GENMASK(9, 0) 1008 1048 1009 - #define MLTICTXCTL MCR_REG(0xb170) 1010 - #define TDONRENDER REG_BIT(2) 1011 - 1012 1049 #define XEHP_L3SCQREG7 MCR_REG(0xb188) 1013 1050 #define BLEND_FILL_CACHING_OPT_DIS REG_BIT(3) 1014 - 1015 - #define XEHPC_L3SCRUB MCR_REG(0xb18c) 1016 - #define SCRUB_CL_DWNGRADE_SHARED REG_BIT(12) 1017 - #define SCRUB_RATE_PER_BANK_MASK REG_GENMASK(2, 0) 1018 - #define SCRUB_RATE_4B_PER_CLK REG_FIELD_PREP(SCRUB_RATE_PER_BANK_MASK, 0x6) 1019 - 1020 - #define L3SQCREG1_CCS0 MCR_REG(0xb200) 1021 - #define FLUSHALLNONCOH REG_BIT(5) 1022 1051 1023 1052 #define GEN11_GLBLINVL _MMIO(0xb404) 1024 1053 #define GEN11_BANK_HASH_ADDR_EXCL_MASK (0x7f << 5) ··· 1058 1109 #define XEHP_COMPCTX_TLB_INV_CR MCR_REG(0xcf04) 
1059 1110 #define XELPMP_GSC_TLB_INV_CR _MMIO(0xcf04) /* media GT only */ 1060 1111 1061 - #define XEHP_MERT_MOD_CTRL MCR_REG(0xcf28) 1062 1112 #define RENDER_MOD_CTRL MCR_REG(0xcf2c) 1063 1113 #define COMP_MOD_CTRL MCR_REG(0xcf30) 1064 1114 #define XELPMP_GSC_MOD_CTRL _MMIO(0xcf30) /* media GT only */ ··· 1133 1185 #define EU_PERF_CNTL4 PERF_REG(0xe45c) 1134 1186 1135 1187 #define GEN9_ROW_CHICKEN4 MCR_REG(0xe48c) 1136 - #define GEN12_DISABLE_GRF_CLEAR REG_BIT(13) 1137 1188 #define XEHP_DIS_BBL_SYSPIPE REG_BIT(11) 1138 1189 #define GEN12_DISABLE_TDL_PUSH REG_BIT(9) 1139 1190 #define GEN11_DIS_PICK_2ND_EU REG_BIT(7) ··· 1149 1202 #define FLOW_CONTROL_ENABLE REG_BIT(15) 1150 1203 #define UGM_BACKUP_MODE REG_BIT(13) 1151 1204 #define MDQ_ARBITRATION_MODE REG_BIT(12) 1152 - #define SYSTOLIC_DOP_CLOCK_GATING_DIS REG_BIT(10) 1153 1205 #define PARTIAL_INSTRUCTION_SHOOTDOWN_DISABLE REG_BIT(8) 1154 1206 #define STALL_DOP_GATING_DISABLE REG_BIT(5) 1155 1207 #define THROTTLE_12_5 REG_GENMASK(4, 2) ··· 1624 1678 #define XEHPC_BCS7_BCS8_INTR_MASK _MMIO(0x19011c) 1625 1679 1626 1680 #define GEN12_SFC_DONE(n) _MMIO(0x1cc000 + (n) * 0x1000) 1627 - 1628 - #define GT0_PACKAGE_ENERGY_STATUS _MMIO(0x250004) 1629 - #define GT0_PACKAGE_RAPL_LIMIT _MMIO(0x250008) 1630 - #define GT0_PACKAGE_POWER_SKU_UNIT _MMIO(0x250068) 1631 - #define GT0_PLATFORM_ENERGY_STATUS _MMIO(0x25006c) 1632 1681 1633 1682 /* 1634 1683 * Standalone Media's non-engine GT registers are located at their regular GT
+6 -15
drivers/gpu/drm/i915/gt/intel_gt_sysfs_pm.c
··· 573 573 char *buff) 574 574 { 575 575 struct intel_gt *gt = intel_gt_sysfs_get_drvdata(kobj, attr->attr.name); 576 - struct intel_guc_slpc *slpc = &gt->uc.guc.slpc; 577 576 intel_wakeref_t wakeref; 578 577 u32 mode; 579 578 ··· 580 581 * Retrieve media_ratio_mode from GEN6_RPNSWREQ bit 13 set by 581 582 * GuC. GEN6_RPNSWREQ:13 value 0 represents 1:2 and 1 represents 1:1 582 583 */ 583 - if (IS_XEHPSDV(gt->i915) && 584 - slpc->media_ratio_mode == SLPC_MEDIA_RATIO_MODE_DYNAMIC_CONTROL) { 585 - /* 586 - * For XEHPSDV dynamic mode GEN6_RPNSWREQ:13 does not contain 587 - * the media_ratio_mode, just return the cached media ratio 588 - */ 589 - mode = slpc->media_ratio_mode; 590 - } else { 591 - with_intel_runtime_pm(gt->uncore->rpm, wakeref) 592 - mode = intel_uncore_read(gt->uncore, GEN6_RPNSWREQ); 593 - mode = REG_FIELD_GET(GEN12_MEDIA_FREQ_RATIO, mode) ? 594 - SLPC_MEDIA_RATIO_MODE_FIXED_ONE_TO_ONE : 595 - SLPC_MEDIA_RATIO_MODE_FIXED_ONE_TO_TWO; 596 - } 584 + with_intel_runtime_pm(gt->uncore->rpm, wakeref) 585 + mode = intel_uncore_read(gt->uncore, GEN6_RPNSWREQ); 586 + 587 + mode = REG_FIELD_GET(GEN12_MEDIA_FREQ_RATIO, mode) ? 588 + SLPC_MEDIA_RATIO_MODE_FIXED_ONE_TO_ONE : 589 + SLPC_MEDIA_RATIO_MODE_FIXED_ONE_TO_TWO; 597 590 598 591 return sysfs_emit(buff, "%u\n", media_ratio_mode_to_factor(mode)); 599 592 }
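Editor's note: with the XeHP SDV special case gone, the sysfs readback above reduces to sampling bit 13 of GEN6_RPNSWREQ; per the in-code comment, 0 means a 1:2 media ratio and 1 means 1:1. A standalone illustration with a plain-C stand-in for the register field macros.

    #include <stdint.h>
    #include <stdio.h>

    #define MEDIA_FREQ_RATIO_BIT    (1u << 13)      /* stand-in for GEN12_MEDIA_FREQ_RATIO */

    static const char *media_ratio(uint32_t rpnswreq)
    {
            return (rpnswreq & MEDIA_FREQ_RATIO_BIT) ? "1:1" : "1:2";
    }

    int main(void)
    {
            printf("%s\n", media_ratio(0x00002000));        /* 1:1 */
            printf("%s\n", media_ratio(0x00000000));        /* 1:2 */
            return 0;
    }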
+1 -1
drivers/gpu/drm/i915/gt/intel_gtt.c
··· 680 680 681 681 if (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 70)) 682 682 xelpg_setup_private_ppat(gt); 683 - else if (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 50)) 683 + else if (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 55)) 684 684 xehp_setup_private_ppat(gt); 685 685 else if (GRAPHICS_VER(i915) >= 12) 686 686 tgl_setup_private_ppat(uncore);
+4 -47
drivers/gpu/drm/i915/gt/intel_lrc.c
··· 546 546 END 547 547 }; 548 548 549 - static const u8 xehp_rcs_offsets[] = { 550 - NOP(1), 551 - LRI(13, POSTED), 552 - REG16(0x244), 553 - REG(0x034), 554 - REG(0x030), 555 - REG(0x038), 556 - REG(0x03c), 557 - REG(0x168), 558 - REG(0x140), 559 - REG(0x110), 560 - REG(0x1c0), 561 - REG(0x1c4), 562 - REG(0x1c8), 563 - REG(0x180), 564 - REG16(0x2b4), 565 - 566 - NOP(5), 567 - LRI(9, POSTED), 568 - REG16(0x3a8), 569 - REG16(0x28c), 570 - REG16(0x288), 571 - REG16(0x284), 572 - REG16(0x280), 573 - REG16(0x27c), 574 - REG16(0x278), 575 - REG16(0x274), 576 - REG16(0x270), 577 - 578 - LRI(3, POSTED), 579 - REG(0x1b0), 580 - REG16(0x5a8), 581 - REG16(0x5ac), 582 - 583 - NOP(6), 584 - LRI(1, 0), 585 - REG(0x0c8), 586 - 587 - END 588 - }; 589 - 590 549 static const u8 dg2_rcs_offsets[] = { 591 550 NOP(1), 592 551 LRI(15, POSTED), ··· 654 695 return mtl_rcs_offsets; 655 696 else if (GRAPHICS_VER_FULL(engine->i915) >= IP_VER(12, 55)) 656 697 return dg2_rcs_offsets; 657 - else if (GRAPHICS_VER_FULL(engine->i915) >= IP_VER(12, 50)) 658 - return xehp_rcs_offsets; 659 698 else if (GRAPHICS_VER(engine->i915) >= 12) 660 699 return gen12_rcs_offsets; 661 700 else if (GRAPHICS_VER(engine->i915) >= 11) ··· 676 719 677 720 static int lrc_ring_mi_mode(const struct intel_engine_cs *engine) 678 721 { 679 - if (GRAPHICS_VER_FULL(engine->i915) >= IP_VER(12, 50)) 722 + if (GRAPHICS_VER_FULL(engine->i915) >= IP_VER(12, 55)) 680 723 return 0x70; 681 724 else if (GRAPHICS_VER(engine->i915) >= 12) 682 725 return 0x60; ··· 690 733 691 734 static int lrc_ring_bb_offset(const struct intel_engine_cs *engine) 692 735 { 693 - if (GRAPHICS_VER_FULL(engine->i915) >= IP_VER(12, 50)) 736 + if (GRAPHICS_VER_FULL(engine->i915) >= IP_VER(12, 55)) 694 737 return 0x80; 695 738 else if (GRAPHICS_VER(engine->i915) >= 12) 696 739 return 0x70; ··· 705 748 706 749 static int lrc_ring_gpr0(const struct intel_engine_cs *engine) 707 750 { 708 - if (GRAPHICS_VER_FULL(engine->i915) >= IP_VER(12, 50)) 751 + if (GRAPHICS_VER_FULL(engine->i915) >= IP_VER(12, 55)) 709 752 return 0x84; 710 753 else if (GRAPHICS_VER(engine->i915) >= 12) 711 754 return 0x74; ··· 752 795 static int lrc_ring_cmd_buf_cctl(const struct intel_engine_cs *engine) 753 796 { 754 797 755 - if (GRAPHICS_VER_FULL(engine->i915) >= IP_VER(12, 50)) 798 + if (GRAPHICS_VER_FULL(engine->i915) >= IP_VER(12, 55)) 756 799 /* 757 800 * Note that the CSFE context has a dummy slot for CMD_BUF_CCTL 758 801 * simply to match the RCS context image layout.
+11 -11
drivers/gpu/drm/i915/gt/intel_migrate.c
··· 35 35 return true; 36 36 } 37 37 38 - static void xehpsdv_toggle_pdes(struct i915_address_space *vm, 39 - struct i915_page_table *pt, 40 - void *data) 38 + static void xehp_toggle_pdes(struct i915_address_space *vm, 39 + struct i915_page_table *pt, 40 + void *data) 41 41 { 42 42 struct insert_pte_data *d = data; 43 43 ··· 52 52 d->offset += SZ_2M; 53 53 } 54 54 55 - static void xehpsdv_insert_pte(struct i915_address_space *vm, 56 - struct i915_page_table *pt, 57 - void *data) 55 + static void xehp_insert_pte(struct i915_address_space *vm, 56 + struct i915_page_table *pt, 57 + void *data) 58 58 { 59 59 struct insert_pte_data *d = data; 60 60 ··· 120 120 * 512 entry layout using 4K GTT pages. The other two windows just map 121 121 * lmem pages and must use the new compact 32 entry layout using 64K GTT 122 122 * pages, which ensures we can address any lmem object that the user 123 - * throws at us. We then also use the xehpsdv_toggle_pdes as a way of 123 + * throws at us. We then also use the xehp_toggle_pdes as a way of 124 124 * just toggling the PDE bit(GEN12_PDE_64K) for us, to enable the 125 125 * compact layout for each of these page-tables, that fall within the 126 126 * [CHUNK_SIZE, 3 * CHUNK_SIZE) range. ··· 209 209 /* Now allow the GPU to rewrite the PTE via its own ppGTT */ 210 210 if (HAS_64K_PAGES(gt->i915)) { 211 211 vm->vm.foreach(&vm->vm, base, d.offset - base, 212 - xehpsdv_insert_pte, &d); 212 + xehp_insert_pte, &d); 213 213 d.offset = base + CHUNK_SZ; 214 214 vm->vm.foreach(&vm->vm, 215 215 d.offset, 216 216 2 * CHUNK_SZ, 217 - xehpsdv_toggle_pdes, &d); 217 + xehp_toggle_pdes, &d); 218 218 } else { 219 219 vm->vm.foreach(&vm->vm, base, d.offset - base, 220 220 insert_pte, &d); ··· 925 925 926 926 GEM_BUG_ON(size >> PAGE_SHIFT > S16_MAX); 927 927 928 - if (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 50)) 928 + if (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 55)) 929 929 ring_sz = XY_FAST_COLOR_BLT_DW; 930 930 else if (ver >= 8) 931 931 ring_sz = 8; ··· 936 936 if (IS_ERR(cs)) 937 937 return PTR_ERR(cs); 938 938 939 - if (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 50)) { 939 + if (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 55)) { 940 940 *cs++ = XY_FAST_COLOR_BLT_CMD | XY_FAST_COLOR_BLT_DEPTH_32 | 941 941 (XY_FAST_COLOR_BLT_DW - 2); 942 942 *cs++ = FIELD_PREP(XY_FAST_COLOR_BLT_MOCS_MASK, mocs) |
+1 -51
drivers/gpu/drm/i915/gt/intel_mocs.c
··· 53 53 54 54 /* Helper defines */ 55 55 #define GEN9_NUM_MOCS_ENTRIES 64 /* 63-64 are reserved, but configured. */ 56 - #define PVC_NUM_MOCS_ENTRIES 3 57 56 #define MTL_NUM_MOCS_ENTRIES 16 58 57 59 58 /* (e)LLC caching options */ ··· 366 367 L3_3_WB), 367 368 }; 368 369 369 - static const struct drm_i915_mocs_entry xehpsdv_mocs_table[] = { 370 - /* wa_1608975824 */ 371 - MOCS_ENTRY(0, 0, L3_3_WB | L3_LKUP(1)), 372 - 373 - /* UC - Coherent; GO:L3 */ 374 - MOCS_ENTRY(1, 0, L3_1_UC | L3_LKUP(1)), 375 - /* UC - Coherent; GO:Memory */ 376 - MOCS_ENTRY(2, 0, L3_1_UC | L3_GLBGO(1) | L3_LKUP(1)), 377 - /* UC - Non-Coherent; GO:Memory */ 378 - MOCS_ENTRY(3, 0, L3_1_UC | L3_GLBGO(1)), 379 - /* UC - Non-Coherent; GO:L3 */ 380 - MOCS_ENTRY(4, 0, L3_1_UC), 381 - 382 - /* WB */ 383 - MOCS_ENTRY(5, 0, L3_3_WB | L3_LKUP(1)), 384 - 385 - /* HW Reserved - SW program but never use. */ 386 - MOCS_ENTRY(48, 0, L3_3_WB | L3_LKUP(1)), 387 - MOCS_ENTRY(49, 0, L3_1_UC | L3_LKUP(1)), 388 - MOCS_ENTRY(60, 0, L3_1_UC), 389 - MOCS_ENTRY(61, 0, L3_1_UC), 390 - MOCS_ENTRY(62, 0, L3_1_UC), 391 - MOCS_ENTRY(63, 0, L3_1_UC), 392 - }; 393 - 394 370 static const struct drm_i915_mocs_entry dg2_mocs_table[] = { 395 371 /* UC - Coherent; GO:L3 */ 396 372 MOCS_ENTRY(0, 0, L3_1_UC | L3_LKUP(1)), ··· 376 402 377 403 /* WB - LC */ 378 404 MOCS_ENTRY(3, 0, L3_3_WB | L3_LKUP(1)), 379 - }; 380 - 381 - static const struct drm_i915_mocs_entry pvc_mocs_table[] = { 382 - /* Error */ 383 - MOCS_ENTRY(0, 0, L3_3_WB), 384 - 385 - /* UC */ 386 - MOCS_ENTRY(1, 0, L3_1_UC), 387 - 388 - /* WB */ 389 - MOCS_ENTRY(2, 0, L3_3_WB), 390 405 }; 391 406 392 407 static const struct drm_i915_mocs_entry mtl_mocs_table[] = { ··· 464 501 table->n_entries = MTL_NUM_MOCS_ENTRIES; 465 502 table->uc_index = 9; 466 503 table->unused_entries_index = 1; 467 - } else if (IS_PONTEVECCHIO(i915)) { 468 - table->size = ARRAY_SIZE(pvc_mocs_table); 469 - table->table = pvc_mocs_table; 470 - table->n_entries = PVC_NUM_MOCS_ENTRIES; 471 - table->uc_index = 1; 472 - table->wb_index = 2; 473 - table->unused_entries_index = 2; 474 504 } else if (IS_DG2(i915)) { 475 505 table->size = ARRAY_SIZE(dg2_mocs_table); 476 506 table->table = dg2_mocs_table; 477 507 table->uc_index = 1; 478 508 table->n_entries = GEN9_NUM_MOCS_ENTRIES; 479 509 table->unused_entries_index = 3; 480 - } else if (IS_XEHPSDV(i915)) { 481 - table->size = ARRAY_SIZE(xehpsdv_mocs_table); 482 - table->table = xehpsdv_mocs_table; 483 - table->uc_index = 2; 484 - table->n_entries = GEN9_NUM_MOCS_ENTRIES; 485 - table->unused_entries_index = 5; 486 510 } else if (IS_DG1(i915)) { 487 511 table->size = ARRAY_SIZE(dg1_mocs_table); 488 512 table->table = dg1_mocs_table; ··· 620 670 621 671 intel_gt_mcr_lock(gt, &flags); 622 672 for_each_l3cc(l3cc, table, i) 623 - if (GRAPHICS_VER_FULL(gt->i915) >= IP_VER(12, 50)) 673 + if (GRAPHICS_VER_FULL(gt->i915) >= IP_VER(12, 55)) 624 674 intel_gt_mcr_multicast_write_fw(gt, XEHP_LNCFCMOCS(i), l3cc); 625 675 else 626 676 intel_uncore_write_fw(gt->uncore, GEN9_LNCFCMOCS(i), l3cc);
+1 -5
drivers/gpu/drm/i915/gt/intel_rps.c
··· 1086 1086 struct drm_i915_private *i915 = rps_to_i915(rps); 1087 1087 struct intel_uncore *uncore = rps_to_uncore(rps); 1088 1088 1089 - if (IS_PONTEVECCHIO(i915)) 1090 - return intel_uncore_read(uncore, PVC_RP_STATE_CAP); 1091 - else if (IS_XEHPSDV(i915)) 1092 - return intel_uncore_read(uncore, XEHPSDV_RP_STATE_CAP); 1093 - else if (IS_GEN9_LP(i915)) 1089 + if (IS_GEN9_LP(i915)) 1094 1090 return intel_uncore_read(uncore, BXT_RP_STATE_CAP); 1095 1091 else 1096 1092 return intel_uncore_read(uncore, GEN6_RP_STATE_CAP);
+4 -9
drivers/gpu/drm/i915/gt/intel_sseu.c
··· 214 214 int num_compute_regs, num_geometry_regs; 215 215 int eu; 216 216 217 - if (IS_PONTEVECCHIO(gt->i915)) { 218 - num_geometry_regs = 0; 219 - num_compute_regs = 2; 220 - } else { 221 - num_geometry_regs = 1; 222 - num_compute_regs = 1; 223 - } 217 + num_geometry_regs = 1; 218 + num_compute_regs = 1; 224 219 225 220 /* 226 221 * The concept of slice has been removed in Xe_HP. To be compatible ··· 637 642 { 638 643 struct drm_i915_private *i915 = gt->i915; 639 644 640 - if (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 50)) 645 + if (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 55)) 641 646 xehp_sseu_info_init(gt); 642 647 else if (GRAPHICS_VER(i915) >= 12) 643 648 gen12_sseu_info_init(gt); ··· 846 851 { 847 852 if (sseu->max_slices == 0) 848 853 drm_printf(p, "Unavailable\n"); 849 - else if (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 50)) 854 + else if (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 55)) 850 855 sseu_print_xehp_topology(sseu, p); 851 856 else 852 857 sseu_print_hsw_topology(sseu, p);
+4 -179
drivers/gpu/drm/i915/gt/intel_workarounds.c
··· 258 258 } 259 259 260 260 static void 261 - wa_mcr_write(struct i915_wa_list *wal, i915_mcr_reg_t reg, u32 set) 262 - { 263 - wa_mcr_write_clr_set(wal, reg, ~0, set); 264 - } 265 - 266 - static void 267 261 wa_write_or(struct i915_wa_list *wal, i915_reg_t reg, u32 set) 268 262 { 269 263 wa_write_clr_set(wal, reg, set, set); ··· 912 918 913 919 if (IS_GFX_GT_IP_RANGE(engine->gt, IP_VER(12, 70), IP_VER(12, 74))) 914 920 xelpg_ctx_workarounds_init(engine, wal); 915 - else if (IS_PONTEVECCHIO(i915)) 916 - ; /* noop; none at this time */ 917 921 else if (IS_DG2(i915)) 918 922 dg2_ctx_workarounds_init(engine, wal); 919 - else if (IS_XEHPSDV(i915)) 920 - ; /* noop; none at this time */ 921 923 else if (IS_DG1(i915)) 922 924 dg1_ctx_workarounds_init(engine, wal); 923 925 else if (GRAPHICS_VER(i915) == 12) ··· 1340 1350 gt->steering_table[MSLICE] = NULL; 1341 1351 } 1342 1352 1343 - if (IS_XEHPSDV(gt->i915) && slice_mask & BIT(0)) 1344 - gt->steering_table[GAM] = NULL; 1345 - 1346 1353 slice = __ffs(slice_mask); 1347 1354 subslice = intel_sseu_find_first_xehp_dss(sseu, GEN_DSS_PER_GSLICE, slice) % 1348 1355 GEN_DSS_PER_GSLICE; ··· 1364 1377 */ 1365 1378 if (IS_DG2(gt->i915)) 1366 1379 __set_mcr_steering(wal, GAM_MCR_SELECTOR, 1, 0); 1367 - } 1368 - 1369 - static void 1370 - pvc_init_mcr(struct intel_gt *gt, struct i915_wa_list *wal) 1371 - { 1372 - unsigned int dss; 1373 - 1374 - /* 1375 - * Setup implicit steering for COMPUTE and DSS ranges to the first 1376 - * non-fused-off DSS. All other types of MCR registers will be 1377 - * explicitly steered. 1378 - */ 1379 - dss = intel_sseu_find_first_xehp_dss(&gt->info.sseu, 0, 0); 1380 - __add_mcr_wa(gt, wal, dss / GEN_DSS_PER_CSLICE, dss % GEN_DSS_PER_CSLICE); 1381 1380 } 1382 1381 1383 1382 static void ··· 1493 1520 } 1494 1521 1495 1522 static void 1496 - xehpsdv_gt_workarounds_init(struct intel_gt *gt, struct i915_wa_list *wal) 1497 - { 1498 - struct drm_i915_private *i915 = gt->i915; 1499 - 1500 - xehp_init_mcr(gt, wal); 1501 - 1502 - /* Wa_1409757795:xehpsdv */ 1503 - wa_mcr_write_or(wal, SCCGCTL94DC, CG3DDISURB); 1504 - 1505 - /* Wa_18011725039:xehpsdv */ 1506 - if (IS_XEHPSDV_GRAPHICS_STEP(i915, STEP_A1, STEP_B0)) { 1507 - wa_mcr_masked_dis(wal, MLTICTXCTL, TDONRENDER); 1508 - wa_mcr_write_or(wal, L3SQCREG1_CCS0, FLUSHALLNONCOH); 1509 - } 1510 - 1511 - /* Wa_16011155590:xehpsdv */ 1512 - if (IS_XEHPSDV_GRAPHICS_STEP(i915, STEP_A0, STEP_B0)) 1513 - wa_write_or(wal, UNSLICE_UNIT_LEVEL_CLKGATE, 1514 - TSGUNIT_CLKGATE_DIS); 1515 - 1516 - /* Wa_14011780169:xehpsdv */ 1517 - if (IS_XEHPSDV_GRAPHICS_STEP(i915, STEP_B0, STEP_FOREVER)) { 1518 - wa_write_or(wal, UNSLCGCTL9440, GAMTLBOACS_CLKGATE_DIS | 1519 - GAMTLBVDBOX7_CLKGATE_DIS | 1520 - GAMTLBVDBOX6_CLKGATE_DIS | 1521 - GAMTLBVDBOX5_CLKGATE_DIS | 1522 - GAMTLBVDBOX4_CLKGATE_DIS | 1523 - GAMTLBVDBOX3_CLKGATE_DIS | 1524 - GAMTLBVDBOX2_CLKGATE_DIS | 1525 - GAMTLBVDBOX1_CLKGATE_DIS | 1526 - GAMTLBVDBOX0_CLKGATE_DIS | 1527 - GAMTLBKCR_CLKGATE_DIS | 1528 - GAMTLBGUC_CLKGATE_DIS | 1529 - GAMTLBBLT_CLKGATE_DIS); 1530 - wa_write_or(wal, UNSLCGCTL9444, GAMTLBGFXA0_CLKGATE_DIS | 1531 - GAMTLBGFXA1_CLKGATE_DIS | 1532 - GAMTLBCOMPA0_CLKGATE_DIS | 1533 - GAMTLBCOMPA1_CLKGATE_DIS | 1534 - GAMTLBCOMPB0_CLKGATE_DIS | 1535 - GAMTLBCOMPB1_CLKGATE_DIS | 1536 - GAMTLBCOMPC0_CLKGATE_DIS | 1537 - GAMTLBCOMPC1_CLKGATE_DIS | 1538 - GAMTLBCOMPD0_CLKGATE_DIS | 1539 - GAMTLBCOMPD1_CLKGATE_DIS | 1540 - GAMTLBMERT_CLKGATE_DIS | 1541 - GAMTLBVEBOX3_CLKGATE_DIS | 1542 - GAMTLBVEBOX2_CLKGATE_DIS | 1543 - GAMTLBVEBOX1_CLKGATE_DIS 
| 1544 - GAMTLBVEBOX0_CLKGATE_DIS); 1545 - } 1546 - 1547 - /* Wa_16012725990:xehpsdv */ 1548 - if (IS_XEHPSDV_GRAPHICS_STEP(i915, STEP_A1, STEP_FOREVER)) 1549 - wa_write_or(wal, UNSLICE_UNIT_LEVEL_CLKGATE, VFUNIT_CLKGATE_DIS); 1550 - 1551 - /* Wa_14011060649:xehpsdv */ 1552 - wa_14011060649(gt, wal); 1553 - 1554 - /* Wa_14012362059:xehpsdv */ 1555 - wa_mcr_write_or(wal, XEHP_MERT_MOD_CTRL, FORCE_MISS_FTLB); 1556 - 1557 - /* Wa_14014368820:xehpsdv */ 1558 - wa_mcr_write_or(wal, XEHP_GAMCNTRL_CTRL, 1559 - INVALIDATION_BROADCAST_MODE_DIS | GLOBAL_INVALIDATION_MODE); 1560 - 1561 - /* Wa_14010670810:xehpsdv */ 1562 - wa_mcr_write_or(wal, XEHP_L3NODEARBCFG, XEHP_LNESPARE); 1563 - } 1564 - 1565 - static void 1566 1523 dg2_gt_workarounds_init(struct intel_gt *gt, struct i915_wa_list *wal) 1567 1524 { 1568 1525 xehp_init_mcr(gt, wal); ··· 1532 1629 1533 1630 /* Wa_14010648519:dg2 */ 1534 1631 wa_mcr_write_or(wal, XEHP_L3NODEARBCFG, XEHP_LNESPARE); 1535 - } 1536 - 1537 - static void 1538 - pvc_gt_workarounds_init(struct intel_gt *gt, struct i915_wa_list *wal) 1539 - { 1540 - pvc_init_mcr(gt, wal); 1541 - 1542 - /* Wa_14015795083 */ 1543 - wa_write_clr(wal, GEN7_MISCCPCTL, GEN12_DOP_CLOCK_GATE_RENDER_ENABLE); 1544 - 1545 - /* Wa_18018781329 */ 1546 - wa_mcr_write_or(wal, RENDER_MOD_CTRL, FORCE_MISS_FTLB); 1547 - wa_mcr_write_or(wal, COMP_MOD_CTRL, FORCE_MISS_FTLB); 1548 - wa_mcr_write_or(wal, XEHP_VDBX_MOD_CTRL, FORCE_MISS_FTLB); 1549 - wa_mcr_write_or(wal, XEHP_VEBX_MOD_CTRL, FORCE_MISS_FTLB); 1550 - 1551 - /* Wa_16016694945 */ 1552 - wa_mcr_masked_en(wal, XEHPC_LNCFMISCCFGREG0, XEHPC_OVRLSCCC); 1553 1632 } 1554 1633 1555 1634 static void ··· 1610 1725 wa_mcr_write_or(wal, XEHP_SQCM, EN_32B_ACCESS); 1611 1726 } 1612 1727 1613 - if (IS_PONTEVECCHIO(gt->i915)) { 1614 - wa_mcr_write(wal, XEHPC_L3SCRUB, 1615 - SCRUB_CL_DWNGRADE_SHARED | SCRUB_RATE_4B_PER_CLK); 1616 - wa_mcr_masked_en(wal, XEHPC_LNCFMISCCFGREG0, XEHPC_HOSTCACHEEN); 1617 - } 1618 - 1619 1728 if (IS_DG2(gt->i915)) { 1620 1729 wa_mcr_write_or(wal, XEHP_L3SCQREG7, BLEND_FILL_CACHING_OPT_DIS); 1621 1730 wa_mcr_write_or(wal, XEHP_SQCM, EN_32B_ACCESS); ··· 1634 1755 1635 1756 if (IS_GFX_GT_IP_RANGE(gt, IP_VER(12, 70), IP_VER(12, 74))) 1636 1757 xelpg_gt_workarounds_init(gt, wal); 1637 - else if (IS_PONTEVECCHIO(i915)) 1638 - pvc_gt_workarounds_init(gt, wal); 1639 1758 else if (IS_DG2(i915)) 1640 1759 dg2_gt_workarounds_init(gt, wal); 1641 - else if (IS_XEHPSDV(i915)) 1642 - xehpsdv_gt_workarounds_init(gt, wal); 1643 1760 else if (IS_DG1(i915)) 1644 1761 dg1_gt_workarounds_init(gt, wal); 1645 1762 else if (GRAPHICS_VER(i915) == 12) ··· 2053 2178 } 2054 2179 } 2055 2180 2056 - static void blacklist_trtt(struct intel_engine_cs *engine) 2057 - { 2058 - struct i915_wa_list *w = &engine->whitelist; 2059 - 2060 - /* 2061 - * Prevent read/write access to [0x4400, 0x4600) which covers 2062 - * the TRTT range across all engines. Note that normally userspace 2063 - * cannot access the other engines' trtt control, but for simplicity 2064 - * we cover the entire range on each engine. 
2065 - */ 2066 - whitelist_reg_ext(w, _MMIO(0x4400), 2067 - RING_FORCE_TO_NONPRIV_DENY | 2068 - RING_FORCE_TO_NONPRIV_RANGE_64); 2069 - whitelist_reg_ext(w, _MMIO(0x4500), 2070 - RING_FORCE_TO_NONPRIV_DENY | 2071 - RING_FORCE_TO_NONPRIV_RANGE_64); 2072 - } 2073 - 2074 - static void pvc_whitelist_build(struct intel_engine_cs *engine) 2075 - { 2076 - /* Wa_16014440446:pvc */ 2077 - blacklist_trtt(engine); 2078 - } 2079 - 2080 2181 static void xelpg_whitelist_build(struct intel_engine_cs *engine) 2081 2182 { 2082 2183 struct i915_wa_list *w = &engine->whitelist; ··· 2079 2228 ; /* none yet */ 2080 2229 else if (IS_GFX_GT_IP_RANGE(engine->gt, IP_VER(12, 70), IP_VER(12, 74))) 2081 2230 xelpg_whitelist_build(engine); 2082 - else if (IS_PONTEVECCHIO(i915)) 2083 - pvc_whitelist_build(engine); 2084 2231 else if (IS_DG2(i915)) 2085 2232 dg2_whitelist_build(engine); 2086 - else if (IS_XEHPSDV(i915)) 2087 - ; /* none needed */ 2088 2233 else if (GRAPHICS_VER(i915) == 12) 2089 2234 tgl_whitelist_build(engine); 2090 2235 else if (GRAPHICS_VER(i915) == 11) ··· 2661 2814 static void 2662 2815 ccs_engine_wa_init(struct intel_engine_cs *engine, struct i915_wa_list *wal) 2663 2816 { 2664 - if (IS_PVC_CT_STEP(engine->i915, STEP_A0, STEP_C0)) { 2665 - /* Wa_14014999345:pvc */ 2666 - wa_mcr_masked_en(wal, GEN10_CACHE_MODE_SS, DISABLE_ECC); 2667 - } 2817 + /* boilerplate for any CCS engine workaround */ 2668 2818 } 2669 2819 2670 2820 /* ··· 2694 2850 wa_mcr_masked_field_set(wal, GEN9_ROW_CHICKEN4, THREAD_EX_ARB_MODE, 2695 2851 THREAD_EX_ARB_MODE_RR_AFTER_DEP); 2696 2852 2697 - if (GRAPHICS_VER(i915) == 12 && GRAPHICS_VER_FULL(i915) < IP_VER(12, 50)) 2853 + if (GRAPHICS_VER(i915) == 12 && GRAPHICS_VER_FULL(i915) < IP_VER(12, 55)) 2698 2854 wa_write_clr(wal, GEN8_GARBCNTL, GEN12_BUS_HASH_CTL_BIT_EXC); 2699 2855 } 2700 2856 ··· 2767 2923 2768 2924 if (IS_GFX_GT_IP_STEP(gt, IP_VER(12, 70), STEP_A0, STEP_B0) || 2769 2925 IS_GFX_GT_IP_STEP(gt, IP_VER(12, 71), STEP_A0, STEP_B0) || 2770 - IS_PONTEVECCHIO(i915) || 2771 2926 IS_DG2(i915)) { 2772 2927 /* Wa_22014226127 */ 2773 2928 wa_mcr_write_or(wal, LSC_CHICKEN_BIT_0, DISABLE_D8_D16_COASLESCE); 2774 2929 } 2775 2930 2776 - if (IS_PONTEVECCHIO(i915) || IS_DG2(i915)) { 2931 + if (IS_DG2(i915)) { 2777 2932 /* Wa_14015227452:dg2,pvc */ 2778 2933 wa_mcr_masked_en(wal, GEN9_ROW_CHICKEN4, XEHP_DIS_BBL_SYSPIPE); 2779 2934 2780 2935 /* Wa_16015675438:dg2,pvc */ 2781 2936 wa_masked_en(wal, FF_SLICE_CS_CHICKEN2, GEN12_PERF_FIX_BALANCING_CFE_DISABLE); 2782 - } 2783 2937 2784 - if (IS_DG2(i915)) { 2785 2938 /* 2786 2939 * Wa_16011620976:dg2_g11 2787 2940 * Wa_22015475538:dg2 ··· 2813 2972 _MASKED_BIT_ENABLE(ENABLE_PREFETCH_INTO_IC), 2814 2973 0 /* write-only, so skip validation */, 2815 2974 true); 2816 - } 2817 - 2818 - if (IS_XEHPSDV(i915)) { 2819 - /* Wa_1409954639 */ 2820 - wa_mcr_masked_en(wal, 2821 - GEN8_ROW_CHICKEN, 2822 - SYSTOLIC_DOP_CLOCK_GATING_DIS); 2823 - 2824 - /* Wa_1607196519 */ 2825 - wa_mcr_masked_en(wal, 2826 - GEN9_ROW_CHICKEN4, 2827 - GEN12_DISABLE_GRF_CLEAR); 2828 - 2829 - /* Wa_14010449647:xehpsdv */ 2830 - wa_mcr_masked_en(wal, GEN8_HALF_SLICE_CHICKEN1, 2831 - GEN7_PSD_SINGLE_PORT_DISPATCH_ENABLE); 2832 2975 } 2833 2976 } 2834 2977 ··· 2894 3069 const struct i915_range *mcr_ranges; 2895 3070 int i; 2896 3071 2897 - if (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 50)) 3072 + if (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 55)) 2898 3073 mcr_ranges = mcr_ranges_xehp; 2899 3074 else if (GRAPHICS_VER(i915) >= 12) 2900 3075 mcr_ranges = mcr_ranges_gen12;
+2 -6
drivers/gpu/drm/i915/gt/uc/intel_guc.c
··· 286 286 287 287 /* Wa_22012773006:gen11,gen12 < XeHP */ 288 288 if (GRAPHICS_VER(gt->i915) >= 11 && 289 - GRAPHICS_VER_FULL(gt->i915) < IP_VER(12, 50)) 289 + GRAPHICS_VER_FULL(gt->i915) < IP_VER(12, 55)) 290 290 flags |= GUC_WA_POLLCS; 291 291 292 292 /* Wa_14014475959 */ ··· 315 315 if (IS_DG2_G11(gt->i915)) 316 316 flags |= GUC_WA_CONTEXT_ISOLATION; 317 317 318 - /* Wa_16015675438 */ 319 - if (!RCS_MASK(gt)) 320 - flags |= GUC_WA_RCS_REGS_IN_CCS_REGS_LIST; 321 - 322 318 /* Wa_14018913170 */ 323 319 if (GUC_FIRMWARE_VER(guc) >= MAKE_GUC_VER(70, 7, 0)) { 324 - if (IS_DG2(gt->i915) || IS_METEORLAKE(gt->i915) || IS_PONTEVECCHIO(gt->i915)) 320 + if (IS_DG2(gt->i915) || IS_METEORLAKE(gt->i915)) 325 321 flags |= GUC_WA_ENABLE_TSC_CHECK_ON_RC6; 326 322 } 327 323
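Most hunks in this series bump the full-version checks from IP_VER(12, 50) to IP_VER(12, 55): with XEHPSDV (graphics IP 12.50) and Pontevecchio removed, DG2's 12.55 becomes the lowest Xe_HP-class graphics IP the checks need to cover. IP_VER(ver, rel) packs the version into the high byte and the release into the low byte (the #define appears later in this diff), so the comparisons stay plain integer compares. A standalone sketch of the packing, for illustration only and not part of the patch:

    /* Illustration of the IP_VER() packing used by the GRAPHICS_VER_FULL() checks. */
    #include <assert.h>

    #define IP_VER(ver, rel) ((ver) << 8 | (rel))

    int main(void)
    {
            /* 12.50 (XEHPSDV, removed here) < 12.55 (DG2) < 12.70 (Xe_LPG) */
            assert(IP_VER(12, 50) < IP_VER(12, 55));
            assert(IP_VER(12, 55) < IP_VER(12, 70));
            return 0;
    }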
+2 -2
drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c
··· 393 393 394 394 /* add in local MOCS registers */ 395 395 for (i = 0; i < LNCFCMOCS_REG_COUNT; i++) 396 - if (GRAPHICS_VER_FULL(engine->i915) >= IP_VER(12, 50)) 396 + if (GRAPHICS_VER_FULL(engine->i915) >= IP_VER(12, 55)) 397 397 ret |= GUC_MCR_REG_ADD(gt, regset, XEHP_LNCFCMOCS(i), false); 398 398 else 399 399 ret |= GUC_MMIO_REG_ADD(gt, regset, GEN9_LNCFCMOCS(i), false); ··· 503 503 504 504 #define LR_HW_CONTEXT_SIZE (80 * sizeof(u32)) 505 505 #define XEHP_LR_HW_CONTEXT_SIZE (96 * sizeof(u32)) 506 - #define LR_HW_CONTEXT_SZ(i915) (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 50) ? \ 506 + #define LR_HW_CONTEXT_SZ(i915) (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 55) ? \ 507 507 XEHP_LR_HW_CONTEXT_SIZE : \ 508 508 LR_HW_CONTEXT_SIZE) 509 509 #define LRC_SKIP_SIZE(i915) (LRC_PPHWSP_SZ * PAGE_SIZE + LR_HW_CONTEXT_SZ(i915))
+1 -1
drivers/gpu/drm/i915/gt/uc/intel_guc_fw.c
··· 26 26 GUC_ENABLE_READ_CACHE_FOR_WOPCM_DATA | 27 27 GUC_ENABLE_MIA_CLOCK_GATING; 28 28 29 - if (GRAPHICS_VER_FULL(uncore->i915) < IP_VER(12, 50)) 29 + if (GRAPHICS_VER_FULL(uncore->i915) < IP_VER(12, 55)) 30 30 shim_flags |= GUC_DISABLE_SRAM_INIT_TO_ZEROES | 31 31 GUC_ENABLE_MIA_CACHING; 32 32
+1 -1
drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
··· 4507 4507 */ 4508 4508 4509 4509 engine->emit_bb_start = gen8_emit_bb_start; 4510 - if (GRAPHICS_VER_FULL(engine->i915) >= IP_VER(12, 50)) 4510 + if (GRAPHICS_VER_FULL(engine->i915) >= IP_VER(12, 55)) 4511 4511 engine->emit_bb_start = xehp_emit_bb_start; 4512 4512 } 4513 4513
-4
drivers/gpu/drm/i915/gt/uc/intel_uc.c
··· 50 50 51 51 /* Default: enable HuC authentication and GuC submission */ 52 52 i915->params.enable_guc = ENABLE_GUC_LOAD_HUC | ENABLE_GUC_SUBMISSION; 53 - 54 - /* XEHPSDV and PVC do not use HuC */ 55 - if (IS_XEHPSDV(i915) || IS_PONTEVECCHIO(i915)) 56 - i915->params.enable_guc &= ~ENABLE_GUC_LOAD_HUC; 57 53 } 58 54 59 55 /* Reset GuC providing us with fresh state for both GuC and HuC.
-12
drivers/gpu/drm/i915/i915_debugfs.c
··· 156 156 case 4: return " WB (2-Way Coh)"; 157 157 default: return " not defined"; 158 158 } 159 - } else if (IS_PONTEVECCHIO(i915)) { 160 - switch (obj->pat_index) { 161 - case 0: return " UC"; 162 - case 1: return " WC"; 163 - case 2: return " WT"; 164 - case 3: return " WB"; 165 - case 4: return " WT (CLOS1)"; 166 - case 5: return " WB (CLOS1)"; 167 - case 6: return " WT (CLOS2)"; 168 - case 7: return " WT (CLOS2)"; 169 - default: return " not defined"; 170 - } 171 159 } else if (GRAPHICS_VER(i915) >= 12) { 172 160 switch (obj->pat_index) { 173 161 case 0: return " WB";
+1 -25
drivers/gpu/drm/i915/i915_drv.h
··· 235 235 /* protects the irq masks */ 236 236 spinlock_t irq_lock; 237 237 238 - bool display_irqs_enabled; 239 - 240 238 /* Sideband mailbox protection */ 241 239 struct mutex sb_lock; 242 240 struct pm_qos_request sb_qos; 243 241 244 242 /** Cached value of IMR to avoid reads in updating the bitfield */ 245 - union { 246 - u32 irq_mask; 247 - u32 de_irq_mask[I915_MAX_PIPES]; 248 - }; 249 - u32 pipestat_irq_mask[I915_MAX_PIPES]; 243 + u32 irq_mask; 250 244 251 245 bool preserve_bios_swizzle; 252 246 253 247 unsigned int fsb_freq, mem_freq, is_ddr3; 254 - unsigned int skl_preferred_vco_freq; 255 248 256 - unsigned int max_dotclk_freq; 257 249 unsigned int hpll_freq; 258 250 unsigned int czclk_freq; 259 251 ··· 341 349 } gem; 342 350 343 351 struct intel_pxp *pxp; 344 - 345 - /* For i915gm/i945gm vblank irq workaround */ 346 - u8 vblank_enabled; 347 352 348 353 bool irq_enabled; 349 354 ··· 533 544 #define IS_DG1(i915) IS_PLATFORM(i915, INTEL_DG1) 534 545 #define IS_ALDERLAKE_S(i915) IS_PLATFORM(i915, INTEL_ALDERLAKE_S) 535 546 #define IS_ALDERLAKE_P(i915) IS_PLATFORM(i915, INTEL_ALDERLAKE_P) 536 - #define IS_XEHPSDV(i915) IS_PLATFORM(i915, INTEL_XEHPSDV) 537 547 #define IS_DG2(i915) IS_PLATFORM(i915, INTEL_DG2) 538 - #define IS_PONTEVECCHIO(i915) IS_PLATFORM(i915, INTEL_PONTEVECCHIO) 539 548 #define IS_METEORLAKE(i915) IS_PLATFORM(i915, INTEL_METEORLAKE) 540 549 #define IS_LUNARLAKE(i915) 0 541 550 ··· 607 620 608 621 #define IS_TIGERLAKE_UY(i915) \ 609 622 IS_SUBPLATFORM(i915, INTEL_TIGERLAKE, INTEL_SUBPLATFORM_UY) 610 - 611 - #define IS_XEHPSDV_GRAPHICS_STEP(__i915, since, until) \ 612 - (IS_XEHPSDV(__i915) && IS_GRAPHICS_STEP(__i915, since, until)) 613 - 614 - #define IS_PVC_BD_STEP(__i915, since, until) \ 615 - (IS_PONTEVECCHIO(__i915) && \ 616 - IS_BASEDIE_STEP(__i915, since, until)) 617 - 618 - #define IS_PVC_CT_STEP(__i915, since, until) \ 619 - (IS_PONTEVECCHIO(__i915) && \ 620 - IS_GRAPHICS_STEP(__i915, since, until)) 621 623 622 624 #define IS_LP(i915) (INTEL_INFO(i915)->is_lp) 623 625 #define IS_GEN9_LP(i915) (GRAPHICS_VER(i915) == 9 && IS_LP(i915))
drivers/gpu/drm/i915/i915_fixed.h → drivers/gpu/drm/i915/display/intel_fixed.h (renamed)
+2 -2
drivers/gpu/drm/i915/i915_getparam.c
··· 160 160 break; 161 161 case I915_PARAM_SLICE_MASK: 162 162 /* Not supported from Xe_HP onward; use topology queries */ 163 - if (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 50)) 163 + if (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 55)) 164 164 return -EINVAL; 165 165 166 166 value = sseu->slice_mask; ··· 169 169 break; 170 170 case I915_PARAM_SUBSLICE_MASK: 171 171 /* Not supported from Xe_HP onward; use topology queries */ 172 - if (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 50)) 172 + if (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 55)) 173 173 return -EINVAL; 174 174 175 175 /* Only copy bits from the first slice */
+2 -3
drivers/gpu/drm/i915/i915_gpu_error.c
··· 1245 1245 if (MEDIA_VER(i915) >= 13 && engine->gt->type == GT_MEDIA) 1246 1246 ee->fault_reg = intel_uncore_read(engine->uncore, 1247 1247 XELPMP_RING_FAULT_REG); 1248 - 1249 - else if (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 50)) 1248 + else if (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 55)) 1250 1249 ee->fault_reg = intel_gt_mcr_read_any(engine->gt, 1251 1250 XEHP_RING_FAULT_REG); 1252 1251 else if (GRAPHICS_VER(i915) >= 12) ··· 1851 1852 if (GRAPHICS_VER(i915) == 7) 1852 1853 gt->err_int = intel_uncore_read(uncore, GEN7_ERR_INT); 1853 1854 1854 - if (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 50)) { 1855 + if (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 55)) { 1855 1856 gt->fault_data0 = intel_gt_mcr_read_any((struct intel_gt *)gt->_gt, 1856 1857 XEHP_FAULT_TLB_DATA0); 1857 1858 gt->fault_data1 = intel_gt_mcr_read_any((struct intel_gt *)gt->_gt,
-6
drivers/gpu/drm/i915/i915_hwmon.c
··· 739 739 hwmon->rg.pkg_rapl_limit = PCU_PACKAGE_RAPL_LIMIT; 740 740 hwmon->rg.energy_status_all = PCU_PACKAGE_ENERGY_STATUS; 741 741 hwmon->rg.energy_status_tile = INVALID_MMIO_REG; 742 - } else if (IS_XEHPSDV(i915)) { 743 - hwmon->rg.pkg_power_sku_unit = GT0_PACKAGE_POWER_SKU_UNIT; 744 - hwmon->rg.pkg_power_sku = INVALID_MMIO_REG; 745 - hwmon->rg.pkg_rapl_limit = GT0_PACKAGE_RAPL_LIMIT; 746 - hwmon->rg.energy_status_all = GT0_PLATFORM_ENERGY_STATUS; 747 - hwmon->rg.energy_status_tile = GT0_PACKAGE_ENERGY_STATUS; 748 742 } else { 749 743 hwmon->rg.pkg_power_sku_unit = INVALID_MMIO_REG; 750 744 hwmon->rg.pkg_power_sku = INVALID_MMIO_REG;
+4 -4
drivers/gpu/drm/i915/i915_irq.c
··· 702 702 gen5_gt_irq_reset(to_gt(dev_priv)); 703 703 704 704 spin_lock_irq(&dev_priv->irq_lock); 705 - if (dev_priv->display_irqs_enabled) 705 + if (dev_priv->display.irq.display_irqs_enabled) 706 706 vlv_display_irq_reset(dev_priv); 707 707 spin_unlock_irq(&dev_priv->irq_lock); 708 708 } ··· 767 767 GEN3_IRQ_RESET(uncore, GEN8_PCU_); 768 768 769 769 spin_lock_irq(&dev_priv->irq_lock); 770 - if (dev_priv->display_irqs_enabled) 770 + if (dev_priv->display.irq.display_irqs_enabled) 771 771 vlv_display_irq_reset(dev_priv); 772 772 spin_unlock_irq(&dev_priv->irq_lock); 773 773 } ··· 784 784 gen5_gt_irq_postinstall(to_gt(dev_priv)); 785 785 786 786 spin_lock_irq(&dev_priv->irq_lock); 787 - if (dev_priv->display_irqs_enabled) 787 + if (dev_priv->display.irq.display_irqs_enabled) 788 788 vlv_display_irq_postinstall(dev_priv); 789 789 spin_unlock_irq(&dev_priv->irq_lock); 790 790 ··· 838 838 gen8_gt_irq_postinstall(to_gt(dev_priv)); 839 839 840 840 spin_lock_irq(&dev_priv->irq_lock); 841 - if (dev_priv->display_irqs_enabled) 841 + if (dev_priv->display.irq.display_irqs_enabled) 842 842 vlv_display_irq_postinstall(dev_priv); 843 843 spin_unlock_irq(&dev_priv->irq_lock); 844 844
+7 -59
drivers/gpu/drm/i915/i915_pci.c
··· 38 38 #include "i915_reg.h" 39 39 #include "intel_pci_config.h" 40 40 41 + __diag_push(); 42 + __diag_ignore_all("-Woverride-init", "Allow field initialization overrides for device info"); 43 + 41 44 #define PLATFORM(x) .platform = (x) 42 45 #define GEN(x) \ 43 46 .__runtime.graphics.ip.ver = (x), \ ··· 59 56 [I915_CACHE_NONE] = 3, \ 60 57 [I915_CACHE_LLC] = 0, \ 61 58 [I915_CACHE_L3_LLC] = 0, \ 62 - [I915_CACHE_WT] = 2, \ 63 - } 64 - 65 - #define PVC_CACHELEVEL \ 66 - .cachelevel_to_pat = { \ 67 - [I915_CACHE_NONE] = 0, \ 68 - [I915_CACHE_LLC] = 3, \ 69 - [I915_CACHE_L3_LLC] = 3, \ 70 59 [I915_CACHE_WT] = 2, \ 71 60 } 72 61 ··· 700 705 I915_GTT_PAGE_SIZE_2M 701 706 702 707 #define XE_HP_FEATURES \ 703 - .__runtime.graphics.ip.ver = 12, \ 704 - .__runtime.graphics.ip.rel = 50, \ 705 708 XE_HP_PAGE_SIZES, \ 706 709 TGL_CACHELEVEL, \ 707 710 .dma_mask_size = 46, \ ··· 723 730 .__runtime.ppgtt_size = 48, \ 724 731 .__runtime.ppgtt_type = INTEL_PPGTT_FULL 725 732 726 - #define XE_HPM_FEATURES \ 727 - .__runtime.media.ip.ver = 12, \ 728 - .__runtime.media.ip.rel = 50 729 - 730 - __maybe_unused 731 - static const struct intel_device_info xehpsdv_info = { 732 - XE_HP_FEATURES, 733 - XE_HPM_FEATURES, 734 - DGFX_FEATURES, 735 - PLATFORM(INTEL_XEHPSDV), 736 - .has_64k_pages = 1, 737 - .has_media_ratio_mode = 1, 738 - .platform_engine_mask = 739 - BIT(RCS0) | BIT(BCS0) | 740 - BIT(VECS0) | BIT(VECS1) | BIT(VECS2) | BIT(VECS3) | 741 - BIT(VCS0) | BIT(VCS1) | BIT(VCS2) | BIT(VCS3) | 742 - BIT(VCS4) | BIT(VCS5) | BIT(VCS6) | BIT(VCS7) | 743 - BIT(CCS0) | BIT(CCS1) | BIT(CCS2) | BIT(CCS3), 744 - .require_force_probe = 1, 745 - }; 746 - 747 733 #define DG2_FEATURES \ 748 734 XE_HP_FEATURES, \ 749 - XE_HPM_FEATURES, \ 750 735 DGFX_FEATURES, \ 736 + .__runtime.graphics.ip.ver = 12, \ 751 737 .__runtime.graphics.ip.rel = 55, \ 738 + .__runtime.media.ip.ver = 12, \ 752 739 .__runtime.media.ip.rel = 55, \ 753 740 PLATFORM(INTEL_DG2), \ 754 741 .has_64k_pages = 1, \ ··· 749 776 DG2_FEATURES, 750 777 .require_force_probe = 1, 751 778 .tuning_thread_rr_after_dep = 1, 752 - }; 753 - 754 - #define XE_HPC_FEATURES \ 755 - XE_HP_FEATURES, \ 756 - .dma_mask_size = 52, \ 757 - .has_3d_pipeline = 0, \ 758 - .has_guc_deprivilege = 1, \ 759 - .has_l3_ccs_read = 1, \ 760 - .has_mslice_steering = 0, \ 761 - .has_one_eu_per_fuse_bit = 1 762 - 763 - __maybe_unused 764 - static const struct intel_device_info pvc_info = { 765 - XE_HPC_FEATURES, 766 - XE_HPM_FEATURES, 767 - DGFX_FEATURES, 768 - .__runtime.graphics.ip.rel = 60, 769 - .__runtime.media.ip.rel = 60, 770 - PLATFORM(INTEL_PONTEVECCHIO), 771 - .has_flat_ccs = 0, 772 - .max_pat_index = 7, 773 - .platform_engine_mask = 774 - BIT(BCS0) | 775 - BIT(VCS0) | 776 - BIT(CCS0) | BIT(CCS1) | BIT(CCS2) | BIT(CCS3), 777 - .require_force_probe = 1, 778 - PVC_CACHELEVEL, 779 779 }; 780 780 781 781 static const struct intel_gt_definition xelpmp_extra_gt[] = { ··· 787 841 }; 788 842 789 843 #undef PLATFORM 844 + 845 + __diag_pop(); 790 846 791 847 /* 792 848 * Make sure any device matches here are from most specific to most
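The __diag_push()/__diag_ignore_all("-Woverride-init", ...) pair added at the top of i915_pci.c suppresses the compiler's override-init warning: the device-info tables deliberately stack feature macros so that a later designated initializer can override a field set by an earlier macro, as the ignore message ("Allow field initialization overrides for device info") itself states. A minimal sketch of the pattern that triggers the warning, using made-up names rather than the driver's real structures:

    /* Designated initializers may legally repeat a field; the last one wins,
     * but GCC/clang warn about the repetition under -Woverride-init. */
    struct info {
            int ver;
            int rel;
    };

    #define BASE_FEATURES .ver = 12, .rel = 50

    static const struct info example_info = {
            BASE_FEATURES,
            .rel = 55,      /* overrides the .rel supplied by BASE_FEATURES */
    };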
+9 -10
drivers/gpu/drm/i915/i915_perf.c
··· 292 292 #define OAREPORT_REASON_CTX_SWITCH (1<<3) 293 293 #define OAREPORT_REASON_CLK_RATIO (1<<5) 294 294 295 - #define HAS_MI_SET_PREDICATE(i915) (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 50)) 295 + #define HAS_MI_SET_PREDICATE(i915) (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 55)) 296 296 297 297 /* For sysctl proc_dointvec_minmax of i915_oa_max_sample_rate 298 298 * ··· 817 817 */ 818 818 819 819 if (oa_report_ctx_invalid(stream, report) && 820 - GRAPHICS_VER_FULL(stream->engine->i915) < IP_VER(12, 50)) { 820 + GRAPHICS_VER_FULL(stream->engine->i915) < IP_VER(12, 55)) { 821 821 ctx_id = INVALID_CTX_ID; 822 822 oa_context_id_squash(stream, report32); 823 823 } ··· 1419 1419 1420 1420 mask = ((1U << GEN12_GUC_SW_CTX_ID_WIDTH) - 1) << 1421 1421 (GEN12_GUC_SW_CTX_ID_SHIFT - 32); 1422 - } else if (GRAPHICS_VER_FULL(stream->engine->i915) >= IP_VER(12, 50)) { 1422 + } else if (GRAPHICS_VER_FULL(stream->engine->i915) >= IP_VER(12, 55)) { 1423 1423 ctx_id = (XEHP_MAX_CONTEXT_HW_ID - 1) << 1424 1424 (XEHP_SW_CTX_ID_SHIFT - 32); 1425 1425 ··· 2881 2881 int ret; 2882 2882 2883 2883 /* 2884 - * Wa_1508761755:xehpsdv, dg2 2884 + * Wa_1508761755 2885 2885 * EU NOA signals behave incorrectly if EU clock gating is enabled. 2886 2886 * Disable thread stall DOP gating and EU DOP gating. 2887 2887 */ 2888 - if (IS_XEHPSDV(i915) || IS_DG2(i915)) { 2888 + if (IS_DG2(i915)) { 2889 2889 intel_gt_mcr_multicast_write(uncore->gt, GEN8_ROW_CHICKEN, 2890 2890 _MASKED_BIT_ENABLE(STALL_DOP_GATING_DISABLE)); 2891 2891 intel_uncore_write(uncore, GEN7_ROW_CHICKEN2, ··· 2911 2911 /* 2912 2912 * Initialize Super Queue Internal Cnt Register 2913 2913 * Set PMON Enable in order to collect valid metrics. 2914 - * Enable byets per clock reporting in OA for XEHPSDV onward. 2914 + * Enable bytes per clock reporting in OA. 2915 2915 */ 2916 2916 sqcnt1 = GEN12_SQCNT1_PMON_ENABLE | 2917 2917 (HAS_OA_BPC_REPORTING(i915) ? GEN12_SQCNT1_OABPC : 0); ··· 2971 2971 u32 sqcnt1; 2972 2972 2973 2973 /* 2974 - * Wa_1508761755:xehpsdv, dg2 2975 - * Enable thread stall DOP gating and EU DOP gating. 2974 + * Wa_1508761755: Enable thread stall DOP gating and EU DOP gating. 2976 2975 */ 2977 - if (IS_XEHPSDV(i915) || IS_DG2(i915)) { 2976 + if (IS_DG2(i915)) { 2978 2977 intel_gt_mcr_multicast_write(uncore->gt, GEN8_ROW_CHICKEN, 2979 2978 _MASKED_BIT_DISABLE(STALL_DOP_GATING_DISABLE)); 2980 2979 intel_uncore_write(uncore, GEN7_ROW_CHICKEN2, ··· 4122 4123 props->hold_preemption = !!value; 4123 4124 break; 4124 4125 case DRM_I915_PERF_PROP_GLOBAL_SSEU: { 4125 - if (GRAPHICS_VER_FULL(perf->i915) >= IP_VER(12, 50)) { 4126 + if (GRAPHICS_VER_FULL(perf->i915) >= IP_VER(12, 55)) { 4126 4127 drm_dbg(&perf->i915->drm, 4127 4128 "SSEU config not supported on gfx %x\n", 4128 4129 GRAPHICS_VER_FULL(perf->i915));
+1 -1
drivers/gpu/drm/i915/i915_query.c
··· 105 105 struct intel_engine_cs *engine; 106 106 struct i915_engine_class_instance classinstance; 107 107 108 - if (GRAPHICS_VER_FULL(i915) < IP_VER(12, 50)) 108 + if (GRAPHICS_VER_FULL(i915) < IP_VER(12, 55)) 109 109 return -ENODEV; 110 110 111 111 classinstance = *((struct i915_engine_class_instance *)&query_item->flags);
+30 -13
drivers/gpu/drm/i915/i915_reg.h
··· 1750 1750 1751 1751 #define BXT_RP_STATE_CAP _MMIO(0x138170) 1752 1752 #define GEN9_RP_STATE_LIMITS _MMIO(0x138148) 1753 - #define XEHPSDV_RP_STATE_CAP _MMIO(0x250014) 1754 - #define PVC_RP_STATE_CAP _MMIO(0x281014) 1755 1753 1756 1754 #define MTL_RP_STATE_CAP _MMIO(0x138000) 1757 1755 #define MTL_MEDIAP_STATE_CAP _MMIO(0x138020) ··· 2093 2095 #define TRANS_PUSH_EN REG_BIT(31) 2094 2096 #define TRANS_PUSH_SEND REG_BIT(30) 2095 2097 2098 + #define _TRANS_VRR_VSYNC_A 0x60078 2099 + #define TRANS_VRR_VSYNC(trans) _MMIO_TRANS2(trans, _TRANS_VRR_VSYNC_A) 2100 + #define VRR_VSYNC_END_MASK REG_GENMASK(28, 16) 2101 + #define VRR_VSYNC_END(vsync_end) REG_FIELD_PREP(VRR_VSYNC_END_MASK, (vsync_end)) 2102 + #define VRR_VSYNC_START_MASK REG_GENMASK(12, 0) 2103 + #define VRR_VSYNC_START(vsync_start) REG_FIELD_PREP(VRR_VSYNC_START_MASK, (vsync_start)) 2104 + 2096 2105 /* VGA port control */ 2097 2106 #define ADPA _MMIO(0x61100) 2098 2107 #define PCH_ADPA _MMIO(0xe1100) ··· 2317 2312 * (Haswell and newer) to see which VIDEO_DIP_DATA byte corresponds to each byte 2318 2313 * of the infoframe structure specified by CEA-861. */ 2319 2314 #define VIDEO_DIP_DATA_SIZE 32 2315 + #define VIDEO_DIP_ASYNC_DATA_SIZE 36 2320 2316 #define VIDEO_DIP_GMP_DATA_SIZE 36 2321 2317 #define VIDEO_DIP_VSC_DATA_SIZE 36 2322 2318 #define VIDEO_DIP_PPS_DATA_SIZE 132 ··· 2356 2350 #define VIDEO_DIP_ENABLE_VS_HSW (1 << 8) 2357 2351 #define VIDEO_DIP_ENABLE_GMP_HSW (1 << 4) 2358 2352 #define VIDEO_DIP_ENABLE_SPD_HSW (1 << 0) 2353 + /* ADL and later: */ 2354 + #define VIDEO_DIP_ENABLE_AS_ADL REG_BIT(23) 2359 2355 2360 2356 /* Panel fitting */ 2361 2357 #define PFIT_CONTROL _MMIO(DISPLAY_MMIO_BASE(dev_priv) + 0x61230) ··· 2596 2588 #define TRANSCONF_DITHER_TYPE_ST1 REG_FIELD_PREP(TRANSCONF_DITHER_TYPE_MASK, 1) 2597 2589 #define TRANSCONF_DITHER_TYPE_ST2 REG_FIELD_PREP(TRANSCONF_DITHER_TYPE_MASK, 2) 2598 2590 #define TRANSCONF_DITHER_TYPE_TEMP REG_FIELD_PREP(TRANSCONF_DITHER_TYPE_MASK, 3) 2591 + #define TRANSCONF_PIXEL_COUNT_SCALING_MASK REG_GENMASK(1, 0) 2592 + #define TRANSCONF_PIXEL_COUNT_SCALING_X4 1 2593 + 2599 2594 #define _PIPEASTAT 0x70024 2600 2595 #define PIPE_FIFO_UNDERRUN_STATUS (1UL << 31) 2601 2596 #define SPRITE1_FLIP_DONE_INT_EN_VLV (1UL << 30) ··· 3064 3053 #define MCURSOR_MODE_DISABLE 0x00 3065 3054 #define MCURSOR_MODE_128_32B_AX 0x02 3066 3055 #define MCURSOR_MODE_256_32B_AX 0x03 3056 + #define MCURSOR_MODE_64_2B 0x04 3067 3057 #define MCURSOR_MODE_64_32B_AX 0x07 3068 3058 #define MCURSOR_MODE_128_ARGB_AX (0x20 | MCURSOR_MODE_128_32B_AX) 3069 3059 #define MCURSOR_MODE_256_ARGB_AX (0x20 | MCURSOR_MODE_256_32B_AX) ··· 4567 4555 #define GLK_CL1_PWR_DOWN REG_BIT(11) 4568 4556 #define GLK_CL0_PWR_DOWN REG_BIT(10) 4569 4557 4558 + #define CHICKEN_MISC_3 _MMIO(0x42088) 4559 + #define DP_MST_DPT_DPTP_ALIGN_WA(trans) REG_BIT(9 + (trans) - TRANSCODER_A) 4560 + #define DP_MST_SHORT_HBLANK_WA(trans) REG_BIT(5 + (trans) - TRANSCODER_A) 4561 + #define DP_MST_FEC_BS_JITTER_WA(trans) REG_BIT(0 + (trans) - TRANSCODER_A) 4562 + 4570 4563 #define CHICKEN_MISC_4 _MMIO(0x4208c) 4571 4564 #define CHICKEN_FBC_STRIDE_OVERRIDE REG_BIT(13) 4572 4565 #define CHICKEN_FBC_STRIDE_MASK REG_GENMASK(12, 0) ··· 4628 4611 #define DDIE_TRAINING_OVERRIDE_ENABLE REG_BIT(17) /* CHICKEN_TRANS_A only */ 4629 4612 #define DDIE_TRAINING_OVERRIDE_VALUE REG_BIT(16) /* CHICKEN_TRANS_A only */ 4630 4613 #define PSR2_ADD_VERTICAL_LINE_COUNT REG_BIT(15) 4614 + #define DP_FEC_BS_JITTER_WA REG_BIT(15) 4631 4615 #define PSR2_VSC_ENABLE_PROG_HEADER REG_BIT(12) 
4616 + #define DP_DSC_INSERT_SF_AT_EOL_WA REG_BIT(4) 4632 4617 4633 4618 #define DISP_ARB_CTL _MMIO(0x45000) 4634 4619 #define DISP_FBC_MEMORY_WAKE REG_BIT(31) ··· 5059 5040 #define _HSW_VIDEO_DIP_SPD_DATA_A 0x602A0 5060 5041 #define _HSW_VIDEO_DIP_GMP_DATA_A 0x602E0 5061 5042 #define _HSW_VIDEO_DIP_VSC_DATA_A 0x60320 5043 + #define _ADL_VIDEO_DIP_AS_DATA_A 0x60484 5062 5044 #define _GLK_VIDEO_DIP_DRM_DATA_A 0x60440 5063 5045 #define _HSW_VIDEO_DIP_AVI_ECC_A 0x60240 5064 5046 #define _HSW_VIDEO_DIP_VS_ECC_A 0x60280 ··· 5074 5054 #define _HSW_VIDEO_DIP_SPD_DATA_B 0x612A0 5075 5055 #define _HSW_VIDEO_DIP_GMP_DATA_B 0x612E0 5076 5056 #define _HSW_VIDEO_DIP_VSC_DATA_B 0x61320 5057 + #define _ADL_VIDEO_DIP_AS_DATA_B 0x61484 5077 5058 #define _GLK_VIDEO_DIP_DRM_DATA_B 0x61440 5078 5059 #define _HSW_VIDEO_DIP_BVI_ECC_B 0x61240 5079 5060 #define _HSW_VIDEO_DIP_VS_ECC_B 0x61280 ··· 5104 5083 #define GLK_TVIDEO_DIP_DRM_DATA(trans, i) _MMIO_TRANS2(trans, _GLK_VIDEO_DIP_DRM_DATA_A + (i) * 4) 5105 5084 #define ICL_VIDEO_DIP_PPS_DATA(trans, i) _MMIO_TRANS2(trans, _ICL_VIDEO_DIP_PPS_DATA_A + (i) * 4) 5106 5085 #define ICL_VIDEO_DIP_PPS_ECC(trans, i) _MMIO_TRANS2(trans, _ICL_VIDEO_DIP_PPS_ECC_A + (i) * 4) 5086 + /*ADLP and later: */ 5087 + #define ADL_TVIDEO_DIP_AS_SDP_DATA(trans, i) _MMIO_TRANS2(trans,\ 5088 + _ADL_VIDEO_DIP_AS_DATA_A + (i) * 4) 5107 5089 5108 5090 #define _HSW_STEREO_3D_CTL_A 0x70020 5109 5091 #define S3D_ENABLE (1 << 31) ··· 5425 5401 #define POWER_SETUP_I1_SHIFT 6 /* 10.6 fixed point format */ 5426 5402 #define POWER_SETUP_I1_DATA_MASK REG_GENMASK(15, 0) 5427 5403 #define GEN12_PCODE_READ_SAGV_BLOCK_TIME_US 0x23 5428 - #define XEHP_PCODE_FREQUENCY_CONFIG 0x6e /* xehpsdv, pvc */ 5404 + #define XEHP_PCODE_FREQUENCY_CONFIG 0x6e /* pvc */ 5429 5405 /* XEHP_PCODE_FREQUENCY_CONFIG sub-commands (param1) */ 5430 5406 #define PCODE_MBOX_FC_SC_READ_FUSED_P0 0x0 5431 5407 #define PCODE_MBOX_FC_SC_READ_FUSED_PN 0x1 ··· 5589 5565 #define ICL_PW_CTL_IDX_TO_PG(pw_idx) \ 5590 5566 ((pw_idx) - ICL_PW_CTL_IDX_PW_1 + SKL_PG1) 5591 5567 #define SKL_FUSE_PG_DIST_STATUS(pg) (1 << (27 - (pg))) 5592 - 5593 - #define _ICL_AUX_REG_IDX(pw_idx) ((pw_idx) - ICL_PW_CTL_IDX_AUX_A) 5594 - #define _ICL_AUX_ANAOVRD1_A 0x162398 5595 - #define _ICL_AUX_ANAOVRD1_B 0x6C398 5596 - #define ICL_AUX_ANAOVRD1(pw_idx) _MMIO(_PICK(_ICL_AUX_REG_IDX(pw_idx), \ 5597 - _ICL_AUX_ANAOVRD1_A, \ 5598 - _ICL_AUX_ANAOVRD1_B)) 5599 - #define ICL_AUX_ANAOVRD1_LDO_BYPASS (1 << 7) 5600 - #define ICL_AUX_ANAOVRD1_ENABLE (1 << 0) 5601 5568 5602 5569 /* Per-pipe DDI Function Control */ 5603 5570 #define _TRANS_DDI_FUNC_CTL_A 0x60400 ··· 5915 5900 #define CDCLK_FREQ_540 REG_FIELD_PREP(CDCLK_FREQ_SEL_MASK, 1) 5916 5901 #define CDCLK_FREQ_337_308 REG_FIELD_PREP(CDCLK_FREQ_SEL_MASK, 2) 5917 5902 #define CDCLK_FREQ_675_617 REG_FIELD_PREP(CDCLK_FREQ_SEL_MASK, 3) 5918 - #define MDCLK_SOURCE_SEL_CDCLK_PLL REG_BIT(25) 5903 + #define MDCLK_SOURCE_SEL_MASK REG_GENMASK(25, 25) 5904 + #define MDCLK_SOURCE_SEL_CD2XCLK REG_FIELD_PREP(MDCLK_SOURCE_SEL_MASK, 0) 5905 + #define MDCLK_SOURCE_SEL_CDCLK_PLL REG_FIELD_PREP(MDCLK_SOURCE_SEL_MASK, 1) 5919 5906 #define BXT_CDCLK_CD2X_DIV_SEL_MASK REG_GENMASK(23, 22) 5920 5907 #define BXT_CDCLK_CD2X_DIV_SEL_1 REG_FIELD_PREP(BXT_CDCLK_CD2X_DIV_SEL_MASK, 0) 5921 5908 #define BXT_CDCLK_CD2X_DIV_SEL_1_5 REG_FIELD_PREP(BXT_CDCLK_CD2X_DIV_SEL_MASK, 1)
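The new TRANS_VRR_VSYNC register added above follows the driver's REG_GENMASK()/REG_FIELD_PREP() conventions, so a full register value is built by OR-ing the prepared fields together. A hedged sketch of how a caller might compose and write it, assuming the usual intel_de_write() accessor and hypothetical locals (vsync_start, vsync_end, cpu_transcoder); the actual call site is not part of this hunk:

    /* Illustrative only: pack both fields and write the per-transcoder register. */
    u32 vrr_vsync = VRR_VSYNC_START(vsync_start) | VRR_VSYNC_END(vsync_end);

    intel_de_write(dev_priv, TRANS_VRR_VSYNC(cpu_transcoder), vrr_vsync);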
-14
drivers/gpu/drm/i915/i915_utils.h
··· 73 73 __i915_printk(i915, i915_error_injected() ? KERN_DEBUG : KERN_ERR, \ 74 74 fmt, ##__VA_ARGS__) 75 75 76 - #if defined(GCC_VERSION) && GCC_VERSION >= 70000 77 - #define add_overflows_t(T, A, B) \ 78 - __builtin_add_overflow_p((A), (B), (T)0) 79 - #else 80 - #define add_overflows_t(T, A, B) ({ \ 81 - typeof(A) a = (A); \ 82 - typeof(B) b = (B); \ 83 - (T)(a + b) < a; \ 84 - }) 85 - #endif 86 - 87 - #define add_overflows(A, B) \ 88 - add_overflows_t(typeof((A) + (B)), (A), (B)) 89 - 90 76 #define range_overflows(start, size, max) ({ \ 91 77 typeof(start) start__ = (start); \ 92 78 typeof(size) size__ = (size); \
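The add_overflows()/add_overflows_t() helpers removed above appear to have no remaining users in this tree, and the generic helpers in <linux/overflow.h> cover the same need. A minimal sketch of that replacement pattern, using a hypothetical helper rather than anything from this patch:

    #include <linux/errno.h>
    #include <linux/overflow.h>

    /* check_add_overflow() stores a + b in *out and returns true if the sum wrapped. */
    static int add_sizes_checked(size_t a, size_t b, size_t *out)
    {
            if (check_add_overflow(a, b, out))
                    return -EOVERFLOW;
            return 0;
    }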
+1 -58
drivers/gpu/drm/i915/intel_clock_gating.c
··· 105 105 * Display WA #0562: bxt 106 106 */ 107 107 intel_uncore_rmw(&i915->uncore, DISP_ARB_CTL, 0, DISP_FBC_WM_DIS); 108 - 109 - /* 110 - * WaFbcHighMemBwCorruptionAvoidance:bxt 111 - * Display WA #0883: bxt 112 - */ 113 - intel_uncore_rmw(&i915->uncore, ILK_DPFC_CHICKEN(INTEL_FBC_A), 0, DPFC_DISABLE_DUMMY0); 114 108 } 115 109 116 110 static void glk_init_clock_gating(struct drm_i915_private *i915) ··· 343 349 intel_uncore_write(&i915->uncore, GEN7_MISCCPCTL, misccpctl); 344 350 } 345 351 346 - static void xehpsdv_init_clock_gating(struct drm_i915_private *i915) 347 - { 348 - /* Wa_22010146351:xehpsdv */ 349 - if (IS_XEHPSDV_GRAPHICS_STEP(i915, STEP_A0, STEP_B0)) 350 - intel_uncore_rmw(&i915->uncore, XEHP_CLOCK_GATE_DIS, 0, SGR_DIS); 351 - } 352 - 353 352 static void dg2_init_clock_gating(struct drm_i915_private *i915) 354 353 { 355 354 /* Wa_22010954014:dg2 */ 356 355 intel_uncore_rmw(&i915->uncore, XEHP_CLOCK_GATE_DIS, 0, 357 356 SGSI_SIDECLK_DIS); 358 - } 359 - 360 - static void pvc_init_clock_gating(struct drm_i915_private *i915) 361 - { 362 - /* Wa_14012385139:pvc */ 363 - if (IS_PVC_BD_STEP(i915, STEP_A0, STEP_B0)) 364 - intel_uncore_rmw(&i915->uncore, XEHP_CLOCK_GATE_DIS, 0, SGR_DIS); 365 - 366 - /* Wa_22010954014:pvc */ 367 - if (IS_PVC_BD_STEP(i915, STEP_A0, STEP_B0)) 368 - intel_uncore_rmw(&i915->uncore, XEHP_CLOCK_GATE_DIS, 0, SGSI_SIDECLK_DIS); 369 357 } 370 358 371 359 static void cnp_init_clock_gating(struct drm_i915_private *i915) ··· 372 396 * Display WA #0562: cfl 373 397 */ 374 398 intel_uncore_rmw(&i915->uncore, DISP_ARB_CTL, 0, DISP_FBC_WM_DIS); 375 - 376 - /* 377 - * WaFbcNukeOnHostModify:cfl 378 - * Display WA #0873: cfl 379 - */ 380 - intel_uncore_rmw(&i915->uncore, ILK_DPFC_CHICKEN(INTEL_FBC_A), 381 - 0, DPFC_NUKE_ON_ANY_MODIFICATION); 382 399 } 383 400 384 401 static void kbl_init_clock_gating(struct drm_i915_private *i915) ··· 396 427 * Display WA #0562: kbl 397 428 */ 398 429 intel_uncore_rmw(&i915->uncore, DISP_ARB_CTL, 0, DISP_FBC_WM_DIS); 399 - 400 - /* 401 - * WaFbcNukeOnHostModify:kbl 402 - * Display WA #0873: kbl 403 - */ 404 - intel_uncore_rmw(&i915->uncore, ILK_DPFC_CHICKEN(INTEL_FBC_A), 405 - 0, DPFC_NUKE_ON_ANY_MODIFICATION); 406 430 } 407 431 408 432 static void skl_init_clock_gating(struct drm_i915_private *i915) ··· 414 452 * Display WA #0562: skl 415 453 */ 416 454 intel_uncore_rmw(&i915->uncore, DISP_ARB_CTL, 0, DISP_FBC_WM_DIS); 417 - 418 - /* 419 - * WaFbcNukeOnHostModify:skl 420 - * Display WA #0873: skl 421 - */ 422 - intel_uncore_rmw(&i915->uncore, ILK_DPFC_CHICKEN(INTEL_FBC_A), 423 - 0, DPFC_NUKE_ON_ANY_MODIFICATION); 424 - 425 - /* 426 - * WaFbcHighMemBwCorruptionAvoidance:skl 427 - * Display WA #0883: skl 428 - */ 429 - intel_uncore_rmw(&i915->uncore, ILK_DPFC_CHICKEN(INTEL_FBC_A), 0, DPFC_DISABLE_DUMMY0); 430 455 } 431 456 432 457 static void bdw_init_clock_gating(struct drm_i915_private *i915) ··· 711 762 .init_clock_gating = platform##_init_clock_gating, \ 712 763 } 713 764 714 - CG_FUNCS(pvc); 715 765 CG_FUNCS(dg2); 716 - CG_FUNCS(xehpsdv); 717 766 CG_FUNCS(cfl); 718 767 CG_FUNCS(skl); 719 768 CG_FUNCS(kbl); ··· 744 797 */ 745 798 void intel_clock_gating_hooks_init(struct drm_i915_private *i915) 746 799 { 747 - if (IS_PONTEVECCHIO(i915)) 748 - i915->clock_gating_funcs = &pvc_clock_gating_funcs; 749 - else if (IS_DG2(i915)) 800 + if (IS_DG2(i915)) 750 801 i915->clock_gating_funcs = &dg2_clock_gating_funcs; 751 - else if (IS_XEHPSDV(i915)) 752 - i915->clock_gating_funcs = &xehpsdv_clock_gating_funcs; 753 802 else if 
(IS_COFFEELAKE(i915) || IS_COMETLAKE(i915)) 754 803 i915->clock_gating_funcs = &cfl_clock_gating_funcs; 755 804 else if (IS_SKYLAKE(i915))
-2
drivers/gpu/drm/i915/intel_device_info.c
··· 70 70 PLATFORM_NAME(DG1), 71 71 PLATFORM_NAME(ALDERLAKE_S), 72 72 PLATFORM_NAME(ALDERLAKE_P), 73 - PLATFORM_NAME(XEHPSDV), 74 73 PLATFORM_NAME(DG2), 75 - PLATFORM_NAME(PONTEVECCHIO), 76 74 PLATFORM_NAME(METEORLAKE), 77 75 }; 78 76 #undef PLATFORM_NAME
-2
drivers/gpu/drm/i915/intel_device_info.h
··· 87 87 INTEL_DG1, 88 88 INTEL_ALDERLAKE_S, 89 89 INTEL_ALDERLAKE_P, 90 - INTEL_XEHPSDV, 91 90 INTEL_DG2, 92 - INTEL_PONTEVECCHIO, 93 91 INTEL_METEORLAKE, 94 92 INTEL_MAX_PLATFORMS 95 93 };
+1 -79
drivers/gpu/drm/i915/intel_step.c
··· 102 102 [0xC] = { COMMON_GT_MEDIA_STEP(C0), .display_step = STEP_D0 }, 103 103 }; 104 104 105 - static const struct intel_step_info xehpsdv_revids[] = { 106 - [0x0] = { COMMON_GT_MEDIA_STEP(A0) }, 107 - [0x1] = { COMMON_GT_MEDIA_STEP(A1) }, 108 - [0x4] = { COMMON_GT_MEDIA_STEP(B0) }, 109 - [0x8] = { COMMON_GT_MEDIA_STEP(C0) }, 110 - }; 111 - 112 105 static const struct intel_step_info dg2_g10_revid_step_tbl[] = { 113 106 [0x0] = { COMMON_GT_MEDIA_STEP(A0), .display_step = STEP_A0 }, 114 107 [0x1] = { COMMON_GT_MEDIA_STEP(A1), .display_step = STEP_A0 }, ··· 146 153 return step; 147 154 } 148 155 149 - static void pvc_step_init(struct drm_i915_private *i915, int pci_revid); 150 - 151 156 void intel_step_init(struct drm_i915_private *i915) 152 157 { 153 158 const struct intel_step_info *revids = NULL; ··· 169 178 return; 170 179 } 171 180 172 - if (IS_PONTEVECCHIO(i915)) { 173 - pvc_step_init(i915, revid); 174 - return; 175 - } else if (IS_DG2_G10(i915)) { 181 + if (IS_DG2_G10(i915)) { 176 182 revids = dg2_g10_revid_step_tbl; 177 183 size = ARRAY_SIZE(dg2_g10_revid_step_tbl); 178 184 } else if (IS_DG2_G11(i915)) { ··· 178 190 } else if (IS_DG2_G12(i915)) { 179 191 revids = dg2_g12_revid_step_tbl; 180 192 size = ARRAY_SIZE(dg2_g12_revid_step_tbl); 181 - } else if (IS_XEHPSDV(i915)) { 182 - revids = xehpsdv_revids; 183 - size = ARRAY_SIZE(xehpsdv_revids); 184 193 } else if (IS_ALDERLAKE_P_N(i915)) { 185 194 revids = adlp_n_revids; 186 195 size = ARRAY_SIZE(adlp_n_revids); ··· 260 275 return; 261 276 262 277 RUNTIME_INFO(i915)->step = step; 263 - } 264 - 265 - #define PVC_BD_REVID GENMASK(5, 3) 266 - #define PVC_CT_REVID GENMASK(2, 0) 267 - 268 - static const int pvc_bd_subids[] = { 269 - [0x0] = STEP_A0, 270 - [0x3] = STEP_B0, 271 - [0x4] = STEP_B1, 272 - [0x5] = STEP_B3, 273 - }; 274 - 275 - static const int pvc_ct_subids[] = { 276 - [0x3] = STEP_A0, 277 - [0x5] = STEP_B0, 278 - [0x6] = STEP_B1, 279 - [0x7] = STEP_C0, 280 - }; 281 - 282 - static int 283 - pvc_step_lookup(struct drm_i915_private *i915, const char *type, 284 - const int *table, int size, int subid) 285 - { 286 - if (subid < size && table[subid] != STEP_NONE) 287 - return table[subid]; 288 - 289 - drm_warn(&i915->drm, "Unknown %s id 0x%02x\n", type, subid); 290 - 291 - /* 292 - * As on other platforms, try to use the next higher ID if we land on a 293 - * gap in the table. 294 - */ 295 - while (subid < size && table[subid] == STEP_NONE) 296 - subid++; 297 - 298 - if (subid < size) { 299 - drm_dbg(&i915->drm, "Using steppings for %s id 0x%02x\n", 300 - type, subid); 301 - return table[subid]; 302 - } 303 - 304 - drm_dbg(&i915->drm, "Using future steppings\n"); 305 - return STEP_FUTURE; 306 - } 307 - 308 - /* 309 - * PVC needs special handling since we don't lookup the 310 - * revid in a table, but rather specific bitfields within 311 - * the revid for various components. 312 - */ 313 - static void pvc_step_init(struct drm_i915_private *i915, int pci_revid) 314 - { 315 - int ct_subid, bd_subid; 316 - 317 - bd_subid = FIELD_GET(PVC_BD_REVID, pci_revid); 318 - ct_subid = FIELD_GET(PVC_CT_REVID, pci_revid); 319 - 320 - RUNTIME_INFO(i915)->step.basedie_step = 321 - pvc_step_lookup(i915, "Base Die", pvc_bd_subids, 322 - ARRAY_SIZE(pvc_bd_subids), bd_subid); 323 - RUNTIME_INFO(i915)->step.graphics_step = 324 - pvc_step_lookup(i915, "Compute Tile", pvc_ct_subids, 325 - ARRAY_SIZE(pvc_ct_subids), ct_subid); 326 278 } 327 279 328 280 #define STEP_NAME_CASE(name) \
+109 -271
drivers/gpu/drm/i915/intel_uncore.c
··· 1106 1106 { .start = 0x1F8510, .end = 0x1F8550 }, 1107 1107 }; 1108 1108 1109 - static const struct i915_range pvc_shadowed_regs[] = { 1110 - { .start = 0x2030, .end = 0x2030 }, 1111 - { .start = 0x2510, .end = 0x2550 }, 1112 - { .start = 0xA008, .end = 0xA00C }, 1113 - { .start = 0xA188, .end = 0xA188 }, 1114 - { .start = 0xA278, .end = 0xA278 }, 1115 - { .start = 0xA540, .end = 0xA56C }, 1116 - { .start = 0xC4C8, .end = 0xC4C8 }, 1117 - { .start = 0xC4E0, .end = 0xC4E0 }, 1118 - { .start = 0xC600, .end = 0xC600 }, 1119 - { .start = 0xC658, .end = 0xC658 }, 1120 - { .start = 0x22030, .end = 0x22030 }, 1121 - { .start = 0x22510, .end = 0x22550 }, 1122 - { .start = 0x1C0030, .end = 0x1C0030 }, 1123 - { .start = 0x1C0510, .end = 0x1C0550 }, 1124 - { .start = 0x1C4030, .end = 0x1C4030 }, 1125 - { .start = 0x1C4510, .end = 0x1C4550 }, 1126 - { .start = 0x1C8030, .end = 0x1C8030 }, 1127 - { .start = 0x1C8510, .end = 0x1C8550 }, 1128 - { .start = 0x1D0030, .end = 0x1D0030 }, 1129 - { .start = 0x1D0510, .end = 0x1D0550 }, 1130 - { .start = 0x1D4030, .end = 0x1D4030 }, 1131 - { .start = 0x1D4510, .end = 0x1D4550 }, 1132 - { .start = 0x1D8030, .end = 0x1D8030 }, 1133 - { .start = 0x1D8510, .end = 0x1D8550 }, 1134 - { .start = 0x1E0030, .end = 0x1E0030 }, 1135 - { .start = 0x1E0510, .end = 0x1E0550 }, 1136 - { .start = 0x1E4030, .end = 0x1E4030 }, 1137 - { .start = 0x1E4510, .end = 0x1E4550 }, 1138 - { .start = 0x1E8030, .end = 0x1E8030 }, 1139 - { .start = 0x1E8510, .end = 0x1E8550 }, 1140 - { .start = 0x1F0030, .end = 0x1F0030 }, 1141 - { .start = 0x1F0510, .end = 0x1F0550 }, 1142 - { .start = 0x1F4030, .end = 0x1F4030 }, 1143 - { .start = 0x1F4510, .end = 0x1F4550 }, 1144 - { .start = 0x1F8030, .end = 0x1F8030 }, 1145 - { .start = 0x1F8510, .end = 0x1F8550 }, 1146 - }; 1147 - 1148 1109 static const struct i915_range mtl_shadowed_regs[] = { 1149 1110 { .start = 0x2030, .end = 0x2030 }, 1150 1111 { .start = 0x2510, .end = 0x2550 }, ··· 1432 1471 0x1d3f00 - 0x1d3fff: VD2 */ 1433 1472 }; 1434 1473 1435 - /* 1436 - * Graphics IP version 12.55 brings a slight change to the 0xd800 range, 1437 - * switching it from the GT domain to the render domain. 
1438 - */ 1439 - #define XEHP_FWRANGES(FW_RANGE_D800) \ 1440 - GEN_FW_RANGE(0x0, 0x1fff, 0), /* \ 1441 - 0x0 - 0xaff: reserved \ 1442 - 0xb00 - 0x1fff: always on */ \ 1443 - GEN_FW_RANGE(0x2000, 0x26ff, FORCEWAKE_RENDER), \ 1444 - GEN_FW_RANGE(0x2700, 0x4aff, FORCEWAKE_GT), \ 1445 - GEN_FW_RANGE(0x4b00, 0x51ff, 0), /* \ 1446 - 0x4b00 - 0x4fff: reserved \ 1447 - 0x5000 - 0x51ff: always on */ \ 1448 - GEN_FW_RANGE(0x5200, 0x7fff, FORCEWAKE_RENDER), \ 1449 - GEN_FW_RANGE(0x8000, 0x813f, FORCEWAKE_GT), \ 1450 - GEN_FW_RANGE(0x8140, 0x815f, FORCEWAKE_RENDER), \ 1451 - GEN_FW_RANGE(0x8160, 0x81ff, 0), /* \ 1452 - 0x8160 - 0x817f: reserved \ 1453 - 0x8180 - 0x81ff: always on */ \ 1454 - GEN_FW_RANGE(0x8200, 0x82ff, FORCEWAKE_GT), \ 1455 - GEN_FW_RANGE(0x8300, 0x84ff, FORCEWAKE_RENDER), \ 1456 - GEN_FW_RANGE(0x8500, 0x8cff, FORCEWAKE_GT), /* \ 1457 - 0x8500 - 0x87ff: gt \ 1458 - 0x8800 - 0x8c7f: reserved \ 1459 - 0x8c80 - 0x8cff: gt (DG2 only) */ \ 1460 - GEN_FW_RANGE(0x8d00, 0x8fff, FORCEWAKE_RENDER), /* \ 1461 - 0x8d00 - 0x8dff: render (DG2 only) \ 1462 - 0x8e00 - 0x8fff: reserved */ \ 1463 - GEN_FW_RANGE(0x9000, 0x94cf, FORCEWAKE_GT), /* \ 1464 - 0x9000 - 0x947f: gt \ 1465 - 0x9480 - 0x94cf: reserved */ \ 1466 - GEN_FW_RANGE(0x94d0, 0x955f, FORCEWAKE_RENDER), \ 1467 - GEN_FW_RANGE(0x9560, 0x967f, 0), /* \ 1468 - 0x9560 - 0x95ff: always on \ 1469 - 0x9600 - 0x967f: reserved */ \ 1470 - GEN_FW_RANGE(0x9680, 0x97ff, FORCEWAKE_RENDER), /* \ 1471 - 0x9680 - 0x96ff: render (DG2 only) \ 1472 - 0x9700 - 0x97ff: reserved */ \ 1473 - GEN_FW_RANGE(0x9800, 0xcfff, FORCEWAKE_GT), /* \ 1474 - 0x9800 - 0xb4ff: gt \ 1475 - 0xb500 - 0xbfff: reserved \ 1476 - 0xc000 - 0xcfff: gt */ \ 1477 - GEN_FW_RANGE(0xd000, 0xd7ff, 0), \ 1478 - GEN_FW_RANGE(0xd800, 0xd87f, FW_RANGE_D800), \ 1479 - GEN_FW_RANGE(0xd880, 0xdbff, FORCEWAKE_GT), \ 1480 - GEN_FW_RANGE(0xdc00, 0xdcff, FORCEWAKE_RENDER), \ 1481 - GEN_FW_RANGE(0xdd00, 0xde7f, FORCEWAKE_GT), /* \ 1482 - 0xdd00 - 0xddff: gt \ 1483 - 0xde00 - 0xde7f: reserved */ \ 1484 - GEN_FW_RANGE(0xde80, 0xe8ff, FORCEWAKE_RENDER), /* \ 1485 - 0xde80 - 0xdfff: render \ 1486 - 0xe000 - 0xe0ff: reserved \ 1487 - 0xe100 - 0xe8ff: render */ \ 1488 - GEN_FW_RANGE(0xe900, 0xffff, FORCEWAKE_GT), /* \ 1489 - 0xe900 - 0xe9ff: gt \ 1490 - 0xea00 - 0xefff: reserved \ 1491 - 0xf000 - 0xffff: gt */ \ 1492 - GEN_FW_RANGE(0x10000, 0x12fff, 0), /* \ 1493 - 0x10000 - 0x11fff: reserved \ 1494 - 0x12000 - 0x127ff: always on \ 1495 - 0x12800 - 0x12fff: reserved */ \ 1496 - GEN_FW_RANGE(0x13000, 0x131ff, FORCEWAKE_MEDIA_VDBOX0), /* DG2 only */ \ 1497 - GEN_FW_RANGE(0x13200, 0x13fff, FORCEWAKE_MEDIA_VDBOX2), /* \ 1498 - 0x13200 - 0x133ff: VD2 (DG2 only) \ 1499 - 0x13400 - 0x13fff: reserved */ \ 1500 - GEN_FW_RANGE(0x14000, 0x141ff, FORCEWAKE_MEDIA_VDBOX0), /* XEHPSDV only */ \ 1501 - GEN_FW_RANGE(0x14200, 0x143ff, FORCEWAKE_MEDIA_VDBOX2), /* XEHPSDV only */ \ 1502 - GEN_FW_RANGE(0x14400, 0x145ff, FORCEWAKE_MEDIA_VDBOX4), /* XEHPSDV only */ \ 1503 - GEN_FW_RANGE(0x14600, 0x147ff, FORCEWAKE_MEDIA_VDBOX6), /* XEHPSDV only */ \ 1504 - GEN_FW_RANGE(0x14800, 0x14fff, FORCEWAKE_RENDER), \ 1505 - GEN_FW_RANGE(0x15000, 0x16dff, FORCEWAKE_GT), /* \ 1506 - 0x15000 - 0x15fff: gt (DG2 only) \ 1507 - 0x16000 - 0x16dff: reserved */ \ 1508 - GEN_FW_RANGE(0x16e00, 0x1ffff, FORCEWAKE_RENDER), \ 1509 - GEN_FW_RANGE(0x20000, 0x21fff, FORCEWAKE_MEDIA_VDBOX0), /* \ 1510 - 0x20000 - 0x20fff: VD0 (XEHPSDV only) \ 1511 - 0x21000 - 0x21fff: reserved */ \ 1512 - GEN_FW_RANGE(0x22000, 0x23fff, FORCEWAKE_GT), \ 1513 - 
GEN_FW_RANGE(0x24000, 0x2417f, 0), /* \ 1514 - 0x24000 - 0x2407f: always on \ 1515 - 0x24080 - 0x2417f: reserved */ \ 1516 - GEN_FW_RANGE(0x24180, 0x249ff, FORCEWAKE_GT), /* \ 1517 - 0x24180 - 0x241ff: gt \ 1518 - 0x24200 - 0x249ff: reserved */ \ 1519 - GEN_FW_RANGE(0x24a00, 0x251ff, FORCEWAKE_RENDER), /* \ 1520 - 0x24a00 - 0x24a7f: render \ 1521 - 0x24a80 - 0x251ff: reserved */ \ 1522 - GEN_FW_RANGE(0x25200, 0x25fff, FORCEWAKE_GT), /* \ 1523 - 0x25200 - 0x252ff: gt \ 1524 - 0x25300 - 0x25fff: reserved */ \ 1525 - GEN_FW_RANGE(0x26000, 0x2ffff, FORCEWAKE_RENDER), /* \ 1526 - 0x26000 - 0x27fff: render \ 1527 - 0x28000 - 0x29fff: reserved \ 1528 - 0x2a000 - 0x2ffff: undocumented */ \ 1529 - GEN_FW_RANGE(0x30000, 0x3ffff, FORCEWAKE_GT), \ 1530 - GEN_FW_RANGE(0x40000, 0x1bffff, 0), \ 1531 - GEN_FW_RANGE(0x1c0000, 0x1c3fff, FORCEWAKE_MEDIA_VDBOX0), /* \ 1532 - 0x1c0000 - 0x1c2bff: VD0 \ 1533 - 0x1c2c00 - 0x1c2cff: reserved \ 1534 - 0x1c2d00 - 0x1c2dff: VD0 \ 1535 - 0x1c2e00 - 0x1c3eff: VD0 (DG2 only) \ 1536 - 0x1c3f00 - 0x1c3fff: VD0 */ \ 1537 - GEN_FW_RANGE(0x1c4000, 0x1c7fff, FORCEWAKE_MEDIA_VDBOX1), /* \ 1538 - 0x1c4000 - 0x1c6bff: VD1 \ 1539 - 0x1c6c00 - 0x1c6cff: reserved \ 1540 - 0x1c6d00 - 0x1c6dff: VD1 \ 1541 - 0x1c6e00 - 0x1c7fff: reserved */ \ 1542 - GEN_FW_RANGE(0x1c8000, 0x1cbfff, FORCEWAKE_MEDIA_VEBOX0), /* \ 1543 - 0x1c8000 - 0x1ca0ff: VE0 \ 1544 - 0x1ca100 - 0x1cbfff: reserved */ \ 1545 - GEN_FW_RANGE(0x1cc000, 0x1ccfff, FORCEWAKE_MEDIA_VDBOX0), \ 1546 - GEN_FW_RANGE(0x1cd000, 0x1cdfff, FORCEWAKE_MEDIA_VDBOX2), \ 1547 - GEN_FW_RANGE(0x1ce000, 0x1cefff, FORCEWAKE_MEDIA_VDBOX4), \ 1548 - GEN_FW_RANGE(0x1cf000, 0x1cffff, FORCEWAKE_MEDIA_VDBOX6), \ 1549 - GEN_FW_RANGE(0x1d0000, 0x1d3fff, FORCEWAKE_MEDIA_VDBOX2), /* \ 1550 - 0x1d0000 - 0x1d2bff: VD2 \ 1551 - 0x1d2c00 - 0x1d2cff: reserved \ 1552 - 0x1d2d00 - 0x1d2dff: VD2 \ 1553 - 0x1d2e00 - 0x1d3dff: VD2 (DG2 only) \ 1554 - 0x1d3e00 - 0x1d3eff: reserved \ 1555 - 0x1d3f00 - 0x1d3fff: VD2 */ \ 1556 - GEN_FW_RANGE(0x1d4000, 0x1d7fff, FORCEWAKE_MEDIA_VDBOX3), /* \ 1557 - 0x1d4000 - 0x1d6bff: VD3 \ 1558 - 0x1d6c00 - 0x1d6cff: reserved \ 1559 - 0x1d6d00 - 0x1d6dff: VD3 \ 1560 - 0x1d6e00 - 0x1d7fff: reserved */ \ 1561 - GEN_FW_RANGE(0x1d8000, 0x1dffff, FORCEWAKE_MEDIA_VEBOX1), /* \ 1562 - 0x1d8000 - 0x1da0ff: VE1 \ 1563 - 0x1da100 - 0x1dffff: reserved */ \ 1564 - GEN_FW_RANGE(0x1e0000, 0x1e3fff, FORCEWAKE_MEDIA_VDBOX4), /* \ 1565 - 0x1e0000 - 0x1e2bff: VD4 \ 1566 - 0x1e2c00 - 0x1e2cff: reserved \ 1567 - 0x1e2d00 - 0x1e2dff: VD4 \ 1568 - 0x1e2e00 - 0x1e3eff: reserved \ 1569 - 0x1e3f00 - 0x1e3fff: VD4 */ \ 1570 - GEN_FW_RANGE(0x1e4000, 0x1e7fff, FORCEWAKE_MEDIA_VDBOX5), /* \ 1571 - 0x1e4000 - 0x1e6bff: VD5 \ 1572 - 0x1e6c00 - 0x1e6cff: reserved \ 1573 - 0x1e6d00 - 0x1e6dff: VD5 \ 1574 - 0x1e6e00 - 0x1e7fff: reserved */ \ 1575 - GEN_FW_RANGE(0x1e8000, 0x1effff, FORCEWAKE_MEDIA_VEBOX2), /* \ 1576 - 0x1e8000 - 0x1ea0ff: VE2 \ 1577 - 0x1ea100 - 0x1effff: reserved */ \ 1578 - GEN_FW_RANGE(0x1f0000, 0x1f3fff, FORCEWAKE_MEDIA_VDBOX6), /* \ 1579 - 0x1f0000 - 0x1f2bff: VD6 \ 1580 - 0x1f2c00 - 0x1f2cff: reserved \ 1581 - 0x1f2d00 - 0x1f2dff: VD6 \ 1582 - 0x1f2e00 - 0x1f3eff: reserved \ 1583 - 0x1f3f00 - 0x1f3fff: VD6 */ \ 1584 - GEN_FW_RANGE(0x1f4000, 0x1f7fff, FORCEWAKE_MEDIA_VDBOX7), /* \ 1585 - 0x1f4000 - 0x1f6bff: VD7 \ 1586 - 0x1f6c00 - 0x1f6cff: reserved \ 1587 - 0x1f6d00 - 0x1f6dff: VD7 \ 1588 - 0x1f6e00 - 0x1f7fff: reserved */ \ 1589 - GEN_FW_RANGE(0x1f8000, 0x1fa0ff, FORCEWAKE_MEDIA_VEBOX3), 1590 - 1591 - static const struct 
intel_forcewake_range __xehp_fw_ranges[] = { 1592 - XEHP_FWRANGES(FORCEWAKE_GT) 1593 - }; 1594 - 1595 1474 static const struct intel_forcewake_range __dg2_fw_ranges[] = { 1596 - XEHP_FWRANGES(FORCEWAKE_RENDER) 1597 - }; 1598 - 1599 - static const struct intel_forcewake_range __pvc_fw_ranges[] = { 1600 - GEN_FW_RANGE(0x0, 0xaff, 0), 1601 - GEN_FW_RANGE(0xb00, 0xbff, FORCEWAKE_GT), 1602 - GEN_FW_RANGE(0xc00, 0xfff, 0), 1603 - GEN_FW_RANGE(0x1000, 0x1fff, FORCEWAKE_GT), 1475 + GEN_FW_RANGE(0x0, 0x1fff, 0), /* 1476 + 0x0 - 0xaff: reserved 1477 + 0xb00 - 0x1fff: always on */ 1604 1478 GEN_FW_RANGE(0x2000, 0x26ff, FORCEWAKE_RENDER), 1605 - GEN_FW_RANGE(0x2700, 0x2fff, FORCEWAKE_GT), 1606 - GEN_FW_RANGE(0x3000, 0x3fff, FORCEWAKE_RENDER), 1607 - GEN_FW_RANGE(0x4000, 0x813f, FORCEWAKE_GT), /* 1608 - 0x4000 - 0x4aff: gt 1479 + GEN_FW_RANGE(0x2700, 0x4aff, FORCEWAKE_GT), 1480 + GEN_FW_RANGE(0x4b00, 0x51ff, 0), /* 1609 1481 0x4b00 - 0x4fff: reserved 1610 - 0x5000 - 0x51ff: gt 1611 - 0x5200 - 0x52ff: reserved 1612 - 0x5300 - 0x53ff: gt 1613 - 0x5400 - 0x7fff: reserved 1614 - 0x8000 - 0x813f: gt */ 1615 - GEN_FW_RANGE(0x8140, 0x817f, FORCEWAKE_RENDER), 1616 - GEN_FW_RANGE(0x8180, 0x81ff, 0), 1617 - GEN_FW_RANGE(0x8200, 0x94cf, FORCEWAKE_GT), /* 1618 - 0x8200 - 0x82ff: gt 1619 - 0x8300 - 0x84ff: reserved 1620 - 0x8500 - 0x887f: gt 1621 - 0x8880 - 0x8a7f: reserved 1622 - 0x8a80 - 0x8aff: gt 1623 - 0x8b00 - 0x8fff: reserved 1482 + 0x5000 - 0x51ff: always on */ 1483 + GEN_FW_RANGE(0x5200, 0x7fff, FORCEWAKE_RENDER), 1484 + GEN_FW_RANGE(0x8000, 0x813f, FORCEWAKE_GT), 1485 + GEN_FW_RANGE(0x8140, 0x815f, FORCEWAKE_RENDER), 1486 + GEN_FW_RANGE(0x8160, 0x81ff, 0), /* 1487 + 0x8160 - 0x817f: reserved 1488 + 0x8180 - 0x81ff: always on */ 1489 + GEN_FW_RANGE(0x8200, 0x82ff, FORCEWAKE_GT), 1490 + GEN_FW_RANGE(0x8300, 0x84ff, FORCEWAKE_RENDER), 1491 + GEN_FW_RANGE(0x8500, 0x8cff, FORCEWAKE_GT), /* 1492 + 0x8500 - 0x87ff: gt 1493 + 0x8800 - 0x8c7f: reserved 1494 + 0x8c80 - 0x8cff: gt (DG2 only) */ 1495 + GEN_FW_RANGE(0x8d00, 0x8fff, FORCEWAKE_RENDER), /* 1496 + 0x8d00 - 0x8dff: render (DG2 only) 1497 + 0x8e00 - 0x8fff: reserved */ 1498 + GEN_FW_RANGE(0x9000, 0x94cf, FORCEWAKE_GT), /* 1624 1499 0x9000 - 0x947f: gt 1625 1500 0x9480 - 0x94cf: reserved */ 1626 1501 GEN_FW_RANGE(0x94d0, 0x955f, FORCEWAKE_RENDER), ··· 1470 1673 0x9800 - 0xb4ff: gt 1471 1674 0xb500 - 0xbfff: reserved 1472 1675 0xc000 - 0xcfff: gt */ 1473 - GEN_FW_RANGE(0xd000, 0xd3ff, 0), 1474 - GEN_FW_RANGE(0xd400, 0xdbff, FORCEWAKE_GT), 1676 + GEN_FW_RANGE(0xd000, 0xd7ff, 0), 1677 + GEN_FW_RANGE(0xd800, 0xd87f, FORCEWAKE_RENDER), 1678 + GEN_FW_RANGE(0xd880, 0xdbff, FORCEWAKE_GT), 1475 1679 GEN_FW_RANGE(0xdc00, 0xdcff, FORCEWAKE_RENDER), 1476 1680 GEN_FW_RANGE(0xdd00, 0xde7f, FORCEWAKE_GT), /* 1477 1681 0xdd00 - 0xddff: gt 1478 1682 0xde00 - 0xde7f: reserved */ 1479 1683 GEN_FW_RANGE(0xde80, 0xe8ff, FORCEWAKE_RENDER), /* 1480 - 0xde80 - 0xdeff: render 1481 - 0xdf00 - 0xe1ff: reserved 1482 - 0xe200 - 0xe7ff: render 1483 - 0xe800 - 0xe8ff: reserved */ 1484 - GEN_FW_RANGE(0xe900, 0x11fff, FORCEWAKE_GT), /* 1485 - 0xe900 - 0xe9ff: gt 1486 - 0xea00 - 0xebff: reserved 1487 - 0xec00 - 0xffff: gt 1488 - 0x10000 - 0x11fff: reserved */ 1489 - GEN_FW_RANGE(0x12000, 0x12fff, 0), /* 1684 + 0xde80 - 0xdfff: render 1685 + 0xe000 - 0xe0ff: reserved 1686 + 0xe100 - 0xe8ff: render */ 1687 + GEN_FW_RANGE(0xe900, 0xffff, FORCEWAKE_GT), /* 1688 + 0xe900 - 0xe9ff: gt 1689 + 0xea00 - 0xefff: reserved 1690 + 0xf000 - 0xffff: gt */ 1691 + GEN_FW_RANGE(0x10000, 0x12fff, 0), /* 1692 + 
0x10000 - 0x11fff: reserved 1490 1693 0x12000 - 0x127ff: always on 1491 1694 0x12800 - 0x12fff: reserved */ 1492 - GEN_FW_RANGE(0x13000, 0x19fff, FORCEWAKE_GT), /* 1493 - 0x13000 - 0x135ff: gt 1494 - 0x13600 - 0x147ff: reserved 1495 - 0x14800 - 0x153ff: gt 1496 - 0x15400 - 0x19fff: reserved */ 1497 - GEN_FW_RANGE(0x1a000, 0x21fff, FORCEWAKE_RENDER), /* 1498 - 0x1a000 - 0x1ffff: render 1695 + GEN_FW_RANGE(0x13000, 0x131ff, FORCEWAKE_MEDIA_VDBOX0), 1696 + GEN_FW_RANGE(0x13200, 0x147ff, FORCEWAKE_MEDIA_VDBOX2), /* 1697 + 0x13200 - 0x133ff: VD2 (DG2 only) 1698 + 0x13400 - 0x147ff: reserved */ 1699 + GEN_FW_RANGE(0x14800, 0x14fff, FORCEWAKE_RENDER), 1700 + GEN_FW_RANGE(0x15000, 0x16dff, FORCEWAKE_GT), /* 1701 + 0x15000 - 0x15fff: gt (DG2 only) 1702 + 0x16000 - 0x16dff: reserved */ 1703 + GEN_FW_RANGE(0x16e00, 0x21fff, FORCEWAKE_RENDER), /* 1704 + 0x16e00 - 0x1ffff: render 1499 1705 0x20000 - 0x21fff: reserved */ 1500 1706 GEN_FW_RANGE(0x22000, 0x23fff, FORCEWAKE_GT), 1501 1707 GEN_FW_RANGE(0x24000, 0x2417f, 0), /* 1502 - 24000 - 0x2407f: always on 1503 - 24080 - 0x2417f: reserved */ 1504 - GEN_FW_RANGE(0x24180, 0x25fff, FORCEWAKE_GT), /* 1708 + 0x24000 - 0x2407f: always on 1709 + 0x24080 - 0x2417f: reserved */ 1710 + GEN_FW_RANGE(0x24180, 0x249ff, FORCEWAKE_GT), /* 1505 1711 0x24180 - 0x241ff: gt 1506 - 0x24200 - 0x251ff: reserved 1712 + 0x24200 - 0x249ff: reserved */ 1713 + GEN_FW_RANGE(0x24a00, 0x251ff, FORCEWAKE_RENDER), /* 1714 + 0x24a00 - 0x24a7f: render 1715 + 0x24a80 - 0x251ff: reserved */ 1716 + GEN_FW_RANGE(0x25200, 0x25fff, FORCEWAKE_GT), /* 1507 1717 0x25200 - 0x252ff: gt 1508 1718 0x25300 - 0x25fff: reserved */ 1509 1719 GEN_FW_RANGE(0x26000, 0x2ffff, FORCEWAKE_RENDER), /* 1510 1720 0x26000 - 0x27fff: render 1511 - 0x28000 - 0x2ffff: reserved */ 1721 + 0x28000 - 0x29fff: reserved 1722 + 0x2a000 - 0x2ffff: undocumented */ 1512 1723 GEN_FW_RANGE(0x30000, 0x3ffff, FORCEWAKE_GT), 1513 1724 GEN_FW_RANGE(0x40000, 0x1bffff, 0), 1514 1725 GEN_FW_RANGE(0x1c0000, 0x1c3fff, FORCEWAKE_MEDIA_VDBOX0), /* 1515 1726 0x1c0000 - 0x1c2bff: VD0 1516 1727 0x1c2c00 - 0x1c2cff: reserved 1517 1728 0x1c2d00 - 0x1c2dff: VD0 1518 - 0x1c2e00 - 0x1c3eff: reserved 1729 + 0x1c2e00 - 0x1c3eff: VD0 1519 1730 0x1c3f00 - 0x1c3fff: VD0 */ 1520 - GEN_FW_RANGE(0x1c4000, 0x1cffff, FORCEWAKE_MEDIA_VDBOX1), /* 1521 - 0x1c4000 - 0x1c6aff: VD1 1522 - 0x1c6b00 - 0x1c7eff: reserved 1523 - 0x1c7f00 - 0x1c7fff: VD1 1524 - 0x1c8000 - 0x1cffff: reserved */ 1525 - GEN_FW_RANGE(0x1d0000, 0x23ffff, FORCEWAKE_MEDIA_VDBOX2), /* 1526 - 0x1d0000 - 0x1d2aff: VD2 1527 - 0x1d2b00 - 0x1d3eff: reserved 1528 - 0x1d3f00 - 0x1d3fff: VD2 1529 - 0x1d4000 - 0x23ffff: reserved */ 1530 - GEN_FW_RANGE(0x240000, 0x3dffff, 0), 1531 - GEN_FW_RANGE(0x3e0000, 0x3effff, FORCEWAKE_GT), 1731 + GEN_FW_RANGE(0x1c4000, 0x1c7fff, FORCEWAKE_MEDIA_VDBOX1), /* 1732 + 0x1c4000 - 0x1c6bff: VD1 1733 + 0x1c6c00 - 0x1c6cff: reserved 1734 + 0x1c6d00 - 0x1c6dff: VD1 1735 + 0x1c6e00 - 0x1c7fff: reserved */ 1736 + GEN_FW_RANGE(0x1c8000, 0x1cbfff, FORCEWAKE_MEDIA_VEBOX0), /* 1737 + 0x1c8000 - 0x1ca0ff: VE0 1738 + 0x1ca100 - 0x1cbfff: reserved */ 1739 + GEN_FW_RANGE(0x1cc000, 0x1ccfff, FORCEWAKE_MEDIA_VDBOX0), 1740 + GEN_FW_RANGE(0x1cd000, 0x1cdfff, FORCEWAKE_MEDIA_VDBOX2), 1741 + GEN_FW_RANGE(0x1ce000, 0x1cefff, FORCEWAKE_MEDIA_VDBOX4), 1742 + GEN_FW_RANGE(0x1cf000, 0x1cffff, FORCEWAKE_MEDIA_VDBOX6), 1743 + GEN_FW_RANGE(0x1d0000, 0x1d3fff, FORCEWAKE_MEDIA_VDBOX2), /* 1744 + 0x1d0000 - 0x1d2bff: VD2 1745 + 0x1d2c00 - 0x1d2cff: reserved 1746 + 0x1d2d00 - 0x1d2dff: VD2 1747 + 
0x1d2e00 - 0x1d3dff: VD2 1748 + 0x1d3e00 - 0x1d3eff: reserved 1749 + 0x1d3f00 - 0x1d3fff: VD2 */ 1750 + GEN_FW_RANGE(0x1d4000, 0x1d7fff, FORCEWAKE_MEDIA_VDBOX3), /* 1751 + 0x1d4000 - 0x1d6bff: VD3 1752 + 0x1d6c00 - 0x1d6cff: reserved 1753 + 0x1d6d00 - 0x1d6dff: VD3 1754 + 0x1d6e00 - 0x1d7fff: reserved */ 1755 + GEN_FW_RANGE(0x1d8000, 0x1dffff, FORCEWAKE_MEDIA_VEBOX1), /* 1756 + 0x1d8000 - 0x1da0ff: VE1 1757 + 0x1da100 - 0x1dffff: reserved */ 1758 + GEN_FW_RANGE(0x1e0000, 0x1e3fff, FORCEWAKE_MEDIA_VDBOX4), /* 1759 + 0x1e0000 - 0x1e2bff: VD4 1760 + 0x1e2c00 - 0x1e2cff: reserved 1761 + 0x1e2d00 - 0x1e2dff: VD4 1762 + 0x1e2e00 - 0x1e3eff: reserved 1763 + 0x1e3f00 - 0x1e3fff: VD4 */ 1764 + GEN_FW_RANGE(0x1e4000, 0x1e7fff, FORCEWAKE_MEDIA_VDBOX5), /* 1765 + 0x1e4000 - 0x1e6bff: VD5 1766 + 0x1e6c00 - 0x1e6cff: reserved 1767 + 0x1e6d00 - 0x1e6dff: VD5 1768 + 0x1e6e00 - 0x1e7fff: reserved */ 1769 + GEN_FW_RANGE(0x1e8000, 0x1effff, FORCEWAKE_MEDIA_VEBOX2), /* 1770 + 0x1e8000 - 0x1ea0ff: VE2 1771 + 0x1ea100 - 0x1effff: reserved */ 1772 + GEN_FW_RANGE(0x1f0000, 0x1f3fff, FORCEWAKE_MEDIA_VDBOX6), /* 1773 + 0x1f0000 - 0x1f2bff: VD6 1774 + 0x1f2c00 - 0x1f2cff: reserved 1775 + 0x1f2d00 - 0x1f2dff: VD6 1776 + 0x1f2e00 - 0x1f3eff: reserved 1777 + 0x1f3f00 - 0x1f3fff: VD6 */ 1778 + GEN_FW_RANGE(0x1f4000, 0x1f7fff, FORCEWAKE_MEDIA_VDBOX7), /* 1779 + 0x1f4000 - 0x1f6bff: VD7 1780 + 0x1f6c00 - 0x1f6cff: reserved 1781 + 0x1f6d00 - 0x1f6dff: VD7 1782 + 0x1f6e00 - 0x1f7fff: reserved */ 1783 + GEN_FW_RANGE(0x1f8000, 0x1fa0ff, FORCEWAKE_MEDIA_VEBOX3), 1532 1784 }; 1533 1785 1534 1786 static const struct intel_forcewake_range __mtl_fw_ranges[] = { ··· 2422 2576 ASSIGN_FW_DOMAINS_TABLE(uncore, __mtl_fw_ranges); 2423 2577 ASSIGN_SHADOW_TABLE(uncore, mtl_shadowed_regs); 2424 2578 ASSIGN_WRITE_MMIO_VFUNCS(uncore, fwtable); 2425 - } else if (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 60)) { 2426 - ASSIGN_FW_DOMAINS_TABLE(uncore, __pvc_fw_ranges); 2427 - ASSIGN_SHADOW_TABLE(uncore, pvc_shadowed_regs); 2428 - ASSIGN_WRITE_MMIO_VFUNCS(uncore, fwtable); 2429 2579 } else if (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 55)) { 2430 2580 ASSIGN_FW_DOMAINS_TABLE(uncore, __dg2_fw_ranges); 2431 2581 ASSIGN_SHADOW_TABLE(uncore, dg2_shadowed_regs); 2432 - ASSIGN_WRITE_MMIO_VFUNCS(uncore, fwtable); 2433 - } else if (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 50)) { 2434 - ASSIGN_FW_DOMAINS_TABLE(uncore, __xehp_fw_ranges); 2435 - ASSIGN_SHADOW_TABLE(uncore, gen12_shadowed_regs); 2436 2582 ASSIGN_WRITE_MMIO_VFUNCS(uncore, fwtable); 2437 2583 } else if (GRAPHICS_VER(i915) >= 12) { 2438 2584 ASSIGN_FW_DOMAINS_TABLE(uncore, __gen12_fw_ranges); ··· 2572 2734 * the forcewake domain if any of the other engines 2573 2735 * in the same media slice are present. 2574 2736 */ 2575 - if (GRAPHICS_VER_FULL(uncore->i915) >= IP_VER(12, 50) && i % 2 == 0) { 2737 + if (GRAPHICS_VER_FULL(uncore->i915) >= IP_VER(12, 55) && i % 2 == 0) { 2576 2738 if ((i + 1 < I915_MAX_VCS) && HAS_ENGINE(gt, _VCS(i + 1))) 2577 2739 continue; 2578 2740
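The forcewake tables reworked above (__dg2_fw_ranges[] kept as a plain table, the shared XEHP_FWRANGES()-based __xehp_fw_ranges[] and __pvc_fw_ranges[] dropped) map MMIO offset ranges to the forcewake domains that must be held awake before a register is touched, with domain 0 marking the "always on" ranges. A hedged sketch of such a lookup, using a locally defined struct rather than the driver's real types and a linear scan where the driver searches its sorted table:

    #include <linux/types.h>

    /* Illustrative only: resolve an MMIO offset to its forcewake domain mask. */
    struct fw_range {
            u32 start;
            u32 end;
            u32 domains;    /* 0 means no forcewake needed ("always on") */
    };

    static u32 lookup_fw_domains(const struct fw_range *ranges,
                                 unsigned int count, u32 offset)
    {
            unsigned int i;

            for (i = 0; i < count; i++)
                    if (offset >= ranges[i].start && offset <= ranges[i].end)
                            return ranges[i].domains;

            return 0;
    }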
-3
drivers/gpu/drm/i915/selftests/intel_uncore.c
··· 71 71 { gen11_shadowed_regs, ARRAY_SIZE(gen11_shadowed_regs) }, 72 72 { gen12_shadowed_regs, ARRAY_SIZE(gen12_shadowed_regs) }, 73 73 { dg2_shadowed_regs, ARRAY_SIZE(dg2_shadowed_regs) }, 74 - { pvc_shadowed_regs, ARRAY_SIZE(pvc_shadowed_regs) }, 75 74 { mtl_shadowed_regs, ARRAY_SIZE(mtl_shadowed_regs) }, 76 75 { xelpmp_shadowed_regs, ARRAY_SIZE(xelpmp_shadowed_regs) }, 77 76 }; ··· 118 119 { __gen9_fw_ranges, ARRAY_SIZE(__gen9_fw_ranges), true }, 119 120 { __gen11_fw_ranges, ARRAY_SIZE(__gen11_fw_ranges), true }, 120 121 { __gen12_fw_ranges, ARRAY_SIZE(__gen12_fw_ranges), true }, 121 - { __xehp_fw_ranges, ARRAY_SIZE(__xehp_fw_ranges), true }, 122 - { __pvc_fw_ranges, ARRAY_SIZE(__pvc_fw_ranges), true }, 123 122 { __mtl_fw_ranges, ARRAY_SIZE(__mtl_fw_ranges), true }, 124 123 { __xelpmp_fw_ranges, ARRAY_SIZE(__xelpmp_fw_ranges), true }, 125 124 };
+1 -1
drivers/gpu/drm/nouveau/nouveau_dp.c
··· 181 181 if (nouveau_mst) { 182 182 mstm = outp->dp.mstm; 183 183 if (mstm) 184 - mstm->can_mst = drm_dp_read_mst_cap(aux, dpcd); 184 + mstm->can_mst = drm_dp_read_mst_cap(aux, dpcd) == DRM_DP_MST; 185 185 } 186 186 187 187 if (nouveau_dp_has_sink_count(connector, outp)) {
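The nouveau change above reflects drm_dp_read_mst_cap() no longer returning a plain bool: callers now compare the result against an explicit mode value such as DRM_DP_MST, so only full MST support sets can_mst. A hedged sketch of an equivalent caller written as a branch; only DRM_DP_MST is taken from this diff, everything else is illustrative:

    /* Hypothetical caller: treat anything other than full MST as SST. */
    switch (drm_dp_read_mst_cap(aux, dpcd)) {
    case DRM_DP_MST:
            mstm->can_mst = true;   /* sink supports a full MST topology */
            break;
    default:
            mstm->can_mst = false;  /* SST, with or without sideband messaging */
            break;
    }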
+1 -3
drivers/gpu/drm/xe/Makefile
··· 172 172 -Ddrm_i915_gem_object=xe_bo \ 173 173 -Ddrm_i915_private=xe_device 174 174 175 - CFLAGS_i915-display/intel_fbdev.o = -Wno-override-init 176 - CFLAGS_i915-display/intel_display_device.o = -Wno-override-init 177 - 178 175 # Rule to build SOC code shared with i915 179 176 $(obj)/i915-soc/%.o: $(srctree)/drivers/gpu/drm/i915/soc/%.c FORCE 180 177 $(call cmd,force_checksrc) ··· 275 278 i915-display/intel_vdsc.o \ 276 279 i915-display/intel_vga.o \ 277 280 i915-display/intel_vrr.o \ 281 + i915-display/intel_dmc_wl.o \ 278 282 i915-display/intel_wm.o \ 279 283 i915-display/skl_scaler.o \ 280 284 i915-display/skl_universal_plane.o \
-40
drivers/gpu/drm/xe/compat-i915-headers/i915_drv.h
···
 #include "xe_bo.h"
 #include "xe_pm.h"
 #include "xe_step.h"
-#include "i915_gem.h"
 #include "i915_gem_stolen.h"
 #include "i915_gpu_error.h"
 #include "i915_reg_defs.h"
 #include "i915_utils.h"
 #include "intel_gt_types.h"
 #include "intel_step.h"
-#include "intel_uc_fw.h"
 #include "intel_uncore.h"
 #include "intel_runtime_pm.h"
 #include <linux/pm_runtime.h>
···
 	return dev_get_drvdata(kdev);
 }
 
-
-#define INTEL_JASPERLAKE 0
-#define INTEL_ELKHARTLAKE 0
 #define IS_PLATFORM(xe, x) ((xe)->info.platform == x)
 #define INTEL_INFO(dev_priv) (&((dev_priv)->info))
-#define INTEL_DEVID(dev_priv) ((dev_priv)->info.devid)
 #define IS_I830(dev_priv) (dev_priv && 0)
 #define IS_I845G(dev_priv) (dev_priv && 0)
 #define IS_I85X(dev_priv) (dev_priv && 0)
···
 #define IS_DG1(dev_priv) IS_PLATFORM(dev_priv, XE_DG1)
 #define IS_ALDERLAKE_S(dev_priv) IS_PLATFORM(dev_priv, XE_ALDERLAKE_S)
 #define IS_ALDERLAKE_P(dev_priv) IS_PLATFORM(dev_priv, XE_ALDERLAKE_P)
-#define IS_XEHPSDV(dev_priv) (dev_priv && 0)
 #define IS_DG2(dev_priv) IS_PLATFORM(dev_priv, XE_DG2)
-#define IS_PONTEVECCHIO(dev_priv) IS_PLATFORM(dev_priv, XE_PVC)
 #define IS_METEORLAKE(dev_priv) IS_PLATFORM(dev_priv, XE_METEORLAKE)
 #define IS_LUNARLAKE(dev_priv) IS_PLATFORM(dev_priv, XE_LUNARLAKE)
···
 #define IP_VER(ver, rel) ((ver) << 8 | (rel))
 
-#define INTEL_DISPLAY_ENABLED(xe) (HAS_DISPLAY((xe)) && !intel_opregion_headless_sku((xe)))
-
-#define IS_GRAPHICS_VER(xe, first, last) \
-	((xe)->info.graphics_verx100 >= first * 100 && \
-	 (xe)->info.graphics_verx100 <= (last*100 + 99))
 #define IS_MOBILE(xe) (xe && 0)
-#define HAS_LLC(xe) (!IS_DGFX((xe)))
 
 #define HAS_GMD_ID(xe) GRAPHICS_VERx100(xe) >= 1270
 
 /* Workarounds not handled yet */
 #define IS_DISPLAY_STEP(xe, first, last) ({u8 __step = (xe)->info.step.display; first <= __step && __step <= last; })
-#define IS_GRAPHICS_STEP(xe, first, last) ({u8 __step = (xe)->info.step.graphics; first <= __step && __step <= last; })
 
 #define IS_LP(xe) (0)
 #define IS_GEN9_LP(xe) (0)
···
 #define IS_KABYLAKE_ULT(xe) (xe && 0)
 #define IS_SKYLAKE_ULT(xe) (xe && 0)
 
-#define IS_DG1_GRAPHICS_STEP(xe, first, last) (IS_DG1(xe) && IS_GRAPHICS_STEP(xe, first, last))
-#define IS_DG2_GRAPHICS_STEP(xe, variant, first, last) \
-	((xe)->info.subplatform == XE_SUBPLATFORM_DG2_ ## variant && \
-	 IS_GRAPHICS_STEP(xe, first, last))
-#define IS_XEHPSDV_GRAPHICS_STEP(xe, first, last) (IS_XEHPSDV(xe) && IS_GRAPHICS_STEP(xe, first, last))
-
-/* XXX: No basedie stepping support yet */
-#define IS_PVC_BD_STEP(xe, first, last) (!WARN_ON(1) && IS_PONTEVECCHIO(xe))
-
-#define IS_TIGERLAKE_DISPLAY_STEP(xe, first, last) (IS_TIGERLAKE(xe) && IS_DISPLAY_STEP(xe, first, last))
-#define IS_ROCKETLAKE_DISPLAY_STEP(xe, first, last) (IS_ROCKETLAKE(xe) && IS_DISPLAY_STEP(xe, first, last))
-#define IS_DG1_DISPLAY_STEP(xe, first, last) (IS_DG1(xe) && IS_DISPLAY_STEP(xe, first, last))
-#define IS_DG2_DISPLAY_STEP(xe, first, last) (IS_DG2(xe) && IS_DISPLAY_STEP(xe, first, last))
-#define IS_ADLP_DISPLAY_STEP(xe, first, last) (IS_ALDERLAKE_P(xe) && IS_DISPLAY_STEP(xe, first, last))
-#define IS_ADLS_DISPLAY_STEP(xe, first, last) (IS_ALDERLAKE_S(xe) && IS_DISPLAY_STEP(xe, first, last))
-#define IS_JSL_EHL_DISPLAY_STEP(xe, first, last) (IS_JSL_EHL(xe) && IS_DISPLAY_STEP(xe, first, last))
-#define IS_MTL_DISPLAY_STEP(xe, first, last) (IS_METEORLAKE(xe) && IS_DISPLAY_STEP(xe, first, last))
-
-/* FIXME: Add subplatform here */
-#define IS_MTL_GRAPHICS_STEP(xe, sub, first, last) (IS_METEORLAKE(xe) && IS_DISPLAY_STEP(xe, first, last))
-
 #define IS_DG2_G10(xe) ((xe)->info.subplatform == XE_SUBPLATFORM_DG2_G10)
 #define IS_DG2_G11(xe) ((xe)->info.subplatform == XE_SUBPLATFORM_DG2_G11)
 #define IS_DG2_G12(xe) ((xe)->info.subplatform == XE_SUBPLATFORM_DG2_G12)
···
 #define IS_ICL_WITH_PORT_F(xe) (xe && 0)
 #define HAS_FLAT_CCS(xe) (xe_device_has_flat_ccs(xe))
 #define to_intel_bo(x) gem_to_xe_bo((x))
-#define mkwrite_device_info(xe) (INTEL_INFO(xe))
 
 #define HAS_128_BYTE_Y_TILING(xe) (xe || 1)
-
-#define intel_has_gpu_reset(a) (a && 0)
 
 #include "intel_wakeref.h"
 
···
 #define RUNTIME_INFO(xe) (&(xe)->info.i915_runtime)
 
 #define FORCEWAKE_ALL XE_FORCEWAKE_ALL
-#define HPD_STORM_DEFAULT_THRESHOLD 50
 
 #ifdef CONFIG_ARM64
 /*
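Note on the compat layer above: the header exists so display code shared between i915 and xe keeps compiling while unused i915-only macros are pruned. A minimal, hypothetical usage sketch of the remaining platform/stepping checks (the workaround helper below is made up for illustration; STEP_A0/STEP_B0 come from the existing stepping enums):

  /* Hypothetical caller of the compat platform/stepping checks. */
  static void adlp_wa_example(struct xe_device *xe)
  {
  	if (IS_ALDERLAKE_P(xe) && IS_DISPLAY_STEP(xe, STEP_A0, STEP_B0))
  		apply_adlp_a0_display_wa(xe);	/* illustrative helper, not in the tree */
  }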
-6
drivers/gpu/drm/xe/compat-i915-headers/i915_fixed.h
···
-/* SPDX-License-Identifier: MIT */
-/*
- * Copyright © 2023 Intel Corporation
- */
-
-#include "../../i915/i915_fixed.h"
-9
drivers/gpu/drm/xe/compat-i915-headers/i915_gem.h
···
-/* SPDX-License-Identifier: MIT */
-/*
- * Copyright © 2023 Intel Corporation
- */
-
-#ifndef __I915_GEM_H__
-#define __I915_GEM_H__
-#define GEM_BUG_ON
-#endif
-26
drivers/gpu/drm/xe/compat-i915-headers/i915_vgpu.h
···
 #include <linux/types.h>
 
 struct drm_i915_private;
-struct i915_ggtt;
 
-static inline void intel_vgpu_detect(struct drm_i915_private *i915)
-{
-}
 static inline bool intel_vgpu_active(struct drm_i915_private *i915)
 {
 	return false;
-}
-static inline void intel_vgpu_register(struct drm_i915_private *i915)
-{
-}
-static inline bool intel_vgpu_has_full_ppgtt(struct drm_i915_private *i915)
-{
-	return false;
-}
-static inline bool intel_vgpu_has_hwsp_emulation(struct drm_i915_private *i915)
-{
-	return false;
-}
-static inline bool intel_vgpu_has_huge_gtt(struct drm_i915_private *i915)
-{
-	return false;
-}
-static inline int intel_vgt_balloon(struct i915_ggtt *ggtt)
-{
-	return 0;
-}
-static inline void intel_vgt_deballoon(struct i915_ggtt *ggtt)
-{
 }
 
 #endif /* _I915_VGPU_H_ */
-11
drivers/gpu/drm/xe/compat-i915-headers/intel_uc_fw.h
···
-/* SPDX-License-Identifier: MIT */
-/*
- * Copyright © 2023 Intel Corporation
- */
-
-#ifndef _INTEL_UC_FW_H_
-#define _INTEL_UC_FW_H_
-
-#define INTEL_UC_FIRMWARE_URL "https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git"
-
-#endif
+2 -14
drivers/gpu/drm/xe/xe_device_types.h
···
 	/* For pcode */
 	struct mutex sb_lock;
 
-	/* Should be in struct intel_display */
-	u32 skl_preferred_vco_freq, max_dotclk_freq, hti_state;
-	u8 snps_phy_failed_calibration;
-	struct drm_atomic_state *modeset_restore_state;
-	struct list_head global_obj_list;
+	/* only to allow build, not used functionally */
+	u32 irq_mask;
 
-	union {
-		/* only to allow build, not used functionally */
-		u32 irq_mask;
-		u32 de_irq_mask[I915_MAX_PIPES];
-	};
-	u32 pipestat_irq_mask[I915_MAX_PIPES];
-
-	bool display_irqs_enabled;
 	u32 enabled_irq_mask;
 
 	struct intel_uncore {
···
 		unsigned int hpll_freq;
 		unsigned int czclk_freq;
 		unsigned int fsb_freq, mem_freq, is_ddr3;
-		u8 vblank_enabled;
 	};
 	struct {
 		const char *dmc_firmware_path;
+1
drivers/gpu/drm/xe/xe_pci.c
···
 
 static const struct xe_device_desc lnl_desc = {
 	PLATFORM(XE_LUNARLAKE),
+	.has_display = true,
 	.require_force_probe = true,
 };
 
+11
include/drm/display/drm_dp.h
···
 
 #define DP_DPRX_FEATURE_ENUMERATION_LIST_CONT_1 0x2214 /* 2.0 E11 */
 # define DP_ADAPTIVE_SYNC_SDP_SUPPORTED (1 << 0)
+# define DP_ADAPTIVE_SYNC_SDP_OPERATION_MODE GENMASK(1, 0)
+# define DP_ADAPTIVE_SYNC_SDP_LENGTH GENMASK(5, 0)
 # define DP_AS_SDP_FIRST_HALF_LINE_OR_3840_PIXEL_CYCLE_WINDOW_NOT_SUPPORTED (1 << 1)
 # define DP_VSC_EXT_SDP_FRAMEWORK_VERSION_1_SUPPORTED (1 << 4)
···
 #define DP_SDP_AUDIO_COPYMANAGEMENT 0x05 /* DP 1.2 */
 #define DP_SDP_ISRC 0x06 /* DP 1.2 */
 #define DP_SDP_VSC 0x07 /* DP 1.2 */
+#define DP_SDP_ADAPTIVE_SYNC 0x22 /* DP 1.4 */
 #define DP_SDP_CAMERA_GENERIC(i) (0x08 + (i)) /* 0-7, DP 1.3 */
 #define DP_SDP_PPS 0x10 /* DP 1.4 */
 #define DP_SDP_VSC_EXT_VESA 0x20 /* DP 1.4 */
 #define DP_SDP_VSC_EXT_CEA 0x21 /* DP 1.4 */
+
 /* 0x80+ CEA-861 infoframe types */
 
 #define DP_SDP_AUDIO_INFOFRAME_HB2 0x1b
···
 	DP_CONTENT_TYPE_PHOTO = 0x02,
 	DP_CONTENT_TYPE_VIDEO = 0x03,
 	DP_CONTENT_TYPE_GAME = 0x04,
+};
+
+enum operation_mode {
+	DP_AS_SDP_AVT_DYNAMIC_VTOTAL = 0x00,
+	DP_AS_SDP_AVT_FIXED_VTOTAL = 0x01,
+	DP_AS_SDP_FAVT_TRR_NOT_REACHED = 0x02,
+	DP_AS_SDP_FAVT_TRR_REACHED = 0x03
 };
 
 #endif /* _DRM_DP_H_ */
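For context, a hedged sketch of how a driver could probe the new DPCD capability bit directly; drm_dp_dpcd_readb() is the existing DRM AUX helper, and the surrounding caller code is illustrative only:

  u8 feat;

  /* Hypothetical capability probe of the sink. */
  if (drm_dp_dpcd_readb(aux, DP_DPRX_FEATURE_ENUMERATION_LIST_CONT_1, &feat) == 1 &&
      (feat & DP_ADAPTIVE_SYNC_SDP_SUPPORTED))
  	prepare_adaptive_sync_sdp();	/* illustrative; sink accepts DP_SDP_ADAPTIVE_SYNC packets */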
+30
include/drm/display/drm_dp_helper.h
···
 	enum dp_content_type content_type;
 };
 
+/**
+ * struct drm_dp_as_sdp - drm DP Adaptive Sync SDP
+ *
+ * This structure represents a DP Adaptive Sync SDP. It is based on the
+ * DP 2.1 spec [Table 2-126: Adaptive-Sync SDP Header Bytes] and
+ * [Table 2-127: Adaptive-Sync SDP Payload for DB0 through DB8].
+ *
+ * @sdp_type: Secondary-data packet type
+ * @revision: Revision number
+ * @length: Number of valid data bytes
+ * @vtotal: Minimum vertical Vtotal
+ * @target_rr: Target refresh rate
+ * @duration_incr_ms: Successive frame duration increase
+ * @duration_decr_ms: Successive frame duration decrease
+ * @mode: Adaptive Sync operation mode
+ */
+struct drm_dp_as_sdp {
+	unsigned char sdp_type;
+	unsigned char revision;
+	unsigned char length;
+	int vtotal;
+	int target_rr;
+	int duration_incr_ms;
+	int duration_decr_ms;
+	enum operation_mode mode;
+};
+
+void drm_dp_as_sdp_log(struct drm_printer *p,
+		       const struct drm_dp_as_sdp *as_sdp);
 void drm_dp_vsc_sdp_log(struct drm_printer *p, const struct drm_dp_vsc_sdp *vsc);
 
 bool drm_dp_vsc_sdp_supported(struct drm_dp_aux *aux, const u8 dpcd[DP_RECEIVER_CAP_SIZE]);
+bool drm_dp_as_sdp_supported(struct drm_dp_aux *aux, const u8 dpcd[DP_RECEIVER_CAP_SIZE]);
 
 int drm_dp_psr_setup_time(const u8 psr_cap[EDP_PSR_RECEIVER_CAP_SIZE]);
 
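A hedged sketch of the new helpers in use; drm_info_printer() is the existing drm_print helper, the field values are purely illustrative, and the actual SDP byte layout is defined by the DP 2.1 spec rather than by this snippet:

  struct drm_printer p = drm_info_printer(dev);	/* dev: a struct device *, assumed available */
  struct drm_dp_as_sdp as_sdp = {
  	.sdp_type = DP_SDP_ADAPTIVE_SYNC,
  	.revision = 0x2,			/* illustrative values */
  	.length = 0x9,
  	.vtotal = 1125,
  	.target_rr = 60,
  	.mode = DP_AS_SDP_AVT_FIXED_VTOTAL,
  };

  if (drm_dp_as_sdp_supported(aux, dpcd))
  	drm_dp_as_sdp_log(&p, &as_sdp);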
+22 -1
include/drm/display/drm_dp_mst_helper.h
···
 
 void drm_dp_mst_topology_mgr_destroy(struct drm_dp_mst_topology_mgr *mgr);
 
-bool drm_dp_read_mst_cap(struct drm_dp_aux *aux, const u8 dpcd[DP_RECEIVER_CAP_SIZE]);
+/**
+ * enum drm_dp_mst_mode - sink's MST mode capability
+ */
+enum drm_dp_mst_mode {
+	/**
+	 * @DRM_DP_SST: The sink supports neither MST nor single-stream
+	 * sideband messaging.
+	 */
+	DRM_DP_SST,
+	/**
+	 * @DRM_DP_MST: Sink supports MST, more than one stream and single
+	 * stream sideband messaging.
+	 */
+	DRM_DP_MST,
+	/**
+	 * @DRM_DP_SST_SIDEBAND_MSG: Sink supports only one stream and single
+	 * stream sideband messaging.
+	 */
+	DRM_DP_SST_SIDEBAND_MSG,
+};
+
+enum drm_dp_mst_mode drm_dp_read_mst_cap(struct drm_dp_aux *aux, const u8 dpcd[DP_RECEIVER_CAP_SIZE]);
 int drm_dp_mst_topology_mgr_set_mst(struct drm_dp_mst_topology_mgr *mgr, bool mst_state);
 
 int drm_dp_mst_hpd_irq_handle_event(struct drm_dp_mst_topology_mgr *mgr,
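Callers now get a tri-state answer instead of a bool, so sinks that only support single-stream sideband messaging can be told apart from full MST sinks. A short usage sketch (the caller context is hypothetical):

  enum drm_dp_mst_mode mode = drm_dp_read_mst_cap(aux, dpcd);

  switch (mode) {
  case DRM_DP_MST:
  	/* full MST topology management */
  	break;
  case DRM_DP_SST_SIDEBAND_MSG:
  	/* single stream, but sideband messaging is available */
  	break;
  case DRM_DP_SST:
  default:
  	/* plain SST sink */
  	break;
  }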
+3 -1
include/drm/i915_pciids.h
···
 	INTEL_VGA_DEVICE(0x5692, info), \
 	INTEL_VGA_DEVICE(0x56A0, info), \
 	INTEL_VGA_DEVICE(0x56A1, info), \
-	INTEL_VGA_DEVICE(0x56A2, info)
+	INTEL_VGA_DEVICE(0x56A2, info), \
+	INTEL_VGA_DEVICE(0x56BE, info), \
+	INTEL_VGA_DEVICE(0x56BF, info)
 
 #define INTEL_DG2_G11_IDS(info) \
 	INTEL_VGA_DEVICE(0x5693, info), \
+13 -3
include/uapi/drm/i915_drm.h
···
  *
  */
 
+/*
+ * struct drm_i915_reset_stats - Return global reset and other context stats
+ *
+ * The driver keeps a few stats for each context as well as a global reset
+ * count. This struct can be used to query those stats.
+ */
 struct drm_i915_reset_stats {
+	/** @ctx_id: ID of the requested context */
 	__u32 ctx_id;
+
+	/** @flags: MBZ */
 	__u32 flags;
 
-	/* All resets since boot/module reload, for all contexts */
+	/** @reset_count: All resets since boot/module reload, for all contexts */
 	__u32 reset_count;
 
-	/* Number of batches lost when active in GPU, for this context */
+	/** @batch_active: Number of batches lost when active in GPU, for this context */
 	__u32 batch_active;
 
-	/* Number of batches lost pending for execution, for this context */
+	/** @batch_pending: Number of batches lost pending for execution, for this context */
 	__u32 batch_pending;
 
+	/** @pad: MBZ */
 	__u32 pad;
 };
 
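The added comments document the long-standing reset-stats query. A minimal userspace sketch against the existing DRM_IOCTL_I915_GET_RESET_STATS ioctl, assuming an already-open DRM fd and with error handling elided:

  #include <stdio.h>
  #include <string.h>
  #include <xf86drm.h>
  #include "i915_drm.h"

  static void print_reset_stats(int fd, __u32 ctx_id)
  {
  	struct drm_i915_reset_stats stats;

  	memset(&stats, 0, sizeof(stats));	/* flags and pad must be zero */
  	stats.ctx_id = ctx_id;

  	if (drmIoctl(fd, DRM_IOCTL_I915_GET_RESET_STATS, &stats) == 0)
  		printf("resets=%u active=%u pending=%u\n",
  		       stats.reset_count, stats.batch_active, stats.batch_pending);
  }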